Recent advances in PDF modeling of turbulent reacting flows
NASA Technical Reports Server (NTRS)
Leonard, Andrew D.; Dai, F.
1995-01-01
This viewgraph presentation concludes that a Monte Carlo probability density function (PDF) solution successfully couples with an existing finite volume code; PDF solution method applied to turbulent reacting flows shows good agreement with data; and PDF methods must be run on parallel machines for practical use.
A PDF closure model for compressible turbulent chemically reacting flows
NASA Technical Reports Server (NTRS)
Kollmann, W.
1992-01-01
The objective of the proposed research project was the analysis of single point closures based on probability density function (pdf) and characteristic functions and the development of a prediction method for the joint velocity-scalar pdf in turbulent reacting flows. Turbulent flows of boundary layer type and stagnation point flows with and without chemical reactions were calculated as principal applications. Pdf methods for compressible reacting flows were developed and tested in comparison with available experimental data. The research work carried out in this project concentrated on the closure of pdf equations for incompressible and compressible turbulent flows with and without chemical reactions.
PDF modeling of near-wall turbulent flows
NASA Astrophysics Data System (ADS)
Dreeben, Thomas David
1997-06-01
Pdf methods are extended to include modeling of wall-bounded turbulent flows. For flows in which resolution of the viscous sublayer is desired, a Pdf near-wall model is developed in which the Generalized Langevin model is combined with an exact model for viscous transport. Durbin's method of elliptic relaxation is used to incorporate the wall effects into the governing equations without the use of wall functions or damping functions. Close to the wall, the Generalized Langevin model provides an analogy to the effect of the fluctuating continuity equation. This enables accurate modeling of the near-wall turbulent statistics. Demonstrated accuracy for fully-developed channel flow is achieved with a Pdf/Monte Carlo simulation, and with its related Reynolds-stress closure. For flows in which the details of the viscous sublayer are not important, a Pdf wall-function method is developed with the Simplified Langevin model.
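Langevin-type PDF models such as those above evolve notional-particle velocities with a drift toward the ensemble mean plus a white-noise forcing. As a rough sketch of the idea only (a generic Simplified Langevin Model update with frozen turbulence scales k and ε; the constants and decaying-turbulence setup are assumptions, and this is not Dreeben's elliptic-relaxation near-wall formulation):

```python
import numpy as np

def slm_step(u, k, eps, dt, C0=2.1, rng=None):
    """One step of the Simplified Langevin Model (SLM) for notional-particle
    velocities:  du = -(1/2 + 3*C0/4) * (eps/k) * (u - <u>) dt
                      + sqrt(C0 * eps * dt) dW.
    The drift relaxes particles toward the ensemble mean; the Wiener
    increment dW re-injects velocity fluctuations."""
    rng = np.random.default_rng() if rng is None else rng
    omega = eps / k                                   # turbulence frequency
    drift = -(0.5 + 0.75 * C0) * omega * (u - u.mean()) * dt
    diffusion = np.sqrt(C0 * eps * dt) * rng.standard_normal(u.size)
    return u + drift + diffusion

rng = np.random.default_rng(0)
u = rng.standard_normal(10_000)       # initial particle velocities
for _ in range(200):                  # frozen k and eps for simplicity
    u = slm_step(u, k=1.0, eps=1.0, dt=1e-3, rng=rng)
```

The drift term conserves the ensemble mean exactly while the noise term sustains fluctuations; near-wall models add viscous transport and wall corrections on top of this skeleton.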
Probabilistic density function method for nonlinear dynamical systems driven by colored noise.
Barajas-Solano, David A; Tartakovsky, Alexandre M
2016-05-01
We present a probability density function (PDF) method for a system of nonlinear stochastic ordinary differential equations driven by colored noise. The method provides an integrodifferential equation for the temporal evolution of the joint PDF of the system's state, which we close by means of a modified large-eddy-diffusivity (LED) closure. In contrast to the classical LED closure, the proposed closure accounts for advective transport of the PDF in the approximate temporal deconvolution of the integrodifferential equation. In addition, we introduce the generalized local linearization approximation for deriving a computable PDF equation in the form of a second-order partial differential equation. We demonstrate that the proposed closure and localization accurately describe the dynamics of the PDF in phase space for systems driven by noise with arbitrary autocorrelation time. We apply the proposed PDF method to analyze a set of Kramers equations driven by exponentially autocorrelated Gaussian colored noise to study nonlinear oscillators and the dynamics and stability of a power grid. Numerical experiments show the PDF method is accurate when the noise autocorrelation time is either much shorter or longer than the system's relaxation time, while the accuracy decreases as the ratio of the two timescales approaches unity. Similarly, the PDF method accuracy decreases with increasing standard deviation of the noise.
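To make the setting concrete, a minimal Monte Carlo sketch (a hypothetical 1-D linear system, not the paper's Kramers equations or its LED-closed PDF equation) drives an ensemble of states with exponentially auto-correlated Ornstein-Uhlenbeck noise and estimates the stationary PDF by histogram:

```python
import numpy as np

rng = np.random.default_rng(42)

# dx/dt = -x + xi(t), where xi is Ornstein-Uhlenbeck (exponentially
# auto-correlated Gaussian) colored noise with correlation time tau.
tau, sigma, dt = 0.5, 1.0, 1e-2
n_steps, n_paths = 2_000, 20_000
x = np.zeros(n_paths)
xi = np.zeros(n_paths)
for _ in range(n_steps):                   # Euler-Maruyama time stepping
    x = x + (-x + xi) * dt
    xi = (xi - (xi / tau) * dt
          + sigma * np.sqrt(2 * dt / tau) * rng.standard_normal(n_paths))

# Histogram estimate of the stationary marginal PDF of x
pdf, edges = np.histogram(x, bins=60, density=True)
```

A PDF method replaces this brute-force sampling with a closed evolution equation for the PDF itself; the Monte Carlo estimate serves as the reference solution such closures are tested against.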
Comments on the present state and future directions of PDF methods
NASA Technical Reports Server (NTRS)
O'Brien, E. E.
1992-01-01
The one point probability density function (PDF) method is examined in light of its use in actual engineering problems. The PDF method, although relatively complicated, appears to be the only format available to handle the nonlinear stochastic difficulties caused by typical reaction kinetics. Turbulence modeling, if it is to play a central role in combustion modeling, has to be integrated with the chemistry in a way which produces accurate numerical solutions to combustion problems. It is questionable whether the development of turbulent models in isolation from the peculiar statistics of reactant concentrations is a fruitful line of development as far as propulsion is concerned. There are three issues for which additional viewgraphs are prepared: the one point pdf method; the amplitude mapping closure; and a hybrid strategy for replacing a full two point pdf treatment of reacting flows by a single point pdf and correlation functions. An appeal is made for the establishment of an adequate data base for compressible flow with reactions for Mach numbers of unity or higher.
Pdf - Transport equations for chemically reacting flows
NASA Technical Reports Server (NTRS)
Kollmann, W.
1989-01-01
The closure problem for the transport equations for pdf and the characteristic functions of turbulent, chemically reacting flows is addressed. The properties of the linear and closed equations for the characteristic functional for Eulerian and Lagrangian variables are established, and the closure problem for the finite-dimensional case is discussed for pdf and characteristic functions. It is shown that the closure for the scalar dissipation term in the pdf equation developed by Dopazo (1979) and Kollmann et al. (1982) results in a single integral, in contrast to the pdf, where double integration is required. Some recent results using pdf methods obtained for turbulent flows with combustion, including effects of chemical nonequilibrium, are discussed.
The pdf approach to turbulent flow
NASA Technical Reports Server (NTRS)
Kollmann, W.
1990-01-01
This paper provides a detailed discussion of the theory and application of probability density function (pdf) methods, which provide a complete statistical description of turbulent flow fields at a single point or a finite number of points. The basic laws governing the flow of Newtonian fluids are set up in the Eulerian and the Lagrangian frame, and the exact and linear equations for the characteristic functionals in those frames are discussed. Pdf equations in both frames are derived as Fourier transforms of the equations of the characteristic functions. Possible formulations for the nonclosed terms in the pdf equation are discussed, their properties are assessed, and closure modes for the molecular-transport and the fluctuating pressure-gradient terms are reviewed. The application of pdf methods to turbulent combustion flows, supersonic flows, and the interaction of turbulence with shock waves is discussed.
Pressure algorithm for elliptic flow calculations with the PDF method
NASA Technical Reports Server (NTRS)
Anand, M. S.; Pope, S. B.; Mongia, H. C.
1991-01-01
An algorithm to determine the mean pressure field for elliptic flow calculations with the probability density function (PDF) method is developed and applied. The PDF method is a most promising approach for the computation of turbulent reacting flows. Previous computations of elliptic flows with the method were in conjunction with conventional finite volume based calculations that provided the mean pressure field. The algorithm developed and described here permits the mean pressure field to be determined within the PDF calculations. The PDF method incorporating the pressure algorithm is applied to the flow past a backward-facing step. The results are in good agreement with data for the reattachment length, mean velocities, and turbulence quantities including triple correlations.
A time dependent mixing model to close PDF equations for transport in heterogeneous aquifers
NASA Astrophysics Data System (ADS)
Schüler, L.; Suciu, N.; Knabner, P.; Attinger, S.
2016-10-01
Probability density function (PDF) methods are a promising alternative to predicting the transport of solutes in groundwater under uncertainty. They make it possible to derive the evolution equations of the mean concentration and the concentration variance, used in moment methods. The mixing model, describing the transport of the PDF in concentration space, is essential for both methods. Finding a satisfactory mixing model is still an open question and due to the rather elaborate PDF methods, a difficult undertaking. Both the PDF equation and the concentration variance equation depend on the same mixing model. This connection is used to find and test an improved mixing model for the much easier to handle concentration variance. Subsequently, this mixing model is transferred to the PDF equation and tested. The newly proposed mixing model yields significantly improved results for both variance modelling and PDF modelling.
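For context, the baseline that such improved mixing models are measured against is the IEM (interaction by exchange with the mean) model, which relaxes each concentration sample toward the ensemble mean at a rate set by a mixing frequency. A minimal sketch (constants and setup assumed for illustration; this is not the improved model proposed above):

```python
import numpy as np

def iem_mix(phi, omega, dt, c_phi=2.0):
    """Interaction-by-Exchange-with-the-Mean (IEM) mixing model:
    d(phi)/dt = -0.5 * c_phi * omega * (phi - <phi>).
    Relaxes each notional particle's scalar toward the ensemble mean,
    conserving the mean while decaying the variance at rate c_phi*omega."""
    return phi - 0.5 * c_phi * omega * (phi - phi.mean()) * dt

rng = np.random.default_rng(1)
phi = rng.uniform(0.0, 1.0, 50_000)   # particle concentrations
m0, v0 = phi.mean(), phi.var()
for _ in range(100):
    phi = iem_mix(phi, omega=1.0, dt=0.01)
```

The mean is conserved exactly and the variance decays as exp(-c_phi*omega*t), which is precisely the shared dependence on the mixing model that the abstract exploits to test closures on the variance equation before transferring them to the PDF equation.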
Audio feature extraction using probability distribution function
NASA Astrophysics Data System (ADS)
Suhaib, A.; Wan, Khairunizam; Aziz, Azri A.; Hazry, D.; Razlan, Zuradzman M.; Shahriman A., B.
2015-05-01
Voice recognition has been one of the popular applications in robotic field. It is also known to be recently used for biometric and multimedia information retrieval system. This technology is attained from successive research on audio feature extraction analysis. Probability Distribution Function (PDF) is a statistical method which is usually used as one of the processes in complex feature extraction methods such as GMM and PCA. In this paper, a new method for audio feature extraction is proposed which is by using only PDF as a feature extraction method itself for speech analysis purpose. Certain pre-processing techniques are performed in prior to the proposed feature extraction method. Subsequently, the PDF result values for each frame of sampled voice signals obtained from certain numbers of individuals are plotted. From the experimental results obtained, it can be seen visually from the plotted data that each individuals' voice has comparable PDF values and shapes.
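The core computation described here amounts to a normalized amplitude histogram per frame, used directly as the feature vector. A minimal sketch (the frame length, bin count, and synthetic test signal are assumptions, not the paper's settings):

```python
import numpy as np

def pdf_features(signal, frame_len=256, bins=32, lo=-1.0, hi=1.0):
    """Split a signal into frames and use each frame's normalized
    amplitude histogram (an empirical PDF) as its feature vector."""
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    feats = np.empty((n_frames, bins))
    for i, frame in enumerate(frames):
        counts, _ = np.histogram(frame, bins=bins, range=(lo, hi))
        feats[i] = counts / counts.sum()   # normalize to a probability mass
    return feats

rng = np.random.default_rng(0)
# A noisy tone standing in for a sampled voice signal (16 kHz, 1 s)
sig = 0.5 * np.sin(2 * np.pi * 440 * np.arange(16_000) / 16_000)
sig = sig + 0.05 * rng.standard_normal(sig.size)
feats = pdf_features(sig)
```

Each row is a per-frame empirical PDF; comparable speakers would then be expected to produce comparable rows, which is the visual comparison the paper reports.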
Compressible cavitation with stochastic field method
NASA Astrophysics Data System (ADS)
Class, Andreas; Dumond, Julien
2012-11-01
Non-linear phenomena can often be well described using probability density functions (pdf) and pdf transport models. Traditionally the simulation of pdf transport requires Monte-Carlo codes based on Lagrange particles or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic field method, solving pdf transport based on Euler fields, has been proposed, which eliminates the necessity to mix Euler and Lagrange techniques or to prescribe pdf assumptions. In the present work, part of the PhD project 'Design and analysis of a Passive Outflow Reducer relying on cavitation', a first application of the stochastic field method to multi-phase flow, and in particular to cavitating flow, is presented. The application considered is a nozzle subjected to high velocity flow so that sheet cavitation is observed near the nozzle surface in the divergent section. It is demonstrated that the stochastic field formulation captures the wide range of pdf shapes present at different locations. The method is compatible with finite-volume codes, where all existing physical models available for Lagrange techniques, presumed pdf or binning methods can be easily extended to the stochastic field formulation.
Parameterizing deep convection using the assumed probability density function method
Storer, R. L.; Griffin, B. M.; Höft, J.; ...
2014-06-11
Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.
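The reason subgrid PDFs matter for microphysics follows from Jensen's inequality: for a nonlinear process rate, the rate evaluated at the grid-box mean differs from the PDF-weighted average rate. A toy sketch (the quadratic rate and the Gaussian subgrid PDF are illustrative assumptions, not the parameterization's actual forms):

```python
import numpy as np

rng = np.random.default_rng(7)

# Grid-mean moisture and assumed subgrid variability (illustrative numbers)
q_mean, q_std = 1.0, 0.4
q_samples = rng.normal(q_mean, q_std, 100_000)   # Monte Carlo subgrid samples

def rate(q):
    """Stand-in nonlinear process rate, e.g. autoconversion-like ~ q^2,
    clipped at zero for negative samples."""
    return np.maximum(q, 0.0) ** 2

rate_of_mean = rate(q_mean)            # ignores subgrid variability
mean_of_rate = rate(q_samples).mean()  # PDF-weighted (sampled) average
```

Because the rate is convex, the PDF-weighted average exceeds the rate at the mean by roughly the subgrid variance; Monte Carlo sampling of the predicted PDF, as in the abstract, captures this systematically.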
A Tomographic Method for the Reconstruction of Local Probability Density Functions
NASA Technical Reports Server (NTRS)
Sivathanu, Y. R.; Gore, J. P.
1993-01-01
A method of obtaining the probability density function (PDF) of local properties from path integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames. The results of this comparison are good.
NASA Astrophysics Data System (ADS)
Choi, Jinhyeok; Kim, Hyeonjin
2016-12-01
To improve the efficacy of undersampled MRI, a method of designing adaptive sampling functions is proposed that is simple to implement on an MR scanner and yet effectively improves the performance of the sampling functions. An approximation of the energy distribution of an image (E-map) is estimated from highly undersampled k-space data acquired in a prescan and efficiently recycled in the main scan. An adaptive probability density function (PDF) is generated by combining the E-map with a modeled PDF. A set of candidate sampling functions are then prepared from the adaptive PDF, among which the one with maximum energy is selected as the final sampling function. To validate its computational efficiency, the proposed method was implemented on an MR scanner, and its robust performance in Fourier-transform (FT) MRI and compressed sensing (CS) MRI was tested by simulations and in a cherry tomato. The proposed method consistently outperforms the conventional modeled PDF approach for undersampling ratios of 0.2 or higher in both FT-MRI and CS-MRI. To fully benefit from undersampled MRI, it is preferable that the design of adaptive sampling functions be performed online immediately before the main scan. In this way, the proposed method may further improve the efficacy of the undersampled MRI.
Numerical solutions of the complete Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Hassan, H. A.
1993-01-01
The objective of this study is to compare the use of assumed pdf (probability density function) approaches for modeling supersonic turbulent reacting flowfields with the more elaborate approach in which the pdf evolution equation is solved. Assumed pdf approaches for averaging the chemical source terms require modest increases in CPU time, typically of the order of 20 percent above treating the source terms as 'laminar.' However, it is difficult to assume a form for these pdf's a priori that correctly mimics the behavior of the actual pdf governing the flow. Solving the evolution equation for the pdf is a theoretically sound approach, but because of the large dimensionality of this function, its solution requires a Monte Carlo method which is computationally expensive and slow to converge. Preliminary results show both pdf approaches to yield similar solutions for the mean flow variables.
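The assumed-pdf averaging of a chemical source term can be illustrated by integrating an Arrhenius-type rate against an assumed Gaussian temperature PDF and comparing with the 'laminar' rate at the mean temperature (the activation temperature and moments below are illustrative, not taken from the study):

```python
import numpy as np

# Average an Arrhenius-type rate k(T) = exp(-Ta/T) over an assumed Gaussian
# temperature PDF.  Ta, T_mean, T_rms are illustrative values only.
Ta, T_mean, T_rms = 15_000.0, 1500.0, 150.0
k = lambda T: np.exp(-Ta / T)

T = np.linspace(T_mean - 5 * T_rms, T_mean + 5 * T_rms, 2001)
dT = T[1] - T[0]
gauss = np.exp(-0.5 * ((T - T_mean) / T_rms) ** 2) / (T_rms * np.sqrt(2 * np.pi))

k_turb = (k(T) * gauss).sum() * dT    # assumed-PDF averaged rate
k_lam = k(T_mean)                     # rate evaluated at the mean temperature
```

The averaged rate exceeds the 'laminar' value because the rate is strongly convex in temperature; the study's point is that the quality of this correction hinges on how well the assumed shape matches the true pdf.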
The PDF method for turbulent combustion
NASA Technical Reports Server (NTRS)
Pope, S. B.
1991-01-01
Probability Density Function (PDF) methods provide a means of calculating the properties of turbulent reacting flows. They have been successfully applied to many turbulent flames, including some with finite rate kinetic effects. Here the methods are reviewed with an emphasis on computational issues and their application to turbulent combustion.
NASA Technical Reports Server (NTRS)
Shih, Tsan-Hsing; Liu, Nan-Suey
2012-01-01
This paper presents the numerical simulations of the Jet-A spray reacting flow in a single element lean direct injection (LDI) injector by using the National Combustion Code (NCC) with and without invoking the Eulerian scalar probability density function (PDF) method. The flow field is calculated by using the Reynolds averaged Navier-Stokes equations (RANS and URANS) with nonlinear turbulence models, and when the scalar PDF method is invoked, the energy and compositions or species mass fractions are calculated by solving the equation of an ensemble averaged density-weighted fine-grained probability density function that is referred to here as the averaged probability density function (APDF). A nonlinear model for closing the convection term of the scalar APDF equation is used in the presented simulations and will be briefly described. Detailed comparisons between the results and available experimental data are carried out. Some positive findings of invoking the Eulerian scalar PDF method in both improving the simulation quality and reducing the computing cost are observed.
Sheng, Ke; Cai, Jing; Brookeman, James; Molloy, Janelle; Christopher, John; Read, Paul
2006-09-01
Lung tumor motion trajectories measured by four-dimensional CT or dynamic MRI can be converted to a probability density function (PDF), which describes the probability of the tumor being at a certain position, for PDF based treatment planning. Using this method in simulated sequential tomotherapy, we study the dose reduction of normal tissues and, more importantly, the effect of PDF reproducibility on the accuracy of dosimetry. For these purposes, realistic PDFs were obtained from two dynamic MRI scans of a healthy volunteer within a 2 week interval. The first PDF was accumulated from a 300 s scan and the second PDF was calculated from variable scan times from 5 s (one breathing cycle) to 300 s. Optimized beam fluences based on the second PDF were delivered to the hypothetical gross target volume (GTV) of a lung phantom that moved following the first PDF. The reproducibility between the two PDFs varied from low (78%) to high (94.8%) as the second scan time increased from 5 s to 300 s. When a highly reproducible PDF was used in optimization, the dose coverage of the GTV was maintained; the phantom lung volume receiving 10%-20% of the prescription dose was reduced by 40%-50% and the mean phantom lung dose was reduced by 9.6%. However, optimization based on a PDF with low reproducibility resulted in a 50% underdosed GTV. The dosimetric error increased nearly exponentially as the PDF error increased. Therefore, although the dose to the tissue surrounding the tumor can theoretically be reduced by PDF based treatment planning, the reliability and applicability of this method depend strongly on whether a reproducible PDF exists and is measurable. By correlating the dosimetric error and the PDF error, a useful guideline for PDF data acquisition and patient qualification for PDF based planning can be derived.
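The trajectory-to-PDF conversion and the reproducibility comparison can be sketched as follows (the bin settings, the sinusoidal breathing trace, and the histogram-intersection measure of reproducibility are assumptions for illustration, not the paper's definitions):

```python
import numpy as np

def motion_pdf(positions, bins=50, lo=-12.0, hi=12.0):
    """Convert a sampled motion trajectory into a positional PDF
    (normalized histogram of where the target spends its time)."""
    counts, _ = np.histogram(positions, bins=bins, range=(lo, hi))
    return counts / counts.sum()

def overlap(p, q):
    """Reproducibility of two PDFs as histogram intersection, in [0, 1]."""
    return float(np.minimum(p, q).sum())

t = np.linspace(0.0, 300.0, 30_000)              # 300 s scan, ~10 ms sampling
traj_long = 10.0 * np.sin(2 * np.pi * t / 4.0)   # 4 s breathing period, mm
traj_short = traj_long[:500]                     # first ~5 s of the same trace
p_long = motion_pdf(traj_long)
p_short = motion_pdf(traj_short)
```

Even for this idealized periodic trace, a one-cycle scan over-weights part of the range relative to the long scan, so the overlap falls below unity, mirroring the paper's finding that short-scan PDFs reproduce the reference poorly.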
A multi-scalar PDF approach for LES of turbulent spray combustion
NASA Astrophysics Data System (ADS)
Raman, Venkat; Heye, Colin
2011-11-01
A comprehensive joint-scalar probability density function (PDF) approach is proposed for large eddy simulation (LES) of turbulent spray combustion and tests are conducted to analyze the validity and modeling requirements. The PDF method has the advantage that the chemical source term appears closed but requires models for the small scale mixing process. A stable and consistent numerical algorithm for the LES/PDF approach is presented. To understand the modeling issues in the PDF method, direct numerical simulation of a spray flame at three different fuel droplet Stokes numbers and an equivalent gaseous flame are carried out. Assumptions in closing the subfilter conditional diffusion term in the filtered PDF transport equation are evaluated for various model forms. In addition, the validity of evaporation rate models in high Stokes number flows is analyzed.
NASA Technical Reports Server (NTRS)
Chen, J.-Y.
1992-01-01
Viewgraphs are presented on the following topics: the grand challenge of combustion engineering; research of probability density function (PDF) methods at Sandia; experiments of turbulent jet flames (Masri and Dibble, 1988); departures from chemical equilibrium; modeling turbulent reacting flows; superequilibrium OH radical; pdf modeling of turbulent jet flames; scatter plot for CH4 (methane) and O2 (oxygen); methanol turbulent jet flames; comparisons between predictions and experimental data; and turbulent C2H4 jet flames.
Local structure studies of materials using pair distribution function analysis
NASA Astrophysics Data System (ADS)
Peterson, Joseph W.
A collection of pair distribution function studies on various materials is presented in this dissertation. In each case, local structure information of interest pushes the current limits of what these studies can accomplish. The goal is to provide insight into the individual material behaviors as well as to investigate ways to expand the current limits of PDF analysis. Where possible, I provide a framework for how PDF analysis might be applied to a wider set of material phenomena. Throughout the dissertation, I discuss i) the capabilities of the PDF method to provide information pertaining to a material's structure and properties, ii) current limitations in the conventional approach to PDF analysis, iii) possible solutions to overcome certain limitations in PDF analysis, and iv) suggestions for future work to expand and improve the capabilities of PDF analysis.
Robust functional statistics applied to Probability Density Function shape screening of sEMG data.
Boudaoud, S; Rix, H; Al Harrach, M; Marin, F
2014-01-01
Recent studies pointed out possible shape modifications of the Probability Density Function (PDF) of surface electromyographical (sEMG) data according to several contexts like fatigue and muscle force increase. Following this idea, criteria have been proposed to monitor these shape modifications, mainly using High Order Statistics (HOS) parameters like skewness and kurtosis. In experimental conditions, these parameters are confronted with small sample sizes in the estimation process. This small sample size induces errors in the estimated HOS parameters, hindering real-time and precise sEMG PDF shape monitoring. Recently, a functional formalism, the Core Shape Model (CSM), has been used to analyse shape modifications of PDF curves. In this work, taking inspiration from the CSM method, robust functional statistics are proposed to emulate both skewness and kurtosis behaviors. These functional statistics combine kernel density estimation and PDF shape distances to evaluate shape modifications even in the presence of small sample sizes. The proposed statistics are then tested, using Monte Carlo simulations, on both normal and log-normal PDFs that mimic observed sEMG PDF shape behavior during muscle contraction. According to the obtained results, the functional statistics seem to be more robust than HOS parameters to small-sample-size effects and more accurate in sEMG PDF shape screening applications.
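The small-sample fragility of HOS parameters is easy to demonstrate: sample kurtosis estimated from short Gaussian records scatters far more widely than from long records. A minimal sketch (the record lengths are arbitrary choices, not the paper's experimental settings):

```python
import numpy as np

def skew_kurt(x):
    """Sample skewness and (non-excess) kurtosis: the HOS shape parameters."""
    c = x - x.mean()
    m2 = (c ** 2).mean()
    return (c ** 3).mean() / m2 ** 1.5, (c ** 4).mean() / m2 ** 2

rng = np.random.default_rng(3)
# True values for a Gaussian: skewness 0, kurtosis 3.  With short records
# the estimates scatter widely, which is what motivates the more robust
# functional (CSM-inspired) statistics.
kurt_small = np.array([skew_kurt(rng.standard_normal(50))[1]
                       for _ in range(2000)])
kurt_large = np.array([skew_kurt(rng.standard_normal(5000))[1]
                       for _ in range(2000)])
```

The spread of the 50-sample estimates is several times that of the 5000-sample estimates, illustrating why shape screening from short sEMG windows needs estimators more robust than raw moments.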
Moore, Michael D; Shi, Zhenqi; Wildfong, Peter L D
2010-12-01
To develop a method for drawing statistical inferences from differences between multiple experimental pair distribution function (PDF) transforms of powder X-ray diffraction (PXRD) data. The appropriate treatment of initial PXRD error estimates using traditional error propagation algorithms was tested using Monte Carlo simulations on amorphous ketoconazole. An amorphous felodipine:polyvinyl pyrrolidone:vinyl acetate (PVPva) physical mixture was prepared to define an error threshold. Co-solidified products of felodipine:PVPva and terfenadine:PVPva were prepared using a melt-quench method and subsequently analyzed using PXRD and PDF. Differential scanning calorimetry (DSC) was used as an additional characterization method. The appropriate manipulation of initial PXRD error estimates through the PDF transform were confirmed using the Monte Carlo simulations for amorphous ketoconazole. The felodipine:PVPva physical mixture PDF analysis determined ±3σ to be an appropriate error threshold. Using the PDF and error propagation principles, the felodipine:PVPva co-solidified product was determined to be completely miscible, and the terfenadine:PVPva co-solidified product, although having appearances of an amorphous molecular solid dispersion by DSC, was determined to be phase-separated. Statistically based inferences were successfully drawn from PDF transforms of PXRD patterns obtained from composite systems. The principles applied herein may be universally adapted to many different systems and provide a fundamentally sound basis for drawing structural conclusions from PDF studies.
A Validation Summary of the NCC Turbulent Reacting/non-reacting Spray Computations
NASA Technical Reports Server (NTRS)
Raju, M. S.; Liu, N.-S. (Technical Monitor)
2000-01-01
This paper provides a validation summary of the spray computations performed as a part of the NCC (National Combustion Code) development activity. NCC is being developed with the aim of advancing the current prediction tools used in the design of advanced technology combustors based on multidimensional computational methods. The solution procedure combines the novelty of the application of the scalar Monte Carlo PDF (Probability Density Function) method to the modeling of turbulent spray flames with the ability to perform the computations on unstructured grids with parallel computing. The calculation procedure was applied to predict the flow properties of three different spray cases. One is a nonswirling unconfined reacting spray, the second is a nonswirling unconfined nonreacting spray, and the third is a confined swirl-stabilized spray flame. The comparisons, involving both gas-phase and droplet velocities, droplet size distributions, and gas-phase temperatures, show reasonable agreement with the available experimental data. The comparisons involve both the results obtained from the use of the Monte Carlo PDF method and those obtained from the conventional computational fluid dynamics (CFD) solution. Detailed comparisons in the case of a reacting nonswirling spray clearly highlight the importance of chemistry/turbulence interactions in the modeling of reacting sprays. The results from the PDF and non-PDF methods were found to be markedly different, and the PDF solution is closer to the reported experimental data. The PDF computations predict that most of the combustion occurs in a predominantly diffusion-flame environment, whereas the non-PDF solution incorrectly predicts that the combustion occurs in a predominantly vaporization-controlled regime. The Monte Carlo temperature distribution shows that the functional form of the PDF for the temperature fluctuations varies substantially from point to point. The results also bring to the fore some of the deficiencies associated with the use of assumed-shape PDF methods in spray computations.
The orbital PDF: general inference of the gravitational potential from steady-state tracers
NASA Astrophysics Data System (ADS)
Han, Jiaxin; Wang, Wenting; Cole, Shaun; Frenk, Carlos S.
2016-02-01
We develop two general methods to infer the gravitational potential of a system using steady-state tracers, i.e. tracers with a time-independent phase-space distribution. Combined with the phase-space continuity equation, the time independence implies a universal orbital probability density function (oPDF) dP(λ|orbit) ∝ dt, where λ is the coordinate of the particle along the orbit. The oPDF is equivalent to Jeans theorem, and is the key physical ingredient behind most dynamical modelling of steady-state tracers. In the case of a spherical potential, we develop a likelihood estimator that fits analytical potentials to the system and a non-parametric method (`phase-mark') that reconstructs the potential profile, both assuming only the oPDF. The methods involve no extra assumptions about the tracer distribution function and can be applied to tracers with an arbitrary distribution of orbits, with possible extension to non-spherical potentials. The methods are tested on Monte Carlo samples of steady-state tracers in dark matter haloes and shown to be both unbiased and efficient. A fully documented C/PYTHON code implementing our method is freely available at a GitHub repository linked from http://icc.dur.ac.uk/data/#oPDF.
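The core relation above, dP(λ|orbit) ∝ dt, can be illustrated numerically: a steady-state tracer is found where its orbit spends time, so the sampled radii fill exactly the range between the orbit's turning points. A minimal sketch under assumed toy conditions (one test particle in a point-mass potential Φ = -1/r with G = M = 1; these choices are mine, not the paper's):

```python
import numpy as np

# Minimal sketch of dP(lambda|orbit) ∝ dt: sample an orbit at uniform
# time intervals, so p(r) ∝ 1/|v_r(r)| and the samples span exactly
# the pericentre-apocentre range fixed by energy and angular momentum.

def orbit_radii(r0=1.0, vt=0.7, dt=1e-3, n_steps=50_000):
    """Leapfrog-integrate a planar orbit; return the radius at each step."""
    pos = np.array([r0, 0.0])
    vel = np.array([0.0, vt])
    radii = np.empty(n_steps)
    acc = -pos / np.linalg.norm(pos)**3
    for i in range(n_steps):
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = -pos / np.linalg.norm(pos)**3
        vel += 0.5 * dt * acc
        radii[i] = np.linalg.norm(pos)
    return radii

radii = orbit_radii()

# Energy and angular momentum fix the turning points (v_r = 0):
E, L = 0.5 * 0.7**2 - 1.0, 1.0 * 0.7
s = np.sqrt(1.0 + 2.0 * E * L**2)
peri, apo = (-1.0 + s) / (2.0 * E), (-1.0 - s) / (2.0 * E)
print(peri, apo)                 # ≈ 0.32 and 1.0
print(radii.min(), radii.max())  # time sampling fills exactly this range
```

Histogramming `radii` and comparing against 1/|v_r(r)| would reproduce the oPDF weight itself; the turning-point check above is the simplest consistency test.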
NASA Astrophysics Data System (ADS)
Haworth, Daniel
2013-11-01
The importance of explicitly accounting for the effects of unresolved turbulent fluctuations in Reynolds-averaged and large-eddy simulations of chemically reacting turbulent flows is increasingly recognized. Transported probability density function (PDF) methods have emerged as one of the most promising modeling approaches for this purpose. In particular, PDF methods provide an elegant and effective resolution to the closure problems that arise from averaging or filtering terms that correspond to nonlinear point processes, including chemical reaction source terms and radiative emission. PDF methods traditionally have been associated with studies of turbulence-chemistry interactions in laboratory-scale, atmospheric-pressure, nonluminous, statistically stationary nonpremixed turbulent flames; and Lagrangian particle-based Monte Carlo numerical algorithms have been the predominant method for solving modeled PDF transport equations. Recent advances and trends in PDF methods are reviewed and discussed. These include advances in particle-based algorithms, alternatives to particle-based algorithms (e.g., Eulerian field methods), treatment of combustion regimes beyond low-to-moderate-Damköhler-number nonpremixed systems (e.g., premixed flamelets), extensions to include radiation heat transfer and multiphase systems (e.g., soot and fuel sprays), and the use of PDF methods as the basis for subfilter-scale modeling in large-eddy simulation. Examples are provided that illustrate the utility and effectiveness of PDF methods for physics discovery and for applications to practical combustion systems. These include comparisons of results obtained using the PDF method with those from models that neglect unresolved turbulent fluctuations in composition and temperature in the averaged or filtered chemical source terms and/or the radiation heat transfer source terms. In this way, the effects of turbulence-chemistry-radiation interactions can be isolated and quantified.
NASA Astrophysics Data System (ADS)
Jaishree, J.; Haworth, D. C.
2012-06-01
Transported probability density function (PDF) methods have been applied widely and effectively for modelling turbulent reacting flows. In most applications of PDF methods to date, Lagrangian particle Monte Carlo algorithms have been used to solve a modelled PDF transport equation. However, Lagrangian particle PDF methods are computationally intensive and are not readily integrated into conventional Eulerian computational fluid dynamics (CFD) codes. Eulerian field PDF methods have been proposed as an alternative. Here a systematic comparison is performed among three methods for solving the same underlying modelled composition PDF transport equation: a consistent hybrid Lagrangian particle/Eulerian mesh (LPEM) method, a stochastic Eulerian field (SEF) method and a deterministic Eulerian field method with a direct-quadrature-method-of-moments closure (a multi-environment PDF-MEPDF method). The comparisons have been made in simulations of a series of three non-premixed, piloted methane-air turbulent jet flames that exhibit progressively increasing levels of local extinction and turbulence-chemistry interactions: Sandia/TUD flames D, E and F. The three PDF methods have been implemented using the same underlying CFD solver, and results obtained using the three methods have been compared using (to the extent possible) equivalent physical models and numerical parameters. Reasonably converged mean and rms scalar profiles are obtained using 40 particles per cell for the LPEM method or 40 Eulerian fields for the SEF method. Results from these stochastic methods are compared with results obtained using two- and three-environment MEPDF methods. The relative advantages and disadvantages of each method in terms of accuracy and computational requirements are explored and identified. 
In general, the results obtained from the two stochastic methods (LPEM and SEF) are very similar, and are in closer agreement with experimental measurements than those obtained using the MEPDF method, while MEPDF is the most computationally efficient of the three methods. These and other findings are discussed in detail.
Chieng, Norman; Trnka, Hjalte; Boetker, Johan; Pikal, Michael; Rantanen, Jukka; Grohganz, Holger
2013-09-15
The purpose of this study is to investigate the use of multivariate data analysis of powder X-ray diffraction pair-wise distribution function (PXRD-PDF) data to detect phase separation in freeze-dried binary amorphous systems. Polymer-polymer and polymer-sugar binary systems at various ratios were freeze-dried. All samples were analyzed by PXRD, transformed to the PDF, and analyzed by principal component analysis (PCA). These results were validated by differential scanning calorimetry (DSC) through characterization of the glass transition of the maximally freeze-concentrated solute (Tg'). Analysis of PXRD-PDF data using PCA provides a clearer 'miscible' or 'phase-separated' interpretation through the distribution pattern of samples in a score plot, compared to the residual-plot method. In a phase-separated system, samples were found to be evenly distributed around the theoretical PDF profile. For systems that were miscible, a clear deviation of samples away from the theoretical PDF profile was observed. Moreover, PCA allows simultaneous analysis of replicate samples. The phase-behavior assessment from the PXRD-PDF-PCA method was in agreement with the DSC results. Overall, the combined PXRD-PDF-PCA approach improves the clarity of the PXRD-PDF results and can be used as an alternative exploratory data-analysis tool for detecting phase separation in freeze-dried binary amorphous systems.
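The PCA step rests on a simple linear-algebra fact: in a phase-separated binary, each sample's PDF curve is a weighted sum of the two pure-component curves, so mean-centred samples collapse onto a single principal component. A hedged sketch with synthetic stand-in curves (Gaussian "PDFs", not the paper's measurements):

```python
import numpy as np

# Synthetic illustration of the PXRD-PDF + PCA idea: linear mixtures
# of two pure-component curves have rank-1 centred variation, so the
# first principal component captures essentially all the variance.

r = np.linspace(0, 10, 400)
pure_a = np.exp(-(r - 3.0)**2)            # stand-in pure-component PDFs
pure_b = np.exp(-(r - 6.0)**2)

ratios = np.array([0.2, 0.4, 0.6, 0.8])
separated = np.outer(ratios, pure_a) + np.outer(1 - ratios, pure_b)

X = separated - separated.mean(axis=0)    # mean-centre before PCA
_, s, _ = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)
print(explained[0])   # ~1.0: all variance on one component
```

A miscible system produces curves off the mixing line, which shows up as significant variance on the second component.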
A Discrete Probability Function Method for the Equation of Radiative Transfer
NASA Technical Reports Server (NTRS)
Sivathanu, Y. R.; Gore, J. P.
1993-01-01
A discrete probability function (DPF) method for the equation of radiative transfer is derived. The DPF is defined as the integral of the probability density function (PDF) over a discrete interval. The derivation allows the evaluation of the PDF of intensities leaving desired radiation paths, including turbulence-radiation interactions, without the use of computationally intensive stochastic methods. The DPF method has a distinct advantage over conventional PDF methods, since the creation of a partial differential equation from the equation of transfer is avoided. Further, convergence of all moments of intensity is guaranteed at the basic level of simulation, unlike the stochastic method, where the number of realizations required for convergence of higher-order moments increases rapidly. The DPF method is described for a representative path with spatial discretization on the order of the integral length scale. The results show good agreement with measurements in a propylene/air flame, except for the effects of intermittency resulting from highly correlated realizations. The method can be extended to the treatment of spatial correlations as described in the Appendix. However, information on spatial correlations in turbulent flames is needed before this extension can be carried out.
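The DPF idea — replace the continuous PDF by interval probabilities and propagate them deterministically — can be sketched in a toy setting of my own choosing (binned optical depth on independent path segments, illustrative bin probabilities): the path optical depth is a sum of segment depths, so its DPF is a discrete convolution rather than a stochastic sample.

```python
import numpy as np

# Toy DPF propagation: bin a per-segment optical-depth PDF into
# interval probabilities, then combine two statistically independent
# segments by discrete convolution instead of Monte Carlo sampling.

segment_dpf = np.array([0.1, 0.3, 0.3, 0.2, 0.08, 0.02])  # sums to 1

path_dpf = np.convolve(segment_dpf, segment_dpf)   # two segments
print(path_dpf.sum())          # still a probability function: 1.0

# Mean path transmittance exp(-tau), evaluated at the bin values:
tau = np.arange(len(path_dpf))
mean_T = np.sum(path_dpf * np.exp(-tau))
print(mean_T)
```

Because the full discrete distribution is carried along, every moment of the transmittance converges at the same "basic level of simulation", which is the property the abstract contrasts with stochastic methods.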
NASA Astrophysics Data System (ADS)
Bao, Zhenkun; Li, Xiaolong; Luo, Xiangyang
2017-01-01
Extracting informative statistical features is the central technical issue of steganalysis. Among the various steganalysis methods, probability density function (PDF) and characteristic function (CF) moments are two important types of features owing to their excellent ability to distinguish cover images from stego images. The two types of features are quite similar in definition; the only difference is that the PDF moments are computed in the spatial domain, while the CF moments are computed in the Fourier-transformed domain. The comparison between PDF and CF moments is therefore an interesting question in steganalysis. Several theoretical results have been derived, and CF moments are proved better than PDF moments in some cases. However, in the log prediction error wavelet subband of the wavelet decomposition, experiments show the opposite result, which has lacked a rigorous explanation. To solve this problem, a comparison result based on rigorous proof is presented: the first-order PDF moment is proved better than the CF moment, while the second-order CF moment is better than the PDF moment. This result opens a theoretical discussion in steganalysis on the question of finding suitable statistical features.
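The definitional symmetry between the two feature families can be made concrete: PDF moments weight the histogram itself, CF moments weight the magnitude of its discrete Fourier transform. A hedged sketch (illustrative raw-moment definitions and data, not the paper's exact features):

```python
import numpy as np

# PDF moments: spatial-domain moments of the normalised histogram.
# CF moments: frequency-domain moments of the characteristic function
# magnitude (here via the real FFT of the normalised histogram).

def pdf_moments(hist, orders=(1, 2)):
    p = hist / hist.sum()
    x = np.arange(len(hist))
    return [np.sum(p * x**n) for n in orders]

def cf_moments(hist, orders=(1, 2)):
    cf = np.abs(np.fft.rfft(hist / hist.sum()))
    k = np.arange(len(cf))
    w = cf / cf.sum()
    return [np.sum(w * k**n) for n in orders]

hist = np.array([1, 4, 9, 12, 9, 4, 1], dtype=float)
print(pdf_moments(hist))   # first two spatial-domain moments
print(cf_moments(hist))    # first two frequency-domain moments
```

For this symmetric histogram the first PDF moment lands exactly on the centre bin, while the CF moments summarize how the histogram's mass spreads in frequency.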
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil
Wind and solar power generators are commonly described by a system of stochastic ordinary differential equations (SODEs) in which random input parameters represent uncertainty in wind and solar energy. Existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the probability density function (PDF) method to derive a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically, and good agreement with Monte Carlo simulations demonstrates the accuracy of the PDF method.
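The Monte Carlo reference that such a PDF method is validated against can be sketched directly. Everything below is an illustrative stand-in of my own (a single first-order generator with damping D and inertia M, driven by an Ornstein-Uhlenbeck power input as the time-correlated noise); the PDF method itself would instead solve a closed PDE for the joint density.

```python
import numpy as np

# Monte Carlo baseline for a generator with time-correlated input:
# OU power process P(t) feeding M * domega/dt = P - D * omega.
# The stationary histogram of omega is the empirical PDF.

rng = np.random.default_rng(0)
M, D = 1.0, 0.5
P_mean, tau, sigma = 1.0, 2.0, 0.2     # OU mean, correlation time, scale
dt, n_steps = 0.01, 200_000

P, omega = P_mean, P_mean / D          # start at the deterministic fixed point
trace = np.empty(n_steps)
for i in range(n_steps):
    P += (-(P - P_mean) / tau) * dt \
         + sigma * np.sqrt(2 * dt / tau) * rng.standard_normal()
    omega += (P - D * omega) / M * dt
    trace[i] = omega

pdf, edges = np.histogram(trace[n_steps // 10:], bins=60, density=True)
print(trace.mean())    # ≈ P_mean / D = 2.0
```

The histogram `pdf` is what a numerical solution of the stationary PDF equation would be compared against.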
Gorelik, Tatiana E; Schmidt, Martin U; Kolb, Ute; Billinge, Simon J L
2015-04-01
This paper shows that pair-distribution function (PDF) analyses can be carried out on organic and organometallic compounds from powder electron diffraction data. Different experimental setups are demonstrated, including selected-area electron diffraction and nanodiffraction in transmission electron microscopy or nanodiffraction in scanning transmission electron microscopy modes. The methods were demonstrated on organometallic complexes (chlorinated and unchlorinated copper phthalocyanine) and on purely organic compounds (quinacridone). The PDF curves from powder electron diffraction data, called ePDF, are in good agreement with PDF curves determined from X-ray powder data, demonstrating that the problems of obtaining kinematical scattering data and avoiding beam damage to the sample can be resolved.
Stochastic-field cavitation model
NASA Astrophysics Data System (ADS)
Dumond, J.; Magagnato, F.; Class, A.
2013-07-01
Nonlinear phenomena can often be well described using probability density functions (pdf) and pdf transport models. Traditionally, the simulation of pdf transport requires Monte-Carlo codes based on Lagrangian "particles" or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic-field method solving pdf transport based on Eulerian fields has been proposed which eliminates the necessity to mix Eulerian and Lagrangian techniques or prescribed pdf assumptions. In the present work, for the first time the stochastic-field method is applied to multi-phase flow and, in particular, to cavitating flow. To validate the proposed stochastic-field cavitation model, two applications are considered. First, sheet cavitation is simulated in a Venturi-type nozzle. The second application is an innovative fluidic diode which exhibits coolant flashing. Agreement with experimental results is obtained for both applications with a fixed set of model constants. The stochastic-field cavitation model captures the wide range of pdf shapes present at different locations.
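The Eulerian stochastic-field representation that this and the following entries build on can be sketched in its simplest form: the local pdf is carried by an ensemble of field values rather than Lagrangian particles. A hedged 0-D toy (IEM micromixing only, no transport or Wiener term; the mixing time and bimodal initial pdf are illustrative choices):

```python
import numpy as np

# 0-D stochastic-field sketch: N field values represent the local
# scalar pdf.  IEM micromixing relaxes each field toward the ensemble
# mean, so the pdf contracts while the mean is exactly conserved.

rng = np.random.default_rng(1)
n_fields, tau_mix, dt = 64, 1.0, 0.01
xi = rng.choice([0.0, 1.0], size=n_fields)    # bimodal initial pdf
mean0 = xi.mean()

for _ in range(300):                          # advance to t = 3 * tau_mix
    xi += -(xi - xi.mean()) / tau_mix * dt

print(xi.mean())        # conserved: equals the initial mean
print(xi.var())         # decays roughly as exp(-2 t / tau_mix)
```

A full stochastic-field solver adds convection, diffusion, a Wiener term for subgrid transport, and source terms (here, the cavitation model) to each field, but the pdf-as-ensemble-of-fields idea is the one above.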
A cavitation model based on Eulerian stochastic fields
NASA Astrophysics Data System (ADS)
Magagnato, F.; Dumond, J.
2013-12-01
Non-linear phenomena can often be described using probability density functions (pdf) and pdf transport models. Traditionally the simulation of pdf transport requires Monte-Carlo codes based on Lagrangian "particles" or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic-field method solving pdf transport based on Eulerian fields has been proposed which eliminates the necessity to mix Eulerian and Lagrangian techniques or prescribed pdf assumptions. In the present work, for the first time the stochastic-field method is applied to multi-phase flow and in particular to cavitating flow. To validate the proposed stochastic-field cavitation model, two applications are considered. Firstly, sheet cavitation is simulated in a Venturi-type nozzle. The second application is an innovative fluidic diode which exhibits coolant flashing. Agreement with experimental results is obtained for both applications with a fixed set of model constants. The stochastic-field cavitation model captures the wide range of pdf shapes present at different locations.
Calculations of the flow properties of a confined diffusion flame
NASA Technical Reports Server (NTRS)
Kim, Yongmo; Chung, T. J.; Sohn, Jeong L.
1989-01-01
A finite element algorithm for the computation of confined, axisymmetric, turbulent diffusion flames is developed. The mean mixture properties are obtained by three methods based on the diffusion-flame concept: without a probability density function (PDF), with a double-delta PDF, and with a beta PDF. A comparison is made among the combustion models, and the effects of turbulence on combustion are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Ping; Wang, Chenyu; Li, Mingjie
2018-01-31
In general, the modeling errors of a dynamic system model are a set of random variables. Traditional modeling performance indexes such as the mean square error (MSE) and root mean square error (RMSE) cannot fully express the connotation of modeling errors with stochastic characteristics in both the time domain and the space domain. Therefore, the probability density function (PDF) is introduced to describe the modeling errors completely on both time and space scales. Based on it, a novel wavelet neural network (WNN) modeling method is proposed by minimizing the two-dimensional (2D) PDF shaping of the modeling errors. First, the modeling-error PDF of the traditional WNN is estimated using the data-driven kernel density estimation (KDE) technique. Then, the quadratic sum of the 2D deviation between the modeling-error PDF and the target PDF is used as the performance index to optimize the WNN model parameters by gradient descent. Since the WNN has strong nonlinear approximation and adaptive capability, and all the parameters are well optimized by the proposed method, the developed WNN model can make the modeling-error PDF track the target PDF. A simulation example and an application to a blast-furnace ironmaking process show that the proposed method has higher modeling precision and better generalization ability than conventional WNN modeling based on the MSE criterion. Furthermore, the proposed method gives a more desirable estimate of the modeling-error PDF, which approximates a tall, narrow Gaussian distribution.
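The KDE-plus-gradient-descent loop described above can be sketched on a deliberately simplified stand-in: a plain linear model instead of a wavelet network, a 1-D residual pdf instead of the 2D one, and a numerical gradient. All model choices, parameters, and the target pdf below are illustrative assumptions, not the paper's.

```python
import numpy as np

# PDF-shaping sketch: estimate the residual pdf by kernel density
# estimation, then push it toward a tall, narrow zero-mean Gaussian
# target by gradient descent on the integrated squared pdf deviation.

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 200)
y = 1.5 * x + 0.1 * rng.standard_normal(200)     # true slope 1.5

grid = np.linspace(-2, 2, 201)
dgrid = grid[1] - grid[0]
target = np.exp(-grid**2 / (2 * 0.1**2))
target /= target.sum() * dgrid                   # normalised target pdf

def residual_pdf(w, h=0.08):
    """Gaussian KDE of the residuals of the model y = w * x."""
    res = y - w * x
    kern = np.exp(-(grid[:, None] - res)**2 / (2 * h * h))
    return kern.mean(axis=1) / (h * np.sqrt(2 * np.pi))

def loss(w):
    return np.sum((residual_pdf(w) - target)**2) * dgrid

w, lr, eps = 0.0, 0.05, 1e-4
for _ in range(200):
    grad = (loss(w + eps) - loss(w - eps)) / (2 * eps)  # numerical gradient
    w -= lr * grad
print(w)   # approaches the true slope 1.5
```

Shrinking the pdf deviation forces the residual distribution toward the narrow target, which indirectly recovers the parameter that minimizes the residual spread; this is the mechanism the MSE criterion does not expose.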
Simplified Computation for Nonparametric Windows Method of Probability Density Function Estimation.
Joshi, Niranjan; Kadir, Timor; Brady, Michael
2011-08-01
Recently, Kadir and Brady proposed a method for estimating probability density functions (PDFs) for digital signals, which they call the Nonparametric (NP) Windows method. The method involves constructing a continuous-space representation of the discrete-space, sampled signal using a suitable interpolation method. NP Windows requires only a small number of observed signal samples to estimate the PDF and is completely data driven. In this short paper, we first develop analytical formulae to obtain the NP Windows PDF estimates for 1D, 2D, and 3D signals for different interpolation methods. We then show that the original procedure for calculating the PDF estimate can be significantly simplified and made computationally more efficient by a judicious choice of the frame of reference. We also outline specific algorithmic details of the procedures to enable quick implementation. Our reformulation of the original concept directly demonstrates a close link between the NP Windows method and the kernel density estimator.
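The 1-D construction can be sketched directly (this is my reading of the linear-interpolation case, not the authors' code): between consecutive samples a and b, the interpolated signal sweeps linearly through every value in [a, b], so that segment contributes a uniform density on [a, b], and the estimate is the average over segments.

```python
import numpy as np

# NP-Windows-style 1-D PDF estimate under linear interpolation:
# each inter-sample segment contributes a uniform density over the
# value range it sweeps; averaging over segments gives the estimate.

def np_windows_1d(samples, grid):
    pdf = np.zeros_like(grid)
    n_seg = len(samples) - 1
    for a, b in zip(samples[:-1], samples[1:]):
        lo, hi = min(a, b), max(a, b)
        if hi > lo:
            pdf += ((grid >= lo) & (grid < hi)) / (hi - lo)
    return pdf / n_seg

samples = np.array([0.0, 1.0, 0.5, 0.75])
grid = np.linspace(0, 1, 401)
pdf = np_windows_1d(samples, grid)
dg = grid[1] - grid[0]
print(np.sum(pdf) * dg)    # integrates to ~1
```

Each segment plays the role of a data-dependent kernel (uniform, with width set by the local signal slope), which is exactly the link to kernel density estimation that the abstract highlights.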
PDF methods for combustion in high-speed turbulent flows
NASA Technical Reports Server (NTRS)
Pope, Stephen B.
1995-01-01
This report describes the research performed during the second year of this three-year project. The ultimate objective of the project is to extend the applicability of probability density function (pdf) methods from incompressible to compressible turbulent reactive flows. As described in subsequent sections, progress has been made on: (1) the formulation and modelling of pdf equations for compressible turbulence, in both homogeneous and inhomogeneous inert flows; and (2) the implementation of the compressible model in various flow configurations, namely decaying isotropic turbulence, homogeneous shear flow, and a plane mixing layer.
Golman, Mikhail; Padovano, William; Shmuylovich, Leonid; Kovács, Sándor J
2018-03-01
Conventional echocardiographic diastolic function (DF) assessment approximates transmitral flow velocity contours (Doppler E-waves) as triangles, with the peak velocity (E_peak), acceleration time (AT), and deceleration time (DT) as indexes. These metrics have limited value because they cannot characterize the underlying physiology. The parametrized diastolic filling (PDF) formalism provides a physiologic, kinematic, mechanism-based characterization of DF by extracting chamber stiffness (k), relaxation (c), and load (x_o) from E-wave contours. We derive the mathematical relationship between the PDF parameters and E_peak, AT, and DT, and thereby introduce the geometric method (GM), which computes the PDF parameters using E_peak, AT, and DT as input. Numerical experiments validated the GM by analysis of 208 E-waves from 31 datasets spanning the full range of clinical diastolic function. The GM yielded indistinguishable average parameter values per subject versus the gold-standard PDF method (k: R^2 = 0.94; c: R^2 = 0.95; x_o: R^2 = 0.95; p < 0.01 for all parameters). Additionally, inter-rater reliability for GM-determined parameters was excellent (k: ICC = 0.956; c: ICC = 0.944; x_o: ICC = 0.993). Results indicate that E-wave symmetry (AT/DT) may comprise a new index of DF. By employing indexes (E_peak, AT, DT) already in standard clinical use, the GM capitalizes on the power of the PDF method to quantify DF in terms of physiologic chamber properties.
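The kinematic model commonly stated for the PDF formalism is a damped harmonic oscillator (unit mass, stiffness k, damping c, recoiling from load x_o), whose speed contour is the E-wave. A forward-direction sketch with illustrative parameter values (my choices, not patient data): simulate the contour, read off E_peak and AT, and check them against the closed-form underdamped expressions.

```python
import numpy as np

# Damped-oscillator E-wave sketch: x'' + c x' + k x = 0, x(0) = x_o,
# v(0) = 0.  The E-wave contour is |v(t)|; AT is the time to its peak.

k, c, x0 = 200.0, 14.0, 10.0            # underdamped: c**2 < 4*k
dt, t_end = 1e-5, 0.4
n = int(t_end / dt)
x, v = x0, 0.0
speed = np.empty(n)
for i in range(n):
    a = -k * x - c * v
    v += a * dt
    x += v * dt                         # semi-implicit Euler
    speed[i] = abs(v)

E_peak, AT = speed.max(), speed.argmax() * dt

# Closed-form check for the underdamped oscillator:
wd = np.sqrt(k - c * c / 4.0)
AT_exact = np.arctan2(2.0 * wd, c) / wd
E_peak_exact = x0 * k / wd * np.exp(-c * AT_exact / 2.0) * np.sin(wd * AT_exact)
print(AT, AT_exact)
print(E_peak, E_peak_exact)
```

The GM runs this mapping in reverse: given measured E_peak, AT, and DT, relations like the one for AT_exact above are inverted to recover k, c, and x_o.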
Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas
2005-01-01
The performance of high-power wavelength-division-multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo (MCMC) method is used to calculate the probability density function (PDF) of the decision variable of a receiver limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.
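The reason such biased-sampling methods beat naive Monte Carlo for small error probabilities can be shown with a simpler importance-sampling analogue (not the MCMC multicanonical algorithm itself): bias the sampling toward the rare region, then reweight by the likelihood ratio.

```python
import numpy as np

# Rare-tail estimation, P(X > 4) for a standard normal (~3.17e-5):
# naive Monte Carlo sees almost no tail events at n = 1e5, while
# sampling from N(4, 1) and reweighting recovers the probability.

rng = np.random.default_rng(3)
n = 100_000
threshold = 4.0

# Naive Monte Carlo: expected hit count is only ~3 in 1e5 samples.
naive = np.mean(rng.standard_normal(n) > threshold)

# Importance sampling: draw y ~ N(threshold, 1), weight by the
# likelihood ratio phi(y) / phi(y - threshold).
y = rng.standard_normal(n) + threshold
weights = np.exp(-0.5 * y**2 + 0.5 * (y - threshold)**2)
tilted = np.mean((y > threshold) * weights)

print(naive, tilted)   # exact value is ~3.17e-5
```

The multicanonical method achieves the same variance reduction adaptively, learning the biasing distribution iteratively instead of requiring it in closed form — which is what makes it practical for a receiver decision variable with correlated FWM noise.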
Investigations of turbulent scalar fields using probability density function approach
NASA Technical Reports Server (NTRS)
Gao, Feng
1991-01-01
Scalar fields undergoing random advection have attracted much attention from both theoretical and applied researchers. Research interest spans from the study of the small-scale structures of turbulent scalar fields to the modeling and simulation of turbulent reacting flows. The probability density function (PDF) method is an effective tool in the study of turbulent scalar fields, especially those involving chemical reactions. It has been argued that a one-point, joint PDF approach is the method of choice among the many simulation and closure methods for turbulent combustion and chemically reacting flows, based on its practical feasibility for multiple reactants in the foreseeable future. Instead of the multi-point PDF, the joint PDF of a scalar and its gradient, which represents the roles of both the scalar and scalar diffusion, is introduced. A proper closure model for the molecular diffusion term in the PDF equation is investigated. Another direction in this research is the study of the mapping closure method recently proposed to deal with PDFs in turbulent fields. This method seems to capture the physics correctly when applied to diffusion problems. However, if turbulent stretching is included, the amplitude mapping has to be supplemented either by adjusting the parameters representing turbulent stretching at each time step or by introducing a coordinate mapping. This technique is still under development and seems quite promising. The final objective of this project is to understand some fundamental properties of turbulent scalar fields and to develop practical numerical schemes capable of handling turbulent reacting flows.
Jian, Y; Yao, R; Mulnix, T; Jin, X; Carson, R E
2015-01-07
Resolution degradation in PET image reconstruction can be caused by inaccurate modeling of the physical factors in the acquisition process. Resolution modeling (RM) is a common technique that takes into account the resolution-degrading factors in the system matrix. Our previous work introduced a probability density function (PDF) method of deriving the resolution kernels from Monte Carlo simulation and parameterizing the LORs to reduce the number of kernels needed for image reconstruction. In addition, LOR-PDF allows different PDFs to be applied to LORs from different crystal-layer pairs of the HRRT. In this study, a thorough test was performed with this new model (LOR-PDF) applied to two PET scanners: the HRRT and the Focus-220. A more uniform resolution distribution was observed in point-source reconstructions by replacing the spatially invariant kernels with the spatially variant LOR-PDF. Specifically, from the center to the edge of the radial field of view (FOV) of the HRRT, the measured in-plane FWHMs of point sources in a warm background varied only slightly, from 1.7 mm to 1.9 mm, in LOR-PDF reconstructions. In Minihot and contrast phantom reconstructions, LOR-PDF resulted in up to 9% higher contrast at any given noise level than an image-space resolution model. LOR-PDF also has the advantage of performing crystal-layer-dependent resolution modeling. The contrast improvement from LOR-PDF was verified statistically by replicate reconstructions. In addition, [11C]AFM rats imaged on the HRRT and [11C]PHNO rats imaged on the Focus-220 were used to demonstrate the advantage of the new model. Higher contrast between high-uptake regions only a few millimeters in diameter and the background was observed in LOR-PDF reconstructions than with the other methods.
Talsma, Aaron D; Christov, Christo P; Terriente-Felix, Ana; Linneweber, Gerit A; Perea, Daniel; Wayland, Matthew; Shafer, Orie T; Miguel-Aliaga, Irene
2012-07-24
The role of the central neuropeptide pigment-dispersing factor (PDF) in circadian timekeeping in Drosophila is remarkably similar to that of vasoactive intestinal peptide (VIP) in mammals. Like VIP, PDF is expressed outside the circadian network by neurons innervating the gut, but the function and mode of action of this PDF have not been characterized. Here we investigate the visceral roles of PDF by adapting cellular and physiological methods to the study of visceral responses to PDF signaling in wild-type and mutant genetic backgrounds. We find that intestinal PDF acts at a distance on the renal system, where it regulates ureter contractions. We show that PdfR, PDF's established receptor, is expressed by the muscles of the excretory system, and present evidence that PdfR-induced cAMP increases underlie the myotropic effects of PDF. These findings extend the similarities between PDF and VIP beyond their shared central role as circadian regulators, and uncover an unexpected endocrine mode of myotropic action for an intestinal neuropeptide on the renal system.
Evaluation of a Consistent LES/PDF Method Using a Series of Experimental Spray Flames
NASA Astrophysics Data System (ADS)
Heye, Colin; Raman, Venkat
2012-11-01
A consistent method for the evolution of the joint-scalar probability density function (PDF) transport equation is proposed for application to large eddy simulation (LES) of turbulent reacting flows containing evaporating spray droplets. PDF transport equations offer the benefit of including the chemical source term in closed form; however, additional terms describing LES subfilter mixing must be modeled. Recently available detailed experimental measurements provide model validation data spanning the wide range of evaporation rates and combustion regimes known to occur in spray flames. In this work, the experimental data will be used to investigate the impact of droplet mass loading and evaporation rates on the subfilter scalar PDF shape in comparison with conventional flamelet models. In addition, existing model term closures in the PDF transport equations are evaluated with a focus on their validity in the presence of regime changes.
NASA Astrophysics Data System (ADS)
Xu, Jun; Dang, Chao; Kong, Fan
2017-10-01
This paper presents a new method for efficient structural reliability analysis. In this method, a rotational quasi-symmetric point method (RQ-SPM) is proposed for evaluating the fractional moments of the performance function. Then, the derivation of the performance function's probability density function (PDF) is carried out based on the maximum entropy method in which constraints are specified in terms of fractional moments. In this regard, the probability of failure can be obtained by a simple integral over the performance function's PDF. Six examples, including a finite element-based reliability analysis and a dynamic system with strong nonlinearity, are used to illustrate the efficacy of the proposed method. All the computed results are compared with those by Monte Carlo simulation (MCS). It is found that the proposed method can provide very accurate results with low computational effort.
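The Monte Carlo simulation (MCS) benchmark referred to above estimates the failure probability by direct sampling of the performance function. A minimal sketch of that baseline, using a hypothetical linear limit state g(x1, x2) = 3 - x1 - x2 with independent standard-normal inputs (all names and parameters are illustrative, not from the paper):

```python
import random, math

def failure_probability(g, sample, n, seed=0):
    """Crude Monte Carlo estimate of P[g(X) < 0]."""
    rng = random.Random(seed)
    fails = sum(1 for _ in range(n) if g(sample(rng)) < 0.0)
    return fails / n

# hypothetical limit-state function: failure when x1 + x2 > 3
g = lambda x: 3.0 - x[0] - x[1]
sample = lambda rng: (rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0))

pf = failure_probability(g, sample, n=200_000)
# for this linear case the answer is known in closed form:
# x1 + x2 ~ N(0, 2), so P[x1 + x2 > 3] = 0.5 * erfc(3 / 2)
pf_exact = 0.5 * math.erfc(3.0 / (math.sqrt(2.0) * math.sqrt(2.0)))
```

The point of the fractional-moment/maximum-entropy method is precisely to avoid the large sample counts such direct sampling needs when pf is small.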
A transformed path integral approach for solution of the Fokker-Planck equation
NASA Astrophysics Data System (ADS)
Subramaniam, Gnana M.; Vedula, Prakash
2017-10-01
A novel path integral (PI) based method for solution of the Fokker-Planck equation is presented. The proposed method, termed the transformed path integral (TPI) method, utilizes a new formulation for the underlying short-time propagator to perform the evolution of the probability density function (PDF) in a transformed computational domain where a more accurate representation of the PDF can be ensured. The new formulation, based on a dynamic transformation of the original state space with the statistics of the PDF as parameters, preserves the non-negativity of the PDF and incorporates short-time properties of the underlying stochastic process. New update equations for the state PDF in a transformed space and the parameters of the transformation (including mean and covariance) that better accommodate nonlinearities in drift and non-Gaussian behavior in distributions are proposed (based on properties of the SDE). Owing to the choice of transformation considered, the proposed method maps a fixed grid in transformed space to a dynamically adaptive grid in the original state space. The TPI method, in contrast to conventional methods such as Monte Carlo simulations and fixed grid approaches, is able to better represent the distributions (especially the tail information) and better address challenges in processes with large diffusion, large drift and large concentration of PDF. Additionally, in the proposed TPI method, error bounds on the probability in the computational domain can be obtained using the Chebyshev's inequality. The benefits of the TPI method over conventional methods are illustrated through simulations of linear and nonlinear drift processes in one-dimensional and multidimensional state spaces. The effects of spatial and temporal grid resolutions as well as that of the diffusion coefficient on the error in the PDF are also characterized.
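The setting of the TPI method, a stochastic process whose state PDF is evolved in time, can be grounded with the simplest linear-drift case. The sketch below (not the authors' method; all parameters invented) integrates an Ornstein-Uhlenbeck process by Euler-Maruyama, compares the empirical variance with the known stationary value, and checks the Chebyshev-type tail bound mentioned in the abstract on the resulting empirical distribution:

```python
import random, math

# Ornstein-Uhlenbeck process dX = -theta*X dt + sigma dW: a linear-drift
# test case whose stationary PDF is N(0, sigma^2 / (2*theta)).
theta, sigma, dt, steps, npart = 1.0, 1.0, 0.01, 1500, 2000
rng = random.Random(1)
x = [0.0] * npart
for _ in range(steps):
    x = [xi - theta * xi * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
         for xi in x]

var_exact = sigma ** 2 / (2.0 * theta)            # stationary variance = 0.5
var_mc = sum(xi * xi for xi in x) / npart

# Chebyshev-type bound on the empirical distribution:
# P(|X| >= k * sd) <= 1/k^2 regardless of the underlying PDF
k = 2.0
sd = math.sqrt(var_mc)
tail = sum(1 for xi in x if abs(xi) >= k * sd) / npart
```

For the near-Gaussian stationary PDF the measured tail fraction (about 0.05 at k = 2) sits well below the distribution-free bound of 0.25, which is what makes the bound usable as a conservative error estimate on a truncated computational domain.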
Comparison of PDF and Moment Closure Methods in the Modeling of Turbulent Reacting Flows
NASA Technical Reports Server (NTRS)
Norris, Andrew T.; Hsu, Andrew T.
1994-01-01
In modeling turbulent reactive flows, Probability Density Function (PDF) methods have an advantage over the more traditional moment closure schemes in that the PDF formulation treats the chemical reaction source terms exactly, while moment closure methods are required to model the mean reaction rate. The common model used is the laminar chemistry approximation, where the effects of turbulence on the reaction are assumed negligible. For flows with low turbulence levels and fast chemistry, the difference between the two methods can be expected to be small. However, for flows with finite rate chemistry and high turbulence levels, significant errors can be expected in the moment closure method. In this paper, the ability of the PDF method and the moment closure scheme to accurately model a turbulent reacting flow is tested. To accomplish this, both schemes were used to model a CO/H2/N2-air piloted diffusion flame near extinction. Identical thermochemistry, turbulence models, initial conditions and boundary conditions are employed to ensure a consistent comparison can be made. The results of the two methods are compared to experimental data as well as to each other. The comparison reveals that the PDF method provides good agreement with the experimental data, while the moment closure scheme incorrectly shows a broad, laminar-like flame structure.
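The error the laminar chemistry approximation makes can be seen in one line of statistics: for a nonlinear (Arrhenius-like) rate, the mean of the rate over a fluctuating temperature is not the rate at the mean temperature. A toy illustration (rate law, activation temperature and fluctuation level are all invented, not from the paper):

```python
import random, math

# Toy Arrhenius-type rate k(T) = exp(-Ta / T). Because k is nonlinear in T,
# E[k(T)] != k(E[T]) when T fluctuates -- the discrepancy a moment closure
# with the laminar chemistry approximation ignores.
Ta = 10000.0                 # hypothetical activation temperature [K]
Tmean, Trms = 1500.0, 300.0  # hypothetical mean and rms temperature [K]

rng = random.Random(42)
samples = [max(rng.gauss(Tmean, Trms), 300.0) for _ in range(100_000)]

rate_of_mean = math.exp(-Ta / Tmean)                       # "laminar" closure
mean_of_rate = sum(math.exp(-Ta / T) for T in samples) / len(samples)
ratio = mean_of_rate / rate_of_mean                        # closure error
```

Because the rate is convex over the relevant temperature range, the hot-side fluctuations dominate and the true mean rate exceeds the closure value by a factor well above one; a transported-PDF method evaluates E[k(T)] exactly from the sampled temperatures.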
Probability Density Functions of Observed Rainfall in Montana
NASA Technical Reports Server (NTRS)
Larsen, Scott D.; Johnson, L. Ronald; Smith, Paul L.
1995-01-01
The question of whether a rain rate probability density function (PDF) can vary uniformly between precipitation events is examined. Advances in technology make image analysis of large samples of radar echoes possible, and the data provided by such an analysis readily allow development of distributions of radar reflectivity factor (and, by extension, rain rate). Finding a PDF becomes a matter of finding a function that describes the curve approximating the resulting distributions. Ideally, a single PDF would exist for all cases, or many PDFs would share the same functional form with only systematic variations in parameters (such as size or shape). Satisfying either of these cases would validate the theoretical basis of the Area Time Integral (ATI). Using the method of moments and Elderton's curve selection criteria, the Pearson Type 1 equation was identified as a potential fit for 89% of the observed distributions. Further analysis indicates that the Type 1 curve approximates the shape of the distributions but does not produce a close quantitative fit.
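On a known support, the Pearson Type 1 family is a shifted, scaled Beta distribution, so the method-of-moments step reduces to matching the sample mean and variance to the Beta shape parameters. A sketch with made-up rain-rate-like values (the data and the assumed support are illustrative only):

```python
# Method-of-moments fit of a Beta distribution (the Pearson Type 1 family
# rescaled to a known support), illustrating moment-based curve selection.
data = [0.2, 0.5, 1.1, 1.8, 2.4, 3.0, 3.9, 5.2, 6.8, 9.1]  # made-up rates
lo, hi = 0.0, 10.0                  # assumed support of the Type 1 curve
xs = [(d - lo) / (hi - lo) for d in data]      # rescale to [0, 1]

n = len(xs)
mean = sum(xs) / n
var = sum((x - mean) ** 2 for x in xs) / n

# Beta(a, b): mean = a/(a+b), var = a*b / ((a+b)^2 * (a+b+1));
# inverting these two relations gives the shape parameters directly
common = mean * (1.0 - mean) / var - 1.0
a = mean * common
b = (1.0 - mean) * common
```

A full Elderton-style selection would also use the third and fourth moments to choose among the Pearson types before fitting; this sketch shows only the final moment-matching step once Type 1 is chosen.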
Argenti, Fabrizio; Bianchi, Tiziano; Alparone, Luciano
2006-11-01
In this paper, a new despeckling method based on undecimated wavelet decomposition and maximum a posteriori (MAP) estimation is proposed. The method relies on the assumption that the probability density function (pdf) of each wavelet coefficient is generalized Gaussian (GG). The major novelty of the proposed approach is that the parameters of the GG pdf are taken to be space-varying within each wavelet frame; thus, they may be adjusted to the spatial image context, not only to scale and orientation. Since the MAP equation to be solved is a function of the parameters of the assumed pdf model, the variance and shape factor of the GG function are derived from the theoretical moments, which depend on the moments and joint moments of the observed noisy signal and on the statistics of speckle. The solution of the MAP equation yields the MAP estimate of the wavelet coefficients of the noise-free image, and the restored SAR image is synthesized from these coefficients. Experimental results, carried out on both synthetic speckled images and true SAR images, demonstrate that MAP filtering can be successfully applied to SAR images represented in the shift-invariant wavelet domain, without resorting to a logarithmic transformation.
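The idea of recovering a GG shape factor from moments can be sketched with the simplest moment pair: the kurtosis of a generalized Gaussian depends only on the shape parameter, so matching the sample kurtosis pins it down. This is a simplified stand-in for the paper's joint-moment derivation (the inversion route and all parameters here are assumptions):

```python
import random, math

def gg_kurtosis(nu):
    """Kurtosis E[x^4]/E[x^2]^2 of a zero-mean generalized Gaussian
    with shape parameter nu (nu=2: Gaussian, nu=1: Laplacian)."""
    return (math.gamma(5.0 / nu) * math.gamma(1.0 / nu)
            / math.gamma(3.0 / nu) ** 2)

def shape_from_kurtosis(kurt, lo=0.3, hi=4.0):
    """Invert the (monotonically decreasing) kurtosis relation by bisection."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if gg_kurtosis(mid) > kurt:
            lo = mid          # kurtosis too high -> shape too small
        else:
            hi = mid
    return 0.5 * (lo + hi)

# synthetic test: Laplacian samples are GG with nu = 1 (kurtosis 6)
rng = random.Random(7)
n = 200_000
xs = [rng.expovariate(1.0) * (1.0 if rng.random() < 0.5 else -1.0)
      for _ in range(n)]
m2 = sum(x * x for x in xs) / n
m4 = sum(x ** 4 for x in xs) / n
nu_hat = shape_from_kurtosis(m4 / (m2 * m2))
```

In the despeckling setting these moments would be estimated locally (per spatial window) rather than globally, which is what makes the GG parameters space-varying.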
Application of PDF methods to compressible turbulent flows
NASA Astrophysics Data System (ADS)
Delarue, B. J.; Pope, S. B.
1997-09-01
A particle method applying the probability density function (PDF) approach to turbulent compressible flows is presented. The method is applied to several turbulent flows, including the compressible mixing layer, and good agreement is obtained with experimental data. The PDF equation is solved using a Lagrangian/Monte Carlo method. To accurately account for the effects of compressibility on the flow, the velocity PDF formulation is extended to include thermodynamic variables such as the pressure and the internal energy. The mean pressure, the determination of which has been the object of active research over the last few years, is obtained directly from the particle properties. It is therefore not necessary to link the PDF solver with a finite-volume type solver. The stochastic differential equations (SDE) which model the evolution of particle properties are based on existing second-order closures for compressible turbulence, limited in application to low turbulent Mach number flows. Tests are conducted in decaying isotropic turbulence to compare the performances of the PDF method with the Reynolds-stress closures from which it is derived, and in homogeneous shear flows, at which stage comparison with direct numerical simulation (DNS) data is conducted. The model is then applied to the plane compressible mixing layer, reproducing the well-known decrease in the spreading rate with increasing compressibility. It must be emphasized that the goal of this paper is not as much to assess the performance of models of compressibility effects, as it is to present an innovative and consistent PDF formulation designed for turbulent inhomogeneous compressible flows, with the aim of extending it further to deal with supersonic reacting flows.
An actuarial approach to retrofit savings in buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Subbarao, Krishnappa; Etingov, Pavel V.; Reddy, T. A.
An actuarial method has been developed for determining energy savings from retrofits using energy use data for a number of buildings. This method should be contrasted with the traditional method of using pre- and post-retrofit data on the same building. It supports the U.S. Department of Energy Building Performance Database of real building performance data and the related tools that enable engineering and financial practitioners to evaluate retrofits. The actuarial approach derives, from the database, probability density functions (PDFs) for energy savings from retrofits by creating peer groups for the user's pre- and post-retrofit buildings; the savings PDF is derived from the energy use distributions of the two groups. This provides the basis for engineering analysis as well as financial risk analysis leading to investment decisions. Several technical issues are addressed: the savings PDF is obtained from the pre- and post-PDFs through a convolution; smoothing using kernel density estimation is applied to make the PDF more realistic; the low-data-density problem can be mitigated through a neighborhood methodology; correlations between pre- and post-retrofit buildings are addressed to improve the savings PDF; and sample-size effects are addressed through Kolmogorov-Smirnov tests and quantile-quantile plots.
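The convolution step has a direct sampling interpretation: the savings distribution is the distribution of (pre-group use minus post-group use) for independently paired peer buildings. A sketch of that empirical convolution (all energy-use numbers are made up for illustration; the real method works from database peer groups, not Gaussians):

```python
import random

# Empirical convolution: savings = pre - post for independently drawn
# peer buildings, i.e. the pre PDF convolved with the reflected post PDF.
rng = random.Random(3)
pre = [rng.gauss(100.0, 15.0) for _ in range(20_000)]   # pre-group EUI
post = [rng.gauss(85.0, 12.0) for _ in range(20_000)]   # post-group EUI

# random pairing of the two groups realizes the independence assumption
savings = [p - q for p, q in zip(pre, rng.sample(post, len(post)))]

m = len(savings)
mean_savings = sum(savings) / m
# under independence the variances add, as they must for a convolution
var_savings = sum((s - mean_savings) ** 2 for s in savings) / m
```

Kernel-density smoothing of the `savings` sample would then give the continuous savings PDF; handling pre/post correlation, as the abstract notes, requires going beyond this independent-pairing picture.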
PDF approach for turbulent scalar field: Some recent developments
NASA Technical Reports Server (NTRS)
Gao, Feng
1993-01-01
The probability density function (PDF) method has proven to be a very useful approach in turbulence research. It has been particularly effective in simulating turbulent reacting flows and in studying some detailed statistical properties generated by a turbulent field. There are, however, some important questions that have yet to be answered in PDF studies. Our efforts in the past year have been focused on two areas. First, a simple mixing model suitable for Monte Carlo simulations has been developed based on the mapping closure. Second, the mechanism of turbulent transport has been analyzed in order to understand the recently observed abnormal PDFs of turbulent temperature fields generated by linear heat sources.
The photon content of the proton
NASA Astrophysics Data System (ADS)
Manohar, Aneesh V.; Nason, Paolo; Salam, Gavin P.; Zanderighi, Giulia
2017-12-01
The photon PDF of the proton is needed for precision comparisons of LHC cross sections with theoretical predictions. In a recent paper, we showed how the photon PDF could be determined in terms of the electromagnetic proton structure functions F2 and FL measured in electron-proton scattering experiments, and gave an explicit formula for the PDF including all terms up to next-to-leading order. In this paper we give details of the derivation. We obtain the photon PDF using the factorisation theorem and applying it to suitable BSM hard scattering processes. We also obtain the same PDF in a process-independent manner using the usual definition of PDFs in terms of light-cone Fourier transforms of products of operators. We show how our method gives an exact representation for the photon PDF in terms of F2 and FL, valid to all orders in QED and QCD, and including all non-perturbative corrections. This representation is then used to give an explicit formula for the photon PDF to one order higher than our previous result. We also generalise our results to obtain formulae for the polarised photon PDF, as well as the photon TMDPDF. Using our formula, we derive the Pγi subset of the DGLAP splitting functions to order ααs and α², which agree with known results. We give a detailed explanation of the approach that we follow to determine a photon PDF and its uncertainty within the above framework.
Quantum diffusion during inflation and primordial black holes
NASA Astrophysics Data System (ADS)
Pattison, Chris; Vennin, Vincent; Assadullahi, Hooshyar; Wands, David
2017-10-01
We calculate the full probability density function (PDF) of inflationary curvature perturbations, even in the presence of large quantum backreaction. Making use of the stochastic-δ N formalism, two complementary methods are developed, one based on solving an ordinary differential equation for the characteristic function of the PDF, and the other based on solving a heat equation for the PDF directly. In the classical limit where quantum diffusion is small, we develop an expansion scheme that not only recovers the standard Gaussian PDF at leading order, but also allows us to calculate the first non-Gaussian corrections to the usual result. In the opposite limit where quantum diffusion is large, we find that the PDF is given by an elliptic theta function, which is fully characterised by the ratio between the squared width and height (in Planck mass units) of the region where stochastic effects dominate. We then apply these results to the calculation of the mass fraction of primordial black holes from inflation, and show that no more than ~ 1 e-fold can be spent in regions of the potential dominated by quantum diffusion. We explain how this requirement constrains inflationary potentials with two examples.
Numerical simulation of turbulent combustion: Scientific challenges
NASA Astrophysics Data System (ADS)
Ren, ZhuYin; Lu, Zhen; Hou, LingYun; Lu, LiuYan
2014-08-01
Predictive simulation of engine combustion is key to understanding the underlying complicated physicochemical processes, improving engine performance, and reducing pollutant emissions. Critical issues such as turbulence modeling, turbulence-chemistry interaction, and accommodation of detailed chemical kinetics in complex flows remain challenging and essential for high-fidelity combustion simulation. This paper reviews the current status of the state-of-the-art large eddy simulation (LES)/probability density function (PDF)/detailed chemistry approach, which can address these three challenging modeling issues. PDF as a subgrid model for LES is formulated, and the hybrid mesh-particle method for LES/PDF simulations is described. The development needs in micro-mixing models for PDF simulations of turbulent premixed combustion are then identified. Finally, the different acceleration methods for detailed chemistry are reviewed and a combined strategy is proposed for further development.
Exact PDF equations and closure approximations for advective-reactive transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venturi, D.; Tartakovsky, Daniel M.; Tartakovsky, Alexandre M.
2013-06-01
Mathematical models of advection–reaction phenomena rely on advective flow velocity and (bio)chemical reaction rates that are notoriously random. By using functional integral methods, we derive exact evolution equations for the probability density function (PDF) of the state variables of the advection–reaction system in the presence of random transport velocity and random reaction rates with rather arbitrary distributions. These PDF equations are solved analytically for transport with deterministic flow velocity and a linear reaction rate represented mathematically by a heterogeneous and strongly-correlated random field. Our analytical solution is then used to investigate the accuracy and robustness of the recently proposed large-eddy diffusivity (LED) closure approximation [1]. We find that the solution to the LED-based PDF equation, which is exact for uncorrelated reaction rates, is accurate even in the presence of strong correlations, and it provides an upper bound of predictive uncertainty.
Convolutionless Nakajima-Zwanzig equations for stochastic analysis in nonlinear dynamical systems.
Venturi, D; Karniadakis, G E
2014-06-08
Determining the statistical properties of stochastic nonlinear systems is of major interest across many disciplines. Currently, there are no general efficient methods to deal with this challenging problem that involves high dimensionality, low regularity and random frequencies. We propose a framework for stochastic analysis in nonlinear dynamical systems based on goal-oriented probability density function (PDF) methods. The key idea stems from techniques of irreversible statistical mechanics, and it relies on deriving evolution equations for the PDF of quantities of interest, e.g. functionals of the solution to systems of stochastic ordinary and partial differential equations. Such quantities could be low-dimensional objects in infinite dimensional phase spaces. We develop the goal-oriented PDF method in the context of the time-convolutionless Nakajima-Zwanzig-Mori formalism. We address the question of approximation of reduced-order density equations by multi-level coarse graining, perturbation series and operator cumulant resummation. Numerical examples are presented for stochastic resonance and stochastic advection-reaction problems.
Superstatistics analysis of the ion current distribution function: Met3PbCl influence study.
Miśkiewicz, Janusz; Trela, Zenon; Przestalski, Stanisław; Karcz, Waldemar
2010-09-01
A novel analysis of ion current time series is proposed. It is shown that higher (second, third and fourth) statistical moments of the ion current probability distribution function (PDF) can yield new information about ion channel properties. The method is illustrated on a two-state model where the PDF of the compound states are given by normal distributions. The proposed method was applied to the analysis of the SV cation channels of vacuolar membrane of Beta vulgaris and the influence of trimethyllead chloride (Met(3)PbCl) on the ion current probability distribution. Ion currents were measured by patch-clamp technique. It was shown that Met(3)PbCl influences the variance of the open-state ion current but does not alter the PDF of the closed-state ion current. Incorporation of higher statistical moments into the standard investigation of ion channel properties is proposed.
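The two-state model named in the abstract, an ion current drawn from one of two normal distributions (closed/open), makes the value of higher moments concrete: the mixture's variance and skewness respond to state parameters that leave the mean almost uninformative. A sketch with invented channel parameters (not the measured Beta vulgaris values):

```python
import random

# Two-state channel model: current sampled from a normal closed-state PDF
# or a normal open-state PDF; higher central moments of the mixture carry
# the state information. All parameters are illustrative.
rng = random.Random(11)
p_open = 0.3

def draw_current():
    if rng.random() < p_open:
        return rng.gauss(5.0, 1.0)    # open state: mean 5 pA, sd 1 pA
    return rng.gauss(0.0, 0.5)        # closed state: mean 0 pA, sd 0.5 pA

xs = [draw_current() for _ in range(100_000)]
n = len(xs)
mean = sum(xs) / n                                   # -> ~1.5
m2 = sum((x - mean) ** 2 for x in xs) / n            # variance
m3 = sum((x - mean) ** 3 for x in xs) / n            # third central moment
skew = m3 / m2 ** 1.5                                # positive: open tail
```

A change confined to the open-state variance, like the Met(3)PbCl effect reported above, would shift `m2` and `skew` while barely moving `mean`, which is why the moment-based analysis sees it.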
The PDF4LHC report on PDFs and LHC data: Results from Run I and preparation for Run II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rojo, Juan; Accardi, Alberto; Ball, Richard D.
2015-09-16
The accurate determination of Parton Distribution Functions (PDFs) of the proton is an essential ingredient of the Large Hadron Collider (LHC) program. PDF uncertainties impact a wide range of processes, from Higgs boson characterization and precision Standard Model measurements to New Physics searches. A major recent development in modern PDF analyses has been to exploit the wealth of new information contained in precision measurements from LHC Run I, as well as progress in tools and methods to include these data in PDF fits. In this report we summarize the information that PDF-sensitive measurements at the LHC have provided so far, and review the prospects for further constraining PDFs with data from the recently started Run II. This document thus aims to provide useful input to the LHC collaborations for prioritizing their PDF-sensitive measurements at Run II, as well as a comprehensive reference for the PDF-fitting collaborations.
Persistent Deterioration of Functioning (PDF) and change in well-being in older persons.
Jonker, Angèle A; Comijs, Hannie C; Knipscheer, Kees C; Deeg, Dorly J
2008-10-01
It is often assumed that aging is accompanied by diverse and constant functional and cognitive decline, and it is therefore surprising that the well-being of older persons does not appear to decline in the same way. This study investigates longitudinally whether well-being in older persons changes due to Persistent Deterioration of Functioning (PDF). Data were collected in the context of the Longitudinal Aging Study Amsterdam (LASA). The conditions of PDF are persistent decline in cognitive functioning, persistent decline in physical functioning, and an increase in chronic diseases. Measurements of well-being included life satisfaction, positive affect, and valuation of life. T-tests were used to analyse mean difference scores for well-being, and univariate and multivariate regression analyses were performed to examine changes in three well-being outcomes in relation to PDF. Cross-sectional analyses showed significant differences and associations between the two PDF subgroups and non-PDF for well-being at T3. In longitudinal analyses, we found significant decreases in, and associations with, well-being over time in respondents fulfilling one PDF condition (mild PDF). For respondents fulfilling two or more PDF conditions (severe PDF), no significant longitudinal associations were found. The cognitive aspects of well-being (life satisfaction and valuation of life) and its affective element (positive affect) appear to be influenced negatively by mild PDF, whereas well-being does not seem to be diminished in persons with more severe PDF. This may be due to the ability to finally accept the inevitable situation of severe PDF.
Adaptive non-linear control for cancer therapy through a Fokker-Planck observer.
Shakeri, Ehsan; Latif-Shabgahi, Gholamreza; Esmaeili Abharian, Amir
2018-04-01
In recent years, many efforts have been made to present optimal strategies for cancer therapy through the mathematical modelling of tumour-cell population dynamics and optimal control theory. In many cases, the therapy effect is included in the drift term of the stochastic Gompertz model, and the parameters of the therapy function are estimated by fitting the model to empirical data. The reported research works have not presented any algorithm to determine the optimal parameters of the therapy function. In this study, a logarithmic therapy function is entered in the drift term of the Gompertz model. Using the proposed control algorithm, the therapy function parameters are predicted and adaptively adjusted. To control the growth of the tumour-cell population, its moments must be manipulated. This study employs the probability density function (PDF) control approach because of its ability to control all the process moments. A Fokker-Planck-based non-linear stochastic observer is used to determine the PDF of the process. A cost function is defined based on the difference between a predefined desired PDF and the PDF of the tumour-cell population, and using the proposed algorithm the therapy function parameters are adjusted so that this cost function is minimised. The existence of an optimal therapy function is also proved. Numerical results are finally given to demonstrate the effectiveness of the proposed method.
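The underlying plant here, a stochastic Gompertz model with a therapy term in the drift, is easy to simulate directly. The sketch below is only that plant (the Fokker-Planck observer, the PDF-tracking cost and the paper's logarithmic therapy function are all omitted; a constant therapy coefficient u and all parameter values are invented stand-ins):

```python
import random, math

# Euler-Maruyama simulation of a stochastic Gompertz tumour model
#   dN = N*(a - b*ln N) dt - u*N dt + sigma*N dW
# with a constant therapy coefficient u in the drift term.
a, b, sigma = 1.0, 0.2, 0.1
dt, steps = 0.01, 3000

def simulate(u, seed):
    rng = random.Random(seed)
    n = 1.0
    for _ in range(steps):
        drift = n * (a - b * math.log(n)) - u * n
        n += drift * dt + sigma * n * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        n = max(n, 1e-6)          # keep the population positive
    return n

# untreated equilibrium: ln N* = a/b = 5 (N* ~ 148);
# treated (u = 0.5): ln N* = (a-u)/b = 2.5 (N* ~ 12)
untreated = sum(simulate(0.0, s) for s in range(50)) / 50
treated = sum(simulate(0.5, s) for s in range(50)) / 50
```

An adaptive scheme like the paper's would re-estimate the state PDF from such trajectories (via the Fokker-Planck observer) and adjust the therapy parameters online rather than holding u fixed.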
Quasi parton distributions and the gradient flow
Monahan, Christopher; Orginos, Kostas
2017-03-22
We propose a new approach to determining quasi parton distribution functions (PDFs) from lattice quantum chromodynamics. By incorporating the gradient flow, this method guarantees that the lattice quasi PDFs are finite in the continuum limit and evades the thorny, and as yet unresolved, issue of the renormalization of quasi PDFs on the lattice. In the limit that the flow time is much smaller than the length scale set by the nucleon momentum, the moments of the smeared quasi PDF are proportional to those of the light-front PDF. Finally, we use this relation to derive evolution equations for the matching kernel that relates the smeared quasi PDF and the light-front PDF.
NASA Astrophysics Data System (ADS)
Consalvi, Jean-Louis
2017-01-01
The time-averaged Radiative Transfer Equation (RTE) introduces two unclosed terms, known as 'absorption Turbulence-Radiation Interaction (TRI)' and 'emission TRI'. Emission TRI is related to the non-linear coupling between fluctuations of the absorption coefficient and fluctuations of the Planck function, and can be described without introducing any approximation by using a transported PDF method. In this study, a hybrid flamelet/stochastic Eulerian field model is used to solve the transport equation of the one-point, one-time PDF. In this formulation, the steady laminar flamelet model (SLF) is coupled to a joint probability density function (PDF) of mixture fraction, enthalpy defect, scalar dissipation rate, and soot quantities, and the PDF transport equation is solved by using a Stochastic Eulerian Field (SEF) method. Soot production is modeled by a semi-empirical model, and the spectral dependence of the radiatively participating species, namely combustion products and soot, is computed by using a Narrow-Band Correlated-k (NBCK) model. The model is applied to simulate an ethylene/methane turbulent jet flame burning in an oxygen-enriched environment. Model results are compared with the experiments, and the effects of taking emission TRI into account on flame structure, soot production and radiative loss are discussed.
NASA Astrophysics Data System (ADS)
Liu, Lisheng; Zhang, Heyong; Guo, Jin; Zhao, Shuai; Wang, Tingfeng
2012-08-01
In this paper, we report a mathematical derivation of the probability density function (PDF) of the time interval between two successive photoelectrons of a laser heterodyne signal, and confirm the theoretical result by both numerical simulation and experiment. The PDF curve of the beat signal displays a series of fluctuations, whose period and amplitude are determined by the beat frequency and the mixing efficiency, respectively. The beat frequency can therefore be derived from the frequency of these fluctuations once the PDF curve is measured. This frequency-measurement method still works under conditions where the traditional Fast Fourier Transform (FFT) algorithm can hardly recover the correct peak value of the beat frequency, such as detecting an 80 MHz beat signal at a photon count rate of only 8 Mcps (counts per second), which indicates an advantage of the PDF method.
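The period-of-fluctuation relation described above can be illustrated with a small simulation: photon arrivals are drawn from a Poisson process whose rate is modulated at the beat frequency, and the inter-arrival-time PDF then oscillates with period 1/f_beat. The rate model, modulation depth, and detrending below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def simulate_arrivals(r0, m, f_beat, t_max, rng):
    """Photon arrivals from a Poisson process with modulated rate
    r(t) = r0 * (1 + m*cos(2*pi*f_beat*t)), generated by thinning."""
    r_max = r0 * (1.0 + m)
    t = np.sort(rng.uniform(0.0, t_max, rng.poisson(r_max * t_max)))
    keep = rng.uniform(size=t.size) < (1.0 + m * np.cos(2 * np.pi * f_beat * t)) / (1.0 + m)
    return t[keep]

rng = np.random.default_rng(0)
f_beat, r0 = 80e6, 8e6               # 80 MHz beat, 8 Mcps count rate (from the abstract)
arr = simulate_arrivals(r0, 0.8, f_beat, 2e-2, rng)
dt = np.diff(arr)

# Histogram of successive inter-arrival times: the PDF oscillates at f_beat
bins = np.linspace(0.0, 5.0 / f_beat, 401)
hist, edges = np.histogram(dt, bins=bins, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Remove the smooth exponential trend with a one-period moving average,
# then read the fluctuation frequency off the residual's spectrum
w = int(round((1.0 / f_beat) / (centers[1] - centers[0])))
trend = np.convolve(hist, np.ones(w) / w, mode="same")
resid = (hist - trend)[w:-w]          # trim moving-average edge artifacts
spec = np.abs(np.fft.rfft(resid))
freqs = np.fft.rfftfreq(resid.size, d=centers[1] - centers[0])
f_est = freqs[1 + np.argmax(spec[1:])]
```

The estimated fluctuation frequency `f_est` recovers the beat frequency even though the mean photon spacing spans many beat periods.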
Gorelik, Tatiana E.; Billinge, Simon J. L.; Schmidt, Martin U.; ...
2015-04-01
This paper shows for the first time that pair-distribution function analyses can be carried out on organic and organo-metallic compounds from powder electron diffraction data. Different experimental setups are demonstrated, including selected area electron diffraction (SAED) and nanodiffraction in transmission electron microscopy (TEM) or nanodiffraction in scanning transmission electron microscopy (STEM) modes. The methods were demonstrated on organo-metallic complexes (chlorinated and unchlorinated copper-phthalocyanine) and on purely organic compounds (quinacridone). The PDF curves from powder electron diffraction data, called ePDF, are in good agreement with PDF curves determined from X-ray powder data, demonstrating that the problems of obtaining kinematical scattering data and avoiding beam damage of the sample can be resolved.
Vaknin, David; Bu, Wei; Travesset, Alex
2008-07-28
We show that the structure factor S(q) of water can be obtained from x-ray synchrotron experiments at grazing angles of incidence (in reflection mode) by using a liquid surface diffractometer. The corrections used to obtain S(q) self-consistently are described. Applying these corrections to scans at different incident beam angles (above the critical angle) collapses the measured intensities into a single master curve, without fitting parameters, which within a scale factor yields S(q). Performing the measurements below the critical angle for total reflectivity yields the structure factor of the topmost layers of the water/vapor interface. Our results indicate water restructuring at the vapor/water interface. We also introduce a new approach to extract g(r), the pair distribution function (PDF), by expressing the PDF as a linear sum of error functions whose parameters are refined by applying a nonlinear least-squares fit method. This approach enables a straightforward determination of the inherent uncertainties in the PDF. Implications of our results for previous measurements and theoretical predictions of the PDF are also discussed.
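The error-function representation of g(r) described above can be sketched with a nonlinear least-squares fit. The two-step form and every parameter value below are illustrative stand-ins, not the paper's refined model; the covariance returned by the fit gives the parameter uncertainties directly:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def g_model(r, a1, r1, w1, a2, r2, w2):
    """g(r) written as a linear sum of error-function steps (two terms here)."""
    return 1.0 + a1 * erf((r - r1) / w1) + a2 * erf((r - r2) / w2)

# Synthetic "measured" PDF: two erf steps plus noise standing in for data
r = np.linspace(1.0, 8.0, 400)
true = (0.4, 2.8, 0.3, -0.4, 4.5, 0.5)
rng = np.random.default_rng(1)
g_obs = g_model(r, *true) + 0.01 * rng.normal(size=r.size)

# Refine step positions, widths, and amplitudes by nonlinear least squares;
# the diagonal of the covariance matrix yields the inherent uncertainties
popt, pcov = curve_fit(g_model, r, g_obs, p0=(0.3, 3.0, 0.4, -0.3, 4.0, 0.4))
perr = np.sqrt(np.diag(pcov))
```

In a real analysis the number of erf terms would follow from the number of resolved coordination shells.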
Parsons, Tom
2008-01-01
Paleoearthquake observations often lack enough events at a given site to directly define a probability density function (PDF) for earthquake recurrence. Sites with fewer than 10-15 intervals do not provide enough information to reliably determine the shape of the PDF using standard maximum-likelihood techniques [e.g., Ellsworth et al., 1999]. In this paper I present a method that attempts to fit wide ranges of distribution parameters to short paleoseismic series. From repeated Monte Carlo draws, it becomes possible to quantitatively estimate most likely recurrence PDF parameters, and a ranked distribution of parameters is returned that can be used to assess uncertainties in hazard calculations. In tests on short synthetic earthquake series, the method gives results that cluster around the mean of the input distribution, whereas maximum likelihood methods return the sample means [e.g., NIST/SEMATECH, 2006]. For short series (fewer than 10 intervals), sample means tend to reflect the median of an asymmetric recurrence distribution, possibly leading to an overestimate of the hazard should they be used in probability calculations. Therefore a Monte Carlo approach may be useful for assessing recurrence from limited paleoearthquake records. Further, the degree of functional dependence among parameters like mean recurrence interval and coefficient of variation can be established. The method is described for use with time-independent and time-dependent PDFs, and results from 19 paleoseismic sequences on strike-slip faults throughout the state of California are given.
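One plausible reading of the Monte Carlo fitting step is sketched below: candidate (mean recurrence, coefficient of variation) pairs are drawn from wide priors and ranked by their likelihood against a short interval series. The lognormal recurrence model, the prior ranges, and the five synthetic intervals are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(2)
intervals = np.array([110.0, 85.0, 150.0, 95.0, 130.0])  # short synthetic record (years)

# Draw candidate (mean recurrence, coefficient of variation) pairs from wide priors
n_draws = 200_000
mean_c = rng.uniform(50.0, 300.0, n_draws)
cov_c = rng.uniform(0.1, 1.0, n_draws)

# Map (mean, COV) to lognormal parameters (mu, sigma^2)
sigma2 = np.log(1.0 + cov_c**2)
mu = np.log(mean_c) - 0.5 * sigma2

# Log-likelihood of the observed intervals under each candidate lognormal
x = np.log(intervals)
n, s, ss = x.size, x.sum(), (x**2).sum()
ll = -0.5 * n * np.log(2.0 * np.pi * sigma2) - (ss - 2.0 * mu * s + n * mu**2) / (2.0 * sigma2)

# Ranked parameter distribution: the best-scoring slice clusters around the
# most likely recurrence PDF and can feed uncertainty into hazard calculations
top = np.argsort(ll)[-2000:]
best_mean = np.median(mean_c[top])
best_cov = np.median(cov_c[top])
```

The full ranked set `ll[top]`, not just the medians, is what carries the parameter uncertainty into a hazard calculation.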
Spatial homogenization methods for pin-by-pin neutron transport calculations
NASA Astrophysics Data System (ADS)
Kozlowski, Tomasz
For practical reactor core applications low-order transport approximations such as SP3 have been shown to provide sufficient accuracy for both static and transient calculations with considerably less computational expense than the discrete ordinates or full spherical harmonics methods. These methods have been applied in several core simulators where homogenization was performed at the level of the pin cell. One of the principal problems has been to recover the error introduced by pin-cell homogenization. Two basic approaches to treat pin-cell homogenization error have been proposed: Superhomogenization (SPH) factors and Pin-Cell Discontinuity Factors (PDF). These methods are based on the well-established Equivalence Theory and Generalized Equivalence Theory to generate appropriate group constants. They are able to treat all sources of error together, allowing even few-group diffusion with one mesh per cell to reproduce the reference solution. A detailed investigation and consistent comparison of both homogenization techniques showed the potential of the PDF approach to improve the accuracy of core calculations, but also revealed its limitations. In principle, the method is applicable only for the boundary conditions at which it was created, i.e., the boundary conditions considered during the homogenization process, normally zero current. Therefore, there is a need to improve this method, making it more general and environment-independent. The goal of the proposed general homogenization technique is to create a function that correctly predicts the appropriate correction factor with only homogeneous information available, i.e., a function based on the heterogeneous solution that can approximate PDFs using the homogeneous solution. It has been shown that the PDF can be well approximated by a least-squares polynomial fit of the non-dimensional heterogeneous solution and later used for PDF prediction using the homogeneous solution.
This shows promise for PDF prediction at off-reference conditions, such as during reactor transients, which produce conditions that cannot typically be anticipated a priori.
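The least-squares fitting idea in the passage above can be sketched as follows. The single non-dimensional predictor, the quadratic form, and the synthetic training data are all illustrative stand-ins for the actual heterogeneous-solution functionals:

```python
import numpy as np

# Hypothetical training data: a non-dimensional flux-shape predictor x and
# the pin-cell discontinuity factor (PDF) each case produced
rng = np.random.default_rng(6)
x = rng.uniform(0.8, 1.2, 50)                # normalized cell-edge flux ratio
pdf_true = 1.0 + 0.3 * (x - 1.0) - 0.5 * (x - 1.0) ** 2   # synthetic target
y = pdf_true + 0.002 * rng.normal(size=x.size)

# Least-squares polynomial fit of the correction factor
coeffs = np.polyfit(x - 1.0, y, deg=2)

# Predict the correction for a new condition using only homogeneous-solution
# information, as the proposed general homogenization technique requires
x_new = 1.05
pdf_pred = np.polyval(coeffs, x_new - 1.0)
```

The point of the construction is that `pdf_pred` needs no heterogeneous reference calculation at the new condition.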
Specialized minimal PDFs for optimized LHC calculations.
Carrazza, Stefano; Forte, Stefano; Kassabov, Zahari; Rojo, Juan
2016-01-01
We present a methodology for the construction of parton distribution functions (PDFs) designed to provide an accurate representation of PDF uncertainties for specific processes or classes of processes with a minimal number of PDF error sets: specialized minimal PDF sets, or SM-PDFs. We construct these SM-PDFs in such a way that sets corresponding to different input processes can be combined without losing information, specifically as regards their correlations, and that they are robust upon smooth variations of the kinematic cuts. The proposed strategy never discards information, so that the SM-PDF sets can be enlarged by the addition of new processes, until the prior PDF set is eventually recovered for a large enough set of processes. We illustrate the method by producing SM-PDFs tailored to Higgs, top-quark pair, and electroweak gauge boson physics, and we determine that, when the PDF4LHC15 combined set is used as the prior, around 11, 4, and 11 Hessian eigenvectors, respectively, are enough to fully describe the corresponding processes.
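The linear-algebra idea behind compressing many Hessian error sets into a few process-specific directions can be sketched with an SVD on a synthetic sensitivity table. This is only the core reduction step under invented numbers, not the actual SM-PDF construction, which also preserves correlations and checks robustness under kinematic cuts:

```python
import numpy as np

rng = np.random.default_rng(4)
n_eig, n_obs = 30, 12    # hypothetical prior Hessian eigenvectors and observables

# X[i, j]: shift of observable j induced by eigenvector i (synthetic stand-in
# for the PDF-to-observable sensitivity table of a real prior set); here the
# observables genuinely span only ~4 directions in eigenvector space
basis = rng.normal(size=(4, n_obs))
X = rng.normal(size=(n_eig, 4)) @ basis + 0.01 * rng.normal(size=(n_eig, n_obs))

# SVD picks the few linear combinations of eigenvectors that carry essentially
# all of the observables' PDF uncertainty
U, S, Vt = np.linalg.svd(X, full_matrices=False)
frac = np.cumsum(S**2) / np.sum(S**2)
n_keep = int(np.searchsorted(frac, 0.999)) + 1   # minimal reduced set size
```

With the synthetic rank-4 structure, `n_keep` lands far below the 30 prior eigenvectors, mirroring the 11/4/11 reductions quoted in the abstract.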
Hybrid finite-volume/transported PDF method for the simulation of turbulent reactive flows
NASA Astrophysics Data System (ADS)
Raman, Venkatramanan
A novel computational scheme is formulated for simulating turbulent reactive flows in complex geometries with detailed chemical kinetics. A Probability Density Function (PDF) based method that handles the scalar transport equation is coupled with an existing Finite Volume (FV) Reynolds-Averaged Navier-Stokes (RANS) flow solver. The PDF formulation leads to closed chemical source terms and facilitates the use of detailed chemical mechanisms without approximations. The particle-based PDF scheme is modified to handle complex geometries and grid structures. Grid-independent particle evolution schemes that scale linearly with the problem size are implemented in the Monte-Carlo PDF solver. A novel algorithm, in situ adaptive tabulation (ISAT), is employed to ensure tractability of complex chemistry involving a multitude of species. Several non-reacting test cases are performed to ascertain the efficiency and accuracy of the method. Simulation results from a turbulent jet-diffusion flame case are compared against experimental data. The effects of the micromixing model, turbulence model and reaction scheme on flame predictions are discussed extensively. Finally, the method is used to analyze the Dow Chlorination Reactor. Detailed kinetics involving 37 species and 158 reactions as well as a reduced form with 16 species and 21 reactions are used. The effect of inlet configuration on reactor behavior and product distribution is analyzed. Plant-scale reactors exhibit quenching phenomena that cannot be reproduced by conventional simulation methods. The FV-PDF method predicts quenching accurately and provides insight into the dynamics of the reactor near extinction. The accuracy of the fractional time-stepping technique is discussed in the context of apparent multiple steady states observed in a non-premixed feed configuration of the chlorination reactor.
Monte Carlo PDF method for turbulent reacting flow in a jet-stirred reactor
NASA Astrophysics Data System (ADS)
Roekaerts, D.
1992-01-01
A stochastic algorithm for the solution of the modeled scalar probability density function (PDF) transport equation for single-phase turbulent reacting flow is described. Cylindrical symmetry is assumed. The PDF is represented by ensembles of N representative values of the thermochemical variables in each cell of a nonuniform finite-difference grid and operations on these elements representing convection, diffusion, mixing and reaction are derived. A simplified model and solution algorithm which neglects the influence of turbulent fluctuations on mean reaction rates is also described. Both algorithms are applied to a selectivity problem in a real reactor.
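The mixing operation applied to such a particle ensemble can be illustrated with the common IEM (interaction by exchange with the mean) model. The model choice and all constants are illustrative, not necessarily those used in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 5000                           # particle ensemble representing the PDF in one cell
phi = rng.uniform(0.0, 1.0, N)     # scalar samples (e.g. mixture fraction)
mean0 = phi.mean()

C_phi, omega, dt = 2.0, 1.0, 0.01  # mixing constant, turbulence frequency, time step
var0 = phi.var()

for _ in range(200):
    # IEM micromixing: each particle relaxes toward the ensemble mean,
    # which leaves the mean unchanged while destroying scalar variance
    phi -= 0.5 * C_phi * omega * (phi - phi.mean()) * dt

# Variance decays approximately as exp(-C_phi * omega * t)
decay = phi.var() / var0
```

Conservation of the mean and exponential variance decay are the two properties any admissible micromixing operation must reproduce.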
Uncertainty propagation for statistical impact prediction of space debris
NASA Astrophysics Data System (ADS)
Hoogendoorn, R.; Mooij, E.; Geul, J.
2018-01-01
Predictions of the impact time and location of space debris in a decaying trajectory are highly influenced by uncertainties. The traditional Monte Carlo (MC) method can be used to perform accurate statistical impact predictions, but requires a large computational effort. A method is investigated that directly propagates a Probability Density Function (PDF) in time, which has the potential to obtain more accurate results with less computational effort. The decaying trajectory of Delta-K rocket stages was used to test the methods using a six degrees-of-freedom state model. The PDF of the state of the body was propagated in time to obtain impact-time distributions. This Direct PDF Propagation (DPP) method results in a multi-dimensional scattered dataset of the PDF of the state, which is highly challenging to process. No accurate results could be obtained, because of the structure of the DPP data and the high dimensionality. Therefore, the DPP method is less suitable for practical uncontrolled entry problems and the traditional MC method remains superior. Additionally, the MC method was used with two improved uncertainty models to obtain impact-time distributions, which were validated using observations of true impacts. For one of the two uncertainty models, statistically more valid impact-time distributions were obtained than in previous research.
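As a toy illustration of the MC route to an impact-time distribution, a scalar stand-in for the entry problem can be propagated under uncertain initial conditions; the exponential decay model and all numbers below are invented for illustration, not the six-degrees-of-freedom state model of the study:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20_000   # Monte Carlo samples of the uncertain initial state

# Hypothetical scalar entry model: altitude decays exponentially at an
# uncertain drag-driven rate from an uncertain initial altitude
h0 = rng.normal(120e3, 2e3, n)                 # initial altitude [m]
beta = rng.lognormal(np.log(1.2e-4), 0.2, n)   # decay rate [1/s]

# "Impact" when h(t) = h0 * exp(-beta * t) falls below 1 km:
# t_impact = ln(h0 / 1 km) / beta, evaluated per sample
t_impact = np.log(h0 / 1e3) / beta

# The MC impact-time distribution summarized as a 95% window
lo, hi = np.percentile(t_impact, [2.5, 97.5])
```

Each sample is cheap here; in the real problem every sample is a full trajectory integration, which is exactly the computational cost the direct PDF propagation method tried, unsuccessfully, to avoid.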
Lear, Bridget C; Zhang, Luoying; Allada, Ravi
2009-07-01
Discrete clusters of circadian clock neurons temporally organize daily behaviors such as sleep and wake. In Drosophila, a network of just 150 neurons drives two peaks of timed activity in the morning and evening. A subset of these neurons expresses the neuropeptide pigment dispersing factor (PDF), which is important for promoting morning behavior as well as maintaining robust free-running rhythmicity in constant conditions. Yet, how PDF acts on downstream circuits to mediate rhythmic behavior is unknown. Using circuit-directed rescue of PDF receptor mutants, we show that PDF targeting of just approximately 30 non-PDF evening circadian neurons is sufficient to drive morning behavior. This function is not accompanied by large changes in core molecular oscillators in light-dark, indicating that PDF RECEPTOR likely regulates the output of these cells under these conditions. We find that PDF also acts on this focused set of non-PDF neurons to regulate both evening activity phase and period length, consistent with modest resetting effects on core oscillators. PDF likely acts on more distributed pacemaker neuron targets, including the PDF neurons themselves, to regulate rhythmic strength. Here we reveal defining features of the circuit-diagram for PDF peptide function in circadian behavior, revealing the direct neuronal targets of PDF as well as its behavioral functions at those sites. These studies define a key direct output circuit sufficient for multiple PDF dependent behaviors.
Progress in the development of PDF turbulence models for combustion
NASA Technical Reports Server (NTRS)
Hsu, Andrew T.
1991-01-01
A combined Monte Carlo-computational fluid dynamics (CFD) algorithm was developed recently at Lewis Research Center (LeRC) for turbulent reacting flows. In this algorithm, conventional CFD schemes are employed to obtain the velocity field and other velocity-related turbulent quantities, and a Monte Carlo scheme is used to solve the evolution equation for the probability density function (pdf) of species mass fraction and temperature. In combustion computations, the predictions of chemical reaction rates (the source terms in the species conservation equations) are poor if conventional turbulence models are used. The main difficulty lies in the fact that the reaction rate is highly nonlinear, and the use of averaged temperature produces excessively large errors. Moment closure models for the source terms have attained only limited success. The probability density function (pdf) method seems to be the only alternative at the present time that uses local instantaneous values of the temperature, density, etc., in predicting chemical reaction rates, and thus may be the only viable approach for more accurate turbulent combustion calculations. Assumed pdf's are useful in simple problems; however, for more general combustion problems, the solution of an evolution equation for the pdf is necessary.
Global QCD Analysis of the Nucleon Tensor Charge with Lattice QCD Constraints
NASA Astrophysics Data System (ADS)
Shows, Harvey, III; Melnitchouk, Wally; Sato, Nobuo
2017-09-01
By studying the parton distribution functions (PDFs) of a nucleon, we probe the partonic scale of nature, exploring what it means to be a nucleon. In this study, we are interested in the transversity PDF, the least studied of the three collinear PDFs. By conducting a global analysis of experimental data from semi-inclusive deep inelastic scattering (SIDIS), as well as single-inclusive e+e- annihilation (SIA), we extract the fit parameters needed to describe the transverse momentum dependent (TMD) transversity PDF, as well as the Collins fragmentation function. Once the collinear transversity PDF is obtained by integrating the extracted TMD PDF, we wish to resolve discrepancies between lattice QCD calculations and phenomenological extractions of the tensor charge from data. Here we show our results for the transversity distribution and tensor charge. Using our method of iterative Monte Carlo, we now have a more robust understanding of the transversity PDF. With these results we are able to progress in our understanding of TMD PDFs, as well as attest to the efficacy of current lattice QCD calculations. This work is made possible through support from NSF award 1659177 to Old Dominion University.
Recent progress in the joint velocity-scalar PDF method
NASA Technical Reports Server (NTRS)
Anand, M. S.
1995-01-01
This viewgraph presentation discusses joint velocity-scalar PDF method; turbulent combustion modeling issues for gas turbine combustors; PDF calculations for a recirculating flow; stochastic dissipation model; joint PDF calculations for swirling flows; spray calculations; reduced kinetics/manifold methods; parallel processing; and joint PDF focus areas.
Cai, Jing; Read, Paul W; Altes, Talissa A; Molloy, Janelle A; Brookeman, James R; Sheng, Ke
2007-01-21
Treatment planning based on the probability distribution function (PDF) of patient geometries has been shown to be a potential off-line strategy to incorporate organ motion, but the application of such an approach depends highly upon the reproducibility of the PDF. In this paper, we investigated the dependence of PDF reproducibility on the imaging acquisition parameters, specifically the scan time and the frame rate. Three healthy subjects underwent a continuous 5 min magnetic resonance (MR) scan in the sagittal plane with a frame rate of approximately 10 frames s-1, and the experiments were repeated at an interval of 2 to 3 weeks. A total of nine pulmonary vessels from different lung regions (upper, middle and lower) were tracked, and the reproducibility of their displacement PDFs was evaluated as a function of scan time and frame rate. As a result, the PDF reproducibility error decreased with prolonged scans and appeared to approach an equilibrium state in subjects 2 and 3 within the 5 min scan. The PDF accuracy increased as a power function of frame rate; however, the PDF reproducibility showed less sensitivity to frame rate, presumably because the randomness of breathing dominates the effects. As the key component of PDF-based treatment planning, the reproducibility of the PDF affects the dosimetric accuracy substantially. This study provides a reference for acquiring MR-based PDFs of structures in the lung.
Improved Modeling of Finite-Rate Turbulent Combustion Processes in Research Combustors
NASA Technical Reports Server (NTRS)
VanOverbeke, Thomas J.
1998-01-01
The objective of this thesis is to further develop and test a stochastic model of turbulent combustion in recirculating flows. There is a requirement to increase the accuracy of multi-dimensional combustion predictions. As turbulence affects reaction rates, this interaction must be more accurately evaluated. In this work a more physically correct way of handling the effect of turbulence on combustion is further developed and tested. As turbulence involves randomness, stochastic modeling is used. Averaged values such as temperature and species concentration are found by integrating the probability density function (pdf) over the range of the scalar. The model in this work does not assume the pdf type, but solves for the evolution of the pdf using the Monte Carlo solution technique. The model is further developed by including a more robust reaction solver, using accurate thermodynamics, and employing more accurate transport elements. The stochastic method is used with the Semi-Implicit Method for Pressure-Linked Equations (SIMPLE), which solves for velocity, pressure, turbulent kinetic energy and dissipation, while the pdf solver solves for temperature and species concentration. Thus, the method is partially familiar to combustor engineers. The method is compared to benchmark experimental data and baseline calculations. The baseline method was tested on isothermal flows, evaporating sprays and combusting sprays. Pdf and baseline predictions were performed for three diffusion flames and one premixed flame. The pdf method predicted lower combustion rates than the baseline method, in agreement with the data, except for the premixed flame, where the baseline and stochastic predictions bounded the experimental data. The use of a continuous mixing model or a relax-to-mean mixing model had little effect on the prediction of average temperature. Two grids were used in a hydrogen diffusion flame simulation. 
Grid density did not affect the predictions except for peak temperature and tangential velocity. The hybrid pdf method took longer and required more memory, but has a theoretical basis for extension to many reaction steps, which cannot be said of current turbulent combustion models.
A biomechanical model for fibril recruitment: Evaluation in tendons and arteries.
Bevan, Tim; Merabet, Nadege; Hornsby, Jack; Watton, Paul N; Thompson, Mark S
2018-06-06
Simulations of soft tissue mechanobiological behaviour are increasingly important for clinical prediction of aneurysm, tendinopathy and other disorders. Mechanical behaviour at low stretches is governed by fibril straightening, transitioning into load-bearing at the recruitment stretch, resulting in a tissue stiffening effect. Previous investigations have suggested theoretical relationships between stress-stretch measurements and the recruitment probability density function (PDF) but have neither derived these rigorously nor evaluated them experimentally. Other work has proposed image-based methods for measurement of recruitment but made use of arbitrary fibril critical straightness parameters. The aim of this work was to provide a sound theoretical basis for estimating the recruitment PDF from stress-stretch measurements and to evaluate this relationship using image-based methods, clearly motivating the choice of fibril critical straightness parameter in rat tail tendon and porcine artery. Rigorous derivation showed that the recruitment PDF may be estimated from the second stretch derivative of the first Piola-Kirchhoff tissue stress. Image-based fibril recruitment identified the fibril straightness parameter that maximised the Pearson correlation coefficient (PCC) with the estimated PDFs. Using these critical straightness parameters, the new method for estimating the recruitment PDF showed a PCC with image-based measures of 0.915 and 0.933 for tendons and arteries, respectively. This method may be used for accurate estimation of the fibril recruitment PDF in mechanobiological simulation, where fibril-level mechanical parameters are important for predicting cell behaviour.
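The central relationship, a recruitment PDF proportional to the second stretch-derivative of the first Piola-Kirchhoff stress, can be checked numerically under a simple linear-fibril assumption. The Gaussian recruitment distribution and unit fibril stiffness below are illustrative, not the paper's measured quantities:

```python
import numpy as np

# Assumed fibril model: a fibril is slack until its recruitment stretch lam_r,
# then carries stress k*(lam - lam_r).  Tissue stress is
#   P(lam) = k * Integral[ rho(lam_r) * (lam - lam_r), lam_r <= lam ]
# so d2P/dlam2 = k * rho(lam): the recruitment PDF is (up to stiffness k)
# the second stretch-derivative of the stress.
lam = np.linspace(1.0, 1.2, 2001)
dlam = lam[1] - lam[0]
mu, sd, k = 1.08, 0.02, 1.0        # illustrative recruitment PDF parameters
rho = np.exp(-0.5 * ((lam - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

# Forward model: cumulative sums evaluate the recruited-fibril stress integral
c1 = np.cumsum(rho) * dlam
c2 = np.cumsum(rho * lam) * dlam
P = k * (lam * c1 - c2)

# Inverse estimate: rho ~ (1/k) * d2P/dlam2 by repeated central differences
rho_est = np.gradient(np.gradient(P, dlam), dlam) / k
```

Away from the grid edges the numerically differentiated stress recovers the assumed recruitment PDF, which is the consistency the paper's derivation guarantees.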
NASA Astrophysics Data System (ADS)
He, Jiansen; Wang, Yin; Pei, Zhongtian; Zhang, Lei; Tu, Chuanyi
2017-04-01
The energy transfer rate of turbulence is not uniform everywhere but is suggested to follow a certain distribution, e.g., a lognormal distribution (Kolmogorov 1962). The inhomogeneous transfer rate leads to the emergence of intermittency, which may be identified with some parameter, e.g., normalized partial variance increments (PVI) (Greco et al., 2009). Large PVIs of magnetic field fluctuations are found to have a temperature distribution with median and mean values higher than those for small PVI levels (Osman et al., 2012). However, there is a large overlap between the temperature distributions associated with smaller and larger PVIs. So it is recognized that PVI alone cannot fully determine the temperature, since a one-to-one mapping relationship does not exist. One may be curious about the reason for the considerable overlap of the conditional temperature distributions for different levels of PVI. Usually, plasma with higher temperature is speculated to have been heated more, with more dissipation of turbulence energy corresponding to a larger energy cascade rate, if the temperature fluctuation of the eigen wave mode is not taken into account. To explore the statistical relationship between turbulence cascading and the plasma thermal state, we aim to study and reveal, for the first time, the conditional probability distribution function of the energy transfer rate under different levels of PVI, PDF(ɛ|PVI), and compare it with the conditional probability distribution function of temperature. The conditional probability distribution function PDF(ɛ|PVI) is derived from PDF(PVI|ɛ)·PDF(ɛ)/PDF(PVI) according to the Bayesian theorem. PDF(PVI) can be obtained directly from the data. PDF(ɛ) is derived from a conjugate-gradient inversion of PDF(PVI) by assuming, reasonably, that PDF(δB|σ) is a Gaussian distribution, where PVI = |δB|/σ and σ ∝ (ɛℓ)^{1/3}. PDF(ɛ) can also be acquired by fitting PDF(δB) with the integral ∫PDF(δB|σ)PDF(σ)dσ.
As a result, PDF(ɛ|PVI) is found to shift to a higher median value of ɛ with increasing PVI, but with a significant overlap of the PDFs for different PVIs. Therefore, PDF(ɛ|PVI) is similar to PDF(T|PVI) in the sense of slow migration with increasing PVI. A detailed comparison between these two conditional PDFs is also performed.
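The Bayesian inversion used above, PDF(ɛ|PVI) = PDF(PVI|ɛ)·PDF(ɛ)/PDF(PVI), can be sketched on a discrete grid. The lognormal prior and the σ ∝ ɛ^{1/3} scaling follow the abstract; the half-normal likelihood and all parameter values are illustrative assumptions:

```python
import numpy as np

# Grid of energy transfer rates (arbitrary units) and its integration weights
eps = np.logspace(-2, 2, 400)
d_eps = np.gradient(eps)

# Assumed lognormal PDF(eps), following the refined-similarity picture
mu, s = 0.0, 1.0
p_eps = np.exp(-0.5 * ((np.log(eps) - mu) / s) ** 2) / (eps * s * np.sqrt(2 * np.pi))

# Assumed PDF(PVI|eps): delta-B is Gaussian with sigma ∝ eps**(1/3), so
# PVI = |delta B| / sigma_rms is half-normal with scale sigma/sigma_rms
sigma = eps ** (1.0 / 3.0)
sigma_rms = np.sqrt(np.sum(sigma**2 * p_eps * d_eps))

def p_pvi_given_eps(pvi):
    sc = sigma / sigma_rms
    return np.sqrt(2.0 / np.pi) / sc * np.exp(-0.5 * (pvi / sc) ** 2)

def p_eps_given_pvi(pvi):
    # Bayes: posterior ∝ likelihood * prior; normalization supplies PDF(PVI)
    post = p_pvi_given_eps(pvi) * p_eps
    return post / np.sum(post * d_eps)

def median_eps(pvi):
    cdf = np.cumsum(p_eps_given_pvi(pvi) * d_eps)
    return eps[np.searchsorted(cdf, 0.5)]
```

Evaluating `median_eps` at small and large PVI reproduces the qualitative finding: the median ɛ migrates upward with PVI while the posteriors overlap broadly.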
NASA Astrophysics Data System (ADS)
Zhang, Hongda; Han, Chao; Ye, Taohong; Ren, Zhuyin
2016-03-01
A method of chemistry tabulation combined with a presumed probability density function (PDF) is applied to simulate piloted premixed jet burner flames with high Karlovitz number using large eddy simulation. Thermo-chemistry states are tabulated by the combination of an auto-ignition and an extended auto-ignition model. To evaluate the capability of the proposed tabulation method to represent the thermo-chemistry states under different fresh-gas temperatures, an a priori study is conducted by performing idealised transient one-dimensional premixed flame simulations. A presumed PDF is used to account for the interaction of turbulence and flame, with a beta PDF modelling the distribution of the reaction progress variable. Two presumed PDF models, a Dirichlet distribution and independent beta distributions, respectively, are applied to represent the interaction between the two mixture fractions associated with the three inlet streams. Comparisons of statistical results show that the two presumed PDF models for the two mixture fractions are both capable of predicting temperature and major species profiles; however, they are shown to have a significant effect on the predictions of intermediate species. An analysis of the thermo-chemical state-space representation of the sub-grid scale (SGS) combustion model is performed by comparing correlations between the carbon monoxide mass fraction and temperature. The SGS combustion model based on the proposed chemistry tabulation can reasonably capture the peak value and trend of intermediate species. Aspects regarding model extensions to adequately predict the peak location of intermediate species are discussed.
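The presumed beta-PDF averaging at the heart of such models can be sketched as follows. The Arrhenius-like rate expression and the moment values are illustrative, and `scipy.stats.beta.expect` performs the integration:

```python
import numpy as np
from scipy.stats import beta as beta_dist

def presumed_beta_mean(f, c_mean, c_var):
    """Average f(c) over a presumed beta PDF with the given mean and variance."""
    g = c_mean * (1.0 - c_mean) / c_var - 1.0   # requires realizable variance (g > 0)
    a, b = c_mean * g, (1.0 - c_mean) * g
    return beta_dist.expect(f, args=(a, b))

# Nonlinear Arrhenius-like rate: its beta-PDF average generally differs from
# the rate evaluated at the mean, which is the closure the presumed PDF supplies
rate = lambda c: c * (1.0 - c) * np.exp(-2.0 / (0.2 + 0.8 * c))
mean_of_rate = presumed_beta_mean(rate, 0.5, 0.05)
rate_of_mean = rate(0.5)
```

In a tabulated-chemistry LES, this average is precomputed over a grid of (mean, variance) pairs and stored alongside the thermo-chemistry table rather than evaluated at run time.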
An Overview of the NCC Spray/Monte-Carlo-PDF Computations
NASA Technical Reports Server (NTRS)
Raju, M. S.; Liu, Nan-Suey (Technical Monitor)
2000-01-01
This paper advances the state of the art in spray computations with some of our recent contributions involving the scalar Monte Carlo PDF (Probability Density Function) method, unstructured grids and parallel computing. It provides a complete overview of the scalar Monte Carlo PDF and Lagrangian spray computer codes developed for application with unstructured grids and parallel computing. Detailed comparisons for the case of a reacting non-swirling spray clearly highlight the important role that chemistry/turbulence interactions play in the modeling of reacting sprays. The results from the PDF and non-PDF methods were found to be markedly different, and the PDF solution is closer to the reported experimental data. The PDF computations predict that some of the combustion occurs in a predominantly premixed-flame environment and the rest in a predominantly diffusion-flame environment. However, the non-PDF solution wrongly predicts that the combustion occurs in a vaporization-controlled regime. Near the premixed flame, the Monte Carlo particle temperature distribution shows two distinct peaks: one centered around the flame temperature and the other around the surrounding-gas temperature. Near the diffusion flame, the Monte Carlo particle temperature distribution shows a single peak. In both cases, the computed PDF's shape and strength are found to vary substantially depending upon the proximity to the flame surface. The results bring to the fore some of the deficiencies associated with the use of assumed-shape PDF methods in spray computations. Finally, we end the paper by demonstrating the computational viability of the present solution procedure for use in 3D combustor calculations by summarizing the results of a 3D test case with periodic boundary conditions. For the 3D case, the parallel performance of all three solvers (CFD, PDF, and spray) was found to be good when the computations were performed on a 24-processor SGI Origin workstation.
A multivariate quadrature based moment method for LES based modeling of supersonic combustion
NASA Astrophysics Data System (ADS)
Donde, Pratik; Koo, Heeseok; Raman, Venkat
2012-07-01
The transported probability density function (PDF) approach is a powerful technique for large eddy simulation (LES) based modeling of scramjet combustors. In this approach, a high-dimensional transport equation for the joint composition-enthalpy PDF needs to be solved. Quadrature based approaches provide deterministic Eulerian methods for solving the joint-PDF transport equation. In this work, it is first demonstrated that the numerical errors associated with LES require special care in the development of PDF solution algorithms. The direct quadrature method of moments (DQMOM) is one quadrature-based approach developed for supersonic combustion modeling. This approach is shown to generate inconsistent evolution of the scalar moments. Further, gradient-based source terms that appear in the DQMOM transport equations are severely underpredicted in LES leading to artificial mixing of fuel and oxidizer. To overcome these numerical issues, a semi-discrete quadrature method of moments (SeQMOM) is formulated. The performance of the new technique is compared with the DQMOM approach in canonical flow configurations as well as a three-dimensional supersonic cavity stabilized flame configuration. The SeQMOM approach is shown to predict subfilter statistics accurately compared to the DQMOM approach.
Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows
NASA Technical Reports Server (NTRS)
Givi, P.; Frankel, S. H.; Adumitroaie, V.; Sabini, G.; Madnia, C. K.
1993-01-01
The primary objective of this research is to extend current capabilities of Large Eddy Simulations (LES) and Direct Numerical Simulations (DNS) for the computational analyses of high speed reacting flows. Our efforts in the first two years of this research have been concentrated on a priori investigations of single-point Probability Density Function (PDF) methods for providing subgrid closures in reacting turbulent flows. In the efforts initiated in the third year, our primary focus has been on performing actual LES by means of PDF methods. The approach is based on assumed PDF methods and we have performed extensive analysis of turbulent reacting flows by means of LES. This includes simulations of both three-dimensional (3D) isotropic compressible flows and two-dimensional reacting planar mixing layers. In addition to these LES analyses, some work is in progress to assess the extent of validity of our assumed PDF methods. This assessment is done by making detailed comparisons with recent laboratory data in predicting the rate of reactant conversion in parallel reacting shear flows. This report provides a summary of our achievements for the first six months of the third year of this program.
Seluzicki, Adam; Flourakis, Matthieu; Kula-Eversole, Elzbieta; Zhang, Luoying; Kilman, Valerie; Allada, Ravi
2014-03-01
Molecular circadian clocks are interconnected via neural networks. In Drosophila, PIGMENT-DISPERSING FACTOR (PDF) acts as a master network regulator with dual functions in synchronizing molecular oscillations between disparate PDF(+) and PDF(-) circadian pacemaker neurons and controlling pacemaker neuron output. Yet the mechanisms by which PDF functions remain unclear. We demonstrate that genetic inhibition of protein kinase A (PKA) in PDF(-) clock neurons can phenocopy PDF mutants while activated PKA can partially rescue PDF receptor mutants. PKA subunit transcripts are also under clock control in non-PDF DN1p neurons. To address the core clock target of PDF, we rescued per in PDF neurons of arrhythmic per⁰¹ mutants. PDF neuron rescue induced high amplitude rhythms in the clock component TIMELESS (TIM) in per-less DN1p neurons. Complete loss of PDF or PKA inhibition also results in reduced TIM levels in non-PDF neurons of per⁰¹ flies. To address how PDF impacts pacemaker neuron output, we focally applied PDF to DN1p neurons and found that it acutely depolarizes and increases firing rates of DN1p neurons. Surprisingly, these effects are reduced in the presence of an adenylate cyclase inhibitor, yet persist in the presence of PKA inhibition. We have provided evidence for a signaling mechanism (PKA) and a molecular target (TIM) by which PDF resets and synchronizes clocks, and demonstrated an acute direct excitatory effect of PDF on target neurons to control neuronal output. The identification of TIM as a target of PDF signaling suggests it is a multimodal integrator of cell autonomous clock, environmental light, and neural network signaling. Moreover, these data reveal a bifurcation of PKA-dependent clock effects and PKA-independent output effects. Taken together, our results provide a molecular and cellular basis for the dual functions of PDF in clock resetting and pacemaker output.
A Lagrangian mixing frequency model for transported PDF modeling
NASA Astrophysics Data System (ADS)
Turkeri, Hasret; Zhao, Xinyu
2017-11-01
In this study, a Lagrangian mixing frequency model is proposed for molecular mixing models within the framework of transported probability density function (PDF) methods. The model is based on the dissipations of mixture fraction and progress variables obtained from Lagrangian particles in PDF methods. The new model is proposed as a remedy to the difficulty in choosing the optimal model constant parameters when using conventional mixing frequency models. The model is implemented in combination with the interaction-by-exchange-with-the-mean (IEM) mixing model. The performance of the new model is examined by performing simulations of Sandia Flame D and the turbulent premixed flame from the Cambridge stratified flame series. The simulations are performed using the pdfFOAM solver, an LES/PDF solver developed entirely in OpenFOAM. A 16-species reduced mechanism is used to represent methane/air combustion, and in situ adaptive tabulation is employed to accelerate the finite-rate chemistry calculations. The results are compared with experimental measurements as well as with the results obtained using conventional mixing frequency models. Dynamic mixing frequencies are predicted using the new model without solving additional transport equations, and good agreement with experimental data is observed.
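The IEM model mentioned above relaxes each notional particle's composition toward the ensemble mean at a rate set by the mixing frequency. A minimal particle-level sketch (with a constant, hand-picked mixing frequency rather than the Lagrangian model proposed in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def iem_step(phi, omega, dt, c_phi=2.0):
    """One IEM step: dphi/dt = -(c_phi/2) * omega * (phi - <phi>).
    The ensemble mean is conserved exactly; the variance decays at
    rate c_phi * omega."""
    return phi - 0.5 * c_phi * omega * (phi - phi.mean()) * dt

phi = rng.normal(0.5, 0.1, 100_000)    # particle mixture-fraction samples
mean0, var0 = phi.mean(), phi.var()
for _ in range(1000):                   # integrate to t = 1 with dt = 1e-3
    phi = iem_step(phi, omega=2.0, dt=1e-3)
decay = phi.var() / var0                # ~ exp(-c_phi * omega * t) = exp(-4)
```

The fixed omega here is exactly the modeling weakness the paper addresses: its Lagrangian model instead derives the frequency from particle-level scalar dissipation.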
Soot and Spectral Radiation Modeling for a High-Pressure Turbulent Spray Flame
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferreryo-Fernandez, Sebastian; Paul, Chandan; Sircar, Arpan
Simulations are performed of a transient high-pressure turbulent n-dodecane spray flame under engine-relevant conditions. An unsteady RANS formulation is used, with detailed chemistry, a semi-empirical two-equation soot model, and a particle-based transported composition probability density function (PDF) method to account for unresolved turbulent fluctuations in composition and temperature. Results from the PDF model are compared with those from a locally well-stirred reactor (WSR) model to quantify the effects of turbulence-chemistry-soot interactions. Computed liquid and vapor penetration versus time, ignition delay, and flame lift-off height are in good agreement with experiment, and relatively small differences are seen between the WSR and PDF models for these global quantities. Computed soot levels and spatial soot distributions from the WSR and PDF models show large differences, with PDF results being in better agreement with experimental measurements. An uncoupled photon Monte Carlo method with line-by-line spectral resolution is used to compute the spectral intensity distribution of the radiation leaving the flame. This provides new insight into the relative importance of molecular gas radiation versus soot radiation, and the importance of turbulent fluctuations on radiative heat transfer.
Non-Gaussianity in a quasiclassical electronic circuit
NASA Astrophysics Data System (ADS)
Suzuki, Takafumi J.; Hayakawa, Hisao
2017-05-01
We study the non-Gaussian dynamics of a quasiclassical electronic circuit coupled to a mesoscopic conductor. Non-Gaussian noise accompanying the nonequilibrium transport through the conductor significantly modifies the stationary probability density function (PDF) of the flux in the dissipative circuit. We incorporate weak quantum fluctuation of the dissipative LC circuit with a stochastic method and evaluate the quantum correction of the stationary PDF. Furthermore, an inverse formula to infer the statistical properties of the non-Gaussian noise from the stationary PDF is derived in the classical-quantum crossover regime. The quantum correction is indispensable to correctly estimate the microscopic transfer events in the QPC with the quasiclassical inverse formula.
Estimate of uncertainties in polarized parton distributions
NASA Astrophysics Data System (ADS)
Miyama, M.; Goto, Y.; Hirai, M.; Kobayashi, H.; Kumano, S.; Morii, T.; Saito, N.; Shibata, T.-A.; Yamanishi, T.
2001-10-01
From a χ² analysis of polarized deep inelastic scattering data, we determined polarized parton distribution functions (Y. Goto et al. (AAC), Phys. Rev. D 62, 034017 (2000)). In order to clarify the reliability of the obtained distributions, we must estimate their uncertainties. In this talk, we discuss the pol-PDF uncertainties using a Hessian method. The Hessian matrix H_ij is given by the second derivatives of χ², and the error matrix ε_ij is defined as its inverse. Using the error matrix, we calculate the error of a function F by (δF)² = Σ_{i,j} (∂F/∂a_i) ε_ij (∂F/∂a_j), where the a_i are the parameters of the χ² analysis. Using this method, we show the uncertainties of the pol-PDFs, the structure functions g_1, and the spin asymmetries A_1. Furthermore, we show the role of future experiments such as RHIC-Spin. An important purpose of planned experiments in the near future is to determine the polarized gluon distribution function Δg(x) in detail. We reanalyze the pol-PDF uncertainties including gluon fake data of the precision expected from the upcoming experiments. From this analysis, we discuss how much the uncertainties of Δg(x) can be improved by such measurements.
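The Hessian error propagation can be sketched in a few lines; the toy χ² below is illustrative only, not the AAC fit, and the error-matrix convention follows the abstract (ε = H⁻¹).

```python
import numpy as np

def propagated_error(grad_f, hessian):
    """Hessian-method uncertainty: (dF)^2 = grad_F . eps . grad_F,
    with the error matrix eps taken as the inverse Hessian of chi^2
    (the convention stated in the abstract)."""
    eps = np.linalg.inv(hessian)
    return float(np.sqrt(grad_f @ eps @ grad_f))

# toy chi^2 = sum ((a_i - a_best)/sigma_i)^2  =>  H = diag(2/sigma_i^2)
sigma = np.array([0.1, 0.2])
hessian = np.diag(2.0 / sigma**2)
grad_f = np.array([1.0, 1.0])          # F = a_1 + a_2
df = propagated_error(grad_f, hessian)
# df = sqrt(sigma_1^2/2 + sigma_2^2/2)
```

In a real fit the Hessian is dense, so the off-diagonal elements of ε carry the parameter correlations that make this propagation nontrivial.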
Li, Ming Xin; Liu, Jun Feng; Lu, Jian Da; Zhu, Ying; Kuang, Ding Wei; Xiang, Jian Bing; Sun, Peng; Wang, Wei; Xue, Jun; Gu, Yong; Hao, Chuan Ming
2016-12-01
The object of this study is to explore whether plasma diafiltration (PDF) is more effective in improving intestinal mucosal barrier function by removing more of the key large-molecular inflammatory mediators, thereby prolonging survival time. In total, 24 porcine sepsis models induced by cecal ligation and puncture (CLP) were randomly divided into three groups: a PDF group, a high-volume hemofiltration (HVHF) group, and a control group, each receiving 8 h of treatment. The expression of ZO-1 and occludin in intestinal mucosal epithelial cells was detected by immunohistochemistry, and apoptotic caspase-3-positive lymphocytes in mesenteric lymph nodes were identified by TUNEL staining. Hemodynamic parameters were measured by invasive hemodynamic monitoring. Tumor necrosis factor alpha (TNF-α) and high-mobility group protein 1 (HMGB1) were measured by ELISA. Survival curves for all-cause death were then compared among the three groups. PDF led to a superior reversal of sepsis-related hemodynamic impairment and serum biochemistry abnormalities, and resulted in longer survival time compared with HVHF and control (p < 0.01). Definitive protection from the excessive TNF-α and HMGB1 response was achieved only by PDF. A more regular distribution pattern of ZO-1 and occludin along the epithelium was found in PDF animals (p < 0.01). The presence of apoptotic lymphocytes was significantly reduced in the PDF animals (p < 0.01). PDF can effectively eliminate more of the pivotal inflammatory mediators TNF-α and HMGB1, reduce inflammatory damage to the intestinal mucosal barrier and lymphocyte apoptosis, and thereby improve circulatory function and prolong survival time.
Characterization of microscopic deformation through two-point spatial correlation functions
NASA Astrophysics Data System (ADS)
Huang, Guan-Rong; Wu, Bin; Wang, Yangyang; Chen, Wei-Ren
2018-01-01
The molecular rearrangements of most fluids under flow and deformation do not directly follow the macroscopic strain field. In this work, we describe a phenomenological method for characterizing such nonaffine deformation via the anisotropic pair distribution function (PDF). We demonstrate how the microscopic strain can be calculated in both simple shear and uniaxial extension, by perturbation expansion of anisotropic PDF in terms of real spherical harmonics. Our results, given in the real as well as the reciprocal space, can be applied in spectrum analysis of small-angle scattering experiments and nonequilibrium molecular dynamics simulations of soft matter under flow.
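The spherical-harmonic expansion of the anisotropic PDF can be illustrated by estimating the leading uniaxial coefficient from sampled pair vectors. This is a hedged sketch with synthetic data, not the authors' perturbation expansion; the function name is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def y20_mean(vectors):
    """Sample average of the real spherical harmonic Y_2^0 over pair-vector
    orientations: zero for an isotropic PDF, positive when the
    distribution is stretched along z (uniaxial extension)."""
    r = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    cos_t = r[:, 2]
    return float(np.mean(np.sqrt(5.0 / (16.0 * np.pi)) * (3.0 * cos_t**2 - 1.0)))

iso = rng.standard_normal((200_000, 3))        # isotropic orientations
stretched = iso * np.array([1.0, 1.0, 2.0])    # affine stretch along z
c_iso, c_ext = y20_mean(iso), y20_mean(stretched)
# c_iso ~ 0 by symmetry; c_ext > 0 reflects the induced anisotropy
```

Higher harmonics (and the radial dependence of each coefficient) carry the rest of the nonaffine deformation signal discussed in the abstract.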
Probability density function approach for compressible turbulent reacting flows
NASA Technical Reports Server (NTRS)
Hsu, A. T.; Tsai, Y.-L. P.; Raju, M. S.
1994-01-01
The objective of the present work is to extend the probability density function (PDF) turbulence model to compressible reacting flows. The probability density functions of the species mass fractions and enthalpy are obtained by solving a PDF evolution equation using a Monte Carlo scheme. The PDF solution procedure is coupled with a compressible finite-volume flow solver which provides the velocity and pressure fields. A modeled PDF equation for compressible flows, capable of treating flows with shock waves and suitable to the present coupling scheme, is proposed and tested. Convergence of the combined finite-volume Monte Carlo solution procedure is discussed. Two supersonic diffusion flames are studied using the proposed PDF model and the results are compared with experimental data; marked improvements over solutions without PDF are observed.
Stochastic Forecasting of Algae Blooms in Lakes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Peng; Tartakovsky, Daniel M.; Tartakovsky, Alexandre M.
We consider the development of harmful algae blooms (HABs) in a lake with uncertain nutrient inflow. Two general frameworks, the Fokker-Planck equation and the PDF method, are developed to quantify the resulting concentration uncertainty of various algae groups, by deriving a deterministic equation for their joint probability density function (PDF). A computational example is examined to study the evolution of cyanobacteria (the blue-green algae) and the impacts of initial concentration and inflow-outflow ratio.
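The Monte Carlo counterpart of such a Fokker-Planck description can be sketched by sampling a stochastic logistic growth model with noisy inflow; all rates and coefficients below are made-up illustrative values, not the paper's calibrated model.

```python
import numpy as np

rng = np.random.default_rng(2)

def evolve_concentration(n_paths=50_000, t_end=5.0, dt=1e-2,
                         growth_rate=1.0, capacity=1.0, noise=0.2):
    """Euler-Maruyama sampling of dc = r*c*(1 - c/K) dt + noise*c dW.
    The histogram of the samples approximates the PDF that the
    corresponding Fokker-Planck equation evolves deterministically."""
    c = np.full(n_paths, 0.1)
    for _ in range(int(t_end / dt)):
        drift = growth_rate * c * (1.0 - c / capacity)
        kick = noise * c * np.sqrt(dt) * rng.standard_normal(n_paths)
        c = np.clip(c + drift * dt + kick, 0.0, None)
    pdf, edges = np.histogram(c, bins=60, range=(0.0, 2.0), density=True)
    return c, pdf, edges

c, pdf, edges = evolve_concentration()
# samples cluster near the carrying capacity; the histogram integrates to 1
```

The PDF-method route in the abstract solves for this density directly instead of estimating it from sample paths.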
Prill, Dragica; Juhás, Pavol; Billinge, Simon J L; Schmidt, Martin U
2016-01-01
A method towards the solution and refinement of organic crystal structures by fitting to the atomic pair distribution function (PDF) is developed. Approximate lattice parameters and molecular geometry must be given as input. The molecule is generally treated as a rigid body. The positions and orientations of the molecules inside the unit cell are optimized starting from random values. The PDF is obtained from carefully measured X-ray powder diffraction data. The method resembles `real-space' methods for structure solution from powder data, but works with PDF data instead of the diffraction pattern itself. As such it may be used in situations where the organic compounds are not long-range-ordered, are poorly crystalline, or nanocrystalline. The procedure was applied to solve and refine the crystal structures of quinacridone (β phase), naphthalene and allopurinol. In the case of allopurinol it was even possible to successfully solve and refine the structure in P1 with four independent molecules. As an example of a flexible molecule, the crystal structure of paracetamol was refined using restraints for bond lengths, bond angles and selected torsion angles. In all cases, the resulting structures are in excellent agreement with structures from single-crystal data.
A PDF projection method: A pressure algorithm for stand-alone transported PDFs
NASA Astrophysics Data System (ADS)
Ghorbani, Asghar; Steinhilber, Gerd; Markus, Detlev; Maas, Ulrich
2015-03-01
In this paper, a new formulation of the projection approach is introduced for stand-alone probability density function (PDF) methods. The method is suitable for applications in low-Mach number transient turbulent reacting flows. The method is based on a fractional step method in which first the advection-diffusion-reaction equations are modelled and solved within a particle-based PDF method to predict an intermediate velocity field. Then the mean velocity field is projected onto a space where the continuity for the mean velocity is satisfied. In this approach, a Poisson equation is solved on the Eulerian grid to obtain the mean pressure field. Then the mean pressure is interpolated at the location of each stochastic Lagrangian particle. The formulation of the Poisson equation avoids the time derivatives of the density (due to convection) as well as second-order spatial derivatives. This in turn eliminates the major sources of instability in the presence of stochastic noise that are inherent in particle-based PDF methods. The convergence of the algorithm (in the non-turbulent case) is investigated first by the method of manufactured solutions. Then the algorithm is applied to a one-dimensional turbulent premixed flame in order to assess the accuracy and convergence of the method in the case of turbulent combustion. As a part of this work, we also apply the algorithm to a more realistic flow, namely a transient turbulent reacting jet, in order to assess the performance of the method.
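The projection step itself, removing the divergent part of the intermediate mean velocity via a pressure Poisson solve, can be sketched on a periodic grid. A spectral solve is used here for brevity; the paper works on an Eulerian grid with interpolation of the mean pressure to particle locations.

```python
import numpy as np

def project(u, v, dx):
    """Fractional-step projection on a doubly periodic grid: solve the
    pressure Poisson equation lap(p) = div(u*) in Fourier space, then
    subtract grad(p) so the corrected field is divergence-free."""
    n = u.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    div_h = 1j * kx * uh + 1j * ky * vh     # spectral divergence of u*
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                          # protect the k = 0 (mean) mode
    ph = -div_h / k2                        # Poisson solve: -k^2 ph = div_h
    ph[0, 0] = 0.0
    uh -= 1j * kx * ph                      # u = u* - grad(p), mode by mode
    vh -= 1j * ky * ph
    return np.fft.ifft2(uh).real, np.fft.ifft2(vh).real

rng = np.random.default_rng(3)
n, dx = 65, 1.0 / 65                        # odd n avoids the Nyquist mode
u_star = rng.standard_normal((n, n))        # intermediate "velocity" field
v_star = rng.standard_normal((n, n))
u_new, v_new = project(u_star, v_star, dx)
```

In the particle context the same idea applies to the mean field only; the stochastic fluctuations about the mean are untouched by the projection.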
Probability and Cumulative Density Function Methods for the Stochastic Advection-Reaction Equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barajas-Solano, David A.; Tartakovsky, Alexandre M.
We present a cumulative density function (CDF) method for the probabilistic analysis of d-dimensional advection-dominated reactive transport in heterogeneous media. We employ a probabilistic approach in which epistemic uncertainty on the spatial heterogeneity of Darcy-scale transport coefficients is modeled in terms of random fields with given correlation structures. Our proposed CDF method employs a modified Large-Eddy-Diffusivity (LED) approach to close and localize the nonlocal equations governing the one-point PDF and CDF of the concentration field, resulting in a (d + 1)-dimensional PDE. Compared to the classical LED localization, the proposed modified LED localization explicitly accounts for the mean-field advective dynamics over the phase space of the PDF and CDF. To illustrate the accuracy of the proposed closure, we apply our CDF method to one-dimensional single-species reactive transport with uncertain, heterogeneous advection velocities and reaction rates modeled as random fields.
PDF investigations of turbulent non-premixed jet flames with thin reaction zones
NASA Astrophysics Data System (ADS)
Wang, Haifeng; Pope, Stephen
2012-11-01
PDF (probability density function) modeling studies are carried out for the Sydney piloted jet flames. These Sydney flames feature much thinner reaction zones in the mixture fraction space compared to those in the well-studied Sandia piloted jet flames. The performance of the different turbulent combustion models in the Sydney flames with thin reaction zones has not been examined extensively before, and this work aims at evaluating the capability of the PDF method to represent the thin turbulent flame structures in the Sydney piloted flames. Parametric and sensitivity PDF studies are performed with respect to the different models and model parameters. A global error parameter is defined to quantify the departure of the simulation results from the experimental data, and is used to assess the performance of the different sets of models and model parameters.
The present state and future directions of PDF methods
NASA Technical Reports Server (NTRS)
Pope, S. B.
1992-01-01
The objectives of the workshop are presented in viewgraph format, as is this entire article. The objectives are to discuss the present status and the future direction of various levels of engineering turbulence modeling related to Computational Fluid Dynamics (CFD) computations for propulsion; to emphasize that combustion is an essential part of propulsion; and to discuss Probability Density Function (PDF) methods for turbulent combustion. Essential to the integration of turbulent combustion models is the development of turbulence models, chemical kinetics, and numerical methods. Some turbulent combustion models typically used in industry are the k-epsilon turbulence model, equilibrium/mixing-limited combustion, and finite volume codes.
Zhai, Yihui; Bloch, Jacek; Hömme, Meike; Schaefer, Julia; Hackert, Thilo; Philippin, Bärbel; Schwenger, Vedat; Schaefer, Franz; Schmitt, Claus P
2012-07-01
Biocompatible peritoneal dialysis fluids (PDF) are buffered with lactate and/or bicarbonate. We hypothesized that the reduced toxicity of the biocompatible solutions might unmask specific effects of the buffer type on mesothelial cell functions. Human peritoneal mesothelial cells (HPMC) were incubated with bicarbonate (B-)PDF or lactate-buffered (L-)PDF followed by messenger RNA (mRNA) and protein analysis. Gene silencing was achieved using small interfering RNA (siRNA), functional studies using Transwell culture systems, and monolayer wound-healing assays. Incubation with B-PDF increased HPMC migration in the Transwell and monolayer wound-healing assay to 245 ± 99 and 137 ± 11% compared with L-PDF. Gene silencing showed this effect to be entirely dependent on the expression of aquaporin-1 (AQP-1) and independent of AQP-3. Exposure of HPMC to B-PDF increased AQP-1 mRNA and protein abundance to 209 ± 80 and 197 ± 60% of medium control; the effect was pH dependent. L-PDF reduced AQP-1 mRNA. Addition of bicarbonate to L-PDF increased AQP-1 abundance by threefold; mRNA half-life remained unchanged. Immunocytochemistry confirmed opposite changes of AQP-1 cell-membrane abundance with B-PDF and L-PDF. Peritoneal mesothelial AQP-1 abundance and migration capacity is regulated by pH and buffer agents used in PD solutions. In vivo studies are required to delineate the impact with respect to long-term peritoneal membrane integrity and function.
Herman, Benjamin R; Gross, Barry; Moshary, Fred; Ahmed, Samir
2008-04-01
We investigate the assessment of uncertainty in the inference of aerosol size distributions from backscatter and extinction measurements that can be obtained from a modern elastic/Raman lidar system with a Nd:YAG laser transmitter. To calculate the uncertainty, an analytic formula for the correlated probability density function (PDF) describing the error for an optical coefficient ratio is derived based on a normally distributed fractional error in the optical coefficients. Assuming a monomodal lognormal particle size distribution of spherical, homogeneous particles with a known index of refraction, we compare the assessment of uncertainty using a more conventional forward Monte Carlo method with that obtained from a Bayesian posterior PDF assuming a uniform prior PDF and show that substantial differences between the two methods exist. In addition, we use the posterior PDF formalism, which was extended to include an unknown refractive index, to find credible sets for a variety of optical measurement scenarios. We find the uncertainty is greatly reduced with the addition of suitable extinction measurements in contrast to the inclusion of extra backscatter coefficients, which we show to have a minimal effect and strengthens similar observations based on numerical regularization methods.
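The forward Monte Carlo baseline in such an analysis is straightforward to sketch: sample each optical coefficient with a normally distributed fractional error and examine the resulting ratio samples. The values below are illustrative, and the errors are taken independent, whereas the paper derives the correlated analytic PDF.

```python
import numpy as np

rng = np.random.default_rng(4)

def ratio_samples(beta, alpha, frac_err=0.1, n=200_000):
    """Forward Monte Carlo for the PDF of an optical-coefficient ratio
    beta/alpha, with independent normal fractional errors on each
    coefficient."""
    b = beta * (1.0 + frac_err * rng.standard_normal(n))
    a = alpha * (1.0 + frac_err * rng.standard_normal(n))
    return b / a

r = ratio_samples(2.0, 1.0)
# the ratio PDF is skewed: its mean sits slightly above beta/alpha = 2
```

That skewness (the mean exceeding the nominal ratio) is one reason forward Monte Carlo and Bayesian posterior assessments can disagree, as the abstract reports.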
DOE Office of Scientific and Technical Information (OSTI.GOV)
Metz, Peter; Koch, Robert; Cladek, Bernadette
Ion-exchanged Aurivillius materials form perovskite nanosheet booklets wherein well-defined bi-periodic sheets, with ~11.5 Å thickness, exhibit extensive stacking disorder. The perovskite layer contents were defined initially using combined synchrotron X-ray and neutron Rietveld refinement of the parent Aurivillius structure. The structure of the subsequently ion-exchanged material, which is disordered in its stacking sequence, is analyzed using both pair distribution function (PDF) analysis and recursive method simulations of the scattered intensity. Combined X-ray and neutron PDF refinement of supercell stacking models demonstrates sensitivity of the PDF to both perpendicular and transverse stacking vector components. Further, hierarchical ensembles of stacking models weightedmore » by a standard normal distribution are demonstrated to improve PDF fit over 1–25 Å. Recursive method simulations of the X-ray scattering profile demonstrate agreement between the real space stacking analysis and more conventional reciprocal space methods. The local structure of the perovskite sheet is demonstrated to relax only slightly from the Aurivillius structure after ion exchange.« less
Representation of Probability Density Functions from Orbit Determination using the Particle Filter
NASA Technical Reports Server (NTRS)
Mashiku, Alinda K.; Garrison, James; Carpenter, J. Russell
2012-01-01
Statistical orbit determination enables us to obtain estimates of the state and the statistical information of its region of uncertainty. In order to obtain an accurate representation of the probability density function (PDF) that incorporates higher order statistical information, we propose the use of nonlinear estimation methods such as the Particle Filter. The Particle Filter (PF) is capable of providing a PDF representation of the state estimates whose accuracy is dependent on the number of particles or samples used. For this method to be applicable to real case scenarios, we need a way of accurately representing the PDF in a compressed manner with little information loss. Hence we propose using Independent Component Analysis (ICA) as a non-Gaussian dimensional reduction method that is capable of maintaining higher order statistical information obtained using the PF. Methods such as Principal Component Analysis (PCA) are based on utilizing up to second-order statistics, hence will not suffice in maintaining maximum information content. Both the PCA and the ICA are applied to two scenarios that involve a highly eccentric orbit with a lower a priori uncertainty covariance and a less eccentric orbit with a higher a priori uncertainty covariance, to illustrate the capability of the ICA in relation to the PCA.
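The PCA baseline against which ICA is compared can be sketched with an SVD; the synthetic low-rank samples below stand in for particle-filter state samples and are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def pca_compress(samples, n_keep):
    """Project samples onto the n_keep leading principal components.
    PCA uses only second-order statistics, which is exactly the
    limitation the abstract raises against it."""
    mean = samples.mean(axis=0)
    centered = samples - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_keep]                 # (n_keep, dim) orthonormal rows
    coeffs = centered @ basis.T         # compressed representation
    return mean, basis, coeffs

# toy "orbit state" samples lying near a 2-D subspace of R^6
latent = rng.standard_normal((5000, 2))
mixing = rng.standard_normal((2, 6))
samples = latent @ mixing + 0.01 * rng.standard_normal((5000, 6))
mean, basis, coeffs = pca_compress(samples, n_keep=2)
recon = mean + coeffs @ basis
max_err = float(np.abs(recon - samples).max())   # small: variance captured
```

ICA would instead seek statistically independent (generally non-orthogonal) directions, preserving the higher-order structure that PCA discards.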
Rodriguez, Alberto; Vasquez, Louella J; Römer, Rudolf A
2009-03-13
The probability density function (PDF) for critical wave function amplitudes is studied in the three-dimensional Anderson model. We present a formal expression between the PDF and the multifractal spectrum f(α) in which the role of finite-size corrections is properly analyzed. We show the non-Gaussian nature and the existence of a symmetry relation in the PDF. From the PDF, we extract information about f(α) at criticality such as the presence of negative fractal dimensions and the possible existence of termination points. A PDF-based multifractal analysis is shown to be a valid alternative to the standard approach based on the scaling of inverse participation ratios.
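The inverse participation ratios mentioned at the end are simple to compute; a sketch contrasting an extended and a localized state (idealized states, not the Anderson-model critical wave functions themselves):

```python
import numpy as np

def ipr(psi):
    """Inverse participation ratio P_2 = sum_i |psi_i|^4 for a normalized
    state: ~1/N for a uniformly extended state, ~1 for a state localized
    on a single site."""
    p = np.abs(psi) ** 2
    return float(np.sum(p ** 2))

n = 1024
extended = np.full(n, 1.0 / np.sqrt(n))    # uniform amplitude everywhere
localized = np.zeros(n)
localized[0] = 1.0                          # all weight on one site
# ipr(extended) = 1/1024, ipr(localized) = 1.0
```

The standard multifractal analysis studies how such moments scale with system size; the PDF-based analysis in the abstract extracts f(α) from the amplitude distribution instead.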
External intermittency prediction using AMR solutions of RANS turbulence and transported PDF models
NASA Astrophysics Data System (ADS)
Olivieri, D. A.; Fairweather, M.; Falle, S. A. E. G.
2011-12-01
External intermittency in turbulent round jets is predicted using a Reynolds-averaged Navier-Stokes modelling approach coupled to solutions of the transported probability density function (pdf) equation for scalar variables. Solutions to the descriptive equations are obtained using a finite-volume method, combined with an adaptive mesh refinement algorithm, applied in both physical and compositional space. This method contrasts with conventional approaches to solving the transported pdf equation which generally employ Monte Carlo techniques. Intermittency-modified eddy viscosity and second-moment turbulence closures are used to accommodate the effects of intermittency on the flow field, with the influence of intermittency also included, through modifications to the mixing model, in the transported pdf equation. Predictions of the overall model are compared with experimental data on the velocity and scalar fields in a round jet, as well as against measurements of intermittency profiles and scalar pdfs in a number of flows, with good agreement obtained. For the cases considered, predictions based on the second-moment turbulence closure are clearly superior, although both turbulence models give realistic predictions of the bimodal scalar pdfs observed experimentally.
PDF approach for compressible turbulent reacting flows
NASA Technical Reports Server (NTRS)
Hsu, A. T.; Tsai, Y.-L. P.; Raju, M. S.
1993-01-01
The objective of the present work is to develop a probability density function (pdf) turbulence model for compressible reacting flows for use with a CFD flow solver. The probability density functions of the species mass fractions and enthalpy are obtained by solving a pdf evolution equation using a Monte Carlo scheme. The pdf solution procedure is coupled with a compressible CFD flow solver which provides the velocity and pressure fields. A modeled pdf equation for compressible flows, capable of capturing shock waves and suitable to the present coupling scheme, is proposed and tested. Convergence of the combined finite-volume Monte Carlo solution procedure is discussed, and an averaging procedure is developed to provide smooth Monte Carlo solutions and ensure convergence. Two supersonic diffusion flames are studied using the proposed pdf model and the results are compared with experimental data; marked improvements over CFD solutions without pdf are observed. Preliminary applications of pdf to 3D flows are also reported.
Klose, Markus; Duvall, Laura; Li, Weihua; Liang, Xitong; Ren, Chi; Steinbach, Joe Henry; Taghert, Paul H
2016-05-18
The neuropeptide PDF promotes the normal sequencing of circadian behavioral rhythms in Drosophila, but its signaling mechanisms are not well understood. We report daily rhythmicity in responsiveness to PDF in critical pacemakers called small LNvs. There is a daily change in potency, as great as 10-fold higher, around dawn. The rhythm persists in constant darkness and does not require endogenous ligand (PDF) signaling or rhythmic receptor gene transcription. Furthermore, rhythmic responsiveness reflects the properties of the pacemaker cell type, not the receptor. Dopamine responsiveness also cycles, in phase with that of PDF, in the same pacemakers, but does not cycle in large LNv. The activity of RalA GTPase in s-LNv regulates PDF responsiveness and behavioral locomotor rhythms. Additionally, cell-autonomous PDF signaling reversed the circadian behavioral effects of lowered RalA activity. Thus, RalA activity confers high PDF responsiveness, providing a daily gate around the dawn hours to promote functional PDF signaling. Copyright © 2016 Elsevier Inc. All rights reserved.
Galaxy clustering with photometric surveys using PDF redshift information
Asorey, J.; Carrasco Kind, M.; Sevilla-Noarbe, I.; ...
2016-03-28
Here, photometric surveys produce large-area maps of the galaxy distribution, but with less accurate redshift information than is obtained from spectroscopic methods. Modern photometric redshift (photo-z) algorithms use galaxy magnitudes, or colors, obtained through multi-band imaging to produce a probability density function (PDF) for each galaxy in the map. We used simulated data to study the effect of using different photo-z estimators to assign galaxies to redshift bins in order to compare their effects on angular clustering and galaxy bias measurements. We found that if we use the entire PDF, rather than a single-point (mean or mode) estimate, the deviations are less biased, especially when using narrow redshift bins. When the redshift bin widths are Δz = 0.1, the use of the entire PDF reduces the typical measurement bias from 5%, when using single-point estimates, to 3%.
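The binning comparison described in the abstract above can be sketched as follows. This is a toy illustration, not the paper's pipeline: each galaxy's photo-z PDF is modeled as a Gaussian on a grid (an assumption made purely for illustration), and galaxies are assigned to a Δz = 0.1 tomographic bin either by a single-point estimate or by their full PDF mass in the bin.

```python
import numpy as np

# Toy comparison of single-point vs full-PDF redshift-bin assignment.
# The Gaussian photo-z PDFs and all parameters below are illustrative only.

rng = np.random.default_rng(0)
z_grid = np.linspace(0.0, 2.0, 401)
bin_lo, bin_hi = 0.5, 0.6                 # a Delta z = 0.1 tomographic bin

def photoz_pdf(mu, sigma=0.05):
    """Discrete photo-z PDF on the grid, normalized to sum to 1."""
    p = np.exp(-0.5 * ((z_grid - mu) / sigma) ** 2)
    return p / p.sum()

mu = rng.uniform(0.4, 0.7, size=1000)     # photo-z means straddling the bin edges

# Single-point assignment: a galaxy counts fully if its mean lies in the bin.
n_point = int(np.sum((mu >= bin_lo) & (mu < bin_hi)))

# Full-PDF assignment: each galaxy contributes its probability mass in the bin.
in_bin = (z_grid >= bin_lo) & (z_grid < bin_hi)
n_pdf = float(sum(photoz_pdf(m)[in_bin].sum() for m in mu))
```

With the full PDF, galaxies near the bin edges contribute fractionally instead of all-or-nothing, which is the mechanism behind the reduced bias reported above.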
Large eddy simulations and direct numerical simulations of high speed turbulent reacting flows
NASA Technical Reports Server (NTRS)
Givi, P.; Madnia, C. K.; Steinberger, C. J.; Frankel, S. H.
1992-01-01
The basic objective of this research is to extend the capabilities of Large Eddy Simulations (LES) and Direct Numerical Simulations (DNS) for the computational analysis of high speed reacting flows. In the efforts related to LES, we were primarily involved with assessing the performance of modern Probability Density Function (PDF) methods for providing closures to treat the subgrid fluctuation correlations of scalar quantities in reacting turbulent flows. In the work on DNS, we concentrated on understanding some of the relevant physics of compressible reacting flows by means of statistical analysis of the data generated by DNS of such flows. In the research conducted in the second year of this program, our efforts focused on the modeling of homogeneous compressible turbulent flows by PDF methods, and on DNS of non-equilibrium reacting high speed mixing layers. Some preliminary work is also in progress on PDF modeling of shear flows, and on LES of such flows.
Zito, Gianluigi; Rusciano, Giulia; Sasso, Antonio
2016-08-07
Suitable metal nanostructures may induce surface-enhanced Raman scattering (SERS) enhancement factors (EFs) large enough to reach single-molecule sensitivity. However, the gap hot-spot EF probability density function (PDF) has the character of a long-tail distribution, which dramatically undermines the reproducibility of SERS experiments. Herein, we carry out electrodynamic calculations based on a 3D finite element method for two plasmonic nanostructures, combined with Monte Carlo simulations of the EF statistics under different external conditions. We compare the PDF produced by a homodimer of nanoparticles with that provided by a self-similar trimer. We show that the PDF is sensitive to the spatial distribution of near-field enhancement specifically supported by the nanostructure geometry. Breaking the symmetry of the plasmonic system induces particular modulations of the PDF tail resembling a multiple Poisson distribution. We also study the influence that molecular diffusion towards the hottest hot-spot, or selective hot-spot targeting, might have on the EF PDF. Our results quantitatively assess the possibility of designing the response of a SERS substrate so as to contain the intrinsic EF PDF variance and, in principle, significantly improve the reproducibility of SERS experiments.
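The long-tail character of the EF PDF mentioned above can be illustrated with a minimal Monte Carlo sketch (this is not the paper's electrodynamic calculation): molecules land uniformly on a surface whose enhancement decays away from a hot-spot, and the resulting EF statistics show a mean dominated by rare hot-spot events, far above the median. The decay law and all numbers are invented for illustration.

```python
import numpy as np

# Illustrative Monte Carlo of long-tail EF statistics: uniform molecule
# positions, assumed exponential near-field decay away from a hot-spot.

rng = np.random.default_rng(11)
n = 200000
d = rng.uniform(0.0, 1.0, n)          # distance of each molecule from the hot-spot
EF = 1e8 * np.exp(-d / 0.05)          # assumed peak EF 1e8, decay length 0.05

# Heavy tail: a few molecules near the hot-spot dominate the mean,
# so the mean EF greatly exceeds the median EF.
mean_ef = EF.mean()
median_ef = np.median(EF)
```

The mean/median gap of several orders of magnitude is the signature of the long-tail PDF that limits SERS reproducibility.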
A consistent transported PDF model for treating differential molecular diffusion
NASA Astrophysics Data System (ADS)
Wang, Haifeng; Zhang, Pei
2016-11-01
Differential molecular diffusion is a fundamentally significant phenomenon in all multi-component turbulent reacting or non-reacting flows; it is caused by the different rates of molecular diffusion of energy and of species concentrations. In the transported probability density function (PDF) method, differential molecular diffusion can be treated by using a mean drift model developed by McDermott and Pope. This model correctly accounts for differential molecular diffusion in the scalar mean transport and yields a correct DNS limit of the scalar variance production. The model, however, misses the molecular diffusion term in the scalar variance transport equation, which yields an inconsistent prediction of the scalar variance in the transported PDF method. In this work, a new model that yields a consistent scalar variance prediction is introduced to remedy this problem. The model formulation along with its numerical implementation is discussed, and the model is validated on a turbulent mixing layer problem.
A surrogate accelerated multicanonical Monte Carlo method for uncertainty quantification
NASA Astrophysics Data System (ADS)
Wu, Keyi; Li, Jinglai
2016-09-01
In this work we consider a class of uncertainty quantification problems where the system performance or reliability is characterized by a scalar parameter y. The performance parameter y is random due to the presence of various sources of uncertainty in the system, and our goal is to estimate the probability density function (PDF) of y. We propose to use the multicanonical Monte Carlo (MMC) method, a special type of adaptive importance sampling algorithm, to compute the PDF of interest. Moreover, we develop an adaptive algorithm to construct local Gaussian process surrogates to further accelerate the MMC iterations. With numerical examples we demonstrate that the proposed method can achieve several orders of magnitude of speedup over standard Monte Carlo methods.
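The multicanonical idea behind the abstract above can be sketched minimally (this is not the authors' surrogate-accelerated algorithm): a Metropolis sampler is iteratively reweighted so that the histogram of y = g(x) flattens, improving coverage of rare values, and the PDF is then recovered by reweighting. The performance function g, the standard-normal prior, and all tuning constants are invented for illustration.

```python
import numpy as np

# Minimal multicanonical Monte Carlo sketch for the PDF of y = g(x).
# Sampling target is p(x)/theta(bin(y)); theta is updated each iteration
# to flatten the y-histogram, then undone when estimating the PDF.

rng = np.random.default_rng(1)
g = lambda x: x[0] ** 2 + x[1] ** 2           # scalar performance parameter (assumed)
edges = np.linspace(0.0, 25.0, 26)            # 25 bins covering y
theta = np.ones(25)                           # multicanonical bias weights

def bin_of(y):
    return int(np.clip(np.searchsorted(edges, y) - 1, 0, 24))

x = np.zeros(2)
pdf_est = np.zeros(25)
for it in range(5):                           # MMC iterations
    hist = np.zeros(25)
    for _ in range(10000):                    # Metropolis on the biased target
        prop = x + rng.normal(scale=0.5, size=2)
        logr = (-0.5 * prop @ prop + 0.5 * x @ x
                + np.log(theta[bin_of(g(x))]) - np.log(theta[bin_of(g(prop))]))
        if np.log(rng.uniform()) < logr:
            x = prop
        hist[bin_of(g(x))] += 1
    pdf_est = hist * theta                    # reweighted estimate of bin probabilities
    theta *= np.maximum(hist, 1.0)            # flatten visited bins for the next round
    theta /= theta.sum()

pdf_est /= pdf_est.sum()                      # normalized PDF of y over the bins
```

Because the flattened sampler keeps visiting large-y bins, the tail of the PDF is resolved far better than with the same budget of plain Monte Carlo draws.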
Regional statistics in confined two-dimensional decaying turbulence.
Házi, Gábor; Tóth, Gábor
2011-06-28
Two-dimensional decaying turbulence in a square container has been simulated using the lattice Boltzmann method. The probability density function (PDF) of the vorticity and the particle distribution functions have been determined at various regions of the domain. It is shown that, after the initial stage of decay, the regional area-averaged enstrophy fluctuates strongly around a mean value in time. The ratio of the regional mean and the overall enstrophies increases monotonically with increasing distance from the wall. This function shows a similar shape to the axial mean velocity profile of turbulent channel flows. The PDF of the vorticity peaks at zero and is nearly symmetric considering the statistics in the overall domain. Approaching the wall, the PDFs become skewed owing to the boundary layer.
nCTEQ15 - Global analysis of nuclear parton distributions with uncertainties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kusina, A.; Kovarik, Karol; Jezo, T.
2015-09-01
We present the first official release of the nCTEQ nuclear parton distribution functions with errors. The main addition to the previous nCTEQ PDFs is the introduction of PDF uncertainties based on the Hessian method. Another important addition is the inclusion of pion production data from RHIC that give us a handle on constraining the gluon PDF. This contribution summarizes our results from arXiv:1509.00792 and concentrates on the comparison with other groups providing nuclear parton distributions.
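The Hessian PDF uncertainties mentioned above are conventionally evaluated with the standard symmetric "master formula" over eigenvector pairs, ΔX = ½ √(Σᵢ (X(Sᵢ⁺) − X(Sᵢ⁻))²). A minimal sketch of that formula follows; the observable values are invented purely to exercise it.

```python
import numpy as np

# Standard symmetric Hessian master formula for PDF uncertainties:
# DeltaX = 1/2 * sqrt( sum_i (X(S_i+) - X(S_i-))^2 )
# where S_i+/S_i- are the paired eigenvector error-set members.

def hessian_uncertainty(x_plus, x_minus):
    d = np.asarray(x_plus) - np.asarray(x_minus)
    return 0.5 * np.sqrt(np.sum(d ** 2))

x_plus = [1.02, 0.99, 1.05]    # observable on "+" eigenvector members (toy values)
x_minus = [0.98, 1.01, 0.95]   # observable on "-" eigenvector members (toy values)
delta_x = hessian_uncertainty(x_plus, x_minus)
```

Each eigenvector pair contributes one squared difference, so the uncertainty grows with the number of poorly constrained directions in parameter space.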
Modeling of turbulent chemical reaction
NASA Technical Reports Server (NTRS)
Chen, J.-Y.
1995-01-01
Viewgraphs are presented on modeling turbulent reacting flows, regimes of turbulent combustion, regimes of premixed and regimes of non-premixed turbulent combustion, chemical closure models, flamelet model, conditional moment closure (CMC), NO(x) emissions from turbulent H2 jet flames, probability density function (PDF), departures from chemical equilibrium, mixing models for PDF methods, comparison of predicted and measured H2O mass fractions in turbulent nonpremixed jet flames, experimental evidence of preferential diffusion in turbulent jet flames, and computation of turbulent reacting flows.
NASA Astrophysics Data System (ADS)
Rings, Joerg; Vrugt, Jasper A.; Schoups, Gerrit; Huisman, Johan A.; Vereecken, Harry
2012-05-01
Bayesian model averaging (BMA) is a standard method for combining predictive distributions from different models. In recent years, this method has enjoyed widespread application and use in many fields of study to improve the spread-skill relationship of forecast ensembles. The BMA predictive probability density function (pdf) of any quantity of interest is a weighted average of pdfs centered around the individual (possibly bias-corrected) forecasts, where the weights are equal to posterior probabilities of the models generating the forecasts, and reflect the individual models' skill over a training (calibration) period. The original BMA approach presented by Raftery et al. (2005) assumes that the conditional pdf of each individual model is adequately described with a rather standard Gaussian or Gamma statistical distribution, possibly with a heteroscedastic variance. Here we analyze the advantages of using BMA with a flexible representation of the conditional pdf. A joint particle filtering and Gaussian mixture modeling framework is presented to derive analytically, as closely and consistently as possible, the evolving forecast density (conditional pdf) of each constituent ensemble member. The median forecasts and evolving conditional pdfs of the constituent models are subsequently combined using BMA to derive one overall predictive distribution. This paper introduces the theory and concepts of this new ensemble postprocessing method, and demonstrates its usefulness and applicability by numerical simulation of the rainfall-runoff transformation using discharge data from three different catchments in the contiguous United States. The revised BMA method achieves significantly lower prediction errors than the original default BMA method (due to filtering), with predictive uncertainty intervals that are substantially smaller but still statistically coherent (due to the use of a time-variant conditional pdf).
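The core BMA combination step described above can be sketched with Gaussian conditional pdfs, as in the original Raftery et al. formulation. The forecasts, posterior weights, and spreads below are invented for illustration.

```python
import numpy as np

# BMA predictive pdf: weighted mixture of pdfs centered on the individual
# (bias-corrected) forecasts, with weights = posterior model probabilities.

forecasts = np.array([2.1, 1.8, 2.6])   # model forecasts (toy values)
weights = np.array([0.5, 0.3, 0.2])     # posterior model probabilities, sum to 1
sigmas = np.array([0.4, 0.5, 0.6])      # conditional spread of each model

def bma_pdf(y):
    """Gaussian-mixture BMA predictive density evaluated at y."""
    comps = np.exp(-0.5 * ((y - forecasts) / sigmas) ** 2) \
            / (sigmas * np.sqrt(2.0 * np.pi))
    return float(np.sum(weights * comps))

y = np.linspace(-2.0, 6.0, 2001)
dens = np.array([bma_pdf(v) for v in y])
dy = y[1] - y[0]
total = dens.sum() * dy                  # ~1: a proper predictive density
pred_mean = (dens * y).sum() * dy        # = weighted mean of the forecasts
```

The paper's contribution replaces each Gaussian component here with a flexible, particle-filter-derived conditional pdf; the mixture-combination step itself is unchanged.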
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamez-Mendoza, Liliana; Terban, Maxwell W.; Billinge, Simon J. L.
The particle size of supported catalysts is a key characteristic for determining structure–property relationships. It is a challenge to obtain this information accurately and in situ using crystallographic methods owing to the small size of such particles (<5 nm) and the fact that they are supported. In this work, the pair distribution function (PDF) technique was used to obtain the particle size distribution of supported Pt catalysts as they grow under typical synthesis conditions. The PDF of Pt nanoparticles grown on zeolite X was isolated and refined using two models: a monodisperse spherical model (single particle size) and a lognormal size distribution. The results were compared and validated using scanning transmission electron microscopy (STEM) results. Both models describe the same trends in average particle size with temperature, but the results of the number-weighted lognormal size distributions can also accurately describe the mean size and the width of the size distributions obtained from STEM. Since the PDF yields crystallite sizes, these results suggest that the grown Pt nanoparticles are monocrystalline. This work shows that refinement of the PDF of small supported monocrystalline nanoparticles can yield accurate mean particle sizes and distributions.
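The monodisperse spherical model mentioned above rests on the standard spherical characteristic (envelope) function that damps the bulk PDF for a particle of diameter D; refining D against the damping of the measured PDF is how the single particle size is extracted (the lognormal model integrates this envelope over a size distribution). A sketch of that standard function:

```python
import numpy as np

# Standard spherical-envelope function for PDF particle-size analysis:
# f(r, D) = 1 - (3/2)(r/D) + (1/2)(r/D)^3 for r < D, and 0 for r >= D.
# It multiplies the bulk PDF, cutting off pair correlations beyond D.

def sphere_envelope(r, D):
    r = np.asarray(r, dtype=float)
    f = 1.0 - 1.5 * (r / D) + 0.5 * (r / D) ** 3
    return np.where(r < D, f, 0.0)

r = np.linspace(0.0, 60.0, 7)
envelope = sphere_envelope(r, 40.0)   # decays from 1 at r = 0 to 0 at r = D
```

Because the envelope vanishes at r = D, the distance where PDF peaks die out directly encodes the crystallite diameter.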
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prill, Dragica; Juhas, Pavol; Billinge, Simon J. L.
2016-01-01
In this study, a method towards the solution and refinement of organic crystal structures by fitting to the atomic pair distribution function (PDF) is developed. Approximate lattice parameters and molecular geometry must be given as input. The molecule is generally treated as a rigid body. The positions and orientations of the molecules inside the unit cell are optimized starting from random values. The PDF is obtained from carefully measured X-ray powder diffraction data. The method resembles 'real-space' methods for structure solution from powder data, but works with PDF data instead of the diffraction pattern itself. As such it may be used in situations where the organic compounds are not long-range-ordered, are poorly crystalline, or nanocrystalline. The procedure was applied to solve and refine the crystal structures of quinacridone (β phase), naphthalene and allopurinol. In the case of allopurinol it was even possible to successfully solve and refine the structure in P1 with four independent molecules. As an example of a flexible molecule, the crystal structure of paracetamol was refined using restraints for bond lengths, bond angles and selected torsion angles. In all cases, the resulting structures are in excellent agreement with structures from single-crystal data.
A Model of Self-Monitoring Blood Glucose Measurement Error.
Vettoretti, Martina; Facchinetti, Andrea; Sparacino, Giovanni; Cobelli, Claudio
2017-07-01
A reliable model of the probability density function (PDF) of self-monitoring of blood glucose (SMBG) measurement error would be important for several applications in diabetes, like testing in silico insulin therapies. In the literature, the PDF of SMBG error is usually described by a Gaussian function, whose symmetry and simplicity are unable to properly describe the variability of experimental data. Here, we propose a new methodology to derive more realistic models of the SMBG error PDF. The blood glucose range is divided into zones where the error (absolute or relative) presents a constant standard deviation (SD). In each zone, a suitable PDF model is fitted by maximum likelihood to experimental data. Model validation is performed by goodness-of-fit tests. The method is tested on two databases collected by the One Touch Ultra 2 (OTU2; Lifescan Inc, Milpitas, CA) and the Bayer Contour Next USB (BCN; Bayer HealthCare LLC, Diabetes Care, Whippany, NJ). In both cases, skew-normal and exponential models are used to describe the distribution of errors and outliers, respectively. Two zones were identified: zone 1 with constant-SD absolute error, and zone 2 with constant-SD relative error. Goodness-of-fit tests confirmed that the identified PDF models are valid and superior to the Gaussian models used so far in the literature. The proposed methodology makes it possible to derive realistic models of the SMBG error PDF. These models can be used in several investigations of present interest in the scientific community, for example, to perform in silico clinical trials comparing SMBG-based with nonadjunctive CGM-based insulin treatments.
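The two-zone structure described above can be sketched with synthetic data (the zone boundary and error magnitudes below are invented, not the paper's fitted values): below a glucose threshold the absolute error has roughly constant SD, above it the relative error does, and both regimes can be recovered from the data.

```python
import numpy as np

# Synthetic SMBG errors with the assumed two-zone structure:
# zone 1 (low glucose): constant-SD absolute error;
# zone 2 (high glucose): constant-SD relative error.

rng = np.random.default_rng(42)
true_bg = rng.uniform(40.0, 400.0, 20000)      # reference glucose, mg/dL
thr = 75.0                                      # assumed zone boundary, mg/dL

noise = rng.normal(size=true_bg.size)
err = np.where(true_bg < thr,
               5.0 * noise,                     # zone 1: SD 5 mg/dL (absolute)
               0.07 * true_bg * noise)          # zone 2: SD 7% (relative)
measured = true_bg + err

z1 = true_bg < thr
sd_abs_z1 = np.std((measured - true_bg)[z1])              # recovers ~5 mg/dL
sd_rel_z2 = np.std(((measured - true_bg) / true_bg)[~z1]) # recovers ~0.07
```

In the paper's methodology, a skew-normal (rather than Gaussian) PDF would then be fitted by maximum likelihood within each zone.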
Probability distribution functions for intermittent scrape-off layer plasma fluctuations
NASA Astrophysics Data System (ADS)
Theodorsen, A.; Garcia, O. E.
2018-03-01
A stochastic model for intermittent fluctuations in the scrape-off layer of magnetically confined plasmas has been constructed based on a superposition of uncorrelated pulses arriving according to a Poisson process. In the most common applications of the model, the pulse amplitudes are assumed exponentially distributed, supported by conditional averaging of large-amplitude fluctuations in experimental measurement data. This basic assumption has two potential limitations. First, statistical analysis of measurement data using conditional averaging only reveals the tail of the amplitude distribution to be exponentially distributed. Second, exponentially distributed amplitudes lead to a positive definite signal which cannot capture fluctuations in, for example, electric potential and radial velocity. Assuming pulse amplitudes which are not positive definite often makes finding a closed form for the probability density function (PDF) difficult, even if the characteristic function remains relatively simple. Thus, estimating model parameters requires an approach based on the characteristic function, not the PDF. In this contribution, the effect of changing the amplitude distribution on the moments, PDF and characteristic function of the process is investigated, and a parameter estimation method using the empirical characteristic function is presented and tested on synthetically generated data. This proves valuable for describing intermittent fluctuations of all plasma parameters in the boundary region of magnetized plasmas.
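The model class described above can be illustrated with a short synthetic-signal sketch (all parameters invented): a shot-noise signal built from uncorrelated one-sided exponential pulses arriving as a Poisson process with exponentially distributed amplitudes, plus the empirical characteristic function used for parameter estimation.

```python
import numpy as np

# Shot-noise signal: Poisson pulse arrivals, exponential amplitudes,
# one-sided exponential pulse shape with decay time tau.

rng = np.random.default_rng(7)
T, dt, tau = 2000.0, 0.1, 1.0             # duration, time step, pulse decay time
t = np.arange(0.0, T, dt)
n_pulses = rng.poisson(T / 5.0)           # mean pulse waiting time 5*tau
arrivals = rng.uniform(0.0, T, n_pulses)
amps = rng.exponential(1.0, n_pulses)     # exponentially distributed amplitudes

signal = np.zeros_like(t)
for t0, a in zip(arrivals, amps):
    mask = t >= t0
    signal[mask] += a * np.exp(-(t[mask] - t0) / tau)

def ecf(u, x):
    """Empirical characteristic function E[exp(i*u*X)] of the samples x."""
    return np.mean(np.exp(1j * u * x))

# With exponential amplitudes the signal is positive definite, which is
# exactly the limitation noted in the abstract for potential/velocity data.
```

Parameter estimation would proceed by fitting the model characteristic function to `ecf` on a grid of u values, avoiding the need for a closed-form PDF.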
Clerkin, L.; Kirk, D.; Manera, M.; ...
2016-08-30
It is well known that the probability distribution function (PDF) of galaxy density contrast is approximately lognormal; whether the PDF of mass fluctuations derived from weak lensing convergence (kappa_WL) is lognormal is less well established. We derive PDFs of the galaxy and projected matter density distributions via the Counts in Cells (CiC) method. We use maps of galaxies and weak lensing convergence produced from the Dark Energy Survey (DES) Science Verification data over 139 deg^2. We test whether the underlying density contrast is well described by a lognormal distribution for the galaxies, the convergence and their joint PDF. We confirm that the galaxy density contrast distribution is well modeled by a lognormal PDF convolved with Poisson noise at angular scales from 10-40 arcmin (corresponding to physical scales of 3-10 Mpc). We note that as kappa_WL is a weighted sum of the mass fluctuations along the line of sight, its PDF is expected to be only approximately lognormal. We find that the kappa_WL distribution is well modeled by a lognormal PDF convolved with Gaussian shape noise at scales between 10 and 20 arcmin, with a best-fit chi^2/DOF of 1.11 compared to 1.84 for a Gaussian model, corresponding to p-values 0.35 and 0.07 respectively, at a scale of 10 arcmin. Above 20 arcmin a simple Gaussian model is sufficient. The joint PDF is also reasonably fitted by a bivariate lognormal. As a consistency check we compare the variances derived from the lognormal modelling with those directly measured via CiC. Our methods are validated against maps from the MICE Grand Challenge N-body simulation.
NASA Astrophysics Data System (ADS)
Clerkin, L.; Kirk, D.; Manera, M.; Lahav, O.; Abdalla, F.; Amara, A.; Bacon, D.; Chang, C.; Gaztañaga, E.; Hawken, A.; Jain, B.; Joachimi, B.; Vikram, V.; Abbott, T.; Allam, S.; Armstrong, R.; Benoit-Lévy, A.; Bernstein, G. M.; Bernstein, R. A.; Bertin, E.; Brooks, D.; Burke, D. L.; Rosell, A. Carnero; Carrasco Kind, M.; Crocce, M.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; Desai, S.; Diehl, H. T.; Dietrich, J. P.; Eifler, T. F.; Evrard, A. E.; Flaugher, B.; Fosalba, P.; Frieman, J.; Gerdes, D. W.; Gruen, D.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; James, D. J.; Kent, S.; Kuehn, K.; Kuropatkin, N.; Lima, M.; Melchior, P.; Miquel, R.; Nord, B.; Plazas, A. A.; Romer, A. K.; Roodman, A.; Sanchez, E.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Walker, A. R.
2017-04-01
It is well known that the probability distribution function (PDF) of galaxy density contrast is approximately lognormal; whether the PDF of mass fluctuations derived from weak lensing convergence (κWL) is lognormal is less well established. We derive PDFs of the galaxy and projected matter density distributions via the counts-in-cells (CiC) method. We use maps of galaxies and weak lensing convergence produced from the Dark Energy Survey Science Verification data over 139 deg2. We test whether the underlying density contrast is well described by a lognormal distribution for the galaxies, the convergence and their joint PDF. We confirm that the galaxy density contrast distribution is well modelled by a lognormal PDF convolved with Poisson noise at angular scales from 10 to 40 arcmin (corresponding to physical scales of 3-10 Mpc). We note that as κWL is a weighted sum of the mass fluctuations along the line of sight, its PDF is expected to be only approximately lognormal. We find that the κWL distribution is well modelled by a lognormal PDF convolved with Gaussian shape noise at scales between 10 and 20 arcmin, with a best-fitting χ2/dof of 1.11 compared to 1.84 for a Gaussian model, corresponding to p-values 0.35 and 0.07, respectively, at a scale of 10 arcmin. Above 20 arcmin a simple Gaussian model is sufficient. The joint PDF is also reasonably fitted by a bivariate lognormal. As a consistency check, we compare the variances derived from the lognormal modelling with those directly measured via CiC. Our methods are validated against maps from the MICE Grand Challenge N-body simulation.
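The "lognormal field convolved with Poisson noise" model tested above can be sketched with a toy counts-in-cells simulation (the lognormal parameter and mean count are invented): generate cell counts from a lognormal density contrast with Poisson galaxy sampling, and check the count variance against its analytic expectation.

```python
import numpy as np

# Toy counts-in-cells: lognormal density contrast delta with <1+delta> = 1,
# galaxies Poisson-sampled with mean nbar*(1+delta) per cell.

rng = np.random.default_rng(3)
sigma_g = 0.5                       # lognormal parameter of the field (assumed)
ncells, nbar = 50000, 20.0          # number of cells, mean galaxies per cell

delta = np.exp(rng.normal(-0.5 * sigma_g**2, sigma_g, ncells)) - 1.0
counts = rng.poisson(nbar * (1.0 + delta))

# Law of total variance: Poisson term + lognormal field term.
var_expected = nbar + nbar**2 * (np.exp(sigma_g**2) - 1.0)
var_measured = counts.var()
```

In the paper the full count PDF (not just the variance) is compared against the lognormal-convolved-with-Poisson model at each smoothing scale.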
van de Kamp, Thomas; dos Santos Rolo, Tomy; Vagovič, Patrik; Baumbach, Tilo; Riedel, Alexander
2014-01-01
Digital surface mesh models based on segmented datasets have become an integral part of studies on animal anatomy and functional morphology; usually, they are published as static images, movies or as interactive PDF files. We demonstrate the use of animated 3D models embedded in PDF documents, which combine the advantages of both movie and interactivity, based on the example of preserved Trigonopterus weevils. The method is particularly suitable to simulate joints with largely deterministic movements due to precise form closure. We illustrate the function of an individual screw-and-nut type hip joint and proceed to the complex movements of the entire insect attaining a defence position. This posture is achieved by a specific cascade of movements: Head and legs interlock mutually and with specific features of thorax and the first abdominal ventrite, presumably to increase the mechanical stability of the beetle and to maintain the defence position with minimal muscle activity. The deterministic interaction of accurately fitting body parts follows a defined sequence, which resembles a piece of engineering.
Computations of steady-state and transient premixed turbulent flames using pdf methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hulek, T.; Lindstedt, R.P.
1996-03-01
Premixed propagating turbulent flames are modeled using a one-point, single-time, joint velocity-composition probability density function (pdf) closure. The pdf evolution equation is solved using a Monte Carlo method. The unclosed terms in the pdf equation are modeled using a modified version of the binomial Langevin model for scalar mixing of Valino and Dopazo, and the Haworth and Pope (HP) and Lagrangian Speziale-Sarkar-Gatski (LSSG) models for the viscous dissipation of velocity and the fluctuating pressure gradient. The source terms for the presumed one-step chemical reaction are extracted from the rate of fuel consumption in laminar premixed hydrocarbon flames, computed using a detailed chemical kinetic mechanism. Steady-state and transient solutions are obtained for planar turbulent methane-air and propane-air flames. The transient solution method features a coupling with a Finite Volume (FV) code to obtain the mean pressure field. The results are compared with the burning velocity measurements of Abdel-Gayed et al. and with velocity measurements obtained in freely propagating propane-air flames by Videto and Santavicca. The effects of different upstream turbulence fields, chemical source terms (different fuels and strained/unstrained laminar flames) and the influence of the velocity statistics models (HP and LSSG) are assessed.
Measuring droplet size distributions from overlapping interferometric particle images.
Bocanegra Evans, Humberto; Dam, Nico; van der Voort, Dennis; Bertens, Guus; van de Water, Willem
2015-02-01
Interferometric particle imaging provides a simple way to measure the probability density function (PDF) of droplet sizes from out-of-focus images. The optical setup is straightforward, but the interpretation of the data is a problem when particle images overlap. We propose a new way to analyze the images. The emphasis is not on a precise identification of droplets, but on obtaining a good estimate of the PDF of droplet sizes in the case of overlapping particle images. The algorithm is tested using synthetic and experimental data. We next use these methods to measure the PDF of droplet sizes produced by spinning disk aerosol generators. The mean primary droplet diameter agrees with predictions from the literature, but we find a broad distribution of satellite droplet sizes.
Kula, Elzbieta; Levitan, Edwin S; Pyza, Elzbieta; Rosbash, Michael
2006-04-01
In Drosophila, the neuropeptide pigment-dispersing factor (PDF) is a likely circadian molecule, secreted by central pacemaker neurons (LNvs). PDF is expressed in both small and large LNvs (sLNvs and lLNvs), and there are striking circadian oscillations of PDF staining intensity in the small cell termini, which require a functional molecular clock. This cycling may be relevant to the proposed role of PDF as a synchronizer of the clock system or as an output signal connecting pacemaker cells to locomotor activity centers. In this study, the authors use a generic neuropeptide fusion protein (atrial natriuretic factor-green fluorescent protein [ANF-GFP]) and show that it can be expressed in the same neurons as PDF itself. Yet neither ANF-GFP nor PDF itself manifests any cyclical accumulation in sLNv termini in adult transgenic flies. Surprisingly, the absence of detectable PDF cycling is not accompanied by any detectable behavioral phenotype, since these transgenic flies have normal morning and evening anticipation in a light-dark cycle (LD) and are fully rhythmic in constant darkness (DD). The molecular clock is also not compromised. The results suggest that robust PDF cycling in sLNv termini plays no more than a minor role in the Drosophila circadian system and is apparently not even necessary for clock output function.
NASA Astrophysics Data System (ADS)
Frandsen, Benjamin A.
Mott insulators are materials in which strong correlations among the electrons induce an unconventional insulating state. Rich interplay between the structural, magnetic, and electronic degrees of freedom resulting from the electron correlation can lead to unusual complexity of Mott materials on the atomic scale, such as microscopically heterogeneous phases or local structural correlations that deviate significantly from the average structure. Such behavior must be studied by suitable experimental techniques, i.e. "local probes", that are sensitive to this local behavior rather than just the bulk, average properties. In this thesis, I will present results from our studies of multiple families of Mott insulators using two such local probes: muon spin relaxation (muSR), a probe of local magnetism; and pair distribution function (PDF) analysis of x-ray and neutron total scattering, a probe of local atomic structure. In addition, I will present the development of magnetic pair distribution function analysis, a novel method for studying local magnetic correlations that is highly complementary to the muSR and atomic PDF techniques. We used muSR to study the phase transition from Mott insulator to metal in two archetypal Mott insulating systems: RENiO3 (RE = rare earth element) and V2O3. In both of these systems, the Mott insulating state can be suppressed by tuning a nonthermal parameter, resulting in a "quantum" phase transition at zero temperature from the Mott insulating state to a metallic state. In RENiO3, this occurs through variation of the rare-earth element in the chemical composition; in V 2O3, through the application of hydrostatic pressure. 
Our results show that the metallic and Mott insulating states unexpectedly coexist in phase-separated regions across a large portion of parameter space near the Mott quantum phase transition and that the magnitude of the ordered antiferromagnetic moment remains constant across the phase diagram until it is abruptly destroyed at the quantum phase transition. Taken together, these findings point unambiguously to a first-order quantum phase transition in these systems. We also conducted x-ray and neutron PDF experiments, which suggest that the distinct atomic structures associated with the insulating and metallic phases similarly coexist near the quantum phase transition. These results have significant implications for our understanding of the Mott metal-insulator quantum phase transition in real materials. The second part of this thesis centers on the derivation and development of the magnetic pair distribution function (mPDF) technique and its application to the antiferromagnetic Mott insulator MnO. The atomic PDF method involves Fourier transforming the x-ray or neutron total scattering intensity from reciprocal space into real space to directly reveal the local atomic correlations in a material, which may deviate significantly from the average crystallographic structure of that material. Likewise, the mPDF method involves Fourier transforming the magnetic neutron total scattering intensity to probe the local correlations of magnetic moments in the material, which may exist on short length scales even when the material has no long-range magnetic order. After deriving the fundamental mPDF equations and providing a proof-of-principle by recovering the known magnetic structure of antiferromagnetic MnO, we used this technique to investigate the short-range magnetic correlations that persist well into the paramagnetic phase of MnO. 
By combining the mPDF measurements with ab initio calculations of the spin-spin correlation function in paramagnetic MnO, we were able to quantitatively account for the observed mPDF. We also used the mPDF data to evaluate competing ab initio theories, thereby resolving some longstanding questions about the magnetic exchange interactions in MnO.
The GABA(A) receptor RDL acts in peptidergic PDF neurons to promote sleep in Drosophila.
Chung, Brian Y; Kilman, Valerie L; Keath, J Russel; Pitman, Jena L; Allada, Ravi
2009-03-10
Sleep is regulated by a circadian clock that times sleep and wake to specific times of day and a homeostat that drives sleep as a function of prior wakefulness. To analyze the role of the circadian clock, we have used the fruit fly Drosophila. Flies display the core behavioral features of sleep, including relative immobility, elevated arousal thresholds, and homeostatic regulation. We assessed sleep-wake modulation by a core set of circadian pacemaker neurons that express the neuropeptide PDF. We find that disruption of PDF function increases sleep during the late night in light:dark and the first subjective day of constant darkness. Flies deploy genetic and neurotransmitter pathways to regulate sleep that are similar to those of their mammalian counterparts, including GABA. We find that RNA interference-mediated knockdown of the GABA(A) receptor gene, Resistant to dieldrin (Rdl), in PDF neurons reduces sleep, consistent with a role for GABA in inhibiting PDF neuron function. Patch-clamp electrophysiology reveals GABA-activated picrotoxin-sensitive chloride currents on PDF+ neurons. In addition, RDL is detectable most strongly on the large subset of PDF+ pacemaker neurons. These results suggest that GABAergic inhibition of arousal-promoting PDF neurons is an important mode of sleep-wake regulation in vivo.
Cole, Jacqueline M; Cheng, Xie; Payne, Michael C
2016-11-07
The use of principal component analysis (PCA) to statistically infer features of local structure from experimental pair distribution function (PDF) data is assessed on a case study of rare-earth phosphate glasses (REPGs). Such glasses, codoped with two rare-earth ions (R and R') of different sizes and optical properties, are of interest to the laser industry. The determination of structure-property relationships in these materials is an important aspect of their technological development. Yet, realizing the local structure of codoped REPGs presents significant challenges relative to their singly doped counterparts; specifically, R and R' are difficult to distinguish in terms of establishing relative material compositions, identifying atomic pairwise correlation profiles in a PDF that are associated with each ion, and resolving peak overlap of such profiles in PDFs. This study demonstrates that PCA can be employed to help overcome these structural complications, by statistically inferring trends in PDFs that exist for a restricted set of experimental data on REPGs, and using these as training data to predict material compositions and PDF profiles in unknown codoped REPGs. The application of these PCA methods to resolve individual atomic pairwise correlations in t(r) signatures is also presented. The training methods developed for these structural predictions are prevalidated by testing their ability to reproduce known physical phenomena, such as the lanthanide contraction, on PDF signatures of the structurally simpler singly doped REPGs. The intrinsic limitations of applying PCA to analyze PDFs relative to the quality control of source data, data processing, and sample definition, are also considered. While this case study is limited to lanthanide-doped REPGs, this type of statistical inference may easily be extended to other inorganic solid-state materials and be exploited in large-scale data-mining efforts that probe many t(r) functions.
NASA Technical Reports Server (NTRS)
Hsu, Andrew T.
1992-01-01
Turbulent combustion cannot be simulated adequately by conventional moment-closure turbulence models. The probability density function (PDF) method offers an attractive alternative: in a PDF model, the chemical source terms are closed and do not require additional models. Because the number of computational operations grows only linearly in the Monte Carlo scheme, it is chosen over finite-difference schemes. A grid-dependent Monte Carlo scheme following J. Y. Chen and W. Kollmann has been studied in the present work. It was found that in order to conserve the mass fractions absolutely, one needs to add a further restriction to the scheme, namely alpha_j + gamma_j = alpha_{j-1} + gamma_{j+1}. A new algorithm was devised that satisfies this restriction in the case of pure diffusion or uniform flow problems. Using examples, it is shown that absolute conservation can be achieved. Although for non-uniform flows absolute conservation seems impossible, the present scheme reduces the error considerably.
On recontamination and directional-bias problems in Monte Carlo simulation of PDF turbulence models
NASA Technical Reports Server (NTRS)
Hsu, Andrew T.
1991-01-01
Turbulent combustion cannot be simulated adequately by conventional moment-closure turbulence models. The difficulty lies in the fact that the reaction rate is in general an exponential function of the temperature, and the higher-order correlations in the conventional moment-closure models of the chemical source term cannot be neglected, making the application of such models impractical. The probability density function (pdf) method offers an attractive alternative: in a pdf model, the chemical source terms are closed and do not require additional models. A grid-dependent Monte Carlo scheme was studied, since it is a logical alternative: the number of computer operations increases only linearly with the number of independent variables, as compared with the exponential increase in a conventional finite-difference scheme. A new algorithm was devised that satisfies a conservation restriction in the case of pure diffusion or uniform flow problems. Although for nonuniform flows absolute conservation seems impossible, the present scheme reduces the error considerably.
Atomic Structure of a Cesium Aluminosilicate Geopolymer: A Pair Distribution Function Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bell, J.; Sarin, P; Provis, J
2008-01-01
The atomic pair distribution function (PDF) method was used to study the structure of a cesium aluminosilicate geopolymer. The geopolymer was prepared by reacting metakaolin with cesium silicate solution, followed by curing at 50 °C for 24 h in a sealed container. Heating the Cs-geopolymer above 1000 °C resulted in the formation of crystalline pollucite (CsAlSi2O6). PDF refinement of the pollucite phase displayed an excellent fit over the 10-30 Å range when compared with a cubic pollucite model. A poorer fit was attained from 1-10 Å due to an additional amorphous phase present in the heated geopolymer. On the basis of PDF analysis, unheated Cs-geopolymer displayed structural ordering similar to pollucite up to a length scale of 9 Å, despite some differences. Our results suggest that hydrated Cs+ ions were an integral part of the Cs-geopolymer structure and that most of the water present was not associated with Al-OH or Si-OH bonds.
Parsons, T.
2008-01-01
Paleoearthquake observations often lack enough events at a given site to directly define a probability density function (PDF) for earthquake recurrence. Sites with fewer than 10-15 intervals do not provide enough information to reliably determine the shape of the PDF using standard maximum-likelihood techniques (e.g., Ellsworth et al., 1999). In this paper I present a method that attempts to fit wide ranges of distribution parameters to short paleoseismic series. From repeated Monte Carlo draws, it becomes possible to quantitatively estimate most likely recurrence PDF parameters, and a ranked distribution of parameters is returned that can be used to assess uncertainties in hazard calculations. In tests on short synthetic earthquake series, the method gives results that cluster around the mean of the input distribution, whereas maximum likelihood methods return the sample means (e.g., NIST/SEMATECH, 2006). For short series (fewer than 10 intervals), sample means tend to reflect the median of an asymmetric recurrence distribution, possibly leading to an overestimate of the hazard should they be used in probability calculations. Therefore a Monte Carlo approach may be useful for assessing recurrence from limited paleoearthquake records. Further, the degree of functional dependence among parameters like mean recurrence interval and coefficient of variation can be established. The method is described for use with time-independent and time-dependent PDFs, and results from 19 paleoseismic sequences on strike-slip faults throughout the state of California are given.
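The idea of scanning wide parameter ranges against a short interval series can be sketched as follows. This is a toy illustration, not Parsons' actual algorithm: the five synthetic intervals and the lognormal recurrence model are assumptions.

```python
import math, random

random.seed(0)

# Short synthetic paleoseismic record: five recurrence intervals (years).
intervals = [120.0, 150.0, 95.0, 200.0, 130.0]

def log_likelihood(mu, sigma):
    # Lognormal recurrence model; mu and sigma act on ln(interval).
    ll = 0.0
    for t in intervals:
        ll += -math.log(t * sigma * math.sqrt(2 * math.pi)) \
              - (math.log(t) - mu) ** 2 / (2 * sigma ** 2)
    return ll

# Monte Carlo draws over a wide parameter range, ranked by likelihood.
draws = []
for _ in range(20000):
    mu = random.uniform(4.0, 6.0)
    sigma = random.uniform(0.05, 1.0)
    draws.append((log_likelihood(mu, sigma), mu, sigma))
draws.sort(reverse=True)

best_ll, best_mu, best_sigma = draws[0]
print(best_mu, best_sigma)  # clusters near the mean/std of ln(intervals)
```

The full ranked list of draws, not just the best one, is what would feed uncertainty estimates in a hazard calculation.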
A composition joint PDF method for the modeling of spray flames
NASA Technical Reports Server (NTRS)
Raju, M. S.
1995-01-01
This viewgraph presentation discusses an extension of the probability density function (PDF) method to the modeling of spray flames to evaluate the limitations and capabilities of this method in the modeling of gas-turbine combustor flows. The comparisons show that the general features of the flowfield are correctly predicted by the present solution procedure. The present solution appears to provide a better representation of the temperature field, particularly in the reverse-velocity zone. The overpredictions in the centerline velocity could be attributed to the following reasons: (1) the k-epsilon turbulence model is known to be less precise in highly swirling flows, and (2) the swirl number used here is reported to be estimated rather than measured.
Mackenzie, Ruth; Holmes, Clifford J; Jones, Suzanne; Williams, John D; Topley, Nicholas
2003-12-01
Clinical indices of in vivo biocompatibility: The role of ex vivo cell function studies and effluent markers in peritoneal dialysis patients. Over the past 20 years, studies of the biocompatibility profile of peritoneal dialysis fluids (PDF) have evolved from initial in vitro studies assessing the impact of solutions on leukocyte function to evaluations of mesothelial cell behavior. More recent biocompatibility evaluations have involved assessments of the impact of PDF on membrane integrity and cell function in peritoneal dialysis (PD) patients. The development of ex vivo systems for the evaluation of in vivo cell function, and of effluent markers of membrane integrity and inflammation, in patients exposed both acutely and chronically to conventional and new PDF is interpreted here in the context of our current understanding of the biology of the dialyzed peritoneum. The available data indicate that exposure of the peritoneal environment to more biocompatible PDF is associated with improvements in peritoneal cell function, alterations in markers of membrane integrity, and reduced local inflammation. These data suggest that more biocompatible PDF will have a positive impact on host defense, peritoneal homeostasis, and the long-term preservation of peritoneal membrane function in PD patients.
High-pressure pair distribution function (PDF) measurement using high-energy focused x-ray beam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Xinguo, E-mail: xhong@bnl.gov; Weidner, Donald J.; Ehm, Lars
In this paper, we report recent development of the high-pressure pair distribution function (HP-PDF) measurement technique using a focused high-energy X-ray beam coupled with a diamond anvil cell (DAC). The focusing optics consist of a sagittally bent Laue monochromator and Kirkpatrick-Baez (K–B) mirrors. This combination provides a clean high-energy X-ray beam suitable for HP-PDF research. Demonstration of the HP-PDF technique for nanocrystalline platinum under quasi-hydrostatic condition above 30 GPa is presented.
NASA Astrophysics Data System (ADS)
Bakosi, J.; Franzese, P.; Boybeyi, Z.
2007-11-01
Dispersion of a passive scalar from concentrated sources in fully developed turbulent channel flow is studied with the probability density function (PDF) method. The joint PDF of velocity, turbulent frequency and scalar concentration is represented by a large number of Lagrangian particles. A stochastic near-wall PDF model combines the generalized Langevin model of Haworth and Pope [Phys. Fluids 29, 387 (1986)] with Durbin's [J. Fluid Mech. 249, 465 (1993)] method of elliptic relaxation to provide a mathematically exact treatment of convective and viscous transport with a nonlocal representation of the near-wall Reynolds stress anisotropy. The presence of walls is incorporated through the imposition of no-slip and impermeability conditions on particles without the use of damping or wall-functions. Information on the turbulent time scale is supplied by the gamma-distribution model of van Slooten et al. [Phys. Fluids 10, 246 (1998)]. Two different micromixing models are compared that incorporate the effect of small scale mixing on the transported scalar: the widely used interaction by exchange with the mean and the interaction by exchange with the conditional mean model. Single-point velocity and concentration statistics are compared to direct numerical simulation and experimental data at Reτ=1080 based on the friction velocity and the channel half width. The joint model accurately reproduces a wide variety of conditional and unconditional statistics in both physical and composition space.
PDF-modulated visual inputs and cryptochrome define diurnal behavior in Drosophila.
Cusumano, Paola; Klarsfeld, André; Chélot, Elisabeth; Picot, Marie; Richier, Benjamin; Rouyer, François
2009-11-01
Morning and evening circadian oscillators control the bimodal activity of Drosophila in light-dark cycles. The lateral neurons evening oscillator (LN-EO) is important for promoting diurnal activity at dusk. We found that the LN-EO autonomously synchronized to light-dark cycles through either the cryptochrome (CRY) that it expressed or the visual system. In conditions in which CRY was not activated, flies depleted for pigment-dispersing factor (PDF) or its receptor lost the evening activity and displayed reversed PER oscillations in the LN-EO. Rescue experiments indicated that normal PER cycling and the presence of evening activity relied on PDF secretion from the large ventral lateral neurons and PDF receptor function in the LN-EO. The LN-EO thus integrates light inputs and PDF signaling to control Drosophila diurnal behavior, revealing a new clock-independent function for PDF.
Computations of turbulent lean premixed combustion using conditional moment closure
NASA Astrophysics Data System (ADS)
Amzin, Shokri; Swaminathan, Nedunchezhian
2013-12-01
Conditional Moment Closure (CMC) is a suitable method for predicting scalars such as carbon monoxide with slow chemical time scales in turbulent combustion. Although this method has been successfully applied to non-premixed combustion, its application to lean premixed combustion is rare. In this study the CMC method is used to compute piloted lean premixed combustion in a distributed combustion regime. The conditional scalar dissipation rate of the conditioning scalar, the progress variable, is closed using an algebraic model and turbulence is modelled using the standard k-ɛ model. The conditional mean reaction rate is closed using a first order CMC closure with the GRI-3.0 chemical mechanism to represent the chemical kinetics of methane oxidation. The PDF of the progress variable is obtained using a presumed shape with the Beta function. The computed results are compared with the experimental measurements and earlier computations using the transported PDF approach. The results show reasonable agreement with the experimental measurements and are consistent with the transported PDF computations. When the compounded effects of shear-turbulence and flame are strong, second order closures may be required for the CMC.
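The presumed Beta-PDF step described above can be sketched numerically: the Beta shape parameters follow from the mean and variance of the progress variable, and the mean reaction rate is the PDF-weighted integral of the conditional rate. The toy rate function omega(c) below is an assumption for illustration, standing in for the GRI-3.0 chemistry:

```python
import math

c_mean, c_var = 0.5, 0.05            # presumed mean and variance of progress variable
g = c_mean * (1 - c_mean) / c_var - 1.0
a, b = c_mean * g, (1 - c_mean) * g  # Beta shape parameters (a = b = 2 here)

norm = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))

def beta_pdf(c):
    return norm * c ** (a - 1) * (1 - c) ** (b - 1)

def omega(c):
    # Toy reaction rate; a real computation would use detailed chemistry.
    return c ** 2 * (1 - c)

dc = 1e-4
cs = [(i + 0.5) * dc for i in range(10000)]   # midpoint rule on (0, 1)
pdf_mass = sum(beta_pdf(c) * dc for c in cs)
omega_bar = sum(omega(c) * beta_pdf(c) * dc for c in cs)

print(pdf_mass)    # ~1.0: the presumed PDF is normalized
print(omega_bar)   # PDF-weighted mean rate; differs from omega(c_mean) = 0.125
```

The gap between omega_bar and omega(c_mean) is exactly why a presumed PDF is needed: evaluating a nonlinear rate at the mean composition misestimates the mean rate.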
Bayesian analysis of the flutter margin method in aeroelasticity
Khalil, Mohammad; Poirel, Dominique; Sarkar, Abhijit
2016-08-27
A Bayesian statistical framework is presented for the Zimmerman and Weissenburger flutter margin method which considers the uncertainties in aeroelastic modal parameters. The proposed methodology overcomes the limitations of the previously developed least-squares-based estimation technique, which relies on the Gaussian approximation of the flutter margin probability density function (pdf). Using the measured free-decay responses at subcritical (preflutter) airspeeds, the joint non-Gaussian posterior pdf of the modal parameters is sampled using the Metropolis–Hastings (MH) Markov chain Monte Carlo (MCMC) algorithm. The posterior MCMC samples of the modal parameters are then used to obtain the flutter margin pdfs and finally the flutter speed pdf. The usefulness of the Bayesian flutter margin method is demonstrated using synthetic data generated from a two-degree-of-freedom pitch-plunge aeroelastic model. The robustness of the statistical framework is demonstrated using different sets of measurement data. In conclusion, it is shown that the probabilistic (Bayesian) approach reduces the number of test points required in providing a flutter speed estimate for a given accuracy and precision.
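The Metropolis–Hastings sampling step can be sketched with a deliberately simple stand-in problem. The Gaussian likelihood, toy data, and proposal width below are assumptions for illustration, not the pitch-plunge aeroelastic model:

```python
import math, random

random.seed(1)

# Toy data: noisy observations of a single modal parameter.
data = [1.2, 0.9, 1.1, 1.0, 1.3]
noise_sd = 0.2

def log_post(theta):
    # Gaussian likelihood plus a broad Gaussian prior N(0, 10^2).
    ll = sum(-(x - theta) ** 2 / (2 * noise_sd ** 2) for x in data)
    return ll - theta ** 2 / (2 * 10.0 ** 2)

samples, theta = [], 0.0
lp = log_post(theta)
for i in range(20000):
    prop = theta + random.gauss(0.0, 0.1)          # random-walk proposal
    lp_prop = log_post(prop)
    if math.log(random.random()) < lp_prop - lp:   # MH accept/reject
        theta, lp = prop, lp_prop
    if i >= 2000:                                  # discard burn-in
        samples.append(theta)

post_mean = sum(samples) / len(samples)
print(post_mean)   # close to the sample mean of the data
```

In the flutter margin application, samples like these would be pushed through the flutter margin expression to build its pdf, rather than summarized by a point estimate.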
NASA Astrophysics Data System (ADS)
Higginson, Drew P.
2017-11-01
We describe and justify a full-angle scattering (FAS) method to faithfully reproduce the accumulated differential angular Rutherford scattering probability distribution function (pdf) of particles in a plasma. The FAS method splits the scattering events into two regions. At small angles it is described by cumulative scattering events resulting, via the central limit theorem, in a Gaussian-like pdf; at larger angles it is described by single-event scatters and retains a pdf that follows the form of the Rutherford differential cross-section. The FAS method is verified using discrete Monte-Carlo scattering simulations run at small timesteps to include each individual scattering event. We identify the FAS regime of interest as where the ratio of temporal/spatial scale-of-interest to slowing-down time/length is from 10^-3 to 0.3-0.7; the upper limit corresponds to Coulomb logarithm of 20-2, respectively. Two test problems, high-velocity interpenetrating plasma flows and keV-temperature ion equilibration, are used to highlight systems where including FAS is important to capture relevant physics.
Statistical Modeling of Retinal Optical Coherence Tomography.
Amini, Zahra; Rabbani, Hossein
2016-06-01
In this paper, a new model for retinal Optical Coherence Tomography (OCT) images is proposed. This statistical model is based on introducing a nonlinear Gaussianization transform to convert the probability distribution function (pdf) of each OCT intra-retinal layer to a Gaussian distribution. The retina is a layered structure, and in OCT each of these layers has a specific pdf corrupted by speckle noise; therefore a mixture model for statistical modeling of OCT images is proposed. A Normal-Laplace distribution, which is a convolution of a Laplace pdf and Gaussian noise, is proposed as the distribution of each component of this model. The reason for choosing the Laplace pdf is the monotonically decaying behavior of OCT intensities in each layer for healthy cases. After fitting a mixture model to the data, each component is gaussianized and all of them are combined by the Averaged Maximum A Posteriori (AMAP) method. To demonstrate the ability of this method, a new contrast enhancement method based on this statistical model is proposed and tested on thirteen healthy 3D OCT volumes acquired with the Topcon 3D OCT and five 3D OCT volumes from Age-related Macular Degeneration (AMD) patients acquired with the Zeiss Cirrus HD-OCT. Comparing the results with two contending techniques, the advantage of the proposed method is demonstrated both visually and numerically. Furthermore, to prove the efficacy of the proposed method for a more direct and specific purpose, an improvement in the segmentation of intra-retinal layers using the proposed contrast enhancement method as a preprocessing step is demonstrated.
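The Gaussianization step can be sketched for a single layer: push each intensity through the fitted cumulative distribution function, then through the inverse standard-normal CDF. The Laplace parameters and sample generation below are assumptions for illustration (a pure Laplace layer with no speckle convolution):

```python
import math, random
from statistics import NormalDist

random.seed(2)
m, b = 10.0, 2.0   # assumed Laplace location/scale for one retinal layer

def laplace_sample():
    # Inverse-transform sampling of a Laplace variate.
    u = random.random() - 0.5
    return m - b * math.copysign(1.0, u) * math.log(max(1e-12, 1 - 2 * abs(u)))

def laplace_cdf(x):
    if x < m:
        return 0.5 * math.exp((x - m) / b)
    return 1 - 0.5 * math.exp(-(x - m) / b)

nd = NormalDist()   # standard normal
xs = [laplace_sample() for _ in range(20000)]
# Gaussianization: intensity -> fitted CDF -> inverse standard-normal CDF.
zs = [nd.inv_cdf(min(1 - 1e-12, max(1e-12, laplace_cdf(x)))) for x in xs]

mean_z = sum(zs) / len(zs)
var_z = sum((z - mean_z) ** 2 for z in zs) / len(zs)
print(mean_z, var_z)   # approximately standard normal: ~0, ~1
```

In the paper's setting the same mapping is applied per mixture component before the AMAP combination; here a single component suffices to show the transform.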
Newe, Axel; Becker, Linda; Schenk, Andrea
2014-01-01
Background & Objectives The Portable Document Format (PDF) is the de-facto standard for the exchange of electronic documents. It is platform-independent, suitable for the exchange of medical data, and allows for the embedding of three-dimensional (3D) surface mesh models. In this article, we present the first clinical routine application of interactive 3D surface mesh models which have been integrated into PDF files for the presentation and the exchange of Computer Assisted Surgery Planning (CASP) results in liver surgery. We aimed to prove the feasibility of applying 3D PDF in medical reporting and investigated the user experience with this new technology. Methods We developed an interactive 3D PDF report document format and implemented a software tool to create these reports automatically. After more than 1000 liver CASP cases that have been reported in clinical routine using our 3D PDF report, an international user survey was carried out online to evaluate the user experience. Results Our solution enables the user to interactively explore the anatomical configuration and to have different analyses and various resection proposals displayed within a 3D PDF document covering only a single page that acts more like a software application than like a typical PDF file (“PDF App”). The new 3D PDF report offers many advantages over the previous solutions. According to the results of the online survey, the users have assessed the pragmatic quality (functionality, usability, perspicuity, efficiency) as well as the hedonic quality (attractiveness, novelty) very positively. Conclusion The usage of 3D PDF for reporting and sharing CASP results is feasible and well accepted by the target audience. Using interactive PDF with embedded 3D models is an enabler for presenting and exchanging complex medical information in an easy and platform-independent way. Medical staff as well as patients can benefit from the possibilities provided by 3D PDF. 
Our results open the door for a wider use of this new technology, since the basic idea can and should be applied for many medical disciplines and use cases. PMID:25551375
Quantifying the interplay effect in prostate IMRT delivery using a convolution-based method.
Li, Haisen S; Chetty, Indrin J; Solberg, Timothy D
2008-05-01
The authors present a segment-based convolution method to account for the interplay effect between intrafraction organ motion and the multileaf collimator position for each particular segment in intensity modulated radiation therapy (IMRT) delivered in a step-and-shoot manner. In this method, the static dose distribution attributed to each segment is convolved with the probability density function (PDF) of motion during delivery of the segment, whereas in the conventional convolution method ("average-based convolution"), the static dose distribution is convolved with the PDF averaged over an entire fraction, an entire treatment course, or even an entire patient population. In the case of IMRT delivered in a step-and-shoot manner, the average-based convolution method assumes that in each segment the target volume experiences the same motion pattern (PDF) as that of population. In the segment-based convolution method, the dose during each segment is calculated by convolving the static dose with the motion PDF specific to that segment, allowing both intrafraction motion and the interplay effect to be accounted for in the dose calculation. Intrafraction prostate motion data from a population of 35 patients tracked using the Calypso system (Calypso Medical Technologies, Inc., Seattle, WA) was used to generate motion PDFs. These were then convolved with dose distributions from clinical prostate IMRT plans. For a single segment with a small number of monitor units, the interplay effect introduced errors of up to 25.9% in the mean CTV dose compared against the planned dose evaluated by using the PDF of the entire fraction. In contrast, the interplay effect reduced the minimum CTV dose by 4.4%, and the CTV generalized equivalent uniform dose by 1.3%, in single fraction plans. For entire treatment courses delivered in either a hypofractionated (five fractions) or conventional (> 30 fractions) regimen, the discrepancy in total dose due to interplay effect was negligible.
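The contrast between segment-based and average-based convolution can be shown in one dimension. The dose profiles and two-point motion PDFs below are toy assumptions, not clinical data:

```python
def convolve(dose, pdf):
    # Discrete convolution of a static dose profile with a motion PDF.
    out = [0.0] * (len(dose) + len(pdf) - 1)
    for i, d in enumerate(dose):
        for j, p in enumerate(pdf):
            out[i + j] += d * p
    return out

# Two step-and-shoot segments, each with its own motion PDF during delivery.
seg1, pdf1 = [0.0, 2.0, 2.0, 0.0, 0.0], [0.7, 0.3]
seg2, pdf2 = [0.0, 0.0, 1.0, 1.0, 0.0], [0.2, 0.8]

# Segment-based: convolve each segment with its own PDF, then sum.
d_seg = [a + c for a, c in zip(convolve(seg1, pdf1), convolve(seg2, pdf2))]

# Average-based: convolve the summed static dose with the MU-weighted
# fraction-averaged PDF, which ignores the interplay effect.
mu1, mu2 = sum(seg1), sum(seg2)
pdf_avg = [(mu1 * p + mu2 * q) / (mu1 + mu2) for p, q in zip(pdf1, pdf2)]
total = [a + c for a, c in zip(seg1, seg2)]
d_avg = convolve(total, pdf_avg)

print(sum(d_seg), sum(d_avg))  # both methods conserve integral dose
print(max(abs(a - c) for a, c in zip(d_seg, d_avg)))  # nonzero: interplay
```

Both estimates deposit the same total dose, but their spatial distributions differ; that pointwise difference is the interplay effect the segment-based method captures.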
Finley, B L; Scott, P K; Mayhall, D A
1994-08-01
It has recently been suggested that "standard" data distributions for key exposure variables should be developed wherever appropriate for use in probabilistic or "Monte Carlo" exposure analyses. Soil-on-skin adherence estimates represent an ideal candidate for development of a standard data distribution: There are several readily available studies which offer a consistent pattern of reported results, and more importantly, soil adherence to skin is likely to vary little from site-to-site. In this paper, we thoroughly review each of the published soil adherence studies with respect to study design, sampling, and analytical methods, and level of confidence in the reported results. Based on these studies, probability density functions (PDF) of soil adherence values were examined for different age groups and different sampling techniques. The soil adherence PDF developed from adult data was found to resemble closely the soil adherence PDF based on child data in terms of both central tendency (mean = 0.49 and 0.63 mg-soil/cm2-skin, respectively) and 95th percentile values (1.6 and 2.4 mg-soil/cm2-skin, respectively). Accordingly, a single, "standard" PDF is presented based on all data collected for all age groups. This standard PDF is lognormally distributed; the arithmetic mean and standard deviation are 0.52 +/- 0.9 mg-soil/cm2-skin. Since our review of the literature indicates that soil adherence under environmental conditions will be minimally influenced by age, sex, soil type, or particle size, this PDF should be considered applicable to all settings. The 50th and 95th percentile values of the standard PDF (0.25 and 1.7 mg-soil/cm2-skin, respectively) are very similar to recent U.S. EPA estimates of "average" and "upper-bound" soil adherence (0.2 and 1.0 mg-soil/cm2-skin, respectively).
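The internal consistency of the reported standard PDF can be checked directly: converting the stated arithmetic mean and standard deviation (0.52 +/- 0.9) into lognormal log-space parameters reproduces the quoted percentiles to within rounding.

```python
import math

am, asd = 0.52, 0.9   # arithmetic mean and SD, mg-soil/cm2-skin

# Standard conversion from arithmetic moments to lognormal parameters.
sigma2 = math.log(1 + (asd / am) ** 2)
mu = math.log(am) - sigma2 / 2
sigma = math.sqrt(sigma2)

median = math.exp(mu)                 # 50th percentile of a lognormal
p95 = math.exp(mu + 1.645 * sigma)    # 95th percentile
print(median, p95)   # ~0.26 and ~1.8, versus the reported 0.25 and 1.7
```

The small residual gap at the 95th percentile is consistent with the reported values having been read from the fitted distribution with rounding.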
Gamez-Mendoza, Liliana; Terban, Maxwell W.; Billinge, Simon J. L.; ...
2017-04-13
The particle size of supported catalysts is a key characteristic for determining structure–property relationships. It is a challenge to obtain this information accurately and in situ using crystallographic methods owing to the small size of such particles (<5 nm) and the fact that they are supported. In this work, the pair distribution function (PDF) technique was used to obtain the particle size distribution of supported Pt catalysts as they grow under typical synthesis conditions. The PDF of Pt nanoparticles grown on zeolite X was isolated and refined using two models: a monodisperse spherical model (single particle size) and a lognormal size distribution. The results were compared and validated using scanning transmission electron microscopy (STEM) results. Both models describe the same trends in average particle size with temperature, but the results of the number-weighted lognormal size distributions can also accurately describe the mean size and the width of the size distributions obtained from STEM. Since the PDF yields crystallite sizes, these results suggest that the grown Pt nanoparticles are monocrystalline. As a result, this work shows that refinement of the PDF of small supported monocrystalline nanoparticles can yield accurate mean particle sizes and distributions.
Efficient and robust computation of PDF features from diffusion MR signal.
Assemlal, Haz-Edine; Tschumperlé, David; Brun, Luc
2009-10-01
We present a method for the estimation of various features of the tissue micro-architecture using diffusion magnetic resonance imaging. The considered features are derived from the displacement probability density function (PDF). The estimation is based on two steps: first, approximation of the signal by a series expansion of Gaussian-Laguerre and spherical harmonic functions; second, a projection onto a finite-dimensional space. In addition, we tackle the problem of robustness to the Rician noise that corrupts in-vivo acquisitions. Our feature estimation is expressed as a variational minimization process, leading to a framework that is robust to noise. This approach is very flexible regarding the number of samples and enables the computation of a large set of features of the local tissue structure. We demonstrate the effectiveness of the method with results on both a synthetic phantom and real MR datasets acquired in a clinical time-frame.
Assessment of PDF Micromixing Models Using DNS Data for a Two-Step Reaction
NASA Astrophysics Data System (ADS)
Tsai, Kuochen; Chakrabarti, Mitali; Fox, Rodney O.; Hill, James C.
1996-11-01
Although the probability density function (PDF) method is known to treat the chemical reaction terms exactly, its application to turbulent reacting flows has been hampered by the difficulty of modeling the molecular mixing terms satisfactorily. In this study, two PDF molecular mixing models, the linear-mean-square-estimation (LMSE or IEM) model and the generalized interaction-by-exchange-with-the-mean (GIEM) model, are compared with DNS data in decaying turbulence with a two-step parallel-consecutive reaction and two segregated initial conditions: "slabs" and "blobs". Since the molecular mixing model is expected to have a strong effect on the mean values of chemical species under such initial conditions, the model evaluation is intended to answer the following questions: (1) Can the PDF models predict the mean values of chemical species correctly with completely segregated initial conditions? (2) Is a single molecular mixing timescale sufficient for the PDF models to predict the mean values with different initial conditions? (3) Will the chemical reactions change the molecular mixing timescales of the reacting species enough to affect the accuracy of the models' predictions for the mean values of chemical species?
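The LMSE/IEM model relaxes each notional particle's composition toward the mean at a rate set by the mixing frequency. A minimal sketch with assumed model constants (C_phi = 2, a fixed turbulent frequency, and a segregated 0/1 initial field) shows the model's two defining properties: the mean is preserved and the variance decays exponentially.

```python
import math

# Segregated "slabs"-like initial condition: half the particles at 0, half at 1.
n = 1000
phi = [0.0] * (n // 2) + [1.0] * (n // 2)

c_phi, omega, dt, steps = 2.0, 1.0, 0.01, 100   # assumed model constants

var0 = 0.25   # initial variance of the 0/1 field
for _ in range(steps):
    mean = sum(phi) / n
    # IEM / LMSE: dphi/dt = -(C_phi / 2) * omega * (phi - <phi>)
    phi = [p - 0.5 * c_phi * omega * (p - mean) * dt for p in phi]

mean = sum(phi) / n
var = sum((p - mean) ** 2 for p in phi) / n
print(mean)        # mean composition is preserved (0.5)
print(var / var0)  # ~exp(-C_phi * omega * t) with t = 1
```

Notably, IEM leaves the shape of the composition PDF unchanged as it contracts, which is one of the behaviors the DNS comparison above is designed to probe.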
NASA Technical Reports Server (NTRS)
Sellers, Piers
2012-01-01
Soil wetness typically shows great spatial variability over the length scales of general circulation model (GCM) grid areas (approx 100 km), and the functions relating evapotranspiration and photosynthetic rate to local-scale (approx 1 m) soil wetness are highly non-linear. Soil respiration is also highly dependent on very small-scale variations in soil wetness. We therefore expect significant inaccuracies whenever we insert a single grid-area-average soil wetness value into a function to calculate any of these rates for the grid area. For the particular case of evapotranspiration, this method - use of a grid-averaged soil wetness value - can also provoke severe oscillations in the evapotranspiration rate and soil wetness under some conditions. A method is presented whereby the probability distribution function (pdf) for soil wetness within a grid area is represented by binning, and numerical integration of the binned pdf is performed to provide a spatially integrated wetness stress term for the whole grid area, which then permits calculation of grid-area fluxes in a single operation. The method is very accurate when 10 or more bins are used, can deal realistically with spatially variable precipitation, conserves moisture exactly, and allows for precise modification of the soil wetness pdf after every time step. The method could also be applied to other ecological problems where small-scale processes must be area-integrated, or upscaled, to estimate fluxes over large areas, for example in treatments of the terrestrial carbon budget or trace gas generation.
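The binned-pdf integration can be sketched directly. The uniform wetness field and the saturating stress function below are toy assumptions chosen only to make the nonlinearity visible:

```python
import random

random.seed(3)

# Heterogeneous local-scale soil wetness within one grid cell.
w = [random.random() for _ in range(100000)]

def stress(wi, w_crit=0.5):
    # Nonlinear wetness-stress factor: linear ramp saturating at 1.
    return min(1.0, wi / w_crit)

# Naive approach: apply the function to the grid-average wetness.
w_bar = sum(w) / len(w)
naive = stress(w_bar)

# Binned-pdf approach: 10 bins, evaluate the function at bin centers
# and weight by the bin probabilities.
nbins = 10
counts = [0] * nbins
for wi in w:
    counts[min(nbins - 1, int(wi * nbins))] += 1
binned = sum((c / len(w)) * stress((k + 0.5) / nbins)
             for k, c in enumerate(counts))

exact = sum(stress(wi) for wi in w) / len(w)   # point-by-point reference
print(naive, binned, exact)   # naive overestimates; binned tracks exact
```

Because the stress function saturates, evaluating it at the grid mean overstates the area-average flux, while ten bins already reproduce the point-by-point average closely, consistent with the accuracy claim above.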
An improved numerical method for the kernel density functional estimation of disperse flow
NASA Astrophysics Data System (ADS)
Smith, Timothy; Ranjan, Reetesh; Pantano, Carlos
2014-11-01
We present an improved numerical method to solve the transport equation for the one-point particle density function (pdf), which can be used to model disperse flows. The transport equation, a hyperbolic partial differential equation (PDE) with a source term, is derived from the Lagrangian equations for a dilute particle system by treating position and velocity as state-space variables. The method approximates the pdf by a discrete mixture of kernel density functions (KDFs) with space- and time-varying parameters and performs a global Rayleigh-Ritz-like least-squares minimization on the velocity state space. Such an approximation leads to a hyperbolic system of PDEs for the KDF parameters that cannot be written completely in conservation form. This system is solved using a numerical method that is path-consistent, according to the theory of non-conservative hyperbolic equations. The resulting formulation is a Roe-like update that utilizes the local eigensystem information of the linearized system of PDEs. We will present the formulation of the base method, its higher-order extension and further regularization to demonstrate that the method can predict statistics of disperse flows in an accurate, consistent and efficient manner. This project was funded by NSF Project NSF-DMS 1318161.
Rao-Blackwellization for Adaptive Gaussian Sum Nonlinear Model Propagation
NASA Technical Reports Server (NTRS)
Semper, Sean R.; Crassidis, John L.; George, Jemin; Mukherjee, Siddharth; Singla, Puneet
2015-01-01
When dealing with imperfect data and general models of dynamic systems, the best estimate is always sought in the presence of uncertainty or unknown parameters. In many cases, as a first attempt, the Extended Kalman filter (EKF) provides sufficient solutions to the issues arising from nonlinear and non-Gaussian estimation problems, but these issues may lead to unacceptable performance and even divergence. In order to accurately capture the nonlinearities of most real-world dynamic systems, advanced filtering methods have been created to reduce filter divergence while enhancing performance. Approaches such as Gaussian sum filtering, grid-based Bayesian methods and particle filters are well-known examples of advanced methods used to represent and recursively reproduce an approximation to the state probability density function (pdf). Some of these filtering methods were conceptually developed years before their widespread use was realized. Advanced nonlinear filtering methods currently benefit from advancements in computational speed, memory, and parallel processing. Grid-based methods, multiple-model approaches and Gaussian sum filtering are numerical solutions that take advantage of different state coordinates or multiple-model methods to reduce the number of approximations used. Choosing an efficient grid is very difficult for multi-dimensional state spaces, and oftentimes expensive computations must be done at each point. For the original Gaussian sum filter, a weighted sum of Gaussian density functions approximates the pdf but suffers at the update step in the selection of the individual component weights. In order to improve upon the original Gaussian sum filter, Ref. [2] introduces a weight update approach at the filter propagation stage instead of the measurement update stage. This weight update is performed by minimizing the integral square difference between the true forecast pdf and its Gaussian sum approximation.
By adaptively updating each component weight during the nonlinear propagation stage, an approximation of the true pdf can be successfully reconstructed. Particle filtering (PF) methods have gained popularity recently for solving nonlinear estimation problems due to their straightforward approach and the processing capabilities mentioned above. The basic concept behind PF is to represent any pdf as a set of random samples. As the number of samples increases, they will theoretically converge to the exact, equivalent representation of the desired pdf. When the estimated qth moment is needed, the samples are used for its construction, allowing further analysis of the pdf characteristics. However, filter performance deteriorates as the dimension of the state vector increases. To overcome this problem, Ref. [5] applies a marginalization technique for PF methods, decomposing the system into one linear and one nonlinear state estimation problem. The marginalization theory was originally developed by Rao and Blackwell independently. According to Ref. [6] it improves any given estimator under every convex loss function. The improvement comes from calculating a conditional expected value, often involving integrating out a supportive statistic. In other words, Rao-Blackwellization allows for smaller but separate computations to be carried out while reaching the main objective of the estimator. In the case of improving an estimator's variance, any supporting statistic can be removed and its variance determined. Next, any other information that depends on the supporting statistic is found along with its respective variance. A new approach is developed here by utilizing the strengths of the adaptive Gaussian sum propagation in Ref. [2] and a marginalization approach used for PF methods found in Ref. [7].
In the following sections a modified filtering approach is presented based on a special state-space model within nonlinear systems to reduce the dimensionality of the optimization problem in Ref. [2]. First, the adaptive Gaussian sum propagation is explained and then the new marginalized adaptive Gaussian sum propagation is derived. Finally, an example simulation is presented.
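The propagation-stage weight update of Ref. [2] can be illustrated with a crude stand-in: given fixed Gaussian components, choose weights that minimize the squared difference to a target pdf on a grid, then clip and renormalize in place of the constrained quadratic program. All densities and components below are illustrative:

```python
import numpy as np

def gauss(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

# Target pdf standing in for the "true forecast pdf": a skewed two-mode mixture.
x = np.linspace(-6.0, 8.0, 2001)
dx = x[1] - x[0]
target = 0.7 * gauss(x, 0.0, 1.0) + 0.3 * gauss(x, 3.0, 0.7)

# Fixed Gaussian components (think: components already propagated through the
# dynamics); only their weights are updated here.
means, sigmas = [-1.0, 0.0, 1.5, 3.0], [1.0, 1.0, 1.0, 0.7]
G = np.stack([gauss(x, m, s) for m, s in zip(means, sigmas)], axis=1)

# Unconstrained L2 fit on the grid, then a crude projection onto valid weights
# (clip negatives, renormalize) instead of a constrained quadratic program.
w, *_ = np.linalg.lstsq(G, target, rcond=None)
w = np.clip(w, 0.0, None)
w /= w.sum()

l2_fit = (((G @ w) - target) ** 2).sum() * dx
l2_uniform = (((G @ np.full(4, 0.25)) - target) ** 2).sum() * dx
print(w, l2_fit, l2_uniform)  # fitted weights give the smaller L2 error
```

The marginalization idea of the paper then amounts to performing such an update only over the nonlinear sub-state, shrinking the dimension of this minimization.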
Interference of Peritoneal Dialysis Fluids with Cell Cycle Mechanisms
Büchel, Janine; Bartosova, Maria; Eich, Gwendolyn; Wittenberger, Timo; Klein-Hitpass, Ludger; Steppan, Sonja; Hackert, Thilo; Schaefer, Franz; Passlick-Deetjen, Jutta; Schmitt, Claus P.
2015-01-01
♦ Introduction: Peritoneal dialysis fluids (PDF) differ with respect to osmotic and buffer compound, pH, and glucose degradation products (GDP) content. The impact on peritoneal membrane integrity is still insufficiently described. We assessed global genomic effects of PDF in primary human peritoneal mesothelial cells (PMC) by whole genome analyses, quantitative real-time polymerase chain reaction (RT-PCR) and functional measurements. ♦ Methods: PMC isolated from omentum of non-uremic patients were incubated with conventional single-chamber PDF (CPDF), lactate- (LPDF), bicarbonate- (BPDF) and bicarbonate/lactate-buffered double-chamber PDF (BLPDF), icodextrin (IPDF) and amino acid PDF (APDF), diluted 1:1 with medium. Affymetrix GeneChip U133Plus2.0 (Affymetrix, CA, USA) and quantitative RT-PCR were applied; cell viability was assessed by proliferation assays. ♦ Results: The number of differentially expressed genes compared to medium was 464 with APDF, 208 with CPDF, 169 with IPDF, 71 with LPDF, 45 with BPDF and 42 with BLPDF. Of these genes, 74%, 73%, 79%, 72%, 47% and 57%, respectively, were downregulated. Gene Ontology (GO) term annotations mainly revealed associations with cell cycle (p = 10^-35), cell division, mitosis, and DNA replication. One hundred and eighteen out of 249 probe sets detecting genes involved in cell cycle/division were suppressed, with APDF-treated PMC being affected the most in absolute number and degree, followed by CPDF and IPDF. BPDF- and BLPDF-treated PMC were affected the least. Quantitative RT-PCR measurements confirmed the microarray findings for key cell cycle genes (CDK1/CCNB1/CCNE2/AURKA/KIF11/KIF14). Suppression was lowest for BPDF and BLPDF, which upregulated CCNE2 and SMC4. All PDF upregulated 3 out of 4 assessed cell cycle repressors (p53/BAX/p21). Cell viability scores confirmed the gene expression results, being 79% of medium for LPDF, 101% for BLPDF, 51% for CPDF and 23% for IPDF.
Amino acid-containing PDF (84%) incubated cells were as viable as BPDF (86%). ♦ Conclusion: In conclusion, PD solutions substantially differ with regard to their gene regulating profile and impact on vital functions of PMC, i.e. on cells known to be essential for peritoneal membrane homeostasis. PMID:25082841
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang Fan; Medical Physics Graduate Program, Duke University, Durham, North Carolina; Hu Jing
2012-11-01
Purpose: To evaluate the reproducibility of the tumor motion probability distribution function (PDF) in stereotactic body radiation therapy (SBRT) of lung cancer using cine megavoltage (MV) images. Methods and Materials: Cine MV images of 20 patients acquired during three-dimensional conformal (6-11 beams) SBRT treatments were retrospectively analyzed to extract tumor motion trajectories. For each patient, tumor motion PDFs were generated per fraction (PDF_n) using three selected 'usable' beams. Patients without at least three usable beams were excluded from the study. Fractional PDF reproducibility (R_n) was calculated as the Dice similarity coefficient between PDF_n and a 'ground-truth' PDF (PDF_g), which was generated using the selected beams of all fractions. The mean of R_n, labeled R_m, was calculated for each patient and correlated to the patient's mean tumor motion range (A_m). The change of R_m during the course of SBRT treatments was also evaluated. Intra- and intersubject coefficients of variation (CV) of R_m and A_m were determined. Results: Thirteen patients had at least three usable beams and were analyzed. The mean of R_m was 0.87 (range, 0.84-0.95). The mean of A_m was 3.18 mm (range, 0.46-7.80 mm). R_m was found to decrease as A_m increases, following the equation R_m = 0.17e^(-0.9 A_m) + 0.84. R_m also decreased slightly throughout the course of treatments. The intersubject CV of R_m (0.05) was comparable to the intrasubject CV of R_m (range, 0.02-0.09); the intersubject CV of A_m (0.73) was significantly greater than the intrasubject CV of A_m (range, 0.09-0.24). Conclusions: Tumor motion PDFs can be determined using cine MV images acquired during the treatments. The reproducibility of the lung tumor motion PDF decreased exponentially as the tumor motion range increased, and decreased slightly throughout the course of the treatments.
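The reproducibility metric above is the Dice similarity coefficient between binned motion PDFs. A minimal sketch, using one common histogram formulation of the Dice coefficient and synthetic sinusoidal "trajectories" (bin edges and motion traces are illustrative):

```python
import numpy as np

def motion_pdf(positions, edges):
    """Bin a tumor-motion trajectory (mm) into a normalized histogram."""
    counts, _ = np.histogram(positions, bins=edges)
    return counts / counts.sum()

def dice(p, q):
    """Dice similarity of two binned pdfs: 2*overlap / (mass_p + mass_q)."""
    return 2.0 * np.minimum(p, q).sum() / (p.sum() + q.sum())

rng = np.random.default_rng(1)
edges = np.linspace(-10.0, 10.0, 41)      # 0.5 mm bins, illustrative

fraction1 = 3.0 * np.sin(np.linspace(0, 60, 2000)) + rng.normal(0, 0.3, 2000)
fraction2 = 3.0 * np.sin(np.linspace(0, 60, 2000)) + rng.normal(0, 0.3, 2000)
shifted = fraction1 + 4.0                 # a systematic baseline shift

p1, p2, p3 = (motion_pdf(m, edges) for m in (fraction1, fraction2, shifted))
print(dice(p1, p2), dice(p1, p3))  # similar fractions overlap far more
```

Identical PDFs give a Dice coefficient of 1; a baseline shift between fractions drives it down, mirroring the paper's reproducibility analysis.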
Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study
NASA Astrophysics Data System (ADS)
Troudi, Molka; Alimi, Adel M.; Saoudi, Samir
2008-12-01
The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon [InlineEquation not available: see fulltext.], which is linked to the second-order derivative of the pdf. As we introduce an analytical approximation of [InlineEquation not available: see fulltext.], the pdf is estimated only once, at the end of the iterations. These two kinds of algorithm are tested on different random variables having distributions known to be difficult to estimate. Finally, they are applied to genetic data in order to provide a better characterisation of the neutrality of Tunisian Berber populations.
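For orientation, the quantity being optimized is the kernel bandwidth h. The sketch below uses the closed-form rule-of-thumb bandwidth obtained when the second-derivative roughness term is evaluated for a Gaussian reference density; the paper's contribution, a faster analytical plug-in iteration, is not reproduced here:

```python
import numpy as np

def kde_gauss(x_eval, data, h):
    """Gaussian kernel density estimate at the points x_eval."""
    u = (x_eval[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

def rule_of_thumb_bandwidth(data):
    # h = (4/(3n))^(1/5) * sigma, the AMISE-optimal bandwidth when the pdf's
    # second-derivative roughness is that of a Gaussian; true plug-in methods
    # instead estimate that roughness from the data iteratively.
    return (4.0 / (3.0 * len(data))) ** 0.2 * data.std(ddof=1)

rng = np.random.default_rng(2)
data = rng.normal(0.0, 1.0, 500)
h = rule_of_thumb_bandwidth(data)

x = np.linspace(-4.0, 4.0, 201)
est = kde_gauss(x, data, h)
true = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)
ise = ((est - true) ** 2).sum() * (x[1] - x[0])   # integrated square error
print(h, ise)
```

The MISE-driven choice of h is exactly what the plug-in iteration refines when the underlying pdf is not Gaussian.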
NASA Technical Reports Server (NTRS)
Drozda, Tomasz G.; Quinlan, Jesse R.; Pisciuneri, Patrick H.; Yilmaz, S. Levent
2012-01-01
Significant progress has been made in the development of subgrid scale (SGS) closures based on a filtered density function (FDF) for large eddy simulations (LES) of turbulent reacting flows. The FDF is the counterpart of the probability density function (PDF) method, which has proven effective in Reynolds-averaged simulations (RAS). However, while systematic progress is being made advancing the FDF models for relatively simple flows and lab-scale flames, the application of these methods in complex geometries and high-speed, wall-bounded flows with shocks remains a challenge. The key difficulties are the significant computational cost associated with solving the FDF transport equation and the numerically stiff finite-rate chemistry. For LES/FDF methods to make a more significant impact in practical applications, a pragmatic approach must be taken that significantly reduces the computational cost while maintaining high modeling fidelity. An example of one such ongoing effort is at the NASA Langley Research Center, where the first-generation FDF models, namely the scalar filtered mass density function (SFMDF), are being implemented into VULCAN, a production-quality RAS and LES solver widely used for the design of high-speed propulsion flowpaths.
This effort leverages internal and external collaborations to reduce the overall computational cost of high-fidelity simulations in VULCAN by: implementing high-order methods that allow a reduction in the total number of computational cells without loss of accuracy; implementing a first generation of high-fidelity scalar PDF/FDF models applicable to high-speed compressible flows; coupling RAS/PDF and LES/FDF into a hybrid framework to efficiently and accurately model the effects of combustion in the vicinity of the walls; developing efficient Lagrangian particle tracking algorithms to support robust solutions of the FDF equations for high-speed flows; and utilizing finite-rate chemistry parametrizations, such as flamelet models, to reduce the number of transported reactive species and remove numerical stiffness. This paper briefly introduces the SFMDF model (highlighting key benefits and challenges), and discusses particle tracking for flows with shocks, the hybrid coupled RAS/PDF and LES/FDF model, the flamelet generated manifolds (FGM) model, and the Irregularly Portioned Lagrangian Monte Carlo Finite Difference (IPLMCFD) methodology for scalable simulation of high-speed reacting compressible flows.
Vecchiato, G; Di Flumeri, G; Maglione, A G; Cherubino, P; Kong, W; Trettel, A; Babiloni, F
2014-01-01
Nowadays, there is growing interest in measuring the impact of advertisements through the estimation of cerebral reactions, and several techniques and methods are used and discussed in consumer neuroscience. In this context, the present paper provides a novel method to estimate the level of memorization occurring in subjects during the observation of TV commercials. In particular, the present work introduces the Peak Density Function (PDF), an electroencephalographic (EEG) time-varying variable that is correlated with cerebral events of memorization of TV commercials. The analysis was performed on the EEG activity recorded from twenty healthy subjects during exposure to several advertisements. After the EEG recordings, an interview was performed to obtain information about the memorized scenes for all the video clips watched by the subjects. This information was correlated with the occurrence of transient peaks of EEG synchronization in the theta band by computing the PDF. The present results show that the increase of the PDF is positively correlated, scene by scene (R=0.46, p<0.01), with the spontaneous recall of the subjects. This technology could help marketers overcome the drawbacks of standard marketing tools (e.g., interviews, focus groups) when analyzing the impact of advertisements.
New stochastic approach for extreme response of slow drift motion of moored floating structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kato, Shunji; Okazaki, Takashi
1995-12-31
A new stochastic method for investigating the slow drift response statistics of moored floating structures is described. Assuming that the wave drift excitation process can be driven by a Gaussian white noise process, an exact stochastic equation governing the time evolution of the response probability density function (PDF) is derived on the basis of the projection operator technique from statistical physics. In order to obtain an approximate solution of this generalized Fokker-Planck (GFP) equation, the authors develop a renormalized perturbation technique, a kind of singular perturbation method, and solve the GFP equation taking into account up to third-order moments of a non-Gaussian excitation. As an example of the present method, a closed form of the joint PDF is derived for the linear response in surge motion subjected to a non-Gaussian wave drift excitation; it is represented by the product of a form factor and quasi-Cauchy PDFs. In this case, the motion displacement and velocity processes are not mutually independent if the excitation process has a significant third-order moment. From a comparison between the response PDF given by the present solution and the exact one derived by Naess, it is found that the present solution is effective for calculating both the response PDF and the joint PDF. Furthermore, it is shown that displacement-velocity independence is satisfied if the damping coefficient in the equation of motion is not too large, and that both the non-Gaussian property of the excitation and the damping coefficient should be taken into account when estimating the exceedance probability of the response.
PDF receptor signaling in Caenorhabditis elegans modulates locomotion and egg-laying.
Meelkop, Ellen; Temmerman, Liesbet; Janssen, Tom; Suetens, Nick; Beets, Isabel; Van Rompay, Liesbeth; Shanmugam, Nilesh; Husson, Steven J; Schoofs, Liliane
2012-09-25
In Caenorhabditis elegans, pdfr-1 encodes three receptors of the secretin receptor family. These G protein-coupled receptors are activated by three neuropeptides, pigment dispersing factors 1a, 1b and 2, which are encoded by pdf-1 and pdf-2. We isolated a PDF receptor loss-of-function allele (lst34) by means of a mutagenesis screen and show that the PDF signaling system is involved in locomotion and egg-laying. We demonstrate that the pdfr-1 mutant phenocopies the defective locomotor behavior of the pdf-1 mutant and that pdf-1 and pdf-2 behave antagonistically. All three PDF receptor splice variants are involved in the regulation of locomotor behavior. Cell specific rescue experiments show that this pdf mediated behavior is regulated by neurons rather than body wall muscles. We also show that egg-laying patterns of pdf-1 and pdf-2 mutants are affected, but not those of pdfr-1 mutants, pointing to a novel role for the PDF-system in the regulation of egg-laying. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
McKague, Darren Shawn
2001-12-01
The statistical properties of clouds and precipitation on a global scale are important to our understanding of climate. Inversion methods exist to retrieve the needed cloud and precipitation properties from satellite data pixel by pixel; these can then be summarized over large data sets to obtain the desired statistics. Such methods can be quite computationally expensive, and typically do not provide errors on the statistics. A new method is developed to retrieve probability distributions of parameters directly from the distribution of measured radiances. The method also provides estimates of the errors on the retrieved distributions, and it can retrieve joint distributions of parameters, which allows for the study of the connection between parameters. A forward radiative transfer model creates a mapping from retrieval parameter space to radiance space. A Monte Carlo procedure uses the mapping to transform probability density from the observed radiance histogram to a two-dimensional retrieval-property probability distribution function (PDF). An estimate of the uncertainty in the retrieved PDF is calculated from random realizations of the radiance-to-retrieval-parameter PDF transformation, given the uncertainty of the observed radiances, the radiance PDF, the forward radiative transfer, the finite number of prior state vectors, and the non-unique mapping to retrieval parameter space. The retrieval method is also applied to the remote sensing of precipitation from SSM/I microwave data. A method of stochastically generating hydrometeor fields based on the fields from a numerical cloud model is used to create the precipitation-parameter-to-radiance-space transformation. Vertical and horizontal variability within the hydrometeor fields has a significant impact on algorithm performance. Beamfilling factors are computed from the simulated hydrometeor fields; they vary considerably depending on the horizontal structure of the rain.
The algorithm is applied to SSM/I images from the eastern tropical Pacific and is compared to PDFs of rain rate computed using pixel-by-pixel retrievals from Wilheit and from Liu and Curry. Differences exist between the three methods, but good general agreement is seen between the PDF retrieval algorithm and the algorithm of Liu and Curry. (Abstract shortened by UMI.)
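The core transformation (radiance histogram to parameter PDF) can be sketched in one dimension with a toy monotonic forward model: resample the observed radiances, invert each sample through a forward-model lookup, and histogram the mapped values. The forward model and rain-rate distribution below are illustrative, not the SSM/I algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy forward model: radiance as a monotonic function of rain rate (illustrative).
def forward(rain_rate):
    return 200.0 + 60.0 * (1.0 - np.exp(-rain_rate / 5.0))

# "Observed" radiance histogram, generated here from a hidden rain-rate pdf.
true_rain = rng.gamma(2.0, 2.0, 50_000)
radiance_obs = forward(true_rain) + rng.normal(0.0, 0.5, true_rain.size)

# Lookup table from radiance back to rain rate, built on a grid of prior states.
grid_rain = np.linspace(0.0, 40.0, 4001)
grid_rad = forward(grid_rain)

# Monte Carlo: resample the radiance histogram, invert via interpolation,
# and histogram the mapped samples to obtain the retrieved rain-rate pdf.
samples = rng.choice(radiance_obs, size=20_000, replace=True)
mapped = np.interp(samples, grid_rad, grid_rain)
pdf, edges = np.histogram(mapped, bins=40, range=(0.0, 20.0), density=True)

print(mapped.mean(), true_rain.mean())  # retrieved mean close to the true mean
```

Repeating the resampling step with perturbed radiances is what yields the error bars on the retrieved PDF described above.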
GW182 controls Drosophila circadian behavior and PDF-Receptor signaling
Zhang, Yong; Emery, Patrick
2013-01-01
The neuropeptide PDF is crucial for Drosophila circadian behavior: it keeps circadian neurons synchronized. Here, we identify GW182 as a key regulator of PDF signaling. Indeed, GW182 downregulation results in phenotypes similar to those of Pdf and Pdf-receptor (Pdfr) mutants. gw182 genetically interacts with Pdfr and cAMP signaling, which is essential for PDFR function. GW182 mediates miRNA-dependent gene silencing through its interaction with AGO1. Consistently, GW182's AGO1 interaction domain is required for GW182's circadian function. Moreover, our results indicate that GW182 modulates PDFR signaling by silencing the expression of the cAMP phosphodiesterase DUNCE. Importantly, this repression is under photic control, and GW182 activity level - which is limiting in circadian neurons - influences the responses of the circadian neural network to light. We propose that GW182's gene silencing activity functions as a rheostat for PDFR signaling, and thus profoundly impacts the circadian neural network and its response to environmental inputs. PMID:23583112
Kinetic and dynamic probability-density-function descriptions of disperse turbulent two-phase flows
NASA Astrophysics Data System (ADS)
Minier, Jean-Pierre; Profeta, Christophe
2015-11-01
This article analyzes the status of two classical one-particle probability density function (PDF) descriptions of the dynamics of discrete particles dispersed in turbulent flows. The first PDF formulation considers only the process made up of particle position and velocity Zp=(xp,Up) and is represented by its PDF p(t; yp,Vp), which is the solution of a kinetic PDF equation obtained through a flux closure based on the Furutsu-Novikov theorem. The second PDF formulation includes fluid variables in the particle state vector, for example, the fluid velocity seen by particles Zp=(xp,Up,Us), and, consequently, handles an extended PDF p(t; yp,Vp,Vs), which is the solution of a dynamic PDF equation. For high-Reynolds-number fluid flows, a typical formulation of the latter category relies on a Langevin model for the trajectories of the fluid seen or, conversely, on a Fokker-Planck equation for the extended PDF. In the present work, a new derivation of the kinetic PDF equation is worked out and new physical expressions of the dispersion tensors entering the kinetic PDF equation are obtained by starting from the extended PDF and integrating over the fluid seen. This demonstrates that, under the same assumption of a Gaussian colored noise and irrespective of the specific stochastic model chosen for the fluid seen, the kinetic PDF description is the marginal of a dynamic PDF one. However, a detailed analysis reveals that kinetic PDF models of particle dynamics in turbulent flows described by statistical correlations constitute incomplete stand-alone PDF descriptions and, moreover, that present kinetic PDF equations are mathematically ill-posed. This is shown to be a consequence of the non-Markovian character of the stochastic process retained to describe the system and the use of an external colored noise.
Furthermore, these developments show that well-posed PDF descriptions depend essentially on a proper choice of the variables selected to describe the physical system, and guidelines are formulated to emphasize the key role played by the notion of slow and fast variables.
A Modeling and Data Analysis of Laser Beam Propagation in the Maritime Domain
2015-05-18
... approach to computing pdfs is the Kernel Density Method (Reference [9] has an introduction to the method), which we will apply to compute the pdf of our ... The project has two parts to it: 1) we present a computational analysis of different probability density function approximation techniques; and 2) we introduce preliminary steps towards developing a ...
Electron transfer statistics and thermal fluctuations in molecular junctions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goswami, Himangshu Prabal; Harbola, Upendra
2015-02-28
We derive analytical expressions for the probability distribution function (PDF) for electron transport in a simple model of a quantum junction in the presence of thermal fluctuations. Our approach is based on large deviation theory combined with the generating function method. For a large number of electrons transferred, the PDF is found to decay exponentially in the tails, with different rates due to the applied bias. This asymmetry in the PDF is related to the fluctuation theorem. Statistics of fluctuations are analyzed in terms of the Fano factor. Thermal fluctuations play a quantitative role in determining the statistics of electron transfer; they tend to suppress the average current while enhancing the fluctuations in particle transfer. This gives rise to both bunching and antibunching phenomena, as determined by the Fano factor. The thermal fluctuations and shot noise compete with each other and determine the net (effective) statistics of particle transfer. An exact analytical expression is obtained for the delay-time distribution. The optimal values of the delay time between successive electron transfers can be lowered below the corresponding shot-noise values by tuning the thermal effects.
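The Fano factor F = Var(n)/&lt;n&gt; used above distinguishes sub-Poissonian (antibunched, F &lt; 1) from super-Poissonian (bunched, F &gt; 1) transfer statistics. A minimal sketch comparing uncorrelated Poisson counts with counts broadened by a slowly fluctuating rate (the rate model is illustrative, not the paper's junction model):

```python
import numpy as np

def fano(counts):
    """Fano factor: variance-to-mean ratio of transferred-electron counts."""
    return counts.var(ddof=1) / counts.mean()

rng = np.random.default_rng(4)
mean_n = 50.0

poisson = rng.poisson(mean_n, 100_000)        # uncorrelated transfer, F ~ 1
# Super-Poissonian ("bunched") counts: Poisson with a fluctuating rate,
# mimicking slow thermal fluctuations that enhance the noise.
rates = rng.gamma(10.0, mean_n / 10.0, 100_000)
bunched = rng.poisson(rates)

print(fano(poisson), fano(bunched))  # ~1 vs noticeably > 1
```

For a Poisson mixture, Var(n) = &lt;n&gt; + Var(rate), so the added rate fluctuations push F above 1, the bunching signature discussed in the abstract.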
Efficient computation of PDF-based characteristics from diffusion MR signal.
Assemlal, Haz-Edine; Tschumperlé, David; Brun, Luc
2008-01-01
We present a general method for the computation of PDF-based characteristics of the tissue micro-architecture in MR imaging. The approach relies on the approximation of the MR signal by a series expansion based on spherical harmonics and Laguerre-Gaussian functions, followed by a simple projection step that is efficiently performed in a finite-dimensional space. The resulting algorithm is generic, flexible and able to compute a large set of useful characteristics of the local tissue structure. We illustrate the effectiveness of this approach by showing results on synthetic and real MR datasets acquired in a clinical time-frame.
Velocity statistics of the Nagel-Schreckenberg model
NASA Astrophysics Data System (ADS)
Bain, Nicolas; Emig, Thorsten; Ulm, Franz-Josef; Schreckenberg, Michael
2016-02-01
The statistics of velocities in the cellular automaton model of Nagel and Schreckenberg for traffic are studied. From numerical simulations, we obtain the probability distribution function (PDF) for vehicle velocities and the velocity-velocity (vv) covariance function. We identify the probability of finding a standing vehicle as a potential order parameter that nicely signals the transition between free and congested flow for a sufficiently large number of velocity states. Our results for the vv covariance function resemble features of a second-order phase transition. We develop a 3-body approximation that allows us to relate the PDFs for velocities and headways. Using this relation, an approximation to the velocity PDF is obtained from the headway PDF observed in simulations. We find a remarkable agreement between this approximation and the velocity PDF obtained from simulations.
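The Nagel-Schreckenberg update rule (accelerate, brake to the gap, randomize, move) is compact enough to sketch directly; the velocity PDF and the standing-vehicle probability discussed above can then be read off from a simulation. Parameters (ring length, density, braking probability) are illustrative:

```python
import numpy as np

def nasch_step(pos, vel, length, vmax, p_slow, rng):
    """One parallel update of the Nagel-Schreckenberg automaton on a ring road."""
    order = np.argsort(pos)
    pos, vel = pos[order], vel[order]
    gaps = (np.roll(pos, -1) - pos - 1) % length   # empty cells to the car ahead
    vel = np.minimum(vel + 1, vmax)                # 1) accelerate
    vel = np.minimum(vel, gaps)                    # 2) brake to avoid collision
    slow = (rng.random(vel.size) < p_slow) & (vel > 0)
    vel = np.where(slow, vel - 1, vel)             # 3) random slowdown
    return (pos + vel) % length, vel               # 4) move

rng = np.random.default_rng(5)
length, n_cars, vmax, p_slow = 1000, 350, 5, 0.3   # dense: congested regime
pos = rng.choice(length, n_cars, replace=False)
vel = np.zeros(n_cars, dtype=int)

hist = np.zeros(vmax + 1)
for t in range(2000):
    pos, vel = nasch_step(pos, vel, length, vmax, p_slow, rng)
    if t >= 500:                                   # discard the transient
        hist += np.bincount(vel, minlength=vmax + 1)

pdf = hist / hist.sum()
p_standing = pdf[0]                                # candidate order parameter
print(pdf, p_standing)  # at this density a finite fraction of vehicles stand
```

At low density p_standing is essentially zero, while in the congested regime it stays finite, which is what makes it a candidate order parameter.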
Shotorban, Babak
2010-04-01
The dynamic least-squares kernel density (LSQKD) model [C. Pantano and B. Shotorban, Phys. Rev. E 76, 066705 (2007)] is used to solve the Fokker-Planck equations. In this model the probability density function (PDF) is approximated by a linear combination of basis functions with unknown parameters whose governing equations are determined by a global least-squares approximation of the PDF in the phase space. In this work basis functions are set to be Gaussian for which the mean, variance, and covariances are governed by a set of partial differential equations (PDEs) or ordinary differential equations (ODEs) depending on what phase-space variables are approximated by Gaussian functions. Three sample problems of univariate double-well potential, bivariate bistable neurodynamical system [G. Deco and D. Martí, Phys. Rev. E 75, 031913 (2007)], and bivariate Brownian particles in a nonuniform gas are studied. The LSQKD is verified for these problems as its results are compared against the results of the method of characteristics in nondiffusive cases and the stochastic particle method in diffusive cases. For the double-well potential problem it is observed that for low to moderate diffusivity the dynamic LSQKD well predicts the stationary PDF for which there is an exact solution. A similar observation is made for the bistable neurodynamical system. In both these problems least-squares approximation is made on all phase-space variables resulting in a set of ODEs with time as the independent variable for the Gaussian function parameters. In the problem of Brownian particles in a nonuniform gas, this approximation is made only for the particle velocity variable leading to a set of PDEs with time and particle position as independent variables. Solving these PDEs, a very good performance by LSQKD is observed for a wide range of diffusivities.
Yao, Rutao; Ramachandra, Ranjith M.; Mahajan, Neeraj; Rathod, Vinay; Gunasekar, Noel; Panse, Ashish; Ma, Tianyu; Jian, Yiqiang; Yan, Jianhua; Carson, Richard E.
2012-01-01
To achieve optimal PET image reconstruction through better system modeling, we developed a system matrix that is based on the probability density function for each line of response (LOR-PDF). The LOR-PDFs are grouped by LOR-to-detector incident angles to form a highly compact system matrix. The system matrix was implemented in the MOLAR list-mode reconstruction algorithm for a small animal PET scanner. The impact of the LOR-PDF on reconstructed image quality was assessed qualitatively as well as quantitatively in terms of contrast recovery coefficient (CRC) and coefficient of variation (COV), and its performance was compared with a fixed Gaussian (iso-Gaussian) line spread function. The LOR-PDFs of 3 coincidence-signal-emitting sources, 1) an ideal positron emitter that emits perfect back-to-back γ rays (γγ) in air; 2) fluorine-18 (18F) nuclide in water; and 3) oxygen-15 (15O) nuclide in water, were derived and assessed with simulated and experimental phantom data. The derived LOR-PDFs showed anisotropic and asymmetric characteristics dependent on LOR-detector angle, coincidence emitting source, and the medium, consistent with common PET physical principles. The comparison of the iso-Gaussian function and the LOR-PDF showed that: 1) without positron range and acolinearity effects, the LOR-PDF achieved better or similar trade-offs of contrast recovery and noise for objects of 4-mm radius or larger, and this advantage extended to smaller objects (e.g. 2-mm radius sphere, 0.6-mm radius hot-rods) at higher iteration numbers; and 2) with positron range and acolinearity effects, the iso-Gaussian achieved similar or better resolution recovery depending on the significance of the positron range effect. We conclude that the 3-D LOR-PDF approach is an effective method to generate an accurate and compact system matrix. However, when used directly in expectation-maximization based list-mode iterative reconstruction algorithms such as MOLAR, its superiority is not clear.
For this application, using an iso-Gaussian function in MOLAR is a simple but effective technique for PET reconstruction. PMID:23032702
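The two image-quality metrics used in this assessment can be written down compactly. A minimal sketch, assuming one common convention for CRC (measured contrast of a hot region over its true contrast) and COV (standard deviation over mean in a background ROI); the ROI values below are invented for illustration, not from the study:

```python
import numpy as np

def crc(hot_mean, bkg_mean, true_ratio):
    """Contrast recovery coefficient: measured contrast of a hot
    region relative to its known true contrast."""
    return (hot_mean / bkg_mean - 1.0) / (true_ratio - 1.0)

def cov(bkg_values):
    """Coefficient of variation (noise) over a background ROI."""
    vals = np.asarray(bkg_values, dtype=float)
    return vals.std(ddof=1) / vals.mean()

# Invented ROI means for a phantom with a true 4:1 hot-sphere contrast
bkg = [0.95, 1.02, 1.05, 0.98, 1.00]
print(crc(3.4, np.mean(bkg), 4.0))  # ~0.8: 80% of the contrast recovered
print(cov(bkg))                     # ~0.038: background noise level
```

Plotting CRC against COV over iteration number is the usual way to compare reconstruction trade-off curves such as LOR-PDF versus iso-Gaussian.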
A New LES/PDF Method for Computational Modeling of Turbulent Reacting Flows
NASA Astrophysics Data System (ADS)
Turkeri, Hasret; Muradoglu, Metin; Pope, Stephen B.
2013-11-01
A new LES/PDF method is developed for computational modeling of turbulent reacting flows. The open-source package OpenFOAM is adopted as the LES solver and combined with the particle-based Monte Carlo method to solve the LES/PDF model equations. The dynamic Smagorinsky model is employed to account for the subgrid-scale motions. The LES solver is first validated for the Sandia Flame D using a steady flamelet method in which the chemical compositions, density and temperature fields are parameterized by the mean mixture fraction and its variance. In this approach, the modeled transport equations for the mean mixture fraction and the mean square of the mixture fraction are solved, and the variance is then computed from its definition. The results are found to be in good agreement with the experimental data. The LES solver is then combined with the particle-based Monte Carlo algorithm to form a complete solver for the LES/PDF model equations. The in situ adaptive tabulation (ISAT) algorithm is incorporated into the LES/PDF method for efficient implementation of detailed chemical kinetics. The LES/PDF method is also applied to the Sandia Flame D using the GRI-Mech 3.0 chemical mechanism, and the results are compared with the experimental data and the earlier PDF simulations. Supported by the Scientific and Technical Research Council of Turkey (TUBITAK), Grant No. 111M067.
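The variance-from-definition step mentioned in this abstract (transporting the mean mixture fraction and its mean square, then forming the variance) reduces to a one-liner; the cell values below are invented for illustration:

```python
import numpy as np

# Invented transported quantities on three cells: the mean mixture
# fraction <f> and the mean of its square <f^2>.
f_mean = np.array([0.10, 0.35, 0.50])
f_sq_mean = np.array([0.015, 0.140, 0.270])

# Variance from its definition, var(f) = <f^2> - <f>^2,
# clipped to stay non-negative against discretization error.
f_var = np.maximum(f_sq_mean - f_mean ** 2, 0.0)  # 0.005, 0.0175, 0.02
```

In a flamelet lookup, (f_mean, f_var) on each cell would then index the tabulated composition, density and temperature.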
DOE Office of Scientific and Technical Information (OSTI.GOV)
Higginson, Drew P.
2017-08-12
Here, we describe and justify a full-angle scattering (FAS) method to faithfully reproduce the accumulated differential angular Rutherford scattering probability distribution function (pdf) of particles in a plasma. The FAS method splits the scattering events into two regions. At small angles it is described by cumulative scattering events resulting, via the central limit theorem, in a Gaussian-like pdf; at larger angles it is described by single-event scatters and retains a pdf that follows the form of the Rutherford differential cross-section. The FAS method is verified using discrete Monte-Carlo scattering simulations run at small timesteps to include each individual scattering event. We identify the FAS regime of interest as where the ratio of temporal/spatial scale-of-interest to slowing-down time/length is from 10^-3 to 0.3–0.7; the upper limit corresponds to a Coulomb logarithm of 20–2, respectively. Two test problems, high-velocity interpenetrating plasma flows and keV-temperature ion equilibration, are used to highlight systems where including FAS is important to capture relevant physics.
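The two-region split described above can be sketched as a toy sampler. This is not the FAS algorithm itself: the cutoff angle, Gaussian width and tail probability below are arbitrary illustrative numbers, and the tail uses the small-angle Rutherford form p(θ) ∝ θ⁻³ rather than the full differential cross-section:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_deflection(theta_cut, sigma_gauss, p_tail, n):
    """Toy full-angle scattering step: with probability (1 - p_tail)
    draw a cumulative small-angle deflection (Gaussian, per the central
    limit theorem); otherwise draw one large-angle single-event scatter
    from a Rutherford-like tail p(theta) ~ theta**-3 on [theta_cut, pi]."""
    out = np.empty(n)
    tail = rng.random(n) < p_tail
    out[~tail] = np.abs(rng.normal(0.0, sigma_gauss, size=(~tail).sum()))
    # Inverse-CDF sampling of p ~ theta**-3 truncated to [theta_cut, pi]
    u = rng.random(tail.sum())
    a, b = theta_cut ** -2.0, np.pi ** -2.0
    out[tail] = (a - u * (a - b)) ** -0.5
    return out

angles = sample_deflection(0.05, 0.01, 0.02, 100_000)
```

A histogram of `angles` shows the Gaussian bulk with the power-law shoulder that a pure multiple-scattering (Gaussian-only) model would miss.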
Conservational PDF Equations of Turbulence
NASA Technical Reports Server (NTRS)
Shih, Tsan-Hsing; Liu, Nan-Suey
2010-01-01
Recently we have revisited the traditional probability density function (PDF) equations for the velocity and species in turbulent incompressible flows. They are all unclosed due to the appearance of various conditional means, which are modeled empirically. However, we have observed that it is possible to establish a closed velocity PDF equation and a closed joint velocity and species PDF equation through conditions derived from the integral form of the Navier-Stokes equations. Although, in theory, the resulting PDF equations are neither general nor unique, they nevertheless lead to the exact transport equations for the first moment as well as all higher-order moments. We refer to these PDF equations as the conservational PDF equations. This observation is worth further exploration for its validity and CFD applications.
Cole, Jacqueline M.; Cheng, Xie; Payne, Michael C.
2016-10-18
The use of principal component analysis (PCA) to statistically infer features of local structure from experimental pair distribution function (PDF) data is assessed on a case study of rare-earth phosphate glasses (REPGs). Such glasses, co-doped with two rare-earth ions (R and R') of different sizes and optical properties, are of interest to the laser industry. The determination of structure-property relationships in these materials is an important aspect of their technological development. Yet, realizing the local structure of co-doped REPGs presents significant challenges relative to their singly-doped counterparts; specifically, R and R' are difficult to distinguish in terms of establishing relative material compositions, identifying atomic pairwise correlation profiles in a PDF that are associated with each ion, and resolving peak overlap of such profiles in PDFs. This study demonstrates that PCA can be employed to help overcome these structural complications, by statistically inferring trends in PDFs that exist for a restricted set of experimental data on REPGs, and using these as training data to predict material compositions and PDF profiles in unknown co-doped REPGs. The application of these PCA methods to resolve individual atomic pairwise correlations in t(r) signatures is also presented. The training methods developed for these structural predictions are pre-validated by testing their ability to reproduce known physical phenomena, such as the lanthanide contraction, on PDF signatures of the structurally simpler singly-doped REPGs. The intrinsic limitations of applying PCA to analyze PDFs relative to the quality control of source data, data processing, and sample definition, are also considered. Furthermore, while this case study is limited to lanthanide-doped REPGs, this type of statistical inference may easily be extended to other inorganic solid-state materials, and be exploited in large-scale data-mining efforts that probe many t(r) functions.
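The train-then-infer idea (fit PCA to PDFs from known compositions, then read composition off the component scores) can be illustrated on synthetic curves. The Gaussian "PDF peaks" and their composition dependence below are fabricated for the sketch, not REPG data:

```python
import numpy as np

# Synthetic training set: each row is a t(r)-like curve on a common
# r-grid whose peak position shifts with a composition parameter x.
r = np.linspace(1.0, 10.0, 200)
x_train = np.array([0.0, 0.25, 0.5, 0.75, 1.0])   # e.g. R':(R+R') ratio
curves = np.array([np.exp(-(r - 2.3 - 0.2 * x) ** 2 / 0.05)
                   for x in x_train])

# PCA via SVD of the mean-centred data matrix
mean = curves.mean(axis=0)
U, S, Vt = np.linalg.svd(curves - mean, full_matrices=False)
scores = U * S          # projection of each curve onto the components

# The leading score tracks composition, so an unknown curve projected
# onto Vt[0] can be placed on the same composition scale.
pc1 = scores[:, 0]
```

This is the sense in which a restricted training set lets PCA "predict" the composition of an unknown co-doped sample from its measured PDF.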
Improvements and new features in the PDF module
NASA Technical Reports Server (NTRS)
Norris, Andrew T.
1995-01-01
This viewgraph presentation discusses what models are used in this package and what their advantages and disadvantages are, how the probability density function (PDF) model is implemented and the features of the program, and what can be expected in the future from the NASA Lewis PDF code.
Wülbeck, Corinna; Grieshaber, Eva; Helfrich-Förster, Charlotte
2009-10-01
The neuropeptide pigment-dispersing factor (PDF) plays an essential role in the circadian clock of the fruit fly Drosophila melanogaster, but many details of PDF signaling in the clock network are still unknown. We tried to interfere with PDF signaling by blocking the GTPase Shibire in PDF neurons. Shibire is an ortholog of the mammalian Dynamins and is essential for endocytosis of clathrin-coated vesicles at the plasma membrane. Such endocytosis is used for neurotransmitter reuptake by presynaptic neurons, which is a prerequisite of synaptic vesicle recycling, and receptor-mediated endocytosis in the postsynaptic neuron, which leads to signal termination. By blocking Shibire function via overexpression of a dominant negative mutant form of Shibire in PDF neurons, we slowed down the behavioral rhythm by 3 h. This effect was absent in PDF receptor null mutants, indicating that we interfered with PDF receptor-mediated endocytosis. Because we obtained similar behavioral phenotypes by increasing the PDF level in regions close to PDF neurons, we conclude that blocking Shibire did prolong PDF signaling in the neurons that respond to PDF. Obviously, terminating the PDF signaling via receptor-mediated endocytosis is a crucial step in determining the period of behavioral rhythms.
Qian, Li Jun; Zhou, Mi; Xu, Jian Rong
2008-07-01
The objective of this article is to explain an easy and effective approach for managing radiologic files in portable document format (PDF) using iTunes. PDF files are widely used as a standard file format for electronic publications as well as for medical online documents. Unfortunately, there is a lack of powerful software to manage numerous PDF documents. In this article, we explain how to use the hidden function of iTunes (Apple Computer) to manage PDF documents as easily as managing music files.
Characteristic Structure of Star-forming Clouds
NASA Astrophysics Data System (ADS)
Myers, Philip C.
2015-06-01
This paper presents a new method to diagnose the star-forming potential of a molecular cloud region from the probability density function of its column density (N-pdf). This method provides expressions for the column density and mass profiles of a symmetric filament having the same N-pdf as a filamentary region. The central concentration of this characteristic filament can distinguish regions and can quantify their fertility for star formation. Profiles are calculated for N-pdfs which are pure lognormal, pure power law, or a combination. In relation to models of singular polytropic cylinders, characteristic filaments can be unbound, bound, or collapsing depending on their central concentration. Such filamentary models of the dynamical state of N-pdf gas are more relevant to star-forming regions than are spherical collapse models. The star formation fertility of a bound or collapsing filament is quantified by its mean mass accretion rate when in radial free fall. For a given mass per length, the fertility increases with the filament mean column density and with its initial concentration. In selected regions the fertility of their characteristic filaments increases with the level of star formation.
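The N-pdf shapes this method takes as input (pure lognormal, pure power law, or a combination) can be generated numerically. The transition column, power-law index and lognormal parameters below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy column-density sample: a lognormal body with a power-law tail
# p(N) ~ N**-alpha attached above a transition column N_t.
mu, sigma = np.log(1e21), 0.4      # lognormal body (cm^-2)
N_t, alpha, n = 3e21, 2.5, 200_000  # tail transition, index, sample size

N = rng.lognormal(mu, sigma, n)
tail = N > N_t
# Replace values above the transition by inverse-CDF power-law draws
u = rng.random(tail.sum())
N[tail] = N_t * (1.0 - u) ** (-1.0 / (alpha - 1.0))

frac_tail = tail.mean()   # fraction of sightlines in the power-law tail
```

Histogramming `np.log(N)` reproduces the familiar lognormal-plus-tail N-pdf; in the paper's framework the tail fraction and slope feed into the central concentration of the characteristic filament.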
An Investigation of a Hybrid Mixing Model for PDF Simulations of Turbulent Premixed Flames
NASA Astrophysics Data System (ADS)
Zhou, Hua; Li, Shan; Wang, Hu; Ren, Zhuyin
2015-11-01
Predictive simulations of turbulent premixed flames over a wide range of Damköhler numbers in the framework of the Probability Density Function (PDF) method remain challenging due to deficiencies in current micro-mixing models. In this work, a hybrid micro-mixing model, valid in both the flamelet regime and the broken reaction zone regime, is proposed. A priori testing of this model is first performed by examining the conditional scalar dissipation rate and conditional scalar diffusion in a 3-D direct numerical simulation dataset of a temporally evolving turbulent slot jet flame of lean premixed H2-air in the thin reaction zone regime. Then, the new model is applied to PDF simulations of the Piloted Premixed Jet Burner (PPJB) flames, a set of highly sheared turbulent premixed flames featuring strong turbulence-chemistry interaction at high Reynolds and Karlovitz numbers. Supported by NSFC 51476087 and NSFC 91441202.
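The baseline that such hybrid micro-mixing models are usually built from is IEM (interaction by exchange with the mean). A minimal particle implementation, with an arbitrary constant mixing frequency rather than anything from this work, shows its defining behavior: the scalar mean is preserved while the scalar variance decays:

```python
import numpy as np

def iem_step(phi, omega, dt, c_phi=2.0):
    """IEM micro-mixing step: each particle's scalar relaxes toward
    the ensemble mean at a rate set by the mixing frequency omega."""
    return phi - 0.5 * c_phi * omega * dt * (phi - phi.mean())

rng = np.random.default_rng(2)
phi = rng.random(10_000)            # particle scalar values in [0, 1]
for _ in range(100):
    phi = iem_step(phi, omega=1.0, dt=0.05)
# The mean stays ~0.5 while the variance has decayed by several
# orders of magnitude.
```

IEM's known weakness, that it relaxes all particles toward the mean regardless of local flame structure, is exactly what motivates conditional and hybrid alternatives in the flamelet regime.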
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abeykoon, A. M. Milinda; Hu, Hefei; Wu, Lijun
2015-02-01
We explore and describe different protocols for calibrating electron pair distribution function (ePDF) measurements for quantitative studies on nano-materials. We find the most accurate approach to determine the camera-length is to use a standard calibration sample of Au nanoparticles from National Institute of Standards and Technology. Different protocols for data collection are also explored, as are possible operational errors, to find the best approaches for accurate data collection for quantitative ePDF studies.
Electrical silencing of PDF neurons advances the phase of non-PDF clock neurons in Drosophila.
Wu, Ying; Cao, Guan; Nitabach, Michael N
2008-04-01
Drosophila clock neurons exhibit self-sustaining cellular oscillations that rely in part on rhythmic transcriptional feedback loops. We have previously determined that electrical silencing of the pigment dispersing factor (PDF)-expressing lateral-ventral (LN(V)) pacemaker subset of fly clock neurons via expression of an inward-rectifier K(+) channel (Kir2.1) severely disrupts free-running rhythms of locomotor activity--most flies are arrhythmic and those that are not exhibit weak short-period rhythms--and abolishes LN(V) molecular oscillation in constant darkness. PDF is known to be an important LN(V) output signal. Here we examine the effects of electrical silencing of the LN(V) pacemakers on molecular rhythms in other, nonsilenced, subsets of clock neurons. In contrast to previously described cell-autonomous abolition of free-running molecular rhythms, we find that electrical silencing of the LN(V) pacemakers via Kir2.1 expression does not impair molecular rhythms in LN(D), DN1, and DN2 subsets of clock neurons. However, free-running molecular rhythms in these non-LN(V) clock neurons occur with advanced phase. Electrical silencing of LN(V)s phenocopies the PDF null mutation (pdf01) at both behavioral and molecular levels except for the complete abolition of free-running cellular oscillation in the LN(V)s themselves. LN(V) electrically silenced or pdf01 flies exhibit weak free-running behavioral rhythms with short period, and the molecular oscillation in non-LN(V) neurons phase advances in constant darkness. That LN(V) electrical silencing leads to the same behavioral and non-LN(V) molecular phenotypes as pdf01 suggests that persistence of LN(V) molecular oscillation in pdf01 flies has no functional effect, either on behavioral rhythms or on non-LN(V) molecular rhythms. We thus conclude that functionally relevant signals from LN(V)s to non-LN(V) clock neurons and other downstream targets rely both on PDF signaling and LN(V) electrical activity, and that LN(V)s do not ordinarily send functionally relevant signals via PDF-independent mechanisms.
PDF modeling of turbulent flows on unstructured grids
NASA Astrophysics Data System (ADS)
Bakosi, Jozsef
In probability density function (PDF) methods of turbulent flows, the joint PDF of several flow variables is computed by numerically integrating a system of stochastic differential equations for Lagrangian particles. Because the technique solves a transport equation for the PDF of the velocity and scalars, a mathematically exact treatment of advection, viscous effects and arbitrarily complex chemical reactions is possible; these processes are treated without closure assumptions. A set of algorithms is proposed to provide an efficient solution of the PDF transport equation modeling the joint PDF of turbulent velocity, frequency and concentration of a passive scalar in geometrically complex configurations. An unstructured Eulerian grid is employed to extract Eulerian statistics, to solve for quantities represented at fixed locations of the domain and to track particles. All three aspects regarding the grid make use of the finite element method. Compared to hybrid methods, the current methodology is stand-alone; it is therefore consistent both numerically and at the level of turbulence closure, without the use of consistency conditions. Since both the turbulent velocity and scalar concentration fields are represented in a stochastic way, the method allows for a direct and close interaction between these fields, which is beneficial in computing accurate scalar statistics. Boundary conditions implemented along solid bodies are of the free-slip and no-slip type without the need for ghost elements. Boundary layers at no-slip boundaries are either fully resolved down to the viscous sublayer, explicitly modeling the high anisotropy and inhomogeneity of the low-Reynolds-number wall region without damping or wall-functions, or specified via logarithmic wall-functions. As in moment closures and large eddy simulation, these wall-treatments provide the usual trade-off between resolution and computational cost as required by the given application.
Particular attention is focused on modeling the dispersion of passive scalars in inhomogeneous turbulent flows. Two different micromixing models are investigated that incorporate the effect of small scale mixing on the transported scalar: the widely used interaction by exchange with the mean and the interaction by exchange with the conditional mean model. An adaptive algorithm to compute the velocity-conditioned scalar mean is proposed that homogenizes the statistical error over the sample space with no assumption on the shape of the underlying velocity PDF. The development also concentrates on a generally applicable micromixing timescale for complex flow domains. Several newly developed algorithms are described in detail that facilitate a stable numerical solution in arbitrarily complex flow geometries, including a stabilized mean-pressure projection scheme, the estimation of conditional and unconditional Eulerian statistics and their derivatives from stochastic particle fields employing finite element shape functions, particle tracking through unstructured grids, an efficient particle redistribution procedure and techniques related to efficient random number generation. The algorithm is validated and tested by computing three different turbulent flows: the fully developed turbulent channel flow, a street canyon (or cavity) flow and the turbulent wake behind a circular cylinder at a sub-critical Reynolds number. The solver has been parallelized and optimized for shared memory and multi-core architectures using the OpenMP standard. Relevant aspects of performance and parallelism on cache-based shared memory machines are discussed and presented in detail. The methodology shows great promise in the simulation of high-Reynolds-number incompressible inert or reactive turbulent flows in realistic configurations.
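One building block of such stand-alone velocity-PDF solvers is a Langevin particle update. The sketch below uses the simplified Langevin model with standard constants (not this dissertation's full velocity-frequency model) in decaying homogeneous turbulence, where consistency requires the particle kinetic energy to decay as dk/dt = -ε:

```python
import numpy as np

rng = np.random.default_rng(3)

def slm_step(u, omega, k, dt, c0=2.1):
    """Simplified-Langevin-model step for fluctuating velocities:
    linear drift toward the mean velocity plus isotropic diffusion,
    with dissipation eps = k * omega."""
    eps = k * omega
    drift = -(0.5 + 0.75 * c0) * omega * (u - u.mean(axis=0)) * dt
    diff = np.sqrt(c0 * eps * dt) * rng.standard_normal(u.shape)
    return u + drift + diff

u = rng.standard_normal((20_000, 3))   # initial TKE k = 1.5
for _ in range(200):
    k = 0.5 * u.var(axis=0).sum()      # TKE estimated from particles
    u = slm_step(u, omega=1.0, k=k, dt=0.01)

k = 0.5 * u.var(axis=0).sum()
# dk/dt = -omega*k, so k ≈ 1.5*exp(-2) ≈ 0.2 at t = 2
# (up to time-step and sampling error).
```

The drift coefficient (1/2 + 3C0/4)ω is chosen precisely so that the drift and diffusion terms together dissipate energy at the rate ε, the consistency condition a stand-alone method must satisfy without an external flow solver.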
Statistical measurement of the gamma-ray source-count distribution as a function of energy
NASA Astrophysics Data System (ADS)
Zechlin, H.-S.; Cuoco, A.; Donato, F.; Fornengo, N.; Regis, M.
2017-01-01
Photon count statistics have recently been proven to provide a sensitive observable for characterizing gamma-ray source populations and for measuring the composition of the gamma-ray sky. In this work, we generalize the use of the standard 1-point probability distribution function (1pPDF) to decompose the high-latitude gamma-ray emission observed with Fermi-LAT into: (i) point-source contributions, (ii) the Galactic foreground contribution, and (iii) a diffuse isotropic background contribution. We analyze gamma-ray data in five adjacent energy bands between 1 and 171 GeV. We measure the source-count distribution dN/dS as a function of energy, and demonstrate that our results extend current measurements from source catalogs to the regime of so far undetected sources. Our method improves the sensitivity for resolving point-source populations by about one order of magnitude in flux. The dN/dS distribution as a function of flux is found to be compatible with a broken power law. We derive upper limits on further possible breaks as well as the angular power of unresolved sources. We discuss the composition of the gamma-ray sky and capabilities of the 1pPDF method.
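The broken-power-law dN/dS that this analysis fits can be sampled directly, which is how mock source populations for such methods are typically built. The flux range, break and indices below are arbitrary illustrative values, not the paper's measurements:

```python
import numpy as np

rng = np.random.default_rng(4)

def trunc_powerlaw(a, b, g, n):
    """Inverse-CDF draws from p(S) ~ S**-g on [a, b], g != 1."""
    u = rng.random(n)
    return (a ** (1 - g) - u * (a ** (1 - g) - b ** (1 - g))) ** (1 / (1 - g))

def broken_powerlaw(s_min, s_b, s_max, g1, g2, n):
    """Source fluxes from a broken power law dN/dS, continuous at the
    break s_b, with index g1 below and g2 above the break."""
    seg = lambda a, b, g: (a ** (1 - g) - b ** (1 - g)) / (g - 1)
    w1 = seg(s_min, s_b, g1)                       # lower-segment weight
    w2 = s_b ** (g2 - g1) * seg(s_b, s_max, g2)    # continuity at s_b
    low = rng.random(n) < w1 / (w1 + w2)
    s = np.empty(n)
    s[low] = trunc_powerlaw(s_min, s_b, g1, low.sum())
    s[~low] = trunc_powerlaw(s_b, s_max, g2, (~low).sum())
    return s

fluxes = broken_powerlaw(1e-12, 1e-10, 1e-8, 1.9, 2.6, 100_000)
```

Convolving such a flux sample with an instrument response and pixelization yields the pixel-count histogram that the 1pPDF likelihood is fit against.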
Impacts of icodextrin on integrin-mediated wound healing of peritoneal mesothelial cells.
Matsumoto, Mika; Tamura, Masahito; Miyamoto, Tetsu; Furuno, Yumi; Kabashima, Narutoshi; Serino, Ryota; Shibata, Tatsuya; Kanegae, Kaori; Takeuchi, Masaaki; Abe, Haruhiko; Okazaki, Masahiro; Otsuji, Yutaka
2012-06-14
Exposure to glucose and its metabolites in peritoneal dialysis fluid (PDF) results in structural alterations of the peritoneal membrane. Icodextrin-containing PDF eliminates glucose and reduces deterioration of peritoneal membrane function, but direct effects of icodextrin molecules on peritoneal mesothelial cells have yet to be elucidated. We compared the impacts of icodextrin itself with those of glucose under PDF-free conditions on wound healing processes of injured mesothelial cell monolayers, focusing on integrin-mediated cell adhesion mechanisms. Regeneration processes of the peritoneal mesothelial cell monolayer were investigated employing an in vitro wound healing assay of cultured rat peritoneal mesothelial cells treated with icodextrin powder- or glucose-dissolved culture medium without PDF, as well as icodextrin- or glucose-containing PDF. The effects of icodextrin on integrin-mediated cell adhesions were examined by immunocytochemistry and Western blotting against focal adhesion kinase (FAK). Cell migration over fibronectin was inhibited in conventional glucose-containing PDF, while icodextrin-containing PDF exerted no significant inhibitory effects. Culture medium containing 1.5% glucose without PDF also inhibited wound healing of mesothelial cells, while 7.5% icodextrin-dissolved culture medium without PDF had no inhibitory effects. Glucose suppressed cell motility by inhibiting tyrosine phosphorylation of FAK, formation of focal adhesions, and cell spreading, while icodextrin had no effects on any of these mesothelial cell functions. Our results demonstrate icodextrin to have no adverse effects on wound healing processes of peritoneal mesothelial cells. Preservation of integrin-mediated cell adhesion might be one of the molecular mechanisms accounting for the superior biocompatibility of icodextrin-containing PDF.
Combined PDF and Rietveld studies of ADORable zeolites and the disordered intermediate IPC-1P.
Morris, Samuel A; Wheatley, Paul S; Položij, Miroslav; Nachtigall, Petr; Eliášová, Pavla; Čejka, Jiří; Lucas, Tim C; Hriljac, Joseph A; Pinar, Ana B; Morris, Russell E
2016-09-28
The disordered intermediate of the ADORable zeolite UTL has been structurally confirmed using the pair distribution function (PDF) technique. The intermediate, IPC-1P, is a disordered layered compound formed by the hydrolysis of UTL in 0.1 M hydrochloric acid solution. Its structure is unsolvable by traditional X-ray diffraction techniques. The PDF technique was first benchmarked against high-quality synchrotron Rietveld refinements of IPC-2 (OKO) and IPC-4 (PCR) - two end products of IPC-1P condensation that share very similar structural features. An IPC-1P starting model derived from density functional theory was used for the PDF refinement, which yielded a final fit of Rw = 18% and a geometrically reasonable structure. This confirms the layers do stay intact throughout the ADOR process and shows PDF is a viable technique for layered zeolite structure determination.
Cosmological constraints from the convergence 1-point probability distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patton, Kenneth; Blazek, Jonathan; Honscheid, Klaus
2017-06-29
Here, we examine the cosmological information available from the 1-point probability density function (PDF) of the weak-lensing convergence field, utilizing fast L-PICOLA simulations and a Fisher analysis. We find competitive constraints in the Ωm–σ8 plane from the convergence PDF with 188 arcmin² pixels compared to the cosmic shear power spectrum with an equivalent number of modes (ℓ < 886). The convergence PDF also partially breaks the degeneracy cosmic shear exhibits in that parameter space. A joint analysis of the convergence PDF and shear 2-point function also reduces the impact of shape measurement systematics, to which the PDF is less susceptible, and improves the total figure of merit by a factor of 2–3, depending on the level of systematics. Finally, we present a correction factor necessary for calculating the unbiased Fisher information from finite differences using a limited number of cosmological simulations.
Cosmological constraints from the convergence 1-point probability distribution
NASA Astrophysics Data System (ADS)
Patton, Kenneth; Blazek, Jonathan; Honscheid, Klaus; Huff, Eric; Melchior, Peter; Ross, Ashley J.; Suchyta, Eric
2017-11-01
We examine the cosmological information available from the 1-point probability density function (PDF) of the weak-lensing convergence field, utilizing fast L-PICOLA simulations and a Fisher analysis. We find competitive constraints in the Ωm-σ8 plane from the convergence PDF with 188 arcmin² pixels compared to the cosmic shear power spectrum with an equivalent number of modes (ℓ < 886). The convergence PDF also partially breaks the degeneracy cosmic shear exhibits in that parameter space. A joint analysis of the convergence PDF and shear 2-point function also reduces the impact of shape measurement systematics, to which the PDF is less susceptible, and improves the total figure of merit by a factor of 2-3, depending on the level of systematics. Finally, we present a correction factor necessary for calculating the unbiased Fisher information from finite differences using a limited number of cosmological simulations.
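The finite-difference Fisher pipeline described in these abstracts can be sketched end to end. The binned statistic below is a fabricated stand-in for the convergence PDF (a real analysis would tabulate it from simulations such as L-PICOLA runs), and the paper's correction factor for a limited number of simulations is not included:

```python
import numpy as np

rng = np.random.default_rng(5)

def model_stat(om, s8):
    """Fabricated stand-in for a binned convergence PDF as a function
    of (Omega_m, sigma_8): amplitude ~ om, width ~ s8."""
    x = np.linspace(-2.0, 2.0, 5)
    return om * np.exp(-0.5 * (x / s8) ** 2)

theta0, step = np.array([0.3, 0.8]), np.array([0.01, 0.01])
D = np.empty((2, 5))
for i in range(2):
    up, dn = theta0.copy(), theta0.copy()
    up[i] += step[i]
    dn[i] -= step[i]
    # Central finite difference of the statistic w.r.t. parameter i
    D[i] = (model_stat(*up) - model_stat(*dn)) / (2.0 * step[i])

# Covariance estimated from mock realizations of the statistic
mocks = model_stat(*theta0) + 0.01 * rng.standard_normal((500, 5))
C = np.cov(mocks, rowvar=False)
F = D @ np.linalg.inv(C) @ D.T     # 2x2 Fisher matrix in (Om, s8)
```

The inverse of F gives the forecast parameter covariance; the degeneracy-breaking claim corresponds to F being better conditioned for the PDF than for the shear spectrum alone.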
Cooling rate dependence and local structure in aluminum monatomic metallic glass
NASA Astrophysics Data System (ADS)
Kbirou, M.; Trady, S.; Hasnaoui, A.; Mazroui, M.
2017-10-01
The local atomic structure in aluminium monatomic metallic glass is studied using molecular dynamics simulations combined with the embedded atom method (EAM). We have used a variety of analytical methods to characterise the atomic configurations of our system: the Pair Distribution Function (PDF), Common Neighbour Analysis (CNA) and Voronoi Tessellation Analysis. CNA was used to investigate the change in order from the liquid to the amorphous phase, showing that the amount of icosahedral clusters increases as the temperature decreases. The Voronoi analysis revealed that icosahedral-like polyhedra are the predominant ones. It has been observed that the PDF shows a splitting of the second peak, which cannot be attributed only to the ideal icosahedral polyhedron 〈0, 0, 12, 0〉 but also to the formation of other Voronoi polyhedra such as 〈0, 1, 10, 2〉. Further, the PDFs were integrated to give the cumulative coordination number, which was then used to compute the fractal dimension (df).
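The PDF-to-cumulative-coordination-number step used here is a standard radial integration, n(r) = ∫₀ʳ 4πs²ρ g(s) ds. A minimal sketch with a toy single-peak g(r) and an invented number density, not the paper's EAM aluminium data:

```python
import numpy as np

# Toy pair distribution function: flat baseline plus one first-shell
# peak at r = 2.7 angstrom (all values illustrative).
r = np.linspace(0.01, 6.0, 2000)                 # r-grid (angstrom)
g = 1.0 + 2.0 * np.exp(-(r - 2.7) ** 2 / (2 * 0.1 ** 2))
rho = 0.06                                       # atoms / angstrom^3

# Cumulative trapezoid of n(r) = integral 4*pi*s^2 * rho * g(s) ds
integrand = 4.0 * np.pi * r ** 2 * rho * g
steps = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r)
n_r = np.concatenate([[0.0], np.cumsum(steps)])

# First-shell coordination number ~ n(r) evaluated at the first
# minimum of g(r) beyond the main peak.
```

Fitting log n(r) against log r over a suitable range is then the usual route to a fractal dimension df, as the abstract describes.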
GW182 controls Drosophila circadian behavior and PDF-receptor signaling.
Zhang, Yong; Emery, Patrick
2013-04-10
The neuropeptide PDF is crucial for Drosophila circadian behavior: it keeps circadian neurons synchronized. Here, we identify GW182 as a key regulator of PDF signaling. Indeed, GW182 downregulation results in phenotypes similar to those of Pdf and Pdf-receptor (Pdfr) mutants. gw182 genetically interacts with Pdfr and cAMP signaling, which is essential for PDFR function. GW182 mediates miRNA-dependent gene silencing through its interaction with AGO1. Consistently, GW182's AGO1 interaction domain is required for GW182's circadian function. Moreover, our results indicate that GW182 modulates PDFR signaling by silencing the expression of the cAMP phosphodiesterase DUNCE. Importantly, this repression is under photic control, and GW182 activity level--which is limiting in circadian neurons--influences the responses of the circadian neural network to light. We propose that GW182's gene silencing activity functions as a rheostat for PDFR signaling and thus profoundly impacts the circadian neural network and its response to environmental inputs.
The GABAA Receptor RDL Acts in Peptidergic PDF Neurons to Promote Sleep in Drosophila
Chung, Brian Y.; Kilman, Valerie L.; Keath, J. Russel; Pitman, Jena L.; Allada, Ravi
2011-01-01
Sleep is regulated by a circadian clock that largely times sleep and wake to occur at specific times of day, and by a sleep homeostat that drives sleep as a function of the duration of prior wakefulness [1]. To better understand the role of the circadian clock in sleep regulation, we have been using the fruit fly Drosophila melanogaster [2]. Fruit flies display all of the core behavioral features of sleep, including relative immobility, elevated arousal thresholds and homeostatic regulation [2, 3]. We assessed sleep-wake modulation by a core set of 20 circadian pacemaker neurons that express the neuropeptide PDF. We find that PDF neuron ablation, or loss of pdf or its receptor pdfr, results in increased sleep during the late night in light:dark (LD) conditions, with more prominent increases on the first subjective day of constant darkness (DD). Flies deploy genetic and neurotransmitter pathways similar to those of their mammalian counterparts to regulate sleep, including GABA [4]. We find that RNAi-mediated knockdown of the GABAA receptor gene Resistant to dieldrin (Rdl) in PDF neurons reduced sleep, consistent with a role for GABA in inhibiting PDF neuron function. Patch-clamp electrophysiology reveals GABA-activated, picrotoxin-sensitive chloride currents on PDF+ neurons. In addition, RDL is detectable most strongly on the large subset of PDF+ pacemaker neurons. These results suggest that GABAergic inhibition of arousal-promoting PDF neurons is an important mode of sleep-wake regulation in vivo. PMID:19230663
A large eddy simulation scheme for turbulent reacting flows
NASA Technical Reports Server (NTRS)
Gao, Feng
1993-01-01
The recent development of the dynamic subgrid-scale (SGS) model has provided a consistent method for generating localized turbulent mixing models and has opened up great possibilities for applying the large eddy simulation (LES) technique to real world problems. Given the fact that direct numerical simulation (DNS) cannot solve engineering flow problems in the foreseeable future (Reynolds 1989), LES is certainly an attractive alternative. It seems only natural to bring this new development in SGS modeling to bear on reacting flows. The major stumbling block for introducing LES to reacting flow problems has been the proper modeling of the reaction source terms. Various models have been proposed, but none of them has a wide range of applicability. For example, some of the models in combustion have been based on the flamelet assumption, which is only valid for relatively fast reactions. Some other models have neglected the effects of chemical reactions on the turbulent mixing time scale, which is certainly not valid for fast and non-isothermal reactions. The probability density function (PDF) method can be usefully employed to deal with the modeling of the reaction source terms. In order to fit into the framework of LES, a new PDF, the large eddy PDF (LEPDF), is introduced. This PDF provides an accurate representation for the filtered chemical source terms and can be readily calculated in the simulations. The details of this scheme are described.
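The difference between filtering the source term exactly (the integral of S against the subgrid PDF, which is what an LEPDF-type closure represents) and the naive closure of evaluating S at the filtered temperature can be demonstrated on synthetic data. The field and Arrhenius parameters below are fabricated for the sketch:

```python
import numpy as np

rng = np.random.default_rng(6)

# A synthetic, bounded 1-D "DNS" temperature field and a nonlinear
# Arrhenius-type source term; all numbers are illustrative.
T = (1000.0 + 300.0 * np.sin(np.linspace(0.0, 40.0, 4096))
     + 50.0 * rng.standard_normal(4096))
S = lambda temp: 1e9 * np.exp(-15000.0 / temp)

width = 256                                   # filter width in cells
cells = T[: (T.size // width) * width].reshape(-1, width)

# Exact filtered source: in-cell average of S(T), i.e. the integral of
# S against the subgrid (large-eddy) PDF of temperature.
S_filtered = S(cells).mean(axis=1)
# Naive closure: S evaluated at the filtered (cell-mean) temperature.
S_naive = S(cells.mean(axis=1))
# S is convex over this temperature range, so by Jensen's inequality
# the naive closure underpredicts the filtered source in every cell.
```

This gap is exactly the unclosed term that motivates carrying a subgrid PDF of the thermochemical state rather than only its filtered mean.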
Variable Density Effects in Stochastic Lagrangian Models for Turbulent Combustion
2016-07-20
The advantages of PDF methods in dealing with chemical reaction and convection are preserved irrespective of density variation. Since the density variation in a typical … combustion process may be as large as a factor of seven, including variable-density effects in PDF methods is of significance. Conventionally, the … strategy of modelling variable-density flows in PDF methods is similar to that used for second-moment closure models (SMCM): models are developed based on
Radar cross section models for limited aspect angle windows
NASA Astrophysics Data System (ADS)
Robinson, Mark C.
1992-12-01
This thesis presents a method for building Radar Cross Section (RCS) models of aircraft based on static data taken from limited aspect angle windows. These models statistically characterize static RCS. This is done to show that a limited number of samples can be used to effectively characterize static aircraft RCS. The optimum models are determined by performing both a Kolmogorov and a Chi-Square goodness-of-fit test comparing the static RCS data with a variety of probability density functions (pdf) that are known to be effective at approximating the static RCS of aircraft. The optimum parameter estimator is also determined by the goodness-of-fit tests if there is a difference in pdf parameters obtained by the Maximum Likelihood Estimator (MLE) and the Method of Moments (MoM) estimators.
Trading efficiency for effectiveness in similarity-based indexing for image databases
NASA Astrophysics Data System (ADS)
Barros, Julio E.; French, James C.; Martin, Worthy N.; Kelly, Patrick M.
1995-11-01
Image databases typically manage feature data that can be viewed as points in a feature space. Some features, however, can be better expressed as a collection of points or described by a probability distribution function (PDF) rather than as a single point. In earlier work we introduced a similarity measure and a method for indexing and searching the PDF descriptions of these items that guarantees an answer equivalent to sequential search. Unfortunately, certain properties of the data can restrict the efficiency of that method. In this paper we extend that work and examine trade-offs between efficiency and answer quality or effectiveness. These trade-offs reduce the amount of work required during a search by reducing the number of undesired items fetched without excluding an excessive number of the desired ones.
The study of PDF turbulence models in combustion
NASA Technical Reports Server (NTRS)
Hsu, Andrew T.
1991-01-01
The accurate prediction of turbulent combustion is still beyond the reach of today's computational techniques. It is the consensus of the combustion community that predictions of chemically reacting flows are poor when conventional turbulence models are used. The main difficulty lies in the fact that the reaction rate is highly nonlinear, so that using averaged temperature, pressure, and density produces excessively large errors. The probability density function (PDF) method is at present the only alternative that uses local instantaneous values of temperature, density, etc. in predicting the chemical reaction rate, and it is thus the only viable approach for turbulent combustion calculations.
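The nonlinearity argument can be made concrete with a toy calculation (all values illustrative): because an Arrhenius-type rate is convex and steep in temperature, the rate evaluated at the mean temperature badly underestimates the mean of the instantaneous rate when temperature fluctuates.

```python
import math
import random

random.seed(0)

def arrhenius(T, A=1.0, Ta=8000.0):
    """Arrhenius-type rate k(T) = A * exp(-Ta / T); Ta is an activation temperature."""
    return A * math.exp(-Ta / T)

# Fluctuating temperature: mean 1000 K with 300 K turbulent fluctuations
# (illustrative values); unphysically cold draws are discarded.
samples = [T for T in (random.gauss(1000.0, 300.0) for _ in range(100000))
           if T > 300.0]

rate_of_mean = arrhenius(sum(samples) / len(samples))
mean_of_rate = sum(arrhenius(T) for T in samples) / len(samples)

# The hot side of the fluctuations dominates the true mean rate, so the
# ratio below is substantially greater than 1.
ratio = mean_of_rate / rate_of_mean
```

This is exactly the error a conventional averaged-variable model commits and a PDF method avoids by integrating the instantaneous rate over the temperature PDF.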
NASA Technical Reports Server (NTRS)
Antaki, P. J.
1981-01-01
The joint probability distribution function (pdf), which is a modification of the bivariate Gaussian pdf, is discussed and results are presented for a global reaction model using the joint pdf. An alternative joint pdf is discussed. A criterion which permits the selection of temperature pdf's in different regions of turbulent, reacting flow fields is developed. Two principal approaches to the determination of reaction rates in computer programs containing detailed chemical kinetics are outlined. These models represent a practical solution to the modeling of species reaction rates in turbulent, reacting flows.
Selcho, Mareike; Mühlbauer, Barbara; Hensgen, Ronja; Shiga, Sakiko; Wegener, Christian; Yasuyama, Kouji
2018-06-01
The peptidergic Pigment-dispersing factor (PDF)-Tri neurons are a group of non-clock neurons that appear transiently around the time of adult ecdysis (=eclosion) in the fruit fly Drosophila melanogaster. This specific developmental pattern points to a function of these neurons in eclosion or other processes that are active around pupal-adult transition. As a first step to understand the role of these neurons, we here characterize the anatomy of the PDF-Tri neurons. In addition, we describe a further set of peptidergic neurons that have been associated with eclosion behavior, eclosion hormone (EH), and crustacean cardioactive peptide (CCAP) neurons, to single cell level in the pharate adult brain. PDF-Tri neurons as well as CCAP neurons co-express a classical transmitter indicated by the occurrence of small clear vesicles in addition to dense-core vesicles containing the peptides. In the tritocerebrum, gnathal ganglion and the superior protocerebrum PDF-Tri neurites contain peptidergic varicosities and both pre- and postsynaptic sites, suggesting that the PDF-Tri neurons represent modulatory rather than pure interneurons that connect the subesophageal zone with the superior protocerebrum. The extensive overlap of PDF-Tri arborizations with neurites of CCAP- and EH-expressing neurons in distinct brain regions provides anatomical evidence for a possible function of the PDF-Tri neurons in eclosion behavior. © 2018 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Wang, Jixin; Wang, Zhenyu; Yu, Xiangjun; Yao, Mingyao; Yao, Zongwei; Zhang, Erping
2012-09-01
Highly versatile machines, such as wheel loaders, forklifts, and mining haulers, are subject to many kinds of working conditions, as well as indefinite factors that lead to the complexity of the load. The load probability distribution function (PDF) of transmission gears has many distribution centers; thus, its PDF cannot be well represented by a single-peak function. To represent the distribution characteristics of this complicated phenomenon accurately, this paper proposes a novel method to establish a mixture model. Based on linear regression models and correlation coefficients, the proposed method can be used to automatically select the best-fitting function in the mixture model. The coefficient of determination, the mean square error, and the maximum deviation are chosen and used as judging criteria to describe the fitting precision between the theoretical distribution and the corresponding histogram of the available load data. The applicability of this modeling method is illustrated by the field testing data of a wheel loader. Meanwhile, the load spectra based on the mixture model are compiled. The comparison results show that the mixture model is more suitable for the description of the load-distribution characteristics. The proposed research improves the flexibility and intelligence of modeling, reduces the statistical error, and enhances the fitting accuracy, and the load spectra compiled by this method better reflect the actual load characteristics of the gear component.
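A minimal sketch of fitting a multi-center load PDF (synthetic data; the paper's own method selects among several candidate functions via linear regression and correlation criteria, and a plain two-Gaussian expectation-maximization fit stands in for that step here):

```python
import math
import random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def em_two_gaussians(data, iters=200):
    """Fit a two-component Gaussian mixture by expectation-maximization."""
    spread = (max(data) - min(data)) / 4.0
    w, mu1, mu2, s1, s2 = 0.5, min(data), max(data), spread, spread
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point.
        r = []
        for x in data:
            p1 = w * normal_pdf(x, mu1, s1)
            p2 = (1.0 - w) * normal_pdf(x, mu2, s2)
            r.append(p1 / (p1 + p2 + 1e-300))
        # M-step: re-estimate weight, means, and standard deviations.
        n1 = sum(r)
        n2 = len(data) - n1
        w = n1 / len(data)
        mu1 = sum(ri * x for ri, x in zip(r, data)) / n1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / n2
        s1 = max(math.sqrt(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, data)) / n1), 1e-6)
        s2 = max(math.sqrt(sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, data)) / n2), 1e-6)
    return w, mu1, mu2, s1, s2

random.seed(1)
# Synthetic two-centre "load" data, e.g. empty-bucket vs loaded-bucket cycles.
loads = [random.gauss(20.0, 3.0) for _ in range(600)] + \
        [random.gauss(55.0, 5.0) for _ in range(400)]
w, mu1, mu2, s1, s2 = em_two_gaussians(loads)
```

The recovered component means land on the two distribution centers, which is exactly what a single-peak fit to the same histogram cannot do.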
Total Scattering Analysis of Disordered Nanosheet Materials
NASA Astrophysics Data System (ADS)
Metz, Peter C.
Two dimensional materials are of increasing interest as building blocks for functional coatings, catalysts, and electrochemical devices. While increasingly sophisticated processing routes have been designed to obtain high-quality exfoliated nanosheets and controlled, self-assembled mesostructures, structural characterization of these materials remains challenging. This work presents a novel method of analyzing pair distribution function (PDF) data for disordered nanosheet ensembles, where supercell stacking models are used to infer atom correlations over as much as 50 A. Hierarchical models are used to reduce the parameter space of the refined model and help eliminate strongly correlated parameters. Three data sets for restacked nanosheet assemblies with stacking disorder are analyzed using these methods: simulated data for graphene-like layers, experimental data for 1 nm thick perovskite layers, and experimental data for highly defective delta-MnO2 layers. In each case, the sensitivity of the PDF to the real-space distribution of layer positions is demonstrated by exploring the fit residual as a function of stacking vectors. The refined models demonstrate that nanosheets tend towards local interlayer ordering, which is hypothesized to be driven by the electrostatic potential of the layer surfaces. Correctly accounting for interlayer atom correlations permits more accurate refinement of local structural details including local structure perturbations and defect site occupancies. In the delta-MnO2 nanosheet material, the new modeling approach identified 14% Mn vacancies while application of 3D periodic crystalline models to the < 7 A PDF region suggests a 25% vacancy concentration. In contrast, the perovskite nanosheet material is demonstrated to exhibit almost negligible structural relaxation in contrast with the bulk crystalline material from which it is derived.
Modulation Based on Probability Density Functions
NASA Technical Reports Server (NTRS)
Williams, Glenn L.
2009-01-01
A proposed method of modulating a sinusoidal carrier signal to convey digital information involves the use of histograms representing probability density functions (PDFs) that characterize samples of the signal waveform. The method is based partly on the observation that when a waveform is sampled (whether by analog or digital means) over a time interval at least as long as one half cycle of the waveform, the samples can be sorted by frequency of occurrence, thereby constructing a histogram representing a PDF of the waveform during that time interval.
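The underlying observation can be sketched numerically (illustrative only, not the proposed modulator itself): sampling a pure sinusoid over a full cycle yields an arcsine-shaped PDF whose mass piles up at the waveform extremes, and it is this characteristic histogram shape that the modulation would manipulate.

```python
import math

# Sample one full cycle of a unit sinusoid at regular intervals and build a
# histogram (an empirical PDF) of the sampled amplitudes.
N, bins = 10000, 20
samples = [math.sin(2.0 * math.pi * n / N) for n in range(N)]

hist = [0] * bins
for s in samples:
    # Map amplitude in [-1, 1] to a bin index in [0, bins - 1].
    idx = min(int((s + 1.0) / 2.0 * bins), bins - 1)
    hist[idx] += 1

# For a sinusoid the PDF is arcsine-shaped: the edge bins (near +/-1)
# dominate while the central bins (near zero crossing) are sparse.
```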
Of pacemakers and statistics: the actuarial method extended.
Dussel, J; Wolbarst, A B; Scott-Millar, R N; Obel, I W
1980-01-01
Pacemakers cease functioning because of either natural battery exhaustion (nbe) or component failure (cf). A study of four series of pacemakers shows that a simple extension of the actuarial method, so as to incorporate Normal statistics, makes possible a quantitative differentiation between the two modes of failure. This involves the separation of the overall failure probability density function PDF(t) into constituent parts pdf_nbe(t) and pdf_cf(t). The approach should allow a meaningful comparison of the characteristics of different pacemaker types.
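The separation idea can be illustrated with a toy simulation (all lifetimes hypothetical): natural battery exhaustion clusters Normally around the design life, while component failure occurs at a roughly constant hazard, so the two constituent pdfs dominate different parts of the time axis.

```python
import random

random.seed(2)

MEAN_LIFE, SPREAD = 60.0, 6.0   # months to battery exhaustion (assumed values)
CF_MEAN = 200.0                 # mean months to component failure (assumed)

def simulate_failure():
    """Observed failure time is whichever mode strikes first."""
    t_nbe = random.gauss(MEAN_LIFE, SPREAD)             # Normal nbe lifetime
    t_cf = random.expovariate(1.0 / CF_MEAN)            # constant-hazard cf
    return (t_nbe, "nbe") if t_nbe < t_cf else (t_cf, "cf")

failures = [simulate_failure() for _ in range(20000)]
frac_cf = sum(1 for _, mode in failures if mode == "cf") / len(failures)

# Early failures come almost exclusively from the constant-hazard cf mode,
# which is what lets the two constituent pdfs be told apart.
early = [mode for t, mode in failures if t < 30.0]
frac_cf_early = early.count("cf") / len(early)
```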
A probability-based multi-cycle sorting method for 4D-MRI: A simulation study.
Liang, Xiao; Yin, Fang-Fang; Liu, Yilin; Cai, Jing
2016-12-01
To develop a novel probability-based sorting method capable of generating multiple breathing cycles of 4D-MRI images and to evaluate performance of this new method by comparing with conventional phase-based methods in terms of image quality and tumor motion measurement. Based on previous findings that the breathing motion probability density function (PDF) of a single breathing cycle is dramatically different from the true stabilized PDF that results from many breathing cycles, it is expected that a probability-based sorting method capable of generating multiple breathing cycles of 4D images may capture breathing variation information missing from conventional single-cycle sorting methods. The overall idea is to identify a few main breathing cycles (and their corresponding weightings) that can best represent the main breathing patterns of the patient and then reconstruct a set of 4D images for each of the identified main breathing cycles. This method is implemented in three steps: (1) the breathing signal is decomposed into individual breathing cycles, characterized by amplitude and period; (2) individual breathing cycles are grouped based on amplitude and period to determine the main breathing cycles. If a group contains more than 10% of all breathing cycles in a breathing signal, it is determined as a main breathing pattern group and is represented by the average of individual breathing cycles in the group; (3) for each main breathing cycle, a set of 4D images is reconstructed using a result-driven sorting method adapted from our previous study. The probability-based sorting method was first tested on 26 patients' breathing signals to evaluate its feasibility of improving target motion PDF. The new method was subsequently tested for a sequential image acquisition scheme on the 4D digital extended cardiac torso (XCAT) phantom.
Performance of the probability-based and conventional sorting methods was evaluated in terms of target volume precision and accuracy as measured by the 4D images, and also the accuracy of average intensity projection (AIP) of 4D images. Probability-based sorting showed improved similarity of breathing motion PDF from 4D images to reference PDF compared to single-cycle sorting, indicated by the significant increase in Dice similarity coefficient (DSC) (probability-based sorting, DSC = 0.89 ± 0.03, and single-cycle sorting, DSC = 0.83 ± 0.05, p-value <0.001). Based on the simulation study on XCAT, the probability-based method outperforms the conventional phase-based methods in qualitative evaluation of motion artifacts and quantitative evaluation of tumor volume precision and accuracy and the accuracy of AIP of the 4D images. In this paper the authors demonstrated the feasibility of a novel probability-based multi-cycle 4D image sorting method. The authors' preliminary results showed that the new method can improve the accuracy of tumor motion PDF and the AIP of 4D images, presenting potential advantages over the conventional phase-based sorting method for radiation therapy motion management.
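The grouping step (2) above can be sketched as follows (bin widths and signal parameters are invented for illustration): cycles characterized by amplitude and period are binned, and any bin holding more than 10% of all cycles becomes a main breathing pattern represented by its average cycle.

```python
import random

random.seed(3)

# Synthetic breathing trace: amplitude and period vary from cycle to cycle.
def make_cycles(n):
    cycles = []
    for _ in range(n):
        amp = random.choice([1.0, 1.0, 1.0, 1.6])   # occasional deep breaths
        period = random.gauss(4.0, 0.2)             # seconds, assumed
        cycles.append((amp, period))
    return cycles

def main_breathing_cycles(cycles, amp_bin=0.5, per_bin=1.0, threshold=0.10):
    """Group cycles by (amplitude, period) bins; keep groups holding more than
    `threshold` of all cycles, each represented by its average cycle."""
    groups = {}
    for amp, period in cycles:
        key = (round(amp / amp_bin), round(period / per_bin))
        groups.setdefault(key, []).append((amp, period))
    mains = []
    for members in groups.values():
        frac = len(members) / len(cycles)
        if frac > threshold:
            mean_amp = sum(a for a, _ in members) / len(members)
            mean_per = sum(p for _, p in members) / len(members)
            mains.append((mean_amp, mean_per, frac))
    return mains

mains = main_breathing_cycles(make_cycles(400))
```

With these settings the shallow and deep breathing patterns survive as the two main cycles, and rare outlier cycles fall below the 10% threshold.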
Mossahebi, Sina; Zhu, Simeng; Chen, Howard; Shmuylovich, Leonid; Ghosh, Erina; Kovács, Sándor J.
2014-01-01
Quantitative cardiac function assessment remains a challenge for physiologists and clinicians. Although historically invasive methods have comprised the only means available, the development of noninvasive imaging modalities (echocardiography, MRI, CT) having high temporal and spatial resolution provide a new window for quantitative diastolic function assessment. Echocardiography is the agreed upon standard for diastolic function assessment, but indexes in current clinical use merely utilize selected features of chamber dimension (M-mode) or blood/tissue motion (Doppler) waveforms without incorporating the physiologic causal determinants of the motion itself. The recognition that all left ventricles (LV) initiate filling by serving as mechanical suction pumps allows global diastolic function to be assessed based on laws of motion that apply to all chambers. What differentiates one heart from another are the parameters of the equation of motion that governs filling. Accordingly, development of the Parametrized Diastolic Filling (PDF) formalism has shown that the entire range of clinically observed early transmitral flow (Doppler E-wave) patterns are extremely well fit by the laws of damped oscillatory motion. This permits analysis of individual E-waves in accordance with a causal mechanism (recoil-initiated suction) that yields three (numerically) unique lumped parameters whose physiologic analogues are chamber stiffness (k), viscoelasticity/relaxation (c), and load (xo). The recording of transmitral flow (Doppler E-waves) is standard practice in clinical cardiology and, therefore, the echocardiographic recording method is only briefly reviewed. Our focus is on determination of the PDF parameters from routinely recorded E-wave data. 
As the highlighted results indicate, once the PDF parameters have been obtained from a suitable number of load-varying E-waves, the investigator is free to use the parameters or construct indexes from the parameters (such as stored energy (1/2)kxo^2, maximum A-V pressure gradient kxo, load-independent index of diastolic function, etc.) and select the aspect of physiology or pathophysiology to be quantified. PMID:25226101
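A sketch of how the three PDF parameters generate an E-wave contour and the derived indexes (parameter magnitudes are invented; the per-unit-mass form of the damped-oscillator equation of motion is assumed):

```python
import math

# Illustrative PDF parameters (per-unit-mass, arbitrary magnitudes):
k, c, x0 = 200.0, 14.0, 0.12   # stiffness, viscoelasticity/relaxation, load

# Underdamped solution of x'' + c*x' + k*x = 0 with x(0) = x0, x'(0) = 0;
# the E-wave velocity contour is the damped sinusoid v(t).
w = math.sqrt(k - c * c / 4.0)

def v(t):
    return -(x0 * k / w) * math.exp(-c * t / 2.0) * math.sin(w * t)

stored_energy = 0.5 * k * x0 ** 2   # (1/2) k xo^2
peak_gradient = k * x0              # maximum A-V pressure gradient k xo

# Peak filling velocity occurs at the first zero of dv/dt.
t_peak = math.atan2(w, c / 2.0) / w
peak_velocity = abs(v(t_peak))
```

Fitting k, c and x0 to a recorded E-wave contour inverts this construction: the parameters are chosen so that v(t) matches the Doppler velocity envelope.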
Awazu, Akinori; Tanabe, Takahiro; Kamitani, Mari; Tezuka, Ayumi; Nagano, Atsushi J
2018-05-29
Gene expression levels exhibit stochastic variations among genetically identical organisms under the same environmental conditions. In many recent transcriptome analyses based on RNA sequencing (RNA-seq), variations in gene expression levels among replicates were assumed to follow a negative binomial distribution, although the physiological basis of this assumption remains unclear. In this study, RNA-seq data were obtained from Arabidopsis thaliana under eight conditions (21-27 replicates), and the characteristics of gene-dependent empirical probability density function (ePDF) profiles of gene expression levels were analyzed. For A. thaliana and Saccharomyces cerevisiae, various types of ePDF of gene expression levels were obtained that were classified as Gaussian, power law-like containing a long tail, or intermediate. These ePDF profiles were well fitted with a Gauss-power mixing distribution function derived from a simple model of a stochastic transcriptional network containing a feedback loop. The fitting function suggested that gene expression levels with long-tailed ePDFs would be strongly influenced by feedback regulation. Furthermore, the features of gene expression levels are correlated with their functions, with the levels of essential genes tending to follow a Gaussian-like ePDF while those of genes encoding nucleic acid-binding proteins and transcription factors exhibit long-tailed ePDF.
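The negative binomial assumption mentioned above can be illustrated with a small method-of-moments fit (synthetic counts; parameter values invented): overdispersed replicate counts are generated from the gamma-Poisson construction of the negative binomial, and the NB parameters are recovered from the sample mean and variance.

```python
import math
import random

random.seed(4)

def fit_negative_binomial(counts):
    """Method-of-moments fit: mean m, variance v  ->  p = m/v, r = m^2/(v - m)."""
    m = sum(counts) / len(counts)
    v = sum((c - m) ** 2 for c in counts) / (len(counts) - 1)
    if v <= m:
        return None  # no overdispersion: a Poisson fit is more appropriate
    return m / v, m * m / (v - m)

def nb_sample(r, p):
    """Negative binomial via its gamma-Poisson mixture construction."""
    lam = random.gammavariate(r, (1.0 - p) / p)
    # Poisson draw by Knuth's multiplication method (fine for moderate lam).
    limit, k, prod = math.exp(-lam), 0, random.random()
    while prod > limit:
        prod *= random.random()
        k += 1
    return k

counts = [nb_sample(5.0, 0.1) for _ in range(2000)]
p_hat, r_hat = fit_negative_binomial(counts)
```

A long-tailed ePDF of the kind the study attributes to feedback regulation would show up here as variance far in excess of what any (r, p) pair can accommodate.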
Numerical methods for the weakly compressible Generalized Langevin Model in Eulerian reference frame
DOE Office of Scientific and Technical Information (OSTI.GOV)
Azarnykh, Dmitrii, E-mail: d.azarnykh@tum.de; Litvinov, Sergey; Adams, Nikolaus A.
2016-06-01
A well-established approach for the computation of turbulent flow without resolving all turbulent flow scales is to solve a filtered or averaged set of equations, and to model non-resolved scales by closures derived from transported probability density functions (PDF) for velocity fluctuations. Effective numerical methods for PDF transport employ the equivalence between the Fokker–Planck equation for the PDF and a Generalized Langevin Model (GLM), and compute the PDF by transporting a set of sampling particles by GLM (Pope (1985) [1]). The natural representation of GLM is a system of stochastic differential equations in a Lagrangian reference frame, typically solved by particle methods. A representation in an Eulerian reference frame, however, has the potential to significantly reduce computational effort and to allow for seamless integration into an Eulerian-frame numerical flow solver. GLM in an Eulerian frame (GLMEF) formally corresponds to the nonlinear fluctuating hydrodynamic equations derived by Nakamura and Yoshimori (2009) [12]. Unlike the more common Landau–Lifshitz Navier–Stokes (LLNS) equations, these equations are derived from the underdamped Langevin equation and are not based on a local equilibrium assumption. Similarly to the LLNS equations, the numerical solution of GLMEF requires special considerations. In this paper we investigate different numerical approaches to solving GLMEF with respect to the correct representation of the stochastic properties of the solution. We find that a discretely conservative staggered finite-difference scheme, adapted from a scheme originally proposed for turbulent incompressible flow, in conjunction with a strongly stable (for non-stochastic PDE) Runge–Kutta method performs better for GLMEF than schemes adopted from those proposed previously for the LLNS. We show that equilibrium stochastic fluctuations are correctly reproduced.
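The Lagrangian-frame baseline that GLMEF is measured against can be sketched for a single velocity component (constants illustrative; this is the classic particle method, not the paper's Eulerian scheme): an ensemble of sampling particles is advanced by the Euler-Maruyama scheme, and the ensemble velocity variance should relax to the analytic Ornstein-Uhlenbeck stationary value.

```python
import math
import random

random.seed(5)

# One-component simplified Langevin model (an Ornstein-Uhlenbeck process):
#   dU = -(1/2 + 3/4*C0) * (U / T_L) dt + sqrt(C0 * eps) dW
# with illustrative constants; stationary variance is b^2 / (2a).
C0, T_L, eps = 2.1, 1.0, 1.0
a = (0.5 + 0.75 * C0) / T_L      # drift rate
b = math.sqrt(C0 * eps)          # diffusion coefficient
dt, nsteps, npart = 4e-3, 5000, 1000

sqdt = math.sqrt(dt)
U = [0.0] * npart
for _ in range(nsteps):
    # Euler-Maruyama update for every sampling particle.
    U = [u - a * u * dt + b * sqdt * random.gauss(0.0, 1.0) for u in U]

variance = sum(u * u for u in U) / npart
target = b * b / (2.0 * a)       # analytic stationary variance
```

The sequential particle loop is precisely the cost the Eulerian-frame formulation aims to avoid.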
Abeykoon, A M Milinda; Donner, Wolfgang; Brunelli, Michela; Castro-Colin, Miguel; Jacobson, Allan J; Moss, Simon C
2009-09-23
The structure of Se particles in the approximately 13 Å diameter alpha-cages of zeolite NdY has been determined by Rietveld refinement and pair distribution function (PDF) analysis of X-ray data. With the diffuse scattering subtracted, an average structure consisting of an undistorted framework containing nanoclusters of 20 Se atoms is observed. The intracluster correlations and the cluster-framework correlations which give rise to diffuse scattering were modeled by using PDF analysis.
Pigment-Dispersing Factor Signaling and Circadian Rhythms in Insect Locomotor Activity
Shafer, Orie T.; Yao, Zepeng
2014-01-01
Though expressed in relatively few neurons in insect nervous systems, pigment-dispersing factor (PDF) plays many roles in the control of behavior and physiology. PDF's role in circadian timekeeping is its best-understood function and the focus of this review. Here we recount the isolation and characterization of insect PDFs, review the evidence that PDF acts as a circadian clock output factor, and discuss emerging models of how PDF functions within the circadian clock neuron network of Drosophila, the species in which this peptide's circadian roles are best understood. PMID:25386391
Signaling of Pigment-Dispersing Factor (PDF) in the Madeira Cockroach Rhyparobia maderae
Funk, Nico W.; Giese, Maria; Baz, El-Sayed; Stengl, Monika
2014-01-01
The insect neuropeptide pigment-dispersing factor (PDF) is a functional ortholog of vasoactive intestinal polypeptide, the coupling factor of the mammalian circadian pacemaker. Despite PDF's importance for synchronized circadian locomotor activity rhythms, its signaling is not well understood. We studied PDF signaling in primary cell cultures of the accessory medulla, the circadian pacemaker of the Madeira cockroach. In Ca2+ imaging studies, four types of PDF-responses were distinguished. In regularly bursting type 1 pacemakers, PDF application resulted in dose-dependent long-lasting increases in Ca2+ baseline concentration and in the frequency of oscillating Ca2+ transients. Adenylyl cyclase antagonists prevented PDF-responses in type 1 cells, indicating that PDF signaled via elevation of intracellular cAMP levels. In contrast, in type 2 pacemakers PDF transiently raised intracellular Ca2+ levels even after blocking adenylyl cyclase activity. In patch clamp experiments the previously characterized types 1–4 could not be identified. Instead, PDF-responses were categorized according to the ion channels affected. Application of PDF inhibited outward potassium or inward sodium currents, sometimes in the same neuron. In a comparison of Ca2+ imaging and patch clamp experiments, we hypothesized that in type 1 cells PDF-dependent rises in cAMP concentrations block primarily outward K+ currents. Possibly, this PDF-dependent depolarization underlies PDF-dependent phase advances of pacemakers. Finally, we propose that PDF-dependent concomitant modulation of K+ and Na+ channels in coupled pacemakers causes ultradian membrane potential oscillations as a prerequisite to efficient synchronization via resonance. PMID:25269074
Keating, Jonathan; Sankar, Gopinathan; Hyde, Timothy I; Kohara, Shinji; Ohara, Koji
2013-06-14
The PdO-Pd phase transformation in a 4 wt% Pd/Al2O3 catalyst has been investigated using in situ X-ray absorption spectroscopy (XAS) and in situ X-ray total scattering (also known as high-energy X-ray diffraction) techniques. Both the partial and total pair distribution functions (PDF) from these respective techniques have been analysed in depth. New information from PDF analysis of total scattering data has been garnered using the differential PDF (d-PDF) approach, where only correlations originating from PdO and metallic Pd are extracted. This method circumvents problems encountered in characterising the catalytically active components due to the diffuse scattering from the disordered γ-Al2O3 support phase. Quantitative analysis of the palladium components within the catalyst allowed the phase composition to be established at various temperatures. Above 850 °C it was found that PdO had converted to metallic Pd; however, the extent of reduction was of the order of ca. 70% Pd metal and 30% PdO. Complementary in situ XANES and EXAFS measurements were performed, with heating to high temperature and subsequent cooling in air, and the results of the analyses support the observation that residual PdO is detected at elevated temperatures. Hysteresis in the transformation upon cooling is confirmed from XAS studies, where reoxidation occurs below 680 °C.
Deep PDF parsing to extract features for detecting embedded malware.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Munson, Miles Arthur; Cross, Jesse S.
2011-09-01
The number of PDF files with embedded malicious code has risen significantly in the past few years. This is due to the portability of the file format, the ways Adobe Reader recovers from corrupt PDF files, the addition of many multimedia and scripting extensions to the file format, and many format properties the malware author may use to disguise the presence of malware. Current research focuses on executable, MS Office, and HTML formats. In this paper, several features and properties of PDF files are identified. Features are extracted using an instrumented open source PDF viewer. The feature descriptions of benign and malicious PDFs can be used to construct a machine learning model for detecting possible malware in future PDF files. The detection rate of PDF malware by current antivirus software is very low. A PDF file is easy to edit and manipulate because it is a text format, providing a low barrier to malware authors. Analyzing PDF files for malware is nonetheless difficult because of (a) the complexity of the formatting language, (b) the parsing idiosyncrasies in Adobe Reader, and (c) undocumented correction techniques employed in Adobe Reader. In May 2011, Esparza demonstrated that PDF malware could be hidden from 42 of 43 antivirus packages by combining multiple obfuscation techniques [4]. One reason current antivirus software fails is the ease of varying byte sequences in PDF malware, thereby rendering conventional signature-based virus detection useless. The compression and encryption functions produce sequences of bytes that are each functions of multiple input bytes. As a result, padding the malware payload with some whitespace before compression/encryption can change many of the bytes in the final payload. In this study we analyzed a corpus of 2591 benign and 87 malicious PDF files. While this corpus is admittedly small, it allowed us to test a system for collecting indicators of embedded PDF malware.
We will call these indicators features throughout the rest of this report. The features are extracted using an instrumented PDF viewer, and are the inputs to a prediction model that scores the likelihood of a PDF file containing malware. The prediction model is constructed from a sample of labeled data by a machine learning algorithm (specifically, decision tree ensemble learning). Preliminary experiments show that the model is able to detect half of the PDF malware in the corpus with zero false alarms. We conclude the report with suggestions for extending this work to detect a greater variety of PDF malware.
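A toy version of the static feature-extraction step (the keyword list is illustrative, not the report's instrumented-viewer feature set): suspicious PDF name objects are counted directly in the raw bytes, and a learned model such as a decision-tree ensemble would then score the resulting feature vectors.

```python
import re

# Suspicious PDF name objects commonly counted as static malware indicators
# (illustrative list).
SUSPICIOUS = [b"/JavaScript", b"/JS", b"/OpenAction", b"/Launch", b"/AA",
              b"/EmbeddedFile", b"/RichMedia"]

def extract_features(pdf_bytes):
    """Count keyword occurrences plus a few structural features."""
    feats = {}
    for kw in SUSPICIOUS:
        # Negative lookahead keeps "/JS" from also matching inside "/JavaScript".
        pattern = re.escape(kw) + rb"(?![A-Za-z])"
        feats[kw.decode()] = len(re.findall(pattern, pdf_bytes))
    feats["n_objects"] = len(re.findall(rb"\d+\s+\d+\s+obj\b", pdf_bytes))
    feats["size"] = len(pdf_bytes)
    return feats

# A tiny hand-made "document" exercising a few indicators.
doc = (b"%PDF-1.4\n1 0 obj\n<< /OpenAction << /S /JavaScript "
       b"/JS (app.alert(1)) >> >>\nendobj\ntrailer\n%%EOF")
feats = extract_features(doc)
```

Note that such byte-level counting is exactly what obfuscation defeats, which is why the report extracts features from an instrumented viewer rather than from the raw file alone.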
Emg Amplitude Estimators Based on Probability Distribution for Muscle-Computer Interface
NASA Astrophysics Data System (ADS)
Phinyomark, Angkoon; Quaine, Franck; Laurillau, Yann; Thongpanja, Sirinee; Limsakul, Chusak; Phukpattaranont, Pornchai
To develop an advanced muscle-computer interface (MCI) based on the surface electromyography (EMG) signal, amplitude estimates of muscle activity, i.e., the root mean square (RMS) and mean absolute value (MAV), are widely used as convenient and accurate inputs for a recognition system. Their classification performance is comparable to that of advanced, computationally expensive time-scale methods, i.e., the wavelet transform. However, the signal-to-noise ratio (SNR) performance of RMS and MAV depends on the probability density function (PDF) of the EMG signals, i.e., Gaussian or Laplacian. The PDF of upper-limb motions associated with EMG signals is still not clear, especially for dynamic muscle contraction. In this paper, the EMG PDF is investigated based on surface EMG recorded during finger, hand, wrist and forearm motions. The results show that on average the experimental EMG PDF is closer to a Laplacian density, particularly for male subjects and flexor muscles. For amplitude estimation, MAV has a higher SNR, defined as the mean feature divided by its fluctuation, than RMS. Because RMS and MAV discriminate equally well in feature space, MAV is recommended as a suitable EMG amplitude estimator for EMG-based MCIs.
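A small sketch of the comparison (synthetic Laplacian "EMG"; window sizes invented): amplitude is estimated with RMS and MAV over many short windows, and the SNR is computed as the mean estimate divided by its fluctuation; for Laplacian samples MAV should come out ahead, as the abstract states.

```python
import math
import random
import statistics

random.seed(6)

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

def mav(xs):
    return sum(abs(x) for x in xs) / len(xs)

def laplace(scale=1.0):
    """Laplacian sample by inverse-transform sampling."""
    u = random.random() - 0.5
    return scale * math.copysign(-math.log(max(1e-12, 1.0 - 2.0 * abs(u))), u)

# Amplitude estimates over many short analysis windows.
windows = [[laplace() for _ in range(128)] for _ in range(2000)]
rms_vals = [rms(w) for w in windows]
mav_vals = [mav(w) for w in windows]

# SNR as defined in the abstract: mean feature over its fluctuation.
snr_rms = statistics.mean(rms_vals) / statistics.stdev(rms_vals)
snr_mav = statistics.mean(mav_vals) / statistics.stdev(mav_vals)
```

For Gaussian samples the ranking reverses, which is why the empirical EMG PDF matters for the choice of estimator.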
Evolution of the concentration PDF in random environments modeled by global random walk
NASA Astrophysics Data System (ADS)
Suciu, Nicolae; Vamos, Calin; Attinger, Sabine; Knabner, Peter
2013-04-01
The evolution of the probability density function (PDF) of concentrations of chemical species transported in random environments is often modeled by ensembles of notional particles. The particles move in physical space along stochastic-Lagrangian trajectories governed by Ito equations, with drift coefficients given by the local values of the resolved velocity field and diffusion coefficients obtained by stochastic or space-filtering upscaling procedures. A general model for the sub-grid mixing can also be formulated as a system of Ito equations solving for trajectories in the composition space. The PDF is finally estimated by the number of particles in space-concentration control volumes. In spite of their efficiency, Lagrangian approaches suffer from two severe limitations. Since the particle trajectories are constructed sequentially, the computing resources demanded increase linearly with the number of particles. Moreover, the need to gather particles at the centers of computational cells to perform the mixing step and to estimate statistical parameters, as well as the interpolation of various terms to particle positions, inevitably produces numerical diffusion in either particle-mesh or grid-free particle methods. To overcome these limitations, we introduce a global random walk method to solve the system of Ito equations in physical and composition spaces, which models the evolution of the random concentration's PDF. The algorithm consists of a superposition, on a regular lattice, of many weak Euler schemes for the set of Ito equations. Since all particles starting from a site of the space-concentration lattice are spread in a single numerical procedure, one obtains PDF estimates at the lattice sites at computational costs comparable with those of solving the system of Ito equations associated with a single particle.
The new method avoids the limitations concerning the number of particles in Lagrangian approaches, completely removes the numerical diffusion, and speeds up the computation by orders of magnitude. The approach is illustrated for the transport of passive scalars in heterogeneous aquifers, with hydraulic conductivity modeled as a random field.
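The site-wise spreading idea behind the global random walk can be sketched in one dimension for the purely diffusive case. This is a minimal illustration, not the authors' full weak Euler scheme for coupled Ito equations in physical and composition space; the lattice size, particle number, and reflecting boundaries are arbitrary choices here.

```python
import numpy as np

rng = np.random.default_rng(0)

def global_random_walk(n0, steps, p_left=0.5):
    """Advance all particles at each occupied lattice site with a single
    binomial draw, so cost scales with occupied sites, not particles."""
    n = n0.copy()
    for _ in range(steps):
        new = np.zeros_like(n)
        occupied = np.flatnonzero(n)
        left = rng.binomial(n[occupied], p_left)
        right = n[occupied] - left
        # clipped indices act as reflecting boundaries, conserving particles
        np.add.at(new, np.maximum(occupied - 1, 0), left)
        np.add.at(new, np.minimum(occupied + 1, n.size - 1), right)
        n = new
    return n

N = 1_000_000
n0 = np.zeros(201, dtype=np.int64)
n0[100] = N                      # all particles start at the central site
n = global_random_walk(n0, steps=200)

sites = np.arange(201)
mean = (sites * n).sum() / N
var = ((sites - mean) ** 2 * n).sum() / N   # ~ steps for a +/-1 walk
```

Because each occupied site is advanced with one binomial draw, the cost per step depends on the number of occupied lattice sites rather than on the million particles, which is the source of the reported speed-up over sequential Lagrangian tracking.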
Weatherbee, Andrew; Sugita, Mitsuro; Bizheva, Kostadinka; Popov, Ivan; Vitkin, Alex
2016-06-15
The distribution of backscattered intensities as described by the probability density function (PDF) of tissue-scattered light contains information that may be useful for tissue assessment and diagnosis, including characterization of its pathology. In this Letter, we examine the PDF description of the light scattering statistics in a well-characterized tissue-like particulate medium using optical coherence tomography (OCT). It is shown that for low scatterer density, the governing statistics depart considerably from a Gaussian description and follow the K distribution for both OCT amplitude and intensity. The PDF formalism is shown to be independent of the scatterer flow conditions; this is expected from theory, and suggests robustness and motion independence of the OCT amplitude (and OCT intensity) PDF metrics in the context of potential biomedical applications.
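The departure from Gaussian speckle statistics at low scatterer density can be illustrated with the standard compound representation of K-distributed intensity. This is a sketch with arbitrary shape parameters, not the Letter's OCT data analysis: fully developed (Gaussian) speckle gives a normalized intensity variance of 1, while the K distribution with shape alpha gives 1 + 2/alpha.

```python
import numpy as np

rng = np.random.default_rng(1)

def k_intensity(alpha, size):
    """Compound model of K-distributed intensity: exponential (speckle)
    intensity whose local mean is itself gamma-distributed with shape alpha."""
    local_mean = rng.gamma(shape=alpha, scale=1.0 / alpha, size=size)
    return rng.exponential(local_mean)

n = 2_000_000
contrast = {}
for alpha in (2.0, 100.0):
    i = k_intensity(alpha, n)
    # normalized variance; theory: 1 + 2/alpha, -> 1 (Gaussian speckle) as alpha grows
    contrast[alpha] = i.var() / i.mean() ** 2
```

Small alpha (few effective scatterers) produces the strongly non-Gaussian statistics reported for low scatterer density; large alpha recovers the Gaussian-speckle limit.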
Improved biocompatibility of bicarbonate/lactate-buffered PDF is not related to pH.
Zareie, Mohammad; Keuning, Eelco D; ter Wee, Piet M; Schalkwijk, Casper G; Beelen, Robert H J; van den Born, Jacob
2006-01-01
Chronic exposure to conventional peritoneal dialysis fluid (PDF) is associated with functional and structural alterations of the peritoneal membrane. The bioincompatibility of conventional PDF can be due to hypertonicity, high glucose concentration, lactate buffering system, presence of glucose degradation products (GDPs) and/or acidic pH. Although various investigators have studied the sole effects of hyperosmolarity, high glucose, GDPs and lactate buffer in experimental PD, less attention has been paid to the chronic impact of low pH in vivo. Rats received daily 10 ml of either conventional lactate-buffered PDF (pH 5.2; n=7), a standard bicarbonate/lactate-buffered PDF with physiological pH (n=8), bicarbonate/lactate-buffered PDF with acidic pH (adjusted to pH 5.2 with 1 N hydrochloric acid, n=5), or bicarbonate/lactate buffer, without glucose, pH 7.4 (n=7). Fluids were instilled via peritoneal catheters connected to implanted subcutaneous mini vascular access ports for 8 weeks. Animals with or without peritoneal catheters served as controls (n=8/group). Various functional (2 h PET) and morphological/cellular parameters were analyzed. Compared with control groups and the buffer group, conventional lactate-buffered PDF induced a number of morphological/cellular changes, including angiogenesis and fibrosis in various peritoneal tissues (all parameters P<0.05), accompanied by increased glucose absorption and reduced ultrafiltration capacity. Daily exposure to standard or acidified bicarbonate/lactate-buffered PDF improved the performance of the peritoneal membrane, evidenced by reduced new vessel formation in omentum (P<0.02) and parietal peritoneum (P<0.008), reduced fibrosis (P<0.02) and improved ultrafiltration capacity. No significant differences were found between standard and acidified bicarbonate/lactate-buffered PDF. During PET, acidic PDF was neutralized within 15 to 20 min.
With bicarbonate/lactate-buffered PDF, acidity per se did not contribute substantially to peritoneal worsening in our in vivo model of PD, which might be explained by the buffering capacity of the peritoneum.
PDF for Healthcare and Child Health Data Forms.
Zuckerman, Alan E; Schneider, Joseph H; Miller, Ken
2008-11-06
PDF-H is a new best practices standard that uses XFA forms and embedded JavaScript to combine PDF forms with XML data. Preliminary experience with AAP child health forms shows that the combination of PDF with XML is a more effective method to visualize familiar data on paper and the web than the traditional use of XML and XSLT. Both PDF-H and HL7 Clinical Document Architecture can co-exist using the same data for different display formats.
Shankar, P Mohana
2003-03-01
A compound probability density function (pdf) is presented to describe the envelope of the backscattered echo from tissue. This pdf allows local and global variation in scattering cross sections in tissue. The ultrasonic backscattering cross sections are assumed to be gamma distributed. The gamma distribution also is used to model the randomness in the average cross sections. This gamma-gamma model results in the compound scattering pdf for the envelope. The relationship of this compound pdf to the Rayleigh, K, and Nakagami distributions is explored through an analysis of the signal-to-noise ratio of the envelopes and random number simulations. The three-parameter compound pdf appears to be flexible enough to represent envelope statistics giving rise to Rayleigh, K, and Nakagami distributions.
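The compound construction can be sketched numerically: gamma-distributed (Nakagami-m) envelope power whose average is itself gamma-distributed, with the envelope signal-to-noise ratio (mean over standard deviation) used as the discriminating statistic. Parameter values below are illustrative, not fitted to tissue data; the Rayleigh envelope SNR of about 1.91 is recovered when both shape parameters are large.

```python
import numpy as np

rng = np.random.default_rng(2)

def compound_envelope(m, w, size):
    """Gamma-gamma compound envelope: power is gamma (Nakagami-m) given a
    local mean cross-section that is itself gamma-distributed with shape w."""
    omega = rng.gamma(shape=w, scale=1.0 / w, size=size)  # random average cross-section
    power = rng.gamma(shape=m, scale=omega / m)           # Nakagami-m power given omega
    return np.sqrt(power)

n = 1_000_000
# large w: little global variation, envelope approaches Rayleigh (SNR ~ 1.91)
a_rayleigh = compound_envelope(m=1.0, w=500.0, size=n)
snr = a_rayleigh.mean() / a_rayleigh.std()

# small w: strong global variation, K-like heavy tail lowers the envelope SNR
a_k = compound_envelope(m=1.0, w=1.0, size=n)
snr_k = a_k.mean() / a_k.std()
```

The drop in envelope SNR as the global shape parameter decreases is what lets the three-parameter compound pdf span the Rayleigh, K, and Nakagami regimes.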
Signaling of pigment-dispersing factor (PDF) in the Madeira cockroach Rhyparobia maderae.
Wei, Hongying; Yasar, Hanzey; Funk, Nico W; Giese, Maria; Baz, El-Sayed; Stengl, Monika
2014-01-01
The insect neuropeptide pigment-dispersing factor (PDF) is a functional ortholog of vasoactive intestinal polypeptide, the coupling factor of the mammalian circadian pacemaker. Despite PDF's importance for synchronized circadian locomotor activity rhythms, its signaling is not well understood. We studied PDF signaling in primary cell cultures of the accessory medulla, the circadian pacemaker of the Madeira cockroach. In Ca²⁺ imaging studies four types of PDF-responses were distinguished. In regularly bursting type 1 pacemakers PDF application resulted in dose-dependent long-lasting increases in Ca²⁺ baseline concentration and frequency of oscillating Ca²⁺ transients. Adenylyl cyclase antagonists prevented PDF-responses in type 1 cells, indicating that PDF signaled via elevation of intracellular cAMP levels. In contrast, in type 2 pacemakers PDF transiently raised intracellular Ca²⁺ levels even after blocking adenylyl cyclase activity. In patch clamp experiments the previously characterized types 1-4 could not be identified. Instead, PDF-responses were categorized according to ion channels affected. Application of PDF inhibited outward potassium or inward sodium currents, sometimes in the same neuron. In a comparison of Ca²⁺ imaging and patch clamp experiments we hypothesized that in type 1 cells PDF-dependent rises in cAMP concentrations block primarily outward K⁺ currents. Possibly, this PDF-dependent depolarization underlies PDF-dependent phase advances of pacemakers. Finally, we propose that PDF-dependent concomitant modulation of K⁺ and Na⁺ channels in coupled pacemakers causes ultradian membrane potential oscillations as prerequisite to efficient synchronization via resonance.
Nelson, Charles; Avramov-Zamurovic, Svetlana; Korotkova, Olga; Malek-Madani, Reza; Sova, Raymond; Davidson, Frederic
2013-11-01
Irradiance fluctuations of an infrared laser beam from a shore-to-ship data link ranging from 5.1 to 17.8 km are compared to lognormal (LN), gamma-gamma (GG) with aperture averaging, and gamma-Laguerre (GL) distributions. From our data analysis, the LN and GG probability density function (PDF) models were generally in good agreement in near-weak to moderate fluctuations. This was also true in moderate to strong fluctuations when the spatial coherence radius was smaller than the detector aperture size, with the exception of the 2.54 cm power-in-bucket (PIB) where the LN PDF model fit best. For moderate to strong fluctuations, the GG PDF model tended to outperform the LN PDF model when the spatial coherence radius was greater than the detector aperture size. Additionally, the GL PDF model had the best or next to best overall fit in all cases with the exception of the 2.54 cm PIB where the scintillation index was highest. The GL PDF model also appears to be robust for off-beam-center laser applications.
Total Scattering and Pair Distribution Function Analysis in Modelling Disorder in PZN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitfield, Ross E.; Goossens, Darren J; Welberry, T. R.
2016-01-01
The ability of the pair distribution function (PDF) analysis of total scattering (TS) from a powder to determine the local ordering in ferroelectric PZN (PbZn1/3Nb2/3O3) has been explored by comparison with a model established using single-crystal diffuse scattering (SCDS). While X-ray PDF analysis is discussed, the focus is on neutron diffraction results because of the greater extent of the data and the sensitivity of the neutron to oxygen atoms, the behaviour of which is important in PZN. The PDF was shown to be sensitive to many effects not apparent in the average crystal structure, including variations in the B-site-O separation distances and the fact that (110) Pb²⁺ displacements are most likely. A qualitative comparison between SCDS and the PDF shows that some features apparent in SCDS were not apparent in the PDF. These tended to pertain to short-range correlations in the structure, rather than to interatomic separations. For example, in SCDS the short-range alternation of the B-site cations was quite apparent in diffuse scattering at (½ ½ ½), whereas it was not apparent in the PDF.
NASA Astrophysics Data System (ADS)
Frandsen, Benjamin A.; Brunelli, Michela; Page, Katharine; Uemura, Yasutomo J.; Staunton, Julie B.; Billinge, Simon J. L.
2016-05-01
We present a temperature-dependent atomic and magnetic pair distribution function (PDF) analysis of neutron total scattering measurements of antiferromagnetic MnO, an archetypal strongly correlated transition-metal oxide. The known antiferromagnetic ground-state structure fits the low-temperature data closely with refined parameters that agree with conventional techniques, confirming the reliability of the newly developed magnetic PDF method. The measurements performed in the paramagnetic phase reveal significant short-range magnetic correlations on a ~1 nm length scale that differ substantially from the low-temperature long-range spin arrangement. Ab initio calculations using a self-interaction-corrected local spin density approximation of density functional theory predict magnetic interactions dominated by Anderson superexchange and reproduce the measured short-range magnetic correlations to a high degree of accuracy. Further calculations simulating an additional contribution from a direct exchange interaction show much worse agreement with the data. The Anderson superexchange model for MnO is thus verified by experimentation and confirmed by ab initio theory.
Ghosh, Sayan; Das, Swagatam; Vasilakos, Athanasios V; Suresh, Kaushik
2012-02-01
Differential evolution (DE) is arguably one of the most powerful stochastic real-parameter optimization algorithms of current interest. Since its inception in the mid 1990s, DE has been finding many successful applications in real-world optimization problems from diverse domains of science and engineering. This paper takes a first significant step toward the convergence analysis of a canonical DE (DE/rand/1/bin) algorithm. It first deduces a time-recursive relationship for the probability density function (PDF) of the trial solutions, taking into consideration the DE-type mutation, crossover, and selection mechanisms. Then, by applying the concepts of Lyapunov stability theorems, it shows that as time approaches infinity, the PDF of the trial solutions concentrates narrowly around the global optimum of the objective function, assuming the shape of a Dirac delta distribution. Asymptotic convergence behavior of the population PDF is established by constructing a Lyapunov functional based on the PDF and showing that it monotonically decreases with time. The analysis is applicable to a class of continuous and real-valued objective functions that possesses a unique global optimum (but may have multiple local optima). Theoretical results have been substantiated with relevant computer simulations.
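The canonical DE/rand/1/bin scheme analyzed in the paper is easy to state concretely. The sketch below is a standard textbook implementation with illustrative control parameters (population size, F, CR), applied to a sphere function, which has the unique global optimum the convergence analysis assumes.

```python
import numpy as np

rng = np.random.default_rng(3)

def de_rand_1_bin(f, bounds, pop_size=30, F=0.7, CR=0.9, gens=300):
    """Canonical DE/rand/1/bin: mutation v = x_r1 + F*(x_r2 - x_r3),
    binomial crossover with rate CR, then greedy one-to-one selection."""
    d = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, d))
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            r1, r2, r3 = rng.choice(
                [j for j in range(pop_size) if j != i], 3, replace=False)
            v = pop[r1] + F * (pop[r2] - pop[r3])     # mutation
            mask = rng.random(d) < CR                 # binomial crossover
            mask[rng.integers(d)] = True              # at least one gene from v
            u = np.where(mask, v, pop[i])
            fu = f(u)
            if fu <= fit[i]:                          # greedy selection
                pop[i], fit[i] = u, fu
    return pop[fit.argmin()], fit.min()

# sphere function: continuous, with a unique global optimum at the origin
best, fbest = de_rand_1_bin(lambda x: float(np.sum(x ** 2)), [(-5, 5)] * 5)
```

Consistent with the paper's asymptotic result, the population collapses onto the global optimum: the trial-solution PDF narrows around the origin as generations accumulate.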
Duvall, Laura B.; Taghert, Paul H.
2012-01-01
The neuropeptide Pigment Dispersing Factor (PDF) is essential for normal circadian function in Drosophila. It synchronizes the phases of M pacemakers, while in E pacemakers it decelerates their cycling and supports their amplitude. The PDF receptor (PDF-R) is present in both M and subsets of E cells. Activation of PDF-R stimulates cAMP increases in vitro and in M cells in vivo. The present study asks: What is the identity of downstream signaling components that are associated with PDF receptor in specific circadian pacemaker neurons? Using live imaging of intact fly brains and transgenic RNAi, we show that adenylate cyclase AC3 underlies PDF signaling in M cells. Genetic disruptions of AC3 specifically disrupt PDF responses: they do not affect other Gs-coupled GPCR signaling in M cells, they can be rescued, and they do not represent developmental alterations. Knockdown of the Drosophila AKAP-like scaffolding protein Nervy also reduces PDF responses. Flies with AC3 alterations show behavioral syndromes consistent with known roles of M pacemakers as mediated by PDF. Surprisingly, disruption of AC3 does not alter PDF responses in E cells—the PDF-R(+) LNd. Within M pacemakers, PDF-R couples preferentially to a single AC, but PDF-R association with a different AC(s) is needed to explain PDF signaling in the E pacemakers. Thus critical pathways of circadian synchronization are mediated by highly specific second messenger components. These findings support a hypothesis that PDF signaling components within target cells are sequestered into “circadian signalosomes,” whose compositions differ between E and M pacemaker cell types. PMID:22679392
Continuous time anomalous diffusion in a composite medium.
Stickler, B A; Schachinger, E
2011-08-01
The one-dimensional continuous time anomalous diffusion in composite media consisting of a finite number of layers in immediate contact is investigated. The diffusion process itself is described with the help of two probability density functions (PDFs), one of which is an arbitrary jump-length PDF, and the other is a long-tailed waiting-time PDF characterized by the waiting-time index β∈(0,1). The former is assumed to be a function of the space coordinate x and the time coordinate t, while the latter is a function of x and the time interval. For such an environment a very general form of the diffusion equation is derived which describes the continuous time anomalous diffusion in a composite medium. This result is then specialized to two particular forms of the jump-length PDF, namely the continuous time Lévy flight PDF and the continuous time truncated Lévy flight PDF. In both cases the PDFs are characterized by the Lévy index α∈(0,2) which is regarded as a function of x and t. It is possible to demonstrate that for particular choices of the indices α and β other equations for anomalous diffusion, well known from the literature, follow immediately. This demonstrates the very general applicability of the derivation and of the resulting fractional differential equation discussed here.
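The physical picture, long-tailed waiting times producing subdiffusion, can be illustrated with a minimal continuous time random walk simulation. The Pareto waiting-time law, unit jump lengths, and spatial homogeneity (no x or t dependence of the indices) are simplifying assumptions here, not the paper's general composite-medium setting.

```python
import numpy as np

rng = np.random.default_rng(4)

N_WALKERS = 20_000
BETA = 0.6          # waiting-time index in (0, 1): infinite-mean pauses

def ctrw_msd(t_obs):
    """Mean-squared displacement at time t_obs for walkers making unit jumps
    separated by Pareto waiting times with tail index BETA (waits >= 1)."""
    x = np.zeros(N_WALKERS)
    t = np.zeros(N_WALKERS)
    active = np.ones(N_WALKERS, dtype=bool)
    while active.any():
        idx = np.flatnonzero(active)
        t[idx] += rng.pareto(BETA, idx.size) + 1.0
        done = t[idx] > t_obs            # next jump falls after the observation time
        jumpers = idx[~done]
        x[jumpers] += rng.choice([-1.0, 1.0], jumpers.size)
        active[idx[done]] = False
    return (x ** 2).mean()

msd_short = ctrw_msd(100.0)
msd_long = ctrw_msd(1000.0)
# subdiffusion: MSD ~ t**BETA, so a tenfold longer time gives roughly a
# 10**0.6 ~ 4x larger MSD instead of the 10x of normal diffusion
ratio = msd_long / msd_short
```

For β → 1 the simulation approaches normal diffusion, matching the limit in which the fractional equation reduces to the ordinary diffusion equation.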
Volatility in financial markets: stochastic models and empirical results
NASA Astrophysics Data System (ADS)
Miccichè, Salvatore; Bonanno, Giovanni; Lillo, Fabrizio; Mantegna, Rosario N.
2002-11-01
We investigate the historical volatility of the 100 most capitalized stocks traded in US equity markets. An empirical probability density function (pdf) of volatility is obtained and compared with the theoretical predictions of a lognormal model and of the Hull and White model. The lognormal model well describes the pdf in the region of low values of volatility whereas the Hull and White model better approximates the empirical pdf for large values of volatility. Both models fail in describing the empirical pdf over a moderately large volatility range.
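A minimal version of the lognormal comparison can be sketched with synthetic data, assuming (for illustration only) that volatility is exactly lognormal; the model parameters are then recovered by matching the moments of log-volatility, the region where the lognormal model fits the empirical pdf well.

```python
import numpy as np

rng = np.random.default_rng(5)

# synthetic "historical volatility" sample, assumed lognormal for this sketch
mu_true, sigma_true = -4.0, 0.4
vol = rng.lognormal(mean=mu_true, sigma=sigma_true, size=500_000)

# fit the lognormal model by moment matching in log space
log_vol = np.log(vol)
mu_hat, sigma_hat = log_vol.mean(), log_vol.std()

# the fitted model reproduces the empirical median volatility
median_model = np.exp(mu_hat)
median_empirical = np.median(vol)
```

On real data the interesting part is where this fit breaks: the paper finds the lognormal works at low volatility while the Hull and White model better captures the heavy right tail.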
HERAFitter: Open source QCD fit project
Alekhin, S.; Behnke, O.; Belov, P.; ...
2015-07-01
HERAFitter is an open-source package that provides a framework for the determination of the parton distribution functions (PDFs) of the proton and for many different kinds of analyses in Quantum Chromodynamics (QCD). It encodes results from a wide range of experimental measurements in lepton-proton deep inelastic scattering and proton-proton (proton-antiproton) collisions at hadron colliders. These are complemented with a variety of theoretical options for calculating PDF-dependent cross section predictions corresponding to the measurements. The framework covers a large number of the existing methods and schemes used for PDF determination. The data and theoretical predictions are brought together through numerous methodological options for carrying out PDF fits and plotting tools to help visualise the results. While primarily based on the approach of collinear factorisation, HERAFitter also provides facilities for fits of dipole models and transverse-momentum dependent PDFs. The package can be used to study the impact of new precise measurements from hadron colliders. This paper describes the general structure of HERAFitter and its wide choice of options.
A determination of the charm content of the proton: The NNPDF Collaboration.
Ball, Richard D; Bertone, Valerio; Bonvini, Marco; Carrazza, Stefano; Forte, Stefano; Guffanti, Alberto; Hartland, Nathan P; Rojo, Juan; Rottoli, Luca
2016-01-01
We present an unbiased determination of the charm content of the proton, in which the charm parton distribution function (PDF) is parametrized on the same footing as the light quarks and the gluon in a global PDF analysis. This determination relies on the NLO calculation of deep-inelastic structure functions in the FONLL scheme, generalized to account for massive charm-initiated contributions. When the EMC charm structure function dataset is included, it is well described by the fit, and PDF uncertainties in the fitted charm PDF are significantly reduced. We then find that the fitted charm PDF vanishes within uncertainties at a scale [Formula: see text] GeV for all [Formula: see text], independent of the value of [Formula: see text] used in the coefficient functions. We also find some evidence that the charm PDF at large [Formula: see text] and low scales does not vanish, but rather has an "intrinsic" component, very weakly scale dependent and almost independent of the value of [Formula: see text], carrying less than [Formula: see text] of the total momentum of the proton. The uncertainties in all other PDFs are only slightly increased by the inclusion of fitted charm, while the dependence of these PDFs on [Formula: see text] is reduced. The increased stability with respect to [Formula: see text] persists at high scales and is the main implication of our results for LHC phenomenology. Our results show that if the EMC data are correct, then the usual approach in which charm is perturbatively generated leads to biased results for the charm PDF, though at small x this bias could be reabsorbed if the uncertainty due to the charm mass and missing higher orders were included. We show that LHC data for processes, such as high [Formula: see text] and large rapidity charm pair production and [Formula: see text] production, have the potential to confirm or disprove the implications of the EMC data.
Distinguishing dark matter from unresolved point sources in the Inner Galaxy with photon statistics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Samuel K.; Lisanti, Mariangela; Safdi, Benjamin R., E-mail: samuelkl@princeton.edu, E-mail: mlisanti@princeton.edu, E-mail: bsafdi@princeton.edu
2015-05-01
Data from the Fermi Large Area Telescope suggest that there is an extended excess of GeV gamma-ray photons in the Inner Galaxy. Identifying potential astrophysical sources that contribute to this excess is an important step in verifying whether the signal originates from annihilating dark matter. In this paper, we focus on the potential contribution of unresolved point sources, such as millisecond pulsars (MSPs). We propose that the statistics of the photons—in particular, the flux probability density function (PDF) of the photon counts below the point-source detection threshold—can potentially distinguish between the dark-matter and point-source interpretations. We calculate the flux PDF via the method of generating functions for these two models of the excess. Working in the framework of Bayesian model comparison, we then demonstrate that the flux PDF can potentially provide evidence for an unresolved MSP-like point-source population.
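The intuition behind using the flux PDF can be sketched with a toy compound-Poisson model; the numbers are illustrative and this is not the paper's generating-function calculation. Smooth dark-matter-like emission gives Poisson counts per pixel, while a population of unresolved sources with the same mean flux produces a visibly overdispersed count PDF.

```python
import numpy as np

rng = np.random.default_rng(7)

npix = 200_000
mean_counts = 5.0

# smooth (dark-matter-like) emission: Poisson counts in every pixel
dm_counts = rng.poisson(mean_counts, npix)

# unresolved point sources: few sources per pixel, each contributing many counts
mean_sources = 0.5
counts_per_source = mean_counts / mean_sources   # keeps the mean flux identical
n_src = rng.poisson(mean_sources, npix)
ps_counts = rng.poisson(n_src * counts_per_source)

# identical mean, but the point-source count PDF is much wider
dm_fano = dm_counts.var() / dm_counts.mean()     # ~1 for Poisson
ps_fano = ps_counts.var() / ps_counts.mean()     # ~1 + counts_per_source here
```

It is exactly this extra width (and skewness) of the sub-threshold count distribution that lets Bayesian model comparison separate an MSP-like population from smooth dark-matter annihilation.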
A probability-based multi-cycle sorting method for 4D-MRI: A simulation study
Liang, Xiao; Yin, Fang-Fang; Liu, Yilin; Cai, Jing
2016-01-01
Purpose: To develop a novel probability-based sorting method capable of generating multiple breathing cycles of 4D-MRI images and to evaluate performance of this new method by comparing with conventional phase-based methods in terms of image quality and tumor motion measurement. Methods: Based on previous findings that breathing motion probability density function (PDF) of a single breathing cycle is dramatically different from the true stabilized PDF that results from many breathing cycles, it is expected that a probability-based sorting method capable of generating multiple breathing cycles of 4D images may capture breathing variation information missing from conventional single-cycle sorting methods. The overall idea is to identify a few main breathing cycles (and their corresponding weightings) that can best represent the main breathing patterns of the patient and then reconstruct a set of 4D images for each of the identified main breathing cycles. This method is implemented in three steps: (1) The breathing signal is decomposed into individual breathing cycles, characterized by amplitude and period; (2) individual breathing cycles are grouped based on amplitude and period to determine the main breathing cycles. If a group contains more than 10% of all breathing cycles in a breathing signal, it is determined as a main breathing pattern group and is represented by the average of individual breathing cycles in the group; (3) for each main breathing cycle, a set of 4D images is reconstructed using a result-driven sorting method adapted from our previous study. The probability-based sorting method was first tested on 26 patients’ breathing signals to evaluate its feasibility of improving target motion PDF. The new method was subsequently tested for a sequential image acquisition scheme on the 4D digital extended cardiac torso (XCAT) phantom.
Performance of the probability-based and conventional sorting methods was evaluated in terms of target volume precision and accuracy as measured by the 4D images, and also the accuracy of average intensity projection (AIP) of 4D images. Results: Probability-based sorting showed improved similarity of breathing motion PDF from 4D images to reference PDF compared to single cycle sorting, indicated by the significant increase in Dice similarity coefficient (DSC) (probability-based sorting, DSC = 0.89 ± 0.03, and single cycle sorting, DSC = 0.83 ± 0.05, p-value <0.001). Based on the simulation study on XCAT, the probability-based method outperforms the conventional phase-based methods in qualitative evaluation on motion artifacts and quantitative evaluation on tumor volume precision and accuracy and accuracy of AIP of the 4D images. Conclusions: In this paper the authors demonstrated the feasibility of a novel probability-based multicycle 4D image sorting method. The authors’ preliminary results showed that the new method can improve the accuracy of tumor motion PDF and the AIP of 4D images, presenting potential advantages over the conventional phase-based sorting method for radiation therapy motion management. PMID:27908178
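Step (2) of the sorting method, grouping individual breathing cycles by amplitude and period and keeping only groups holding at least 10% of all cycles, can be sketched as follows. The relative closeness tolerances are hypothetical choices for illustration; the paper does not specify them here.

```python
import numpy as np

def main_breathing_cycles(amplitudes, periods,
                          amp_tol=0.15, per_tol=0.15, min_frac=0.10):
    """Greedily group cycles whose amplitude and period lie within a relative
    tolerance of the running group mean; groups holding >= min_frac of all
    cycles are the 'main' patterns, weighted by group size."""
    cycles = list(zip(amplitudes, periods))
    groups = []
    for a, p in cycles:
        for g in groups:
            a0 = np.mean([c[0] for c in g])
            p0 = np.mean([c[1] for c in g])
            if abs(a - a0) <= amp_tol * a0 and abs(p - p0) <= per_tol * p0:
                g.append((a, p))
                break
        else:
            groups.append([(a, p)])
    n = len(cycles)
    # (mean amplitude, mean period, weight) for each main breathing pattern
    return [(np.mean([c[0] for c in g]), np.mean([c[1] for c in g]), len(g) / n)
            for g in groups if len(g) / n >= min_frac]

# two distinct simulated breathing patterns plus two outlier cycles
amps = [10.0] * 8 + [15.0] * 10 + [30.0, 4.0]
pers = [4.0] * 8 + [5.0] * 10 + [9.0, 2.0]
mains = main_breathing_cycles(amps, pers)
```

Each returned triple corresponds to one main breathing cycle and its weighting, for which a separate 4D image set would then be reconstructed in step (3).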
Isaac, R Elwyn; Johnson, Erik C; Audsley, Neil; Shirras, Alan D
2007-12-01
Recent studies have firmly established pigment dispersing factor (PDF), a C-terminally amidated octadecapeptide, as a key neurotransmitter regulating rhythmic circadian locomotory behaviours in adult Drosophila melanogaster. The mechanisms by which PDF functions as a circadian peptide transmitter are not fully understood, however; in particular, nothing is known about the role of extracellular peptidases in terminating PDF signalling at synapses. In this study we show that PDF is susceptible to hydrolysis by neprilysin, an endopeptidase that is enriched in synaptic membranes of mammals and insects. Neprilysin cleaves PDF at the internal Ser7-Leu8 peptide bond to generate PDF1-7 and PDF8-18. Neither of these fragments was able to increase intracellular cAMP levels in HEK293 cells cotransfected with the Drosophila PDF receptor cDNA and a firefly luciferase reporter gene, confirming that such cleavage results in PDF inactivation. The Ser7-Leu8 peptide bond was also the principal cleavage site when PDF was incubated with membranes prepared from heads of adult Drosophila. This endopeptidase activity was inhibited by the neprilysin inhibitors phosphoramidon (IC50 0.15 μmol l⁻¹) and thiorphan (IC50 1.2 μmol l⁻¹). We propose that cleavage by a member of the Drosophila neprilysin family of endopeptidases is the most likely mechanism for inactivating synaptic PDF and that neprilysin might have an important role in regulating PDF signals within circadian neural circuits.
Modeling of turbulent supersonic H2-air combustion with a multivariate beta PDF
NASA Technical Reports Server (NTRS)
Baurle, R. A.; Hassan, H. A.
1993-01-01
Recent calculations of turbulent supersonic reacting shear flows using an assumed multivariate beta PDF (probability density function) resulted in reduced production rates and a delay in the onset of combustion. This result is not consistent with available measurements. The present research explores two possible reasons for this behavior: use of PDFs that do not yield Favre averaged quantities, and the gradient diffusion assumption. A new multivariate beta PDF involving species densities is introduced which makes it possible to compute Favre averaged mass fractions. However, using this PDF did not improve comparisons with experiment. A countergradient diffusion model is then introduced. Preliminary calculations suggest this to be the cause of the discrepancy.
A method to estimate stellar ages from kinematical data
NASA Astrophysics Data System (ADS)
Almeida-Fernandes, F.; Rocha-Pinto, H. J.
2018-05-01
We present a method to build a probability density function (PDF) for the age of a star based on its peculiar velocities U, V, and W and its orbital eccentricity. The sample used in this work comes from the Geneva-Copenhagen Survey (GCS) that contains the spatial velocities, orbital eccentricities, and isochronal ages for about 14 000 stars. Using the GCS stars, we fitted the parameters that describe the relations between the distributions of kinematical properties and age. This parametrization allows us to obtain an age probability from the kinematical data. From this age PDF, we estimate an individual average age for the star using the most likely age and the expected age. We have obtained the age PDF for 9102 stars from the GCS and have shown that the distribution of individual ages derived from our method is in good agreement with the distribution of isochronal ages. We also observe a decline in the mean metallicity with our ages for stars younger than 7 Gyr, similar to the one observed for isochronal ages. This method can be useful for the estimation of rough stellar ages for those stars that fall in areas of the Hertzsprung-Russell diagram where isochrones are tightly crowded. As an example of this method, we estimate the age of Trappist-1, which is an M8V star, obtaining t(UVW) = 12.50 (+0.29/-6.23) Gyr.
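The Bayesian core of such a method can be sketched as follows: the age likelihood is a product of Gaussian velocity distributions whose dispersions grow with age. The age-velocity coefficients and the power-law exponent below are hypothetical stand-ins for the relations fitted to the GCS, and eccentricity is omitted for brevity.

```python
import numpy as np

def age_pdf_from_uvw(u, v, w):
    """P(t | U, V, W) proportional to the product of N(v_i; 0, s_i(t)),
    with an illustrative age-velocity relation s(t) = s0 * t**0.33
    (the s0 values in km/s and the exponent are assumptions)."""
    t = np.linspace(0.1, 13.5, 1000)      # age grid, Gyr
    dt = t[1] - t[0]
    like = np.ones_like(t)
    for vel, s0 in ((u, 22.0), (v, 14.0), (w, 10.0)):
        s = s0 * t ** 0.33                # velocity dispersion grows with age
        like *= np.exp(-0.5 * (vel / s) ** 2) / s
    pdf = like / (like.sum() * dt)        # normalise on the grid
    most_likely = t[pdf.argmax()]
    expected = (t * pdf).sum() * dt
    return t, pdf, most_likely, expected

# a kinematically "hot" star should come out older than a "cold" one
_, pdf_hot, ml_hot, exp_hot = age_pdf_from_uvw(-45.0, -30.0, 20.0)
_, pdf_cold, ml_cold, exp_cold = age_pdf_from_uvw(-5.0, -3.0, 2.0)
```

The two summary statistics returned, the most likely age and the expected age, mirror the two individual age estimators the paper extracts from the age PDF.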
Fully probabilistic control for stochastic nonlinear control systems with input dependent noise.
Herzallah, Randa
2015-03-01
Robust controllers for nonlinear stochastic systems with functional uncertainties can be consistently designed using probabilistic control methods. In this paper a generalised probabilistic controller design for the minimisation of the Kullback-Leibler divergence between the actual joint probability density function (pdf) of the closed loop control system, and an ideal joint pdf is presented emphasising how the uncertainty can be systematically incorporated in the absence of reliable systems models. To achieve this objective all probabilistic models of the system are estimated from process data using mixture density networks (MDNs) where all the parameters of the estimated pdfs are taken to be state and control input dependent. Based on this dependency of the density parameters on the input values, explicit formulations to the construction of optimal generalised probabilistic controllers are obtained through the techniques of dynamic programming and adaptive critic methods. Using the proposed generalised probabilistic controller, the conditional joint pdfs can be made to follow the ideal ones. A simulation example is used to demonstrate the implementation of the algorithm and encouraging results are obtained.
Salman, Katrin; Cain, Peter A; Fitzgerald, Benjamin T; Sundqvist, Martin G; Ugander, Martin
2017-07-01
Cardiac amyloidosis is a rare but serious condition with poor survival. One of the early findings by echocardiography is impaired diastolic function, even before the development of cardiac symptoms. Early diagnosis is important, permitting initiation of treatment aimed at improving survival. The parameterized diastolic filling (PDF) formalism entails describing the left ventricular filling pattern during early diastole using the mathematical equation for the motion of a damped harmonic oscillator. We hypothesized that echocardiographic PDF analysis could detect differences in diastolic function between patients with amyloidosis and controls. Pulsed-wave Doppler echocardiography of transmitral flow was measured in 13 patients with amyloid heart disease and 13 age- and gender-matched controls. E-waves (2 to 3 per subject) were analyzed using in-house developed software. Nine PDF-derived parameters were obtained in addition to conventional echocardiographic parameters of diastolic function. Compared to controls, cardiac amyloidosis patients had a larger left atrial area (23.7 ± 7.5 cm² vs. 18.5 ± 4.8 cm², p = 0.04), greater interventricular septum wall thickness (14.4 ± 2.6 mm vs. 9.3 ± 1.3 mm, p < 0.001), lower e' (0.06 ± 0.02 m/s vs. 0.09 ± 0.02 m/s, p < 0.001) and higher E/e' (18.0 ± 12.9 vs. 7.7 ± 1.3, p = 0.001). The PDF parameter peak resistive force was greater in cardiac amyloidosis patients compared to controls (17.9 ± 5.7 mN vs. 13.1 ± 3.1 mN, p = 0.03), and other PDF parameters did not differ. PDF analysis revealed that patients with cardiac amyloidosis had a greater peak resistive force compared to controls, consistent with a greater degree of diastolic dysfunction. PDF analysis may be useful in characterizing diastolic function in amyloid heart disease.
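The PDF formalism referred to above models the E-wave as the velocity of a damped harmonic oscillator released from rest at displacement x0, with stiffness k and damping c (mass normalised to unity). The sketch below uses illustrative parameter values, not fitted patient data, and assumes the underdamped case.

```python
import numpy as np

def e_wave_velocity(t, x0, c, k):
    """Velocity magnitude of a damped harmonic oscillator (unit mass) released
    from rest at displacement x0: the PDF model of the early-filling E-wave.
    Underdamped case c**2 < 4*k is assumed."""
    w = np.sqrt(k - c * c / 4.0)          # damped angular frequency
    return x0 * k / w * np.exp(-c * t / 2.0) * np.sin(w * t)

# illustrative values: x0 [m], c [1/s], k [1/s^2] (assumptions, not patient data)
t = np.linspace(0.0, 0.5, 5001)           # s
v = e_wave_velocity(t, x0=0.12, c=14.0, k=180.0)
peak_velocity = v.max()                   # E-wave peak, m/s
# the peak viscous (resistive) force per unit mass is c * v at the velocity peak
peak_resistive = 14.0 * peak_velocity
```

Fitting this curve to each measured E-wave yields x0, c, and k, from which derived indices such as the peak resistive force compared between the amyloidosis and control groups are computed.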
Olds, Daniel; Wang, Hsiu-Wen; Page, Katharine L.
2015-09-04
In this work we discuss the potential problems and currently available solutions in modeling powder-diffraction-based pair-distribution function (PDF) data from systems whose morphological features include distances on the nanometer length scale, such as finite nanoparticles, nanoporous networks, and nanoscale precipitates in bulk materials. The implications of an experimental finite minimum Q-value are addressed by simulation, which also demonstrates the advantages of combining PDF data with small-angle scattering (SAS) data. In addition, we introduce a simple Fortran90 code, DShaper, which may be incorporated into PDF data fitting routines in order to approximate the so-called shape function for any atomistic model.
PDF4LHC recommendations for LHC Run II
Butterworth, Jon; Carrazza, Stefano; Cooper-Sarkar, Amanda; ...
2016-01-06
We provide an updated recommendation for the usage of sets of parton distribution functions (PDFs) and the assessment of PDF and PDF+αs uncertainties suitable for applications at the LHC Run II. We review developments since the previous PDF4LHC recommendation, and discuss and compare the new generation of PDFs, which include substantial information from experimental data from the Run I of the LHC. We then propose a new prescription for the combination of a suitable subset of the available PDF sets, which is presented in terms of a single combined PDF set. Finally, we discuss tools which allow for the delivery of this combined set in terms of optimized sets of Hessian eigenvectors or Monte Carlo replicas, and their usage, and provide some examples of their application to LHC phenomenology.
On the probability distribution function of the mass surface density of molecular clouds. I
NASA Astrophysics Data System (ADS)
Fischera, Jörg
2014-05-01
The probability distribution function (PDF) of the mass surface density is an essential characteristic of the structure of molecular clouds or the interstellar medium in general. Observations of the PDF of molecular clouds indicate a composition of a broad distribution around the maximum and a decreasing tail at high mass surface densities. The first component is attributed to the random distribution of gas, which is modeled using a log-normal function, while the second component is attributed to condensed structures modeled using a simple power law. The aim of this paper is to provide an analytical model of the PDF of condensed structures which can be used by observers to extract information about the condensations. The condensed structures are considered to be either spheres or cylinders with a radial density profile truncated at the cloud radius r_cl. The assumed profile is of the form ρ(r) = ρ_c/(1 + (r/r_0)²)^(n/2) for arbitrary power n, where ρ_c and r_0 are the central density and the inner radius, respectively. An implicit function is obtained which either truncates (sphere) or has a pole (cylinder) at maximal mass surface density. The PDF of spherical condensations, and the asymptotic PDF of cylinders in the limit of infinite overdensity ρ_c/ρ(r_cl), flattens for steeper density profiles and has a power-law asymptote at low and high mass surface densities and a well-defined maximum. The power index of the asymptote Σ^(−γ) of the logarithmic PDF (ΣP(Σ)) in the limit of high mass surface densities is given by γ = (n + 1)/(n − 1) − 1 (spheres) or by γ = n/(n − 1) − 1 (cylinders in the limit of infinite overdensity). Appendices are available in electronic form at http://www.aanda.org
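The asymptote indices quoted in this abstract reduce to simple closed forms; a small sketch evaluating them (symbols as in the abstract):

```python
def gamma_sphere(n):
    """Power index of the high-Σ asymptote Σ^(-γ) of the logarithmic PDF
    for truncated spherical condensations: γ = (n + 1)/(n - 1) - 1."""
    return (n + 1.0) / (n - 1.0) - 1.0

def gamma_cylinder(n):
    """Same index for cylinders in the limit of infinite overdensity:
    γ = n/(n - 1) - 1."""
    return n / (n - 1.0) - 1.0

# The PDF tail flattens as the density profile steepens (larger n),
# e.g. gamma_sphere(2) == 2 but gamma_sphere(3) == 1.
```

Both expressions diverge as n approaches 1 and decrease monotonically with n, which is the flattening with steeper profiles that the abstract describes.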
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ovchinnikov, Mikhail; Lim, Kyo-Sun Sunny; Larson, Vincent E.
Coarse-resolution climate models increasingly rely on probability density functions (PDFs) to represent subgrid-scale variability of prognostic variables. While PDFs characterize the horizontal variability, a separate treatment is needed to account for the vertical structure of clouds and precipitation. When sub-columns are drawn from these PDFs for microphysics or radiation parameterizations, appropriate vertical correlations must be enforced via PDF overlap specifications. This study evaluates the representation of PDF overlap in the Subgrid Importance Latin Hypercube Sampler (SILHS) employed in the assumed-PDF turbulence and cloud scheme called Cloud Layers Unified By Binormals (CLUBB). PDF overlap in CLUBB-SILHS simulations of continental and tropical oceanic deep convection is compared with the overlap of PDFs of various microphysics variables in cloud-resolving model (CRM) simulations of the same cases that explicitly predict the 3D structure of cloud and precipitation fields. CRM results show that PDF overlap varies significantly between different hydrometeor types, as well as between PDFs of mass and number mixing ratios for each species, a distinction that the current SILHS implementation does not make. In CRM simulations that explicitly resolve cloud and precipitation structures, faster-falling species, such as rain and graupel, exhibit significantly higher coherence in their vertical distributions than slower-falling cloud liquid and ice. These results suggest that to improve the overlap treatment in the sub-column generator, the PDF correlations need to depend on hydrometeor properties, such as fall speeds, in addition to the currently implemented dependency on the turbulent convective length scale.
NASA Technical Reports Server (NTRS)
1995-01-01
The success of any solution methodology for studying gas-turbine combustor flows depends a great deal on how well it can model various complex, rate-controlling processes associated with turbulent transport, mixing, chemical kinetics, evaporation and spreading rates of the spray, convective and radiative heat transfer, and other phenomena. These phenomena often strongly interact with each other at disparate time and length scales. In particular, turbulence plays an important role in determining the rates of mass and heat transfer, chemical reactions, and evaporation in many practical combustion devices. Turbulence manifests its influence in a diffusion flame in several forms depending on how turbulence interacts with various flame scales. These forms range from the so-called wrinkled, or stretched, flamelets regime to the distributed combustion regime. Conventional turbulence closure models have difficulty in treating highly nonlinear reaction rates. A solution procedure based on the joint composition probability density function (PDF) approach holds the promise of modeling various important combustion phenomena relevant to practical combustion devices, such as extinction, blowoff limits, and emissions predictions, because it can handle the nonlinear chemical reaction rates without any approximation. In this approach, mean and turbulence gas-phase velocity fields are determined from a standard turbulence model; the joint composition field of species and enthalpy is determined from the solution of a modeled PDF transport equation; and a Lagrangian-based dilute spray model is used for the liquid-phase representation, with appropriate consideration of the exchanges of mass, momentum, and energy between the two phases. The PDF transport equation is solved by a Monte Carlo method, and existing state-of-the-art numerical representations are used to solve the mean gas-phase velocity and turbulence fields together with the liquid-phase equations.
The joint composition PDF approach was extended in our previous work to the study of compressible reacting flows. The application of this method to several supersonic diffusion flames associated with scramjet combustor flow fields provided favorable comparisons with the available experimental data. A further extension of this approach to spray flames, three-dimensional computations, and parallel computing was reported in a recent paper. The recently developed PDF/SPRAY/computational fluid dynamics (CFD) module combines the novelty of the joint composition PDF approach with the ability to run on parallel architectures. This algorithm was implemented on the NASA Lewis Research Center's Cray T3D, a massively parallel computer with an aggregate of 64 processor elements. The calculation procedure was applied to predict the flow properties of both open and confined swirl-stabilized spray flames.
A compression algorithm for the combination of PDF sets.
Carrazza, Stefano; Latorre, José I; Rojo, Juan; Watt, Graeme
The current PDF4LHC recommendation to estimate uncertainties due to parton distribution functions (PDFs) in theoretical predictions for LHC processes involves the combination of separate predictions computed using PDF sets from different groups, each of which comprises a relatively large number of either Hessian eigenvectors or Monte Carlo (MC) replicas. While many fixed-order and parton shower programs allow the evaluation of PDF uncertainties for a single PDF set at no additional CPU cost, this feature is not universal, and, moreover, the a posteriori combination of the predictions using at least three different PDF sets is still required. In this work, we present a strategy for the statistical combination of individual PDF sets, based on the MC representation of Hessian sets, followed by a compression algorithm for the reduction of the number of MC replicas. We illustrate our strategy with the combination and compression of the recent NNPDF3.0, CT14 and MMHT14 NNLO PDF sets. The resulting compressed Monte Carlo PDF sets are validated at the level of parton luminosities and LHC inclusive cross sections and differential distributions. We determine that around 100 replicas provide an adequate representation of the probability distribution for the original combined PDF set, suitable for general applications to LHC phenomenology.
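The idea of replacing a large replica ensemble by a smaller one that preserves its statistics can be sketched as follows. This is a toy stand-in, not the paper's algorithm: the replicas are synthetic, and the error figure covers only the first two moments, minimized by simple random search.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a combined Monte Carlo PDF set: each row is one replica
# tabulated on a small grid of x points (values are synthetic).
n_rep, n_x = 300, 20
prior = rng.lognormal(mean=0.0, sigma=0.3, size=(n_rep, n_x))

def moment_distance(subset, full):
    """Mismatch of mean and standard deviation between a replica subset and
    the full set, summed over the x grid (a much-simplified stand-in for the
    error function minimized by the actual compression algorithm)."""
    return (np.abs(subset.mean(axis=0) - full.mean(axis=0)).sum()
            + np.abs(subset.std(axis=0) - full.std(axis=0)).sum())

def compress(full, n_keep, n_trials=2000):
    """Pick n_keep replicas that best preserve the first two moments of the
    full set, by random search over candidate subsets."""
    best_idx, best_err = None, np.inf
    for _ in range(n_trials):
        idx = rng.choice(len(full), size=n_keep, replace=False)
        err = moment_distance(full[idx], full)
        if err < best_err:
            best_idx, best_err = idx, err
    return best_idx, best_err

idx, err = compress(prior, n_keep=100)
```

The actual compression algorithm uses a richer error function (including correlations and higher moments) and a genetic-algorithm search, but the selection principle, a subset whose statistical estimators track the prior ensemble, is the same.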
NASA Astrophysics Data System (ADS)
Kim, K. K.; Hamm, S. Y.; Kim, S. O.; Yun, S. T.
2016-12-01
Carbon capture and storage (CCS), in which greenhouse gases such as CO2 are captured from point sources and then isolated in underground geologic storage, is one of several useful strategies for confronting global climate change. CO2-rich groundwater could be produced by CO2 dissolution into fresh groundwater around a CO2 storage site. As a consequence, natural analogue studies related to geologic storage provide insights into future geologic CO2 storage sites and can provide crucial information on the safety and security of geologic sequestration, the long-term impact of CO2 storage on the environment, and the field operation and monitoring that could be implemented for geologic sequestration. In this study, we developed a CO2 leakage monitoring method using the probability density function (PDF) by characterizing naturally occurring CO2-rich groundwater. For the study, we used existing data on CO2-rich groundwaters in different geological regions (Gangwondo, Gyeongsangdo, and Choongchungdo provinces) of South Korea. Using the PDF method and a quantitative index (QI), we carried out qualitative and quantitative comparisons among local areas and chemical constituents. Geochemical properties of groundwater with and without CO2, expressed as PDFs, showed that pH, EC, TDS, HCO3-, Ca2+, Mg2+, and SiO2 are effective monitoring parameters for carbonated groundwater in the case of CO2 leakage from an underground storage site. KEY WORDS: CO2-rich groundwater, CO2 storage site, monitoring parameter, natural analogue, probability density function (PDF), QI_quantitative index. Acknowledgement: This study was supported by the "Basic Science Research Program through the National Research Foundation of Korea (NRF), which is funded by the Ministry of Education (NRF-2013R1A1A2058186)" and the "R&D Project on Environmental Management of Geologic CO2 Storage" from KEITI (Project number: 2014001810003).
Jensen, K. M.Ø.; Blichfeld, A. B.; Bauers, S. R.; ...
2015-07-05
By means of normal-incidence, high-flux and high-energy X-rays, we have obtained total scattering data for pair distribution function (PDF) analysis from thin films (tf), suitable for local structure analysis. By using amorphous substrates as support for the films, the standard Rapid Acquisition PDF setup can be applied and the scattering signal from the film can be isolated from the total scattering data through subtraction of an independently measured background signal. No angular corrections to the data are needed, as would be the case for grazing-incidence measurements. We illustrate the 'tfPDF' method through studies of as-deposited (i.e., amorphous) and crystalline FeSb3 films, where the local structure analysis gives insight into the stabilization of the metastable skutterudite FeSb3 phase. The films were prepared by depositing ultra-thin alternating layers of Fe and Sb, which interdiffuse and, after annealing, crystallize to form the FeSb3 structure. The tfPDF data show that the amorphous precursor phase consists of corner-sharing FeSb6 octahedra with motifs highly resembling the local structure in crystalline FeSb3. Analysis of the amorphous structure allows predicting whether the final crystalline product will form the FeSb3 phase with or without excess Sb present. The study thus illustrates how analysis of the local structure in amorphous precursor films can help to understand crystallization processes of metastable phases and opens the way for a range of new local structure studies of thin films.
NASA Astrophysics Data System (ADS)
Coclite, A.; Pascazio, G.; De Palma, P.; Cutrone, L.
2016-07-01
Flamelet-Progress-Variable (FPV) combustion models allow the evaluation of all thermochemical quantities in a reacting flow by computing only the mixture fraction Z and a progress variable C. When using such a method to predict turbulent combustion in conjunction with a turbulence model, a probability density function (PDF) is required to evaluate statistical averages (e.g., Favre averages) of chemical quantities. The choice of the PDF is a compromise between computational cost and accuracy. The aim of this paper is to investigate the influence of the PDF choice and its modeling aspects on the prediction of turbulent combustion. Three different models are considered: the standard one, based on the choice of a β-distribution for Z and a Dirac distribution for C; a model employing a β-distribution for both Z and C; and a third model obtained using a β-distribution for Z and the statistically most likely distribution (SMLD) for C. The standard model, although widely used, does not take into account the interaction between turbulence and chemical kinetics, nor the dependence of the progress variable on its variance in addition to its mean. The SMLD approach establishes a systematic framework to incorporate information from an arbitrary number of moments, thus providing an improvement over conventionally employed presumed-PDF closure models. The rationale behind the choice of the three PDFs is described in some detail, and the prediction capability of the corresponding models is tested against well-known test cases, namely the Sandia flames and H2-air supersonic combustion.
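The presumed β-PDF closure mentioned here amounts to recovering the β shape parameters from the transported mean and variance of Z and integrating chemical quantities against that distribution. A minimal sketch (grid integration, illustrative values; the `beta_pdf_average` helper is our own, not part of any FPV code):

```python
import numpy as np
from scipy import stats

def trapz(y, x):
    """Trapezoidal integration (avoids the NumPy-2.0 removal of np.trapz)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def beta_pdf_average(phi, z_mean, z_var):
    """Average a quantity phi(Z) over a presumed beta-PDF of the mixture
    fraction Z, with the shape parameters recovered from the mean and
    variance of Z (requires 0 < z_var < z_mean * (1 - z_mean))."""
    f = z_mean * (1.0 - z_mean) / z_var - 1.0
    a, b = z_mean * f, (1.0 - z_mean) * f
    z = np.linspace(1e-6, 1.0 - 1e-6, 2001)
    w = stats.beta.pdf(z, a, b)
    return trapz(phi(z) * w, z) / trapz(w, z)

# Sanity check: averaging Z itself must recover (approximately) its mean.
m = beta_pdf_average(lambda z: z, z_mean=0.3, z_var=0.02)
```

In an FPV solver the same integral is evaluated for each tabulated flamelet quantity, with the Dirac, β, or SMLD choice for C changing only the weighting applied in the C direction.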
Cooley, Richard L.
1993-01-01
A new method is developed to efficiently compute exact Scheffé-type confidence intervals for output (or other function of parameters) g(β) derived from a groundwater flow model. The method is general in that parameter uncertainty can be specified by any statistical distribution having a log probability density function (log pdf) that can be expanded in a Taylor series. However, for this study parameter uncertainty is specified by a statistical multivariate beta distribution that incorporates hydrogeologic information in the form of the investigator's best estimates of parameters and a grouping of random variables representing possible parameter values so that each group is defined by maximum and minimum bounds and an ordering according to increasing value. The new method forms the confidence intervals from maximum and minimum limits of g(β) on a contour of a linear combination of (1) the quadratic form for the parameters used by Cooley and Vecchia (1987) and (2) the log pdf for the multivariate beta distribution. Three example problems are used to compare characteristics of the confidence intervals for hydraulic head obtained using different weights for the linear combination. Different weights generally produced similar confidence intervals, whereas the method of Cooley and Vecchia (1987) often produced much larger confidence intervals.
NASA Astrophysics Data System (ADS)
Madadi-Kandjani, E.; Fox, R. O.; Passalacqua, A.
2017-06-01
An extended quadrature method of moments using the β kernel density function (β-EQMOM) is used to approximate solutions to the evolution equation for univariate and bivariate composition probability distribution functions (PDFs) of a passive scalar for binary and ternary mixing. The key element of interest is the molecular mixing term, which is described using the Fokker-Planck (FP) molecular mixing model. The direct numerical simulations (DNSs) of Eswaran and Pope ["Direct numerical simulations of the turbulent mixing of a passive scalar," Phys. Fluids 31, 506 (1988)] and the amplitude mapping closure (AMC) of Pope ["Mapping closures for turbulent mixing and reaction," Theor. Comput. Fluid Dyn. 2, 255 (1991)] are taken as reference solutions to establish the accuracy of the FP model in the case of binary mixing. The DNSs of Juneja and Pope ["A DNS study of turbulent mixing of two passive scalars," Phys. Fluids 8, 2161 (1996)] are used to validate the results obtained for ternary mixing. Simulations are performed with both the conditional scalar dissipation rate (CSDR) proposed by Fox [Computational Methods for Turbulent Reacting Flows (Cambridge University Press, 2003)] and the CSDR from AMC, with the scalar dissipation rate provided as input and obtained from the DNS. Using scalar moments up to fourth order, the ability of the FP model to capture the evolution of the shape of the PDF, important in turbulent mixing problems, is demonstrated. Compared to the widely used assumed β-PDF model [S. S. Girimaji, "Assumed β-pdf model for turbulent mixing: Validation and extension to multiple scalar mixing," Combust. Sci. Technol. 78, 177 (1991)], the β-EQMOM solution to the FP model more accurately describes the initial mixing process with a relatively small increase in computational cost.
Modeling of Dissipation Element Statistics in Turbulent Non-Premixed Jet Flames
NASA Astrophysics Data System (ADS)
Denker, Dominik; Attili, Antonio; Boschung, Jonas; Hennig, Fabian; Pitsch, Heinz
2017-11-01
The dissipation element (DE) analysis is a method for analyzing and compartmentalizing turbulent scalar fields. DEs can be described by two parameters, namely the Euclidean distance l between their extremal points and the scalar difference Δϕ between those points. The joint probability density function (jPDF) of these two parameters, P(Δϕ, l), is expected to suffice for a statistical reconstruction of the scalar field. In addition, reacting scalars show a strong correlation with these DE parameters in both premixed and non-premixed flames. Normalized DE statistics show a remarkable invariance towards changes in Reynolds number. This feature of DE statistics was exploited in a Boltzmann-type evolution equation based model for the probability density function (PDF) of the distance between the extremal points, P(l), in isotropic turbulence. Later, this model was extended to the jPDF P(Δϕ, l) and then adapted for use in free shear flows. The effect of heat release on the scalar scales and DE statistics is investigated, and an extended model for non-premixed jet flames is introduced which accounts for the presence of chemical reactions. This new model is validated against a series of DNS of temporally evolving jet flames. European Research Council Project ``Milestone''.
Laser transit anemometer software development program
NASA Technical Reports Server (NTRS)
Abbiss, John B.
1989-01-01
Algorithms were developed for the extraction of two components of mean velocity, standard deviation, and the associated correlation coefficient from laser transit anemometry (LTA) data ensembles. The solution method is based on an assumed two-dimensional Gaussian probability density function (PDF) model of the flow field under investigation. The procedure consists of transforming the data ensembles from the data acquisition domain (consisting of time and angle information) to the velocity space domain (consisting of velocity component information). The mean velocity results are obtained from the data ensemble centroid. Through a least-squares fitting of the transformed data to an ellipse representing the intersection of a plane with the PDF, the standard deviations and correlation coefficient are obtained. A data set simulation method is presented to test the data reduction process. Results of using the simulation system with a limited test matrix of input values are also given.
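The moment-extraction step can be illustrated on synthetic data already transformed into velocity space. Sample-covariance estimates are used here in place of the paper's ellipse least-squares fit, and all flow parameters below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for an LTA data ensemble in velocity space:
# samples (u, v) from a correlated 2-D Gaussian flow PDF.
true_mean = np.array([30.0, 5.0])           # mean velocity components, m/s
su, sv, rho = 2.0, 1.0, 0.6                 # stds and correlation coefficient
cov = np.array([[su**2, rho * su * sv],
                [rho * su * sv, sv**2]])
uv = rng.multivariate_normal(true_mean, cov, size=20000)

# Mean velocities from the ensemble centroid; standard deviations and the
# correlation coefficient from the sample covariance (moment estimates,
# a simpler alternative to the ellipse fit described in the abstract).
centroid = uv.mean(axis=0)
c = np.cov(uv.T)
sigma_u, sigma_v = np.sqrt(c[0, 0]), np.sqrt(c[1, 1])
corr = c[0, 1] / (sigma_u * sigma_v)
```

The ellipse-fitting route of the report recovers the same five parameters from a constant-probability contour of the 2-D Gaussian, which is more robust when the ensemble is sparse or censored by the instrument's acceptance angles.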
Risk assessment of turbine rotor failure using probabilistic ultrasonic non-destructive evaluations
NASA Astrophysics Data System (ADS)
Guan, Xuefei; Zhang, Jingdan; Zhou, S. Kevin; Rasselkorde, El Mahjoub; Abbasi, Waheed A.
2014-02-01
The study presents a method and application of risk assessment methodology for turbine rotor fatigue failure using probabilistic ultrasonic nondestructive evaluations. A rigorous probabilistic modeling for ultrasonic flaw sizing is developed by incorporating the model-assisted probability of detection, and the probability density function (PDF) of the actual flaw size is derived. Two general scenarios, namely the ultrasonic inspection with an identified flaw indication and the ultrasonic inspection without flaw indication, are considered in the derivation. To perform estimations for fatigue reliability and remaining useful life, uncertainties from ultrasonic flaw sizing and fatigue model parameters are systematically included and quantified. The model parameter PDF is estimated using Bayesian parameter estimation and actual fatigue testing data. The overall method is demonstrated using a realistic application of steam turbine rotor, and the risk analysis under given safety criteria is provided to support maintenance planning.
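The flaw-sizing step, combining a probability-of-detection curve, a sizing-error likelihood and a prior into a posterior PDF of the actual flaw size, can be sketched schematically. The POD curve, Gaussian error model and flat prior below are hypothetical stand-ins, not the paper's derivation:

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal integration (avoids the NumPy-2.0 removal of np.trapz)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def flaw_size_posterior(a_grid, a_measured, sizing_sigma, pod, prior):
    """Posterior PDF of the actual flaw size a, given an ultrasonic
    indication a_measured: Gaussian sizing-error likelihood, weighted by
    the probability of detection POD(a) and a prior, then normalized."""
    like = np.exp(-0.5 * ((a_measured - a_grid) / sizing_sigma) ** 2)
    post = like * pod(a_grid) * prior(a_grid)
    return post / trapz(post, a_grid)

# Hypothetical ingredients: a POD curve rising from 0 to 1 with flaw size,
# and a flat prior over the size range of interest.
pod = lambda a: a**4 / (1.0 + a**4)
prior = lambda a: np.ones_like(a)
a = np.linspace(0.01, 10.0, 2000)             # flaw size grid, mm
p = flaw_size_posterior(a, a_measured=2.0, sizing_sigma=0.5, pod=pod, prior=prior)
```

The resulting size PDF is then propagated through the fatigue model, whose own parameter PDFs are obtained by Bayesian estimation from test data, to yield the reliability and remaining-life estimates.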
Modeling of turbulent supersonic H2-air combustion with an improved joint beta PDF
NASA Technical Reports Server (NTRS)
Baurle, R. A.; Hassan, H. A.
1991-01-01
Attempts at modeling recent experiments of Cheng et al. indicated that discrepancies between theory and experiment can be a result of the form of the assumed probability density function (PDF) and/or the turbulence model employed. Improvements in both the form of the assumed PDF and the turbulence model are presented. The results are again compared with measurements. Initial comparisons are encouraging.
Allatostatin A Signalling in Drosophila Regulates Feeding and Sleep and Is Modulated by PDF.
Chen, Jiangtian; Reiher, Wencke; Hermann-Luibl, Christiane; Sellami, Azza; Cognigni, Paola; Kondo, Shu; Helfrich-Förster, Charlotte; Veenstra, Jan A; Wegener, Christian
2016-09-01
Feeding and sleep are fundamental behaviours with significant interconnections and cross-modulations. The circadian system and peptidergic signals are important components of this modulation, but still little is known about the mechanisms and networks by which they interact to regulate feeding and sleep. We show that specific thermogenetic activation of peptidergic Allatostatin A (AstA)-expressing PLP neurons and enteroendocrine cells reduces feeding and promotes sleep in the fruit fly Drosophila. The effects of AstA cell activation are mediated by AstA peptides acting on receptors homologous to galanin receptors, which subserve similar and apparently conserved functions in vertebrates. We further identify the PLP neurons as a downstream target of the neuropeptide pigment-dispersing factor (PDF), an output factor of the circadian clock. PLP neurons are contacted by PDF-expressing clock neurons, and express a functional PDF receptor demonstrated by cAMP imaging. Silencing of AstA signalling and continuous input to AstA cells by tethered PDF change the sleep/activity ratio in opposite directions but do not affect rhythmicity. Taken together, our results suggest that pleiotropic AstA signalling by a distinct neuronal and enteroendocrine AstA cell subset adapts the fly to a digestive energy-saving state which can be modulated by PDF.
Grzela, Renata; Nusbaum, Julien; Fieulaine, Sonia; Lavecchia, Francesco; Desmadril, Michel; Nhiri, Naima; Van Dorsselaer, Alain; Cianferani, Sarah; Jacquet, Eric; Meinnel, Thierry; Giglione, Carmela
2018-02-01
Unexpected peptide deformylase (PDF) genes were recently retrieved in numerous marine phage genomes. While various hypotheses dealing with the occurrence of these intriguing sequences have been made, no further characterization and functional studies have been described thus far. In this study, we characterize the bacteriophage Vp16 PDF enzyme, as a representative member of the newly identified C-terminally truncated viral PDFs. We show here that conditions classically used for bacterial PDFs lead to an enzyme exhibiting weak activity. Nonetheless, our integrated biophysical and biochemical approaches reveal specific effects of pH and metals on Vp16 PDF stability and activity. A novel purification protocol taking these data into account allowed a strong improvement of Vp16 PDF specific activity, to values similar to those of bacterial PDFs. We next show that Vp16 PDF is as sensitive to actinonin, the natural inhibitor compound of PDFs, as bacterial PDFs. Comparison of the 3D structures of Vp16 and E. coli PDFs bound to actinonin also reveals that both PDFs display identical substrate binding modes. We conclude that the bacteriophage Vp16 PDF protein has functional peptide deformylase activity, and we suggest that phage-encoded PDFs might be important for viral fitness. Copyright © 2017 Elsevier B.V. All rights reserved.
Whitfield, Ross E.; Goossens, Darren J.; Welberry, T. Richard
2016-01-01
The ability of the pair distribution function (PDF) analysis of total scattering (TS) from a powder to determine the local ordering in ferroelectric PZN (PbZn1/3Nb2/3O3) has been explored by comparison with a model established using single-crystal diffuse scattering (SCDS). While X-ray PDF analysis is discussed, the focus is on neutron diffraction results because of the greater extent of the data and the sensitivity of the neutron to oxygen atoms, the behaviour of which is important in PZN. The PDF was shown to be sensitive to many effects not apparent in the average crystal structure, including variations in the B-site-O separation distances and the fact that 〈110〉 Pb2+ displacements are most likely. A qualitative comparison between SCDS and the PDF shows that some features apparent in SCDS were not apparent in the PDF. These tended to pertain to short-range correlations in the structure, rather than to interatomic separations. For example, in SCDS the short-range alternation of the B-site cations was quite apparent in diffuse scattering at (½ ½ ½), whereas it was not apparent in the PDF. PMID:26870378
Probing the strange quark in the proton using pp̄ → Wc at DØ
NASA Astrophysics Data System (ADS)
Ahsan, Mahsana
2005-04-01
I will describe a measurement of the s-quark parton distribution function (PDF) using the DØ detector at the Fermilab Tevatron. As the s-quark PDF has only been measured in νN deep inelastic scattering, it is important to know whether the same PDF works in pp̄ collisions at √s = 1.96 TeV. The measurement of the s-quark PDF is also important for tests of QCD and EW dynamics and for background estimates in searches for New Physics processes (e.g. t → c ô). To measure the s-quark PDF I chose to study W + c production in pp̄ collisions via the parton-level processes sg → W⁻c, s̄g → W⁺c̄, dg → W⁻c or d̄g → W⁺c̄, where d-quark and gluon fusion contributes about 15% of the total Wc production rate. The ratio of the W + c cross section to the inclusive W + 1 jet cross section is of the order of 10⁻², depending on the pT of the jet. A study of the sensitivity of this ratio to the parametrization of the s-quark PDF may eventually allow us to determine a precise distribution function. The current status of the analysis comprises studies of charm-tagging efficiencies in MC, event selection efficiencies, and signal-to-background ratios.
First photon detection in time-resolved transillumination imaging: a theoretical evaluation.
Behin-Ain, S; van Doorn, T; Patterson, J R
2004-09-07
First photon detection, as a special case of time-resolved transillumination imaging, is studied through the derivation of the temporal probability density function (pdf) for the first arriving photon. Pdfs were generated for different laser intensities and media, and for second and later arriving photons. The arrival time of the first detected photon decreased as the laser power increased and also when the scattering and absorption coefficients decreased. The pdf for an embedded, totally absorbing 3 mm inhomogeneity may be distinguished from the pdf of a homogeneous turbid medium similar to that of human breast in dimensions and optical properties.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-29
... the PDF or XML version of Form No. 549D as a method to eFile quarterly data and whether they intend on... fillable Form No. 549D PDF and XML to file data pursuant to Order Nos. 735 and 735-A... PDF ( http://www.ferc.gov/docs-filing/forms/form-549d/form-549d.pdf ) or XML ( http://www.ferc.gov...
Estimation of laser beam pointing parameters in the presence of atmospheric turbulence.
Borah, Deva K; Voelz, David G
2007-08-10
The problem of estimating mechanical boresight and jitter performance of a laser pointing system in the presence of atmospheric turbulence is considered. A novel estimator based on maximizing an average probability density function (pdf) of the received signal is presented. The proposed estimator uses a Gaussian far-field mean irradiance profile, and the irradiance pdf is assumed to be lognormal. The estimates are obtained using a sequence of return signal values from the intended target. Alternatively, one can think of the estimates being made by a cooperative target using the received signal samples directly. The estimator does not require sample-to-sample atmospheric turbulence parameter information. The approach is evaluated using wave optics simulation for both weak and strong turbulence conditions. Our results show that very good boresight and jitter estimation performance can be obtained under the weak turbulence regime. We also propose a novel technique to include the effect of very low received intensity values that cannot be measured well by the receiving device. The proposed technique provides significant improvement over a conventional approach where such samples are simply ignored. Since our method is derived from the lognormal irradiance pdf, the performance under strong turbulence is degraded. However, the ideas can be extended with appropriate pdf models to obtain more accurate results under strong turbulence conditions.
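The core of the estimator described above is maximum-likelihood fitting under a lognormal irradiance pdf. The following is a minimal sketch of that one ingredient, not the authors' full boresight/jitter estimator: for a plain lognormal sample, the maximum-likelihood parameter estimates have a closed form as the mean and standard deviation of the log-samples. The irradiance values here are synthetic.

```python
import math
import random

def lognormal_mle(samples):
    """Closed-form MLE for lognormal parameters: mean and std of the log-samples."""
    logs = [math.log(s) for s in samples]
    mu = sum(logs) / len(logs)
    sigma = math.sqrt(sum((x - mu) ** 2 for x in logs) / len(logs))
    return mu, sigma

random.seed(1)
# Synthetic "received irradiance" samples drawn from a lognormal pdf.
true_mu, true_sigma = 0.5, 0.3
samples = [math.exp(random.gauss(true_mu, true_sigma)) for _ in range(10000)]
mu_hat, sigma_hat = lognormal_mle(samples)
```

The paper's estimator additionally folds in the Gaussian far-field mean irradiance profile and the pointing parameters; this sketch only shows the lognormal likelihood step.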
Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering.
Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus
2014-12-01
This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs.
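The consistency argument above can be illustrated with a toy 1D example (this is an assumption-laden sketch, not the paper's 4D Gaussian-mixture data structure): applying a transfer function to a per-neighborhood pdf commutes with down-sampling, whereas applying it to an averaged intensity does not.

```python
# High-resolution voxel intensities in one down-sampling neighborhood.
voxels = [0.0, 1.0, 0.2, 0.9]

def transfer(i):
    # Binary transfer function: intensities above a threshold render as opaque.
    return 1.0 if i > 0.5 else 0.0

# Ground truth: classify at full resolution, then average the rendered values.
reference = sum(transfer(v) for v in voxels) / len(voxels)

# Standard down-sampling: average intensities first, then apply the transfer function.
naive = transfer(sum(voxels) / len(voxels))

# pdf-based down-sampling: keep a histogram (pdf) of the neighborhood and
# apply the transfer function as an expectation over that pdf.
pdf = {v: voxels.count(v) / len(voxels) for v in set(voxels)}
pdf_based = sum(p * transfer(v) for v, p in pdf.items())
```

Here `pdf_based` matches the full-resolution result while `naive` does not, which is the consistency property the sparse pdf volume representation is built to preserve across resolution levels.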
Pirooznia, Sheila K.; Chiu, Kellie; Chan, May T.; Zimmerman, John E.; Elefant, Felice
2012-01-01
Tip60 is a histone acetyltransferase (HAT) enzyme that epigenetically regulates genes enriched for neuronal functions through interaction with the amyloid precursor protein (APP) intracellular domain. However, whether Tip60-mediated epigenetic dysregulation affects specific neuronal processes in vivo and contributes to neurodegeneration remains unclear. Here, we show that Tip60 HAT activity mediates axonal growth of the Drosophila pacemaker cells, termed “small ventrolateral neurons” (sLNvs), and their production of the neuropeptide pigment-dispersing factor (PDF) that functions to stabilize Drosophila sleep–wake cycles. Using genetic approaches, we show that loss of Tip60 HAT activity in the presence of the Alzheimer’s disease-associated APP affects PDF expression and causes retraction of the sLNv synaptic arbor required for presynaptic release of PDF. Functional consequence of these effects is evidenced by disruption of the sleep–wake cycle in these flies. Notably, overexpression of Tip60 in conjunction with APP rescues these sleep–wake disturbances by inducing overelaboration of the sLNv synaptic terminals and increasing PDF levels, supporting a neuroprotective role for dTip60 in sLNv growth and function under APP-induced neurodegenerative conditions. Our findings reveal a novel mechanism for Tip60 mediated sleep–wake regulation via control of axonal growth and PDF levels within the sLNv-encompassing neural network and provide insight into epigenetic-based regulation of sleep disturbances observed in neurodegenerative diseases like Alzheimer’s disease. PMID:22982579
Liberek, Tomasz; Lichodziejewska-Niemierko, Monika; Knopinska-Posluszny, Wanda; Schaub, Thomas P; Kirchgessner, Judith; Passlick-Deetjen, Jutta; Rutkowski, Boleslaw
2002-01-01
In order to evaluate the biocompatibility profile of a newly designed peritoneal dialysis fluid (PDF), we evaluated peritoneal leukocyte (PMphi) cytokine release following overnight in vivo dwells using standard, lactate-buffered, single-chamber bag PDF (Lac-PDF) and purely bicarbonate-buffered, double-chamber bag PDF containing 34 (Bic-PDF) or 39 (Bic Hi-PDF) mmol/L bicarbonate. A randomized, open, crossover clinical trial with single weekly test dwells was performed in stable, long-term continuous ambulatory PD patients (n = 8). During 8-hour overnight dwells, PMphi were exposed to different PDF containing 1.5% glucose. After drainage, peritoneal cells were isolated and incubated with RPMI 1640 medium for 2 or 3 hours, with and without stimulation by lipopolysaccharide (LPS). Ex vivo release of tumor necrosis factor (TNF)-alpha and interleukin (IL)-6 was measured by specific ELISA technique. After pre-exposure to Lac-PDF, PMphi generated 242 +/- 279 pg TNFalpha/10(6) cells and 157 +/- 105 pg IL-6/10(6) cells. When pre-exposed to Bic-PDF and Bic Hi-PDF, TNFalpha and IL-6 production of PMphi was not significantly different from Lac-PDF. After LPS stimulation (100 ng/mL), PMphi secretion of TNFalpha and IL-6 pre-exposed to the three PDF revealed no significant differences between groups: TNFalpha was 2,864 +/- 1,216, 2,910 +/- 1,202, and 3,291 +/- 558 pg/10(6) cells after overnight dwells with Lac-PDF, Bic-PDF, and Bic Hi-PDF, respectively. Comparably, LPS-stimulated (100 pg/mL) PMphi showed IL-6 secretion of 891 +/- 335, 1,380 +/- 1,149, and 1,442 +/- 966 pg/10(6) cells for Lac-PDF, Bic-PDF, and Bic Hi-PDF. After long-term overnight dwells, initial pH, the different buffers, and varying glucose degradation product levels of PDF do not strongly affect PMphi function with respect to cytokine release. The lack of significant differences between fluids may result from the complete dialysate equilibration achieved during the overnight intraperitoneal dwell.
Uncertainty quantification tools for multiphase gas-solid flow simulations using MFIX
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fox, Rodney O.; Passalacqua, Alberto
2016-02-01
Computational fluid dynamics (CFD) has been widely studied and used in the scientific community and in industry. Various models were proposed to solve problems in different areas. However, all models deviate from reality. The uncertainty quantification (UQ) process evaluates the overall uncertainties associated with the prediction of quantities of interest. In particular, it studies the propagation of input uncertainties to the outputs of the models so that confidence intervals can be provided for the simulation results. In the present work, a non-intrusive quadrature-based uncertainty quantification (QBUQ) approach is proposed. The probability distribution function (PDF) of the system response can then be reconstructed using the extended quadrature method of moments (EQMOM) and the extended conditional quadrature method of moments (ECQMOM). The report first explains the theory of the QBUQ approach, including methods to generate samples for problems with single or multiple uncertain input parameters, low-order statistics, and the required number of samples. Then methods for univariate PDF reconstruction (EQMOM) and multivariate PDF reconstruction (ECQMOM) are explained. The implementation of the QBUQ approach in the open-source CFD code MFIX is discussed next. Finally, the QBUQ approach is demonstrated in several applications. The method is first applied to two examples: a developing flow in a channel with uncertain viscosity, and an oblique shock problem with uncertain upstream Mach number. The error in the prediction of the moment response is studied as a function of the number of samples, and the accuracy of the moments required to reconstruct the PDF of the system response is discussed. The QBUQ approach is then demonstrated by considering a bubbling fluidized bed as an example application. The mean particle size is assumed to be the uncertain input parameter.
The system is simulated with a standard two-fluid model with kinetic theory closures for the particulate phase implemented in MFIX. The effect of uncertainty on the disperse-phase volume fraction, on the phase velocities and on the pressure drop inside the fluidized bed is examined, and the reconstructed PDFs are provided for the three quantities studied. Then the approach is applied to a bubbling fluidized bed with two uncertain parameters, particle-particle and particle-wall restitution coefficients. Contour plots of the mean and standard deviation of solid volume fraction, solid phase velocities and gas pressure are provided. The PDFs of the response are reconstructed using EQMOM with appropriate kernel density functions. The simulation results are compared to experimental data provided by the 2013 NETL small-scale challenge problem. Lastly, the proposed procedure is demonstrated by considering a riser of a circulating fluidized bed as an example application. The mean particle size is considered to be the uncertain input parameter. Contour plots of the mean and standard deviation of solid volume fraction, solid phase velocities, and granular temperature are provided. Mean values and confidence intervals of the quantities of interest are compared to the experimental results. The univariate and bivariate PDF reconstructions of the system response are performed using EQMOM and ECQMOM.
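The non-intrusive idea behind quadrature-based UQ can be sketched with a toy example; the following is a generic 3-point Gauss-Hermite propagation for one standard-normal input, not the report's EQMOM/ECQMOM machinery, and the quadratic `response` merely stands in for an expensive MFIX run.

```python
import math

# 3-point Gauss-Hermite rule for a standard-normal input (probabilists' form):
# exact for polynomial responses up to degree 5.
nodes = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
weights = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]

def response(x):
    # Stand-in for an expensive CFD simulation with one uncertain input.
    return x * x

# Non-intrusive UQ: run the model only at the quadrature nodes,
# then form moments of the response from the weighted model outputs.
mean = sum(w * response(x) for w, x in zip(weights, nodes))
second = sum(w * response(x) ** 2 for w, x in zip(weights, nodes))
variance = second - mean ** 2
```

For this response the exact moments are mean 1 and variance 2, which the three model evaluations recover; the report's EQMOM step would then reconstruct a full PDF from such moments.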
High throughput nonparametric probability density estimation.
Farmer, Jenny; Jacobs, Donald
2018-01-01
In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample-size-invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and over-fitting the data, as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.
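The order-statistics idea behind the scoring function can be sketched as follows. This is a generic illustration under assumed simplifications, not the paper's scoring function: a trial CDF maps i.i.d. data to values that should be uniform, and the k-th sorted uniform value follows a Beta(k, n+1-k) law, so summed Beta log-densities score how well a trial CDF fits.

```python
import math
import random

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def order_statistic_score(data, cdf):
    """Sum of Beta(k, n+1-k) log-densities of the k-th sorted cdf-transformed value."""
    u = sorted(cdf(x) for x in data)
    n = len(u)
    score = 0.0
    for k, uk in enumerate(u, start=1):
        a, b = k, n + 1 - k
        log_beta = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
        score += (a - 1) * math.log(uk) + (b - 1) * math.log(1.0 - uk) - log_beta
    return score

random.seed(7)
data = [random.gauss(0.0, 1.0) for _ in range(200)]
# A correct trial CDF should score higher than a badly mis-specified one.
score_true = order_statistic_score(data, lambda x: normal_cdf(x, 0.0, 1.0))
score_wrong = order_statistic_score(data, lambda x: normal_cdf(x, 2.0, 1.0))
```

The paper iterates on trial CDFs guided by such a score; the maximum-entropy representation and the universal scaling of the score are not reproduced here.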
PMID:29750803
Gatto, Cheryl L.; Broadie, Kendal
2011-01-01
Fragile X syndrome (FXS), caused by loss of fragile X mental retardation 1 (FMR1) gene function, is the most common heritable cause of intellectual disability and autism spectrum disorders. The FMR1 product (FMRP) is an RNA-binding protein best established to function in activity-dependent modulation of synaptic connections. In the Drosophila FXS disease model, loss of functionally-conserved dFMRP causes synaptic overgrowth and overelaboration in pigment dispersing factor (PDF) peptidergic neurons in the adult brain. Here, we identify a very different component of PDF neuron misregulation in dfmr1 mutants: the aberrant retention of normally developmentally-transient PDF tritocerebral (PDF-TRI) neurons. In wild-type animals, PDF-TRI neurons in the central brain undergo programmed cell death and complete, processive clearance within days of eclosion. In the absence of dFMRP, a defective apoptotic program leads to constitutive maintenance of these peptidergic neurons. We tested whether this apoptotic defect is circuit-specific by examining crustacean cardioactive peptide (CCAP) and bursicon circuits, which are similarly developmentally-transient and normally eliminated immediately post-eclosion. In dfmr1 null mutants, CCAP/bursicon neurons also exhibit significantly delayed clearance dynamics, but are subsequently eliminated from the nervous system, in contrast to the fully persistent PDF-TRI neurons. Thus, the requirement of dFMRP for the retention of transitory peptidergic neurons shows evident circuit specificity. The novel defect of impaired apoptosis and aberrant neuron persistence in the Drosophila FXS model suggests an entirely new level of “pruning” dysfunction may contribute to the FXS disease state. PMID:21596027
Disk diffusion quality control guidelines for NVP-PDF 713: a novel peptide deformylase inhibitor.
Anderegg, Tamara R; Jones, Ronald N
2004-01-01
NVP-PDF 713 is a peptide deformylase inhibitor that has emerged as a candidate for treating Gram-positive infections and selected Gram-negative species that commonly cause community-acquired respiratory tract infections. This report summarizes the results of a multi-center (seven participants) disk diffusion quality control (QC) investigation for NVP-PDF 713 using guidelines of the National Committee for Clinical Laboratory Standards and the standardized disk diffusion method. A total of 420 NVP-PDF 713 zone diameter values were generated for each QC organism. The proposed zone diameter ranges contained 97.6-99.8% of the reported participant results and were: Staphylococcus aureus ATCC 25923 (25-35 mm), Streptococcus pneumoniae ATCC 49619 (30-37 mm), and Haemophilus influenzae ATCC 49247 (24-32 mm). These QC criteria for the disk diffusion method should be applied during the NVP-PDF 713 clinical trials to maximize test accuracy.
Dubin, Ruth F; Teerlink, John R; Schiller, Nelson B; Alokozai, Dean; Peralta, Carmen A; Johansen, Kirsten L
2013-10-01
Post-dialysis fatigue (PDF) is a common, debilitating symptom that remains poorly understood. Cardiac wall motion abnormalities (WMAs) may worsen during dialysis, but it is unknown whether WMA are associated with PDF. Forty patients were recruited from University of California San Francisco-affiliated dialysis units between January 2010 and February 2011. Participants underwent echocardiograms before and during the last hour of 79 dialysis sessions. Myocardial segments were graded 1-4 by a blinded reviewer, with four representing the worst WMA, and the segmental scores were summed for each echocardiogram. Patients completed questionnaires about their symptoms. Severe PDF (defined as lasting >2 h after dialysis) was analysed using a generalized linear model with candidate predictors including anemia, intradialytic hemodynamics and cardiac function. Forty-four percent of patients with worsened WMA (n=9) had severe PDF, compared with 13% of patients with improved or unchanged WMA (P = 0.04). A one-point increase in the WMA score during dialysis was associated with a 10% higher RR of severe PDF [RR: 1.1, 95% CI (1.1, 1.2), P < 0.001]. After multivariable adjustment, every point increase in the WMA score was associated with a 2-fold higher risk of severe PDF [RR: 1.9, 95% CI (1.4, 2.6), P < 0.001]. History of depression was associated with severe PDF after adjustment for demographics and comorbidities [RR: 3.4, 95% CI (1.3, 9), P = 0.01], but anemia, hemodynamics and other parameters of cardiac function were not. Although cross-sectional, these results suggest that some patients may experience severe PDF as a symptom of cardiac ischemia occurring during dialysis.
NASA Astrophysics Data System (ADS)
Pressel, K. G.; Collins, W.; Desai, A. R.
2011-12-01
Deficiencies in the parameterization of boundary layer clouds in global climate models (GCMs) remain one of the greatest sources of uncertainty in climate change predictions. Many GCM cloud parameterizations, which seek to include some representation of subgrid-scale cloud variability, do so by making assumptions regarding the subgrid-scale spatial probability density function (PDF) of total water content. Properly specifying the form and parameters of the total water PDF is an essential step in the formulation of PDF-based cloud parameterizations. In the cloud-free boundary layer, the PDF of total water mixing ratio is equivalent to the PDF of water vapor mixing ratio. Understanding the PDF of water vapor mixing ratio in the cloud-free atmosphere is a necessary step towards understanding the PDF of water vapor in the cloudy atmosphere. A primary challenge in empirically constraining the PDF of water vapor mixing ratio is a distinct lack of a spatially distributed observational dataset at or near cloud scale. However, at meso-beta (20-50 km) and larger scales, there is a wealth of information on the spatial distribution of water vapor contained in the physically retrieved water vapor profiles from the Atmospheric Infrared Sounder onboard NASA's Aqua satellite. The scaling (scale-invariance) of the observed water vapor field has been suggested as a means of using observations at satellite-observed (meso-beta) scales to derive information about cloud-scale PDFs. However, doing so requires the derivation of a robust climatology of water vapor scaling from in-situ observations across the meso-gamma (2-20 km) and meso-beta scales. In this work, we present the results of the scaling of high frequency (10 Hz) time series of water vapor mixing ratio as observed from the 447 m WLEF tower located near Park Falls, Wisconsin.
Observations from a tall tower offer an ideal set of observations with which to investigate scaling at meso-gamma and meso-beta scales, requiring only the assumption of Taylor's Hypothesis to convert observed time scales to spatial scales. Furthermore, the WLEF tower hosts an instrument suite offering a diverse set of variables at the 396 m, 122 m, and 30 m levels with which to characterize the state of the boundary layer. Three methods are used to compute scaling exponents for the observed time series: poor man's variance spectra, first-order structure functions, and detrended fluctuation analysis. In each case, scaling exponents are computed by linear regression. The results for each method are compared and used to build a climatology of scaling exponents. In particular, the results for June 2007 are presented, and it is shown that the scaling of water vapor time series at the 396 m level is characterized by two regimes that are determined by the state of the boundary layer. Finally, the results are compared to, and shown to be roughly consistent with, scaling exponents computed from AIRS observations.
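One of the three methods above, the first-order structure function, can be sketched on synthetic data; this is a generic illustration on a Brownian-motion signal (whose exponent is exactly 0.5), not the WLEF water-vapor analysis.

```python
import math
import random

random.seed(3)
# Synthetic signal: Brownian motion, whose first-order structure function
# scales as S1(tau) ~ tau**0.5.
n = 20000
x = [0.0]
for _ in range(n - 1):
    x.append(x[-1] + random.gauss(0.0, 1.0))

def s1(series, tau):
    # First-order structure function: mean absolute increment at lag tau.
    m = len(series) - tau
    return sum(abs(series[i + tau] - series[i]) for i in range(m)) / m

lags = [1, 2, 4, 8, 16, 32, 64]
log_tau = [math.log(t) for t in lags]
log_s1 = [math.log(s1(x, t)) for t in lags]

# Scaling exponent from a least-squares fit in log-log coordinates.
mt = sum(log_tau) / len(lags)
ms = sum(log_s1) / len(lags)
exponent = (sum((lt - mt) * (ls - ms) for lt, ls in zip(log_tau, log_s1))
            / sum((lt - mt) ** 2 for lt in log_tau))
```

For tower data the same regression is applied after converting time lags to spatial lags via Taylor's Hypothesis; the spectra and detrended-fluctuation variants differ only in the statistic regressed against scale.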
Pigment-Dispersing Factor-expressing neurons convey circadian information in the honey bee brain
Beer, Katharina; Kolbe, Esther; Kahana, Noa B.; Yayon, Nadav; Weiss, Ron; Menegazzi, Pamela; Bloch, Guy
2018-01-01
Pigment-Dispersing Factor (PDF) is an important neuropeptide in the brain circadian network of Drosophila and other insects, but its role in bees in which the circadian clock influences complex behaviour is not well understood. We combined high-resolution neuroanatomical characterizations, quantification of PDF levels over the day and brain injections of synthetic PDF peptide to study the role of PDF in the honey bee Apis mellifera. We show that PDF co-localizes with the clock protein Period (PER) in a cluster of laterally located neurons and that the widespread arborizations of these PER/PDF neurons are in close vicinity to other PER-positive cells (neurons and glia). PDF-immunostaining intensity oscillates in a diurnal and circadian manner with possible influences for age or worker task on synchrony of oscillations in different brain areas. Finally, PDF injection into the area between optic lobes and the central brain at the end of the subjective day produced a consistent trend of phase-delayed circadian rhythms in locomotor activity. Altogether, these results are consistent with the hypothesis that PDF is a neuromodulator that conveys circadian information from pacemaker cells to brain centres involved in diverse functions including locomotion, time memory and sun-compass orientation. PMID:29321240
NASA Astrophysics Data System (ADS)
Zhang, D.; Liao, Q.
2016-12-01
Bayesian inference provides a convenient framework for solving statistical inverse problems. In this method, the parameters to be identified are treated as random variables. The prior knowledge, the system nonlinearity, and the measurement errors can be directly incorporated in the posterior probability density function (PDF) of the parameters. The Markov chain Monte Carlo (MCMC) method is a powerful tool to generate samples from the posterior PDF. However, since MCMC usually requires thousands or even millions of forward simulations, it can be a computationally intensive endeavor, particularly when faced with large-scale flow and transport models. To address this issue, we construct a surrogate system for the model responses in the form of polynomials by the stochastic collocation method. In addition, we employ interpolation based on nested sparse grids and take into account the different importance of the parameters, under the condition of high random dimensions in the stochastic space. Furthermore, in cases of low regularity, such as a discontinuous or non-smooth relation between the input parameters and the output responses, we introduce an additional transform process to improve the accuracy of the surrogate model. Once we build the surrogate system, we may evaluate the likelihood with very little computational cost. We analyzed the convergence rate of the forward solution and the surrogate posterior by the Kullback-Leibler divergence, which quantifies the difference between probability distributions. The fast convergence of the forward solution implies fast convergence of the surrogate posterior to the true posterior. We also tested the proposed algorithm on water-flooding two-phase flow reservoir examples. The posterior PDF calculated from a very long chain with direct forward simulation is assumed to be accurate.
The posterior PDF calculated using the surrogate model is in reasonable agreement with the reference, revealing a great improvement in terms of computational efficiency.
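The surrogate-accelerated MCMC workflow above can be sketched in miniature; this is an assumption-laden toy (a cubic stands in for the reservoir model, Lagrange interpolation for the sparse-grid collocation surrogate, and a flat prior is assumed), not the authors' algorithm.

```python
import math
import random

def forward(theta):
    # Stand-in for an expensive flow simulation (here a smooth cubic response).
    return theta ** 3 + theta

# --- Offline stage: build a polynomial surrogate from a few collocation runs ---
nodes = [0.0, 1.3, 2.7, 4.0]          # collocation points in the prior range [0, 4]
values = [forward(t) for t in nodes]  # only 4 forward runs; exact for a cubic model

def surrogate(theta):
    # Lagrange interpolation through the collocation points.
    total = 0.0
    for i, (xi, yi) in enumerate(zip(nodes, values)):
        term = yi
        for j, xj in enumerate(nodes):
            if j != i:
                term *= (theta - xj) / (xi - xj)
        total += term
    return total

# --- Online stage: Metropolis MCMC that evaluates only the cheap surrogate ---
y_obs, sigma = forward(2.0), 0.5      # synthetic observation of the true theta = 2

def log_post(theta):
    if not 0.0 <= theta <= 4.0:       # flat prior on [0, 4]
        return -math.inf
    return -((y_obs - surrogate(theta)) ** 2) / (2.0 * sigma ** 2)

random.seed(0)
theta, lp = 1.0, log_post(1.0)
chain = []
for step in range(6000):
    prop = theta + random.gauss(0.0, 0.1)
    lp_prop = log_post(prop)
    if random.random() < math.exp(min(0.0, lp_prop - lp)):
        theta, lp = prop, lp_prop
    if step >= 1000:                  # discard burn-in
        chain.append(theta)

posterior_mean = sum(chain) / len(chain)
```

Every MCMC step calls only `surrogate`, so the chain's cost is decoupled from the forward-model cost, which is the efficiency gain the abstract reports.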
James, Kevin R; Dowling, David R
2008-09-01
In underwater acoustics, the accuracy of computational field predictions is commonly limited by uncertainty in environmental parameters. An approximate technique for determining the probability density function (PDF) of computed field amplitude, A, from known environmental uncertainties is presented here. The technique can be applied to several, N, uncertain parameters simultaneously, requires N+1 field calculations, and can be used with any acoustic field model. The technique implicitly assumes independent input parameters and is based on finding the optimum spatial shift between field calculations completed at two different values of each uncertain parameter. This shift information is used to convert uncertain-environmental-parameter distributions into PDF(A). The technique's accuracy is good when the shifted fields match well. Its accuracy is evaluated in range-independent underwater sound channels via an L1 error norm defined between approximate and numerically converged results for PDF(A). In 50-m- and 100-m-deep sound channels with 0.5% uncertainty in depth (N=1) at frequencies between 100 and 800 Hz, and for ranges from 1 to 8 km, 95% of the approximate field-amplitude distributions generated L1 values less than 0.52 using only two field calculations. Obtaining comparable accuracy from traditional methods requires of order 10 field calculations, and up to 10^N when N>1.
GTM-Based QSAR Models and Their Applicability Domains.
Gaspar, H A; Baskin, I I; Marcou, G; Horvath, D; Varnek, A
2015-06-01
In this paper we demonstrate that Generative Topographic Mapping (GTM), a machine learning method traditionally used for data visualisation, can be efficiently applied to QSAR modelling using probability distribution functions (PDF) computed in the latent 2-dimensional space. Several different scenarios of activity assessment were considered: (i) the "activity landscape" approach based on direct use of the PDF, (ii) QSAR models built on GTM-generated descriptors derived from the PDF, and (iii) the k-Nearest Neighbours approach in the 2D latent space. Benchmarking calculations were performed on five different datasets: stability constants of metal cations Ca(2+), Gd(3+) and Lu(3+) complexes with organic ligands in water, aqueous solubility, and activity of thrombin inhibitors. It has been shown that the performance of GTM-based regression models is similar to that obtained with some popular machine-learning methods (random forest, k-NN, M5P regression tree and PLS) and ISIDA fragment descriptors. By comparing GTM activity landscapes built both on predicted and experimental activities, we may visually assess the model's performance and identify the areas in the chemical space corresponding to reliable predictions. The applicability domain used in this work is based on data likelihood. Its application has significantly improved the model performances for 4 out of 5 datasets. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Patrignani, C.; Particle Data Group
2016-10-01
The Review summarizes much of particle physics and cosmology. Using data from previous editions, plus 3,062 new measurements from 721 papers, we list, evaluate, and average measured properties of gauge bosons and the recently discovered Higgs boson, leptons, quarks, mesons, and baryons. We summarize searches for hypothetical particles such as supersymmetric particles, heavy bosons, axions, dark photons, etc. All the particle properties and search limits are listed in Summary Tables. We also give numerous tables, figures, formulae, and reviews of topics such as Higgs Boson Physics, Supersymmetry, Grand Unified Theories, Neutrino Mixing, Dark Energy, Dark Matter, Cosmology, Particle Detectors, Colliders, Probability and Statistics. Among the 117 reviews are many that are new or heavily revised, including those on Pentaquarks and Inflation. The complete Review is published online in a journal and on the website of the Particle Data Group (http://pdg.lbl.gov). The printed PDG Book contains the Summary Tables and all review articles but no longer includes the detailed tables from the Particle Listings. A Booklet with the Summary Tables and abbreviated versions of some of the review articles is also available. 
Contents: Abstract, Contributors, Highlights and Table of Contents; Introduction; Particle Physics Summary Tables (gauge and Higgs bosons, leptons, quarks, mesons, baryons, searches, tests of conservation laws); Reviews, Tables, and Plots (constants, units, atomic and nuclear properties; Standard Model and related topics; astrophysics and cosmology; experimental methods and colliders; mathematical tools, statistics, Monte Carlo, group theory; kinematics, cross-section formulae, and plots); Particle Listings (gauge and Higgs bosons, leptons, quarks, mesons, baryons, miscellaneous searches); Index.
NASA Technical Reports Server (NTRS)
Selle, L. C.; Bellan, Josette
2006-01-01
Transitional databases from Direct Numerical Simulation (DNS) of three-dimensional mixing layers for single-phase flows and two-phase flows with evaporation are analyzed and used to examine the typical hypothesis that the scalar-dissipation probability distribution function (PDF) may be modeled as a Gaussian. The databases encompass a single-component fuel and four multicomponent fuels, two initial Reynolds numbers (Re), two mass loadings for two-phase flows and two free-stream gas temperatures. Using the DNS-calculated moments of the scalar-dissipation PDF, it is shown, consistent with existing experimental information on single-phase flows, that the Gaussian is a modest approximation of the DNS-extracted PDF, particularly poor in the range of high scalar-dissipation values, which are significant for turbulent reaction-rate modeling in non-premixed flows using flamelet models. With the same DNS-calculated moments of the scalar-dissipation PDF and a change of variables, a model of this PDF is proposed in the form of the β-PDF, which is shown to approximate the DNS-extracted PDF much better, particularly in the regime of high scalar-dissipation values. Several types of statistical measures are calculated over the ensemble of the fourteen databases. For each statistical measure, the proposed β-PDF model is shown to be much superior to the Gaussian in approximating the DNS-extracted PDF. Additionally, the agreement between the DNS-extracted PDF and the β-PDF improves further for higher initial-Re layers, whereas the quality of the Gaussian fit is independent of the initial Re values. For two-phase flows, the agreement between the DNS-extracted PDF and the β-PDF also improves with increasing free-stream gas temperature and mass loading.
The higher-fidelity approximation of the DNS-extracted PDF by the β-PDF with increasing Re, gas temperature and mass loading bodes well for turbulent reaction-rate modeling.
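The moment-matching idea described in this abstract can be illustrated with a hedged sketch (not the authors' DNS pipeline): fit both a Gaussian and a Beta density to the first two moments of a synthetic, positively skewed "dissipation-like" sample and compare their tail predictions. The lognormal surrogate data and all parameter values are assumptions for demonstration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Surrogate "scalar dissipation" sample: lognormal, i.e. positively skewed
# with a long right tail (an assumption standing in for DNS data).
chi = rng.lognormal(mean=0.0, sigma=0.7, size=100_000)

# Map onto (0, 1) and match the first two moments of a Beta distribution.
x = chi / chi.max()
m, v = x.mean(), x.var()
common = m * (1.0 - m) / v - 1.0
a, b = m * common, (1.0 - m) * common

gauss = stats.norm(loc=m, scale=np.sqrt(v))   # moment-matched Gaussian
beta_fit = stats.beta(a, b)                   # moment-matched beta-PDF

# Compare predicted tail probabilities at the empirical 99th percentile.
q = np.quantile(x, 0.99)
tail_emp = np.mean(x > q)
```

For skewed data of this kind the moment-matched Gaussian badly underpredicts the high-value tail, while the Beta density tracks it much more closely, mirroring the trend the abstract reports for the high scalar-dissipation range.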
Modeling the Bergeron-Findeisen Process Using PDF Methods With an Explicit Representation of Mixing
NASA Astrophysics Data System (ADS)
Jeffery, C.; Reisner, J.
2005-12-01
Currently, the accurate prediction of cloud droplet and ice crystal number concentration in cloud resolving, numerical weather prediction and climate models is a formidable challenge. The Bergeron-Findeisen process in which ice crystals grow by vapor deposition at the expense of super-cooled droplets is expected to be inhomogeneous in nature--some droplets will evaporate completely in centimeter-scale filaments of sub-saturated air during turbulent mixing while others remain unchanged [Baker et al., QJRMS, 1980]--and is unresolved at even cloud-resolving scales. Despite the large body of observational evidence in support of the inhomogeneous mixing process affecting cloud droplet number [most recently, Brenguier et al., JAS, 2000], it is poorly understood and has yet to be parameterized and incorporated into a numerical model. In this talk, we investigate the Bergeron-Findeisen process using a new approach based on simulations of the probability density function (PDF) of relative humidity during turbulent mixing. PDF methods offer a key advantage over Eulerian (spatial) models of cloud mixing and evaporation: the low probability (cm-scale) filaments of entrained air are explicitly resolved (in probability space) during the mixing event even though their spatial shape, size and location remain unknown. Our PDF approach reveals the following features of the inhomogeneous mixing process during the isobaric turbulent mixing of two parcels containing super-cooled water and ice, respectively: (1) The scavenging of super-cooled droplets is inhomogeneous in nature; some droplets evaporate completely at early times while others remain unchanged. (2) The degree of total droplet evaporation during the initial mixing period depends linearly on the mixing fractions of the two parcels and logarithmically on Damköhler number (Da)---the ratio of turbulent to evaporative time-scales. 
(3) Our simulations predict that the PDF of Lagrangian (time-integrated) subsaturation (S) goes as S^(-1) at high Da. This behavior results from a Gaussian mixing closure and requires observational validation.
Polynomial probability distribution estimation using the method of moments
Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper
2017-01-01
We suggest a procedure for estimating Nth-degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal and Weibull, as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series-expansion methods of Gram–Charlier type. It is concluded that this is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular, this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation. PMID:28394949
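The core of the method-of-moments idea can be sketched as follows (a minimal illustration, assuming a polynomial ansatz f(x) = Σ a_k x^k on a finite support [a, b]; the function name is illustrative, not from the paper): matching the first N+1 moments yields a linear system for the coefficients.

```python
import numpy as np

def polynomial_pdf_from_moments(moments, support):
    """Fit f(x) = sum_k a_k x^k on [a, b] so that its moments
    int x^m f(x) dx equal the given moments for m = 0..N."""
    a, b = support
    N = len(moments) - 1
    # int_a^b x^(k+m) dx = (b^(k+m+1) - a^(k+m+1)) / (k + m + 1)
    A = np.array([[(b**(k + m + 1) - a**(k + m + 1)) / (k + m + 1)
                   for k in range(N + 1)] for m in range(N + 1)])
    coeffs = np.linalg.solve(A, np.asarray(moments, dtype=float))
    return np.polynomial.Polynomial(coeffs)

# Moments 1, 1/2, 1/3 of the uniform distribution on [0, 1]
# should recover the flat density f(x) = 1.
pdf = polynomial_pdf_from_moments([1.0, 1.0 / 2.0, 1.0 / 3.0], (0.0, 1.0))
```

Because the fitted density is a polynomial, convolutions reduce to integrals of polynomial expressions, which is the practical advantage the abstract highlights.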
Non-Gaussian PDF Modeling of Turbulent Boundary Layer Fluctuating Pressure Excitation
NASA Technical Reports Server (NTRS)
Steinwolf, Alexander; Rizzi, Stephen A.
2003-01-01
The purpose of the study is to investigate properties of the probability density function (PDF) of turbulent boundary layer fluctuating pressures measured on the exterior of a supersonic transport aircraft. It is shown that fluctuating pressure PDFs differ from the Gaussian distribution even for surface conditions having no significant discontinuities. The PDF tails are wider and longer than those of the Gaussian model. For pressure fluctuations upstream of forward-facing step discontinuities and downstream of aft-facing step discontinuities, deviations from the Gaussian model are more significant and the PDFs become asymmetrical. Various analytical PDF distributions are used and further developed to model this behavior.
A comparison of gantry-mounted x-ray-based real-time target tracking methods.
Montanaro, Tim; Nguyen, Doan Trang; Keall, Paul J; Booth, Jeremy; Caillet, Vincent; Eade, Thomas; Haddad, Carol; Shieh, Chun-Chien
2018-03-01
Most modern radiotherapy machines are built with a 2D kV imaging system. Combining this imaging system with a 2D-3D inference method would allow for a ready-made option for real-time 3D tumor tracking. This work investigates and compares the accuracy of four existing 2D-3D inference methods using both motion traces inferred from external surrogates and measured internally from implanted beacons. Tumor motion data from 160 fractions (46 thoracic/abdominal patients) of Synchrony traces (inferred traces), and 28 fractions (7 lung patients) of Calypso traces (internal traces) from the LIGHT SABR trial (NCT02514512) were used in this study. The motion traces were used as the ground truth. The ground truth trajectories were used in silico to generate 2D positions projected on the kV detector. These 2D traces were then passed to the 2D-3D inference methods: interdimensional correlation, Gaussian probability density function (PDF), arbitrary-shape PDF, and the Kalman filter. The inferred 3D positions were compared with the ground truth to determine tracking errors. The relationships between tracking error and motion magnitude, interdimensional correlation, and breathing periodicity index (BPI) were also investigated. Larger tracking errors were observed from the Calypso traces, with RMS and 95th percentile 3D errors of 0.84-1.25 mm and 1.72-2.64 mm, compared to 0.45-0.68 mm and 0.74-1.13 mm from the Synchrony traces. The Gaussian PDF method was found to be the most accurate, followed by the Kalman filter, the interdimensional correlation method, and the arbitrary-shape PDF method. Tracking error was found to strongly and positively correlate with motion magnitude for both the Synchrony and Calypso traces and for all four methods. Interdimensional correlation and BPI were found to negatively correlate with tracking error only for the Synchrony traces. 
The Synchrony traces exhibited higher interdimensional correlation than the Calypso traces, especially in the anterior-posterior direction. Inferred traces often exhibit higher interdimensional correlation, which is not a true representation of thoracic/abdominal motion and may lead to underestimated kV-based tracking errors. The use of internal traces acquired from systems such as Calypso is advised for future kV-based tracking studies. The Gaussian PDF method is the most accurate 2D-3D inference method for tracking thoracic/abdominal targets. Motion magnitude has a significant impact on 2D-3D inference error and should be considered when estimating kV-based tracking error. © 2018 American Association of Physicists in Medicine.
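The 2D-3D inference problem can be illustrated with a toy Gaussian-conditioning sketch (an assumption-laden stand-in for the Gaussian PDF method, not the clinical implementation): from a training trajectory, the unresolved coordinate is inferred from the two imaged coordinates via the conditional mean of a joint Gaussian.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical training trajectory: 3D positions whose unresolved depth z
# is correlated with the two imaged coordinates x and y (all made up).
n = 5000
z = rng.standard_normal(n)
x = 0.8 * z + 0.1 * rng.standard_normal(n)
y = 0.3 * z + 0.2 * rng.standard_normal(n)
X = np.stack([x, y, z])

mu = X.mean(axis=1)
C = np.cov(X)

# Gaussian conditioning: E[z | x, y] = mu_z + C_zo C_oo^{-1} ([x, y] - mu_o)
w = np.linalg.solve(C[:2, :2], C[2, :2])

def infer_z(x_obs, y_obs):
    """Infer the unresolved coordinate from the two imaged ones."""
    return mu[2] + w @ (np.array([x_obs, y_obs]) - mu[:2])
```

The inference quality degrades as the interdimensional correlation weakens, consistent with the paper's finding that weakly correlated (internal) traces produce larger tracking errors.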
Frandsen, Benjamin A; Brunelli, Michela; Page, Katharine; Uemura, Yasutomo J; Staunton, Julie B; Billinge, Simon J L
2016-05-13
We present a temperature-dependent atomic and magnetic pair distribution function (PDF) analysis of neutron total scattering measurements of antiferromagnetic MnO, an archetypal strongly correlated transition-metal oxide. The known antiferromagnetic ground-state structure fits the low-temperature data closely with refined parameters that agree with conventional techniques, confirming the reliability of the newly developed magnetic PDF method. The measurements performed in the paramagnetic phase reveal significant short-range magnetic correlations on a ∼1 nm length scale that differ substantially from the low-temperature long-range spin arrangement. Ab initio calculations using a self-interaction-corrected local spin density approximation of density functional theory predict magnetic interactions dominated by Anderson superexchange and reproduce the measured short-range magnetic correlations to a high degree of accuracy. Further calculations simulating an additional contribution from a direct exchange interaction show much worse agreement with the data. The Anderson superexchange model for MnO is thus verified by experimentation and confirmed by ab initio theory.
On the probability distribution function of the mass surface density of molecular clouds. II.
NASA Astrophysics Data System (ADS)
Fischera, Jörg
2014-11-01
The probability distribution function (PDF) of the mass surface density of molecular clouds provides essential information about the structure of molecular cloud gas and condensed structures out of which stars may form. In general, the PDF shows two basic components: a broad distribution around the maximum with resemblance to a log-normal function, and a tail at high mass surface densities attributed to turbulence and self-gravity. In a previous paper, the PDF of condensed structures has been analyzed and an analytical formula presented based on a truncated radial density profile, ρ(r) = ρ_c/(1 + (r/r_0)^2)^(n/2), with central density ρ_c and inner radius r_0, widely used in astrophysics as a generalization of physical density profiles. In this paper, the results are applied to analyze the PDF of self-gravitating, isothermal, pressurized, spherical (Bonnor-Ebert spheres) and cylindrical condensed structures, with emphasis on the dependence of the PDF on the external pressure p_ext and on the overpressure q^(-1) = p_c/p_ext, where p_c is the central pressure. Apart from individual clouds, we also consider ensembles of spheres or cylinders, where effects caused by a variation of the pressure ratio, a distribution of condensed cores within a turbulent gas, and (in the case of cylinders) a distribution of inclination angles on the mean PDF are analyzed. The probability distribution of pressure ratios q^(-1) is assumed to be given by P(q^(-1)) ∝ q^(-k_1)/(1 + (q_0/q)^γ)^((k_1 + k_2)/γ), where k_1, γ, k_2, and q_0 are fixed parameters. The PDF of individual spheres with overpressures below ~100 is well represented by the PDF of a sphere with an analytical density profile with n = 3. At higher pressure ratios, the PDF at mass surface densities Σ ≪ Σ(0), where Σ(0) is the central mass surface density, asymptotically approaches the PDF of a sphere with n = 2. Consequently, the power-law asymptote at mass surface densities above the peak steepens from P_sph(Σ) ∝ Σ^(-2) to P_sph(Σ) ∝ Σ^(-3).
The corresponding asymptote of the PDF of cylinders at large q^(-1) is approximately given by P_cyl(Σ) ∝ Σ^(-4/3) (1 - (Σ/Σ(0))^(2/3))^(-1/2). The distribution of overpressures q^(-1) produces a power-law asymptote at high mass surface densities given by
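For readability, the density profile, the assumed pressure-ratio distribution, and the quoted asymptotes from this abstract can be typeset as:

```latex
\rho(r) = \frac{\rho_c}{\left[1 + (r/r_0)^2\right]^{n/2}}, \qquad
P\!\left(q^{-1}\right) \propto
  \frac{q^{-k_1}}{\left[1 + (q_0/q)^{\gamma}\right]^{(k_1 + k_2)/\gamma}},
\\[4pt]
P_{\mathrm{sph}}(\Sigma) \propto \Sigma^{-2} \;\longrightarrow\; \Sigma^{-3},
\qquad
P_{\mathrm{cyl}}(\Sigma) \propto
  \Sigma^{-4/3}\left[1 - \left(\Sigma/\Sigma(0)\right)^{2/3}\right]^{-1/2}.
```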
Shimada, Naoto; Inami, Show; Sato, Shoma; Kitamoto, Toshihiro; Sakai, Takaomi
2016-01-01
Apterous (Ap), the best studied LIM-homeodomain transcription factor in Drosophila, cooperates with the cofactor Chip (Chi) to regulate transcription of specific target genes. Although Ap regulates various developmental processes, its function in the adult brain remains unclear. Here, we report that Ap and Chi in the neurons expressing PDF, a neuropeptide, play important roles in proper sleep/wake regulation in adult flies. PDF-expressing neurons consist of two neuronal clusters: small ventral-lateral neurons (s-LNvs) acting as the circadian pacemaker and large ventral-lateral neurons (l-LNvs) regulating light-driven arousal. We identified that Ap localizes to the nuclei of s-LNvs and l-LNvs. In light-dark (LD) cycles, RNAi knockdown or the targeted expression of dominant-negative forms of Ap or Chi in PDF-expressing neurons or l-LNvs promoted arousal. In contrast, in constant darkness, knockdown of Ap in PDF-expressing neurons did not promote arousal, indicating that a reduced Ap function in PDF-expressing neurons promotes light-driven arousal. Furthermore, Ap expression in l-LNvs showed daily rhythms (peaking at midnight), which are generated by a direct light-dependent mechanism rather than by the endogenous clock. These results raise the possibility that the daily oscillation of Ap expression in l-LNvs may contribute to the buffering of light-driven arousal in wild-type flies. PMID:27853240
Modeling non-Fickian dispersion by use of the velocity PDF on the pore scale
NASA Astrophysics Data System (ADS)
Kooshapur, Sheema; Manhart, Michael
2015-04-01
For obtaining a description of reactive flows in porous media, apart from the geometrical complications of resolving the velocities and scalar values, one has to deal with the additional reactive term in the transport equation. An accurate description of the interface of the reacting fluids, which is strongly influenced by dispersion, is essential for resolving this term. In REV-based simulations the reactive term needs to be modeled taking sub-REV fluctuations and possibly non-Fickian dispersion into account. Non-Fickian dispersion has been observed in strongly heterogeneous domains and in early phases of transport. A fully resolved solution of the Navier-Stokes and transport equations, which yields a detailed description of the flow properties, dispersion, interfaces of fluids, etc., is, however, not practical for domains containing more than a few thousand grains, due to the huge computational effort required. Probability density function (PDF) based methods built on the velocity distribution in the pore space can facilitate the understanding and modelling of non-Fickian dispersion [1,2]. Our aim is to model the transition between non-Fickian and Fickian dispersion in a random sphere pack within the framework of a PDF-based transport model proposed by Meyer and Tchelepi [1,3]. They proposed a stochastic transport model where velocity components of tracer particles are represented by a continuous Markovian stochastic process. In addition to [3], we consider the effects of pore-scale diffusion and formulate a different stochastic equation for the increments in velocity space from first principles. To assess the terms in this equation, we performed Direct Numerical Simulations (DNS) solving the Navier-Stokes equation on a random sphere pack. We extracted the PDFs and statistical moments (up to the 4th moment) of the stream-wise velocity, u, and of the first- and second-order velocity derivatives, both unconditional and conditioned on velocity.
By using these data and combining the Taylor expansion of velocity increments, du, with the Langevin equation for point particles, we obtained the components of the velocity fluxes, which point to drift and diffusion behavior in velocity space. Thus a partial differential equation for the velocity PDF has been formulated that constitutes an advection-diffusion equation in velocity space (a Fokker-Planck equation), in which the drift and diffusion coefficients are obtained from the velocity-conditioned statistics of the derivatives of the pore-scale velocity field. This has been solved by both a Random Walk (RW) model and a Finite Volume method. We conclude that both methods are able to reproduce the velocity PDF obtained by DNS. References: [1] D. W. Meyer, P. Jenny, H. A. Tchelepi, A joint velocity-concentration PDF method for tracer flow in heterogeneous porous media, Water Resour. Res., 46, W12522 (2010). [2] W. Nowak, R. L. Schwede, O. A. Cirpka, and I. Neuweiler, Probability density functions of hydraulic head and velocity in three-dimensional heterogeneous porous media, Water Resour. Res., 44, W08452 (2008). [3] D. W. Meyer, H. A. Tchelepi, Particle-based transport model with Markovian velocity processes for tracer dispersion in highly heterogeneous porous media, Water Resour. Res., 46, W11552 (2010).
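The Markovian velocity-process idea can be sketched with a one-dimensional Langevin model (a generic Ornstein-Uhlenbeck toy with made-up drift and diffusion coefficients, not the coefficients extracted from the authors' DNS): each particle's velocity receives a drift toward the mean plus a random diffusion increment in velocity space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy coefficients (assumptions, not DNS-extracted values):
U, tau, D = 1.0, 0.5, 0.2        # mean velocity, relaxation time, velocity-space diffusivity
dt, nsteps, nparticles = 1e-3, 20_000, 2_000

u = np.full(nparticles, U)
for _ in range(nsteps):
    # Langevin increment: du = -(u - U)/tau dt + sqrt(2 D dt) xi
    u += -(u - U) / tau * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(nparticles)

# The stationary velocity PDF of this process is Gaussian with mean U and
# variance D * tau -- the stationary solution of the matching Fokker-Planck
# equation with constant drift and diffusion coefficients.
```

This RW view and the Fokker-Planck (Finite Volume) view are the two solution routes mentioned in the abstract; for constant coefficients they relax to the same stationary PDF.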
Mavar-Haramija, Marija; Prats-Galino, Alberto; Méndez, Juan A Juanes; Puigdelívoll-Sánchez, Anna; de Notaris, Matteo
2015-10-01
A three-dimensional (3D) model of the skull base was reconstructed from the pre- and post-dissection head CT images and embedded in a Portable Document Format (PDF) file, which can be opened by freely available software and used offline. The CT images were segmented using a specific 3D software platform for biomedical data, and the resulting 3D geometrical models of anatomical structures were used for a dual purpose: to simulate the extended endoscopic endonasal transsphenoidal approaches and to perform a quantitative analysis of the procedures. The analysis consisted of bone-removal quantification and the calculation of quantitative parameters (surgical freedom and exposure area) of each procedure. The results are presented in three PDF documents containing JavaScript-based functions. The 3D-PDF files include reconstructions of the nasal structures (nasal septum, vomer, middle turbinates), the bony structures of the anterior skull base and maxillofacial region and partial reconstructions of the optic nerve, the hypoglossal and vidian canals and the internal carotid arteries. Alongside the anatomical model, axial, sagittal and coronal CT images are shown. Interactive 3D presentations were created to explain the surgery and the associated quantification methods step-by-step. The resulting 3D-PDF files allow the user to interact with the model through easily available software, free of charge and in an intuitive manner. The files are available for offline use on a personal computer and no previous specialized knowledge in informatics is required. The documents can be downloaded at http://hdl.handle.net/2445/55224.
NASA Astrophysics Data System (ADS)
He, Xiaozhou; Wang, Yin; Tong, Penger
2018-05-01
Non-Gaussian fluctuations with an exponential tail in their probability density function (PDF) are often observed in nonequilibrium steady states (NESSs), and it is not understood why they appear so often. Turbulent Rayleigh-Bénard convection (RBC) is an example of such a NESS, in which the measured PDF P(δT) of temperature fluctuations δT in the central region of the flow has a long exponential tail. Here we show that because of the dynamic heterogeneity in RBC, the exponential PDF is generated by a convolution of a set of dynamic modes conditioned on a constant local thermal dissipation rate ε. The conditional PDF G(δT|ε) of δT under a constant ε is found to be of Gaussian form, and its variance σ_T^2 for different values of ε follows an exponential distribution. The convolution of the two distribution functions gives rise to the exponential PDF P(δT). This work thus provides a physical mechanism for the observed exponential distribution of δT in RBC and also sheds light on the origin of non-Gaussian fluctuations in other NESSs.
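The convolution mechanism described here can be checked numerically: mixing zero-mean Gaussians whose variance is exponentially distributed yields a Laplace (exponential-tailed) compound PDF. The sketch below is a generic demonstration of that statistical fact, not an analysis of the RBC data; the unit exponential scale is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Variance of the conditional Gaussian follows an exponential distribution,
# mimicking the reported behavior of sigma_T^2 (scale = 1 is arbitrary).
var = rng.exponential(scale=1.0, size=n)
dT = rng.standard_normal(n) * np.sqrt(var)

# A Gaussian has excess kurtosis 0; the Laplace distribution has 3.
m2 = np.mean(dT ** 2)
excess_kurtosis = np.mean(dT ** 4) / m2 ** 2 - 3.0
```

The strongly positive excess kurtosis confirms that the compound distribution is far heavier-tailed than any of its Gaussian components, which is the essence of the proposed mechanism.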
Németh, Károly; Chapman, Karena W; Balasubramanian, Mahalingam; Shyam, Badri; Chupas, Peter J; Heald, Steve M; Newville, Matt; Klingler, Robert J; Winans, Randall E; Almer, Jonathan D; Sandi, Giselle; Srajer, George
2012-02-21
An efficient implementation of simultaneous reverse Monte Carlo (RMC) modeling of pair distribution function (PDF) and EXAFS spectra is reported. This implementation is an extension of the technique established by Krayzman et al. [J. Appl. Cryst. 42, 867 (2009)] in the sense that it enables simultaneous real-space fitting of x-ray PDF with accurate treatment of Q-dependence of the scattering cross-sections and EXAFS with multiple photoelectron scattering included. The extension also allows for atom swaps during EXAFS fits thereby enabling modeling the effects of chemical disorder, such as migrating atoms and vacancies. Significant acceleration of EXAFS computation is achieved via discretization of effective path lengths and subsequent reduction of operation counts. The validity and accuracy of the approach is illustrated on small atomic clusters and on 5500-9000 atom models of bcc-Fe and α-Fe(2)O(3). The accuracy gains of combined simultaneous EXAFS and PDF fits are pointed out against PDF-only and EXAFS-only RMC fits. Our modeling approach may be widely used in PDF and EXAFS based investigations of disordered materials. © 2012 American Institute of Physics
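The accept/reject structure of reverse Monte Carlo modeling can be illustrated with a deliberately simplified 1D toy (made-up parameters, far from the paper's simultaneous PDF+EXAFS implementation): atoms are moved one at a time and moves are kept or rejected according to the change in misfit against a target pair-distance histogram.

```python
import numpy as np

rng = np.random.default_rng(4)
n_atoms, bins = 30, np.linspace(0.0, 10.0, 41)

def pair_hist(pos):
    # Histogram of all pair distances: a crude 1D stand-in for the PDF.
    d = np.abs(pos[:, None] - pos[None, :])[np.triu_indices(len(pos), 1)]
    h, _ = np.histogram(d, bins=bins, density=True)
    return h

target = pair_hist(np.arange(n_atoms) * 0.33)  # "experimental" data: a regular chain
pos = rng.uniform(0.0, 10.0, n_atoms)          # random starting configuration
cost0 = np.sum((pair_hist(pos) - target) ** 2)
cost = cost0

for _ in range(10_000):
    i = rng.integers(n_atoms)
    trial = pos.copy()
    trial[i] += 0.1 * rng.standard_normal()
    c = np.sum((pair_hist(trial) - target) ** 2)
    # Metropolis rule: accept improvements; occasionally accept worse fits.
    if c < cost or rng.random() < np.exp(-(c - cost) / 1e-3):
        pos, cost = trial, c
```

In a combined fit like the paper's, the cost would simply sum PDF and EXAFS misfit terms, and the move set would also include the atom swaps used to model chemical disorder.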
Smart "geomorphological" map browsing - a tale about geomorphological maps and the internet
NASA Astrophysics Data System (ADS)
Geilhausen, M.; Otto, J.-C.
2012-04-01
With the digital production of geomorphological maps, the dissemination of research outputs now extends beyond simple paper products. Internet technologies can contribute both to the dissemination of geomorphological maps and to access to geomorphological data, and help to make geomorphological knowledge available to a greater public. Indeed, many national geological surveys employ end-to-end digital workflows from data capture in the field to final map production and dissemination. This paper deals with the potential of web mapping applications and interactive, portable georeferenced PDF maps for the distribution of geomorphological information. Web mapping applications such as Google Maps have become very popular and widespread and have increased interest in, and access to, mapping. They link the Internet with GIS technology and are a common way of presenting dynamic maps online. The GIS processing is performed online and maps are visualised in interactive web viewers characterised by different capabilities such as zooming, panning or adding further thematic layers, with the map refreshed after each task. Depending on the system architecture and the components used, advanced symbology, map overlays from different applications and sources and their integration into a Desktop GIS are possible. This interoperability is achieved through the use of international open standards that include mechanisms for the integration and visualisation of information from multiple sources. The portable document format (PDF) is commonly used for printing and is a standard format that can be processed by many graphics programs and printers without loss of information. A GeoPDF enables the sharing of geospatial maps and data in PDF documents. Multiple, independent map frames with individual spatial reference systems are possible within a GeoPDF, for example for map overlays or insets.
Geospatial functionality of a GeoPDF includes scalable map display, layer visibility control, access to attribute data, coordinate queries and spatial measurements. The full functionality of GeoPDFs requires free and user-friendly plug-ins for PDF readers and GIS software. A GeoPDF enables fundamental GIS functionality turning the formerly static PDF map into an interactive, portable georeferenced PDF map. GeoPDFs are easy to create and provide an interesting and valuable way to disseminate geomorphological maps. Our motivation to engage with the online distribution of geomorphological maps originates in the increasing number of web mapping applications available today indicating that the Internet has become a medium for displaying geographical information in rich forms and user-friendly interfaces. So, why not use the Internet to distribute geomorphological maps and enhance their practical application? Web mapping and dynamic PDF maps can play a key role in the movement towards a global dissemination of geomorphological information. This will be exemplified by live demonstrations of i.) existing geomorphological WebGIS applications, ii.) data merging from various sources using web map services, and iii.) free to download GeoPDF maps during the presentations.
2011-02-17
document objects, on one or more electronic document pages. These commands have their roots in typography, so, to understand the PDF Language, one must have at least a rudimentary understanding of typography. Only a few of the typographic commands, called text-showing operators, can hold strings
NASA Astrophysics Data System (ADS)
Nezhadhaghighi, Mohsen Ghasemi
2017-08-01
Here, we present results of numerical simulations and the scaling characteristics of one-dimensional random fluctuations with heavy-tailed probability distribution functions. Assuming that the distribution function of the random fluctuations obeys Lévy statistics with a power-law scaling exponent, we investigate the fractional diffusion equation in the presence of μ-stable Lévy noise. We study the scaling properties of the global width and two-point correlation functions and then compare the analytical and numerical results for the growth exponent β and the roughness exponent α. We also investigate the fractional Fokker-Planck equation for heavy-tailed random fluctuations. We show that the fractional diffusion processes in the presence of μ-stable Lévy noise display special scaling properties in the probability distribution function (PDF). Finally, we numerically study the scaling properties of the heavy-tailed random fluctuations by using the diffusion entropy analysis. This method is based on the evaluation of the Shannon entropy of the PDF generated by the random fluctuations, rather than on the measurement of the global width of the process. We apply the diffusion entropy analysis to extract the growth exponent β and to confirm the validity of our numerical analysis.
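As a rough illustration of the diffusion entropy analysis described above, the sketch below (an illustrative toy, not the authors' code) builds diffusion trajectories from Cauchy-distributed increments, which are μ-stable with μ = 1, estimates the Shannon entropy of their PDF at several window sizes, and reads off the growth of S(t) with ln t. The sample size, binning, and tail clipping are all assumptions.

```python
import numpy as np

def diffusion_entropy(noise, window_sizes, bins=64):
    """Shannon entropy S(t) of the PDF of diffusion trajectories x(t),
    built by summing the noise over all sliding windows of length t."""
    csum = np.concatenate(([0.0], np.cumsum(noise)))
    entropies = []
    for t in window_sizes:
        x = csum[t:] - csum[:-t]              # all overlapping t-windows
        # clip the extreme tails so the histogram bins stay informative
        lo, hi = np.percentile(x, [1, 99])
        p, edges = np.histogram(x, bins=bins, range=(lo, hi), density=True)
        dx = edges[1] - edges[0]
        p = p[p > 0]
        entropies.append(-np.sum(p * np.log(p)) * dx)
    return np.array(entropies)

rng = np.random.default_rng(0)
# standard Cauchy increments: a heavy-tailed, alpha-stable noise (mu = 1)
noise = rng.standard_cauchy(200_000)
ts = np.array([10, 20, 40, 80, 160, 320])
S = diffusion_entropy(noise, ts)
# S(t) ~ A + delta * ln(t); for mu-stable noise delta = 1/mu, so ~1 here
delta = np.polyfit(np.log(ts), S, 1)[0]
```

Because sums of Cauchy variables are Cauchy with a scale growing linearly in t, the fitted slope should sit near 1/μ = 1.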
NASA Astrophysics Data System (ADS)
Sun, Jianbao; Shen, Zheng-Kang; Bürgmann, Roland; Wang, Min; Chen, Lichun; Xu, Xiwei
2013-08-01
We develop a three-step maximum a posteriori probability method for coseismic rupture inversion, which aims at maximizing the posterior probability density function (PDF) of elastic deformation solutions of earthquake rupture. The method originates from the fully Bayesian inversion and mixed linear-nonlinear Bayesian inversion methods and shares the same posterior PDF with them, while overcoming difficulties with convergence when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, adaptive simulated annealing, is used to search for the maximum of the posterior PDF ("mode" in statistics) in the first step. The second step inversion approaches the "true" solution further using the Monte Carlo inversion technique with positivity constraints, with all parameters obtained from the first step as the initial solution. Then slip artifacts are eliminated from slip models in the third step using the same procedure as the second step, with fixed fault geometry parameters. We first design a fault model with a 45° dip angle and oblique slip, and produce corresponding synthetic interferometric synthetic aperture radar (InSAR) data sets to validate the reliability and efficiency of the new method. We then apply this method to InSAR data inversion for the coseismic slip distribution of the 14 April 2010 Mw 6.9 Yushu, China earthquake. Our preferred slip model is composed of three segments with most of the slip occurring within 15 km depth and the maximum slip reaches 1.38 m at the surface. The seismic moment released is estimated to be 2.32e+19 Nm, consistent with the seismic estimate of 2.50e+19 Nm.
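The first (simulated annealing) step can be sketched on a toy linear "slip" inversion; everything below, the model size, noise level, and cooling schedule, is a hypothetical stand-in, not the authors' adaptive simulated annealing setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear forward model d = G m + noise, standing in for elastic
# Green's functions mapping non-negative fault slip m to InSAR data d.
G = rng.normal(size=(40, 5))
m_true = np.array([0.2, 0.0, 1.1, 0.4, 0.7])
d = G @ m_true + 0.05 * rng.normal(size=40)

def neg_log_posterior(m):
    """Gaussian likelihood plus a flat positivity prior on the slip."""
    if np.any(m < 0):
        return np.inf
    r = d - G @ m
    return 0.5 * np.sum(r**2) / 0.05**2

# Step 1: simulated-annealing search for the posterior mode (MAP model).
m = np.full(5, 0.5)
f = neg_log_posterior(m)
T = 10.0
for _ in range(20_000):
    cand = m + 0.05 * rng.normal(size=5)
    fc = neg_log_posterior(cand)
    # Metropolis acceptance: always take improvements, sometimes uphill moves
    if fc < f or rng.random() < np.exp((f - fc) / T):
        m, f = cand, fc
    T *= 0.9995          # geometric cooling schedule
```

In the paper's scheme this mode would then seed a positivity-constrained Monte Carlo refinement (steps 2 and 3).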
Modelling the structure of Zr-rich Pb(Zr1-xTix)O3, x = 0.4 by a multiphase approach.
Bogdanov, Alexander; Mysovsky, Andrey; Pickard, Chris J; Kimmel, Anna V
2016-10-12
Solid solution perovskite Pb(Zr1-xTix)O3 (PZT) is an industrially important material. Despite the long history of experimental and theoretical studies, the structure of this material is still under intensive discussion. In this work, we have applied structure searching coupled with density functional theory methods to provide a multiphase description of this material at x = 0.4. We demonstrate that the permutational freedom of B-site cations leads to the stabilisation of a variety of local phases, reflecting a relatively flat energy landscape of PZT. Using a set of predicted local phases, we reproduce the experimental pair distribution function (PDF) profile with high accuracy. We introduce a complex multiphase picture of the structure of PZT and show that additional monoclinic and rhombohedral phases yield a better description of the experimental PDF profile. We propose that such a multiphase picture reflects the entropy reached in the sample during the preparation process.
Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering
Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus
2015-01-01
This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs. PMID:26146475
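The benefit of filtering pdfs rather than intensities can be seen in a one-dimensional toy sketch (illustrative numbers only; the paper's 4D Gaussian-mixture machinery is not reproduced here): applying a narrow transfer function to averaged intensities loses a thin material, while averaging the transfer function over the neighbourhood pdf preserves its response.

```python
import numpy as np

rng = np.random.default_rng(6)

# A high-resolution 1D "volume": a noisy two-material signal.
voxels = rng.choice([0.2, 0.8], size=4096)

# A narrow transfer function that highlights only the 0.8 material.
tf = lambda v: np.exp(-0.5 * ((v - 0.8) / 0.05) ** 2)

blocks = voxels.reshape(-1, 16)          # 16-voxel neighbourhoods

# Standard down-sampling: average the intensities, THEN apply the TF;
# the mixed-material average (~0.5) falls outside the TF's support.
naive = tf(blocks.mean(axis=1))

# pdf-based down-sampling: apply the TF to every sample of the empirical
# neighbourhood pdf and average, preserving the material's contribution.
pdf_based = tf(blocks).mean(axis=1)
```

The pdf-based result keeps roughly half the response (the true 0.8-material fraction), whereas the naive result collapses towards zero, which is the inconsistency across resolution levels the paper addresses.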
PDF methods for turbulent reactive flows
NASA Technical Reports Server (NTRS)
Hsu, Andrew T.
1995-01-01
Viewgraphs are presented on computation of turbulent combustion, governing equations, closure problem, PDF modeling of turbulent reactive flows, validation cases, current projects, and collaboration with industry and technology transfer.
Transported PDF Modeling of Nonpremixed Turbulent CO/H-2/N-2 Jet Flames
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, xinyu; Haworth, D. C.; Huckaby, E. David
2012-01-01
Turbulent CO/H2/N2 (“syngas”) flames are simulated using a transported composition probability density function (PDF) method. A consistent hybrid Lagrangian particle/Eulerian mesh algorithm is used to solve the modeled PDF transport equation. The model includes standard k–ε turbulence, gradient transport for scalars, and Euclidean minimum spanning tree (EMST) mixing. Sensitivities of model results to variations in the turbulence model, the treatment of radiation heat transfer, the choice of chemical mechanism, and the PDF mixing model are explored. A baseline model reproduces the measured mean and rms temperature, major species, and minor species profiles reasonably well, and captures the scaling that is observed in the experiments. Both our results and the literature suggest that further improvements can be realized with adjustments in the turbulence model, the radiation heat transfer model, and the chemical mechanism. Although radiation effects are relatively small in these flames, consideration of radiation is important for accurate NO prediction. Chemical mechanisms that have been developed specifically for fuels with high concentrations of CO and H2 perform better than a methane mechanism that was not designed for this purpose. It is important to account explicitly for turbulence–chemistry interactions, although the details of the mixing model do not make a large difference in the results, within reasonable limits.
Wang, Chenggang; Ding, Yezhang; Yao, Jin; Zhang, Yanping; Sun, Yijun; Colee, James; Mou, Zhonglin
2015-09-01
The evolutionarily conserved Elongator complex functions in diverse biological processes including salicylic acid-mediated immune response. However, how Elongator functions in jasmonic acid (JA)/ethylene (ET)-mediated defense is unknown. Here, we show that Elongator is required for full induction of the JA/ET defense pathway marker gene PLANT DEFENSIN1.2 (PDF1.2) and for resistance to the necrotrophic fungal pathogens Botrytis cinerea and Alternaria brassicicola. A loss-of-function mutation in the Arabidopsis Elongator subunit 2 (ELP2) alters B. cinerea-induced transcriptome reprogramming. Interestingly, in elp2, expression of WRKY33, OCTADECANOID-RESPONSIVE ARABIDOPSIS AP2/ERF59 (ORA59), and PDF1.2 is inhibited, whereas transcription of MYC2 and its target genes is enhanced. However, overexpression of WRKY33 or ORA59 and mutation of MYC2 fail to restore PDF1.2 expression and B. cinerea resistance in elp2, suggesting that ELP2 is required for induction of not only WRKY33 and ORA59 but also PDF1.2. Moreover, elp2 is as susceptible as coronatine-insensitive1 (coi1) and ethylene-insensitive2 (ein2) to B. cinerea, indicating that ELP2 is an important player in B. cinerea resistance. Further analysis of the lesion sizes on the double mutants elp2 coi1 and elp2 ein2 and the corresponding single mutants revealed that the function of ELP2 overlaps with COI1 and is additive to EIN2 for B. cinerea resistance. Finally, basal histone acetylation levels in the coding regions of WRKY33, ORA59, and PDF1.2 are reduced in elp2 and a functional ELP2-GFP fusion protein binds to the chromatin of these genes, suggesting that constitutive ELP2-mediated histone acetylation may be required for full activation of the WRKY33/ORA59/PDF1.2 transcriptional cascade. © 2015 The Authors The Plant Journal © 2015 John Wiley & Sons Ltd.
40 CFR 141.402 - Ground water source microbial monitoring and analytical methods.
Code of Federal Regulations, 2010 CFR
2010-07-01
.../1604sp02.pdf or from EPA's Water Resource Center (RC-4100T), 1200 Pennsylvania Avenue, NW., Washington, DC... available at http://www.epa.gov/nerlcwww/1600sp02.pdf or from EPA's Water Resource Center (RC-4100T), 1200.../1601ap01.pdf or from EPA's Water Resource Center (RC-4100T), 1200 Pennsylvania Avenue, NW., Washington, DC...
Properties of the probability density function of the non-central chi-squared distribution
NASA Astrophysics Data System (ADS)
András, Szilárd; Baricz, Árpád
2008-10-01
In this paper we consider the probability density function (pdf) of a non-central χ² distribution with an arbitrary number of degrees of freedom. For this function we prove that it can be represented as a finite sum, and we deduce a partial derivative formula. Moreover, we show that the pdf is log-concave when the number of degrees of freedom is greater than or equal to 2. At the end of this paper we present some Turán-type inequalities for this function, and an elegant application of the monotone form of l'Hospital's rule in probability theory is given.
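The sum representation discussed here is closely related to the standard Poisson-mixture form of the non-central χ² pdf, which is easy to verify numerically (a verification sketch, not the paper's derivation); the log-concavity claim for df ≥ 2 can be spot-checked on a grid at the same time.

```python
import numpy as np
from scipy.stats import chi2, ncx2, poisson

def ncx2_pdf_series(x, df, nc, terms=200):
    """Poisson-weighted mixture of central chi-squared densities:
    f(x) = sum_j e^(-nc/2) (nc/2)^j / j! * chi2.pdf(x, df + 2j)."""
    j = np.arange(terms)
    w = poisson.pmf(j, nc / 2.0)
    return np.sum(w[:, None] * chi2.pdf(x[None, :], df + 2 * j[:, None]), axis=0)

x = np.linspace(0.1, 30.0, 50)
series = ncx2_pdf_series(x, df=4, nc=3.0)   # 4 degrees of freedom, nc = 3
exact = ncx2.pdf(x, 4, 3.0)                 # scipy's reference implementation
```

On this grid the truncated series agrees with `scipy.stats.ncx2.pdf` to near machine precision, and the discrete second differences of log f are non-positive, consistent with log-concavity for df ≥ 2.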
METAPHOR: Probability density estimation for machine learning based photometric redshifts
NASA Astrophysics Data System (ADS)
Amaro, V.; Cavuoti, S.; Brescia, M.; Vellucci, C.; Tortora, C.; Longo, G.
2017-06-01
We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method able to provide a reliable PDF for photometric galaxy redshifts estimated through empirical techniques. METAPHOR is a modular workflow, mainly based on the MLPQNA neural network as internal engine to derive photometric galaxy redshifts, but giving the possibility to easily replace MLPQNA with any other method to predict photo-z's and their PDF. We present here the results of a validation test of the workflow on the galaxies from SDSS-DR9, showing also the universality of the method by replacing MLPQNA with KNN and Random Forest models. The validation test also includes a comparison with the PDFs derived from a traditional SED template fitting method (Le Phare).
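The PDF-production idea, as we understand it from the abstract (perturb the photometry, re-run the empirical estimator, histogram the outcomes), can be sketched with a stand-in nearest-neighbour engine; the mock data, error model, and k below are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Mock training set: 3-band magnitudes -> redshift (toy monotonic relation).
mags = rng.uniform(18.0, 24.0, size=(2000, 3))
z = 0.1 * (mags.mean(axis=1) - 18.0) + 0.02 * rng.normal(size=2000)

def knn_photoz(query, k=15):
    """Plain k-nearest-neighbour estimate, a stand-in internal engine
    (METAPHOR itself is modular, with MLPQNA as the default)."""
    d = np.linalg.norm(mags - query, axis=1)
    return z[np.argsort(d)[:k]].mean()

# PDF: perturb the test photometry many times according to an assumed
# magnitude error, re-estimate z each time, and histogram the estimates.
test_mag = np.array([21.0, 21.3, 20.8])
sigma_mag = 0.1
z_samples = np.array([knn_photoz(test_mag + sigma_mag * rng.normal(size=3))
                      for _ in range(300)])
pdf, edges = np.histogram(z_samples, bins=20, density=True)
z_point = knn_photoz(test_mag)
```

Swapping `knn_photoz` for any other regressor leaves the PDF machinery untouched, which mirrors the modularity the abstract emphasises.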
NASA Astrophysics Data System (ADS)
Savre, J.; Ekman, A. M. L.
2015-05-01
A new parameterization for heterogeneous ice nucleation constrained by laboratory data and based on classical nucleation theory is introduced. Key features of the parameterization include the following: a consistent and modular modeling framework for treating condensation/immersion and deposition freezing, the possibility to consider various potential ice nucleating particle types (e.g., dust, black carbon, and bacteria), and the possibility to account for an aerosol size distribution. The ice nucleating ability of each aerosol type is described using a contact angle (θ) probability density function (PDF). A new modeling strategy is described to allow the θ PDF to evolve in time so that the most efficient ice nuclei (associated with the lowest θ values) are progressively removed as they nucleate ice. A computationally efficient quasi Monte Carlo method is used to integrate the computed ice nucleation rates over both size and contact angle distributions. The parameterization is employed in a parcel model, forced by an ensemble of Lagrangian trajectories extracted from a three-dimensional simulation of a springtime low-level Arctic mixed-phase cloud, in order to evaluate the accuracy and convergence of the method using different settings. The same model setup is then employed to examine the importance of various parameters for the simulated ice production. Modeling the time evolution of the θ PDF is found to be particularly crucial; assuming a time-independent θ PDF significantly overestimates the ice nucleation rates. It is stressed that the capacity of black carbon (BC) to form ice in the condensation/immersion freezing mode is highly uncertain, in particular at temperatures warmer than -20°C. In its current version, the parameterization most likely overestimates ice initiation by BC.
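A minimal sketch of the evolving contact-angle PDF idea follows; all functional forms and constants are illustrative toys, not the laboratory-constrained parameterization of the paper.

```python
import numpy as np

# Contact-angle (theta) PDF for one aerosol type: an illustrative Gaussian
# truncated to (0, pi), NOT the laboratory-constrained PDF of the paper.
theta = np.linspace(0.01, np.pi, 500)
dth = theta[1] - theta[0]
pdf = np.exp(-0.5 * ((theta - 1.2) / 0.25) ** 2)
pdf /= pdf.sum() * dth                    # normalise on the grid

def rate(th):
    """Toy stand-in for a CNT nucleation rate per particle: small contact
    angles (the efficient ice nuclei) freeze fastest."""
    return 5.0 * np.exp(-4.0 * th)

# Time evolution: the most efficient nuclei are progressively removed as
# they nucleate, so the surviving theta-PDF shifts to larger angles.
t_total = 5.0
surv = np.exp(-rate(theta) * t_total)     # per-particle survival probability
frozen_fraction = 1.0 - np.sum(pdf * surv) * dth
mean_theta_0 = np.sum(theta * pdf) * dth
mean_theta_t = np.sum(theta * pdf * surv) / np.sum(pdf * surv)
```

The surviving population's mean contact angle increases with time, which is exactly why assuming a time-independent θ PDF overestimates later nucleation rates.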
Crystal structure of an EfPDF complex with Met-Ala-Ser based on crystallographic packing.
Nam, Ki Hyun; Kim, Kook-Han; Kim, Eunice Eun Kyeong; Hwang, Kwang Yeon
2009-04-17
PDF (peptide deformylase) plays a critical role in the production of mature proteins by removing the N-formyl group from nascent proteins in the prokaryote cell system. This protein is essential for bacterial growth, making it an attractive target for the design of new antibiotics. Accordingly, PDF has been evaluated as a drug target; however, architectural mechanism studies of PDF have not yet fully elucidated its molecular function. We recently reported the crystal structure of PDF produced by Enterococcus faecium [K.H. Nam, J.I. Ham, A. Priyadarshi, E.E. Kim, N. Chung, K.Y. Hwang, "Insight into the antibacterial drug design and architectural mechanism of peptide recognition from the E. faecium peptide deformylase structure", Proteins 74 (2009) 261-265]. Here, we present the crystal structure of the EfPDF complex with MAS (Met-Ala-Ser), thereby not only delineating the architectural mechanism for the recognition of mimic-peptides by N-terminal cleaved expression peptide, but also suggesting possible targets for rational design of antibacterial drugs. In addition to their implications for drug design, these structural studies will facilitate elucidation of the architectural mechanism responsible for the peptide recognition of PDF.
Hoshino, Taro; Ishii, Hiroki; Kitano, Taisuke; Shindo, Mitsutoshi; Miyazawa, Haruhisa; Yamada, Hodaka; Ito, Kiyonori; Ueda, Yuichiro; Kaku, Yoshio; Hirai, Keiji; Mori, Honami; Ookawara, Susumu; Tabei, Kaoru; Morishita, Yoshiyuki
2016-02-01
The highly concentrated lactate in peritoneal dialysis fluid (PDF) has been considered to contribute to peritoneal failure in patients undergoing PD. A new PDF containing a lower lactate concentration, physiological bicarbonate concentration, and neutral pH (bicarbonate/lactate-buffered neutral PDF) was recently developed. We compared the clinical effects of this bicarbonate/lactate-buffered neutral PDF and a lactate-buffered neutral PDF. Patients undergoing PD were changed from a lactate-buffered neutral PDF to a bicarbonate/lactate-buffered neutral PDF. We then investigated the changes in peritoneal functions as estimated by a peritoneal equilibration test (PET) and the following surrogate markers of peritoneal membrane failure in the drained dialysate: fibrin degradation products (FDP), vascular endothelial growth factor (VEGF), cancer antigen 125 (CA125), interleukin-6 (IL-6), and transforming growth factor beta 1 (TGF-β1). Fourteen patients undergoing PD were enrolled. The PET results were not different before and after use of the bicarbonate/lactate-buffered neutral PDF. The FDP concentration significantly decreased from 15.60 ± 13.90 to 6.04 ± 3.49 μg/mL (p = 0.02) and the VEGF concentration significantly decreased from 37.83 ± 15.82 to 27.70 ± 3.80 pg/mL (p = 0.02), while the CA125 and IL-6 concentrations remained unchanged before and after use of the bicarbonate/lactate-buffered neutral PDF. TGF-β1 was not detected in most patients. The bicarbonate/lactate-buffered neutral PDF decreased the FDP and VEGF concentrations in the drained dialysate. These results suggest that the decreased lactate level achieved by administration of bicarbonate with a neutral pH in PDF may contribute to decreased peritoneal membrane failure in patients undergoing PD.
Honda, Takeshi; Matsushima, Ayami; Sumida, Kazunori; Chuman, Yoshiro; Sakaguchi, Kazuyasu; Onoue, Hitoshi; Meinertzhagen, Ian A; Shimohigashi, Yasuyuki; Shimohigashi, Miki
2006-11-20
Pigment-dispersing factor (PDF) is an 18-mer peptide that acts as a principal neurotransmitter of the insect circadian clock. Our previous study, utilizing anti-Uca beta-PDH polyclonal antibody (pAb) to immunolabel the optic lobe of the cricket Gryllus bimaculatus, suggested the existence of an alternative PDF-like peptide in the outer cells of the first neuropile, or lamina (La), which were much less immunoreactive than the inner cells of the second neuropile, the medulla (Me). To obtain structural information about such a PDF-like peptide, we prepared 10 anti-Gryllus PDF monoclonal (mAb) and pAb antibodies and analyzed their detailed epitope specificities. The PDFMe and PDFLa inner cells and their axonal projections were clearly immunoreactive to all these antibodies, revealing the widespread immunocytochemical organization of the PDF system in the optic lobe, as seen previously with anti-Uca beta-PDH pAb and anti-Gryllus PDF mAb, the epitope structures of which were also clarified in this study. The lamina outer cells, which we found lacked a target pdf mRNA, displayed specific immunoreactivities, indicating that the cells contain a distinct PDF-like peptide possessing both N- and C-terminal structures. These cells were not immunolabeled by some other monoclonal antibodies, however, implying that the PDFLa outer cells have a PDF isoform peptide devoid of Asn at positions 6 and 16. This isoform was also identified in a varicose arborization in the lamina. These results suggest not only the structure of the peptide, but also the possibility of additional functions of this novel PDF isoform.
Efficient statistically accurate algorithms for the Fokker-Planck equation in large dimensions
NASA Astrophysics Data System (ADS)
Chen, Nan; Majda, Andrew J.
2018-02-01
Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat tailed highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. Particularly, the parametric method provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace and is therefore computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. 
It is shown in a stringent set of test problems that the method only requires an order of O (100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6 dimensions with only small errors.
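The key mechanism, closed-form conditional Gaussians that each cover a large portion of the PDF so that a small ensemble suffices, can be sketched in two dimensions (a toy system, not the authors' conditional Gaussian framework).

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy conditional-Gaussian system: sample the "low-dimensional" variable v,
# and carry u | v ~ N(v**2, 0.3**2) analytically for each ensemble member.
# The full PDF of u is then a Gaussian mixture over the small ensemble.
N = 100                                   # cf. the O(100) ensembles above
v = rng.standard_normal(N)
mu_u, sig_u = v**2, 0.3                   # closed-form conditional moments

def pdf_u(u_grid):
    """Average of the N conditional Gaussians: a mixture, not particles."""
    z = (u_grid[None, :] - mu_u[:, None]) / sig_u
    comp = np.exp(-0.5 * z**2) / (sig_u * np.sqrt(2 * np.pi))
    return comp.mean(axis=0)

u = np.linspace(-2.0, 16.0, 600)
p = pdf_u(u)
du = u[1] - u[0]
m1 = np.sum(u * p) * du                   # mean of the recovered PDF
m3 = np.sum((u - m1) ** 3 * p) * du       # third central moment
```

Even though every conditional is Gaussian, the recovered marginal of u is strongly skewed (positive third moment), illustrating how the mixture captures non-Gaussian PDFs from only a hundred members.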
Zhang, Fan; Hu, Jing; Kelsey, Chris R; Yoo, David; Yin, Fang-Fang; Cai, Jing
2012-11-01
To evaluate the reproducibility of the tumor motion probability distribution function (PDF) in stereotactic body radiation therapy (SBRT) of lung cancer using cine megavoltage (MV) images. Cine MV images of 20 patients acquired during three-dimensional conformal (6-11 beams) SBRT treatments were retrospectively analyzed to extract tumor motion trajectories. For each patient, tumor motion PDFs were generated per fraction (PDF(n)) using three selected "usable" beams. Patients without at least three usable beams were excluded from the study. Fractional PDF reproducibility (R(n)) was calculated as the Dice similarity coefficient between PDF(n) and a "ground-truth" PDF (PDF(g)), which was generated using the selected beams of all fractions. The mean of R(n), labeled R(m), was calculated for each patient and correlated to the patient's mean tumor motion range (A(m)). Change of R(m) during the course of SBRT treatments was also evaluated. Intra- and intersubject coefficients of variation (CV) of R(m) and A(m) were determined. Thirteen patients had at least three usable beams and were analyzed. The mean of R(m) was 0.87 (range, 0.84-0.95). The mean of A(m) was 3.18 mm (range, 0.46-7.80 mm). R(m) was found to decrease as A(m) increases, following the equation R(m) = 0.17e^(-0.9A(m)) + 0.84. R(m) also decreased slightly throughout the course of treatments. The intersubject CV of R(m) (0.05) was comparable to the intrasubject CV of R(m) (range, 0.02-0.09); the intersubject CV of A(m) (0.73) was significantly greater than the intrasubject CV of A(m) (range, 0.09-0.24). Tumor motion PDFs can be determined using cine MV images acquired during the treatments. The reproducibility of the lung tumor motion PDF decreased exponentially as the tumor motion range increased and decreased slightly throughout the course of the treatments. Copyright © 2012 Elsevier Inc. All rights reserved.
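The Dice-based reproducibility measure can be sketched as follows; the synthetic trajectories and the particular Dice convention for discretised PDFs are our assumptions for illustration, not necessarily the paper's exact definitions.

```python
import numpy as np

def motion_pdf(traj, edges):
    """Histogram PDF of a tumour-motion trace (positions in mm)."""
    p, _ = np.histogram(traj, bins=edges, density=True)
    return p

def dice(p, q):
    """One common convention for the Dice similarity of two discretised
    PDFs: twice the overlapping mass over the total mass (1.0 iff equal)."""
    return 2.0 * np.sum(np.minimum(p, q)) / (np.sum(p) + np.sum(q))

t = np.linspace(0.0, 60.0, 3000)            # 60 s of motion, toy sampling
edges = np.linspace(-6.0, 6.0, 41)          # position bins in mm
traj_g = 3.0 * np.sin(2 * np.pi * t / 4.0)          # pooled "ground truth"
traj_n = 3.3 * np.sin(2 * np.pi * t / 4.2 + 0.3)    # one fraction's trace
r_n = dice(motion_pdf(traj_n, edges), motion_pdf(traj_g, edges))
```

With small amplitude/period jitter between fractions the Dice value stays close to (but below) 1, matching the high R(m) values reported for small motion ranges.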
Separation of components from a scale mixture of Gaussian white noises
NASA Astrophysics Data System (ADS)
Vamoş, Călin; Crăciun, Maria
2010-05-01
The time evolution of a physical quantity associated with a thermodynamic system whose equilibrium fluctuations are modulated in amplitude by a slowly varying phenomenon can be modeled as the product of a Gaussian white noise {Zt} and a stochastic process with strictly positive values {Vt} referred to as volatility. The probability density function (pdf) of the process Xt=VtZt is a scale mixture of Gaussian white noises expressed as a time average of Gaussian distributions weighted by the pdf of the volatility. The separation of the two components of {Xt} can be achieved by imposing the condition that the absolute values of the estimated white noise be uncorrelated. We apply this method to the time series of the returns of the daily S&P500 index, which has also been analyzed by means of the superstatistics method that imposes the condition that the estimated white noise be Gaussian. The advantage of our method is that this financial time series is processed without partitioning or removal of the extreme events, and the estimated white noise becomes almost Gaussian only as a result of the uncorrelation condition.
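A minimal sketch of the separation idea, assuming a moving-average volatility estimator and using the lag-1 autocorrelation of |Z| as the uncorrelation criterion; both choices are our simplifications, not necessarily the authors' estimator.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic series: Gaussian white noise modulated by a slow volatility.
n = 20_000
vol = np.exp(0.5 * np.sin(2 * np.pi * np.arange(n) / 2000))
x = vol * rng.standard_normal(n)

def est_vol(x, w):
    """Moving-average volatility estimate from |x| over a window w;
    E|Z| = sqrt(2/pi) for a standard normal, hence the correction."""
    return np.convolve(np.abs(x), np.ones(w) / w, mode='same') / np.sqrt(2 / np.pi)

def abs_autocorr(z, lag=1):
    """Lag-1 autocorrelation of |z|: the quantity driven towards zero."""
    a = np.abs(z) - np.abs(z).mean()
    return np.dot(a[:-lag], a[lag:]) / np.dot(a, a)

# Choose the window whose residual noise has the least-correlated |z|.
windows = [25, 50, 100, 200, 400, 800]
best_w = min(windows, key=lambda w: abs(abs_autocorr(x / est_vol(x, w))))
z_hat = x / est_vol(x, best_w)
```

The raw series has clearly correlated |x| (the volatility signature), while the residual after the selected window is nearly uncorrelated with unit variance, which is the separation criterion in miniature.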
NASA Astrophysics Data System (ADS)
Rahmani, Kianoosh; Kavousifard, Farzaneh; Abbasi, Alireza
2017-09-01
This article proposes a novel probabilistic Distribution Feeder Reconfiguration (DFR) method that takes uncertainty impacts into account with high accuracy. To achieve this aim, different scenarios are generated to represent the degree of uncertainty in the investigated elements, namely the active and reactive load consumption and the active power generation of the wind power units. Notably, a normal Probability Density Function (PDF), discretised to the desired accuracy, is divided into several class intervals for each uncertain parameter. Besides, the Weibull PDF is utilised for modelling wind generators and capturing the variation of the power production in wind generators. The proposed problem is solved using Fuzzy Adaptive Modified Particle Swarm Optimisation to find the optimal switching scheme during the multi-objective DFR. Moreover, this paper proposes two new mutation methods that adjust the inertia weight of PSO using fuzzy rules to enhance its global searching ability within the entire search space.
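The class-interval discretisation of a normal PDF and the Weibull wind model can be sketched as follows; all parameter values, and the power curve, are hypothetical stand-ins rather than the paper's data.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(5)

# One uncertain parameter: a normal load PDF divided into 7 class
# intervals spanning +-3 sigma (illustrative per-unit values).
mu, sigma = 1.0, 0.1
edges = mu + sigma * np.linspace(-3.0, 3.0, 8)
centres = 0.5 * (edges[1:] + edges[:-1])
cdf = lambda v: 0.5 * (1.0 + erf((v - mu) / (sigma * sqrt(2.0))))
probs = np.diff([cdf(e) for e in edges])
probs /= probs.sum()              # fold the clipped tails back in

# Weibull PDF for the wind units: sample speeds, then map to per-unit
# power with a simple hypothetical cut-in / rated / cut-out curve.
k_shape, c_scale = 2.0, 8.0       # Weibull shape and scale (assumed)
wind = c_scale * rng.weibull(k_shape, size=10_000)
power = np.clip((wind - 3.0) / (12.0 - 3.0), 0.0, 1.0)
power[wind > 25.0] = 0.0          # cut-out speed
expected_power = power.mean()     # per-unit expected wind output
```

Each (load class, wind sample) pair then defines one scenario with a joint probability, which is the input the probabilistic DFR optimiser would consume.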
NASA Astrophysics Data System (ADS)
He, Zhenzong; Qi, Hong; Wang, Yuqing; Ruan, Liming
2014-10-01
Four improved Ant Colony Optimization (ACO) algorithms, i.e. the probability density function based ACO (PDF-ACO) algorithm, the Region ACO (RACO) algorithm, the Stochastic ACO (SACO) algorithm and the Homogeneous ACO (HACO) algorithm, are employed to estimate the particle size distribution (PSD) of spheroidal particles. The direct problems are solved by the extended Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. Three commonly used monomodal distribution functions, i.e. the Rosin-Rammler (R-R) distribution function, the normal (N-N) distribution function, and the logarithmic normal (L-N) distribution function, are estimated under the dependent model. The influence of random measurement errors on the inverse results is also investigated. All the results reveal that the PDF-ACO algorithm is more accurate than the other three ACO algorithms and can be used as an effective technique to investigate the PSD of spheroidal particles. Furthermore, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the PSD of spheroidal particles using the PDF-ACO algorithm. The investigation shows a reasonable agreement between the original distribution function and the general distribution function when only the variation of the length of the rotational semi-axis is considered.
Transcriptional regulation of the human mitochondrial peptide deformylase (PDF).
Pereira-Castro, Isabel; Costa, Luís Teixeira da; Amorim, António; Azevedo, Luisa
2012-05-18
Recent years of research have been particularly dynamic in establishing the importance of peptide deformylase (PDF), a protein of the N-terminal methionine excision (NME) pathway that removes formyl-methionine from mitochondrially encoded proteins. The genomic sequence of the human PDF gene is shared with the COG8 gene, which encodes a component of the oligomeric Golgi complex, a very unusual case in eukaryotic genomes. Since PDF is crucial in maintaining mitochondrial function, and given the atypically short distance between the end of the COG8 coding sequence and the PDF initiation codon, we investigated whether the regulation of human PDF is affected by its COG8 overlapping partner. Our data reveal that PDF has several transcription start sites, the most important of which is only 18 bp from the initiation codon. Furthermore, luciferase-activation assays using differently sized fragments defined a 97 bp minimal promoter region for human PDF, which is capable of very strong transcriptional activity. This fragment contains a potential Sp1 binding site highly conserved in mammalian species. We show that this binding site, whose mutation significantly reduces transcription activation, is a target for the Sp1 transcription factor, and possibly for other members of the Sp family. Importantly, the entire minimal promoter region is located after the end of COG8's coding region, strongly suggesting that human PDF preserves a regulation independent from its overlapping partner. Copyright © 2012 Elsevier Inc. All rights reserved.
Cross-reference identification within a PDF document
NASA Astrophysics Data System (ADS)
Li, Sida; Gao, Liangcai; Tang, Zhi; Yu, Yinyan
2015-01-01
Cross-references, such as footnotes, endnotes, figure/table captions, and bibliographic references, are a common and useful type of page element that further explains the corresponding entities in a document. In this paper, we focus on cross-reference identification in a PDF document and present a robust method as a case study of identifying footnotes and figure references. The proposed method first extracts footnotes and figure captions, and then matches them with their corresponding references within a document. A number of novel features within a PDF document, i.e., page layout, font information, and lexical and linguistic features of cross-references, are utilized for the task. Clustering is adopted to handle features that are stable within one document but vary across different kinds of documents, so that the identification process adapts to document types. In addition, the method leverages results from the matching process to provide feedback to the identification process and further improve accuracy. Preliminary experiments on real document sets show that the proposed method is promising for identifying cross-references in a PDF document.
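The caption-to-reference matching step can be illustrated with a bare-bones regex sketch; real PDFs require the layout and font features discussed above, and this toy assumes already-clean extracted text.

```python
import re

# Hypothetical mini-document: body text with figure references, plus the
# extracted caption lines (in a real PDF these come with layout/font cues).
body = "As shown in Fig. 2, accuracy improves. Fig. 1 gives the layout."
captions = ["Fig. 1. System overview.", "Fig. 2. Accuracy vs. noise."]

# Index the captions by their figure number.
cap_index = {re.match(r"Fig\.\s*(\d+)", c).group(1): c for c in captions}

# Match each in-text reference to its caption (None if no caption found).
links = [(m.group(0), cap_index.get(m.group(1)))
         for m in re.finditer(r"Fig\.\s*(\d+)", body)]
```

In the paper's pipeline this lexical matching would be one signal among several, refined by the clustering and feedback loop described above.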
Case Report: Diagnosis of a Rare Plaque-Like Dermal Fibroma Successfully Treated With Mohs Surgery.
Gill, Pavandeep; Arlette, John; Shiau, Carolyn J; Abi Daoud, Marie S
CD34-positive plaque-like dermal fibroma (PDF) is a poorly characterised benign dermal neoplasm that has a wide differential diagnosis. It can be mistaken for other entities on superficial biopsy and be overtreated, leading to unnecessary worry and extensive surgery. To report on an uncommon presentation of this entity, the histopathologic differential diagnosis of PDF, and a novel treatment method. Clinical and histopathological information was obtained for a PDF lesion on a 75-year-old man. On superficial biopsy, the PDF lesion was misinterpreted as a possible neurothekeoma. Successful Mohs surgery and genetic testing confirmed the diagnosis of PDF, and the patient received appropriate tissue-sparing surgical management. This case adds to our current knowledge about PDF and highlights the importance of early recognition of these lesions to direct appropriate diagnostic testing (full-thickness biopsy) and management. This case confirms successful management with Mohs surgery.
NASA Astrophysics Data System (ADS)
Veltchev, Todor; Donkov, Sava; Stanchev, Orlin
2017-07-01
We present a method to derive the density scaling relation
SU-F-T-94: Plan2pdf - a Software Tool for Automatic Plan Report for Philips Pinnacle TPS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, C
Purpose: To implement an automatic electronic PDF plan reporting tool for the Philips Pinnacle treatment planning system (TPS). Methods: We developed an electronic treatment plan reporting tool that enables fully automatic PDF reporting from Pinnacle TPS to external EMR programs such as MOSAIQ. The tool is named “plan2pdf”. plan2pdf is implemented using Pinnacle scripts, Java, and UNIX shell scripts, without any external program needed. plan2pdf supports a full auto mode and a manual mode. In full auto mode, with a single mouse click, plan2pdf generates a detailed Pinnacle plan report in PDF format, which includes a customizable cover page, the Pinnacle plan summary, orthogonal views through each plan POI and the maximum dose point, a DRR for each beam, serial transverse views captured throughout the dose grid at a user-specified interval, and the DVH and scorecard windows. The final PDF report is also automatically bookmarked for each section above for convenient plan review. The final PDF report can either be saved in a user-specified folder on Pinnacle, or it can be automatically exported to an EMR import folder via a user-configured FTP service. In manual capture mode, plan2pdf allows users to capture any Pinnacle plan by full screen, individual window, or a rectangular ROI drawn on screen. Furthermore, to avoid possible mix-up of patients' plans during auto-mode reporting, a user conflict check feature is included in plan2pdf: it prompts the user to wait if another patient's plan is being exported by another user. Results: plan2pdf was tested extensively and successfully at our institution, which consists of 5 centers, 15 dosimetrists, and 10 physicists, running Pinnacle version 9.10 on Enterprise servers. Conclusion: plan2pdf provides a highly efficient, user-friendly, and clinically proven platform for all Philips Pinnacle users to generate a detailed plan report in PDF format for external EMR systems.
EUPDF: An Eulerian-Based Monte Carlo Probability Density Function (PDF) Solver. User's Manual
NASA Technical Reports Server (NTRS)
Raju, M. S.
1998-01-01
EUPDF is an Eulerian-based Monte Carlo PDF solver developed for application with sprays, combustion, parallel computing, and unstructured grids. It is designed to be massively parallel and can easily be coupled with any existing gas-phase flow and spray solvers. The solver accommodates the use of an unstructured mesh with mixed elements of triangular, quadrilateral, and/or tetrahedral type. The manual provides the user with the coding required to couple the PDF code to any given flow code, a basic understanding of the EUPDF code structure, and descriptions of the models involved in the PDF formulation. The source code of EUPDF will be available with the release of the National Combustion Code (NCC) as a complete package.
NASA Technical Reports Server (NTRS)
Raju, M. S.
1998-01-01
The success of any solution methodology used in the study of gas-turbine combustor flows depends a great deal on how well it can model the various complex and rate controlling processes associated with the spray's turbulent transport, mixing, chemical kinetics, evaporation, and spreading rates, as well as convective and radiative heat transfer and other phenomena. The phenomena to be modeled, which are controlled by these processes, often strongly interact with each other at different times and locations. In particular, turbulence plays an important role in determining the rates of mass and heat transfer, chemical reactions, and evaporation in many practical combustion devices. The influence of turbulence in a diffusion flame manifests itself in several forms, ranging from the so-called wrinkled, or stretched, flamelets regime to the distributed combustion regime, depending upon how turbulence interacts with various flame scales. Conventional turbulence models have difficulty treating highly nonlinear reaction rates. A solution procedure based on the composition joint probability density function (PDF) approach holds the promise of modeling various important combustion phenomena relevant to practical combustion devices (such as extinction, blowoff limits, and emissions predictions) because it can account for nonlinear chemical reaction rates without making approximations. In an attempt to advance the state-of-the-art in multidimensional numerical methods, we at the NASA Lewis Research Center extended our previous work on the PDF method to unstructured grids, parallel computing, and sprays. EUPDF, which was developed by M.S. Raju of Nyma, Inc., was designed to be massively parallel and could easily be coupled with any existing gas-phase and/or spray solvers. EUPDF can use an unstructured mesh with mixed triangular, quadrilateral, and/or tetrahedral elements. 
The PDF method showed favorable results when applied to several supersonic diffusion flames and spray flames. The EUPDF source code will be available with the National Combustion Code (NCC) as a complete package.
Turbulence-induced relative velocity of dust particles. III. The probability distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan, Liubin; Padoan, Paolo; Scalo, John, E-mail: lpan@cfa.harvard.edu, E-mail: ppadoan@icc.ub.edu, E-mail: parrot@astro.as.utexas.edu
2014-09-01
Motivated by its important role in the collisional growth of dust particles in protoplanetary disks, we investigate the probability distribution function (PDF) of the relative velocity of inertial particles suspended in turbulent flows. Using the simulation from our previous work, we compute the relative velocity PDF as a function of the friction timescales, τ_p1 and τ_p2, of two particles of arbitrary sizes. The friction time of the particles included in the simulation ranges from 0.1τ_η to 54T_L, where τ_η and T_L are the Kolmogorov time and the Lagrangian correlation time of the flow, respectively. The relative velocity PDF is generically non-Gaussian, exhibiting fat tails. For a fixed value of τ_p1, the PDF shape is the fattest for equal-size particles (τ_p2 = τ_p1), and becomes thinner at both τ_p2 < τ_p1 and τ_p2 > τ_p1. Defining f as the friction time ratio of the smaller particle to the larger one, we find that, at a given f in (1/2) ≲ f ≲ 1, the PDF fatness first increases with the friction time τ_p,h of the larger particle, peaks at τ_p,h ≅ τ_η, and then decreases as τ_p,h increases further. For 0 ≤ f ≲ (1/4), the PDF becomes continuously thinner with increasing τ_p,h. The PDF is nearly Gaussian only if τ_p,h is sufficiently large (≫T_L). These features are successfully explained by the Pan and Padoan model. Using our simulation data and some simplifying assumptions, we estimated the fractions of collisions resulting in sticking, bouncing, and fragmentation as a function of the dust size in protoplanetary disks, and argued that accounting for non-Gaussianity of the collision velocity may help further alleviate the bouncing barrier problem.
NASA Astrophysics Data System (ADS)
Zhang, Pei; Barlow, Robert; Masri, Assaad; Wang, Haifeng
2016-11-01
The mixture fraction and progress variable are often used as independent variables for describing turbulent premixed and non-premixed flames. There is a growing interest in using these two variables for describing partially premixed flames. The joint statistical distribution of the mixture fraction and progress variable is of great interest in developing models for partially premixed flames. In this work, we conduct predictive studies of the joint statistics of mixture fraction and progress variable in a series of piloted methane jet flames with inhomogeneous inlet flows. The employed models combine large eddy simulations with the Monte Carlo probability density function (PDF) method. The joint PDFs and marginal PDFs are examined in detail by comparing the model predictions and the measurements. Different presumed shapes of the joint PDFs are also evaluated.
Meta-heuristic CRPS minimization for the calibration of short-range probabilistic forecasts
NASA Astrophysics Data System (ADS)
Mohammadi, Seyedeh Atefeh; Rahmani, Morteza; Azadi, Majid
2016-08-01
This paper deals with probabilistic short-range temperature forecasts over synoptic meteorological stations across Iran using non-homogeneous Gaussian regression (NGR). NGR creates a Gaussian forecast probability density function (PDF) from the ensemble output. The mean of the normal predictive PDF is a bias-corrected weighted average of the ensemble members, and its variance is a linear function of the raw ensemble variance. The coefficients for the mean and variance are estimated by minimizing the continuous ranked probability score (CRPS) over a training period. CRPS is a scoring rule for distributional forecasts. Gneiting et al. (Mon Weather Rev 133:1098-1118, 2005) used the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method to minimize the CRPS. Since BFGS is a conventional optimization method with its own limitations, we suggest using particle swarm optimization (PSO), a robust meta-heuristic method, to minimize the CRPS. The ensemble prediction system used in this study consists of nine different configurations of the Weather Research and Forecasting model for 48-h forecasts of temperature during autumn and winter 2011 and 2012. The probabilistic forecasts were evaluated using several common verification scores, including the Brier score, attribute diagram, and rank histogram. Results show that both BFGS and PSO find the optimal solution and yield the same evaluation scores, but PSO can do this from a feasible random first guess and with much less computational complexity.
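As a concrete illustration of the training objective described above, the closed-form CRPS of a Gaussian predictive distribution (the quantity minimized in NGR by either BFGS or PSO) can be sketched as follows; the four-parameter mean/variance model mirrors the standard NGR form, while the function names themselves are illustrative:

```python
import math

def gaussian_crps(mu, sigma, y):
    """Closed-form CRPS of a Gaussian forecast N(mu, sigma^2) at observation y."""
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # standard normal pdf
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # standard normal cdf
    return sigma * (z * (2.0 * cdf - 1.0) + 2.0 * pdf - 1.0 / math.sqrt(math.pi))

def mean_crps(params, ensemble_means, ensemble_vars, obs):
    """Training objective: mean CRPS over (forecast, observation) pairs,
    with mu = a + b*ens_mean and sigma^2 = c + d*ens_var (the NGR form)."""
    a, b, c, d = params
    total = 0.0
    for m, v, y in zip(ensemble_means, ensemble_vars, obs):
        total += gaussian_crps(a + b * m, math.sqrt(c + d * v), y)
    return total / len(obs)
```

Any minimizer (BFGS, PSO, or otherwise) can then be pointed at `mean_crps`; a sanity check is that CRPS at y = μ equals σ(2/√(2π) − 1/√π) ≈ 0.234σ.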
First passage Brownian functional properties of snowmelt dynamics
NASA Astrophysics Data System (ADS)
Dubey, Ashutosh; Bandyopadhyay, Malay
2018-04-01
In this paper, we model snowmelt dynamics in terms of a Brownian motion (BM) with purely time-dependent drift and diffusion, and examine its first passage properties by introducing and analyzing several Brownian functionals that characterize the lifetime and reactivity of such stochastic processes. We introduce several probability distribution functions (PDFs) associated with such time-dependent BMs. For instance, for a BM with initial starting point x0, we derive analytical expressions for: (i) the PDF P(tf|x0) of the first passage time tf, which specifies the lifetime of the stochastic process; (ii) the PDF P(A|x0) of the area A swept out until the first passage time, which provides valuable information about the total fresh-water availability during melting; (iii) the PDF P(M) of the maximum size M of the BM process before the first passage time; and (iv) the joint PDF P(M; tm) of the maximum size M and its occurrence time tm before the first passage time. P(M) and P(M; tm) are useful for determining the time of maximum fresh-water availability and for calculating the total maximum amount of available fresh water. These PDFs are examined for power-law time-dependent drift and diffusion, and they match the available snowmelt data quite well.
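For orientation, the first-passage-time PDF P(tf|x0) has a well-known closed form in the constant-coefficient special case (the inverse-Gaussian/Lévy density); the sketch below assumes a driftless BM with constant diffusion coefficient D, which is a simplification of the paper's time-dependent drift and diffusion:

```python
import math

def first_passage_pdf(t, x0, D=0.5):
    """PDF of the first-passage time to the origin for a driftless Brownian
    motion started at x0 > 0 with constant diffusion coefficient D
    (dX = sqrt(2D) dW): the Levy/inverse-Gaussian density
    P(t|x0) = x0 / sqrt(4*pi*D*t^3) * exp(-x0^2 / (4*D*t))."""
    return x0 / math.sqrt(4.0 * math.pi * D * t ** 3) * math.exp(-x0 ** 2 / (4.0 * D * t))
```

Setting the log-derivative to zero gives the most probable first-passage time t* = x0²/(6D), i.e. t* = x0²/3 for D = 1/2, which a quick grid scan of the density confirms.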
APFEL: A PDF evolution library with QED corrections
NASA Astrophysics Data System (ADS)
Bertone, Valerio; Carrazza, Stefano; Rojo, Juan
2014-06-01
Quantum electrodynamics and electroweak corrections are important ingredients for many theoretical predictions at the LHC. This paper documents APFEL, a new PDF evolution package that makes it possible for the first time to perform DGLAP evolution up to NNLO in QCD and to LO in QED, in the variable-flavor-number scheme and with either pole or MS-bar heavy-quark masses. APFEL consistently accounts for the QED corrections to the evolution of quark and gluon PDFs and for the contribution from the photon PDF in the proton. The coupled QCD ⊗ QED equations are solved in x-space by means of higher-order interpolation, followed by a Runge-Kutta solution of the resulting discretized evolution equations. APFEL is based on an innovative and flexible methodology for the sequential solution of the QCD and QED evolution equations and their combination. In addition to PDF evolution, APFEL provides a module that computes Deep-Inelastic Scattering structure functions in the FONLL general-mass variable-flavor-number scheme up to O(α_s^2). All the functionalities of APFEL can be accessed via a Graphical User Interface, supplemented with a variety of plotting tools for PDFs, parton luminosities, and structure functions. Written in FORTRAN 77, APFEL can also be used via the C/C++ and Python interfaces, and is publicly available from the HepForge repository.
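The Runge-Kutta strategy mentioned above can be illustrated on a deliberately reduced problem: a single non-singlet Mellin moment evolving with a 1-loop running coupling. This is a toy stand-in for APFEL's x-space machinery, and all parameter values below are illustrative, but it has an analytic solution, M(Q²) = M0 · (α_s(Q²)/α_s(Q0²))^(−γ/(2π b0)), against which the RK4 integrator can be checked:

```python
import math

def alpha_s(lnQ2, alpha0, b0):
    """1-loop running coupling: d(alpha)/dlnQ2 = -b0*alpha^2, alpha(0) = alpha0."""
    return alpha0 / (1.0 + b0 * alpha0 * lnQ2)

def evolve_moment(M0, alpha0, b0, gamma, lnQ2_final, steps=400):
    """RK4 evolution of a toy non-singlet moment:
    dM/dlnQ2 = (alpha_s(Q2)/(2*pi)) * gamma * M."""
    h = lnQ2_final / steps
    t, M = 0.0, M0
    f = lambda tt, MM: (alpha_s(tt, alpha0, b0) / (2.0 * math.pi)) * gamma * MM
    for _ in range(steps):
        k1 = f(t, M)
        k2 = f(t + h / 2.0, M + h * k1 / 2.0)
        k3 = f(t + h / 2.0, M + h * k2 / 2.0)
        k4 = f(t + h, M + h * k3)
        M += h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        t += h
    return M
```

The real solver evolves full x-space distributions on an interpolation grid rather than single moments, but the ODE-integration step is of exactly this kind.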
NASA Astrophysics Data System (ADS)
Wei, J. Q.; Cong, Y. C.; Xiao, M. Q.
2018-05-01
As renewable energies are increasingly integrated into power systems, there is increasing interest in stochastic analysis of power systems. Better techniques should be developed to account for the uncertainty caused by penetration of renewables and, consequently, to analyse their impacts on the stochastic stability of power systems. In this paper, Stochastic Differential Equations (SDEs) are used to represent the evolutionary behaviour of the power systems. The stationary Probability Density Function (PDF) solution to SDEs modelling power systems excited by Gaussian white noise is analysed. Subject to such random excitation, the Joint Probability Density Function (JPDF) solution for the phase angle and angular velocity is governed by the generalized Fokker-Planck-Kolmogorov (FPK) equation. To solve this equation, a numerical method is adopted. Special measures are taken such that the generalized FPK equation is satisfied in the average sense of integration with the assumed PDF. Both weak and strong intensities of the stochastic excitations are considered in a single machine infinite bus power system. The numerical analysis gives the same result as the Monte Carlo simulation. Potential studies on the stochastic behaviour of multi-machine power systems with random excitations are discussed at the end.
Mayer, Georg; Hering, Lars; Stosch, Juliane M; Stevenson, Paul A; Dircksen, Heinrich
2015-09-01
Pigment-dispersing factor (PDF) denotes a conserved family of homologous neuropeptides present in several invertebrate groups, including mollusks, nematodes, insects, and crustaceans (referred to here as pigment-dispersing hormone [PDH]). With regard to their encoding genes (pdf, pdh), insects possess only one, nematodes two, and decapod crustaceans up to three, but their phylogenetic relationship is unknown. To shed light on the origin and diversification of pdf/pdh homologs in Panarthropoda (Onychophora + Tardigrada + Arthropoda) and other molting animals (Ecdysozoa), we analyzed the transcriptomes of five distantly related onychophorans and a representative tardigrade and searched for putative pdf homologs in publicly available genomes of other protostomes. This revealed only one pdf homolog in several mollusk and annelid species; two in Onychophora, Priapulida, and Nematoda; and three in Tardigrada. Phylogenetic analyses suggest that the last common ancestor of Panarthropoda possessed two pdf homologs, one of which was lost in the arthropod or arthropod/tardigrade lineage, followed by subsequent duplications of the remaining homolog in some taxa. Immunolocalization of PDF-like peptides in six onychophoran species, using a broadly reactive antibody that recognizes PDF/PDH peptides in numerous species, revealed an elaborate system of neurons and fibers in their central and peripheral nervous systems. Large varicose projections in the heart suggest that the PDF neuropeptides functioned as both circulating hormones and locally released transmitters in the last common ancestor of Onychophora and Arthropoda. The lack of PDF-like-immunoreactive somata associated with the onychophoran optic ganglion conforms to the hypothesis that onychophoran eyes are homologous to the arthropod median ocelli. © 2015 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abeykoon, A. M. Milinda; Hu, Hefei; Wu, Lijun
2015-01-30
Different protocols for calibrating electron pair distribution function (ePDF) measurements are explored and described for quantitative studies on nanomaterials. It is found that the most accurate approach to determine the camera length is to use a standard calibration sample of Au nanoparticles from the National Institute of Standards and Technology. Different protocols for data collection are also explored, as are possible operational errors, to find the best approaches for accurate data collection for quantitative ePDF studies.
NASA Technical Reports Server (NTRS)
Freilich, M. H.; Pawka, S. S.
1987-01-01
The statistics of Sxy estimates derived from orthogonal-component measurements are examined. Based on results of Goodman (1957), the probability density function (pdf) for Sxy(f) estimates is derived, and a closed-form solution for arbitrary moments of the distribution is obtained. Characteristic functions are used to derive the exact pdf of Sxy(tot). In practice, a simple Gaussian approximation is found to be highly accurate even for relatively few degrees of freedom. Implications for experiment design are discussed, and a maximum-likelihood estimator for a posteriori estimation is outlined.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manshour, Pouya; Ghasemi, Fatemeh; Sahimi, Muhammad
High-quality measurements of seismic activities around the world provide a wealth of data and information that are relevant to understanding when earthquakes may occur. If viewed as complex stochastic time series, such data may be analyzed by methods that provide deeper insights into their nature, hence leading to better understanding of the data and their possible implications for earthquakes. In this paper, we provide further evidence for our recent proposal [P. Manshour et al., Phys. Rev. Lett. 102, 014101 (2009)] for the existence of a transition in the shape of the probability density function (PDF) of the successive detrended increments of the stochastic fluctuations of Earth's vertical velocity V_z, collected by broadband stations before moderate and large earthquakes. To demonstrate the transition, we carried out extensive analysis of the data for V_z for 12 earthquakes in several regions around the world, including the recent catastrophic one in Haiti. The analysis supports the hypothesis that before and near the time of an earthquake, the shape of the PDF undergoes significant and discernible changes, which can be characterized quantitatively. The typical time over which the PDF undergoes the transition is about 5-10 h prior to a moderate or large earthquake.
Stratified turbulent Bunsen flames: flame surface analysis and flame surface density modelling
NASA Astrophysics Data System (ADS)
Ramaekers, W. J. S.; van Oijen, J. A.; de Goey, L. P. H.
2012-12-01
In this paper it is investigated whether the Flame Surface Density (FSD) model, developed for turbulent premixed combustion, is also applicable to stratified flames. Direct Numerical Simulations (DNS) of turbulent stratified Bunsen flames have been carried out, using the Flamelet Generated Manifold (FGM) reduction method for reaction kinetics. Before examining the suitability of the FSD model, flame surfaces are characterized in terms of thickness, curvature and stratification. All flames are in the Thin Reaction Zones regime, and the maximum equivalence ratio range covers 0.1⩽φ⩽1.3. For all flames, local flame thicknesses correspond very well to those observed in stretchless, steady premixed flamelets. Extracted curvature radii and mixing length scales are significantly larger than the flame thickness, implying that the stratified flames all burn in a premixed mode. The remaining challenge is accounting for the large variation in (subfilter) mass burning rate. In this contribution, the FSD model is proven to be applicable for Large Eddy Simulations (LES) of stratified flames for the equivalence ratio range 0.1⩽φ⩽1.3. Subfilter mass burning rate variations are taken into account by a subfilter Probability Density Function (PDF) for the mixture fraction, on which the mass burning rate directly depends. A priori analysis points out that for small stratifications (0.4⩽φ⩽1.0), replacing the subfilter PDF (obtained from DNS data) by the corresponding Dirac function is appropriate. Integration of the Dirac function with the mass burning rate m=m(φ) can then adequately model the filtered mass burning rate obtained from filtered DNS data. For a larger stratification (0.1⩽φ⩽1.3), and filter widths up to ten flame thicknesses, a β-function for the subfilter PDF yields substantially better predictions than a Dirac function.
Finally, inclusion of a simple algebraic model for the FSD resulted only in small additional deviations from DNS data, thereby rendering this approach promising for application in LES.
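The presumed-β-PDF closure discussed above can be sketched as follows; the burning-rate curve m(x) is a hypothetical placeholder, and the composition variable is normalized to [0, 1]. The β shape parameters are fixed by the subfilter mean and variance, and the filtered rate is the integral of m against the presumed PDF (versus the Dirac approximation, which simply evaluates m at the mean):

```python
import math

def beta_pdf(x, a, b):
    """Beta density on (0, 1), normalized with the gamma function."""
    norm = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return norm * x ** (a - 1.0) * (1.0 - x) ** (b - 1.0)

def beta_params(mean, var):
    """Shape parameters of the beta PDF matching the given subfilter
    mean and variance (requires var < mean*(1-mean))."""
    s = mean * (1.0 - mean) / var - 1.0
    return mean * s, (1.0 - mean) * s

def filtered_rate(m, mean, var, n=2000):
    """Midpoint-rule integral of m(x) against the presumed beta subfilter PDF."""
    a, b = beta_params(mean, var)
    h = 1.0 / n
    return sum(m(h * (i + 0.5)) * beta_pdf(h * (i + 0.5), a, b) for i in range(n)) * h

# Hypothetical burning-rate curve, nonlinear in the composition variable.
m = lambda x: x ** 2 * (1.0 - x)
```

For a linear m the β average coincides with the Dirac value; the difference for nonlinear m (here filtered_rate gives 0.1 versus m(0.5) = 0.125 for mean 0.5, variance 0.05) is precisely the subfilter variation the β closure captures.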
Relative frequencies of constrained events in stochastic processes: An analytical approach.
Rusconi, S; Akhmatskaya, E; Sokolovski, D; Ballard, N; de la Cal, J C
2015-10-01
The stochastic simulation algorithm (SSA) and the corresponding Monte Carlo (MC) method are among the most common approaches for studying stochastic processes. They rely on knowledge of interevent probability density functions (PDFs) and on information about dependencies between all possible events. In many real-life applications, analytical representations of a PDF are difficult to specify in advance. Knowing the shapes of the PDFs, and using experimental data, different optimization schemes can be applied in order to evaluate the probability density functions and, therefore, the properties of the studied system. Such methods, however, are computationally demanding and often not feasible. We show that, in the case where experimentally accessed properties are directly related to the frequencies of the events involved, it may be possible to replace the heavy Monte Carlo core of the optimization schemes with an analytical solution. Such a replacement not only provides a more accurate estimation of the properties of the process, but also reduces the simulation time by a factor of the order of the sample size (at least ≈10^4). The proposed analytical approach is valid for any choice of PDF. The accuracy, computational efficiency, and advantages of the method over MC procedures are demonstrated in the exactly solvable case and in the evaluation of branching fractions in controlled radical polymerization (CRP) of acrylic monomers. This polymerization can be modeled by a constrained stochastic process. Constrained systems are quite common, which makes the method useful for various applications.
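A minimal version of an exactly solvable case can be sketched with two competing events whose interevent PDFs are exponential: the relative frequency of each event then has a closed form that can replace the MC estimate entirely. This is a deliberate simplification for illustration, not the authors' CRP model:

```python
import random

def analytic_fraction(lam1, lam2):
    """Probability that event 1 fires first when both interevent PDFs are
    exponential with rates lam1, lam2 (the exactly solvable case)."""
    return lam1 / (lam1 + lam2)

def mc_fraction(lam1, lam2, n=100_000, seed=1):
    """SSA-style Monte Carlo estimate of the same relative frequency:
    draw both waiting times, count how often event 1 comes first."""
    rng = random.Random(seed)
    wins = sum(rng.expovariate(lam1) < rng.expovariate(lam2) for _ in range(n))
    return wins / n
```

The MC estimate converges to the analytic value at rate ~n^(-1/2), which is exactly the cost the analytical replacement avoids.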
The study of PDF turbulence models in combustion
NASA Technical Reports Server (NTRS)
Hsu, Andrew T.
1991-01-01
In combustion computations, it is known that the predictions of chemical reaction rates are poor if conventional turbulence models are used. The probability density function (pdf) method seems to be the only alternative that uses local instantaneous values of the temperature, density, etc., in predicting chemical reaction rates, and thus is the only viable approach for more accurate turbulent combustion calculations. The fact that the pdf equation has a very large dimensionality renders finite difference schemes extremely demanding on computer memory and thus impractical. A logical alternative is the Monte Carlo scheme. Since CFD has a certain maturity as well as acceptance, the use of a combined CFD and Monte Carlo scheme seems more beneficial. Therefore, a scheme is chosen that uses a conventional CFD flow solver to calculate the flow field properties such as velocity, pressure, etc., while the chemical reaction part is solved using a Monte Carlo scheme. The discharge of a heated turbulent plane jet into quiescent air was studied. Experimental data for this problem show that when the temperature difference between the jet and the surrounding air is small, the buoyancy effect can be neglected and the temperature can be treated as a passive scalar. The fact that jet flows have a self-similar solution lends convenience to the modeling study. Furthermore, the existence of experimental data for turbulent shear stress and temperature variance makes the case ideal for testing pdf models wherein these values can be directly evaluated.
NASA Technical Reports Server (NTRS)
Mashiku, Alinda; Garrison, James L.; Carpenter, J. Russell
2012-01-01
The tracking of space objects requires frequent and accurate monitoring for collision avoidance. As even collision events with very low probability are important, accurate prediction of collisions requires the representation of the full probability density function (PDF) of the random orbit state. By representing the full PDF of the orbit state for orbit maintenance and collision avoidance, we can take advantage of the statistical information present in heavy-tailed distributions, more accurately representing the orbit states with low probability. The classical methods of orbit determination (i.e., the Kalman Filter and its derivatives) provide state estimates based only on the second moments of the state and measurement errors, which are captured by assuming a Gaussian distribution. Although the measurement errors can be accurately assumed to have a Gaussian distribution, errors with a non-Gaussian distribution could arise during propagation between observations. Moreover, unmodeled dynamics in the orbit model could introduce non-Gaussian errors into the process noise. A Particle Filter (PF) is proposed as a nonlinear filtering technique that is capable of propagating and estimating a more complete representation of the state distribution as an accurate approximation of the full PDF. The PF uses Monte Carlo runs to generate particles that approximate the full PDF representation. The PF is applied to the estimation and propagation of a highly eccentric orbit, and the results are compared to the Extended Kalman Filter and Splitting Gaussian Mixture algorithms to demonstrate its proficiency.
An Integrated Nonlinear Analysis library - (INA) for solar system plasma turbulence
NASA Astrophysics Data System (ADS)
Munteanu, Costel; Kovacs, Peter; Echim, Marius; Koppan, Andras
2014-05-01
We present an integrated software library dedicated to the analysis of time series recorded in space and adapted to investigate turbulence, intermittency and multifractals. The library is written in MATLAB and provides a graphical user interface (GUI) customized for the analysis of space physics data available online, such as: Coordinated Data Analysis Web (CDAWeb), Automated Multi Dataset Analysis system (AMDA), Planetary Science Archive (PSA), World Data Center Kyoto (WDC), Ulysses Final Archive (UFA) and Cluster Active Archive (CAA). Three main modules are already implemented in INA: the Power Spectral Density (PSD) Analysis, the Wavelet and Intermittency Analysis, and the Probability Density Functions (PDF) Analysis. The layered structure of the software allows the user to easily switch between different modules/methods while retaining the same time interval for the analysis. The wavelet analysis module includes algorithms to compute and analyse the PSD, the Scalogram, the Local Intermittency Measure (LIM) or the Flatness parameter. The PDF analysis module includes algorithms for computing the PDFs for a range of scales and parameters fully customizable by the user; it also computes the Flatness parameter and enables fast comparison with standard PDF profiles like, for instance, the Gaussian PDF. The library has already been tested on Cluster and Venus Express data and we will show relevant examples. Research supported by the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement no 313038/STORM, and a grant of the Romanian Ministry of National Education, CNCS UEFISCDI, project number PN-II-ID PCE-2012-4-0418.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smallwood, D.O.
It is recognized that some dynamic and noise environments are characterized by time histories which are not Gaussian. An example is high-intensity acoustic noise. Another example is some transportation vibration. A better simulation of these environments can be generated if a zero-mean non-Gaussian time history can be reproduced with a specified auto (or power) spectral density (ASD or PSD) and a specified probability density function (pdf). After the required time history is synthesized, the waveform can be used for simulation purposes. For example, modern waveform reproduction techniques can be used to reproduce the waveform on electrodynamic or electrohydraulic shakers. Or the waveforms can be used in digital simulations. A method is presented for the generation of realizations of zero-mean non-Gaussian random time histories with a specified ASD and pdf. First, a Gaussian time history with the specified auto (or power) spectral density (ASD) is generated. A monotonic nonlinear function relating the Gaussian waveform to the desired realization is then established, based on the Cumulative Distribution Function (CDF) of the desired waveform and the known CDF of a Gaussian waveform. The established function is used to transform the Gaussian waveform into a realization of the desired waveform. Since the transformation preserves the zero crossings and peaks of the original Gaussian waveform, and does not introduce any substantial discontinuities, the ASD is not substantially changed. Several methods are available to generate a realization of a Gaussian-distributed waveform with a known ASD; the method of Smallwood and Paez (1993) is an example. However, the generation of random noise with a specified ASD but with a non-Gaussian distribution is less well known.
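The monotonic CDF-to-CDF transform described above can be sketched as follows, assuming a zero-mean Laplace target distribution purely for illustration (the method itself accepts any target CDF):

```python
import math

def gaussian_to_target(g, inv_cdf):
    """Map a zero-mean, unit-variance Gaussian sample to a target distribution
    via the monotonic transform x = F_target^{-1}(Phi(g)). Monotonicity
    preserves the zero crossings and peak ordering of the Gaussian waveform."""
    u = 0.5 * (1.0 + math.erf(g / math.sqrt(2.0)))   # Phi(g), in (0, 1)
    return inv_cdf(u)

def laplace_inv_cdf(u, b=1.0):
    """Inverse CDF of a zero-mean Laplace distribution with scale b
    (an example heavy-tailed target)."""
    if u < 0.5:
        return b * math.log(2.0 * u)
    return -b * math.log(2.0 * (1.0 - u))
```

Applying `gaussian_to_target` sample-by-sample to a Gaussian time history with the specified ASD yields the non-Gaussian realization; because the map is odd and monotone here, a zero-mean symmetric target stays zero-mean.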
Ensemble Kalman filtering in presence of inequality constraints
NASA Astrophysics Data System (ADS)
van Leeuwen, P. J.
2009-04-01
Kalman filtering in the presence of constraints is an active area of research. Given the Gaussian assumption for the probability-density functions, it appears hard to bring extra constraints into the formalism. On the other hand, in geophysical systems we often encounter constraints related to e.g. the underlying physics or chemistry, which are violated by the Gaussian assumption. For instance, concentrations are always non-negative, model layers have non-negative thickness, and sea-ice concentration is between 0 and 1. Several methods to bring inequality constraints into the Kalman-filter formalism have been proposed. One of them is probability density function (pdf) truncation, in which the Gaussian mass from the non-allowed part of the variables is just equally distributed over the pdf where the variables are allowed, as proposed by Shimada et al. 1998. However, a problem with this method is that the probability that e.g. the sea-ice concentration is zero, is zero! The new method proposed here does not have this drawback. It assumes that the probability-density function is a truncated Gaussian, but the truncated mass is not distributed equally over all allowed values of the variables; instead, it is put into a delta distribution at the truncation point. This delta distribution can easily be handled within Bayes' theorem, leading to posterior probability density functions that are also truncated Gaussians with delta distributions at the truncation location. In this way a much better representation of the system is obtained, while still keeping most of the benefits of the Kalman-filter formalism. The full Kalman filter is prohibitively expensive in large-scale systems, but efficient implementation is possible in ensemble variants of the Kalman filter. Applications to low-dimensional systems and large-scale systems will be discussed.
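Low-order moments of such a truncated-Gaussian-plus-delta pdf have closed forms. The sketch below, for a scalar variable constrained to be non-negative, gives the mean and the finite probability assigned to the boundary, the feature that plain pdf truncation lacks; it is an illustration of the distributional idea, not the full filter:

```python
import math

def phi(z):
    """Standard normal pdf."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def Phi(z):
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def delta_truncated_mean(mu, sigma):
    """Mean of the pdf obtained by collapsing all Gaussian mass below zero
    into a delta at zero: E[max(X, 0)] = mu*Phi(mu/sigma) + sigma*phi(mu/sigma)."""
    z = mu / sigma
    return mu * Phi(z) + sigma * phi(z)

def point_mass_at_zero(mu, sigma):
    """Probability sitting in the delta at the constraint boundary,
    e.g. the (nonzero!) probability of exactly zero sea-ice concentration."""
    return Phi(-mu / sigma)
```

A numerical integral of x·N(x; μ, σ) over x ≥ 0 (plus zero times the point mass) reproduces `delta_truncated_mean`, confirming the closed form.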
32 CFR 806b.29 - Sending personal information over electronic mail.
Code of Federal Regulations, 2010 CFR
2010-07-01
... methods may include encryption or password protecting the information in a separate Word document. When....mil/whs/directives/corres/pdf/54007r_0998/p54007r.pdf. (c) Do not disclose personal information to...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strom, Daniel J.; Joyce, Kevin E.; Maclellan, Jay A.
2012-04-17
In making low-level radioactivity measurements of populations, it is commonly observed that a substantial portion of net results are negative. Furthermore, the observed variance of the measurement results arises from a combination of measurement uncertainty and population variability. This paper presents a method for disaggregating measurement uncertainty from population variability to produce a probability density function (PDF) of possibly true results. To do this, simple, justifiable, and reasonable assumptions are made about the relationship of the measurements to the measurands (the 'true values'). The measurements are assumed to be unbiased, that is, their average value is the average of the measurands. Using traditional estimates of each measurement's uncertainty to disaggregate population variability from measurement uncertainty, a PDF of measurands for the population is produced. Then, using Bayes's theorem, the same assumptions, and all the data from the population of individuals, a prior PDF is computed for each individual's measurand. These PDFs are non-negative, and their average is equal to the average of the measurement results for the population. The uncertainty in these Bayesian posterior PDFs is all Berkson with no remaining classical component. The methods are applied to baseline bioassay data from the Hanford site. The data include 90Sr urinalysis measurements on 128 people, 137Cs in vivo measurements on 5,337 people, and 239Pu urinalysis measurements on 3,270 people. The method produces excellent results for the 90Sr and 137Cs measurements, since there are nonzero concentrations of these global fallout radionuclides in people who have not been occupationally exposed. The method does not work for the 239Pu measurements in non-occupationally exposed people because the population average is essentially zero.
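A grid-based sketch of the Bayesian step, under strong simplifications (a scalar measurand and a hypothetical exponential population prior standing in for the population PDF derived in the paper), might look like the following. The point it illustrates is that even a negative net measurement yields a non-negative posterior PDF for the measurand:

```python
import math

def gaussian(x, mu, sigma):
    """Gaussian measurement model N(x; mu, sigma^2)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def posterior_grid(x_meas, sigma_meas, prior, thetas):
    """Grid-based Bayes: posterior(theta) proportional to
    prior(theta) * N(x_meas; theta, sigma_meas), normalized over the grid."""
    w = [prior(t) * gaussian(x_meas, t, sigma_meas) for t in thetas]
    total = sum(w)
    return [v / total for v in w]

# Illustrative exponential population prior on the non-negative measurand.
prior = lambda t: math.exp(-t / 2.0) / 2.0
thetas = [0.01 * i for i in range(1000)]          # grid on [0, 10)
post = posterior_grid(-0.5, 1.0, prior, thetas)   # a negative net result
```

The posterior mean is positive despite the negative measurement, and smaller than the prior mean, as the data pull the estimate toward zero.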
Newe, Axel; Becker, Linda; Schenk, Andrea
2014-01-01
The Portable Document Format (PDF) is the de facto standard for the exchange of electronic documents. It is platform-independent, suitable for the exchange of medical data, and allows for the embedding of three-dimensional (3D) surface mesh models. In this article, we present the first clinical routine application of interactive 3D surface mesh models which have been integrated into PDF files for the presentation and the exchange of Computer Assisted Surgery Planning (CASP) results in liver surgery. We aimed to prove the feasibility of applying 3D PDF in medical reporting and investigated the user experience with this new technology. We developed an interactive 3D PDF report document format and implemented a software tool to create these reports automatically. After more than 1000 liver CASP cases had been reported in clinical routine using our 3D PDF report, an international user survey was carried out online to evaluate the user experience. Our solution enables the user to interactively explore the anatomical configuration and to have different analyses and various resection proposals displayed within a 3D PDF document covering only a single page that acts more like a software application than like a typical PDF file ("PDF App"). The new 3D PDF report offers many advantages over the previous solutions. According to the results of the online survey, the users have assessed the pragmatic quality (functionality, usability, perspicuity, efficiency) as well as the hedonic quality (attractiveness, novelty) very positively. The usage of 3D PDF for reporting and sharing CASP results is feasible and well accepted by the target audience. Using interactive PDF with embedded 3D models is an enabler for presenting and exchanging complex medical information in an easy and platform-independent way. Medical staff as well as patients can benefit from the possibilities provided by 3D PDF.
Our results open the door for a wider use of this new technology, since the basic idea can and should be applied for many medical disciplines and use cases.
Dynamical Epidemic Suppression Using Stochastic Prediction and Control
2004-10-28
initial probability density function (PDF), p: D ⊂ R² → R, is defined by the stochastic Frobenius–Perron operator. ... For deterministic systems, normal methods of ... induced chaos. To analyze the qualitative change, we apply the technique of the stochastic Frobenius–Perron operator [L. Billings et al., Phys. Rev. Lett. ...] ... transition matrix describing the probability of transport from one region of phase space to another, which approximates the stochastic Frobenius–Perron
Ghosh, Erina; Shmuylovich, Leonid; Kovacs, Sandor J
2009-01-01
The filling (diastolic) function of the human left ventricle is most commonly assessed by echocardiography, a non-invasive imaging modality. To quantify diastolic function (DF), empiric indices are obtained from the features (height, duration, area) of the transmitral flow velocity contour, obtained by echocardiography. The parameterized diastolic filling (PDF) formalism is a kinematic model developed by Kovács et al. which incorporates the suction pump attribute of the left ventricle and facilitates DF quantitation by analysis of echocardiographic transmitral flow velocity contours in terms of stiffness (k), relaxation (c) and load (x0). A complementary approach developed by Gharib et al. uses fluid mechanics and characterizes DF in terms of the vortex formation time (T*) derived from the streamline features formed by the jet of blood aspirated into the ventricle. Both of these methods characterize DF using a causality-based approach. In this paper, we derive T*'s kinematic analogue T*_kinematic in terms of k, c and x0. A comparison between T*_kinematic and T*_fluid-mechanic, obtained from the averaged transmitral velocity and mitral annulus diameter, is presented. We found that T* calculated by the two methods were comparable and that T*_kinematic correlated with the peak LV recoil driving force kx0.
Nuclear PDF for neutrino and charged lepton data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kovarik, K.
2011-10-06
Neutrino Deep Inelastic Scattering (DIS) on nuclei is an essential process to constrain the strange quark parton distribution functions (PDF) in the proton. The critical component on the way to using the neutrino DIS data in a proton PDF analysis is understanding the nuclear effects in parton distribution functions. We parametrize these effects by nuclear parton distribution functions (NPDF). Here we compare results from two analyses of NPDF, both done at next-to-leading order in QCD. The first uses neutral current charged-lepton (l± A) Deeply Inelastic Scattering (DIS) and Drell-Yan data for several nuclear targets and the second uses neutrino-nucleon DIS data. We compare the nuclear correction factors (F2^Fe/F2^D) for the charged-lepton data with other results from the literature. In particular, we compare and contrast fits based upon the charged-lepton DIS data with those using neutrino-nucleon DIS data.
Choi, Charles; Cao, Guan; Tanenhaus, Anne K.; McCarthy, Ellena v.; Jung, Misun; Schleyer, William; Shang, Yuhua; Rosbash, Michael; Yin, Jerry C.P.; Nitabach, Michael N.
2012-01-01
Drosophila melanogaster flies concentrate behavioral activity around dawn and dusk. This organization of daily activity is controlled by central circadian clock neurons, including the lateral ventral pacemaker neurons (LNvs) that secrete the neuropeptide PDF (Pigment Dispersing Factor). Previous studies have demonstrated the requirement for PDF signaling to PDF receptor (PDFR)-expressing dorsal clock neurons in organizing circadian activity. While LNvs also express functional PDFR, the role of these autoreceptors has remained enigmatic. Here we show that (1) PDFR activation in LNvs shifts the balance of circadian activity from evening to morning, similar to behavioral responses to summer-like environmental conditions, and (2) this shift is mediated by stimulation of the Gαs-cAMP pathway and a consequent change in PDF/neurotransmitter co-release from the LNvs. These results suggest a novel mechanism for environmental control of the allocation of circadian activity and provide new general insight into the role of neuropeptide autoreceptors in behavioral control circuits. PMID:22938867
PDF neuron firing phase-shifts key circadian activity neurons in Drosophila
Guo, Fang; Cerullo, Isadora; Chen, Xiao; Rosbash, Michael
2014-01-01
Our experiments address two long-standing models for the function of the Drosophila brain circadian network: a dual oscillator model, which emphasizes the primacy of PDF-containing neurons, and a cell-autonomous model for circadian phase adjustment. We identify five different circadian (E) neurons that are a major source of rhythmicity and locomotor activity. Brief firing of PDF cells at different times of day generates a phase response curve (PRC), which mimics a light-mediated PRC and requires PDF receptor expression in the five E neurons. Firing also resembles light by causing TIM degradation in downstream neurons. Unlike light however, firing-mediated phase-shifting is CRY-independent and exploits the E3 ligase component CUL-3 in the early night to degrade TIM. Our results suggest that PDF neurons integrate light information and then modulate the phase of E cell oscillations and behavioral rhythms. The results also explain how fly brain rhythms persist in constant darkness and without CRY. DOI: http://dx.doi.org/10.7554/eLife.02780.001 PMID:24939987
Matching the quasiparton distribution in a momentum subtraction scheme
NASA Astrophysics Data System (ADS)
Stewart, Iain W.; Zhao, Yong
2018-03-01
The quasiparton distribution is a spatial correlation of quarks or gluons along the z direction in a moving nucleon which enables direct lattice calculations of parton distribution functions. It can be defined with a nonperturbative renormalization in a regularization independent momentum subtraction scheme (RI/MOM), which can then be perturbatively related to the collinear parton distribution in the MS-bar scheme. Here we carry out a direct matching from the RI/MOM scheme for the quasi-PDF to the MS-bar PDF, determining the non-singlet quark matching coefficient at next-to-leading order in perturbation theory. We find that the RI/MOM matching coefficient is insensitive to the ultraviolet region of the convolution integral, exhibits improved perturbative convergence when converting between the quasi-PDF and PDF, and is consistent with a quasi-PDF that vanishes in the unphysical region as the proton momentum Pz → ∞, unlike other schemes. This direct approach therefore has the potential to improve the accuracy for converting quasidistribution lattice calculations to collinear distributions.
Constrained Kalman Filtering Via Density Function Truncation for Turbofan Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Dan; Simon, Donald L.
2006-01-01
Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops an analytic method of incorporating state variable inequality constraints in the Kalman filter. The resultant filter truncates the PDF (probability density function) of the Kalman filter estimate at the known constraints and then computes the constrained filter estimate as the mean of the truncated PDF. The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is demonstrated via simulation results obtained from a turbofan engine model. The turbofan engine model contains 3 state variables, 11 measurements, and 10 component health parameters. It is also shown that the truncated Kalman filter may be a more accurate way of incorporating inequality constraints than other constrained filters (e.g., the projection approach to constrained filtering).
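The truncation operation at the heart of this filter has a closed form in the scalar case: the mean of a Gaussian restricted to an interval is the unconstrained mean shifted by a ratio of standard normal pdf and cdf terms. A small self-contained sketch follows; the numbers are illustrative, not values from the turbofan model.

```python
import math

def truncated_normal_mean(mu, sigma, lo, hi):
    """Mean of a N(mu, sigma^2) PDF truncated to the interval [lo, hi].

    This is the core operation of the constrained filter described above:
    the unconstrained Kalman estimate (mu, sigma) is replaced by the mean
    of its PDF truncated at the known state constraints.
    """
    phi = lambda z: math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # std normal pdf
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # std normal cdf
    a, b = (lo - mu) / sigma, (hi - mu) / sigma
    Z = Phi(b) - Phi(a)                     # probability mass inside the constraints
    return mu + sigma * (phi(a) - phi(b)) / Z

# A state estimate of -0.5 with unit variance, constrained to be
# non-negative, is pulled to a positive constrained estimate.
constrained = truncated_normal_mean(-0.5, 1.0, 0.0, 1e9)
```

In the multivariate setting the paper addresses, the same one-dimensional truncation is applied after transforming the state estimate so the constraints decouple.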
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-11
... massive emails, word processing documents, PDF files, spreadsheets, presentations, database entries, and....pdf . PURPOSES: OGC-EDMS provides OGC with a method to initiate, track, and manage the collection...
Overexpression of peptide deformylase in breast, colon, and lung cancers
2013-01-01
Background Human mitochondrial peptide deformylase (PDF) has been proposed as a novel cancer therapeutic target. However, very little is known about its expression and regulation in human tissues. The purpose of this study was to characterize the expression pattern of PDF in cancerous tissues and to identify mechanisms that regulate its expression. Methods The mRNA expression levels of PDF and methionine aminopeptidase 1D (MAP1D), an enzyme involved in a related pathway with PDF, were determined using tissue panels containing cDNA from patients with various types of cancer (breast, colon, kidney, liver, lung, ovarian, prostate, or thyroid) and human cell lines. Protein levels of PDF were also determined in 2 colon cancer patients via western blotting. Colon cancer cells were treated with inhibitors of ERK, Akt, and mTOR signaling pathways and the resulting effects on PDF and MAP1D mRNA levels were determined by qPCR for colon and lung cancer cell lines. Finally, the effects of a PDF inhibitor, actinonin, on the proliferation of breast, colon, and prostate cell lines were determined using the CyQUANT assay. Results PDF and MAP1D mRNA levels were elevated in cancer cell lines compared to non-cancer lines. PDF mRNA levels were significantly increased in breast, colon, and lung cancer samples while MAP1D mRNA levels were increased in just colon cancers. The expression of PDF and MAP1D varied with stage in these cancers. Further, PDF protein expression was elevated in colon cancer tissue samples. Inhibition of the MEK/ERK, but not PI3K or mTOR, pathway reduced the expression of PDF and MAP1D in both colon and lung cancer cell lines. Further, inhibition of PDF with actinonin resulted in greater reduction of breast, colon, and prostate cancer cell proliferation than non-cancer cell lines. 
Conclusions This is the first report showing that PDF is over-expressed in breast, colon, and lung cancers, and the first evidence that the MEK/ERK pathway plays a role in regulating the expression of PDF and MAP1D. The over-expression of PDF in several cancers and the inhibition of cancer cell growth by a PDF inhibitor suggest this enzyme may act as an oncogene to promote cancer cell proliferation. PMID:23815882
DOE Office of Scientific and Technical Information (OSTI.GOV)
Margolin, L. G.
2018-03-19
The applicability of Navier–Stokes equations is limited to near-equilibrium flows in which the gradients of density, velocity and energy are small. Here I propose an extension of the Chapman–Enskog approximation in which the velocity probability distribution function (PDF) is averaged in the coordinate phase space as well as the velocity phase space. I derive a PDF that depends on the gradients and represents a first-order generalization of local thermodynamic equilibrium. I then integrate this PDF to derive a hydrodynamic model. Finally, I discuss the properties of that model and its relation to the discrete equations of computational fluid dynamics.
Eulerian formulation of the interacting particle representation model of homogeneous turbulence
Campos, Alejandro; Duraisamy, Karthik; Iaccarino, Gianluca
2016-10-21
The Interacting Particle Representation Model (IPRM) of homogeneous turbulence incorporates information about the morphology of turbulent structures within the confines of a one-point model. In the original formulation [Kassinos & Reynolds, Center for Turbulence Research: Annual Research Briefs, 31–51 (1996)], the IPRM was developed in a Lagrangian setting by evolving second moments of velocity conditional on a given gradient vector. In the present work, the IPRM is re-formulated in an Eulerian framework and evolution equations are developed for the marginal PDFs. Eulerian methods avoid the issues associated with statistical estimators used by Lagrangian approaches, such as slow convergence. A specific emphasis of this work is to use the IPRM to examine the long time evolution of homogeneous turbulence. We first describe the derivation of the marginal PDF in spherical coordinates, which reduces the number of independent variables and the cost associated with Eulerian simulations of PDF models. Next, a numerical method based on radial basis functions over a spherical domain is adapted to the IPRM. Finally, results obtained with the new Eulerian solution method are thoroughly analyzed. The sensitivity of the Eulerian simulations to parameters of the numerical scheme, such as the size of the time step and the shape parameter of the radial basis functions, is examined. A comparison between Eulerian and Lagrangian simulations is performed to discern the capabilities of each of the methods. Finally, a linear stability analysis based on the eigenvalues of the discrete differential operators is carried out for both the new Eulerian solution method and the original Lagrangian approach.
Interference of peritoneal dialysis fluids with cell cycle mechanisms.
Büchel, Janine; Bartosova, Maria; Eich, Gwendolyn; Wittenberger, Timo; Klein-Hitpass, Ludger; Steppan, Sonja; Hackert, Thilo; Schaefer, Franz; Passlick-Deetjen, Jutta; Schmitt, Claus P
2015-01-01
Peritoneal dialysis fluids (PDF) differ with respect to osmotic and buffer compound, and pH and glucose degradation products (GDP) content. The impact on peritoneal membrane integrity is still insufficiently described. We assessed global genomic effects of PDF in primary human peritoneal mesothelial cells (PMC) by whole genome analyses, quantitative real-time polymerase chain reaction (RT-PCR) and functional measurements. PMC isolated from omentum of non-uremic patients were incubated with conventional single chamber PDF (CPDF), lactate- (LPDF), bicarbonate- (BPDF) and bicarbonate/lactate-buffered double-chamber PDF (BLPDF), icodextrin (IPDF) and amino acid PDF (APDF), diluted 1:1 with medium. Affymetrix GeneChip U133Plus2.0 (Affymetrix, CA, USA) and quantitative RT-PCR were applied; cell viability was assessed by proliferation assays. The number of differentially expressed genes compared to medium was 464 with APDF, 208 with CPDF, 169 with IPDF, 71 with LPDF, 45 with BPDF and 42 with BLPDF. Out of these genes 74%, 73%, 79%, 72%, 47% and 57% were downregulated. Gene Ontology (GO) term annotations mainly revealed associations with cell cycle (p = 10^-35), cell division, mitosis, and DNA replication. One hundred and eighteen out of 249 probe sets detecting genes involved in cell cycle/division were suppressed, with APDF-treated PMC being affected the most regarding absolute number and degree, followed by CPDF and IPDF. Bicarbonate-containing PDF and BLPDF-treated PMC were affected the least. Quantitative RT-PCR measurements confirmed microarray findings for key cell cycle genes (CDK1/CCNB1/CCNE2/AURKA/KIF11/KIF14). Suppression was lowest for BPDF and BLPDF, which upregulated CCNE2 and SMC4. All PDF upregulated 3 out of 4 assessed cell cycle repressors (p53/BAX/p21). Cell viability scores confirmed gene expression results, being 79% of medium for LPDF, 101% for BLPDF, 51% for CPDF and 23% for IPDF.
Cells incubated with amino acid-containing PDF were as viable (84%) as those incubated with BPDF (86%). In conclusion, PD solutions differ substantially with regard to their gene-regulating profile and impact on vital functions of PMC, i.e. on cells known to be essential for peritoneal membrane homeostasis. Copyright © 2015 International Society for Peritoneal Dialysis.
Uncertainty quantification of voice signal production mechanical model and experimental updating
NASA Astrophysics Data System (ADS)
Cataldo, E.; Soize, C.; Sampaio, R.
2013-11-01
The aim of this paper is to analyze the uncertainty quantification in a voice production mechanical model and update the probability density function corresponding to the tension parameter using the Bayes method and experimental data. Three parameters are considered uncertain in the voice production mechanical model used: the tension parameter, the neutral glottal area and the subglottal pressure. The tension parameter of the vocal folds is mainly responsible for the changing of the fundamental frequency of a voice signal, generated by a mechanical/mathematical model for producing voiced sounds. The three uncertain parameters are modeled by random variables. The probability density function related to the tension parameter is considered uniform and the probability density functions related to the neutral glottal area and the subglottal pressure are constructed using the Maximum Entropy Principle. The output of the stochastic computational model is the random voice signal, and the Monte Carlo method is used to solve the stochastic equations, allowing realizations of the random voice signals to be generated. For each realization of the random voice signal, the corresponding realization of the random fundamental frequency is calculated and the prior pdf of this random fundamental frequency is then estimated. Experimental data are available for the fundamental frequency, and the posterior probability density function of the random tension parameter is then estimated using the Bayes method. In addition, an application is performed considering a case with a pathology in the vocal folds. The strategy developed here is important mainly for two reasons. The first is the possibility of updating the probability density function of a parameter, the tension parameter of the vocal folds, which cannot be measured directly; the second is the construction of the likelihood function. In general, the likelihood is predefined using a known pdf.
Here, it is constructed in a new and different manner, using the system under consideration itself.
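The updating procedure described here can be sketched as a grid-based Bayes computation: a uniform prior on a tension-like parameter, a forward model mapping it to a fundamental frequency, and measured frequencies supplying the likelihood. The linear forward model, noise level, and data below are all invented placeholders; the paper's forward model is the full mechanical voice-production model.

```python
import numpy as np

# Hypothetical forward model: tension-like parameter q -> fundamental
# frequency f0(q) in Hz, with an assumed Gaussian measurement scatter.
model_f0 = lambda q: 100.0 + 50.0 * q      # stand-in for the mechanical model
noise_sd = 5.0

q_grid = np.linspace(0.0, 1.0, 1001)
dq = q_grid[1] - q_grid[0]
prior = np.ones_like(q_grid)               # uniform prior on the tension parameter
prior /= prior.sum() * dq

observed_f0 = np.array([128.0, 131.0, 126.5])   # synthetic "experimental" data

# Likelihood of the data on the grid, then Bayes: posterior ∝ likelihood × prior.
resid = observed_f0[:, None] - model_f0(q_grid)[None, :]
log_like = -0.5 * np.sum((resid / noise_sd) ** 2, axis=0)
posterior = prior * np.exp(log_like - log_like.max())
posterior /= posterior.sum() * dq          # renormalize so it integrates to one

q_map = q_grid[np.argmax(posterior)]       # maximum a posteriori tension value
```

The posterior peaks where the model frequency matches the average of the observations, here at q ≈ 0.57 since (128.5 − 100)/50 = 0.57.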
Pair distribution function analysis applied to decahedral gold nanoparticles
NASA Astrophysics Data System (ADS)
Nakotte, H.; Silkwood, C.; Page, K.; Wang, H.-W.; Olds, D.; Kiefer, B.; Manna, S.; Karpov, D.; Fohtung, E.; Fullerton, E. E.
2017-11-01
The five-fold symmetry of face-centered cubic (fcc) derived nanoparticles is inconsistent with the translational symmetry of a Bravais lattice and generally explained by multiple twinning of a tetrahedral subunit about a (joint) symmetry axis, with or without structural modification to the fcc motif. Unlike in bulk materials, five-fold twinning in cubic nanoparticles is common and strongly affects their structural, chemical, and electronic properties. To test and verify theoretical approaches, it is therefore pertinent that the local structural features of such materials can be fully characterized. The small size of nanoparticles severely limits the application of traditional analysis techniques, such as Bragg diffraction. A complete description of the atomic arrangement in nanoparticles therefore requires a departure from the concept of translational symmetry, and prevents fully evaluating all the structural features experimentally. We describe how recent advances in instrumentation, together with the increasing power of computing, are shaping the development of alternative analysis methods of scattering data for nanostructures. We present the application of Debye scattering and pair distribution function (PDF) analysis towards modeling of the total scattering data for the example of decahedral gold nanoparticles. PDF measurements provide a statistical description of the pair correlations of atoms within a material, allowing one to evaluate the probability of finding two atoms within a given distance. We explored the sensitivity of existing synchrotron x-ray PDF instruments for distinguishing four different simple models for our gold nanoparticles: a multiply twinned fcc decahedron with either a single gap or multiple distributed gaps, a relaxed body-centered orthorhombic (bco) decahedron, and a hybrid decahedron. The data simulations of the models were then compared with experimental data from synchrotron x-ray total scattering. 
We present our experimentally derived atomistic models of the gold nanoparticles, with surprising results and a perspective on remaining challenges. Our findings provide evidence for the suitability of PDF analysis in the characterization of other nanosized particles that may have commercial applications.
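As a toy illustration of what a PDF measurement encodes, the pair correlations of a model particle can be generated directly from atomic coordinates: every interatomic distance contributes to the pattern. The sketch below uses a small simple-cubic cluster purely as a stand-in (the decahedral gold models discussed above require real structure models), and the spacing value is illustrative.

```python
import numpy as np
from itertools import product

# Hypothetical 4x4x4 simple-cubic cluster as a stand-in for a nanoparticle
# model; 'a' is an assumed nearest-neighbour spacing in angstroms.
a = 2.88
coords = a * np.array(list(product(range(4), repeat=3)), dtype=float)

# All unique pair distances r_ij = |x_i - x_j| (i < j); a PDF is essentially
# a broadened, normalized histogram of exactly these distances.
diff = coords[:, None, :] - coords[None, :, :]
dist = np.sqrt((diff ** 2).sum(axis=-1))
pair_distances = dist[np.triu_indices(len(coords), k=1)]

# Histogram of pair distances: the first peak sits at the nearest-neighbour
# distance a, the next at the face-diagonal distance a*sqrt(2), and so on.
hist, edges = np.histogram(pair_distances, bins=300)
```

Swapping in candidate decahedral models for `coords` and comparing the resulting patterns to data is, in miniature, the model-discrimination exercise described in the text.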
Kabekkodu, Soorya N; Faber, John; Fawcett, Tim
2002-06-01
The International Centre for Diffraction Data (ICDD) is responding to the changing needs in powder diffraction and materials analysis by developing the Powder Diffraction File (PDF) in a very flexible relational database (RDB) format. The PDF now contains 136,895 powder diffraction patterns. In this paper, an attempt is made to give an overview of the PDF-4, search/match methods and the advantages of having the PDF-4 in RDB format. Some case studies have been carried out to search for crystallization trends, properties, frequencies of space groups and prototype structures. These studies give a good understanding of the basic structural aspects of classes of compounds present in the database. The present paper also reports data-mining techniques and demonstrates the power of a relational database over the traditional (flat-file) database structures.
NASA Astrophysics Data System (ADS)
Tremblin, P.; Schneider, N.; Minier, V.; Didelon, P.; Hill, T.; Anderson, L. D.; Motte, F.; Zavagno, A.; André, Ph.; Arzoumanian, D.; Audit, E.; Benedettini, M.; Bontemps, S.; Csengeri, T.; Di Francesco, J.; Giannini, T.; Hennemann, M.; Nguyen Luong, Q.; Marston, A. P.; Peretto, N.; Rivera-Ingraham, A.; Russeil, D.; Rygl, K. L. J.; Spinoglio, L.; White, G. J.
2014-04-01
Aims: Ionization feedback should impact the probability distribution function (PDF) of the column density of cold dust around the ionized gas. We aim to quantify this effect and discuss its potential link to the core and initial mass function (CMF/IMF). Methods: We used Herschel column density maps of several regions observed within the HOBYS key program in a systematic way: M 16, the Rosette and Vela C molecular clouds, and the RCW 120 H ii region. We computed the PDFs in concentric disks around the main ionizing sources, determined their properties, and discuss the effect of ionization pressure on the distribution of the column density. Results: We fitted the column density PDFs of all clouds with two lognormal distributions, since they present a "double-peak" or an enlarged shape in the PDF. Our interpretation is that the lowest part of the column density distribution describes the turbulent molecular gas, while the second peak corresponds to a compression zone induced by the expansion of the ionized gas into the turbulent molecular cloud. Such a double peak is not visible for all clouds associated with ionization fronts, but it depends on the relative importance of ionization pressure and turbulent ram pressure. A power-law tail is present for higher column densities, which are generally ascribed to the effect of gravity. The condensations at the edge of the ionized gas have a steep compressed radial profile, sometimes recognizable in the flattening of the power-law tail. This could lead to an unambiguous criterion that is able to disentangle triggered star formation from pre-existing star formation. Conclusions: In the context of the gravo-turbulent scenario for the origin of the CMF/IMF, the double-peaked or enlarged shape of the PDF may affect the formation of objects at both the low-mass and the high-mass ends of the CMF/IMF. 
In particular, a broader PDF is required by the gravo-turbulent scenario to fit the IMF properly with a reasonable initial Mach number for the molecular cloud. Since other physical processes (e.g., the equation of state and the variations among the core properties) have already been said to broaden the PDF, the relative importance of the different effects remains an open question. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
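The two-lognormal form used in these column-density fits is straightforward to write down and check numerically. A minimal sketch follows; the weights, centers, and widths are invented placeholders, not the fitted HOBYS values.

```python
import numpy as np

def lognormal_pdf(N, mu, sigma):
    # Standard lognormal density in the column density N.
    return np.exp(-(np.log(N) - mu) ** 2 / (2.0 * sigma**2)) \
        / (N * sigma * np.sqrt(2.0 * np.pi))

def two_lognormal_pdf(N, w, mu1, s1, mu2, s2):
    # Weighted sum of two lognormals: one component for the turbulent gas,
    # one for the compression zone driven by the ionized gas.
    return w * lognormal_pdf(N, mu1, s1) + (1.0 - w) * lognormal_pdf(N, mu2, s2)

# Illustrative parameters: column densities in cm^-2, second component
# centered at higher N (the "compressed" peak).
N = np.logspace(19.5, 23.5, 20000)
p = two_lognormal_pdf(N, w=0.7, mu1=np.log(1e21), s1=0.4,
                      mu2=np.log(5e21), s2=0.3)

# The mixture must integrate to one over the full range, as a PDF should.
integral = np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(N))
```

Fitting this form to an observed histogram (e.g. by least squares on the log of the PDF) recovers the two peaks the text describes when the compressed component is strong enough relative to the turbulent one.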
Spectral modeling of radiation in combustion systems
NASA Astrophysics Data System (ADS)
Pal, Gopalendu
Radiation calculations are important in combustion due to the high temperatures encountered, but radiation has not been studied in sufficient detail in the case of turbulent flames. Radiation calculations for such problems require accurate, robust, and computationally efficient models for the solution of the radiative transfer equation (RTE) and for the spectral properties of radiation. One more layer of complexity is added in predicting the overall heat transfer in turbulent combustion systems due to nonlinear interactions between turbulent fluctuations and radiation. The present work is aimed at the development of finite volume-based high-accuracy thermal radiation modeling, including spectral radiation properties, in order to accurately capture turbulence-radiation interactions (TRI) and predict heat transfer in turbulent combustion systems correctly and efficiently. The turbulent fluctuations of temperature and chemical species concentrations have strong effects on spectral radiative intensities, and TRI create a closure problem when the governing partial differential equations are averaged. Recently, several approaches have been proposed to take TRI into account. Among these attempts the most promising approaches are the probability density function (PDF) methods, which can treat the nonlinear coupling between turbulence and radiative emission exactly, i.e., "emission TRI". The basic idea of the PDF method is to treat physical variables as random variables and to solve the PDF transport equation stochastically. The actual reacting flow field is represented by a large number of discrete stochastic particles, each carrying their own random variable values and evolving with time. The mean value of any function of those random variables, such as the chemical source term, can be evaluated exactly by taking the ensemble average of particles. The local emission term belongs to this class and thus can be evaluated directly and exactly from particle ensembles.
However, the local absorption term involves interactions between the local particle and energy emitted by all other particles and, hence, cannot be obtained from particle ensembles directly. To close the nonlinear coupling between turbulence and absorption, i.e., "absorption TRI", an optically thin fluctuation approximation can be applied to virtually all combustion problems with acceptable accuracy. In the present study a composition-PDF method is applied, in which only the temperature and the species concentrations are treated as random variables. A closely coupled hybrid finite-volume/Monte Carlo scheme is adopted, in which the Monte Carlo method is used to solve the composition-PDF for chemical reactions and the finite volume method is used to solve for the flow field and radiation. Spherical harmonics method-based finite volume solvers (P-1 and P-3) are developed using the data structures of the high-fidelity open-source CFD software OpenFOAM. Spectral radiative properties of the participating medium are modeled using full-spectrum k-distribution methods. Advancements of basic k-distribution methods are performed for nongray nonhomogeneous gas- and particulate-phase (soot, fuel droplets, ash, etc.) participating media using multi-scale and multi-group based approaches. These methods achieve close-to-benchmark line-by-line (LBL) accuracy in strongly inhomogeneous media at a tiny fraction of LBL's computational cost. A portable spectral module is developed, which includes all the basic to advanced k-distribution methods along with precompiled accurate and compact k-distribution databases. The P-1/P-3 RTE solver coupled with the spectral module is used in conjunction with the combined Reynolds-averaged Navier-Stokes (RANS) and composition-PDF-based turbulence-chemistry solver to investigate TRI in multiphase turbulent combustion systems.
The combustion solvers developed in this study are employed to simulate several turbulent jet flames, including Sandia Flame D and artificial nonsooting and sooting flames derived from Flame D. The effects of combustion chemistry, radiation, and TRI on total heat transfer and pollutant (such as NOx) generation are studied for these flames. The accuracy of the overall combustion solver is assessed by comparison with the experimental data for Flame D. The accuracy and computational cost of the various spectral models and RTE solvers are compared extensively on the artificial flames derived from Flame D to demonstrate the necessity of accurate radiation modeling in combustion problems.
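The claim that particle ensembles evaluate nonlinear mean terms exactly can be illustrated with a minimal sketch (the Gaussian temperature distribution and the T⁴ emission law are illustrative assumptions, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stochastic particles: each carries its own temperature sample
# drawn from some turbulent distribution (illustrative values only).
T = rng.normal(1500.0, 200.0, size=100_000)  # K

def emission(T):
    # Illustrative nonlinear emission term ~ T^4 (Stefan-Boltzmann scaling).
    return 5.67e-8 * T**4

# Exact treatment: average the nonlinear term over the particle ensemble...
mean_emission_exact = emission(T).mean()
# ...versus the (incorrect) evaluation at the mean temperature.
mean_emission_naive = emission(T.mean())

# The ensemble average exceeds the naive value because T^4 is convex
# (Jensen's inequality) -- this gap is the emission TRI effect.
print(mean_emission_exact > mean_emission_naive)  # True
```

The gap between the two values is precisely what closure models that ignore emission TRI miss, and what the particle ensemble average captures without approximation.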
ERIC Educational Resources Information Center
Samejima, Fumiko
The rationale behind the method of estimating the operating characteristics of discrete item responses when the test information of the Old Test is not constant was presented previously. In the present study, two subtests of the Old Test, i.e., Subtests 1 and 2, each of which has a different non-constant test information function, are used in…
The Joker: A Custom Monte Carlo Sampler for Binary-star and Exoplanet Radial Velocity Data
NASA Astrophysics Data System (ADS)
Price-Whelan, Adrian M.; Hogg, David W.; Foreman-Mackey, Daniel; Rix, Hans-Walter
2017-03-01
Given sparse or low-quality radial velocity measurements of a star, there are often many qualitatively different stellar or exoplanet companion orbit models that are consistent with the data. The consequent multimodality of the likelihood function leads to extremely challenging search, optimization, and Markov chain Monte Carlo (MCMC) posterior sampling over the orbital parameters. Here we create a custom Monte Carlo sampler for sparse or noisy radial velocity measurements of two-body systems that can produce posterior samples for orbital parameters even when the likelihood function is poorly behaved. The six standard orbital parameters for a binary system can be split into four nonlinear parameters (period, eccentricity, argument of pericenter, phase) and two linear parameters (velocity amplitude, barycenter velocity). We capitalize on this by building a sampling method in which we densely sample the prior probability density function (pdf) in the nonlinear parameters and perform rejection sampling using a likelihood function marginalized over the linear parameters. With sparse or uninformative data, the sampling obtained by this rejection sampling is generally multimodal and dense. With informative data, the sampling becomes effectively unimodal but too sparse: in these cases we follow the rejection sampling with standard MCMC. The method produces correct samplings in orbital parameters for data that include as few as three epochs. The Joker can therefore be used to produce proper samplings of multimodal pdfs, which are still informative and can be used in hierarchical (population) modeling. We give some examples that show how the posterior pdf depends sensitively on the number and time coverage of the observations and their uncertainties.
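The prior-rejection step at the heart of the method can be illustrated in one dimension (the bimodal likelihood below is a toy stand-in for a real radial-velocity likelihood; none of these numbers come from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy analogue of The Joker's rejection step: densely sample the prior over
# a single nonlinear parameter (a "period"), then accept each sample with
# probability L / L_max.  The bimodal likelihood below is purely illustrative.
def likelihood(p):
    return (np.exp(-0.5 * ((p - 3.0) / 0.05) ** 2)
            + np.exp(-0.5 * ((p - 7.0) / 0.05) ** 2))

prior_samples = rng.uniform(1.0, 10.0, size=500_000)  # dense prior sampling
L = likelihood(prior_samples)
accept = rng.uniform(size=L.size) < L / L.max()
posterior = prior_samples[accept]

# The surviving samples recover both posterior modes -- exactly the kind of
# multimodal sampling a single MCMC chain started in one mode would miss.
print(posterior[posterior < 5].mean(), posterior[posterior > 5].mean())
```

When the data become informative, the acceptance rate of this step collapses (few prior samples survive), which is why the method then hands the surviving samples to standard MCMC.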
Large Eddy Simulation of Flame Flashback in Swirling Premixed Flames
NASA Astrophysics Data System (ADS)
Lietz, Christopher; Raman, Venkatramanan
2014-11-01
In the design of high-hydrogen content gas turbines for power generation, flashback of the turbulent flame by propagation through the low velocity boundary layers in the premixing region is an operationally dangerous event. Predictive models that could accurately capture the onset and subsequent behavior of flashback would be indispensable in gas turbine design. The large eddy simulation (LES) approach is used here to model this process. The goal is to examine the validity of a probability distribution function (PDF) based model in the context of a lean premixed flame in a confined geometry. A turbulent swirling flow geometry and corresponding experimental data are used for validation. A suite of LES calculations is performed on a large unstructured mesh for varying fuel compositions operating at several equivalence ratios. It is shown that the PDF based method can predict some statistical properties of the flame front, with improvement over other models in the same application.
Modeling molecular mixing in a spatially inhomogeneous turbulent flow
NASA Astrophysics Data System (ADS)
Meyer, Daniel W.; Deb, Rajdeep
2012-02-01
Simulations of spatially inhomogeneous turbulent mixing in decaying grid turbulence with a joint velocity-concentration probability density function (PDF) method were conducted. The inert mixing scenario involves three streams with different compositions. The mixing model of Meyer ["A new particle interaction mixing model for turbulent dispersion and turbulent reactive flows," Phys. Fluids 22(3), 035103 (2010)], the interaction by exchange with the mean (IEM) model and its velocity-conditional variant, i.e., the IECM model, were applied. For reference, the direct numerical simulation data provided by Sawford and de Bruyn Kops ["Direct numerical simulation and lagrangian modeling of joint scalar statistics in ternary mixing," Phys. Fluids 20(9), 095106 (2008)] was used. It was found that velocity conditioning is essential to obtain accurate concentration PDF predictions. Moreover, the model of Meyer provides significantly better results compared to the IECM model at comparable computational expense.
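A minimal numerical sketch of the IEM model mentioned above (the constant C_phi, the mixing timescale, and the three-stream initial condition are illustrative assumptions, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal sketch of the IEM (interaction by exchange with the mean) mixing
# model: every particle's scalar relaxes toward the ensemble mean at a rate
# set by the mixing timescale tau:  dphi/dt = -(C_phi / 2 tau) (phi - <phi>).
C_phi, tau, dt, n_steps = 2.0, 1.0, 0.01, 200
phi = rng.choice([0.0, 0.5, 1.0], size=50_000)  # three-stream initial scalar

var0 = phi.var()
for _ in range(n_steps):
    phi += -0.5 * C_phi / tau * (phi - phi.mean()) * dt

# IEM preserves the mean and decays the variance as exp(-C_phi * t / tau),
# but it cannot change the *shape* of the scalar PDF -- each particle keeps
# its rank.  That shape deficiency is what velocity conditioning (IECM) and
# the model of Meyer address.
t = n_steps * dt
print(np.isclose(phi.var() / var0, np.exp(-C_phi * t / tau), rtol=0.05))
```

The variance decay is correct by construction; the abstract's finding is that the unconditional IEM nevertheless misses the concentration PDF in inhomogeneous flows, which the velocity-conditional variants repair.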
Shock loading predictions from application of indicial theory to shock-turbulence interactions
NASA Technical Reports Server (NTRS)
Keefe, Laurence R.; Nixon, David
1991-01-01
A sequence of steps that permits prediction of some of the characteristics of the pressure field beneath a fluctuating shock wave from knowledge of the oncoming turbulent boundary layer is presented. The theory first predicts the power spectrum and pdf of the position and velocity of the shock wave, which are then used to obtain the shock frequency distribution, and the pdf of the pressure field, as a function of position within the interaction region. To test the validity of the crucial assumption of linearity, the indicial response of a normal shock is calculated from numerical simulation. This indicial response, after being fit by a simple relaxation model, is used to predict the shock position and velocity spectra, along with the shock passage frequency distribution. The low frequency portion of the shock spectra, where most of the energy is concentrated, is satisfactorily predicted by this method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friar, James Lewis; Goldman, Terrance J.; Pérez-Mercader, J.
In this paper, we apply the Law of Total Probability to the construction of scale-invariant probability distribution functions (pdf's), and require that probability measures be dimensionless and unitless under a continuous change of scales. If the scale-change distribution function is scale invariant then the constructed distribution will also be scale invariant. Repeated application of this construction on an arbitrary set of (normalizable) pdf's results again in scale-invariant distributions. The invariant function of this procedure is given uniquely by the reciprocal distribution, suggesting a kind of universality. Finally, we separately demonstrate that the reciprocal distribution results uniquely from requiring maximum entropy for size-class distributions with uniform bin sizes.
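The scale-invariance property of the reciprocal distribution p(x) ∝ 1/x can be checked numerically; this is a sketch of the property, not the paper's construction (support endpoints and the scale factor are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)

# Sketch of the reciprocal distribution p(x) = 1/(x ln(b/a)) on [a, b],
# sampled by inverse-CDF: X = a * (b/a)**U with U ~ Uniform(0, 1).
a, b, n = 1.0, 100.0, 200_000
x = a * (b / a) ** rng.uniform(size=n)

# Scale invariance: rescaling x -> lam * x only shifts log(x), so the
# histogram of log(lam * x) stays flat -- the hallmark of p(x) ~ 1/x.
lam = 13.7
counts, _ = np.histogram(np.log(lam * x), bins=10)
print(np.allclose(counts, n / 10, rtol=0.05))  # nearly uniform in log space
```

Equivalently, X reciprocal on [a, b] implies log X uniform, and multiplication by any λ merely translates that uniform density, which is the invariance the paper proves uniquely characterizes this distribution.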
Evolution of column density distributions within Orion A⋆
NASA Astrophysics Data System (ADS)
Stutz, A. M.; Kainulainen, J.
2015-05-01
We compare the structure of star-forming molecular clouds in different regions of Orion A to determine how the column density probability distribution function (N-PDF) varies with environmental conditions such as the fraction of young protostars. A correlation between the N-PDF slope and Class 0 protostar fraction has been previously observed in a low-mass star-forming region (Perseus); here we test whether a similar correlation is observed in a high-mass star-forming region. We used Herschel PACS and SPIRE cold dust emission observations to derive a column density map of Orion A. We used the Herschel Orion Protostar Survey catalog to accurately identify and classify the Orion A young stellar object content, including the cold and relatively short-lived Class 0 protostars (with a lifetime of ~0.14 Myr). We divided Orion A into eight independent regions of 0.25 square degrees (13.5 pc²); in each region we fit the N-PDF distribution with a power law, and we measured the fraction of Class 0 protostars. We used a maximum-likelihood method to measure the N-PDF power-law index without binning the column density data. We find that the Class 0 fraction is higher in regions with flatter column density distributions. We tested the effects of incompleteness, extinction-driven misclassification of Class 0 sources, resolution, and adopted pixel scales. We show that these effects cannot account for the observed trend. Our observations demonstrate an association between the slope of the power-law N-PDF and the Class 0 fractions within Orion A. Various interpretations are discussed, including timescales based on the Class 0 protostar fraction assuming a constant star-formation rate. The observed relation suggests that the N-PDF can be related to an evolutionary state of the gas. If universal, such a relation permits evaluating the evolutionary state from the N-PDF power-law index at much greater distances than those accessible with protostar counts.
Appendices are available in electronic form at http://www.aanda.org. The N(H) map as a FITS file is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/577/L6
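One standard binning-free maximum-likelihood estimator for a power-law index, of the kind the authors describe, has a closed form (Clauset-style); a sketch on synthetic data with an assumed index, not the paper's pipeline:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic power-law data p(x) ~ x**-alpha for x >= x_min, drawn by
# inverse-CDF sampling (alpha_true and x_min are invented for illustration).
alpha_true, x_min, n = 2.5, 1.0, 100_000
x = x_min * (1.0 - rng.uniform(size=n)) ** (-1.0 / (alpha_true - 1.0))

# Closed-form MLE for the index, with no binning of the data:
#   alpha_hat = 1 + n / sum(ln(x_i / x_min))
alpha_hat = 1.0 + x.size / np.log(x / x_min).sum()
print(round(alpha_hat, 1))  # ≈ 2.5
```

Because the estimator uses every data point directly, it avoids the bias that histogram bin choices introduce into least-squares fits of binned N-PDFs.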
Mortier, Siska; De Vriese, An S; McLoughlin, Rachel M; Topley, Nicholas; Schaub, Thomas P; Passlick-Deetjen, Jutta; Lameire, Norbert H
2003-05-01
Peritonitis remains an important cause of morbidity and technique failure in peritoneal dialysis (PD). Conventional peritoneal dialysate fluids (PDF) inhibit peritoneal leukocyte function in vitro and may thus adversely affect the immune response to peritonitis. New PDF have been designed with neutral pH, low glucose degradation product (GDP) contents, and bicarbonate as buffer. The present intravital microscopy study examined the effects of conventional and new PDF on leukocyte behavior in the peritoneal microcirculation of Wistar rats. The visceral peritoneum was superfused by a control solution (EBSS), a conventional (CAPD), or a new bicarbonate-buffered PDF with neutral pH and low GDP content (CAPD BicaVera). In addition, spent conventional and new PDF were tested. The number of rolling, adhering, and extravasated leukocytes and leukocyte rolling velocity were assessed at different time intervals after exposure to lipopolysaccharide (LPS) or cell-free supernatants of coagulase-negative staphylococci (CNS-CFS). Exposure to LPS or CNS-CFS dissolved in EBSS dramatically increased the number of rolling, adhering, and extravasated leukocytes and decreased leukocyte rolling velocity. Superfusion by CAPD abolished the LPS- or CNS-CFS-induced leukocyte recruitment, whereas CAPD BicaVera had a significantly weaker depressant effect. Spent PDF affected the leukocyte response similarly to fresh PDF. High lactate concentrations, GDP, and hypertonicity appeared to be mainly responsible for the inhibition of leukocyte recruitment. In conclusion, conventional PDF abolish in vivo leukocyte recruitment in response to potent inflammatory stimuli. Bicarbonate-buffered pH-neutral PDF with low GDP contents have weaker depressant effects and may therefore contribute to a better preservation of peritoneal host defense.
CT14QED parton distribution functions from isolated photon production in deep inelastic scattering
NASA Astrophysics Data System (ADS)
Schmidt, Carl; Pumplin, Jon; Stump, Daniel; Yuan, C.-P.
2016-06-01
We describe the implementation of quantum electrodynamic (QED) evolution at leading order (LO) along with quantum chromodynamic (QCD) evolution at next-to-leading order (NLO) in the CTEQ-TEA global analysis package. The inelastic contribution to the photon parton distribution function (PDF) is described by a two-parameter ansatz, coming from radiation off the valence quarks, and based on the CT14 NLO PDFs. Setting the two parameters to be equal allows us to completely specify the inelastic photon PDF in terms of the inelastic momentum fraction carried by the photon, p0γ, at the initial scale Q0 = 1.295 GeV. We obtain constraints on the photon PDF by comparing with ZEUS data [S. Chekanov et al. (ZEUS Collaboration), Phys. Lett. B 687, 16 (2010)] on the production of isolated photons in deep inelastic scattering, ep → eγ + X. For this comparison we present a new perturbative calculation of the process that consistently combines the photon-initiated contribution with the quark-initiated contribution. Comparison with the data allows us to put a constraint at the 90% confidence level of p0γ ≲ 0.14% for the inelastic photon PDF at the initial scale of Q0 = 1.295 GeV in the one-parameter radiative ansatz. The resulting inelastic CT14QED PDFs will be made available to the public. In addition, we also provide CT14QEDinc PDFs, in which the inclusive photon PDF at the scale Q0 is defined by the sum of the inelastic photon PDF and the elastic photon distribution obtained from the equivalent photon approximation.
Decoupling the NLO-coupled QED⊗QCD, DGLAP evolution equations, using Laplace transform method
NASA Astrophysics Data System (ADS)
Mottaghizadeh, Marzieh; Eslami, Parvin; Taghavi-Shahri, Fatemeh
2017-05-01
We analytically solved the QED⊗QCD-coupled DGLAP evolution equations at leading order (LO) in quantum electrodynamics (QED) and next-to-leading order (NLO) in quantum chromodynamics (QCD), using the Laplace transform method, and then computed the proton structure function in terms of the unpolarized parton distribution functions. Our analytical solutions for the parton densities are in good agreement with the CT14QED global parametrizations (1.295² < Q² < 10¹⁰) (Ref. 6) and with APFEL (A PDF Evolution Library) (2 < Q² < 10⁸) (Ref. 4). We also compared the proton structure function, F₂ᵖ(x,Q²), with the experimental data released by the ZEUS and H1 collaborations at HERA, finding good agreement over the full range of low and high x and Q².
The structure of the proton in the LHC precision era
NASA Astrophysics Data System (ADS)
Gao, Jun; Harland-Lang, Lucian; Rojo, Juan
2018-05-01
We review recent progress in the determination of the parton distribution functions (PDFs) of the proton, with emphasis on the applications for precision phenomenology at the Large Hadron Collider (LHC). First of all, we introduce the general theoretical framework underlying the global QCD analysis of the quark and gluon internal structure of protons. We then present a detailed overview of the hard-scattering measurements, and the corresponding theory predictions, that are used in state-of-the-art PDF fits. We emphasize here the role that higher-order QCD and electroweak corrections play in the description of recent high-precision collider data. We present the methodology used to extract PDFs in global analyses, including the PDF parametrization strategy and the definition and propagation of PDF uncertainties. Then we review and compare the most recent releases from the various PDF fitting collaborations, highlighting their differences and similarities. We discuss the role that QED corrections and photon-initiated contributions play in modern PDF analysis. We provide representative examples of the implications of PDF fits for high-precision LHC phenomenological applications, such as Higgs coupling measurements and searches for high-mass New Physics resonances. We conclude this report by discussing some selected topics relevant for the future of PDF determinations, including the treatment of theoretical uncertainties, the connection with lattice QCD calculations, and the role of PDFs at future high-energy colliders beyond the LHC.
Statistical segmentation of multidimensional brain datasets
NASA Astrophysics Data System (ADS)
Desco, Manuel; Gispert, Juan D.; Reig, Santiago; Santos, Andres; Pascau, Javier; Malpica, Norberto; Garcia-Barreno, Pedro
2001-07-01
This paper presents an automatic segmentation procedure for MRI neuroimages that overcomes part of the problems involved in multidimensional clustering techniques, such as partial volume effects (PVE), processing speed, and the difficulty of incorporating a priori knowledge. The method is a three-stage procedure: 1) Exclusion of background and skull voxels using threshold-based region growing techniques with fully automated seed selection. 2) Expectation Maximization algorithms are used to estimate the probability density function (PDF) of the remaining pixels, which are assumed to be mixtures of Gaussians. These pixels can then be classified into cerebrospinal fluid (CSF), white matter, and grey matter. Using this procedure, our method takes advantage of the full covariance matrix (instead of the diagonal) for the joint PDF estimation. On the other hand, logistic discrimination techniques are more robust against violation of multi-Gaussian assumptions. 3) A priori knowledge is added using Markov Random Field techniques. The algorithm has been tested with a dataset of 30 brain MRI studies (co-registered T1 and T2 MRI). Our method was compared with clustering techniques and with template-based statistical segmentation, using manual segmentation as a gold standard. Our results were more robust and closer to the gold standard.
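Stage 2 of the pipeline — EM estimation of a Gaussian mixture with full covariance matrices — can be sketched on synthetic two-class data (a toy stand-in for tissue classes; all parameters are invented, and only two components are used for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data with correlated features (full, non-diagonal
# covariances -- the case the paper's method exploits).
n = 4000
c0 = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], n)
c1 = rng.multivariate_normal([4, 4], [[1.0, -0.3], [-0.3, 1.0]], n)
X = np.vstack([c0, c1])

def gauss(X, mu, cov):
    # 2-D Gaussian density with a full covariance matrix.
    d = X - mu
    inv = np.linalg.inv(cov)
    norm = 1.0 / (2 * np.pi * np.sqrt(np.linalg.det(cov)))
    return norm * np.exp(-0.5 * np.einsum('ij,jk,ik->i', d, inv, d))

# Deliberately poor initialisation; EM refines means, covariances, weights.
mu = np.array([[1.0, 1.0], [3.0, 3.0]])
cov = np.array([np.eye(2), np.eye(2)])
w = np.array([0.5, 0.5])
for _ in range(50):
    # E-step: posterior responsibility of each component for each pixel.
    r = np.stack([w[k] * gauss(X, mu[k], cov[k]) for k in range(2)])
    r /= r.sum(axis=0)
    # M-step: re-estimate weights, means, and full covariance matrices.
    for k in range(2):
        nk = r[k].sum()
        w[k] = nk / len(X)
        mu[k] = r[k] @ X / nk
        d = X - mu[k]
        cov[k] = (r[k][:, None] * d).T @ d / nk

print(np.allclose(np.sort(mu[:, 0]), [0.0, 4.0], atol=0.1))  # means recovered
```

Keeping the full covariance in the M-step is what lets the mixture capture correlated intensities across channels (here the 0.6 and −0.3 cross-terms), which a diagonal model would flatten.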
Semantic Web Technology for Mapping and Applying Clinical Functional Assessment Information
2015-05-01
summaries in free text (Figure 1) or form-based documents that are accessible as PDFs (Figure 2). Because there is no coding scheme for clinical… come from specific… The DBQs are VBA-21-0960M-14-ARE-Back.pdf, VBA-21-0960M-9-ARE-KneeLowerLeg.pdf, VBA-21-0960A-1-ARE-ischemic, NEURO-TBI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harstad, E. N.; Harlow, Francis Harvey; Schreyer, H. L.
Our goal is to develop constitutive relations for the behavior of a solid polymer during high-strain-rate deformations. In contrast to the classic thermodynamic techniques for deriving stress-strain response in static (equilibrium) circumstances, we employ a statistical-mechanics approach, in which we evolve a probability distribution function (PDF) for the velocity fluctuations of the repeating units of the chain. We use a Langevin description for the dynamics of a single repeating unit and a Liouville equation to describe the variations of the PDF. Moments of the PDF give the conservation equations for a single polymer chain embedded in other similar chains. To extract single-chain analytical constitutive relations, these equations have been solved for representative loading paths. By this process we discover that a measure of nonuniform chain link displacement serves this purpose very well. We then derive an evolution equation for the descriptor function, with the result being a history-dependent constitutive relation.
Modification and identification of a vector for making a large phage antibody library.
Zhang, Guo-min; Chen, Yü-ping; Guan, Yuan-zhi; Wang, Yan; An, Yun-qing
2007-11-20
The large phage antibody library is used to obtain high-affinity human antibody, and the Loxp/cre site-specific recombination system is a potential method for constructing a large phage antibody library. In the present study, a phage antibody library vector pDF was reconstructed to construct diabody more quickly and conveniently without injury to homologous recombination and the expression function of the vector and thus to integrate construction of the large phage antibody library with the preparation of diabodies. scFv was obtained by overlap polymerase chain reaction (PCR) amplification with the newly designed VL and VH extension primers. loxp511 was flanked by VL and VH and the endonuclease ACC III encoding sequences were introduced on both sides of loxp511. scFv was cloned into the vector pDF to obtain the vector pDscFv. The vector expression function was identified and the feasibility of diabody preparation was evaluated. A large phage antibody library was constructed in pDscFv. Several antigens were used to screen the antibody library and the quality of the antibody library was evaluated. The phage antibody library expression vector pDscFv was successfully constructed and confirmed to express functional scFv. The large phage antibody library constructed using this vector was of high diversity. Screening of the library on 6 antigens confirmed the generation of specific antibodies to these antigens. Two antibodies were subjected to enzymatic digestion and were prepared into diabody with functional expression. The reconstructed vector pDscFv retains its recombination capability and expression function and can be used to construct large phage antibody libraries. It can be used as a convenient and quick method for preparing diabodies after simple enzymatic digestion, which facilitates clinical trials and application of antibody therapy.
Probability distributions for multimeric systems.
Albert, Jaroslav; Rooman, Marianne
2016-01-01
We propose a fast and accurate method of obtaining the equilibrium mono-modal joint probability distributions for multimeric systems. The method requires only two assumptions: the copy number of all species of molecule may be treated as continuous; and the probability density functions (pdfs) are well-approximated by multivariate skew normal distributions (MSND). Starting from the master equation, we convert the problem into a set of equations for the statistical moments, which are then expressed in terms of the parameters intrinsic to the MSND. Using an optimization package in Mathematica, we minimize a Euclidean distance function comprising a sum of the squared differences between the left and right hand sides of these equations. Comparison of results obtained via our method with those rendered by the Gillespie algorithm demonstrates our method to be highly accurate as well as efficient.
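The moment-matching idea can be illustrated in the one-dimensional case, where the skew-normal moment equations can be inverted directly rather than by optimization (target moments are invented; the paper's multivariate Mathematica procedure is not reproduced here):

```python
import numpy as np

# One-dimensional sketch: choose skew-normal parameters (xi, omega, alpha)
# so the analytic moments hit target mean/variance/skewness.
target_mean, target_var, target_skew = 10.0, 4.0, 0.5

def skewness_of_delta(delta):
    # Skewness of a skew-normal in terms of delta = alpha / sqrt(1 + alpha^2).
    m = delta * np.sqrt(2 / np.pi)
    return (4 - np.pi) / 2 * m**3 / (1 - m**2) ** 1.5

# Invert skewness -> delta by bisection (skewness is monotone in delta).
lo, hi = 0.0, 0.9999
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if skewness_of_delta(mid) < target_skew else (lo, mid)
delta = 0.5 * (lo + hi)

# Back out shape, scale, and location from the remaining moment equations.
alpha = delta / np.sqrt(1 - delta**2)
omega = np.sqrt(target_var / (1 - 2 * delta**2 / np.pi))
xi = target_mean - omega * delta * np.sqrt(2 / np.pi)

print(round(skewness_of_delta(delta), 3))  # recovers the target 0.5
```

In the multivariate setting these closed-form inversions are no longer available, which is why the paper minimizes the squared residuals of the moment equations numerically instead.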
NASA Technical Reports Server (NTRS)
Palosz, B.; Grzanka, E.; Gierlotka, S.; Stelmakh, S.; Pielaszek, R.; Bismayer, U.; Weber, H.-P.; Palosz, W.
2003-01-01
Two methods of the analysis of powder diffraction patterns of diamond and SiC nanocrystals are presented: (a) examination of changes of the lattice parameters with diffraction vector Q ('apparent lattice parameter', alp), which refers to Bragg scattering, and (b) examination of changes of inter-atomic distances based on the analysis of the atomic Pair Distribution Function, PDF. Application of these methods was studied based on the theoretical diffraction patterns computed for models of nanocrystals having (i) a perfect crystal lattice, and (ii) a core-shell structure, i.e. constituting a two-phase system. The models are defined by the lattice parameter of the grain core, the thickness of the surface shell, and the magnitude and distribution of the strain field in the shell. X-ray and neutron experimental diffraction data of nanocrystalline SiC and diamond powders with grain diameters from 4 nm up to micrometers were used. The effects of the internal pressure and strain at the grain surface on the structure are discussed based on the experimentally determined dependence of the alp values on the Q-vector, and on the changes of the interatomic distances with grain size determined experimentally by the atomic Pair Distribution Function (PDF) analysis. The experimental results lend strong support to the concept of a two-phase, core and surface shell structure of nanocrystalline diamond and SiC.
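The atomic pair-distribution idea — sharp peaks at the interatomic distances of the lattice — can be illustrated with a toy ideal cubic grain (the simple-cubic geometry and lattice constant are arbitrary illustrations, not diamond or SiC values):

```python
import numpy as np

# Toy "nanocrystal": a 4x4x4 simple-cubic grain with an invented lattice
# constant a0.  The PDF is built from the histogram of all interatomic
# distances.
a0 = 3.0
grid = np.arange(4)
pos = a0 * np.array([[i, j, k] for i in grid for j in grid for k in grid])

d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
d = d[d > 0]  # drop self-distances

# For a perfect lattice the PDF peaks at the neighbour shells:
# a0, a0*sqrt(2), a0*sqrt(3), 2*a0, ...
print(np.unique(np.round(np.sort(d)[:10], 3)))  # nearest shell at a0
```

A core-shell grain of the kind modelled in the paper would shift and broaden the outer-shell contributions to these peaks, which is precisely the grain-size-dependent signal the PDF analysis extracts.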
Simulations of sooting turbulent jet flames using a hybrid flamelet/stochastic Eulerian field method
NASA Astrophysics Data System (ADS)
Consalvi, Jean-Louis; Nmira, Fatiha; Burot, Daria
2016-03-01
The stochastic Eulerian field method is applied to simulate 12 turbulent C1-C3 hydrocarbon jet diffusion flames covering a wide range of Reynolds numbers and fuel sooting propensities. The joint scalar probability density function (PDF) is a function of the mixture fraction, enthalpy defect, scalar dissipation rate, and representative soot properties. Soot production is modelled by a semi-empirical acetylene/benzene-based soot model. Spectral gas and soot radiation is modelled using a wide-band correlated-k model. Emission turbulence-radiation interactions (TRIs) are taken into account by means of the PDF method, whereas absorption TRIs are modelled using the optically thin fluctuation approximation. Model predictions are found to be in reasonable agreement with experimental data in terms of flame structure, soot quantities, and radiative loss. Mean soot volume fractions are predicted within a factor of two of the experiments, whereas radiant fractions and peaks of wall radiative fluxes are within 20%. The study also aims to assess approximate radiative models, namely the optically thin approximation (OTA) and the grey medium approximation. These approximations significantly affect the radiative loss and should be avoided if accurate predictions of the radiative flux are desired. At atmospheric pressure, the relative errors that they produce on the peaks of temperature and soot volume fraction are within both experimental and model uncertainties. However, these discrepancies are found to increase with pressure, suggesting that spectral models properly describing self-absorption should be considered at over-atmospheric pressure.
A study of hydrogen diffusion flames using PDF turbulence model
NASA Technical Reports Server (NTRS)
Hsu, Andrew T.
1991-01-01
The application of probability density function (pdf) turbulence models is addressed. For the purpose of accurate prediction of turbulent combustion, an algorithm that combines a conventional computational fluid dynamic (CFD) flow solver with the Monte Carlo simulation of the pdf evolution equation was developed. The algorithm was validated using experimental data for a heated turbulent plane jet. The study of H2-F2 diffusion flames was carried out using this algorithm. Numerical results compared favorably with experimental data. The computations show that the flame center shifts as the equivalence ratio changes, and that for the same equivalence ratio, similarity solutions for flames exist.
NASA Astrophysics Data System (ADS)
Margolin, L. G.
2018-04-01
The applicability of Navier-Stokes equations is limited to near-equilibrium flows in which the gradients of density, velocity and energy are small. Here I propose an extension of the Chapman-Enskog approximation in which the velocity probability distribution function (PDF) is averaged in the coordinate phase space as well as the velocity phase space. I derive a PDF that depends on the gradients and represents a first-order generalization of local thermodynamic equilibrium. I then integrate this PDF to derive a hydrodynamic model. I discuss the properties of that model and its relation to the discrete equations of computational fluid dynamics. This article is part of the theme issue `Hilbert's sixth problem'.
NASA Astrophysics Data System (ADS)
Marro, Massimo; Salizzoni, Pietro; Soulhac, Lionel; Cassiani, Massimo
2018-06-01
We analyze the reliability of the Lagrangian stochastic micromixing method in predicting higher-order statistics of the passive scalar concentration induced by an elevated source (of varying diameter) placed in a turbulent boundary layer. To that purpose we analyze two different modelling approaches by testing their results against the wind-tunnel measurements discussed in Part I (Nironi et al., Boundary-Layer Meteorology, 2015, Vol. 156, 415-446). The first is a probability density function (PDF) micromixing model that simulates the effects of the molecular diffusivity on the concentration fluctuations by taking into account the background particles. The second is a new model, named VPΓ, conceived in order to minimize the computational costs. This is based on the volumetric particle approach providing estimates of the first two concentration moments with no need for the simulation of the background particles. In this second approach, higher-order moments are computed based on the estimates of these two moments and under the assumption that the concentration PDF is a Gamma distribution. The comparisons concern the spatial distribution of the first four moments of the concentration and the evolution of the PDF along the plume centreline. The novelty of this work is twofold: (i) we perform a systematic comparison of the results of micro-mixing Lagrangian models against experiments providing profiles of the first four moments of the concentration within an inhomogeneous and anisotropic turbulent flow, and (ii) we show the reliability of the VPΓ model as an operational tool for the prediction of the PDF of the concentration.
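The VPΓ closure described above — fixing a Gamma concentration PDF from the first two moments and reading the higher moments off it — can be sketched as follows (the moment values are invented for illustration; this is not the authors' code):

```python
import numpy as np
from math import lgamma, exp

rng = np.random.default_rng(5)

# Assume the concentration PDF is Gamma; fix shape k and scale theta from
# the simulated mean and variance (illustrative values), then read off the
# higher moments analytically.
mean_c, var_c = 2.0, 1.5
k, theta = mean_c**2 / var_c, var_c / mean_c  # shape and scale

def gamma_moment(n):
    # E[c^n] = theta^n * Gamma(k + n) / Gamma(k), via log-gamma for stability.
    return theta**n * exp(lgamma(k + n) - lgamma(k))

# Cross-check the analytic 3rd and 4th moments against Monte Carlo samples.
samples = rng.gamma(k, theta, size=2_000_000)
print(np.isclose(gamma_moment(3), (samples**3).mean(), rtol=0.01),
      np.isclose(gamma_moment(4), (samples**4).mean(), rtol=0.02))
```

This is what makes the approach cheap: only two moments need to be transported, while skewness and flatness come for free from the assumed Gamma shape.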
Bauer, M; Riech, S; Brandes, I; Waeschle, R M
2015-11-01
Quality assurance of care and patient safety are of major importance in the high-risk, high-cost area of the operating room (OR), particularly under increasing cost pressure and rising performance levels. Standard operating procedures (SOP) are an established tool for structuring and standardizing clinical treatment pathways and offer multiple benefits for quality assurance and process optimization. An internal project was initiated in the department of anesthesiology, and a continuous improvement process was carried out to build up a comprehensive SOP library. In the first step, the spectrum of anesthesiology procedures was transferred to PDF-based SOPs. The further development into an app-based SOP library (Aesculapp) was motivated by the high resource expenditure for administering and maintaining the large PDF-based SOP collection and by deficits in mobile availability. The next developmental stage, the SOP healthcare information assistant (SOPHIA), included a simplified and advanced update feature, an archive feature previously missing, and notably the possibility of sharing the SOP library with other departments, including the option to adapt each SOP to the local situation. A survey of the personnel showed that the app-based delivery of SOPs (Aesculapp, SOPHIA) had higher acceptance than the earlier PDF-based SOPs. The SOP management system SOPHIA combines the benefits of its forerunner Aesculapp with improved options for intradepartmental maintenance and administration of the SOPs and an export and editing function for interinstitutional exchange of SOPs.
Modeling envelope statistics of blood and myocardium for segmentation of echocardiographic images.
Nillesen, Maartje M; Lopata, Richard G P; Gerrits, Inge H; Kapusta, Livia; Thijssen, Johan M; de Korte, Chris L
2008-04-01
The objective of this study was to investigate the use of speckle statistics as a preprocessing step for segmentation of the myocardium in echocardiographic images. Three-dimensional (3D) and biplane image sequences of the left ventricle of two healthy children and one dog (beagle) were acquired. Pixel-based speckle statistics of manually segmented blood and myocardial regions were investigated by fitting various probability density functions (pdf). The statistics of heart muscle and blood could both be optimally modeled by a K-pdf or Gamma-pdf (Kolmogorov-Smirnov goodness-of-fit test). Scale and shape parameters of both distributions could differentiate between blood and myocardium. Local estimation of these parameters was used to obtain parametric images, where window size was related to speckle size (5 x 2 speckles). Moment-based and maximum-likelihood estimators were used. Scale parameters were still able to differentiate blood from myocardium; however, smoothing of edges of anatomical structures occurred. Estimation of the shape parameter required a larger window size, leading to unacceptable blurring. Using these parameters as an input for segmentation resulted in unreliable segmentation. Adaptive mean squares filtering was then introduced using the moment-based scale parameter (sigma(2)/mu) of the Gamma-pdf to automatically steer the two-dimensional (2D) local filtering process. This method adequately preserved sharpness of the edges. In conclusion, a trade-off between preservation of sharpness of edges and goodness-of-fit when estimating local shape and scale parameters is evident for parametric images. For this reason, adaptive filtering outperforms parametric imaging for the segmentation of echocardiographic images.
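The moment-based estimate of the Gamma-pdf scale parameter (σ²/μ) used above to steer the adaptive filtering can be sketched as a sliding-window parametric image. A rough illustration (window handling and sizes are assumptions, not the authors' implementation):

```python
import numpy as np

def gamma_scale_map(img, win=7):
    """Moment-based estimate of the Gamma-pdf scale parameter (sigma^2/mu)
    in a sliding window -- the local quantity used to steer adaptive
    filtering of echocardiographic speckle."""
    pad = win // 2
    padded = np.pad(img, pad, mode='reflect')
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = padded[i:i + win, j:j + win]
            mu = w.mean()
            out[i, j] = w.var() / mu if mu > 0 else 0.0
    return out
```

For Gamma-distributed pixels the ratio of local variance to local mean estimates the scale parameter directly, without maximum-likelihood iteration.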
Joint constraints on galaxy bias and σ{sub 8} through the N-pdf of the galaxy number density
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arnalte-Mur, Pablo; Martínez, Vicent J.; Vielva, Patricio
We present a full description of the N-probability density function of the galaxy number density fluctuations. This N-pdf is given in terms, on the one hand, of the cold dark matter correlations and, on the other hand, of the galaxy bias parameter. The method relies on the commonly adopted assumption that the dark matter density fluctuations follow a local non-linear transformation of the initial energy density perturbations. The N-pdf of the galaxy number density fluctuations allows for an optimal estimation of the bias parameter (e.g., via maximum-likelihood estimation, or Bayesian inference if there exists any a priori information on the bias parameter), and of those parameters defining the dark matter correlations, in particular its amplitude (σ{sub 8}). It also provides the proper framework to perform model selection between two competing hypotheses. The parameter estimation capabilities of the N-pdf are proved by SDSS-like simulations (both ideal log-normal simulations and mocks obtained from Las Damas simulations), showing that our estimator is unbiased. We apply our formalism to the 7th release of the SDSS main sample (for a volume-limited subset with absolute magnitudes M{sub r} ≤ −20). We obtain b̂ = 1.193 ± 0.074 and σ̄{sub 8} = 0.862 ± 0.080, for galaxy number density fluctuations in cells of the size of 30h{sup −1}Mpc. Different model selection criteria show that galaxy biasing is clearly favoured.
PDF added value of a high resolution climate simulation for precipitation
NASA Astrophysics Data System (ADS)
Soares, Pedro M. M.; Cardoso, Rita M.
2015-04-01
General Circulation Models (GCMs) are suited to studying the global atmospheric system, its evolution and its response to changes in external forcing, namely increasing emissions of CO2. However, the resolution of GCMs, of the order of 1°, is not sufficient to reproduce finer-scale features of the atmospheric flow related to complex topography, coastal processes and boundary-layer processes, and higher-resolution models are needed to describe observed weather and climate. The latter are known as Regional Climate Models (RCMs); they are widely used to downscale GCM results for many regions of the globe and are able to capture physically consistent regional and local circulations. Most RCM evaluations rely on comparing their results with observations, either from weather-station networks or regular gridded datasets, revealing the ability of RCMs to describe local climatic properties, and often assuming their higher performance in comparison with the forcing GCMs. The additional climatic detail given by RCMs relative to the driving models is usually called added value, and its evaluation is still scarce and controversial in the literature. Recently, some studies have proposed different methodologies, for different applications and processes, to characterize the added value of specific RCMs. A number of examples reveal that some RCMs do add value to GCMs in some properties or regions, and also the opposite, highlighting that RCMs may add value to GCM results, but improvements depend basically on the type of application, model setup, atmospheric property and location. Precipitation can be characterized by histograms of daily precipitation, also known as probability density functions (PDFs). There are different strategies to evaluate the quality of both GCMs and RCMs in describing the precipitation PDFs when compared to observations.
Here, we present a new method to measure the PDF added value obtained from dynamical downscaling, based on simple PDF skill scores. The measure can assess the full quality of the PDFs and at the same time integrates a flexible way to weight the PDF tails differently. In this study we apply this method to characterize the PDF added value of a high-resolution simulation with the WRF model. Results are taken from a WRF climate simulation centred on the Iberian Peninsula with two nested grids, a larger one at 27 km and a smaller one at 9 km, forced by ERA-Interim. The observational data used range from rain-gauge precipitation records to regular observational grids of daily precipitation. Two regular gridded precipitation datasets are used: a Portuguese precipitation grid developed at 0.2° × 0.2° from observed daily rain-gauge precipitation, and the ENSEMBLES observational gridded dataset for Europe, which includes daily precipitation values at 0.25°. The analysis shows an important PDF added value from the higher-resolution simulation, regarding both the full PDF and the extremes. The method has clear potential to be applied to other simulation exercises and to evaluate other variables.
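A PDF skill score of the kind described, an overlap measure between modelled and observed precipitation histograms with optional tail weighting, might look like the following sketch (illustrative only; the paper's exact score and weighting scheme are not reproduced here):

```python
import numpy as np

def pdf_skill_score(model, obs, bins=50, tail_weight=1.0):
    """Perkins-type overlap score between two empirical PDFs: 1 for
    identical distributions, < 1 otherwise. tail_weight > 1 emphasizes
    agreement in the upper (extreme) bins."""
    lo = min(model.min(), obs.min())
    hi = max(model.max(), obs.max())
    edges = np.linspace(lo, hi, bins + 1)
    fm = np.histogram(model, edges)[0] / model.size   # model bin frequencies
    fo = np.histogram(obs, edges)[0] / obs.size       # observed bin frequencies
    w = np.ones(bins)
    w[bins // 2:] = tail_weight                       # optional tail emphasis
    return float(np.sum(w * np.minimum(fm, fo)) / np.sum(w * fo))
```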
Model-based Bayesian inference for ROC data analysis
NASA Astrophysics Data System (ADS)
Lei, Tianhu; Bae, K. Ty
2013-03-01
This paper presents a study of model-based Bayesian inference applied to Receiver Operating Characteristic (ROC) data. The model is a simple version of a general non-linear regression model. Unlike the Dorfman model, it uses a probit link function with a zero-one covariate variable to express the binormal distributions in a single formula. The model also includes a scale parameter. Bayesian inference is implemented by the Markov chain Monte Carlo (MCMC) method carried out by Bayesian analysis Using Gibbs Sampling (BUGS). In contrast to classical statistical theory, the Bayesian approach treats model parameters as random variables characterized by prior distributions. With a substantial number of simulated samples generated by the sampling algorithm, the posterior distributions of the parameters, and hence the parameters themselves, can be accurately estimated. MCMC-based BUGS adopts the Adaptive Rejection Sampling (ARS) protocol, which requires that the probability density function (pdf) from which samples are drawn be log-concave with respect to the targeted parameters. Our study corrects a common misconception and proves that the pdf of this regression model is log-concave with respect to its scale parameter. Therefore, ARS's requirement is satisfied and a Gaussian prior, which is conjugate and possesses many analytic and computational advantages, is assigned to the scale parameter. A cohort of 20 simulated data sets and 20 simulations from each data set are used in our study. Output analysis and convergence diagnostics for the MCMC method are assessed with the CODA package. Models and methods using a continuous Gaussian prior and a discrete categorical prior are compared. Intensive simulations and performance measures are given to illustrate our practice in the framework of model-based Bayesian inference using the MCMC method.
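The log-concavity requirement of ARS can be checked numerically by testing the second differences of log f on a uniform parameter grid. A generic sketch (the paper's result is an analytic proof; this is only a numerical sanity check):

```python
import math

def is_log_concave(f, grid):
    """Numerically check that log f is concave on a uniformly spaced grid,
    i.e. all discrete second differences of log f are non-positive --
    the condition Adaptive Rejection Sampling requires."""
    logs = [math.log(f(t)) for t in grid]
    return all(logs[i - 1] - 2.0 * logs[i] + logs[i + 1] <= 1e-9
               for i in range(1, len(logs) - 1))
```

The unnormalized Gaussian passes everywhere, while a Cauchy density fails in its tails, which is why ARS cannot be applied to it directly.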
Q-Space Truncation and Sampling in Diffusion Spectrum Imaging
Tian, Qiyuan; Rokem, Ariel; Folkerth, Rebecca D.; Nummenmaa, Aapo; Fan, Qiuyun; Edlow, Brian L.; McNab, Jennifer A.
2015-01-01
Purpose To characterize the q-space truncation and sampling on the spin-displacement probability density function (PDF) in diffusion spectrum imaging (DSI). Methods DSI data were acquired using the MGH-USC connectome scanner (Gmax=300mT/m) with bmax=30,000s/mm2, 17×17×17, 15×15×15 and 11×11×11 grids in ex vivo human brains and bmax=10,000s/mm2, 11×11×11 grid in vivo. An additional in vivo scan using bmax=7,000s/mm2, 11×11×11 grid was performed with a derated gradient strength of 40mT/m. PDFs and orientation distribution functions (ODFs) were reconstructed with different q-space filtering and PDF integration lengths, and from down-sampled data by factors of two and three. Results Both ex vivo and in vivo data showed Gibbs ringing in PDFs, which becomes the main source of artifact in the subsequently reconstructed ODFs. For down-sampled data, PDFs interfere with the first replicas or their ringing, leading to obscured orientations in ODFs. Conclusion The minimum required q-space sampling density corresponds to a field-of-view approximately equal to twice the mean displacement distance (MDD) of the tissue. The 11×11×11 grid is suitable for both ex vivo and in vivo DSI experiments. To minimize the effects of Gibbs ringing, ODFs should be reconstructed from unfiltered q-space data with the integration length over the PDF constrained to around the MDD. PMID:26762670
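The effect of q-space truncation can be reproduced in one dimension: inverse-transforming a Gaussian diffusion signal cut off at a small qmax yields a displacement PDF with Gibbs ringing (negative lobes), while a generous qmax recovers the true Gaussian. An illustrative sketch with arbitrary parameters, not the acquisition settings above:

```python
import numpy as np

def displacement_pdf(qmax, sigma=1.0, nq=64, x=None):
    """Reconstruct a 1-D spin-displacement PDF from the Gaussian diffusion
    signal E(q) = exp(-2*pi^2*q^2*sigma^2) sampled on [-qmax, qmax].
    Hard q-space truncation (small qmax) produces Gibbs ringing: the
    reconstructed PDF oscillates and goes negative."""
    if x is None:
        x = np.linspace(-5.0 * sigma, 5.0 * sigma, 401)
    q = np.linspace(-qmax, qmax, nq)
    dq = q[1] - q[0]
    s = np.exp(-2.0 * np.pi**2 * q**2 * sigma**2)
    # inverse Fourier transform of the truncated signal (Riemann sum)
    return (s[:, None] * np.cos(2.0 * np.pi * q[:, None] * x)).sum(axis=0) * dq
```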
Manshour, Pouya; Ghasemi, Fatemeh; Matsumoto, T; Gómez, J; Sahimi, Muhammad; Peinke, J; Pacheco, A F; Tabar, M Reza Rahimi
2010-09-01
High-quality measurements of seismic activity around the world provide a wealth of data and information relevant to understanding when earthquakes may occur. If viewed as complex stochastic time series, such data may be analyzed by methods that provide deeper insights into their nature, hence leading to better understanding of the data and their possible implications for earthquakes. In this paper, we provide further evidence for our recent proposal [P. Manshour, Phys. Rev. Lett. 102, 014101 (2009)10.1103/PhysRevLett.102.014101] for the existence of a transition in the shape of the probability density function (PDF) of the successive detrended increments of the stochastic fluctuations of Earth's vertical velocity V_{z}, collected by broadband stations before moderate and large earthquakes. To demonstrate the transition, we carried out extensive analysis of the V_{z} data for 12 earthquakes in several regions around the world, including the recent catastrophic one in Haiti. The analysis supports the hypothesis that before and near the time of an earthquake, the shape of the PDF undergoes significant and discernible changes, which can be characterized quantitatively. The typical time over which the PDF undergoes the transition is about 5-10 h prior to a moderate or large earthquake.
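One simple scalar for tracking such PDF shape changes is the excess kurtosis of the detrended increments: zero for a Gaussian, positive for the fat-tailed shapes seen near an event. A sketch (illustrative only, not the authors' analysis pipeline):

```python
import numpy as np

def increment_kurtosis(v, lag=1):
    """Excess kurtosis of the (mean-detrended) increments of a time series --
    a single number summarizing departures of the increment PDF from
    Gaussianity."""
    inc = v[lag:] - v[:-lag]
    inc = inc - inc.mean()
    m2 = (inc**2).mean()
    m4 = (inc**4).mean()
    return m4 / m2**2 - 3.0
```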
Choi, Charles; Cao, Guan; Tanenhaus, Anne K; McCarthy, Ellena V; Jung, Misun; Schleyer, William; Shang, Yuhua; Rosbash, Michael; Yin, Jerry C P; Nitabach, Michael N
2012-08-30
Drosophila melanogaster flies concentrate behavioral activity around dawn and dusk. This organization of daily activity is controlled by central circadian clock neurons, including the lateral-ventral pacemaker neurons (LN(v)s) that secrete the neuropeptide PDF (pigment dispersing factor). Previous studies have demonstrated the requirement for PDF signaling to PDF receptor (PDFR)-expressing dorsal clock neurons in organizing circadian activity. Although LN(v)s also express functional PDFR, the role of these autoreceptors has remained enigmatic. Here, we show that (1) PDFR activation in LN(v)s shifts the balance of circadian activity from evening to morning, similar to behavioral responses to summer-like environmental conditions, and (2) this shift is mediated by stimulation of the Gα,s-cAMP pathway and a consequent change in PDF/neurotransmitter corelease from the LN(v)s. These results suggest another mechanism for environmental control of the allocation of circadian activity and provide new general insight into the role of neuropeptide autoreceptors in behavioral control circuits. Copyright © 2012 The Authors. Published by Elsevier Inc. All rights reserved.
Lyke, Stephen D; Voelz, David G; Roggemann, Michael C
2009-11-20
The probability density function (PDF) of aperture-averaged irradiance fluctuations is calculated from wave-optics simulations of a laser after propagating through atmospheric turbulence to investigate the evolution of the distribution as the aperture diameter is increased. The simulation data distribution is compared to theoretical gamma-gamma and lognormal PDF models under a variety of scintillation regimes from weak to strong. Results show that under weak scintillation conditions both the gamma-gamma and lognormal PDF models provide a good fit to the simulation data for all aperture sizes studied. Our results indicate that in moderate scintillation the gamma-gamma PDF provides a better fit to the simulation data than the lognormal PDF for all aperture sizes studied. In the strong scintillation regime, the simulation data distribution is gamma-gamma for aperture sizes much smaller than the coherence radius rho0 and lognormal for aperture sizes on the order of rho0 and larger. Examples of how these results affect the bit-error rate of an on-off-keyed free-space optical communication link are presented.
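A lognormal irradiance PDF matched to the mean irradiance and scintillation index, one of the two models compared above, follows from moment matching. A sketch (the gamma-gamma counterpart, which requires modified Bessel functions, is omitted; parameter names are illustrative):

```python
import numpy as np

def lognormal_pdf_from_moments(I, mean_I, si2):
    """Lognormal irradiance PDF parameterized by the mean irradiance and
    the scintillation index sigma_I^2, via moment matching:
    log-variance s2 = ln(1 + sigma_I^2), log-mean mu = ln(<I>) - s2/2."""
    s2 = np.log(1.0 + si2)
    mu = np.log(mean_I) - 0.5 * s2
    return np.exp(-(np.log(I) - mu)**2 / (2.0 * s2)) / (I * np.sqrt(2.0 * np.pi * s2))
```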
Polynomial Chaos Based Acoustic Uncertainty Predictions from Ocean Forecast Ensembles
NASA Astrophysics Data System (ADS)
Dennis, S.
2016-02-01
Most significant ocean acoustic propagation occurs over tens of kilometers, at scales small compared to the basin and to most fine-scale ocean modeling. To address the increased emphasis on uncertainty quantification, for example transmission loss (TL) probability density functions (PDFs) within some radius, a polynomial chaos (PC) based method is utilized. In order to capture uncertainty in ocean modeling, the Navy Coastal Ocean Model (NCOM) now includes ensembles distributed to reflect the ocean analysis statistics. Since the ensembles are included in the data assimilation for the new forecast ensembles, the acoustic modeling uses the ensemble predictions in a similar fashion for creating sound-speed distributions over an acoustically relevant domain. Within an acoustic domain, singular value decomposition over the combined time-space structure of the sound speeds can be used to create Karhunen-Loève expansions of sound speed, subject to multivariate normality testing. These sound-speed expansions serve as a basis for Hermite polynomial chaos expansions of derived quantities, in particular TL. The PC expansion coefficients result from so-called non-intrusive methods, involving evaluation of TL at multi-dimensional Gauss-Hermite quadrature collocation points. Traditional TL calculation from standard acoustic propagation modeling could be prohibitively time consuming at all multi-dimensional collocation points. This method employs Smolyak order and gridding methods to allow adaptive sub-sampling of the collocation points to determine only the most significant PC expansion coefficients to within a preset tolerance. Practically, the Smolyak order and grid sizes grow only polynomially in the number of Karhunen-Loève terms, alleviating the curse of dimensionality. The resulting TL PC coefficients allow the determination of TL PDF normality and its mean and standard deviation. In the non-normal case, PC Monte Carlo methods are used to rapidly establish the PDF.
This work was sponsored by the Office of Naval Research
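The singular-value-decomposition step that builds a Karhunen-Loève expansion from an ensemble of sound-speed fields can be sketched as follows (shapes and names are illustrative, not the project's code):

```python
import numpy as np

def karhunen_loeve(ensemble, n_modes):
    """Karhunen-Loeve expansion of an ensemble via SVD.
    ensemble: (n_members, n_samples) array, rows = ensemble members,
    columns = flattened space-time samples of sound speed."""
    mean = ensemble.mean(axis=0)
    u, s, vt = np.linalg.svd(ensemble - mean, full_matrices=False)
    modes = vt[:n_modes]                      # orthonormal KL modes
    coeffs = (ensemble - mean) @ modes.T      # per-member KL coefficients
    return mean, modes, coeffs
```

Retaining all modes reconstructs the ensemble exactly; truncating to the leading modes gives the low-dimensional random inputs on which the Hermite polynomial chaos expansion is built.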
Efficient Statistically Accurate Algorithms for the Fokker-Planck Equation in Large Dimensions
NASA Astrophysics Data System (ADS)
Chen, N.; Majda, A.
2017-12-01
Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat tailed highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. Particularly, the parametric method, which is based on an effective data assimilation framework, provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace. Therefore, it is computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from the traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has a significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. 
It is shown in a stringent set of test problems that the method only requires an order of O(100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6 dimensions with only small errors.
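The full non-Gaussian PDF above is represented as a Gaussian mixture; evaluating such a mixture is straightforward. A minimal 1-D stand-in for the high-dimensional conditional mixture (illustrative only):

```python
import numpy as np

def gaussian_mixture_pdf(x, means, variances, weights):
    """Evaluate a 1-D Gaussian-mixture PDF: sum_j w_j N(x; mu_j, var_j).
    This is the functional form in which the hybrid algorithm represents
    the full non-Gaussian density."""
    x = np.asarray(x, dtype=float)[..., None]
    comp = np.exp(-(x - means)**2 / (2.0 * variances)) / np.sqrt(2.0 * np.pi * variances)
    return comp @ weights
```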
CT14 intrinsic charm parton distribution functions from CTEQ-TEA global analysis
NASA Astrophysics Data System (ADS)
Hou, Tie-Jiun; Dulat, Sayipjamal; Gao, Jun; Guzzi, Marco; Huston, Joey; Nadolsky, Pavel; Schmidt, Carl; Winter, Jan; Xie, Keping; Yuan, C.-P.
2018-02-01
We investigate the possibility of a (sizable) nonperturbative contribution to the charm parton distribution function (PDF) in a nucleon, theoretical issues arising in its interpretation, and its potential impact on LHC scattering processes. The "fitted charm" PDF obtained in various QCD analyses contains a process-dependent component that is partly traced to power-suppressed radiative contributions in DIS and is generally different at the LHC. We discuss separation of the universal component of the nonperturbative charm from the rest of the radiative contributions and estimate its magnitude in the CT14 global QCD analysis at the next-to-next-to leading order in the QCD coupling strength, including the latest experimental data from HERA and the Large Hadron Collider. Models for the nonperturbative charm PDF are examined as a function of the charm quark mass and other parameters. The prospects for testing these models in the associated production of a Z boson and a charm jet at the LHC are studied under realistic assumptions, including effects of the final-state parton showering.
Xiao, Qiyang; Li, Jian; Bai, Zhiliang; Sun, Jiedi; Zhou, Nan; Zeng, Zhoumo
2016-12-13
In this study, a small leak detection method based on variational mode decomposition (VMD) and ambiguity correlation classification (ACC) is proposed. The signals acquired from sensors were decomposed using the VMD, and numerous components were obtained. According to the probability density function (PDF), an adaptive de-noising algorithm based on VMD is proposed for noise component processing and de-noised components reconstruction. Furthermore, the ambiguity function image was employed for analysis of the reconstructed signals. Based on the correlation coefficient, ACC is proposed to detect the small leak of pipeline. The analysis of pipeline leakage signals, using 1 mm and 2 mm leaks, has shown that proposed detection method can detect a small leak accurately and effectively. Moreover, the experimental results have shown that the proposed method achieved better performances than support vector machine (SVM) and back propagation neural network (BP) methods.
PDF-1 neuropeptide signaling modulates a neural circuit for mate-searching behavior in C. elegans.
Barrios, Arantza; Ghosh, Rajarshi; Fang, Chunhui; Emmons, Scott W; Barr, Maureen M
2012-12-01
Appetitive behaviors require complex decision making that involves the integration of environmental stimuli and physiological needs. C. elegans mate searching is a male-specific exploratory behavior regulated by two competing needs: food and reproductive appetite. We found that the pigment dispersing factor receptor (PDFR-1) modulates the circuit that encodes the male reproductive drive that promotes male exploration following mate deprivation. PDFR-1 and its ligand, PDF-1, stimulated mate searching in the male, but not in the hermaphrodite. pdf-1 was required in the gender-shared interneuron AIM, and the receptor acted in internal and external environment-sensing neurons of the shared nervous system (URY, PQR and PHA) to produce mate-searching behavior. Thus, the pdf-1 and pdfr-1 pathway functions in non-sex-specific neurons to produce a male-specific, goal-oriented exploratory behavior. Our results indicate that secretin neuropeptidergic signaling is involved in regulating motivational internal states.
A critical appraisal and evaluation of modern PDFs
Accardi, A.; Alekhin, S.; Blumlein, J.; ...
2016-08-23
Here, we review the present status in the determination of parton distribution functions (PDFs) in the light of the precision requirements for the LHC in Run 2 as well as other future colliders. We provide brief descriptions of all currently available PDF sets and use them to compute cross sections for a number of benchmark processes including the Higgs boson production in gluon-gluon fusion at the LHC. We show that the differences in the predictions obtained with the various PDFs are due to particular theory assumptions such as the heavy-flavor schemes used in the PDF fits, the account of power corrections and others. We comment on PDF uncertainties in the kinematic region covered by the LHC and on averaging procedures for PDFs, such as realized by the PDF4LHC15 sets. As a result, we provide recommendations for the usage of sets of PDFs for theory predictions at the LHC.
Under-sampling trajectory design for compressed sensing based DCE-MRI.
Liu, Duan-duan; Liang, Dong; Zhang, Na; Liu, Xin; Zhang, Yuan-ting
2013-01-01
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) needs high temporal and spatial resolution to accurately estimate quantitative parameters and characterize tumor vasculature. Compressed sensing (CS) has the potential to satisfy both requirements. However, the randomness in a CS under-sampling trajectory designed using the traditional variable-density (VD) scheme may translate to uncertainty in kinetic parameter estimation when high reduction factors are used. Therefore, accurate parameter estimation using the VD scheme usually needs multiple adjustments of the parameters of the probability density function (PDF), and multiple reconstructions even with a fixed PDF, which is inapplicable for DCE-MRI. In this paper, an under-sampling trajectory design which is robust to changes in the PDF parameters and to randomness with a fixed PDF is studied. The strategy is to adaptively segment k-space into low- and high-frequency domains, and to apply the VD scheme only in the high-frequency domain. Simulation results demonstrate high accuracy and robustness compared to the VD design.
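The segmented idea, a fully sampled low-frequency core with variable-density random sampling elsewhere, can be sketched for a 1-D phase-encode mask. All parameter choices below (polynomial-decay PDF, core fraction) are illustrative, not the paper's settings:

```python
import numpy as np

def vd_mask(n, accel=4, power=3, core_frac=0.1, seed=0):
    """1-D variable-density under-sampling mask: a fully sampled
    low-frequency core plus random high-frequency samples drawn from a
    polynomial-decay sampling PDF, scaled so the expected total number of
    samples matches n // accel."""
    rng = np.random.default_rng(seed)
    k = np.abs(np.arange(n) - n // 2) / (n // 2)   # normalized |k| in [0, 1]
    core = k <= core_frac                          # always-sampled center
    pdf = (1.0 - k) ** power                       # decaying sampling density
    pdf = np.where(core, 0.0, pdf)
    target = n // accel - core.sum()               # samples left for the rest
    pdf *= target / pdf.sum()
    return core | (rng.random(n) < np.clip(pdf, 0.0, 1.0))
```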
Berg, Alexander K; Manokaran, Sumathra; Eiler, Daniel; Kooren, Joel; Mallik, Sanku; Srivastava, D K
2008-01-01
Peptide deformylase (PDF) catalyzes the removal of formyl group from the N-terminal methionine residues of nascent proteins in prokaryotes, and this enzyme is a high priority target for antibiotic design. In pursuit of delineating the structural-functional features of Escherichia coli PDF (EcPDF), we investigated the mechanistic pathway for the guanidinium chloride (GdmCl)-induced unfolding of the enzyme by monitoring the secondary structural changes via CD spectroscopy. The experimental data revealed that EcPDF is a highly stable enzyme, and it undergoes slow denaturation in the presence of varying concentrations of GdmCl. The most interesting aspect of these studies has been the abrupt reversal of the unfolding pathway at low to moderate concentrations of the denaturant, but not at high concentration. An energetic rationale for such an unprecedented feature in protein chemistry is offered.
An LES-PBE-PDF approach for modeling particle formation in turbulent reacting flows
NASA Astrophysics Data System (ADS)
Sewerin, Fabian; Rigopoulos, Stelios
2017-10-01
Many chemical and environmental processes involve the formation of a polydispersed particulate phase in a turbulent carrier flow. Frequently, the immersed particles are characterized by an intrinsic property such as the particle size, and the distribution of this property across a sample population is taken as an indicator for the quality of the particulate product or its environmental impact. In the present article, we propose a comprehensive model and an efficient numerical solution scheme for predicting the evolution of the property distribution associated with a polydispersed particulate phase forming in a turbulent reacting flow. Here, the particulate phase is described in terms of the particle number density whose evolution in both physical and particle property space is governed by the population balance equation (PBE). Based on the concept of large eddy simulation (LES), we augment the existing LES-transported probability density function (PDF) approach for fluid phase scalars by the particle number density and obtain a modeled evolution equation for the filtered PDF associated with the instantaneous fluid composition and particle property distribution. This LES-PBE-PDF approach allows us to predict the LES-filtered fluid composition and particle property distribution at each spatial location and point in time without any restriction on the chemical or particle formation kinetics. In view of a numerical solution, we apply the method of Eulerian stochastic fields, invoking an explicit adaptive grid technique in order to discretize the stochastic field equation for the number density in particle property space. In this way, sharp moving features of the particle property distribution can be accurately resolved at a significantly reduced computational cost. As a test case, we consider the condensation of an aerosol in a developed turbulent mixing layer. 
Our investigation not only demonstrates the predictive capabilities of the LES-PBE-PDF model but also indicates the computational efficiency of the numerical solution scheme.
The pdf approach to turbulent polydispersed two-phase flows
NASA Astrophysics Data System (ADS)
Minier, Jean-Pierre; Peirano, Eric
2001-10-01
The purpose of this paper is to develop a probabilistic approach to turbulent polydispersed two-phase flows. The two-phase flows considered are composed of a continuous phase, which is a turbulent fluid, and a dispersed phase, which represents an ensemble of discrete particles (solid particles, droplets or bubbles). Gathering the difficulties of turbulent flows and of particle motion, the challenge is to work out a general modelling approach that meets three requirements: to treat accurately the physically relevant phenomena, to provide enough information to address issues of complex physics (combustion, polydispersed particle flows, …) and to remain tractable for general non-homogeneous flows. The present probabilistic approach models the statistical dynamics of the system and consists in simulating the joint probability density function (pdf) of a number of fluid and discrete particle properties. A new point is that both the fluid and the particles are included in the pdf description. The derivation of the joint pdf model for the fluid and for the discrete particles is worked out in several steps. The mathematical properties of stochastic processes are first recalled. The various hierarchies of pdf descriptions are detailed and the physical principles that are used in the construction of the models are explained. The Lagrangian one-particle probabilistic description is developed first for the fluid alone, then for the discrete particles and finally for the joint fluid and particle turbulent systems. In the case of the probabilistic description for the fluid alone or for the discrete particles alone, numerical computations are presented and discussed to illustrate how the method works in practice and the kind of information that can be extracted from it. Comments on the current modelling state and propositions for future investigations which try to link the present work with other ideas in physics are made at the end of the paper.
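The elementary building block of such one-particle pdf methods is a Langevin (Ornstein-Uhlenbeck) model for the fluid velocity sampled along a trajectory. A minimal sketch with an explicit Euler discretization (time scale and rms velocity are illustrative parameters, not the paper's model constants):

```python
import numpy as np

def langevin_velocity(n_steps, dt, tau, sigma, seed=0):
    """Minimal Langevin (Ornstein-Uhlenbeck) model for one velocity
    component along a particle path: du = -u dt/tau + sigma*sqrt(2dt/tau) dW.
    tau is the integral time scale, sigma the stationary rms velocity."""
    rng = np.random.default_rng(seed)
    noise = sigma * np.sqrt(2.0 * dt / tau) * rng.standard_normal(n_steps)
    u = np.zeros(n_steps)
    for i in range(1, n_steps):
        u[i] = u[i - 1] * (1.0 - dt / tau) + noise[i]
    return u
```

The stationary statistics of the simulated path (zero mean, variance sigma²) are exactly the one-point pdf information such models carry.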
New Tools to Convert PDF Math Contents into Accessible e-Books Efficiently.
Suzuki, Masakazu; Terada, Yugo; Kanahori, Toshihiro; Yamaguchi, Katsuhito
2015-01-01
New features in our math-OCR software to convert PDF math contents into accessible e-books are shown. A method for recognizing PDF is thoroughly improved. In addition, contents in any selected area including math formulas in a PDF file can be cut and pasted into a document in various accessible formats, which is automatically recognized and converted into texts and accessible math formulas through this process. Combining it with our authoring tool for a technical document, one can easily produce accessible e-books in various formats such as DAISY, accessible EPUB3, DAISY-like HTML5, Microsoft Word with math objects and so on. Those contents are useful for various print-disabled students ranging from the blind to the dyslexic.
Ubiquity of Benford's law and emergence of the reciprocal distribution
Friar, James Lewis; Goldman, Terrance J.; Pérez-Mercader, J.
2016-04-07
In this paper, we apply the Law of Total Probability to the construction of scale-invariant probability distribution functions (pdfs), and require that probability measures be dimensionless and unitless under a continuous change of scales. If the scale-change distribution function is scale invariant then the constructed distribution will also be scale invariant. Repeated application of this construction on an arbitrary set of (normalizable) pdfs results again in scale-invariant distributions. The invariant function of this procedure is given uniquely by the reciprocal distribution, suggesting a kind of universality. Finally, we separately demonstrate that the reciprocal distribution results uniquely from requiring maximum entropy for size-class distributions with uniform bin sizes.
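The reciprocal (log-uniform) distribution singled out by this abstract is exactly the density behind Benford's first-digit law. As an illustrative check (a sketch of the mathematics, not code from the paper), integrating p(x) = 1/(x ln 10^decades) over the sub-intervals whose mantissa starts with digit d recovers log10(1 + 1/d):

```python
import math

def benford_prob(d):
    """Benford first-digit probability: log10(1 + 1/d)."""
    return math.log10(1.0 + 1.0 / d)

def reciprocal_first_digit(d, decades=6):
    """Probability that a variate drawn from the reciprocal pdf
    p(x) = 1/(x * ln(10**decades)) on [1, 10**decades) has first digit d.
    Uses the exact integral of dx/x = ln(hi/lo) over each decade's sub-interval."""
    norm = decades * math.log(10.0)
    total = 0.0
    for k in range(decades):
        lo, hi = d * 10.0 ** k, (d + 1) * 10.0 ** k
        total += math.log(hi / lo) / norm
    return total
```

Note that the result is independent of the number of decades spanned, which is the scale invariance the paper builds on.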
NASA Astrophysics Data System (ADS)
Kitamura, Naoto; Vogel, Sven C.; Idemoto, Yasushi
2013-06-01
In this work, we focused on La0.95Ba0.05Ga0.8Mg0.2O3-δ with the perovskite structure, and investigated the local structure around the oxygen vacancy by the pair distribution function (PDF) method and density functional theory (DFT) calculations. By comparing the G(r) simulated on the basis of the DFT calculations with the experimentally observed G(r), it was suggested that the oxygen vacancy is trapped by Ba2+ at the La3+ site, at least at room temperature. Such a defect association may be one of the reasons why La0.95Ba0.05Ga0.8Mg0.2O3-δ shows lower oxide-ion conductivity than (La,Sr)(Ga,Mg)O3-δ, which is widely used as an electrolyte for solid oxide fuel cells.
Probability density function learning by unsupervised neurons.
Fiori, S
2001-10-01
In a recent work, we introduced the concept of pseudo-polynomial adaptive activation function neuron (FAN) and presented an unsupervised information-theoretic learning theory for such structure. The learning model is based on entropy optimization and provides a way of learning probability distributions from incomplete data. The aim of the present paper is to illustrate some theoretical features of the FAN neuron, to extend its learning theory to asymmetrical density function approximation, and to provide an analytical and numerical comparison with other known density function estimation methods, with special emphasis to the universal approximation ability. The paper also provides a survey of PDF learning from incomplete data, as well as results of several experiments performed on real-world problems and signals.
Kinematic Characterization of Left Ventricular Chamber Stiffness and Relaxation
NASA Astrophysics Data System (ADS)
Mossahebi, Sina
Heart failure is the most common cause of hospitalization today, and diastolic heart failure accounts for 40-50% of cases. Therefore, it is critical to identify diastolic dysfunction at a subclinical stage so that appropriate therapy can be administered before ventricular function is further, and perhaps irreversibly, impaired. Basic concepts in physics such as kinematic modeling provide a unique method with which to characterize cardiovascular physiology, specifically diastolic function (DF). The advantage of an approach that is standard in physics, such as kinematic modeling, is its causal formulation, in contrast to the correlative approaches traditionally utilized in the life sciences. Our research group has pioneered theoretical and experimental quantitative analysis of DF in humans, using both non-invasive (echocardiography, cardiac MRI) and invasive (simultaneous catheterization-echocardiography) methods. Our group developed and validated the Parametrized Diastolic Filling (PDF) formalism, which is motivated by basic physiologic principles (the LV is a mechanical suction pump at mitral valve opening) that obey Newton's laws. The PDF formalism is a kinematic model of filling employing an equation of motion whose solution accurately predicts all E-wave contours in accordance with the rules of damped harmonic oscillatory motion. The equation's lumped parameters (ventricular stiffness, ventricular viscoelasticity/relaxation and ventricular load) are obtained by solving the 'inverse problem'. The parameters' physiologic significance and clinical utility have been repeatedly demonstrated in multiple clinical settings. In this work we apply our kinematic modeling approach to better understand how the heart works as it fills, in order to advance the relationship between physiology and mathematical modeling.
Through the use of this modeling, we thereby define and validate novel, causal indexes of diastolic function such as early rapid filling energy, diastatic stiffness, and relaxation and stiffness components of E-wave deceleration time.
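The E-wave contour predicted by the PDF formalism is the velocity transient of a damped harmonic oscillator released from rest. A minimal sketch of the underdamped solution follows; the parameter values for stiffness k, damping c and initial displacement x0 are illustrative placeholders, not clinically fitted values:

```python
import math

def e_wave_velocity(t, k=200.0, c=18.0, x0=0.12):
    """Velocity magnitude of a damped oscillator (unit mass) released from rest
    at displacement x0: |v(t)| = (x0 * k / w) * exp(-c*t/2) * sin(w*t),
    where w = sqrt(k - c**2 / 4) is the damped angular frequency.
    This is the contour shape the PDF formalism fits to measured E-waves."""
    w = math.sqrt(k - c * c / 4.0)
    return (x0 * k / w) * math.exp(-c * t / 2.0) * math.sin(w * t)
```

Solving the inverse problem, i.e. fitting this contour to an observed E-wave, returns the stiffness, relaxation/viscoelasticity and load parameters discussed above.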
Local and average structure of Mn- and La-substituted BiFeO{sub 3}
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Bo; Selbach, Sverre M., E-mail: selbach@ntnu.no
2017-06-15
The local and average structure of solid solutions of the multiferroic perovskite BiFeO{sub 3} is investigated by synchrotron X-ray diffraction (XRD) and electron density functional theory (DFT) calculations. The average experimental structure is determined by Rietveld refinement and the local structure by total scattering data analyzed in real space with the pair distribution function (PDF) method. With equal concentrations of La on the Bi site or Mn on the Fe site, La causes larger structural distortions than Mn. Structural models based on DFT relaxed geometry give an improved fit to experimental PDFs compared to models constrained by the space group symmetry. Berry phase calculations predict a higher ferroelectric polarization than the experimental literature values, reflecting that structural disorder is not captured in either average structure space group models or DFT calculations with artificial long range order imposed by periodic boundary conditions. Only by including point defects in a supercell, here Bi vacancies, can DFT calculations reproduce the literature results on the structure and ferroelectric polarization of Mn-substituted BiFeO{sub 3}. The combination of local and average structure sensitive experimental methods with DFT calculations is useful for illuminating the structure-property-composition relationships in complex functional oxides with local structural distortions. - Graphical abstract: The experimental and simulated partial pair distribution functions (PDF) for BiFeO{sub 3}, BiFe{sub 0.875}Mn{sub 0.125}O{sub 3}, BiFe{sub 0.75}Mn{sub 0.25}O{sub 3} and Bi{sub 0.9}La{sub 0.1}FeO{sub 3}.
The landscape of W± and Z bosons produced in pp collisions up to LHC energies
NASA Astrophysics Data System (ADS)
Basso, Eduardo; Bourrely, Claude; Pasechnik, Roman; Soffer, Jacques
2017-10-01
We consider a selection of recent experimental results on electroweak W±, Z gauge boson production in pp collisions at BNL RHIC and CERN LHC energies in comparison to predictions of perturbative QCD calculations based on different sets of NLO parton distribution functions, including the statistical PDF model known from fits to the DIS data. We show that the current statistical PDF parametrization (fitted to the DIS data only) underestimates the LHC data on W±, Z gauge boson production cross sections at the NLO by about 20%. This suggests that there is a need to refit the parameters of the statistical PDF including the latest LHC data.
Weak limit of the three-state quantum walk on the line
NASA Astrophysics Data System (ADS)
Falkner, Stefan; Boettcher, Stefan
2014-07-01
We revisit the one-dimensional discrete time quantum walk with three states and the Grover coin, the simplest model that exhibits localization in a quantum walk. We derive analytic expressions for the localization and a long-time approximation for the entire probability density function (PDF). We find the possibility for asymmetric localization to the extreme that it vanishes completely on one site of the initial conditions. We also connect the time-averaged approximation of the PDF found by Inui et al. [Phys. Rev. E 72, 056112 (2005), 10.1103/PhysRevE.72.056112] to a spatial average of the walk. We show that this smoothed approximation predicts moments of the real PDF accurately.
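A direct simulation of the three-state Grover walk makes the localization visible. The sketch below is illustrative; the initial coin state is chosen here as the 'stay' component, which need not match the paper's initial conditions:

```python
import numpy as np

def grover3_walk(steps=60):
    """Three-state discrete-time quantum walk on the line with the Grover coin.
    Coin basis: 0 = move left, 1 = stay, 2 = move right.
    Returns the position probability distribution after `steps` steps."""
    G = 2.0 / 3.0 * np.ones((3, 3)) - np.eye(3)    # Grover coin (unitary)
    n = 2 * steps + 1                               # positions -steps .. steps
    psi = np.zeros((n, 3), dtype=complex)
    psi[steps, 1] = 1.0                             # start at origin, 'stay' state
    for _ in range(steps):
        psi = psi @ G.T                             # apply the coin at every site
        shifted = np.zeros_like(psi)
        shifted[:-1, 0] = psi[1:, 0]                # left-moving component
        shifted[:, 1] = psi[:, 1]                   # stationary component
        shifted[1:, 2] = psi[:-1, 2]                # right-moving component
        psi = shifted
    return (np.abs(psi) ** 2).sum(axis=1)
```

With this symmetric start, the probability remaining at the origin does not decay to zero with time, which is the localization the paper analyzes.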
Probability density function of a puff dispersing from the wall of a turbulent channel
NASA Astrophysics Data System (ADS)
Nguyen, Quoc; Papavassiliou, Dimitrios
2015-11-01
Study of dispersion of passive contaminants in turbulence has proved to be helpful in understanding fundamental heat and mass transfer phenomena. Many simulation and experimental works have been carried out to locate and track motions of scalar markers in a flow. One method is to combine Direct Numerical Simulation (DNS) and Lagrangian Scalar Tracking (LST) to record locations of markers. While this has proved to be useful, high computational cost remains a concern. In this study, we develop a model that could reproduce results obtained by DNS and LST for turbulent flow. Puffs of markers with different Schmidt numbers were released into a flow field at a frictional Reynolds number of 150. The point of release was at the channel wall, so that both diffusion and convection contribute to the puff dispersion pattern, defining different stages of dispersion. Based on outputs from DNS and LST, we seek the most suitable and feasible probability density function (PDF) that represents distribution of markers in the flow field. The PDF would play a significant role in predicting heat and mass transfer in wall turbulence, and would prove to be helpful where DNS and LST are not always available.
Probability Distribution Extraction from TEC Estimates based on Kernel Density Estimation
NASA Astrophysics Data System (ADS)
Demir, Uygar; Toker, Cenk; Çenet, Duygu
2016-07-01
Statistical analysis of the ionosphere, specifically the Total Electron Content (TEC), may reveal important information about its temporal and spatial characteristics. One of the core metrics that express the statistical properties of a stochastic process is its Probability Density Function (pdf). Furthermore, statistical parameters such as mean, variance and kurtosis, which can be derived from the pdf, may provide information about the spatial uniformity or clustering of the electron content. For example, the variance differentiates between a quiet ionosphere and a disturbed one, whereas kurtosis differentiates between a geomagnetic storm and an earthquake. Therefore, valuable information about the state of the ionosphere (and the natural phenomena that cause the disturbance) can be obtained by looking at the statistical parameters. In the literature, there are publications which try to fit the histogram of TEC estimates to some well-known pdfs such as the Gaussian, exponential, etc. However, constraining a histogram to fit a function with a fixed shape will increase estimation error, and all the information extracted from such a pdf will continue to contain this error. In such techniques, it is highly likely to observe artificial characteristics in the estimated pdf which are not present in the original data. In the present study, we use the Kernel Density Estimation (KDE) technique to estimate the pdf of the TEC. KDE is a non-parametric approach which does not impose a specific form on the TEC. As a result, better pdf estimates that almost perfectly fit the observed TEC values can be obtained as compared to the techniques mentioned above. KDE is particularly good at representing tail probabilities and outliers. We also calculate the mean, variance and kurtosis of the measured TEC values.
The technique is applied to the ionosphere over Turkey, where the TEC values are estimated from GNSS measurements from the TNPGN-Active (Turkish National Permanent GNSS Network) network. This study is supported by TUBITAK 115E915 and joint TUBITAK 114E092 and AS CR14/001 projects.
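The non-parametric estimate described above can be sketched with a minimal Gaussian-kernel KDE (an illustrative sketch, not the authors' implementation; Silverman's rule-of-thumb bandwidth is assumed as the default):

```python
import math

def gaussian_kde(samples, h=None):
    """Return a pdf estimate f(x) = (1/(n*h)) * sum_i K((x - x_i)/h) with a
    Gaussian kernel K. Bandwidth defaults to Silverman's rule of thumb."""
    n = len(samples)
    if h is None:
        mean = sum(samples) / n
        sd = math.sqrt(sum((s - mean) ** 2 for s in samples) / (n - 1))
        h = 1.06 * sd * n ** (-0.2)     # Silverman's rule of thumb
    norm = n * h * math.sqrt(2.0 * math.pi)
    def f(x):
        return sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples) / norm
    return f
```

Unlike a fitted parametric histogram, the estimate integrates to one and follows the sample wherever it clusters, including in the tails.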
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, Suzannah R.; Woods, Keenan N.; Plassmeyer, Paul N.
Amorphous metal oxides are central to a variety of technological applications. In particular, indium gallium oxide has garnered attention as a thin-film transistor channel layer material. In this work we examine the structural evolution of indium gallium oxide gel-derived powders and thin films using infrared vibrational spectroscopy, X-ray diffraction, and pair distribution function (PDF) analysis of X-ray total scattering from standard and normal incidence thin-film geometries (tfPDF). We find that the gel-derived powders and films from the same aqueous precursor evolve differently with temperature, forming mixtures of Ga-substituted In2O3 and In-substituted β-Ga2O3 with different degrees of substitution. X-ray total scatteringmore » and PDF analysis indicate that the majority phase for both the powders and films is an amorphous/nanocrystalline β-Ga2O3 phase, with a minor constituent of In2O3 with significantly larger coherence lengths. This amorphous β-Ga2O3 phase could not be identified using the conventional Bragg diffraction techniques traditionally used to study crystalline metal oxide thin films. The combination of Bragg diffraction and tfPDF provides a much more complete description of film composition and structure, which can be used to detail the effect of processing conditions and structure–property relationships. This study also demonstrates how structural features of amorphous materials, traditionally difficult to characterize by standard diffraction, can be elucidated using tfPDF.« less
Stochastic static fault slip inversion from geodetic data with non-negativity and bound constraints
NASA Astrophysics Data System (ADS)
Nocquet, J.-M.
2018-07-01
Although the surface displacements observed by geodesy are linear combinations of slip on faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent the ill-posedness of the inversion is to add regularization constraints in terms of smoothing and/or damping so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary, and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems provides a rigorous framework where the a priori information about the searched parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip or double truncation to impose positivity and upper bounds on slip for interseismic modelling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a truncated multivariate normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulae for the single, 2-D or n-D marginal pdf. The semi-analytical formula involves the product of a Gaussian by an integral term that can be evaluated using recent developments in TMVN probability calculations. Posterior mean and covariance can also be efficiently derived. I show that the maximum posterior (MAP) can be obtained using a non-negative least-squares algorithm for the single truncated case or using the bounded-variable least-squares algorithm for the double truncated case. I show that the case of independent uniform priors can be approximated using TMVN.
The numerical equivalence to Bayesian inversions using Markov chain Monte Carlo (MCMC) sampling is shown for a synthetic example and a real case of interseismic modelling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach using MCMC sampling. First, the need for computing power is largely reduced. Second, unlike the Bayesian MCMC-based approach, the marginal pdf, mean, variance or covariance are obtained independently of one another. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the MAP is extremely fast.
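The MAP point under a non-negativity prior is a non-negative least-squares problem, as the abstract notes. A minimal projected-gradient sketch follows; the matrix G standing in for elastic Green's functions is a toy example, not a real fault discretization:

```python
import numpy as np

def nnls_pgd(G, d, iters=5000):
    """Projected-gradient solution of min ||G m - d||^2 subject to m >= 0."""
    L = np.linalg.norm(G, 2) ** 2          # Lipschitz constant of the gradient
    m = np.zeros(G.shape[1])
    for _ in range(iters):
        grad = G.T @ (G @ m - d)
        m = np.maximum(0.0, m - grad / L)  # gradient step, then project onto m >= 0
    return m
```

For the doubly truncated case the projection step would clamp to [0, upper] instead, mirroring the bounded-variable least-squares algorithm named above.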
Non-Gaussian behavior in jamming / unjamming transition in dense granular materials
NASA Astrophysics Data System (ADS)
Atman, A. P. F.; Kolb, E.; Combe, G.; Paiva, H. A.; Martins, G. H. B.
2013-06-01
Experiments on the penetration of a cylindrical intruder into a two-dimensional dense, disordered granular medium were reported recently, showing the jamming/unjamming transition. In the present work, we perform molecular dynamics simulations with the same geometry in order to assess both kinematic and static features of the jamming/unjamming transition. We study the statistics of the particle velocities in the neighborhood of the intruder to show that both experiments and simulations present the same qualitative behavior. We observe that the probability density functions (PDFs) of velocities deviate from Gaussian depending on the packing fraction of the granular assembly. In order to quantify these deviations we consider a q-Gaussian (Tsallis) function to fit the PDFs. The q-value can be an indication of the presence of long-range correlations in the system. We compare the fitted PDFs with those obtained using the stretched exponential, and sketch some conclusions concerning the nature of the correlations in a confined granular flow.
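The q-Gaussian used for these fits interpolates between a Gaussian (q → 1) and heavy-tailed shapes (q > 1). A small sketch of the unnormalized density (the β and q values below are illustrative, not fitted to the simulations):

```python
import math

def q_gaussian(x, q, beta=1.0):
    """Unnormalized Tsallis q-Gaussian: [1 - (1-q)*beta*x^2]^(1/(1-q)).
    Reduces to exp(-beta*x^2) as q -> 1; has compact support for q < 1."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(-beta * x * x)
    base = 1.0 - (1.0 - q) * beta * x * x
    if base <= 0.0:                      # outside the support when q < 1
        return 0.0
    return base ** (1.0 / (1.0 - q))
```

For q = 2 this is the Cauchy-like 1/(1 + βx²), so a fitted q well above 1 signals the heavy tails associated with long-range correlations.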
Density probability distribution functions of diffuse gas in the Milky Way
NASA Astrophysics Data System (ADS)
Berkhuijsen, E. M.; Fletcher, A.
2008-10-01
In a search for the signature of turbulence in the diffuse interstellar medium (ISM) in gas density distributions, we determined the probability distribution functions (PDFs) of the average volume densities of the diffuse gas. The densities were derived from dispersion measures and HI column densities towards pulsars and stars at known distances. The PDFs of the average densities of the diffuse ionized gas (DIG) and the diffuse atomic gas are close to lognormal, especially when lines of sight at |b| < 5° and |b| >= 5° are considered separately. The PDF of
Interactions of Gut Microbiota, Endotoxemia, Immune Function, and Diet in Exertional Heatstroke
Lee, Elaine C.; Armstrong, Elizabeth M.
2018-01-01
Exertional heatstroke (EHS) is a medical emergency that cannot be predicted, requires immediate whole-body cooling to reduce elevated internal body temperature, and is influenced by numerous host and environmental factors. Widely accepted predisposing factors (PDF) include prolonged or intense exercise, lack of heat acclimatization, sleep deprivation, dehydration, diet, alcohol abuse, drug use, chronic inflammation, febrile illness, older age, and nonsteroidal anti-inflammatory drug use. The present review links these factors to the human intestinal microbiota (IM) and diet, which previously have not been appreciated as PDF. This review also describes plausible mechanisms by which these PDF lead to EHS: endotoxemia resulting from elevated plasma lipopolysaccharide (i.e., a structural component of the outer membrane of Gram-negative bacteria) and tissue injury from oxygen free radicals. We propose that recognizing the lifestyle and host factors which are influenced by intestine-microbial interactions, and modifying habitual dietary patterns to alter the IM ecosystem, will encourage efficient immune function, optimize the intestinal epithelial barrier, and reduce EHS morbidity and mortality. PMID:29850597
An initial analysis of short- and medium-range correlations in potential non-Pt catalysts in CoNx
NASA Astrophysics Data System (ADS)
Peterson, Joe
2009-10-01
A potential show stopper for the development of fuel cells for the commercial automotive industry is the design of low-cost catalysts. The best catalysts are based on platinum, which is a rare and expensive noble metal. Our group has been involved in the characterization of potential materials for non-Pt catalysts. In this presentation, I will present some preliminary neutron scattering data from a nanocrystalline powder sample of CoNx. It is apparent that the diffraction data cannot be analyzed with standard Rietveld refinement, and we have to invoke pair distribution function (PDF) analysis. The PDF provides insight into short-range correlations, as it measures the probabilities of short- and mid-range interatomic distances in a material. The analysis reveals a strong incoherent scattering response, which is indicative of the presence of hydrogen in the sample. After correcting for the incoherent scattering, one obtains the normalized scattering function S(Q), whose Fourier transform yields the PDF.
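The final step named in the abstract, Fourier transforming the normalized scattering function S(Q) into the PDF, is the sine transform G(r) = (2/π) ∫ Q [S(Q) − 1] sin(Qr) dQ. A simple quadrature sketch (the cutoff qmax and the model S(Q) used in the checks are illustrative):

```python
import math

def pdf_from_sq(S, r, qmax=25.0, nq=2000):
    """G(r) = (2/pi) * integral_0^qmax of Q*[S(Q) - 1]*sin(Q*r) dQ,
    evaluated with the trapezoidal rule. S is a callable S(Q)."""
    dq = qmax / nq
    total = 0.0
    for i in range(nq + 1):
        q = i * dq
        w = 0.5 if i in (0, nq) else 1.0   # trapezoidal end-point weights
        total += w * q * (S(q) - 1.0) * math.sin(q * r)
    return 2.0 / math.pi * total * dq
```

A structureless S(Q) = 1 gives G(r) = 0 everywhere; any peak in S(Q) produces real-space structure in G(r).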
Strange-quark asymmetry in the proton in chiral effective theory
Wang, X. G.; Ji, Chueng -Ryong; Melnitchouk, W.; ...
2016-11-29
We perform a comprehensive analysis of the strange-antistrange parton distribution function (PDF) asymmetry in the proton in the framework of chiral effective theory, including the full set of lowest-order kaon loop diagrams with off-shell and contact interactions, in addition to the usual on-shell contributions previously discussed in the literature. We identify the presence of δ-function contributions to the s¯ PDF at x = 0, with a corresponding valencelike component of the s-quark PDF at larger x, which allows greater flexibility for the shape of s–s¯. Expanding the moments of the PDFs in terms of the pseudoscalar kaon mass, we compute the leading nonanalytic behavior of the number and momentum integrals of the s and s¯ distributions, consistent with the chiral symmetry of QCD. Lastly, we discuss the implications of our results for the understanding of the NuTeV anomaly and for the phenomenology of strange-quark PDFs in global QCD analysis.
Multi-variate joint PDF for non-Gaussianities: exact formulation and generic approximations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verde, Licia; Jimenez, Raul; Alvarez-Gaume, Luis
2013-06-01
We provide an exact expression for the multi-variate joint probability distribution function of non-Gaussian fields primordially arising from local transformations of a Gaussian field. This kind of non-Gaussianity is generated in many models of inflation. We apply our expression to the non-Gaussianity estimation from Cosmic Microwave Background maps and the halo mass function where we obtain analytical expressions. We also provide analytic approximations and their range of validity. For the Cosmic Microwave Background we give a fast way to compute the PDF which is valid up to more than 7σ for f{sub NL} values (both true and sampled) not ruled out by current observations, which consists of expressing the PDF as a combination of the bispectrum and trispectrum of the temperature maps. The resulting expression is valid for any kind of non-Gaussianity and is not limited to the local type. The above results may serve as the basis for a fully Bayesian analysis of the non-Gaussianity parameter.
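For the local type discussed here, the non-Gaussian field is a pointwise map g = φ + f_NL(φ² − ⟨φ²⟩) of a Gaussian φ. A Monte Carlo sketch (sample size and f_NL values are illustrative) shows the skewness this map induces:

```python
import numpy as np

def local_ng_field(fnl, n=200000, seed=0):
    """Local-model non-Gaussian field: g = phi + fnl*(phi**2 - 1), phi ~ N(0, 1)."""
    phi = np.random.default_rng(seed).standard_normal(n)
    return phi + fnl * (phi ** 2 - 1.0)

def skewness(x):
    """Sample skewness <x'^3> / <x'^2>^(3/2)."""
    d = x - x.mean()
    return (d ** 3).mean() / (d ** 2).mean() ** 1.5
```

For this map the exact skewness is (6 f_NL + 8 f_NL³)/(1 + 2 f_NL²)^(3/2), so any positive f_NL produces a positively skewed one-point PDF, which is what bispectrum-based estimators pick up.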
Assessment: Monitoring & Evaluation in a Stabilisation Context
2010-09-15
http://www.oecd.org/dataoecd/23/27/35281194.pdf b. SIDA (2004), The Logical Framework Approach. A summary of the theory behind the LFA method...en_21571361_34047972_39774574 _1_1_1_1,00.pdf 3. SIDA (2004), Stefan Molund and Göran Schill, Looking Back, Moving Forward, Sida Evaluation Manual. Available at
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, J; Ding, X; Liang, J
2016-06-15
Purpose: With energy repainting in lung IMPT, the dose delivered approximates the convolution of the dose in each phase with the corresponding breathing PDF. This study computes the breathing-PDF-weighted 4D dose in lung IMPT treatment and compares it to the initial robust plan. Methods: Six lung patients were evaluated in this study. Amsterdam shroud images were generated from pre-treatment 4D cone-beam projections. The diaphragm motion curve was extracted from the shroud image and the breathing PDF was generated. Each patient was planned to 60 Gy (12GyX5). In the initial plans, the ITV density on the average CT was overridden with its maximum value for planning, using two IMPT beams with robust optimization (5mm uncertainty in patient position and 3.5% range uncertainty). The plan was applied to all 4D CT phases. The dose in each phase was deformed to a reference phase. The 4D dose was reconstructed by summing all these doses with the corresponding weighting from the PDF. Plan parameters, including maximum dose (Dmax), ITV V100, homogeneity index (HI=D2/D98), R50 (50%IDL/ITV), and the lung-GTV's V12.5 and V5, were compared between the reconstructed 4D dose and the initial plans. Results: The Dmax is significantly lower in the reconstructed 4D dose, 68.12±3.5Gy vs. 70.1±4.3Gy in the initial plans (p=0.015). No significant difference is found for the ITV V100, HI, and R50: 92.2%±15.4% vs. 96.3%±2.5% (p=0.565), 1.033±0.016 vs. 1.038±0.017 (p=0.548), and 19.2±12.1 vs. 18.1±11.6 (p=0.265), for the 4D dose and initial plans, respectively. The lung-GTV V12.5 and V5 are significantly higher in the 4D dose, 13.9%±4.8% vs. 13.0%±4.6% (p=0.021) and 17.6%±5.4% vs. 16.9%±5.2% (p=0.011), respectively. Conclusion: 4D dose reconstruction based on the phase PDF can be used to evaluate the dose received by the patient. A robust optimization based on the phase PDF may further improve patient care.
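The reconstruction step described in Methods reduces to a PDF-weighted sum of the per-phase doses after deformation to the reference phase. A minimal sketch (the tiny dose arrays in the checks are placeholders, not patient data):

```python
import numpy as np

def reconstruct_4d_dose(phase_doses, breathing_pdf):
    """Weighted sum of per-phase dose arrays (already deformed to a reference
    phase), with weights given by the breathing-phase pdf."""
    w = np.asarray(breathing_pdf, dtype=float)
    w = w / w.sum()                        # normalize the phase weights
    doses = np.asarray(phase_doses, dtype=float)
    return np.tensordot(w, doses, axes=1)  # sum_k w_k * dose_k, elementwise in space
```

For example, phase doses [[1, 2], [3, 4]] with phase weights [1, 3] (normalized to 0.25 and 0.75) combine to [2.5, 3.5].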
NASA Astrophysics Data System (ADS)
Prasai, Binay; Wilson, A. R.; Wiley, B. J.; Ren, Y.; Petkov, Valeri
2015-10-01
The extent to which current theoretical modeling alone can reveal real-world metallic nanoparticles (NPs) at the atomic level is scrutinized and shown to be insufficient, and a pragmatic approach involving straightforward experiments is shown to improve it. In particular, 4 to 6 nm silica-supported Au100-xPdx (x = 30, 46 and 58) NPs explored for catalytic applications are characterized structurally by total scattering experiments, including high-energy synchrotron X-ray diffraction (XRD) coupled to atomic pair distribution function (PDF) analysis. Atomic-level models for the NPs are built by molecular dynamics simulations based on the Sutton-Chen (SC) method, an archetype of current theoretical modeling. Models are matched against independent experimental data and are demonstrated to be inaccurate unless their theoretical foundation, i.e. the SC method, is supplemented with basic yet crucial information on the length and strength of metal-to-metal bonds and, when necessary, structural disorder in the actual NPs studied. An atomic PDF-based approach for accessing such information and implementing it in theoretical modeling is put forward. For completeness, the approach is concisely demonstrated on 15 nm water-dispersed Au particles explored for bio-medical applications and 16 nm hexane-dispersed Fe48Pd52 particles explored for magnetic applications as well. It is argued that when ``tuned up'' against experiments relevant to metals and alloys confined to nanoscale dimensions, such as total scattering coupled to atomic PDF analysis, rather than by mere intuition and/or against data for the respective bulk solids, atomic-level theoretical modeling can provide a sound understanding of the synthesis-structure-property relationships in real-world metallic NPs.
Ultimately this can help advance nanoscience and technology a step closer to producing metallic NPs by rational design. Electronic supplementary information (ESI) available: XRD patterns, TEM and 3D structure modelling methodology. See DOI: 10.1039/c5nr04678e
NASA Technical Reports Server (NTRS)
Madnia, C. K.; Frankel, S. H.; Givi, P.
1992-01-01
The presently obtained closed-form analytical expressions, which predict the limiting rate of mean reactant conversion in homogeneous turbulent flows under the influence of a binary reaction, are derived via the single-point pdf method based on amplitude mapping closure. With this model, the maximum rate of the mean reactant's decay can be conveniently expressed in terms of definite integrals of the parabolic cylinder functions. The results obtained are shown to be in good agreement with data generated by direct numerical simulations.
2MASS wide-field extinction maps. V. Corona Australis
NASA Astrophysics Data System (ADS)
Alves, João; Lombardi, Marco; Lada, Charles J.
2014-05-01
We present a near-infrared extinction map of a large region (~870 deg²) covering the isolated Corona Australis complex of molecular clouds. We reach a 1-σ error of 0.02 mag in the K-band extinction with a resolution of 3 arcmin over the entire map. We find that the Corona Australis cloud is about three times as large as revealed by previous CO and dust emission surveys. The cloud consists of a 45 pc long complex of filamentary structure from the well-known star-forming western end (the head, N ≥ 10²³ cm⁻²) to the diffuse eastern end (the tail, N ≤ 10²¹ cm⁻²). Remarkably, about two thirds of the complex, both in size and mass, lies beneath AV ~ 1 mag. We find that the probability density function (PDF) of the cloud cannot be described by a single log-normal function. Similar to prior studies, we find a significant excess at high column densities, but a log-normal + power-law tail fit does not work well at low column densities. We show that at low column densities near the peak of the observed PDF, both the amplitude and shape of the PDF are dominated by noise in the extinction measurements, making it impractical to derive the intrinsic cloud PDF below AK < 0.15 mag. Above AK ~ 0.15 mag, essentially the molecular component of the cloud, the PDF appears to be best described by a power law with index -3, but it could also be described as the tail of a broad, relatively low-amplitude log-normal PDF that peaks at very low column densities. FITS files of the extinction maps are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/565/A18
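As a numerical illustration of the power-law regime described above (a hypothetical sketch, not the paper's pipeline), one can draw mock column-density samples from a pure power-law PDF with index -3 above a threshold standing in for the AK ~ 0.15 mag limit, and recover the index from the histogram slope in log-log space. All numbers below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Draw mock column-density samples from a pure power-law PDF p(A) ∝ A**index
# above a threshold A0 (standing in for AK ~ 0.15 mag), via inverse-CDF sampling.
A0, index = 0.15, -3.0
u = rng.uniform(size=100_000)
samples = A0 * (1.0 - u) ** (1.0 / (index + 1.0))

# Recover the index from the histogram slope in log-log space.
bins = np.logspace(np.log10(A0), np.log10(5.0), 40)
hist, edges = np.histogram(samples, bins=bins, density=True)
centers = np.sqrt(edges[:-1] * edges[1:])     # geometric bin centers
mask = hist > 0
slope, _ = np.polyfit(np.log10(centers[mask]), np.log10(hist[mask]), 1)
# slope ≈ -3, the power-law index assumed above
```

A fit like this only probes the tail above the chosen threshold; distinguishing a true power law from the tail of a broad log-normal, as the abstract notes, requires data well below the peak.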
Nonisotropic turbulence: A turbulent boundary layer
NASA Astrophysics Data System (ADS)
Liu, Kunlun
2005-11-01
The probability density function (PDF) and the two-point correlations of a flat-plate turbulent boundary layer subjected to a zero pressure gradient have been calculated by direct numerical simulation. It is known that the strong shear near the wall deforms the vortices and develops stretched coherent structures such as streaks and hairpins, which eventually cause the nonisotropy of wall shear flows. The PDF and the two-point correlations of isotropic flows have been studied for a long time. However, our knowledge of the influence of shear on the PDF and two-point correlations is still very limited. This study is intended to investigate that influence by numerical simulation. Results are presented for a case with a Mach number of M=0.1 and a Reynolds number of 2000, based on displacement thickness. The results indicate that the PDF of the streamwise velocity is lognormal, the PDF of the normal velocity is approximately Cauchy, and the PDF of the spanwise velocity is nearly Gaussian. The mean and variance of these PDFs vary with distance from the wall. The two-point correlations are homogeneous in the spanwise direction, vary slightly in the streamwise direction, but change substantially in the normal direction. Rww and Rvv can be represented as elliptic balls, and a well-chosen normalized system renders Rww and Rvv self-similar.
A novel Bayesian framework for discriminative feature extraction in Brain-Computer Interfaces.
Suk, Heung-Il; Lee, Seong-Whan
2013-02-01
As there has been a paradigm shift in the learning load from a human subject to a computer, machine learning has been considered as a useful tool for Brain-Computer Interfaces (BCIs). In this paper, we propose a novel Bayesian framework for discriminative feature extraction for motor imagery classification in an EEG-based BCI in which the class-discriminative frequency bands and the corresponding spatial filters are optimized by means of the probabilistic and information-theoretic approaches. In our framework, the problem of simultaneous spatiospectral filter optimization is formulated as the estimation of an unknown posterior probability density function (pdf) that represents the probability that a single-trial EEG of predefined mental tasks can be discriminated in a state. In order to estimate the posterior pdf, we propose a particle-based approximation method by extending a factored-sampling technique with a diffusion process. An information-theoretic observation model is also devised to measure discriminative power of features between classes. From the viewpoint of classifier design, the proposed method naturally allows us to construct a spectrally weighted label decision rule by linearly combining the outputs from multiple classifiers. We demonstrate the feasibility and effectiveness of the proposed method by analyzing the results and its success on three public databases.
NASA Astrophysics Data System (ADS)
Kim, Jeonglae; Pope, Stephen B.
2014-05-01
A turbulent lean-premixed propane-air flame stabilised by a triangular cylinder as a flame-holder is simulated to assess the accuracy and computational efficiency of combined dimension reduction and tabulation of chemistry. The computational condition matches the Volvo rig experiments. For the reactive simulation, the Lagrangian Large-Eddy Simulation/Probability Density Function (LES/PDF) formulation is used. A novel two-way coupling approach between LES and PDF is applied to obtain resolved density to reduce its statistical fluctuations. Composition mixing is evaluated by the modified Interaction-by-Exchange with the Mean (IEM) model. A baseline case uses In Situ Adaptive Tabulation (ISAT) to calculate chemical reactions efficiently. Its results demonstrate good agreement with the experimental measurements in turbulence statistics, temperature, and minor species mass fractions. For dimension reduction, 11 and 16 represented species are chosen and a variant of Rate Controlled Constrained Equilibrium (RCCE) is applied in conjunction with ISAT to each case. All the quantities in the comparison are indistinguishable from the baseline results using ISAT only. The combined use of RCCE/ISAT reduces the computational time for chemical reaction by more than 50%. However, for the current turbulent premixed flame, chemical reaction takes only a minor portion of the overall computational cost, in contrast to non-premixed flame simulations using LES/PDF, presumably due to the restricted manifold of purely premixed flame in the composition space. Instead, composition mixing is the major contributor to cost reduction since the mean-drift term, which is computationally expensive, is computed for the reduced representation. Overall, a reduction of more than 15% in the computational cost is obtained.
Studies of QCD structure in high-energy collisions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nadolsky, Pavel M.
2016-06-26
”Studies of QCD structure in high-energy collisions” is a research project in theoretical particle physics at Southern Methodist University funded by US DOE Award DE-SC0013681. The award furnished bridge funding for one year (2015/04/15-2016/03/31) between the periods funded by Nadolsky’s DOE Early Career Research Award DE-SC0003870 (in 2010-2015) and a DOE grant DE-SC0010129 for the SMU Department of Physics (starting in April 2016). The primary objective of the research is to provide theoretical predictions for Run-2 of the CERN Large Hadron Collider (LHC). The LHC physics program relies on state-of-the-art predictions in the field of quantum chromodynamics. The main effort of our group went into the global analysis of parton distribution functions (PDFs) employed by the bulk of LHC computations. Parton distributions describe the internal structure of protons in ultrarelativistic collisions. A new generation of CTEQ parton distribution functions (PDFs), CT14, was released in summer 2015 and quickly adopted by the HEP community. The new CT14 parametrizations of PDFs were obtained using benchmarked NNLO calculations and the latest data from LHC and Tevatron experiments. The group developed advanced methods for PDF analysis and for the estimation of uncertainties in LHC predictions associated with the PDFs. We invented and refined a new ’meta-parametrization’ technique that streamlines the usage of PDFs in Higgs boson production and numerous other LHC processes by combining PDFs from various groups using multivariate stochastic sampling. In 2015, the PDF4LHC working group recommended that LHC experimental collaborations use ’meta-parametrizations’ as a standard technique for computing PDF uncertainties. Finally, to include new QCD processes in the global fits, our group worked on several (N)NNLO calculations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shiinoki, T; Hanazawa, H; Park, S
2015-06-15
Purpose: We aim to achieve new four-dimensional radiotherapy (4DRT) using the next-generation real-time tumor-tracking (RTRT) system and flattening-filter-free techniques. To achieve new 4DRT, it is necessary to understand the respiratory motion of the tumor. The purposes of this study were: 1. To develop a respiratory motion analysis tool using log files. 2. To evaluate the reproducibility of the tumor motion probability distribution function (PDF) during stereotactic body RT (SBRT) of lung tumors. Methods: Seven patients having fiducial markers implanted close to the lung tumor were enrolled in this study. The positions of the fiducial markers were measured using the RTRT system (Mitsubishi Electronics Co., JP) and recorded as two types of log files during the course of SBRT. For each patient, the tumor motion range and tumor motion PDFs in the left-right (LR), anterior-posterior (AP) and superior-inferior (SI) directions were calculated using log files of all beams per fraction (PDFn). Fractional PDF reproducibility (Rn) was calculated as the Kullback-Leibler (KL) divergence between PDF1 and PDFn of tumor motion. The mean of Rn (Rm) was calculated for each patient and correlated to the patient’s mean tumor motion range (Am). The change of Rm during the course of SBRT was also evaluated. These analyses were performed using in-house developed software. Results: The Rm were 0.19 (0.07–0.30), 0.14 (0.07–0.32) and 0.16 (0.09–0.28) in the LR, AP and SI directions, respectively. The Am were 5.11 mm (2.58–9.99 mm), 7.81 mm (2.87–15.57 mm) and 11.26 mm (3.80–21.27 mm) in the LR, AP and SI directions, respectively. The PDF reproducibility decreased as the tumor motion range increased in the AP and SI directions, and decreased slightly through the course of RT in the SI direction. Conclusion: We developed a respiratory motion analysis tool for 4DRT using log files and quantified the range and reproducibility of respiratory motion for lung tumors.
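The fractional reproducibility metric described above, a KL divergence between binned motion PDFs, can be sketched as follows. This is a hypothetical illustration: the motion traces, bin grid, and the `kl_divergence` helper are assumptions, not the authors' in-house software.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(P || Q) between two discretized PDFs.

    p, q: probabilities over the same bins; eps guards against log(0)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# Hypothetical tumor-position traces (mm) for fraction 1 and a later fraction n.
rng = np.random.default_rng(0)
frac1 = rng.normal(0.0, 3.0, 5000)
fracn = rng.normal(0.5, 3.5, 5000)

# Bin both traces on a common grid to form PDF1 and PDFn.
bins = np.linspace(-15.0, 15.0, 61)
pdf1, _ = np.histogram(frac1, bins=bins)
pdfn, _ = np.histogram(fracn, bins=bins)

# Fractional reproducibility R_n, in the spirit of the abstract's metric:
# zero for identical PDFs, growing as the motion distribution drifts.
r_n = kl_divergence(pdfn / pdfn.sum(), pdf1 / pdf1.sum())
```

Note that KL divergence is asymmetric; which PDF serves as the reference (here, fraction 1) must be fixed by convention.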
NASA Astrophysics Data System (ADS)
Eilers, Anna-Christina; Hennawi, Joseph F.; Lee, Khee-Gan
2017-08-01
We present a new Bayesian algorithm making use of Markov Chain Monte Carlo sampling that allows us to simultaneously estimate the unknown continuum level of each quasar in an ensemble of high-resolution spectra, as well as their common probability distribution function (PDF) for the transmitted Lyα forest flux. This fully automated PDF regulated continuum fitting method models the unknown quasar continuum with a linear principal component analysis (PCA) basis, with the PCA coefficients treated as nuisance parameters. The method allows one to estimate parameters governing the thermal state of the intergalactic medium (IGM), such as the slope of the temperature-density relation γ -1, while marginalizing out continuum uncertainties in a fully Bayesian way. Using realistic mock quasar spectra created from a simplified semi-numerical model of the IGM, we show that this method recovers the underlying quasar continua to a precision of ≃ 7 % and ≃ 10 % at z = 3 and z = 5, respectively. Given the number of principal component spectra, this is comparable to the underlying accuracy of the PCA model itself. Most importantly, we show that we can achieve a nearly unbiased estimate of the slope γ -1 of the IGM temperature-density relation with a precision of +/- 8.6 % at z = 3 and +/- 6.1 % at z = 5, for an ensemble of ten mock high-resolution quasar spectra. Applying this method to real quasar spectra and comparing to a more realistic IGM model from hydrodynamical simulations would enable precise measurements of the thermal and cosmological parameters governing the IGM, albeit with somewhat larger uncertainties, given the increased flexibility of the model.
Methods for detrending success metrics to account for inflationary and deflationary factors*
NASA Astrophysics Data System (ADS)
Petersen, A. M.; Penner, O.; Stanley, H. E.
2011-01-01
Time-dependent economic, technological, and social factors can artificially inflate or deflate quantitative measures for career success. Here we develop and test a statistical method for normalizing career success metrics across time-dependent factors. In particular, this method addresses the long-standing question: how do we compare the career achievements of professional athletes from different historical eras? Developing an objective approach will be of particular importance over the next decade as major league baseball (MLB) players from the "steroids era" become eligible for Hall of Fame induction. Some experts are calling for asterisks (*) to be placed next to the career statistics of athletes found guilty of using performance enhancing drugs (PED). Here we address this issue, as well as the general problem of comparing statistics from distinct eras, by detrending the seasonal statistics of professional baseball players. We detrend player statistics by normalizing achievements to seasonal averages, which accounts for changes in relative player ability resulting from a range of factors. Our methods are general, and can be extended to various arenas of competition where time-dependent factors play a key role. For five statistical categories, we compare the probability density function (pdf) of detrended career statistics to the pdf of raw career statistics calculated for all player careers in the 90-year period 1920-2009. We find that the functional form of these pdfs is stationary under detrending. This stationarity implies that the statistical regularity observed in the right-skewed distributions for longevity and success in professional sports arises from both the wide range of intrinsic talent among athletes and the underlying nature of competition. We fit the pdfs for career success by the Gamma distribution in order to calculate objective benchmarks based on extreme statistics which can be used for the identification of extraordinary careers.
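The detrending step described above, normalizing a raw seasonal statistic by the corresponding season average, can be sketched in a few lines. The numbers are invented for illustration only.

```python
import numpy as np

def detrend(stat_by_season, season_avg):
    """Normalize each player-season statistic by that season's league average,
    putting achievements from different eras on a common baseline."""
    return np.asarray(stat_by_season, dtype=float) / np.asarray(season_avg, dtype=float)

# Invented example: the same raw home-run total from a high-offense season
# and a low-offense season.
raw = np.array([40.0, 40.0])
league_avg = np.array([20.0, 10.0])   # hypothetical league-average totals per season

# The identical raw total detrends to a larger value in the low-offense era,
# reflecting that it was harder to achieve.
detrended = detrend(raw, league_avg)
```

Career totals are then sums of such detrended seasonal values, to which a distribution (the Gamma family, per the abstract) can be fit for benchmarking.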
Zaluzhnyy, I A; Kurta, R P; Menushenkov, A P; Ostrovskii, B I; Vartanyants, I A
2016-09-01
An x-ray scattering approach to determine the two-dimensional (2D) pair distribution function (PDF) in partially ordered 2D systems is proposed. We derive relations between the structure factor and PDF that enable quantitative studies of positional and bond-orientational (BO) order in real space. We apply this approach in the x-ray study of a liquid crystal (LC) film undergoing the smectic-A-hexatic-B phase transition, to analyze the interplay between the positional and BO order during the temperature evolution of the LC film. We analyze the positional correlation length in different directions in real space.
Hazard function analysis for flood planning under nonstationarity
NASA Astrophysics Data System (ADS)
Read, Laura K.; Vogel, Richard M.
2016-05-01
The field of hazard function analysis (HFA) involves a probabilistic assessment of the "time to failure" or "return period," T, of an event of interest. HFA is used in epidemiology, manufacturing, medicine, actuarial statistics, reliability engineering, economics, and elsewhere. For a stationary process, the probability distribution function (pdf) of the return period always follows an exponential distribution; the same is not true for nonstationary processes. When the process of interest, X, exhibits nonstationary behavior, HFA can provide a complementary approach to risk analysis with analytical tools particularly useful for hydrological applications. After a general introduction to HFA, we describe a new mathematical linkage between the magnitude of the flood event, X, and its return period, T, for nonstationary processes. We derive the probabilistic properties of T for a nonstationary one-parameter exponential model of X, and then use both Monte-Carlo simulation and HFA to generalize the behavior of T when X arises from a nonstationary two-parameter lognormal distribution. For this case, our findings suggest that a two-parameter Weibull distribution provides a reasonable approximation for the pdf of T. We document how HFA can provide an alternative approach to characterize the probabilistic properties of both nonstationary flood series and the resulting pdf of T.
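The stationary-case result quoted above, that the return period follows an exponential (in discrete time, geometric) distribution, can be checked with a quick Monte-Carlo sketch under an assumed annual exceedance probability:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 0.01          # assumed annual exceedance probability; design return period T = 1/p = 100 yr
n = 200_000       # number of simulated flood "histories"

# For a stationary process each year is an independent trial, so the wait until
# the first exceedance is geometric, the discrete-time analogue of the
# exponential pdf of the return period.
waits = rng.geometric(p, size=n)

mean_T = waits.mean()   # ≈ 1/p = 100 years
std_T = waits.std()     # for an exponential pdf, std ≈ mean (here sqrt(1-p)/p ≈ 99.5)
```

Under nonstationarity, p varies from year to year and this geometric/exponential result no longer holds, which is what motivates the hazard-function machinery in the paper.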
Nanostructure Determination by Co-Refining Models to Multiple Datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Billinge, Simon J. L.
2011-05-31
The results of the work are contained in the publications resulting from the grant (which are listed below). Here I summarize the main findings from the last period of the award, 2006-2007: • Published a paper in Science with Igor Levin outlining the “Nanostructure Problem”, our inability to solve structure at the nanoscale. • Published a paper in Nature demonstrating the first ever ab-initio structure determination of a nanoparticle from atomic pair distribution function (PDF) data. • Published one book and 3 overview articles on PDF methods and the nanostructure problem. • Completed a project that sought to find a structural response to the presence of the so-called “intermediate phase” in network glasses, which appears close to the rigidity percolation threshold in these systems. The main result was that we did not see convincing evidence for this, which drew into doubt the idea that GexSe1-x glasses were a model system exhibiting rigidity percolation.
Creation of the BMA ensemble for SST using a parallel processing technique
NASA Astrophysics Data System (ADS)
Kim, Kwangjin; Lee, Yang Won
2013-10-01
Although they serve the same purpose, satellite products differ in value because of their inescapable uncertainties. The products have also been accumulated over a long time, and they are various and enormous in kind and volume, so efforts to reduce the uncertainty and to handle such large data are necessary. In this paper, we create an ensemble Sea Surface Temperature (SST) using MODIS Aqua, MODIS Terra and COMS (Communication, Ocean and Meteorological Satellite). We used Bayesian Model Averaging (BMA) as the ensemble method. The principle of BMA is to synthesize the conditional probability density function (PDF) using posterior probabilities as weights; the posterior probabilities are estimated using the EM algorithm, and the BMA PDF is obtained by weighted averaging. As a result, the ensemble SST showed the lowest RMSE and MAE, which demonstrates the applicability of BMA to satellite data ensembles. As future work, parallel processing techniques using the Hadoop framework will be adopted for more efficient computation of very big satellite data.
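The BMA principle summarized above, a posterior-probability-weighted average of member PDFs, can be sketched as follows. The member means, spreads, and weights are invented placeholders standing in for the satellite retrievals and EM-estimated quantities.

```python
import numpy as np

def gauss_pdf(x, mu, sigma):
    """Gaussian density, used here as each member's conditional PDF."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def bma_pdf(x, members, weights):
    """BMA predictive PDF: posterior-probability-weighted average of member PDFs."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wk * gauss_pdf(x, mu, s) for wk, (mu, s) in zip(w, members))

# Hypothetical SST members (deg C), e.g. Aqua, Terra, COMS, as (mean, spread)
# pairs, with assumed posterior model probabilities as weights.
members = [(18.2, 0.4), (18.5, 0.5), (17.9, 0.6)]
weights = [0.5, 0.3, 0.2]

x = np.linspace(14.0, 22.0, 8001)
pdf = bma_pdf(x, members, weights)
area = float(np.sum(pdf) * (x[1] - x[0]))   # ≈ 1: the weighted mixture is a proper PDF
```

In the real application the weights come from the EM algorithm applied to in-situ validation data; here they are fixed by hand purely to show the mixture construction.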
Automatic insertion of simulated microcalcification clusters in a software breast phantom
NASA Astrophysics Data System (ADS)
Shankla, Varsha; Pokrajac, David D.; Weinstein, Susan P.; DeLeo, Michael; Tuite, Catherine; Roth, Robyn; Conant, Emily F.; Maidment, Andrew D.; Bakic, Predrag R.
2014-03-01
An automated method has been developed to insert realistic clusters of simulated microcalcifications (MCs) into computer models of breast anatomy. This algorithm has been developed as part of a virtual clinical trial (VCT) software pipeline, which includes the simulation of breast anatomy, mechanical compression, image acquisition, image processing, display and interpretation. An automated insertion method has value in VCTs involving large numbers of images. The insertion method was designed to support various insertion placement strategies, governed by probability distribution functions (pdf). The pdf can be predicated on histological or biological models of tumor growth, or estimated from the locations of actual calcification clusters. To validate the automated insertion method, a 2-AFC observer study was designed to compare two placement strategies, undirected and directed. The undirected strategy could place a MC cluster anywhere within the phantom volume. The directed strategy placed MC clusters within fibroglandular tissue on the assumption that calcifications originate from epithelial breast tissue. Three radiologists were asked to select between two simulated phantom images, one from each placement strategy. Furthermore, questions were posed to probe the rationale behind the observer's selection. The radiologists found the resulting cluster placement to be realistic in 92% of cases, validating the automated insertion method. There was a significant preference for the cluster to be positioned on a background of adipose or mixed adipose/fibroglandular tissues. Based upon these results, this automated lesion placement method will be included in our VCT simulation pipeline.
Presumed PDF Modeling of Early Flame Propagation in Moderate to Intense Turbulence Environments
NASA Technical Reports Server (NTRS)
Carmen, Christina; Feikema, Douglas A.
2003-01-01
The present paper describes the results obtained from a one-dimensional, time-dependent numerical technique that simulates early flame propagation in a moderate to intense turbulent environment. Attention is focused on the development of a spark-ignited, premixed, lean methane/air mixture with the unsteady spherical flame propagating in homogeneous and isotropic turbulence. A Monte-Carlo particle tracking method, based upon the method of fractional steps, is utilized to simulate the phenomena represented by a probability density function (PDF) transport equation. Gaussian distributions of fluctuating velocity and fuel concentration are prescribed. Attention is focused on three primary parameters that influence the initial flame kernel growth: the detailed ignition system characteristics, the mixture composition, and the nature of the flow field. The computational results for moderate and intense isotropic turbulence suggest that flames within the distributed reaction zone are not as vulnerable as traditionally believed to the adverse effects of increased turbulence intensity. It is also shown that the magnitude of the flame front thickness significantly impacts the turbulent consumption flame speed. The flame conditions studied have fuel equivalence ratios in the range phi = 0.6 to 0.9 at standard temperature and pressure.
NASA Astrophysics Data System (ADS)
Chen, Chaochao; Vachtsevanos, George; Orchard, Marcos E.
2012-04-01
Machine prognosis can be considered as the generation of long-term predictions that describe the evolution in time of a fault indicator, with the purpose of estimating the remaining useful life (RUL) of a failing component/subsystem so that timely maintenance can be performed to avoid catastrophic failures. This paper proposes an integrated RUL prediction method using adaptive neuro-fuzzy inference systems (ANFIS) and high-order particle filtering, which forecasts the time evolution of the fault indicator and estimates the probability density function (pdf) of RUL. The ANFIS is trained and integrated in a high-order particle filter as a model describing the fault progression. The high-order particle filter is used to estimate the current state and carry out p-step-ahead predictions via a set of particles. These predictions are used to estimate the RUL pdf. The performance of the proposed method is evaluated via the real-world data from a seeded fault test for a UH-60 helicopter planetary gear plate. The results demonstrate that it outperforms both the conventional ANFIS predictor and the particle-filter-based predictor where the fault growth model is a first-order model that is trained via the ANFIS.
Studies of the flow and turbulence fields in a turbulent pulsed jet flame using LES/PDF
NASA Astrophysics Data System (ADS)
Zhang, Pei; Masri, Assaad R.; Wang, Haifeng
2017-09-01
A turbulent piloted jet flame subject to a rapid velocity pulse in its fuel jet inflow is proposed as a new benchmark case for the study of turbulent combustion models. In this work, we perform modelling studies of this turbulent pulsed jet flame and focus on the predictions of its flow and turbulence fields. An advanced modelling strategy combining the large eddy simulation (LES) and the probability density function (PDF) methods is employed to model the turbulent pulsed jet flame. Characteristics of the velocity measurements are analysed to produce a time-dependent inflow condition that can be fed into the simulations. The effect of the uncertainty in the inflow turbulence intensity is investigated and is found to be very small. A method of specifying the inflow turbulence boundary condition for the simulations of the pulsed jet flame is assessed. The strategies for validating LES of statistically transient flames are discussed, and a new framework is developed consisting of different averaging strategies and a bootstrap method for constructing confidence intervals. Parametric studies are performed to examine the sensitivity of the predictions of the flow and turbulence fields to model and numerical parameters. A direct comparison of the predicted and measured time series of the axial velocity demonstrates a satisfactory prediction of the flow and turbulence fields of the pulsed jet flame by the employed modelling methods.
Representation of photon limited data in emission tomography using origin ensembles
NASA Astrophysics Data System (ADS)
Sitek, A.
2008-06-01
Representation and reconstruction of data obtained by emission tomography scanners are challenging due to high noise levels in the data. Typically, images obtained using tomographic measurements are represented using grids. In this work, we define images as sets of origins of events detected during tomographic measurements; we call these origin ensembles (OEs). A state in the ensemble is characterized by a vector of 3N parameters Y, where the parameters are the coordinates of the origins of detected events in three-dimensional space and N is the number of detected events. The 3N-dimensional probability density function (PDF) for that ensemble is derived, and we present an algorithm for OE image estimation from tomographic measurements. A displayable image (e.g. a grid-based image) is derived from the OE formulation by calculating ensemble expectations based on the PDF using the Markov chain Monte Carlo method. The approach was applied to computer-simulated 3D list-mode positron emission tomography data. The reconstruction errors for a 10 000 000 event acquisition of the simulated data ranged from 0.1 to 34.8%, depending on object size and sampling density. The method was also applied to experimental data, and the results of the OE method were consistent with those obtained by a standard maximum-likelihood approach. The method is a new approach to representation and reconstruction of data obtained by photon-limited emission tomography measurements.
Work statistics of charged noninteracting fermions in slowly changing magnetic fields.
Yi, Juyeon; Talkner, Peter
2011-04-01
We consider N fermionic particles in a harmonic trap initially prepared in a thermal equilibrium state at temperature β⁻¹ and examine the probability density function (pdf) of the work done by a magnetic field slowly varying in time. The behavior of the pdf crucially depends on the number of particles N but also on the temperature. At high temperatures (β≪1) the pdf is given by an asymmetric Laplace distribution for a single particle, and for many particles it approaches a Gaussian distribution with variance proportional to N/β². At low temperatures the pdf becomes strongly peaked at the center with a variance that still increases linearly with N but decreases exponentially with the temperature. We point out the consequences of these findings for the experimental confirmation of the Jarzynski equality, such as the low-probability issue at high temperatures and its solution at low temperatures, together with a discussion of the crossover behavior between the two temperature regimes.
A new subgrid-scale representation of hydrometeor fields using a multivariate PDF
Griffin, Brian M.; Larson, Vincent E.
2016-06-03
The subgrid-scale representation of hydrometeor fields is important for calculating microphysical process rates. In order to represent subgrid-scale variability, the Cloud Layers Unified By Binormals (CLUBB) parameterization uses a multivariate probability density function (PDF). In addition to vertical velocity, temperature, and moisture fields, the PDF includes hydrometeor fields. Previously, hydrometeor fields were assumed to follow a multivariate single lognormal distribution. Now, in order to better represent the distribution of hydrometeors, two new multivariate PDFs are formulated and introduced. The new PDFs represent hydrometeors using either a delta-lognormal or a delta-double-lognormal shape. The two new PDF distributions, plus the previous single lognormal shape, are compared to histograms of data taken from large-eddy simulations (LESs) of a precipitating cumulus case, a drizzling stratocumulus case, and a deep convective case. In conclusion, the warm microphysical process rates produced by the different hydrometeor PDFs are compared to the same process rates produced by the LES.
Grenier, Antonin; Porras-Gutierrez, Ana-Gabriela; Groult, Henri; ...
2017-07-05
Detailed analysis of electrochemical reactions occurring in rechargeable Fluoride-Ion Batteries (FIBs) is provided by means of synchrotron X-ray diffraction (XRD) and Pair Distribution Function (PDF) analysis.
NASA Technical Reports Server (NTRS)
Foy, E.; Ronan, G.; Chinitz, W.
1982-01-01
A principal element to be derived from modeling turbulent reacting flows is an expression for the reaction rates of the various species involved in any particular combustion process under consideration. A temperature-derived most-likely probability density function (pdf) was used to describe the effects of temperature fluctuations on the Arrhenius reaction rate constant. A most-likely bivariate pdf described the effects of temperature and species concentration fluctuations on the reaction rate. A criterion is developed for the use of an "appropriate" temperature pdf. The formulation of models to calculate the mean turbulent Arrhenius reaction rate constant and the mean turbulent reaction rate is considered, and the results of calculations using these models are presented.
NASA Astrophysics Data System (ADS)
Sharma, Prabhat Kumar
2016-11-01
A framework is presented for the analysis of average symbol error rate (SER) for M-ary quadrature amplitude modulation in a free-space optical communication system. The standard probability density function (PDF)-based approach is extended to evaluate the average SER by representing the Q-function through its Meijer's G-function equivalent. Specifically, a converging power series expression for the average SER is derived considering the zero-boresight misalignment errors at the receiver. The analysis presented here assumes a unified expression for the PDF of the channel coefficient which incorporates the M-distributed atmospheric turbulence and Rayleigh-distributed radial displacement for the misalignment errors. The analytical results are compared with the results obtained using the Q-function approximation. Further, the presented results are supported by Monte Carlo simulations.
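The averaging step can be illustrated with a simplified Monte Carlo sketch. This replaces the M-distribution with unit-mean lognormal fading and uses the common approximate M-QAM SER expression; all parameter values and the fading model are illustrative assumptions, not the paper's channel model.

```python
import math, random

def q_func(x):
    """Gaussian Q-function via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def ser_mqam(snr, M=16):
    """Common approximation to the M-QAM symbol error rate at linear SNR,
    clamped at 1 because the approximation can exceed 1 at very low SNR."""
    return min(1.0, 4 * (1 - 1 / math.sqrt(M)) * q_func(math.sqrt(3 * snr / (M - 1))))

def avg_ser(mean_snr_db, M=16, n=100_000, sigma_x=0.3, seed=2):
    """Average the conditional SER over unit-mean lognormal channel gains."""
    rng = random.Random(seed)
    mean_snr = 10 ** (mean_snr_db / 10)
    total = 0.0
    for _ in range(n):
        gain = math.exp(rng.gauss(-sigma_x ** 2 / 2, sigma_x))  # E[gain] = 1
        total += ser_mqam(mean_snr * gain, M)
    return total / n

ser_10db = avg_ser(10.0)
ser_20db = avg_ser(20.0)
```

The analytical power-series result in the paper plays the role of this averaging integral evaluated in closed form.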
Yang, Dennis; Amin, Sunil; Gonzalez, Susana; Mullady, Daniel; Edmundowicz, Steven A; DeWitt, John M; Khashab, Mouen A; Wang, Andrew Y; Nagula, Satish; Buscaglia, Jonathan M; Bucobo, Juan Carlos; Wagh, Mihir S; Draganov, Peter V; Stevens, Tyler; Vargo, John J; Khara, Harshit S; Diehl, David L; Keswani, Rajesh N; Komanduri, Srinadh; Yachimski, Patrick S; Prabhu, Anoop; Kwon, Richard S; Watson, Rabindra R; Goodman, Adam J; Benias, Petros; Carr-Locke, David L; DiMaio, Christopher J
2017-02-01
Background and study aims Data on clinical outcomes of endoscopic drainage of debris-free pseudocysts (PDF) versus pseudocysts containing solid debris (PSD) are very limited. The aims of this study were to compare treatment outcomes between patients with PDF vs. PSD undergoing endoscopic ultrasound (EUS)-guided drainage via transmural stents. Patients and methods Retrospective review of 142 consecutive patients with pseudocysts who underwent EUS-guided transmural drainage (TM) from 2008 to 2014 at 15 academic centers in the United States. Main outcome measures included TM technical success, treatment outcomes (symptomatic and radiologic resolution), need for endoscopic re-intervention at follow-up, and adverse events (AEs). Results TM was performed in 90 patients with PDF and 52 with PSD. Technical success: PDF 87 (96.7 %) vs. PSD 51 (98.1 %). There was no difference in the rates for endoscopic re-intervention (5.5 % in PDF vs. 11.5 % in PSD; P = 0.33) or AEs (12.2 % in PDF vs. 19.2 % in PSD; P = 0.33). Median long-term follow-up after stent removal was 297 days (interquartile range [IQR]: 59 - 424 days) for PDF and 326 days (IQR: 180 - 448 days) for PSD (P = 0.88). There was a higher rate of short-term radiologic resolution of PDF (45; 66.2 %) vs. PSD (21; 51.2 %) (OR = 0.30; 95 % CI: 0.13 - 0.72; P = 0.009). There was no difference in long-term symptomatic resolution (PDF: 70.4 % vs. PSD: 66.7 %; P = 0.72) or radiologic resolution (PDF: 68.9 % vs. PSD: 78.6 %; P = 0.72). Conclusions There was no difference in need for endoscopic re-intervention, AEs or long-term treatment outcomes in patients with PDF vs. PSD undergoing EUS-guided drainage with transmural stents. Based on these results, the presence of solid debris in pancreatic fluid collections does not appear to be associated with a poorer outcome.
PDF turbulence modeling and DNS
NASA Technical Reports Server (NTRS)
Hsu, A. T.
1992-01-01
The problem of time discontinuity (or jump condition) in the coalescence/dispersion (C/D) mixing model is addressed in the context of probability density function (pdf) methods. A C/D mixing model continuous in time is introduced. With the continuous mixing model, the process of chemical reaction can be fully coupled with mixing. In the case of homogeneous turbulence decay, the new model predicts a pdf very close to a Gaussian distribution, with finite higher moments also close to those of a Gaussian distribution. Results from the continuous mixing model are compared with both experimental data and numerical results from conventional C/D models. The effect of Coriolis forces on compressible homogeneous turbulence is studied using direct numerical simulation (DNS). The numerical method used in this study is an eighth-order compact difference scheme. Contrary to the conclusions reached by previous DNS studies on incompressible isotropic turbulence, the present results show that the Coriolis force increases the dissipation rate of turbulent kinetic energy, and that anisotropy develops as the Coriolis force increases. The Taylor-Proudman theory does apply, since the derivatives in the direction of the rotation axis vanish rapidly. A closer analysis reveals that the dissipation rate of the incompressible component of the turbulent kinetic energy indeed decreases with a higher rotation rate, consistent with incompressible flow simulations (Bardina), while the dissipation rate of the compressible part increases; the net gain is positive. Inertial waves are observed in the simulation results.
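The continuous-in-time model itself is not reproduced here, but the classical pairwise C/D step it refines can be sketched with a modified Curl model: random particle pairs move toward their midpoint by a uniform random extent, conserving the mean exactly while relaxing a double-delta initial PDF. This is an illustrative sketch, not the paper's model.

```python
import random

def curl_mixing(phis, n_events, seed=3):
    """Modified Curl coalescence/dispersion: pick a random particle pair and
    move both toward their midpoint by a uniform random extent a in [0, 1].
    Each event conserves the pair sum, so the ensemble mean is preserved."""
    rng = random.Random(seed)
    phis = list(phis)
    n = len(phis)
    for _ in range(n_events):
        i, j = rng.randrange(n), rng.randrange(n)
        if i == j:
            continue
        a = rng.random()
        mid = 0.5 * (phis[i] + phis[j])
        phis[i] += a * (mid - phis[i])
        phis[j] += a * (mid - phis[j])
    return phis

# double-delta initial PDF: half the particles at -1, half at +1
mixed = curl_mixing([-1.0] * 500 + [1.0] * 500, 5000)
mean = sum(mixed) / len(mixed)
var = sum((p - mean) ** 2 for p in mixed) / len(mixed)
```

Each discrete event is a finite jump in particle state; the time-continuity issue the abstract addresses is precisely that such jumps are not infinitesimal in time.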
Local Renyi entropic profiles of DNA sequences.
Vinga, Susana; Almeida, Jonas S
2007-10-16
In a recent report the authors presented a new measure of continuous entropy for DNA sequences, which allows the estimation of their randomness level. The definition therein explored was based on the Rényi entropy of the probability density function (pdf) estimated with Parzen's window method and applied to Chaos Game Representation/Universal Sequence Maps (CGR/USM). Subsequent work proposed a fractal pdf kernel as a more exact solution for the iterated map representation. This report extends the concepts of continuous entropy by defining DNA sequence entropic profiles using the new pdf estimations to refine the density estimation of motifs. The new methodology enables two results. On the one hand it shows that the entropic profiles are directly related with the statistical significance of motifs, allowing the study of under- and over-representation of segments. On the other hand, by spanning the parameters of the kernel function it is possible to extract important information about the scale of each conserved DNA region. The computational applications, developed in Matlab m-code, the corresponding binary executables and additional material and examples are made publicly available at http://kdbio.inesc-id.pt/~svinga/ep/. The ability to detect local conservation from a scale-independent representation of symbolic sequences is particularly relevant for biological applications where conserved motifs occur in multiple, overlapping scales, with significant future applications in the recognition of foreign genomic material and inference of motif structures.
Local Renyi entropic profiles of DNA sequences
Vinga, Susana; Almeida, Jonas S
2007-01-01
Background In a recent report the authors presented a new measure of continuous entropy for DNA sequences, which allows the estimation of their randomness level. The definition therein explored was based on the Rényi entropy of the probability density function (pdf) estimated with Parzen's window method and applied to Chaos Game Representation/Universal Sequence Maps (CGR/USM). Subsequent work proposed a fractal pdf kernel as a more exact solution for the iterated map representation. This report extends the concepts of continuous entropy by defining DNA sequence entropic profiles using the new pdf estimations to refine the density estimation of motifs. Results The new methodology enables two results. On the one hand it shows that the entropic profiles are directly related with the statistical significance of motifs, allowing the study of under- and over-representation of segments. On the other hand, by spanning the parameters of the kernel function it is possible to extract important information about the scale of each conserved DNA region. The computational applications, developed in Matlab m-code, the corresponding binary executables and additional material and examples are made publicly available at http://kdbio.inesc-id.pt/~svinga/ep/. Conclusion The ability to detect local conservation from a scale-independent representation of symbolic sequences is particularly relevant for biological applications where conserved motifs occur in multiple, overlapping scales, with significant future applications in the recognition of foreign genomic material and inference of motif structures. PMID:17939871
NASA Astrophysics Data System (ADS)
Consalvi, J. L.; Nmira, F.
2016-03-01
The main objective of this article is to quantify the influence of the soot absorption coefficient-Planck function correlation on radiative loss and flame structure in an oxygen-enhanced propane turbulent diffusion flame. Calculations were run with and without accounting for this correlation by using a standard k-ε model and the steady laminar flamelet model (SLF) coupled to a joint Probability Density Function (PDF) of mixture fraction, enthalpy defect, scalar dissipation rate, and soot quantities. The PDF transport equation is solved by using a Stochastic Eulerian Field (SEF) method. The modeling of soot production is carried out by using a flamelet-based semi-empirical acetylene/benzene soot model. Radiative heat transfer is modeled by using a wide band correlated-k model and turbulent radiation interactions (TRI) are accounted for by using the Optically-Thin Fluctuation Approximation (OTFA). Predicted soot volume fraction, radiant wall heat flux distribution and radiant fraction are in good agreement with the available experimental data. Model results show that soot absorption coefficient and Planck function are negatively correlated in the region of intense soot emission. Neglecting this correlation is found to increase significantly the radiative loss leading to a substantial impact on flame structure in terms of mean and rms values of temperature. In addition mean and rms values of soot volume fraction are found to be less sensitive to the correlation than temperature since soot formation occurs mainly in a region where its influence is low.
NASA Technical Reports Server (NTRS)
Mei, Chuh; Dhainaut, Jean-Michel
2000-01-01
The Monte Carlo simulation method in conjunction with the finite element large deflection modal formulation is used to estimate fatigue life of aircraft panels subjected to stationary Gaussian band-limited white-noise excitations. Ten loading cases varying from 106 dB to 160 dB OASPL with bandwidth 1024 Hz are considered. For each load case, response statistics are obtained from an ensemble of 10 response time histories. The finite element nonlinear modal procedure yields time histories, probability density functions (PDF), power spectral densities and higher statistical moments of the maximum deflection and stress/strain. The method of moments of PSD with Dirlik's approach is employed to estimate the panel fatigue life.
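The method-of-moments step can be sketched by computing spectral moments of a band-limited white-noise PSD; Dirlik's rainflow-range PDF itself is omitted here, but the moments below are its inputs, and the zero-crossing and peak rates follow from Rice's standard results. The flat unit-level PSD is an illustrative assumption.

```python
def spectral_moments(freqs, psd, orders=(0, 1, 2, 4)):
    """Trapezoid-rule spectral moments m_k = integral of f^k G(f) df
    of a one-sided PSD sampled at the given frequencies."""
    moments = {}
    for k in orders:
        m = 0.0
        for (f1, g1), (f2, g2) in zip(zip(freqs, psd), zip(freqs[1:], psd[1:])):
            m += 0.5 * (f1 ** k * g1 + f2 ** k * g2) * (f2 - f1)
        moments[k] = m
    return moments

# band-limited white noise: G = 1 over [0, 1024] Hz (bandwidth as in the abstract)
freqs = [float(i) for i in range(1025)]
psd = [1.0] * 1025
m = spectral_moments(freqs, psd)
nu0 = (m[2] / m[0]) ** 0.5            # expected zero up-crossing rate (Rice)
nup = (m[4] / m[2]) ** 0.5            # expected peak rate
alpha2 = m[2] / (m[0] * m[4]) ** 0.5  # irregularity factor
```

For an ideal flat band [0, B], the closed forms are ν₀ = B/√3 and α₂ = √5/3, which the numerical moments reproduce.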
Absolute x-ray energy calibration and monitoring using a diffraction-based method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Xinguo, E-mail: xhong@bnl.gov; Weidner, Donald J.; Duffy, Thomas S.
2016-07-27
In this paper, we report some recent developments of the diffraction-based absolute X-ray energy calibration method. In this calibration method, high spatial resolution of the measured detector offset is essential. To this end, a remotely controlled long-translation motorized stage was employed instead of the less convenient gauge blocks. It is found that the precision of absolute X-ray energy calibration (ΔE/E) readily reaches the level of 10^{-4} for high-energy monochromatic X-rays (e.g., 80 keV). Examples of applications to pair distribution function (PDF) measurements and energy monitoring for high-energy X-rays are presented.
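The geometry behind a diffraction-based calibration can be sketched as follows: translating the detector by Δz grows a diffraction ring radius by Δr = Δz·tan 2θ, and Bragg's law then yields the energy. The d-spacing and offsets below are assumed values for a synthetic round trip, not measurements from the paper.

```python
import math

HC = 12.398420  # keV·Å conversion factor, approximately

def energy_from_offset(d_spacing, delta_r, delta_z):
    """Recover the X-ray energy from the growth of a diffraction ring radius
    (delta_r) over a detector translation (delta_z): tan(2θ) = Δr/Δz,
    then Bragg's law λ = 2 d sinθ gives E = HC/λ."""
    theta = 0.5 * math.atan2(delta_r, delta_z)
    return HC / (2.0 * d_spacing * math.sin(theta))

# synthetic round trip at 80 keV with an assumed d = 2.0 Å reflection
E_true, d = 80.0, 2.0
two_theta = 2.0 * math.asin(HC / E_true / (2.0 * d))
delta_z = 100.0                       # mm of detector translation
delta_r = delta_z * math.tan(two_theta)
E_rec = energy_from_offset(d, delta_r, delta_z)
```

The sensitivity of E_rec to delta_r shows why high spatial resolution of the detector offset is essential for reaching ΔE/E at the 10^{-4} level.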
mrpy: Renormalized generalized gamma distribution for HMF and galaxy ensemble properties comparisons
NASA Astrophysics Data System (ADS)
Murray, Steven G.; Robotham, Aaron S. G.; Power, Chris
2018-02-01
mrpy calculates the MRP parameterization of the Halo Mass Function. It calculates basic statistics of the truncated generalized gamma distribution (TGGD) with the TGGD class, including mean, mode, variance, skewness, pdf, and cdf. It generates MRP quantities with the MRP class, such as differential number counts and cumulative number counts, and offers various methods for generating normalizations. It can generate the MRP-based halo mass function as a function of physical parameters via the mrp_b13 function, and fit MRP parameters to data in the form of arbitrary curves and in the form of a sample of variates with the SimFit class. mrpy also calculates analytic hessians and jacobians at any point, and allows the user to alternate parameterizations of the same form via the reparameterize module.
Maximum entropy PDF projection: A review
NASA Astrophysics Data System (ADS)
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
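A one-dimensional sketch of the projection formula p(x) = p0(x)·g(T(x))/g0(T(x)): with a standard-normal reference, feature z = |x|, and an exponential feature pdf, the projected density is Laplace. This toy example is our own construction for illustration, not one from the review.

```python
import math

phi = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)  # reference p0 = N(0,1)
g0 = lambda z: 2 * phi(z)        # pdf of z = |x| under p0 (half-normal)
g = lambda z: math.exp(-z)       # prescribed feature pdf p(z), z >= 0

def p_projected(x):
    """PDF projection: p(x) = p0(x) * g(T(x)) / g0(T(x)) with T(x) = |x|."""
    z = abs(x)
    return phi(x) * g(z) / g0(z)

# here the projection has a closed form: p(x) = exp(-|x|)/2, a Laplace density;
# a trapezoid rule confirms the projected density is properly normalized
h = 0.01
xs = [i * h - 20.0 for i in range(4001)]
area = h * (sum(p_projected(x) for x in xs)
            - 0.5 * (p_projected(xs[0]) + p_projected(xs[-1])))
```

Because the construction only reweights the reference by the feature-density ratio, sampling p(x) reduces to sampling p0 conditioned on the feature value, which is what makes the approach usable inside Monte Carlo methods.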
Agrawal, Parul
2016-01-01
In Drosophila, a transcriptional feedback loop that is activated by CLOCK-CYCLE (CLK-CYC) complexes and repressed by PERIOD-TIMELESS (PER-TIM) complexes keeps circadian time. The timing of CLK-CYC activation and PER-TIM repression is regulated post-translationally, in part through rhythmic phosphorylation of CLK, PER, and TIM. Although kinases that control PER, TIM, and CLK levels, activity, and/or subcellular localization have been identified, less is known about phosphatases that control clock protein dephosphorylation. To identify clock-relevant phosphatases, clock-cell-specific RNAi knockdowns of Drosophila phosphatases were screened for altered activity rhythms. One phosphatase that was identified, the receptor protein tyrosine phosphatase leukocyte-antigen-related (LAR), abolished activity rhythms in constant darkness (DD) without disrupting the timekeeping mechanism in brain pacemaker neurons. However, expression of the neuropeptide pigment-dispersing factor (PDF), which mediates pacemaker neuron synchrony and output, is eliminated in the dorsal projections from small ventral lateral (sLNv) pacemaker neurons when Lar expression is knocked down during development, but not in adults. Loss of Lar function eliminates sLNv dorsal projections, but PDF expression persists in sLNv and large ventral lateral neuron cell bodies and their remaining projections. In contrast to the defects in lights-on and lights-off anticipatory activity seen in flies that lack PDF, Lar RNAi knockdown flies anticipate the lights-on and lights-off transition normally. Our results demonstrate that Lar is required for sLNv dorsal projection development and suggest that PDF expression in LNv cell bodies and their remaining projections mediate anticipation of the lights-on and lights-off transitions during a light/dark cycle. SIGNIFICANCE STATEMENT In animals, circadian clocks drive daily rhythms in physiology, metabolism, and behavior via transcriptional feedback loops. 
Because key circadian transcriptional activators and repressors are regulated by phosphorylation, we screened for phosphatases that alter activity rhythms when their expression was reduced. One such phosphatase, leukocyte-antigen-related (LAR), abolishes activity rhythms, but does not disrupt feedback loop function. Rather, Lar disrupts clock output by eliminating axonal processes from clock neurons that release pigment-dispersing factor (PDF) neuropeptide into the dorsal brain, but PDF expression persists in their cell bodies and remaining projections. In contrast to flies that lack PDF, flies that lack Lar anticipate lights-on and lights-off transitions normally, which suggests that the remaining PDF expression mediates activity during light/dark cycles. PMID:27030770
Chemically reacting supersonic flow calculation using an assumed PDF model
NASA Technical Reports Server (NTRS)
Farshchi, M.
1990-01-01
This work is motivated by the need to develop accurate models for chemically reacting compressible turbulent flow fields that are present in a typical supersonic combustion ramjet (SCRAMJET) engine. In this paper the development of a new assumed probability density function (PDF) reaction model for supersonic turbulent diffusion flames and its implementation into an efficient Navier-Stokes solver are discussed. The application of this model to a supersonic hydrogen-air flame will be considered.
Countering Strategic Preclusion: The Requirement for Truly Global Reach in the 21st Century
2012-06-01
Website, http://www.618tacc.amc.af.mil/shared/media/document/AFD-110602-022.pdf (accessed 24 May 2012). The inter-theater airlift support to Operation...Crisis," Pacific Air Forces Official Website, http://www.pacaf.af.mil/shared/media/document/AFD-110330-077.pdf [accessed 24 May 2012]...Reconnaissance Office (NRO), which lists as one of its key functions providing targeting and BDA support to military operations. These are
Electric field dependent local structure of (KxNa1-x)0.5Bi0.5TiO3
NASA Astrophysics Data System (ADS)
Goetzee-Barral, A. J.; Usher, T.-M.; Stevenson, T. J.; Jones, J. L.; Levin, I.; Brown, A. P.; Bell, A. J.
2017-07-01
The in situ x-ray pair-distribution function (PDF) characterization technique has been used to study the behavior of (KxNa1-x)0.5Bi0.5TiO3 as a function of electric field. As opposed to conventional x-ray Bragg diffraction techniques, PDF is sensitive to local atomic displacements, detecting local structural changes at the angstrom to nanometer scale. Several field-dependent ordering mechanisms can be observed in x = 0.15, 0.18 and at the morphotropic phase boundary composition x = 0.20. X-ray total scattering shows suppression of diffuse scattering with increasing electric-field amplitude, indicative of an increase in structural ordering. Analysis of PDF peaks in the 3-4-Å range shows ordering of Bi-Ti distances parallel to the applied electric field, illustrated by peak amplitude redistribution parallel and perpendicular to the electric-field vector. A transition from <110> to <112>-type off-center displacements of Bi relative to the neighboring Ti atoms is observable with increasing x. Analysis of PDF peak shift with electric field shows the effects of Bi-Ti redistribution and onset of piezoelectric lattice strain. The combination of these field-induced ordering mechanisms is consistent with local redistribution of Bi-Ti distances associated with domain reorientation and an overall increase in order of atomic displacements.
Electric field dependent local structure of (KxNa1-x)0.5Bi0.5TiO3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goetzee-Barral, A. J.; Usher, T. -M.; Stevenson, T. J.
The in situ x-ray pair-distribution function (PDF) characterization technique has been used to study the behavior of (KxNa1-x)0.5Bi0.5TiO3 as a function of electric field. As opposed to conventional x-ray Bragg diffraction techniques, PDF is sensitive to local atomic displacements, detecting local structural changes at the angstrom to nanometer scale. Several field-dependent ordering mechanisms can be observed in x = 0.15, 0.18 and at the morphotropic phase boundary composition x = 0.20. X-ray total scattering shows suppression of diffuse scattering with increasing electric-field amplitude, indicative of an increase in structural ordering. Analysis of PDF peaks in the 3-4-Å range shows ordering of Bi-Ti distances parallel to the applied electric field, illustrated by peak amplitude redistribution parallel and perpendicular to the electric-field vector. A transition from <110> to <112>-type off-center displacements of Bi relative to the neighboring Ti atoms is observable with increasing x. Analysis of PDF peak shift with electric field shows the effects of Bi-Ti redistribution and onset of piezoelectric lattice strain. Furthermore, the combination of these field-induced ordering mechanisms is consistent with local redistribution of Bi-Ti distances associated with domain reorientation and an overall increase in order of atomic displacements.
Stochastic approach to plasticity and yield in amorphous solids.
Hentschel, H G E; Jaiswal, Prabhat K; Procaccia, Itamar; Sastry, Srikanth
2015-12-01
We focus on the probability distribution function (PDF) P(Δγ;γ), where Δγ are the measured strain intervals between plastic events in athermal strained amorphous solids, and γ measures the accumulated strain. The tail of this distribution as Δγ→0 (in the thermodynamic limit) scales like Δγ^η. The exponent η is related via scaling relations to the tail of the PDF of the eigenvalues of the plastic modes of the Hessian matrix P(λ), which scales like λ^θ, with η=(θ-1)/2. The numerical values of η or θ can be determined easily in the unstrained material and in the yielded state of plastic flow. Special care is called for in the determination of these exponents between these states as γ increases. Determining the γ dependence of the PDF P(Δγ;γ) can shed important light on plasticity and yield. We conclude that the PDFs of both Δγ and λ are not continuous functions of γ. In slowly quenched amorphous solids they undergo two discontinuous transitions, first at γ=0^+ and then at the yield point γ=γ_Y to plastic flow. In quickly quenched amorphous solids the second transition is smeared out due to the nonexistent stress peak before yield. The nature of these transitions and scaling relations with the system-size dependence of 〈Δγ〉 are discussed.
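The exponent determination can be sketched with a maximum-likelihood fit on synthetic samples drawn from a pure power-law pdf, a toy stand-in for the Hessian eigenvalue spectrum (the exponent value and sample size are illustrative assumptions):

```python
import math, random

def sample_power_law(theta, n, seed=4):
    """Inverse-transform samples from pdf (theta+1) * x^theta on (0, 1]:
    the CDF is x^(theta+1), so x = U^(1/(theta+1))."""
    rng = random.Random(seed)
    return [rng.random() ** (1.0 / (theta + 1.0)) for _ in range(n)]

def mle_theta(xs):
    """Maximum-likelihood exponent for pdf proportional to x^theta on (0, 1]:
    setting d/dtheta of the log-likelihood to zero gives theta = -1 - n/sum(ln x)."""
    return -1.0 - len(xs) / sum(math.log(x) for x in xs)

theta_true = 3.0
xs = sample_power_law(theta_true, 50_000)
theta_hat = mle_theta(xs)
eta_hat = (theta_hat - 1.0) / 2.0    # scaling relation eta = (theta - 1)/2
```

In practice the fit is restricted to the small-λ tail rather than the full support, but the estimator structure is the same.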
A hybrid probabilistic/spectral model of scalar mixing
NASA Astrophysics Data System (ADS)
Vaithianathan, T.; Collins, Lance
2002-11-01
In the probability density function (PDF) description of a turbulent reacting flow, the local temperature and species concentration are replaced by a high-dimensional joint probability that describes the distribution of states in the fluid. The PDF has the great advantage of rendering the chemical reaction source terms closed, independent of their complexity. However, molecular mixing, which involves two-point information, must be modeled. Indeed, the qualitative shape of the PDF is sensitive to this modeling, hence the reliability of the model in predicting even the closed chemical source terms rests heavily on the mixing model. We will present a new closure for mixing based on a spectral representation of the scalar field. The model is implemented as an ensemble of stochastic particles, each carrying scalar concentrations at different wavenumbers. Scalar exchanges within a given particle represent ``transfer'' while scalar exchanges between particles represent ``mixing.'' The equations governing the scalar concentrations at each wavenumber are derived from the eddy-damped quasi-normal Markovian (EDQNM) theory. The model correctly predicts the evolution of an initial double-delta-function PDF into a Gaussian, as seen in the numerical study by Eswaran & Pope (1988). Furthermore, the model predicts that the scalar gradient distribution (which is available in this representation) approaches lognormal at long times. Comparisons of the model with data derived from direct numerical simulations will be shown.
Electric field dependent local structure of (KxNa1-x)0.5Bi0.5TiO3
Goetzee-Barral, A. J.; Usher, T. -M.; Stevenson, T. J.; ...
2017-07-31
The in situ x-ray pair-distribution function (PDF) characterization technique has been used to study the behavior of (KxNa1-x)0.5Bi0.5TiO3 as a function of electric field. As opposed to conventional x-ray Bragg diffraction techniques, PDF is sensitive to local atomic displacements, detecting local structural changes at the angstrom to nanometer scale. Several field-dependent ordering mechanisms can be observed in x = 0.15, 0.18 and at the morphotropic phase boundary composition x = 0.20. X-ray total scattering shows suppression of diffuse scattering with increasing electric-field amplitude, indicative of an increase in structural ordering. Analysis of PDF peaks in the 3-4-Å range shows ordering of Bi-Ti distances parallel to the applied electric field, illustrated by peak amplitude redistribution parallel and perpendicular to the electric-field vector. A transition from <110> to <112>-type off-center displacements of Bi relative to the neighboring Ti atoms is observable with increasing x. Analysis of PDF peak shift with electric field shows the effects of Bi-Ti redistribution and onset of piezoelectric lattice strain. Furthermore, the combination of these field-induced ordering mechanisms is consistent with local redistribution of Bi-Ti distances associated with domain reorientation and an overall increase in order of atomic displacements.
A Partially-Stirred Batch Reactor Model for Under-Ventilated Fire Dynamics
NASA Astrophysics Data System (ADS)
McDermott, Randall; Weinschenk, Craig
2013-11-01
A simple discrete quadrature method is developed for closure of the mean chemical source term in large-eddy simulations (LES) and implemented in the publicly available fire model, Fire Dynamics Simulator (FDS). The method is cast as a partially-stirred batch reactor model for each computational cell. The model has three distinct components: (1) a subgrid mixing environment, (2) a mixing model, and (3) a set of chemical rate laws. The subgrid probability density function (PDF) is described by a linear combination of Dirac delta functions with quadrature weights set to satisfy simple integral constraints for the computational cell. It is shown that under certain limiting assumptions, the present method reduces to the eddy dissipation concept (EDC). The model is used to predict carbon monoxide concentrations in direct numerical simulation (DNS) of a methane slot burner and in LES of an under-ventilated compartment fire.
Sensor Drift Compensation Algorithm based on PDF Distance Minimization
NASA Astrophysics Data System (ADS)
Kim, Namyong; Byun, Hyung-Gi; Persaud, Krishna C.; Huh, Jeung-Soo
2009-05-01
In this paper, a new unsupervised classification algorithm is introduced for the compensation of sensor drift effects in an odor sensing system using a conducting polymer sensor array. The proposed method continues updating adaptive Radial Basis Function Network (RBFN) weights in the testing phase based on minimizing the Euclidean distance between two probability density functions (PDFs): one of a set of training-phase output data and another of a set of testing-phase output data. The outputs in the testing phase using the fixed weights of the RBFN are significantly dispersed and shifted from each target value, due mostly to the sensor drift effect. In the experimental results, the output data produced by the proposed method are observed to be concentrated significantly closer to their own target values. This indicates that the proposed method can be effectively applied to an improved odor sensing system equipped with the capability of sensor drift compensation.
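The PDF-distance criterion can be sketched with Parzen-window density estimates and a grid approximation to their Euclidean (L2) distance; the RBFN update itself is omitted, and all sample values and bandwidths are illustrative assumptions.

```python
import math

def parzen(samples, h):
    """Parzen-window (Gaussian-kernel) density estimate from a sample set."""
    n = len(samples)
    c = 1.0 / (n * h * math.sqrt(2 * math.pi))
    return lambda x: c * sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples)

def l2_distance(p, q, lo, hi, steps=400):
    """Grid approximation to the Euclidean (L2) distance between two PDFs."""
    dx = (hi - lo) / steps
    return math.sqrt(sum((p(lo + i * dx) - q(lo + i * dx)) ** 2
                         for i in range(steps + 1)) * dx)

train = [0.0, 0.1, -0.2, 0.05, 0.15]
drift = [x + 1.0 for x in train]          # simulated sensor drift: constant shift
p = parzen(train, 0.3)
q = parzen(drift, 0.3)
r = parzen(train, 0.3)
d_same = l2_distance(p, r, -4.0, 6.0)     # identical PDFs: distance 0
d_drift = l2_distance(p, q, -4.0, 6.0)    # drifted PDF: clearly positive
```

Minimizing this distance with respect to the network weights is the drift-compensation criterion described in the abstract.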
Constraints on the Profiles of Total Water PDF in AGCMs from AIRS and a High-Resolution Model
NASA Technical Reports Server (NTRS)
Molod, Andrea
2012-01-01
Atmospheric general circulation model (AGCM) cloud parameterizations generally include an assumption about the subgrid-scale probability distribution function (PDF) of total water and its vertical profile. In the present study, the Atmospheric Infrared Sounder (AIRS) monthly-mean cloud amount and relative humidity fields are used to compute a proxy for the second moment of an AGCM total water PDF called the RH01 diagnostic, which is the AIRS mean relative humidity for cloud fractions of 0.1 or less. The dependence of the second moment on horizontal grid resolution is analyzed using results from a high-resolution global model simulation. The AIRS-derived RH01 diagnostic is generally larger near the surface than aloft, indicating a narrower PDF near the surface, and varies with the type of underlying surface. High-resolution model results show that the vertical structure of profiles of the AGCM PDF second moment is unchanged as the grid resolution changes from 200 to 100 to 50 km, and that the second-moment profiles shift toward higher values with decreasing grid spacing. Several Goddard Earth Observing System, version 5 (GEOS-5), AGCM simulations were performed with several choices for the profile of the PDF second moment. The resulting cloud and relative humidity fields were shown to be quite sensitive to the prescribed profile, and the use of a profile based on the AIRS-derived proxy results in improvements relative to observational estimates. The AIRS-guided total water PDF profiles, including their dependence on underlying surface type and on horizontal resolution, have been implemented in the version of the GEOS-5 AGCM used for publicly released simulations.
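The RH01 proxy can be sketched as a conditional mean over nearly cloud-free grid points (the arrays below are toy values, not AIRS data):

```python
def rh01(rel_hum, cloud_frac, cf_max=0.1):
    """Proxy for the total-water PDF second moment: mean relative humidity
    over grid points whose cloud fraction is cf_max (default 0.1) or less."""
    clear = [r for r, c in zip(rel_hum, cloud_frac) if c <= cf_max]
    return sum(clear) / len(clear) if clear else float("nan")

rh = [0.82, 0.55, 0.93, 0.40, 0.61]
cf = [0.05, 0.30, 0.00, 0.10, 0.45]
diag = rh01(rh, cf)   # averages the points with cf <= 0.1: 0.82, 0.93, 0.40
```

A high RH01 means cloud-free air is already close to saturation, implying a narrow total-water PDF; a low RH01 implies a broad one.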
NASA Astrophysics Data System (ADS)
Holm-Alwmark, S.; Ferrière, L.; Alwmark, C.; Poelchau, M. H.
2018-01-01
Planar deformation features (PDFs) in quartz are the most widely used indicator of shock metamorphism in terrestrial rocks. They can also be used for estimating average shock pressures that quartz-bearing rocks have been subjected to. Here we report on a number of observations and problems that we have encountered when performing universal-stage measurements and crystallographic indexing of PDF orientations in quartz. These include a comparison between manual and automated methods of indexing PDFs, an evaluation of the new stereographic projection template, and observations regarding the PDF statistics related to the c-axis position and rhombohedral plane symmetry. We further discuss the implications that our findings have for shock barometry studies. Our study shows that the currently used stereographic projection template for indexing PDFs in quartz might induce an overestimation of rhombohedral planes with low Miller-Bravais indices. We suggest, based on a comparison of different shock barometry methods, that a unified method of assigning shock pressures to samples based on PDFs in quartz is necessary to allow comparison of data sets. This method needs to take into account not only the average number of PDF sets/grain but also the number of high Miller-Bravais index planes, both of which are important factors according to our study. Finally, we present a suggestion for such a method (which is valid for nonporous quartz-bearing rock types), which consists of assigning quartz grains into types (A-E) based on the PDF orientation pattern, and then calculating a mean shock pressure for each sample.
Asteroid orbital inversion using uniform phase-space sampling
NASA Astrophysics Data System (ADS)
Muinonen, K.; Pentikäinen, H.; Granvik, M.; Oszkiewicz, D.; Virtanen, J.
2014-07-01
We review statistical inverse methods for asteroid orbit computation from a small number of astrometric observations and short time intervals of observations. With the help of Markov-chain Monte Carlo methods (MCMC), we present a novel inverse method that utilizes uniform sampling of the phase space for the orbital elements. The statistical orbital ranging method (Virtanen et al. 2001, Muinonen et al. 2001) was set out to resolve the long-lasting challenges in the initial computation of orbits for asteroids. The ranging method starts from the selection of a pair of astrometric observations. Thereafter, the topocentric ranges and angular deviations in R.A. and Decl. are randomly sampled. The two Cartesian positions allow for the computation of orbital elements and, subsequently, the computation of ephemerides for the observation dates. Candidate orbital elements are included in the sample of accepted elements if the χ^2-value between the observed and computed observations is within a pre-defined threshold. The sample orbital elements obtain weights based on a certain debiasing procedure. When the weights are available, the full sample of orbital elements allows probabilistic assessments of, e.g., object classification and ephemeris computation, as well as the computation of collision probabilities. The MCMC ranging method (Oszkiewicz et al. 2009; see also Granvik et al. 2009) replaces the original sampling algorithm described above with a proposal probability density function (p.d.f.), and a chain of sample orbital elements results in the phase space. MCMC ranging is based on a bivariate Gaussian p.d.f. for the topocentric ranges, and allows the sampling to focus on the phase-space domain with most of the probability mass. In the virtual-observation MCMC method (Muinonen et al. 2012), the proposal p.d.f. for the orbital elements is chosen to mimic the a posteriori p.d.f.
for the elements: first, random errors are simulated for each observation, resulting in a set of virtual observations; second, corresponding virtual least-squares orbital elements are derived using the Nelder-Mead downhill simplex method; third, repeating the procedure two times allows for a computation of a difference for two sets of virtual orbital elements; and, fourth, this orbital-element difference constitutes a symmetric proposal in a random-walk Metropolis-Hastings algorithm, avoiding the explicit computation of the proposal p.d.f. In a discrete approximation, the allowed proposals coincide with the differences that are based on a large number of pre-computed sets of virtual least-squares orbital elements. The virtual-observation MCMC method is thus based on the characterization of the relevant volume in the orbital-element phase space. Here we utilize MCMC to map the phase-space domain of acceptable solutions. We can make use of the proposal p.d.f.s from the MCMC ranging and virtual-observation methods. The present phase-space mapping produces, upon convergence, a uniform sampling of the solution space within a pre-defined χ^2-value. The weights of the sampled orbital elements are then computed on the basis of the corresponding χ^2-values. The present method resembles the original ranging method. On one hand, MCMC mapping is insensitive to local extrema in the phase space and efficiently maps the solution space. This is somewhat contrary to the MCMC methods described above. On the other hand, MCMC mapping can suffer from producing a small number of sample elements with small χ^2-values, in resemblance to the original ranging method. We apply the methods to example near-Earth, main-belt, and transneptunian objects, and highlight the utilization of the methods in the data processing and analysis pipeline of the ESA Gaia space mission.
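The phase-space mapping step, a random-walk chain whose target is the indicator of the region below the χ^2 threshold (so accepted samples are uniform over that region), can be sketched as follows. This is a minimal illustration, not the pipeline code: the two-element "orbital element" space and the χ^2 function are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def chi2(x):
    # Toy stand-in for the observed-minus-computed chi-square of an orbit;
    # a real implementation would propagate elements to ephemerides.
    return np.sum(x**2)

def mcmc_map(x0, threshold=9.0, step=0.5, n_steps=20000):
    """Random-walk Metropolis with an indicator target: upon convergence,
    samples are uniform over the region chi2(x) < threshold."""
    x = np.asarray(x0, float)
    samples = []
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal(x.shape)
        if chi2(prop) < threshold:   # inside the region: always accept
            x = prop
        # outside the region: reject, chain stays put
        samples.append(x.copy())
    return np.array(samples)

chain = mcmc_map([0.0, 0.0])
```

The χ^2-based weights described in the abstract would then be attached to the accepted `chain` samples in a post-processing step.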
Ding, Jinliang; Chai, Tianyou; Wang, Hong
2011-03-01
This paper presents a novel offline modeling approach for product quality prediction in mineral processing, which consists of a number of unit processes in series. The prediction of the product quality of the whole mineral process (i.e., the mixed concentrate grade) plays an important role, and the establishment of its predictive model is a key issue for plantwide optimization. For this purpose, a hybrid modeling approach for mixed concentrate grade prediction is proposed, consisting of a linear model and a nonlinear model. The least-squares support vector machine (LS-SVM) is adopted to establish the nonlinear model. The inputs of the predictive model are the performance indices of each unit process, while the output is the mixed concentrate grade. In this paper, model parameter selection is transformed into shape control of the probability density function (PDF) of the modeling error. In this context, both PDF-control-based and minimum-entropy-based model parameter selection approaches are proposed. This is the first time the PDF shape-control idea has been applied to system modeling; the key idea is to tune model parameters so that either the modeling-error PDF is controlled to follow a target PDF or the modeling-error entropy is minimized. Experimental results using real plant data and a comparison of the two approaches are discussed, showing the effectiveness of the proposed approaches.
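The minimum-entropy parameter selection idea can be illustrated with a toy sketch, assuming a Parzen-window entropy estimate for the modeling error and ridge regression as a stand-in for the LS-SVM. The data, kernel bandwidth, and candidate grid below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def error_entropy(e, bw=0.3):
    """Parzen-window (Gaussian kernel) estimate of the modeling-error
    entropy H = -E[log p(e)], evaluated at the sample points."""
    e = np.asarray(e)[:, None]
    k = np.exp(-0.5 * ((e - e.T) / bw) ** 2) / (bw * np.sqrt(2 * np.pi))
    p = k.mean(axis=1)
    return -np.mean(np.log(p))

# Toy data: linear trend plus noise; ridge regression stands in for LS-SVM.
X = rng.uniform(-1, 1, (200, 1))
y = 2.0 * X[:, 0] + 0.1 * rng.standard_normal(200)
Phi = np.hstack([X, np.ones((200, 1))])

def fit_errors(lam):
    # Ridge fit with regularization weight lam; returns modeling errors.
    w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(2), Phi.T @ y)
    return y - Phi @ w

# Minimum-entropy selection: keep the candidate whose error PDF is the
# most concentrated (lowest entropy).
candidates = [1e-3, 1e-1, 1.0, 10.0, 100.0]
best = min(candidates, key=lambda lam: error_entropy(fit_errors(lam)))
```

The PDF-control variant would instead compare the estimated error PDF against a target PDF and tune the parameters to shrink that mismatch.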
Graph-based layout analysis for PDF documents
NASA Astrophysics Data System (ADS)
Xu, Canhui; Tang, Zhi; Tao, Xin; Li, Yun; Shi, Cao
2013-03-01
To increase the flexibility and enrich the reading experience of e-books on small portable screens, a graph-based method is proposed to perform layout analysis on Portable Document Format (PDF) documents. Digitally born documents have inherent advantages, such as representing text and fractional images in explicit form, which can be exploited straightforwardly. To integrate traditional image-based document analysis with the inherent metadata provided by a PDF parser, the page primitives, including text, image, and path elements, are processed to produce a text layer and a non-text layer for separate analysis. The graph-based method operates at the superpixel representation level: page text elements, corresponding to vertices, are used to construct an undirected graph. Euclidean distance between adjacent vertices is applied in a top-down manner to cut the spanning tree formed by Kruskal's algorithm, and edge orientation is then used in a bottom-up manner to extract text lines from each subtree. Non-textual objects, on the other hand, are segmented by connected-component analysis. For each segmented text and non-text composite, a 13-dimensional feature vector is extracted for labelling purposes. Experimental results on selected pages from PDF books are presented.
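The top-down distance cut on the Kruskal spanning tree can be sketched as follows. This is a simplified illustration with 2-D points standing in for page text elements; the real method additionally uses edge orientation for text-line extraction.

```python
import numpy as np

def kruskal_mst(points):
    """Kruskal's algorithm on the complete graph of 2-D points,
    with Euclidean edge weights and a union-find structure."""
    n = len(points)
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    edges = sorted(
        (np.hypot(*(points[i] - points[j])), i, j)
        for i in range(n) for j in range(i + 1, n)
    )
    mst = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            mst.append((w, i, j))
    return mst

def cut_components(n, mst, max_dist):
    """Top-down cut: drop MST edges longer than max_dist and
    return the resulting vertex groups."""
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for w, i, j in mst:
        if w <= max_dist:
            parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Two hypothetical 'text element' clusters separated by a wide gap.
pts = np.array([[0, 0], [1, 0], [2, 0], [10, 0], [11, 0]], float)
mst = kruskal_mst(pts)
blocks = cut_components(len(pts), mst, max_dist=2.0)
```

Cutting the single long MST edge separates the two clusters, mirroring how the distance threshold splits a page into text blocks.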
A novel background field removal method for MRI using projection onto dipole fields (PDF).
Liu, Tian; Khalidov, Ildar; de Rochefort, Ludovic; Spincemaille, Pascal; Liu, Jing; Tsiouris, A John; Wang, Yi
2011-11-01
For optimal image quality in susceptibility-weighted imaging and accurate quantification of susceptibility, it is necessary to isolate the local field generated by local magnetic sources (such as iron) from the background field that arises from imperfect shimming and variations in magnetic susceptibility of surrounding tissues (including air). Previous background removal techniques have limited effectiveness depending on the accuracy of model assumptions or information input. In this article, we report an observation that the magnetic field for a dipole outside a given region of interest (ROI) is approximately orthogonal to the magnetic field of a dipole inside the ROI. Accordingly, we propose a nonparametric background field removal technique based on projection onto dipole fields (PDF). In this PDF technique, the background field inside an ROI is decomposed into a field originating from dipoles outside the ROI using the projection theorem in Hilbert space. This novel PDF background removal technique was validated on a numerical simulation and a phantom experiment and was applied in human brain imaging, demonstrating substantial improvement in background field removal compared with the commonly used high-pass filtering method. Copyright © 2011 John Wiley & Sons, Ltd.
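The projection step can be illustrated in a toy one-dimensional setting, assuming a hypothetical source kernel in place of the true 3-D dipole kernel: the background is the least-squares projection of the measured field onto the span of the fields of sources outside the ROI, and the local field is the residual.

```python
import numpy as np

# Toy 1-D setup: ROI sample points x_in, outside source positions s_out
# (both hypothetical).
x_in = np.linspace(0.0, 1.0, 50)
s_out = np.array([-0.5, -0.2, 1.3, 1.6])

def unit_field(x, s):
    # Stand-in kernel for the field of a unit source at s evaluated at x;
    # the real PDF method uses the 3-D magnetic dipole kernel.
    return 1.0 / (x - s) ** 2

# Columns of D: fields of the outside sources sampled inside the ROI.
D = np.stack([unit_field(x_in, s) for s in s_out], axis=1)

# Synthetic total field = background (from outside sources) + local field.
c_true = np.array([1.0, -0.5, 0.8, 0.3])
local_true = 0.2 * np.sin(20 * x_in)
f_total = D @ c_true + local_true

# PDF step: project the measured field onto span(D) (least squares is the
# projection theorem in this finite-dimensional setting), then subtract.
c_fit, *_ = np.linalg.lstsq(D, f_total, rcond=None)
background = D @ c_fit
f_local = f_total - background
```

Because the smooth outside-source fields are nearly orthogonal to the oscillatory local field, the residual `f_local` recovers `local_true` up to the (small) component of the local field lying in the background subspace.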
Statistics of partially-polarized fields: beyond the Stokes vector and coherence matrix
NASA Astrophysics Data System (ADS)
Charnotskii, Mikhail
2017-08-01
Traditionally, partially polarized light is characterized by the four Stokes parameters. An equivalent description is also provided by the correlation tensor of the optical field. These statistics specify only the second moments of the complex amplitudes of the narrow-band, two-dimensional electric field of the optical wave. The electric field vector of a random quasi-monochromatic wave is a nonstationary, oscillating, two-dimensional real random variable. We introduce a novel statistical description of these partially polarized waves: the period-averaged probability density function (PA-PDF) of the field. The PA-PDF contains more information on the polarization state of the field than the Stokes vector. In particular, in addition to the conventional distinction between the polarized and depolarized components of the field, the PA-PDF allows the coherent and fluctuating components of the field to be separated. We present several model examples of fields with identical Stokes vectors and very distinct shapes of the PA-PDF. In the simplest case of a nonstationary, oscillating, normal 2-D probability distribution of the real electric field and a stationary 4-D probability distribution of the complex amplitudes, the newly introduced PA-PDF is determined by 13 parameters, which include the first moments and the covariance matrix of the quadrature components of the oscillating vector field.
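A minimal numerical sketch of a period-averaged PDF for a single field component, assuming a hypothetical complex amplitude with a coherent (deterministic) part plus Gaussian fluctuations; the PA-PDF is estimated by histogramming the real field over one optical period and the ensemble.

```python
import numpy as np

rng = np.random.default_rng(4)

# Quasi-monochromatic component: E(t) = Re[A exp(-i w t)], with complex
# amplitude A = coherent part + fluctuations (values are hypothetical).
n_realizations, n_phase = 2000, 64
A = 1.0 + 0.3 * (rng.standard_normal(n_realizations)
                 + 1j * rng.standard_normal(n_realizations))
phases = 2 * np.pi * np.arange(n_phase) / n_phase

# Period-averaged PDF: pool samples of E over one period and the ensemble.
E = np.real(A[:, None] * np.exp(-1j * phases[None, :]))
pa_pdf, edges = np.histogram(E.ravel(), bins=60, density=True)
```

Changing the split between the coherent mean and the fluctuation strength of `A` reshapes `pa_pdf` even when the second moments (and hence the Stokes description) are held fixed, which is the distinction the abstract emphasizes.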
NASA Astrophysics Data System (ADS)
Fan, Cang; Liaw, P. K.; Wilson, T. W.; Choo, H.; Gao, Y. F.; Liu, C. T.; Proffen, Th.; Richardson, J. W.
2006-12-01
Contrary to reported results that structural relaxation induces brittleness in amorphous alloys, the authors found that structural relaxation actually increased the strength of Zr55Cu35Al10 bulk metallic glass (BMG) without changing its plasticity. Three-dimensional models were reconstructed for the as-cast and structurally relaxed BMGs by reverse Monte Carlo (RMC) simulations based on the pair distribution function (PDF) measured by neutron scattering. Only a small portion of the atom pairs was found to change to denser packing. The concept of free volume was defined based on the PDF and RMC studies, and the mechanism of the mechanical behavior is discussed.
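The RMC idea, refining atomic positions so that a computed pair-distance histogram approaches a measured one, can be sketched in a toy form. This sketch uses greedy acceptance only (a full RMC run also accepts some uphill moves), and the "measured" histogram here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)

def pair_distances(pos, box):
    # Minimum-image pair distances in a cubic periodic box.
    d = pos[:, None, :] - pos[None, :, :]
    d -= box * np.round(d / box)
    r = np.sqrt((d ** 2).sum(-1))
    iu = np.triu_indices(len(pos), 1)
    return r[iu]

def hist_pdf(pos, box, bins):
    # Normalized pair-distance histogram (a crude stand-in for the PDF).
    h, _ = np.histogram(pair_distances(pos, box), bins=bins)
    return h / h.sum()

# Toy RMC: refine random positions toward a synthetic target histogram.
n, box = 32, 10.0
bins = np.linspace(0.0, box / 2, 21)
g_target = hist_pdf(rng.uniform(0, box, (n, 3)), box, bins)

pos = rng.uniform(0, box, (n, 3))
cost0 = cost = np.sum((hist_pdf(pos, box, bins) - g_target) ** 2)
for _ in range(2000):
    i = rng.integers(n)
    trial = pos.copy()
    trial[i] = (trial[i] + 0.3 * rng.standard_normal(3)) % box
    c = np.sum((hist_pdf(trial, box, bins) - g_target) ** 2)
    if c <= cost:   # greedy: keep single-atom moves that improve the fit
        pos, cost = trial, c
```

Analyzing the refined `pos` configurations (e.g., nearest-neighbor statistics) is the step from which free-volume-type quantities are then extracted.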
Bayes classification of terrain cover using normalized polarimetric data
NASA Technical Reports Server (NTRS)
Yueh, H. A.; Swartz, A. A.; Kong, J. A.; Shin, R. T.; Novak, L. M.
1988-01-01
The normalized polarimetric classifier (NPC), which uses only the relative magnitudes and phases of the polarimetric data, is proposed for discrimination of terrain elements. The probability density functions (PDFs) of the polarimetric data are assumed to follow a complex Gaussian distribution, and the marginal PDF of the normalized polarimetric data is derived by adopting the Euclidean norm as the normalization function. The general form of the distance measure for the NPC is also obtained. It is demonstrated that, for polarimetric data with an arbitrary PDF, the distance measure of the NPC is independent of the normalization function selected, even when the classifier is mistrained. A complex Gaussian distribution is assumed for polarimetric data consisting of grass and tree regions. The probability of error for the NPC is compared with those of several other single-feature classifiers. The classification error of the NPC is shown to be independent of the normalization function.
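A sketch of distance-measure classification, assuming zero-mean complex Gaussian class models and Euclidean-norm normalization. The covariances are hypothetical, and the marginal PDF of the normalized data derived in the paper is simplified here to the unnormalized Gaussian distance.

```python
import numpy as np

rng = np.random.default_rng(3)

def gaussian_distance(x, C):
    """Distance measure for a zero-mean complex Gaussian class:
    d(x) = x^H C^{-1} x + ln det C (smaller = more likely)."""
    sol = np.linalg.solve(C, x)
    return np.real(np.conj(x) @ sol) + np.log(np.real(np.linalg.det(C)))

def classify(x, covariances):
    x = x / np.linalg.norm(x)   # Euclidean-norm normalization
    return min(covariances, key=lambda k: gaussian_distance(x, covariances[k]))

# Hypothetical 3-channel (HH, HV, VV) covariances for two terrain classes;
# 'tree' differs from 'grass' mainly by overall power, which the
# normalization discards.
covs = {
    "grass": np.diag([1.0, 0.2, 1.0]).astype(complex),
    "tree": np.diag([4.0, 1.0, 4.0]).astype(complex),
}

def draw(C, n):
    # Samples from a circular complex Gaussian with covariance C.
    L = np.linalg.cholesky(C)
    z = (rng.standard_normal((3, n)) + 1j * rng.standard_normal((3, n))) / np.sqrt(2)
    return (L @ z).T

samples = draw(covs["grass"], 100)
labels = [classify(x, covs) for x in samples]
```

With these particular diagonal covariances, the relative channel structure alone separates the classes, illustrating how a normalized classifier can discriminate without absolute magnitudes.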
Laser-beam scintillations for weak and moderate turbulence
NASA Astrophysics Data System (ADS)
Baskov, R. A.; Chumak, O. O.
2018-04-01
The scintillation index is obtained for the practically important range of weak and moderate atmospheric turbulence. To study this challenging range, the Boltzmann-Langevin kinetic equation describing light propagation is derived from first principles of quantum optics, based on the technique of the photon distribution function (PDF) [Berman et al., Phys. Rev. A 74, 013805 (2006), 10.1103/PhysRevA.74.013805]. The paraxial approximation for laser beams reduces the collision integral for the PDF to a two-dimensional operator in momentum space. Analytical solutions for the average value of the PDF, as well as for its fluctuating constituent, are obtained using an iterative procedure. The calculated scintillation index is considerably greater than that obtained within the Rytov approximation, even at moderate turbulence strength. An explanation for this behavior is proposed.
EUPDF-II: An Eulerian Joint Scalar Monte Carlo PDF Module : User's Manual
NASA Technical Reports Server (NTRS)
Raju, M. S.; Liu, Nan-Suey (Technical Monitor)
2004-01-01
EUPDF-II provides the solution for the species and temperature fields based on an evolution equation for the PDF (probability density function); it is developed mainly for applications involving sprays, combustion, parallel computing, and unstructured grids. It is designed to be massively parallel and can easily be coupled with any existing gas-phase CFD and spray solvers. The solver accommodates unstructured meshes with mixed elements of triangular, quadrilateral, and/or tetrahedral type. The manual provides the user with an understanding of the various models involved in the PDF formulation, the code structure and solution algorithm, and various other issues related to parallelization and coupling with other solvers. The source code of EUPDF-II will be available with the National Combustion Code (NCC) as a complete package.