Sample records for distribution method showed

  1. Scenario generation for stochastic optimization problems via the sparse grid method

    DOE PAGES

    Chen, Michael; Mehrotra, Sanjay; Papp, David

    2015-04-19

    We study the use of sparse grids in the scenario generation (or discretization) problem in stochastic programming problems where the uncertainty is modeled using a continuous multivariate distribution. We show that, under a regularity assumption on the random function involved, the sequence of optimal objective function values of the sparse grid approximations converges to the true optimal objective function values as the number of scenarios increases. The rate of convergence is also established. We treat separately the special case when the underlying distribution is an affine transform of a product of univariate distributions, and show how the sparse grid method can be adapted to the distribution by the use of quadrature formulas tailored to the distribution. We numerically compare the performance of the sparse grid method using different quadrature rules with classic quasi-Monte Carlo (QMC) methods, optimal rank-one lattice rules, and Monte Carlo (MC) scenario generation, using a series of utility maximization problems with up to 160 random variables. The results show that the sparse grid method is very efficient, especially if the integrand is sufficiently smooth. In such problems the sparse grid scenario generation method is found to need several orders of magnitude fewer scenarios than MC and QMC scenario generation to achieve the same accuracy. As a result, the method scales well with the dimension of the distribution, especially when the underlying distribution is an affine transform of a product of univariate distributions, in which case the method appears scalable to thousands of random variables.
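
    As a rough illustration of why quadrature-based scenarios can beat Monte Carlo for smooth integrands, the sketch below compares plain Monte Carlo scenarios with one-dimensional Gauss-Hermite quadrature scenarios under an assumed normal return model and an exponential utility. It is only a hedged toy analogue; the record's sparse-grid construction and high-dimensional experiments are not reproduced here.

```python
# Illustrative sketch only (not the record's sparse-grid method): compare
# Monte Carlo scenarios with Gauss-Hermite quadrature scenarios for a smooth
# one-dimensional expected-utility integrand under an assumed normal model.
import numpy as np

rng = np.random.default_rng(0)

def utility(w):
    return 1.0 - np.exp(-w)                      # smooth CARA-type utility

mu, sigma = 0.05, 0.2                            # assumed return model
z_ref = np.linspace(-8, 8, 20001)                # dense reference grid
pdf = np.exp(-0.5 * z_ref**2) / np.sqrt(2 * np.pi)
reference = np.sum(utility(mu + sigma * z_ref) * pdf) * (z_ref[1] - z_ref[0])

for n in (8, 32, 128):
    z = rng.standard_normal(n)                   # Monte Carlo scenarios
    mc_err = abs(utility(mu + sigma * z).mean() - reference)
    # Gauss-Hermite scenarios (probabilists' nodes/weights, rescaled for N(0,1))
    nodes, weights = np.polynomial.hermite_e.hermegauss(min(n, 50))
    gh = np.sum(weights * utility(mu + sigma * nodes)) / np.sqrt(2 * np.pi)
    print(f"{n:4d} scenarios   MC error {mc_err:.2e}   quadrature error {abs(gh - reference):.2e}")
```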

  2. Laser ultrasonics for measurements of high-temperature elastic properties and internal temperature distribution

    NASA Astrophysics Data System (ADS)

    Matsumoto, Takahiro; Nagata, Yasuaki; Nose, Tetsuro; Kawashima, Katsuhiro

    2001-06-01

    We show two kinds of demonstrations using a laser ultrasonic method. First, we present the results of Young's modulus of ceramics at temperatures above 1600 °C. Second, we introduce the method to determine the internal temperature distribution of a hot steel plate with errors of less than 3%. We compare the results obtained by this laser ultrasonic method with conventional contact techniques to show the validity of this method.

  3. Applying Multivariate Discrete Distributions to Genetically Informative Count Data.

    PubMed

    Kirkpatrick, Robert M; Neale, Michael C

    2016-03-01

    We present a novel method of conducting biometric analysis of twin data when the phenotypes are integer-valued counts, which often show an L-shaped distribution. Monte Carlo simulation is used to compare five likelihood-based approaches to modeling: our multivariate discrete method, when its distributional assumptions are correct, when they are incorrect, and three other methods in common use. With data simulated from a skewed discrete distribution, recovery of twin correlations and proportions of additive genetic and common environment variance was generally poor for the Normal, Lognormal and Ordinal models, but good for the two discrete models. Sex-separate applications to substance-use data from twins in the Minnesota Twin Family Study showed superior performance of two discrete models. The new methods are implemented using R and OpenMx and are freely available.

  4. Generalized Cross Entropy Method for estimating joint distribution from incomplete information

    NASA Astrophysics Data System (ADS)

    Xu, Hai-Yan; Kuo, Shyh-Hao; Li, Guoqi; Legara, Erika Fille T.; Zhao, Daxuan; Monterola, Christopher P.

    2016-07-01

    Obtaining a full joint distribution from individual marginal distributions with incomplete information is a non-trivial task that continues to challenge researchers from various domains including economics, demography, and statistics. In this work, we develop a new methodology referred to as the "Generalized Cross Entropy Method" (GCEM) that is aimed at addressing this issue. The objective function is proposed to be a weighted sum of divergences between joint distributions and various references. We show that the solution of the GCEM is unique and globally optimal. Furthermore, we illustrate the applicability and validity of the method by utilizing it to recover the joint distribution of a household profile of a given administrative region. In particular, we estimate the joint distribution of household size, household dwelling type, and household home ownership in Singapore. Results show a high-accuracy estimation of the full joint distribution of the household profile under study. Finally, the impact of constraints and weights on the estimation of the joint distribution is explored.
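
    To make the "joint distribution from marginals" setting concrete, here is a minimal sketch using iterative proportional fitting (IPF), a standard baseline for this problem. Note the assumptions: IPF is not the GCEM of the record (there is no weighted sum of divergences), and the seed table and marginals below are invented toy numbers.

```python
# Hedged sketch: iterative proportional fitting (IPF) scales a seed joint
# table until its row and column sums match the given marginals. It is a
# standard baseline, related to but not identical to the record's GCEM.
import numpy as np

def ipf(seed, row_marginal, col_marginal, iters=200, tol=1e-10):
    """Scale a seed table so its row/column sums match the target marginals."""
    table = seed.astype(float).copy()
    for _ in range(iters):
        table *= (row_marginal / table.sum(axis=1))[:, None]   # match row sums
        table *= (col_marginal / table.sum(axis=0))[None, :]   # match column sums
        if np.allclose(table.sum(axis=1), row_marginal, atol=tol):
            break
    return table

# Toy example: household size (rows) x dwelling type (cols), made-up counts
seed = np.ones((3, 2))                          # uninformative prior joint
size_marginal = np.array([50.0, 30.0, 20.0])
dwelling_marginal = np.array([60.0, 40.0])
joint = ipf(seed, size_marginal, dwelling_marginal)
print(joint, joint.sum())
```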

  5. Polynomial probability distribution estimation using the method of moments

    PubMed Central

    Mattsson, Lars; Rydén, Jesper

    2017-01-01

    We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is setup algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram–Charlier type. It is concluded that this procedure is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation. PMID:28394949

  6. Polynomial probability distribution estimation using the method of moments.

    PubMed

    Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper

    2017-01-01

    We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is setup algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram-Charlier type. It is concluded that this procedure is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation.
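
    A minimal sketch of the core idea in this abstract, assuming a known finite support [a, b]: choose polynomial coefficients so that the integrals of x^m against the polynomial reproduce the sample moments. The degree, support, and Beta-distributed test data below are illustrative choices, not the paper's setup.

```python
# Hedged sketch of method-of-moments polynomial PDF estimation: solve a
# linear system so the polynomial's first N moments on [a, b] match the
# sample moments. Support and test data are illustrative assumptions.
import numpy as np

def polynomial_pdf(moments, a, b):
    """moments[m] = E[X^m] for m = 0..N (moments[0] = 1); returns p(x)."""
    n = len(moments)
    # A[m, k] = integral_a^b x^(m+k) dx
    A = np.array([[(b**(m + k + 1) - a**(m + k + 1)) / (m + k + 1)
                   for k in range(n)] for m in range(n)])
    coeffs = np.linalg.solve(A, np.asarray(moments, dtype=float))
    return lambda x: np.polyval(coeffs[::-1], x)   # polyval wants highest power first

# Example: approximate a Beta(2, 2)-like sample on [0, 1]
rng = np.random.default_rng(1)
x = rng.beta(2, 2, 100_000)
moms = [np.mean(x**m) for m in range(5)]           # moments of order 0..4
pdf = polynomial_pdf(moms, 0.0, 1.0)
print(pdf(np.array([0.1, 0.5, 0.9])))              # roughly 0.54, 1.5, 0.54 for Beta(2, 2)
```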

  7. Tail-scope: Using friends to estimate heavy tails of degree distributions in large-scale complex networks

    NASA Astrophysics Data System (ADS)

    Eom, Young-Ho; Jo, Hang-Hyun

    2015-05-01

    Many complex networks in natural and social phenomena have often been characterized by heavy-tailed degree distributions. However, due to the rapidly growing size of network data and privacy concerns about using such data, it is becoming more difficult to analyze complete data sets. Thus, it is crucial to devise effective and efficient methods for estimating the heavy tails of degree distributions in large-scale networks using only local information from a small fraction of sampled nodes. Here we propose a tail-scope method based on the local observational bias of the friendship paradox. We show that the tail-scope method outperforms uniform node sampling for estimating the heavy tails of degree distributions, while the opposite tendency is observed in the range of small degrees. In order to take advantage of both sampling methods, we devise a hybrid method that successfully recovers the whole range of degree distributions. Our tail-scope method shows how structural heterogeneities of large-scale complex networks can be used to effectively reveal the network structure with only limited local information.
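
    The friendship-paradox bias that the record exploits is easy to see numerically. The toy sketch below (assuming networkx and a synthetic Barabási-Albert graph, not the paper's data or its corrected tail estimator) compares the degrees seen by uniform node sampling with the degrees of randomly chosen neighbours of the same nodes.

```python
# Toy illustration of the friendship paradox, not the authors' tail-scope
# estimator: a random neighbour ("friend") of a sampled node tends to have a
# much higher degree than a uniformly sampled node, exposing the heavy tail.
import random
import networkx as nx

random.seed(0)
G = nx.barabasi_albert_graph(20000, 3, seed=0)   # synthetic heavy-tailed network

sample = random.sample(list(G.nodes()), 500)
uniform_degrees = [G.degree(v) for v in sample]
friend_degrees = [G.degree(random.choice(list(G.neighbors(v)))) for v in sample]

print("max degree seen, uniform sampling:", max(uniform_degrees))
print("max degree seen, friend sampling :", max(friend_degrees))   # typically much larger
```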

  8. Temperature distribution of laser crystal in end-pumped DPSSL

    NASA Astrophysics Data System (ADS)

    Zheng, Yibo; Jia, Liping; Zhang, Lei; Wen, Jihua; Kang, Junjian

    2009-11-01

    The temperature distribution in different cooling systems was studied. A thermal distribution model of the laser crystal was established. Based on the calculation, the temperature distribution and deformation of an Nd:YVO4 crystal in different cooling systems were obtained. When the pumping power is 2 W and the radius of the pumping beam is 320 μm, the temperature distribution and end-face distortion of the laser crystal are lowest when the direct-side hydro-cooling method is used. The study shows that the direct-side hydro-cooling method is a more efficient way to control the crystal temperature distribution and reduce the thermal effect.

  9. A new method for calculating ecological flow: Distribution flow method

    NASA Astrophysics Data System (ADS)

    Tan, Guangming; Yi, Ran; Chang, Jianbo; Shu, Caiwen; Yin, Zhi; Han, Shasha; Feng, Zhiyong; Lyu, Yiwei

    2018-04-01

    A distribution flow method (DFM), together with its ecological flow index and evaluation grade standard, is proposed to study the ecological flow of rivers based on broadening kernel density estimation. The proposed DFM and its ecological flow index and evaluation grade standard are applied to the calculation of ecological flow in the middle reaches of the Yangtze River and compared with traditional hydrological ecological flow calculation methods, a flow evaluation method, and fish ecological flow calculation results. Results show that the DFM considers the intra- and inter-annual variations in natural runoff, thereby reducing the influence of extreme flows and uneven flow distributions during the year. The method also satisfies the actual runoff demand of river ecosystems, demonstrates superiority over traditional hydrological methods, and shows high space-time applicability and application value.

  10. Characterization of titanium dioxide nanoparticles in food products: analytical methods to define nanoparticles.

    PubMed

    Peters, Ruud J B; van Bemmel, Greet; Herrera-Rivera, Zahira; Helsper, Hans P F G; Marvin, Hans J P; Weigel, Stefan; Tromp, Peter C; Oomen, Agnes G; Rietveld, Anton G; Bouwmeester, Hans

    2014-07-09

    Titanium dioxide (TiO2) is a common food additive used to enhance the white color, brightness, and sometimes flavor of a variety of food products. In this study 7 food grade TiO2 materials (E171), 24 food products, and 3 personal care products were investigated for their TiO2 content and the number-based size distribution of TiO2 particles present in these products. Three principally different methods have been used to determine the number-based size distribution of TiO2 particles: electron microscopy, asymmetric flow field-flow fractionation combined with inductively coupled mass spectrometry, and single-particle inductively coupled mass spectrometry. The results show that all E171 materials have similar size distributions with primary particle sizes in the range of 60-300 nm. Depending on the analytical method used, 10-15% of the particles in these materials had sizes below 100 nm. In 24 of the 27 foods and personal care products detectable amounts of titanium were found ranging from 0.02 to 9.0 mg TiO2/g product. The number-based size distributions for TiO2 particles in the food and personal care products showed that 5-10% of the particles in these products had sizes below 100 nm, comparable to that found in the E171 materials. Comparable size distributions were found using the three principally different analytical methods. Although the applied methods are considered state of the art, they showed practical size limits for TiO2 particles in the range of 20-50 nm, which may introduce a significant bias in the size distribution because particles <20 nm are excluded. This shows the inability of current state of the art methods to support the European Union recommendation for the definition of nanomaterials.

  11. Spherical Harmonic Analysis of Particle Velocity Distribution Function: Comparison of Moments and Anisotropies using Cluster Data

    NASA Technical Reports Server (NTRS)

    Gurgiolo, Chris; Vinas, Adolfo F.

    2009-01-01

    This paper presents a spherical harmonic analysis of the plasma velocity distribution function using high-angular, energy, and time resolution Cluster data obtained from the PEACE spectrometer instrument to demonstrate how this analysis models the particle distribution function and its moments and anisotropies. The results show that spherical harmonic analysis produced a robust physical representation model of the velocity distribution function, resolving the main features of the measured distributions. From the spherical harmonic analysis, a minimum set of nine spectral coefficients was obtained from which the moment (up to the heat flux), anisotropy, and asymmetry calculations of the velocity distribution function were obtained. The spherical harmonic method provides a potentially effective "compression" technique that can be easily carried out onboard a spacecraft to determine the moments and anisotropies of the particle velocity distribution function for any species. These calculations were implemented using three different approaches, namely, the standard traditional integration, the spherical harmonic (SPH) spectral coefficients integration, and the singular value decomposition (SVD) on the spherical harmonic methods. A comparison among the various methods shows that both SPH and SVD approaches provide remarkable agreement with the standard moment integration method.

  12. Network-Oriented Approach to Distributed Generation Planning

    NASA Astrophysics Data System (ADS)

    Kochukov, O.; Mutule, A.

    2017-06-01

    The main objective of the paper is to present an innovative, complex approach to distributed generation planning and to show its advantages over existing methods. The approach will be most suitable for DNOs and authorities and has specific calculation targets to support the decision-making process. The method can be used for complex distribution networks with different arrangements and legal bases.

  13. Particle size distributions and the vertical distribution of suspended matter in the upwelling region off Oregon

    NASA Technical Reports Server (NTRS)

    Kitchen, J. C.

    1977-01-01

    Various methods of presenting and mathematically describing particle size distributions are explained and evaluated. The hyperbolic distribution is found to be the most practical, but the more complex characteristic vector analysis is the most sensitive to changes in the shape of the particle size distributions. A method for determining onshore-offshore flow patterns from the distribution of particulates is presented. A numerical model of the vertical structure of two size classes of particles was developed. The results show a close similarity to the observed distributions but overestimate the particle concentration by forty percent; this was attributed to ignoring grazing by zooplankton. Sensitivity analyses showed the size preference was most responsive to the maximum specific growth rates and nutrient half-saturation constants. The vertical structure was highly dependent on the eddy diffusivity, followed closely by the growth terms.

  14. Development of a nonlinear vortex method

    NASA Technical Reports Server (NTRS)

    Kandil, O. A.

    1982-01-01

    A steady and unsteady Nonlinear Hybrid Vortex (NHV) method for low-aspect-ratio wings at large angles of attack is developed. The method uses vortex panels with a first-order vorticity distribution (equivalent to a second-order doublet distribution) to calculate the induced velocity in the near field using closed-form expressions. In the far field, the distributed vorticity is reduced to concentrated vortex lines and the simpler Biot-Savart law is employed. The method is applied to rectangular wings in steady and unsteady flows without any restriction on the order of magnitude of the disturbances in the flow field. The numerical results show that the method accurately predicts the distributed aerodynamic loads and that it is of acceptable computational efficiency.

  15. A new idea for visualization of lesions distribution in mammogram based on CPD registration method.

    PubMed

    Pan, Xiaoguang; Qi, Buer; Yu, Hongfei; Wei, Haiping; Kang, Yan

    2017-07-20

    Mammography is currently the most effective technique for detecting breast cancer. Lesion distribution can provide support for clinical diagnosis and epidemiological studies. We present a new idea to help radiologists study breast lesion distributions conveniently. We also developed an automatic tool based on this idea which can show a visualization of the lesion distribution in a standard mammogram. First, a lesion database is established; then, breast contours are extracted and different women's mammograms are matched to a standard mammogram; finally, the lesion distribution is shown in the standard mammogram, together with the distribution statistics. The crucial step in developing this tool was matching different women's mammograms correctly. We used a hybrid breast contour extraction method combined with the coherent point drift method to match different women's mammograms. We tested our automatic tool on four mass datasets of 641 images. The distribution results shown by the tool were consistent with the results counted manually from the corresponding reports and mammograms. We also discussed the registration error, which was less than 3.3 mm in average distance. The new idea is effective, and the automatic tool can provide lesion distribution results consistent with radiologists' assessments simply and conveniently.

  16. Regional analysis of annual maximum rainfall using TL-moments method

    NASA Astrophysics Data System (ADS)

    Shabri, Ani Bin; Daud, Zalina Mohd; Ariff, Noratiqah Mohd

    2011-06-01

    Information related to distributions of rainfall amounts are of great importance for designs of water-related structures. One of the concerns of hydrologists and engineers is the probability distribution for modeling of regional data. In this study, a novel approach to regional frequency analysis using L-moments is revisited. Subsequently, an alternative regional frequency analysis using the TL-moments method is employed. The results from both methods were then compared. The analysis was based on daily annual maximum rainfall data from 40 stations in Selangor Malaysia. TL-moments for the generalized extreme value (GEV) and generalized logistic (GLO) distributions were derived and used to develop the regional frequency analysis procedure. TL-moment ratio diagram and Z-test were employed in determining the best-fit distribution. Comparison between the two approaches showed that the L-moments and TL-moments produced equivalent results. GLO and GEV distributions were identified as the most suitable distributions for representing the statistical properties of extreme rainfall in Selangor. Monte Carlo simulation was used for performance evaluation, and it showed that the method of TL-moments was more efficient for lower quantile estimation compared with the L-moments.
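
    For reference, the sample L-moments that both the L-moment and TL-moment analyses build on can be computed from probability-weighted moments in a few lines. The sketch below uses synthetic Gumbel annual maxima and omits the trimming step that defines TL-moments, so it only illustrates the standard building block, not the study's full regional procedure.

```python
# Hedged sketch: sample L-moments via the standard unbiased probability-
# weighted moment (PWM) estimators. The TL-moment trimming, GEV/GLO fitting
# and Z-test of the record are not reproduced here.
import numpy as np

def sample_l_moments(data):
    x = np.sort(np.asarray(data, dtype=float))
    n = len(x)
    j = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((j - 1) / (n - 1) * x) / n
    b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
    b3 = np.sum((j - 1) * (j - 2) * (j - 3) / ((n - 1) * (n - 2) * (n - 3)) * x) / n
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3 / l2, l4 / l2          # mean, L-scale, L-skewness, L-kurtosis

rng = np.random.default_rng(0)
annual_max = rng.gumbel(loc=50.0, scale=15.0, size=60)   # synthetic annual maxima
print(sample_l_moments(annual_max))
```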

  17. Decadal oscillations and extreme value distribution of river peak flows in the Meuse catchment

    NASA Astrophysics Data System (ADS)

    De Niel, Jan; Willems, Patrick

    2017-04-01

    In flood risk management, flood probabilities are often quantified through Generalized Pareto distributions of river peak flows. One of the main underlying assumptions is that all data points need to originate from one single underlying distribution (i.i.d. assumption). However, this hypothesis, although generally assumed to be correct for variables such as river peak flows, remains somewhat questionable: flooding might indeed be caused by different hydrological and/or meteorological conditions. This study confirms findings from previous research by showing a clear indication of the link between atmospheric conditions and flooding for the Meuse river in The Netherlands: decadal oscillations of river peak flows can (at least partially) be attributed to the occurrence of westerly weather types. The study further proposes a method to take this correlation between atmospheric conditions and river peak flows into account when calibrating an extreme value distribution for river peak flows. Rather than calibrating one single distribution to the data and potentially violating the i.i.d. assumption, weather-type-dependent extreme value distributions are derived and composed. The study shows that, for the Meuse river in The Netherlands, such an approach results in a more accurate extreme value distribution, especially with regard to extrapolations. Comparison of the proposed method with a traditional extreme value analysis approach and an alternative model-based approach for the same case study shows strong differences in the peak flow extrapolation. The design flood for a 1,250-year return period is estimated at 4,800 m³/s for the proposed method, compared with 3,450 m³/s and 3,900 m³/s for the traditional method and a previous study. The methods were validated based on instrumental and documentary flood information of the past 500 years.

  18. Diagnostics for insufficiencies of posterior calculations in Bayesian signal inference.

    PubMed

    Dorn, Sebastian; Oppermann, Niels; Ensslin, Torsten A

    2013-11-01

    We present an error-diagnostic validation method for posterior distributions in Bayesian signal inference, an advancement of a previous work. It transfers deviations from the correct posterior into characteristic deviations from a uniform distribution of a quantity constructed for this purpose. We show that this method is able to reveal and discriminate several kinds of numerical and approximation errors, as well as their impact on the posterior distribution. For this we present four typical analytical examples of posteriors with incorrect variance, skewness, position of the maximum, or normalization. We show further how this test can be applied to multidimensional signals.

  19. Beyond Hosting Capacity: Using Shortest Path Methods to Minimize Upgrade Cost Pathways: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gensollen, Nicolas; Horowitz, Kelsey A; Palmintier, Bryan S

    We present in this paper a graph-based forward-looking algorithm applied to distribution planning in the context of distributed PV penetration. We study the target hosting capacity (THC) problem, where the objective is to find the cheapest sequence of system upgrades to reach a predefined hosting capacity target value. We show in this paper that commonly used short-term cost minimization approaches lead most of the time to suboptimal solutions. By comparing our method against such myopic techniques on real distribution systems, we show that our algorithm is able to reduce the overall integration costs by looking at future decisions. Because hosting capacity is hard to compute, this problem requires efficient methods to search the space. We demonstrate here that heuristics using domain-specific knowledge can be efficiently used to improve the algorithm performance such that real distribution systems can be studied.

  20. Regional frequency analysis of extreme rainfalls using partial L moments method

    NASA Astrophysics Data System (ADS)

    Zakaria, Zahrahtul Amani; Shabri, Ani

    2013-07-01

    An approach based on regional frequency analysis using L moments and LH moments are revisited in this study. Subsequently, an alternative regional frequency analysis using the partial L moments (PL moments) method is employed, and a new relationship for homogeneity analysis is developed. The results were then compared with those obtained using the method of L moments and LH moments of order two. The Selangor catchment, consisting of 37 sites and located on the west coast of Peninsular Malaysia, is chosen as a case study. PL moments for the generalized extreme value (GEV), generalized logistic (GLO), and generalized Pareto distributions were derived and used to develop the regional frequency analysis procedure. PL moment ratio diagram and Z test were employed in determining the best-fit distribution. Comparison between the three approaches showed that GLO and GEV distributions were identified as the suitable distributions for representing the statistical properties of extreme rainfall in Selangor. Monte Carlo simulation used for performance evaluation shows that the method of PL moments would outperform L and LH moments methods for estimation of large return period events.

  1. Distributed Control of Inverter-Based Lossy Microgrids for Power Sharing and Frequency Regulation Under Voltage Constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Chin-Yao; Zhang, Wei

    This paper presents a new distributed control framework to coordinate inverter-interfaced distributed energy resources (DERs) in island microgrids. We show that under bounded load uncertainties, the proposed control method can steer the microgrid to a desired steady state with synchronized inverter frequency across the network and proportional sharing of both active and reactive powers among the inverters. We also show that such convergence can be achieved while respecting constraints on voltage magnitude and branch angle differences. The controller is robust under various contingency scenarios, including loss of communication links and failures of DERs. The proposed controller is applicable to lossy mesh microgrids with heterogeneous R/X distribution lines and reasonable parameter variations. Simulations based on various microgrid operation scenarios are also provided to show the effectiveness of the proposed control method.

  2. Applying simulation model to uniform field space charge distribution measurements by the PEA method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Y.; Salama, M.M.A.

    1996-12-31

    Signals measured under uniform fields by the Pulsed Electroacoustic (PEA) method have been processed by a deconvolution procedure to obtain space charge distributions since 1988. To simplify data processing, a direct method has recently been proposed in which the deconvolution is eliminated. However, the surface charge cannot be represented well by this method because the surface charge has a bandwidth extending from zero to infinity. The bandwidth of the charge distribution must be much narrower than the bandwidth of the PEA system transfer function in order to apply the direct method properly. When surface charges cannot be distinguished from space charge distributions, the accuracy and resolution of the obtained space charge distributions decrease. To overcome this difficulty, a simulation model is therefore proposed. This paper presents the authors' attempts to apply the simulation model to obtain space charge distributions under plane-plane electrode configurations. Due to the page limitation of the paper, the charge distribution originated by the simulation model is compared only to that obtained by the direct method with a set of simulated signals.

  3. The Molecular Weight Distribution of Polymer Samples

    ERIC Educational Resources Information Center

    Horta, Arturo; Pastoriza, M. Alejandra

    2007-01-01

    Various methods for the determination of the molecular weight distribution (MWD) of different polymer samples are presented. The study shows that the molecular weight averages and distribution of a polymerization completely depend on the characteristics of the reaction itself.

  4. Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks

    PubMed Central

    Lam, William H. K.; Li, Qingquan

    2017-01-01

    Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks. PMID:29210978

  5. Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks.

    PubMed

    Shi, Chaoyang; Chen, Bi Yu; Lam, William H K; Li, Qingquan

    2017-12-06

    Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks.

  6. A study of finite mixture model: Bayesian approach on financial time series data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-07-01

    Recently, statisticians have emphasized fitting finite mixture models using Bayesian methods. A finite mixture model is a mixture of distributions used to model a statistical distribution, while the Bayesian method is a statistical approach used to fit the mixture model. The Bayesian method is widely used because it has asymptotic properties which provide remarkable results. In addition, the Bayesian method also shows a consistency characteristic, which means the parameter estimates are close to the predictive distributions. In the present paper, the number of components for the mixture model is studied using the Bayesian Information Criterion. Identifying the number of components is important because an incorrect choice may lead to invalid results. The Bayesian method is then utilized to fit the k-component mixture model in order to explore the relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia. Lastly, the results showed that there is a negative relationship between rubber prices and stock market prices for all selected countries.

  7. The equivalence of Darmois-Israel and distributional method for thin shells in general relativity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mansouri, R.; Khorrami, M.

    1996-11-01

    A distributional method to solve Einstein's field equations for thin shells is formulated. The familiar field equations and jump conditions of the Darmois-Israel formalism are derived. A careful analysis of the Bianchi identities shows that, for the cases under consideration, they make sense as distributions and lead to the jump conditions of the Darmois-Israel formalism. © 1996 American Institute of Physics.

  8. Comparison of Methods for Estimating Low Flow Characteristics of Streams

    USGS Publications Warehouse

    Tasker, Gary D.

    1987-01-01

    Four methods for estimating the 7-day, 10-year and 7-day, 20-year low flows for streams are compared by the bootstrap method. The bootstrap method is a Monte Carlo technique in which random samples are drawn from an unspecified sampling distribution defined from observed data. The nonparametric nature of the bootstrap makes it suitable for comparing methods based on a flow series for which the true distribution is unknown. Results show that the two methods based on hypothetical distributions (Log-Pearson III and Weibull) had lower mean square errors than did the Box-Cox transformation method or the log-Boughton method, which is based on a fit of plotting positions.
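
    A hedged sketch of the bootstrap comparison idea follows: resample a synthetic annual-minimum series and compare the mean square error of two parametric 10-year low-flow (0.1 quantile) estimates. For brevity, a lognormal fit stands in for Log-Pearson III and the Box-Cox and Boughton methods are omitted, so this is not the study's exact experiment.

```python
# Hedged sketch of bootstrap comparison of low-flow quantile estimators.
# Data, distributions and sample sizes are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
flows = rng.lognormal(mean=2.0, sigma=0.5, size=40)     # synthetic 7-day annual minima

true_q10 = np.exp(2.0 + 0.5 * stats.norm.ppf(0.1))      # known 0.1 quantile of the synthetic model
errs = {"lognormal": [], "weibull": []}
for _ in range(500):
    boot = rng.choice(flows, size=len(flows), replace=True)   # bootstrap resample
    s, loc, scale = stats.lognorm.fit(boot, floc=0)
    errs["lognormal"].append(stats.lognorm.ppf(0.1, s, loc, scale) - true_q10)
    c, loc, scale = stats.weibull_min.fit(boot, floc=0)
    errs["weibull"].append(stats.weibull_min.ppf(0.1, c, loc, scale) - true_q10)

for name, e in errs.items():
    print(name, "bootstrap MSE:", np.mean(np.square(e)))
```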

  9. Automatic Data Distribution for CFD Applications on Structured Grids

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Yan, Jerry

    2000-01-01

    Data distribution is an important step in the implementation of any parallel algorithm. The data distribution determines data traffic and utilization of the interconnection network, and affects the overall code efficiency. In recent years a number of data distribution methods have been developed and used in real programs for improving data traffic. We use some of these methods for translating data dependence and affinity relations into data distribution directives. We describe an automatic data alignment and placement tool (ADAFT) which implements these methods and show its results for some CFD codes (NPB and ARC3D). The algorithms for program analysis and derivation of data distribution implemented in ADAFT are efficient three-pass algorithms. Most algorithms have linear complexity, with the exception of some graph algorithms having complexity O(n^4) in the worst case.

  10. Automatic Data Distribution for CFD Applications on Structured Grids

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Yan, Jerry

    1999-01-01

    Data distribution is an important step in the implementation of any parallel algorithm. The data distribution determines data traffic and utilization of the interconnection network, and affects the overall code efficiency. In recent years a number of data distribution methods have been developed and used in real programs for improving data traffic. We use some of these methods for translating data dependence and affinity relations into data distribution directives. We describe an automatic data alignment and placement tool (ADAPT) which implements these methods and show its results for some CFD codes (NPB and ARC3D). The algorithms for program analysis and derivation of data distribution implemented in ADAPT are efficient three-pass algorithms. Most algorithms have linear complexity, with the exception of some graph algorithms having complexity O(n^4) in the worst case.

  11. Measuring the mass distribution in stellar systems

    NASA Astrophysics Data System (ADS)

    Tremaine, Scott

    2018-06-01

    One of the fundamental tasks of dynamical astronomy is to infer the distribution of mass in a stellar system from a snapshot of the positions and velocities of its stars. The usual approach to this task (e.g. Schwarzschild's method) involves fitting parametrized forms of the gravitational potential and the phase-space distribution to the data. We review the practical and conceptual difficulties in this approach and describe a novel statistical method for determining the mass distribution that does not require determining the phase-space distribution of the stars. We show that this new estimator out-performs other distribution-free estimators for the harmonic and Kepler potentials.

  12. Radar prediction of absolute rain fade distributions for earth-satellite paths and general methods for extrapolation of fade statistics to other locations

    NASA Technical Reports Server (NTRS)

    Goldhirsh, J.

    1982-01-01

    The first absolute rain fade distribution method described establishes absolute fade statistics at a given site by means of a sampled radar data base. The second method extrapolates absolute fade statistics from one location to another, given simultaneously measured fade and rain rate statistics at the former. Both methods employ similar conditional fade statistic concepts and long term rain rate distributions. Probability deviations in the 2-19% range, with an 11% average, were obtained upon comparison of measured and predicted levels at given attenuations. The extrapolation of fade distributions to other locations at 28 GHz showed very good agreement with measured data at three sites located in the continental temperate region.

  13. Modeling error distributions of growth curve models through Bayesian methods.

    PubMed

    Zhang, Zhiyong

    2016-06-01

    Growth curve models are widely used in the social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed, although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems from blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in the efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99, is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.

  14. The research of distributed interactive simulation based on HLA in coal mine industry inherent safety

    NASA Astrophysics Data System (ADS)

    Dou, Zhi-Wu

    2010-08-01

    To address the inherent safety problem confronting the coal mining industry, the characteristics and applications of distributed interactive simulation based on the High Level Architecture (DIS/HLA) are analyzed, and a new method is proposed for developing a coal mining inherent safety distributed interactive simulation using HLA technology. After studying the function and structure of the system, a simple coal mining inherent safety system is modeled with HLA, the FOM and SOM are developed, and the mathematical models are presented. The results of the case study show that HLA plays an important role in developing distributed interactive simulations of complicated distributed systems and that the method is effective for solving the problem confronting the coal mining industry. For the coal mining industry, the conclusions show that an HLA-based simulation system helps identify hazard sources, prepare accident countermeasures, and improve the level of management.

  15. Pulsed Laser Ablation-Induced Green Synthesis of TiO2 Nanoparticles and Application of Novel Small Angle X-Ray Scattering Technique for Nanoparticle Size and Size Distribution Analysis.

    PubMed

    Singh, Amandeep; Vihinen, Jorma; Frankberg, Erkka; Hyvärinen, Leo; Honkanen, Mari; Levänen, Erkki

    2016-12-01

    This paper aims to introduce small angle X-ray scattering (SAXS) as a promising technique for measuring the size and size distribution of TiO2 nanoparticles. In this manuscript, pulsed laser ablation in liquids (PLAL) is demonstrated as a quick and simple technique for synthesizing TiO2 nanoparticles directly into deionized water as a suspension from titanium targets. Spherical TiO2 nanoparticles with diameters in the range 4-35 nm were observed with transmission electron microscopy (TEM). X-ray diffraction (XRD) showed highly crystalline nanoparticles comprising the two main photoactive phases of TiO2, anatase and rutile; minor amounts of brookite were also reported. Traditional methods for nanoparticle size and size distribution analysis, such as electron microscopy-based methods, are time-consuming. In this study, we have proposed and validated SAXS as a promising method for characterizing laser-ablated TiO2 nanoparticles for their size and size distribution by comparing SAXS- and TEM-measured nanoparticle sizes and size distributions. The SAXS- and TEM-measured size distributions closely followed each other for each sample, and the size distributions in both showed maxima at the same nanoparticle size. The SAXS-measured nanoparticle diameters were slightly larger than the respective diameters measured by TEM, because SAXS measures an agglomerate consisting of several particles as one large particle, which slightly increases the mean diameter. When plotted together, the TEM- and SAXS-measured mean diameters showed a similar trend in size as the laser power was changed, which, along with the closely matching size distributions, validates the application of SAXS for size distribution measurement of the synthesized TiO2 nanoparticles.

  16. Evaluation of the Environmental DNA Method for Estimating Distribution and Biomass of Submerged Aquatic Plants

    PubMed Central

    Matsuhashi, Saeko; Doi, Hideyuki; Fujiwara, Ayaka; Watanabe, Sonoko; Minamoto, Toshifumi

    2016-01-01

    The environmental DNA (eDNA) method has increasingly been recognized as a powerful tool for monitoring aquatic animal species; however, its application for monitoring aquatic plants is limited. To evaluate eDNA analysis for estimating the distribution of aquatic plants, we compared its estimated distributions with eDNA analysis, visual observation, and past distribution records for the submerged species Hydrilla verticillata. Moreover, we conducted aquarium experiments using H. verticillata and Egeria densa and analyzed the relationships between eDNA concentrations and plant biomass to investigate the potential for biomass estimation. The occurrences estimated by eDNA analysis closely corresponded to past distribution records, and eDNA detections were more frequent than visual observations, indicating that the method is potentially more sensitive. The results of the aquarium experiments showed a positive relationship between plant biomass and eDNA concentration; however, the relationship was not always significant. The eDNA concentration peaked within three days of the start of the experiment in most cases, suggesting that plants do not release constant amounts of DNA. These results showed that eDNA analysis can be used for distribution surveys, and has the potential to estimate the biomass of aquatic plants. PMID:27304876

  17. Evaluation of the Environmental DNA Method for Estimating Distribution and Biomass of Submerged Aquatic Plants.

    PubMed

    Matsuhashi, Saeko; Doi, Hideyuki; Fujiwara, Ayaka; Watanabe, Sonoko; Minamoto, Toshifumi

    2016-01-01

    The environmental DNA (eDNA) method has increasingly been recognized as a powerful tool for monitoring aquatic animal species; however, its application for monitoring aquatic plants is limited. To evaluate eDNA analysis for estimating the distribution of aquatic plants, we compared its estimated distributions with eDNA analysis, visual observation, and past distribution records for the submerged species Hydrilla verticillata. Moreover, we conducted aquarium experiments using H. verticillata and Egeria densa and analyzed the relationships between eDNA concentrations and plant biomass to investigate the potential for biomass estimation. The occurrences estimated by eDNA analysis closely corresponded to past distribution records, and eDNA detections were more frequent than visual observations, indicating that the method is potentially more sensitive. The results of the aquarium experiments showed a positive relationship between plant biomass and eDNA concentration; however, the relationship was not always significant. The eDNA concentration peaked within three days of the start of the experiment in most cases, suggesting that plants do not release constant amounts of DNA. These results showed that eDNA analysis can be used for distribution surveys, and has the potential to estimate the biomass of aquatic plants.

  18. Improved estimation of hydraulic conductivity by combining stochastically simulated hydrofacies with geophysical data.

    PubMed

    Zhu, Lin; Gong, Huili; Chen, Yun; Li, Xiaojuan; Chang, Xiang; Cui, Yijiao

    2016-03-01

    Hydraulic conductivity is a major parameter affecting the output accuracy of groundwater flow and transport models. The most commonly used semi-empirical formula for estimating conductivity is the Kozeny-Carman equation. However, this method alone does not work well with heterogeneous strata. Two important parameters, grain size and porosity, often show spatial variations at different scales. This study proposes a method for estimating conductivity distributions by combining a stochastic hydrofacies model with geophysical methods. The Markov chain model with a transition probability matrix was adopted to reconstruct the hydrofacies structure and derive spatial deposit information. The geophysical and hydro-chemical data were used to estimate the porosity distribution through Archie's law. Results show that the stochastically simulated hydrofacies model reflects the sedimentary features with an average model accuracy of 78% in comparison with borehole log data in the Chaobai alluvial fan. The estimated conductivity is reasonable and of the same order of magnitude as the outcomes of the pumping tests. The conductivity distribution is consistent with the sedimentary distributions. This study provides more reliable spatial distributions of the hydraulic parameters for further numerical modeling.
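
    The two relations named in the abstract can be written down directly; the sketch below uses illustrative values for the Archie coefficients (a, m), the water resistivity, and the grain size, which are site-dependent assumptions rather than the study's calibrated values.

```python
# Hedged sketch of Archie's law and the Kozeny-Carman relation with
# illustrative, formation-dependent constants (not the study's values).
import numpy as np

def porosity_archie(rho_bulk, rho_water, a=1.0, m=2.0):
    """Archie's law: formation factor F = rho_bulk / rho_water = a * phi**(-m)."""
    return (a * rho_water / rho_bulk) ** (1.0 / m)

def kozeny_carman(phi, d_grain):
    """Kozeny-Carman permeability estimate: k = d^2 / 180 * phi^3 / (1 - phi)^2."""
    return (d_grain ** 2 / 180.0) * phi ** 3 / (1.0 - phi) ** 2

rho_bulk = np.array([40.0, 80.0, 150.0])     # ohm·m, measured bulk resistivities (made up)
phi = porosity_archie(rho_bulk, rho_water=20.0)
print("porosity:", phi)
print("permeability (m^2):", kozeny_carman(phi, d_grain=2e-4))   # assuming 0.2 mm grains
```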

  19. Kirchhoff and Ohm in action: solving electric currents in continuous extended media

    NASA Astrophysics Data System (ADS)

    Dolinko, A. E.

    2018-03-01

    In this paper we show a simple and versatile computational simulation method for determining electric currents and the electric potential in 2D and 3D media with an arbitrary distribution of resistivity. One of the highlights of the proposed method is that the simulation space containing the distribution of resistivity and the points of externally applied voltage is introduced by means of digital images or bitmaps, which easily allows simulating any phenomena involving distributions of resistivity. The simulation is based on Kirchhoff's laws of electric currents and is solved by means of an iterative procedure. The method is also generalised to account for media with distributions of reactive impedance. At the end of this work, we show an example application of the simulation, consisting of reproducing the response obtained with the geophysical method of electric resistivity tomography in the presence of soil cracks. This paper is aimed at undergraduate or graduate students interested in computational physics and electricity, and also at researchers involved in the area of continuous electric media, who may find it a simple and powerful tool for investigation.
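
    A minimal sketch in the spirit of this record: iterate Kirchhoff's current law on a 2D finite-difference resistor network with a spatially varying conductivity map (here an invented resistive inclusion standing in for a soil crack) and fixed electrode potentials on two edges. Grid size, conductivity values, and boundary conditions are all illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: Jacobi iteration of sum(currents) = 0 at every interior node
# of a 2D grid with variable conductivity (a simple resistor-network solver).
import numpy as np

n = 60
sigma = np.ones((n, n))                   # conductivity map (bitmap-like array)
sigma[20:40, 25:35] = 0.01                # a resistive inclusion ("crack")

V = np.zeros((n, n))
V[:, 0], V[:, -1] = 1.0, 0.0              # electrodes on the left/right edges

# Conductances of the four "resistors" attached to every interior node
s_e = 0.5 * (sigma[1:-1, 1:-1] + sigma[1:-1, 2:])
s_w = 0.5 * (sigma[1:-1, 1:-1] + sigma[1:-1, :-2])
s_n = 0.5 * (sigma[1:-1, 1:-1] + sigma[:-2, 1:-1])
s_s = 0.5 * (sigma[1:-1, 1:-1] + sigma[2:, 1:-1])

for _ in range(5000):                     # Jacobi sweeps enforcing Kirchhoff's current law
    V[1:-1, 1:-1] = (s_e * V[1:-1, 2:] + s_w * V[1:-1, :-2] +
                     s_n * V[:-2, 1:-1] + s_s * V[2:, 1:-1]) / (s_e + s_w + s_n + s_s)

print(V[n // 2, ::10])                    # potential profile across the middle row
```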

  20. Deterministic analysis of extrinsic and intrinsic noise in an epidemiological model.

    PubMed

    Bayati, Basil S

    2016-05-01

    We couple a stochastic collocation method with an analytical expansion of the canonical epidemiological master equation to analyze the effects of both extrinsic and intrinsic noise. It is shown that, depending on the distribution of the extrinsic noise, the master equation yields quantitatively different results compared to using the expectation of the distribution for the stochastic parameter. This difference arises from the nonlinear terms in the master equation, and we show that the deviation away from the expectation of the extrinsic noise scales nonlinearly with the variance of the distribution. The method presented here converges linearly with respect to the number of particles in the system and exponentially with respect to the order of the polynomials used in the stochastic collocation calculation. This makes the method presented here more accurate than standard Monte Carlo methods, which suffer from slow, nonmonotonic convergence. In epidemiological terms, the results show that extrinsic fluctuations should be taken into account since they affect the speed of disease outbreaks, and that the gamma distribution should be used to model the basic reproductive number.
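
    The qualitative point, that averaging a nonlinear epidemic quantity over an extrinsic parameter distribution differs from plugging in the mean parameter, can be checked with a short simulation. The sketch below uses the classic SIR final-size relation as the nonlinear response and a gamma-distributed basic reproductive number; it is an assumed toy model, not the paper's master-equation expansion or collocation scheme.

```python
# Hedged sketch: f(E[R0]) differs from E[f(R0)] for a nonlinear epidemic
# response when R0 carries gamma-distributed extrinsic noise.
import numpy as np

rng = np.random.default_rng(0)

def final_outbreak_size(r0):
    """Solve z = 1 - exp(-r0 * z) by fixed-point iteration (SIR final-size relation)."""
    z = np.full_like(r0, 0.5, dtype=float)
    for _ in range(200):
        z = 1.0 - np.exp(-r0 * z)
    return z

r0_samples = rng.gamma(shape=4.0, scale=0.5, size=200_000)   # extrinsic noise, mean R0 = 2
plug_in = final_outbreak_size(np.array([2.0]))[0]            # response at the mean parameter
averaged = final_outbreak_size(r0_samples).mean()            # mean response over the distribution
print("f(E[R0]) =", plug_in, "  E[f(R0)] =", averaged)
```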

  1. Delving into α-stable distribution in noise suppression for seizure detection from scalp EEG

    NASA Astrophysics Data System (ADS)

    Wang, Yueming; Qi, Yu; Wang, Yiwen; Lei, Zhen; Zheng, Xiaoxiang; Pan, Gang

    2016-10-01

    Objective. There is serious noise in EEG caused by eye blinks and muscle activity. The noise exhibits morphologies similar to epileptic seizure signals, leading to relatively high false alarm rates in most existing seizure detection methods. The objective of this paper is to develop an effective noise suppression method for seizure detection and to explore the reason why it works. Approach. Based on a state-space model containing a non-linear observation function and multiple features as the observations, this paper delves deeply into the effect of the α-stable distribution on noise suppression for seizure detection from scalp EEG. Compared with the Gaussian distribution, the α-stable distribution is asymmetric and has relatively heavy tails. These properties make it more powerful in modeling impulsive noise in EEG, which usually cannot be handled by the Gaussian distribution. Specifically, we give a detailed analysis of the state estimation process to show why the α-stable distribution can suppress impulsive noise. Main results. To justify each component of our model, we compare our method with 4 different models with different settings on a collected 331-hour epileptic EEG dataset. To show the superiority of our method, we compare it with existing approaches on both our 331-hour data and 892 hours of public data. The results demonstrate that our method is the most effective in terms of both detection rate and false alarm rate. Significance. This is the first attempt to incorporate the α-stable distribution into a state-space model for noise suppression in seizure detection, and it achieves state-of-the-art performance.
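
    The heavy-tail property that motivates the α-stable choice can be illustrated with scipy's levy_stable distribution; the parameter values below are arbitrary and the snippet has nothing to do with the paper's state-space seizure detector itself.

```python
# Hedged sketch: contrast the tail probabilities of Gaussian noise with an
# alpha-stable noise model (illustrative alpha = 1.5, beta = 0).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 50_000
gaussian = rng.standard_normal(n)
alpha_stable = stats.levy_stable.rvs(alpha=1.5, beta=0.0, size=n, random_state=0)

for thr in (3, 5, 10):
    p_norm = np.mean(np.abs(gaussian) > thr)
    p_stable = np.mean(np.abs(alpha_stable) > thr)
    print(f"P(|x| > {thr}): normal {p_norm:.2e}, alpha-stable {p_stable:.2e}")
```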

  2. Comparison Study on the Estimation of the Spatial Distribution of Regional Soil Metal(loid)s Pollution Based on Kriging Interpolation and BP Neural Network.

    PubMed

    Jia, Zhenyi; Zhou, Shenglu; Su, Quanlong; Yi, Haomin; Wang, Junxiao

    2017-12-26

    Soil pollution by metal(loid)s resulting from rapid economic development is a major concern. Accurately estimating the spatial distribution of soil metal(loid) pollution has great significance for preventing and controlling soil pollution. In this study, 126 topsoil samples were collected in Kunshan City and the geo-accumulation index was selected as the pollution index. We used Kriging interpolation and BP neural network methods to estimate the spatial distribution of arsenic (As) and cadmium (Cd) pollution in the study area. Additionally, we introduced a cross-validation method to measure the errors of the estimation results from the two interpolation methods and discussed the accuracy of the information contained in the estimation results. The conclusions are as follows: the data distribution characteristics, spatial variability, and mean square errors (MSE) of the different methods showed large differences. Estimation results from the BP neural network models have higher accuracy; the MSE of As and Cd are 0.0661 and 0.1743, respectively. However, the interpolation results show a significantly skewed distribution, and spatial autocorrelation is strong. Using Kriging interpolation, the MSE of As and Cd are 0.0804 and 0.2983, respectively, and the estimation results have poorer accuracy. Combining the two methods can improve the accuracy of the Kriging interpolation and more comprehensively represent the spatial distribution characteristics of metal(loid)s in regional soil. The study may provide a scientific basis and technical support for the regulation of soil metal(loid) pollution.
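
    The pollution index used in this record, the geo-accumulation index Igeo = log2(Cn / (1.5 Bn)), is a one-liner; the background concentration below is an assumed illustrative value, not the study's reference value for Kunshan.

```python
# Hedged sketch of the geo-accumulation index; concentrations and background
# value are illustrative assumptions only.
import numpy as np

def igeo(concentration, background):
    """Geo-accumulation index: Igeo = log2(Cn / (1.5 * Bn))."""
    return np.log2(concentration / (1.5 * background))

cd_measured = np.array([0.12, 0.35, 0.90])   # mg/kg, sample Cd concentrations (made up)
cd_background = 0.13                         # mg/kg, assumed regional background
print(igeo(cd_measured, cd_background))      # < 0 unpolluted, 0-1 slight, > 1 polluted
```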

  3. Effect of synthesis methods with different annealing temperatures on micro structure, cations distribution and magnetic properties of nano-nickel ferrite

    NASA Astrophysics Data System (ADS)

    El-Sayed, Karimat; Mohamed, Mohamed Bakr; Hamdy, Sh.; Ata-Allah, S. S.

    2017-02-01

    Nano-crystalline NiFe2O4 was synthesized by citrate and sol-gel methods at different annealing temperatures, and the results were compared with a bulk sample prepared by the ceramic method. The effects of the preparation method and annealing temperature on the crystallite size, strain, bond lengths, bond angles, cation distribution and degree of inversion were investigated by X-ray powder diffraction, high-resolution transmission electron microscopy, Mössbauer effect spectroscopy and vibrating sample magnetometry. The cation distributions at the octahedral and tetrahedral sites were determined using both Mössbauer effect spectroscopy and a modified Bertaut method within Rietveld refinement. The Mössbauer effect spectra showed a regular decrease in the hyperfine field with decreasing particle size. The saturation magnetization and coercivity are found to be affected by the particle size and the cation distribution.

  4. Effects of cleft type, facemask anchorage method, and alveolar bone graft on maxillary protraction: a three-dimensional finite element analysis.

    PubMed

    Yang, Il-Hyung; Chang, Young-Il; Kim, Tae-Woo; Ahn, Sug-Joon; Lim, Won-Hee; Lee, Nam-Ki; Baek, Seung-Hak

    2012-03-01

    To investigate biomechanical effects of cleft type (unilateral/bilateral cleft lip and palate), facemask anchorage method (tooth-borne and miniplate anchorage), and alveolar bone graft on maxillary protraction. Three-dimensional finite element analysis with application of orthopedic force (30° downward and forward to the occlusal plane, 500 g per side). Computed tomography data from a 13.5-year-old girl with maxillary hypoplasia. Eight three-dimensional finite element models were fabricated according to cleft type, facemask anchorage method, and alveolar bone graft. Initial stress distribution and displacement after force application were analyzed. Unilateral cleft lip and palate showed an asymmetric pattern in stress distribution and displacement before alveolar bone graft and demonstrated a symmetric pattern after alveolar bone graft. However, bilateral cleft lip and palate showed symmetric patterns in stress distribution and displacement before and after alveolar bone graft. In both cleft types, the graft extended the stress distribution area laterally beyond the infraorbital foramen. For both unilateral and bilateral cleft lip and palate, a facemask with a tooth-borne anchorage showed a dentoalveolar effect with prominent stress distribution and displacement on the upper canine point. In contrast, a facemask with miniplate anchorage exhibited an orthopedic effect with more favorable stress distribution and displacement on the middle maxilla point. In addition, the facemask with a miniplate anchorage showed a larger stress distribution area and sutural stress values than did the facemask with a tooth-borne anchorage. The pterygopalatine and zygomatico-maxillary sutures showed the largest sutural stress values with a facemask with a miniplate anchorage and after alveolar bone grafting, respectively. In this three-dimensional finite element analysis, it would be more advantageous to perform maxillary protraction using a facemask with a miniplate anchorage than a facemask with a tooth-borne anchorage and after alveolar bone graft rather than before alveolar bone graft, regardless of cleft type.

  5. Comparing the index-flood and multiple-regression methods using L-moments

    NASA Astrophysics Data System (ADS)

    Malekinezhad, H.; Nachtnebel, H. P.; Klik, A.

    In arid and semi-arid regions, the length of records is usually too short to ensure reliable quantile estimates. Comparing index-flood and multiple-regression analyses based on L-moments was the main objective of this study. Factor analysis was applied to determine the main variables influencing flood magnitude. Ward's clustering and L-moments approaches were applied to several sites in the Namak-Lake basin in central Iran to delineate homogeneous regions based on site characteristics. The homogeneity test was performed using L-moments-based measures. Several distributions were fitted to the regional flood data, and the index-flood and multiple-regression methods were compared as two regional flood frequency approaches. The results of factor analysis showed that length of main waterway, compactness coefficient, mean annual precipitation, and mean annual temperature were the main variables affecting flood magnitude. The study area was divided into three regions based on Ward's clustering method. The homogeneity test based on L-moments showed that all three regions were acceptably homogeneous. Five distributions were fitted to the annual peak flood data of the three homogeneous regions. Using the L-moment ratios and the Z-statistic criterion, the GEV distribution was identified as the most robust among the five candidate distributions for all the proposed sub-regions of the study area; in general, the generalised extreme value distribution was the best-fit distribution for all three regions. The relative root mean square error (RRMSE) measure was applied to evaluate the performance of the index-flood and multiple-regression methods in comparison with the curve fitting (plotting position) method. In general, the index-flood method gives more reliable estimates of flood magnitudes for different recurrence intervals. Therefore, this method should be adopted as the regional flood frequency method for the study area and the Namak-Lake basin in central Iran. To estimate floods of various return periods for gauged catchments in the study area, the mean annual peak flood of a catchment may be multiplied by the corresponding growth factors computed from the GEV distribution.
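
    As a rough illustration of the L-moment approach described above (not the authors' code), the sketch below computes sample L-moments, converts them to GEV parameters using Hosking's approximation for the shape, and evaluates a quantile for a chosen return period. The flood values are hypothetical placeholders.

    ```python
    import numpy as np
    from scipy.special import gamma as G

    def sample_l_moments(x):
        """First three sample L-moments via probability-weighted moments."""
        x = np.sort(np.asarray(x, dtype=float))
        n = len(x)
        i = np.arange(1, n + 1)
        b0 = x.mean()
        b1 = np.sum((i - 1) / (n - 1) * x) / n
        b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
        l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
        return l1, l2, l3 / l2          # mean, L-scale, L-skewness (t3)

    def gev_from_l_moments(l1, l2, t3):
        """GEV parameters from L-moments (Hosking's approximation for the shape k)."""
        c = 2.0 / (3.0 + t3) - np.log(2) / np.log(3)
        k = 7.8590 * c + 2.9554 * c ** 2                 # shape
        alpha = l2 * k / (G(1 + k) * (1 - 2 ** (-k)))    # scale
        xi = l1 - alpha * (1 - G(1 + k)) / k             # location
        return xi, alpha, k

    # Hypothetical annual peak floods (m^3/s) standing in for a homogeneous region
    peaks = np.array([120., 95., 210., 160., 88., 300., 140., 175., 230., 110.])
    xi, alpha, k = gev_from_l_moments(*sample_l_moments(peaks))

    # Quantile (flood magnitude) for a T-year return period
    T = 50.0
    q = xi + alpha / k * (1 - (-np.log(1 - 1 / T)) ** k)
    print(f"GEV: xi={xi:.1f}, alpha={alpha:.1f}, k={k:.3f}, 50-yr flood ~ {q:.0f} m^3/s")
    ```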

  6. Ice Water Classification Using Statistical Distribution Based Conditional Random Fields in RADARSAT-2 Dual Polarization Imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Li, F.; Zhang, S.; Hao, W.; Zhu, T.; Yuan, L.; Xiao, F.

    2017-09-01

    In this paper, a Statistical Distribution based Conditional Random Fields (STA-CRF) algorithm is exploited for improving marginal ice-water classification. Pixel-level ice concentration is presented as a comparison among the CRF-based methods. Furthermore, in order to explore the most effective statistical distribution model to integrate into STA-CRF, five statistical distribution models are investigated. The STA-CRF methods are tested on 2 scenes around Prydz Bay and the Adélie Depression, which contain a variety of ice types during the melt season. Experimental results indicate that the proposed method can resolve the sea ice edge well in the Marginal Ice Zone (MIZ) and shows a robust distinction between ice and water.

  7. Mixture distributions of wind speed in the UAE

    NASA Astrophysics Data System (ADS)

    Shin, J.; Ouarda, T.; Lee, T. S.

    2013-12-01

    Wind speed probability distributions are commonly used to estimate potential wind energy. The 2-parameter Weibull distribution has been most widely used to characterize the distribution of wind speed. However, it is unable to properly model wind speed regimes when the wind speed distribution presents bimodal and kurtotic shapes. Several studies have concluded that the Weibull distribution should not be used for frequency analysis of wind speed without first investigating the wind speed distribution. Due to these mixture distributional characteristics of wind speed data, the application of mixture distributions should be further investigated in the frequency analysis of wind speed. A number of studies have investigated the potential wind energy in different parts of the Arabian Peninsula, and mixture distributional characteristics of wind speed were detected in some of them. Nevertheless, mixture distributions have not been employed for wind speed modeling in the Arabian Peninsula. In order to improve our understanding of wind energy potential in the Arabian Peninsula, mixture distributions should be tested for the frequency analysis of wind speed. The aim of the current study is to assess the suitability of mixture distributions for the frequency analysis of wind speed in the UAE. Hourly mean wind speed data at 10-m height from 7 stations were used. The Weibull and Kappa distributions were employed as representatives of the conventional non-mixture distributions. Ten mixture distributions were constructed by mixing four probability distributions: Normal, Gamma, Weibull and Extreme Value Type I (EV-1). Three parameter estimation methods, namely the Expectation-Maximization algorithm, the Least Squares method and the Meta-Heuristic Maximum Likelihood (MHML) method, were employed to estimate the parameters of the mixture distributions. In order to compare the goodness-of-fit of the tested distributions and parameter estimation methods for the sample wind data, the adjusted coefficient of determination, the Bayesian Information Criterion (BIC) and the Chi-squared statistic were computed. Results indicate that MHML presents the best parameter estimation performance for the mixture distributions used. At most of the 7 stations, mixture distributions give the best fit. When the wind speed regime shows mixture distributional characteristics, most of these regimes also present kurtotic statistical characteristics. In particular, applications of mixture distributions for these stations show a significant improvement in explaining the whole wind speed regime. In addition, the Weibull-Weibull mixture distribution presents the best fit for the wind speed data in the UAE.
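
    The sketch below illustrates, under stated assumptions, how a single Weibull fit can be compared with a two-component Weibull-Weibull mixture by direct likelihood maximization and BIC. It does not reproduce the paper's MHML or EM estimators, and the wind speeds are synthetic stand-ins rather than UAE station data.

    ```python
    import numpy as np
    from scipy.stats import weibull_min
    from scipy.optimize import minimize

    # Synthetic stand-in for hourly wind speeds with a bimodal regime (not UAE data)
    v = np.concatenate([weibull_min.rvs(2.0, scale=4.0, size=3000, random_state=1),
                        weibull_min.rvs(3.5, scale=9.0, size=2000, random_state=2)])

    def nll_weibull(p):
        k, lam = p
        if min(k, lam) <= 0:
            return np.inf
        return -np.sum(weibull_min.logpdf(v, k, scale=lam))

    def nll_weibull_mix(p):
        w, k1, l1, k2, l2 = p
        if not 0.0 < w < 1.0 or min(k1, l1, k2, l2) <= 0:
            return np.inf
        pdf = (w * weibull_min.pdf(v, k1, scale=l1)
               + (1 - w) * weibull_min.pdf(v, k2, scale=l2))
        return -np.sum(np.log(pdf + 1e-300))

    fit1 = minimize(nll_weibull, x0=[2.0, 5.0], method="Nelder-Mead")
    fit2 = minimize(nll_weibull_mix, x0=[0.5, 2.0, 4.0, 3.0, 8.0],
                    method="Nelder-Mead", options={"maxiter": 20000})

    bic = lambda nll, n_par: 2.0 * nll + n_par * np.log(len(v))
    print("BIC, single Weibull     :", round(bic(fit1.fun, 2), 1))
    print("BIC, Weibull-Weibull mix:", round(bic(fit2.fun, 5), 1))  # lower BIC preferred
    ```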

  8. Modeling vibration response and damping of cables and cabled structures

    NASA Astrophysics Data System (ADS)

    Spak, Kaitlin S.; Agnes, Gregory S.; Inman, Daniel J.

    2015-02-01

    In an effort to model the vibration response of cabled structures, the distributed transfer function method is developed to model cables and a simple cabled structure. The model includes shear effects, tension, and hysteretic damping for modeling of helical stranded cables, and includes a method for modeling cable attachment points using both linear and rotational damping and stiffness. The damped cable model shows agreement with experimental data for four types of stranded cables, and the damped cabled beam model shows agreement with experimental data for the cables attached to a beam structure, as well as improvement over the distributed mass method for cabled structure modeling.

  9. EMD-WVD time-frequency distribution for analysis of multi-component signals

    NASA Astrophysics Data System (ADS)

    Chai, Yunzi; Zhang, Xudong

    2016-10-01

    A time-frequency distribution (TFD) is a two-dimensional function that indicates the time-varying frequency content of a one-dimensional signal. The Wigner-Ville distribution (WVD) is an important and effective time-frequency analysis method, and it can efficiently show the characteristics of a mono-component signal. However, a major drawback is the extra cross-terms that appear when multi-component signals are analyzed with the WVD. In order to eliminate the cross-terms, we first decompose the signal into mono-component Intrinsic Mode Functions (IMFs) using Empirical Mode Decomposition (EMD), and then use the WVD to analyze each IMF separately. In this paper, we define this new time-frequency distribution as EMD-WVD. Experimental results show that the proposed time-frequency method can solve the cross-term problem effectively and improve the accuracy of WVD time-frequency analysis.
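
    A minimal sketch of the recombination step is given below. It assumes the IMFs have already been obtained from some EMD implementation (here they are replaced by two synthetic mono-component signals) and uses a plain discrete Wigner-Ville distribution; summing the per-IMF WVDs avoids the cross-terms that a WVD of the composite signal would contain. The usual WVD caveat applies that the frequency axis is effectively doubled.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def wvd(x):
        """Plain discrete Wigner-Ville distribution of a real signal (via its analytic form)."""
        z = hilbert(np.asarray(x, dtype=float))
        n = len(z)
        W = np.zeros((n, n))
        for t in range(n):
            m = min(t, n - 1 - t)                  # largest usable lag at this instant
            tau = np.arange(-m, m + 1)
            kern = np.zeros(n, dtype=complex)
            kern[tau % n] = z[t + tau] * np.conj(z[t - tau])  # conjugate-symmetric in lag
            W[:, t] = np.real(np.fft.fft(kern))    # rows: frequency bins
        return W

    # Stand-ins for IMFs; in the EMD-WVD scheme these would come from an EMD decomposition
    t = np.linspace(0.0, 1.0, 256, endpoint=False)
    imfs = [np.cos(2 * np.pi * 20 * t), np.cos(2 * np.pi * 60 * t)]
    emd_wvd = sum(wvd(imf) for imf in imfs)        # per-IMF WVDs summed: no cross-terms
    print(emd_wvd.shape)
    ```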

  10. Graphical determination of wall temperatures for heat transfers through walls of arbitrary shape

    NASA Technical Reports Server (NTRS)

    Lutz, Otto

    1950-01-01

    A graphical method is given which permits determination of the temperature distribution during heat transfer in arbitrarily shaped walls. Three examples show the application of the method. The further development of heat engines depends to a great extent on the control of the thermal stresses in the walls. The thermal stresses stem from the nonuniform temperature distribution in heat transfer through walls which are, for structural reasons, of various thicknesses and sometimes of complicated shape. Thus, it is important to know the temperature distribution in these structural parts. In the following, a method is given which permits solution of this problem.

  11. Adaptive allocation for binary outcomes using decreasingly informative priors.

    PubMed

    Sabo, Roy T

    2014-01-01

    A method of outcome-adaptive allocation is presented using Bayes methods, where a natural lead-in is incorporated through the use of informative yet skeptical prior distributions for each treatment group. These prior distributions are modeled on unobserved data in such a way that their influence on the allocation scheme decreases as the trial progresses. Simulation studies show this method to behave comparably to the Bayesian adaptive allocation method described by Thall and Wathen (2007), who incorporate a natural lead-in through sample-size-based exponents.

  12. Use of the Box-Cox Transformation in Detecting Changepoints in Daily Precipitation Data Series

    NASA Astrophysics Data System (ADS)

    Wang, X. L.; Chen, H.; Wu, Y.; Pu, Q.

    2009-04-01

    This study integrates a Box-Cox power transformation procedure into two statistical tests for detecting changepoints in Gaussian data series, to make the changepoint detection methods applicable to non-Gaussian data series, such as daily precipitation amounts. The detection power of the transformed methods in a common-trend two-phase regression setting is assessed by Monte Carlo simulations for data drawn from a log-normal or Gamma distribution. The results show that the transformed methods have increased power of detection in comparison with the corresponding original (untransformed) methods, and the transformed data approximate a Gaussian distribution much more closely. As an example of application, the new methods are applied to a series of daily precipitation amounts recorded at a station in Canada, showing satisfactory detection power.
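
    As a minimal sketch of the idea (not the specific tests used in the study), the example below applies a Box-Cox transform to synthetic Gamma-distributed data and then scans for the changepoint of a simple common-trend two-phase regression; all values are illustrative.

    ```python
    import numpy as np
    from scipy.stats import boxcox

    rng = np.random.default_rng(42)
    # Synthetic precipitation-like series (Gamma) with a mean shift at index 500
    y = np.concatenate([rng.gamma(2.0, 3.0, 500), rng.gamma(2.0, 4.5, 500)])

    # Box-Cox transform (requires strictly positive data) to approximate Gaussianity
    yt, lam = boxcox(y + 0.01)

    # Common-trend two-phase regression y = a + b*t + c*1[t > k]; scan k for the best fit
    t = np.arange(len(yt))
    sse = []
    for k in range(20, len(yt) - 20):                # keep a margin at both ends
        X = np.column_stack([np.ones_like(t, dtype=float), t, (t > k).astype(float)])
        resid = yt - X @ np.linalg.lstsq(X, yt, rcond=None)[0]
        sse.append(resid @ resid)
    k_hat = 20 + int(np.argmin(sse))
    print(f"Box-Cox lambda = {lam:.2f}, most likely changepoint near index {k_hat}")
    ```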

  13. Generalized empirical Bayesian methods for discovery of differential data in high-throughput biology.

    PubMed

    Hardcastle, Thomas J

    2016-01-15

    High-throughput data are now commonplace in biological research. Rapidly changing technologies and applications mean that novel methods for detecting differential behaviour that account for a 'large P, small n' setting are required at an increasing rate. The development of such methods is, in general, being done on an ad hoc basis, requiring further development cycles and leading to a lack of standardization between analyses. We present here a generalized method for identifying differential behaviour within high-throughput biological data through empirical Bayesian methods. This approach is based on our baySeq algorithm for identification of differential expression in RNA-seq data based on a negative binomial distribution, and in paired data based on a beta-binomial distribution. Here we show how the same empirical Bayesian approach can be applied to any parametric distribution, removing the need for lengthy development of novel methods for differently distributed data. Comparisons with existing methods developed to address specific problems in high-throughput biological data show that these generic methods can achieve equivalent or better performance. A number of enhancements to the basic algorithm are also presented to increase flexibility and reduce computational costs. The methods are implemented in the R baySeq (v2) package, available on Bioconductor http://www.bioconductor.org/packages/release/bioc/html/baySeq.html. tjh48@cam.ac.uk Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  14. Optimized spray drying process for preparation of one-step calcium-alginate gel microspheres

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Popeski-Dimovski, Riste

    Calcium-alginate microparticles have been used extensively in drug delivery systems. We therefore establish a one-step method for the preparation of internally gelated microparticles with spherical shape and narrow size distribution. We use four types of alginate with different G/M ratios and molar weights. The size of the particles is measured using light diffraction and scanning electron microscopy. Measurements showed that, with this method, microparticles with a size distribution centered around 4 micrometers can be prepared, and SEM imaging showed that those particles are spherical in shape.

  15. Regional probability distribution of the annual reference evapotranspiration and its effective parameters in Iran

    NASA Astrophysics Data System (ADS)

    Khanmohammadi, Neda; Rezaie, Hossein; Montaseri, Majid; Behmanesh, Javad

    2017-10-01

    The reference evapotranspiration (ET0) plays an important role in water management plans in arid or semi-arid countries such as Iran. For this reason, the regional analysis of this parameter is important. The ET0 process is affected by several meteorological parameters such as wind speed, solar radiation, temperature and relative humidity. Therefore, the effect of the distribution type of the effective meteorological variables on the ET0 distribution was analyzed. For this purpose, the regional probability distributions of the annual ET0 and its effective parameters were selected. The data used in this research were recorded at 30 synoptic stations in Iran during 1960-2014. Using the probability plot correlation coefficient (PPCC) test and the L-moment method, five common distributions were compared and the best distribution was selected. The results of the PPCC test and the L-moment diagram indicated that the Pearson type III distribution was the best probability distribution for fitting annual ET0 and its four effective parameters. The RMSE results showed that the PPCC test and the L-moment method had similar ability for regional analysis of reference evapotranspiration and its effective parameters. The results also showed that the distribution type of the parameters which affect ET0 values can affect the distribution of reference evapotranspiration.

  16. New algorithm and system for measuring size distribution of blood cells

    NASA Astrophysics Data System (ADS)

    Yao, Cuiping; Li, Zheng; Zhang, Zhenxi

    2004-06-01

    In optical scattering particle sizing, a numerical transform is sought so that a particle size distribution can be determined from angular measurements of near-forward scattering, an approach that has been adopted for the measurement of blood cells. In this paper, a new method for counting and classifying blood cells, based on laser light scattering from stationary suspensions, is presented. A genetic algorithm combined with a non-negative least squares algorithm is employed to invert the size distribution of the blood cells. Numerical tests show that these techniques can be successfully applied to measuring the size distribution of blood cells with high stability.
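
    The sketch below shows only the non-negative least squares part of such an inversion on a synthetic example. The scattering kernel here is a smooth placeholder (in practice it would come from Mie or Fraunhofer diffraction theory), and the genetic-algorithm refinement used in the paper is omitted.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Discretized sizes (diameters, micrometers) and near-forward scattering angles (degrees)
    d = np.linspace(2.0, 20.0, 40)
    theta = np.linspace(0.5, 10.0, 60)

    # Placeholder kernel: K[i, j] = intensity at angle i from size j (surrogate, not Mie theory)
    K = np.exp(-np.outer(np.deg2rad(theta) ** 2, d ** 2) * 20.0) * d ** 2

    # Synthetic "measurement": bimodal size distribution plus a little noise
    f_true = np.exp(-(d - 6) ** 2 / 2) + 0.6 * np.exp(-(d - 14) ** 2 / 3)
    I = K @ f_true + 1e-4 * np.random.default_rng(0).normal(size=len(theta))

    # Non-negative least squares recovers a physically admissible size distribution
    f_est, _ = nnls(K, I)
    print("Largest recovered bin near diameter:", float(d[np.argmax(f_est)]), "um")
    ```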

  17. To Model Chemical Reactivity in Heterogeneous Emulsions, Think Homogeneous Microemulsions.

    PubMed

    Bravo-Díaz, Carlos; Romsted, Laurence Stuart; Liu, Changyao; Losada-Barreiro, Sonia; Pastoriza-Gallego, Maria José; Gao, Xiang; Gu, Qing; Krishnan, Gunaseelan; Sánchez-Paz, Verónica; Zhang, Yongliang; Dar, Aijaz Ahmad

    2015-08-25

    Two important and unsolved problems in the food industry and also fundamental questions in colloid chemistry are how to measure molecular distributions, especially antioxidants (AOs), and how to model chemical reactivity, including AO efficiency in opaque emulsions. The key to understanding reactivity in organized surfactant media is that reaction mechanisms are consistent with a discrete structures-separate continuous regions duality. Aggregate structures in emulsions are determined by highly cooperative but weak organizing forces that allow reactants to diffuse at rates approaching their diffusion-controlled limit. Reactant distributions for slow thermal bimolecular reactions are in dynamic equilibrium, and their distributions are proportional to their relative solubilities in the oil, interfacial, and aqueous regions. Our chemical kinetic method is grounded in thermodynamics and combines a pseudophase model with methods for monitoring the reactions of AOs with a hydrophobic arenediazonium ion probe in opaque emulsions. We introduce (a) the logic and basic assumptions of the pseudophase model used to define the distributions of AOs among the oil, interfacial, and aqueous regions in microemulsions and emulsions and (b) the dye derivatization and linear sweep voltammetry methods for monitoring the rates of reaction in opaque emulsions. Our results show that this approach provides a unique, versatile, and robust method for obtaining quantitative estimates of AO partition coefficients or partition constants and distributions and interfacial rate constants in emulsions. The examples provided illustrate the effects of various emulsion properties on AO distributions such as oil hydrophobicity, emulsifier structure and HLB, temperature, droplet size, surfactant charge, and acidity on reactant distributions. Finally, we show that the chemical kinetic method provides a natural explanation for the cut-off effect, a maximum followed by a sharp reduction in AO efficiency with increasing alkyl chain length of a particular AO. We conclude with perspectives and prospects.

  18. Investigation of Current Methods to Identify Helicopter Gear Health

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula J.; Lewicki, David G.; Le, Dy D.

    2007-01-01

    This paper provides an overview of current vibration methods used to identify the health of helicopter transmission gears. The gears are critical to the transmission system that provides propulsion, lift and maneuvering of the helicopter. This paper reviews techniques used to process vibration data to calculate conditions indicators (CI's), guidelines used by the government aviation authorities in developing and certifying the Health and Usage Monitoring System (HUMS), condition and health indicators used in commercial HUMS, and different methods used to set thresholds to detect damage. Initial assessment of a method to set thresholds for vibration based condition indicators applied to flight and test rig data by evaluating differences in distributions between comparable transmissions are also discussed. Gear condition indicator FM4 values are compared on an OH58 helicopter during 14 maneuvers and an OH58 transmission test stand during crack propagation tests. Preliminary results show the distributions between healthy helicopter and rig data are comparable and distributions between healthy and damaged gears show significant differences.

  20. Gaussian theory for spatially distributed self-propelled particles

    NASA Astrophysics Data System (ADS)

    Seyed-Allaei, Hamid; Schimansky-Geier, Lutz; Ejtehadi, Mohammad Reza

    2016-12-01

    Obtaining a reduced description in terms of particle and momentum flux densities from the microscopic equations of motion of the particles requires approximations. The usual method, which we refer to as the truncation method, is to set to zero all Fourier modes of the orientation distribution beyond a given number. Here we propose another method to derive continuum equations for interacting self-propelled particles. The derivation is based on a Gaussian approximation (GA) of the distribution of the direction of the particles. First, by means of simulations of the microscopic model, we justify that the distribution of individual directions fits well to a wrapped Gaussian distribution. Second, we numerically integrate the continuum equations derived in the GA in order to compare with the results of simulations. We find that the global polarization in the GA exhibits a hysteresis in dependence on the noise intensity, showing qualitatively the same behavior as in the particle simulations. Moreover, both global polarizations agree perfectly for low noise intensities. The spatiotemporal structures of the GA are also in agreement with simulations. We conclude that the GA shows qualitative agreement over a wide range of noise intensities. In particular, for low noise intensities the agreement with simulations is better than for other approximations, making the GA an acceptable candidate for describing spatially distributed self-propelled particles.

  1. Investigation of diffusion length distribution on polycrystalline silicon wafers via photoluminescence methods

    PubMed Central

    Lou, Shishu; Zhu, Huishi; Hu, Shaoxu; Zhao, Chunhua; Han, Peide

    2015-01-01

    Characterization of the diffusion length of solar cells in space has been widely studied using various methods, but few studies have focused on a fast, simple way to obtain the quantified diffusion length distribution on a silicon wafer. In this work, we present two different facile methods of doing this by fitting photoluminescence images taken in two different wavelength ranges or from different sides. These methods, which are based on measuring the ratio of two photoluminescence images, yield absolute values of the diffusion length and are less sensitive to the inhomogeneity of the incident laser beam. A theoretical simulation and experimental demonstration of this method are presented. The diffusion length distributions on a polycrystalline silicon wafer obtained by the two methods show good agreement. PMID:26364565

  2. Enhancement of a 2D front-tracking algorithm with a non-uniform distribution of Lagrangian markers

    NASA Astrophysics Data System (ADS)

    Febres, Mijail; Legendre, Dominique

    2018-04-01

    The 2D front tracking method is enhanced to control the development of spurious velocities for non-uniform distributions of markers. The hybrid formulation of Shin et al. (2005) [7] is considered. A new tangent calculation is proposed for the computation of the tension force at the markers. A new reconstruction method is also proposed to manage non-uniform distributions of markers. We show that for both the static and the translating spherical drop test cases the spurious currents are reduced to machine precision. We also show that the ratio of the Lagrangian grid size Δs to the Eulerian grid size Δx has to satisfy Δs / Δx > 0.2 to ensure such a low level of spurious velocity. The method is found to provide very good agreement with benchmark test cases from the literature.

  3. Improved estimation of hydraulic conductivity by combining stochastically simulated hydrofacies with geophysical data

    PubMed Central

    Zhu, Lin; Gong, Huili; Chen, Yun; Li, Xiaojuan; Chang, Xiang; Cui, Yijiao

    2016-01-01

    Hydraulic conductivity is a major parameter affecting the output accuracy of groundwater flow and transport models. The most commonly used semi-empirical formula for estimating conductivity is the Kozeny-Carman equation. However, this method alone does not work well with heterogeneous strata. Two important parameters, grain size and porosity, often show spatial variations at different scales. This study proposes a method for estimating conductivity distributions by combining a stochastic hydrofacies model with geophysical methods. A Markov chain model with a transition probability matrix was adopted to reconstruct the structure of the hydrofacies and derive spatial deposit information. The geophysical and hydro-chemical data were used to estimate the porosity distribution through Archie's law. Results show that the stochastically simulated hydrofacies model reflects the sedimentary features with an average model accuracy of 78% in comparison with borehole log data in the Chaobai alluvial fan. The estimated conductivity is reasonable and of the same order of magnitude as the outcomes of the pumping tests. The conductivity distribution is consistent with the sedimentary distributions. This study provides more reliable spatial distributions of the hydraulic parameters for further numerical modeling. PMID:26927886

  4. Comparison Study on the Estimation of the Spatial Distribution of Regional Soil Metal(loid)s Pollution Based on Kriging Interpolation and BP Neural Network

    PubMed Central

    Zhou, Shenglu; Su, Quanlong; Yi, Haomin

    2017-01-01

    Soil pollution by metal(loid)s resulting from rapid economic development is a major concern. Accurately estimating the spatial distribution of soil metal(loid) pollution has great significance for preventing and controlling soil pollution. In this study, 126 topsoil samples were collected in Kunshan City and the geo-accumulation index was selected as a pollution index. We used Kriging interpolation and BP neural network methods to estimate the spatial distribution of arsenic (As) and cadmium (Cd) pollution in the study area. Additionally, we introduced a cross-validation method to measure the errors of the estimation results of the two interpolation methods and discussed the accuracy of the information contained in the estimation results. The conclusions are as follows: the data distribution characteristics, spatial variability, and mean square errors (MSE) of the different methods showed large differences. Estimation results from the BP neural network models have higher accuracy; the MSE of As and Cd are 0.0661 and 0.1743, respectively. However, the interpolation results show significantly skewed distributions, and the spatial autocorrelation is strong. Using Kriging interpolation, the MSE of As and Cd are 0.0804 and 0.2983, respectively, and the estimation results have poorer accuracy. Combining the two methods can improve the accuracy of the Kriging interpolation and more comprehensively represent the spatial distribution characteristics of metal(loid)s in regional soil. The study may provide a scientific basis and technical support for the regulation of soil metal(loid) pollution. PMID:29278363

  5. Design, implementation and application of distributed order PI control.

    PubMed

    Zhou, Fengyu; Zhao, Yang; Li, Yan; Chen, YangQuan

    2013-05-01

    In this paper, a series of distributed order PI controller design methods are derived and applied to the robust control of wheeled service robots, which can tolerate more structural and parametric uncertainties than the corresponding fractional order PI control. A practical discrete incremental distributed order PI control strategy is proposed based on the discretization method and frequency-domain criteria, which can be commonly used in many fields of fractional order systems, control and signal processing. Besides, an auto-tuning strategy and a genetic algorithm are applied to the distributed order PI control as well. A number of experimental results are provided to show the advantages and distinguishing features of the discussed methods in fairways. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.

  6. Research on social communication network evolution based on topology potential distribution

    NASA Astrophysics Data System (ADS)

    Zhao, Dongjie; Jiang, Jian; Li, Deyi; Zhang, Haisu; Chen, Guisheng

    2011-12-01

    Aiming at the problem of social communication network evolution, topology potential is first introduced to measure the local influence among nodes in networks. Second, from the perspective of topology potential distribution, a method of network evolution description based on topology potential distribution is presented, which takes artificial intelligence with uncertainty as its basic theory and the local influence among nodes as its essence. Then, a social communication network is constructed from the Enron email dataset, and the presented method is used to analyze the characteristics of the social communication network evolution; some useful conclusions are obtained, implying that the method is effective and showing that topology potential distribution can effectively describe the sociological characteristics of the network and detect local changes in a social communication network.

  7. Air method measurements of apple vessel length distributions with improved apparatus and theory

    Treesearch

    Shabtal Cohen; John Bennink; Mel Tyree

    2003-01-01

    Studies showing that rootstock dwarfing potential is related to plant hydraulic conductance led to the hypothesis that xylem properties are also related. Vessel length distribution and other properties of apple wood from a series of varieties were measured using the 'air method' in order to test this hypothesis. Apparatus was built to measure and monitor...

  8. Optical arc sensor using energy harvesting power source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Kyoo Nam, E-mail: knchoi@inu.ac.kr; Rho, Hee Hyuk, E-mail: rdoubleh0902@inu.ac.kr

    Wireless sensors without an external power supply have gained considerable attention due to convenience in both installation and operation. An optical arc detecting sensor equipped with a self-sustaining power supply using an energy harvesting method was investigated. Continuous energy harvesting was attempted using a thermoelectric generator to supply standby power on the microampere scale and operating power on the mA scale. A Peltier module with a heat sink was used as a high-efficiency electricity generator. The optical arc detecting sensor with a hybrid filter showed insensitivity to fluorescent and incandescent lamps under simulated distribution panel conditions. Signal processing using an integrating function showed selective arc discharge detection capability for different arc energy levels, with a resolution below a 17 J energy difference, unaffected by the bursting arc waveform. The sensor showed potential for application as an arc discharge detecting sensor in a power distribution panel. The experiment with the proposed continuous energy harvesting method using thermoelectric power also showed its potential as a self-sustainable power source for a remote sensor.

  9. Optical arc sensor using energy harvesting power source

    NASA Astrophysics Data System (ADS)

    Choi, Kyoo Nam; Rho, Hee Hyuk

    2016-06-01

    Wireless sensors without an external power supply have gained considerable attention due to convenience in both installation and operation. An optical arc detecting sensor equipped with a self-sustaining power supply using an energy harvesting method was investigated. Continuous energy harvesting was attempted using a thermoelectric generator to supply standby power on the microampere scale and operating power on the mA scale. A Peltier module with a heat sink was used as a high-efficiency electricity generator. The optical arc detecting sensor with a hybrid filter showed insensitivity to fluorescent and incandescent lamps under simulated distribution panel conditions. Signal processing using an integrating function showed selective arc discharge detection capability for different arc energy levels, with a resolution below a 17 J energy difference, unaffected by the bursting arc waveform. The sensor showed potential for application as an arc discharge detecting sensor in a power distribution panel. The experiment with the proposed continuous energy harvesting method using thermoelectric power also showed its potential as a self-sustainable power source for a remote sensor.

  10. Characterization of background concentrations of contaminants using a mixture of normal distributions.

    PubMed

    Qian, Song S; Lyons, Regan E

    2006-10-01

    We present a Bayesian approach for characterizing background contaminant concentration distributions using data from sites that may have been contaminated. Our method, focused on estimation, resolves several technical problems of the existing methods sanctioned by the U.S. Environmental Protection Agency (USEPA) (a hypothesis testing based method), resulting in a simple and quick procedure for estimating background contaminant concentrations. The proposed Bayesian method is applied to two data sets from a federal facility regulated under the Resource Conservation and Restoration Act. The results are compared to background distributions identified using existing methods recommended by the USEPA. The two data sets represent low and moderate levels of censorship in the data. Although an unbiased estimator is elusive, we show that the proposed Bayesian estimation method will have a smaller bias than the EPA recommended method.
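
    As a loose analogy to the mixture idea (not the Bayesian procedure of the paper), the sketch below fits a two-component Gaussian mixture to synthetic log-concentrations and takes the lower-mean component as the background distribution; censoring, which the paper handles explicitly, is ignored here.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)
    # Synthetic log-concentrations: a background component plus an elevated one
    logc = np.concatenate([rng.normal(0.0, 0.4, 300),     # background
                           rng.normal(1.5, 0.5, 60)])     # site-related contamination
    X = logc.reshape(-1, 1)

    gm = GaussianMixture(n_components=2, random_state=0).fit(X)
    bg = int(np.argmin(gm.means_.ravel()))                # lower-mean component = background
    mu, sd = gm.means_.ravel()[bg], float(np.sqrt(gm.covariances_.ravel()[bg]))
    print(f"Estimated background (log scale): mean={mu:.2f}, sd={sd:.2f}, "
          f"weight={gm.weights_[bg]:.2f}")
    ```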

  11. Evaluation of 4D-CT lung registration.

    PubMed

    Kabus, Sven; Klinder, Tobias; Murphy, Keelin; van Ginneken, Bram; van Lorenz, Cristian; Pluim, Josien P W

    2009-01-01

    Non-rigid registration accuracy assessment is typically performed by evaluating the target registration error at manually placed landmarks. For 4D-CT lung data, we compare two sets of landmark distributions: a smaller set primarily defined on vessel bifurcations as commonly described in the literature and a larger set being well-distributed throughout the lung volume. For six different registration schemes (three in-house schemes and three schemes frequently used by the community) the landmark error is evaluated and found to depend significantly on the distribution of the landmarks. In particular, lung regions near to the pleura show a target registration error three times larger than near-mediastinal regions. While the inter-method variability on the landmark positions is rather small, the methods show discriminating differences with respect to consistency and local volume change. In conclusion, both a well-distributed set of landmarks and a deformation vector field analysis are necessary for reliable non-rigid registration accuracy assessment.

  12. Poly (lactic-co-glycolic acid) particles prepared by microfluidics and conventional methods. Modulated particle size and rheology.

    PubMed

    Perez, Aurora; Hernández, Rebeca; Velasco, Diego; Voicu, Dan; Mijangos, Carmen

    2015-03-01

    Microfluidic techniques are expected to provide narrower particle size distribution than conventional methods for the preparation of poly (lactic-co-glycolic acid) (PLGA) microparticles. Besides, it is hypothesized that the particle size distribution of poly (lactic-co-glycolic acid) microparticles influences the settling behavior and rheological properties of its aqueous dispersions. For the preparation of PLGA particles, two different methods, microfluidic and conventional oil-in-water emulsification methods were employed. The particle size and particle size distribution of PLGA particles prepared by microfluidics were studied as a function of the flow rate of the organic phase while particles prepared by conventional methods were studied as a function of stirring rate. In order to study the stability and structural organization of colloidal dispersions, settling experiments and oscillatory rheological measurements were carried out on aqueous dispersions of PLGA particles with different particle size distributions. Microfluidics technique allowed the control of size and size distribution of the droplets formed in the process of emulsification. This resulted in a narrower particle size distribution for samples prepared by MF with respect to samples prepared by conventional methods. Polydisperse samples showed a larger tendency to aggregate, thus confirming the advantages of microfluidics over conventional methods, especially if biomedical applications are envisaged. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Modelling population distribution using remote sensing imagery and location-based data

    NASA Astrophysics Data System (ADS)

    Song, J.; Prishchepov, A. V.

    2017-12-01

    Detailed spatial distribution of population density is essential for city studies such as urban planning, environmental pollution, and city emergency management, and even for estimating pressure on the environment and human exposure and risks to health. However, most research has used census data, as detailed dynamic population distributions are difficult to acquire, especially in microscale research. This research describes a method using remote sensing imagery and location-based data to model population distribution at the functional zone level. Firstly, urban functional zones within a city were mapped using high-resolution remote sensing images and POIs. The workflow of functional zone extraction includes five parts: (1) urban land use classification; (2) segmenting images in the built-up area; (3) identification of functional segments by POIs; (4) identification of functional blocks by functional segmentation and weight coefficients; (5) assessing accuracy with validation points. The result is shown in Fig. 1. Secondly, we applied ordinary least squares (OLS) and geographically weighted regression (GWR) to assess the spatially nonstationary relationship between the light digital number (DN) and the population density of sampling points. The two methods were employed to predict the population distribution over the research area. The R² of the GWR models was on the order of 0.7 and typically showed more significant variation over the region than the traditional OLS model (Fig. 2). Validation with sampling points of population density demonstrated that the results predicted by the GWR model correlated well with the light values (Fig. 3). Results showed that: (1) population density is not linearly correlated with light brightness in a global model; (2) VIIRS night-time light data can be used to estimate population density when integrating functional zones at the city level; (3) GWR is a robust model for mapping population distribution, as the adjusted R² of the corresponding GWR models was higher than that of the optimal OLS models, confirming that GWR models provide better prediction accuracy. This method therefore provides detailed population density information for microscale citizen studies.

  14. Thermodynamic method for generating random stress distributions on an earthquake fault

    USGS Publications Warehouse

    Barall, Michael; Harris, Ruth A.

    2012-01-01

    This report presents a new method for generating random stress distributions on an earthquake fault, suitable for use as initial conditions in a dynamic rupture simulation. The method employs concepts from thermodynamics and statistical mechanics. A pattern of fault slip is considered to be analogous to a micro-state of a thermodynamic system. The energy of the micro-state is taken to be the elastic energy stored in the surrounding medium. Then, the Boltzmann distribution gives the probability of a given pattern of fault slip and stress. We show how to decompose the system into independent degrees of freedom, which makes it computationally feasible to select a random state. However, due to the equipartition theorem, straightforward application of the Boltzmann distribution leads to a divergence which predicts infinite stress. To avoid equipartition, we show that the finite strength of the fault acts to restrict the possible states of the system. By analyzing a set of earthquake scaling relations, we derive a new formula for the expected power spectral density of the stress distribution, which allows us to construct a computer algorithm free of infinities. We then present a new technique for controlling the extent of the rupture by generating a random stress distribution thousands of times larger than the fault surface, and selecting a portion which, by chance, has a positive stress perturbation of the desired size. Finally, we present a new two-stage nucleation method that combines a small zone of forced rupture with a larger zone of reduced fracture energy.
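
    As a minimal sketch of the general technique of generating a random field with a prescribed power spectral density by Fourier filtering of white noise (the report derives a specific spectral form; the exponent and normalization below are placeholders):

    ```python
    import numpy as np

    def random_stress_field(n, beta, amplitude=1.0, seed=0):
        """2-D random field whose power spectral density falls off as k^(-beta).

        White noise is filtered in the Fourier domain, one standard way to
        realize a self-affine random stress perturbation on a fault plane.
        """
        rng = np.random.default_rng(seed)
        kx = np.fft.fftfreq(n)
        ky = np.fft.fftfreq(n)
        k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
        k[0, 0] = np.inf                               # suppress the k=0 (mean) mode
        spectrum = k ** (-beta / 2.0)                  # amplitude spectrum ~ sqrt(PSD)
        noise = np.fft.fft2(rng.standard_normal((n, n)))
        field = np.real(np.fft.ifft2(noise * spectrum))
        return amplitude * field / field.std()

    stress = random_stress_field(256, beta=3.0)        # beta controls the roughness
    print(stress.shape, round(float(stress.std()), 3))
    ```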

  15. Self-Organizing Maps and Parton Distribution Functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    K. Holcomb, Simonetta Liuti, D. Z. Perry

    2011-05-01

    We present a new method to extract parton distribution functions from high energy experimental data based on a specific type of neural networks, the Self-Organizing Maps. We illustrate the features of our new procedure that are particularly useful for an analysis directed at extracting generalized parton distributions from data. We show quantitative results of our initial analysis of the parton distribution functions from inclusive deep inelastic scattering.

  16. Wind Curtailment and the Value of Transmission under a 2050 Wind Vision

    Science.gov Websites

    dispatches each generating unit in the geographical footprint using the least-cost method based on many inputs, just as the Wind Vision study did, with a somewhat different geographical distribution due to data distributed fairly well throughout the western U.S. The map shows a somewhat different story.

  17. Robust Methods for Moderation Analysis with a Two-Level Regression Model.

    PubMed

    Yang, Miao; Yuan, Ke-Hai

    2016-01-01

    Moderation analysis has many applications in social sciences. Most widely used estimation methods for moderation analysis assume that errors are normally distributed and homoscedastic. When these assumptions are not met, the results from a classical moderation analysis can be misleading. For more reliable moderation analysis, this article proposes two robust methods with a two-level regression model when the predictors do not contain measurement error. One method is based on maximum likelihood with Student's t distribution and the other is based on M-estimators with Huber-type weights. An algorithm for obtaining the robust estimators is developed. Consistent estimates of standard errors of the robust estimators are provided. The robust approaches are compared against normal-distribution-based maximum likelihood (NML) with respect to power and accuracy of parameter estimates through a simulation study. Results show that the robust approaches outperform NML under various distributional conditions. Application of the robust methods is illustrated through a real data example. An R program is developed and documented to facilitate the application of the robust methods.
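
    A small sketch of Huber-type M-estimation for a moderation (interaction) model, using statsmodels rather than the authors' R program; the data are simulated with heavy-tailed errors to mimic a violation of the normality assumption.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n = 500
    x = rng.normal(size=n)               # focal predictor
    z = rng.normal(size=n)               # moderator
    e = rng.standard_t(df=3, size=n)     # heavy-tailed errors violate normality
    y = 0.5 + 0.4 * x + 0.3 * z + 0.25 * x * z + e   # true moderation effect = 0.25

    X = sm.add_constant(np.column_stack([x, z, x * z]))
    ols = sm.OLS(y, X).fit()
    rob = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()  # M-estimation, Huber weights
    print("OLS interaction estimate  :", round(ols.params[3], 3))
    print("Huber interaction estimate:", round(rob.params[3], 3))
    ```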

  18. A Comparative Study of Distribution System Parameter Estimation Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup

    2016-07-17

    In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems; therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are performed on a modified version of the IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.

  19. Study on Diagnosing Three Dimensional Cloud Region

    NASA Astrophysics Data System (ADS)

    Cai, M., Jr.; Zhou, Y., Sr.

    2017-12-01

    Cloud mask and relative humidity (RH) provided by CloudSat products from 2007 to 2008 are statistically analyzed to obtain the RH threshold between cloud and clear sky and its variation with height. A diagnosis method is proposed based on reanalysis data and applied to a three-dimensional cloud field diagnosis of a real case. The diagnosed cloud field was compared to satellite, radar and other cloud and precipitation observations. The main results are as follows. 1. Cloud regions where the cloud mask is greater than 20 correspond well in space and time to regions of high relative humidity provided by the ECMWF-AUX product. Statistical analysis of the RH frequency distribution within and outside clouds indicates that the distribution of in-cloud RH at different height ranges is single-peaked, with the peak near an RH value of 100%. The local atmospheric environment affects the RH distribution outside clouds, which leads the RH threshold distribution to vary across regions and heights. 2. The RH threshold and its vertical distribution used for cloud diagnosis were analyzed with the Threat Score method. The method was applied to a three-dimensional cloud diagnosis case study based on NCEP reanalysis data, and the diagnosed cloud field was compared to satellite, radar and ground-based cloud and precipitation observations. It is found that the RH gradient is very large around cloud regions and the cloud area diagnosed by the RH threshold method is relatively stable. The diagnosed cloud area corresponds well to the updraft region. The cloud and clear-sky distribution corresponds overall to the satellite TBB observations. The diagnosed cloud depth, or the sum of cloud layers, is more consistent with optical thickness and precipitation on the ground. The cloud vertical profile clearly reveals the relation between cloud vertical structure and the weather system. The diagnosed cloud distribution corresponds very well to ground-based cloud observations. 3. The method is improved by changing the vertical coordinate from altitude to temperature. The result shows that all five measures, including the TS score for clear sky, the false alarm rate, the missed forecast rate, and especially the TS score for cloud regions and the accuracy rate, improve obviously. Thus, specifying the RH threshold and its vertical distribution with temperature is better than with altitude. More tests and comparisons should be done to assess the diagnosis method.
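
    To make the Threat Score selection step concrete, the sketch below scans candidate RH thresholds against a synthetic "observed" cloud mask; in the study this would be done per height (or temperature) bin using the CloudSat cloud mask and reanalysis RH, so all data here are placeholders.

    ```python
    import numpy as np

    def threat_score(diagnosed, observed):
        """TS = hits / (hits + misses + false alarms) for binary cloud masks."""
        hits = np.sum(diagnosed & observed)
        misses = np.sum(~diagnosed & observed)
        false_alarms = np.sum(diagnosed & ~observed)
        return hits / (hits + misses + false_alarms)

    rng = np.random.default_rng(7)
    # Synthetic stand-ins: gridded RH (%) and an "observed" cloud mask
    rh = rng.uniform(40, 110, size=20000)
    observed_cloud = rh + rng.normal(0, 8, size=rh.size) > 95

    # Scan candidate thresholds and keep the one with the highest Threat Score
    thresholds = np.arange(70, 105, 1.0)
    scores = [threat_score(rh > th, observed_cloud) for th in thresholds]
    best = thresholds[int(np.argmax(scores))]
    print(f"Best RH threshold ~ {best:.0f}% (TS = {max(scores):.2f})")
    ```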

  20. Effect of distributed generation installation on power loss using genetic algorithm method

    NASA Astrophysics Data System (ADS)

    Hasibuan, A.; Masri, S.; Othman, W. A. F. W. B.

    2018-02-01

    Injection of distributed generation (DG) into the distribution network can significantly affect the power system. The effect depends on the allocation of DG in each part of the distribution network. The approach was implemented on the IEEE 30-bus standard system and yields the optimum location and size of the DG, which results in a decrease in power losses in the system. This paper aims to show the impact of distributed generation on distribution system losses. The main purpose of installing DG in a distribution system is to reduce power losses in the power system. Among the problems in power systems that can be addressed by installing DG, the one explored in this study is the reduction of power loss in the transmission line. Simulation results from case studies on the IEEE 30-bus standard system show that the system power loss decreased from 5.7781 MW to 1.5757 MW, i.e., to about 27.27% of its original value. The simulated DG is injected at the bus with the lowest voltage drop, bus number 8.

  1. Confounding environmental colour and distribution shape leads to underestimation of population extinction risk.

    PubMed

    Fowler, Mike S; Ruokolainen, Lasse

    2013-01-01

    The colour of environmental variability influences the size of population fluctuations when filtered through density dependent dynamics, driving extinction risk through dynamical resonance. Slow fluctuations (low frequencies) dominate in red environments, rapid fluctuations (high frequencies) in blue environments and white environments are purely random (no frequencies dominate). Two methods are commonly employed to generate the coloured spatial and/or temporal stochastic (environmental) series used in combination with population (dynamical feedback) models: autoregressive [AR(1)] and sinusoidal (1/f) models. We show that changing environmental colour from white to red with 1/f models, and from white to red or blue with AR(1) models, generates coloured environmental series that are not normally distributed at finite time-scales, potentially confounding comparison with normally distributed white noise models. Increasing variability of sample Skewness and Kurtosis and decreasing mean Kurtosis of these series alter the frequency distribution shape of the realised values of the coloured stochastic processes. These changes in distribution shape alter patterns in the probability of single and series of extreme conditions. We show that the reduced extinction risk for undercompensating (slow growing) populations in red environments previously predicted with traditional 1/f methods is an artefact of changes in the distribution shapes of the environmental series. This is demonstrated by comparison with coloured series controlled to be normally distributed using spectral mimicry. Changes in the distribution shape that arise using traditional methods lead to underestimation of extinction risk in normally distributed, red 1/f environments. AR(1) methods also underestimate extinction risks in traditionally generated red environments. This work synthesises previous results and provides further insight into the processes driving extinction risk in model populations. We must let the characteristics of known natural environmental covariates (e.g., colour and distribution shape) guide us in our choice of how to best model the impact of coloured environmental variation on population dynamics.
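
    A brief sketch of two ingredients discussed above: an AR(1) generator for coloured environmental noise and a simple rank-preserving "spectral mimicry" step that forces the marginal distribution to be normal while keeping the temporal ordering (and hence the colour) of the series. The parameter values are illustrative only.

    ```python
    import numpy as np

    def ar1_series(n, kappa, seed=0):
        """AR(1) environmental noise; kappa > 0 reddens, kappa < 0 blues the spectrum."""
        rng = np.random.default_rng(seed)
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = kappa * x[t - 1] + np.sqrt(1 - kappa ** 2) * rng.standard_normal()
        return x

    def spectral_mimicry(series, seed=0):
        """Replace the values of a coloured series with a sorted Gaussian sample,
        preserving the rank order so the series keeps its colour but gains a
        normal marginal distribution."""
        rng = np.random.default_rng(seed)
        gauss = np.sort(rng.standard_normal(len(series)))
        out = np.empty_like(series)
        out[np.argsort(series)] = gauss
        return out

    red = ar1_series(1000, kappa=0.7)
    red_normal = spectral_mimicry(red)      # same temporal ordering, normal marginals
    print(round(float(np.corrcoef(red[:-1], red[1:])[0, 1]), 2),
          round(float(np.corrcoef(red_normal[:-1], red_normal[1:])[0, 1]), 2))
    ```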

  2. The Mean Distance to the nth Neighbour in a Uniform Distribution of Random Points: An Application of Probability Theory

    ERIC Educational Resources Information Center

    Bhattacharyya, Pratip; Chakrabarti, Bikas K.

    2008-01-01

    We study different ways of determining the mean distance r_n between a reference point and its nth neighbour among random points distributed with uniform density in a D-dimensional Euclidean space. First, we present a heuristic method; though this method provides only a crude mathematical result, it shows a simple way of estimating…

  3. A Step-Wise Approach to Elicit Triangular Distributions

    NASA Technical Reports Server (NTRS)

    Greenberg, Marc W.

    2013-01-01

    Adapt and combine known methods to demonstrate an expert judgment elicitation process that: (1) models the expert's inputs as a triangular distribution, (2) incorporates techniques to account for expert bias, and (3) is structured in a way that helps justify the expert's inputs. This paper will show one way of "extracting" expert opinion for estimating purposes. Nevertheless, as with most subjective methods, there are many ways to do this.
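
    A minimal sketch of turning elicited minimum, most-likely and maximum values into a triangular distribution, with a crude range widening standing in for the bias-adjustment techniques mentioned above; the numbers and the 10% widening factor are hypothetical.

    ```python
    import numpy as np
    from scipy.stats import triang

    # Elicited values (hypothetical): optimistic, most likely, pessimistic cost in $K
    lo, mode, hi = 80.0, 120.0, 200.0

    # Optional bias adjustment: widen the range because experts tend to be overconfident
    spread = 0.10 * (hi - lo)
    lo_adj, hi_adj = lo - spread, hi + spread

    dist = triang(c=(mode - lo_adj) / (hi_adj - lo_adj), loc=lo_adj, scale=hi_adj - lo_adj)
    print("Mean:", round(dist.mean(), 1), " 90th percentile:", round(dist.ppf(0.9), 1))
    ```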

  4. Time difference of arrival estimation of microseismic signals based on alpha-stable distribution

    NASA Astrophysics Data System (ADS)

    Jia, Rui-Sheng; Gong, Yue; Peng, Yan-Jun; Sun, Hong-Mei; Zhang, Xing-Li; Lu, Xin-Ming

    2018-05-01

    Microseismic signals are generally considered to follow the Gauss distribution. A comparison of the dynamic characteristics of sample variance and the symmetry of microseismic signals with the signals which follow α-stable distribution reveals that the microseismic signals have obvious pulse characteristics and that the probability density curve of the microseismic signal is approximately symmetric. Thus, the hypothesis that microseismic signals follow the symmetric α-stable distribution is proposed. On the premise of this hypothesis, the characteristic exponent α of the microseismic signals is obtained by utilizing the fractional low-order statistics, and then a new method of time difference of arrival (TDOA) estimation of microseismic signals based on fractional low-order covariance (FLOC) is proposed. Upon applying this method to the TDOA estimation of Ricker wavelet simulation signals and real microseismic signals, experimental results show that the FLOC method, which is based on the assumption of the symmetric α-stable distribution, leads to enhanced spatial resolution of the TDOA estimation relative to the generalized cross correlation (GCC) method, which is based on the assumption of the Gaussian distribution.
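
    The sketch below illustrates the fractional low-order covariance idea on synthetic data with impulsive (heavy-tailed) noise: signed fractional powers are applied before correlating, and the lag of the FLOC peak gives the TDOA estimate. The exponents a and b are placeholders chosen so that a + b stays below a plausible characteristic exponent alpha; this is not the authors' implementation.

    ```python
    import numpy as np

    def frac_power(x, p):
        """Signed fractional power |x|^p * sign(x), as used in fractional low-order statistics."""
        return np.sign(x) * np.abs(x) ** p

    def floc_tdoa(x, y, max_lag, a=0.6, b=0.6):
        """Delay of y relative to x from the peak of the fractional low-order covariance."""
        lags = np.arange(-max_lag, max_lag + 1)
        xa, yb = frac_power(x, a), frac_power(y, b)
        vals = [np.mean(xa[max(0, -l):len(x) - max(0, l)] *
                        yb[max(0, l):len(y) - max(0, -l)]) for l in lags]
        return lags[int(np.argmax(vals))]

    rng = np.random.default_rng(5)
    s = rng.standard_normal(4000)
    delay = 25
    x = s + 0.5 * rng.standard_t(df=1.5, size=4000)                 # impulsive noise
    y = np.roll(s, delay) + 0.5 * rng.standard_t(df=1.5, size=4000)
    print("Estimated TDOA (samples):", floc_tdoa(x, y, max_lag=60))
    ```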

  5. A Bayesian inversion for slip distribution of 1 Apr 2007 Mw8.1 Solomon Islands Earthquake

    NASA Astrophysics Data System (ADS)

    Chen, T.; Luo, H.

    2013-12-01

    On 1 Apr 2007 the megathrust Mw8.1 Solomon Islands earthquake occurred in the southeast Pacific along the New Britain subduction zone. 102 vertical displacement measurements over the southeastern end of the rupture zone from two field surveys after this event provide a unique constraint for slip distribution inversion. In conventional inversion methods (such as bounded variable least squares), the smoothing parameter that determines the relative weight placed on fitting the data versus smoothing the slip distribution is often subjectively selected at the bend of the trade-off curve. Here a fully probabilistic inversion method [Fukuda, 2008] is applied to estimate the distributed slip and the smoothing parameter objectively. The joint posterior probability density function of the distributed slip and the smoothing parameter is formulated under a Bayesian framework and sampled with a Markov chain Monte Carlo method. We estimate the spatial distribution of dip slip associated with the 1 Apr 2007 Solomon Islands earthquake with this method. Early results show a shallower dip angle than previous studies and highly variable dip slip both along-strike and down-dip.

  6. Gradients estimation from random points with volumetric tensor in turbulence

    NASA Astrophysics Data System (ADS)

    Watanabe, Tomoaki; Nagata, Koji

    2017-12-01

    We present an estimation method of fully-resolved/coarse-grained gradients from randomly distributed points in turbulence. The method is based on a linear approximation of spatial gradients expressed with the volumetric tensor, which is a 3 × 3 matrix determined by a geometric distribution of the points. The coarse grained gradient can be considered as a low pass filtered gradient, whose cutoff is estimated with the eigenvalues of the volumetric tensor. The present method, the volumetric tensor approximation, is tested for velocity and passive scalar gradients in incompressible planar jet and mixing layer. Comparison with a finite difference approximation on a Cartesian grid shows that the volumetric tensor approximation computes the coarse grained gradients fairly well at a moderate computational cost under various conditions of spatial distributions of points. We also show that imposing the solenoidal condition improves the accuracy of the present method for solenoidal vectors, such as a velocity vector in incompressible flows, especially when the number of the points is not large. The volumetric tensor approximation with 4 points poorly estimates the gradient because of anisotropic distribution of the points. Increasing the number of points from 4 significantly improves the accuracy. Although the coarse grained gradient changes with the cutoff length, the volumetric tensor approximation yields the coarse grained gradient whose magnitude is close to the one obtained by the finite difference. We also show that the velocity gradient estimated with the present method well captures the turbulence characteristics such as local flow topology, amplification of enstrophy and strain, and energy transfer across scales.
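
    A rough sketch of the linear least-squares gradient estimate described above: the 3 x 3 matrix assembled from the separation vectors plays the role of the volumetric tensor, and solving it against the value differences gives the (coarse-grained) gradient. The test field and point count are arbitrary stand-ins, not the paper's turbulence data.

    ```python
    import numpy as np

    def ls_gradient(x0, pts, f0, f):
        """Least-squares estimate of grad(f) at x0 from scattered neighbour points.

        Solves (sum dx dx^T) g = sum dx df, where the 3x3 matrix on the left plays
        the role of the volumetric tensor built from the point distribution.
        """
        dx = pts - x0                      # (N, 3) separation vectors
        df = f - f0                        # (N,) value differences
        V = dx.T @ dx                      # volumetric tensor (3 x 3)
        b = dx.T @ df
        return np.linalg.solve(V, b)

    rng = np.random.default_rng(2)
    x0 = np.zeros(3)
    pts = rng.uniform(-0.05, 0.05, size=(8, 3))          # 8 random neighbour points
    true_grad = np.array([1.0, -2.0, 0.5])
    field = lambda p: p @ true_grad + 0.1 * np.sum(p ** 2, axis=-1)   # smooth test field
    print("Estimated gradient:", np.round(ls_gradient(x0, pts, field(x0), field(pts)), 3))
    ```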

  7. Preserving Institutional Privacy in Distributed binary Logistic Regression.

    PubMed

    Wu, Yuan; Jiang, Xiaoqian; Ohno-Machado, Lucila

    2012-01-01

    Privacy is becoming a major concern when sharing biomedical data across institutions. Although methods for protecting the privacy of individual patients have been proposed, it is not clear how to protect institutional privacy, which is often a critical concern of data custodians. Building upon our previous work, Grid Binary LOgistic REgression (GLORE), we developed an Institutional Privacy-preserving Distributed binary Logistic Regression model (IPDLR) that considers both individual and institutional privacy for building a logistic regression model in a distributed manner. We tested our method using both simulated and clinical data, showing how it is possible to protect the privacy of individuals and of institutions using a distributed strategy.
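    The flavour of GLORE-style distributed estimation can be sketched as follows: each institution returns only aggregate gradient and Hessian contributions of the logistic likelihood, and a coordinator runs Newton-Raphson on their sums. This is a generic illustration, not the IPDLR algorithm, and it omits the additional institutional-privacy protections introduced in the paper.

```python
import numpy as np

def site_contributions(X, y, beta):
    """Local gradient and Hessian of the logistic log-likelihood (aggregates only)."""
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    grad = X.T @ (y - p)
    hess = X.T @ (X * (p * (1 - p))[:, None])
    return grad, hess

def distributed_fit(sites, n_features, n_iter=25):
    beta = np.zeros(n_features)
    for _ in range(n_iter):          # Newton-Raphson on the pooled aggregates
        grads, hessians = zip(*(site_contributions(X, y, beta) for X, y in sites))
        beta = beta + np.linalg.solve(sum(hessians), sum(grads))
    return beta

rng = np.random.default_rng(2)
true_beta = np.array([0.5, -1.0, 2.0])
def make_site(n):
    X = np.column_stack([np.ones(n), rng.standard_normal((n, 2))])
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)
    return X, y

sites = [make_site(500) for _ in range(3)]       # three "institutions"
print(distributed_fit(sites, n_features=3))      # close to true_beta
```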

  8. Electro-optic measurement of terahertz pulse energy distribution.

    PubMed

    Sun, J H; Gallacher, J G; Brussaard, G J H; Lemos, N; Issac, R; Huang, Z X; Dias, J M; Jaroszynski, D A

    2009-11-01

    An accurate and direct measurement of the energy distribution of a low repetition rate terahertz electromagnetic pulse is challenging because of the lack of sensitive detectors in this spectral range. In this paper, we show how the total energy and energy density distribution of a terahertz electromagnetic pulse can be determined by directly measuring the absolute electric field amplitude and beam energy density distribution using electro-optic detection. This method has potential as a routine method for measuring the energy density of terahertz pulses that could be applied to evaluating future high power terahertz sources, terahertz imaging, and spatially and temporally resolved pump-probe experiments.

  9. Ultra-thin carbon-fiber paper fabrication and carbon-fiber distribution homogeneity evaluation method

    NASA Astrophysics Data System (ADS)

    Zhang, L. F.; Chen, D. Y.; Wang, Q.; Li, H.; Zhao, Z. G.

    2018-01-01

    A preparation technology for ultra-thin carbon-fiber paper is reported. Carbon-fiber distribution homogeneity has a great influence on the properties of ultra-thin carbon-fiber paper. In this paper, a self-developed homogeneity analysis system is introduced to help users evaluate the distribution homogeneity of carbon fiber among two or more binary (two-value) images of carbon-fiber paper. A relative-uniformity factor W/H is introduced. The experimental results show that the smaller the W/H factor, the more uniform the carbon-fiber distribution. The new uniformity-evaluation method provides a practical and reliable tool for analyzing the homogeneity of materials.

  10. Single atom catalysts on amorphous supports: A quenched disorder perspective

    NASA Astrophysics Data System (ADS)

    Peters, Baron; Scott, Susannah L.

    2015-03-01

    Phenomenological models that invoke catalyst sites with different adsorption constants and rate constants are well-established, but computational and experimental methods are just beginning to provide atomically resolved details about amorphous surfaces and their active sites. This letter develops a statistical transformation from the quenched disorder distribution of site structures to the distribution of activation energies for sites on amorphous supports. We show that the overall kinetics are highly sensitive to the precise nature of the low energy tail in the activation energy distribution. Our analysis motivates further development of systematic methods to identify and understand the most reactive members of the active site distribution.

  11. Retaining both discrete and smooth features in 1D and 2D NMR relaxation and diffusion experiments

    NASA Astrophysics Data System (ADS)

    Reci, A.; Sederman, A. J.; Gladden, L. F.

    2017-11-01

    A new method of regularization of 1D and 2D NMR relaxation and diffusion experiments is proposed and a robust algorithm for its implementation is introduced. The new form of regularization, termed the Modified Total Generalized Variation (MTGV) regularization, offers a compromise between distinguishing discrete and smooth features in the reconstructed distributions. The method is compared to the conventional method of Tikhonov regularization and the recently proposed method of L1 regularization, when applied to simulated data of 1D spin-lattice relaxation, T1, 1D spin-spin relaxation, T2, and 2D T1-T2 NMR experiments. A range of simulated distributions composed of two lognormally distributed peaks was studied. The distributions differed with regard to the variance of the peaks, which were designed to investigate a range of distributions containing only discrete, only smooth or both features in the same distribution. Three different signal-to-noise ratios were studied: 2000, 200 and 20. A new metric is proposed to compare the distributions reconstructed from the different regularization methods with the true distributions. The metric is designed to penalise reconstructed distributions which show artefact peaks. Based on this metric, MTGV regularization performs better than Tikhonov and L1 regularization in all cases except when the distribution is known to comprise only discrete peaks, in which case L1 regularization is slightly more accurate than MTGV regularization.
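    For orientation, the sketch below implements only the conventional Tikhonov baseline that MTGV is compared against: a non-negative, regularized inversion of a simulated 1D T2 decay. The kernel grid, noise level, and regularization weight are illustrative assumptions; the MTGV and L1 variants of the paper are not reproduced here.

```python
import numpy as np
from scipy.optimize import nnls

t = np.linspace(1e-3, 3.0, 200)                # acquisition times (s)
T2 = np.logspace(-3, 1, 80)                    # logarithmic T2 grid (s)
K = np.exp(-t[:, None] / T2[None, :])          # discretized exponential kernel

rng = np.random.default_rng(3)
f_true = np.exp(-0.5 * ((np.log10(T2) + 1.0) / 0.15) ** 2)   # one smooth peak at T2 ~ 0.1 s
g = K @ f_true + 0.01 * rng.standard_normal(t.size)          # noisy decay data

lam = 0.1                                                    # Tikhonov regularization weight
A = np.vstack([K, np.sqrt(lam) * np.eye(T2.size)])           # min ||Kf - g||^2 + lam ||f||^2, f >= 0
b = np.concatenate([g, np.zeros(T2.size)])
f_hat, _ = nnls(A, b)
print("recovered peak near T2 =", T2[np.argmax(f_hat)], "s")
```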

  12. Gaussian Quadrature is an efficient method for the back-transformation in estimating the usual intake distribution when assessing dietary exposure.

    PubMed

    Dekkers, A L M; Slob, W

    2012-10-01

    In dietary exposure assessment, statistical methods exist for estimating the usual intake distribution from daily intake data. These methods transform the dietary intake data to normal observations, eliminate the within-person variance, and then back-transform the data to the original scale. We propose Gaussian Quadrature (GQ), a numerical integration method, as an efficient way of back-transformation. We compare GQ with six published methods. One method uses a log-transformation, while the other methods, including GQ, use a Box-Cox transformation. This study shows that, for various parameter choices, the methods with a Box-Cox transformation estimate the theoretical usual intake distributions quite well, although one method, a Taylor approximation, is less accurate. Two applications--on folate intake and fruit consumption--confirmed these results. In one extreme case, some methods, including GQ, could not be applied for low percentiles. We solved this problem by modifying GQ. One method is based on the assumption that the daily intakes are log-normally distributed. Even if this condition is not fulfilled, the log-transformation performs well as long as the within-individual variance is small compared to the mean. We conclude that the modified GQ is an efficient, fast and accurate method for estimating the usual intake distribution. Copyright © 2012 Elsevier Ltd. All rights reserved.
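    The role of the quadrature can be sketched briefly: after a Box-Cox transformation, the usual intake on the original scale is an integral over the within-person distribution on the transformed scale, which can be approximated with Gauss-Hermite nodes and weights. The transformation parameter, variance, and number of nodes below are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def inv_boxcox(z, lam):
    """Inverse Box-Cox transformation."""
    return np.exp(z) if lam == 0 else (lam * z + 1.0) ** (1.0 / lam)

def usual_intake(u, sigma_within, lam, n_nodes=20):
    """E[ g^{-1}(u + W) ] with W ~ N(0, sigma_within^2), via Gauss-Hermite quadrature."""
    x, w = np.polynomial.hermite.hermgauss(n_nodes)
    z = u + np.sqrt(2.0) * sigma_within * x
    return np.sum(w * inv_boxcox(z, lam)) / np.sqrt(np.pi)

# usual intake on the original scale for a person whose transformed-scale mean is 2.0
print(usual_intake(u=2.0, sigma_within=0.5, lam=0.3))
```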

  13. Damage identification method for continuous girder bridges based on spatially-distributed long-gauge strain sensing under moving loads

    NASA Astrophysics Data System (ADS)

    Wu, Bitao; Wu, Gang; Yang, Caiqian; He, Yi

    2018-05-01

    A novel damage identification method for concrete continuous girder bridges based on spatially-distributed long-gauge strain sensing is presented in this paper. First, the variation regularity of the long-gauge strain influence line of continuous girder bridges which changes with the location of vehicles on the bridge is studied. According to this variation regularity, a calculation method for the distribution regularity of the area of long-gauge strain history is investigated. Second, a numerical simulation of damage identification based on the distribution regularity of the area of long-gauge strain history is conducted, and the results indicate that this method is effective for identifying damage and is not affected by the speed, axle number and weight of vehicles. Finally, a real bridge test on a highway is conducted, and the experimental results also show that this method is very effective for identifying damage in continuous girder bridges, and the local element stiffness distribution regularity can be revealed at the same time. This identified information is useful for maintaining of continuous girder bridges on highways.

  14. Simulations of Fractal Star Cluster Formation. I. New Insights for Measuring Mass Segregation of Star Clusters with Substructure

    NASA Astrophysics Data System (ADS)

    Yu, Jincheng; Puzia, Thomas H.; Lin, Congping; Zhang, Yiwei

    2017-05-01

    We compare the existent methods, including the minimum spanning tree based method and the local stellar density based method, in measuring mass segregation of star clusters. We find that the minimum spanning tree method reflects more the compactness, which represents the global spatial distribution of massive stars, while the local stellar density method reflects more the crowdedness, which provides the local gravitational potential information. It is suggested to measure the local and the global mass segregation simultaneously. We also develop a hybrid method that takes both aspects into account. This hybrid method balances the local and the global mass segregation in the sense that the predominant one is either caused by dynamical evolution or purely accidental, especially when such information is unknown a priori. In addition, we test our prescriptions with numerical models and show the impact of binaries in estimating the mass segregation value. As an application, we use these methods on the Orion Nebula Cluster (ONC) observations and the Taurus cluster. We find that the ONC is significantly mass segregated down to the 20th most massive stars. In contrast, the massive stars of the Taurus cluster are sparsely distributed in many different subclusters, showing a low degree of compactness. The massive stars of Taurus are also found to be distributed in the high-density region of the subclusters, showing significant mass segregation at subcluster scales. Meanwhile, we also apply these methods to discuss the possible mechanisms of the dynamical evolution of the simulated substructured star clusters.
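    One of the existing minimum-spanning-tree measures referred to above can be sketched briefly; the code computes a mass segregation ratio in the spirit of the Lambda_MSR of Allison et al. (2009) on a synthetic cluster. The cluster realisation and the choice of the ten "massive" stars are purely illustrative.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_length(points):
    """Total edge length of the Euclidean minimum spanning tree of a point set."""
    return minimum_spanning_tree(squareform(pdist(points))).sum()

def lambda_msr(positions, massive_idx, n_random=200, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    l_massive = mst_length(positions[massive_idx])
    l_random = [mst_length(positions[rng.choice(len(positions), size=len(massive_idx),
                                                replace=False)])
                for _ in range(n_random)]
    return np.mean(l_random) / l_massive     # values > 1 indicate mass segregation

rng = np.random.default_rng(4)
cluster = rng.standard_normal((300, 2))      # 300 stars, projected positions
massive = np.arange(10)                      # pretend the first 10 are the most massive
cluster[massive] *= 0.2                      # concentrate them, i.e. a segregated cluster
print(lambda_msr(cluster, massive, rng=rng))   # noticeably greater than 1
```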

  15. Simulations of Fractal Star Cluster Formation. I. New Insights for Measuring Mass Segregation of Star Clusters with Substructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Jincheng; Puzia, Thomas H.; Lin, Congping

    2017-05-10

    We compare the existent methods, including the minimum spanning tree based method and the local stellar density based method, in measuring mass segregation of star clusters. We find that the minimum spanning tree method reflects more the compactness, which represents the global spatial distribution of massive stars, while the local stellar density method reflects more the crowdedness, which provides the local gravitational potential information. It is suggested to measure the local and the global mass segregation simultaneously. We also develop a hybrid method that takes both aspects into account. This hybrid method balances the local and the global mass segregation in the sense that the predominant one is either caused by dynamical evolution or purely accidental, especially when such information is unknown a priori. In addition, we test our prescriptions with numerical models and show the impact of binaries in estimating the mass segregation value. As an application, we use these methods on the Orion Nebula Cluster (ONC) observations and the Taurus cluster. We find that the ONC is significantly mass segregated down to the 20th most massive stars. In contrast, the massive stars of the Taurus cluster are sparsely distributed in many different subclusters, showing a low degree of compactness. The massive stars of Taurus are also found to be distributed in the high-density region of the subclusters, showing significant mass segregation at subcluster scales. Meanwhile, we also apply these methods to discuss the possible mechanisms of the dynamical evolution of the simulated substructured star clusters.

  16. All-versus-nothing proofs with n qubits distributed between m parties

    NASA Astrophysics Data System (ADS)

    Cabello, Adán; Moreno, Pilar

    2010-04-01

    All-versus-nothing (AVN) proofs show the conflict between Einstein, Podolsky, and Rosen’s elements of reality and the perfect correlations of some quantum states. Given an n-qubit state distributed between m parties, we provide a method with which to decide whether this distribution allows an m-partite AVN proof specific for this state using only single-qubit measurements. We apply this method to some recently obtained n-qubit m-particle states. In addition, we provide all inequivalent AVN proofs with less than nine qubits and a minimum number of parties.

  17. Estimating the extreme low-temperature event using nonparametric methods

    NASA Astrophysics Data System (ADS)

    D'Silva, Anisha

    This thesis presents a new method of estimating the one-in-N low temperature threshold using a non-parametric statistical method called kernel density estimation applied to daily average wind-adjusted temperatures. We apply our One-in-N Algorithm to local gas distribution companies (LDCs), as they have to forecast the daily natural gas needs of their consumers. In winter, demand for natural gas is high. Extreme low temperature events are not directly related to an LDC's gas demand forecasting, but knowledge of extreme low temperatures is important to ensure that an LDC has enough capacity to meet customer demands when extreme low temperatures are experienced. We present a detailed explanation of our One-in-N Algorithm and compare it to the methods using the generalized extreme value distribution, the normal distribution, and the variance-weighted composite distribution. We show that our One-in-N Algorithm estimates the one-in-N low temperature threshold more accurately than the methods using the generalized extreme value distribution, the normal distribution, and the variance-weighted composite distribution according to the root mean square error (RMSE) measure at a 5% level of significance. The One-in-N Algorithm is tested by counting the number of times the daily average wind-adjusted temperature is less than or equal to the one-in-N low temperature threshold.
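    A minimal sketch of the kernel-density idea, under assumptions that are ours rather than the thesis's (synthetic temperatures, a fixed season length, and a simple definition of the one-in-N exceedance probability), might look as follows.

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.optimize import brentq

rng = np.random.default_rng(5)
winter_temps = rng.normal(loc=-5.0, scale=8.0, size=30 * 120)   # 30 winters x 120 days (degC)

N, days_per_winter = 20, 120
p_target = 1.0 / (N * days_per_winter)       # one day in N winters

kde = gaussian_kde(winter_temps)             # kernel density estimate of daily temperatures
cdf = lambda t: kde.integrate_box_1d(-np.inf, t)
threshold = brentq(lambda t: cdf(t) - p_target,
                   winter_temps.min() - 30.0, winter_temps.mean())
print("one-in-%d low temperature threshold: %.1f degC" % (N, threshold))
```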

  18. Distributed Combinatorial Optimization Using Privacy on Mobile Phones

    NASA Astrophysics Data System (ADS)

    Ono, Satoshi; Katayama, Kimihiro; Nakayama, Shigeru

    This paper proposes a method for distributed combinatorial optimization which uses mobile phones as computers. In the proposed method, an ordinary computer generates solution candidates and mobile phones evaluate them by referring to private information and preferences held locally. Users therefore do not have to send their private data to any other computer and do not have to refrain from inputting their preferences, so they can obtain satisfactory solutions. Experimental results showed that the proposed method solved room assignment problems without sending users' private information to a server.

  19. Volcanoes Distribution in Linear Segmentation of Mariana Arc

    NASA Astrophysics Data System (ADS)

    Andikagumi, H.; Macpherson, C.; McCaffrey, K. J. W.

    2016-12-01

    A new method has been developed to better describe the distribution pattern of volcanoes within the Mariana Arc. A previous study assumed that the distribution of volcanoes in the Mariana Arc is described by a small-circle distribution, which reflects the melting processes in a curved subduction zone. The small-circle fit to the dataset used in that study, comprising 12 (mainly subaerial) volcanoes from the Smithsonian Institution Global Volcanism Program, was reassessed by us to have a root-mean-square misfit of 2.5 km. The same method applied to a more complete dataset from Baker et al. (2008), consisting of 37 subaerial and submarine volcanoes, resulted in an 8.4 km misfit. However, using the Hough Transform method on the larger dataset, lower misfits of great-circle segments were achieved (3.1 and 3.0 km) for two possible segment combinations. The results indicate that the distribution of volcanoes in the Mariana Arc is better described by a great-circle pattern than by a small circle. Variogram and cross-variogram analysis of volcano spacing and volume shows that there is spatial correlation between volcanoes at 420 to 500 km, which corresponds to the maximum segmentation lengths from the Hough Transform (320 km). Further analysis of volcano spacing by the coefficient of variation (Cv) shows a tendency toward non-random distribution, as the Cv values are closer to zero than to one. These distributions are inferred to be associated with the development of normal faults at the back arc, as their Cv values also tend towards zero. To test whether volcano spacing is random or not, Cv values were simulated using a Monte Carlo method with random input. Only the southernmost segment allowed us to reject the null hypothesis that volcanoes are randomly spaced, at the 95% confidence level, with an estimated probability of 0.007. This result shows that such regularity in volcano spacing rarely arises by chance, so the lithospheric-scale controlling factor should be analysed with a different approach (not a random number generator). The Sunda Arc, which has been reported to have en echelon segmentation and a larger number of volcanoes, will be studied further to understand the particular influence of the upper plate on the distribution of volcanoes.
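    The Monte Carlo test of spacing regularity can be sketched compactly: the observed coefficient of variation of volcano spacings is compared with the distribution of Cv values obtained for randomly placed volcanoes on a segment of the same length. The positions and segment length below are invented for illustration.

```python
import numpy as np

def cv_of_spacing(positions):
    """Coefficient of variation of nearest-neighbour spacings along a segment."""
    d = np.diff(np.sort(positions))
    return d.std(ddof=1) / d.mean()

rng = np.random.default_rng(6)
segment_length = 320.0                                               # km
observed = np.array([10., 55., 98., 140., 185., 228., 270., 312.])  # near-regular spacing

cv_obs = cv_of_spacing(observed)
cv_random = np.array([cv_of_spacing(rng.uniform(0.0, segment_length, observed.size))
                      for _ in range(10000)])
p_value = np.mean(cv_random <= cv_obs)   # chance of such regularity under random spacing
print("observed Cv = %.2f, Monte Carlo p-value = %.4f" % (cv_obs, p_value))
```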

  20. A Simple Method for Estimating Informative Node Age Priors for the Fossil Calibration of Molecular Divergence Time Analyses

    PubMed Central

    Nowak, Michael D.; Smith, Andrew B.; Simpson, Carl; Zwickl, Derrick J.

    2013-01-01

    Molecular divergence time analyses often rely on the age of fossil lineages to calibrate node age estimates. Most divergence time analyses are now performed in a Bayesian framework, where fossil calibrations are incorporated as parametric prior probabilities on node ages. It is widely accepted that an ideal parameterization of such node age prior probabilities should be based on a comprehensive analysis of the fossil record of the clade of interest, but there is currently no generally applicable approach for calculating such informative priors. We provide here a simple and easily implemented method that employs fossil data to estimate the likely amount of missing history prior to the oldest fossil occurrence of a clade, which can be used to fit an informative parametric prior probability distribution on a node age. Specifically, our method uses the extant diversity and the stratigraphic distribution of fossil lineages confidently assigned to a clade to fit a branching model of lineage diversification. Conditioning this on a simple model of fossil preservation, we estimate the likely amount of missing history prior to the oldest fossil occurrence of a clade. The likelihood surface of missing history can then be translated into a parametric prior probability distribution on the age of the clade of interest. We show that the method performs well with simulated fossil distribution data, but that the likelihood surface of missing history can at times be too complex for the distribution-fitting algorithm employed by our software tool. An empirical example of the application of our method is performed to estimate echinoid node ages. A simulation-based sensitivity analysis using the echinoid data set shows that node age prior distributions estimated under poor preservation rates are significantly less informative than those estimated under high preservation rates. PMID:23755303

  1. Dichotomisation using a distributional approach when the outcome is skewed.

    PubMed

    Sauzet, Odile; Ofuya, Mercy; Peacock, Janet L

    2015-04-24

    Dichotomisation of continuous outcomes has been rightly criticised by statisticians because of the loss of information incurred. However, to communicate a comparison of risks, dichotomised outcomes may be necessary. Peacock et al. developed a distributional approach to the dichotomisation of normally distributed outcomes allowing the presentation of a comparison of proportions with a measure of precision which reflects the comparison of means. Many common health outcomes are skewed, so the distributional method for the dichotomisation of continuous outcomes may not apply. We present a methodology to obtain dichotomised outcomes for skewed variables, illustrated with data from several observational studies. We also report the results of a simulation study which tests the robustness of the method to deviation from normality and assesses the validity of the newly developed method. The review showed that the pattern of dichotomisation varied between outcomes. Birthweight, blood pressure and BMI can either be transformed to normal, so that normal distributional estimates for a comparison of proportions can be obtained, or, better, the skew-normal method can be used. For gestational age, no satisfactory transformation is available and only the skew-normal method is reliable. The normal distributional method is also reliable when there are small deviations from normality. The distributional method, with its applicability to common skewed data, allows researchers to provide both continuous and dichotomised estimates without losing information or precision. This will have the effect of providing a practical understanding of the difference in means in terms of proportions.
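    The distributional idea can be illustrated with a short sketch: the proportion below a clinical cutpoint is computed from a fitted normal or skew-normal distribution rather than by counting observations. The simulated gestational ages and the cutpoint are illustrative, and the sketch omits the precision (standard error) formulas that are central to the published method.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
gest_age = stats.skewnorm.rvs(a=-5, loc=41, scale=2.5, size=2000, random_state=rng)  # weeks
cutpoint = 37.0                                      # "preterm" threshold

# normal distributional estimate of the proportion below the cutpoint
p_normal = stats.norm.cdf(cutpoint, loc=gest_age.mean(), scale=gest_age.std(ddof=1))

# skew-normal distributional estimate
a, loc, scale = stats.skewnorm.fit(gest_age)
p_skewnorm = stats.skewnorm.cdf(cutpoint, a, loc=loc, scale=scale)

print("empirical :", np.mean(gest_age < cutpoint))
print("normal    :", p_normal)
print("skew-norm :", p_skewnorm)
```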

  2. WebDISCO: a web service for distributed cox model learning without patient-level data sharing.

    PubMed

    Lu, Chia-Lun; Wang, Shuang; Ji, Zhanglong; Wu, Yuan; Xiong, Li; Jiang, Xiaoqian; Ohno-Machado, Lucila

    2015-11-01

    The Cox proportional hazards model is a widely used method for analyzing survival data. To achieve sufficient statistical power in a survival analysis, it usually requires a large amount of data. Data sharing across institutions could be a potential workaround for providing this added power. The authors develop a web service for distributed Cox model learning (WebDISCO), which focuses on the proof-of-concept and algorithm development for federated survival analysis. The sensitive patient-level data can be processed locally and only the less-sensitive intermediate statistics are exchanged to build a global Cox model. Mathematical derivation shows that the proposed distributed algorithm is identical to the centralized Cox model. The authors evaluated the proposed framework at the University of California, San Diego (UCSD), Emory, and Duke. The experimental results show that both distributed and centralized models result in near-identical model coefficients with differences in the range [Formula: see text] to [Formula: see text]. The results confirm the mathematical derivation and show that the implementation of the distributed model can achieve the same results as the centralized implementation. The proposed method serves as a proof of concept, in which a publicly available dataset was used to evaluate the performance. The authors do not intend to suggest that this method can resolve policy and engineering issues related to the federated use of institutional data, but the results should serve as evidence of the technical feasibility of the proposed approach. Conclusions: WebDISCO (Web-based Distributed Cox Regression Model; https://webdisco.ucsd-dbmi.org:8443/cox/) provides a proof-of-concept web service that implements a distributed algorithm to conduct distributed survival analysis without sharing patient-level data.

  3. The Emergent Capabilities of Distributed Satellites and Methods for Selecting Distributed Satellite Science Missions

    NASA Astrophysics Data System (ADS)

    Corbin, B. A.; Seager, S.; Ross, A.; Hoffman, J.

    2017-12-01

    Distributed satellite systems (DSS) have emerged as an effective and cheap way to conduct space science, thanks to advances in the small satellite industry. However, relatively few space science missions have utilized multiple assets to achieve their primary scientific goals. Previous research on methods for evaluating mission concept designs has shown that distributed systems are rarely competitive with monolithic systems, partially because it is difficult to quantify the added value of DSSs over monolithic systems. Comparatively little research has focused on how DSSs can be used to achieve new, fundamental space science goals that cannot be achieved with monolithic systems or how to choose a design from a larger possible tradespace of options. There are seven emergent capabilities of distributed satellites: shared sampling, simultaneous sampling, self-sampling, census sampling, stacked sampling, staged sampling, and sacrifice sampling. These capabilities are either fundamentally, analytically, or operationally unique in their application to distributed science missions, and they can be leveraged to achieve science goals that are either impossible or difficult and costly to achieve with monolithic systems. The Responsive Systems Comparison (RSC) method combines Multi-Attribute Tradespace Exploration with Epoch-Era Analysis to examine benefits, costs, and flexible options in complex systems over the mission lifecycle. Modifications to the RSC method as it exists in previously published literature were made in order to more accurately characterize how value is derived from space science missions. New metrics help rank designs by the value derived over their entire mission lifecycle and show more accurate cumulative value distributions. The RSC method was applied to four case study science missions that leveraged the emergent capabilities of distributed satellites to achieve their primary science goals. In all four case studies, RSC showed how scientific value was gained that would be impossible or unsatisfactory with monolithic systems and how changes in design and context variables affected the overall mission value. Each study serves as a blueprint for how to conduct a Pre-Phase A study using these methods to learn more about the tradespace of a particular mission.

  4. Measurement of unsteady pressures in rotating systems

    NASA Technical Reports Server (NTRS)

    Kienappel, K.

    1978-01-01

    The principles of the experimental determination of unsteady periodic pressure distributions in rotating systems are reported. An indirect method is discussed, and the effects of the centrifugal force and the transmission behavior of the pressure measurement circuit were outlined. The required correction procedures are described and experimentally implemented in a test bench. Results show that the indirect method is suited to the measurement of unsteady nonharmonic pressure distributions in rotating systems.

  5. Ray tracing the Wigner distribution function for optical simulations

    NASA Astrophysics Data System (ADS)

    Mout, Marco; Wick, Michael; Bociort, Florian; Petschulat, Joerg; Urbach, Paul

    2018-01-01

    We study a simulation method that uses the Wigner distribution function to incorporate wave optical effects in an established framework based on geometrical optics, i.e., a ray tracing engine. We use the method to calculate point spread functions and show that it is accurate for paraxial systems but produces unphysical results in the presence of aberrations. The cause of these anomalies is explained using an analytical model.

  6. Comparing Distribution of Harbour Porpoises (Phocoena phocoena) Derived from Satellite Telemetry and Passive Acoustic Monitoring

    PubMed Central

    Rigét, Frank F.; Kyhn, Line A.; Sveegaard, Signe; Dietz, Rune; Tougaard, Jakob; Carlström, Julia A. K.; Carlén, Ida; Koblitz, Jens C.; Teilmann, Jonas

    2016-01-01

    Cetacean monitoring is essential in determining the status of a population. Different monitoring methods should reflect the real trends in abundance and patterns in distribution, and results should therefore ideally be independent of the selected method. Here, we compare two independent methods of describing harbour porpoise (Phocoena phocoena) relative distribution pattern in the western Baltic Sea. Satellite locations from 13 tagged harbour porpoises were used to build a Maximum Entropy (MaxEnt) model of suitable habitats. The data set was subsampled to one location every second day, which were sufficient to make reliable models over the summer (Jun-Aug) and autumn (Sep-Nov) seasons. The modelled results were compared to harbour porpoise acoustic activity obtained from 36 static acoustic monitoring stations (C-PODs) covering the same area. The C-POD data was expressed as the percentage of porpoise positive days/hours (the number of days/hours with porpoise detections) by season. The MaxEnt model and C-POD data showed a significant linear relationship with a strong decline in porpoise occurrence from west to east. This study shows that two very different methods provide comparable information on relative distribution patterns of harbour porpoises even in a low density area. PMID:27463509

  7. Spectral methods on arbitrary grids

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Gottlieb, David

    1995-01-01

    Stable and spectrally accurate numerical methods are constructed on arbitrary grids for partial differential equations. These new methods are equivalent to conventional spectral methods but do not rely on specific grid distributions. Specifically, we show how to implement Legendre Galerkin, Legendre collocation, and Laguerre Galerkin methodology on arbitrary grids.

  8. A method for determining and exploring the distribution of organic matters and hardness salts in natural waters

    NASA Astrophysics Data System (ADS)

    Sargsyan, Suren

    2017-11-01

    The question of how organic matter in water is associated with hardness salts has not been completely studied. To partially clarify this question, a fractional water separation and investigation method is recommended. Experiments carried out with the recommended method showed that the dynamics of the distribution of total hardness and permanganate oxidation values in the fractions of frozen and melted water samples coincided completely, from which it is concluded that organic matter in natural waters is associated with hardness salts and is always distributed in this form. These findings provide useful information for the deep study of macro- and microelements in water.

  9. Weighted minimum-norm source estimation of magnetoencephalography utilizing the temporal information of the measured data

    NASA Astrophysics Data System (ADS)

    Iwaki, Sunao; Ueno, Shoogo

    1998-06-01

    The weighted minimum-norm estimation (wMNE) is a popular method to obtain the source distribution in the human brain from magneto- and electroencephalographic measurements when detailed information about the generator profile is not available. We propose a method to reconstruct current distributions in the human brain based on the wMNE technique, with the weighting factors defined by a simplified multiple signal classification (MUSIC) prescanning. In this method, in addition to the conventional depth normalization technique, the weighting factors of the wMNE are determined by cost values previously calculated by a simplified MUSIC scan, which incorporates the temporal information of the measured data. We performed computer simulations of this method and compared it with the conventional wMNE method. The results show that the proposed method is effective for the reconstruction of current distributions from noisy data.
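    A generic weighted minimum-norm estimate can be written in a few lines; the sketch below uses a random lead field and uniform weights, whereas the paper derives the weights from depth normalization and a simplified MUSIC prescan. All sizes and the regularization constant are illustrative assumptions.

```python
import numpy as np

def wmne(A, b, w, lam=1e-2):
    """Weighted minimum-norm estimate: A is the lead field, b the data, w source weights."""
    W = np.diag(w)
    G = A @ W @ A.T + lam * np.eye(A.shape[0])
    return W @ A.T @ np.linalg.solve(G, b)

rng = np.random.default_rng(8)
A = rng.standard_normal((32, 200))           # 32 sensors, 200 candidate source locations
j_true = np.zeros(200)
j_true[50] = 1.0                             # a single active source
b = A @ j_true + 0.01 * rng.standard_normal(32)
w = np.ones(200)                             # uniform weights; MUSIC-based costs would go here
j_hat = wmne(A, b, w)
print("largest reconstructed source at index", int(np.argmax(np.abs(j_hat))))
```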

  10. An agarose gel electrophoretic method for analysis of hyaluronan molecular weight distribution.

    PubMed

    Lee, H G; Cowman, M K

    1994-06-01

    An electrophoretic method is described for determining the molecular weight distribution of hyaluronan (HA). The method involves separation of HA by electrophoresis on a 0.5% agarose gel, followed by detection of HA using the cationic dye Stains-All (3,3'-dimethyl-9-methyl-4,5,4'5'-dibenzothiacarbocyanine). The recommended sample load is 7 micrograms. Calibration of the method with HA standards of known molecular weight has established a linear relationship between electrophoretic mobility and the logarithm of the weight-average molecular weight over the range of approximately 0.2-6 x 10(6). The separated HA pattern may also be visualized after electrotransfer of HA from the agarose gel to a nylon membrane. The membrane may be stained with the dye alcian blue. Alternatively, specific detection of HA from impure samples can be achieved by probing the nylon membrane with biotin-labeled HA-binding protein and subsequent interaction with a streptavidin-linked gold reagent and silver staining for amplification. The electrophoretic method was used to analyze HA in two different liquid connective tissues. Normal human knee joint synovial fluid showed a narrow HA molecular weight distribution, with a peak at 6-7 x 10(6). Owl monkey vitreous HA also showed a narrow molecular weight distribution, with a peak at 5-6 x 10(6). These results agree well with available published data and indicate the applicability of the method to the analysis of impure HA samples which may be available in limited amounts.

  11. Comparing four methods to estimate usual intake distributions.

    PubMed

    Souverein, O W; Dekkers, A L; Geelen, A; Haubrock, J; de Vries, J H; Ocké, M C; Harttig, U; Boeing, H; van 't Veer, P

    2011-07-01

    The aim of this paper was to compare methods to estimate usual intake distributions of nutrients and foods. As 'true' usual intake distributions are not known in practice, the comparison was carried out through a simulation study, as well as empirically, by application to data from the European Food Consumption Validation (EFCOVAL) Study in which two 24-h dietary recalls (24-HDRs) and food frequency data were collected. The methods being compared were the Iowa State University Method (ISU), National Cancer Institute Method (NCI), Multiple Source Method (MSM) and Statistical Program for Age-adjusted Dietary Assessment (SPADE). Simulation data were constructed with varying numbers of subjects (n), different values for the Box-Cox transformation parameter (λ(BC)) and different values for the ratio of the within- and between-person variance (r(var)). All data were analyzed with the four different methods and the estimated usual mean intake and selected percentiles were obtained. Moreover, the 2-day within-person mean was estimated as an additional 'method'. These five methods were compared in terms of the mean bias, which was calculated as the mean of the differences between the estimated value and the known true value. The application of data from the EFCOVAL Project included calculations of nutrients (that is, protein, potassium, protein density) and foods (that is, vegetables, fruit and fish). Overall, the mean bias of the ISU, NCI, MSM and SPADE Methods was small. However, for all methods, the mean bias and the variation of the bias increased with smaller sample size, higher variance ratios and with more pronounced departures from normality. Serious mean bias (especially in the 95th percentile) was seen using the NCI Method when r(var) = 9, λ(BC) = 0 and n = 1000. The ISU Method and MSM showed a somewhat higher s.d. of the bias compared with NCI and SPADE Methods, indicating a larger method uncertainty. Furthermore, whereas the ISU, NCI and SPADE Methods produced unimodal density functions by definition, MSM produced distributions with 'peaks', when sample size was small, because of the fact that the population's usual intake distribution was based on estimated individual usual intakes. The application to the EFCOVAL data showed that all estimates of the percentiles and mean were within 5% of each other for the three nutrients analyzed. For vegetables, fruit and fish, the differences were larger than that for nutrients, but overall the sample mean was estimated reasonably. The four methods that were compared seem to provide good estimates of the usual intake distribution of nutrients. Nevertheless, care needs to be taken when a nutrient has a high within-person variation or has a highly skewed distribution, and when the sample size is small. As the methods offer different features, practical reasons may exist to prefer one method over the other.

  12. Comparison between wavelet transform and moving average as filter method of MODIS imagery to recognize paddy cropping pattern in West Java

    NASA Astrophysics Data System (ADS)

    Dwi Nugroho, Kreshna; Pebrianto, Singgih; Arif Fatoni, Muhammad; Fatikhunnada, Alvin; Liyantono; Setiawan, Yudi

    2017-01-01

    Information on the area and spatial distribution of paddy fields is needed to support sustainable agriculture and food security programs. Mapping the distribution of paddy-field cropping patterns is important for maintaining a sustainable paddy field area, and can be done by direct observation or by remote sensing. This paper discusses remote sensing for paddy field monitoring based on MODIS time series data. Time-series MODIS data are difficult to classify directly because of temporal noise; therefore, the wavelet transform and the moving average are needed as filter methods. The objective of this study is to recognize paddy cropping patterns in West Java using MODIS imagery (MOD13Q1) from 2001 to 2015 with the wavelet transform and the moving average, and then to compare the two methods. The results showed that the two filters give almost the same spatial distribution of cropping patterns. The accuracy of the wavelet transform (75.5%) is higher than that of the moving average (70.5%). Both methods showed that the majority of the cropping patterns in West Java follow a paddy-fallow-paddy-fallow pattern with various planting times. The differences in planting schedule are caused by the availability of irrigation water.
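    The two filters being compared can be sketched on a synthetic NDVI-like series, assuming the PyWavelets package (pywt) is available for the wavelet transform. The window length, wavelet, and decomposition level are illustrative choices, not the settings used in the study.

```python
import numpy as np
import pywt   # PyWavelets

rng = np.random.default_rng(9)
t = np.arange(23 * 5)                                # 5 years of 16-day MODIS composites
ndvi = 0.5 + 0.3 * np.sin(2 * np.pi * t * 2 / 23)    # two cropping cycles per 23-composite year
noisy = ndvi + 0.1 * rng.standard_normal(t.size)     # temporal noise (clouds, atmosphere)

# moving average filter (centred window of 5 composites)
window = 5
smooth_ma = np.convolve(noisy, np.ones(window) / window, mode="same")

# wavelet filter: decompose, zero the two finest detail levels, reconstruct
coeffs = pywt.wavedec(noisy, "db4", level=3)
coeffs[-1][:] = 0.0
coeffs[-2][:] = 0.0
smooth_wt = pywt.waverec(coeffs, "db4")[: t.size]

print(np.corrcoef(smooth_ma, ndvi)[0, 1], np.corrcoef(smooth_wt, ndvi)[0, 1])
```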

  13. Numerical Study on Focusing of Ultrasounds in Microbubble-enhanced HIFU

    NASA Astrophysics Data System (ADS)

    Matsumoto, Yoichiro; Okita, Kohei; Takagi, Shu

    2011-11-01

    The injection of microbubbles into the target tissue enhances tissue heating in High-Intensity Focused Ultrasound therapy via inertial cavitation. Control of the inertial cavitation is required to achieve efficient tissue ablation. Microbubbles between the transducer and the target disturb the ultrasound propagation depending on the conditions. A method to clear such microbubbles has been proposed by Kajiyama et al. [Physics Procedia 3 (2010) 305-314]. In that method, the irradiation of intense ultrasound with a burst waveform fragments the microbubbles in the propagation path before the irradiation of ultrasound for tissue heating. An in vitro experiment using a gel containing microbubbles showed that the method makes it possible to heat the target correctly by controlling the microbubble distribution. Following the experiment, we simulate the focusing of ultrasound through a mixture containing microbubbles, taking into account the size and number density distributions in space. The numerical simulation shows the movement of the heating region from the transducer side to the target when the microbubble distributions are controlled. The numerical results agree well with the experimental ones.

  14. Using a topographic index to distribute variable source area runoff predicted with the SCS curve-number equation

    NASA Astrophysics Data System (ADS)

    Lyon, Steve W.; Walter, M. Todd; Gérard-Marchant, Pierre; Steenhuis, Tammo S.

    2004-10-01

    Because the traditional Soil Conservation Service curve-number (SCS-CN) approach continues to be used ubiquitously in water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. We developed and tested a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Predicting the location of source areas is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point-source pollution. The method presented here used the traditional SCS-CN approach to predict runoff volume and spatial extent of saturated areas and a topographic index, like that used in TOPMODEL, to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was applied to two subwatersheds of the Delaware basin in the Catskill Mountains region of New York State and one watershed in south-eastern Australia to produce runoff-probability maps. Observed saturated area locations in the watersheds agreed with the distributed CN-VSA method. Results showed good agreement with those obtained from the previously validated soil moisture routing (SMR) model. When compared with the traditional SCS-CN method, the distributed CN-VSA method predicted a similar total volume of runoff, but vastly different locations of runoff generation. Thus, the distributed CN-VSA approach provides a physically based method that is simple enough to be incorporated into water quality models, and other tools that currently use the traditional SCS-CN method, while still adhering to the principles of VSA hydrology.
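    The two ingredients that the distributed CN-VSA approach combines, the SCS-CN runoff depth and a topographic index used to rank runoff-generating cells, can be sketched as follows. The way runoff is spread over the highest-index cells here is a simplification for illustration; the published method's redistribution over topographic index classes is more involved, and the CN, rainfall, and index values are invented.

```python
import numpy as np

P, CN = 50.0, 75                      # storm rainfall (mm) and curve number
S = 25400.0 / CN - 254.0              # potential maximum retention (mm)
Ia = 0.2 * S                          # initial abstraction (mm)
Pe = max(P - Ia, 0.0)                 # effective rainfall (mm)
Q = Pe ** 2 / (Pe + S) if Pe > 0 else 0.0   # SCS-CN watershed runoff depth (mm)

rng = np.random.default_rng(10)
topo_index = rng.gamma(shape=4.0, scale=2.0, size=1000)   # ln(a/tan(beta)) for each cell

sat_fraction = Q / Pe                 # fraction of the watershed treated as saturated
n_sat = int(round(sat_fraction * topo_index.size))
sat_cells = np.argsort(topo_index)[-n_sat:]   # wettest (highest-index) cells saturate first

runoff_map = np.zeros_like(topo_index)
runoff_map[sat_cells] = Pe            # saturated cells convert effective rain to runoff
print("Q = %.1f mm generated over %.0f%% of the area" % (Q, 100 * sat_fraction))
```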

  15. Holographic monitoring of spatial distributions of singlet oxygen in water

    NASA Astrophysics Data System (ADS)

    Belashov, A. V.; Bel'tyukova, D. M.; Vasyutinskii, O. S.; Petrov, N. V.; Semenova, I. V.; Chupov, A. S.

    2014-12-01

    A method for monitoring spatial distributions of singlet oxygen in biological media has been developed. Singlet oxygen was generated using Radachlorin® photosensitizer, while thermal disturbances caused by nonradiative deactivation of singlet oxygen were detected by the holographic interferometry technique. Processing of interferograms yields temperature maps that characterize the deactivation process and show the distribution of singlet oxygen species.

  16. Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood

    NASA Astrophysics Data System (ADS)

    Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and are capable of forecasting a full probability distribution. In order to estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules used as an optimization score should be able to locate a similar and unknown optimum. Discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
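    The contrast between the two estimators can be sketched for a Gaussian non-homogeneous regression with mu = a + b*(ensemble mean) and log(sigma) = c + d*log(ensemble spread); the closed-form Gaussian CRPS is minimized directly. The synthetic ensemble data and the link functions are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(11)
n = 2000
ensmean = rng.normal(15.0, 5.0, n)                         # ensemble mean temperature
enssd = rng.gamma(4.0, 0.5, n)                             # ensemble spread
obs = 1.0 + 0.9 * ensmean + rng.normal(0.0, 1.2 * enssd)   # observations with spread-dependent error

def mu_sigma(theta):
    a, b, c, d = theta
    return a + b * ensmean, np.exp(c + d * np.log(enssd))

def neg_loglik(theta):
    mu, sigma = mu_sigma(theta)
    return -np.sum(norm.logpdf(obs, mu, sigma))

def mean_crps(theta):
    mu, sigma = mu_sigma(theta)
    z = (obs - mu) / sigma            # closed-form CRPS of a Gaussian forecast
    return np.mean(sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi)))

start = np.array([0.0, 1.0, 0.0, 0.0])
for score in (neg_loglik, mean_crps):
    fit = minimize(score, start, method="Nelder-Mead", options={"maxiter": 5000, "xatol": 1e-6})
    print(score.__name__, fit.x)      # both should land near (1.0, 0.9, log 1.2 ~ 0.18, 1.0)
```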

  17. New spatial upscaling methods for multi-point measurements: From normal to p-normal

    NASA Astrophysics Data System (ADS)

    Liu, Feng; Li, Xin

    2017-12-01

    Careful attention must be given to determining whether the geophysical variables of interest are normally distributed, since the assumption of a normal distribution may not accurately reflect the probability distribution of some variables. As a generalization of the normal distribution, the p-normal distribution and its corresponding maximum likelihood estimation (the least power estimation, LPE) were introduced in upscaling methods for multi-point measurements. Six methods, including three normal-based methods, i.e., arithmetic average, least square estimation, block kriging, and three p-normal-based methods, i.e., LPE, geostatistics LPE and inverse distance weighted LPE are compared in two types of experiments: a synthetic experiment to evaluate the performance of the upscaling methods in terms of accuracy, stability and robustness, and a real-world experiment to produce real-world upscaling estimates using soil moisture data obtained from multi-scale observations. The results show that the p-normal-based methods produced lower mean absolute errors and outperformed the other techniques due to their universality and robustness. We conclude that introducing appropriate statistical parameters into an upscaling strategy can substantially improve the estimation, especially if the raw measurements are disorganized; however, further investigation is required to determine which parameter is the most effective among variance, spatial correlation information and parameter p.
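    The least power estimation (LPE) idea can be sketched in a few lines: the upscaled value is the location that minimizes the sum of p-th powers of absolute deviations, which reduces to the arithmetic mean for p = 2 and the median for p = 1. The soil moisture values and the fixed p below are illustrative; in the paper, p is estimated from the measurements.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def lpe(x, p):
    """Least power estimate of location: argmin_m sum |x_i - m|^p."""
    res = minimize_scalar(lambda m: np.sum(np.abs(x - m) ** p),
                          bounds=(x.min(), x.max()), method="bounded")
    return res.x

rng = np.random.default_rng(12)
soil_moisture = np.concatenate([rng.normal(0.25, 0.02, 20), [0.45]])   # one outlying point
print("arithmetic mean:", soil_moisture.mean())
print("LPE with p=1.3 :", lpe(soil_moisture, 1.3))    # less sensitive to the outlier
```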

  18. An advanced probabilistic structural analysis method for implicit performance functions

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.; Millwater, H. R.; Cruse, T. A.

    1989-01-01

    In probabilistic structural analysis, the performance or response functions usually are implicitly defined and must be solved by numerical analysis methods such as finite element methods. In such cases, the most commonly used probabilistic analysis tool is the mean-based, second-moment method which provides only the first two statistical moments. This paper presents a generalized advanced mean value (AMV) method which is capable of establishing the distributions to provide additional information for reliability design. The method requires slightly more computations than the second-moment method but is highly efficient relative to the other alternative methods. In particular, the examples show that the AMV method can be used to solve problems involving non-monotonic functions that result in truncated distributions.

  19. Characterization and variability of particle size distributions in Hudson Bay, Canada

    NASA Astrophysics Data System (ADS)

    Xi, Hongyan; Larouche, Pierre; Tang, Shilin; Michel, Christine

    2014-06-01

    Particle size distribution (PSD) plays a significant role in many aspects of aquatic ecosystems, including phytoplankton dynamics, sediment fluxes, and optical scattering from particulates. As of yet, little is known on the variability of particle size distribution in marine ecosystems. In this study, we investigated the PSD properties and variability in Hudson Bay based on measurements from a laser diffractometer (LISST-100X Type-B) in concert with biogeochemical parameters collected during summer 2010. Results show that most power-law fitted PSD slopes ranged from 2.5 to 4.5, covering nearly the entire range observed for natural waters. Offshore waters showed a predominance of smaller particles while near the coast, the effect of riverine inputs on PSD were apparent. Particulate inorganic matter contributed more to total suspended matter in coastal waters leading to lower PSD slopes than offshore. The depth distribution of PSD slopes shows that larger particles were associated with the pycnocline. Below the pycnocline, smaller particles dominated the spectra. A comparison between a PSD slope-based method to derive phytoplankton size class (PSC) and pigment-based derived PSC showed the two methods agreed relatively well. This study provides valuable baseline information on particle size properties and phytoplankton composition estimates in a sub-arctic environment subject to rapid environmental change.

  20. Imaging of current distributions in superconducting thin film structures

    NASA Astrophysics Data System (ADS)

    Dönitz, Dietmar

    2006-10-01

    Local analysis plays an important role in many fields of scientific research. However, imaging methods are not very common in the investigation of superconductors. For more than 20 years, Low Temperature Scanning Electron Microscopy (LTSEM) has been successfully used at the University of Tübingen for the study of condensed matter phenomena, especially superconductivity. In this thesis LTSEM was used for imaging current distributions in different superconducting thin film structures: - Imaging of current distributions in Josephson junctions with a ferromagnetic interlayer, also known as SIFS junctions, showed inhomogeneous current transport over the junctions, which directly led to an improvement in the fabrication process. An investigation of improved samples showed a very homogeneous current distribution without any trace of magnetic domains. Either such domains were not present or they were too small to be imaged with the LTSEM. - An investigation of Nb/YBCO zigzag Josephson junctions yielded important information on signal formation in the LTSEM for Josephson junctions in both the short and the long limit. Using a reference junction, our signal formation model could be verified, thus confirming earlier results on short zigzag junctions. These results, which could be reproduced in this work, support the theory of d-wave symmetry in the superconducting order parameter of YBCO. Furthermore, investigations of the quasiparticle tunneling in the zigzag junctions showed the existence of Andreev bound states, which is another indication of the d-wave symmetry in YBCO. - The LTSEM study of Hot Electron Bolometers (HEB) allowed the first successful imaging of a stable 'Hot Spot', a self-heating region in HEB structures. Moreover, the electron beam was used to induce an - otherwise unstable - hot spot. Both investigations yielded information on the homogeneity of the samples. - An entirely new method of imaging the current distribution in superconducting quantum interference devices (SQUIDs) could be developed. It is based on vortex imaging by LTSEM, which had been established several years ago. The vortex signals can be used as local detectors for the vortex-free circulating sheet-current distribution J. Compared to previous inversion methods that infer J from the measured magnetic field, this method gives a more direct measurement of the current distribution. The experimental results were in very good agreement with numerical calculations of J. The presented investigations show how versatile and useful Low Temperature Scanning Electron Microscopy can be for studying superconducting thin film structures. Thus one may expect that many more important results can be obtained with this method.

  1. General Methods for Evolutionary Quantitative Genetic Inference from Generalized Mixed Models.

    PubMed

    de Villemereuil, Pierre; Schielzeth, Holger; Nakagawa, Shinichi; Morrissey, Michael

    2016-11-01

    Methods for inference and interpretation of evolutionary quantitative genetic parameters, and for prediction of the response to selection, are best developed for traits with normal distributions. Many traits of evolutionary interest, including many life history and behavioral traits, have inherently nonnormal distributions. The generalized linear mixed model (GLMM) framework has become a widely used tool for estimating quantitative genetic parameters for nonnormal traits. However, whereas GLMMs provide inference on a statistically convenient latent scale, it is often desirable to express quantitative genetic parameters on the scale upon which traits are measured. The parameters of fitted GLMMs, despite being on a latent scale, fully determine all quantities of potential interest on the scale on which traits are expressed. We provide expressions for deriving each of such quantities, including population means, phenotypic (co)variances, variance components including additive genetic (co)variances, and parameters such as heritability. We demonstrate that fixed effects have a strong impact on those parameters and show how to deal with this by averaging or integrating over fixed effects. The expressions require integration of quantities determined by the link function, over distributions of latent values. In general cases, the required integrals must be solved numerically, but efficient methods are available and we provide an implementation in an R package, QGglmm. We show that known formulas for quantities such as heritability of traits with binomial and Poisson distributions are special cases of our expressions. Additionally, we show how fitted GLMM can be incorporated into existing methods for predicting evolutionary trajectories. We demonstrate the accuracy of the resulting method for evolutionary prediction by simulation and apply our approach to data from a wild pedigreed vertebrate population. Copyright © 2016 de Villemereuil et al.

  2. Assessing the risk zones of Chagas' disease in Chile, in a world marked by global climatic change

    PubMed Central

    Tapia-Garay, Valentina; Figueroa, Daniela P; Maldonado, Ana; Frías-Laserre, Daniel; Gonzalez, Christian R; Parra, Alonso; Canals, Lucia; Apt, Werner; Alvarado, Sergio; Cáceres, Dante; Canals, Mauricio

    2018-01-01

    BACKGROUND Vector transmission of Trypanosoma cruzi appears to be interrupted in Chile; however, data show increasing incidence of Chagas' disease, raising concerns that there may be a reemerging problem. OBJECTIVE To estimate the actual risk in a changing world it is necessary to consider the historical vector distribution and correlate this distribution with the presence of cases and climate change. METHODS Potential distribution models of Triatoma infestans and Chagas disease were performed using Maxent, a machine-learning method. FINDINGS Climate change appears to play a major role in the reemergence of Chagas' disease and T. infestans in Chile. The distribution of both T. infestans and Chagas' disease correlated with maximum temperature, and the precipitation during the driest month. The overlap of Chagas' disease and T. infestans distribution areas was high. The distribution of T. infestans, under two global change scenarios, showed a minimal reduction tendency in suitable areas. MAIN CONCLUSION The impact of temperature and precipitation on the distribution of T. infestans, as shown by the models, indicates the need for aggressive control efforts; the current control measures, including T. infestans control campaigns, should be maintained with the same intensity as they have at present, avoiding sylvatic foci, intrusions, and recolonisation of human dwellings. PMID:29211105

  3. Comparison of Two Methods Used to Model Shape Parameters of Pareto Distributions

    USGS Publications Warehouse

    Liu, C.; Charpentier, R.R.; Su, J.

    2011-01-01

    Two methods are compared for estimating the shape parameters of Pareto field-size (or pool-size) distributions for petroleum resource assessment. Both methods assume mature exploration in which most of the larger fields have been discovered. Both methods use the sizes of larger discovered fields to estimate the numbers and sizes of smaller fields: (1) the tail-truncated method uses a plot of field size versus size rank, and (2) the log-geometric method uses data binned in field-size classes and the ratios of adjacent bin counts. Simulation experiments were conducted using discovered oil and gas pool-size distributions from four petroleum systems in Alberta, Canada and using Pareto distributions generated by Monte Carlo simulation. The estimates of the shape parameters of the Pareto distributions, calculated by both the tail-truncated and log-geometric methods, generally stabilize where discovered pool numbers are greater than 100. However, with fewer than 100 discoveries, these estimates can vary greatly with each new discovery. The estimated shape parameters of the tail-truncated method are more stable and larger than those of the log-geometric method where the number of discovered pools is more than 100. Both methods, however, tend to underestimate the shape parameter. Monte Carlo simulation was also used to create sequences of discovered pool sizes by sampling from a Pareto distribution with a discovery process model using a defined exploration efficiency (in order to show how biased the sampling was in favor of larger fields being discovered first). A higher (more biased) exploration efficiency gives better estimates of the Pareto shape parameters. © 2011 International Association for Mathematical Geosciences.
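    The size-versus-rank idea behind the tail-truncated method can be sketched as follows: for Pareto-distributed sizes, log(size) is approximately linear in log(rank) with slope -1/shape, so a regression over the largest discovered fields estimates the shape parameter. The simulated pool sizes and the number of "discovered" fields are illustrative, and the sketch does not address the underestimation bias noted in the abstract.

```python
import numpy as np

rng = np.random.default_rng(13)
true_shape, x_min = 1.2, 1.0
sizes = x_min * (1.0 - rng.random(5000)) ** (-1.0 / true_shape)   # Pareto(shape = 1.2) pool sizes

discovered = np.sort(sizes)[-150:][::-1]          # assume the 150 largest pools are discovered
rank = np.arange(1, discovered.size + 1)
slope, _ = np.polyfit(np.log(rank), np.log(discovered), 1)
print("estimated Pareto shape:", -1.0 / slope)    # close to 1.2
```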

  4. A multiple distributed representation method based on neural network for biomedical event extraction.

    PubMed

    Wang, Anran; Wang, Jian; Lin, Hongfei; Zhang, Jianhai; Yang, Zhihao; Xu, Kan

    2017-12-20

    Biomedical event extraction is one of the frontier domains in biomedical research. The two main subtasks of biomedical event extraction are trigger identification and argument detection, both of which can be treated as classification problems. Traditional state-of-the-art methods, however, are based on support vector machines (SVMs) with massive manually designed one-hot features, which require enormous work but lack semantic relations among words. In this paper, we propose a multiple distributed representation method for biomedical event extraction. The method combines context features built from dependency-based word embeddings with task-based features represented in a distributed way, and uses them as input to train deep learning models. Finally, a softmax classifier is used to label the candidate examples. The experimental results on the Multi-Level Event Extraction (MLEE) corpus show higher F-scores of 77.97% for trigger identification and 58.31% overall compared to the state-of-the-art SVM method. Our distributed representation method for biomedical event extraction avoids the semantic gap and the curse of dimensionality associated with traditional one-hot representations. The promising results demonstrate that our proposed method is effective for biomedical event extraction.

  5. Bayesian assessment of uncertainty in aerosol size distributions and index of refraction retrieved from multiwavelength lidar measurements.

    PubMed

    Herman, Benjamin R; Gross, Barry; Moshary, Fred; Ahmed, Samir

    2008-04-01

    We investigate the assessment of uncertainty in the inference of aerosol size distributions from backscatter and extinction measurements that can be obtained from a modern elastic/Raman lidar system with a Nd:YAG laser transmitter. To calculate the uncertainty, an analytic formula for the correlated probability density function (PDF) describing the error for an optical coefficient ratio is derived based on a normally distributed fractional error in the optical coefficients. Assuming a monomodal lognormal particle size distribution of spherical, homogeneous particles with a known index of refraction, we compare the assessment of uncertainty using a more conventional forward Monte Carlo method with that obtained from a Bayesian posterior PDF assuming a uniform prior PDF and show that substantial differences between the two methods exist. In addition, we use the posterior PDF formalism, which was extended to include an unknown refractive index, to find credible sets for a variety of optical measurement scenarios. We find the uncertainty is greatly reduced with the addition of suitable extinction measurements in contrast to the inclusion of extra backscatter coefficients, which we show to have a minimal effect and strengthens similar observations based on numerical regularization methods.

  6. Accuracy and uncertainty analysis of soil Bbf spatial distribution estimation at a coking plant-contaminated site based on normalization geostatistical technologies.

    PubMed

    Liu, Geng; Niu, Junjie; Zhang, Chao; Guo, Guanlin

    2015-12-01

    Data distributions are usually severely skewed by the presence of hot spots at contaminated sites, which makes accurate geostatistical data transformation difficult. Three typical normal-distribution transformation methods, the normal score, Johnson, and Box-Cox transformations, were applied to compare the effects of spatial interpolation on normal-transformed benzo(b)fluoranthene data from a large-scale coking plant-contaminated site in north China. All three normal transformation methods decreased the skewness and kurtosis of the benzo(b)fluoranthene data, and all the transformed data passed the Kolmogorov-Smirnov test threshold. Cross validation showed that Johnson ordinary kriging had the minimum root-mean-square error of 1.17 and a mean error of 0.19, and was more accurate than the other two models. Areas with fewer sampling points and areas with high levels of contamination showed the largest prediction standard errors on the Johnson ordinary kriging prediction map. We introduce an ideal normal transformation method prior to geostatistical estimation for severely skewed data, which enhances the reliability of risk estimation and improves the accuracy of the determination of remediation boundaries.
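
    A minimal sketch of the normalization step discussed above, using SciPy's Box-Cox transformation on synthetic skewed concentration data and checking skewness and normality afterwards; the simulated data are purely illustrative and the subsequent kriging step is not shown.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        conc = rng.lognormal(mean=0.0, sigma=1.2, size=200)    # skewed "concentration" data

        transformed, lam = stats.boxcox(conc)                  # maximum-likelihood lambda
        print("lambda:", round(lam, 3))
        print("skewness before/after:",
              round(stats.skew(conc), 2), round(stats.skew(transformed), 2))

        # Kolmogorov-Smirnov check against a normal fitted to the transformed data
        z = (transformed - transformed.mean()) / transformed.std(ddof=1)
        print(stats.kstest(z, "norm"))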

  7. Measurement and simulation of thermal neutron flux distribution in the RTP core

    NASA Astrophysics Data System (ADS)

    Rabir, Mohamad Hairie B.; Jalal Bayar, Abi Muttaqin B.; Hamzah, Na'im Syauqi B.; Mustafa, Muhammad Khairul Ariff B.; Karim, Julia Bt. Abdul; Zin, Muhammad Rawi B. Mohamed; Ismail, Yahya B.; Hussain, Mohd Huzair B.; Mat Husin, Mat Zin B.; Dan, Roslan B. Md; Ismail, Ahmad Razali B.; Husain, Nurfazila Bt.; Jalil Khan, Zareen Khan B. Abdul; Yakin, Shaiful Rizaide B. Mohd; Saad, Mohamad Fauzi B.; Masood, Zarina Bt.

    2018-01-01

    The in-core thermal neutron flux distribution was determined using measurement and simulation methods for the Malaysian PUSPATI TRIGA Reactor (RTP). In this work, online thermal neutron flux measurement using a Self Powered Neutron Detector (SPND) has been performed to verify and validate the computational methods for neutron flux calculation in the RTP. The experimental results were used to validate the calculations performed with the Monte Carlo code MCNP. The detailed in-core neutron flux distributions were estimated using the MCNP mesh tally method. The neutron flux mapping obtained revealed the heterogeneous configuration of the core. Based on the measurement and simulation, the thermal flux profile peaked at the centre of the core and gradually decreased towards the outer side of the core. The results show relatively good agreement between calculation and measurement, with both giving the same radial thermal flux profile inside the core; the MCNP model overestimates the flux, with a maximum discrepancy of around 20% relative to the SPND measurement. As the model also predicts the in-core neutron flux distribution well, it can be used for the characterization of the full core, that is, for neutron flux and spectrum calculations, dose rate calculations, reaction rate calculations, etc.

  8. Probability distribution functions for unit hydrographs with optimization using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Ghorbani, Mohammad Ali; Singh, Vijay P.; Sivakumar, Bellie; H. Kashani, Mahsa; Atre, Atul Arvind; Asadi, Hakimeh

    2017-05-01

    A unit hydrograph (UH) of a watershed may be viewed as the unit pulse response function of a linear system. In recent years, the use of probability distribution functions (pdfs) for determining a UH has received much attention. In this study, a nonlinear optimization model is developed to transmute a UH into a pdf. The potential of six popular pdfs, namely the two-parameter gamma, two-parameter Gumbel, two-parameter log-normal, two-parameter normal, three-parameter Pearson, and two-parameter Weibull distributions, is tested on data from the Lighvan catchment in Iran. The probability distribution parameters are determined using the nonlinear least squares optimization method in two ways: (1) optimization by programming in Mathematica; and (2) optimization by applying a genetic algorithm. The results are compared with those obtained by the traditional linear least squares method and show comparable capability and performance of the two nonlinear methods. The gamma and Pearson distributions are the most successful models in preserving the rising and recession limbs of the unit hydrographs. The log-normal distribution has a high ability to predict both the peak flow and the time to peak of the unit hydrograph. The nonlinear optimization method does not outperform the linear least squares method in determining the UH (especially for excess rainfall of one pulse), but is comparable.
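
    The sketch below illustrates the general idea of transmuting a UH into a pdf by nonlinear least squares, fitting a two-parameter gamma pdf to hypothetical hourly UH ordinates with scipy.optimize.curve_fit; the data, the starting values, and the gamma_uh helper are illustrative assumptions rather than the study's actual setup.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import gamma

        def gamma_uh(t, shape, scale):
            # two-parameter gamma pdf used as a unit hydrograph ordinate model
            return gamma.pdf(t, a=shape, scale=scale)

        # hypothetical hourly UH ordinates, already scaled to unit volume
        t = np.arange(1.0, 25.0)
        rng = np.random.default_rng(2)
        uh_obs = np.clip(gamma_uh(t, 3.0, 2.5) + rng.normal(0.0, 0.003, t.size), 0.0, None)

        (shape, scale), _ = curve_fit(gamma_uh, t, uh_obs, p0=[2.0, 3.0])
        time_to_peak = (shape - 1.0) * scale         # mode of the fitted gamma pdf
        print(round(shape, 2), round(scale, 2), round(time_to_peak, 2))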

  9. Ensemble-Biased Metadynamics: A Molecular Simulation Method to Sample Experimental Distributions

    PubMed Central

    Marinelli, Fabrizio; Faraldo-Gómez, José D.

    2015-01-01

    We introduce an enhanced-sampling method for molecular dynamics (MD) simulations referred to as ensemble-biased metadynamics (EBMetaD). The method biases a conventional MD simulation to sample a molecular ensemble that is consistent with one or more probability distributions known a priori, e.g., experimental intramolecular distance distributions obtained by double electron-electron resonance or other spectroscopic techniques. To this end, EBMetaD adds an adaptive biasing potential throughout the simulation that discourages sampling of configurations inconsistent with the target probability distributions. The bias introduced is the minimum necessary to fulfill the target distributions, i.e., EBMetaD satisfies the maximum-entropy principle. Unlike other methods, EBMetaD does not require multiple simulation replicas or the introduction of Lagrange multipliers, and is therefore computationally efficient and straightforward in practice. We demonstrate the performance and accuracy of the method for a model system as well as for spin-labeled T4 lysozyme in explicit water, and show how EBMetaD reproduces three double electron-electron resonance distance distributions concurrently within a few tens of nanoseconds of simulation time. EBMetaD is integrated in the open-source PLUMED plug-in (www.plumed-code.org), and can be therefore readily used with multiple MD engines. PMID:26083917

  10. Performance analysis of a brushless dc motor due to magnetization distribution in a continuous ring magnet

    NASA Astrophysics Data System (ADS)

    Hur, Jin; Jung, In-Soung; Sung, Ha-Gyeong; Park, Soon-Sup

    2003-05-01

    This paper presents the force performance of a brushless dc motor with a continuous ring-type permanent magnet (PM), considering its magnetization patterns: trapezoidal, trapezoidal with dead zone, and unbalanced trapezoidal magnetization with dead zone. The radial force density in a PM motor causes vibration, because vibration is induced by the traveling force from the rotating PM acting on the stator. The magnetization distribution of the PM as well as the shape of the teeth determines the distribution of force density. In particular, the distribution has a three-dimensional (3-D) pattern because of overhang; that is, it is not uniform in the axial direction. Thus, the analysis of radial force density requires dynamic analysis considering the 3-D shape of the teeth and overhang. The results show that the force density as a source of vibration varies considerably depending on the overhang and magnetization distribution patterns. In addition, the validity of the developed method, a coupled 3-D equivalent magnetic circuit network method with driving circuit and motion equations, is confirmed by comparison with a conventional method using the 3-D finite element method.

  11. A new framework of statistical inferences based on the valid joint sampling distribution of the observed counts in an incomplete contingency table.

    PubMed

    Tian, Guo-Liang; Li, Hui-Qiong

    2017-08-01

    Some existing confidence interval methods and hypothesis testing methods for the analysis of a contingency table with incomplete observations in both margins depend entirely on the underlying assumption that the sampling distribution of the observed counts is a product of independent multinomial/binomial distributions for complete and incomplete counts. However, it can be shown that this independence assumption is incorrect and can result in unreliable conclusions because of the under-estimation of the uncertainty. Therefore, the first objective of this paper is to derive the valid joint sampling distribution of the observed counts in a contingency table with incomplete observations in both margins. The second objective is to provide a new framework for analyzing incomplete contingency tables based on the derived joint sampling distribution of the observed counts, by developing a Fisher scoring algorithm to calculate maximum likelihood estimates of parameters of interest, bootstrap confidence interval methods, and bootstrap hypothesis testing methods. We compare the differences between the valid sampling distribution and the sampling distribution under the independence assumption. Simulation studies showed that the average/expected confidence-interval widths of parameters based on the sampling distribution under the independence assumption are shorter than those based on the new sampling distribution, yielding unrealistic results. A real data set is analyzed to illustrate the application of the new sampling distribution for incomplete contingency tables, and the analysis results again confirm the conclusions obtained from the simulation studies.

  12. Measurement of Device Parameters Using Image Recovery Techniques in Large-Scale IC Devices

    NASA Technical Reports Server (NTRS)

    Scheick, Leif; Edmonds, Larry

    2004-01-01

    Devices that respond to radiation on a cell level will produce histograms showing the relative frequency of cell damage as a function of damage. The measured distribution is the convolution of distributions from radiation responses, measurement noise, and manufacturing parameters. A method of extracting device characteristics and parameters from measured distributions via mathematical and image subtraction techniques is described.

  13. Geochemical, aeromagnetic, and generalized geologic maps showing distribution and abundance of molybdenum and zinc, Golconda and Iron Point quadrangles, Humboldt County, Nevada

    USGS Publications Warehouse

    Erickson, R.L.; Marsh, S.P.

    1972-01-01

    This series of maps shows the distribution and abundance of mercury, arsenic, antimony, tungsten, gold, copper, lead, and silver related to a geologic and aeromagnetic base in the Golconda and Iron Point 7½-minute quadrangles. All samples are rock samples; most are from shear or fault zones, fractures, jasperoid, breccia reefs, and altered rocks. All the samples were prepared and analyzed in truck-mounted laboratories at Winnemucca, Nevada. Arsenic, tungsten, copper, lead, and silver were determined by semiquantitative spectrographic methods by D.F. Siems and E.F. Cooley. Mercury and gold were determined by atomic absorption methods and antimony was determined by wet chemical methods by R.M. O'Leary, M.S. Erickson, and others.

  14. An efficient distribution method for nonlinear transport problems in highly heterogeneous stochastic porous media

    NASA Astrophysics Data System (ADS)

    Ibrahima, Fayadhoi; Meyer, Daniel; Tchelepi, Hamdi

    2016-04-01

    Because geophysical data are inexorably sparse and incomplete, stochastic treatments of simulated responses are crucial to explore possible scenarios and assess risks in subsurface problems. In particular, nonlinear two-phase flows in porous media are essential, yet challenging, in reservoir simulation and hydrology. Adding highly heterogeneous and uncertain input, such as the permeability and porosity fields, transforms the estimation of the flow response into a tough stochastic problem for which computationally expensive Monte Carlo (MC) simulations remain the preferred option. We propose an alternative approach to evaluate the probability distribution of the (water) saturation for the stochastic Buckley-Leverett problem when the probability distributions of the permeability and porosity fields are available. We give a computationally efficient and numerically accurate method to estimate the one-point probability density function (PDF) and cumulative distribution function (CDF) of the (water) saturation. The distribution method draws inspiration from a Lagrangian approach to the stochastic transport problem and expresses the saturation PDF and CDF essentially in terms of a deterministic mapping and the distribution and statistics of scalar random fields. In a large class of applications these random fields can be estimated at low computational costs (few MC runs), thus making the distribution method attractive. Even though the method relies on a key assumption of fixed streamlines, we show that it performs well for high input variances, which is the case of interest. Once the saturation distribution is determined, any one-point statistics thereof can be obtained, especially the saturation average and standard deviation. Moreover, the probability of rare events and saturation quantiles (e.g. P10, P50 and P90) can be efficiently derived from the distribution method. These statistics can then be used for risk assessment, as well as data assimilation and uncertainty reduction in the prior knowledge of input distributions. We provide various examples and comparisons with MC simulations to illustrate the performance of the method.

  15. Calculating p-values and their significances with the Energy Test for large datasets

    NASA Astrophysics Data System (ADS)

    Barter, W.; Burr, C.; Parkes, C.

    2018-04-01

    The energy test method is a multi-dimensional test of whether two samples are consistent with arising from the same underlying population, through the calculation of a single test statistic (called the T-value). The method has recently been used in particle physics to search for samples that differ due to CP violation. The generalised extreme value function has previously been used to describe the distribution of T-values under the null hypothesis that the two samples are drawn from the same underlying population. We show that, in a simple test case, the distribution is not sufficiently well described by the generalised extreme value function. We present a new method, where the distribution of T-values under the null hypothesis when comparing two large samples can be found by scaling the distribution found when comparing small samples drawn from the same population. This method can then be used to quickly calculate the p-values associated with the results of the test.
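
    Below is a hedged sketch of an energy-test T-value computed with a Gaussian weighting function, simply to make the kind of statistic under discussion concrete; normalisation conventions and the choice of weighting function vary between references, so this is not necessarily the exact definition used in the paper.

        import numpy as np
        from scipy.spatial.distance import cdist

        def energy_T(sample1, sample2, sigma=1.0):
            """T-value of an energy-style two-sample test with Gaussian weight
            psi(d) = exp(-d^2 / (2 sigma^2)); a sketch of the general form only."""
            def mean_psi(a, b, exclude_diag=False):
                w = np.exp(-cdist(a, b) ** 2 / (2.0 * sigma**2))
                if exclude_diag:
                    n = len(a)
                    return (w.sum() - np.trace(w)) / (n * (n - 1))
                return w.mean()
            return (0.5 * mean_psi(sample1, sample1, True)
                    + 0.5 * mean_psi(sample2, sample2, True)
                    - mean_psi(sample1, sample2))

        rng = np.random.default_rng(3)
        a = rng.normal(0.0, 1.0, size=(300, 2))
        b = rng.normal(0.0, 1.0, size=(300, 2))   # same underlying population as a
        c = rng.normal(0.3, 1.0, size=(300, 2))   # shifted population
        print(energy_T(a, b), energy_T(a, c))     # the shifted pair gives the larger T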

  16. Robustness of S1 statistic with Hodges-Lehmann for skewed distributions

    NASA Astrophysics Data System (ADS)

    Ahad, Nor Aishah; Yahaya, Sharipah Soaad Syed; Yin, Lee Ping

    2016-10-01

    Analysis of variance (ANOVA) is a commonly used parametric method to test for differences in means among more than two groups when the populations are normally distributed. ANOVA is highly inefficient under non-normal and heteroscedastic settings. When the assumptions are violated, researchers look for alternatives such as the nonparametric Kruskal-Wallis test or robust methods. This study focused on a flexible method, the S1 statistic, for comparing groups using the median as the location estimator. The S1 statistic was modified by substituting the median with the Hodges-Lehmann estimator, and the default scale estimator with the variance of Hodges-Lehmann and with MADn, to produce two different test statistics for comparing groups. The bootstrap method was used for testing the hypotheses, since the sampling distributions of these modified S1 statistics are unknown. The performance of the proposed statistics in terms of Type I error was measured and compared against the original S1 statistic, ANOVA, and Kruskal-Wallis. The proposed procedures show improvement over the original statistic, especially under extremely skewed distributions.
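
    As an illustration of the location estimator substituted into S1, the sketch below computes the one-sample Hodges-Lehmann estimator (the median of all pairwise Walsh averages) on synthetic skewed data; the modified scale estimators and the bootstrap testing machinery of the study are not reproduced here.

        import numpy as np

        def hodges_lehmann(x):
            """One-sample Hodges-Lehmann estimator: median of all pairwise
            Walsh averages (x_i + x_j) / 2 for i <= j."""
            x = np.asarray(x, dtype=float)
            i, j = np.triu_indices(len(x))
            return np.median((x[i] + x[j]) / 2.0)

        # skewed toy data: HL is less affected by the long right tail than the mean
        rng = np.random.default_rng(4)
        data = rng.chisquare(df=3, size=51)
        print(np.mean(data), np.median(data), hodges_lehmann(data))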

  17. Temporal and spatial PM10 concentration distribution using an inverse distance weighted method in Klang Valley, Malaysia

    NASA Astrophysics Data System (ADS)

    Tarmizi, S. N. M.; Asmat, A.; Sumari, S. M.

    2014-02-01

    PM10 is one of the air contaminants that can be harmful to human health. Meteorological factors and changes of the monsoon season may affect the distribution of these particles. The objective of this study is to determine the temporal and spatial particulate matter (PM10) concentration distribution in Klang Valley, Malaysia, by using the Inverse Distance Weighted (IDW) method for different monsoon seasons and meteorological conditions. PM10 and meteorological data were obtained from the Malaysian Department of Environment (DOE). Particle distribution data were added to the geographic database on a seasonal basis. Temporal and spatial patterns of the PM10 concentration distribution were determined using ArcGIS 9.3. Higher PM10 concentrations are observed during the Southwest monsoon season, and the values are lower during the Northeast monsoon season. Different monsoon seasons show different meteorological conditions that affect the PM10 distribution.
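
    A minimal sketch of the IDW interpolation underlying the mapping described above; the station coordinates, PM10 values, and power parameter of 2 are illustrative assumptions rather than the study's actual data or settings.

        import numpy as np

        def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
            """Inverse distance weighted interpolation (minimal sketch)."""
            xy_known = np.asarray(xy_known, dtype=float)
            values = np.asarray(values, dtype=float)
            out = []
            for q in np.atleast_2d(xy_query):
                d = np.sqrt(((xy_known - q) ** 2).sum(axis=1))
                if d.min() < eps:                       # query coincides with a station
                    out.append(values[d.argmin()])
                    continue
                w = 1.0 / d**power
                out.append((w * values).sum() / w.sum())
            return np.array(out)

        # hypothetical monitoring-station coordinates (km) and seasonal PM10 means (ug/m3)
        stations = [(0.0, 0.0), (10.0, 2.0), (4.0, 8.0), (12.0, 9.0)]
        pm10 = [55.0, 72.0, 48.0, 80.0]
        print(idw(stations, pm10, [(6.0, 5.0), (1.0, 1.0)]))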

  18. A comparison of electronic heterodyne moire deflectometry and electronic heterodyne holographic interferometry for flow measurements

    NASA Technical Reports Server (NTRS)

    Decker, A. J.; Stricker, J.

    1985-01-01

    Electronic heterodyne moire deflectometry and electronic heterodyne holographic interferometry are compared as methods for the accurate measurement of refractive index and density change distributions of phase objects. Experimental results are presented to show that the two methods have comparable accuracy for measuring the first derivative of the interferometric fringe shift. The phase object for the measurements is a large crystal of KD*P, whose refractive index distribution can be changed accurately and repeatably for the comparison. Although the refractive index change causes only about one interferometric fringe shift over the entire crystal, the derivative shows considerable detail for the comparison. As electronic phase measurement methods, both methods are very accurate and are intrinsically compatible with computer controlled readout and data processing. Heterodyne moire is relatively inexpensive and has high variable sensitivity. Heterodyne holographic interferometry is better developed, and can be used with poor quality optical access to the experiment.

  19. Multi-chain Markov chain Monte Carlo methods for computationally expensive models

    NASA Astrophysics Data System (ADS)

    Huang, M.; Ray, J.; Ren, H.; Hou, Z.; Bao, J.

    2017-12-01

    Markov chain Monte Carlo (MCMC) methods are used to infer model parameters from observational data. The parameters are inferred as probability densities, thus capturing estimation error due to sparsity of the data and the shortcomings of the model. Multiple communicating chains executing the MCMC method have the potential to explore the parameter space better and conceivably accelerate the convergence to the final distribution. We present results from tests conducted with the multi-chain method to show how the acceleration occurs; i.e., for loose convergence tolerances, the multiple chains do not make much of a difference. The ensemble of chains also seems to have the ability to accelerate the convergence of a few chains that might start from suboptimal starting points. Finally, we show the performance of the chains in the estimation of O(10) parameters using computationally expensive forward models such as the Community Land Model, where the sampling burden is distributed over multiple chains.
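
    The sketch below illustrates the basic multi-chain idea on a toy target: several random-walk Metropolis chains launched from dispersed starting points, with a Gelman-Rubin style R-hat used to judge convergence; the target density, step size, and chain lengths are illustrative assumptions and have nothing to do with the Community Land Model application.

        import numpy as np

        def log_post(theta):
            # toy unimodal posterior: standard normal in 3 dimensions
            return -0.5 * np.sum(theta**2)

        def metropolis_chain(start, n_steps, step=0.5, rng=None):
            if rng is None:
                rng = np.random.default_rng()
            x = np.array(start, dtype=float)
            lp = log_post(x)
            samples = np.empty((n_steps, x.size))
            for i in range(n_steps):
                prop = x + step * rng.normal(size=x.size)
                lp_prop = log_post(prop)
                if np.log(rng.uniform()) < lp_prop - lp:    # Metropolis accept/reject
                    x, lp = prop, lp_prop
                samples[i] = x
            return samples

        def gelman_rubin(chains):
            """Potential scale reduction factor R-hat for one parameter,
            with chains shaped (n_chains, n_samples)."""
            m, n = chains.shape
            B = n * chains.mean(axis=1).var(ddof=1)     # between-chain variance
            W = chains.var(axis=1, ddof=1).mean()       # within-chain variance
            return np.sqrt(((n - 1) / n * W + B / n) / W)

        rng = np.random.default_rng(5)
        starts = rng.uniform(-5, 5, size=(4, 3))                 # dispersed starting points
        chains = [metropolis_chain(s, 2000, rng=rng) for s in starts]
        first_param = np.array([c[1000:, 0] for c in chains])    # discard burn-in
        print("R-hat:", gelman_rubin(first_param))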

  20. Influence of Force Fields and Quantum Chemistry Approach on Spectral Densities of BChl a in Solution and in FMO Proteins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chandrasekaran, Suryanarayanan; Aghtar, Mortaza; Valleau, Stéphanie

    2015-08-06

    Studies on light-harvesting (LH) systems have attracted much attention after the finding of long-lived quantum coherences in the exciton dynamics of the Fenna–Matthews–Olson (FMO) complex. In this complex, excitation energy transfer occurs between the bacteriochlorophyll a (BChl a) pigments. Two quantum mechanics/molecular mechanics (QM/MM) studies, each with a different force-field and quantum chemistry approach, reported different excitation energy distributions for the FMO complex. To understand the reasons for these differences in the predicted excitation energies, we have carried out a comparative study between the simulations using the CHARMM and AMBER force field and the Zerner intermediate neglect of differential orbital (ZINDO)/S and time-dependent density functional theory (TDDFT) quantum chemistry methods. The calculations using the CHARMM force field together with ZINDO/S or TDDFT always show a wider spread in the energy distribution compared to those using the AMBER force field. High- or low-energy tails in these energy distributions result in larger values for the spectral density at low frequencies. A detailed study on individual BChl a molecules in solution shows that without the environment, the density of states is the same for both force field sets. Including the environmental point charges, however, the excitation energy distribution gets broader and, depending on the applied methods, also asymmetric. The excitation energy distribution predicted using TDDFT together with the AMBER force field shows a symmetric, Gaussian-like distribution.

  1. Distributed memory compiler methods for irregular problems: Data copy reuse and runtime partitioning

    NASA Technical Reports Server (NTRS)

    Das, Raja; Ponnusamy, Ravi; Saltz, Joel; Mavriplis, Dimitri

    1991-01-01

    Outlined here are two methods which we believe will play an important role in any distributed memory compiler able to handle sparse and unstructured problems. We describe how to link runtime partitioners to distributed memory compilers. In our scheme, programmers can implicitly specify how data and loop iterations are to be distributed between processors. This insulates users from having to deal explicitly with potentially complex algorithms that carry out work and data partitioning. We also describe a viable mechanism for tracking and reusing copies of off-processor data. In many programs, several loops access the same off-processor memory locations. As long as it can be verified that the values assigned to off-processor memory locations remain unmodified, we show that we can effectively reuse stored off-processor data. We present experimental data from a 3-D unstructured Euler solver run on iPSC/860 to demonstrate the usefulness of our methods.

  2. Optimal distribution of integration time for intensity measurements in degree of linear polarization polarimetry.

    PubMed

    Li, Xiaobo; Hu, Haofeng; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie

    2016-04-04

    We consider the degree of linear polarization (DOLP) polarimetry system, which performs two intensity measurements at orthogonal polarization states to estimate the DOLP. We show that if the total integration time of the intensity measurements is fixed, the variance of the DOLP estimator depends on the distribution of integration time between the two intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the DOLP estimator can be decreased. In this paper, we obtain an approximate closed-form solution for the optimal distribution of integration time by employing the Delta method and the Lagrange multiplier method. According to the theoretical analyses and real-world experiments, it is shown that the variance of the DOLP estimator can be decreased for any value of DOLP. The method proposed in this paper can effectively decrease the measurement variance and thus statistically improve the measurement accuracy of the polarimetry system.
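
    A hedged numerical sketch of the optimization described above, assuming a simple shot-noise (Poisson) model for the two intensity measurements and a Delta-method variance for P = (I1 - I2) / (I1 + I2); the paper's closed-form solution is not reproduced, and the optimum is simply found numerically for illustrative intensities.

        import numpy as np
        from scipy.optimize import minimize_scalar

        def dolp_variance(frac, I1, I2, total_time=1.0):
            """Delta-method variance of P = (I1 - I2)/(I1 + I2) when each intensity
            is estimated from shot-noise-limited counts collected during its share
            of the total integration time (assumed Poisson noise model)."""
            t1, t2 = frac * total_time, (1.0 - frac) * total_time
            var_I1, var_I2 = I1 / t1, I2 / t2             # variances of the estimated rates
            dP_dI1 = 2.0 * I2 / (I1 + I2) ** 2
            dP_dI2 = -2.0 * I1 / (I1 + I2) ** 2
            return dP_dI1**2 * var_I1 + dP_dI2**2 * var_I2

        I1, I2 = 800.0, 200.0                             # counts/s, i.e. DOLP = 0.6
        res = minimize_scalar(dolp_variance, bounds=(1e-3, 1 - 1e-3),
                              args=(I1, I2), method="bounded")
        print("optimal share of time on I1:", round(res.x, 3))
        print("variance ratio vs. equal split:", round(res.fun / dolp_variance(0.5, I1, I2), 3))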

  3. The integration of elastic wave properties and machine learning for the distribution of petrophysical properties in reservoir modeling

    NASA Astrophysics Data System (ADS)

    Ratnam, T. C.; Ghosh, D. P.; Negash, B. M.

    2018-05-01

    Conventional reservoir modeling employs variograms to predict the spatial distribution of petrophysical properties. This study aims to improve property distribution by incorporating elastic wave properties. In this study, elastic wave properties obtained from seismic inversion are used as input for an artificial neural network to predict neutron porosity in between well locations. The method employed in this study is supervised learning based on available well logs. This method converts every seismic trace into a pseudo-well log, hence reducing the uncertainty between well locations. By incorporating the seismic response, the reliance on geostatistical methods such as variograms for the distribution of petrophysical properties is reduced drastically. The results of the artificial neural network show good correlation with the neutron porosity log which gives confidence for spatial prediction in areas where well logs are not available.
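
    A minimal sketch of the supervised-learning step, training a small neural network on hypothetical elastic attributes (acoustic impedance and Vp/Vs) against a synthetic neutron-porosity target; the attribute set, network size, and data are illustrative assumptions, not the study's configuration.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # hypothetical training table: seismic-derived attributes at well locations
        rng = np.random.default_rng(6)
        impedance = rng.uniform(4000.0, 9000.0, 500)
        vp_vs = rng.uniform(1.5, 2.2, 500)
        phi = 0.45 - 3.5e-5 * impedance + 0.05 * (vp_vs - 1.8) + rng.normal(0.0, 0.01, 500)

        X = np.column_stack([impedance, vp_vs])
        X_train, X_test, y_train, y_test = train_test_split(X, phi, random_state=0)

        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(16, 8),
                                           max_iter=5000, random_state=0))
        model.fit(X_train, y_train)
        print("R^2 on held-out wells:", round(model.score(X_test, y_test), 3))
        # applying model.predict to the attributes of every seismic trace would give
        # a pseudo-porosity "log" between the wells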

  4. Voltage Based Detection Method for High Impedance Fault in a Distribution System

    NASA Astrophysics Data System (ADS)

    Thomas, Mini Shaji; Bhaskar, Namrata; Prakash, Anupama

    2016-09-01

    High-impedance faults (HIFs) on distribution feeders cannot be detected by conventional protection schemes, as HIFs are characterized by their low fault current level and waveform distortion due to the nonlinearity of the ground return path. This paper proposes a method to identify the HIFs in distribution system and isolate the faulty section, to reduce downtime. This method is based on voltage measurements along the distribution feeder and utilizes the sequence components of the voltages. Three models of high impedance faults have been considered and source side and load side breaking of the conductor have been studied in this work to capture a wide range of scenarios. The effect of neutral grounding of the source side transformer is also accounted in this study. The results show that the algorithm detects the HIFs accurately and rapidly. Thus, the faulty section can be isolated and service can be restored to the rest of the consumers.
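
    To make the voltage-based quantity concrete, the sketch below computes the sequence (symmetrical) components of three hypothetical phase-voltage phasors and the resulting unbalance ratio; the detection thresholds and the HIF models of the paper are not reproduced.

        import numpy as np

        def sequence_components(va, vb, vc):
            """Zero-, positive- and negative-sequence voltages from three phase
            phasors (Fortescue transform)."""
            a = np.exp(2j * np.pi / 3)
            v0 = (va + vb + vc) / 3.0
            v1 = (va + a * vb + a**2 * vc) / 3.0
            v2 = (va + a**2 * vb + a * vc) / 3.0
            return v0, v1, v2

        # hypothetical feeder-node phasors (per unit): phase A sags and shifts
        # during a downstream disturbance, raising the negative-sequence content
        va = 0.93 * np.exp(1j * np.deg2rad(-3.0))
        vb = 1.00 * np.exp(1j * np.deg2rad(-120.0))
        vc = 1.00 * np.exp(1j * np.deg2rad(120.0))
        v0, v1, v2 = sequence_components(va, vb, vc)
        print("unbalance |V2|/|V1| =", round(abs(v2) / abs(v1), 4))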

  5. Improving the spectral measurement accuracy based on temperature distribution and spectra-temperature relationship

    NASA Astrophysics Data System (ADS)

    Li, Zhe; Feng, Jinchao; Liu, Pengyu; Sun, Zhonghua; Li, Gang; Jia, Kebin

    2018-05-01

    Temperature is usually considered a source of fluctuation in near-infrared spectral measurement, and chemometric methods have been extensively studied to correct for the effect of temperature variations. However, temperature can also be considered a constructive parameter that provides detailed chemical information when systematically changed during the measurement. Our group has previously researched the relationship between temperature-induced spectral variation (TSVC) and normalized squared temperature. In this study, we focused on the influence of the temperature distribution in the calibration set. A multi-temperature calibration set selection (MTCS) method is proposed to improve the prediction accuracy by considering the temperature distribution of the calibration samples. Furthermore, a double-temperature calibration set selection (DTCS) method is proposed based on the MTCS method and the relationship between TSVC and normalized squared temperature. We compare the prediction performance of PLS models based on the random sampling method and the proposed methods. The results from experimental studies showed that the prediction performance was improved by using the proposed methods. Therefore, the MTCS and DTCS methods are alternative methods for improving prediction accuracy in near-infrared spectral measurement.

  6. Spicy Adjectives and Nominal Donkeys: Capturing Semantic Deviance Using Compositionality in Distributional Spaces.

    PubMed

    Vecchi, Eva M; Marelli, Marco; Zamparelli, Roberto; Baroni, Marco

    2017-01-01

    Sophisticated senator and legislative onion. Whether or not you have ever heard of these things, we all have some intuition that one of them makes much less sense than the other. In this paper, we introduce a large dataset of human judgments about novel adjective-noun phrases. We use these data to test an approach to semantic deviance based on phrase representations derived with compositional distributional semantic methods, that is, methods that derive word meanings from contextual information, and approximate phrase meanings by combining word meanings. We present several simple measures extracted from distributional representations of words and phrases, and we show that they have a significant impact on predicting the acceptability of novel adjective-noun phrases even when a number of alternative measures classically employed in studies of compound processing and bigram plausibility are taken into account. Our results show that the extent to which an attributive adjective alters the distributional representation of the noun is the most significant factor in modeling the distinction between acceptable and deviant phrases. Our study extends current applications of compositional distributional semantic methods to linguistically and cognitively interesting problems, and it offers a new, quantitatively precise approach to the challenge of predicting when humans will find novel linguistic expressions acceptable and when they will not. Copyright © 2016 Cognitive Science Society, Inc.
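
    A toy sketch of the compositional-distributional idea: compose an adjective vector and a noun vector additively and measure how far the phrase vector sits from the original noun vector; the 4-dimensional vectors below are invented for illustration, whereas real distributional vectors are corpus-trained and the paper's composition functions and deviance measures are more elaborate.

        import numpy as np

        def cosine(u, v):
            return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

        # hypothetical 4-dimensional distributional vectors
        vectors = {
            "sophisticated": np.array([0.8, 0.1, 0.3, 0.0]),
            "legislative":   np.array([0.1, 0.9, 0.0, 0.2]),
            "senator":       np.array([0.7, 0.3, 0.4, 0.1]),
            "onion":         np.array([0.0, 0.1, 0.2, 0.9]),
        }

        def composed(adj, noun):
            # simple additive composition of the adjective and noun vectors
            return vectors[adj] + vectors[noun]

        # one cue discussed in this line of work: how close the composed phrase
        # stays to the original noun meaning
        for adj, noun in [("sophisticated", "senator"), ("legislative", "onion")]:
            print(adj, noun, "cosine(phrase, noun) =",
                  round(cosine(composed(adj, noun), vectors[noun]), 3))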

  7. [Study on characteristics of pharmacological effects of traditional Chinese medicines distributing along stomach meridian based on medicinal property combination].

    PubMed

    Zhang, Bai-Xia; Gu, Hao; Guo, Hong-Ling; Ma, Li; Wang, Yun; Qiao, Yan-Jiang

    2014-07-01

    At present, studies on traditional Chinese medicine (TCM) properties are mostly restricted to one or two kinds of medicinal properties, which deviates from the holism of the theoretical system of TCM. In this paper, the characteristics of the pharmacological effects of different property combinations of TCMs distributed along the stomach meridian were taken as the study objective. Data on the properties of TCMs distributed along the stomach meridian were collected from the Pharmacopoeia of the People's Republic of China (2005). Data on the pharmacological effects of these TCMs were collected from the literature recorded in the Chinese Journal Full-text Database (CNKI) since 1980, Science of Chinese Materia Medica (Yan Zhenghua, People's Medical Publishing House, 2006), and Clinical Science of Chinese Materia Medica (Gao Xuemin, Zhong Gansheng, Hebei Science and Technology Publishing House, 2005). The pharmacological effects corresponding to the property combinations were mined using association rules. The results of the association rules were consistent with empirical knowledge and showed that different medicinal property combinations have characteristic pharmacological effects, with both differences and similarities among combinations. Medicinal property combinations with identical four properties or five tastes showed similar pharmacological effects, whereas combinations with different four properties or five tastes showed differentiated pharmacological effects; however, combinations with different four properties or five tastes could also show similar pharmacological effects. In this study, medicinal property theory and the pharmacological effects of TCMs were combined to reveal the main characteristics and regularities of the pharmacological effects of TCMs distributed along the stomach meridian, providing a new way of thinking and a new method for revealing the mechanisms of action and discovering the pharmacological effects of these TCMs.

  8. Defining conservation priorities using fragmentation forecasts

    Treesearch

    David Wear; John Pye; Kurt H. Riitters

    2004-01-01

    Methods are developed for forecasting the effects of population and economic growth on the distribution of interior forest habitat. An application to the southeastern United States shows that models provide significant explanatory power with regard to the observed distribution of interior forest. Estimates for economic and biophysical variables are significant and...

  9. Improving Estimations of Spatial Distribution of Soil Respiration Using the Bayesian Maximum Entropy Algorithm and Soil Temperature as Auxiliary Data.

    PubMed

    Hu, Junguo; Zhou, Jian; Zhou, Guomo; Luo, Yiqi; Xu, Xiaojun; Li, Pingheng; Liang, Junyi

    2016-01-01

    Soil respiration inherently shows strong spatial variability. It is difficult to obtain an accurate characterization of soil respiration with an insufficient number of monitoring points. However, it is expensive and cumbersome to deploy many sensors. To solve this problem, we proposed employing the Bayesian Maximum Entropy (BME) algorithm, using soil temperature as auxiliary information, to study the spatial distribution of soil respiration. The BME algorithm used the soft data (auxiliary information) effectively to improve the estimation accuracy of the spatiotemporal distribution of soil respiration. Based on the functional relationship between soil temperature and soil respiration, the BME algorithm satisfactorily integrated soil temperature data into said spatial distribution. As a means of comparison, we also applied the Ordinary Kriging (OK) and Co-Kriging (Co-OK) methods. The results indicated that the root mean squared errors (RMSEs) and absolute values of bias for both Day 1 and Day 2 were the lowest for the BME method, thus demonstrating its higher estimation accuracy. Further, we compared the performance of the BME algorithm coupled with auxiliary information, namely soil temperature data, and the OK method without auxiliary information in the same study area for 9, 21, and 37 sampled points. The results showed that the RMSEs for the BME algorithm (0.972 and 1.193) were less than those for the OK method (1.146 and 1.539) when the number of sampled points was 9 and 37, respectively. This indicates that the former method using auxiliary information could reduce the required number of sampling points for studying spatial distribution of soil respiration. Thus, the BME algorithm, coupled with soil temperature data, can not only improve the accuracy of soil respiration spatial interpolation but can also reduce the number of sampling points.

  10. Improving Estimations of Spatial Distribution of Soil Respiration Using the Bayesian Maximum Entropy Algorithm and Soil Temperature as Auxiliary Data

    PubMed Central

    Hu, Junguo; Zhou, Jian; Zhou, Guomo; Luo, Yiqi; Xu, Xiaojun; Li, Pingheng; Liang, Junyi

    2016-01-01

    Soil respiration inherently shows strong spatial variability. It is difficult to obtain an accurate characterization of soil respiration with an insufficient number of monitoring points. However, it is expensive and cumbersome to deploy many sensors. To solve this problem, we proposed employing the Bayesian Maximum Entropy (BME) algorithm, using soil temperature as auxiliary information, to study the spatial distribution of soil respiration. The BME algorithm used the soft data (auxiliary information) effectively to improve the estimation accuracy of the spatiotemporal distribution of soil respiration. Based on the functional relationship between soil temperature and soil respiration, the BME algorithm satisfactorily integrated soil temperature data into said spatial distribution. As a means of comparison, we also applied the Ordinary Kriging (OK) and Co-Kriging (Co-OK) methods. The results indicated that the root mean squared errors (RMSEs) and absolute values of bias for both Day 1 and Day 2 were the lowest for the BME method, thus demonstrating its higher estimation accuracy. Further, we compared the performance of the BME algorithm coupled with auxiliary information, namely soil temperature data, and the OK method without auxiliary information in the same study area for 9, 21, and 37 sampled points. The results showed that the RMSEs for the BME algorithm (0.972 and 1.193) were less than those for the OK method (1.146 and 1.539) when the number of sampled points was 9 and 37, respectively. This indicates that the former method using auxiliary information could reduce the required number of sampling points for studying spatial distribution of soil respiration. Thus, the BME algorithm, coupled with soil temperature data, can not only improve the accuracy of soil respiration spatial interpolation but can also reduce the number of sampling points. PMID:26807579

  11. Surface Wave Tomography with Spatially Varying Smoothing Based on Continuous Model Regionalization

    NASA Astrophysics Data System (ADS)

    Liu, Chuanming; Yao, Huajian

    2017-03-01

    Surface wave tomography based on continuous regionalization of model parameters is widely used to invert for 2-D phase or group velocity maps. An inevitable problem is that the distribution of ray paths is far from homogeneous due to the spatially uneven distribution of stations and seismic events, which often affects the spatial resolution of the tomographic model. We present an improved tomographic method with a spatially varying smoothing scheme that is based on the continuous regionalization approach. The smoothness of the inverted model is constrained by the Gaussian a priori model covariance function with spatially varying correlation lengths based on ray path density. In addition, a two-step inversion procedure is used to suppress the effects of data outliers on tomographic models. Both synthetic and real data are used to evaluate this newly developed tomographic algorithm. In the synthetic tests, when the contrived model has different scales of anomalies but an uneven ray path distribution, we compare the performance of our spatially varying smoothing method with the traditional inversion method, and show that the new method is capable of improving the recovery in regions of dense ray sampling. For real data applications, the resulting phase velocity maps of Rayleigh waves in SE Tibet produced using the spatially varying smoothing method show similar features to the results with the traditional method. However, the new results contain more detailed structures and appear to better resolve the amplitude of anomalies. From both synthetic and real data tests we demonstrate that our new approach is useful to achieve spatially varying resolution when used in regions with heterogeneous ray path distribution.

  12. Analyzing capture zone distributions (CZD) in growth: Theory and applications

    NASA Astrophysics Data System (ADS)

    Einstein, Theodore L.; Pimpinelli, Alberto; Luis González, Diego

    2014-09-01

    We have argued that the capture-zone distribution (CZD) in submonolayer growth can be well described by the generalized Wigner distribution (GWD) P(s) = a s^β exp(-b s²), where s is the CZ area divided by its average value. This approach offers arguably the most robust (least sensitive to mass transport) method to find the critical nucleus size i, since β ≈ i + 2. Various analytical and numerical investigations, which we discuss, show that although the simple GWD expression is inadequate in the tails of the distribution, it does account well for the central regime 0.5 < s < 2, where the data are sufficiently abundant to be reliably accessible experimentally. We summarize and catalog the many experiments in which this method has been applied.
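
    A sketch of fitting the GWD to a binned capture-zone-area histogram over the central regime, with a and b fixed by normalization and unit mean as described in this line of work, and beta then read as i ≈ beta − 2; the stand-in data are drawn from a gamma distribution purely for illustration.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.special import gamma as G

        def gwd(s, beta):
            """Generalized Wigner distribution P(s) = a s^beta exp(-b s^2), with a
            and b fixed by normalization and by requiring <s> = 1."""
            b = (G((beta + 2.0) / 2.0) / G((beta + 1.0) / 2.0)) ** 2
            a = 2.0 * b ** ((beta + 1.0) / 2.0) / G((beta + 1.0) / 2.0)
            return a * s**beta * np.exp(-b * s**2)

        # stand-in normalized capture-zone areas s (mean already scaled to 1)
        rng = np.random.default_rng(7)
        s_samples = rng.gamma(shape=4.0, scale=0.25, size=2000)
        hist, edges = np.histogram(s_samples, bins=30, range=(0.0, 3.0), density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])
        mask = (centers > 0.5) & (centers < 2.0)          # fit the central regime only
        (beta_hat,), _ = curve_fit(gwd, centers[mask], hist[mask], p0=[3.0], bounds=(0.0, 10.0))
        print(f"beta = {beta_hat:.2f}, implied i = beta - 2 = {beta_hat - 2.0:.1f}")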

  13. [Kriging analysis of vegetation index depression in peak cluster karst area].

    PubMed

    Yang, Qi-Yong; Jiang, Zhong-Cheng; Ma, Zu-Lu; Cao, Jian-Hua; Luo, Wei-Qun; Li, Wen-Jun; Duan, Xiao-Fang

    2012-04-01

    In order to characterize the spatial variability of the normalized difference vegetation index (NDVI) of a peak-cluster karst area, and taking into account the problem of "missing" information in the mountain shadows of remote sensing images of karst areas, the NDVI of the non-shaded areas of the Guohua Ecological Experimental Area in Pingguo County, Guangxi, was extracted using the image processing software ENVI. The spatial variability of the NDVI was analyzed using geostatistical methods, and the NDVI of the mountain shadow areas was predicted and validated. The results indicated that the NDVI of the study area showed strong spatial variability and spatial autocorrelation resulting from intrinsic factors, with a range of 300 m. The spatial distribution maps of the NDVI interpolated by the Kriging method showed that the mean NDVI was 0.196, with apparent strip and block patterns. Higher NDVI values were distributed in the parts of the peak-cluster area where the slope was greater than 25 degrees, while lower values were distributed in areas such as the foot of the peak clusters and the depressions, where the slope was less than 25 degrees. Validation of the Kriging method showed that the interpolation has a very high prediction accuracy and could predict the NDVI of the shadow areas, which provides a new idea and method for monitoring and evaluating karst rocky desertification.

  14. Quantifying data retention of perpendicular spin-transfer-torque magnetic random access memory chips using an effective thermal stability factor method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, Luc, E-mail: luc.thomas@headway.com; Jan, Guenole; Le, Son

    The thermal stability of perpendicular Spin-Transfer-Torque Magnetic Random Access Memory (STT-MRAM) devices is investigated at chip level. Experimental data are analyzed in the framework of the Néel-Brown model including distributions of the thermal stability factor Δ. We show that in the low error rate regime important for applications, the effect of distributions of Δ can be described by a single quantity, the effective thermal stability factor Δ_eff, which encompasses both the median and the standard deviation of the distributions. Data retention of memory chips can be assessed accurately by measuring Δ_eff as a function of device diameter and temperature. We apply this method to show that 54 nm devices based on our perpendicular STT-MRAM design meet our 10 year data retention target up to 120 °C.

  15. Design methodology for micro-discrete planar optics with minimum illumination loss for an extended source.

    PubMed

    Shim, Jongmyeong; Park, Changsu; Lee, Jinhyung; Kang, Shinill

    2016-08-08

    Recently, studies have examined techniques for modeling the light distribution of light-emitting diodes (LEDs) for various applications owing to their low power consumption, longevity, and light weight. The energy mapping technique, a design method that matches the energy distributions of an LED light source and target area, has been the focus of active research because of its design efficiency and accuracy. However, these studies have not considered the effects of the emitting area of the LED source. Therefore, there are limitations to the design accuracy for small, high-power applications with a short distance between the light source and optical system. A design method for compensating for the light distribution of an extended source after the initial optics design based on a point source was proposed to overcome such limits, but its time-consuming process and limited design accuracy with multiple iterations raised the need for a new design method that considers an extended source in the initial design stage. This study proposed a method for designing discrete planar optics that controls the light distribution and minimizes the optical loss with an extended source and verified the proposed method experimentally. First, the extended source was modeled theoretically, and a design method for discrete planar optics with the optimum groove angle through energy mapping was proposed. To verify the design method, design for the discrete planar optics was achieved for applications in illumination for LED flash. In addition, discrete planar optics for LED illuminance were designed and fabricated to create a uniform illuminance distribution. Optical characterization of these structures showed that the design was optimal; i.e., we plotted the optical losses as a function of the groove angle, and found a clear minimum. Simulations and measurements showed that an efficient optical design was achieved for an extended source.

  16. [An EMD based time-frequency distribution and its application in EEG analysis].

    PubMed

    Li, Xiaobing; Chu, Meng; Qiu, Tianshuang; Bao, Haiping

    2007-10-01

    Hilbert-Huang transform (HHT) is a new time-frequency analytic method to analyze the nonlinear and the non-stationary signals. The key step of this method is the empirical mode decomposition (EMD), with which any complicated signal can be decomposed into a finite and small number of intrinsic mode functions (IMF). In this paper, a new EMD based method for suppressing the cross-term of Wigner-Ville distribution (WVD) is developed and is applied to analyze the epileptic EEG signals. The simulation data and analysis results show that the new method suppresses the cross-term of the WVD effectively with an excellent resolution.

  17. A multi points ultrasonic detection method for material flow of belt conveyor

    NASA Astrophysics Data System (ADS)

    Zhang, Li; He, Rongjun

    2018-03-01

    Because single-point ultrasonic ranging produces large detection errors when the coal on a belt conveyor is unevenly distributed or large in size, a material flow detection method for belt conveyors is designed based on multi-point ultrasonic ranging. The method calculates the approximate cross-sectional area of the material by locating multiple points on the surfaces of the material and the belt, and then obtains the material flow from the running speed of the belt conveyor. The test results show that the method has a smaller detection error than single-point ultrasonic ranging when the coal is large and unevenly distributed.
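
    A minimal sketch of the multi-point idea: with several ultrasonic sensors across the belt, the material depth profile is the difference between the calibrated empty-belt distances and the measured coal-surface distances, the cross-sectional area follows by numerical integration, and flow is that area times belt speed; all readings and the belt speed below are invented for illustration.

        import numpy as np

        # hypothetical readings from five ultrasonic sensors mounted across the belt:
        # distance to the empty belt (calibration) and to the coal surface, in metres
        x = np.array([0.0, 0.2, 0.4, 0.6, 0.8])          # lateral sensor positions (m)
        d_empty = np.array([1.00, 1.02, 1.03, 1.02, 1.00])
        d_coal = np.array([0.98, 0.85, 0.74, 0.83, 0.97])

        thickness = np.clip(d_empty - d_coal, 0.0, None) # material depth profile (m)
        area = np.trapz(thickness, x)                    # approximate cross-section (m^2)
        belt_speed = 2.5                                 # m/s
        print("volumetric flow ~", round(area * belt_speed, 3), "m^3/s")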

  18. A simple transformation independent method for outlier definition.

    PubMed

    Johansen, Martin Berg; Christensen, Peter Astrup

    2018-04-10

    Definition and elimination of outliers is a key element for medical laboratories establishing or verifying reference intervals (RIs), especially as the inclusion of just a few outlying observations may seriously affect the determination of the reference limits. Many methods have been developed for the definition of outliers. Several of these methods are developed for the normal distribution, and data often require transformation before outlier elimination. We have developed a non-parametric, transformation-independent outlier definition. The new method relies on drawing reproducible histograms, using defined bin sizes above and below the median. The method is compared to the method recommended by CLSI/IFCC, which uses the Box-Cox transformation (BCT) and Tukey's fences for outlier definition. The comparison is done on eight simulated distributions and an indirect clinical dataset. The comparison on simulated distributions shows that, without outliers added, the recommended method in general defines fewer outliers. However, when outliers are added on one side, the proposed method often produces better results. With outliers on both sides, the methods are equally good. Furthermore, it is found that the presence of outliers affects the BCT and subsequently affects the limits determined by the currently recommended methods; this is especially seen in skewed distributions. The proposed outlier definition reproduced current RI limits on clinical data containing outliers. We find our simple transformation-independent outlier detection method to be as good as or better than the currently recommended methods.
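
    For context, the sketch below implements the currently recommended style of flagging (Box-Cox transformation followed by Tukey's fences) on synthetic skewed data with two gross outliers appended; the authors' histogram-based definition is not reproduced here, and the fence factor k = 1.5 is the usual convention rather than a value taken from the paper.

        import numpy as np
        from scipy import stats

        def tukey_outliers_after_boxcox(values, k=1.5):
            """Flag outliers by Box-Cox transforming the data and applying Tukey's
            fences on the transformed scale (sketch of the CLSI/IFCC-style approach)."""
            values = np.asarray(values, dtype=float)
            transformed, _ = stats.boxcox(values)            # requires positive data
            q1, q3 = np.percentile(transformed, [25, 75])
            iqr = q3 - q1
            keep = (transformed >= q1 - k * iqr) & (transformed <= q3 + k * iqr)
            return values[~keep]

        # skewed reference-interval style data with two gross outliers appended
        rng = np.random.default_rng(8)
        data = np.concatenate([rng.lognormal(3.0, 0.3, 240), [150.0, 220.0]])
        print("flagged:", np.sort(tukey_outliers_after_boxcox(data)))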

  19. An oscillatory kernel function method for lifting surfaces in mixed transonic flow

    NASA Technical Reports Server (NTRS)

    Cunningham, A. M., Jr.

    1974-01-01

    A study was conducted on the use of combined subsonic and supersonic linear theory to obtain economical and yet realistic solutions to unsteady transonic flow problems. With some modification, existing linear theory methods were combined into a single computer program. The method was applied to problems for which measured steady Mach number distributions and unsteady pressure distributions were available. By comparing theory and experiment, the transonic method showed a significant improvement over uniform flow methods. The results also indicated that more exact local Mach number effects and normal shock boundary conditions on the perturbation potential were needed. The validity of these improvements was demonstrated by application to steady flow.

  20. Extracting information on the spatial variability in erosion rate stored in detrital cooling age distributions in river sands

    NASA Astrophysics Data System (ADS)

    Braun, Jean; Gemignani, Lorenzo; van der Beek, Peter

    2018-03-01

    One of the main purposes of detrital thermochronology is to provide constraints on the regional-scale exhumation rate and its spatial variability in actively eroding mountain ranges. Procedures that use cooling age distributions coupled with hypsometry and thermal models have been developed in order to extract quantitative estimates of erosion rate and its spatial distribution, assuming steady state between tectonic uplift and erosion. This hypothesis precludes the use of these procedures to assess the likely transient response of mountain belts to changes in tectonic or climatic forcing. Other methods are based on an a priori knowledge of the in situ distribution of ages to interpret the detrital age distributions. In this paper, we describe a simple method that, using the observed detrital mineral age distributions collected along a river, allows us to extract information about the relative distribution of erosion rates in an eroding catchment without relying on a steady-state assumption, the value of thermal parameters or an a priori knowledge of in situ age distributions. The model is based on a relatively low number of parameters describing lithological variability among the various sub-catchments and their sizes and only uses the raw ages. The method we propose is tested against synthetic age distributions to demonstrate its accuracy and the optimum conditions for its use. In order to illustrate the method, we invert age distributions collected along the main trunk of the Tsangpo-Siang-Brahmaputra river system in the eastern Himalaya. From the inversion of the cooling age distributions we predict present-day erosion rates of the catchments along the Tsangpo-Siang-Brahmaputra river system, as well as some of its tributaries. We show that detrital age distributions contain dual information about present-day erosion rate, i.e., from the predicted distribution of surface ages within each catchment and from the relative contribution of any given catchment to the river distribution. The method additionally allows comparing modern erosion rates to long-term exhumation rates. We provide a simple implementation of the method in Python code within a Jupyter Notebook that includes the data used in this paper for illustration purposes.
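
    The sketch below is a toy forward model of the mixing idea underlying such inversions: each sub-catchment contributes detrital grains in proportion to its erosion rate times its area, so the downstream age distribution is a mixture of the sub-catchment distributions; all ages, areas, and erosion rates are invented, and the actual method inverts this relationship from observed ages rather than running it forward.

        import numpy as np

        # hypothetical per-catchment cooling-age populations (Myr) and areas (km^2)
        rng = np.random.default_rng(11)
        catchment_ages = {
            "upper": rng.normal(12.0, 1.5, 5000),
            "middle": rng.normal(6.0, 1.0, 5000),
            "lower": rng.normal(2.0, 0.5, 5000),
        }
        areas = {"upper": 900.0, "middle": 600.0, "lower": 300.0}
        erosion = {"upper": 0.2, "middle": 0.5, "lower": 1.5}     # assumed rates, mm/yr

        # forward model: each catchment supplies grains in proportion to
        # erosion rate x area, so the downstream detrital sample is a mixture
        weights = {k: erosion[k] * areas[k] for k in areas}
        total = sum(weights.values())
        n_grains = 120                                            # typical detrital sample size
        sample = np.concatenate([
            rng.choice(catchment_ages[k], size=round(n_grains * weights[k] / total))
            for k in areas
        ])
        print("mixing proportions:", {k: round(w / total, 2) for k, w in weights.items()})
        print("median detrital age (Myr):", round(float(np.median(sample)), 1))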

  1. EDMC: An enhanced distributed multi-channel anti-collision algorithm for RFID reader system

    NASA Astrophysics Data System (ADS)

    Zhang, YuJing; Cui, Yinghua

    2017-05-01

    In this paper, we propose an enhanced distributed multi-channel reader anti-collision algorithm for RFID environments, based on the existing distributed multi-channel reader anti-collision algorithm for RFID environments (DiMCA). We propose a monitoring method to decide whether a reader has received the latest control news after it selects the data channel. The simulation results show that the new algorithm improves the interrogation delay.

  2. The Applicability of Confidence Intervals of Quantiles for the Generalized Logistic Distribution

    NASA Astrophysics Data System (ADS)

    Shin, H.; Heo, J.; Kim, T.; Jung, Y.

    2007-12-01

    The generalized logistic (GL) distribution has been widely used for frequency analysis. However, there has been little study of the confidence intervals that indicate the prediction accuracy of quantiles for the GL distribution. In this paper, the estimation of confidence intervals of quantiles for the GL distribution is presented based on the method of moments (MOM), maximum likelihood (ML), and probability weighted moments (PWM), and the asymptotic variances of each quantile estimator are derived as functions of the sample sizes, return periods, and parameters. Monte Carlo simulation experiments are also performed to verify the applicability of the derived confidence intervals of the quantiles. The results show that the relative bias (RBIAS) and relative root mean square error (RRMSE) of the confidence intervals generally increase as the return period increases and decrease as the sample size increases. PWM performs better than the other methods in terms of RRMSE when the data are almost symmetric, while ML shows the smallest RBIAS and RRMSE when the data are more skewed and the sample size is moderately large. The GL model was applied to fit the distribution of annual maximum rainfall data. The results show that there are only small differences in the estimated quantiles between ML and PWM, while MOM gives distinctly different estimates.
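
    A hedged sketch of the kind of Monte Carlo experiment described above: samples are drawn from a chosen GL parent via its quantile function, the parameters are re-estimated with PWM/L-moment-style relations, and the sampling spread of the resulting 100-year quantile is summarised; the parent parameters and sample size are illustrative, and these estimators are not necessarily identical to those used in the paper.

        import numpy as np

        def gl_quantile(F, xi, alpha, kappa):
            # quantile function of the generalized logistic distribution (kappa != 0)
            return xi + alpha / kappa * (1.0 - ((1.0 - F) / F) ** kappa)

        def lmoment_fit(x):
            """PWM/L-moment style parameter estimates for the GL distribution
            (Hosking-type relations; a sketch, not the paper's exact estimators)."""
            x = np.sort(x)
            n = len(x)
            j = np.arange(1, n + 1)
            b0 = x.mean()
            b1 = np.sum((j - 1) / (n - 1) * x) / n
            b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
            l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
            kappa = -l3 / l2
            alpha = l2 * np.sin(kappa * np.pi) / (kappa * np.pi)
            xi = l1 - alpha * (1.0 / kappa - np.pi / np.sin(kappa * np.pi))
            return xi, alpha, kappa

        # Monte Carlo spread of the estimated 100-year quantile for a chosen GL parent
        rng = np.random.default_rng(9)
        true = (100.0, 30.0, -0.15)                       # xi, alpha, kappa
        q_true = gl_quantile(1.0 - 1.0 / 100.0, *true)
        estimates = []
        for _ in range(2000):
            sample = gl_quantile(rng.uniform(size=50), *true)
            estimates.append(gl_quantile(1.0 - 1.0 / 100.0, *lmoment_fit(sample)))
        print("true q100:", round(q_true, 1),
              " 95% sampling interval:", np.round(np.percentile(estimates, [2.5, 97.5]), 1))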

  3. A hierarchical Bayesian GEV model for improving local and regional flood quantile estimates

    NASA Astrophysics Data System (ADS)

    Lima, Carlos H. R.; Lall, Upmanu; Troy, Tara; Devineni, Naresh

    2016-10-01

    We estimate local and regional Generalized Extreme Value (GEV) distribution parameters for flood frequency analysis in a multilevel, hierarchical Bayesian framework, to explicitly model and reduce uncertainties. As prior information for the model, we assume that the GEV location and scale parameters for each site come from independent log-normal distributions, whose mean parameter scales with the drainage area. From empirical and theoretical arguments, the shape parameter for each site is shrunk towards a common mean. Non-informative prior distributions are assumed for the hyperparameters and the MCMC method is used to sample from the joint posterior distribution. The model is tested using annual maximum series from 20 streamflow gauges located in an 83,000 km2 flood-prone basin in Southeast Brazil. The results show a significant reduction in the uncertainty of flood quantile estimates relative to the traditional GEV model, particularly for sites with shorter records. For return periods within the range of the data (around 50 years), the Bayesian credible intervals for the flood quantiles tend to be narrower than the classical confidence limits based on the delta method. As the return period increases beyond the range of the data, the confidence limits from the delta method become unreliable and the Bayesian credible intervals provide a way to estimate satisfactory confidence bands for the flood quantiles considering parameter uncertainties and regional information. In order to evaluate the applicability of the proposed hierarchical Bayesian model for regional flood frequency analysis, we estimate flood quantiles for three randomly chosen out-of-sample sites and compare with classical estimates using the index flood method. The posterior distributions of the scaling law coefficients are used to define the predictive distributions of the GEV location and scale parameters for the out-of-sample sites given only their drainage areas, and the posterior distribution of the average shape parameter is taken as the regional predictive distribution for this parameter. While the index flood method does not provide a straightforward way to consider the uncertainties in the index flood and in the regional parameters, the results obtained here show that the proposed Bayesian method is able to produce adequate credible intervals for flood quantiles that are in accordance with empirical estimates.

  4. Gini estimation under infinite variance

    NASA Astrophysics Data System (ADS)

    Fontanari, Andrea; Taleb, Nassim Nicholas; Cirillo, Pasquale

    2018-07-01

    We study the problems related to the estimation of the Gini index in the presence of a fat-tailed data generating process, i.e. one in the stable distribution class with finite mean but infinite variance (i.e. with tail index α ∈ (1, 2)). We show that, in such a case, the Gini coefficient cannot be reliably estimated using conventional nonparametric methods, because of a downward bias that emerges under fat tails. This has important implications for the ongoing discussion about economic inequality. We start by discussing how the nonparametric estimator of the Gini index undergoes a phase transition in the symmetry structure of its asymptotic distribution, as the data distribution shifts from the domain of attraction of a light-tailed distribution to that of a fat-tailed one, especially in the case of infinite variance. We also show how the nonparametric Gini bias increases with lower values of α. We then prove that maximum likelihood estimation outperforms nonparametric methods, requiring a much smaller sample size to reach efficiency. Finally, for fat-tailed data, we provide a simple correction mechanism to the small sample bias of the nonparametric estimator based on the distance between the mode and the mean of its asymptotic distribution.
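
    A small sketch of the downward bias discussed above, using a Pareto tail with index alpha in (1, 2) as a convenient stand-in for the fat-tailed class (for a Pareto distribution the true Gini coefficient is 1/(2*alpha - 1)); the sample size and seed are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    alpha = 1.5                        # tail index in (1, 2): finite mean, infinite variance
    true_gini = 1.0 / (2.0 * alpha - 1.0)

    def gini_nonparametric(x):
        """Plug-in (nonparametric) Gini estimator based on sorted values."""
        x = np.sort(x)
        n = x.size
        ranks = np.arange(1, n + 1)
        return 2.0 * np.sum(ranks * x) / (n * np.sum(x)) - (n + 1.0) / n

    # Averaging the estimator over many samples exposes its downward bias under fat tails
    n, n_rep = 1000, 2000
    estimates = [gini_nonparametric(rng.pareto(alpha, n) + 1.0) for _ in range(n_rep)]
    print(f"true Gini = {true_gini:.3f}, mean nonparametric estimate = {np.mean(estimates):.3f}")
    ```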

  5. Optimization research on the concentration field of NO in selective catalytic reduction flue gas denitration system

    NASA Astrophysics Data System (ADS)

    Zheng, Qingyu; Zhang, Guoqiang; Che, Kai; Shao, Shikuan; Li, Yanfei

    2017-08-01

    Taking a 660 MW generator unit denitration system as the study object, an optimization and adjustment method is designed to control ammonia slip, i.e., the ammonia injection system is adjusted based on the NO concentration distribution at the inlet and outlet of the denitration system so that the injected ammonia is distributed evenly. The results show that this method can effectively improve the NO concentration distribution at the outlet of the denitration system and decrease the ammonia injection amount and the ammonia slip concentration. It also reduces the adverse impact of the SCR denitration process on the air preheater, helping to ensure safe production while guaranteeing that NO discharge meets the standard.

  6. Performance comparison of first-order conditional estimation with interaction and Bayesian estimation methods for estimating the population parameters and its distribution from data sets with a low number of subjects.

    PubMed

    Pradhan, Sudeep; Song, Byungjeong; Lee, Jaeyeon; Chae, Jung-Woo; Kim, Kyung Im; Back, Hyun-Moon; Han, Nayoung; Kwon, Kwang-Il; Yun, Hwi-Yeol

    2017-12-01

    Exploratory preclinical, as well as clinical, trials may involve a small number of patients, making it difficult to calculate and analyze the pharmacokinetic (PK) parameters, especially if the PK parameters show very high inter-individual variability (IIV). In this study, the performance of a classical first-order conditional estimation with interaction (FOCE-I) and expectation maximization (EM)-based Markov chain Monte Carlo Bayesian (BAYES) estimation methods were compared for estimating the population parameters and their distribution from data sets having a low number of subjects. In this study, 100 data sets were simulated with eight sampling points for each subject and with six different levels of IIV (5%, 10%, 20%, 30%, 50%, and 80%) in their PK parameter distribution. A stochastic simulation and estimation (SSE) study was performed to simultaneously simulate data sets and estimate the parameters using four different methods: FOCE-I only, BAYES(C) (FOCE-I and BAYES composite method), BAYES(F) (BAYES with all true initial parameters and fixed ω2), and BAYES only. Relative root mean squared error (rRMSE) and relative estimation error (REE) were used to analyze the differences between true and estimated values. A case study was performed with clinical data of theophylline available in the NONMEM distribution media. NONMEM software assisted by Pirana, PsN, and Xpose was used to estimate population PK parameters, and the R program was used to analyze and plot the results. The rRMSE and REE values of all parameter (fixed effect and random effect) estimates showed that all four methods performed equally at the lower IIV levels, while the FOCE-I method performed better than the EM-based methods at higher IIV levels (greater than 30%). In general, estimates of random-effect parameters showed significant bias and imprecision, irrespective of the estimation method used and the level of IIV. Similar performance of the estimation methods was observed with the theophylline dataset. The classical FOCE-I method appeared to estimate the PK parameters more reliably than the BAYES method when using a simple model and data containing only a few subjects. EM-based estimation methods can be considered for adapting to the specific needs of a modeling project at later steps of modeling.

  7. Inferred Eccentricity and Period Distributions of Kepler Eclipsing Binaries

    NASA Astrophysics Data System (ADS)

    Prsa, Andrej; Matijevic, G.

    2014-01-01

    Determining the underlying eccentricity and orbital period distributions from an observed sample of eclipsing binary stars is not a trivial task. Shen and Turner (2008) have shown that the commonly used maximum likelihood estimators are biased to larger eccentricities and do not describe the underlying distribution correctly; orbital periods suffer from a similar bias. Hogg, Myers and Bovy (2010) proposed a hierarchical probabilistic method for inferring the true eccentricity distribution of exoplanet orbits that uses the likelihood functions for individual star eccentricities. The authors show that proper inference outperforms the simple histogramming of the best-fit eccentricity values. We apply this method to the complete sample of eclipsing binary stars observed by the Kepler mission (Prsa et al. 2011) to derive the unbiased underlying eccentricity and orbital period distributions. These distributions can be used for studies of multiple star formation and dynamical evolution, and they can serve as a drop-in replacement for the prior, ad hoc distributions used in the exoplanet field for determining false positive occurrence rates.

  8. Defining and Enabling Resiliency of Electric Distribution Systems With Multiple Microgrids

    DOE PAGES

    Chanda, Sayonsom; Srivastava, Anurag K.

    2016-05-02

    This paper presents a method for quantifying and enabling the resiliency of a power distribution system (PDS) using analytical hierarchical process and percolation theory. Using this metric, quantitative analysis can be done to analyze the impact of possible control decisions to pro-actively enable the resilient operation of a distribution system with multiple microgrids and other resources. The developed resiliency metric can also be used in short-term distribution system planning. The ability to quantify resiliency can help distribution system planning engineers and operators to justify control actions, compare different reconfiguration algorithms, and develop proactive control actions to avert power system outage due to impending catastrophic weather situations or other adverse events. Validation of the proposed method is done using modified CERTS microgrids and a modified industrial distribution system. Furthermore, simulation results show topological and composite metrics that consider power system characteristics to quantify the resiliency of a distribution system with the proposed methodology, and improvements in resiliency using a two-stage reconfiguration algorithm and multiple microgrids.

  9. Recovering 3D particle size distributions from 2D sections

    NASA Astrophysics Data System (ADS)

    Cuzzi, Jeffrey N.; Olson, Daniel M.

    2017-03-01

    We discuss different ways to convert observed, apparent particle size distributions from 2D sections (thin sections, SEM maps on planar surfaces, etc.) into true 3D particle size distributions. We give a simple, flexible, and practical method to do this; show which of these techniques gives the most faithful conversions; and provide (online) short computer codes to calculate both 2D-3D recoveries and simulations of 2D observations by random sectioning. The most important systematic bias of 2D sectioning, from the standpoint of most chondrite studies, is an overestimate of the abundance of the larger particles. We show that fairly good recoveries can be achieved from observed size distributions containing 100-300 individual measurements of apparent particle diameter.
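
    The sectioning bias mentioned above can be reproduced in a few lines of simulation (a sketch under the usual spherical-particle assumption, not the authors' published codes): a random plane is more likely to hit a large sphere, and a sphere of radius R cut at offset h shows an apparent radius sqrt(R^2 - h^2).

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical true 3D radii (lognormal, arbitrary parameters)
    true_r = rng.lognormal(mean=0.0, sigma=0.4, size=200_000)

    # A random plane intersects a sphere with probability proportional to its radius
    hit = rng.uniform(0.0, true_r.max(), size=true_r.size) < true_r
    r_hit = true_r[hit]

    # For an intersected sphere, the cut offset is uniform in [0, R]
    h = rng.uniform(0.0, r_hit)
    apparent_r = np.sqrt(r_hit**2 - h**2)

    print("mean true radius     :", true_r.mean().round(3))
    print("mean apparent radius :", apparent_r.mean().round(3))
    print("fraction of sections coming from the largest 10% of spheres:",
          np.mean(r_hit > np.quantile(true_r, 0.9)).round(3))
    ```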

  10. Applications of a direct/iterative design method to complex transonic configurations

    NASA Technical Reports Server (NTRS)

    Smith, Leigh Ann; Campbell, Richard L.

    1992-01-01

    The current study explores the use of an automated direct/iterative design method for the reduction of drag in transport configurations, including configurations with engine nacelles. The method requires the user to choose a proper target-pressure distribution and then develops a corresponding airfoil section. The method can be applied to two-dimensional airfoil sections or to three-dimensional wings. The three cases that are presented show successful application of the method for reducing drag from various sources. The first two cases demonstrate the use of the method to reduce induced drag by designing to an elliptic span-load distribution and to reduce wave drag by decreasing the shock strength for a given lift. In the second case, a body-mounted nacelle is added, and the method is successfully used to eliminate the increase in wing drag associated with the nacelle addition by redesigning the wing, in combination with the given underwing nacelle, to the clean-wing target-pressure distributions; this demonstrates the ability to design to an arbitrary pressure distribution. These cases illustrate several possible uses of the method for reducing different types of drag. The magnitude of the obtainable drag reduction varies with the constraints of the problem and the configuration to be modified.

  11. Distributed collaborative probabilistic design of multi-failure structure with fluid-structure interaction using fuzzy neural network of regression

    NASA Astrophysics Data System (ADS)

    Song, Lu-Kai; Wen, Jie; Fei, Cheng-Wei; Bai, Guang-Chen

    2018-05-01

    To improve the computing efficiency and precision of probabilistic design for multi-failure structures, a distributed collaborative probabilistic design method based on a fuzzy neural network of regression (FR), called DCFRM, is proposed by integrating the distributed collaborative response surface method with a fuzzy neural network regression model. The mathematical model of DCFRM is established and the probabilistic design idea of DCFRM is introduced. The probabilistic analysis of a turbine blisk involving multiple failure modes (deformation failure, stress failure and strain failure) was investigated by considering fluid-structure interaction with the proposed method. The distribution characteristics, reliability degree, and sensitivity degree of each failure mode and of the overall failure mode on the turbine blisk are obtained, which provides a useful reference for improving the performance and reliability of aeroengines. Comparison with other methods shows that the DCFRM improves the computing efficiency of probabilistic analysis for multi-failure structures while keeping acceptable computational precision. Moreover, the proposed method offers a useful insight for reliability-based design optimization of multi-failure structures and thereby also enriches the theory and method of mechanical reliability design.

  12. Effects of random tooth profile errors on the dynamic behaviors of planetary gears

    NASA Astrophysics Data System (ADS)

    Xun, Chao; Long, Xinhua; Hua, Hongxing

    2018-02-01

    In this paper, a nonlinear random model is built to describe the dynamics of planetary gear trains (PGTs), in which the time-varying mesh stiffness, tooth profile modification (TPM), tooth contact loss, and random tooth profile error are considered. A stochastic method based on the method of multiple scales (MMS) is extended to analyze the statistical properties of the dynamic performance of PGTs. By the proposed multiple-scales-based stochastic method, the distributions of the dynamic transmission errors (DTEs) are investigated, and the lower and upper bounds are determined based on the 3σ principle. The Monte Carlo method is employed to verify the proposed method. Results indicate that the proposed method can be used to determine the distribution of the DTE of PGTs highly efficiently and provides a link between the manufacturing precision and the dynamic response. In addition, the effects of tooth profile modification on the distributions of vibration amplitudes and the probability of tooth contact loss with different manufacturing tooth profile errors are studied. The results show that the manufacturing precision affects the distribution of dynamic transmission errors dramatically and that appropriate TPMs are helpful to decrease the nominal value and the deviation of the vibration amplitudes.

  13. Characterization of demographic expansions from pairwise comparisons of linked microsatellite haplotypes.

    PubMed

    Navascués, Miguel; Hardy, Olivier J; Burgarella, Concetta

    2009-03-01

    This work extends the methods of demographic inference based on the distribution of pairwise genetic differences between individuals (mismatch distribution) to the case of linked microsatellite data. Population genetics theory describes the distribution of mutations among a sample of genes under different demographic scenarios. However, the actual number of mutations can rarely be deduced from DNA polymorphisms. The inclusion of mutation models in theoretical predictions can improve the performance of statistical methods. We have developed a maximum-pseudolikelihood estimator for the parameters that characterize a demographic expansion for a series of linked loci evolving under a stepwise mutation model. Those loci would correspond to DNA polymorphisms of linked microsatellites (such as those found on the Y chromosome or the chloroplast genome). The proposed method was evaluated with simulated data sets and with a data set of chloroplast microsatellites that showed signal for demographic expansion in a previous study. The results show that inclusion of a mutational model in the analysis improves the estimates of the age of expansion in the case of older expansions.

  14. A mathematical deconvolution formulation for superficial dose distribution measurement by Cerenkov light dosimetry.

    PubMed

    Brost, Eric Edward; Watanabe, Yoichi

    2018-06-01

    Cerenkov photons are created by high-energy radiation beams used for radiation therapy. In this study, we developed a Cerenkov light dosimetry technique to obtain a two-dimensional dose distribution in a superficial region of a medium from the images of Cerenkov photons by using a deconvolution method. An integral equation was derived to represent the Cerenkov photon image acquired by a camera for a given incident high-energy photon beam by using convolution kernels. Subsequently, an equation relating the planar dose at a depth to a Cerenkov photon image using the well-known relationship between the incident beam fluence and the dose distribution in a medium was obtained. The final equation contained a convolution kernel called the Cerenkov dose scatter function (CDSF). The CDSF was obtained by deconvolving the Cerenkov scatter function (CSF) with the dose scatter function (DSF). The GAMOS (Geant4-based Architecture for Medicine-Oriented Simulations) Monte Carlo particle simulation software was used to obtain the CSF and DSF. The dose distribution was calculated from the Cerenkov photon intensity data using an iterative deconvolution method with the CDSF. The theoretical formulation was experimentally evaluated by using an optical phantom irradiated by high-energy photon beams. The intensity of the deconvolved Cerenkov photon image showed linear dependence on the dose rate and the photon beam energy. The relative intensity showed a field size dependence similar to the beam output factor. Deconvolved Cerenkov images showed improvement in dose profiles compared with the raw image data. In particular, the deconvolution significantly improved the agreement in the high dose gradient region, such as in the penumbra. Deconvolution with a single iteration was found to provide the most accurate solution of the dose. Two-dimensional dose distributions of the deconvolved Cerenkov images agreed well with the reference distributions for both square fields and an irregularly shaped field defined by a multileaf collimator (MLC). The proposed technique improved the accuracy of the Cerenkov photon dosimetry in the penumbra region. The results of this study showed initial validation of the deconvolution method for beam profile measurements in a homogeneous medium. The new formulation accounted for the physical processes of Cerenkov photon transport in the medium more accurately than previously published methods. © 2018 American Association of Physicists in Medicine.
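
    As an illustration of the iterative-deconvolution step (a generic Richardson-Lucy sketch with a Gaussian kernel standing in for the CDSF, which is not reproduced here), the snippet below sharpens a blurred 2D "dose" image; the field geometry and kernel width are invented.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(image, psf, n_iter=10):
        """Generic Richardson-Lucy iterative deconvolution (illustrative only)."""
        estimate = np.full_like(image, image.mean())
        psf_mirror = psf[::-1, ::-1]
        for _ in range(n_iter):
            blurred = fftconvolve(estimate, psf, mode="same")
            ratio = image / np.maximum(blurred, 1e-12)
            estimate *= fftconvolve(ratio, psf_mirror, mode="same")
        return estimate

    # Toy example: a square field blurred by a Gaussian kernel playing the role of the CDSF
    x = np.linspace(-3.0, 3.0, 15)
    psf = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2))
    psf /= psf.sum()

    truth = np.zeros((64, 64)); truth[24:40, 24:40] = 1.0
    observed = fftconvolve(truth, psf, mode="same")
    recovered = richardson_lucy(observed, psf, n_iter=1)   # a single iteration, as in the study
    print("max |error| before :", np.abs(observed - truth).max().round(3))
    print("max |error| after  :", np.abs(recovered - truth).max().round(3))
    ```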

  15. Variation in sensitivity, absorption and density of the central rod distribution with eccentricity.

    PubMed

    Tornow, R P; Stilling, R

    1998-01-01

    To assess the human rod photopigment distribution and sensitivity with high spatial resolution within the central +/-15 degrees and to compare the results of pigment absorption, sensitivity and rod density distribution (number of rods per square degree). Rod photopigment density distribution was measured with imaging densitometry using a modified Rodenstock scanning laser ophthalmoscope. Dark-adapted sensitivity profiles were measured with green stimuli (17' arc diameter, 1 degree spacing) using a Tübingen manual perimeter. Sensitivity profiles were plotted on a linear scale and rod photopigment optical density distribution profiles were converted to absorption profiles of the rod photopigment layer. Both the absorption profile of the rod photopigment and the linear sensitivity profile for green stimuli show a minimum at the foveal center and increase steeply with eccentricity. The variation with eccentricity corresponds to the rod density distribution. Rod photopigment absorption profiles, retinal sensitivity profiles, and the rod density distribution are linearly related within the central +/-15 degrees. This is in agreement with theoretical considerations. Both methods, imaging retinal densitometry using a scanning laser ophthalmoscope and dark-adapted perimetry with small green stimuli, are useful for assessing the central rod distribution and sensitivity. However, at present, both methods have limitations. Suggestions for improving the reliability of both methods are given.

  16. Blade design and analysis using a modified Euler solver

    NASA Technical Reports Server (NTRS)

    Leonard, O.; Vandenbraembussche, R. A.

    1991-01-01

    An iterative method for blade design based on an Euler solver, described in an earlier paper, is used to design compressor and turbine blades providing shock-free transonic flows. The method shows rapid convergence and indicates how sensitive the flow is to small modifications of the blade geometry, which the classical iterative use of analysis methods might not be able to capture. The relationship between the required Mach number distribution and the resulting geometry is discussed. Examples show how geometrical constraints imposed upon the blade shape can be respected by using free geometrical parameters or by relaxing the required Mach number distribution. The same code is used both for the design of the required geometry and for the off-design calculations. Examples illustrate the difficulty of designing blade shapes with optimal performance also outside of the design point.

  17. Fast Low-Rank Bayesian Matrix Completion With Hierarchical Gaussian Prior Models

    NASA Astrophysics Data System (ADS)

    Yang, Linxiao; Fang, Jun; Duan, Huiping; Li, Hongbin; Zeng, Bing

    2018-06-01

    The problem of low-rank matrix completion is considered in this paper. To exploit the underlying low-rank structure of the data matrix, we propose a hierarchical Gaussian prior model, where columns of the low-rank matrix are assumed to follow a Gaussian distribution with zero mean and a common precision matrix, and a Wishart distribution is specified as a hyperprior over the precision matrix. We show that such a hierarchical Gaussian prior has the potential to encourage a low-rank solution. Based on the proposed hierarchical prior model, a variational Bayesian method is developed for matrix completion, where the generalized approximate message passing (GAMP) technique is embedded into the variational Bayesian inference in order to circumvent cumbersome matrix inverse operations. Simulation results show that our proposed method demonstrates superiority over existing state-of-the-art matrix completion methods.

  18. Uncertainty in determining extreme precipitation thresholds

    NASA Astrophysics Data System (ADS)

    Liu, Bingjun; Chen, Junfan; Chen, Xiaohong; Lian, Yanqing; Wu, Lili

    2013-10-01

    Extreme precipitation events are rare and occur mostly on a relatively small and local scale, which makes it difficult to set the thresholds for extreme precipitation in a large basin. Based on long-term daily precipitation data from 62 observation stations in the Pearl River Basin, this study has assessed the applicability of the non-parametric, parametric, and detrended fluctuation analysis (DFA) methods in determining extreme precipitation thresholds (EPTs) and the certainty of the EPTs from each method. Analyses from this study show that the non-parametric absolute critical value method is easy to use but unable to reflect differences in the spatial rainfall distribution. The non-parametric percentile method can account for the spatial distribution of precipitation, but the threshold value is sensitive to the size of the rainfall data series and is subject to the selection of a percentile, which makes it difficult to determine reasonable threshold values for a large basin. The parametric method can provide the most apt description of extreme precipitation by fitting extreme precipitation distributions with probability distribution functions; however, the selection of probability distribution functions, the goodness-of-fit tests, and the size of the rainfall data series can greatly affect the fitting accuracy. In contrast to the non-parametric and the parametric methods, which are unable to provide EPTs with certainty, the DFA method, although involving complicated computational processes, has proven to be the most appropriate method and is able to provide a unique set of EPTs for a large basin with uneven spatio-temporal precipitation distribution. The consistency of the spatial distribution of DFA-based thresholds with the annual average precipitation, the coefficient of variation (CV), and the coefficient of skewness (CS) of the daily precipitation further shows that EPTs determined by the DFA method are more reasonable and applicable for the Pearl River Basin.
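
    A minimal sketch of the non-parametric percentile approach discussed above (not the DFA procedure), applied station by station to wet-day precipitation; the 95th percentile and the 1 mm wet-day cut-off are common but arbitrary choices, and the series are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical daily precipitation series (mm) for three stations, 30 years each
    stations = {f"station_{i}": rng.gamma(shape=0.5, scale=8.0, size=365 * 30) for i in range(3)}

    def percentile_threshold(daily_precip, q=95.0, wet_day=1.0):
        """Extreme-precipitation threshold taken as a percentile of wet-day amounts."""
        wet = daily_precip[daily_precip >= wet_day]
        return np.percentile(wet, q)

    for name, series in stations.items():
        print(name, "EPT (95th wet-day percentile):", round(percentile_threshold(series), 1), "mm")
    ```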

  19. Time evolution of a Gaussian class of quasi-distribution functions under quadratic Hamiltonian.

    PubMed

    Ginzburg, D; Mann, A

    2014-03-10

    A Lie algebraic method for propagation of the Wigner quasi-distribution function (QDF) under quadratic Hamiltonian was presented by Zoubi and Ben-Aryeh. We show that the same method can be used in order to propagate a rather general class of QDFs, which we call the "Gaussian class." This class contains as special cases the well-known Wigner, Husimi, Glauber, and Kirkwood-Rihaczek QDFs. We present some examples of the calculation of the time evolution of those functions.

  20. Fractal analysis of the dark matter and gas distributions in the Mare-Nostrum universe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaite, José, E-mail: jose.gaite@upm.es

    2010-03-01

    We develop a method of multifractal analysis of N-body cosmological simulations that improves on the customary counts-in-cells method by taking special care of the effects of discreteness and large scale homogeneity. The analysis of the Mare-Nostrum simulation with our method provides strong evidence of self-similar multifractal distributions of dark matter and gas, with a halo mass function that is of Press-Schechter type but has a power-law exponent -2, as corresponds to a multifractal. Furthermore, our analysis shows that the dark matter and gas distributions are indistinguishable as multifractals. To determine if there is any gas biasing, we calculate the cross-correlation coefficient, with negative but inconclusive results. Hence, we develop an effective Bayesian analysis connected with information theory, which clearly demonstrates that the gas is biased in a long range of scales, up to the scale of homogeneity. However, entropic measures related to the Bayesian analysis show that this gas bias is small (in a precise sense) and is such that the fractal singularities of both distributions coincide and are identical. We conclude that this common multifractal cosmic web structure is determined by the dynamics and is independent of the initial conditions.

  1. SimBA: simulation algorithm to fit extant-population distributions.

    PubMed

    Parida, Laxmi; Haiminen, Niina

    2015-03-14

    Simulation of populations with specified characteristics such as allele frequencies, linkage disequilibrium, etc., is an integral component of many studies, including in-silico breeding optimization. Since the accuracy and sensitivity of population simulation are critical to the quality of the output of the applications that use them, accurate algorithms are required to provide a strong foundation to the methods in these studies. In this paper we present SimBA (Simulation using Best-fit Algorithm), a non-generative approach based on a combination of stochastic techniques and discrete methods. We optimize a hill climbing algorithm and extend the framework to include multiple subpopulation structures. Additionally, we show that SimBA is very sensitive to the input specifications, i.e., very similar but distinct input characteristics result in distinct outputs with high fidelity to the specified distributions. This property of the simulation is not explicitly modeled or studied by previous methods. We show that SimBA outperforms the existing population simulation methods, both in terms of accuracy as well as time-efficiency. Not only does it construct populations that meet the input specifications more stringently than other published methods, SimBA is also easy to use. It does not require explicit parameter adaptations or calibrations. Also, it can work with input specified as distributions, without an exemplar matrix or population as required by some methods. SimBA is available at http://researcher.ibm.com/project/5669 .
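
    A toy sketch of the hill-climbing idea (not the SimBA implementation): starting from a random 0/1 genotype matrix, single entries are flipped and kept only when the flip reduces the distance to a set of target allele frequencies; the matrix size and targets are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_ind, n_loci = 100, 20
    target_freq = rng.uniform(0.1, 0.9, size=n_loci)   # hypothetical target allele frequencies

    pop = rng.integers(0, 2, size=(n_ind, n_loci))      # random starting population

    def error(pop):
        """Total absolute deviation of current allele frequencies from the targets."""
        return np.abs(pop.mean(axis=0) - target_freq).sum()

    current = error(pop)
    for _ in range(20_000):                              # simple hill climbing by single-bit flips
        i, j = rng.integers(n_ind), rng.integers(n_loci)
        pop[i, j] ^= 1
        new = error(pop)
        if new <= current:
            current = new
        else:
            pop[i, j] ^= 1                               # revert a flip that did not help
    print("final allele-frequency error:", round(float(current), 4))
    ```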

  2. Measurement of distributions of temperature and wavelength-dependent emissivity of a laminar diffusion flame using hyper-spectral imaging technique

    NASA Astrophysics Data System (ADS)

    Liu, Huawei; Zheng, Shu; Zhou, Huaichun; Qi, Chaobo

    2016-02-01

    A generalized method to estimate a two-dimensional (2D) distribution of temperature and wavelength-dependent emissivity in a sooty flame from spectroscopic radiation intensities is proposed in this paper. The method adopts a Newton-type iterative scheme to solve for the unknown coefficients in the polynomial relationship between the emissivity and the wavelength, as well as the unknown temperature. Polynomial functions of increasing order are examined, and the final result is accepted once it converges. Numerical simulation on a fictitious flame with wavelength-dependent absorption coefficients shows good performance, with relative errors of less than 0.5% in the average temperature. In addition, a hyper-spectral imaging device is introduced to measure an ethylene/air laminar diffusion flame with the proposed method. The proper order for the polynomial function is selected to be 2, because each one-order increase in the polynomial changes the estimated temperature by less than 20 K. For the ethylene laminar diffusion flame with 194 ml min-1 C2H4 and 284 L min-1 air studied in this paper, the 2D distribution of average temperature estimated along the line of sight is similar to, but smoother than, the local temperature distribution given in the references, and the 2D distribution of emissivity shows a cumulative effect of the absorption coefficient along the line of sight. The results also show that the emissivity of the flame decreases as the wavelength increases. The emissivity at a wavelength of 400 nm is about 2.5 times that at 1000 nm for a typical line of sight in the flame, and the absorption coefficient of soot varies with wavelength in the same way.

  3. Influence of the weighing bar position in vessel on measurement of cement’s particle size distribution by using the buoyancy weighing-bar method

    NASA Astrophysics Data System (ADS)

    Tambun, R.; Sihombing, R. O.; Simanjuntak, A.; Hanum, F.

    2018-02-01

    The buoyancy weighing-bar method is a new, simple and cost-effective method to determine the particle size distribution of both settling and floating particles. In this method, the density change in a suspension due to particle migration is measured by weighing the buoyancy acting on a weighing bar hung in the suspension, and the particle size distribution is then calculated using the length of the bar and the time-course change in the mass of the bar. The apparatus of this method consists of a weighing bar and an analytical balance with a hook for under-floor weighing. The weighing bar is used to detect the density change in the suspension. In this study we investigate the influence of the weighing-bar position in the vessel on settling particle size distribution measurements of cement using the buoyancy weighing-bar method. The vessel used in this experiment is a graduated cylinder with a diameter of 65 mm, and the weighing bar is placed either at the center or off center of the vessel. The diameter of the weighing bar in this experiment is 10 mm, and kerosene is used as the dispersion liquid. The results show that the position of the weighing bar in the vessel has no significant effect on the determination of the cement's particle size distribution by the buoyancy weighing-bar method, and the results obtained are comparable to those measured by the settling balance method.
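
    A hedged sketch of the size calculation that sedimentation-type methods such as this one rely on: Stokes' law maps a settling time over the bar length to a particle diameter, and the undersize fraction at that diameter is then read from the recorded change in the apparent mass of the bar (not shown here). The material properties and bar length below are assumed, not taken from the paper.

    ```python
    import numpy as np

    # Stokes' law: terminal settling velocity v = g * (rho_p - rho_f) * d**2 / (18 * mu)
    g = 9.81          # m s^-2
    rho_p = 3150.0    # cement particle density, kg m^-3 (typical value, assumed)
    rho_f = 800.0     # kerosene density, kg m^-3 (approximate)
    mu = 1.6e-3       # kerosene viscosity, Pa s (approximate)
    h = 0.20          # effective settling length of the weighing bar, m (assumed)

    def stokes_diameter(t):
        """Largest particle diameter (m) still in suspension after settling time t (s)."""
        v = h / t     # a particle with this velocity has just settled through the length h
        return np.sqrt(18.0 * mu * v / (g * (rho_p - rho_f)))

    for t in [60.0, 300.0, 900.0, 1800.0, 3600.0]:  # sampling times, s
        print(f"t = {t:6.0f} s  ->  Stokes diameter = {stokes_diameter(t) * 1e6:5.1f} um")
    ```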

  4. Multiple co-clustering based on nonparametric mixture models with heterogeneous marginal distributions

    PubMed Central

    Yoshimoto, Junichiro; Shimizu, Yu; Okada, Go; Takamura, Masahiro; Okamoto, Yasumasa; Yamawaki, Shigeto; Doya, Kenji

    2017-01-01

    We propose a novel method for multiple clustering, which is useful for analysis of high-dimensional data containing heterogeneous types of features. Our method is based on nonparametric Bayesian mixture models in which features are automatically partitioned (into views) for each clustering solution. This feature partition works as feature selection for a particular clustering solution, which screens out irrelevant features. To make our method applicable to high-dimensional data, a co-clustering structure is newly introduced for each view. Further, the outstanding novelty of our method is that we simultaneously model different distribution families, such as Gaussian, Poisson, and multinomial distributions in each cluster block, which widens areas of application to real data. We apply the proposed method to synthetic and real data, and show that our method outperforms other multiple clustering methods both in recovering true cluster structures and in computation time. Finally, we apply our method to a depression dataset with no true cluster structure available, from which useful inferences are drawn about possible clustering structures of the data. PMID:29049392

  5. Scene-based nonuniformity correction using local constant statistics.

    PubMed

    Zhang, Chao; Zhao, Wenyi

    2008-06-01

    In scene-based nonuniformity correction, the statistical approach assumes all possible values of the true-scene pixel are seen at each pixel location. This global-constant-statistics assumption does not distinguish fixed pattern noise from spatial variations in the average image, which often causes "ghosting" artifacts in the corrected images, since existing spatial variations are treated as noise. We introduce a new statistical method to reduce the ghosting artifacts. Our method assumes local-constant statistics: the temporal signal distribution is not constant across the whole image but is constant within a local region around each pixel, while varying on larger scales. Under the assumption that the fixed pattern noise concentrates in a higher spatial-frequency domain than the distribution variation, we apply a wavelet method to the gain and offset images of the noise and separate the pattern noise from the spatial variations in the temporal distribution of the scene. We compare the results to the global-constant-statistics method using a clean sequence with large artificial pattern noise. We also apply the method to a challenging CCD video sequence and a LWIR sequence to show how effective it is in reducing noise and ghosting artifacts.

  6. A Direction Finding Method with A 3-D Array Based on Aperture Synthesis

    NASA Astrophysics Data System (ADS)

    Li, Shiwen; Chen, Liangbing; Gao, Zhaozhao; Ma, Wenfeng

    2018-01-01

    Direction finding for electronic warfare applications should provide as wide a field of view as possible. However, the maximum unambiguous field of view of conventional direction finding methods is a hemisphere; they cannot distinguish the direction of arrival of signals from the back lobe of the array. In this paper, a full 3-D direction finding method based on aperture synthesis radiometry is proposed. The model of the direction finding system is illustrated, and the fundamentals are presented. The relationship between the outputs of the measurements of a 3-D array and the 3-D power distribution of the point sources can be represented by a 3-D Fourier transform, and the 3-D power distribution of the point sources can then be reconstructed by an inverse 3-D Fourier transform. In order to display the 3-D power distribution of the point sources conveniently, the whole spherical distribution is represented by two 2-D circular distribution images, one for the upper hemisphere and the other for the lower hemisphere. A numeric simulation is designed and conducted to demonstrate the feasibility of the method. The results show that the method can correctly estimate an arbitrary direction of arrival of signals in 3-D space.

  7. Alternative evaluation metrics for risk adjustment methods.

    PubMed

    Park, Sungchul; Basu, Anirban

    2018-06-01

    Risk adjustment is instituted to counter risk selection by accurately equating payments with expected expenditures. Traditional risk-adjustment methods are designed to estimate accurate payments at the group level. However, this generates residual risks at the individual level, especially for high-expenditure individuals, thereby inducing health plans to avoid those with high residual risks. To identify an optimal risk-adjustment method, we perform a comprehensive comparison of prediction accuracies at the group level, at the tail distributions, and at the individual level across 19 estimators: 9 parametric regression, 7 machine learning, and 3 distributional estimators. Using the 2013-2014 MarketScan database, we find that no one estimator performs best in all prediction accuracies. Generally, machine learning and distribution-based estimators achieve higher group-level prediction accuracy than parametric regression estimators. However, parametric regression estimators show higher tail distribution prediction accuracy and individual-level prediction accuracy, especially at the tails of the distribution. This suggests that there is a trade-off in selecting an appropriate risk-adjustment method between estimating accurate payments at the group level and lower residual risks at the individual level. Our results indicate that an optimal method cannot be determined solely on the basis of statistical metrics but rather needs to account for simulating plans' risk selective behaviors. Copyright © 2018 John Wiley & Sons, Ltd.

  8. On the estimation of spread rate for a biological population

    Treesearch

    Jim Clark; Lajos Horváth; Mark Lewis

    2001-01-01

    We propose a nonparametric estimator for the rate of spread of an introduced population. We prove that the limit distribution of the estimator is normal or stable, depending on the behavior of the moment generating function. We show that resampling methods can also be used to approximate the distribution of the estimators.
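
    As a hedged illustration of the resampling idea (not the authors' estimator or its limit theory), the sketch below takes the spread rate to be the slope of the yearly maximum colonised distance and bootstraps that slope to approximate its sampling distribution; the data are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    years = np.arange(20)
    # Hypothetical yearly maximum colonised distance (km): true rate 2 km/yr plus noise
    distance = 2.0 * years + rng.normal(0.0, 1.5, size=years.size)

    def spread_rate(y, d):
        """Spread rate as the slope of a straight-line fit to distance versus time."""
        return np.polyfit(y, d, 1)[0]

    point = spread_rate(years, distance)
    boot = []
    for _ in range(2000):                  # bootstrap over resampled year/distance pairs
        idx = rng.integers(0, years.size, size=years.size)
        boot.append(spread_rate(years[idx], distance[idx]))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"spread rate = {point:.2f} km/yr, bootstrap 95% interval ({lo:.2f}, {hi:.2f})")
    ```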

  9. Molecular Epidemiology of Canine Parvovirus, Europe

    PubMed Central

    Desario, Costantina; Addie, Diane D.; Martella, Vito; Vieira, Maria João; Elia, Gabriella; Zicola, Angelique; Davis, Christopher; Thompson, Gertrude; Thiry, Ethienne; Truyen, Uwe; Buonavoglia, Canio

    2007-01-01

    Canine parvovirus (CPV), which causes hemorrhagic enteritis in dogs, has 3 antigenic variants: types 2a, 2b, and 2c. Molecular method assessment of the distribution of the CPV variants in Europe showed that the new variant CPV-2c is widespread in Europe and that the viruses are distributed in different countries. PMID:17953097

  10. Syndromic surveillance models using Web data: the case of scarlet fever in the UK.

    PubMed

    Samaras, Loukas; García-Barriocanal, Elena; Sicilia, Miguel-Angel

    2012-03-01

    Recent research has shown the potential of Web queries as a source for syndromic surveillance, and existing studies show that these queries can be used as a basis for estimation and prediction of the development of a syndromic disease, such as influenza, using log linear (logit) statistical models. Two alternative models are applied to the relationship between cases and Web queries in this paper. We examine the applicability of using statistical methods to relate search engine queries with scarlet fever cases in the UK, taking advantage of tools to acquire the appropriate data from Google, and using an alternative statistical method based on gamma distributions. The results show that using logit models, the Pearson correlation factor between Web queries and the data obtained from the official agencies must be over 0.90, otherwise the prediction of the peak and the spread of the distributions gives significant deviations. In this paper, we describe the gamma distribution model and show that we can obtain better results in all cases using gamma transformations, and especially in those with a smaller correlation factor.

  11. Feasibility of continuous-variable quantum key distribution with noisy coherent states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Usenko, Vladyslav C.; Department of Optics, Palacky University, CZ-772 07 Olomouc; Filip, Radim

    2010-02-15

    We address security of the quantum key distribution scheme based on the noisy modulation of coherent states and investigate how it is robust against noise in the modulation regardless of the particular technical implementation. As the trusted preparation noise is shown to be security breaking even for purely lossy channels, we reveal the essential difference between two types of trusted noise, namely sender-side preparation noise and receiver-side detection noise, the latter being security preserving. We consider the method of sender-side state purification to compensate the preparation noise and show its applicability in the realistic conditions of channel loss, untrusted channel excess noise, and trusted detection noise. We show that purification makes the scheme robust to the preparation noise (i.e., even the arbitrary noisy coherent states can in principle be used for the purpose of quantum key distribution). We also take into account the effect of realistic reconciliation and show that the purification method is still efficient in this case up to a limited value of preparation noise.

  12. Design and performance evaluation of a distributed OFDMA-based MAC protocol for MANETs.

    PubMed

    Park, Jaesung; Chung, Jiyoung; Lee, Hyungyu; Lee, Jung-Ryun

    2014-01-01

    In this paper, we propose a distributed MAC protocol for OFDMA-based wireless mobile ad hoc multihop networks, in which the resource reservation and data transmission procedures are operated in a distributed manner. A frame format is designed considering the characteristics of OFDMA that each node can transmit or receive data to or from multiple nodes simultaneously. Under this frame structure, we propose a distributed resource management method including network state estimation and resource reservation processes. We categorize five types of logical errors according to their root causes and show that two of the logical errors are inevitable while three of them are avoided under the proposed distributed MAC protocol. In addition, we provide a systematic method to determine the advertisement period of each node by presenting a clear relation between the accuracy of estimated network states and the signaling overhead. We evaluate the performance of the proposed protocol in respect of the reservation success rate and the success rate of data transmission. Since our method focuses on avoiding logical errors, it could be easily placed on top of the other resource allocation methods focusing on the physical layer issues of the resource management problem and interworked with them.

  13. Inverse analysis of non-uniform temperature distributions using multispectral pyrometry

    NASA Astrophysics Data System (ADS)

    Fu, Tairan; Duan, Minghao; Tian, Jibin; Shi, Congling

    2016-05-01

    Optical diagnostics can be used to obtain sub-pixel temperature information in remote sensing. A multispectral pyrometry method was developed using multiple spectral radiation intensities to deduce the temperature area distribution in the measurement region. The method transforms a spot multispectral pyrometer with a fixed field of view into a pyrometer with enhanced spatial resolution that can give sub-pixel temperature information from a "one pixel" measurement region. A temperature area fraction function was defined to represent the spatial temperature distribution in the measurement region. The method is illustrated by simulations of a multispectral pyrometer with a spectral range of 8.0-13.0 μm measuring a non-isothermal region with a temperature range of 500-800 K in the spot pyrometer field of view. The inverse algorithm for the sub-pixel temperature distribution (temperature area fractions) in the "one pixel" verifies this multispectral pyrometry method. The results show that an improved Levenberg-Marquardt algorithm is effective for this ill-posed inverse problem with relative errors in the temperature area fractions of (-3%, 3%) for most of the temperatures. The analysis provides a valuable reference for the use of spot multispectral pyrometers for sub-pixel temperature distributions in remote sensing measurements.
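
    A simplified numerical sketch of the inverse problem described above (not the improved Levenberg-Marquardt scheme of the paper): the measured spectral intensities are modelled as an area-weighted sum of blackbody radiances over a set of candidate temperatures, and the temperature area fractions are recovered with a non-negative least-squares solve; geometry and emissivity factors are folded into the weights, and the "measurement" is synthetic and noise-free.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    h, c, k = 6.626e-34, 2.998e8, 1.381e-23

    def planck(lam, T):
        """Blackbody spectral radiance at wavelength lam (m) and temperature T (K)."""
        return 2.0 * h * c**2 / lam**5 / (np.exp(h * c / (lam * k * T)) - 1.0)

    lam = np.linspace(8.0e-6, 13.0e-6, 25)            # pyrometer spectral range, 8-13 um
    T_bins = np.array([500.0, 600.0, 700.0, 800.0])   # candidate temperatures inside the pixel
    true_frac = np.array([0.1, 0.5, 0.0, 0.4])        # assumed temperature area fractions

    A = np.stack([planck(lam, T) for T in T_bins], axis=1)   # (wavelengths x temperatures)
    measured = A @ true_frac                                  # synthetic multispectral signal

    frac, _ = nnls(A, measured)
    print("true fractions     :", true_frac)
    print("recovered fractions:", frac.round(3))
    ```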

  14. Redshift data and statistical inference

    NASA Technical Reports Server (NTRS)

    Newman, William I.; Haynes, Martha P.; Terzian, Yervant

    1994-01-01

    Frequency histograms and the 'power spectrum analysis' (PSA) method, the latter developed by Yu & Peebles (1969), have been widely employed as techniques for establishing the existence of periodicities. We provide a formal analysis of these two classes of methods, including controlled numerical experiments, to better understand their proper use and application. In particular, we note that typical published applications of frequency histograms commonly employ far greater numbers of class intervals or bins than is advisable by statistical theory, sometimes giving rise to the appearance of spurious patterns. The PSA method generates a sequence of random numbers from observational data which, it is claimed, is exponentially distributed with unit mean and variance, essentially independent of the distribution of the original data. We show that the derived random process is nonstationary and produces a small but systematic bias in the usual estimate of the mean and variance. Although the derived variable may be reasonably described by an exponential distribution, the tail of the distribution is far removed from that of an exponential, thereby rendering statistical inference and confidence testing based on the tail of the distribution completely unreliable. Finally, we examine a number of astronomical examples wherein these methods have been used, giving rise to widespread acceptance of statistically unconfirmed conclusions.

  15. On the modified grain-size-distribution method to evaluate the dynamic recrystallisation fraction in AISI 304 stainless steel

    NASA Astrophysics Data System (ADS)

    Hong, D. H.; Park, J. K.

    2018-04-01

    The purpose of the present work was to verify the grain size distribution (GSD) method, which was recently proposed by one of the present authors as a method for evaluating the fraction of dynamic recrystallisation (DRX) in a microalloyed medium carbon steel. To verify the GSD-method, we have selected a 304 stainless steel as a model system and have measured the evolution of the overall grain size distribution (including both the recrystallised and unrecrystallised grains) during hot compression at 1,000 °C in a Gleeble machine; the DRX fraction estimated using the GSD method is compared with the experimentally measured value via EBSD. The results show that the previous GSD method tends to overestimate the DRX fraction due to the utilisation of a plain lognormal distribution function (LDF). To overcome this shortcoming, we propose a modified GSD-method wherein an area-weighted LDF, in place of a plain LDF, is employed to model the evolution of GSD during hot deformation. Direct measurement of the DRX fraction using EBSD confirms that the modified GSD-method provides a reliable method for evaluating the DRX fraction from the experimentally measured GSDs. Reasonable agreement between the DRX fraction and softening fraction suggests that the Kocks-Mecking method utilising the Voce equation can be satisfactorily used to model the work hardening and dynamic recovery behaviour of steels during hot deformation.

  16. Avalanche statistics from data with low time resolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LeBlanc, Michael; Nawano, Aya; Wright, Wendelin J.

    Extracting avalanche distributions from experimental microplasticity data can be hampered by limited time resolution. We compute the effects of low time resolution on avalanche size distributions and give quantitative criteria for diagnosing and circumventing problems associated with low time resolution. We show that traditional analysis of data obtained at low acquisition rates can lead to avalanche size distributions with incorrect power-law exponents or no power-law scaling at all. Furthermore, we demonstrate that it can lead to apparent data collapses with incorrect power-law and cutoff exponents. We propose new methods to analyze low-resolution stress-time series that can recover the size distribution of the underlying avalanches even when the resolution is so low that naive analysis methods give incorrect results. We test these methods on both downsampled simulation data from a simple model and downsampled bulk metallic glass compression data and find that the methods recover the correct critical exponents.

  17. Sample size determination for logistic regression on a logit-normal distribution.

    PubMed

    Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance

    2017-06-01

    Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination ([Formula: see text]) of a covariate of interest with other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for [Formula: see text] for multiple logistic regression, (ii) available interim or group-sequential designs, and (iii) much smaller required sample size.

  18. Avalanche statistics from data with low time resolution

    DOE PAGES

    LeBlanc, Michael; Nawano, Aya; Wright, Wendelin J.; ...

    2016-11-22

    Extracting avalanche distributions from experimental microplasticity data can be hampered by limited time resolution. We compute the effects of low time resolution on avalanche size distributions and give quantitative criteria for diagnosing and circumventing problems associated with low time resolution. We show that traditional analysis of data obtained at low acquisition rates can lead to avalanche size distributions with incorrect power-law exponents or no power-law scaling at all. Furthermore, we demonstrate that it can lead to apparent data collapses with incorrect power-law and cutoff exponents. We propose new methods to analyze low-resolution stress-time series that can recover the size distribution of the underlying avalanches even when the resolution is so low that naive analysis methods give incorrect results. We test these methods on both downsampled simulation data from a simple model and downsampled bulk metallic glass compression data and find that the methods recover the correct critical exponents.

  19. Application of a Threshold Method to Airborne-Spaceborne Attenuating-Wavelength Radars for the Estimation of Space-Time Rain-Rate Statistics.

    NASA Astrophysics Data System (ADS)

    Meneghini, Robert

    1998-09-01

    A method is proposed for estimating the area-average rain-rate distribution from attenuating-wavelength spaceborne or airborne radar data. Because highly attenuated radar returns yield unreliable estimates of the rain rate, these are eliminated by means of a proxy variable, Q, derived from the apparent radar reflectivity factors and a power law relating the attenuation coefficient and the reflectivity factor. In determining the probability distribution function of areawide rain rates, the elimination of attenuated measurements at high rain rates and the loss of data at light rain rates, because of low signal-to-noise ratios, leads to truncation of the distribution at the low and high ends. To estimate it over all rain rates, a lognormal distribution is assumed, the parameters of which are obtained from a nonlinear least squares fit to the truncated distribution. Implementation of this type of threshold method depends on the method used in estimating the high-resolution rain-rate estimates (e.g., either the standard Z-R or the Hitschfeld-Bordan estimate) and on the type of rain-rate estimate (either point or path averaged). To test the method, measured drop size distributions are used to characterize the rain along the radar beam. Comparisons with the standard single-threshold method or with the sample mean, taken over the high-resolution estimates, show that the present method usually provides more accurate determinations of the area-averaged rain rate if the values of the threshold parameter, QT, are chosen in the range from 0.2 to 0.4.
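
    A hedged sketch of the fitting step described above: a lognormal is fitted by nonlinear least squares to a rain-rate sample truncated at both ends, and the fitted parameters then describe the full (untruncated) distribution; the thresholds and data are synthetic, and the estimator is a simplification of the one in the paper.

    ```python
    import numpy as np
    from scipy import stats
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(5)
    mu_true, sigma_true = 1.0, 0.8                 # log-space parameters of the "true" rain rates
    rain = rng.lognormal(mu_true, sigma_true, 20000)

    r_lo, r_hi = 1.0, 20.0                         # light rain lost to noise, heavy rain to attenuation
    kept = rain[(rain >= r_lo) & (rain <= r_hi)]

    # Empirical density of the truncated sample
    counts, edges = np.histogram(kept, bins=40, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])

    def truncated_pdf(r, mu, sigma):
        """Lognormal pdf renormalised over the observable range [r_lo, r_hi]."""
        dist = stats.lognorm(s=sigma, scale=np.exp(mu))
        return dist.pdf(r) / (dist.cdf(r_hi) - dist.cdf(r_lo))

    (mu_fit, sigma_fit), _ = curve_fit(truncated_pdf, centers, counts, p0=(0.5, 0.5),
                                       bounds=([-5.0, 0.05], [5.0, 5.0]))
    print(f"fitted mu = {mu_fit:.2f} (true {mu_true}), sigma = {sigma_fit:.2f} (true {sigma_true})")
    ```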

  20. A comparison of numerical solutions of partial differential equations with probabilistic and possibilistic parameters for the quantification of uncertainty in subsurface solute transport.

    PubMed

    Zhang, Kejiang; Achari, Gopal; Li, Hua

    2009-11-03

    Traditionally, uncertainty in parameters is represented as probabilistic distributions and incorporated into groundwater flow and contaminant transport models. With the advent of newer uncertainty theories, it is now understood that stochastic methods cannot properly represent non-random uncertainties. In the groundwater flow and contaminant transport equations, uncertainty in some parameters may be random, whereas that of others may be non-random. The objective of this paper is to develop a fuzzy-stochastic partial differential equation (FSPDE) model to simulate conditions where both random and non-random uncertainties are involved in groundwater flow and solute transport. Three potential solution techniques, namely (a) transforming a probability distribution to a possibility distribution (Method I), whereby the FSPDE becomes a fuzzy partial differential equation (FPDE), (b) transforming a possibility distribution to a probability distribution (Method II), whereby the FSPDE becomes a stochastic partial differential equation (SPDE), and (c) the combination of Monte Carlo methods and FPDE solution techniques (Method III), are proposed and compared. The effects of these three methods on the predictive results are investigated using two case studies. The results show that the predictions obtained from Method II are a specific case of those obtained from Method I. When an exact probabilistic result is needed, Method II is suggested. As the loss or gain of information during a probability-possibility (or vice versa) transformation cannot be quantified, its influence on the predictive results is not known. Thus, Method III should probably be preferred for risk assessments.

  1. The price momentum of stock in distribution

    NASA Astrophysics Data System (ADS)

    Liu, Haijun; Wang, Longfei

    2018-02-01

    In this paper, a new momentum of a stock in distribution is proposed and applied to real investment. Firstly, assuming that a stock behaves as a multi-particle system, its share-exchange distribution and cost distribution are introduced. Secondly, an estimate of the share-exchange distribution is obtained from daily transaction data using the 3σ rule of the normal distribution. Meanwhile, an iterative method is given to estimate the cost distribution. Based on the cost distribution, a new momentum is proposed for the stock system. Thirdly, an empirical test is given to compare the new momentum with others using a contrarian strategy. The results show that the new momentum outperforms the others in many cases. Furthermore, the entropy of a stock is introduced according to its cost distribution.

  2. Reduction of CMIP5 models bias using Cumulative Distribution Function transform and impact on crops yields simulations across West Africa.

    NASA Astrophysics Data System (ADS)

    Moise Famien, Adjoua; Defrance, Dimitri; Sultan, Benjamin; Janicot, Serge; Vrac, Mathieu

    2017-04-01

    Different CMIP exercises show that simulations of current and future temperature and precipitation are complex, with a high degree of uncertainty. For example, the African monsoon system is not correctly simulated and most of the CMIP5 models underestimate the precipitation. Therefore, Global Climate Models (GCMs) show significant systematic biases that require bias correction before they can be used in impact studies. Several bias-correction methods have been developed over the years, increasingly relying on more complex statistical methods. The aim of this work is to show the interest of the CDFt (Cumulative Distribution Function transform, Michelangeli et al., 2009) method for reducing the bias of 29 CMIP5 GCMs over Africa and to assess the impact of bias-corrected data on crop yield predictions by the end of the 21st century. In this work, we apply the CDFt to daily data covering the period from 1950 to 2099 (Historical and RCP8.5) and we correct the climate variables (temperature, precipitation, solar radiation, wind) using the new daily database from the EU project WATer and global CHange (WATCH), available from 1979 to 2013, as reference data. The performance of the method is assessed in several cases. First, data are corrected based on different calibration periods and are compared, on the one hand, with observations to estimate the sensitivity of the method to the calibration period and, on the other hand, with another bias-correction method used in the ISIMIP project. We find that, whatever the calibration period used, CDFt corrects the mean state of the variables well and preserves their trends, as well as daily rainfall occurrence and intensity distributions. However, some differences appear when compared with the outputs obtained with the method used in ISIMIP and show that the quality of the correction is strongly related to the reference data. Secondly, we validate the bias-correction method with agronomic simulations (SARRA-H model, Kouressy et al., 2008) by comparison with FAO crop yield estimates over West Africa. Impact simulations show that the crop model is sensitive to input data. They also show decreasing crop yields by the end of this century. Michelangeli, P. A., Vrac, M., & Loukos, H. (2009). Probabilistic downscaling approaches: Application to wind cumulative distribution functions. Geophysical Research Letters, 36(11). Kouressy M, Dingkuhn M, Vaksmann M and Heinemann A B 2008: Adaptation to diverse semi-arid environments of sorghum genotypes having different plant type and sensitivity to photoperiod. Agric. Forest Meteorol., http://dx.doi.org/10.1016/j.agrformet.2007.09.009
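
    A simplified empirical quantile-mapping sketch in the spirit of CDF-based bias correction; it is not the exact CDF-t algorithm of Michelangeli et al. (2009), and all input arrays below are synthetic placeholders.

      import numpy as np

      def quantile_map(model_future, model_hist, obs_hist):
          """Map each future model value to the reference distribution through the
          quantile it occupies in the historical model distribution."""
          ranks = np.searchsorted(np.sort(model_hist), model_future) / float(len(model_hist))
          ranks = np.clip(ranks, 0.0, 1.0)
          return np.quantile(obs_hist, ranks)

      rng = np.random.default_rng(0)
      obs_hist = rng.gamma(2.0, 3.0, 10000)           # reference data over the calibration period
      model_hist = rng.gamma(2.0, 4.0, 10000) + 1.0   # biased GCM output, same period
      model_future = rng.gamma(2.0, 4.5, 10000) + 1.0 # biased GCM projection
      corrected = quantile_map(model_future, model_hist, obs_hist)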

  3. The Distribution of the Product Explains Normal Theory Mediation Confidence Interval Estimation.

    PubMed

    Kisbu-Sakarya, Yasemin; MacKinnon, David P; Miočević, Milica

    2014-05-01

    The distribution of the product has several useful applications. One of these applications is its use to form confidence intervals for the indirect effect as the product of 2 regression coefficients. The purpose of this article is to investigate how the moments of the distribution of the product explain normal theory mediation confidence interval coverage and imbalance. Values of the critical ratio for each random variable are used to demonstrate how the moments of the distribution of the product change across values of the critical ratio observed in research studies. Results of the simulation study showed that as skewness in absolute value increases, coverage decreases, and as skewness in absolute value and kurtosis increase, imbalance increases. The difference between testing the significance of the indirect effect using the normal theory versus the asymmetric distribution of the product is further illustrated with a real data example. This article is the first study to show the direct link between the distribution of the product and indirect effect confidence intervals and clarifies the results of previous simulation studies by showing why normal theory confidence intervals for indirect effects are often less accurate than those obtained from the asymmetric distribution of the product or from resampling methods.
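
    As a hedged illustration of the contrast discussed above, the sketch below compares a symmetric normal-theory interval for the indirect effect a*b with an interval obtained by simulating the distribution of the product; the coefficient estimates and standard errors are made up.

      import numpy as np

      a_hat, se_a = 0.40, 0.15      # path X -> M (illustrative)
      b_hat, se_b = 0.35, 0.12      # path M -> Y adjusted for X (illustrative)
      ab_hat = a_hat * b_hat

      # Normal-theory (Sobel-type) interval: symmetric around a*b
      se_ab = np.sqrt(a_hat**2 * se_b**2 + b_hat**2 * se_a**2)
      normal_ci = (ab_hat - 1.96 * se_ab, ab_hat + 1.96 * se_ab)

      # Distribution-of-the-product interval via simulation: typically asymmetric,
      # reflecting the skewness and kurtosis discussed above
      rng = np.random.default_rng(1)
      draws = rng.normal(a_hat, se_a, 100000) * rng.normal(b_hat, se_b, 100000)
      product_ci = tuple(np.percentile(draws, [2.5, 97.5]))
      print(normal_ci, product_ci)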

  4. Improving Transferability of Introduced Species’ Distribution Models: New Tools to Forecast the Spread of a Highly Invasive Seaweed

    PubMed Central

    Verbruggen, Heroen; Tyberghein, Lennert; Belton, Gareth S.; Mineur, Frederic; Jueterbock, Alexander; Hoarau, Galice; Gurgel, C. Frederico D.; De Clerck, Olivier

    2013-01-01

    The utility of species distribution models for applications in invasion and global change biology is critically dependent on their transferability between regions or points in time, respectively. We introduce two methods that aim to improve the transferability of presence-only models: density-based occurrence thinning and performance-based predictor selection. We evaluate the effect of these methods along with the impact of the choice of model complexity and geographic background on the transferability of a species distribution model between geographic regions. Our multifactorial experiment focuses on the notorious invasive seaweed Caulerpa cylindracea (previously Caulerpa racemosa var. cylindracea) and uses Maxent, a commonly used presence-only modeling technique. We show that model transferability is markedly improved by appropriate predictor selection, with occurrence thinning, model complexity and background choice having relatively minor effects. The data show that, if available, occurrence records from the native and invaded regions should be combined as this leads to models with high predictive power while reducing the sensitivity to choices made in the modeling process. The inferred distribution model of Caulerpa cylindracea shows the potential for this species to further spread along the coasts of Western Europe, western Africa and the south coast of Australia. PMID:23950789
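
    For context, a minimal occurrence-thinning sketch: it keeps at most one presence record per grid cell to reduce spatial sampling bias. The published density-based procedure is more elaborate, and the cell size and coordinates here are assumptions.

      import numpy as np

      def grid_thin(lon, lat, cell_deg=0.5):
          # Assign each record to a grid cell and keep the first record per cell
          cells = np.floor(np.column_stack([lon, lat]) / cell_deg).astype(int)
          _, keep = np.unique(cells, axis=0, return_index=True)
          return np.sort(keep)

      rng = np.random.default_rng(12)
      lon = rng.uniform(-10, 30, 500)    # illustrative occurrence longitudes
      lat = rng.uniform(30, 46, 500)     # illustrative occurrence latitudes
      idx = grid_thin(lon, lat)
      print(len(idx), "records kept of", len(lon))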

  5. Droplet size and velocity distributions for spray modelling

    NASA Astrophysics Data System (ADS)

    Jones, D. P.; Watkins, A. P.

    2012-01-01

    Methods for constructing droplet size distributions and droplet velocity profiles are examined as a basis for the Eulerian spray model proposed in Beck and Watkins (2002, 2003) [5,6]. Within the spray model, both distributions must be calculated at every control volume at every time-step where the spray is present, and valid distributions must be guaranteed. Results show that the Maximum Entropy formalism combined with the Gamma distribution satisfies these conditions for the droplet size distributions. Approximating the droplet velocity profile is shown to be considerably more difficult due to the fact that it does not have compact support. An exponential model with a constrained exponent offers plausible profiles.

  6. Monte Carlo simulation of depth-dose distributions in TLD-100 under 90Sr-90Y irradiation.

    PubMed

    Rodríguez-Villafuerte, M; Gamboa-deBuen, I; Brandan, M E

    1997-04-01

    In this work the depth-dose distribution in TLD-100 dosimeters under beta irradiation from a 90Sr-90Y source was investigated using the Monte Carlo method. Comparisons between the simulated data and experimental results showed that the depth-dose distribution is strongly affected by the different components of both the source and dosimeter holders due to the large number of electron scattering events.

  7. Measurement of dispersion of nanoparticles in a dense suspension by high-sensitivity low-coherence dynamic light scattering

    NASA Astrophysics Data System (ADS)

    Ishii, Katsuhiro; Nakamura, Sohichiro; Sato, Yuki

    2014-08-01

    High-sensitivity low-coherence DLS is applied to the measurement of the particle size distribution of pigments suspended in an ink. This method can be applied to extremely dense and turbid media without dilution. We show the temporal variation of the particle size distribution of thixotropic and sedimenting pigments due to aggregation, agglomeration, and sedimentation. Moreover, we demonstrate the influence of ink dilution on the particle size distribution.

  8. Self-adaptive demodulation for polarization extinction ratio in distributed polarization coupling.

    PubMed

    Zhang, Hongxia; Ren, Yaguang; Liu, Tiegen; Jia, Dagong; Zhang, Yimo

    2013-06-20

    A self-adaptive method for distributed polarization extinction ratio (PER) demodulation is demonstrated. It is characterized by a dynamic PER threshold coupling intensity (TCI) and a nonuniform PER iteration step length (ISL). Based on the preset PER calculation accuracy and the original coupling intensity distribution, the TCI and ISL can be made self-adaptive to determine the contributing coupling points inside the polarizing devices. The distributed PER is calculated by accumulating those coupling points automatically and selectively. Two different kinds of polarization-maintaining fibers are tested, and PERs are obtained after merely 3-5 iterations using the proposed method. Comparison experiments with a Thorlabs commercial instrument are also conducted, and the results show high consistency. In addition, an optimum preset PER calculation accuracy of 0.05 dB is obtained through many repeated experiments.

  9. Design of distributed systems of hydrolithosphere processes management. A synthesis of distributed management systems

    NASA Astrophysics Data System (ADS)

    Pershin, I. M.; Pervukhin, D. A.; Ilyushin, Y. V.; Afanaseva, O. V.

    2017-10-01

    The paper considers an important problem of designing distributed systems for the management of hydrolithosphere processes. The control actions on the hydrolithosphere processes under consideration are implemented by a set of extractive wells. The article presents a method for defining the approximating links used to describe the dynamic characteristics of hydrolithosphere processes. The structure of the distributed regulators used in the management systems for the processes considered is presented. The paper analyses the results of the synthesis of the distributed management system and of modelling the closed-loop control system with respect to the parameters of the hydrolithosphere process.

  10. Mathematical models and methods of assisting state subsidy distribution at the regional level

    NASA Astrophysics Data System (ADS)

    Bondarenko, Yu V.; Azarnova, T. V.; Kashirina, I. L.; Goroshko, I. V.

    2018-03-01

    One of the most common forms of state support in the world is subsidization. By providing direct financial support to businesses, local authorities get an opportunity to set certain performance targets. Successful achievement of such targets depends not only on the amount of the budgetary allocations, but also on the distribution mechanisms adopted by the regional authorities. Analysis of the existing mechanisms of subsidy distribution in Russian regions shows that in most cases the choice of the subsidy calculation formula and its parameters depends on the experts’ subjective opinion. The authors offer a new approach to assisting subsidy distribution at the regional level, which is based on mathematical models and methods that allow evaluating the influence of subsidy distribution on the region’s social and economic development. The results of the calculations were discussed with representatives of the regional administration, who confirmed their significance for decision-making in the sphere of state control.

  11. Unbiased free energy estimates in fast nonequilibrium transformations using Gaussian mixtures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Procacci, Piero

    2015-04-21

    In this paper, we present an improved method for obtaining unbiased estimates of the free energy difference between two thermodynamic states using the work distribution measured in nonequilibrium driven experiments connecting these states. The method is based on the assumption that any observed work distribution is given by a mixture of Gaussian distributions, whose normal components are identical in either direction of the nonequilibrium process, with weights regulated by the Crooks theorem. Using the prototypical example for the driven unfolding/folding of deca-alanine, we show that the predicted behavior of the forward and reverse work distributions, assuming a combination of only two Gaussian components with Crooks derived weights, explains surprisingly well the striking asymmetry in the observed distributions at fast pulling speeds. The proposed methodology opens the way for a perfectly parallel implementation of Jarzynski-based free energy calculations in complex systems.
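
    For context, the sketch below recalls two standard work-distribution estimators that the Gaussian-mixture approach generalizes: the Jarzynski exponential average and the second-order cumulant formula, which is exact when the work distribution is a single Gaussian. The work samples, temperature and units are synthetic assumptions; this is not the paper's mixture estimator.

      import numpy as np

      beta = 1.0 / 0.596                   # 1/kT in kcal/mol near 300 K (assumed units)
      rng = np.random.default_rng(2)
      work = rng.normal(12.0, 3.0, 5000)   # synthetic forward nonequilibrium work samples

      # Jarzynski exponential-average estimator (unbiased only in the infinite-sample limit)
      dF_jarzynski = -np.log(np.mean(np.exp(-beta * work))) / beta

      # Second-order cumulant estimate: dF = <W> - beta * var(W) / 2,
      # exact for a single Gaussian work distribution
      dF_gaussian = work.mean() - 0.5 * beta * work.var()
      print(dF_jarzynski, dF_gaussian)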

  12. Reducing Interpolation Artifacts for Mutual Information Based Image Registration

    PubMed Central

    Soleimani, H.; Khosravifard, M.A.

    2011-01-01

    Medical image registration methods that use mutual information as a similarity measure have been improved in recent decades. Mutual information is a basic concept of information theory which indicates the dependency of two random variables (or two images). In order to evaluate the mutual information of two images, their joint probability distribution is required. Several interpolation methods, such as Partial Volume (PV) and bilinear, are used to estimate the joint probability distribution. Both of these methods introduce artifacts into the mutual information function. The Partial Volume-Hanning window (PVH) and Generalized Partial Volume (GPV) methods were introduced to remove such artifacts. In this paper we show that the acceptable performance of these methods is not due to their kernel function but to the number of pixels incorporated in the interpolation. Since using more pixels requires a more complex and time-consuming interpolation process, we propose a new interpolation method which uses only four pixels (the same as PV and bilinear interpolation) and removes most of the artifacts. Experimental results on the registration of Computed Tomography (CT) images show the superiority of the proposed scheme. PMID:22606673
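
    A minimal sketch of mutual information computed from a joint intensity histogram, the quantity whose interpolation-induced artifacts are discussed above; the images and bin count are illustrative only.

      import numpy as np

      def mutual_information(img_a, img_b, bins=64):
          # Joint probability distribution estimated from the 2D histogram
          joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
          p_ab = joint / joint.sum()
          p_a = p_ab.sum(axis=1, keepdims=True)
          p_b = p_ab.sum(axis=0, keepdims=True)
          nz = p_ab > 0                      # avoid log(0)
          return np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz]))

      rng = np.random.default_rng(3)
      fixed = rng.integers(0, 256, (128, 128)).astype(float)
      moving = fixed + rng.normal(0, 10, fixed.shape)   # crude stand-in for a registered image
      print(mutual_information(fixed, moving))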

  13. Impact of Bioreactor Environment and Recovery Method on the Profile of Bacterial Populations from Water Distribution Systems.

    PubMed

    Luo, Xia; Jellison, Kristen L; Huynh, Kevin; Widmer, Giovanni

    2015-01-01

    Multiple rotating annular reactors were seeded with biofilms flushed from water distribution systems to assess (1) whether biofilms grown in bioreactors are representative of biofilms flushed from the water distribution system in terms of bacterial composition and diversity, and (2) whether the biofilm sampling method affects the population profile of the attached bacterial community. Biofilms were grown in bioreactors until thickness stabilized (9 to 11 weeks) and harvested from reactor coupons by sonication, stomaching, bead-beating, and manual scraping. High-throughput sequencing of 16S rRNA amplicons was used to profile bacterial populations from flushed biofilms seeded into bioreactors as well as biofilms recovered from bioreactor coupons by different methods. β diversity between flushed and reactor biofilms was compared to β diversity between (i) biofilms harvested from different reactors and (ii) biofilms harvested by different methods from the same reactor. These analyses showed that the average diversity between flushed and bioreactor biofilms was double the diversity between biofilms from different reactors operated in parallel. The diversity between bioreactors was larger than the diversity associated with different biofilm recovery methods. Compared to other experimental variables, the method used to recover biofilms had a negligible impact on the outcome of water biofilm analyses based on 16S amplicon sequencing. Results from this study show that biofilms grown in reactors over 9 to 11 weeks are not representative models of the microbial populations flushed from a distribution system. Furthermore, the bacterial population profiles of biofilms grown in replicate reactors from the same flushed water are likely to diverge. However, four common sampling protocols, which differ with respect to disruption of bacterial cells, provide similar information with respect to the 16S rRNA population profile of the biofilm community.

  14. Analysis of biomolecular solvation sites by 3D-RISM theory.

    PubMed

    Sindhikara, Daniel J; Hirata, Fumio

    2013-06-06

    We derive, implement, and apply equilibrium solvation site analysis for biomolecules. Our method utilizes 3D-RISM calculations to quickly obtain equilibrium solvent distributions without the need for simulation and without the limits of solvent sampling. Our analysis of these distributions extracts the highest-likelihood poses of solvent as well as localized entropies, enthalpies, and solvation free energies. We demonstrate our method on a structure of HIV-1 protease where excellent structural and thermodynamic data are available for comparison. Our results, obtained within minutes, show systematic agreement with available experimental data. Further, our results are in good agreement with established simulation-based solvent analysis methods. This method can be used not only for visual analysis of active site solvation but also for virtual screening methods and experimental refinement.

  15. Controlling bias and inflation in epigenome- and transcriptome-wide association studies using the empirical null distribution.

    PubMed

    van Iterson, Maarten; van Zwet, Erik W; Heijmans, Bastiaan T

    2017-01-27

    We show that epigenome- and transcriptome-wide association studies (EWAS and TWAS) are prone to significant inflation and bias of test statistics, an unrecognized phenomenon introducing spurious findings if left unaddressed. Neither GWAS-based methodology nor state-of-the-art confounder adjustment methods completely remove bias and inflation. We propose a Bayesian method to control bias and inflation in EWAS and TWAS based on estimation of the empirical null distribution. Using simulations and real data, we demonstrate that our method maximizes power while properly controlling the false positive rate. We illustrate the utility of our method in large-scale EWAS and TWAS meta-analyses of age and smoking.
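
    A simplified stand-in for empirical-null control of bias and inflation: the null mean and standard deviation are estimated robustly from the bulk of the test statistics via the median and MAD, then used to rescale the statistics. The published method is Bayesian and more refined; the z-statistics below are synthetic.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)
      # Synthetic EWAS-like z-statistics: mostly null but biased and inflated, plus a few signals
      z = np.concatenate([rng.normal(0.2, 1.3, 49500), rng.normal(5.0, 1.0, 500)])

      bias = np.median(z)                                        # empirical null mean
      inflation = stats.median_abs_deviation(z, scale="normal")  # empirical null SD
      z_corrected = (z - bias) / inflation
      p_corrected = 2 * stats.norm.sf(np.abs(z_corrected))
      print(bias, inflation)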

  16. Planning of distributed generation in distribution network based on improved particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Li, Jinze; Qu, Zhi; He, Xiaoyang; Jin, Xiaoming; Li, Tie; Wang, Mingkai; Han, Qiu; Gao, Ziji; Jiang, Feng

    2018-02-01

    Large-scale access of distributed power can relieve current environmental pressure but, at the same time, increases the complexity and uncertainty of the overall distribution system. Rational planning of distributed power can effectively improve the system voltage level. To this end, the specific impact on distribution network power quality caused by the access of typical distributed power was analyzed, and an improved particle swarm optimization algorithm (IPSO) was proposed which improves the learning factors and the inertia weight to enhance the local and global search performance of the algorithm, and which can solve distributed generation planning for the distribution network. Results show that the proposed method can effectively reduce the system network loss and improve the economic performance of system operation with distributed generation.
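
    A generic particle swarm optimization sketch with a linearly decreasing inertia weight and time-varying learning factors, the kind of modification the IPSO above refers to; the objective function is a placeholder, not a distribution-network loss model.

      import numpy as np

      def ipso(objective, dim, n_particles=30, iters=200, bounds=(-10, 10)):
          rng = np.random.default_rng(5)
          lo, hi = bounds
          x = rng.uniform(lo, hi, (n_particles, dim))
          v = np.zeros((n_particles, dim))
          pbest, pbest_val = x.copy(), np.apply_along_axis(objective, 1, x)
          gbest = pbest[pbest_val.argmin()].copy()
          for t in range(iters):
              w = 0.9 - 0.5 * t / iters        # inertia weight decreases from 0.9 to 0.4
              c1 = 2.5 - 1.5 * t / iters       # cognitive learning factor decreases
              c2 = 1.0 + 1.5 * t / iters       # social learning factor increases
              r1, r2 = rng.random((2, n_particles, dim))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
              x = np.clip(x + v, lo, hi)
              val = np.apply_along_axis(objective, 1, x)
              better = val < pbest_val
              pbest[better], pbest_val[better] = x[better], val[better]
              gbest = pbest[pbest_val.argmin()].copy()
          return gbest, pbest_val.min()

      # Example: minimize the sphere function as a stand-in for network loss
      print(ipso(lambda p: np.sum(p**2), dim=5))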

  17. L-moments and TL-moments of the generalized lambda distribution

    USGS Publications Warehouse

    Asquith, W.H.

    2007-01-01

    The 4-parameter generalized lambda distribution (GLD) is a flexible distribution capable of mimicking the shapes of many distributions and data samples, including those with heavy tails. The method of L-moments and the recently developed method of trimmed L-moments (TL-moments) are attractive techniques for parameter estimation for heavy-tailed distributions for which the L- and TL-moments have been defined. Analytical solutions for the first five L- and TL-moments in terms of GLD parameters are derived. Unfortunately, numerical methods are needed to compute the parameters from the L- or TL-moments. Algorithms are suggested for parameter estimation. Application of the GLD using both L- and TL-moment parameter estimates from example data is demonstrated, and a comparison with the L-moment fit of the 4-parameter kappa distribution is made. A small simulation study of the 98th percentile (far-right tail) is conducted for a heavy-tail GLD with high-outlier contamination. The simulations show, with respect to estimation of the 98th-percentile quantile, that TL-moments are less biased (more robust) in the presence of high-outlier contamination. However, the robustness comes at the expense of considerably more sampling variability. © 2006 Elsevier B.V. All rights reserved.
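
    For reference, a hedged sketch of computing the first four sample L-moments directly from probability-weighted moments; this is the standard estimator, not the GLD- or TL-moment-specific algorithms of the paper.

      import numpy as np
      from math import comb

      def sample_lmoments(data):
          x = np.sort(np.asarray(data, dtype=float))
          n = len(x)
          # Unbiased probability-weighted moments b_0..b_3 (0-based order statistics)
          b = [sum(comb(j, r) / comb(n - 1, r) * x[j] for j in range(r, n)) / n
               for r in range(4)]
          l1 = b[0]                                # L-location (the mean)
          l2 = 2 * b[1] - b[0]                     # L-scale
          l3 = 6 * b[2] - 6 * b[1] + b[0]
          l4 = 20 * b[3] - 30 * b[2] + 12 * b[1] - b[0]
          return l1, l2, l3 / l2, l4 / l2          # mean, L-scale, L-skewness, L-kurtosis

      rng = np.random.default_rng(6)
      print(sample_lmoments(rng.pareto(3.0, 5000)))   # heavy-tailed example sample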

  18. Modelling road accident blackspots data with the discrete generalized Pareto distribution.

    PubMed

    Prieto, Faustino; Gómez-Déniz, Emilio; Sarabia, José María

    2014-10-01

    This study shows how road traffic networks events, in particular road accidents on blackspots, can be modelled with simple probabilistic distributions. We considered the number of crashes and the number of fatalities on Spanish blackspots in the period 2003-2007, from Spanish General Directorate of Traffic (DGT). We modelled those datasets, respectively, with the discrete generalized Pareto distribution (a discrete parametric model with three parameters) and with the discrete Lomax distribution (a discrete parametric model with two parameters, and particular case of the previous model). For that, we analyzed the basic properties of both parametric models: cumulative distribution, survival, probability mass, quantile and hazard functions, genesis and rth-order moments; applied two estimation methods of their parameters: the μ and (μ+1) frequency method and the maximum likelihood method; used two goodness-of-fit tests: Chi-square test and discrete Kolmogorov-Smirnov test based on bootstrap resampling; and compared them with the classical negative binomial distribution in terms of absolute probabilities and in models including covariates. We found that those probabilistic models can be useful to describe the road accident blackspots datasets analyzed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. A novel method for the investigation of liquid/liquid distribution coefficients and interface permeabilities applied to the water-octanol-drug system.

    PubMed

    Stein, Paul C; di Cagno, Massimiliano; Bauer-Brandl, Annette

    2011-09-01

    In this work, a new, accurate and convenient technique for the measurement of distribution coefficients and membrane permeabilities based on nuclear magnetic resonance (NMR) is described. This method is a novel implementation of localized NMR spectroscopy and enables the simultaneous analysis of the drug content in the octanol and in the water phase without separation. For validation of the method, the distribution coefficients at pH = 7.4 of four active pharmaceutical ingredients (APIs), namely ibuprofen, ketoprofen, nadolol, and paracetamol (acetaminophen), were determined using a classical approach. These results were compared to the NMR experiments described in this work. For all substances, the respective distribution coefficients found with the two techniques coincided very well. Furthermore, the NMR experiments make it possible to follow the distribution of the drug between the phases as a function of position and time. Our results show that the technique, which is available on any modern NMR spectrometer, is well suited to the measurement of distribution coefficients. The experiments also provide new insight into the dynamics of the water-octanol interface itself and permit measurement of the interface permeability.

  20. [Altitudinal patterns of species richness and species range size of vascular plants in Xiaolong- shan Reserve of Qinling Mountain: a test of Rapoport' s rule].

    PubMed

    Zheng, Zhi; Gong, Da-Jie; Sun, Cheng-Xiang; Li, Xiao-Jun; Li, Wan-Jiang

    2014-09-01

    Altitudinal patterns of species richness and species range size and their underlying mechanisms have long been a key topic in biogeography and biodiversity research. Rapoport's rule states that species richness gradually declines with increasing altitude, while species ranges become larger. Using an altitude-distribution database from Xiaolongshan Reserve, this study explored the altitudinal patterns of vascular plant species richness and species range in Qinling Xiaolongshan Reserve, examined the relationships between species richness and their distributional mid-points in altitudinal bands for different floras, taxonomic units and growth forms, and tested Rapoport's rule using Stevens' method, Pagel's method, the mid-point method and the cross-species method. The results showed that the species richness of vascular plants, except small-range species, showed a unimodal pattern along the altitudinal gradient in Qinling Xiaolongshan Reserve, and the highest proportions of small-range species were found at the lower and the higher altitudinal bands. Depending on the assemblage and the method of examination, the relationships between species range sizes and altitude differed. Higher taxonomic units were more likely to support Rapoport's rule, which was related to differences in the niches that the different taxonomic units occupy. The mean species range size of angiosperms showed a unimodal pattern along the altitude, while those of the gymnosperms and pteridophytes showed no clear pattern. The mean species range size of the climbers increased with altitude, while that of the shrubs, which can adapt to different environmental conditions, was not sensitive to the change of altitude. Pagel's method was more likely to support Rapoport's rule, followed by Stevens' method. On the contrary, due to the mid-domain effect, the test using the mid-point method showed that the mean species range size varied in a unimodal pattern along the altitude, which did not support Rapoport's rule, and because of the scattered-point effect, the explanatory power of the cross-species method was much lower.

  1. Improved mapping of radio sources from VLBI data by least-square fit

    NASA Technical Reports Server (NTRS)

    Rodemich, E. R.

    1985-01-01

    A method is described for producing improved mapping of radio sources from Very Long Baseline Interferometry (VLBI) data. The method described is more direct than existing Fourier methods, is often more accurate, and runs at least as fast. The visibility data are modeled here, as in existing methods, as a function of the unknown brightness distribution and the unknown antenna gains and phases. These unknowns are chosen so that the resulting function values are as near as possible to the observed values. If the deviation of the fit from the observed values is used to measure the closeness of this fit, one is led to the problem of minimizing a certain function of all the unknown parameters. This minimization problem cannot be solved directly, but it can be attacked by iterative methods which are shown to converge automatically to the minimum with no user intervention. The resulting brightness distribution will furnish the best fit to the data among all brightness distributions of given resolution.

  2. Pore size distribution calculation from 1H NMR signal and N2 adsorption-desorption techniques

    NASA Astrophysics Data System (ADS)

    Hassan, Jamal

    2012-09-01

    The pore size distribution (PSD) of the nano-material MCM-41 is determined using two different approaches: N2 adsorption-desorption and the 1H NMR signal of water confined in the silica nano-pores of MCM-41. The first approach is based on the recently modified Kelvin equation [J.V. Rocha, D. Barrera, K. Sapag, Top. Catal. 54 (2011) 121-134], which deals with the known underestimation of pore size distribution for mesoporous materials such as MCM-41 by introducing a correction factor to the classical Kelvin equation. The second method employs the Gibbs-Thomson equation, using NMR, for the melting point depression of a liquid in confined geometries. The results show that both approaches give similar pore size distributions to some extent, and that the NMR technique can be considered an alternative direct method to obtain quantitative results, especially for mesoporous materials. The pore diameters estimated for the nano-material used in this study were about 35 and 38 Å for the modified Kelvin and NMR methods, respectively. A comparison between these methods and the classical Kelvin equation is also presented.
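
    A hedged sketch of a Gibbs-Thomson style conversion from NMR melting-point depression to pore diameter; the calibration constant below is an assumed round number for water in silica, not a value from the paper.

      import numpy as np

      K_GT = 580.0                             # K*Angstrom, assumed Gibbs-Thomson constant
      delta_T = np.array([10.0, 15.0, 20.0])   # observed melting-point depressions in K (illustrative)
      pore_diameter = K_GT / delta_T           # d = K_GT / delta_T, in Angstrom
      print(pore_diameter)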

  3. Material identification based on electrostatic sensing technology

    NASA Astrophysics Data System (ADS)

    Liu, Kai; Chen, Xi; Li, Jingnan

    2018-04-01

    When a robot travels on the surface of different media, the uncertainty of the medium seriously affects the autonomous action of the robot. In this paper, the distribution characteristics of multiple electrostatic charges on the surface of materials are detected in order to improve the accuracy of existing electrostatic signal material identification methods, which is of great significance in helping the robot optimize its control algorithm. Based on the electrostatic signal material identification method proposed in earlier work, a multi-channel detection circuit is used to obtain the electrostatic charge distribution at different positions on the material surface, weights are introduced into the eigenvalue matrix, and the weight distribution is optimized by an evolutionary algorithm, which makes the eigenvalue matrix reflect the surface charge distribution characteristics of the material more accurately. The matrix is used as the input of a k-Nearest Neighbor (kNN) classification algorithm to classify the dielectric materials. The experimental results show that the proposed method can significantly improve the recognition rate of existing electrostatic signal material recognition methods.
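
    An illustrative kNN classification of surface materials from multi-channel electrostatic features; the feature weights stand in for those tuned by the evolutionary algorithm, and all data are synthetic.

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier

      rng = np.random.default_rng(7)
      n_channels = 8
      # Synthetic charge-distribution features for three material classes
      X = np.vstack([rng.normal(m, 0.5, (50, n_channels)) for m in (0.0, 1.0, 2.0)])
      y = np.repeat([0, 1, 2], 50)

      weights = rng.uniform(0.5, 1.5, n_channels)   # stand-in for evolved channel weights
      knn = KNeighborsClassifier(n_neighbors=5)
      knn.fit(X * weights, y)                       # weight the feature matrix before kNN

      sample = rng.normal(1.0, 0.5, (1, n_channels))
      print(knn.predict(sample * weights))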

  4. A new method to estimate local pitch angles in spiral galaxies: Application to spiral arms and feathers in M81 and M51

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Puerari, Ivânio; Elmegreen, Bruce G.; Block, David L., E-mail: puerari@inaoep.mx

    2014-12-01

    We examine 8 μm IRAC images of the grand design two-arm spiral galaxies M81 and M51 using a new method whereby pitch angles are locally determined as a function of scale and position, in contrast to traditional Fourier transform spectral analyses which fit to average pitch angles for whole galaxies. The new analysis is based on a correlation between pieces of a galaxy in circular windows of (lnR,θ) space and logarithmic spirals with various pitch angles. The diameter of the windows is varied to study different scales. The result is a best-fit pitch angle to the spiral structure as a function of position and scale, or a distribution function of pitch angles as a function of scale for a given galactic region or area. We apply the method to determine the distribution of pitch angles in the arm and interarm regions of these two galaxies. In the arms, the method reproduces the known pitch angles for the main spirals on a large scale, but also shows higher pitch angles on smaller scales resulting from dust feathers. For the interarms, there is a broad distribution of pitch angles representing the continuation and evolution of the spiral arm feathers as the flow moves into the interarm regions. Our method shows a multiplicity of spiral structures on different scales, as expected from gas flow processes in a gravitating, turbulent and shearing interstellar medium. We also present results for M81 using classical 1D and 2D Fourier transforms, together with a new correlation method, which shows good agreement with conventional 2D Fourier transforms.

  5. Numerical simulation of thermal stress distributions in Czochralski-grown silicon crystals

    NASA Astrophysics Data System (ADS)

    Kumar, M. Avinash; Srinivasan, M.; Ramasamy, P.

    2018-04-01

    Numerical simulation is one of the important tools in the investigation and optimization of single-crystal silicon grown by the Czochralski (Cz) method. A 2D steady global heat transfer model was used to investigate the temperature distribution and the thermal stress distributions at a particular crystal position during the Cz growth process. The computation determines thermal stresses such as the von Mises stress and the maximum shear stress distribution along the grown crystal and shows a possible reason for dislocation formation in the Cz-grown single-crystal silicon.

  6. Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'.

    PubMed

    de Nijs, Robin

    2015-07-21

    In order to be able to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling, but also two direct redrawing methods were investigated. The redrawing methods were based on a Poisson and a Gaussian distribution. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all methods and compared to the theoretical values for a Poisson distribution. The statistical parameters showed the same behavior as in the original note and showed the superiority of the Poisson resampling method. Rounding off before saving the half-count image had a severe impact on counting statistics for counts below 100. Only Poisson resampling was not affected by this, while Gaussian redrawing was less affected by it than Poisson redrawing. Poisson resampling is the method of choice when simulating half-count (or less) images from full-count images. It correctly simulates the statistical properties, also in the case of rounding off of the images.
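
    A numerical sketch of the three half-count simulation approaches compared above: binomial thinning (Poisson resampling) versus redrawing from Poisson or Gaussian distributions; the flood-source image is synthetic.

      import numpy as np

      rng = np.random.default_rng(8)
      full = rng.poisson(50.0, (256, 256))                 # synthetic full-count image

      half_resampled = rng.binomial(full, 0.5)             # Poisson resampling (thinning)
      half_poisson = rng.poisson(full / 2.0)               # Poisson redrawing
      half_gauss = np.maximum(rng.normal(full / 2.0, np.sqrt(full / 2.0)), 0)  # Gaussian redrawing

      for name, img in [("resample", half_resampled), ("poisson", half_poisson), ("gauss", half_gauss)]:
          ratio_mean = img.mean() / full.mean()            # expect ~0.5
          ratio_std = img.std() / full.std()               # expect ~1/sqrt(2) for Poisson statistics
          print(name, ratio_mean, ratio_std)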

  7. Type I and II β-turns prediction using NMR chemical shifts.

    PubMed

    Wang, Ching-Cheng; Lai, Wen-Chung; Chuang, Woei-Jer

    2014-07-01

    A method for predicting type I and II β-turns using nuclear magnetic resonance (NMR) chemical shifts is proposed. Isolated β-turn chemical-shift data were collected from 1,798 protein chains. One-dimensional statistical analyses of the chemical-shift data of three classes of β-turns (types I, II, and VIII) showed different distributions at the four positions, (i) to (i + 3). Considering the central two residues of type I β-turns, the mean values of Cο, Cα, H(N), and N(H) chemical shifts were generally (i + 1) > (i + 2). The mean values of Cβ and Hα chemical shifts were (i + 1) < (i + 2). The distributions of the central two residues in type II and VIII β-turns were also distinguishable by the trends of chemical shift values. Two-dimensional cluster analyses of the chemical-shift data show the positional distributions more clearly. Based on these propensities of chemical shifts classified as a function of position, rules were derived using scoring matrices for four consecutive residues to predict type I and II β-turns. The proposed method achieves an overall prediction accuracy of 83.2 and 84.2% with Matthews correlation coefficient values of 0.317 and 0.632 for type I and II β-turns, respectively, indicating its higher accuracy for type II turn prediction. The results show that it is feasible to use NMR chemical shifts to predict β-turn types in proteins. The proposed method can be incorporated into other chemical-shift-based protein secondary structure prediction methods.

  8. GIS-based poverty and population distribution analysis in China

    NASA Astrophysics Data System (ADS)

    Cui, Jing; Wang, Yingjie; Yan, Hong

    2009-07-01

    Geographically, poverty status is not only related to socio-economic factors but is also strongly affected by the geographical environment. In this paper, a GIS-based poverty and population distribution analysis method is introduced to reveal their regional differences. More than 100,000 poor villages and 592 national key poor counties are chosen for the analysis. The results show that poverty distribution tends to concentrate in most of west China and the mountainous rural areas of mid China. Furthermore, the fifth census data are overlaid on those poor areas in order to capture the internal diversity of their socio-economic characteristics. By overlaying poverty-related socio-economic parameters, such as sex ratio, illiteracy, education level, percentage of ethnic minorities, and family composition, the findings show that poverty distribution is strongly correlated with high illiteracy rates, a high percentage of ethnic minorities, and larger family size.

  9. Continuous-variable measurement-device-independent quantum key distribution with photon subtraction

    NASA Astrophysics Data System (ADS)

    Ma, Hong-Xin; Huang, Peng; Bai, Dong-Yun; Wang, Shi-Yu; Bao, Wan-Su; Zeng, Gui-Hua

    2018-04-01

    It has been found that non-Gaussian operations can be applied to increase and distill entanglement between Gaussian entangled states. We show the successful use of a non-Gaussian operation, in particular the photon subtraction operation, in the continuous-variable measurement-device-independent quantum key distribution (CV-MDI-QKD) protocol. The proposed method can be implemented based on existing technologies. Security analysis shows that the photon subtraction operation can remarkably increase the maximal transmission distance of the CV-MDI-QKD protocol, which precisely makes up for the shortcoming of the original CV-MDI-QKD protocol, and that one-photon subtraction has the best performance. Moreover, the proposed protocol provides a feasible method for the experimental implementation of the CV-MDI-QKD protocol.

  10. Adaptive consensus of scale-free multi-agent system by randomly selecting links

    NASA Astrophysics Data System (ADS)

    Mou, Jinping; Ge, Huafeng

    2016-06-01

    This paper investigates an adaptive consensus problem for distributed scale-free multi-agent systems (SFMASs) with randomly selected links, where the degree of each node follows a power-law distribution. The random link selection is based on the assumption that every agent decides, with a certain probability, to select links among its neighbours according to the received data. Accordingly, a novel consensus protocol with the range of the received data is developed, and each node updates its state according to the protocol. Using an iterative method and the Cauchy inequality, the theoretical analysis shows that all errors among agents converge to zero, and several consensus criteria are obtained. A numerical example shows the reliability of the proposed methods.
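
    A toy illustration of a distributed consensus update on a fixed random graph; the adaptive, probabilistic link selection of the SFMAS protocol is reduced here to random neighbour sampling, and all parameters are illustrative.

      import numpy as np

      rng = np.random.default_rng(9)
      n = 20
      adj = rng.random((n, n)) < 0.2             # random neighbour structure (stand-in for scale-free)
      adj = np.triu(adj, 1)
      adj = adj | adj.T

      x = rng.uniform(0, 10, n)                  # initial agent states
      for _ in range(200):
          x_new = x.copy()
          for i in range(n):
              nbrs = np.flatnonzero(adj[i])
              if nbrs.size == 0:
                  continue
              # Each agent randomly selects a subset of its neighbours' received data
              chosen = rng.choice(nbrs, size=max(1, nbrs.size // 2), replace=False)
              x_new[i] += 0.3 * np.mean(x[chosen] - x[i])   # move toward the selected neighbours
          x = x_new
      print(np.ptp(x))   # spread of states; should be near zero once consensus is reached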

  11. LASER BIOLOGY AND MEDICINE: Light scattering study of rheumatoid arthritis

    NASA Astrophysics Data System (ADS)

    Beuthan, J.; Netz, U.; Minet, O.; Klose, Annerose D.; Hielscher, A. H.; Scheel, A.; Henniger, J.; Müller, G.

    2002-11-01

    The distribution of light scattered by finger joints is studied in the near-IR region. It is shown that variations in the optical parameters of the tissue (scattering coefficient μs, absorption coefficient μa, and anisotropy factor g) depend on the presence of rheumatoid arthritis (RA). At the first stage, the distribution of scattered light was measured in diaphanoscopic experiments. The convolution of a Gaussian error function with the scattering phase function proved to be a good approximation of the data obtained. Then, a new method was developed for the reconstruction of the distribution of optical parameters in the finger cross section. Model tests of the quality of this reconstruction method show good results.

  12. Finding differentially expressed genes in high dimensional data: Rank based test statistic via a distance measure.

    PubMed

    Mathur, Sunil; Sadana, Ajit

    2015-12-01

    We present a rank-based test statistic for the identification of differentially expressed genes using a distance measure. The proposed test statistic is highly robust against extreme values and does not assume the distribution of the parent population. Simulation studies show that the proposed test is more powerful than some of the commonly used methods, such as the paired t-test, the Wilcoxon signed rank test, and significance analysis of microarrays (SAM), under certain non-normal distributions. The asymptotic distribution of the test statistic and the p-value function are discussed. The application of the proposed method is shown using a real-life data set. © The Author(s) 2011.

  13. A Routing Path Construction Method for Key Dissemination Messages in Sensor Networks

    PubMed Central

    Moon, Soo Young; Cho, Tae Ho

    2014-01-01

    Authentication is an important security mechanism for detecting forged messages in a sensor network. Each cluster head (CH) in dynamic key distribution schemes forwards a key dissemination message that contains encrypted authentication keys within its cluster to next-hop nodes for the purpose of authentication. The forwarding path of the key dissemination message strongly affects the number of nodes to which the authentication keys in the message are actually distributed. We propose a routing method for the key dissemination messages to increase the number of nodes that obtain the authentication keys. In the proposed method, each node selects next-hop nodes to which the key dissemination message will be forwarded based on secret key indexes, the distance to the sink node, and the energy consumption of its neighbor nodes. The experimental results show that the proposed method can increase by 50–70% the number of nodes to which authentication keys in each cluster are distributed compared to geographic and energy-aware routing (GEAR). In addition, the proposed method can detect false reports earlier by using the distributed authentication keys, and it consumes less energy than GEAR when the false traffic ratio (FTR) is ≥10%. PMID:25136649

  14. An analytical method based on multipole moment expansion to calculate the flux distribution in Gammacell-220

    NASA Astrophysics Data System (ADS)

    Rezaeian, P.; Ataenia, V.; Shafiei, S.

    2017-12-01

    In this paper, the flux of photons inside the irradiation cell of the Gammacell-220 is calculated using an analytical method based on a multipole moment expansion. The flux of the photons inside the irradiation cell is expressed as a function of the monopole, dipole and quadrupole moments in the Cartesian coordinate system. For the source distribution of the Gammacell-220, the values of the multipole moments are obtained by direct integration. To validate the presented method, the flux distribution inside the irradiation cell was also determined using MCNP simulations as well as experimental measurements. To measure the flux inside the irradiation cell, Amber dosimeters were employed. The calculated values of the flux were in agreement with the values obtained by simulations and measurements, especially in the central zones of the irradiation cell. In order to show that the present method is a good approximation for determining the flux in the irradiation cell, the values of the multipole moments were also obtained by fitting the simulation and experimental data using the Levenberg-Marquardt algorithm. The present method leads to reasonable results for any source distribution, even one without symmetry, which makes it a powerful tool for source load planning.
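
    A generic sketch of Cartesian multipole moments of a discrete source distribution computed by direct summation; the actual flux expansion and the Gammacell-220 source geometry are not reproduced, and the activities and positions are illustrative.

      import numpy as np

      rng = np.random.default_rng(10)
      activity = rng.uniform(0.8, 1.2, 48)                       # activities of discrete source elements
      theta = np.linspace(0, 2 * np.pi, 48, endpoint=False)
      pos = np.column_stack([10 * np.cos(theta), 10 * np.sin(theta), rng.uniform(-5, 5, 48)])

      monopole = activity.sum()
      dipole = (activity[:, None] * pos).sum(axis=0)
      r2 = np.sum(pos**2, axis=1)
      # Traceless Cartesian quadrupole tensor: Q_ij = sum_k a_k (3 x_i x_j - r^2 delta_ij)
      quadrupole = 3 * np.einsum('k,ki,kj->ij', activity, pos, pos) - np.eye(3) * np.sum(activity * r2)
      print(monopole, dipole, quadrupole)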

  15. Estimation of the Vertical Distribution of Radiocesium in Soil on the Basis of the Characteristics of Gamma-Ray Spectra Obtained via Aerial Radiation Monitoring Using an Unmanned Helicopter.

    PubMed

    Ochi, Kotaro; Sasaki, Miyuki; Ishida, Mutsushi; Hamamoto, Shoichiro; Nishimura, Taku; Sanada, Yukihisa

    2017-08-17

    After the Fukushima Daiichi Nuclear Power Plant accident, the vertical distribution of radiocesium in soil has been investigated to better understand the behavior of radiocesium in the environment. The typical method used for measuring the vertical distribution of radiocesium is troublesome because it requires collection and measurement of the activity of soil samples. In this study, we established a method of estimating the vertical distribution of radiocesium by focusing on the characteristics of gamma-ray spectra obtained via aerial radiation monitoring using an unmanned helicopter. The estimates are based on actual measurement data collected at an extended farm. In this method, the change in the ratio of direct gamma rays to scattered gamma rays at various depths in the soil was utilized to quantify the vertical distribution of radiocesium. The results show a positive correlation between the abovementioned ratio and the actual vertical distributions of radiocesium measured in the soil samples. A vertical distribution map was created on the basis of this ratio using a simple equation derived from the abovementioned correlation. This technique can provide a novel approach for effective selection of high-priority areas that require decontamination.

  16. A method to describe inelastic gamma field distribution in neutron gamma density logging.

    PubMed

    Zhang, Feng; Zhang, Quanying; Liu, Juntao; Wang, Xinguang; Wu, He; Jia, Wenbao; Ti, Yongzhou; Qiu, Fei; Zhang, Xiaoyang

    2017-11-01

    Pulsed neutron gamma density logging (NGD) is of great significance for radioprotection and density measurement in LWD; however, current methods have difficulty with quantitative calculation and single-factor analysis of the inelastic gamma field distribution. In order to clarify the NGD mechanism, a new method is developed to describe the inelastic gamma field distribution. Based on fast-neutron scattering and gamma attenuation, the inelastic gamma field distribution is characterized by the inelastic scattering cross section, the fast-neutron scattering free path, the formation density and other parameters, and the contribution of the formation parameters to the field distribution is quantitatively analyzed. The results show that the contribution of density attenuation is opposite to those of the inelastic scattering cross section and the fast-neutron scattering free path, and that as the detector spacing increases, density attenuation gradually plays a dominant role in the gamma field distribution, which means a large detector spacing is more favorable for the density measurement. In addition, the relationship between density sensitivity and detector spacing was studied on the basis of this gamma field distribution, and the spacings of the near and far gamma-ray detectors were determined accordingly. The research provides theoretical guidance for tool parameter design and density determination in the pulsed neutron gamma density logging technique. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Estimation of the Vertical Distribution of Radiocesium in Soil on the Basis of the Characteristics of Gamma-Ray Spectra Obtained via Aerial Radiation Monitoring Using an Unmanned Helicopter

    PubMed Central

    Ochi, Kotaro; Sasaki, Miyuki; Ishida, Mutsushi; Sanada, Yukihisa

    2017-01-01

    After the Fukushima Daiichi Nuclear Power Plant accident, the vertical distribution of radiocesium in soil has been investigated to better understand the behavior of radiocesium in the environment. The typical method used for measuring the vertical distribution of radiocesium is troublesome because it requires collection and measurement of the activity of soil samples. In this study, we established a method of estimating the vertical distribution of radiocesium by focusing on the characteristics of gamma-ray spectra obtained via aerial radiation monitoring using an unmanned helicopter. The estimates are based on actual measurement data collected at an extended farm. In this method, the change in the ratio of direct gamma rays to scattered gamma rays at various depths in the soil was utilized to quantify the vertical distribution of radiocesium. The results show a positive correlation between the abovementioned ratio and the actual vertical distributions of radiocesium measured in the soil samples. A vertical distribution map was created on the basis of this ratio using a simple equation derived from the abovementioned correlation. This technique can provide a novel approach for effective selection of high-priority areas that require decontamination. PMID:28817098

  18. Multi-level structure in the large scale distribution of optically luminous galaxies

    NASA Astrophysics Data System (ADS)

    Deng, Xin-fa; Deng, Zu-gan; Liu, Yong-zhen

    1992-04-01

    Fractal dimensions in the large scale distribution of galaxies have been calculated with the method given by Wen et al. [1]. Samples are taken from the CfA redshift survey [2] in the northern and southern galactic hemispheres and are analysed separately. Results from these two regions are compared with each other. There are significant differences between the distributions in these two regions. However, our analyses do show some common features of the distributions in the two regions. All subsamples distinctly show a multi-level fractal character. Combining this with the results from analyses of IRAS galaxy samples and of pencil-beam redshift surveys [3,4], we suggest that multi-level fractal structure is most likely to be a general and important character of the large scale distribution of galaxies. The possible implications of this character are discussed.

  19. Directional statistics-based reflectance model for isotropic bidirectional reflectance distribution functions.

    PubMed

    Nishino, Ko; Lombardi, Stephen

    2011-01-01

    We introduce a novel parametric bidirectional reflectance distribution function (BRDF) model that can accurately encode a wide variety of real-world isotropic BRDFs with a small number of parameters. The key observation we make is that a BRDF may be viewed as a statistical distribution on a unit hemisphere. We derive a novel directional statistics distribution, which we refer to as the hemispherical exponential power distribution, and model real-world isotropic BRDFs as mixtures of it. We derive a canonical probabilistic method for estimating the parameters, including the number of components, of this novel directional statistics BRDF model. We show that the model captures the full spectrum of real-world isotropic BRDFs with high accuracy, but a small footprint. We also demonstrate the advantages of the novel BRDF model by showing its use for reflection component separation and for exploring the space of isotropic BRDFs.

  20. Establishment of HPC(R2A) for regrowth control in non-chlorinated distribution systems.

    PubMed

    Uhl, Wolfgang; Schaule, Gabriela

    2004-05-01

    Drinking water distributed without disinfection and without regrowth problems for many years may show bacterial regrowth when the residence time and/or temperature in the distribution system increases or when substrate and/or bacterial concentration in the treated water increases. An example of a regrowth event in a major German city is discussed. Regrowth of HPC bacteria occurred unexpectedly at the end of a very hot summer. No pathogenic or potentially pathogenic bacteria were identified. Increased residence times in the distribution system and temperatures up to 25 degrees C were identified as most probable causes and the regrowth event was successfully overcome by changing flow regimes and decreasing residence times. Standard plate counts of HPC bacteria using the spread plate technique on nutrient rich agar according to German Drinking Water Regulations (GDWR) had proven to be a very good indicator of hygienically safe drinking water and to demonstrate the effectiveness of water treatment. However, the method proved insensitive for early regrowth detection. Regrowth experiments in the lab and sampling of the distribution system during two summers showed that spread plate counts on nutrient-poor R2A agar after 7-day incubation yielded 100 to 200 times higher counts. Counts on R2A after 3-day incubation were three times less than after 7 days. As the precision of plate count methods is very poor for counts less than 10 cfu/plate, a method yielding higher counts is better suited to detect upcoming regrowth than a method yielding low counts. It is shown that for the identification of regrowth events HPC(R2A) gives a further margin of about 2 weeks for reaction before HPC(GDWR). Copyright 2003 Elsevier B.V.

  1. An improved size exclusion-HPLC method for molecular size distribution analysis of immunoglobulin G using sodium perchlorate in the eluent.

    PubMed

    Wang, Hsiaoling; Levi, Mark S; Del Grosso, Alfred V; McCormick, William M; Bhattacharyya, Lokesh

    2017-05-10

    Size exclusion (SE) high performance liquid chromatography (HPLC) is widely used for the molecular size distribution (MSD) analyses of various therapeutic proteins. We report development and validation of a SE-HPLC method for MSD analyses of immunoglobulin G (IgG) in products using a TSKgel SuperSW3000 column and eluting it with 0.4 M NaClO4, a chaotropic salt, in 40 mM phosphate buffer, pH 6.8. The chromatograms show distinct peaks of aggregates, tetramer, and two dimers, as well as the monomer and fragment peaks. In addition, the method offers about half the run time (12 min), better peak resolution, improved peak shape and a more stable baseline compared to HPLC methods reported in the literature, including that in the European Pharmacopeia (EP). A comparison of MSD analysis results between our method and the EP method shows interactions between the protein and the stationary phase and partial adsorption of aggregates and tetramer on the stationary phase when the latter method is used. Thus, the EP method shows a lower percentage of aggregates and tetramer than is actually present in the products. In view of the fact that aggregates have been attributed a critical role in adverse reactions due to IgG products, our observation raises a major concern regarding the actual aggregate content in these products, since the EP method is widely used for MSD analyses of IgG products. Our method eliminates (or substantially reduces) the interactions between the proteins and the stationary phase as well as the adsorption of proteins onto the column. Our results also show that NaClO4 in the eluent is more effective in overcoming the protein/column interactions compared to Arg-HCl, another chaotropic salt. NaClO4 is shown not to affect the molecular size and relative distribution of different molecular forms of IgG. The method, validated as per the ICH Q2(R1) guideline using IgG products, shows good specificity, accuracy, precision and a linear concentration dependence of peak areas for different molecular forms. In summary, our method gives more reliable results than the SE-HPLC methods for MSD analyses of IgG reported in the literature, including the EP, particularly for aggregates and tetramer. The results are interpreted in terms of ionic (polar) and hydrophobic interactions between the stationary phase and the IgG protein. Published by Elsevier B.V.

  2. [Distribution of electric charges in 2 substances inducing tumor cell regression].

    PubMed

    Smeyers, Y G; Huertas, A

    1983-01-01

    The charge distribution of the anti-cancer molecules 4-thiazolidine-carboxylic acid and 2-amino-2-thiazoline hydrochloride was calculated with the CNDO/2 semiempirical quantum mechanical method. The activity seems to be related to the formation of Zn2+ and Mn2+ ions. Both molecules show local isosterism, the origin of their chelating properties.

  3. The planetary distribution of heat sources and sinks during FGGE

    NASA Technical Reports Server (NTRS)

    Johnson, D. R.; Wei, M. Y.

    1985-01-01

    Heating distributions from analyses of the National Meteorological Center and European Center for Medium Range Weather Forecasts data sets, the methods used and problems involved in the inference of diabatic heating, the relationship between differential heating and energy transport, and recommendations on the inference of heat sources and heat sinks on the planetary scale are discussed.

  4. Distribution Route Planning of Clean Coal Based on Nearest Insertion Method

    NASA Astrophysics Data System (ADS)

    Wang, Yunrui

    2018-01-01

    Clean coal technology has advanced over several decades, but research on its distribution is still scarce. Distribution efficiency directly affects the overall development of clean coal technology, and rational planning of distribution routes is the key to improving it. The object of this paper is a clean coal distribution system built in a county. A survey of customer demand, distribution routes, and vehicle use in previous years showed that vehicles had been dispatched purely by experience and that the number of vehicles used each day varied, resulting in wasted transport capacity and increased energy consumption. A mathematical model taking the shortest path as the objective function was therefore established, and the distribution route was re-planned using an improved nearest-insertion method. The results show that the transportation distance was reduced by 37 km and that the number of vehicles used fell from a past average of 5 per day to a fixed 4, while the real loading rate of the vehicles increased by 16.25% at the same distribution volume. This realizes efficient distribution of clean coal and achieves the goals of saving energy and reducing consumption.
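
    The abstract does not spell out the authors' improvement to the nearest-insertion heuristic, so the following is only a minimal Python sketch of the classic nearest-insertion route construction it builds on, with hypothetical depot and customer coordinates:

        import math

        def dist(a, b):
            return math.hypot(a[0] - b[0], a[1] - b[1])

        def nearest_insertion(points, depot=0):
            # Classic nearest-insertion construction of a single closed route.
            n = len(points)
            unvisited = set(range(n)) - {depot}
            first = min(unvisited, key=lambda j: dist(points[depot], points[j]))
            tour = [depot, first]
            unvisited.remove(first)
            while unvisited:
                # selection: unrouted customer closest to any routed point
                k = min(unvisited,
                        key=lambda j: min(dist(points[j], points[i]) for i in tour))
                # insertion: position that adds the least extra distance
                best_pos, best_inc = None, float("inf")
                for p in range(len(tour)):
                    i, j = tour[p], tour[(p + 1) % len(tour)]
                    inc = dist(points[i], points[k]) + dist(points[k], points[j]) - dist(points[i], points[j])
                    if inc < best_inc:
                        best_pos, best_inc = p + 1, inc
                tour.insert(best_pos, k)
                unvisited.remove(k)
            return tour

        pts = [(0, 0), (4, 1), (2, 5), (6, 4), (1, 3)]   # hypothetical depot + customers (km)
        route = nearest_insertion(pts)
        length = sum(dist(pts[route[i]], pts[route[(i + 1) % len(route)]]) for i in range(len(route)))
        print(route, round(length, 2))

    Each iteration picks the unrouted customer closest to the current route and splices it in at the cheapest position; an improved variant such as the paper's would modify these selection and insertion rules.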

  5. Cloud-top height retrieval from polarizing remote sensor POLDER

    NASA Astrophysics Data System (ADS)

    He, Xianqiang; Pan, Delu; Yan, Bai; Mao, Zhihua

    2006-10-01

    A new cloud-top height retrieval method based on polarized remote sensing is proposed. It is shown that, in cloudy conditions, the linearly polarized radiance at the top of the atmosphere (TOA) in the violet and blue bands is contributed mainly by Rayleigh scattering from atmospheric molecules above the cloud, while the contributions from cloud reflection and aerosol scattering can be neglected. Based on these characteristics, the basic principle and method of cloud-top height retrieval from polarized remote sensing are presented in detail and tested with polarized remote sensing data from POLDER. The satellite-derived cloud-top height product can not only show the distribution of global cloud-top height but also capture the cloud-top height distribution of moderate-scale meteorological phenomena such as hurricanes and typhoons. This new method is a promising candidate for an operational cloud-top height retrieval algorithm for POLDER and future polarized remote sensing satellites.

  6. Electrical detection and analysis of surface acoustic wave in line-defect two-dimensional piezoelectric phononic crystals

    NASA Astrophysics Data System (ADS)

    Cai, Feida; Li, Honglang; Tian, Yahui; Ke, Yabing; Cheng, Lina; Lou, Wei; He, Shitang

    2018-03-01

    Line-defect piezoelectric phononic crystals (PCs) show good potential for application in surface acoustic wave (SAW) MEMS devices for RF communication systems. To analyze the SAW characteristics in line-defect two-dimensional (2D) piezoelectric PCs, optical methods are commonly used. However, the optical instruments are complex and expensive, whereas conventional electrical methods can only measure the SAW transmission of the whole device and lack spatial resolution. In this paper, we propose a new electrical experimental method with multiple receiving interdigital transducers (IDTs) to detect the SAW field distribution, in which an array of receiving IDTs of equal aperture is used to receive the SAW. For this new method, SAW delay lines with perfect and line-defect 2D Al/128°YXLiNbO3 piezoelectric PCs on the transmitting path were designed and fabricated. The experimental results showed that the SAW was distributed mainly in the line-defect region, which agrees with the theoretical results.

  7. Integrated geophysical study to understand the architecture of the deep critical zone in the Luquillo Critical Zone Observatory (Puerto Rico)

    NASA Astrophysics Data System (ADS)

    Comas, X.; Wright, W. J.; Hynek, S. A.; Ntarlagiannis, D.; Terry, N.; Whiting, F.; Job, M. J.; Brantley, S. L.; Fletcher, R. C.

    2016-12-01

    The Luquillo Critical Zone Observatory (CZO) in Puerto Rico is characterized by a complex system of heterogeneous fractures that participate in the formation of corestones and influence the development of regolith through the alteration of the bedrock at very rapid weathering rates. The spatial distribution of fractures, and its influence on regolith thickness, is, however, currently not well understood. In this study, we used an array of near-surface geophysical methods, including ground penetrating radar, terrain conductivity, electrical resistivity imaging and induced polarization, OhmMapper, and shallow seismic surveys, constrained by direct methods from previous studies. These methods were combined with stress modeling to better understand: 1) changes in regolith thickness; and 2) the variation of the spatial distribution and density of fractures with topography and proximity to the knickpoint. Our observations show the potential of geophysical methods for imaging variability in regolith thickness, and agree with the result of a stress model showing increased dilation of fractures with proximity to the knickpoint.

  8. Multiscale power analysis for heart rate variability

    NASA Astrophysics Data System (ADS)

    Zeng, Peng; Liu, Hongxing; Ni, Huangjing; Zhou, Jing; Xia, Lan; Ning, Xinbao

    2015-06-01

    We introduce the multiscale power (MSP) method to assess the power distribution of physiological signals on multiple time scales. Simulations on synthetic data and experiments on heart rate variability (HRV) are presented to support the approach. The results show that both physical and psychological changes influence the power distribution significantly. A quantitative parameter, termed the power difference (PD), is introduced to evaluate the degree of power distribution alteration. We find that the dynamical correlation of HRV is destroyed completely when PD > 0.7.

  9. Revealing the microstructure of the giant component in random graph ensembles

    NASA Astrophysics Data System (ADS)

    Tishby, Ido; Biham, Ofer; Katzav, Eytan; Kühn, Reimer

    2018-04-01

    The microstructure of the giant component of the Erdős-Rényi network and other configuration model networks is analyzed using generating function methods. While configuration model networks are uncorrelated, the giant component exhibits a degree distribution which is different from the overall degree distribution of the network and includes degree-degree correlations of all orders. We present exact analytical results for the degree distributions as well as higher-order degree-degree correlations on the giant components of configuration model networks. We show that the degree-degree correlations are essential for the integrity of the giant component, in the sense that the degree distribution alone cannot guarantee that it will consist of a single connected component. To demonstrate the importance and broad applicability of these results, we apply them to the study of the distribution of shortest path lengths on the giant component, percolation on the giant component, and spectra of sparse matrices defined on the giant component. We show that by using the degree distribution on the giant component one obtains high quality results for these properties, which can be further improved by taking the degree-degree correlations into account. This suggests that many existing methods, currently used for the analysis of the whole network, can be adapted in a straightforward fashion to yield results conditioned on the giant component.
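
    As a concrete illustration of the kind of result described here, the sketch below evaluates the standard generating-function formula for the degree distribution conditioned on the giant component of a configuration model with a Poisson (Erdős-Rényi) degree distribution; it does not reproduce the paper's higher-order degree-degree correlation results, and the mean degree c and truncation kmax are arbitrary choices:

        import numpy as np
        from math import exp, factorial

        c, kmax = 3.0, 30                       # Poisson (Erdos-Renyi) mean degree, truncation
        pk = np.array([exp(-c) * c**k / factorial(k) for k in range(kmax + 1)])
        pk /= pk.sum()
        ks = np.arange(kmax + 1)
        mean_k = (ks * pk).sum()

        u = 0.5                                 # prob. an edge does NOT lead to the giant component
        for _ in range(200):                    # fixed point u = G1(u)
            u = (ks[1:] * pk[1:] * u**(ks[1:] - 1)).sum() / mean_k

        S = 1.0 - (pk * u**ks).sum()            # giant-component fraction, 1 - G0(u)
        pk_gc = pk * (1.0 - u**ks) / S          # degree distribution conditioned on the GC
        print(round(S, 3), pk_gc[:6].round(4))

    Here u is the probability that a randomly followed edge fails to reach the giant component, so a degree-k node belongs to the giant component with probability 1 - u^k, which reweights the degree distribution toward higher degrees.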

  10. Method for assessing the need for case-specific hemodynamics: application to the distribution of vascular permeability.

    PubMed

    Hazel, A L; Friedman, M H

    2000-01-01

    A common approach to understanding the role of hemodynamics in atherogenesis is to seek relationships between parameters of the hemodynamic environment and the distribution of tissue variables thought to be indicative of early disease. An important question arising in such investigations is whether the distributions of tissue variables are sufficiently similar among cases to permit them to be described by an ensemble average distribution. If they are, the hemodynamic environment needs to be determined only once, for a nominal representative geometry; if not, the hemodynamic environment must be obtained for each case. A method for classifying distributions from multiple cases to answer this question is proposed and applied to the distributions of the uptake of Evans blue dye-labeled albumin by the external iliac arteries of swine in response to a step increase in flow. It is found that the uptake patterns in the proximal segment of the arteries, between the aortic trifurcation and the ostium of the circumflex iliac artery, show considerable case-to-case variability. In the distal segment, extending to the deep femoral ostium, many cases show very little spatial variation, and the patterns in those that do are similar among the cases. Thus the response of the distal segment may be understood with fewer simulations, but the proximal segment has more information to offer.

  11. Self Calibrated Wireless Distributed Environmental Sensory Networks

    PubMed Central

    Fishbain, Barak; Moreno-Centeno, Erick

    2016-01-01

    Recent advances in sensory and communication technologies have made Wireless Distributed Environmental Sensory Networks (WDESN) technically and economically feasible. WDESNs present an unprecedented tool for studying many environmental processes in new ways. However, the WDESNs' calibration process is a major obstacle to their becoming common practice. Here, we present a new, robust and efficient method for aggregating measurements acquired by an uncalibrated WDESN and producing accurate estimates of the observed environmental variable's true levels, rendering the network self-calibrated. The suggested method is novel both in group decision making and in environmental sensing, as it offers a valuable tool for distributed environmental monitoring data aggregation. Applying the method to an extensive real-life air-pollution dataset showed markedly more accurate results than the common practice and the state of the art. PMID:27098279

  12. Analytical and numerical treatment of the heat conduction equation obtained via time-fractional distributed-order heat conduction law

    NASA Astrophysics Data System (ADS)

    Želi, Velibor; Zorica, Dušan

    2018-02-01

    A generalization of the heat conduction equation is obtained by considering a system of equations consisting of the energy balance equation and a fractional-order constitutive heat conduction law, assumed in the form of the distributed-order Cattaneo type. The Cauchy problem for the system of the energy balance equation and the constitutive heat conduction law is treated analytically through Fourier and Laplace integral transform methods, as well as numerically by the method of finite differences, using Adams-Bashforth and Grünwald-Letnikov schemes for approximating derivatives in the temporal domain and a leapfrog scheme for the spatial derivatives. Numerical examples, showing the time evolution of temperature and heat flux spatial profiles, demonstrate the applicability and good agreement of both methods in cases of multi-term and power-type distributed-order heat conduction laws.
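
    The full distributed-order Cattaneo scheme is beyond a short example, but the Grünwald-Letnikov building block named in the abstract is easy to illustrate. The sketch below, a rough illustration rather than the authors' code, computes the GL weights by the standard recursion w_0 = 1, w_k = w_{k-1}(1 - (α+1)/k) and applies them to approximate a fractional derivative on a uniform grid:

        import numpy as np

        def gl_weights(alpha, n):
            # Grunwald-Letnikov weights w_k = (-1)^k C(alpha, k) via the standard recursion
            w = np.empty(n + 1)
            w[0] = 1.0
            for k in range(1, n + 1):
                w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
            return w

        def gl_derivative(f, alpha, h):
            # GL approximation of the order-alpha derivative of samples f on a uniform grid of step h
            w = gl_weights(alpha, len(f) - 1)
            return np.array([np.dot(w[:j + 1], f[j::-1]) for j in range(len(f))]) / h**alpha

        h = 1e-3
        t = np.arange(0.0, 1.0 + h, h)
        approx = gl_derivative(t, 0.5, h)                 # half-order derivative of f(t) = t
        print(approx[-1], 2.0 * np.sqrt(t[-1] / np.pi))   # closed form: 2*sqrt(t/pi)

    The sanity check compares the order-1/2 derivative of f(t) = t against its closed form 2√(t/π).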

  13. Fractal analysis of the short time series in a visibility graph method

    NASA Astrophysics Data System (ADS)

    Li, Ruixue; Wang, Jiang; Yu, Haitao; Deng, Bin; Wei, Xile; Chen, Yingyuan

    2016-05-01

    The aim of this study is to evaluate the performance of the visibility graph (VG) method on short fractal time series. In this paper, time series of fractional Brownian motion (fBm), characterized by different Hurst exponents H, are simulated and then mapped into scale-free visibility graphs, whose degree distributions show a power-law form. Maximum likelihood estimation (MLE) is applied to estimate the power-law index of the degree distribution, and in this process the Kolmogorov-Smirnov (KS) statistic is used to test the performance of the estimation, aiming to avoid the influence of the drooping head and heavy tail of the degree distribution. As a result, we find that the MLE gives an optimal estimate of the power-law index when the KS statistic reaches its first local minimum. Based on the results from the KS statistic, the relationship between the power-law index and the Hurst exponent is reexamined and then amended to suit short time series. Thus, a method combining VG, MLE and the KS statistic is proposed to estimate Hurst exponents from short time series. Lastly, this paper also offers an example to verify the effectiveness of the combined method. The corresponding results show that the VG can provide a reliable estimate of Hurst exponents.
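
    A minimal sketch of the pipeline described here, the natural visibility graph mapping followed by a maximum-likelihood power-law fit of the degrees, is given below; the KS-based choice of the lower cutoff is omitted and a fixed xmin is used instead, and ordinary Brownian motion stands in for general fBm:

        import numpy as np

        def visibility_graph_degrees(y):
            # Natural visibility graph: i sees j if no intermediate sample rises above
            # the straight line joining (i, y_i) and (j, y_j) (Lacasa et al. criterion).
            n = len(y)
            deg = np.zeros(n, dtype=int)
            for a in range(n):
                for b in range(a + 1, n):
                    t = np.arange(a + 1, b)
                    if b == a + 1 or np.all(y[t] < y[b] + (y[a] - y[b]) * (b - t) / (b - a)):
                        deg[a] += 1
                        deg[b] += 1
            return deg

        def powerlaw_mle(x, xmin):
            # Continuous-approximation MLE of the power-law index for x >= xmin
            x = np.asarray(x, dtype=float)
            x = x[x >= xmin]
            return 1.0 + len(x) / np.sum(np.log(x / xmin))

        rng = np.random.default_rng(0)
        series = np.cumsum(rng.standard_normal(500))   # ordinary Brownian motion (H = 0.5) stand-in
        deg = visibility_graph_degrees(series)
        print(powerlaw_mle(deg, xmin=6))               # fixed xmin; the paper selects it via the KS statistic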

  14. A Distributed Learning Method for ℓ1-Regularized Kernel Machine over Wireless Sensor Networks

    PubMed Central

    Ji, Xinrong; Hou, Cuiqin; Hou, Yibin; Gao, Fang; Wang, Shulong

    2016-01-01

    In wireless sensor networks, centralized learning methods have very high communication costs and energy consumption, caused by the need to transmit scattered training examples from various sensor nodes to the central fusion center where a classifier or a regression machine is trained. To reduce the communication cost, a distributed learning method for a kernel machine that incorporates ℓ1 norm regularization (ℓ1-regularized) is investigated, and a novel distributed learning algorithm for the ℓ1-regularized kernel minimum mean squared error (KMSE) machine is proposed. The proposed algorithm relies on in-network processing and a collaboration that transmits the sparse model only between single-hop neighboring nodes. This paper evaluates the proposed algorithm with respect to prediction accuracy, model sparsity, communication cost and the number of iterations on synthetic and real datasets. The simulation results show that the proposed algorithm can obtain approximately the same prediction accuracy as that obtained by the batch learning method. Moreover, it is significantly superior in terms of model sparsity and communication cost, and it converges with fewer iterations. Finally, an experiment conducted on a wireless sensor network (WSN) test platform further shows the advantages of the proposed algorithm with respect to communication cost. PMID:27376298

  15. A New Multiconstraint Method for Determining the Optimal Cable Stresses in Cable-Stayed Bridges

    PubMed Central

    Asgari, B.; Osman, S. A.; Adnan, A.

    2014-01-01

    Cable-stayed bridges are one of the most popular types of long-span bridges. The structural behaviour of cable-stayed bridges is sensitive to the load distribution between the girder, pylons, and cables. The determination of pretensioning cable stresses is critical in the cable-stayed bridge design procedure. By finding the optimum stresses in cables, the load and moment distribution of the bridge can be improved. In recent years, different research works have studied iterative and modern methods to find the optimum stresses of cables. However, most of the proposed methods have limitations in optimising the structural performance of cable-stayed bridges. This paper presents a multiconstraint optimisation method to specify the optimum cable forces in cable-stayed bridges. The proposed optimisation method produces lower bending moments and stresses in the bridge members and requires shorter simulation time than other proposed methods. The results of the comparative study show that the proposed method is more successful in restricting the deck and pylon displacements and providing a uniform deck moment distribution than the unit load method (ULM). The final design of cable-stayed bridges can be optimised considerably through the proposed multiconstraint optimisation method. PMID:25050400

  16. A new multiconstraint method for determining the optimal cable stresses in cable-stayed bridges.

    PubMed

    Asgari, B; Osman, S A; Adnan, A

    2014-01-01

    Cable-stayed bridges are one of the most popular types of long-span bridges. The structural behaviour of cable-stayed bridges is sensitive to the load distribution between the girder, pylons, and cables. The determination of pretensioning cable stresses is critical in the cable-stayed bridge design procedure. By finding the optimum stresses in cables, the load and moment distribution of the bridge can be improved. In recent years, different research works have studied iterative and modern methods to find the optimum stresses of cables. However, most of the proposed methods have limitations in optimising the structural performance of cable-stayed bridges. This paper presents a multiconstraint optimisation method to specify the optimum cable forces in cable-stayed bridges. The proposed optimisation method produces lower bending moments and stresses in the bridge members and requires shorter simulation time than other proposed methods. The results of the comparative study show that the proposed method is more successful in restricting the deck and pylon displacements and providing a uniform deck moment distribution than the unit load method (ULM). The final design of cable-stayed bridges can be optimised considerably through the proposed multiconstraint optimisation method.

  17. Ensuring the Reliable Operation of the Power Grid: State-Based and Distributed Approaches to Scheduling Energy and Contingency Reserves

    NASA Astrophysics Data System (ADS)

    Prada, Jose Fernando

    Keeping a contingency reserve in power systems is necessary to preserve the security of real-time operations. This work studies two different approaches to the optimal allocation of energy and reserves in the day-ahead generation scheduling process. Part I presents a stochastic security-constrained unit commitment model to co-optimize energy and the locational reserves required to respond to a set of uncertain generation contingencies, using a novel state-based formulation. The model is applied in an offer-based electricity market to allocate contingency reserves throughout the power grid, in order to comply with the N-1 security criterion under transmission congestion. The objective is to minimize expected dispatch and reserve costs, together with post contingency corrective redispatch costs, modeling the probability of generation failure and associated post contingency states. The characteristics of the scheduling problem are exploited to formulate a computationally efficient method, consistent with established operational practices. We simulated the distribution of locational contingency reserves on the IEEE RTS96 system and compared the results with the conventional deterministic method. We found that assigning locational spinning reserves can guarantee an N-1 secure dispatch accounting for transmission congestion at a reasonable extra cost. The simulations also showed little value of allocating downward reserves but sizable operating savings from co-optimizing locational nonspinning reserves. Overall, the results indicate the computational tractability of the proposed method. Part II presents a distributed generation scheduling model to optimally allocate energy and spinning reserves among competing generators in a day-ahead market. The model is based on the coordination between individual generators and a market entity. The proposed method uses forecasting, augmented pricing and locational signals to induce efficient commitment of generators based on firm posted prices. It is price-based but does not rely on multiple iterations, minimizes information exchange and simplifies the market clearing process. Simulations of the distributed method performed on a six-bus test system showed that, using an appropriate set of prices, it is possible to emulate the results of a conventional centralized solution, without need of providing make-whole payments to generators. Likewise, they showed that the distributed method can accommodate transactions with different products and complex security constraints.

  18. Effects of Schroth and Pilates exercises on the Cobb angle and weight distribution of patients with scoliosis.

    PubMed

    Kim, Gichul; HwangBo, Pil-Neo

    2016-03-01

    [Purpose] The purpose of this study was to compare the effect of Schroth and Pilates exercises on the Cobb angle and body weight distribution of patients with idiopathic scoliosis. [Subjects] Twenty-four scoliosis patients with a Cobb angle of ≥20° were divided into the Schroth exercise group (SEG, n = 12) and the Pilates exercise group (PEG, n = 12). [Methods] The SEG and PEG performed Schroth and Pilates exercises, respectively, three times a week for 12 weeks. The Cobb angle was measured in the standing position with a radiography apparatus, and weight load was measured with Gait View Pro 1.0. [Results] In the intragroup comparison, both groups showed significant changes in the Cobb angle. For weight distribution, the SEG showed significant differences in the total weight between the concave and convex sides, but the PEG did not show significant differences. Furthermore, in the intergroup comparison, the SEG showed significant differences in the changes in the Cobb angle and weight distribution compared with the PEG. [Conclusion] Both Schroth and Pilates exercises were effective in changing the Cobb angle and weight distribution of scoliosis patients; however, the intergroup comparison showed that the Schroth exercise was more effective than the Pilates exercise.

  19. The Extended Erlang-Truncated Exponential distribution: Properties and application to rainfall data.

    PubMed

    Okorie, I E; Akpanta, A C; Ohakwe, J; Chikezie, D C

    2017-06-01

    The Erlang-Truncated Exponential (ETE) distribution is modified and the new lifetime distribution is called the Extended Erlang-Truncated Exponential (EETE) distribution. Some statistical and reliability properties of the new distribution are given, and the method of maximum likelihood is proposed for estimating the model parameters. The usefulness and flexibility of the EETE distribution were illustrated with an uncensored data set, and its fit was compared with that of the ETE and three other three-parameter distributions. Results based on the minimized log-likelihood, the Akaike information criterion (AIC), the Bayesian information criterion (BIC) and the generalized Cramér-von Mises statistic show that the EETE distribution provides a more reasonable fit than the other competing distributions.

  20. Intertime jump statistics of state-dependent Poisson processes.

    PubMed

    Daly, Edoardo; Porporato, Amilcare

    2007-01-01

    A method to obtain the probability distribution of the interarrival times of jump occurrences in systems driven by state-dependent Poisson noise is proposed. The method uses the survivor function obtained from a modified version of the master equation associated with the stochastic process under analysis. A model for the timing of human activities shows the capability of state-dependent Poisson noise to generate power-law distributions. The application of the method to a model of neuron dynamics and to a hydrological model accounting for land-atmosphere interaction elucidates the origin of characteristic recurrence intervals and possible persistence in state-dependent Poisson models.

  1. Analysis of an ultrasonically rotating droplet by moving particle semi-implicit and distributed point source method in a rotational coordinate

    NASA Astrophysics Data System (ADS)

    Wada, Yuji; Yuge, Kohei; Tanaka, Hiroki; Nakamura, Kentaro

    2017-07-01

    Numerical analysis of the rotation of an ultrasonically levitated droplet in a centrifugal (rotating) coordinate system is discussed. A droplet levitated in an acoustic chamber is simulated using the distributed point source method and the moving particle semi-implicit method. The centrifugal coordinate system is adopted to avoid the Laplacian differential error, which causes numerical divergence or inaccuracy in global-coordinate calculations. Consequently, the duration of stable calculation increased to 30 times that of the previous paper. Moreover, the droplet radius versus rotational acceleration characteristics show a trend similar to the theoretical and experimental values in the literature.

  2. Distribution Characteristics of Air-Bone Gaps – Evidence of Bias in Manual Audiometry

    PubMed Central

    Margolis, Robert H.; Wilson, Richard H.; Popelka, Gerald R.; Eikelboom, Robert H.; Swanepoel, De Wet; Saly, George L.

    2015-01-01

    Objective Five databases were mined to examine distributions of air-bone gaps obtained by automated and manual audiometry. Differences in distribution characteristics were examined for evidence of influences unrelated to the audibility of test signals. Design The databases provided air- and bone-conduction thresholds that permitted examination of air-bone gap distributions that were free of ceiling and floor effects. Cases with conductive hearing loss were eliminated based on air-bone gaps, tympanometry, and otoscopy, when available. The analysis is based on 2,378,921 threshold determinations from 721,831 subjects from five databases. Results Automated audiometry produced air-bone gaps that were normally distributed suggesting that air- and bone-conduction thresholds are normally distributed. Manual audiometry produced air-bone gaps that were not normally distributed and show evidence of biasing effects of assumptions of expected results. In one database, the form of the distributions showed evidence of inclusion of conductive hearing losses. Conclusions Thresholds obtained by manual audiometry show tester bias effects from assumptions of the patient’s hearing loss characteristics. Tester bias artificially reduces the variance of bone-conduction thresholds and the resulting air-bone gaps. Because the automated method is free of bias from assumptions of expected results, these distributions are hypothesized to reflect the true variability of air- and bone-conduction thresholds and the resulting air-bone gaps. PMID:26627469

  3. Estimate of uncertainties in polarized parton distributions

    NASA Astrophysics Data System (ADS)

    Miyama, M.; Goto, Y.; Hirai, M.; Kobayashi, H.; Kumano, S.; Morii, T.; Saito, N.; Shibata, T.-A.; Yamanishi, T.

    2001-10-01

    From a χ² analysis of polarized deep inelastic scattering data, we determined polarized parton distribution functions (Y. Goto et al. (AAC), Phys. Rev. D 62, 034017 (2000)). To clarify the reliability of the obtained distributions, their uncertainties should be estimated. In this talk, we discuss the polarized-PDF uncertainties using the Hessian method. The Hessian matrix H_ij is given by the second derivatives of χ², and the error matrix ε_ij is defined as the inverse of H_ij. Using the error matrix, the error of a function F is calculated as (δF)² = Σ_{i,j} (∂F/∂a_i) ε_ij (∂F/∂a_j), where the a_i are the parameters of the χ² analysis. Using this method, we show the uncertainties of the polarized PDFs, the structure functions g_1, and the spin asymmetries A_1. Furthermore, we show the role of future experiments such as RHIC-Spin. An important purpose of planned experiments in the near future is to determine the polarized gluon distribution function Δg(x) in detail. We reanalyze the polarized-PDF uncertainties including fake gluon data of the kind expected from the upcoming experiments, and from this analysis we discuss how much the uncertainties of Δg(x) can be improved by such measurements.
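
    A small numerical sketch of the Hessian error propagation described above is given below; the Hessian and gradient values are hypothetical stand-ins, not the AAC fit, and the Δχ² tolerance is set to 1 as in the simplest convention:

        import numpy as np

        def propagated_error(grad_F, hessian, delta_chi2=1.0):
            # (dF)^2 = sum_ij (dF/da_i) eps_ij (dF/da_j), with eps = delta_chi2 * H^{-1}
            eps = delta_chi2 * np.linalg.inv(hessian)
            return float(np.sqrt(grad_F @ eps @ grad_F))

        H = np.array([[40.0, 5.0],               # hypothetical Hessian of chi^2 at the minimum
                      [5.0, 10.0]])
        grad = np.array([0.3, -0.1])             # hypothetical dF/da_i at the best fit
        print(propagated_error(grad, H))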

  4. [Research on blood distribution of Tibetan population in Ali area].

    PubMed

    Liu, X X; Li, D D; Li, H L; Hou, L A; Liu, Z J; Yang, H Y; Qiu, L

    2017-12-12

    Objective: To explore the distribution of ABO blood groups in the healthy population of the Ali area of Tibet, and to analyze differences in blood group distribution between the Tibetan population in Ali and Tibetan populations elsewhere in Tibet. Methods: Blood group data were analyzed retrospectively for 509 apparently healthy volunteers from Tueti County and Gal County, Tibet, randomly selected from September to November 2016, for 137 Tibetan blood donors from September 2016 to July 2017, and for 84 Tibetan blood recipients from August 2015 to July 2017. Blood type was determined by the slide method. Seven articles on Tibetan blood group distribution were retrieved from Chinese- and foreign-language databases, and the blood group distribution of the Ali area population was compared with that of Tibetan populations elsewhere in Tibet. Results: The ABO phenotype frequencies of the 507 apparently healthy people, the 137 blood donors and the 84 recipients were all ordered B>O>A>AB, with composition ratios of 36.1%, 34.5%, 21.5%, 7.9%; 40.1%, 35.0%, 17.5%, 7.3%; and 39.3%, 34.5%, 20.2%, 6.0%, respectively. There was no statistically significant difference in blood group distribution between the donors and the recipients (P>0.05), and no significant difference between Ali and the Shigatse, Nagqu, Lhasa and Shannan areas. However, the differences between Ali and the Qamdo and Nyingchi areas were statistically significant. Conclusion: Moving geographically from west to east, the proportion of type B blood shows a downward trend, while the proportion of type O blood shows an upward trend.

  5. Temperature distribution and heat radiation of patterned surfaces at short wavelengths.

    PubMed

    Emig, Thorsten

    2017-05-01

    We analyze the equilibrium spatial distribution of surface temperatures of patterned surfaces. The surface is exposed to a constant external heat flux and has a fixed internal temperature that is coupled to the outside heat fluxes by finite heat conductivity across the surface. It is assumed that the temperatures are sufficiently high so that the thermal wavelength (a few microns at room temperature) is short compared to all geometric length scales of the surface patterns. Hence the radiosity method can be employed. A recursive multiple scattering method is developed that enables rapid convergence to equilibrium temperatures. While the temperature distributions show distinct dependence on the detailed surface shapes (cuboids and cylinder are studied), we demonstrate robust universal relations between the mean and the standard deviation of the temperature distributions and quantities that characterize overall geometric features of the surface shape.

  6. Temperature distribution and heat radiation of patterned surfaces at short wavelengths

    NASA Astrophysics Data System (ADS)

    Emig, Thorsten

    2017-05-01

    We analyze the equilibrium spatial distribution of surface temperatures of patterned surfaces. The surface is exposed to a constant external heat flux and has a fixed internal temperature that is coupled to the outside heat fluxes by finite heat conductivity across the surface. It is assumed that the temperatures are sufficiently high so that the thermal wavelength (a few microns at room temperature) is short compared to all geometric length scales of the surface patterns. Hence the radiosity method can be employed. A recursive multiple scattering method is developed that enables rapid convergence to equilibrium temperatures. While the temperature distributions show distinct dependence on the detailed surface shapes (cuboids and cylinder are studied), we demonstrate robust universal relations between the mean and the standard deviation of the temperature distributions and quantities that characterize overall geometric features of the surface shape.

  7. Coatings influencing thermal stress in photonic crystal fiber laser

    NASA Astrophysics Data System (ADS)

    Pang, Dongqing; Li, Yan; Li, Yao; Hu, Minglie

    2018-06-01

    We studied how coating materials influence the thermal stress in the fiber core for three holding methods by simulating the temperature distribution and the thermal stress distribution in a photonic-crystal fiber (PCF) laser. The results show that coating materials strongly influence both the thermal stress in the fiber core and the stress differences caused by the holding methods. On the basis of these results, a two-coating PCF was designed. This design reduces the stress differences caused by different holding conditions to zero, so the stability of laser operation can be improved.

  8. DONBOL: A computer program for predicting axisymmetric nozzle afterbody pressure distributions and drag at subsonic speeds

    NASA Technical Reports Server (NTRS)

    Putnam, L. E.

    1979-01-01

    A Neumann solution for the inviscid external flow was coupled to a modified Reshotko-Tucker integral boundary-layer technique, the control-volume method of Presz for calculating flow in the separated region, and an inviscid one-dimensional solution for the jet exhaust flow in order to predict axisymmetric nozzle afterbody pressure distributions and drag. The viscous and inviscid flows are solved iteratively until convergence is obtained. A computer program implementing this procedure, called DONBOL, was written. A description of the computer program and a guide to its use are given. Comparisons of the predictions of this method with experiments show that the method accurately predicts the pressure distributions of boattail afterbodies for which the jet exhaust flow is simulated by solid bodies. For nozzle configurations in which the jet exhaust is simulated by high-pressure air, the present method significantly underpredicts the magnitude of nozzle pressure drag. This deficiency arises because the method neglects the effects of jet plume entrainment. The method is limited to subsonic free-stream Mach numbers below that for which the flow over the body of revolution becomes sonic.

  9. Distributed least-squares estimation of a remote chemical source via convex combination in wireless sensor networks.

    PubMed

    Cao, Meng-Li; Meng, Qing-Hao; Zeng, Ming; Sun, Biao; Li, Wei; Ding, Cheng-Jun

    2014-06-27

    This paper investigates the problem of locating a continuous chemical source using the concentration measurements provided by a wireless sensor network (WSN). Such a problem exists in various applications: eliminating explosives or drugs, detecting the leakage of noxious chemicals, etc. The limited power and bandwidth of WSNs have motivated collaborative in-network processing which is the focus of this paper. We propose a novel distributed least-squares estimation (DLSE) method to solve the chemical source localization (CSL) problem using a WSN. The DLSE method is realized by iteratively conducting convex combination of the locally estimated chemical source locations in a distributed manner. Performance assessments of our method are conducted using both simulations and real experiments. In the experiments, we propose a fitting method to identify both the release rate and the eddy diffusivity. The results show that the proposed DLSE method can overcome the negative interference of local minima and saddle points of the objective function, which would hinder the convergence of local search methods, especially in the case of locating a remote chemical source.

  10. Comparison of time-frequency distribution techniques for analysis of spinal somatosensory evoked potential.

    PubMed

    Hu, Y; Luk, K D; Lu, W W; Holmes, A; Leong, J C

    2001-05-01

    Spinal somatosensory evoked potential (SSEP) has been employed to monitor the integrity of the spinal cord during surgery. To detect both temporal and spectral changes in SSEP waveforms, an investigation of the application of time-frequency analysis (TFA) techniques was conducted. SSEP signals from 30 scoliosis patients were analysed using different techniques; short time Fourier transform (STFT), Wigner-Ville distribution (WVD), Choi-Williams distribution (CWD), cone-shaped distribution (CSD) and adaptive spectrogram (ADS). The time-frequency distributions (TFD) computed using these methods were assessed and compared with each other. WVD, ADS, CSD and CWD showed better resolution than STFT. Comparing normalised peak widths, CSD showed the sharpest peak width (0.13+/-0.1) in the frequency dimension, and a mean peak width of 0.70+/-0.12 in the time dimension. Both WVD and CWD produced cross-term interference, distorting the TFA distribution, but this was not seen with CSD and ADS. CSD appeared to give a lower mean peak power bias (10.3%+/-6.2%) than ADS (41.8%+/-19.6%). Application of the CSD algorithm showed both good resolution and accurate spectrograms, and is therefore recommended as the most appropriate TFA technique for the analysis of SSEP signals.

  11. Establishment method of a mixture model and its practical application for transmission gears in an engineering vehicle

    NASA Astrophysics Data System (ADS)

    Wang, Jixin; Wang, Zhenyu; Yu, Xiangjun; Yao, Mingyao; Yao, Zongwei; Zhang, Erping

    2012-09-01

    Highly versatile machines, such as wheel loaders, forklifts, and mining haulers, are subject to many kinds of working conditions, as well as indefinite factors that lead to complex loading. The load probability distribution function (PDF) of transmission gears has many distribution centers; thus, it cannot be well represented by a single-peak function. To represent the distribution characteristics of this complicated phenomenon accurately, this paper proposes a novel method for establishing a mixture model. Based on linear regression models and correlation coefficients, the proposed method can automatically select the best-fitting function in the mixture model. The coefficient of determination, the mean square error, and the maximum deviation are chosen as judging criteria to describe the fitting precision between the theoretical distribution and the corresponding histogram of the available load data. The applicability of this modeling method is illustrated by the field testing data of a wheel loader, and the load spectra based on the mixture model are compiled. The comparison results show that the mixture model is more suitable for describing the load-distribution characteristics. The proposed approach improves the flexibility and intelligence of modeling, reduces the statistical error and enhances the fitting accuracy, and the load spectra compiled by this method better reflect the actual load characteristics of the gear components.
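
    The paper's automatic selection of best-fitting component functions is not reproduced here; the sketch below only illustrates the general idea of fitting a mixture PDF to load data with two distribution centers and scoring it with the coefficient of determination against the histogram, using a generic Gaussian mixture and synthetic (hypothetical) load samples:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(1)
        # hypothetical gear-load sample with two distribution centers
        load = np.concatenate([rng.normal(200, 20, 3000), rng.normal(350, 40, 2000)])

        gm = GaussianMixture(n_components=2, random_state=0).fit(load.reshape(-1, 1))

        # coefficient of determination between the mixture PDF and the load histogram
        counts, edges = np.histogram(load, bins=60, density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])
        pdf = np.exp(gm.score_samples(centers.reshape(-1, 1)))
        r2 = 1.0 - np.sum((counts - pdf) ** 2) / np.sum((counts - counts.mean()) ** 2)
        print("R^2 =", round(r2, 4))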

  12. Hamiltonian Monte Carlo acceleration using surrogate functions with random bases.

    PubMed

    Zhang, Cheng; Shahbaba, Babak; Zhao, Hongkai

    2017-11-01

    For big data analysis, the high computational cost of Bayesian methods often limits their application in practice. In recent years, there have been many attempts to improve the computational efficiency of Bayesian inference. Here we propose an efficient and scalable computational technique for a state-of-the-art Markov chain Monte Carlo method, namely Hamiltonian Monte Carlo. The key idea is to explore and exploit the structure and regularity in the parameter space of the underlying probabilistic model to construct an effective approximation of its geometric properties. To this end, we build a surrogate function to approximate the target distribution using properly chosen random bases and an efficient optimization process. The resulting method provides a flexible, scalable, and efficient sampling algorithm, which converges to the correct target distribution. We show that by choosing the basis functions and the optimization process differently, our method can be related to other approaches for the construction of surrogate functions, such as generalized additive models or Gaussian process models. Experiments based on simulated and real data show that our approach leads to substantially more efficient sampling algorithms compared to existing state-of-the-art methods.
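
    The paper's surrogate construction from random bases is not shown here; the sketch below is only a minimal standard Hamiltonian Monte Carlo sampler with leapfrog integration, the baseline into which a cheaper surrogate gradient could be substituted for grad_logp:

        import numpy as np

        def hmc(logp, grad_logp, x0, n_samples=1000, eps=0.1, L=20, seed=0):
            # Minimal Hamiltonian Monte Carlo with leapfrog integration
            rng = np.random.default_rng(seed)
            x = np.asarray(x0, dtype=float)
            samples = []
            for _ in range(n_samples):
                p = rng.standard_normal(x.shape)
                x_new, p_new = x.copy(), p.copy()
                p_new += 0.5 * eps * grad_logp(x_new)        # half momentum step
                for _ in range(L - 1):
                    x_new += eps * p_new
                    p_new += eps * grad_logp(x_new)
                x_new += eps * p_new
                p_new += 0.5 * eps * grad_logp(x_new)        # final half step
                # Metropolis correction on the joint (position, momentum) energy
                log_a = (logp(x_new) - 0.5 * p_new @ p_new) - (logp(x) - 0.5 * p @ p)
                if np.log(rng.random()) < log_a:
                    x = x_new
                samples.append(x.copy())
            return np.array(samples)

        logp = lambda x: -0.5 * x @ x      # standard 2D Gaussian target as a smoke test
        grad = lambda x: -x
        draws = hmc(logp, grad, np.zeros(2))
        print(draws.mean(axis=0), draws.std(axis=0))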

  13. Cooperative Optimal Coordination for Distributed Energy Resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Tao; Wu, Di; Ren, Wei

    In this paper, we consider the optimal coordination problem for distributed energy resources (DERs) including distributed generators and energy storage devices. We propose an algorithm based on the push-sum and gradient method to optimally coordinate storage devices and distributed generators in a distributed manner. In the proposed algorithm, each DER only maintains a set of variables and updates them through information exchange with a few neighbors over a time-varying directed communication network. We show that the proposed distributed algorithm solves the optimal DER coordination problem if the time-varying directed communication network is uniformly jointly strongly connected, which is a mild condition on the connectivity of communication topologies. The proposed distributed algorithm is illustrated and validated by numerical simulations.
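
    The full DER coordination algorithm couples push-sum with gradient steps; the sketch below illustrates only the push-sum averaging core over a fixed, strongly connected directed graph (the graph and the local values are hypothetical):

        import numpy as np

        # directed graph 0->1, 0->2, 1->2, 2->3, 3->0 (strongly connected)
        out_neighbors = {0: [1, 2], 1: [2], 2: [3], 3: [0]}
        n = 4

        # column-stochastic mixing matrix: node i splits its mass equally
        # among itself and its out-neighbors
        A = np.zeros((n, n))
        for i, outs in out_neighbors.items():
            share = 1.0 / (len(outs) + 1)
            A[i, i] = share
            for j in outs:
                A[j, i] = share

        x = np.array([3.0, 7.0, 1.0, 5.0])   # hypothetical local quantities to average
        w = np.ones(n)                       # push-sum weights
        for _ in range(100):
            x, w = A @ x, A @ w
        print(x / w)                         # every entry approaches the true average, 4.0

    Because the mixing matrix is column-stochastic rather than doubly stochastic, the raw values x drift, but the ratio x/w converges to the network-wide average at every node, which is what makes this kind of scheme usable over directed, possibly time-varying links.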

  14. Parallel Harmony Search Based Distributed Energy Resource Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ceylan, Oguzhan; Liu, Guodong; Tomsovic, Kevin

    2015-01-01

    This paper presents a harmony search based parallel optimization algorithm to minimize voltage deviations in three-phase unbalanced electrical distribution systems and to maximize the active power outputs of distributed energy resources (DR). The main contribution is to reduce the adverse impacts on the voltage profile during a day as photovoltaic (PV) output or electric vehicle (EV) charging changes throughout the day. The IEEE 123-bus distribution test system is modified by adding DRs and EVs under different load profiles. The simulation results show that by using parallel computing techniques, heuristic methods may be used as an alternative optimization tool in electrical power distribution system operation.
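
    The parallel, distribution-system-specific implementation is not reproduced here; the sketch below is a generic serial harmony search for continuous minimization, with the usual harmony memory considering rate (hmcr), pitch adjusting rate (par) and bandwidth (bw) parameters, applied to a simple quadratic stand-in for a voltage-deviation objective:

        import numpy as np

        def harmony_search(f, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=2000, seed=0):
            # Generic serial harmony search for continuous minimization
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds, dtype=float).T
            dim = len(lo)
            hm = lo + (hi - lo) * rng.random((hms, dim))      # harmony memory
            cost = np.array([f(h) for h in hm])
            for _ in range(iters):
                new = np.empty(dim)
                for d in range(dim):
                    if rng.random() < hmcr:                   # memory consideration
                        new[d] = hm[rng.integers(hms), d]
                        if rng.random() < par:                # pitch adjustment
                            new[d] += bw * (hi[d] - lo[d]) * (2.0 * rng.random() - 1.0)
                    else:                                     # random selection
                        new[d] = lo[d] + (hi[d] - lo[d]) * rng.random()
                new = np.clip(new, lo, hi)
                c = f(new)
                worst = np.argmax(cost)
                if c < cost[worst]:                           # replace the worst harmony
                    hm[worst], cost[worst] = new, c
            best = np.argmin(cost)
            return hm[best], cost[best]

        # quadratic stand-in for a voltage-deviation objective
        best_x, best_f = harmony_search(lambda x: np.sum((x - 0.3) ** 2), [(-1.0, 1.0)] * 3)
        print(best_x, best_f)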

  15. Ionic Conduction in Lithium Ion Battery Composite Electrode Governs Cross-sectional Reaction Distribution.

    PubMed

    Orikasa, Yuki; Gogyo, Yuma; Yamashige, Hisao; Katayama, Misaki; Chen, Kezheng; Mori, Takuya; Yamamoto, Kentaro; Masese, Titus; Inada, Yasuhiro; Ohta, Toshiaki; Siroma, Zyun; Kato, Shiro; Kinoshita, Hajime; Arai, Hajime; Ogumi, Zempachi; Uchimoto, Yoshiharu

    2016-05-19

    Composite electrodes containing active materials, carbon and binder are widely used in lithium-ion batteries. Since the electrode reaction occurs preferentially in regions with lower resistance, a reaction distribution can arise within composite electrodes. We investigate the relationship between the reaction distribution in the depth direction and the electronic/ionic conductivity of composite electrodes with varying electrode porosities. Two-dimensional X-ray absorption spectroscopy shows that a reaction distribution arises in lower-porosity electrodes. The 6-probe method we developed can measure both electronic and ionic conductivity in composite electrodes. The ionic conductivity decreases for lower-porosity electrodes, and it is this conductivity that governs the reaction distribution of composite electrodes and their performance.

  16. Effect of inhomogeneity in a patient's body on the accuracy of the pencil beam algorithm in comparison to Monte Carlo

    NASA Astrophysics Data System (ADS)

    Yamashita, T.; Akagi, T.; Aso, T.; Kimura, A.; Sasaki, T.

    2012-11-01

    The pencil beam algorithm (PBA) is reasonably accurate and fast. It is, therefore, the primary method used in routine clinical treatment planning for proton radiotherapy; still, it needs to be validated for use in highly inhomogeneous regions. In our investigation of the effect of patient inhomogeneity, PBA was compared with Monte Carlo (MC). A software framework was developed for the MC simulation of radiotherapy based on Geant4. Anatomical sites selected for the comparison were the head/neck, liver, lung and pelvis region. The dose distributions calculated by the two methods in selected examples were compared, as well as a dose volume histogram (DVH) derived from the dose distributions. The comparison of the off-center ratio (OCR) at the iso-center showed good agreement between the PBA and MC, while discrepancies were seen around the distal fall-off regions. While MC showed a fine structure on the OCR in the distal fall-off region, the PBA showed smoother distribution. The fine structures in MC calculation appeared downstream of very low-density regions. Comparison of DVHs showed that most of the target volumes were similarly covered, while some OARs located around the distal region received a higher dose when calculated by MC than the PBA.

  17. An efficient algorithm for accurate computation of the Dirichlet-multinomial log-likelihood function.

    PubMed

    Yu, Peng; Shaw, Chad A

    2014-06-01

    The Dirichlet-multinomial (DMN) distribution is a fundamental model for multicategory count data with overdispersion. This distribution has many uses in bioinformatics, including applications to metagenomics data, transcriptomics and alternative splicing. The DMN distribution reduces to the multinomial distribution when the overdispersion parameter ψ is 0. Unfortunately, numerical computation of the DMN log-likelihood function by conventional methods results in instability in the neighborhood of ψ = 0. An alternative formulation circumvents this instability, but it leads to long runtimes that make it impractical for the large count data common in bioinformatics. We have developed a new method for computation of the DMN log-likelihood that solves the instability problem without incurring long runtimes. The new approach is composed of a novel formula and an algorithm to extend its applicability. Our numerical experiments show that this new method improves both the accuracy of log-likelihood evaluation and the runtime by several orders of magnitude, especially in the high-count situations common in deep sequencing data. Using real metagenomic data, our method achieves a manyfold runtime improvement. Our method increases the feasibility of using the DMN distribution to model many high-throughput problems in bioinformatics. We have included in our work an R package giving access to this method and a vignette applying this approach to metagenomic data. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
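
    For reference, the textbook lgamma-based evaluation of the DMN log-likelihood, the kind of direct computation the paper improves upon rather than the paper's stabilized algorithm, looks like this; the counts and concentration parameters are hypothetical:

        import numpy as np
        from scipy.special import gammaln

        def dmn_loglik(x, alpha):
            # Direct (textbook) Dirichlet-multinomial log-likelihood via log-gamma functions
            x = np.asarray(x, dtype=float)
            alpha = np.asarray(alpha, dtype=float)
            n, A = x.sum(), alpha.sum()
            coef = gammaln(n + 1) - gammaln(x + 1).sum()          # multinomial coefficient
            return coef + gammaln(A) - gammaln(n + A) + (gammaln(x + alpha) - gammaln(alpha)).sum()

        # hypothetical counts over four categories, symmetric concentration parameters
        print(dmn_loglik([10, 3, 0, 7], [0.5, 0.5, 0.5, 0.5]))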

  18. A New Cloud and Aerosol Layer Detection Method Based on Micropulse Lidar Measurements

    NASA Astrophysics Data System (ADS)

    Wang, Q.; Zhao, C.; Wang, Y.; Li, Z.; Wang, Z.; Liu, D.

    2014-12-01

    A new algorithm is developed to detect aerosols and clouds based on micropulse lidar (MPL) measurements. In this method, a semi-discretization processing (SDP) technique is first used to inhibit the impact of increasing noise with distance, then a value distribution equalization (VDE) method is introduced to reduce the magnitude of signal variations with distance. Combined with empirical threshold values, clouds and aerosols are detected and separated. This method can detect clouds and aerosols with high accuracy, although classification of aerosols and clouds is sensitive to the thresholds selected. Compared with the existing Atmospheric Radiation Measurement (ARM) program lidar-based cloud product, the new method detects more high clouds. The algorithm was applied to a year of observations at both the U.S. Southern Great Plains (SGP) and China Taihu site. At SGP, the cloud frequency shows a clear seasonal variation with maximum values in winter and spring, and shows bi-modal vertical distributions with maximum frequency at around 3-6 km and 8-12 km. The annual averaged cloud frequency is about 50%. By contrast, the cloud frequency at Taihu shows no clear seasonal variation and the maximum frequency is at around 1 km. The annual averaged cloud frequency is about 15% higher than that at SGP.

  19. Stepwise inference of likely dynamic flux distributions from metabolic time series data.

    PubMed

    Faraji, Mojdeh; Voit, Eberhard O

    2017-07-15

    Most metabolic pathways contain more reactions than metabolites and therefore have a wide stoichiometric matrix that corresponds to infinitely many possible flux distributions that are perfectly compatible with the dynamics of the metabolites in a given dataset. This under-determinedness poses a challenge for the quantitative characterization of flux distributions from time series data and thus for the design of adequate, predictive models. Here we propose a method that reduces the degrees of freedom in a stepwise manner and leads to a dynamic flux distribution that is, in a statistical sense, likely to be close to the true distribution. We applied the proposed method to the lignin biosynthesis pathway in switchgrass. The system consists of 16 metabolites and 23 enzymatic reactions. It has seven degrees of freedom and therefore admits a large space of dynamic flux distributions that all fit a set of metabolic time series data equally well. The proposed method reduces this space in a systematic and biologically reasonable manner and converges to a likely dynamic flux distribution in just a few iterations. The estimated solution and the true flux distribution, which is known in this case, show excellent agreement and thereby lend support to the method. The computational model was implemented in MATLAB (version R2014a, The MathWorks, Natick, MA). The source code is available at https://github.gatech.edu/VoitLab/Stepwise-Inference-of-Likely-Dynamic-Flux-Distributions and www.bst.bme.gatech.edu/research.php . mojdeh@gatech.edu or eberhard.voit@bme.gatech.edu. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  20. Random Partition Distribution Indexed by Pairwise Information

    PubMed Central

    Dahl, David B.; Day, Ryan; Tsai, Jerry W.

    2017-01-01

    We propose a random partition distribution indexed by pairwise similarity information such that partitions compatible with the similarities are given more probability. The use of pairwise similarities, in the form of distances, is common in some clustering algorithms (e.g., hierarchical clustering), but we show how to use this type of information to define a prior partition distribution for flexible Bayesian modeling. A defining feature of the distribution is that it allocates probability among partitions within a given number of subsets, but it does not shift probability among sets of partitions with different numbers of subsets. Our distribution places more probability on partitions that group similar items yet keeps the total probability of partitions with a given number of subsets constant. The distribution of the number of subsets (and its moments) is available in closed-form and is not a function of the similarities. Our formulation has an explicit probability mass function (with a tractable normalizing constant) so the full suite of MCMC methods may be used for posterior inference. We compare our distribution with several existing partition distributions, showing that our formulation has attractive properties. We provide three demonstrations to highlight the features and relative performance of our distribution. PMID:29276318

  1. PCI bus content-addressable-memory (CAM) implementation on FPGA for pattern recognition/image retrieval in a distributed environment

    NASA Astrophysics Data System (ADS)

    Megherbi, Dalila B.; Yan, Yin; Tanmay, Parikh; Khoury, Jed; Woods, C. L.

    2004-11-01

    Recently, surveillance and Automatic Target Recognition (ATR) applications have been increasing as the cost of the computing power needed to process the massive amount of information continues to fall. This computing power has been made possible partly by the latest advances in FPGAs and SOPCs. In particular, to design and implement state-of-the-art electro-optical imaging systems that provide advanced surveillance capabilities, there is a need to integrate several technologies (e.g. telescopes, precision optics, cameras, and image/computer vision algorithms, which can be geographically distributed or share distributed resources) into programmable and DSP systems. Additionally, pattern recognition techniques and fast information retrieval are often important components of intelligent systems. The aim of this work is to use an embedded FPGA as a fast, configurable and synthesizable search engine for fast image pattern recognition/retrieval in a distributed hardware/software co-design environment. In particular, we propose and demonstrate a low-cost Content Addressable Memory (CAM)-based distributed embedded FPGA hardware architecture with real-time recognition and computing capabilities for pattern look-up, pattern recognition, and image retrieval. We show how the distributed CAM-based architecture offers an order-of-magnitude performance advantage over RAM (Random Access Memory)-based search for implementing high-speed pattern recognition for image retrieval. The methods of designing, implementing, and analyzing the proposed CAM-based embedded architecture are described, and other SOPC solutions/design issues are covered. Finally, experimental results, hardware verification, and performance evaluations using both the Xilinx Virtex-II and the Altera Apex20k are provided to show the potential and power of the proposed method for low-cost reconfigurable fast image pattern recognition/retrieval at the hardware/software co-design level.

  2. A method to reproduce alpha-particle spectra measured with semiconductor detectors.

    PubMed

    Timón, A Fernández; Vargas, M Jurado; Sánchez, A Martín

    2010-01-01

    A method is proposed to reproduce alpha-particle spectra measured with silicon detectors, combining analytical and computer simulation techniques. The procedure includes the use of the Monte Carlo method to simulate the tracks of alpha-particles within the source and in the detector entrance window. The alpha-particle spectrum is finally obtained by the convolution of this simulated distribution and the theoretical distributions representing the contributions of the alpha-particle spectrometer to the spectrum. Experimental spectra from (233)U and (241)Am sources were compared with the predictions given by the proposed procedure, showing good agreement. The proposed method can be an important aid for the analysis and deconvolution of complex alpha-particle spectra. Copyright 2009 Elsevier Ltd. All rights reserved.

  3. Wigner time-delay distribution in chaotic cavities and freezing transition.

    PubMed

    Texier, Christophe; Majumdar, Satya N

    2013-06-21

    Using the joint distribution for proper time delays of a chaotic cavity derived by Brouwer, Frahm, and Beenakker [Phys. Rev. Lett. 78, 4737 (1997)], we obtain, in the limit of the large number of channels N, the large deviation function for the distribution of the Wigner time delay (the sum of proper times) by a Coulomb gas method. We show that the existence of a power law tail originates from narrow resonance contributions, related to a (second order) freezing transition in the Coulomb gas.

  4. Selective structural source identification

    NASA Astrophysics Data System (ADS)

    Totaro, Nicolas

    2018-04-01

    In the field of acoustic source reconstruction, the inverse Patch Transfer Function (iPTF) method has recently been proposed and has shown satisfactory results whatever the shape of the vibrating surface and whatever the acoustic environment. These two interesting features are due to the virtual acoustic volume concept underlying the iPTF methods. The aim of the present article is to show how this concept of a virtual subsystem can be used in structures to reconstruct the applied force distribution. Virtual boundary conditions can be applied on a part of the structure, called the virtual testing structure, to identify the force distribution applied in that zone regardless of the presence of other sources outside the zone under consideration. In the present article, the applicability of the method is demonstrated only on planar structures. However, the final example shows how the method can be applied to a planar structure of complex shape with point-welded stiffeners, even in the tested zone. In that case, if the virtual testing structure includes the stiffeners, the identified force distribution exhibits only the positions of the externally applied forces. If the virtual testing structure does not include the stiffeners, the identified force distribution makes it possible to localize the forces due to the coupling between the structure and the stiffeners through the welded points as well as those due to the external forces. This is why the approach is considered here as a selective structural source identification method. It is demonstrated that this approach falls clearly within the same framework as the Force Analysis Technique, the Virtual Fields Method and the 2D spatial Fourier transform. Even if the approach has much in common with the latter methods, it has some interesting particularities, such as its low sensitivity to measurement noise.

  5. Preparation of longitudinal sections of hair samples for the analysis of cocaine by MALDI-MS/MS and TOF-SIMS imaging.

    PubMed

    Flinders, Bryn; Cuypers, Eva; Zeijlemaker, Hans; Tytgat, Jan; Heeren, Ron M A

    2015-10-01

Matrix-assisted laser desorption/ionization-mass spectrometry imaging (MALDI-MSI) for the analysis of intact hair is a powerful tool for the detection of drugs of abuse in toxicology and forensic applications. Here we present a quick, easy, and reproducible method of preparing longitudinal sections of single hairs. This method improves the accessibility of chemicals embedded in the hair matrix for molecular imaging with mass spectrometry. The images obtained from a single, sectioned hair sample show molecular distributions in the exposed medulla, cortex, and a portion of the cuticle observed as a narrow layer surrounding the cortex. Using MALDI-MS/MS imaging, the distribution of cocaine was observed throughout five longitudinally sectioned drug-user hair samples. The images showed the distribution of the product ion at m/z 182, derived from the precursor ion of cocaine at m/z 304. MetA-SIMS images of longitudinally sectioned hair samples showed a more detailed distribution of cocaine at m/z 304, benzoylecgonine, the major metabolite of cocaine, at m/z 290, and other drugs such as methadone, observed at m/z 310. Chronological information on drug intake can thus be obtained with greater sensitivity; the chronological detail is in hours rather than months, which is of great interest in clinical as well as forensic applications. Copyright © 2015 John Wiley & Sons, Ltd.

  6. l[subscript z] Person-Fit Index to Identify Misfit Students with Achievement Test Data

    ERIC Educational Resources Information Center

    Seo, Dong Gi; Weiss, David J.

    2013-01-01

The usefulness of the l[subscript z] person-fit index was investigated with achievement test data from 20 exams given to more than 3,200 college students. Results for three methods of estimating θ showed that the distributions of l[subscript z] were not consistent with its theoretical distribution, resulting in general overfit to the item response…

  7. Numerical analysis of mixing enhancement for micro-electroosmotic flow

    NASA Astrophysics Data System (ADS)

    Tang, G. H.; He, Y. L.; Tao, W. Q.

    2010-05-01

Micro-electroosmotic flow is usually slow with negligible inertial effects, and diffusion-based mixing can be problematic. To gain an improved understanding of electroosmotic mixing in microchannels, a numerical study has been carried out for channels patterned with wall blocks and channels patterned with heterogeneous surfaces. The lattice Boltzmann method has been employed to obtain the external electric field, the electric potential distribution in the electrolyte, the flow field, and the species concentration distribution within the same framework. The simulation results show that wall blocks and heterogeneous surfaces can significantly disturb the streamlines by fluid folding and stretching, leading to substantial improvements in mixing. However, the results also show that introducing such features can substantially reduce the mass flow rate and thus effectively prolong the time available for mixing as the flow passes through the channel, a non-negligible factor in assessing the observed improvements in mixing efficiency. Compared with the heterogeneous surface distribution, the wall block cases achieve more effective enhancement in the same mixing time. In addition, the field synergy theory is extended to analyze the mixing enhancement in electroosmotic flow. The distribution of the local synergy angle in the channel helps evaluate the effectiveness of the enhancement method.

  8. The Complex Dynamics of Sponsored Search Markets

    NASA Astrophysics Data System (ADS)

    Robu, Valentin; La Poutré, Han; Bohte, Sander

    This paper provides a comprehensive study of the structure and dynamics of online advertising markets, mostly based on techniques from the emergent discipline of complex systems analysis. First, we look at how the display rank of a URL link influences its click frequency, for both sponsored search and organic search. Second, we study the market structure that emerges from these queries, especially the market share distribution of different advertisers. We show that the sponsored search market is highly concentrated, with less than 5% of all advertisers receiving over 2/3 of the clicks in the market. Furthermore, we show that both the number of ad impressions and the number of clicks follow power law distributions of approximately the same coefficient. However, we find this result does not hold when studying the same distribution of clicks per rank position, which shows considerable variance, most likely due to the way advertisers divide their budget on different keywords. Finally, we turn our attention to how such sponsored search data could be used to provide decision support tools for bidding for combinations of keywords. We provide a method to visualize keywords of interest in graphical form, as well as a method to partition these graphs to obtain desirable subsets of search terms.
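    As a minimal illustration of the power-law claim made for impressions and clicks, the sketch below estimates the exponent of a heavy-tailed sample with the standard continuous maximum-likelihood estimator. The synthetic click counts, the cutoff x_min, and the function name are assumptions for illustration, not data or code from the study.

```python
import numpy as np

def power_law_alpha_mle(x, x_min):
    """Continuous maximum-likelihood estimate of a power-law exponent
    for the samples with x >= x_min, plus its approximate standard error."""
    x = np.asarray(x, dtype=float)
    x = x[x >= x_min]
    n = x.size
    alpha = 1.0 + n / np.sum(np.log(x / x_min))
    std_err = (alpha - 1.0) / np.sqrt(n)
    return alpha, std_err

# Hypothetical per-advertiser click counts with a Pareto-like tail (index 1.1,
# so the density exponent is about 2.1).
rng = np.random.default_rng(0)
clicks = (1.0 - rng.random(10_000)) ** (-1.0 / 1.1)
alpha, se = power_law_alpha_mle(clicks, x_min=1.0)
print(f"estimated exponent: {alpha:.3f} +/- {se:.3f}")
```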

  9. Quantifying Uncertainties from Presence Data Sampling Methods for Species Distribution Modeling: Focused on Vegetation.

    NASA Astrophysics Data System (ADS)

    Sung, S.; Kim, H. G.; Lee, D. K.; Park, J. H.; Mo, Y.; Kil, S.; Park, C.

    2016-12-01

The impact of climate change has been observed throughout the globe, and ecosystems experience rapid changes such as vegetation shifts and species extinction. In this context, the Species Distribution Model (SDM) is one of the popular methods to project the impact of climate change on ecosystems. An SDM is based on the niche of a given species, which means that presence point data are essential for characterizing the biological niche of that species. Running an SDM for plants involves particular considerations arising from the characteristics of vegetation. Vegetation data over large areas are normally produced with remote sensing techniques, so the exact location of a presence point carries high uncertainty when presence data are selected from polygon and raster datasets. Sampling methods for vegetation presence data should therefore be chosen carefully. In this study, we used three different sampling methods for the selection of vegetation presence data: random sampling, stratified sampling, and site-index-based sampling. We used the R package BIOMOD2 to assess the uncertainty from modeling, and included BioCLIM variables and other environmental variables as input data. Despite differences among the 10 SDMs, the sampling methods produced different ROC values: random sampling gave the lowest ROC value, while site-index-based sampling gave the highest. The study thus quantifies the uncertainties arising from presence data sampling methods and from the SDM itself.

  10. Epidemic spreading in weighted networks: an edge-based mean-field solution.

    PubMed

    Yang, Zimo; Zhou, Tao

    2012-05-01

    Weight distribution greatly impacts the epidemic spreading taking place on top of networks. This paper presents a study of a susceptible-infected-susceptible model on regular random networks with different kinds of weight distributions. Simulation results show that the more homogeneous weight distribution leads to higher epidemic prevalence, which, unfortunately, could not be captured by the traditional mean-field approximation. This paper gives an edge-based mean-field solution for general weight distribution, which can quantitatively reproduce the simulation results. This method could be applied to characterize the nonequilibrium steady states of dynamical processes on weighted networks.

  11. Microdialysis as a way to measure antibiotics concentration in tissues.

    PubMed

    Marchand, Sandrine; Chauzy, Alexia; Dahyot-Fizelier, Claire; Couet, William

    2016-09-01

As with all other drugs, antibiotics must reach their pharmacodynamic target in order to exert their effect, but because infection may occur in various tissues, the distribution of antibiotics has always been of particular concern. In this article, we first critically review the various methodologies available to study antibiotic tissue distribution, including microdialysis; second, we show how basic pharmacokinetic concepts may help to predict or interpret antibiotic tissue distribution; and third, we address the question of linking antibiotic tissue distribution with antimicrobial effect, using modern pharmacokinetic-pharmacodynamic methods. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Detection of defects on apple using B-spline lighting correction method

    NASA Astrophysics Data System (ADS)

    Li, Jiangbo; Huang, Wenqian; Guo, Zhiming

To effectively extract defective areas in fruit, the uneven intensity distribution in the image, produced by the lighting system or by parts of the vision system, must be corrected. A methodology was used to convert the non-uniform intensity distribution on spherical objects into a uniform one. The proposed algorithms yield an essentially flat image in which the defective areas have a lower gray level than the surrounding plane, so the defective areas can then be easily extracted with a global threshold. Experimental results on 100 apple images, with a 94.0% classification rate, showed that the proposed algorithm is simple and effective. The method can also be applied to other spherical fruits.

  13. A comparison of methods for estimating the random effects distribution of a linear mixed model.

    PubMed

    Ghidey, Wendimagegn; Lesaffre, Emmanuel; Verbeke, Geert

    2010-12-01

This article reviews various recently suggested approaches to estimate the random effects distribution in a linear mixed model, i.e. (1) the smoothing by roughening approach of Shen and Louis,(1) (2) the semi-non-parametric approach of Zhang and Davidian,(2) (3) the heterogeneity model of Verbeke and Lesaffre,(3) and (4) a flexible approach of Ghidey et al.(4) These four approaches are compared via an extensive simulation study. We conclude that for the considered cases, the approach of Ghidey et al.(4) often has the smallest integrated mean squared error for estimating the random effects distribution. An analysis of a longitudinal dental data set illustrates the performance of the methods in a practical example.

  14. Light scattering study of rheumatoid arthritis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beuthan, J; Netz, U; Minet, O

The distribution of light scattered by finger joints is studied in the near-IR region. It is shown that variations in the optical parameters of the tissue (scattering coefficient μ_s, absorption coefficient μ_a, and anisotropy factor g) depend on the presence of rheumatoid arthritis (RA). At the first stage, the distribution of scattered light was measured in diaphanoscopic experiments. The convolution of a Gaussian error function with the scattering phase function proved to be a good approximation of the data obtained. Then, a new method was developed for the reconstruction of the distribution of optical parameters in the finger cross section. Model tests of the quality of this reconstruction method show good results. (laser biology and medicine)

  15. Revealing degree distribution of bursting neuron networks.

    PubMed

    Shen, Yu; Hou, Zhonghuai; Xin, Houwen

    2010-03-01

    We present a method to infer the degree distribution of a bursting neuron network from its dynamics. Burst synchronization (BS) of coupled Morris-Lecar neurons has been studied under the weak coupling condition. In the BS state, all the neurons start and end bursting almost simultaneously, while the spikes inside the burst are incoherent among the neurons. Interestingly, we find that the spike amplitude of a given neuron shows an excellent linear relationship with its degree, which makes it possible to estimate the degree distribution of the network by simple statistics of the spike amplitudes. We demonstrate the validity of this scheme on scale-free as well as small-world networks. The underlying mechanism of such a method is also briefly discussed.

  16. LASER APPLICATIONS AND OTHER TOPICS IN QUANTUM ELECTRONICS: Methods of computational physics in the problem of mathematical interpretation of laser investigations

    NASA Astrophysics Data System (ADS)

    Brodyn, M. S.; Starkov, V. N.

    2007-07-01

It is shown that in laser experiments performed with an 'imperfect' setup, where instrumental distortions are considerable, sufficiently accurate results can be obtained by modern methods of computational physics. It is found for the first time that a new instrumental function, the 'cap' function, a 'sister' of the Gaussian curve, proves to be precisely what is needed in such laser experiments. A new mathematical model of the measurement path and a carefully performed computational experiment show that a light beam transmitted through a mesoporous film actually has a narrower intensity distribution than the detected beam, and that the amplitude of the real intensity distribution is twice as large as that of the measured one.

  17. Phylogenetic tree construction using trinucleotide usage profile (TUP).

    PubMed

    Chen, Si; Deng, Lih-Yuan; Bowman, Dale; Shiau, Jyh-Jen Horng; Wong, Tit-Yee; Madahian, Behrouz; Lu, Henry Horng-Shing

    2016-10-06

It has been a challenging task to build a genome-wide phylogenetic tree for a large group of species containing a large number of genes with long nucleotide sequences. The most popular method, called feature frequency profile (FFP-k), finds the frequency distribution for all words of a certain length k over the whole genome sequence using (overlapping) windows of the same length. For a satisfactory result, the recommended word length (k) ranges from 6 to 15 and it may not be a multiple of 3 (the codon length). The total number of possible words needed for FFP-k can range from 4^6 = 4096 to 4^15. We propose a simple improvement over the popular FFP method using only a typical word length of 3. A new method, called Trinucleotide Usage Profile (TUP), is proposed based only on the (relative) frequency distribution using non-overlapping windows of length 3. The total number of possible words needed for TUP is 4^3 = 64, which is much less than the total count for the recommended optimal "resolution" for FFP. To build a phylogenetic tree, we propose first representing each of the species by a TUP vector and then using an appropriate distance measure between pairs of TUP vectors for the tree construction. In particular, we propose summarizing a DNA sequence by a matrix of three rows corresponding to the three reading frames, recording the frequency distribution of the non-overlapping words of length 3 in each reading frame. We also provide a numerical measure for comparing trees constructed with various methods. Compared to the FFP method, our empirical study showed that the proposed TUP method builds phylogenetic trees with stronger biological support. We further provide some justification for this from the information-theory viewpoint. Unlike the FFP method, the TUP method takes advantage of the fact that the start of the first reading frame is (usually) known. Without this information, the FFP method could only rely on the frequency distribution of overlapping words, which is the average (or mixture) of the frequency distributions of the three possible reading frames. Consequently, we show (from the entropy viewpoint) that the FFP procedure could dilute important gene information and therefore provides less accurate classification.
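    A minimal sketch of the TUP representation described above, assuming a plain nucleotide string as input: the sequence is summarized as a 3 x 64 matrix of relative frequencies of non-overlapping length-3 words, one row per reading frame. The function name, the skipping of ambiguous bases, and the use of a Euclidean distance at the end are illustrative choices, not the paper's exact implementation.

```python
from itertools import product
import numpy as np

CODONS = ["".join(p) for p in product("ACGT", repeat=3)]
CODON_INDEX = {c: i for i, c in enumerate(CODONS)}

def tup_matrix(seq):
    """Relative frequency of non-overlapping length-3 words in each of the
    three reading frames: returns a 3 x 64 matrix whose rows sum to 1."""
    seq = seq.upper()
    mat = np.zeros((3, 64))
    for frame in range(3):
        for i in range(frame, len(seq) - 2, 3):
            word = seq[i:i + 3]
            if word in CODON_INDEX:          # skip ambiguous bases such as 'N'
                mat[frame, CODON_INDEX[word]] += 1
        total = mat[frame].sum()
        if total > 0:
            mat[frame] /= total
    return mat

# Hypothetical usage: distance between two species' TUP vectors.
a = tup_matrix("ATGGCGTACGTTAGCATGCCCGTA")
b = tup_matrix("ATGGCATACGTAAGCATGCCAGTA")
print(np.linalg.norm(a.flatten() - b.flatten()))
```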

  18. Distributed collaborative probabilistic design for turbine blade-tip radial running clearance using support vector machine of regression

    NASA Astrophysics Data System (ADS)

    Fei, Cheng-Wei; Bai, Guang-Chen

    2014-12-01

To improve the computational precision and efficiency of probabilistic design for mechanical dynamic assemblies such as the blade-tip radial running clearance (BTRRC) of a gas turbine, a distributed collaborative probabilistic design method based on support vector machine regression (called DCSRM) is proposed by integrating the distributed collaborative response surface method with a support vector machine regression model. The mathematical model of DCSRM is established and its probabilistic design concept is introduced. The dynamic assembly probabilistic design of an aeroengine high-pressure turbine (HPT) BTRRC is carried out to verify the proposed DCSRM. The analysis results yield the optimal static blade-tip clearance of the HPT for designing the BTRRC and improving the performance and reliability of the aeroengine. The comparison of methods shows that DCSRM has high computational accuracy and efficiency in BTRRC probabilistic analysis. The present research offers an effective way for the reliability design of mechanical dynamic assemblies and enriches mechanical reliability theory and methods.

  19. Computational Analysis of the Caenorhabditis elegans Germline to Study the Distribution of Nuclei, Proteins, and the Cytoskeleton.

    PubMed

    Gopal, Sandeep; Pocock, Roger

    2018-04-19

    The Caenorhabditis elegans (C. elegans) germline is used to study several biologically important processes including stem cell development, apoptosis, and chromosome dynamics. While the germline is an excellent model, the analysis is often two dimensional due to the time and labor required for three-dimensional analysis. Major readouts in such studies are the number/position of nuclei and protein distribution within the germline. Here, we present a method to perform automated analysis of the germline using confocal microscopy and computational approaches to determine the number and position of nuclei in each region of the germline. Our method also analyzes germline protein distribution that enables the three-dimensional examination of protein expression in different genetic backgrounds. Further, our study shows variations in cytoskeletal architecture in distinct regions of the germline that may accommodate specific spatial developmental requirements. Finally, our method enables automated counting of the sperm in the spermatheca of each germline. Taken together, our method enables rapid and reproducible phenotypic analysis of the C. elegans germline.

  20. A likelihood method for measuring the ultrahigh energy cosmic ray composition

    NASA Astrophysics Data System (ADS)

    High Resolution Fly'S Eye Collaboration; Abu-Zayyad, T.; Amman, J. F.; Archbold, G. C.; Belov, K.; Blake, S. A.; Belz, J. W.; Benzvi, S.; Bergman, D. R.; Boyer, J. H.; Burt, G. W.; Cao, Z.; Connolly, B. M.; Deng, W.; Fedorova, Y.; Findlay, J.; Finley, C. B.; Hanlon, W. F.; Hoffman, C. M.; Holzscheiter, M. H.; Hughes, G. A.; Hüntemeyer, P.; Jui, C. C. H.; Kim, K.; Kirn, M. A.; Knapp, B. C.; Loh, E. C.; Maestas, M. M.; Manago, N.; Mannel, E. J.; Marek, L. J.; Martens, K.; Matthews, J. A. J.; Matthews, J. N.; O'Neill, A.; Painter, C. A.; Perera, L.; Reil, K.; Riehle, R.; Roberts, M.; Rodriguez, D.; Sasaki, M.; Schnetzer, S.; Seman, M.; Sinnis, G.; Smith, J. D.; Snow, R.; Sokolsky, P.; Springer, R. W.; Stokes, B. T.; Thomas, J. R.; Thomas, S. B.; Thomson, G. B.; Tupa, D.; Westerhoff, S.; Wiencke, L. R.; Zech, A.

    2006-08-01

Air fluorescence detectors traditionally determine the dominant chemical composition of the ultrahigh energy cosmic ray flux by comparing the averaged slant depth of the shower maximum, Xmax, as a function of energy to the slant depths expected for various hypothesized primaries. In this paper, we present a method to make a direct measurement of the expected mean number of protons and iron by comparing the shapes of the expected Xmax distributions to the distribution for data. The advantages of this method include the use of information from the full distribution and its ability to calculate a flux for various cosmic ray compositions. The same method can be expanded to marginalize uncertainties due to the choice of spectra, hadronic models and atmospheric parameters. We demonstrate the technique with independent simulated data samples from a parent sample of protons and iron. We accurately predict the number of protons and iron in the parent sample and show that the uncertainties are meaningful.
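    A minimal sketch of the binned-likelihood idea behind this kind of composition fit: the observed Xmax histogram is modeled as a Poisson mixture of per-primary templates, and the proton and iron yields are fitted. The Gaussian template shapes, bin ranges, and sample sizes below are placeholders; in the actual analysis the templates come from detailed air-shower simulations.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Placeholder template shapes for the Xmax distributions (g/cm^2).
bins = np.linspace(600.0, 900.0, 31)
centers = 0.5 * (bins[:-1] + bins[1:])
width = bins[1] - bins[0]
template_p = norm.pdf(centers, loc=780.0, scale=60.0) * width   # proton-like
template_fe = norm.pdf(centers, loc=700.0, scale=40.0) * width  # iron-like

def neg_log_likelihood(counts_pf, data_counts):
    n_p, n_fe = counts_pf
    if n_p < 0 or n_fe < 0:
        return np.inf
    mu = n_p * template_p + n_fe * template_fe   # expected counts per bin
    mu = np.clip(mu, 1e-12, None)
    return np.sum(mu - data_counts * np.log(mu)) # binned Poisson likelihood

# Hypothetical "data": 300 protons and 150 iron drawn from the templates.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(780, 60, 300), rng.normal(700, 40, 150)])
data_counts, _ = np.histogram(data, bins=bins)

fit = minimize(neg_log_likelihood, x0=[200.0, 200.0], args=(data_counts,),
               method="Nelder-Mead")
print("estimated (protons, iron):", fit.x)
```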

  1. Estimating the mean and standard deviation of environmental data with below detection limit observations: Considering highly skewed data and model misspecification.

    PubMed

    Shoari, Niloofar; Dubé, Jean-Sébastien; Chenouri, Shoja'eddin

    2015-11-01

    In environmental studies, concentration measurements frequently fall below detection limits of measuring instruments, resulting in left-censored data. Some studies employ parametric methods such as the maximum likelihood estimator (MLE), robust regression on order statistic (rROS), and gamma regression on order statistic (GROS), while others suggest a non-parametric approach, the Kaplan-Meier method (KM). Using examples of real data from a soil characterization study in Montreal, we highlight the need for additional investigations that aim at unifying the existing literature. A number of studies have examined this issue; however, those considering data skewness and model misspecification are rare. These aspects are investigated in this paper through simulations. Among other findings, results show that for low skewed data, the performance of different statistical methods is comparable, regardless of the censoring percentage and sample size. For highly skewed data, the performance of the MLE method under lognormal and Weibull distributions is questionable; particularly, when the sample size is small or censoring percentage is high. In such conditions, MLE under gamma distribution, rROS, GROS, and KM are less sensitive to skewness. Related to model misspecification, MLE based on lognormal and Weibull distributions provides poor estimates when the true distribution of data is misspecified. However, the methods of rROS, GROS, and MLE under gamma distribution are generally robust to model misspecifications regardless of skewness, sample size, and censoring percentage. Since the characteristics of environmental data (e.g., type of distribution and skewness) are unknown a priori, we suggest using MLE based on gamma distribution, rROS and GROS. Copyright © 2015 Elsevier Ltd. All rights reserved.
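    For one of the parametric options compared above, the sketch below shows a left-censored maximum-likelihood fit under a lognormal assumption: detected values contribute their log-density and non-detects contribute the log-probability of falling below the detection limit. The data, detection limit, and function name are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def censored_lognormal_mle(observed, detection_limits):
    """MLE of (mu, sigma) on the log scale when values below the detection
    limit are only known to lie below it (left-censoring)."""
    log_obs = np.log(np.asarray(observed, dtype=float))
    log_dl = np.log(np.asarray(detection_limits, dtype=float))

    def nll(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)
        ll_obs = norm.logpdf(log_obs, mu, sigma).sum()   # detected values
        ll_cen = norm.logcdf(log_dl, mu, sigma).sum()    # censored values
        return -(ll_obs + ll_cen)

    fit = minimize(nll, x0=[np.mean(log_obs), 0.0], method="Nelder-Mead")
    return fit.x[0], np.exp(fit.x[1])

# Hypothetical example: part of a lognormal sample falls below a limit of 1.0.
rng = np.random.default_rng(2)
sample = rng.lognormal(mean=0.5, sigma=1.0, size=200)
dl = 1.0
mu_hat, sigma_hat = censored_lognormal_mle(sample[sample >= dl],
                                            np.full((sample < dl).sum(), dl))
print(mu_hat, sigma_hat)
```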

  2. Kinetic compensation effect in logistic distributed activation energy model for lignocellulosic biomass pyrolysis.

    PubMed

    Xu, Di; Chai, Meiyun; Dong, Zhujun; Rahman, Md Maksudur; Yu, Xi; Cai, Junmeng

    2018-06-04

    The kinetic compensation effect in the logistic distributed activation energy model (DAEM) for lignocellulosic biomass pyrolysis was investigated. The sum of square error (SSE) surface tool was used to analyze two theoretically simulated logistic DAEM processes for cellulose and xylan pyrolysis. The logistic DAEM coupled with the pattern search method for parameter estimation was used to analyze the experimental data of cellulose pyrolysis. The results showed that many parameter sets of the logistic DAEM could fit the data at different heating rates very well for both simulated and experimental processes, and a perfect linear relationship between the logarithm of the frequency factor and the mean value of the activation energy distribution was found. The parameters of the logistic DAEM can be estimated by coupling the optimization method and isoconversional kinetic methods. The results would be helpful for chemical kinetic analysis using DAEM. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. Electroosmotic flow and mixing in microchannels with the lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Tang, G. H.; Li, Zhuo; Wang, J. K.; He, Y. L.; Tao, W. Q.

    2006-11-01

    Understanding the electroosmotic flow in microchannels is of both fundamental and practical significance for the design and optimization of various microfluidic devices to control fluid motion. In this paper, a lattice Boltzmann equation, which recovers the nonlinear Poisson-Boltzmann equation, is used to solve the electric potential distribution in the electrolytes, and another lattice Boltzmann equation, which recovers the Navier-Stokes equation including the external force term, is used to solve the velocity fields. The method is validated by the electric potential distribution in the electrolytes and the pressure driven pulsating flow. Steady-state and pulsating electroosmotic flows in two-dimensional parallel uniform and nonuniform charged microchannels are studied with this lattice Boltzmann method. The simulation results show that the heterogeneous surface potential distribution and the electroosmotic pulsating flow can induce chaotic advection and thus enhance the mixing in microfluidic systems efficiently.

  4. Coordinated distribution network control of tap changer transformers, capacitors and PV inverters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ceylan, Oğuzhan; Liu, Guodong; Tomsovic, Kevin

A power distribution system operates most efficiently with voltage deviations along a feeder kept to a minimum and must ensure all voltages remain within specified limits. Recently, with the increased integration of photovoltaics, the variable power output has led to increased voltage fluctuations and violations of operating limits. This study proposes an optimization model based on a recently developed heuristic search method, grey wolf optimization, to coordinate the various distribution controllers. Several different case studies on IEEE 33 and 69 bus test systems, modified by including tap changing transformers, capacitors and photovoltaic solar panels, are performed. Simulation results are compared to two other heuristic-based optimization methods: harmony search and differential evolution. Finally, the simulation results show the effectiveness of the method and indicate that using the reactive power outputs of the PVs facilitates a better voltage magnitude profile.

  5. Coordinated distribution network control of tap changer transformers, capacitors and PV inverters

    DOE PAGES

    Ceylan, Oğuzhan; Liu, Guodong; Tomsovic, Kevin

    2017-06-08

A power distribution system operates most efficiently with voltage deviations along a feeder kept to a minimum and must ensure all voltages remain within specified limits. Recently, with the increased integration of photovoltaics, the variable power output has led to increased voltage fluctuations and violations of operating limits. This study proposes an optimization model based on a recently developed heuristic search method, grey wolf optimization, to coordinate the various distribution controllers. Several different case studies on IEEE 33 and 69 bus test systems, modified by including tap changing transformers, capacitors and photovoltaic solar panels, are performed. Simulation results are compared to two other heuristic-based optimization methods: harmony search and differential evolution. Finally, the simulation results show the effectiveness of the method and indicate that using the reactive power outputs of the PVs facilitates a better voltage magnitude profile.
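    A minimal, generic grey wolf optimization loop is sketched below to illustrate the search strategy named in these two records: candidate solutions move toward the three best wolves with a coefficient that decays over the iterations. The objective function is a placeholder standing in for a power-flow-based voltage-deviation calculation, and all names and bounds are hypothetical, not the authors' implementation.

```python
import numpy as np

def grey_wolf_optimize(objective, bounds, n_wolves=20, n_iter=100, seed=0):
    """Minimal grey wolf optimizer: wolves move toward the three current best
    solutions (alpha, beta, delta) with a linearly decaying coefficient."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    wolves = rng.uniform(lo, hi, size=(n_wolves, len(lo)))
    for t in range(n_iter):
        fitness = np.apply_along_axis(objective, 1, wolves)
        order = np.argsort(fitness)
        alpha, beta, delta = wolves[order[:3]]
        a = 2.0 - 2.0 * t / n_iter                    # decays from 2 to 0
        for i in range(n_wolves):
            new_pos = np.zeros(len(lo))
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(len(lo)), rng.random(len(lo))
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - wolves[i])
                new_pos += (leader - A * D) / 3.0
            wolves[i] = np.clip(new_pos, lo, hi)
    fitness = np.apply_along_axis(objective, 1, wolves)
    return wolves[np.argmin(fitness)], fitness.min()

# Placeholder objective: squared deviation of "bus voltages" from 1.0 p.u.,
# with the decision variables standing in for tap/capacitor/PV set points.
def voltage_deviation(x):
    return np.sum((1.0 + 0.05 * np.sin(x) - 1.0) ** 2)

bounds = np.array([[-1.0, 1.0]] * 5)
best, val = grey_wolf_optimize(voltage_deviation, bounds)
print(best, val)
```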

  6. Optimal thresholds for the estimation of area rain-rate moments by the threshold method

    NASA Technical Reports Server (NTRS)

    Short, David A.; Shimizu, Kunio; Kedem, Benjamin

    1993-01-01

    Optimization of the threshold method, achieved by determination of the threshold that maximizes the correlation between an area-average rain-rate moment and the area coverage of rain rates exceeding the threshold, is demonstrated empirically and theoretically. Empirical results for a sequence of GATE radar snapshots show optimal thresholds of 5 and 27 mm/h for the first and second moments, respectively. Theoretical optimization of the threshold method by the maximum-likelihood approach of Kedem and Pavlopoulos (1991) predicts optimal thresholds near 5 and 26 mm/h for lognormally distributed rain rates with GATE-like parameters. The agreement between theory and observations suggests that the optimal threshold can be understood as arising due to sampling variations, from snapshot to snapshot, of a parent rain-rate distribution. Optimal thresholds for gamma and inverse Gaussian distributions are also derived and compared.
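    The empirical optimization described above can be illustrated with a short sketch: for a set of synthetic lognormal rain-rate snapshots, correlate the area-average rain rate with the fractional coverage above each candidate threshold and take the maximizer. The snapshot parameters are illustrative, not the GATE radar data.

```python
import numpy as np

rng = np.random.default_rng(3)
n_snapshots, n_pixels = 200, 1000

# Synthetic snapshots: lognormal rain rates whose parameters vary from
# snapshot to snapshot (the parameter ranges are illustrative only).
snapshots = [rng.lognormal(mean=rng.normal(0.0, 0.5),
                           sigma=rng.uniform(1.0, 1.5),
                           size=n_pixels)
             for _ in range(n_snapshots)]

area_mean = np.array([s.mean() for s in snapshots])     # first moment

thresholds = np.linspace(0.5, 30.0, 60)                 # mm/h
correlations = []
for tau in thresholds:
    coverage = np.array([(s > tau).mean() for s in snapshots])
    correlations.append(np.corrcoef(area_mean, coverage)[0, 1])

best = thresholds[int(np.argmax(correlations))]
print(f"threshold maximizing correlation with the mean rain rate: {best:.1f} mm/h")
```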

  7. Characterization of the Distance Relationship Between Localized Serotonin Receptors and Glia Cells on Fluorescence Microscopy Images of Brain Tissue.

    PubMed

    Jacak, Jaroslaw; Schaller, Susanne; Borgmann, Daniela; Winkler, Stephan M

    2015-08-01

    We here present two new methods for the characterization of fluorescent localization microscopy images obtained from immunostained brain tissue sections. Direct stochastic optical reconstruction microscopy images of 5-HT1A serotonin receptors and glial fibrillary acidic proteins in healthy cryopreserved brain tissues are analyzed. In detail, we here present two image processing methods for characterizing differences in receptor distribution on glial cells and their distribution on neural cells: One variant relies on skeleton extraction and adaptive thresholding, the other on k-means based discrete layer segmentation. Experimental results show that both methods can be applied for distinguishing classes of images with respect to serotonin receptor distribution. Quantification of nanoscopic changes in relative protein expression on particular cell types can be used to analyze degeneration in tissues caused by diseases or medical treatment.

  8. Software reliability: Additional investigations into modeling with replicated experiments

    NASA Technical Reports Server (NTRS)

    Nagel, P. M.; Schotz, F. M.; Skirvan, J. A.

    1984-01-01

The effects of programmer experience level, different program usage distributions, and programming languages are explored. All these factors affect performance, and some tentative relational hypotheses are presented. An analytic framework for replicated and non-replicated (traditional) software experiments is presented. A method of obtaining an upper bound on the error rate of the next error is proposed. The method was validated empirically by comparing forecasts with actual data. In all 14 cases the bound exceeded the observed parameter, albeit somewhat conservatively. Two other forecasting methods are proposed and compared to observed results. Although it is demonstrated within this framework that stages are neither independent nor exponentially distributed, empirical estimates show that the exponential assumption is nearly valid for all but the extreme tails of the distribution. Except for the dependence in the stage probabilities, Cox's model approximates to a degree what is being observed.

  9. Connecting micro dynamics and population distributions in system dynamics models

    PubMed Central

    Rahmandad, Hazhir; Chen, Hsin-Jen; Xue, Hong; Wang, Youfa

    2014-01-01

    Researchers use system dynamics models to capture the mean behavior of groups of indistinguishable population elements (e.g., people) aggregated in stock variables. Yet, many modeling problems require capturing the heterogeneity across elements with respect to some attribute(s) (e.g., body weight). This paper presents a new method to connect the micro-level dynamics associated with elements in a population with the macro-level population distribution along an attribute of interest without the need to explicitly model every element. We apply the proposed method to model the distribution of Body Mass Index and its changes over time in a sample population of American women obtained from the U.S. National Health and Nutrition Examination Survey. Comparing the results with those obtained from an individual-based model that captures the same phenomena shows that our proposed method delivers accurate results with less computation than the individual-based model. PMID:25620842

  10. An orientation measurement method based on Hall-effect sensors for permanent magnet spherical actuators with 3D magnet array.

    PubMed

    Yan, Liang; Zhu, Bo; Jiao, Zongxia; Chen, Chin-Yin; Chen, I-Ming

    2014-10-24

An orientation measurement method based on Hall-effect sensors is proposed for permanent magnet (PM) spherical actuators with a three-dimensional (3D) magnet array. As there is no contact between the measurement system and the rotor, this method effectively avoids the friction torque and additional inertial moment present in conventional approaches. A curved-surface fitting method based on exponential approximation is proposed to formulate the magnetic field distribution in 3D space; comparison with a conventional modeling method shows that it improves the model accuracy. The Hall-effect sensors are distributed around the rotor with PM poles to detect the flux density at different points, so the rotor orientation can be computed from the measured results and the analytical models. Experiments have been conducted on the developed research prototype of the spherical actuator to validate the accuracy of the analytical equations relating the rotor orientation to the magnetic flux density. The experimental results show that the proposed method can measure the rotor orientation precisely, and that the measurement accuracy is improved by the novel 3D magnet array. The results could be used for real-time motion control of PM spherical actuators.

  11. Assessing differential expression in two-color microarrays: a resampling-based empirical Bayes approach.

    PubMed

    Li, Dongmei; Le Pape, Marc A; Parikh, Nisha I; Chen, Will X; Dye, Timothy D

    2013-01-01

Microarrays are widely used for examining differential gene expression, identifying single nucleotide polymorphisms, and detecting methylation loci. Multiple testing methods in microarray data analysis aim at controlling both Type I and Type II error rates; however, real microarray data do not always fit their distribution assumptions. Smyth's ubiquitous parametric method, for example, inadequately accommodates violations of normality assumptions, resulting in inflated Type I error rates. The Significance Analysis of Microarrays, another widely used microarray data analysis method, is based on a permutation test and is robust to non-normally distributed data; however, its fold change criteria are problematic and can critically alter the conclusion of a study as a result of compositional changes of the control data set in the analysis. We propose a novel approach combining resampling with empirical Bayes methods: the Resampling-based empirical Bayes Methods. This approach not only reduces false discovery rates for non-normally distributed microarray data, but is also impervious to the fold change threshold since no control data set selection is needed. Through simulation studies, sensitivities, specificities, total rejections, and false discovery rates are compared across Smyth's parametric method, the Significance Analysis of Microarrays, and the Resampling-based empirical Bayes Methods. Differences in false discovery rate control between the approaches are illustrated through a preterm delivery methylation study. The results show that the Resampling-based empirical Bayes Methods offer significantly higher specificity and lower false discovery rates compared to Smyth's parametric method when data are not normally distributed. The Resampling-based empirical Bayes Methods also offer higher statistical power than the Significance Analysis of Microarrays method when the proportion of significantly differentially expressed genes is large, for both normally and non-normally distributed data. Finally, the Resampling-based empirical Bayes Methods are generalizable to next generation sequencing RNA-seq data analysis.

  12. [Using the sequenced sample cluster analysis to study the body mass index distribution characteristics of adults in different age groups and genders].

    PubMed

    Cai, Y N; Pei, X T; Sun, P P; Xu, Y P; Liu, L; Ping, Z G

    2018-06-10

Objective: To explore the characteristics of the body mass index (BMI) distribution of Chinese adults in different age groups and genders, and to provide a reference for work on obesity and related chronic diseases. Methods: Data from the 2009 China Health and Nutrition Survey were used. The sequenced sample cluster method was applied in SAS to analyze the characteristics of the BMI distribution in different age groups and genders. Results: When ages were grouped in 5-year intervals, adult BMI in China clustered into 3 age groups, 20 to 40 years, 40 to 65 years, and over 65 years, for females and for the total sample; for males, the three groups were 20 to 40 years, 40 to 60 years, and over 60 years, indicating a difference in distribution between the male and female groups. When ages were grouped in 10-year intervals, the clusters for males, females, and the total sample all became 20-40, 40-60, and over 60 years, with no difference in distribution between genders, suggesting that the 5-year grouping is more accurate than the 10-year one and that BMI shows gender differences. Conclusions: The BMI of Chinese adults should be divided into 3 categories according to age. The results showed that BMI increases with age in younger adults, remains essentially unchanged in the middle-aged, and decreases in the elderly.

  13. Efficient Agent-Based Cluster Ensembles

    NASA Technical Reports Server (NTRS)

    Agogino, Adrian; Tumer, Kagan

    2006-01-01

    Numerous domains ranging from distributed data acquisition to knowledge reuse need to solve the cluster ensemble problem of combining multiple clusterings into a single unified clustering. Unfortunately current non-agent-based cluster combining methods do not work in a distributed environment, are not robust to corrupted clusterings and require centralized access to all original clusterings. Overcoming these issues will allow cluster ensembles to be used in fundamentally distributed and failure-prone domains such as data acquisition from satellite constellations, in addition to domains demanding confidentiality such as combining clusterings of user profiles. This paper proposes an efficient, distributed, agent-based clustering ensemble method that addresses these issues. In this approach each agent is assigned a small subset of the data and votes on which final cluster its data points should belong to. The final clustering is then evaluated by a global utility, computed in a distributed way. This clustering is also evaluated using an agent-specific utility that is shown to be easier for the agents to maximize. Results show that agents using the agent-specific utility can achieve better performance than traditional non-agent based methods and are effective even when up to 50% of the agents fail.

  14. A flexible approach to distributed data anonymization.

    PubMed

    Kohlmayer, Florian; Prasser, Fabian; Eckert, Claudia; Kuhn, Klaus A

    2014-08-01

    Sensitive biomedical data is often collected from distributed sources, involving different information systems and different organizational units. Local autonomy and legal reasons lead to the need of privacy preserving integration concepts. In this article, we focus on anonymization, which plays an important role for the re-use of clinical data and for the sharing of research data. We present a flexible solution for anonymizing distributed data in the semi-honest model. Prior to the anonymization procedure, an encrypted global view of the dataset is constructed by means of a secure multi-party computing (SMC) protocol. This global representation can then be anonymized. Our approach is not limited to specific anonymization algorithms but provides pre- and postprocessing for a broad spectrum of algorithms and many privacy criteria. We present an extensive analytical and experimental evaluation and discuss which types of methods and criteria are supported. Our prototype demonstrates the approach by implementing k-anonymity, ℓ-diversity, t-closeness and δ-presence with a globally optimal de-identification method in horizontally and vertically distributed setups. The experiments show that our method provides highly competitive performance and offers a practical and flexible solution for anonymizing distributed biomedical datasets. Copyright © 2013 Elsevier Inc. All rights reserved.

  15. Modeling utilization distributions in space and time

    USGS Publications Warehouse

    Keating, K.A.; Cherry, S.

    2009-01-01

W. Van Winkle defined the utilization distribution (UD) as a probability density that gives an animal's relative frequency of occurrence in a two-dimensional (x, y) plane. We extend Van Winkle's work by redefining the UD as the relative frequency distribution of an animal's occurrence in all four dimensions of space and time. We then describe a product kernel model estimation method, devising a novel kernel from the wrapped Cauchy distribution to handle circularly distributed temporal covariates, such as day of year. Using Monte Carlo simulations of animal movements in space and time, we assess estimator performance. Although not unbiased, the product kernel method yields models highly correlated (Pearson's r = 0.975) with true probabilities of occurrence and successfully captures temporal variations in density of occurrence. In an empirical example, we estimate the expected UD in three dimensions (x, y, and t) for animals belonging to each of two distinct bighorn sheep (Ovis canadensis) social groups in Glacier National Park, Montana, USA. Results show the method can yield ecologically informative models that successfully depict temporal variations in density of occurrence for a seasonally migratory species. Some implications of this new approach to UD modeling are discussed. © 2009 by the Ecological Society of America.
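    A minimal sketch of the product-kernel idea with a wrapped Cauchy kernel for the circular temporal covariate is given below, assuming Gaussian kernels in (x, y), day of year mapped to the circle, and placeholder bandwidth values; the relocation data, bandwidths, and function names are hypothetical, not the Glacier National Park data or the authors' estimator.

```python
import numpy as np

def wrapped_cauchy(theta, mu, rho):
    """Wrapped Cauchy density on the circle, used for circular covariates
    such as day of year mapped to [0, 2*pi)."""
    return (1.0 - rho ** 2) / (
        2.0 * np.pi * (1.0 + rho ** 2 - 2.0 * rho * np.cos(theta - mu)))

def ud_product_kernel(query_xy, query_doy, obs_xy, obs_doy,
                      h_xy=500.0, rho=0.9):
    """Relative density of occurrence at (x, y, day-of-year) as the average of
    a Gaussian spatial kernel times a wrapped Cauchy temporal kernel."""
    theta_q = 2.0 * np.pi * query_doy / 365.0
    theta_o = 2.0 * np.pi * obs_doy / 365.0
    d2 = np.sum((obs_xy - query_xy) ** 2, axis=1)
    k_space = np.exp(-0.5 * d2 / h_xy ** 2) / (2.0 * np.pi * h_xy ** 2)
    k_time = wrapped_cauchy(theta_o, theta_q, rho)
    return np.mean(k_space * k_time)

# Hypothetical relocations: positions in metres and day-of-year of each fix.
rng = np.random.default_rng(4)
obs_xy = rng.normal([0.0, 0.0], 1000.0, size=(300, 2))
obs_doy = rng.integers(1, 366, size=300)
print(ud_product_kernel(np.array([100.0, -50.0]), 180, obs_xy, obs_doy))
```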

  16. MP estimation applied to platykurtic sets of geodetic observations

    NASA Astrophysics Data System (ADS)

    Wiśniewski, Zbigniew

    2017-06-01

MP estimation is a method for estimating location parameters when the probabilistic models of the observations differ from the normal distribution in kurtosis or asymmetry. The system of Pearson distributions is the probabilistic basis for the method. So far, the method has been applied and analyzed mostly for leptokurtic or mesokurtic distributions (Pearson distributions of types IV or VII), which predominate in practical cases. Analyses of geodetic and astronomical observations show that we may also deal with sets that have moderate asymmetry or small negative excess kurtosis. Asymmetry might result from the influence of many small systematic errors that were not eliminated during preprocessing of the data. The excess kurtosis can be related to a larger or smaller (relative to the Hagen hypothesis) frequency of occurrence of elementary errors close to zero. Considering that, this paper focuses on estimation with the Pearson platykurtic distributions of types I or II. The paper presents the solution of the corresponding optimization problem and its basic properties. Although platykurtic distributions are rare in practice, it is an interesting question what results MP estimation can provide for such observation distributions. The numerical tests presented in the paper are rather limited; however, they allow us to draw some general conclusions.

  17. Using independent component analysis for electrical impedance tomography

    NASA Astrophysics Data System (ADS)

    Yan, Peimin; Mo, Yulong

    2004-05-01

Independent component analysis (ICA) is a way to resolve signals into independent components based on the statistical characteristics of the signals. It is a method for factoring probability densities of measured signals into a set of densities that are as statistically independent as possible under the assumptions of a linear model. Electrical impedance tomography (EIT) is used to detect variations of the electric conductivity of the human body. Because the conductivity distributions inside the body vary, EIT produces multi-channel data. To recover all of the information contained at different locations in the tissue, it is necessary to image the individual conductivity distributions. In this paper we consider applying ICA to EIT on the signal subspace (the individual conductivity distributions). Using ICA, the signal subspace is decomposed into statistically independent components, and the individual conductivity distributions are reconstructed by means of the sensitivity theorem. Computer simulations show that the full information contained in the multi-conductivity distribution can be obtained by this method.

  18. Anti-islanding Protection of Distributed Generation Using Rate of Change of Impedance

    NASA Astrophysics Data System (ADS)

    Shah, Pragnesh; Bhalja, Bhavesh

    2013-08-01

Distributed Generation (DG) interconnected with a distribution system inevitably affects that system. Integrating DG with the utility network demands an anti-islanding scheme to protect the system; failure to trip islanded generators can lead to problems such as threats to personnel safety, out-of-phase reclosing, and degradation of power quality. In this article, a new method for anti-islanding protection based on impedance monitoring of the distribution network in the presence of DG is presented. The impedance measured between two phases is used to derive the rate of change of impedance (dz/dt), and its peak values are used for the final trip decision. Test data are generated using the PSCAD/EMTDC software package and the performance of the proposed method is evaluated in MATLAB. The simulation results show the effectiveness of the proposed scheme, as it is capable of detecting the islanding condition accurately. It is also observed that the proposed scheme does not mal-operate during other disturbances such as short circuits and switching events.

  19. Prediction of sound transmission loss through multilayered panels by using Gaussian distribution of directional incident energy

    PubMed

    Kang; Ih; Kim; Kim

    2000-03-01

In this study, a new prediction method is suggested for the sound transmission loss (STL) of multilayered panels of infinite extent. Conventional methods such as the random or field incidence approaches often give significant discrepancies in predicting the STL of multilayered panels when compared with experiments. In this paper, appropriate directional distributions of incident energy for predicting the STL of multilayered panels are proposed. In order to find a weighting function representing the directional distribution of incident energy on the wall of a reverberation chamber, numerical simulations using a ray-tracing technique are carried out. The simulation results reveal that the directional distribution can be approximately expressed by a Gaussian distribution function in terms of the angle of incidence. The Gaussian function is applied to predict the STL of various multilayered panel configurations as well as single panels. The comparison between measurement and prediction shows good agreement, which validates the proposed Gaussian function approach.
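    A minimal sketch of the angular weighting idea: the oblique-incidence transmission coefficient is averaged over angle with a Gaussian weight instead of the usual field-incidence cutoff. The single-panel mass-law transmission coefficient stands in for a full multilayer transfer-matrix model, and the Gaussian spread, panel mass, and frequency are assumed values rather than the paper's fitted distribution.

```python
import numpy as np

def tau_mass_law(theta, surface_density, freq, rho0=1.21, c0=343.0):
    """Oblique-incidence mass-law transmission coefficient of a single panel
    (placeholder for a full multilayer transfer-matrix calculation)."""
    z = 2.0 * np.pi * freq * surface_density * np.cos(theta) / (2.0 * rho0 * c0)
    return 1.0 / (1.0 + z ** 2)

def stl_gaussian_incidence(surface_density, freq, sigma_deg=40.0, n=2000):
    """Transmission loss with incident energy weighted by a Gaussian in the
    angle of incidence (sigma_deg is an assumed spread, not a fitted value)."""
    theta = np.linspace(0.0, np.pi / 2.0 - 1e-6, n)
    weight = np.exp(-0.5 * (np.degrees(theta) / sigma_deg) ** 2)
    # sin(theta)*cos(theta) accounts for solid angle and projected area.
    proj = np.sin(theta) * np.cos(theta)
    tau = tau_mass_law(theta, surface_density, freq)
    tau_avg = np.trapz(tau * weight * proj, theta) / np.trapz(weight * proj, theta)
    return -10.0 * np.log10(tau_avg)

# Example: 10 kg/m^2 panel at 1 kHz.
print(stl_gaussian_incidence(surface_density=10.0, freq=1000.0), "dB")
```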

  20. Numerical sedimentation particle-size analysis using the Discrete Element Method

    NASA Astrophysics Data System (ADS)

    Bravo, R.; Pérez-Aparicio, J. L.; Gómez-Hernández, J. J.

    2015-12-01

Sedimentation tests are widely used to determine the particle size distribution of a granular sample. In this work, the Discrete Element Method interacts with the simulation of flow using the well known one-way-coupling method, a computationally affordable approach for the time-consuming numerical simulation of the hydrometer, buoyancy and pipette sedimentation tests. These tests are used in the laboratory to determine the particle-size distribution of fine-grained aggregates. Five samples with different particle-size distributions are modeled by about six million rigid spheres projected onto two dimensions, with diameters ranging from 2.5 × 10^-6 m to 70 × 10^-6 m, forming a water suspension in a sedimentation cylinder. DEM simulates the particles' movement considering laminar-flow interactions of buoyancy, drag and lubrication forces. The simulation provides the temporal/spatial distributions of densities and concentrations of the suspension. The numerical simulations cannot replace the laboratory tests since they need the final granulometry as initial data, but, as the results show, these simulations can identify the strong and weak points of each method and eventually recommend useful variations and draw conclusions on their validity, aspects very difficult to achieve in the laboratory.

  1. Non-Gaussian probabilistic MEG source localisation based on kernel density estimation

    PubMed Central

    Mohseni, Hamid R.; Kringelbach, Morten L.; Woolrich, Mark W.; Baker, Adam; Aziz, Tipu Z.; Probert-Smith, Penny

    2014-01-01

    There is strong evidence to suggest that data recorded from magnetoencephalography (MEG) follows a non-Gaussian distribution. However, existing standard methods for source localisation model the data using only second order statistics, and therefore use the inherent assumption of a Gaussian distribution. In this paper, we present a new general method for non-Gaussian source estimation of stationary signals for localising brain activity from MEG data. By providing a Bayesian formulation for MEG source localisation, we show that the source probability density function (pdf), which is not necessarily Gaussian, can be estimated using multivariate kernel density estimators. In the case of Gaussian data, the solution of the method is equivalent to that of widely used linearly constrained minimum variance (LCMV) beamformer. The method is also extended to handle data with highly correlated sources using the marginal distribution of the estimated joint distribution, which, in the case of Gaussian measurements, corresponds to the null-beamformer. The proposed non-Gaussian source localisation approach is shown to give better spatial estimates than the LCMV beamformer, both in simulations incorporating non-Gaussian signals, and in real MEG measurements of auditory and visual evoked responses, where the highly correlated sources are known to be difficult to estimate. PMID:24055702

  2. Efficient measurement of large light source near-field color and luminance distributions for optical design and simulation

    NASA Astrophysics Data System (ADS)

    Kostal, Hubert; Kreysar, Douglas; Rykowski, Ronald

    2009-08-01

    The color and luminance distributions of large light sources are difficult to measure because of the size of the source and the physical space required for the measurement. We describe a method for the measurement of large light sources in a limited space that efficiently overcomes the physical limitations of traditional far-field measurement techniques. This method uses a calibrated, high dynamic range imaging colorimeter and a goniometric system to move the light source through an automated measurement sequence in the imaging colorimeter's field-of-view. The measurement is performed from within the near-field of the light source, enabling a compact measurement set-up. This method generates a detailed near-field color and luminance distribution model that can be directly converted to ray sets for optical design and that can be extrapolated to far-field distributions for illumination design. The measurements obtained show excellent correlation to traditional imaging colorimeter and photogoniometer measurement methods. The near-field goniometer approach that we describe is broadly applicable to general lighting systems, can be deployed in a compact laboratory space, and provides full near-field data for optical design and simulation.

  3. Parameter estimation for the exponential-normal convolution model for background correction of affymetrix GeneChip data.

    PubMed

    McGee, Monnie; Chen, Zhongxue

    2006-01-01

    There are many methods of correcting microarray data for non-biological sources of error. Authors routinely supply software or code so that interested analysts can implement their methods. Even with a thorough reading of associated references, it is not always clear how requisite parts of the method are calculated in the software packages. However, it is important to have an understanding of such details, as this understanding is necessary for proper use of the output, or for implementing extensions to the model. In this paper, the calculation of parameter estimates used in Robust Multichip Average (RMA), a popular preprocessing algorithm for Affymetrix GeneChip brand microarrays, is elucidated. The background correction method for RMA assumes that the perfect match (PM) intensities observed result from a convolution of the true signal, assumed to be exponentially distributed, and a background noise component, assumed to have a normal distribution. A conditional expectation is calculated to estimate signal. Estimates of the mean and variance of the normal distribution and the rate parameter of the exponential distribution are needed to calculate this expectation. Simulation studies show that the current estimates are flawed; therefore, new ones are suggested. We examine the performance of preprocessing under the exponential-normal convolution model using several different methods to estimate the parameters.
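    A minimal sketch of the conditional expectation used in this exponential-normal convolution model: with observed intensity O = S + B, signal S ~ Exp(alpha) and background B ~ N(mu, sigma^2), the background-adjusted value is E[S | O]. The parameter values below are illustrative, not estimates obtained from a chip, and the function name is hypothetical.

```python
import numpy as np
from scipy.stats import norm

def rma_background_adjust(pm, mu, sigma, alpha):
    """E[S | O = pm] under O = S + B with S ~ Exp(alpha), B ~ N(mu, sigma^2):
    a + b * (phi(a/b) - phi((pm - a)/b)) / (Phi(a/b) + Phi((pm - a)/b) - 1),
    where a = pm - mu - alpha * sigma^2 and b = sigma."""
    a = pm - mu - alpha * sigma ** 2
    b = sigma
    numer = norm.pdf(a / b) - norm.pdf((pm - a) / b)
    denom = norm.cdf(a / b) + norm.cdf((pm - a) / b) - 1.0
    return a + b * numer / denom

# Illustrative parameter values (in practice estimated from the PM intensities).
pm = np.array([80.0, 150.0, 400.0, 2000.0])
print(rma_background_adjust(pm, mu=100.0, sigma=30.0, alpha=0.005))
```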

  4. Cable Overheating Risk Warning Method Based on Impedance Parameter Estimation in Distribution Network

    NASA Astrophysics Data System (ADS)

    Yu, Zhang; Xiaohui, Song; Jianfang, Li; Fei, Gao

    2017-05-01

Cable overheating reduces the cable insulation level, accelerates insulation aging, and can even cause short-circuit faults, so identifying and warning of cable overheating risk is necessary for distribution network operators. A cable overheating risk warning method based on impedance parameter estimation is proposed in this paper to improve the safety and reliability of distribution network operation. First, a cable impedance estimation model is established using the least squares method on data from the distribution SCADA system to improve the accuracy of impedance parameter estimation. Second, a threshold value of the cable impedance is calculated from historical data, and a forecast value of the cable impedance is calculated from forecast SCADA data. Third, a library of cable overheating risk warning rules is established; the forecast cable impedance and its rate of change are computed, and the overheating risk of the cable line is flagged against the rules library according to the relationship between impedance variation and line temperature rise. The overheating risk warning method is simulated in the paper. The simulation results show that the method can identify the impedance and forecast the temperature rise of cable lines in a distribution network accurately. The overheating risk warning result can provide a decision basis for operation, maintenance, and repair.
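    A minimal sketch of the least-squares impedance estimation step, under an assumed series-impedance measurement model V_s - V_r = (R + jX) I built from SCADA voltage and current samples; the signal names, true impedance, and noise levels are hypothetical, not the paper's model or data.

```python
import numpy as np

def estimate_line_impedance(v_send, v_recv, current):
    """Least-squares estimate of the series impedance R + jX from complex
    sending/receiving-end voltages and line current: V_s - V_r = (R + jX) I."""
    dv = np.asarray(v_send) - np.asarray(v_recv)
    i = np.asarray(current)
    # Stack real and imaginary parts so R and X appear as real unknowns:
    # Re(dv) = R*Re(i) - X*Im(i), Im(dv) = R*Im(i) + X*Re(i).
    A = np.column_stack([np.concatenate([i.real, i.imag]),
                         np.concatenate([-i.imag, i.real])])
    b = np.concatenate([dv.real, dv.imag])
    (r, x), *_ = np.linalg.lstsq(A, b, rcond=None)
    return r, x

# Hypothetical SCADA samples: true impedance 0.4 + j0.3 ohm plus noise.
rng = np.random.default_rng(5)
i = rng.uniform(50, 150, 200) * np.exp(1j * rng.uniform(-0.3, 0.1, 200))
v_r = 10_000.0 + rng.normal(0, 5, 200)
v_s = v_r + (0.4 + 0.3j) * i + rng.normal(0, 2, 200)
print(estimate_line_impedance(v_s, v_r, i))
```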

  5. On the objective identification of flood seasons

    NASA Astrophysics Data System (ADS)

    Cunderlik, Juraj M.; Ouarda, Taha B. M. J.; BobéE, Bernard

    2004-01-01

    The determination of seasons of high and low probability of flood occurrence is a task with many practical applications in contemporary hydrology and water resources management. Flood seasons are generally identified subjectively by visually assessing the temporal distribution of flood occurrences and, then at a regional scale, verified by comparing the temporal distribution with distributions obtained at hydrologically similar neighboring sites. This approach is subjective, time consuming, and potentially unreliable. The main objective of this study is therefore to introduce a new, objective, and systematic method for the identification of flood seasons. The proposed method tests the significance of flood seasons by comparing the observed variability of flood occurrences with the theoretical flood variability in a nonseasonal model. The method also addresses the uncertainty resulting from sampling variability by quantifying the probability associated with the identified flood seasons. The performance of the method was tested on an extensive number of samples with different record lengths generated from several theoretical models of flood seasonality. The proposed approach was then applied on real data from a large set of sites with different flood regimes across Great Britain. The results show that the method can efficiently identify flood seasons from both theoretical and observed distributions of flood occurrence. The results were used for the determination of the main flood seasonality types in Great Britain.

  6. Combining flow cytometry and 16S rRNA gene pyrosequencing: a promising approach for drinking water monitoring and characterization.

    PubMed

    Prest, E I; El-Chakhtoura, J; Hammes, F; Saikaly, P E; van Loosdrecht, M C M; Vrouwenvelder, J S

    2014-10-15

    The combination of flow cytometry (FCM) and 16S rRNA gene pyrosequencing data was investigated for the purpose of monitoring and characterizing microbial changes in drinking water distribution systems. High frequency sampling (5 min intervals for 1 h) was performed at the outlet of a treatment plant and at one location in the full-scale distribution network. In total, 52 bulk water samples were analysed with FCM, pyrosequencing and conventional methods (adenosine-triphosphate, ATP; heterotrophic plate count, HPC). FCM and pyrosequencing results individually showed that changes in the microbial community occurred in the water distribution system, which were not detected with conventional monitoring. FCM data showed an increase in the total bacterial cell concentrations (from 345 ± 15 × 10^3 to 425 ± 35 × 10^3 cells mL^-1) and in the percentage of intact bacterial cells (from 39 ± 3.5% to 53 ± 4.4%) during water distribution. This shift was also observed in the FCM fluorescence fingerprints, which are characteristic of each water sample. A similar shift was detected in the microbial community composition as characterized with pyrosequencing, showing that FCM and genetic fingerprints are congruent. FCM and pyrosequencing data were subsequently combined for the calculation of cell concentration changes for each bacterial phylum. The results revealed an increase in cell concentrations of specific bacterial phyla (e.g., Proteobacteria), along with a decrease in other phyla (e.g., Actinobacteria), which could not be concluded from the two methods individually. The combination of FCM and pyrosequencing methods is a promising approach for future drinking water quality monitoring and for advanced studies on drinking water distribution pipeline ecology. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Experimental and numerical characterization of the sound pressure in standing wave acoustic levitators

    NASA Astrophysics Data System (ADS)

    Stindt, A.; Andrade, M. A. B.; Albrecht, M.; Adamowski, J. C.; Panne, U.; Riedel, J.

    2014-01-01

    A novel method for predictions of the sound pressure distribution in acoustic levitators is based on a matrix representation of the Rayleigh integral. This method allows for a fast calculation of the acoustic field within the resonator. To make sure that the underlying assumptions and simplifications are justified, this approach was tested by a direct comparison to experimental data. The experimental sound pressure distributions were recorded by high spatially resolved frequency selective microphone scanning. To emphasize the general applicability of the two approaches, the comparative studies were conducted for four different resonator geometries. In all cases, the results show an excellent agreement, demonstrating the accuracy of the matrix method.

  8. Optimal distribution of integration time for intensity measurements in Stokes polarimetry.

    PubMed

    Li, Xiaobo; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie; Hu, Haofeng

    2015-10-19

    We consider the typical Stokes polarimetry system, which performs four intensity measurements to estimate a Stokes vector. We show that if the total integration time of the intensity measurements is fixed, the variance of the Stokes vector estimator depends on the distribution of the integration time over the four intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the Stokes vector estimator can be decreased. In this paper, we obtain the closed-form solution of the optimal distribution of integration time by employing the Lagrange multiplier method. According to the theoretical analysis and a real-world experiment, it is shown that the total variance of the Stokes vector estimator can be significantly decreased, by about 40% in the case discussed in this paper. The method proposed in this paper can effectively decrease the measurement variance and thus statistically improve the measurement accuracy of the polarimetric system.
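
    The closed-form character of such a solution can be illustrated with a deliberately simplified variance model: if the total estimator variance is taken to be sum_i c_i / t_i for per-measurement noise factors c_i and integration times t_i constrained by sum_i t_i = T, the Lagrangian stationarity condition -c_i / t_i^2 + lambda = 0 gives t_i = T * sqrt(c_i) / sum_j sqrt(c_j). The additive model and the numerical factors below are our own illustrative assumptions, not the paper's polarimetric error analysis.

      import numpy as np

      def optimal_times(c, total_time):
          """Minimise sum(c_i / t_i) subject to sum(t_i) = total_time (Lagrange multipliers)."""
          c = np.asarray(c, dtype=float)
          return total_time * np.sqrt(c) / np.sqrt(c).sum()

      c = np.array([1.0, 2.5, 2.5, 4.0])          # hypothetical noise factors of the 4 measurements
      t_opt = optimal_times(c, total_time=1.0)
      var_opt = np.sum(c / t_opt)                 # variance with the optimal split
      var_equal = np.sum(c / np.full(4, 0.25))    # variance with an equal split, for comparison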

  9. Silymarin in liposomes and ethosomes: pharmacokinetics and tissue distribution in free-moving rats by high-performance liquid chromatography-tandem mass spectrometry.

    PubMed

    Chang, Li-Wen; Hou, Mei-Ling; Tsai, Tung-Hu

    2014-12-03

    The aim of this study was to prepare silymarin formulations (silymarin entrapped in liposomes and ethosomes, formulations referred to as LSM and ESM, respectively) to improve oral bioavailability of silymarin and evaluate its tissue distribution by liquid chromatography with tandem mass spectrometry (LC-MS/MS) in free-moving rats. Silibinin is the major active constituent of silymarin, which is the main component to be analyzed. A rapid, sensitive, and repeatable LC-MS/MS method was developed and validated in terms of precision, accuracy, and extraction recovery. Furthermore, the established method was applied to study the pharmacokinetics and tissue distribution of silymarin in rats. The size, ζ potential, and drug release of the formulations were characterized. These results showed that the LSM and ESM encapsulated formulations of silymarin may provide more efficient tissue distribution and increased oral bioavailability, thus improving its therapeutic bioactive properties in the body.

  10. Parameter estimation techniques based on optimizing goodness-of-fit statistics for structural reliability

    NASA Technical Reports Server (NTRS)

    Starlinger, Alois; Duffy, Stephen F.; Palko, Joseph L.

    1993-01-01

    New methods are presented that utilize the optimization of goodness-of-fit statistics in order to estimate Weibull parameters from failure data. It is assumed that the underlying population is characterized by a three-parameter Weibull distribution. Goodness-of-fit tests are based on the empirical distribution function (EDF). The EDF is a step function, calculated using failure data, and represents an approximation of the cumulative distribution function for the underlying population. Statistics (such as the Kolmogorov-Smirnov statistic and the Anderson-Darling statistic) measure the discrepancy between the EDF and the cumulative distribution function (CDF). These statistics are minimized with respect to the three Weibull parameters. Due to nonlinearities encountered in the minimization process, Powell's numerical optimization procedure is applied to obtain the optimum (minimum) value of the EDF statistic. Numerical examples show the applicability of these new estimation methods. The results are compared to the estimates obtained with Cooper's nonlinear regression algorithm.
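
    A minimal sketch of this idea in Python follows: the Anderson-Darling statistic between the sample EDF and a three-parameter Weibull CDF is minimised over the shape, scale and location parameters with Powell's method. The failure data, starting values and penalty handling are illustrative assumptions, not the estimators or test cases of the paper.

      import numpy as np
      from scipy import optimize, stats

      def anderson_darling(params, data):
          """Anderson-Darling discrepancy between the EDF of `data` and a
          three-parameter Weibull CDF (shape, scale, location)."""
          shape, scale, loc = params
          if shape <= 0 or scale <= 0 or loc >= data.min():
              return 1e12                      # large penalty for infeasible parameters
          x = np.sort(data)
          n = len(x)
          cdf = np.clip(stats.weibull_min.cdf(x, shape, loc=loc, scale=scale), 1e-12, 1 - 1e-12)
          i = np.arange(1, n + 1)
          return -n - np.mean((2 * i - 1) * (np.log(cdf) + np.log(1 - cdf[::-1])))

      failures = np.array([212., 245., 261., 283., 301., 325., 340., 362., 390., 421.])  # hypothetical strengths
      x0 = np.array([2.0, failures.std() * 1.5, 0.9 * failures.min()])
      res = optimize.minimize(anderson_darling, x0, args=(failures,), method="Powell")
      shape_hat, scale_hat, loc_hat = res.x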

  11. Oxygen Distributions-Evaluation of Computational Methods, Using a Stochastic Model for Large Tumour Vasculature, to Elucidate the Importance of Considering a Complete Vascular Network.

    PubMed

    Lagerlöf, Jakob H; Bernhardt, Peter

    2016-01-01

    The aim was to develop a general model that utilises a stochastic method to generate a vessel tree based on experimental data, and an associated irregular, macroscopic tumour. These will be used to evaluate two different methods for computing oxygen distribution. A vessel tree structure, and an associated tumour of 127 cm^3, were generated using a stochastic method and Bresenham's line algorithm to develop trees on two different scales and fusing them together. The vessel dimensions were adjusted through convolution and thresholding and each vessel voxel was assigned an oxygen value. Diffusion and consumption were modelled using a Green's function approach together with Michaelis-Menten kinetics. The computations were performed using a combined tree method (CTM) and an individual tree method (ITM). Five tumour sub-sections were compared to evaluate the methods. The oxygen distributions of the same tissue samples, using different methods of computation, were considerably less similar (root mean square deviation, RMSD≈0.02) than the distributions of different samples using CTM (0.001 < RMSD < 0.01). The deviations of ITM from CTM increase with lower oxygen values, resulting in ITM severely underestimating the level of hypoxia in the tumour. Kolmogorov-Smirnov (KS) tests showed that millimetre-scale samples may not represent the whole. The stochastic model managed to capture the heterogeneous nature of hypoxic fractions and, even though the simplified computation did not considerably alter the oxygen distribution, it led to an evident underestimation of tumour hypoxia, and thereby radioresistance. For a trustworthy computation of tumour oxygenation, the interaction between adjacent microvessel trees must not be neglected, which is why evaluation should be made at high resolution with the CTM applied to the entire tumour.

  12. Characterizing the topology of probabilistic biological networks.

    PubMed

    Todor, Andrei; Dobra, Alin; Kahveci, Tamer

    2013-01-01

    Biological interactions are often uncertain events that may or may not take place with some probability. This uncertainty leads to a massive number of alternative interaction topologies for each such network. The existing studies analyze the degree distribution of biological networks by assuming that all the given interactions take place under all circumstances. This strong and often incorrect assumption can lead to misleading results. In this paper, we address this problem and develop a sound mathematical basis to characterize networks in the presence of uncertain interactions. Using our mathematical representation, we develop a method that can accurately describe the degree distribution of such networks. We also take one more step and extend our method to accurately compute the joint-degree distributions of node pairs connected by edges. The number of possible network topologies grows exponentially with the number of uncertain interactions. However, the mathematical model we develop allows us to compute these degree distributions in polynomial time in the number of interactions. Our method works quickly even for entire protein-protein interaction (PPI) networks. It also helps us find an adequate mathematical model using maximum likelihood estimation (MLE). We perform a comparative study of node-degree and joint-degree distributions in two types of biological networks: the classical deterministic networks and the more flexible probabilistic networks. Our results confirm that power-law and log-normal models best describe degree distributions for both probabilistic and deterministic networks. Moreover, the inverse correlation of degrees of neighboring nodes shows that, in probabilistic networks, nodes with a large number of interactions prefer to interact with those with a small number of interactions more frequently than expected. We also show that probabilistic networks are more robust for node-degree distribution computation than the deterministic ones. All the data sets used, the software implemented and the alignments found in this paper are available at http://bioinformatics.cise.ufl.edu/projects/probNet/.
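
    The polynomial-time flavour of such a computation can be sketched with a simple example: when a node's incident edges occur independently with known probabilities, its degree follows a Poisson-binomial distribution that a short dynamic programme evaluates exactly, without enumerating the exponentially many topologies. The toy network and function names below are our own illustration, not the paper's representation (which also covers joint-degree distributions).

      import numpy as np

      def degree_distribution(edge_probs):
          """Exact degree distribution of one node whose incident edges occur
          independently with the given probabilities (Poisson-binomial)."""
          dist = np.array([1.0])
          for p in edge_probs:
              new = np.zeros(len(dist) + 1)
              new[:-1] += dist * (1.0 - p)   # edge absent: degree unchanged
              new[1:] += dist * p            # edge present: degree + 1
              dist = new
          return dist

      def network_degree_distribution(incidence):
          """Average the per-node distributions; `incidence` maps node -> list of edge probabilities."""
          max_d = max(len(v) for v in incidence.values())
          acc = np.zeros(max_d + 1)
          for probs in incidence.values():
              d = degree_distribution(probs)
              acc[:len(d)] += d
          return acc / len(incidence)

      toy = {"a": [0.9, 0.4], "b": [0.9, 0.7, 0.2], "c": [0.4, 0.7], "d": [0.2]}
      print(network_degree_distribution(toy))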

  13. A Method for Obtaining High Frequency, Global, IR-Based Convective Cloud Tops for Studies of the TTL

    NASA Technical Reports Server (NTRS)

    Pfister, Leonhard; Ueyama, Rei; Jensen, Eric; Schoeberl, Mark

    2017-01-01

    Models of varying complexity that simulate water vapor and clouds in the Tropical Tropopause Layer (TTL) show that including convection directly is essential to properly simulating the water vapor and cloud distribution. In boreal winter, for example, simulations without convection yield a water vapor distribution that is too uniform with longitude, as well as minimal cloud distributions. Two things are important for convective simulations. First, it is important to get the convective cloud top potential temperature correct, since unrealistically high values (reaching above the cold point tropopause too frequently) will cause excessive hydration of the stratosphere. Second, one must capture the time variation as well, since hydration by convection depends on the local relative humidity (temperature), which has substantial variation on synoptic time scales in the TTL. This paper describes a method for obtaining high frequency (3-hourly) global convective cloud top distributions which can be used in trajectory models. The method uses rainfall thresholds, standard IR brightness temperatures, meteorological temperature analyses, and physically realistic and documented IR brightness temperature corrections to derive cloud top altitudes and potential temperatures. The cloud top altitudes compare well with combined CLOUDSAT and CALIPSO data, both in time-averaged overall vertical and horizontal distributions and in individual cases (correlations of 0.65-0.7). An important finding is that there is significant uncertainty (nearly 0.5 km) in evaluating the statistical distribution of convective cloud tops even using lidar. Deep convection whose tops are in regions of high relative humidity (such as much of the TTL) will cause clouds to form above the actual convection. It is often difficult to distinguish these clouds from the actual convective cloud due to the uncertainties of evaluating ice water content from lidar measurements. Comparison with models shows that calculated cloud top altitudes are generally higher than those calculated by global analyses (e.g., MERRA). Interannual variability in the distribution of convective cloud top altitudes is also investigated.

  14. a Landmark Extraction Method Associated with Geometric Features and Location Distribution

    NASA Astrophysics Data System (ADS)

    Zhang, W.; Li, J.; Wang, Y.; Xiao, Y.; Liu, P.; Zhang, S.

    2018-04-01

    Landmarks play an important role in spatial cognition and spatial knowledge organization. Significance measuring models are the main method of landmark extraction, but they find it difficult to take account of the spatial distribution pattern of landmarks because landmark significance is defined in a one-dimensional space. In this paper, starting from the geometric features of ground objects, an extraction method based on target height, target gap and field of view is proposed. Based on the influence regions of a Voronoi diagram, the target gap is described as a geometric representation of the distribution of adjacent targets. A segmentation of the visual domain based on Voronoi K-order adjacency is then used to establish the target's field of view under multiple views; finally, landmarks are identified by weighting the three kinds of geometric features. Comparative experiments show that this method largely coincides with the results of the traditional significance measuring model, which verifies its effectiveness and reliability while reducing the complexity of the landmark extraction process without losing the reference value of landmarks.

  15. Interpolation of orientation distribution functions in diffusion weighted imaging using multi-tensor model.

    PubMed

    Afzali, Maryam; Fatemizadeh, Emad; Soltanian-Zadeh, Hamid

    2015-09-30

    Diffusion weighted imaging (DWI) is a non-invasive method for investigating the brain white matter structure and can be used to evaluate fiber bundles. However, due to practical constraints, DWI data acquired in clinics are low resolution. This paper proposes a method for interpolation of orientation distribution functions (ODFs). To this end, fuzzy clustering is applied to segment ODFs based on the principal diffusion directions (PDDs). Next, a cluster is modeled by a tensor so that an ODF is represented by a mixture of tensors. For interpolation, each tensor is rotated separately. The method is applied to synthetic and real DWI data of control and epileptic subjects. Both experiments illustrate the capability of the method to properly increase the spatial resolution of the data in the ODF field. The real dataset shows that the method is capable of reliable identification of differences between temporal lobe epilepsy (TLE) patients and normal subjects. The method is compared to existing methods. Comparison studies show that the proposed method generates smaller angular errors relative to the existing methods. Another advantage of the method is that it does not require an iterative algorithm to find the tensors. The proposed method is appropriate for increasing resolution in the ODF field and can be applied to clinical data to improve evaluation of white matter fibers in the brain. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Detecting changes in the spatial distribution of nitrate contamination in ground water

    USGS Publications Warehouse

    Liu, Z.-J.; Hallberg, G.R.; Zimmerman, D.L.; Libra, R.D.

    1997-01-01

    Many studies of ground water pollution in general and nitrate contamination in particular have often relied on a one-time investigation, tracking of individual wells, or aggregate summaries. Studies of changes in spatial distribution of contaminants over time are lacking. This paper presents a method to compare spatial distributions for possible changes over time. The large-scale spatial distribution at a given time can be considered as a surface over the area (a trend surface). The changes in spatial distribution from period to period can be revealed by the differences in the shape and/or height of surfaces. If such a surface is described by a polynomial function, changes in surfaces can be detected by testing statistically for differences in their corresponding polynomial functions. This method was applied to nitrate concentration in a population of wells in an agricultural drainage basin in Iowa, sampled in three different years. For the period of 1981-1992, the large-scale spatial distribution of nitrate concentration did not show significant change in the shape of spatial surfaces; while the magnitude of nitrate concentration in the basin, or height of the computed surfaces showed significant fluctuations. The change in magnitude of nitrate concentration is closely related to climatic variations, especially in precipitation. The lack of change in the shape of spatial surfaces means that either the influence of land use/nitrogen management was overshadowed by climatic influence, or the changes in land use/management occurred in a random fashion.
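
    The surface-comparison idea can be illustrated with a small least-squares fit: a second-order polynomial trend surface is fitted to station coordinates and nitrate values for each sampling year, after which a change in surface shape shows up in the non-intercept coefficients while a pure height change only moves the intercept. The basis choice and function names are illustrative assumptions; the paper's formal statistical test of coefficient equality is not reproduced here.

      import numpy as np

      def trend_surface_design(x, y):
          """Second-order trend surface basis: 1, x, y, xy, x^2, y^2."""
          return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

      def fit_trend_surface(x, y, nitrate):
          """Least-squares coefficients of the quadratic trend surface for one sampling year."""
          coef, *_ = np.linalg.lstsq(trend_surface_design(x, y), nitrate, rcond=None)
          return coef

      # Hypothetical usage with one surface per sampling year:
      # coef_1981 = fit_trend_surface(x, y, no3_1981); coef_1992 = fit_trend_surface(x, y, no3_1992)
      # Differences in coef[1:] indicate a change in surface shape; a difference in coef[0]
      # alone indicates a uniform rise or fall in concentration.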

  17. The Ghost in the Machine: Fracking in the Earth's Complex Brittle Crust

    NASA Astrophysics Data System (ADS)

    Malin, P. E.

    2015-12-01

    This paper discusses the impact of complex rock properties on practical applications like fracking and its associated seismic emissions. A variety of borehole measurements show that the complex physical properties of the upper crust cannot be characterized by averages on any scale. Instead they appear to follow three empirical rules: a power law distribution in physical scales, a lognormal distribution in populations, and a direct relation between changes in porosity and log(permeability). These rules can be directly related to the presence of fluid rich and seismically active fractures - from mineral grains to fault segments. (These are the "ghosts" referred to in the title.) In other physical systems, such behaviors arise on the boundaries of phase changes, and are studied as "critical state physics". In analogy to the 4 phases of water, crustal rocks progress upward from an un-fractured, ductile lower crust to nearly cohesionless surface alluvium. The crust in between is in an unstable transition. It is in this layer that methods such as hydrofracking operate - be they in oil and gas, geothermal, or mining. As a result, nothing is predictable in these systems. Crustal models have conventionally been constructed assuming that in situ permeability and related properties are normally distributed. This approach is consistent with the use of short scale-length cores and logs to estimate properties. However, reservoir-scale flow data show that they are better fit to lognormal distributions. Such "long tail" distributions are observed for well productivity, ore vein grades, and induced seismic signals. Outcrop and well-log data show that many rock properties also show a power-law-type variation in scale lengths. In terms of Fourier power spectra, if the number of peaks per km is k, then their power is proportional to 1/k. The source of this variation is related to pore-space connectivity, beginning with grain-fractures. We then show that a passive seismic method, Tomographic Fracture Imaging™ (TFI), can observe the distribution of this connectivity. Combined with TFI data, our fracture-connectivity model reveals the most significant crustal features and accounts for their range of passive and stimulated behaviors.

  18. Thermal decomposition in thermal desorption instruments: importance of thermogram measurements for analysis of secondary organic aerosol

    NASA Astrophysics Data System (ADS)

    Stark, H.; Yatavelli, R. L. N.; Thompson, S.; Kang, H.; Krechmer, J. E.; Kimmel, J.; Palm, B. B.; Hu, W.; Hayes, P.; Day, D. A.; Campuzano Jost, P.; Ye, P.; Canagaratna, M. R.; Jayne, J. T.; Worsnop, D. R.; Jimenez, J. L.

    2017-12-01

    Understanding the chemical composition of secondary organic aerosol (SOA) is crucial for explaining sources and fate of this important aerosol class in tropospheric chemistry. Further, determining SOA volatility is key in predicting its atmospheric lifetime and fate, due to partitioning from and to the gas phase. We present three analysis approaches to determine SOA volatility distributions from two field campaigns in areas with strong biogenic emissions, a Ponderosa pine forest in Colorado, USA, from the BEACHON-RoMBAS campaign, and a mixed forest in Alabama, USA, from the SOAS campaign. We used a high-resolution-time-of-flight chemical ionization mass spectrometer (CIMS) for both campaigns, equipped with a micro-orifice volatilization impactor (MOVI) inlet for BEACHON and a filter inlet for gases and aerosols (FIGAERO) for SOAS. These inlets allow near simultaneous analysis of particle and gas-phase species by the CIMS. While gas-phase species are directly measured without heating, particles undergo thermal desorption prior to analysis. Volatility distributions can be estimated in three ways: (1) analysis of the thermograms (signal vs. temperature); (2) via partitioning theory using the gas- and particle-phase measurements; (3) from measured chemical formulas via a group contribution model. Comparison of the SOA volatility distributions from the three methods shows large discrepancies for both campaigns. Results from the thermogram method are the most consistent of the methods when compared with independent AMS-thermal denuder measurements. The volatility distributions estimated from partitioning measurements are very narrow, likely due to signal-to-noise limits in the measurements. The discrepancy between the formula and the thermogram methods indicates large-scale thermal decomposition of the SOA species. We will also show results of citric acid thermal decomposition, where, in addition to the mass spectra, measurements of CO, CO2 and H2O were made, showing thermal decomposition of up to 65% of the citric acid molecules.

  19. A note on the misuses of the variance test in meteorological studies

    NASA Astrophysics Data System (ADS)

    Hazra, Arnab; Bhattacharya, Sourabh; Banik, Pabitra; Bhattacharya, Sabyasachi

    2017-12-01

    Stochastic modeling of rainfall data is an important area in meteorology. The gamma distribution is a widely used probability model for non-zero rainfall. Typically the choice of the distribution for such meteorological studies is based on two goodness-of-fit tests—the Pearson's Chi-square test and the Kolmogorov-Smirnov test. Inspired by the index of dispersion introduced by Fisher (Statistical methods for research workers. Hafner Publishing Company Inc., New York, 1925), Mooley (Mon Weather Rev 101:160-176, 1973) proposed the variance test as a goodness-of-fit measure in this context and a number of researchers have implemented it since then. We show that the asymptotic distribution of the test statistic for the variance test is generally not comparable to any central Chi-square distribution and hence the test is erroneous. We also describe a method for checking the validity of the asymptotic distribution for a class of distributions. We implement the erroneous test on some simulated, as well as real datasets and demonstrate how it leads to some wrong conclusions.

  20. X-ray tomography studies on porosity and particle size distribution in cast in-situ Al-Cu-TiB{sub 2} semi-solid forged composites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mathew, James; Mandal, Animesh

    X-ray computed tomography (XCT) was used to characterise the internal microstructure and clustering behaviour of TiB2 particles in in-situ processed Al-Cu metal matrix composites prepared by a casting method. Forging was used in the semi-solid state to reduce the porosity and to uniformly disperse TiB2 particles in the composite. Quantification of porosity and clustering of TiB2 particles was evaluated for different forging reductions (30% and 50% reductions) and compared with an as-cast sample using XCT. Results show that the porosity content was decreased by about 40% due to semi-solid forging as compared to the as-cast condition. Further, XCT results show that the 30% forging reduction resulted in greater uniformity in the distribution of TiB2 particles within the composite compared to the as-cast condition and the 50% forging reduction in the semi-solid state. These results show that the application of forging in the semi-solid state enhances particle distribution and reduces porosity formation in cast in-situ Al-Cu-TiB2 metal matrix composites. - Highlights: •XCT was used to visualise the 3D internal structure of Al-Cu-TiB2 MMCs. •The Al-Cu-TiB2 MMC was prepared by casting using a flux-assisted synthesis method. •TiB2 particle and porosity size distributions were evaluated. •Results show that forging in the semi-solid condition decreases the porosity content and improves the particle dispersion in MMCs.

  1. A comparison of moment-based methods of estimation for the log Pearson type 3 distribution

    NASA Astrophysics Data System (ADS)

    Koutrouvelis, I. A.; Canavos, G. C.

    2000-06-01

    The log Pearson type 3 distribution is a very important model in statistical hydrology, especially for modeling annual flood series. In this paper we compare the various methods based on moments for estimating quantiles of this distribution. Besides the methods of direct and mixed moments which were found most successful in previous studies and the well-known indirect method of moments, we develop generalized direct moments and generalized mixed moments methods and a new method of adaptive mixed moments. The last method chooses the orders of two moments for the original observations by utilizing information contained in the sample itself. The results of Monte Carlo experiments demonstrated the superiority of this method in estimating flood events of high return periods when a large sample is available and in estimating flood events of low return periods regardless of the sample size. In addition, a comparison of simulation and asymptotic results shows that the adaptive method may be used for the construction of meaningful confidence intervals for design events based on the asymptotic theory even with small samples. The simulation results also point to the specific members of the class of generalized moments estimates which maintain small values for bias and/or mean square error.
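
    For orientation, the classical indirect method of moments (one of the baselines such comparisons start from) fits a Pearson III distribution to the base-10 logarithms of the flows via their sample mean, standard deviation and skew, and then back-transforms the desired quantile. The sketch below uses SciPy's pearson3 for the quantile step; the annual peak values are hypothetical, and the generalized and adaptive mixed-moment estimators developed in the paper are not reproduced.

      import numpy as np
      from scipy import stats

      def lp3_quantile_indirect_mom(flows, return_period):
          """T-year quantile of a log Pearson type 3 distribution fitted to `flows`
          by the indirect method of moments (moments of the log10 flows)."""
          y = np.log10(np.asarray(flows, dtype=float))
          m, s = y.mean(), y.std(ddof=1)
          g = stats.skew(y, bias=False)
          p = 1.0 - 1.0 / return_period
          return 10.0 ** stats.pearson3.ppf(p, g, loc=m, scale=s)

      annual_peaks = np.array([812., 1130., 942., 1560., 705., 1310., 998., 875., 1420., 1190.])
      q100 = lp3_quantile_indirect_mom(annual_peaks, 100)   # hypothetical 100-year flood estimate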

  2. Simultaneous integrated vs. sequential boost in VMAT radiotherapy of high-grade gliomas.

    PubMed

    Farzin, Mostafa; Molls, Michael; Astner, Sabrina; Rondak, Ina-Christine; Oechsner, Markus

    2015-12-01

    In 20 patients with high-grade gliomas, we compared two methods of planning for volumetric-modulated arc therapy (VMAT): simultaneous integrated boost (SIB) vs. sequential boost (SEB). The investigation focused on the analysis of dose distributions in the target volumes and the organs at risk (OARs). After contouring the target volumes [planning target volumes (PTVs) and boost volumes (BVs)] and OARs, SIB planning and SEB planning were performed. The SEB method consisted of two plans: in the first plan the PTV received 50 Gy in 25 fractions with a 2-Gy dose per fraction. In the second plan the BV received 10 Gy in 5 fractions with a dose per fraction of 2 Gy. The doses of both plans were summed up to show the total doses delivered. In the SIB method the PTV received 54 Gy in 30 fractions with a dose per fraction of 1.8 Gy, while the BV received 60 Gy in the same fraction number but with a dose per fraction of 2 Gy. All of the OARs showed higher doses (Dmax and Dmean) in the SEB method when compared with the SIB technique. The differences between the two methods were statistically significant in almost all of the OARs. Analysing the total doses of the target volumes we found dose distributions with similar homogeneities and comparable total doses. Our analysis shows that the SIB method offers advantages over the SEB method in terms of sparing OARs.

  3. Ionic Conduction in Lithium Ion Battery Composite Electrode Governs Cross-sectional Reaction Distribution

    PubMed Central

    Orikasa, Yuki; Gogyo, Yuma; Yamashige, Hisao; Katayama, Misaki; Chen, Kezheng; Mori, Takuya; Yamamoto, Kentaro; Masese, Titus; Inada, Yasuhiro; Ohta, Toshiaki; Siroma, Zyun; Kato, Shiro; Kinoshita, Hajime; Arai, Hajime; Ogumi, Zempachi; Uchimoto, Yoshiharu

    2016-01-01

    Composite electrodes containing active materials, carbon and binder are widely used in lithium-ion batteries. Since the electrode reaction occurs preferentially in regions with lower resistance, a reaction distribution can arise within composite electrodes. We investigate the relationship between the reaction distribution along the depth direction and the electronic/ionic conductivity of composite electrodes with varying electrode porosities. Two-dimensional X-ray absorption spectroscopy shows that a reaction distribution arises in lower-porosity electrodes. Our newly developed 6-probe method can measure the electronic and ionic conductivity within composite electrodes. The ionic conductivity decreases for lower-porosity electrodes, which governs the reaction distribution of composite electrodes and their performance. PMID:27193448

  4. Plan averaging for multicriteria navigation of sliding window IMRT and VMAT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Craft, David, E-mail: dcraft@partners.org; Papp, Dávid; Unkelbach, Jan

    2014-02-15

    Purpose: To describe a method for combining sliding window plans [intensity modulated radiation therapy (IMRT) or volumetric modulated arc therapy (VMAT)] for use in treatment plan averaging, which is needed for Pareto surface navigation based multicriteria treatment planning. Methods: The authors show that by taking an appropriately defined average of leaf trajectories of sliding window plans, the authors obtain a sliding window plan whose fluence map is the exact average of the fluence maps corresponding to the initial plans. In the case of static-beam IMRT, this also implies that the dose distribution of the averaged plan is the exact dosimetric average of the initial plans. In VMAT delivery, the dose distribution of the averaged plan is a close approximation of the dosimetric average of the initial plans. Results: The authors demonstrate the method on three Pareto optimal VMAT plans created for a demanding paraspinal case, where the tumor surrounds the spinal cord. The results show that the leaf averaged plans yield dose distributions that approximate the dosimetric averages of the precomputed Pareto optimal plans well. Conclusions: The proposed method enables the navigation of deliverable Pareto optimal plans directly, i.e., interactive multicriteria exploration of deliverable sliding window IMRT and VMAT plans, eliminating the need for a sequencing step after navigation and hence the dose degradation that is caused by such a sequencing step.

  5. Thermal stress analysis for a wood composite blade. [wind turbines

    NASA Technical Reports Server (NTRS)

    Fu, K. C.; Harb, A.

    1984-01-01

    Heat conduction throughout the blade and the distribution of thermal stresses caused by the temperature distribution were determined for a laminated wood wind turbine blade in both the horizontal and vertical positions. Results show that blade cracking is not due to thermal stresses induced by insolation. A method and practical example of thermal stress analysis for an engineering body of orthotropic materials are presented.

  6. Stability of diameter distributions in a managed uneven-aged oak forest in the Ozark Highlands

    Treesearch

    Zhiming Wang; Paul S. Johnson; H. E. Garrett; Stephen R. Shifley

    1997-01-01

    We studied a privately owned 156,000-acre oak-dominated forest in the Ozark Highlands of southern Missouri. The forest has been managed by the single-tree selection method since 1952. Using 40 years of continuous forest inventory records, we analyzed the stability of the shape of tree diameter distributions at the forest-wide scale. Results show that for trees ...

  7. FOS: A Factored Operating Systems for High Assurance and Scalability on Multicores

    DTIC Science & Technology

    2012-08-01

    ... computing. It builds on previous work in distributed and microkernel OSes by factoring services out of the kernel, and then further distributing each service into a set of cooperating servers. We term such a service a fleet. Figure 2 shows the high-level architecture of fos. A small microkernel runs on every core.

  8. A scalable parallel black oil simulator on distributed memory parallel computers

    NASA Astrophysics Data System (ADS)

    Wang, Kun; Liu, Hui; Chen, Zhangxin

    2015-11-01

    This paper presents our work on developing a parallel black oil simulator for distributed memory computers based on our in-house parallel platform. The parallel simulator is designed to overcome the performance issues of common simulators that are implemented for personal computers and workstations. The finite difference method is applied to discretize the black oil model. In addition, some advanced techniques are employed to strengthen the robustness and parallel scalability of the simulator, including an inexact Newton method, matrix decoupling methods, and algebraic multigrid methods. A new multi-stage preconditioner is proposed to accelerate the solution of linear systems from the Newton methods. Numerical experiments show that our simulator is scalable and efficient, and is capable of simulating extremely large-scale black oil problems with tens of millions of grid blocks using thousands of MPI processes on parallel computers.

  9. Convergence and Applications of a Gossip-Based Gauss-Newton Algorithm

    NASA Astrophysics Data System (ADS)

    Li, Xiao; Scaglione, Anna

    2013-11-01

    The Gauss-Newton algorithm is a popular and efficient centralized method for solving non-linear least squares problems. In this paper, we propose a multi-agent distributed version of this algorithm, named Gossip-based Gauss-Newton (GGN) algorithm, which can be applied in general problems with non-convex objectives. Furthermore, we analyze and present sufficient conditions for its convergence and show numerically that the GGN algorithm achieves performance comparable to the centralized algorithm, with graceful degradation in case of network failures. More importantly, the GGN algorithm provides significant performance gains compared to other distributed first order methods.
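
    As a point of reference, the centralised update that such a scheme builds on is the plain Gauss-Newton step x <- x - (J^T J)^(-1) J^T r. The sketch below implements only that centralised iteration on a toy curve-fitting problem; how the GGN algorithm of the paper reaches an equivalent step in a distributed fashion via gossip is not reproduced, and the function names and toy model are our own.

      import numpy as np

      def gauss_newton(residual, jacobian, x0, iters=20):
          """Centralised Gauss-Newton iteration for min_x ||residual(x)||^2."""
          x = np.asarray(x0, dtype=float)
          for _ in range(iters):
              r, J = residual(x), jacobian(x)
              x = x - np.linalg.solve(J.T @ J, J.T @ r)
          return x

      # Toy usage: fit y = a * exp(b * t) to noisy data by nonlinear least squares.
      t = np.linspace(0.0, 1.0, 30)
      y = 2.0 * np.exp(-1.5 * t) + 0.01 * np.random.randn(30)
      res = lambda p: p[0] * np.exp(p[1] * t) - y
      jac = lambda p: np.column_stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)])
      a_hat, b_hat = gauss_newton(res, jac, x0=[1.0, -1.0])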

  10. Atom based grain extraction and measurement of geometric properties

    NASA Astrophysics Data System (ADS)

    Martine La Boissonière, Gabriel; Choksi, Rustum

    2018-04-01

    We introduce an accurate, self-contained and automatic atom based numerical algorithm to characterize grain distributions in two dimensional Phase Field Crystal (PFC) simulations. We compare the method with hand segmented and known test grain distributions to show that the algorithm is able to extract grains and measure their area, perimeter and other geometric properties with high accuracy. Four input parameters must be set by the user and their influence on the results is described. The method is currently tuned to extract data from PFC simulations in the hexagonal lattice regime but the framework may be extended to more general problems.

  11. Dynamical complexity changes during two forms of meditation

    NASA Astrophysics Data System (ADS)

    Li, Jin; Hu, Jing; Zhang, Yinhong; Zhang, Xiaofeng

    2011-06-01

    Detection of dynamical complexity changes in natural and man-made systems has deep scientific and practical meaning. We use the base-scale entropy method to analyze dynamical complexity changes for heart rate variability (HRV) series during specific traditional forms of Chinese Chi and Kundalini Yoga meditation techniques in healthy young adults. The results show that dynamical complexity decreases in meditation states for both forms of meditation. Meanwhile, we detected changes in the probability distribution of m-words during meditation and explained these changes using the probability distribution of a sine function. The base-scale entropy method may be used on a wider range of physiologic signals.

  12. Numerical modeling of the tensile strength of a biological granular aggregate: Effect of the particle size distribution

    NASA Astrophysics Data System (ADS)

    Heinze, Karsta; Frank, Xavier; Lullien-Pellerin, Valérie; George, Matthieu; Radjai, Farhang; Delenne, Jean-Yves

    2017-06-01

    Wheat grains can be considered as a natural cemented granular material. They are milled under high forces to produce food products such as flour. The major part of the grain is the so-called starchy endosperm. It contains stiff starch granules, which show a multi-modal size distribution, and a softer protein matrix that surrounds the granules. Experimental milling studies and numerical simulations are going hand in hand to better understand the fragmentation behavior of this biological material and to improve milling performance. We present a numerical study of the effect of granule size distribution on the strength of such a cemented granular material. Samples of bi-modal starch granule size distribution were created and submitted to uniaxial tension, using a peridynamics method. We show that, when compared to the effects of starch-protein interface adhesion and voids, the granule size distribution has a limited effect on the samples' yield stress.

  13. Properties and determinants of codon decoding time distributions

    PubMed Central

    2014-01-01

    Background Codon decoding time is a fundamental property of mRNA translation believed to affect the abundance, function, and properties of proteins. Recently, a novel experimental technology--ribosome profiling--was developed to measure the density, and thus the speed, of ribosomes at codon resolution. Specifically, this method is based on next-generation sequencing, which theoretically can provide footprint counts that correspond to the probability of observing a ribosome in this position for each nucleotide in each transcript. Results In this study, we report for the first time various novel properties of the distribution of codon footprint counts in five organisms, based on large-scale analysis of ribosomal profiling data. We show that codons have distinctive footprint count distributions. These tend to be preserved along the inner part of the ORF, but differ at the 5' and 3' ends of the ORF, suggesting that the translation-elongation stage actually includes three biophysical sub-steps. In addition, we study various basic properties of the codon footprint count distributions and show that some of them correlate with the abundance of the tRNA molecule types recognizing them. Conclusions Our approach emphasizes the advantages of analyzing ribosome profiling and similar types of data via a comparative genomic codon-distribution-centric view. Thus, our methods can be used in future studies related to translation and even transcription elongation. PMID:25572668

  14. Application of the LSQR algorithm in non-parametric estimation of aerosol size distribution

    NASA Astrophysics Data System (ADS)

    He, Zhenzong; Qi, Hong; Lew, Zhongyuan; Ruan, Liming; Tan, Heping; Luo, Kun

    2016-05-01

    Based on the Least Squares QR decomposition (LSQR) algorithm, the aerosol size distribution (ASD) is retrieved in a non-parametric approach. The direct problem is solved by the Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. An optimal wavelength selection method is developed to improve the retrieval accuracy of the ASD. The optimal wavelength set is chosen so as to make the measurement signals sensitive to wavelength and to effectively reduce the ill-conditioning of the coefficient matrix of the linear system, thereby enhancing the robustness of the retrieval results to interference. Two common kinds of monomodal and bimodal ASDs, log-normal (L-N) and Gamma distributions, are estimated, respectively. Numerical tests show that the LSQR algorithm can be successfully applied to retrieve the ASD with high stability in the presence of random noise and low susceptibility to the shape of the distributions. Finally, the ASD measured experimentally over Harbin, China, is recovered reasonably well.
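
    The basic retrieval step can be sketched as a damped LSQR solution of a discretised forward model tau = K n, where tau holds spectral extinction measurements, n is the size distribution on a radius grid and K is the extinction kernel matrix. The kernel, grids, noise level and damping value below are made up for illustration; the paper's ADA kernel and wavelength-selection scheme are not reproduced.

      import numpy as np
      from scipy.sparse.linalg import lsqr

      # Hypothetical discretised forward model with a made-up smooth kernel.
      radii = np.linspace(0.1, 2.0, 60)                  # micrometres
      wavelengths = np.linspace(0.4, 1.0, 8)             # micrometres
      K = np.array([[np.exp(-((r - w) ** 2) / 0.1) * r**2 for r in radii] for w in wavelengths])

      n_true = np.exp(-0.5 * ((np.log(radii) - np.log(0.5)) / 0.4) ** 2)   # log-normal shape
      tau = K @ n_true + 1e-3 * np.random.randn(len(wavelengths))          # noisy "measurements"

      n_hat = lsqr(K, tau, damp=1e-2)[0]                 # damped LSQR estimate of the ASD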

  15. Research on the optimization of air quality monitoring station layout based on spatial grid statistical analysis method.

    PubMed

    Li, Tianxin; Zhou, Xing Chen; Ikhumhen, Harrison Odion; Difei, An

    2018-05-01

    In recent years, with the significant increase in urban development, it has become necessary to optimize the current air monitoring stations so that they reflect the quality of air in the environment. To highlight the spatial representativeness of the air monitoring stations, Beijing's regional air monitoring station data from 2012 to 2014 were used to calculate the monthly mean particulate matter (PM10) concentration in the region, and the spatial distribution of PM10 concentration over the whole region was deduced through the IDW interpolation method and a spatial grid statistical method in GIS. The spatial distribution variation across the districts of Beijing was analysed with the gridding model, and the three-year spatial analysis of the PM10 concentration data, including its variation and spatial overlay on a 1.5 km × 1.5 km cell resolution grid, showed the spatial variation in the frequency with which the total PM10 concentration exceeded the standard. It is therefore very important to optimize the layout of the existing air monitoring stations by combining the concentration distribution of air pollutants with the spatial region using GIS.
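
    For orientation, inverse distance weighting (IDW) itself is a simple deterministic interpolator: each grid cell receives a weighted average of station values, with weights proportional to an inverse power of the distance. The sketch below is a generic IDW implementation with assumed array shapes and a power of 2; it does not reproduce the study's GIS workflow or grid statistics.

      import numpy as np

      def idw(xy_obs, values, xy_grid, power=2.0):
          """Inverse-distance-weighted interpolation of station values onto grid cells."""
          d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=2)
          d = np.maximum(d, 1e-9)                 # avoid division by zero at station cells
          w = 1.0 / d**power
          return (w @ values) / w.sum(axis=1)

      # Hypothetical usage with three stations and two grid cells (coordinates in km):
      stations = np.array([[0.0, 0.0], [1.5, 0.0], [0.0, 1.5]])
      pm10 = np.array([70.0, 85.0, 60.0])
      grid = np.array([[0.75, 0.75], [1.4, 0.1]])
      print(idw(stations, pm10, grid))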

  16. A robust hybrid fuzzy-simulated annealing-intelligent water drops approach for tuning a distribution static compensator nonlinear controller in a distribution system

    NASA Astrophysics Data System (ADS)

    Bagheri Tolabi, Hajar; Hosseini, Rahil; Shakarami, Mahmoud Reza

    2016-06-01

    This article presents a novel hybrid optimization approach for a nonlinear controller of a distribution static compensator (DSTATCOM). The DSTATCOM is connected to a distribution system with the distributed generation units. The nonlinear control is based on partial feedback linearization. Two proportional-integral-derivative (PID) controllers regulate the voltage and track the output in this control system. In the conventional scheme, the trial-and-error method is used to determine the PID controller coefficients. This article uses a combination of a fuzzy system, simulated annealing (SA) and intelligent water drops (IWD) algorithms to optimize the parameters of the controllers. The obtained results reveal that the response of the optimized controlled system is effectively improved by finding a high-quality solution. The results confirm that using the tuning method based on the fuzzy-SA-IWD can significantly decrease the settling and rising times, the maximum overshoot and the steady-state error of the voltage step response of the DSTATCOM. The proposed hybrid tuning method for the partial feedback linearizing (PFL) controller achieved better regulation of the direct current voltage for the capacitor within the DSTATCOM. Furthermore, in the event of a fault the proposed controller tuned by the fuzzy-SA-IWD method showed better performance than the conventional controller or the PFL controller without optimization by the fuzzy-SA-IWD method with regard to both fault duration and clearing times.

  17. Identifying Flow Networks in a Karstified Aquifer by Application of the Cellular Automata-Based Deterministic Inversion Method (Lez Aquifer, France)

    NASA Astrophysics Data System (ADS)

    Fischer, P.; Jardani, A.; Wang, X.; Jourde, H.; Lecoq, N.

    2017-12-01

    The distributed modeling of flow paths within karstic and fractured fields remains a complex task because of the high dependence of the hydraulic responses on the relative locations between observational boreholes and the interconnected fractures and karstic conduits that control the main flow of the hydrosystem. The inverse problem in a distributed model is one alternative approach to interpret the hydraulic test data by mapping the karstic networks and fractured areas. In this work, we developed a Bayesian inversion approach, the Cellular Automata-based Deterministic Inversion (CADI) algorithm, to infer the spatial distribution of hydraulic properties in a structurally constrained model. This method distributes hydraulic properties along linear structures (i.e., flow conduits) and iteratively modifies the structural geometry of this conduit network to progressively match the observed hydraulic data to the modeled ones. As a result, this method produces a conductivity model that is composed of a discrete conduit network embedded in the background matrix, capable of producing the same flow behavior as the investigated hydrologic system. The method is applied to invert a set of multiborehole hydraulic tests collected from a hydraulic tomography experiment conducted at the Terrieu field site in the Lez aquifer, Southern France. The emergent model shows high consistency with field observations of hydraulic connections between boreholes. Furthermore, it provides a geologically realistic pattern of flow conduits. This method is therefore of considerable value toward an enhanced distributed modeling of fractured and karstified aquifers.

  18. Prediction model of dissolved oxygen in ponds based on ELM neural network

    NASA Astrophysics Data System (ADS)

    Li, Xinfei; Ai, Jiaoyan; Lin, Chunhuan; Guan, Haibin

    2018-02-01

    Dissolved oxygen in ponds is affected by many factors, and its distribution is unbalanced. In this paper, in order to improve the imbalance of the dissolved oxygen distribution more effectively, a dissolved oxygen prediction model based on the Extreme Learning Machine (ELM) intelligent algorithm is established, building on the method of improving the dissolved oxygen distribution by artificial push flow. Lake Jing at Guangxi University was selected as the experimental area. The model was used to predict the dissolved oxygen concentration for pumps driven at different voltages; the results show that the ELM prediction accuracy is higher than that of the BP algorithm, with a mean square error MSE_ELM = 0.0394 and a correlation coefficient R_ELM = 0.9823. The prediction results for the push flow of the 24 V pump show that the discrete prediction curve approximates the measured values well. The model can provide a basis for decisions on artificially improving the dissolved oxygen distribution.
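
    For readers unfamiliar with ELM, the core of the algorithm is short: hidden-layer weights are drawn at random and only the output weights are fitted, by ordinary least squares. The sketch below is a generic ELM in NumPy; the feature set (e.g., pump voltage, water temperature, sampling position), the hidden-layer size and the activation are placeholders rather than the configuration used in the paper.

      import numpy as np

      def elm_train(X, y, n_hidden=50, seed=0):
          """Extreme Learning Machine: random hidden layer, output weights by least squares."""
          rng = np.random.default_rng(seed)
          W = rng.normal(size=(X.shape[1], n_hidden))
          b = rng.normal(size=n_hidden)
          H = np.tanh(X @ W + b)                          # hidden-layer activations
          beta, *_ = np.linalg.lstsq(H, y, rcond=None)    # analytic output weights
          return W, b, beta

      def elm_predict(X, W, b, beta):
          return np.tanh(X @ W + b) @ beta

      # Hypothetical usage: columns of X could be pump voltage, water temperature, position, ...
      # W, b, beta = elm_train(X_train, do_train); do_pred = elm_predict(X_test, W, b, beta)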

  19. Bonded-cell model for particle fracture.

    PubMed

    Nguyen, Duc-Hanh; Azéma, Emilien; Sornay, Philippe; Radjai, Farhang

    2015-02-01

    Particle degradation and fracture play an important role in natural granular flows and in many applications of granular materials. We analyze the fracture properties of two-dimensional disklike particles modeled as aggregates of rigid cells bonded along their sides by a cohesive Mohr-Coulomb law and simulated by the contact dynamics method. We show that the compressive strength scales with tensile strength between cells but depends also on the friction coefficient and a parameter describing cell shape distribution. The statistical scatter of compressive strength is well described by the Weibull distribution function with a shape parameter varying from 6 to 10 depending on cell shape distribution. We show that this distribution may be understood in terms of percolating critical intercellular contacts. We propose a random-walk model of critical contacts that leads to particle size dependence of the compressive strength in good agreement with our simulation data.

  20. Potential effects of climate change on the distribution range of the main silicate sinker of the Southern Ocean.

    PubMed

    Pinkernell, Stefan; Beszteri, Bánk

    2014-08-01

    Fragilariopsis kerguelensis, a dominant diatom species throughout the Antarctic Circumpolar Current, is considered to be one of the main drivers of the biological silicate pump. Here, we study the distribution of this important species and expected consequences of climate change upon it, using correlative species distribution modeling (SDM) and publicly available presence-only data. As experience with SDM is scarce for marine phytoplankton, this also serves as a pilot study for this organism group. We used the maximum entropy method to calculate distribution models for the diatom F. kerguelensis based on yearly and monthly environmental data (sea surface temperature, salinity, nitrate and silicate concentrations). Observation data were harvested from GBIF and the Global Diatom Database, and for further analyses also from the Hustedt Diatom Collection (BRM). The models were projected on current yearly and seasonal environmental data to study current distribution and its seasonality. Furthermore, we projected the seasonal model on future environmental data obtained from climate models for the year 2100. Projected on current yearly averaged environmental data, all models showed similar distribution patterns for F. kerguelensis. The monthly model showed seasonality, for example, a shift of the southern distribution boundary toward the north in the winter. Projections on future scenarios resulted in a moderately to negligibly shrinking distribution area and a change in seasonality. We found a substantial bias in the publicly available observation datasets, which could be reduced by additional observation records we obtained from the Hustedt Diatom Collection. Present-day distribution patterns inferred from the models coincided well with background knowledge and previous reports about F. kerguelensis distribution, showing that maximum entropy-based distribution models are suitable to map distribution patterns for oceanic planktonic organisms. Our scenario projections indicate moderate effects of climate change upon the biogeography of F. kerguelensis.

  1. Application of Bayesian geostatistics for evaluation of mass discharge uncertainty at contaminated sites

    NASA Astrophysics Data System (ADS)

    Troldborg, Mads; Nowak, Wolfgang; Lange, Ida V.; Santos, Marta C.; Binning, Philip J.; Bjerg, Poul L.

    2012-09-01

    Mass discharge estimates are increasingly being used when assessing risks of groundwater contamination and designing remedial systems at contaminated sites. Such estimates are, however, rather uncertain as they integrate uncertain spatial distributions of both concentration and groundwater flow. Here a geostatistical simulation method for quantifying the uncertainty of the mass discharge across a multilevel control plane is presented. The method accounts for (1) heterogeneity of both the flow field and the concentration distribution through Bayesian geostatistics, (2) measurement uncertainty, and (3) uncertain source zone and transport parameters. The method generates conditional realizations of the spatial flow and concentration distribution. An analytical macrodispersive transport solution is employed to simulate the mean concentration distribution, and a geostatistical model of the Box-Cox transformed concentration data is used to simulate observed deviations from this mean solution. By combining the flow and concentration realizations, a mass discharge probability distribution is obtained. The method has the advantage of avoiding the heavy computational burden of three-dimensional numerical flow and transport simulation coupled with geostatistical inversion. It may therefore be of practical relevance to practitioners compared to existing methods that are either too simple or computationally demanding. The method is demonstrated on a field site contaminated with chlorinated ethenes. For this site, we show that including a physically meaningful concentration trend and the cosimulation of hydraulic conductivity and hydraulic gradient across the transect helps constrain the mass discharge uncertainty. The number of sampling points required for accurate mass discharge estimation and the relative influence of different data types on mass discharge uncertainty is discussed.

  2. Experimental and Monte Carlo evaluation of Eclipse treatment planning system for effects on dose distribution of the hip prostheses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Çatlı, Serap, E-mail: serapcatli@hotmail.com; Tanır, Güneş

    2013-10-01

    The present study aimed to investigate the effects of titanium, titanium alloy, and stainless steel hip prostheses on dose distribution based on the Monte Carlo simulation method, as well as the accuracy of the Eclipse treatment planning system (TPS) at 6 and 18 MV photon energies. In the present study the pencil beam convolution (PBC) method implemented in the Eclipse TPS was compared to the Monte Carlo method and ionization chamber measurements. The present findings show that if high-Z material is used in prosthesis, large dose changes can occur due to scattering. The variance in dose observed in the present study was dependent on material type, density, and atomic number, as well as photon energy; as photon energy increased back scattering decreased. The dose perturbation effect of hip prostheses was significant and could not be predicted accurately by the PBC method for hip prostheses. The findings show that for accurate dose calculation the Monte Carlo-based TPS should be used in patients with hip prostheses.

  3. Valleys' Asymmetric Characteristics of the Loess Plateau in Northwestern Shanxi Based on DEM

    NASA Astrophysics Data System (ADS)

    Duan, J.

    2016-12-01

    The valleys of the Loess Plateau in northwestern Shanxi show great asymmetry. Using multi-scale DEMs, high-resolution satellite images and digital terrain analysis methods, this study puts forward a quantitative index to describe the asymmetric morphology. Several typical areas are selected to test and verify the spatial variability. Results show: (1) Considering differences in spatial distribution, the Pianguanhe, Xianchuanhe and Yangjiachuan basins are the areas that show the most significant asymmetric characteristics. (2) Considering differences in scale, the shape of large-scale valleys shows randomness, equilibrium and relative symmetry, while small-scale valleys show directionality and asymmetry. (3) The asymmetric morphology has a directional preference, which is most obvious in east-west valleys. Combined with field surveys, its formation mechanism can be interpreted as follows: (1) uneven distribution of loess in the valleys; (2) differences in the distribution of vegetation, water, heat conditions and other factors, which produce differences in water erosion capability and lead to the asymmetric characteristics.

  4. Estimating distribution parameters of annual maximum streamflows in Johor, Malaysia using TL-moments approach

    NASA Astrophysics Data System (ADS)

    Mat Jan, Nur Amalina; Shabri, Ani

    2017-01-01

    The TL-moments approach has been used to identify the best-fitting distributions for the annual maximum streamflow series at seven stations in Johor, Malaysia. TL-moments with different trimming values are used to estimate the parameters of the selected distributions, namely the three-parameter lognormal (LN3) and Pearson Type III (P3) distributions. The main objective of this study is to derive the TL-moments (t1, 0), t1 = 1, 2, 3, 4 estimators for the LN3 and P3 distributions. The performance of the TL-moments (t1, 0), t1 = 1, 2, 3, 4 estimators was compared with that of L-moments through Monte Carlo simulation and with streamflow data from a station in Johor, Malaysia. The absolute error is used to test the influence of the TL-moments methods on the estimated probability distribution functions. For the cases in this study, the results show that the LN3 distribution fitted with TL-moments trimming the four smallest values of the conceptual sample (TL-moments [4, 0]) was the most appropriate at most of the stations for the annual maximum streamflow series in Johor, Malaysia.

  5. Physics and Computational Methods for X-ray Scatter Estimation and Correction in Cone-Beam Computed Tomography

    NASA Astrophysics Data System (ADS)

    Bootsma, Gregory J.

    X-ray scatter in cone-beam computed tomography (CBCT) is known to reduce image quality by introducing image artifacts, reducing contrast, and limiting computed tomography (CT) number accuracy. The extent of the effect of x-ray scatter on CBCT image quality is determined by the shape and magnitude of the scatter distribution in the projections. A method to allay the effects of scatter is imperative to enable application of CBCT to a wider domain of clinical problems. The work contained herein proposes such a method. A characterization of the scatter distribution through the use of a validated Monte Carlo (MC) model is carried out. The effects of imaging parameters and compensators on the scatter distribution are investigated. The spectral frequency components of the scatter distribution in CBCT projection sets are analyzed using Fourier analysis and found to reside predominantly in the low frequency domain. The exact frequency extents of the scatter distribution are explored for different imaging configurations and patient geometries. Based on the Fourier analysis it is hypothesized that the scatter distribution can be represented by a finite sum of sine and cosine functions. The fitting of MC scatter distribution estimates enables the reduction of the MC computation time by diminishing the number of photon tracks required by over three orders of magnitude. The fitting method is incorporated into a novel scatter correction method using an algorithm that simultaneously combines multiple MC scatter simulations. Running concurrent MC simulations while simultaneously fitting the results allows the physical accuracy and flexibility of MC methods to be maintained while enhancing the overall efficiency. CBCT projection set scatter estimates, using the algorithm, are computed on the order of 1-2 minutes instead of hours or days. Resulting scatter corrected reconstructions show a reduction in artifacts and improvement in tissue contrast and voxel value accuracy.
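
    As an illustration of the fitting step, the sketch below fits a truncated Fourier series (a finite sum of sines and cosines) to a noisy synthetic scatter profile by linear least squares; the profile, noise level and number of harmonics are assumptions, not the thesis' validated MC model.

      import numpy as np

      rng = np.random.default_rng(1)

      # Hypothetical noisy Monte Carlo scatter estimate along one detector row
      # (arbitrary units); the true scatter is assumed smooth and low-frequency.
      n_pix = 512
      u = np.linspace(0.0, 1.0, n_pix)
      true_scatter = 100 + 30 * np.cos(2 * np.pi * u) + 10 * np.sin(4 * np.pi * u)
      noisy_mc = true_scatter + rng.normal(scale=15.0, size=n_pix)  # few photon tracks

      # Design matrix for a truncated Fourier series with K harmonics.
      K = 3
      cols = [np.ones(n_pix)]
      for k in range(1, K + 1):
          cols += [np.cos(2 * np.pi * k * u), np.sin(2 * np.pi * k * u)]
      A = np.column_stack(cols)

      # Linear least-squares fit recovers a smooth scatter estimate from noisy data.
      coef, *_ = np.linalg.lstsq(A, noisy_mc, rcond=None)
      fitted = A @ coef
      print("RMS error of fit vs. true scatter:",
            round(float(np.sqrt(np.mean((fitted - true_scatter) ** 2))), 2))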

  6. Modified GMDH-NN algorithm and its application for global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Song, Shufang; Wang, Lu

    2017-11-01

    Global sensitivity analysis (GSA) is a very useful tool for evaluating the influence of input variables over their whole distribution range. The Sobol' method is the most commonly used of the variance-based methods, which are efficient and popular GSA techniques. High dimensional model representation (HDMR) is a popular way to compute Sobol' indices; however, its drawbacks cannot be ignored. We show that a modified GMDH-NN algorithm can calculate the metamodel coefficients efficiently, so this paper combines it with HDMR and proposes the GMDH-HDMR method. The new method shows higher precision and a faster convergence rate. Several numerical and engineering examples are used to confirm its advantages.
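
    The abstract above does not give formulas, so as background the sketch below estimates first-order Sobol' indices on the standard Ishigami test function with a plain pick-freeze Monte Carlo estimator rather than an HDMR/GMDH-NN metamodel; it only illustrates what the indices measure, not the proposed method.

      import numpy as np

      def ishigami(x, a=7.0, b=0.1):
          """Ishigami test function, a standard GSA benchmark."""
          return (np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2
                  + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

      rng = np.random.default_rng(2)
      n, d = 100_000, 3
      A = rng.uniform(-np.pi, np.pi, size=(n, d))
      B = rng.uniform(-np.pi, np.pi, size=(n, d))
      fA, fB = ishigami(A), ishigami(B)
      var_y = np.var(np.concatenate([fA, fB]))

      # First-order Sobol' indices via the Saltelli/Jansen pick-freeze estimator:
      # S_i = mean(fB * (f(A with column i taken from B) - fA)) / Var(Y).
      for i in range(d):
          ABi = A.copy()
          ABi[:, i] = B[:, i]
          Si = np.mean(fB * (ishigami(ABi) - fA)) / var_y
          print(f"S_{i + 1} ~ {Si:.3f}")   # expected ~0.31, ~0.44, ~0.00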

  7. Examination and characterization of distribution system biofilms.

    PubMed Central

    LeChevallier, M W; Babcock, T M; Lee, R G

    1987-01-01

    Investigations concerning the role of distribution system biofilms on water quality were conducted at a drinking water utility in New Jersey. The utility experienced long-term bacteriological problems in the distribution system, while treatment plant effluents were uniformly negative for coliform bacteria. Results of a monitoring program showed increased coliform levels as the water moved from the treatment plant through the distribution system. Increased coliform densities could not be accounted for by growth of the cells in the water column alone. Identification of coliform bacteria showed that species diversity increased as water flowed through the study area. All materials in the distribution system had high densities of heterotrophic plate count bacteria, while high levels of coliforms were detected only in iron tubercles. Coliform bacteria with the same biochemical profile were found both in distribution system biofilms and in the water column. Assimilable organic carbon determinations showed that carbon levels declined as water flowed through the study area. Maintenance of a 1.0-mg/liter free chlorine residual was insufficient to control coliform occurrences. Flushing and pigging the study area was not an effective control for coliform occurrences in that section. Because coliform bacteria growing in distribution system biofilms may mask the presence of indicator organisms resulting from a true breakdown of treatment barriers, the report recommends that efforts continue to find methods to control growth of coliform bacteria in pipeline biofilms. PMID:3435140

  8. Markov chains of infinite order and asymptotic satisfaction of balance: application to the adaptive integration method.

    PubMed

    Earl, David J; Deem, Michael W

    2005-04-14

    Adaptive Monte Carlo methods can be viewed as implementations of Markov chains with infinite memory. We derive a general condition for the convergence of a Monte Carlo method whose history dependence is contained within the simulated density distribution. In convergent cases, our result implies that the balance condition need only be satisfied asymptotically. As an example, we show that the adaptive integration method converges.

  9. A Simple Joint Estimation Method of Residual Frequency Offset and Sampling Frequency Offset for DVB Systems

    NASA Astrophysics Data System (ADS)

    Kwon, Ki-Won; Cho, Yongsoo

    This letter presents a simple joint estimation method for residual frequency offset (RFO) and sampling frequency offset (SFO) in OFDM-based digital video broadcasting (DVB) systems. The proposed method selects a continual pilot (CP) subset from an unsymmetrically and non-uniformly distributed CP set to obtain an unbiased estimator. Simulation results show that the proposed method using a properly selected CP subset is unbiased and performs robustly.
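
    A hedged sketch of the underlying idea, assuming the textbook model in which the pilot phase rotation between OFDM symbols is an affine function of the subcarrier index (intercept set by the RFO, slope by the SFO); the subcarrier indices, offsets and least-squares fit below are illustrative and not the letter's exact estimator.

      import numpy as np

      rng = np.random.default_rng(3)

      # Assumed textbook model (not necessarily the letter's exact scheme): between
      # two OFDM symbols spaced Ns samples apart, the pilot on subcarrier k rotates by
      #   phi_k ~= 2*pi*(Ns/N) * (eps + k*eta),
      # where eps is the normalized residual frequency offset (RFO) and eta the
      # normalized sampling frequency offset (SFO).
      N, Ns = 2048, 2048 + 512
      eps_true, eta_true = 0.03, 20e-6
      k = np.array([-700, -450, -200, -50, 120, 380, 610, 690])   # hypothetical CP subcarriers
      phi = 2 * np.pi * (Ns / N) * (eps_true + k * eta_true)
      phi_meas = phi + rng.normal(scale=0.01, size=k.size)        # measured pilot phases

      # Least-squares line fit of the normalized phases: intercept -> RFO, slope -> SFO.
      slope, intercept = np.polyfit(k, phi_meas / (2 * np.pi * Ns / N), 1)
      print(f"estimated RFO eps ~ {intercept:.4f}, SFO eta ~ {slope:.2e}")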

  10. Vanishing points detection using combination of fast Hough transform and deep learning

    NASA Astrophysics Data System (ADS)

    Sheshkus, Alexander; Ingacheva, Anastasia; Nikolaev, Dmitry

    2018-04-01

    In this paper we propose a novel method for vanishing point detection based on a convolutional neural network (CNN) approach and the fast Hough transform algorithm. We show how to define a fast Hough transform neural network layer and how to use it in order to increase the usability of the neural network approach for the vanishing point detection task. Our algorithm consists of a CNN with a sequence of convolutional and fast Hough transform layers. We build an estimator for the distribution of possible vanishing points in the image. This distribution can be used to find candidate vanishing points. We provide experimental results from tests of the suggested method using images collected from videos of road trips. Our approach shows stable results on test images with different projective distortions and noise. The described approach can be implemented efficiently for mobile GPUs and CPUs.

  11. The influence of pore structure parameters on the digital core recovery degree

    NASA Astrophysics Data System (ADS)

    Xia, Huifen; Zhao, Ling; Sun, Yanyu; Yuan, Shi

    2017-05-01

    Constructing a digital core has unique advantages in research on water flooding and polymer flooding oil displacement efficiency. The pore-throat size distribution frequency was measured by mercury injection experiments, the coordination number by CT scanning, and the wettability by the imbibition displacement method. Taking the pore-throat ratio and wettability into account, and using the principle of adaptive porosity, the digital core was constructed by fitting the permeability. The results show that models with a concentrated throat-size distribution give a higher recovery degree under water flooding, while models with a more dispersed distribution give a higher recovery degree under polymer flooding. At the same number of injected pore volumes of polymer, models with a larger coordination number give a higher recovery degree for both water flooding and polymer flooding.

  12. Two-Party secret key distribution via a modified quantum secret sharing protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grice, Warren P.; Evans, Philip G.; Lawrie, Benjamin

    We present and demonstrate a method of distributing secret information based on N-party single-qubit Quantum Secret Sharing (QSS) in a modified plug-and-play two-party Quantum Key Distribution (QKD) system with N - 2 intermediate nodes and compare it to both standard QSS and QKD. Our setup is based on the Clavis2 QKD system built by ID Quantique but is generalizable to any implementation. We show that any two out of N parties can build a secret key based on partial information from each other and with collaboration from the remaining N - 2 parties. This method significantly reduces the number of resources (single-photon detectors, lasers and dark fiber connections) needed to implement QKD on the grid.

  13. Inversion method based on stochastic optimization for particle sizing.

    PubMed

    Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix

    2016-08-01

    A stochastic inverse method is presented based on a hybrid evolutionary optimization algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46≤α≤150. The HEOA ensures convergence to the near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem.

  14. Two-Party secret key distribution via a modified quantum secret sharing protocol

    DOE PAGES

    Grice, Warren P.; Evans, Philip G.; Lawrie, Benjamin; ...

    2015-01-01

    We present and demonstrate a method of distributing secret information based on N-party single-qubit Quantum Secret Sharing (QSS) in a modified plug-and-play two-party Quantum Key Distribution (QKD) system with N - 2 intermediate nodes and compare it to both standard QSS and QKD. Our setup is based on the Clavis2 QKD system built by ID Quantique but is generalizable to any implementation. We show that any two out of N parties can build a secret key based on partial information from each other and with collaboration from the remaining N - 2 parties. This method significantly reduces the number of resources (single-photon detectors, lasers and dark fiber connections) needed to implement QKD on the grid.

  15. Cell wall microstructure, pore size distribution and absolute density of hemp shiv

    PubMed Central

    Lawrence, M.; Ansell, M. P.; Hussain, A.

    2018-01-01

    This paper, for the first time, fully characterizes the intrinsic physical parameters of hemp shiv including cell wall microstructure, pore size distribution and absolute density. Scanning electron microscopy revealed microstructural features similar to hardwoods. Confocal microscopy revealed three major layers in the cell wall: middle lamella, primary cell wall and secondary cell wall. Computed tomography improved the visualization of pore shape and pore connectivity in three dimensions. Mercury intrusion porosimetry (MIP) showed that the average accessible porosity was 76.67 ± 2.03% and pore size classes could be distinguished into micropores (3–10 nm) and macropores (0.1–1 µm and 20–80 µm). The absolute density was evaluated by helium pycnometry, MIP and Archimedes' methods. The results show that these methods can lead to misinterpretation of absolute density. The MIP method showed a realistic absolute density (1.45 g cm−3) consistent with the density of the known constituents, including lignin, cellulose and hemi-cellulose. However, helium pycnometry and Archimedes’ methods gave falsely low values owing to 10% of the volume being inaccessible pores, which require sample pretreatment in order to be filled by liquid or gas. This indicates that the determination of the cell wall density is strongly dependent on sample geometry and preparation. PMID:29765652

  16. Cell wall microstructure, pore size distribution and absolute density of hemp shiv

    NASA Astrophysics Data System (ADS)

    Jiang, Y.; Lawrence, M.; Ansell, M. P.; Hussain, A.

    2018-04-01

    This paper, for the first time, fully characterizes the intrinsic physical parameters of hemp shiv including cell wall microstructure, pore size distribution and absolute density. Scanning electron microscopy revealed microstructural features similar to hardwoods. Confocal microscopy revealed three major layers in the cell wall: middle lamella, primary cell wall and secondary cell wall. Computed tomography improved the visualization of pore shape and pore connectivity in three dimensions. Mercury intrusion porosimetry (MIP) showed that the average accessible porosity was 76.67 ± 2.03% and pore size classes could be distinguished into micropores (3-10 nm) and macropores (0.1-1 µm and 20-80 µm). The absolute density was evaluated by helium pycnometry, MIP and Archimedes' methods. The results show that these methods can lead to misinterpretation of absolute density. The MIP method showed a realistic absolute density (1.45 g cm-3) consistent with the density of the known constituents, including lignin, cellulose and hemi-cellulose. However, helium pycnometry and Archimedes' methods gave falsely low values owing to 10% of the volume being inaccessible pores, which require sample pretreatment in order to be filled by liquid or gas. This indicates that the determination of the cell wall density is strongly dependent on sample geometry and preparation.

  17. Simple proof of security of the BB84 quantum key distribution protocol

    PubMed

    Shor; Preskill

    2000-07-10

    We prove that the 1984 protocol of Bennett and Brassard (BB84) for quantum key distribution is secure. We first give a key distribution protocol based on entanglement purification, which can be proven secure using methods from Lo and Chau's proof of security for a similar protocol. We then show that the security of this protocol implies the security of BB84. The entanglement purification based protocol uses Calderbank-Shor-Steane codes, and properties of these codes are used to remove the use of quantum computation from the Lo-Chau protocol.

  18. Computation of parton distributions from the quasi-PDF approach at the physical point

    NASA Astrophysics Data System (ADS)

    Alexandrou, Constantia; Bacchio, Simone; Cichy, Krzysztof; Constantinou, Martha; Hadjiyiannakou, Kyriakos; Jansen, Karl; Koutsou, Giannis; Scapellato, Aurora; Steffens, Fernanda

    2018-03-01

    We show the first results for parton distribution functions within the proton at the physical pion mass, employing the method of quasi-distributions. In particular, we present the matrix elements for the iso-vector combination of the unpolarized, helicity and transversity quasi-distributions, obtained with Nf = 2 twisted mass clover-improved fermions and a proton boosted with momentum |p⃗| = 0.83 GeV. The momentum smearing technique has been applied to improve the overlap with the boosted proton state. Moreover, we present the renormalized helicity matrix elements in the RI' scheme, following the non-perturbative renormalization prescription recently developed by our group.

  19. Reconstructing the equilibrium Boltzmann distribution from well-tempered metadynamics.

    PubMed

    Bonomi, M; Barducci, A; Parrinello, M

    2009-08-01

    Metadynamics is a widely used and successful method for reconstructing the free-energy surface of complex systems as a function of a small number of suitably chosen collective variables. This is achieved by biasing the dynamics of the system. The bias acting on the collective variables distorts the probability distribution of the other variables. Here we present a simple reweighting algorithm for recovering the unbiased probability distribution of any variable from a well-tempered metadynamics simulation. We show the efficiency of the reweighting procedure by reconstructing the distribution of the four backbone dihedral angles of alanine dipeptide from two- and even one-dimensional metadynamics simulations. © 2009 Wiley Periodicals, Inc.
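
    For reference, a commonly used reweighting relation for well-tempered metadynamics is written below (the Tiwary-Parrinello form); it is stated here as background and is not necessarily the exact algorithm of the paper above, which is formulated through the time evolution of the biased distribution.

      % Frame weights for well-tempered metadynamics reweighting; V(s,t) is the
      % accumulated bias, gamma the bias factor and beta = 1/(k_B T).
      \[
        w(t) \;\propto\; \exp\!\left\{\beta\,[V(s(t),t) - c(t)]\right\},
        \qquad
        c(t) \;=\; \frac{1}{\beta}\,
        \ln\frac{\int \mathrm{d}s\; e^{\beta \frac{\gamma}{\gamma-1} V(s,t)}}
                {\int \mathrm{d}s\; e^{\beta \frac{1}{\gamma-1} V(s,t)}} .
      \]
      % Histogramming any unbiased variable with these frame weights recovers its
      % equilibrium Boltzmann distribution.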

  20. Inversion of particle-size distribution from angular light-scattering data with genetic algorithms.

    PubMed

    Ye, M; Wang, S; Lu, Y; Hu, T; Zhu, Z; Xu, Y

    1999-04-20

    A stochastic inverse technique based on a genetic algorithm (GA) to invert particle-size distribution from angular light-scattering data is developed. This inverse technique is independent of any given a priori information of particle-size distribution. Numerical tests show that this technique can be successfully applied to inverse problems with high stability in the presence of random noise and low susceptibility to the shape of distributions. It has also been shown that the GA-based inverse technique is more efficient in use of computing time than the inverse Monte Carlo method recently developed by Ligon et al. [Appl. Opt. 35, 4297 (1996)].

  1. Electrohydrodynamic assisted droplet alignment for lens fabrication by droplet evaporation

    NASA Astrophysics Data System (ADS)

    Wang, Guangxu; Deng, Jia; Guo, Xing

    2018-04-01

    Lens fabrication by droplet evaporation has attracted a lot of attention since the fabrication approach is simple and moldless. Droplet position accuracy is a critical parameter in this approach, and thus it is of great importance to use accurate methods to realize droplet position alignment. In this paper, we propose an electrohydrodynamic (EHD) assisted droplet alignment method. An electrostatic force was induced at the interface between materials to overcome the surface tension and gravity. The deviation of the droplet position from the center region was eliminated and alignment was successfully realized. We demonstrated the capability of the proposed method theoretically and experimentally. First, we built a simulation model coupling the three-phase flow formulations and the EHD equations to study the three-phase flow process in an electric field. Results show that it is the uneven electric field distribution that leads to the relative movement of the droplet. Then, we conducted experiments to verify the method. Experimental results are consistent with the numerical simulation results. Moreover, we successfully fabricated a crater lens after applying the proposed method. A light emitting diode module packaged with the fabricated crater lens shows a significant light intensity distribution adjustment compared with a spherical cap lens.

  2. Combining counts and incidence data: an efficient approach for estimating the log-normal species abundance distribution and diversity indices.

    PubMed

    Bellier, Edwige; Grøtan, Vidar; Engen, Steinar; Schartau, Ann Kristin; Diserud, Ola H; Finstad, Anders G

    2012-10-01

    Obtaining accurate estimates of diversity indices is difficult because the number of species encountered in a sample increases with sampling intensity. We introduce a novel method that requires only that the presence of species in a sample be assessed, while counts of the number of individuals per species are required for just a small part of the sample. To account for species included as incidence data in the species abundance distribution, we modify the likelihood function of the classical Poisson log-normal distribution. Using simulated community assemblages, we contrast diversity estimates based on a community sample, a subsample randomly extracted from the community sample, and a mixture sample where incidence data are added to a subsample. We show that the mixture sampling approach provides more accurate estimates than the subsample, at little extra cost. Diversity indices estimated from a freshwater zooplankton community sampled using the mixture approach show the same pattern of results as the simulation study. Our method efficiently increases the accuracy of diversity estimates and the comprehension of the left tail of the species abundance distribution. We show how to choose the sample size needed for a compromise between information gained, accuracy of the estimates and cost expended when assessing biological diversity. The sample size estimates are obtained from key community characteristics, such as the expected number of species in the community, the expected number of individuals in a sample and the evenness of the community.
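
    A minimal sketch of the Poisson log-normal building block that such a likelihood is built from; the function below is an illustration under assumed parameters, not the authors' modified likelihood code, which additionally combines count terms P(N = n) with incidence terms P(N >= 1).

      import numpy as np
      from scipy import integrate, stats

      def poisson_lognormal_pmf(n, mu, sigma):
          """P(N = n) when N | lam ~ Poisson(lam) and log(lam) ~ Normal(mu, sigma^2)."""
          integrand = lambda lam: (stats.poisson.pmf(n, lam)
                                   * stats.lognorm.pdf(lam, s=sigma, scale=np.exp(mu)))
          val, _ = integrate.quad(integrand, 0, np.inf)
          return val

      # Abundance counts would contribute P(N = n) to the likelihood, while
      # presence-only (incidence) records contribute P(N >= 1) = 1 - P(N = 0).
      p_missed = poisson_lognormal_pmf(0, mu=0.5, sigma=1.0)
      print("P(species not detected):", round(p_missed, 4))
      print("P(species detected at least once):", round(1 - p_missed, 4))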

  3. New Tools for Comparing Microscopy Images: Quantitative Analysis of Cell Types in Bacillus subtilis

    PubMed Central

    van Gestel, Jordi; Vlamakis, Hera

    2014-01-01

    Fluorescence microscopy is a method commonly used to examine individual differences between bacterial cells, yet many studies still lack a quantitative analysis of fluorescence microscopy data. Here we introduce some simple tools that microbiologists can use to analyze and compare their microscopy images. We show how image data can be converted to distribution data. These data can be subjected to a cluster analysis that makes it possible to objectively compare microscopy images. The distribution data can further be analyzed using distribution fitting. We illustrate our methods by scrutinizing two independently acquired data sets, each containing microscopy images of a doubly labeled Bacillus subtilis strain. For the first data set, we examined the expression of srfA and tapA, two genes which are expressed in surfactin-producing and matrix-producing cells, respectively. For the second data set, we examined the expression of eps and tapA; these genes are expressed in matrix-producing cells. We show that srfA is expressed by all cells in the population, a finding which contrasts with a previously reported bimodal distribution of srfA expression. In addition, we show that eps and tapA do not always have the same expression profiles, despite being expressed in the same cell type: both operons are expressed in cell chains, while single cells mainly express eps. These findings exemplify that the quantification and comparison of microscopy data can yield insights that otherwise would go unnoticed. PMID:25448819

  4. New tools for comparing microscopy images: quantitative analysis of cell types in Bacillus subtilis.

    PubMed

    van Gestel, Jordi; Vlamakis, Hera; Kolter, Roberto

    2015-02-15

    Fluorescence microscopy is a method commonly used to examine individual differences between bacterial cells, yet many studies still lack a quantitative analysis of fluorescence microscopy data. Here we introduce some simple tools that microbiologists can use to analyze and compare their microscopy images. We show how image data can be converted to distribution data. These data can be subjected to a cluster analysis that makes it possible to objectively compare microscopy images. The distribution data can further be analyzed using distribution fitting. We illustrate our methods by scrutinizing two independently acquired data sets, each containing microscopy images of a doubly labeled Bacillus subtilis strain. For the first data set, we examined the expression of srfA and tapA, two genes which are expressed in surfactin-producing and matrix-producing cells, respectively. For the second data set, we examined the expression of eps and tapA; these genes are expressed in matrix-producing cells. We show that srfA is expressed by all cells in the population, a finding which contrasts with a previously reported bimodal distribution of srfA expression. In addition, we show that eps and tapA do not always have the same expression profiles, despite being expressed in the same cell type: both operons are expressed in cell chains, while single cells mainly express eps. These findings exemplify that the quantification and comparison of microscopy data can yield insights that otherwise would go unnoticed. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  5. Geographic and habitat partitioning of genetically distinct zooxanthellae (Symbiodinium) in Acropora corals on the Great Barrier Reef.

    PubMed

    Ulstrup, K E; Van Oppen, M J H

    2003-12-01

    Intra- and intercolony diversity and distribution of zooxanthellae in acroporid corals are largely uncharted. In this study, two molecular methods were applied to determine the distribution of zooxanthellae in the branching corals Acropora tenuis and A. valida at several reef locations in the central section of the Great Barrier Reef. Sun-exposed and shaded parts of all colonies were examined. Single-stranded conformational polymorphism analysis showed that individual colonies of A. tenuis at two locations harbour two strains of Symbiodinium belonging to clade C (C1 and C2), whereas conspecific colonies at two other reefs harboured a single zooxanthella strain. A. valida was found to simultaneously harbour strains belonging to two distinct phylogenetic clades (C and D) at all locations sampled. A novel method with improved sensitivity (quantitative polymerase chain reaction using Taqman fluorogenic probes) was used to map the relative abundance distribution of the two zooxanthella clades. At two of the five sampling locations both coral species were collected. At these two locations, composition of the zooxanthella communities showed the same pattern in both coral species, i.e. correlation with ambient light in Pioneer Bay and an absence thereof in Nelly Bay. The results show that the distribution of genetically distinct zooxanthellae is correlated with light regime and possibly temperature in some (but not all) colonies of A. tenuis and A. valida and at some reef locations, which we interpret as acclimation to local environmental conditions.

  6. Aggregation state and magnetic properties of magnetite nanoparticles controlled by an optimized silica coating

    NASA Astrophysics Data System (ADS)

    Pérez, Nicolás; Moya, C.; Tartaj, P.; Labarta, A.; Batlle, X.

    2017-01-01

    The control of magnetic interactions is becoming essential to expand and improve the applicability of magnetic nanoparticles (NPs). Here, we show that an optimized microemulsion method can be used to obtain homogeneous silica coatings even on single magnetic nuclei of highly crystalline Fe3-xO4 NPs (7 and 16 nm) derived from a high-temperature method. We show that the thickness of this coating can be controlled almost at will, allowing much greater average separation among particles as compared to the oleic acid coating present on pristine NPs. Magnetic susceptibility studies show that the thickness of the silica coating allows the control of magnetic interactions. Specifically, as this effect is better displayed for the smallest particles, we show that dipole-dipole interparticle interactions can be tuned progressively for the 7 nm NPs, from almost non-interacting to strongly interacting particles at room temperature. The quantitative analysis of the magnetic properties unambiguously suggests that dipolar interactions significantly broaden the effective distribution of energy barriers by spreading the distribution of activation magnetic volumes.

  7. Comparative studies on the properties of glycyrrhetinic acid-loaded PLGA microparticles prepared by emulsion and template methods.

    PubMed

    Wang, Hong; Zhang, Guangxing; Sui, Hong; Liu, Yanhua; Park, Kinam; Wang, Wenping

    2015-12-30

    The O/W emulsion method has been widely used for the production of poly(lactide-co-glycolide) (PLGA) microparticles. Recently, a template method has been used to make homogeneous microparticles with predefined size and shape, and has been shown to be useful in encapsulating different types of active compounds. However, differences between the template method and the emulsion method have not been examined. In the current study, PLGA microparticles were prepared by the two methods using glycyrrhetinic acid (GA) as a model drug. The properties of the obtained microparticles were characterized and compared in terms of drug distribution, in vitro release, and degradation. An encapsulation efficiency of over 70% and a mean particle size of about 40 μm were found for both methods. DSC thermograms and XRPD diffractograms indicated that GA was highly dispersed or in the amorphous state in the matrix of the microparticles. The emulsion method produced microparticles with a broad size distribution, a core-shell type structure and many drug-rich domains inside each microparticle. Its drug release and matrix degradation were slow before Day 50 and then accelerated. In contrast, the template method formed microparticles with a narrow size distribution and a uniform drug distribution without apparent drug-rich domains. The template microparticles, with a loading efficiency of 85%, exhibited a zero-order release profile for 3 months after the initial burst release of 26.7%, as well as a steady surface erosion process. Microparticles made by the two different methods thus showed two distinct drug release profiles. The two methods can be complementary to each other in the optimization of drug formulations for achieving predetermined drug release patterns. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Dynamical Inference in the Milky Way

    NASA Astrophysics Data System (ADS)

    Bovy, Jo

    Current and future surveys of the Galaxy contain a wealth of information about the structure and evolution of the Galactic disk and halo. Teasing out this information is complicated by measurement uncertainties, missing data, and sparse sampling. I develop and describe several applications of generative modeling--creating an approximate description of the probability of the data given the physical parameters of the system--to deal with these issues. I develop a method for inferring the Galactic potential from individual observations of stellar kinematics such as will be furnished by the upcoming Gaia space astrometry mission. This method takes uncertainties in our knowledge of the distribution function of stellar tracers into account through marginalization. I demonstrate the method by inferring the force law in the Solar System from observations of the positions and velocities of the eight planets at a single epoch. I apply a similar method to derive the Milky Way's circular velocity from observations of maser kinematics. I infer the velocity distribution of nearby stars from Hipparcos data, which only consist of tangential velocities, by forward modeling the underlying distribution with a flexible multi-Gaussian model. I characterize the contribution of several "moving groups"---overdensities of co-moving stars---to the full distribution. By studying the properties of stars in these moving groups, I show that they do not form a single-burst population and that they are most likely due to transient non-axisymmetric features of the disk, such as transient spiral structure. By forward modeling one such scenario, I show how the Hercules moving group can be traced around the Galaxy by future surveys, which would confirm that the Milky Way bar's outer Lindblad resonance lies near the Solar radius.

  9. Modification of Kirchhoff migration with variable sound speed and attenuation for acoustic imaging of media and application to tomographic imaging of the breast

    PubMed Central

    Schmidt, Steven; Duric, Nebojsa; Li, Cuiping; Roy, Olivier; Huang, Zhi-Feng

    2011-01-01

    Purpose: To explore the feasibility of improving cross-sectional reflection imaging of the breast using refractive and attenuation corrections derived from ultrasound tomography data. Methods: The authors have adapted the planar Kirchhoff migration method, commonly used in geophysics to reconstruct reflection images, for use in ultrasound tomography imaging of the breast. Furthermore, the authors extended this method to allow for refractive and attenuative corrections. Using clinical data obtained with a breast imaging prototype, the authors applied this method to generate cross-sectional reflection images of the breast that were corrected using known distributions of sound speed and attenuation obtained from the same data. Results: A comparison of images reconstructed with and without the corrections showed varying degrees of improvement. The sound speed correction resulted in sharpening of detail, while the attenuation correction reduced the central darkening caused by path length dependent losses. The improvements appeared to be greatest when dense tissue was involved and the least for fatty tissue. These results are consistent with the expectation that denser tissues lead to both greater refractive effects and greater attenuation. Conclusions: Although conventional ultrasound techniques use time-gain control to correct for attenuation gradients, these corrections lead to artifacts because the true attenuation distribution is not known. The use of constant sound speed leads to additional artifacts that arise from not knowing the sound speed distribution. The authors show that in the context of ultrasound tomography, it is possible to construct reflection images of the breast that correct for inhomogeneous distributions of both sound speed and attenuation. PMID:21452737

  10. Engineering of Droplet Manipulation in Tertiary Junction Microfluidic Channels

    DTIC Science & Technology

    2017-06-30

    Distribution Statement A (public release): distribution unlimited. Abstract (fragment): We have carried out an experimental and ... method (LBM). Both the experimental and numerical results showed good agreement and suggested that at higher Re equal to 3, the flow was dominated by ... Period of Performance: 06/01/2015 – 11/01/2016.

  11. Tests of Fit for Asymmetric Laplace Distributions with Applications on Financial Data

    NASA Astrophysics Data System (ADS)

    Fragiadakis, Kostas; Meintanis, Simos G.

    2008-11-01

    New goodness-of-fit tests for the family of asymmetric Laplace distributions are constructed. The proposed tests are based on a weighted integral incorporating the empirical characteristic function of suitably standardized data, and can be written in a closed form appropriate for computer implementation. Monte Carlo results show that the new procedures are competitive with classical goodness-of-fit methods. Applications with financial data are also included.

  12. Income inequality in Romania: The exponential-Pareto distribution

    NASA Astrophysics Data System (ADS)

    Oancea, Bogdan; Andrei, Tudorel; Pirjol, Dan

    2017-03-01

    We present a study of the distribution of the gross personal income and income inequality in Romania, using individual tax income data, and both non-parametric and parametric methods. Comparing with official results based on household budget surveys (the Family Budgets Survey and the EU-SILC data), we find that the latter underestimate the income share of the high income region, and the overall income inequality. A parametric study shows that the income distribution is well described by an exponential distribution in the low and middle incomes region, and by a Pareto distribution in the high income region with Pareto coefficient α = 2.53. We note an anomaly in the distribution in the low incomes region (∼9,250 RON), and present a model which explains it in terms of partial income reporting.
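
    As a small illustration of the parametric tail fit, the sketch below computes the maximum-likelihood (Hill-type) estimate of the Pareto exponent for incomes above a threshold; the threshold, sample and true exponent are synthetic placeholders, not the Romanian tax data.

      import numpy as np

      rng = np.random.default_rng(4)

      # Hypothetical high-income sample (gross income in RON) above a chosen
      # threshold x_min; real tax-record data would replace this synthetic draw.
      x_min = 10_000.0
      alpha_true = 2.5
      incomes = x_min * (1.0 - rng.random(50_000)) ** (-1.0 / alpha_true)  # Pareto draws

      # Maximum-likelihood estimate of the Pareto exponent for x >= x_min:
      #   alpha_hat = n / sum(ln(x_i / x_min))
      alpha_hat = incomes.size / np.log(incomes / x_min).sum()
      print(f"estimated Pareto coefficient: {alpha_hat:.2f}")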

  13. Investigation of the Effect of the Non-uniform Flow Distribution After Compressor of Gas Turbine Engine on Inlet Parameters of the Turbine

    NASA Astrophysics Data System (ADS)

    Orlov, M. Yu; Lukachev, S. V.; Anisimov, V. M.

    2018-01-01

    The position of the combustion chamber between the compressor and the turbine, and the combined action of these elements, imply that their working processes are interconnected. One of the main requirements of the combustion chamber is the formation of the desired temperature field at the turbine inlet, which ensures the necessary durability of the nozzle assembly and blade wheel of the first stage of the high-pressure turbine. A method for integrated simulation of the combustion chamber and its neighboring components (compressor and turbine) was developed. In the first stage of the study, this method was used to investigate the influence of the non-uniform flow distribution downstream of the compressor blades on the combustion chamber workflow. The goal of the study is to assess the impact of the non-uniform flow distribution after the compressor on the parameters before the turbine. The calculation was carried out as a transient case for a selected operating mode of the engine. The simulation showed that including the compressor has an effect on the combustion chamber workflow and allows the temperature field at the turbine inlet to be determined, and its durability assessed, more accurately. In addition, the simulation with the turbine showed changes in the flow velocity and pressure distributions in the combustion chamber.

  14. Correlation between discrete probability and reaction front propagation rate in heterogeneous mixtures

    NASA Astrophysics Data System (ADS)

    Naine, Tarun Bharath; Gundawar, Manoj Kumar

    2017-09-01

    We demonstrate a very powerful correlation between the discrete probability of the distances of neighboring cells and the thermal wave propagation rate for a system of cells spread on a one-dimensional chain. A gamma distribution is employed to model the distances of neighboring cells. Because no analytical solution exists and the differences in the ignition times of adjacent reaction cells follow non-Markovian statistics, the thermal wave propagation rate for a one-dimensional system with randomly distributed cells is invariably obtained by numerical simulation. However, such simulations, which are based on Monte Carlo methods, require several iterations of calculations for different realizations of the distribution of adjacent cells. For several one-dimensional systems differing in the shape parameter of the gamma distribution, we show that the average reaction front propagation rates obtained from the discrete probability between two limits agree excellently with those obtained numerically. With the upper limit at 1.3, the lower limit depends on the non-dimensional ignition temperature. Additionally, this approach also facilitates the prediction of the burning limits of heterogeneous thermal mixtures. The proposed method completely eliminates the need for laborious, time-intensive numerical calculations, as the thermal wave propagation rates can now be calculated based only on the macroscopic entity of discrete probability.

  15. Long-term statistics of extreme tsunami height at Crescent City

    NASA Astrophysics Data System (ADS)

    Dong, Sheng; Zhai, Jinjin; Tao, Shanshan

    2017-06-01

    Historically, Crescent City is one of the most vulnerable communities impacted by tsunamis along the west coast of the United States, largely attributed to its offshore geography. Trans-ocean tsunamis usually produce large wave runup at Crescent Harbor resulting in catastrophic damages, property loss and human death. How to determine the return values of tsunami height using relatively short-term observation data is of great significance to assess the tsunami hazards and improve engineering design along the coast of Crescent City. In the present study, the extreme tsunami heights observed along the coast of Crescent City from 1938 to 2015 are fitted using six different probabilistic distributions, namely, the Gumbel distribution, the Weibull distribution, the maximum entropy distribution, the lognormal distribution, the generalized extreme value distribution and the generalized Pareto distribution. The maximum likelihood method is applied to estimate the parameters of all above distributions. Both Kolmogorov-Smirnov test and root mean square error method are utilized for goodness-of-fit test and the better fitting distribution is selected. Assuming that the occurrence frequency of tsunami in each year follows the Poisson distribution, the Poisson compound extreme value distribution can be used to fit the annual maximum tsunami amplitude, and then the point and interval estimations of return tsunami heights are calculated for structural design. The results show that the Poisson compound extreme value distribution fits tsunami heights very well and is suitable to determine the return tsunami heights for coastal disaster prevention.
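
    A minimal sketch of the return-level idea, assuming a stationary GEV fit to synthetic annual maxima with scipy; it omits the Poisson compound extreme value model, the goodness-of-fit comparison and the interval estimation described above.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)

      # Hypothetical annual maximum tsunami heights (m); the study uses the observed
      # 1938-2015 Crescent City record, which is not reproduced here.
      annual_max = stats.gumbel_r.rvs(loc=0.6, scale=0.4, size=78, random_state=rng)

      # Fit a generalized extreme value distribution by maximum likelihood and read
      # off return values as upper quantiles: the T-year height is exceeded on
      # average once every T years.
      shape, loc, scale = stats.genextreme.fit(annual_max)
      for T in (50, 100, 500):
          level = stats.genextreme.ppf(1 - 1 / T, shape, loc, scale)
          print(f"{T}-year return height ~ {level:.2f} m")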

  16. New shape models of asteroids reconstructed from sparse-in-time photometry

    NASA Astrophysics Data System (ADS)

    Durech, Josef; Hanus, Josef; Vanco, Radim; Oszkiewicz, Dagmara Anna

    2015-08-01

    Asteroid physical parameters - the shape, the sidereal rotation period, and the spin axis orientation - can be reconstructed from disk-integrated photometry, either dense (classical lightcurves) or sparse in time, by the lightcurve inversion method. We will review our recent progress in asteroid shape reconstruction from sparse photometry. Finding a unique solution of the inverse problem is time consuming because the sidereal rotation period has to be found by scanning a wide interval of possible periods. This can be solved efficiently by splitting the period parameter space into small parts that are sent to volunteers' computers and processed in parallel. We will show how this distributed computing approach works with currently available sparse photometry processed in the framework of the project Asteroids@home. In particular, we will show results based on the Lowell Photometric Database. The method produces reliable asteroid models with a very low rate of false solutions, and the pipelines and codes can be applied directly to other sources of sparse photometry - Gaia data, for example. We will present the distribution of the spin axes of hundreds of asteroids, discuss the dependence of the spin obliquity on the size of an asteroid, and show examples of the spin-axis distribution in asteroid families that confirm the Yarkovsky/YORP evolution scenario.

  17. Spatial analysis of the distribution of Spodoptera frugiperda (J.E. Smith) (Lepidoptera: Noctuidae) and losses in maize crop productivity using geostatistics.

    PubMed

    Farias, Paulo R S; Barbosa, José C; Busoli, Antonio C; Overal, William L; Miranda, Vicente S; Ribeiro, Susane M

    2008-01-01

    The fall armyworm, Spodoptera frugiperda (J.E. Smith), is one of the chief pests of maize in the Americas. The study of its spatial distribution is fundamental for designing correct control strategies, improving sampling methods, determining actual and potential crop losses, and adopting precision agriculture techniques. In São Paulo state, Brazil, a maize field was sampled at weekly intervals, from germination through harvest, for caterpillar densities using quadrats. In each of 200 quadrats, 10 plants were sampled per week. Harvest weights were obtained in the field for each quadrat, and ear diameters and lengths were also sampled (15 ears per quadrat) and used to estimate the potential productivity of the quadrat. Geostatistical analyses of caterpillar densities showed the greatest ranges for small caterpillars when semivariograms were fitted to a spherical model, which showed the best fit. As the caterpillars developed in the field, their spatial distribution became increasingly random, as shown by a model fitted to a straight line, indicating a lack of spatial dependence among samples. Harvest weight and ear length followed the spherical model, indicating the existence of spatial variability of the production parameters in the maize field. Geostatistics shows promise for the application of precision methods in the integrated control of pests.
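
    For illustration, the sketch below fits the spherical semivariogram model used in such analyses to a hypothetical empirical semivariogram; the lag distances, semivariances and starting values are assumptions, not the field data of the study.

      import numpy as np
      from scipy.optimize import curve_fit

      def spherical(h, nugget, sill, a):
          """Spherical semivariogram: rises as 1.5(h/a) - 0.5(h/a)^3 up to the range a."""
          h = np.asarray(h, dtype=float)
          gamma = nugget + sill * (1.5 * h / a - 0.5 * (h / a) ** 3)
          return np.where(h < a, gamma, nugget + sill)

      # Hypothetical empirical semivariogram of caterpillar counts: lag distances (m)
      # and semivariances; real values would come from the weekly quadrat samples.
      lags = np.array([5, 10, 15, 20, 25, 30, 40, 50, 60], dtype=float)
      gamma_emp = np.array([0.8, 1.4, 1.9, 2.3, 2.6, 2.8, 2.9, 3.0, 3.0])

      params, _ = curve_fit(spherical, lags, gamma_emp, p0=[0.5, 2.5, 35.0])
      print("nugget=%.2f, partial sill=%.2f, range=%.1f m" % tuple(params))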

  18. Heavy Metal Contamination Assessment and Partition for Industrial and Mining Gathering Areas

    PubMed Central

    Guan, Yang; Shao, Chaofeng; Ju, Meiting

    2014-01-01

    Industrial and mining activities have been recognized as the major sources of soil heavy metal contamination. This study introduced an improved Nemerow index method based on the Nemerow and geo-accumulation index. Taking a typical industrial and mining gathering area in Tianjin (China) as example, this study then analyzed the contamination sources as well as the ecological and integrated risks. The spatial distribution of the contamination level and ecological risk were determined using Geographic Information Systems. The results are as follows: (1) Zinc showed the highest contaminant level in the study area; the contamination levels of the other seven heavy metals assessed were relatively lower. (2) The combustion of fossil fuels and emissions from industrial and mining activities were the main sources of contamination in the study area. (3) The overall contamination level of heavy metals in the study area ranged from heavily contaminated to extremely contaminated and showed an uneven distribution. (4) The potential ecological risk showed an uneven distribution, and the overall ecological risk level ranged from low to moderate. This study also emphasized the importance of partition in industrial and mining areas, the extensive application of spatial analysis methods, and the consideration of human health risks in future studies. PMID:25032743
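
    The abstract does not state the exact improved-index formula, so the sketch below only shows the usual geo-accumulation index and a Nemerow-style aggregation of it; the concentrations, background values and combination rule are illustrative assumptions.

      import numpy as np

      # Hypothetical measured concentrations (mg/kg) and geochemical background
      # values for a few metals at one sampling site.
      metals     = ["Zn", "Cu", "Pb", "Cd"]
      conc       = np.array([420.0, 55.0, 80.0, 0.6])
      background = np.array([74.0, 27.0, 26.0, 0.1])

      # Geo-accumulation index: Igeo = log2(Cn / (1.5 * Bn)).
      igeo = np.log2(conc / (1.5 * background))

      # Nemerow-style aggregation of the single-metal indices (mean and maximum).
      nemerow = np.sqrt((igeo.mean() ** 2 + igeo.max() ** 2) / 2.0)
      print(dict(zip(metals, np.round(igeo, 2))), "integrated index:", round(float(nemerow), 2))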

  19. Effects of hexagonal boron nitride on dry compression mixture of Avicel DG and Starch 1500.

    PubMed

    Uğurlu, Timuçin; Halaçoğlu, Mekin Doğa

    2016-01-01

    The objective of this study was to investigate the lubrication properties of hexagonal boron nitride (HBN) on a (1:1) binary mixture of Avicel DG and Starch 1500 after using the dry granulation-slugging method and compare it with conventional lubricants, such as magnesium stearate (MGST), glyceryl behenate (COMP) and stearic acid (STAC). MGST is one of the most commonly used lubricants in the pharmaceutical industry. However, it has several adverse effects on tablet properties. In our current study, we employed various methods to eradicate the work hardening phenomenon in dry granulation, and used HBN as a new lubricant to overcome the adverse effects of other lubricants on tablet properties. HBN was found to be as effective as MGST and did not show any significant adverse effects on the crushing strength or work hardening. From the scanning electron microscope (SEM) images, it was concluded that HBN distributed better than MGST. As well as showing better distribution, HBN's effect on disintegration was the least pronounced. Semi-quantitative weight percent distribution of B and N elements in the tablets was obtained using EDS (energy dispersive spectroscopy). Based on atomic force microscope (AFM) surface roughness images, formulations prepared with 1% HBN showed better plastic character than those prepared with MGST.

  20. An analysis of the symmetry issue in the ℓ-distribution method of gas radiation in non-uniform gaseous media

    NASA Astrophysics Data System (ADS)

    André, Frédéric

    2017-03-01

    The recently proposed ℓ-distribution/ICE (Iterative Copula Evaluation) method of gas radiation suffers from symmetry issues when applied in highly non-isothermal and non-homogeneous gaseous media. This problem is studied in a detailed theoretical way. The objectives of the present paper are: (1) to provide a mathematical analysis of this symmetry problem and (2) to suggest a decisive factor, defined in terms of the ratio between the narrow-band Planck and Rosseland mean absorption coefficients, to handle this issue. Comparisons of model predictions with reference LBL calculations show that the proposed criterion improves the accuracy of the intuitive ICE method for applications in highly non-uniform gases at high temperatures.
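
    For reference, the two narrow-band means entering the suggested criterion are conventionally defined as follows (standard definitions written with the Planck function B_ν(T); the paper's exact narrow-band averaging is not reproduced here):

      \[
        \bar{\kappa}_P
          = \frac{\int_{\Delta\nu} \kappa_\nu\, B_\nu(T)\, \mathrm{d}\nu}
                 {\int_{\Delta\nu} B_\nu(T)\, \mathrm{d}\nu},
        \qquad
        \frac{1}{\bar{\kappa}_R}
          = \frac{\int_{\Delta\nu} \dfrac{1}{\kappa_\nu}\,
                  \dfrac{\partial B_\nu}{\partial T}\, \mathrm{d}\nu}
                 {\int_{\Delta\nu} \dfrac{\partial B_\nu}{\partial T}\, \mathrm{d}\nu}.
      \]

    The suggested decisive factor is then built from the ratio of these two means evaluated on each narrow band.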

  1. Distributed optical fiber-based monitoring approach of spatial seepage behavior in dike engineering

    NASA Astrophysics Data System (ADS)

    Su, Huaizhi; Ou, Bin; Yang, Lifu; Wen, Zhiping

    2018-07-01

    Seepage failure is the most common failure mode in dike engineering. Because seepage in a dike, a longitudinally extended structure, is random, strongly concealed and initially small in magnitude, a distributed fiber temperature sensing system (DTS) with an improved optical fiber layout scheme is used to locate the initial interpolation points of the saturation line. With the barycentric Lagrange interpolation collocation method (BLICM), the infiltration surface over the full dike cross section is then generated. Combined with the linear optical fiber seepage monitoring method, BLICM is applied to an engineering case, which demonstrates a real-time, full-section seepage monitoring technique for dikes based on the combined method.
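
    Barycentric Lagrange interpolation itself is standard; a minimal sketch is given below, applied to hypothetical saturation-line depths at a few monitored chainages (the node values are made up, and the study's collocation formulation over the full dike cross section is not reproduced).

      import numpy as np

      def barycentric_weights(x):
          """Barycentric weights w_j = 1 / prod_{k != j} (x_j - x_k)."""
          x = np.asarray(x, dtype=float)
          w = np.ones_like(x)
          for j in range(x.size):
              w[j] = 1.0 / np.prod(np.delete(x[j] - x, j))
          return w

      def barycentric_eval(x_nodes, f_nodes, x_eval):
          """Evaluate p(x) = sum(w_j f_j / (x - x_j)) / sum(w_j / (x - x_j))."""
          w = barycentric_weights(x_nodes)
          x_eval = np.asarray(x_eval, dtype=float)
          out = np.empty_like(x_eval)
          for i, x in np.ndenumerate(x_eval):
              diff = x - x_nodes
              if np.any(diff == 0.0):              # x coincides with a node
                  out[i] = f_nodes[np.argmin(np.abs(diff))]
              else:
                  out[i] = np.sum(w * f_nodes / diff) / np.sum(w / diff)
          return out

      # Hypothetical saturation-line depths (m) located by DTS at a few chainages (m);
      # the interpolant sketches the infiltration line between monitored sections.
      chainage = np.array([0.0, 20.0, 40.0, 60.0, 80.0])
      depth = np.array([2.1, 2.4, 3.0, 3.3, 3.2])
      print(barycentric_eval(chainage, depth, np.array([10.0, 50.0, 70.0])))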

  2. Distributed bearing fault diagnosis based on vibration analysis

    NASA Astrophysics Data System (ADS)

    Dolenc, Boštjan; Boškoski, Pavle; Juričić, Đani

    2016-01-01

    Distributed bearing faults appear under various circumstances, for example due to electroerosion or the progression of localized faults. Bearings with distributed faults tend to generate more complex vibration patterns than those with localized faults. Despite the frequent occurrence of such faults, their diagnosis has attracted limited attention. This paper examines a method for the diagnosis of distributed bearing faults employing vibration analysis. The vibrational patterns generated are modeled by incorporating the geometrical imperfections of the bearing components. Comparing envelope spectra of vibration signals shows that one can distinguish between localized and distributed faults. Furthermore, a diagnostic procedure for the detection of distributed faults is proposed. This is evaluated on several bearings with naturally born distributed faults, which are compared with fault-free bearings and bearings with localized faults. It is shown experimentally that features extracted from vibrations in fault-free, localized and distributed fault conditions form clearly separable clusters, thus enabling diagnosis.
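
    A common building block of such vibration analysis is the envelope spectrum; the sketch below computes it with the Hilbert transform for a synthetic amplitude-modulated signal (the defect and resonance frequencies are arbitrary assumptions, not values from the paper).

      import numpy as np
      from scipy.signal import hilbert
      from numpy.fft import rfft, rfftfreq

      rng = np.random.default_rng(6)

      # Hypothetical vibration signal: a carrier at a structural resonance,
      # amplitude-modulated at a bearing defect frequency, plus noise.
      fs, T = 20_000, 2.0
      t = np.arange(0, T, 1 / fs)
      defect_hz, resonance_hz = 87.0, 3_000.0
      signal = (1 + 0.5 * np.cos(2 * np.pi * defect_hz * t)) * np.sin(2 * np.pi * resonance_hz * t)
      signal = signal + 0.3 * rng.normal(size=t.size)

      # Envelope via the Hilbert transform, then its spectrum; sharp peaks at the
      # defect frequency (and harmonics) indicate localized faults, while distributed
      # faults tend to spread energy over broader bands.
      envelope = np.abs(hilbert(signal))
      envelope -= envelope.mean()
      spectrum = np.abs(rfft(envelope)) / t.size
      freqs = rfftfreq(t.size, 1 / fs)
      print("dominant envelope frequency: %.1f Hz" % freqs[np.argmax(spectrum)])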

  3. Uncertainties in obtaining high reliability from stress-strength models

    NASA Technical Reports Server (NTRS)

    Neal, Donald M.; Matthews, William T.; Vangel, Mark G.

    1992-01-01

    There has been a recent interest in determining high statistical reliability in the risk assessment of aircraft components. The potential consequences of incorrectly assuming a particular statistical distribution for the stress or strength data used in obtaining high reliability values are identified. The reliability is computed as the probability of the strength being greater than the stress over the range of stress values; this is often referred to as the stress-strength model. A sensitivity analysis was performed, comparing reliability results in order to evaluate the effects of assuming specific statistical distributions. Both known population distributions, and distributions that differed slightly from the known ones, were considered. Results showed substantial differences in reliability estimates even for almost nondetectable differences in the assumed distributions. These differences represent a potential problem in using the stress-strength model for high reliability computations, since in practice it is impossible to ever know the exact (population) distribution. An alternative reliability computation procedure is examined involving determination of a lower bound on the reliability values using extreme value distributions. This procedure reduces the possibility of obtaining nonconservative reliability estimates. Results indicated the method can provide conservative bounds when computing high reliability.
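
    For context, the stress-strength reliability R = P(Strength > Stress) has a closed form when both quantities are normal; the sketch below evaluates it and checks it by Monte Carlo with illustrative means and standard deviations (not values from the report, and it does not reproduce the extreme-value lower-bound procedure).

      import numpy as np
      from scipy.stats import norm

      # Illustrative strength and stress (load) parameters.
      mu_s, sd_s = 120.0, 8.0
      mu_l, sd_l = 90.0, 10.0

      # Closed form for independent normals:
      #   R = Phi((mu_s - mu_l) / sqrt(sd_s^2 + sd_l^2)).
      r_closed = norm.cdf((mu_s - mu_l) / np.hypot(sd_s, sd_l))

      # Monte Carlo check of the same probability.
      rng = np.random.default_rng(7)
      n = 1_000_000
      r_mc = np.mean(rng.normal(mu_s, sd_s, n) > rng.normal(mu_l, sd_l, n))
      print(f"closed form R = {r_closed:.5f}, Monte Carlo R = {r_mc:.5f}")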

  4. Directional data analysis under the general projected normal distribution

    PubMed Central

    Wang, Fangpo; Gelfand, Alan E.

    2013-01-01

    The projected normal distribution is an under-utilized model for explaining directional data. In particular, the general version provides flexibility, e.g., asymmetry and possible bimodality along with convenient regression specification. Here, we clarify the properties of this general class. We also develop fully Bayesian hierarchical models for analyzing circular data using this class. We show how they can be fit using MCMC methods with suitable latent variables. We show how posterior inference for distributional features such as the angular mean direction and concentration can be implemented as well as how prediction within the regression setting can be handled. With regard to model comparison, we argue for an out-of-sample approach using both a predictive likelihood scoring loss criterion and a cumulative rank probability score criterion. PMID:24046539
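
    A minimal sketch of where projected normal directional data come from: draw bivariate normal vectors and keep only their angles; the mean vector and covariance below are illustrative, and the hierarchical Bayesian fitting described above is not reproduced.

      import numpy as np

      rng = np.random.default_rng(8)

      # The general projected normal direction is the angle of a bivariate normal
      # vector; asymmetry and bimodality come from the mean and (non-identity)
      # covariance. Parameters here are illustrative only.
      mu = np.array([1.0, 0.5])
      cov = np.array([[1.0, 0.6],
                      [0.6, 2.0]])
      xy = rng.multivariate_normal(mu, cov, size=10_000)
      theta = np.arctan2(xy[:, 1], xy[:, 0])          # circular data on (-pi, pi]

      # Angular mean direction and mean resultant length (a concentration measure).
      mean_dir = np.arctan2(np.sin(theta).mean(), np.cos(theta).mean())
      resultant = np.hypot(np.sin(theta).mean(), np.cos(theta).mean())
      print(f"mean direction = {mean_dir:.3f} rad, mean resultant length = {resultant:.3f}")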

  5. Temporal and spatial characteristics of extreme precipitation events in the Midwest of Jilin Province based on multifractal detrended fluctuation analysis method and copula functions

    NASA Astrophysics Data System (ADS)

    Guo, Enliang; Zhang, Jiquan; Si, Ha; Dong, Zhenhua; Cao, Tiehua; Lan, Wu

    2017-10-01

    Environmental changes have brought significant changes and challenges to water resources and their management around the world, including increasing climate variability, land use change, intensive agriculture, rapid urbanization and industrial development, and especially much more frequent extreme precipitation events. All of these greatly affect water resources and socioeconomic development. In this study, we take extreme precipitation events in the Midwest of Jilin Province as an example, using daily precipitation data from 1960-2014. The threshold of extreme precipitation events is defined by the multifractal detrended fluctuation analysis (MF-DFA) method. Extreme precipitation (EP), extreme precipitation ratio (EPR) and intensity of extreme precipitation (EPI) are selected as the extreme precipitation indicators, and the Kolmogorov-Smirnov (K-S) test is employed to determine the optimal probability distribution function for each indicator. On this basis, a nonparametric copula estimation method and the Akaike Information Criterion (AIC) are adopted to determine the bivariate copula function. Finally, we analyze the single-variable extreme value characteristics and the bivariate joint probability distribution of the extreme precipitation events. The results show that the threshold of extreme precipitation events in the semi-arid areas is far lower than that in the subhumid areas. The extreme precipitation frequency shows a significant decline while the extreme precipitation intensity shows a growing trend; there are significant spatiotemporal differences in extreme precipitation events. The joint return period becomes shorter from west to east. The spatial distribution of the co-occurrence return period shows the opposite trend, and the co-occurrence return period is longer than the joint return period.

  6. Three-Dimensional Mapping of Soil Organic Carbon by Combining Kriging Method with Profile Depth Function.

    PubMed

    Chen, Chong; Hu, Kelin; Li, Hong; Yun, Anping; Li, Baoguo

    2015-01-01

    Understanding spatial variation of soil organic carbon (SOC) in three-dimensional direction is helpful for land use management. Due to the effect of profile depths and soil texture on vertical distribution of SOC, the stationary assumption for SOC cannot be met in the vertical direction. Therefore the three-dimensional (3D) ordinary kriging technique cannot be directly used to map the distribution of SOC at a regional scale. The objectives of this study were to map the 3D distribution of SOC at a regional scale by combining kriging method with the profile depth function of SOC (KPDF), and to explore the effects of soil texture and land use type on vertical distribution of SOC in a fluvial plain. A total of 605 samples were collected from 121 soil profiles (0.0 to 1.0 m, 0.20 m increment) in Quzhou County, China and SOC contents were determined for each soil sample. The KPDF method was used to obtain the 3D map of SOC at the county scale. The results showed that the exponential equation well described the vertical distribution of mean values of the SOC contents. The coefficients of determination, root mean squared error and mean prediction error between the measured and the predicted SOC contents were 0.52, 1.82 and -0.24 g kg(-1) respectively, suggesting that the KPDF method could be used to produce a 3D map of SOC content. The surface SOC contents were high in the mid-west and south regions, and low values lay in the southeast corner. The SOC contents showed significant positive correlations between the five different depths and the correlations of SOC contents were larger in adjacent layers than in non-adjacent layers. Soil texture and land use type had significant effects on the spatial distribution of SOC. The influence of land use type was more important than that of soil texture in the surface soil, and soil texture played a more important role in influencing the SOC levels for 0.2-0.4 m layer.
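
    A hedged sketch of the profile-depth-function step: an exponential decay of mean SOC content with depth is fitted before the residuals are interpolated by kriging. The depth and SOC numbers are invented for illustration, and the particular form SOC(z) = a*exp(-b*z) + c is only one plausible reading of the "exponential equation" reported above.

```python
# Hedged sketch: fit an exponential profile depth function to layer-mean SOC,
# then remove the trend so residuals are closer to stationary before kriging.
import numpy as np
from scipy.optimize import curve_fit

depth_mid = np.array([0.1, 0.3, 0.5, 0.7, 0.9])   # layer midpoints, m
soc_mean = np.array([9.8, 6.1, 4.3, 3.5, 3.1])    # mean SOC per layer, g/kg (synthetic)

def soc_profile(z, a, b, c):
    return a * np.exp(-b * z) + c

popt, _ = curve_fit(soc_profile, depth_mid, soc_mean, p0=(8.0, 3.0, 3.0))
a, b, c = popt
print(f"SOC(z) ~ {a:.2f}*exp(-{b:.2f}*z) + {c:.2f} g/kg")

# Depth trend removed; the residuals would be interpolated with ordinary kriging.
residuals = soc_mean - soc_profile(depth_mid, *popt)
print("residuals (g/kg):", np.round(residuals, 2))
```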

  7. Three-Dimensional Mapping of Soil Organic Carbon by Combining Kriging Method with Profile Depth Function

    PubMed Central

    Chen, Chong; Hu, Kelin; Li, Hong; Yun, Anping; Li, Baoguo

    2015-01-01

    Understanding spatial variation of soil organic carbon (SOC) in three-dimensional direction is helpful for land use management. Due to the effect of profile depths and soil texture on vertical distribution of SOC, the stationary assumption for SOC cannot be met in the vertical direction. Therefore the three-dimensional (3D) ordinary kriging technique cannot be directly used to map the distribution of SOC at a regional scale. The objectives of this study were to map the 3D distribution of SOC at a regional scale by combining kriging method with the profile depth function of SOC (KPDF), and to explore the effects of soil texture and land use type on vertical distribution of SOC in a fluvial plain. A total of 605 samples were collected from 121 soil profiles (0.0 to 1.0 m, 0.20 m increment) in Quzhou County, China and SOC contents were determined for each soil sample. The KPDF method was used to obtain the 3D map of SOC at the county scale. The results showed that the exponential equation well described the vertical distribution of mean values of the SOC contents. The coefficients of determination, root mean squared error and mean prediction error between the measured and the predicted SOC contents were 0.52, 1.82 and -0.24 g kg-1 respectively, suggesting that the KPDF method could be used to produce a 3D map of SOC content. The surface SOC contents were high in the mid-west and south regions, and low values lay in the southeast corner. The SOC contents showed significant positive correlations between the five different depths and the correlations of SOC contents were larger in adjacent layers than in non-adjacent layers. Soil texture and land use type had significant effects on the spatial distribution of SOC. The influence of land use type was more important than that of soil texture in the surface soil, and soil texture played a more important role in influencing the SOC levels for 0.2-0.4 m layer. PMID:26047012

  8. Entanglement Properties and Quantum Phases for a Fermionic Disordered One-Dimensional Wire with Attractive Interactions.

    PubMed

    Berkovits, Richard

    2015-11-13

    A fermionic disordered one-dimensional wire in the presence of attractive interactions is known to have two distinct phases, localized and superconducting, depending on the strengths of the interaction and the disorder. The localized region may also exhibit metallic behavior if the system size is shorter than the localization length. Here we show that the superconducting phase has a distribution of the entanglement entropy distinct from that of the metallic regime. The entanglement entropy distribution is strongly asymmetric, following a Lévy α-stable distribution (compared to the Gaussian metallic distribution), as is also seen for the second Rényi entropy distribution. Thus, entanglement properties may reveal features which cannot be detected by other methods.

  9. Intelligent decision support algorithm for distribution system restoration.

    PubMed

    Singh, Reetu; Mehfuz, Shabana; Kumar, Parmod

    2016-01-01

    The distribution system is the means of revenue for an electric utility. It needs to be restored as quickly as possible if any feeder or the complete system is tripped out due to a fault or any other cause. Further, uncertainty in the loads results in variations in the distribution network's parameters. Thus, an intelligent algorithm incorporating a hybrid fuzzy-grey relation, which can take the uncertainties into account and compare sequences, is discussed for analysing and restoring the distribution system. Simulation studies are carried out to show the utility of the method by ranking restoration plans for a typical distribution system. The algorithm also meets smart grid requirements in terms of an automated restoration plan for a partial or full blackout of the network.

  10. Variance stabilization and normalization for one-color microarray data using a data-driven multiscale approach.

    PubMed

    Motakis, E S; Nason, G P; Fryzlewicz, P; Rutter, G A

    2006-10-15

    Many standard statistical techniques are effective on data that are normally distributed with constant variance. Microarray data typically violate these assumptions since they come from non-Gaussian distributions with a non-trivial mean-variance relationship. Several methods have been proposed that transform microarray data to stabilize variance and draw its distribution towards the Gaussian. Some methods, such as log or generalized log, rely on an underlying model for the data. Others, such as the spread-versus-level plot, do not. We propose an alternative data-driven multiscale approach, called the Data-Driven Haar-Fisz for microarrays (DDHFm) with replicates. DDHFm has the advantage of being 'distribution-free' in the sense that no parametric model for the underlying microarray data is required to be specified or estimated; hence, DDHFm can be applied very generally, not just to microarray data. DDHFm achieves very good variance stabilization of microarray data with replicates and produces transformed intensities that are approximately normally distributed. Simulation studies show that it performs better than other existing methods. Application of DDHFm to real one-color cDNA data validates these results. The R package of the Data-Driven Haar-Fisz transform (DDHFm) for microarrays is available in Bioconductor and CRAN.

  11. Used-habitat calibration plots: A new procedure for validating species distribution, resource selection, and step-selection models

    USGS Publications Warehouse

    Fieberg, John R.; Forester, James D.; Street, Garrett M.; Johnson, Douglas H.; ArchMiller, Althea A.; Matthiopoulos, Jason

    2018-01-01

    “Species distribution modeling” was recently ranked as one of the top five “research fronts” in ecology and the environmental sciences by ISI's Essential Science Indicators (Renner and Warton 2013), reflecting the importance of predicting how species distributions will respond to anthropogenic change. Unfortunately, species distribution models (SDMs) often perform poorly when applied to novel environments. Compounding this problem is the shortage of methods for evaluating SDMs (hence, we may be getting our predictions wrong without even knowing it). Traditional methods for validating SDMs quantify a model's ability to classify locations as used or unused. Instead, we propose to focus on how well SDMs can predict the characteristics of used locations. This subtle shift in viewpoint leads to a more natural and informative evaluation and validation of models across the entire spectrum of SDMs. Through a series of examples, we show how simple graphical methods can help with three fundamental challenges of habitat modeling: identifying missing covariates, non-linearity, and multicollinearity. Identifying habitat characteristics that are not well-predicted by the model can provide insights into variables affecting the distribution of species, suggest appropriate model modifications, and ultimately improve the reliability and generality of conservation and management recommendations.

  12. Application of portable gas detector in point and scanning method to estimate spatial distribution of methane emission in landfill.

    PubMed

    Lando, Asiyanthi Tabran; Nakayama, Hirofumi; Shimaoka, Takayuki

    2017-01-01

    Methane from landfills contributes to global warming and can pose an explosion hazard. To minimize these effects, emissions must be monitored. This study proposed the application of a portable gas detector (PGD) in point and scanning measurements to estimate the spatial distribution of methane emissions in landfills. The aims of this study were to discover the advantages and disadvantages of the point and scanning methods in measuring methane concentrations, determine the spatial distribution of methane emissions, determine the correlation between ambient methane concentration and methane flux, and estimate methane flux and emissions in landfills. This study was carried out in the Tamangapa landfill, Makassar city, Indonesia. Measurement areas were divided into a basic area and an expanded area. In the point method, the PGD was held one meter above the landfill surface, whereas the scanning method used a PGD with a data logger mounted on a wire drawn between two poles. The point method was efficient in time, needing only one person and eight minutes to measure a 400 m² area, whereas the scanning method could capture many hot-spot locations and needed 20 min. The results from the basic area showed that ambient methane concentration and flux had a significant (p < 0.01) positive correlation with R² = 0.7109 and y = 0.1544x. This correlation equation was used to describe the spatial distribution of methane emissions in the expanded area by using the kriging method. The average estimated flux from the scanning method was 71.2 g m⁻² d⁻¹, higher than the 38.3 g m⁻² d⁻¹ from the point method. Further, the scanning method could capture lower and higher values, which could be useful to evaluate and estimate the possible effects of uncontrolled emissions in a landfill. Copyright © 2016 Elsevier Ltd. All rights reserved.
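
    An illustrative sketch of the concentration-to-flux calibration step: a regression through the origin (y = kx) between ambient CH4 concentration and measured flux, with the fitted slope then applied to scanned concentrations. All numbers are synthetic; the study reports k ≈ 0.1544 and R² ≈ 0.71 for its own data.

```python
# Illustrative sketch: calibrate flux against ambient concentration with a
# regression through the origin (y = k*x), then apply the slope to scanned
# concentrations.  All numbers are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(2)
conc = rng.uniform(5.0, 400.0, size=40)              # ambient CH4 concentration (a.u.)
flux = 0.15 * conc + rng.normal(0.0, 5.0, size=40)   # chamber flux, g m^-2 d^-1

# Least-squares slope for a line forced through the origin: k = sum(xy) / sum(x^2)
k = np.sum(conc * flux) / np.sum(conc ** 2)
ss_res = np.sum((flux - k * conc) ** 2)
ss_tot = np.sum((flux - flux.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"fitted slope k = {k:.4f}, R^2 = {r2:.3f}")

# Convert scanning-method concentrations to flux estimates; these point values
# would then be interpolated spatially (e.g. by kriging).
scan_conc = np.array([12.0, 85.0, 230.0])
print("estimated fluxes:", np.round(k * scan_conc, 1))
```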

  13. An Orientation Measurement Method Based on Hall-effect Sensors for Permanent Magnet Spherical Actuators with 3D Magnet Array

    PubMed Central

    Yan, Liang; Zhu, Bo; Jiao, Zongxia; Chen, Chin-Yin; Chen, I-Ming

    2014-01-01

    An orientation measurement method based on Hall-effect sensors is proposed for permanent magnet (PM) spherical actuators with three-dimensional (3D) magnet array. As there is no contact between the measurement system and the rotor, this method could effectively avoid friction torque and additional inertial moment existing in conventional approaches. Curved surface fitting method based on exponential approximation is proposed to formulate the magnetic field distribution in 3D space. The comparison with conventional modeling method shows that it helps to improve the model accuracy. The Hall-effect sensors are distributed around the rotor with PM poles to detect the flux density at different points, and thus the rotor orientation can be computed from the measured results and analytical models. Experiments have been conducted on the developed research prototype of the spherical actuator to validate the accuracy of the analytical equations relating the rotor orientation and the value of magnetic flux density. The experimental results show that the proposed method can measure the rotor orientation precisely, and the measurement accuracy could be improved by the novel 3D magnet array. The study result could be used for real-time motion control of PM spherical actuators. PMID:25342000

  14. Fast determination of the spatially distributed photon fluence for light dose evaluation of PDT

    NASA Astrophysics Data System (ADS)

    Zhao, Kuanxin; Chen, Weiting; Li, Tongxin; Yan, Panpan; Qin, Zhuanping; Zhao, Huijuan

    2018-02-01

    Photodynamic therapy (PDT) has shown the advantages of noninvasiveness and high efficiency in the treatment of early-stage skin cancer. Rapid and accurate determination of the spatially distributed photon fluence in turbid tissue is essential for the dosimetry evaluation of PDT. It is generally known that photon fluence can be accurately obtained by Monte Carlo (MC) methods, but they are time-consuming, especially for complex light source modes or online real-time dosimetry evaluation of PDT. In this work, a method to rapidly calculate the spatially distributed photon fluence in a turbid medium is proposed by implementing classical perturbation and iteration theory on mesh Monte Carlo (MMC). In the proposed method, the photon fluence is obtained by superposing a perturbed and iterative solution, caused by the defects in the turbid medium, onto an unperturbed solution for the background medium, so that repetitive MMC simulations can be avoided. To validate the method, simulations were carried out on a non-melanoma skin cancer model. The simulation results show that the photon fluence can be obtained quickly and correctly by the perturbation algorithm.

  15. Dictionaries and distributions: Combining expert knowledge and large scale textual data content analysis : Distributed dictionary representation.

    PubMed

    Garten, Justin; Hoover, Joe; Johnson, Kate M; Boghrati, Reihane; Iskiwitch, Carol; Dehghani, Morteza

    2018-02-01

    Theory-driven text analysis has made extensive use of psychological concept dictionaries, leading to a wide range of important results. These dictionaries have generally been applied through word count methods which have proven to be both simple and effective. In this paper, we introduce Distributed Dictionary Representations (DDR), a method that applies psychological dictionaries using semantic similarity rather than word counts. This allows for the measurement of the similarity between dictionaries and spans of text ranging from complete documents to individual words. We show how DDR enables dictionary authors to place greater emphasis on construct validity without sacrificing linguistic coverage. We further demonstrate the benefits of DDR on two real-world tasks and finally conduct an extensive study of the interaction between dictionary size and task performance. These studies allow us to examine how DDR and word count methods complement one another as tools for applying concept dictionaries and where each is best applied. Finally, we provide references to tools and resources to make this method both available and accessible to a broad psychological audience.
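
    A minimal sketch of the DDR idea under stated assumptions: a concept dictionary and a span of text are each represented by the average of their word vectors, and the DDR score is the cosine similarity between the two averages. The toy three-dimensional embeddings are placeholders for real pretrained vectors.

```python
# Minimal DDR-style sketch: score a span of text against a concept dictionary
# by the cosine similarity of averaged word vectors.
import numpy as np

embeddings = {                                   # hypothetical word -> vector lookup
    "happy":   np.array([0.9, 0.1, 0.0]),
    "joy":     np.array([0.8, 0.2, 0.1]),
    "delight": np.array([0.7, 0.3, 0.0]),
    "rain":    np.array([0.0, 0.9, 0.4]),
    "walked":  np.array([0.1, 0.5, 0.8]),
    "smiled":  np.array([0.6, 0.2, 0.2]),
}

def avg_vector(words):
    vecs = [embeddings[w] for w in words if w in embeddings]
    return np.mean(vecs, axis=0)

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

dictionary = ["happy", "joy", "delight"]         # small concept dictionary
document = ["rain", "walked", "smiled"]          # text span to score

print(f"DDR similarity = {cosine(avg_vector(dictionary), avg_vector(document)):.3f}")
```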

  16. A strategy to load balancing for non-connectivity MapReduce job

    NASA Astrophysics Data System (ADS)

    Zhou, Huaping; Liu, Guangzong; Gui, Haixia

    2017-09-01

    MapReduce has been widely used as a distributed programming model for large-scale and complex datasets. The original hash partitioning function in MapReduce often results in data skew when the data distribution is uneven. To solve this imbalance in data partitioning, we propose a strategy that changes the remaining partitioning index when the data are skewed. In the Map phase, we count the amount of data that will be distributed to each reducer; the JobTracker then monitors the global partitioning information and dynamically modifies the original partitioning function according to the data skew model, so that in the next partitioning step the Partitioner can redirect the skew-causing partitions to reducers with lighter loads and eventually balance the load of each node. Finally, we experimentally compare our method with existing methods on both synthetic and real datasets; the results show that our strategy handles data skew with better stability and efficiency than the hash method and the sampling method for non-connectivity MapReduce tasks.
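
    A hedged, standalone sketch of the repartitioning idea: once map-side counts are known, keys whose default hash partition is overloaded are redirected to the reducer with the smallest current load. This is not the Hadoop API or the authors' implementation; the names, threshold, and policy are invented for illustration.

```python
# Standalone toy sketch of skew-aware repartitioning: given map-side key counts,
# override the default hash assignment for keys on overloaded reducers.
def build_partition_table(key_counts, n_reducers, skew_factor=1.5):
    """Return (key -> reducer index, per-reducer loads) after rebalancing."""
    table = {k: hash(k) % n_reducers for k in key_counts}   # default hash partition
    loads = [0] * n_reducers
    for k, c in key_counts.items():
        loads[table[k]] += c
    avg = sum(key_counts.values()) / n_reducers

    # Move the heaviest keys off reducers whose load exceeds skew_factor * average.
    for k, c in sorted(key_counts.items(), key=lambda kv: -kv[1]):
        src = table[k]
        if loads[src] > skew_factor * avg:
            dst = min(range(n_reducers), key=lambda i: loads[i])
            if dst != src:
                loads[src] -= c
                loads[dst] += c
                table[k] = dst
    return table, loads

counts = {"a": 9000, "b": 200, "c": 150, "d": 120, "e": 100, "f": 90}
table, loads = build_partition_table(counts, n_reducers=3)
print("assignment:", table)
print("reducer loads:", loads)
```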

  17. Minimization for conditional simulation: Relationship to optimal transport

    NASA Astrophysics Data System (ADS)

    Oliver, Dean S.

    2014-05-01

    In this paper, we consider the problem of generating independent samples from a conditional distribution when independent samples from the prior distribution are available. Although there are exact methods for sampling from the posterior (e.g. Markov chain Monte Carlo or acceptance/rejection), these methods tend to be computationally demanding when evaluation of the likelihood function is expensive, as it is for most geoscience applications. As an alternative, in this paper we discuss deterministic mappings of variables distributed according to the prior to variables distributed according to the posterior. Although any deterministic mappings might be equally useful, we will focus our discussion on a class of algorithms that obtain implicit mappings by minimization of a cost function that includes measures of data mismatch and model variable mismatch. Algorithms of this type include quasi-linear estimation, randomized maximum likelihood, perturbed observation ensemble Kalman filter, and ensemble of perturbed analyses (4D-Var). When the prior pdf is Gaussian and the observation operators are linear, we show that these minimization-based simulation methods solve an optimal transport problem with a nonstandard cost function. When the observation operators are nonlinear, however, the mapping of variables from the prior to the posterior obtained from those methods is only approximate. Errors arise from neglect of the Jacobian determinant of the transformation and from the possibility of discontinuous mappings.

  18. Calculation of the confidence intervals for transformation parameters in the registration of medical images

    PubMed Central

    Bansal, Ravi; Staib, Lawrence H.; Laine, Andrew F.; Xu, Dongrong; Liu, Jun; Posecion, Lainie F.; Peterson, Bradley S.

    2010-01-01

    Images from different individuals typically cannot be registered precisely because anatomical features within the images differ across the people imaged and because the current methods for image registration have inherent technological limitations that interfere with perfect registration. Quantifying the inevitable error in image registration is therefore of crucial importance in assessing the effects that image misregistration may have on subsequent analyses in an imaging study. We have developed a mathematical framework for quantifying errors in registration by computing the confidence intervals of the estimated parameters (3 translations, 3 rotations, and 1 global scale) for the similarity transformation. The presence of noise in images and the variability in anatomy across individuals ensures that estimated registration parameters are always random variables. We assume a functional relation among intensities across voxels in the images, and we use the theory of nonlinear, least-squares estimation to show that the parameters are multivariate Gaussian distributed. We then use the covariance matrix of this distribution to compute the confidence intervals of the transformation parameters. These confidence intervals provide a quantitative assessment of the registration error across the images. Because transformation parameters are nonlinearly related to the coordinates of landmark points in the brain, we subsequently show that the coordinates of those landmark points are also multivariate Gaussian distributed. Using these distributions, we then compute the confidence intervals of the coordinates for landmark points in the image. Each of these confidence intervals in turn provides a quantitative assessment of the registration error at a particular landmark point. Because our method is computationally intensive, however, its current implementation is limited to assessing the error of the parameters in the similarity transformation across images. We assessed the performance of our method in computing the error in estimated similarity parameters by applying that method to real world dataset. Our results showed that the size of the confidence intervals computed using our method decreased – i.e. our confidence in the registration of images from different individuals increased – for increasing amounts of blur in the images. Moreover, the size of the confidence intervals increased for increasing amounts of noise, misregistration, and differing anatomy. Thus, our method precisely quantified confidence in the registration of images that contain varying amounts of misregistration and varying anatomy across individuals. PMID:19138877
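
    An illustrative sketch of the covariance-based confidence-interval step: after a nonlinear least-squares fit, the parameter covariance is approximately sigma² (JᵀJ)⁻¹ and Gaussian 95% intervals follow from its diagonal. A simple one-dimensional curve fit stands in here for the seven-parameter similarity transformation treated in the paper.

```python
# Illustrative sketch: 95% confidence intervals from the covariance of a
# nonlinear least-squares fit, cov ~ sigma^2 * (J^T J)^-1.
import numpy as np
from scipy.optimize import curve_fit

def model(t, a, b, c):
    return a * np.exp(-b * t) + c

rng = np.random.default_rng(9)
t = np.linspace(0.0, 4.0, 60)
y = model(t, 2.5, 1.3, 0.5) + rng.normal(0.0, 0.05, t.size)

popt, pcov = curve_fit(model, t, y, p0=(1.0, 1.0, 0.0))
se = np.sqrt(np.diag(pcov))                      # standard errors of the parameters
for name, p, s in zip(("a", "b", "c"), popt, se):
    print(f"{name} = {p:.3f}   95% CI: [{p - 1.96 * s:.3f}, {p + 1.96 * s:.3f}]")
```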

  19. PULSAR SIGNAL DENOISING METHOD BASED ON LAPLACE DISTRIBUTION IN NO-SUBSAMPLING WAVELET PACKET DOMAIN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wenbo, Wang; Yanchao, Zhao; Xiangli, Wang

    2016-11-01

    In order to improve the denoising of the pulsar signal, a new denoising method is proposed in the no-subsampling wavelet packet domain based on a local Laplace prior model. First, we characterize the wavelet packet coefficient distribution of the noise-free pulsar signal and construct a Laplace probability density function model for the true signal's wavelet packet coefficients. Then, we estimate the denoised wavelet packet coefficients from the noisy pulsar wavelet coefficients based on the maximum a posteriori criterion. Finally, we obtain the denoised pulsar signal through no-subsampling wavelet packet reconstruction of the estimated coefficients. The experimental results show that the proposed method performs better when calculating the pulsar time of arrival than the translation-invariant wavelet denoising method.
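
    An illustrative sketch of the core MAP shrinkage rule: with Gaussian noise of standard deviation sigma and a zero-mean Laplace prior of scale b on the true coefficient, the maximum a posteriori estimate is soft thresholding at sigma²/b. The wavelet packet transform itself is omitted, and the synthetic arrays simply stand in for one subband of coefficients.

```python
# Illustrative sketch of the MAP shrinkage rule: Gaussian noise (std sigma) plus a
# zero-mean Laplace prior (scale b) gives soft thresholding at t = sigma^2 / b.
import numpy as np

def laplace_map_shrink(coeffs, sigma, b):
    """MAP estimate of the clean coefficients (soft thresholding)."""
    t = sigma ** 2 / b
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

rng = np.random.default_rng(3)
clean = rng.laplace(scale=0.5, size=4096)          # "true" subband coefficients
noisy = clean + rng.normal(scale=1.0, size=4096)   # additive Gaussian noise

denoised = laplace_map_shrink(noisy, sigma=1.0, b=0.5)  # prior scale assumed known here
print("input  RMSE:", float(np.sqrt(np.mean((noisy - clean) ** 2))))
print("output RMSE:", float(np.sqrt(np.mean((denoised - clean) ** 2))))
```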

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosales-Zarate, Laura E. C.; Drummond, P. D.

    We calculate the quantum Renyi entropy in a phase-space representation for either fermions or bosons. This can also be used to calculate purity and fidelity, or the entanglement between two systems. We show that it is possible to calculate the entropy from sampled phase-space distributions in normally ordered representations, although this is not possible for all quantum states. We give an example of the use of this method in an exactly soluble thermal case. The quantum entropy cannot be calculated at all using sampling methods in classical symmetric (Wigner) or antinormally ordered (Husimi) phase spaces, due to inner-product divergences. The preferred method is to use generalized Gaussian phase-space methods, which utilize a distribution over stochastic Green's functions. We illustrate this approach by calculating the reduced entropy and entanglement of bosonic or fermionic modes coupled to a time-evolving, non-Markovian reservoir.

  1. Simulating the component counts of combinatorial structures.

    PubMed

    Arratia, Richard; Barbour, A D; Ewens, W J; Tavaré, Simon

    2018-02-09

    This article describes and compares methods for simulating the component counts of random logarithmic combinatorial structures such as permutations and mappings. We exploit the Feller coupling for simulating permutations to provide a very fast method for simulating logarithmic assemblies more generally. For logarithmic multisets and selections, this approach is replaced by an acceptance/rejection method based on a particular conditioning relationship that represents the distribution of the combinatorial structure as that of independent random variables conditioned on a weighted sum. We show how to improve its acceptance rate. We illustrate the method by estimating the probability that a random mapping has no repeated component sizes, and establish the asymptotic distribution of the difference between the number of components and the number of distinct component sizes for a very general class of logarithmic structures. Copyright © 2018. Published by Elsevier Inc.
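
    A hedged sketch of the Feller-coupling simulation for permutation cycle counts: independent Bernoulli(1/i) indicators are generated and the spacings between successive 1's (with a closing 1 appended at position n+1) give the cycle sizes. This follows the standard construction; details of the authors' implementation may differ.

```python
# Hedged sketch of the Feller coupling: cycle counts of a uniform random
# permutation of n from independent Bernoulli(1/i) indicators, read off as
# spacings between successive 1's (with a closing 1 at position n+1).
import numpy as np
from collections import Counter

def feller_cycle_counts(n, rng):
    """Sample the multiset of cycle sizes of a uniform random permutation of n."""
    xi = np.zeros(n + 2, dtype=int)        # positions 1..n+1 are used
    xi[1] = 1                              # P(xi_1 = 1) = 1
    for i in range(2, n + 1):
        xi[i] = rng.random() < 1.0 / i
    xi[n + 1] = 1                          # sentinel closing the last spacing
    ones = np.flatnonzero(xi)
    sizes = np.diff(ones)                  # spacing lengths = cycle sizes
    return Counter(sizes.tolist())

rng = np.random.default_rng(4)
counts = feller_cycle_counts(100, rng)
print("cycle size -> multiplicity:", dict(sorted(counts.items())))
print("total size (always n):", sum(j * c for j, c in counts.items()))
```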

  2. Measurement and calibration of differential Mueller matrix of distributed targets

    NASA Technical Reports Server (NTRS)

    Sarabandi, Kamal; Oh, Yisok; Ulaby, Fawwaz T.

    1992-01-01

    A rigorous method for calibrating polarimetric backscatter measurements of distributed targets is presented. By characterizing the radar distortions over the entire main lobe of the antenna, the differential Mueller matrix is derived from the measured scattering matrices with a high degree of accuracy. It is shown that the radar distortions can be determined by measuring the polarimetric response of a metallic sphere over the main lobe of the antenna. Comparison of the results obtained with the new algorithm against those derived from the old calibration method shows that the discrepancy between the two methods is less than 1 dB for the backscattering coefficients. The discrepancy is more drastic for the phase-difference statistics, indicating that removal of the radar distortions from the cross products of the scattering matrix elements cannot be accomplished with the traditional calibration methods.

  3. Comparisons of non-Gaussian statistical models in DNA methylation analysis.

    PubMed

    Ma, Zhanyu; Teschendorff, Andrew E; Yu, Hong; Taghia, Jalil; Guo, Jun

    2014-06-16

    As a key regulatory mechanism of gene expression, DNA methylation patterns are widely altered in many complex genetic diseases, including cancer. DNA methylation is naturally quantified by bounded support data; therefore, it is non-Gaussian distributed. In order to capture such properties, we introduce some non-Gaussian statistical models to perform dimension reduction on DNA methylation data. Afterwards, non-Gaussian statistical model-based unsupervised clustering strategies are applied to cluster the data. Comparisons and analysis of different dimension reduction strategies and unsupervised clustering methods are presented. Experimental results show that the non-Gaussian statistical model-based methods are superior to the conventional Gaussian distribution-based method. They are meaningful tools for DNA methylation analysis. Moreover, among several non-Gaussian methods, the one that captures the bounded nature of DNA methylation data reveals the best clustering performance.

  4. Comparisons of Non-Gaussian Statistical Models in DNA Methylation Analysis

    PubMed Central

    Ma, Zhanyu; Teschendorff, Andrew E.; Yu, Hong; Taghia, Jalil; Guo, Jun

    2014-01-01

    As a key regulatory mechanism of gene expression, DNA methylation patterns are widely altered in many complex genetic diseases, including cancer. DNA methylation is naturally quantified by bounded support data; therefore, it is non-Gaussian distributed. In order to capture such properties, we introduce some non-Gaussian statistical models to perform dimension reduction on DNA methylation data. Afterwards, non-Gaussian statistical model-based unsupervised clustering strategies are applied to cluster the data. Comparisons and analysis of different dimension reduction strategies and unsupervised clustering methods are presented. Experimental results show that the non-Gaussian statistical model-based methods are superior to the conventional Gaussian distribution-based method. They are meaningful tools for DNA methylation analysis. Moreover, among several non-Gaussian methods, the one that captures the bounded nature of DNA methylation data reveals the best clustering performance. PMID:24937687

  5. Precise Hypocenter Determination around Palu Koro Fault: a Preliminary Results

    NASA Astrophysics Data System (ADS)

    Fawzy Ismullah, M. Muhammad; Nugraha, Andri Dian; Ramdhan, Mohamad; Wandono

    2017-04-01

    The Sulawesi area lies in a complex tectonic setting. High seismic activity in central Sulawesi is related to the Palu Koro fault (PKF). In this study, we determined precise hypocenters around the PKF by applying the double-difference method. We aim to investigate the seismicity rate, the geometry of the fault, and the distribution of focal depths around the PKF. We first re-picked the P- and S-wave arrival times of the PKF events to determine initial hypocenter locations using the Hypoellipse method with an updated 1-D seismic velocity model. We then relocated the events using the double-difference method. Our preliminary results show that the relocated events cluster around the PKF and have smaller residual times than the initial locations. We will further refine the hypocenter locations by updating the arrival times with a waveform cross-correlation method as input for the double-difference relocation.

  6. Estimation of channel parameters and background irradiance for free-space optical link.

    PubMed

    Khatoon, Afsana; Cowley, William G; Letzepis, Nick; Giggenbach, Dirk

    2013-05-10

    Free-space optical communication can experience severe fading due to optical scintillation in long-range links. Channel estimation is also corrupted by background and electrical noise. Accurate estimation of the channel parameters and scintillation index (SI) depends on complete removal of the background irradiance. In this paper, we propose three different methods, the minimum-value (MV), mean-power (MP), and maximum-likelihood (ML) based methods, to remove the background irradiance from channel samples. The MV and MP methods do not require knowledge of the scintillation distribution. While the ML-based method assumes gamma-gamma scintillation, it can be easily modified to accommodate other distributions. Each estimator's performance is evaluated from low- to high-SI conditions using simulation data as well as experimental measurements. The MV and MP methods have much lower complexity than the ML-based method. However, the ML-based method shows better SI and background-irradiance estimation performance.
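
    A minimal sketch, under stated assumptions, of a minimum-value (MV) style background removal followed by scintillation-index estimation: the smallest received sample is taken as the background estimate, subtracted, and the SI is computed from the corrected samples. The gamma-gamma-like synthetic data and the exact form of the MV estimator are assumptions for illustration.

```python
# Hedged sketch: minimum-value (MV) style background removal, then the
# scintillation index from the corrected samples.
import numpy as np

rng = np.random.default_rng(5)
n = 50_000

# Synthetic fading: product of two unit-mean gamma variates (gamma-gamma-like),
# plus a constant background irradiance and a little receiver noise.
signal = rng.gamma(4.0, 1.0 / 4.0, n) * rng.gamma(3.0, 1.0 / 3.0, n)
background = 0.4
samples = signal + background + rng.normal(0.0, 0.01, n)

b_hat = samples.min()                          # MV estimate of the background
corrected = samples - b_hat

si = corrected.var() / corrected.mean() ** 2   # scintillation index
print(f"background estimate: {b_hat:.3f} (true value 0.4)")
print(f"scintillation index: {si:.3f}")
```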

  7. Robust signal recovery using the prolate spherical wave functions and maximum correntropy criterion

    NASA Astrophysics Data System (ADS)

    Zou, Cuiming; Kou, Kit Ian

    2018-05-01

    Signal recovery is one of the most important problems in signal processing. This paper proposes a novel signal recovery method based on prolate spherical wave functions (PSWFs). PSWFs are a family of special functions that have been shown to perform well in signal recovery. However, existing PSWF-based recovery methods use the mean square error (MSE) criterion, which relies on the assumption that the noise distribution is Gaussian. For non-Gaussian noise, such as impulsive noise or outliers, the MSE criterion is sensitive and may lead to large reconstruction errors. Unlike existing PSWF-based recovery methods, our proposed method employs the maximum correntropy criterion (MCC), which is independent of the noise distribution and reduces the impact of large, non-Gaussian noise. Experimental results on synthetic signals with various types of noise show that the proposed MCC-based signal recovery method is more robust against such noise than other existing methods.

  8. Small-Scale System for Evaluation of Stretch-Flangeability with Excellent Reliability

    NASA Astrophysics Data System (ADS)

    Yoon, Jae Ik; Jung, Jaimyun; Lee, Hak Hyeon; Kim, Hyoung Seop

    2018-02-01

    We propose a system for evaluating the stretch-flangeability of small-scale specimens based on the hole-expansion ratio (HER). The system has no size effect and shows excellent reproducibility, reliability, and economic efficiency. To verify the reliability and reproducibility of the proposed hole-expansion testing (HET) method, the deformation behavior of the conventional standard stretch-flangeability evaluation method was compared with the proposed method using finite-element method simulations. The distribution of shearing defects in the hole-edge region of the specimen, which has a significant influence on the HER, was investigated using scanning electron microscopy. The stretch-flangeability of several kinds of advanced high-strength steel determined using the conventional standard method was compared with that using the proposed small-scale HET method. It was verified that the deformation behavior, morphology and distribution of shearing defects, and stretch-flangeability results for the specimens were the same for the conventional standard method and the proposed small-scale stretch-flangeability evaluation system.

  9. Small-Scale System for Evaluation of Stretch-Flangeability with Excellent Reliability

    NASA Astrophysics Data System (ADS)

    Yoon, Jae Ik; Jung, Jaimyun; Lee, Hak Hyeon; Kim, Hyoung Seop

    2018-06-01

    We propose a system for evaluating the stretch-flangeability of small-scale specimens based on the hole-expansion ratio (HER). The system has no size effect and shows excellent reproducibility, reliability, and economic efficiency. To verify the reliability and reproducibility of the proposed hole-expansion testing (HET) method, the deformation behavior of the conventional standard stretch-flangeability evaluation method was compared with the proposed method using finite-element method simulations. The distribution of shearing defects in the hole-edge region of the specimen, which has a significant influence on the HER, was investigated using scanning electron microscopy. The stretch-flangeability of several kinds of advanced high-strength steel determined using the conventional standard method was compared with that using the proposed small-scale HET method. It was verified that the deformation behavior, morphology and distribution of shearing defects, and stretch-flangeability results for the specimens were the same for the conventional standard method and the proposed small-scale stretch-flangeability evaluation system.

  10. WE-AB-204-07: Spatiotemporal Distribution of the FDG PET Tracer in Solid Tumors: Contributions of Diffusion and Convection Mechanisms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soltani, M; Sefidgar, M; Bazmara, H

    2015-06-15

    Purpose: In this study, a mathematical model is utilized to simulate FDG distribution in tumor tissue. In contrast to conventional compartmental modeling, tracer distributions across space and time are directly linked together (i.e. moving beyond ordinary differential equations (ODEs) to utilizing partial differential equations (PDEs) coupling space and time). The diffusion and convection transport mechanisms are both incorporated to model tracer distribution. We aimed to investigate the contributions of these two mechanisms on FDG distribution for various tumor geometries obtained from PET/CT images. Methods: FDG transport was simulated via a spatiotemporal distribution model (SDM). The model is based on a 5K compartmental model. We model the fact that tracer concentration in the second compartment (extracellular space) is modulated via convection and diffusion. Data from n=45 patients with pancreatic tumors as imaged using clinical FDG PET/CT imaging were analyzed, and geometrical information from the tumors including size, shape, and aspect ratios were classified. Tumors with varying shapes and sizes were assessed in order to investigate the effects of convection and diffusion mechanisms on FDG transport. Numerical methods simulating interstitial flow and solute transport in tissue were utilized. Results: We have shown the convection mechanism to depend on the shape and size of tumors whereas diffusion mechanism is seen to exhibit low dependency on shape and size. Results show that concentration distribution of FDG is relatively similar for the considered tumors; and that the diffusion mechanism of FDG transport significantly dominates the convection mechanism. The Peclet number which shows the ratio of convection to diffusion rates was shown to be of the order of 10⁻³ for all considered tumors. Conclusion: We have demonstrated that even though convection leads to varying tracer distribution profiles depending on tumor shape and size, the domination of the diffusion phenomenon prevents these factors from modulating FDG distribution.

  11. TU-C-17A-08: Improving IMRT Planning and Reducing Inter-Planner Variability Using the Stochastic Frontier Method: Validation Based On Clinical and Simulated Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gagne, MC; Archambault, L; CHU de Quebec, Quebec, Quebec

    2014-06-15

    Purpose: Intensity modulated radiation therapy always requires compromises between PTV coverage and organs at risk (OAR) sparing. We previously developed metrics that correlate doses to OAR to specific patients' morphology using stochastic frontier analysis (SFA). Here, we aim to examine the validity of this approach using a large set of realistically simulated dosimetric and geometric data. Methods: SFA describes a set of treatment plans as an asymmetric distribution with respect to a frontier defining optimal plans. Eighty head and neck IMRT plans were used to establish a metric predicting the mean dose to parotids as a function of simple geometric parameters. A database of 140 parotids was used as a basis distribution to simulate physically plausible data of geometry and dose. Distributions comprising between 20 and 5000 organs were simulated and the SFA was applied to obtain new frontiers, which were compared to the original frontier. Results: It was possible to simulate distributions consistent with the original dataset. Below 160 organs, the SFA could not always describe distributions as asymmetric: a few cases showed a Gaussian or half-Gaussian distribution. In order to converge to a stable solution, the number of organs in a distribution must ideally be above 100, but in many cases stable parameters could be achieved with as few as 60 samples of organ data. The mean RMS error of the new frontiers was significantly reduced when additional organs were used. Conclusion: The number of organs in a distribution was shown to have an impact on the effectiveness of the model. It is always possible to obtain a frontier, but if the number of organs in the distribution is small (< 160), it may not represent the lowest dose achievable. These results will be used to determine the number of cases necessary to adapt the model to other organs.

  12. Load sharing in distributed real-time systems with state-change broadcasts

    NASA Technical Reports Server (NTRS)

    Shin, Kang G.; Chang, Yi-Chieh

    1989-01-01

    A decentralized dynamic load-sharing (LS) method based on state-change broadcasts is proposed for a distributed real-time system. Whenever the state of a node changes from underloaded to fully loaded and vice versa, the node broadcasts this change to a set of nodes, called a buddy set, in the system. The performance of the method is evaluated with both analytic modeling and simulation. It is modeled first by an embedded Markov chain for which numerical solutions are derived. The model solutions are then used to calculate the distribution of queue lengths at the nodes and the probability of meeting task deadlines. The analytical results show that buddy sets of 10 nodes outperform those of less than 10 nodes, and the incremental benefit gained from increasing the buddy set size beyond 15 nodes is insignificant. These and other analytical results are verified by simulation. The proposed LS method is shown to meet task deadlines with a very high probability.

  13. Detecting Genetic Interactions for Quantitative Traits Using m-Spacing Entropy Measure

    PubMed Central

    Yee, Jaeyong; Kwon, Min-Seok; Park, Taesung; Park, Mira

    2015-01-01

    A number of statistical methods for detecting gene-gene interactions have been developed in genetic association studies with binary traits. However, many phenotype measures are intrinsically quantitative and categorizing continuous traits may not always be straightforward and meaningful. Association of gene-gene interactions with an observed distribution of such phenotypes needs to be investigated directly without categorization. Information gain based on entropy measure has previously been successful in identifying genetic associations with binary traits. We extend the usefulness of this information gain by proposing a nonparametric evaluation method of conditional entropy of a quantitative phenotype associated with a given genotype. Hence, the information gain can be obtained for any phenotype distribution. Because any functional form, such as Gaussian, is not assumed for the entire distribution of a trait or a given genotype, this method is expected to be robust enough to be applied to any phenotypic association data. Here, we show its use to successfully identify the main effect, as well as the genetic interactions, associated with a quantitative trait. PMID:26339620
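
    A hedged sketch of an m-spacing (Vasicek-type) entropy estimate for a continuous phenotype, the kind of nonparametric building block used above to form conditional entropies without assuming any distributional form. Boundary indices are clamped in the usual way; the choice m ≈ √n is an assumption.

```python
# Hedged sketch: Vasicek-type m-spacing estimate of differential entropy for a
# continuous phenotype; no parametric form is assumed.
import numpy as np

def m_spacing_entropy(x, m=None):
    """Nonparametric entropy estimate from order-statistic spacings."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    if m is None:
        m = max(1, int(round(np.sqrt(n))))
    lo = np.clip(np.arange(n) - m, 0, n - 1)
    hi = np.clip(np.arange(n) + m, 0, n - 1)
    spacings = np.maximum(x[hi] - x[lo], 1e-12)
    return float(np.mean(np.log(n * spacings / (2 * m))))

rng = np.random.default_rng(6)
sample = rng.normal(0.0, 1.0, 2000)
print("estimated entropy:", m_spacing_entropy(sample))
print("exact N(0,1) entropy:", 0.5 * np.log(2 * np.pi * np.e))   # ~1.419
```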

  14. Ladar imaging detection of salient map based on PWVD and Rényi entropy

    NASA Astrophysics Data System (ADS)

    Xu, Yuannan; Zhao, Yuan; Deng, Rong; Dong, Yanbing

    2013-10-01

    Spatial-frequency information of a given image can be extracted by associating the grey-level spatial data with one of the well-known spatial/spatial-frequency distributions. The Wigner-Ville distribution (WVD) has the useful property that images can be represented jointly in the spatial and spatial-frequency domains. For ladar intensity and range images, the statistical properties of the Rényi entropy are studied through the pseudo Wigner-Ville distribution (PWVD) using one- or two-dimensional windows. We also analyze how the statistical properties of the Rényi entropy change in ladar intensity and range images when man-made objects appear. On this basis, a novel method for generating a saliency map based on the PWVD and Rényi entropy is proposed. Target detection is then completed by segmenting the saliency map with a simple and convenient threshold method. For ladar intensity and range images, experimental results show that the proposed method can effectively detect military vehicles against a complex ground background with a low false alarm rate.

  15. Output Feedback Distributed Containment Control for High-Order Nonlinear Multiagent Systems.

    PubMed

    Li, Yafeng; Hua, Changchun; Wu, Shuangshuang; Guan, Xinping

    2017-01-31

    In this paper, we study the problem of output feedback distributed containment control for a class of high-order nonlinear multiagent systems under a fixed undirected graph and a fixed directed graph, respectively. Only the output signals of the systems can be measured. A novel reduced-order dynamic gain observer is constructed to estimate the unmeasured state variables of the system under a less conservative condition on the nonlinear terms than the traditional Lipschitz condition. Via the backstepping method, output feedback distributed nonlinear controllers for the followers are designed. By means of the novel first virtual controllers, we decouple the estimated state variables of different agents from each other. Consequently, the designed controllers depend only on the neighbors' output information rather than on their estimated state variables, and the dynamics of each agent can differ greatly, which gives the design method a wider range of applications. Finally, a numerical simulation is presented to illustrate the effectiveness of the proposed method.

  16. Probability distribution of haplotype frequencies under the two-locus Wright-Fisher model by diffusion approximation.

    PubMed

    Boitard, Simon; Loisel, Patrice

    2007-05-01

    The probability distribution of haplotype frequencies in a population, and the way it is influenced by genetical forces such as recombination, selection, and random drift, is a question of fundamental interest in population genetics. For large populations, the distribution of haplotype frequencies for two linked loci under the classical Wright-Fisher model is almost impossible to compute for numerical reasons. However, the Wright-Fisher process can in such cases be approximated by a diffusion process, and the transition density can then be deduced from the Kolmogorov equations. As no exact solution has been found for these equations, we developed a numerical method based on finite differences to solve them. It applies to transient states and to models including selection or mutation. We show by several tests that this method is accurate for computing the conditional joint density of haplotype frequencies given that no haplotype has been lost. We also show that it is far less time-consuming than other methods such as Monte Carlo simulations.

  17. Calculation of a fluctuating entropic force by phase space sampling.

    PubMed

    Waters, James T; Kim, Harold D

    2015-07-01

    A polymer chain pinned in space exerts a fluctuating force on the pin point in thermal equilibrium. The average of such fluctuating force is well understood from statistical mechanics as an entropic force, but little is known about the underlying force distribution. Here, we introduce two phase space sampling methods that can produce the equilibrium distribution of instantaneous forces exerted by a terminally pinned polymer. In these methods, both the positions and momenta of mass points representing a freely jointed chain are perturbed in accordance with the spatial constraints and the Boltzmann distribution of total energy. The constraint force for each conformation and momentum is calculated using Lagrangian dynamics. Using terminally pinned chains in space and on a surface, we show that the force distribution is highly asymmetric with both tensile and compressive forces. Most importantly, the mean of the distribution, which is equal to the entropic force, is not the most probable force even for long chains. Our work provides insights into the mechanistic origin of entropic forces, and an efficient computational tool for unbiased sampling of the phase space of a constrained system.

  18. Derivatives of logarithmic stationary distributions for policy gradient reinforcement learning.

    PubMed

    Morimura, Tetsuro; Uchibe, Eiji; Yoshimoto, Junichiro; Peters, Jan; Doya, Kenji

    2010-02-01

    Most conventional policy gradient reinforcement learning (PGRL) algorithms neglect (or do not explicitly make use of) a term in the average reward gradient with respect to the policy parameter. That term involves the derivative of the stationary state distribution that corresponds to the sensitivity of its distribution to changes in the policy parameter. Although the bias introduced by this omission can be reduced by setting the forgetting rate gamma for the value functions close to 1, these algorithms do not permit gamma to be set exactly at gamma = 1. In this article, we propose a method for estimating the log stationary state distribution derivative (LSD) as a useful form of the derivative of the stationary state distribution through backward Markov chain formulation and a temporal difference learning framework. A new policy gradient (PG) framework with an LSD is also proposed, in which the average reward gradient can be estimated by setting gamma = 0, so it becomes unnecessary to learn the value functions. We also test the performance of the proposed algorithms using simple benchmark tasks and show that these can improve the performances of existing PG methods.

  19. Solving Capacitated Closed Vehicle Routing Problem with Time Windows (CCVRPTW) using BRKGA with local search

    NASA Astrophysics Data System (ADS)

    Prasetyo, H.; Alfatsani, M. A.; Fauza, G.

    2018-05-01

    The main issue in the vehicle routing problem (VRP) is finding the shortest product distribution route from the depot to the outlets so as to minimize total distribution cost. The Capacitated Closed Vehicle Routing Problem with Time Windows (CCVRPTW) is a variant of VRP that accommodates vehicle capacity and the distribution period. Since CCVRPTW is NP-hard, it requires an efficient and effective solution algorithm. This study aimed to develop a Biased Random Key Genetic Algorithm (BRKGA) combined with local search to solve CCVRPTW. The algorithm was coded in MATLAB. Using numerical tests, optimal algorithm parameters were set, and the method was compared with a heuristic method and standard BRKGA on a case study of soft drink distribution. Results showed that BRKGA combined with local search produced a lower total distribution cost than the heuristic method. Moreover, the developed algorithm was found to improve on the performance of standard BRKGA.

  20. Exact Scheffé-type confidence intervals for output from groundwater flow models: 2. Combined use of hydrogeologic information and calibration data

    USGS Publications Warehouse

    Cooley, Richard L.

    1993-01-01

    Calibration data (observed values corresponding to model-computed values of dependent variables) are incorporated into a general method of computing exact Scheffé-type confidence intervals analogous to the confidence intervals developed in part 1 (Cooley, this issue) for a function of parameters derived from a groundwater flow model. Parameter uncertainty is specified by a distribution of parameters conditioned on the calibration data. This distribution was obtained as a posterior distribution by applying Bayes' theorem to the hydrogeologically derived prior distribution of parameters from part 1 and a distribution of differences between the calibration data and corresponding model-computed dependent variables. Tests show that the new confidence intervals can be much smaller than the intervals of part 1 because the prior parameter variance-covariance structure is altered so that combinations of parameters that give poor model fit to the data are unlikely. The confidence intervals of part 1 and the new confidence intervals can be effectively employed in a sequential method of model construction whereby new information is used to reduce confidence interval widths at each stage.

  1. Comparison of Sample Size by Bootstrap and by Formulas Based on Normal Distribution Assumption.

    PubMed

    Wang, Zuozhen

    2018-01-01

    Bootstrapping technique is distribution-independent, which provides an indirect way to estimate the sample size for a clinical trial based on a relatively smaller sample. In this paper, sample size estimation to compare two parallel-design arms for continuous data by bootstrap procedure are presented for various test types (inequality, non-inferiority, superiority, and equivalence), respectively. Meanwhile, sample size calculation by mathematical formulas (normal distribution assumption) for the identical data are also carried out. Consequently, power difference between the two calculation methods is acceptably small for all the test types. It shows that the bootstrap procedure is a credible technique for sample size estimation. After that, we compared the powers determined using the two methods based on data that violate the normal distribution assumption. To accommodate the feature of the data, the nonparametric statistical method of Wilcoxon test was applied to compare the two groups in the data during the process of bootstrap power estimation. As a result, the power estimated by normal distribution-based formula is far larger than that by bootstrap for each specific sample size per group. Hence, for this type of data, it is preferable that the bootstrap method be applied for sample size calculation at the beginning, and that the same statistical method as used in the subsequent statistical analysis is employed for each bootstrap sample during the course of bootstrap sample size estimation, provided there is historical true data available that can be well representative of the population to which the proposed trial is planning to extrapolate.
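
    A minimal sketch of the bootstrap power-estimation loop described above: for a candidate per-group sample size, resample with replacement from available pilot or historical data, test each bootstrap pair with the Wilcoxon rank-sum (Mann-Whitney) test, and report the rejection fraction as the estimated power. The pilot data here are synthetic placeholders.

```python
# Minimal sketch of bootstrap power estimation with a Wilcoxon rank-sum test.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(7)
pilot_a = rng.lognormal(mean=0.0, sigma=0.8, size=60)   # historical arm A
pilot_b = rng.lognormal(mean=0.4, sigma=0.8, size=60)   # historical arm B

def bootstrap_power(a, b, n_per_group, n_boot=2000, alpha=0.05):
    """Fraction of bootstrap resamples in which the Wilcoxon test rejects H0."""
    hits = 0
    for _ in range(n_boot):
        xa = rng.choice(a, size=n_per_group, replace=True)
        xb = rng.choice(b, size=n_per_group, replace=True)
        if mannwhitneyu(xa, xb, alternative="two-sided").pvalue < alpha:
            hits += 1
    return hits / n_boot

for n in (20, 40, 80):
    print(f"n per group = {n:3d}   estimated power = {bootstrap_power(pilot_a, pilot_b, n):.2f}")
```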

  2. The study of combining Latin Hypercube Sampling method and LU decomposition method (LULHS method) for constructing spatial random field

    NASA Astrophysics Data System (ADS)

    WANG, P. T.

    2015-12-01

    Groundwater modeling requires assigning hydrogeological properties to every numerical grid cell. Due to the lack of detailed information and the inherent spatial heterogeneity, geological properties can be treated as random variables. The hydrogeological property is assumed to follow a multivariate distribution with spatial correlations. By sampling random numbers from a given statistical distribution and assigning a value to each grid cell, a random field for modeling can be completed. Therefore, statistical sampling plays an important role in the efficiency of the modeling procedure. Latin Hypercube Sampling (LHS) is a stratified random sampling procedure that provides an efficient way to sample variables from their multivariate distributions. This study combines the stratified random procedure of LHS with simulation by LU decomposition to form LULHS. Both conditional and unconditional LULHS simulations were developed. The simulation efficiency and spatial correlation of LULHS are compared with three other simulation methods. The results show that, for both conditional and unconditional simulation, the LULHS method is more efficient in terms of computational effort: fewer realizations are required to achieve the required statistical accuracy and spatial correlation.
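
    A hedged sketch of the LULHS idea: Latin Hypercube Sampling supplies stratified standard-normal deviates, and a triangular factor of the spatial covariance (Cholesky, a special case of LU for symmetric positive-definite matrices) imposes the desired correlation, giving random-field realizations on the grid. The one-dimensional grid and exponential covariance model are illustrative assumptions.

```python
# Hedged sketch of LULHS: LHS-stratified standard normals given spatial
# correlation by a triangular (Cholesky/LU) factor of the covariance matrix.
import numpy as np
from scipy.stats import norm, qmc

x = np.linspace(0.0, 100.0, 50)                        # grid coordinates
sigma2, corr_len = 1.0, 20.0
cov = sigma2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
L = np.linalg.cholesky(cov + 1e-10 * np.eye(len(x)))   # lower-triangular factor

n_real = 200
sampler = qmc.LatinHypercube(d=len(x), seed=8)
u = np.clip(sampler.random(n=n_real), 1e-12, 1 - 1e-12)
z = norm.ppf(u)                                        # stratified N(0,1) deviates

fields = z @ L.T                                       # correlated realizations, shape (200, 50)

emp_cov = np.cov(fields, rowvar=False)
print("target vs empirical covariance at one grid-step lag:")
print(round(cov[0, 1], 3), round(float(emp_cov[0, 1]), 3))
```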

  3. Conjugate gradient minimisation approach to generating holographic traps for ultracold atoms.

    PubMed

    Harte, Tiffany; Bruce, Graham D; Keeling, Jonathan; Cassettari, Donatella

    2014-11-03

    Direct minimisation of a cost function can in principle provide a versatile and highly controllable route to computational hologram generation. Here we show that the careful design of cost functions, combined with numerically efficient conjugate gradient minimisation, establishes a practical method for the generation of holograms for a wide range of target light distributions. This results in a guided optimisation process, with a crucial advantage illustrated by the ability to circumvent optical vortex formation during hologram calculation. We demonstrate the implementation of the conjugate gradient method for both discrete and continuous intensity distributions and discuss its applicability to optical trapping of ultracold atoms.

  4. Time-resolved ion velocity distribution in a cylindrical Hall thruster: heterodyne-based experiment and modeling.

    PubMed

    Diallo, A; Keller, S; Shi, Y; Raitses, Y; Mazouffre, S

    2015-03-01

    Time-resolved variations of the ion velocity distribution function (IVDF) are measured in the cylindrical Hall thruster using a novel heterodyne method based on the laser-induced fluorescence technique. This method consists in inducing modulations of the discharge plasma at frequencies that enable the coupling to the breathing mode. Using a harmonic decomposition of the IVDF, one can extract each harmonic component of the IVDF from which the time-resolved IVDF is reconstructed. In addition, simulations have been performed assuming a sloshing of the IVDF during the modulation that show agreement between the simulated and measured first order perturbation of the IVDF.

  5. FDTD and transfer matrix methods for evaluating the performance of photonic crystal based microcavities for exciton-polaritons

    NASA Astrophysics Data System (ADS)

    Liu, Yi-Cheng; Byrnes, Tim

    2016-11-01

    We investigate alternative microcavity structures for exciton-polaritons consisting of photonic crystals instead of distributed Bragg reflectors. Finite-difference time-domain simulations and scattering transfer matrix methods are used to evaluate the cavity performance. The results are compared with conventional distributed Bragg reflectors. We find that in terms of the photon lifetime, the photonic crystal based microcavities are competitive, with typical lifetimes in the region of ∼20 ps being achieved. The photonic crystal microcavities have the advantage that they are compact and are frequency adjustable, showing that they are viable to investigate exciton-polariton condensation physics.

  6. Structured catalyst bed and method for conversion of feed materials to chemical products and liquid fuels

    DOEpatents

    Wang, Yong; Liu, Wei [Richland, WA]

    2012-01-24

    The present invention is a structured monolith reactor and method that provides for controlled Fischer-Tropsch (FT) synthesis. The invention controls mass transport limitations leading to higher CO conversion and lower methane selectivity. Over 95 wt % of the total product liquid hydrocarbons obtained from the monolithic catalyst are in the carbon range of C.sub.5-C.sub.18. The reactor controls readsorption of olefins leading to desired products with a preselected chain length distribution and enhanced overall reaction rate. And, liquid product analysis shows readsorption of olefins is reduced, achieving a narrower FT product distribution.

  7. Crowdsourcing to Acquire Hydrologic Data and Engage Citizen Scientists: CrowdHydrology

    USGS Publications Warehouse

    Fienen, Michael N.; Lowry, Chris

    2013-01-01

    Spatially and temporally distributed measurements of processes, such as baseflow at the watershed scale, come at substantial equipment and personnel cost. Research presented here focuses on building a crowdsourced database of inexpensive distributed stream stage measurements. Signs on staff gauges encourage citizen scientists to voluntarily send hydrologic measurements (e.g., stream stage) via text message to a server that stores and displays the data on the web. Based on the crowdsourced stream stage, we evaluate the accuracy of citizen scientist measurements and measurement approach. The results show that crowdsourced data collection is a supplemental method for collecting hydrologic data and a promising method of public engagement.

  8. The Influence of Synaptic Weight Distribution on Neuronal Population Dynamics

    PubMed Central

    Buice, Michael; Koch, Christof; Mihalas, Stefan

    2013-01-01

    The manner in which different distributions of synaptic weights onto cortical neurons shape their spiking activity remains open. To characterize a homogeneous neuronal population, we use the master equation for generalized leaky integrate-and-fire neurons with shot-noise synapses. We develop fast semi-analytic numerical methods to solve this equation for either current or conductance synapses, with and without synaptic depression. We show that its solutions match simulations of equivalent neuronal networks better than those of the Fokker-Planck equation and we compute bounds on the network response to non-instantaneous synapses. We apply these methods to study different synaptic weight distributions in feed-forward networks. We characterize the synaptic amplitude distributions using a set of measures, called tail weight numbers, designed to quantify the preponderance of very strong synapses. Even if synaptic amplitude distributions are equated for both the total current and average synaptic weight, distributions with sparse but strong synapses produce higher responses for small inputs, leading to a larger operating range. Furthermore, despite their small number, such synapses enable the network to respond faster and with more stability in the face of external fluctuations. PMID:24204219

  9. Experimental Verification of Modeled Thermal Distribution Produced by a Piston Source in Physiotherapy Ultrasound

    PubMed Central

    Lopez-Haro, S. A.; Leija, L.

    2016-01-01

    Objectives. To present a quantitative comparison of thermal patterns produced by the piston-in-a-baffle approach with those generated by a physiotherapy ultrasonic device and to show the dependency between thermal patterns and acoustic intensity distributions. Methods. The finite element (FE) method was used to model an ideal acoustic field and the produced thermal pattern, to be compared with the experimental acoustic and temperature distributions produced by a real ultrasonic applicator. A thermal model using the measured acoustic profile as input is also presented for comparison. Temperature measurements were carried out with thermocouples inserted in a muscle phantom. The insertion place of the thermocouples was monitored with ultrasound imaging. Results. Modeled and measured thermal profiles were compared within the first 10 cm of depth. The ideal acoustic field did not adequately represent the measured field, yielding different temperature profiles (errors of 10% to 20%). The experimental field was concentrated near the transducer, producing a region with higher temperatures, while the modeled ideal temperature was linearly distributed along the depth. The error was reduced to 7% when the measured acoustic field was introduced as the input variable in the FE temperature modeling. Conclusions. Temperature distributions are strongly related to the acoustic field distributions. PMID:27999801

  10. Regression without truth with Markov chain Monte-Carlo

    NASA Astrophysics Data System (ADS)

    Madan, Hennadii; Pernuš, Franjo; Likar, Boštjan; Špiclin, Žiga

    2017-03-01

    Regression without truth (RWT) is a statistical technique for estimating error model parameters of each method in a group of methods used for measurement of a certain quantity. A very attractive aspect of RWT is that it does not rely on a reference method or "gold standard" data, which is otherwise difficult to obtain. RWT was used for a reference-free performance comparison of several methods for measuring left ventricular ejection fraction (EF), i.e. the percentage of blood leaving the ventricle each time the heart contracts, and has since been applied for various other quantitative imaging biomarkers (QIBs). Herein, we show how Markov chain Monte-Carlo (MCMC), a computational technique for drawing samples from a statistical distribution with probability density function known only up to a normalizing coefficient, can be used to augment RWT to gain a number of important benefits compared to the original approach based on iterative optimization. For instance, the proposed MCMC-based RWT enables the estimation of the joint posterior distribution of the parameters of the error model, straightforward quantification of uncertainty of the estimates, and estimation of the true value of the measurand with corresponding credible intervals (CIs); it does not require a finite support for the prior distribution of the measurand, and it generally has much improved robustness against convergence to non-global maxima. The proposed approach is validated using synthetic data that emulate the EF data for 45 patients measured with 8 different methods. The obtained results show that the 90% CIs of the corresponding parameter estimates contain the true values of all error model parameters and the measurand. A potential real-world application is to take measurements of a certain QIB with several different methods and then use the proposed framework to compute the estimates of the true values and their uncertainty, vital information for diagnosis based on QIBs.

  11. The forecasting of menstruation based on a state-space modeling of basal body temperature time series.

    PubMed

    Fukaya, Keiichi; Kawamori, Ai; Osada, Yutaka; Kitazawa, Masumi; Ishiguro, Makio

    2017-09-20

    Women's basal body temperature (BBT) shows a periodic pattern that is associated with the menstrual cycle. Although this fact suggests that daily BBT time series can be useful for estimating the underlying phase state as well as for predicting the length of the current menstrual cycle, little attention has been paid to modeling BBT time series. In this study, we propose a state-space model that involves the menstrual phase as a latent state variable to explain the daily fluctuation of BBT and the menstrual cycle length. Conditional distributions of the phase are obtained by using sequential Bayesian filtering techniques. A predictive distribution of the next menstruation day can be derived based on this conditional distribution and the model, leading to a novel statistical framework that provides a sequentially updated prediction of the upcoming menstruation day. We applied this framework to a real data set of women's BBT and menstruation days and compared the prediction accuracy of the proposed method with that of previous methods, showing that the proposed method generally provides a better prediction. Because BBT can be obtained with relatively small cost and effort, the proposed method can be useful for women's health management. Potential extensions of this framework as the basis of modeling and predicting events associated with the menstrual cycle are discussed. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  12. A novel method for the evaluation of uncertainty in dose-volume histogram computation.

    PubMed

    Henríquez, Francisco Cutanda; Castrillón, Silvia Vargas

    2008-03-15

    Dose-volume histograms (DVHs) are a useful tool in state-of-the-art radiotherapy treatment planning, and it is essential to recognize their limitations. Even after a specific dose-calculation model is optimized, dose distributions computed by using treatment-planning systems are affected by several sources of uncertainty, such as algorithm limitations, measurement uncertainty in the data used to model the beam, and residual differences between measured and computed dose. This report presents a novel method to take these uncertainties into account: a probabilistic approach using a new kind of histogram, a dose-expected volume histogram, is introduced. The expected value of the volume in the region of interest receiving an absorbed dose equal to or greater than a certain value is found by using the probability distribution of the dose at each point. A rectangular probability distribution is assumed for this point dose, and a formulation that accounts for uncertainties associated with the point dose is presented for practical computations. This method is applied to a set of DVHs for different regions of interest, including 6 brain patients, 8 lung patients, 8 pelvis patients, and 6 prostate patients planned for intensity-modulated radiation therapy. Results show a greater effect on planning target volume coverage than on organs at risk. In cases of steep DVH gradients, such as planning target volumes, this new method shows the largest differences with the corresponding DVH; thus, the effect of the uncertainty is larger.
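
    A minimal sketch of the dose-expected volume idea is given below: assuming each point dose is uniformly (rectangularly) distributed about its nominal value, the probability of exceeding each dose threshold is computed per voxel and averaged over the region of interest. The half-width, voxel count, and nominal doses are illustrative placeholders, not values from the report.

```python
import numpy as np

def expected_volume_histogram(dose, delta, thresholds):
    """Expected fractional volume receiving >= each threshold, assuming each
    point dose is uniformly distributed in [d - delta, d + delta]."""
    dose = np.asarray(dose, dtype=float).ravel()
    evh = []
    for t in thresholds:
        # P(D >= t) for a uniform distribution on [d - delta, d + delta]
        p = np.clip((dose + delta - t) / (2.0 * delta), 0.0, 1.0)
        evh.append(p.mean())
    return np.array(evh)

# Illustrative use: 1000 voxels with nominal doses around 60 Gy, 2 Gy half-width.
rng = np.random.default_rng(0)
nominal = rng.normal(60.0, 3.0, 1000)
thresholds = np.linspace(50.0, 70.0, 41)
evh = expected_volume_histogram(nominal, 2.0, thresholds)
print(evh[:5])
```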

  13. Robust stochastic optimization for reservoir operation

    NASA Astrophysics Data System (ADS)

    Pan, Limeng; Housh, Mashor; Liu, Pan; Cai, Ximing; Chen, Xin

    2015-01-01

    Optimal reservoir operation under uncertainty is a challenging engineering problem. Application of classic stochastic optimization methods to large-scale problems is limited due to computational difficulty. Moreover, classic stochastic methods assume that the estimated distribution function or the sample inflow data accurately represents the true probability distribution, which may be invalid and the performance of the algorithms may be undermined. In this study, we introduce a robust optimization (RO) approach, Iterative Linear Decision Rule (ILDR), so as to provide a tractable approximation for a multiperiod hydropower generation problem. The proposed approach extends the existing LDR method by accommodating nonlinear objective functions. It also provides users with the flexibility of choosing the accuracy of ILDR approximations by assigning a desired number of piecewise linear segments to each uncertainty. The performance of the ILDR is compared with benchmark policies including the sampling stochastic dynamic programming (SSDP) policy derived from historical data. The ILDR solves both the single and multireservoir systems efficiently. The single reservoir case study results show that the RO method is as good as SSDP when implemented on the original historical inflows and it outperforms SSDP policy when tested on generated inflows with the same mean and covariance matrix as those in history. For the multireservoir case study, which considers water supply in addition to power generation, numerical results show that the proposed approach performs as well as in the single reservoir case study in terms of optimal value and distributional robustness.

  14. Stress analysis for structures with surface cracks

    NASA Technical Reports Server (NTRS)

    Bell, J. C.

    1978-01-01

    Two basic forms of analysis, one treating stresses around arbitrarily loaded circular cracks, the other treating stresses due to loads arbitrarily distributed on the surface of a half space, are united by a boundary-point least squares method to obtain analyses for stresses from surface cracks in plates or bars. Calculations were made for enough cases to show how effects from the crack vary with the depth-to-length ratio, the fractional penetration ratio, the obliquity of the load, and to some extent the fractional span ratio. The results include plots showing stress intensity factors, stress component distributions near the crack, and crack opening displacement patterns. Favorable comparisons are shown with two kinds of independent experiments, but the main method for confirming the results is wide checking of the overall satisfaction of boundary conditions, so that external confirmation is not essential. Principles involved in designing analyses which promote dependability of the results are proposed and illustrated.

  15. Modeling the brain morphology distribution in the general aging population

    NASA Astrophysics Data System (ADS)

    Huizinga, W.; Poot, D. H. J.; Roshchupkin, G.; Bron, E. E.; Ikram, M. A.; Vernooij, M. W.; Rueckert, D.; Niessen, W. J.; Klein, S.

    2016-03-01

    Both normal aging and neurodegenerative diseases such as Alzheimer's disease cause morphological changes of the brain. To better distinguish between normal and abnormal cases, it is necessary to model changes in brain morphology owing to normal aging. To this end, we developed a method for analyzing and visualizing these changes for the entire brain morphology distribution in the general aging population. The method is applied to 1000 subjects from a large population imaging study in the elderly, from which 900 were used to train the model and 100 were used for testing. The results of the 100 test subjects show that the model generalizes to subjects outside the model population. Smooth percentile curves showing the brain morphology changes as a function of age and spatiotemporal atlases derived from the model population are publicly available via an interactive web application at agingbrain.bigr.nl.

  16. General simulation algorithm for autocorrelated binary processes.

    PubMed

    Serinaldi, Francesco; Lombardo, Federico

    2017-02-01

    The apparent ubiquity of binary random processes in physics and many other fields has attracted considerable attention from the modeling community. However, generation of binary sequences with prescribed autocorrelation is a challenging task owing to the discrete nature of the marginal distributions, which makes the application of classical spectral techniques problematic. We show that such methods can effectively be used if we focus on the parent continuous process of beta distributed transition probabilities rather than on the target binary process. This change of paradigm results in a simulation procedure effectively embedding a spectrum-based iterative amplitude-adjusted Fourier transform method devised for continuous processes. The proposed algorithm is fully general, requires minimal assumptions, and can easily simulate binary signals with power-law and exponentially decaying autocorrelation functions corresponding, for instance, to Hurst-Kolmogorov and Markov processes. An application to rainfall intermittency shows that the proposed algorithm can also simulate surrogate data preserving the empirical autocorrelation.
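
    The full algorithm works on a parent continuous process of beta-distributed transition probabilities together with an iterative amplitude-adjusted Fourier transform, which is too long to reproduce here. As a much simpler illustration of the "simulate the parent continuous process, then derive the binary signal" idea, the sketch below thresholds an AR(1) Gaussian parent process (the classical clipping method, not the authors' scheme); the lag-1 correlation and marginal probability are illustrative.

```python
import numpy as np
from scipy.stats import norm

def binary_from_gaussian(n, rho, p_one, seed=0):
    """Autocorrelated 0/1 sequence obtained by thresholding an AR(1) Gaussian
    parent process (clipping method; simpler than the beta/IAAFT scheme)."""
    rng = np.random.default_rng(seed)
    g = np.empty(n)
    g[0] = rng.standard_normal()
    for t in range(1, n):                        # AR(1) parent with lag-1 corr rho
        g[t] = rho * g[t - 1] + np.sqrt(1.0 - rho**2) * rng.standard_normal()
    return (g < norm.ppf(p_one)).astype(int)     # marginal P(X = 1) = p_one

x = binary_from_gaussian(100_000, rho=0.9, p_one=0.3)
print("mean:", x.mean(), "lag-1 autocorrelation:", np.corrcoef(x[:-1], x[1:])[0, 1])
```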

  17. Markov chain Monte Carlo techniques applied to parton distribution functions determination: Proof of concept

    NASA Astrophysics Data System (ADS)

    Gbedo, Yémalin Gabin; Mangin-Brinet, Mariane

    2017-07-01

    We present a new procedure to determine parton distribution functions (PDFs), based on Markov chain Monte Carlo (MCMC) methods. The aim of this paper is to show that we can replace the standard χ2 minimization by procedures grounded on statistical methods, and on Bayesian inference in particular, thus offering additional insight into the rich field of PDFs determination. After a basic introduction to these techniques, we introduce the algorithm we have chosen to implement—namely Hybrid (or Hamiltonian) Monte Carlo. This algorithm, initially developed for Lattice QCD, turns out to be very interesting when applied to PDFs determination by global analyses; we show that it allows us to circumvent the difficulties due to the high dimensionality of the problem, in particular concerning the acceptance. A first feasibility study is performed and presented, which indicates that Markov chain Monte Carlo can successfully be applied to the extraction of PDFs and of their uncertainties.

  18. A stress-constrained geodetic inversion method for spatiotemporal slip of a slow slip event with earthquake swarm

    NASA Astrophysics Data System (ADS)

    Hirose, H.; Tanaka, T.

    2017-12-01

    Geodetic inversions using GNSS data and/or tiltmeter data have been performed in order to estimate spatio-temporal fault slip distributions. They have been applied to slow slip events (SSEs), which are episodic fault slips lasting for days to years (e.g., Ozawa et al., 2001; Hirose et al., 2014). Although their slip distributions are important information for inferring the strain budget and frictional characteristics on a subduction plate interface, inhomogeneous station coverage generally yields spatially non-uniform slip resolution, and in the worst case a slip distribution cannot be recovered. Some SSEs are known to be accompanied by an earthquake swarm around the SSE slip area, such as the Boso Peninsula SSEs (e.g., Hirose et al., 2014). Some researchers hypothesize that these earthquakes are triggered by a stress change caused by the accompanying SSE (e.g., Segall et al., 2006). Based on this assumption, a conventional geodetic inversion that imposes a constraint on the stress change promoting the earthquake activity may improve the resolution of the slip distribution. Here we develop an inversion method based on the Network Inversion Filter technique (Segall and Matthews, 1997), incorporating a constraint of a positive change in Coulomb failure stress (Delta-CFS) at the locations of the accompanying earthquakes. In addition, we apply this new method to synthetic data in order to check the effectiveness of the method and the characteristics of the inverted slip distributions. The results show that there are cases in which the reproduction of a slip distribution is better with earthquake information than without it. That is, it is possible to improve the reproducibility of the slip distribution of an SSE with this new inversion method, using an earthquake catalog of the accompanying seismicity, when the available geodetic data are insufficient.

  19. Robust Hydrological Forecasting for High-resolution Distributed Models Using a Unified Data Assimilation Approach

    NASA Astrophysics Data System (ADS)

    Hernandez, F.; Liang, X.

    2017-12-01

    Reliable real-time hydrological forecasting, to predict important phenomena such as floods, is invaluable to the society. However, modern high-resolution distributed models have faced challenges when dealing with uncertainties that are caused by the large number of parameters and initial state estimations involved. Therefore, to rely on these high-resolution models for critical real-time forecast applications, considerable improvements on the parameter and initial state estimation techniques must be made. In this work we present a unified data assimilation algorithm called Optimized PareTo Inverse Modeling through Inverse STochastic Search (OPTIMISTS) to deal with the challenge of having robust flood forecasting for high-resolution distributed models. This new algorithm combines the advantages of particle filters and variational methods in a unique way to overcome their individual weaknesses. The analysis of candidate particles compares model results with observations in a flexible time frame, and a multi-objective approach is proposed which attempts to simultaneously minimize differences with the observations and departures from the background states by using both Bayesian sampling and non-convex evolutionary optimization. Moreover, the resulting Pareto front is given a probabilistic interpretation through kernel density estimation to create a non-Gaussian distribution of the states. OPTIMISTS was tested on a low-resolution distributed land surface model using VIC (Variable Infiltration Capacity) and on a high-resolution distributed hydrological model using the DHSVM (Distributed Hydrology Soil Vegetation Model). In the tests streamflow observations are assimilated. OPTIMISTS was also compared with a traditional particle filter and a variational method. Results show that our method can reliably produce adequate forecasts and that it is able to outperform those resulting from assimilating the observations using a particle filter or an evolutionary 4D variational method alone. In addition, our method is shown to be efficient in tackling high-resolution applications with robust results.

  20. Equivalence of MAXENT and Poisson point process models for species distribution modeling in ecology.

    PubMed

    Renner, Ian W; Warton, David I

    2013-03-01

    Modeling the spatial distribution of a species is a fundamental problem in ecology. A number of modeling methods have been developed, an extremely popular one being MAXENT, a maximum entropy modeling approach. In this article, we show that MAXENT is equivalent to a Poisson regression model and hence is related to a Poisson point process model, differing only in the intercept term, which is scale-dependent in MAXENT. We illustrate a number of improvements to MAXENT that follow from these relations. In particular, a point process model approach facilitates methods for choosing the appropriate spatial resolution, assessing model adequacy, and choosing the LASSO penalty parameter, all currently unavailable to MAXENT. The equivalence result represents a significant step in the unification of the species distribution modeling literature. Copyright © 2013, The International Biometric Society.
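
    A minimal sketch of the Poisson-regression view of MAXENT is given below: presence counts per grid cell are regressed on environmental covariates with a log link. The synthetic covariates and counts are placeholders, and the LASSO penalty and point-process weighting discussed in the paper are omitted.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_cells = 500
# Synthetic environmental covariates per grid cell (placeholders).
X = rng.normal(size=(n_cells, 2))
# Synthetic presence counts per cell drawn from a known log-linear intensity.
lam = np.exp(-1.0 + 0.8 * X[:, 0] - 0.5 * X[:, 1])
y = rng.poisson(lam)

# Poisson regression of counts on covariates: the point-process analogue of MAXENT,
# up to the intercept term discussed in the paper.
model = sm.GLM(y, sm.add_constant(X), family=sm.families.Poisson()).fit()
print(model.params)
```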

  1. Modelling maximum river flow by using Bayesian Markov Chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Cheong, R. Y.; Gabda, D.

    2017-09-01

    Analysis of flood trends is vital since flooding threatens human living in terms of finance, environment and security. The data of annual maximum river flows in Sabah were fitted to the generalized extreme value (GEV) distribution. The maximum likelihood estimator (MLE) arises naturally when working with the GEV distribution. However, previous research showed that MLE provides unstable results, especially for small sample sizes. In this study, we used Bayesian Markov chain Monte Carlo (MCMC) based on the Metropolis-Hastings algorithm to estimate the GEV parameters. The Bayesian MCMC method performs statistical inference by estimating parameters from the posterior distribution obtained via Bayes’ theorem. The Metropolis-Hastings algorithm is used to overcome the high-dimensional state space faced by Monte Carlo methods. This approach also accounts for more uncertainty in parameter estimation, which then yields a better prediction of maximum river flow in Sabah.
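
    A minimal sketch of random-walk Metropolis-Hastings sampling of GEV parameters for annual maxima is shown below. The priors, proposal step sizes, parameterisation (log of the scale), and synthetic data are illustrative assumptions, not the settings used in the study.

```python
import numpy as np
from scipy.stats import genextreme, norm

def log_posterior(theta, data):
    mu, log_sigma, xi = theta
    sigma = np.exp(log_sigma)
    # SciPy's shape parameter is c = -xi in the usual GEV convention.
    ll = genextreme.logpdf(data, c=-xi, loc=mu, scale=sigma).sum()
    # Weakly informative normal priors (illustrative).
    lp = norm.logpdf(mu, 0, 100) + norm.logpdf(log_sigma, 0, 10) + norm.logpdf(xi, 0, 0.5)
    return ll + lp

def metropolis(data, n_iter=20000, step=(0.5, 0.05, 0.05), seed=0):
    rng = np.random.default_rng(seed)
    theta = np.array([data.mean(), np.log(data.std()), 0.1])
    lp = log_posterior(theta, data)
    chain = []
    for _ in range(n_iter):
        prop = theta + rng.normal(0.0, step)          # symmetric random-walk proposal
        lp_prop = log_posterior(prop, data)
        if np.log(rng.random()) < lp_prop - lp:       # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)

# Illustrative annual maxima and posterior sampling.
data = genextreme.rvs(c=-0.1, loc=100, scale=20, size=40, random_state=1)
chain = metropolis(data)
print(chain[-5:])
```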

  2. Locating dayside magnetopause reconnection with exhaust ion distributions

    NASA Astrophysics Data System (ADS)

    Broll, J. M.; Fuselier, S. A.; Trattner, K. J.

    2017-05-01

    Magnetic reconnection at Earth's dayside magnetopause is essential to magnetospheric dynamics. Determining where reconnection takes place is important to understanding the processes involved, and many questions about reconnection location remain unanswered. We present a method for locating the magnetic reconnection X line at Earth's dayside magnetopause under southward interplanetary magnetic field conditions using only ion velocity distribution measurements. Particle-in-cell simulations based on Cluster magnetopause crossings produce ion velocity distributions that we propagate through a model magnetosphere, allowing us to calculate the field-aligned distance between an exhaust observation and its associated reconnection line. We demonstrate this procedure for two events and compare our results with those of the Maximum Magnetic Shear Model; we find good agreement with its results and show that when our method is applicable, it produces more precise locations than the Maximum Shear Model.

  3. Measurement of toroidal vessel eddy current during plasma disruption on J-TEXT.

    PubMed

    Liu, L J; Yu, K X; Zhang, M; Zhuang, G; Li, X; Yuan, T; Rao, B; Zhao, Q

    2016-01-01

    In this paper, we have employed a thin, printed circuit board eddy current array in order to determine the radial distribution of the azimuthal component of the eddy current density at the surface of a steel plate. The eddy current in the steel plate can be calculated by analytical methods under the simplifying assumptions that the steel plate is infinitely large and the exciting current is of uniform distribution. The measurement on the steel plate shows that this method has high spatial resolution. Then, we extended this methodology to a toroidal geometry with the objective of determining the poloidal distribution of the toroidal component of the eddy current density associated with plasma disruption in a fusion reactor called J-TEXT. The preliminary measured result is consistent with the analysis and calculation results on the J-TEXT vacuum vessel.

  4. Full Waveform Inversion Using Student's t Distribution: a Numerical Study for Elastic Waveform Inversion and Simultaneous-Source Method

    NASA Astrophysics Data System (ADS)

    Jeong, Woodon; Kang, Minji; Kim, Shinwoong; Min, Dong-Joo; Kim, Won-Ki

    2015-06-01

    Seismic full waveform inversion (FWI) has primarily been based on a least-squares optimization problem for data residuals. However, the least-squares objective function can be sensitive to noise and outliers. There have been numerous studies to enhance the robustness of FWI by using robust objective functions, such as l1-norm-based objective functions. However, the l1-norm can suffer from a singularity problem when the residual wavefield is very close to zero. Recently, Student's t distribution has been applied to acoustic FWI, giving reasonable results for noisy data. Student's t distribution has an overdispersed density function compared with the normal distribution and is thus useful for data with outliers. In this study, we investigate the feasibility of Student's t distribution for elastic FWI by comparing its basic properties with those of the l2-norm and l1-norm objective functions and by applying the three methods to noisy data. Our experiments show that the l2-norm is sensitive to noise, whereas the l1-norm and Student's t distribution objective functions give relatively stable and reasonable results for noisy data. When noise patterns are complicated, i.e., due to a combination of missing traces, unexpected outliers, and random noise, FWI based on Student's t distribution gives better results than l1- and l2-norm FWI. We also examine the application of simultaneous-source methods to acoustic FWI based on Student's t distribution. Computing the expectation of the coefficients of the gradient and crosstalk-noise terms and plotting the signal-to-noise ratio with iteration, we were able to confirm that crosstalk noise is suppressed as the iteration progresses, even when simultaneous-source FWI is combined with Student's t distribution. From our experiments, we conclude that FWI based on Student's t distribution can retrieve subsurface material properties with less distortion from noise than l1- and l2-norm FWI, and that the simultaneous-source method can be adopted to improve the computational efficiency of FWI based on Student's t distribution.
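
    The three misfit measures compared in the study can be sketched on a residual vector as follows; the Student's t version is the negative log-likelihood of the residuals up to an additive constant, and the degrees of freedom and scale are illustrative choices.

```python
import numpy as np

def l2_misfit(r):
    return 0.5 * np.sum(r ** 2)

def l1_misfit(r):
    return np.sum(np.abs(r))

def student_t_misfit(r, nu=4.0, s=1.0):
    """Negative log-likelihood (up to a constant) of residuals under a
    Student's t distribution; large outliers are penalised only logarithmically."""
    return 0.5 * (nu + 1.0) * np.sum(np.log1p((r / s) ** 2 / nu))

# Illustration: one large outlier dominates the l2 misfit but barely moves the t misfit.
r = np.array([0.1, -0.2, 0.05, 10.0])
print(l2_misfit(r), l1_misfit(r), student_t_misfit(r))
```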

  5. Risk of portfolio with simulated returns based on copula model

    NASA Astrophysics Data System (ADS)

    Razak, Ruzanna Ab; Ismail, Noriszura

    2015-02-01

    The commonly used tool for measuring risk of a portfolio with equally weighted stocks is variance-covariance method. Under extreme circumstances, this method leads to significant underestimation of actual risk due to its multivariate normality assumption of the joint distribution of stocks. The purpose of this research is to compare the actual risk of portfolio with the simulated risk of portfolio in which the joint distribution of two return series is predetermined. The data used is daily stock prices from the ASEAN market for the period January 2000 to December 2012. The copula approach is applied to capture the time varying dependence among the return series. The results shows that the chosen copula families are not suitable to present the dependence structures of each bivariate returns. Exception for the Philippines-Thailand pair where by t copula distribution appears to be the appropriate choice to depict its dependence. Assuming that the t copula distribution is the joint distribution of each paired series, simulated returns is generated and value-at-risk (VaR) is then applied to evaluate the risk of each portfolio consisting of two simulated return series. The VaR estimates was found to be symmetrical due to the simulation of returns via elliptical copula-GARCH approach. By comparison, it is found that the actual risks are underestimated for all pairs of portfolios except for Philippines-Thailand. This study was able to show that disregard of the non-normal dependence structure of two series will result underestimation of actual risk of the portfolio.

  6. Methods for Probabilistic Radiological Dose Assessment at a High-Level Radioactive Waste Repository.

    NASA Astrophysics Data System (ADS)

    Maheras, Steven James

    Methods were developed to assess and evaluate the uncertainty in offsite and onsite radiological dose at a high-level radioactive waste repository to show reasonable assurance that compliance with applicable regulatory requirements will be achieved. Uncertainty in offsite dose was assessed by employing a stochastic precode in conjunction with Monte Carlo simulation using an offsite radiological dose assessment code. Uncertainty in onsite dose was assessed by employing a discrete-event simulation model of repository operations in conjunction with an occupational radiological dose assessment model. Complementary cumulative distribution functions of offsite and onsite dose were used to illustrate reasonable assurance. Offsite dose analyses were performed for iodine -129, cesium-137, strontium-90, and plutonium-239. Complementary cumulative distribution functions of offsite dose were constructed; offsite dose was lognormally distributed with a two order of magnitude range. However, plutonium-239 results were not lognormally distributed and exhibited less than one order of magnitude range. Onsite dose analyses were performed for the preliminary inspection, receiving and handling, and the underground areas of the repository. Complementary cumulative distribution functions of onsite dose were constructed and exhibited less than one order of magnitude range. A preliminary sensitivity analysis of the receiving and handling areas was conducted using a regression metamodel. Sensitivity coefficients and partial correlation coefficients were used as measures of sensitivity. Model output was most sensitive to parameters related to cask handling operations. Model output showed little sensitivity to parameters related to cask inspections.

  7. EXIMS: an improved data analysis pipeline based on a new peak picking method for EXploring Imaging Mass Spectrometry data.

    PubMed

    Wijetunge, Chalini D; Saeed, Isaam; Boughton, Berin A; Spraggins, Jeffrey M; Caprioli, Richard M; Bacic, Antony; Roessner, Ute; Halgamuge, Saman K

    2015-10-01

    Matrix Assisted Laser Desorption Ionization-Imaging Mass Spectrometry (MALDI-IMS) in 'omics' data acquisition generates detailed information about the spatial distribution of molecules in a given biological sample. Various data processing methods have been developed for exploring the resultant high volume data. However, most of these methods process data in the spectral domain and do not make the most of the important spatial information available through this technology. Therefore, we propose a novel streamlined data analysis pipeline specifically developed for MALDI-IMS data utilizing significant spatial information for identifying hidden significant molecular distribution patterns in these complex datasets. The proposed unsupervised algorithm uses Sliding Window Normalization (SWN) and a new spatial distribution based peak picking method developed based on Gray level Co-Occurrence (GCO) matrices followed by clustering of biomolecules. We also use gist descriptors and an improved version of GCO matrices to extract features from molecular images and minimum medoid distance to automatically estimate the number of possible groups. We evaluated our algorithm using a new MALDI-IMS metabolomics dataset of a plant (Eucalypt) leaf. The algorithm revealed hidden significant molecular distribution patterns in the dataset, which the current Component Analysis and Segmentation Map based approaches failed to extract. We further demonstrate the performance of our peak picking method over other traditional approaches by using a publicly available MALDI-IMS proteomics dataset of a rat brain. Although SWN did not show any significant improvement as compared with using no normalization, the visual assessment showed an improvement as compared to using the median normalization. The source code and sample data are freely available at http://exims.sourceforge.net/. awgcdw@student.unimelb.edu.au or chalini_w@live.com Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  8. Demonstration of improved seismic source inversion method of tele-seismic body wave

    NASA Astrophysics Data System (ADS)

    Yagi, Y.; Okuwaki, R.

    2017-12-01

    Seismic rupture inversion of tele-seismic body waves has been widely applied to studies of large earthquakes. In general, tele-seismic body waves contain information on the overall rupture process of a large earthquake, but they have been considered inappropriate for analyzing the detailed rupture process of an M6-7 class earthquake. Recently, the quality and quantity of tele-seismic data and the inversion method have been greatly improved. The improved data and method enable us to study the detailed rupture process of an M6-7 class earthquake even if we use only tele-seismic body waves. In this study, we demonstrate the ability of the improved data and method through analyses of the 2016 Rieti, Italy earthquake (Mw 6.2) and the 2016 Kumamoto, Japan earthquake (Mw 7.0), which have been well investigated using InSAR data sets and field observations. We assumed the rupture to occur on a single fault plane inferred from the moment tensor solutions and the aftershock distribution. We constructed spatiotemporally discretized slip-rate functions with patches arranged as closely as possible. We performed inversions using several fault models and found that the spatiotemporal location of the large slip-rate area was robust. In the 2016 Kumamoto, Japan earthquake, the slip-rate distribution shows that the rupture propagated to the southwest during the first 5 s; at 5 s after the origin time, the main rupture started to propagate toward the northeast. The first and second episodes correspond to rupture propagation along the Hinagu fault and the Futagawa fault, respectively. In the 2016 Rieti, Italy earthquake, the slip-rate distribution shows that the rupture propagated in the up-dip direction during the first 2 s, and then toward the northwest. From both analyses, we propose that the spatiotemporal slip-rate distribution estimated by the improved inversion method of tele-seismic body waves contains enough information to study the detailed rupture process of an M6-7 class earthquake.

  9. Intelligent Distribution Voltage Control with Distributed Generation

    NASA Astrophysics Data System (ADS)

    Castro Mendieta, Jose

    In this thesis, three methods for the optimal participation of the reactive power of distributed generations (DGs) in unbalanced distribution networks have been proposed, developed, and tested. These new methods were developed with the objectives of maintaining voltage within permissible limits and reducing losses. The first method proposes an optimal participation of the reactive power of all devices available in the network. The proposed approach is validated by comparing the results with other methods reported in the literature. The proposed method was implemented using Simulink of Matlab and OpenDSS. Optimization techniques and the presentation of results are from Matlab. The co-simulation with Electric Power Research Institute's (EPRI) OpenDSS program solves a three-phase optimal power flow problem in the unbalanced IEEE 13 and 34-node test feeders. The results from this work showed a better loss reduction compared to the Coordinated Voltage Control (CVC) method. The second method aims to minimize the voltage variation on the pilot bus of the distribution network using DGs. It uses Pareto and Fuzzy-PID logic to reduce the voltage variation. Results indicate that the proposed method reduces the voltage variation more than the other methods. Simulink of Matlab and OpenDSS are used in the development of the proposed approach. The performance of the method is evaluated on the IEEE 13-node test feeder with one and three DGs. Variable and unbalanced loads are used, based on real consumption data, over a time window of 48 hours. The third method aims to minimize the reactive losses using DGs on distribution networks. This method analyzes the problem using the IEEE 13-node test feeder with three different loads and the IEEE 123-node test feeder with four DGs. The DGs can be fixed or variable. Results indicate that the integration of DGs to optimize the reactive power of the network helps to maintain the voltage within the allowed limits and to reduce the reactive power losses. The thesis is presented in the form of three articles. The first article is published in the journal Electrical Power and Energy System, the second is published in the international journal Energies, and the third was submitted to the journal Electrical Power and Energy System. Two other articles have been published in conferences with reviewing committee. This work is based on six chapters, which are detailed in the various sections of the thesis.

  10. Test for planetary influences on solar activity. [tidal effects

    NASA Technical Reports Server (NTRS)

    Dingle, L. A.; Van Hoven, G.; Sturrock, P. A.

    1973-01-01

    A method due to Schuster is used to test the hypothesis that solar activity is influenced by tides raised in the sun's atmosphere by planets. We calculate the distribution in longitude of over 1000 flares occurring in a 6 1/2 yr segment of solar cycle 19, referring the longitude system in turn to the orbital positions of Jupiter and Venus. The resulting distributions show no evidence for a tidal effect.

  11. Pilot-in-the-Loop CFD Method Development

    DTIC Science & Technology

    2015-02-01

    [Excerpt fragments] ... expensive alternatives [1]. ALM represents the blades as a set of segments along each blade axis, and the ADM represents the entire rotor as ... [Figure 4 caption: time-averaged vertical velocity distributions on the downwash and rotor disk plane for hybrid and loose coupling cases with fine and coarse grid refinement levels; fine grid, Δx = 1.00 m.]

  12. Precision Measurement of Distribution of Film Thickness on Pendulum for Experiment of G

    NASA Astrophysics Data System (ADS)

    Liu, Lin-Xia; Guan, Sheng-Guo; Liu, Qi; Zhang, Ya-Ting; Shao, Cheng-Gang; Luo, Jun

    2009-09-01

    The distribution of the film thickness coated on the pendulum used for measuring the Newtonian gravitational constant G is determined with a weighing method by means of a precision mass comparator. The experimental result shows that the gold film on the pendulum will contribute a correction of -24.3 ppm to our G measurement with an uncertainty of 4.3 ppm, which is significant for improving the G value with high precision.

  13. Oxygen Distributions—Evaluation of Computational Methods, Using a Stochastic Model for Large Tumour Vasculature, to Elucidate the Importance of Considering a Complete Vascular Network

    PubMed Central

    Bernhardt, Peter

    2016-01-01

    Purpose To develop a general model that utilises a stochastic method to generate a vessel tree based on experimental data, and an associated irregular, macroscopic tumour. These will be used to evaluate two different methods for computing oxygen distribution. Methods A vessel tree structure, and an associated tumour of 127 cm3, were generated using a stochastic method and Bresenham’s line algorithm to develop trees on two different scales and fuse them together. The vessel dimensions were adjusted through convolution and thresholding, and each vessel voxel was assigned an oxygen value. Diffusion and consumption were modelled using a Green’s function approach together with Michaelis-Menten kinetics. The computations were performed using a combined tree method (CTM) and an individual tree method (ITM). Five tumour sub-sections were compared to evaluate the methods. Results The oxygen distributions of the same tissue samples, using different methods of computation, were considerably less similar (root mean square deviation, RMSD ≈ 0.02) than the distributions of different samples using the CTM (0.001 < RMSD < 0.01). The deviations of the ITM from the CTM increase at lower oxygen values, resulting in the ITM severely underestimating the level of hypoxia in the tumour. Kolmogorov-Smirnov (KS) tests showed that millimetre-scale samples may not represent the whole. Conclusions The stochastic model managed to capture the heterogeneous nature of hypoxic fractions and, even though the simplified computation did not considerably alter the oxygen distribution, it led to an evident underestimation of tumour hypoxia, and thereby radioresistance. For a trustworthy computation of tumour oxygenation, the interaction between adjacent microvessel trees must not be neglected, which is why evaluation should be made at high resolution with the CTM, applied to the entire tumour. PMID:27861529

  14. Monitoring the distribution of prompt gamma rays in boron neutron capture therapy using a multiple-scattering Compton camera: A Monte Carlo simulation study

    NASA Astrophysics Data System (ADS)

    Lee, Taewoong; Lee, Hyounggun; Lee, Wonho

    2015-10-01

    This study evaluated the use of Compton imaging technology to monitor prompt gamma rays emitted by 10B in boron neutron capture therapy (BNCT) applied to a computerized human phantom. The Monte Carlo method, including particle-tracking techniques, was used for simulation. The distribution of prompt gamma rays emitted by the phantom during irradiation with neutron beams is closely associated with the distribution of the boron in the phantom. Maximum likelihood expectation maximization (MLEM) method was applied to the information obtained from the detected prompt gamma rays to reconstruct the distribution of the tumor including the boron uptake regions (BURs). The reconstructed Compton images of the prompt gamma rays were combined with the cross-sectional images of the human phantom. Quantitative analysis of the intensity curves showed that all combined images matched the predetermined conditions of the simulation. The tumors including the BURs were distinguishable if they were more than 2 cm apart.
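
    A minimal sketch of the MLEM update used for this kind of emission reconstruction is given below, with a random system matrix standing in for the Compton-camera response; it is not the simulation geometry or system model of the study.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Maximum likelihood expectation maximization:
    x <- x / (A^T 1) * A^T (y / (A x))."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                       # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x
        proj[proj == 0] = 1e-12                # avoid division by zero
        x *= (A.T @ (y / proj)) / sens
    return x

# Placeholder system matrix and Poisson-noisy measurements (not a Compton model).
rng = np.random.default_rng(0)
A = rng.random((200, 50))
x_true = np.zeros(50)
x_true[20:25] = 5.0                            # a small "hot" region
y = rng.poisson(A @ x_true)
x_hat = mlem(A, y)
print(x_hat[18:27].round(2))
```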

  15. Verification of echo amplitude envelope analysis method in skin tissues for quantitative follow-up of healing ulcers

    NASA Astrophysics Data System (ADS)

    Omura, Masaaki; Yoshida, Kenji; Akita, Shinsuke; Yamaguchi, Tadashi

    2018-07-01

    We aim to develop an ultrasonic tissue characterization method for the follow-up of healing ulcers by diagnosing collagen fibers properties. In this paper, we demonstrated a computer simulation with simulation phantoms reflecting irregularly distributed collagen fibers to evaluate the relationship between physical properties, such as number density and periodicity, and the estimated characteristics of the echo amplitude envelope using the homodyned-K distribution. Moreover, the consistency between echo signal characteristics and the structures of ex vivo human tissues was verified from the measured data of normal skin and nonhealed ulcers. In the simulation study, speckle or coherent signal characteristics are identified as periodically or uniformly distributed collagen fibers with high number density and high periodicity. This result shows the effectiveness of the analysis using the homodyned-K distribution for tissues with complicated structures. Normal skin analysis results are characterized as including speckle or low-coherence signal components, and a nonhealed ulcer is different from normal skin with respect to the physical properties of collagen fibers.

  16. Application of Monte Carlo techniques to transient thermal modeling of cavity radiometers having diffuse-specular surfaces

    NASA Technical Reports Server (NTRS)

    Mahan, J. R.; Eskin, L. D.

    1981-01-01

    A viable alternative to the net exchange method of radiative analysis which is equally applicable to diffuse and diffuse-specular enclosures is presented. It is particularly more advantageous to use than the net exchange method in the case of a transient thermal analysis involving conduction and storage of energy as well as radiative exchange. A new quantity, called the distribution factor is defined which replaces the angle factor and the configuration factor. Once obtained, the array of distribution factors for an ensemble of surface elements which define an enclosure permits the instantaneous net radiative heat fluxes to all of the surfaces to be computed directly in terms of the known surface temperatures at that instant. The formulation of the thermal model is described, as is the determination of distribution factors by application of a Monte Carlo analysis. The results show that when fewer than 10,000 packets are emitted, an unsatisfactory approximation for the distribution factors is obtained, but that 10,000 packets is sufficient.

  17. Chance-Constrained Day-Ahead Hourly Scheduling in Distribution System Operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Huaiguang; Zhang, Yingchen; Muljadi, Eduard

    This paper proposes a two-step approach for day-ahead hourly scheduling in distribution system operation, which considers two operation costs: the operation cost at the substation level and at the feeder level. In the first step, the objective is to minimize the electric power purchase from the day-ahead market with stochastic optimization. The historical data of day-ahead hourly electric power consumption are used to provide the forecast results with the forecasting error, which is represented by a chance constraint and formulated into a deterministic form by a Gaussian mixture model (GMM). In the second step, the objective is to minimize the system loss. Considering the nonconvexity of the three-phase balanced AC optimal power flow problem in distribution systems, a second-order cone program (SOCP) is used to relax the problem. Then, a distributed optimization approach is built based on the alternating direction method of multipliers (ADMM). The results show the validity and effectiveness of the method.
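
    A minimal sketch of the first-step idea, turning a chance constraint on uncertain demand into a deterministic purchase bound by inverting the CDF of a fitted Gaussian mixture, is shown below. The mixture size, confidence level, and synthetic demand data are illustrative assumptions; the SOCP relaxation and ADMM step are not sketched.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

# Illustrative historical hourly demand samples (MW), bimodal on purpose.
rng = np.random.default_rng(0)
demand = np.concatenate([rng.normal(80, 5, 700), rng.normal(95, 8, 300)])

gmm = GaussianMixture(n_components=2, random_state=0).fit(demand.reshape(-1, 1))

def gmm_cdf(x):
    """CDF of the fitted Gaussian mixture evaluated at x."""
    w = gmm.weights_
    mu = gmm.means_.ravel()
    sd = np.sqrt(gmm.covariances_.ravel())
    return np.sum(w * norm.cdf(x, mu, sd))

# Chance constraint P(purchase >= demand) >= 1 - eps becomes a deterministic bound:
eps = 0.05
purchase = brentq(lambda x: gmm_cdf(x) - (1 - eps),
                  demand.min() - 50, demand.max() + 50)
print("day-ahead purchase covering demand with 95% probability:", purchase)
```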

  18. A data centred method to estimate and map how the local distribution of daily precipitation is changing

    NASA Astrophysics Data System (ADS)

    Chapman, Sandra; Stainforth, David; Watkins, Nick

    2014-05-01

    Estimates of how our climate is changing are needed locally in order to inform adaptation planning decisions. This requires quantifying the geographical patterns in changes at specific quantiles in distributions of variables such as daily temperature or precipitation. Here we focus on these local changes and on a method to transform daily observations of precipitation into patterns of local climate change. We develop a method[1] for analysing local climatic timeseries to assess which quantiles of the local climatic distribution show the greatest and most robust changes, specifically addressing the challenges presented by daily precipitation data. We extract from the data quantities that characterize the changes in time of the likelihood of daily precipitation above a threshold and of the relative amount of precipitation on those days. Our method is a simple mathematical deconstruction of how the difference between two observations from two different time periods can be assigned to the combination of natural statistical variability and/or the consequences of secular climate change. This deconstruction facilitates an assessment of how fast different quantiles of precipitation distributions are changing. This involves determining not only which quantiles and geographical locations show the greatest change but also those at which any change is highly uncertain. We demonstrate this approach using E-OBS gridded data[2] timeseries of local daily precipitation from specific locations across Europe over the last 60 years. We treat geographical location and precipitation as independent variables and thus obtain as outputs the pattern of change at a given threshold of precipitation and with geographical location. This is model-independent, thus providing data of direct value in model calibration and assessment. Our results show regionally consistent patterns of systematic increase in precipitation on the wettest days, and of drying across all days, which is of potential value in adaptation planning. [1] S. C. Chapman, D. A. Stainforth, N. W. Watkins, 2013, On Estimating Local Long Term Climate Trends, Phil. Trans. R. Soc. A, 371, 20120287; D. A. Stainforth, S. C. Chapman, N. W. Watkins, 2013, Mapping climate change in European temperature distributions, Environ. Res. Lett., 8, 034031. [2] Haylock, M. R., N. Hofstra, A. M. G. Klein Tank, E. J. Klok, P. D. Jones and M. New, 2008: A European daily high-resolution gridded dataset of surface temperature and precipitation. J. Geophys. Res. (Atmospheres), 113, D20119.
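
    A minimal sketch of the quantile-by-quantile comparison underlying this kind of analysis is given below: wet-day precipitation quantiles are computed for two periods and differenced. The wet-day threshold and the synthetic gamma-distributed data are illustrative placeholders, not the E-OBS processing chain or the authors' deconstruction of variability versus trend.

```python
import numpy as np

def quantile_change(p_early, p_late, wet_threshold=1.0, qs=(0.5, 0.9, 0.99)):
    """Change in wet-day precipitation quantiles between two periods (mm/day)."""
    early = p_early[p_early >= wet_threshold]
    late = p_late[p_late >= wet_threshold]
    return {q: np.quantile(late, q) - np.quantile(early, q) for q in qs}

# Synthetic daily precipitation for two 30-year periods (illustrative only).
rng = np.random.default_rng(0)
p_early = rng.gamma(0.4, 6.0, 30 * 365)
p_late = rng.gamma(0.4, 6.6, 30 * 365)
print(quantile_change(p_early, p_late))
```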

  19. [The reconstruction of two-dimensional distributions of gas concentration in the flat flame based on tunable laser absorption spectroscopy].

    PubMed

    Jiang, Zhi-Shen; Wang, Fei; Xing, Da-Wei; Xu, Ting; Yan, Jian-Hua; Cen, Ke-Fa

    2012-11-01

    An experimental method using tunable diode laser absorption spectroscopy, combined with a model and reconstruction algorithm, was studied to reconstruct the two-dimensional distribution of gas concentration. The feasibility of the reconstruction program was verified by numerical simulation. A diagnostic system consisting of 24 lasers was built for the measurement of H2O in the methane/air premixed flame. The two-dimensional distribution of H2O concentration in the flame was reconstructed, showing that the reconstruction results reflect the real two-dimensional distribution of H2O concentration in the flame. This diagnostic scheme provides a promising solution for combustion control.

  20. Mesoscale mapping of available solar energy at the earth's surface by use of satellites

    NASA Technical Reports Server (NTRS)

    Hiser, H. W.; Senn, H. V.

    1980-01-01

    A method is presented for use of cloud images in the visual spectrum from the SMS/GOES geostationary satellites to determine the hourly distribution of sunshine on the mesoscale. Cloud coverage and density as a function of time of day and season are evaluated through the use of digital data processing techniques. Seasonal geographic distributions of cloud cover/sunshine are converted to joules of solar radiation received at the earth's surface through relationships developed from long-term measurements of these two parameters at six widely distributed stations. The technique can be used to generate maps showing the geographic distribution of total solar radiation on the mesoscale which is received at the earth's surface.

  1. Selected Theoretical Studies Group contributions to the 14th International Cosmic Ray conference. [including studies on galactic molecular hydrogen, interstellar reddening, and on the origin of cosmic rays

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The galactic distribution of H2 was studied through gamma radiation and through X-ray, optical, and infrared absorption measurements from SAS-2 and other sources. A comparison of the latitude distribution of gamma-ray intensity with reddening data shows reddening data to give the best estimate of interstellar gas in the solar vicinity. The distribution of galactic cosmic ray nucleons was determined and appears to be identical to the supernova remnant distribution. Interactions between ultrahigh energy cosmic-ray nuclei and intergalactic photon radiation fields were calculated, using the Monte Carlo method.

  2. Controlling the Laser Guide Star power density distribution at Sodium layer by combining Pre-correction and Beam-shaping

    NASA Astrophysics Data System (ADS)

    Huang, Jian; Wei, Kai; Jin, Kai; Li, Min; Zhang, YuDong

    2018-06-01

    The Sodium laser guide star (LGS) plays a key role in modern astronomical Adaptive Optics Systems (AOSs). The spot size and photon return of the Sodium LGS depend strongly on the laser power density distribution at the Sodium layer and thus affect the performance of the AOS. The power density distribution is degraded by turbulence in the uplink path, launch system aberrations, the beam quality of the laser, and so forth. Even without any aberrations, the TE00 Gaussian type is still not the optimal power density distribution to obtain the best balance between the measurement error and temporal error. To optimize and control the LGS power density distribution at the Sodium layer to an expected distribution type, a method that combines pre-correction and beam-shaping is proposed. A typical result shows that under strong turbulence (Fried parameter (r0) of 5 cm) and for a quasi-continuous wave Sodium laser (power (P) of 15 W), in the best case, our method can effectively optimize the distribution from the Gaussian type to the "top-hat" type and enhance the photon return flux of the Sodium LGS; at the same time, the total error of the AOS is decreased by 36% with our technique for a high power laser and poor seeing.

  3. Sharp Boundary Inversion of 2D Magnetotelluric Data using Bayesian Method.

    NASA Astrophysics Data System (ADS)

    Zhou, S.; Huang, Q.

    2017-12-01

    Conventional magnetotelluric (MT) inversion methods usually cannot show the distribution of underground resistivity with clear boundaries, even when there are obviously distinct blocks. Aiming to solve this problem, we develop a Bayesian framework to invert 2D MT data for sharp boundaries, using the boundary location and block resistivity as the random variables. Firstly, we use other MT inversion results, such as ModEM, to analyze the resistivity distribution roughly. Then, we select suitable random variables and convert them to traditional staggered-grid parameters, which are used in the finite-difference forward modelling. Finally, we shape the posterior probability density (PPD), which contains all the prior information and the model-data correlation, by Markov chain Monte Carlo (MCMC) sampling from the prior distribution. The depth, resistivity and their uncertainties can then be evaluated, and the method also provides sensitivity estimates. We applied the method to a synthetic case, which comprises two large anomalous blocks in a simple background. When we apply boundary-smoothness and near-true-model weighting constraints that mimic joint or constrained inversion, we find that the model yields a more precise and focused depth distribution. We also test the inversion without constraints and find that the boundary can still be recovered, though not as well. Both inversions give a good estimate of the resistivity. The constrained result has a lower root mean square misfit than the ModEM inversion result. The data sensitivity obtained via the PPD shows that the resistivity is the most sensitive parameter, the center depth comes second, and the boundary sides are the least sensitive.

  4. Predicting Stress vs. Strain Behaviors of Thin-Walled High Pressure Die Cast Magnesium Alloy with Actual Pore Distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Kyoo Sil; Barker, Erin; Cheng, Guang

    2016-01-06

    In this paper, a three-dimensional (3D) microstructure-based finite element modeling method (i.e., an extrinsic modeling method) is developed, which can be used to examine the effects of porosity on the ductility/fracture of Mg castings. For this purpose, AM60 Mg tensile samples were produced by high-pressure die-casting in a specially designed mold. Before the tensile test, the samples were CT-scanned to obtain the pore distributions within the samples. 3D microstructure-based finite element models were then developed based on the actual pore distributions obtained for the gauge area. The input properties for the matrix material were determined by fitting the simulation result to the experimental result of a selected sample, and were then used in the simulations of all the other samples. The results show that the ductility and fracture locations predicted from the simulations agree well with the experimental results. This indicates that the developed 3D extrinsic modeling method may be used to examine the influence of various aspects of pore sizes/distributions as well as intrinsic properties (i.e., matrix properties) on the ductility/fracture of Mg castings.

  5. A path integral methodology for obtaining thermodynamic properties of nonadiabatic systems using Gaussian mixture distributions

    NASA Astrophysics Data System (ADS)

    Raymond, Neil; Iouchtchenko, Dmitri; Roy, Pierre-Nicholas; Nooijen, Marcel

    2018-05-01

    We introduce a new path integral Monte Carlo method for investigating nonadiabatic systems in thermal equilibrium and demonstrate an approach to reducing stochastic error. We derive a general path integral expression for the partition function in a product basis of continuous nuclear and discrete electronic degrees of freedom without the use of any mapping schemes. We separate our Hamiltonian into a harmonic portion and a coupling portion; the partition function can then be calculated as the product of a Monte Carlo estimator (of the coupling contribution to the partition function) and a normalization factor (that is evaluated analytically). A Gaussian mixture model is used to evaluate the Monte Carlo estimator in a computationally efficient manner. Using two model systems, we demonstrate our approach to reduce the stochastic error associated with the Monte Carlo estimator. We show that the selection of the harmonic oscillators comprising the sampling distribution directly affects the efficiency of the method. Our results demonstrate that our path integral Monte Carlo method's deviation from exact Trotter calculations is dominated by the choice of the sampling distribution. By improving the sampling distribution, we can drastically reduce the stochastic error leading to lower computational cost.
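
    A minimal sketch of the sampling idea, not the authors' implementation: configurations are drawn from a Gaussian mixture (standing in for the harmonic sampling distribution chosen above), and a partition-function factor is estimated as a Monte Carlo ratio whose stochastic error depends directly on that choice. The one-dimensional toy weight below is assumed purely for illustration.

      import numpy as np

      rng = np.random.default_rng(0)

      # Gaussian mixture sampling distribution (weights, means, widths are illustrative)
      weights = np.array([0.5, 0.5])
      means = np.array([-1.0, 1.0])
      stds = np.array([0.7, 0.9])

      def sample_gmm(n):
          comp = rng.choice(len(weights), size=n, p=weights)
          return rng.normal(means[comp], stds[comp])

      def gmm_pdf(x):
          z = (x[:, None] - means) / stds
          comps = np.exp(-0.5 * z ** 2) / (stds * np.sqrt(2.0 * np.pi))
          return comps @ weights

      def target_weight(x):
          # toy Boltzmann-like weight standing in for the coupling contribution
          return np.exp(-0.5 * x ** 2 - 0.1 * x ** 4)

      x = sample_gmm(100000)
      w = target_weight(x) / gmm_pdf(x)                # importance weights
      print(w.mean(), w.std() / np.sqrt(len(w)))       # estimator and its stochastic error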

  6. Model and reconstruction of a K-edge contrast agent distribution with an X-ray photon-counting detector

    PubMed Central

    Meng, Bo; Cong, Wenxiang; Xi, Yan; De Man, Bruno; Yang, Jian; Wang, Ge

    2017-01-01

    Contrast-enhanced computed tomography (CECT) improves the visibility of tumors in imaging. When a high-Z contrast agent interacts with X-rays across its K-edge, its X-ray photoelectric absorption increases abruptly, resulting in a significant difference in X-ray transmission intensity between the energy windows just below and just above the K-edge. Using photon-counting detectors, the X-ray intensity data in the two windows can be measured simultaneously. The difference between the two intensity measurements reflects the contrast-agent concentration distribution. Differences in K-edge energies between materials provide opportunities for the identification of contrast agents in biomedical applications. In this paper, a general Radon transform is established to link the contrast-agent concentration to the X-ray intensity measurement data. An iterative algorithm is proposed to reconstruct the contrast-agent distribution and the tissue attenuation background simultaneously. Comprehensive numerical simulations are performed to demonstrate the merits of the proposed method over existing K-edge imaging methods. Our results show that the proposed method accurately quantifies the distribution of a contrast agent, optimizing the contrast-to-noise ratio at high dose efficiency. PMID:28437900
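
    The K-edge principle above can be illustrated with a one-ray numerical example (the attenuation coefficients and path lengths below are assumed for illustration, not taken from the paper): because the tissue attenuation is nearly equal in the two energy windows, the difference of log-transmissions isolates the contrast-agent line integral.

      import numpy as np

      mu_tissue = 0.20                            # 1/cm, roughly equal in both windows (assumed)
      mu_agent_left, mu_agent_right = 1.0, 4.5    # contrast agent below/above the K-edge (assumed)
      tissue_path, agent_path = 10.0, 0.3         # cm traversed by the ray (assumed)

      I0 = 1.0
      I_left = I0 * np.exp(-(mu_tissue * tissue_path + mu_agent_left * agent_path))
      I_right = I0 * np.exp(-(mu_tissue * tissue_path + mu_agent_right * agent_path))

      k_edge_signal = np.log(I_left / I_right)                   # tissue background cancels
      print(k_edge_signal / (mu_agent_right - mu_agent_left))    # recovers agent_path = 0.3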

  7. Characterizing the D2 statistic: word matches in biological sequences.

    PubMed

    Forêt, Sylvain; Wilson, Susan R; Burden, Conrad J

    2009-01-01

    Word matches are often used in sequence comparison methods, either as a measure of sequence similarity or in the first search steps of algorithms such as BLAST or BLAT. The D2 statistic is the number of matches of words of k letters between two sequences. Recent advances have been made in the characterization of this statistic and in the approximation of its distribution. Here, these results are extended to the case of approximate word matches. We compute the exact value of the variance of the D2 statistic for the case of a uniform letter distribution, and introduce a method to provide accurate approximations of the variance in the remaining cases. This enables the distribution of D2 to be approximated for typical situations arising in biological research. We apply these results to the identification of cis-regulatory modules, and show that this method detects such sequences with a high accuracy. The ability to approximate the distribution of D2 for both exact and approximate word matches will enable the use of this statistic in a more precise manner for sequence comparison, database searches, and identification of transcription factor binding sites.
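
    As a concrete illustration of the statistic itself (a sketch, not the authors' code), the exact-match D2 value can be computed as the sum, over all k-letter words, of the product of their occurrence counts in the two sequences:

      from collections import Counter

      def d2_exact(seq_a, seq_b, k):
          # D2 = number of matching k-word pairs between the two sequences
          counts_a = Counter(seq_a[i:i + k] for i in range(len(seq_a) - k + 1))
          counts_b = Counter(seq_b[i:i + k] for i in range(len(seq_b) - k + 1))
          return sum(n * counts_b.get(word, 0) for word, n in counts_a.items())

      print(d2_exact("ACGTACGT", "CGTACGA", 3))   # counts shared 3-mer occurrences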

  8. Generating Virtual Patients by Multivariate and Discrete Re-Sampling Techniques.

    PubMed

    Teutonico, D; Musuamba, F; Maas, H J; Facius, A; Yang, S; Danhof, M; Della Pasqua, O

    2015-10-01

    Clinical Trial Simulations (CTS) are a valuable tool for decision-making during drug development. However, to obtain realistic simulation scenarios, the patients included in the CTS must be representative of the target population. This is particularly important when covariate effects exist that may affect the outcome of a trial. The objective of our investigation was to evaluate and compare CTS results obtained by re-sampling from a population pool and by using multivariate distributions to simulate patient covariates. COPD was selected as the paradigm disease for the purposes of our analysis, FEV1 was used as the response measure, and the effects of a hypothetical intervention were evaluated in different populations in order to assess the predictive performance of the two methods. Our results show that the multivariate distribution method produces realistic covariate correlations, comparable to those in the real population. Moreover, it allows simulation of patient characteristics beyond the limits of the inclusion and exclusion criteria of historical protocols. Both methods, discrete re-sampling and multivariate distributions, generate realistic pools of virtual patients. However, the use of a multivariate distribution enables more flexible simulation scenarios, since it is not bound to the covariate combinations already present in the available clinical data sets.
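
    The two covariate-simulation strategies compared above can be sketched as follows (the covariates, the toy "observed" values, and the multivariate-normal assumption are illustrative, not the study's actual data or model):

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical observed pool: age (years) and baseline FEV1 (L) for 200 patients
      observed = np.column_stack([rng.normal(65.0, 8.0, 200),
                                  rng.normal(1.4, 0.4, 200)])

      # Discrete re-sampling: bootstrap whole rows of the observed pool
      idx = rng.integers(0, observed.shape[0], size=1000)
      virtual_resampled = observed[idx]

      # Multivariate distribution: fit mean and covariance, then draw new patients
      mean = observed.mean(axis=0)
      cov = np.cov(observed, rowvar=False)
      virtual_mvn = rng.multivariate_normal(mean, cov, size=1000)

      # The fitted distribution preserves covariate correlations while allowing
      # combinations beyond those present in the historical data set
      print(np.corrcoef(virtual_mvn, rowvar=False))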

  9. On the upper tail of Italian firms’ size distribution

    NASA Astrophysics Data System (ADS)

    Cirillo, Pasquale; Hüsler, Jürg

    2009-04-01

    In this paper we analyze the upper tail of the size distribution of Italian companies with limited liability belonging to the CEBI database. Size is defined in terms of net worth. In particular, we show that the largest firms follow a power law distribution, in accordance with the well-known Pareto law, and we give estimates of the shape parameter. This behavior appears to be quite persistent over time, given that over almost 20 years of observations the shape parameter always remains in the vicinity of 1.8. The power law hypothesis is also positively tested using graphical and analytical methods.
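
    One standard way to obtain such a shape estimate is a Hill-type (maximum-likelihood) tail estimator; the sketch below assumes this estimator and synthetic data, and is not necessarily the procedure used in the paper.

      import numpy as np

      def hill_estimator(sizes, k=500):
          # Hill / maximum-likelihood estimate of the Pareto shape from the k largest values
          x = np.sort(np.asarray(sizes))[::-1]   # descending order statistics
          tail, threshold = x[:k], x[k]          # k largest values and the (k+1)-th as threshold
          return k / np.sum(np.log(tail / threshold))

      # Synthetic "net worth" sample with true shape 1.8
      rng = np.random.default_rng(1)
      sample = (rng.pareto(1.8, 50000) + 1.0) * 1.0e6
      print(hill_estimator(sample))              # should be close to 1.8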

  10. Evaporation Flux Distribution of Drops on a Hydrophilic or Hydrophobic Flat Surface by Molecular Simulations.

    PubMed

    Xie, Chiyu; Liu, Guangzhi; Wang, Moran

    2016-08-16

    The evaporation flux distribution of sessile drops is investigated by molecular dynamics simulations. Three evaporation modes are classified: the diffusion-dominant mode, the substrate-heating mode, and the environment-heating mode. Both hydrophilic and hydrophobic drop-substrate interactions are considered. To quantify the position-dependent evaporation flux distribution, we propose an azimuthal-angle-based division method under the assumption that the drops have a spherical-cap shape. The modeling results show that edge evaporation, i.e., evaporation near the contact line, is enhanced for hydrophilic drops in all three modes. For hydrophilic cases, surface diffusion of liquid molecules adsorbed on the solid substrate, together with diffusion in the vapor space, plays an important role in the enhanced evaporation rate at the edge. For hydrophobic drops, the edge evaporation flux is higher for the substrate-heating mode but lower than elsewhere on the drop for the diffusion-dominant mode, while a nearly uniform distribution is found for the environment-heating mode. The evidence shows that the temperature distribution inside the drops plays a key role in the position-dependent evaporation flux.

  11. Distributed controller clustering in software defined networks.

    PubMed

    Abdelaziz, Ahmed; Fong, Ang Tan; Gani, Abdullah; Garba, Usman; Khan, Suleman; Akhunzada, Adnan; Talebian, Hamid; Choo, Kim-Kwang Raymond

    2017-01-01

    Software Defined Networking (SDN) is an emerging and promising paradigm for network management because of its centralized network intelligence. However, the centralized control architecture of software-defined networks (SDNs) brings novel challenges of reliability, scalability, fault tolerance and interoperability. In this paper, we propose a novel clustered distributed controller architecture in a real SDN setting. The distributed cluster implementation comprises multiple popular SDN controllers. The proposed mechanism is evaluated using a real-world network topology running on top of an emulated SDN environment. The results show that the proposed distributed controller clustering mechanism significantly reduces the average latency from 8.1% to 1.6% and the packet loss from 5.22% to 4.15%, compared to a distributed controller without clustering running on HP Virtual Application Network (VAN) SDN and Open Network Operating System (ONOS) controllers, respectively. Moreover, the proposed method also shows reasonable CPU utilization. Furthermore, the proposed mechanism makes it possible to handle unexpected load fluctuations while maintaining continuous network operation, even when there is a controller failure. This work is a step towards addressing the issues of reliability, scalability, fault tolerance, and interoperability.

  12. Energy-Based Wavelet De-Noising of Hydrologic Time Series

    PubMed Central

    Sang, Yan-Fang; Liu, Changming; Wang, Zhonggen; Wen, Jun; Shang, Lunyu

    2014-01-01

    De-noising is a substantial issue in hydrologic time series analysis, but it remains difficult because of the shortcomings of existing methods. In this paper an energy-based wavelet de-noising method is proposed. It removes noise by comparing the energy distribution of the series with a background energy distribution established from Monte Carlo tests. Unlike the wavelet threshold de-noising (WTD) method, which is based on thresholding wavelet coefficients, the proposed method is based on the energy distribution of the series. It can distinguish noise from deterministic components in a series, and the uncertainty of the de-noising result can be quantitatively estimated with an appropriate confidence interval, which WTD cannot do. Analysis of both synthetic and observed series verified the comparable power of the proposed method and WTD, but the de-noising process of the former is easier to operate. The results also indicate the influences of three key factors (wavelet choice, decomposition level choice and noise content) on wavelet de-noising. The wavelet should be carefully chosen when using the proposed method. The suitable decomposition level for wavelet de-noising should correspond to the deterministic sub-signal of the series with the smallest temporal scale. If a series contains too much noise, neither the proposed method nor WTD can obtain an accurate de-noising result; however, such a series behaves as pure random noise without autocorrelation, so de-noising is no longer needed. PMID:25360533
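
    A hedged sketch of the energy-based idea (not the authors' code, and assuming the noise standard deviation is known): the energy of each wavelet detail level of the series is compared with a background energy band obtained from Monte Carlo white-noise surrogates, and levels whose energy stays inside the band are suppressed.

      import numpy as np
      import pywt

      def detail_energies(x, wavelet="db4", level=4):
          coeffs = pywt.wavedec(x, wavelet, level=level)
          return coeffs, np.array([np.sum(c ** 2) for c in coeffs[1:]])  # per-level detail energy

      def noise_energy_band(n, sigma, wavelet="db4", level=4, n_sim=200, q=95):
          rng = np.random.default_rng(0)
          sims = [detail_energies(rng.normal(0.0, sigma, n), wavelet, level)[1]
                  for _ in range(n_sim)]
          return np.percentile(np.array(sims), q, axis=0)   # background energy per level

      def energy_denoise(x, sigma, wavelet="db4", level=4):
          coeffs, energies = detail_energies(x, wavelet, level)
          band = noise_energy_band(len(x), sigma, wavelet, level)
          for j, (e, b) in enumerate(zip(energies, band), start=1):
              if e <= b:                                    # indistinguishable from noise
                  coeffs[j] = np.zeros_like(coeffs[j])
          return pywt.waverec(coeffs, wavelet)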

  13. A second-order shock-expansion method applicable to bodies of revolution near zero lift

    NASA Technical Reports Server (NTRS)

    1957-01-01

    A second-order shock-expansion method applicable to bodies of revolution is developed by the use of the predictions of the generalized shock-expansion method in combination with characteristics theory. Equations defining the zero-lift pressure distributions and the normal-force and pitching-moment derivatives are derived. Comparisons with experimental results show that the method is applicable at values of the similarity parameter, the ratio of free-stream Mach number to nose fineness ratio, from about 0.4 to 2.

  14. Distributed processing method for arbitrary view generation in camera sensor network

    NASA Astrophysics Data System (ADS)

    Tehrani, Mehrdad P.; Fujii, Toshiaki; Tanimoto, Masayuki

    2003-05-01

    A camera sensor network is a network in which each sensor node can capture video signals, process them, and communicate with other nodes. The processing task in this network is to generate an arbitrary view, which can be requested by a central node or a user. To avoid unnecessary communication between nodes in the camera sensor network and to speed up the processing time, we distribute the processing tasks among the nodes. In this method, each sensor node executes part of the interpolation algorithm to generate the interpolated image, with only local communication between nodes. The processing task in the camera sensor network is ray-space interpolation, an object-independent method based on MSE minimization using adaptive filtering. Two methods are proposed for distributing the processing tasks, Fully Image Shared Decentralized Processing (FIS-DP) and Partially Image Shared Decentralized Processing (PIS-DP), which share image data locally. Comparison of the proposed methods with the Centralized Processing (CP) method shows that FIS-DP has the highest processing speed, followed by PIS-DP, while CP has the lowest. The communication rates of CP and PIS-DP are almost the same and better than that of FIS-DP. PIS-DP is therefore recommended because of its better overall performance than CP and FIS-DP.

  15. Methods of Visually Determining the Air Flow Around Airplanes

    NASA Technical Reports Server (NTRS)

    Gough, Melvin N; Johnson, Ernest

    1932-01-01

    This report describes methods used by the National Advisory Committee for Aeronautics to study visually the air flow around airplanes. The use of streamers, oil and exhaust gas streaks, lampblack and kerosene, powdered materials, and kerosene smoke is briefly described. The generation and distribution of smoke from candles and from titanium tetrachloride are described in greater detail because they appear most advantageous for general application. Examples are included showing results of the various methods.

  16. Preparation and characterization of nanoparticles of carboxymethyl cellulose acetate butyrate containing acyclovir

    NASA Astrophysics Data System (ADS)

    Vedula, Venkata Bharadwaz; Chopra, Maulick; Joseph, Emil; Mazumder, Sonal

    2016-02-01

    Nanoparticles of carboxymethyl cellulose acetate butyrate complexed with the poorly soluble antiviral drug acyclovir (ACV) were produced by a precipitation process, and the formulation process and properties of the nanoparticles were investigated. Two different particle synthesis methods were explored: a conventional precipitation method and rapid precipitation in a multi-inlet vortex mixer. The particles were processed by rotary evaporation (rotavap) followed by freeze-drying. Particle diameters, as measured by dynamic light scattering, depended on the synthesis method used. The conventional precipitation method did not give the desired particle size distribution, whereas particles prepared in the mixer showed well-defined sizes of about 125 and 450 nm before and after freeze-drying, respectively, with narrow polydispersity indices. Fourier transform infrared spectroscopy showed the chemical stability and intactness of the entrapped drug in the nanoparticles. Differential scanning calorimetry showed that the drug was in an amorphous state in the polymer matrix. The ACV drug loading was around 10 wt%. Release studies showed an increased solution concentration of the drug from the nanoparticles compared to the as-received crystalline drug.

  17. Evaluation of nonrigid registration models for interfraction dose accumulation in radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Janssens, Guillaume; Orban de Xivry, Jonathan; Fekkes, Stein

    2009-09-15

    Purpose: Interfraction dose accumulation is necessary to evaluate the dose distribution of an entire course of treatment by adding up multiple dose distributions of different treatment fractions. This accumulation of dose distributions is not straightforward, as changes in the patient anatomy may occur during treatment. For this purpose, the accuracy of nonrigid registration methods is assessed for dose accumulation based on the calculated deformation fields. Methods: A phantom study using a deformable cubic silicone phantom with implanted markers and a cylindrical silicone phantom with MOSFET detectors was performed. The phantoms were deformed and images were acquired using a cone-beam CT imager. Dose calculations were performed on these CT scans using the treatment planning system. Nonrigid CT-based registration was performed using two different methods, the Morphons and the Demons. The resulting deformation field was applied to the dose distribution. For both phantoms, the accuracy of the registered dose distribution was assessed. For the cylindrical phantom, the measured dose values in the deformed conditions were also compared with the dose values of the registered dose distributions. Finally, interfraction dose accumulation for two treatment fractions of a patient with primary rectal cancer was performed and evaluated using isodose lines and the dose-volume histograms of the target volume and normal tissue. Results: A significant decrease in the difference in marker or MOSFET position was observed after nonrigid registration (p<0.001) for both phantoms and with both methods, as well as a significant decrease in the dose estimation error (p<0.01 for the cubic phantom and p<0.001 for the cylindrical one) with both methods. Considering the whole data set at once, the difference between estimated and measured doses was also significantly decreased by registration (p<0.001 for both methods). The patient case showed a slightly underdosed planning target volume and an overdosed bladder volume due to anatomical deformations. Conclusions: Dose accumulation using nonrigid registration methods is possible using repeated CT imaging. This opens possibilities for interfraction dose accumulation and adaptive radiotherapy to account for differences in dose delivered to the target volume and organs at risk due to anatomical deformations.
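
    The accumulation step itself can be sketched as resampling one fraction's dose grid through the displacement field produced by the registration (Morphons or Demons in the study) and adding it to the reference dose. The arrays, field, and linear interpolation below are illustrative assumptions, not the authors' implementation.

      import numpy as np
      from scipy.ndimage import map_coordinates

      def warp_dose(dose, displacement):
          # dose:         (nz, ny, nx) dose grid of one fraction
          # displacement: (3, nz, ny, nx) field mapping reference voxels into that fraction
          grid = np.indices(dose.shape).astype(float)     # reference voxel coordinates
          coords = grid + displacement                    # where each reference voxel maps to
          return map_coordinates(dose, coords, order=1, mode="nearest")

      # Toy accumulation of two fractions on the reference anatomy
      rng = np.random.default_rng(0)
      dose_ref = rng.random((20, 32, 32))
      dose_frac2 = rng.random((20, 32, 32))
      field = 0.5 * rng.standard_normal((3, 20, 32, 32))  # stand-in deformation field (voxels)
      accumulated = dose_ref + warp_dose(dose_frac2, field)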

  18. A new stochastic algorithm for inversion of dust aerosol size distribution

    NASA Astrophysics Data System (ADS)

    Wang, Li; Li, Feng; Yang, Ma-ying

    2015-08-01

    Dust aerosol size distribution is an important source of information about atmospheric aerosols, and it can be determined from multiwavelength extinction measurements. This paper describes a stochastic inverse technique based on the artificial bee colony (ABC) algorithm for inverting the dust aerosol size distribution from light extinction measurements. The direct problems for the size distributions of water drops and dust particles, which are the main components of atmospheric aerosols, are solved by Mie theory and the Lambert-Beer law in the multispectral region. The parameters of three widely used functions, i.e. the log-normal distribution (L-N), the Junge distribution (J-J), and the normal distribution (N-N), which can provide the most useful representations of aerosol size distributions, are then retrieved by the ABC algorithm under the dependent model. Numerical results show that the ABC algorithm can be successfully applied to recover the aerosol size distribution with high feasibility and reliability, even in the presence of random noise.
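
    A compact sketch of the optimization loop (employed, onlooker, and scout phases of a basic ABC algorithm). The forward model below is a crude stand-in rather than the Mie/Lambert-Beer kernel used in the paper, and all parameter values are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(0)

      def forward(params, wavelengths):
          # Toy spectral extinction for a log-normal distribution with parameters (mu, sigma);
          # a real implementation would use a Mie extinction kernel here.
          mu, sigma = params
          return np.exp(-((np.log(wavelengths) - mu) ** 2) / (2.0 * sigma ** 2))

      wavelengths = np.linspace(0.4, 10.0, 20)
      measured = forward((1.0, 0.6), wavelengths) + 0.01 * rng.standard_normal(20)

      def cost(p):
          return np.sum((forward(p, wavelengths) - measured) ** 2)

      def abc_minimize(cost, bounds, n_sources=20, limit=30, n_iter=300):
          lo, hi = bounds[:, 0], bounds[:, 1]
          sources = lo + (hi - lo) * rng.random((n_sources, bounds.shape[0]))
          costs = np.array([cost(s) for s in sources])
          trials = np.zeros(n_sources, dtype=int)

          def try_neighbor(i):
              k = (i + 1 + rng.integers(n_sources - 1)) % n_sources   # partner source != i
              j = rng.integers(bounds.shape[0])                       # perturb one dimension
              cand = sources[i].copy()
              cand[j] += rng.uniform(-1.0, 1.0) * (sources[i, j] - sources[k, j])
              cand = np.clip(cand, lo, hi)
              c = cost(cand)
              if c < costs[i]:
                  sources[i], costs[i], trials[i] = cand, c, 0        # greedy selection
              else:
                  trials[i] += 1

          for _ in range(n_iter):
              for i in range(n_sources):                              # employed bees
                  try_neighbor(i)
              fitness = 1.0 / (1.0 + costs)
              for i in rng.choice(n_sources, n_sources, p=fitness / fitness.sum()):
                  try_neighbor(i)                                     # onlooker bees
              for i in np.flatnonzero(trials > limit):                # scout bees
                  sources[i] = lo + (hi - lo) * rng.random(bounds.shape[0])
                  costs[i], trials[i] = cost(sources[i]), 0

          best = int(np.argmin(costs))
          return sources[best], costs[best]

      best, best_cost = abc_minimize(cost, np.array([[0.0, 3.0], [0.1, 2.0]]))
      print(best, best_cost)   # roughly recovers (1.0, 0.6)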

  19. Optimization design of multiphase pump impeller based on combined genetic algorithm and boundary vortex flux diagnosis

    NASA Astrophysics Data System (ADS)

    Zhang, Jin-ya; Cai, Shu-jie; Li, Yong-jiang; Zhang, Yong-xue

    2017-12-01

    A novel optimization design method for the multiphase pump impeller is proposed by combining the quasi-3D hydraulic design (Q3DHD), the boundary vortex flux (BVF) diagnosis, and the genetic algorithm (GA). The BVF diagnosis based on the Q3DHD is used to evaluate the objective function. Numerical simulations and hydraulic performance tests are carried out to compare the impeller designed only by the Q3DHD method with that optimized by the presented method. The comparisons of the flow fields simulated under the same conditions show that (1) the pressure distribution in the optimized impeller is more reasonable and gas-liquid separation is more effectively inhibited, (2) the scales of the gas pocket and the vortex decrease remarkably for the optimized impeller, and (3) the unevenness of the BVF distribution near the shroud of the original impeller is effectively eliminated in the optimized impeller. The experimental results show that the differential pressure and the maximum efficiency of the optimized impeller are increased by 4% and 2.5%, respectively. Overall, the study indicates that the optimization design method proposed in this paper is feasible.

  20. Seismic passive earth resistance using modified pseudo-dynamic method

    NASA Astrophysics Data System (ADS)

    Pain, Anindya; Choudhury, Deepankar; Bhattacharyya, S. K.

    2017-04-01

    In earthquake-prone areas, understanding the seismic passive earth resistance is very important for the design of different geotechnical earth-retaining structures. In this study, the limit equilibrium method is used to estimate the critical seismic passive earth resistance for an inclined wall supporting a horizontal cohesionless backfill. A composite failure surface is considered in the present analysis. Seismic forces are computed by treating the backfill soil as a viscoelastic material overlying a rigid stratum that is subjected to harmonic shaking. The present method satisfies the boundary conditions. The amplification of acceleration depends on the properties of the backfill soil and on the characteristics of the input motion. The acceleration distribution along the depth of the backfill is found to be nonlinear. The present study shows that the horizontal and vertical acceleration distributions in the backfill soil are not always in phase at the critical value of the seismic passive earth pressure coefficient. The effect of different parameters on the seismic passive earth pressure is studied in detail. A comparison of the present method with other theories is also presented, which shows the merits of the present study.
