Science.gov

Sample records for Weibull distribution analysis

  1. q-exponential, Weibull, and q-Weibull distributions: an empirical analysis

    NASA Astrophysics Data System (ADS)

    Picoli, S.; Mendes, R. S.; Malacarne, L. C.

    2003-06-01

    In a comparative study, the q-exponential and Weibull distributions are employed to investigate frequency distributions of basketball baskets, cyclone victims, brand-name drugs by retail sales, and highway length. In order to analyze the intermediate cases, a distribution that interpolates between the q-exponential and Weibull ones, the q-Weibull distribution, is introduced. It is verified that the basketball baskets distribution is well described by a q-exponential, whereas the cyclone victims and brand-name drugs by retail sales distributions are better fitted by a Weibull distribution. On the other hand, for highway length neither the q-exponential nor the Weibull distribution gives a satisfactory fit, and it is necessary to employ the q-Weibull distribution. Furthermore, the introduction of this interpolating distribution sheds light on the controversy between stretched-exponential and inverse power-law (q-exponential with q>1) descriptions.
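
    The interpolation described above can be made concrete in a few lines of Python. The sketch below uses one common convention for the Tsallis q-exponential and the q-Weibull survival function (conventions differ between papers, e.g. in extra (2-q) normalization factors), and all parameter values are illustrative rather than taken from the record.

      import numpy as np

      def e_q(x, q):
          # Tsallis q-exponential: reduces to exp(x) as q -> 1.
          x = np.asarray(x, dtype=float)
          if abs(q - 1.0) < 1e-12:
              return np.exp(x)
          base = np.maximum(1.0 + (1.0 - q) * x, 0.0)
          return base ** (1.0 / (1.0 - q))

      def qweibull_survival(x, q, lam, kappa):
          # One common q-Weibull survival form: kappa = 1 recovers the
          # q-exponential, and q -> 1 recovers the ordinary Weibull, so the
          # family interpolates between the two limiting cases.
          return e_q(-(np.asarray(x, dtype=float) / lam) ** kappa, q)

      x = np.linspace(0.0, 5.0, 6)
      print(qweibull_survival(x, q=1.0, lam=1.0, kappa=2.0))  # Weibull limit
      print(qweibull_survival(x, q=1.3, lam=1.0, kappa=1.0))  # q-exponential (power-law tail)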

  2. Reliability analysis of DOOF for Weibull distribution.

    PubMed

    Chen, Wen-Hua; Cui, Jie; Fan, Xiao-Yan; Lu, Xian-Biao; Xiang, Ping

    2003-01-01

    A hierarchical Bayesian method is proposed for estimating the failure probability p(i) under DOOF, taking the quasi-Beta distribution B(p(i-1), 1, 1, b) as the prior distribution. The weighted least-squares estimation method was used to obtain the formula for computing reliability distribution parameters and estimating the reliability characteristic values under DOOF. Taking one type of aerospace electrical connector as an example, the correctness of the above method was verified through statistical analysis of accelerated life test data. PMID:12861622

  3. Using Weibull Distribution Analysis to Evaluate ALARA Performance

    SciTech Connect

    E. L. Frome, J. P. Watkins, and D. A. Hagemeyer

    2009-10-01

    As Low as Reasonably Achievable (ALARA) is the underlying principle for protecting nuclear workers from potential health outcomes related to occupational radiation exposure. Radiation protection performance is currently evaluated by measures such as collective dose and average measurable dose, which do not indicate ALARA performance. The purpose of this work is to show how statistical modeling of individual doses using the Weibull distribution can provide objective supplemental performance indicators for comparing ALARA implementation among sites and for insights into ALARA practices within a site. Maximum likelihood methods were employed to estimate the Weibull shape and scale parameters used for performance indicators. The shape parameter reflects the effectiveness of maximizing the number of workers receiving lower doses and is represented as the slope of the fitted line on a Weibull probability plot. Additional performance indicators derived from the model parameters include the 99th percentile and the exceedance fraction. When grouping sites by collective total effective dose equivalent (TEDE) and ranking by 99th percentile with confidence intervals, differences in performance among sites can be readily identified. Applying this methodology will enable more efficient and complete evaluation of the effectiveness of ALARA implementation.
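
    As a sketch of the modeling step described above, the following Python fragment fits a two-parameter Weibull to individual doses by maximum likelihood and derives the two performance indicators named in the abstract (the 99th percentile and the exceedance fraction). The dose data and the limit are synthetic, for illustration only.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      doses = rng.weibull(0.8, size=500) * 2.0   # synthetic "measurable doses" (mSv)

      # Two-parameter Weibull MLE: location fixed at zero.
      shape, loc, scale = stats.weibull_min.fit(doses, floc=0)

      p99 = stats.weibull_min.ppf(0.99, shape, scale=scale)         # 99th-percentile dose
      limit = 5.0                                                   # hypothetical dose of interest (mSv)
      exceedance = stats.weibull_min.sf(limit, shape, scale=scale)  # fraction above the limit

      print(f"shape={shape:.3f} scale={scale:.3f} p99={p99:.2f} mSv "
            f"P(dose > {limit} mSv)={exceedance:.4f}")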

  4. Reliability analysis of structural ceramic components using a three-parameter Weibull distribution

    NASA Technical Reports Server (NTRS)

    Duffy, Stephen F.; Powers, Lynn M.; Starlinger, Alois

    1992-01-01

    Described here are nonlinear regression estimators for the three-parameter Weibull distribution. Issues relating to the bias and invariance associated with these estimators are examined numerically using Monte Carlo simulation methods. The estimators were used to extract parameters from sintered silicon nitride failure data. A reliability analysis was performed on a turbopump blade utilizing the three-parameter Weibull distribution and the estimates from the sintered silicon nitride data.

  5. Reliability analysis of structural ceramic components using a three-parameter Weibull distribution

    NASA Technical Reports Server (NTRS)

    Duffy, Stephen F.; Powers, Lynn M.; Starlinger, Alois

    1992-01-01

    Described here are nonlinear regression estimators for the three-parameter Weibull distribution. Issues relating to the bias and invariance associated with these estimators are examined numerically using Monte Carlo simulation methods. The estimators were used to extract parameters from sintered silicon nitride failure data. A reliability analysis was performed on a turbopump blade utilizing the three-parameter Weibull distribution and the estimates from the sintered silicon nitride data.

  6. Statistical analysis of censored motion sickness latency data using the two-parameter Weibull distribution

    NASA Technical Reports Server (NTRS)

    Park, Won J.; Crampton, George H.

    1988-01-01

    The suitability of the two-parameter Weibull distribution for describing highly censored cat motion sickness latency data was evaluated by estimating the parameters with the maximum likelihood method and testing for goodness of fit with the Kolmogorov-Smirnov statistic. A procedure for determining confidence levels and testing for significance of the difference between Weibull parameters is described. Computer programs for these procedures may be obtained from an archival source.
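
    For readers who want to reproduce this kind of analysis, a minimal sketch of maximum likelihood estimation for a two-parameter Weibull under heavy type-I censoring is given below (censored observations contribute the log survival function, failures the log density). The latency data are simulated, not the cat data of the record.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import weibull_min

      rng = np.random.default_rng(0)
      t_true = rng.weibull(1.5, 200) * 30.0
      t_c = 25.0                        # censoring time (illustrative)
      time = np.minimum(t_true, t_c)
      failed = t_true <= t_c            # False = no response by t_c (censored)

      def neg_log_lik(params):
          k, lam = params
          if k <= 0 or lam <= 0:
              return np.inf
          ll = weibull_min.logpdf(time[failed], k, scale=lam).sum()
          ll += weibull_min.logsf(time[~failed], k, scale=lam).sum()
          return -ll

      res = minimize(neg_log_lik, x0=[1.0, np.median(time)], method="Nelder-Mead")
      k_hat, lam_hat = res.x
      print(f"shape={k_hat:.3f}, scale={lam_hat:.3f}")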

  7. Competing risk models in reliability systems, a weibull distribution model with bayesian analysis approach

    NASA Astrophysics Data System (ADS)

    Iskandar, Ismed; Satria Gondokaryono, Yudi

    2016-02-01

    In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. The weakness of most reliability theories is that systems are described and explained as simply functioning or failed. In many real situations, failures may arise from many causes, depending upon the age and the environment of the system and its components. Another problem in reliability theory is that of estimating the parameters of the assumed failure models. The estimation may be based on data collected over censored or uncensored life tests. In many reliability problems, the failure data are simply quantitatively inadequate, especially in engineering design and maintenance systems. Bayesian analysis is more beneficial than the classical approach in such cases. Bayesian estimation allows us to combine past knowledge or experience, in the form of a prior distribution, with life test data to make inferences about the parameter of interest. In this paper, we investigate the application of Bayesian estimation to competing risk systems. The cases are limited to models with independent causes of failure, using the Weibull distribution as our model. A simulation is conducted for this distribution with the objectives of verifying the models and the estimators and of investigating the performance of the estimators for varying sample sizes. The simulation data are analyzed using both Bayesian and maximum likelihood analyses. The simulation results show that a change in the true value of one parameter relative to another changes the standard deviation in the opposite direction. Given perfect information on the prior distribution, the Bayesian estimates are better than those of maximum likelihood. The sensitivity analyses show some sensitivity to shifts of the prior locations; they also show the robustness of the Bayesian analysis within the range between the true value and the maximum likelihood estimate.
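
    A minimal simulation of the independent-causes competing risk setup described above: each unit fails at the first of two latent Weibull lifetimes, and for each cause the failures from the other cause are treated as right-censored. All parameter values are illustrative.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import weibull_min

      rng = np.random.default_rng(7)
      n = 1000
      t1 = rng.weibull(2.0, n) * 100.0   # latent lifetimes, cause 1
      t2 = rng.weibull(1.2, n) * 150.0   # latent lifetimes, cause 2
      time = np.minimum(t1, t2)          # system fails at the first cause to occur
      cause = np.where(t1 <= t2, 1, 2)

      def fit_cause(c):
          # Failures from the other cause act as right-censoring for cause c.
          obs = cause == c
          def nll(p):
              k, lam = p
              if k <= 0 or lam <= 0:
                  return np.inf
              return -(weibull_min.logpdf(time[obs], k, scale=lam).sum()
                       + weibull_min.logsf(time[~obs], k, scale=lam).sum())
          return minimize(nll, [1.0, time.mean()], method="Nelder-Mead").x

      for c in (1, 2):
          k, lam = fit_cause(c)
          print(f"cause {c}: shape={k:.2f}, scale={lam:.1f}")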

  8. Software for Statistical Analysis of Weibull Distributions with Application to Gear Fatigue Data: User Manual with Verification

    NASA Technical Reports Server (NTRS)

    Krantz, Timothy L.

    2002-01-01

    The Weibull distribution has been widely adopted for the statistical description and inference of fatigue data. This document provides user instructions, examples, and verification for software to analyze gear fatigue test data. The software was developed presuming the data are adequately modeled using a two-parameter Weibull distribution. The calculations are based on likelihood methods, and the approach taken is valid for data that include type I censoring. The software was verified by reproducing results published by others.

  9. Software for Statistical Analysis of Weibull Distributions with Application to Gear Fatigue Data: User Manual with Verification

    NASA Technical Reports Server (NTRS)

    Krantz, Timothy L.

    2002-01-01

    The Weibull distribution has been widely adopted for the statistical description and inference of fatigue data. This document provides user instructions, examples, and verification for software to analyze gear fatigue test data. The software was developed presuming the data are adequately modeled using a two-parameter Weibull distribution. The calculations are based on likelihood methods, and the approach taken is valid for data that include type I censoring. The software was verified by reproducing results published by others.

  10. Weibull Distribution From Interval Inspection Data

    NASA Technical Reports Server (NTRS)

    Rheinfurth, Mario H.

    1987-01-01

    The most likely failure sequence is assumed. The memorandum discusses the application of the Weibull distribution to statistics of failures of turbopump blades. The Weibull distribution is a generalization of the well-known exponential random probability distribution and is useful in describing component-failure modes, including aging effects. Parameters are found from experimental data by the method of maximum likelihood.
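
    With inspection-interval data, each failure is known only to lie between two inspection times, so the likelihood uses CDF differences rather than densities. A sketch with a hypothetical inspection scheme (none of the numbers below come from the record):

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import weibull_min

      # Each failure is only known to lie in (left, right]; units surviving
      # the last inspection are right-censored.
      left  = np.array([0., 100., 100., 200., 300.])
      right = np.array([100., 200., 200., 300., 400.])
      n_survivors, t_last = 20, 400.0

      def neg_log_lik(p):
          k, lam = p
          if k <= 0 or lam <= 0:
              return np.inf
          prob = (weibull_min.cdf(right, k, scale=lam)
                  - weibull_min.cdf(left, k, scale=lam))
          if np.any(prob <= 0):
              return np.inf
          return -(np.log(prob).sum()
                   + n_survivors * weibull_min.logsf(t_last, k, scale=lam))

      k_hat, lam_hat = minimize(neg_log_lik, [1.5, 300.0], method="Nelder-Mead").x
      print(f"shape={k_hat:.2f}, scale={lam_hat:.0f}")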

  11. Estimation problems associated with the Weibull distribution

    SciTech Connect

    Bowman, K O; Shenton, L R

    1981-09-01

    Series in descending powers of the sample size are developed for the moments of the coefficient of variation v* for the Weibull distribution F(t) = 1 - exp(-(t/b)^c). Similar series for the moments of the estimator c* of the shape parameter c are derived from these. Comparisons are made with basic asymptotic assessments for the means and variances. From the first four moments, approximations are given to the distributions of v* and c*. In addition, an almost unbiased estimator of c is given when a sample is provided with the value of v*. Comments are given on the validity of the asymptotically normal assessments of the distributions.
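
    The coefficient of variation of a Weibull depends only on the shape parameter c, v(c) = sqrt(Gamma(1 + 2/c) / Gamma(1 + 1/c)**2 - 1), which is why an observed v* determines an estimate of c. A short numerical inversion of that relation (a sketch of the v*-to-c* link, not the series expansions of the record):

      import numpy as np
      from scipy.special import gamma
      from scipy.optimize import brentq

      def weibull_cv(c):
          # Coefficient of variation of a Weibull with shape c (scale-free).
          g1, g2 = gamma(1.0 + 1.0 / c), gamma(1.0 + 2.0 / c)
          return np.sqrt(g2 / g1**2 - 1.0)

      def shape_from_cv(v_star):
          # v(c) is monotone decreasing in c, so a bracketed root works.
          return brentq(lambda c: weibull_cv(c) - v_star, 0.05, 50.0)

      print(shape_from_cv(0.5))   # about 2.1 for a sample CV of 0.5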

  12. Independent Orbiter Assessment (IOA): Weibull analysis report

    NASA Technical Reports Server (NTRS)

    Raffaelli, Gary G.

    1987-01-01

    The Auxiliary Power Unit (APU) and Hydraulic Power Unit (HPU) Space Shuttle Subsystems were reviewed as candidates for demonstrating the Weibull analysis methodology. Three hardware components were identified as analysis candidates: the turbine wheel, the gearbox, and the gas generator. Detailed review of subsystem level wearout and failure history revealed the lack of actual component failure data. In addition, component wearout data were not readily available or would require a separate data accumulation effort by the vendor. Without adequate component history data being available, the Weibull analysis methodology application to the APU and HPU subsystem group was terminated.

  13. Program for Weibull Analysis of Fatigue Data

    NASA Technical Reports Server (NTRS)

    Krantz, Timothy L.

    2005-01-01

    A Fortran computer program has been written for performing statistical analyses of fatigue-test data that are assumed to be adequately represented by a two-parameter Weibull distribution. This program calculates the following: (1) Maximum-likelihood estimates of the Weibull-distribution parameters; (2) Data for contour plots of relative likelihood for two parameters; (3) Data for contour plots of joint confidence regions; (4) Data for the profile likelihood of the Weibull-distribution parameters; (5) Data for the profile likelihood of any percentile of the distribution; and (6) Likelihood-based confidence intervals for parameters and/or percentiles of the distribution. The program can account for tests that are suspended without failure (the statistical term for such suspension of tests is "censoring"). The analytical approach followed in this program is valid for type-I censoring, which is the removal of unfailed units at pre-specified times. Confidence regions and intervals are calculated by use of the likelihood-ratio method.
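
    Item (6) above, likelihood-ratio confidence intervals, can be sketched in a few lines: profile out the scale parameter and keep the shape values whose profile log-likelihood stays within 0.5 * chi-square(1) of the maximum. The sample below is simulated and uncensored for brevity; this is an illustration of the method, not the program itself.

      import numpy as np
      from scipy.optimize import minimize_scalar
      from scipy.stats import weibull_min, chi2

      rng = np.random.default_rng(3)
      data = rng.weibull(2.0, 60) * 10.0

      def loglik(k, lam):
          return weibull_min.logpdf(data, k, scale=lam).sum()

      def profile(k):
          # Maximize over the scale parameter for a fixed shape k.
          res = minimize_scalar(lambda lam: -loglik(k, lam),
                                bounds=(1e-3, 100.0), method="bounded")
          return -res.fun

      k_hat, _, lam_hat = weibull_min.fit(data, floc=0)
      cutoff = loglik(k_hat, lam_hat) - 0.5 * chi2.ppf(0.95, df=1)

      ks = np.linspace(0.5 * k_hat, 2.0 * k_hat, 200)
      inside = [k for k in ks if profile(k) >= cutoff]
      print(f"shape MLE = {k_hat:.2f}, "
            f"95% LR interval approx ({min(inside):.2f}, {max(inside):.2f})")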

  14. Weibull distribution based on maximum likelihood with interval inspection data

    NASA Technical Reports Server (NTRS)

    Rheinfurth, M. H.

    1985-01-01

    The two Weibull parameters are determined by the method of maximum likelihood. The test data used were failures observed at inspection intervals. The application was the reliability analysis of the SSME oxidizer turbine blades.

  15. Bayesian estimation of life parameters in the Weibull distribution.

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.; Tsokos, C. P.

    1973-01-01

    Development of a Bayesian analysis of the scale and shape parameters in the Weibull distribution and the corresponding reliability function with respect to the usual life-testing procedures. For the scale parameter theta, Bayesian estimates of theta and reliability are obtained for the uniform, exponential, and inverted gamma prior probability densities. Bhattacharya's results (1967) for the one-parameter exponential life-testing distribution are reduced to a special case of these results. A fully Bayesian analysis of both the scale and shape parameters is developed by assuming independent prior distributions; since in the latter case, analytical tractability is not possible, Bayesian estimates are obtained through a conjunction of Monte Carlo simulation and numerical-integration techniques. In both cases, a computer simulation is carried out, and a comparison is made between the Bayesian and the corresponding minimum-variance unbiased, or maximum likelihood, estimates. As expected, the Bayesian estimates are superior.
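
    One tractable special case behind such results: if the shape c is known, y = t**c is exponential with mean theta = b**c, and an inverted-gamma prior on theta is conjugate, so the posterior and the Bayes estimate of reliability have closed forms. A sketch with illustrative hyperparameters, not the paper's full two-parameter analysis:

      import numpy as np

      c = 1.8                     # known shape (an assumption of this sketch)
      a0, b0 = 3.0, 50.0          # inverted-gamma IG(a0, b0) prior on theta

      rng = np.random.default_rng(5)
      t = rng.weibull(c, 25) * 8.0
      y = t ** c                  # exponential with mean theta = scale**c

      a_post = a0 + len(y)        # conjugate IG update
      b_post = b0 + y.sum()

      theta_bayes = b_post / (a_post - 1.0)    # posterior mean of theta

      def reliability_bayes(t_m):
          # E[exp(-t_m**c / theta)] under the IG posterior has closed form.
          return (b_post / (b_post + t_m ** c)) ** a_post

      print(f"posterior mean theta = {theta_bayes:.2f}")
      print(f"Bayes reliability at t = 8: {reliability_bayes(8.0):.3f}")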

  16. Development of a Weibull posterior distribution by combining a Weibull prior with an actual failure distribution using Bayesian inference

    NASA Technical Reports Server (NTRS)

    Giuntini, Michael E.; Giuntini, Ronald E.

    1991-01-01

    A Bayesian inference process for system logistical planning is presented which provides a method for incorporating actual failures with prediction data for ongoing and improving reliability estimates. The process uses the Weibull distribution and provides a means for examining and updating logistical and maintenance support needs.

  17. Investigation of Weibull statistics in fracture analysis of cast aluminum

    NASA Technical Reports Server (NTRS)

    Holland, Frederic A., Jr.; Zaretsky, Erwin V.

    1989-01-01

    The fracture strengths of two large batches of A357-T6 cast aluminum coupon specimens were compared by using two-parameter Weibull analysis. The minimum number of these specimens necessary to find the fracture strength of the material was determined. The applicability of three-parameter Weibull analysis was also investigated. A design methodology based on the combination of elementary stress analysis and Weibull statistical analysis is advanced and applied to the design of a spherical pressure vessel shell. The results from this design methodology are compared with results from the applicable ASME pressure vessel code.

  18. Investigation of Weibull statistics in fracture analysis of cast aluminum

    NASA Technical Reports Server (NTRS)

    Holland, F. A., Jr.; Zaretsky, E. V.

    1989-01-01

    The fracture strengths of two large batches of A357-T6 cast aluminum coupon specimens were compared by using two-parameter Weibull analysis. The minimum number of these specimens necessary to find the fracture strength of the material was determined. The applicability of three-parameter Weibull analysis was also investigated. A design methodology based on the combination of elementary stress analysis and Weibull statistical analysis is advanced and applied to the design of a spherical pressure vessel shell. The results from this design methodology are compared with results from the applicable ASME pressure vessel code.

  19. Table for estimating parameters of Weibull distribution

    NASA Technical Reports Server (NTRS)

    Mann, N. R.

    1971-01-01

    Table yields best linear invariant (BLI) estimates for log of reliable life under censored life tests, permitting reliability estimations in failure analysis of items with multiple flaws. These BLI estimates have uniformly smaller expected loss than Gauss-Markov best linear unbiased estimates.

  20. Lower bound on reliability for Weibull distribution when shape parameter is not estimated accurately

    NASA Technical Reports Server (NTRS)

    Huang, Zhaofeng; Porter, Albert A.

    1991-01-01

    The mathematical relationships between the shape parameter Beta and estimates of reliability and a life limit lower bound for the two-parameter Weibull distribution are investigated. It is shown that, under rather general conditions, both the reliability lower bound and the allowable life limit lower bound (often called a tolerance limit) have unique global minima over a range of Beta. Hence lower bound solutions can be obtained without assuming or estimating Beta. The existence and uniqueness of these lower bounds are proven. Some real data examples are given to show how these lower bounds can be easily established and to demonstrate their practicality. The method developed here has proven to be extremely useful when using the Weibull distribution in analysis of no-failure or few-failures data. The results are applicable not only in the aerospace industry but anywhere that system reliabilities are high.

  1. Lower bound on reliability for Weibull distribution when shape parameter is not estimated accurately

    NASA Technical Reports Server (NTRS)

    Huang, Zhaofeng; Porter, Albert A.

    1990-01-01

    The mathematical relationships between the shape parameter Beta and estimates of reliability and a life limit lower bound for the two-parameter Weibull distribution are investigated. It is shown that, under rather general conditions, both the reliability lower bound and the allowable life limit lower bound (often called a tolerance limit) have unique global minima over a range of Beta. Hence lower bound solutions can be obtained without assuming or estimating Beta. The existence and uniqueness of these lower bounds are proven. Some real data examples are given to show how these lower bounds can be easily established and to demonstrate their practicality. The method developed here has proven to be extremely useful when using the Weibull distribution in analysis of no-failure or few-failures data. The results are applicable not only in the aerospace industry but anywhere that system reliabilities are high.
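
    A minimal sketch of the idea for the no-failure case: n units survive a test of length t0, a binomial bound gives a lower confidence limit on R(t0), the Weibull form extrapolates it to a mission time, and the bound is then minimized over a range of Beta instead of fixing Beta. This is one common construction, not necessarily the paper's exact derivation; all numbers are hypothetical.

      import numpy as np

      n, t0 = 30, 1000.0
      alpha = 0.05                          # 95-percent confidence
      R_t0_lower = alpha ** (1.0 / n)       # zero-failure binomial bound on R(t0)

      def reliability_lower(t_m, beta):
          # Weibull extrapolation: R(t_m) = R(t0) ** ((t_m / t0) ** beta).
          return R_t0_lower ** ((t_m / t0) ** beta)

      betas = np.linspace(0.5, 5.0, 1000)   # plausible range for the shape
      t_m = 1500.0
      bounds = reliability_lower(t_m, betas)
      print(f"worst-case lower bound on R({t_m:g}): {bounds.min():.4f}")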

  2. A comparison of the generalized gamma and exponentiated Weibull distributions.

    PubMed

    Cox, Christopher; Matheson, Matthew

    2014-09-20

    This paper provides a comparison of the three-parameter exponentiated Weibull (EW) and generalized gamma (GG) distributions. The connection between these two different families is that the hazard functions of both have the four standard shapes (increasing, decreasing, bathtub, and arc shaped), and in fact, the shape of the hazard is the same for identical values of the three parameters. For a given EW distribution, we define a matching GG using simulation and also by matching the 5th, 50th, and 95th percentiles. We compare EW and matching GG distributions graphically and using the Kullback-Leibler distance. We find that the survival functions for the EW and matching GG are graphically indistinguishable, and only the hazard functions can sometimes be seen to be slightly different. The Kullback-Leibler distances are very small and decrease with increasing sample size. We conclude that the similarity between the two distributions is striking, and therefore, the EW represents a convenient alternative to the GG with the identical richness of hazard behavior. More importantly, these results suggest that having the four basic hazard shapes may to some extent be an important structural characteristic of any family of distributions. PMID:24700647
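
    Both families are available in scipy.stats (as exponweib and gengamma), so the percentile-matching and Kullback-Leibler comparison described above can be sketched directly; the EW parameters below are arbitrary illustrative choices, not values from the paper.

      import numpy as np
      from scipy import stats
      from scipy.optimize import minimize
      from scipy.integrate import quad

      ew = stats.exponweib(2.0, 1.5)             # an exponentiated Weibull
      q = np.array([0.05, 0.50, 0.95])
      targets = ew.ppf(q)

      def percentile_gap(p):
          # Squared mismatch of the 5th, 50th, and 95th percentiles.
          a, c, scale = p
          if min(a, c, scale) <= 0:
              return np.inf
          return ((stats.gengamma(a, c, scale=scale).ppf(q) - targets) ** 2).sum()

      a, c, scale = minimize(percentile_gap, [1.0, 1.5, 1.0],
                             method="Nelder-Mead").x
      gg = stats.gengamma(a, c, scale=scale)

      def kl_integrand(x):
          # Integrand of KL(EW || GG); zero where either density vanishes.
          p_, q_ = ew.pdf(x), gg.pdf(x)
          return p_ * np.log(p_ / q_) if p_ > 0 and q_ > 0 else 0.0

      kl, _ = quad(kl_integrand, 0.0, ew.ppf(1 - 1e-9), limit=200)
      print(f"matched GG: a={a:.2f} c={c:.2f} scale={scale:.2f}, KL={kl:.4g}")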

  3. Large-Scale Weibull Analysis of H-451 Nuclear-Grade Graphite Specimen Rupture Data

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Walker, Andrew; Baker, Eric H.; Murthy, Pappu L.; Bratton, Robert L.

    2012-01-01

    A Weibull analysis was performed of the strength distribution and size effects for 2000 specimens of H-451 nuclear-grade graphite. The data, generated elsewhere, measured the tensile and four-point-flexure room-temperature rupture strength of specimens excised from a single extruded graphite log. Strength variation was compared with specimen location, size, and orientation relative to the parent body. In our study, data were progressively and extensively pooled into larger data sets to discriminate overall trends from local variations and to investigate the strength distribution. The CARES/Life and WeibPar codes were used to investigate issues regarding the size effect, Weibull parameter consistency, and nonlinear stress-strain response. Overall, the Weibull distribution described the behavior of the pooled data very well. However, the issue regarding the smaller-than-expected size effect remained. This exercise illustrated that a conservative approach using a two-parameter Weibull distribution is best for designing graphite components with low probability of failure for the in-core structures in the proposed Generation IV (Gen IV) high-temperature gas-cooled nuclear reactors. This exercise also demonstrated the continuing need to better understand the mechanisms driving stochastic strength response. Extensive appendixes are provided with this report to show all aspects of the rupture data and analytical results.
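
    The size effect at issue is the weakest-link prediction that characteristic strength falls as the stressed volume grows. A two-line illustration of the expected scaling (the numbers are illustrative, not H-451 data):

      # Weakest-link scaling for a two-parameter Weibull material:
      # sigma_0 scales as (V_ref / V_new) ** (1 / m), with Weibull modulus m.
      def scaled_strength(sigma0_ref, v_ref, v_new, m):
          return sigma0_ref * (v_ref / v_new) ** (1.0 / m)

      # A 10x volume increase with m = 10 predicts ~21 percent strength loss.
      print(scaled_strength(sigma0_ref=30.0, v_ref=1.0, v_new=10.0, m=10.0))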

  4. A Weibull distribution with power-law tails that describes the first passage time processes of foreign currency exchanges

    NASA Astrophysics Data System (ADS)

    Sazuka, Naoya; Inoue, Jun-Ichi

    2007-03-01

    A Weibull distribution with power-law tails is confirmed as a good candidate to describe the first passage time process of foreign currency exchange rates. The Lorenz curve and the corresponding Gini coefficient for a Weibull distribution are derived analytically. We show that the coefficient is in good agreement with the same quantity calculated from the empirical data. We also calculate the average waiting time, which is an important measure to estimate the time customers wait until the next price change after they log in to their computer systems. By assuming that the first passage time distribution might change its shape from the Weibull to the power-law at some critical time, we evaluate the averaged waiting time by means of the renewal-reward theorem. We find that our correction of the tails of the distribution makes the averaged waiting time much closer to the value obtained from empirical data analysis. We also discuss the deviation from the estimated average waiting time by deriving the waiting time distribution directly. These results make us conclude that the first passage process of the foreign currency exchange rates is well described by a Weibull distribution with power-law tails.
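
    For a two-parameter Weibull with shape k, the Gini coefficient has the closed form G = 1 - 2**(-1/k), independent of the scale, which is what makes the comparison with the empirical value straightforward. A quick check by simulation (the shape value is illustrative):

      import numpy as np

      def gini_empirical(x):
          # Gini via the sorted-data (Lorenz-curve) formula, O(n log n).
          x = np.sort(np.asarray(x, dtype=float))
          n = x.size
          return (2.0 * np.arange(1, n + 1) - n - 1.0) @ x / (n * x.sum())

      k = 0.59                                   # illustrative shape parameter
      gini_analytic = 1.0 - 2.0 ** (-1.0 / k)    # closed form for the Weibull

      rng = np.random.default_rng(11)
      sample = rng.weibull(k, 200_000)
      print(f"analytic G = {gini_analytic:.4f}, "
            f"empirical G = {gini_empirical(sample):.4f}")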

  5. Least Squares Best Fit Method for the Three Parameter Weibull Distribution: Analysis of Tensile and Bend Specimens with Volume or Surface Flaw Failure

    NASA Technical Reports Server (NTRS)

    Gross, Bernard

    1996-01-01

    Material characterization parameters obtained from naturally flawed specimens are necessary for reliability evaluation of non-deterministic advanced ceramic structural components. The least squares best fit method is applied to the three parameter uniaxial Weibull model to obtain the material parameters from experimental tests on volume or surface flawed specimens subjected to pure tension, pure bending, four point or three point loading. Several illustrative example problems are provided.

  6. Predictive Failure of Cylindrical Coatings Using Weibull Analysis

    NASA Technical Reports Server (NTRS)

    Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.

    2002-01-01

    Rotating, coated wiping rollers used in a high-speed printing application failed primarily from fatigue. Two coating materials were evaluated: a hard, cross-linked, plasticized polyvinyl chloride (PVC) and a softer, plasticized PVC. A total of 447 tests was conducted with these coatings in a production facility. The data were evaluated using Weibull analysis. The softer coating produced more than twice the life of the harder cross-linked coating and reduced the wiper replacement rate by two-thirds, resulting in minimum production interruption.

  7. Composite Weibull-Inverse Transformed Gamma distribution and its actuarial application

    NASA Astrophysics Data System (ADS)

    Maghsoudi, Mastoureh; Bakar, Shaiful Anuar Abu; Hamzah, Nor Aishah

    2014-07-01

    This paper introduces a new composite model, namely, the composite Weibull-Inverse Transformed Gamma distribution, which assumes a Weibull distribution for the head up to a specified threshold and an inverse transformed gamma distribution beyond it. The closed form of the probability density function (pdf) as well as the estimation of parameters by the maximum likelihood method is presented. The model is compared with several benchmark distributions and their performances are measured. A well-known data set, the Danish fire loss data, is used for this purpose and its Value at Risk (VaR) using the new model is computed. In comparison to several standard models, the composite Weibull-Inverse Transformed Gamma model proved to be a competitive candidate.

  8. Fracture Strength: Stress Concentration, Extreme Value Statistics, and the Fate of the Weibull Distribution

    NASA Astrophysics Data System (ADS)

    Bertalan, Zsolt; Shekhawat, Ashivni; Sethna, James P.; Zapperi, Stefano

    2014-09-01

    The statistical properties of fracture strength of brittle and quasibrittle materials are often described in terms of the Weibull distribution. However, the weakest-link hypothesis, commonly used to justify it, is expected to fail when fracture occurs after significant damage accumulation. Here we show that this implies that the Weibull distribution is unstable in a renormalization-group sense for a large class of quasibrittle materials. Our theoretical arguments are supported by numerical simulations of disordered fuse networks. We also find that for brittle materials such as ceramics, the common assumption that the strength distribution can be derived from the distribution of preexisting microcracks by using Griffith's criteria is invalid. We attribute this discrepancy to crack bridging. Our findings raise questions about the applicability of Weibull statistics to most practical cases.

  9. Weibull statistical analysis of Krouse type bending fatigue of nuclear materials

    NASA Astrophysics Data System (ADS)

    Haidyrah, Ahmed S.; Newkirk, Joseph W.; Castaño, Carlos H.

    2016-03-01

    A bending fatigue mini-specimen (Krouse-type) was used to study the fatigue properties of nuclear materials. The objective of this paper is to study fatigue of Grade 91 ferritic-martensitic steel using a mini-specimen (Krouse-type) suitable for reactor irradiation studies. These mini-specimens are similar in design (but smaller) to those described in the ASTM B593 standard. The mini-specimens were machined by waterjet and tested as-received. The bending fatigue machine was modified to test the mini-specimen with a specially designed adapter. The cyclic bending fatigue behavior of Grade 91 was studied under constant deflection. The S-N curve was created and the mean fatigue life was analyzed. In this study, the Weibull function was used to describe the probability of failure at high to low stress levels of 563, 310, and 265 MPa. The commercial software Minitab 17 was used to calculate the distribution of fatigue life under different stress levels. Both two- and three-parameter Weibull analyses were used to characterize the probability of failure. The plots indicated that the three-parameter Weibull distribution fits the data well.

  10. An EOQ Model with Two-Parameter Weibull Distribution Deterioration and Price-Dependent Demand

    ERIC Educational Resources Information Center

    Mukhopadhyay, Sushanta; Mukherjee, R. N.; Chaudhuri, K. S.

    2005-01-01

    An inventory replenishment policy is developed for a deteriorating item and price-dependent demand. The rate of deterioration is taken to be time-proportional and the time to deterioration is assumed to follow a two-parameter Weibull distribution. A power law form of the price dependence of demand is considered. The model is solved analytically…

  11. Weibull-distributed dyke thickness reflects probabilistic character of host-rock strength.

    PubMed

    Krumbholz, Michael; Hieronymus, Christoph F; Burchardt, Steffi; Troll, Valentin R; Tanner, David C; Friese, Nadine

    2014-01-01

    Magmatic sheet intrusions (dykes) constitute the main form of magma transport in the Earth's crust. The size distribution of dykes is a crucial parameter that controls volcanic surface deformation and eruption rates and is required to realistically model volcano deformation for eruption forecasting. Here we present statistical analyses of 3,676 dyke thickness measurements from different tectonic settings and show that dyke thickness consistently follows the Weibull distribution. Known from materials science, power law-distributed flaws in brittle materials lead to Weibull-distributed failure stress. We therefore propose a dynamic model in which dyke thickness is determined by variable magma pressure that exploits differently sized host-rock weaknesses. The observed dyke thickness distributions are thus site-specific because rock strength, rather than magma viscosity and composition, exerts the dominant control on dyke emplacement. Fundamentally, the strength of geomaterials is scale-dependent and should be approximated by a probability distribution. PMID:24513695

  12. Weibull-distributed dyke thickness reflects probabilistic character of host-rock strength

    PubMed Central

    Krumbholz, Michael; Hieronymus, Christoph F.; Burchardt, Steffi; Troll, Valentin R.; Tanner, David C.; Friese, Nadine

    2014-01-01

    Magmatic sheet intrusions (dykes) constitute the main form of magma transport in the Earth’s crust. The size distribution of dykes is a crucial parameter that controls volcanic surface deformation and eruption rates and is required to realistically model volcano deformation for eruption forecasting. Here we present statistical analyses of 3,676 dyke thickness measurements from different tectonic settings and show that dyke thickness consistently follows the Weibull distribution. Known from materials science, power law-distributed flaws in brittle materials lead to Weibull-distributed failure stress. We therefore propose a dynamic model in which dyke thickness is determined by variable magma pressure that exploits differently sized host-rock weaknesses. The observed dyke thickness distributions are thus site-specific because rock strength, rather than magma viscosity and composition, exerts the dominant control on dyke emplacement. Fundamentally, the strength of geomaterials is scale-dependent and should be approximated by a probability distribution. PMID:24513695

  13. Surface Wind-Speed Statistics Modelling: Alternatives to the Weibull Distribution and Performance Evaluation

    NASA Astrophysics Data System (ADS)

    Drobinski, Philippe; Coulais, Corentin; Jourdier, Bénédicte

    2015-10-01

    Wind-speed statistics are generally modelled using the Weibull distribution. However, the Weibull distribution is based on empirical rather than physical justification and can display strong limitations in its applications. Here, we derive wind-speed distributions analytically with different assumptions on the wind components to model wind anisotropy, wind extremes, and multiple wind regimes. We quantitatively confront these distributions with an extensive set of meteorological data (89 stations covering various sub-climatic regions in France) to identify the distributions that perform best and the reasons for this, and we analyze the sensitivity of the proposed distributions to diurnal to seasonal variability. We find that local topography, unsteady wind fluctuations, and persistent wind regimes are determinant for the performance of these distributions, as they induce anisotropy or non-Gaussian fluctuations of the wind components. A Rayleigh-Rice distribution is proposed to model the combination of weak isotropic wind and persistent wind regimes. It outperforms all other tested distributions (Weibull, elliptical, and non-Gaussian) and is the only proposed distribution able to accurately capture the diurnal and seasonal variability.
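
    The physical starting point is easy to reproduce: zero-mean isotropic Gaussian wind components give a Rayleigh speed distribution (a Weibull with shape k = 2), while a persistent mean flow shifts the components and produces a Rice distribution, which motivates the Rayleigh-Rice combination. A simulation sketch with illustrative parameter values:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      sigma = 3.0
      u, v = rng.normal(0, sigma, 100_000), rng.normal(0, sigma, 100_000)

      # Isotropic, zero-mean components: speed is Rayleigh, i.e. Weibull k = 2.
      speed = np.hypot(u, v)
      k, _, lam = stats.weibull_min.fit(speed, floc=0)
      print(f"fitted shape k = {k:.3f} (expected 2)")

      # A persistent regime (mean flow U0) yields a Rice-distributed speed.
      U0 = 6.0
      print(f"mean speed with persistent regime: {np.hypot(u + U0, v).mean():.2f}")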

  14. [Weibull distribution for modeling drying of Angelicae Sinensis Radix and its application in moisture dynamics].

    PubMed

    Sha, Xiu-xiu; Zhu, Shao-qing; Duan, Jin-ao; Guo, Sheng; Lu, Xue-jun; Gao, Zhen-jiang; Yan, Hui; Qian, Da-wei

    2015-06-01

    To establish a moisture dynamics model for the drying process of Angelicae Sinensis Radix, the Weibull distribution model was applied to the moisture ratio variation curves, and the drying rate and drying activation energy were compared between temperature-controllable air drying and infrared drying at different temperatures (50, 60, 70 degrees C). The Weibull distribution model described the drying curves well, as the moisture ratio vs. drying time profiles of the model showed high correlation (R2 = 0.994-0.999). The results proved that the drying process of Angelicae Sinensis Radix belongs to the falling-rate drying period. For the drying process, the scale parameter (α) was related to the drying temperature and decreased as the temperature increased. For a given drying method, the drying temperature had little impact on the shape parameter (β). The moisture diffusion coefficient increased with temperature, from 0.425 x 10(-9) m2 x s(-1) to 2.260 x 10(-9) m2 x s(-1). The activation energy for moisture diffusion was 68.82 and 29.60 kJ x mol(-1) for temperature-controllable air drying and infrared drying, respectively. Therefore, the Weibull distribution model can be used to predict the moisture removal of Angelicae Sinensis Radix during the drying process, which is of great significance for drying-process prediction, control, and optimization. The results provide a technical basis for the use of modern drying technology for industrial drying of Angelicae Sinensis Radix. PMID:26552166
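
    The model referred to is the Weibull thin-layer drying curve MR(t) = exp(-(t/alpha)**beta), with scale alpha and shape beta. A fitting sketch on hypothetical drying data (not the measurements of the study):

      import numpy as np
      from scipy.optimize import curve_fit

      def weibull_mr(t, alpha, beta):
          # Weibull drying model: moisture ratio as a function of time.
          return np.exp(-(t / alpha) ** beta)

      t = np.array([0., 1., 2., 4., 6., 9., 12., 16., 20.])       # h
      mr = np.array([1.0, 0.82, 0.66, 0.44, 0.30, 0.17,
                     0.10, 0.05, 0.03])                           # dimensionless

      (alpha, beta), _ = curve_fit(weibull_mr, t, mr, p0=[5.0, 1.0])
      resid = mr - weibull_mr(t, alpha, beta)
      r2 = 1.0 - (resid ** 2).sum() / ((mr - mr.mean()) ** 2).sum()
      print(f"alpha = {alpha:.2f} h, beta = {beta:.2f}, R^2 = {r2:.4f}")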

  15. Hypotheses on the role of chromosomal changes in leukemias. Application of the Weibull distribution function.

    PubMed

    Motoiu-Raileanu, I; Motoiu, R; Gociu, M; Berceanu, S

    1976-01-01

    Investigations were carried out in 38 patients with acute leukemias or with chronic myeloid leukemias in the blast phase and a correlation was made between the cytogenetic aspect and the survival time. The interpretation of results was made by the Weibull distribution function. It was mathematically demonstrated that in the leukemic patients with chromosomal aberrations there is a preclinical period of over 40 months necessary for the formation of these anomalies. Chromosomal aberrations, the absence of mitoses and age over 70 proved to be aggravating factors in the diseases investigated. PMID:1063430

  16. Bonus-Malus System with the Claim Frequency Distribution is Geometric and the Severity Distribution is Truncated Weibull

    NASA Astrophysics Data System (ADS)

    Santi, D. N.; Purnaba, I. G. P.; Mangku, I. W.

    2016-01-01

    A Bonus-Malus system is said to be optimal if it is financially balanced for insurance companies and fair for policyholders. Previous research about Bonus-Malus systems has been concerned with determination of the risk premium applied to all of the severity guaranteed by the insurance company. In fact, not all of the severity proposed by the policyholder may be covered by the insurance company. When the insurance company sets a maximum bound on the severity incurred, it is necessary to modify the severity distribution into a bounded severity distribution. In this paper, an optimal Bonus-Malus system whose claim frequency component has a geometric distribution and whose severity component has a truncated Weibull distribution is discussed. The number of claims is considered to follow a Poisson distribution, and its expected number λ is exponentially distributed, so the number of claims has a geometric distribution. The severity, given a parameter θ, is considered to have a truncated exponential distribution, with θ modelled using the Levy distribution, so the severity has a truncated Weibull distribution.

  17. Flexural strength of infrared-transmitting window materials: bimodal Weibull statistical analysis

    NASA Astrophysics Data System (ADS)

    Klein, Claude A.

    2011-02-01

    The results of flexural strength testing performed on brittle materials are usually interpreted in light of a "Weibull plot," i.e., by fitting the estimated cumulative failure probability (CFP) to a linearized semiempirical Weibull distribution. This procedure ignores the impact of the testing method on the measured stresses at fracture (specifically, the stressed area and the stress profile), thus resulting in inadequate characterization of the material under investigation. In a previous publication, the author reformulated Weibull's statistical theory of fracture in a manner that emphasizes how the stressed area and the stress profile control the failure probability distribution, which led to the concept of a characteristic strength, that is, the effective strength of a 1-cm2 uniformly stressed area. Fitting the CFP of IR-transmitting materials (AlON, fusion-cast CaF2, oxyfluoride glass, fused SiO2, CVD-ZnSe, and CVD-ZnS) was performed by means of nonlinear regressions but produced evidence of slight, systematic deviations. The purpose of this contribution is to demonstrate that upon extending the previously elaborated model to distributions involving two distinct types of defects (bimodal distributions), the fit agrees with estimated CFPs. Furthermore, the availability of two sets of statistical parameters (characteristic strength and shape parameter) can be taken advantage of to evaluate the failure-probability density, thus providing means of assessing the nature, the critical size, and the size distribution of surface/subsurface flaws.
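
    One common way to write a bimodal (two-defect-population) Weibull failure probability is the weakest-link competition of two flaw types; the paper's exact formulation, which also folds in stressed area and stress profile, may differ. A sketch with illustrative parameters:

      import numpy as np

      def cfp_bimodal(sigma, s1, m1, s2, m2):
          # Two independent flaw populations competing in series:
          # s1, s2 are characteristic strengths; m1, m2 are Weibull moduli.
          sigma = np.asarray(sigma, dtype=float)
          return 1.0 - np.exp(-(sigma / s1) ** m1 - (sigma / s2) ** m2)

      stresses = np.array([50., 100., 150., 200.])   # MPa, illustrative
      print(cfp_bimodal(stresses, s1=180.0, m1=12.0, s2=260.0, m2=4.0))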

  18. Reliability Evaluation Method with Weibull Distribution for Temporary Overvoltages of Substation Equipment

    NASA Astrophysics Data System (ADS)

    Okabe, Shigemitsu; Tsuboi, Toshihiro; Takami, Jun

    Power-frequency withstand voltage tests on electric power equipment are regulated in JEC standards by evaluating lifetime reliability with a Weibull distribution function. The evaluation method is still controversial in terms of the consideration of a plural number of faults, and some alternative methods have been proposed on this subject. The present paper first discusses the physical meanings of the various evaluation methods and secondly examines their effects on the power-frequency withstand voltage tests. Further, an appropriate method is investigated for an oil-filled transformer and a gas-insulated switchgear, taking note of the dielectric breakdown or partial discharge mechanism under various insulating material and structure conditions; the tentative conclusion is that the conventional method would be most pertinent under the present conditions.

  19. Estimating Weibull parameters for materials

    NASA Technical Reports Server (NTRS)

    Robinson, E. Y.

    1971-01-01

    The statistical analysis of strength and fracture of materials in general, with application to fiber composites, is discussed. The weakest link model is considered in a fairly general form, and the resulting equations are demonstrated by using a Weibull distribution for flaws. This distribution appears naturally in a variety of problems, and therefore additional attention is devoted to analysis and statistical estimation connected with this distribution. Special working charts are included to facilitate interpretation of observed data and estimation of parameters. Implications of the size effect are considered for various kinds of flaw distributions. Failure and damage in a fiber-reinforced system are described. Some useful graphs are included for predicting the strength of such a system. Recent data on organic-fiber (PRD 49) composite material are analyzed by the Weibull distribution with the methods presented.

  20. Probabilistic Analysis for Comparing Fatigue Data Based on Johnson-Weibull Parameters

    NASA Technical Reports Server (NTRS)

    Hendricks, Robert C.; Zaretsky, Erwin V.; Vlcek, Brian L.

    2007-01-01

    Probabilistic failure analysis is essential when analysis of stress-life (S-N) curves is inconclusive in determining the relative ranking of two or more materials. In 1964, L. Johnson published a methodology for establishing the confidence that two populations of data are different. Simplified algebraic equations for confidence numbers were derived based on the original work of L. Johnson. Using the ratios of mean life, the resultant values of confidence numbers deviated less than one percent from those of Johnson. It is possible to rank the fatigue lives of different materials with a reasonable degree of statistical certainty based on combined confidence numbers. These equations were applied to rotating beam fatigue tests that were conducted on three aluminum alloys at three stress levels each. These alloys were AL 2024, AL 6061, and AL 7075. The results were analyzed and compared using ASTM Standard E739-91 and the Johnson-Weibull analysis. The ASTM method did not statistically distinguish between AL 6061 and AL 7075. Based on the Johnson-Weibull analysis confidence numbers greater than 99 percent, AL 2024 was found to have the longest fatigue life, followed by AL 7075, and then AL 6061. The ASTM Standard and the Johnson-Weibull analysis result in the same stress-life exponent p for each of the three aluminum alloys at the median or L(sub 50) lives.

  1. USE OF WEIBULL FUNCTION FOR NON-LINEAR ANALYSIS OF EFFECTS OF LOW LEVELS OF SIMULATED HERBICIDE DRIFT ON PLANTS

    EPA Science Inventory

    We compared two regression models, which are based on the Weibull and probit functions, for the analysis of pesticide toxicity data from laboratory studies on Illinois crop and native plant species. Both mathematical models are continuous, differentiable, strictly positive, and...

  2. Statistical analysis of bivariate failure time data with Marshall–Olkin Weibull models

    PubMed Central

    Li, Yang; Sun, Jianguo; Song, Shuguang

    2013-01-01

    This paper discusses parametric analysis of bivariate failure time data, which often occur in medical studies, among other fields. For this, as in the case of univariate failure time data, exponential and Weibull models are probably the most commonly used ones. However, it is surprising that there seem to be no general estimation procedures available for fitting the bivariate Weibull model to bivariate right-censored failure time data, except some methods for special situations. We present and investigate two general but simple estimation procedures, one being a graphical approach and the other being a marginal approach, for the problem. An extensive simulation study is conducted to assess the performances of the proposed approaches and shows that they work well for practical situations. An illustrative example is provided. PMID:26294802

  3. Probabilistic Analysis for Comparing Fatigue Data Based on Johnson-Weibull Parameters

    NASA Technical Reports Server (NTRS)

    Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.

    2013-01-01

    Leonard Johnson published a methodology for establishing the confidence that two populations of data are different. Johnson's methodology is dependent on limited combinations of test parameters (Weibull slope, mean life ratio, and degrees of freedom) and a set of complex mathematical equations. In this report, a simplified algebraic equation for confidence numbers is derived based on the original work of Johnson. The confidence numbers calculated with this equation are compared to those obtained graphically by Johnson. Using the ratios of mean life, the resultant values of confidence numbers at the 99 percent level deviate less than 1 percent from those of Johnson. At a 90 percent confidence level, the calculated values differ between +2 and 4 percent. The simplified equation is used to rank the experimental lives of three aluminum alloys (AL 2024, AL 6061, and AL 7075), each tested at three stress levels in rotating beam fatigue, analyzed using the Johnson-Weibull method, and compared to the ASTM Standard (E739-91) method of comparison. The ASTM Standard did not statistically distinguish between AL 6061 and AL 7075. However, it is possible to rank the fatigue lives of different materials with a reasonable degree of statistical certainty based on combined confidence numbers using the Johnson-Weibull analysis. AL 2024 was found to have the longest fatigue life, followed by AL 7075, and then AL 6061. The ASTM Standard and the Johnson-Weibull analysis result in the same stress-life exponent p for each of the three aluminum alloys at the median, or L(sub 50), lives.

  4. Accurate bearing remaining useful life prediction based on Weibull distribution and artificial neural network

    NASA Astrophysics Data System (ADS)

    Ben Ali, Jaouher; Chebel-Morello, Brigitte; Saidi, Lotfi; Malinowski, Simon; Fnaiech, Farhat

    2015-05-01

    Accurate remaining useful life (RUL) prediction of critical assets is an important challenge in condition-based maintenance to improve reliability and decrease machine breakdowns and maintenance costs. Bearings are among the most important components in industry and need to be monitored, and the user should predict their RUL. The challenge of this study is to propose an original feature able to evaluate the health state of bearings and to estimate their RUL by Prognostics and Health Management (PHM) techniques. In this paper, the proposed method is based on the data-driven prognostic approach. The combination of the Simplified Fuzzy Adaptive Resonance Theory Map (SFAM) neural network and the Weibull distribution (WD) is explored. The WD is used just in the training phase to fit measurements and to avoid areas of fluctuation in the time domain. The SFAM training process is based on fitted measurements at the present and previous inspection time points as input, whereas the SFAM testing process is based on real measurements at the present and previous inspections. Thanks to the fuzzy learning process, SFAM has an important ability and good performance in learning nonlinear time series. As output, seven classes are defined: healthy bearing and six states of bearing degradation. In order to find the optimal RUL prediction, a smoothing phase is proposed in this paper. Experimental results show that the proposed method can reliably predict the RUL of rolling element bearings (REBs) based on vibration signals. The proposed prediction approach can be applied to the prognostics of various other mechanical assets.

  5. Bayesian Weibull tree models for survival analysis of clinico-genomic data

    PubMed Central

    Clarke, Jennifer; West, Mike

    2008-01-01

    An important goal of research involving gene expression data for outcome prediction is to establish the ability of genomic data to define clinically relevant risk factors. Recent studies have demonstrated that microarray data can successfully cluster patients into low- and high-risk categories. However, the need exists for models which examine how genomic predictors interact with existing clinical factors and provide personalized outcome predictions. We have developed clinico-genomic tree models for survival outcomes which use recursive partitioning to subdivide the current data set into homogeneous subgroups of patients, each with a specific Weibull survival distribution. These trees can provide personalized predictive distributions of the probability of survival for individuals of interest. Our strategy is to fit multiple models; within each model we adopt a prior on the Weibull scale parameter and update this prior via Empirical Bayes whenever the sample is split at a given node. The decision to split is based on a Bayes factor criterion. The resulting trees are weighted according to their relative likelihood values and predictions are made by averaging over models. In a pilot study of survival in advanced stage ovarian cancer we demonstrate that clinical and genomic data are complementary sources of information relevant to survival, and we use the exploratory nature of the trees to identify potential genomic biomarkers worthy of further study. PMID:18618012

  6. Improvement in mechanical properties of jute fibres through mild alkali treatment as demonstrated by utilisation of the Weibull distribution model.

    PubMed

    Roy, Aparna; Chakraborty, Sumit; Kundu, Sarada Prasad; Basak, Ratan Kumar; Majumder, Subhasish Basu; Adhikari, Basudam

    2012-03-01

    Chemically modified jute fibres are potentially useful as natural reinforcement in composite materials. Jute fibres were treated with 0.25%-1.0% sodium hydroxide (NaOH) solution for 0.5-48 h. The hydrophilicity, surface morphology, crystallinity index, and thermal and mechanical characteristics of untreated and alkali-treated fibres were studied. The two-parameter Weibull distribution model was applied to deal with the variation in mechanical properties of the natural fibres. Alkali treatment enhanced the tensile strength and elongation at break by 82% and 45%, respectively, but decreased the hydrophilicity by 50.5% and the diameter of the fibres by 37%. PMID:22209134

  7. Bimodal Weibull statistical analysis of CVD-ZnSe and CVD-ZnS flexural strength data

    NASA Astrophysics Data System (ADS)

    Klein, Claude A.

    2011-06-01

    The results of flexural strength testing performed on brittle materials are usually interpreted in the light of a "Weibull plot," i.e., by fitting the estimated cumulative failure probability (CFP) to a linearized semiempirical Weibull distribution. This procedure ignores the impact of the testing method on the measured stresses at failure (specifically, the stressed area and the stress profile), thus resulting in an inadequate characterization of the material under consideration. In a previous publication [Opt. Eng. 41, 3151 (2002)] the author reformulated Weibull's statistical theory of fracture in a manner that emphasizes how the stressed area and the stress profile control the CFP, which led to the concept of a characteristic strength, that is, the effective strength of a 1-sq.cm uniformly stressed area. Fitting the CFP of IR-transmitting materials was performed by means of nonlinear regressions but produced evidence of systematic deviations. In this paper we demonstrate that, upon extending the previously elaborated model to distributions involving two distinct types of defects (bimodal distributions), fitting the estimated CFP of CVD-ZnS or CVD-ZnSe leads to a much improved description of the fracture process. In particular, the availability of two sets of statistical parameters (characteristic strength and shape parameter) can be taken advantage of for evaluating the failure-probability density, thus providing means of assessing the nature, the critical size, and the size distribution of the surface/subsurface flaws.

  8. Weibull Analysis of Fracture Test Data on Bovine Cortical Bone: Influence of Orientation

    PubMed Central

    Ekwaro-Osire, Stephen

    2013-01-01

    The fracture toughness, KIC, of cortical bone has been experimentally determined by several researchers. The variation in KIC values arises from variation in specimen orientation, shape, and size during the experiment. The fracture toughness of cortical bone is governed by the severest flaw and, hence, may be analyzed using Weibull statistics. To the best of the authors' knowledge, however, no studies of this aspect have been published. The motivation of the study is the evaluation of Weibull parameters in the circumferential-longitudinal (CL) and longitudinal-circumferential (LC) directions. We hypothesized that the Weibull parameters vary depending on the bone microstructure. In the present work, a two-parameter Weibull statistical model was applied to calculate the plane-strain fracture toughness of bovine femoral cortical bone obtained using specimens extracted in the CL and LC directions of the bone. It was found that the Weibull modulus of fracture toughness was larger for CL specimens than for LC specimens, but the opposite trend was seen for the characteristic fracture toughness. The reason for these trends is the differences in microstructure and extrinsic toughening mechanisms between the CL and LC directions of the bone. The Weibull parameters found in this study can be applied to develop a damage-mechanics model for bone. PMID:24385985

  9. Estimating Weibull parameters for composite materials.

    NASA Technical Reports Server (NTRS)

    Robinson, E. Y.

    1972-01-01

    This paper deals with the statistical analysis of strength and fracture of materials in general, with application to fiber composites. The 'weakest link' model is considered in a fairly general form, and the resulting equations are demonstrated by using a Weibull distribution for flaws. This distribution appears naturally in a variety of problems, and therefore additional attention is devoted to analysis and statistical estimation connected with this distribution. Special working charts are included to facilitate interpretation of observed data and estimation of parameters. Implications of the size effect are considered for various kinds of flaw distributions. The paper describes failure and damage in fiber-reinforced systems.

  10. A comparative study of mixed exponential and Weibull distributions in a stochastic model replicating a tropical rainfall process

    NASA Astrophysics Data System (ADS)

    Abas, Norzaida; Daud, Zalina M.; Yusof, Fadhilah

    2014-11-01

    A stochastic rainfall model is presented for the generation of hourly rainfall data in an urban area in Malaysia. In view of the high temporal and spatial variability of rainfall within the tropical rain belt, the Spatial-Temporal Neyman-Scott Rectangular Pulse model was used. The model, which is governed by the Neyman-Scott process, employs a reasonable number of parameters to represent the physical attributes of rainfall. A common approach is to attach each attribute to a mathematical distribution. With respect to rain cell intensity, this study proposes the use of a mixed exponential distribution. The performance of the proposed model was compared to a model that employs the Weibull distribution. Hourly and daily rainfall data from four stations in the Damansara River basin in Malaysia were used as input to the models, and simulations of hourly series were performed for an independent site within the basin. The performance of the models was assessed based on how closely the statistical characteristics of the simulated series resembled the statistics of the observed series. The findings obtained based on graphical representation revealed that the statistical characteristics of the simulated series for both models compared reasonably well with the observed series. However, a further assessment using the AIC, BIC and RMSE showed that the proposed model yields better results. The results of this study indicate that for tropical climates, the proposed model, using a mixed exponential distribution, is the best choice for generation of synthetic data for ungauged sites or for sites with insufficient data within the limit of the fitted region.

  11. Reliability growth modeling analysis of the space shuttle main engines based upon the Weibull process

    NASA Technical Reports Server (NTRS)

    Wheeler, J. T.

    1990-01-01

    The Weibull process, identified as the inhomogeneous Poisson process with the Weibull intensity function, is used to model the reliability growth assessment of the space shuttle main engine test and flight failure data. Additional tables of percentage-point probabilities for several different values of the confidence coefficient have been generated for setting (1-alpha)100-percent two-sided confidence interval estimates on the mean time between failures. The tabulated data pertain to two cases: (1) time-terminated testing, and (2) failure-terminated testing. The critical values of the three test statistics, namely Cramer-von Mises, Kolmogorov-Smirnov, and chi-square, were calculated and tabulated for use in the goodness-of-fit tests for the engine reliability data. Numerical results are presented for five different groupings of the engine data that reflect the actual response to the failures.
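
    For the time-terminated case, the maximum-likelihood estimates of the Weibull (power-law) process parameters have a simple closed form; the sketch below uses hypothetical failure times, not the shuttle data.

```python
# Sketch: MLEs for the Weibull (power-law) process with intensity
# u(t) = lam * beta * t**(beta - 1), time-terminated testing.
# The failure times are hypothetical.
import numpy as np

t = np.array([120.0, 310.0, 560.0, 780.0, 1200.0, 1650.0])  # cumulative failure times (hr)
T = 2000.0                                                   # total test time (hr)

n = len(t)
beta_hat = n / np.sum(np.log(T / t))   # shape (growth) parameter; < 1 means growth
lam_hat = n / T**beta_hat              # scale parameter

# Instantaneous MTBF at the end of the test
mtbf_T = 1.0 / (lam_hat * beta_hat * T**(beta_hat - 1.0))
print(f"beta = {beta_hat:.3f}, lambda = {lam_hat:.3e}, MTBF(T) = {mtbf_T:.0f} hr")
```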

  12. Prediction of the curing time to achieve maturity of the nano-cement based concrete using the Weibull distribution model: A complementary data set

    PubMed Central

    Jo, Byung Wan; Chakraborty, Sumit; Kim, Heon

    2015-01-01

    This data article provides comparison data for nano-cement based concrete (NCC) and ordinary Portland cement based concrete (OPCC). Concrete samples (OPCC) were fabricated using ten different mix designs and their characterization data is provided here. Optimization of curing time using the Weibull distribution model was done by analyzing the rate of change of compressive strength of the OPCC. Initially, the compressive strength of the OPCC samples was measured after completion of four desired curing times. Thereafter, the required curing time to achieve a particular rate of change of the compressive strength has been predicted utilizing the equation derived from the variation of the rate of change of compressive strength with the curing time, prior to the optimization of the curing time (at the 99.99% confidence level) using the Weibull distribution model. This data article complements the research article entitled “Prediction of the curing time to achieve maturity of the nano-cement based concrete using the Weibull distribution model” [1]. PMID:26217804

  13. Prediction of the curing time to achieve maturity of the nano-cement based concrete using the Weibull distribution model: A complementary data set.

    PubMed

    Jo, Byung Wan; Chakraborty, Sumit; Kim, Heon

    2015-09-01

    This data article provides comparison data for nano-cement based concrete (NCC) and ordinary Portland cement based concrete (OPCC). Concrete samples (OPCC) were fabricated using ten different mix designs and their characterization data is provided here. Optimization of curing time using the Weibull distribution model was done by analyzing the rate of change of compressive strength of the OPCC. Initially, the compressive strength of the OPCC samples was measured after completion of four desired curing times. Thereafter, the required curing time to achieve a particular rate of change of the compressive strength has been predicted utilizing the equation derived from the variation of the rate of change of compressive strength with the curing time, prior to the optimization of the curing time (at the 99.99% confidence level) using the Weibull distribution model. This data article complements the research article entitled "Prediction of the curing time to achieve maturity of the nano-cement based concrete using the Weibull distribution model" [1]. PMID:26217804
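
    The inversion step can be sketched as follows, assuming a saturating Weibull maturity curve S(t) = S_u(1 - exp(-(t/lam)**beta)); the functional form and all parameter values are assumptions for illustration, not the article's fit.

```python
# Sketch: predicting curing time from a saturating Weibull maturity curve.
# Functional form and parameters are assumed, not taken from the article.
import numpy as np

def curing_time(p, lam, beta):
    """Time needed to reach a fraction p of the ultimate strength S_u."""
    return lam * (-np.log(1.0 - p)) ** (1.0 / beta)

lam, beta = 14.0, 1.3      # hypothetical scale (days) and shape
for p in (0.90, 0.95, 0.99):
    print(f"{p:.0%} of ultimate strength after ~{curing_time(p, lam, beta):.1f} days")
```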

  14. Weibull distribution of incipient flaws in basalt material used in high-velocity impact experiments and applications in numerical simulations of small body disruptions

    NASA Astrophysics Data System (ADS)

    Michel, P.; Nakamura, A.

    We measured the Weibull parameters of a specific basalt material, called Yakuno basalt, which has already been used in documented high-velocity impact experiments. The outcomes of these experiments have been widely used to validate numerical codes of fragmentation developed in the context of planetary science. However, the distribution of incipient flaws in the targets, usually characterized by the so-called Weibull parameters, has generally been implemented in the codes with values chosen to match the experimental outcomes, so the validity of numerical simulations remains to be assessed against the actual values of these parameters. Here, we follow the original method proposed by Weibull in 1939 to measure these parameters for this Yakuno basalt. We obtain a value of the Weibull modulus (also called the shape parameter) m larger than the one corresponding to simulation fits to the experimental data. The characteristic strength, which corresponds to a 63.2 % failure probability for a sample of similar specimens and defines the second Weibull (scale) parameter, is also determined. This parameter appears insensitive to the different loading rates used in the measurements. A complete database of impact experiments on basalt targets, including both the important initial target parameters and the detailed outcomes of their disruptions, is thus now available for validity tests of numerical codes of fragmentation. In the gravity regime, which applies when the small bodies involved are larger than a few hundred meters in size, our numerical simulations have already been successful in reproducing asteroid families, showing that most large fragments from an asteroid disruption consist of gravitational aggregates formed by re-accumulation of smaller fragments during the disruption. Moreover, we found that the outcome depends strongly on the initial internal structure of the bodies involved. Therefore, knowledge of the actual flaw distribution of the target material is required, especially in the strength-dominated regime (body sizes below a few hundred meters), in which the small-scale physical properties of the bodies involved have a greater influence on the collisional outcome. We plan to determine such physical properties for targets made of different kinds of materials that will be used in future impact experiments.
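
    Weibull's original ranking method mentioned above amounts to a linear regression on a probability plot; a minimal sketch, with hypothetical strength values, follows.

```python
# Sketch of the classical ranked-strength (probability-plot) estimate of
# the Weibull modulus m and characteristic strength. Strengths are
# hypothetical placeholders, not the Yakuno basalt data.
import numpy as np

sigma = np.sort(np.array([41., 47., 52., 55., 58., 61., 64., 68., 73., 80.]))  # MPa
n = len(sigma)
F = (np.arange(1, n + 1) - 0.5) / n          # plotting positions

x = np.log(sigma)
y = np.log(-np.log(1.0 - F))                 # ln ln [1/(1-F)]
m, intercept = np.polyfit(x, y, 1)           # slope = Weibull modulus

sigma0 = np.exp(-intercept / m)              # scale: 63.2 % failure probability
print(f"Weibull modulus m = {m:.2f}, characteristic strength = {sigma0:.1f} MPa")
```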

  15. Analysis of the fuzzy greatest of CFAR detector in homogeneous and non-homogeneous Weibull clutter

    NASA Astrophysics Data System (ADS)

    Baadeche, Mohamed; Soltani, Faouzi

    2015-12-01

    In this paper, we analyze the distributed FGO-CFAR detector in homogeneous and non-homogeneous Weibull clutter under the assumption of a known shape parameter. The non-homogeneity is modeled by the presence of a clutter edge in the reference window. We derive the membership function that maps the observations to the false alarm space and compute the threshold at the data fusion center. Applying the 'Maximum', 'Minimum', 'Algebraic Sum' and 'Algebraic Product' fuzzy rules for L detectors at the data fusion center, we find that the best performance is obtained with the 'Algebraic Product' fuzzy rule, followed by the 'Minimum' one; in these two cases the probability of detection increases significantly with the number of detectors.

  16. SER performance analysis of MPPM FSO system with three decision thresholds over exponentiated Weibull fading channels

    NASA Astrophysics Data System (ADS)

    Wang, Ping; Yang, Bensheng; Guo, Lixin; Shang, Tao

    2015-11-01

    In this work, the symbol error rate (SER) performance of a multiple pulse position modulation (MPPM) based free-space optical (FSO) communication system with three different decision thresholds, namely the fixed decision threshold (FDT), the optimized decision threshold (ODT) and the dynamic decision threshold (DDT), over exponentiated Weibull (EW) fading channels has been investigated in detail. The effects of aperture averaging on each decision threshold under weak-to-strong turbulence conditions are further studied and compared. The closed-form SER expressions for the three thresholds, derived with the help of the generalized Gauss-Laguerre quadrature rule, are verified by Monte Carlo simulations. This work is helpful for the design of receivers for FSO communication systems.

  17. Weibull-Based Design Methodology for Rotating Aircraft Engine Structures

    NASA Technical Reports Server (NTRS)

    Zaretsky, Erwin; Hendricks, Robert C.; Soditus, Sherry

    2002-01-01

    The NASA Energy Efficient Engine (E(sup 3)-Engine) is used as the basis of a Weibull-based life and reliability analysis. Each component's life, and thus the engine's life, is defined by high-cycle fatigue (HCF) or low-cycle fatigue (LCF). Knowing the cumulative life distribution of each of the components making up the engine, as represented by a Weibull slope, is a prerequisite to predicting the life and reliability of the entire engine. As the engine Weibull slope increases, the predicted lives decrease. The predicted engine lives L(sub 5) (95-percent probability of survival) of approximately 17,000 and 32,000 hr correlate with current engine maintenance practices without and with refurbishment, respectively. The individual high pressure turbine (HPT) blade lives necessary to obtain a blade system life L(sub 0.1) (99.9-percent probability of survival) of 9000 hr for Weibull slopes of 3, 6, and 9 are 47,391, 20,652, and 15,658 hr, respectively. For a design life of the HPT disks having probable points of failure equal to or greater than 36,000 hr at a probability of survival of 99.9 percent, the predicted disk system life L(sub 0.1) can vary from 9,408 to 24,911 hr.
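
    The weakest-link arithmetic behind such system lives can be sketched as follows, assuming a series system of N identical two-parameter Weibull components; the parameter values are hypothetical, not the E(sup 3) figures.

```python
# Sketch: life of a series system of N identical Weibull components.
# For iid components, S_sys(t) = exp(-N * (t/eta)**slope), which inverts
# in closed form. Parameters are hypothetical.
import numpy as np

def system_life(eta, slope, n_parts, reliability):
    """Life at which a series system of n_parts iid Weibull components
    still has the given probability of survival."""
    return eta * (-np.log(reliability) / n_parts) ** (1.0 / slope)

eta, n_blades = 60000.0, 100        # characteristic life (hr), blade count
for slope in (3.0, 6.0, 9.0):
    t = system_life(eta, slope, n_blades, 0.999)
    print(f"Weibull slope {slope:.0f}: blade-system L0.1 ~ {t:,.0f} hr")
```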

  18. Weibull Wind-Speed Distribution Parameters Derived from a Combination of Wind-Lidar and Tall-Mast Measurements Over Land, Coastal and Marine Sites

    NASA Astrophysics Data System (ADS)

    Gryning, Sven-Erik; Floors, Rogier; Peña, Alfredo; Batchvarova, Ekaterina; Brümmer, Burghard

    2015-11-01

    Wind-speed observations from tall towers are used in combination with observations up to 600 m in altitude from a Doppler wind lidar to study the long-term conditions over suburban (Hamburg), rural coastal (Høvsøre) and marine (FINO3) sites. The variability in the wind field among the sites is expressed in terms of mean wind speed and Weibull distribution shape-parameter profiles. The consequences of the carrier-to-noise-ratio (CNR) threshold-value choice on the wind-lidar observations are revealed as follows. When the wind-lidar CNR is lower than a prescribed threshold value, the observations are often filtered out as the uncertainty in the wind-speed measurements increases. For a pulsed heterodyne Doppler lidar, use of the traditional -22 dB CNR threshold value at all measuring levels up to 600 m results in a ≈ 7 % overestimation in the long-term mean wind speed over land, and a ≈ 12 % overestimation in coastal and marine environments. In addition, the height of the profile maximum of the shape parameter of the Weibull distribution (so-called reversal height) is found to depend on the applied CNR threshold; it is found to be lower at small CNR threshold values. The reversal height is greater in the suburban (high roughness) than in the rural (low roughness) area. In coastal areas the reversal height is lower than that over land and relates to the internal boundary layer that develops downwind from the coastline. Over the sea the shape parameter increases towards the sea surface. A parametrization of the vertical profile of the shape parameter fits well with observations over land, coastal regions and over the sea. An applied model for the dependence of the reversal height on the surface roughness is in good agreement with the observations over land.

  19. Weibull Wind-Speed Distribution Parameters Derived from a Combination of Wind-Lidar and Tall-Mast Measurements Over Land, Coastal and Marine Sites

    NASA Astrophysics Data System (ADS)

    Gryning, Sven-Erik; Floors, Rogier; Peña, Alfredo; Batchvarova, Ekaterina; Brümmer, Burghard

    2016-05-01

    Wind-speed observations from tall towers are used in combination with observations up to 600 m in altitude from a Doppler wind lidar to study the long-term conditions over suburban (Hamburg), rural coastal (Høvsøre) and marine (FINO3) sites. The variability in the wind field among the sites is expressed in terms of mean wind speed and Weibull distribution shape-parameter profiles. The consequences of the carrier-to-noise-ratio (CNR) threshold-value choice on the wind-lidar observations are revealed as follows. When the wind-lidar CNR is lower than a prescribed threshold value, the observations are often filtered out as the uncertainty in the wind-speed measurements increases. For a pulsed heterodyne Doppler lidar, use of the traditional -22 dB CNR threshold value at all measuring levels up to 600 m results in a ≈ 7 % overestimation in the long-term mean wind speed over land, and a ≈ 12 % overestimation in coastal and marine environments. In addition, the height of the profile maximum of the shape parameter of the Weibull distribution (so-called reversal height) is found to depend on the applied CNR threshold; it is found to be lower at small CNR threshold values. The reversal height is greater in the suburban (high roughness) than in the rural (low roughness) area. In coastal areas the reversal height is lower than that over land and relates to the internal boundary layer that develops downwind from the coastline. Over the sea the shape parameter increases towards the sea surface. A parametrization of the vertical profile of the shape parameter fits well with observations over land, coastal regions and over the sea. An applied model for the dependence of the reversal height on the surface roughness is in good agreement with the observations over land.

  20. Simulation of correlated discrete Weibull variables: A proposal and an implementation in the R environment

    NASA Astrophysics Data System (ADS)

    Barbiero, Alessandro

    2015-12-01

    Researchers in applied sciences are often concerned with multivariate random variables. In particular, multivariate discrete data often arise in many fields (statistical quality control, biostatistics, failure analysis, etc.). Here we consider the discrete Weibull distribution as an alternative to the popular Poisson random variable and propose a procedure for simulating correlated discrete Weibull random variables, with marginal distributions and correlation matrix assigned by the user. The procedure relies upon the Gaussian copula model and an iterative algorithm that recovers the copula correlation matrix ensuring the desired correlation matrix on the discrete margins. A simulation study is presented, which empirically shows the performance of the procedure.
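
    A minimal sketch of the copula step is given below; it omits the paper's iterative correction of the correlation matrix, so the achieved correlation on the discrete margins is only approximate, and the type-I discrete Weibull margin is an assumption.

```python
# Sketch: correlated discrete Weibull variables via a Gaussian copula.
# Uses the type-I discrete Weibull, P(X >= x) = q**(x**beta); the paper's
# iterative correction of the copula correlation matrix is omitted.
import numpy as np
from scipy import stats

def dweibull_ppf(u, q, beta):
    """Quantile function of the type-I discrete Weibull."""
    x = np.ceil((np.log1p(-u) / np.log(q)) ** (1.0 / beta)) - 1.0
    return np.maximum(x, 0.0).astype(int)

rng = np.random.default_rng(1)
R = np.array([[1.0, 0.6],
              [0.6, 1.0]])                   # target copula correlation
z = rng.multivariate_normal(np.zeros(2), R, size=10000)
u = stats.norm.cdf(z)                        # Gaussian copula -> uniforms

x0 = dweibull_ppf(u[:, 0], q=0.7, beta=1.2)  # margins with user-chosen q, beta
x1 = dweibull_ppf(u[:, 1], q=0.5, beta=0.9)
print("achieved correlation:", np.corrcoef(x0, x1)[0, 1])
```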

  1. Genetic analysis of herd life in Canadian dairy cattle on a lactation basis using a Weibull proportional hazards model.

    PubMed

    Sewalem, A; Kistemaker, G J; Ducrocq, V; Van Doormaal, B J

    2005-01-01

    The objectives of this study were to identify the most important factors that influence functional survival and to estimate the genetic parameters of functional survival for Canadian dairy cattle. Data were obtained from lactation records extracted for the May 2002 genetic evaluation of Holstein, Jersey, and Ayrshire breeds that calved between July 1, 1985 and April 5, 2002. Analysis was performed using a Weibull proportional hazard model, and the baseline hazard function was defined on a lactation basis instead of the traditional analysis of the whole length of life. The statistical model included the effects of stage of lactation; season of production; the annual change in herd size; type of milk recording supervision; age at first calving; effects of milk, fat, and protein yields calculated within herd-year-parity deviations; and the random effects of herd-year-season of calving and sire. All effects fitted in the model had a significant effect on functional survival of cows in all breeds. Milk yield was by far the most important factor influencing survival, and the hazard increased as the milk production of the cows decreased. The hazard also increased as the fat content increased compared with the average group. Heifers that were older at calving were at higher risk of being culled, and expanding herds were at lower risk of being culled compared with stable herds. More culling was found in unsupervised herds than in supervised herds. The heritability values obtained were 0.14, 0.10, and 0.09 for Holstein, Jersey, and Ayrshire, respectively. Rank correlation between estimated breeding values (EBV) obtained from the current national genetic evaluation of direct herd life and the survival kit used in this study ranged from 0.65 to 0.87, depending on the number of daughters per sire. Estimated genetic trend obtained using the survival kit was overestimated. PMID:15591401
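
    As a hedged stand-in for this kind of analysis (the study used the Survival Kit, not Python), the lifelines package can fit a Weibull survival regression; for a Weibull baseline the AFT parameterization fitted below is equivalent to proportional hazards up to a reparameterization, and all columns and values are synthetic.

```python
# Sketch: Weibull survival regression with covariates via lifelines.
# Synthetic data; column names are hypothetical, not the study's variables.
import numpy as np
import pandas as pd
from lifelines import WeibullAFTFitter

rng = np.random.default_rng(2)
n = 300
milk_dev = rng.normal(0, 1, n)                 # within-herd milk deviation
age_fc = rng.normal(26, 2, n)                  # age at first calving (months)
# Hypothetical Weibull lifetimes whose scale shrinks as milk_dev drops
scale = 1500 * np.exp(0.3 * milk_dev - 0.02 * (age_fc - 26))
time = scale * rng.weibull(1.5, n)
censor = rng.uniform(500, 3000, n)

df = pd.DataFrame({
    "herd_life_days": np.minimum(time, censor),
    "culled": (time <= censor).astype(int),    # 1 = culled, 0 = censored
    "milk_dev": milk_dev,
    "age_first_calv": age_fc,
})

aft = WeibullAFTFitter()
aft.fit(df, duration_col="herd_life_days", event_col="culled")
aft.print_summary()   # covariate effects on (log) survival time
```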

  2. Analysis of the relationship between type traits and functional survival in Canadian Holsteins using a Weibull proportional hazards model.

    PubMed

    Sewalem, A; Kistemaker, G J; Miglior, F; Van Doormaal, B J

    2004-11-01

    The aim of this study was to explore the impact of type traits on the functional survival of Canadian Holstein cows using a Weibull proportional hazards model. The data set consisted of 1,130,616 registered cows from 13,606 herds calving from 1985 to 2003. Functional survival was defined as the number of days from first calving to culling, death, or censoring. Type information consisted of phenotypic type scores for 8 composite traits (with 18 classes each) and 23 linear descriptive traits (with 9 classes each). The statistical model included the effects of stage of lactation, season of production, the annual change in herd size, type of milk recording supervision, age at first calving, effects of milk, fat and protein yields calculated within herd-year-parity deviations, herd-year-season of calving, each type trait, and the sire. The analysis was done one trait at a time for each of the 31 type traits. The relative culling risk was calculated for animals in each class after accounting for the previously mentioned effects. Among the composite type traits, those with the greatest contribution to the likelihood function were final score, mammary system, and feet and legs, all having a strong relationship with functional survival. Cows with low scores for these traits had a higher risk of culling than cows with higher scores. For instance, cows classified as poor plus 1 and excellent plus 1 have relative culling risks of 3.66 and 0.28, respectively. The corresponding figures are 4.19 and 0.46 for mammary system, and 2.34 and 0.50 for feet and legs. The linear type traits with the greatest contribution to the likelihood function were fore udder attachment, udder texture, udder depth, rear udder attachment height, and rear udder attachment width. Stature and size had no strong relationship with functional survival. PMID:15483178

  3. Calculation of Weibull strength parameters and Batdorf flow-density constants for volume- and surface-flaw-induced fracture in ceramics

    NASA Technical Reports Server (NTRS)

    Shantaram, S. Pai; Gyekenyesi, John P.

    1989-01-01

    The calculation of shape and scale parameters of the two-parameter Weibull distribution is described using the least-squares analysis and maximum likelihood methods for volume- and surface-flaw-induced fracture in ceramics with complete and censored samples. Detailed procedures are given for evaluating 90 percent confidence intervals for maximum likelihood estimates of shape and scale parameters, the unbiased estimates of the shape parameters, and the Weibull mean values and corresponding standard deviations. Furthermore, the necessary steps are described for detecting outliers and for calculating the Kolmogorov-Smirnov and the Anderson-Darling goodness-of-fit statistics and 90 percent confidence bands about the Weibull distribution. It also shows how to calculate the Batdorf flaw-density constants by using the Weibull distribution statistical parameters. The techniques described were verified with several example problems from the open literature, and were coded in the Structural Ceramics Analysis and Reliability Evaluation (SCARE) design program.

  4. Robust Fitting of a Weibull Model with Optional Censoring

    PubMed Central

    Yang, Jingjing; Scott, David W.

    2013-01-01

    The Weibull family is widely used to model failure data, or lifetime data, although the classical two-parameter Weibull distribution is limited to positive data and a monotone failure rate. The parameters of the Weibull model are commonly obtained by maximum likelihood estimation; however, it is well known that this estimator is not robust when dealing with contaminated data. A new robust procedure is introduced to fit a Weibull model by using the L2 distance, i.e., the integrated squared distance, of the Weibull probability density function. The Weibull model is augmented with a weight parameter to robustly deal with contaminated data. Results comparing a maximum likelihood estimator with an L2 estimator are given in this article, based on both simulated and real data sets. It is shown that this new L2 parametric estimation method is more robust and does a better job than maximum likelihood in the newly proposed Weibull model when data are contaminated. The same preference for the L2 distance criterion and the new Weibull model also holds for right-censored data with contamination. PMID:23888090

  5. Finite-size effects on return interval distributions for weakest-link-scaling systems.

    PubMed

    Hristopulos, Dionissios T; Petrakis, Manolis P; Kaniadakis, Giorgio

    2014-05-01

    The Weibull distribution is a commonly used model for the strength of brittle materials and earthquake return intervals. Deviations from Weibull scaling, however, have been observed in earthquake return intervals and the fracture strength of quasibrittle materials. We investigate weakest-link scaling in finite-size systems and deviations of empirical return interval distributions from the Weibull distribution function. Our analysis employs the ansatz that the survival probability function of a system with complex interactions among its units can be expressed as the product of the survival probability functions for an ensemble of representative volume elements (RVEs). We show that if the system comprises a finite number of RVEs, it obeys the κ-Weibull distribution. The upper tail of the κ-Weibull distribution declines as a power law, in contrast with Weibull scaling. The hazard rate function of the κ-Weibull distribution decreases linearly after a waiting time τ_c ∝ n^(1/m), where m is the Weibull modulus and n is the system size in terms of representative volume elements. We conduct statistical analysis of experimental data and simulations, which show that the κ-Weibull provides competitive fits to the return interval distributions of seismic data and of avalanches in a fiber bundle model. In conclusion, using theoretical and statistical analysis of real and simulated data, we demonstrate that the κ-Weibull distribution is a useful model for extreme-event return intervals in finite-size systems. PMID:25353774
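
    Taking the κ-Weibull survival function in the form implied by the abstract, S(t) = exp_κ(-(t/λ)^m) with the Kaniadakis κ-exponential, as an assumption, the sketch below evaluates it numerically and notes its power-law upper tail.

```python
# Sketch: kappa-Weibull survival function, assuming the form
# S(t) = exp_kappa(-(t/lam)**m), exp_kappa(x) = (sqrt(1+k^2 x^2) + k x)**(1/k).
# The exact form and parameter values are assumptions for illustration.
import numpy as np

def exp_kappa(x, k):
    return (np.sqrt(1.0 + (k * x) ** 2) + k * x) ** (1.0 / k)

def kweibull_sf(t, lam, m, k):
    return exp_kappa(-((t / lam) ** m), k)

lam, m, k = 10.0, 1.5, 0.25
for t in np.logspace(0, 3, 7):
    print(f"t = {t:8.1f}  S_kappa = {kweibull_sf(t, lam, m, k):.3e}")
# For t >> lam the survival decays like t**(-m/k), a power law, rather than
# the stretched-exponential Weibull tail.
```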

  6. EVALUATION OF SPRING OPERATED RELIEF VALVE MAINTENANCE INTERVALS AND EXTENSION OF MAINTENANCE TIMES USING A WEIBULL ANALYSIS WITH MODIFIED BAYESIAN UPDATING

    SciTech Connect

    Harris, S.; Gross, R.; Mitchell, E.

    2011-01-18

    The Savannah River Site (SRS) spring operated pressure relief valve (SORV) maintenance intervals were evaluated using an approach provided by the American Petroleum Institute (API RP 581) for risk-based inspection technology (RBI). In addition, the impact of extending the inspection schedule was evaluated using Monte Carlo Simulation (MCS). The API RP 581 approach is characterized as a Weibull analysis with modified Bayesian updating provided by SRS SORV proof testing experience. Initial Weibull parameter estimates were updated as per SRS's historical proof test records contained in the Center for Chemical Process Safety (CCPS) Process Equipment Reliability Database (PERD). The API RP 581 methodology was used to estimate the SORV's probability of failing on demand (PFD), and the annual expected risk. The API RP 581 methodology indicates that the current SRS maintenance plan is conservative. Cost savings may be attained in certain mild service applications that present low PFD and overall risk. Current practices are reviewed and recommendations are made for extending inspection intervals. The paper gives an illustration of the inspection costs versus the associated risks by using API RP 581 Risk Based Inspection (RBI) Technology. A cost effective maintenance frequency balancing both financial risk and inspection cost is demonstrated.
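
    A generic, textbook-style version of the probability-of-failure-on-demand calculation (not the API RP 581 procedure) can be sketched as follows, with assumed Weibull parameters.

```python
# Sketch: average probability of failure on demand (PFD) for a relief valve
# with Weibull time-to-failure, proof-tested every tau years. Generic
# calculation with hypothetical parameters; not the API RP 581 method.
import numpy as np
from scipy import integrate, stats

beta, eta = 1.8, 25.0          # assumed Weibull shape and scale (years)
for tau in (2.0, 5.0, 10.0):   # candidate proof-test intervals
    # PFD_avg = (1/tau) * integral of F(t) over one test interval
    pfd, _ = integrate.quad(
        lambda t: stats.weibull_min.cdf(t, beta, scale=eta), 0.0, tau)
    print(f"test interval {tau:4.1f} yr -> PFD_avg = {pfd / tau:.4f}")
```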

  7. Performance Sampling and Weibull Distributions.

    ERIC Educational Resources Information Center

    March, James C.; March, James G.

    1981-01-01

    Concerning their study of Wisconsin school superintendents, the authors comment briefly on small differences between their own tactics for modeling mobility and the tactics used by some others, including Schmittlein and Morrison. (Author/MLF)

  8. A Weibull characterization for tensile fracture of multicomponent brittle fibers

    NASA Technical Reports Server (NTRS)

    Barrows, R. G.

    1977-01-01

    A statistical characterization for multicomponent brittle fibers is presented. The method, which is an extension of usual Weibull distribution procedures, statistically considers the components making up a fiber (e.g., substrate, sheath, and surface) as separate entities and taken together as in a fiber. Tensile data for silicon carbide fiber and for an experimental carbon-boron alloy fiber are evaluated in terms of the proposed multicomponent Weibull characterization.

  9. Weibull crack density coefficient for polydimensional stress states

    NASA Technical Reports Server (NTRS)

    Gross, Bernard; Gyekenyesi, John P.

    1989-01-01

    A structural ceramic analysis and reliability evaluation code has recently been developed encompassing volume and surface flaw induced fracture, modeled by the two-parameter Weibull probability density function. A segment of the software involves computing the Weibull polydimensional stress state crack density coefficient from uniaxial stress experimental fracture data. The relationship of the polydimensional stress coefficient to the uniaxial stress coefficient is derived for a shear-insensitive material with a random surface flaw population.

  10. A Weibull statistics-based lignocellulose saccharification model and a built-in parameter accurately predict lignocellulose hydrolysis performance.

    PubMed

    Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu

    2015-09-01

    Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrates, enzymes and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and of experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and analysis of the glucose production levels when λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior, and the λ parameter can be used to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and of the λ value in saccharification performance assessment are discussed. PMID:26121186
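
    The fitting step can be sketched as follows, assuming the saturating form Y(t) = Y_inf(1 - exp(-(t/λ)^n)) and hypothetical yield data.

```python
# Sketch: fitting a Weibull-type saccharification curve and reading off the
# characteristic time lambda. Time/yield data are hypothetical placeholders.
import numpy as np
from scipy.optimize import curve_fit

def weibull_yield(t, y_inf, lam, n):
    return y_inf * (1.0 - np.exp(-((t / lam) ** n)))

t = np.array([2., 4., 8., 12., 24., 48., 72.])       # hours
y = np.array([8., 15., 27., 35., 52., 68., 74.])     # % glucose yield

(y_inf, lam, n), _ = curve_fit(weibull_yield, t, y, p0=(80.0, 20.0, 1.0))
print(f"Y_inf = {y_inf:.1f} %, lambda (characteristic time) = {lam:.1f} h, n = {n:.2f}")
```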

  11. Modeling root reinforcement using root-failure Weibull survival function

    NASA Astrophysics Data System (ADS)

    Schwarz, M.; Giadrossich, F.; Cohen, D.

    2013-03-01

    Root networks contribute to slope stability through complex interactions that include mechanical compression and tension. Due to the spatial heterogeneity of root distribution and the dynamics of root turnover, the quantification of root reinforcement on steep slopes is challenging, and consequently so is the calculation of slope stability. Although considerable advances have been made in root reinforcement modeling, some important aspects remain neglected. In this study we address in particular the role of root-strength variability in the mechanical behavior of a root bundle. Many factors may contribute to the variability of root mechanical properties, even within a single diameter class. This work presents a new approach for quantifying root reinforcement that considers the variability of mechanical properties of each root diameter class. Using the data of laboratory tensile tests and field pullout tests, we calibrate the parameters of the Weibull survival function to implement the variability of root strength in a numerical model for the calculation of root reinforcement (RBMw). The results show that, for both laboratory and field datasets, the parameters of the Weibull distribution may be considered constant, with the exponent equal to 2 and the normalized failure displacement equal to 1. Moreover, the results show that the variability of root strength in each root diameter class has a major influence on the behavior of a root bundle, with important implications when considering different approaches to slope stability calculation. Sensitivity analysis shows that the calibration of the tensile-force and root-elasticity equations, as well as of the root distribution, is the most important step. The new model allows the characterization of root reinforcement in terms of maximum pullout force, stiffness, and energy. Moreover, it simplifies the implementation of root reinforcement in slope stability models. The realistic quantification of root reinforcement for tensile, shear and compression behavior allows the consideration of the stabilization effects of root networks on steep slopes and of the influence that this has on the triggering of shallow landslides.

  12. A Weibull characterization for tensile fracture of multicomponent brittle fibers

    NASA Technical Reports Server (NTRS)

    Barrows, R. G.

    1977-01-01

    Necessary to the development and understanding of brittle fiber reinforced composites is a means to statistically describe fiber strength and strain-to-failure behavior. A statistical characterization for multicomponent brittle fibers is presented. The method, which is an extension of usual Weibull distribution procedures, statistically considers the components making up a fiber (e.g., substrate, sheath, and surface) as separate entities and taken together as in a fiber. Tensile data for silicon carbide fiber and for an experimental carbon-boron alloy fiber are evaluated in terms of the proposed multicomponent Weibull characterization.

  13. Empirical model based on Weibull distribution describing the destruction kinetics of natural microbiota in pineapple (Ananas comosus L.) puree during high-pressure processing.

    PubMed

    Chakraborty, Snehasis; Rao, Pavuluri Srinivasa; Mishra, Hari Niwas

    2015-10-15

    High-pressure inactivation of natural microbiota, viz. aerobic mesophiles (AM), psychrotrophs (PC), yeasts and molds (YM), total coliforms (TC) and lactic acid bacteria (LAB), in pineapple puree was studied within the experimental domain of 0.1-600 MPa and 30-50 °C with treatment times up to 20 min. Complete destruction of yeasts and molds was obtained at 500 MPa/50 °C/15 min, whereas no counts were detected for TC and LAB at 300 MPa/30 °C/15 min. A maximum of two log cycles of reduction was obtained for YM during pulse pressurization at the severe process intensity of 600 MPa/50 °C/20 min. The Weibull model clearly described the non-linearity of the survival curves during the isobaric period. A tailing effect, as confirmed by the shape parameter (β) of the survival curve, was obtained in the case of YM (β<1), whereas a shouldering effect (β>1) was observed for the other microbial groups. Analogously to thermal death kinetics, the activation energy (Ea, kJ·mol(-1)) and the activation volume (Va, mL·mol(-1)) were computed to describe the temperature and pressure dependencies of the scale parameter (δ, min), respectively. A higher δ value was obtained for each microbe at a lower temperature, and δ decreased with an increase in pressure. A secondary kinetic model was developed describing the inactivation rate (k, min(-1)) as a function of pressure (P, MPa) and temperature (T, K), including the dependencies of Ea and Va on P and T, respectively. PMID:26202323
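
    The primary (isobaric) Weibull survival model can be sketched as below, assuming the common log10 form and hypothetical count data.

```python
# Sketch: Weibull microbial survival model, log10(N/N0) = -(t/delta)**beta,
# fitted to hypothetical counts. beta > 1 gives a shoulder, beta < 1 a tail.
import numpy as np
from scipy.optimize import curve_fit

def log_survival(t, delta, beta):
    return -((t / delta) ** beta)

t = np.array([0.5, 2., 5., 10., 15., 20.])             # minutes under pressure
logS = np.array([-0.1, -0.5, -1.4, -2.8, -4.0, -5.1])  # hypothetical log10(N/N0)

(delta, beta), _ = curve_fit(log_survival, t, logS, p0=(5.0, 1.0))
print(f"scale delta = {delta:.2f} min, shape beta = {beta:.2f}")
```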

  14. Calculation of Weibull strength parameters and Batdorf flow-density constants for volume- and surface-flaw-induced fracture in ceramics

    NASA Technical Reports Server (NTRS)

    Pai, Shantaram S.; Gyekenyesi, John P.

    1988-01-01

    The calculation of shape and scale parameters of the two-parameter Weibull distribution is described using the least-squares analysis and maximum likelihood methods for volume- and surface-flaw-induced fracture in ceramics with complete and censored samples. Detailed procedures are given for evaluating 90 percent confidence intervals for maximum likelihood estimates of shape and scale parameters, the unbiased estimates of the shape parameters, and the Weibull mean values and corresponding standard deviations. Furthermore, the necessary steps are described for detecting outliers and for calculating the Kolmogorov-Smirnov and the Anderson-Darling goodness-of-fit statistics and 90 percent confidence bands about the Weibull distribution. It also shows how to calculate the Batdorf flaw-density constants by using the Weibull distribution statistical parameters. The techniques described were verified with several example problems from the open literature, and were coded in the Structural Ceramics Analysis and Reliability Evaluation (SCARE) design program.

  15. Shallow Flaws Under Biaxial Loading Conditions, Part II: Application of a Weibull Stress Analysis of the Cruciform Bend Specimen Using a Hydrostatic Stress Criterion

    SciTech Connect

    Bass, B.R.; McAfee, W.J.; Williams, P.T.

    1999-08-01

    Cruciform beam fracture mechanics specimens have been developed in the Heavy Section Steel Technology (HSST) Program at Oak Ridge National Laboratory (ORNL) to introduce a prototypic, far-field, out-of-plane biaxial bending stress component in the test section that approximates the nonlinear biaxial stresses resulting from pressurized-thermal-shock or pressure-temperature loading of a nuclear reactor pressure vessel (RPV). Matrices of cruciform beam tests were developed to investigate and quantify the effects of temperature, biaxial loading, and specimen size on the fracture initiation toughness of two-dimensional (constant depth), shallow surface flaws. Tests were conducted under biaxial load ratios ranging from uniaxial to equibiaxial. These tests demonstrated that biaxial loading can have a pronounced effect on shallow-flaw fracture toughness in the lower transition temperature region for RPV materials. Two- and three-parameter Weibull models have been calibrated using a new scheme (developed at the University of Illinois) that maps toughness data from test specimens with distinctly different levels of crack-tip constraint to a small-scale-yielding (SSY) Weibull stress space. These models, using the new hydrostatic stress criterion in place of the more commonly used maximum principal stress in the kernel of the Weibull stress (σW) integral definition, have been shown to correlate the experimentally observed biaxial effect in cruciform specimens, thereby providing a scaling mechanism between uniaxial and biaxial loading states.

  16. Distributed analysis in ATLAS

    NASA Astrophysics Data System (ADS)

    Dewhurst, A.; Legger, F.

    2015-12-01

    The ATLAS experiment accumulated more than 140 PB of data during the first run of the Large Hadron Collider (LHC) at CERN. The analysis of such an amount of data is a challenging task for the distributed physics community. The Distributed Analysis (DA) system of the ATLAS experiment is an established and stable component of the ATLAS distributed computing operations. About half a million user jobs are running daily on DA resources, submitted by more than 1500 ATLAS physicists. The reliability of the DA system during the first run of the LHC and the following shutdown period has been high thanks to the continuous automatic validation of the distributed analysis sites and the user support provided by a dedicated team of expert shifters. During the LHC shutdown, the ATLAS computing model has undergone several changes to improve the analysis workflows, including the re-design of the production system, a new analysis data format and event model, and the development of common reduction and analysis frameworks. We report on the impact such changes have on the DA infrastructure, describe the new DA components, and include recent performance measurements.

  17. Calculation of Weibull strength parameters, Batdorf flaw density constants and related statistical quantities using PC-CARES

    NASA Technical Reports Server (NTRS)

    Szatmary, Steven A.; Gyekenyesi, John P.; Nemeth, Noel N.

    1990-01-01

    This manual describes the operation and theory of the PC-CARES (Personal Computer-Ceramic Analysis and Reliability Evaluation of Structures) computer program for the IBM PC and compatibles running the PC-DOS/MS-DOS or IBM/MS-OS/2 (version 1.1 or higher) operating systems. The primary purpose of this code is to estimate Weibull material strength parameters, the Batdorf crack density coefficient, and other related statistical quantities. Included in the manual is the description of the calculation of shape and scale parameters of the two-parameter Weibull distribution using the least-squares analysis and maximum likelihood methods for volume- and surface-flaw-induced fracture in ceramics with complete and censored samples. The methods for detecting outliers and for calculating the Kolmogorov-Smirnov and the Anderson-Darling goodness-of-fit statistics and 90 percent confidence bands about the Weibull line, as well as the techniques for calculating the Batdorf flaw-density constants, are also described.

  18. Atlas Distributed Analysis Tools

    NASA Astrophysics Data System (ADS)

    de La Hoz, Santiago Gonzalez; Ruiz, Luis March; Liko, Dietrich

    2008-06-01

    The ATLAS production system has been successfully used to run production of simulation data at an unprecedented scale. Up to 10000 jobs were processed in one day. The experience obtained operating the system on several grid flavours was essential for performing user analysis using grid resources. First tests of the distributed analysis system were then performed. In the preparation phase, data were registered in the LHC File Catalog (LFC) and replicated to external sites. For the main test, few resources were used. All these tests are only a first step towards the validation of the computing model. The ATLAS computing management board decided to integrate the collaboration's distributed-analysis efforts into a single project, GANGA. The goal is to test the reconstruction and analysis software in large-scale data production using Grid flavours at several sites. GANGA allows trivial switching between running test jobs on a local batch system and running large-scale analyses on the Grid; it provides job splitting and merging, and includes automated job monitoring and output retrieval.

  19. Structural characterization of genomes by large scale sequence-structure threading: application of reliability analysis in structural genomics

    PubMed Central

    Cherkasov, Artem; Ho Sui, Shannan J; Brunham, Robert C; Jones, Steven JM

    2004-01-01

    Background We establish that the occurrence of protein folds among genomes can be accurately described with a Weibull function. Systems which exhibit Weibull character can be interpreted with reliability theory commonly used in engineering analysis. For instance, Weibull distributions are widely used in reliability, maintainability and safety work to model time-to-failure of mechanical devices, mechanisms, building constructions and equipment. Results We have found that the Weibull function describes protein fold distribution within and among genomes more accurately than conventional power functions which have been used in a number of structural genomic studies reported to date. It has also been found that the Weibull reliability parameter β for protein fold distributions varies between genomes and may reflect differences in rates of gene duplication in evolutionary history of organisms. Conclusions The results of this work demonstrate that reliability analysis can provide useful insights and testable predictions in the fields of comparative and structural genomics. PMID:15274750

  20. Measuring the Weibull modulus of microscope slides

    NASA Technical Reports Server (NTRS)

    Sorensen, Carl D.

    1992-01-01

    The objectives are that students will understand why a three-point bending test is used for ceramic specimens, learn how Weibull statistics are used to measure the strength of brittle materials, and appreciate the amount of variation in the strength of brittle materials with low Weibull modulus. They will understand how the modulus of rupture is used to represent the strength of specimens in a three-point bend test. In addition, students will learn that a logarithmic transformation can be used to convert an exponent into the slope of a straight line. The experimental procedures are explained.
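
    The classroom calculation reads, in outline, as follows; the loads and specimen dimensions are invented for illustration.

```python
# Sketch of the lab calculation: modulus of rupture from three-point
# bending, sigma = 3*F*L / (2*b*d**2), then the logarithmic transformation
# that turns the Weibull modulus into the slope of a straight line.
# Loads and dimensions are hypothetical.
import numpy as np

F = np.sort(np.array([18., 21., 23., 25., 27., 30., 33., 36.]))  # failure loads (N)
L, b, d = 0.060, 0.025, 0.001          # span, width, thickness (m)

mor = 3 * F * L / (2 * b * d**2)       # modulus of rupture (Pa)
n = len(mor)
Pf = (np.arange(1, n + 1) - 0.5) / n   # failure-probability plotting positions

slope, _ = np.polyfit(np.log(mor), np.log(-np.log(1 - Pf)), 1)
print(f"Weibull modulus of the slides ~ {slope:.1f}")
```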

  1. Probability density function characterization for aggregated large-scale wind power based on Weibull mixtures

    DOE PAGESBeta

    Gomez-Lazaro, Emilio; Bueso, Maria C.; Kessler, Mathieu; Martin-Martinez, Sergio; Zhang, Jie; Hodge, Bri -Mathias; Molina-Garcia, Angel

    2016-02-02

    Here, the Weibull probability distribution has been widely applied to characterize wind speeds for wind energy resources. Wind power generation modeling is different, however, due in particular to power curve limitations, wind turbine control methods, and transmission system operation requirements. These differences are even greater for aggregated wind power generation in power systems with high wind penetration. Consequently, models based on a one-Weibull component can provide poor characterizations for aggregated wind power generation. With this aim, the present paper focuses on discussing Weibull mixtures to characterize the probability density function (PDF) for aggregated wind power generation. PDFs of wind power data are first classified attending to hourly and seasonal patterns. The selection of the number of components in the mixture is analyzed through two well-known criteria: the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Finally, the optimal number of Weibull components for maximum likelihood is explored for the defined patterns, including the estimated weight, scale, and shape parameters. Results show that multi-Weibull models are more suitable to characterize aggregated wind power data due to the impact of distributed generation, the variety of wind speed values and wind power curtailment.
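
    A compact sketch of the mixture-fitting and AIC/BIC selection idea, on synthetic data and limited to two components, follows; it is not the paper's estimation code.

```python
# Sketch: two-component Weibull mixture by direct likelihood maximization,
# compared with a single Weibull via AIC/BIC. Data are synthetic.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(3)
data = np.concatenate([3.0 * rng.weibull(2.0, 600),    # two synthetic regimes
                       9.0 * rng.weibull(4.0, 400)])

def nll_mix(p):
    w, k1, s1, k2, s2 = p
    pdf = (w * stats.weibull_min.pdf(data, k1, scale=s1)
           + (1 - w) * stats.weibull_min.pdf(data, k2, scale=s2))
    return -np.sum(np.log(pdf + 1e-300))

res = minimize(nll_mix, x0=[0.5, 2.0, 2.0, 2.0, 8.0],
               bounds=[(0.01, 0.99)] + [(0.1, 20.0), (0.1, 50.0)] * 2)

k, _, s = stats.weibull_min.fit(data, floc=0)
nll_one = -np.sum(stats.weibull_min.logpdf(data, k, 0, s))

n = len(data)
print("AIC one-Weibull:", 2 * 2 + 2 * nll_one)
print("AIC mixture    :", 2 * 5 + 2 * res.fun)   # smaller is preferred
print("BIC mixture    :", 5 * np.log(n) + 2 * res.fun)
```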

  2. Transmission overhaul and replacement predictions using Weibull and renewal theory

    NASA Technical Reports Server (NTRS)

    Savage, M.; Lewicki, D. G.

    1989-01-01

    A method to estimate the frequency of transmission overhauls is presented. This method is based on the two-parameter Weibull statistical distribution for component life. A second method is presented to estimate the number of replacement components needed to support the transmission overhaul pattern. The second method is based on renewal theory. Confidence statistics are applied with both methods to improve the statistical estimate of sample behavior. A transmission example is also presented to illustrate the use of the methods. Transmission overhaul frequency and component replacement calculations are included in the example.

  3. Transmission overhaul and replacement predictions using Weibull and renewal theory

    NASA Technical Reports Server (NTRS)

    Savage, M.; Lewicki, D. G.

    1989-01-01

    A method to estimate the frequency of transmission overhauls is presented. This method is based on the two-parameter Weibull statistical distribution for component life. A second method is presented to estimate the number of replacement components needed to support the transmission overhaul pattern. The second method is based on renewal theory. Confidence statistics are applied with both methods to improve the statistical estimate of sample behavior. A transmission example is also presented to illustrate the use of the methods. Transmission overhaul frequency and component replacement calculations are included in the example.
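
    The renewal-theory step can also be approximated by simulation; the sketch below counts Weibull-distributed component replacements over an overhaul horizon, with hypothetical parameters.

```python
# Sketch: estimating spare-part needs by simulating a Weibull renewal
# process over an overhaul horizon. Parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(4)
beta, eta = 2.5, 4000.0        # component Weibull shape and scale (hr)
horizon = 10000.0              # operating hours between overhauls
n_sims = 20000

renewals = np.zeros(n_sims, dtype=int)
for i in range(n_sims):
    t = 0.0
    while True:
        t += eta * rng.weibull(beta)   # draw the next component life
        if t > horizon:
            break
        renewals[i] += 1

print(f"expected replacements per slot: {renewals.mean():.2f}; "
      f"95th percentile: {np.percentile(renewals, 95):.0f}")
```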

  4. Plume height, volume, and classification of explosive volcanic eruptions based on the Weibull function

    NASA Astrophysics Data System (ADS)

    Bonadonna, Costanza; Costa, Antonio

    2013-08-01

    The Weibull distribution between volume and square root of isopach area has recently been introduced for determining the volume of tephra deposits, which is crucial to the assessment of the magnitude and hazards of explosive volcanoes. We show how the decay of the size of the largest lithics with the square root of isopleth area can also be well described using a Weibull function, and how plume height correlates strongly with the corresponding Weibull parameters. Variations of median grain size (Md ϕ) values with the square root of the area of the associated contours can similarly be well fitted with a Weibull function. Weibull parameters, derived for both the thinning of tephra deposits and the decrease of grain size (both maximum lithic diameter and Md ϕ) with a proxy for the distance from the vent (e.g., square root of isoline areas), can be combined to classify the style of explosive volcanic eruptions. Accounting for the uncertainty in the derivation of eruptive parameters (e.g., plume height and volume of tephra deposits) is crucial to any classification of eruptive style and hazard assessment. Considering a typical uncertainty of 20 % in the determination of plume height, a new eruption classification scheme based on selected Weibull parameters is proposed. Ultraplinian, Plinian, Subplinian, and small-moderate explosive eruptions are defined on the grounds of plume height and mass eruption rate. Overall, the Weibull fitting represents a versatile and reliable strategy for the estimation of both the volume of tephra deposits and the height of volcanic plumes, and for the classification of eruptive style. Nonetheless, due to the typically large uncertainties (mainly due to the availability of data, the compilation of isopach and isopleth maps, and discrepancies from empirical best fits), plume height, volume, and magnitude of explosive eruptions cannot be considered absolute values, regardless of the technique used. It is important that various empirical and analytical methods are applied in order to assess such uncertainty.

  5. Probability of fade and BER performance of FSO links over the exponentiated Weibull fading channel under aperture averaging

    NASA Astrophysics Data System (ADS)

    Barrios, Ricardo; Dios, Federico

    2012-10-01

    Recently, the exponentiated Weibull (EW) distribution has been proposed to model the fading channel in free-space optical links. It has been suggested that the EW distribution can model the probability density function (PDF) of the irradiance under weak-to-strong turbulence conditions in the presence of aperture averaging. Here, we carry out an analysis of the probability of fade and bit-error-rate (BER) performance using simulation results and experimental data. The BER analysis assumes intensity modulation/direct detection with on-off keying, and new expressions are derived. Data are modeled following the statistics of the EW fading channel model and compared with the Gamma-Gamma (GG) and Lognormal (LN) distributions, the most widely accepted models nowadays. It is shown that the proposed EW model is valid in all the tested conditions and even outperforms the GG and LN distributions, which are only valid under certain scenarios.
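
    Because the EW distribution has a closed-form CDF, F(I) = [1 - exp(-(I/η)^β)]^α, both sampling and probability-of-fade evaluation are direct; the parameter values below are assumptions, not fits to the experimental data.

```python
# Sketch: inverse-transform sampling of the exponentiated Weibull (EW)
# irradiance model, plus a direct probability-of-fade evaluation from
# its closed-form CDF. Parameter values are hypothetical.
import numpy as np

def ew_cdf(i, alpha, beta, eta):
    return (1.0 - np.exp(-((i / eta) ** beta))) ** alpha

def ew_sample(u, alpha, beta, eta):
    # Inverse of the EW CDF applied to uniform draws u
    return eta * (-np.log(1.0 - u ** (1.0 / alpha))) ** (1.0 / beta)

alpha, beta, eta = 3.0, 1.5, 0.9       # assumed EW parameters
rng = np.random.default_rng(5)
irr = ew_sample(rng.uniform(size=100000), alpha, beta, eta)

i_th = 0.3                             # fade threshold (normalized irradiance)
print("P(fade), Monte Carlo:", np.mean(irr < i_th))
print("P(fade), closed form:", ew_cdf(i_th, alpha, beta, eta))
```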

  6. Statistical analysis of bubble and crystal size distributions: Application to Colorado Plateau basalts

    NASA Astrophysics Data System (ADS)

    Proussevitch, Alexander A.; Sahagian, Dork L.; Carlson, William D.

    2007-07-01

    A new analytical technique for the statistical analysis of bubble populations in volcanic rocks [Proussevitch, A.A., Sahagian, D.L. and Tsentalovich, E.P., 2007-this issue. Statistical analysis of bubble and crystal size distributions: Formulations and procedures. J. Volc. Geotherm. Res.] has been applied to a collection of Colorado Plateau basalts (96 samples). A variety of mono- and polymodal distributions has been found in the samples, all of which belong to the logarithmic family of statistical functions. Most samples have bimodal log normal distributions, while the others are represented by mono- or bimodal log logistic, and Weibull distributions. We have grouped the observed distributions into 11 groups depending on distribution types, mode location, and intensity. The nature of the curves within these groups can be interpreted as evolution of vesiculation processes. We conclude that within bimodal log normal distributions, the mode of smaller bubbles is the result of a second nucleation and growth event in a lava flow after eruption. In the case of log logistic distributions the larger mode results from coalescence of bubbles. Coalescence processes are reflected in growth of a larger mode and decreasing bubble number density. Another style of population evolution leads to a monomodal Weibull (or exponential) distribution as a result of superposition of multiple log normal distributions in which the modes are comparable in size and intensity. These various population distribution styles can be interpreted with an understanding of vesiculation processes that can be gained through appropriate numerical models of coalescence and population evolution. The applicable vesiculation processes include: a) a single nucleation-growth event, b) continuous multiple nucleation-growth events, c) coalescence, and d) Ostwald ripening.

  7. Modeling root reinforcement using a root-failure Weibull survival function

    NASA Astrophysics Data System (ADS)

    Schwarz, M.; Giadrossich, F.; Cohen, D.

    2013-11-01

    Root networks contribute to slope stability through complex interactions with soil that include mechanical compression and tension. Due to the spatial heterogeneity of root distribution and the dynamics of root turnover, the quantification of root reinforcement on steep slopes is challenging and consequently the calculation of slope stability also. Although considerable progress has been made, some important aspects of root mechanics remain neglected. In this study we address specifically the role of root-strength variability on the mechanical behavior of a root bundle. Many factors contribute to the variability of root mechanical properties even within a single class of diameter. This work presents a new approach for quantifying root reinforcement that considers the variability of mechanical properties of each root diameter class. Using the data of laboratory tensile tests and field pullout tests, we calibrate the parameters of the Weibull survival function to implement the variability of root strength in a numerical model for the calculation of root reinforcement (RBMw). The results show that, for both laboratory and field data sets, the parameters of the Weibull distribution may be considered constant with the exponent equal to 2 and the normalized failure displacement equal to 1. Moreover, the results show that the variability of root strength in each root diameter class has a major influence on the behavior of a root bundle with important implications when considering different approaches in slope stability calculation. Sensitivity analysis shows that the calibration of the equations of the tensile force, the elasticity of the roots, and the root distribution are the most important steps. The new model allows the characterization of root reinforcement in terms of maximum pullout force, stiffness, and energy. Moreover, it simplifies the implementation of root reinforcement in slope stability models. The realistic quantification of root reinforcement for tensile, shear and compression behavior allows for the consideration of the stabilization effects of root networks on steep slopes and the influence that this has on the triggering of shallow landslides.

  8. The distribution of first-passage times and durations in FOREX and future markets

    NASA Astrophysics Data System (ADS)

    Sazuka, Naoya; Inoue, Jun-ichi; Scalas, Enrico

    2009-07-01

    Possible distributions are discussed for intertrade durations and first-passage processes in financial markets. The viewpoint of renewal theory is adopted. In order to represent market data with relatively long durations, two types of distributions are used, namely a distribution derived from the Mittag-Leffler survival function and the Weibull distribution. For the Mittag-Leffler type distribution, the average waiting time (residual life time) is strongly dependent on the choice of a cut-off parameter tmax, whereas the results based on the Weibull distribution do not depend on such a cut-off. Therefore, a Weibull distribution is more convenient than a Mittag-Leffler type if one wishes to evaluate relevant statistics such as average waiting time in financial markets with long durations. On the other hand, we find that the Gini index is rather independent of the cut-off parameter. Based on the above considerations, we propose a good candidate for describing the distribution of first-passage time in a market: the Weibull distribution with a power-law tail. This distribution closes the gap between theoretical and empirical results more efficiently than a simple Weibull distribution. It should be stressed that a Weibull distribution with a power-law tail is more flexible than the Mittag-Leffler distribution, which itself can be approximated by a Weibull distribution and a power-law. Indeed, the key point is that in the former case there is freedom of choice for the exponent of the power-law attached to the Weibull distribution, which can exceed 1 in order to reproduce decays faster than possible with a Mittag-Leffler distribution. We also give a useful formula to determine an optimal crossover point minimizing the difference between the empirical average waiting time and the one predicted from renewal theory. Moreover, we discuss the limitations of our distributions by applying them to the analysis of the BTP future and calculating the average waiting time. We find that our distribution is applicable as long as durations follow a Weibull law for short times and do not have too heavy a tail.
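
    The renewal-theory average waiting time invoked above has a closed form for Weibull intervals; a short sketch with illustrative parameters follows.

```python
# Sketch: renewal-theory mean waiting (residual) time for Weibull intervals.
# For a stationary renewal process, E[W] = E[tau^2] / (2 E[tau]), which for
# Weibull(k, lam) reduces to gamma functions. Parameters are illustrative.
import numpy as np
from scipy.special import gamma

def weibull_mean_wait(k, lam):
    return lam * gamma(1.0 + 2.0 / k) / (2.0 * gamma(1.0 + 1.0 / k))

k, lam = 0.6, 50.0                   # heavy-ish tail typical of durations (s)
print(f"mean duration: {lam * gamma(1 + 1/k):.1f} s")
print(f"mean waiting : {weibull_mean_wait(k, lam):.1f} s")
# With k < 1 the mean wait exceeds the mean duration (inspection paradox).
```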

  9. Modelling memory processes and Internet response times: Weibull or power-law?

    NASA Astrophysics Data System (ADS)

    Chessa, Antonio G.; Murre, Jaap M. J.

    2006-07-01

    The Weibull distribution is proposed as a model for response times. Theoretical support is offered by classical results for extreme-value distributions. Fits of the Weibull distribution to response time data in different contexts show that this distribution (and the exponential distribution on small time-scales) performs better than the often-suggested power-law and logarithmic function. This study suggests that the power-law can be viewed as an approximation, at the neural level, for the aggregate strength of superposed memory traces that have different decay rates in distinct parts of the brain. As we predict, this view does not find support at the level of induced response processes. The distinction between underlying and induced processes might also be considered in other fields, such as engineering, biology and physics.

  10. Distributed and Collaborative Software Analysis

    NASA Astrophysics Data System (ADS)

    Ghezzi, Giacomo; Gall, Harald C.

    Throughout the years software engineers have come up with a myriad of specialized tools and techniques that focus on a certain type of software analysis, such as source code analysis, co-change analysis or bug prediction. However, easy and straightforward synergies between these analyses and tools rarely exist because of their stand-alone nature, their platform dependence, their different input and output formats and the variety of data to analyze. As a consequence, distributed and collaborative software analysis scenarios, and in particular interoperability, are severely limited. We describe a distributed and collaborative software analysis platform that allows for a seamless interoperability of software analysis tools across platform, geographical and organizational boundaries. We realize software analysis tools as services that can be accessed and composed over the Internet. These distributed analysis services shall be widely accessible in our incrementally augmented Software Analysis Broker, where organizations and tool providers can register and share their tools. To allow (semi-)automatic use and composition of these tools, they are classified and mapped into a software analysis taxonomy and adhere to specific meta-models and ontologies for their category of analysis.

  11. Spatial and temporal patterns of global onshore wind speed distribution

    NASA Astrophysics Data System (ADS)

    Zhou, Yuyu; Smith, Steven J.

    2013-09-01

    Wind power, a renewable energy source, can play an important role in electrical energy generation. Information regarding wind energy potential is important both for energy related modeling and for decision-making in the policy community. While wind speed datasets with high spatial and temporal resolution are often ultimately used for detailed planning, simpler assumptions are often used in analysis work. An accurate representation of the wind speed frequency distribution is needed in order to properly characterize wind energy potential. Using a power density method, this study estimated global variation in wind parameters as fitted to a Weibull density function using NCEP Climate Forecast System Reanalysis (CFSR) data over land areas. The Weibull distribution performs well in fitting the time series wind speed data at most locations according to R2, root mean square error, and power density error. The wind speed frequency distribution, as represented by the Weibull k parameter, exhibits a large amount of spatial variation, a regionally varying amount of seasonal variation, and relatively low decadal variation. We also analyzed the potential error in wind power estimation when a commonly assumed Rayleigh distribution (Weibull k = 2) is used. We find that the assumption of the same Weibull parameter across large regions can result in non-negligible errors. While large-scale wind speed data are often presented in the form of mean wind speeds, these results highlight the need to also provide information on the wind speed frequency distribution.
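
    The abstract does not spell out the estimator, but a common power density (energy pattern factor) variant proceeds as in this sketch; the empirical constant 3.69 and the synthetic data are assumptions of the illustration, not the study's method or values.

        import numpy as np
        from scipy.special import gamma

        def weibull_from_power_density(v):
            """Estimate Weibull k and c from a wind speed series via the
            energy pattern factor (power density) method."""
            epf = np.mean(v**3) / np.mean(v)**3       # energy pattern factor
            k = 1.0 + 3.69 / epf**2                   # empirical shape estimator
            c = np.mean(v) / gamma(1.0 + 1.0 / k)     # scale from the mean speed
            return k, c

        rng = np.random.default_rng(1)
        v = rng.weibull(2.2, size=10_000) * 7.5       # synthetic wind speeds, m/s
        k, c = weibull_from_power_density(v)
        print(f"k = {k:.2f}, c = {c:.2f} m/s")

        # Error in mean wind power (proportional to E[v^3]) if a Rayleigh
        # distribution (k = 2) is assumed while matching the mean speed:
        c_ray = np.mean(v) / gamma(1.5)
        power_true = c**3 * gamma(1.0 + 3.0 / k)
        power_ray = c_ray**3 * gamma(1.0 + 3.0 / 2.0)
        print(f"relative power error: {(power_ray - power_true) / power_true:.1%}")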

  12. Moment series for the coefficient of variation in Weibull sampling

    SciTech Connect

    Bowman, K.O.; Shenton, L.R.

    1981-01-01

    For the two-parameter Weibull distribution function F(t) = 1 - exp[-(t/b)^c], t > 0, with c and b positive, a moment estimator c* for c is the solution of the equation Γ(1 + 2/c*)/Γ²(1 + 1/c*) = 1 + v*², where v* is the coefficient of variation in the form √m₂/m₁′, m₁′ being the sample mean and m₂ the sample second central moment (it is trivial in the present context to replace m₂ by the variance). One approach to the moments of c* (Bowman and Shenton, 1981) is to set up moment series for the scale-free v*. The series are apparently divergent and summation algorithms are essential; we consider methods due to Levin (1973) and one introduced by ourselves (Bowman and Shenton, 1976).
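
    The moment estimator is easy to evaluate numerically; a sketch solving the equation above for c* (the bracketing interval is chosen heuristically):

        import numpy as np
        from scipy.special import gamma
        from scipy.optimize import brentq

        def moment_estimate_c(sample):
            """Moment estimator c* for the Weibull shape: solve
            Γ(1 + 2/c)/Γ²(1 + 1/c) = 1 + v*², with v* = √m₂/m₁′."""
            m1 = np.mean(sample)                      # m₁′, the sample mean
            m2 = np.mean((sample - m1) ** 2)          # m₂, second central moment
            rhs = 1.0 + m2 / m1**2
            f = lambda c: gamma(1 + 2.0 / c) / gamma(1 + 1.0 / c) ** 2 - rhs
            return brentq(f, 0.05, 50.0)

        rng = np.random.default_rng(0)
        data = rng.weibull(1.8, size=2000) * 3.0      # synthetic Weibull sample
        print(moment_estimate_c(data))                # should be near 1.8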

  13. The ATLAS distributed analysis system

    NASA Astrophysics Data System (ADS)

    Legger, F.; Atlas Collaboration

    2014-06-01

    In the LHC operations era, analysis of the multi-petabyte ATLAS data sample by globally distributed physicists is a challenging task. To attain the required scale the ATLAS Computing Model was designed around the concept of Grid computing, realized in the Worldwide LHC Computing Grid (WLCG), the largest distributed computational resource existing in the sciences. The ATLAS experiment currently stores over 140 PB of data and runs about 140,000 concurrent jobs continuously at WLCG sites. During the first run of the LHC, the ATLAS Distributed Analysis (DA) service has operated stably and scaled as planned. More than 1600 users submitted jobs in 2012, with 2 million or more analysis jobs per week, peaking at about a million jobs per day. The system dynamically distributes popular data to expedite processing and maximally utilize resources. The reliability of the DA service is high and steadily improving; Grid sites are continually validated against a set of standard tests, and a dedicated team of expert shifters provides user support and communicates user problems to the sites. Both the user support techniques and the direct feedback of users have been effective in improving the success rate and user experience when utilizing the distributed computing environment. In this contribution a description of the main components, activities and achievements of ATLAS distributed analysis is given. Several future improvements being undertaken will be described.

  14. Collective Weibull behavior of social atoms: Application of the rank-ordering statistics to historical extreme events

    NASA Astrophysics Data System (ADS)

    Chen, Chien-Chih; Tseng, Chih-Yuan; Telesca, Luciano; Chi, Sung-Ching; Sun, Li-Chung

    2012-02-01

    Analogous to crustal earthquakes in natural fault systems, we here consider dynasty collapses as extreme events in human society. Duration data for ancient Chinese and Egyptian dynasties provide a good opportunity to explore the collective behavior of so-called social atoms. By means of rank-ordering statistics, we demonstrate that the duration data of those ancient dynasties can be described with good accuracy by the Weibull distribution. It is thus remarkable that the distribution of time to failure of human society, i.e. the collapse of a historical dynasty, follows the widely accepted Weibull process, just as natural materials fail.
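
    As an illustration of the rank-ordering approach (with invented durations, not the historical catalogs): on Weibull probability paper, ln(-ln(1 - F)) against ln(t) is a straight line if the data follow a Weibull law, and its slope estimates the shape parameter.

        import numpy as np

        def weibull_plot_coords(durations):
            """Coordinates for a Weibull probability plot using
            median-rank plotting positions."""
            t = np.sort(np.asarray(durations, dtype=float))
            n = len(t)
            f = (np.arange(1, n + 1) - 0.3) / (n + 0.4)
            return np.log(t), np.log(-np.log(1.0 - f))

        x, y = weibull_plot_coords([12, 33, 55, 98, 150, 210, 276])  # years, hypothetical
        slope = np.polyfit(x, y, 1)[0]
        print(f"estimated Weibull shape = {slope:.2f}")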

  15. Distributed data analysis in LHCb

    NASA Astrophysics Data System (ADS)

    Paterson, S. K.; Maier, A.

    2008-07-01

    The LHCb distributed data analysis system consists of the Ganga job submission front-end and the DIRAC Workload and Data Management System (WMS). Ganga is jointly developed with ATLAS and allows LHCb users to submit jobs to several backends, including various batch systems, LCG and DIRAC. The DIRAC API provides a transparent and secure way for users to submit jobs to the Grid and is the default mode of submission for the LHCb Virtual Organisation (VO). This is exploited by Ganga to perform distributed user analysis for LHCb. This system provides LHCb with a consistent, efficient and simple user experience in a variety of heterogeneous environments and facilitates the incremental development of user analysis from local test jobs to the Worldwide LHC Computing Grid. With a steadily increasing number of users, the LHCb distributed analysis system has been tuned and enhanced over the past two years. This paper describes the recent developments to support distributed data analysis for the LHCb experiment on WLCG.

  16. Stratified Weibull Regression Model for Interval-Censored Data

    PubMed Central

    Gu, Xiangdong; Shapiro, David; Hughes, Michael D.; Balasubramanian, Raji

    2016-01-01

    Interval censored outcomes arise when a silent event of interest is known to have occurred within a specific time period determined by the times of the last negative and first positive diagnostic tests. There is a rich literature on parametric and non-parametric approaches for the analysis of interval-censored outcomes. A commonly used strategy is to use a proportional hazards (PH) model with the baseline hazard function parameterized. The proportional hazards assumption can be relaxed in stratified models by allowing the baseline hazard function to vary across strata defined by a subset of explanatory variables. In this paper, we describe and implement a new R package straweib, for fitting a stratified Weibull model appropriate for interval censored outcomes. We illustrate the R package straweib by analyzing data from a longitudinal oral health study on the timing of the emergence of permanent teeth in 4430 children. PMID:26835159

  17. Effect of Individual Component Life Distribution on Engine Life Prediction

    NASA Technical Reports Server (NTRS)

    Zaretsky, Erwin V.; Hendricks, Robert C.; Soditus, Sherry M.

    2003-01-01

    The effect of individual engine component life distributions on engine life prediction was determined. A Weibull-based life and reliability analysis of the NASA Energy Efficient Engine was conducted. The engine's life at a 95 and 99.9 percent probability of survival was determined based upon the engine manufacturer's original life calculations and assumed values of each component's cumulative life distribution as represented by a Weibull slope. The lives of the high-pressure turbine (HPT) disks and blades were also evaluated individually and as a system in a similar manner. Knowing the statistical cumulative distribution of each engine component with reasonable engineering certainty is a condition precedent to predicting the life and reliability of an entire engine. The life of a system at a given reliability will be less than the life of the lowest-lived component in the system at the same reliability (probability of survival). Where the Weibull slopes of all the engine components are equal, the Weibull slope had a minimal effect on engine L0.1 life prediction. However, at a probability of survival of 95 percent (L5 life), life decreased with increasing Weibull slope.
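
    A sketch of the underlying reliability arithmetic: for a series system of Weibull components, the life at reliability R solves sum_i (t/eta_i)^beta_i = -ln R and is necessarily below the lowest-lived component's life at the same reliability. The component parameters below are hypothetical, not the engine values.

        import numpy as np
        from scipy.optimize import brentq

        def system_life(reliability, etas, betas):
            """Life t at which a series system of Weibull components
            still has the given reliability."""
            etas, betas = np.asarray(etas), np.asarray(betas)
            f = lambda t: np.sum((t / etas) ** betas) + np.log(reliability)
            return brentq(f, 1e-9, 10 * etas.max())

        etas = [9000.0, 14000.0, 20000.0]   # characteristic lives, hours (hypothetical)
        betas = [1.5, 2.0, 3.0]             # Weibull slopes (hypothetical)
        print(system_life(0.999, etas, betas))  # system L0.1 life
        print(system_life(0.95, etas, betas))   # system L5 life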

  18. Aerospace Applications of Weibull and Monte Carlo Simulation with Importance Sampling

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.

    1998-01-01

    Recent developments in reliability modeling and computer technology have made it practical to use the Weibull time to failure distribution to model the system reliability of complex fault-tolerant computer-based systems. These system models are becoming increasingly popular in space systems applications as a result of mounting data that support the decreasing Weibull failure distribution and the expectation of increased system reliability. This presentation introduces the new reliability modeling developments and demonstrates their application to a novel space system application. The application is a proposed guidance, navigation, and control (GN&C) system for use in a long duration manned spacecraft for a possible Mars mission. Comparisons to the constant failure rate model are presented and the ramifications of doing so are discussed.

  19. A Weibull brittle material failure model for the ABAQUS computer program

    SciTech Connect

    Bennett, J.

    1991-08-01

    A statistical failure theory for brittle materials that traces its origins to the Weibull distribution function is developed for use in the general purpose ABAQUS finite element computer program. One of the fundamental assumptions for this development is that Mode 1 microfractures perpendicular to the direction of the principal stress contribute independently to the fast fracture. The theory is implemented by a user subroutine for ABAQUS. Example problems illustrating the capability and accuracy of the model are given. 24 refs., 12 figs.

  20. Distributed data analysis in ATLAS

    NASA Astrophysics Data System (ADS)

    Nilsson, Paul; Atlas Collaboration

    2012-12-01

    Data analysis using grid resources is one of the fundamental challenges to be addressed before the start of LHC data taking. The ATLAS detector will produce petabytes of data per year, and roughly one thousand users will need to run physics analyses on this data. Appropriate user interfaces and helper applications have been made available to ensure that the grid resources can be used without requiring expertise in grid technology. These tools enlarge the number of grid users from a few production administrators to potentially all participating physicists. ATLAS makes use of three grid infrastructures for the distributed analysis: the EGEE sites, the Open Science Grid, and NorduGrid. These grids are managed by the gLite workload management system, the PanDA workload management system, and ARC middleware; many sites can be accessed via both the gLite WMS and PanDA. Users can choose between two front-end tools to access the distributed resources. Ganga is a tool co-developed with LHCb to provide a common interface to the multitude of execution backends (local, batch, and grid). The PanDA workload management system provides a set of utilities called PanDA Client; with these tools users can easily submit Athena analysis jobs to the PanDA-managed resources. Distributed data is managed by Don Quixote 2, a system developed by ATLAS; DQ2 is used to replicate datasets according to the data distribution policies and maintains a central catalog of file locations. The operation of the grid resources is continually monitored by the GangaRobot functional testing system, and infrequent site stress tests are performed using the HammerCloud system. In addition, the DAST shift team is a group of power users who take shifts to provide distributed analysis user support; this team has effectively relieved the burden of support from the developers.

  1. Sugar Cane Nutrient Distribution Analysis

    NASA Astrophysics Data System (ADS)

    Zamboni, C. B.; da Silveira, M. A. G.; Gennari, R. F.; Garcia, I.; Medina, N. H.

    2011-08-01

    Neutron Activation Analysis (NAA), Molecular Absorption Spectrometry (UV-Vis), and Flame Photometry techniques were applied to measure plant nutrient concentrations of Br, Ca, Cl, K, Mn, N, Na and P in sugar-cane root, stalk and leaves. These data will be used to explore the behavior of element concentration in different parts of the sugar-cane to better understand the plant nutrient distribution during its development.

  2. Distribution-free discriminant analysis

    SciTech Connect

    Burr, T.; Doak, J.

    1997-05-01

    This report describes our experience in implementing a non-parametric (distribution-free) discriminant analysis module for use in a wide range of pattern recognition problems. Issues discussed include performance results on both real and simulated data sets, comparisons to other methods, and the computational environment. In some cases, this module performs better than other existing methods. Nearly all cases can benefit from the application of multiple methods.

  3. Time-dependent fiber bundles with local load sharing. II. General Weibull fibers

    NASA Astrophysics Data System (ADS)

    Phoenix, S. Leigh; Newman, William I.

    2009-12-01

    Fiber bundle models (FBMs) are useful tools in understanding failure processes in a variety of material systems. While the fibers and load sharing assumptions are easily described, FBM analysis is typically difficult. Monte Carlo methods are also hampered by the severe computational demands of large bundle sizes, which overwhelm just as behavior relevant to real materials starts to emerge. For large size scales, interest continues in idealized FBMs that assume either equal load sharing (ELS) or local load sharing (LLS) among fibers, rules that reflect features of real load redistribution in elastic lattices. The present work focuses on a one-dimensional bundle of N fibers under LLS where life consumption in a fiber follows a power law in its load, with exponent ρ, integrated over time. This life consumption function is further embodied in a functional form resulting in a Weibull distribution for lifetime under constant fiber stress, with Weibull exponent β. Thus the failure rate of a fiber depends on its past load history, except for β = 1. We develop asymptotic results validated by Monte Carlo simulation using a computational algorithm developed in our previous work [Phys. Rev. E 63, 021507 (2001)] that greatly increases the size, N, of treatable bundles (e.g., 10^6 fibers in 10^3 realizations). In particular, our algorithm is O(N ln N), in contrast with former algorithms, which were O(N^2), making this investigation possible. Regimes are found for (ρ, β) pairs that yield contrasting behavior for large N. For β > 1 and large N, brittle weakest-volume behavior emerges in terms of characteristic elements (groupings of fibers) derived from critical cluster formation, and the lifetime eventually goes to zero as N → ∞, unlike ELS, which yields a finite limiting mean. For 1/2 ≤ β ≤ 1, however, LLS has remarkably similar behavior to ELS (appearing to be virtually identical for β = 1), with an asymptotic Gaussian lifetime distribution and a finite limiting mean for large N. The coefficient of variation follows a power law in increasing N but, except for β = 1, the value of the negative exponent is clearly less than 1/2, unlike in ELS bundles where the exponent remains 1/2 for 1/2 ≤ β ≤ 1. For β below a transition value, the lifetime distribution approaches the extreme-value distribution for the longest lived of a parallel group of independent elements, which applies exactly to β = 0; the lower the value of ρ, the higher the transition value of β below which such extreme-value behavior occurs. No evidence was found for limiting Gaussian behavior for ρ > 1 with 0 < β < 1/2, the regime in which β is tied to the Weibull exponent for fiber strength.
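
    The paper's LLS algorithm is involved, but the memoryless special case β = 1 under equal load sharing admits a few-line Monte Carlo; this toy sketch is not the authors' O(N ln N) method, and all parameters are illustrative.

        import numpy as np

        def els_bundle_lifetime(n_fibers, rho, load_per_fiber=0.5, rng=None):
            """Toy equal-load-sharing bundle for β = 1: each surviving
            fiber fails at a rate equal to its current load to the power ρ."""
            rng = rng or np.random.default_rng()
            total_load = n_fibers * load_per_fiber
            t, alive = 0.0, n_fibers
            while alive > 0:
                rate = alive * (total_load / alive) ** rho  # total failure rate
                t += rng.exponential(1.0 / rate)
                alive -= 1
            return t

        lifetimes = [els_bundle_lifetime(1000, rho=2.0, rng=np.random.default_rng(i))
                     for i in range(200)]
        print(np.mean(lifetimes), np.std(lifetimes))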

  4. Comparison of Weibull strength parameters from flexure and spin tests of brittle materials

    NASA Technical Reports Server (NTRS)

    Holland, Frederic A., Jr.; Zaretsky, Erwin V.

    1991-01-01

    Fracture data from five series of four-point bend tests of beams and spin tests of flat annular disks were reanalyzed. Silicon nitride and graphite were the test materials. The experimental fracture strengths of the disks were compared with the predicted strengths based on both volume flaw and surface flaw analyses of the four-point bend data. Volume flaw analysis resulted in a better correlation between disks and beams in three of the five test series than did surface flaw analysis. The Weibull moduli and characteristic gage strengths for the disks and beams were also compared. Differences in the experimental Weibull slopes were not statistically significant. It was shown that results from the beam tests can predict the fracture strength of rotating disks.

  5. Spatial and Temporal Patterns of Global Onshore Wind Speed Distribution

    SciTech Connect

    Zhou, Yuyu; Smith, Steven J.

    2013-09-09

    Wind power, a renewable energy source, can play an important role in electrical energy generation. Information regarding wind energy potential is important both for energy related modeling and for decision-making in the policy community. While wind speed datasets with high spatial and temporal resolution are often ultimately used for detailed planning, simpler assumptions are often used in analysis work. An accurate representation of the wind speed frequency distribution is needed in order to properly characterize wind energy potential. Using a power density method, this study estimated global variation in wind parameters as fitted to a Weibull density function using NCEP/CFSR reanalysis data. The estimated Weibull distribution performs well in fitting the time series wind speed data at the global level according to R2, root mean square error, and power density error. The spatial, decadal, and seasonal patterns of wind speed distribution were then evaluated. We also analyzed the potential error in wind power estimation when a commonly assumed Rayleigh distribution (Weibull k = 2) is used. We find that the assumption of the same Weibull parameter across large regions can result in substantial errors. While large-scale wind speed data are often presented in the form of average wind speeds, these results highlight the need to also provide information on the wind speed distribution.

  6. Moment series for moment estimators of the parameters of a Weibull density

    SciTech Connect

    Bowman, K.O.; Shenton, L.R.

    1982-01-01

    Taylor series for the first four moments of the coefficient of variation in sampling from a 2-parameter Weibull density are given; they are taken as far as the coefficient of n^(-24). From these, a four-moment approximating distribution is set up using summatory techniques on the series. The shape parameter is treated in a similar way, but here the moment equations are no longer explicit estimators, and terms only as far as those in n^(-12) are given. The validity of assessed moments and percentiles of the approximating distributions is studied. Consideration is also given to properties of the moment estimator for 1/c.

  7. The comparative kinetic analysis of Acetocell and Lignoboost® lignin pyrolysis: the estimation of the distributed reactivity models.

    PubMed

    Janković, Bojan

    2011-10-01

    The non-isothermal pyrolysis kinetics of Acetocell (organosolv) and Lignoboost® (kraft) lignins, in an inert atmosphere, have been studied by thermogravimetric analysis. Using isoconversional analysis, it was concluded that the apparent activation energy for all lignins depends strongly on conversion, showing that the pyrolysis of lignins is not a single chemical process. It was identified that the pyrolysis of Acetocell and Lignoboost® lignin takes place over three reaction steps, which was confirmed by the appearance of the corresponding isokinetic relationships (IKR). It was found that the major pyrolysis stage of both lignins is characterized by stilbene pyrolysis reactions, which are subsequently followed by decomposition reactions of products derived from the stilbene pyrolytic process. It was concluded that the non-isothermal pyrolysis of Acetocell and Lignoboost® lignins can best be described by n-th order (n>1) reaction kinetics, using the Weibull mixture model (as distributed reactivity model) with alternating shape parameters. PMID:21852115
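
    A sketch of a Weibull mixture used as a distributed-reactivity density, with two hypothetical components of differing shape; the paper's calibrated parameters are not reproduced here.

        import numpy as np

        def weibull_mixture_pdf(x, weights, shapes, scales):
            """Density of a Weibull mixture: weighted sum of component pdfs."""
            x = np.asarray(x, dtype=float)[:, None]
            w, k, lam = map(np.asarray, (weights, shapes, scales))
            comp = (k / lam) * (x / lam) ** (k - 1) * np.exp(-(x / lam) ** k)
            return comp @ w

        x = np.linspace(0.01, 3.0, 5)
        print(weibull_mixture_pdf(x, [0.6, 0.4], [1.2, 3.0], [0.5, 1.5]))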

  8. Statistical analysis of the reflectivity of ground clutter in the mm-wave band

    NASA Astrophysics Data System (ADS)

    Schimpf, H.

    Data from low-altitude airborne reflectivity measurements obtained at 95 and 35 GHz over different kinds of ground cover are compiled in tables and graphs and investigated statistically. The instrumentation for the flights is described; the reflectivity parameters are defined; and the statistical methods employed are explained. The fit of the observed frequency distributions to log-normal, log-Weibull, and Weibull distributions is assessed by applying a chi-squared test, and it is found that at low and moderate depression angles the clutter at both frequencies (independently of polarization) from most ground-cover types is well represented by a log-normal distribution. At steep depression angles, however, no simple distribution can be determined. If the analysis is limited to the high reflectivities only (as in false-alarm-rate calculations), a log-Weibull distribution is shown to give superior results.

  9. Weibull Statistics for Upper Ocean Currents with the Fokker-Planck Equation

    NASA Astrophysics Data System (ADS)

    Chu, P. C.

    2012-12-01

    Upper oceans typically exhibit a surface mixed layer with a thickness of a few to several hundred meters. This mixed layer is a key component in studies of climate, biological productivity and marine pollution. It is the link between the atmosphere and the deep ocean and directly affects the air-sea exchange of heat, momentum and gases. Vertically averaged horizontal currents across the mixed layer are driven by the residual between the Ekman transport and surface wind stress, and damped by Rayleigh friction. A set of stochastic differential equations is derived for the two components of the current vector (u, v). The joint probability distribution function of (u, v) satisfies the Fokker-Planck equation (Chu, 2008, 2009), with the Weibull distribution as the solution for the current speed. To verify this, the PDF of upper (0-50 m) tropical Pacific current speeds (w) was calculated from hourly ADCP data (1990-2007) at six stations of the Tropical Atmosphere Ocean project. It satisfies the two-parameter Weibull distribution reasonably well, with different characteristics between El Nino and La Nina events: in the western Pacific, the PDF of w has a larger peakedness during La Nina events than during El Nino events, and vice versa in the eastern Pacific. However, the PDF of w for the lower layer (100-200 m) does not fit the Weibull distribution as well as for the upper layer. This is due to the different stochastic differential equations for the upper and lower layers in the tropical Pacific. For the upper layer, the stochastic differential equations, established on the basis of Ekman dynamics, have an analytical solution, i.e., the Rayleigh distribution (the simplest form of the Weibull distribution), for constant eddy viscosity K. The Weibull distribution is also identified in the near-real-time ocean surface currents derived from satellite altimeter (JASON-1, GFO, ENVISAT) and scatterometer (QSCAT) data on a 1° × 1° grid for the world oceans (60°S to 60°N), the "Ocean Surface Current Analyses - Real Time (OSCAR)" product; this PDF shows little seasonal or interannual variation. Knowledge of the PDF of w during El Nino and La Nina events will improve ensemble horizontal flux calculations, which contribute to climate studies. References: Chu, P. C., 2008: Probability distribution function of the upper equatorial Pacific current speeds. Geophysical Research Letters, 35, doi:10.1029/2008GL033669. Chu, P. C., 2009: Statistical characteristics of the global surface current speeds obtained from satellite altimeter and scatterometer data. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2(1), 27-32.

  10. Statistical modeling of tornado intensity distributions

    NASA Astrophysics Data System (ADS)

    Dotzek, Nikolai; Grieser, Jürgen; Brooks, Harold E.

    We address the issue of determining an appropriate general functional shape for observed tornado intensity distributions. Recently, it was suggested that in the limit of long and large tornado records, exponential distributions over all positive Fujita or TORRO scale classes would result. Yet our analysis shows that even for large databases, observations contradict the validity of exponential distributions for weak (F0) and violent (F5) tornadoes. We show that observed tornado intensities can be much better described by Weibull distributions, for which the exponential remains a special case. Weibull fits in either the v or F scale reproduce the observations significantly better than exponentials. In addition, we suggest applying the original definition of negative intensity scales down to F-2 and T-4 (corresponding to v = 0 m s^-1), at least for climatological analyses. Weibull distributions allow for an improved risk assessment of violent tornadoes up to F6, and better estimates of total tornado occurrence, the degree of underreporting, and the existence of subcritical tornadic circulations below damaging intensity. Therefore, our results are relevant for climatologists and risk assessment managers alike.

  11. Distributed computing and nuclear reactor analysis

    SciTech Connect

    Brown, F.B.; Derstine, K.L.; Blomquist, R.N.

    1994-03-01

    Large-scale scientific and engineering calculations for nuclear reactor analysis can now be carried out effectively in a distributed computing environment, at costs far lower than for traditional mainframes. The distributed computing environment must include support for traditional system services, such as a queuing system for batch work, reliable filesystem backups, and parallel processing capabilities for large jobs. All ANL computer codes for reactor analysis have been adapted successfully to a distributed system based on workstations and X-terminals. Distributed parallel processing has been demonstrated to be effective for long-running Monte Carlo calculations.

  12. Distributed analysis with PROOF in ATLAS collaboration

    NASA Astrophysics Data System (ADS)

    Panitkin, S. Y.; Benjamin, D.; Carillo Montoya, G.; Cranmer, K.; Ernst, M.; Guan, W.; Ito, H.; Maeno, T.; Majewski, S.; Mellado, B.; Rind, O.; Shibata, A.; Tarrade, F.; Wenaus, T.; Xu, N.; Ye, S.

    2010-04-01

    The Parallel ROOT Facility (PROOF) is a distributed analysis system which allows exploitation of the inherent event-level parallelism of high energy physics data. PROOF can be configured to work with centralized storage systems, but it is especially effective together with distributed local storage systems - like Xrootd - when data are distributed over computing nodes. It works efficiently on different types of hardware and scales well from a multi-core laptop to large computing farms. From that point of view it is well suited for both large central analysis facilities and Tier 3 type analysis farms. PROOF can be used in interactive or batch-like regimes. The interactive regime allows the user to work with typically distributed data from the ROOT command prompt and get real-time feedback on analysis progress and intermediate results. We discuss our experience with PROOF in the context of ATLAS Collaboration distributed analysis. In particular, we discuss PROOF performance in various analysis scenarios and in multi-user, multi-session environments. We also describe PROOF integration with the ATLAS distributed data management system and prospects of running PROOF on geographically distributed analysis farms.

  13. Distribution analysis of airborne nicotine concentrations in hospitality facilities.

    PubMed

    Schorp, Matthias K; Leyden, Donald E

    2002-02-01

    A number of publications report statistical summaries for environmental tobacco smoke (ETS) concentrations. Despite compelling evidence for the data not being normally distributed, these publications typically report the arithmetic mean and standard deviation of the data, thereby losing important information related to the distribution of values contained in the original data. We were interested in the frequency distributions of reported nicotine concentrations in hospitality environments and subjected available data to distribution analyses. The distribution of experimental indoor airborne nicotine concentration data taken from hospitality facilities worldwide was fit to lognormal, Weibull, exponential, Pearson (Type V), logistic, and loglogistic distribution models. Comparison of goodness of fit (GOF) parameters and indications from the literature verified the selection of a lognormal distribution as the overall best model. When individual data were not reported in the literature, statistical summaries of results were used to model sets of lognormally distributed data that are intended to mimic the original data distribution. Grouping the data into various categories led to 31 frequency distributions that were further interpreted. The median values in nonsmoking environments are about half of the median values in smoking sections. When different continents are compared, Asian, European, and North American median values in restaurants are about a factor of three below levels encountered in other hospitality facilities. On a comparison of nicotine concentrations in North American smoking sections and nonsmoking sections, median values are about one-third of the European levels. The results obtained may be used to address issues related to exposure to ETS in the hospitality sector. PMID:11868665
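
    A minimal sketch of this kind of model screening, ranking candidate distributions on synthetic lognormal "concentration" data; AIC is used here as a crude stand-in for the goodness-of-fit parameters of the paper, and the candidate set and data are illustrative.

        import numpy as np
        from scipy import stats

        candidates = {
            "lognormal": stats.lognorm,
            "weibull": stats.weibull_min,
            "exponential": stats.expon,
            "loglogistic": stats.fisk,
        }

        def rank_models(data):
            """Rank candidate models by AIC (lower is better); the crude
            parameter count includes the location pinned at zero."""
            scores = {}
            for name, dist in candidates.items():
                params = dist.fit(data, floc=0)
                ll = np.sum(dist.logpdf(data, *params))
                scores[name] = 2 * len(params) - 2 * ll
            return sorted(scores.items(), key=lambda kv: kv[1])

        rng = np.random.default_rng(3)
        nicotine = rng.lognormal(mean=0.5, sigma=1.0, size=300)  # synthetic data
        print(rank_models(nicotine))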

  14. Weibull models of fracture strengths and fatigue behavior of dental resins in flexure and shear.

    PubMed

    Baran, G R; McCool, J I; Paul, D; Boberick, K; Wunder, S

    1998-01-01

    In estimating lifetimes of dental restorative materials, it is useful to have available data on the fatigue behavior of these materials. Current efforts at estimation include several untested assumptions related to the equivalence of flaw distributions sampled by shear, tensile, and compressive stresses. Environmental influences on material properties are not accounted for, and it is unclear if fatigue limits exist. In this study, the shear and flexural strengths of three resins used as matrices in dental restorative composite materials were characterized by Weibull parameters. It was found that shear strengths were lower than flexural strengths, liquid sorption had a profound effect on characteristic strengths, and the Weibull shape parameter obtained from shear data differed for some materials from that obtained in flexure. In shear and flexural fatigue, a power law relationship applied for up to 250,000 cycles; no fatigue limits were found, and the data thus imply only one flaw population is responsible for failure. Again, liquid sorption adversely affected strength levels in most materials (decreasing shear strengths and flexural strengths by factors of 2-3) and to a greater extent than did the degree of cure or material chemistry. PMID:9730059

  15. Weibull Effective Area for Hertzian Ring Crack Initiation Stress

    SciTech Connect

    Jadaan, Osama M.; Wereszczak, Andrew A; Johanns, Kurt E

    2011-01-01

    Spherical or Hertzian indentation is used to characterize and guide the development of engineered ceramics under consideration for diverse applications involving contact, wear, rolling fatigue, and impact. Ring crack initiation can be one important damage mechanism of Hertzian indentation. It is caused by sufficiently-high, surface-located, radial tensile stresses in an annular ring located adjacent to and outside of the Hertzian contact circle. While the maximum radial tensile stress is known to be dependent on the elastic properties of the sphere and target, the diameter of the sphere, the applied compressive force, and the coefficient of friction, the Weibull effective area too will be affected by those parameters. However, the estimations of a maximum radial tensile stress and Weibull effective area are difficult to obtain because the coefficient of friction during Hertzian indentation is complex, likely intractable, and not known a priori. Circumventing this, the Weibull effective area expressions are derived here for the two extremes that bracket all coefficients of friction; namely, (1) the classical, frictionless, Hertzian case where only complete slip occurs, and (2) the case where no slip occurs or where the coefficient of friction is infinite.

  16. Towards Distributed Memory Parallel Program Analysis

    SciTech Connect

    Quinlan, D; Barany, G; Panas, T

    2008-06-17

    This paper presents a parallel attribute evaluation for distributed memory parallel computer architectures, where previously only shared memory parallel support for this technique had been developed. Attribute evaluation is a part of how attribute grammars are used for program analysis within modern compilers. Within this work, we have extended ROSE, an open compiler infrastructure, with a distributed memory parallel attribute evaluation mechanism to support user-defined global program analysis required for some forms of security analysis, which cannot be addressed by a file-by-file view of large scale applications. As a result, user-defined security analyses may now run in parallel without the user having to specify the way data is communicated between processors. The automation of communication enables an extensible open-source parallel program analysis infrastructure.

  17. Shuttle Electrical Power Analysis Program (SEPAP) distribution circuit analysis report

    NASA Technical Reports Server (NTRS)

    Torina, E. M.

    1975-01-01

    An analysis and evaluation was made of the operating parameters of the shuttle electrical power distribution circuit under load conditions encountered during a normal Sortie 2 Mission, with emphasis on the main periods of liftoff and landing.

  18. On the Distribution of Earthquake Interevent Times and the Impact of Spatial Scale

    NASA Astrophysics Data System (ADS)

    Hristopulos, Dionissios

    2013-04-01

    The distribution of earthquake interevent times is a subject that has attracted much attention in the statistical physics literature [1-3]. A recent paper proposes that the distribution of earthquake interevent times follows from the interplay of the crustal strength distribution and the loading function (stress versus time) of the Earth's crust locally [4]. It was also shown that the Weibull distribution describes earthquake interevent times provided that the crustal strength also follows the Weibull distribution and that the loading function follows a power law during the loading cycle. I will discuss the implications of this work and will present supporting evidence based on the analysis of data from seismic catalogs. I will also discuss the theoretical evidence in support of the Weibull distribution based on models of statistical physics [5]. Since other-than-Weibull interevent time distributions are not excluded in [4], I will illustrate the use of the Kolmogorov-Smirnov test to determine which probability distributions are not rejected by the data. Finally, we propose a modification of the Weibull distribution if the size of the system under investigation (i.e., the area over which the earthquake activity occurs) is finite with respect to a critical link size. Keywords: hypothesis testing, modified Weibull, hazard rate, finite size. References: [1] Corral, A., 2004. Long-term clustering, scaling, and universality in the temporal occurrence of earthquakes, Phys. Rev. Lett., 92(10), art. no. 108501. [2] Saichev, A., Sornette, D., 2007. Theory of earthquake recurrence times, J. Geophys. Res., Ser. B 112, B04313/1-26. [3] Touati, S., Naylor, M., Main, I.G., 2009. Origin and nonuniversality of the earthquake interevent time distribution, Phys. Rev. Lett., 102(16), art. no. 168501. [4] Hristopulos, D.T., 2003. Spartan Gibbs random field models for geostatistical applications, SIAM Jour. Sci. Comput., 24, 2125-2162. [5] Eliazar, I., Klafter, J., 2006. Growth-collapse and decay-surge evolutions, and geometric Langevin equations, Physica A, 367, 106-128.

  19. Test Population Selection from Weibull-Based, Monte Carlo Simulations of Fatigue Life

    NASA Technical Reports Server (NTRS)

    Vlcek, Brian L.; Zaretsky, Erwin V.; Hendricks, Robert C.

    2008-01-01

    Fatigue life is probabilistic and not deterministic. Experimentally establishing the fatigue life of materials, components, and systems is both time consuming and costly. As a result, conclusions regarding fatigue life are often inferred from a statistically insufficient number of physical tests. A proposed methodology for comparing life results as a function of variability due to Weibull parameters, variability between successive trials, and variability due to the size of the experimental population is presented. Using Monte Carlo simulation of randomly selected lives from a large Weibull distribution, the variation in the L10 fatigue life of aluminum alloy AL6061 rotating rod fatigue tests was determined as a function of population size. These results were compared to the L10 fatigue lives of small (10 each) populations from AL2024, AL7075 and AL6061. For aluminum alloy AL6061, a simple algebraic relationship was established for the upper and lower L10 fatigue life limits as a function of the number of specimens failed. For most engineering applications where less than 30 percent variability can be tolerated in the maximum and minimum values, at least 30 to 35 test samples are necessary. The variability of test results based on small sample sizes can be greater than the actual differences, if any, that exist between materials and can result in erroneous conclusions. The fatigue life of AL2024 is statistically longer than that of AL6061 and AL7075. However, there is no statistical difference between the fatigue lives of AL6061 and AL7075, even though AL7075 had a fatigue life 30 percent greater than AL6061.
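
    A sketch of the methodology: draw repeated samples of size n from an assumed Weibull life population and watch the spread of the estimated L10 life shrink as n grows. The Weibull slope and characteristic life are hypothetical, not the AL6061 fits.

        import numpy as np

        def l10_life(lives):
            """L10 life: the life at which 10 percent of the population has failed."""
            return np.percentile(lives, 10)

        rng = np.random.default_rng(7)
        beta, eta = 2.0, 1.0e6   # hypothetical Weibull slope and characteristic life

        for n in (10, 35, 100):
            estimates = [l10_life(eta * rng.weibull(beta, size=n)) for _ in range(2000)]
            lo, hi = np.percentile(estimates, [5, 95])
            print(f"n={n:4d}: 90% of L10 estimates fall in [{lo:.3g}, {hi:.3g}] cycles")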

  20. Test Population Selection from Weibull-Based, Monte Carlo Simulations of Fatigue Life

    NASA Technical Reports Server (NTRS)

    Vlcek, Brian L.; Zaretsky, Erwin V.; Hendricks, Robert C.

    2012-01-01

    Fatigue life is probabilistic and not deterministic. Experimentally establishing the fatigue life of materials, components, and systems is both time consuming and costly. As a result, conclusions regarding fatigue life are often inferred from a statistically insufficient number of physical tests. A proposed methodology for comparing life results as a function of variability due to Weibull parameters, variability between successive trials, and variability due to the size of the experimental population is presented. Using Monte Carlo simulation of randomly selected lives from a large Weibull distribution, the variation in the L10 fatigue life of aluminum alloy AL6061 rotating rod fatigue tests was determined as a function of population size. These results were compared to the L10 fatigue lives of small (10 each) populations from AL2024, AL7075 and AL6061. For aluminum alloy AL6061, a simple algebraic relationship was established for the upper and lower L10 fatigue life limits as a function of the number of specimens failed. For most engineering applications where less than 30 percent variability can be tolerated in the maximum and minimum values, at least 30 to 35 test samples are necessary. The variability of test results based on small sample sizes can be greater than the actual differences, if any, that exist between materials and can result in erroneous conclusions. The fatigue life of AL2024 is statistically longer than that of AL6061 and AL7075. However, there is no statistical difference between the fatigue lives of AL6061 and AL7075, even though AL7075 had a fatigue life 30 percent greater than AL6061.

  1. Exponentiated Weibull model for the irradiance probability density function of a laser beam propagating through atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Barrios, Ricardo; Dios, Federico

    2013-02-01

    Many distributions have been proposed to model the probability density function (PDF) of irradiance fluctuations. The most widespread models nowadays are the Lognormal (LN) and Gamma-Gamma (GG) distributions. Although these models match the measured PDF data most of the time, neither works in all scenarios and, depending on the conditions, one of the two has to be chosen. In this paper, a new model is presented, the exponentiated Weibull (EW) distribution, along with a physical justification for its appearance. Previously published data are used to compare the new model with the LN and GG distributions. Results suggest that the EW distribution provides the better fit to data under all aperture averaging conditions and in the weak-to-strong turbulence regime.
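
    For reference, the exponentiated Weibull density is available in scipy as exponweib; a sketch evaluating it both through scipy and written out explicitly (the parameter values are illustrative, not the fitted turbulence values):

        import numpy as np
        from scipy import stats

        alpha, beta, eta = 0.8, 2.1, 1.0   # illustrative EW parameters
        x = np.linspace(0.01, 3.0, 5)      # normalized irradiance

        # scipy's parameterization: a = alpha (outer exponent), c = beta (shape).
        print(stats.exponweib.pdf(x, a=alpha, c=beta, scale=eta))

        # The same density written out explicitly:
        t = (x / eta) ** beta
        pdf = (alpha * (beta / eta) * (x / eta) ** (beta - 1)
               * np.exp(-t) * (1.0 - np.exp(-t)) ** (alpha - 1))
        print(pdf)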

  2. Force distribution analysis of mechanochemically reactive dimethylcyclobutene.

    PubMed

    Li, Wenjin; Edwards, Scott A; Lu, Lanyuan; Kubar, Tomas; Patil, Sandeep P; Grubmüller, Helmut; Groenhof, Gerrit; Gräter, Frauke

    2013-08-26

    Internal molecular forces can guide chemical reactions, yet are not straightforwardly accessible within a quantum mechanical description of the reacting molecules. Here, we present a force-matching force distribution analysis (FM-FDA) to analyze internal forces in molecules. We simulated the ring opening of trans-3,4-dimethylcyclobutene (tDCB) with on-the-fly semiempirical molecular dynamics. The self-consistent density functional tight binding (SCC-DFTB) method accurately described the force-dependent ring-opening kinetics of tDCB, showing quantitative agreement with both experimental and computational data at higher levels. Mechanical force was applied in two different ways, namely, externally by a constant pulling force and internally by embedding tDCB within a strained macrocycle-containing stiff stilbene. We analyzed the distribution of tDCB internal forces in the two different cases by FM-FDA and found that external force gave rise to a symmetric force distribution in the cyclobutene ring, which also scaled linearly with the external force, indicating that the force distribution was uniquely determined by the symmetric architecture of tDCB. In contrast, internal forces due to stiff stilbene resulted in an asymmetric force distribution within tDCB, which indicated a different geometry of force application and supported the important role of linkers in the mechanochemical reactivity of tDCB. In addition, three coordinates were identified through which the distributed forces contributed most to rate acceleration. These coordinates are mostly parallel to the coordinate connecting the two CH3 termini of tDCB. Our results confirm previous observations that the linker outside of the reactive moiety, such as a stretched polymer or a macrocycle, affects its mechanochemical reactivity. We expect FM-FDA to be of wide use to understand and quantitatively predict mechanochemical reactivity, including the challenging cases of systems within strained macrocycles. PMID:23843171

  3. The Weibull functional form for SEP event spectra

    NASA Astrophysics Data System (ADS)

    Laurenza, M.; Consolini, G.; Storini, M.; Damiani, A.

    2015-08-01

    The evolution of the kinetic energy spectra of two Solar Energetic Particle (SEP) events has been investigated through Shannon's differential entropy during the different phases of the selected events, as proposed by [1]. Data from the LET and HET instruments onboard the STEREO spacecraft were used to cover a wide energy range from ~4 MeV to 100 MeV, as well as EPAM and ERNE data, on board the ACE and SOHO spacecraft, respectively, in the range 1.6 - 112 MeV. The spectral features were found to be consistent with the Weibull-like shape, both during the main phase of the SEP events and over their whole duration. A comparison of results obtained for energetic particles accelerated at corotating interaction regions (CIRs) and at transient-related interplanetary shocks is presented in the framework of shock acceleration.

  4. Dimensional Analysis of Solar Flare Distributions

    NASA Astrophysics Data System (ADS)

    Litvinenko, Y.

    2001-05-01

    Dimensional analysis is used to derive the distribution of solar flare energies ~ E^(-3/2), in accordance with recent observational and numerical results. Several other scalings, notably E ~ t_f^2, where t_f is the flare duration, are obtained as well. Dimensional considerations can also be employed to model the rate of occurrence of solar flares (the flare waiting-time distribution). An analytical estimate for the mean flaring rate λ_0 is obtained, based on the idea that the rate reflects a balance between the processes of energy input into the corona and energy dissipation by flaring. The estimate is shown to be in good agreement with observations of flares by GOES detectors.

  5. Performance optimisations for distributed analysis in ALICE

    NASA Astrophysics Data System (ADS)

    Betev, L.; Gheata, A.; Gheata, M.; Grigoras, C.; Hristov, P.

    2014-06-01

    Performance is a critical issue in a production system accommodating hundreds of analysis users. Compared to a local session, distributed analysis is exposed to services and network latencies, remote data access and heterogeneous computing infrastructure, creating a more complex performance and efficiency optimization matrix. During the last 2 years, ALICE analysis shifted from a fast development phase to more mature and stable code. At the same time, the frameworks and tools for deployment, monitoring and management of large productions have evolved considerably too. The ALICE Grid production system is currently used by a fair share of organized and individual user analysis, consuming up to 30% of the available resources and ranging from fully I/O-bound analysis code to CPU-intensive correlation or resonance studies. While the intrinsic analysis performance is unlikely to improve by a large factor during the LHC long shutdown (LS1), the overall efficiency of the system still has to be improved by a significant factor to satisfy the analysis needs. We have instrumented all analysis jobs with "sensors" collecting comprehensive monitoring information on the job running conditions and performance in order to identify bottlenecks in the data processing flow. These data are collected by the MonALISA-based ALICE Grid monitoring system and are used to steer and improve the job submission and management policy, to identify operational problems in real time and to perform automatic corrective actions. In parallel with an upgrade of our production system, we are aiming for low-level improvements related to data format, data management and merging of results to allow for better-performing ALICE analysis.

  6. A Novel Conditional Probability Density Distribution Surface for the Analysis of the Drop Life of Solder Joints Under Board Level Drop Impact

    NASA Astrophysics Data System (ADS)

    Gu, Jian; Lei, YongPing; Lin, Jian; Fu, HanGuang; Wu, Zhongwei

    2016-01-01

    The scatter of fatigue life data is a common problem and is usually described using the normal or Weibull distribution. For solder joints under drop impact, the complicated stress distribution means that the relationship between stress and drop life is so far unknown. Furthermore, it is important to establish a function describing the change in standard deviation for solder joints under different drop impact levels. Therefore, in this study, a novel conditional probability density distribution surface (CPDDS) was established for the analysis of the drop life of solder joints. A relationship between the drop impact acceleration and the drop life is proposed which comprehensively considers the stress distribution. A novel exponential model was adopted to describe the change of the standard deviation with the impact acceleration (0 → +∞). To validate the model, the drop life of Sn-3.0Ag-0.5Cu solder joints was analyzed. The probability density curve of the logarithm of the fatigue life distribution can easily be obtained for a given acceleration level fixed on the acceleration axis of the CPDDS. The P-A-N curve was also obtained using the functions μ(A) and σ(A), which reflects the regularity of the life data at an overall reliability P.
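
    A minimal sketch of the idea behind such a surface: let the mean log-life fall with acceleration A, let the standard deviation follow an exponential model in A, and read off life quantiles at a chosen reliability. Both parameter functions below are invented placeholders, not the fitted Sn-3.0Ag-0.5Cu model.

        import numpy as np
        from scipy import stats

        mu = lambda A: 6.0 - 0.002 * A                  # mean of log10(life), hypothetical
        sigma = lambda A: 0.1 + 0.4 * np.exp(-A / 500)  # exponential std model, hypothetical

        def life_at_reliability(p, A):
            """P-A-N style curve: life at survival probability p for
            impact acceleration A, assuming normal log10(life)."""
            return 10 ** stats.norm.ppf(1.0 - p, loc=mu(A), scale=sigma(A))

        for A in (500, 1000, 1500):
            print(A, life_at_reliability(0.99, A))  # drops survived at 99% reliability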

  7. Analysis of Jingdong Mall Logistics Distribution Model

    NASA Astrophysics Data System (ADS)

    Shao, Kang; Cheng, Feng

    In recent years, the development of electronic commerce in China has accelerated. The role of logistics has been highlighted, and more and more e-commerce enterprises are beginning to realize the importance of logistics to the success or failure of the enterprise. In this paper, the author takes Jingdong Mall as an example, performs a SWOT analysis of the current state of its self-built logistics system, identifies the problems existing in the current Jingdong Mall logistics distribution, and gives appropriate recommendations.

  8. Analysis and control of distributed cooperative systems.

    SciTech Connect

    Feddema, John Todd; Parker, Eric Paul; Wagner, John S.; Schoenwald, David Alan

    2004-09-01

    As part of the DARPA Information Processing Technology Office (IPTO) Software for Distributed Robotics (SDR) Program, Sandia National Laboratories has developed analysis and control software for coordinating tens to thousands of autonomous cooperative robotic agents (primarily unmanned ground vehicles) performing military operations such as reconnaissance, surveillance and target acquisition; countermine and explosive ordnance disposal; force protection and physical security; and logistics support. Due to the nature of these applications, the control techniques must be distributed, and they must not rely on high bandwidth communication between agents. At the same time, a single soldier must be able to easily direct these large-scale systems. Finally, the control techniques must be provably convergent so as not to cause undue harm to civilians. In this project, provably convergent, moderate communication bandwidth, distributed control algorithms have been developed that can be regulated by a single soldier. We have simulated in great detail the control of low numbers of vehicles (up to 20) navigating throughout a building, and we have simulated in lesser detail the control of larger numbers of vehicles (up to 1000) trying to locate several targets in a large outdoor facility. Finally, we have experimentally validated the resulting control algorithms on smaller numbers of autonomous vehicles.

  9. Probability Distributions for Offshore Wind Speeds

    NASA Astrophysics Data System (ADS)

    Morgan, E. C.; Lackner, M.; Vogel, R. M.; Baise, L. G.

    2009-12-01

    In planning offshore wind farms, short-term wind speeds play a central role in estimating various engineering parameters, such as power output, design load, and fatigue load. Lacking wind speed data at a specific site, the probability distribution of wind speed serves as the primary substitute for such measurements during parameter estimation. It is common practice to model wind speeds with the Weibull distribution, but recent literature suggests that many other distributions may perform better. Such studies are often limited in either the time-span or geographic scope of their datasets. Using 10-minute average wind speed time series ranging from 1 month to 20 years in duration, collected from 178 buoys around North America, we show that the widely accepted Weibull distribution provides a poor fit to the distribution of wind speeds when compared with other models. For example, several other distributions, including the bimodal Weibull, Kappa and Wakeby models, fit the data remarkably well, yielding goodness-of-fit values significantly closer to 1 than the Weibull and many other distributions. Additionally, we show that the Kappa and Wakeby distributions predict average wind turbine power output better than other distributions, including the bimodal Weibull. Our results show that more complicated models than the two-parameter Weibull are needed to capture the complex behavior of wind, and that using such models leads to improved engineering decisions.
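
    As an illustration of one of the richer models, a two-component (bimodal) Weibull mixture and its mean cubed speed, to which mean wind power density is proportional; the weights and parameters are invented.

        import numpy as np
        from scipy.special import gamma

        def bimodal_weibull_pdf(v, w, k1, c1, k2, c2):
            """Two-component Weibull mixture density for wind speed."""
            wb = lambda v, k, c: (k / c) * (v / c) ** (k - 1) * np.exp(-(v / c) ** k)
            return w * wb(v, k1, c1) + (1.0 - w) * wb(v, k2, c2)

        def mean_cube(w, k1, c1, k2, c2):
            """E[v^3] of the mixture, proportional to mean wind power density."""
            return (w * c1**3 * gamma(1 + 3 / k1)
                    + (1 - w) * c2**3 * gamma(1 + 3 / k2))

        print(bimodal_weibull_pdf(np.linspace(1, 20, 5), 0.4, 1.8, 5.0, 2.4, 9.0))
        print(mean_cube(0.4, 1.8, 5.0, 2.4, 9.0))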

  10. CMS distributed data analysis with CRAB3

    DOE PAGESBeta

    Mascheroni, M.; Balcas, J.; Belforte, S.; Bockelman, B. P.; Hernandez, J. M.; Ciangottini, D.; Konstantinov, P. B.; Silva, J. M. D.; Ali, M. A. B. M.; Melo, A. M.; et al

    2015-12-23

    The CMS Remote Analysis Builder (CRAB) is a distributed workflow management tool which facilitates analysis tasks by isolating users from the technical details of the Grid infrastructure. Throughout LHC Run 1, CRAB has been successfully employed by an average of 350 distinct users each week executing about 200,000 jobs per day. CRAB has been significantly upgraded in order to face the new challenges posed by LHC Run 2. Components of the new system include 1) a lightweight client, 2) a central primary server which communicates with the clients through a REST interface, 3) secondary servers which manage user analysis tasks and submit jobs to the CMS resource provisioning system, and 4) a central service to asynchronously move user data from temporary storage in the execution site to the desired storage location. Furthermore, the new system improves the robustness, scalability and sustainability of the service. Here we provide an overview of the new system, operation, and user support, report on its current status, and identify lessons learned from the commissioning phase and production roll-out.

  11. CMS distributed data analysis with CRAB3

    NASA Astrophysics Data System (ADS)

    Mascheroni, M.; Balcas, J.; Belforte, S.; Bockelman, B. P.; Hernandez, J. M.; Ciangottini, D.; Konstantinov, P. B.; Silva, J. M. D.; Ali, M. A. B. M.; Melo, A. M.; Riahi, H.; Tanasijczuk, A. J.; Yusli, M. N. B.; Wolf, M.; Woodard, A. E.; Vaandering, E.

    2015-12-01

    The CMS Remote Analysis Builder (CRAB) is a distributed workflow management tool which facilitates analysis tasks by isolating users from the technical details of the Grid infrastructure. Throughout LHC Run 1, CRAB has been successfully employed by an average of 350 distinct users each week executing about 200,000 jobs per day. CRAB has been significantly upgraded in order to face the new challenges posed by LHC Run 2. Components of the new system include 1) a lightweight client, 2) a central primary server which communicates with the clients through a REST interface, 3) secondary servers which manage user analysis tasks and submit jobs to the CMS resource provisioning system, and 4) a central service to asynchronously move user data from temporary storage in the execution site to the desired storage location. The new system improves the robustness, scalability and sustainability of the service. Here we provide an overview of the new system, operation, and user support, report on its current status, and identify lessons learned from the commissioning phase and production roll-out.

  12. CMS distributed data analysis with CRAB3

    SciTech Connect

    Mascheroni, M.; Balcas, J.; Belforte, S.; Bockelman, B. P.; Hernandez, J. M.; Ciangottini, D.; Konstantinov, P. B.; Silva, J. M. D.; Ali, M. A. B. M.; Melo, A. M.; Riahi, H.; Tanasijczuk, A. J.; Yusli, M. N. B.; Wolf, M.; Woodard, A. E.; Vaandering, E.

    2015-12-23

    The CMS Remote Analysis Builder (CRAB) is a distributed workflow management tool which facilitates analysis tasks by isolating users from the technical details of the Grid infrastructure. Throughout LHC Run 1, CRAB has been successfully employed by an average of 350 distinct users each week executing about 200,000 jobs per day. CRAB has been significantly upgraded in order to face the new challenges posed by LHC Run 2. Components of the new system include 1) a lightweight client, 2) a central primary server which communicates with the clients through a REST interface, 3) secondary servers which manage user analysis tasks and submit jobs to the CMS resource provisioning system, and 4) a central service to asynchronously move user data from temporary storage in the execution site to the desired storage location. Furthermore, the new system improves the robustness, scalability and sustainability of the service. Here we provide an overview of the new system, operation, and user support, report on its current status, and identify lessons learned from the commissioning phase and production roll-out.

  13. Buffered Communication Analysis in Distributed Multiparty Sessions

    NASA Astrophysics Data System (ADS)

    Deniélou, Pierre-Malo; Yoshida, Nobuko

    Many communication-centred systems today rely on asynchronous messaging among distributed peers to make efficient use of parallel execution and resource access. With such asynchrony, the communication buffers can grow excessively over time. This paper proposes a static verification methodology based on multiparty session types which can efficiently compute upper bounds on buffer sizes. Our analysis relies on a uniform causality audit of the entire collaboration pattern, an examination that is not always possible from each end-point type. We extend this method to design algorithms that allocate communication channels in order to optimise the memory requirements of session executions. From these analyses, we propose two refinement methods which respect buffer bounds: a global protocol refinement that automatically inserts confirmation messages to guarantee stipulated buffer sizes, and a local protocol refinement to optimise asynchronous messaging without buffer overflow. Finally, our work is applied to overcome a buffer overflow problem in the multi-buffering algorithm.

  14. Psychotherapy and distributive justice: a Rawlsian analysis.

    PubMed

    Wilmot, Stephen

    2009-03-01

    In this paper I outline an approach to the distribution of resources between psychotherapy modalities in the context of the UK's health care system, using recent discussions of Cognitive Behavioural Psychotherapy as a way of highlighting resourcing issues. My main goal is to offer an approach that is just, and that accommodates the diversity of different schools of psychotherapy. In order to do this I draw extensively on the theories of Justice and of Political Liberalism developed by the late John Rawls, and adapt these to the particular requirements of psychotherapy resourcing. I explore some of the implications of this particular analysis, and consider how the principles of Rawlsian justice might translate into ground rules for deliberation and decision-making. PMID:18536980

  15. Likelihood analysis of earthquake focal mechanism distributions

    NASA Astrophysics Data System (ADS)

    Kagan, Yan Y.; Jackson, David D.

    2015-06-01

    In our paper published earlier we discussed forecasts of earthquake focal mechanism and ways to test the forecast efficiency. Several verification methods were proposed, but they were based on ad hoc, empirical assumptions, thus their performance is questionable. We apply a conventional likelihood method to measure the skill of earthquake focal mechanism orientation forecasts. The advantage of such an approach is that earthquake rate prediction can be adequately combined with focal mechanism forecast, if both are based on the likelihood scores, resulting in a general forecast optimization. We measure the difference between two double-couple sources as the minimum rotation angle that transforms one into the other. We measure the uncertainty of a focal mechanism forecast (the variability), and the difference between observed and forecasted orientations (the prediction error), in terms of these minimum rotation angles. To calculate the likelihood score we need to compare actual forecasts or occurrences of predicted events with the null hypothesis that the mechanism's 3-D orientation is random (or equally probable). For 3-D rotation the random rotation angle distribution is not uniform. To better understand the resulting complexities, we calculate the information (likelihood) score for two theoretical rotational distributions (Cauchy and von Mises-Fisher), which are used to approximate earthquake source orientation pattern. We then calculate the likelihood score for earthquake source forecasts and for their validation by future seismicity data. Several issues need to be explored when analyzing observational results: their dependence on forecast and data resolution, internal dependence of scores on forecasted angle and random variability of likelihood scores. Here, we propose a simple tentative solution but extensive theoretical and statistical analysis is needed.

  16. Acceleration of sensitivity analysis by distributed computation.

    PubMed

    Sarai, N

    2007-01-01

    We have been developing a comprehensive mathematical model of the cardiac myocyte based on molecular functions, using the biological simulator simBio. In this approach, growing computing power is needed as the model becomes more intricate. Sensitivity analyses, which reveal the dependency of model behavior on specific parameters, are also indispensable for developing and utilizing models. Meanwhile, distributed computation by a cluster of personal computers (PCs) has become available using open source packages. We coupled simBio with Jini, a free software package that orchestrates computers on a network. We connected 11 PCs and found that the time required for computing 504 models became 13 times shorter than with a single PC. This method proved efficient for sensitivity analysis, because the calculations are independent and a linear decrease in computation time was obtained by adding PCs to the cluster. The visualization feature gives a researcher instant feedback, so this system accelerates model-driven study. The whole system with source code is available at www.sim-bio.org. PMID:18002164
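    Because each parameter set is an independent model run, this kind of sensitivity analysis parallelizes almost linearly. A toy sketch of that structure, using Python's multiprocessing in place of the Jini/simBio cluster and a dummy relaxation model standing in for the myocyte model:

        import numpy as np
        from multiprocessing import Pool

        def run_model(scale):
            """Dummy 'cell model': relaxation toward a scaled steady state."""
            t = np.linspace(0, 10, 1000)
            y = scale * (1 - np.exp(-t))
            return scale, y[-1]                  # (parameter, model output)

        if __name__ == "__main__":
            params = np.linspace(0.5, 1.5, 504)  # 504 variants, as in the study
            with Pool() as pool:                 # one worker per available core
                results = pool.map(run_model, params)
            p, out = np.array(results).T
            # Finite-difference sensitivity of the output to the parameter:
            print("d(output)/d(param) =", np.gradient(out, p).mean())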

  17. Distributed Design and Analysis of Computer Experiments

    SciTech Connect

    2002-11-11

    DDACE is a C++ object-oriented software library for the design and analysis of computer experiments. DDACE can be used to generate samples from a variety of sampling techniques. These samples may be used as input to an application code. DDACE also contains statistical tools such as response surface models and correlation coefficients to analyze input/output relationships between variables in an application code. DDACE can generate input values for uncertain variables within a user's application. For example, a user might like to vary a temperature variable as well as some material variables in a series of simulations. Through the series of simulations the user might be looking for optimal settings of parameters based on some user criteria, or the user may be interested in the sensitivity to input variability shown by an output variable. In either case, the user may provide information about the suspected ranges and distributions of a set of input variables, along with a sampling scheme, and DDACE will generate input points based on these specifications. The input values generated by DDACE and the one or more outputs computed through the user's application code can be analyzed with a variety of statistical methods. This can lead to a wealth of information about the relationships between the variables in the problem. While statistical and mathematical packages may be employed to carry out the analysis on the input/output relationships, DDACE also contains some tools for analyzing the simulation data. DDACE incorporates a software package called MARS (Multivariate Adaptive Regression Splines), developed by Jerome Friedman. MARS is used for generating a spline surface fit of the data. With MARS, a model simplification may be calculated using the input and corresponding output values for the user's application problem. The MARS grid data may be used for generating 3-dimensional response surface plots of the simulation data. DDACE also contains an implementation of an algorithm by Michael McKay to compute variable correlations. DDACE can also be used to carry out a main-effects analysis to calculate the sensitivity of an output variable to each of the varied inputs taken individually.

  18. Distributed Design and Analysis of Computer Experiments

    Energy Science and Technology Software Center (ESTSC)

    2002-11-11

    DDACE is a C++ object-oriented software library for the design and analysis of computer experiments. DDACE can be used to generate samples from a variety of sampling techniques. These samples may be used as input to an application code. DDACE also contains statistical tools such as response surface models and correlation coefficients to analyze input/output relationships between variables in an application code. DDACE can generate input values for uncertain variables within a user's application. For example, a user might like to vary a temperature variable as well as some material variables in a series of simulations. Through the series of simulations the user might be looking for optimal settings of parameters based on some user criteria, or the user may be interested in the sensitivity to input variability shown by an output variable. In either case, the user may provide information about the suspected ranges and distributions of a set of input variables, along with a sampling scheme, and DDACE will generate input points based on these specifications. The input values generated by DDACE and the one or more outputs computed through the user's application code can be analyzed with a variety of statistical methods. This can lead to a wealth of information about the relationships between the variables in the problem. While statistical and mathematical packages may be employed to carry out the analysis on the input/output relationships, DDACE also contains some tools for analyzing the simulation data. DDACE incorporates a software package called MARS (Multivariate Adaptive Regression Splines), developed by Jerome Friedman. MARS is used for generating a spline surface fit of the data. With MARS, a model simplification may be calculated using the input and corresponding output values for the user's application problem. The MARS grid data may be used for generating 3-dimensional response surface plots of the simulation data. DDACE also contains an implementation of an algorithm by Michael McKay to compute variable correlations. DDACE can also be used to carry out a main-effects analysis to calculate the sensitivity of an output variable to each of the varied inputs taken individually.
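    The sample-then-screen workflow that DDACE automates can be outlined in a few lines. A hypothetical sketch, assuming a two-variable stand-in for the user's application code and scipy's Latin hypercube sampler (one of several schemes such libraries offer):

        import numpy as np
        from scipy.stats import qmc

        sampler = qmc.LatinHypercube(d=2, seed=0)
        unit = sampler.random(n=64)
        # Scale to suspected ranges: temperature 300-400 K, thickness 1-5 mm.
        X = qmc.scale(unit, l_bounds=[300, 1], u_bounds=[400, 5])

        def application_code(temp, thick):       # stand-in for the simulation
            return 0.02 * temp + 3.0 * thick + 0.001 * temp * thick

        y = np.array([application_code(*row) for row in X])
        # Crude main-effects screen: correlation of each input with the output.
        for j, name in enumerate(["temperature", "thickness"]):
            print(name, round(np.corrcoef(X[:, j], y)[0, 1], 3))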

  19. Ceramics Analysis and Reliability Evaluation of Structures (CARES). Users and programmers manual

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Manderscheid, Jane M.; Gyekenyesi, John P.

    1990-01-01

    This manual describes how to use the Ceramics Analysis and Reliability Evaluation of Structures (CARES) computer program. The primary function of the code is to calculate the fast fracture reliability or failure probability of macroscopically isotropic ceramic components. These components may be subjected to complex thermomechanical loadings, such as those found in heat engine applications. The program uses results from MSC/NASTRAN or ANSYS finite element analysis programs to evaluate component reliability due to inherent surface and/or volume type flaws. CARES utilizes the Batdorf model and the two-parameter Weibull cumulative distribution function to describe the effect of multiaxial stress states on material strength. The principle of independent action (PIA) and the Weibull normal stress averaging models are also included. Weibull material strength parameters, the Batdorf crack density coefficient, and other related statistical quantities are estimated from four-point bend bar or uniform uniaxial tensile specimen fracture strength data. Parameter estimation can be performed for single or multiple failure modes by using least-squares analysis or the maximum likelihood method. Kolmogorov-Smirnov and Anderson-Darling goodness-of-fit tests, ninety percent confidence intervals on the Weibull parameters, and Kanofsky-Srinivasan ninety percent confidence band values are also provided. The probabilistic fast-fracture theories used in CARES, along with the input and output for CARES, are described. Example problems to demonstrate various features of the program are also included. This manual describes the MSC/NASTRAN version of the CARES program.
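    The statistical core of such a code is the two-parameter Weibull strength model. A minimal sketch, assuming synthetic bend-bar strengths; the full CARES treatment (Batdorf theory, multiaxial stress states, confidence bands) is far richer:

        import numpy as np
        from scipy import stats

        # Synthetic four-point bend fracture strengths in MPa:
        strengths = np.array([312, 348, 355, 367, 381, 390, 402, 411,
                              425, 433, 447, 460, 478, 495, 512], float)
        m, _, s0 = stats.weibull_min.fit(strengths, floc=0)   # MLE: shape m, scale s0
        print(f"Weibull modulus m = {m:.2f}, characteristic strength = {s0:.1f} MPa")

        sigma = 300.0                               # applied design stress, MPa
        Pf = 1 - np.exp(-(sigma / s0) ** m)         # two-parameter Weibull CDF
        print(f"failure probability at {sigma} MPa: {Pf:.3%}")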

  20. Analysis of Off-Forward Parton Distributions

    NASA Astrophysics Data System (ADS)

    Ji, Xiangdong; Melnitchouk, Wally; Song, Xiaotong

    1997-04-01

    Off-forward parton distributions, which are generalizations of the forward parton distributions of deep-inelastic scattering, can provide information on the spin and orbital angular momentum carried by quarks and gluons in the nucleon. We use a simple bag model to estimate the off-forward distributions, as well as the form factors of the energy-momentum tensor, which can be measured in future deeply-virtual Compton scattering experiments.

  1. Distribution entropy analysis of epileptic EEG signals.

    PubMed

    Peng Li; Chang Yan; Karmakar, Chandan; Changchun Liu

    2015-08-01

    It is an open-ended challenge to accurately detect epileptic seizures through electroencephalogram (EEG) signals. Recently published studies have made elaborate attempts to distinguish between normal and epileptic EEG signals with advanced nonlinear entropy methods, such as approximate entropy, sample entropy, fuzzy entropy, and permutation entropy. Most recently, a novel distribution entropy (DistEn) has been reported to have superior performance compared with the conventional entropy methods, especially for short data lengths. We thus aimed, in the present study, to show the potential of DistEn in the analysis of epileptic EEG signals. The publicly accessible Bonn database, which consists of normal, interictal, and ictal EEG signals, was used in this study. Three measurement protocols were set for better understanding the performance of DistEn: i) calculate the DistEn of a specific EEG signal using the full recording; ii) calculate the DistEn by averaging the results for all its possible non-overlapped 5-second segments; and iii) calculate it by averaging the DistEn values for all possible non-overlapped segments of 1-second length. Results for all three protocols indicated a statistically significantly increased DistEn for the ictal class compared with both the normal and interictal classes. Besides, the results obtained under the third protocol, which used only very short segments (1 s) of EEG recordings, showed a significantly (p < 0.05) increased DistEn for the interictal class in comparison with the normal class, whereas both analyses using relatively long EEG signals failed to track this difference, which may be due to a nonstationarity effect on the entropy algorithm. The capability of discriminating between normal and interictal EEG signals is of great clinical relevance since it may provide helpful tools for the detection of seizure onset. Therefore, our study suggests that DistEn analysis of EEG signals is very promising for clinical and even portable EEG monitoring. PMID:26737213
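    A compact sketch of the DistEn computation as defined in the literature: embed the series, take the Chebyshev distances between all pairs of embedded vectors, histogram them, and normalize the Shannon entropy of that histogram (the embedding dimension and bin count below are illustrative choices):

        import numpy as np

        def dist_en(x, m=2, M=512):
            x = np.asarray(x, float)
            n = len(x) - m + 1
            emb = np.lib.stride_tricks.sliding_window_view(x, m)   # (n, m) vectors
            # Chebyshev distance between every pair of embedded vectors:
            d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)
            d = d[np.triu_indices(n, k=1)]                         # unique pairs
            p, _ = np.histogram(d, bins=M)
            p = p[p > 0] / p.sum()
            return -(p * np.log2(p)).sum() / np.log2(M)            # in [0, 1]

        rng = np.random.default_rng(0)
        print("white noise:", round(dist_en(rng.normal(size=1000)), 3))
        print("sine wave  :", round(dist_en(np.sin(np.linspace(0, 20 * np.pi, 1000))), 3))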

  2. Distributed energy store railguns experiment and analysis

    SciTech Connect

    Holland, L.D.

    1984-01-01

    Electromagnetic acceleration of projectiles holds the potential for achieving higher velocities than yet achieved by any other means. A railgun is the simplest form of electromagnetic macroparticle accelerator and can generate the highest sustained accelerating force. The practical length of conventional railguns is limited by the impedance of the rails because current must be carried along the entire length of the rails. A railgun and power supply system called the distributed energy store railgun was proposed as a solution to this limitation. The distributed energy store railgun used multiple current sources connected to the rails of a railgun at points distributed along the bore. These current sources (energy stores) are turned on in sequence as the projectile moves down the bore so that current is fed to the railgun from behind the armature. In this system the length of the rails that carry the full armature current is less than the total length of the railgun. If a sufficient number of energy stores is used, this removes the limitation on the length of a railgun. An additional feature of distributed energy store type railguns is that they can be designed to maintain a constant pressure on the projectile being accelerated. A distributed energy store railgun was constructed and successfully operated. In addition to this first demonstration of the distributed energy store railgun principle, a theoretical model of the system was also constructed.

  3. Analysis of Temperature Distributions in Nighttime Inversions

    NASA Astrophysics Data System (ADS)

    Telyak, Oksana; Krasouski, Aliaksandr; Svetashev, Alexander; Turishev, Leonid; Barodka, Siarhei

    2015-04-01

    Adequate prediction of temperature inversion in the atmospheric boundary layer is one of the prerequisites for successful forecasting of meteorological parameters and severe weather events. Examples include surface air temperature and precipitation forecasting as well as prediction of fog, frosts and smog with hazardous levels of atmospheric pollution. At the same time, reliable forecasting of temperature inversions remains an unsolved problem. For prediction of nighttime inversions over some specific territory, it is important to study characteristic features of local circulation cell formation and to properly take local factors into account to develop custom modeling techniques for operational use. The present study aims to investigate and analyze vertical temperature distributions in tropospheric inversions (isotherms) over the territory of Belarus. We study several specific cases of formation, evolution and decay of deep nighttime temperature inversions in Belarus by means of mesoscale numerical simulations with the WRF model, considering basic mechanisms of isothermal and inverse temperature layer formation in the troposphere and the impact of these layers on local circulation cells. Our primary goal is to assess the feasibility of advance prediction of inversion formation with WRF. Modeling results reveal that all cases under consideration have characteristic features of radiative inversions (e.g., their formation times, development phases, inversion intensities, etc.). Regions of "blocking" layer formation are extensive and often spread over the entire territory of Belarus. Inversion decay starts from the lowermost (near surface) layer (altitudes of 5 to 50 m). In all cases, one can observe formation of temperature gradients that substantially differ from the basic inversion gradient, i.e. the layer splits into smaller layers, each having a different temperature stratification (isothermal, adiabatic, etc.). As opposed to various empirical techniques as well as theoretical approaches based on discriminant analysis, mesoscale modeling with WRF provides fairly successful forecasts of formation times and regions for all types of temperature inversions up to 3 days in advance. Furthermore, we conclude that without proper adjustment for the presence of thin isothermal layers (adiabatic and/or inversion layers), temperature data can affect the results of statistical climate studies. In regions where a long-term, constant inversion is present (e.g., Antarctica or regions with continental climate), these data can contribute an uncompensated systematic error of 2 to 10 °C. We argue that this very fact may lead to inconsistencies in long-term temperature data interpretations (e.g., conclusions ranging from "global warming" to "global cooling" based on temperature observations for the same region and time period). Due to the importance of this problem from the scientific as well as practical point of view, our plans for further studies include analysis of autumn and wintertime inversions and convective inversions. At the same time, it seems promising to develop an algorithm for automatic recognition of temperature inversions based on a combination of WRF modeling results, surface and satellite observations.

  4. On the gap between an empirical distribution and an exponential distribution of waiting times for price changes in a financial market

    NASA Astrophysics Data System (ADS)

    Sazuka, Naoya

    2007-03-01

    We analyze waiting times for price changes in a foreign currency exchange rate. Recent empirical studies of high-frequency financial data support the finding that trades in financial markets do not follow a Poisson process and that the waiting times between trades are not exponentially distributed. Here we show that our data are well approximated by a Weibull distribution rather than an exponential distribution in the non-asymptotic regime. Moreover, we quantitatively evaluate how far the empirical data are from an exponential distribution using a Weibull fit. Finally, we discuss a transition between a Weibull law and a power law in the long-time asymptotic regime.
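    The comparison can be sketched directly: fit a Weibull law to the waiting times and test it against the exponential special case (shape = 1). Synthetic data below; a shape well below 1 mimics the bursty waiting times such data typically show:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        waits = 10.0 * rng.weibull(0.6, size=20000)    # synthetic waiting times
        c, _, scale = stats.weibull_min.fit(waits, floc=0)
        ll_weib = stats.weibull_min.logpdf(waits, c, 0, scale).sum()
        ll_expo = stats.expon.logpdf(waits, 0, waits.mean()).sum()  # MLE scale
        print(f"fitted Weibull shape = {c:.3f} (1.0 would be exponential)")
        print(f"log-likelihood: Weibull {ll_weib:.0f} vs exponential {ll_expo:.0f}")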

  5. Bayesian Inference of the Weibull Model Based on Interval-Censored Survival Data

    PubMed Central

    Guure, Chris Bambey; Ibrahim, Noor Akma; Adam, Mohd Bakri

    2013-01-01

    Interval-censored data consist of adjacent inspection times that surround an unknown failure time. In this paper we review the classical approach, maximum likelihood, for estimating the Weibull parameters with interval-censored data. We also consider the Bayesian approach for estimating the Weibull parameters with interval-censored data under three loss functions. This study became necessary because of the limited discussion, if any, in the literature regarding Bayesian estimation of the Weibull parameters with interval-censored data. A simulation study is carried out to compare the performances of the methods. A real data application is also illustrated. It is observed from the study that the Bayesian estimator is preferred to the classical maximum likelihood estimator for both the scale and shape parameters. PMID:23476718
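    The classical comparator reviewed here rests on the interval-censored likelihood: each observation contributes F(R) - F(L), the Weibull probability of failing between its two adjacent inspection times. A minimal maximum likelihood sketch with made-up inspection intervals:

        import numpy as np
        from scipy.optimize import minimize

        L = np.array([2.0, 4.0, 1.0, 6.0, 3.0])   # last inspection before failure
        R = np.array([3.0, 6.0, 2.5, 9.0, 5.0])   # first inspection after failure

        def neg_log_lik(theta):
            k, lam = np.exp(theta)                # log-params keep both positive
            F = lambda t: 1 - np.exp(-(t / lam) ** k)   # Weibull CDF
            return -np.log(F(R) - F(L)).sum()

        res = minimize(neg_log_lik, x0=np.log([1.0, 5.0]), method="Nelder-Mead")
        k_hat, lam_hat = np.exp(res.x)
        print(f"MLE: shape = {k_hat:.2f}, scale = {lam_hat:.2f}")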

  6. Comparison of the Weibull characteristics of hydroxyapatite and strontium doped hydroxyapatite.

    PubMed

    Yatongchai, Chokchai; Wren, Anthony W; Curran, Declan J; Hornez, Jean-Christophe; Towler, Mark R

    2013-05-01

    The effects of two strontium (Sr) additions, 5% and 10% of the total calcium (Ca) content, on the phase assemblage and Weibull statistics of hydroxyapatite (HA) are investigated and compared to those of undoped HA. Sintering was carried out in the range of 900-1200 °C in steps of 100 °C in a conventional furnace. Sr content had little effect on the mean particulate size. Decomposition of the HA phase occurred with Sr incorporation, while β-TCP stabilization was shown to occur with 10% Sr additions. Porosity in both sets of doped samples was at a level comparable to that of the undoped HA samples; however, the 5% Sr-HA samples displayed the greatest reduction in porosity with increasing temperature, while the porosity of the 10% Sr-HA samples remained relatively constant over the full sintering temperature range. The undoped HA samples displayed the greatest Weibull strengths, and porosity was determined to be the major controlling factor. With the introduction of decompositional phases in the Sr-HA samples, however, the dependence of strength on porosity is reduced and the phase assemblage becomes the more dominant factor for Weibull strength. The Weibull modulus is relatively independent of the porosity in the undoped HA samples. The 5% Sr-HA samples experience a slight increase in Weibull modulus with porosity, indicating a possible relationship between the parameters. However, the 10% Sr-HA samples show the highest Weibull modulus, approximately 15 across all sintering temperatures. It is postulated that this is due to the increased surface and lattice diffusion these samples undergo, which effectively smooths out flaws in the microstructure, owing to a saturation of Sr content in grain boundary movement. PMID:23524073

  7. Harmonic analysis of electrical distribution systems

    SciTech Connect

    1996-03-01

    This report presents data pertaining to research on harmonics of electric power distribution systems. Harmonic data is presented on RMS and average measurements for determination of harmonics in buildings; fluorescent ballast; variable frequency drive; Georator Geosine harmonic data; uninterruptible power supply; delta-wye transformer; Westinghouse SureSine; Liebert Datawave; and active injection mode filter data.

  8. Survival Analysis of Patients with End Stage Renal Disease

    NASA Astrophysics Data System (ADS)

    Urrutia, J. D.; Gayo, W. S.; Bautista, L. A.; Baccay, E. B.

    2015-06-01

    This paper provides a survival analysis of End Stage Renal Disease (ESRD) using Kaplan-Meier estimates and the Weibull distribution. The data were obtained from the records of V. L. Makabali Memorial Hospital with respect to time t (patient's age), covariates such as developed secondary disease (pulmonary congestion and cardiovascular disease), gender, and the event of interest: the death of ESRD patients. Survival and hazard rates were estimated using NCSS for the Weibull distribution and SPSS for the Kaplan-Meier estimates. Both lead to the same conclusion: the hazard rate increases and the survival rate decreases over time for ESRD patients diagnosed with pulmonary congestion, cardiovascular disease, or both. The results also show that female patients have a greater risk of death than males. The probability risk was given by R = 1 - e^(-H(t)), where e^(-H(t)) is the survival function and H(t) is the cumulative hazard function, obtained using Cox regression.
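    The quantities above chain together simply: given the cumulative hazard H(t), the survival function is S(t) = e^(-H(t)) and the risk is R = 1 - S(t). A sketch with an assumed Weibull hazard (the shape and scale are illustrative, not fitted to the hospital data):

        import numpy as np

        k, lam = 1.8, 60.0                  # assumed Weibull shape / scale (years)
        ages = np.array([40.0, 55.0, 70.0])
        H = (ages / lam) ** k               # Weibull cumulative hazard
        S = np.exp(-H)                      # survival function
        risk = 1 - S                        # R = 1 - e^(-H(t))
        for age, s, r in zip(ages, S, risk):
            print(f"age {age:.0f}: survival {s:.2f}, risk {r:.2f}")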

  9. Economic analysis of efficient distribution transformer trends

    SciTech Connect

    Downing, D.J.; McConnell, B.W.; Barnes, P.R.; Hadley, S.W.; Van Dyke, J.W.

    1998-03-01

    This report outlines an approach that will account for uncertainty in the development of evaluation factors used to identify transformer designs with the lowest total owning cost (TOC). The TOC methodology is described and the most highly variable parameters are discussed. The model is developed to account for uncertainties as well as statistical distributions for the important parameters. Sample calculations are presented. The TOC methodology is applied to data provided by two utilities in order to test its validity.
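    As a toy illustration of the comparison (assuming the standard form TOC = bid price + A x no-load loss + B x load loss, with illustrative evaluation factors A and B in $/W):

        # Hypothetical designs: (bid price $, no-load loss W, load loss W).
        A, B = 3.0, 1.0                            # assumed evaluation factors, $/W
        designs = {"low-loss": (12000, 150, 900),
                   "standard": (10000, 220, 1300)}
        for name, (bid, nl, ll) in designs.items():
            print(name, "TOC = $", bid + A * nl + B * ll)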

  10. Stochastic Analysis of Wind Energy for Wind Pump Irrigation in Coastal Andhra Pradesh, India

    NASA Astrophysics Data System (ADS)

    Raju, M. M.; Kumar, A.; Bisht, D.; Rao, D. B.

    2014-09-01

    The rapid escalation in the prices of oil and gas, as well as increasing demand for energy, has attracted the attention of scientists and researchers to explore the possibility of generating and utilizing alternative and renewable wind energy along the long coastal belt of India, which has considerable wind energy resources. A detailed analysis of wind potential is a prerequisite to harvesting wind energy resources efficiently. Keeping this in view, the present study analyzed the wind energy potential to assess the feasibility of a wind-pump operated irrigation system in the coastal region of Andhra Pradesh, India, where high ground water table conditions are available. Wind speed data were tested stochastically for fit to a probability distribution describing the wind energy potential in the region. The normal and Weibull probability distributions were tested, and on the basis of the chi-square test, the Weibull distribution gave better results. Hence, it was concluded that the Weibull probability distribution may be used to stochastically describe the annual wind speed data of coastal Andhra Pradesh with better accuracy. The sizing of the complete irrigation system was determined with mass curve analysis to satisfy various daily irrigation demands at different risk levels.
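    The distribution-screening step can be sketched as a chi-square comparison on binned speeds, with synthetic data standing in for the coastal records:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        v = 6.0 * rng.weibull(2.0, size=2000)       # synthetic wind speeds, m/s
        edges = np.linspace(0, v.max(), 13)         # 12 bins
        obs, _ = np.histogram(v, bins=edges)

        for name, dist, params in [
                ("weibull", stats.weibull_min, stats.weibull_min.fit(v, floc=0)),
                ("normal", stats.norm, stats.norm.fit(v))]:
            prob = np.diff(dist.cdf(edges, *params))
            exp = prob / prob.sum() * obs.sum()     # expected bin counts
            chi2 = ((obs - exp) ** 2 / exp).sum()
            print(f"{name:8s} chi-square = {chi2:.1f}")   # smaller = better fit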

  11. EXPERIMENTAL DESIGN STRATEGY FOR THE WEIBULL DOSE RESPONSE MODEL (JOURNAL VERSION)

    EPA Science Inventory

    The objective of the research was to determine optimum design point allocation for estimation of relative yield losses from ozone pollution when the true and fitted yield-ozone dose response relationship follows the Weibull. The optimum design is dependent on the values of the We...

  12. A Comparison of Distribution Free and Non-Distribution Free Factor Analysis Methods

    ERIC Educational Resources Information Center

    Ritter, Nicola L.

    2012-01-01

    Many researchers recognize that factor analysis can be conducted on both correlation matrices and variance-covariance matrices. Although most researchers extract factors from non-distribution free or parametric methods, researchers can also extract factors from distribution free or non-parametric methods. The nature of the data dictates the method…

  13. Intensity distribution analysis of cathodoluminescence using the energy loss distribution of electrons.

    PubMed

    Fukuta, Masahiro; Inami, Wataru; Ono, Atsushi; Kawata, Yoshimasa

    2016-01-01

    We present an intensity distribution analysis of cathodoluminescence (CL) excited with a focused electron beam in a luminescent thin film. The energy loss distribution is applied in the developed analysis method in order to determine the arrangement of the dipole locations along the path of the electron traveling in the film. Propagating light emitted from each dipole is analyzed with the finite-difference time-domain (FDTD) method. The CL distribution near the film surface is evaluated as a nanometric light source. It is found that a light source approximately 30 nm wide is generated in the film by the focused electron beam. We also discuss the accuracy of the developed analysis method by comparison with experimental results. The analysis results are brought into good agreement with the experimental results by introducing the energy loss distribution. PMID:26550930

  14. Fracture mechanics concepts in reliability analysis of monolithic ceramics

    NASA Technical Reports Server (NTRS)

    Manderscheid, Jane M.; Gyekenyesi, John P.

    1987-01-01

    Basic design concepts for high-performance, monolithic ceramic structural components are addressed. The design of brittle ceramics differs from that of ductile metals because of the inability of ceramic materials to redistribute high local stresses caused by inherent flaws. Random flaw size and orientation require that a probabilistic analysis be performed in order to determine component reliability. The current trend in probabilistic analysis is to combine linear elastic fracture mechanics concepts with the two-parameter Weibull distribution function to predict component reliability under multiaxial stress states. Nondestructive evaluation supports this analytical effort by supplying data during verification testing. It can also help to determine the statistical parameters which describe the material strength variation, in particular the material threshold strength (the third Weibull parameter), which in the past was often taken as zero for simplicity.
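    A sketch of the threshold idea: in the three-parameter Weibull model the failure probability is exactly zero below the threshold strength, unlike the two-parameter form recovered by setting the threshold to zero (all parameter values below are illustrative):

        import numpy as np

        def pf_three_param(sigma, m, sigma_0, sigma_u):
            """P_f = 1 - exp(-((sigma - sigma_u) / sigma_0)**m) above threshold."""
            s = np.maximum(np.asarray(sigma, float) - sigma_u, 0.0)
            return 1 - np.exp(-(s / sigma_0) ** m)

        # Assumed: modulus 10, scale 300 MPa, threshold strength 100 MPa.
        for stress in (80, 150, 250, 400):
            print(stress, "MPa ->", f"{pf_three_param(stress, 10, 300.0, 100.0):.3f}")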

  15. Energy production estimation and parameter sensitivity analysis for Wind Energy Conversion Systems (WECS)

    NASA Astrophysics Data System (ADS)

    Oei, T. D.; Curvers, A.; Vandehee, H.

    1985-05-01

    The wind energy production computer program WEPP and its applications are described. Loss models for the gearbox and generators used in the program are treated. Generator loss models for synchronous and asynchronous generators are given, together with four options for the turbine mode of control. The wind regime of the turbine site is calculated from a Weibull distribution function, with corrections for turbine height and terrain roughness. Examples of production optimization methods, with spider diagrams presenting the sensitivity analysis of such calculations, are shown.
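    The heart of such a production estimate is the turbine power curve integrated against the site's Weibull wind-speed density. A minimal sketch with an assumed power curve and Weibull parameters (a full code like WEPP additionally applies height, roughness, and drivetrain-loss corrections):

        import numpy as np
        from scipy import stats, integrate

        k, c = 2.0, 7.5                       # assumed Weibull shape / scale, m/s

        def power_curve(v, v_in=3.5, v_rated=13.0, v_out=25.0, p_rated=2.0e6):
            """Simple cubic-ramp turbine curve, output in watts."""
            p = p_rated * ((v - v_in) / (v_rated - v_in)) ** 3
            return np.where((v < v_in) | (v > v_out), 0.0, np.minimum(p, p_rated))

        pdf = lambda v: stats.weibull_min.pdf(v, k, scale=c)
        mean_power, _ = integrate.quad(lambda v: power_curve(v) * pdf(v),
                                       0, 30, points=[3.5, 13.0, 25.0])
        print(f"mean power = {mean_power / 1e3:.0f} kW,",
              f"annual yield = {mean_power * 8760 / 1e6:.0f} MWh")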

  16. Statistical analysis of bidirectional reflectance distribution functions

    NASA Astrophysics Data System (ADS)

    Zubiaga, Carlos J.; Belcour, Laurent; Bosch, Carles; Muñoz, Adolfo; Barla, Pascal

    2015-03-01

    Bidirectional Reflectance Distribution Functions (BRDFs) are commonly employed in Computer Graphics and Computer Vision to model opaque materials. On the one hand, a BRDF is a complex 4D function of both light and view directions, which should ensure reciprocity and energy conservation laws. On the other hand, when computing radiance reaching the eye from a surface point, the view direction is held fixed. In this respect, we are only interested in a 2D BRDF slice that acts as a filter on the local environment lighting. The goal of our work is to understand the statistical properties of such a filter as a function of viewing elevation. To this end, we have conducted a study of measured BRDFs where we have computed statistical moments for each viewing angle. We show that some moments are correlated together across dimensions and orders, while some others are close to zero and may safely be discarded. Our study opens the way to novel applications such as moment-based manipulation of measured BRDFs, material estimation and image-based material editing. It also puts empirical and physically-based material models in a new perspective, by revealing their effect as view-dependent filters.

  17. Precipitator inlet particulate distribution flow analysis

    SciTech Connect

    LaRose, J.A.; Averill, A.

    1994-12-31

    The B and W Rothemuhle precipitators located at PacifiCorp's Wyodak Generating Station in Gillette, Wyoming have, for the past two years, been experiencing discharge wire breakage. The breakage is due to corrosion of the wires; however, the exact cause of the corrosion is unknown. One aspect thought to contribute to the problem is an imbalance of ash loading among the four precipitators. Plant operation has revealed that the ash loading to precipitator C appears to be the heaviest of the four casings, and that casing also appears to have the most severe corrosion. Data from field measurements showed that the gas flows to the four precipitators are fairly uniform, within ±9% of the average. The ash loading data showed a large maldistribution among the precipitators: precipitator C receives 60% more ash than the next most heavily loaded precipitator. A numerical model was created which showed the same results. The model was then utilized to determine design modifications to the existing flue and turning vanes to improve the ash loading distribution. The resulting design was predicted to improve the ash loading to all the precipitators, to within ±10% of the average.

  18. Grammatical Analysis as a Distributed Neurobiological Function

    PubMed Central

    Bozic, Mirjana; Fonteneau, Elisabeth; Su, Li; Marslen-Wilson, William D

    2015-01-01

    Language processing engages large-scale functional networks in both hemispheres. Although it is widely accepted that left perisylvian regions have a key role in supporting complex grammatical computations, patient data suggest that some aspects of grammatical processing could be supported bilaterally. We investigated the distribution and the nature of grammatical computations across language processing networks by comparing two types of combinatorial grammatical sequences—inflectionally complex words and minimal phrases—and contrasting them with grammatically simple words. Novel multivariate analyses revealed that they engage a coalition of separable subsystems: inflected forms triggered left-lateralized activation, dissociable into dorsal processes supporting morphophonological parsing and ventral, lexically driven morphosyntactic processes. In contrast, simple phrases activated a consistently bilateral pattern of temporal regions, overlapping with inflectional activations in L middle temporal gyrus. These data confirm the role of the left-lateralized frontotemporal network in supporting complex grammatical computations. Critically, they also point to the capacity of bilateral temporal regions to support simple, linear grammatical computations. This is consistent with a dual neurobiological framework where phylogenetically older bihemispheric systems form part of the network that supports language function in the modern human, and where significant capacities for language comprehension remain intact even following severe left hemisphere damage. PMID:25421880

  19. Grammatical analysis as a distributed neurobiological function.

    PubMed

    Bozic, Mirjana; Fonteneau, Elisabeth; Su, Li; Marslen-Wilson, William D

    2015-03-01

    Language processing engages large-scale functional networks in both hemispheres. Although it is widely accepted that left perisylvian regions have a key role in supporting complex grammatical computations, patient data suggest that some aspects of grammatical processing could be supported bilaterally. We investigated the distribution and the nature of grammatical computations across language processing networks by comparing two types of combinatorial grammatical sequences--inflectionally complex words and minimal phrases--and contrasting them with grammatically simple words. Novel multivariate analyses revealed that they engage a coalition of separable subsystems: inflected forms triggered left-lateralized activation, dissociable into dorsal processes supporting morphophonological parsing and ventral, lexically driven morphosyntactic processes. In contrast, simple phrases activated a consistently bilateral pattern of temporal regions, overlapping with inflectional activations in L middle temporal gyrus. These data confirm the role of the left-lateralized frontotemporal network in supporting complex grammatical computations. Critically, they also point to the capacity of bilateral temporal regions to support simple, linear grammatical computations. This is consistent with a dual neurobiological framework where phylogenetically older bihemispheric systems form part of the network that supports language function in the modern human, and where significant capacities for language comprehension remain intact even following severe left hemisphere damage. PMID:25421880

  20. WATER DISTRIBUTION SYSTEM ANALYSIS: FIELD STUDIES, MODELING AND MANAGEMENT

    EPA Science Inventory

    The user's guide entitled Water Distribution System Analysis: Field Studies, Modeling and Management is a reference guide for water utilities and an extensive summarization of information designed to provide drinking water utility personnel (and related consultants and research...

  1. Distributed bearing fault diagnosis based on vibration analysis

    NASA Astrophysics Data System (ADS)

    Dolenc, Boštjan; Boškoski, Pavle; Juričić, Đani

    2016-01-01

    Distributed bearing faults appear under various circumstances, for example due to electroerosion or the progression of localized faults. Bearings with distributed faults tend to generate more complex vibration patterns than those with localized faults. Despite the frequent occurrence of such faults, their diagnosis has attracted limited attention. This paper examines a method for the diagnosis of distributed bearing faults employing vibration analysis. The vibrational patterns generated are modeled by incorporating the geometrical imperfections of the bearing components. Comparing envelope spectra of vibration signals shows that one can distinguish between localized and distributed faults. Furthermore, a diagnostic procedure for the detection of distributed faults is proposed. This is evaluated on several bearings with naturally born distributed faults, which are compared with fault-free bearings and bearings with localized faults. It is shown experimentally that features extracted from vibrations in fault-free, localized and distributed fault conditions form clearly separable clusters, thus enabling diagnosis.
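    The envelope-spectrum step mentioned above can be sketched with a synthetic signal in which a structural resonance is amplitude-modulated at an assumed fault frequency:

        import numpy as np
        from scipy.signal import hilbert

        fs = 20_000                                   # sample rate, Hz
        t = np.arange(0, 1.0, 1 / fs)
        carrier = np.sin(2 * np.pi * 3000 * t)        # structural resonance
        bpfo = 107.0                                  # assumed fault frequency, Hz
        vib = (1 + 0.5 * np.sign(np.sin(2 * np.pi * bpfo * t))) * carrier

        envelope = np.abs(hilbert(vib))               # demodulate
        spec = np.abs(np.fft.rfft(envelope - envelope.mean()))
        freqs = np.fft.rfftfreq(len(envelope), 1 / fs)
        print("dominant envelope line at", freqs[spec.argmax()], "Hz")  # ~ bpfo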

  2. Performance analysis of static locking in replicated distributed database systems

    NASA Technical Reports Server (NTRS)

    Kuang, Yinghong; Mukkamala, Ravi

    1991-01-01

    Data replication and transaction deadlocks can severely affect the performance of distributed database systems. Many current evaluation techniques ignore these aspects, because they are difficult to evaluate through analysis and time consuming to evaluate through simulation. A technique is used that combines simulation and analysis to closely illustrate the impact of deadlock and evaluate the performance of replicated distributed databases with both shared and exclusive locks.

  3. Performance analysis of static locking in replicated distributed database systems

    NASA Technical Reports Server (NTRS)

    Kuang, Yinghong; Mukkamala, Ravi

    1991-01-01

    Data replication and transaction deadlocks can severely affect the performance of distributed database systems. Many current evaluation techniques ignore these aspects, because they are difficult to evaluate through analysis and time consuming to evaluate through simulation. Here, a technique is discussed that combines simulation and analysis to closely illustrate the impact of deadlock and evaluate the performance of replicated distributed databases with both shared and exclusive locks.

  4. Atomic Pair Distribution Function (PDF) Analysis of Ferroelectric Materials

    NASA Astrophysics Data System (ADS)

    Yoneda, Yasuhiro

    Atomic Pair Distribution Function (PDF) analysis is a local structure analysis method. PDF analysis is a powerful method for ferroelectrics, in which domain structure exists. A deviation arises between the average and local structures under the influence of the ferroelectric domain configuration. The local structure analysis of BaTiO3 and BiMg0.5Ti0.5O3 is shown as an example of the application of PDF analysis to ferroelectric materials.

  5. Analysis of the irregular planar distribution of proteins in membranes.

    PubMed

    Hui, S W; Frank, J

    1985-03-01

    Methods to characterize the irregular but non-random planar distribution of proteins in biological membranes were investigated, using the distribution of the proteins constituting the intramembranous particles (IMPs) in human erythrocyte membranes as an example. The distribution of IMPs was deliberately altered by experimental means. For real-space analyses, the IMP positions in freeze fracture micrographs were determined by the automatic procedure described. Radial distribution and autocorrelation analysis revealed quantitative differences between experimental groups. These methods are more sensitive than the corresponding optical diffraction or Fourier-Bessel analyses of the same IMP distribution data, due to the inability of the diffraction methods to separate contrast and distribution effects. A method to identify IMPs on a non-uniform background is also described. PMID:3999133
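    Radial distribution analysis of particle positions amounts to computing a pair-correlation function g(r). A sketch for a synthetic random field (g(r) near 1 indicates randomness; measured IMP maps would show deviations):

        import numpy as np

        rng = np.random.default_rng(0)
        box = 1000.0                                  # nm, periodic square field
        pts = rng.uniform(0, box, size=(500, 2))      # random particle positions

        d = pts[:, None, :] - pts[None, :, :]
        d -= box * np.round(d / box)                  # periodic boundaries
        r = np.sqrt((d ** 2).sum(-1))[np.triu_indices(len(pts), 1)]

        edges = np.linspace(1, 200, 41)
        hist, _ = np.histogram(r, bins=edges)
        rho = len(pts) / box ** 2
        shell = np.pi * (edges[1:] ** 2 - edges[:-1] ** 2)   # annulus areas
        g = hist / (0.5 * len(pts) * rho * shell)            # normalized pair counts
        print(np.round(g[:5], 2))                            # ~ 1 for a random field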

  6. Modeling and analysis of solar distributed generation

    NASA Astrophysics Data System (ADS)

    Ortiz Rivera, Eduardo Ivan

    Recent changes in the global economy are having a large impact on our daily life. The price of oil is increasing and reserves are shrinking every day. Dramatic demographic changes are also affecting the viability of the electric infrastructure and ultimately the economic future of the industry. These are some of the reasons many countries are looking to alternative sources to produce electric energy. The most common form of green energy in our daily life is solar energy. Converting solar energy into electrical energy requires solar panels, dc-dc converters, power control, sensors, and inverters. In this work, a photovoltaic module (PVM) model using the electrical characteristics provided by the manufacturer data sheet is presented for power system applications. Experimental results from testing are shown, verifying the proposed PVM model. Three maximum power point tracker (MPPT) algorithms are also presented to obtain the maximum power from a PVM. The first MPPT algorithm is based on Rolle's and Lagrange's theorems and can provide at least an approximate answer to a family of transcendental functions that cannot be solved using differential calculus. The second MPPT algorithm is based on the approximation of the proposed PVM model using fractional polynomials, where the shape, boundary conditions, and performance of the proposed PVM model are satisfied. The third MPPT algorithm is based on the determination of the optimal duty cycle for a dc-dc converter given prior knowledge of the load or load matching conditions. Four algorithms to calculate the effective irradiance level and temperature over a photovoltaic module are also presented. The main motivations for these algorithms are monitoring climate conditions, eliminating temperature and solar irradiance sensors, reducing the cost of a photovoltaic inverter system, and developing new algorithms to be integrated with maximum power point tracking. Finally, several PV power applications are presented, such as circuit analysis for a load connected to two different PV arrays, speed control for a dc motor connected to a PVM, and a novel single-phase photovoltaic inverter system using the Z-source converter.
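    For orientation, the duty-cycle idea can be illustrated with the standard perturb-and-observe baseline (a common textbook method, not one of the three algorithms proposed in this work): perturb the converter duty cycle and keep whichever direction increases the photovoltaic power:

        def pv_power(d):
            """Toy PV-plus-converter power versus duty cycle, peaking near 0.42."""
            return max(0.0, 100.0 - 400.0 * (d - 0.42) ** 2)

        d, step, p_prev = 0.30, 0.01, 0.0
        for _ in range(50):
            p = pv_power(d)
            if p < p_prev:                 # power dropped: reverse perturbation
                step = -step
            p_prev = p
            d = min(max(d + step, 0.0), 1.0)
        print(f"duty cycle = {d:.2f}, power = {pv_power(d):.1f} W")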

  7. Near field light intensity distribution analysis in bimodal polymer waveguide

    NASA Astrophysics Data System (ADS)

    Herzog, T.; Gut, K.

    2015-12-01

    The paper presents an analysis of the light intensity distribution and sensitivity in a differential interferometer based on a bimodal polymer waveguide. A key part is the analysis of the optimal waveguide layer thickness in the SiO2/SU-8/H2O structure for maximum bulk refractive index sensitivity. The paper presents a new approach to detecting the phase difference between modes by registering only part of the energy propagating in the waveguide. Additionally, an analysis of the changes in the light distribution when the energy in the modes is not equal was performed.
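    The partial-energy read-out can be illustrated directly: superposing an even and an odd mode makes the transverse intensity swing between the two halves of the guide as the intermodal phase changes, so a detector covering one half suffices. A sketch with assumed mode profiles:

        import numpy as np

        y = np.linspace(-1, 1, 400)               # normalized transverse axis
        m0 = np.cos(np.pi * y / 2)                # even fundamental mode (assumed)
        m1 = np.sin(np.pi * y)                    # odd first-order mode (assumed)
        for dphi in (0.0, np.pi / 2, np.pi):
            I = np.abs(m0 + m1 * np.exp(1j * dphi)) ** 2
            upper = I[y > 0].sum() / I.sum()      # fraction of power in one half
            print(f"phase {dphi:.2f} rad -> {upper:.2f} of power in upper half")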

  8. Time-dependent reliability analysis of ceramic engine components

    SciTech Connect

    Nemeth, N.N.

    1993-10-01

    The computer program CARES/LIFE calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. This program is an extension of the CARES (Ceramics Analysis and Reliability Evaluation of Structures) computer program. CARES/LIFE accounts for the phenomenon of subcritical crack growth (SCG) by utilizing either the power or Paris law relations. The two-parameter Weibull cumulative distribution function is used to characterize the variation in component strength. The effects of multiaxial stresses are modeled using either the principle of independent action (PIA), the Weibull normal stress averaging method (NSA), or the Batdorf theory. Inert strength and fatigue parameters are estimated from rupture strength data of naturally flawed specimens loaded in static, dynamic, or cyclic fatigue. Two example problems demonstrating proof testing and fatigue parameter estimation are given.

  9. Time-dependent reliability analysis of ceramic engine components

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.

    1993-01-01

    The computer program CARES/LIFE calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. This program is an extension of the CARES (Ceramics Analysis and Reliability Evaluation of Structures) computer program. CARES/LIFE accounts for the phenomenon of subcritical crack growth (SCG) by utilizing either the power or Paris law relations. The two-parameter Weibull cumulative distribution function is used to characterize the variation in component strength. The effects of multiaxial stresses are modeled using either the principle of independent action (PIA), the Weibull normal stress averaging method (NSA), or the Batdorf theory. Inert strength and fatigue parameters are estimated from rupture strength data of naturally flawed specimens loaded in static, dynamic, or cyclic fatigue. Two example problems demonstrating proof testing and fatigue parameter estimation are given.

  10. Surface flaw reliability analysis of ceramic components with the SCARE finite element postprocessor program

    NASA Technical Reports Server (NTRS)

    Gyekenyesi, John P.; Nemeth, Noel N.

    1987-01-01

    The SCARE (Structural Ceramics Analysis and Reliability Evaluation) computer program on statistical fast fracture reliability analysis with quadratic elements for volume distributed imperfections is enhanced to include the use of linear finite elements and the capability of designing against concurrent surface flaw induced ceramic component failure. The SCARE code is presently coupled as a postprocessor to the MSC/NASTRAN general purpose, finite element analysis program. The improved version now includes the Weibull and Batdorf statistical failure theories for both surface and volume flaw based reliability analysis. The program uses the two-parameter Weibull fracture strength cumulative failure probability distribution model with the principle of independent action for poly-axial stress states, and Batdorf's shear-sensitive as well as shear-insensitive statistical theories. The shear-sensitive surface crack configurations include the Griffith crack and Griffith notch geometries, using the total critical coplanar strain energy release rate criterion to predict mixed-mode fracture. Weibull material parameters based on both surface and volume flaw induced fracture can also be calculated from modulus of rupture bar tests, using the least squares method with known specimen geometry and grouped fracture data. The statistical fast fracture theories for surface flaw induced failure, along with selected input and output formats and options, are summarized. An example problem to demonstrate various features of the program is included.

  11. A Mixed Kijima Model Using the Weibull-Based Generalized Renewal Processes

    PubMed Central

    2015-01-01

    Generalized Renewal Processes are useful for approaching the rejuvenation of dynamical systems resulting from planned or unplanned interventions. We present new perspectives for Generalized Renewal Processes in general and for Weibull-based Generalized Renewal Processes in particular. Departing from the literature, we present a mixed Generalized Renewal Processes approach involving Kijima Type I and II models, allowing one to infer the impact of distinct interventions on the performance of the system under study. The first and second theoretical moments of this model are introduced, as well as its maximum likelihood estimation and random sampling approaches. In order to illustrate the usefulness of the proposed Weibull-based Generalized Renewal Processes model, some real data sets involving improving, stable, and deteriorating systems are used. PMID:26197222
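    A sketch of the Kijima virtual-age bookkeeping underlying the mixed model (q is the rejuvenation parameter: q = 0 is perfect repair, q = 1 minimal repair; the failure times are made up):

        def virtual_ages(times, q, kind="I"):
            """Kijima I: v_n = v_{n-1} + q*x_n; Kijima II: v_n = q*(v_{n-1} + x_n)."""
            v, prev, ages = 0.0, 0.0, []
            for t in times:
                x = t - prev                       # sojourn since last intervention
                v = v + q * x if kind == "I" else q * (v + x)
                ages.append(round(v, 2))
                prev = t
            return ages

        failures = [1.2, 2.9, 4.1, 6.0]            # cumulative failure times
        print("Kijima I :", virtual_ages(failures, 0.5, "I"))
        print("Kijima II:", virtual_ages(failures, 0.5, "II"))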

  12. An analysis of the geographical distribution of Plasmodium ovale

    PubMed Central

    Lysenko, A. Ja.; Beljaev, A. E.

    1969-01-01

    For a long time Plasmodium ovale was considered a very rare causal agent of malaria, but recently it has been shown to be a fairly common parasite in Africa. The authors analyse all the findings of P. ovale outside tropical Africa and describe its distribution. This species is distributed in 2 areas, the first confined to tropical Africa and the second to islands in the Western Pacific. The authors make a medico-geographical analysis of the distribution of P. ovale, and attempt to explain particular features of it. PMID:5306622

  13. Strength statistics and the distribution of earthquake interevent times

    NASA Astrophysics Data System (ADS)

    Hristopulos, Dionissios T.; Mouslopoulou, Vasiliki

    2013-02-01

    The Weibull distribution is often used to model the earthquake interevent times distribution (ITD). We propose a link between the earthquake ITD on single faults and the Earth's crustal shear strength distribution by means of a phenomenological stick-slip model. For single faults or fault systems with homogeneous strength statistics and power-law stress accumulation we obtain the Weibull ITD. We prove that the moduli of the interevent times and crustal shear strength are linearly related, while the time scale is an algebraic function of the scale of crustal shear strength. We also show that logarithmic stress accumulation leads to the log-Weibull ITD. We investigate deviations of the ITD tails from the Weibull model due to sampling bias, magnitude cutoff thresholds, and non-homogeneous strength parameters. Assuming the Gutenberg-Richter law and independence of the Weibull modulus from the magnitude threshold, we deduce that the interevent time scale drops exponentially with the magnitude threshold. We demonstrate that a microearthquake sequence from the island of Crete and a seismic sequence from Southern California conform reasonably well to the Weibull model.
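    In practice the task this model motivates is recovering the Weibull modulus and time scale from an interevent-time catalogue. A sketch with synthetic times (a modulus below 1 produces the temporal clustering typical of seismicity):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        itd = 35.0 * rng.weibull(0.8, size=3000)   # synthetic interevent times, days
        shape, _, scale = stats.weibull_min.fit(itd, floc=0)
        print(f"estimated modulus = {shape:.2f}, time scale = {scale:.1f} days")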

  14. Distributed transit compartments for arbitrary lifespan distributions in aging populations.

    PubMed

    Koch, Gilbert; Schropp, Johannes

    2015-09-01

    Transit compartment models (TCMs) are often used to describe aging populations where every individual has its own lifespan. However, in the TCM approach these lifespans are gamma-distributed, which is a serious limitation because the Weibull or more complex distributions are often more realistic. Therefore, we extend the TCM concept to approximately describe any lifespan distribution and call this generalized concept distributed transit compartment models (DTCMs). The validity of DTCMs is established by convergence investigations. From the mechanistic perspective, the transit rates are directly controlled by the lifespan distribution. Further, DTCMs could be used to approximate the convolution of a signal with a probability density function. As an example, a stimulatory effect of a drug in an aging population with a Weibull-distributed lifespan is presented, where distribution and model parameters are estimated based on simulated data. PMID:26100181
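    For contrast with DTCMs, the classical chain is easy to sketch: n identical compartments with common rate k give a gamma-distributed lifespan with mean n/k, whereas a DTCM would tune the individual transit rates to approximate, e.g., a Weibull law:

        import numpy as np
        from scipy.integrate import solve_ivp, trapezoid

        n, k = 5, 1.0                              # compartments, transit rate

        def rhs(t, a):
            da = np.empty_like(a)
            da[0] = -k * a[0]                      # bolus placed in compartment 0
            da[1:] = k * (a[:-1] - a[1:])          # pass down the chain
            return da

        a0 = np.zeros(n); a0[0] = 1.0
        sol = solve_ivp(rhs, (0, 20), a0, dense_output=True)
        t = np.linspace(0, 20, 400)
        outflow = k * sol.sol(t)[-1]               # lifespan density (gamma-shaped)
        print("mean lifespan =", round(trapezoid(t * outflow, t), 2),
              "(theory:", n / k, ")")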

  15. ANALYSIS OF DISTRIBUTION FEEDER LOSSES DUE TO ADDITION OF DISTRIBUTED PHOTOVOLTAIC GENERATORS

    SciTech Connect

    Tuffner, Francis K.; Singh, Ruchi

    2011-08-09

    Distributed generators (DG) are small-scale power supplying sources owned by customers or utilities and scattered throughout the power system distribution network. Distributed generation can be both renewable and non-renewable. Distributed generation is added primarily to increase feeder capacity and to provide peak load reduction. However, this addition comes with several impacts on the distribution feeder. Several studies have shown that the addition of DG leads to a reduction of feeder loss. However, most of these studies have considered lumped load and distributed load models to analyze the effects on system losses, where the dynamic variation of load due to seasonal changes is ignored. It is very important for utilities to minimize losses under all scenarios to decrease revenue losses, promote efficient asset utilization, and thereby increase feeder capacity. This paper investigates an IEEE 13-node feeder populated with photovoltaic generators on detailed residential houses with water heaters, Heating Ventilation and Air Conditioning (HVAC) units, lights, and other plug and convenience loads. An analysis of losses for different power system components, such as transformers, underground and overhead lines, and triplex lines, is performed. The analysis covers different seasons and different solar penetration levels (15%, 30%).

  16. Selection of neutrino burst candidates by pulse spatial distribution analysis

    NASA Astrophysics Data System (ADS)

    Ryasny, V. G.

    1996-02-01

    The method of analysis and the possibilities for identification of neutrino bursts from collapsing stars using the spatial distribution of pulses in multimodular installations, such as the Large Volume Detector at the Gran Sasso Laboratory, the Liquid Scintillation Detector (Mont Blanc), and the Baksan Scintillation Telescope, are discussed. The method is applicable to any position-sensitive detector. Spatial distribution analysis can decrease the burst imitation probability by at least 2 orders of magnitude, without significant loss of sensitivity, for the currently predicted number of neutrino interactions.

  17. First Experiences with LHC Grid Computing and Distributed Analysis

    SciTech Connect

    Fisk, Ian

    2010-12-01

    This presentation covered the experiences of the LHC experiments with grid computing, with a focus on distributed analysis. After many years of development, preparation, exercises, and validation, the LHC (Large Hadron Collider) experiments are in operation. The computing infrastructure was heavily utilized in the first 6 months of data collection. The general experience of exploiting the grid infrastructure for organized processing and preparation is described, as well as the successes in employing the infrastructure for distributed analysis. Finally, the expected evolution and future plans are outlined.

  18. Entropy Methods For Univariate Distributions in Decision Analysis

    NASA Astrophysics Data System (ADS)

    Abbas, Ali E.

    2003-03-01

    One of the most important steps in decision analysis practice is the elicitation of the decision-maker's belief about an uncertainty of interest in the form of a representative probability distribution. However, the probability elicitation process is a task that involves many cognitive and motivational biases. Alternatively, the decision-maker may provide other information about the distribution of interest, such as its moments, and the maximum entropy method can be used to obtain a full distribution subject to the given moment constraints. In practice, however, decision makers cannot readily provide moments for the distribution, and are much more comfortable providing information about the fractiles of the distribution of interest or bounds on its cumulative probabilities. In this paper we present a graphical method to determine the maximum entropy distribution between upper and lower probability bounds and provide an interpretation for the shape of the maximum entropy distribution subject to fractile constraints (FMED). We also discuss the problems with the FMED, namely that it is discontinuous and flat over each fractile interval. We present a heuristic approximation for the case where, in addition to its fractiles, the distribution is known to be continuous, and work through full examples to illustrate the approach.
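
    The flat-over-each-interval character of the FMED is easy to see in code: subject only to fractile constraints, the entropy-maximizing density is piecewise uniform, with height equal to the probability mass of each interval divided by its width. A minimal sketch with hypothetical elicited fractiles:

```python
import numpy as np

# Hypothetical elicited fractiles: P(X <= x_k) = p_k, on an assumed bounded support.
xs = np.array([0.0, 2.0, 5.0, 10.0])   # interval endpoints
ps = np.array([0.0, 0.25, 0.75, 1.0])  # cumulative probabilities at those points

# The maximum-entropy density subject to these constraints is flat over each
# interval: f(x) = (p_{k+1} - p_k) / (x_{k+1} - x_k), hence the discontinuities
# at the fractile points noted in the abstract.
dens = np.diff(ps) / np.diff(xs)

def fmed_pdf(x):
    k = np.searchsorted(xs, x, side="right") - 1
    k = np.clip(k, 0, len(dens) - 1)
    inside = (x >= xs[0]) & (x <= xs[-1])
    return np.where(inside, dens[k], 0.0)

print(fmed_pdf(np.array([1.0, 3.0, 7.0])))  # -> [0.125, 0.1667, 0.05]
```

    The jumps at each fractile point are exactly what the paper's heuristic continuous approximation is meant to smooth away.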

  19. Discriminating Topology in Galaxy Distributions using Network Analysis

    NASA Astrophysics Data System (ADS)

    Hong, Sungryong; Coutinho, Bruno; Dey, Arjun; Barabási, Albert-L.; Vogelsberger, Mark; Hernquist, Lars; Gebhardt, Karl

    2016-04-01

    The large-scale distribution of galaxies is generally analyzed using the two-point correlation function. However, this statistic does not capture the topology of the distribution, and it is necessary to resort to higher-order correlations to break degeneracies. We demonstrate that an alternate approach using network analysis can discriminate between topologically different distributions that have similar two-point correlations. We investigate two galaxy point distributions, one produced by a cosmological simulation and the other by a Lévy walk. For the cosmological simulation, we adopt the redshift z = 0.58 slice from Illustris (Vogelsberger et al. 2014A) and select galaxies with stellar masses greater than 10^8 M⊙. The two-point correlation function of these simulated galaxies follows a single power law, ξ(r) ∼ r^-1.5. Then, we generate Lévy walks matching the correlation function and abundance of the simulated galaxies. We find that, while the two simulated galaxy point distributions have the same abundance and two-point correlation function, their spatial distributions are very different; most prominently, the filamentary structures are absent in the Lévy fractals. To quantify these missing topologies, we adopt network analysis tools and measure diameter, giant component, and transitivity from networks built by a conventional friends-of-friends recipe with various linking lengths. Unlike the abundance and two-point correlation function, these network quantities reveal a clear separation between the two simulated distributions; therefore, the galaxy distribution simulated by Illustris is quantitatively not a Lévy fractal. We find that the described network quantities offer an efficient tool for discriminating topologies and for comparing observed and theoretical distributions.
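
    The friends-of-friends network construction and the three quantities named (diameter, giant component, transitivity) can be sketched in a few lines with scipy and networkx. This is an illustrative reconstruction, not the authors' pipeline; the positions and linking length are placeholders.

```python
import numpy as np
import networkx as nx
from scipy.spatial import cKDTree

# Hypothetical 3-D galaxy positions (stand-in for a simulation snapshot).
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 100.0, size=(2000, 3))

linking_length = 2.5  # an assumed value; the paper scans several linking lengths

# Friends-of-friends recipe: connect every pair closer than the linking length.
tree = cKDTree(pos)
G = nx.Graph()
G.add_nodes_from(range(len(pos)))
G.add_edges_from(tree.query_pairs(r=linking_length))

# Network diagnostics used to separate topologies.
components = sorted(nx.connected_components(G), key=len, reverse=True)
giant = G.subgraph(components[0])
print("giant component fraction:", len(giant) / G.number_of_nodes())
print("transitivity:", nx.transitivity(G))
print("diameter of giant component:", nx.diameter(giant))
```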

  20. Distribution-free mediation analysis for nonlinear models with confounding.

    PubMed

    Albert, Jeffrey M

    2012-11-01

    Recently, researchers have used a potential-outcome framework to estimate causally interpretable direct and indirect effects of an intervention or exposure on an outcome. One approach to causal-mediation analysis uses the so-called mediation formula to estimate the natural direct and indirect effects. This approach generalizes the classical mediation estimators and allows for arbitrary distributions for the outcome variable and mediator. A limitation of the standard (parametric) mediation formula approach is that it requires a specified mediator regression model and distribution; such a model may be difficult to construct and may not be of primary interest. To address this limitation, we propose a new method for causal-mediation analysis that uses the empirical distribution function, thereby avoiding parametric distribution assumptions for the mediator. To adjust for confounders of the exposure-mediator and exposure-outcome relationships, inverse-probability weighting is incorporated based on a supplementary model of the probability of exposure. This method, which yields the estimates of the natural direct and indirect effects for a specified reference group, is applied to data from a cohort study of dental caries in very-low-birth-weight adolescents to investigate the oral-hygiene index as a possible mediator. Simulation studies show low bias in the estimation of direct and indirect effects in a variety of distribution scenarios, whereas the standard mediation formula approach can be considerably biased when the distribution of the mediator is incorrectly specified. PMID:23007042

  1. GIS-based poverty and population distribution analysis in China

    NASA Astrophysics Data System (ADS)

    Cui, Jing; Wang, Yingjie; Yan, Hong

    2009-07-01

    Geographically, poverty status is related not only to socio-economic factors but is also strongly affected by the geographical environment. In this paper, a GIS-based poverty and population distribution analysis method is introduced to reveal regional differences. More than 100000 poor villages and 592 national key poor counties are chosen for the analysis. The results show that poverty tends to concentrate in most of west China and in the mountainous rural areas of mid China. Furthermore, the fifth census data are overlaid on those poor areas in order to capture the internal diversity of their social-economic characteristics. By overlaying poverty-related social-economic parameters, such as sex ratio, illiteracy, education level, percentage of ethnic minorities, and family composition, the findings show that poverty distribution is strongly correlated with a high illiteracy rate, a high percentage of ethnic minorities, and larger families.

  2. Energy loss analysis of an integrated space power distribution system

    NASA Technical Reports Server (NTRS)

    Kankam, M. David; Ribeiro, P. F.

    1992-01-01

    The results of studies related to conceptual topologies of an integrated utility-like space power system are described. The system topologies are comparatively analyzed by considering their transmission energy losses as functions of mainly distribution voltage level and load composition. The analysis is expedited by use of Distribution System Analysis and Simulation (DSAS) software. This computer program, recently developed by the Electric Power Research Institute (EPRI), uses improved load models to solve the power flow within the system. However, present shortcomings of the software with regard to space applications, and incompletely defined characteristics of a space power system, make the results applicable to only the fundamental trends of energy losses of the topologies studied. The accounting included for the effects of the various parameters on system performance can form part of a planning tool for a space power distribution system.

  3. Rapid Analysis of Mass Distribution of Radiation Shielding

    NASA Technical Reports Server (NTRS)

    Zapp, Edward

    2007-01-01

    Radiation Shielding Evaluation Toolset (RADSET) is a computer program that rapidly calculates the spatial distribution of mass of an arbitrary structure for use in ray-tracing analysis of the radiation-shielding properties of the structure. RADSET was written to be used in conjunction with unmodified commercial computer-aided design (CAD) software that provides access to data on the structure and generates selected three-dimensional-appearing views of the structure. RADSET obtains raw geometric, material, and mass data on the structure from the CAD software. From these data, RADSET calculates the distribution(s) of the masses of specific materials about any user-specified point(s). The results of these mass-distribution calculations are imported back into the CAD computing environment, wherein the radiation-shielding calculations are performed.

  4. Global NLO Analysis of Nuclear Parton Distribution Functions

    SciTech Connect

    Hirai, M.; Kumano, S.; Nagai, T.-H.

    2008-02-21

    Nuclear parton distribution functions (NPDFs) are determined by a global analysis of experimental measurements on structure-function ratios F_2^A/F_2^A' and Drell-Yan cross section ratios σ_DY^A/σ_DY^A', and their uncertainties are estimated by the Hessian method. The NPDFs are obtained in both leading order (LO) and next-to-leading order (NLO) of α_s. As a result, valence-quark distributions are relatively well determined, whereas antiquark distributions at x > 0.2 and gluon distributions in the whole x region have large uncertainties. The NLO uncertainties are slightly smaller than the LO ones; however, such an NLO improvement is not as significant as in the nucleonic case.

  5. Data synthesis and display programs for wave distribution function analysis

    NASA Technical Reports Server (NTRS)

    Storey, L. R. O.; Yeh, K. J.

    1992-01-01

    At the National Space Science Data Center (NSSDC), software was written to synthesize and display artificial data for use in developing the methodology of wave distribution function analysis. The software comprises two separate interactive programs, one for data synthesis and the other for data display.

  6. WATER DISTRIBUTION SYSTEM ANALYSIS: FIELD STUDIES, MODELING AND MANAGEMENT

    EPA Science Inventory

    The user's guide entitled “Water Distribution System Analysis: Field Studies, Modeling and Management” is a reference guide for water utilities and an extensive summarization of information designed to provide drinking water utility personnel (and related consultants and research...

  7. Can Distributed Volunteers Accomplish Massive Data Analysis Tasks?

    NASA Technical Reports Server (NTRS)

    Kanefsky, B.; Barlow, N. G.; Gulick, V. C.

    2001-01-01

    We argue that many image analysis tasks can be performed by distributed amateurs. Our pilot study, with crater surveying and classification, has produced encouraging results in terms of both quantity (100,000 crater entries in 2 months) and quality. Additional information is contained in the original extended abstract.

  8. Analyzing Distributed Functions in an Integrated Hazard Analysis

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry; Massie, Michael J.

    2010-01-01

    Large scale integration of today's aerospace systems is achievable through the use of distributed systems. Validating the safety of distributed systems is significantly more difficult as compared to centralized systems because of the complexity of the interactions between simultaneously active components. Integrated hazard analysis (IHA), a process used to identify unacceptable risks and to provide a means of controlling them, can be applied to either centralized or distributed systems. IHA, though, must be tailored to fit the particular system being analyzed. Distributed systems, for instance, must be analyzed for hazards in terms of the functions that rely on them. This paper will describe systems-oriented IHA techniques (as opposed to traditional failure-event or reliability techniques) that should be employed for distributed systems in aerospace environments. Special considerations will be addressed when dealing with specific distributed systems such as active thermal control, electrical power, command and data handling, and software systems (including the interaction with fault management systems). Because of the significance of second-order effects in large scale distributed systems, the paper will also describe how to analyze the interactions between secondary functions through the use of channelization.

  9. Molecular Isotopic Distribution Analysis (MIDAs) with Adjustable Mass Accuracy

    NASA Astrophysics Data System (ADS)

    Alves, Gelio; Ogurtsov, Aleksey Y.; Yu, Yi-Kuo

    2014-01-01

    In this paper, we present Molecular Isotopic Distribution Analysis (MIDAs), a new software tool designed to compute molecular isotopic distributions with adjustable accuracies. MIDAs offers two algorithms, one polynomial-based and one Fourier-transform-based, both of which compute molecular isotopic distributions accurately and efficiently. The polynomial-based algorithm contains few novel aspects, whereas the Fourier-transform-based algorithm consists mainly of improvements to other existing Fourier-transform-based algorithms. We have benchmarked the performance of the two algorithms implemented in MIDAs with that of eight software packages (BRAIN, Emass, Mercury, Mercury5, NeutronCluster, Qmass, JFC, IC) using a consensus set of benchmark molecules. Under the proposed evaluation criteria, MIDAs's algorithms, JFC, and Emass compute with comparable accuracy the coarse-grained (low-resolution) isotopic distributions and are more accurate than the other software packages. For fine-grained isotopic distributions, we compared IC, MIDAs's polynomial algorithm, and MIDAs's Fourier transform algorithm. Among the three, IC and MIDAs's polynomial algorithm compute isotopic distributions that better resemble their corresponding exact fine-grained (high-resolution) isotopic distributions. MIDAs can be accessed freely through a user-friendly web-interface at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/midas/index.html.
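
    To make the "polynomial-based" idea concrete: the coarse-grained isotopic distribution of a molecule is the repeated convolution (polynomial product) of per-element isotope abundance vectors. The sketch below is a simplified illustration of that principle, not MIDAs itself; the abundance values are standard reference figures quoted for illustration.

```python
import numpy as np

# Nominal-mass isotope abundances, indexed by number of extra neutrons (0, 1, 2, ...).
ISOTOPES = {
    "C": np.array([0.9893, 0.0107]),            # 12C, 13C
    "H": np.array([0.999885, 0.000115]),        # 1H, 2H
    "N": np.array([0.99636, 0.00364]),          # 14N, 15N
    "O": np.array([0.99757, 0.00038, 0.00205]), # 16O, 17O, 18O
}

def isotopic_distribution(formula):
    """Coarse-grained (nominal-mass) isotopic distribution by repeated
    polynomial multiplication, i.e. convolution of abundance vectors."""
    dist = np.array([1.0])
    for element, count in formula.items():
        for _ in range(count):
            dist = np.convolve(dist, ISOTOPES[element])
    return dist / dist.sum()

# Glucose, C6H12O6:
d = isotopic_distribution({"C": 6, "H": 12, "O": 6})
for k, p in enumerate(d[:4]):
    print(f"M+{k}: {p:.4f}")
```

    A Fourier-transform variant computes the same convolution in the frequency domain, which is what makes that family of algorithms efficient for large molecules.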

  11. Parametrizing the local dark matter speed distribution: A detailed analysis

    NASA Astrophysics Data System (ADS)

    Kavanagh, Bradley J.

    2014-04-01

    In a recent paper, a new parametrization for the dark matter (DM) speed distribution f(v) was proposed for use in the analysis of data from direct detection experiments. This parametrization involves expressing the logarithm of the speed distribution as a polynomial in the speed v. We present here a more detailed analysis of the properties of this parametrization. We show that the method leads to statistically unbiased mass reconstructions and exact coverage of credible intervals. The method performs well over a wide range of DM masses, even when finite energy resolution and backgrounds are taken into account. We also show how to select the appropriate number of basis functions for the parametrization. Finally, we look at how the speed distribution itself can be reconstructed, and how the method can be used to determine if the data are consistent with some test distribution. In summary, we show that this parametrization performs consistently well over a wide range of input parameters and over large numbers of statistical ensembles and can therefore reliably be used to reconstruct both the DM mass and speed distribution from direct detection data.

  12. Generalized Exponential Distribution in Flood Frequency Analysis for Polish Rivers

    PubMed Central

    Markiewicz, Iwona; Strupczewski, Witold G.; Bogdanowicz, Ewa; Kochanek, Krzysztof

    2015-01-01

    Many distributions have been used in flood frequency analysis (FFA) for fitting the flood extremes data. However, as shown in the paper, the scatter of Polish data plotted on the moment ratio diagram shows that there is still room for a new model. In the paper, we study the usefulness of the generalized exponential (GE) distribution in flood frequency analysis for Polish rivers. We investigate the fit of the GE distribution to the Polish data of the maximum flows in comparison with the inverse Gaussian (IG) distribution, which in our previous studies showed the best fit among several models commonly used in FFA. Since the use of a discrimination procedure without knowledge of its performance for the considered probability density functions may lead to erroneous conclusions, we compare the probability of correct selection for the GE and IG distributions along with an analysis of the asymptotic model error with respect to the upper quantile values. As an application, both GE and IG distributions are alternatively assumed for describing the annual peak flows for several gauging stations of Polish rivers. To find the best fitting model, four discrimination procedures are used. In turn, they are based on the maximized logarithm of the likelihood function (K procedure), on the density function of the scale transformation maximal invariant (QK procedure), on the Kolmogorov-Smirnov statistics (KS procedure) and, in sequence, a fourth procedure based on the differences between the ML estimate of the 1% quantile and its value assessed by the method of moments and linear moments (R procedure). Due to the uncertainty in choosing the best model, the method of aggregation is applied to estimate the maximum flow quantiles. PMID:26657239
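
    A minimal sketch of the "K procedure" (selection by maximized log-likelihood) for the GE-versus-IG comparison is given below. The GE density f(x) = αλe^(−λx)(1 − e^(−λx))^(α−1) is coded by hand, since scipy has no built-in GE distribution; the synthetic flows are placeholders for real gauging-station data.

```python
import numpy as np
from scipy import stats, optimize

def ge_negloglik(theta, x):
    """Negative log-likelihood of the generalized exponential distribution,
    f(x) = a * lam * exp(-lam*x) * (1 - exp(-lam*x))**(a - 1), x > 0."""
    a, lam = theta
    if a <= 0 or lam <= 0:
        return np.inf
    z = np.exp(-lam * x)
    return -np.sum(np.log(a) + np.log(lam) - lam * x + (a - 1) * np.log1p(-z))

# Hypothetical annual peak flows (m^3/s); real gauging data would be used here.
rng = np.random.default_rng(7)
flows = stats.invgauss.rvs(mu=0.5, scale=800.0, size=60, random_state=rng)

# Maximized log-likelihoods; the K procedure selects the larger one.
res = optimize.minimize(ge_negloglik, x0=[1.0, 1.0 / flows.mean()],
                        args=(flows,), method="Nelder-Mead")
ll_ge = -res.fun
mu, loc, scale = stats.invgauss.fit(flows, floc=0)
ll_ig = np.sum(stats.invgauss.logpdf(flows, mu, loc, scale))
print("GE log-lik:", ll_ge, " IG log-lik:", ll_ig)
print("selected model:", "GE" if ll_ge > ll_ig else "IG")
```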

  13. Integrating software architectures for distributed simulations and simulation analysis communities.

    SciTech Connect

    Goldsby, Michael E.; Fellig, Daniel; Linebarger, John Michael; Moore, Patrick Curtis; Sa, Timothy J.; Hawley, Marilyn F.

    2005-10-01

    The one-year Software Architecture LDRD (No.79819) was a cross-site effort between Sandia California and Sandia New Mexico. The purpose of this research was to further develop and demonstrate integrating software architecture frameworks for distributed simulation and distributed collaboration in the homeland security domain. The integrated frameworks were initially developed through the Weapons of Mass Destruction Decision Analysis Center (WMD-DAC), sited at SNL/CA, and the National Infrastructure Simulation & Analysis Center (NISAC), sited at SNL/NM. The primary deliverable was a demonstration of both a federation of distributed simulations and a federation of distributed collaborative simulation analysis communities in the context of the same integrated scenario, which was the release of smallpox in San Diego, California. To our knowledge this was the first time such a combination of federations under a single scenario has ever been demonstrated. A secondary deliverable was the creation of the standalone GroupMeld™ collaboration client, which uses the GroupMeld™ synchronous collaboration framework. In addition, a small pilot experiment that used both integrating frameworks allowed a greater range of crisis management options to be performed and evaluated than would have been possible without the use of the frameworks.

  14. Assessing tephra total grain-size distribution: Insights from field data analysis

    NASA Astrophysics Data System (ADS)

    Costa, A.; Pioli, L.; Bonadonna, C.

    2016-06-01

    The Total Grain-Size Distribution (TGSD) of tephra deposits is crucial for hazard assessment and provides fundamental insights into eruption dynamics. It controls both the mass distribution within the eruptive plume and the sedimentation processes and can provide essential information on the fragmentation mechanisms. TGSD is typically calculated by integrating deposit grain-size at different locations. The result of such integration is affected not only by the number, but also by the spatial distribution and distance from the vent of the sampling sites. In order to evaluate the reliability of TGSDs, we assessed representative sampling distances for pyroclasts of different sizes through dedicated numerical simulations of tephra dispersal. Results reveal that, depending on wind conditions, a representative grain-size distribution of tephra deposits down to ∼100 μm can be obtained by integrating samples collected at distances from less than one tenth up to a few tens of the column height. The statistical properties of TGSDs representative of a range of eruption styles were calculated by fitting the data with a few general distributions given by the sum of two log-normal distributions (bi-Gaussian in Φ-units), the sum of two Weibull distributions, and a generalized log-logistic distribution for the cumulative number distributions. The main parameters of the bi-lognormal fitting correlate with the height of the eruptive column and magma viscosity, allowing general relationships to be used for estimating the TGSD generated in a variety of eruptive styles and for different magma compositions. Fitting results of the cumulative number distribution show two different power-law trends for the coarse and fine fractions of tephra particles, respectively. Our results shed light on the complex processes that control the size of particles being injected into the atmosphere during volcanic explosive eruptions and represent the first attempt to assess TGSD on the basis of pivotal physical quantities, such as magma viscosity and plume height. Our empirical method can be used to assess the main features of TGSD necessary for numerical simulations aimed at real-time forecasting and long-term hazard assessment when more accurate field-derived TGSDs are not available.
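
    As an illustration of the bi-lognormal (bi-Gaussian in Φ-units) fitting named above, the sketch below fits a weighted sum of two Gaussians to a grain-size distribution expressed in Φ-units. The data and starting values are synthetic placeholders, not the paper's field data.

```python
import numpy as np
from scipy.optimize import curve_fit

def bi_gaussian(phi, w, m1, s1, m2, s2):
    """Weighted sum of two normals in phi-units (a bi-lognormal TGSD)."""
    g1 = np.exp(-0.5 * ((phi - m1) / s1) ** 2) / (s1 * np.sqrt(2 * np.pi))
    g2 = np.exp(-0.5 * ((phi - m2) / s2) ** 2) / (s2 * np.sqrt(2 * np.pi))
    return w * g1 + (1.0 - w) * g2

# Hypothetical TGSD: weight fraction per half-phi bin (coarse + fine subpopulations).
phi_bins = np.arange(-4.0, 6.0, 0.5)
wt = bi_gaussian(phi_bins, 0.6, -1.0, 1.0, 3.0, 1.2)         # synthetic "data"
wt += np.random.default_rng(3).normal(0, 0.002, wt.size)     # measurement noise

p0 = [0.5, -1.5, 1.0, 2.5, 1.0]  # weight, mode1, sigma1, mode2, sigma2
popt, _ = curve_fit(bi_gaussian, phi_bins, wt, p0=p0,
                    bounds=([0, -6, 0.1, -6, 0.1], [1, 8, 4, 8, 4]))
print("weight of coarse subpopulation:", popt[0])
print("modes (phi):", popt[1], popt[3])
```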

  15. Quantitative analysis of tritium distribution in austenitic stainless steels welds

    NASA Astrophysics Data System (ADS)

    Roustila, A.; Kuromoto, N.; Brass, A. M.; Chêne, J.

    1994-08-01

    Tritium autoradiography was used to study the tritium distribution in laser and arc (TIG) weldments performed on tritiated AISI 316 samples. Quantitative values of the local tritium concentration were obtained from the microdensitometric analysis of the autoradiographs. This procedure was used to map the tritium concentration in the samples before and after laser and TIG treatments. The effect of the detritiation conditions and of welding on the tritium distribution in the material is extensively characterized. The results illustrate the value of the technique for predicting a possible embrittlement of the material associated with a local enhancement of the tritium concentration and the presence of helium 3 generated by tritium decay.

  16. A Study of Thread Load Distribution Using Optical Deformation Analysis

    NASA Astrophysics Data System (ADS)

    Bennett, J. M.; Graham, A. J.

    2012-07-01

    It is important to measure the load distribution on bolt threads in a way which represents in-service conditions. Previous methods for experimentally finding thread loads have had limitations, since polymer replicas were used and the joint was dead-weight tested. In this investigation an aluminium nut is torqued on a stainless steel bolt. By applying optical deformation analysis, using a GOM ARAMIS stereo camera system, this investigation finds the nut load distribution. This is a novel method for measuring load and allows for a more representative experimental setup. Results show that the first thread carries almost twice the average load of the consecutive pitches.

  17. Modelling Framework and the Quantitative Analysis of Distributed Energy Resources in Future Distribution Networks

    NASA Astrophysics Data System (ADS)

    Han, Xue; Sandels, Claes; Zhu, Kun; Nordström, Lars

    2013-08-01

    There has been a large body of statements claiming that the large-scale deployment of Distributed Energy Resources (DERs) could eventually reshape future distribution grid operation in numerous ways. Thus, it is necessary to introduce a framework to measure to what extent power system operation will be changed by various parameters of DERs. This article proposes a modelling framework for an overview analysis of the correlation between DER parameters and distribution grid operation. Furthermore, to validate the framework, the authors describe reference models for different categories of DERs with their unique characteristics, comprising distributed generation, active demand and electric vehicles. Subsequently, quantitative analysis is made on the basis of the current and envisioned DER deployment scenarios proposed for Sweden. Simulations are performed in two typical distribution network models for four seasons. The simulation results show that in general the DER deployment brings possibilities to reduce power losses and voltage drops by compensating power from local generation and optimizing local load profiles.

  18. Vibrational Energy Distribution Analysis (VEDA): Scopes and limitations

    NASA Astrophysics Data System (ADS)

    Jamróz, Michał H.

    2013-10-01

    The principle of operation of the VEDA program, written by the author for Potential Energy Distribution (PED) analysis of theoretical vibrational spectra, is described. Nowadays, PED analysis is an indispensable tool in serious analysis of vibrational spectra. To perform a PED analysis it is necessary to define 3N-6 linearly independent local mode coordinates. Even for 20-atom molecules this is a difficult task. The VEDA program reads the input data automatically from the Gaussian program output files. Then, VEDA automatically proposes an introductory set of local mode coordinates. Next, more adequate coordinates are proposed by the program and optimized to obtain maximal elements of each column (internal coordinate) of the PED matrix (the EPM parameter). The possibility of automatic optimization of PED contributions is a unique feature of the VEDA program, absent in other programs performing PED analysis.

  19. Analysis of georadar data to estimate the snow depth distribution

    NASA Astrophysics Data System (ADS)

    Godio, A.; Rege, R. B.

    2016-06-01

    We have performed extensive georadar surveys for mapping the snow depth in the basin of Breuil-Cervinia (Aosta Valley) in the Italian Alps, close to the Matterhorn. More than 9 km of georadar profiles were acquired in April 2008 and 15 km in April 2009, distributed over a hydrological basin of about 12 km². Radar surveys were carried out partially on the iced area of the Ventina glacier at elevations higher than 3000 m a.s.l. and partially at lower elevations (2500 m-3000 m) on the gentle slopes of the basin where the winter snow accumulated directly on the ground surface. The snow distribution in the basin at the end of the season can vary significantly according to elevation range, exposition and ground morphology. In small catchments the snow depth reached 6-7 m. At higher elevations, on the glacier, a more homogeneous distribution is usually observed. A descriptive statistical analysis of the dataset is discussed to demonstrate the high spatial variability of the snow depth distribution in the area. The probability distribution of the snow depth fits the gamma distribution with a good correlation. By contrast, we did not find any satisfactory relationship of the snow depth with the main morphological parameters of the terrain (elevation, slope, curvature). This suggests that the snow distribution at the end of the winter season is mainly conditioned by transport phenomena and re-distribution by wind action. The comparison of the georadar results with hand probe measurements points out the low accuracy of snow depth estimates in the area obtained by conventional hand probing alone, encouraging the development of technology for fast and accurate mapping of the snow depth at the basin scale.

  20. Neuronal model with distributed delay: analysis and simulation study for gamma distribution memory kernel.

    PubMed

    Karmeshu; Gupta, Varun; Kadambari, K V

    2011-06-01

    A single neuronal model incorporating distributed delay (memory) is proposed. The stochastic model has been formulated as a Stochastic Integro-Differential Equation (SIDE), which results in the underlying process being non-Markovian. A detailed analysis of the model when the distributed delay kernel has exponential form (weak delay) has been carried out. The selection of an exponential kernel has enabled the transformation of the non-Markovian model to a Markovian model in an extended state space. For the study of the First Passage Time (FPT) with exponential delay kernel, the model has been transformed to a system of coupled Stochastic Differential Equations (SDEs) in two-dimensional state space. Simulation studies of the SDEs provide insight into the effect of the weak delay kernel on the Inter-Spike Interval (ISI) distribution. A measure based on Jensen-Shannon divergence is proposed which can be used to make a choice between two competing models, viz. the distributed delay model vis-à-vis the LIF model. An interesting feature of the model is that the behavior of the coefficient of variation CV(t) of the ISI distribution with respect to the memory kernel time constant parameter η reveals that the neuron can switch from a bursting state to a non-bursting state as the noise intensity parameter changes. The membrane potential exhibits a decaying auto-correlation structure with or without damped oscillatory behavior depending on the choice of parameters. This behavior is in agreement with empirically observed patterns of spike counts in a fixed time window. The power spectral density derived from the auto-correlation function is found to exhibit single and double peaks. The model is also examined for the case of strong delay, with a memory kernel having the form of a Gamma distribution. In contrast to the fast decay of the damped oscillations of the ISI distribution for the model with weak delay kernel, the decay of the damped oscillations is found to be slower for the model with strong delay kernel. PMID:21701877

  1. Comparison of the different probability distributions for earthquake hazard assessment in the North Anatolian Fault Zone

    NASA Astrophysics Data System (ADS)

    Yilmaz, Şeyda; Bayrak, Erdem; Bayrak, Yusuf

    2016-04-01

    In this study we examined and compared three different probability distributions to determine the most suitable model for probabilistic assessment of earthquake hazards. We analyzed a reliable homogeneous earthquake catalogue for the period 1900-2015 for magnitude M ≥ 6.0 and estimated the probabilistic seismic hazard in the North Anatolian Fault zone (39°-41° N, 30°-40° E) using three distributions, namely the Weibull distribution, the Fréchet distribution and the three-parameter Weibull distribution. The suitability of the distribution parameters was evaluated with the Kolmogorov-Smirnov (K-S) goodness-of-fit test. We also compared the estimated cumulative probability and the conditional probabilities of occurrence of earthquakes for different elapsed times using these three distributions. We used Easyfit and Matlab software to calculate the distribution parameters and plotted the conditional probability curves. We concluded that the Weibull distribution was more suitable than the other distributions in this region.
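
    The comparison described here can be prototyped with scipy, which implements the Fréchet distribution as invweibull. A minimal sketch on synthetic interevent times (the catalogue data would be used in practice):

```python
import numpy as np
from scipy import stats

# Hypothetical interevent times (years) of M >= 6.0 earthquakes, for illustration.
rng = np.random.default_rng(42)
times = stats.weibull_min.rvs(c=1.1, scale=6.0, size=80, random_state=rng)

# Maximum-likelihood fits for the three candidate models.
fits = {
    "Weibull (2-p)": ("weibull_min", stats.weibull_min.fit(times, floc=0)),
    "Frechet":       ("invweibull",  stats.invweibull.fit(times, floc=0)),
    "Weibull (3-p)": ("weibull_min", stats.weibull_min.fit(times)),  # free location
}

# K-S goodness-of-fit for each fitted model; smaller D means a better fit.
for name, (dist, params) in fits.items():
    D, p = stats.kstest(times, dist, args=params)
    print(f"{name:14s}  K-S D = {D:.3f}, p = {p:.3f}")
```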

  2. Bayesian analysis of a disability model for lung cancer survival.

    PubMed

    Armero, C; Cabras, S; Castellanos, M E; Perra, S; Quirós, A; Oruezábal, M J; Sánchez-Rubio, J

    2016-02-01

    Bayesian reasoning, survival analysis and multi-state models are used to assess survival times for Stage IV non-small-cell lung cancer patients and the evolution of the disease over time. Bayesian estimation is done using minimum informative priors for the Weibull regression survival model, leading to an automatic inferential procedure. Markov chain Monte Carlo methods have been used for approximating posterior distributions and the Bayesian information criterion has been considered for covariate selection. In particular, the posterior distribution of the transition probabilities, resulting from the multi-state model, constitutes a very interesting tool which could be useful to help oncologists and patients make efficient and effective decisions. PMID:22767866

  3. Electrical Power Distribution and Control Modeling and Analysis

    NASA Technical Reports Server (NTRS)

    Fu, Johnny S.; Liffring, Mark; Mehdi, Ishaque S.

    2001-01-01

    This slide presentation reviews the use of Electrical Power Distribution and Control (EPD&C) Modeling and how modeling can support analysis. The presentation discusses using the EASY5 model to simulate and analyze the Space Shuttle Electric Auxiliary Power Unit. Diagrams of the model schematics are included, as well as graphs of the battery cell impedance, hydraulic load dynamics, and EPD&C response to hydraulic load variations.

  4. Distribution System Reliability Analysis for Smart Grid Applications

    NASA Astrophysics Data System (ADS)

    Aljohani, Tawfiq Masad

    Reliability of power systems is a key aspect in modern power system planning, design, and operation. The ascendance of the smart grid concept has provided high hopes of developing an intelligent network that is capable of being a self-healing grid, offering the ability to overcome the interruption problems that face the utility and cost it tens of millions in repairs and losses. To address these reliability concerns, the power utilities and interested parties have spent extensive amounts of time and effort analyzing and studying the reliability of the generation and transmission sectors of the power grid. Only recently has attention shifted to improving the reliability of the distribution network, the connection joint between the power providers and the consumers where most of the electricity problems occur. In this work, we examine the effect of smart grid applications in improving the reliability of power distribution networks. The test system used in this thesis is the IEEE 34 node test feeder, released in 2003 by the Distribution System Analysis Subcommittee of the IEEE Power Engineering Society. The objective is to analyze the feeder for the optimal placement of automatic switching devices and quantify their proper installation based on the performance of the distribution system. The measures are the changes in the system reliability indices, including SAIDI, SAIFI, and EUE. A further goal is to design and simulate the effect of the installation of Distributed Generators (DGs) on the utility's distribution system and measure the potential improvement in its reliability. The software used in this work is DISREL, an intelligent power distribution package developed by General Reliability Co.
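
    For reference, the indices named above are simple ratios over the customer base. A minimal sketch with hypothetical interruption records:

```python
# Minimal sketch of the reliability indices named above, using hypothetical
# interruption records: (customers_interrupted, outage_duration_hours, unserved_kWh).
records = [(120, 1.5, 300.0), (40, 4.0, 450.0), (500, 0.5, 900.0)]
total_customers = 2000

saifi = sum(n for n, _, _ in records) / total_customers       # interruptions per customer
saidi = sum(n * d for n, d, _ in records) / total_customers   # outage hours per customer
eue = sum(e for _, _, e in records)                           # energy not served (kWh)

print(f"SAIFI = {saifi:.3f} int/cust, SAIDI = {saidi:.3f} h/cust, EUE = {eue:.0f} kWh")
```

    Placing switches or DGs so that fewer customers see each fault, or see it for less time, is what drives these indices down in the simulated feeder.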

  5. Distributed and interactive visual analysis of omics data.

    PubMed

    Farag, Yehia; Berven, Frode S; Jonassen, Inge; Petersen, Kjell; Barsnes, Harald

    2015-11-01

    The amount of publicly shared proteomics data has grown exponentially over the last decade as the solutions for sharing and storing the data have improved. However, the use of the data is often limited by the manner in which it is made available. There are two main approaches: download and inspect the proteomics data locally, or interact with the data via one or more web pages. The first is limited by having to download the data and thus requires local computational skills and resources, while the latter is most often limited in terms of interactivity and the analysis options available. A solution is to develop web-based systems supporting distributed and fully interactive visual analysis of proteomics data. The use of a distributed architecture makes it possible to perform the computational analysis at the server, while the results of the analysis can be displayed via a web browser without the need to download the whole dataset. Here the challenges related to developing such systems for omics data are discussed, in particular how this approach allows for multiple connected interactive visual displays of an omics dataset in a web-based setting, and the benefits it provides for computational analysis of proteomics data. This article is part of a Special Issue entitled: Computational Proteomics. PMID:26047716

  6. Automatic analysis of attack data from distributed honeypot network

    NASA Astrophysics Data System (ADS)

    Safarik, Jakub; Voznak, MIroslav; Rezac, Filip; Partila, Pavol; Tomala, Karel

    2013-05-01

    There are many ways of getting real data about malicious activity in a network. One of them relies on masquerading monitoring servers as production ones. These servers are called honeypots, and data about attacks on them bring us valuable information about actual attacks and the techniques used by hackers. The article describes a distributed topology of honeypots, which was developed with a strong orientation toward monitoring of IP telephony traffic. IP telephony servers can be easily exposed to various types of attacks, and without protection this situation can lead to loss of money and other unpleasant consequences. Using a distributed topology with honeypots placed in different geographical locations and networks provides more valuable and independent results. With an automatic system for gathering information from all honeypots, it is possible to work with all the information at one centralized point. Communication between the honeypots and the centralized data store uses secure SSH tunnels, and the server communicates only with authorized honeypots. The centralized server also automatically analyses the data from each honeypot. The results of this analysis, along with other statistical data about malicious activity, are easily accessible through a built-in web server. All statistical and analysis reports serve as the information basis for an algorithm which classifies the different types of VoIP attacks used. The web interface then provides a tool for quick comparison and evaluation of actual attacks in all monitored networks. The article describes both the honeypot nodes in the distributed architecture, which monitor suspicious activity, and the methods and algorithms used on the server side for analysis of the gathered data.

  7. Performance of free-space optical communication system using differential phase-shift keying subcarrier-intensity modulated over the exponentiated Weibull channel

    NASA Astrophysics Data System (ADS)

    Gao, Zhengguang; Liu, Hongzhan; Liao, Renbo; Ma, Xiaoping

    2015-10-01

    A differential phase-shift keying modulation for free-space optical (FSO) communication is considered in atmospheric turbulence modeled by the exponentiated Weibull distribution. Selection combining (SelC) spatial diversity is used to mitigate the effects of atmospheric turbulence. We analyze the average bit error rate (BER) of the system with SelC spatial diversity using the Gauss-Laguerre approximation. The effect of aperture averaging and spatial diversity on the outage probability is also studied. The numerical results show that a lower signal-to-noise ratio is required to reach the same BER when a large aperture and SelC spatial diversity are deployed in the FSO system. Moreover, it is shown that aperture averaging and SelC spatial diversity are effective in improving the system's outage probability.
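
    A rough numerical sketch of the Gauss-Laguerre approach is shown below: the average BER is the conditional BER integrated against the exponentiated Weibull irradiance pdf, and Gauss-Laguerre quadrature handles the semi-infinite integral. The link model γ = γ₀·h and all parameter values are illustrative assumptions, not the paper's system model.

```python
import numpy as np

def ew_pdf(h, alpha, beta, eta):
    """Exponentiated Weibull pdf of the normalized irradiance h."""
    z = (h / eta) ** beta
    return ((alpha * beta / eta) * (h / eta) ** (beta - 1)
            * np.exp(-z) * (1.0 - np.exp(-z)) ** (alpha - 1))

def avg_ber_dpsk(g0, alpha, beta, eta, n=64):
    """Average DPSK BER, Pb = integral of 0.5*exp(-g0*h)*f(h) dh over h > 0,
    evaluated by Gauss-Laguerre quadrature under the illustrative model gamma = g0*h."""
    t, w = np.polynomial.laguerre.laggauss(n)  # nodes/weights for int e^-t g(t) dt
    h = t / g0                                 # substitution t = g0 * h
    return np.sum(w * 0.5 * ew_pdf(h, alpha, beta, eta) / g0)

for snr_db in (5, 10, 15, 20):
    g0 = 10 ** (snr_db / 10)
    print(snr_db, "dB ->", avg_ber_dpsk(g0, alpha=2.3, beta=1.6, eta=0.9))
```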

  8. Progressive failure site generation in AlGaN/GaN high electron mobility transistors under OFF-state stress: Weibull statistics and temperature dependence

    SciTech Connect

    Sun, Huarui; Bajo, Miguel Montes; Uren, Michael J.; Kuball, Martin

    2015-01-26

    Gate leakage degradation of AlGaN/GaN high electron mobility transistors under OFF-state stress is investigated using a combination of electrical, optical, and surface morphology characterizations. The generation of leakage “hot spots” at the edge of the gate is found to be strongly temperature accelerated. The time for the formation of each failure site follows a Weibull distribution with a shape parameter in the range of 0.7–0.9 from room temperature up to 120 °C. The average leakage per failure site is only weakly temperature dependent. The stress-induced structural degradation at the leakage sites exhibits a temperature dependence in the surface morphology, which is consistent with a surface defect generation process involving temperature-associated changes in the breakdown sites.

  9. GPS FOM Chimney Analysis using Generalized Extreme Value Distribution

    NASA Technical Reports Server (NTRS)

    Ott, Rick; Frisbee, Joe; Saha, Kanan

    2004-01-01

    Many times an objective of a statistical analysis is to estimate a limit value, such as a 3-sigma 95% confidence upper limit, from a data sample. The generalized extreme value (GEV) distribution method can be profitably employed in many situations for such an estimate. It is well known that, according to the central limit theorem, the mean value of a large data set is normally distributed irrespective of the distribution of the data from which the mean value is derived. In a somewhat similar fashion, it is observed that the extreme value of a data set often has a distribution that can be formulated with a generalized distribution. In space shuttle entry with 3-string GPS navigation, the figure of merit (FOM) value gives a measure of GPS navigated state accuracy. A GPS navigated state with FOM of 6 or higher is deemed unacceptable and is said to form a FOM 6 or higher chimney. A FOM chimney is a period of time during which the FOM value stays higher than 5. A longer period of FOM of value 6 or higher causes the navigated state to accumulate more error for lack of a state update. For an acceptable landing it is imperative that the state error remains low; hence, at low altitude during entry, GPS data of FOM greater than 5 must not last more than 138 seconds. To test the GPS performance, many entry test cases were simulated at the Avionics Development Laboratory. Only high-value FOM chimneys are consequential. The extreme value statistical technique is applied to analyze high-value FOM chimneys. The maximum likelihood method is used to determine the parameters that characterize the GEV distribution, and then the limit value statistics are estimated.
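
    In scipy terms, the GEV fit and limit-value estimate described above reduce to a few calls; the sketch below uses synthetic chimney-duration maxima as placeholders for the Avionics Development Laboratory simulation output.

```python
import numpy as np
from scipy import stats

# Hypothetical per-run maxima of FOM-chimney durations (seconds), one value
# per simulated entry case.
rng = np.random.default_rng(11)
maxima = stats.genextreme.rvs(c=-0.1, loc=60.0, scale=15.0, size=200,
                              random_state=rng)

# Maximum-likelihood GEV fit and an extreme upper quantile of chimney duration.
c, loc, scale = stats.genextreme.fit(maxima)
q999 = stats.genextreme.ppf(0.999, c, loc, scale)
print(f"shape = {c:.3f}, loc = {loc:.1f}, scale = {scale:.1f}")
print(f"99.9th percentile chimney duration: {q999:.1f} s "
      f"(compare with the 138 s requirement)")
```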

  10. Componential distribution analysis of food using near infrared ray image

    NASA Astrophysics Data System (ADS)

    Yamauchi, Hiroki; Kato, Kunihito; Yamamoto, Kazuhiko; Ogawa, Noriko; Ohba, Kimie

    2008-11-01

    The components of food related to "deliciousness" are usually evaluated by componential analysis, which determines the component content and the types of components in the food. However, componential analysis cannot resolve the spatial distribution of components, and the measurement is time consuming. We propose a method to measure the two-dimensional distribution of a component in food using a near infrared (IR) image. The advantage of our method is the ability to visualize otherwise invisible components. Many components in food have characteristics such as absorption and reflection of light in the IR range. The component content is measured using subtraction between images at two wavelengths of near IR light. In this paper, we describe a method to measure food components using near IR image processing, and we show an application to visualize the saccharose in a pumpkin.

  11. Iterative Monte Carlo analysis of spin-dependent parton distributions

    NASA Astrophysics Data System (ADS)

    Sato, Nobuo; Melnitchouk, W.; Kuhn, S. E.; Ethier, J. J.; Accardi, A.; Jefferson Lab Angular Momentum Collaboration

    2016-04-01

    We present a comprehensive new global QCD analysis of polarized inclusive deep-inelastic scattering, including the latest high-precision data on longitudinal and transverse polarization asymmetries from Jefferson Lab and elsewhere. The analysis is performed using a new iterative Monte Carlo fitting technique which generates stable fits to polarized parton distribution functions (PDFs) with statistically rigorous uncertainties. Inclusion of the Jefferson Lab data leads to a reduction in the PDF errors for the valence and sea quarks, as well as in the gluon polarization uncertainty at x ≳0.1 . The study also provides the first determination of the flavor-separated twist-3 PDFs and the d2 moment of the nucleon within a global PDF analysis.

  12. Characterization of plutonium distribution in MIMAS MOX by image analysis

    NASA Astrophysics Data System (ADS)

    Oudinet, Ghislain; Munoz-Viallard, Isabelle; Aufore, Laurence; Gotta, Marie-Jeanne; Becker, J. M.; Chiarelli, G.; Castelli, R.

    2008-03-01

    A better understanding of MOX fuel in-pile behaviour requires a very detailed characterization of the Pu distribution in the pellet before and after irradiation. Electron probe microanalysis (EPMA) can be used to determine the distributions of chemical elements with a spatial resolution of 1 μm. This paper describes the development of X-ray microanalysis techniques to produce semi-quantitative 'maps' of plutonium concentrations in order to rapidly characterize large areas of the fuel microstructure (1 mm²) with reasonable accuracy. A new segmentation technique based on statistical compatibility is then proposed, so as to finely describe the MIMAS MOX fuel microstructure. Two materials were finely characterized to demonstrate the reliability of this new method. In each case, the results demonstrate the good and reliable accuracy of this new characterization methodology. The analysis method used is currently able to identify three so-called phases (Pu-rich agglomerates, a coating phase and uranium-rich agglomerates), as well as to quantify the plutonium distribution and the plutonium content of these three phases. The impact of the fabrication process on the microstructure can be seen both in the surface distribution variations of the plutonium and in the local plutonium content variations.

  13. Stochastic sensitivity analysis and kernel inference via distributional data.

    PubMed

    Li, Bochong; You, Lingchong

    2014-09-01

    Cellular processes are noisy due to the stochastic nature of biochemical reactions. As such, it is impossible to predict the exact quantity of a molecule or other attributes at the single-cell level. However, the distribution of a molecule over a population is often deterministic and is governed by the underlying regulatory networks relevant to the cellular functionality of interest. Recent studies have started to exploit this property to infer network states. To facilitate the analysis of distributional data in a general experimental setting, we introduce a computational framework to efficiently characterize the sensitivity of distributional output to changes in external stimuli. Further, we establish a probability-divergence-based kernel regression model to accurately infer signal level based on distribution measurements. Our methodology is applicable to any biological system subject to stochastic dynamics and can be used to elucidate how population-based information processing may contribute to organism-level functionality. It also lays the foundation for engineering synthetic biological systems that exploit population decoding to more robustly perform various biocomputation tasks, such as disease diagnostics and environmental-pollutant sensing. PMID:25185560

  14. Performance Analysis of an Actor-Based Distributed Simulation

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D.

    1998-01-01

    Object-oriented design of simulation programs appears to be very attractive because of the natural association of components in the simulated system with objects. There is great potential in distributing the simulation across several computers for the purpose of parallel computation and its consequent handling of larger problems in less elapsed time. One approach to such a design is to use "actors", that is, active objects with their own thread of control. Because these objects execute concurrently, communication is via messages. This is in contrast to an object-oriented design using passive objects where communication between objects is via method calls (direct calls when they are in the same address space and remote procedure calls when they are in different address spaces or different machines). This paper describes a performance analysis program for the evaluation of a design for distributed simulations based upon actors.

  15. Principal Component Analysis for Normal-Distribution-Valued Symbolic Data.

    PubMed

    Wang, Huiwen; Chen, Meiling; Shi, Xiaojun; Li, Nan

    2016-02-01

    This paper puts forward a new approach to principal component analysis (PCA) for normal-distribution-valued symbolic data, which has vast potential for applications in the economic and management fields. We derive a full set of numerical characteristics and the variance-covariance structure for such data, which forms the foundation for our analytical PCA approach. Our approach is able to use all of the variance information in the original data, rather than only the centers, vertices, etc. used by the prevailing representative-type approach in the literature. The paper also provides an accurate approach to constructing the observations in a PC space based on the linear additivity property of the normal distribution. The effectiveness of the proposed method is illustrated by simulated numerical experiments. Finally, our method is applied to explain the puzzle of the risk-return tradeoff in China's stock market. PMID:25095276

  16. Distributed analysis environment for HEP and interdisciplinary applications

    NASA Astrophysics Data System (ADS)

    Mościcki, J. T.

    2003-04-01

    Huge data volumes of the Large Hadron Collider experiments require parallel end-user analysis on clusters of hundreds of machines. While the focus of end-user high-energy physics analysis is on ntuples, the master-worker model of parallel processing may also be used in other contexts such as detector simulation. The aim of the DIANE R&D project (http://cern.ch/it-proj-diane), currently held by the CERN IT/API group, is to create a generic, component-based framework for distributed, parallel data processing in the master-worker model. Pre-compiled user analysis code is loaded dynamically at runtime in component libraries and called back when appropriate. Such an application-oriented framework must be flexible enough to integrate with the emerging GRID technologies as they become available. Therefore, common services such as environment reconstruction, code distribution, load balancing and authentication are designed and implemented as pluggable modules. This approach allows one to easily replace them with modules implemented with newer technology as necessary. The paper gives an overview of the DIANE architecture and explains the main design choices. Selected examples of diverse applications from a variety of domains applicable to DIANE are presented, as well as preliminary benchmarking results.
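
    The master-worker model itself is framework independent; the toy sketch below reproduces its essence with Python's standard multiprocessing module. It is not DIANE's API (where pre-compiled analysis components are loaded dynamically, as described above); the chunking and callback names are hypothetical.

```python
import multiprocessing as mp

def analyze_chunk(chunk):
    """Worker callback: a stand-in for user analysis code, e.g. filling a
    histogram from one file of events. Here it just sums squares."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    # Master: split the dataset, farm chunks out to workers, merge the results.
    dataset = [list(range(i, i + 1000)) for i in range(0, 100000, 1000)]
    with mp.Pool(processes=8) as pool:
        partial_results = pool.map(analyze_chunk, dataset)
    print("merged result:", sum(partial_results))
```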

  17. Time series power flow analysis for distribution connected PV generation.

    SciTech Connect

    Broderick, Robert Joseph; Quiroz, Jimmy Edward; Ellis, Abraham; Reno, Matthew J.; Smith, Jeff; Dugan, Roger

    2013-01-01

    Distributed photovoltaic (PV) projects must go through an interconnection study process before connecting to the distribution grid. These studies are intended to identify the likely impacts and mitigation alternatives. In the majority of the cases, system impacts can be ruled out or mitigation can be identified without an involved study, through a screening process or a simple supplemental review study. For some proposed projects, expensive and time-consuming interconnection studies are required. The challenges to performing the studies are twofold. First, every study scenario is potentially unique, as the studies are often highly specific to the amount of PV generation capacity that varies greatly from feeder to feeder and is often unevenly distributed along the same feeder. This can cause location-specific impacts and mitigations. The second challenge is the inherent variability in PV power output which can interact with feeder operation in complex ways, by affecting the operation of voltage regulation and protection devices. The typical simulation tools and methods in use today for distribution system planning are often not adequate to accurately assess these potential impacts. This report demonstrates how quasi-static time series (QSTS) simulation and high time-resolution data can be used to assess the potential impacts in a more comprehensive manner. The QSTS simulations are applied to a set of sample feeders with high PV deployment to illustrate the usefulness of the approach. The report describes methods that can help determine how PV affects distribution system operations. The simulation results are focused on enhancing the understanding of the underlying technical issues. The examples also highlight the steps needed to perform QSTS simulation and describe the data needed to drive the simulations. The goal of this report is to make the methodology of time series power flow analysis readily accessible to utilities and others responsible for evaluating potential PV impacts.

  18. Multiobjective sensitivity analysis and optimization of distributed hydrologic model MOBIDIC

    NASA Astrophysics Data System (ADS)

    Yang, J.; Castelli, F.; Chen, Y.

    2014-10-01

    Calibration of distributed hydrologic models usually involves dealing with a large number of distributed parameters and with optimization problems whose multiple objectives are often conflicting and arise in a natural fashion. This study presents a multiobjective sensitivity and optimization approach to handle these problems for the MOBIDIC (MOdello di Bilancio Idrologico DIstribuito e Continuo) distributed hydrologic model, which combines two sensitivity analysis techniques (the Morris method and the state-dependent parameter (SDP) method) with the multiobjective optimization (MOO) approach ɛ-NSGAII (Non-dominated Sorting Genetic Algorithm-II). This approach was implemented to calibrate MOBIDIC in an application to the Davidson watershed, North Carolina, with three objective functions, i.e., the standardized root mean square error (SRMSE) of logarithmic transformed discharge, the water balance index, and the mean absolute error of the logarithmic transformed flow duration curve, and its results were compared with those of a single objective optimization (SOO) with the traditional Nelder-Mead simplex algorithm used in MOBIDIC, taking the objective function as the Euclidean norm of these three objectives. Results show that (1) the two sensitivity analysis techniques are effective and efficient for determining the sensitive processes and insensitive parameters: surface runoff and evaporation are very sensitive processes for all three objective functions, while groundwater recession and soil hydraulic conductivity are not sensitive and were excluded from the optimization. (2) Both MOO and SOO lead to acceptable simulations; e.g., for MOO, the average Nash-Sutcliffe value is 0.75 in the calibration period and 0.70 in the validation period. (3) Evaporation and surface runoff show similar importance for the watershed water balance, while the contribution of baseflow can be ignored. (4) Compared to SOO, which was dependent on the initial starting location, MOO provides more insight into parameter sensitivity and the conflicting characteristics of these objective functions. Multiobjective sensitivity analysis and optimization provide an alternative way for future MOBIDIC modeling.
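
    For intuition on the Morris screening step, here is a minimal sketch of one set of elementary effects in NumPy. The toy objective is an assumption standing in for a MOBIDIC objective function; a real Morris analysis averages absolute elementary effects over many random trajectories before freezing insensitive parameters.

        import numpy as np

        def elementary_effects(model, x0, delta=0.05):
            """One Morris-style elementary effect per parameter at base point x0."""
            base = model(x0)
            effects = []
            for i in range(len(x0)):
                x = x0.copy()
                x[i] += delta                      # perturb one parameter at a time
                effects.append((model(x) - base) / delta)
            return np.array(effects)

        # Toy stand-in for a hydrologic objective function (illustrative only).
        toy_model = lambda p: p[0] ** 2 + 0.1 * p[1] + 0.001 * p[2]
        x0 = np.array([0.5, 0.5, 0.5])
        print(elementary_effects(toy_model, x0))   # large values flag sensitive parameters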

  19. Numerical analysis of decoy state quantum key distribution protocols

    SciTech Connect

    Harrington, Jim W; Rice, Patrick R

    2008-01-01

    Decoy state protocols are a useful tool for many quantum key distribution systems implemented with weak coherent pulses, allowing significantly better secret bit rates and longer maximum distances. In this paper we present a method to numerically find optimal three-level protocols, and we examine how the secret bit rate and the optimized parameters are dependent on various system properties, such as session length, transmission loss, and visibility. Additionally, we show how to modify the decoy state analysis to handle partially distinguishable decoy states as well as uncertainty in the prepared intensities.

  20. EST analysis pipeline: use of distributed computing resources.

    PubMed

    González, Francisco Javier; Vizcaíno, Juan Antonio

    2011-01-01

    This chapter describes how a pipeline for the analysis of expressed sequence tag (EST) data can be implemented, based on our previous experience generating ESTs from Trichoderma spp. We focus on key steps in the workflow, such as the processing of raw data from the sequencers, the clustering of ESTs, and the functional annotation of the sequences using BLAST, InterProScan, and BLAST2GO. Some of the steps require the use of intensive computing power. Since these resources are not available for small research groups or institutes without bioinformatics support, an alternative will be described: the use of distributed computing resources (local grids and Amazon EC2). PMID:21590415

  1. Efficient network meta-analysis: a confidence distribution approach*

    PubMed Central

    Yang, Guang; Liu, Dungang; Liu, Regina Y.; Xie, Minge; Hoaglin, David C.

    2014-01-01

    Summary Network meta-analysis synthesizes several studies of multiple treatment comparisons to simultaneously provide inference for all treatments in the network. It can often strengthen inference on pairwise comparisons by borrowing evidence from other comparisons in the network. Current network meta-analysis approaches are derived from either conventional pairwise meta-analysis or hierarchical Bayesian methods. This paper introduces a new approach for network meta-analysis by combining confidence distributions (CDs). Instead of combining point estimators from individual studies in the conventional approach, the new approach combines CDs which contain richer information than point estimators and thus achieves greater efficiency in its inference. The proposed CD approach can efficiently integrate all studies in the network and provide inference for all treatments even when individual studies contain only comparisons of subsets of the treatments. Through numerical studies with real and simulated data sets, the proposed approach is shown to outperform or at least equal the traditional pairwise meta-analysis and a commonly used Bayesian hierarchical model. Although the Bayesian approach may yield comparable results with a suitably chosen prior, it is highly sensitive to the choice of priors (especially the prior of the between-trial covariance structure), which is often subjective. The CD approach is a general frequentist approach and is prior-free. Moreover, it can always provide a proper inference for all the treatment effects regardless of the between-trial covariance structure. PMID:25067933

  2. Calibration of Boltzmann distribution priors in Bayesian data analysis.

    PubMed

    Mechelke, Martin; Habeck, Michael

    2012-12-01

    The Boltzmann distribution is commonly used as a prior probability in Bayesian data analysis. Examples include the Ising model in statistical image analysis and the canonical ensemble based on molecular dynamics force fields in protein structure calculation. These models involve a temperature or weighting factor that needs to be inferred from the data. Bayesian inference stipulates that the temperature be determined from the model evidence. This is challenging because the model evidence, a ratio of two high-dimensional normalization integrals, cannot be calculated analytically. We outline a replica-exchange Monte Carlo scheme that allows us to estimate the model evidence by use of multiple histogram reweighting. The method is illustrated for an Ising model and examples in protein structure determination. PMID:23368076

  3. Conditions for transmission path analysis in energy distribution models

    NASA Astrophysics Data System (ADS)

    Aragonès, Àngels; Guasch, Oriol

    2016-02-01

    In this work, we explore under which conditions transmission path analysis (TPA) developed for statistical energy analysis (SEA) can be applied to the less restrictive energy distribution (ED) models. It is shown that TPA can be extended without problems to proper-SEA systems, whereas the situation is not so clear for quasi-SEA systems. In the general case, it has been found that a TPA can always be performed on an ED model if its inverse influence energy coefficient (EIC) matrix turns out to have negative off-diagonal entries. If this condition is satisfied, it can be shown that the inverse EIC matrix automatically becomes an M-matrix. An ED graph can then be defined for it, and use can be made of graph-theoretic ranking path algorithms, previously developed for SEA systems, to classify dominant paths in ED models. A small mechanical system consisting of connected plates has been used to illustrate some of the exposed theoretical results.

  4. Analysis of vegetation distribution in relation to surface morphology

    NASA Astrophysics Data System (ADS)

    Savio, Francesca; Prosdocimi, Massimo; Tarolli, Paolo; Rulli, Cristina

    2013-04-01

    The scaling relationships between the curvature and local slope of a given point on the landscape and its drainage area reveal information about the dominant erosion processes over geomorphic time scales. Vegetation is known to influence erosion rates and landslide initiation, and is in turn influenced by these processes and by climatic regimes. Understanding the influence of vegetation dynamics on landscape organization is a fundamental challenge in the Earth Sciences. In this study we considered two headwater catchments with vegetation mostly characterized by grass species (high-altitude grassland), although shrubs (mainly Alnus viridis) and high forest (mainly Picea abies) are also common. We then analyzed the statistics relating vegetation distribution to different morphological patterns. High-resolution LiDAR data served as the basis from which to derive Digital Terrain Models (DTMs) and mathematical attributes of landscape morphology, including slope gradient, drainage area, aspect, surface curvature, topographic wetness index, and slope-area and curvature-area log-log diagrams. The results reveal distinct differences in the curvature-area and slope-area relationships of each vegetation type. For a given drainage area, mean landscape slope is generally found to increase with woody vegetation. A pronounced landsliding signature is detected in areas occupied by Alnus viridis, underlining the relation between this pioneer species and slope instability. This preliminary analysis suggests that, when high-resolution topography is available, it is possible to better characterize the vegetation distribution based on surface morphology, thus providing a useful tool for better understanding these processes and the role of vegetation in landscape evolution.

  5. Some Results Pertaining to the Two-Parameter Kappa Distribution and its use in Hydrological Frequency Analysis

    NASA Astrophysics Data System (ADS)

    Ashkar, F.; Choulakian, V.

    2004-05-01

    We present various forms of the 2-parameter kappa (KAP2) distribution and discuss some of its relationships to other distributions commonly used in fitting extreme hydrological data, such as the exponential, Pareto, Weibull, gamma, lognormal and log-logistic. We investigate a total of five parameter estimation methods for this distribution, including the methods of probability weighted moments (PWM), classical moments (MM), and maximum likelihood (ML). Among the newer methods that will be presented and discussed, some present a degree of robustness with respect to points in the extreme tail(s) of the distribution. Estimation equations are presented for the five methods, along with remarks and suggestions regarding the analytical or numerical procedures needed to solve them. Some asymptotic variances and covariances of parameter and quantile estimators are also presented. This is finally followed by a comparison of the five estimating methods by simulation, with special emphasis on root mean square error of estimates as a performance index. Some applications in hydrology are also discussed, especially in relation to flood frequency and low streamflow analyses of hydrological data by stochastic methods.

  6. The Distributed Seismological Observatory: A Web Based Seismological Data Analysis and Distribution Toolkit

    NASA Astrophysics Data System (ADS)

    Govoni, A.; Lomax, A.; Michelini, A.

    The Web now provides a single, universal infrastructure for developing client/server data access applications, and the seismology community can greatly benefit from this situation both for routine observatory data analysis and for research purposes. The Web has reduced the myriad of platforms and technologies used to handle and exchange data to a single user interface (HTML), a single client platform (the Web browser), a single network protocol (HTTP), and a single server platform (the Web server). In this context we have designed a system that, taking advantage of the latest developments in client/server data access technologies based on JAVA, JAVA RMI and XML, may solve the most common problems in data access and manipulation experienced in the seismological community. Key concepts in this design are a thin-client approach, minimum standards for data exchange, and distributed computing. Thin client means that any PC with a JAVA-enabled Web browser can interact with a set of remote data servers distributed in the world computer network, querying for data and for services. Minimum standards relate to the language needed for client/server interaction, which must be abstract enough that not everybody needs to know all the details of the transaction; this is solved by XML. Distribution means that a set of servers is able to provide to the client not only a data object (the actual data and the methods to work on it) but also the computing power to perform a particular task (a remote method in the JAVA RMI context), limiting the exchange of data to the results. This allows for client interaction even in situations of very limited communication bandwidth. We also describe in detail the implementation of the main modules of the toolkit: a data eater module that gathers/archives seismological data from a variety of sources, ranging from portable digitizer data to real-time network data; a picking/location server that allows for multi-user Web-based analysis of the waveforms using Lomax's SeisGram2K JAVA application and handles the posting of pickings and the hypocentral location of the earthquakes; and a seismological bulletin generator that produces both dynamic (for Web site use) and static (for data distribution) HTML observatory bulletins in which the user can easily browse both the earthquake event parameters and the associated waveform data. All these modules have been developed and used to manage real data in current projects of the Friuli (NE Italy) network.

  7. Distributed Principal Component Analysis for Wireless Sensor Networks

    PubMed Central

    Le Borgne, Yann-Aël; Raybaud, Sylvain; Bontempi, Gianluca

    2008-01-01

    Principal Component Analysis (PCA) is a data dimensionality reduction technique well-suited for processing data from sensor networks. It can be applied to tasks like compression, event detection, and event recognition. This technique is based on a linear transform where the sensor measurements are projected on a set of principal components. When sensor measurements are correlated, a small set of principal components can explain most of the measurements' variability. This makes it possible to significantly decrease the amount of radio communication and energy consumption. In this paper, we show that the power iteration method can be distributed in a sensor network in order to compute an approximation of the principal components. The proposed implementation relies on an aggregation service, which has recently been shown to provide a suitable framework for distributing the computation of a linear transform within a sensor network. We also extend this previous work by providing a detailed analysis of the computational, memory, and communication costs involved. A compression experiment involving real data validates the algorithm and illustrates the tradeoffs between accuracy and communication costs.
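
    The core numerical step, power iteration for the leading principal component, can be sketched in a few lines of NumPy. This centralized version is illustrative only; in the paper's setting the covariance products are computed in-network through the aggregation service.

        import numpy as np

        def leading_component(X, iters=100):
            """Power iteration for the first principal component of data matrix X."""
            Xc = X - X.mean(axis=0)          # center the sensor measurements
            C = Xc.T @ Xc / (len(Xc) - 1)    # sample covariance (aggregated in-network
                                             # in the distributed setting)
            v = np.random.default_rng(0).normal(size=C.shape[1])
            for _ in range(iters):
                v = C @ v                    # repeated multiplication amplifies the
                v /= np.linalg.norm(v)       # dominant eigenvector; renormalize
            return v

        X = np.random.default_rng(1).normal(size=(200, 5))   # synthetic sensor data
        print(leading_component(X))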

  8. Preliminary analysis of hub and spoke air freight distribution system

    NASA Technical Reports Server (NTRS)

    Whitehead, A. H., Jr.

    1978-01-01

    A brief analysis is made of the hub and spoke air freight distribution system which would employ less than 15 hub centers world wide with very large advanced distributed-load freighters providing the line-haul delivery between hubs. This system is compared to a more conventional network using conventionally-designed long-haul freighters which travel between numerous major airports. The analysis calculates all of the transportation costs, including handling charges and pickup and delivery costs. The results show that the economics of the hub/spoke system are severely compromised by the extensive use of feeder aircraft to deliver cargo into and from the large freighter terminals. Not only are the higher costs for the smaller feeder airplanes disadvantageous, but their use implies an additional exchange of cargo between modes compared to truck delivery. The conventional system uses far fewer feeder airplanes, and in many cases, none at all. When feeder aircraft are eliminated from the hub/spoke system, however, that system is universally more economical than any conventional system employing smaller line-haul aircraft.

  9. A theoretical analysis of basin-scale groundwater temperature distribution

    NASA Astrophysics Data System (ADS)

    An, Ran; Jiang, Xiao-Wei; Wang, Jun-Zhi; Wan, Li; Wang, Xu-Sheng; Li, Hailong

    2015-03-01

    The theory of regional groundwater flow is critical for explaining heat transport by moving groundwater in basins. Domenico and Palciauskas's (1973) pioneering study on convective heat transport in a simple basin assumed that convection has a small influence on redistributing groundwater temperature. Moreover, there has been no research focused on the temperature distribution around stagnation zones among flow systems. In this paper, the temperature distribution in the simple basin is reexamined and that in a complex basin with nested flow systems is explored. In both basins, compared to the temperature distribution due to conduction, convection leads to a lower temperature in most parts of the basin except for a small part near the discharge area. There is a high-temperature anomaly around the basin-bottom stagnation point where two flow systems converge due to a low degree of convection and a long travel distance, but there is no anomaly around the basin-bottom stagnation point where two flow systems diverge. In the complex basin, there are also high-temperature anomalies around internal stagnation points. Temperature around internal stagnation points could be very high when they are close to the basin bottom, for example, due to the small permeability anisotropy ratio. The temperature distribution revealed in this study could be valuable when using heat as a tracer to identify the pattern of groundwater flow in large-scale basins. Domenico PA, Palciauskas VV (1973) Theoretical analysis of forced convective heat transfer in regional groundwater flow. Geological Society of America Bulletin 84:3803-3814

  10. Probabilistic Life and Reliability Analysis of Model Gas Turbine Disk

    NASA Technical Reports Server (NTRS)

    Holland, Frederic A.; Melis, Matthew E.; Zaretsky, Erwin V.

    2002-01-01

    In 1939, W. Weibull developed what is now commonly known as the "Weibull Distribution Function" primarily to determine the cumulative strength distribution of small sample sizes of elemental fracture specimens. In 1947, G. Lundberg and A. Palmgren, using the Weibull Distribution Function developed a probabilistic lifing protocol for ball and roller bearings. In 1987, E. V. Zaretsky using the Weibull Distribution Function modified the Lundberg and Palmgren approach to life prediction. His method incorporates the results of coupon fatigue testing to compute the life of elemental stress volumes of a complex machine element to predict system life and reliability. This paper examines the Zaretsky method to determine the probabilistic life and reliability of a model gas turbine disk using experimental data from coupon specimens. The predicted results are compared to experimental disk endurance data.
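
    As a hedged illustration of the Weibull machinery underlying such lifing methods, the sketch below evaluates the two-parameter Weibull survival (reliability) function R(t) = exp[-(t/eta)^beta]; the shape and characteristic-life values are arbitrary assumptions, not the disk data.

        import numpy as np

        def weibull_reliability(t, beta, eta):
            """Two-parameter Weibull survival function R(t) = exp(-(t/eta)**beta)."""
            return np.exp(-(np.asarray(t, dtype=float) / eta) ** beta)

        beta, eta = 2.0, 1.0e4              # illustrative shape and characteristic life
        for t in (1e3, 5e3, 1e4):
            print(t, weibull_reliability(t, beta, eta))
        # Life at 90% reliability (the "L10" life used in bearing-style lifing):
        print(eta * (-np.log(0.9)) ** (1 / beta))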

  11. Evaluation of Distribution Analysis Software for DER Applications

    SciTech Connect

    Staunton, RH

    2003-01-23

    The term "Distributed energy resources" or DER refers to a variety of compact, mostly self-contained power-generating technologies that can be combined with energy management and storage systems and used to improve the operation of the electricity distribution system, whether or not those technologies are connected to an electricity grid. Implementing DER can be as simple as installing a small electric generator to provide backup power at an electricity consumer's site. Or it can be a more complex system, highly integrated with the electricity grid and consisting of electricity generation, energy storage, and power management systems. DER devices provide opportunities for greater local control of electricity delivery and consumption. They also enable more efficient utilization of waste heat in combined cooling, heating and power (CHP) applications, boosting efficiency and lowering emissions. CHP systems can provide electricity, heat and hot water for industrial processes, space heating and cooling, refrigeration, and humidity control to improve indoor air quality. DER technologies are playing an increasingly important role in the nation's energy portfolio. They can be used to meet base load power, peaking power, backup power, remote power, and power quality needs, as well as cooling and heating needs. DER systems, ranging in size and capacity from a few kilowatts up to 50 MW, can include a number of technologies (e.g., supply-side and demand-side) that can be located at or near the location where the energy is used. Information pertaining to DER technologies, application solutions, successful installations, etc., can be found at the U.S. Department of Energy's DER Internet site [1]. Market forces in the restructured electricity markets are making DER both more common and more active in the distribution systems throughout the US [2]. If DER devices can be made even more competitive with central generation sources, this trend will become unstoppable. In response, energy providers will be forced to both fully acknowledge the trend and plan for accommodating DER [3]. With bureaucratic barriers [4], lack of time/resources, tariffs, etc. still seen in certain regions of the country, changes still need to be made. Given continued technical advances in DER, the time is fast approaching when the industry, nation-wide, must not only accept DER freely but also provide or review in-depth technical assessments of how DER should be integrated into and managed throughout the distribution system. Characterization studies are needed to fully understand how both the utility system and DER devices themselves will respond to all reasonable events (e.g., grid disturbances, faults, rapid growth, diverse and multiple DER systems, large reactive loads). Some of this work has already begun as it relates to operation and control of DER [5] and microturbine performance characterization [6,7]. One of the most urgently needed tools that can provide these types of analyses is a distribution network analysis program in combination with models for various DER. Together, they can be used for (1) analyzing DER placement in distribution networks and (2) helping to ensure that adequate transmission reliability is maintained. Surveys of the market show products that represent a partial match to these needs; specifically, software that has been developed to plan electrical distribution systems and analyze reliability (in a near total absence of DER).
The first part of this study (Sections 2 and 3 of the report) looks at a number of these software programs and provides both summary descriptions and comparisons. The second part of this study (Section 4 of the report) considers the suitability of these analysis tools for DER studies. It considers steady state modeling and assessment work performed by ORNL using one commercially available tool on feeder data provided by a southern utility. Appendix A provides a technical report on the results of this modeling effort.

  12. Phylogenetic analysis reveals a scattered distribution of autumn colours

    PubMed Central

    Archetti, Marco

    2009-01-01

    Background and Aims Leaf colour in autumn is rarely considered informative for taxonomy, but there is now growing interest in the evolution of autumn colours and different hypotheses are debated. Research efforts are hindered by the lack of basic information: the phylogenetic distribution of autumn colours. It is not known when and how autumn colours evolved. Methods Data are reported on the autumn colours of 2368 tree species belonging to 400 genera of the temperate regions of the world, and an analysis is made of their phylogenetic relationships in order to reconstruct the evolutionary origin of red and yellow in autumn leaves. Key Results Red autumn colours are present in at least 290 species (70 genera), and evolved independently at least 25 times. Yellow is present independently from red in at least 378 species (97 genera) and evolved at least 28 times. Conclusions The phylogenetic reconstruction suggests that autumn colours have been acquired and lost many times during evolution. This scattered distribution could be explained by hypotheses involving some kind of coevolutionary interaction or by hypotheses that rely on the need for photoprotection. PMID:19126636

  13. A Distributed Flocking Approach for Information Stream Clustering Analysis

    SciTech Connect

    Cui, Xiaohui; Potok, Thomas E

    2006-01-01

    Intelligence analysts are currently overwhelmed with the amount of information streams generated every day. There is a lack of comprehensive tools that can analyze information streams in real time. Document clustering analysis plays an important role in improving the accuracy of information retrieval. However, most clustering technologies can only be applied to analyzing static document collections, because they normally require a large amount of computational resources and a long time to produce accurate results. It is very difficult to cluster dynamically changing text information streams on an individual computer. Our earlier research resulted in a dynamic reactive flock clustering algorithm which can continually refine the clustering result and quickly react to changes in document contents. This characteristic makes the algorithm suitable for clustering dynamically changing document information, such as text information streams. Because of the decentralized character of this algorithm, a distributed approach is a very natural way to increase its clustering speed. In this paper, we present a distributed multi-agent flocking approach to text information stream clustering and discuss the decentralized architectures and communication schemes for load balancing and status information synchronization in this approach.

  14. Silk Fiber Mechanics from Multiscale Force Distribution Analysis

    PubMed Central

    Cetinkaya, Murat; Xiao, Senbo; Markert, Bernd; Stacklies, Wolfram; Gräter, Frauke

    2011-01-01

    Here we decipher the molecular determinants for the extreme toughness of spider silk fibers. Our bottom-up computational approach incorporates molecular dynamics and finite element simulations. Therefore, the approach allows the analysis of the internal strain distribution and load-carrying motifs in silk fibers on scales of both molecular and continuum mechanics. We thereby dissect the contributions from the nanoscale building blocks, the soft amorphous and the strong crystalline subunits, to silk fiber mechanics. We identify the amorphous subunits not only to give rise to high elasticity, but to also ensure efficient stress homogenization through the friction between entangled chains, which also allows the crystals to withstand stresses as high as 2 GPa in the context of the amorphous matrix. We show that the maximal toughness of silk is achieved at 10–40% crystallinity depending on the distribution of crystals in the fiber. We also determined a serial arrangement of the crystalline and amorphous subunits in lamellae to outperform a random or a parallel arrangement, putting forward what we believe to be a new structural model for silk and other semicrystalline materials. The multiscale approach, not requiring any empirical parameters, is applicable to other partially ordered polymeric systems. Hence, it is an efficient tool for the design of artificial silk fibers. PMID:21354403

  15. Size distribution measurement for densely binding bubbles via image analysis

    NASA Astrophysics Data System (ADS)

    Ma, Ye; Yan, Guanxi; Scheuermann, Alexander; Bringemeier, Detlef; Kong, Xiang-Zhao; Li, Ling

    2014-12-01

    For densely binding bubble clusters, conventional image analysis methods are unable to provide an accurate measurement of the bubble size distribution because of the difficulties with clearly identifying the outline edges of individual bubbles. In contrast, the bright centroids of individual bubbles can be distinctly defined and thus accurately measured. By taking advantage of this, we developed a new measurement method based on a linear relationship between the bubble radius and the radius of its bright centroid, so as to avoid the need to identify the bubble outline edges. The linear relationship and method were thoroughly tested for 2D bubble clusters in a highly binding condition and found to be effective and robust for measuring the bubble sizes.
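
    A hedged sketch of the calibration step implied by such a linear relationship: fit r_bubble = a * r_centroid + b on bubbles whose outlines are measurable, then apply the fit where only centroids are visible. All numbers below are invented for illustration.

        import numpy as np

        r_centroid = np.array([2.1, 3.0, 4.2, 5.1, 6.3])   # px, illustrative values
        r_bubble = np.array([5.0, 7.1, 9.8, 12.0, 14.9])   # px, illustrative values
        a, b = np.polyfit(r_centroid, r_bubble, 1)          # least-squares linear fit

        def bubble_radius(rc):
            # Estimate a bubble radius from its bright-centroid radius.
            return a * rc + b

        print(bubble_radius(3.7))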

  16. Analysis of the Spectral Energy Distributions of Fermi bright blazars

    SciTech Connect

    Gasparrini, D.; Cutini, S.; Colafrancesco, S.; Giommi, P.; Raino, S.

    2010-03-26

    Blazars are a small fraction of all extragalactic sources but, unlike other objects, they are strong emitters across the entire electromagnetic spectrum. In this study we have conducted a detailed investigation of the broad-band spectral properties of the gamma-ray selected blazars of the Fermi LAT Bright AGN Sample (LBAS). By combining the accurately estimated Fermi gamma-ray spectra with Swift, radio, NIR-Optical and hard-X/gamma-ray data, collected within three months of the LBAS data-taking period, we were able to assemble high-quality and quasi-simultaneous Spectral Energy Distributions (SED) for 48 LBAS blazars. Here we show the procedure for the multi-wavelength analysis.

  17. Relationship between type traits and longevity in Canadian Jerseys and Ayrshires using a Weibull proportional hazards model.

    PubMed

    Sewalem, A; Kistemaker, G J; Van Doormaal, B J

    2005-04-01

    The aim of this study was to use a Weibull proportional hazards model to explore the impact of type traits on the functional survival of Canadian Jersey and Ayrshire cows. The data set consisted of 49,791 registered Jersey cows from 900 herds calving from 1985 to 2003. The corresponding figures for Ayrshire were 77,109 cows and 921 herds. Functional survival was defined as the number of days from first calving to culling, death, or censoring. Type information consisted of phenotypic type scores for 8 composite traits and 19 linear descriptive traits. The statistical model included the effects of stage of lactation; season of production; annual change in herd size; type of milk recording supervision; age at first calving; effects of milk, fat, and protein yields calculated as within herd-year-parity deviations; herd-year-season of calving; each type trait; and the animal's sire. Analysis was done one trait at a time for each of the 27 type traits in each breed. The relative culling risk was calculated for animals in each class after accounting for the previously mentioned effects. Among the composite type traits, the greatest contribution to the likelihood function came from final score, followed by mammary system, in the Jersey breed, while in the Ayrshire breed feet and legs was the second most important trait after final score. Cows classified as Poor for final score in both breeds were >5 times more likely to be culled than cows classified as Good Plus. In both breeds, cows classified as Poor for feet and legs were 5 times more likely to be culled than cows classified as Excellent, and cows classified as Excellent for mammary system were >9 times more likely to survive than cows classified as Poor. PMID:15778325
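
    To illustrate how such relative risks arise, here is a minimal sketch of a Weibull proportional hazards model, h(t|x) = lam * rho * t^(rho-1) * exp(x.beta). The covariate coding and effect size are invented for illustration, not estimates from this study; note that the relative risk between two classes does not depend on time.

        import numpy as np

        def weibull_ph_hazard(t, rho, lam, x, beta):
            """Weibull PH hazard: h(t|x) = lam * rho * t**(rho-1) * exp(x @ beta)."""
            return lam * rho * t ** (rho - 1) * np.exp(np.dot(x, beta))

        beta = np.array([0.8])                     # assumed effect of a poor type score
        x_poor, x_good = np.array([1.0]), np.array([0.0])
        print(weibull_ph_hazard(100.0, rho=1.2, lam=1e-4, x=x_poor, beta=beta))
        # Relative culling risk between the two classes is time-independent:
        print(np.exp((x_poor - x_good) @ beta))    # about 2.2x in this toy setting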

  18. Analysis of scale-invariant slab avalanche size distributions

    NASA Astrophysics Data System (ADS)

    Faillettaz, J.; Louchet, F.; Grasso, J.-R.; Daudon, D.

    2003-04-01

    Scale invariance of snow avalanche sizes was reported for the first time in 2001 by Louchet et al. at the EGS conference, using both acoustic emission duration and the surface of the crown step left at the top of the starting zone, where the former parameter characterises the size of the total avalanche flow, and the latter that of the starting zone. The present paper focuses on parameters of the second type, which are more directly related to precise release mechanisms, viz. the crown crack length L, the crown crack or slab depth H, the crown step surface HxL, the volume HxL^2 of the snow involved in the starting zone, and LxH^2, which is a measure of the stress concentration at the basal crack tip at failure. The analysis is performed on two data sets, from La Grande Plagne (GP) and Tignes (T) ski resorts. For both catalogs, cumulative distributions of L, H, HxL, HxL^2 and LxH^2 are shown to be roughly linear in a log-log plot, i.e. consistent with so-called scale-invariant (or power law) distributions for both triggered and natural avalanches. Plateaus are observed at small sizes, and roll-offs at large sizes. The power law exponents for each of these quantities are roughly independent of the ski resort (GP or T) they come from. In contrast, exponents for natural events are significantly smaller than those for artificial ones. We analyse the possible reasons for the scale invariance of these quantities, for the possible "universality" of the exponents corresponding to a given triggering mode, and for the difference in exponents between triggered and natural events. The physical meaning of the observed roll-offs and plateaus is also discussed. The power law distributions analysed here provide the occurrence probability of an avalanche of a given (starting) volume in a given time period on a given area. A possible use of this type of distribution for snow avalanche hazard assessment is contemplated, as it is for earthquakes or rockfalls.
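
    A quick way to check such scale invariance, sketched below on synthetic Pareto data rather than avalanche catalogs, is to fit the slope of the log-log empirical survival function; maximum likelihood estimators are preferable for publication-grade exponents.

        import numpy as np

        def powerlaw_exponent(sizes):
            """Slope of the log-log empirical survival function: a quick power-law
            exponent estimate (maximum likelihood is preferable in real studies)."""
            s = np.sort(np.asarray(sizes))
            survival = 1.0 - np.arange(len(s)) / len(s)   # P(X >= s)
            slope, _ = np.polyfit(np.log(s), np.log(survival), 1)
            return -slope

        sizes = np.random.default_rng(0).pareto(1.5, 2000) + 1.0  # synthetic sizes
        print(powerlaw_exponent(sizes))   # should come out near 1.5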

  19. An Open Architecture for Distributed Malware Collection and Analysis

    NASA Astrophysics Data System (ADS)

    Cavalca, Davide; Goldoni, Emanuele

    Honeynets have become an important tool for researchers and network operators. However, the lack of a unified honeynet data model has impeded their effectiveness, resulting in multiple unrelated data sources, each with its own proprietary access method and format. Moreover, the deployment and management of a honeynet is a time-consuming activity and the interpretation of collected data is far from trivial. HIVE (Honeynet Infrastructure in Virtualized Environment) is a novel highly scalable automated data collection and analysis architecture we designed. Our infrastructure is based on top of proven FLOSS (Free, Libre and Open Source) solutions, which have been extended and integrated with new tools we developed. We use virtualization to ease honeypot management and deployment, combining both high-interaction and low-interaction sensors in a common infrastructure. We also address the need for rapid comprehension and detailed data analysis by harnessing the power of a relational database system, which provides centralized storage and access to the collected data while ensuring its constant integrity. This chapter presents our malware data collection architecture, offering some insight in the structure and benefits of a distributed virtualized honeynet and its development. Finally, we present some techniques for the active monitoring of centralized botnets we integrated in HIVE, which allow us to track the menaces evolution and timely deploy effective countermeasures.

  20. RT distribution analysis of category congruence effects with masked primes.

    PubMed

    Kinoshita, Sachiko; Hunt, Louise

    2008-10-01

    In the number magnitude decision task ("Is the number bigger/smaller than 5?"), response to a target (e.g., 3) is faster following a masked prime congruent with the target (e.g., 1) than it is following an incongruent prime (e.g., 9). This category congruence effect has been reported to be "interference-dominant" relative to a neutral prime (e.g., the # sign, the number 5) on the basis of the analysis of mean response time (RT). Using RT distribution analysis as well as mean RTs, we identified two bases for this pattern. One relates to the choice of neutral baseline: The # prime, unlike the digit prime, does not factor in the cost of perceptual transition between the prime and target, and therefore underestimates facilitation and overestimates the interference effect. The second basis of the interference-dominant pattern is a disproportionate slowdown of congruent trials in the slow RT bins. Furthermore, this slowdown is greater for primes that had been used as targets than it is with "novel" primes that have not been responded to as targets. We interpret the results as suggesting that the category congruence effect has two components with different time courses-one based on stimulus-response mapping, and the other on semantic categorization. PMID:18927046

  1. Earthquake interevent time distribution in Kachchh, Northwestern India

    NASA Astrophysics Data System (ADS)

    Pasari, Sumanta; Dikshit, Onkar

    2015-08-01

    Statistical properties of earthquake interevent times have long been the topic of interest to seismologists and earthquake professionals, mainly for hazard-related concerns. In this paper, we present a comprehensive study on the temporal statistics of earthquake interoccurrence times of the seismically active Kachchh peninsula (western India) from thirteen probability distributions. Those distributions are exponential, gamma, lognormal, Weibull, Levy, Maxwell, Pareto, Rayleigh, inverse Gaussian (Brownian passage time), inverse Weibull (Frechet), exponentiated exponential, exponentiated Rayleigh (Burr type X), and exponentiated Weibull distributions. Statistical inferences of the scale and shape parameters of these distributions are discussed from the maximum likelihood estimations and the Fisher information matrices. The latter are used as a surrogate tool to appraise the parametric uncertainty in the estimation process. The results were found on the basis of two goodness-of-fit tests: the maximum likelihood criterion with its modification to Akaike information criterion (AIC) and the Kolmogorov-Smirnov (K-S) minimum distance criterion. These results reveal that (i) the exponential model provides the best fit, (ii) the gamma, lognormal, Weibull, inverse Gaussian, exponentiated exponential, exponentiated Rayleigh, and exponentiated Weibull models provide an intermediate fit, and (iii) the rest, namely Levy, Maxwell, Pareto, Rayleigh, and inverse Weibull, fit poorly to the earthquake catalog of Kachchh and its adjacent regions. This study also analyzes the present-day seismicity in terms of the estimated recurrence interval and conditional probability curves (hazard curves). The estimated cumulative probability and the conditional probability of a magnitude 5.0 or higher event reach 0.8-0.9 by 2027-2036 and 2034-2043, respectively. These values have significant implications in a variety of practical applications including earthquake insurance, seismic zonation, location identification of lifeline structures, and revision of building codes.
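
    The model-selection step can be sketched with scipy.stats: fit each candidate by maximum likelihood and rank by AIC. The data below are synthetic interevent times, not the Kachchh catalog, and only four of the thirteen candidate families are shown.

        import numpy as np
        from scipy import stats

        candidates = {
            "exponential": stats.expon,
            "gamma": stats.gamma,
            "Weibull": stats.weibull_min,
            "lognormal": stats.lognorm,
        }

        # Synthetic interevent times (days); a real study would use catalog data.
        data = np.random.default_rng(0).exponential(scale=30.0, size=500)

        for name, dist in candidates.items():
            params = dist.fit(data, floc=0)     # ML estimates, location fixed at zero
            k = len(params) - 1                 # free parameters (loc is not free)
            aic = 2 * k - 2 * dist.logpdf(data, *params).sum()
            print(f"{name:12s} AIC = {aic:.1f}")   # lower AIC indicates a better fit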

  2. Comparative analysis of aerosols elemental distribution in some Romanian regions

    NASA Astrophysics Data System (ADS)

    Amemiya, Susumu; Masuda, Toshio; Popa-Simil, Liviu; Mateescu, Liviu

    1996-04-01

    The study's main aim is obtaining aerosols particulate elemental distribution and mapping it for some Romanian regions, in order to obtain preliminary information regarding the concentrations of aerosol particles and networking strategy versus local conditions. For this we used the mobile sampling strategy, but taking care on all local specific conditions and weather. In the summer of 1993, in July we took about 8 samples on a rather large territory of SE Romania which were analysed and mapped. The regions which showed an interesting behaviour or doubts such as Bucharest and Dobrogea were zoomed in near the same period of 1994, for comparing the new details with the global aspect previously obtained. An attempt was made to infer the minimum necessary number of stations in a future monitoring network. A mobile sampler was used, having tow polycarbonate filter posts of 8 and 0.4 μm. PIXE elemental analysis was performed on a 2.5 MV Van de Graaff accelerator, by using a proton beam. More than 15 elements were measured. Suggestive 2D and 3D representations were drawn, as well as histogram charts for the concentrations' distribution in the specific regions at the specified times. In spite of the poor samples from the qualitative point of view the experiment surprised us by the good coincidence (good agreement) with realities in terrain known by other means long time ago, and highlighted the power of PIXE methods in terms of money and time. Conclusions over the link between industry, traffic, vegetation, wether, surface waters, soil composition, power plant exhaust and so on, on the one hand, and surface concentration distribution, on the other, were drawn. But the method's weak points were also highlighted; these are weather dependencies (especially air masses movement and precipitation), local relief, microclimate and vegetation, and of course localisation of the sampling point versus the pollution sources and their regime. The paper contains a synthesis of the whole of the maps and graphs we made, intended in its turn to demonstrate the necessity of a national integrated network for monitoring aerosols.

  3. Global sensitivity analysis in wind energy assessment

    NASA Astrophysics Data System (ADS)

    Tsvetkova, O.; Ouarda, T. B.

    2012-12-01

    Wind energy is one of the most promising renewable energy sources. Nevertheless, it is not yet a common source of energy, although there is enough wind potential to supply the world's energy demand. One of the most prominent obstacles to employing wind energy is the uncertainty associated with wind energy assessment. Global sensitivity analysis (SA) studies how the variation of input parameters in an abstract model affects the variation of the variable of interest, or output variable. It also provides ways to calculate explicit measures of importance of input variables (first-order and total-effect sensitivity indices) with regard to their influence on the variation of the output variable. Two methods of determining the above-mentioned indices were applied and compared: the brute force method and the best practice estimation procedure. In this study a methodology for conducting global SA of wind energy assessment at a planning stage is proposed. Three sampling strategies, which are part of the SA procedure, were compared: sampling based on Sobol' sequences (SBSS), Latin hypercube sampling (LHS) and pseudo-random sampling (PRS). A case study of Masdar City, a showcase of sustainable living in the UAE, is used to exemplify application of the proposed methodology. Sources of uncertainty in wind energy assessment are very diverse. In the case study the following were identified as uncertain input parameters: the Weibull shape parameter, the Weibull scale parameter, availability of a wind turbine, lifetime of a turbine, air density, electrical losses, blade losses, and ineffective time losses. Ineffective time losses are defined as losses during the time when the actual wind speed is lower than the cut-in speed or higher than the cut-out speed. The output variable in the case study is the lifetime energy production. The most influential factors for lifetime energy production are identified by ranking the total-effect sensitivity indices. The results of the present research show that the brute force method is best for wind assessment purposes, and that SBSS outperforms the other sampling strategies in the majority of cases. The results indicate that the Weibull scale parameter, turbine lifetime and Weibull shape parameter are the three most influential variables in the case study setting. The following conclusions can be drawn from these results: 1) SBSS should be recommended for use in Monte Carlo experiments, 2) the brute force method should be recommended for conducting sensitivity analysis in wind resource assessment, and 3) little variation in the Weibull scale causes significant variation in energy production. The presence of the two distribution parameters among the top three influential variables (the Weibull shape and scale) emphasizes the importance of (a) choosing an accurate distribution to model the wind regime at a site and (b) accurately estimating the probability distribution parameters. This can be labeled the most important conclusion of this research because it opens a field for further research which the authors believe could change the wind energy field tremendously.
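
    To make conclusion 3) concrete, the hedged Monte Carlo sketch below propagates uncertainty in the Weibull shape and scale (and availability) through a toy cubic power curve to lifetime energy production. Every distribution, turbine number, and loss assumption here is invented for illustration, not taken from the Masdar City study.

        import numpy as np

        rng = np.random.default_rng(0)
        n_sims = 2000                                   # Monte Carlo runs
        hours = 8760 * 20                               # assumed 20-year lifetime

        # Uncertain inputs (all distributions are illustrative assumptions):
        shape = rng.normal(2.0, 0.1, n_sims)            # Weibull shape k
        scale = rng.normal(8.0, 0.4, n_sims)            # Weibull scale c, m/s
        avail = rng.uniform(0.95, 0.99, n_sims)         # turbine availability

        def mean_power_kw(k, c):
            """Mean power from a toy curve: cut-in 3, rated 12, cut-out 25 m/s."""
            v = c * rng.weibull(k, 5000)                # sampled wind speeds
            p = np.where((v < 3) | (v > 25), 0.0,
                         np.where(v > 12, 2000.0, 2000.0 * (v / 12) ** 3))
            return p.mean()

        power = np.array([mean_power_kw(k, c) for k, c in zip(shape, scale)])
        lifetime_mwh = power * avail * hours / 1000.0
        print(lifetime_mwh.mean(), lifetime_mwh.std()) # spread reflects input uncertainty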

  4. Identification and Analysis of Issues in Distributive Education.

    ERIC Educational Resources Information Center

    Weatherford, John Wilson

    The purpose of the study was to analyze the opinions of distributive education leaders about issues in distributive education and to ascertain their opinions on the importance of these issues in determining effective operating procedures in distributive education. The 30 leaders were determined on the basis of the number of times their names were…

  5. Distribution and Phylogenetic Analysis of Family 19 Chitinases in Actinobacteria

    PubMed Central

    Kawase, Tomokazu; Saito, Akihiro; Sato, Toshiya; Kanai, Ryo; Fujii, Takeshi; Nikaidou, Naoki; Miyashita, Kiyotaka; Watanabe, Takeshi

    2004-01-01

    In organisms other than higher plants, family 19 chitinase was first discovered in Streptomyces griseus HUT6037, and later, the general occurrence of this enzyme in Streptomyces species was demonstrated. In the present study, the distribution of family 19 chitinases in the class Actinobacteria and the phylogenetic relationship of Actinobacteria family 19 chitinases with family 19 chitinases of other organisms were investigated. Forty-nine strains were chosen to cover almost all the suborders of the class Actinobacteria, and chitinase production was examined. Of the 49 strains, 22 formed cleared zones on agar plates containing colloidal chitin and thus appeared to produce chitinases. These 22 chitinase-positive strains were subjected to Southern hybridization analysis by using a labeled DNA fragment corresponding to the catalytic domain of ChiC, and the presence of genes similar to chiC of S. griseus HUT6037 in at least 13 strains was suggested by the results. PCR amplification and sequencing of the DNA fragments corresponding to the major part of the catalytic domains of the family 19 chitinase genes confirmed the presence of family 19 chitinase genes in these 13 strains. The strains possessing family 19 chitinase genes belong to 6 of the 10 suborders in the order Actinomycetales, which account for the greatest part of the Actinobacteria. Phylogenetic analysis suggested that there is a close evolutionary relationship between family 19 chitinases found in Actinobacteria and plant class IV chitinases. The general occurrence of family 19 chitinase genes in Streptomycineae and the high sequence similarity among the genes found in Actinobacteria suggest that the family 19 chitinase gene was first acquired by an ancestor of the Streptomycineae and spread among the Actinobacteria through horizontal gene transfer. PMID:14766598

  6. Rod internal pressure quantification and distribution analysis using Frapcon

    SciTech Connect

    Bratton, Ryan N; Jessee, Matthew Anderson; Wieselquist, William A

    2015-09-30

    This report documents work performed supporting the Department of Energy (DOE) Office of Nuclear Energy (NE) Fuel Cycle Technologies Used Fuel Disposition Campaign (UFDC) under work breakdown structure element 1.02.08.10, ST Analysis. In particular, this report fulfills the M4 milestone M4FT-15OR0810036, Quantify effects of power uncertainty on fuel assembly characteristics, within work package FT-15OR081003 ST Analysis-ORNL. This research was also supported by the Consortium for Advanced Simulation of Light Water Reactors (http://www.casl.gov), an Energy Innovation Hub (http://www.energy.gov/hubs) for Modeling and Simulation of Nuclear Reactors under U.S. Department of Energy Contract No. DE-AC05-00OR22725. The discharge rod internal pressure (RIP) and cladding hoop stress (CHS) distributions are quantified for Watts Bar Nuclear Unit 1 (WBN1) fuel rods by modeling core cycle design data, operation data (including modeling significant trips and downpowers), and as-built fuel enrichments and densities of each fuel rod in FRAPCON-3.5. A methodology is developed which tracks inter-cycle assembly movements and assembly batch fabrication information to build individual FRAPCON inputs for each evaluated WBN1 fuel rod. An alternate model for the amount of helium released from the zirconium diboride (ZrB2) integral fuel burnable absorber (IFBA) layer is derived and applied to FRAPCON output data to quantify the RIP and CHS for these types of fuel rods. SCALE/Polaris is used to quantify fuel rod-specific spectral quantities and the amount of gaseous fission products produced in the fuel for use in FRAPCON inputs. Fuel rods with ZrB2 IFBA layers (i.e., IFBA rods) are determined to have RIP predictions that are elevated when compared to fuel rods without IFBA layers (i.e., standard rods) despite the fact that IFBA rods often have reduced fill pressures and annular fuel pellets. The primary contributors to elevated RIP predictions at burnups less than and greater than 30 GWd/MTU are determined to be the total fuel rod void volume and the amount of released fission gas in the fuel rod, respectively. Cumulative distribution functions (CDFs) are prepared from the distribution of RIP and CHS predictions for all standard and IFBA rods. The provided CDFs allow for the determination of the portion of WBN1 fuel rods that exceed a specified RIP or CHS limit. Results are separated into IFBA and standard rods so that the two groups may be analyzed individually. FRAPCON results are provided in sufficient detail to enable the recalculation of the RIP while considering any desired plenum gas temperature, total void volume, or total amount of gas present in the void volume. A method to predict the CHS from a determined or assumed RIP is also proposed, which is based on the approximately linear relationship between the CHS and the RIP. Finally, improvements to the computational methodology of FRAPCON are proposed.
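
    The CDF-based screening described above can be sketched generically: sort the rod-wise predictions and read off the fraction above a limit. The values below are random placeholders, not FRAPCON output.

        import numpy as np

        def exceedance_fraction(predictions, limit):
            """Fraction of fuel rods whose predicted value exceeds a given limit,
            read off the empirical CDF of the predictions."""
            predictions = np.sort(np.asarray(predictions))
            cdf_at_limit = np.searchsorted(predictions, limit, side="right") / len(predictions)
            return 1.0 - cdf_at_limit

        rip_mpa = np.random.default_rng(0).normal(10.0, 2.0, 5000)  # placeholder values
        print(exceedance_fraction(rip_mpa, limit=14.0))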

  7. Analysis of gamma-ray burst duration distribution using mixtures of skewed distributions

    NASA Astrophysics Data System (ADS)

    Tarnopolski, M.

    2016-02-01

    Two classes of GRBs have been confidently identified thus far and are ascribed to different physical scenarios - NS-NS or NS-BH mergers, and collapse of massive stars, for short and long GRBs, respectively. A third class, intermediate in duration, was suggested to be present in previous catalogs, such as BATSE and Swift, based on statistical tests regarding a mixture of two or three log-normal distributions of T90. However, this might possibly not be an adequate model. This paper investigates whether the distributions of log T90 from BATSE, Swift, and Fermi are described better by a mixture of skewed distributions rather than standard Gaussians. Mixtures of standard normal, skew-normal, sinh-arcsinh and alpha-skew-normal distributions are fitted using a maximum likelihood method. The preferred model is chosen based on the Akaike information criterion. It is found that mixtures of two skew-normal or two sinh-arcsinh distributions are more likely to describe the observed duration distribution of Fermi than a mixture of three standard Gaussians, and that mixtures of two sinh-arcsinh or two skew-normal distributions are models competing with the conventional three-Gaussian one in the case of BATSE and Swift. Based on statistical reasoning, it is shown that other phenomenological models may describe the observed Fermi, BATSE, and Swift duration distributions at least as well as a mixture of standard normal distributions, and the existence of a third (intermediate) class of GRBs in Fermi data is rejected.
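
    A hedged sketch of the fitting procedure for one of these models, a two-component skew-normal mixture, using scipy; the data are synthetic stand-ins for log T90 and the starting values and bounds are guesses, not the paper's settings.

        import numpy as np
        from scipy import stats, optimize

        def neg_loglik(theta, x):
            """Negative log-likelihood of a two-component skew-normal mixture."""
            w, a1, m1, s1, a2, m2, s2 = theta
            pdf = (w * stats.skewnorm.pdf(x, a1, loc=m1, scale=s1)
                   + (1 - w) * stats.skewnorm.pdf(x, a2, loc=m2, scale=s2))
            return -np.sum(np.log(pdf + 1e-300))   # guard against log(0)

        rng = np.random.default_rng(0)             # synthetic stand-in for log T90
        x = np.concatenate([rng.normal(-0.3, 0.5, 400), rng.normal(1.4, 0.4, 800)])

        theta0 = [0.3, 0.0, -0.3, 0.5, 0.0, 1.4, 0.4]
        bounds = [(0.01, 0.99), (-10, 10), (-3, 3), (0.05, 3),
                  (-10, 10), (-3, 3), (0.05, 3)]
        fit = optimize.minimize(neg_loglik, theta0, args=(x,), bounds=bounds)
        aic = 2 * len(theta0) + 2 * fit.fun        # compare against rival mixtures
        print(fit.x, aic)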

  8. Fourier analysis of polar cap electric field and current distributions

    NASA Technical Reports Server (NTRS)

    Barbosa, D. D.

    1984-01-01

    A theoretical study of high-latitude electric fields and currents, using analytic Fourier analysis methods, is conducted. A two-dimensional planar model of the ionosphere with an enhanced conductivity auroral belt and field-aligned currents at the edges is employed. Two separate topics are treated. A field-aligned current element near the cusp region of the polar cap is included to investigate the modifications to the convection pattern by the east-west component of the interplanetary magnetic field. It is shown that a sizable one-cell structure is induced near the cusp which diverts equipotential contours to the dawnside or duskside, depending on the sign of the cusp current. This produces characteristic dawn-dusk asymmetries to the electric field that have been previously observed over the polar cap. The second topic is concerned with the electric field configuration obtained in the limit of perfect shielding, where the field is totally excluded equatorward of the auroral oval. When realistic field-aligned current distributions are used, the result is to produce severely distorted, crescent-shaped equipotential contours over the cap. Exact, analytic formulae applicable to this case are also provided.

  9. Clustering Analysis of Seismicity and Aftershock Identification

    SciTech Connect

    Zaliapin, Ilya; Gabrielov, Andrei; Keilis-Borok, Vladimir; Wong, Henry

    2008-07-04

    We introduce a statistical methodology for clustering analysis of seismicity in the time-space-energy domain and use it to establish the existence of two statistically distinct populations of earthquakes: clustered and nonclustered. This result can be used, in particular, for nonparametric aftershock identification. The proposed approach expands the analysis of Baiesi and Paczuski [Phys. Rev. E 69, 066106 (2004)] based on the space-time-magnitude nearest-neighbor distance η between earthquakes. We show that for a homogeneous Poisson marked point field with exponential marks, the distance η has the Weibull distribution, which bridges our results with classical correlation analysis for point fields. The joint 2D distribution of spatial and temporal components of η is used to identify the clustered part of a point field. The proposed technique is applied to several seismicity models and to the observed seismicity of southern California.
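
    For reference, the Baiesi-Paczuski proximity is commonly written eta_ij = t_ij * r_ij^d_f * 10^(-b*m_i) for an earlier event i and a later event j. The sketch below computes each event's nearest-neighbor distance on a synthetic catalog; the b-value and fractal dimension d_f are assumed, and the catalog is random rather than Californian data.

        import numpy as np

        def nn_distances(t, x, y, m, b=1.0, d_f=1.6):
            """Nearest-neighbor distance eta_j = min over i < j of
            t_ij * r_ij**d_f * 10**(-b*m_i) (Baiesi-Paczuski form)."""
            eta = np.full(len(t), np.inf)            # first event has no parent
            for j in range(1, len(t)):
                dt = t[j] - t[:j]                    # interevent times
                r = np.hypot(x[j] - x[:j], y[j] - y[:j])   # epicentral distances
                eta[j] = np.min(dt * r ** d_f * 10 ** (-b * m[:j]))
            return eta

        rng = np.random.default_rng(0)
        n = 300                                      # synthetic catalog
        t = np.sort(rng.uniform(0, 1000, n))
        x, y = rng.uniform(0, 100, (2, n))
        m = rng.exponential(1 / np.log(10), n) + 2.0 # Gutenberg-Richter-like magnitudes
        print(nn_distances(t, x, y, m)[:5])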

  10. Frequency distribution histograms for the rapid analysis of data

    NASA Technical Reports Server (NTRS)

    Burke, P. V.; Bullen, B. L.; Poff, K. L.

    1988-01-01

    The mean and standard error are good representations for the response of a population to an experimental parameter and are frequently used for this purpose. Frequency distribution histograms show, in addition, responses of individuals in the population. Both the statistics and a visual display of the distribution of the responses can be obtained easily using a microcomputer and available programs. The type of distribution shown by the histogram may suggest different mechanisms to be tested.

  11. Frequency distribution histograms for the rapid analysis of data.

    PubMed

    Burke, P V; Bullen, B L; Poff, K L

    1988-01-01

    The mean and standard error are good representations for the response of a population to an experimental parameter and are frequently used for this purpose. Frequency distribution histograms show, in addition, responses of individuals in the population. Both the statistics and a visual display of the distribution of the responses can be obtained easily using a microcomputer and available programs. The type of distribution shown by the histogram may suggest different mechanisms to be tested. PMID:11537875

  12. Analysis of gamma-ray burst duration distribution using mixtures of skewed distributions

    NASA Astrophysics Data System (ADS)

    Tarnopolski, M.

    2016-05-01

    Two classes of gamma-ray bursts (GRBs) have been confidently identified thus far and are ascribed to different physical scenarios - neutron star-neutron star or neutron star-black hole mergers, and collapse of massive stars, for short and long GRBs, respectively. A third class, intermediate in duration, was suggested to be present in previous catalogues, such as the Burst And Transient Source Experiment (BATSE) and Swift, based on statistical tests regarding a mixture of two or three lognormal distributions of T90. However, this might possibly not be an adequate model. This paper investigates whether the distributions of log T90 from BATSE, Swift, and Fermi are described better by a mixture of skewed distributions rather than standard Gaussians. Mixtures of standard normal, skew-normal, sinh-arcsinh and alpha-skew-normal distributions are fitted using a maximum likelihood method. The preferred model is chosen based on the Akaike information criterion. It is found that mixtures of two skew-normal or two sinh-arcsinh distributions are more likely to describe the observed duration distribution of Fermi than a mixture of three standard Gaussians, and that mixtures of two sinh-arcsinh or two skew-normal distributions are models competing with the conventional three-Gaussian one in the case of BATSE and Swift. Based on statistical reasoning, it is shown that other phenomenological models may describe the observed Fermi, BATSE, and Swift duration distributions at least as well as a mixture of standard normal distributions, and the existence of a third (intermediate) class of GRBs in Fermi data is rejected.

  13. Analysis Model for Domestic Hot Water Distribution Systems: Preprint

    SciTech Connect

    Maguire, J.; Krarti, M.; Fang, X.

    2011-11-01

    A thermal model was developed to estimate the energy losses from prototypical domestic hot water (DHW) distribution systems for homes. The developed model, using the TRNSYS simulation software, allows researchers and designers to better evaluate the performance of hot water distribution systems in homes. Modeling results were compared with past experimental study results and showed good agreement.

  14. Bayesian Analysis for Binomial Models with Generalized Beta Prior Distributions.

    ERIC Educational Resources Information Center

    Chen, James J.; Novick, Melvin R.

    1984-01-01

    The Libby-Novick class of three-parameter generalized beta distributions is shown to provide a rich class of prior distributions for the binomial model that removes some restrictions of the standard beta class. A numerical example indicates the desirability of using these wider classes of densities for binomial models. (Author/BW)
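
    A minimal sketch of conjugate updating for the binomial model with the standard two-parameter beta prior, the special case that the Libby-Novick generalized class extends; the three-parameter generalized prior itself has no such closed-form shortcut, and the hyperparameters and data below are illustrative.

```python
from scipy import stats

a_prior, b_prior = 2.0, 2.0      # illustrative prior hyperparameters
successes, trials = 37, 50       # hypothetical data

# standard beta prior is conjugate: posterior is Beta(a + s, b + n - s)
posterior = stats.beta(a_prior + successes, b_prior + trials - successes)
print(f"posterior mean = {posterior.mean():.3f}")
print(f"95% credible interval = {posterior.interval(0.95)}")
```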

  15. Modeling AIDS survival after initiation of antiretroviral treatment by Weibull models with changepoints

    PubMed Central

    2009-01-01

    Background Mortality of HIV-infected patients initiating antiretroviral therapy in the developing world is very high immediately after the start of ART and drops sharply thereafter. It is necessary to use models of survival time that reflect this change. Methods In this endeavor, parametric models with changepoints, such as Weibull models, can be useful in order to explicitly model the underlying failure process, even in the case where abrupt changes in the mortality rate are present. Estimation of the temporal location of possible mortality changepoints has important implications for the effective management of these patients. We briefly describe these models and apply them to the case of estimating survival among HIV-infected patients who are initiating antiretroviral therapy in a care and treatment programme in sub-Saharan Africa. Results As a first reported data-driven estimate of the existence and location of early mortality changepoints after antiretroviral therapy initiation, we show that there is an early change in risk of death at three months, followed by an intermediate risk period lasting up to 10 months after therapy. Conclusion By explicitly modelling the underlying abrupt changes in mortality risk after initiation of antiretroviral therapy we are able to estimate their number and location in a rigorous, data-driven manner. The existence of a high early risk of death after initiation of antiretroviral therapy and the determination of its duration has direct implications for the optimal management of patients initiating therapy in this setting. PMID:19558677
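
    A minimal sketch of changepoint survival estimation in the spirit of the abstract, using the simpler piecewise-constant-hazard (piecewise exponential) special case rather than the paper's full Weibull formulation; the changepoints at 3 and 10 months follow the abstract, while the follow-up data are hypothetical.

```python
import numpy as np

cuts = np.array([0.0, 3.0, 10.0, np.inf])    # changepoints at 3 and 10 months
t = np.array([0.5, 2.0, 4.0, 7.0, 12.0, 15.0, 18.0, 24.0])       # follow-up (months)
died = np.array([1, 1, 1, 0, 1, 0, 0, 0], dtype=bool)            # False = censored

for lo, hi in zip(cuts[:-1], cuts[1:]):
    exposure = np.clip(t, lo, hi) - lo       # person-time at risk in this interval
    deaths = np.sum(died & (t >= lo) & (t < hi))
    if exposure.sum() > 0:
        # MLE of a constant hazard on the interval: deaths / person-time
        print(f"[{lo}, {hi}) months: hazard = {deaths / exposure.sum():.4f} per month")
```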

  16. Bivariate extreme value distributions

    NASA Technical Reports Server (NTRS)

    Elshamy, M.

    1992-01-01

    In certain engineering applications, such as those occurring in the analyses of ascent structural loads for the Space Transportation System (STS), some of the load variables have a lower bound of zero. Thus, the need for practical models of bivariate extreme value probability distribution functions with lower limits was identified. We discuss the Gumbel models and present practical forms of bivariate extreme probability distributions of Weibull and Frechet types with two parameters. Bivariate extreme value probability distribution functions can be expressed in terms of the marginal extremal distributions and a 'dependence' function subject to certain analytical conditions. Properties of such bivariate extreme distributions, sums and differences of paired extremals, as well as the corresponding forms of conditional distributions, are discussed. Practical estimation techniques are also given.

  17. Analysis of DNS Cache Effects on Query Distribution

    PubMed Central

    2013-01-01

    This paper studies the DNS cache effects that occur on query distribution at the CN top-level domain (TLD) server. We first filter out malformed DNS queries, classified into six categories, to remove log data pollution. A model for DNS resolution, more specifically DNS caching, is presented. We demonstrate the presence and magnitude of DNS cache effects and the cache sharing effects on the request distribution through an analytic model and simulation. CN TLD log data results are provided and analyzed based on the cache model. The approximate TTL distribution for domain names is inferred quantitatively. PMID:24396313
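
    A minimal simulation sketch (assuming Poisson query arrivals and a single shared resolver cache) of the caching effect modeled above: a cached record absorbs all queries that arrive before its TTL expires, thinning the stream that reaches the authoritative server.

```python
import numpy as np

rng = np.random.default_rng(7)
ttl, rate, horizon = 300.0, 0.5, 86400.0      # TTL (s), queries/s, one day

t, cache_expiry, upstream = 0.0, -1.0, 0
while True:
    t += rng.exponential(1.0 / rate)          # next client query (Poisson arrivals)
    if t > horizon:
        break
    if t >= cache_expiry:                     # cache miss: forward to the TLD server
        upstream += 1
        cache_expiry = t + ttl

total = rate * horizon
print(f"~{upstream} upstream queries for ~{total:.0f} client queries "
      f"(reduction factor ~{total / upstream:.1f})")
```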

  18. Determination analysis of energy conservation standards for distribution transformers

    SciTech Connect

    Barnes, P.R.; Van Dyke, J.W.; McConnell, B.W.; Das, S.

    1996-07-01

    This report contains information for US DOE to use in making a determination on proposing energy conservation standards for distribution transformers as required by the Energy Policy Act of 1992. The potential for saving energy with more efficient liquid-immersed and dry-type distribution transformers could be significant because these transformers account for an estimated 140 billion kWh of the annual energy lost in the delivery of electricity. The objective was to determine whether energy conservation standards for distribution transformers would have the potential for significant energy savings, be technically feasible, and be economically justified from a national perspective. It was found that energy conservation for distribution transformers would be technically and economically feasible. Based on the energy conservation options analyzed, 3.6-13.7 quads of energy could be saved from 2000 to 2030.

  19. Stability analysis of linear fractional differential system with distributed delays

    NASA Astrophysics Data System (ADS)

    Veselinova, Magdalena; Kiskinov, Hristo; Zahariev, Andrey

    2015-11-01

    In the present work we study the Cauchy problem for a linear incommensurate fractional differential system with distributed delays. For the autonomous case with distributed delays, with derivatives in the Riemann-Liouville or Caputo sense, we establish sufficient conditions under which the zero solution is globally asymptotically stable. The established conditions coincide with the conditions that guarantee the same result in the particular case of a system with constant delays, and in the commensurate case for a system without delays as well.

  20. Analysis of Fermi gamma-ray burst duration distribution

    NASA Astrophysics Data System (ADS)

    Tarnopolski, M.

    2015-09-01

    Context. Two classes of gamma-ray bursts (GRBs), short and long, have been firmly established and are usually attributed to different physical scenarios. A third class, intermediate in T90 duration, has been reported in the datasets of BATSE, Swift, RHESSI, and possibly BeppoSAX. The latest release of >1500 GRBs observed by Fermi gives an opportunity to further investigate the duration distribution. Aims: The aim of this paper is to investigate whether a third class is present in the log T90 distribution, or whether it is described by a bimodal distribution. Methods: A standard χ2 fitting of a mixture of Gaussians was applied to 25 histograms with different binnings. Results: Different binnings give various values of the fitting parameters, as well as the shape of the fitted curve. Among five statistically significant fits, none is trimodal. Conclusions: Locations of the Gaussian components are in agreement with previous works. However, a trimodal distribution, understood in the sense of having three distinct peaks, is not found for any binning. It is concluded that the duration distribution in the Fermi data is well described by a mixture of three log-normal distributions, but it is intrinsically bimodal, hence no third class is present in the T90 data of Fermi. It is suggested that the log-normal fit may not be an adequate model.
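
    A minimal sketch of the standard chi-square histogram-fitting procedure described above: bin the durations and least-squares fit a sum of Gaussians to the bin counts; changing the number of bins reproduces the paper's point that the fitted parameters depend on the binning. Data are synthetic stand-ins, not the Fermi catalogue.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)
log_t90 = np.concatenate([rng.normal(-0.3, 0.5, 150), rng.normal(1.4, 0.45, 350)])
counts, edges = np.histogram(log_t90, bins=25)
centers = 0.5 * (edges[:-1] + edges[1:])

def three_gauss(x, a1, m1, s1, a2, m2, s2, a3, m3, s3):
    g = lambda a, m, s: a * np.exp(-0.5 * ((x - m) / s) ** 2)
    return g(a1, m1, s1) + g(a2, m2, s2) + g(a3, m3, s3)

p0 = [30, -0.5, 0.4, 10, 0.5, 0.4, 60, 1.4, 0.4]      # initial guesses
popt, _ = curve_fit(three_gauss, centers, counts, p0=p0, maxfev=20000)
model = three_gauss(centers, *popt)
chi2 = np.sum((counts - model) ** 2 / np.maximum(model, 1))
print(f"chi^2 = {chi2:.1f} with {counts.size - len(popt)} degrees of freedom")
```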

  1. Nanocrystal size distribution analysis from transmission electron microscopy images

    NASA Astrophysics Data System (ADS)

    van Sebille, Martijn; van der Maaten, Laurens J. P.; Xie, Ling; Jarolimek, Karol; Santbergen, Rudi; van Swaaij, René A. C. M. M.; Leifer, Klaus; Zeman, Miro

    2015-12-01

    We propose a method, with minimal bias caused by user input, to quickly detect and measure the nanocrystal size distribution from transmission electron microscopy (TEM) images using a combination of Laplacian of Gaussian filters and non-maximum suppression. We demonstrate the proposed method on bright-field TEM images of an a-SiC:H sample containing embedded silicon nanocrystals with varying magnifications and we compare the accuracy and speed with size distributions obtained by manual measurements, a thresholding method and PEBBLES. Finally, we analytically consider the error induced by slicing nanocrystals during TEM sample preparation on the measured nanocrystal size distribution and formulate an equation to correct this effect.
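
    A minimal sketch, not the authors' implementation, of Laplacian-of-Gaussian particle detection using scikit-image, whose blob_log routine applies LoG filters over a range of scales and performs non-maximum suppression; the synthetic test image and parameter values are illustrative assumptions.

```python
import numpy as np
from skimage.feature import blob_log

# synthetic stand-in for a TEM image: bright Gaussian spots on a dark background
rng = np.random.default_rng(8)
image = np.zeros((256, 256))
yy, xx = np.mgrid[0:256, 0:256]
for cy, cx, rad in rng.uniform([20, 20, 3], [236, 236, 8], (40, 3)):
    image += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * (rad / 2) ** 2))

blobs = blob_log(image, min_sigma=1, max_sigma=10, num_sigma=20, threshold=0.1)
radii = blobs[:, 2] * np.sqrt(2)          # LoG sigma -> approximate radius (pixels)
print(f"{len(blobs)} particles detected; mean radius = {radii.mean():.2f} px")
```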

  2. Application of extreme learning machine for estimation of wind speed distribution

    NASA Astrophysics Data System (ADS)

    Shamshirband, Shahaboddin; Mohammadi, Kasra; Tong, Chong Wen; Petković, Dalibor; Porcu, Emilio; Mostafaeipour, Ali; Ch, Sudheer; Sedaghat, Ahmad

    2016-03-01

    Knowledge of the probabilistic wind speed distribution is of particular significance for reliable evaluation of the wind energy potential and effective selection of site-specific wind turbines. Among all proposed probability density functions, the two-parameter Weibull function has been extensively endorsed and utilized to model wind speeds and express wind speed distributions in various locations. In this research work, an extreme learning machine (ELM) is employed to compute the shape (k) and scale (c) factors of the Weibull distribution function. The ELM model is trained and tested based upon two widely used methods for estimating the k and c parameters. The efficiency and accuracy of ELM are compared against support vector machines, artificial neural networks and genetic programming for estimating the same Weibull parameters. The results reveal that the ELM approach attains higher precision in estimating both Weibull parameters than the other methods evaluated. The mean absolute percentage error, mean absolute bias error and root mean square error for k are 8.4600 %, 0.1783 and 0.2371, while for c they are 0.2143 %, 0.0118 and 0.0192 m/s, respectively. In conclusion, ELM is found to be a particularly promising alternative method for estimating the Weibull k and c factors.
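
    A minimal sketch of one widely used conventional estimator of the Weibull shape k and scale c from wind speed records (the empirical/moment method), of the kind used to generate training targets for the ELM; the wind data below are synthetic.

```python
import numpy as np
from scipy.special import gamma

wind = np.random.default_rng(5).weibull(2.0, 5000) * 7.0  # stand-in wind data (m/s)
v_mean, v_std = wind.mean(), wind.std(ddof=1)

k = (v_std / v_mean) ** -1.086           # empirical shape estimate
c = v_mean / gamma(1.0 + 1.0 / k)        # scale from the Weibull mean relation
print(f"k = {k:.3f}, c = {c:.3f} m/s")
```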

  3. Application of extreme learning machine for estimation of wind speed distribution

    NASA Astrophysics Data System (ADS)

    Shamshirband, Shahaboddin; Mohammadi, Kasra; Tong, Chong Wen; Petković, Dalibor; Porcu, Emilio; Mostafaeipour, Ali; Ch, Sudheer; Sedaghat, Ahmad

    2015-06-01

    Knowledge of the probabilistic wind speed distribution is of particular significance for reliable evaluation of the wind energy potential and effective selection of site-specific wind turbines. Among all proposed probability density functions, the two-parameter Weibull function has been extensively endorsed and utilized to model wind speeds and express wind speed distributions in various locations. In this research work, an extreme learning machine (ELM) is employed to compute the shape (k) and scale (c) factors of the Weibull distribution function. The ELM model is trained and tested based upon two widely used methods for estimating the k and c parameters. The efficiency and accuracy of ELM are compared against support vector machines, artificial neural networks and genetic programming for estimating the same Weibull parameters. The results reveal that the ELM approach attains higher precision in estimating both Weibull parameters than the other methods evaluated. The mean absolute percentage error, mean absolute bias error and root mean square error for k are 8.4600 %, 0.1783 and 0.2371, while for c they are 0.2143 %, 0.0118 and 0.0192 m/s, respectively. In conclusion, ELM is found to be a particularly promising alternative method for estimating the Weibull k and c factors.

  4. CRACK GROWTH ANALYSIS OF SOLID OXIDE FUEL CELL ELECTROLYTES

    SciTech Connect

    S. Bandopadhyay; N. Nagabhushana

    2003-10-01

    Defects and flaws control the structural and functional properties of ceramics. In determining the reliability and lifetime of ceramic structures it is very important to quantify the crack growth behavior of the ceramics. In addition, because of the high variability of the strength and the relatively low toughness of ceramics, a statistical design approach is necessary. The statistical nature of the strength of ceramics is currently well recognized, and is usually accounted for by utilizing Weibull or similar statistical distributions. Design tools such as CARES, using a combination of strength measurements, stress analysis, and statistics, are available and reasonably well developed. These design codes also incorporate material data such as elastic constants as well as flaw distributions and time-dependent properties. The fast fracture reliability for ceramics is often different from their time-dependent reliability. Further confounding the design complexity, the time-dependent reliability varies with the environment/temperature/stress combination. Therefore, it becomes important to be able to accurately determine the behavior of ceramics under simulated application conditions to provide a better prediction of the lifetime and reliability for a given component. In the present study, yttria-stabilized zirconia (YSZ) of 9.6 mol% yttria composition was procured in the form of tubes of length 100 mm. The composition is of interest as a tubular electrolyte for Solid Oxide Fuel Cells. Rings cut from the tubes were characterized for microstructure, phase stability, mechanical strength (Weibull modulus) and fracture mechanisms. The strength at the operating condition of SOFCs (1000 °C) decreased to 95 MPa, compared to a room temperature strength of 230 MPa. However, the Weibull modulus remained relatively unchanged. The slow crack growth (SCG) parameter, n = 17, evaluated at room temperature in air, was representative of well-studied brittle materials. Based on these results, further work was planned to evaluate the strength degradation, modulus and failure in a more representative environment of the SOFCs.
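
    A minimal sketch of estimating the Weibull modulus from fracture strength data by median-rank regression on a Weibull probability plot, a common alternative to maximum likelihood; the strength values are illustrative, not the YSZ measurements.

```python
import numpy as np

strength = np.sort(np.array([198., 211., 222., 228., 235., 241., 249., 263.]))  # MPa
n = strength.size
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)     # median-rank failure probabilities

# Weibull plot: ln(-ln(1-F)) = m*ln(sigma) - m*ln(sigma0); slope = Weibull modulus
x = np.log(strength)
y = np.log(-np.log(1.0 - F))
m, intercept = np.polyfit(x, y, 1)
sigma0 = np.exp(-intercept / m)                 # characteristic strength
print(f"Weibull modulus m = {m:.2f}, characteristic strength = {sigma0:.1f} MPa")
```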

  5. Performance Analysis of Distributed Object-Oriented Applications

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D.

    1998-01-01

    The purpose of this research was to evaluate the efficiency of a distributed simulation architecture which creates individual modules which are made self-scheduling through the use of a message-based communication system used for requesting input data from another module which is the source of that data. To make the architecture as general as possible, the message-based communication architecture was implemented using standard remote object architectures (Common Object Request Broker Architecture (CORBA) and/or Distributed Component Object Model (DCOM)). A series of experiments were run in which different systems are distributed in a variety of ways across multiple computers and the performance evaluated. The experiments were duplicated in each case so that the overhead due to message communication and data transmission can be separated from the time required to actually perform the computational update of a module each iteration. The software used to distribute the modules across multiple computers was developed in the first year of the current grant and was modified considerably to add a message-based communication scheme supported by the DCOM distributed object architecture. The resulting performance was analyzed using a model created during the first year of this grant which predicts the overhead due to CORBA and DCOM remote procedure calls and includes the effects of data passed to and from the remote objects. A report covering the distributed simulation software and the results of the performance experiments has been submitted separately. The above report also discusses possible future work to apply the methodology to dynamically distribute the simulation modules so as to minimize overall computation time.

  6. Income distribution dependence of poverty measure: A theoretical analysis

    NASA Astrophysics Data System (ADS)

    Chattopadhyay, Amit K.; Mallick, Sushanta K.

    2007-04-01

    Using a modified deprivation (or poverty) function, in this paper, we theoretically study the changes in poverty with respect to the ‘global’ mean and variance of the income distribution using Indian survey data. We show that when the income obeys a log-normal distribution, a rising mean income generally indicates a reduction in poverty while an increase in the variance of the income distribution increases poverty. This altruistic view for a developing economy, however, is not tenable anymore once the poverty index is found to follow a Pareto distribution. Here although a rising mean income indicates a reduction in poverty, due to the presence of an inflexion point in the poverty function, there is a critical value of the variance below which poverty decreases with increasing variance while beyond this value, poverty undergoes a steep increase followed by a decrease with respect to higher variance. Identifying this inflexion point as the poverty line, we show that the Pareto poverty function satisfies all three standard axioms of a poverty index [N.C. Kakwani, Econometrica 43 (1980) 437; A.K. Sen, Econometrica 44 (1976) 219] whereas the log-normal distribution falls short of this requisite. Following these results, we make quantitative predictions to correlate a developing with a developed economy.

  7. Comparative hypsometric analysis of both Earth and Venus topographic distributions

    NASA Astrophysics Data System (ADS)

    Rosenblatt, P.; Pinet, P. C.; Thouvenot, E.

    1993-03-01

    Previous studies have compared the global topographic distribution of both planets by means of differential hypsometric curves. For the purpose of comparison, the terrestrial oceanic load was removed, and a reference base level was acquired. It was chosen on the basis of geometric considerations and reflected the geometric shape of the mean dynamical equilibrium figure of the planetary surface in both cases. This reference level corresponds to the well-known sea level for the Earth; for Venus, given its slow rate of rotation, a sphere of radius close to the mean, median and modal values of the planetary radii distribution were considered and the radius value of 6051 km arbitrarily taken. These studies were based on the low resolution (100 x 100 sq km) coverage of Venus obtained by the Pioneer Venus altimeter and on the 1 deg x 1 deg terrestrial topography. But, apart from revealing the distinct contrast existing between the Earth's bimodal and Venus' strong unimodal topographic distribution, the choice of such a reference level is inadequate and even misleading for the comparative geophysical understanding of the planetary relief distribution. The present work reinvestigates the comparison between Earth and Venus hypsometric distribution on the basis of the high-resolution data provided, on one hand, by the recent Magellan global topographic coverage of Venus' surface, and on the other hand, by the detailed NCAR 5 x 5 ft. grid topographic database currently available for the Earth's surface.

  8. Comparative hypsometric analysis of both Earth and Venus topographic distributions

    NASA Technical Reports Server (NTRS)

    Rosenblatt, P.; Pinet, P. C.; Thouvenot, E.

    1993-01-01

    Previous studies have compared the global topographic distribution of both planets by means of differential hypsometric curves. For the purpose of comparison, the terrestrial oceanic load was removed, and a reference base level was acquired. It was chosen on the basis of geometric considerations and reflected the geometric shape of the mean dynamical equilibrium figure of the planetary surface in both cases. This reference level corresponds to the well-known sea level for the Earth; for Venus, given its slow rate of rotation, a sphere of radius close to the mean, median and modal values of the planetary radii distribution were considered and the radius value of 6051 km arbitrarily taken. These studies were based on the low resolution (100 x 100 sq km) coverage of Venus obtained by the Pioneer Venus altimeter and on the 1 deg x 1 deg terrestrial topography. But, apart from revealing the distinct contrast existing between the Earth's bimodal and Venus' strong unimodal topographic distribution, the choice of such a reference level is inadequate and even misleading for the comparative geophysical understanding of the planetary relief distribution. The present work reinvestigates the comparison between Earth and Venus hypsometric distribution on the basis of the high-resolution data provided, on one hand, by the recent Magellan global topographic coverage of Venus' surface, and on the other hand, by the detailed NCAR 5 x 5 ft. grid topographic database currently available for the Earth's surface.

  9. Distributions of Autocorrelated First-Order Kinetic Outcomes: Illness Severity

    PubMed Central

    Englehardt, James D.

    2015-01-01

    Many complex systems produce outcomes having recurring, power law-like distributions over wide ranges. However, the form necessarily breaks down at extremes, whereas the Weibull distribution has been demonstrated over the full observed range. Here the Weibull distribution is derived as the asymptotic distribution of generalized first-order kinetic processes, with convergence driven by autocorrelation, and entropy maximization subject to finite positive mean, of the incremental compounding rates. Process increments represent multiplicative causes. In particular, illness severities are modeled as such, occurring in proportion to products of, e.g., chronic toxicant fractions passed by organs along a pathway, or rates of interacting oncogenic mutations. The Weibull form is also argued theoretically and by simulation to be robust to the onset of saturation kinetics. The Weibull exponential parameter is shown to indicate the number and widths of the first-order compounding increments, the extent of rate autocorrelation, and the degree to which process increments are exponentially distributed. In contrast with the Gaussian result in linear independent systems, the form is driven not by independence and multiplicity of process increments, but by increment autocorrelation and entropy. In some physical systems the form may be attracting, due to multiplicative evolution of outcome magnitudes towards extreme values potentially much larger and smaller than control mechanisms can contain. The Weibull distribution is demonstrated in preference to the lognormal and Pareto I for illness severities versus (a) toxicokinetic models, (b) biologically-based network models, (c) scholastic and psychological test score data for children with prenatal mercury exposure, and (d) time-to-tumor data of the ED01 study. PMID:26061263
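
    A minimal sketch of the model comparison implied above: fit Weibull, lognormal, and Pareto I distributions to a positive-valued severity sample by maximum likelihood and rank them by AIC. The data are synthetic stand-ins.

```python
import numpy as np
from scipy import stats

data = np.random.default_rng(6).weibull(0.8, 1000) * 3.0   # stand-in severities

candidates = {
    "weibull":   stats.weibull_min,
    "lognormal": stats.lognorm,
    "pareto":    stats.pareto,
}
for name, dist in candidates.items():
    params = dist.fit(data, floc=0)            # location fixed at zero
    loglik = np.sum(dist.logpdf(data, *params))
    k = len(params) - 1                        # floc is fixed, not estimated
    print(f"{name:9s} AIC = {2 * k - 2 * loglik:.1f}")
```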

  10. A fractal approach to dynamic inference and distribution analysis

    PubMed Central

    van Rooij, Marieke M. J. W.; Nash, Bertha A.; Rajaraman, Srinivasan; Holden, John G.

    2013-01-01

    Event-distributions inform scientists about the variability and dispersion of repeated measurements. This dispersion can be understood from a complex systems perspective, and quantified in terms of fractal geometry. The key premise is that a distribution's shape reveals information about the governing dynamics of the system that gave rise to the distribution. Two categories of characteristic dynamics are distinguished: additive systems governed by component-dominant dynamics and multiplicative or interdependent systems governed by interaction-dominant dynamics. A logic by which systems governed by interaction-dominant dynamics are expected to yield mixtures of lognormal and inverse power-law samples is discussed. These mixtures are described by a so-called cocktail model of response times derived from human cognitive performances. The overarching goals of this article are twofold: First, to offer readers an introduction to this theoretical perspective and second, to offer an overview of the related statistical methods. PMID:23372552

  11. Analysis and machine mapping of the distribution of band recoveries

    USGS Publications Warehouse

    Cowardin, L.M.

    1977-01-01

    A method of calculating distance and bearing from banding site to recovery location based on the solution of a spherical triangle is presented. X and Y distances on an ordinate grid were applied to computer plotting of recoveries on a map. The advantages and disadvantages of tables of recoveries by State or degree block, axial lines, and distance of recovery from banding site for presentation and comparison of the spatial distribution of band recoveries are discussed. A special web-shaped partition formed by concentric circles about the point of banding and great circles at 30-degree intervals through the point of banding has certain advantages over other methods. Comparison of distributions by means of a χ² contingency test is illustrated. The statistic V = χ²/N can be used as a measure of difference between two distributions of band recoveries, and its possible use as a measure of the degree of migrational homing is illustrated.
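
    A minimal sketch of the spherical-triangle computation described above: great-circle distance and initial bearing from a banding site to a recovery location, for coordinates given in degrees; the example coordinates are illustrative.

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2, radius_km=6371.0):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    # haversine great-circle distance
    a = (math.sin((p2 - p1) / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
    dist = 2 * radius_km * math.asin(math.sqrt(a))
    # initial bearing, degrees clockwise from north
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    return dist, math.degrees(math.atan2(y, x)) % 360

print(distance_and_bearing(46.8, -100.8, 29.9, -90.1))  # e.g. ND site to Louisiana
```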

  12. Nanocrystal size distribution analysis from transmission electron microscopy images.

    PubMed

    van Sebille, Martijn; van der Maaten, Laurens J P; Xie, Ling; Jarolimek, Karol; Santbergen, Rudi; van Swaaij, René A C M M; Leifer, Klaus; Zeman, Miro

    2015-12-28

    We propose a method, with minimal bias caused by user input, to quickly detect and measure the nanocrystal size distribution from transmission electron microscopy (TEM) images using a combination of Laplacian of Gaussian filters and non-maximum suppression. We demonstrate the proposed method on bright-field TEM images of an a-SiC:H sample containing embedded silicon nanocrystals with varying magnifications and we compare the accuracy and speed with size distributions obtained by manual measurements, a thresholding method and PEBBLES. Finally, we analytically consider the error induced by slicing nanocrystals during TEM sample preparation on the measured nanocrystal size distribution and formulate an equation to correct this effect. PMID:26593390

  13. Evaluation of frequency distributions for flood hazard analysis

    USGS Publications Warehouse

    Thomas, Wilbert O., Jr.; Kuriki, Minoru; Suetsugi, Tadashi

    1995-01-01

    Many different frequency distributions and fitting methods are used to determine the magnitude and frequency of floods and rainfall. Ten different combinations of frequency distributions and fitting methods are evaluated by summarizing the differences in the 0.002 exceedance probability quantile (500-year event), presenting graphical displays of the 10 estimates of the 0.002 quantile, and performing statistical tests to determine if differences are statistically significant. This evaluation indicated there are some statistically significant differences among the methods but, from an engineering standpoint, these differences may not be significant.

  14. Phylogenetic analysis reveals wide distribution of globin X

    PubMed Central

    2011-01-01

    The vertebrate globin gene repertoire consists of seven members that differ in terms of structure, function and phyletic distribution. While hemoglobin, myoglobin, cytoglobin, and neuroglobin are present in almost all gnathostomes examined so far, other globin genes, like globin X, are much more restricted in their phyletic distribution. Until now, globin X had only been found in teleost fish and Xenopus. Here, we report that globin X is also present in the genomes of the sea lamprey, ghost shark and reptiles. Moreover, the identification of orthologs of globin X in crustaceans, insects, platyhelminthes, and hemichordates confirms its ancient origin. PMID:22004552

  15. Analysis of inclusion distributions in silicon carbide armor ceramics

    NASA Astrophysics Data System (ADS)

    Bakas, Michael Paul

    It was determined that intrinsic microstructural defects (i.e. inclusions) are the preferential fragmentation path (initiation or propagation) for ballistically impacted SiC, and may contribute to variation in ballistic performance. Quasi-static and ballistic samples of SiC were studied and inclusions caused by common SiC sintering aids and/or impurities were identified. Ballistic rubble surfaces showed large inclusions of 10-400 micron size, while examination of polished cross-sections of the fragments showed only inclusions under 5 microns in size. The fact that large inclusions were found preferentially on rubble surfaces demonstrates a link between severe microstructural defects and the fragmentation of SiC armor. Rubble of both a "good" and "bad" performing SiC target were examined. Inclusion size data was gathered and fit to a distribution function. A difference was observed between the targets. The "good" target had twice the density of inclusions on its rubble in the size range less than 30 microns. No significant difference between distributions was observed for inclusion sizes greater than 40 microns. The "good" target fractured into an overall smaller fragment size distribution than the "bad" target, consistent with fragmentation at higher stresses. Literature suggests that the distribution of defects activated under dynamic conditions will be determined by the maximum stress reached locally in the target. On the basis of the defect distributions on its rubble, the "good" target appears to have withstood higher stresses. The fragment size distribution and inclusion size distribution on fragment surfaces both indicate higher stresses in the "good" target. Why the "good" target withstood a greater stress than the "bad" target remains a subject for conjecture. It is speculated that the position of severe "anomalous" defects may be influencing the target's performance, but this currently cannot be demonstrated conclusively. Certainly, this research shows that inclusion defects are involved in the fragmentation process, with differences in the distributions on the rubble of the targets suggesting a role in ballistic performance.

  16. Distributional Assumptions in Educational Assessments Analysis: Normal Distributions versus Generalized Beta Distribution in Modeling the Phenomenon of Learning

    ERIC Educational Resources Information Center

    Campos, Jose Alejandro Gonzalez; Moraga, Paulina Saavedra; Del Pozo, Manuel Freire

    2013-01-01

    This paper introduces the generalized beta (GB) model as a new modeling tool in the educational assessment area and evaluation analysis, specifically. Unlike normal model, GB model allows us to capture some real characteristics of data and it is an important tool for understanding the phenomenon of learning. This paper develops a contrast with the…

  17. Commentary on "Exploring the Sensitivity of Horn's Parallel Analysis to the Distributional Form of Random Data"

    ERIC Educational Resources Information Center

    Hayton, James C.

    2009-01-01

    In the article "Exploring the Sensitivity of Horn's Parallel Analysis to the Distributional Form of Random Data," Dinno (this issue) provides strong evidence that the distribution of random data does not have a significant influence on the outcome of the analysis. Hayton appreciates the thorough approach to evaluating this assumption, and agrees…

  18. Metagenomic Analysis of Water Distribution System Bacterial Communities

    EPA Science Inventory

    The microbial quality of drinking water is assessed using culture-based methods that are highly selective and that tend to underestimate the densities and diversity of microbial populations inhabiting distribution systems. In order to better understand the effect of different dis...

  19. Optimizing Distributed Practice: Theoretical Analysis and Practical Implications

    ERIC Educational Resources Information Center

    Cepeda, Nicholas J.; Coburn, Noriko; Rohrer, Doug; Wixted, John T.; Mozer, Michael C.; Pashler, Harold

    2009-01-01

    More than a century of research shows that increasing the gap between study episodes using the same material can enhance retention, yet little is known about how this so-called distributed practice effect unfolds over nontrivial periods. In two three-session laboratory studies, we examined the effects of gap on retention of foreign vocabulary,…

  20. ERIC Data Base; Pagination Field Frequency Distribution Analysis.

    ERIC Educational Resources Information Center

    Brandhorst, Wesley T.; Marra, Samuel J.; Price, Douglas S.

    A definitive study of the sizes of documents in the ERIC Data Base is reported by the ERIC Processing and Reference Facility. This is a statistical and frequency distribution study encompassing every item in the file whose record contains pagination data in machine readable form. The study provides pagination data that could be used by present and

  1. Social Distribution, Ghettoization, and Educational Triage: A Marxist Analysis.

    ERIC Educational Resources Information Center

    Cameron, Jeanne

    2000-01-01

    Discusses how many urban students are written off as unworthy of scant educational resources, using Weber and Marx to discuss how educational triage is best understood theoretically, exploring how broader processes of social distribution and triage link up with daily practices and policies in urban classrooms, and highlighting the need for a…

  2. THE EPANET PROGRAMMER'S TOOLKIT FOR ANALYSIS OF WATER DISTRIBUTION SYSTEMS

    EPA Science Inventory

    The EPANET Programmer's Toolkit is a collection of functions that helps simplify computer programming of water distribution network analyses. the functions can be used to read in a pipe network description file, modify selected component properties, run multiple hydraulic and wa...

  3. High Resolution PV Power Modeling for Distribution Circuit Analysis

    SciTech Connect

    Norris, B. L.; Dise, J. H.

    2013-09-01

    NREL has contracted with Clean Power Research to provide 1-minute simulation datasets of PV systems located at three high penetration distribution feeders in the service territory of Southern California Edison (SCE): Porterville, Palmdale, and Fontana, California. The resulting PV simulations will be used to separately model the electrical circuits to determine the impacts of PV on circuit operations.

  4. Conduits and dike distribution analysis in San Rafael Swell, Utah

    NASA Astrophysics Data System (ADS)

    Kiyosugi, K.; Connor, C.; Wetmore, P. H.; Ferwerda, B. P.; Germa, A.

    2011-12-01

    Volcanic fields generally consist of scattered monogenetic volcanoes, such as cinder cones and maars. The temporal and spatial distribution of monogenetic volcanoes and the probability of future activity within volcanic fields are studied with the goals of understanding the origins of these volcano groups and forecasting potential future volcanic hazards. The subsurface magmatic plumbing systems associated with volcanic fields, however, are rarely observed or studied. Therefore, we investigated a highly eroded and exposed magmatic plumbing system on the San Rafael Swell (UT) that consists of dikes, volcano conduits and sills. San Rafael Swell is part of the Colorado Plateau and is located east of the Rocky Mountain seismic belt and the Basin and Range. The overburden thickness at the time of mafic magma intrusion (Pliocene; ca. 4 Ma) into Jurassic sandstone is estimated to be ~800 m based on paleotopographical reconstructions. Based on a geologic map by P. Delaney and colleagues, and new field research, a total of 63 conduits are mapped in this former volcanic field. The conduits each reveal features of a root zone and/or lower diatreme, including rapid dike expansion, peperite, and brecciated intrusive and host rocks. A recrystallized baked zone of host rock is also observed around many conduits. Most conduits are basaltic or shonkinitic, with thicknesses of >10 m, and are associated with feeder dikes intruded along N-S trending joints in the host rock, whereas two conduits are syenitic, suggesting development from underlying cognate sills. The conduit distribution, analyzed by a kernel function method with elliptical bandwidth, shows a N-S elongated area of higher conduit density regardless of the azimuths of alignments of closely spaced conduits (nearest-neighbor distance <200 m). In addition, dike density was calculated as the total dike length per unit area (km/km²). The conduit and sill distribution is concordant with the high dike density area. In particular, the distribution of conduits is not random with respect to the dike distribution, with greater than 99% confidence on the basis of the Kolmogorov-Smirnov test. On the other hand, the dike density at each conduit location suggests that there is no threshold of dike density for conduit formation; in other words, conduits may develop even from short mapped dikes in low-dike-density areas. These results show the effectiveness of studying volcanic vent distributions to infer the size of the magmatic system below volcanic fields, and highlight the uncertainty of forecasting the location of new monogenetic volcanoes in active fields, which may be associated with a single dike intrusion.
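
    A minimal sketch of the two statistical steps named above, on synthetic data: a Gaussian kernel density estimate of conduit locations (scipy's covariance-based bandwidth standing in for the paper's elliptical bandwidth) and a two-sample Kolmogorov-Smirnov comparison of dike density at conduit sites versus background sites; all values are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
conduits = rng.normal([0.0, 0.0], [0.5, 2.0], (63, 2))  # synthetic N-S elongated field
kde = stats.gaussian_kde(conduits.T)                    # bandwidth from data covariance
print(f"peak KDE density among conduits = {kde(conduits.T).max():.4f}")

# hypothetical dike-density values (km/km^2) at conduit vs random background sites
density_at_conduits = rng.gamma(4.0, 0.5, 63)
density_background = rng.gamma(2.0, 0.5, 200)
ks_stat, p_value = stats.ks_2samp(density_at_conduits, density_background)
print(f"KS statistic = {ks_stat:.3f}, p = {p_value:.2e}")
```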

  5. Exploring Vector Fields with Distribution-based Streamline Analysis

    SciTech Connect

    Lu, Kewei; Chaudhuri, Abon; Lee, Teng-Yok; Shen, Han-Wei; Wong, Pak C.

    2013-02-26

    Streamline-based techniques are designed based on the idea that properties of streamlines are indicative of features in the underlying field. In this paper, we show that statistical distributions of measurements along the trajectory of a streamline can be used as a robust and effective descriptor to measure the similarity between streamlines. With the distribution-based approach, we present a framework for interactive exploration of 3D vector fields with streamline query and clustering. Streamline queries allow us to rapidly identify streamlines that share similar geometric features with the target streamline. Streamline clustering allows us to group together streamlines of similar shapes. Based on the user's selection, different clusters with different features at different levels of detail can be visualized to highlight features in 3D flow fields. We demonstrate the utility of our framework with simulation data sets of varying nature and size.
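
    A minimal sketch of the distribution-based similarity idea above: sample a scalar property (here discrete curvature) along each streamline and compare the resulting distributions, for instance with the 1D Wasserstein distance; the two streamlines are illustrative.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def curvature(points):
    """Discrete curvature magnitudes along a polyline of shape (n, 3)."""
    d1 = np.gradient(points, axis=0)
    d2 = np.gradient(d1, axis=0)
    cross = np.cross(d1, d2)
    return np.linalg.norm(cross, axis=1) / (np.linalg.norm(d1, axis=1) ** 3 + 1e-12)

t = np.linspace(0, 4 * np.pi, 400)
helix = np.c_[np.cos(t), np.sin(t), 0.1 * t]       # two illustrative streamlines
line = np.c_[t, 0.01 * t, np.zeros_like(t)]
print(wasserstein_distance(curvature(helix), curvature(line)))
```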

  6. Secure analysis of distributed chemical databases without data integration.

    PubMed

    Karr, Alan F; Feng, Jun; Lin, Xiaodong; Sanil, Ashish P; Young, S Stanley; Reiter, Jerome P

    2005-01-01

    We present a method for performing statistically valid linear regressions on the union of distributed chemical databases that preserves confidentiality of those databases. The method employs secure multi-party computation to share local sufficient statistics necessary to compute least squares estimators of regression coefficients, error variances and other quantities of interest. We illustrate our method with an example containing four companies' rather different databases. PMID:16267693
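
    A minimal sketch of the statistical core described above: each site shares only its local sufficient statistics X'X and X'y, whose sums yield the global least-squares estimate; the secure multi-party summation that protects even these shares in the paper is omitted here, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(9)
beta_true = np.array([2.0, -1.0, 0.5])

def local_stats(n):
    X = np.c_[np.ones(n), rng.normal(size=(n, 2))]   # intercept + 2 features
    y = X @ beta_true + rng.normal(scale=0.1, size=n)
    return X.T @ X, X.T @ y                          # sufficient statistics only

sites = [local_stats(n) for n in (120, 80, 200, 50)] # four companies' databases
XtX = sum(s[0] for s in sites)
Xty = sum(s[1] for s in sites)
beta_hat = np.linalg.solve(XtX, Xty)                 # global OLS estimate
print(beta_hat)
```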

  7. Analysis of phase distribution phenomena in microgravity environments

    NASA Technical Reports Server (NTRS)

    Lahey, Richard T., Jr.; Bonetto, F.

    1994-01-01

    The purpose of the research presented in this paper is to demonstrate the ability of multidimensional two-fluid models for bubbly two-phase flow to accurately predict lateral phase distribution phenomena in microgravity environments. If successful, this research should provide NASA with mechanistically-based analytical methods which can be used for multiphase space system design and evaluation, and should be the basis for future shuttle experiments for model verification.

  8. A Distributed Datacube Analysis Service for Radio Telescopes

    NASA Astrophysics Data System (ADS)

    Mahadevan, V.; Rosolowsky, E.

    2011-07-01

    Current- and next-generation radio telescopes are poised to produce data at an unprecedented rate. We are developing the cyberinfrastructure to enable distributed processing and storage of FITS data cubes from these telescopes. In this contribution, we will present the data storage and network infrastructure that enables efficient searching, extraction and transfer of FITS datacubes. The infrastructure combines the iRODS distributed data management with a custom spatially-enabled PostgreSQL database. The data management system ingests FITS cubes, automatically populating the metadata database using FITS header data. Queries to the metadata service return matching records using VOTable format. The iRODS system allows for a distributed network of fileservers to store large data sets redundantly with a minimum of upkeep. Transfers between iRODS data sites use parallel I/O streams for maximum speed. Files are staged to the optimal host for download by an end user. The service can automatically extract subregions of individual or adjacent cubes registered to user-defined astrometric grids using the Montage package. The data system can query multiple surveys and return spatially registered data cubes to the user. Future development will allow the data system to utilize distributed processing environment to analyze datasets, returning only the calculation results to the end user. This cyberinfrastructure project combines many existing, open-source packages into a single deployment of a data system. The codebase can also function on two-dimensional images. The project is funded by CANARIE under the Network-Enabled Platforms 2 program.

  9. Analysis of magnetic electron lens with secant hyperbolic field distribution

    NASA Astrophysics Data System (ADS)

    Pany, S. S.; Ahmed, Z.; Dubey, B. P.

    2014-12-01

    Electron-optical imaging instruments like the Scanning Electron Microscope (SEM) and Transmission Electron Microscope (TEM) use specially designed solenoid electromagnets for focusing of the electron beam. Indicators of imaging performance of these instruments, like spatial resolution, have a strong correlation with the focal characteristics of the magnetic lenses, which in turn have been shown to be sensitive to the details of the spatial distribution of the axial magnetic field. Owing to the complexity of designing practical lenses, empirical mathematical expressions are important for obtaining the desired focal properties. Thus the degree of accuracy of such models in representing the actual field distribution determines the accuracy of the calculations and ultimately the performance of the lens. Historically, the mathematical models proposed by Glaser [1] and Ramberg [2] have been extensively used. In this paper the authors discuss another model with a secant-hyperbolic type magnetic field distribution function, and present a comparison between models, utilizing results from finite element-based field simulations as the reference for evaluating performance.
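
    A minimal sketch (not the paper's analysis) of why the axial field model matters: integrate the paraxial ray equation r'' = -K B(z)² r for a secant-hyperbolic profile B(z) = B0/cosh(z/a) and locate the focal crossing. K, which lumps the beam energy and physical constants, and all values here are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

B0, a, K = 1.0, 5.0, 0.2                    # arbitrary illustrative units

def ray(z, y):
    r, dr = y
    B = B0 / np.cosh(z / a)                 # secant-hyperbolic axial field
    return [dr, -K * B**2 * r]              # paraxial ray equation

sol = solve_ivp(ray, [-50.0, 50.0], [1.0, 0.0], max_step=0.05)
sign_change = np.where(np.diff(np.sign(sol.y[0])) != 0)[0]
if sign_change.size:
    print(f"ray crosses the axis near z = {sol.t[sign_change[0]]:.2f} (focal point)")
```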

  10. Periodic analysis of total ozone and its vertical distribution

    NASA Technical Reports Server (NTRS)

    Wilcox, R. W.; Nastrom, G. D.; Belmont, A. D.

    1975-01-01

    Both total ozone and vertical ozone distribution data from the period 1957 to 1972 are analyzed. For total ozone, improved monthly zonal means for both hemispheres are computed by weighting individual station monthly means by a factor which compensates for the close grouping of stations in certain regions of latitude bands. Longitudinal variability shows maxima in summer in both hemispheres but, in winter, only in the Northern Hemisphere. The geographical distributions of the long term mean, and the annual, quasi-biennial and semiannual waves in total ozone over the Northern Hemisphere are presented. The extratropical amplitude of the annual wave is by far the largest of the three, as much as 120 m atm cm over northern Siberia. There is a tendency for all three waves to have maxima in high latitudes. Monthly means of the vertical distribution of ozone determined from 3 to 8 years of ozonesonde data over North America are presented. Number density is highest in the Arctic near 18 km. The region of maximum number density slopes upward toward 10 N, where the long term mean is 45 × 10^11 molecules/cm^3 near 26 km.
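
    A minimal sketch of extracting annual and semiannual waves from a monthly ozone series by harmonic least squares, the kind of periodic analysis described above; the series is synthetic.

```python
import numpy as np

months = np.arange(120)                            # 10 years of monthly means
ozone = (320 + 40 * np.cos(2 * np.pi * (months - 2) / 12)      # annual wave
         + 8 * np.cos(4 * np.pi * months / 12)                 # semiannual wave
         + np.random.default_rng(10).normal(0, 3, months.size))

w = 2 * np.pi * months / 12
A = np.c_[np.ones_like(w), np.cos(w), np.sin(w), np.cos(2 * w), np.sin(2 * w)]
coef, *_ = np.linalg.lstsq(A, ozone, rcond=None)

annual_amp = np.hypot(coef[1], coef[2])
semiannual_amp = np.hypot(coef[3], coef[4])
print(f"mean = {coef[0]:.1f}, annual amplitude = {annual_amp:.1f}, "
      f"semiannual amplitude = {semiannual_amp:.1f} (m atm cm)")
```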

  11. Archiving, Distribution and Analysis of Solar-B Data

    NASA Astrophysics Data System (ADS)

    Shimojo, M.

    2007-10-01

    The Solar-B Mission Operation and Data Analysis (MODA) working group has been discussing the data analysis system for Solar-B data since 2001. In the paper, based on the Solar-B MODA document and the recent work in Japan, we introduce the dataflow from Solar-B to scientists, the data format and data-level of Solar-B data, and the data searching/providing system.

  12. Statistical analysis and modelling of small satellite reliability

    NASA Astrophysics Data System (ADS)

    Guo, Jian; Monas, Liora; Gill, Eberhard

    2014-05-01

    This paper attempts to characterize failure behaviour of small satellites through statistical analysis of actual in-orbit failures. A unique Small Satellite Anomalies Database comprising empirical failure data of 222 small satellites has been developed. A nonparametric analysis of the failure data has been implemented by means of a Kaplan-Meier estimation. An innovative modelling method, i.e. Bayesian theory in combination with Markov Chain Monte Carlo (MCMC) simulations, has been proposed to model the reliability of small satellites. An extensive parametric analysis using the Bayesian/MCMC method has been performed to fit a Weibull distribution to the data. The influence of several characteristics such as the design lifetime, mass, launch year, mission type and the type of satellite developers on the reliability has been analyzed. The results clearly show the infant mortality of small satellites. Compared with the classical maximum-likelihood estimation methods, the proposed Bayesian/MCMC method results in better fitting Weibull models and is especially suitable for reliability modelling where only very limited failures are observed.
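
    A minimal sketch of Bayesian Weibull fitting to right-censored lifetimes with a random-walk Metropolis sampler, in the spirit of the Bayesian/MCMC approach above; the priors, proposal scale, and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(11)
t = np.array([0.2, 0.5, 1.1, 2.0, 3.5, 5.0, 5.0, 5.0])   # years on orbit
failed = np.array([1, 1, 1, 1, 1, 0, 0, 0], bool)        # False = still operating

def log_post(log_k, log_lam):
    k, lam = np.exp(log_k), np.exp(log_lam)
    z = (t / lam) ** k
    # Weibull log-likelihood with right-censoring: pdf for failures, survival for rest
    loglik = np.sum((np.log(k / lam) + (k - 1) * np.log(t / lam))[failed]) - z.sum()
    return loglik - 0.5 * (log_k**2 + log_lam**2) / 10.0  # weak normal priors

chain, cur = [], np.array([0.0, 0.0])                     # start at k = lam = 1
cur_lp = log_post(*cur)
for _ in range(20000):
    prop = cur + rng.normal(0, 0.1, 2)                    # random-walk proposal
    prop_lp = log_post(*prop)
    if np.log(rng.uniform()) < prop_lp - cur_lp:          # Metropolis accept step
        cur, cur_lp = prop, prop_lp
    chain.append(cur)

k_samples = np.exp(np.array(chain)[5000:, 0])             # discard burn-in
print(f"posterior mean shape k = {k_samples.mean():.2f} "
      f"(k < 1 indicates infant mortality)")
```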

  13. Complexity analysis of pipeline mapping problems in distributed heterogeneous networks

    SciTech Connect

    Lin, Ying; Wu, Qishi; Zhu, Mengxia; Rao, Nageswara S

    2009-04-01

    Large-scale scientific applications require using various system resources to execute complex computing pipelines in distributed networks to support collaborative research. System resources are typically shared in the Internet or over dedicated connections based on their location, availability, capability, and capacity. Optimizing the network performance of computing pipelines in such distributed environments is critical to the success of these applications. We consider two types of large-scale distributed applications: (1) interactive applications where a single dataset is sequentially processed along a pipeline; and (2) streaming applications where a series of datasets continuously flow through a pipeline. The computing pipelines of these applications consist of a number of modules executed in a linear order in network environments with heterogeneous resources under different constraints. Our goal is to find an efficient mapping scheme that allocates the modules of a pipeline to network nodes for minimum end-to-end delay or maximum frame rate. We formulate the pipeline mappings in distributed environments as optimization problems and categorize them into six classes with different optimization goals and mapping constraints: (1) Minimum End-to-end Delay with No Node Reuse (MEDNNR), (2) Minimum End-to-end Delay with Contiguous Node Reuse (MEDCNR), (3) Minimum End-to-end Delay with Arbitrary Node Reuse (MEDANR), (4) Maximum Frame Rate with No Node Reuse or Share (MFRNNRS), (5) Maximum Frame Rate with Contiguous Node Reuse and Share (MFRCNRS), and (6) Maximum Frame Rate with Arbitrary Node Reuse and Share (MFRANRS). Here, 'contiguous node reuse' means that multiple contiguous modules along the pipeline may run on the same node and 'arbitrary node reuse' imposes no restriction on node reuse. Note that in interactive applications, a node can be reused but its resource is not shared. We prove that MEDANR is polynomially solvable and the rest are NP-complete. MEDANR, where either contiguous or noncontiguous modules in the pipeline can be mapped onto the same node, is essentially the Maximum n-hop Shortest Path problem, and can be solved using a dynamic programming method. In MEDNNR and MFRNNRS, any network node can be used only once, which requires selecting the same number of nodes for one-to-one onto mapping. We show its NP-completeness by reducing from the Hamiltonian Path problem. Node reuse is allowed in MEDCNR, MFRCNRS and MFRANRS, which are similar to the Maximum n-hop Shortest Path problem that considers resource sharing. We prove their NP-completeness by reducing from the Disjoint-Connecting-Path Problem and the Widest Path with Linear Capacity Constraints problem, respectively.
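
    A minimal sketch of the dynamic-programming approach for the polynomially solvable MEDANR class described above: modules are placed in pipeline order on nodes, any node may be reused, and the recurrence minimizes accumulated processing plus transfer delay; all delay values are illustrative.

```python
import numpy as np

proc = np.array([[2., 5.], [4., 1.], [3., 3.]])   # proc[i][v]: module i on node v
link = np.array([[0., 2.], [2., 0.]])             # link[u][v]: transfer delay u -> v

m, n = proc.shape
dp = np.full((m, n), np.inf)
dp[0] = proc[0]                                   # place the first module anywhere
for i in range(1, m):
    for v in range(n):                            # best predecessor node u for node v;
        dp[i, v] = proc[i, v] + min(dp[i - 1, u] + link[u, v] for u in range(n))
                                                  # u == v reuses the node (zero link cost)
print(f"minimum end-to-end delay = {dp[-1].min():.1f}")
```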

  14. Performance analysis of structured pedigree distributed fusion systems

    NASA Astrophysics Data System (ADS)

    Arambel, Pablo O.

    2009-05-01

    Structured pedigree is a way to compress pedigree information. When applied to distributed fusion systems, the approach avoids the well known problem of information double counting resulting from ignoring the cross-correlation among fused estimates. Other schemes that attempt to compute optimal fused estimates require the transmission of full pedigree information or raw data. This usually can not be implemented in practical systems because of the enormous requirements in communications bandwidth. The Structured Pedigree approach achieves data compression by maintaining multiple covariance matrices, one for each uncorrelated source in the network. These covariance matrices are transmitted by each node along with the state estimate. This represents a significant compression when compared to full pedigree schemes. The transmission of these covariance matrices (or a subset of these covariance matrices) allows for an efficient fusion of the estimates, while avoiding information double counting and guaranteeing consistency on the estimates. This is achieved by exploiting the additional partial knowledge on the correlation of the estimates. The approach uses a generalized version of the Split Covariance Intersection algorithm that applies to multiple estimates and multiple uncorrelated sources. In this paper we study the performance of the proposed distributed fusion system by analyzing a simple but instructive example.
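
    A minimal sketch of the basic Covariance Intersection rule that the generalized split variant mentioned above builds on: fuse two estimates with unknown cross-correlation by a convex combination of their information matrices, choosing the weight that minimizes the determinant of the fused covariance; the example estimates are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def covariance_intersection(x1, P1, x2, P2):
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    # pick omega minimizing the determinant of the fused covariance
    fused_det = lambda w: np.linalg.det(np.linalg.inv(w * I1 + (1 - w) * I2))
    w = minimize_scalar(fused_det, bounds=(1e-6, 1 - 1e-6), method="bounded").x
    P = np.linalg.inv(w * I1 + (1 - w) * I2)      # consistent fused covariance
    x = P @ (w * I1 @ x1 + (1 - w) * I2 @ x2)     # fused state estimate
    return x, P

x, P = covariance_intersection(np.array([1.0, 0.0]), np.diag([1.0, 4.0]),
                               np.array([1.2, 0.3]), np.diag([4.0, 1.0]))
print(x, np.diag(P))
```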

  15. Visualization and analysis of lipopolysaccharide distribution in binary phospholipid bilayers

    SciTech Connect

    Henning, Maria Florencia; Sanchez, Susana; Bakas, Laura

    2009-05-22

    Lipopolysaccharide (LPS) is an endotoxin released from the outer membrane of Gram-negative bacteria during infections. It has been reported that LPS may play a role in the outer membrane of bacteria similar to that of cholesterol in eukaryotic plasma membranes. In this article we compare the effect of introducing LPS or cholesterol in liposomes made of dipalmitoylphosphatidylcholine/dioleoylphosphatidylcholine on the solubilization process by Triton X-100. The results show that liposomes containing LPS or cholesterol are more resistant to solubilization by Triton X-100 than the binary phospholipid mixtures at 4 °C. The LPS distribution was analyzed on GUVs of DPPC:DOPC using FITC-LPS. Solid and liquid-crystalline domains were visualized by labeling the GUVs with LAURDAN, and GP images were acquired using a two-photon microscope. The images show a selective distribution of LPS in gel domains. Our results support the hypothesis that LPS could aggregate and concentrate selectively in biological membranes, providing a mechanism to bring together several components of the LPS-sensing machinery.

  16. Analysis of an algorithm for distributed recognition and accountability

    SciTech Connect

    Ko, C.; Frincke, D.A.; Goan, T. Jr.; Heberlein, L.T.; Levitt, K.; Mukherjee, B.; Wee, C.

    1993-08-01

    Computer and network systems are vulnerable to attacks. Abandoning the existing huge infrastructure of possibly-insecure computer and network systems is impossible, and replacing them by totally secure systems may not be feasible or cost effective. A common element in many attacks is that a single user will often attempt to intrude upon multiple resources throughout a network. Detecting the attack can become significantly easier by compiling and integrating evidence of such intrusion attempts across the network rather than attempting to assess the situation from the vantage point of only a single host. To solve this problem, we suggest an approach for distributed recognition and accountability (DRA), which consists of algorithms which "process," at a central location, distributed and asynchronous "reports" generated by computers (or a subset thereof) throughout the network. Our highest-priority objectives are to observe ways by which an individual moves around in a network of computers, including changing user names to possibly hide his/her true identity, and to associate all activities of multiple instances of the same individual with the same network-wide user. We present the DRA algorithm and a sketch of its proof under an initial set of simplifying albeit realistic assumptions. Later, we relax these assumptions to accommodate pragmatic aspects such as missing or delayed "reports," clock slew, tampered "reports," etc. We believe that such algorithms will have widespread applications in the future, particularly in intrusion-detection systems.

  17. Studying bubble-particle interactions by zeta potential distribution analysis.

    PubMed

    Wu, Chendi; Wang, Louxiang; Harbottle, David; Masliyah, Jacob; Xu, Zhenghe

    2015-07-01

    Over a decade ago, Xu and Masliyah pioneered an approach to characterize the interactions between particles in dynamic environments of multicomponent systems by measuring zeta potential distributions of individual components and their mixtures. Using a Zetaphoremeter, the measured zeta potential distributions of individual components and their mixtures were used to determine the conditions of preferential attachment in multicomponent particle suspensions. The technique has been applied to study the attachment of nano-sized silica and alumina particles to sub-micron size bubbles in solutions with and without the addition of surface active agents (SDS, DAH and DF250). The degree of attachment between gas bubbles and particles is shown to be a function of the interaction energy governed by the dispersion, electrostatic double layer and hydrophobic forces. Under certain chemical conditions, the attachment of nano-particles to sub-micron size bubbles is shown to be enhanced by in-situ gas nucleation induced by hydrodynamic cavitation for the weakly interacting systems, where mixing of the two individual components results in negligible attachment. Preferential interaction in complex tertiary particle systems demonstrated strong attachment between micron-sized alumina and gas bubbles, with little attachment between micron-sized alumina and silica, possibly due to instability of the aggregates in the shear flow environment. PMID:25731913

  18. Global Uncertainty and Sensitivity Analysis of Spatially Distributed Hydrological Model, Regional Simulation Model (RSM), to spatially distributed factors

    NASA Astrophysics Data System (ADS)

    Zajac, Z. B.; Munoz-Carpena, R.; Vanderlinden, K.

    2009-12-01

    This research addresses two aspects of spatially distributed modeling: uncertainty analysis (UA), described as the propagation of uncertainty from spatially distributed input factors to model outputs; and sensitivity analysis (SA), defined as the assessment of the relative importance of spatially distributed factors on the model output variance. An evaluation framework for spatially distributed models is proposed based on a combination of sequential Gaussian simulation (sGs) and the global, variance-based SA method of Sobol to quantify model output uncertainty due to spatially distributed input factors, together with the corresponding sensitivity measures. The framework is independent of model assumptions; it explores the whole space of input factors, provides measures of factor importance (first-order effects) and their interactions (higher-order effects), and assesses the effect of spatial resolution of the model input factors, one of the least understood contributors to uncertainty and sensitivity of distributed models. A spatially distributed hydrological model (Regional Simulation Model, RSM), applied to a site in South Florida (Water Conservation Area-2A), is used as a benchmark for the study. The model domain is spatially represented by triangular elements (average size of 1.1 km^2). High resolution land elevation measurements (400 × 400 m, ±0.15 m vertical error) obtained by the USGS' Airborne Height Finder survey are used in the study. The original survey data (approximately 2,600 points), together with lower-density subsets drawn from these data (1/2, 1/4, 1/8, 1/16, 1/32 of the original density), are used for generating equiprobable maps of effective land elevation factor values via sGs. These alternative realizations are sampled pseudo-randomly and used as inputs for model runs. In this way, uncertainty regarding the spatial representation of the elevation surface is transferred into uncertainty of model outputs. The results show that below a specific threshold of data density (1/8), model uncertainty and sensitivity are impacted by the density of land elevation data used for deriving effective land elevation factor values: uncertainty of model outputs increases as the density of elevation data decreases. A similar pattern is observed for the relative importance of the sensitivity indices of the land elevation factor. The results indicate that reduced data density of land elevation could be used without significantly compromising the certainty of RSM predictions and the subsequent decision-making process for the specific WCA-2A conditions. The methodology proposed in this research is useful for model quality control and for guiding field measurement campaigns by optimizing data collection in terms of cost-benefit analysis.
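
    The Sobol first-order indices at the heart of this framework can be estimated with a Saltelli-type Monte Carlo scheme. The sketch below is illustrative only: the two-factor model f and the uniform factor ranges are hypothetical placeholders, not RSM.

        # Minimal sketch of a variance-based (Sobol) first-order estimator.
        import numpy as np

        def sobol_first_order(f, k, N=100_000, seed=0):
            rng = np.random.default_rng(seed)
            A = rng.uniform(size=(N, k))          # sample matrix A
            B = rng.uniform(size=(N, k))          # independent matrix B
            fA, fB = f(A), f(B)
            V = np.var(np.concatenate([fA, fB]))  # total output variance
            S = np.empty(k)
            for i in range(k):
                ABi = A.copy()
                ABi[:, i] = B[:, i]               # A with column i from B
                # Saltelli (2010) estimator of the first-order index S_i
                S[i] = np.mean(fB * (f(ABi) - fA)) / V
            return S

        # toy model: factor 0 should dominate (S ~ [0.94, 0.06])
        print(sobol_first_order(lambda X: 4*X[:, 0] + X[:, 1], k=2))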

  19. Sensitivity Analysis of Distributed Soil Moisture Profiles by Active Distributed Temperature Sensing

    NASA Astrophysics Data System (ADS)

    Ciocca, F.; Van De Giesen, N.; Assouline, S.; Huwald, H.; Lunati, I.

    2012-12-01

    Monitoring and measuring the fluctuations of soil moisture at large scales in the field remains a challenge. Although sensors based on the measurement of dielectric properties, such as Time Domain Reflectometers (TDR) and capacitance-based probes, can guarantee reasonable responses, they operate only over limited spatial ranges. On the other hand, optical fibers attached to a Distributed Temperature Sensing (DTS) system allow high-precision soil temperature measurements over distances of kilometers. A recently developed technique called Active DTS (ADTS), consisting of a heat pulse of a certain duration and power applied along the metal sheath covering the optical fiber buried in the soil, has proven a promising alternative to spatially limited probes. Two approaches have been investigated to infer distributed soil moisture profiles in the region surrounding the fiber optic cable by analyzing the temperature variations during the heating and cooling phases. One directly relates the change of temperature to the (independently measured) soil moisture to develop a specific calibration curve for the soil used; the other infers the thermal properties and then obtains the soil moisture by inversion of known relationships. To test and compare the two approaches over a broad range of saturation conditions, a large lysimeter was homogeneously filled with loamy soil, and 52 meters of fiber optic cable were buried in the upper 0.8 meters in a rigid double-coil structure of 15 loops, along with a series of capacitance-based sensors (calibrated for the soil used) to provide independent soil moisture measurements at the same depths as the optical fiber. Thermocouples were also wrapped around the fiber to investigate the effects of the insulating cover surrounding the cable, and placed between layers to monitor heat diffusion over several centimeters. A high performance DTS was used to measure the temperature along the fiber optic cable. Several soil moisture profiles were generated in the lysimeter, either by varying the water table height or by wetting the soil from the top. The sensitivity of the ADTS method for heat pulses of different duration and power and for different spatial and temporal resolutions is presented.
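
    For the inversion approach, one standard reduction (used here purely as an illustration, with hypothetical numbers) treats the heated fiber as an infinite line source: at late times the temperature rise grows linearly in ln(t), and the slope yields the soil thermal conductivity, which a calibration can then map to soil moisture.

        # Infinite-line-source sketch: dT ~ (q / (4 pi lam)) ln(t) + C at
        # late times, so the slope of dT versus ln(t) gives conductivity lam.
        import numpy as np

        q = 10.0                                  # heater power per length, W/m
        t = np.linspace(30, 600, 200)             # late-time window, s
        lam_true = 0.8                            # W/(m K), to be recovered
        dT = q / (4 * np.pi * lam_true) * np.log(t) + 0.2

        slope, _ = np.polyfit(np.log(t), dT, 1)   # fit dT against ln(t)
        print(f"lam = {q / (4 * np.pi * slope):.2f} W/(m K)")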

  20. Analysis of the tropospheric water distribution during FIRE 2

    NASA Technical Reports Server (NTRS)

    Westphal, Douglas L.

    1993-01-01

    The Penn State/NCAR mesoscale model, as adapted for use at ARC, was used as a testbed for the development and validation of cloud models for use in General Circulation Models (GCMs). This modeling approach also allows us to intercompare the predictions of the various cloud schemes within the same dynamical framework. The use of the PSU/NCAR mesoscale model also allows us to compare our results with FIRE-II (First International Satellite Cloud Climatology Project Regional Experiment) observations, instead of climate statistics. Though the approach is promising, our work to date revealed several difficulties. First, the model by design is limited in spatial coverage and is only run for 12 to 48 hours at a time. Hence the quality of the simulation depends heavily on the initial conditions. The poor quality of upper-tropospheric measurements of water vapor is well known, and the situation is particularly bad for mid-latitude winter since the coupling with the surface is less direct than in summer, so relying on the model to spin up a reasonable moisture field is not always successful. Though water vapor is one of the most common atmospheric constituents, it is relatively difficult to measure accurately, especially operationally over large areas. The standard NWS sondes have little sensitivity at the low temperatures where cirrus form, and the data from the GOES 6.7 micron channel are difficult to quantify. For this reason, the goals of FIRE Cirrus II included characterizing the three-dimensional distribution of water vapor and clouds. In studying the data from FIRE Cirrus II, it was found that no single special observation technique provides accurate regional distributions of water vapor. The Raman lidar provides accurate measurements, but only at the Hub, for levels up to 10 km, and during nighttime hours. The CLASS sondes are more sensitive to moisture at low temperatures than the NWS sondes, but the four stations only cover an area two hundred kilometers on a side. The aircraft give the most accurate measurements of water vapor, but are limited in spatial and temporal coverage. This problem is partly alleviated by the use of the MAPS analyses, a four-dimensional data assimilation system that combines the previous 3-hour forecast with the available observations, but its upper-level moisture analyses are sometimes deficient because of the vapor measurement problem. An attempt was made to create a consistent four-dimensional description of the water vapor distribution during the second IFO by subjectively combining data from a variety of sources, including MAPS analyses, CLASS sondes, SPECTRE sondes, NWS sondes, GOES satellite analyses, radars, lidars, and microwave radiometers.

  1. Distributed Parallel Computing in Data Analysis of Osteoporosis.

    PubMed

    Waleska Simões, Priscyla; Venson, Ramon; Comunello, Eros; Casagrande, Rogério Antônio; Bigaton, Everson; da Silva Carlessi, Lucas; da Rosa, Maria Inês; Martins, Paulo João

    2015-01-01

    This research compared the performance of two load-balancing algorithms (Proportional and Autotuned) of the JPPF platform when processing data-mining tasks on a database of osteoporosis and osteopenia cases. Analysis of the execution times showed that the Proportional algorithm performed better in all cases. PMID:26262381

  2. Flaw strength distributions and statistical parameters for ceramic fibers: The normal distribution

    NASA Astrophysics Data System (ADS)

    R'Mili, M.; Godin, N.; Lamon, J.

    2012-05-01

    The present paper investigates large sets of ceramic fibre failure strengths (500 to 1000 data points) produced using tensile tests on tows that contained either 500 or 1000 filaments. The probability density function was determined through acoustic emission monitoring, which allowed detection and counting of filament fractures. The statistical distribution of filament strengths was described using the normal distribution. The Weibull equation was then fitted to this normal distribution for estimation of the statistical parameters. A perfect agreement between both distributions was obtained, and quite negligible scatter in the statistical parameters was observed, as opposed to the wide variability reported in the literature. Thus it was concluded that the flaw strengths are distributed normally and that the statistical parameters derived here are the true ones. In a second step, the conventional method of estimating Weibull parameters was applied to these sets of data and then to randomly selected subsets. The influence of other factors involved in the conventional method of determining statistical parameters is discussed. It is demonstrated that the selection of specimens, the sample size, and the method of constructing the so-called Weibull plots are responsible for the variability in the statistical parameters.
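
    The "conventional method" tested here amounts to ranking the strengths, assigning median-rank failure probabilities, and fitting a straight line on the Weibull plot. A minimal sketch on synthetic strengths (the true modulus and scale below are arbitrary):

        # Weibull-plot estimation: fit ln(-ln(1-F)) against ln(strength);
        # the slope is the Weibull modulus m, the intercept gives eta.
        import numpy as np

        rng = np.random.default_rng(1)
        m_true, eta_true = 8.0, 2.5                  # modulus, scale (GPa)
        s = np.sort(eta_true * rng.weibull(m_true, size=500))

        n = len(s)
        F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)  # Bernard's median ranks
        x, y = np.log(s), np.log(-np.log(1.0 - F))
        m_est, c = np.polyfit(x, y, 1)               # y = m ln s - m ln eta
        print(f"m ~ {m_est:.2f}, eta ~ {np.exp(-c / m_est):.2f} GPa")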

  3. A pair distribution function analysis of zeolite beta

    SciTech Connect

    Martinez-Inesta, M.M.; Peral, I.; Proffen, T.; Lobo, R.F.

    2010-07-20

    We describe the structural refinement of zeolite beta using the local structure obtained with the pair distribution function (PDF) method. One high-quality synchrotron dataset and two neutron scattering datasets were obtained on two samples of siliceous zeolite beta. The two polytypes that make up zeolite beta have the same local structure; therefore refinement of the two structures was possible using the same experimental PDF. Optimized structures of polytypes A and B were used to refine the structures using the program PDFfit. Refinements using only the synchrotron or the neutron datasets gave results inconsistent with each other, but a cyclic refinement with the two datasets gave a good fit to both PDFs. The results show that the PDF method is a viable technique for analyzing the local structure of disordered zeolites. However, given the complexity of most zeolite frameworks, the use of both X-ray and neutron radiation and high-resolution patterns is essential to obtain reliable refinements.

  4. Parallelization of Finite Element Analysis Codes Using Heterogeneous Distributed Computing

    NASA Technical Reports Server (NTRS)

    Ozguner, Fusun

    1996-01-01

    Performance gains in computer design are quickly consumed as users seek to analyze larger problems to a higher degree of accuracy. Innovative computational methods, such as parallel and distributed computing, seek to multiply the power of existing hardware technology to satisfy the computational demands of large applications. In the early stages of this project, experiments were performed using two large, coarse-grained applications, CSTEM and METCAN. These applications were parallelized on an Intel iPSC/860 hypercube. It was found that the overall speedup was very low, due to large, inherently sequential code segments present in the applications. The overall parallel execution time, T_par, of the application is dependent on these sequential segments. If these segments make up a significant fraction of the overall code, the application will have a poor speedup measure.
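
    The behavior described is Amdahl's law: with a sequential fraction s of the work, speedup on N processors is bounded by 1/(s + (1 - s)/N), and by 1/s as N grows. A quick illustration with hypothetical sequential fractions:

        # Amdahl's law: a sequential fraction s caps the parallel speedup.
        def speedup(s, N):
            return 1.0 / (s + (1.0 - s) / N)

        for s in (0.05, 0.25, 0.50):          # hypothetical fractions
            print(f"s={s:.2f}: N=32 -> {speedup(s, 32):4.1f}x, "
                  f"limit -> {1.0 / s:.0f}x")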

  5. An analysis of the Seasat Satellite Data Distribution System

    NASA Technical Reports Server (NTRS)

    Ferrari, A. J.; Renfrow, J. T.

    1980-01-01

    A computerized data distribution network for remote accessing of Seasat generated data is described. The service is intended as an experiment to determine user needs and operational abilities for utilizing on-line satellite generated oceanographic data. Synoptic weather observations are input to the U.S. Fleet Numerical Oceanographic Central for preparation and transfer to a PDP 11/60 central computer, from which all access trunks originate. The data available includes meteorological and sea-state information in the form of analyses and forecasts, and users are being monitored for reactions to the system design, data products, system operation, and performance evaluation. The system provides data on sea level and upper atmospheric pressure, sea surface temperature, wind magnitude and direction, significant wave heights, direction, and periods, and spectral wave data. Transmissions have a maximum rate of 1.1 kbit/sec over the telephone line.

  6. Quantitative analysis of inclusion distributions in hot pressed silicon carbide

    SciTech Connect

    Michael Paul Bakas

    2012-12-01

    Depth of penetration measurements in hot pressed SiC have exhibited significant variability that may be influenced by microstructural defects. To obtain a better understanding of the role of microstructural defects under highly dynamic conditions, fragments of hot pressed SiC plates subjected to impact tests were examined. Two types of inclusion defects were identified: carbonaceous inclusions and an aluminum-iron-oxide phase. A disproportionate number of large inclusions were found on the rubble, indicating that the inclusion defects took part in the fragmentation process. Distribution functions were plotted to compare the inclusion populations. Fragments from the superior performing sample had an inclusion population consisting of more numerous but smaller inclusions. One possible explanation for this result is that the superior sample withstood a greater stress before failure, causing a greater number of smaller inclusions to participate in fragmentation than in the weaker sample.
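
    Comparing two empirical inclusion-size populations, as done here with distribution functions, can be sketched with a two-sample Kolmogorov-Smirnov test; the lognormal sizes below are synthetic stand-ins, not the study's measurements.

        # Compare two inclusion-size populations via their empirical CDFs.
        import numpy as np
        from scipy.stats import ks_2samp

        rng = np.random.default_rng(2)
        sizes_a = rng.lognormal(2.0, 0.5, size=300)   # more, smaller
        sizes_b = rng.lognormal(2.4, 0.5, size=120)   # fewer, larger

        stat, p = ks_2samp(sizes_a, sizes_b)
        print(f"KS statistic {stat:.3f}, p = {p:.2e}")
        print(f"medians: {np.median(sizes_a):.1f} vs {np.median(sizes_b):.1f} um")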

  7. Analysis of phase distribution phenomena in microgravity environments

    NASA Technical Reports Server (NTRS)

    Lahey, Richard, Jr.; Bonetto, Fabian

    1994-01-01

    In the past, one of NASA's primary emphases has been on identifying single and multiphase flow experiments which can produce new discoveries that are not possible except in a microgravity environment. While such experiments are obviously of great scientific interest, they do not necessarily provide NASA with the ability to use multiphase processes for power production and/or utilization in space. The purpose of the research presented in this paper is to demonstrate the ability of multidimensional two-fluid models for bubbly two-phase flow to accurately predict lateral phase distribution phenomena in microgravity environments. If successful, this research should provide NASA with mechanistically based analytical methods which can be used for multiphase space design and evaluation, and should be the basis for future shuttle experiments for model verification.

  8. Completion report harmonic analysis of electrical distribution systems

    SciTech Connect

    Tolbert, L.M.

    1996-03-01

    Harmonic currents have increased dramatically in electrical distribution systems in the last few years due to the growth in non-linear loads found in most electronic devices. Because electrical systems have been designed for linear voltage and current waveforms (i.e., nearly sinusoidal), non-linear loads can cause serious problems such as overheating conductors or transformers, capacitor failures, inadvertent circuit breaker tripping, or malfunction of electronic equipment. The U.S. Army Center for Public Works has proposed a study to determine which devices are best for reducing or eliminating the effects of harmonics on power systems typical of those existing in their Command, Control, Communication and Intelligence (C3I) sites.
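
    The harmonic content such a study quantifies is commonly summarized as total harmonic distortion (THD), the RMS of the harmonics relative to the fundamental. A minimal FFT-based sketch on a synthetic distorted load current (the harmonic amplitudes are hypothetical):

        # THD of a sampled waveform: harmonics 2..10 relative to the
        # fundamental. One second at fs = 12 kHz gives 1 Hz FFT bins.
        import numpy as np

        fs, f0 = 12_000, 60                     # sample rate, fundamental (Hz)
        t = np.arange(fs) / fs
        i = (np.sin(2*np.pi*f0*t) + 0.20*np.sin(2*np.pi*3*f0*t)
             + 0.10*np.sin(2*np.pi*5*f0*t))     # odd harmonics, typical of
                                                # non-linear electronic loads
        X = np.abs(np.fft.rfft(i)) / len(i)
        fund = X[f0]                            # bin index equals frequency
        harm = np.sqrt(sum(X[k*f0]**2 for k in range(2, 11)))
        print(f"THD = {100*harm/fund:.1f} %")   # ~22.4% for these amplitudes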

  9. Finite key analysis for symmetric attacks in quantum key distribution

    SciTech Connect

    Meyer, Tim; Kampermann, Hermann; Kleinmann, Matthias; Bruss, Dagmar

    2006-10-15

    We introduce a constructive method to calculate the achievable secret key rate for a generic class of quantum key distribution protocols, when only a finite number n of signals is given. Our approach is applicable to all scenarios in which the quantum state shared by Alice and Bob is known. In particular, we consider the six state protocol with symmetric eavesdropping attacks, and show that for a small number of signals, i.e., below n ≈ 10^4, the finite key rate differs significantly from the asymptotic value for n → ∞. However, for larger n, a good approximation of the asymptotic value is found. We also study secret key rates for protocols using higher-dimensional quantum systems.

  10. Southern Arizona riparian habitat: Spatial distribution and analysis

    NASA Technical Reports Server (NTRS)

    Lacey, J. R.; Ogden, P. R.; Foster, K. E.

    1975-01-01

    The objectives of this study were centered around the demonstration of remote sensing as an inventory tool and researching the multiple uses of riparian vegetation. Specific study objectives were to: (1) map riparian vegetation along the Gila River, San Simon Creek, San Pedro River, and Pantano Wash; (2) determine the feasibility of automated mapping using LANDSAT-1 computer compatible tapes; (3) locate and summarize existing maps delineating riparian vegetation; (4) summarize data relevant to Southern Arizona's riparian products and uses; (5) document recent riparian vegetation changes along a selected portion of the San Pedro River; (6) summarize historical changes in the composition and distribution of riparian vegetation; and (7) summarize sources of available photography pertinent to Southern Arizona.

  11. Photoelastic analysis of stress distribution with different implant systems.

    PubMed

    Pellizzer, Eduardo Piza; Carli, Rafael Imai; Falcón-Antenucci, Rosse Mary; Verri, Fellippo Ramos; Goiato, Marcelo Coelho; Villa, Luiz Marcelo Ribeiro

    2014-04-01

    The aim of this study was to evaluate stress distribution with different implant systems through photoelasticity. Five models were fabricated with photoelastic resin PL-2. Each model was composed of a block of photoelastic resin (10 × 40 × 45 mm) with an implant and a healing abutment: model 1, internal hexagon implant (4.0 × 10 mm; Conect AR, Conexão, São Paulo, Brazil); model 2, Morse taper/internal octagon implant (4.1 × 10 mm; Standard, Straumann ITI, Andover, Mass); model 3, Morse taper implant (4.0 × 10 mm; AR Morse, Conexão); model 4, locking taper implant (4.0 × 11 mm; Bicon, Boston, Mass); model 5, external hexagon implant (4.0 × 10 mm; Master Screw, Conexão). Axial and oblique (45°) loads of 150 N were applied by a universal testing machine (EMIC-DL 3000), and a circular polariscope was used to visualize the stress. The results were photographed and analyzed qualitatively using Adobe Photoshop software. For the axial load, the greatest stress concentration was exhibited in the cervical and apical thirds. However, for the oblique load, the highest number of isochromatic fringes was observed at the implant apex and in the cervical region adjacent to the load direction in all models. Model 2 (Morse taper, internal octagon, Straumann ITI) presented the lowest stress concentration, while model 5 (external hexagon, Master Screw, Conexão) exhibited the greatest stress. It was concluded that Morse taper implants presented a more favorable stress distribution among the test groups. The external hexagon implant showed the highest stress concentration. Oblique load generated the highest stress in all models analyzed. PMID:22208909

  12. Failure properties and strain distribution analysis of meniscal attachments.

    PubMed

    Villegas, Diego F; Maes, Jason A; Magee, Sarah D; Donahue, Tammy L Haut

    2007-01-01

    The menisci are frequently injured due to both degeneration and traumatic tearing. It has been suggested that the success of a meniscal replacement is dependent on several factors, one of which is the secure fixation and firm attachment of the replacement to the tibial plateau. Therefore, the objectives of the current study were to (1) determine the failure properties of the meniscal horn attachments, and (2) determine the strain distribution over their surfaces. Eight bovine knee joints were used to study the mechanical response of the meniscal attachments. Three meniscal attachments from one knee of each animal were tested in uniaxial tension at 2%/s to determine the load-deformation response. During the tests, the samples were marked and local strain distributions were determined with a video extensometer. The linear modulus of the medial anterior attachment (154 ± 134 MPa) was significantly less than both the medial posterior (248 ± 179 MPa, p=0.0111) and the lateral anterior attachment (281 ± 214 MPa, p=0.0007). Likewise, the ultimate strain for the medial anterior attachments (13.5 ± 8.8%) was significantly less than the medial posterior (23 ± 13%, p<0.0001) and the lateral anterior attachment (20.3 ± 11.1%, p=0.0033). There were no significant differences in the structural properties or ultimate stress between the meniscal attachments (p>0.05). No significant differences in ultimate strain or moduli across the surface of the attachments were noted. Based on the data obtained, a meniscal replacement would need different moduli for each of the different attachments. However, the attachments appear to be homogeneous. PMID:17359982

  13. Model of the reliability analysis of the distributed computer systems with architecture "client-server"

    NASA Astrophysics Data System (ADS)

    Kovalev, I. V.; Zelenkov, P. V.; Karaseva, M. V.; Tsarev, M. Yu; Tsarev, R. Yu

    2015-01-01

    The paper considers the reliability analysis of distributed computer systems with client-server architecture. A distributed computer system is a set of hardware and software implementing the following main functions: processing, storage, transmission, and protection of data. The paper presents a scheme of the functioning of the distributed computer system, represented as a graph whose vertices are the functional states of the system and whose arcs are transitions from one state to another depending on the prevailing conditions. The reliability analysis considers indicators such as the probability of the system transitioning into stopping or accident states, as well as the intensities of these transitions. The proposed model yields relations for the reliability parameters of the distributed computer system without any assumptions about the distribution laws of the random variables or the number of elements in the system.
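
    A toy version of such a state-graph model is a continuous-time Markov chain; the four states and transition intensities below are hypothetical, not the paper's, but the computation of the stopping- and accident-state probabilities has the same form.

        # Four-state reliability model; rows of generator Q sum to zero and
        # the state probabilities evolve as p(t) = p(0) @ expm(Q t).
        import numpy as np
        from scipy.linalg import expm

        # states: 0 operating, 1 degraded, 2 stopped, 3 accident (per hour)
        Q = np.array([[-0.02,  0.02,  0.000, 0.000],
                      [ 0.05, -0.08,  0.025, 0.005],
                      [ 0.00,  0.00,  0.000, 0.000],   # stopped: absorbing
                      [ 0.00,  0.00,  0.000, 0.000]])  # accident: absorbing

        p0 = np.array([1.0, 0.0, 0.0, 0.0])
        for hours in (100, 1_000, 10_000):
            p = p0 @ expm(Q * hours)
            print(f"t={hours:>6} h: P(stop)={p[2]:.3f}, P(accident)={p[3]:.3f}")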

  14. Advanced analysis of metal distributions in human hair

    SciTech Connect

    Kempson, Ivan M.; Skinner, William M.

    2008-06-09

    A variety of techniques (scanning electron microscopy with energy dispersive X-ray analysis, time-of-flight secondary ion mass spectrometry, and synchrotron X-ray fluorescence) were utilized to distinguish metal contamination of hair from endogenous uptake in an individual exposed to a polluted environment, in this case a lead smelter. Evidence was sought for elements less affected by contamination and potentially indicative of biogenic activity. The unique combination of surface sensitivity, spatial resolution, and detection limits used here has provided new insight regarding hair analysis. Metals such as Ca, Fe, and Pb appeared to have little value as indicators of endogenous uptake and were mainly due to contamination. Cu and Zn, however, demonstrate behaviors worthy of further investigation into relating hair concentrations to endogenous function.

  15. Distributed Finite Element Analysis Using a Transputer Network

    NASA Technical Reports Server (NTRS)

    Watson, James; Favenesi, James; Danial, Albert; Tombrello, Joseph; Yang, Dabby; Reynolds, Brian; Turrentine, Ronald; Shephard, Mark; Baehmann, Peggy

    1989-01-01

    The principal objective of this research effort was to demonstrate the extraordinarily cost effective acceleration of finite element structural analysis problems using a transputer-based parallel processing network. This objective was accomplished in the form of a commercially viable parallel processing workstation. The workstation is a desktop size, low-maintenance computing unit capable of supercomputer performance yet costs two orders of magnitude less. To achieve the principal research objective, a transputer based structural analysis workstation termed XPFEM was implemented with linear static structural analysis capabilities resembling commercially available NASTRAN. Finite element model files, generated using the on-line preprocessing module or external preprocessing packages, are downloaded to a network of 32 transputers for accelerated solution. The system currently executes at about one third Cray X-MP24 speed but additional acceleration appears likely. For the NASA selected demonstration problem of a Space Shuttle main engine turbine blade model with about 1500 nodes and 4500 independent degrees of freedom, the Cray X-MP24 required 23.9 seconds to obtain a solution while the transputer network, operated from an IBM PC-AT compatible host computer, required 71.7 seconds. Consequently, the $80,000 transputer network demonstrated a cost-performance ratio about 60 times better than the $15,000,000 Cray X-MP24 system.

  16. Microwave circuit analysis and design by a massively distributed computing network

    NASA Astrophysics Data System (ADS)

    Vai, Mankuan; Prasad, Sheila

    1995-05-01

    The advances in microelectronic engineering have rendered massively distributed computing networks practical and affordable. This paper describes one application of this distributed computing paradigm to the analysis and design of microwave circuits. A distributed computing network, constructed in the form of a neural network, is developed to automate the operations typically performed on a normalized Smith chart. Examples showing the use of this computing network for impedance matching and stabilizing are provided.

  17. [Vertical Distribution Characteristics and Analysis in Sediments of Xidahai Lake].

    PubMed

    Duan, Mu-chun; Xiao, Hai-feng; Zang, Shu-ying

    2015-07-01

    The organic matter (OM), total nitrogen (TN), total phosphorus (TP), phosphorus forms, and particle size in a columnar sediment core from Xidahai Lake were analyzed to discuss their vertical distribution characteristics and influencing factors. The results showed that the contents of OM, TN and TP were 0.633%-2.756%, 0.150%-0.429% and 648.00-1480.67 mg·kg⁻¹, respectively. The contents of Ca-P, IP and OM varied little, while the contents of Fe/Al-P, OP, TP and TN fluctuated from 1843 to 1970. The contents of Ca-P, IP and TP tended to decrease, the contents of Fe/Al-P, OP and OM first decreased and then increased to different degrees, and TN fluctuated strongly from 1970 to 1996. The nutrient contents fluctuated considerably from 1996 to 2009, when the average contents of Fe/Al-P, OP and OM were the highest of the three stages. The nutrient pollution recorded in the sediment core mainly originated from industrial wastewater, sewage, and fertilizer losses around Xidahai Lake. The C/N ratio in the sediments showed that the organic matter was mainly derived from aquatic organisms. The sediment particle size composition was dominated by clay and fine silt. Correlation analysis showed that Ca-P, IP and TP were significantly positively correlated, indicating a large contribution of Ca-P to the growth of IP and TP. PMID:26489314

  18. Secure distributed genome analysis for GWAS and sequence comparison computation

    PubMed Central

    2015-01-01

    Background The rapid increase in the availability and volume of genomic data makes significant advances in biomedical research possible, but sharing of genomic data poses challenges due to the highly sensitive nature of such data. To address the challenges, a competition for secure distributed processing of genomic data was organized by the iDASH research center. Methods In this work we propose techniques for securing computation with real-life genomic data for minor allele frequency and chi-squared statistics computation, as well as distance computation between two genomic sequences, as specified by the iDASH competition tasks. We put forward novel optimizations, including a generalization of a version of mergesort, which might be of independent interest. Results We provide implementation results of our techniques based on secret sharing that demonstrate practicality of the suggested protocols and also report on performance improvements due to our optimization techniques. Conclusions This work describes our techniques, findings, and experimental results developed and obtained as part of iDASH 2015 research competition to secure real-life genomic computations and shows feasibility of securely computing with genomic data in practice. PMID:26733307
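
    The secret-sharing machinery the authors report on can be illustrated (as a toy, not their protocol) by additive sharing over a prime field: each site splits its allele count into random shares, and only the aggregate is ever reconstructed.

        # Additive secret sharing: shares of each count sum to the count
        # mod P; parties sum their share columns, so no input is revealed.
        import random

        P = 2**61 - 1                              # prime modulus

        def share(x, n):
            s = [random.randrange(P) for _ in range(n - 1)]
            return s + [(x - sum(s)) % P]

        counts = [2, 1, 0, 2, 1]                   # toy minor-allele counts
        shares = [share(c, 3) for c in counts]     # 3 computing parties
        party_sums = [sum(col) % P for col in zip(*shares)]
        print(sum(party_sums) % P)                 # reconstructed total: 6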

  19. Analysis of Glial Distribution in Drosophila Adult Brains.

    PubMed

    Ou, Jiayao; Gao, Zongbao; Song, Li; Ho, Margaret S

    2016-04-01

    Neurons and glia are the two major cell types in the nervous system and work closely with each other to program neuronal interplay. Traditionally, neurons are thought to be the major cells that actively regulate processes like synapse formation, plasticity, and behavioral output. Glia, on the other hand, serve a more supporting role. To date, accumulating evidence has suggested that glia are active participants in virtually every aspect of neuronal function. Despite this, fundamental features of how glia interact with neurons, and their spatial relationships, remain elusive. Here, we describe the glial cell population in Drosophila adult brains. Glial cells extend and tightly associate their processes with major structures such as the mushroom body (MB), ellipsoid body (EB), and antennal lobe (AL) in the brain. Glial cells are distributed in a more concentrated manner in the MB. Furthermore, subsets of glia exhibit distinctive association patterns around different neuronal structures. Whereas processes extended by astrocyte-like glia and ensheathing glia wrap around the MB and infiltrate into the EB and AL, cortex glia stay where cell bodies of neurons are and remain outside of the synaptic regions structured by EB or AL. PMID:26810782

  20. Effects of specimen size on the flexural strength and Weibull modulus of nuclear graphite IG-110, NBG-18, and PCEA

    NASA Astrophysics Data System (ADS)

    Chi, Se-Hwan

    2015-09-01

    Changes in flexural strength and Weibull modulus due to specimen size were investigated for three nuclear graphite grades, IG-110, NBG-18, and PCEA, using four-point-1/3 point (4-1/3) loading with specimens of three different sizes: 3.18 (thickness) × 6.35 (width) × 50.8 (length) mm, 6.50 (T) × 12.0 (W) × 52.0 (L) mm, and 18.0 (T) × 16.0 (W) × 64 (L) mm (total: 210 specimens). Results showed that specimen size effects were grade dependent: while NBG-18 showed rather significant specimen size effects (37% difference between the 3 T and 18 T specimens), the differences in IG-110 and PCEA were 7.6-15%. The maximum differences in flexural strength due to specimen size were larger in PCEA and NBG-18, which have larger coke particles (medium grain size: >300 μm), than in IG-110 with its super fine coke particle size (25 μm). The Weibull modulus showed a data population dependency, in that it decreased with an increasing number of data points used for modulus determination. A good correlation between fracture surface roughness and flexural strength was confirmed.
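
    The reported dependence of the Weibull modulus on the number of data points is easy to reproduce by simulation: maximum-likelihood estimates of the modulus are biased upward for small samples. A sketch (the true modulus of 10 is arbitrary):

        # ML Weibull modulus versus sample size on synthetic strengths.
        import numpy as np
        from scipy.stats import weibull_min

        rng = np.random.default_rng(3)
        m_true, eta = 10.0, 100.0
        for n in (10, 30, 100, 300):
            est = [weibull_min.fit(eta * rng.weibull(m_true, n), floc=0)[0]
                   for _ in range(200)]
            print(f"n={n:>3}: mean estimated m = {np.mean(est):.2f}")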

  1. Preliminary evaluation of diabatic heating distribution from FGGE level 3b analysis data

    NASA Technical Reports Server (NTRS)

    Kasahara, A.; Mizzi, A. P.

    1985-01-01

    A method is presented for calculating the global distribution of the diabatic heating rate. Preliminary results of the global heating rate evaluated from the European Centre for Medium-Range Weather Forecasts Level IIIb analysis data are also presented.

  2. Competing risk models in reliability systems, a gamma distribution model with bayesian analysis approach

    NASA Astrophysics Data System (ADS)

    Iskandar, Ismed

    2016-02-01

    In this paper our effort is to introduce the basic notions that constitute a competing risks model in reliability analysis using a Bayesian approach, with the Gamma distribution as our model, and to present the corresponding analytic methods. The Gamma distribution is widely used in reliability analysis and is known as a natural extension of the exponential distribution. The cases are limited to models with independent causes of failure in which only the scale parameter is a random variable, and a uniform prior distribution is used in our analysis. The paper describes the likelihood function, followed by the posterior function and the estimation of the point, interval, hazard function, and reliability values. The net probability of failure if only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
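
    Under these assumptions (known shape, random scale, uniform prior), the posterior over the scale parameter can simply be evaluated on a grid; the failure times and shape below are synthetic placeholders, not the paper's data.

        # Grid posterior for the Gamma scale parameter with known shape
        # and a uniform prior over the grid's support.
        import numpy as np
        from scipy.stats import gamma

        a = 2.0                                     # known shape
        t = np.array([120.0, 95.0, 210.0, 160.0, 75.0])  # failure times

        theta = np.linspace(10, 300, 2000)          # uniform prior support
        loglik = sum(gamma.logpdf(ti, a, scale=theta) for ti in t)
        post = np.exp(loglik - loglik.max())
        post /= np.trapz(post, theta)               # normalize

        print(f"posterior mean of scale: {np.trapz(theta * post, theta):.1f}")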

  3. Photoelastic analysis of stress distribution in oral rehabilitation.

    PubMed

    Turcio, Karina Helga Leal; Goiato, Marcelo Coelho; Gennari Filho, Humberto; dos Santos, Daniela Micheline

    2009-03-01

    The purpose of this study was to present a literature review about photoelasticity, a laboratory method for evaluating the behavior of implant-supported prostheses. Fixed or removable prostheses function as levers on supporting teeth, allowing forces to cause tooth movement if not carefully planned. Hence, during treatment planning, the dentist must be aware of the biomechanics involved and prevent movement of supporting teeth, decreasing the lever-type forces generated by these prostheses. Photoelastic analysis has great applicability in restorative dentistry as it allows prediction and minimization of biomechanical critical points through modifications in treatment planning. PMID:19305247

  4. Impact analysis of automotive structures with distributed smart material systems

    NASA Astrophysics Data System (ADS)

    Peelamedu, Saravanan M.; Naganathan, Ganapathy; Buckley, Stephen J.

    1999-06-01

    A new class of automobiles has structural skins that are quite different from current designs. In particular, new families of composite skins are being developed with new injection molding processes. These skins, while supporting the concept of lighter vehicles of the future, are also susceptible to damage upon impact. It is important that their design be based on a better understanding of the types of impact loads and the resulting strains and damage. It is possible that these skins can be integrally designed with active materials to counter damage. This paper presents a preliminary analysis of a new class of automotive skins, using piezoceramic as a smart material. The main objective is to consider the complex system with the skin modeled as a layered plate structure involving a lightweight material with foam and active materials embedded in it. To begin with, a cantilever beam structure is subjected to a load through a piezoceramic, and the resulting strain at the active material site is predicted, accounting for the material properties and the piezoceramic and adhesive thicknesses, including the effect of the adhesive. A finite element analysis is carried out for comparison with experimental work. Further work in this direction would provide an analytical tool that forms the basis for algorithms to predict and counter impacts on the future class of automobiles.

  5. A numerical method for the stress analysis of stiffened-shell structures under nonuniform temperature distributions

    NASA Technical Reports Server (NTRS)

    Heldenfels, Richard R

    1951-01-01

    A numerical method is presented for the stress analysis of stiffened-shell structures of arbitrary cross section under nonuniform temperature distributions. The method is based on a previously published procedure that is extended to include temperature effects and multicell construction. The application of the method to practical problems is discussed and an illustrative analysis is presented of a two-cell box beam under the combined action of vertical loads and a nonuniform temperature distribution.

  6. Structural reliability analysis of laminated CMC components

    NASA Technical Reports Server (NTRS)

    Duffy, Stephen F.; Palko, Joseph L.; Gyekenyesi, John P.

    1991-01-01

    For laminated ceramic matrix composite (CMC) materials to realize their full potential in aerospace applications, design methods and protocols are a necessity. This work focuses on the time-independent failure response of these materials and presents a reliability analysis associated with the initiation of matrix cracking. A public domain computer algorithm is highlighted that was coupled with the laminate analysis of a finite element code and serves as a design aid for analyzing structural components made from laminated CMC materials. Issues relevant to the effect of component size are discussed, and a parameter estimation procedure is presented. The estimation procedure allows three parameters to be calculated from a failure population that has an underlying Weibull distribution.
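
    The three-parameter estimation step can be sketched with scipy's weibull_min, whose maximum-likelihood fit returns shape (modulus), location (threshold stress), and scale; the synthetic strengths below are placeholders, and this is not the cited public-domain algorithm.

        # Unconstrained three-parameter Weibull fit on synthetic strengths.
        import numpy as np
        from scipy.stats import weibull_min

        rng = np.random.default_rng(4)
        data = weibull_min.rvs(c=6.0, loc=150.0, scale=300.0,
                               size=200, random_state=rng)   # strengths, MPa

        m, loc, eta = weibull_min.fit(data)
        print(f"m = {m:.2f}, threshold = {loc:.0f} MPa, eta = {eta:.0f} MPa")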

  7. Spatial interpolation of precipitation distributions using copulas

    NASA Astrophysics Data System (ADS)

    Bárdossy, András; Pegram, Geoffrey

    2013-04-01

    The interpolation of precipitation distributions is important, for example, for climatological studies, interpolation of precipitation on different time scales, and extreme value analyses. Spatial interpolation requires a certain degree of spatial continuity, which is in our case measured with the help of a Cramer-von Mises type statistic. Examples of daily precipitation measured over 4 regions in South Germany, with the number of stations ranging from 222 to 748, show a high degree of spatial continuity of the distributions. As a further step, the interpolation itself can be carried out by interpolating • the parameters of fitted Gamma or Weibull (or other appropriate) distributions • the moments of the distributions, with a subsequent fit of parametric distributions • the quantiles of the distributions directly. The interdependence between the variables to be interpolated makes this task extremely difficult in all three of the above cases. However, a straightforward analysis of the higher quantiles shows that their interdependence is extremely strong, allowing simultaneous interpolation of quantiles using copulas. Lower quantiles are less well structured, but they are subject to higher observation errors and are likely to be of less importance in hydrology. Thus the interpolation was carried out on the basis of the quantiles corresponding to values greater than 1 mm/day. Topographical influence on precipitation is considered as a covariate. The applied copula is a mixed truncated-Gaussian and Gaussian copula, which reflects the asymmetrical dependence between topography and precipitation quantiles. A split sampling and a cross validation methodology are used to evaluate the quality of the interpolation.
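
    The core trick, simulating strongly dependent quantiles through a Gaussian copula and mapping them onto fitted marginals, can be sketched as follows; the inter-site correlation and the Weibull marginals are hypothetical.

        # Gaussian copula: correlated normals -> correlated uniforms via the
        # normal CDF -> each site's marginal via its quantile function.
        import numpy as np
        from scipy.stats import norm, weibull_min

        rng = np.random.default_rng(5)
        R = np.array([[1.0, 0.7],
                      [0.7, 1.0]])                 # inter-site dependence
        z = rng.multivariate_normal(np.zeros(2), R, size=10_000)
        u = norm.cdf(z)                            # correlated uniforms

        # site-specific precipitation marginals (shape, scale in mm/day)
        p1 = weibull_min.ppf(u[:, 0], c=0.9, scale=6.0)
        p2 = weibull_min.ppf(u[:, 1], c=1.1, scale=8.0)
        print(f"corr of uniforms ~ {np.corrcoef(u[:, 0], u[:, 1])[0, 1]:.2f}")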

  8. Wavelet analysis of baryon acoustic structures in the galaxy distribution

    NASA Astrophysics Data System (ADS)

    Arnalte-Mur, P.; Labatie, A.; Clerc, N.; Martínez, V. J.; Starck, J.-L.; Lachièze-Rey, M.; Saar, E.; Paredes, S.

    2012-06-01

    Context. Baryon acoustic oscillations (BAO) are imprinted in the density field by acoustic waves travelling in the plasma of the early universe. Their fixed scale can be used as a standard ruler to study the geometry of the universe. Aims: The BAO have been previously detected using correlation functions and power spectra of the galaxy distribution. We present a new method to detect the real-space structures associated with BAO. These baryon acoustic structures are spherical shells of relatively small density contrast, surrounding high density central regions. Methods: We design a specific wavelet adapted to search for shells, and exploit the physics of the process by making use of two different mass tracers, introducing a specific statistic to detect the BAO features. We show the effect of the BAO signal in this new statistic when applied to the Λ - cold dark matter (ΛCDM) model, using an analytical approximation to the transfer function. We confirm the reliability and stability of our method by using cosmological N-body simulations from the MareNostrum Institut de Ciències de l'Espai (MICE). Results: We apply our method to the detection of BAO in a galaxy sample drawn from the Sloan Digital Sky Survey (SDSS). We use the "main" catalogue to trace the shells, and the luminous red galaxies (LRG) as tracers of the high density central regions. Using this new method, we detect, with a high significance, that the LRG in our sample are preferentially located close to the centres of shell-like structures in the density field, with characteristics similar to those expected from BAO. We show that stacking selected shells, we can find their characteristic density profile. Conclusions: We delineate a new feature of the cosmic web, the BAO shells. As these are real spatial structures, the BAO phenomenon can be studied in detail by examining those shells. Full Table 1 is only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/542/A34

  9. Extending the LWS Data Environment: Distributed Data Processing and Analysis

    NASA Technical Reports Server (NTRS)

    Narock, Thomas

    2005-01-01

    The final stages of this work saw changes to the original framework, as well as the completion and integration of several data processing services. Initially, it was thought that a peer-to-peer architecture was necessary to make this work possible. The peer-to-peer architecture provided many benefits, including the dynamic discovery of new services that would be continually added. A prototype example was built and, while it showed promise, a major disadvantage was that it was not easily integrated into the existing data environment. While the peer-to-peer system worked well for finding and accessing distributed data processing services, its use was limited by the difficulty of calling it from existing tools and services. After collaborations with members of the data community, it was determined that our data processing system was of high value and that a new interface should be pursued in order for the community to take full advantage of it. As such, the framework was modified from a peer-to-peer architecture to a more traditional web service approach. Following this change, multiple data processing services were added, including coordinate transformations and subsetting of data. A collaboration with the Virtual Heliospheric Observatory (VHO) assisted with integrating the new architecture into the VHO. This allows anyone using the VHO to search for data and then pass that data through our processing services prior to downloading it. As a second demonstration of the new system, a collaboration was established with the Collaborative Sun Earth Connector (CoSEC) group at Lockheed Martin. This group is working on a graphical user interface to the Virtual Observatories and data processing software. The intent is to provide a high-level, easy-to-use graphical interface that will allow access to the existing Virtual Observatories and data processing services from one convenient application. Working with the CoSEC group, we provided access to our data processing tools from within their software. This now allows the CoSEC community to take advantage of our services and also demonstrates another means of accessing our system.

  10. Independent Orbiter Assessment (IOA): Analysis of the Electrical Power Distribution and Control Subsystem, Volume 2

    NASA Technical Reports Server (NTRS)

    Schmeckpeper, K. R.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Electrical Power Distribution and Control (EPD and C) hardware. The EPD and C hardware performs the functions of distributing, sensing, and controlling 28 volt DC power and of inverting, distributing, sensing, and controlling 117 volt 400 Hz AC power to all Orbiter subsystems from the three fuel cells in the Electrical Power Generation (EPG) subsystem. Volume 2 continues the presentation of IOA analysis worksheets and contains the potential critical items list.

  11. An empirical analysis of the distribution of overshoots in a stationary Gaussian stochastic process

    NASA Technical Reports Server (NTRS)

    Carter, M. C.; Madison, M. W.

    1973-01-01

    The frequency distribution of overshoots in a stationary Gaussian stochastic process is analyzed. The primary tools used in this analysis are computer simulation and statistical estimation. Computer simulation is used to generate stationary Gaussian stochastic processes that have selected autocorrelation functions. An analysis of the simulation results reveals a frequency distribution for overshoots with a functional dependence on the mean and variance of the process. Statistical estimation is then used to estimate the mean and variance of a process. It is shown that, given an autocorrelation function and estimates of the mean and variance of the process, a frequency distribution for the overshoots can be estimated.
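
    The simulation side of such a study can be sketched by generating a stationary Gaussian series and counting upcrossings of a level u; the AR(1) autocorrelation below is one illustrative choice, and the exp(-u^2/(2 sigma^2)) factor is the dependence on the variance predicted by Rice's formula.

        # Count level upcrossings of a simulated stationary Gaussian process.
        import numpy as np

        rng = np.random.default_rng(6)
        phi, n = 0.9, 200_000
        x = np.empty(n)
        x[0] = rng.standard_normal()
        for k in range(1, n):                       # stationary AR(1), var 1
            x[k] = phi * x[k-1] + np.sqrt(1 - phi**2) * rng.standard_normal()

        sigma = x.std()
        for u in (0.0, 1.0, 2.0):
            ups = np.sum((x[:-1] < u) & (x[1:] >= u))
            print(f"u={u:.1f}: {ups} upcrossings, theory ratio "
                  f"{np.exp(-u**2 / (2 * sigma**2)):.2f}")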

  12. Distributional Benefit Analysis of a National Air Quality Rule

    PubMed Central

    Post, Ellen S.; Belova, Anna; Huang, Jin

    2011-01-01

    Under Executive Order 12898, the U.S. Environmental Protection Agency (EPA) must perform environmental justice (EJ) reviews of its rules and regulations. EJ analyses address the hypothesis that environmental disamenities are experienced disproportionately by poor and/or minority subgroups. Such analyses typically use communities as the unit of analysis. While community-based approaches make sense when considering where polluting sources locate, they are less appropriate for national air quality rules affecting many sources and pollutants that can travel thousands of miles. We compare exposures and health risks of EJ-identified individuals rather than communities to analyze EPA’s Heavy Duty Diesel (HDD) rule as an example national air quality rule. Air pollutant exposures are estimated within grid cells by air quality models; all individuals in the same grid cell are assigned the same exposure. Using an inequality index, we find that inequality within racial/ethnic subgroups far outweighs inequality between them. We find, moreover, that the HDD rule leaves between-subgroup inequality essentially unchanged. Changes in health risks depend also on subgroups’ baseline incidence rates, which differ across subgroups. Thus, health risk reductions may not follow the same pattern as reductions in exposure. These results are likely representative of other national air quality rules as well. PMID:21776207
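
    The inequality-index computation such an analysis rests on can be sketched with the Atkinson index, one common decomposable choice (the risk values below are illustrative, not the study's):

        # Atkinson index, aversion 0 < eps < 1: one minus the ratio of the
        # "equally distributed equivalent" risk to the mean risk.
        import numpy as np

        def atkinson(x, eps=0.5):
            x = np.asarray(x, dtype=float)
            ede = np.mean(x**(1 - eps))**(1 / (1 - eps))
            return 1 - ede / x.mean()

        risk = np.random.default_rng(7).lognormal(0.0, 0.8, 10_000)
        print(f"Atkinson(0.5) = {atkinson(risk):.3f}")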

  13. Biomechanical analysis of force distribution in human finger extensor mechanisms.

    PubMed

    Hu, Dan; Ren, Lei; Howard, David; Zong, Changfu

    2014-01-01

    The complexities of the function and structure of human fingers have long been recognised. The in vivo forces in the human finger tendon network during different activities are critical information for clinical diagnosis, surgical treatment, prosthetic finger design, and biomimetic hand development. In this study, we propose a novel method for in vivo force estimation for the finger tendon network by combining a three-dimensional motion analysis technique and a novel biomechanical tendon network model. The extensor mechanism of a human index finger is represented by an interconnected tendinous network moving around the phalanx's dorsum. A novel analytical approach based on the "Principle of Minimum Total Potential Energy" is used to calculate the forces and deformations throughout the tendon network of the extensor mechanism when subjected to an external load and with the finger posture defined by measurement data. The predicted deformations and forces in the tendon network are in broad agreement with the results obtained by previous experimental in vitro studies. The proposed methodology provides a promising tool for investigating the biomechanical function of complex interconnected tendon networks in vivo. PMID:25126576
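
    The "Principle of Minimum Total Potential Energy" step can be sketched on a toy elastic network: total energy is the elastic strain energy minus the work of the external load, minimized over the free nodal positions (the two-spring geometry below is hypothetical, not the finger model).

        # Equilibrium of a toy tendon (spring) network by energy minimization.
        import numpy as np
        from scipy.optimize import minimize

        anchors = np.array([[0.0, 0.0], [2.0, 0.0]])   # fixed attachments
        k, L0 = 50.0, 1.2                              # stiffness, rest length
        F = np.array([0.0, -1.0])                      # external load

        def energy(p):
            elastic = sum(0.5 * k * (np.linalg.norm(p - a) - L0)**2
                          for a in anchors)
            return elastic - F @ p                     # strain energy - work

        res = minimize(energy, x0=np.array([1.0, 0.8]))
        print("equilibrium node position:", np.round(res.x, 3))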

  14. [The method of analysis of distribution of erythrocytes by density: practical guidelines].

    PubMed

    Shukrina, E S; Nesterenko, V M; Tsvetaeva, N V; Nikulina, O F; Ataullakhanov, F I

    2014-07-01

    The article describes the phthalate method for analyzing the distribution of erythrocytes by density and demonstrates its capabilities. The distribution of erythrocytes by density is obtained by centrifugation of blood in micro-hematocrit capillaries in the presence of dimethyl and dibutyl phthalate mixtures of known density. The acquisition of clinically reliable parameters of the distribution of erythrocytes by density, such as the mean erythrocyte density, the width of the distribution, the light and heavy fractions of erythrocytes, and the maximum of the distribution curve, is described. The causes of deviation of the distribution of erythrocytes by density from standard values under various pathological conditions are considered. The syndrome of erythrocyte dehydration is described in detail. A simple and accessible method for acquiring the distribution of erythrocytes by density is presented. It is demonstrated that analysis of the distribution of erythrocytes by density makes it possible to determine the character of the changes occurring in erythrocytes. Monitoring the parameters of the distribution of erythrocytes by density allows evaluation of the dynamics of a pathological process and the effectiveness of therapy. PMID:25346987

  15. Advancing Collaborative Climate Studies through Globally Distributed Geospatial Analysis

    NASA Astrophysics Data System (ADS)

    Singh, R.; Percivall, G.

    2009-12-01

    (Note: acronym glossary at end of abstract.) For scientists to have confidence in the veracity of data sets and computational processes not under their control, operational transparency must be much greater than previously required. Being able to have a universally understood and machine-readable language for describing such things as the completeness of metadata, data provenance and uncertainty, and the discrete computational steps in a complex process takes on increased importance. OGC has been involved with technological issues associated with climate change since 2005 when we, along with the IEEE Committee on Earth Observation, began a close working relationship with GEO and GEOSS (http://earthobservations.org). GEO/GEOSS provide the technology platform to GCOS, which in turn represents the earth observation community to UNFCCC. OGC and IEEE are the organizers of the GEO/GEOSS Architecture Implementation Pilot (see http://www.ogcnetwork.net/AIpilot). This continuing work involves close collaboration with GOOS (Global Ocean Observing System) and WMO (World Meteorological Organization). This session reports on the findings of recent work within the OGC's community of software developers and users to apply geospatial web services to the climate studies domain. The value of this work is to evolve OGC web services, moving from data access and query to geo-processing and workflows. Two projects will be described, the GEOSS AIP-2 and the CCIP. AIP is a task of the GEOSS Architecture and Data Committee. During its duration, two GEO Tasks defined the project: AIP-2 began as GEO Task AR-07-02, to lead the incorporation of contributed components consistent with the GEOSS Architecture, using a GEO Web Portal and a Clearinghouse search facility to access services through GEOSS Interoperability Arrangements in support of the GEOSS Societal Benefit Areas. AIP-2 concluded as GEO Task AR-09-01b, to develop and pilot new process and infrastructure components for the GEOSS Common Infrastructure and the broader GEOSS architecture. Of specific interest to this session is the work on geospatial workflows, geo-processing, and data discovery and access. CCIP demonstrates standards-based interoperability between geospatial applications in the service of climate change analysis. CCIP is planned to be a yearly exercise. It consists of a network of online data services (WCS, WFS, SOS), analysis services (WPS, WCPS, WMS), and clients that exercise those services. In 2009, CCIP focuses on Australia and the initial application of existing OGC services to climate studies. The results of the 2009 CCIP will serve as requirements for more complex geo-processing services to be developed for CCIP 2010. The benefits of CCIP include accelerating the implementation of GCOS, and building confidence that implementations using multi-vendor interoperable technologies can help resolve vexing climate change questions. Acronyms: AIP-2: Architecture Implementation Pilot, Phase 2; CCIP: Climate Challenge Integration Plugfest; GEO: Group on Earth Observations; GEOSS: Global Earth Observing System of Systems; GCOS: Global Climate Observing System; OGC: Open Geospatial Consortium; SOS: Sensor Observation Service; WCS: Web Coverage Service; WCPS: Web Coverage Processing Service; WFS: Web Feature Service; WMS: Web Mapping Service.

  17. Radar signal analysis of ballistic missile with micro-motion based on time-frequency distribution

    NASA Astrophysics Data System (ADS)

    Wang, Jianming; Liu, Lihua; Yu, Hua

    2015-12-01

    The micro-motion of ballistic missile targets induces micro-Doppler modulation on the radar return signal, which is a unique feature for warhead discrimination during flight. In order to extract the micro-Doppler feature of ballistic missile targets, time-frequency analysis is employed to process the micro-Doppler modulated, time-varying radar signal. Time-frequency distribution (TFD) images reveal the micro-Doppler modulation characteristics very well. However, many time-frequency analysis methods can generate such images, including the short-time Fourier transform (STFT), the Wigner distribution (WD) and Cohen-class distributions. In the context of ballistic missile defence, the paper works out an effective time-frequency analysis method for discriminating ballistic missile warheads from decoys.
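
    A minimal sketch of the STFT route named above, assuming a toy baseband return with a sinusoidal micro-Doppler phase term; the signal parameters and the SciPy call are illustrative, not the paper's implementation:

    ```python
    import numpy as np
    from scipy.signal import stft

    fs = 1000.0                      # sampling rate (Hz), illustrative
    t = np.arange(0, 2.0, 1.0 / fs)  # 2 s observation window

    # Toy radar return: target micro-motion (e.g., precession) imposes a
    # sinusoidal phase modulation on the carrier-removed baseband signal.
    f_body = 50.0   # bulk Doppler (Hz)
    f_m = 4.0       # micro-motion frequency (Hz)
    beta = 20.0     # micro-Doppler modulation depth
    x = np.exp(1j * (2 * np.pi * f_body * t + beta * np.sin(2 * np.pi * f_m * t)))

    # Short-time Fourier transform -> time-frequency distribution image;
    # the micro-Doppler signature appears as a sinusoidal ridge around f_body.
    f, tau, Z = stft(x, fs=fs, nperseg=128, noverlap=96, return_onesided=False)
    tfd = np.abs(Z)
    ```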

  18. Numerical analysis of atomic density distribution in arc driven negative ion sources

    SciTech Connect

    Yamamoto, T.; Shibata, T.; Hatayama, A.; Kashiwagi, M.; Hanada, M.; Sawada, K.

    2014-02-15

    The purpose of this study is to calculate the atomic (H{sup 0}) density distribution in the JAEA 10 ampere negative ion source. A collisional radiative model is developed for the calculation of the H{sup 0} density distribution. The non-equilibrium feature of the electron energy distribution function (EEDF), which mainly determines the H{sup 0} production rate, is included by substituting the EEDF calculated from a 3D electron transport analysis. In this paper, the H{sup 0} production rate, the ionization rate, and the density distribution in the source chamber are calculated. In the region where high energy electrons exist, H{sup 0} production and ionization are enhanced. The calculated H{sup 0} density distribution, which does not yet include the effect of H{sup 0} transport, is relatively small in the upper region. In the next step, this effect should be taken into account to obtain a more realistic H{sup 0} distribution.

  19. Progress in Using the Generalized Wigner Distribution in the Analysis of Terrace-Width Distributions of Vicinal Surfaces

    NASA Astrophysics Data System (ADS)

    Cohen, S. D.; Richards, Howard L.; Einstein, T. L.

    2000-03-01

    The so-called generalized Wigner distribution (GWD) may provide at least as good a description of terrace width distributions (TWDs) on vicinal surfaces as the standard Gaussian fit [T.L. Einstein and O. Pierre-Louis, Surface Sci. 424, L299 (1999)]. It works well for weak elastic repulsion strengths A between steps (where the latter fails), as illustrated explicitly [S.D. Cohen, H.L. Richards, TLE, and M. Giesen, cond-mat/9911319] for vicinal Pt(110) [K. Swamy, E. Bertel, and I. Vilfan, Surface Sci. 425, L369 (1999)]. Applications to vicinal copper surfaces confirm the general viability of the new analysis procedure [M. Giesen and T.L. Einstein, submitted to Surface Sci.]. For troublesome data, we can treat the GWD as a two-parameter fit that allows the terrace widths to be scaled by an optimal effective mean width. With Monte Carlo simulations we show that for physical values of A, the GWD provides a better overall estimate than the Gaussian models. We quantify how the GWD approaches a Gaussian for large A and present a convenient but accurate new expression relating the variance of the TWD to A. We also mention how the discreteness of terrace widths impacts the standard continuum analysis.
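
    A minimal sketch of the two-parameter GWD fit described above, using the surmise form P(s) = a s^rho exp(-b s^2) with all three constants left free; the synthetic widths standing in for measured terrace widths and the fitting choices are our own assumptions:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gwd(s, rho, b, a):
        # Generalized Wigner surmise: P(s) = a * s**rho * exp(-b * s**2).
        # For a normalized TWD with unit mean, a and b are fixed by rho;
        # leaving them free mirrors the two-parameter (scaled-mean) fit.
        return a * s**rho * np.exp(-b * s**2)

    rng = np.random.default_rng(1)
    widths = rng.gamma(16.0, 0.25, size=4000)   # stand-in for measured widths
    s = widths / widths.mean()                  # scale by the mean width
    hist, edges = np.histogram(s, bins=30, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])

    popt, pcov = curve_fit(gwd, centers, hist, p0=(2.0, 1.0, 1.0))
    rho, b, a = popt   # rho relates to the step-step repulsion strength A
    ```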

  20. The discrete-time kinetic model analysis of DNA content distributions in experimental tumour cells.

    PubMed

    Woo, K B

    1979-03-01

    A method was developed to analyse and characterize FMF measurements of DNA content distribution, utilizing the discrete time kinetic (DTK) model for cell kinetics analysis. The DTK model determines the time sequence of the cell age distribution during the proliferation of a tumour cell population and simulates the distribution pattern of the DNA content of cells in each age compartment of the cell cycle. The cells in one age compartment are distributed and spread into several compartments of the DNA content distribution to allow for different rates of DNA synthesis and instrument dispersion effects. It is assumed that the DNA content of cells in each age compartment has a Gaussian distribution. Thus, for a given cell age distribution the DNA content distribution depends on two parameters of the cells in each age compartment: the average DNA content and its coefficient of variation. As the DTK model generates the best-fit DNA content distribution to the FMF measurement data, it enables one to estimate specific values of these two parameters in each stage of the cell cycle and to determine the fraction of cells in each cycle phase. The method was utilized to fit FMF measurements of DNA content distributions and to analyse their relationship to the cell kinetic parameters, namely cell loss rate, cell cycle times and growth fraction, of exponentially growing Chinese hamster ovary cells in vitro and, also, with a wide range of coefficients of variation, of the L1210 ascites tumour during the growth period. PMID:371812

  1. Assessing Ensemble Filter Estimates of the Analysis Error Distribution of the Day

    NASA Astrophysics Data System (ADS)

    Posselt, D. J.; Hodyss, D.; Bishop, C. H.

    2013-12-01

    Ensemble data assimilation algorithms (e.g., the Ensemble Kalman Filter) are often purported to return an estimate of the "analysis error distribution of the day"; a measure of the variability in the analysis that is consistent with the current state of the system. In this presentation, we demonstrate that in the presence of non-linearity this is not, in fact, the case. The true error distribution of the day given today's observations consists of the Bayesian posterior PDF formed via the conjunction of the prior forecast error distribution with the likelihood error distribution constructed from the observations of the day. In actuality, ensemble data assimilation algorithms return an estimate of the analysis error integrated over all prior realizations of the observations of the day. The result is consistent with the true posterior analysis uncertainty (as returned by a solution to Bayes) if the likelihood distribution produced by the observations of the day is approximately equal to the likelihood distribution integrated over all possible observations (or equivalently innovations).
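
    The distinction can be stated compactly. In the notation below (ours: x the state, y today's observations), ensemble filters estimate the posterior averaged over observation realizations rather than the posterior itself:

    ```latex
    % True "error distribution of the day", via Bayes with today's y:
    p(x \mid y) \;\propto\; p(y \mid x)\, p(x)
    % What the ensemble filter estimate corresponds to: the analysis error
    % integrated over all prior realizations of the observations of the day:
    \bar{p}(x) \;=\; \int p(x \mid y)\, p(y)\,\mathrm{d}y
    % The two agree when the likelihood for the y actually observed is
    % approximately the likelihood averaged over possible y, as stated above.
    ```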

  2. Spherical Harmonic Analysis of Particle Velocity Distribution Function: Comparison of Moments and Anisotropies using Cluster Data

    NASA Technical Reports Server (NTRS)

    Gurgiolo, Chris; Vinas, Adolfo F.

    2009-01-01

    This paper presents a spherical harmonic analysis of the plasma velocity distribution function using high-angular, energy, and time resolution Cluster data obtained from the PEACE spectrometer instrument to demonstrate how this analysis models the particle distribution function and its moments and anisotropies. The results show that spherical harmonic analysis produced a robust physical representation model of the velocity distribution function, resolving the main features of the measured distributions. From the spherical harmonic analysis, a minimum set of nine spectral coefficients was obtained from which the moment (up to the heat flux), anisotropy, and asymmetry calculations of the velocity distribution function were obtained. The spherical harmonic method provides a potentially effective "compression" technique that can be easily carried out onboard a spacecraft to determine the moments and anisotropies of the particle velocity distribution function for any species. These calculations were implemented using three different approaches, namely, the standard traditional integration, the spherical harmonic (SPH) spectral coefficients integration, and the singular value decomposition (SVD) on the spherical harmonic methods. A comparison among the various methods shows that both SPH and SVD approaches provide remarkable agreement with the standard moment integration method.
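
    A minimal sketch of extracting low-order spherical harmonic coefficients from an angular distribution by quadrature; the grid, the toy distribution and the truncation at l <= 2 (which yields nine coefficients, echoing the minimum set mentioned above) are our illustrative assumptions:

    ```python
    import numpy as np
    from scipy.special import sph_harm

    # Angular grid: polar theta in [0, pi], azimuth phi in [0, 2*pi).
    # The grid density is illustrative, not the PEACE instrument resolution.
    ntheta, nphi = 32, 64
    theta = np.linspace(0, np.pi, ntheta)
    phi = np.linspace(0, 2 * np.pi, nphi, endpoint=False)
    PHI, THETA = np.meshgrid(phi, theta)

    # f: angular slice of a velocity distribution at one energy step;
    # here a toy anisotropic distribution stands in for measured data.
    f = 1.0 + 0.5 * np.cos(THETA) ** 2

    def coeff(l, m):
        # c_lm = integral of f * conj(Y_lm) over the sphere (simple Riemann
        # quadrature). SciPy's sph_harm takes (m, l, azimuth, polar).
        Y = sph_harm(m, l, PHI, THETA)
        dtheta = np.pi / (ntheta - 1)
        dphi = 2 * np.pi / nphi
        return np.sum(f * np.conj(Y) * np.sin(THETA)) * dtheta * dphi

    # Low-order coefficient set, l <= 2: 1 + 3 + 5 = 9 coefficients.
    c = {(l, m): coeff(l, m) for l in range(3) for m in range(-l, l + 1)}
    ```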

  3. Just fracking: a distributive environmental justice analysis of unconventional gas development in Pennsylvania, USA

    NASA Astrophysics Data System (ADS)

    Clough, Emily; Bell, Derek

    2016-02-01

    This letter presents a distributive environmental justice analysis of unconventional gas development in the area of Pennsylvania lying over the Marcellus Shale, the largest shale gas formation in play in the United States. The extraction of shale gas using unconventional wells, which are hydraulically fractured (fracking), has increased dramatically since 2005. As the number of wells has grown, so have concerns about the potential public health effects on nearby communities. These concerns make shale gas development an environmental justice issue. This letter examines whether the hazards associated with proximity to wells and the economic benefits of shale gas production are fairly distributed. We distinguish two types of distributive environmental justice: traditional and benefit sharing. We ask the traditional question: are there a disproportionate number of minority or low-income residents in areas near to unconventional wells in Pennsylvania? However, we extend this analysis in two ways: we examine income distribution and level of education; and we compare before and after shale gas development. This contributes to discussions of benefit sharing by showing how the income distribution of the population has changed. We use a binary dasymetric technique to remap the data from the 2000 US Census and the 2009-2013 American Communities Survey and combine that data with a buffer containment analysis of unconventional wells to compare the characteristics of the population living nearer to unconventional wells with those further away before and after shale gas development. Our analysis indicates that there is no evidence of traditional distributive environmental injustice: there is not a disproportionate number of minority or low-income residents in areas near to unconventional wells. However, our analysis is consistent with the claim that there is benefit sharing distributive environmental injustice: the income distribution of the population nearer to shale gas wells has not been transformed since shale gas development.

  4. CARES/LIFE Ceramics Analysis and Reliability Evaluation of Structures Life Prediction Program

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Powers, Lynn M.; Janosik, Lesley A.; Gyekenyesi, John P.

    2003-01-01

    This manual describes the Ceramics Analysis and Reliability Evaluation of Structures Life Prediction (CARES/LIFE) computer program. The program calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. CARES/LIFE is an extension of the CARES (Ceramic Analysis and Reliability Evaluation of Structures) computer program. The program uses results from MSC/NASTRAN, ABAQUS, and ANSYS finite element analysis programs to evaluate component reliability due to inherent surface and/or volume type flaws. CARES/LIFE accounts for the phenomenon of subcritical crack growth (SCG) by utilizing the power law, Paris law, or Walker law. The two-parameter Weibull cumulative distribution function is used to characterize the variation in component strength. The effects of multiaxial stresses are modeled by using either the principle of independent action (PIA), the Weibull normal stress averaging method (NSA), or the Batdorf theory. Inert strength and fatigue parameters are estimated from rupture strength data of naturally flawed specimens loaded in static, dynamic, or cyclic fatigue. The probabilistic time-dependent theories used in CARES/LIFE, along with the input and output for CARES/LIFE, are described. Example problems to demonstrate various features of the program are also included.
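
    For orientation, a sketch of the two-parameter Weibull strength model mentioned above, in the size-scaled form often used for component reliability; the function and all numbers are illustrative assumptions, not taken from the CARES/LIFE manual:

    ```python
    import numpy as np

    def failure_probability(sigma, m, sigma0, volume=1.0, v_ref=1.0):
        # Two-parameter Weibull strength model (a sketch of the general form):
        # Pf = 1 - exp(-(V/V0) * (sigma/sigma0)**m), where m is the Weibull
        # modulus and sigma0 the characteristic strength of reference volume V0.
        return 1.0 - np.exp(-(volume / v_ref) * (sigma / sigma0) ** m)

    # Illustrative numbers only: a component stressed at 300 MPa with
    # m = 10 and sigma0 = 400 MPa for the reference volume.
    pf = failure_probability(sigma=300.0, m=10.0, sigma0=400.0)
    reliability = 1.0 - pf
    ```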

  5. Monte Carlo models and analysis of galactic disk gamma-ray burst distributions

    NASA Technical Reports Server (NTRS)

    Hakkila, Jon

    1989-01-01

    Gamma-ray bursts are transient astronomical phenomena which have no quiescent counterparts in any region of the electromagnetic spectrum. Although temporal and spectral properties indicate that these events are likely energetic, their unknown spatial distribution complicates astrophysical interpretation. Monte Carlo samples of gamma-ray burst sources are created which belong to Galactic disk populations. Spatial analysis techniques are used to compare these samples to the observed distribution. From this, both quantitative and qualitative conclusions are drawn concerning allowed luminosity and spatial distributions of the actual sample. Although the Burst and Transient Source Experiment (BATSE) experiment on Gamma Ray Observatory (GRO) will significantly improve knowledge of the gamma-ray burst source spatial characteristics within only a few months of launch, the analysis techniques described herein will not be superseded. Rather, they may be used with BATSE results to obtain detailed information about both the luminosity and spatial distributions of the sources.

  6. FASEP ultra-automated analysis of fibre length distribution in glass-fibre-reinforced products

    NASA Astrophysics Data System (ADS)

    Hartwich, Mark R.; Höhn, Norbert; Mayr, Helga; Sandau, Konrad; Stengler, Ralph

    2009-06-01

    Reinforced plastic materials are widely used in highly sophisticated applications. The length distribution of the fibres influences the mechanical properties of the final product. A method for automatic determination of this length distribution was developed. After separating the fibres out of the composite material without any damage, and preparing them for microscopical analysis, a mosaic of microscope pictures is taken. After image processing and analysis with mathematical methods, complete statistics of the fibre length distribution could be determined. A correlation between the fibre length distribution and mechanical properties, measured, e.g., with material testing methods such as tensile and impact tests, was found. This provides a method to optimize the process and the selection of material for plastic parts. As a result, it enhances customer satisfaction and, perhaps more importantly, reduces costs for the manufacturer.

  7. SCARE - A postprocessor program to MSC/NASTRAN for reliability analysis of structural ceramic components

    NASA Technical Reports Server (NTRS)

    Gyekenyesi, J. P.

    1986-01-01

    A computer program was developed for calculating the statistical fast fracture reliability and failure probability of ceramic components. The program includes the two-parameter Weibull material fracture strength distribution model, using the principle of independent action for polyaxial stress states and Batdorf's shear-sensitive as well as shear-insensitive crack theories, all for volume distributed flaws in macroscopically isotropic solids. Both penny-shaped cracks and Griffith cracks are included in the Batdorf shear-sensitive crack response calculations, using Griffith's maximum tensile stress or critical coplanar strain energy release rate criteria to predict mixed mode fracture. Weibull material parameters can also be calculated from modulus of rupture bar tests, using the least squares method with known specimen geometry and fracture data. The reliability prediction analysis uses MSC/NASTRAN stress, temperature and volume output, obtained from the use of three-dimensional, quadratic, isoparametric, or axisymmetric finite elements. The statistical fast fracture theories employed, along with selected input and output formats and options, are summarized. An example problem to demonstrate various features of the program is included.
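
    A generic sketch of the least-squares Weibull parameter estimation named above, using the standard linearization ln ln[1/(1-F)] = m ln(sigma) - m ln(sigma0); the ranking convention F_i = (i - 0.5)/n and the sample strengths are illustrative assumptions, not the program's exact procedure:

    ```python
    import numpy as np

    def weibull_lsq(strengths):
        # Least-squares estimate of the Weibull modulus m and characteristic
        # strength sigma0 from fracture strengths, via the linearized CDF.
        s = np.sort(np.asarray(strengths, dtype=float))
        n = s.size
        F = (np.arange(1, n + 1) - 0.5) / n      # one common ranking convention
        x = np.log(s)
        y = np.log(-np.log(1.0 - F))
        m, c = np.polyfit(x, y, 1)               # fit y = m*x + c
        sigma0 = np.exp(-c / m)
        return m, sigma0

    # Hypothetical modulus-of-rupture strengths (MPa):
    m, sigma0 = weibull_lsq([312., 355., 367., 389., 401., 422., 447., 468.])
    ```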

  8. Weibull Multiplicative Model and Machine Learning Models for Full-Automatic Dark-Spot Detection from SAR Images

    NASA Astrophysics Data System (ADS)

    Taravat, A.; Del Frate, F.

    2013-09-01

    As a major aspect of marine pollution, oil release into the sea has serious biological and environmental impacts. Among remote sensing systems, which offer non-destructive investigation methods, synthetic aperture radar (SAR) can provide valuable synoptic information about the position and size of an oil spill, owing to its wide area coverage and day/night, all-weather capabilities. In this paper we present a new automated method for oil-spill monitoring. The approach is based on the combination of a Weibull multiplicative model and machine learning techniques to differentiate between dark spots and the background. First, a filter based on the Weibull multiplicative model is applied to each sub-image. Second, the sub-image is segmented by two different neural network techniques (pulse-coupled neural networks, PCNN, and multilayer perceptron neural networks, MLP). As the last step, a very simple filtering process is used to eliminate false targets. The proposed approaches were tested on 20 ENVISAT and ERS2 images which contained dark spots, with the same parameters used in all tests. Over the full dataset, average accuracies of 94.05% and 95.20% were obtained for the PCNN and MLP methods, respectively. The average computational time for dark-spot detection in a 256 × 256 image is about 4 s for PCNN segmentation using IDL software, currently the fastest in this field. Our experimental results demonstrate that the proposed approach is fast, robust and effective, and can be applied to future spaceborne SAR images.

  9. Measurement of Brillouin frequency shift distribution in PLC by Brillouin optical correlation domain analysis

    NASA Astrophysics Data System (ADS)

    Hotate, Kazuo; Watanabe, Ryuji; He, Zuyuan; Kishi, Masato

    We have measured the Brillouin frequency shift distribution in a planar lightwave circuit (PLC) by Brillouin optical correlation domain analysis (BOCDA). We have built an experimental system specialized for the measurement of PLCs, realizing a spatial resolution of 5.9 mm with a standard deviation of 0.34 MHz in the Brillouin frequency shift (BFS) measurement. From the data obtained in the experiments, we have found that the BFS distribution shape along the waveguide corresponds to its route pattern in the PLC.

  10. Robust Bayesian Analysis of Heavy-tailed Stochastic Volatility Models using Scale Mixtures of Normal Distributions

    PubMed Central

    Abanto-Valle, C. A.; Bandyopadhyay, D.; Lachos, V. H.; Enriquez, I.

    2009-01-01

    A Bayesian analysis of stochastic volatility (SV) models using the class of symmetric scale mixtures of normal (SMN) distributions is considered. In the face of non-normality, this provides an appealing robust alternative to the routine use of the normal distribution. Specific distributions examined include the normal, Student-t, slash and the variance gamma distributions. Using a Bayesian paradigm, an efficient Markov chain Monte Carlo (MCMC) algorithm is introduced for parameter estimation. Moreover, the mixing parameters obtained as a by-product of the scale mixture representation can be used to identify outliers. The methods developed are applied to analyze daily stock returns data on the S&P500 index. Bayesian model selection criteria as well as out-of-sample forecasting results reveal that the SV models based on heavy-tailed SMN distributions provide significant improvement in model fit as well as prediction to the S&P500 index data over the usual normal model. PMID:20730043

  11. Robust Bayesian Analysis of Heavy-tailed Stochastic Volatility Models using Scale Mixtures of Normal Distributions.

    PubMed

    Abanto-Valle, C A; Bandyopadhyay, D; Lachos, V H; Enriquez, I

    2010-12-01

    A Bayesian analysis of stochastic volatility (SV) models using the class of symmetric scale mixtures of normal (SMN) distributions is considered. In the face of non-normality, this provides an appealing robust alternative to the routine use of the normal distribution. Specific distributions examined include the normal, Student-t, slash and the variance gamma distributions. Using a Bayesian paradigm, an efficient Markov chain Monte Carlo (MCMC) algorithm is introduced for parameter estimation. Moreover, the mixing parameters obtained as a by-product of the scale mixture representation can be used to identify outliers. The methods developed are applied to analyze daily stock returns data on the S&P500 index. Bayesian model selection criteria as well as out-of-sample forecasting results reveal that the SV models based on heavy-tailed SMN distributions provide significant improvement in model fit as well as prediction to the S&P500 index data over the usual normal model. PMID:20730043

  12. Analysis of dynamical processes using the mass distribution of fission fragments in heavy-ion reactions

    NASA Astrophysics Data System (ADS)

    Aritomo, Y.

    2009-12-01

    We analyze experimental data obtained for the mass distribution of fission fragments in the reactions 36S+238U and 30Si+238U at several incident energies, which were performed by the Japan Atomic Energy Agency (JAEA) group. The analysis of the mass distribution of fission fragments is a powerful tool for understanding the mechanism of the reaction in the heavy and superheavy-mass regions. Using the dynamical model with the Langevin equation, we precisely investigate the incident energy dependence of the mass distribution of fission fragments. This study is the first attempt to treat such experimental data systematically. We also consider the fine structures in the mass distribution of fission fragments caused by the nuclear structure at a low incident energy. It is explained why the mass distribution of fission fragments has different features in the two reactions. The fusion cross sections are also estimated.

  13. Evaluating Domestic Hot Water Distribution System Options with Validated Analysis Models

    SciTech Connect

    Weitzel, E.; Hoeschele, E.

    2014-09-01

    A developing body of work collects data on domestic hot water consumption, water use behaviors, and the energy efficiency of various distribution systems. A full distribution system model, developed in the Transient System Simulation Tool (TRNSYS), has been validated using field monitoring data and then exercised in a number of climates to understand the impact of climate on performance. In this study, the Building America team built upon previous analysis modeling work to evaluate differing distribution systems and the sensitivities of water heating energy and water use efficiency to variations in climate, load, distribution type, insulation and compact plumbing practices. Overall, 124 different TRNSYS models were simulated. The results of this work are useful in informing future development of water heating best practices guides, as well as more accurate (and simulation-time-efficient) distribution models for annual whole-house simulation programs.

  14. Comparison of photon correlation spectroscopy with photosedimentation analysis for the determination of aqueous colloid size distributions

    USGS Publications Warehouse

    Rees, T.F.

    1990-01-01

    Photon correlation spectroscopy (PCS) utilizes the Doppler frequency shift of photons scattered off particles undergoing Brownian motion to determine the size of colloids suspended in water. Photosedimentation analysis (PSA) measures the time-dependent change in optical density of a suspension of colloidal particles undergoing centrifugation. A description of both techniques, important underlying assumptions, and limitations are given. Results for a series of river water samples show that the colloid-size distribution means are statistically identical as determined by both techniques. This also is true of the mass median diameter (MMD), even though MMD values determined by PSA are consistently smaller than those determined by PCS. Because of this small negative bias, the skew parameters for the distributions are generally smaller for the PCS-determined distributions than for the PSA-determined distributions. Smaller polydispersity indices for the distributions are also determined by PCS. -from Author

  15. [Effect of different distribution of components concentration on the accuracy of quantitative spectral analysis].

    PubMed

    Li, Gang; Zhao, Zhe; Wang, Hui-Quan; Lin, Ling; Zhang, Bao-Ju; Wu, Xiao-Rong

    2012-07-01

    In order to discuss the effect of different distributions of component concentrations on the accuracy of quantitative spectral analysis, ideal absorption spectra of samples with three components were established according to the Lambert-Beer law. Gaussian noise was added to the spectra. Calibration and prediction models were built by partial least squares regression to reflect the unequal modeling and prediction results between different concentration distributions. The results show that, in the case of pure linear absorption, the accuracy of the model is related to the distribution of component concentrations. For both the component of interest and the non-tested components, wider and more uniform concentration coverage in the calibration set is essential for establishing a universal model with satisfactory accuracy. This research supplies theoretical guidance for the reasonable choice of samples with a suitable concentration distribution, which enhances the quality of the model and reduces the prediction error on the prediction set. PMID:23016350
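
    A minimal sketch of the kind of simulation described above, assuming Beer-Lambert mixing of three Gaussian absorption bands, additive Gaussian noise, and a scikit-learn PLS model standing in for the authors' implementation:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    wavelengths = np.linspace(400, 700, 200)

    def band(center, width):
        # Pure-component absorptivity spectrum as a Gaussian band (illustrative).
        return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

    E = np.vstack([band(450, 30), band(550, 40), band(650, 25)])   # 3 components

    # Beer-Lambert: A = C @ E. Drawing concentrations uniformly gives the
    # wide, even coverage the abstract recommends for the calibration set.
    C = rng.uniform(0.0, 1.0, size=(60, 3))
    A = C @ E + rng.normal(0.0, 0.002, size=(60, E.shape[1]))      # Gaussian noise

    pls = PLSRegression(n_components=3)
    pls.fit(A, C)                  # calibrate spectra -> concentrations
    C_pred = pls.predict(A)        # prediction error reflects model quality
    ```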

  16. Does a powder surface contain all necessary information for particle size distribution analysis?

    PubMed

    Laitinen, Niklas; Antikainen, Osmo; Yliruusi, Jouko

    2002-12-01

    The aim of this study was to utilise a new approach where digital image information is used in the characterisation of particle size distributions of a large set of pharmaceutical powders. A novel optical set-up was employed to create images and calculate a stereometric parameter from the digital images of powder surfaces. Analysis was made of 40 granule batches with varying particle sizes and compositions prepared with fluidised bed granulation. The extracted digital image information was then connected to particle size using multivariate modelling. The modelled particle size distributions were compared to particle size determinations with sieve analysis and laser diffraction. The results revealed that the created models corresponded well with the particle size distributions measured with sieve analysis and laser diffraction. This study shows that digital images taken from powder surfaces contain all the data needed for particle size distribution analysis. To obtain this information from images, careful consideration has to be given to the imaging conditions. In conclusion, the results of this study suggest that the new approach is a powerful means of analysis in particle size determination. The method is fast, the sample size needed is very small and the technique enables non-destructive analysis of samples. The method is suitable in the particle size range of approximately 20-1500 µm. However, further investigations with a broad range of powders have to be made to obtain information on the possibilities and limitations of the introduced method in powder characterisation. PMID:12453611

  17. Analysis and synthesis of distributed-lumped-active networks by digital computer

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The use of digital computational techniques in the analysis and synthesis of DLA (distributed lumped active) networks is considered. This class of networks consists of three distinct types of elements, namely, distributed elements (modeled by partial differential equations), lumped elements (modeled by algebraic relations and ordinary differential equations), and active elements (modeled by algebraic relations). Such a characterization is applicable to a broad class of circuits, especially including those usually referred to as linear integrated circuits, since the fabrication techniques for such circuits readily produce elements which may be modeled as distributed, as well as the more conventional lumped and active ones.

  18. Theoretical analysis of output performance of GG-IAG fiber laser by multipoint distributed side pump

    NASA Astrophysics Data System (ADS)

    Zhu, Yonggang; Duan, Kailiang; Shao, Hongmin; Zhao, Baoyin; Zhang, Entao; Zhao, Wei

    2012-11-01

    Based on steady-state rate equations (REs) and a heat dissipation model considering both convective and radiative heat transfer, the output performance and temperature distribution of Yb3+-doped gain-guided and index-antiguided (GG-IAG) fiber lasers with multipoint distributed pumping are analyzed by numerically solving the REs. The results show that high output power and an even temperature distribution can be obtained by increasing the number of pump points and lowering the losses at these points; multipoint side pumping is an optimal method for obtaining compact, high-power GG-IAG fiber lasers. The numerical analysis provides some insights for the construction of high-power GG-IAG fiber lasers.

  19. powerlaw: A Python Package for Analysis of Heavy-Tailed Distributions

    PubMed Central

    Alstott, Jeff; Bullmore, Ed; Plenz, Dietmar

    2014-01-01

    Power laws are theoretically interesting probability distributions that are also frequently used to describe empirical data. In recent years, effective statistical methods for fitting power laws have been developed, but appropriate use of these techniques requires significant programming and statistical insight. In order to greatly decrease the barriers to using good statistical methods for fitting power law distributions, we developed the powerlaw Python package. This software package provides easy commands for basic fitting and statistical analysis of distributions. Notably, it also seeks to support a variety of user needs by being exhaustive in the options available to the user. The source code is publicly available and easily extensible. PMID:24489671
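
    The basic workflow reads roughly as follows, a sketch based on the package's documented interface; the synthetic Pareto sample stands in for empirical heavy-tailed data:

    ```python
    import numpy as np
    import powerlaw

    # Synthetic heavy-tailed sample (tail exponent alpha = 2.5) standing in
    # for empirical data such as avalanche sizes or word frequencies.
    rng = np.random.default_rng(0)
    data = rng.pareto(1.5, size=10000) + 1.0

    fit = powerlaw.Fit(data)                 # estimates xmin and alpha jointly
    print(fit.power_law.alpha, fit.power_law.xmin)

    # Likelihood-ratio comparison of candidate distributions: R > 0 favors
    # the first candidate; p gives the significance of the sign of R.
    R, p = fit.distribution_compare('power_law', 'lognormal')
    ```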

  20. Nucleon momentum distributions from a modified scaling analysis of inclusive electron nucleus scattering

    SciTech Connect

    John Arrington

    2003-05-01

    Inclusive electron scattering from nuclei at low momentum transfer (corresponding to x>1) and moderate Q^2 is dominated by quasifree scattering from nucleons. In the impulse approximation, the cross section can be directly connected to the nucleon momentum distribution via the scaling function F(y). The breakdown of the y-scaling assumptions in certain kinematic regions has prevented extraction of nucleon momentum distributions from such a scaling analysis. With a slight modification to the y-scaling assumptions, it is found that scaling functions can be extracted which are consistent with the expectations for the nucleon momentum distributions.

  1. Nucleon momentum distributions from a modified scaling analysis of inclusive electron-nucleus scattering.

    SciTech Connect

    Arrington, J.

    2002-05-15

    Inclusive electron scattering from nuclei at low momentum transfer (corresponding to x {ge} 1) and moderate Q{sup 2} is dominated by quasifree scattering from nucleons. In the impulse approximation, the cross section can be directly connected to the nucleon momentum distribution via the scaling function F(y). The breakdown of the y-scaling assumptions in certain kinematic regions has prevented extraction of nucleon momentum distributions from such a scaling analysis. With a slight modification to the y-scaling assumptions, it is found that scaling functions can be extracted which are consistent with the expectations for the nucleon momentum distributions.

  2. Design of a ridge filter structure based on the analysis of dose distributions.

    PubMed

    Fujimoto, Rintaro; Takayanagi, Taisuke; Fujitaka, Shinichiro

    2009-07-01

    Dose distributions distorted by a periodic structure, such as a ridge filter, are analytically investigated. Based on the beam optics, the fluence distributions of scanned beams passing through the ridge filter are traced. It is shown that the periodic lateral dose distribution blurred by multiple Coulomb scattering can be expressed by a sum of cosine functions through Fourier transform. The result shows that the dose homogeneity decreases exponentially as the period of the structure becomes longer. This analysis is applied to the example case of a mini-ridge filter. The mini-ridge filter is designed to broaden sharp Bragg peaks for an energy-stacking irradiation method. The dose distributions depend on the period of the ridge filter structure and the angular straggling at the ridge filter position. Several cases are prepared in which the period and angular straggling take plausible values. In these cases, the lateral distributions obtained by the analytical method are compared to Monte Carlo simulation results. Both distributions show good agreement with each other within 1%, which means that this analysis allows quantitative estimation of the dose distribution downstream of the ridge filter. The appropriate groove period and scatterer width can then be determined to ensure sufficient homogeneity. PMID:19531845
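
    The stated Fourier-series result can be sketched as follows; the notation (period L, Gaussian blur width sigma, harmonic amplitudes a_n) is ours:

    ```latex
    % Periodic fluence of period L blurred by Gaussian multiple scattering of
    % width \sigma (convolution damps each Fourier harmonic n):
    D(x) = D_0 \left[ 1 + \sum_{n \ge 1} a_n \,
           e^{-2\pi^2 n^2 \sigma^2 / L^2} \cos\!\left(\frac{2\pi n x}{L}\right) \right]
    % Each harmonic's ripple is suppressed by exp(-2\pi^2 n^2 \sigma^2 / L^2),
    % so homogeneity is excellent for L << \sigma and degrades exponentially
    % as the period L grows, as stated above.
    ```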

  3. Particle size distributions by transmission electron microscopy: an interlaboratory comparison case study

    PubMed Central

    Rice, Stephen B; Chan, Christopher; Brown, Scott C; Eschbach, Peter; Han, Li; Ensor, David S; Stefaniak, Aleksandr B; Bonevich, John; Vladár, András E; Hight Walker, Angela R; Zheng, Jiwen; Starnes, Catherine; Stromberg, Arnold; Ye, Jia; Grulke, Eric A

    2015-01-01

    This paper reports an interlaboratory comparison that evaluated a protocol for measuring and analysing the particle size distribution of discrete, metallic, spheroidal nanoparticles using transmission electron microscopy (TEM). The study was focused on automated image capture and automated particle analysis. NIST RM8012 gold nanoparticles (30 nm nominal diameter) were measured for area-equivalent diameter distributions by eight laboratories. Statistical analysis was used to (1) assess the data quality without using size distribution reference models, (2) determine reference model parameters for different size distribution reference models and non-linear regression fitting methods and (3) assess the measurement uncertainty of a size distribution parameter by using its coefficient of variation. The interlaboratory area-equivalent diameter mean, 27.6 nm ± 2.4 nm (computed based on a normal distribution), was quite similar to the area-equivalent diameter, 27.6 nm, assigned to NIST RM8012. The lognormal reference model was the preferred choice for these particle size distributions as, for all laboratories, its parameters had lower relative standard errors (RSEs) than the other size distribution reference models tested (normal, Weibull and Rosin–Rammler–Bennett). The RSEs for the fitted standard deviations were two orders of magnitude higher than those for the fitted means, suggesting that most of the parameter estimate errors were associated with estimating the breadth of the distributions. The coefficients of variation for the interlaboratory statistics also confirmed the lognormal reference model as the preferred choice. From quasi-linear plots, the typical range for good fits between the model and cumulative number-based distributions was 1.9 fitted standard deviations less than the mean to 2.3 fitted standard deviations above the mean. Automated image capture, automated particle analysis and statistical evaluation of the data and fitting coefficients provide a framework for assessing nanoparticle size distributions using TEM for image acquisition. PMID:26361398

  4. Probability Distribution Function of the Upper Equatorial Pacific Current Speeds

    NASA Astrophysics Data System (ADS)

    Chu, P. C.

    2008-12-01

    The probability distribution function (PDF) of the upper (0-50 m) tropical Pacific current speeds (w), constructed from hourly ADCP data (1990-2007) at six stations for the TOGA-TAO project, satisfies the two-parameter Weibull distribution reasonably well, with different characteristics between El Nino and La Nina events: In the western Pacific, the PDF of w has a larger peakedness during the La Nina events than during the El Nino events; and vice versa in the eastern Pacific. However, the PDF of w for the lower layer (100-200 m) does not fit the Weibull distribution as well as the upper layer. This is due to the different stochastic differential equations between the upper and lower layers in the tropical Pacific. For the upper layer, the stochastic differential equations, established on the basis of Ekman dynamics, have an analytical solution, i.e., the Rayleigh distribution (the simplest form of the Weibull distribution), for constant eddy viscosity K. Knowledge of the PDF of w during the El Nino and La Nina events will improve the ensemble horizontal flux calculation, which contributes to climate studies.

  5. Probability distribution function of the upper equatorial Pacific current speeds

    NASA Astrophysics Data System (ADS)

    Chu, Peter C.

    2008-06-01

    The probability distribution function (PDF) of the upper (0-50 m) tropical Pacific current speeds (w), constructed from hourly ADCP data (1990-2007) at six stations for the Tropical Atmosphere Ocean project, satisfies the two-parameter Weibull distribution reasonably well, with different characteristics between El Nino and La Nina events: In the western Pacific, the PDF of w has a larger peakedness during the La Nina events than during the El Nino events; and vice versa in the eastern Pacific. However, the PDF of w for the lower layer (100-200 m) does not fit the Weibull distribution as well as the upper layer. This is due to the different stochastic differential equations between the upper and lower layers in the tropical Pacific. For the upper layer, the stochastic differential equations, established on the basis of Ekman dynamics, have an analytical solution, i.e., the Rayleigh distribution (the simplest form of the Weibull distribution), for constant eddy viscosity K. Knowledge of the PDF of w during the El Nino and La Nina events will improve the ensemble horizontal flux calculation, which contributes to climate studies.
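
    For reference, a compact statement of the distribution discussed in the two records above; the parameterization (shape k, scale lambda) is ours:

    ```latex
    % Two-parameter Weibull PDF for current speed w, shape k and scale \lambda:
    f(w) = \frac{k}{\lambda}\left(\frac{w}{\lambda}\right)^{k-1}
           e^{-(w/\lambda)^{k}}, \qquad w \ge 0
    % k = 2 recovers the Rayleigh distribution, the analytical solution quoted
    % above for the Ekman-layer model with constant eddy viscosity K; larger k
    % corresponds to a more peaked PDF ("peakedness" in the event comparison).
    ```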

  6. Hydraulic model analysis of water distribution system, Rockwell International, Rocky Flats, Colorado

    SciTech Connect

    Perstein, J.; Castellano, J.A.

    1989-01-20

    Rockwell International requested an analysis of the existing plant site water supply distribution system at Rocky Flats, Colorado, to determine its adequacy. On September 26--29, 1988, Hughes Associates, Inc., Fire Protection Engineers, accompanied by Rocky Flats Fire Department engineers and suppression personnel, conducted water flow tests at the Rocky Flats plant site. Thirty-seven flows from various points throughout the plant site were taken on the existing domestic supply/fire main installation to assure comprehensive and thorough representation of the Rocky Flats water distribution system capability. The analysis was completed in four phases which are described, together with a summary of general conclusions and recommendations.

  7. The Analysis of the Strength, Distribution and Direction for the EEG Phase Synchronization by Musical Stimulus

    NASA Astrophysics Data System (ADS)

    Ogawa, Yutaro; Ikeda, Akira; Kotani, Kiyoshi; Jimbo, Yasuhiko

    In this study, we propose an EEG phase synchronization analysis that includes not only the average strength of synchronization but also its distribution and directions, under conditions in which emotion is evoked by musical stimuli. The experiment is performed with two different musical stimuli that evoke happiness or sadness for 150 seconds. It is found that while the average strength of synchronization indicates no difference between the right and left sides of the frontal lobe during the happy stimulus, the distribution and directions indicate significant differences. Therefore, the proposed analysis is useful for detecting emotional condition because it provides information that cannot be obtained from the average strength of synchronization alone.
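
    One common way to quantify the strength of phase synchronization between two EEG channels is the phase-locking value (PLV); the sketch below is a generic construction via the analytic signal, not necessarily the authors' exact measure, and the toy signals are illustrative:

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def phase_locking_value(x, y):
        # Instantaneous phases from the analytic signal; PLV = |<exp(i*dphi)>|.
        # PLV near 1 means strongly locked phases, near 0 means no locking.
        # A direction analysis would additionally use the sign/lag of dphi.
        phase_x = np.angle(hilbert(x))
        phase_y = np.angle(hilbert(y))
        dphi = phase_x - phase_y
        return np.abs(np.mean(np.exp(1j * dphi)))

    # Toy usage: two noisy 10 Hz oscillations with a fixed phase offset.
    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 2000)
    x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
    y = np.sin(2 * np.pi * 10 * t + 0.3) + 0.5 * rng.standard_normal(t.size)
    plv = phase_locking_value(x, y)
    ```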

  8. Sensitivity Analysis of CLIMEX Parameters in Modelling Potential Distribution of Lantana camara L.

    PubMed Central

    Taylor, Subhashni; Kumar, Lalit

    2012-01-01

    A process-based niche model of L. camara L. (lantana), a highly invasive shrub species, was developed to estimate its potential distribution using CLIMEX. Model development was carried out using its native and invasive distribution and validation was carried out with the extensive Australian distribution. A good fit was observed, with 86.7% of herbarium specimens collected in Australia occurring within the suitable and highly suitable categories. A sensitivity analysis was conducted to identify the model parameters that had the most influence on lantana distribution. The changes in suitability were assessed by mapping the regions where the distribution changed with each parameter alteration. This allowed an assessment of where, within Australia, the modification of each parameter was having the most impact, particularly in terms of the suitable and highly suitable locations. The sensitivity of various parameters was also evaluated by calculating the changes in area within the suitable and highly suitable categories. The limiting low temperature (DV0), limiting high temperature (DV3) and limiting low soil moisture (SM0) showed highest sensitivity to change. The other model parameters were relatively insensitive to change. Highly sensitive parameters require extensive research and data collection to be fitted accurately in species distribution models. The results from this study can inform more cost effective development of species distribution models for lantana. Such models form an integral part of the management of invasive species and the results can be used to streamline data collection requirements for potential distribution modelling. PMID:22815881

  9. Sensitivity Analysis of CLIMEX Parameters in Modeling Potential Distribution of Phoenix dactylifera L.

    PubMed Central

    Shabani, Farzin; Kumar, Lalit

    2014-01-01

    Using CLIMEX and the Taguchi Method, a process-based niche model was developed to estimate potential distributions of Phoenix dactylifera L. (date palm), an economically important crop in many countries. Development of the model was based on both its native and invasive distribution and validation was carried out in terms of its extensive distribution in Iran. To identify model parameters having greatest influence on distribution of date palm, a sensitivity analysis was carried out. Changes in suitability were established by mapping of regions where the estimated distribution changed with parameter alterations. This facilitated the assessment of certain areas in Iran where parameter modifications impacted the most, particularly in relation to suitable and highly suitable locations. Parameter sensitivities were also evaluated by the calculation of area changes within the suitable and highly suitable categories. The low temperature limit (DV2), high temperature limit (DV3), upper optimal temperature (SM2) and high soil moisture limit (SM3) had the greatest impact on sensitivity, while other parameters showed relatively less sensitivity or were insensitive to change. For an accurate fit in species distribution models, highly sensitive parameters require more extensive research and data collection methods. Results of this study demonstrate a more cost effective method for developing date palm distribution models, an integral element in species management, and may prove useful for streamlining requirements for data collection in potential distribution modeling for other species as well. PMID:24722140

  10. A network analysis of food flows within the United States of America.

    PubMed

    Lin, Xiaowen; Dang, Qian; Konar, Megan

    2014-05-20

    The world food system is globalized and interconnected, in which trade plays an increasingly important role in facilitating food availability. We present a novel application of network analysis to domestic food flows within the USA, a country with global importance as a major agricultural producer and trade power. We find normal node degree distributions and Weibull node strength and betweenness centrality distributions. An unassortative network structure with high clustering coefficients exists. These network properties indicate that the USA food flow network is highly social and well-mixed. However, a power law relationship between node betweenness centrality and node degree indicates potential network vulnerability to the disturbance of key nodes. We perform an equality analysis which serves as a benchmark for global food trade, where the Gini coefficient = 0.579, Lorenz asymmetry coefficient = 0.966, and Hoover index = 0.442. These findings provide insight into trade network scaling and serve as a proxy benchmark for free trade and equitable network architectures. PMID:24773310
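
    A minimal sketch of the kinds of node statistics and the Gini coefficient used above, computed on a toy weighted directed graph standing in for the food flow network; the edges are invented for illustration:

    ```python
    import numpy as np
    import networkx as nx

    # Toy state-to-state flow network (weights are hypothetical tonnages).
    G = nx.DiGraph()
    G.add_weighted_edges_from([
        ("CA", "TX", 9.1), ("CA", "IL", 4.2), ("IL", "NY", 6.3),
        ("TX", "NY", 2.7), ("IL", "CA", 3.9), ("NY", "TX", 1.1),
    ])

    degree = dict(G.degree())                    # node degree
    strength = dict(G.degree(weight="weight"))   # node strength (weighted degree)
    betweenness = nx.betweenness_centrality(G)   # unweighted shortest paths

    def gini(values):
        # Gini coefficient via mean absolute differences:
        # G = sum_ij |x_i - x_j| / (2 * n^2 * mean(x)).
        v = np.sort(np.asarray(list(values), dtype=float))
        n = v.size
        return np.abs(v[:, None] - v[None, :]).sum() / (2.0 * n * n * v.mean())

    g = gini(strength.values())   # inequality of node strengths
    ```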

  11. A mixture of exponentials distribution for a simple and precise assessment of the volcanic hazard

    NASA Astrophysics Data System (ADS)

    Mendoza-Rosas, A. T.; de La Cruz-Reyna, S.

    2009-03-01

    The assessment of volcanic hazard is the first step for disaster mitigation. The distribution of repose periods between eruptions provides important information about the probability of new eruptions occurring within given time intervals. The quality of the probability estimate, i.e., of the hazard assessment, depends on the capacity of the chosen statistical model to describe the actual distribution of the repose times. In this work, we use a mixture of exponentials distribution, namely the sum of exponential distributions characterized by the different eruption occurrence rates that may be recognized by inspecting the cumulative number of eruptions with time in specific VEI (Volcanic Explosivity Index) categories. The most striking property of an exponential mixture density is that the shape of the density function is flexible in a way similar to the frequently used Weibull distribution, matching long-tailed distributions and allowing clustering and time dependence of the eruption sequence, with distribution parameters that can be readily obtained from the observed occurrence rates. Thus, the mixture of exponentials turns out to be more precise and much easier to apply than the Weibull distribution. We recommend the use of a mixture of exponentials distribution when regimes with well-defined eruption rates can be identified in the cumulative series of events. As an example, we apply the mixture of exponential distributions to the repose-time sequences between explosive eruptions of the Colima and Popocatépetl volcanoes, México, and compare the results obtained with the Weibull and other distributions.
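
    In symbols, with our notation (rates lambda_i and weights w_i for k identified regimes):

    ```latex
    % Mixture of k exponentials for the repose time T, with occurrence rates
    % \lambda_i and weights w_i (\sum_i w_i = 1) read off the slopes of the
    % cumulative eruption-number curve:
    f(t) = \sum_{i=1}^{k} w_i \lambda_i e^{-\lambda_i t}, \qquad
    S(t) = \Pr(T > t) = \sum_{i=1}^{k} w_i e^{-\lambda_i t}
    % Probability of at least one eruption within a window \Delta t:
    % 1 - S(\Delta t).
    ```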

  12. Strategic Sequencing for State Distributed PV Policies: A Quantitative Analysis of Policy Impacts and Interactions

    SciTech Connect

    Doris, E.; Krasko, V.A.

    2012-10-01

    State and local policymakers show increasing interest in spurring the development of customer-sited distributed generation (DG), in particular solar photovoltaic (PV) markets. Prompted by that interest, this analysis examines the use of state policy as a tool to support the development of a robust private investment market. This analysis builds on previous studies that focus on government subsidies to reduce installation costs of individual projects and provides an evaluation of the impacts of policies on stimulating private market development.

  13. Space positional and motion SRC effects: A comparison with the use of reaction time distribution analysis

    PubMed Central

    Styrkowiec, Piotr; Szczepanowski, Remigiusz

    2013-01-01

    The analysis of reaction time (RT) distributions has become a recognized standard in studies on the stimulus response correspondence (SRC) effect as it allows exploring how this effect changes as a function of response speed. In this study, we compared the spatial SRC effect (the classic Simon effect) with the motion SRC effect using RT distribution analysis. Four experiments were conducted, in which we manipulated factors of space position and motion for stimulus and response, in order to obtain a clear distinction between positional SRC and motion SRC. Results showed that these two types of SRC effects differ in their RT distribution functions as the space positional SRC effect showed a decreasing function, while the motion SRC showed an increasing function. This suggests that different types of codes underlie these two SRC effects. Potential mechanisms and processes are discussed. PMID:24605178

  14. Superstatistics analysis of the ion current distribution function: Met3PbCl influence study.

    PubMed

    Miśkiewicz, Janusz; Trela, Zenon; Przestalski, Stanisław; Karcz, Waldemar

    2010-09-01

    A novel analysis of ion current time series is proposed. It is shown that higher (second, third and fourth) statistical moments of the ion current probability distribution function (PDF) can yield new information about ion channel properties. The method is illustrated on a two-state model where the PDF of the compound states are given by normal distributions. The proposed method was applied to the analysis of the SV cation channels of vacuolar membrane of Beta vulgaris and the influence of trimethyllead chloride (Met(3)PbCl) on the ion current probability distribution. Ion currents were measured by patch-clamp technique. It was shown that Met(3)PbCl influences the variance of the open-state ion current but does not alter the PDF of the closed-state ion current. Incorporation of higher statistical moments into the standard investigation of ion channel properties is proposed. PMID:20354691

  15. An Analysis of Variance Approach for the Estimation of Response Time Distributions in Tests

    ERIC Educational Resources Information Center

    Attali, Yigal

    2010-01-01

    Generalizability theory and analysis of variance methods are employed, together with the concept of objective time pressure, to estimate response time distributions and the degree of time pressure in timed tests. By estimating response time variance components due to person, item, and their interaction, and fixed effects due to item types and…

  16. Global Distribution of Tropospheric Aerosols: A 3-D Model Analysis of Satellite Data

    NASA Technical Reports Server (NTRS)

    Chin, Mian

    2002-01-01

    This report describes objectives completed for the GACP (Global Climatology Aerosol Project). The objectives included the analysis of satellite aerosol data, including the optical properties and global distributions of major aerosol types, and human contributions to major aerosol types. The researchers have conducted simulations and field work.

  17. An investigation on the intra-sample distribution of cotton color by using image analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The colorimeter principle is widely used to measure cotton color. This method provides the sample's color grade, but the result does not include information about the color distribution or any variation within the sample. We conducted an investigation that used an image analysis method to study the ...

  18. Residence Time Distribution Measurement and Analysis of Pilot-Scale Pretreatment Reactors for Biofuels Production: Preprint

    SciTech Connect

    Sievers, D.; Kuhn, E.; Tucker, M.; Stickel, J.; Wolfrum, E.

    2013-06-01

    Measurement and analysis of residence time distribution (RTD) data is the focus of this study where data collection methods were developed specifically for the pretreatment reactor environment. Augmented physical sampling and automated online detection methods were developed and applied. Both the measurement techniques themselves and the produced RTD data are presented and discussed.
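
    For orientation, the standard moment analysis of a pulse-tracer RTD can be sketched as follows; the tracer curve here is synthetic, standing in for the physical-sampling or online-detection data described above:

    ```python
    import numpy as np

    # t: sample times (min); c: tracer concentration at the reactor exit.
    t = np.linspace(0, 120, 241)
    c = np.exp(-(t - 30.0) ** 2 / (2 * 8.0 ** 2))   # synthetic tracer response

    E = c / np.trapz(c, t)                     # normalized RTD, E(t)
    t_mean = np.trapz(t * E, t)                # mean residence time
    var = np.trapz((t - t_mean) ** 2 * E, t)   # variance -> degree of dispersion
    ```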

  19. DataHub knowledge based assistance for science visualization and analysis using large distributed databases

    NASA Technical Reports Server (NTRS)

    Handley, Thomas H., Jr.; Collins, Donald J.; Doyle, Richard J.; Jacobson, Allan S.

    1991-01-01

    Viewgraphs on DataHub knowledge based assistance for science visualization and analysis using large distributed databases. Topics covered include: DataHub functional architecture; data representation; logical access methods; preliminary software architecture; LinkWinds; data knowledge issues; expert systems; and data management.

  20. Time-Score Analysis in Criterion-Referenced Tests. Final Report.

    ERIC Educational Resources Information Center

    Tatsuoka, Kikumi K.; Tatsuoka, Maurice M.

    The family of Weibull distributions was investigated as a model for the distributions of response times for items in computer-based criterion-referenced tests. The fit of these distributions was, with a few exceptions, good to excellent according to the Kolmogorov-Smirnov test. For a few relatively simple items, the two-parameter gamma…
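
    A generic sketch of such a Weibull fit checked with the Kolmogorov-Smirnov test, using SciPy as a stand-in for the original procedure; the synthetic response times replace real item data, and note that estimating parameters from the same sample makes the tabulated KS p-value approximate:

    ```python
    import numpy as np
    from scipy import stats

    # Synthetic response times (s) standing in for one item's data.
    rt = stats.weibull_min.rvs(1.8, loc=0.0, scale=0.9, size=300, random_state=0)

    # Fit a two-parameter Weibull (location fixed at zero), then test the fit.
    shape, loc, scale = stats.weibull_min.fit(rt, floc=0.0)
    D, p = stats.kstest(rt, 'weibull_min', args=(shape, loc, scale))
    ```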

  1. SCARE: A post-processor program to MSC/NASTRAN for the reliability analysis of structural ceramic components

    NASA Technical Reports Server (NTRS)

    Gyekenyesi, J. P.

    1985-01-01

    A computer program was developed for calculating the statistical fast fracture reliability and failure probability of ceramic components. The program includes the two-parameter Weibull material fracture strength distribution model, using the principle of independent action for polyaxial stress states and Batdorf's shear-sensitive as well as shear-insensitive crack theories, all for volume distributed flaws in macroscopically isotropic solids. Both penny-shaped cracks and Griffith cracks are included in the Batdorf shear-sensitive crack response calculations, using Griffith's maximum tensile stress or critical coplanar strain energy release rate criteria to predict mixed mode fracture. Weibull material parameters can also be calculated from modulus of rupture bar tests, using the least squares method with known specimen geometry and fracture data. The reliability prediction analysis uses MSC/NASTRAN stress, temperature and volume output, obtained from the use of three-dimensional, quadratic, isoparametric, or axisymmetric finite elements. The statistical fast fracture theories employed, along with selected input and output formats and options, are summarized. An example problem to demonstrate various features of the program is included.

  2. Spatial sensitivity analysis of snow cover data in a distributed rainfall-runoff model

    NASA Astrophysics Data System (ADS)

    Berezowski, T.; Nossent, J.; Chormański, J.; Batelaan, O.

    2015-04-01

    As the availability of spatially distributed data sets for distributed rainfall-runoff modelling is strongly increasing, more attention should be paid to the influence of the quality of the data on the calibration. While a lot of progress has been made in using distributed data in simulations of hydrological models, the sensitivity of model results to spatial input data is not well understood. In this paper we develop a spatial sensitivity analysis method for spatial input data (snow cover fraction, SCF) for a distributed rainfall-runoff model, to investigate where the model is subjected differently to SCF uncertainty in different zones of the model. The analysis focusses on the relation between the SCF sensitivity and the physical and spatial parameters and processes of a distributed rainfall-runoff model. The methodology is tested for the Biebrza River catchment, Poland, for which a distributed WetSpa model is set up to simulate 2 years of daily runoff. The sensitivity analysis uses the Latin-Hypercube One-factor-At-a-Time (LH-OAT) algorithm, which employs different response functions for each spatial parameter representing a 4 × 4 km snow zone. The results show that the spatial patterns of sensitivity can be easily interpreted through the co-occurrence of different environmental factors such as geomorphology, soil texture, land use, precipitation and temperature. Moreover, the spatial pattern of sensitivity under different response functions is related to different spatial parameters and physical processes. The results clearly show that the LH-OAT algorithm is suitable for our spatial sensitivity analysis approach and that the SCF is spatially sensitive in the WetSpa model. The developed method can be easily applied to other models and other spatial data.
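
    A compact sketch of the LH-OAT idea, under our own simplifications (a scalar toy response function and relative one-at-a-time effects); it is not the WetSpa implementation:

    ```python
    import numpy as np
    from scipy.stats import qmc

    def lh_oat(model, bounds, n_points=50, frac=0.05, seed=0):
        # Latin-Hypercube One-factor-At-a-Time: sample base points with LHS,
        # then perturb each parameter in turn by a fraction of its range and
        # average the relative partial effects over all base points.
        bounds = np.asarray(bounds, dtype=float)      # shape (d, 2)
        d = bounds.shape[0]
        sampler = qmc.LatinHypercube(d=d, seed=seed)
        base = qmc.scale(sampler.random(n_points), bounds[:, 0], bounds[:, 1])
        effects = np.zeros(d)
        for x in base:
            y0 = model(x)
            for j in range(d):
                xp = x.copy()
                xp[j] += frac * (bounds[j, 1] - bounds[j, 0])  # may slightly
                # overshoot the upper bound; acceptable for this toy sketch
                effects[j] += abs(model(xp) - y0) / max(abs(y0), 1e-12)
        return effects / n_points   # larger value -> more sensitive parameter

    # Toy model standing in for one response function over SCF zones:
    sens = lh_oat(lambda x: x[0] ** 2 + 0.1 * x[1], bounds=[(0, 1), (0, 1)])
    ```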

  3. Study of Solid State Drives performance in PROOF distributed analysis system

    NASA Astrophysics Data System (ADS)

    Panitkin, S. Y.; Ernst, M.; Petkus, R.; Rind, O.; Wenaus, T.

    2010-04-01

    Solid State Drives (SSDs) are a promising storage technology for High Energy Physics parallel analysis farms. Their combination of low random access time and relatively high read speed is very well suited for situations where multiple jobs concurrently access data located on the same drive. They also have lower energy consumption and higher vibration tolerance than Hard Disk Drives (HDDs), which makes them an attractive choice in many applications ranging from personal laptops to large analysis farms. The Parallel ROOT Facility - PROOF is a distributed analysis system which makes it possible to exploit the inherent event-level parallelism of high energy physics data. PROOF is especially efficient together with distributed local storage systems like Xrootd, when data are distributed over computing nodes. In such an architecture the local disk subsystem I/O performance becomes a critical factor, especially when computing nodes use multi-core CPUs. We will discuss our experience with SSDs in the PROOF environment. We will compare the performance of HDDs with SSDs in I/O intensive analysis scenarios. In particular we will discuss PROOF system performance scaling with the number of simultaneously running analysis jobs.

  4. Fractal analysis of the dark matter and gas distributions in the Mare-Nostrum universe

    SciTech Connect

    Gaite, José

    2010-03-01

    We develop a method of multifractal analysis of N-body cosmological simulations that improves on the customary counts-in-cells method by taking special care of the effects of discreteness and large-scale homogeneity. The analysis of the Mare-Nostrum simulation with our method provides strong evidence of self-similar multifractal distributions of dark matter and gas, with a halo mass function that is of Press-Schechter type but has a power-law exponent -2, as corresponds to a multifractal. Furthermore, our analysis shows that the dark matter and gas distributions are indistinguishable as multifractals. To determine if there is any gas biasing, we calculate the cross-correlation coefficient, with negative but inconclusive results. Hence, we develop an effective Bayesian analysis connected with information theory, which clearly demonstrates that the gas is biased over a long range of scales, up to the scale of homogeneity. However, entropic measures related to the Bayesian analysis show that this gas bias is small (in a precise sense) and is such that the fractal singularities of both distributions coincide and are identical. We conclude that this common multifractal cosmic web structure is determined by the dynamics and is independent of the initial conditions.

  5. Graphical tests for the assumption of gamma and inverse Gaussian frailty distributions.

    PubMed

    Economou, P; Caroni, C

    2005-12-01

    The common choices of frailty distribution in lifetime data models include the Gamma and Inverse Gaussian distributions. We present diagnostic plots for these distributions when frailty operates in a proportional hazards framework. Firstly, we present plots based on the form of the unconditional survival function when the baseline hazard is assumed to be Weibull. Secondly, we base a plot on a closure property that applies for any baseline hazard, namely, that the frailty distribution among survivors at time t has the same form as the original distribution, with the same shape parameter but different scale parameter. We estimate the shape parameter at different values of t and examine whether it is constant, that is, whether plotted values form a straight line parallel to the time axis. We provide simulation results assuming Weibull baseline hazard and an example to illustrate the methods. PMID:16328577
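
    The proportional-hazards frailty setup on which the plots are built can be checked numerically. Below is a small simulation sketch with assumed parameter values (not from the paper): gamma frailty with unit mean on a Weibull baseline hazard, compared against the closed-form unconditional survival function.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 200_000
        alpha, lam = 1.5, 10.0        # Weibull baseline: H0(t) = (t / lam) ** alpha
        k = 2.0                       # gamma frailty shape; scale 1/k so E[Z] = 1
        z = rng.gamma(shape=k, scale=1.0 / k, size=n)

        # Conditional on Z, S(t | Z) = exp(-Z * H0(t)); invert to sample lifetimes.
        u = rng.random(n)
        t = lam * (-np.log(u) / z) ** (1.0 / alpha)

        # Unconditional survival is the gamma Laplace transform at H0(t):
        # S(t) = (1 + H0(t) / k) ** (-k).
        grid = np.array([2.0, 5.0, 10.0, 15.0])
        H0 = (grid / lam) ** alpha
        print("empirical:", [round(float((t > g).mean()), 4) for g in grid])
        print("analytic :", np.round((1.0 + H0 / k) ** (-k), 4))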

  6. Spatial sensitivity analysis of snow cover data in a distributed rainfall-runoff model

    NASA Astrophysics Data System (ADS)

    Berezowski, T.; Nossent, J.; Chormański, J.; Batelaan, O.

    2014-10-01

    As the availability of spatially distributed data sets for distributed rainfall-runoff modelling is strongly growing, more attention should be paid to the influence of the quality of the data on the calibration. While a lot of progress has been made on using distributed data in simulations of hydrological models, sensitivity of spatial data with respect to model results is not well understood. In this paper we develop a spatial sensitivity analysis (SA) method for snow cover fraction input data (SCF) for a distributed rainfall-runoff model to investigate if the model is differently subjected to SCF uncertainty in different zones of the model. The analysis was focused on the relation between the SCF sensitivity and the physical, spatial parameters and processes of a distributed rainfall-runoff model. The methodology is tested for the Biebrza River catchment, Poland for which a distributed WetSpa model is setup to simulate two years of daily runoff. The SA uses the Latin-Hypercube One-factor-At-a-Time (LH-OAT) algorithm, which uses different response functions for each 4 km × 4 km snow zone. The results show that the spatial patterns of sensitivity can be easily interpreted by co-occurrence of different environmental factors such as: geomorphology, soil texture, land-use, precipitation and temperature. Moreover, the spatial pattern of sensitivity under different response functions is related to different spatial parameters and physical processes. The results clearly show that the LH-OAT algorithm is suitable for the spatial sensitivity analysis approach and that the SCF is spatially sensitive in the WetSpa model.

  7. Water distribution system vulnerability analysis using weighted and directed network models

    NASA Astrophysics Data System (ADS)

    Yazdani, Alireza; Jeffrey, Paul

    2012-06-01

    The reliability and robustness against failures of networked water distribution systems are central tenets of water supply system design and operation. The ability of such networks to continue to supply water when components are damaged or fail is dependent on the connectivity of the network and the role and location of the individual components. This paper employs a set of advanced network analysis techniques to study the connectivity of water distribution systems, its relationship with system robustness, and susceptibility to damage. Water distribution systems are modeled as weighted and directed networks by using the physical and hydraulic attributes of system components. A selection of descriptive measurements is utilized to quantify the structural properties of benchmark systems at both local (component) and global (network) scales. Moreover, a novel measure of component criticality, the demand-adjusted entropic degree, is proposed to support identification of critical nodes and their ranking according to failure impacts. The application and value of this metric are demonstrated through two case study networks in the USA and UK. Discussion focuses on the potential for gradual evolution of abstract graph-based tools and techniques into more practical network analysis methods, and a theoretical framework for the analysis of robustness and vulnerability of water distribution networks is presented to better support planning and management decisions.
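
    The exact definition of the demand-adjusted entropic degree is given in the paper; the sketch below shows only one plausible form of such a metric (Shannon entropy of incident edge weights combined with weighted degree and nodal demand) to make the idea concrete. The formula, network and demand values are assumptions, not the authors' definition.

        import math
        import networkx as nx

        def demand_adjusted_entropic_degree(G, demand):
            # One plausible form: weighted degree, spread-corrected by the Shannon
            # entropy of incident edge weights, scaled by nodal demand.
            scores = {}
            for node in G.nodes:
                weights = [d.get("weight", 1.0) for _, _, d in G.edges(node, data=True)]
                total = sum(weights)
                if total == 0.0:
                    scores[node] = 0.0
                    continue
                probs = [w / total for w in weights]
                entropy = -sum(p * math.log(p) for p in probs if p > 0.0)
                scores[node] = (1.0 + entropy) * total * demand.get(node, 1.0)
            return scores

        # Toy network: pipe "weights" (e.g., hydraulic capacity) and nodal demands.
        G = nx.Graph()
        G.add_weighted_edges_from([("A", "B", 2.0), ("B", "C", 1.0),
                                   ("B", "D", 1.0), ("C", "D", 0.5)])
        demand = {"A": 0.5, "B": 2.0, "C": 1.0, "D": 1.5}
        scores = demand_adjusted_entropic_degree(G, demand)
        print(sorted(scores.items(), key=lambda kv: -kv[1]))  # higher ~ more critical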

  8. Phosphorescence lifetime analysis with a quadratic programming algorithm for determining quencher distributions in heterogeneous systems.

    PubMed Central

    Vinogradov, S A; Wilson, D F

    1994-01-01

    A new method for analysis of phosphorescence lifetime distributions in heterogeneous systems has been developed. This method is based on decomposition of the data vector onto a linearly independent set of exponentials and uses quadratic programming principles for χ² minimization. Solution of the resulting algorithm requires a finite number of calculations (it is not iterative) and is computationally fast and robust. The algorithm has been tested on various simulated decays and for analysis of phosphorescence measurements of experimental systems with discrete distributions of lifetimes. Critical analysis of the effect of the signal-to-noise ratio on the resolving capability of the algorithm is presented. This technique is recommended for resolution of the distributions of quencher concentration in heterogeneous samples, of which oxygen distributions in tissue are an important example. Two phosphors of practical importance for biological oxygen measurements, Pd-meso-tetra (4-carboxyphenyl) porphyrin (PdTCPP) and Pd-meso-porphyrin (PdMP), have been used to provide an experimental test of the algorithm. PMID:7858142
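
    The decomposition idea can be sketched with non-negative least squares, which is a quadratic program of exactly this kind; the lifetime grid, noise level and amplitude threshold below are illustrative assumptions rather than the authors' exact algorithm.

        import numpy as np
        from scipy.optimize import nnls

        # Simulated phosphorescence decay: two lifetimes (50 and 200 us) plus noise.
        t = np.linspace(0.0, 1000.0, 400)                       # microseconds
        signal = 0.7 * np.exp(-t / 50.0) + 0.3 * np.exp(-t / 200.0)
        signal = signal + np.random.default_rng(0).normal(0.0, 0.002, t.size)

        # Fixed grid of candidate lifetimes: the decay is decomposed onto this
        # linearly independent set of exponentials.
        taus = np.geomspace(10.0, 1000.0, 60)
        A = np.exp(-t[:, None] / taus[None, :])

        # Non-negative least squares is the quadratic program min ||Ax - b||^2, x >= 0;
        # the active-set solver terminates after a finite number of steps.
        amplitudes, residual = nnls(A, signal)

        print("recovered lifetime components (us):",
              np.round(taus[amplitudes > 0.01], 1))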

  9. Measurement of bubble and pellet size distributions: past and current image analysis technology.

    PubMed

    Junker, Beth

    2006-08-01

    Measurements of bubble and pellet size distributions are useful for biochemical process optimizations. The accuracy, representation, and simplicity of these measurements improve when the measurement is performed on-line and in situ rather than off-line using a sample. Historical and currently available measurement systems for photographic methods are summarized for bubble and pellet (morphology) measurement applications. Applications to cell, mycelium, and pellet measurements have driven key technological developments that have been applied to bubble measurements. Measurement trade-offs exist to maximize accuracy, extend range, and attain reasonable cycle times. Mathematical characterization of distributions using standard statistical techniques is straightforward, facilitating data presentation and analysis. For the specific application of bubble size distributions, selected bioreactor operating parameters and physicochemical conditions alter distributions. Empirical relationships have been established in some cases where sufficient data have been collected. In addition, parameters and conditions with substantial effects on bubble size distributions were identified and their relative effects quantified. This information was used to guide required accuracy and precision targets for bubble size distribution measurements from newly developed novel on-line and in situ bubble measurement devices. PMID:16855822

  10. Analysis of ambient particle size distributions using Unmix and positive matrix factorization.

    PubMed

    Kim, Eugene; Hopke, Philip K; Larson, Timothy V; Covert, David S

    2004-01-01

    Hourly averaged particle size distributions measured at a centrally located urban site in Seattle were analyzed through the application of bilinear positive matrix factorization (PMF) and Unmix to study underlying size distributions and their daily patterns. A total of 1051 samples each with 16 size intervals from 20 to 400 nm were obtained from a differential mobility particle sizer operating between December 2000 and February 2001. Both PMF and Unmix identify four similar underlying factors in the size distributions. Factor 1 is an accumulation mode particle size spectrum that shows a regular nocturnal pattern, and factor 2 is a larger particle distribution. Factor 3 is assigned as a traffic-related particle distribution, based on its correlations with accompanying gas-phase measurements, and has a regular weekday-high rush-hour pattern. Factor 4 is a traffic-related particle size distribution that has a regular rush-hour pattern on weekdays as well as weekends. Conditional probability functions (CPF) were computed using wind profiles and factor contributions. The results of CPF analysis suggest that these factors are correlated with surrounding particle sources of wood burning, secondary aerosol, diesel emissions, and motor vehicle emissions. PMID:14740737

  11. Finite difference based vibration simulation analysis of a segmented distributed piezoelectric structronic plate system

    NASA Astrophysics Data System (ADS)

    Ren, B. Y.; Wang, L.; Tzou, H. S.; Yue, H. H.

    2010-08-01

    Electrical modeling of piezoelectric structronic systems by analog circuits has the disadvantages of huge circuit structure and low precision. However, studies of electrical simulation of segmented distributed piezoelectric structronic plate systems (PSPSs) by using output voltage signals of high-speed digital circuits to evaluate the real-time dynamic displacements are scarce in the literature. Therefore, an equivalent dynamic model based on the finite difference method (FDM) is presented to simulate the actual physical model of the segmented distributed PSPS with simply supported boundary conditions. By means of the FDM, the fourth-order dynamic partial differential equations (PDEs) of the main structure, the segmented distributed sensor signals, and the control moments of the segmented distributed actuator of the PSPS are transformed into finite difference equations. A dynamics matrix model based on the Newmark-β integration method is established. The output voltage signal characteristics of the lower modes (m ≤ 3, n ≤ 3) with different finite difference mesh dimensions and different integration time steps are analyzed by digital signal processing (DSP) circuit simulation software. The control effects of segmented distributed actuators with different effective areas are consistent with the results of the analytical model in relevant references. Therefore, the method of digital simulation for vibration analysis of segmented distributed PSPSs presented in this paper can provide a reference for further research into the electrical simulation of PSPSs.
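
    The Newmark-β time stepping at the core of the dynamics matrix model can be sketched compactly. The following is a generic average-acceleration implementation under assumed system matrices, not the authors' DSP model.

        import numpy as np

        def newmark_beta(M, C, K, f, x0, v0, dt, n_steps, beta=0.25, gamma=0.5):
            # Average-acceleration Newmark-beta integration of M x'' + C x' + K x = f(t).
            # Minimal dense-matrix sketch; f maps a time to a load vector.
            x, v = x0.copy(), v0.copy()
            a = np.linalg.solve(M, f(0.0) - C @ v - K @ x)
            K_eff = M / (beta * dt ** 2) + gamma * C / (beta * dt) + K
            history = [x.copy()]
            for step in range(1, n_steps + 1):
                rhs = (f(step * dt)
                       + M @ (x / (beta * dt ** 2) + v / (beta * dt)
                              + (0.5 / beta - 1.0) * a)
                       + C @ (gamma * x / (beta * dt) + (gamma / beta - 1.0) * v
                              + dt * (0.5 * gamma / beta - 1.0) * a))
                x_new = np.linalg.solve(K_eff, rhs)
                a_new = ((x_new - x) / (beta * dt ** 2) - v / (beta * dt)
                         - (0.5 / beta - 1.0) * a)
                v = v + dt * ((1.0 - gamma) * a + gamma * a_new)
                x, a = x_new, a_new
                history.append(x.copy())
            return np.array(history)

        # Toy check: undamped unit oscillator, x(t) = cos(t); x(1) should be ~0.540.
        out = newmark_beta(np.eye(1), np.zeros((1, 1)), np.eye(1),
                           lambda t: np.zeros(1), np.array([1.0]), np.array([0.0]),
                           dt=0.1, n_steps=10)
        print(out.ravel().round(3))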

  12. Evaluating Domestic Hot Water Distribution System Options With Validated Analysis Models

    SciTech Connect

    Weitzel, E.; Hoeschele, M.

    2014-09-01

    A developing body of work is forming that collects data on domestic hot water consumption, water use behaviors, and energy efficiency of various distribution systems. A full distribution system developed in TRNSYS has been validated using field monitoring data and then exercised in a number of climates to understand climate impact on performance. This study builds upon previous analysis modelling work to evaluate differing distribution systems and the sensitivities of water heating energy and water use efficiency to variations of climate, load, distribution type, insulation and compact plumbing practices. Overall, 124 different TRNSYS models were simulated. Of the configurations evaluated, distribution losses account for 13-29% of the total water heating energy use, and water use efficiency ranges from 11-22%. The base case, an uninsulated trunk-and-branch system, sees the most improvement in energy consumption from insulating the pipes and locating the water heater central to all fixtures. Demand recirculation systems are not projected to provide significant energy savings and in some cases increase energy consumption. Water use is most efficient with demand recirculation systems, followed by the insulated trunk-and-branch system with a central water heater. Compact plumbing practices and insulation have the most impact on energy consumption (2-6% for insulation and 3-4% per 10 gallons of enclosed volume reduced). The results of this work are useful in informing future development of water heating best practices guides as well as more accurate (and simulation time efficient) distribution models for annual whole house simulation programs.

  13. Monofractal and multifractal analysis of the spatial distribution of earthquakes in the central zone of Chile.

    PubMed

    Pastén, Denisse; Muñoz, Víctor; Cisternas, Armando; Rogan, José; Valdivia, Juan Alejandro

    2011-12-01

    Statistical and fractal properties of the spatial distribution of earthquakes in the central zone of Chile are studied. In particular, data are shown to behave according to the well-known Gutenberg-Richter law. The fractal structure is evident for epicenters, not for hypocenters. The multifractal spectrum is also determined, both for the spatial distribution of epicenters and hypocenters. For negative values of the index of multifractal measure q, the multifractal spectrum, which usually cannot be reliably found from data, is calculated from a generalized Cantor-set model, which fits the multifractal spectrum for q > 0, a technique which has been previously applied for analysis of solar wind data. PMID:22304171

  14. Monofractal and multifractal analysis of the spatial distribution of earthquakes in the central zone of Chile

    NASA Astrophysics Data System (ADS)

    Pastén, Denisse; Muñoz, Víctor; Cisternas, Armando; Rogan, José; Valdivia, Juan Alejandro

    2011-12-01

    Statistical and fractal properties of the spatial distribution of earthquakes in the central zone of Chile are studied. In particular, data are shown to behave according to the well-known Gutenberg-Richter law. The fractal structure is evident for epicenters, not for hypocenters. The multifractal spectrum is also determined, both for the spatial distribution of epicenters and hypocenters. For negative values of the index of multifractal measure q, the multifractal spectrum, which usually cannot be reliably found from data, is calculated from a generalized Cantor-set model, which fits the multifractal spectrum for q>0, a technique which has been previously applied for analysis of solar wind data.

  15. A distributed fiber optic sensor system for dike monitoring using Brillouin optical frequency domain analysis

    NASA Astrophysics Data System (ADS)

    Nöther, Nils; Wosniok, Aleksander; Krebber, Katerina; Thiele, Elke

    2008-03-01

    We report on the development of a complete system for spatially resolved detection of critical soil displacement in river embankments. The system uses Brillouin frequency domain analysis (BOFDA) for distributed measurement of strain in silica optical fibers. Our development consists of the measurement unit, an adequate coating for the optical fibers and a technique to integrate the coated optical fibers into geotextiles as they are commonly used in dike construction. We present several laboratory and field tests that prove the capability of the system to detect areas of soil displacement as small as 2 meters. These are the first tests of truly distributed strain measurements on optical fibers embedded into geosynthetics.

  16. Using Micro-Synchrophasor Data for Advanced Distribution Grid Planning and Operations Analysis

    SciTech Connect

    Stewart, Emma; Kiliccote, Sila; McParland, Charles; Roberts, Ciaran

    2014-07-01

    This report reviews the potential for distribution-grid phase-angle data that will be available from new micro-synchrophasors (µPMUs) to be utilized in existing distribution-grid planning and operations analysis. This data could augment the current diagnostic capabilities of grid analysis software, used in both planning and operations for applications such as fault location, and provide data for more accurate modeling of the distribution system. µPMUs are new distribution-grid sensors that will advance measurement and diagnostic capabilities and provide improved visibility of the distribution grid, enabling analysis of the grid's increasingly complex loads that include features such as large volumes of distributed generation (DG). Large volumes of DG lead to concerns about continued reliable operation of the grid, due to changing power-flow characteristics and active generation with its own protection and control capabilities. Using µPMU data on the change in voltage phase angle between two points, in conjunction with new and existing distribution-grid planning and operational tools, is expected to enable model validation, state estimation, fault location, and renewable resource/load characterization. Our findings include: data measurement is outstripping the processing capabilities of planning and operational tools; not every tool can visualize a voltage phase-angle measurement to the degree of accuracy measured by advanced sensors, and the degree of accuracy in measurement required for the distribution grid is not defined; solving methods cannot handle the high volumes of data generated by modern sensors, so new models and solving methods (such as graph trace analysis) are needed; standardization of sensor-data communications platforms in planning and applications tools would allow integration of different vendors' sensors and advanced measurement devices. In addition, data from advanced sources such as µPMUs could be used to validate models to improve/ensure accuracy, providing information on normally estimated values such as underground conductor impedance, and characterization of complex loads. Although the input of high-fidelity data to existing tools will be challenging, µPMU data on phase angle (as well as other data from advanced sensors) will be useful for basic operational decisions that are based on a trend of changing data.

  17. Rank-Ordered Multifractal Analysis (ROMA) of probability distributions in fluid turbulence

    NASA Astrophysics Data System (ADS)

    Wu, C. C.; Chang, T.

    2011-04-01

    Rank-Ordered Multifractal Analysis (ROMA) was introduced by Chang and Wu (2008) to describe the multifractal characteristic of intermittent events. The procedure provides a natural connection between the rank-ordered spectrum and the idea of one-parameter scaling for monofractals. This technique has successfully been applied to MHD turbulence simulations and turbulence data observed in various space plasmas. In this paper, the technique is applied to the probability distributions in the inertial range of the turbulent fluid flow, as given in the vast Johns Hopkins University (JHU) turbulence database. In addition, a new way of finding the continuous ROMA spectrum and the scaled probability distribution function (PDF) simultaneously is introduced.

  18. Mathematical modeling and numerical analysis of thermal distribution in arch dams considering solar radiation effect.

    PubMed

    Mirzabozorg, H; Hariri-Ardebili, M A; Shirkhan, M; Seyed-Kolbadi, S M

    2014-01-01

    The effect of solar radiation on thermal distribution in thin high arch dams is investigated. The differential equation governing the thermal behavior of mass concrete in three-dimensional space is solved applying appropriate boundary conditions. Solar radiation is implemented considering the direction of the dam face relative to the sun, the slope relative to the horizon, the regional cloud cover, and the surrounding topography. It has been observed that solar radiation changes the surface temperature drastically and leads to a nonuniform temperature distribution. Solar radiation effects should therefore be considered in thermal transient analysis of thin arch dams. PMID:24695817

  19. Rank-Ordered Multifractal Analysis of Probability Distributions in Fluid Turbulence

    NASA Astrophysics Data System (ADS)

    Wu, Cheng-Chin; Chang, Tien

    2015-11-01

    Rank-Ordered Multifractal Analysis (ROMA) was introduced by Chang and Wu (2008) to describe the multifractal characteristic of intermittent events. The procedure provides a natural connection between the rank-ordered spectrum and the idea of one-parameter scaling for monofractals. This technique has successfully been applied to MHD turbulence simulations and turbulence data observed in various space plasmas. In this paper, the technique is applied to the probability distributions in the inertial range of the turbulent fluid flow, as given in the vast Johns Hopkins University (JHU) turbulence database. In addition, a refined method of finding the continuous ROMA spectrum and the scaled probability distribution function (PDF) simultaneously is introduced.

  20. Extreme value statistics analysis of fracture strengths of a sintered silicon nitride failing from pores

    NASA Technical Reports Server (NTRS)

    Chao, Luen-Yuan; Shetty, Dinesh K.

    1992-01-01

    Statistical analysis and correlation between pore-size distribution and fracture strength distribution using the theory of extreme-value statistics is presented for a sintered silicon nitride. The pore-size distribution on a polished surface of this material was characterized using an automatic optical image analyzer. The distribution measured on the two-dimensional plane surface was transformed to a population (volume) distribution using the Schwartz-Saltykov diameter method. The population pore-size distribution and the distribution of the pore size at the fracture origin were correlated by extreme-value statistics. Fracture strength distribution was then predicted from the extreme-value pore-size distribution, using a linear elastic fracture mechanics model of an annular crack around a pore and the fracture toughness of the ceramic. The predicted strength distribution was in good agreement with strength measurements in bending. In particular, the extreme-value statistics analysis explained the nonlinear trend in the linearized Weibull plot of measured strengths without postulating a lower-bound strength.
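
    The chain of reasoning in the abstract (sample pore sizes, take the volume extreme, map the largest pore to a strength through an annular-crack LEFM relation, inspect the linearized Weibull plot) can be reproduced in miniature. All distribution parameters and fracture-mechanics constants below are assumed for illustration, not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(2)

        # Assumed population pore-size distribution (lognormal radii, micrometres)
        # and the number of pores sampled by each specimen's stressed volume.
        n_specimens, pores_per_specimen = 500, 2000
        radii = rng.lognormal(mean=1.0, sigma=0.4, size=(n_specimens, pores_per_specimen))

        # Extreme-value step: fracture starts from the largest pore in each specimen.
        a_max = radii.max(axis=1) * 1e-6                       # metres

        # Annular-crack LEFM model: sigma_f = K_Ic / (Y * sqrt(pi * a)).
        K_Ic, Y = 5.0e6, 1.3                                   # Pa*sqrt(m); geometry factor
        strengths = np.sort(K_Ic / (Y * np.sqrt(np.pi * a_max)) / 1e6)   # MPa

        # Linearized Weibull coordinates; curvature here mirrors the nonlinear trend
        # the authors report in Weibull plots of measured strengths.
        prob = (np.arange(1, n_specimens + 1) - 0.3) / (n_specimens + 0.4)
        lnln = np.log(-np.log(1.0 - prob))
        slope = np.polyfit(np.log(strengths), lnln, 1)[0]
        print(f"apparent Weibull modulus from a straight-line fit: {slope:.1f}")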

  1. Spatial point pattern analysis of aerial survey data to assess clustering in wildlife distributions

    NASA Astrophysics Data System (ADS)

    Khaemba, Wilson Mwale

    Assessing clustering in wildlife populations is crucial for understanding their dynamics. This assessment is made difficult for data obtained through aerial surveys because the shape and size of the sampling units (strip transects) result in poor data supports, which generally hampers spatial analysis of these data. The problem may be solved by having more detailed data in which the exact locations of observed animal groups are recorded. These data, obtainable through GPS technology, are amenable to spatial analysis, thereby allowing spatial point pattern analysis to be used to assess observed spatial patterns relative to environmental factors like vegetation. Distance measures like the G-statistic and the K-function classify such patterns into clustered, regular or completely random patterns, while independence between species is assessed through a multivariate extension of the K-function. Quantification of clustering is carried out using spatial regression. The techniques are illustrated with field data on three ungulates observed in an ecosystem in Kenya. Results indicate a relation between the species' spatial distributions and their dietary requirements, demonstrating the usefulness of spatial point pattern analysis in investigating species spatial distributions. It also provides a technique for explaining and differentiating the distribution of wildlife species.
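
    The G-statistic named above is the cumulative distribution of nearest-neighbour distances; a minimal sketch, with synthetic point patterns standing in for the GPS group locations, is given below together with the expected curve under complete spatial randomness (CSR).

        import numpy as np

        def g_function(points, radii):
            # Empirical G function: CDF of nearest-neighbour distances. Values above
            # the CSR curve 1 - exp(-lambda * pi * r^2) suggest clustering.
            pts = np.asarray(points)
            d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
            np.fill_diagonal(d, np.inf)
            nn = d.min(axis=1)
            return np.array([(nn <= r).mean() for r in radii])

        rng = np.random.default_rng(3)
        n, area = 200, 1.0
        radii = np.linspace(0.01, 0.15, 8)

        clustered = rng.normal(loc=[0.5, 0.5], scale=0.05, size=(n, 2))  # one tight herd
        random_pts = rng.random((n, 2))                                  # CSR pattern

        csr_theory = 1.0 - np.exp(-(n / area) * np.pi * radii ** 2)
        print("clustered:", g_function(clustered, radii).round(2))
        print("random   :", g_function(random_pts, radii).round(2))
        print("CSR curve:", csr_theory.round(2))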

  2. SpatTrack: an imaging toolbox for analysis of vesicle motility and distribution in living cells.

    PubMed

    Lund, Frederik W; Jensen, Maria Louise V; Christensen, Tanja; Nielsen, Gitte K; Heegaard, Christian W; Wüstner, Daniel

    2014-12-01

    The endocytic pathway is a complex network of highly dynamic organelles, which has been traditionally studied by quantitative fluorescence microscopy. The data generated by this method can be overwhelming and its analysis, even for the skilled microscopist, is tedious and error-prone. We developed SpatTrack, an open source, platform-independent program collecting a variety of methods for analysis of vesicle dynamics and distribution in living cells. SpatTrack performs 2D particle tracking, trajectory analysis and fitting of diffusion models to the calculated mean square displacement. It allows for spatial analysis of detected vesicle patterns including calculation of the radial distribution function and particle-based colocalization. Importantly, all analysis tools are supported by Monte Carlo simulations of synthetic images. This allows the user to assess the reliability of the analysis and to study alternative scenarios. We demonstrate the functionality of SpatTrack by performing a detailed imaging study of internalized fluorescence-tagged Niemann Pick C2 (NPC2) protein in human disease fibroblasts. Using SpatTrack, we show that NPC2 rescued the cholesterol-storage phenotype from a subpopulation of late endosomes/lysosomes (LE/LYSs). This was paralleled by repositioning and active transport of NPC2-containing vesicles to the cell surface. The potential of SpatTrack for other applications in intracellular transport studies will be discussed. PMID:25243614
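
    To illustrate the trajectory-analysis step (2D tracking followed by fitting diffusion models to the mean square displacement), here is a minimal sketch on a synthetic Brownian track; it is not SpatTrack code, and all parameters are assumed.

        import numpy as np

        def mean_square_displacement(track, max_lag):
            # Time-averaged MSD of one 2D trajectory of shape (n_frames, 2).
            msd = np.empty(max_lag)
            for lag in range(1, max_lag + 1):
                disp = track[lag:] - track[:-lag]
                msd[lag - 1] = (disp ** 2).sum(axis=1).mean()
            return msd

        # Synthetic Brownian track: MSD(lag) ~ 4 * D * lag * dt for free 2D diffusion.
        rng = np.random.default_rng(4)
        D, dt, n_frames = 0.05, 0.1, 500          # um^2/s, s
        steps = rng.normal(0.0, np.sqrt(2.0 * D * dt), size=(n_frames, 2))
        track = np.cumsum(steps, axis=0)

        lags = np.arange(1, 21)
        msd = mean_square_displacement(track, 20)
        D_fit = np.polyfit(lags * dt, msd, 1)[0] / 4.0       # slope / 4 in 2D
        print(f"recovered D = {D_fit:.3f} um^2/s (true {D})")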

  3. Consideration of tip speed limitations in preliminary analysis of minimum COE wind turbines

    NASA Astrophysics Data System (ADS)

    Cuerva-Tejero, A.; Yeow, T. S.; Lopez-Garcia, O.; Gallego-Castillo, C.

    2014-12-01

    A relation between Cost Of Energy, COE, maximum allowed tip speed, and rated wind speed is obtained for wind turbines with a given goal rated power. The wind regime is characterised by the corresponding parameters of the probability density function of wind speed. The non-dimensional characteristics of the rotor (number of blades, the blade radial distributions of local solidity, twist angle, and airfoil type) play the role of parameters in the mentioned relation. The COE is estimated using a cost model commonly used by designers. This cost model requires basic design data such as the rotor radius and the ratio between the hub height and the rotor radius. Certain design options, DO, related to the technology of the power plant, tower and blades are also required as inputs. The function obtained for the COE can be explored to find those values of rotor radius that give rise to minimum cost of energy for a given wind regime as the tip speed limitation changes. The analysis reveals that iso-COE lines evolve parallel to iso-radius lines for large values of limit tip speed, but that this is not the case for small values of the tip speed limits. It is concluded that, as the tip speed limit decreases, the optimum decision for keeping minimum COE values can be: a) reducing the rotor radius for places with a high Weibull scale parameter, or b) increasing the rotor radius for places with a low Weibull scale parameter.
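
    The influence of the Weibull scale parameter can be made concrete with a sketch of the energy-capture integral over a Weibull wind-speed density. The idealized power curve and the cut-in/rated/cut-out speeds below are generic assumptions, not the paper's cost model.

        import numpy as np

        def capacity_factor(k, c, v_cut_in=3.0, v_rated=12.0, v_cut_out=25.0):
            # Idealized power curve: cubic rise from cut-in to rated, flat to cut-out.
            v = np.linspace(0.0, 40.0, 4001)
            pdf = (k / c) * (v / c) ** (k - 1) * np.exp(-((v / c) ** k))  # Weibull pdf
            power = np.where((v >= v_cut_in) & (v < v_rated),
                             (v ** 3 - v_cut_in ** 3) / (v_rated ** 3 - v_cut_in ** 3),
                             0.0)
            power = np.where((v >= v_rated) & (v <= v_cut_out), 1.0, power)
            return float((power * pdf).sum() * (v[1] - v[0]))   # rectangle-rule integral

        # Same Weibull shape, increasing scale parameter (progressively windier sites).
        for c in (6.0, 8.0, 10.0):
            print(f"c = {c:4.1f} m/s -> capacity factor = {capacity_factor(k=2.0, c=c):.2f}")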

  4. Principal Effects of Axial Load on Moment-Distribution Analysis of Rigid Structures

    NASA Technical Reports Server (NTRS)

    James, Benjamin Wylie

    1935-01-01

    This thesis presents the method of moment distribution modified to include the effect of axial load upon the bending moments. This modification makes it possible to analyze accurately complex structures, such as rigid fuselage trusses, that heretofore had to be analyzed by approximate formulas and empirical rules. The method is simple enough to be practicable even for complex structures, and it gives a means of analysis for continuous beams that is simpler than the extended three-moment equation now in common use. When the effect of axial load is included, it is found that the basic principles of moment distribution remain unchanged, the only difference being that the factors used, instead of being constants for a given member, become functions of the axial load. Formulas have been developed for these factors, and curves plotted, so that their application requires no more work than moment distribution without axial load. Simple problems have been included to illustrate the use of the curves.

  5. Analysis of electron energy distribution function in the Linac4 H- source

    NASA Astrophysics Data System (ADS)

    Mochizuki, S.; Mattei, S.; Nishida, K.; Hatayama, A.; Lettry, J.

    2016-02-01

    To understand the Electron Energy Distribution Function (EEDF) in the Radio Frequency Inductively Coupled Plasmas (RF-ICPs) in hydrogen negative ion sources, a detailed analysis of the EEDFs using numerical simulation and a theoretical approach based on the Boltzmann equation has been performed. It is shown that the EEDF of RF-ICPs consists of two parts: a low-energy part, which obeys a Maxwellian distribution, and a high-energy part, which deviates from the Maxwellian distribution. These simulation results have been confirmed to be reasonable by the analytical approach. The results suggest that it is possible to enhance the dissociation of molecules and the resultant H- negative ion production by reducing the gas pressure.

  6. Detecting Distributed Network Traffic Anomaly with Network-Wide Correlation Analysis

    NASA Astrophysics Data System (ADS)

    Zonglin, Li; Guangmin, Hu; Xingmiao, Yao; Dan, Yang

    2008-12-01

    Distributed network traffic anomaly refers to a traffic abnormal behavior involving many links of a network and caused by the same source (e.g., a DDoS attack or worm propagation). The anomaly transiting in a single link might be unnoticeable and hard to detect, while the anomalous aggregation from many links can be prevailing and does more harm to the networks. Aiming at the similar features of a distributed traffic anomaly on many links, this paper proposes a network-wide detection method by performing anomalous correlation analysis of traffic signals' instantaneous parameters. In our method, traffic signals' instantaneous parameters are first computed, and their network-wide anomalous space is then extracted via traffic prediction. Finally, an anomaly is detected by a global correlation coefficient of the anomalous space. Our evaluation using Abilene traffic traces demonstrates the excellent performance of this approach for distributed traffic anomaly detection.

  7. Validation results of the IAG Dancer project for distributed GPS analysis

    NASA Astrophysics Data System (ADS)

    Boomkamp, H.

    2012-12-01

    The number of permanent GPS stations in the world has grown far too large to allow processing of all this data at analysis centers. The majority of these GPS sites do not even make their observation data available to the analysis centers, for various valid reasons. The current ITRF solution is still based on centralized analysis by the IGS, and subsequent densification of the reference frame via regional network solutions. Minor inconsistencies in analysis methods, software systems and data quality imply that this centralized approach is unlikely to ever reach the ambitious accuracy objectives of GGOS. The dependence on published data also makes it clear that a centralized approach will never provide a true global ITRF solution for all GNSS receivers in the world. If the data does not come to the analysis, the only alternative is to bring the analysis to the data. The IAG Dancer project has implemented a distributed GNSS analysis system on the internet in which each receiver can have its own analysis center in the form of a freely distributed JAVA peer-to-peer application. Global parameters for satellite orbits, clocks and polar motion are solved via a distributed least squares solution among all participating receivers. A Dancer instance can run on any computer that has simultaneous access to the receiver data and to the public internet. In the future, such a process may be embedded in the receiver firmware directly. GPS network operators can join the Dancer ITRF realization without having to publish their observation data or estimation products. GPS users can run a Dancer process without contributing to the global solution, to have direct access to the ITRF in near real-time. The Dancer software has been tested on-line since late 2011. A global network of processes has gradually evolved to allow stabilization and tuning of the software in order to reach a fully operational system. This presentation reports on the current performance of the Dancer system, and demonstrates the obvious benefits of distributed analysis of geodetic data in general.

  8. A Meta-Analysis of Distributed Leadership from 2002 to 2013: Theory Development, Empirical Evidence and Future Research Focus

    ERIC Educational Resources Information Center

    Tian, Meng; Risku, Mika; Collin, Kaija

    2016-01-01

    This article provides a meta-analysis of research conducted on distributed leadership from 2002 to 2013. It continues the review of distributed leadership commissioned by the English National College for School Leadership (NCSL) ("Distributed Leadership: A Desk Study," Bennett et al., 2003), which identified two gaps in the research…

  10. Phenotype Clustering of Breast Epithelial Cells in Confocal Imagesbased on Nuclear Protein Distribution Analysis

    SciTech Connect

    Long, Fuhui; Peng, Hanchuan; Sudar, Damir; Lelievre, Sophie A.; Knowles, David W.

    2006-09-05

    Background: The distribution of the chromatin-associated proteins plays a key role in directing nuclear function. Previously, we developed an image-based method to quantify the nuclear distributions of proteins and showed that these distributions depended on the phenotype of human mammary epithelial cells. Here we describe a method that creates a hierarchical tree of the given cell phenotypes and calculates the statistical significance between them, based on the clustering analysis of nuclear protein distributions. Results: Nuclear distributions of nuclear mitotic apparatus protein were previously obtained for non-neoplastic S1 and malignant T4-2 human mammary epithelial cells cultured for up to 12 days. Cell phenotype was defined as S1 or T4-2 and the number of days in culture. A probabilistic ensemble approach was used to define a set of consensus clusters from the results of multiple traditional cluster analysis techniques applied to the nuclear distribution data. Cluster histograms were constructed to show how cells in any one phenotype were distributed across the consensus clusters. Grouping various phenotypes allowed us to build phenotype trees and calculate the statistical difference between each group. The results showed that non-neoplastic S1 cells could be distinguished from malignant T4-2 cells with 94.19 percent accuracy; that proliferating S1 cells could be distinguished from differentiated S1 cells with 92.86 percent accuracy; and showed no significant difference between the various phenotypes of T4-2 cells corresponding to increasing tumor sizes. Conclusion: This work presents a cluster analysis method that can identify significant cell phenotypes, based on the nuclear distribution of specific proteins, with high accuracy.

  11. Development of a Web Service for Analysis in a Distributed Network

    PubMed Central

    Jiang, Xiaoqian; Wu, Yuan; Marsolo, Keith; Ohno-Machado, Lucila

    2014-01-01

    Objective: We describe functional specifications and practicalities in the software development process for a web service that allows the construction of a multivariate logistic regression model, Grid Logistic Regression (GLORE), by aggregating partial estimates from distributed sites, with no exchange of patient-level data. Background: We recently developed and published a web service for model construction and data analysis in a distributed environment. That paper provided an overview of the system that is useful for users, but included very few details that are relevant for biomedical informatics developers or network security personnel who may be interested in implementing this or similar systems. We focus here on how the system was conceived and implemented. Methods: We followed a two-stage development approach by first implementing the backbone system and then incrementally improving the user experience through interactions with potential users during the development. Our system went through various stages such as proof of concept, algorithm validation, user interface development, and system testing. We used the Zoho Project management system to track tasks and milestones, leveraged Google Code and Apache Subversion to share code among team members, and developed an applet-servlet architecture to support cross-platform deployment. Discussion: During the development process, we encountered challenges such as Information Technology (IT) infrastructure gaps and limited team experience in user-interface design. We worked out solutions, as well as enabling factors, to support the translation of an innovative privacy-preserving, distributed modeling technology into a working prototype. Conclusion: Using GLORE (a distributed model that we developed earlier) as a pilot example, we demonstrated the feasibility of building and integrating distributed modeling technology into a usable framework that can support privacy-preserving, distributed data analysis among researchers at geographically dispersed institutes. PMID:25848586
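
    The aggregation principle behind GLORE can be sketched as follows: each site returns only the gradient and Hessian of its logistic log-likelihood, and the coordinator sums them for a Newton-Raphson step, so no patient-level rows ever leave a site. This is an illustration of the principle on simulated data, not the GLORE codebase.

        import numpy as np

        def local_stats(X, y, beta):
            # One site's share of the global Newton-Raphson step: the gradient and
            # Hessian of its logistic log-likelihood. No patient-level rows leave.
            p = 1.0 / (1.0 + np.exp(-X @ beta))
            grad = X.T @ (y - p)
            hess = -(X * (p * (1.0 - p))[:, None]).T @ X
            return grad, hess

        rng = np.random.default_rng(5)
        beta_true = np.array([0.5, -1.0, 2.0])
        sites = []
        for _ in range(3):                      # three geographically dispersed sites
            X = np.column_stack([np.ones(400), rng.normal(size=(400, 2))])
            y = (rng.random(400) < 1.0 / (1.0 + np.exp(-X @ beta_true))).astype(float)
            sites.append((X, y))

        beta = np.zeros(3)
        for _ in range(25):                     # coordinator aggregates partial estimates
            grads, hessians = zip(*(local_stats(X, y, beta) for X, y in sites))
            beta = beta - np.linalg.solve(sum(hessians), sum(grads))
        print("pooled-equivalent estimate:", beta.round(3))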

  12. A two-scale Weibull approach to the failure of porous ceramic structures made by robocasting: possibilities and limits

    PubMed Central

    Genet, Martin; Houmard, Manuel; Eslava, Salvador; Saiz, Eduardo; Tomsia, Antoni P.

    2012-01-01

    This paper introduces our approach to modeling the mechanical behavior of cellular ceramics, through the example of calcium phosphate scaffolds made by robocasting for bone-tissue engineering. The Weibull theory is used to deal with the statistical failure of the scaffolds' constitutive rods, and the Sanchez-Palencia theory of periodic homogenization is used to link the rod and scaffold scales. Uniaxial compression of scaffolds and three-point bending of rods were performed to calibrate and validate the model. While calibration based on rod-scale data leads to over-conservative predictions of the scaffold's properties (as the rods' successive failures are not taken into account), we show that, for a given rod diameter, calibration based on scaffold-scale data leads to very satisfactory predictions for a wide range of rod spacings, i.e. of scaffold porosity, as well as for different loading conditions. This work establishes the proposed model as a reliable tool for understanding and optimizing the mechanical properties of cellular ceramics. PMID:23439936
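
    The rod-scale Weibull ingredient of such a model is the weakest-link failure probability with its volume dependence; a minimal sketch with assumed numbers (not the paper's calibration) is given below.

        import numpy as np

        def rod_failure_probability(sigma, m, sigma_0, volume, v_0=1.0):
            # Weibull weakest-link failure probability of a rod under uniform stress:
            # P_f = 1 - exp(-(V / V0) * (sigma / sigma_0) ** m)
            return 1.0 - np.exp(-(volume / v_0) * (sigma / sigma_0) ** m)

        # Illustrative numbers, not the paper's calibration: same stress, three volumes.
        for volume in (0.5, 1.0, 2.0):                       # mm^3
            pf = rod_failure_probability(sigma=60.0, m=5.0, sigma_0=80.0, volume=volume)
            print(f"V = {volume} mm^3 -> P_f = {pf:.2f}")    # larger rods fail more often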

  13. Adaptive Weibull Multiplicative Model and Multilayer Perceptron neural networks for dark-spot detection from SAR imagery.

    PubMed

    Taravat, Alireza; Oppelt, Natascha

    2014-01-01

    Oil spills represent a major threat to ocean ecosystems and their environmental status. Previous studies have shown that Synthetic Aperture Radar (SAR), as its recording is independent of clouds and weather, can be effectively used for the detection and classification of oil spills. Dark formation detection is the first and critical stage in oil-spill detection procedures. In this paper, a novel approach for automated dark-spot detection in SAR imagery is presented. A new approach combining an adaptive Weibull Multiplicative Model (WMM) and MultiLayer Perceptron (MLP) neural networks is proposed to differentiate between dark spots and the background. The results have been compared with those of a model combining a non-adaptive WMM and pulse coupled neural networks. The presented approach overcomes the non-adaptive WMM filter setting parameters by developing an adaptive WMM model, a step towards fully automatic dark-spot detection. The proposed approach was tested on 60 ENVISAT and ERS2 images which contained dark spots. For the overall dataset, an average accuracy of 94.65% was obtained. Our experimental results demonstrate that the proposed approach is very robust and effective in cases where the non-adaptive WMM & pulse coupled neural network (PCNN) model generates poor accuracies. PMID:25474376

  14. Adaptive Weibull Multiplicative Model and Multilayer Perceptron Neural Networks for Dark-Spot Detection from SAR Imagery

    PubMed Central

    Taravat, Alireza; Oppelt, Natascha

    2014-01-01

    Oil spills represent a major threat to ocean ecosystems and their environmental status. Previous studies have shown that Synthetic Aperture Radar (SAR), as its recording is independent of clouds and weather, can be effectively used for the detection and classification of oil spills. Dark formation detection is the first and critical stage in oil-spill detection procedures. In this paper, a novel approach for automated dark-spot detection in SAR imagery is presented. A new approach combining an adaptive Weibull Multiplicative Model (WMM) and MultiLayer Perceptron (MLP) neural networks is proposed to differentiate between dark spots and the background. The results have been compared with those of a model combining a non-adaptive WMM and pulse coupled neural networks. The presented approach overcomes the non-adaptive WMM filter setting parameters by developing an adaptive WMM model, a step towards fully automatic dark-spot detection. The proposed approach was tested on 60 ENVISAT and ERS2 images which contained dark spots. For the overall dataset, an average accuracy of 94.65% was obtained. Our experimental results demonstrate that the proposed approach is very robust and effective in cases where the non-adaptive WMM & pulse coupled neural network (PCNN) model generates poor accuracies. PMID:25474376

  15. Modeling Exon-Specific Bias Distribution Improves the Analysis of RNA-Seq Data

    PubMed Central

    Liu, Xuejun; Zhang, Li; Chen, Songcan

    2015-01-01

    RNA-seq technology has become an important tool for quantifying gene and transcript expression in transcriptome studies. The two major difficulties for gene and transcript expression quantification are read mapping ambiguity and the overdispersion of the read distribution along the reference sequence. Many approaches have been proposed to deal with these difficulties. A number of existing methods use the Poisson distribution to model the read counts, and this easily splits the counts into the contributions from multiple transcripts. Meanwhile, various solutions have been put forward to account for the overdispersion in the Poisson models. By checking the similarities among the variation patterns of read counts for individual genes, we found that the count variation is exon-specific and has a conserved pattern across samples for each individual gene. We introduce Gamma-distributed latent variables to model the read sequencing preference for each exon. These variables are embedded in the rate parameter of a Poisson model to account for the overdispersion of the read distribution. The model is tractable since the Gamma priors can be integrated out in the maximum likelihood estimation. We evaluate the proposed approach, PGseq, using four real datasets and one simulated dataset, and compare its performance with other popular methods. Results show that PGseq presents competitive performance compared to other alternatives in terms of accuracy in gene and transcript expression calculation and in the downstream differential expression analysis. In particular, we show the advantage of our method in the analysis of low expression. PMID:26448625
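
    The key modeling step, integrating out the Gamma-distributed latent variables, can be verified numerically: a Gamma-Poisson mixture has a negative binomial marginal. The parameter values below are assumptions made for the check, not estimates from the paper.

        import numpy as np
        from scipy import stats

        # A Gamma(shape=r, scale=theta) latent exon preference on the Poisson rate
        # integrates out to a negative binomial marginal for the read count.
        r, theta, mu = 4.0, 0.5, 10.0     # bias shape/scale and base expression (assumed)
        rng = np.random.default_rng(6)

        bias = rng.gamma(shape=r, scale=theta, size=200_000)   # latent exon preference
        counts = rng.poisson(mu * bias)

        # Closed-form marginal: NB with n = r and p = 1 / (1 + mu * theta).
        p = 1.0 / (1.0 + mu * theta)
        ks = np.arange(5)
        print("empirical:", [round(float((counts == k).mean()), 4) for k in ks])
        print("analytic :", np.round(stats.nbinom.pmf(ks, r, p), 4))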

  16. Analysis of spatial distribution of mining tremors occurring in Rudna copper mine (Poland)

    NASA Astrophysics Data System (ADS)

    Kozłowska, Maria

    2013-10-01

    The distribution of mining tremors is strictly related to the progress of mining works and, consequently, to the local stress field. If the distribution is known, it is possible to determine the future area of intensive seismicity in an exploited mining panel. In this paper, an analysis of working face-to-tremor distance for the Rudna copper mine in Poland is presented. In order to develop a spatial model of tremor occurrence in the exploited mine, the seismicity of four mining sections over a five-month period was investigated and the tremor distribution was obtained. It was compared with the spatial distribution of tremors in coal mines found in the literature. The results show that the places where tremors mostly occur, in the vicinity of the face and just in front of it, coincide with the high-stress area predicted by literature models. The obtained results help to predict the future seismic zone connected with a planned mining section, which can be used in seismic hazard analysis.

  17. Stress distribution around osseointegrated implants with different internal-cone connections: photoelastic and finite element analysis.

    PubMed

    Anami, Lilian Costa; da Costa Lima, Júlia Magalhães; Takahashi, Fernando Eidi; Neisser, Maximiliano Piero; Noritomi, Pedro Yoshito; Bottino, Marco Antonio

    2015-04-01

    The goal of this study was to evaluate the distribution of stresses generated around implants with different internal-cone abutments by photoelastic analysis (PA) and finite element analysis (FEA). For the FEA, implants and abutments with different internal-cone connections (H, hexagonal; S, solid) were scanned, 3D meshes were modeled, and the objects were loaded with computer software. Trabecular and cortical bone and photoelastic resin blocks were simulated. The PA was performed with photoelastic resin blocks in which the implants were embedded and the different abutments were bolted. Specimens were observed in a circular polariscope with the load application device attached, and loads were applied under the same conditions as in the FEA. The FEA images showed very similar stress distributions between the two models with different abutments. Differences were observed between the stress distributions in the bone and resin blocks; the PA images resembled those obtained from the resin-block FEA. The PA images were also quantitatively analyzed by comparing the values assigned to fringes. It was observed that the S abutment distributes loads more evenly to the bone adjacent to an implant when compared to the H abutment, for both analysis methods used. It was also observed that the PA generated results very similar to those obtained in the FEA with the resin block. PMID:23750560

  18. A landscape analysis of cougar distribution and abundance in Montana, USA.

    PubMed

    Riley, S J; Malecki, R A

    2001-09-01

    Recent growth in the distribution and abundance of cougars (Puma concolor) throughout western North America has created opportunities, challenges, and problems for wildlife managers and raises questions about what factors affect cougar populations. We present an analysis of factors thought to affect cougar distribution and abundance across the broad geographical scales on which most population management decisions are made. Our objectives were to: (1) identify and evaluate landscape parameters that can be used to predict the capability of habitats to support cougars, and (2) evaluate factors that may account for the recent expansion in cougar numbers. Habitat values based on terrain ruggedness and forested cover explained 73% of the variation in a cougar abundance index. Indices of cougar abundance also were spatially and temporally correlated with ungulate abundance. An increase in the number and total biomass of ungulate prey species is hypothesized to account for recent increases in cougars. Cougar populations in Montana are coping with land development by humans when other components of habitat and prey populations are sufficient. Our analysis provides a better understanding of what may have influenced recent growth in cougar distribution and abundance in Montana and, when combined with insights about stakeholder acceptance capacity, offers a basis for cougar management at broad scales. Long-term conservation of cougars necessitates a better understanding of ecosystem functions that affect prey distribution and abundance, more accurate estimates of cougar populations, and management abilities to integrate these components with human values. PMID:11531235

  19. A landscape analysis of cougar distribution and abundance in Montana, USA.

    TOXLINE Toxicology Bibliographic Information

    Riley SJ; Malecki RA

    2001-09-01

    Recent growth in the distribution and abundance of cougars (Puma concolor) throughout western North America has created opportunities, challenges, and problems for wildlife managers and raises questions about what factors affect cougar populations. We present an analysis of factors thought to affect cougar distribution and abundance across the broad geographical scales on which most population management decisions are made. Our objectives were to: (1) identify and evaluate landscape parameters that can be used to predict the capability of habitats to support cougars, and (2) evaluate factors that may account for the recent expansion in cougar numbers. Habitat values based on terrain ruggedness and forested cover explained 73% of the variation in a cougar abundance index. Indices of cougar abundance also were spatially and temporally correlated with ungulate abundance. An increase in the number and total biomass of ungulate prey species is hypothesized to account for recent increases in cougars. Cougar populations in Montana are coping with land development by humans when other components of habitat and prey populations are sufficient. Our analysis provides a better understanding of what may have influenced recent growth in cougar distribution and abundance in Montana and, when combined with insights about stakeholder acceptance capacity, offers a basis for cougar management at broad scales. Long-term conservation of cougars necessitates a better understanding of ecosystem functions that affect prey distribution and abundance, more accurate estimates of cougar populations, and management abilities to integrate these components with human values.

  20. Performance analysis of a brushless dc motor due to magnetization distribution in a continuous ring magnet

    NASA Astrophysics Data System (ADS)

    Hur, Jin; Jung, In-Soung; Sung, Ha-Gyeong; Park, Soon-Sup

    2003-05-01

    This paper presents the force performance of a brushless dc motor with a continuous ring-type permanent magnet (PM), considering its magnetization patterns: trapezoidal, trapezoidal with dead zone, and unbalanced trapezoidal magnetization with dead zone. The radial force density in a PM motor causes vibration, because vibration is induced by the traveling force from the rotating PM acting on the stator. The magnetization distribution of the PM as well as the shape of the teeth determines the distribution of force density. In particular, the distribution has a three-dimensional (3-D) pattern because of the overhang; that is, it is not uniform in the axial direction. Thus, the analysis of radial force density requires dynamic analysis considering the 3-D shape of the teeth and overhang. The results show that the force density as a source of vibration varies considerably depending on the overhang and the magnetization distribution pattern. In addition, the validity of the developed method, a coupled 3-D equivalent magnetic circuit network method with driving circuit and motion equation, is confirmed by comparison with a conventional method using the 3-D finite element method.

  1. Impact of hadronic and nuclear corrections on global analysis of spin-dependent parton distributions

    SciTech Connect

    Jimenez-Delgado, Pedro; Accardi, Alberto; Melnitchouk, Wally

    2014-02-01

    We present the first results of a new global next-to-leading order analysis of spin-dependent parton distribution functions from the most recent world data on inclusive polarized deep-inelastic scattering, focusing in particular on the large-x and low-Q^2 regions. By directly fitting polarization asymmetries we eliminate biases introduced by using polarized structure function data extracted under nonuniform assumptions for the unpolarized structure functions. For analysis of the large-x data we implement nuclear smearing corrections for deuterium and 3He nuclei, and systematically include target mass and higher twist corrections to the g_1 and g_2 structure functions at low Q^2. We also explore the effects of Q^2 and W^2 cuts in the data sets, and the potential impact of future data on the behavior of the spin-dependent parton distributions at large x.

  2. Risk analysis of highly combustible gas storage, supply, and distribution systems in PWR plants

    SciTech Connect

    Simion, G.P.; VanHorn, R.L.; Smith, C.L.; Bickel, J.H.; Sattison, M.B.; Bulmahn, K.D.

    1993-06-01

    This report presents the evaluation of the potential safety concerns for pressurized water reactors (PWRs) identified in Generic Safety Issue 106, Piping and the Use of Highly Combustible Gases in Vital Areas. A Westinghouse four-loop PWR plant was analyzed for the risk due to the use of combustible gases (predominantly hydrogen) within the plant. The analysis evaluated an actual hydrogen distribution configuration and conducted several sensitivity studies to determine the potential variability among PWRs. The sensitivity studies were based on hydrogen and safety-related equipment configurations observed at other PWRs within the United States. Several options for improving the hydrogen distribution system design were identified and evaluated for their effect on risk and core damage frequency. A cost/benefit analysis was performed to determine whether alternatives considered were justifiable based on the safety improvement and economics of each possible improvement.

  3. Three-dimensional gamma analysis of dose distributions in individual structures for IMRT dose verification.

    PubMed

    Tomiyama, Yuuki; Araki, Fujio; Oono, Takeshi; Hioki, Kazunari

    2014-07-01

    Our purpose in this study was to implement three-dimensional (3D) gamma analysis for structures of interest such as the planning target volume (PTV) or clinical target volume (CTV), and organs at risk (OARs) for intensity-modulated radiation therapy (IMRT) dose verification. IMRT dose distributions for prostate and head and neck (HN) cancer patients were calculated with an analytical anisotropic algorithm in an Eclipse (Varian Medical Systems) treatment planning system (TPS) and by Monte Carlo (MC) simulation. The MC dose distributions were calculated with EGSnrc/BEAMnrc and DOSXYZnrc user codes under conditions identical to those for the TPS. The prescribed doses were 76 Gy/38 fractions with five-field IMRT for the prostate and 33 Gy/17 fractions with seven-field IMRT for the HN. TPS dose distributions were verified by the gamma passing rates for the whole calculated volume, PTV or CTV, and OARs by use of 3D gamma analysis with reference to MC dose distributions. The acceptance criteria for the 3D gamma analysis were 3 %/3 mm and 2 %/2 mm for the dose difference and distance to agreement. The gamma passing rates in PTV and OARs for the prostate IMRT plan were close to 100 %. For the HN IMRT plan, the passing rates of 2 %/2 mm in CTV and OARs were substantially lower because inhomogeneous tissues such as bone and air in the HN are included in the calculation area. 3D gamma analysis for individual structures is useful for IMRT dose verification. PMID:24796955
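
As an aside on how the criterion works: a brute-force 3D gamma computation can be sketched in a few lines. The snippet below is a minimal illustration with hypothetical dose arrays and voxel size, global normalization, and a low-dose cutoff; it is not the authors' Eclipse/EGSnrc pipeline.

```python
# Brute-force 3D gamma pass rate (sketch). Assumes both grids share the same
# shape and an isotropic voxel size; the dose tolerance is taken globally.
import numpy as np

def gamma_pass_rate(dose_eval, dose_ref, voxel_mm, dd_pct=3.0, dta_mm=3.0,
                    cutoff=0.10):
    norm = float(dose_ref.max())
    dd = dd_pct / 100.0 * norm                  # absolute dose tolerance
    r = int(np.ceil(2 * dta_mm / voxel_mm))     # search radius (voxels)
    pad = np.pad(dose_eval, r, mode='edge')
    nz, ny, nx = dose_ref.shape
    gamma2 = np.full(dose_ref.shape, np.inf)
    for i in range(-r, r + 1):
        for j in range(-r, r + 1):
            for k in range(-r, r + 1):
                dist2 = (i * i + j * j + k * k) * voxel_mm ** 2
                if dist2 > (2 * dta_mm) ** 2:
                    continue                    # cap the spatial search at 2*DTA
                shifted = pad[r + i:r + i + nz, r + j:r + j + ny, r + k:r + k + nx]
                g2 = dist2 / dta_mm ** 2 + (shifted - dose_ref) ** 2 / dd ** 2
                gamma2 = np.minimum(gamma2, g2)  # keep the best candidate point
    mask = dose_ref >= cutoff * norm            # ignore low-dose voxels
    return float(np.mean(gamma2[mask] <= 1.0))
```

Calling gamma_pass_rate(tps_dose, mc_dose, voxel_mm=2.5) would give the 3 %/3 mm rate over the whole grid; restricting the mask to a single structure's voxels yields per-structure rates of the kind reported above.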

  4. Systematic analysis of mutation distribution in three dimensional protein structures identifies cancer driver genes.

    PubMed

    Fujimoto, Akihiro; Okada, Yukinori; Boroevich, Keith A; Tsunoda, Tatsuhiko; Taniguchi, Hiroaki; Nakagawa, Hidewaki

    2016-01-01

    Protein tertiary structure determines molecular function, interaction, and stability of the protein; therefore, the distribution of mutations in the tertiary structure can facilitate the identification of new driver genes in cancer. To analyze mutation distribution in protein tertiary structures, we applied a novel three-dimensional permutation test to the mutation positions. We analyzed somatic mutation datasets of 21 types of cancers obtained from exome sequencing conducted by the TCGA project. Of the 3,622 genes that had ≥3 mutations in the regions with tertiary structure data, 106 genes showed significant skew in mutation distribution. Known tumor suppressors and oncogenes were significantly enriched in these identified cancer gene sets. Physical distances between mutations in known oncogenes were significantly smaller than those of tumor suppressors. Twenty-three genes were detected in multiple cancers. Candidate genes with significant skew of the 3D mutation distribution included kinases (MAPK1, EPHA5, ERBB3, and ERBB4), an apoptosis-related gene (APP), an RNA splicing factor (SF1), a miRNA processing factor (DICER1), an E3 ubiquitin ligase (CUL1) and transcription factors (KLF5 and EEF1B2). Our study suggests that systematic analysis of mutation distribution in the tertiary protein structure can help identify cancer driver genes. PMID:27225414
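
The permutation idea generalizes easily: compare the spatial spread of the observed mutation positions against random draws of the same number of residues from the same structure. A minimal sketch follows (illustrative only, using mean pairwise Cα distance rather than the authors' exact statistic; coordinates and mutated indices are made up):

```python
# Permutation test for spatial clustering of mutations in a 3D structure.
import numpy as np

def mutation_clustering_pvalue(coords, mutated_idx, n_perm=2000, seed=0):
    rng = np.random.default_rng(seed)

    def mean_pairwise(idx):
        pts = coords[idx]
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        return d[np.triu_indices(len(idx), k=1)].mean()

    observed = mean_pairwise(mutated_idx)
    k = len(mutated_idx)
    null = np.array([mean_pairwise(rng.choice(len(coords), k, replace=False))
                     for _ in range(n_perm)])
    # one-sided: a smaller mean distance means more spatially clustered mutations
    return (np.sum(null <= observed) + 1) / (n_perm + 1)

rng_demo = np.random.default_rng(1)
coords = rng_demo.normal(size=(300, 3)) * 20     # toy C-alpha coordinates (angstrom)
mutated = np.array([5, 6, 7, 40, 41])            # hypothetical mutated residues
print(mutation_clustering_pvalue(coords, mutated))
```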

  5. Some physics and system issues in the security analysis of quantum key distribution protocols

    NASA Astrophysics Data System (ADS)

    Yuen, Horace P.

    2014-10-01

    In this paper, we review a number of issues on the security of quantum key distribution (QKD) protocols that bear directly on the relevant physics or mathematical representation of the QKD cryptosystem. It is shown that the cryptosystem representation itself may omit many possible attacks, which are then not accounted for in the security analysis and proofs. Hence, the final security claims drawn from such analysis are not reliable, apart from foundational issues about the security criteria that are discussed elsewhere. The cases of continuous-variable QKD and multi-photon sources are elaborated upon.

  6. Comparative Analysis of HIV-1 and Murine Leukemia Virus Three-Dimensional Nuclear Distributions.

    PubMed

    Quercioli, Valentina; Di Primio, Cristina; Casini, Antonio; Mulder, Lubbertus C F; Vranckx, Lenard S; Borrenberghs, Doortje; Gijsbers, Rik; Debyser, Zeger; Cereseto, Anna

    2016-05-15

    Recent advances in fluorescence microscopy allow three-dimensional analysis of HIV-1 preintegration complexes in the nuclei of infected cells. To extend this investigation to gammaretroviruses, we engineered a fluorescent Moloney murine leukemia virus (MLV) system consisting of MLV-integrase fused to enhanced green fluorescent protein (MLV-IN-EGFP). A comparative analysis of lentiviral (HIV-1) and gammaretroviral (MLV) fluorescent complexes in the nuclei of infected cells revealed their different spatial distributions. This research tool has the potential to achieve new insight into the nuclear biology of these retroviruses. PMID:26962222

  7. Single-phase power distribution system power flow and fault analysis

    NASA Technical Reports Server (NTRS)

    Halpin, S. M.; Grigsby, L. L.

    1992-01-01

    Alternative methods for power flow and fault analysis of single-phase distribution systems are presented. The algorithms for both power flow and fault analysis utilize a generalized approach to network modeling. The generalized admittance matrix, formed using elements of linear graph theory, is an accurate network model for all possible single-phase network configurations. Unlike the standard nodal admittance matrix formulation algorithms, the generalized approach uses generalized component models for the transmission line and transformer. The standard assumption of a common node voltage reference point is not required to construct the generalized admittance matrix. Therefore, truly accurate simulation results can be obtained for networks that cannot be modeled using traditional techniques.
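
For contrast with the generalized formulation described above, a conventional nodal-admittance solve for a toy single-phase feeder might look as follows (made-up impedances and loads; note that it relies on exactly the common voltage reference the generalized approach avoids):

```python
# Conventional nodal admittance analysis of a tiny single-phase feeder.
import numpy as np

lines = [(0, 1, 1 / (0.05 + 0.10j)),   # (from node, to node, series admittance)
         (1, 2, 1 / (0.04 + 0.08j))]
n = 3
Y = np.zeros((n, n), dtype=complex)
for a, b, y in lines:                  # standard admittance-matrix stamping
    Y[a, a] += y; Y[b, b] += y
    Y[a, b] -= y; Y[b, a] -= y

V0 = 2400.0 + 0j                       # node 0 held at the source voltage
I = np.array([0, -20 - 10j, -15 - 5j]) # load current injections (A), made up

# Reduce out the slack node: Y_uu V_u = I_u - Y_u0 * V0
V_unknown = np.linalg.solve(Y[1:, 1:], I[1:] - Y[1:, 0] * V0)
print(np.abs(V_unknown))               # voltage magnitudes at nodes 1 and 2
```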

  8. Improving the Analysis, Storage and Sharing of Neuroimaging Data using Relational Databases and Distributed Computing

    PubMed Central

    Hasson, Uri; Skipper, Jeremy I.; Wilde, Michael J.; Nusbaum, Howard C.; Small, Steven L.

    2007-01-01

    The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, to allow collaborative research, and to achieve these aims securely and with minimum management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data. PMID:17964812

  9. A new parallel-vector finite element analysis software on distributed-memory computers

    NASA Technical Reports Server (NTRS)

    Qin, Jiangning; Nguyen, Duc T.

    1993-01-01

    A new parallel-vector finite element analysis software package MPFEA (Massively Parallel-vector Finite Element Analysis) is developed for large-scale structural analysis on massively parallel computers with distributed memory. MPFEA is designed for parallel generation and assembly of the global finite element stiffness matrices as well as parallel solution of the simultaneous linear equations, since these are often the major time-consuming parts of a finite element analysis. A block-skyline storage scheme, along with vector-unrolling techniques, is used to enhance vector performance. Communications among processors are carried out concurrently with arithmetic operations to reduce the total execution time. Numerical results on the Intel iPSC/860 computers (such as the Intel Gamma with 128 processors and the Intel Touchstone Delta with 512 processors) are presented, including an aircraft structure and some very large truss structures, to demonstrate the efficiency and accuracy of MPFEA.

  10. Shared and Distributed Memory Parallel Security Analysis of Large-Scale Source Code and Binary Applications

    SciTech Connect

    Quinlan, D; Barany, G; Panas, T

    2007-08-30

    Many forms of security analysis on large-scale applications can be substantially automated, but the size and complexity of the applications can exceed the time and memory available on conventional desktop computers. Most commercial tools are understandably focused on such conventional desktop resources. This paper presents research work on the parallelization of security analysis of both source code and binaries within our Compass tool, which is implemented using the ROSE source-to-source open compiler infrastructure. We have focused on both shared and distributed memory parallelization of the evaluation of rules implemented as checkers for a wide range of secure programming rules, applicable to desktop machines, networks of workstations, and dedicated clusters. While Compass as a tool focuses on source code analysis and reports violations of an extensible set of rules, the binary analysis work uses the same infrastructure but is less fully developed into an equivalent final tool.

  11. A performance analysis method for distributed real-time robotic systems: A case study of remote teleoperation

    NASA Technical Reports Server (NTRS)

    Lefebvre, D. R.; Sanderson, A. C.

    1994-01-01

    Robot coordination and control systems for remote teleoperation applications are by necessity implemented on distributed computers. Modeling and performance analysis of these distributed robotic systems is difficult, but important for economic system design. Performance analysis methods originally developed for conventional distributed computer systems are often unsatisfactory for evaluating real-time systems. The paper introduces a formal model of distributed robotic control systems and a performance analysis method, based on scheduling theory, that can handle concurrent hard real-time response specifications. Use of the method is illustrated by a case study of remote teleoperation which assesses the effect of communication delays and the allocation of robot control functions on control system hardware requirements.

  12. Integration of enzyme kinetic models and isotopomer distribution analysis for studies of in situ cell operation.

    PubMed

    Selivanov, Vitaly A; Sukhomlin, Tatiana; Centelles, Josep J; Lee, Paul W N; Cascante, Marta

    2006-01-01

    A current trend in neuroscience research is the use of stable isotope tracers in order to address metabolic processes in vivo. The tracers produce a huge number of metabolite forms that differ according to the number and position of labeled isotopes in the carbon skeleton (isotopomers), and such a large variety makes the analysis of isotopomer data highly complex. On the other hand, this multiplicity of forms does provide sufficient information to address cell operation in vivo. By the end of the last millennium, a number of tools had been developed for estimating the metabolic flux profile from any possible isotopomer distribution data. However, although well elaborated, these tools were limited to steady-state analysis, and the obtained set of fluxes remained disconnected from their biochemical context. In this review we focus on a new numerical analytical approach that integrates kinetic and metabolic flux analysis. The related computational algorithm estimates the dynamic flux based on the time-dependent distribution of all possible isotopomers of metabolic pathway intermediates that are generated from a labeled substrate. The new algorithm connects specific tracer data with enzyme kinetic characteristics, thereby extending the amount of data available for analysis: it uses enzyme kinetic data to estimate the flux profile and, vice versa, for the kinetic analysis it uses in vivo tracer data to reveal the biochemical basis of the estimated metabolic fluxes. PMID:17118161

  13. Using spatial gradient analysis to clarify species distributions with application to South African protea

    NASA Astrophysics Data System (ADS)

    Terres, Maria A.; Gelfand, Alan E.

    2015-07-01

    Typical ecological gradient analyses consider variation in the response of plants along a gradient of covariate values, but generally constrain themselves to predetermined response curves and ignore spatial autocorrelation. In this paper, we develop a formal spatial gradient analysis. We adopt the mathematical definition of gradients as directional rates of change with regard to a spatial surface. We view both the response and the covariate as spatial surfaces over a region of interest with respective gradient behavior. The gradient analysis we propose enables local comparison of these gradients. At any spatial location, we compare the behavior of the response surface with the behavior of the covariate surface to provide a novel form of sensitivity analysis. More precisely, we first fit a joint hierarchical Bayesian spatial model for a response variable and an environmental covariate. Then, after model fitting, at a given location, for each variable, we can obtain the posterior distribution of the derivative in any direction. We use these distributions to compute spatial sensitivities and angular discrepancies enabling a more detailed picture of the spatial nature of the response-covariate relationship. This methodology is illustrated using species presence probability as a response to elevation for two species of South African protea. We also offer a comparison with sensitivity analysis using geographically weighted regression. We show that the spatial gradient analysis allows for more extensive inference and provides a much richer description of the spatially varying relationships.
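
A crude numerical analogue of this gradient comparison can be written directly, using finite-difference gradients of two synthetic surfaces; the actual method works with posterior distributions of directional derivatives from the fitted Bayesian model, which this sketch ignores:

```python
# Compare local gradient fields of a response surface and a covariate surface.
import numpy as np

h = 100.0                                     # grid spacing (m), made up
y, x = np.mgrid[0:50, 0:50] * h
elev = 1000 + 0.002 * x + 0.004 * y           # covariate surface (elevation)
prob = 1 / (1 + np.exp(-(elev - 1200) / 80))  # response surface (presence prob.)

gy_r, gx_r = np.gradient(prob, h)             # response gradient field
gy_c, gx_c = np.gradient(elev, h)             # covariate gradient field

mag_r, mag_c = np.hypot(gx_r, gy_r), np.hypot(gx_c, gy_c)
cosang = (gx_r * gx_c + gy_r * gy_c) / np.where(mag_r * mag_c == 0, 1,
                                                mag_r * mag_c)
angular_discrepancy = np.degrees(np.arccos(np.clip(cosang, -1, 1)))
spatial_sensitivity = mag_r / np.where(mag_c == 0, np.nan, mag_c)
```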

  14. Revealing protein oligomerization and densities in situ using spatial intensity distribution analysis

    PubMed Central

    Godin, Antoine G.; Costantino, Santiago; Lorenzo, Louis-Etienne; Swift, Jody L.; Sergeev, Mikhail; Ribeiro-da-Silva, Alfredo; De Koninck, Yves; Wiseman, Paul W.

    2011-01-01

    Measuring protein interactions is key to understanding cell signaling mechanisms, but quantitative analysis of these interactions in situ has remained a major challenge. Here, we present spatial intensity distribution analysis (SpIDA), an analysis technique for image data obtained using standard fluorescence microscopy. SpIDA directly measures fluorescent macromolecule densities and oligomerization states sampled within single images. The method is based on fitting intensity histograms calculated from images to obtain density maps of fluorescent molecules and their quantal brightness. Because spatial distributions are acquired by imaging, SpIDA can be applied to the analysis of images of chemically fixed tissue as well as live cells. However, the technique does not rely on spatial correlations, freeing it from biases caused by subcellular compartmentalization and heterogeneity within tissue samples. Analysis of computer-based simulations and immunocytochemically stained GABAB receptors in spinal cord samples shows that the approach yields accurate measurements over a broader range of densities than established procedures. SpIDA is applicable to sampling within small areas (6 μm²) and reveals the presence of monomers and dimers with single-dye labeling. Finally, using GFP-tagged receptor subunits, we show that SpIDA can resolve dynamic changes in receptor oligomerization in live cells. The advantages and greater versatility of SpIDA over current techniques open the door to quantitative studies of protein interactions in native tissue using standard fluorescence microscopy. PMID:21482753

  15. Harmonic amplitude distribution in a wideband ultrasonic wavefront after propagation through human abdominal wall and breast specimens.

    PubMed

    Liu, D L; Waag, R C

    1997-02-01

    The amplitude characteristics of ultrasonic wavefront distortion produced by transmission through the abdominal wall and breast are described. Ultrasonic pulses were recorded in a two-dimensional aperture after transmission through specimens of abdominal wall or breast. After the pulse arrival times were corrected for geometric path differences, the pulses were temporally Fourier transformed and two-dimensional maps of harmonic amplitudes in the measurement aperture were computed. The results indicate that, as the temporal frequency increases, the fluctuation in harmonic amplitudes increases but the spatial scale of the fluctuation decreases. The normalized second-order and third-order moments of the amplitude distribution also increase with temporal frequency. The wide-ranging variation of these distribution characteristics could not be covered by the Rayleigh, Rician, or K-distribution because of their limited flexibility. However, the Weibull distribution and especially the generalized K-distribution provide better fits to the data. In the fit of the generalized K-distribution, a decrease of its parameter alpha with increasing temporal frequency was observed, as predicted by analysis based on a phase screen model. PMID:9035403
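
The distribution comparison can be reproduced in outline with standard tools: fit each candidate model by maximum likelihood and compare information criteria. A sketch on synthetic amplitudes follows (the generalized K-distribution has no SciPy implementation, so only Rayleigh, Rician, and Weibull are compared):

```python
# Compare candidate amplitude models by AIC on (synthetic) amplitude data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
amps = stats.weibull_min.rvs(1.8, scale=1.0, size=4000, random_state=rng)

for name, dist in [("Rayleigh", stats.rayleigh),
                   ("Rician", stats.rice),
                   ("Weibull", stats.weibull_min)]:
    params = dist.fit(amps, floc=0)            # MLE fit, location pinned at 0
    ll = np.sum(dist.logpdf(amps, *params))
    k = len(params) - 1                        # free parameters (loc was fixed)
    print(f"{name:8s} AIC = {2 * k - 2 * ll:.1f}")
```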

  16. Global Sensitivity Analysis of a Distributed Hydrologic Model Using Latin Hypercube Sampling

    NASA Astrophysics Data System (ADS)

    Dessalegne, T.; Senarath, S. U.; Novoa, R. J.

    2009-12-01

    Having a good understanding of sensitive model parameters is vital for effective calibration of highly parameterized distributed hydrologic models. A global sensitivity analysis approach based on Latin hypercube sampling and multiple linear regression is used in this study to investigate the effect of model parameter variability on simulated stages in the Everglades National Park (ENP) in Florida, USA. The study is conducted using the distributed-parameter Regional Simulation Model (RSM) developed by the South Florida Water Management District. The model domain encompasses an area of 4,330 square kilometers and is represented by 1,184 irregular triangular cells in the RSM. The model, among other features, has the capability to simulate seasonal evapotranspiration, overland and groundwater flows, as well as depth-dependent surface roughness. The parameters considered for sensitivity analysis are those that influence overland and groundwater flows as well as evapotranspiration within the ENP. The study evaluates the relative magnitudes of model parameter sensitivities under dry, wet, and average climatic conditions. The findings of this study have important implications for calibration, validation, and uncertainty analysis of distributed hydrologic models under different meteorological conditions. The results of this study are also useful for better targeting of future field data collection and sampling efforts.
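
In outline, the sampling-plus-regression screening could be set up as below, with hypothetical parameter names and ranges standing in for the RSM inputs and a placeholder function standing in for the simulated stages:

```python
# Latin hypercube sampling + standardized regression coefficients (sketch).
import numpy as np
from scipy.stats import qmc

names = ["manning_n", "kh", "kv", "et_coeff"]     # hypothetical RSM parameters
lo = np.array([0.1, 10.0, 0.1, 0.5])              # made-up lower bounds
hi = np.array([1.0, 500.0, 5.0, 1.2])             # made-up upper bounds

X = qmc.scale(qmc.LatinHypercube(d=len(names), seed=42).random(n=500), lo, hi)

rng = np.random.default_rng(0)
def simulated_stage(x):                           # placeholder for an RSM run
    return 2.0 * x[:, 0] + 0.01 * x[:, 1] - 0.5 * x[:, 2] + rng.normal(0, 0.05, len(x))

y = simulated_stage(X)
Xs = (X - X.mean(0)) / X.std(0)                   # standardize inputs and output
ys = (y - y.mean()) / y.std()
beta, *_ = np.linalg.lstsq(np.c_[np.ones(len(Xs)), Xs], ys, rcond=None)
for name, b in zip(names, beta[1:]):              # standardized regression coeffs
    print(f"{name:10s} SRC = {b:+.3f}")
```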

  17. A distributed analysis and visualization system for model and observational data

    NASA Technical Reports Server (NTRS)

    Wilhelmson, Robert B.

    1994-01-01

    Software was developed with NASA support to aid in the analysis and display of the massive amounts of data generated from satellites, observational field programs, and from model simulations. This software was developed in the context of the PATHFINDER (Probing ATmospHeric Flows in an Interactive and Distributed EnviRonment) Project. The overall aim of this project is to create a flexible, modular, and distributed environment for data handling, modeling simulations, data analysis, and visualization of atmospheric and fluid flows. Software completed with NASA support includes GEMPAK analysis, data handling, and display modules for which collaborators at NASA had primary responsibility, and prototype software modules for three-dimensional interactive and distributed control and display as well as data handling, for which NCSA was responsible. Overall process control was handled through a scientific and visualization application builder from Silicon Graphics known as the Iris Explorer. In addition, the GEMPAK related work (GEMVIS) was also ported to the Advanced Visualization System (AVS) application builder. Many modules were developed to enhance those already available in Iris Explorer, including HDF file support, improved visualization and display, simple lattice math, and the handling of metadata through development of a new grid datatype. Complete source and runtime binaries along with on-line documentation are available via the World Wide Web at: http://redrock.ncsa.uiuc.edu/PATHFINDER/pathre12/top/top.html

  18. Mathematical Ecology Analysis of Geographical Distribution of Soybean-Nodulating Bradyrhizobia in Japan

    PubMed Central

    Saeki, Yuichi; Shiro, Sokichi; Tajima, Toshiyuki; Yamamoto, Akihiro; Sameshima-Saito, Reiko; Sato, Takashi; Yamakawa, Takeo

    2013-01-01

    We characterized the relationship between the genetic diversity of indigenous soybean-nodulating bradyrhizobia from weakly acidic soils in Japan and their geographical distribution in an ecological study of indigenous soybean rhizobia. We isolated bradyrhizobia from three kinds of Rj-genotype soybeans. Their genetic diversity and community structure were analyzed by PCR-RFLP analysis of the 16S–23S rRNA gene internal transcribed spacer (ITS) region with 11 Bradyrhizobium USDA strains as references. We used data from the present study and previous studies to carry out mathematical ecological analyses, multidimensional scaling analysis with the Bray-Curtis index, polar ordination analysis, and multiple regression analyses to characterize the relationship between soybean-nodulating bradyrhizobial community structures and their geographical distribution. The mathematical ecological approaches used in this study demonstrated the presence of ecological niches and suggested the geographical distribution of soybean-nodulating bradyrhizobia to be a function of latitude and the related climate, with clusters in the order Bj123, Bj110, Bj6, and Be76 from north to south in Japan. PMID:24240318
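
As an illustration of the ordination step, multidimensional scaling on Bray-Curtis dissimilarities takes only a few lines; the community frequencies here are made up:

```python
# MDS ordination of sites from Bray-Curtis dissimilarities (toy data).
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
freq = rng.dirichlet(np.ones(4), size=6)         # 6 sites x 4 ITS clusters
D = squareform(pdist(freq, metric="braycurtis"))
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)    # 2-D ordination of the sites
print(coords)
```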

  19. Combining Different Tools for EEG Analysis to Study the Distributed Character of Language Processing.

    PubMed

    Rocha, Armando Freitas da; Foz, Flávia Benevides; Pereira, Alfredo

    2015-01-01

    Recent studies on language processing indicate that language cognition is better understood if assumed to be supported by a distributed intelligent processing system enrolling neurons located all over the cortex, in contrast to reductionism that proposes to localize cognitive functions to specific cortical structures. Here, brain activity was recorded using an electroencephalogram while volunteers were listening to or reading small texts and had to select pictures that translate the meaning of these texts. Several techniques for EEG analysis were used to show this distributed character of neuronal enrollment associated with the comprehension of oral and written descriptive texts. Low Resolution Tomography identified the many different sets (s_i) of neurons activated in several distinct cortical areas by text understanding. Linear correlation was used to calculate the information H(e_i) provided by each electrode of the 10/20 system about the identified s_i. Principal Component Analysis (PCA) of H(e_i) was used to study the temporal and spatial activation of these sources s_i. This analysis evidenced 4 different patterns of H(e_i) covariation that are generated by neurons located at different cortical locations. These results show that the distributed character of language processing is clearly evidenced by combining available EEG technologies. PMID:26713089

  20. A CLASS OF DISTRIBUTION-FREE MODELS FOR LONGITUDINAL MEDIATION ANALYSIS

    PubMed Central

    Gunzler, D.; Tang, W.; Lu, N.; Wu, P.; Tu, X.M.

    2016-01-01

    Mediation analysis constitutes an important part of a treatment study, identifying the mechanisms by which an intervention achieves its effect. The structural equation model (SEM) is a popular framework for modeling such causal relationships. However, current methods impose various restrictions on the study designs and data distributions, limiting the utility of the information they provide in real study applications. In particular, in longitudinal studies missing data are commonly addressed under the assumption of missing at random (MAR), where current methods are unable to handle such missing data if parametric assumptions are violated. In this paper, we propose a new, robust approach to address the limitations of current SEM within the context of longitudinal mediation analysis by utilizing a class of functional response models (FRM). Being distribution-free, the FRM-based approach does not impose any parametric assumption on data distributions. In addition, by extending the inverse probability weighted (IPW) estimates to the current context, the FRM-based SEM provides valid inference for longitudinal mediation analysis under the two most popular missing data mechanisms: missing completely at random (MCAR) and missing at random (MAR). We illustrate the approach with both real and simulated data. PMID:24271505

  1. Combining Different Tools for EEG Analysis to Study the Distributed Character of Language Processing

    PubMed Central

    da Rocha, Armando Freitas; Foz, Flávia Benevides; Pereira, Alfredo

    2015-01-01

    Recent studies on language processing indicate that language cognition is better understood if assumed to be supported by a distributed intelligent processing system enrolling neurons located all over the cortex, in contrast to reductionism that proposes to localize cognitive functions to specific cortical structures. Here, brain activity was recorded using an electroencephalogram while volunteers were listening to or reading small texts and had to select pictures that translate the meaning of these texts. Several techniques for EEG analysis were used to show this distributed character of neuronal enrollment associated with the comprehension of oral and written descriptive texts. Low Resolution Tomography identified the many different sets (si) of neurons activated in several distinct cortical areas by text understanding. Linear correlation was used to calculate the information H(ei) provided by each electrode of the 10/20 system about the identified si. Principal Component Analysis (PCA) of H(ei) was used to study the temporal and spatial activation of these sources si. This analysis evidenced 4 different patterns of H(ei) covariation that are generated by neurons located at different cortical locations. These results show that the distributed character of language processing is clearly evidenced by combining available EEG technologies. PMID:26713089

  2. Space station electrical power distribution analysis using a load flow approach

    NASA Technical Reports Server (NTRS)

    Emanuel, Ervin M.

    1987-01-01

    The space station's electrical power system will evolve and grow in a manner much like present terrestrial electrical power utility systems. The initial baseline reference configuration will contain more than 50 nodes or busses, inverters, transformers, overcurrent protection devices, distribution lines, solar arrays, and/or solar dynamic power generating sources. The system is designed to manage and distribute 75 kW of power, single phase or three phase, at 20 kHz, and grow to a level of 300 kW steady state, and must be capable of operating at a peak of 450 kW for 5 to 10 min. In order to plan far into the future and keep pace with load growth, a load flow power system analysis approach must be developed and utilized. This method is a well known energy assessment and management tool that is widely used throughout the electrical power utility industry. The results of a comprehensive evaluation and assessment of an Electrical Distribution System Analysis Program (EDSA) are discussed. Its potential use as an analysis and design tool for the 20 kHz space station electrical power system is addressed.

  3. Oxygen distribution in tumors: A qualitative analysis and modeling study providing a novel Monte Carlo approach

    SciTech Connect

    Lagerlöf, Jakob H.; Kindblom, Jon; Bernhardt, Peter

    2014-09-15

    Purpose: To construct a Monte Carlo (MC)-based simulation model for analyzing the dependence of tumor oxygen distribution on different variables related to tumor vasculature [blood velocity, vessel-to-vessel proximity (vessel proximity), and inflowing oxygen partial pressure (pO2)]. Methods: A voxel-based tissue model containing parallel capillaries with square cross-sections (sides of 10 μm) was constructed. Green's function was used for diffusion calculations and Michaelis-Menten kinetics to manage oxygen consumption. The model was tuned to approximately reproduce the oxygenation status of a renal carcinoma; the depth oxygenation curves (DOC) were fitted with an analytical expression to facilitate rapid MC simulations of tumor oxygen distribution. DOCs were simulated with three variables at three settings each (blood velocity, vessel proximity, and inflowing pO2), which resulted in 27 combinations of conditions. To create a model that simulated variable oxygen distributions, the oxygen tension at a specific point was randomly sampled with trilinear interpolation in the dataset from the first simulation. Six correlations between blood velocity, vessel proximity, and inflowing pO2 were hypothesized. Variable models with correlated parameters were compared to each other and to a nonvariable, DOC-based model to evaluate the differences in simulated oxygen distributions and tumor radiosensitivities for different tumor sizes. Results: For tumors with radii ranging from 5 to 30 mm, the nonvariable DOC model tended to generate normal or log-normal oxygen distributions, with a cut-off at zero. The pO2 distributions simulated with the six variable DOC models were quite different from the distributions generated with the nonvariable DOC model; in the former case the variable models simulated oxygen distributions that were more similar to in vivo results found in the literature. For larger tumors, the oxygen distributions became truncated in the lower end, due to anoxia, but smaller tumors showed undisturbed oxygen distributions. The six different models with correlated parameters generated three classes of oxygen distributions. The first was a hypothetical, negative covariance between vessel proximity and pO2 (VPO-C scenario); the second was a hypothetical positive covariance between vessel proximity and pO2 (VPO+C scenario); and the third was the hypothesis of no correlation between vessel proximity and pO2 (UP scenario). The VPO-C scenario produced a distinctly different oxygen distribution than the two other scenarios. The shape of the VPO-C scenario was similar to that of the nonvariable DOC model, and the larger the tumor, the greater the similarity between the two models. For all simulations, the mean oxygen tension decreased and the hypoxic fraction increased with tumor size. The absorbed dose required for definitive tumor control was highest for the VPO+C scenario, followed by the UP and VPO-C scenarios. Conclusions: A novel MC algorithm was presented which simulated oxygen distributions and radiation response for various biological parameter values. The analysis showed that the VPO-C scenario generated a clearly different oxygen distribution from the VPO+C scenario; the former exhibited a lower hypoxic fraction and higher radiosensitivity. In future studies, this modeling approach might be valuable for qualitative analyses of factors that affect oxygen distribution as well as analyses of specific experimental and clinical situations.

  4. A quantitative quantum-chemical analysis tool for the distribution of mechanical force in molecules

    NASA Astrophysics Data System (ADS)

    Stauch, Tim; Dreuw, Andreas

    2014-04-01

    The promising field of mechanochemistry suffers from a general lack of understanding of the distribution and propagation of force in a stretched molecule, which limits its applicability up to the present day. In this article, we introduce the JEDI (Judgement of Energy DIstribution) analysis, which is the first quantum chemical method that provides a quantitative understanding of the distribution of mechanical stress energy among all degrees of freedom in a molecule. The method is carried out on the basis of static or dynamic calculations under the influence of an external force and makes use of a Hessian matrix in redundant internal coordinates (bond lengths, bond angles, and dihedral angles), so that all relevant degrees of freedom of a molecule are included and mechanochemical processes can be interpreted in a chemically intuitive way. The JEDI method is characterized by its modest computational effort, with the calculation of the Hessian being the rate-determining step, and delivers, except for the harmonic approximation, exact ab initio results. We apply the JEDI analysis to several example molecules in both static quantum chemical calculations and Born-Oppenheimer Molecular Dynamics simulations in which molecules are subject to an external force, thus studying not only the distribution and the propagation of strain in mechanically deformed systems, but also gaining valuable insights into the mechanochemically induced isomerization of trans-3,4-dimethylcyclobutene to trans,trans-2,4-hexadiene. The JEDI analysis can potentially be used in the discussion of sonochemical reactions, molecular motors, mechanophores, and photoswitches as well as in the development of molecular force probes.
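
The harmonic energy partition at the heart of such an analysis can be illustrated in miniature: with a Hessian H in internal coordinates and displacements Δq from the relaxed geometry, the strain energy ½ΔqᵀHΔq can be split symmetrically over coordinates. The numbers below are made up; real use requires an ab initio Hessian in redundant internal coordinates:

```python
# Toy harmonic partition of strain energy over internal coordinates.
import numpy as np

H = np.array([[0.80, 0.10, 0.00],      # Hessian in internal coords (a.u.)
              [0.10, 0.50, 0.05],
              [0.00, 0.05, 0.30]])
dq = np.array([0.10, -0.02, 0.04])     # displacements from relaxed geometry

E_total = 0.5 * dq @ H @ dq
E_per_coord = 0.5 * dq * (H @ dq)      # symmetric split; sums to E_total
share_pct = 100 * E_per_coord / E_total
print(E_total, share_pct)
```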

  5. A quantitative quantum-chemical analysis tool for the distribution of mechanical force in molecules

    SciTech Connect

    Stauch, Tim; Dreuw, Andreas

    2014-04-07

    The promising field of mechanochemistry suffers from a general lack of understanding of the distribution and propagation of force in a stretched molecule, which limits its applicability up to the present day. In this article, we introduce the JEDI (Judgement of Energy DIstribution) analysis, which is the first quantum chemical method that provides a quantitative understanding of the distribution of mechanical stress energy among all degrees of freedom in a molecule. The method is carried out on the basis of static or dynamic calculations under the influence of an external force and makes use of a Hessian matrix in redundant internal coordinates (bond lengths, bond angles, and dihedral angles), so that all relevant degrees of freedom of a molecule are included and mechanochemical processes can be interpreted in a chemically intuitive way. The JEDI method is characterized by its modest computational effort, with the calculation of the Hessian being the rate-determining step, and delivers, except for the harmonic approximation, exact ab initio results. We apply the JEDI analysis to several example molecules in both static quantum chemical calculations and Born-Oppenheimer Molecular Dynamics simulations in which molecules are subject to an external force, thus studying not only the distribution and the propagation of strain in mechanically deformed systems, but also gaining valuable insights into the mechanochemically induced isomerization of trans-3,4-dimethylcyclobutene to trans,trans-2,4-hexadiene. The JEDI analysis can potentially be used in the discussion of sonochemical reactions, molecular motors, mechanophores, and photoswitches as well as in the development of molecular force probes.

  6. First experience and adaptation of existing tools to ATLAS distributed analysis

    NASA Astrophysics Data System (ADS)

    de La Hoz, S. G.; Ruiz, L. M.; Liko, D.

    2008-02-01

    The ATLAS production system has been successfully used to run production of simulation data at an unprecedented scale in ATLAS. Up to 10000 jobs were processed on about 100 sites in one day. The experience obtained operating the system on several grid flavours was essential for performing user analysis with grid resources. First tests of the distributed analysis system were then performed. In the preparation phase, data were registered in the LHC file catalog (LFC) and replicated to external sites. For the main test, only a few resources were used. All these tests are only a first step towards the validation of the computing model. The ATLAS computing management board decided to integrate the collaboration's distributed analysis efforts into a single project, GANGA. The goal is to test the reconstruction and analysis software in a large-scale data production using grid flavours at several sites. GANGA allows trivial switching between running test jobs on a local batch system and running large-scale analyses on the grid; it provides job splitting and merging, and includes automated job monitoring and output retrieval.

  7. Nanomaterial size distribution analysis via liquid nebulization coupled with ion mobility spectrometry (LN-IMS).

    PubMed

    Jeon, Seongho; Oberreit, Derek R; Van Schooneveld, Gary; Hogan, Christopher J

    2016-02-01

    We apply liquid nebulization (LN) in series with ion mobility spectrometry (IMS, using a differential mobility analyzer coupled to a condensation particle counter) to measure the size distribution functions (the number concentration per unit log diameter) of gold nanospheres in the 5-30 nm range, 70 nm × 11.7 nm gold nanorods, and albumin proteins originally in aqueous suspensions. In prior studies, IMS measurements have only been carried out for colloidal nanoparticles in this size range using electrosprays for aerosolization, as traditional nebulizers produce supermicrometer droplets which leave residue particles from non-volatile species. Residue particles mask the size distribution of the particles of interest. Uniquely, the LN employed in this study uses both online dilution (with dilution factors of up to 10^4) with ultra-high purity water and a ball-impactor to remove droplets larger than 500 nm in diameter. This combination enables hydrosol-to-aerosol conversion preserving the size and morphology of particles, and also enables higher non-volatile residue tolerance than electrospray based aerosolization. Through LN-IMS measurements we show that the size distribution functions of narrowly distributed but similarly sized particles can be distinguished from one another, which is not possible with Nanoparticle Tracking Analysis in the sub-30 nm size range. Through comparison to electron microscopy measurements, we find that the size distribution functions inferred via LN-IMS measurements correspond to the particle sizes coated by surfactants, i.e. as they persist in colloidal suspensions. Finally, we show that the gas phase particle concentrations inferred from IMS size distribution functions are functions only of the liquid phase particle concentration, and are independent of particle size, shape, and chemical composition. Therefore, LN-IMS enables characterization of the size, yield, and polydispersity of sub-30 nm particles. PMID:26750519

  8. Laws prohibiting peer distribution of injecting equipment in Australia: A critical analysis of their effects.

    PubMed

    Lancaster, Kari; Seear, Kate; Treloar, Carla

    2015-12-01

    The law is a key site for the production of meanings around the 'problem' of drugs in public discourse. In this article, we critically consider the material-discursive 'effects' of laws prohibiting peer distribution of needles and syringes in Australia. Taking the laws and regulations governing possession and distribution of injecting equipment in one jurisdiction (New South Wales, Australia) as a case study, we use Carol Bacchi's poststructuralist approach to policy analysis to critically consider the assumptions and presuppositions underpinning this legislative and regulatory framework, with a particular focus on examining the discursive, subjectification and lived effects of these laws. We argue that legislative prohibitions on the distribution of injecting equipment except by 'authorised persons' within 'approved programs' constitute people who inject drugs as irresponsible, irrational, and untrustworthy and re-inscribe a familiar stereotype of the drug 'addict'. These constructions of people who inject drugs fundamentally constrain how the provision of injecting equipment may be thought about in policy and practice. We suggest that prohibitions on the distribution of injecting equipment among peers may also have other, material, effects and may be counterproductive to various public health aims and objectives. However, the actions undertaken by some people who inject drugs to distribute equipment to their peers may disrupt and challenge these constructions, through a counter-discourse in which people who inject drugs are constituted as active agents with a vital role to play in blood-borne virus prevention in the community. Such activity continues to bring with it the risk of criminal prosecution, and so it remains a vexed issue. These insights have implications of relevance beyond Australia, particularly for other countries around the world that prohibit peer distribution, but also for other legislative practices with material-discursive effects in association with injecting drug use. PMID:26118796

  9. Simulation of flight maneuver-load distributions by utilizing stationary, non-Gaussian random load histories

    NASA Technical Reports Server (NTRS)

    Leybold, H. A.

    1971-01-01

    Random numbers were generated with the aid of a digital computer and transformed such that the probability density function of a discrete random load history composed of these random numbers had one of the following non-Gaussian distributions: Poisson, binomial, log-normal, Weibull, and exponential. The resulting random load histories were analyzed to determine their peak statistics and were compared with cumulative peak maneuver-load distributions for fighter and transport aircraft in flight.
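
The transform in question is the standard inverse-CDF method. A sketch for two of the target marginals, followed by crude peak counting on the resulting discrete load history (scale and shape values are made up):

```python
# Inverse-CDF generation of non-Gaussian random load histories, plus
# a rough peak count. Distribution parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(7)
u = rng.random(100_000)                                 # uniform(0,1) deviates

weibull_loads = 10.0 * (-np.log(1 - u)) ** (1 / 1.5)    # Weibull: scale 10, shape 1.5
exponential_loads = 10.0 * -np.log(1 - u)               # exponential: mean 10

# Peak statistics: local maxima of the discrete history, and exceedances
# of a given load level.
x = weibull_loads
is_peak = (x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])
peaks = x[1:-1][is_peak]
print(len(peaks), np.sum(peaks > 30.0))
```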

  10. Breast Cancer Survival Analysis: Applying the Generalized Gamma Distribution under Different Conditions of the Proportional Hazards and Accelerated Failure Time Assumptions

    PubMed Central

    Abadi, Alireza; Amanpour, Farzaneh; Bajdik, Chris; Yavari, Parvin

    2012-01-01

    Background: The goal of this study is to extend the applications of parametric survival models so that they include cases in which the accelerated failure time (AFT) assumption is not satisfied, and examine parametric and semiparametric models under different proportional hazards (PH) and AFT assumptions. Methods: The data for 12,531 women diagnosed with breast cancer in British Columbia, Canada, during 1990-1999 were divided into eight groups according to patients' ages and stage of disease, and each group was assumed to have different AFT and PH assumptions. For parametric models, we fitted the saturated generalized gamma (GG) distribution, and compared this with the conventional AFT model. Using a likelihood ratio statistic, both models were compared to the simpler forms including the Weibull and lognormal. For semiparametric models, either Cox's PH model or a stratified Cox model was fitted according to the PH assumption and tested using Schoenfeld residuals. The GG family was compared to the log-logistic model using the Akaike information criterion (AIC) and Bayesian information criterion (BIC). Results: When PH and AFT assumptions were satisfied, semiparametric and parametric models both provided valid descriptions of breast cancer patient survival. When the PH assumption was not satisfied but the AFT condition held, the parametric models performed better than the stratified Cox model. When neither the PH nor the AFT assumptions were met, the lognormal distribution provided a reasonable fit. Conclusions: When both the PH and AFT assumptions are satisfied, the parametric and semiparametric models provide complementary information. When the PH assumption is not satisfied, the parametric models should be considered, whether the AFT assumption is met or not. PMID:23024854
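
Because the GG distribution nests the Weibull (and the lognormal as a limit), the likelihood-ratio comparison can be sketched with standard tools on synthetic uncensored times; note that the study handles censored survival data, which the generic SciPy fit below does not:

```python
# Likelihood-ratio test of generalized gamma vs its nested Weibull case.
import numpy as np
from scipy import stats

t = stats.weibull_min.rvs(1.3, scale=60, size=800, random_state=3)  # toy times

gg = stats.gengamma.fit(t, floc=0)          # GG has two shape parameters
wb = stats.weibull_min.fit(t, floc=0)       # Weibull = GG with a = 1
ll_gg = stats.gengamma.logpdf(t, *gg).sum()
ll_wb = stats.weibull_min.logpdf(t, *wb).sum()

lr = 2 * (ll_gg - ll_wb)
p = stats.chi2.sf(lr, df=1)                 # one extra shape parameter in GG
print(f"LR = {lr:.2f}, p = {p:.3f}")
```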

  11. Long-term mechanical life testing of polymeric post insulators for distribution and a comparison to porcelain

    SciTech Connect

    Cherney, E.A.

    1988-07-01

    The paper presents the results and analyses of long-term cantilever strength tests on polymeric line post insulators. The time-to-failure data for static cantilever loads are represented by the Weibull distribution. The life distribution, obtained from the maximum likelihood estimates of the accelerated failure times, fits an exponential model. An extrapolation of the life distribution to normal loads provides an estimate of the strength rating and mechanical equivalence to porcelain line post insulators.
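
A minimal sketch of the Weibull step with hypothetical, uncensored failure times follows; the study additionally models how failure times accelerate with load, which is not shown here:

```python
# Weibull MLE on times-to-failure at one static cantilever load (toy data).
import numpy as np
from scipy import stats

ttf_hours = np.array([312., 488., 560., 703., 811., 945., 1210., 1500.])
shape, loc, scale = stats.weibull_min.fit(ttf_hours, floc=0)
# A low percentile of the fitted life distribution is the kind of quantity
# from which a strength rating can be extrapolated.
b10 = stats.weibull_min.ppf(0.10, shape, loc, scale)
print(f"shape={shape:.2f}, scale={scale:.0f} h, B10={b10:.0f} h")
```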

  12. Iterative Monte Carlo analysis of spin-dependent parton distributions

    DOE PAGES Beta

    Sato, Nobuo; Melnitchouk, Wally; Kuhn, Sebastian E.; Ethier, Jacob J.; Accardi, Alberto

    2016-04-05

    We present a comprehensive new global QCD analysis of polarized inclusive deep-inelastic scattering, including the latest high-precision data on longitudinal and transverse polarization asymmetries from Jefferson Lab and elsewhere. The analysis is performed using a new iterative Monte Carlo fitting technique which generates stable fits to polarized parton distribution functions (PDFs) with statistically rigorous uncertainties. Inclusion of the Jefferson Lab data leads to a reduction in the PDF errors for the valence and sea quarks, as well as in the gluon polarization uncertainty at x ≳ 0.1. Furthermore, the study provides the first determination of the flavor-separated twist-3 PDFs and the d2 moment of the nucleon within a global PDF analysis.

  13. Spatial Intensity Distribution Analysis Reveals Abnormal Oligomerization of Proteins in Single Cells.

    PubMed

    Godin, Antoine G; Rappaz, Benjamin; Potvin-Trottier, Laurent; Kennedy, Timothy E; De Koninck, Yves; Wiseman, Paul W

    2015-08-18

    Knowledge of membrane receptor organization is essential for understanding the initial steps in cell signaling and trafficking mechanisms, but quantitative analysis of receptor interactions at the single-cell level and in different cellular compartments has remained highly challenging. To achieve this, we apply a quantitative image analysis technique, spatial intensity distribution analysis (SpIDA), that can measure fluorescent particle concentrations and oligomerization states within different subcellular compartments in live cells. An important technical challenge faced by fluorescence microscopy-based measurement of oligomerization is the fidelity of receptor labeling. In practice, imperfect labeling biases the distribution of oligomeric states measured within an aggregated system. We extend SpIDA to enable analysis of high-order oligomers from fluorescence microscopy images, by including a probability weighted correction algorithm for nonemitting labels. We demonstrated that this fraction of nonemitting probes could be estimated in single cells using SpIDA measurements on model systems with known oligomerization state. Previously, this artifact was measured using single-step photobleaching. This approach was validated using computer-simulated data, and the imperfect labeling was quantified in cells with ion channels of known oligomer subunit count. It was then applied to quantify the oligomerization states in different cell compartments of the proteolipid protein (PLP) expressed in COS-7 cells. Expression of a mutant PLP linked to impaired trafficking resulted in the detection of PLP tetramers that persist in the endoplasmic reticulum, while no difference was measured at the membrane between the distributions of wild-type and mutated PLPs. Our results demonstrate that SpIDA allows measurement of protein oligomerization in different compartments of intact cells, even when fractional mislabeling and photobleaching occur during the imaging process, and reveals insights into the mechanism underlying impaired trafficking of PLP. PMID:26287623

  14. Independent Orbiter Assessment (IOA): Analysis of the electrical power distribution and control subsystem, volume 1

    NASA Technical Reports Server (NTRS)

    Schmeckpeper, K. R.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Electrical Power Distribution and Control (EPD and C) hardware. The EPD and C hardware performs the functions of distributing, sensing, and controlling 28 volt DC power and of inverting, distributing, sensing, and controlling 117 volt 400 Hz AC power to all Orbiter subsystems from the three fuel cells in the Electrical Power Generation (EPG) subsystem. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Of the 1671 failure modes analyzed, 9 single failures were determined to result in loss of crew or vehicle. Three single failures unique to intact abort were determined to result in possible loss of the crew or vehicle. A possible loss of mission could result if any of 136 single failures occurred. Six of the criticality 1/1 failures are in two rotary and two pushbutton switches that control External Tank and Solid Rocket Booster separation. The other 6 criticality 1/1 failures are fuses, one each per Aft Power Control Assembly (APCA) 4, 5, and 6 and one each per Forward Power Control Assembly (FPCA) 1, 2, and 3, that supply power to certain Main Propulsion System (MPS) valves and Forward Reaction Control System (RCS) circuits.

  15. Global sensitivity analysis using a new approach based on cumulative distribution functions

    NASA Astrophysics Data System (ADS)

    Wagener, T.; Pianosi, F.; Sarrazin, F.

    2014-12-01

    Global Sensitivity Analysis (GSA) has become a key tool for the analysis of environmental models. Objectives for GSA include model simplification to support calibration, diagnostic analysis of model controls and subsequent comparison with underlying perceptual models, or decision-making analysis to understand over what range of uncertainty a specific action is robust. Variance-based approaches are most widely used for GSA of environmental models. However, methods that consider the entire Probability Density Function (PDF) of the model output, rather than its variance only, are preferable in cases where variance is not an adequate proxy of uncertainty, e.g. when the output distribution is highly skewed or multi-modal. Additionally, in contrast to variance-based strategies, they might allow for the mapping of the output on the input space, a prerequisite for the use of GSA in robust decision-making under uncertainty. Still, the adoption of density-based methods has been limited so far, possibly because they are relatively more difficult to implement. Here we present a novel GSA method, called PAWN, to efficiently compute density-based sensitivity indices, while also enabling the necessary input-output space mapping. The key idea is to characterize output distributions by their Cumulative Distribution Functions, which are easier to derive than PDFs. We discuss and demonstrate the advantages of PAWN by application to numerical and environmental examples. We expect PAWN to increase the application of density-based approaches and to be a necessary complementary approach to variance-based GSA.
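
The key idea reduces to comparing CDFs. A toy version of a PAWN-style index for one input, using the Kolmogorov-Smirnov distance between unconditional and conditional output samples, is sketched below (an illustration of the idea, not the authors' full implementation):

```python
# PAWN-style density-based sensitivity index for one input of a toy model.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

def model(x1, x2):
    return np.sin(x1) + 0.1 * x2

x1, x2 = rng.uniform(-np.pi, np.pi, (2, 5000))
y_uncond = model(x1, x2)                        # unconditional output sample

ks_max = 0.0
for x1_fixed in np.linspace(-np.pi, np.pi, 10): # conditioning values for x1
    y_cond = model(x1_fixed, rng.uniform(-np.pi, np.pi, 500))
    ks_max = max(ks_max, ks_2samp(y_uncond, y_cond).statistic)
print(f"PAWN-style index for x1 ~ {ks_max:.2f}")
```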

  16. Spatial sensitivity analysis of remote sensing snow cover fraction data in a distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Berezowski, Tomasz; Chormański, Jarosław; Nossent, Jiri; Batelaan, Okke

    2014-05-01

    Distributed hydrological models enhance the analysis and explanation of environmental processes. As more spatial input data and time series become available, more analysis is required of the sensitivity of the simulations to these data. Most research so far has focused on the sensitivity of precipitation data in distributed hydrological models. However, such results cannot be compared until a universal approach to quantifying the sensitivity of a model to spatial data is available. Snow cover is among the most frequently tested and used remote sensing data for distributed models. Snow cover fraction (SCF) remote sensing products are easily available from the internet, e.g. the MODIS snow cover product MOD10A1 (daily snow cover fraction at 500 m spatial resolution). In this work a spatial sensitivity analysis (SA) of remotely sensed SCF from MOD10A1 was conducted with the distributed WetSpa model. The aim is to investigate whether the WetSpa model is subject to SCF uncertainty differently in different areas of the model domain. The analysis was extended to look not only at SA quantities but also to relate them to the physical parameters and processes in the study area. The study area is the Biebrza River catchment, Poland, which is considered a semi-natural catchment subject to a spring snowmelt regime. Hydrological simulations are performed with the distributed WetSpa model, with a simulation period of 2 hydrological years. For the SA the Latin-Hypercube One-factor-At-a-Time (LH-OAT) algorithm is used, with a set of different response functions on a regular 4 x 4 km grid. The results show that the spatial patterns of sensitivity can be easily interpreted by the co-occurrence of different landscape features. Moreover, the spatial patterns of the SA results are related to the WetSpa spatial parameters and to different physical processes. Based on the study results, it is clear that a spatial approach to SA can be performed with the proposed algorithm and that the MOD10A1 SCF is spatially sensitive in the WetSpa model.

  17. Measuring arbitrary diffusion coefficient distributions of nano-objects by Taylor dispersion analysis.

    PubMed

    Cipelletti, Luca; Biron, Jean-Philippe; Martin, Michel; Cottet, Hervé

    2015-08-18

    Taylor dispersion analysis is an absolute and straightforward characterization method that allows determining the diffusion coefficient, or equivalently the hydrodynamic radius, over the angstrom to submicron size range. In this work, we investigated the use of the Constrained Regularized Linear Inversion approach as a new data processing method to extract the probability density functions of the diffusion coefficient (or hydrodynamic radius) from experimental taylorgrams. This new approach can be applied to arbitrarily polydisperse samples and gives access to the whole diffusion coefficient distribution, thereby significantly enhancing the potential of Taylor dispersion analysis. The method was successfully applied to both simulated and real experimental data for solutions of moderately polydisperse polymers and their binary and ternary mixtures. Distributions of diffusion coefficients obtained by this method were favorably compared with those derived from size exclusion chromatography. The influence of the noise of the simulated taylorgrams on the data processing is discussed. Finally, we discuss the ability of the method to correctly resolve bimodal distributions as a function of the relative separation between the two constituent species. PMID:26243023
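
A constrained regularized inversion of this kind can be sketched as nonnegative Tikhonov least squares over a basis of taylorgram responses, one per candidate diffusion coefficient, with temporal variance sigma^2 = R^2 t0 / (24 D); the capillary geometry, grid, and regularization weight below are made-up assumptions:

```python
# Recover a diffusion-coefficient distribution from a synthetic taylorgram
# by nonnegative Tikhonov least squares (NNLS on an augmented system).
import numpy as np
from scipy.optimize import nnls

R, t0 = 25e-6, 120.0                       # capillary radius (m), mean time (s)
t = np.linspace(60, 180, 400)              # detection times (s)
D_grid = np.logspace(-11, -9, 60)          # candidate D values (m^2/s)

sigma = np.sqrt(R**2 * t0 / (24 * D_grid))            # one peak width per D
G = np.exp(-(t[:, None] - t0)**2 / (2 * sigma[None]**2)) / sigma[None]

true = np.exp(-(np.log10(D_grid) + 10.2)**2 / 0.02)   # synthetic sample
signal = G @ true + np.random.default_rng(0).normal(0, 1e-3, t.size)

lam = 1e-2                                            # regularization weight
A = np.vstack([G, np.sqrt(lam) * np.eye(D_grid.size)])
b = np.concatenate([signal, np.zeros(D_grid.size)])
w, _ = nnls(A, b)                                     # recovered distribution
```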

  18. Multimedia content analysis and indexing: evaluation of a distributed and scalable architecture

    NASA Astrophysics Data System (ADS)

    Mandviwala, Hasnain; Blackwell, Scott; Weikart, Chris; Van Thong, Jean-Manuel

    2003-11-01

    Multimedia search engines facilitate the retrieval of documents from large media content archives now available via intranets and the Internet. Over the past several years, many research projects have focused on algorithms for analyzing and indexing media content efficiently. However, special system architectures are required to process large amounts of content from real-time feeds or existing archives. Possible solutions include dedicated distributed architectures for analyzing content rapidly and for making it searchable. The system architecture we propose implements such an approach: a highly distributed and reconfigurable batch media content analyzer that can process media streams and static media repositories. Our distributed media analysis application handles media acquisition, content processing, and document indexing. This collection of modules is orchestrated by a task flow management component, exploiting data and pipeline parallelism in the application. A scheduler manages load balancing and prioritizes the different tasks. Workers implement application-specific modules that can be deployed on an arbitrary number of nodes running different operating systems. Each application module is exposed as a web service, implemented with industry-standard interoperable middleware components such as Microsoft ASP.NET and Sun J2EE. Our system architecture is the next generation system for the multimedia indexing application demonstrated by www.speechbot.com. It can process large volumes of audio recordings with minimal support and maintenance, while running on low-cost commodity hardware. The system has been evaluated on a server farm running concurrent content analysis processes.

  19. Particle size distribution of brown and white rice during gastric digestion measured by image analysis.

    PubMed

    Bornhorst, Gail M; Kostlan, Kevin; Singh, R Paul

    2013-09-01

    The particle size distribution of foods during gastric digestion indicates the amount of physical breakdown that occurred due to the peristaltic movement of the stomach walls in addition to the breakdown that initially occurred during oral processing. The objective of this study was to present an image analysis technique that was rapid, simple, and could distinguish between food components (that is, rice kernel and bran layer in brown rice). The technique was used to quantify particle breakdown of brown and white rice during gastric digestion in growing pigs (used as a model for an adult human) over 480 min of digestion. The particle area distributions were fit to a Rosin-Rammler distribution function. Brown and white rice exhibited considerable breakdown as the number of particles per image decreased over time. The median particle area (x(50)) increased during digestion, suggesting a gastric sieving phenomenon, where small particles were emptied and larger particles were retained for additional breakdown. Brown rice breakdown was further quantified by an examination of the bran layer fragments and rice grain pieces. The percentage of total particle area composed of bran layer fragments was greater in the distal stomach than the proximal stomach in the first 120 min of digestion. The results of this study showed that image analysis may be used to quantify particle breakdown of a soft food product during gastric digestion, discriminate between different food components, and help to clarify the role of food structure and processing in food breakdown during gastric digestion. PMID:23923993
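
    The Rosin-Rammler function used for the particle area distributions is mathematically a Weibull cumulative distribution. A small sketch of fitting it to an empirical cumulative undersize curve, using synthetic particle areas in place of the measured image data:

```python
import numpy as np
from scipy.optimize import curve_fit

def rosin_rammler(x, x50, b):
    """Cumulative undersize fraction; x50 is the median size, b the spread."""
    return 1.0 - np.exp(-np.log(2.0) * (x / x50) ** b)

# Hypothetical particle areas (mm^2) standing in for one digesta sample
rng = np.random.default_rng(0)
areas = rng.weibull(1.8, 500) * 4.0
x = np.sort(areas)
F = np.arange(1, x.size + 1) / x.size       # empirical cumulative distribution

(x50, b), _ = curve_fit(rosin_rammler, x, F, p0=[np.median(x), 1.0])
print(f"median particle area x50 = {x50:.2f} mm^2, spread b = {b:.2f}")
```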

  20. Spatial distribution and cluster analysis of sexual risk behaviors reported by young men in Kisumu, Kenya

    PubMed Central

    2010-01-01

    Background The well-established connection between HIV risk behavior and place of residence points to the importance of geographic clustering in the potential transmission of HIV and other sexually transmitted infections (STIs). Methods To investigate the geospatial distribution of prevalent sexually transmitted infections and sexual behaviors in a sample of 18- to 24-year-old sexually active men in urban and rural areas of Kisumu, Kenya, we mapped the residences of 649 men and conducted spatial cluster analysis. The spatial distribution of the study participants was assessed in terms of demographic, behavioral, and sexual dysfunction variables, as well as laboratory-diagnosed STIs. To test for the presence and location of clusters we used Kulldorff's spatial scan statistic as implemented in the SaTScan program. Results The results of this study suggest that sexual risk behaviors and STIs are evenly distributed in our sample throughout the Kisumu district. No behavioral or STI clusters were detected, except for condom use. Neither urban nor rural residence significantly impacted risk behavior or STI prevalence. Conclusion We found no association between place of residence and sexual risk behaviors in our sample. While our results cannot be generalized to other populations, the study shows that geospatial analysis can be an important tool for investigating study sample characteristics; for evaluating HIV/STI risk factors; and for developing and implementing targeted HIV and STI control programs in specifically defined populations and in areas where the underlying population dynamics are poorly understood. PMID:20492703

  1. Further Progress Applying the Generalized Wigner Distribution to Analysis of Vicinal Surfaces

    NASA Astrophysics Data System (ADS)

    Einstein, T. L.; Richards, Howard L.; Cohen, S. D.

    2001-03-01

    Terrace width distributions (TWDs) can be well fit by the generalized Wigner distribution (GWD), generally better than by conventional Gaussians; the GWD thus offers a convenient way to estimate the dimensionless elastic repulsion strength Ã from σ^2, the TWD variance (T.L. Einstein and O. Pierre-Louis, Surface Sci. 424, L299 (1999)). The GWD σ^2 accurately reproduces values for the two exactly soluble cases at small Ã and in the asymptotic limit. Taxing numerical simulations show that the GWD σ^2 interpolates well between these limits. Extensive applications have been made to experimental data, esp. on Cu (M. Giesen and T.L. Einstein, Surface Sci. 449, 191 (2000)). Recommended analysis procedures are catalogued (H.L. Richards, S.D. Cohen, T.L. Einstein, and M. Giesen, Surf. Sci. 453, 59 (2000)). Extensions of the GWD for multistep distributions are tested, with good agreement for second-neighbor distributions, less good for third (T.L. Einstein, H.L. Richards, S.D. Cohen, and O. Pierre-Louis, Proc. ISSI-PDSC2000, cond-mat/0012xxxxx). Alternatively, step-step correlation functions, about which there is more theoretical information, should be measured.
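
    For reference, the unit-mean GWD has the form P(s) = a s^ϱ exp(-b s^2), with a and b fixed by normalization and the unit-mean condition, and the exponent is related to the repulsion strength via the identification Ã = ϱ(ϱ - 2)/4 commonly used in this literature. A sketch of a maximum-likelihood fit; the sample below is a synthetic stand-in, not real terrace-width data:

```python
import numpy as np
from scipy.special import gamma as G
from scipy.optimize import minimize_scalar

def gwd(s, rho):
    """Generalized Wigner distribution with unit mean: a * s^rho * exp(-b s^2)."""
    b = (G((rho + 2) / 2) / G((rho + 1) / 2)) ** 2   # fixes the mean at 1
    a = 2 * b ** ((rho + 1) / 2) / G((rho + 1) / 2)  # fixes the normalization
    return a * s ** rho * np.exp(-b * s ** 2)

# Hypothetical normalized terrace widths s = w / <w>
rng = np.random.default_rng(2)
s = rng.chisquare(4, 2000)
s /= s.mean()

# Maximum-likelihood estimate of the GWD exponent rho
nll = lambda rho: -np.sum(np.log(gwd(s, rho)))
rho = minimize_scalar(nll, bounds=(0.5, 10), method="bounded").x
A_tilde = rho * (rho - 2) / 4.0              # dimensionless repulsion strength
print(f"rho = {rho:.2f}, A~ = {A_tilde:.2f}")
```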

  2. Structure analysis and size distribution of particulate matter from candles and kerosene combustion in burning chamber

    NASA Astrophysics Data System (ADS)

    Baitimirova, M.; Osite, A.; Katkevics, J.; Viksna, A.

    2012-08-01

    Burning of candles generates fine particulate matter that degrades indoor air quality and may therefore harm human health. In this study, solid aerosol particles from the burning of candles of different composition and from kerosene combustion were collected in a closed laboratory system. The present work describes particulate matter collection for structure analysis and the relationship between the source and the size distribution of the particulate matter. The formation mechanism of the particulate matter and its tendency to agglomerate are also described. Particles obtained from kerosene combustion have a normal size distribution, whereas particles generated from the burning of stearin candles have a distribution shifted towards the finer particle size range. When stearin is added to a paraffin candle, the particle size distribution likewise shifts towards finer particles. A tendency to form agglomerates in a short time is observed for particles obtained from kerosene combustion, while for particles obtained from the burning of candles of different composition no such tendency is observed. Particles from candles and kerosene combustion are Aitken and accumulation mode particles.

  3. Analysis of discrete and continuous distributions of ventilatory time constants from dynamic computed tomography

    NASA Astrophysics Data System (ADS)

    Doebrich, Marcus; Markstaller, Klaus; Karmrodt, Jens; Kauczor, Hans-Ulrich; Eberle, Balthasar; Weiler, Norbert; Thelen, Manfred; Schreiber, Wolfgang G.

    2005-04-01

    In this study, an algorithm was developed to measure the distribution of pulmonary time constants (TCs) from dynamic computed tomography (CT) data sets during a sudden airway pressure step up. Simulations with synthetic data were performed to test the methodology as well as the influence of experimental noise. Furthermore, the algorithm was applied to in vivo data. In five pigs, sudden changes in airway pressure were imposed during dynamic CT acquisition in healthy lungs and in a saline lavage ARDS model. The fractional gas content in the imaged slice (FGC) was calculated by density measurements for each CT image. Temporal variations of the FGC were analysed assuming a model with a continuous distribution of exponentially decaying time constants. The simulations proved the feasibility of the method, and the influence of experimental noise could be well evaluated. Analysis of the in vivo data showed that in healthy lungs ventilation processes are more likely characterized by discrete TCs, whereas in ARDS lungs continuous distributions of TCs are observed. The temporal behaviour of lung inflation and deflation can be characterized objectively using the described new methodology. This study indicates that continuous distributions of TCs reflect lung ventilation mechanics more accurately than discrete TCs.
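
    Recovering a spectrum of time constants from a step-response curve is a linear inverse problem once the TCs are discretized on a grid. A minimal non-negative least-squares sketch on synthetic data; the grid, time axis, and two-compartment test signal are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import nnls

t = np.linspace(0, 5, 120)                # time after the pressure step (s)
tau_grid = np.logspace(-2, 1, 50)         # candidate time constants (s)
A = 1.0 - np.exp(-t[:, None] / tau_grid)  # basis: first-order step responses

# Synthetic "healthy lung" signal built from two discrete time constants
y = 0.7 * (1 - np.exp(-t / 0.2)) + 0.3 * (1 - np.exp(-t / 1.5))
y += 0.005 * np.random.default_rng(3).standard_normal(t.size)

w, _ = nnls(A, y)                         # non-negative TC spectrum
top = tau_grid[w > 0.2 * w.max()]
print("dominant time constants (s):", np.round(top, 2))
```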

  4. Analysis of crater distribution in mare units on the lunar far side

    NASA Technical Reports Server (NTRS)

    Walker, A. S.; El-Baz, F.

    1982-01-01

    Mare material is asymmetrically distributed on the moon. The earth-facing hemisphere, where the crust is believed to be 26 km thinner than on the farside, contains substantially more basaltic mare material. Using Lunar Topographic Orthophoto Maps, the thickness of the mare material in three farside craters, Aitken (0.59 km), Isaev (1.0 km), and Tsiolkovskiy (1.75 km), was calculated. Crater frequency distributions in five farside mare units (Aitken, Isaev, Lacus Solitudinis, Langemak, and Tsiolkovskiy) and one light plains unit (in Mendeleev) were also studied. Nearly 10,000 farside craters were counted. Analysis of the crater frequency on the light plains unit gives an age of 4.3 billion yr. Crater frequency distributions on the mare units indicate ages of 3.7 and 3.8 billion yr, suggesting that the units were emplaced over a narrow time period of approximately 100 million yr. Returned lunar samples from nearside maria give dates as young as 3.1 billion yr. The results of this study suggest that mare basalt emplacement on the far side ceased before it did on the near side.

  5. Pore space analysis of NAPL distribution in sand-clay media

    USGS Publications Warehouse

    Matmon, D.; Hayden, N.J.

    2003-01-01

    This paper introduces a conceptual model of clays and non-aqueous phase liquids (NAPLs) at the pore scale that has been developed from a mathematical unit cell model, and direct micromodel observation and measurement of clay-containing porous media. The mathematical model uses a unit cell concept with uniform spherical grains for simulating the sand in the sand-clay matrix (≤10% clay). Micromodels made with glass slides and including different clay-containing porous media were used to investigate the two clays (kaolinite and montmorillonite) and NAPL distribution within the pore space. The results were used to understand the distribution of NAPL advancing into initially saturated sand and sand-clay media, and provided a detailed analysis of the pore-scale geometry, pore size distribution, NAPL entry pressures, and the effect of clay on this geometry. Interesting NAPL saturation profiles were observed as a result of the complexity of the pore space geometry with the different packing angles and the presence of clays. The unit cell approach has applications for enhancing the mechanistic understanding and conceptualization, both visually and mathematically, of pore-scale processes such as NAPL and clay distribution. © 2003 Elsevier Science Ltd. All rights reserved.

  6. Rural tourism spatial distribution based on multi-criteria decision analysis and GIS

    NASA Astrophysics Data System (ADS)

    Zhang, Hongxian; Yang, Qingsheng

    2008-10-01

    Studying the spatial distribution of rural tourism can provide a scientific basis for decisions on developing rural economies. Traditional approaches to tourism spatial distribution have limitations in quantifying priority locations for tourism development on small units: they indicate only the overall distribution of tourism and whether locations are suitable for tourism development, whereas a development ranking under different decision objectives should be considered. This paper presents a way to rank locations for rural tourism development spatially by integrating multi-criteria decision analysis (MCDA) and geographic information systems (GIS). Locations with inconvenient transportation, an undeveloped economy, and good tourism resources should be the first to develop rural tourism. Based on this objective, a tourism development priority utility is calculated for each town with MCDA and GIS, and towns with higher priority utilities are selected to develop rural tourism first. The method was successfully applied to rank locations for rural tourism in Ningbo City. The results show that MCDA is an effective way to distribute rural tourism spatially under specific decision objectives and that rural tourism can promote economic development.
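
    A weighted-sum utility over normalized criteria is the simplest MCDA scoring rule for such a ranking. A toy sketch in which the town names, criterion values, and weights are invented, and the criteria are oriented so that larger values mean higher development priority:

```python
import numpy as np

# Hypothetical normalized criteria per town:
# [transport inconvenience, economic underdevelopment, tourism resources]
towns = {
    "TownA": [0.2, 0.3, 0.9],
    "TownB": [0.8, 0.7, 0.6],
    "TownC": [0.3, 0.2, 0.8],
}
weights = np.array([0.3, 0.3, 0.4])       # assumed decision weights, sum to 1

# Weighted-sum priority utility, then rank towns from highest to lowest
scores = {name: float(weights @ np.array(c)) for name, c in towns.items()}
for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: development priority utility = {s:.2f}")
```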

  7. Analysis of the Effects of Streamwise Lift Distribution on Sonic Boom Signature

    NASA Technical Reports Server (NTRS)

    Yoo, Paul

    2013-01-01

    Investigation of sonic boom has been one of the major areas of study in aeronautics due to the benefits a low-boom aircraft has in both civilian and military applications. This work conducts a numerical analysis of the effects of streamwise lift distribution on the shock coalescence characteristics. A simple wing-canard-stabilator body model is used in the numerical simulation. The streamwise lift distribution is varied by fixing the canard at a deflection angle while trimming the aircraft with the wing and the stabilator at the desired lift coefficient. The lift and the pitching moment coefficients are computed using Missile DATCOM v. 707. The flow field around the wing-canard-stabilator body model is resolved using the OVERFLOW-2 flow solver. Overset/chimera grid topology is used to simplify the grid generation of various configurations representing different streamwise lift distributions. The numerical simulations are performed without viscosity unless it is required for numerical stability. All configurations are simulated at Mach 1.4, an angle of attack of 1.5°, a lift coefficient of 0.05, and a pitching moment coefficient of approximately 0. Four streamwise lift distribution configurations were tested.

  8. Adaptive Voltage Control with Distributed Energy Resources: Algorithm, Theoretical Analysis, Simulation and Field Test Verification

    SciTech Connect

    Li, Huijuan; Li, Fangxing; Xu, Yan; Rizy, D Tom; Kueck, John D

    2010-01-01

    Distributed energy resources (DE) or distributed generators (DG) with power electronics interfaces and logic control using local measurements are capable of providing reactive-power-related ancillary system services. In particular, local voltage regulation has drawn much attention with regard to power system reliability and voltage stability, especially in light of past major cascading outages. This paper addresses the challenges of controlling DEs to regulate local voltage in distribution systems. An adaptive voltage control method is proposed that dynamically modifies control parameters to respond to system changes. Theoretical analysis shows that a corresponding formulation of the dynamic control parameters exists; hence the adaptive control method is theoretically solid. Both simulation and field experiment results at the Distributed Energy Communications and Controls (DECC) Laboratory confirm that this method is capable of satisfying the fast response requirement for operational use without causing oscillation, inefficiency, or system equipment interference. Since this method has a high tolerance to real-time data shortage and adapts widely to variable power system operational situations, it is quite suitable for broad utility application.
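
    One common way to make such a controller adaptive is to re-estimate the local voltage-to-reactive-power sensitivity from each control move and schedule the gain on that estimate. The sketch below is a generic illustration of that idea, not the paper's actual control law; the feeder model and all constants are invented:

```python
import numpy as np

TRUE_SENS = 0.04          # p.u. voltage per unit reactive power (unknown to controller)

def measure_voltage(q, t):
    """Toy feeder: voltage = slowly drifting load level + sensitivity * injected Q."""
    return 0.95 + 0.01 * np.sin(t / 5.0) + TRUE_SENS * q

v_ref, q = 1.0, 0.0
sens_est = 0.1            # initial guess of dV/dQ
v_prev, q_prev = measure_voltage(q, 0), q
for t in range(1, 50):
    v = measure_voltage(q, t)
    # Adapt: re-estimate the sensitivity from the last control move
    if abs(q - q_prev) > 1e-6:
        sens_est = 0.8 * sens_est + 0.2 * (v - v_prev) / (q - q_prev)
    v_prev, q_prev = v, q
    # Integral control with the gain scheduled on the estimated sensitivity
    q += (v_ref - v) / max(sens_est, 1e-3)
print(f"final voltage = {measure_voltage(q, 50):.4f} p.u.")
```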

  9. Inverse analysis of non-uniform temperature distributions using multispectral pyrometry

    NASA Astrophysics Data System (ADS)

    Fu, Tairan; Duan, Minghao; Tian, Jibin; Shi, Congling

    2016-05-01

    Optical diagnostics can be used to obtain sub-pixel temperature information in remote sensing. A multispectral pyrometry method was developed using multiple spectral radiation intensities to deduce the temperature area distribution in the measurement region. The method transforms a spot multispectral pyrometer with a fixed field of view into a pyrometer with enhanced spatial resolution that can give sub-pixel temperature information from a "one pixel" measurement region. A temperature area fraction function was defined to represent the spatial temperature distribution in the measurement region. The method is illustrated by simulations of a multispectral pyrometer with a spectral range of 8.0-13.0 μm measuring a non-isothermal region with a temperature range of 500-800 K in the spot pyrometer field of view. The inverse algorithm for the sub-pixel temperature distribution (temperature area fractions) in the "one pixel" verifies this multispectral pyrometry method. The results show that an improved Levenberg-Marquardt algorithm is effective for this ill-posed inverse problem with relative errors in the temperature area fractions of (-3%, 3%) for most of the temperatures. The analysis provides a valuable reference for the use of spot multispectral pyrometers for sub-pixel temperature distributions in remote sensing measurements.
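
    The inverse problem can be posed as recovering non-negative area fractions on a temperature grid from multispectral radiances. The paper uses an improved Levenberg-Marquardt algorithm; the sketch below substitutes a simpler non-negative least-squares solve with a soft sum-to-one constraint, and all instrument values are illustrative:

```python
import numpy as np
from scipy.optimize import nnls

C1, C2 = 1.191e8, 1.4388e4                  # Planck constants (wavelength in um)

def planck(lam_um, T):
    """Blackbody spectral radiance for wavelength in micrometers."""
    return C1 / (lam_um**5 * (np.exp(C2 / (lam_um * T)) - 1.0))

lams = np.linspace(8.0, 13.0, 16)           # spectral channels (um)
T_grid = np.linspace(500, 800, 13)          # candidate temperatures (K)
A = np.array([[planck(l, T) for T in T_grid] for l in lams])

# Synthetic "one pixel": 60% of the area at 550 K, 40% at 750 K
f_true = np.zeros_like(T_grid)
f_true[2], f_true[10] = 0.6, 0.4
y = A @ f_true

# Non-negative inversion with a heavily weighted sum-to-one constraint row
rho = 1e3 * A.max()
A_c = np.vstack([A, rho * np.ones_like(T_grid)])
y_c = np.append(y, rho * 1.0)
f_est, _ = nnls(A_c, y_c)
print("recovered temperature area fractions:", np.round(f_est, 2))
```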

  10. Empirical analysis on the connection between power-law distributions and allometries for urban indicators

    NASA Astrophysics Data System (ADS)

    Alves, L. G. A.; Ribeiro, H. V.; Lenzi, E. K.; Mendes, R. S.

    2014-09-01

    We report on the existing connection between power-law distributions and allometries. As first reported in Gomez-Lievano et al. (2012) for the relationship between homicides and population, when these urban indicators present asymptotic power-law distributions, they can also display specific allometries among themselves. Here, we present an extensive characterization of this connection when considering all possible pairs of relationships from twelve urban indicators of Brazilian cities (such as child labor, illiteracy, income, sanitation and unemployment). Our analysis reveals that all our urban indicators are asymptotically distributed as power laws and that the proposed connection also holds for our data when the allometric relationship displays enough correlation. We have also found that not all allometric relationships are independent and that they can be understood as a consequence of the allometric relationship between the urban indicator and the population size. We further show that the residual fluctuations surrounding the allometries are characterized by an almost constant variance and log-normal distributions.
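
    Both quantities in the abstract are straightforward to estimate: the tail exponent of a power-law distribution by the Hill/maximum-likelihood estimator, and the allometric exponent by ordinary least squares in log-log space. A sketch on synthetic data, with all generating parameters invented:

```python
import numpy as np

rng = np.random.default_rng(4)
pop = (rng.pareto(1.5, 1000) + 1) * 1e4          # synthetic city populations

# Allometry: indicator Y ~ a * pop^beta with log-normal scatter
Y = 2.0 * pop ** 1.15 * np.exp(0.2 * rng.standard_normal(pop.size))

# 1) Power-law tail exponent of the indicator (Hill / MLE estimator)
xmin = np.quantile(Y, 0.8)                       # tail cutoff
tail = Y[Y >= xmin]
alpha = 1.0 + tail.size / np.sum(np.log(tail / xmin))

# 2) Allometric exponent from an OLS fit in log-log space
beta, loga = np.polyfit(np.log(pop), np.log(Y), 1)
print(f"tail exponent alpha = {alpha:.2f}, allometric exponent beta = {beta:.2f}")
```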

  11. Spatio-Temporal Distribution Characteristics and Trajectory Similarity Analysis of Tuberculosis in Beijing, China

    PubMed Central

    Li, Lan; Xi, Yuliang; Ren, Fu

    2016-01-01

    Tuberculosis (TB) is an infectious disease with one of the highest reported incidences in China. The detection of the spatio-temporal distribution characteristics of TB is indicative of its prevention and control conditions. Trajectory similarity analysis detects variations and loopholes in prevention and provides urban public health officials and related decision makers more information for the allocation of public health resources and the formulation of prioritized health-related policies. This study analysed the spatio-temporal distribution characteristics of TB from 2009 to 2014 by utilizing spatial statistics, spatial autocorrelation analysis, and space-time scan statistics. Spatial statistics measured the TB incidence rate (TB patients per 100,000 residents) at the district level to determine its spatio-temporal distribution and to identify characteristics of change. Spatial autocorrelation analysis was used to detect global and local spatial autocorrelations across the study area. Purely spatial, purely temporal and space-time scan statistics were used to identify purely spatial, purely temporal and spatio-temporal clusters of TB at the district level. The other objective of this study was to compare the trajectory similarities between the incidence rates of TB and new smear-positive (NSP) TB patients in the resident population (NSPRP)/new smear-positive TB patients in the TB patient population (NSPTBP)/retreated smear-positive (RSP) TB patients in the resident population (RSPRP)/retreated smear-positive TB patients in the TB patient population (RSPTBP) to detect variations and loopholes in TB prevention and control among the districts in Beijing. The incidence rates in Beijing exhibited a gradual decrease from 2009 to 2014. Although global spatial autocorrelation was not detected overall across all of the districts of Beijing, individual districts did show evidence of local spatial autocorrelation: Chaoyang and Daxing were Low-Low districts over the six-year period. The purely spatial scan statistics analysis showed significant spatial clusters of high and low incidence rates; the purely temporal scan statistics showed the temporal cluster with a three-year period from 2009 to 2011 characterized by a high incidence rate; and the space-time scan statistics analysis showed significant spatio-temporal clusters. The distribution of the mean centres (MCs) showed that the general distributions of the NSPRP MCs and NSPTBP MCs were to the east of the incidence rate MCs. Conversely, the general distributions of the RSPRP MCs and the RSPTBP MCs were to the south of the incidence rate MCs. Based on the combined analysis of MC distribution characteristics and trajectory similarities, the NSP trajectory was most similar to the incidence rate trajectory. Thus, more attention should be focused on the discovery of NSP patients in the western part of Beijing, whereas the northern part of Beijing needs intensive treatment for RSP patients. PMID:26959048

  13. A Grid-based solution for management and analysis of microarrays in distributed experiments.

    PubMed

    Porro, Ivan; Torterolo, Livia; Corradi, Luca; Fato, Marco; Papadimitropoulos, Adam; Scaglione, Silvia; Schenone, Andrea; Viti, Federica

    2007-01-01

    Several systems have been presented in recent years to manage the complexity of large microarray experiments. Although good results have been achieved, most systems fall short in one or more areas. A Grid-based approach may provide a shared, standardized and reliable solution for the storage and analysis of biological data, in order to maximize the results of experimental efforts. A Grid framework was therefore adopted, owing to the need to remotely access large amounts of distributed data and to scale computational performance for terabyte datasets. Two different biological studies were planned to highlight the benefits that can emerge from our Grid-based platform. The described environment relies on storage services and computational services provided by the gLite Grid middleware. The Grid environment also exploits the added value of metadata, letting users better classify and search experiments. A state-of-the-art Grid portal has been implemented to hide the complexity of the framework from end users and to let them easily access the available services and data; the functional architecture of the portal is described. As a first test of system performance, a gene expression analysis was performed on a dataset of the Affymetrix GeneChip Rat Expression Array RAE230A from the ArrayExpress database. The analysis comprises three steps: (i) group opening and image set uploading, (ii) normalization, and (iii) model-based gene expression (based on the PM/MM difference model). Two Linux versions (sequential and parallel) of the dChip software were developed to implement the analysis and were tested on a cluster. The results show that parallelizing the analysis process and executing parallel jobs on distributed computational resources actually improves performance. Moreover, the Grid environment has been tested both for uploading and accessing distributed datasets through the Grid middleware and for its ability to manage the execution of jobs on distributed computational resources. Results from the Grid test will be discussed in a further paper. PMID:17430574

  14. A distributed analysis and visualization system for model and observational data

    NASA Technical Reports Server (NTRS)

    Wilhelmson, Robert; Koch, Steven

    1993-01-01

    The objective of this proposal is to develop an integrated and distributed analysis and display software system which can be applied to all areas of the Earth System Science to study numerical model and earth observational data from storm to global scale. This system will be designed to be easy to use, portable, flexible and easily extensible and designed to adhere to current and emerging standards whenever possible. It will provide an environment for visualization of the massive amounts of data generated from satellites and other observational field measurements and from model simulations during or after their execution. Two- and three-dimensional animation will also be provided. This system will be based on a widely used software package from NASA called GEMPAK and prototype software for three-dimensional interactive displays built at NCSA. The underlying foundation of the system will be a set of software libraries which can be distributed across a UNIX based supercomputer and workstations.

  15. A distributed analysis and visualization system for model and observational data

    NASA Technical Reports Server (NTRS)

    Wilhelmson, Robert; Koch, Steven

    1992-01-01

    The objective of this proposal is to develop an integrated and distributed analysis and display software system which can be applied to all areas of the Earth System Science to study numerical model and earth observational data from storm to global scale. This system will be designed to be easy to use, portable, flexible and easily extensible and to adhere to current and emerging standards whenever possible. It will provide an environment for visualization of the massive amounts of data generated from satellites and other observational field measurements and from model simulations during or after their execution. Two- and three-dimensional animation will also be provided. This system will be based on a widely used software package from NASA called GEMPAK and prototype software for three dimensional interactive displays built at NCSA. The underlying foundation of the system will be a set of software libraries which can be distributed across a UNIX based supercomputer and workstations.

  16. Analysis and modeling of information flow and distributed expertise in space-related operations.

    PubMed

    Caldwell, Barrett S

    2005-01-01

    Evolving space operations requirements and mission planning for long-duration expeditions require detailed examinations and evaluations of information flow dynamics, knowledge-sharing processes, and information technology use in distributed expert networks. This paper describes the work conducted with flight controllers in the Mission Control Center (MCC) of NASA's Johnson Space Center. This MCC work describes the behavior of experts in a distributed supervisory coordination framework, which extends supervisory control/command and control models of human task performance. Findings from this work are helping to develop analysis techniques, information architectures, and system simulation capabilities for knowledge sharing in an expert community. These findings are being applied to improve knowledge-sharing processes applied to a research program in advanced life support for long-duration space flight. Additional simulation work is being developed to create interoperating modules of information flow and novice/expert behavior patterns. PMID:15835058

  17. Exposure Models for the Prior Distribution in Bayesian Decision Analysis for Occupational Hygiene Decision Making

    PubMed Central

    Lee, Eun Gyung; Kim, Seung Won; Feigley, Charles E.; Harper, Martin

    2015-01-01

    This study introduces two semi-quantitative methods, Structured Subjective Assessment (SSA) and Control of Substances Hazardous to Health (COSHH) Essentials, in conjunction with two-dimensional Monte Carlo simulations for determining prior probabilities. A prior distribution based on expert judgment was included for comparison. Practical applications of the proposed methods were demonstrated using personal exposure measurements of isoamyl acetate in an electronics manufacturing facility and of isopropanol in a printing shop. The applicability of these methods in real workplaces was discussed based on the advantages and disadvantages of each method. Although these methods cannot be completely independent of expert judgment, this study demonstrated a methodological improvement in the estimation of the prior distribution for the Bayesian decision analysis tool. The proposed methods provide a logical basis for the decision process by considering the determinants of worker exposure. PMID:23252451
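
    A two-dimensional Monte Carlo separates uncertainty (an outer loop over plausible exposure-profile parameters) from variability (an inner loop over simulated exposures). A hedged sketch of how such a simulation can yield a prior over exposure categories; the parameter ranges, limits, and category edges below are invented, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(5)
OEL = 100.0                                   # occupational exposure limit (ppm)
n_outer, n_inner = 1000, 500

# Outer loop (uncertainty): candidate lognormal exposure-profile parameters
# drawn from ranges a semi-quantitative assessment might judge plausible
gm  = rng.uniform(5.0, 40.0, n_outer)         # geometric means (ppm)
gsd = rng.uniform(1.5, 3.0, n_outer)          # geometric standard deviations

# Inner loop (variability): simulate daily exposures for each candidate and
# take the empirical 95th percentile of the exposure profile
x95 = np.array([
    np.quantile(rng.lognormal(np.log(g), np.log(s), n_inner), 0.95)
    for g, s in zip(gm, gsd)
])

# Prior over AIHA-style exposure control categories (fractions of the OEL)
edges = np.array([0.01, 0.10, 0.50, 1.00]) * OEL
prior = np.bincount(np.digitize(x95, edges), minlength=5) / n_outer
print("prior probability per exposure category:", np.round(prior, 3))
```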

  18. Analysis of temperature distribution in a pipe with inner mineral deposit

    NASA Astrophysics Data System (ADS)

    Joachimiak, Magda; Ciałkowski, Michał; Bartoszewicz, Jarosław

    2014-06-01

    The paper presents the results of calculations for determining temperature distributions in a steel pipe of a heat exchanger, taking into account inner mineral deposits. Calculations were carried out for silicate-based scale, which is characterized by a low heat conduction coefficient. Deposits with the lowest heat conduction coefficients have a particularly strong impact on the strength of thermally loaded elements. The analysis took into account the location of the thermocouple and the imperfection of its installation. The paper presents the influence of the accuracy with which the heat flux on the pipe's external wall is determined on the temperature distribution. The influence of the heat flux disturbance value on the thickness of the deposit has also been analyzed.
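
    Under steady conditions the deposit and the steel wall act as cylindrical thermal resistances in series, so the temperature distribution across the wall follows from a logarithmic drop per layer. A small sketch with invented geometry and properties, not the paper's values:

```python
import numpy as np

# Illustrative geometry and properties (assumptions, not from the paper)
r_in, r_dep, r_out = 0.010, 0.012, 0.015   # deposit inner, steel inner, steel outer (m)
k_dep, k_steel = 0.5, 45.0                 # thermal conductivities (W/m/K)
q_out = 5.0e4                              # heat flux density at the outer wall (W/m^2)
T_out = 450.0                              # measured outer-wall temperature (deg C)

Q = q_out * 2 * np.pi * r_out              # heat flow per unit pipe length (W/m)

# Temperature drops across each cylindrical layer (series thermal resistances)
dT_steel = Q * np.log(r_out / r_dep) / (2 * np.pi * k_steel)
dT_dep   = Q * np.log(r_dep / r_in)  / (2 * np.pi * k_dep)

# Heat flows outward, so the inner surfaces are hotter than the outer wall
print(f"steel inner wall  : {T_out + dT_steel:.1f} C")
print(f"deposit surface   : {T_out + dT_steel + dT_dep:.1f} C")
```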

  19. Distributed and/or grid-oriented approach to BTeV data analysis

    SciTech Connect

    Joel N. Butler

    2002-12-23

    The BTeV collaboration will record approximately 2 petabytes of raw data per year. It plans to analyze this data using the distributed resources of the collaboration as well as dedicated resources, primarily residing in the very large BTeV trigger farm, and resources accessible through the developing world-wide data grid. The data analysis system is being designed from the very start with this approach in mind. In particular, we plan a fully disk-based data storage system with multiple copies of the data distributed across the collaboration to provide redundancy and to optimize access. We will also position ourselves to take maximum advantage of shared systems, as well as dedicated systems, at our collaborating institutions.

  20. Methods and apparatuses for information analysis on shared and distributed computing systems

    DOEpatents

    Bohn, Shawn J [Richland, WA; Krishnan, Manoj Kumar [Richland, WA; Cowley, Wendy E [Richland, WA; Nieplocha, Jarek [Richland, WA

    2011-02-22

    Apparatuses and computer-implemented methods for analyzing, on shared and distributed computing systems, information comprising one or more documents are disclosed according to some aspects. In one embodiment, information analysis can comprise distributing one or more distinct sets of documents among each of a plurality of processes, wherein each process performs operations on a distinct set of documents substantially in parallel with other processes. Operations by each process can further comprise computing term statistics for terms contained in each distinct set of documents, thereby generating a local set of term statistics for each distinct set of documents. Still further, operations by each process can comprise contributing the local sets of term statistics to a global set of term statistics, and participating in generating a major term set from an assigned portion of a global vocabulary.
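
    The map-reduce flavor of the claim (local term statistics per distinct document set, contributed to a global set) can be illustrated in a few lines of Python; the partitioning and the "major term set" criterion below are invented stand-ins:

```python
from collections import Counter
from multiprocessing import Pool

# Hypothetical document partitions, one distinct set per process
PARTITIONS = [
    ["the cat sat", "the dog ran"],
    ["a cat ran", "the bird flew"],
]

def local_term_stats(docs):
    """Each worker computes term statistics for its own distinct document set."""
    counts = Counter()
    for d in docs:
        counts.update(d.split())
    return counts

if __name__ == "__main__":
    with Pool(len(PARTITIONS)) as pool:
        local = pool.map(local_term_stats, PARTITIONS)   # parallel local pass
    # Contribute local statistics to a global set (the reduce step)
    global_stats = sum(local, Counter())
    # A "major term set": here simply the most frequent global terms
    print(global_stats.most_common(3))
```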

  1. Preliminary analysis of the span-distributed-load concept for cargo aircraft design

    NASA Technical Reports Server (NTRS)

    Whitehead, A. H., Jr.

    1975-01-01

    A simplified computer analysis of the span-distributed-load airplane (in which payload is placed within the wing structure) has shown that the span-distributed-load concept has high potential for application to future air cargo transport design. Significant increases in payload fraction over current wide-bodied freighters are shown for gross weights in excess of 0.5 Gg (1,000,000 lb). A cruise-matching calculation shows that the trend toward higher aspect ratio improves overall efficiency; that is, less thrust and fuel are required. The optimal aspect ratio probably is not determined by structural limitations. Terminal-area constraints and increasing design-payload density, however, tend to limit aspect ratio.

  2. Simulation and analysis of an intermediate frequency (IF) distribution system with applications for Space Station

    NASA Technical Reports Server (NTRS)

    Costello, Thomas A.; Brandt, C. Maite

    1989-01-01

    Simulation and analysis results are described for a wideband fiber optic intermediate frequency distribution channel for a frequency division multiple access (FDMA) system in which antenna equipment is remotely located from the signal processing equipment. The fiber optic distribution channel accommodates multiple signals, received from a single antenna, with differing power levels. The performance parameters addressed are intermodulation degradation, laser noise, and adjacent channel interference, as they impact the overall system design. Simulation results showed that the laser diode modulation level can be allowed to reach 100 percent without considerable degradation. The laser noise must be controlled so as to provide a noise floor of less than -90 dBW/Hz. The fiber optic link increases the degradation due to power imbalance yet diminishes the effects of the transmit amplifier nonlinearity. Overall, optimal operating conditions can be found that yield a degradation level of about 0.1 dB caused by the fiber optic link.

  3. A universal analysis tool for the detection of asymmetric signal distribution in microscopic images

    PubMed Central

    Matis, Maja; Axelrod, Jeffrey D.; Galic, Milos

    2012-01-01

    Background Polarization of tissue is achieved by asymmetric distribution of proteins and organelles within individual cells. However, existing quantitative assays to measure this asymmetry in an automated and unbiased manner suffer from significant limitations. Results Here, we report a new way to assess protein and organelle localization in tissue based on correlative fluorescence analysis. As a proof of principle, we successfully characterized planar cell polarity dependent asymmetry in developing Drosophila melanogaster tissues on the single cell level using fluorescence cross-correlation. Conclusions Systematic modulation of signal strength and distribution show that fluorescence cross-correlation reliably detects asymmetry over a broad parameter space. The novel method described here produces robust, rapid and unbiased measurement of biometrical properties of cell components in live tissue that is readily applicable in other model systems. PMID:22689329
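
    The core of such a correlative analysis is a Pearson cross-correlation between two fluorescence channels, evaluated as a function of spatial shift so that an asymmetric (offset) signal shows up as a displaced correlation peak. A toy sketch on synthetic images; the sizes, shift, and noise levels are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 128
base = rng.random((n, n))

# Two synthetic channels: channel B is channel A shifted 3 px along x (asymmetry)
chan_a = base + 0.1 * rng.random((n, n))
chan_b = np.roll(base, 3, axis=1) + 0.1 * rng.random((n, n))

def pearson(a, b):
    """Pearson correlation coefficient of two equally sized images."""
    a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

# Cross-correlation versus lateral shift; the peak location reveals the offset
shifts = range(-6, 7)
r = [pearson(chan_a, np.roll(chan_b, s, axis=1)) for s in shifts]
best = list(shifts)[int(np.argmax(r))]
print(f"peak correlation at shift {best} px (r = {max(r):.2f})")
```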

  4. Distribution water quality anomaly detection from UV optical sensor monitoring data by integrating principal component analysis with chi-square distribution.

    PubMed

    Hou, Dibo; Zhang, Jian; Yang, Zheling; Liu, Shu; Huang, Pingjie; Zhang, Guangxin

    2015-06-29

    Ensuring the security of distribution water quality has recently attracted global attention due to the potential threat from harmful contaminants. Real-time monitoring based on ultraviolet optical sensors is a promising technique: it is reagent-free, low in maintenance cost, rapid, and wide in coverage. However, ultraviolet absorption spectra are high-dimensional and easily interfered with, and in on-site applications there is almost no prior knowledge, such as the spectral characteristics of potential contaminants. Meanwhile, the notion of normal water quality also varies with operating conditions. In this paper, a procedure based on multivariate statistical analysis is proposed to detect distribution water quality anomalies from ultraviolet optical sensor data. First, principal component analysis is employed to capture the main features of variation in the spectral matrix and to reduce its dimensionality. A new statistical variable is then constructed and used to evaluate the local outlying degree according to the chi-square distribution in the principal component subspace. The probability that the latest observation is anomalous is calculated by accumulating the outlying degrees of the adjacent previous observations. To develop a more reliable anomaly detection procedure, several key parameters are discussed. Using the proposed methods, distribution water quality anomalies and abnormal optical changes can be detected. A contaminant intrusion experiment was conducted in a pilot-scale distribution system by injecting phenol solution, and the effectiveness of the proposed procedure was verified using the experimental spectral data. PMID:26191757
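
    The detection statistic described (scores in a PCA subspace judged against a chi-square limit) can be sketched compactly. The example below builds a toy spectral matrix from two absorption bands, retains two principal components, and flags a spectrum with a boosted band; all dimensions, bands, and thresholds are illustrative:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(7)
wl = np.linspace(200, 360, 100)                    # wavelength axis (nm)
base = np.exp(-(wl - 230) ** 2 / 800)              # broad background band
band = np.exp(-(wl - 270) ** 2 / 200)              # organic absorption band

# Training spectra: "normal" water = random amounts of the two bands + noise
n = 300
amps = np.column_stack([rng.normal(1.0, 0.10, n), rng.normal(0.3, 0.03, n)])
train = amps @ np.vstack([base, band]) + rng.normal(0, 0.005, (n, wl.size))

# PCA via SVD of the mean-centered matrix; keep k components
mu = train.mean(axis=0)
U, s, Vt = np.linalg.svd(train - mu, full_matrices=False)
k = 2
P, var = Vt[:k], s[:k] ** 2 / (n - 1)

def outlying_degree(x):
    """T^2-style statistic: squared, variance-scaled scores in the PC subspace."""
    t = P @ (x - mu)
    return float(np.sum(t ** 2 / var))

limit = chi2.ppf(0.999, df=k)                      # chi-square control limit
contaminated = mu + 0.3 * band                     # phenol-like band increase
print(outlying_degree(mu.copy()) < limit)          # True  -> accepted as normal
print(outlying_degree(contaminated) > limit)       # True  -> flagged as anomaly
```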

  5. Using occlusal wear information and finite element analysis to investigate stress distributions in human molars

    PubMed Central

    Benazzi, Stefano; Kullmer, Ottmar; Grosse, Ian R; Weber, Gerhard W

    2011-01-01

    Simulations based on finite element analysis (FEA) have attracted increasing interest in dentistry and dental anthropology for evaluating the stress and strain distribution in teeth under occlusal loading conditions. Nonetheless, FEA is usually applied without considering changes in contacts between antagonistic teeth during the occlusal power stroke. In this contribution we show how occlusal information can be used to investigate the stress distribution with 3D FEA in lower first molars (M1). The antagonistic crowns M1 and P2–M1 of two dried modern human skulls were scanned by μCT in maximum intercuspation (centric occlusion) contact. A virtual analysis of the occlusal power stroke between M1 and P2–M1 was carried out in the Occlusal Fingerprint Analyser (OFA) software, and the occlusal trajectory path was recorded, while contact areas per time-step were visualized and quantified. Stress distributions of the M1 in selected occlusal stages were analyzed in Strand7, considering occlusal information taken from the OFA results for the individual loading direction and loading area. Our FEA results show that the stress pattern changes considerably during the power stroke, suggesting that wear facets have a crucial influence on the distribution of stress on the whole tooth. Grooves and fissures on the occlusal surface are seen as critical locations, as tensile stresses are concentrated at these features. Properly accounting for the power stroke kinematics of occluding teeth yields quite different results (less tensile stress in the crown) than the usual loading scenarios based on forces parallel to the long axis of the tooth. This leads to the conclusion that functional studies considering the kinematics of teeth are important to understand biomechanics and interpret the morphological adaptation of teeth. PMID:21615398

  6. Statistical Distribution of Inflation on Lava Flows: Analysis of Flow Surfaces on Earth and Mars

    NASA Technical Reports Server (NTRS)

    Glazel, L. S.; Anderson, S. W.; Stofan, E. R.; Baloga, S.

    2003-01-01

    The surface morphology of a lava flow results from processes that take place during the emplacement of the flow. Certain types of features, such as tumuli, lava rises and lava rise pits, are indicators of flow inflation or endogenous growth of a lava flow. Tumuli in particular have been identified as possible indicators of tube location, indicating that their distribution on the surface of a lava flow is a function of the internal pathways of lava present during flow emplacement. However, the distribution of tumuli on lava flows has not been examined in a statistically thorough manner. In order to more rigorously examine the distribution of tumuli on a lava flow, we examined a discrete flow lobe with numerous lava rises and tumuli on the 1969-1974 Mauna Ulu flow at Kilauea, Hawaii. The lobe is located in the distal portion of the flow below Holei Pali, which is characterized by hummocky pahoehoe flows emplaced from tubes. We chose this flow due to its discrete nature allowing complete mapping of surface morphologies, well-defined boundaries, well-constrained emplacement parameters, and known flow thicknesses. In addition, tube locations for this Mauna Ulu flow were mapped by Holcomb (1976) during flow emplacement. We also examine the distribution of tumuli on the distal portion of the hummocky Thrainsskjoldur flow field provided by Rossi and Gudmundsson (1996). Analysis of the Mauna Ulu and Thrainsskjoldur flow lobes and the availability of high-resolution MOC images motivated us to look for possible tumuli-dominated flow lobes on the surface of Mars. We identified a MOC image of a lava flow south of Elysium Mons with features morphologically similar to tumuli. The flow is characterized by raised elliptical to circular mounds, some with axial cracks, that are similar in size to the tumuli measured on Earth. One potential avenue of determining whether they are tumuli is to look at the spatial distribution to see if any patterns similar to those of tumuli-dominated terrestrial flows can be identified. Since tumuli form by the injection of lava beneath a crust, the distribution of tumuli on a flow should represent the distribution of thermally preferred pathways beneath the surface of the crust. That distribution of thermally preferred pathways may be a function of the evolution of a basaltic lava flow. As a longer-lived flow evolves, initially broad thermally preferred pathways would evolve to narrower, more well-defined tube-like pathways. The final flow morphology clearly preserves the growth of the flow over time, with inflation features indicating pathways that were not necessarily contemporaneously active. Here, we test using statistical analysis whether this final flow morphology produces distinct distributions that can be used to readily determine the distribution of thermally preferred pathways beneath the surface of the crust.

  7. Poster — Thur Eve — 74: Distributed, asynchronous, reactive dosimetric and outcomes analysis using DICOMautomaton

    SciTech Connect

    Clark, Haley; Wu, Jonn; Moiseenko, Vitali; Thomas, Steven

    2014-08-15

    Many have speculated about the future of computational technology in clinical radiation oncology. It has been advocated that the next generation of computational infrastructure will improve on the current generation by incorporating richer aspects of automation, more heavily and seamlessly featuring distributed and parallel computation, and providing more flexibility toward aggregate data analysis. In this report we describe how a recently created — but currently existing — analysis framework (DICOMautomaton) incorporates these aspects. DICOMautomaton supports a variety of use cases but is especially suited for dosimetric outcomes correlation analysis, investigation and comparison of radiotherapy treatment efficacy, and dose-volume computation. We describe: how it overcomes computational bottlenecks by distributing workload across a network of machines; how modern, asynchronous computational techniques are used to reduce blocking and avoid unnecessary computation; and how issues of out-of-date data are addressed using reactive programming techniques and data dependency chains. We describe internal architecture of the software and give a detailed demonstration of how DICOMautomaton could be used to search for correlations between dosimetric and outcomes data.

  8. Comprehensive neutron cross-section and secondary energy distribution uncertainty analysis for a fusion reactor

    SciTech Connect

    Gerstl, S.A.W.; LaBauve, R.J.; Young, P.G.

    1980-05-01

    Using the example of General Atomic's well-documented Power Generating Fusion Reactor (PGFR) design, this report carries out a comprehensive neutron cross-section and secondary energy distribution (SED) uncertainty analysis. The LASL sensitivity and uncertainty analysis code SENSIT is used to calculate reaction cross-section sensitivity profiles and integral SED sensitivity coefficients. These are then folded with covariance matrices and integral SED uncertainties to obtain the resulting uncertainties of three calculated neutronics design parameters: two critical radiation damage rates and a nuclear heating rate. The report documents the first sensitivity-based data uncertainty analysis that incorporates a quantitative treatment of the effects of SED uncertainties. The results demonstrate quantitatively that the ENDF/B-V cross-section data files for C, H, and O, including their SED data, are fully adequate for this design application, while the data for Fe and Ni are at best marginally adequate because they give rise to response uncertainties of up to 25%. Much higher response uncertainties are caused by cross-section and SED data uncertainties in Cu (26 to 45%), tungsten (24 to 54%), and Cr (up to 98%). Specific recommendations are given for re-evaluations of certain reaction cross-sections, secondary energy distributions, and uncertainty estimates.
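
    The "folding" step is a quadratic form: with a vector of relative sensitivities S and a relative covariance matrix C for a given cross section, the relative response variance is S^T C S. A minimal numeric sketch with invented four-group values:

```python
import numpy as np

# Hypothetical relative sensitivity profile of one design response to a
# cross section in 4 energy groups (fractional response change per
# fractional cross-section change, integrated over each group)
S = np.array([0.10, 0.35, 0.40, 0.15])

# Hypothetical relative covariance matrix of that cross section:
# fractional standard deviations with partial inter-group correlation
sd = np.array([0.05, 0.10, 0.20, 0.15])
corr = np.array([[1.0, 0.5, 0.2, 0.0],
                 [0.5, 1.0, 0.5, 0.2],
                 [0.2, 0.5, 1.0, 0.5],
                 [0.0, 0.2, 0.5, 1.0]])
C = np.outer(sd, sd) * corr

# Fold: relative variance of the response due to this cross section
rel_var = S @ C @ S
print(f"response uncertainty = {np.sqrt(rel_var):.1%}")
```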

  9. Local storage federation through XRootD architecture for interactive distributed analysis

    NASA Astrophysics Data System (ADS)

    Colamaria, F.; Colella, D.; Donvito, G.; Elia, D.; Franco, A.; Luparello, G.; Maggi, G.; Miniello, G.; Vallero, S.; Vino, G.

    2015-12-01

    A cloud-based Virtual Analysis Facility (VAF) for the ALICE experiment at the LHC has been deployed in Bari. Similar facilities are currently running in other Italian sites with the aim to create a federation of interoperating farms able to provide their computing resources for interactive distributed analysis. The use of cloud technology, along with elastic provisioning of computing resources as an alternative to the grid for running data intensive analyses, is the main challenge of these facilities. One of the crucial aspects of the user-driven analysis execution is the data access. A local storage facility has the disadvantage that the stored data can be accessed only locally, i.e. from within the single VAF. To overcome such a limitation a federated infrastructure, which provides full access to all the data belonging to the federation independently from the site where they are stored, has been set up. The federation architecture exploits both cloud computing and XRootD technologies, in order to provide a dynamic, easy-to-use and well performing solution for data handling. It should allow the users to store the files and efficiently retrieve the data, since it implements a dynamic distributed cache among many datacenters in Italy connected to one another through the high-bandwidth national network. Details on the preliminary architecture implementation and performance studies are discussed.

  10. Measurement of bubble size distribution in a gas-liquid foam using pulsed-field gradient nuclear magnetic resonance.

    PubMed

    Stevenson, Paul; Sederman, Andrew J; Mantle, Mick D; Li, Xueliang; Gladden, Lynn F

    2010-12-01

    Pulsed-field gradient nuclear magnetic resonance, previously used for measuring droplet size distributions in emulsions, has been used to measure bubble size distributions in a non-overflowing pneumatic gas-liquid foam that has been created by sparging propane into an aqueous solution of 1.5 g/l (5.20 mM) SDS. The bubble size distributions measured were reproducible and approximated a Weibull distribution. However, the bubble size distributions did not materially change with the position at which they were measured within the froth. An analysis of foam coarsening due to Ostwald ripening in a non-overflowing foam indicates that, for the experimental conditions employed, one would not expect this to be a significant effect. It is therefore apparent that the eventual collapse of the foam is due to bubble bursting (or surface coalescence) rather than Ostwald ripening. This surface coalescence occurs because of evaporation from the free surface of the foam. An analytical solution for the liquid fraction profile for a certain class of non-overflowing pneumatic foam is given, and a mean bubble size that is appropriate for drainage calculations is suggested. PMID:20832808
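
    Fitting a Weibull distribution to measured bubble sizes, and computing a moment-ratio mean size of the kind used in drainage calculations, takes only a few lines. The radii below are synthetic stand-ins for the PFG-NMR results, and the Sauter-type ratio is one common choice, not necessarily the one the paper recommends:

```python
import numpy as np
from scipy.stats import weibull_min

# Hypothetical bubble radii (mm) standing in for the PFG-NMR measurements
rng = np.random.default_rng(8)
radii = weibull_min.rvs(2.2, scale=0.5, size=400, random_state=rng)

# Maximum-likelihood Weibull fit; floc=0 pins the location at zero size
shape, loc, scale = weibull_min.fit(radii, floc=0)
print(f"Weibull shape k = {shape:.2f}, scale = {scale:.2f} mm")

# Sauter-style moment ratio <r^3>/<r^2>, one candidate "drainage" mean size
r32 = np.mean(radii**3) / np.mean(radii**2)
print(f"moment-ratio mean bubble radius = {r32:.2f} mm")
```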

  11. Equity in the distribution of CT and MRI in China: a panel analysis

    PubMed Central

    2013-01-01

    Introduction China is facing a daunting challenge to health equity in the context of rapid economic development. This study adds to the literature by examining equity in the distribution of high-technology medical equipment, such as CT and MRI, in China. Methods A panel analysis was conducted with information about four study sites in 2006 and 2009. The four provincial-level study sites included Shanghai, Zhejiang, Shaanxi, and Hunan, representing different geographical, economic, and medical technology levels in China. A random sample of 71 hospitals was selected from the four sites. Data were collected through questionnaire surveys. Equity status was assessed in terms of CT and MRI numbers, characteristics of machine, and financing sources. The assessment was conducted at multiple levels, including international, provincial, city, and hospital level. In addition to comparison among the study sites, the sample was compared with OECD countries in CT and MRI distributions. Results China had lower numbers of CTs and MRIs per million population in 2009 than most of the selected OECD countries while the increases in its CT and MRI numbers from 2006 to 2009 were higher than most of the OECD countries. The equity status of CT distribution remained at low inequality level in both 2006 and 2009 while the equity status of MRI distribution improved from high inequality in 2006 to moderate inequality in 2009. Despite the equity improvement, the distributions of CTs and MRIs were significantly positively correlated with economic development level across all cities in the four study sites in either 2006 or 2009. Our analysis also revealed that Shanghai, the study site with the highest level of economic development, had more advanced CT and MRI machine, more imported CTs and MRIs, and higher government subsidies on these two types of equipment. Conclusions The number of CTs and MRIs increased considerably in China from 2006 to 2009. The equity status of CTs was better than that of MRIs although the equity status in MRI distribution got improved from 2006 to 2009. Still considerable inequality exists in terms of characteristics and financing of CTs and MRIs. PMID:23742755

  12. Characterizing the distribution of an endangered salmonid using environmental DNA analysis

    USGS Publications Warehouse

    Laramie, Matthew B.; Pilliod, David S.; Goldberg, Caren S.

    2015-01-01

    Determining species distributions accurately is crucial to developing conservation and management strategies for imperiled species, but a challenging task for small populations. We evaluated the efficacy of environmental DNA (eDNA) analysis for improving detection and thus potentially refining the known distribution of Chinook salmon (Oncorhynchus tshawytscha) in the Methow and Okanogan Subbasins of the Upper Columbia River, which span the border between Washington, USA and British Columbia, Canada. We developed an assay to target a 90 base pair sequence of Chinook DNA and used quantitative polymerase chain reaction (qPCR) to quantify the amount of Chinook eDNA in triplicate 1-L water samples collected at 48 stream locations in June and again in August 2012. The overall probability of detecting Chinook with our eDNA method in areas within the known distribution was 0.77 (±0.05 SE). Detection probability was lower in June (0.62, ±0.08 SE) during high flows and at the beginning of spring Chinook migration than during base flows in August (0.93, ±0.04 SE). In the Methow subbasin, mean eDNA concentration was higher in August compared to June, especially in smaller tributaries, probably resulting from the arrival of spring Chinook adults, reduced discharge, or both. Chinook eDNA concentrations did not appear to change in the Okanogan subbasin from June to August. Contrary to our expectations about downstream eDNA accumulation, Chinook eDNA did not decrease in concentration in upstream reaches (0–120 km). Further examination of factors influencing spatial distribution of eDNA in lotic systems may allow for greater inference of local population densities along stream networks or watersheds. These results demonstrate the potential effectiveness of eDNA detection methods for determining landscape-level distribution of anadromous salmonids in large river systems.

  13. Statistical analysis of factors affecting landslide distribution in the new Madrid seismic zone, Tennessee and Kentucky

    USGS Publications Warehouse

    Jibson, R.W.; Keefer, D.K.

    1989-01-01

    More than 220 large landslides along the bluffs bordering the Mississippi alluvial plain between Cairo, Ill., and Memphis, Tenn., are analyzed by discriminant analysis and multiple linear regression to determine the relative effects of slope height and steepness, stratigraphic variation, slope aspect, and proximity to the hypocenters of the 1811-12 New Madrid, Mo., earthquakes on the distribution of these landslides. Three types of landslides are analyzed: (1) old, coherent slumps and block slides, which have eroded and revegetated features and no active analogs in the area; (2) old earth flows, which are also eroded and revegetated; and (3) young rotational slumps, which are present only along near-river bluffs, and which are the only young, active landslides in the area. Discriminant analysis shows that only one characteristic differs significantly between bluffs with and without young rotational slumps: failed bluffs tend to have sand and clay at their base, which may render them more susceptible to fluvial erosion. Bluffs having old coherent slides are significantly higher, steeper, and closer to the hypocenters of the 1811-12 earthquakes than bluffs without these slides. Bluffs having old earth flows are likewise higher and closer to the earthquake hypocenters. Multiple regression analysis indicates that the distribution of young rotational slumps is affected most strongly by slope steepness: about one-third of the variation in the distribution is explained by variations in slope steepness. The distribution of old coherent slides and earth flows is affected most strongly by slope height, but the proximity to the hypocenters of the 1811-12 earthquakes also significantly affects the distribution. The results of the statistical analyses indicate that the only recently active landsliding in the area is along actively eroding river banks, where rotational slumps formed as bluffs are undercut by the river. The analyses further indicate that the old coherent slides and earth flows in the area are spatially related to the 1811-12 earthquake hypocenters and were thus probably triggered by those earthquakes. These results are consistent with findings of other recent investigations of landslides in the area that presented field, historical, and analytical evidence to demonstrate that old landslides in the area formed during the 1811-12 New Madrid earthquakes. Results of the multiple linear regression can also be used to approximate the relative susceptibility of the bluffs in the study area to seismically induced landsliding. © 1989.

  14. Estimating the probability distribution of the incubation period for rabies using data from the 1948-1954 rabies epidemic in Tokyo.

    PubMed

    Tojinbara, Kageaki; Sugiura, K; Yamada, A; Kakitani, I; Kwan, N C L; Sugiura, K

    2016-01-01

    Data of 98 rabies cases in dogs and cats from the 1948-1954 rabies epidemic in Tokyo were used to estimate the probability distribution of the incubation period. Lognormal, gamma and Weibull distributions were used to model the incubation period. The maximum likelihood estimates of the mean incubation period ranged from 27.30 to 28.56 days according to different distributions. The mean incubation period was shortest with the lognormal distribution (27.30 days), and longest with the Weibull distribution (28.56 days). The best distribution in terms of AIC value was the lognormal distribution with mean value of 27.30 (95% CI: 23.46-31.55) days and standard deviation of 20.20 (15.27-26.31) days. There were no significant differences between the incubation periods for dogs and cats, or between those for male and female dogs. PMID:26688561
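
    The model-selection step described here (maximum likelihood fits compared by AIC) can be reproduced with standard tools. A minimal Python sketch using scipy.stats, with synthetic incubation periods standing in for the 98 Tokyo cases:

      # Sketch: fit lognormal, gamma, and Weibull distributions by maximum
      # likelihood and compare them by AIC, as in the study. The data below
      # are synthetic stand-ins, not the Tokyo epidemic records.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      days = rng.lognormal(mean=3.1, sigma=0.65, size=98)  # synthetic periods

      candidates = {
          "lognormal": stats.lognorm,
          "gamma": stats.gamma,
          "weibull": stats.weibull_min,
      }
      for name, dist in candidates.items():
          params = dist.fit(days, floc=0)      # fix location at zero
          loglik = np.sum(dist.logpdf(days, *params))
          k = len(params) - 1                  # free parameters (loc fixed)
          aic = 2 * k - 2 * loglik
          print(f"{name:9s} AIC = {aic:.1f}")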

  15. Multiobjective sensitivity analysis and optimization of a distributed hydrologic model MOBIDIC

    NASA Astrophysics Data System (ADS)

    Yang, J.; Castelli, F.; Chen, Y.

    2014-03-01

    Calibration of distributed hydrologic models usually involves dealing with a large number of distributed parameters and with optimization problems whose multiple objectives naturally conflict. This study presents a multiobjective sensitivity and optimization approach to these problems for the distributed hydrologic model MOBIDIC, combining two sensitivity analysis techniques (the Morris method and the State Dependent Parameter method) with the multiobjective optimization (MOO) algorithm ϵ-NSGAII. The approach was used to calibrate MOBIDIC for the Davidson watershed, North Carolina, with three objective functions: the standardized root mean square error of log-transformed discharge, a water balance index, and the mean absolute error of the log-transformed flow duration curve. Results were compared with those of single-objective optimization (SOO) using the Nelder-Mead simplex algorithm traditionally employed in MOBIDIC, with the objective function taken as the Euclidean norm of the three objectives. Results show: (1) the two sensitivity analysis techniques are effective and efficient for identifying sensitive processes and insensitive parameters: surface runoff and evaporation are highly sensitive with respect to all three objective functions, whereas groundwater recession and soil hydraulic conductivity are insensitive and were excluded from the optimization; (2) both MOO and SOO lead to acceptable simulations, e.g., for MOO the average Nash-Sutcliffe efficiency is 0.75 in the calibration period and 0.70 in the validation period; (3) evaporation and surface runoff are of similar importance to the watershed water balance, while the contribution of baseflow is negligible; (4) compared with SOO, which depends on the initial starting location, MOO provides more insight into parameter sensitivity and into the conflicting character of the objective functions. Multiobjective sensitivity analysis and optimization thus provide an alternative route for future MOBIDIC modelling.
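
    The Morris method screens parameters by averaging one-at-a-time "elementary effects" across random points in parameter space. Below is a simplified Python sketch of that idea on a toy response function; this is not the MOBIDIC code, and the radial one-at-a-time design used here is a simplification of the full Morris trajectory design:

      # Simplified Morris-style elementary-effects screening on a toy model.
      # Parameter names and the response function are illustrative only.
      import numpy as np

      def model(x):
          # Toy "hydrologic" response: sensitive to x0 and x1, barely to x2.
          return x[0] ** 2 + 2.0 * x[1] + 0.01 * x[2]

      def morris_screening(model, n_params, n_points=50, delta=0.1, seed=0):
          """Mean absolute elementary effect per parameter on the unit cube."""
          rng = np.random.default_rng(seed)
          effects = np.zeros((n_points, n_params))
          for t in range(n_points):
              x = rng.uniform(0, 1 - delta, n_params)
              y0 = model(x)
              for i in range(n_params):
                  x_step = x.copy()
                  x_step[i] += delta          # perturb one factor at a time
                  effects[t, i] = (model(x_step) - y0) / delta
          return np.abs(effects).mean(axis=0)  # the mu* screening measure

      mu_star = morris_screening(model, n_params=3)
      print("mu* (sensitivity ranking):", np.round(mu_star, 3))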

  16. A multimedia environmental model of chemical distribution: fate, transport, and uncertainty analysis.

    PubMed

    Luo, Yuzhou; Yang, Xiusheng

    2007-01-01

    This paper presented a framework for analyzing chemical concentrations in the environment and for evaluating variance propagation within the model. The framework was illustrated through a case study of the selected organic compounds benzo[a]pyrene (BaP) and hexachlorobenzene (HCB) in the Great Lakes region. A multimedia environmental fate model was applied to perform stochastic simulations of chemical concentrations in various media. Both uncertainty in chemical properties and variability in hydrometeorological parameters were included in the Monte Carlo simulation, resulting in a distribution of concentrations in each medium. Parameters for compartmental dimensions, densities, emissions, and background concentrations were held constant in this study. The predicted concentrations in air, surface water, and sediment were compared with reported data for validation purposes. Based on rank correlations, a sensitivity analysis was conducted to determine the influence of individual input parameters on the output variance for the concentration in each environmental medium and for the basin-wide total mass inventory. Model validation indicated that the predictions were in reasonable agreement with the spatial distribution patterns, among the five lake basins, of data reported in the literature. For the chemical and environmental parameters given in this study, parameters associated with air-ground partitioning (such as moisture in surface soil, vapor pressure, and deposition velocity) and with chemical distribution in the soil solid phase (such as the organic carbon partition coefficient and the organic carbon content of root-zone soil) were targeted for reducing the uncertainty in the basin-wide mass inventory. The sensitivity analysis also indicated that the model's sensitivity to an input parameter may depend on the magnitudes of the other input parameters defined in the simulation scenario. It was therefore suggested that uncertainty and sensitivity analyses for environmental fate models be conducted after the model output has been validated against an appropriate set of input parameters. PMID:17095045
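
    The workflow of Monte Carlo propagation followed by rank-correlation sensitivity analysis can be sketched compactly. In the Python fragment below, the fate model is a toy response, and the three uncertain inputs are illustrative stand-ins for the parameters named above:

      # Sketch: Monte Carlo propagation of uncertain inputs through a toy
      # fate model, then Spearman rank correlations between each input and
      # the output. Distributions and the model are invented for illustration.
      import numpy as np
      from scipy.stats import spearmanr

      rng = np.random.default_rng(2)
      n = 5000
      koc = rng.lognormal(10.0, 0.5, n)      # organic carbon partition coeff.
      vp = rng.lognormal(-12.0, 0.8, n)      # vapor pressure
      moisture = rng.uniform(0.05, 0.35, n)  # surface-soil moisture

      # Toy response standing in for a predicted concentration.
      conc = 1e3 * koc ** 0.4 / (vp ** 0.2 * (1 + moisture))

      for name, x in [("Koc", koc), ("vapor pressure", vp), ("soil moisture", moisture)]:
          rho, _ = spearmanr(x, conc)
          print(f"{name:15s} rank correlation = {rho:+.2f}")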

  17. Flow distribution analysis on the cooling tube network of ITER thermal shield

    NASA Astrophysics Data System (ADS)

    Nam, Kwanwoo; Chung, Wooho; Noh, Chang Hyun; Kang, Dong Kwon; Kang, Kyoung-O.; Ahn, Hee Jae; Lee, Hyeon Gon

    2014-01-01

    The thermal shield (TS) is to be installed between the vacuum vessel or the cryostat and the magnets in the ITER tokamak to reduce the thermal radiation load on the magnets operating at 4.2 K. The TS is cooled by pressurized helium gas at an inlet temperature of 80 K. The cooling tubes are welded onto the TS panel surfaces, and the resulting flow network of TS cooling tubes is complex. The flow rate in each panel should match the thermal design value for effective radiation shielding. This paper presents a one-dimensional analysis of the flow distribution in the cooling tube network of the ITER TS. The hydraulic network of cooling tubes is modeled by an electrical analogy. Only the cooling tube on the TS surface and its connecting pipe from the manifold are considered in the analysis model. Accounting for the friction factor and local losses in the cooling tube, the hydraulic resistance is expressed as a linear function of the mass flow rate. Sub-circuits in the TS are analyzed separately because each circuit is independently controlled by its own valve. The analysis finds that the flow rates in some panels fall short of the design values. To improve the flow distribution, two design modifications are proposed. The first is to connect the tubes of adjacent panels, which increases the resistance of the tube on the panel where the flow rate is excessive. The second is to install an orifice at the exit of the tube routing where the flow rate is to be reduced. Analysis of these design suggestions shows that the flow maldistribution improves significantly.
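
    The electrical analogy reduces each sub-circuit to parallel resistances between manifold nodes; with the hydraulic resistance linear in mass flow rate, each branch pressure drop is quadratic in flow, and the split for a fixed total flow can be found by bisection on the manifold pressure drop. A minimal Python sketch with invented coefficients:

      # Sketch of the electrical-analogy flow split: parallel branches share
      # a common inlet/outlet manifold, each with resistance R = a + b*m
      # (linear in mass flow rate m), so dp = (a + b*m) * m per branch.
      # Coefficients are invented for illustration, not ITER TS values.
      import numpy as np

      a = np.array([2.0, 3.5, 1.5, 4.0])  # friction-dominated coefficients
      b = np.array([0.8, 0.5, 1.2, 0.6])  # flow-dependent loss coefficients
      m_total = 1.0                       # total mass flow rate, kg/s

      def branch_flow(dp):
          """Solve (a + b*m)*m = dp for m >= 0 in each branch."""
          return (-a + np.sqrt(a ** 2 + 4.0 * b * dp)) / (2.0 * b)

      lo, hi = 0.0, 100.0
      for _ in range(60):                 # bisection on manifold pressure drop
          dp = 0.5 * (lo + hi)
          if branch_flow(dp).sum() > m_total:
              hi = dp
          else:
              lo = dp

      m = branch_flow(dp)
      print("branch flows:", np.round(m, 3), " sum:", round(m.sum(), 3))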

  19. Knowledge-based assistance for science visualization and analysis using large distributed databases

    NASA Technical Reports Server (NTRS)

    Handley, Thomas H., Jr.; Jacobson, Allan S.; Doyle, Richard J.; Collins, Donald J.

    1993-01-01

    Within this decade, the growth in complexity of exploratory data analysis and the sheer volume of space data require new and innovative approaches to support science investigators in achieving their research objectives. To date, there have been numerous efforts addressing the individual issues involved in inter-disciplinary, multi-instrument investigations. However, while successful on a small scale, these efforts have not proven to be open and scalable. This proposal addresses four areas of significant need: scientific visualization and analysis; science data management; interactions in a distributed, heterogeneous environment; and knowledge-based assistance for these functions. The fundamental innovation embedded within this proposal is the integration of three automation technologies, namely knowledge-based expert systems, science visualization, and science data management. This integration is based on a concept called the DataHub. With the DataHub concept, NASA will be able to apply a more complete solution to all nodes of a distributed system. Both computation nodes and interactive nodes will be able to use the data services (access, retrieval, update, etc.) of a distributed, interdisciplinary information system effectively, efficiently, and in a uniform and standard way. This will allow science investigators to concentrate on their scientific endeavors, rather than involve themselves in the intricate technical details of the systems and tools required to accomplish their work. Thus, science investigators need not be programmers. The emphasis will be on the definition and prototyping of system elements with sufficient detail to enable data analysis and interpretation leading to publishable scientific results. In addition, the proposed work includes all the required end-to-end components and interfaces to demonstrate the completed concept.

  20. Knowledge-based assistance for science visualization and analysis using large distributed databases

    NASA Technical Reports Server (NTRS)

    Handley, Thomas H., Jr.; Jacobson, Allan S.; Doyle, Richard J.; Collins, Donald J.

    1992-01-01

    Within this decade, the growth in complexity of exploratory data analysis and the sheer volume of space data require new and innovative approaches to support science investigators in achieving their research objectives. To date, there have been numerous efforts addressing the individual issues involved in inter-disciplinary, multi-instrument investigations. However, while successful on a small scale, these efforts have not proven to be open and scalable. This proposal addresses four areas of significant need: scientific visualization and analysis; science data management; interactions in a distributed, heterogeneous environment; and knowledge-based assistance for these functions. The fundamental innovation embedded within this proposal is the integration of three automation technologies, namely knowledge-based expert systems, science visualization, and science data management. This integration is based on a concept called the Data Hub. With the Data Hub concept, NASA will be able to apply a more complete solution to all nodes of a distributed system. Both computation nodes and interactive nodes will be able to use the data services (access, retrieval, update, etc.) of a distributed, interdisciplinary information system effectively, efficiently, and in a uniform and standard way. This will allow science investigators to concentrate on their scientific endeavors, rather than involve themselves in the intricate technical details of the systems and tools required to accomplish their work. Thus, science investigators need not be programmers. The emphasis will be on the definition and prototyping of system elements with sufficient detail to enable data analysis and interpretation leading to publishable scientific results. In addition, the proposed work includes all the required end-to-end components and interfaces to demonstrate the completed concept.