Science.gov

Sample records for weibull distribution analysis

  1. q-exponential, Weibull, and q-Weibull distributions: an empirical analysis

    NASA Astrophysics Data System (ADS)

    Picoli, S.; Mendes, R. S.; Malacarne, L. C.

    2003-06-01

    In a comparative study, the q-exponential and Weibull distributions are employed to investigate frequency distributions of basketball baskets, cyclone victims, brand-name drugs by retail sales, and highway length. In order to analyze the intermediate cases, a distribution that interpolates between the q-exponential and Weibull ones, the q-Weibull distribution, is introduced. It is verified that the basketball baskets distribution is well described by a q-exponential, whereas the cyclone victims and brand-name drugs by retail sales are better fitted by a Weibull distribution. On the other hand, for highway length neither the q-exponential nor the Weibull distribution gives a satisfactory fit, making it necessary to employ the q-Weibull distribution. Furthermore, the introduction of this interpolating distribution sheds light on the controversy between the stretched exponential and the inverse power law (q-exponential with q > 1).
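
    As a reading aid, here is a minimal sketch of the q-Weibull family in one common Tsallis-statistics convention (the authors' exact parametrization may differ); it checks numerically that the q → 1 limit recovers the ordinary Weibull density. All parameter values are hypothetical.

    ```python
    import numpy as np
    from scipy.stats import weibull_min

    def q_exp(x, q):
        """q-exponential: reduces to exp(x) as q -> 1; the max() enforces the cutoff for q < 1."""
        if abs(q - 1.0) < 1e-12:
            return np.exp(x)
        return np.maximum(1.0 + (1.0 - q) * x, 0.0) ** (1.0 / (1.0 - q))

    def q_weibull_pdf(x, q, beta, eta):
        """q-Weibull pdf in one common convention (valid for 1 <= q < 2)."""
        u = (x / eta) ** beta
        return (2.0 - q) * (beta / eta) * (x / eta) ** (beta - 1.0) * q_exp(-u, q)

    x = np.linspace(0.01, 5, 200)
    # As q -> 1 the q-Weibull collapses onto the ordinary Weibull:
    np.testing.assert_allclose(
        q_weibull_pdf(x, 1.0, 1.5, 1.0),
        weibull_min.pdf(x, 1.5, scale=1.0),
        rtol=1e-10,
    )
    ```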

  2. Using Weibull Distribution Analysis to Evaluate ALARA Performance

    SciTech Connect

    E. L. Frome, J. P. Watkins, and D. A. Hagemeyer

    2009-10-01

    As Low as Reasonably Achievable (ALARA) is the underlying principle for protecting nuclear workers from potential health outcomes related to occupational radiation exposure. Radiation protection performance is currently evaluated by measures such as collective dose and average measurable dose, which do not indicate ALARA performance. The purpose of this work is to show how statistical modeling of individual doses using the Weibull distribution can provide objective supplemental performance indicators for comparing ALARA implementation among sites and for insights into ALARA practices within a site. Maximum likelihood methods were employed to estimate the Weibull shape and scale parameters used for performance indicators. The shape parameter reflects the effectiveness of maximizing the number of workers receiving lower doses and is represented as the slope of the fitted line on a Weibull probability plot. Additional performance indicators derived from the model parameters include the 99th percentile and the exceedance fraction. When grouping sites by collective total effective dose equivalent (TEDE) and ranking by 99th percentile with confidence intervals, differences in performance among sites can be readily identified. Applying this methodology will enable more efficient and complete evaluation of the effectiveness of ALARA implementation.
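
    As an illustration of the indicators described above, the sketch below fits a two-parameter Weibull to dose data by maximum likelihood and derives the 99th percentile and an exceedance fraction. This is not the authors' code; the dose sample and the 5 mSv limit are hypothetical.

    ```python
    import numpy as np
    from scipy.stats import weibull_min

    rng = np.random.default_rng(0)
    doses = rng.weibull(0.9, size=500) * 1.2   # hypothetical annual doses (mSv)

    # Two-parameter MLE: fix the location at zero so only shape and scale are fit.
    shape, loc, scale = weibull_min.fit(doses, floc=0)

    p99 = weibull_min.ppf(0.99, shape, scale=scale)       # 99th percentile
    exceedance = weibull_min.sf(5.0, shape, scale=scale)  # fraction above a 5 mSv limit
    print(f"shape={shape:.2f} scale={scale:.2f} p99={p99:.2f} exceed={exceedance:.4f}")
    ```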

  3. Reliability analysis of structural ceramic components using a three-parameter Weibull distribution

    NASA Technical Reports Server (NTRS)

    Duffy, Stephen F.; Powers, Lynn M.; Starlinger, Alois

    1992-01-01

    Described here are nonlinear regression estimators for the three-parameter Weibull distribution. Issues relating to the bias and invariance associated with these estimators are examined numerically using Monte Carlo simulation methods. The estimators were used to extract parameters from sintered silicon nitride failure data. A reliability analysis was performed on a turbopump blade utilizing the three-parameter Weibull distribution and the estimates from the sintered silicon nitride data.
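
    A quick way to experiment with a three-parameter fit is scipy's generic MLE, sketched below with hypothetical strength data. Note this is not the authors' nonlinear regression estimator, and threshold-parameter MLE can be numerically delicate, which is part of the motivation for regression-based estimators.

    ```python
    import numpy as np
    from scipy.stats import weibull_min

    rng = np.random.default_rng(1)
    # Hypothetical fracture strengths (MPa) with a nonzero threshold near 200 MPa
    strengths = 200 + rng.weibull(6.0, size=60) * 150

    # Leaving `loc` free turns the two-parameter fit into a three-parameter one;
    # `loc` plays the role of the threshold (location) parameter.
    shape, threshold, scale = weibull_min.fit(strengths)
    print(f"m={shape:.2f}  threshold={threshold:.1f} MPa  scale={scale:.1f} MPa")
    ```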

  4. Reliability analysis of structural ceramic components using a three-parameter Weibull distribution

    NASA Technical Reports Server (NTRS)

    Duffy, Stephen F.; Powers, Lynn M.; Starlinger, Alois

    1992-01-01

    Described here are nonlinear regression estimators for the three-parameter Weibull distribution. Issues relating to the bias and invariance associated with these estimators are examined numerically using Monte Carlo simulation methods. The estimators were used to extract parameters from sintered silicon nitride failure data. A reliability analysis was performed on a turbopump blade utilizing the three-parameter Weibull distribution and the estimates from the sintered silicon nitride data.

  5. Statistical analysis of censored motion sickness latency data using the two-parameter Weibull distribution

    NASA Technical Reports Server (NTRS)

    Park, Won J.; Crampton, George H.

    1988-01-01

    The suitability of the two-parameter Weibull distribution for describing highly censored cat motion sickness latency data was evaluated by estimating the parameters with the maximum likelihood method and testing for goodness of fit with the Kolmogorov-Smirnov statistic. A procedure for determining confidence levels and testing for significance of the difference between Weibull parameters is described. Computer programs for these procedures may be obtained from an archival source.
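
    The likelihood construction for right-censored data, as used in studies like this one, can be sketched as follows: failures contribute the density and censored observations the survival function. The latency values and the 30-minute censoring time below are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import weibull_min

    def neg_log_lik(params, t, failed):
        """Weibull log-likelihood with right censoring: failures contribute the
        pdf, censored observations the survival function."""
        shape, scale = params
        if shape <= 0 or scale <= 0:
            return np.inf
        ll = np.sum(weibull_min.logpdf(t[failed], shape, scale=scale))
        ll += np.sum(weibull_min.logsf(t[~failed], shape, scale=scale))
        return -ll

    # Hypothetical latencies (min); False marks sessions censored at 30 min.
    t = np.array([4.2, 7.9, 11.3, 15.0, 22.5, 30.0, 30.0, 30.0])
    failed = np.array([True, True, True, True, True, False, False, False])

    res = minimize(neg_log_lik, x0=[1.0, np.median(t)], args=(t, failed),
                   method="Nelder-Mead")
    print("MLE shape, scale:", res.x)
    ```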

  6. Competing risk models in reliability systems, a weibull distribution model with bayesian analysis approach

    NASA Astrophysics Data System (ADS)

    Iskandar, Ismed; Satria Gondokaryono, Yudi

    2016-02-01

    In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. The weakness of most reliability theories is that the systems are described and explained as simply functioning or failed. In many real situations, the failures may be from many causes depending upon the age and the environment of the system and its components. Another problem in reliability theory is that of estimating the parameters of the assumed failure models. The estimation may be based on data collected over censored or uncensored life tests. In many reliability problems, the failure data are simply quantitatively inadequate, especially in engineering design and maintenance systems. Bayesian analyses are more beneficial than classical ones in such cases. Bayesian estimation allows us to combine past knowledge or experience, in the form of an a priori distribution, with life test data to make inferences about the parameter of interest. In this paper, we have investigated the application of Bayesian estimation to competing risk systems. The cases are limited to models with independent causes of failure, using the Weibull distribution as our model. A simulation is conducted for this distribution with the objectives of verifying the models and the estimators and investigating the performance of the estimators for varying sample sizes. The simulation data are analyzed using both Bayesian and maximum likelihood analyses. The simulation results show that a change in the true value of one parameter relative to another changes the standard deviation in the opposite direction. Given perfect information on the prior distribution, the Bayesian estimation methods are better than those of maximum likelihood. The sensitivity analyses show some amount of sensitivity to shifts of the prior locations. They also show the robustness of the Bayesian analysis within the range

  7. Software for Statistical Analysis of Weibull Distributions with Application to Gear Fatigue Data: User Manual with Verification

    NASA Technical Reports Server (NTRS)

    Krantz, Timothy L.

    2002-01-01

    The Weibull distribution has been widely adopted for the statistical description and inference of fatigue data. This document provides user instructions, examples, and verification for software to analyze gear fatigue test data. The software was developed presuming the data are adequately modeled using a two-parameter Weibull distribution. The calculations are based on likelihood methods, and the approach taken is valid for data that include type I censoring. The software was verified by reproducing results published by others.

  8. Software for Statistical Analysis of Weibull Distributions with Application to Gear Fatigue Data: User Manual with Verification

    NASA Technical Reports Server (NTRS)

    Krantz, Timothy L.

    2002-01-01

    The Weibull distribution has been widely adopted for the statistical description and inference of fatigue data. This document provides user instructions, examples, and verification for software to analyze gear fatigue test data. The software was developed presuming the data are adequately modeled using a two-parameter Weibull distribution. The calculations are based on likelihood methods, and the approach taken is valid for data that include type I censoring. The software was verified by reproducing results published by others.

  9. Packing fraction of particles with a Weibull size distribution

    NASA Astrophysics Data System (ADS)

    Brouwers, H. J. H.

    2016-07-01

    This paper addresses the void fraction of polydisperse particles with a Weibull (or Rosin-Rammler) size distribution. It is demonstrated that the governing parameters of this distribution can be uniquely related to those of the lognormal distribution. Hence, an existing closed-form expression that predicts the void fraction of particles with a lognormal size distribution can be transformed into an expression for Weibull distributions. Both expressions contain the contraction coefficient β. Like the monosized void fraction φ1, it is a physical parameter that depends only on the particles' shape and their state of compaction. Based on a consideration of the scaled binary void contraction, a linear relation for (1 − φ1)β as a function of φ1 is proposed, with proportionality constant B depending on the state of compaction only. This is validated using computational and experimental packing data concerning random close and random loose packing arrangements. Finally, using this β, the closed-form analytical expression governing the void fraction of Weibull distributions is thoroughly compared with empirical data reported in the literature, and good agreement is found. Furthermore, the present analysis yields an algebraic equation relating the void fraction of monosized particles at different compaction states. This expression appears to be in good agreement with a broad collection of random close and random loose packing data.
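
    The paper derives a specific mapping between the Weibull and lognormal parameters; a generic stand-in is to match mean and variance, sketched below. This moment match is not necessarily the authors' relation, and the shape and scale values are hypothetical.

    ```python
    import math

    def weibull_to_lognormal(k, lam):
        """Match a lognormal to a Weibull(shape k, scale lam) by mean and variance.
        (A generic moment match; the paper's unique mapping may differ.)"""
        mean = lam * math.gamma(1 + 1 / k)
        var = lam ** 2 * (math.gamma(1 + 2 / k) - math.gamma(1 + 1 / k) ** 2)
        sigma2 = math.log(1 + var / mean ** 2)
        mu = math.log(mean) - sigma2 / 2
        return mu, math.sqrt(sigma2)

    mu, sigma = weibull_to_lognormal(2.0, 1.0)
    print(f"lognormal mu={mu:.3f}, sigma={sigma:.3f}")
    ```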

  10. Packing fraction of particles with a Weibull size distribution.

    PubMed

    Brouwers, H J H

    2016-07-01

    This paper addresses the void fraction of polydisperse particles with a Weibull (or Rosin-Rammler) size distribution. It is demonstrated that the governing parameters of this distribution can be uniquely related to those of the lognormal distribution. Hence, an existing closed-form expression that predicts the void fraction of particles with a lognormal size distribution can be transformed into an expression for Weibull distributions. Both expressions contain the contraction coefficient β. Likewise the monosized void fraction φ_{1}, it is a physical parameter which depends on the particles' shape and their state of compaction only. Based on a consideration of the scaled binary void contraction, a linear relation for (1-φ_{1})β as function of φ_{1} is proposed, with proportionality constant B, depending on the state of compaction only. This is validated using computational and experimental packing data concerning random close and random loose packing arrangements. Finally, using this β, the closed-form analytical expression governing the void fraction of Weibull distributions is thoroughly compared with empirical data reported in the literature, and good agreement is found. Furthermore, the present analysis yields an algebraic equation relating the void fraction of monosized particles at different compaction states. This expression appears to be in good agreement with a broad collection of random close and random loose packing data. PMID:27575204

  11. Independent Orbiter Assessment (IOA): Weibull analysis report

    NASA Technical Reports Server (NTRS)

    Raffaelli, Gary G.

    1987-01-01

    The Auxiliary Power Unit (APU) and Hydraulic Power Unit (HPU) Space Shuttle Subsystems were reviewed as candidates for demonstrating the Weibull analysis methodology. Three hardware components were identified as analysis candidates: the turbine wheel, the gearbox, and the gas generator. Detailed review of subsystem level wearout and failure history revealed the lack of actual component failure data. In addition, component wearout data were not readily available or would require a separate data accumulation effort by the vendor. Without adequate component history data being available, the Weibull analysis methodology application to the APU and HPU subsystem group was terminated.

  12. Weibull model of multiplicity distribution in hadron-hadron collisions

    NASA Astrophysics Data System (ADS)

    Dash, Sadhana; Nandi, Basanta K.; Sett, Priyanka

    2016-06-01

    We introduce the use of the Weibull distribution as a simple parametrization of charged particle multiplicities in hadron-hadron collisions at all available energies, ranging from ISR energies to the most recent LHC energies. In statistics, the Weibull distribution has wide applicability in natural processes that involve fragmentation processes. This provides a natural connection to the available state-of-the-art models for multiparticle production in hadron-hadron collisions, which involve QCD parton fragmentation and hadronization. The Weibull distribution describes the multiplicity data at the most recent LHC energies better than the single negative binomial distribution.

  13. Program for Weibull Analysis of Fatigue Data

    NASA Technical Reports Server (NTRS)

    Krantz, Timothy L.

    2005-01-01

    A Fortran computer program has been written for performing statistical analyses of fatigue-test data that are assumed to be adequately represented by a two-parameter Weibull distribution. This program calculates the following: (1) Maximum-likelihood estimates of the Weibull distribution; (2) Data for contour plots of relative likelihood for two parameters; (3) Data for contour plots of joint confidence regions; (4) Data for the profile likelihood of the Weibull-distribution parameters; (5) Data for the profile likelihood of any percentile of the distribution; and (6) Likelihood-based confidence intervals for parameters and/or percentiles of the distribution. The program can account for tests that are suspended without failure (the statistical term for such suspension of tests is "censoring"). The analytical approach followed in this program for the software is valid for type-I censoring, which is the removal of unfailed units at pre-specified times. Confidence regions and intervals are calculated by use of the likelihood-ratio method.
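
    Item (6), likelihood-ratio confidence intervals, can be sketched for the shape parameter of a complete (uncensored) sample: profile out the scale parameter in closed form and cut the profile log-likelihood at χ²(0.95, 1)/2 below its maximum. The data here are synthetic, and this is only a sketch of the general technique, not the Fortran program's algorithm.

    ```python
    import numpy as np
    from scipy.optimize import brentq, minimize_scalar
    from scipy.stats import weibull_min, chi2

    rng = np.random.default_rng(2)
    data = rng.weibull(2.0, size=40) * 100  # hypothetical complete (uncensored) lives

    def profile_loglik(shape):
        """Profile out the scale for a fixed shape (the scale MLE is closed form)."""
        scale = (np.mean(data ** shape)) ** (1.0 / shape)
        return np.sum(weibull_min.logpdf(data, shape, scale=scale))

    shape_hat = minimize_scalar(lambda c: -profile_loglik(c), bounds=(0.2, 20),
                                method="bounded").x
    cutoff = profile_loglik(shape_hat) - chi2.ppf(0.95, df=1) / 2

    lo = brentq(lambda c: profile_loglik(c) - cutoff, 0.2, shape_hat)
    hi = brentq(lambda c: profile_loglik(c) - cutoff, shape_hat, 20)
    print(f"shape MLE={shape_hat:.2f}, 95% LR interval=({lo:.2f}, {hi:.2f})")
    ```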

  14. Modeling observed animal performance using the Weibull distribution.

    PubMed

    Hagey, Travis J; Puthoff, Jonathan B; Crandell, Kristen E; Autumn, Kellar; Harmon, Luke J

    2016-06-01

    To understand how organisms adapt, researchers must link performance and microhabitat. However, measuring performance, especially maximum performance, can sometimes be difficult. Here, we describe an improvement over previous techniques that only consider the largest observed values as maxima. Instead, we model expected performance observations via the Weibull distribution, a statistical approach that reduces the impact of rare observations. After calculating group-level weighted averages and variances by treating individuals separately to reduce pseudoreplication, our approach resulted in high statistical power despite small sample sizes. We fitted lizard adhesive performance and bite force data to the Weibull distribution and found that it closely estimated maximum performance in both cases, illustrating the generality of our approach. Using the Weibull distribution to estimate observed performance greatly improves upon previous techniques by facilitating power analyses and error estimations around robustly estimated maximum values. PMID:26994180

  15. Development of a Weibull posterior distribution by combining a Weibull prior with an actual failure distribution using Bayesian inference

    NASA Technical Reports Server (NTRS)

    Giuntini, Michael E.; Giuntini, Ronald E.

    1991-01-01

    A Bayesian inference process for system logistical planning is presented which provides a method for incorporating actual failures with prediction data to yield ongoing, improving reliability estimates. The process uses the Weibull distribution, and provides a means for examining and updating logistical and maintenance support needs.
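
    A minimal sketch of the general idea (a prior on the Weibull parameters updated by observed failures) is a grid-based posterior. The prior form, its hyperparameters, and the failure times below are all hypothetical and not taken from the report.

    ```python
    import numpy as np
    from scipy.stats import weibull_min

    # Hypothetical failure times observed in the field
    times = np.array([120.0, 340.0, 510.0, 760.0, 1100.0])

    # Grid over shape (beta) and scale (eta); a vague prior centered on a
    # prediction-based guess (beta ~ 1.5, eta ~ 800) stands in for the prior.
    betas = np.linspace(0.5, 4.0, 120)
    etas = np.linspace(100, 2500, 160)
    B, E = np.meshgrid(betas, etas, indexing="ij")

    log_prior = -0.5 * ((B - 1.5) / 1.0) ** 2 - 0.5 * ((E - 800) / 500) ** 2
    log_lik = sum(weibull_min.logpdf(t, B, scale=E) for t in times)
    log_post = log_prior + log_lik
    post = np.exp(log_post - log_post.max())
    post /= post.sum()

    # Posterior means as updated point estimates
    print("E[beta]:", (post * B).sum(), " E[eta]:", (post * E).sum())
    ```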

  16. The Weibull distribution applied to post and core failure.

    PubMed

    Huysmans, M C; Van Der Varst, P G; Peters, M C; Plasschaert, A J

    1992-07-01

    In this study, data on initial failure loads of direct post-and-core-restored premolar teeth were analyzed using the Weibull distribution. Restorations consisted of a prefabricated titanium alloy post and an amalgam, composite, or glass cermet core buildup in human upper premolar teeth. The specimens were subjected to compressive forces until failure at angles of 10, 45 and 90 degrees to their long axis. The two- and three-parameter Weibull distributions were compared for applicability to the failure load data. For estimation of the parameters of the two-parameter distribution, σ0 (reference stress) and m (Weibull modulus), linear regression was used. In this distribution, it is assumed that the third parameter, σu (cut-off stress), equals 0. The maximum likelihood (MLH) method was used to estimate all three parameters. It was found that the choice of distribution has a strong influence on the estimated values and that the three-parameter distribution best fits the failure loads in this study. Comparisons were made between the failure probability curves found by MLH estimation for the different core materials and loading angles. The results indicated that the influence of loading angle on the failure mechanism was stronger than that of core material. PMID:1291399

  17. Investigation of Weibull statistics in fracture analysis of cast aluminum

    NASA Technical Reports Server (NTRS)

    Holland, Frederic A., Jr.; Zaretsky, Erwin V.

    1989-01-01

    The fracture strengths of two large batches of A357-T6 cast aluminum coupon specimens were compared by using two-parameter Weibull analysis. The minimum number of these specimens necessary to find the fracture strength of the material was determined. The applicability of three-parameter Weibull analysis was also investigated. A design methodology based on the combination of elementary stress analysis and Weibull statistical analysis is advanced and applied to the design of a spherical pressure vessel shell. The results from this design methodology are compared with results from the applicable ASME pressure vessel code.

  18. A new generalization of Weibull distribution with application to a breast cancer data set

    PubMed Central

    Wahed, Abdus S.; Luong, The Minh; Jeong, Jong-Hyeon

    2011-01-01

    In this article, we propose a new generalization of the Weibull distribution, which incorporates the exponentiated Weibull distribution introduced by Mudholkar and Srivastava [1] as a special case. We refer to the new family of distributions as the beta-Weibull distribution. We investigate the potential usefulness of the beta-Weibull distribution for modeling censored survival data from biomedical studies. Several other generalizations of the standard two-parameter Weibull distribution are compared with regard to maximum likelihood inference of the cumulative incidence function, under the setting of competing risks. These Weibull-based parametric models are fit to a breast cancer dataset from the National Surgical Adjuvant Breast and Bowel Project (NSABP). In terms of statistical significance of the treatment effect and model adequacy, all generalized models lead to similar conclusions, suggesting that the beta-Weibull family is a reasonable candidate for modeling survival data. PMID:19424958

  19. Tensile strength of randomly perforated aluminum plates: Weibull distribution parameters

    NASA Astrophysics Data System (ADS)

    Klein, Claude A.

    2008-07-01

    Recently, Yanay and collaborators [J. Appl. Phys. 101, 104911 (2007)] addressed issues regarding the fracture strength of randomly perforated aluminum plates subjected to tensile loads. Based on comprehensive measurements and computational simulations, they formulate statistical predictions for the tensile strength dependence on the hole density but conclude that their data are inadequate for the purpose of deriving the strength distribution function. The primary purpose of this contribution is to demonstrate that, on dividing the totality of applicable data into seven "bins" of comparable population, the strength distribution of perforated plates of similar hole density obeys a conventional two-parameter Weibull model. Furthermore, on examining the fracture stresses as recorded in the vicinity of the percolation threshold, we find that the strength obeys the expression σ = σo(P − Pth)^β, with Pth ≃ 0.64 and β ≃ 0.4. In this light, and taking advantage of percolation theory, we formulate equations that specify how the two Weibull parameters (characteristic strength and shape factor) depend on the hole density. This enables us to express the failure probability as a function of the tensile stress over the entire range of hole densities, i.e., from P = 0.02 up to the percolation threshold.

  20. Kinetic Analysis of Isothermal Decomposition Process of Sodium Bicarbonate Using the Weibull Probability Function—Estimation of Density Distribution Functions of the Apparent Activation Energies

    NASA Astrophysics Data System (ADS)

    Janković, Bojan

    2009-10-01

    The decomposition process of sodium bicarbonate (NaHCO3) has been studied by thermogravimetry under isothermal conditions at four operating temperatures (380 K, 400 K, 420 K, and 440 K). It was found that the experimental integral and differential conversion curves at the different operating temperatures can be successfully described by the isothermal Weibull distribution function with a unique value of the shape parameter (β = 1.07). It was also established that the Weibull distribution parameters (β and η) are independent of the operating temperature. Using the integral and differential (Friedman) isoconversional methods, in the conversion (α) range of 0.20 ≤ α ≤ 0.80, the apparent activation energy (Ea) value was approximately constant (Ea,int = 95.2 kJ mol⁻¹ and Ea,diff = 96.6 kJ mol⁻¹, respectively). The values of Ea calculated by both isoconversional methods are in good agreement with the value of Ea evaluated from the Arrhenius equation (94.3 kJ mol⁻¹), which was expressed through the scale distribution parameter (η). The Málek isothermal procedure was used to estimate the kinetic model for the investigated decomposition process. It was found that the two-parameter Šesták-Berggren (SB) autocatalytic model best describes the NaHCO3 decomposition process, with the conversion function f(α) = α^0.18(1 − α)^1.19. It was also concluded that the calculated density distribution functions of the apparent activation energies (ddf(Ea)'s) do not depend on the operating temperature and exhibit highly symmetrical behavior (shape factor = 1.00). The obtained isothermal decomposition results were compared with corresponding results for the nonisothermal decomposition process of NaHCO3.
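
    The step from the scale parameter η(T) to an activation energy uses the Arrhenius relation k = 1/η, i.e. ln(1/η) = ln A − Ea/(RT). The sketch below generates η values consistent with an assumed Ea simply to illustrate the regression step; the numbers are illustrative, not the paper's data.

    ```python
    import numpy as np

    R = 8.314  # J mol^-1 K^-1
    T = np.array([380.0, 400.0, 420.0, 440.0])  # operating temperatures (K)

    # Hypothetical scale parameters eta(T), generated to be consistent with
    # Ea ~ 94.3 kJ/mol, just to illustrate the regression step.
    Ea_true, A = 94.3e3, 2.0e9
    eta = 1.0 / (A * np.exp(-Ea_true / (R * T)))  # since k = 1/eta

    # ln(1/eta) = ln A - Ea/(R T): a straight line in 1/T
    slope, intercept = np.polyfit(1.0 / T, np.log(1.0 / eta), 1)
    print(f"Ea = {-slope * R / 1000:.1f} kJ/mol")  # recovers ~94.3
    ```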

  1. Lower bound on reliability for Weibull distribution when shape parameter is not estimated accurately

    NASA Technical Reports Server (NTRS)

    Huang, Zhaofeng; Porter, Albert A.

    1991-01-01

    The mathematical relationships between the shape parameter Beta and estimates of reliability and a life limit lower bound for the two-parameter Weibull distribution are investigated. It is shown that, under rather general conditions, both the reliability lower bound and the allowable life limit lower bound (often called a tolerance limit) have unique global minima over a range of Beta. Hence, lower bound solutions can be obtained without assuming or estimating Beta. The existence and uniqueness of these lower bounds are proven. Some real data examples are given to show how these lower bounds can be easily established and to demonstrate their practicality. The method developed here has proven to be extremely useful when using the Weibull distribution in analysis of no-failure or few-failures data. The results are applicable not only in the aerospace industry but anywhere that system reliabilities are high.

  2. Lower bound on reliability for Weibull distribution when shape parameter is not estimated accurately

    NASA Technical Reports Server (NTRS)

    Huang, Zhaofeng; Porter, Albert A.

    1990-01-01

    The mathematical relationships between the shape parameter Beta and estimates of reliability and a life limit lower bound for the two-parameter Weibull distribution are investigated. It is shown that, under rather general conditions, both the reliability lower bound and the allowable life limit lower bound (often called a tolerance limit) have unique global minima over a range of Beta. Hence, lower bound solutions can be obtained without assuming or estimating Beta. The existence and uniqueness of these lower bounds are proven. Some real data examples are given to show how these lower bounds can be easily established and to demonstrate their practicality. The method developed here has proven to be extremely useful when using the Weibull distribution in analysis of no-failure or few-failures data. The results are applicable not only in the aerospace industry but anywhere that system reliabilities are high.

  3. A comparison of the generalized gamma and exponentiated Weibull distributions.

    PubMed

    Cox, Christopher; Matheson, Matthew

    2014-09-20

    This paper provides a comparison of the three-parameter exponentiated Weibull (EW) and generalized gamma (GG) distributions. The connection between these two different families is that the hazard functions of both have the four standard shapes (increasing, decreasing, bathtub, and arc shaped), and in fact, the shape of the hazard is the same for identical values of the three parameters. For a given EW distribution, we define a matching GG using simulation and also by matching the 5th, 50th, and 95th percentiles. We compare EW and matching GG distributions graphically and using the Kullback-Leibler distance. We find that the survival functions for the EW and matching GG are graphically indistinguishable, and only the hazard functions can sometimes be seen to be slightly different. The Kullback-Leibler distances are very small and decrease with increasing sample size. We conclude that the similarity between the two distributions is striking, and therefore, the EW represents a convenient alternative to the GG with the identical richness of hazard behavior. More importantly, these results suggest that having the four basic hazard shapes may to some extent be an important structural characteristic of any family of distributions. PMID:24700647
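
    The percentile-matching construction can be reproduced in outline with scipy, which happens to ship both families (exponweib and gengamma). The reference EW parameters below are hypothetical, and the Kullback-Leibler distance is evaluated by direct numerical integration rather than by the authors' procedure.

    ```python
    import numpy as np
    from scipy.stats import exponweib, gengamma
    from scipy.optimize import fsolve
    from scipy.integrate import quad

    # A reference exponentiated-Weibull distribution (shape values are hypothetical)
    ew = exponweib(a=2.0, c=1.5, scale=1.0)

    def percentile_gap(params):
        """Match the GG to the EW at the 5th, 50th, and 95th percentiles."""
        a, c, scale = np.abs(params)  # keep the search in the positive octant
        gg = gengamma(a=a, c=c, scale=scale)
        return [gg.ppf(q) - ew.ppf(q) for q in (0.05, 0.50, 0.95)]

    a, c, scale = np.abs(fsolve(percentile_gap, x0=[2.0, 1.5, 1.0]))
    gg = gengamma(a=a, c=c, scale=scale)

    # Kullback-Leibler distance KL(EW || GG) by numerical integration
    kl, _ = quad(lambda x: ew.pdf(x) * np.log(ew.pdf(x) / gg.pdf(x)), 1e-6, 20)
    print(f"matched GG: a={a:.3f}, c={c:.3f}, scale={scale:.3f}, KL={kl:.5f}")
    ```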

  4. Weibull analysis applied to the pull adhesion test and fracture of a metal-ceramic interface

    SciTech Connect

    Erck, R.A.; Nichols, F.A.; Schult, D.L.

    1992-11-01

    Various adhesion tests have been developed to measure the mechanical bonding of thin coatings deposited on substrates. In the pull test, pins that have been bonded to the coating under test are pulled with increasing force normal to the coating until the coating is pulled from the substrate. For many systems, large scatter in the data is often observed due to uncontrolled defects in the interface and the brittle nature of the pull test. In this study, the applicability of Weibull statistics to the analysis of adhesion of Ag films to vacuum sputter-cleaned zirconia was examined. Data were obtained for smooth and rough substrates for various levels of adhesion. A good fit of the data to the Weibull distribution was observed. The Weibull modulus was found to depend on the roughness of the substrate, but was insensitive to the adhesion strength.

  5. Large-Scale Weibull Analysis of H-451 Nuclear- Grade Graphite Specimen Rupture Data

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Walker, Andrew; Baker, Eric H.; Murthy, Pappu L.; Bratton, Robert L.

    2012-01-01

    A Weibull analysis was performed of the strength distribution and size effects for 2000 specimens of H-451 nuclear-grade graphite. The data, generated elsewhere, measured the tensile and four-point-flexure room-temperature rupture strength of specimens excised from a single extruded graphite log. Strength variation was compared with specimen location, size, and orientation relative to the parent body. In our study, data were progressively and extensively pooled into larger data sets to discriminate overall trends from local variations and to investigate the strength distribution. The CARES/Life and WeibPar codes were used to investigate issues regarding the size effect, Weibull parameter consistency, and nonlinear stress-strain response. Overall, the Weibull distribution described the behavior of the pooled data very well. However, the issue regarding the smaller-than-expected size effect remained. This exercise illustrated that a conservative approach using a two-parameter Weibull distribution is best for designing graphite components with low probability of failure for the in-core structures in the proposed Generation IV (Gen IV) high-temperature gas-cooled nuclear reactors. This exercise also demonstrated the continuing need to better understand the mechanisms driving stochastic strength response. Extensive appendixes are provided with this report to show all aspects of the rupture data and analytical results.
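
    The Weibull size effect at the heart of this kind of analysis says that, for Weibull modulus m and equal failure probability, the characteristic strengths of two stressed volumes scale as σ2/σ1 = (V1/V2)^(1/m). A tiny sketch with hypothetical numbers:

    ```python
    def scaled_strength(sigma1, m, v1, v2):
        """Weibull size effect: strength of stressed volume v2 given that of v1,
        for Weibull modulus m (equal failure probability)."""
        return sigma1 * (v1 / v2) ** (1.0 / m)

    # Doubling the stressed volume with m = 8 lowers the strength by ~8 percent.
    print(scaled_strength(100.0, 8.0, 1.0, 2.0))  # -> ~91.7
    ```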

  6. Least Squares Best Fit Method for the Three Parameter Weibull Distribution: Analysis of Tensile and Bend Specimens with Volume or Surface Flaw Failure

    NASA Technical Reports Server (NTRS)

    Gross, Bernard

    1996-01-01

    Material characterization parameters obtained from naturally flawed specimens are necessary for reliability evaluation of non-deterministic advanced ceramic structural components. The least squares best fit method is applied to the three parameter uniaxial Weibull model to obtain the material parameters from experimental tests on volume or surface flawed specimens subjected to pure tension, pure bending, four point or three point loading. Several illustrative example problems are provided.

  7. A Weibull distribution with power-law tails that describes the first passage time processes of foreign currency exchanges

    NASA Astrophysics Data System (ADS)

    Sazuka, Naoya; Inoue, Jun-Ichi

    2007-03-01

    A Weibull distribution with power-law tails is confirmed as a good candidate to describe the first passage time process of foreign currency exchange rates. The Lorentz curve and the corresponding Gini coefficient for a Weibull distribution are derived analytically. We show that the coefficient is in good agreement with the same quantity calculated from the empirical data. We also calculate the average waiting time, which is an important measure to estimate the time customers wait until the next price change after they log in to their computer systems. By assuming that the first passage time distribution might change its shape from the Weibull to the power law at some critical time, we evaluate the average waiting time by means of the renewal-reward theorem. We find that our correction of the tails of the distribution makes the average waiting time much closer to the value obtained from empirical data analysis. We also discuss the deviation from the estimated average waiting time by deriving the waiting time distribution directly. These results lead us to conclude that the first passage process of foreign currency exchange rates is well described by a Weibull distribution with power-law tails.
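
    For reference, the Gini coefficient of a Weibull distribution has the closed form G = 1 − 2^(−1/k), depending only on the shape k. A quick Monte Carlo cross-check, using the mean-absolute-difference definition and an arbitrary k, confirms it:

    ```python
    import numpy as np

    def weibull_gini(k):
        """Analytic Gini coefficient of a Weibull with shape k (scale-free)."""
        return 1.0 - 2.0 ** (-1.0 / k)

    # Monte Carlo cross-check via G = E|X - Y| / (2 * mean)
    rng = np.random.default_rng(3)
    x = rng.weibull(1.6, size=200_000)
    gini_mc = np.abs(x[:100_000] - x[100_000:]).mean() / (2 * x.mean())
    print(weibull_gini(1.6), gini_mc)  # agree to a couple of decimal places
    ```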

  8. Predictive Failure of Cylindrical Coatings Using Weibull Analysis

    NASA Technical Reports Server (NTRS)

    Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.

    2002-01-01

    Rotating, coated wiping rollers used in a high-speed printing application failed primarily from fatigue. Two coating materials were evaluated: a hard, cross-linked, plasticized polyvinyl chloride (PVC) and a softer, plasticized PVC. A total of 447 tests was conducted with these coatings in a production facility. The data were evaluated using Weibull analysis. The softer coating produced more than twice the life of the harder cross-linked coating and reduced the wiper replacement rate by two-thirds, resulting in minimum production interruption.

  9. Numerical approach for the evaluation of Weibull distribution parameters for hydrologic purposes

    NASA Astrophysics Data System (ADS)

    Pierleoni, A.; Di Francesco, S.; Biscarini, C.; Manciola, P.

    2016-06-01

    In hydrology, the statistical description of low flow phenomena is very important for evaluating the available water resource, especially in a river, and the related values can obviously be considered random variables. Therefore, probability distributions dealing with extreme values (maximum and/or minimum) of the variable play a fundamental role. Computational procedures for the estimation of the parameters featuring these distributions are very useful, especially when embedded into analysis software [1][2] or used as standalone applications. In this paper, a computational procedure for the evaluation of the Weibull [3] distribution is presented, focusing on the case when the lower limit of the distribution is not known or not set to a specific value a priori. The procedure takes advantage of the Gumbel [4] moment approach to the problem.

  10. Composite Weibull-Inverse Transformed Gamma distribution and its actuarial application

    NASA Astrophysics Data System (ADS)

    Maghsoudi, Mastoureh; Bakar, Shaiful Anuar Abu; Hamzah, Nor Aishah

    2014-07-01

    This paper introduces a new composite model, namely, the composite Weibull-Inverse Transformed Gamma distribution, which assumes a Weibull distribution for the head up to a specified threshold and an inverse transformed gamma distribution beyond it. The closed form of the probability density function (pdf) as well as the estimation of parameters by the maximum likelihood method is presented. The model is compared with several benchmark distributions and their performances are measured. A well-known data set, the Danish fire loss data, is used for this purpose and its Value at Risk (VaR) using the new model is computed. In comparison to several standard models, the composite Weibull-Inverse Transformed Gamma model proved to be a competitive candidate.

  11. Characterizing size dependence of ceramic-fiber strength using modified Weibull distribution

    SciTech Connect

    Zhu, Yuntian; Blumenthal, W.R.

    1995-05-01

    The strengths of ceramic fibers have been observed to increase with decreasing fiber diameter and length. The traditional single-modal Weibull distribution function can only take into account one type of flaw, which makes it inappropriate to characterize the strength dependence of both the diameter and the length since ceramic fibers usually have both volume and surface flaws which affect the strength dependence in different ways. Although the bi-modal Weibull distribution can be used to characterize both volume and surface flaws, the mathematical difficulty in its application makes it undesirable. In this paper, the factors affecting fiber strength are analyzed in terms of fracture mechanics and flaw formation. A modified Weibull distribution function is proposed to characterize both the diameter dependence and the length dependence of ceramic fibers.

  12. Weibull statistical analysis of Krouse type bending fatigue of nuclear materials

    NASA Astrophysics Data System (ADS)

    Haidyrah, Ahmed S.; Newkirk, Joseph W.; Castaño, Carlos H.

    2016-03-01

    A bending fatigue mini-specimen (Krouse-type) was used to study the fatigue properties of nuclear materials. The objective of this paper is to study fatigue of Grade 91 ferritic-martensitic steel using a mini-specimen (Krouse-type) suitable for reactor irradiation studies. These mini-specimens are similar in design (but smaller) to those described in the ASTM B593 standard. The mini-specimens were machined by waterjet and tested as received. The bending fatigue machine was modified to test the mini-specimen with a specially designed adapter. The cyclic bending fatigue behavior of Grade 91 was studied under constant deflection. The S-N curve was created and the mean fatigue life was analyzed. In this study, the Weibull function was used to predict the probability of failure from high to low stress levels (563, 310, and 265 MPa). The commercial software Minitab 17 was used to calculate the distribution of fatigue life under different stress levels. Two- and three-parameter Weibull analyses were used to estimate the probability of failure. The plots indicated that the three-parameter Weibull distribution fits the data well.

  13. A new, reliable, and simple-to-use method for the analysis of a population of values of a random variable using the Weibull probability distribution: application to acrylic bone cement fatigue results.

    PubMed

    Janna, Sied; Dwiggins, David P; Lewis, Gladius

    2005-01-01

    In cases where the Weibull probability distribution is being investigated as a possible fit to experimentally obtained results of a random variable (V), there is currently no accurate and reliable but simple-to-use method available for simultaneously (a) establishing whether the fit is of the two- or three-parameter variant of the distribution, and/or (b) estimating the minimum value of the variable (V0) in cases where the three-parameter variant is shown to be applicable. In the present work, the details of such a method, which uses a simple nonlinear regression analysis, are presented, together with results of its use when applied to four sets of number-of-cycles-to-fracture results from fatigue tests, performed in our laboratory, using specimens fabricated from three different acrylic bone cement formulations. The key result of the method is that the two- or three-parameter variant of the probability distribution is applicable if the estimate of V0 obtained is less than or greater than zero, respectively. PMID:16179755
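
    The decision rule described above (fit a three-parameter Weibull CDF by nonlinear regression and read off the sign of the estimated V0) can be sketched with scipy's curve_fit. The cycles-to-fracture values, plotting positions, and starting guesses below are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def weibull3_cdf(v, v0, eta, m):
        """Three-parameter Weibull CDF; an estimate v0 <= 0 signals the
        two-parameter variant."""
        z = np.clip(v - v0, 0, None)
        return 1.0 - np.exp(-(z / eta) ** m)

    # Hypothetical cycles-to-fracture data, sorted ascending
    cycles = np.sort(np.array([3.1e4, 4.0e4, 4.6e4, 5.8e4, 7.7e4, 9.9e4, 1.4e5]))
    n = len(cycles)
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # median-rank plotting positions

    (v0, eta, m), _ = curve_fit(weibull3_cdf, cycles, F,
                                p0=[0.5 * cycles.min(), cycles.mean(), 1.5])
    variant = "three-parameter" if v0 > 0 else "two-parameter"
    print(f"V0={v0:.3g} -> {variant} Weibull")
    ```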

  14. Flexural strength of sapphire: Weibull statistical analysis of stressed area, surface coating, and polishing procedure effects

    NASA Astrophysics Data System (ADS)

    Klein, Claude A.

    2004-09-01

    The results of fracture testing are usually reported in terms of a measured strength, σM = σ̄i ± Δσ̄i, where σ̄i is the average of the recorded peak stresses at failure and Δσ̄i represents the standard deviation. This "strength" does not provide an objective measure of the intrinsic strength, since σM depends on the test method and the size of the volume or the surface subjected to tensile stresses. We first clarify issues relating to Weibull's theory of brittle fracture and then make use of the theory to assess the results of equibiaxial flexure testing that was carried out on a variety of sapphire specimens at three mechanical test facilities. Specifically, we describe the failure probability distribution in terms of a characteristic strength σC—i.e., the effective strength of a uniformly stressed 1 cm² area—which allows us to predict the average stress at failure of a uniformly loaded "window" if the Weibull modulus m is available. A Weibull statistical analysis of biaxial-flexure strength data thus amounts to obtaining the parameters σC and m, which is best done by directly fitting estimated cumulative failure probabilities to the appropriate expression derived from Weibull's theory. We demonstrate that: (a) measurements performed on sapphire test specimens originating from two suppliers confirm the applicability of the area scaling law; for mechanically polished c- and r-plane sapphire, we obtain σC ≃ 975 MPa, m = 3.40 and σC ≃ 550 MPa, m = 4.10, respectively; (b) strongly adhering compressive coatings can augment the characteristic strength by as much as 60%, in accord with predictions based on fracture-mechanics considerations, but degrade the Weibull modulus, which mitigates the benefit of this approach; and (c) measurements performed at 600 °C on chemomechanically polished c-plane test specimens indicate that proper procedures may enhance the characteristic strength by as much as 150%, with no apparent degradation of the Weibull modulus.
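
    In this formulation, a uniformly stressed area A (in cm²) fails with probability Pf = 1 − exp[−A(σ/σC)^m], and the average stress at failure scales as σC·A^(−1/m)·Γ(1 + 1/m). A small sketch using the c-plane values quoted above; the 10 cm² window and the 400 MPa load are hypothetical.

    ```python
    import math
    import numpy as np

    def failure_probability(stress, area_cm2, sigma_c, m):
        """Weibull failure probability of a uniformly stressed area, referenced
        to the 1 cm^2 characteristic strength sigma_c."""
        return 1.0 - np.exp(-area_cm2 * (stress / sigma_c) ** m)

    def mean_failure_stress(area_cm2, sigma_c, m):
        """Average stress at failure of a uniformly loaded window of given area."""
        return sigma_c * area_cm2 ** (-1.0 / m) * math.gamma(1.0 + 1.0 / m)

    # c-plane sapphire values quoted in the abstract: sigma_c ~ 975 MPa, m ~ 3.40
    print(failure_probability(400.0, 10.0, 975.0, 3.40))
    print(mean_failure_stress(10.0, 975.0, 3.40), "MPa")
    ```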

  15. An EOQ Model with Two-Parameter Weibull Distribution Deterioration and Price-Dependent Demand

    ERIC Educational Resources Information Center

    Mukhopadhyay, Sushanta; Mukherjee, R. N.; Chaudhuri, K. S.

    2005-01-01

    An inventory replenishment policy is developed for a deteriorating item and price-dependent demand. The rate of deterioration is taken to be time-proportional and the time to deterioration is assumed to follow a two-parameter Weibull distribution. A power law form of the price dependence of demand is considered. The model is solved analytically…

  16. Weibull-distributed dyke thickness reflects probabilistic character of host-rock strength.

    PubMed

    Krumbholz, Michael; Hieronymus, Christoph F; Burchardt, Steffi; Troll, Valentin R; Tanner, David C; Friese, Nadine

    2014-01-01

    Magmatic sheet intrusions (dykes) constitute the main form of magma transport in the Earth's crust. The size distribution of dykes is a crucial parameter that controls volcanic surface deformation and eruption rates and is required to realistically model volcano deformation for eruption forecasting. Here we present statistical analyses of 3,676 dyke thickness measurements from different tectonic settings and show that dyke thickness consistently follows the Weibull distribution. Known from materials science, power law-distributed flaws in brittle materials lead to Weibull-distributed failure stress. We therefore propose a dynamic model in which dyke thickness is determined by variable magma pressure that exploits differently sized host-rock weaknesses. The observed dyke thickness distributions are thus site-specific because rock strength, rather than magma viscosity and composition, exerts the dominant control on dyke emplacement. Fundamentally, the strength of geomaterials is scale-dependent and should be approximated by a probability distribution. PMID:24513695

  17. Weibull-distributed dyke thickness reflects probabilistic character of host-rock strength

    PubMed Central

    Krumbholz, Michael; Hieronymus, Christoph F.; Burchardt, Steffi; Troll, Valentin R.; Tanner, David C.; Friese, Nadine

    2014-01-01

    Magmatic sheet intrusions (dykes) constitute the main form of magma transport in the Earth’s crust. The size distribution of dykes is a crucial parameter that controls volcanic surface deformation and eruption rates and is required to realistically model volcano deformation for eruption forecasting. Here we present statistical analyses of 3,676 dyke thickness measurements from different tectonic settings and show that dyke thickness consistently follows the Weibull distribution. Known from materials science, power law-distributed flaws in brittle materials lead to Weibull-distributed failure stress. We therefore propose a dynamic model in which dyke thickness is determined by variable magma pressure that exploits differently sized host-rock weaknesses. The observed dyke thickness distributions are thus site-specific because rock strength, rather than magma viscosity and composition, exerts the dominant control on dyke emplacement. Fundamentally, the strength of geomaterials is scale-dependent and should be approximated by a probability distribution. PMID:24513695

  18. A modified Weibull model for tensile strength distribution of carbon nanotube fibers with strain rate and size effects

    NASA Astrophysics Data System (ADS)

    Sun, Gengzhi; Pang, John H. L.; Zhou, Jinyuan; Zhang, Yani; Zhan, Zhaoyao; Zheng, Lianxi

    2012-09-01

    Fundamental studies on the effects of strain rate and size on the distribution of tensile strength of carbon nanotube (CNT) fibers are reported in this paper. Experimental data show that the mechanical strength of CNT fibers increases from 0.2 to 0.8 GPa as the strain rate increases from 0.00001 to 0.1 (1/s). In addition, the influence of fiber diameter at low and high strain rate conditions was investigated further with statistical analysis. A modified Weibull distribution model for characterizing the tensile strength distribution of CNT fibers taking into account the effect of strain rate and fiber diameter is proposed.

  19. Standard practice for reporting uniaxial strength data and estimating Weibull distribution parameters for advanced ceramics

    NASA Astrophysics Data System (ADS)

    1994-04-01

    This practice covers the evaluation and subsequent reporting of uniaxial strength data and the estimation of probability distribution parameters for advanced ceramics that fail in a brittle fashion. The failure strength of advanced ceramics is treated as a continuous random variable. Typically, a number of test specimens with well-defined geometry are failed under well-defined isothermal loading conditions. The load at which each specimen fails is recorded. The resulting failure stresses are used to obtain parameter estimates associated with the underlying population distribution. This practice is restricted to the assumption that the distribution underlying the failure strengths is the two parameter Weibull distribution with size scaling. Furthermore, this practice is restricted to test specimens (tensile, flexural, pressurized ring, etc.) that are primarily subjected to uniaxial stress states. Section 8 outlines methods to correct for bias errors in the estimated Weibull parameters and to calculate confidence bounds on those estimates from data sets where all failures originate from a single flaw population (that is, a single failure mode). In samples where failures originate from multiple independent flaw populations (for example, competing failure modes), the methods outlined in Section 8 for bias correction and confidence bounds are not applicable. Measurements of the strength at failure are taken for one of two reasons: either for a comparison of the relative quality of two materials, or the prediction of the probability of failure (or, alternatively, the fracture strength) for a structure of interest. This practice will permit estimates of the distribution parameters that are needed for either.

  20. Bonus-Malus System with the Claim Frequency Distribution is Geometric and the Severity Distribution is Truncated Weibull

    NASA Astrophysics Data System (ADS)

    Santi, D. N.; Purnaba, I. G. P.; Mangku, I. W.

    2016-01-01

    A Bonus-Malus system is said to be optimal if it is financially balanced for insurance companies and fair for policyholders. Previous research on Bonus-Malus systems has concerned the determination of the risk premium applied to all of the severity guaranteed by the insurance company. In fact, not all of the severity proposed by the policyholder may be covered by the insurance company. When the insurance company sets a maximum bound on the severity incurred, it becomes necessary to modify the model of the severity distribution into a bounded severity distribution. In this paper, an optimal Bonus-Malus system whose claim frequency component has a geometric distribution and whose severity component has a truncated Weibull distribution is discussed. The number of claims is considered to follow a Poisson distribution, and its expected number λ is exponentially distributed, so the number of claims has a geometric distribution. The severity with a given parameter θ is considered to have a truncated exponential distribution, and θ is modelled using the Levy distribution, so the severity has a truncated Weibull distribution.
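
    The frequency argument above (Poisson counts whose mean λ is exponentially distributed are marginally geometric) is easy to verify by simulation; the mean μ = 0.7 below is hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Claim frequency: N | lambda ~ Poisson(lambda), with lambda ~ Exponential(mean=mu).
    mu = 0.7
    lam = rng.exponential(mu, size=1_000_000)
    n_claims = rng.poisson(lam)

    # Marginally, N is geometric with success probability p = 1 / (1 + mu):
    p = 1.0 / (1.0 + mu)
    k = np.arange(6)
    geometric_pmf = p * (1 - p) ** k
    empirical = np.array([(n_claims == i).mean() for i in k])
    print(np.round(geometric_pmf, 4))
    print(np.round(empirical, 4))  # agrees closely
    ```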

  1. Weibull probability graph paper: a call for standardization

    NASA Astrophysics Data System (ADS)

    Kane, Martin D.

    2001-04-01

    Weibull analysis of tensile strength data is routinely performed to determine the quality of optical fiber. A typical Weibull analysis includes setting up an experiment, testing the samples, plotting and interpreting the data, and performing a statistical analysis. One typical plot that is often included in the analysis is the Weibull probability plot in which the data are plotted as points on a special type of graph paper known as Weibull probability paper. If the data belong to a Weibull probability density function, they will fall approximately on a straight line. A search of the literature reveals that many Weibull analyses have been performed on optical fiber, but the associated Weibull probability plots have been drawn incorrectly. In some instances the plots have been shown with the ordinate (Probability) starting from 0% and ending at 100%. This has no physical meaning because the Weibull probability density function is a continuous distribution and is inherently not bounded. This paper will discuss the Weibull probability density function, the proper construction of Weibull probability graph paper, and interpretation of data through analysis of the associated probability plot.
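
    A correctly constructed Weibull probability plot keeps the ordinate strictly inside (0%, 100%), for example by using median-rank plotting positions, and uses the ln(−ln(1 − F)) transform so that Weibull data fall on a straight line. A sketch with hypothetical fiber strengths:

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical fiber tensile strengths (GPa)
    s = np.sort(np.array([3.1, 3.5, 3.8, 4.0, 4.2, 4.4, 4.6, 4.9, 5.2, 5.6]))
    n = len(s)
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)  # median ranks: never 0% or 100%

    # Proper Weibull axes: ln(strength) vs ln(-ln(1 - F))
    x, y = np.log(s), np.log(-np.log(1.0 - F))
    m, c = np.polyfit(x, y, 1)              # slope = Weibull modulus
    eta = np.exp(-c / m)                    # scale parameter

    plt.plot(x, y, "o", label="data")
    plt.plot(x, m * x + c, label=f"fit: m={m:.2f}, eta={eta:.2f} GPa")
    plt.xlabel("ln(strength)")
    plt.ylabel("ln(-ln(1 - F))")
    plt.legend()
    plt.show()
    ```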

  2. Reliability Evaluation Method with Weibull Distribution for Temporary Overvoltages of Substation Equipment

    NASA Astrophysics Data System (ADS)

    Okabe, Shigemitsu; Tsuboi, Toshihiro; Takami, Jun

    Power-frequency withstand voltage tests on electric power equipment are regulated in JEC standards, where lifetime reliability is evaluated with a Weibull distribution function. The evaluation method is still controversial in terms of the consideration of a plural number of faults, and some alternative methods have been proposed on this subject. The present paper first discusses the physical meanings of the various kinds of evaluation methods and secondly examines their effects on the power-frequency withstand voltage tests. Further, an appropriate method is investigated for an oil-filled transformer and a gas insulated switchgear, taking note of the dielectric breakdown or partial discharge mechanism under various insulating material and structure conditions; the tentative conclusion is that the conventional method would be most pertinent under the present conditions.

  3. Flexural strength of infrared-transmitting window materials: bimodal Weibull statistical analysis

    NASA Astrophysics Data System (ADS)

    Klein, Claude A.

    2011-02-01

    The results of flexural strength testing performed on brittle materials are usually interpreted in light of a "Weibull plot," i.e., by fitting the estimated cumulative failure probability (CFP) to a linearized semiempirical Weibull distribution. This procedure ignores the impact of the testing method on the measured stresses at fracture (specifically, the stressed area and the stress profile), thus resulting in inadequate characterization of the material under investigation. In a previous publication, the author reformulated Weibull's statistical theory of fracture in a manner that emphasizes how the stressed area and the stress profile control the failure probability distribution, which led to the concept of a characteristic strength, that is, the effective strength of a 1-cm² uniformly stressed area. Fitting the CFP of IR-transmitting materials (AlON, fusion-cast CaF2, oxyfluoride glass, fused SiO2, CVD-ZnSe, and CVD-ZnS) was performed by means of nonlinear regressions but produced evidence of slight, systematic deviations. The purpose of this contribution is to demonstrate that, upon extending the previously elaborated model to distributions involving two distinct types of defects (bimodal distributions), the fit agrees with the estimated CFPs. Furthermore, the availability of two sets of statistical parameters (characteristic strength and shape parameter) can be taken advantage of to evaluate the failure-probability density, thus providing a means of assessing the nature, the critical size, and the size distribution of surface/subsurface flaws.

  4. Probabilistic Analysis for Comparing Fatigue Data Based on Johnson-Weibull Parameters

    NASA Technical Reports Server (NTRS)

    Hendricks, Robert C.; Zaretsky, Erwin V.; Vlcek, Brian L.

    2007-01-01

    Probabilistic failure analysis is essential when analysis of stress-life (S-N) curves is inconclusive in determining the relative ranking of two or more materials. In 1964, L. Johnson published a methodology for establishing the confidence that two populations of data are different. Simplified algebraic equations for confidence numbers were derived based on the original work of L. Johnson. Using the ratios of mean life, the resultant values of confidence numbers deviated less than one percent from those of Johnson. It is possible to rank the fatigue lives of different materials with a reasonable degree of statistical certainty based on combined confidence numbers. These equations were applied to rotating beam fatigue tests that were conducted on three aluminum alloys at three stress levels each. These alloys were AL 2024, AL 6061, and AL 7075. The results were analyzed and compared using ASTM Standard E739-91 and the Johnson-Weibull analysis. The ASTM method did not statistically distinguish between AL 6061 and AL 7075. Based on the Johnson-Weibull analysis confidence numbers greater than 99 percent, AL 2024 was found to have the longest fatigue life, followed by AL 7075, and then AL 6061. The ASTM Standard and the Johnson-Weibull analysis result in the same stress-life exponent p for each of the three aluminum alloys at the median or L(sub 50) lives.

  5. USE OF WEIBULL FUNCTION FOR NON-LINEAR ANALYSIS OF EFFECTS OF LOW LEVELS OF SIMULATED HERBICIDE DRIFT ON PLANTS

    EPA Science Inventory

    We compared two regression models, which are based on the Weibull and probit functions, for the analysis of pesticide toxicity data from laboratory studies on Illinois crop and native plant species. Both mathematical models are continuous, differentiable, strictly positive, and...

  6. Probabilistic Analysis for Comparing Fatigue Data Based on Johnson-Weibull Parameters

    NASA Technical Reports Server (NTRS)

    Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.

    2013-01-01

    Leonard Johnson published a methodology for establishing the confidence that two populations of data are different. Johnson's methodology is dependent on limited combinations of test parameters (Weibull slope, mean life ratio, and degrees of freedom) and a set of complex mathematical equations. In this report, a simplified algebraic equation for confidence numbers is derived based on the original work of Johnson. The confidence numbers calculated with this equation are compared to those obtained graphically by Johnson. Using the ratios of mean life, the resultant values of confidence numbers at the 99 percent level deviate less than 1 percent from those of Johnson. At a 90 percent confidence level, the calculated values differ between +2 and 4 percent. The simplified equation is used to rank the experimental lives of three aluminum alloys (AL 2024, AL 6061, and AL 7075), each tested at three stress levels in rotating beam fatigue, analyzed using the Johnson-Weibull method, and compared to the ASTM Standard (E739-91) method of comparison. The ASTM Standard did not statistically distinguish between AL 6061 and AL 7075. However, it is possible to rank the fatigue lives of different materials with a reasonable degree of statistical certainty based on combined confidence numbers using the Johnson-Weibull analysis. AL 2024 was found to have the longest fatigue life, followed by AL 7075, and then AL 6061. The ASTM Standard and the Johnson-Weibull analysis result in the same stress-life exponent p for each of the three aluminum alloys at the median, or L(sub 50), lives.

  7. Average capacity for optical wireless communication systems over exponentiated Weibull distribution non-Kolmogorov turbulent channels.

    PubMed

    Cheng, Mingjian; Zhang, Yixin; Gao, Jie; Wang, Fei; Zhao, Fengsheng

    2014-06-20

    We model the average channel capacity of optical wireless communication systems for cases of weak to strong turbulence channels, using the exponentiated Weibull distribution model. The joint effects of beam wander and spread, pointing errors, atmospheric attenuation, and the spectral index of non-Kolmogorov turbulence on system performance are included. Our results show that the average capacity decreases steeply as the propagation length L changes from 0 to 200 m, and decreases slowly or tends to a stable value as the propagation length L grows beyond 200 m. In the weak turbulence region, increasing the detection aperture improves the average channel capacity, and atmospheric visibility is an important issue affecting the average channel capacity. In the strong turbulence region, increasing the radius of the detection aperture cannot reduce the effects of the atmospheric turbulence on the average channel capacity, and the effect of atmospheric visibility on the channel information capacity can be ignored. The effect of the spectral power exponent on the average channel capacity is higher in the strong turbulence region than in the weak turbulence region. Irrespective of the details determining the turbulent channel, we can say that pointing errors have a significant effect on the average channel capacity of optical wireless communication systems in turbulence channels. PMID:24979434

  8. Accurate bearing remaining useful life prediction based on Weibull distribution and artificial neural network

    NASA Astrophysics Data System (ADS)

    Ben Ali, Jaouher; Chebel-Morello, Brigitte; Saidi, Lotfi; Malinowski, Simon; Fnaiech, Farhat

    2015-05-01

    Accurate remaining useful life (RUL) prediction of critical assets is an important challenge in condition-based maintenance to improve reliability and decrease machine breakdowns and maintenance costs. Bearings are among the most important components in industry, and they need to be monitored so that their RUL can be predicted. The challenge of this study is to propose an original feature able to evaluate the health state of bearings and to estimate their RUL by Prognostics and Health Management (PHM) techniques. In this paper, the proposed method is based on the data-driven prognostic approach. The combination of the Simplified Fuzzy Adaptive Resonance Theory Map (SFAM) neural network and the Weibull distribution (WD) is explored. WD is used only in the training phase, to fit measurements and to avoid areas of fluctuation in the time domain. SFAM training is based on fitted measurements at the present and previous inspection time points as input, whereas SFAM testing is based on real measurements at the present and previous inspections. Thanks to its fuzzy learning process, SFAM performs well in learning nonlinear time series. As output, seven classes are defined: a healthy bearing and six states of bearing degradation. In order to find the optimal RUL prediction, a smoothing phase is proposed in this paper. Experimental results show that the proposed method can reliably predict the RUL of rolling element bearings (REBs) based on vibration signals. The proposed prediction approach can be applied to the prognostics of various other mechanical assets.

  9. Inferences on the lifetime performance index for Weibull distribution based on censored observations using the max p-value method

    NASA Astrophysics Data System (ADS)

    Lee, Wen-Chuan

    2011-06-01

    In the service (or manufacturing) industries, process capability indices (PCIs) are utilised to assess whether product quality meets the required level. The lifetime performance index (or larger-the-better PCI) CL is frequently used as a means of measuring product performance, where L is the lower specification limit. Hence, this study first uses the max p-value method to select the optimum value of the shape parameter β of the Weibull distribution, which is then treated as given. Second, we construct the maximum likelihood estimator (MLE) of CL based on the type II right-censored sample from the Weibull distribution. The MLE of CL is then utilised to develop a novel hypothesis testing procedure provided that L is known. Finally, we give one practical example to illustrate the use of the testing procedure under a given significance level α.

  10. Detecting changes in retinal function: Analysis with Non-Stationary Weibull Error Regression and Spatial enhancement (ANSWERS).

    PubMed

    Zhu, Haogang; Russell, Richard A; Saunders, Luke J; Ceccon, Stefano; Garway-Heath, David F; Crabb, David P

    2014-01-01

    Visual fields measured with standard automated perimetry are a benchmark test for determining retinal function in ocular pathologies such as glaucoma. Their monitoring over time is crucial in detecting change in disease course and, therefore, in prompting clinical intervention and defining endpoints in clinical trials of new therapies. However, conventional change detection methods do not take into account non-stationary measurement variability or spatial correlation present in these measures. An inferential statistical model, denoted 'Analysis with Non-Stationary Weibull Error Regression and Spatial enhancement' (ANSWERS), was proposed. In contrast to commonly used ordinary linear regression models, which assume normally distributed errors, ANSWERS incorporates non-stationary variability modelled as a mixture of Weibull distributions. Spatial correlation of measurements was also included into the model using a Bayesian framework. It was evaluated using a large dataset of visual field measurements acquired from electronic health records, and was compared with other widely used methods for detecting deterioration in retinal function. ANSWERS was able to detect deterioration significantly earlier than conventional methods, at matched false positive rates. Statistical sensitivity in detecting deterioration was also significantly better, especially in short time series. Furthermore, the spatial correlation utilised in ANSWERS was shown to improve the ability to detect deterioration, compared to equivalent models without spatial correlation, especially in short follow-up series. ANSWERS is a new efficient method for detecting changes in retinal function. It allows for better detection of change, more efficient endpoints and can potentially shorten the time in clinical trials for new therapies. PMID:24465636

  11. Improvement in mechanical properties of jute fibres through mild alkali treatment as demonstrated by utilisation of the Weibull distribution model.

    PubMed

    Roy, Aparna; Chakraborty, Sumit; Kundu, Sarada Prasad; Basak, Ratan Kumar; Majumder, Subhasish Basu; Adhikari, Basudam

    2012-03-01

    Chemically modified jute fibres are potentially useful as natural reinforcement in composite materials. Jute fibres were treated with 0.25%-1.0% sodium hydroxide (NaOH) solution for 0.5-48 h. The hydrophilicity, surface morphology, crystallinity index, thermal and mechanical characteristics of untreated and alkali-treated fibres were studied. The two-parameter Weibull distribution model was applied to deal with the variation in mechanical properties of the natural fibres. Alkali treatment enhanced the tensile strength and elongation at break by 82% and 45%, respectively, but decreased the hydrophilicity by 50.5% and the diameter of the fibres by 37%. PMID:22209134
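
    A minimal sketch of the two-parameter Weibull treatment of fibre strength data described above; the strength values and the use of Bernard's median-rank approximation are illustrative choices, not the authors' exact procedure:

    ```python
    import numpy as np
    from scipy import stats

    strengths = np.sort(np.array([312., 355., 380., 401., 422.,
                                  440., 463., 490., 512., 540.]))  # MPa, synthetic

    # maximum likelihood fit with the location fixed at zero (two-parameter form)
    shape, loc, scale = stats.weibull_min.fit(strengths, floc=0)
    print(f"Weibull modulus m = {shape:.2f}, characteristic strength = {scale:.0f} MPa")

    # classical Weibull plot: median-rank estimates and logarithmic linearization
    n = strengths.size
    prob = (np.arange(1, n + 1) - 0.3) / (n + 0.4)    # Bernard's approximation
    slope, intercept = np.polyfit(np.log(strengths), np.log(-np.log(1.0 - prob)), 1)
    print(f"graphical modulus estimate = {slope:.2f}")
    ```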

  12. Detecting Changes in Retinal Function: Analysis with Non-Stationary Weibull Error Regression and Spatial Enhancement (ANSWERS)

    PubMed Central

    Zhu, Haogang; Russell, Richard A.; Saunders, Luke J.; Ceccon, Stefano; Garway-Heath, David F.; Crabb, David P.

    2014-01-01

    Visual fields measured with standard automated perimetry are a benchmark test for determining retinal function in ocular pathologies such as glaucoma. Their monitoring over time is crucial in detecting change in disease course and, therefore, in prompting clinical intervention and defining endpoints in clinical trials of new therapies. However, conventional change detection methods do not take into account non-stationary measurement variability or spatial correlation present in these measures. An inferential statistical model, denoted ‘Analysis with Non-Stationary Weibull Error Regression and Spatial enhancement’ (ANSWERS), was proposed. In contrast to commonly used ordinary linear regression models, which assume normally distributed errors, ANSWERS incorporates non-stationary variability modelled as a mixture of Weibull distributions. Spatial correlation of measurements was also included into the model using a Bayesian framework. It was evaluated using a large dataset of visual field measurements acquired from electronic health records, and was compared with other widely used methods for detecting deterioration in retinal function. ANSWERS was able to detect deterioration significantly earlier than conventional methods, at matched false positive rates. Statistical sensitivity in detecting deterioration was also significantly better, especially in short time series. Furthermore, the spatial correlation utilised in ANSWERS was shown to improve the ability to detect deterioration, compared to equivalent models without spatial correlation, especially in short follow-up series. ANSWERS is a new efficient method for detecting changes in retinal function. It allows for better detection of change, more efficient endpoints and can potentially shorten the time in clinical trials for new therapies. PMID:24465636

  13. An incentive for coordination in a decentralised service chain with a Weibull lifetime distributed facility

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Fang; Yang, Gino K.; Yang, Chyn-Yng; Chu, Tu-Bin

    2013-10-01

    This article deals with a decentralised service chain consisting of a service provider and a facility owner. The revenue allocation and service price are, respectively, determined by the service provider and the facility owner in a non-cooperative manner. To model this decentralised operation, a Stackelberg game between the two parties is formulated. In the mathematical framework, the service system is assumed to be driven by Poisson customer arrivals and exponential service times. The most common log-linear service demand and Weibull facility lifetime are also adopted. Under these analytical conditions, the decentralised decisions in this game are investigated and then a unique optimal equilibrium is derived. Finally, a coordination mechanism is proposed to improve the efficiency of this decentralised system.

  14. Weibull analysis of fracture test data on bovine cortical bone: influence of orientation.

    PubMed

    Khandaker, Morshed; Ekwaro-Osire, Stephen

    2013-01-01

    The fracture toughness, KIC, of cortical bone has been experimentally determined by several researchers. Variation in KIC values arises from variation in specimen orientation, shape, and size during the experiment. The fracture toughness of cortical bone is governed by the severest flaw and, hence, may be analyzed using Weibull statistics. To the best of the authors' knowledge, however, no studies of this aspect have been published. The motivation of the study is the evaluation of Weibull parameters in the circumferential-longitudinal (CL) and longitudinal-circumferential (LC) directions. We hypothesized that Weibull parameters vary depending on the bone microstructure. In the present work, a two-parameter Weibull statistical model was applied to calculate the plane-strain fracture toughness of bovine femoral cortical bone using specimens extracted in the CL and LC directions of the bone. It was found that the Weibull modulus of fracture toughness was larger for CL specimens than for LC specimens, but the opposite trend was seen for the characteristic fracture toughness. The reason for these trends is the difference in microstructure and extrinsic toughening mechanisms between the CL and LC directions of the bone. The Weibull parameters found in this study can be applied to develop a damage-mechanics model for bone. PMID:24385985

  15. Weibull Analysis of Fracture Test Data on Bovine Cortical Bone: Influence of Orientation

    PubMed Central

    Khandaker, Morshed; Ekwaro-Osire, Stephen

    2013-01-01

    The fracture toughness, KIC, of cortical bone has been experimentally determined by several researchers. Variation in KIC values arises from variation in specimen orientation, shape, and size during the experiment. The fracture toughness of cortical bone is governed by the severest flaw and, hence, may be analyzed using Weibull statistics. To the best of the authors' knowledge, however, no studies of this aspect have been published. The motivation of the study is the evaluation of Weibull parameters in the circumferential-longitudinal (CL) and longitudinal-circumferential (LC) directions. We hypothesized that Weibull parameters vary depending on the bone microstructure. In the present work, a two-parameter Weibull statistical model was applied to calculate the plane-strain fracture toughness of bovine femoral cortical bone using specimens extracted in the CL and LC directions of the bone. It was found that the Weibull modulus of fracture toughness was larger for CL specimens than for LC specimens, but the opposite trend was seen for the characteristic fracture toughness. The reason for these trends is the difference in microstructure and extrinsic toughening mechanisms between the CL and LC directions of the bone. The Weibull parameters found in this study can be applied to develop a damage-mechanics model for bone. PMID:24385985

  16. How to do a Weibull statistical analysis of flexural strength data: application to AlON, diamond, zinc selenide, and zinc sulfide

    NASA Astrophysics Data System (ADS)

    Klein, Claude A.; Miller, Richard P.

    2001-09-01

    For the purpose of assessing the strength of engineering ceramics, it is common practice to interpret the measured stresses at fracture in the light of a semi-empirical expression derived from Weibull's theory of brittle fracture, i.e., ln[-ln(1-P)] = -m ln(σ_N) + m ln(σ), where P is the cumulative failure probability, σ is the applied tensile stress, m is the Weibull modulus, and σ_N is the nominal strength. The nominal strength σ_N, however, does not represent a true measure because it depends not only on the test method but also on the size of the volume or the surface subjected to tensile stresses. In this paper we intend to first clarify issues relating to the application of Weibull's theory of fracture and then make use of the theory to assess the results of equibiaxial flexure testing that was carried out on polycrystalline infrared-transmitting materials. These materials are brittle ceramics, which most frequently fail as a consequence of tensile stresses acting on surface flaws. Since equibiaxial flexure testing is the preferred method of measuring the strength of optical ceramics, we propose to formulate the failure-probability equation in terms of a characteristic strength, σ_C, for biaxial loadings, i.e., P = 1 - exp{-π (r_o/cm)² [Γ(1+1/m)]^m (σ/σ_C)^m}, where r_o is the radius of the loading ring (in centimeters) and Γ(z) designates the gamma function. A Weibull statistical analysis of equibiaxial strength data thus amounts to obtaining the parameters m and σ_C, which is best done by directly fitting estimated P_i vs. i data to the failure-probability equation; this procedure avoids distorting the distribution through logarithmic linearization and can be implemented by performing a non-linear bivariate regression. Concentric-ring fracture testing performed on five sets of Raytran materials validates the procedure in the sense that the two-parameter model appears to describe the experimental failure
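
    The direct, non-linearized fit described above can be sketched as follows, with synthetic fracture stresses and an assumed ring radius; curve_fit stands in for the authors' bivariate regression:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import gamma

    r0_cm = 1.0   # loading-ring radius in cm, assumed
    stress = np.sort(np.array([142., 158., 171., 183., 190., 204., 215., 229.]))  # MPa, synthetic
    n = stress.size
    P_est = (np.arange(1, n + 1) - 0.5) / n       # rank-based estimates of P_i

    def failure_prob(s, m, sigma_c):
        # P = 1 - exp{-pi (r0/cm)^2 [Gamma(1 + 1/m)]^m (sigma/sigma_C)^m}
        return 1.0 - np.exp(-np.pi * r0_cm**2 * gamma(1.0 + 1.0 / m)**m * (s / sigma_c)**m)

    (m_fit, sc_fit), _ = curve_fit(failure_prob, stress, P_est, p0=(8.0, 200.0))
    print(f"Weibull modulus m = {m_fit:.1f}, characteristic strength sigma_C = {sc_fit:.0f} MPa")
    ```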

  17. Reliability growth modeling analysis of the space shuttle main engines based upon the Weibull process

    NASA Technical Reports Server (NTRS)

    Wheeler, J. T.

    1990-01-01

    The Weibull process, identified as the inhomogeneous Poisson process with the Weibull intensity function, is used to model the reliability growth assessment of the space shuttle main engine test and flight failure data. Additional tables of percentage-point probabilities for several different values of the confidence coefficient have been generated for setting (1-alpha)100-percent two-sided confidence interval estimates on the mean time between failures. The tabled data pertain to two cases: (1) time-terminated testing, and (2) failure-terminated testing. The critical values of the three test statistics, namely Cramér-von Mises, Kolmogorov-Smirnov, and chi-square, were calculated and tabled for use in the goodness-of-fit tests for the engine reliability data. Numerical results are presented for five different groupings of the engine data that reflect the actual response to the failures.
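
    For a flavor of how the Weibull-process (power-law intensity) parameters are estimated from failure times, here is a hedged sketch using the standard time-terminated maximum likelihood formulas; the failure times are invented, not shuttle data:

    ```python
    import numpy as np

    # cumulative failure times from a single test program (h), synthetic
    t = np.array([55., 130., 310., 520., 990., 1410., 2210.])
    T = 2500.0    # total accumulated test time (time-terminated case)
    n = t.size

    beta_hat = n / np.sum(np.log(T / t))   # shape of the Weibull intensity
    lam_hat = n / T**beta_hat              # scale: intensity is lam*beta*t**(beta-1)
    mtbf_inst = 1.0 / (lam_hat * beta_hat * T**(beta_hat - 1.0))

    print(f"beta = {beta_hat:.2f}  (beta < 1 indicates reliability growth)")
    print(f"instantaneous MTBF at T = {T:.0f} h: {mtbf_inst:.0f} h")
    ```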

  18. Prediction of the curing time to achieve maturity of the nano-cement based concrete using the Weibull distribution model: A complementary data set

    PubMed Central

    Jo, Byung Wan; Chakraborty, Sumit; Kim, Heon

    2015-01-01

    This data article provides comparison data for nano-cement based concrete (NCC) and ordinary Portland cement based concrete (OPCC). Concrete samples (OPCC) were fabricated using ten different mix designs, and their characterization data are provided here. Optimization of the curing time using the Weibull distribution model was done by analyzing the rate of change of compressive strength of the OPCC. Initially, the compressive strength of the OPCC samples was measured after completion of four desired curing times. Thereafter, the required curing time to achieve a particular rate of change of the compressive strength was predicted utilizing the equation derived from the variation of the rate of change of compressive strength with the curing time, prior to the optimization of the curing time (at the 99.99% confidence level) using the Weibull distribution model. This data article complements the research article entitled “Prediction of the curing time to achieve maturity of the nano-cement based concrete using the Weibull distribution model” [1]. PMID:26217804

  19. Prediction of the curing time to achieve maturity of the nano-cement based concrete using the Weibull distribution model: A complementary data set.

    PubMed

    Jo, Byung Wan; Chakraborty, Sumit; Kim, Heon

    2015-09-01

    This data article provides comparison data for nano-cement based concrete (NCC) and ordinary Portland cement based concrete (OPCC). Concrete samples (OPCC) were fabricated using ten different mix designs, and their characterization data are provided here. Optimization of the curing time using the Weibull distribution model was done by analyzing the rate of change of compressive strength of the OPCC. Initially, the compressive strength of the OPCC samples was measured after completion of four desired curing times. Thereafter, the required curing time to achieve a particular rate of change of the compressive strength was predicted utilizing the equation derived from the variation of the rate of change of compressive strength with the curing time, prior to the optimization of the curing time (at the 99.99% confidence level) using the Weibull distribution model. This data article complements the research article entitled "Prediction of the curing time to achieve maturity of the nano-cement based concrete using the Weibull distribution model" [1]. PMID:26217804

  20. Quality-Related Monitoring and Grading of Granulated Products by Weibull-Distribution Modeling of Visual Images with Semi-Supervised Learning

    PubMed Central

    Liu, Jinping; Tang, Zhaohui; Xu, Pengfei; Liu, Wenzhong; Zhang, Jin; Zhu, Jianyong

    2016-01-01

    The topic of online product quality inspection (OPQI) with smart visual sensors is attracting increasing interest in both the academic and industrial communities on account of the natural connection between the visual appearance of products and their underlying qualities. Visual images captured from granulated products (GPs), e.g., cereal products or fabric textiles, are composed of a large number of independent particles or stochastically stacking locally homogeneous fragments, whose analysis and understanding remains challenging. A method of image statistical modeling-based OPQI for GP quality grading and monitoring by a Weibull distribution (WD) model with a semi-supervised learning classifier is presented. WD-model parameters (WD-MPs) of GP images’ spatial structures, obtained with omnidirectional Gaussian derivative filtering (OGDF), which were demonstrated theoretically to obey a specific WD model of integral form, were extracted as the visual features. Then, a co-training-style semi-supervised classifier algorithm, named COSC-Boosting, was exploited for semi-supervised GP quality grading, integrating two independent classifiers with complementary natures in the face of scarce labeled samples. The effectiveness of the proposed OPQI method was verified in the field of automated rice quality grading, where it was compared with commonly used methods and showed superior performance, laying a foundation for the quality control of GPs on assembly lines. PMID:27367703

  1. Quality-Related Monitoring and Grading of Granulated Products by Weibull-Distribution Modeling of Visual Images with Semi-Supervised Learning.

    PubMed

    Liu, Jinping; Tang, Zhaohui; Xu, Pengfei; Liu, Wenzhong; Zhang, Jin; Zhu, Jianyong

    2016-01-01

    The topic of online product quality inspection (OPQI) with smart visual sensors is attracting increasing interest in both the academic and industrial communities on account of the natural connection between the visual appearance of products and their underlying qualities. Visual images captured from granulated products (GPs), e.g., cereal products or fabric textiles, are composed of a large number of independent particles or stochastically stacking locally homogeneous fragments, whose analysis and understanding remains challenging. A method of image statistical modeling-based OPQI for GP quality grading and monitoring by a Weibull distribution (WD) model with a semi-supervised learning classifier is presented. WD-model parameters (WD-MPs) of GP images' spatial structures, obtained with omnidirectional Gaussian derivative filtering (OGDF), which were demonstrated theoretically to obey a specific WD model of integral form, were extracted as the visual features. Then, a co-training-style semi-supervised classifier algorithm, named COSC-Boosting, was exploited for semi-supervised GP quality grading, integrating two independent classifiers with complementary natures in the face of scarce labeled samples. The effectiveness of the proposed OPQI method was verified in the field of automated rice quality grading, where it was compared with commonly used methods and showed superior performance, laying a foundation for the quality control of GPs on assembly lines. PMID:27367703

  2. SER performance analysis of MPPM FSO system with three decision thresholds over exponentiated Weibull fading channels

    NASA Astrophysics Data System (ADS)

    Wang, Ping; Yang, Bensheng; Guo, Lixin; Shang, Tao

    2015-11-01

    In this work, the symbol error rate (SER) performance of the multiple pulse position modulation (MPPM) based free-space optical (FSO) communication system with three different decision thresholds, fixed decision threshold (FDT), optimized decision threshold (ODT) and dynamic decision threshold (DDT), over exponentiated Weibull (EW) fading channels has been investigated in detail. The effects of aperture averaging on each decision threshold under weak-to-strong turbulence conditions are further studied and compared. The closed-form SER expressions for the three thresholds, derived with the help of the generalized Gauss-Laguerre quadrature rule, are verified by Monte Carlo simulations. This work is helpful for the design of receivers for FSO communication systems.

  3. Modeling the reliability and maintenance costs of wind turbines using Weibull analysis

    SciTech Connect

    Vachon, W.A.

    1996-12-31

    A general description is provided of the basic mathematics and use of Weibull statistical models for modeling component failures and maintenance costs as a function of time. The applicability of the model to wind turbine components and subsystems is discussed with illustrative examples of typical component reliabilities drawn from actual field experiences. Example results indicate the dominant role of key subsystems based on a combination of their failure frequency and repair/replacement costs. The value of the model is discussed as a means of defining (1) maintenance practices, (2) areas in which to focus product improvements, (3) spare parts inventory, and (4) long-term trends in maintenance costs as an important element in project cash flow projections used by developers, investors, and lenders. 6 refs., 8 figs., 3 tabs.

  4. Weibull distribution of incipient flaws in basalt material used in high-velocity impact experiments and applications in numerical simulations of small body disruptions

    NASA Astrophysics Data System (ADS)

    Michel, P.; Nakamura, A.

    We measured the Weibull parameters of a specific basalt material, called Yakuno basalt, which has already been used in documented high-velocity impact experiments. The outcomes of these experiments have been widely used to validate numerical codes of fragmentation developed in the context of planetary science. However, the distribution of incipient flaws in the targets, usually characterized by the so-called Weibull parameters, has generally been implemented in the codes with values chosen to match the experimental outcomes, hence the validity of numerical simulations remains to be assessed with the actual values of these parameters. Here, we follow the original method proposed by Weibull in 1939 to measure these parameters for this Yakuno basalt. We obtain a value of the Weibull modulus (also called the shape parameter) m larger than the one corresponding to simulation fits to the experimental data. The characteristic strength, which corresponds to 63.2% failure of a sample of similar specimens and defines the second Weibull (scale) parameter, is also determined. This parameter appears insensitive to the different loading rates used to make the measurements. A complete database of impact experiments on basalt targets, including both the important initial target parameters and the detailed outcomes of their disruptions, is now at the disposal of numerical codes of fragmentation for validity testing. In the gravity regime, which applies when the small bodies involved are larger than a few hundred meters in size, our numerical simulations have already succeeded in reproducing asteroid families, showing that most large fragments from an asteroid disruption consist of gravitational aggregates formed by re-accumulation of smaller fragments during the disruption. Moreover, we found that the outcome depends strongly on the initial internal structure of the bodies involved. Therefore, the knowledge of the actual flaw distribution of the material defining the

  5. Analysis of the fuzzy greatest of CFAR detector in homogeneous and non-homogeneous Weibull clutter

    NASA Astrophysics Data System (ADS)

    Baadeche, Mohamed; Soltani, Faouzi

    2015-12-01

    In this paper, we analyze the distributed FGO-CFAR detector in homogeneous and non-homogeneous Weibull clutter with an assumption of known shape parameter. The non-homogeneity is modeled by the presence of a clutter edge in the reference window. We derive the membership function that maps the observations to the false alarm space and compute the threshold at the data fusion center. Applying the 'Maximum', 'Minimum', 'Algebraic Sum' and 'Algebraic Product' fuzzy rules for the L detectors considered at the data fusion center, the results show that the best performance is obtained with the 'Algebraic Product' fuzzy rule, followed by the 'Minimum' one; in these two cases the probability of detection increases significantly with the number of detectors.

  6. Estimating Age Distributions of Base Flow in Watersheds Underlain by Single and Dual Porosity Formations Using Groundwater Transport Simulation and Weighted Weibull Functions

    NASA Astrophysics Data System (ADS)

    Sanford, W. E.

    2015-12-01

    Age distributions of base flow to streams are important to estimate for predicting the timing of water-quality responses to changes in distributed inputs of nutrients or pollutants at the land surface. Simple models of shallow aquifers will predict exponential age distributions, but more realistic 3-D stream-aquifer geometries will cause deviations from an exponential curve. In addition, in fractured rock terrains the dual nature of the effective and total porosity of the system complicates the age distribution further. In this study shallow groundwater flow and advective transport were simulated in two regions in the Eastern United States—the Delmarva Peninsula and the upper Potomac River basin. The former is underlain by layers of unconsolidated sediment, while the latter consists of folded and fractured sedimentary rocks. Transport of groundwater to streams was simulated using the USGS code MODPATH within 175 and 275 watersheds, respectively. For the fractured rock terrain, calculations were also performed along flow pathlines to account for exchange between mobile and immobile flow zones. Porosities at both sites were calibrated using environmental tracer data (3H, 3He, CFCs and SF6) in wells and springs, and with a 30-year tritium record from the Potomac River. Carbonate and siliciclastic rocks were calibrated to have mobile porosity values of one and six percent, and immobile porosity values of 18 and 12 percent, respectively. The age distributions were fitted to Weibull functions. Whereas an exponential function has one parameter that controls the median age of the distribution, a Weibull function has an extra parameter that controls the slope of the curve. A weighted Weibull function was also developed that potentially allows for four parameters, two that control the median age and two that control the slope, one of each weighted toward early or late arrival times. For both systems the two-parameter Weibull function nearly always produced a substantially

  7. Weibull-Based Design Methodology for Rotating Aircraft Engine Structures

    NASA Technical Reports Server (NTRS)

    Zaretsky, Erwin; Hendricks, Robert C.; Soditus, Sherry

    2002-01-01

    The NASA Energy Efficient Engine (E(sup 3)-Engine) is used as the basis of a Weibull-based life and reliability analysis. Each component's life, and thus the engine's life, is defined by high-cycle fatigue (HCF) or low-cycle fatigue (LCF). Knowing the cumulative life distribution of each of the components making up the engine, as represented by a Weibull slope, is a prerequisite to predicting the life and reliability of the entire engine. As the engine Weibull slope increases, the predicted lives decrease. The predicted engine lives L(sub 5) (95% probability of survival) of approximately 17,000 and 32,000 hr do correlate with current engine maintenance practices without and with refurbishment, respectively. The individual high pressure turbine (HPT) blade lives necessary to obtain a blade system life L(sub 0.1) (99.9% probability of survival) of 9000 hr for Weibull slopes of 3, 6 and 9 are 47,391, 20,652 and 15,658 hr, respectively. For a design life of the HPT disks having probable points of failure equal to or greater than 36,000 hr at a probability of survival of 99.9%, the predicted disk system life L(sub 0.1) can vary from 9,408 to 24,911 hr.
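
    The component-to-system logic described above can be sketched directly: the system survives only if every component survives, so Weibull component reliabilities multiply. The component slopes and characteristic lives below are invented, not E(sup 3) values:

    ```python
    import numpy as np
    from scipy.optimize import brentq

    # (Weibull slope, characteristic life in hours) for hypothetical components
    components = [(3.0, 60000.0), (6.0, 45000.0), (9.0, 52000.0)]

    def system_reliability(t):
        s = 1.0
        for slope, eta in components:
            s *= np.exp(-(t / eta) ** slope)   # series system: survivals multiply
        return s

    def system_life(reliability):
        # time at which system reliability falls to the requested level
        return brentq(lambda t: system_reliability(t) - reliability, 1.0, 1.0e6)

    print(f"L5   (95% survival)  : {system_life(0.95):,.0f} h")
    print(f"L0.1 (99.9% survival): {system_life(0.999):,.0f} h")
    ```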

  8. Weibull Wind-Speed Distribution Parameters Derived from a Combination of Wind-Lidar and Tall-Mast Measurements Over Land, Coastal and Marine Sites

    NASA Astrophysics Data System (ADS)

    Gryning, Sven-Erik; Floors, Rogier; Peña, Alfredo; Batchvarova, Ekaterina; Brümmer, Burghard

    2016-05-01

    Wind-speed observations from tall towers are used in combination with observations up to 600 m in altitude from a Doppler wind lidar to study the long-term conditions over suburban (Hamburg), rural coastal (Høvsøre) and marine (FINO3) sites. The variability in the wind field among the sites is expressed in terms of mean wind speed and Weibull distribution shape-parameter profiles. The consequences of the carrier-to-noise-ratio (CNR) threshold-value choice on the wind-lidar observations are revealed as follows. When the wind-lidar CNR is lower than a prescribed threshold value, the observations are often filtered out, as the uncertainty in the wind-speed measurements increases. For a pulsed heterodyne Doppler lidar, use of the traditional -22 dB CNR threshold value at all measuring levels up to 600 m results in a ≈ 7 % overestimation of the long-term mean wind speed over land, and a ≈ 12 % overestimation in coastal and marine environments. In addition, the height of the profile maximum of the shape parameter of the Weibull distribution (the so-called reversal height) is found to depend on the applied CNR threshold; it is lower at small CNR threshold values. The reversal height is greater in the suburban (high roughness) than in the rural (low roughness) area. In coastal areas the reversal height is lower than that over land and relates to the internal boundary layer that develops downwind from the coastline. Over the sea the shape parameter increases towards the sea surface. A parametrization of the vertical profile of the shape parameter fits well with observations over land, coastal regions and over the sea. An applied model for the dependence of the reversal height on the surface roughness is in good agreement with the observations over land.

  9. Simulation of correlated discrete Weibull variables: A proposal and an implementation in the R environment

    NASA Astrophysics Data System (ADS)

    Barbiero, Alessandro

    2015-12-01

    Researchers in applied sciences are often concerned with multivariate random variables. In particular, multivariate discrete data often arise in many fields (statistical quality control, biostatistics, failure analysis, etc.). Here we consider the discrete Weibull distribution as an alternative to the popular Poisson random variable and propose a procedure for simulating correlated discrete Weibull random variables, with marginal distributions and correlation matrix assigned by the user. The procedure relies on the Gaussian copula model and an iterative algorithm for recovering the proper correlation matrix for the copula that ensures the desired correlation matrix on the discrete margins. A simulation study is presented, which empirically shows the performance of the procedure.
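
    A minimal sketch of the Gaussian-copula sampling step; Barbiero's iterative correction of the copula correlation matrix is omitted here, so the achieved discrete correlation only approximates the target. The margin parameters are invented:

    ```python
    import numpy as np
    from scipy import stats

    def dweibull_ppf(u, q, beta):
        """Quantile of the type-I discrete Weibull with P(X >= x) = q**(x**beta)."""
        val = (np.log1p(-u) / np.log(q)) ** (1.0 / beta)
        return np.maximum(np.ceil(val) - 1.0, 0.0).astype(int)

    rng = np.random.default_rng(7)
    R = np.array([[1.0, 0.6],
                  [0.6, 1.0]])                 # target copula correlation
    z = rng.multivariate_normal(np.zeros(2), R, size=20000)
    u = stats.norm.cdf(z)                      # Gaussian copula -> uniform margins

    x1 = dweibull_ppf(u[:, 0], q=0.8, beta=1.2)
    x2 = dweibull_ppf(u[:, 1], q=0.6, beta=0.9)
    print("achieved discrete correlation:", round(np.corrcoef(x1, x2)[0, 1], 3))
    ```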

  10. Calculation of Weibull strength parameters and Batdorf flaw-density constants for volume- and surface-flaw-induced fracture in ceramics

    NASA Technical Reports Server (NTRS)

    Pai, Shantaram S.; Gyekenyesi, John P.

    1989-01-01

    The calculation of shape and scale parameters of the two-parameter Weibull distribution is described using the least-squares analysis and maximum likelihood methods for volume- and surface-flaw-induced fracture in ceramics with complete and censored samples. Detailed procedures are given for evaluating 90 percent confidence intervals for maximum likelihood estimates of shape and scale parameters, the unbiased estimates of the shape parameters, and the Weibull mean values and corresponding standard deviations. Furthermore, the necessary steps are described for detecting outliers and for calculating the Kolmogorov-Smirnov and the Anderson-Darling goodness-of-fit statistics and 90 percent confidence bands about the Weibull distribution. It also shows how to calculate the Batdorf flaw-density constants by using the Weibull distribution statistical parameters. The techniques described were verified with several example problems from the open literature, and were coded in the Structural Ceramics Analysis and Reliability Evaluation (SCARE) design program.

  11. Robust Fitting of a Weibull Model with Optional Censoring

    PubMed Central

    Yang, Jingjing; Scott, David W.

    2013-01-01

    The Weibull family is widely used to model failure data, or lifetime data, although the classical two-parameter Weibull distribution is limited to positive data and monotone failure rate. The parameters of the Weibull model are commonly obtained by maximum likelihood estimation; however, it is well known that this estimator is not robust when dealing with contaminated data. A new robust procedure is introduced to fit a Weibull model by using the L2 distance, i.e., the integrated squared distance, of the Weibull probability density function. The Weibull model is augmented with a weight parameter to robustly deal with contaminated data. Results comparing a maximum likelihood estimator with an L2 estimator are given in this article, based on both simulated and real data sets. It is shown that this new L2 parametric estimation method is more robust and does a better job than maximum likelihood in the newly proposed Weibull model when data are contaminated. The same preference for the L2 distance criterion and the new Weibull model also holds for right-censored data with contamination. PMID:23888090
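
    A hedged sketch of generic minimum-L2 (integrated squared error) fitting for a two-parameter Weibull, contrasted with MLE on contaminated data; this omits the paper's weight-parameter augmentation, and the data are synthetic:

    ```python
    import numpy as np
    from scipy import stats, optimize, integrate

    data = np.concatenate([
        stats.weibull_min.rvs(2.0, scale=10.0, size=95, random_state=1),
        np.full(5, 60.0),                     # gross outliers contaminate the sample
    ])

    def ise(params):
        shape, scale = params
        if shape <= 0.6 or scale <= 0:        # keep pdf**2 integrable near zero
            return np.inf
        f = lambda x: stats.weibull_min.pdf(x, shape, scale=scale)
        int_f2, _ = integrate.quad(lambda x: f(x) ** 2, 0, np.inf)
        return int_f2 - 2.0 * np.mean(f(data))   # L2 criterion up to a constant

    res = optimize.minimize(ise, x0=(1.5, 8.0), method="Nelder-Mead")
    shape_mle, _, scale_mle = stats.weibull_min.fit(data, floc=0)
    print("L2 fit (shape, scale):", np.round(res.x, 2))    # near (2, 10) despite outliers
    print("MLE    (shape, scale):", round(shape_mle, 2), round(scale_mle, 2))
    ```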

  12. EVALUATION OF SPRING OPERATED RELIEF VALVE MAINTENANCE INTERVALS AND EXTENSION OF MAINTENANCE TIMES USING A WEIBULL ANALYSIS WITH MODIFIED BAYESIAN UPDATING

    SciTech Connect

    Harris, S.; Gross, R.; Mitchell, E.

    2011-01-18

    The Savannah River Site (SRS) spring operated pressure relief valve (SORV) maintenance intervals were evaluated using an approach provided by the American Petroleum Institute (API RP 581) for risk-based inspection technology (RBI). In addition, the impact of extending the inspection schedule was evaluated using Monte Carlo Simulation (MCS). The API RP 581 approach is characterized as a Weibull analysis with modified Bayesian updating provided by SRS SORV proof testing experience. Initial Weibull parameter estimates were updated as per SRS's historical proof test records contained in the Center for Chemical Process Safety (CCPS) Process Equipment Reliability Database (PERD). The API RP 581 methodology was used to estimate the SORV's probability of failing on demand (PFD), and the annual expected risk. The API RP 581 methodology indicates that the current SRS maintenance plan is conservative. Cost savings may be attained in certain mild service applications that present low PFD and overall risk. Current practices are reviewed and recommendations are made for extending inspection intervals. The paper gives an illustration of the inspection costs versus the associated risks by using API RP 581 Risk Based Inspection (RBI) Technology. A cost effective maintenance frequency balancing both financial risk and inspection cost is demonstrated.

  13. A Weibull characterization for tensile fracture of multicomponent brittle fibers

    NASA Technical Reports Server (NTRS)

    Barrows, R. G.

    1977-01-01

    A statistical characterization for multicomponent brittle fibers is presented. The method, which is an extension of usual Weibull distribution procedures, statistically considers the components making up a fiber (e.g., substrate, sheath, and surface) as separate entities and taken together as in a fiber. Tensile data for silicon carbide fiber and for an experimental carbon-boron alloy fiber are evaluated in terms of the proposed multicomponent Weibull characterization.

  14. Finite-size effects on return interval distributions for weakest-link-scaling systems.

    PubMed

    Hristopulos, Dionissios T; Petrakis, Manolis P; Kaniadakis, Giorgio

    2014-05-01

    The Weibull distribution is a commonly used model for the strength of brittle materials and earthquake return intervals. Deviations from Weibull scaling, however, have been observed in earthquake return intervals and the fracture strength of quasibrittle materials. We investigate weakest-link scaling in finite-size systems and deviations of empirical return interval distributions from the Weibull distribution function. Our analysis employs the ansatz that the survival probability function of a system with complex interactions among its units can be expressed as the product of the survival probability functions for an ensemble of representative volume elements (RVEs). We show that if the system comprises a finite number of RVEs, it obeys the κ-Weibull distribution. The upper tail of the κ-Weibull distribution declines as a power law, in contrast with Weibull scaling. The hazard rate function of the κ-Weibull distribution decreases linearly after a waiting time τ_c ∝ n^(1/m), where m is the Weibull modulus and n is the system size in terms of representative volume elements. We conduct statistical analysis of experimental data and simulations which show that the κ-Weibull provides competitive fits to the return interval distributions of seismic data and of avalanches in a fiber bundle model. In conclusion, using theoretical and statistical analysis of real and simulated data, we demonstrate that the κ-Weibull distribution is a useful model for extreme-event return intervals in finite-size systems. PMID:25353774
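
    A hedged sketch of the κ-Weibull survival function, built from the Kaniadakis κ-exponential; parameter values are illustrative, and the ordinary Weibull is recovered in the limit κ → 0:

    ```python
    import numpy as np

    def exp_kappa(x, kappa):
        # Kaniadakis exponential: exp_k(x) = (sqrt(1 + k^2 x^2) + k x)^(1/k)
        return (np.sqrt(1.0 + kappa**2 * x**2) + kappa * x) ** (1.0 / kappa)

    def kweibull_survival(t, beta, alpha, kappa):
        # Weibull-like bulk, power-law upper tail ~ t^(-alpha/kappa)
        return exp_kappa(-beta * t**alpha, kappa)

    t = np.logspace(-1, 2, 7)
    print("kappa-Weibull S(t):", kweibull_survival(t, beta=1.0, alpha=1.5, kappa=0.3))
    print("Weibull       S(t):", np.exp(-t**1.5))
    ```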

  15. Experimental evaluation of the strength distribution of E-glass fibres at high strain rates

    NASA Astrophysics Data System (ADS)

    Wang, Zhen

    1995-07-01

    A bimodal Weibull distribution function was applied to analyse the strength distribution of glass fibre bundles under tensile impact. The simulation was performed using a one-dimensional damage constitutive model. The results show that there were two concurrent flaw populations in the fracture process. The regression analysis using the bimodal Weibull distribution function was in good agreement with experiment.

  16. A Weibull statistics-based lignocellulose saccharification model and a built-in parameter accurately predict lignocellulose hydrolysis performance.

    PubMed

    Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu

    2015-09-01

    Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrates, enzymes and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and analysis of the glucose production levels when λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior, and the λ parameter can be used to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and the λ value in saccharification performance assessment are discussed. PMID:26121186
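
    A minimal sketch of the Weibull view of saccharification curves, fitting yield(t) = ymax·(1 − exp(−(t/λ)^n)) to synthetic data; the paper's exact parameterization may differ:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    t = np.array([2., 4., 8., 12., 24., 48., 72.])          # h
    glucose = np.array([8., 15., 27., 36., 55., 72., 80.])  # % yield, synthetic

    def weibull_yield(t, ymax, lam, n):
        # saccharification curve: saturating Weibull rise
        return ymax * (1.0 - np.exp(-(t / lam) ** n))

    (ymax, lam, n), _ = curve_fit(weibull_yield, t, glucose, p0=(90.0, 20.0, 1.0))
    print(f"lambda (characteristic time) = {lam:.1f} h, n = {n:.2f}, ymax = {ymax:.1f}%")
    ```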

  17. Modeling root reinforcement using root-failure Weibull survival function

    NASA Astrophysics Data System (ADS)

    Schwarz, M.; Giadrossich, F.; Cohen, D.

    2013-03-01

    Root networks contribute to slope stability through complicated interactions that include mechanical compression and tension. Due to the spatial heterogeneity of root distribution and the dynamics of root turnover, the quantification of root reinforcement on steep slopes is challenging, and consequently so is the calculation of slope stability. Despite considerable advances in root reinforcement modeling, some important aspects remain neglected. In this study we address in particular the role of root strength variability in the mechanical behavior of a root bundle. Many factors may contribute to the variability of root mechanical properties, even within a single diameter class. This work presents a new approach for quantifying root reinforcement that considers the variability of mechanical properties in each root diameter class. Using data from laboratory tensile tests and field pullout tests, we calibrate the parameters of the Weibull survival function to implement the variability of root strength in a numerical model for the calculation of root reinforcement (RBMw). The results show that, for both laboratory and field datasets, the parameters of the Weibull distribution may be considered constant, with the exponent equal to 2 and the normalized failure displacement equal to 1. Moreover, the results show that the variability of root strength in each root diameter class has a major influence on the behavior of a root bundle, with important implications when considering different approaches to slope stability calculation. Sensitivity analysis shows that the calibration of the tensile force and the elasticity of the roots, as well as the root distribution, are the most important components of the model. The new model allows the characterization of root reinforcement in terms of maximum pullout force, stiffness, and energy. Moreover, it simplifies the implementation of root reinforcement in slope stability models. The realistic quantification of root reinforcement for
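
    A hedged sketch of the bundle idea with the calibrated survival parameters quoted above (Weibull exponent 2, normalized failure displacement 1); the root counts, stiffnesses, and failure displacements are invented:

    ```python
    import numpy as np

    diameters = np.array([1.0, 2.0, 4.0, 8.0])   # root diameter classes (mm)
    counts = np.array([120, 40, 12, 3])          # roots per class (invented)
    stiffness = 50.0 * diameters**1.5            # spring constants (N/mm), assumed power law
    dx_fail = 0.5 + 1.0 * diameters              # mean failure displacement (mm), assumed

    def bundle_force(dx):
        # expected bundle load: elastic force of each class times its
        # Weibull survival (exponent 2, normalized failure displacement 1)
        survival = np.exp(-(dx / dx_fail) ** 2.0)
        return float(np.sum(counts * stiffness * dx * survival))

    dxs = np.linspace(0.0, 10.0, 201)
    forces = np.array([bundle_force(d) for d in dxs])
    peak = forces.argmax()
    print(f"max reinforcement ~ {forces[peak] / 1000:.1f} kN at dx = {dxs[peak]:.2f} mm")
    ```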

  18. A Weibull characterization for tensile fracture of multicomponent brittle fibers

    NASA Technical Reports Server (NTRS)

    Barrows, R. G.

    1977-01-01

    Necessary to the development and understanding of brittle fiber reinforced composites is a means to statistically describe fiber strength and strain-to-failure behavior. A statistical characterization for multicomponent brittle fibers is presented. The method, which is an extension of usual Weibull distribution procedures, statistically considers the components making up a fiber (e.g., substrate, sheath, and surface) as separate entities and taken together as in a fiber. Tensile data for silicon carbide fiber and for an experimental carbon-boron alloy fiber are evaluated in terms of the proposed multicomponent Weibull characterization.

  19. Empirical model based on Weibull distribution describing the destruction kinetics of natural microbiota in pineapple (Ananas comosus L.) puree during high-pressure processing.

    PubMed

    Chakraborty, Snehasis; Rao, Pavuluri Srinivasa; Mishra, Hari Niwas

    2015-10-15

    High pressure inactivation of natural microbiota, viz. aerobic mesophiles (AM), psychrotrophs (PC), yeasts and molds (YM), total coliforms (TC) and lactic acid bacteria (LAB), in pineapple puree was studied within the experimental domain of 0.1-600 MPa and 30-50 °C with a treatment time up to 20 min. A complete destruction of yeasts and molds was obtained at 500 MPa/50 °C/15 min, whereas no counts were detected for TC and LAB at 300 MPa/30 °C/15 min. A maximum of two log cycle reductions was obtained for YM during pulse pressurization at the severe process intensity of 600 MPa/50 °C/20 min. The Weibull model clearly described the non-linearity of the survival curves during the isobaric period. A tailing effect, as confirmed by the shape parameter (β) of the survival curve, was obtained in the case of YM (β<1), whereas a shouldering effect (β>1) was observed for the other microbial groups. Analogous to thermal death kinetics, the activation energy (Ea, kJ·mol(-1)) and the activation volume (Va, mL·mol(-1)) values were further computed to describe the temperature and pressure dependencies of the scale parameter (δ, min), respectively. A higher δ value was obtained for each microbe at a lower temperature, and δ decreased with an increase in pressure. A secondary kinetic model was developed describing the inactivation rate (k, min(-1)) as a function of pressure (P, MPa) and temperature (T, K), including the dependencies of Ea and Va on P and T, respectively. PMID:26202323
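
    A minimal sketch of fitting the Weibull survival model log10(N/N0) = −(t/δ)^β to an inactivation curve; the survival data below are synthetic:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    t = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 10.0, 15.0, 20.0])                  # min
    log_survival = np.array([-0.1, -0.3, -0.7, -1.3, -1.8, -2.6, -3.4, -3.9])  # log10(N/N0), synthetic

    def weibull_log_survival(t, delta, beta):
        return -(t / delta) ** beta

    (delta, beta), _ = curve_fit(weibull_log_survival, t, log_survival, p0=(5.0, 1.0))
    shape_note = "tailing (beta < 1)" if beta < 1 else "shouldering (beta > 1)"
    print(f"delta = {delta:.2f} min, beta = {beta:.2f} -> {shape_note}")
    ```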

  20. Shallow Flaws Under Biaxial Loading Conditions, Part II: Application of a Weibull Stress Analysis of the Cruciform Bend Specimen Using a Hydrostatic Stress Criterion

    SciTech Connect

    Bass, B.R.; McAfee, W.J.; Williams, P.T.

    1999-08-01

    Cruciform beam fracture mechanics specimens have been developed in the Heavy Section Steel Technology (HSST) Program at Oak Ridge National Laboratory (ORNL) to introduce a prototypic, far-field, out-of-plane biaxial bending stress component in the test section that approximates the nonlinear biaxial stresses resulting from pressurized-thermal-shock or pressure-temperature loading of a nuclear reactor pressure vessel (RPV). Matrices of cruciform beam tests were developed to investigate and quantify the effects of temperature, biaxial loading, and specimen size on the fracture initiation toughness of two-dimensional (constant depth), shallow, surface flaws. Tests were conducted under biaxial load ratios ranging from uniaxial to equibiaxial. These tests demonstrated that biaxial loading can have a pronounced effect on shallow-flaw fracture toughness in the lower transition temperature region for RPV materials. Two- and three-parameter Weibull models have been calibrated using a new scheme (developed at the University of Illinois) that maps toughness data from test specimens with distinctly different levels of crack-tip constraint to a small-scale-yielding (SSY) Weibull stress space. These models, using the new hydrostatic stress criterion in place of the more commonly used maximum principal stress in the kernel of the Weibull stress integral definition, have been shown to correlate the experimentally observed biaxial effect in cruciform specimens, thereby providing a scaling mechanism between uniaxial and biaxial loading states.

  1. Calculation of Weibull strength parameters and Batdorf flaw-density constants for volume- and surface-flaw-induced fracture in ceramics

    NASA Technical Reports Server (NTRS)

    Pai, Shantaram S.; Gyekenyesi, John P.

    1988-01-01

    The calculation of shape and scale parameters of the two-parameter Weibull distribution is described using the least-squares analysis and maximum likelihood methods for volume- and surface-flaw-induced fracture in ceramics with complete and censored samples. Detailed procedures are given for evaluating 90 percent confidence intervals for maximum likelihood estimates of shape and scale parameters, the unbiased estimates of the shape parameters, and the Weibull mean values and corresponding standard deviations. Furthermore, the necessary steps are described for detecting outliers and for calculating the Kolmogorov-Smirnov and the Anderson-Darling goodness-of-fit statistics and 90 percent confidence bands about the Weibull distribution. It also shows how to calculate the Batdorf flaw-density constants by using the Weibull distribution statistical parameters. The techniques described were verified with several example problems from the open literature, and were coded in the Structural Ceramics Analysis and Reliability Evaluation (SCARE) design program.

  2. Calculation of Weibull strength parameters, Batdorf flaw density constants and related statistical quantities using PC-CARES

    NASA Technical Reports Server (NTRS)

    Szatmary, Steven A.; Gyekenyesi, John P.; Nemeth, Noel N.

    1990-01-01

    This manual describes the operation and theory of the PC-CARES (Personal Computer-Ceramic Analysis and Reliability Evaluation of Structures) computer program for the IBM PC and compatibles running the PC-DOS/MS-DOS or IBM/MS-OS/2 (version 1.1 or higher) operating systems. The primary purpose of this code is to estimate Weibull material strength parameters, the Batdorf crack density coefficient, and other related statistical quantities. Included in the manual is the description of the calculation of shape and scale parameters of the two-parameter Weibull distribution using the least-squares analysis and maximum likelihood methods for volume- and surface-flaw-induced fracture in ceramics with complete and censored samples. The methods for detecting outliers and for calculating the Kolmogorov-Smirnov and the Anderson-Darling goodness-of-fit statistics and 90 percent confidence bands about the Weibull line, as well as the techniques for calculating the Batdorf flaw-density constants, are also described.

  3. A comparison of Weibull and β(sub Ic) analyses of transition range data

    SciTech Connect

    McCabe, D.E.

    1991-01-01

    Specimen size effects on K(sub Jc) data scatter in the transition range of fracture toughness have been explained by extremal (weakest link) statistics. In this investigation, compact specimens of A 533 grade B steel were tested in sizes ranging from 1/2TC(T) to 4TC(T) with sufficient replication to obtain good three-parameter Weibull characterization of data distributions. The optimum fitting parameters for an assumed Weibull slope of 4 were calculated. Extremal statistics analysis was applied to the 1/2TC(T) data to predict median K(sub Jc) values for 1TC(T), 2TC(T), and 4TC(T) specimens. The distributions from experimentally developed 1TC(T), 2TC(T), and 4TC(T) data tended to confirm the predictions. However, the extremal prediction model does not work well at lower-shelf toughness. At -150 °C the extremal model predicts a specimen size effect where in reality there is no size effect.
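
    The weakest-link thickness scaling implied by a Weibull slope of 4 can be sketched as below (an E1921-style adjustment with an assumed threshold Kmin); the numbers are illustrative, not the A 533 grade B data:

    ```python
    Kmin = 20.0   # MPa*sqrt(m): assumed Weibull threshold (location) parameter

    def size_adjust(K_B1, B1, B2):
        """Predict the median K_Jc at thickness B2 from the median at B1."""
        return Kmin + (K_B1 - Kmin) * (B1 / B2) ** 0.25   # Weibull slope 4

    K_half_T = 160.0   # illustrative median K_Jc from 1/2TC(T) tests
    for size in (1, 2, 4):
        K_pred = size_adjust(K_half_T, 0.5, size)
        print(f"{size}TC(T): predicted median K_Jc = {K_pred:.0f} MPa*sqrt(m)")
    ```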

  4. Bias in the Weibull Strength Estimation of a SiC Fiber for the Small Gauge Length Case

    NASA Astrophysics Data System (ADS)

    Morimoto, Tetsuya; Nakagawa, Satoshi; Ogihara, Shinji

    It is known that the single-modal Weibull model describes well the size effect on brittle fiber tensile strength. However, for some ceramic fibers the single-modal Weibull model has been reported to give biased estimates of the gauge-length dependence. A hypothesis on the bias is that the density of critical defects is very small, so the fracture probability of small-gauge-length samples is distributed in a discrete manner, which makes the Weibull parameters dependent on the gauge length. Tyranno ZMI Si-Zr-C-O fiber was selected as an example fiber. Tensile tests were performed at several gauge lengths, and the derived Weibull parameters showed a dependence on the gauge length. Fracture surfaces were observed with SEM and classified into characteristic fracture patterns. The percentage of each fracture pattern was also found to depend on the gauge length. This may be an important factor in the dependence of the Weibull parameters on the gauge length.

  5. Measuring the Weibull modulus of microscope slides

    NASA Technical Reports Server (NTRS)

    Sorensen, Carl D.

    1992-01-01

    The objectives are that students will understand why a three-point bending test is used for ceramic specimens, learn how Weibull statistics are used to measure the strength of brittle materials, and appreciate the amount of variation in the strength of brittle materials with low Weibull modulus. They will understand how the modulus of rupture is used to represent the strength of specimens in a three-point bend test. In addition, students will learn that a logarithmic transformation can be used to convert an exponent into the slope of a straight line. The experimental procedures are explained.
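
    A minimal sketch of the exercise: convert three-point-bend failure loads to moduli of rupture, then read the Weibull modulus off the slope of the logarithmically transformed plot. All dimensions and loads are invented:

    ```python
    import numpy as np

    # three-point bend geometry (mm), assumed values for a microscope slide
    span, width, depth = 60.0, 25.0, 1.0
    loads = np.sort(np.array([18., 22., 25., 27., 30., 33., 36., 41.]))  # failure loads (N), invented

    mor = 3.0 * loads * span / (2.0 * width * depth**2)   # modulus of rupture (MPa)

    n = mor.size
    prob = (np.arange(1, n + 1) - 0.5) / n                # failure probability estimates
    # the logarithmic transformation turns the Weibull exponent into a slope
    slope, intercept = np.polyfit(np.log(mor), np.log(-np.log(1.0 - prob)), 1)
    print(f"estimated Weibull modulus: {slope:.1f}")
    ```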

  6. Distributed analysis at LHCb

    NASA Astrophysics Data System (ADS)

    Williams, Mike; Egede, Ulrik; Paterson, Stuart; LHCb Collaboration

    2011-12-01

    The distributed analysis experience to date at LHCb has been positive: job success rates are high and wait times for high-priority jobs are low. LHCb users access the grid using the GANGA job-management package, while the LHCb virtual organization manages its resources using the DIRAC package. This clear division of labor has benefitted LHCb and its users greatly; it is a major reason why distributed analysis at LHCb has been so successful. The newly formed LHCb distributed analysis support team has also proved to be a success.

  7. Transmission overhaul and replacement predictions using Weibull and renewal theory

    NASA Technical Reports Server (NTRS)

    Savage, M.; Lewicki, D. G.

    1989-01-01

    A method to estimate the frequency of transmission overhauls is presented. This method is based on the two-parameter Weibull statistical distribution for component life. A second method is presented to estimate the number of replacement components needed to support the transmission overhaul pattern. The second method is based on renewal theory. Confidence statistics are applied with both methods to improve the statistical estimate of sample behavior. A transmission example is also presented to illustrate the use of the methods. Transmission overhaul frequency and component replacement calculations are included in the example.
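
    A hedged sketch of the renewal-theory step: simulate two-parameter Weibull component renewals over an overhaul horizon to estimate spare-part demand. The Weibull parameters and horizon are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    shape, scale = 2.5, 3000.0   # two-parameter Weibull life of one component (h), assumed
    horizon = 10000.0            # operating hours between overhauls
    trials = 20000

    counts = np.zeros(trials, dtype=int)
    for i in range(trials):
        t = scale * rng.weibull(shape)       # first component life
        while t <= horizon:
            counts[i] += 1                   # failure before the horizon -> replacement
            t += scale * rng.weibull(shape)  # renewal: install a fresh component

    print(f"expected replacements per position: {counts.mean():.2f}")
    print(f"95th-percentile spares demand: {np.percentile(counts, 95):.0f}")
    ```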

  8. Transmission overhaul and replacement predictions using Weibull and renewal theory

    NASA Technical Reports Server (NTRS)

    Savage, M.; Lewicki, D. G.

    1989-01-01

    A method to estimate the frequency of transmission overhauls is presented. This method is based on the two-parameter Weibull statistical distribution for component life. A second method is presented to estimate the number of replacement components needed to support the transmission overhaul pattern. The second method is based on renewal theory. Confidence statistics are applied with both methods to improve the statistical estimate of sample behavior. A transmission example is also presented to illustrate the use of the methods. Transmission overhaul frequency and component replacement calculations are included in the example.

  9. Probability density function characterization for aggregated large-scale wind power based on Weibull mixtures

    DOE PAGES

    Gomez-Lazaro, Emilio; Bueso, Maria C.; Kessler, Mathieu; Martin-Martinez, Sergio; Zhang, Jie; Hodge, Bri-Mathias; Molina-Garcia, Angel

    2016-02-02

    Here, the Weibull probability distribution has been widely applied to characterize wind speeds for wind energy resources. Wind power generation modeling is different, however, due in particular to power curve limitations, wind turbine control methods, and transmission system operation requirements. These differences are even greater for aggregated wind power generation in power systems with high wind penetration. Consequently, models based on a single Weibull component can provide poor characterizations of aggregated wind power generation. With this aim, the present paper focuses on Weibull mixtures for characterizing the probability density function (PDF) of aggregated wind power generation. PDFs of wind power data are first classified according to hourly and seasonal patterns. The selection of the number of components in the mixture is analyzed through two well-known criteria: the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Finally, the optimal number of Weibull components for maximum likelihood is explored for the defined patterns, including the estimated weight, scale, and shape parameters. Results show that multi-Weibull models are more suitable for characterizing aggregated wind power data due to the impact of distributed generation, the variety of wind speed values, and wind power curtailment.
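
    As a rough illustration of the model-selection step described above, the following sketch fits one- and two-component Weibull models to synthetic stand-in data and compares AIC and BIC. All parameter values are placeholders; the paper's data and fitting details are not reproduced here:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)
# Synthetic stand-in for normalized aggregated wind-power values (arbitrary units)
p = np.concatenate([0.25 * rng.weibull(1.6, 600),
                    0.75 * rng.weibull(4.0, 400)])

def mixture_nll(theta, x):
    """Negative log-likelihood of a two-component Weibull mixture."""
    w, k1, c1, k2, c2 = theta
    pdf = (w * stats.weibull_min.pdf(x, k1, scale=c1)
           + (1.0 - w) * stats.weibull_min.pdf(x, k2, scale=c2))
    return -np.sum(np.log(pdf + 1e-300))

# One-component fit (location fixed at zero)
k_hat, _, c_hat = stats.weibull_min.fit(p, floc=0)
nll1 = -np.sum(stats.weibull_min.logpdf(p, k_hat, scale=c_hat))

# Two-component fit by direct likelihood maximization
res = optimize.minimize(mixture_nll, x0=[0.5, 1.5, 0.3, 3.0, 0.7], args=(p,),
                        bounds=[(0.01, 0.99), (0.1, 20), (0.01, 5),
                                (0.1, 20), (0.01, 5)])

for name, nll, npar in [("1-component", nll1, 2), ("2-component", res.fun, 5)]:
    aic = 2 * npar + 2 * nll
    bic = npar * np.log(len(p)) + 2 * nll
    print(f"{name}: AIC = {aic:.1f}, BIC = {bic:.1f}")
```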

  10. Impact of Three-Parameter Weibull Models in Probabilistic Assessment of Earthquake Hazards

    NASA Astrophysics Data System (ADS)

    Pasari, Sumanta; Dikshit, Onkar

    2014-07-01

    This paper investigates the suitability of a three-parameter (scale, shape, and location) Weibull distribution in the probabilistic assessment of earthquake hazards. The performance is also compared with two other popular models from the same Weibull family, namely the two-parameter Weibull model and the inverse Weibull model. A complete and homogeneous earthquake catalog (Yadav et al. in Pure Appl Geophys 167:1331-1342, 2010) of 20 events (M ≥ 7.0), spanning the period 1846 to 1995 from north-east India and its surrounding region (20°-32°N and 87°-100°E), is used to perform this study. The model parameters are initially estimated from graphical plots and later confirmed by statistical estimation, namely maximum likelihood estimation (MLE) and the method of moments (MoM). The asymptotic variance-covariance matrix of the MLE-estimated parameters is further calculated on the basis of the Fisher information matrix (FIM). The model suitability is appraised using different statistical goodness-of-fit tests. For the study area, the estimated conditional probability of an earthquake within a decade comes out to be very high (≥0.90) for an elapsed time of 18 years (i.e., 2013). The study also reveals that the location parameter provides more flexibility to the three-parameter Weibull model in comparison with the two-parameter Weibull model. Therefore, it is suggested that the three-parameter Weibull model has high importance in empirical modeling of earthquake recurrence and seismic hazard assessment.
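
    A hedged sketch of the general approach (a three-parameter Weibull fit followed by a conditional hazard calculation), using hypothetical inter-event times rather than the catalog analyzed in the paper; the conditional probability of an event within a window Δ after elapsed time t is [S(t) - S(t+Δ)]/S(t), where S is the survival function:

```python
import numpy as np
from scipy import stats

# Hypothetical inter-event times (years) between large earthquakes
t = np.array([5.2, 7.9, 3.4, 12.1, 8.8, 6.5, 9.7, 4.3, 11.0, 7.1,
              6.2, 10.4, 5.8, 8.1, 9.3, 4.9, 7.6, 6.8, 10.9])

# Three-parameter Weibull fit: shape c, location (threshold), and scale
c, loc, scale = stats.weibull_min.fit(t)
print(f"shape = {c:.2f}, location = {loc:.2f} yr, scale = {scale:.2f} yr")

# Conditional probability of an event within the next `window` years,
# given `elapsed` years since the last event: 1 - S(t+w)/S(t)
elapsed, window = 18.0, 10.0
S = stats.weibull_min.sf
p_cond = 1.0 - S(elapsed + window, c, loc, scale) / S(elapsed, c, loc, scale)
print(f"P(event within {window:.0f} yr | {elapsed:.0f} yr elapsed) = {p_cond:.3f}")
```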

  11. Distributed analysis in ATLAS

    NASA Astrophysics Data System (ADS)

    Dewhurst, A.; Legger, F.

    2015-12-01

    The ATLAS experiment accumulated more than 140 PB of data during the first run of the Large Hadron Collider (LHC) at CERN. The analysis of such an amount of data is a challenging task for the distributed physics community. The Distributed Analysis (DA) system of the ATLAS experiment is an established and stable component of the ATLAS distributed computing operations. About half a million user jobs are running daily on DA resources, submitted by more than 1500 ATLAS physicists. The reliability of the DA system during the first run of the LHC and the following shutdown period has been high thanks to the continuous automatic validation of the distributed analysis sites and the user support provided by a dedicated team of expert shifters. During the LHC shutdown, the ATLAS computing model has undergone several changes to improve the analysis workflows, including the re-design of the production system, a new analysis data format and event model, and the development of common reduction and analysis frameworks. We report on the impact such changes have on the DA infrastructure, describe the new DA components, and include recent performance measurements.

  12. Modeling root reinforcement using a root-failure Weibull survival function

    NASA Astrophysics Data System (ADS)

    Schwarz, M.; Giadrossich, F.; Cohen, D.

    2013-11-01

    Root networks contribute to slope stability through complex interactions with soil that include mechanical compression and tension. Due to the spatial heterogeneity of root distribution and the dynamics of root turnover, the quantification of root reinforcement on steep slopes is challenging, and consequently so is the calculation of slope stability. Although considerable progress has been made, some important aspects of root mechanics remain neglected. In this study we specifically address the role of root-strength variability in the mechanical behavior of a root bundle. Many factors contribute to the variability of root mechanical properties, even within a single diameter class. This work presents a new approach for quantifying root reinforcement that considers the variability of mechanical properties within each root diameter class. Using data from laboratory tensile tests and field pullout tests, we calibrate the parameters of the Weibull survival function to implement the variability of root strength in a numerical model for the calculation of root reinforcement (RBMw). The results show that, for both laboratory and field data sets, the parameters of the Weibull distribution may be considered constant, with the exponent equal to 2 and the normalized failure displacement equal to 1. Moreover, the results show that the variability of root strength in each root diameter class has a major influence on the behavior of a root bundle, with important implications for different approaches to slope stability calculation. Sensitivity analysis shows that calibration of the tensile-force equations, root elasticity, and root distribution are the most important steps. The new model allows the characterization of root reinforcement in terms of maximum pullout force, stiffness, and energy, and it simplifies the implementation of root reinforcement in slope stability models. The realistic quantification of root reinforcement for tensile
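
    A schematic sketch of the survival-function idea, not the authors' RBMw code: each intact root is assumed to carry load proportional to displacement, and the intact fraction of a diameter class follows a Weibull survival function with exponent 2, as estimated in the paper. All class counts, stiffnesses, and failure displacements below are invented for illustration:

```python
import numpy as np

# Schematic root-bundle pullout force with a Weibull survival function.
diam = np.array([1.0, 2.0, 4.0])     # root diameter classes (mm), assumed
count = np.array([120, 40, 10])      # roots per class, assumed
k = 15.0 * diam**1.5                 # class stiffness (N/mm), assumed power law
dx_fail = 2.0 * diam**0.5            # mean failure displacement (mm), assumed
omega = 2.0                          # Weibull exponent (paper estimates ~2)

def bundle_force(dx):
    """Expected bundle force at displacement dx: load carried by each class
    times the Weibull survival fraction of still-intact roots."""
    survival = np.exp(-(dx / dx_fail) ** omega)
    return np.sum(count * k * dx * survival)

for dx in [0.5, 1.0, 2.0, 4.0]:
    print(f"displacement {dx:.1f} mm -> bundle force {bundle_force(dx):.0f} N")
```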

  13. Structural characterization of genomes by large scale sequence-structure threading: application of reliability analysis in structural genomics

    PubMed Central

    Cherkasov, Artem; Ho Sui, Shannan J; Brunham, Robert C; Jones, Steven JM

    2004-01-01

    Background We establish that the occurrence of protein folds among genomes can be accurately described with a Weibull function. Systems which exhibit Weibull character can be interpreted with reliability theory commonly used in engineering analysis. For instance, Weibull distributions are widely used in reliability, maintainability and safety work to model time-to-failure of mechanical devices, mechanisms, building constructions and equipment. Results We have found that the Weibull function describes protein fold distribution within and among genomes more accurately than conventional power functions which have been used in a number of structural genomic studies reported to date. It has also been found that the Weibull reliability parameter β for protein fold distributions varies between genomes and may reflect differences in rates of gene duplication in evolutionary history of organisms. Conclusions The results of this work demonstrate that reliability analysis can provide useful insights and testable predictions in the fields of comparative and structural genomics. PMID:15274750

  14. Atlas Distributed Analysis Tools

    NASA Astrophysics Data System (ADS)

    de La Hoz, Santiago Gonzalez; Ruiz, Luis March; Liko, Dietrich

    2008-06-01

    The ATLAS production system has been successfully used to run production of simulation data at an unprecedented scale. Up to 10000 jobs were processed in one day. The experience gained operating the system on several grid flavours was essential to performing user analysis with grid resources. First tests of the distributed analysis system were then performed. In the preparation phase, data were registered in the LHC File Catalog (LFC) and replicated to external sites. For the main test, few resources were used. All these tests are only a first step towards the validation of the computing model. The ATLAS management computing board decided to integrate the collaboration's distributed analysis efforts into a single project, GANGA. The goal is to test the reconstruction and analysis software in a large-scale data production using grid flavours at several sites. GANGA allows trivial switching between running test jobs on a local batch system and running large-scale analyses on the Grid; it provides job splitting and merging, and includes automated job monitoring and output retrieval.

  15. Survival extrapolation using the poly-Weibull model

    PubMed Central

    Lunn, David; Sharples, Linda D

    2015-01-01

    Recent studies of (cost-) effectiveness in cardiothoracic transplantation have required estimation of mean survival over the lifetime of the recipients. In order to calculate mean survival, the complete survivor curve is required but is often not fully observed, so that survival extrapolation is necessary. After transplantation, the hazard function is bathtub-shaped, reflecting latent competing risks which operate additively in overlapping time periods. The poly-Weibull distribution is a flexible parametric model that may be used to extrapolate survival and has a natural competing risks interpretation. In addition, treatment effects and subgroups can be modelled separately for each component of risk. We describe the model and develop inference procedures using freely available software. The methods are applied to two problems from cardiothoracic transplantation. PMID:21937472
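
    A minimal numerical sketch of the poly-Weibull model under assumed, illustrative parameters: the survival function is S(t) = exp[-Σ_i (t/b_i)^(a_i)], the hazard is the sum of the component Weibull hazards (which yields a bathtub shape when an early decreasing-hazard component is combined with a late increasing-hazard one), and mean survival for extrapolation follows by integrating S(t):

```python
import numpy as np

# Poly-Weibull: latent competing risks acting additively.
a = np.array([0.5, 3.0])    # shape parameters (assumed, not from the paper)
b = np.array([2.0, 15.0])   # scale parameters in years (assumed)

def survival(t):
    return np.exp(-np.sum((t[:, None] / b) ** a, axis=1))

def hazard(t):
    return np.sum((a / b) * (t[:, None] / b) ** (a - 1), axis=1)

t = np.array([0.1, 1.0, 5.0, 10.0, 20.0])
for ti, si, hz in zip(t, survival(t), hazard(t)):
    print(f"t = {ti:5.1f} y  S(t) = {si:.3f}  h(t) = {hz:.3f}")  # bathtub h(t)

# Mean survival: integrate S(t) over the recipients' lifetime
tt = np.linspace(0.0, 100.0, 20001)
print(f"mean survival ~ {np.trapz(survival(tt), tt):.1f} years")
```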

  16. Survival extrapolation using the poly-Weibull model.

    PubMed

    Demiris, Nikolaos; Lunn, David; Sharples, Linda D

    2015-04-01

    Recent studies of (cost-) effectiveness in cardiothoracic transplantation have required estimation of mean survival over the lifetime of the recipients. In order to calculate mean survival, the complete survivor curve is required but is often not fully observed, so that survival extrapolation is necessary. After transplantation, the hazard function is bathtub-shaped, reflecting latent competing risks which operate additively in overlapping time periods. The poly-Weibull distribution is a flexible parametric model that may be used to extrapolate survival and has a natural competing risks interpretation. In addition, treatment effects and subgroups can be modelled separately for each component of risk. We describe the model and develop inference procedures using freely available software. The methods are applied to two problems from cardiothoracic transplantation. PMID:21937472

  17. Fracture strength of ultrananocrystalline diamond thin films—identification of Weibull parameters

    NASA Astrophysics Data System (ADS)

    Espinosa, H. D.; Peng, B.; Prorok, B. C.; Moldovan, N.; Auciello, O.; Carlisle, J. A.; Gruen, D. M.; Mancini, D. C.

    2003-11-01

    The fracture strength of ultrananocrystalline diamond (UNCD) has been investigated using tensile testing of freestanding submicron films. Specifically, the fracture strength of UNCD membranes, grown by microwave plasma chemical vapor deposition (MPCVD), was measured using the membrane deflection experiment developed by Espinosa and co-workers. The data show that fracture strength follows a Weibull distribution. Furthermore, we show that the Weibull parameters are highly dependent on the seeding process used in the growth of the films. When seeding was performed with microsized diamond particles, using mechanical polishing, the stress resulting in a probability of failure of 63% was found to be 1.74 GPa, and the Weibull modulus was 5.74. By contrast, when seeding was performed with nanosized diamond particles, using ultrasonic agitation, the stress resulting in a probability of failure of 63%, increased to 4.13 GPa, and the Weibull modulus was 10.76. The tests also provided the elastic modulus of UNCD, which was found to vary from 940 to 970 GPa for both micro- and nanoseeding. The investigation highlights the role of microfabrication defects on material properties and reliability, as a function of seeding technique, when identical MPCVD chemistry is employed. The parameters identified in this study are expected to aid the designer of microelectromechanical systems devices employing UNCD films.

  18. Weibull parameters of Yakuno basalt targets used in documented high-velocity impact experiments

    NASA Astrophysics Data System (ADS)

    Nakamura, Akiko M.; Michel, Patrick; Setoh, Masato

    2007-02-01

    In this paper we describe our measurements of the Weibull parameters of a specific basalt material, called Yakuno basalt, which was used in documented high-velocity impact experiments. The outcomes of these experiments have been widely used to validate numerical codes of fragmentation developed in the context of planetary science. However, the distribution of incipient flaws in the targets, usually characterized by the Weibull parameters, has generally been implemented in the codes with values chosen to match the experimental outcomes; hence the validity of numerical simulations remains to be assessed against the actual values of these parameters from laboratory measurements. Here we follow the original method proposed by Weibull in 1939 to measure these parameters for this Yakuno basalt. We obtain a value of the Weibull modulus (also called the shape parameter) m in the range 15-17, with a typical error of about 1.0 for each trial. This value is larger than the one corresponding to simulation fits to the experimental data, generally around 9.5. The characteristic strength, which corresponds to 63.2% failure of a sample of similar specimens and defines the second Weibull (scale) parameter, is estimated to be 19.3-19.4 MPa with a typical error of about 0.05 MPa. This parameter does not seem to be sensitive to the different loading rates used in the measurements. A complete database of impact experiments on basalt targets, including both the important initial target parameters and the detailed outcomes of their disruptions, is now available for validity tests of numerical fragmentation codes.

  19. Collective Weibull behavior of social atoms: Application of the rank-ordering statistics to historical extreme events

    NASA Astrophysics Data System (ADS)

    Chen, Chien-Chih; Tseng, Chih-Yuan; Telesca, Luciano; Chi, Sung-Ching; Sun, Li-Chung

    2012-02-01

    Analogous to crustal earthquakes in natural fault systems, we here consider dynasty collapses as extreme events in human society. Duration data for ancient Chinese and Egyptian dynasties provide a good opportunity to explore the collective behavior of so-called social atoms. By means of rank-ordering statistics, we demonstrate that the duration data of those ancient dynasties can be described with good accuracy by the Weibull distribution. It is thus remarkable that the distribution of time to failure of a human society, i.e. the disorder of a historical dynasty, follows the widely accepted Weibull process, just as natural materials fail.
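
    A small sketch of the rank-ordering approach, with invented durations standing in for the historical catalog: the r-th largest value is paired with its exceedance frequency and compared against the Weibull survival function, which is linear in double-logarithmic coordinates since ln(-ln S) = k·ln x - k·ln c:

```python
import numpy as np

# Illustrative dynasty durations in years (not the historical catalog)
durations = np.sort(np.array([12, 25, 38, 47, 55, 68, 79, 88, 102, 115,
                              130, 151, 170, 196, 220, 250, 276]))[::-1]
n = len(durations)
ranks = np.arange(1, n + 1)
exceedance = ranks / (n + 1.0)   # P(X >= x_r) for the r-th largest value

# Weibull survival S(x) = exp(-(x/c)**k)  =>  ln(-ln S) = k*ln(x) - k*ln(c)
x = np.log(durations)
y = np.log(-np.log(exceedance))
k, b = np.polyfit(x, y, 1)
print(f"Weibull shape k ~ {k:.2f}, scale c ~ {np.exp(-b / k):.0f} years")
```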

  20. Aerospace Applications of Weibull and Monte Carlo Simulation with Importance Sampling

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.

    1998-01-01

    Recent developments in reliability modeling and computer technology have made it practical to use the Weibull time-to-failure distribution to model the system reliability of complex fault-tolerant computer-based systems. These system models are becoming increasingly popular in space systems applications as a result of mounting data that support a Weibull failure distribution with decreasing failure rate and the expectation of increased system reliability. This presentation introduces the new reliability modeling developments and demonstrates their application to a novel space system. The application is a proposed guidance, navigation, and control (GN&C) system for use in a long duration manned spacecraft for a possible Mars mission. Comparisons to the constant failure rate model are presented and the ramifications of doing so are discussed.
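
    To illustrate the contrast with the constant-failure-rate model mentioned above, the following sketch compares a decreasing-hazard Weibull (shape below 1) with an exponential model of equal mean life; all parameter values are illustrative only:

```python
import numpy as np
from math import gamma

# Decreasing-hazard Weibull (shape < 1: infant-mortality behaviour), assumed values
beta = 0.7                         # Weibull shape
eta = 1000.0                       # Weibull scale (hours)
mttf = eta * gamma(1 + 1 / beta)   # mean time to failure of the Weibull
lam = 1.0 / mttf                   # exponential rate with the same mean life

for t in [100.0, 500.0, 2000.0]:
    r_weibull = np.exp(-(t / eta) ** beta)
    r_exp = np.exp(-lam * t)
    print(f"t = {t:6.0f} h  R_weibull = {r_weibull:.3f}  R_exp = {r_exp:.3f}")
```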

  1. A Weibull brittle material failure model for the ABAQUS computer program

    SciTech Connect

    Bennett, J.

    1991-08-01

    A statistical failure theory for brittle materials that traces its origins to the Weibull distribution function is developed for use in the general purpose ABAQUS finite element computer program. One of the fundamental assumptions for this development is that Mode 1 microfractures perpendicular to the direction of the principal stress contribute independently to the fast fracture. The theory is implemented by a user subroutine for ABAQUS. Example problems illustrating the capability and accuracy of the model are given. 24 refs., 12 figs.

  2. The distribution of first-passage times and durations in FOREX and future markets

    NASA Astrophysics Data System (ADS)

    Sazuka, Naoya; Inoue, Jun-ichi; Scalas, Enrico

    2009-07-01

    Possible distributions are discussed for intertrade durations and first-passage processes in financial markets. The viewpoint of renewal theory is adopted. In order to represent market data with relatively long durations, two types of distributions are used, namely a distribution derived from the Mittag-Leffler survival function and the Weibull distribution. For the Mittag-Leffler type distribution, the average waiting time (residual life time) is strongly dependent on the choice of a cut-off parameter t_max, whereas the results based on the Weibull distribution do not depend on such a cut-off. Therefore, a Weibull distribution is more convenient than a Mittag-Leffler type if one wishes to evaluate relevant statistics such as the average waiting time in financial markets with long durations. On the other hand, we find that the Gini index is rather independent of the cut-off parameter. Based on the above considerations, we propose a good candidate for describing the distribution of first-passage times in a market: the Weibull distribution with a power-law tail. This distribution compensates for the gap between theoretical and empirical results more efficiently than a simple Weibull distribution. It should be stressed that a Weibull distribution with a power-law tail is more flexible than the Mittag-Leffler distribution, which itself can be approximated by a Weibull distribution and a power law. Indeed, the key point is that in the former case there is freedom of choice for the exponent of the power law attached to the Weibull distribution, which can exceed 1 in order to reproduce decays faster than possible with a Mittag-Leffler distribution. We also give a useful formula to determine an optimal crossover point minimizing the difference between the empirical average waiting time and the one predicted from renewal theory. Moreover, we discuss the limitations of our distributions by applying them to the analysis of the BTP future and calculating the average waiting

  3. Time-dependent fiber bundles with local load sharing. II. General Weibull fibers.

    PubMed

    Phoenix, S Leigh; Newman, William I

    2009-12-01

    Fiber bundle models (FBMs) are useful tools in understanding failure processes in a variety of material systems. While the fibers and load sharing assumptions are easily described, FBM analysis is typically difficult. Monte Carlo methods are also hampered by the severe computational demands of large bundle sizes, which overwhelm just as behavior relevant to real materials starts to emerge. For large size scales, interest continues in idealized FBMs that assume either equal load sharing (ELS) or local load sharing (LLS) among fibers, rules that reflect features of real load redistribution in elastic lattices. The present work focuses on a one-dimensional bundle of N fibers under LLS where life consumption in a fiber follows a power law in its load, with exponent rho, and integrated over time. This life consumption function is further embodied in a functional form resulting in a Weibull distribution for lifetime under constant fiber stress, with Weibull exponent beta. Thus the failure rate of a fiber depends on its past load history, except for beta=1. We develop asymptotic results validated by Monte Carlo simulation using a computational algorithm developed in our previous work [Phys. Rev. E 63, 021507 (2001)] that greatly increases the size, N, of treatable bundles (e.g., 10^6 fibers in 10^3 realizations). In particular, our algorithm is O(N ln N), in contrast with former algorithms, which were O(N^2), making this investigation possible. Regimes are found for (beta, rho) pairs that yield contrasting behavior for large N. For rho>1 and large N, brittle weakest volume behavior emerges in terms of characteristic elements (groupings of fibers) derived from critical cluster formation, and the lifetime eventually goes to zero as N-->infinity, unlike ELS, which yields a finite limiting mean. For 1/2<=rho<=1, however, LLS has remarkably similar behavior to ELS (appearing to be virtually identical for rho=1) with an asymptotic Gaussian lifetime distribution and a

  4. Time-dependent fiber bundles with local load sharing. II. General Weibull fibers

    NASA Astrophysics Data System (ADS)

    Phoenix, S. Leigh; Newman, William I.

    2009-12-01

    Fiber bundle models (FBMs) are useful tools in understanding failure processes in a variety of material systems. While the fibers and load sharing assumptions are easily described, FBM analysis is typically difficult. Monte Carlo methods are also hampered by the severe computational demands of large bundle sizes, which overwhelm just as behavior relevant to real materials starts to emerge. For large size scales, interest continues in idealized FBMs that assume either equal load sharing (ELS) or local load sharing (LLS) among fibers, rules that reflect features of real load redistribution in elastic lattices. The present work focuses on a one-dimensional bundle of N fibers under LLS where life consumption in a fiber follows a power law in its load, with exponent ρ, and integrated over time. This life consumption function is further embodied in a functional form resulting in a Weibull distribution for lifetime under constant fiber stress, with Weibull exponent β. Thus the failure rate of a fiber depends on its past load history, except for β=1. We develop asymptotic results validated by Monte Carlo simulation using a computational algorithm developed in our previous work [Phys. Rev. E 63, 021507 (2001)] that greatly increases the size, N, of treatable bundles (e.g., 10^6 fibers in 10^3 realizations). In particular, our algorithm is O(N ln N), in contrast with former algorithms, which were O(N^2), making this investigation possible. Regimes are found for (β,ρ) pairs that yield contrasting behavior for large N. For ρ>1 and large N, brittle weakest volume behavior emerges in terms of characteristic elements (groupings of fibers) derived from critical cluster formation, and the lifetime eventually goes to zero as N→∞, unlike ELS, which yields a finite limiting mean. For 1/2≤ρ≤1, however, LLS has remarkably similar behavior to ELS (appearing to be virtually identical for ρ=1) with an asymptotic Gaussian lifetime distribution and a

  5. Comparison of Weibull strength parameters from flexure and spin tests of brittle materials

    NASA Technical Reports Server (NTRS)

    Holland, Frederic A., Jr.; Zaretsky, Erwin V.

    1991-01-01

    Fracture data from five series of four-point bend tests of beams and spin tests of flat annular disks were reanalyzed. Silicon nitride and graphite were the test materials. The experimental fracture strengths of the disks were compared with the predicted strengths based on both volume flaw and surface flaw analyses of four-point bend data. Volume flaw analysis resulted in a better correlation between disks and beams in three of the five test series than did surface flaw analysis. The Weibull moduli and characteristic gage strengths for the disks and beams were also compared. Differences in the experimental Weibull slopes were not statistically significant. It was shown that results from the beam tests can predict the fracture strength of rotating disks.

  6. Distributed and Collaborative Software Analysis

    NASA Astrophysics Data System (ADS)

    Ghezzi, Giacomo; Gall, Harald C.

    Throughout the years software engineers have come up with a myriad of specialized tools and techniques that focus on a certain type of software analysis, such as source code analysis, co-change analysis or bug prediction. However, easy and straightforward synergies between these analyses and tools rarely exist because of their stand-alone nature, their platform dependence, their different input and output formats and the variety of data to analyze. As a consequence, distributed and collaborative software analysis scenarios, and in particular interoperability, are severely limited. We describe a distributed and collaborative software analysis platform that allows for seamless interoperability of software analysis tools across platform, geographical and organizational boundaries. We realize software analysis tools as services that can be accessed and composed over the Internet. These distributed analysis services shall be widely accessible through our incrementally augmented Software Analysis Broker, where organizations and tool providers can register and share their tools. To allow (semi-)automatic use and composition of these tools, they are classified and mapped into a software analysis taxonomy and adhere to specific meta-models and ontologies for their category of analysis.

  7. Effect of Individual Component Life Distribution on Engine Life Prediction

    NASA Technical Reports Server (NTRS)

    Zaretsky, Erwin V.; Hendricks, Robert C.; Soditus, Sherry M.

    2003-01-01

    The effect of individual engine component life distributions on engine life prediction was determined. A Weibull-based life and reliability analysis of the NASA Energy Efficient Engine was conducted. The engine's life at a 95 and 99.9 percent probability of survival was determined based upon the engine manufacturer's original life calculations and assumed values of each of the components' cumulative life distributions as represented by a Weibull slope. The lives of the high-pressure turbine (HPT) disks and blades were also evaluated individually and as a system in a similar manner. Knowing the statistical cumulative distribution of each engine component with reasonable engineering certainty is a condition precedent to predicting the life and reliability of an entire engine. The life of a system at a given reliability will be less than the life of the lowest-lived component in the system at the same reliability (probability of survival). Where the Weibull slopes of all the engine components are equal, the Weibull slope had a minimal effect on the engine's L0.1 life prediction. However, at a probability of survival of 95 percent (L5 life), life decreased with increasing Weibull slope.
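
    A minimal sketch of the series-system idea underlying this analysis: system reliability is the product of the component reliabilities, and the system life at a given probability of survival can be solved numerically. The component Weibull parameters below are invented for illustration, not the engine's values:

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative components: (Weibull slope beta, characteristic life eta in hours)
components = [(1.5, 9000.0), (2.0, 12000.0), (3.0, 15000.0)]

def system_reliability(t):
    """Series system: product of component Weibull reliabilities."""
    return np.prod([np.exp(-(t / eta) ** beta) for beta, eta in components])

def life_at_reliability(target):
    """Solve system_reliability(t) = target for t."""
    return brentq(lambda t: system_reliability(t) - target, 1e-6, 1e6)

print(f"system L5   (95%   survival): {life_at_reliability(0.95):8.0f} h")
print(f"system L0.1 (99.9% survival): {life_at_reliability(0.999):8.0f} h")
# Both fall below the corresponding life of the lowest-lived component alone.
```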

  8. Characteristic tensile strength and Weibull shape parameter of carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Klein, Claude A.

    2007-06-01

    Recently, it has been argued [N. M. Pugno and R. S. Ruoff, J. Appl. Phys. 99, 024301 (2006)] that available carbon-nanotube (CNT) tensile strength data do not obey the "classical" Weibull statistical model. In this paper we formulate Weibull's theory in a manner suitable for assessing CNT fracture-strength data and demonstrate that, on taking into account the area S subjected to uniform tensile stresses, the data are consistent with Weibull's model. Based on available data, a characteristic strength σC (S = 1 μm^2) equal to 17.6±2.5 GPa in conjunction with a shape parameter m equal to 2.77±0.34 provides a good description of the CNT fracture strength. In terms of effective strengths, and on assuming that the relevant area-scaling laws apply, carbon nanotubes and diamond nanofilms exhibit similar features for stressed areas ranging from 1 to 10^4 μm^2.
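
    Using the values quoted in the abstract, the standard Weibull area-scaling relation σ(S) = σC·(S/S0)^(-1/m), assumed here to be the scaling form intended, gives the effective strength at any stressed area:

```python
# Weibull area scaling with the characteristic strength defined at S0 = 1 um^2
sigma_c = 17.6   # GPa at S0 = 1 um^2 (value quoted in the abstract)
m = 2.77         # Weibull shape parameter (value quoted in the abstract)

for S in [1.0, 1e2, 1e4]:   # stressed area in um^2
    strength = sigma_c * S ** (-1.0 / m)
    print(f"S = {S:8.0f} um^2 -> effective strength = {strength:5.2f} GPa")
```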

  9. Strength analysis of yttria-stabilized tetragonal zirconia polycrystals

    SciTech Connect

    Noguchi, K.; Matsuda, Y.; Oishi, M.; Masaki, T.; Nakayama, S.; Mizushina, M.

    1990-09-01

    This paper reports the tensile strength of Y2O3-stabilized ZrO2 polycrystals (Y-TZP) measured by a newly developed tensile testing method using a rectangular bar. The tensile strength of Y-TZP was lower than the three-point bend strength, and the shape of the tensile strength distribution was quite different from that of the three-point bend strength distribution. It was difficult to predict the distribution curve of the tensile strength from the three-point bend strength data using a one-modal Weibull distribution. The distribution of the tensile strength was therefore analyzed by a two- or three-modal Weibull distribution coupled with an analysis of fracture origins. The distribution curve of the three-point bend strength estimated by the multimodal Weibull distribution agreed well with the measured three-point bend strength values. A two-modal Weibull distribution function was formulated approximately from the distributions of the tensile and three-point bend strengths, and the estimated two-modal Weibull distribution function for the four-point bend strength agreed well with the measured four-point bend strength.

  10. Incorporating finite element analysis into component life and reliability

    NASA Technical Reports Server (NTRS)

    August, Richard; Zaretsky, Erwin V.

    1991-01-01

    A method for calculating a component's design survivability by incorporating finite element analysis and probabilistic material properties was developed. The method evaluates design parameters through direct comparisons of component survivability expressed in terms of Weibull parameters. The analysis was applied to a rotating disk with mounting bolt holes. The highest probability of failure occurred at, or near, the maximum shear stress region of the bolt holes. Distribution of failure as a function of Weibull slope affects the probability of survival. Where Weibull parameters are unknown for a rotating disk, it may be permissible to assume Weibull parameters, as well as the stress-life exponent, in order to determine the disk speed where the probability of survival is highest.

  11. The ATLAS distributed analysis system

    NASA Astrophysics Data System (ADS)

    Legger, F.; Atlas Collaboration

    2014-06-01

    In the LHC operations era, analysis of the multi-petabyte ATLAS data sample by globally distributed physicists is a challenging task. To attain the required scale the ATLAS Computing Model was designed around the concept of Grid computing, realized in the Worldwide LHC Computing Grid (WLCG), the largest distributed computational resource existing in the sciences. The ATLAS experiment currently stores over 140 PB of data and runs about 140,000 concurrent jobs continuously at WLCG sites. During the first run of the LHC, the ATLAS Distributed Analysis (DA) service has operated stably and scaled as planned. More than 1600 users submitted jobs in 2012, with 2 million or more analysis jobs per week, peaking at about a million jobs per day. The system dynamically distributes popular data to expedite processing and maximally utilize resources. The reliability of the DA service is high and steadily improving; Grid sites are continually validated against a set of standard tests, and a dedicated team of expert shifters provides user support and communicates user problems to the sites. Both the user support techniques and the direct feedback of users have been effective in improving the success rate and user experience when utilizing the distributed computing environment. In this contribution a description of the main components, activities and achievements of ATLAS distributed analysis is given. Several future improvements being undertaken will be described.

  12. Application of Weibull Criterion to failure prediction in composites

    SciTech Connect

    Cain, W. D.; Knight, Jr., C. E.

    1981-04-20

    Fiber-reinforced composite materials are being widely used in engineered structures. This report examines how the general form of the Weibull Criterion, including the evaluation of the parameters and the scaling of the parameter values, can be used for the prediction of component failure.

  13. Distributed data analysis in ATLAS

    NASA Astrophysics Data System (ADS)

    Nilsson, Paul; Atlas Collaboration

    2012-12-01

    Data analysis using grid resources is one of the fundamental challenges to be addressed before the start of LHC data taking. The ATLAS detector will produce petabytes of data per year, and roughly one thousand users will need to run physics analyses on this data. Appropriate user interfaces and helper applications have been made available to ensure that the grid resources can be used without requiring expertise in grid technology. These tools enlarge the number of grid users from a few production administrators to potentially all participating physicists. ATLAS makes use of three grid infrastructures for the distributed analysis: the EGEE sites, the Open Science Grid, and NorduGrid. These grids are managed by the gLite workload management system, the PanDA workload management system, and ARC middleware; many sites can be accessed via both the gLite WMS and PanDA. Users can choose between two front-end tools to access the distributed resources. Ganga is a tool co-developed with LHCb to provide a common interface to the multitude of execution backends (local, batch, and grid). The PanDA workload management system provides a set of utilities called PanDA Client; with these tools users can easily submit Athena analysis jobs to the PanDA-managed resources. Distributed data are managed by Don Quijote 2 (DQ2), a system developed by ATLAS; DQ2 is used to replicate datasets according to the data distribution policies and maintains a central catalog of file locations. The operation of the grid resources is continually monitored by the GangaRobot functional testing system, and infrequent site stress tests are performed using the HammerCloud system. In addition, the DAST shift team is a group of power users who take shifts to provide distributed analysis user support; this team has effectively relieved the burden of support from the developers.

  14. Weibull Statistics for Upper Ocean Currents with the Fokker-Planck Equation

    NASA Astrophysics Data System (ADS)

    Chu, P. C.

    2012-12-01

    Upper oceans typically exhibit a surface mixed layer with a thickness of a few to several hundred meters. This mixed layer is a key component in studies of climate, biological productivity and marine pollution. It is the link between the atmosphere and the deep ocean and directly affects the air-sea exchange of heat, momentum and gases. Vertically averaged horizontal currents across the mixed layer are driven by the residual between the Ekman transport and surface wind stress, and damped by Rayleigh friction. A set of stochastic differential equations is derived for the two components of the current vector (u, v). The joint probability distribution function of (u, v) satisfies the Fokker-Planck equation (Chu, 2008, 2009), with the Weibull distribution as the solution for the current speed. To verify this, the PDF of the upper (0-50 m) tropical Pacific current speeds (w) was calculated from hourly ADCP data (1990-2007) at six stations of the Tropical Atmosphere Ocean project. It satisfies the two-parameter Weibull distribution reasonably well, with different characteristics between El Nino and La Nina events: in the western Pacific, the PDF of w has a larger peakedness during La Nina events than during El Nino events, and vice versa in the eastern Pacific. However, the PDF of w for the lower layer (100-200 m) does not fit the Weibull distribution as well as that of the upper layer. This is due to the different stochastic differential equations governing the upper and lower layers in the tropical Pacific. For the upper layer, the stochastic differential equations, established on the basis of Ekman dynamics, have an analytical solution, i.e., the Rayleigh distribution (the simplest form of the Weibull distribution), for constant eddy viscosity K. Knowledge of the PDF of w during El Nino and La Nina events will improve ensemble horizontal flux calculations, which contribute to climate studies. Besides, the Weibull distribution is also identified from the

  15. Distributional Cost-Effectiveness Analysis

    PubMed Central

    Asaria, Miqdad; Griffin, Susan; Cookson, Richard

    2015-01-01

    Distributional cost-effectiveness analysis (DCEA) is a framework for incorporating health inequality concerns into the economic evaluation of health sector interventions. In this tutorial, we describe the technical details of how to conduct DCEA, using an illustrative example comparing alternative ways of implementing the National Health Service (NHS) Bowel Cancer Screening Programme (BCSP). The 2 key stages in DCEA are 1) modeling social distributions of health associated with different interventions, and 2) evaluating social distributions of health with respect to the dual objectives of improving total population health and reducing unfair health inequality. As well as describing the technical methods used, we also identify the data requirements and the social value judgments that have to be made. Finally, we demonstrate the use of sensitivity analyses to explore the impacts of alternative modeling assumptions and social value judgments. PMID:25908564

  16. Distribution-free discriminant analysis

    SciTech Connect

    Burr, T.; Doak, J.

    1997-05-01

    This report describes our experience in implementing a non-parametric (distribution-free) discriminant analysis module for use in a wide range of pattern recognition problems. Issues discussed include performance results on both real and simulated data sets, comparisons to other methods, and the computational environment. In some cases, this module performs better than other existing methods. Nearly all cases can benefit from the application of multiple methods.

  17. Spatial and Temporal Patterns of Global Onshore Wind Speed Distribution

    SciTech Connect

    Zhou, Yuyu; Smith, Steven J.

    2013-09-09

    Wind power, a renewable energy source, can play an important role in electrical energy generation. Information regarding wind energy potential is important both for energy related modeling and for decision-making in the policy community. While wind speed datasets with high spatial and temporal resolution are often ultimately used for detailed planning, simpler assumptions are often used in analysis work. An accurate representation of the wind speed frequency distribution is needed in order to properly characterize wind energy potential. Using a power density method, this study estimated global variation in wind parameters as fitted to a Weibull density function using NCEP/CFSR reanalysis data. The estimated Weibull distribution performs well in fitting the time series wind speed data at the global level according to R2, root mean square error, and power density error. The spatial, decadal, and seasonal patterns of wind speed distribution were then evaluated. We also analyzed the potential error in wind power estimation when a commonly assumed Rayleigh distribution (Weibull k = 2) is used. We find that the assumption of the same Weibull parameter across large regions can result in substantial errors. While large-scale wind speed data is often presented in the form of average wind speeds, these results highlight the need to also provide information on the wind speed distribution.
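
    A sketch of the power-density (energy pattern factor) approach to estimating Weibull parameters from a wind-speed series: the shape k is solved from the ratio of the mean cubed speed to the cube of the mean speed, and the scale c follows from the mean. The data below are synthetic stand-ins, and the textbook form shown is not necessarily the paper's exact implementation:

```python
import numpy as np
from math import gamma
from scipy.optimize import brentq

def weibull_from_power_density(w):
    """Power-density (energy pattern factor) method for Weibull k and c."""
    epf = np.mean(w**3) / np.mean(w)**3   # energy pattern factor
    # Solve epf = Gamma(1 + 3/k) / Gamma(1 + 1/k)**3 for the shape k
    f = lambda k: gamma(1 + 3 / k) / gamma(1 + 1 / k) ** 3 - epf
    k = brentq(f, 0.5, 10.0)
    c = np.mean(w) / gamma(1 + 1 / k)     # scale parameter (m/s)
    return k, c

rng = np.random.default_rng(3)
w = 7.0 * rng.weibull(2.2, size=8760)     # stand-in hourly wind speeds (m/s)
k, c = weibull_from_power_density(w)
print(f"k ~ {k:.2f}, c ~ {c:.2f} m/s")    # should roughly recover 2.2 and 7.0
```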

  18. Weibull models of fracture strengths and fatigue behavior of dental resins in flexure and shear.

    PubMed

    Baran, G R; McCool, J I; Paul, D; Boberick, K; Wunder, S

    1998-01-01

    In estimating lifetimes of dental restorative materials, it is useful to have available data on the fatigue behavior of these materials. Current efforts at estimation include several untested assumptions related to the equivalence of flaw distributions sampled by shear, tensile, and compressive stresses. Environmental influences on material properties are not accounted for, and it is unclear if fatigue limits exist. In this study, the shear and flexural strengths of three resins used as matrices in dental restorative composite materials were characterized by Weibull parameters. It was found that shear strengths were lower than flexural strengths, liquid sorption had a profound effect on characteristic strengths, and the Weibull shape parameter obtained from shear data differed for some materials from that obtained in flexure. In shear and flexural fatigue, a power law relationship applied for up to 250,000 cycles; no fatigue limits were found, and the data thus imply only one flaw population is responsible for failure. Again, liquid sorption adversely affected strength levels in most materials (decreasing shear strengths and flexural strengths by factors of 2-3) and to a greater extent than did the degree of cure or material chemistry. PMID:9730059

  19. A biology-driven receptor model for daily pollen allergy risk in Korea based on Weibull probability density function

    NASA Astrophysics Data System (ADS)

    Kim, Kyu Rang; Kim, Mijin; Choe, Ho-Seong; Han, Mae Ja; Lee, Hye-Rim; Oh, Jae-Won; Kim, Baek-Jo

    2016-07-01

    Pollen is an important cause of respiratory allergic reactions. As individual sanitation has improved, allergy risk has increased, and this trend is expected to continue due to climate change. Atmospheric pollen concentration is highly influenced by weather conditions. Regression analysis and modeling of the relationships between airborne pollen concentrations and weather conditions were performed to analyze and forecast pollen conditions. Traditionally, daily pollen concentration has been estimated using regression models that describe the relationships between observed pollen concentrations and weather conditions. These models were able to forecast daily concentrations at the sites of observation, but lacked broader spatial applicability beyond those sites. To overcome this limitation, an integrated modeling scheme was developed that is designed to represent the underlying processes of pollen production and distribution. A maximum potential for airborne pollen is first determined using the Weibull probability density function. Then, daily pollen concentration is estimated using multiple regression models. Daily risk grade levels are determined based on the risk criteria used in Korea. The mean percentages of agreement between the observed and estimated levels were 81.4-88.2 % and 92.5-98.5 % for oak and Japanese hop pollens, respectively. The new models estimated daily pollen risk more accurately than the original statistical models because of the newly integrated biological response curves. Although they overestimated seasonal mean concentration, they did not simulate all of the peak concentrations. This issue would be resolved by adding more variables that affect the prevalence and internal maturity of pollens.

  20. Comment on "On the tensile strength distribution of multiwalled carbon nanotubes" [Appl. Phys. Lett. 87, 203106 (2005)]

    NASA Astrophysics Data System (ADS)

    Lu, Chunsheng

    2008-05-01

    In a recent letter [Barber, Andrews, Schadler, and Wagner, Appl. Phys. Lett. 87, 203106 (2005)], it was indicated that Weibull-Poisson statistics could accurately model the nanotube tensile strength data, and it was then concluded that the apparent strengthening mechanism in a multiwalled carbon nanotube (MWCNT) grown by chemical vapor deposition (CVD) is most likely caused by an enhanced interaction between the walls of the nanotube. In this comment, we show that their conclusion seems to be inconsistent with the assumption introduced in their data analysis by using a two-parameter Weibull distribution. Further statistical analysis provides a new explanation for the scattered strengths of MWCNTs. The effectiveness of Weibull-Poisson statistics at the nanoscale is also discussed.

  1. Investigation on the lifetime of He--Ne lasers by means of Weibull function

    SciTech Connect

    Wang Xishan; Sun Zhendong

    1987-04-01

    The failure mechanism of He-Ne lasers is compared with the physical model underlying the Weibull function. It follows that the lifetime of He-Ne lasers ought to obey the Weibull function. An equation for accelerated aging is derived, which can be used to readily determine the lifetime characteristics of He-Ne lasers.

  2. Test Population Selection from Weibull-Based, Monte Carlo Simulations of Fatigue Life

    NASA Technical Reports Server (NTRS)

    Vlcek, Brian L.; Zaretsky, Erwin V.; Hendricks, Robert C.

    2008-01-01

    Fatigue life is probabilistic and not deterministic. Experimentally establishing the fatigue life of materials, components, and systems is both time consuming and costly. As a result, conclusions regarding fatigue life are often inferred from a statistically insufficient number of physical tests. A proposed methodology for comparing life results as a function of variability due to Weibull parameters, variability between successive trials, and variability due to size of the experimental population is presented. Using Monte Carlo simulation of randomly selected lives from a large Weibull distribution, the variation in the L10 fatigue life of aluminum alloy AL6061 rotating rod fatigue tests was determined as a function of population size. These results were compared to the L10 fatigue lives of small (10 each) populations from AL2024, AL7075 and AL6061. For aluminum alloy AL6061, a simple algebraic relationship was established for the upper and lower L10 fatigue life limits as a function of the number of specimens failed. For most engineering applications where less than 30 percent variability can be tolerated in the maximum and minimum values, at least 30 to 35 test samples are necessary. The variability of test results based on small sample sizes can be greater than the actual differences, if any, that exist between materials and can result in erroneous conclusions. The fatigue life of AL2024 is statistically longer than AL6061 and AL7075. However, there is no statistical difference between the fatigue lives of AL6061 and AL7075 even though AL7075 had a fatigue life 30 percent greater than AL6061.
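
    A compact sketch of the Monte Carlo idea: draw lives from an assumed Weibull population, form the empirical L10 (10th-percentile life) for samples of various sizes, and examine the scatter of the estimates. The parameter values are placeholders, not the AL6061 calibration:

```python
import numpy as np

rng = np.random.default_rng(42)
beta, eta = 2.0, 1.0e6   # assumed Weibull slope and characteristic life (cycles)
true_l10 = eta * (-np.log(0.9)) ** (1 / beta)   # exact 10th-percentile life

def l10_estimate(n):
    """Empirical L10 from a random sample of n fatigue failures."""
    lives = eta * rng.weibull(beta, size=n)
    return np.percentile(lives, 10)

for n in [10, 35, 100, 1000]:
    estimates = [l10_estimate(n) for _ in range(2000)]
    lo, hi = np.percentile(estimates, [5, 95])
    print(f"n = {n:4d}: 90% of L10 estimates fall in "
          f"[{lo / true_l10:.2f}, {hi / true_l10:.2f}] x true L10")
```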

  3. Test Population Selection from Weibull-Based, Monte Carlo Simulations of Fatigue Life

    NASA Technical Reports Server (NTRS)

    Vlcek, Brian L.; Zaretsky, Erwin V.; Hendricks, Robert C.

    2012-01-01

    Fatigue life is probabilistic and not deterministic. Experimentally establishing the fatigue life of materials, components, and systems is both time consuming and costly. As a result, conclusions regarding fatigue life are often inferred from a statistically insufficient number of physical tests. A proposed methodology for comparing life results as a function of variability due to Weibull parameters, variability between successive trials, and variability due to size of the experimental population is presented. Using Monte Carlo simulation of randomly selected lives from a large Weibull distribution, the variation in the L10 fatigue life of aluminum alloy AL6061 rotating rod fatigue tests was determined as a function of population size. These results were compared to the L10 fatigue lives of small (10 each) populations from AL2024, AL7075 and AL6061. For aluminum alloy AL6061, a simple algebraic relationship was established for the upper and lower L10 fatigue life limits as a function of the number of specimens failed. For most engineering applications where less than 30 percent variability can be tolerated in the maximum and minimum values, at least 30 to 35 test samples are necessary. The variability of test results based on small sample sizes can be greater than the actual differences, if any, that exist between materials and can result in erroneous conclusions. The fatigue life of AL2024 is statistically longer than AL6061 and AL7075. However, there is no statistical difference between the fatigue lives of AL6061 and AL7075 even though AL7075 had a fatigue life 30 percent greater than AL6061.

  4. Characteristic strength, Weibull modulus, and failure probability of fused silica glass

    NASA Astrophysics Data System (ADS)

    Klein, Claude A.

    2009-11-01

    The development of high-energy lasers has focused attention on the requirement to assess the mechanical strength of optical components made of fused silica or fused quartz (SiO2). The strength of this material is known to be highly dependent on the stressed area and the surface finish, but has not yet been properly characterized in the published literature. Recently, Detrio and collaborators at the University of Dayton Research Institute (UDRI) performed extensive ring-on-ring flexural strength measurements on fused SiO2 specimens ranging in size from 1 to 9 in. in diameter and of widely differing surface qualities. We report on a Weibull statistical analysis of the UDRI data, an analysis based on the procedure outlined in Proc. SPIE 4375, 241 (2001). We demonstrate that (1) a two-parameter Weibull model, including the area-scaling principle, applies; (2) the shape parameter (m ~= 10) is essentially independent of the stressed area as well as the surface finish; and (3) the characteristic strength (1-cm^2 uniformly stressed area) obeys a linear law, σC (in megapascals) ~= 160 - 2.83×PBS (in parts per million per steradian), where PBS characterizes the surface/subsurface "damage" of an appropriate set of test specimens. In this light, we evaluate the cumulative failure probability and the failure probability density of polished and superpolished fused SiO2 windows as a function of the biaxial tensile stress, for stressed areas ranging from 0.3 to 100 cm^2.
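
    A short sketch combining the relations reported in the abstract: the two-parameter Weibull failure probability with area scaling, Pf(σ; A) = 1 - exp[-(A/A0)(σ/σC)^m] with A0 = 1 cm^2, and the characteristic strength taken from the quoted linear law. The PBS value below is an assumed placeholder:

```python
import numpy as np

m = 10.0                       # shape parameter quoted in the abstract
pbs = 10.0                     # assumed surface "damage" metric (ppm/sr)
sigma_c = 160.0 - 2.83 * pbs   # characteristic strength law from the abstract (MPa)

def failure_probability(sigma, area_cm2):
    """Cumulative failure probability with Weibull area scaling (A0 = 1 cm^2)."""
    return 1.0 - np.exp(-area_cm2 * (sigma / sigma_c) ** m)

for sigma in [60.0, 100.0, 140.0]:
    print(f"sigma = {sigma:5.0f} MPa: "
          f"Pf(0.3 cm^2) = {failure_probability(sigma, 0.3):.3f}, "
          f"Pf(100 cm^2) = {failure_probability(sigma, 100.0):.3f}")
```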

  5. Distributed computing and nuclear reactor analysis

    SciTech Connect

    Brown, F.B.; Derstine, K.L.; Blomquist, R.N.

    1994-03-01

    Large-scale scientific and engineering calculations for nuclear reactor analysis can now be carried out effectively in a distributed computing environment, at costs far lower than for traditional mainframes. The distributed computing environment must include support for traditional system services, such as a queuing system for batch work, reliable filesystem backups, and parallel processing capabilities for large jobs. All ANL computer codes for reactor analysis have been adapted successfully to a distributed system based on workstations and X-terminals. Distributed parallel processing has been demonstrated to be effective for long-running Monte Carlo calculations.

  6. DASH - Distributed Analysis System Hierarchy

    NASA Astrophysics Data System (ADS)

    Yagi, M.; Mizumoto, Y.; Yoshida, M.; Kosugi, G.; Takata, T.; Ogasawara, R.; Ishihara, Y.; Morita, Y.; Nakamoto, H.; Watanabe, N.

    We developed the Distributed Analysis Software Hierarchy (DASH), an object-oriented data reduction and data analysis system for efficient processing of data from the SUBARU telescope. DASH consists of many objects (data management objects, reduction engines, GUIs, etc.) distributed on CORBA. We have also developed SASH, a stand-alone system which has the same interface as DASH, but which does not use some of the distributed services such as DA/DB; visiting astronomers can detach PROCube out of DASH and continue the analysis with SASH at their home institute. SASH will be used as a quick reduction tool at the summit.

  7. Brain responses strongly correlate with Weibull image statistics when processing natural images.

    PubMed

    Scholte, H Steven; Ghebreab, Sennay; Waldorp, Lourens; Smeulders, Arnold W M; Lamme, Victor A F

    2009-01-01

    The visual appearance of natural scenes is governed by a surprisingly simple hidden structure. The distributions of contrast values in natural images generally follow a Weibull distribution, with beta and gamma as free parameters. Beta and gamma seem to structure the space of natural images in an ecologically meaningful way, in particular with respect to the fragmentation and texture similarity within an image. Since it is often assumed that the brain exploits structural regularities in natural image statistics to efficiently encode and analyze visual input, we here ask ourselves whether the brain approximates the beta and gamma values underlying the contrast distributions of natural images. We present a model that shows that beta and gamma can be easily estimated from the outputs of X-cells and Y-cells. In addition, we covaried the EEG responses of subjects viewing natural images with the beta and gamma values of those images. We show that beta and gamma explain up to 71% of the variance of the early ERP signal, substantially outperforming other tested contrast measurements. This suggests that the brain is strongly tuned to the image's beta and gamma values, potentially providing the visual system with an efficient way to rapidly classify incoming images on the basis of omnipresent low-level natural image statistics. PMID:19757938
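
    A rough sketch of how such parameters can be estimated from an image, assuming a gradient-magnitude contrast measure and mapping beta to the Weibull scale and gamma to the shape, as is common in this literature; the paper's exact estimator based on X-cell and Y-cell outputs is not reproduced:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
img = rng.random((256, 256))   # stand-in for a grayscale natural image

# Local contrast: gradient magnitude of the image
gy, gx = np.gradient(img)
contrast = np.hypot(gx, gy).ravel()
contrast = contrast[contrast > 0]

# Fit a two-parameter Weibull to the contrast values (location fixed at zero);
# here gamma is taken as the shape and beta as the scale (an assumption).
gamma_hat, _, beta_hat = stats.weibull_min.fit(contrast, floc=0)
print(f"gamma (shape) ~ {gamma_hat:.2f}, beta (scale) ~ {beta_hat:.3f}")
```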

  8. Effects of dislocation density and sample-size on plastic yielding at the nanoscale: a Weibull-like framework

    NASA Astrophysics Data System (ADS)

    Rinaldi, Antonio

    2011-11-01

    Micro-compression tests have demonstrated that plastic yielding in nanoscale pillars is the result of the fine interplay between the sample size (chiefly the diameter D) and the density of bulk dislocations ρ. The power-law scaling typical of the nanoscale stems from a source-limited regime, which depends on both of these sample parameters. Based on the experimental and theoretical results available in the literature, this paper offers a perspective on the joint effect of D and ρ on the yield stress in any plastic regime, and proposes a schematic graphical map of it. In the sample-size-dependent regime, this dependence is cast mathematically into a first-order Weibull-type theory, where the exponent β of the power-law scaling and the modulus m of an approximate (unimodal) Weibull distribution of source strengths can be related by a simple inverse proportionality. As a corollary, the scaling exponent β may not be a universal number, as speculated in the literature. In this context, the discussion opens the alternative possibility of more general (multimodal) source-strength distributions, which could produce more complex and realistic strengthening patterns than the single power law usually assumed. The paper re-examines our own experimental data, as well as the results of Bei et al. (2008) on Mo-alloy pillars, to emphasize the significance of a sudden increase in the scatter of the sample response as a warning signal of an incipient source-limited regime.

  9. Accuracy analysis of distributed simulation systems

    NASA Astrophysics Data System (ADS)

    Lin, Qi; Guo, Jing

    2010-08-01

    Existing simulation studies tend to emphasize procedural verification, focusing on the simulation models rather than on the simulation itself. As a result, research on improving simulation accuracy has been limited to individual aspects. Since accuracy is the key to simulation credibility assessment and fidelity studies, it is important to give an all-round discussion of the accuracy of distributed simulation systems themselves. First, the major elements of distributed simulation systems are summarized, which serve as the basis for defining, classifying, and describing the accuracy of such systems. In Part 2, a comprehensive framework for the accuracy of distributed simulation systems is presented, which makes it easier to analyze and assess their uncertainty. In Part 3, the concept of accuracy is decomposed into four factors, each analyzed in turn. In Part 4, based on the formalized description of the accuracy-analysis framework, a practical approach is put forward that can be applied to study unexpected or inaccurate simulation results. A real distributed simulation system based on HLA is then taken as an example to verify the usefulness of the proposed approach. The results show that the method works well and is applicable to the accuracy analysis of distributed simulation systems.

  10. Analysis of distribution of critical current of bent-damaged Bi2223 composite tape

    NASA Astrophysics Data System (ADS)

    Ochiai, S.; Okuda, H.; Sugano, M.; Hojo, M.; Osamura, K.; Kuroda, T.; Kumakura, H.; Kitaguchi, H.; Itoh, K.; Wada, H.

    2011-10-01

    Distributions of critical current of damaged Bi2223 tape specimens bent by 0.6, 0.8 and 1.0% were investigated analytically with a modelling approach based on the correlation of damage evolution to the distribution of critical current. It was revealed that the distribution of critical current is described by a three-parameter Weibull distribution function through the distribution of the tensile damage strain of the Bi2223 filaments, which determines the damage front in the bent composite tape. It was also shown that the measured distribution of critical current values can be reproduced successfully by a Monte Carlo simulation using the distributions of tensile damage strain of the filaments and the original critical current.
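    A toy version of such a Monte Carlo (purely illustrative: the paper's damage-to-current mapping and parameter values are specific to the study, and bending actually strains filaments non-uniformly across the tape thickness) samples filament damage strains from a three-parameter Weibull and takes the retained critical current as the undamaged filament fraction:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    loc, scale, shape = 0.3, 0.9, 3.0     # assumed 3-parameter Weibull (% strain)
    n_filaments, n_tapes = 1000, 5000
    Ic0 = 100.0                           # assumed undamaged critical current (A)

    def simulate_Ic(bending_strain):
        # three-parameter Weibull sample: location + scale * Weibull(shape)
        damage = loc + scale * rng.weibull(shape, size=(n_tapes, n_filaments))
        surviving = (damage > bending_strain).mean(axis=1)
        return Ic0 * surviving            # retained Ic of each simulated tape

    for eps in (0.6, 0.8, 1.0):           # bending strains studied above (%)
        Ic = simulate_Ic(eps)
        print(f"strain {eps}%: mean Ic = {Ic.mean():.1f} A, std = {Ic.std():.2f} A")
    ```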

  11. Analysis of cascade impactor mass distributions.

    PubMed

    Dunbar, Craig; Mitchell, Jolyon

    2005-01-01

    The purpose of this paper is to review the approaches for analyzing cascade impactor (CI) mass distributions produced by pulmonary drug products and the considerations necessary for selecting the appropriate analysis procedure. There are several methods available for analyzing CI data, yielding a hierarchy of information in terms of nominal, ordinal and continuous variables. Analyzing mass distributions as a nominal function of the stages and auxiliary components is the simplest approach for examining the whole mass emitted by the inhaler. However, the relationship between the mass distribution and aerodynamic diameter is not described by such data. This relationship is a critical attribute of pulmonary drug products because of the association between aerodynamic diameter and the mass of particulates deposited in the respiratory tract. Therefore, the nominal mass distribution can only be utilized to make decisions on the discrete masses collected in the CI. Mass distributions analyzed as an ordinal function of aerodynamic diameter can be obtained by introducing the stage size ranges, which generally vary in magnitude from one stage to another for a given type of CI, and differ between CIs of different designs. Furthermore, the mass collected by specific size ranges within the CI is often incorrectly used to estimate in vivo deposition at various regions of the respiratory tract. A CI-generated mass distribution can be directly related to aerodynamic diameter by expressing the mass collected by each size-fractionating stage in terms of either mass frequency or cumulative mass fraction less than the aerodynamic size appropriate to each stage. Analysis of the aerodynamic diameter as a continuous variable allows comparison of mass distributions obtained from different products, obtained by different CI designs, as well as providing input to in vivo particle deposition models. The lack of information about the mass fraction emitted by the inhaler that is not size-analyzed by
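    The cumulative-mass-fraction representation mentioned above can be computed directly from stage masses and cut-off diameters; a minimal sketch with made-up numbers (the simplest log-scale interpolation of the MMAD is shown; probit interpolation is also common):

    ```python
    import numpy as np

    # Assumed example data: stage cut-off diameters (µm) and collected mass (µg)
    cutoffs = np.array([9.0, 5.8, 4.7, 3.3, 2.1, 1.1, 0.7, 0.4])
    mass    = np.array([ 50,  80, 120, 160, 140,  90,  40,  20])

    # Cumulative mass fraction *less than* each cut-off: particles on a stage
    # are larger than its own cut-off, so only stages further down contribute.
    cum_less = np.array([mass[i+1:].sum() for i in range(len(mass))]) / mass.sum()

    # MMAD: diameter at 50% cumulative mass, interpolated on a log-diameter scale
    mmad = np.exp(np.interp(0.5, cum_less[::-1], np.log(cutoffs)[::-1]))
    print(f"MMAD ≈ {mmad:.2f} µm")
    ```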

  12. Distributional Cost-Effectiveness Analysis: A Tutorial.

    PubMed

    Asaria, Miqdad; Griffin, Susan; Cookson, Richard

    2016-01-01

    Distributional cost-effectiveness analysis (DCEA) is a framework for incorporating health inequality concerns into the economic evaluation of health sector interventions. In this tutorial, we describe the technical details of how to conduct DCEA, using an illustrative example comparing alternative ways of implementing the National Health Service (NHS) Bowel Cancer Screening Programme (BCSP). The 2 key stages in DCEA are 1) modeling social distributions of health associated with different interventions, and 2) evaluating social distributions of health with respect to the dual objectives of improving total population health and reducing unfair health inequality. As well as describing the technical methods used, we also identify the data requirements and the social value judgments that have to be made. Finally, we demonstrate the use of sensitivity analyses to explore the impacts of alternative modeling assumptions and social value judgments. PMID:25908564

  13. Comparison of the Weibull characteristics of hydroxyapatite and strontium doped hydroxyapatite.

    PubMed

    Yatongchai, Chokchai; Wren, Anthony W; Curran, Declan J; Hornez, Jean-Christophe; Towler, Mark R

    2013-05-01

    The effects of two strontium (Sr) additions, 5% and 10% of the total calcium (Ca) content, on the phase assemblage and Weibull statistics of hydroxyapatite (HA) are investigated and compared to those of undoped HA. Sintering was carried out in the range of 900-1200 °C in steps of 100 °C in a conventional furnace. Sr content had little effect on the mean particulate size. Decomposition of the HA phase occurred with Sr incorporation, while β-TCP stabilization was shown to occur with 10% Sr additions. Porosity in both sets of doped samples was at a level comparable to that of the undoped HA samples; however, the 5% Sr-HA samples displayed the greatest reduction in porosity with increasing temperature, while the porosity of the 10% Sr-HA samples remained relatively constant over the full sintering temperature range. The undoped HA samples displayed the greatest Weibull strengths, and porosity was determined to be the major controlling factor. However, with the introduction of decompositional phases in the Sr-HA samples, the dependence of strength on porosity is reduced and the phase assemblage becomes the more dominant factor for Weibull strength. The Weibull modulus is relatively independent of the porosity in the undoped HA samples. The 5% Sr-HA samples experience a slight increase in Weibull modulus with porosity, indicating a possible relationship between the parameters. However, the 10% Sr-HA samples show the highest Weibull modulus, with a value of approximately 15 across all sintering temperatures. It is postulated that this is due to the increased amount of surface and lattice diffusion that these samples undergo, which effectively smooths out flaws in the microstructure, due to a saturation of Sr content occurring in grain boundary movement. PMID:23524073

  14. Distributed analysis in ATLAS using GANGA

    NASA Astrophysics Data System (ADS)

    Elmsheuser, Johannes; Brochu, Frederic; Cowan, Greig; Egede, Ulrik; Gaidioz, Benjamin; Lee, Hurng-Chun; Maier, Andrew; Móscicki, Jakub; Pajchel, Katarina; Reece, Will; Samset, Bjorn; Slater, Mark; Soroko, Alexander; Vanderster, Daniel; Williams, Michael

    2010-04-01

    Distributed data analysis using Grid resources is one of the fundamental applications in high energy physics to be addressed and realized before the start of LHC data taking. The demands on resource management are very high. In every experiment up to a thousand physicists will be submitting analysis jobs to the Grid. Appropriate user interfaces and helper applications have to be made available to assure that all users can use the Grid without expertise in Grid technology. These tools enlarge the number of Grid users from a few production administrators to potentially all participating physicists. The GANGA job management system (http://cern.ch/ganga), developed as a common project between the ATLAS and LHCb experiments, provides and integrates these kinds of tools. GANGA provides a simple and consistent way of preparing, organizing and executing analysis tasks within the experiment analysis framework, implemented through a plug-in system. It allows trivial switching between running test jobs on a local batch system and running large-scale analyses on the Grid, hiding Grid technicalities. We report on the plug-ins and our experiences of distributed data analysis using GANGA within the ATLAS experiment. Support is provided for all Grids presently used by ATLAS, namely the LCG/EGEE, NDGF/NorduGrid, and OSG/PanDA. The integration and interaction with the ATLAS data management system DQ2 is a key functionality of GANGA. Intelligent job brokering is set up by using the job splitting mechanism together with data-set and file location knowledge. The brokering is aided by an automated system that regularly processes test analysis jobs at all ATLAS DQ2 supported sites. Large numbers of analysis jobs can be sent to the locations of data following the ATLAS computing model. GANGA supports, among other things, user analysis with reconstructed data and small-scale production of Monte Carlo data.

  15. A Novel Conditional Probability Density Distribution Surface for the Analysis of the Drop Life of Solder Joints Under Board Level Drop Impact

    NASA Astrophysics Data System (ADS)

    Gu, Jian; Lei, YongPing; Lin, Jian; Fu, HanGuang; Wu, Zhongwei

    2016-01-01

    The scatter of fatigue life data is a common problem and is usually described using the normal distribution or Weibull distribution. For solder joints under drop impact, due to the complicated stress distribution, the relationship between the stress and the drop life is so far unknown. Furthermore, it is important to establish a function describing the change in standard deviation for solder joints under different drop impact levels. Therefore, in this study, a novel conditional probability density distribution surface (CPDDS) was established for the analysis of the drop life of solder joints. The relationship between the drop impact acceleration and the drop life is proposed, which comprehensively considers the stress distribution. A novel exponential model was adopted for describing the change of the standard deviation with the impact acceleration (0 → +∞). To validate the model, the drop life of Sn-3.0Ag-0.5Cu solder joints was analyzed. The probability density curve of the logarithm of the fatigue life distribution can easily be obtained for a certain acceleration level fixed on the acceleration axis of the CPDDS. The P-A-N curve was also obtained using the functions μ(A) and σ(A), which can reflect the regularity of the life data for an overall reliability P.
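    A sketch of the idea, assuming (as an illustration only) a lognormal life model whose mean log-life μ(A) falls with acceleration and whose standard deviation σ(A) follows an exponential model; all functional forms and numbers below are assumptions, not the paper's fitted functions:

    ```python
    import numpy as np
    from scipy import stats

    # assumed functional forms and parameters, for illustration only
    def mu(A):                     # mean log drop life, decreasing with acceleration
        return 10.0 - 2.0 * np.log(A / 1000.0 + 1.0)

    def sigma(A):                  # exponential model for the scatter sigma(A)
        return 0.2 + 0.6 * np.exp(-A / 2000.0)

    A_grid = np.linspace(500, 10000, 100)       # impact acceleration levels (g)
    logN_grid = np.linspace(2, 12, 200)         # log of drop life
    A, logN = np.meshgrid(A_grid, logN_grid)
    cpdds = stats.norm.pdf(logN, loc=mu(A), scale=sigma(A))   # the density surface

    # a P-A-N curve: life exceeded with probability P at each acceleration level
    P = 0.99
    N_P = np.exp(mu(A_grid) + sigma(A_grid) * stats.norm.ppf(1 - P))
    print(cpdds.shape, N_P[[0, -1]].round(0))
    ```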

  16. Analysis and Modelling of Extreme Wind Speed Distributions in Complex Mountainous Regions

    NASA Astrophysics Data System (ADS)

    Laib, Mohamed; Kanevski, Mikhail

    2016-04-01

    Modelling of wind speed distributions in complex mountainous regions is an important and challenging problem of interest to scientists from several fields. In the present research, high-frequency (10 min) Swiss wind speed monitoring data (IDAWEB service, Meteosuisse) are analysed and modelled with different parametric distributions (Weibull, GEV, Gamma, etc.) using the maximum likelihood method. In total, 111 stations placed in different geomorphological units and at different altitudes (from 203 to 3580 meters) are studied. This information is then used to train machine learning algorithms (Extreme Learning Machines, Support Vector Machines) to predict the distribution at new places, potentially useful for aeolian energy generation. An important part of the research deals with the construction and application of a high-dimensional input feature space generated from a digital elevation model. A comprehensive study was carried out using a feature selection approach to get the best model for the prediction. The main results are presented as spatial patterns of the distributions' parameters.
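    A compact sketch of this fit-and-compare step using scipy's maximum likelihood fits and AIC ranking; the data here are synthetic stand-ins for a station's 10-minute record:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    speeds = rng.weibull(2.0, 5000) * 6.0        # synthetic 10-min wind speeds (m/s)

    candidates = {
        "Weibull": stats.weibull_min,
        "GEV": stats.genextreme,
        "Gamma": stats.gamma,
    }
    for name, dist in candidates.items():
        params = dist.fit(speeds)                # maximum likelihood fit
        loglik = dist.logpdf(speeds, *params).sum()
        aic = 2 * len(params) - 2 * loglik       # lower AIC = better trade-off
        print(f"{name:8s} AIC = {aic:.1f}")
    ```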

  17. Performance optimisations for distributed analysis in ALICE

    NASA Astrophysics Data System (ADS)

    Betev, L.; Gheata, A.; Gheata, M.; Grigoras, C.; Hristov, P.

    2014-06-01

    Performance is a critical issue in a production system accommodating hundreds of analysis users. Compared to a local session, distributed analysis is exposed to services and network latencies, remote data access and heterogeneous computing infrastructure, creating a more complex performance and efficiency optimization matrix. During the last 2 years, ALICE analysis shifted from a fast development phase to more mature and stable code. At the same time, the frameworks and tools for deployment, monitoring and management of large productions have evolved considerably too. The ALICE Grid production system is currently used by a fair share of organized and individual user analysis, consuming up to 30% of the available resources and ranging from fully I/O-bound analysis code to CPU-intensive correlation or resonance studies. While the intrinsic analysis performance is unlikely to improve by a large factor during the LHC long shutdown (LS1), the overall efficiency of the system still has to be improved by an important factor to satisfy the analysis needs. We have instrumented all analysis jobs with "sensors" collecting comprehensive monitoring information on the job running conditions and performance in order to identify bottlenecks in the data processing flow. These data are collected by the MonALISA-based ALICE Grid monitoring system and are used to steer and improve the job submission and management policy, to identify operational problems in real time and to perform automatic corrective actions. In parallel with an upgrade of our production system, we are aiming for low-level improvements related to data format, data management and merging of results to allow for a better performing ALICE analysis.

  18. Analysis of Jingdong Mall Logistics Distribution Model

    NASA Astrophysics Data System (ADS)

    Shao, Kang; Cheng, Feng

    In recent years, the development of electronic commerce in China has accelerated. The role of logistics has been highlighted, and more and more electronic commerce enterprises are beginning to realize the importance of logistics to the success or failure of the enterprise. In this paper, the authors take Jingdong Mall as an example, performing a SWOT analysis of the current state of its self-built logistics system, identifying the problems in the current Jingdong Mall logistics distribution, and giving appropriate recommendations.

  19. EXPERIMENTAL DESIGN STRATEGY FOR THE WEIBULL DOSE RESPONSE MODEL (JOURNAL VERSION)

    EPA Science Inventory

    The objective of the research was to determine optimum design point allocation for estimation of relative yield losses from ozone pollution when the true and fitted yield-ozone dose response relationship follows the Weibull. The optimum design is dependent on the values of the We...
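    For reference, the Weibull dose-response form referred to here expresses relative yield as exp(-(dose/scale)^shape); a tiny sketch with assumed parameter values (not EPA estimates):

    ```python
    import numpy as np

    scale, shape = 0.12, 2.5      # assumed ozone dose scale (ppm) and shape

    def relative_yield(ozone_ppm):
        # Weibull dose-response: yield relative to the zero-dose baseline
        return np.exp(-(np.asarray(ozone_ppm) / scale) ** shape)

    for o3 in (0.02, 0.05, 0.09):
        print(f"O3 = {o3:.2f} ppm: relative yield = {relative_yield(o3):.3f}")
    ```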

  20. DASH--distributed analysis system hierarchy

    NASA Astrophysics Data System (ADS)

    Yagi, Masafumi; Yoshihiko, Mizumoto; Ogasawara, Ryusuke; Kosugi, George; Takata, Tadafumi; Ishihara, Yasuhide; Yokono, Yasunori; Morita, Yasuhiro; Nakamoto, Hiroyuki; Watanabe, Noboru; Ukawa, Kentaro

    2002-12-01

    We have developed and are operating an object-oriented data reduction and data analysis system, DASH (Distributed Analysis Software Hierarchy), for efficient data processing for the SUBARU telescope. In DASH, all information for reducing a set of data is packed into an abstracted object, named ``Hierarchy''. It contains rules for searching calibration data, the reduction procedure leading to the final result, and the reduction log. With Hierarchy, DASH works as an automated reduction pipeline platform in cooperation with STARS (Subaru Telescope ARchive System). DASH is implemented with CORBA and Java technology. The portability of these technologies enables us to make a subset of the system for a small stand-alone system, SASH. SASH is compatible with DASH, and one can continuously reduce and analyze data between DASH and SASH.

  1. Analysis and control of distributed cooperative systems.

    SciTech Connect

    Feddema, John Todd; Parker, Eric Paul; Wagner, John S.; Schoenwald, David Alan

    2004-09-01

    As part of the DARPA Information Processing Technology Office (IPTO) Software for Distributed Robotics (SDR) Program, Sandia National Laboratories has developed analysis and control software for coordinating tens to thousands of autonomous cooperative robotic agents (primarily unmanned ground vehicles) performing military operations such as reconnaissance, surveillance and target acquisition; countermine and explosive ordnance disposal; force protection and physical security; and logistics support. Due to the nature of these applications, the control techniques must be distributed, and they must not rely on high bandwidth communication between agents. At the same time, a single soldier must be able to easily direct these large-scale systems. Finally, the control techniques must be provably convergent so as not to cause undue harm to civilians. In this project, provably convergent, moderate communication bandwidth, distributed control algorithms have been developed that can be regulated by a single soldier. We have simulated in great detail the control of small numbers of vehicles (up to 20) navigating throughout a building, and we have simulated in lesser detail the control of larger numbers of vehicles (up to 1000) trying to locate several targets in a large outdoor facility. Finally, we have experimentally validated the resulting control algorithms on smaller numbers of autonomous vehicles.

  2. Rectangular shape distributed piezoelectric actuator: analytical analysis

    NASA Astrophysics Data System (ADS)

    Sun, Bohua; Qiu, Yan

    2004-04-01

    This paper is focused on the development of distributed piezoelectric actuators (DPAs) with rectangular shapes using PZT materials. Analytical models of rectangular shape DPAs have been constructed in order to analyse and test the performance of DPA products. Firstly, based on the theory of electromagnetics, DPAs have been treated as a type of capacitor. The distributed charge density on the interdigitated electrodes (IDEs) applied in the actuators and the capacitance of the DPAs have been calculated. The accurate distribution and intensity of the electric field in the DPA element have also been calculated completely. Secondly, based on the piezoelectric constitutive relations and compound plate theory, models for the mechanical strain and stress fields of DPAs have been developed, and the performance of rectangular shape DPAs has been discussed. Finally, on the basis of the models developed in this paper, an improved design of a rectangular shape DPA has been discussed and summed up. Because of the minimal hypotheses used during the calculations, the contribution of this paper is that the accurate distribution and intensity of the electric fields in DPAs have been obtained. The proposed accurate calculations have not been seen in the literature, and can be used in DPA design and manufacturing processes in order to improve mechanical performance and reduce the cost of DPA products in further applications. In this paper, all the analysis and calculation have been done with MATLAB and MathCAD. The FEM results used for comparison were obtained using the ABAQUS program.

  3. CMS distributed data analysis with CRAB3

    SciTech Connect

    Mascheroni, M.; Balcas, J.; Belforte, S.; Bockelman, B. P.; Hernandez, J. M.; Ciangottini, D.; Konstantinov, P. B.; Silva, J. M. D.; Ali, M. A. B. M.; Melo, A. M.; Riahi, H.; Tanasijczuk, A. J.; Yusli, M. N. B.; Wolf, M.; Woodard, A. E.; Vaandering, E.

    2015-12-23

    The CMS Remote Analysis Builder (CRAB) is a distributed workflow management tool which facilitates analysis tasks by isolating users from the technical details of the Grid infrastructure. Throughout LHC Run 1, CRAB has been successfully employed by an average of 350 distinct users each week executing about 200,000 jobs per day. CRAB has been significantly upgraded in order to face the new challenges posed by LHC Run 2. Components of the new system include 1) a lightweight client, 2) a central primary server which communicates with the clients through a REST interface, 3) secondary servers which manage user analysis tasks and submit jobs to the CMS resource provisioning system, and 4) a central service to asynchronously move user data from temporary storage in the execution site to the desired storage location. Furthermore, the new system improves the robustness, scalability and sustainability of the service. Here we provide an overview of the new system, operation, and user support, report on its current status, and identify lessons learned from the commissioning phase and production roll-out.

  4. CMS distributed data analysis with CRAB3

    DOE PAGESBeta

    Mascheroni, M.; Balcas, J.; Belforte, S.; Bockelman, B. P.; Hernandez, J. M.; Ciangottini, D.; Konstantinov, P. B.; Silva, J. M. D.; Ali, M. A. B. M.; Melo, A. M.; et al

    2015-12-23

    The CMS Remote Analysis Builder (CRAB) is a distributed workflow management tool which facilitates analysis tasks by isolating users from the technical details of the Grid infrastructure. Throughout LHC Run 1, CRAB has been successfully employed by an average of 350 distinct users each week executing about 200,000 jobs per day. CRAB has been significantly upgraded in order to face the new challenges posed by LHC Run 2. Components of the new system include 1) a lightweight client, 2) a central primary server which communicates with the clients through a REST interface, 3) secondary servers which manage user analysis tasks and submit jobs to the CMS resource provisioning system, and 4) a central service to asynchronously move user data from temporary storage in the execution site to the desired storage location. Furthermore, the new system improves the robustness, scalability and sustainability of the service. Here we provide an overview of the new system, operation, and user support, report on its current status, and identify lessons learned from the commissioning phase and production roll-out.

  5. CMS distributed data analysis with CRAB3

    NASA Astrophysics Data System (ADS)

    Mascheroni, M.; Balcas, J.; Belforte, S.; Bockelman, B. P.; Hernandez, J. M.; Ciangottini, D.; Konstantinov, P. B.; Silva, J. M. D.; Ali, M. A. B. M.; Melo, A. M.; Riahi, H.; Tanasijczuk, A. J.; Yusli, M. N. B.; Wolf, M.; Woodard, A. E.; Vaandering, E.

    2015-12-01

    The CMS Remote Analysis Builder (CRAB) is a distributed workflow management tool which facilitates analysis tasks by isolating users from the technical details of the Grid infrastructure. Throughout LHC Run 1, CRAB has been successfully employed by an average of 350 distinct users each week executing about 200,000 jobs per day. CRAB has been significantly upgraded in order to face the new challenges posed by LHC Run 2. Components of the new system include 1) a lightweight client, 2) a central primary server which communicates with the clients through a REST interface, 3) secondary servers which manage user analysis tasks and submit jobs to the CMS resource provisioning system, and 4) a central service to asynchronously move user data from temporary storage in the execution site to the desired storage location. The new system improves the robustness, scalability and sustainability of the service. Here we provide an overview of the new system, operation, and user support, report on its current status, and identify lessons learned from the commissioning phase and production roll-out.

  6. Ceramics Analysis and Reliability Evaluation of Structures (CARES). Users and programmers manual

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Manderscheid, Jane M.; Gyekenyesi, John P.

    1990-01-01

    This manual describes how to use the Ceramics Analysis and Reliability Evaluation of Structures (CARES) computer program. The primary function of the code is to calculate the fast fracture reliability or failure probability of macroscopically isotropic ceramic components. These components may be subjected to complex thermomechanical loadings, such as those found in heat engine applications. The program uses results from MSC/NASTRAN or ANSYS finite element analysis programs to evaluate component reliability due to inherent surface and/or volume type flaws. CARES utilizes the Batdorf model and the two-parameter Weibull cumulative distribution function to describe the effect of multiaxial stress states on material strength. The principle of independent action (PIA) and the Weibull normal stress averaging models are also included. Weibull material strength parameters, the Batdorf crack density coefficient, and other related statistical quantities are estimated from four-point bend bar or uniform uniaxial tensile specimen fracture strength data. Parameter estimation can be performed for single or multiple failure modes by using least-squares analysis or the maximum likelihood method. Kolmogorov-Smirnov and Anderson-Darling goodness-of-fit tests, ninety percent confidence intervals on the Weibull parameters, and Kanofsky-Srinivasan ninety percent confidence band values are also provided. The probabilistic fast-fracture theories used in CARES, along with the input and output for CARES, are described. Example problems to demonstrate various features of the program are also included. This manual describes the MSC/NASTRAN version of the CARES program.
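    The maximum likelihood step for the two-parameter Weibull (single failure mode, no censoring) can be sketched as follows; the strength values are made up, and this is not the CARES code itself:

    ```python
    import numpy as np
    from scipy import optimize

    # assumed fracture strengths from bend-bar tests (MPa)
    strengths = np.array([512., 544., 570., 588., 601., 615., 632., 655., 670., 701.])

    def neg_loglik(params):
        m, sigma0 = params            # Weibull modulus, characteristic strength
        if m <= 0 or sigma0 <= 0:
            return np.inf
        # log-pdf of Weibull: log(m/s0) + (m-1)*log(x/s0) - (x/s0)**m
        z = (strengths / sigma0) ** m
        return -(np.log(m / sigma0) + (m - 1) * np.log(strengths / sigma0) - z).sum()

    res = optimize.minimize(neg_loglik, x0=[5.0, strengths.mean()], method="Nelder-Mead")
    m_hat, sigma0_hat = res.x
    print(f"Weibull modulus m = {m_hat:.2f}, characteristic strength = {sigma0_hat:.1f} MPa")
    ```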

  7. A Mixed Kijima Model Using the Weibull-Based Generalized Renewal Processes

    PubMed Central

    2015-01-01

    Generalized Renewal Processes are useful for approaching the rejuvenation of dynamical systems resulting from planned or unplanned interventions. We present new perspectives for Generalized Renewal Processes in general and for the Weibull-based Generalized Renewal Processes in particular. In contrast to the existing literature, we present a mixed Generalized Renewal Processes approach involving Kijima Type I and II models, allowing one to infer the impact of distinct interventions on the performance of the system under study. The first and second theoretical moments of this model are introduced, as well as its maximum likelihood estimation and random sampling approaches. In order to illustrate the usefulness of the proposed Weibull-based Generalized Renewal Processes model, some real data sets involving improving, stable, and deteriorating systems are used. PMID:26197222
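    A minimal sketch of Weibull-based GRP random sampling with a Kijima Type I virtual age, using inverse-transform sampling from the conditional Weibull survival function; parameter values are assumed, and the paper's mixed model additionally involves the Kijima Type II rule v = q(v + x):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    beta, theta, q = 1.8, 100.0, 0.3   # shape, scale, rejuvenation parameter (assumed)

    def sample_grp(n_events):
        v, t, times = 0.0, 0.0, []
        for _ in range(n_events):
            u = rng.random()
            # invert S(x|v) = exp((v/theta)**beta - ((v+x)/theta)**beta) = u
            x = theta * ((v / theta) ** beta - np.log(u)) ** (1.0 / beta) - v
            t += x
            times.append(t)
            v += q * x                 # Kijima Type I virtual-age update
        return np.array(times)

    print(sample_grp(5).round(1))      # event times of one simulated history
    ```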

  8. Sensitivity analysis of distributed volcanic source inversion

    NASA Astrophysics Data System (ADS)

    Cannavo', Flavio; Camacho, Antonio G.; González, Pablo J.; Puglisi, Giuseppe; Fernández, José

    2016-04-01

    A recently proposed algorithm (Camacho et al., 2011) claims to rapidly estimate magmatic sources from surface geodetic data without any a priori assumption about source geometry. The algorithm combines the fast calculation of analytical models with the capability to model free-shape distributed sources. Assuming homogeneous elastic conditions, the approach can determine general geometrical configurations of pressured and/or density sources and/or sliding structures corresponding to prescribed values of anomalous density, pressure and slip. These source bodies are described as aggregations of elemental point sources for pressure, density and slip, and they fit the whole data set (keeping some 3D regularity conditions). Although some examples and applications have already been presented to demonstrate the ability of the algorithm to reconstruct a magma pressure source (e.g., Camacho et al., 2011; Cannavò et al., 2015), a systematic analysis of the sensitivity and reliability of the algorithm is still lacking. In this explorative work we present results from a large statistical test designed to evaluate the advantages and limitations of the methodology by assessing its sensitivity to the free and constrained parameters involved in inversions. In particular, besides the source parameters, we focused on the ground deformation network topology and noise in measurements. The proposed analysis can be used for a better interpretation of the algorithm's results in real-case applications. Camacho, A. G., González, P. J., Fernández, J. & Berrino, G. (2011) Simultaneous inversion of surface deformation and gravity changes by means of extended bodies with a free geometry: Application to deforming calderas. J. Geophys. Res. 116. Cannavò F., Camacho A.G., González P.J., Mattia M., Puglisi G., Fernández J. (2015) Real Time Tracking of Magmatic Intrusions by means of Ground Deformation Modeling during Volcanic Crises, Scientific Reports, 5 (10970) doi:10.1038/srep

  9. Transmission and distribution-loss analysis

    SciTech Connect

    Not Available

    1982-05-01

    A previous study developed a methodology for determining the losses in the various elements of an electric utility transmission and distribution system using only generally published system data. In that study the losses at the system peak and the average annual losses of the Niagara Mohawk Power Corporation system were calculated to illustrate the methods. Since there was little or no system loss data available at that time, the methodology of the loss calculations was not verified. The purpose of this study was to verify the methods that were proposed in the previous study. The data, estimates, assumptions, and calculation methods of the original study were checked against the actual Niagara Mohawk system data. The losses calculated in the original study were compared to the system losses derived from actual system data. Revisions to the original methods were recommended to improve the accuracy of the results. As a result of the analysis done in this study, the methods developed in the original study were revised. The revised methods provide reasonable loss calculation results for the Niagara Mohawk system. These methods along with discussions of their application are given. Also included is a description of the procedures followed to find the system losses from the actual system data. The revised loss calculation methods using the published data based on the Niagara Mohawk system data, operation, and loadings, gave reasonable results for that system, and the method may be applicable to similar systems.

  10. On the gap between an empirical distribution and an exponential distribution of waiting times for price changes in a financial market

    NASA Astrophysics Data System (ADS)

    Sazuka, Naoya

    2007-03-01

    We analyze waiting times for price changes in a foreign currency exchange rate. Recent empirical studies of high-frequency financial data support that trades in financial markets do not follow a Poisson process and that the waiting times between trades are not exponentially distributed. Here we show that our data are well approximated by a Weibull distribution rather than an exponential distribution in the non-asymptotic regime. Moreover, we quantitatively evaluate how far the empirical data are from an exponential distribution using a Weibull fit. Finally, we discuss a transition between a Weibull law and a power law in the long-time asymptotic regime.
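    One simple way to quantify the distance from the exponential case is to fit a Weibull and inspect its shape parameter (shape = 1 recovers the exponential), optionally comparing log-likelihoods; a sketch with synthetic waiting times standing in for the tick data:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    waits = rng.weibull(0.6, 20000) * 10.0       # synthetic waiting times (s)

    shape, loc, scale = stats.weibull_min.fit(waits, floc=0)
    print(f"Weibull shape = {shape:.3f} (1.0 would be exponential)")

    # likelihood comparison against the exponential fit (MLE scale = mean)
    ll_weib = stats.weibull_min.logpdf(waits, shape, 0, scale).sum()
    ll_exp = stats.expon.logpdf(waits, 0, waits.mean()).sum()
    print(f"log-likelihood gain of Weibull over exponential: {ll_weib - ll_exp:.1f}")
    ```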

  11. Distributed Design and Analysis of Computer Experiments

    SciTech Connect

    Doak, Justin

    2002-11-11

    DDACE is a C++ object-oriented software library for the design and analysis of computer experiments. DDACE can be used to generate samples from a variety of sampling techniques. These samples may be used as input to an application code. DDACE also contains statistical tools such as response surface models and correlation coefficients to analyze input/output relationships between variables in an application code. DDACE can generate input values for uncertain variables within a user's application. For example, a user might like to vary a temperature variable as well as some material variables in a series of simulations. Through the series of simulations the user might be looking for optimal settings of parameters based on some user criteria. Or the user may be interested in the sensitivity to input variability shown by an output variable. In either case, the user may provide information about the suspected ranges and distributions of a set of input variables, along with a sampling scheme, and DDACE will generate input points based on these specifications. The input values generated by DDACE and the one or more outputs computed through the user's application code can be analyzed with a variety of statistical methods. This can lead to a wealth of information about the relationships between the variables in the problem. While statistical and mathematical packages may be employed to carry out the analysis of the input/output relationships, DDACE also contains some tools for analyzing the simulation data. DDACE incorporates a software package called MARS (Multivariate Adaptive Regression Splines), developed by Jerome Friedman. MARS is used for generating a spline surface fit of the data. With MARS, a model simplification may be calculated using the input and corresponding output values for the user's application problem. The MARS grid data may be used for generating 3-dimensional response surface plots of the simulation data. DDACE also contains an implementation of an

  12. Distributed Design and Analysis of Computer Experiments

    Energy Science and Technology Software Center (ESTSC)

    2002-11-11

    DDACE is a C++ object-oriented software library for the design and analysis of computer experiments. DDACE can be used to generate samples from a variety of sampling techniques. These samples may be used as input to an application code. DDACE also contains statistical tools such as response surface models and correlation coefficients to analyze input/output relationships between variables in an application code. DDACE can generate input values for uncertain variables within a user's application. For example, a user might like to vary a temperature variable as well as some material variables in a series of simulations. Through the series of simulations the user might be looking for optimal settings of parameters based on some user criteria. Or the user may be interested in the sensitivity to input variability shown by an output variable. In either case, the user may provide information about the suspected ranges and distributions of a set of input variables, along with a sampling scheme, and DDACE will generate input points based on these specifications. The input values generated by DDACE and the one or more outputs computed through the user's application code can be analyzed with a variety of statistical methods. This can lead to a wealth of information about the relationships between the variables in the problem. While statistical and mathematical packages may be employed to carry out the analysis of the input/output relationships, DDACE also contains some tools for analyzing the simulation data. DDACE incorporates a software package called MARS (Multivariate Adaptive Regression Splines), developed by Jerome Friedman. MARS is used for generating a spline surface fit of the data. With MARS, a model simplification may be calculated using the input and corresponding output values for the user's application problem. The MARS grid data may be used for generating 3-dimensional response surface plots of the simulation data. DDACE also contains an implementation

  13. Distribution entropy analysis of epileptic EEG signals.

    PubMed

    Li, Peng; Yan, Chang; Karmakar, Chandan; Liu, Changchun

    2015-08-01

    It is an open-ended challenge to accurately detect epileptic seizures through electroencephalogram (EEG) signals. Recently published studies have made elaborate attempts to distinguish between normal and epileptic EEG signals using advanced nonlinear entropy methods, such as approximate entropy, sample entropy, fuzzy entropy, and permutation entropy. Most recently, a novel distribution entropy (DistEn) has been reported to have superior performance compared with the conventional entropy methods, especially for short-length data. We thus aimed, in the present study, to show the potential of DistEn in the analysis of epileptic EEG signals. The publicly accessible Bonn database, which consists of normal, interictal, and ictal EEG signals, was used in this study. Three different measurement protocols were set up to better understand the performance of DistEn: i) calculate the DistEn of a specific EEG signal using the full recording; ii) calculate the DistEn by averaging the results for all its possible non-overlapped 5 second segments; and iii) calculate it by averaging the DistEn values for all possible non-overlapped segments of 1 second length. Results for all three protocols indicated a statistically significantly increased DistEn for the ictal class compared with both the normal and interictal classes. Besides, the results obtained under the third protocol, which used only very short segments (1 s) of EEG recordings, showed a significantly (p < 0.05) increased DistEn for the interictal class in comparison with the normal class, whereas both analyses using relatively long EEG signals failed to track this difference between them, which may be due to a nonstationarity effect on the entropy algorithm. The capability of discriminating between the normal and interictal EEG signals is of great clinical relevance, since it may provide helpful tools for the detection of seizure onset. Therefore, our study suggests that the Dist
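    A compact sketch of the DistEn computation as published (embed the signal, histogram all pairwise Chebyshev distances into M bins, and take the normalized Shannon entropy of the histogram); minor indexing conventions may differ from the original paper:

    ```python
    import numpy as np

    def dist_en(x, m=2, M=512):
        """Distribution entropy of signal x with embedding dimension m, M bins."""
        x = np.asarray(x, dtype=float)
        n = len(x) - m + 1
        emb = np.lib.stride_tricks.sliding_window_view(x, m)   # embedded vectors
        # Chebyshev distance between every pair of embedded vectors (i < j)
        d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)
        d = d[np.triu_indices(n, k=1)]
        p, _ = np.histogram(d, bins=M)          # empirical pdf of the distances
        p = p / p.sum()
        p = p[p > 0]
        return -(p * np.log2(p)).sum() / np.log2(M)

    rng = np.random.default_rng(6)
    print(f"white noise DistEn ≈ {dist_en(rng.standard_normal(1000)):.3f}")
    ```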

  14. Survival Analysis of Patients with End Stage Renal Disease

    NASA Astrophysics Data System (ADS)

    Urrutia, J. D.; Gayo, W. S.; Bautista, L. A.; Baccay, E. B.

    2015-06-01

    This paper provides a survival analysis of End Stage Renal Disease (ESRD) using Kaplan-Meier estimates and the Weibull distribution. The data were obtained from the records of V. L. Makabali Memorial Hospital with respect to time t (patient's age), covariates such as developed secondary disease (Pulmonary Congestion and Cardiovascular Disease), gender, and the event of interest: the death of ESRD patients. Survival and hazard rates were estimated using NCSS for the Weibull distribution and SPSS for the Kaplan-Meier estimates. These lead to the same conclusion: the hazard rate increases and the survival rate decreases over time for ESRD patients diagnosed with Pulmonary Congestion, Cardiovascular Disease, or both diseases. The results also show that female patients have a greater risk of death compared to males. The probability of risk was given by the equation R = 1 - e^(-H(t)), where e^(-H(t)) is the survival function and H(t) the cumulative hazard function, which was created using Cox regression.
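    For the Weibull model used here, the survival, hazard, and risk quantities take closed forms: S(t) = exp(-(t/λ)^k), h(t) = (k/λ)(t/λ)^(k-1), H(t) = (t/λ)^k and R = 1 - e^(-H(t)). A small sketch with assumed parameters:

    ```python
    import numpy as np

    k, lam = 1.5, 70.0              # assumed Weibull shape and scale (years)

    def cumulative_hazard(t):
        return (t / lam) ** k

    def survival(t):
        return np.exp(-cumulative_hazard(t))

    def hazard_rate(t):
        return (k / lam) * (t / lam) ** (k - 1)

    for t in (40, 60, 80):
        print(f"t={t}: S={survival(t):.3f}, h={hazard_rate(t):.4f}, "
              f"R={1 - survival(t):.3f}")
    ```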

  15. Distributed energy store railguns experiment and analysis

    SciTech Connect

    Holland, L.D.

    1984-01-01

    Electromagnetic acceleration of projectiles holds the potential for achieving higher velocities than yet achieved by any other means. A railgun is the simplest form of electromagnetic macroparticle accelerator and can generate the highest sustained accelerating force. The practical length of conventional railguns is limited by the impedance of the rails because current must be carried along the entire length of the rails. A railgun and power supply system called the distributed energy store railgun was proposed as a solution to this limitation. The distributed energy store railgun used multiple current sources connected to the rails of a railgun at points distributed along the bore. These current sources (energy stores) are turned on in sequence as the projectile moves down the bore so that current is fed to the railgun from behind the armature. In this system the length of the rails that carry the full armature current is less than the total length of the railgun. If a sufficient number of energy stores is used, this removes the limitation on the length of a railgun. An additional feature of distributed energy store type railguns is that they can be designed to maintain a constant pressure on the projectile being accelerated. A distributed energy store railgun was constructed and successfully operated. In addition to this first demonstration of the distributed energy store railgun principle, a theoretical model of the system was also constructed.

  16. Analysis of Temperature Distributions in Nighttime Inversions

    NASA Astrophysics Data System (ADS)

    Telyak, Oksana; Krasouski, Aliaksandr; Svetashev, Alexander; Turishev, Leonid; Barodka, Siarhei

    2015-04-01

    Adequate prediction of temperature inversions in the atmospheric boundary layer is one of the prerequisites for successful forecasting of meteorological parameters and severe weather events. Examples include surface air temperature and precipitation forecasting as well as prediction of fog, frosts and smog with hazardous levels of atmospheric pollution. At the same time, reliable forecasting of temperature inversions remains an unsolved problem. For prediction of nighttime inversions over a specific territory, it is important to study the characteristic features of local circulation cell formation and to properly take local factors into account in order to develop custom modeling techniques for operational use. The present study aims to investigate and analyze vertical temperature distributions in tropospheric inversions (isotherms) over the territory of Belarus. We study several specific cases of the formation, evolution and decay of deep nighttime temperature inversions in Belarus by means of mesoscale numerical simulations with the WRF model, considering basic mechanisms of isothermal and inverse temperature layer formation in the troposphere and the impact of these layers on local circulation cells. Our primary goal is to assess the feasibility of advance prediction of inversion formation with WRF. Modeling results reveal that all cases under consideration have characteristic features of radiative inversions (e.g., their formation times, development phases, inversion intensities, etc.). Regions of "blocking" layer formation are extensive and often spread over the entire territory of Belarus. Inversion decay starts from the lowermost (near-surface) layer (altitudes of 5 to 50 m). In all cases, one can observe the formation of temperature gradients that substantially differ from the basic inversion gradient, i.e. the layer splits into smaller layers, each having a different temperature stratification (isothermal, adiabatic, etc.). As opposed to various empirical techniques as well as

  17. Dentin bonding performance using Weibull statistics and evaluation of acid-base resistant zone formation of recently introduced adhesives.

    PubMed

    Guan, Rui; Takagaki, Tomohiro; Matsui, Naoko; Sato, Takaaki; Burrow, Michael F; Palamara, Joseph; Nikaido, Toru; Tagami, Junji

    2016-07-30

    The dentin bonding durability of three recently introduced dental adhesives, Clearfil SE Bond 2 (SE2), Optibond XTR (XTR), and Scotchbond Universal (SBU), was investigated using Weibull analysis as well as analysis of the micromorphological features of the acid-base resistant zone (ABRZ) formed with each adhesive. The bonding procedures for SBU were divided into three subgroups: self-etch (SBS), and phosphoric acid (PA) etching on moist (SBM) or dry dentin (SBD). All groups were thermocycled for 0, 5,000 and 10,000 cycles, followed by microtensile bond strength testing. An acid-base challenge was undertaken before SEM and TEM observations of the adhesive interface. The etch-and-rinse method with SBU (SBM and SBD) created inferior interfaces on the dentin surface, which resulted in reduced bond durability. ABRZ formation was detected with the self-etch adhesive systems SE2, XTR and SBS. In the PA etching protocols of SBM and SBD, a thick hybrid layer but no ABRZ was detected, which might affect dentin bond durability. PMID:27335136

  18. Stochastic Analysis of Wind Energy for Wind Pump Irrigation in Coastal Andhra Pradesh, India

    NASA Astrophysics Data System (ADS)

    Raju, M. M.; Kumar, A.; Bisht, D.; Rao, D. B.

    2014-09-01

    The rapid escalation in the prices of oil and gas, as well as the increasing demand for energy, has attracted the attention of scientists and researchers to explore the possibility of generating and utilizing alternative and renewable sources of wind energy along the long coastal belt of India, which has considerable wind energy resources. A detailed analysis of wind potential is a prerequisite to harvesting wind energy resources efficiently. Keeping this in view, the present study was undertaken to analyze the wind energy potential and to assess the feasibility of a wind-pump operated irrigation system in the coastal region of Andhra Pradesh, India, where high ground water table conditions are available. The wind speed data were tested stochastically to fit a probability distribution that describes the wind energy potential in the region. The normal and Weibull probability distributions were tested, and on the basis of the Chi-square test, the Weibull distribution gave better results. Hence, it was concluded that the Weibull probability distribution may be used to stochastically describe the annual wind speed data of coastal Andhra Pradesh with better accuracy. The size of the system, as well as the complete irrigation system, was determined with mass curve analysis to satisfy various daily irrigation demands at different risk levels.
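    The chi-square comparison described here can be sketched as follows: bin the observed wind speeds, compute expected bin counts under each fitted model, and compare the statistics (the model with the smaller statistic fits better). Data below are synthetic stand-ins:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    wind = rng.weibull(2.2, 3000) * 5.0          # synthetic wind speeds (m/s)

    def chi2_stat(dist, params, data, bins=12):
        edges = np.quantile(data, np.linspace(0, 1, bins + 1))
        observed, _ = np.histogram(data, bins=edges)
        expected = len(data) * np.diff(dist.cdf(edges, *params))
        return ((observed - expected) ** 2 / expected).sum()

    norm_params = stats.norm.fit(wind)
    weib_params = stats.weibull_min.fit(wind, floc=0)
    print(f"chi-square, normal : {chi2_stat(stats.norm, norm_params, wind):.1f}")
    print(f"chi-square, Weibull: {chi2_stat(stats.weibull_min, weib_params, wind):.1f}")
    ```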

  19. Fracture mechanics concepts in reliability analysis of monolithic ceramics

    NASA Technical Reports Server (NTRS)

    Manderscheid, Jane M.; Gyekenyesi, John P.

    1987-01-01

    Basic design concepts for high-performance, monolithic ceramic structural components are addressed. The design of brittle ceramics differs from that of ductile metals because of the inability of ceramic materials to redistribute high local stresses caused by inherent flaws. Random flaw size and orientation require that a probabilistic analysis be performed in order to determine component reliability. The current trend in probabilistic analysis is to combine linear elastic fracture mechanics concepts with the two-parameter Weibull distribution function to predict component reliability under multiaxial stress states. Nondestructive evaluation supports this analytical effort by supplying data during verification testing. It can also help to determine the statistical parameters that describe the material strength variation, in particular the material threshold strength (the third Weibull parameter), which in the past was often taken as zero for simplicity.
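    With the threshold strength σ_u included, the failure probability takes the three-parameter Weibull form P_f = 1 - exp(-((σ - σ_u)/σ_0)^m) for σ > σ_u and zero below the threshold; a small sketch with assumed parameters:

    ```python
    import numpy as np

    def failure_probability(sigma, m=10.0, sigma_0=400.0, sigma_u=100.0):
        """P_f = 1 - exp(-((sigma - sigma_u)/sigma_0)**m) for sigma > sigma_u."""
        sigma = np.asarray(sigma, dtype=float)
        z = np.clip(sigma - sigma_u, 0.0, None) / sigma_0   # zero below threshold
        return 1.0 - np.exp(-z ** m)

    print(failure_probability([90, 300, 450, 550]).round(4))
    ```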

  20. Integer sparse distributed memory: analysis and results.

    PubMed

    Snaider, Javier; Franklin, Stan; Strain, Steve; George, E Olusegun

    2013-10-01

    Sparse distributed memory is an auto-associative memory system that stores high-dimensional Boolean vectors. Here we present an extension of the original SDM, the Integer SDM, that uses modular arithmetic integer vectors rather than binary vectors. This extension preserves many of the desirable properties of the original SDM: auto-associativity, content addressability, distributed storage, and robustness to noisy inputs. In addition, it improves the representation capabilities of the memory and is more robust to normalization. It can also be extended to support forgetting and reliable sequence storage. We performed several simulations that test the noise robustness property and capacity of the memory. Theoretical analyses of the memory's fidelity and capacity are also presented. PMID:23747569

  1. Economic analysis of efficient distribution transformer trends

    SciTech Connect

    Downing, D.J.; McConnell, B.W.; Barnes, P.R.; Hadley, S.W.; Van Dyke, J.W.

    1998-03-01

    This report outlines an approach that will account for uncertainty in the development of evaluation factors used to identify transformer designs with the lowest total owning cost (TOC). The TOC methodology is described and the most highly variable parameters are discussed. The model is developed to account for uncertainties as well as statistical distributions for the important parameters. Sample calculations are presented. The TOC methodology is applied to data provided by two utilities in order to test its validity.
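    The TOC evaluation is commonly written as TOC = bid price + A x (no-load loss) + B x (load loss), with A and B the evaluation factors in $/W; a minimal sketch of propagating uncertainty in A and B through this formula, with all numbers assumed:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    bid_price = 12000.0                  # $ (assumed)
    no_load_w, load_w = 150.0, 900.0     # transformer losses in watts (assumed)

    # uncertain evaluation factors in $/W, with assumed normal distributions
    A = rng.normal(3.0, 0.5, 100_000)
    B = rng.normal(1.0, 0.3, 100_000)

    toc = bid_price + A * no_load_w + B * load_w
    print(f"mean TOC = ${toc.mean():,.0f}, 90% interval = "
          f"${np.quantile(toc, 0.05):,.0f} .. ${np.quantile(toc, 0.95):,.0f}")
    ```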

  2. Equity analysis of hospital beds distribution in Shiraz, Iran 2014

    PubMed Central

    Hatam, Nahid; Zakeri, Mohammadreza; Sadeghi, Ahmad; Darzi Ramandi, Sajad; Hayati, Ramin; Siavashi, Elham

    2016-01-01

    Background: One of the important aspects of equity in health is equality in the distribution of resources in this sector. The present study aimed to assess the distribution of hospital beds in Shiraz in 2014. Methods: In this retrospective cross-sectional study, the population density index and fair distribution of beds were analyzed by Lorenz curve and Gini coefficient, respectively. Descriptive data were analyzed using Excel software. We used the Distributive Analysis Stata Package (DASP) in STATA software, version 12, for computing the Gini coefficient and drawing the Lorenz curve. Results: The Gini coefficient was 0.68 in the population. Besides, the Gini coefficient of hospital beds' distribution based on population density was 0.70, which represents inequality in the distribution of hospital beds among the nine regions of Shiraz. Conclusion: Although the total number of hospital beds was reasonable in Shiraz, the distribution of these resources was not fair, and inequality was observed in their distribution among the nine regions of Shiraz. PMID:27579284
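    The Lorenz/Gini computation used here is straightforward to reproduce; a sketch with assumed regional bed and population counts (not the study's data):

    ```python
    import numpy as np

    # assumed beds and populations for nine illustrative regions
    beds = np.array([120, 300, 80, 950, 200, 60, 400, 150, 90], dtype=float)
    pop  = np.array([210, 250, 190, 260, 230, 180, 240, 220, 200], dtype=float) * 1e3

    order = np.argsort(beds / pop)               # sort regions by beds per capita
    cum_pop  = np.insert(np.cumsum(pop[order]) / pop.sum(), 0, 0)
    cum_beds = np.insert(np.cumsum(beds[order]) / beds.sum(), 0, 0)

    # Gini = 1 - 2 * area under the Lorenz curve (trapezoidal rule)
    gini = 1 - np.sum((cum_pop[1:] - cum_pop[:-1]) * (cum_beds[1:] + cum_beds[:-1]))
    print(f"Gini coefficient ≈ {gini:.2f}")
    ```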

  3. Robust two-parameter invariant CFAR detection utilizing order statistics applied to Weibull clutter

    NASA Astrophysics Data System (ADS)

    Nagle, Daniel T.; Saniie, Jafar

    1992-08-01

    Constant False Alarm Rate (CFAR) detectors are designed to perform when the clutter information is partially unknown and/or varying. This is accomplished using local threshold estimates from background observations with which the CFAR level is maintained. However, when local observations contain target or irrelevant information, censoring is warranted to improve detection performance. Order Statistic (OS) processors have been shown to perform robustly (with respect to type II errors, or CFAR loss) for heterogeneous background clutter observations, and their performance has been analyzed for exponential clutter with unknown power. In this paper, several order statistics are used to create an invariant test statistic for Weibull clutter with two varying parameters (i.e., power and skewness). The robustness of the two-parameter invariant CFAR detector is analyzed and compared with an uncensored Weibull-Two Parameter (WTP) CFAR detector and a conventional Cell Averaging (CA) CFAR detector (i.e., designed invariant to exponential clutter). The performance trade-offs of these detectors are gauged for different scenarios of volatile clutter environments.
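    For orientation, a plain one-dimensional OS-CFAR detector looks as follows: the threshold is the k-th order statistic of the reference window scaled by a constant alpha. The constant-alpha version shown is the textbook form, not the two-parameter invariant statistic derived in the paper, and all numbers are assumed:

    ```python
    import numpy as np

    def os_cfar(x, n_ref=16, n_guard=2, k=12, alpha=6.0):
        """Return boolean detections for each cell of the input power sequence."""
        half = n_ref // 2
        det = np.zeros_like(x, dtype=bool)
        for i in range(half + n_guard, len(x) - half - n_guard):
            left = x[i - half - n_guard : i - n_guard]
            right = x[i + n_guard + 1 : i + n_guard + 1 + half]
            ref = np.sort(np.concatenate([left, right]))
            det[i] = x[i] > alpha * ref[k - 1]   # k-th order statistic threshold
        return det

    rng = np.random.default_rng(9)
    clutter = rng.weibull(1.2, 500)              # Weibull clutter, assumed shape
    clutter[250] += 25.0                         # injected target
    print(np.flatnonzero(os_cfar(clutter)))
    ```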

  4. A Comparison of Distribution Free and Non-Distribution Free Factor Analysis Methods

    ERIC Educational Resources Information Center

    Ritter, Nicola L.

    2012-01-01

    Many researchers recognize that factor analysis can be conducted on both correlation matrices and variance-covariance matrices. Although most researchers extract factors from non-distribution free or parametric methods, researchers can also extract factors from distribution free or non-parametric methods. The nature of the data dictates the method…

  5. Intensity distribution analysis of cathodoluminescence using the energy loss distribution of electrons.

    PubMed

    Fukuta, Masahiro; Inami, Wataru; Ono, Atsushi; Kawata, Yoshimasa

    2016-01-01

    We present an intensity distribution analysis of cathodoluminescence (CL) excited with a focused electron beam in a luminescent thin film. The energy loss distribution is applied in the developed analysis method in order to determine the arrangement of dipole locations along the path of the electron traveling in the film. The propagating light emitted from each dipole is analyzed with the finite-difference time-domain (FDTD) method. The CL distribution near the film surface is evaluated as a nanometric light source. It is found that a light source with a width of 30 nm is generated in the film by the focused electron beam. We also discuss the accuracy of the developed analysis method by comparison with experimental results. The analysis results are brought into good agreement with the experimental results by introducing the energy loss distribution. PMID:26550930

  6. Statistical wind analysis for near-space applications

    NASA Astrophysics Data System (ADS)

    Roney, Jason A.

    2007-09-01

    Statistical wind models were developed based on the existing observational wind data for near-space altitudes between 60 000 and 100 000 ft (18-30 km) above ground level (AGL) at two locations, Akron, OH, USA, and White Sands, NM, USA. These two sites are envisioned as playing a crucial role in the first flights of high-altitude airships. The analysis shown in this paper has not previously been applied to this region of the stratosphere for such an application. Standard statistics were compiled for these data, such as mean, median, maximum wind speed, and standard deviation, and the data were modeled with Weibull distributions. These statistics indicated that, on a yearly average, there is a lull or a “knee” in the wind between 65 000 and 72 000 ft AGL (20-22 km). From the standard statistics, trends at both locations indicated substantial seasonal variation in the mean wind speed at these heights. The yearly and monthly statistical modeling indicated that Weibull distributions were a reasonable model for the data. Forecasts and hindcasts were done by using a Weibull model based on 2004 data and comparing the model with the 2003 and 2005 data. The 2004 distribution was also a reasonable model for these years. Lastly, the Weibull distribution and cumulative function were used to predict the 50%, 95%, and 99% winds, which are directly related to the expected power requirements of a near-space station-keeping airship. These values indicated that using only the standard deviation of the mean may underestimate the operational conditions.
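    The percentile winds mentioned at the end follow directly from the Weibull inverse CDF, v_p = scale * (-ln(1 - p))^(1/shape); a tiny sketch with assumed (not the paper's) parameters:

    ```python
    import numpy as np

    shape, scale = 2.0, 18.0      # assumed Weibull parameters, wind speed in m/s

    def weibull_percentile(p):
        # inverse CDF of the Weibull distribution
        return scale * (-np.log(1.0 - p)) ** (1.0 / shape)

    for p in (0.50, 0.95, 0.99):
        print(f"{p:.0%} wind: {weibull_percentile(p):.1f} m/s")
    ```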

  7. Progressive failure site generation in AlGaN/GaN high electron mobility transistors under OFF-state stress: Weibull statistics and temperature dependence

    SciTech Connect

    Sun, Huarui Bajo, Miguel Montes; Uren, Michael J.; Kuball, Martin

    2015-01-26

    Gate leakage degradation of AlGaN/GaN high electron mobility transistors under OFF-state stress is investigated using a combination of electrical, optical, and surface morphology characterizations. The generation of leakage “hot spots” at the edge of the gate is found to be strongly temperature accelerated. The time for the formation of each failure site follows a Weibull distribution with a shape parameter in the range of 0.7–0.9 from room temperature up to 120 °C. The average leakage per failure site is only weakly temperature dependent. The stress-induced structural degradation at the leakage sites exhibits a temperature dependence in the surface morphology, which is consistent with a surface defect generation process involving temperature-associated changes in the breakdown sites.

  8. Distributed bearing fault diagnosis based on vibration analysis

    NASA Astrophysics Data System (ADS)

    Dolenc, Boštjan; Boškoski, Pavle; Juričić, Đani

    2016-01-01

    Distributed bearing faults appear under various circumstances, for example due to electroerosion or the progression of localized faults. Bearings with distributed faults tend to generate more complex vibration patterns than those with localized faults. Despite the frequent occurrence of such faults, their diagnosis has attracted limited attention. This paper examines a method for the diagnosis of distributed bearing faults employing vibration analysis. The vibration patterns generated are modeled by incorporating the geometrical imperfections of the bearing components. Comparing envelope spectra of vibration signals shows that one can distinguish between localized and distributed faults. Furthermore, a diagnostic procedure for the detection of distributed faults is proposed. This is evaluated on several bearings with naturally occurring distributed faults, which are compared with fault-free bearings and bearings with localized faults. It is shown experimentally that features extracted from vibrations in fault-free, localized and distributed fault conditions form clearly separable clusters, thus enabling diagnosis.
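    The envelope spectra referred to above are typically obtained by band-pass filtering around a structural resonance and taking the Hilbert-transform envelope; a sketch on a synthetic amplitude-modulated signal (all frequencies assumed):

    ```python
    import numpy as np
    from scipy import signal

    fs = 20_000                                   # sampling rate (Hz), assumed
    t = np.arange(0, 1.0, 1 / fs)
    # synthetic faulty-bearing signal: 3 kHz resonance amplitude-modulated at a
    # 97 Hz fault frequency, buried in noise
    x = (1 + 0.8 * np.cos(2 * np.pi * 97 * t)) * np.sin(2 * np.pi * 3000 * t)
    x += 0.5 * np.random.default_rng(10).standard_normal(len(t))

    sos = signal.butter(4, [2000, 4000], btype="bandpass", fs=fs, output="sos")
    env = np.abs(signal.hilbert(signal.sosfiltfilt(sos, x)))     # envelope

    freqs = np.fft.rfftfreq(len(env), 1 / fs)
    spectrum = np.abs(np.fft.rfft(env - env.mean()))
    print(f"strongest envelope line at {freqs[spectrum.argmax()]:.1f} Hz")
    ```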

  9. Charge distribution analysis of catalysts under simulated reaction conditions

    SciTech Connect

    Freund, F.

    1992-01-01

    Charge Distribution Analysis (CDA) is a technique for measuring mobile charge carriers in dielectric materials. CDA is based on dielectric polarization in an electric field gradient. The CDA apparatus is now under construction.

  10. Precipitator inlet particulate distribution flow analysis

    SciTech Connect

    LaRose, J.A.; Averill, A.

    1994-12-31

    The B and W Rothemuhle precipitators located at PacifiCorp's Wyodak Generating Station in Gillette, Wyoming have, for the past two years, been experiencing discharge wire breakage. The breakage is due to corrosion of the wires; however, the exact cause of the corrosion is unknown. One aspect thought to contribute to the problem is an imbalance of ash loading among the four precipitators. Plant operation has revealed that the ash loading to precipitator C appears to be the heaviest of the four casings, and this casing also appears to have the most severe corrosion. Data from field measurements showed that the gas flows to the four precipitators are fairly uniform, within ±9% of the average. The ash loading data showed a large maldistribution among the precipitators: precipitator C receives 60% more ash than the next most heavily loaded precipitator. A numerical model was created which showed the same results. The model was then utilized to determine design modifications to the existing flue and turning vanes to improve the ash loading distribution. The resulting design was predicted to bring the ash loading to all the precipitators within ±10% of the average.

  11. Performance analysis of static locking in replicated distributed database systems

    NASA Technical Reports Server (NTRS)

    Kuang, Yinghong; Mukkamala, Ravi

    1991-01-01

    Data replication and transaction deadlocks can severely affect the performance of distributed database systems. Many current evaluation techniques ignore these aspects because they are difficult to evaluate through analysis and time consuming to evaluate through simulation. Here, a technique is discussed that combines simulation and analysis to closely illustrate the impact of deadlocks and to evaluate the performance of replicated distributed databases with both shared and exclusive locks.

  13. Surface flaw reliability analysis of ceramic components with the SCARE finite element postprocessor program

    NASA Technical Reports Server (NTRS)

    Gyekenyesi, John P.; Nemeth, Noel N.

    1987-01-01

    The SCARE (Structural Ceramics Analysis and Reliability Evaluation) computer program on statistical fast fracture reliability analysis with quadratic elements for volume distributed imperfections is enhanced to include the use of linear finite elements and the capability of designing against concurrent surface flaw induced ceramic component failure. The SCARE code is presently coupled as a postprocessor to the MSC/NASTRAN general purpose, finite element analysis program. The improved version now includes the Weibull and Batdorf statistical failure theories for both surface and volume flaw based reliability analysis. The program uses the two-parameter Weibull fracture strength cumulative failure probability distribution model with the principle of independent action for poly-axial stress states, and Batdorf's shear-sensitive as well as shear-insensitive statistical theories. The shear-sensitive surface crack configurations include the Griffith crack and Griffith notch geometries, using the total critical coplanar strain energy release rate criterion to predict mixed-mode fracture. Weibull material parameters based on both surface and volume flaw induced fracture can also be calculated from modulus of rupture bar tests, using the least squares method with known specimen geometry and grouped fracture data. The statistical fast fracture theories for surface flaw induced failure, along with selected input and output formats and options, are summarized. An example problem to demonstrate various features of the program is included.
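
    The least-squares estimation of Weibull parameters from grouped fracture data mentioned above can be sketched as follows in Python; the strengths below are hypothetical, and the median-rank style probability estimator is one common choice, not necessarily SCARE's exact scheme:

        import numpy as np

        # Hypothetical MOR bar-test fracture strengths (MPa), sorted ascending.
        strengths = np.sort(np.array([312.0, 335.0, 348.0, 360.0, 371.0,
                                      383.0, 395.0, 410.0, 428.0, 455.0]))
        n = strengths.size
        F = (np.arange(1, n + 1) - 0.5) / n   # median-rank style failure probabilities

        # Linearized two-parameter Weibull: ln(-ln(1-F)) = m*ln(sigma) - m*ln(sigma0)
        x, y = np.log(strengths), np.log(-np.log(1.0 - F))
        m, b = np.polyfit(x, y, 1)
        sigma0 = np.exp(-b / m)
        print(f"Weibull modulus m = {m:.1f}, characteristic strength = {sigma0:.0f} MPa")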

  14. Time-dependent reliability analysis of ceramic engine components

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.

    1993-01-01

    The computer program CARES/LIFE calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. This program is an extension of the CARES (Ceramics Analysis and Reliability Evaluation of Structures) computer program. CARES/LIFE accounts for the phenomenon of subcritical crack growth (SCG) by utilizing either the power or Paris law relations. The two-parameter Weibull cumulative distribution function is used to characterize the variation in component strength. The effects of multiaxial stresses are modeled using either the principle of independent action (PIA), the Weibull normal stress averaging method (NSA), or the Batdorf theory. Inert strength and fatigue parameters are estimated from rupture strength data of naturally flawed specimens loaded in static, dynamic, or cyclic fatigue. Two example problems demonstrating proof testing and fatigue parameter estimation are given.
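
    As a minimal sketch of one ingredient named above, the principle of independent action (PIA) combines the tensile principal stresses of each element into a volume-flaw risk of rupture. The element data and Weibull parameters below are invented for illustration; CARES/LIFE additionally handles NSA, the Batdorf theory, and subcritical crack growth:

        import numpy as np

        # Hypothetical element volumes (mm^3) and principal stresses (MPa) from an FE solution.
        vols = np.array([2.0, 1.5, 3.0, 2.5])
        principal = np.array([[180.0, 120.0, 40.0],
                              [210.0,  90.0, 10.0],
                              [160.0, 150.0, 80.0],
                              [220.0,  60.0,  0.0]])

        m, sigma0 = 10.0, 400.0          # assumed Weibull modulus and scale parameter

        # PIA: each tensile principal stress contributes an independent term
        # to the volume-flaw risk of rupture.
        tensile = np.clip(principal, 0.0, None)
        risk = np.sum(vols[:, None] * (tensile / sigma0) ** m)
        Pf = 1.0 - np.exp(-risk)
        print(f"component failure probability: {Pf:.3e}")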

  15. A Distributed, Parallel Visualization and Analysis Tool

    Energy Science and Technology Software Center (ESTSC)

    2007-12-01

    VisIt is an interactive parallel visualization and graphical analysis tool for viewing scientific data on UNIX and PC platforms. Users can quickly generate visualizations from their data, animate them through time, manipulate them, and save the resulting images for presentations. VisIt contains a rich set of visualization features so that you can view your data in a variety of ways. It can be used to visualize scalar and vector fields defined on two- and three-dimensional (2D and 3D) structured and unstructured meshes. VisIt was designed to handle very large data set sizes in the terascale range, yet it can also handle small data sets in the kilobyte range.

  16. Performance analysis of static locking in distributed database systems

    SciTech Connect

    Shyu, S.C.; Li, V.O.K. (Dept. of Electrical Engineering)

    1990-06-01

    Numerous performance models have been proposed for locking algorithms in centralized database systems, but few have been developed for distributed ones. Existing results on distributed locking usually ignore the deadlock problem so as to simplify the analysis. In this paper, a new performance model for static locking in distributed database systems is developed. A queuing model is used to approximate static locking in distributed database systems without deadlocks. Then a random graph model is proposed to find the deadlock probability of each transaction. The two models are integrated, so that given the transaction arrival rate, the response time and the effective throughput can be calculated.

  17. Analysis of the irregular planar distribution of proteins in membranes.

    PubMed

    Hui, S W; Frank, J

    1985-03-01

    Methods to characterize the irregular but non-random planar distribution of proteins in biological membranes were investigated. The distribution of the proteins constituting the intramembranous particles (IMPs) in human erythrocyte membranes was used as an example. The distribution of IMPs was deliberately altered by experimental means. For real-space analyses, the IMP positions in freeze-fracture micrographs were determined by an automatic procedure described here. Radial distribution and autocorrelation analysis revealed quantitative differences between experimental groups. These methods are more sensitive than the corresponding optical diffraction or Fourier-Bessel analyses of the same IMP distribution data, due to the inability of the diffraction methods to separate contrast and distribution effects. A method to identify IMPs on a non-uniform background is described. PMID:3999133

  18. Distributed transit compartments for arbitrary lifespan distributions in aging populations.

    PubMed

    Koch, Gilbert; Schropp, Johannes

    2015-09-01

    Transit compartment models (TCMs) are often used to describe aging populations in which every individual has its own lifespan. However, in the TCM approach these lifespans are gamma-distributed, which is a serious limitation because the Weibull or more complex distributions are often more realistic. Therefore, we extend the TCM concept to approximately describe any lifespan distribution and call this generalized concept distributed transit compartment models (DTCMs). The validity of DTCMs is established by convergence investigations. From the mechanistic perspective, the transit rates are directly controlled by the lifespan distribution. Further, DTCMs can be used to approximate the convolution of a signal with a probability density function. As an example, a stimulatory drug effect in an aging population with a Weibull-distributed lifespan is presented, where distribution and model parameters are estimated from simulated data. PMID:26100181
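
    A classical TCM with equal transit rates, which yields the gamma lifespan noted above, is easy to simulate; DTCMs generalize this by letting the rates be set by the target lifespan distribution. A minimal Python sketch with illustrative values:

        import numpy as np
        from scipy.integrate import solve_ivp

        n, mean_life = 5, 10.0           # number of compartments and mean lifespan
        k = n / mean_life                # equal transit rates give a gamma(n) lifespan

        def tcm(t, a, inflow=1.0):
            da = np.empty_like(a)
            da[0] = inflow - k * a[0]    # constant recruitment into the first compartment
            da[1:] = k * (a[:-1] - a[1:])
            return da

        sol = solve_ivp(tcm, (0.0, 50.0), np.zeros(n))
        print(f"outflow rate at t = 50: {k * sol.y[-1, -1]:.3f} (approaches the inflow, 1.0)")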

  19. Modeling and analysis of solar distributed generation

    NASA Astrophysics Data System (ADS)

    Ortiz Rivera, Eduardo Ivan

    power point tracking algorithms. Finally, several PV power applications will be presented, such as circuit analysis for a load connected to two different PV arrays, speed control for a DC motor connected to a PVM, and a novel single-phase photovoltaic inverter system using the Z-source converter.

  20. Near field light intensity distribution analysis in bimodal polymer waveguide

    NASA Astrophysics Data System (ADS)

    Herzog, T.; Gut, K.

    2015-12-01

    The paper presents an analysis of the light intensity distribution and sensitivity in a differential interferometer based on a bimodal polymer waveguide. A key part is the analysis of the optimal waveguide layer thickness in the SiO2/SU-8/H2O structure for maximum bulk refractive index sensitivity. The paper presents a new approach to detecting the phase difference between modes by registering only part of the energy propagating in the waveguide. Additionally, an analysis of the changes in the light distribution when the energy in the modes is not equal is performed.

  1. Effect of Porosity on Strength Distribution of Microcrystalline Cellulose.

    PubMed

    Keleṣ, Özgür; Barcenas, Nicholas P; Sprys, Daniel H; Bowman, Keith J

    2015-12-01

    Fracture strength of pharmaceutical compacts varies even for nominally identical samples, which directly affects compaction, comminution, and tablet dosage forms. However, the relationships between porosity and mechanical behavior of compacts are not clear. Here, the effects of porosity on fracture strength and fracture statistics of microcrystalline cellulose compacts were investigated through diametral compression tests. Weibull modulus, a key parameter in Weibull statistics, was observed to decrease with increasing porosity from 17 to 56 vol.%, based on eight sets of compacts at different porosity levels, each set containing ∼ 50 samples, a total of 407 tests. Normal distribution fits better to fracture data for porosity less than 20 vol.%, whereas Weibull distribution is a better fit in the limit of highest porosity. Weibull moduli from 840 unique finite element simulations of isotropic porous materials were compared to experimental Weibull moduli from this research and results on various pharmaceutical materials. Deviations from Weibull statistics are observed. The effect of porosity on fracture strength can be described by a recently proposed micromechanics-based formula. PMID:26022545
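
    The normal-versus-Weibull comparison described above can be sketched by maximized log-likelihood; since both fits here have two free parameters, AIC ranks them identically. Strengths below are simulated, not the study's data:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        # Hypothetical diametral-compression strengths (MPa) at one porosity level.
        s = stats.weibull_min.rvs(c=6.0, scale=4.0, size=50, random_state=rng)

        mu, sd = stats.norm.fit(s)
        c, loc, sc = stats.weibull_min.fit(s, floc=0)
        ll_norm = stats.norm.logpdf(s, mu, sd).sum()
        ll_weib = stats.weibull_min.logpdf(s, c, loc, sc).sum()
        print(f"logL: normal {ll_norm:.1f}, Weibull {ll_weib:.1f}; Weibull modulus ~ {c:.1f}")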

  2. ANALYSIS OF DISTRIBUTION FEEDER LOSSES DUE TO ADDITION OF DISTRIBUTED PHOTOVOLTAIC GENERATORS

    SciTech Connect

    Tuffner, Francis K.; Singh, Ruchi

    2011-08-09

    Distributed generators (DG) are small scale power supplying sources owned by customers or utilities and scattered throughout the power system distribution network. Distributed generation can be both renewable and non-renewable. Addition of distributed generation is primarily to increase feeder capacity and to provide peak load reduction. However, this addition comes with several impacts on the distribution feeder. Several studies have shown that addition of DG leads to reduction of feeder loss. However, most of these studies have considered lumped load and distributed load models to analyze the effects on system losses, where the dynamic variation of load due to seasonal changes is ignored. It is very important for utilities to minimize the losses under all scenarios to decrease revenue losses, promote efficient asset utilization, and therefore, increase feeder capacity. This paper will investigate an IEEE 13-node feeder populated with photovoltaic generators on detailed residential houses with water heater, Heating Ventilation and Air conditioning (HVAC) units, lights, and other plug and convenience loads. An analysis of losses for different power system components, such as transformers, underground and overhead lines, and triplex lines, will be performed. The analysis will utilize different seasons and different solar penetration levels (15%, 30%).

  3. First Experiences with LHC Grid Computing and Distributed Analysis

    SciTech Connect

    Fisk, Ian

    2010-12-01

    In this presentation the experiences of the LHC experiments using grid computing were presented with a focus on experience with distributed analysis. After many years of development, preparation, exercises, and validation the LHC (Large Hadron Collider) experiments are in operations. The computing infrastructure has been heavily utilized in the first 6 months of data collection. The general experience of exploiting the grid infrastructure for organized processing and preparation is described, as well as the successes employing the infrastructure for distributed analysis. At the end the expected evolution and future plans are outlined.

  4. Selection of neutrino burst candidates by pulse spatial distribution analysis

    NASA Astrophysics Data System (ADS)

    Ryasny, V. G.

    1996-02-01

    The method of analysis and possibilities of identification of neutrino bursts from collapsing stars using a spatial distribution of pulses in the multimodular installations, like the Large Volume Detector at the Gran Sasso Laboratory, Liquid Scintillation Detector (Mont Blanc) and Baksan Scintillation Telescope, are discussed. The method could be applicable for any position sensitive detector. By the spatial distribution analysis the burst imitation probability can be decreased by at least 2 orders of magnitude, without significant loss of sensitivity, for currently predicted number of the neutrino interactions.

  5. Entropy Methods For Univariate Distributions in Decision Analysis

    NASA Astrophysics Data System (ADS)

    Abbas, Ali E.

    2003-03-01

    One of the most important steps in decision analysis practice is the elicitation of the decision-maker's belief about an uncertainty of interest in the form of a representative probability distribution. However, the probability elicitation process is a task that involves many cognitive and motivational biases. Alternatively, the decision-maker may provide other information about the distribution of interest, such as its moments, and the maximum entropy method can be used to obtain a full distribution subject to the given moment constraints. In practice, however, decision makers cannot readily provide moments for the distribution, and are much more comfortable providing information about the fractiles of the distribution of interest or bounds on its cumulative probabilities. In this paper we present a graphical method to determine the maximum entropy distribution between upper and lower probability bounds and provide an interpretation for the shape of the fractile-constrained maximum entropy distribution (FMED). We also discuss the problems with the FMED, namely that it is discontinuous and flat over each fractile interval. We present a heuristic approximation to a distribution if, in addition to its fractiles, we also know it is continuous, and work through full examples to illustrate the approach.
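
    The FMED itself is simple to construct: between elicited fractiles the maximum entropy density is uniform, so the density on each interval is its probability mass divided by its width. A short Python sketch with hypothetical elicited fractiles:

        import numpy as np

        # Hypothetical elicited fractiles: P(X <= x) = p at the points below.
        xs = np.array([0.0, 2.0, 3.0, 5.0, 10.0])    # interval endpoints
        ps = np.array([0.0, 0.25, 0.50, 0.75, 1.0])  # cumulative probabilities

        # The FMED is flat over each fractile interval.
        dens = np.diff(ps) / np.diff(xs)
        for lo, hi, d in zip(xs[:-1], xs[1:], dens):
            print(f"[{lo:4.1f}, {hi:4.1f}): density {d:.3f}")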

  6. Discriminating topology in galaxy distributions using network analysis

    NASA Astrophysics Data System (ADS)

    Hong, Sungryong; Coutinho, Bruno C.; Dey, Arjun; Barabási, Albert-L.; Vogelsberger, Mark; Hernquist, Lars; Gebhardt, Karl

    2016-07-01

    The large-scale distribution of galaxies is generally analysed using the two-point correlation function. However, this statistic does not capture the topology of the distribution, and it is necessary to resort to higher-order correlations to break degeneracies. We demonstrate that an alternate approach using network analysis can discriminate between topologically different distributions that have similar two-point correlations. We investigate two galaxy point distributions, one produced by a cosmological simulation and the other by a Lévy walk. For the cosmological simulation, we adopt the redshift z = 0.58 slice from Illustris and select galaxies with stellar masses greater than 10^8 M⊙. The two-point correlation function of these simulated galaxies follows a single power law, ξ(r) ∼ r^-1.5. Then, we generate Lévy walks matching the correlation function and abundance of the simulated galaxies. We find that, while the two simulated galaxy point distributions have the same abundance and two-point correlation function, their spatial distributions are very different; most prominently, the filamentary structures present in the simulation are absent in Lévy fractals. To quantify these missing topologies, we adopt network analysis tools and measure diameter, giant component, and transitivity from networks built by a conventional friends-of-friends recipe with various linking lengths. Unlike the abundance and two-point correlation function, these network quantities reveal a clear separation between the two simulated distributions; therefore, the galaxy distribution simulated by Illustris is not quantitatively a Lévy fractal. We find that the described network quantities offer an efficient tool for discriminating topologies and for comparing observed and theoretical distributions.
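
    A minimal sketch of building a friends-of-friends network from a point set and measuring the quantities named above (giant component, diameter, transitivity), here on random points rather than Illustris data, with an illustrative linking length:

        import numpy as np
        import networkx as nx
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(4)
        points = rng.random((500, 3))          # stand-in galaxy positions in a unit box

        linking_length = 0.12                  # illustrative linking length
        pairs = cKDTree(points).query_pairs(r=linking_length)
        G = nx.Graph()
        G.add_nodes_from(range(len(points)))
        G.add_edges_from(pairs)

        giant = G.subgraph(max(nx.connected_components(G), key=len))
        print(f"giant component: {giant.number_of_nodes()} nodes, "
              f"diameter {nx.diameter(giant)}, transitivity {nx.transitivity(G):.3f}")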

  7. On the Correct Analysis of the Maxwell Distribution

    NASA Astrophysics Data System (ADS)

    Kalanov, Temur Z.

    2006-04-01

    The critical analysis of the Maxwell distribution is proposed. The main results of the analysis are as follows. (1) As is known, an experimental device for studying the Maxwell distribution consists of the following basic physical subsystems: (a) an ideal molecular gas enclosed in a vessel (the gas is in the equilibrium state); (b) a molecular beam emitted from the small aperture of the vessel (the small aperture is a stochastic source of quantum particles). (2) The energy of a molecule in the beam is not a random quantity, since the molecules do not collide with each other. In this case, only the set of monoenergetic molecules emitted by the stochastic source is random. This set is called a quantum gas. The probability p_k that the quantum gas has the energy E_n k is given by the Gibbs quantum canonical distribution: p_k = p_0 exp(-E_n k / T), k = 0, 1, ..., where k is the number of molecules with energy E_n and T is the temperature of the molecules in the vessel. (3) The average number of molecules with energy E_n represents the Planck distribution function: f = Σ_{k=0}^∞ k p_k ≡ f_Planck. (4) In the classical case, the expression E_n f_Planck represents the Maxwell distribution function: f_Maxwell ∝ E_n f_Planck ∝ v^2 exp(-m v^2 / 2T). Consequently, the generally accepted statement that the Maxwell distribution function describes a gas enclosed in a vessel is a logical error.

  8. Comparison of different probability distributions for earthquake hazard assessment in the North Anatolian Fault Zone

    NASA Astrophysics Data System (ADS)

    Yilmaz, Şeyda; Bayrak, Erdem; Bayrak, Yusuf

    2016-04-01

    In this study we examined and compared three different probability distributions for determining the most suitable model in probabilistic assessment of earthquake hazards. We analyzed a reliable homogeneous earthquake catalogue for the period 1900-2015 for magnitude M ≥ 6.0 and estimated the probabilistic seismic hazard in the North Anatolian Fault zone (39°–41° N, 30°–40° E) using three distributions, namely the Weibull distribution, the Frechet distribution, and the three-parameter Weibull distribution. The suitability of the distribution parameters was evaluated with the Kolmogorov-Smirnov (K-S) goodness-of-fit test. We also compared the estimated cumulative probability and the conditional probabilities of occurrence of earthquakes for different elapsed times using these three distributions. We used Easyfit and Matlab software to calculate the distribution parameters and plotted the conditional probability curves. We concluded that the Weibull distribution was more suitable than the other distributions in this region.
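
    A sketch of this kind of distribution comparison via K-S statistics in Python, where scipy's invweibull plays the role of the Frechet distribution; the catalogue here is simulated, not the North Anatolian data:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        # Simulated inter-event times (years) standing in for the M >= 6.0 catalogue.
        t = stats.weibull_min.rvs(c=1.3, scale=8.0, size=60, random_state=rng)

        candidates = {
            "Weibull (2-par)": (stats.weibull_min, {"floc": 0}),
            "Frechet":         (stats.invweibull, {"floc": 0}),
            "Weibull (3-par)": (stats.weibull_min, {}),
        }
        for name, (dist, kw) in candidates.items():
            params = dist.fit(t, **kw)
            d, p = stats.kstest(t, dist.cdf, args=params)
            print(f"{name:15s} K-S D = {d:.3f}, p = {p:.3f}")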

  9. Rapid Analysis of Mass Distribution of Radiation Shielding

    NASA Technical Reports Server (NTRS)

    Zapp, Edward

    2007-01-01

    Radiation Shielding Evaluation Toolset (RADSET) is a computer program that rapidly calculates the spatial distribution of mass of an arbitrary structure for use in ray-tracing analysis of the radiation-shielding properties of the structure. RADSET was written to be used in conjunction with unmodified commercial computer-aided design (CAD) software that provides access to data on the structure and generates selected three-dimensional-appearing views of the structure. RADSET obtains raw geometric, material, and mass data on the structure from the CAD software. From these data, RADSET calculates the distribution(s) of the masses of specific materials about any user-specified point(s). The results of these mass-distribution calculations are imported back into the CAD computing environment, wherein the radiation-shielding calculations are performed.

  10. Inverse Analysis of Distributed Load Using Strain Data

    NASA Astrophysics Data System (ADS)

    Nakamura, Toshiya; Igawa, Hirotaka

    Operational stress data are quite useful in managing the structural integrity and airworthiness of an aircraft. Since the aerodynamic load (pressure) is distributed continuously over the structure surface, identifying the load from a finite number of measured strain data is not easy. Although this is an inverse problem, an empirical correlation between load and strain, obtained through expensive ground tests, is usually used. Some analytical studies have been conducted, but simple mathematical expressions were assumed to approximate the pressure distribution. In the present study a more flexible approximation of the continuous load distribution is proposed. The pressure distribution is identified from a finite number of strain data using the conventional finite element method and the pseudo-inverse matrix. An extension is also made by coupling an aerodynamic restriction with the elastic equation. Numerical examples show that this extension improves the precision of the inverse analysis with a very small number of strain data.
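
    The core identification step reduces to a least-squares solve with the Moore-Penrose pseudo-inverse. A toy Python sketch, with a random influence matrix standing in for the FEM model (all values are hypothetical):

        import numpy as np

        rng = np.random.default_rng(6)
        # Hypothetical influence matrix from an FE model: strain at 12 gauges per
        # unit pressure on each of 6 surface patches.
        A = rng.random((12, 6))
        p_true = np.array([1.0, 0.8, 0.6, 0.5, 0.3, 0.2])      # patch pressures
        strain = A @ p_true + 1e-3 * rng.standard_normal(12)   # noisy measurements

        p_est = np.linalg.pinv(A) @ strain   # least-squares load identification
        print(np.round(p_est, 3))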

  11. Global NLO Analysis of Nuclear Parton Distribution Functions

    SciTech Connect

    Hirai, M.; Kumano, S.; Nagai, T.-H.

    2008-02-21

    Nuclear parton distribution functions (NPDFs) are determined by a global analysis of experimental measurements on structure-function ratios F_2^A/F_2^A' and Drell-Yan cross-section ratios σ_DY^A/σ_DY^A', and their uncertainties are estimated by the Hessian method. The NPDFs are obtained in both leading order (LO) and next-to-leading order (NLO) of α_s. As a result, valence-quark distributions are relatively well determined, whereas antiquark distributions at x > 0.2 and gluon distributions in the whole x region have large uncertainties. The NLO uncertainties are slightly smaller than the LO ones; however, such a NLO improvement is not as significant as in the nucleonic case.

  12. Energy loss analysis of an integrated space power distribution system

    NASA Technical Reports Server (NTRS)

    Kankam, M. D.; Ribeiro, P. F.

    1992-01-01

    The results of studies related to conceptual topologies of an integrated utility-like space power system are described. The system topologies are comparatively analyzed by considering their transmission energy losses as functions of mainly distribution voltage level and load composition. The analysis is expedited by use of the Distribution System Analysis and Simulation (DSAS) software. This computer program, recently developed by the Electric Power Research Institute (EPRI), uses improved load models to solve the power flow within the system. However, present shortcomings of the software with regard to space applications, and the incompletely defined characteristics of a space power system, make the results applicable only to the fundamental trends of energy losses of the topologies studied. Accounting for the effects of the various parameters on system performance, as included here, can form part of a planning tool for a space power distribution system.

  13. GIS-based poverty and population distribution analysis in China

    NASA Astrophysics Data System (ADS)

    Cui, Jing; Wang, Yingjie; Yan, Hong

    2009-07-01

    Geographically, poverty status is not only related to socio-economic factors but is also strongly affected by the geographical environment. In this paper, a GIS-based poverty and population distribution analysis method is introduced for revealing their regional differences. More than 100,000 poor villages and 592 national key poor counties are chosen for the analysis. The results show that poverty distribution tends to concentrate in most of west China and the mountainous rural areas of mid China. Furthermore, the fifth census data are overlaid on those poor areas in order to capture the internal diversity of their socio-economic characteristics. By overlaying poverty-related socio-economic parameters, such as sex ratio, illiteracy, education level, percentage of ethnic minorities, and family composition, the findings show that poverty distribution is strongly correlated with a high illiteracy rate, a high percentage of ethnic minorities, and larger family size.

  14. Analyzing Distributed Functions in an Integrated Hazard Analysis

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry; Massie, Michael J.

    2010-01-01

    Large scale integration of today's aerospace systems is achievable through the use of distributed systems. Validating the safety of distributed systems is significantly more difficult as compared to centralized systems because of the complexity of the interactions between simultaneously active components. Integrated hazard analysis (IHA), a process used to identify unacceptable risks and to provide a means of controlling them, can be applied to either centralized or distributed systems. IHA, though, must be tailored to fit the particular system being analyzed. Distributed systems, for instance, must be analyzed for hazards in terms of the functions that rely on them. This paper will describe systems-oriented IHA techniques (as opposed to traditional failure-event or reliability techniques) that should be employed for distributed systems in aerospace environments. Special considerations will be addressed when dealing with specific distributed systems such as active thermal control, electrical power, command and data handling, and software systems (including the interaction with fault management systems). Because of the significance of second-order effects in large scale distributed systems, the paper will also describe how to analyze secondary functions to secondary functions through the use of channelization.

  15. Molecular Isotopic Distribution Analysis (MIDAs) with adjustable mass accuracy.

    PubMed

    Alves, Gelio; Ogurtsov, Aleksey Y; Yu, Yi-Kuo

    2014-01-01

    In this paper, we present Molecular Isotopic Distribution Analysis (MIDAs), a new software tool designed to compute molecular isotopic distributions with adjustable accuracies. MIDAs offers two algorithms, one polynomial-based and one Fourier-transform-based, both of which compute molecular isotopic distributions accurately and efficiently. The polynomial-based algorithm contains few novel aspects, whereas the Fourier-transform-based algorithm consists mainly of improvements to other existing Fourier-transform-based algorithms. We have benchmarked the performance of the two algorithms implemented in MIDAs with that of eight software packages (BRAIN, Emass, Mercury, Mercury5, NeutronCluster, Qmass, JFC, IC) using a consensus set of benchmark molecules. Under the proposed evaluation criteria, MIDAs's algorithms, JFC, and Emass compute with comparable accuracy the coarse-grained (low-resolution) isotopic distributions and are more accurate than the other software packages. For fine-grained isotopic distributions, we compared IC, MIDAs's polynomial algorithm, and MIDAs's Fourier transform algorithm. Among the three, IC and MIDAs's polynomial algorithm compute isotopic distributions that better resemble their corresponding exact fine-grained (high-resolution) isotopic distributions. MIDAs can be accessed freely through a user-friendly web-interface at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/midas/index.html. PMID:24254576
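
    The polynomial-based approach amounts to repeated polynomial multiplication, i.e. convolution, of elemental isotope-abundance vectors. A coarse-grained, nominal-mass Python sketch using approximate natural abundances (this is an illustration of the idea, not MIDAs code):

        import numpy as np

        # Approximate natural isotope abundances, indexed by nominal mass offset.
        ISO = {"C": [0.9893, 0.0107], "H": [0.999885, 0.000115],
               "N": [0.99636, 0.00364], "O": [0.99757, 0.00038, 0.00205]}

        def isotopic_distribution(formula):
            dist = np.array([1.0])
            for elem, count in formula.items():
                for _ in range(count):
                    # Polynomial multiplication is convolution of abundance vectors.
                    dist = np.convolve(dist, ISO[elem])
            return dist / dist.sum()

        # Glycine, C2H5NO2: monoisotopic, +1, and +2 peak probabilities.
        print(np.round(isotopic_distribution({"C": 2, "H": 5, "N": 1, "O": 2})[:3], 4))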

  17. WATER DISTRIBUTION SYSTEM ANALYSIS: FIELD STUDIES, MODELING AND MANAGEMENT

    EPA Science Inventory

    The user's guide entitled “Water Distribution System Analysis: Field Studies, Modeling and Management” is a reference guide for water utilities and an extensive summarization of information designed to provide drinking water utility personnel (and related consultants and research...

  18. Can Distributed Volunteers Accomplish Massive Data Analysis Tasks?

    NASA Technical Reports Server (NTRS)

    Kanefsky, B.; Barlow, N. G.; Gulick, V. C.

    2001-01-01

    We argue that many image analysis tasks can be performed by distributed amateurs. Our pilot study, with crater surveying and classification, has produced encouraging results in terms of both quantity (100,000 crater entries in 2 months) and quality. Additional information is contained in the original extended abstract.

  19. Data synthesis and display programs for wave distribution function analysis

    NASA Technical Reports Server (NTRS)

    Storey, L. R. O.; Yeh, K. J.

    1992-01-01

    At the National Space Science Data Center (NSSDC) software was written to synthesize and display artificial data for use in developing the methodology of wave distribution analysis. The software comprises two separate interactive programs, one for data synthesis and the other for data display.

  20. Bayesian analysis of a disability model for lung cancer survival.

    PubMed

    Armero, C; Cabras, S; Castellanos, M E; Perra, S; Quirós, A; Oruezábal, M J; Sánchez-Rubio, J

    2016-02-01

    Bayesian reasoning, survival analysis and multi-state models are used to assess survival times for Stage IV non-small-cell lung cancer patients and the evolution of the disease over time. Bayesian estimation is done using minimum informative priors for the Weibull regression survival model, leading to an automatic inferential procedure. Markov chain Monte Carlo methods have been used for approximating posterior distributions and the Bayesian information criterion has been considered for covariate selection. In particular, the posterior distribution of the transition probabilities, resulting from the multi-state model, constitutes a very interesting tool which could be useful to help oncologists and patients make efficient and effective decisions. PMID:22767866
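
    The paper's Bayesian multi-state machinery is beyond a snippet, but the likelihood structure of the underlying Weibull survival model with right censoring can be sketched with a frequentist stand-in; data below are simulated, and the censoring time is arbitrary:

        import numpy as np
        from scipy import stats
        from scipy.optimize import minimize

        rng = np.random.default_rng(7)
        # Simulated survival times (months), right-censored at 24 months.
        t_true = stats.weibull_min.rvs(c=1.2, scale=10.0, size=100, random_state=rng)
        event = t_true < 24.0
        t = np.minimum(t_true, 24.0)

        def negloglik(theta):
            shape, scale = np.exp(theta)     # log-parameterization keeps both positive
            # Events contribute the density; censored cases the survival function.
            return -(stats.weibull_min.logpdf(t[event], shape, scale=scale).sum()
                     + stats.weibull_min.logsf(t[~event], shape, scale=scale).sum())

        res = minimize(negloglik, x0=np.log([1.0, 12.0]))
        print("Weibull shape and scale:", np.round(np.exp(res.x), 2))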

  1. Assessing tephra total grain-size distribution: Insights from field data analysis

    NASA Astrophysics Data System (ADS)

    Costa, A.; Pioli, L.; Bonadonna, C.

    2016-06-01

    The Total Grain-Size Distribution (TGSD) of tephra deposits is crucial for hazard assessment and provides fundamental insights into eruption dynamics. It controls both the mass distribution within the eruptive plume and the sedimentation processes and can provide essential information on the fragmentation mechanisms. TGSD is typically calculated by integrating deposit grain-size at different locations. The result of such integration is affected not only by the number, but also by the spatial distribution and distance from the vent of the sampling sites. In order to evaluate the reliability of TGSDs, we assessed representative sampling distances for pyroclasts of different sizes through dedicated numerical simulations of tephra dispersal. Results reveal that, depending on wind conditions, a representative grain-size distribution of tephra deposits down to ∼100 μm can be obtained by integrating samples collected at distances from less than one tenth up to a few tens of the column height. The statistical properties of TGSDs representative of a range of eruption styles were calculated by fitting the data with a few general distributions given by the sum of two log-normal distributions (bi-Gaussian in Φ-units), the sum of two Weibull distributions, and a generalized log-logistic distribution for the cumulative number distributions. The main parameters of the bi-lognormal fitting correlate with height of the eruptive columns and magma viscosity, allowing general relationships to be used for estimating TGSD generated in a variety of eruptive styles and for different magma compositions. Fitting results of the cumulative number distribution show two different power law trends for coarse and fine fractions of tephra particles, respectively. Our results shed light on the complex processes that control the size of particles being injected into the atmosphere during volcanic explosive eruptions and represent the first attempt to assess TGSD on the basis of pivotal physical
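
    The bi-Gaussian (in Φ-units) fit named above is a nonlinear least-squares problem. A Python sketch on synthetic grain-size data (all parameter values are illustrative):

        import numpy as np
        from scipy.optimize import curve_fit

        def bi_gaussian(phi, w, m1, s1, m2, s2):
            # Weighted sum of two normal densities in phi-units.
            g = lambda m, s: np.exp(-0.5 * ((phi - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
            return w * g(m1, s1) + (1 - w) * g(m2, s2)

        # Synthetic TGSD: weight fraction per phi class, standing in for a deposit integration.
        phi = np.arange(-4.0, 6.0, 1.0)
        wt = bi_gaussian(phi, 0.6, -1.5, 1.0, 2.5, 1.2) + 0.002

        popt, _ = curve_fit(bi_gaussian, phi, wt, p0=(0.5, -1.0, 1.0, 2.0, 1.0),
                            bounds=([0, -5, 0.1, -5, 0.1], [1, 8, 5, 8, 5]))
        print("weight, mode1, sigma1, mode2, sigma2:", np.round(popt, 2))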

  2. Spatial Distribution Characteristics of Healthcare Facilities in Nanjing: Network Point Pattern Analysis and Correlation Analysis

    PubMed Central

    Ni, Jianhua; Qian, Tianlu; Xi, Changbai; Rui, Yikang; Wang, Jiechen

    2016-01-01

    The spatial distribution of urban service facilities is largely constrained by the road network. In this study, network point pattern analysis and correlation analysis were used to analyze the relationship between road network and healthcare facility distribution. The weighted network kernel density estimation method proposed in this study identifies significant differences between the outside and inside areas of the Ming city wall. The results of network K-function analysis show that private hospitals are more evenly distributed than public hospitals, and pharmacy stores tend to cluster around hospitals along the road network. After computing the correlation analysis between different categorized hospitals and street centrality, we find that the distribution of these hospitals correlates highly with the street centralities, and that the correlations are higher with private and small hospitals than with public and large hospitals. The comprehensive analysis results could help examine the reasonability of existing urban healthcare facility distribution and optimize the location of new healthcare facilities. PMID:27548197

  4. Generalized Exponential Distribution in Flood Frequency Analysis for Polish Rivers

    PubMed Central

    Markiewicz, Iwona; Strupczewski, Witold G.; Bogdanowicz, Ewa; Kochanek, Krzysztof

    2015-01-01

    Many distributions have been used in flood frequency analysis (FFA) for fitting the flood extremes data. However, as shown in the paper, the scatter of Polish data plotted on the moment ratio diagram shows that there is still room for a new model. In the paper, we study the usefulness of the generalized exponential (GE) distribution in flood frequency analysis for Polish Rivers. We investigate the fit of the GE distribution to the Polish data of the maximum flows in comparison with the inverse Gaussian (IG) distribution, which in our previous studies showed the best fit among several models commonly used in FFA. Since the use of a discrimination procedure without knowledge of its performance for the considered probability density functions may lead to erroneous conclusions, we compare the probability of correct selection for the GE and IG distributions along with an analysis of the asymptotic model error with respect to the upper quantile values. As an application, both GE and IG distributions are alternatively assumed to describe the annual peak flows for several gauging stations of Polish Rivers. To find the best-fitting model, four discrimination procedures are used. In turn, they are based on the maximized logarithm of the likelihood function (K procedure), on the density function of the scale transformation maximal invariant (QK procedure), on the Kolmogorov-Smirnov statistics (KS procedure), and on the differences between the ML estimate of the 1% quantile and its value assessed by the method of moments and linear moments, in sequence (R procedure). Due to the uncertainty in choosing the best model, the method of aggregation is applied to the estimation of the maximum flow quantiles. PMID:26657239
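
    For reference, the GE distribution has CDF F(x) = (1 - exp(-λx))^α, which scipy does not ship directly, so a maximum-likelihood fit can be written by hand. A Python sketch on simulated peak flows (the true parameters and sample size are invented):

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(8)
        # Simulated annual peak flows (m^3/s) drawn from a GE distribution,
        # F(x) = (1 - exp(-lam*x))**alpha, via the inverse CDF.
        alpha_true, lam_true = 2.5, 0.01
        q = -np.log(1.0 - rng.random(80) ** (1.0 / alpha_true)) / lam_true

        def negloglik(theta):
            a, lam = np.exp(theta)
            return -np.sum(np.log(a * lam) - lam * q
                           + (a - 1.0) * np.log1p(-np.exp(-lam * q)))

        res = minimize(negloglik, x0=np.log([1.0, 1.0 / q.mean()]))
        a_hat, lam_hat = np.exp(res.x)
        upper1 = -np.log(1.0 - 0.99 ** (1.0 / a_hat)) / lam_hat   # 1% exceedance quantile
        print(f"GE shape {a_hat:.2f}, rate {lam_hat:.4f}, 1% quantile {upper1:.0f} m^3/s")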

  5. Integrating software architectures for distributed simulations and simulation analysis communities.

    SciTech Connect

    Goldsby, Michael E.; Fellig, Daniel; Linebarger, John Michael; Moore, Patrick Curtis; Sa, Timothy J.; Hawley, Marilyn F.

    2005-10-01

    The one-year Software Architecture LDRD (No. 79819) was a cross-site effort between Sandia California and Sandia New Mexico. The purpose of this research was to further develop and demonstrate integrating software architecture frameworks for distributed simulation and distributed collaboration in the homeland security domain. The integrated frameworks were initially developed through the Weapons of Mass Destruction Decision Analysis Center (WMD-DAC), sited at SNL/CA, and the National Infrastructure Simulation & Analysis Center (NISAC), sited at SNL/NM. The primary deliverable was a demonstration of both a federation of distributed simulations and a federation of distributed collaborative simulation analysis communities in the context of the same integrated scenario, which was the release of smallpox in San Diego, California. To our knowledge this was the first time such a combination of federations under a single scenario has ever been demonstrated. A secondary deliverable was the creation of the standalone GroupMeld™ collaboration client, which uses the GroupMeld™ synchronous collaboration framework. In addition, a small pilot experiment that used both integrating frameworks allowed a greater range of crisis management options to be performed and evaluated than would have been possible without the use of the frameworks.

  6. Modelling Framework and the Quantitative Analysis of Distributed Energy Resources in Future Distribution Networks

    NASA Astrophysics Data System (ADS)

    Han, Xue; Sandels, Claes; Zhu, Kun; Nordström, Lars

    2013-08-01

    There has been a large body of statements claiming that the large-scale deployment of Distributed Energy Resources (DERs) could eventually reshape future distribution grid operation in numerous ways. Thus, it is necessary to introduce a framework to measure to what extent power system operation will be changed by the various parameters of DERs. This article proposes a modelling framework for an overview analysis of the correlation between DER deployment and distribution network operation. Furthermore, to validate the framework, the authors describe reference models of different categories of DERs with their unique characteristics, comprising distributed generation, active demand, and electric vehicles. Subsequently, a quantitative analysis is made on the basis of the current and envisioned DER deployment scenarios proposed for Sweden. Simulations are performed in two typical distribution network models for four seasons. The simulation results show that, in general, DER deployment brings in possibilities to reduce power losses and voltage drops by compensating power from local generation and optimizing local load profiles.

  7. Spatial Distribution Analysis of Scrub Typhus in Korea

    PubMed Central

    Jin, Hong Sung; Chu, Chaeshin; Han, Dong Yeob

    2013-01-01

    Objective: This study analyzes the spatial distribution of scrub typhus in Korea. Methods: The spatial distribution of Orientia tsutsugamushi occurrence is presented using a geographic information system (GIS) and analyzed by means of spatial clustering and correlations. Results: The provinces of Gangwon-do and Gyeongsangbuk-do show a low incidence throughout the year. Some districts have almost identical environmental conditions for scrub typhus incidence. Land use change in a district does not directly affect the incidence rate. Conclusion: GIS analysis shows the spatial characteristics of scrub typhus. This research can be used to construct a spatial-temporal model to understand the tsutsugamushi epidemic. PMID:24159523

  8. Local structure studies of materials using pair distribution function analysis

    NASA Astrophysics Data System (ADS)

    Peterson, Joseph W.

    A collection of pair distribution function (PDF) studies on various materials is presented in this dissertation. In each case, local structure information of interest pushes the current limits of what these studies can accomplish. The goal is to provide insight into the individual material behaviors as well as to investigate ways to expand the current limits of PDF analysis. Where possible, I provide a framework for how PDF analysis might be applied to a wider set of material phenomena. Throughout the dissertation, I discuss i) the capabilities of the PDF method to provide information pertaining to a material's structure and properties, ii) current limitations in the conventional approach to PDF analysis, iii) possible solutions to overcome certain limitations in PDF analysis, and iv) suggestions for future work to expand and improve the capabilities of PDF analysis.

  9. Vibrational Energy Distribution Analysis (VEDA): Scopes and limitations

    NASA Astrophysics Data System (ADS)

    Jamróz, Michał H.

    2013-10-01

    The principle of operation of the VEDA program, written by the author for Potential Energy Distribution (PED) analysis of theoretical vibrational spectra, is described. Nowadays, PED analysis is an indispensable tool in serious analysis of vibrational spectra. To perform PED analysis it is necessary to define 3N-6 linearly independent local mode coordinates. Even for 20-atom molecules this is a difficult task. The VEDA program reads the input data automatically from the Gaussian program output files. Then, VEDA automatically proposes an introductory set of local mode coordinates. Next, more adequate coordinates are proposed by the program and optimized to obtain maximal elements of each column (internal coordinate) of the PED matrix (the EPM parameter). The possibility of automatic optimization of PED contributions is a unique feature of the VEDA program, absent in any other program performing PED analysis.

  10. Analysis of georadar data to estimate the snow depth distribution

    NASA Astrophysics Data System (ADS)

    Godio, A.; Rege, R. B.

    2016-06-01

    We have performed extensive georadar surveys for mapping the snow depth in the basin of Breuil-Cervinia (Aosta Valley) in the Italian Alps, close to the Matterhorn. More than 9 km of georadar profiles were acquired in April 2008 and 15 km in April 2009, distributed over a hydrological basin of about 12 km2. Radar surveys were carried out partially on the iced area of the Ventina glacier at elevations higher than 3000 m a.s.l. and partially at lower elevations (2500 m-3000 m) on the gentle slopes of the basin, where the winter snow accumulated directly on the ground surface. The snow distribution in the basin at the end of the season can vary significantly according to elevation, exposure, and ground morphology. In small catchments the snow depth reached 6-7 m. At higher elevations, on the glacier, a more homogeneous distribution is usually observed. A descriptive statistical analysis of the dataset is discussed to demonstrate the high spatial variability of the snow depth distribution in the area. The probability distribution of the snow depth fits the gamma distribution with good correlation. We found no satisfactory relationship between snow depth and the main morphological parameters of the terrain (elevation, slope, curvature). This suggests that the snow distribution at the end of the winter season is mainly conditioned by transport phenomena and redistribution by wind action. The comparison of the georadar surveys with hand probe measurements points out the low accuracy of snow depth estimates obtained by conventional hand probing alone, encouraging the development of technology for fast and accurate mapping of snow depth at the basin scale.
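
    The gamma fit reported above can be reproduced in a few lines, here on simulated depths standing in for the georadar picks (shape and scale values are invented):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(9)
        # Simulated snow depths (m) standing in for georadar-derived picks.
        depth = stats.gamma.rvs(a=4.0, scale=0.8, size=300, random_state=rng)

        a, loc, scale = stats.gamma.fit(depth, floc=0)
        ks = stats.kstest(depth, stats.gamma.cdf, args=(a, loc, scale))
        print(f"gamma shape {a:.2f}, scale {scale:.2f} m, K-S p = {ks.pvalue:.2f}")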

  11. HammerCloud: A Stress Testing System for Distributed Analysis

    NASA Astrophysics Data System (ADS)

    van der Ster, Daniel C.; Elmsheuser, Johannes; Úbeda García, Mario; Paladin, Massimo

    2011-12-01

    Distributed analysis of LHC data is an I/O-intensive activity which places large demands on the internal network, storage, and local disks at remote computing facilities. Commissioning and maintaining a site to provide an efficient distributed analysis service is therefore a challenge which can be aided by tools to help evaluate a variety of infrastructure designs and configurations. HammerCloud is one such tool; it is a stress testing service which is used by central operations teams, regional coordinators, and local site admins to (a) submit an arbitrary number of analysis jobs to a number of sites, (b) maintain at steady state a predefined number of jobs running at the sites under test, (c) produce web-based reports summarizing the efficiency and performance of the sites under test, and (d) present a web interface for historical test results to both evaluate progress and compare sites. HammerCloud was built around the distributed analysis framework Ganga, exploiting its API for grid job management. HammerCloud has been employed by the ATLAS experiment for continuous testing of many sites worldwide, and also during large-scale computing challenges such as STEP'09 and UAT'09, where the scale of the tests exceeded 10,000 concurrently running jobs and 1,000,000 total jobs over multi-day periods. In addition, HammerCloud is being adopted by the CMS experiment; the plugin structure of HammerCloud allows the execution of CMS jobs using their official tool (CRAB).

  12. New acquisition techniques and statistical analysis of bubble size distributions

    NASA Astrophysics Data System (ADS)

    Proussevitch, A.; Sahagian, D.

    2005-12-01

    Various approaches have been taken to solve the long-standing problem of determining size distributions of objects embedded in an opaque medium. In the case of vesicles in volcanic rocks, the most reliable technique is 3-D imagery by computed X-Ray tomography. However, this method is expensive, requires intensive computational resources and thus limited and not always available for an investigator. As a cheaper alternative, 2-D cross-sectional data is commonly available, but requires stereological analysis for 3-D conversion. A stereology technique for spherical bubbles is quite robust but elongated non-spherical bubbles require complicated conversion approaches and large observed populations. We have revised computational schemes of applying non-spherical stereology for practical analysis of bubble size distributions. The basic idea of this new approach is to exclude from the conversion those classes (bins) of non-spherical bubbles that provide a larger cross-section probability distribution than a maximum value which depends on mean aspect ratio. Thus, in contrast to traditional stereological techniques, larger bubbles are "predicted" from the rest of the population. As a proof of principle, we have compared distributions so obtained with direct 3-D imagery (X-Ray tomography) for non-spherical bubbles from the same samples of vesicular basalts collected from the Colorado Plateau. The results of the comparison demonstrate that in cases where x-ray tomography is impractical, stereology can be used with reasonable reliability, even for non-spherical vesicles.

  13. Comparing distributions of environmental outcomes for regulatory environmental justice analysis.

    PubMed

    Maguire, Kelly; Sheriff, Glenn

    2011-05-01

    Economists have long been interested in measuring distributional impacts of policy interventions. As environmental justice (EJ) emerged as an ethical issue in the 1970s, the academic literature has provided statistical analyses of the incidence and causes of various environmental outcomes as they relate to race, income, and other demographic variables. In the context of regulatory impacts, however, there is a lack of consensus regarding what information is relevant for EJ analysis, and how best to present it. This paper helps frame the discussion by suggesting a set of questions fundamental to regulatory EJ analysis, reviewing past approaches to quantifying distributional equity, and discussing the potential for adapting existing tools to the regulatory context. PMID:21655146

  14. Modeling and convergence analysis of distributed coevolutionary algorithms.

    PubMed

    Subbu, Raj; Sanderson, Arthur C

    2004-04-01

    A theoretical foundation is presented for modeling and convergence analysis of a class of distributed coevolutionary algorithms applied to optimization problems in which the variables are partitioned among p nodes. An evolutionary algorithm at each of the p nodes performs a local evolutionary search based on its own set of primary variables, and the secondary variable set at each node is clamped during this phase. An infrequent intercommunication between the nodes updates the secondary variables at each node. The local search and intercommunication phases alternate, resulting in a cooperative search by the p nodes. First, we specify a theoretical basis for a class of centralized evolutionary algorithms in terms of construction and evolution of sampling distributions over the feasible space. Next, this foundation is extended to develop a model for a class of distributed coevolutionary algorithms. Convergence and convergence rate analyses are pursued for basic classes of objective functions. Our theoretical investigation reveals that for certain unimodal and multimodal objectives, we can expect these algorithms to converge at a geometrical rate. The distributed coevolutionary algorithms are of most interest from the perspective of their performance advantage compared to centralized algorithms, when they execute in a network environment with significant local access and internode communication delays. The relative performance of these algorithms is therefore evaluated in a distributed environment with realistic parameters of network behavior. PMID:15376831
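
    A toy Python sketch of the alternating local-search/intercommunication scheme described above, with two nodes partitioning the variables of a simple test objective; the mutation scheme and all parameters are illustrative, not the paper's model:

        import numpy as np

        rng = np.random.default_rng(10)

        def objective(x):                 # simple separable test objective
            return float(np.sum(x ** 2))

        dim, p = 8, 2
        blocks = np.array_split(np.arange(dim), p)   # primary variables of each node
        x = rng.uniform(-5.0, 5.0, dim)              # shared solution vector

        for _ in range(50):                          # intercommunication rounds
            for b in blocks:                         # each node's local search phase
                for _ in range(20):                  # (1+1)-style mutation of its block
                    cand = x.copy()
                    cand[b] += 0.3 * rng.standard_normal(b.size)  # other block clamped
                    if objective(cand) < objective(x):
                        x = cand
        print(f"objective after cooperative search: {objective(x):.4f}")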

  15. Distribution System Reliability Analysis for Smart Grid Applications

    NASA Astrophysics Data System (ADS)

    Aljohani, Tawfiq Masad

    Reliability of power systems is a key aspect in modern power system planning, design, and operation. The ascendance of the smart grid concept has provided high hopes of developing an intelligent network that is capable of being a self-healing grid, offering the ability to overcome the interruption problems that face the utility and cost it tens of millions in repairs and losses. To address these reliability concerns, the power utilities and interested parties have spent an extensive amount of time and effort to analyze and study the reliability of the generation and transmission sectors of the power grid. Only recently has attention shifted to improving the reliability of the distribution network, the connection joint between the power providers and the consumers where most of the electricity problems occur. In this work, we examine the effect of smart grid applications in improving the reliability of power distribution networks. The test system used in conducting this thesis is the IEEE 34-node test feeder, released in 2003 by the Distribution System Analysis Subcommittee of the IEEE Power Engineering Society. The objective is to analyze the feeder for the optimal placement of the automatic switching devices and quantify their proper installation based on the performance of the distribution system. The measures will be the changes in the system reliability indices, including SAIDI, SAIFI, and EUE. The goal is to design and simulate the effect of the installation of Distributed Generators (DGs) on the utility's distribution system and measure the potential improvement of its reliability. The software used in this work is DISREL, an intelligent power distribution software package developed by General Reliability Co.
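
    The customer-based indices named above are computed directly from outage records. A minimal Python sketch with hypothetical numbers (EUE would additionally require unserved-energy data):

        # Hypothetical outage records: (customers interrupted, outage duration in minutes).
        interruptions = [(120, 90.0), (40, 35.0), (300, 150.0)]
        customers_served = 1000

        saifi = sum(n for n, _ in interruptions) / customers_served
        saidi = sum(n * d for n, d in interruptions) / customers_served
        caidi = saidi / saifi
        print(f"SAIFI {saifi:.2f} int./customer, SAIDI {saidi:.1f} min/customer, "
              f"CAIDI {caidi:.1f} min")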

  16. Electrical Power Distribution and Control Modeling and Analysis

    NASA Technical Reports Server (NTRS)

    Fu, Johnny S.; Liffring, Mark; Mehdi, Ishaque S.

    2001-01-01

    This slide presentation reviews the use of Electrical Power Distribution and Control (EPD&C) Modeling and how modeling can support analysis. The presentation discusses using the EASY5 model to simulate and analyze the Space Shuttle Electric Auxiliary Power Unit. Diagrams of the model schematics are included, as well as graphs of the battery cell impedance, hydraulic load dynamics, and EPD&C response to hydraulic load variations.

  17. A Study of ATLAS Grid Performance for Distributed Analysis

    NASA Astrophysics Data System (ADS)

    Panitkin, Sergey; Fine, Valery; Wenaus, Torre

    2012-12-01

    In the past two years the ATLAS Collaboration at the LHC has collected a large volume of data and published a number of groundbreaking papers. The Grid-based ATLAS distributed computing infrastructure played a crucial role in enabling timely analysis of the data. We present a study of the performance and usage of the ATLAS Grid as a platform for physics analysis in 2011. This includes studies of general properties as well as timing properties of user jobs (wait time, run time, etc.). These studies are based on mining of data archived by the PanDA workload management system.

  18. Distributed and interactive visual analysis of omics data.

    PubMed

    Farag, Yehia; Berven, Frode S; Jonassen, Inge; Petersen, Kjell; Barsnes, Harald

    2015-11-01

    The amount of publicly shared proteomics data has grown exponentially over the last decade as the solutions for sharing and storing the data have improved. However, the use of the data is often limited by the manner in which it is made available. There are two main approaches: download and inspect the proteomics data locally, or interact with the data via one or more web pages. The first is limited by having to download the data and thus requires local computational skills and resources, while the latter is most often limited in terms of interactivity and the analysis options available. A solution is to develop web-based systems supporting distributed and fully interactive visual analysis of proteomics data. The use of a distributed architecture makes it possible to perform the computational analysis at the server, while the results of the analysis can be displayed via a web browser without the need to download the whole dataset. Here the challenges related to developing such systems for omics data are discussed, especially how this approach allows for multiple connected interactive visual displays of omics datasets in a web-based setting, and the benefits this provides for computational analysis of proteomics data. This article is part of a Special Issue entitled: Computational Proteomics. PMID:26047716

  19. Automatic analysis of attack data from distributed honeypot network

    NASA Astrophysics Data System (ADS)

    Safarik, Jakub; Voznak, MIroslav; Rezac, Filip; Partila, Pavol; Tomala, Karel

    2013-05-01

    There are many ways of getting real data about malicious activity in a network. One of them relies on masquerading monitoring servers as production ones. These servers are called honeypots, and data about attacks on them bring us valuable information about actual attacks and techniques used by hackers. The article describes a distributed topology of honeypots, which was developed with a strong orientation toward monitoring of IP telephony traffic. IP telephony servers can be easily exposed to various types of attacks, and without protection, this situation can lead to loss of money and other unpleasant consequences. Using a distributed topology with honeypots placed in different geographical locations and networks provides more valuable and independent results. With an automatic system for gathering information from all honeypots, it is possible to work with all information at one centralized point. Communication between the honeypots and the centralized data store uses secure SSH tunnels, and the server communicates only with authorized honeypots. The centralized server also automatically analyses data from each honeypot. Results of this analysis, along with other statistical data about malicious activity, are easily accessible through a built-in web server. All statistical and analysis reports serve as the information basis for an algorithm which classifies the different types of VoIP attacks used. The web interface then provides a tool for quick comparison and evaluation of actual attacks in all monitored networks. The article describes both the honeypot nodes in the distributed architecture, which monitor suspicious activity, and the methods and algorithms used on the server side for analysis of the gathered data.

  20. Human leptospirosis distribution pattern analysis in Hulu Langat, Selangor

    NASA Astrophysics Data System (ADS)

    Zulkifli, Zuhafiza; Shariff, Abdul Rashid Mohamed; Tarmidi, Zakri M.

    2016-06-01

    This paper discusses the distribution pattern of human leptospirosis in the Hulu Langat District, Selangor, Malaysia. The data used in this study are leptospirosis case reports and spatial boundaries. Leptospirosis case data were collected from the Health Office of Hulu Langat, and spatial boundaries, including lot and district boundaries, were collected from the Department of Mapping and Surveying Malaysia (JUPEM). A total of 599 leptospirosis cases were reported in 2013, and these data were mapped based on the addresses provided in the case reports. This study uses three statistical methods to analyze the distribution pattern: Moran's I, average nearest neighbor (ANN) analysis, and kernel density estimation. The analysis was used to determine the spatial distribution and the average distance of leptospirosis cases and to locate the hotspots. The Moran's I analysis, with a value of -0.202816, indicated that the cases were random, showing negative spatial autocorrelation among leptospirosis cases. The ANN analysis indicated that the cases are in a clustered pattern, with a reported average nearest neighbor value of -21.80. The hotspots were identified and mapped in the Hulu Langat District.
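
    A minimal sketch of the global Moran's I statistic used above, assuming a small hypothetical set of case counts and a binary contiguity weight matrix; a negative value, as reported in the study, suggests negative spatial autocorrelation.

        import numpy as np

        def morans_i(values, weights):
            """Global Moran's I for values at n locations and an n-by-n
            spatial weight matrix with zero diagonal."""
            z = values - values.mean()
            s0 = weights.sum()
            n = len(values)
            return (n / s0) * (z @ weights @ z) / (z @ z)

        # Hypothetical case counts at 5 locations with a contiguity matrix.
        cases = np.array([3.0, 1.0, 4.0, 0.0, 2.0])
        w = np.array([
            [0, 1, 0, 0, 1],
            [1, 0, 1, 0, 0],
            [0, 1, 0, 1, 0],
            [0, 0, 1, 0, 1],
            [1, 0, 0, 1, 0],
        ], dtype=float)
        print(morans_i(cases, w))  # < 0 suggests negative autocorrelation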

  1. GPS FOM Chimney Analysis using Generalized Extreme Value Distribution

    NASA Technical Reports Server (NTRS)

    Ott, Rick; Frisbee, Joe; Saha, Kanan

    2004-01-01

    Often an objective of a statistical analysis is to estimate a limit value, such as a 3-sigma 95% confidence upper limit, from a data sample. The generalized extreme value (GEV) distribution can be profitably employed in many situations for such an estimate. It is well known that, according to the central limit theorem, the mean value of a large data set is normally distributed irrespective of the distribution of the data from which the mean value is derived. In a somewhat similar fashion, it is observed that the extreme value of a data set often has a distribution that can be formulated with a generalized extreme value distribution. In space shuttle entry with 3-string GPS navigation, the figure of merit (FOM) value gives a measure of GPS navigated state accuracy. A GPS navigated state with a FOM of 6 or higher is deemed unacceptable and is said to form a FOM 6 or higher chimney. A FOM chimney is a period of time during which the FOM value stays higher than 5. A longer period of FOM values of 6 or higher causes the navigated state to accumulate more error for lack of state updates. For an acceptable landing it is imperative that the state error remain low; hence, at low altitude during entry, GPS data with FOM greater than 5 must not last more than 138 seconds. To test GPS performance, many entry test cases were simulated at the Avionics Development Laboratory. Only high-value FOM chimneys are consequential. The extreme value statistical technique is applied to analyze high-value FOM chimneys. The maximum likelihood method is used to determine the parameters that characterize the GEV distribution, and then the limit value statistics are estimated.
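
    A minimal sketch of the GEV limit-estimation idea, assuming hypothetical chimney-duration maxima and using scipy's genextreme for the maximum likelihood fit; the percentile chosen is only illustrative.

        import numpy as np
        from scipy.stats import genextreme

        rng = np.random.default_rng(1)
        # Hypothetical per-run maxima of FOM chimney durations (seconds).
        chimney_maxima = rng.gumbel(loc=60.0, scale=15.0, size=200)

        # Maximum likelihood fit of the GEV distribution (scipy's `c` is the
        # negated shape parameter relative to some textbook conventions).
        c, loc, scale = genextreme.fit(chimney_maxima)

        # Upper limit estimate: the 99.87th percentile (~3-sigma one-sided).
        limit = genextreme.ppf(0.9987, c, loc=loc, scale=scale)
        print(f"shape={c:.3f}, loc={loc:.1f}, scale={scale:.1f}, limit={limit:.1f} s")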

  2. Mechanical Network in Titin Immunoglobulin from Force Distribution Analysis

    PubMed Central

    Wilmanns, Matthias; Gräter, Frauke

    2009-01-01

    The role of mechanical force in cellular processes is increasingly revealed by single molecule experiments and simulations of force-induced transitions in proteins. How the applied force propagates within proteins determines their mechanical behavior yet remains largely unknown. We present a new method based on molecular dynamics simulations to disclose the distribution of strain in protein structures, here for the newly determined high-resolution crystal structure of I27, a titin immunoglobulin (IG) domain. We obtain a sparse, spatially connected, and highly anisotropic mechanical network. This allows us to detect load-bearing motifs composed of interstrand hydrogen bonds and hydrophobic core interactions, including parts distal to the site to which force was applied. The role of the force distribution pattern for mechanical stability is tested by in silico unfolding of I27 mutants. We then compare the observed force pattern to the sparse network of coevolved residues found in this family. We find a remarkable overlap, suggesting the force distribution to reflect constraints for the evolutionary design of mechanical resistance in the IG family. The force distribution analysis provides a molecular interpretation of coevolution and opens the road to the study of the mechanism of signal propagation in proteins in general. PMID:19282960

  3. Componential distribution analysis of food using near infrared ray image

    NASA Astrophysics Data System (ADS)

    Yamauchi, Hiroki; Kato, Kunihito; Yamamoto, Kazuhiko; Ogawa, Noriko; Ohba, Kimie

    2008-11-01

    The components of food related to "deliciousness" are usually evaluated by componential analysis, which determines the content and type of components in the food. However, componential analysis cannot resolve the measurements spatially, and the measurement is time-consuming. We propose a method to measure the two-dimensional distribution of a component in food using a near infrared (IR) image. The advantage of our method is the ability to visualize otherwise invisible components. Many components in food have characteristics such as absorption and reflection of light in the IR range. The component content is measured using subtraction between images taken at two wavelengths of near-IR light. In this paper, we describe a method to measure food components using near-IR image processing, and we show an application visualizing the saccharose in a pumpkin.
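
    A minimal sketch of the two-wavelength subtraction idea, assuming hypothetical registered near-IR images at an absorbing and a reference wavelength.

        import numpy as np

        # Hypothetical registered near-IR images at an absorbing wavelength
        # for the target component and at a nearby reference wavelength.
        rng = np.random.default_rng(2)
        img_absorbing = rng.uniform(0.2, 0.8, size=(64, 64))
        img_reference = img_absorbing + rng.normal(0.1, 0.02, size=(64, 64))

        # The difference image highlights regions where the component absorbs.
        component_map = img_reference - img_absorbing
        component_map = np.clip(component_map, 0.0, None)  # drop negative noise
        print(component_map.mean(), component_map.max())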

  4. Iterative Monte Carlo analysis of spin-dependent parton distributions

    NASA Astrophysics Data System (ADS)

    Sato, Nobuo; Melnitchouk, W.; Kuhn, S. E.; Ethier, J. J.; Accardi, A.; Jefferson Lab Angular Momentum Collaboration

    2016-04-01

    We present a comprehensive new global QCD analysis of polarized inclusive deep-inelastic scattering, including the latest high-precision data on longitudinal and transverse polarization asymmetries from Jefferson Lab and elsewhere. The analysis is performed using a new iterative Monte Carlo fitting technique which generates stable fits to polarized parton distribution functions (PDFs) with statistically rigorous uncertainties. Inclusion of the Jefferson Lab data leads to a reduction in the PDF errors for the valence and sea quarks, as well as in the gluon polarization uncertainty at x ≳0.1 . The study also provides the first determination of the flavor-separated twist-3 PDFs and the d2 moment of the nucleon within a global PDF analysis.

  5. Performance Analysis of an Actor-Based Distributed Simulation

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D.

    1998-01-01

    Object-oriented design of simulation programs appears to be very attractive because of the natural association of components in the simulated system with objects. There is great potential in distributing the simulation across several computers for the purpose of parallel computation and its consequent handling of larger problems in less elapsed time. One approach to such a design is to use "actors", that is, active objects with their own thread of control. Because these objects execute concurrently, communication is via messages. This is in contrast to an object-oriented design using passive objects where communication between objects is via method calls (direct calls when they are in the same address space and remote procedure calls when they are in different address spaces or different machines). This paper describes a performance analysis program for the evaluation of a design for distributed simulations based upon actors.
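
    A minimal sketch of the actor idea described above, assuming Python threads with mailbox queues as the message channel; a production distributed simulation would replace the queues with network transport.

        import threading
        import queue

        class Actor(threading.Thread):
            """An active object with its own thread of control; communication
            is only via messages placed on its mailbox queue."""
            def __init__(self, name):
                super().__init__(daemon=True)
                self.name = name
                self.mailbox = queue.Queue()
                self.clock = 0.0

            def run(self):
                while True:
                    msg = self.mailbox.get()
                    if msg is None:          # poison pill terminates the actor
                        break
                    event_time, payload = msg
                    self.clock = max(self.clock, event_time)
                    print(f"{self.name}: processed {payload} at t={self.clock}")

        # Two simulation components exchanging timestamped event messages.
        a, b = Actor("pump"), Actor("valve")
        a.start(); b.start()
        b.mailbox.put((1.0, "pressure update"))
        a.mailbox.put((2.0, "flow update"))
        for actor in (a, b):
            actor.mailbox.put(None)
            actor.join()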

  6. Principal Component Analysis for Normal-Distribution-Valued Symbolic Data.

    PubMed

    Wang, Huiwen; Chen, Meiling; Shi, Xiaojun; Li, Nan

    2016-02-01

    This paper puts forward a new approach to principal component analysis (PCA) for normal-distribution-valued symbolic data, which has a vast potential of applications in the economic and management fields. We derive a full set of numerical characteristics and the variance-covariance structure for such data, which forms the foundation for our analytical PCA approach. Our approach is able to use more of the variance information in the original data than the prevailing representative-type approach in the literature, which uses only centers, vertices, etc. The paper also provides an accurate approach to constructing the observations in a PC space based on the linear additivity property of the normal distribution. The effectiveness of the proposed method is illustrated by simulated numerical experiments. Finally, our method is applied to explain the puzzle of the risk-return tradeoff in China's stock market. PMID:25095276

  7. Distributed analysis environment for HEP and interdisciplinary applications

    NASA Astrophysics Data System (ADS)

    Mościcki, J. T.

    2003-04-01

    The huge data volumes of Large Hadron Collider experiments require parallel end-user analysis on clusters of hundreds of machines. While the focus of end-user high-energy physics analysis is on ntuples, the master-worker model of parallel processing may also be used in other contexts such as detector simulation. The aim of the DIANE R&D project (http://cern.ch/it-proj-diane), currently carried out by the CERN IT/API group, is to create a generic, component-based framework for distributed, parallel data processing in the master-worker model. Pre-compiled user analysis code is loaded dynamically at runtime in component libraries and called back when appropriate. Such an application-oriented framework must be flexible enough to integrate with emerging grid technologies as they become available. Therefore, common services such as environment reconstruction, code distribution, load balancing and authentication are designed and implemented as pluggable modules. This approach makes it easy to replace them with modules implemented with newer technology as necessary. The paper gives an overview of the DIANE architecture and explains the main design choices. Selected examples of diverse applications from a variety of domains applicable to DIANE are presented, as well as preliminary benchmarking results.
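
    A minimal master-worker sketch using Python's multiprocessing pool, with a toy analysis function standing in for the dynamically loaded user code; DIANE itself is a component framework, so this only illustrates the processing model.

        from multiprocessing import Pool

        def analyze_chunk(chunk):
            # Stand-in for pre-compiled user analysis code called back
            # by the framework on each unit of work.
            return sum(x * x for x in chunk)

        if __name__ == "__main__":
            # The master splits the dataset into tasks, distributes them
            # to a pool of workers, then gathers and merges the results.
            dataset = [list(range(i, i + 1000)) for i in range(0, 10000, 1000)]
            with Pool(processes=4) as pool:
                partial_results = pool.map(analyze_chunk, dataset)
            print(sum(partial_results))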

  8. Time series power flow analysis for distribution connected PV generation.

    SciTech Connect

    Broderick, Robert Joseph; Quiroz, Jimmy Edward; Ellis, Abraham; Reno, Matthew J.; Smith, Jeff; Dugan, Roger

    2013-01-01

    Distributed photovoltaic (PV) projects must go through an interconnection study process before connecting to the distribution grid. These studies are intended to identify the likely impacts and mitigation alternatives. In the majority of cases, system impacts can be ruled out or mitigation can be identified without an involved study, through a screening process or a simple supplemental review study. For some proposed projects, however, expensive and time-consuming interconnection studies are required. The challenges in performing the studies are twofold. First, every study scenario is potentially unique, as the studies are often highly specific to the amount of PV generation capacity, which varies greatly from feeder to feeder and is often unevenly distributed along the same feeder. This can cause location-specific impacts and mitigations. The second challenge is the inherent variability in PV power output, which can interact with feeder operation in complex ways by affecting the operation of voltage regulation and protection devices. The typical simulation tools and methods in use today for distribution system planning are often not adequate to accurately assess these potential impacts. This report demonstrates how quasi-static time series (QSTS) simulation and high time-resolution data can be used to assess the potential impacts in a more comprehensive manner. The QSTS simulations are applied to a set of sample feeders with high PV deployment to illustrate the usefulness of the approach. The report describes methods that can help determine how PV affects distribution system operations. The simulation results are focused on enhancing the understanding of the underlying technical issues, and the examples also highlight the steps needed to perform QSTS simulation and describe the data needed to drive the simulations. The goal of this report is to make the methodology of time series power flow analysis readily accessible to utilities and others responsible for evaluating the impacts of distributed PV generation.
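
    A minimal sketch of a QSTS loop, assuming hypothetical 24-hour load and PV profiles and a toy `solve_power_flow` stand-in for a real distribution power flow solver such as those discussed in the report.

        import numpy as np

        def solve_power_flow(load_kw, pv_kw):
            # Toy voltage model standing in for a full power flow solution:
            # feeder-end voltage sags with net load.
            net_kw = load_kw - pv_kw
            return 1.0 - 7e-5 * net_kw  # per-unit voltage at the feeder end

        hours = np.arange(24)
        load_profile = 800 + 300 * np.sin((hours - 6) * np.pi / 12) ** 2
        pv_profile = np.clip(600 * np.sin((hours - 6) * np.pi / 12), 0, None)

        # QSTS: solve one quasi-static power flow per time step.
        voltages = [solve_power_flow(l, p) for l, p in zip(load_profile, pv_profile)]
        violations = [h for h, v in zip(hours, voltages) if not 0.95 <= v <= 1.05]
        print("hours with voltage violations:", violations)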

  9. Multiobjective sensitivity analysis and optimization of distributed hydrologic model MOBIDIC

    NASA Astrophysics Data System (ADS)

    Yang, J.; Castelli, F.; Chen, Y.

    2014-10-01

    Calibration of distributed hydrologic models usually involves dealing with a large number of distributed parameters and optimization problems with multiple, often conflicting, objectives that arise in a natural fashion. This study presents a multiobjective sensitivity and optimization approach to handle these problems for the MOBIDIC (MOdello di Bilancio Idrologico DIstribuito e Continuo) distributed hydrologic model, which combines two sensitivity analysis techniques (the Morris method and the state-dependent parameter (SDP) method) with the multiobjective optimization (MOO) approach ε-NSGAII (Non-dominated Sorting Genetic Algorithm II). This approach was implemented to calibrate MOBIDIC in its application to the Davidson watershed, North Carolina, with three objective functions: the standardized root mean square error (SRMSE) of the logarithmically transformed discharge, the water balance index, and the mean absolute error of the logarithmically transformed flow duration curve. Its results were compared with those of a single-objective optimization (SOO) using the traditional Nelder-Mead simplex algorithm employed in MOBIDIC, taking the objective function to be the Euclidean norm of these three objectives. Results show that (1) the two sensitivity analysis techniques are effective and efficient for determining the sensitive processes and insensitive parameters: surface runoff and evaporation are very sensitive processes for all three objective functions, while groundwater recession and soil hydraulic conductivity are not sensitive and were excluded from the optimization; (2) both MOO and SOO lead to acceptable simulations, e.g., for MOO, the average Nash-Sutcliffe value is 0.75 in the calibration period and 0.70 in the validation period; (3) evaporation and surface runoff show similar importance for the watershed water balance, while the contribution of baseflow can be ignored; and (4) compared to SOO, which was dependent on the initial starting location, MOO provides a more complete picture of the trade-offs among the competing objectives.
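
    A simplified one-at-a-time elementary-effects estimate in the spirit of the Morris method, assuming a hypothetical stand-in objective; the actual study used trajectory-based Morris sampling together with the SDP method.

        import numpy as np

        rng = np.random.default_rng(4)

        def model(x):
            # Hypothetical stand-in for one of the MOBIDIC objective functions.
            return x[0] ** 2 + 5.0 * x[1] + 0.1 * np.sin(x[2])

        def morris_mu_star(model, bounds, r=50, delta=0.1):
            """Mean absolute elementary effect (mu*) per parameter, from r
            randomly placed one-at-a-time perturbations."""
            k = len(bounds)
            effects = np.zeros((r, k))
            for i in range(r):
                # Sample so the perturbed point stays within the bounds.
                x = np.array([rng.uniform(lo, hi - delta * (hi - lo))
                              for lo, hi in bounds])
                fx = model(x)
                for j, (lo, hi) in enumerate(bounds):
                    xp = x.copy()
                    xp[j] += delta * (hi - lo)
                    effects[i, j] = abs(model(xp) - fx) / delta
            return effects.mean(axis=0)

        bounds = [(0, 1), (0, 1), (0, 1)]
        print(morris_mu_star(model, bounds))  # larger mu* = more sensitive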

  10. Numerical analysis of decoy state quantum key distribution protocols

    SciTech Connect

    Harrington, Jim W; Rice, Patrick R

    2008-01-01

    Decoy state protocols are a useful tool for many quantum key distribution systems implemented with weak coherent pulses, allowing significantly better secret bit rates and longer maximum distances. In this paper we present a method to numerically find optimal three-level protocols, and we examine how the secret bit rate and the optimized parameters are dependent on various system properties, such as session length, transmission loss, and visibility. Additionally, we show how to modify the decoy state analysis to handle partially distinguishable decoy states as well as uncertainty in the prepared intensities.

  11. Application of Wigner distribution function for analysis of radio occultations

    NASA Astrophysics Data System (ADS)

    Gorbunov, M. E.; Lauritsen, K. B.; Leroy, S. S.

    2010-12-01

    We present the Wigner distribution function (WDF) as an alternative to radio holographic (RH) analysis in the interpretation of radio occultation (RO) observations of the Earth's atmosphere. RH analysis is widely used in RO retrieval to isolate signal from noise and to identify atmospheric multipath. The same task is performed by WDF which also maps a 1-D wave function to 2-D time-frequency phase space and which has maxima located at the ray manifold. Unlike the standard RH technique based on the spectrum analysis in small sliding apertures, WDF is given by a global integral transform, which allows for a higher resolution. We present a tomographic derivation of the WDF and discuss its properties. Examples of analysis of simulations and COSMIC RO data show that WDF allows for a much sharper localization of the details of bending angle profiles as compared to the standard RH analysis in sliding apertures. Both WDF and RH allow for identification of multivalued bending angle profiles arising in the presence of strong horizontal gradients and may introduce a negative bias into bending angle retrieval.
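
    A minimal sketch of a discrete pseudo-Wigner distribution for a 1-D signal, assuming a toy chirp; RO processing adds tomographic and windowing details beyond this illustration.

        import numpy as np

        def wigner(x):
            """Discrete pseudo-Wigner distribution of a complex 1-D signal;
            rows index frequency, columns index time."""
            n = len(x)
            w = np.zeros((n, n))
            for t in range(n):
                taumax = min(t, n - 1 - t)
                tau = np.arange(-taumax, taumax + 1)
                corr = np.zeros(n, dtype=complex)
                corr[tau % n] = x[t + tau] * np.conj(x[t - tau])
                # The lag correlation is Hermitian, so the FFT is real.
                w[:, t] = np.fft.fft(corr).real
            return w

        # A chirp concentrates along its instantaneous-frequency ridge.
        t = np.arange(256)
        sig = np.exp(1j * 2 * np.pi * (0.05 + 0.3 * t / 512) * t)
        print(wigner(sig).shape)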

  12. Efficient network meta-analysis: a confidence distribution approach*

    PubMed Central

    Yang, Guang; Liu, Dungang; Liu, Regina Y.; Xie, Minge; Hoaglin, David C.

    2014-01-01

    Network meta-analysis synthesizes several studies of multiple treatment comparisons to simultaneously provide inference for all treatments in the network. It can often strengthen inference on pairwise comparisons by borrowing evidence from other comparisons in the network. Current network meta-analysis approaches are derived from either conventional pairwise meta-analysis or hierarchical Bayesian methods. This paper introduces a new approach for network meta-analysis by combining confidence distributions (CDs). Instead of combining point estimators from individual studies as in the conventional approach, the new approach combines CDs, which contain richer information than point estimators, and thus achieves greater efficiency in its inference. The proposed CD approach can efficiently integrate all studies in the network and provide inference for all treatments even when individual studies contain only comparisons of subsets of the treatments. Through numerical studies with real and simulated data sets, the proposed approach is shown to outperform or at least equal the traditional pairwise meta-analysis and a commonly used Bayesian hierarchical model. Although the Bayesian approach may yield comparable results with a suitably chosen prior, it is highly sensitive to the choice of priors (especially the prior on the between-trial covariance structure), which is often subjective. The CD approach is a general frequentist approach and is prior-free. Moreover, it can always provide a proper inference for all the treatment effects regardless of the between-trial covariance structure. PMID:25067933
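
    In the special case of normal confidence distributions, combining CDs reduces to inverse-variance weighting, as in this minimal sketch with hypothetical study estimates.

        import numpy as np

        # Hypothetical treatment-effect estimates and standard errors from
        # three studies; for normal CDs the combination reduces to
        # inverse-variance weighting of the individual distributions.
        est = np.array([0.30, 0.10, 0.25])
        se = np.array([0.10, 0.15, 0.08])

        w = 1.0 / se ** 2
        combined = np.sum(w * est) / np.sum(w)
        combined_se = np.sqrt(1.0 / np.sum(w))
        print(f"combined effect = {combined:.3f} +/- {combined_se:.3f}")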

  13. Cost-benefit analysis of potassium iodide distribution programs

    SciTech Connect

    Aldrich, D.C.

    1982-01-01

    An analysis has been performed to provide guidance to policymakers concerning the effectiveness of potassium iodide (KI) as a thyroid blocking agent in potential reactor accident situations, the distance to which (or area within which) it should be distributed and its relative effectiveness compared to other available protective measures. The analysis was performed using the Reactor Safety Study (WASH-1400) consequence model. Four categories of accidents were addressed: gap activity release accident (GAP), GAP without containment isolation, core melt with a melt-through release, and core melt with an atmospheric release. Cost-benefit ratios (US $/thyroid nodule prevented) are given assuming that no other protective measures are taken. Uncertainties due to health effects parameters, accident source terms, accident probabilities and costs are assessed. The effects of other potential protective measures, such as evacuation and sheltering, and the impact on children (critical population) are evaluated.

  14. Circularly symmetric distributed feedback semiconductor laser: An analysis

    SciTech Connect

    Erdogan, T.; Hall, D.G.

    1990-08-15

    We analyze the near-threshold behavior of a circularly symmetric distributed feedback laser by developing a coupled-mode theory analysis for all azimuthal modes. We show that the equations that describe the low-order azimuthal modes are, to a very good approximation, the same as those for the one-dimensional (linear) distributed feedback laser. We examine the behavior of higher-order azimuthal modes by numerically solving the exact coupled-mode equations. We find that while a significant amount of mode discrimination exists among radial (longitudinal) modes, as in the one-dimensional distributed feedback laser, there is a much smaller degree of discrimination among azimuthal modes, indicating a probability of multimode operation. Despite the multimode behavior, we find that the frequency bandwidth associated with the modes that do lase ought to be smaller than the spacing between Fabry-Perot modes of a typical semiconductor laser. This laser is an excellent candidate for a surface-emitting laser: it should have a superb quality output beam and is well suited for array operation.

  16. Data intensive high energy physics analysis in a distributed cloud

    NASA Astrophysics Data System (ADS)

    Charbonneau, A.; Agarwal, A.; Anderson, M.; Armstrong, P.; Fransham, K.; Gable, I.; Harris, D.; Impey, R.; Leavett-Brown, C.; Paterson, M.; Podaima, W.; Sobie, R. J.; Vliet, M.

    2012-02-01

    We show that distributed Infrastructure-as-a-Service (IaaS) compute clouds can be effectively used for the analysis of high energy physics data. We have designed a distributed cloud system that works with any application using large input data sets requiring a high throughput computing environment. The system uses IaaS-enabled science and commercial clusters in Canada and the United States. We describe the process in which a user prepares an analysis virtual machine (VM) and submits batch jobs to a central scheduler. The system boots the user-specific VM on one of the IaaS clouds, runs the jobs and returns the output to the user. The user application accesses a central database for calibration data during the execution of the application. Similarly, the data is located in a central location and streamed by the running application. The system can easily run one hundred simultaneous jobs in an efficient manner and should scale to many hundreds and possibly thousands of user jobs.

  17. Preliminary analysis of hub and spoke air freight distribution system

    NASA Technical Reports Server (NTRS)

    Whitehead, A. H., Jr.

    1978-01-01

    A brief analysis is made of a hub-and-spoke air freight distribution system which would employ fewer than 15 hub centers worldwide, with very large advanced distributed-load freighters providing the line-haul delivery between hubs. This system is compared to a more conventional network using conventionally designed long-haul freighters which travel between numerous major airports. The analysis calculates all of the transportation costs, including handling charges and pickup and delivery costs. The results show that the economics of the hub/spoke system are severely compromised by the extensive use of feeder aircraft to deliver cargo into and from the large freighter terminals. Not only are the higher costs for the smaller feeder airplanes disadvantageous, but their use implies an additional exchange of cargo between modes compared to truck delivery. The conventional system uses far fewer feeder airplanes and, in many cases, none at all. When feeder aircraft are eliminated from the hub/spoke system, however, that system is universally more economical than any conventional system employing smaller line-haul aircraft.

  18. A theoretical analysis of basin-scale groundwater temperature distribution

    NASA Astrophysics Data System (ADS)

    An, Ran; Jiang, Xiao-Wei; Wang, Jun-Zhi; Wan, Li; Wang, Xu-Sheng; Li, Hailong

    2015-03-01

    The theory of regional groundwater flow is critical for explaining heat transport by moving groundwater in basins. Domenico and Palciauskas's (1973) pioneering study on convective heat transport in a simple basin assumed that convection has a small influence on redistributing groundwater temperature. Moreover, there has been no research focused on the temperature distribution around stagnation zones among flow systems. In this paper, the temperature distribution in the simple basin is reexamined and that in a complex basin with nested flow systems is explored. In both basins, compared to the temperature distribution due to conduction, convection leads to a lower temperature in most parts of the basin except for a small part near the discharge area. There is a high-temperature anomaly around the basin-bottom stagnation point where two flow systems converge due to a low degree of convection and a long travel distance, but there is no anomaly around the basin-bottom stagnation point where two flow systems diverge. In the complex basin, there are also high-temperature anomalies around internal stagnation points. Temperature around internal stagnation points could be very high when they are close to the basin bottom, for example, due to the small permeability anisotropy ratio. The temperature distribution revealed in this study could be valuable when using heat as a tracer to identify the pattern of groundwater flow in large-scale basins.
    Reference: Domenico PA, Palciauskas VV (1973) Theoretical analysis of forced convective heat transfer in regional groundwater flow. Geological Society of America Bulletin 84:3803-3814

  19. Evaluation of Distribution Analysis Software for DER Applications

    SciTech Connect

    Staunton, RH

    2003-01-23

    Power providers will be forced both to fully acknowledge the trend and to plan for accommodating DER [3]. With bureaucratic barriers [4], lack of time and resources, tariffs, etc. still seen in certain regions of the country, changes still need to be made. Given continued technical advances in DER, the time is fast approaching when the industry, nationwide, must not only accept DER freely but also provide or review in-depth technical assessments of how DER should be integrated into and managed throughout the distribution system. Characterization studies are needed to fully understand how both the utility system and DER devices themselves will respond to all reasonable events (e.g., grid disturbances, faults, rapid growth, diverse and multiple DER systems, large reactive loads). Some of this work has already begun as it relates to operation and control of DER [5] and microturbine performance characterization [6,7]. One of the most urgently needed tools that can provide these types of analyses is a distribution network analysis program in combination with models for various DER. Together, they can be used for (1) analyzing DER placement in distribution networks and (2) helping to ensure that adequate transmission reliability is maintained. Surveys of the market show products that represent a partial match to these needs; specifically, software that has been developed to plan electrical distribution systems and analyze reliability (in a near total absence of DER). The first part of this study (Sections 2 and 3 of the report) looks at a number of these software programs and provides both summary descriptions and comparisons. The second part of this study (Section 4 of the report) considers the suitability of these analysis tools for DER studies. It considers steady-state modeling and assessment work performed by ORNL using one commercially available tool on feeder data provided by a southern utility. Appendix A provides a technical report on the results of this modeling effort.

  20. Earthquake interevent time distribution in Kachchh, Northwestern India

    NASA Astrophysics Data System (ADS)

    Pasari, Sumanta; Dikshit, Onkar

    2015-08-01

    Statistical properties of earthquake interevent times have long been a topic of interest to seismologists and earthquake professionals, mainly for hazard-related concerns. In this paper, we present a comprehensive study of the temporal statistics of earthquake interoccurrence times in the seismically active Kachchh peninsula (western India) using thirteen probability distributions: exponential, gamma, lognormal, Weibull, Levy, Maxwell, Pareto, Rayleigh, inverse Gaussian (Brownian passage time), inverse Weibull (Frechet), exponentiated exponential, exponentiated Rayleigh (Burr type X), and exponentiated Weibull. Statistical inferences for the scale and shape parameters of these distributions are drawn from the maximum likelihood estimates and the Fisher information matrices; the latter are used as a surrogate tool to appraise the parametric uncertainty in the estimation process. The results are based on two goodness-of-fit criteria: the maximum likelihood criterion with its modification to the Akaike information criterion (AIC) and the Kolmogorov-Smirnov (K-S) minimum distance criterion. These results reveal that (i) the exponential model provides the best fit, (ii) the gamma, lognormal, Weibull, inverse Gaussian, exponentiated exponential, exponentiated Rayleigh, and exponentiated Weibull models provide an intermediate fit, and (iii) the rest, namely Levy, Maxwell, Pareto, Rayleigh, and inverse Weibull, fit poorly to the earthquake catalog of Kachchh and its adjacent regions. This study also analyzes the present-day seismicity in terms of the estimated recurrence interval and conditional probability curves (hazard curves). The estimated cumulative probability and the conditional probability of a magnitude 5.0 or higher event reach 0.8-0.9 by 2027-2036 and 2034-2043, respectively. These values have significant implications for a variety of practical applications, including earthquake insurance and seismic zonation.
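
    A minimal sketch of the model-comparison workflow, assuming synthetic interevent times and scipy maximum likelihood fits scored by AIC for a few of the thirteen candidate distributions.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        # Hypothetical interevent times (days); a real study would use the
        # Kachchh earthquake catalog.
        times = rng.exponential(scale=120.0, size=300)

        candidates = {
            "exponential": stats.expon,
            "gamma": stats.gamma,
            "lognormal": stats.lognorm,
            "Weibull": stats.weibull_min,
        }
        for name, dist in candidates.items():
            params = dist.fit(times)                     # maximum likelihood fit
            loglik = np.sum(dist.logpdf(times, *params))
            aic = 2 * len(params) - 2 * loglik           # lower AIC = better fit
            print(f"{name:12s} AIC = {aic:.1f}")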

  1. Effects of specimen size on the flexural strength and Weibull modulus of nuclear graphite IG-110, NBG-18, and PCEA

    NASA Astrophysics Data System (ADS)

    Chi, Se-Hwan

    2015-09-01

    Changes in flexural strength and Weibull modulus due to specimen size were investigated for three nuclear graphite grades, IG-110, NBG-18, and PCEA, using four-point 1/3-point (4-1/3) loading with specimens of three different sizes: 3.18 (T) × 6.35 (W) × 50.8 (L) mm, 6.50 (T) × 12.0 (W) × 52.0 (L) mm, and 18.0 (T) × 16.0 (W) × 64 (L) mm (210 specimens in total). Results showed that the specimen size effects were grade dependent: while NBG-18 showed a rather significant specimen size effect (37% difference between the 3 T and 18 T specimens), the differences in IG-110 and PCEA were 7.6-15%. The maximum differences in flexural strength due to specimen size were larger in PCEA and NBG-18, which have larger coke particles (medium grain size: >300 μm), than in IG-110 with its super-fine coke particle size (25 μm). The Weibull modulus showed a data-population dependency, in that it decreased with an increasing number of data points used for modulus determination. A good correlation between fracture surface roughness and flexural strength was confirmed.
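
    A minimal sketch of estimating the Weibull modulus from flexural strength data by a two-parameter maximum likelihood fit, assuming synthetic strengths; the modulus is the fitted shape parameter.

        import numpy as np
        from scipy.stats import weibull_min

        rng = np.random.default_rng(6)
        # Hypothetical flexural strengths (MPa) for one specimen size;
        # shape=10 mimics a typical graphite Weibull modulus.
        strengths = weibull_min.rvs(10.0, scale=40.0, size=30, random_state=rng)

        # Two-parameter Weibull MLE: fix the location at zero so the fitted
        # shape parameter is the Weibull modulus m.
        m, _, sigma0 = weibull_min.fit(strengths, floc=0)
        print(f"Weibull modulus m = {m:.1f}, characteristic strength = {sigma0:.1f} MPa")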

  2. Silk Fiber Mechanics from Multiscale Force Distribution Analysis

    PubMed Central

    Cetinkaya, Murat; Xiao, Senbo; Markert, Bernd; Stacklies, Wolfram; Gräter, Frauke

    2011-01-01

    Here we decipher the molecular determinants for the extreme toughness of spider silk fibers. Our bottom-up computational approach incorporates molecular dynamics and finite element simulations. Therefore, the approach allows the analysis of the internal strain distribution and load-carrying motifs in silk fibers on scales of both molecular and continuum mechanics. We thereby dissect the contributions from the nanoscale building blocks, the soft amorphous and the strong crystalline subunits, to silk fiber mechanics. We identify the amorphous subunits not only to give rise to high elasticity, but to also ensure efficient stress homogenization through the friction between entangled chains, which also allows the crystals to withstand stresses as high as 2 GPa in the context of the amorphous matrix. We show that the maximal toughness of silk is achieved at 10–40% crystallinity depending on the distribution of crystals in the fiber. We also determined a serial arrangement of the crystalline and amorphous subunits in lamellae to outperform a random or a parallel arrangement, putting forward what we believe to be a new structural model for silk and other semicrystalline materials. The multiscale approach, not requiring any empirical parameters, is applicable to other partially ordered polymeric systems. Hence, it is an efficient tool for the design of artificial silk fibers. PMID:21354403

  3. Phylogenetic analysis reveals a scattered distribution of autumn colours

    PubMed Central

    Archetti, Marco

    2009-01-01

    Background and Aims Leaf colour in autumn is rarely considered informative for taxonomy, but there is now growing interest in the evolution of autumn colours and different hypotheses are debated. Research efforts are hindered by the lack of basic information: the phylogenetic distribution of autumn colours. It is not known when and how autumn colours evolved. Methods Data are reported on the autumn colours of 2368 tree species belonging to 400 genera of the temperate regions of the world, and an analysis is made of their phylogenetic relationships in order to reconstruct the evolutionary origin of red and yellow in autumn leaves. Key Results Red autumn colours are present in at least 290 species (70 genera), and evolved independently at least 25 times. Yellow is present independently from red in at least 378 species (97 genera) and evolved at least 28 times. Conclusions The phylogenetic reconstruction suggests that autumn colours have been acquired and lost many times during evolution. This scattered distribution could be explained by hypotheses involving some kind of coevolutionary interaction or by hypotheses that rely on the need for photoprotection. PMID:19126636

  4. Lacunarity and multifractal analysis of the large DLA mass distribution

    NASA Astrophysics Data System (ADS)

    Rodriguez-Romo, Suemi; Sosa-Herrera, Antonio

    2013-08-01

    We show the methodology used to analyze the fractal and mass-multifractal properties of very large Diffusion-Limited Aggregation (DLA) clusters, with a maximum of 10^9 particles for 2D aggregates and 10^8 particles for 3D clusters, to support our main result: the scaling behavior obtained in our experiments corresponds to the expected behavior of monofractal objects. In order to estimate lacunarity measures for large DLA clusters, we develop a variant of the gliding-box algorithm which reduces the computer time needed to obtain experimental results. We show how our mass-multifractal data tend toward monofractal behavior for the mass distribution in the limit of very large clusters. For small clusters, lacunarity analysis yields data that might be interpreted as two different fractal dimensions while the cluster grows; however, this effect tends to vanish as the cluster size increases further, in such a way that monofractality is achieved. The outcomes of this paper lead us to conclude that the previously reported mass-multifractality behavior (Vicsek et al., 1990 [13]) detected for DLA clusters is a consequence of finite-size effects and floating-point precision limitations and not an intrinsic feature of the phenomenon, since the scaling behavior of our DLA cluster space corresponds to monofractal objects, a situation that is remarkably noticeable in the limit of very large clusters.
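
    A minimal gliding-box lacunarity sketch for a binary image, assuming a random toy aggregate rather than a real DLA cluster; the authors' variant is an optimization of this basic algorithm.

        import numpy as np
        from numpy.lib.stride_tricks import sliding_window_view

        def lacunarity(image, box):
            """Gliding-box lacunarity of a binary image for one box size:
            <M^2>/<M>^2 over the box-mass distribution."""
            masses = sliding_window_view(image, (box, box)).sum(axis=(2, 3))
            mean = masses.mean()
            return (masses ** 2).mean() / mean ** 2

        rng = np.random.default_rng(7)
        cluster = (rng.random((256, 256)) < 0.1).astype(int)  # toy aggregate
        for r in (2, 4, 8, 16):
            print(r, lacunarity(cluster, r))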

  5. SATMC: Spectral energy distribution Analysis Through Markov Chains

    NASA Astrophysics Data System (ADS)

    Johnson, S. P.; Wilson, G. W.; Tang, Y.; Scott, K. S.

    2013-12-01

    We present the general-purpose spectral energy distribution (SED) fitting tool SED Analysis Through Markov Chains (SATMC). Utilizing Markov chain Monte Carlo (MCMC) algorithms, SATMC fits an observed SED to SED templates or models of the user's choice to infer intrinsic parameters, generate confidence levels and produce the posterior parameter distribution. Here, we describe the key features of SATMC, from the underlying MCMC engine to specific features for handling SED fitting. We detail several test cases of SATMC, comparing results obtained with traditional least-squares methods, which highlight its accuracy, robustness and wide range of possible applications. We also present a sample of submillimetre galaxies (SMGs) that have been fitted using the SED synthesis routine GRASIL as input. In general, these SMGs are shown to occupy a large volume of parameter space, particularly in regard to their star formation rates, which range from ~30 to 3000 M⊙ yr^-1, and stellar masses, which range from ~10^10 to 10^12 M⊙. Taking advantage of the Bayesian formalism inherent to SATMC, we also show how the fitting results may change under different parametrizations (i.e. different initial mass functions) and through additional or improved photometry, the latter being crucial to the study of high-redshift galaxies.
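
    A minimal Metropolis sketch of the MCMC SED-fitting idea, assuming toy two-parameter power-law "SED" photometry; SATMC's engine is considerably more sophisticated.

        import numpy as np

        rng = np.random.default_rng(8)
        # Hypothetical photometry: observed fluxes, errors, and a toy SED
        # "model" with two parameters (normalization, slope).
        bands = np.linspace(0.5, 5.0, 8)
        obs = 2.0 * bands ** -1.5 + rng.normal(0, 0.05, size=bands.size)
        err = np.full(bands.size, 0.05)

        def log_post(theta):
            norm, slope = theta
            if norm <= 0:                      # flat prior with a positivity cut
                return -np.inf
            model = norm * bands ** slope
            return -0.5 * np.sum(((obs - model) / err) ** 2)

        theta = np.array([1.0, -1.0])
        chain = []
        lp = log_post(theta)
        for _ in range(5000):                  # Metropolis sampler
            prop = theta + rng.normal(0, [0.05, 0.02])
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            chain.append(theta.copy())
        chain = np.array(chain)
        print(chain[1000:].mean(axis=0))       # posterior means after burn-in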

  6. A Distributed Flocking Approach for Information Stream Clustering Analysis

    SciTech Connect

    Cui, Xiaohui; Potok, Thomas E

    2006-01-01

    Intelligence analysts are currently overwhelmed with the amount of information streams generated every day, and there is a lack of comprehensive tools that can analyze these streams in real time. Document clustering analysis plays an important role in improving the accuracy of information retrieval. However, most clustering technologies can only be applied to static document collections because they normally require a large amount of computational resources and a long time to obtain accurate results. It is very difficult to cluster dynamically changing text information streams on an individual computer. Our early research resulted in a dynamic reactive flock clustering algorithm which can continually refine the clustering result and quickly react to changes in document contents. This characteristic makes the algorithm suitable for cluster analysis of dynamically changing document information, such as text information streams. Because of the decentralized character of this algorithm, a distributed approach is a very natural way to increase its clustering speed. In this paper, we present a distributed multi-agent flocking approach for text information stream clustering and discuss the decentralized architectures and communication schemes for load balancing and status information synchronization in this approach.

  7. One-Dimensional Analysis Techniques for Pulsed Blowing Distribution

    NASA Astrophysics Data System (ADS)

    Chambers, Frank

    2005-11-01

    Pulsed blowing offers reductions in bleed air requirements for aircraft flow control. Efficient pulsed blowing systems require careful design to minimize bleed air use while distributing blowing to multiple locations. Pulsed blowing systems start with a steady flow supply and process it to generate a pulsatile flow. The fluid-acoustic dynamics of the system play an important role in overall effectiveness. One-dimensional analysis techniques that in the past have been applied to ventilation systems and internal combustion engines have been adapted to pulsed blowing. Pressure wave superposition and reflection are used with the governing equations of continuity, momentum and energy to determine particle velocities and pressures through the flow field. Simulations have been performed to find changes in the amplitude and wave shape as pulses are transmitted through a simple pulsed blowing system. A general-purpose code is being developed to simulate wave transmission and allow the determination of blowing system dynamic parameters.

  8. Phylogenetic analysis on the soil bacteria distributed in karst forest

    PubMed Central

    Zhou, JunPei; Huang, Ying; Mo, MingHe

    2009-01-01

    The phylogenetic composition of the bacterial community in soil of a karst forest was analyzed by a culture-independent molecular approach. The bacterial 16S rRNA gene was amplified directly from soil DNA and cloned to generate a library. After screening the clone library by RFLP, the 16S rRNA genes of representative clones were sequenced and the bacterial community was analyzed phylogenetically. The 16S rRNA gene inserts of 190 randomly selected clones were analyzed by RFLP and generated 126 different RFLP types. After sequencing, 126 non-chimeric sequences were obtained, generating 113 phylotypes. Phylogenetic analysis revealed that the bacteria distributed in soil of the karst forest included members assigned to the Proteobacteria, Acidobacteria, Planctomycetes, Chloroflexi (green nonsulfur bacteria), Bacteroidetes, Verrucomicrobia, Nitrospirae, Actinobacteria (high-G+C Gram-positive bacteria), Firmicutes (low-G+C Gram-positive bacteria) and candidate divisions (including the SPAM and GN08). PMID:24031430

  9. Specimen type and size effects on lithium hydride tensile strength distributions

    SciTech Connect

    Oakes, Jr, R E

    1991-12-01

    Weibull's two-parameter statistical distribution function is used to account for the effects of specimen size and loading differences on the strength distributions of lithium hydride. Three distinctly differing uniaxial specimen types (an elliptical-transition pure tensile specimen, an internally pressurized ring tensile specimen, and two sizes of four-point-flexure specimens) are shown to provide different strength distributions, as expected, because of their differing sizes and modes of loading. After separation of the strengths into volumetric- and surface-initiated failure distributions, the Weibull characteristic strength parameters for the higher-strength tests associated with internal fracture initiations are shown to vary as predicted by the effective-specimen-volume Weibull relationship. Lower-strength results correlate with the effective area to a much lesser degree, probably because of the limited number of surface-related failures and the different machining methods used to prepare the specimens. The strength distribution from the fourth specimen type, the predominantly equibiaxially stressed disk-flexure specimen, is well below that predicted by the two-parameter Weibull-derived effective volume or surface area relations: the two-parameter Weibull model cannot account for the increased failure probability associated with multiaxial stress fields. Derivations of effective volume and area relationships for those specimens for which none were found in the literature (the elliptical-transition tensile, the ring tensile, and the disk flexure, including the outer region) are also included.
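
    A minimal sketch of the effective-volume Weibull scaling used above: at equal failure probability, the characteristic strengths of two specimen sizes are related through the ratio of their effective volumes; the numbers are hypothetical.

        # Two-parameter Weibull size scaling: equal failure probability implies
        # sigma_1 / sigma_2 = (V_2 / V_1) ** (1 / m) for effective volumes V.
        def scaled_strength(sigma_ref, v_ref, v_new, m):
            return sigma_ref * (v_ref / v_new) ** (1.0 / m)

        # Hypothetical numbers: characteristic strength 35 MPa measured on a
        # 100 mm^3 effective volume, predicted for a 10x larger specimen, m = 8.
        print(scaled_strength(35.0, 100.0, 1000.0, 8.0))  # ~26.2 MPa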

  10. Conductance Distributions for Empirical Orthogonal Function Analysis and Optimal Interpolation

    NASA Astrophysics Data System (ADS)

    Knipp, Delores; McGranaghan, Ryan; Matsuo, Tomoko

    2016-04-01

    We show the first characterizations of the primary modes of ionospheric Hall and Pedersen conductance variability as empirical orthogonal functions (EOFs). These are derived from six satellite-years of Defense Meteorological Satellite Program (DMSP) particle data acquired during the rise of solar cycles 22 and 24. The 60 million DMSP spectra were each processed through the Global Airglow Model. This is the first large-scale analysis of ionospheric conductances completely free of assumptions about the incident electron energy spectra. We show that the mean patterns and first four EOFs capture ~50.1% and 52.9% of the total Pedersen and Hall conductance variabilities, respectively. The mean patterns and first EOFs are consistent with typical diffuse auroral oval structures and quiet-time strengthening/weakening of the mean pattern. The second and third EOFs show major disturbance features of magnetosphere-ionosphere (MI) interactions: geomagnetically induced auroral zone expansion in EOF2 and the auroral substorm current wedge in EOF3. The fourth EOFs suggest diminished conductance associated with ionospheric substorm recovery mode. These EOFs are then used in a new optimal interpolation (OI) technique to estimate complete high-latitude ionospheric conductance distributions. The technique combines particle-precipitation-based calculations of ionospheric conductances and their errors with a background model and its error covariance (estimated by EOF analysis) to infer complete distributions of the high-latitude ionospheric conductances for a week in late 2011. The OI technique captures (1) smaller-scale ionospheric conductance features associated with discrete precipitation and (2) closer agreement between ground- and space-based data. We show quantitatively and qualitatively that this new technique provides better ionospheric conductance specification than past statistical models, especially during heightened geomagnetic activity.
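
    A minimal sketch of extracting EOFs from a data matrix via the SVD of its anomalies, assuming random toy data in place of the DMSP-derived conductances.

        import numpy as np

        rng = np.random.default_rng(9)
        # Hypothetical data matrix: 1000 time samples of conductance on a
        # 200-point spatial grid.
        data = rng.normal(size=(1000, 200))

        anomalies = data - data.mean(axis=0)        # remove the mean pattern
        u, s, vt = np.linalg.svd(anomalies, full_matrices=False)

        eofs = vt[:4]                               # first four spatial modes
        variance_fraction = s ** 2 / np.sum(s ** 2)
        print("variance captured by EOFs 1-4:", variance_fraction[:4].sum())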

  11. Prediction of the Inert Strength Distribution of Si3N4 Diesel Valves

    SciTech Connect

    Andrews, M.J.; Breder, K.; Wereszczak, A.A.

    1999-01-25

    Censored Weibull strength distributions were generated from NT551 silicon nitride four-point flexure data using ASTM C1161-B and 5.0 mm diameter cylindrical specimens. Utilizing finite element models and AlliedSignal's life prediction codes, the inert (fast fracture) strength failure probability of a ceramic diesel valve was estimated from these data sets. The failure probability predictions derived from each data set were found to be more conservative than the valve strength data. Fractographic analysis of the test specimens and valves showed that the cylindrical specimens failed from a different flaw population than the prismatic flexure bars and the valves. The study emphasizes the prerequisite of having coincident flaw populations homogeneously distributed in both the test specimen and the ceramic component. Lastly, it suggests that unless material homogeneity exists, a meaningful life prediction or reliability analysis of a component may not be possible.

  12. Global sensitivity analysis in wind energy assessment

    NASA Astrophysics Data System (ADS)

    Tsvetkova, O.; Ouarda, T. B.

    2012-12-01

    Results of this research show that the brute force method is best for wind assessment purposes, and that SBSS outperforms the other sampling strategies in the majority of cases. The results indicate that the Weibull scale parameter, turbine lifetime and Weibull shape parameter are the three most influential variables in the case study setting. The following conclusions can be drawn from these results: 1) SBSS should be recommended for use in Monte Carlo experiments, 2) the brute force method should be recommended for conducting sensitivity analysis in wind resource assessment, and 3) little variation in the Weibull scale causes significant variation in energy production. The presence of the two distribution parameters among the top three influential variables (the Weibull shape and scale) emphasizes the importance of (a) choosing the right distribution to model the wind regime at a site and (b) accurately estimating the probability distribution parameters. This can be labeled the most important conclusion of this research because it opens a field for further research which the authors believe could change the wind energy field tremendously.
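
    A minimal sketch of the sensitivity of annual energy production to the Weibull scale parameter, assuming a toy idealized power curve and hypothetical turbine parameters.

        import numpy as np

        rng = np.random.default_rng(10)

        def annual_energy(scale, shape, hours=8760, rated_kw=2000.0):
            """Toy annual energy estimate: Weibull wind speeds through an
            idealized power curve (cut-in 3 m/s, rated 12 m/s, cut-out 25 m/s)."""
            v = scale * rng.weibull(shape, size=hours)
            p = np.where(v < 3, 0.0,
                np.where(v < 12, rated_kw * ((v - 3) / 9) ** 3,
                np.where(v < 25, rated_kw, 0.0)))
            return p.sum() / 1e6  # GWh

        base = annual_energy(8.0, 2.0)
        bumped = annual_energy(8.8, 2.0)  # +10% in the Weibull scale
        print(f"{base:.2f} GWh -> {bumped:.2f} GWh ({100*(bumped/base-1):.0f}% change)")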

  13. Directional spatial frequency analysis of lipid distribution in atherosclerotic plaque

    NASA Astrophysics Data System (ADS)

    Korn, Clyde; Reese, Eric; Shi, Lingyan; Alfano, Robert; Russell, Stewart

    2016-04-01

    Atherosclerosis is characterized by the growth of fibrous plaques due to the retention of cholesterol and lipids within the artery wall, which can lead to vessel occlusion and cardiac events. One way to evaluate arterial disease is to quantify the amount of lipid present in these plaques, since a higher disease burden is characterized by a higher concentration of lipid. Although therapeutic stimulation of reverse cholesterol transport to reduce cholesterol deposits in plaque has not produced significant results, this may be due to current image analysis methods, which use averaging techniques to calculate the total amount of lipid in the plaque without regard to its spatial distribution, thereby discarding information that may have significance in marking response to therapy. Here we use directional Fourier spatial frequency (DFSF) analysis to generate a characteristic spatial frequency spectrum for atherosclerotic plaques from C57 Black 6 mice, both treated and untreated with a cholesterol-scavenging nanoparticle. We then use the Cauchy product of these spectra to classify the images with a support vector machine (SVM). Our results indicate that treated plaque can be distinguished from untreated plaque using this method, whereas no difference is seen using the spatial averaging method. This work has the potential to increase the effectiveness of current in vivo methods of plaque detection that also use averaging methods, such as laser speckle imaging and Raman spectroscopy.

  14. An Open Architecture for Distributed Malware Collection and Analysis

    NASA Astrophysics Data System (ADS)

    Cavalca, Davide; Goldoni, Emanuele

    Honeynets have become an important tool for researchers and network operators. However, the lack of a unified honeynet data model has impeded their effectiveness, resulting in multiple unrelated data sources, each with its own proprietary access method and format. Moreover, the deployment and management of a honeynet is a time-consuming activity and the interpretation of collected data is far from trivial. HIVE (Honeynet Infrastructure in Virtualized Environment) is a novel highly scalable automated data collection and analysis architecture we designed. Our infrastructure is based on top of proven FLOSS (Free, Libre and Open Source) solutions, which have been extended and integrated with new tools we developed. We use virtualization to ease honeypot management and deployment, combining both high-interaction and low-interaction sensors in a common infrastructure. We also address the need for rapid comprehension and detailed data analysis by harnessing the power of a relational database system, which provides centralized storage and access to the collected data while ensuring its constant integrity. This chapter presents our malware data collection architecture, offering some insight in the structure and benefits of a distributed virtualized honeynet and its development. Finally, we present some techniques for the active monitoring of centralized botnets we integrated in HIVE, which allow us to track the menaces evolution and timely deploy effective countermeasures.

  15. Microcanonical thermostatistics analysis without histograms: Cumulative distribution and Bayesian approaches

    NASA Astrophysics Data System (ADS)

    Alves, Nelson A.; Morero, Lucas D.; Rizzi, Leandro G.

    2015-06-01

    Microcanonical thermostatistics analysis has become an important tool to reveal essential aspects of phase transitions in complex systems. An efficient way to estimate the microcanonical inverse temperature β(E) and the microcanonical entropy S(E) is achieved with the statistical temperature weighted histogram analysis method (ST-WHAM). The strength of this method lies in its flexibility, as it can be used to analyse data produced by algorithms with generalised sampling weights. However, for any sampling weight, ST-WHAM requires the calculation of derivatives of energy histograms H(E), which leads to non-trivial and tedious binning tasks for models with continuous energy spectra such as those for biomolecular and colloidal systems. Here, we discuss two alternative methods that avoid the need for such energy binning to obtain continuous estimates for H(E) in order to evaluate β(E) by using ST-WHAM: (i) a series expansion to estimate probability densities from the empirical cumulative distribution function (CDF), and (ii) a Bayesian approach to model this CDF. Comparison with a simple linear regression method is also carried out. The performance of these approaches is evaluated considering coarse-grained protein models for folding and peptide aggregation.
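
    As a rough illustration of estimating a continuous density from an empirical CDF without binning (a stand-in for the paper's series-expansion and Bayesian approaches, not their actual algorithms), one can smooth the ECDF with a spline and differentiate it:

      import numpy as np
      from scipy.interpolate import UnivariateSpline

      rng = np.random.default_rng(2)
      energies = np.sort(rng.normal(0.0, 1.0, 5000))  # placeholder "energies"

      # Empirical CDF; a smoothing spline's derivative estimates the density.
      ecdf = np.arange(1, energies.size + 1) / energies.size
      spline = UnivariateSpline(energies, ecdf, k=4, s=1e-4)
      density = spline.derivative()

      grid = np.linspace(-3, 3, 7)
      print(np.round(density(grid), 3))  # should roughly track the normal pdf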

  16. Time domain analysis of the weighted distributed order rheological model

    NASA Astrophysics Data System (ADS)

    Cao, Lili; Pu, Hai; Li, Yan; Li, Ming

    2016-05-01

    This paper presents the fundamental solution and relevant properties of the weighted distributed order rheological model in the time domain. Based on the construction of the distributed order damper and the idea of distributed order element networks, this paper studies the weighted distributed order operator of the rheological model, a generalization of the distributed order linear rheological model. The inverse Laplace transform of weighted distributed order operators of the rheological model has been obtained by cutting the complex plane and computing the complex path integral along the Hankel path, which leads to discussions of asymptotic properties and boundary behaviour. The relaxation response of the weighted distributed order rheological model is analyzed; it is closely related to many physical phenomena. A number of novel characteristics of the weighted distributed order rheological model, such as power-law decay and an intermediate phenomenon, have been discovered as well. Several illustrative examples validate these results.
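
    For reference, a distributed order operator in this sense weights fractional derivatives of all orders over an interval; a generic form (our notation, not necessarily the authors' exact definition) is

      % A weight w(alpha) >= 0 averages fractional derivatives D^alpha
      % over the orders 0 <= alpha <= 1.
      \[
        \mathcal{D}^{w} f(t) = \int_{0}^{1} w(\alpha)\, D^{\alpha} f(t)\, d\alpha,
        \qquad \int_{0}^{1} w(\alpha)\, d\alpha = 1.
      \]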

  17. Distribution and Phylogenetic Analysis of Family 19 Chitinases in Actinobacteria

    PubMed Central

    Kawase, Tomokazu; Saito, Akihiro; Sato, Toshiya; Kanai, Ryo; Fujii, Takeshi; Nikaidou, Naoki; Miyashita, Kiyotaka; Watanabe, Takeshi

    2004-01-01

    In organisms other than higher plants, family 19 chitinase was first discovered in Streptomyces griseus HUT6037, and later, the general occurrence of this enzyme in Streptomyces species was demonstrated. In the present study, the distribution of family 19 chitinases in the class Actinobacteria and the phylogenetic relationship of Actinobacteria family 19 chitinases with family 19 chitinases of other organisms were investigated. Forty-nine strains were chosen to cover almost all the suborders of the class Actinobacteria, and chitinase production was examined. Of the 49 strains, 22 formed cleared zones on agar plates containing colloidal chitin and thus appeared to produce chitinases. These 22 chitinase-positive strains were subjected to Southern hybridization analysis by using a labeled DNA fragment corresponding to the catalytic domain of ChiC, and the presence of genes similar to chiC of S. griseus HUT6037 in at least 13 strains was suggested by the results. PCR amplification and sequencing of the DNA fragments corresponding to the major part of the catalytic domains of the family 19 chitinase genes confirmed the presence of family 19 chitinase genes in these 13 strains. The strains possessing family 19 chitinase genes belong to 6 of the 10 suborders in the order Actinomycetales, which account for the greatest part of the Actinobacteria. Phylogenetic analysis suggested that there is a close evolutionary relationship between family 19 chitinases found in Actinobacteria and plant class IV chitinases. The general occurrence of family 19 chitinase genes in Streptomycineae and the high sequence similarity among the genes found in Actinobacteria suggest that the family 19 chitinase gene was first acquired by an ancestor of the Streptomycineae and spread among the Actinobacteria through horizontal gene transfer. PMID:14766598

  18. Rod internal pressure quantification and distribution analysis using Frapcon

    SciTech Connect

    Bratton, Ryan N; Jessee, Matthew Anderson; Wieselquist, William A

    2015-09-01

    This report documents work performed supporting the Department of Energy (DOE) Office of Nuclear Energy (NE) Fuel Cycle Technologies Used Fuel Disposition Campaign (UFDC) under work breakdown structure element 1.02.08.10, ST Analysis. In particular, this report fulfills the M4 milestone M4FT-15OR0810036, Quantify effects of power uncertainty on fuel assembly characteristics, within work package FT-15OR081003 ST Analysis-ORNL. This research was also supported by the Consortium for Advanced Simulation of Light Water Reactors (http://www.casl.gov), an Energy Innovation Hub (http://www.energy.gov/hubs) for Modeling and Simulation of Nuclear Reactors under U.S. Department of Energy Contract No. DE-AC05-00OR22725. The discharge rod internal pressure (RIP) and cladding hoop stress (CHS) distributions are quantified for Watts Bar Nuclear Unit 1 (WBN1) fuel rods by modeling core cycle design data, operation data (including modeling significant trips and downpowers), and as-built fuel enrichments and densities of each fuel rod in FRAPCON-3.5. A methodology is developed which tracks inter-cycle assembly movements and assembly batch fabrication information to build individual FRAPCON inputs for each evaluated WBN1 fuel rod. An alternate model for the amount of helium released from the zirconium diboride (ZrB2) integral fuel burnable absorber (IFBA) layer is derived and applied to FRAPCON output data to quantify the RIP and CHS for these types of fuel rods. SCALE/Polaris is used to quantify fuel rod-specific spectral quantities and the amount of gaseous fission products produced in the fuel for use in FRAPCON inputs. Fuel rods with ZrB2 IFBA layers (i.e., IFBA rods) are determined to have RIP predictions that are elevated when compared to fuel rods without IFBA layers (i.e., standard rods), despite the fact that IFBA rods often have reduced fill pressures and annular fuel pellets. The primary contributor to elevated RIP predictions at burnups less than and greater than 30 GWd

  19. A distributed analysis of Human impact on global sediment dynamics

    NASA Astrophysics Data System (ADS)

    Cohen, S.; Kettner, A.; Syvitski, J. P.

    2012-12-01

    Understanding riverine sediment dynamics is an important undertaking both for socially relevant issues such as agriculture, water security and infrastructure management, and for scientific analysis of landscapes, river ecology, oceanography and other disciplines. Providing good quantitative and predictive tools is therefore timely, particularly in light of predicted climate and land-use changes. Ever-increasing human activity during the Anthropocene has affected sediment dynamics in two major ways: (1) an increase in hillslope erosion due to agriculture, deforestation and landscape engineering, and (2) trapping of sediment in dams and other man-made reservoirs. The intensity and dynamics of these man-made factors vary widely across the globe and in time and are therefore hard to predict. Using sophisticated numerical models is therefore warranted. Here we use a distributed global riverine sediment flux and water discharge model (WBMsed) to compare a pristine (without human input) and a disturbed (with human input) simulation. Using these 50-year simulations we show and discuss the complex spatial and temporal patterns of human effects on riverine sediment flux and water discharge.

  20. Finite-key security analysis for multilevel quantum key distribution

    NASA Astrophysics Data System (ADS)

    Brádler, Kamil; Mirhosseini, Mohammad; Fickler, Robert; Broadbent, Anne; Boyd, Robert

    2016-07-01

    We present a detailed security analysis of a d-dimensional quantum key distribution protocol based on two and three mutually unbiased bases (MUBs), both in an asymptotic and a finite-key-length scenario. The finite secret key rates (in bits per detected photon) are calculated as a function of the length of the sifted key by (i) generalizing the uncertainty relation-based insight from BB84 to any d-level 2-MUB QKD protocol and (ii) by adopting recent advances in the second-order asymptotics for finite block length quantum coding (for both d-level 2- and 3-MUB QKD protocols). Since the finite and asymptotic secret key rates increase with d and the number of MUBs (together with the tolerable threshold), such QKD schemes could in principle offer an important advantage over BB84. We discuss the possibility of an experimental realization of the 3-MUB QKD protocol with the orbital angular momentum degrees of freedom of photons.

  1. Fourier analysis of polar cap electric field and current distributions

    NASA Technical Reports Server (NTRS)

    Barbosa, D. D.

    1984-01-01

    A theoretical study of high-latitude electric fields and currents, using analytic Fourier analysis methods, is conducted. A two-dimensional planar model of the ionosphere with an enhanced conductivity auroral belt and field-aligned currents at the edges is employed. Two separate topics are treated. A field-aligned current element near the cusp region of the polar cap is included to investigate the modifications to the convection pattern by the east-west component of the interplanetary magnetic field. It is shown that a sizable one-cell structure is induced near the cusp which diverts equipotential contours to the dawnside or duskside, depending on the sign of the cusp current. This produces characteristic dawn-dusk asymmetries to the electric field that have been previously observed over the polar cap. The second topic is concerned with the electric field configuration obtained in the limit of perfect shielding, where the field is totally excluded equatorward of the auroral oval. When realistic field-aligned current distributions are used, the result is to produce severely distorted, crescent-shaped equipotential contours over the cap. Exact, analytic formulae applicable to this case are also provided.

  2. Bivariate extreme value distributions

    NASA Technical Reports Server (NTRS)

    Elshamy, M.

    1992-01-01

    In certain engineering applications, such as those occurring in the analyses of ascent structural loads for the Space Transportation System (STS), some of the load variables have a lower bound of zero. Thus, the need for practical models of bivariate extreme value probability distribution functions with lower limits was identified. We discuss the Gumbel models and present practical forms of bivariate extreme probability distributions of Weibull and Frechet types with two parameters. Bivariate extreme value probability distribution functions can be expressed in terms of the marginal extreme value distributions and a 'dependence' function subject to certain analytical conditions. Properties of such bivariate extreme distributions, sums and differences of paired extremals, as well as the corresponding forms of conditional distributions, are discussed. Practical estimation techniques are also given.
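
    One standard way to write such a bivariate extreme value distribution is the Pickands representation (stated here for reference; the paper may use a different but equivalent parameterization), in which the joint CDF is built from the margins and a convex dependence function A on [0, 1] with max(t, 1-t) <= A(t) <= 1:

      \[
        F(x, y) = \exp\!\Big[ \big(\ln F_X(x) + \ln F_Y(y)\big)\,
                  A\!\Big( \frac{\ln F_Y(y)}{\ln F_X(x) + \ln F_Y(y)} \Big) \Big].
      \]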

  3. Monolithic ceramic analysis using the SCARE program

    NASA Technical Reports Server (NTRS)

    Manderscheid, Jane M.

    1988-01-01

    The Structural Ceramics Analysis and Reliability Evaluation (SCARE) computer program calculates the fast fracture reliability of monolithic ceramic components. The code is a post-processor to the MSC/NASTRAN general purpose finite element program. The SCARE program automatically accepts the MSC/NASTRAN output necessary to compute reliability. This includes element stresses, temperatures, volumes, and areas. The SCARE program computes two-parameter Weibull strength distributions from input fracture data for both volume and surface flaws. The distributions can then be used to calculate the reliability of geometrically complex components subjected to multiaxial stress states. Several fracture criteria and flaw types are available for selection by the user, including out-of-plane crack extension theories. The theoretical basis for the reliability calculations was proposed by Batdorf. These models combine linear elastic fracture mechanics (LEFM) with Weibull statistics to provide a mechanistic failure criterion. Other fracture theories included in SCARE are the normal stress averaging technique and the principle of independent action. The objective of this presentation is to summarize these theories, including their limitations and advantages, and to provide a general description of the SCARE program, along with example problems.
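
    A minimal sketch of the two-parameter Weibull volume-flaw calculation that such post-processors perform (the generic weakest-link formula with invented element data, not SCARE's actual code):

      import numpy as np

      # Per-element maximum principal stress (MPa) and volume (mm^3)
      # from a hypothetical finite element run.
      stress = np.array([120.0, 180.0, 150.0, 90.0])
      volume = np.array([2.0, 1.5, 1.8, 2.5])

      m, sigma0 = 10.0, 300.0  # assumed Weibull modulus, characteristic strength

      # Weakest link: Pf = 1 - exp(-sum_i V_i * (sigma_i / sigma0)^m)
      risk = np.sum(volume * (stress / sigma0) ** m)
      pf = 1.0 - np.exp(-risk)
      print(f"fast-fracture failure probability: {pf:.2e}")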

  4. Statistical distribution of mechanical properties for three graphite-epoxy material systems

    NASA Technical Reports Server (NTRS)

    Reese, C.; Sorem, J., Jr.

    1981-01-01

    Graphite-epoxy composites are playing an increasing role as viable alternative materials in structural applications, necessitating thorough investigation into the predictability and reproducibility of their material strength properties. This investigation was concerned with tension, compression, and short beam shear coupon testing of large samples from three different material suppliers to determine their statistical strength behavior. Statistical results indicate that a two-parameter Weibull distribution model provides better overall characterization of material behavior for the graphite-epoxy systems tested than does the standard normal distribution model that is employed for most design work. While either a Weibull or normal distribution model provides adequate predictions for average strength values, the Weibull model provides better characterization in the lower tail region where the predictions are of maximum design interest. The two sets of the same material were found to have essentially the same material properties, indicating that repeatability can be achieved.
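
    The lower-tail point is easy to reproduce: fit both models to the same strength data and compare low percentiles, where the two diverge most (the data below are synthetic, not the coupon measurements):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      strength = stats.weibull_min.rvs(8.0, scale=600.0, size=200,
                                       random_state=rng)

      shape, _, scale = stats.weibull_min.fit(strength, floc=0)
      mu, sd = stats.norm.fit(strength)

      for p in (0.01, 0.10):  # design-relevant lower percentiles
          w = stats.weibull_min.ppf(p, shape, scale=scale)
          n = stats.norm.ppf(p, mu, sd)
          print(f"P{p:.0%}: Weibull {w:.0f} vs Normal {n:.0f}")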

  5. Weibull Multiplicative Model and Machine Learning Models for Full-Automatic Dark-Spot Detection from SAR Images

    NASA Astrophysics Data System (ADS)

    Taravat, A.; Del Frate, F.

    2013-09-01

    As a major aspect of marine pollution, oil release into the sea has serious biological and environmental impacts. Among remote sensing systems, synthetic aperture radar (SAR) offers a non-destructive investigation method and can provide valuable synoptic information about the position and size of an oil spill due to its wide area coverage and day/night, all-weather capabilities. In this paper we present a new automated method for oil-spill monitoring. The approach is based on the combination of the Weibull Multiplicative Model and machine learning techniques to differentiate between dark spots and the background. First, a filter created on the basis of the Weibull Multiplicative Model is applied to each sub-image. Second, the sub-image is segmented by two different neural network techniques (Pulse-Coupled Neural Networks and Multilayer Perceptron Neural Networks). As the last step, a very simple filtering process is used to eliminate false targets. The proposed approaches were tested on 20 ENVISAT and ERS2 images which contained dark spots. The same parameters were used in all tests. For the overall dataset, average accuracies of 94.05 % and 95.20 % were obtained for the PCNN and MLP methods, respectively. The average computational time for dark-spot detection in a 256 × 256 image is about 4 s for PCNN segmentation using IDL software, which is the fastest in this field at present. Our experimental results demonstrate that the proposed approach is very fast, robust and effective, and can be applied to future spaceborne SAR images.
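
    A toy version of the segmentation stage (pixel-wise dark-spot classification with a small MLP on intensity features; the synthetic image stands in for Weibull-filtered SAR data, and the network is not the authors' PCNN or MLP architecture):

      import numpy as np
      from scipy.ndimage import uniform_filter
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(4)
      img = rng.gamma(shape=4.0, scale=50.0, size=(64, 64))  # speckle-like sea
      img[20:40, 20:40] *= 0.3                               # darker "slick"

      truth = np.zeros((64, 64), dtype=int)
      truth[20:40, 20:40] = 1

      # Features per pixel: raw intensity and a 3x3 local mean.
      feats = np.stack([img.ravel(), uniform_filter(img, 3).ravel()], axis=1)

      clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                          random_state=0)
      clf.fit(feats, truth.ravel())
      print("pixel accuracy:", clf.score(feats, truth.ravel()))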

  6. Application of extreme learning machine for estimation of wind speed distribution

    NASA Astrophysics Data System (ADS)

    Shamshirband, Shahaboddin; Mohammadi, Kasra; Tong, Chong Wen; Petković, Dalibor; Porcu, Emilio; Mostafaeipour, Ali; Ch, Sudheer; Sedaghat, Ahmad

    2016-03-01

    The knowledge of the probabilistic wind speed distribution is of particular significance in the reliable evaluation of the wind energy potential and the effective adoption of site-specific wind turbines. Among all proposed probability density functions, the two-parameter Weibull function has been extensively endorsed and utilized to model wind speeds and express wind speed distributions in various locations. In this research work, an extreme learning machine (ELM) is employed to compute the shape (k) and scale (c) factors of the Weibull distribution function. The developed ELM model is trained and tested based upon two widely successful methods used to estimate the k and c parameters. The efficiency and accuracy of ELM are compared against support vector machine, artificial neural network and genetic programming for estimating the same Weibull parameters. The results reveal that the ELM approach attains higher precision in estimating both Weibull parameters than the other methods evaluated. Mean absolute percentage error, mean absolute bias error and root mean square error for k are 8.4600 %, 0.1783 and 0.2371, while for c they are 0.2143 %, 0.0118 and 0.0192 m/s, respectively. In conclusion, the application of ELM is particularly promising as an alternative method to estimate the Weibull k and c factors.
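
    For context, one standard empirical estimator that such machine-learning models are benchmarked against derives k and c from the sample mean and standard deviation (the Justus approximation; the wind record below is synthetic):

      import numpy as np
      from scipy.special import gamma

      rng = np.random.default_rng(5)
      wind = 8.0 * rng.weibull(2.2, 10_000)  # synthetic wind speeds (m/s)

      mu, sd = wind.mean(), wind.std(ddof=1)
      k = (sd / mu) ** -1.086                # empirical shape estimate
      c = mu / gamma(1.0 + 1.0 / k)          # scale from the Weibull mean
      print(f"k = {k:.2f}, c = {c:.2f} m/s")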

  7. Frequency distribution histograms for the rapid analysis of data

    NASA Technical Reports Server (NTRS)

    Burke, P. V.; Bullen, B. L.; Poff, K. L.

    1988-01-01

    The mean and standard error are good representations for the response of a population to an experimental parameter and are frequently used for this purpose. Frequency distribution histograms show, in addition, responses of individuals in the population. Both the statistics and a visual display of the distribution of the responses can be obtained easily using a microcomputer and available programs. The type of distribution shown by the histogram may suggest different mechanisms to be tested.
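
    A minimal illustration of the combination described (summary statistics plus a frequency distribution histogram of the individual responses; the data are random placeholders):

      import numpy as np
      import matplotlib.pyplot as plt

      rng = np.random.default_rng(6)
      responses = rng.normal(10.0, 2.0, 120)  # individual responses

      mean = responses.mean()
      sem = responses.std(ddof=1) / np.sqrt(responses.size)
      print(f"mean = {mean:.2f} +/- {sem:.2f} (SEM)")

      plt.hist(responses, bins=15, edgecolor="black")
      plt.xlabel("response")
      plt.ylabel("frequency")
      plt.title("Frequency distribution of individual responses")
      plt.show()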

  8. Analysis of gamma-ray burst duration distribution using mixtures of skewed distributions

    NASA Astrophysics Data System (ADS)

    Tarnopolski, M.

    2016-05-01

    Two classes of gamma-ray bursts (GRBs) have been confidently identified thus far and are ascribed to different physical scenarios - neutron star-neutron star or neutron star-black hole mergers, and collapse of massive stars, for short and long GRBs, respectively. A third class, intermediate in duration, was suggested to be present in previous catalogues, such as the Burst and Transient Source Experiment (BATSE) and Swift, based on statistical tests regarding a mixture of two or three lognormal distributions of T90. However, this might not be an adequate model. This paper investigates whether the distributions of log T90 from BATSE, Swift, and Fermi are described better by a mixture of skewed distributions rather than standard Gaussians. Mixtures of standard normal, skew-normal, sinh-arcsinh and alpha-skew-normal distributions are fitted using a maximum likelihood method. The preferred model is chosen based on the Akaike information criterion. It is found that mixtures of two skew-normal or two sinh-arcsinh distributions are more likely to describe the observed duration distribution of Fermi than a mixture of three standard Gaussians, and that mixtures of two sinh-arcsinh or two skew-normal distributions are models competing with the conventional three-Gaussian one in the case of BATSE and Swift. Based on this statistical reasoning, it is shown that other phenomenological models may describe the observed Fermi, BATSE, and Swift duration distributions at least as well as a mixture of standard normal distributions, and the existence of a third (intermediate) class of GRBs in Fermi data is rejected.
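
    A condensed sketch of the model-selection procedure (a maximum-likelihood fit of a two-component skew-normal mixture scored by AIC, to be compared against competing mixtures fitted the same way); the log-duration sample is synthetic, not the Fermi/BATSE/Swift data:

      import numpy as np
      from scipy import stats, optimize

      rng = np.random.default_rng(7)
      # Synthetic log T90 values: a short and a long burst population.
      data = np.concatenate([rng.normal(-0.3, 0.5, 300),
                             rng.normal(1.4, 0.45, 700)])

      def nll(p):
          # Negative log-likelihood of a two-component skew-normal mixture.
          w, a1, m1, s1, a2, m2, s2 = p
          pdf = (w * stats.skewnorm.pdf(data, a1, m1, s1)
                 + (1 - w) * stats.skewnorm.pdf(data, a2, m2, s2))
          return -np.sum(np.log(pdf + 1e-300))

      x0 = [0.3, 0.0, -0.3, 0.5, 0.0, 1.4, 0.5]
      bounds = [(0.01, 0.99), (-5, 5), (-2, 1), (0.05, 2),
                (-5, 5), (0, 3), (0.05, 2)]
      res = optimize.minimize(nll, x0, bounds=bounds)
      print(f"two-skew-normal AIC: {2 * len(x0) + 2 * res.fun:.1f}")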

  9. Analysis of aerosol vertical distribution and variability in Hong Kong

    NASA Astrophysics Data System (ADS)

    He, Qianshan; Li, Chengcai; Mao, Jietai; Lau, Alexis Kai-Hon; Chu, D. A.

    2008-07-01

    Aerosol vertical distribution is an important piece of information for improving aerosol retrieval from satellite remote sensing. The aerosol extinction coefficient profile and its integral form, aerosol optical depth (AOD), as well as atmospheric boundary layer (ABL) height and haze layer height, can be derived using lidar measurements. In this paper, we used micropulse lidar measurements acquired from May 2003 to June 2004 to illustrate seasonal variations of AOD and ABL height in Hong Kong. On average, about 64% of monthly mean aerosol optical depth was contributed by aerosols within the mixing layer (with a maximum (~76%) in November and a minimum (~55%) in September), revealing the existence of a large abundance of aerosols above the ABL due to regional transport. The characteristics of seasonally averaged aerosol profiles over Hong Kong in the study period are presented to illustrate seasonal phenomena of aerosol transport and associated meteorological conditions. The correlation between AOD and the surface extinction coefficient is generally poor (r2 ~0.42), since elevated aerosol layers increase columnar aerosol abundance but not extinction at the surface. The typical aerosol extinction profile in the ABL can be characterized by a low value near the surface and values that increase with altitude up to the top of the ABL. When an aerosol vertical profile is assumed, the surface extinction coefficient can be derived from AOD using two algorithms, which are discussed in detail in this paper. Preliminary analysis showed that better estimates of the extinction coefficient at the ground level could be obtained using two-layer aerosol extinction profiles (r2 ~0.78, slope ~0.82, and intercept ~0.15) than uniform profiles of extinction with height within the ABL (r2 ~0.65, slope ~0.27, and intercept ~0.03). The improvement in correlation is promising for mapping satellite-retrieved AOD to surface aerosol extinction coefficient for urban and regional environmental studies on air

  10. Distributions of Autocorrelated First-Order Kinetic Outcomes: Illness Severity.

    PubMed

    Englehardt, James D

    2015-01-01

    Many complex systems produce outcomes having recurring, power law-like distributions over wide ranges. However, the form necessarily breaks down at extremes, whereas the Weibull distribution has been demonstrated over the full observed range. Here the Weibull distribution is derived as the asymptotic distribution of generalized first-order kinetic processes, with convergence driven by autocorrelation, and entropy maximization subject to finite positive mean, of the incremental compounding rates. Process increments represent multiplicative causes. In particular, illness severities are modeled as such, occurring in proportion to products of, e.g., chronic toxicant fractions passed by organs along a pathway, or rates of interacting oncogenic mutations. The Weibull form is also argued theoretically and by simulation to be robust to the onset of saturation kinetics. The Weibull exponential parameter is shown to indicate the number and widths of the first-order compounding increments, the extent of rate autocorrelation, and the degree to which process increments are distributed exponential. In contrast with the Gaussian result in linear independent systems, the form is driven not by independence and multiplicity of process increments, but by increment autocorrelation and entropy. In some physical systems the form may be attracting, due to multiplicative evolution of outcome magnitudes towards extreme values potentially much larger and smaller than control mechanisms can contain. The Weibull distribution is demonstrated in preference to the lognormal and Pareto I for illness severities versus (a) toxicokinetic models, (b) biologically-based network models, (c) scholastic and psychological test score data for children with prenatal mercury exposure, and (d) time-to-tumor data of the ED01 study. PMID:26061263
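
    A sketch of the kind of model comparison reported, fitting Weibull, lognormal, and Pareto to the same positive-valued severity data and comparing log-likelihoods (the sample below is synthetic, not the toxicokinetic or ED01 data):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(8)
      severity = stats.weibull_min.rvs(0.8, scale=2.0, size=2000,
                                       random_state=rng)

      for name, dist in [("weibull", stats.weibull_min),
                         ("lognorm", stats.lognorm),
                         ("pareto", stats.pareto)]:
          params = dist.fit(severity, floc=0)  # pin location at zero
          ll = np.sum(dist.logpdf(severity, *params))
          print(f"{name:8s} log-likelihood = {ll:.1f}")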

  12. CRACK GROWTH ANALYSIS OF SOLID OXIDE FUEL CELL ELECTROLYTES

    SciTech Connect

    S. Bandopadhyay; N. Nagabhushana

    2003-10-01

    Defects and flaws control the structural and functional properties of ceramics. In determining the reliability and lifetime of ceramic structures it is very important to quantify the crack growth behavior of the ceramics. In addition, because of the high variability of the strength and the relatively low toughness of ceramics, a statistical design approach is necessary. The statistical nature of the strength of ceramics is currently well recognized, and is usually accounted for by utilizing Weibull or similar statistical distributions. Design tools such as CARES, using a combination of strength measurements, stress analysis, and statistics, are available and reasonably well developed. These design codes also incorporate material data such as elastic constants as well as flaw distributions and time-dependent properties. The fast fracture reliability for ceramics is often different from their time-dependent reliability. Further confounding the design complexity, the time-dependent reliability varies with the environment/temperature/stress combination. Therefore, it becomes important to be able to accurately determine the behavior of ceramics under simulated application conditions to provide a better prediction of the lifetime and reliability for a given component. In the present study, yttria-stabilized zirconia (YSZ) of 9.6 mol% yttria composition was procured in the form of tubes of length 100 mm. The composition is of interest as a tubular electrolyte for solid oxide fuel cells. Rings cut from the tubes were characterized for microstructure, phase stability, mechanical strength (Weibull modulus) and fracture mechanisms. The strength at the operating condition of SOFCs (1000 °C) decreased to 95 MPa compared to the room temperature strength of 230 MPa. However, the Weibull modulus remains relatively unchanged. The slow crack growth (SCG) parameter, n = 17, evaluated at room temperature in air, was representative of well-studied brittle materials. Based on the results, further work
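
    The Weibull modulus quoted for such ring tests is commonly estimated by median-rank regression on the Weibull plot; a brief sketch (synthetic strengths standing in for the measured data):

      import numpy as np

      rng = np.random.default_rng(9)
      strengths = np.sort(230.0 * rng.weibull(12.0, 30))  # MPa, synthetic

      n = strengths.size
      prob = (np.arange(1, n + 1) - 0.3) / (n + 0.4)      # median ranks

      # Weibull plot: ln(-ln(1 - P)) vs ln(sigma) is linear with slope m.
      x = np.log(strengths)
      y = np.log(-np.log(1.0 - prob))
      m, b = np.polyfit(x, y, 1)
      sigma0 = np.exp(-b / m)                  # characteristic strength
      print(f"Weibull modulus m = {m:.1f}, sigma0 = {sigma0:.0f} MPa")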

  13. Analysis of Trap Distribution Using Time-of-Flight Spectroscopy

    NASA Astrophysics Data System (ADS)

    Ohno, Akira; Hanna, Jun-ichi; Dunlap, David H.

    2008-02-01

    A new analytical method for determining the trap distribution from transient photocurrents in time-of-flight (TOF) measurements is proposed in the context of the convection-diffusion equation with multiple-trapping and detrapping processes. The method does not need, in principle, data on temperature dependence or any initial assumption about the form of the trap distribution. A trap distribution is extracted directly from the time profiles of transient photocurrents by assuming the Einstein relation between mobility and diffusion constant. To demonstrate the validity of the method, we first applied it to photocurrents prepared in advance by random-walk simulation for several typical assumed trap distributions. We then determined the trap distribution for a particular mesophase of a liquid crystal of a phenylnaphthalene derivative, for which the temperature dependence of carrier transport properties is hardly available. We obtained an extrinsic shallow trap distribution at about 200 meV in depth together with a tail-shaped Gaussian-type density-of-states distribution. We therefore conclude that the method may be a powerful tool for analyzing the trap distribution of a system that exhibits temperature-sensitive conformational changes and/or whose carrier transport properties are not available as a function of temperature.

  14. Nonlinear analysis on vertical distribution of suspended load

    NASA Astrophysics Data System (ADS)

    Wang, Fuquan; Li, Houqiang; Ding, Jing

    1998-06-01

    In turbulent two-phase flows, the vertical distribution of suspended load has complex features under the actions of turbulence and gravity. The nonlinear dynamics and fractal features are investigated, and the nonlinear distribution is calculated. Some shortcomings of classical theories have been overcome.

  15. Analysis Model for Domestic Hot Water Distribution Systems: Preprint

    SciTech Connect

    Maguire, J.; Krarti, M.; Fang, X.

    2011-11-01

    A thermal model was developed to estimate the energy losses from prototypical domestic hot water (DHW) distribution systems for homes. The developed model, using the TRNSYS simulation software, allows researchers and designers to better evaluate the performance of hot water distribution systems in homes. Modeling results were compared with past experimental study results and showed good agreement.

  16. Development of distribution system reliability and risk analysis models

    NASA Astrophysics Data System (ADS)

    Northcote-Green, J. E. D.; Vismor, T. D.; Brooks, C. L.

    1981-08-01

    The overall objectives of a research project were to: determine distribution reliability assessment methods currently used by the industry; develop a general outage reporting scheme suitable for a wide variety of distribution utilities (reliability model); develop a model for predicting the reliability of future system configurations (risk model); and compile a handbook of reliability assessment methods designed specifically for use by the practicing distribution engineer. Emphasis was placed on compiling and organizing reliability assessment techniques presently used by the industry. The project examined reliability evaluation from two perspectives: historical and predictive assessment. Two reliability assessment models, HISRAM, the historical reliability assessment model, and PRAM, the predictive reliability assessment model, were developed. Each model was tested in a utility environment by the Duquesne Light Company and the Public Service Electric and Gas Company of New Jersey. A survey of 56 diverse utilities served as a basis for examining current distribution reliability assessment practices in the electric power industry.

  17. Nanocrystal size distribution analysis from transmission electron microscopy images

    NASA Astrophysics Data System (ADS)

    van Sebille, Martijn; van der Maaten, Laurens J. P.; Xie, Ling; Jarolimek, Karol; Santbergen, Rudi; van Swaaij, René A. C. M. M.; Leifer, Klaus; Zeman, Miro

    2015-12-01

    We propose a method, with minimal bias caused by user input, to quickly detect and measure the nanocrystal size distribution from transmission electron microscopy (TEM) images using a combination of Laplacian of Gaussian filters and non-maximum suppression. We demonstrate the proposed method on bright-field TEM images of an a-SiC:H sample containing embedded silicon nanocrystals with varying magnifications and we compare the accuracy and speed with size distributions obtained by manual measurements, a thresholding method and PEBBLES. Finally, we analytically consider the error induced by slicing nanocrystals during TEM sample preparation on the measured nanocrystal size distribution and formulate an equation to correct this effect.
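
    The Laplacian-of-Gaussian plus non-maximum-suppression combination is available off the shelf; a sketch using scikit-image's blob_log on a synthetic frame (the TEM data and pixel-size calibration are placeholders):

      import numpy as np
      from skimage.feature import blob_log

      # Synthetic "TEM" frame: dark background with a few bright discs.
      img = np.zeros((128, 128))
      for cy, cx, r in [(30, 40, 5), (80, 90, 8), (100, 30, 4)]:
          y, x = np.ogrid[:128, :128]
          img[(y - cy) ** 2 + (x - cx) ** 2 <= r ** 2] = 1.0

      # LoG blob detection with built-in non-maximum suppression.
      blobs = blob_log(img, min_sigma=2, max_sigma=10, num_sigma=9,
                       threshold=0.1)
      radii = blobs[:, 2] * np.sqrt(2)   # sigma -> approximate blob radius
      print("estimated radii (px):", np.round(np.sort(radii), 1))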

  18. Statistical analysis of the electrical breakdown time delay distributions in krypton

    NASA Astrophysics Data System (ADS)

    Maluckov, Čedomir A.; Karamarković, Jugoslav P.; Radović, Miodrag K.; Pejović, Momčilo M.

    2006-08-01

    The statistical analysis of the experimentally observed electrical breakdown time delay distributions in the krypton-filled diode tube at 2.6mbar is presented. The experimental distributions are obtained on the basis of 1000 successive and independent measurements. The theoretical electrical breakdown time delay distribution is evaluated as the convolution of the statistical time delay with exponential, and discharge formative time with Gaussian distribution. The distribution parameters are estimated by the stochastic modelling of the time delay distributions, and by comparing them with the experimental distributions for different relaxation times, voltages, and intensities of UV radiation. The transition of distribution shapes, from Gaussian-type to the exponential-like, is investigated by calculating the corresponding skewness and excess kurtosis parameters. It is shown that the mathematical model based on the convolution of two random variable distributions describes experimentally obtained time delay distributions and the separation of the total breakdown time delay to the statistical and formative time delay.
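
    The convolution of an exponential statistical delay with a Gaussian formative time is the exponentially modified Gaussian, available directly in scipy; a sketch of generating such delays and inspecting the shape parameters (all numbers are illustrative):

      import numpy as np
      from scipy import stats

      mu, sigma, tau = 50.0, 5.0, 30.0  # Gaussian mean/width, exponential mean
      K = tau / sigma                   # scipy's exponnorm shape parameter

      delays = stats.exponnorm.rvs(K, loc=mu, scale=sigma, size=20_000,
                                   random_state=0)
      print(f"skewness = {stats.skew(delays):.2f}, "
            f"excess kurtosis = {stats.kurtosis(delays):.2f}")
      # Large tau/sigma gives exponential-like shapes; small, Gaussian-like.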

  20. Performance Analysis of Distributed Object-Oriented Applications

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D.

    1998-01-01

    The purpose of this research was to evaluate the efficiency of a distributed simulation architecture that creates individual, self-scheduling modules through the use of a message-based communication system for requesting input data from the module that is the source of those data. To make the architecture as general as possible, the message-based communication architecture was implemented using standard remote object architectures (Common Object Request Broker Architecture (CORBA) and/or Distributed Component Object Model (DCOM)). A series of experiments were run in which different systems were distributed in a variety of ways across multiple computers and the performance evaluated. The experiments were duplicated in each case so that the overhead due to message communication and data transmission could be separated from the time required to actually perform the computational update of a module each iteration. The software used to distribute the modules across multiple computers was developed in the first year of the current grant and was modified considerably to add a message-based communication scheme supported by the DCOM distributed object architecture. The resulting performance was analyzed using a model created during the first year of this grant which predicts the overhead due to CORBA and DCOM remote procedure calls and includes the effects of data passed to and from the remote objects. A report covering the distributed simulation software and the results of the performance experiments has been submitted separately. That report also discusses possible future work to apply the methodology to dynamically distribute the simulation modules so as to minimize overall computation time.

  1. Wave energy estimation by using a statistical analysis and wave buoy data near the southern Caspian Sea

    NASA Astrophysics Data System (ADS)

    Zamani, A. R.; Badri, M. A.

    2015-04-01

    Statistical analysis was performed on simultaneous wave and wind data recorded by a discus-shaped wave buoy. The area is located in the southern Caspian Sea near the Anzali Port. Recorded wave data were obtained through directional spectrum wave analysis, and wind direction and wind speed were obtained from the related time series. For 12 months of measurements (May 25, 2007 to May 25, 2008), statistical calculations were performed to quantify the nonlinear autocorrelation of wave and wind using the probability distribution function of wave characteristics and statistical analysis over various time periods. The paper also presents and analyzes the amount of wave energy for the area mentioned on the basis of the available database. The analyses provide a useful comparison between the amounts of wave energy in different seasons, identifying the period with the largest amount of wave energy. Results showed that in the research period, the mean wave and wind autocorrelation times were about three hours. Among the probability distribution functions considered, i.e., Weibull, normal, lognormal and Rayleigh, the Weibull had the best consistency with the experimental distribution function, shown in different diagrams for each season. Results also showed that the mean wave energy in the research period was about 49.88 kW/m and the maximum density of wave energy was found in February and March, 2010.
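
    For orientation, the deep-water wave energy flux used in such assessments is commonly computed from the significant wave height and energy period as P = rho * g^2 * Hs^2 * Te / (64 * pi); a small sketch (the buoy values are placeholders):

      import numpy as np

      RHO, G = 1025.0, 9.81  # sea water density (kg/m^3), gravity (m/s^2)

      def wave_power_kw_per_m(hs, te):
          # Deep-water wave energy flux, kW per metre of wave crest.
          return RHO * G**2 * hs**2 * te / (64.0 * np.pi) / 1000.0

      hs = np.array([1.2, 2.5, 0.8, 3.1])  # significant wave height (m)
      te = np.array([6.0, 7.5, 5.0, 8.2])  # energy period (s)
      print("mean wave power:",
            round(wave_power_kw_per_m(hs, te).mean(), 1), "kW/m")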

  2. Reliability analysis and optimization in the design of distributed systems

    SciTech Connect

    Hariri, S.

    1986-01-01

    Reliability measures and efficient evaluation algorithms are presented to aid in designing reliable distributed systems. The terminal reliability between a pair of computers is a good measure in computer networks. For distributed systems, to capture more effectively the redundancy in resources, such as programs and files, two new reliability measures are introduced. These measures are Distributed Program Reliability (DPR) and Distributed System Reliability (DSR). A simple and efficient algorithm, SYREL, is developed to evaluate the reliability between two computing centers. This algorithm incorporates conditional probability, set theory, and Boolean algebra in a distinct approach to achieve fast execution times and obtain compact expressions. An elegant and unified approach based on graph-theoretic techniques is used in developing algorithms to evaluate DPR and DSR measures. It performs a breadth-first search on the graph representing a given distributed system to enumerate all the subgraphs that guarantee the proper accessibility for executing the given task(s). These subgraphs are then used to evaluate the desired reliabilities. Several optimization algorithms are developed for designing reliable systems under a cost constraint.
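
    Terminal reliability between two nodes can also be estimated by Monte Carlo sampling of link states, a useful cross-check on exact algorithms of the SYREL kind (the four-node topology and link reliabilities below are invented for illustration):

      import numpy as np

      # Edges (u, v, probability the link is up), independent links.
      edges = [(0, 1, 0.9), (1, 3, 0.9), (0, 2, 0.8), (2, 3, 0.85),
               (1, 2, 0.7)]
      SRC, DST, N = 0, 3, 20_000
      rng = np.random.default_rng(10)

      def connected(up):
          # Depth-first search from SRC over surviving links.
          adj = {n: [] for n in range(4)}
          for (u, v, _), ok in zip(edges, up):
              if ok:
                  adj[u].append(v)
                  adj[v].append(u)
          seen, stack = {SRC}, [SRC]
          while stack:
              for w in adj[stack.pop()]:
                  if w not in seen:
                      seen.add(w)
                      stack.append(w)
          return DST in seen

      states = rng.random((N, len(edges))) < [p for *_, p in edges]
      rel = np.mean([connected(row) for row in states])
      print(f"estimated terminal reliability: {rel:.3f}")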

  3. Distributional Assumptions in Educational Assessments Analysis: Normal Distributions versus Generalized Beta Distribution in Modeling the Phenomenon of Learning

    ERIC Educational Resources Information Center

    Campos, Jose Alejandro Gonzalez; Moraga, Paulina Saavedra; Del Pozo, Manuel Freire

    2013-01-01

    This paper introduces the generalized beta (GB) model as a new modeling tool in the educational assessment area and evaluation analysis, specifically. Unlike the normal model, the GB model allows us to capture some real characteristics of the data, and it is an important tool for understanding the phenomenon of learning. This paper develops a contrast with the…

  4. A fractal approach to dynamic inference and distribution analysis

    PubMed Central

    van Rooij, Marieke M. J. W.; Nash, Bertha A.; Rajaraman, Srinivasan; Holden, John G.

    2013-01-01

    Event-distributions inform scientists about the variability and dispersion of repeated measurements. This dispersion can be understood from a complex systems perspective, and quantified in terms of fractal geometry. The key premise is that a distribution's shape reveals information about the governing dynamics of the system that gave rise to the distribution. Two categories of characteristic dynamics are distinguished: additive systems governed by component-dominant dynamics and multiplicative or interdependent systems governed by interaction-dominant dynamics. A logic by which systems governed by interaction-dominant dynamics are expected to yield mixtures of lognormal and inverse power-law samples is discussed. These mixtures are described by a so-called cocktail model of response times derived from human cognitive performances. The overarching goals of this article are twofold: First, to offer readers an introduction to this theoretical perspective and second, to offer an overview of the related statistical methods. PMID:23372552

  5. Analysis and machine mapping of the distribution of band recoveries

    USGS Publications Warehouse

    Cowardin, L.M.

    1977-01-01

    A method of calculating distance and bearing from banding site to recovery location based on the solution of a spherical triangle is presented. X and Y distances on an ordinate grid were applied to computer plotting of recoveries on a map. The advantages and disadvantages of tables of recoveries by State or degree block, axial lines, and distance of recovery from banding site for presentation and comparison of the spatial distribution of band recoveries are discussed. A special web-shaped partition formed by concentric circles about the point of banding and great circles at 30-degree intervals through the point of banding has certain advantages over other methods. Comparison of distributions by means of a χ² contingency test is illustrated. The statistic V = χ²/N can be used as a measure of difference between two distributions of band recoveries and its possible use is illustrated as a measure of the degree of migrational homing.
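
    The spherical-triangle solution reduces to the standard great-circle formulas; a compact sketch (the banding and recovery coordinates are made up):

      import numpy as np

      def distance_bearing(lat1, lon1, lat2, lon2, r_km=6371.0):
          # Great-circle distance (km) and initial bearing (degrees).
          p1, p2 = np.radians(lat1), np.radians(lat2)
          dlon = np.radians(lon2 - lon1)
          cos_c = (np.sin(p1) * np.sin(p2)
                   + np.cos(p1) * np.cos(p2) * np.cos(dlon))
          central = np.arccos(np.clip(cos_c, -1.0, 1.0))
          y = np.sin(dlon) * np.cos(p2)
          x = np.cos(p1) * np.sin(p2) - np.sin(p1) * np.cos(p2) * np.cos(dlon)
          bearing = (np.degrees(np.arctan2(y, x)) + 360.0) % 360.0
          return r_km * central, bearing

      # Hypothetical banding site (North Dakota), recovery site (Louisiana).
      d, b = distance_bearing(47.9, -99.8, 30.2, -92.0)
      print(f"distance = {d:.0f} km, bearing = {b:.0f} deg")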

  6. Analysis of inclusion distributions in silicon carbide armor ceramics

    NASA Astrophysics Data System (ADS)

    Bakas, Michael Paul

    It was determined that intrinsic microstructural defects (i.e. inclusions) are the preferential fragmentation path (initiation or propagation) for ballistically impacted SiC, and may contribute to variation in ballistic performance. Quasi-static and ballistic samples of SiC were studied and inclusions caused by common SiC sintering aids and/or impurities were identified. Ballistic rubble surfaces showed large inclusions of 10-400 micron size, while examination of polished cross-sections of the fragments showed only inclusions under 5 microns in size. The fact that large inclusions were found preferentially on rubble surfaces demonstrates a link between severe microstructural defects and the fragmentation of SiC armor. Rubble of both a "good" and "bad" performing SiC target were examined. Inclusion size data was gathered and fit to a distribution function. A difference was observed between the targets. The "good" target had twice the density of inclusions on its rubble in the size range less than 30 microns. No significant difference between distributions was observed for inclusion sizes greater than 40 microns. The "good" target fractured into an overall smaller fragment size distribution than the "bad" target, consistent with fragmentation at higher stresses. Literature suggests that the distribution of defects activated under dynamic conditions will be determined by the maximum stress reached locally in the target. On the basis of the defect distributions on its rubble, the "good" target appears to have withstood higher stresses. The fragment size distribution and inclusion size distribution on fragment surfaces both indicate higher stresses in the "good" target. Why the "good" target withstood a greater stress than the "bad" target remains a subject for conjecture. It is speculated that the position of severe "anomalous" defects may be influencing the target's performance, but this currently cannot be demonstrated conclusively. Certainly, this research shows

  7. THE EPANET PROGRAMMER'S TOOLKIT FOR ANALYSIS OF WATER DISTRIBUTION SYSTEMS

    EPA Science Inventory

    The EPANET Programmer's Toolkit is a collection of functions that helps simplify computer programming of water distribution network analyses. The functions can be used to read in a pipe network description file, modify selected component properties, run multiple hydraulic and wa...

  8. Optimizing Distributed Practice: Theoretical Analysis and Practical Implications

    ERIC Educational Resources Information Center

    Cepeda, Nicholas J.; Coburn, Noriko; Rohrer, Doug; Wixted, John T.; Mozer, Michael C.; Pashler, Harold

    2009-01-01

    More than a century of research shows that increasing the gap between study episodes using the same material can enhance retention, yet little is known about how this so-called distributed practice effect unfolds over nontrivial periods. In two three-session laboratory studies, we examined the effects of gap on retention of foreign vocabulary,…

  9. Analysis of distributed cooled high power millimeter wave windows

    SciTech Connect

    Nelson, S.D.; Caplan, M.; Reitter, T.A.

    1995-09-09

    The sectional high-frequency (100-170 GHz) distributed cooled window has previously been investigated both electromagnetically and thermally using computational electromagnetics (EM) and thermal codes. Recent work relates these analyses to experimental data for the window. Results are presented for time domain CW EM analyses and for CW thermal and stress calculations.

  10. High Resolution PV Power Modeling for Distribution Circuit Analysis

    SciTech Connect

    Norris, B. L.; Dise, J. H.

    2013-09-01

    NREL has contracted with Clean Power Research to provide 1-minute simulation datasets of PV systems located at three high penetration distribution feeders in the service territory of Southern California Edison (SCE): Porterville, Palmdale, and Fontana, California. The resulting PV simulations will be used to separately model the electrical circuits to determine the impacts of PV on circuit operations.

  11. Metagenomic Analysis of Water Distribution System Bacterial Communities

    EPA Science Inventory

    The microbial quality of drinking water is assessed using culture-based methods that are highly selective and that tend to underestimate the densities and diversity of microbial populations inhabiting distribution systems. In order to better understand the effect of different dis...

  12. Nb 3Sn tensile strength and its distribution estimated from change in superconducting critical current of preloaded multifilamentary composite wire

    NASA Astrophysics Data System (ADS)

    Ochiai, S.; Nishino, S.; Hojo, M.; Osamura, K.; Watanabe, K.

    The distribution of tensile strength of Nb 3Sn in multifilamentary composite wires heat-treated at 973 K for 8.64 ks (sample A) and for 43.2 ks (sample B) was estimated from an analysis of the change in the superconducting critical current at 4.2 K caused by the preloading treatment at room temperature. The average strengths of Nb 3Sn in samples A and B for a gauge length of 25 mm were 1.3 and 1.0 GPa, respectively. Applying the two-parameter Weibull distribution function, the shape parameters of samples A and B were 7.2 and 12 and the scale parameters 1.4 and 1.1 GPa, respectively. These results indicate that when the thickness of Nb 3Sn becomes great, the average strength becomes low, while the scatter of the strength (the coefficient of variation) becomes small.

  13. Flaw strength distributions and statistical parameters for ceramic fibers: the normal distribution.

    PubMed

    R'mili, M; Godin, N; Lamon, J

    2012-05-01

    The present paper investigates large sets of ceramic fibre failure strengths (500 to 1000 data) produced using tensile tests on tows that contained either 500 or 1000 filaments. The probability density function was determined through acoustic emission monitoring which allowed detection and counting of filament fractures. The statistical distribution of filament strengths was described using the normal distribution. The Weibull equation was then fitted to this normal distribution for estimation of statistical parameters. A perfect agreement between both distributions was obtained, and a quite negligible scatter in statistical parameters was observed, as opposed to the wide variability that is reported in the literature. Thus it was concluded that flaw strengths are distributed normally and that the statistical parameters that were derived are the true ones. In a second step, the conventional method of estimation of Weibull parameters was applied to these sets of data and, then, to subsets selected randomly. The influence of other factors involved in the conventional method of determination of statistical parameters is discussed. It is demonstrated that selection of specimens, sample size, and method of construction of so-called Weibull plots are responsible for statistical parameters variability. PMID:23004702
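
    The sample-size effect on conventional estimates is easy to demonstrate: repeatedly fit small random subsets of a large strength sample and inspect the scatter of the shape parameter (synthetic filament strengths, not the tow data):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(11)
      pop = stats.weibull_min.rvs(5.0, scale=2.0, size=1000, random_state=rng)

      shapes = []
      for _ in range(200):
          subset = rng.choice(pop, size=30, replace=False)  # small test series
          c, _, _ = stats.weibull_min.fit(subset, floc=0)
          shapes.append(c)

      print(f"shape from n=30 subsets: mean {np.mean(shapes):.2f}, "
            f"std {np.std(shapes):.2f} (population value 5.0)")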

  15. Nonlinear structural analysis on distributed-memory computers

    NASA Technical Reports Server (NTRS)

    Watson, Brian C.; Noor, Ahmed K.

    1995-01-01

    A computational strategy is presented for the nonlinear static and postbuckling analyses of large complex structures on massively parallel computers. The strategy is designed for distributed-memory, message-passing parallel computer systems. The key elements of the proposed strategy are: (1) a multiple-parameter reduced basis technique; (2) a nested dissection (or multilevel substructuring) ordering scheme; (3) parallel assembly of global matrices; and (4) a parallel sparse equation solver. The effectiveness of the strategy is assessed by applying it to thermo-mechanical postbuckling analyses of stiffened composite panels with cutouts, and nonlinear large-deflection analyses of HSCT models on Intel Paragon XP/S computers. The numerical studies presented demonstrate the advantages of nested dissection-based solvers over traditional skyline-based solvers on distributed memory machines.

  16. A Distributed Datacube Analysis Service for Radio Telescopes

    NASA Astrophysics Data System (ADS)

    Mahadevan, V.; Rosolowsky, E.

    2011-07-01

    Current- and next-generation radio telescopes are poised to produce data at an unprecedented rate. We are developing the cyberinfrastructure to enable distributed processing and storage of FITS data cubes from these telescopes. In this contribution, we will present the data storage and network infrastructure that enables efficient searching, extraction and transfer of FITS datacubes. The infrastructure combines the iRODS distributed data management with a custom spatially-enabled PostgreSQL database. The data management system ingests FITS cubes, automatically populating the metadata database using FITS header data. Queries to the metadata service return matching records using VOTable format. The iRODS system allows for a distributed network of fileservers to store large data sets redundantly with a minimum of upkeep. Transfers between iRODS data sites use parallel I/O streams for maximum speed. Files are staged to the optimal host for download by an end user. The service can automatically extract subregions of individual or adjacent cubes registered to user-defined astrometric grids using the Montage package. The data system can query multiple surveys and return spatially registered data cubes to the user. Future development will allow the data system to utilize distributed processing environment to analyze datasets, returning only the calculation results to the end user. This cyberinfrastructure project combines many existing, open-source packages into a single deployment of a data system. The codebase can also function on two-dimensional images. The project is funded by CANARIE under the Network-Enabled Platforms 2 program.

  17. Analysis of hailstone size distributions from a hailpad network

    NASA Astrophysics Data System (ADS)

    Fraile, R.; Castro, A.; Sánchez, J. L.

    In the province of León, a network of 250 hailpads has been installed in an area of 1000 km². After the individual calibration of every plate, the dents are measured by a manual method which stores data in files that can be analyzed by computer. Once the hailstones are classified according to their size, difficulties may arise when fitting this distribution linearly to a function of the type log N = log N₀ − βx, where N is the number of hailstones in the size class x. A discussion is presented on the universal validity of the parameters N₀ and β, on the problem of empty classes (to which it is impossible to apply logarithms), and on the discrimination of the smallest hail classes when making such a fit. In conclusion, statistical methods are proposed for fitting the exponential or gamma distribution. The latter distribution includes the former as a particular case and offers a better fit to the experimental data.
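
    The proposed alternative to the log-linear fit is a direct statistical fit. A minimal sketch, assuming maximum-likelihood estimation via SciPy and synthetic diameters in place of hailpad measurements:

    ```python
    # Maximum-likelihood gamma and exponential fits to hailstone diameters; this
    # avoids the empty-class and logarithm problems of the linearized fit.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    diameters = rng.gamma(shape=2.0, scale=3.0, size=500)   # mm, synthetic

    # The exponential is the gamma with shape fixed at 1, so the two models nest.
    shape, loc, scale = stats.gamma.fit(diameters, floc=0.0)
    expon_scale = stats.expon.fit(diameters, floc=0.0)[1]
    print(f"gamma: shape={shape:.2f}, scale={scale:.2f}; "
          f"exponential mean={expon_scale:.2f}")
    ```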

  18. Periodic analysis of total ozone and its vertical distribution

    NASA Technical Reports Server (NTRS)

    Wilcox, R. W.; Nastrom, G. D.; Belmont, A. D.

    1975-01-01

    Both total ozone and vertical distribution ozone data from the period 1957 to 1972 are analyzed. For total ozone, improved monthly zonal means for both hemispheres are computed by weighting individual station monthly means by a factor which compensates for the close grouping of stations in certain regions of latitude bands. Longitudinal variability shows maxima in summer in both hemispheres but, in winter, only in the Northern Hemisphere. The geographical distributions of the long-term mean and of the annual, quasibiennial, and semiannual waves in total ozone over the Northern Hemisphere are presented. The extratropical amplitude of the annual wave is by far the largest of the three, as much as 120 m atm cm over northern Siberia. There is a tendency for all three waves to have maxima in high latitudes. Monthly means of the vertical distribution of ozone determined from 3 to 8 years of ozonesonde data over North America are presented. Number density is highest in the Arctic near 18 km. The region of maximum number density slopes upward toward 10° N, where the long-term mean is 45 × 10¹¹ molecules cm⁻³ near 26 km.

  19. Agent-based reasoning for distributed multi-INT analysis

    NASA Astrophysics Data System (ADS)

    Inchiosa, Mario E.; Parker, Miles T.; Perline, Richard

    2006-05-01

    Fully exploiting the intelligence community's exponentially growing data resources will require computational approaches differing radically from those currently available. Intelligence data is massive, distributed, and heterogeneous. Conventional approaches requiring highly structured and centralized data will not meet this challenge. We report on a new approach, Agent-Based Reasoning (ABR). In NIST evaluations, the use of ABR software tripled analysts' solution speed, doubled accuracy, and halved perceived difficulty. ABR makes use of populations of fine-grained, locally interacting agents that collectively reason about intelligence scenarios in a self-organizing, "bottom-up" process akin to those found in biological and other complex systems. Reproduction rules allow agents to make inferences from multi-INT data, while movement rules organize information and optimize reasoning. Complementary deterministic and stochastic agent behaviors enhance reasoning power and flexibility. Agent interaction via small-world networks - such as are found in nervous systems, social networks, and power distribution grids - dramatically increases the rate of discovering intelligence fragments that usefully connect to yield new inferences. Small-world networks also support the distributed processing necessary to address intelligence community data challenges. In addition, we have found that ABR pre-processing can boost the performance of commercial text clustering software. Finally, we have demonstrated interoperability with Knowledge Engineering systems and seen that reasoning across diverse data sources can be a rich source of inferences.

  20. Adaptive Weibull Multiplicative Model and Multilayer Perceptron Neural Networks for Dark-Spot Detection from SAR Imagery

    PubMed Central

    Taravat, Alireza; Oppelt, Natascha

    2014-01-01

    Oil spills represent a major threat to ocean ecosystems and their environmental status. Previous studies have shown that Synthetic Aperture Radar (SAR), as its recording is independent of clouds and weather, can be effectively used for the detection and classification of oil spills. Dark formation detection is the first and critical stage in oil-spill detection procedures. In this paper, a novel approach for automated dark-spot detection in SAR imagery is presented. An approach combining an adaptive Weibull Multiplicative Model (WMM) with MultiLayer Perceptron (MLP) neural networks is proposed to differentiate between dark spots and the background. The results have been compared with those of a model combining non-adaptive WMM and pulse-coupled neural networks. The presented approach removes the need to hand-set the non-adaptive WMM filter parameters by developing an adaptive WMM model, a step towards fully automatic dark-spot detection. The proposed approach was tested on 60 ENVISAT and ERS2 images which contained dark spots. For the overall dataset, an average accuracy of 94.65% was obtained. Our experimental results demonstrate that the proposed approach is robust and effective in cases where the non-adaptive WMM and pulse-coupled neural network (PCNN) model yields poor accuracy. PMID:25474376
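
    The Weibull ingredient of such a pipeline can be illustrated in a few lines. This is not the authors' adaptive WMM/MLP system, only a hedged sketch of the underlying idea, fitting a Weibull model to backscatter amplitudes and flagging the low tail as dark-spot candidates, on synthetic data:

    ```python
    # Fit a Weibull model to SAR-like backscatter amplitudes and flag the low tail.
    # Synthetic image; a stand-in for the statistical idea, not the full WMM/MLP chain.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    sea = rng.weibull(2.0, size=(256, 256)) * 0.8     # synthetic sea clutter
    sea[100:140, 60:120] *= 0.3                       # implanted dark spot

    c, loc, scale = stats.weibull_min.fit(sea.ravel(), floc=0.0)
    threshold = stats.weibull_min.ppf(0.05, c, loc=loc, scale=scale)  # 5th pct
    dark_mask = sea < threshold
    print(f"shape={c:.2f}, scale={scale:.2f}, flagged={dark_mask.mean():.1%}")
    ```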

  1. A two-scale Weibull approach to the failure of porous ceramic structures made by robocasting: possibilities and limits

    PubMed Central

    Genet, Martin; Houmard, Manuel; Eslava, Salvador; Saiz, Eduardo; Tomsia, Antoni P.

    2012-01-01

    This paper introduces our approach to modeling the mechanical behavior of cellular ceramics, through the example of calcium phosphate scaffolds made by robocasting for bone-tissue engineering. The Weibull theory is used to describe the statistical failure of the scaffolds' constitutive rods, and the Sanchez-Palencia theory of periodic homogenization is used to link the rod and scaffold scales. Uniaxial compression of scaffolds and three-point bending of rods were performed to calibrate and validate the model. While calibration based on rod-scale data leads to over-conservative predictions of the scaffolds' properties (because successive rod failures are not taken into account), we show that, for a given rod diameter, calibration based on scaffold-scale data leads to very satisfactory predictions for a wide range of rod spacings, i.e., of scaffold porosities, as well as for different loading conditions. This work establishes the proposed model as a reliable tool for understanding and optimizing the mechanical properties of cellular ceramics. PMID:23439936
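
    The rod-scale ingredient of such two-scale models is the weakest-link Weibull law, in which failure probability grows with both stress and stressed volume. A sketch with illustrative parameter values (not those calibrated in the paper):

    ```python
    # Weakest-link Weibull failure probability of a rod of volume V at stress sigma,
    # given reference parameters (sigma_0, m, V_0). All values are illustrative.
    import numpy as np

    def weibull_failure_probability(sigma, V, sigma_0=50.0, m=5.0, V_0=1.0):
        """P_f = 1 - exp(-(V/V_0) * (sigma/sigma_0)**m)."""
        return 1.0 - np.exp(-(V / V_0) * (sigma / sigma_0) ** m)

    # Doubling the stressed volume raises the failure probability at fixed stress:
    print(weibull_failure_probability(40.0, V=1.0))   # ~0.28
    print(weibull_failure_probability(40.0, V=2.0))   # ~0.48
    ```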

  3. Carbon dioxide at an unpolluted site analysed with the smoothing kernel method and skewed distributions.

    PubMed

    Pérez, Isidro A; Sánchez, M Luisa; García, M Ángeles; Pardo, Nuria

    2013-07-01

    CO₂ concentrations recorded for two years using a Picarro G1301 analyser at a rural site were studied applying two procedures. First, the smoothing kernel method, which to date had been used with one linear and one circular variable, was applied to pairs of circular variables: wind direction, time of day, and time of year. It showed that the daily cycle was the prevailing cyclical evolution and that the highest concentrations were explained by the influence of a nearby city source, which was only revealed by directional analysis. Second, histograms were obtained; these revealed that most observations lay between 380 and 410 ppm and that there was a sharp contrast during the year. Finally, the histograms were fitted to 14 distributions, the best-known ones using analytical procedures and the remainder using numerical procedures. RMSE was used as the goodness-of-fit indicator to compare and select distributions. Most functions provided similar RMSE values. However, the best fits were obtained using numerical procedures due to their greater flexibility, the triangular distribution being the simplest function of this kind. This distribution allowed us to identify directions and months of noticeable CO₂ input (SSE and April-May, respectively) as well as the daily cycle of the distribution symmetry. Among the functions whose parameters were calculated using an analytical expression, Erlang distributions provided satisfactory fits for the monthly analysis, and gamma for the rest. By contrast, the Rayleigh and Weibull distributions gave the worst RMSE values. PMID:23602977
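
    The model-selection step described above can be reproduced in outline: fit several candidate distributions and rank them by the RMSE between the fitted and empirical histogram densities. A sketch with a synthetic sample standing in for the CO₂ record:

    ```python
    # Fit candidate distributions and rank by RMSE against the empirical histogram.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    conc = rng.gamma(shape=40.0, scale=1.0, size=5000) + 355.0   # synthetic "ppm"

    edges = np.linspace(conc.min(), conc.max(), 31)
    centers = 0.5 * (edges[:-1] + edges[1:])
    empirical, _ = np.histogram(conc, bins=edges, density=True)

    for dist in (stats.gamma, stats.weibull_min, stats.rayleigh, stats.triang):
        params = dist.fit(conc)
        rmse = np.sqrt(np.mean((dist.pdf(centers, *params) - empirical) ** 2))
        print(f"{dist.name:12s} RMSE = {rmse:.5f}")
    ```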

  4. Archiving, Distribution and Analysis of Solar-B Data

    NASA Astrophysics Data System (ADS)

    Shimojo, M.

    2007-10-01

    The Solar-B Mission Operation and Data Analysis (MODA) working group has been discussing the data analysis system for Solar-B data since 2001. In this paper, based on the Solar-B MODA document and recent work in Japan, we describe the flow of data from Solar-B to scientists, the format and data levels of Solar-B data, and the system for searching and providing data.

  5. Specimen size effects on the compressive strength and Weibull modulus of nuclear graphite of different coke particle size: IG-110 and NBG-18

    NASA Astrophysics Data System (ADS)

    Chi, Se-Hwan

    2013-05-01

    The effects of specimen size on the compressive strength and Weibull modulus were investigated for nuclear graphite of different coke particle sizes: IG-110 and NBG-18 (average coke particle size for IG-110: 25 μm, NBG-18: 300 μm). Two types of cylindrical specimens, i.e., where the diameter to length ratio was 1:2 (ASTM C 695-91 type specimen, 1:2 specimen) or 1:1 (1:1 specimen), were prepared for six diameters (3, 4, 5, 10, 15, and 20 mm) and tested at room temperature (compressive strain rate: 2.08 × 10⁻⁴ s⁻¹). Anisotropy was considered during specimen preparation for NBG-18. The results showed that the effects of specimen size appeared negligible for the compressive strength, but grade-dependent for the Weibull modulus. In view of specimen miniaturization, deviations from the ASTM C 695-91 specimen size requirements require an investigation into the effects of size for the grade of graphite of interest, and the specimen size effects should be considered for Weibull modulus determination.
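
    The dependence of the estimated Weibull modulus on sample size can be illustrated by bootstrapping maximum-likelihood fits at different subset sizes. Synthetic strengths are used; the parameters are not the IG-110 or NBG-18 values:

    ```python
    # Bootstrap the maximum-likelihood Weibull modulus at two subset sizes to show
    # how sample size drives scatter in the estimate. Strengths are synthetic.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    strengths = 80.0 * rng.weibull(10.0, size=1000)   # MPa, true modulus m = 10

    for n in (20, 200):
        estimates = []
        for _ in range(100):
            subset = rng.choice(strengths, size=n, replace=False)
            m_hat = stats.weibull_min.fit(subset, floc=0.0)[0]
            estimates.append(m_hat)
        print(f"n={n}: modulus mean={np.mean(estimates):.2f}, "
              f"sd={np.std(estimates):.2f}")
    ```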

  6. VISUALIZATION AND ANALYSIS OF LPS DISTRIBUTION IN BINARY PHOSPHOLIPID BILAYERS

    PubMed Central

    Henning, María Florencia; Sanchez, Susana; Bakás, Laura

    2010-01-01

    Lipopolysaccharide (LPS) is an endotoxin released from the outer membrane of Gram-negative bacteria during infections. It has been reported that LPS may play a role in the outer membrane of bacteria similar to that of cholesterol in eukaryotic plasma membranes. In this article we compare the effect of introducing LPS or cholesterol into liposomes made of dipalmitoylphosphatidylcholine/dioleoylphosphatidylcholine on the solubilization process by Triton X-100. The results show that liposomes containing LPS or cholesterol are more resistant to solubilization by Triton X-100 than the binary phospholipid mixtures at 4°C. The LPS distribution was analyzed on GUVs of DPPC:DOPC using FITC-LPS. Solid and liquid-crystalline domains were visualized by labeling the GUVs with LAURDAN, and GP images were acquired using a two-photon microscope. The images show a selective distribution of LPS in gel domains. Our results support the hypothesis that LPS could aggregate and concentrate selectively in biological membranes, providing a mechanism to bring together several components of the LPS-sensing machinery. PMID:19324006

  7. Visualization and analysis of lipopolysaccharide distribution in binary phospholipid bilayers

    SciTech Connect

    Henning, Maria Florencia; Sanchez, Susana; Bakas, Laura

    2009-05-22

    Lipopolysaccharide (LPS) is an endotoxin released from the outer membrane of Gram-negative bacteria during infections. It has been reported that LPS may play a role in the outer membrane of bacteria similar to that of cholesterol in eukaryotic plasma membranes. In this article we compare the effect of introducing LPS or cholesterol into liposomes made of dipalmitoylphosphatidylcholine/dioleoylphosphatidylcholine on the solubilization process by Triton X-100. The results show that liposomes containing LPS or cholesterol are more resistant to solubilization by Triton X-100 than the binary phospholipid mixtures at 4 °C. The LPS distribution was analyzed on GUVs of DPPC:DOPC using FITC-LPS. Solid and liquid-crystalline domains were visualized by labeling the GUVs with LAURDAN, and GP images were acquired using a two-photon microscope. The images show a selective distribution of LPS in gel domains. Our results support the hypothesis that LPS could aggregate and concentrate selectively in biological membranes, providing a mechanism to bring together several components of the LPS-sensing machinery.

  8. Performance analysis of structured pedigree distributed fusion systems

    NASA Astrophysics Data System (ADS)

    Arambel, Pablo O.

    2009-05-01

    Structured pedigree is a way to compress pedigree information. When applied to distributed fusion systems, the approach avoids the well-known problem of information double counting that results from ignoring the cross-correlation among fused estimates. Other schemes that attempt to compute optimal fused estimates require the transmission of full pedigree information or raw data, which usually cannot be implemented in practical systems because of the enormous communications bandwidth requirements. The Structured Pedigree approach achieves data compression by maintaining multiple covariance matrices, one for each uncorrelated source in the network. These covariance matrices are transmitted by each node along with the state estimate. This represents a significant compression when compared to full pedigree schemes. The transmission of these covariance matrices (or a subset of them) allows for efficient fusion of the estimates, while avoiding information double counting and guaranteeing consistency of the estimates. This is achieved by exploiting the additional partial knowledge of the correlation of the estimates. The approach uses a generalized version of the Split Covariance Intersection algorithm that applies to multiple estimates and multiple uncorrelated sources. In this paper we study the performance of the proposed distributed fusion system by analyzing a simple but instructive example.

  9. Studying bubble-particle interactions by zeta potential distribution analysis.

    PubMed

    Wu, Chendi; Wang, Louxiang; Harbottle, David; Masliyah, Jacob; Xu, Zhenghe

    2015-07-01

    Over a decade ago, Xu and Masliyah pioneered an approach to characterize the interactions between particles in dynamic environments of multicomponent systems by measuring zeta potential distributions of individual components and their mixtures. Using a Zetaphoremeter, the measured zeta potential distributions of individual components and their mixtures were used to determine the conditions of preferential attachment in multicomponent particle suspensions. The technique has been applied to study the attachment of nano-sized silica and alumina particles to sub-micron size bubbles in solutions with and without the addition of surface active agents (SDS, DAH and DF250). The degree of attachment between gas bubbles and particles is shown to be a function of the interaction energy governed by the dispersion, electrostatic double layer and hydrophobic forces. Under certain chemical conditions, the attachment of nano-particles to sub-micron size bubbles is shown to be enhanced by in-situ gas nucleation induced by hydrodynamic cavitation for the weakly interacting systems, where mixing of the two individual components results in negligible attachment. Preferential interaction in complex tertiary particle systems demonstrated strong attachment between micron-sized alumina and gas bubbles, with little attachment between micron-sized alumina and silica, possibly due to instability of the aggregates in the shear flow environment. PMID:25731913

  10. A data analysis expert system for large established distributed databases

    NASA Technical Reports Server (NTRS)

    Gnacek, Anne-Marie; An, Y. Kim; Ryan, J. Patrick

    1987-01-01

    A design for a natural language database interface system, called the Deductively Augmented NASA Management Decision support System (DANMDS), is presented. The DANMDS system components have been chosen on the basis of the following considerations: maximal employment of the existing NASA IBM-PC computers and supporting software; local structuring and storing of external data via the entity-relationship model; a natural, easy-to-use, error-free database query language; user ability to alter the query language vocabulary and data analysis heuristics; and significant artificial intelligence data analysis heuristic techniques that allow the system to become progressively and automatically more useful.

  11. Failure property distributions for conventional and highly crosslinked ultrahigh molecular weight polyethylenes.

    PubMed

    Kurtz, S M; Bergström, J; Rimnac, C M

    2005-05-01

    To make stochastic (probabilistic) failure predictions of a conventional or highly crosslinked ultrahigh molecular weight polyethylene (UHMWPE) material, not only must a failure criterion be defined, but it is also necessary to specify a probability distribution of the failure strength. This study sought to evaluate both parametric and nonparametric statistical approaches to describing the failure properties of UHMWPE, based on the Normal and Weibull model distributions, respectively. Because fatigue and fracture properties of materials have historically been well described with the use of Weibull statistics, it was expected that a nonparametric approach would provide a better fit of the failure distributions than the parametric approach. The ultimate true stress, true strain, and ultimate chain stretch data at failure were analyzed from 60 tensile tests conducted previously. The ultimate load and ultimate displacement from 121 small punch tests conducted previously were also analyzed. It was found that both Normal and Weibull models provide a reasonable description of the central tendency of the failure distribution. The principal difference between the Normal and Weibull models can be appreciated in the predicted lower-bound response at the tail end of the distribution. The data support the use of both parametric and nonparametric methods to bracket the lower-bound failure prediction in order to simulate the failure threshold for UHMWPE. PMID:15772963
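
    The lower-tail contrast the study highlights is easy to reproduce in outline: fit both models to the same failure data and compare the predicted lower-bound strengths. A sketch with synthetic stand-in data, not the UHMWPE measurements:

    ```python
    # Fit Normal and Weibull models to the same failure data and compare the
    # predicted lower-bound (1st percentile) strengths. Data are synthetic.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    failure_stress = rng.normal(55.0, 5.0, size=60)   # MPa, synthetic

    mu, sd = stats.norm.fit(failure_stress)
    c, loc, scale = stats.weibull_min.fit(failure_stress, floc=0.0)

    print("Normal  1st percentile:", stats.norm.ppf(0.01, mu, sd))
    print("Weibull 1st percentile:", stats.weibull_min.ppf(0.01, c, loc, scale))
    ```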

  12. Temporal Distributions of Problem Behavior Based on Scatter Plot Analysis.

    ERIC Educational Resources Information Center

    Kahng, SungWoo; Iwata, Brian A.; Fischer, Sonya M.; Page, Terry J.; Treadwell, Kimberli R. H.; Williams, Don E.; Smith, Richard G.

    1998-01-01

    A large-scale analysis was conducted of problem behavior by observing 20 individuals living in residential facilities. Data were converted into scatter plot formats. When the data were transformed into aggregate "control charts," 12 of 15 sets of data revealed 30-minute intervals during which problem behavior was more likely to occur. (Author/CR)

  13. Quantitative analysis of inclusion distributions in hot pressed silicon carbide

    SciTech Connect

    Michael Paul Bakas

    2012-12-01

    Depth of penetration measurements in hot-pressed SiC have exhibited significant variability that may be influenced by microstructural defects. To better understand the role of microstructural defects under highly dynamic conditions, fragments of hot-pressed SiC plates subjected to impact tests were examined. Two types of inclusion defects were identified: carbonaceous and an aluminum-iron-oxide phase. A disproportionate number of large inclusions were found on the rubble, indicating that the inclusion defects took part in the fragmentation process. Distribution functions were plotted to compare the inclusion populations. Fragments from the superior-performing sample had an inclusion population consisting of more numerous but smaller inclusions. One possible explanation for this result is that the superior sample withstood a greater stress before failure, causing a greater number of smaller inclusions to participate in fragmentation than in the weaker sample.

  14. Finite key analysis for symmetric attacks in quantum key distribution

    SciTech Connect

    Meyer, Tim; Kampermann, Hermann; Kleinmann, Matthias; Bruss, Dagmar

    2006-10-15

    We introduce a constructive method to calculate the achievable secret key rate for a generic class of quantum key distribution protocols, when only a finite number n of signals is given. Our approach is applicable to all scenarios in which the quantum state shared by Alice and Bob is known. In particular, we consider the six state protocol with symmetric eavesdropping attacks, and show that for a small number of signals, i.e., below n ≈ 10⁴, the finite key rate differs significantly from the asymptotic value for n → ∞. However, for larger n, a good approximation of the asymptotic value is found. We also study secret key rates for protocols using higher-dimensional quantum systems.

  15. An analysis of the Seasat Satellite Data Distribution System

    NASA Technical Reports Server (NTRS)

    Ferrari, A. J.; Renfrow, J. T.

    1980-01-01

    A computerized data distribution network for remote accessing of Seasat generated data is described. The service is intended as an experiment to determine user needs and operational abilities for utilizing on-line satellite generated oceanographic data. Synoptic weather observations are input to the U.S. Fleet Numerical Oceanographic Central for preparation and transfer to a PDP 11/60 central computer, from which all access trunks originate. The data available includes meteorological and sea-state information in the form of analyses and forecasts, and users are being monitored for reactions to the system design, data products, system operation, and performance evaluation. The system provides data on sea level and upper atmospheric pressure, sea surface temperature, wind magnitude and direction, significant wave heights, direction, and periods, and spectral wave data. Transmissions have a maximum rate of 1.1 kbit/sec over the telephone line.

  16. Parallelization of Finite Element Analysis Codes Using Heterogeneous Distributed Computing

    NASA Technical Reports Server (NTRS)

    Ozguner, Fusun

    1996-01-01

    Performance gains in computer design are quickly consumed as users seek to analyze larger problems to a higher degree of accuracy. Innovative computational methods, such as parallel and distributed computing, seek to multiply the power of existing hardware technology to satisfy the computational demands of large applications. In the early stages of this project, experiments were performed using two large, coarse-grained applications, CSTEM and METCAN. These applications were parallelized on an Intel iPSC/860 hypercube. It was found that the overall speedup was very low, due to large, inherently sequential code segments present in the applications. The overall execution time T_par of the application is dependent on these sequential segments. If these segments make up a significant fraction of the overall code, the application will have a poor speedup measure.
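
    The speedup limit alluded to here is Amdahl's law: with a sequential fraction f, the speedup on p processors is bounded by 1/(f + (1 − f)/p). A quick illustration with arbitrary fractions:

    ```python
    # Amdahl's law: upper bound on speedup given a sequential code fraction f.
    def amdahl_speedup(f, p):
        return 1.0 / (f + (1.0 - f) / p)

    for f in (0.05, 0.25, 0.50):
        print(f"sequential fraction {f:.0%}: "
              f"speedup on 32 nodes = {amdahl_speedup(f, 32):.1f}x")
    ```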

  17. Southern Arizona riparian habitat: Spatial distribution and analysis

    NASA Technical Reports Server (NTRS)

    Lacey, J. R.; Ogden, P. R.; Foster, K. E.

    1975-01-01

    The objectives of this study were centered around the demonstration of remote sensing as an inventory tool and researching the multiple uses of riparian vegetation. Specific study objectives were to: (1) map riparian vegetation along the Gila River, San Simon Creek, San Pedro River, and Pantano Wash; (2) determine the feasibility of automated mapping using LANDSAT-1 computer compatible tapes; (3) locate and summarize existing maps delineating riparian vegetation; (4) summarize data relevant to Southern Arizona's riparian products and uses; (5) document recent riparian vegetation changes along a selected portion of the San Pedro River; (6) summarize historical changes in composition and distribution of riparian vegetation; and (7) summarize sources of available photography pertinent to Southern Arizona.

  18. Model of the reliability analysis of the distributed computer systems with architecture "client-server"

    NASA Astrophysics Data System (ADS)

    Kovalev, I. V.; Zelenkov, P. V.; Karaseva, M. V.; Tsarev, M. Yu; Tsarev, R. Yu

    2015-01-01

    The paper considers the problem of reliability analysis for distributed computer systems with client-server architecture. A distributed computer system is a set of hardware and software implementing the following main functions: processing, storage, transmission, and protection of data. The paper presents a scheme of distributed computer system functioning represented as a graph whose vertices are the functional states of the system and whose arcs are transitions from one state to another depending on the prevailing conditions. In the reliability analysis we consider such reliability indicators as the probabilities of the system transitioning into the stopping and accident states, as well as the intensities of these transitions. The proposed model allows us to obtain relations for the reliability parameters of a distributed computer system without any assumptions about the distribution laws of the random variables or the number of elements in the system.
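
    One simple way to evaluate such a state-transition graph is to encode it as a discrete-time transition matrix and propagate the state distribution. The states and probabilities below are invented for illustration, not taken from the paper:

    ```python
    # Propagate a state distribution through a small reliability state graph.
    import numpy as np

    # States: 0 = operating, 1 = degraded, 2 = stopped (absorbing)
    P = np.array([
        [0.97, 0.02, 0.01],
        [0.10, 0.85, 0.05],
        [0.00, 0.00, 1.00],
    ])

    state = np.array([1.0, 0.0, 0.0])   # start in the operating state
    for _ in range(100):                # 100 time steps
        state = state @ P
    print(f"P(stopped within 100 steps) = {state[2]:.3f}")
    ```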

  19. Photoelastic analysis of stress distribution with different implant systems.

    PubMed

    Pellizzer, Eduardo Piza; Carli, Rafael Imai; Falcón-Antenucci, Rosse Mary; Verri, Fellippo Ramos; Goiato, Marcelo Coelho; Villa, Luiz Marcelo Ribeiro

    2014-04-01

    The aim of this study was to evaluate stress distribution with different implant systems through photoelasticity. Five models were fabricated with photoelastic resin PL-2. Each model was composed of a block of photoelastic resin (10 × 40 × 45 mm) with an implant and a healing abutment: model 1, internal hexagon implant (4.0 × 10 mm; Conect AR, Conexão, São Paulo, Brazil); model 2, Morse taper/internal octagon implant (4.1 × 10 mm; Standard, Straumann ITI, Andover, Mass); model 3, Morse taper implant (4.0 × 10 mm; AR Morse, Conexão); model 4, locking taper implant (4.0 × 11 mm; Bicon, Boston, Mass); model 5, external hexagon implant (4.0 × 10 mm; Master Screw, Conexão). Axial and oblique (45°) loads of 150 N were applied by a universal testing machine (EMIC-DL 3000), and a circular polariscope was used to visualize the stress. The results were photographed and analyzed qualitatively using Adobe Photoshop software. For the axial load, the greatest stress concentration was exhibited in the cervical and apical thirds. However, for the oblique load, the highest number of isochromatic fringes was observed at the implant apex and in the cervical region adjacent to the load direction in all models. Model 2 (Morse taper, internal octagon, Straumann ITI) presented the lowest stress concentration, while model 5 (external hexagon, Master Screw, Conexão) exhibited the greatest stress. It was concluded that Morse taper implants presented a more favorable stress distribution among the test groups. The external hexagon implant showed the highest stress concentration. Oblique load generated the highest stress in all models analyzed. PMID:22208909

  20. Mechanistic information from analysis of molecular weight distributions of starch.

    PubMed

    Castro, Jeffrey V; Dumas, Céline; Chiou, Herbert; Fitzgerald, Melissa A; Gilbert, Robert G

    2005-01-01

    A methodology is developed for interpreting the molecular weight distributions of debranched amylopectin, based on techniques developed for quantitatively and qualitatively extracting mechanistic information from the molecular weight distributions of synthetic polymers. If the only events occurring are random chain growth and stoppage (i.e., the rates are independent of degree of polymerization over the range in question), then the logarithm of the number of chains of degree of polymerization N, ln P(N), is linear in N with a negative slope, where the slope gives the ratio of the stoppage and growth rates. This starting point suggests that mechanistic inferences can be made from a plot of ln P(N) against N. Application to capillary electrophoresis data for the P(N) of debranched starch from across the major taxa, from bacteria (Escherichia coli), green algae (Chlamydomonas reinhardtii), mammals (Bos), and flowering plants (Oryza sativa, rice; Zea mays, maize; Triticum aestivum, wheat; Hordeum vulgare, barley; and Solanum tuberosum, potato), gives insights into the biosynthetic pathways, showing the differences and similarities of the alpha-1,4-glucans produced by the various species. Four characteristic regions for storage starch from the higher plants are revealed: (1) an initial increasing region corresponding to the formation of new branches; (2) a linear ln P(N) region with negative slope, indicating random growth and stoppage; (3) a region corresponding to the formation of the crystalline lamellae and subsequent elongation of chains; and (4) a second linear ln P(N) region with negative slope. Each region can be assigned to specific enzymatic processes in starch synthesis, including determining the ranges of degrees of polymerization which are subject to random and nonrandom processes. PMID:16004469
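
    The slope extraction at the heart of this methodology is a one-line fit. A sketch on an idealized random-growth distribution (synthetic, not capillary-electrophoresis data):

    ```python
    # In a random growth/stoppage region, ln P(N) is linear in N and the negative
    # slope estimates the stoppage/growth rate ratio. Distribution is synthetic.
    import numpy as np

    N = np.arange(30, 70)
    ratio = 0.08                              # assumed stoppage/growth rate ratio
    P = np.exp(-ratio * N) * 1e6              # ideal random-growth distribution

    slope, intercept = np.polyfit(N, np.log(P), 1)
    print(f"estimated stoppage/growth ratio: {-slope:.3f}")
    ```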

  1. Structural reliability analysis of laminated CMC components

    NASA Technical Reports Server (NTRS)

    Duffy, Stephen F.; Palko, Joseph L.; Gyekenyesi, John P.

    1991-01-01

    For laminated ceramic matrix composite (CMC) materials to realize their full potential in aerospace applications, design methods and protocols are a necessity. The focus here is on the time-independent failure response of these materials, and a reliability analysis associated with the initiation of matrix cracking is presented. A public-domain computer algorithm is highlighted that is coupled with the laminate analysis of a finite element code and serves as a design aid to analyze structural components made from laminated CMC materials. Issues relevant to the effect of component size are discussed, and a parameter estimation procedure is presented. The estimation procedure allows three parameters to be calculated from a failure population that has an underlying Weibull distribution.

  2. Advanced analysis of metal distributions in human hair

    SciTech Connect

    Kempson, Ivan M.; Skinner, William M.

    2008-06-09

    A variety of techniques (secondary electron microscopy with energy dispersive X-ray analysis, time-of-flight-secondary ion mass spectrometry, and synchrotron X-ray fluorescence) were utilized to distinguish metal contamination occurring in hair arising from endogenous uptake from an individual exposed to a polluted environment, in this case a lead smelter. Evidence was sought for elements less affected by contamination and potentially indicative of biogenic activity. The unique combination of surface sensitivity, spatial resolution, and detection limits used here has provided new insight regarding hair analysis. Metals such as Ca, Fe, and Pb appeared to have little representative value of endogenous uptake and were mainly due to contamination. Cu and Zn, however, demonstrate behaviors worthy of further investigation into relating hair concentrations to endogenous function.

  3. Distributed Finite Element Analysis Using a Transputer Network

    NASA Technical Reports Server (NTRS)

    Watson, James; Favenesi, James; Danial, Albert; Tombrello, Joseph; Yang, Dabby; Reynolds, Brian; Turrentine, Ronald; Shephard, Mark; Baehmann, Peggy

    1989-01-01

    The principal objective of this research effort was to demonstrate the extraordinarily cost effective acceleration of finite element structural analysis problems using a transputer-based parallel processing network. This objective was accomplished in the form of a commercially viable parallel processing workstation. The workstation is a desktop size, low-maintenance computing unit capable of supercomputer performance yet costs two orders of magnitude less. To achieve the principal research objective, a transputer based structural analysis workstation termed XPFEM was implemented with linear static structural analysis capabilities resembling commercially available NASTRAN. Finite element model files, generated using the on-line preprocessing module or external preprocessing packages, are downloaded to a network of 32 transputers for accelerated solution. The system currently executes at about one third Cray X-MP24 speed but additional acceleration appears likely. For the NASA selected demonstration problem of a Space Shuttle main engine turbine blade model with about 1500 nodes and 4500 independent degrees of freedom, the Cray X-MP24 required 23.9 seconds to obtain a solution while the transputer network, operated from an IBM PC-AT compatible host computer, required 71.7 seconds. Consequently, the $80,000 transputer network demonstrated a cost-performance ratio about 60 times better than the $15,000,000 Cray X-MP24 system.

  4. Cross Section Sensitivity and Uncertainty Analysis Including Secondary Neutron Energy and Angular Distributions.

    Energy Science and Technology Software Center (ESTSC)

    1991-03-12

    Version 00 SUSD calculates sensitivity coefficients for one- and two-dimensional transport problems. Variance and standard deviation of detector responses or design parameters can be obtained using cross-section covariance matrices. In neutron transport problems, this code can perform sensitivity-uncertainty analysis for secondary angular distribution (SAD) or secondary energy distribution (SED).

  5. Evolution History of Asteroid Itokawa Based on Block Distribution Analysis

    NASA Astrophysics Data System (ADS)

    Mazrouei, Sara; Daly, Michael; Barnouin, Olivier; Ernst, Carolyn

    2013-04-01

    This work investigates trends in the global and regional distribution of blocks on asteroid 25143 Itokawa in order to better understand the history of this asteroid. Itokawa is a near-Earth object, and the first asteroid that was targeted for a sample return mission. Trends in the block population provide new insights regarding Itokawa's current appearance following the disruption of a possible parent body, and how its surface might have changed since then. Here blocks are defined as rocks or features with distinctive positive relief that are larger than a few meters in size. The size and distribution of blocks are measured by mapping the outlines of the blocks using the Small Body Mapping Tool (SBMT) created by the Johns Hopkins University Applied Physics Laboratory [1]. The SBMT allows the user to overlay correctly geo-located Hayabusa images [2] onto the Itokawa shape model. This study provides additional inferences on the original disruption and subsequent re-accretion of Itokawa's "head" and "body" from block analyses. A new approach is taken by analyzing the population of blocks with respect to latitude for both Itokawa's current state and a hypothetical elliptical body. Itokawa currently rotates approximately about its maximum moment of inertia, which is expected due to conservation of momentum and minimum energy arguments. After the possible disruption of the parent body of Itokawa, the "body" of Itokawa would have tended to a similar rotation. The shape of this body is made by removing the head of Itokawa and applying a semispherical cap. Using the method of [3], the inertial properties of this object are calculated. With the assumption that this object had settled to its stable rotational axis, it is found that the pole axis could have been tilted about 13° away from the current axis in the direction opposite the head, equivalent to a 33 meter change in the center of mass. The results of this study provide means to test the hypothesis

  6. A global survey on the seasonal variation of the marginal distribution of daily precipitation

    NASA Astrophysics Data System (ADS)

    Papalexiou, Simon Michael; Koutsoyiannis, Demetris

    2016-08-01

    To characterize the seasonal variation of the marginal distribution of daily precipitation, it is important to find which statistical characteristics of daily precipitation actually vary the most from month-to-month and which could be regarded to be invariant. Relevant to the latter issue is the question whether there is a single model capable to describe effectively the nonzero daily precipitation for every month worldwide. To study these questions we introduce and apply a novel test for seasonal variation (SV-Test) and explore the performance of two flexible distributions in a massive analysis of approximately 170,000 monthly daily precipitation records at more than 14,000 stations from all over the globe. The analysis indicates that: (a) the shape characteristics of the marginal distribution of daily precipitation, generally, vary over the months, (b) commonly used distributions such as the Exponential, Gamma, Weibull, Lognormal, and the Pareto, are incapable to describe "universally" the daily precipitation, (c) exponential-tail distributions like the Exponential, mixed Exponentials or the Gamma can severely underestimate the magnitude of extreme events and thus may be a wrong choice, and (d) the Burr type XII and the Generalized Gamma distributions are two good models, with the latter performing exceptionally well.
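
    Both recommended models are available in SciPy (as burr12 and gengamma), so the per-station fitting step can be sketched as follows, with a synthetic wet-day sample in place of a station record:

    ```python
    # Fit the Burr type XII and Generalized Gamma models to nonzero daily rainfall
    # and compare by AIC. The sample is synthetic, not one of the station records.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    rain = rng.gamma(shape=0.7, scale=8.0, size=2000)   # mm, wet days only

    for dist in (stats.burr12, stats.gengamma):
        params = dist.fit(rain, floc=0.0)
        aic = 2 * len(params) - 2 * np.sum(dist.logpdf(rain, *params))
        print(dist.name, "AIC =", round(aic, 1))
    ```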

  7. [Vertical Distribution Characteristics and Analysis in Sediments of Xidahai Lake].

    PubMed

    Duan, Mu-chun; Xiao, Hai-feng; Zang, Shu-ying

    2015-07-01

    The organic matter (OM), total nitrogen (TN), total phosphorus (TP), the forms of phosphorus, and the particle size in a columnar sediment core of Xidahai Lake were analyzed to discuss the vertical distribution characteristics and influencing factors. The results showed that the contents of OM, TN and TP were 0.633%-2.756%, 0.150%-0.429% and 648.00-1480.67 mg·kg⁻¹, respectively. The contents of Ca-P, IP and OM changed little, while the contents of Fe/Al-P, OP, TP and TN fluctuated from 1843 to 1970. From 1970 to 1996, the contents of Ca-P, IP and TP tended to decrease, the contents of Fe/Al-P, OP and OM first decreased and then increased to different degrees, and TN fluctuated strongly. The nutrient element contents showed relatively large fluctuations from 1996 to 2009, and the average contents of Fe/Al-P, OP and OM were the highest of the three stages. The nutrient pollution sources of the sediment core were mainly industrial wastewater, sewage, and fertilizer losses around Xidahai Lake. The ratio of C/N in the sediments showed that the organic matter was mainly from aquatic organisms. The sediment particle size composition was dominated by clay and fine silt. Correlation studies showed that Ca-P, IP and TP were significantly positively correlated, showing that the contribution of Ca-P to IP and TP growth was large. PMID:26489314

  8. Secure distributed genome analysis for GWAS and sequence comparison computation

    PubMed Central

    2015-01-01

    Background The rapid increase in the availability and volume of genomic data makes significant advances in biomedical research possible, but sharing of genomic data poses challenges due to the highly sensitive nature of such data. To address the challenges, a competition for secure distributed processing of genomic data was organized by the iDASH research center. Methods In this work we propose techniques for securing computation with real-life genomic data for minor allele frequency and chi-squared statistics computation, as well as distance computation between two genomic sequences, as specified by the iDASH competition tasks. We put forward novel optimizations, including a generalization of a version of mergesort, which might be of independent interest. Results We provide implementation results of our techniques based on secret sharing that demonstrate practicality of the suggested protocols and also report on performance improvements due to our optimization techniques. Conclusions This work describes our techniques, findings, and experimental results developed and obtained as part of iDASH 2015 research competition to secure real-life genomic computations and shows feasibility of securely computing with genomic data in practice. PMID:26733307

  9. Study of VOC distribution in citrus fruits by chromatographic analysis.

    PubMed

    Ligor, Magdalena; Buszewski, Bogusław

    2003-07-01

    The contamination of various parts of citrus fruits by toluene (a representative volatile organic compound, VOC) was analyzed. A model of contamination distribution, based on investigations of the sorption and accumulation of toluene in particular parts of citrus fruits, was considered. Solvent extraction of components from fruit parts (waxy layer, cuticle, and pulp) was applied. The extracts were analyzed by gas chromatography. The sorption time profiles for citrus fruits such as kumquats and mandarins were determined by plotting the extracted mass, or the relationship C/C(0), versus the sorption time of toluene. After the sorption process the highest concentration of toluene was observed in the flavedo, where the oil glands of kumquats and mandarins are located. The data obtained prove that the high dissolution of aromatic hydrocarbons results from the presence of essential oils in the oil glands. The diffusion coefficients of toluene for the cuticle and pulp of kumquats were also calculated. The results of the model investigations were compared with the actual concentration of toluene in kumquats, citrons, mandarins and oranges from outdoor stands and orchards. PMID:12768263

  10. A numerical method for the stress analysis of stiffened-shell structures under nonuniform temperature distributions

    NASA Technical Reports Server (NTRS)

    Heldenfels, Richard R

    1951-01-01

    A numerical method is presented for the stress analysis of stiffened-shell structures of arbitrary cross section under nonuniform temperature distributions. The method is based on a previously published procedure that is extended to include temperature effects and multicell construction. The application of the method to practical problems is discussed and an illustrative analysis is presented of a two-cell box beam under the combined action of vertical loads and a nonuniform temperature distribution.

  11. Photoelastic analysis of stress distribution in oral rehabilitation.

    PubMed

    Turcio, Karina Helga Leal; Goiato, Marcelo Coelho; Gennari Filho, Humberto; dos Santos, Daniela Micheline

    2009-03-01

    The purpose of this study was to present a literature review about photoelasticity, a laboratory method for evaluation of implants prosthesis behavior. Fixed or removable prostheses function as levers on supporting teeth, allowing forces to cause tooth movement if not carefully planned. Hence, during treatment planning, the dentist must be aware of the biomechanics involved and prevent movement of supporting teeth, decreasing lever-type forces generated by these prosthesis. Photoelastic analysis has great applicability in restorative dentistry as it allows prediction and minimization of biomechanical critical points through modifications in treatment planning. PMID:19305247

  12. Correlation Spectroscopy of Minor Species: Signal Purification and Distribution Analysis

    SciTech Connect

    Laurence, T A; Kwon, Y; Yin, E; Hollars, C; Camarero, J A; Barsky, D

    2006-06-21

    We are performing experiments that use fluorescence resonance energy transfer (FRET) and fluorescence correlation spectroscopy (FCS) to monitor the movement of an individual donor-labeled sliding clamp protein molecule along acceptor-labeled DNA. In addition to the FRET signal sought from the sliding clamp-DNA complexes, the detection channel for FRET contains undesirable signal from free sliding clamp and free DNA. When multiple fluorescent species contribute to a correlation signal, it is difficult or impossible to distinguish between contributions from individual species. As a remedy, we introduce "purified FCS" (PFCS), which uses single molecule burst analysis to select a species of interest and extract the correlation signal for further analysis. We show that by expanding the correlation region around a burst, the correlated signal is retained and the functional forms of FCS fitting equations remain valid. We demonstrate the use of PFCS in experiments with DNA sliding clamps. We also introduce "single molecule FCS", which obtains diffusion time estimates for each burst using expanded correlation regions. By monitoring the detachment of weakly-bound 30-mer DNA oligomers from a single-stranded DNA plasmid, we show that single molecule FCS can distinguish between bursts from species that differ by a factor of 5 in diffusion constant.

  13. Differentiating cerebral lymphomas and GBMs featuring luminance distribution analysis

    NASA Astrophysics Data System (ADS)

    Yamasaki, Toshihiko; Chen, Tsuhan; Hirai, Toshinori; Murakami, Ryuji

    2013-02-01

    Differentiating lymphomas and glioblastoma multiformes (GBMs) is important for proper treatment planning. A number of works have been proposed but there are still some problems. For example, many works depend on thresholding a single feature value, which is susceptible to noise. Non-typical cases that are not handled well by such simple thresholding are easily found. In other cases, experienced observers are required to extract the feature values or to provide some interaction with the system, which is costly. Even if experts are involved, inter-observer variance becomes another problem. In addition, most of the works use only one or a few slices because 3D tumor segmentation is difficult and time-consuming. In this paper, we propose a tumor classification system that analyzes the luminance distribution of the whole tumor region. The 3D MRIs are segmented within a few tens of seconds by using our fast 3D segmentation algorithm. Then, the luminance histogram of the whole tumor region is generated. The typical cases are classified by histogram range thresholding and apparent diffusion coefficient (ADC) thresholding. The non-typical cases are learned and classified by a support vector machine (SVM). Most of the processing elements are semi-automatic except for the ADC value extraction. Therefore, even novice users can use the system easily and get almost the same results as experts. The experiments were conducted using 40 MRI datasets (20 lymphomas and 20 GBMs) with non-typical cases. The classification accuracy of the proposed method was 91.1% without the ADC thresholding and 95.4% with the ADC thresholding. On the other hand, the baseline method, conventional ADC thresholding, yielded only 67.5% accuracy.
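
    The non-typical-case stage can be sketched with a generic SVM on histogram features; the feature vectors and labels below are synthetic placeholders, and scikit-learn is assumed rather than the authors' implementation:

    ```python
    # Train an SVM on luminance-histogram features of segmented tumors (sketch).
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(9)
    X = rng.normal(size=(40, 16))     # 40 cases, 16-bin histogram features (fake)
    y = np.repeat([0, 1], 20)         # 0 = lymphoma, 1 = GBM (synthetic labels)

    clf = SVC(kernel="rbf").fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```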

  14. Monsoonal differences and probability distribution of PM(10) concentration.

    PubMed

    Md Yusof, Noor Faizah Fitri; Ramli, Nor Azam; Yahaya, Ahmad Shukri; Sansuddin, Nurulilyana; Ghazali, Nurul Adyani; Al Madhoun, Wesam

    2010-04-01

    There are many factors that influence PM(10) concentration in the atmosphere. This paper looks at PM(10) concentration in relation to the wet season (northeast monsoon) and dry season (southwest monsoon) in Seberang Perai, Malaysia from the year 2000 to 2004. PM(10) is expected to peak during the southwest monsoon, as the weather during this season is dry, and this study confirmed that the highest PM(10) concentrations from 2000 to 2004 were recorded during this monsoon. Two probability distributions, Weibull and lognormal, were used to model the PM(10) concentration. The best model for prediction was selected based on performance indicators. The lognormal distribution represented the data better than the Weibull distribution for 2000, 2001, and 2002, whereas for 2003 and 2004 the Weibull distribution represented the data better than the lognormal. The proposed distributions were successfully used to estimate exceedances and predict the return periods of the following years. PMID:19365611
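
    The exceedance and return-period arithmetic used in such studies follows directly from the fitted distribution. A sketch with a synthetic sample; the 150 µg/m³ threshold is an assumed guideline value used here only for illustration:

    ```python
    # Exceedance probability P(X > x0) from a fitted CDF; return period = 1 / P.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    pm10 = rng.lognormal(mean=4.0, sigma=0.4, size=365)   # ug/m3, synthetic

    shape, loc, scale = stats.lognorm.fit(pm10, floc=0.0)
    exceedance = stats.lognorm.sf(150.0, shape, loc, scale)   # P(PM10 > 150)
    print(f"exceedance prob = {exceedance:.4f}, "
          f"return period = {1 / exceedance:.0f} days")
    ```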

  15. An empirical analysis of the distribution of overshoots in a stationary Gaussian stochastic process

    NASA Technical Reports Server (NTRS)

    Carter, M. C.; Madison, M. W.

    1973-01-01

    The frequency distribution of overshoots in a stationary Gaussian stochastic process is analyzed. The primary tools in this analysis are computer simulation and statistical estimation. Computer simulation is used to generate stationary Gaussian stochastic processes that have selected autocorrelation functions. An analysis of the simulation results reveals a frequency distribution for overshoots with a functional dependence on the mean and variance of the process. Statistical estimation is then used to estimate the mean and variance of a process. It is shown that, given an autocorrelation function and estimates of the mean and variance of the process, a frequency distribution for the number of overshoots can be estimated.
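
    The simulation step can be sketched by generating a stationary Gaussian process with a chosen autocorrelation (here a simple AR(1) form, an assumption for illustration) and counting level upcrossings:

    ```python
    # Simulate a stationary Gaussian AR(1) process (autocorrelation phi**lag)
    # and count overshoots of a level, i.e. upcrossings. Values illustrative.
    import numpy as np

    rng = np.random.default_rng(8)
    n, phi, level = 100_000, 0.9, 1.5
    x = np.empty(n)
    x[0] = rng.normal()
    for t in range(1, n):
        x[t] = phi * x[t - 1] + np.sqrt(1 - phi**2) * rng.normal()

    upcrossings = np.sum((x[:-1] < level) & (x[1:] >= level))
    print(f"{upcrossings} overshoots of level {level} in {n} samples")
    ```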

  16. Independent Orbiter Assessment (IOA): Analysis of the Electrical Power Distribution and Control Subsystem, Volume 2

    NASA Technical Reports Server (NTRS)

    Schmeckpeper, K. R.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Electrical Power Distribution and Control (EPD and C) hardware. The EPD and C hardware performs the functions of distributing, sensing, and controlling 28 volt DC power and of inverting, distributing, sensing, and controlling 117 volt 400 Hz AC power to all Orbiter subsystems from the three fuel cells in the Electrical Power Generation (EPG) subsystem. Volume 2 continues the presentation of IOA analysis worksheets and contains the potential critical items list.

  17. Extending the LWS Data Environment: Distributed Data Processing and Analysis

    NASA Technical Reports Server (NTRS)

    Narock, Thomas

    2005-01-01

    The final stages of this work saw changes to the original framework, as well as the completion and integration of several data processing services. Initially, it was thought that a peer-to-peer architecture was necessary to make this work possible. The peer-to-peer architecture provided many benefits, including the dynamic discovery of new services as they were continually added. A prototype example was built and, while it showed promise, a major disadvantage was that it was not easily integrated into the existing data environment. While the peer-to-peer system worked well for finding and accessing distributed data processing services, its use was limited by the difficulty of calling it from existing tools and services. After collaborations with members of the data community, it was determined that our data processing system was of high value and that a new interface should be pursued in order for the community to take full advantage of it. As such, the framework was modified from a peer-to-peer architecture to a more traditional web service approach. Following this change, multiple data processing services were added, including coordinate transformations and subsetting of data. Collaboration with the Virtual Heliospheric Observatory (VHO) assisted with integrating the new architecture into the VHO. This allows anyone using the VHO to search for data and then pass that data through our processing services prior to downloading it. As a second demonstration of the new system, a collaboration was established with the Collaborative Sun Earth Connector (CoSEC) group at Lockheed Martin. This group is working on a graphical user interface to the Virtual Observatories and data processing software. The intent is to provide a high-level, easy-to-use graphical interface that will allow access to the existing Virtual Observatories and data processing services from one convenient application. Working with the CoSEC group we provided access to our data

  18. Wavelet analysis of baryon acoustic structures in the galaxy distribution

    NASA Astrophysics Data System (ADS)

    Arnalte-Mur, P.; Labatie, A.; Clerc, N.; Martínez, V. J.; Starck, J.-L.; Lachièze-Rey, M.; Saar, E.; Paredes, S.

    2012-06-01

    Context. Baryon acoustic oscillations (BAO) are imprinted in the density field by acoustic waves travelling in the plasma of the early universe. Their fixed scale can be used as a standard ruler to study the geometry of the universe. Aims: The BAO have been previously detected using correlation functions and power spectra of the galaxy distribution. We present a new method to detect the real-space structures associated with BAO. These baryon acoustic structures are spherical shells of relatively small density contrast, surrounding high density central regions. Methods: We design a specific wavelet adapted to search for shells, and exploit the physics of the process by making use of two different mass tracers, introducing a specific statistic to detect the BAO features. We show the effect of the BAO signal in this new statistic when applied to the Λ - cold dark matter (ΛCDM) model, using an analytical approximation to the transfer function. We confirm the reliability and stability of our method by using cosmological N-body simulations from the MareNostrum Institut de Ciències de l'Espai (MICE). Results: We apply our method to the detection of BAO in a galaxy sample drawn from the Sloan Digital Sky Survey (SDSS). We use the "main" catalogue to trace the shells, and the luminous red galaxies (LRG) as tracers of the high density central regions. Using this new method, we detect, with a high significance, that the LRG in our sample are preferentially located close to the centres of shell-like structures in the density field, with characteristics similar to those expected from BAO. We show that stacking selected shells, we can find their characteristic density profile. Conclusions: We delineate a new feature of the cosmic web, the BAO shells. As these are real spatial structures, the BAO phenomenon can be studied in detail by examining those shells. Full Table 1 is only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc

  19. Probabilistic approach to identify sensitive parameter distributions in multimedia pathway analysis.

    SciTech Connect

    Kamboj, S.; Gnanapragasam, E.; LePoire, D.; Biwer, B. M.; Cheng, J.; Arnish, J.; Yu, C.; Chen, S. Y.; Mo, T.; Abu-Eid, R.; Thaggard, M.; Environmental Assessment; NRC

    2002-01-01

    Sensitive parameter distributions were identified with the use of probabilistic analysis in the RESRAD computer code. RESRAD is a multimedia pathway analysis code designed to evaluate radiological exposures resulting from radiological contamination in soil. The dose distribution was obtained by using a set of default parameter distribution/values. Most of the variations in the output dose distribution could be attributed to uncertainty in a small set of input parameters that could be considered as sensitive parameter distributions. The identification of the sensitive parameters is a first step in the prioritization of future research and information gathering. When site-specific parameter distribution/values are available for an actual site, the same process should be used with these site-specific data. Regression analysis used to identify sensitive parameters indicated that the dominant pathways depended on the radionuclide and source configurations. However, two parameter distributions were sensitive for many radionuclides: the external shielding factor when external exposure was the dominant pathway and the plant transfer factor when plant ingestion was the dominant pathway. No single correlation or regression coefficient can be used alone to identify sensitive parameters in all the cases. The coefficients are useful guides, but they have to be used in conjunction with other aids, such as scatter plots, and should undergo further analysis.
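
    The regression step described above can be sketched with standardized regression coefficients computed from Monte Carlo samples. The toy dose model, parameter names and input distributions below are invented for illustration; they are not RESRAD's actual equations or defaults.

        import numpy as np

        rng = np.random.default_rng(11)
        n = 2000

        # Hypothetical sampled inputs (names follow the abstract loosely)
        shielding = rng.uniform(0.2, 1.0, n)     # external shielding factor
        transfer = rng.lognormal(-1.0, 0.5, n)   # plant transfer factor
        other = rng.uniform(0.0, 1.0, n)         # an insensitive nuisance input

        # Toy dose model: dominated by the first two inputs plus noise
        dose = 2.0 * shielding + 5.0 * transfer + 0.1 * other + rng.normal(0, 0.2, n)

        # Standardized regression coefficients flag the sensitive inputs
        X = np.column_stack([shielding, transfer, other])
        Xs = (X - X.mean(0)) / X.std(0)
        ys = (dose - dose.mean()) / dose.std()
        beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
        print(dict(zip(["shielding", "transfer", "other"], beta.round(3))))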

  20. CARES/LIFE Ceramics Analysis and Reliability Evaluation of Structures Life Prediction Program

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Powers, Lynn M.; Janosik, Lesley A.; Gyekenyesi, John P.

    2003-01-01

    This manual describes the Ceramics Analysis and Reliability Evaluation of Structures Life Prediction (CARES/LIFE) computer program. The program calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. CARES/LIFE is an extension of the CARES (Ceramic Analysis and Reliability Evaluation of Structures) computer program. The program uses results from MSC/NASTRAN, ABAQUS, and ANSYS finite element analysis programs to evaluate component reliability due to inherent surface and/or volume type flaws. CARES/LIFE accounts for the phenomenon of subcritical crack growth (SCG) by utilizing the power law, Paris law, or Walker law. The two-parameter Weibull cumulative distribution function is used to characterize the variation in component strength. The effects of multiaxial stresses are modeled by using either the principle of independent action (PIA), the Weibull normal stress averaging method (NSA), or the Batdorf theory. Inert strength and fatigue parameters are estimated from rupture strength data of naturally flawed specimens loaded in static, dynamic, or cyclic fatigue. The probabilistic time-dependent theories used in CARES/LIFE, along with the input and output for CARES/LIFE, are described. Example problems to demonstrate various features of the program are also included.
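
    As a rough illustration of the two-parameter Weibull strength model and the principle of independent action (PIA) named above, the sketch below computes a fast-fracture failure probability from per-element principal stresses. All numbers (modulus, characteristic strength, stresses, volumes) are hypothetical, and this is not the CARES/LIFE input format.

        import numpy as np

        m = 10.0         # Weibull modulus (shape), hypothetical
        sigma_0 = 400.0  # characteristic strength per unit volume, MPa, hypothetical

        # Per-element principal stresses (MPa) and volumes (mm^3) from an FE model
        stresses = np.array([[250.0, 120.0, 30.0],
                             [310.0,  90.0, 10.0],
                             [180.0,  60.0,  5.0]])
        volumes = np.array([0.2, 0.1, 0.3])

        # PIA: each tensile principal stress contributes independently
        tensile = np.clip(stresses, 0.0, None)
        risk = (volumes[:, None] * (tensile / sigma_0) ** m).sum()

        # Two-parameter Weibull cumulative failure probability
        P_f = 1.0 - np.exp(-risk)
        print(f"fast-fracture failure probability: {P_f:.3e}")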

  1. Biomechanical Analysis of Force Distribution in Human Finger Extensor Mechanisms

    PubMed Central

    Hu, Dan; Ren, Lei; Howard, David; Zong, Changfu

    2014-01-01

    The complexities of the function and structure of human fingers have long been recognised. The in vivo forces in the human finger tendon network during different activities are critical information for clinical diagnosis, surgical treatment, prosthetic finger design, and biomimetic hand development. In this study, we propose a novel method for in vivo force estimation for the finger tendon network by combining a three-dimensional motion analysis technique and a novel biomechanical tendon network model. The extensor mechanism of a human index finger is represented by an interconnected tendinous network moving around the phalanx's dorsum. A novel analytical approach based on the “Principle of Minimum Total Potential Energy” is used to calculate the forces and deformations throughout the tendon network of the extensor mechanism when subjected to an external load and with the finger posture defined by measurement data. The predicted deformations and forces in the tendon network are in broad agreement with the results obtained by previous experimental in vitro studies. The proposed methodology provides a promising tool for investigating the biomechanical function of complex interconnected tendon networks in vivo. PMID:25126576
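
    A minimal sketch of the "Principle of Minimum Total Potential Energy" idea, applied to a toy two-branch elastic network rather than the paper's full tendon model; the geometry, stiffnesses and load below are invented.

        import numpy as np
        from scipy.optimize import minimize

        # Toy network fragment: two elastic branches joining at one free node,
        # loaded by an external force; all values are invented.
        anchors = np.array([[0.0, 0.0], [2.0, 0.0]])  # fixed insertion points
        k = np.array([50.0, 80.0])                    # branch stiffnesses, N/mm
        rest = np.array([1.0, 1.0])                   # slack lengths, mm
        F_ext = np.array([0.0, -5.0])                 # external load, N

        def total_potential_energy(p):
            lengths = np.linalg.norm(anchors - p, axis=1)
            stretch = np.clip(lengths - rest, 0.0, None)  # tension-only branches
            elastic = 0.5 * np.sum(k * stretch ** 2)
            return elastic - F_ext @ p                    # minus work of the load

        p_eq = minimize(total_potential_energy, x0=np.array([1.0, 0.5])).x
        tension = k * np.clip(np.linalg.norm(anchors - p_eq, axis=1) - rest, 0, None)
        print("equilibrium node:", p_eq.round(3), "branch tensions:", tension.round(2))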

  2. Distributional Benefit Analysis of a National Air Quality Rule

    PubMed Central

    Post, Ellen S.; Belova, Anna; Huang, Jin

    2011-01-01

    Under Executive Order 12898, the U.S. Environmental Protection Agency (EPA) must perform environmental justice (EJ) reviews of its rules and regulations. EJ analyses address the hypothesis that environmental disamenities are experienced disproportionately by poor and/or minority subgroups. Such analyses typically use communities as the unit of analysis. While community-based approaches make sense when considering where polluting sources locate, they are less appropriate for national air quality rules affecting many sources and pollutants that can travel thousands of miles. We compare exposures and health risks of EJ-identified individuals rather than communities to analyze EPA’s Heavy Duty Diesel (HDD) rule as an example national air quality rule. Air pollutant exposures are estimated within grid cells by air quality models; all individuals in the same grid cell are assigned the same exposure. Using an inequality index, we find that inequality within racial/ethnic subgroups far outweighs inequality between them. We find, moreover, that the HDD rule leaves between-subgroup inequality essentially unchanged. Changes in health risks depend also on subgroups’ baseline incidence rates, which differ across subgroups. Thus, health risk reductions may not follow the same pattern as reductions in exposure. These results are likely representative of other national air quality rules as well. PMID:21776207
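
    The within- versus between-subgroup comparison described above can be illustrated with the Theil index, one standard decomposable inequality index (the abstract does not say which index the authors used, so this choice is an assumption):

        import numpy as np

        def theil(x):
            """Theil T inequality index of a positive array."""
            r = np.asarray(x, dtype=float) / np.mean(x)
            return float(np.mean(r * np.log(r)))

        def theil_decomposition(values, groups):
            """Split Theil T into between-group and within-group terms."""
            values = np.asarray(values, dtype=float)
            mu, n = values.mean(), len(values)
            between = within = 0.0
            for g in np.unique(groups):
                v = values[groups == g]
                share, ratio = len(v) / n, v.mean() / mu
                between += share * ratio * np.log(ratio)
                within += share * ratio * theil(v)
            return between, within

        # Toy exposure values for two subgroups (purely illustrative numbers)
        exposure = np.array([3.1, 2.9, 3.4, 2.8, 3.0, 3.2, 2.7, 3.3])
        group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
        b, w = theil_decomposition(exposure, group)
        print(f"between = {b:.4f}, within = {w:.4f}, total = {b + w:.4f}")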

  3. Biomechanical analysis of force distribution in human finger extensor mechanisms.

    PubMed

    Hu, Dan; Ren, Lei; Howard, David; Zong, Changfu

    2014-01-01

    The complexities of the function and structure of human fingers have long been recognised. The in vivo forces in the human finger tendon network during different activities are critical information for clinical diagnosis, surgical treatment, prosthetic finger design, and biomimetic hand development. In this study, we propose a novel method for in vivo force estimation for the finger tendon network by combining a three-dimensional motion analysis technique and a novel biomechanical tendon network model. The extensor mechanism of a human index finger is represented by an interconnected tendinous network moving around the phalanx's dorsum. A novel analytical approach based on the "Principle of Minimum Total Potential Energy" is used to calculate the forces and deformations throughout the tendon network of the extensor mechanism when subjected to an external load and with the finger posture defined by measurement data. The predicted deformations and forces in the tendon network are in broad agreement with the results obtained by previous experimental in vitro studies. The proposed methodology provides a promising tool for investigating the biomechanical function of complex interconnected tendon networks in vivo. PMID:25126576

  4. Radar signal analysis of ballistic missile with micro-motion based on time-frequency distribution

    NASA Astrophysics Data System (ADS)

    Wang, Jianming; Liu, Lihua; Yu, Hua

    2015-12-01

    The micro-motion of ballistic missile targets induces micro-Doppler modulation on the radar return signal, which is a unique feature for warhead discrimination during flight. In order to extract the micro-Doppler features of ballistic missile targets, time-frequency analysis is employed to process the micro-Doppler modulated time-varying radar signal. The images of the time-frequency distribution (TFD) reveal the micro-Doppler modulation characteristic very well. However, there are many existing time-frequency analysis methods for generating time-frequency distribution images, including the short-time Fourier transform (STFT), Wigner distribution (WD) and Cohen class distributions. Against the background of ballistic missile defence, the paper aims to work out an effective time-frequency analysis method for discriminating ballistic missile warheads from decoys.
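
    As a sketch of one of the TFD options named above, the following computes an STFT spectrogram of a toy sinusoidally micro-Doppler-modulated return; the sampling rate, micro-motion rate and modulation depth are made up.

        import numpy as np
        from scipy.signal import spectrogram

        # Toy return from a scatterer with sinusoidal micro-Doppler modulation
        fs = 2000.0                       # sampling rate, Hz
        t = np.arange(0, 1.0, 1.0 / fs)
        f_m, beta = 4.0, 300.0            # micro-motion rate (Hz), peak Doppler (Hz)
        x = np.exp(1j * (beta / f_m) * np.sin(2 * np.pi * f_m * t))

        # Short-time Fourier transform, one of the TFD choices named above
        f, tau, Sxx = spectrogram(x, fs=fs, nperseg=128, noverlap=112,
                                  return_onesided=False)
        print(Sxx.shape)  # |STFT|^2 image showing the sinusoidal micro-Doppler track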

  5. Numerical analysis of atomic density distribution in arc driven negative ion sources

    SciTech Connect

    Yamamoto, T.; Shibata, T.; Hatayama, A.; Kashiwagi, M.; Hanada, M.; Sawada, K.

    2014-02-15

    The purpose of this study is to calculate the atomic (H⁰) density distribution in the JAEA 10 ampere negative ion source. A collisional radiative model is developed for the calculation of the H⁰ density distribution. The non-equilibrium feature of the electron energy distribution function (EEDF), which mainly determines the H⁰ production rate, is included by substituting the EEDF calculated from a 3D electron transport analysis. In this paper, the H⁰ production rate, the ionization rate, and the density distribution in the source chamber are calculated. In the region where high energy electrons exist, H⁰ production and ionization are enhanced. The calculated H⁰ density distribution without the effect of H⁰ transport is relatively small in the upper region. In the next step, this effect should be taken into account to obtain a more realistic H⁰ distribution.

  6. Advancing Collaborative Climate Studies through Globally Distributed Geospatial Analysis

    NASA Astrophysics Data System (ADS)

    Singh, R.; Percivall, G.

    2009-12-01

    Infrastructure and the broader GEOSS architecture. Of specific interest to this session is the work on geospatial workflows and geo-processing and data discovery and access. CCIP demonstrates standards-based interoperability between geospatial applications in the service of Climate Change analysis. CCIP is planned to be a yearly exercise. It consists of a network of online data services (WCS, WFS, SOS), analysis services (WPS, WCPS, WMS), and clients that exercise those services. In 2009, CCIP focuses on Australia, and the initial application of existing OGC services to climate studies. The results of the 2009 CCIP will serve as requirements for more complex geo-processing services to be developed for CCIP 2010. The benefits of CCIP include accelerating the implementation of the GCOS, and building confidence that implementations using multi-vendor interoperable technologies can help resolve vexing climate change questions. Acronyms: AIP-2, Architecture Implementation Pilot, Phase 2; CCIP, Climate Challenge Integration Plugfest; GEO, Group on Earth Observations; GEOSS, Global Earth Observing System of Systems; GCOS, Global Climate Observing System; OGC, Open Geospatial Consortium; SOS, Sensor Observation Service; WCS, Web Coverage Service; WCPS, Web Coverage Processing Service; WFS, Web Feature Service; WMS, Web Mapping Service.

  7. Hyperdimensional analysis of amino acid pair distributions in proteins.

    PubMed

    Henriksen, Svend B; Mortensen, Rasmus J; Geertz-Hansen, Henrik M; Neves-Petersen, Maria Teresa; Arnason, Omar; Söring, Jón; Petersen, Steffen B

    2011-01-01

    Our manuscript presents a novel approach to protein structure analysis. We have organized an 8-dimensional data cube with protein 3D-structural information from 8706 high-resolution non-redundant protein-chains with the aim of identifying packing rules at the amino acid pair level. The cube contains information about amino acid type, solvent accessibility, spatial and sequence distance, secondary structure and sequence length. We are able to pose structural queries to the data cube using the program ProPack. The response is a 1, 2 or 3D graph. Whereas the response is of a statistical nature, the user can obtain an instant list of all PDB-structures where such a pair is found. The user may select a particular structure, which is displayed highlighting the pair in question. The user may pose millions of different queries and receive the answer to each in a few seconds. In order to demonstrate the capabilities of the data cube as well as the programs, we have selected well known structural features, disulphide bridges and salt bridges, and illustrate how the queries are posed and how answers are given. Motifs involving cysteines such as disulphide bridges, zinc-fingers and iron-sulfur clusters are clearly identified and differentiated. ProPack also reveals that whereas pairs of Lys residues virtually never appear in close spatial proximity, pairs of Arg are abundant and appear at close spatial distance, contrasting the belief that electrostatic repulsion would prevent this juxtaposition and that Arg-Lys is perceived as a conservative mutation. The presented programs can find and visualize novel packing preferences in protein structures, allowing the user to unravel correlations between pairs of amino acids. The new tools allow the user to view statistical information and instantly visualize the structures that underpin it, which is far from trivial with most other software tools for protein structure analysis. PMID:22174733

  8. SCARE - A postprocessor program to MSC/NASTRAN for reliability analysis of structural ceramic components

    NASA Technical Reports Server (NTRS)

    Gyekenyesi, J. P.

    1986-01-01

    A computer program was developed for calculating the statistical fast fracture reliability and failure probability of ceramic components. The program includes the two-parameter Weibull material fracture strength distribution model, using the principle of independent action for polyaxial stress states and Batdorf's shear-sensitive as well as shear-insensitive crack theories, all for volume distributed flaws in macroscopically isotropic solids. Both penny-shaped cracks and Griffith cracks are included in the Batdorf shear-sensitive crack response calculations, using Griffith's maximum tensile stress or critical coplanar strain energy release rate criteria to predict mixed mode fracture. Weibull material parameters can also be calculated from modulus of rupture bar tests, using the least squares method with known specimen geometry and fracture data. The reliability prediction analysis uses MSC/NASTRAN stress, temperature and volume output, obtained from the use of three-dimensional, quadratic, isoparametric, or axisymmetric finite elements. The statistical fast fracture theories employed, along with selected input and output formats and options, are summarized. An example problem to demonstrate various features of the program is included.
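
    The "least squares from modulus of rupture bar tests" step can be sketched as a standard Weibull probability-plot fit; the strength values below are hypothetical.

        import numpy as np

        # Hypothetical modulus-of-rupture strengths from bar tests, MPa
        strengths = np.sort(np.array([287., 310., 325., 339., 352., 368., 381., 402.]))
        n = strengths.size

        # Median-rank estimate of the failure probability of each ranked specimen
        F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)

        # Linearized two-parameter Weibull: ln(-ln(1-F)) = m*ln(sigma) - m*ln(sigma_0)
        m, c = np.polyfit(np.log(strengths), np.log(-np.log(1.0 - F)), 1)
        sigma_0 = np.exp(-c / m)
        print(f"Weibull modulus m = {m:.2f}, characteristic strength = {sigma_0:.1f} MPa")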

  9. Progress in Using the Generalized Wigner Distribution in the Analysis of Terrace-Width Distributions of Vicinal Surfaces

    NASA Astrophysics Data System (ADS)

    Cohen, S. D.; Richards, Howard L.; Einstein, T. L.

    2000-03-01

    The so-called generalized Wigner distribution (GWD) may provide at least as good a description of terrace width distributions (TWDs) on vicinal surfaces as the standard Gaussian fit (T.L. Einstein and O. Pierre-Louis, Surface Sci. 424, L299 (1999)). It works well for weak elastic repulsion strengths A between steps (where the latter fails), as illustrated explicitly (S.D. Cohen, H.L. Richards, TLE, and M. Giesen, cond-mat/9911319) for vicinal Pt(110) (K. Swamy, E. Bertel, and I. Vilfan, Surface Sci. 425, L369 (1999)). Applications to vicinal copper surfaces confirm the general viability of the new analysis procedure (M. Giesen and T.L. Einstein, submitted to Surface Sci.). For troublesome data, we can treat the GWD as a two-parameter fit that allows the terrace widths to be scaled by an optimal effective mean width. With Monte Carlo simulations we show that for physical values of A, the GWD provides a better overall estimate than the Gaussian models. We quantify how a GWD approaches a Gaussian for large A and present a convenient but accurate new expression relating the variance of the TWD to A. We also mention how discreteness of terrace widths impacts the standard continuum analysis.

  10. Regression Analysis of Physician Distribution to Identify Areas of Need: Some Preliminary Findings.

    ERIC Educational Resources Information Center

    Morgan, Bruce B.; And Others

    A regression analysis was conducted of factors that help to explain the variance in physician distribution and which identify those factors that influence the maldistribution of physicians. Models were developed for different geographic areas to determine the most appropriate unit of analysis for the Western Missouri Area Health Education Center…

  11. Log Normal Distribution of Cellular Uptake of Radioactivity: Statistical Analysis of Alpha Particle Track Autoradiography

    PubMed Central

    Neti, Prasad V.S.V.; Howell, Roger W.

    2008-01-01

    Recently, the distribution of radioactivity among a population of cells labeled with 210Po was shown to be well described by a log normal distribution function (J Nucl Med 47, 6 (2006) 1049-1058) with the aid of an autoradiographic approach. To ascertain the influence of Poisson statistics on the interpretation of the autoradiographic data, the present work reports a detailed statistical analysis of these data. Methods: The measured distributions of alpha particle tracks per cell were subjected to statistical tests with Poisson (P), log normal (LN), and Poisson-log normal (P-LN) models. Results: The LN distribution function best describes the distribution of radioactivity among cell populations exposed to 0.52 and 3.8 kBq/mL 210Po-citrate. When cells were exposed to 67 kBq/mL, the P-LN distribution function gave a better fit; however, the underlying activity distribution remained log normal. Conclusions: The present analysis generally provides further support for the use of LN distributions to describe the cellular uptake of radioactivity. Care should be exercised when analyzing autoradiographic data on activity distributions to ensure that Poisson processes do not distort the underlying LN distribution. PMID:16741316
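
    A minimal sketch of the kind of Poisson-versus-lognormal comparison described, on synthetic counts (not the paper's data); note the lognormal fit treats the discrete counts as continuous.

        import numpy as np
        from scipy import stats

        # Synthetic alpha-track counts per cell (illustrative, not the paper's data)
        rng = np.random.default_rng(1)
        counts = np.round(rng.lognormal(1.5, 0.6, 500)).astype(int)
        counts = counts[counts > 0]

        # Poisson model: the MLE of the rate is the sample mean
        ll_poisson = stats.poisson.logpmf(counts, counts.mean()).sum()

        # Lognormal model, treating the discrete counts as continuous
        shape, loc, scale = stats.lognorm.fit(counts, floc=0)
        ll_lognorm = stats.lognorm.logpdf(counts, shape, loc, scale).sum()

        print(f"log-likelihood  Poisson: {ll_poisson:.1f}  lognormal: {ll_lognorm:.1f}")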

  12. Particle size distributions by transmission electron microscopy: an interlaboratory comparison case study

    NASA Astrophysics Data System (ADS)

    Rice, Stephen B.; Chan, Christopher; Brown, Scott C.; Eschbach, Peter; Han, Li; Ensor, David S.; Stefaniak, Aleksandr B.; Bonevich, John; Vladár, András E.; Hight Walker, Angela R.; Zheng, Jiwen; Starnes, Catherine; Stromberg, Arnold; Ye, Jia; Grulke, Eric A.

    2013-12-01

    This paper reports an interlaboratory comparison that evaluated a protocol for measuring and analysing the particle size distribution of discrete, metallic, spheroidal nanoparticles using transmission electron microscopy (TEM). The study was focused on automated image capture and automated particle analysis. NIST RM8012 gold nanoparticles (30 nm nominal diameter) were measured for area-equivalent diameter distributions by eight laboratories. Statistical analysis was used to (1) assess the data quality without using size distribution reference models, (2) determine reference model parameters for different size distribution reference models and non-linear regression fitting methods and (3) assess the measurement uncertainty of a size distribution parameter by using its coefficient of variation. The interlaboratory area-equivalent diameter mean, 27.6 nm ± 2.4 nm (computed based on a normal distribution), was quite similar to the area-equivalent diameter, 27.6 nm, assigned to NIST RM8012. The lognormal reference model was the preferred choice for these particle size distributions as, for all laboratories, its parameters had lower relative standard errors (RSEs) than the other size distribution reference models tested (normal, Weibull and Rosin-Rammler-Bennett). The RSEs for the fitted standard deviations were two orders of magnitude higher than those for the fitted means, suggesting that most of the parameter estimate errors were associated with estimating the breadth of the distributions. The coefficients of variation for the interlaboratory statistics also confirmed the lognormal reference model as the preferred choice. From quasi-linear plots, the typical range for good fits between the model and cumulative number-based distributions was 1.9 fitted standard deviations less than the mean to 2.3 fitted standard deviations above the mean. Automated image capture, automated particle analysis and statistical evaluation of the data and fitting coefficients provide a

  13. Particle size distributions by transmission electron microscopy: an interlaboratory comparison case study

    PubMed Central

    Rice, Stephen B; Chan, Christopher; Brown, Scott C; Eschbach, Peter; Han, Li; Ensor, David S; Stefaniak, Aleksandr B; Bonevich, John; Vladár, András E; Hight Walker, Angela R; Zheng, Jiwen; Starnes, Catherine; Stromberg, Arnold; Ye, Jia; Grulke, Eric A

    2015-01-01

    This paper reports an interlaboratory comparison that evaluated a protocol for measuring and analysing the particle size distribution of discrete, metallic, spheroidal nanoparticles using transmission electron microscopy (TEM). The study was focused on automated image capture and automated particle analysis. NIST RM8012 gold nanoparticles (30 nm nominal diameter) were measured for area-equivalent diameter distributions by eight laboratories. Statistical analysis was used to (1) assess the data quality without using size distribution reference models, (2) determine reference model parameters for different size distribution reference models and non-linear regression fitting methods and (3) assess the measurement uncertainty of a size distribution parameter by using its coefficient of variation. The interlaboratory area-equivalent diameter mean, 27.6 nm ± 2.4 nm (computed based on a normal distribution), was quite similar to the area-equivalent diameter, 27.6 nm, assigned to NIST RM8012. The lognormal reference model was the preferred choice for these particle size distributions as, for all laboratories, its parameters had lower relative standard errors (RSEs) than the other size distribution reference models tested (normal, Weibull and Rosin–Rammler–Bennett). The RSEs for the fitted standard deviations were two orders of magnitude higher than those for the fitted means, suggesting that most of the parameter estimate errors were associated with estimating the breadth of the distributions. The coefficients of variation for the interlaboratory statistics also confirmed the lognormal reference model as the preferred choice. From quasi-linear plots, the typical range for good fits between the model and cumulative number-based distributions was 1.9 fitted standard deviations less than the mean to 2.3 fitted standard deviations above the mean. Automated image capture, automated particle analysis and statistical evaluation of the data and fitting coefficients provide a

  14. Strength distribution of reinforcing fibers in a Nicalon fiber/chemically vapor infiltrated silicon carbide matrix composite

    NASA Technical Reports Server (NTRS)

    Eckel, Andrew J.; Bradt, Richard C.

    1989-01-01

    The strength distribution of fibers within a two-dimensional laminate ceramic/ceramic composite consisting of an eight harness satin weave of Nicalon continuous fiber within a chemically vapor infiltrated SiC matrix was determined from analysis of the fracture mirrors of the fibers. Comparison of the fiber strengths and the Weibull moduli with those for Nicalon fibers prior to incorporation into composites suggests that fiber damage may occur either during the weaving or during another stage of the composite manufacture. Observations also indicate that it is the higher-strength fibers which experience the greatest extent of fiber pullout and thus make a larger contribution to the overall composite toughness than do the weaker fibers.

  15. Analysis of phosphorescence in heterogeneous systems using distributions of quencher concentration.

    PubMed Central

    Golub, A S; Popel, A S; Zheng, L; Pittman, R N

    1997-01-01

    A continuous distribution approach, instead of the traditional mono- and multiexponential analysis, for determining quencher concentration in a heterogeneous system has been developed. A mathematical model of phosphorescence decay inside a volume with homogeneous concentration of phosphor and heterogeneous concentration of quencher was formulated to obtain pulse-response fitting functions for four different distributions of quencher concentration: rectangular, normal (Gaussian), gamma, and multimodal. The analysis was applied to parameter estimates of a heterogeneous distribution of oxygen tension (PO2) within a volume. Simulated phosphorescence decay data were randomly generated for different distributions and heterogeneity of PO2 inside the excitation/emission volume, consisting of 200 domains, and then fit with equations developed for the four models. Analysis using a monoexponential fit yielded a systematic error (underestimate) in mean PO2 that increased with the degree of heterogeneity. The fitting procedures based on the continuous distribution approach returned more accurate values for parameters of the generated PO2 distribution than did the monoexponential fit. The parameters of the fit (M = mean; sigma = standard deviation) were investigated as a function of signal-to-noise ratio (SNR = maximum signal amplitude/peak-to-peak noise). The best-fit parameter values were stable when SNR ≥ 20. All four fitting models returned accurate values of M and sigma for different PO2 distributions. The ability of our procedures to resolve two different heterogeneous compartments was also demonstrated using a bimodal fitting model. An approximate scheme was formulated to allow calculation of the first moments of a spatial distribution of quencher without specifying the distribution. In addition, a procedure for the recovery of a histogram, representing the quencher concentration distribution, was developed and successfully tested. PMID:9199808

  16. Integral-moment analysis of the BATSE gamma-ray burst intensity distribution

    NASA Technical Reports Server (NTRS)

    Horack, John M.; Emslie, A. Gordon

    1994-01-01

    We have applied the technique of integral-moment analysis to the intensity distribution of the first 260 gamma-ray bursts observed by the Burst and Transient Source Experiment (BATSE) on the Compton Gamma Ray Observatory. This technique provides direct measurement of properties such as the mean, variance, and skewness of the convolved luminosity-number density distribution, as well as associated uncertainties. Using this method, one obtains insight into the nature of the source distributions unavailable through computation of traditional single parameters such as V/V_max. If the luminosity function of the gamma-ray bursts is strongly peaked, giving bursts only a narrow range of luminosities, these results are then direct probes of the radial distribution of sources, regardless of whether the bursts are a local phenomenon, are distributed in a galactic halo, or are at cosmological distances. Accordingly, an integral-moment analysis of the intensity distribution of the gamma-ray bursts provides for the most complete analytic description of the source distribution available from the data, and offers the most comprehensive test of the compatibility of a given hypothesized distribution with observation.

  17. Spherical Harmonic Analysis of Particle Velocity Distribution Function: Comparison of Moments and Anisotropies using Cluster Data

    NASA Technical Reports Server (NTRS)

    Gurgiolo, Chris; Vinas, Adolfo F.

    2009-01-01

    This paper presents a spherical harmonic analysis of the plasma velocity distribution function using high-angular, energy, and time resolution Cluster data obtained from the PEACE spectrometer instrument to demonstrate how this analysis models the particle distribution function and its moments and anisotropies. The results show that spherical harmonic analysis produced a robust physical representation model of the velocity distribution function, resolving the main features of the measured distributions. From the spherical harmonic analysis, a minimum set of nine spectral coefficients was obtained from which the moment (up to the heat flux), anisotropy, and asymmetry calculations of the velocity distribution function were obtained. The spherical harmonic method provides a potentially effective "compression" technique that can be easily carried out onboard a spacecraft to determine the moments and anisotropies of the particle velocity distribution function for any species. These calculations were implemented using three different approaches, namely, the standard traditional integration, the spherical harmonic (SPH) spectral coefficients integration, and the singular value decomposition (SVD) on the spherical harmonic methods. A comparison among the various methods shows that both SPH and SVD approaches provide remarkable agreement with the standard moment integration method.

  18. Just fracking: a distributive environmental justice analysis of unconventional gas development in Pennsylvania, USA

    NASA Astrophysics Data System (ADS)

    Clough, Emily; Bell, Derek

    2016-02-01

    This letter presents a distributive environmental justice analysis of unconventional gas development in the area of Pennsylvania lying over the Marcellus Shale, the largest shale gas formation in play in the United States. The extraction of shale gas using unconventional wells, which are hydraulically fractured (fracking), has increased dramatically since 2005. As the number of wells has grown, so have concerns about the potential public health effects on nearby communities. These concerns make shale gas development an environmental justice issue. This letter examines whether the hazards associated with proximity to wells and the economic benefits of shale gas production are fairly distributed. We distinguish two types of distributive environmental justice: traditional and benefit sharing. We ask the traditional question: are there a disproportionate number of minority or low-income residents in areas near to unconventional wells in Pennsylvania? However, we extend this analysis in two ways: we examine income distribution and level of education; and we compare before and after shale gas development. This contributes to discussions of benefit sharing by showing how the income distribution of the population has changed. We use a binary dasymetric technique to remap the data from the 2000 US Census and the 2009-2013 American Communities Survey and combine that data with a buffer containment analysis of unconventional wells to compare the characteristics of the population living nearer to unconventional wells with those further away before and after shale gas development. Our analysis indicates that there is no evidence of traditional distributive environmental injustice: there is not a disproportionate number of minority or low-income residents in areas near to unconventional wells. However, our analysis is consistent with the claim that there is benefit sharing distributive environmental injustice: the income distribution of the population nearer to shale gas wells

  19. Monte Carlo models and analysis of galactic disk gamma-ray burst distributions

    NASA Technical Reports Server (NTRS)

    Hakkila, Jon

    1989-01-01

    Gamma-ray bursts are transient astronomical phenomena which have no quiescent counterparts in any region of the electromagnetic spectrum. Although temporal and spectral properties indicate that these events are likely energetic, their unknown spatial distribution complicates astrophysical interpretation. Monte Carlo samples of gamma-ray burst sources are created which belong to Galactic disk populations. Spatial analysis techniques are used to compare these samples to the observed distribution. From this, both quantitative and qualitative conclusions are drawn concerning allowed luminosity and spatial distributions of the actual sample. Although the Burst and Transient Source Experiment (BATSE) experiment on Gamma Ray Observatory (GRO) will significantly improve knowledge of the gamma-ray burst source spatial characteristics within only a few months of launch, the analysis techniques described herein will not be superseded. Rather, they may be used with BATSE results to obtain detailed information about both the luminosity and spatial distributions of the sources.

  20. [Effect of different distribution of components concentration on the accuracy of quantitative spectral analysis].

    PubMed

    Li, Gang; Zhao, Zhe; Wang, Hui-Quan; Lin, Ling; Zhang, Bao-Ju; Wu, Xiao-Rong

    2012-07-01

    In order to discuss the effect of different distributions of component concentrations on the accuracy of quantitative spectral analysis, ideal absorption spectra of samples with three components were established according to the Lambert-Beer law. Gaussian noise was added to the spectra. Calibration and prediction models were built by partial least squares regression to reflect the unequal modeling and prediction results between different distributions of components. Results show that, in the case of pure linear absorption, the accuracy of the model is related to the distribution of component concentrations. For both the component of interest and the non-tested components, a broader and more uniform concentration coverage in the calibration set is essential for establishing a universal model with satisfactory accuracy. This research supplies theoretical guidance for a reasonable choice of samples with a suitable concentration distribution, which enhances the quality of the model and reduces the prediction error on the prediction set. PMID:23016350
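
    A small sketch of the experiment's structure under stated assumptions: synthetic three-component Beer-Lambert spectra with Gaussian noise, fitted by partial least squares. The band positions, noise level and concentration ranges are invented.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.metrics import mean_squared_error

        rng = np.random.default_rng(0)
        wl = np.linspace(400, 700, 120)               # wavelength grid, nm

        # Invented pure-component absorptivity spectra (Gaussian bands)
        def band(center, width):
            return np.exp(-0.5 * ((wl - center) / width) ** 2)
        eps = np.vstack([band(450, 25), band(550, 30), band(640, 20)])

        # Beer-Lambert mixtures with additive Gaussian noise; a wide, uniform
        # concentration coverage is used for the calibration set
        C_cal = rng.uniform(0.0, 1.0, (60, 3))
        A_cal = C_cal @ eps + rng.normal(0, 0.002, (60, wl.size))
        C_tst = rng.uniform(0.0, 1.0, (20, 3))
        A_tst = C_tst @ eps + rng.normal(0, 0.002, (20, wl.size))

        pls = PLSRegression(n_components=3).fit(A_cal, C_cal)
        rmse = mean_squared_error(C_tst, pls.predict(A_tst)) ** 0.5
        print(f"prediction RMSE: {rmse:.4f}")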

  1. Evaluating Domestic Hot Water Distribution System Options with Validated Analysis Models

    SciTech Connect

    Weitzel, E.; Hoeschele, E.

    2014-09-01

    A developing body of work is forming that collects data on domestic hot water consumption, water use behaviors, and energy efficiency of various distribution systems. A full distribution system model, developed in the Transient System Simulation Tool (TRNSYS), has been validated using field monitoring data and then exercised in a number of climates to understand climate impact on performance. In this study, the Building America team built upon previous analysis modeling work to evaluate differing distribution systems and the sensitivities of water heating energy and water use efficiency to variations in climate, load, distribution type, insulation, and compact plumbing practices. Overall, 124 different TRNSYS models were simulated. The results of this work are useful in informing future development of water heating best practices guides as well as more accurate (and simulation-time efficient) distribution models for annual whole-house simulation programs.

  2. Comparison of photon correlation spectroscopy with photosedimentation analysis for the determination of aqueous colloid size distributions

    USGS Publications Warehouse

    Rees, T.F.

    1990-01-01

    Photon correlation spectroscopy (PCS) utilizes the Doppler frequency shift of photons scattered off particles undergoing Brownian motion to determine the size of colloids suspended in water. Photosedimentation analysis (PSA) measures the time-dependent change in optical density of a suspension of colloidal particles undergoing centrifugation. A description of both techniques, important underlying assumptions, and limitations are given. Results for a series of river water samples show that the colloid-size distribution means are statistically identical as determined by both techniques. This also is true of the mass median diameter (MMD), even though MMD values determined by PSA are consistently smaller than those determined by PCS. Because of this small negative bias, the skew parameters for the distributions are generally smaller for the PCS-determined distributions than for the PSA-determined distributions. Smaller polydispersity indices for the distributions are also determined by PCS. -from Author
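
    PCS sizing rests on the Stokes-Einstein relation d = kT/(3πηD); a worked example with a hypothetical measured diffusion coefficient:

        import math

        k_B = 1.380649e-23      # Boltzmann constant, J/K
        T = 298.15              # temperature, K
        eta = 0.89e-3           # viscosity of water at 25 C, Pa*s
        D = 4.0e-12             # diffusion coefficient from PCS, m^2/s (hypothetical)

        d = k_B * T / (3.0 * math.pi * eta * D)
        print(f"hydrodynamic diameter = {d * 1e9:.0f} nm")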

  3. Powerlaw: a Python package for analysis of heavy-tailed distributions.

    PubMed

    Alstott, Jeff; Bullmore, Ed; Plenz, Dietmar

    2014-01-01

    Power laws are theoretically interesting probability distributions that are also frequently used to describe empirical data. In recent years, effective statistical methods for fitting power laws have been developed, but appropriate use of these techniques requires significant programming and statistical insight. In order to greatly decrease the barriers to using good statistical methods for fitting power law distributions, we developed the powerlaw Python package. This software package provides easy commands for basic fitting and statistical analysis of distributions. Notably, it also seeks to support a variety of user needs by being exhaustive in the options available to the user. The source code is publicly available and easily extensible. PMID:24489671
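
    Basic usage follows the package's documented pattern; the synthetic data are illustrative:

        import numpy as np
        import powerlaw  # the package described in the abstract

        # Synthetic heavy-tailed sample (illustrative)
        data = np.random.default_rng(42).pareto(2.5, 10000) + 1.0

        fit = powerlaw.Fit(data)          # estimates xmin and the exponent alpha
        print(fit.power_law.alpha, fit.power_law.xmin)

        # Likelihood-ratio comparison against a lognormal alternative
        R, p = fit.distribution_compare('power_law', 'lognormal')
        print(R, p)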

  4. Analysis of the 3D distribution of stacked self-assembled quantum dots by electron tomography

    PubMed Central

    2012-01-01

    The 3D distribution of self-assembled stacked quantum dots (QDs) is a key parameter to obtain the highest performance in a variety of optoelectronic devices. In this work, we have measured this distribution in 3D using a combined procedure of needle-shaped specimen preparation and electron tomography. We show that conventional 2D measurements of the distribution of QDs are not reliable, and only 3D analysis allows an accurate correlation between the growth design and the structural characteristics. PMID:23249477

  5. powerlaw: A Python Package for Analysis of Heavy-Tailed Distributions

    PubMed Central

    Alstott, Jeff; Bullmore, Ed; Plenz, Dietmar

    2014-01-01

    Power laws are theoretically interesting probability distributions that are also frequently used to describe empirical data. In recent years, effective statistical methods for fitting power laws have been developed, but appropriate use of these techniques requires significant programming and statistical insight. In order to greatly decrease the barriers to using good statistical methods for fitting power law distributions, we developed the powerlaw Python package. This software package provides easy commands for basic fitting and statistical analysis of distributions. Notably, it also seeks to support a variety of user needs by being exhaustive in the options available to the user. The source code is publicly available and easily extensible. PMID:24489671

  6. Theoretical analysis of output performance of GG-IAG fiber laser by multipoint distributed side pump

    NASA Astrophysics Data System (ADS)

    Zhu, Yonggang; Duan, Kailiang; Shao, Hongmin; Zhao, Baoyin; Zhang, Entao; Zhao, Wei

    2012-11-01

    Based on steady-state rate equations (REs) and a heat dissipation model considering both convective and radiative heat transfer, the output performance and temperature distribution of Yb3+-doped gain guided and index antiguided (GG-IAG) fiber lasers with multipoint distributed side pumping are analyzed by numerically solving the REs. The results show that high output power and an even temperature distribution can be obtained by increasing the number of pump points and lowering the losses at those points; multipoint side pumping is an optimal method for obtaining compact high power GG-IAG fiber lasers. The numerical analysis provides some insights for the construction of high power GG-IAG fiber lasers.

  7. Analysis and synthesis of distributed-lumped-active networks by digital computer

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The use of digital computational techniques in the analysis and synthesis of DLA (distributed lumped active) networks is considered. This class of networks consists of three distinct types of elements, namely, distributed elements (modeled by partial differential equations), lumped elements (modeled by algebraic relations and ordinary differential equations), and active elements (modeled by algebraic relations). Such a characterization is applicable to a broad class of circuits, especially including those usually referred to as linear integrated circuits, since the fabrication techniques for such circuits readily produce elements which may be modeled as distributed, as well as the more conventional lumped and active ones.

  8. Theoretical Analysis of Orientation Distribution Function Reconstruction of Textured Polycrystal by Parametric X-rays

    NASA Astrophysics Data System (ADS)

    Lobach, I.; Benediktovitch, A.

    2016-07-01

    The possibility of quantitative texture analysis by means of parametric x-ray radiation (PXR) from relativistic electrons with energies above 50 MeV in a polycrystal is considered theoretically. In the case of a rather smooth orientation distribution function (ODF) and a large detector (θD ≫ 1/γ, where γ is the electron Lorentz factor), a universal relation between the ODF and the intensity distribution is presented. It is shown that if the ODF is independent of one of the Euler angles, then the texture is fully determined by the angular intensity distribution. Application of the method to simulated data shows the stability of the proposed algorithm.

  9. SCARE: A post-processor program to MSC/NASTRAN for the reliability analysis of structural ceramic components

    NASA Technical Reports Server (NTRS)

    Gyekenyesi, J. P.

    1985-01-01

    A computer program was developed for calculating the statistical fast fracture reliability and failure probability of ceramic components. The program includes the two-parameter Weibull material fracture strength distribution model, using the principle of independent action for polyaxial stress states and Batdorf's shear-sensitive as well as shear-insensitive crack theories, all for volume distributed flaws in macroscopically isotropic solids. Both penny-shaped cracks and Griffith cracks are included in the Batdorf shear-sensitive crack response calculations, using Griffith's maximum tensile stress or critical coplanar strain energy release rate criteria to predict mixed mode fracture. Weibull material parameters can also be calculated from modulus of rupture bar tests, using the least squares method with known specimen geometry and fracture data. The reliability prediction analysis uses MSC/NASTRAN stress, temperature and volume output, obtained from the use of three-dimensional, quadratic, isoparametric, or axisymmetric finite elements. The statistical fast fracture theories employed, along with selected input and output formats and options, are summarized. An example problem to demonstrate various features of the program is included.

  10. Time-Score Analysis in Criterion-Referenced Tests. Final Report.

    ERIC Educational Resources Information Center

    Tatsuoka, Kikumi K.; Tatsuoka, Maurice M.

    The family of Weibull distributions was investigated as a model for the distributions of response times for items in computer-based criterion-referenced tests. The fits of these distributions were, with a few exceptions, good to excellent according to the Kolmogorov-Smirnov test. For a few relatively simple items, the two-parameter gamma…
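
    A sketch of fitting a Weibull distribution to response times and checking it with a Kolmogorov-Smirnov test, on synthetic data; note that using fitted parameters in the KS test makes the p-value optimistic.

        import numpy as np
        from scipy import stats

        # Hypothetical item response times in seconds (illustrative only)
        times = stats.weibull_min.rvs(1.8, scale=12.0, size=200,
                                      random_state=np.random.default_rng(7))

        # Maximum-likelihood two-parameter Weibull fit (location fixed at zero)
        shape, loc, scale = stats.weibull_min.fit(times, floc=0)

        # Kolmogorov-Smirnov test against the fitted distribution
        ks = stats.kstest(times, 'weibull_min', args=(shape, loc, scale))
        print(f"shape={shape:.2f} scale={scale:.2f} D={ks.statistic:.3f} p={ks.pvalue:.3f}")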

  11. Hydraulic model analysis of water distribution system, Rockwell International, Rocky Flats, Colorado

    SciTech Connect

    Perstein, J.; Castellano, J.A.

    1989-01-20

    Rockwell International requested an analysis of the existing plant site water supply distribution system at Rocky Flats, Colorado, to determine its adequacy. On September 26--29, 1988, Hughes Associates, Inc., Fire Protection Engineers, accompanied by Rocky Flats Fire Department engineers and suppression personnel, conducted water flow tests at the Rocky Flats plant site. Thirty-seven flows from various points throughout the plant site were taken on the existing domestic supply/fire main installation to assure comprehensive and thorough representation of the Rocky Flats water distribution system capability. The analysis was completed in four phases which are described, together with a summary of general conclusions and recommendations.

  12. The Analysis of the Strength, Distribution and Direction for the EEG Phase Synchronization by Musical Stimulus

    NASA Astrophysics Data System (ADS)

    Ogawa, Yutaro; Ikeda, Akira; Kotani, Kiyoshi; Jimbo, Yasuhiko

    In this study, we propose an EEG phase synchronization analysis that includes not only the average strength of the synchronization but also its distribution and directions, under conditions of emotion evoked by musical stimuli. The experiment is performed with two different musical stimuli that evoke happiness or sadness for 150 seconds. It is found that while the average strength of synchronization indicates no difference between the right and left sides of the frontal lobe during the happy stimulus, the distribution and directions indicate significant differences. Therefore, the proposed analysis is useful for detecting emotional condition because it provides information that cannot be obtained from the average strength of synchronization alone.
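
    The abstract does not name its synchronization measure; assuming the common phase-locking value (PLV), a sketch on toy two-channel data:

        import numpy as np
        from scipy.signal import hilbert

        def phase_locking_value(x, y):
            """Strength of phase synchronization between two signals (0 to 1)."""
            dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
            return np.abs(np.mean(np.exp(1j * dphi)))

        # Two toy 10 Hz "EEG" channels sharing a common component (illustrative)
        t = np.arange(0, 10, 1 / 250.0)
        rng = np.random.default_rng(0)
        common = np.sin(2 * np.pi * 10 * t)
        left = common + 0.5 * rng.standard_normal(t.size)
        right = common + 0.5 * rng.standard_normal(t.size)
        print(f"PLV = {phase_locking_value(left, right):.3f}")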

  13. Analysis of axially non-uniform loss distribution in 3-phase induction motor considering skew effect

    SciTech Connect

    Kown, B.I.; Kim, B.T.; Jun, C.S.; Park, S.C.

    1999-05-01

    This paper discusses the phenomena of the axially non-uniform distribution of magnetic flux densities and losses in a 3-phase squirrel cage induction motor whose rotor bars are skewed. A 2-dimensional complex finite element method taking account of the effects of the skewed rotor bars is utilized for the analysis of characteristics such as copper and iron losses, and the loss distributions are examined. The summed values of the non-uniform losses obtained from the finite element analysis are compared with measured values.

  14. An analysis of the size distribution of Italian firms by age

    NASA Astrophysics Data System (ADS)

    Cirillo, Pasquale

    2010-02-01

    In this paper we analyze the size distribution of Italian firms by age. In other words, we want to establish whether the way that the size of firms is distributed varies as firms become old. As a proxy of size we use capital. In [L.M.B. Cabral, J. Mata, On the evolution of the firm size distribution: Facts and theory, American Economic Review 93 (2003) 1075-1090], the authors study the distribution of Portuguese firms and find that, while the size distribution of all firms is fairly stable over time, the distributions of firms by age group are appreciably different. In particular, as the age of the firms increases, their size distribution on the log scale shifts to the right, the left tail becomes thinner and the right tail thicker, with a clear decrease of the skewness. In this paper, we perform a similar analysis with Italian firms using the CEBI database, also considering firms’ growth rates. Although there are several papers dealing with Italian firms and their size distribution, to our knowledge a similar study concerning size and age has not yet been performed for Italy, especially with such a big panel.

  15. Analysis of a teaching experiment on fair distribution with secondary school students

    NASA Astrophysics Data System (ADS)

    Antequera, A. T.; Espinel, M. C.

    2011-03-01

    The aim of this study is twofold. The first is to investigate the ability of secondary school students to understand the different distribution schemes and thus, indirectly, to contribute to the educational discussion and approach to be used for distribution problems so as to lessen reliance on the ubiquitous cross-multiplication rule in proportional distribution. The experiment was conducted with secondary school students using a specifically devised scenario involving a distribution problem. We present an analysis of the students' performance with respect to their concept of fair distribution in a given situation. Their ability to apply the various rules in a new situation is determined. The results provide an insight into the possibilities offered by teaching different distribution methods, especially with mathematically gifted students. The second aim is for instructors to appreciate the value of teaching other distribution methods that apply in real life in addition to a proportional distribution so that they may include in the mathematics classes some recently developed concepts from the field of cooperative game theory.

  16. Sensitivity analysis of CLIMEX parameters in modeling potential distribution of Phoenix dactylifera L.

    PubMed

    Shabani, Farzin; Kumar, Lalit

    2014-01-01

    Using CLIMEX and the Taguchi Method, a process-based niche model was developed to estimate potential distributions of Phoenix dactylifera L. (date palm), an economically important crop in many countries. Development of the model was based on both its native and invasive distribution, and validation was carried out in terms of its extensive distribution in Iran. To identify the model parameters having the greatest influence on the distribution of date palm, a sensitivity analysis was carried out. Changes in suitability were established by mapping of regions where the estimated distribution changed with parameter alterations. This facilitated the assessment of certain areas in Iran where parameter modifications impacted the most, particularly in relation to suitable and highly suitable locations. Parameter sensitivities were also evaluated by the calculation of area changes within the suitable and highly suitable categories. The low temperature limit (DV2), high temperature limit (DV3), upper optimal temperature (SM2) and high soil moisture limit (SM3) had the greatest impact on sensitivity, while other parameters showed relatively less sensitivity or were insensitive to change. For an accurate fit in species distribution models, highly sensitive parameters require more extensive research and data collection methods. Results of this study demonstrate a more cost effective method for developing date palm distribution models, an integral element in species management, and may prove useful for streamlining requirements for data collection in potential distribution modeling for other species as well. PMID:24722140

  17. Site energy distribution analysis of Cu (Ⅱ) adsorption on sediments and residues by sequential extraction method.

    PubMed

    Jin, Qiang; Yang, Yan; Dong, Xianbin; Fang, Jimin

    2016-01-01

    Many models (e.g., the Langmuir model, Freundlich model and surface complexation model) have been successfully used to explain the mechanism of metal ion adsorption on pure mineral materials. These materials usually have a homogeneous surface where all sites have the same adsorption energies. However, such models are hardly appropriate for describing adsorption on heterogeneous surfaces (e.g., sediment surfaces); site energy distribution analysis can be used instead. In the present study, site energy distribution analysis was used to describe the surface properties and adsorption behavior of the non-residual and residual components extracted from natural aquatic sediment samples. The residues were prepared "in-situ" by using the sequential extraction procedure. The present study is intended to investigate the roles of different components, and the change of site energy distribution at different temperatures, of the sediment samples in controlling Cu (Ⅱ) adsorption. The results of the site energy distribution analysis indicated, firstly, that the sorption sites of iron/manganese hydrous oxides (IMHO) and organic matter (OM) have higher energy. Secondly, light fraction (LF) and carbonates have little influence on site energy distribution. Finally, there was an increase in site energies with the increase of temperature. In particular, low temperature (5 °C) significantly influenced the site energies of IMHO and OM, and also had an obvious effect on the energy distribution of the sediments after removing target components. The site energy distribution analysis proved to be a useful method for further understanding the energetic characteristics of sediment in comparison with those previously obtained. PMID:26552542

  18. Can Data Recognize Its Parent Distribution?

    SciTech Connect

    Marshall, A.W.; Meza, J.C.; Olkin, I.

    1999-05-01

    This study is concerned with model selection for lifetime and survival distributions arising in engineering reliability or in the medical sciences. We compare various distributions, including the gamma, Weibull and lognormal, with a new distribution called the geometric extreme exponential. Except for the lognormal distribution, the other three distributions all have the exponential distribution as a special case. A Monte Carlo simulation was performed to determine the sample sizes for which survival distributions can distinguish data generated by their own families. Two decision methods are used: maximum likelihood and Kolmogorov distance. Neither method is uniformly best. The probability of correct selection with more than one alternative shows some surprising results when the choices are close to the exponential distribution.
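
    The maximum-likelihood selection method can be sketched as follows, on three non-nested candidate families (a simplification of the paper's setup; the sample size and trial count are arbitrary):

        import numpy as np
        from scipy import stats

        candidates = {'weibull_min': stats.weibull_min,
                      'gamma': stats.gamma,
                      'lognorm': stats.lognorm}

        def best_by_likelihood(sample):
            """Pick the family whose MLE fit has the highest log-likelihood."""
            scores = {}
            for name, dist in candidates.items():
                params = dist.fit(sample, floc=0)  # location fixed at zero
                scores[name] = dist.logpdf(sample, *params).sum()
            return max(scores, key=scores.get)

        # How often is Weibull data (shape 1.5) recognized as Weibull at n = 50?
        rng = np.random.default_rng(3)
        n_trials, n = 200, 50
        hits = sum(best_by_likelihood(
                       stats.weibull_min.rvs(1.5, size=n, random_state=rng)
                   ) == 'weibull_min' for _ in range(n_trials))
        print(f"correct selection rate: {hits / n_trials:.2f}")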

  19. A Weibull model to describe antimicrobial kinetics of oregano and lemongrass essential oils against Salmonella Enteritidis in ground beef during refrigerated storage.

    PubMed

    de Oliveira, Thales Leandro Coutinho; Soares, Rodrigo de Araújo; Piccoli, Roberta Hilsdorf

    2013-03-01

    The antimicrobial effect of oregano (Origanum vulgare L.) and lemongrass (Cymbopogon citratus (DC.) Stapf.) essential oils (EOs) against Salmonella enterica serotype Enteritidis was evaluated in in vitro experiments and in inoculated ground bovine meat during refrigerated storage (4±2 °C) for 6 days. The Weibull model was tested to fit the survival/inactivation bacterial curves (estimating the p and δ parameters). The minimum inhibitory concentration (MIC) value for both EOs on S. Enteritidis was 3.90 μl/ml. The EO concentrations applied in the ground beef were 3.90, 7.80 and 15.60 μl/g, based on MIC levels and possible activity reduction by food constituents. Both EOs, at all tested levels, showed antimicrobial effects, with microbial populations decreasing (p≤0.05) over storage time. Evaluation of the fit-quality parameters (RSS and RSE) showed that Weibull models are able to describe the inactivation curves of EOs against S. Enteritidis. The application of EOs in processed meats can be used to control pathogens during refrigerated shelf-life. PMID:23273476
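
    A sketch of fitting the Weibull inactivation model log10 N(t)/N0 = -(t/δ)^p, with invented survival data standing in for the paper's measurements:

        import numpy as np
        from scipy.optimize import curve_fit

        def weibull_log_survival(t, delta, p):
            """Weibull inactivation model: log10 N(t)/N0 = -(t / delta) ** p."""
            return -(t / delta) ** p

        # Invented storage-time data: days vs. log10 reduction of the population
        t = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
        log_reduction = np.array([-0.8, -1.3, -1.9, -2.2, -2.6, -3.0])

        (delta, p), _ = curve_fit(weibull_log_survival, t, log_reduction,
                                  p0=[1.0, 1.0], bounds=([1e-3, 1e-3], [50.0, 10.0]))
        print(f"delta = {delta:.2f} days (first decimal reduction time), p = {p:.2f}")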

  20. Wave-height hazard analysis in Eastern Coast of Spain - Bayesian approach using generalized Pareto distribution

    NASA Astrophysics Data System (ADS)

    Egozcue, J. J.; Pawlowsky-Glahn, V.; Ortego, M. I.

    2005-03-01

    Standard practice of wave-height hazard analysis often pays little attention to the uncertainty of assessed return periods and occurrence probabilities. This fact favors the opinion that, when large events happen, the hazard assessment should change accordingly. However, the uncertainty of the hazard estimates is normally able to hide the effect of those large events. This is illustrated using data from the Mediterranean coast of Spain, where the last years have been extremely disastrous. Thus, it is possible to compare the hazard assessment based on data previous to those years with the analysis including them. With our approach, no significant change is detected when the statistical uncertainty is taken into account. The hazard analysis is carried out with a standard model. Time-occurrence of events is assumed to be Poisson distributed. The wave-height of each event is modelled as a random variable whose upper tail follows a Generalized Pareto Distribution (GPD). Moreover, wave-heights are assumed independent from event to event and also independent of their occurrence in time. A threshold for excesses is assessed empirically. The other three parameters (Poisson rate, shape and scale parameters of the GPD) are jointly estimated using Bayes' theorem. The prior distribution accounts for physical features of ocean waves in the Mediterranean sea and experience with these phenomena. The posterior distribution of the parameters allows one to obtain posterior distributions of derived parameters like occurrence probabilities and return periods. Predictives are also available. Computations are carried out using the program BGPE v2.0.
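
    A minimal sketch of the Bayesian estimation described: a random-walk Metropolis sampler for the GPD shape and scale given excesses over a threshold. The data, priors and proposal scales are invented, and this is not the BGPE program.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)

        # Invented wave-height excesses (m) over an empirically chosen threshold
        excesses = stats.genpareto.rvs(0.1, scale=0.8, size=80, random_state=rng)

        def log_posterior(theta):
            """Log posterior for GPD shape xi and scale sigma, weak priors."""
            xi, sigma = theta
            if sigma <= 0:
                return -np.inf
            ll = stats.genpareto.logpdf(excesses, xi, scale=sigma).sum()
            if not np.isfinite(ll):
                return -np.inf
            return (ll + stats.norm.logpdf(xi, 0.0, 0.5)
                       + stats.norm.logpdf(np.log(sigma), 0.0, 1.0))

        # Plain random-walk Metropolis sampler
        theta, lp = np.array([0.0, 1.0]), log_posterior(np.array([0.0, 1.0]))
        draws = []
        for _ in range(10000):
            prop = theta + rng.normal(0.0, 0.05, size=2)
            lp_prop = log_posterior(prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            draws.append(theta)
        draws = np.array(draws[2000:])              # discard burn-in
        xi, sigma = draws[:, 0], draws[:, 1]
        print("posterior means (xi, sigma):", draws.mean(axis=0))

        # Posterior of the N-exceedance return level x_N = sigma/xi * (N**xi - 1)
        print("median 100-exceedance return level:",
              np.median(sigma / xi * (100.0 ** xi - 1.0)))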

  1. Strategic Sequencing for State Distributed PV Policies: A Quantitative Analysis of Policy Impacts and Interactions

    SciTech Connect

    Doris, E.; Krasko, V.A.

    2012-10-01

    State and local policymakers show increasing interest in spurring the development of customer-sited distributed generation (DG), in particular solar photovoltaic (PV) markets. Prompted by that interest, this analysis examines the use of state policy as a tool to support the development of a robust private investment market. This analysis builds on previous studies that focus on government subsidies to reduce installation costs of individual projects and provides an evaluation of the impacts of policies on stimulating private market development.

  2. Superstatistics analysis of the ion current distribution function: Met3PbCl influence study.

    PubMed

    Miśkiewicz, Janusz; Trela, Zenon; Przestalski, Stanisław; Karcz, Waldemar

    2010-09-01

    A novel analysis of ion current time series is proposed. It is shown that the higher (second, third and fourth) statistical moments of the ion current probability distribution function (PDF) can yield new information about ion channel properties. The method is illustrated on a two-state model in which the PDFs of the component states are given by normal distributions. The proposed method was applied to the analysis of the SV cation channels of the vacuolar membrane of Beta vulgaris and to the influence of trimethyllead chloride (Met(3)PbCl) on the ion current probability distribution. Ion currents were measured by the patch-clamp technique. It was shown that Met(3)PbCl influences the variance of the open-state ion current but does not alter the PDF of the closed-state ion current. Incorporation of higher statistical moments into the standard investigation of ion channel properties is proposed. PMID:20354691
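
    A sketch of the proposed moment computation on a synthetic two-state current trace (the levels, noise widths, and open probability are illustrative):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        is_open = rng.random(10000) < 0.4                   # open with prob. 0.4
        current = np.where(is_open,
                           rng.normal(5.0, 0.6, 10000),     # open-state level (pA)
                           rng.normal(0.0, 0.2, 10000))     # closed-state level (pA)

        print("variance:", np.var(current))
        print("skewness:", stats.skew(current))
        print("kurtosis:", stats.kurtosis(current))         # excess kurtosis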

  3. Analysis of the melanin distribution in different ethnic groups by in vivo laser scanning microscopy

    NASA Astrophysics Data System (ADS)

    Antoniou, C.; Lademann, J.; Richter, H.; Astner, S.; Patzelt, A.; Zastrow, L.; Sterry, W.; Koch, S.

    2009-05-01

    The aim of this study was to determine whether laser scanning confocal microscopy (LSM) is able to visualize differences in melanin content and distribution in different skin phototypes. The investigations were carried out on six healthy volunteers with Skin Phototypes II, IV, and VI. Representative skin samples of Skin Phototypes II, V, and VI were obtained for histological analysis from remaining tissue of skin grafts and were used for LSM-pathologic correlation. LSM evaluation showed significant differences in melanin distribution among Skin Phototypes II, IV, and VI. Based on the differences in overall reflectivity and image brightness, a visual evaluation scheme showed increasing brightness of the basal and suprabasal layers with increasing Skin Phototype. The findings correlated well with the histological analysis. The results demonstrate that LSM may serve as a promising adjunctive tool for real-time assessment of melanin content and distribution in human skin, with numerous clinical applications and therapeutic and preventive implications.

  4. Residence Time Distribution Measurement and Analysis of Pilot-Scale Pretreatment Reactors for Biofuels Production: Preprint

    SciTech Connect

    Sievers, D.; Kuhn, E.; Tucker, M.; Stickel, J.; Wolfrum, E.

    2013-06-01

    Measurement and analysis of residence time distribution (RTD) data are the focus of this study, in which data collection methods were developed specifically for the pretreatment reactor environment. Augmented physical sampling and automated online detection methods were developed and applied. Both the measurement techniques themselves and the resulting RTD data are presented and discussed.

  5. Exploratory Data Analysis to Identify Factors Influencing Spatial Distributions of Weed Seed Banks

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Comparing distributions of different species in multiple fields will help us understand the spatial dynamics of weed seed banks, but analyzing observational data requires non-traditional statistical methods. We used classification and regression tree analysis (CART) to investigate factors that influ...

  6. An investigation on the intra-sample distribution of cotton color by using image analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The colorimeter principle is widely used to measure cotton color. This method provides the sample’s color grade; but the result does not include information about the color distribution and any variation within the sample. We conducted an investigation that used image analysis method to study the ...

  7. An Analysis of Variance Approach for the Estimation of Response Time Distributions in Tests

    ERIC Educational Resources Information Center

    Attali, Yigal

    2010-01-01

    Generalizability theory and analysis of variance methods are employed, together with the concept of objective time pressure, to estimate response time distributions and the degree of time pressure in timed tests. By estimating response time variance components due to person, item, and their interaction, and fixed effects due to item types and…

  8. DataHub knowledge based assistance for science visualization and analysis using large distributed databases

    NASA Technical Reports Server (NTRS)

    Handley, Thomas H., Jr.; Collins, Donald J.; Doyle, Richard J.; Jacobson, Allan S.

    1991-01-01

    Viewgraphs on DataHub knowledge based assistance for science visualization and analysis using large distributed databases. Topics covered include: DataHub functional architecture; data representation; logical access methods; preliminary software architecture; LinkWinds; data knowledge issues; expert systems; and data management.

  9. Global Distribution of Tropospheric Aerosols: A 3-D Model Analysis of Satellite Data

    NASA Technical Reports Server (NTRS)

    Chin, Mian

    2002-01-01

    This report describes objectives completed for the GACP (Global Climatology Aerosol Project). The objectives included the analysis of satellite aerosol data, including the optical properties and global distributions of major aerosol types, and human contributions to major aerosol types. The researchers have conducted simulations and field work.

  10. Spatial sensitivity analysis of snow cover data in a distributed rainfall-runoff model

    NASA Astrophysics Data System (ADS)

    Berezowski, T.; Nossent, J.; Chormański, J.; Batelaan, O.

    2015-04-01

    As the availability of spatially distributed data sets for distributed rainfall-runoff modelling is strongly increasing, more attention should be paid to the influence of the quality of the data on the calibration. While a lot of progress has been made on using distributed data in simulations of hydrological models, sensitivity of spatial data with respect to model results is not well understood. In this paper we develop a spatial sensitivity analysis method for spatial input data (snow cover fraction - SCF) for a distributed rainfall-runoff model to investigate whether the model is differently subjected to SCF uncertainty in different zones of the model. The analysis was focussed on the relation between the SCF sensitivity and the physical and spatial parameters and processes of a distributed rainfall-runoff model. The methodology is tested for the Biebrza River catchment, Poland, for which a distributed WetSpa model is set up to simulate 2 years of daily runoff. The sensitivity analysis uses the Latin-Hypercube One-factor-At-a-Time (LH-OAT) algorithm, which employs different response functions for each spatial parameter representing a 4 × 4 km snow zone. The results show that the spatial patterns of sensitivity can be easily interpreted by co-occurrence of different environmental factors such as geomorphology, soil texture, land use, precipitation and temperature. Moreover, the spatial pattern of sensitivity under different response functions is related to different spatial parameters and physical processes. The results clearly show that the LH-OAT algorithm is suitable for our spatial sensitivity analysis approach and that the SCF is spatially sensitive in the WetSpa model. The developed method can be easily applied to other models and other spatial data.

  11. Consideration of tip speed limitations in preliminary analysis of minimum COE wind turbines

    NASA Astrophysics Data System (ADS)

    Cuerva-Tejero, A.; Yeow, T. S.; Lopez-Garcia, O.; Gallego-Castillo, C.

    2014-12-01

    A relation between Cost Of Energy (COE), maximum allowed tip speed, and rated wind speed is obtained for wind turbines with a given target rated power. The wind regime is characterised by the corresponding parameters of the probability density function of wind speed. The non-dimensional characteristics of the rotor (number of blades, the blade radial distributions of local solidity, twist angle, and airfoil type) play the role of parameters in the mentioned relation. The COE is estimated using a cost model commonly used by designers. This cost model requires basic design data such as the rotor radius and the ratio between the hub height and the rotor radius. Certain design options, DO, related to the technology of the power plant, tower and blades are also required as inputs. The function obtained for the COE can be explored to find those values of rotor radius that give rise to minimum cost of energy for a given wind regime as the tip speed limitation changes. The analysis reveals that iso-COE lines evolve parallel to iso-radius lines for large values of the tip speed limit, but that this is not the case for small values of the tip speed limit. It is concluded that, as the tip speed limit decreases, the optimum decision for keeping minimum COE values can be: (a) reducing the rotor radius for sites with a high Weibull scale parameter, or (b) increasing the rotor radius for sites with a low Weibull scale parameter.
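
    A sketch of the kind of evaluation behind the COE surface: mean power under a Weibull wind regime for an assumed generic power curve (the Weibull parameters, cut-in/rated/cut-out speeds, and rated power are all illustrative):

        import numpy as np

        k, c = 2.0, 8.0                          # Weibull shape and scale (m/s)
        v = np.linspace(0.0, 30.0, 3001)
        pdf = (k / c) * (v / c) ** (k - 1) * np.exp(-(v / c) ** k)

        v_in, v_rated, v_out, P_rated = 3.0, 11.0, 25.0, 2.0e6   # m/s, m/s, m/s, W
        P = np.where((v >= v_in) & (v < v_rated),
                     P_rated * ((v - v_in) / (v_rated - v_in)) ** 3, 0.0)
        P = np.where((v >= v_rated) & (v <= v_out), P_rated, P)

        dv = v[1] - v[0]
        mean_power = np.sum(P * pdf) * dv        # expectation over the wind pdf
        print(f"AEP ~ {mean_power * 8760 / 1e6:.0f} MWh")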

  12. Efficient, Distributed and Interactive Neuroimaging Data Analysis Using the LONI Pipeline.

    PubMed

    Dinov, Ivo D; Van Horn, John D; Lozev, Kamen M; Magsipoc, Rico; Petrosyan, Petros; Liu, Zhizhong; Mackenzie-Graham, Allan; Eggert, Paul; Parker, Douglas S; Toga, Arthur W

    2009-01-01

    The LONI Pipeline is a graphical environment for construction, validation and execution of advanced neuroimaging data analysis protocols (Rex et al., 2003). It enables automated data format conversion, allows Grid utilization, facilitates data provenance, and provides a significant library of computational tools. There are two main advantages of the LONI Pipeline over other graphical analysis workflow architectures. It is built as a distributed Grid computing environment and permits efficient tool integration, protocol validation and broad resource distribution. To integrate existing data and computational tools within the LONI Pipeline environment, no modification of the resources themselves is required. The LONI Pipeline provides several types of process submissions based on the underlying server hardware infrastructure. Only workflow instructions and references to data, executable scripts and binary instructions are stored within the LONI Pipeline environment. This makes it portable, computationally efficient, distributed and independent of the individual binary processes involved in pipeline data-analysis workflows. We have expanded the LONI Pipeline (V.4.2) to include server-to-server (peer-to-peer) communication and a 3-tier failover infrastructure (Grid hardware, Sun Grid Engine/Distributed Resource Management Application API middleware, and the Pipeline server). Additionally, the LONI Pipeline provides three layers of background-server executions for all users/sites/systems. These new LONI Pipeline features facilitate resource-interoperability, decentralized computing, construction and validation of efficient and robust neuroimaging data-analysis workflows. Using brain imaging data from the Alzheimer's Disease Neuroimaging Initiative (Mueller et al., 2005), we demonstrate integration of disparate resources, graphical construction of complex neuroimaging analysis protocols and distributed parallel computing. The LONI Pipeline, its features, specifications

  13. Efficient, Distributed and Interactive Neuroimaging Data Analysis Using the LONI Pipeline

    PubMed Central

    Dinov, Ivo D.; Van Horn, John D.; Lozev, Kamen M.; Magsipoc, Rico; Petrosyan, Petros; Liu, Zhizhong; MacKenzie-Graham, Allan; Eggert, Paul; Parker, Douglas S.; Toga, Arthur W.

    2009-01-01

    The LONI Pipeline is a graphical environment for construction, validation and execution of advanced neuroimaging data analysis protocols (Rex et al., 2003). It enables automated data format conversion, allows Grid utilization, facilitates data provenance, and provides a significant library of computational tools. There are two main advantages of the LONI Pipeline over other graphical analysis workflow architectures. It is built as a distributed Grid computing environment and permits efficient tool integration, protocol validation and broad resource distribution. To integrate existing data and computational tools within the LONI Pipeline environment, no modification of the resources themselves is required. The LONI Pipeline provides several types of process submissions based on the underlying server hardware infrastructure. Only workflow instructions and references to data, executable scripts and binary instructions are stored within the LONI Pipeline environment. This makes it portable, computationally efficient, distributed and independent of the individual binary processes involved in pipeline data-analysis workflows. We have expanded the LONI Pipeline (V.4.2) to include server-to-server (peer-to-peer) communication and a 3-tier failover infrastructure (Grid hardware, Sun Grid Engine/Distributed Resource Management Application API middleware, and the Pipeline server). Additionally, the LONI Pipeline provides three layers of background-server executions for all users/sites/systems. These new LONI Pipeline features facilitate resource-interoperability, decentralized computing, construction and validation of efficient and robust neuroimaging data-analysis workflows. Using brain imaging data from the Alzheimer's Disease Neuroimaging Initiative (Mueller et al., 2005), we demonstrate integration of disparate resources, graphical construction of complex neuroimaging analysis protocols and distributed parallel computing. The LONI Pipeline, its features, specifications

  14. Radial distribution function imaging by STEM diffraction: Phase mapping and analysis of heterogeneous nanostructured glasses.

    PubMed

    Mu, Xiaoke; Wang, Di; Feng, Tao; Kübel, Christian

    2016-09-01

    Characterizing heterogeneous nanostructured amorphous materials is a challenging topic because of the difficulty of resolving disordered atomic arrangements at the nanometer scale. We developed a new transmission electron microscopy (TEM) method to enable phase analysis and mapping of heterogeneous amorphous structures, combining scanning TEM (STEM) diffraction mapping, radial distribution function (RDF) analysis, and hyperspectral analysis. This method was applied to an amorphous zirconium oxide and zirconium iron multilayer system and showed extreme sensitivity to small variations in atomic packing. This approach helps in understanding local structure variations in glassy composite materials and provides new insights for correlating the structure and properties of glasses. PMID:27236215

  15. Fractal analysis of the dark matter and gas distributions in the Mare-Nostrum universe

    SciTech Connect

    Gaite, José

    2010-03-01

    We develop a method of multifractal analysis of N-body cosmological simulations that improves on the customary counts-in-cells method by taking special care of the effects of discreteness and large scale homogeneity. The analysis of the Mare-Nostrum simulation with our method provides strong evidence of self-similar multifractal distributions of dark matter and gas, with a halo mass function that is of Press-Schechter type but has a power-law exponent -2, as corresponds to a multifractal. Furthermore, our analysis shows that the dark matter and gas distributions are indistinguishable as multifractals. To determine if there is any gas biasing, we calculate the cross-correlation coefficient, with negative but inconclusive results. Hence, we develop an effective Bayesian analysis connected with information theory, which clearly demonstrates that the gas is biased in a long range of scales, up to the scale of homogeneity. However, entropic measures related to the Bayesian analysis show that this gas bias is small (in a precise sense) and is such that the fractal singularities of both distributions coincide and are identical. We conclude that this common multifractal cosmic web structure is determined by the dynamics and is independent of the initial conditions.

  16. Finite-key analysis for measurement-device-independent quantum key distribution

    NASA Astrophysics Data System (ADS)

    Song, Ting-Ting; Wen, Qiao-Yan; Guo, Fen-Zhuo; Tan, Xiao-Qing

    2012-08-01

    The length of signal pulses is finite in practical quantum key distribution. Finite-key analysis of unconditionally secure quantum key distribution is a pressing problem, and an efficient quantum key distribution protocol suitable for practical implementation, measurement-device-independent quantum key distribution (MDI QKD), was proposed very recently. We give the finite-key analysis of MDI QKD, which removes all detector side channels and generates a key rate many orders of magnitude higher than that of full-device-independent quantum key distribution. The secure bound on the ultimate key rate is obtained under statistical fluctuations of the relative frequency, and can be applied directly to practical threshold detectors with low detection efficiency and highly lossy channels. The bound is evaluated for reasonable values of the observed parameters. The simulation shows that the secure distance is around 10 km when the number of sifted data is 10^10. Moreover, the secure distance would be much longer in practice because of some simplified treatments used in our paper.

  17. Bivariate Frequency Analysis with Nonstationary Gumbel/GEV Marginal Distributions for Rainfall Event

    NASA Astrophysics Data System (ADS)

    Joo, Kyungwon; Kim, Sunghun; Kim, Hanbeen; Ahn, Hyunjun; Heo, Jun-Haeng

    2016-04-01

    Multivariate frequency analysis of hydrological data has been developing recently. In particular, the copula model has been used as an effective method because it places no restriction on the choice of marginal distributions. Time-series rainfall data can be partitioned into rainfall events by an inter-event time definition, each rainfall event having a depth and a duration. In addition, changes in rainfall depth have been studied recently in the context of climate change. Nonstationary (time-varying) Gumbel and Generalized Extreme Value (GEV) distributions have been developed, and their performance has been investigated in many studies. In the current study, bivariate frequency analysis is performed for rainfall depth and duration using an Archimedean copula on stationary and nonstationary hourly rainfall data to consider the effect of climate change. The parameter of the copula model is estimated by the inference function for margins (IFM) method, and stationary/nonstationary Gumbel and GEV distributions are used as marginal distributions. As a result, level curves of the copula model are obtained, and a goodness-of-fit test is performed to choose the appropriate marginal distribution among the applied stationary and nonstationary Gumbel and GEV distributions.
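
    A sketch of the two-step spirit of IFM: fit the margins first, then estimate the Gumbel (Archimedean) copula parameter, here via the Kendall's-tau inversion θ = 1/(1-τ) as a simple moment-based shortcut rather than the paper's likelihood-based IFM step. The event data are synthetic:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        duration = rng.gamma(2.0, 3.0, 500)                  # event duration (h)
        depth = 2.0 * duration + rng.gamma(2.0, 2.0, 500)    # event depth (mm)

        # Step 1 (margins): e.g. GEV fits to each variable
        gev_depth = stats.genextreme.fit(depth)
        gev_duration = stats.genextreme.fit(duration)
        print("GEV shape params:", round(gev_depth[0], 2), round(gev_duration[0], 2))

        # Step 2 (dependence): Gumbel copula parameter from Kendall's tau
        tau, _ = stats.kendalltau(depth, duration)
        theta = 1.0 / (1.0 - tau)
        print(f"tau = {tau:.2f}, Gumbel copula theta = {theta:.2f}")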

  18. Harmonic amplitude distribution in a wideband ultrasonic wavefront after propagation through human abdominal wall and breast specimens.

    PubMed

    Liu, D L; Waag, R C

    1997-02-01

    The amplitude characteristics of ultrasonic wavefront distortion produced by transmission through the abdominal wall and breast are described. Ultrasonic pulses were recorded in a two-dimensional aperture after transmission through specimens of abdominal wall or breast. After the pulse arrival times were corrected for geometric path differences, the pulses were temporally Fourier transformed and two-dimensional maps of harmonic amplitudes in the measurement aperture were computed. The results indicate that, as the temporal frequency increases, the fluctuation in harmonic amplitudes increases but the spatial scale of the fluctuation decreases. The normalized second-order and third-order moments of the amplitude distribution also increase with temporal frequency. The wide variation of these distribution characteristics could not be covered by the Rayleigh, Rician, or K-distribution because of their limited flexibility. However, the Weibull distribution, and especially the generalized K-distribution, provide better fits to the data. In the fit of the generalized K-distribution, a decrease of its parameter alpha with increasing temporal frequency was observed, as predicted by analysis based on a phase-screen model. PMID:9035403
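
    A sketch of this kind of distribution comparison via Kolmogorov-Smirnov fit statistics (the amplitude samples are synthetic; the K-distribution and generalized K-distribution are omitted since they are not available off the shelf in scipy.stats):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(11)
        amp = 2.0 * rng.weibull(1.6, 1000)       # synthetic harmonic amplitudes

        for name, dist in [("rayleigh", stats.rayleigh),
                           ("rician", stats.rice),
                           ("weibull", stats.weibull_min)]:
            params = dist.fit(amp, floc=0)
            ks = stats.kstest(amp, dist.cdf, args=params)
            print(f"{name:8s} KS = {ks.statistic:.3f}  p = {ks.pvalue:.3f}")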

  19. Spatial sensitivity analysis of snow cover data in a distributed rainfall-runoff model

    NASA Astrophysics Data System (ADS)

    Berezowski, T.; Nossent, J.; Chormański, J.; Batelaan, O.

    2014-10-01

    As the availability of spatially distributed data sets for distributed rainfall-runoff modelling is strongly growing, more attention should be paid to the influence of the quality of the data on the calibration. While a lot of progress has been made on using distributed data in simulations of hydrological models, sensitivity of spatial data with respect to model results is not well understood. In this paper we develop a spatial sensitivity analysis (SA) method for snow cover fraction input data (SCF) for a distributed rainfall-runoff model to investigate if the model is differently subjected to SCF uncertainty in different zones of the model. The analysis was focused on the relation between the SCF sensitivity and the physical, spatial parameters and processes of a distributed rainfall-runoff model. The methodology is tested for the Biebrza River catchment, Poland, for which a distributed WetSpa model is set up to simulate two years of daily runoff. The SA uses the Latin-Hypercube One-factor-At-a-Time (LH-OAT) algorithm, which employs different response functions for each 4 km × 4 km snow zone. The results show that the spatial patterns of sensitivity can be easily interpreted by co-occurrence of different environmental factors such as geomorphology, soil texture, land use, precipitation and temperature. Moreover, the spatial pattern of sensitivity under different response functions is related to different spatial parameters and physical processes. The results clearly show that the LH-OAT algorithm is suitable for the spatial sensitivity analysis approach and that the SCF is spatially sensitive in the WetSpa model.

  20. Phosphorescence lifetime analysis with a quadratic programming algorithm for determining quencher distributions in heterogeneous systems.

    PubMed Central

    Vinogradov, S A; Wilson, D F

    1994-01-01

    A new method for the analysis of phosphorescence lifetime distributions in heterogeneous systems has been developed. The method is based on decomposition of the data vector onto a linearly independent set of exponentials and uses quadratic programming principles for χ² minimization. Solution of the resulting algorithm requires a finite number of calculations (it is not iterative) and is computationally fast and robust. The algorithm has been tested on various simulated decays and for the analysis of phosphorescence measurements of experimental systems with discrete distributions of lifetimes. A critical analysis of the effect of signal-to-noise ratio on the resolving capability of the algorithm is presented. This technique is recommended for resolving distributions of quencher concentration in heterogeneous samples, of which oxygen distributions in tissue are an important example. Phosphors of practical importance for biological oxygen measurements, Pd-meso-tetra (4-carboxyphenyl) porphyrin (PdTCPP) and Pd-meso-porphyrin (PdMP), have been used to provide an experimental test of the algorithm. PMID:7858142
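
    A sketch of the constrained-decomposition idea using non-negative least squares on a fixed grid of exponentials, as a simpler stand-in for the paper's quadratic-programming algorithm (the decay, lifetime grid, and noise level are illustrative):

        import numpy as np
        from scipy.optimize import nnls

        t = np.linspace(0.0, 2.0e-3, 400)                 # time (s)
        tau_grid = np.geomspace(1e-5, 1e-3, 40)           # candidate lifetimes (s)
        A = np.exp(-t[:, None] / tau_grid[None, :])       # exponential basis matrix

        clean = 0.7 * np.exp(-t / 2e-4) + 0.3 * np.exp(-t / 6e-4)
        y = clean + np.random.default_rng(2).normal(0.0, 0.005, t.size)

        weights, _ = nnls(A, y)                           # non-negative amplitudes
        top = np.sort(tau_grid[np.argsort(weights)[-2:]])
        print("dominant lifetimes (s):", top)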

  1. An approximate marginal logistic distribution for the analysis of longitudinal ordinal data.

    PubMed

    Nooraee, Nazanin; Abegaz, Fentaw; Ormel, Johan; Wit, Ernst; van den Heuvel, Edwin R

    2016-03-01

    Subject-specific and marginal models have been developed for the analysis of longitudinal ordinal data. Subject-specific models often lack a population-average interpretation of the model parameters due to the conditional formulation of random intercepts and slopes. Marginal models frequently lack an underlying distribution for ordinal data, in particular when generalized estimating equations (GEE) are applied. To overcome these issues, latent variable models underneath the ordinal outcomes with a multivariate logistic distribution can be applied. In this article, we extend the work of O'Brien and Dunson (2004), who studied the multivariate t-distribution with marginal logistic distributions. We use maximum likelihood instead of a Bayesian approach, and incorporate covariates in the correlation structure in addition to the mean model. We compared our method with GEE and demonstrated that it performs better than GEE with respect to fixed-effect parameter estimation when the latent variables have an approximately elliptical distribution, and at least as well as GEE for other types of latent variable distributions. PMID:26458164

  2. Evaluating Domestic Hot Water Distribution System Options With Validated Analysis Models

    SciTech Connect

    Weitzel, E.; Hoeschele, M.

    2014-09-01

    A developing body of work is forming that collects data on domestic hot water consumption, water use behaviors, and the energy efficiency of various distribution systems. A full distribution system developed in TRNSYS has been validated using field monitoring data and then exercised in a number of climates to understand the impact of climate on performance. This study builds upon previous analysis modelling work to evaluate differing distribution systems and the sensitivities of water heating energy and water use efficiency to variations in climate, load, distribution type, insulation and compact plumbing practices. Overall, 124 different TRNSYS models were simulated. Of the configurations evaluated, distribution losses account for 13-29% of total water heating energy use, and water use efficiency ranges from 11-22%. The base case, an uninsulated trunk-and-branch system, sees the most improvement in energy consumption from insulating and locating the water heater central to all fixtures. Demand recirculation systems are not projected to provide significant energy savings and in some cases increase energy consumption. Water use is most efficient with demand recirculation systems, followed by the insulated trunk-and-branch system with a central water heater. Compact plumbing practices and insulation have the most impact on energy consumption (2-6% for insulation and 3-4% per 10 gallons of enclosed volume reduced). The results of this work are useful in informing future development of water heating best-practices guides as well as more accurate (and simulation-time efficient) distribution models for annual whole-house simulation programs.

  3. Finite difference based vibration simulation analysis of a segmented distributed piezoelectric structronic plate system

    NASA Astrophysics Data System (ADS)

    Ren, B. Y.; Wang, L.; Tzou, H. S.; Yue, H. H.

    2010-08-01

    Electrical modeling of piezoelectric structronic systems by analog circuits has the disadvantages of a huge circuit structure and low precision. However, studies of the electrical simulation of segmented distributed piezoelectric structronic plate systems (PSPSs) that use the output voltage signals of high-speed digital circuits to evaluate real-time dynamic displacements are scarce in the literature. Therefore, an equivalent dynamic model based on the finite difference method (FDM) is presented to simulate the actual physical model of a segmented distributed PSPS with simply supported boundary conditions. By means of the FDM, the fourth-order dynamic partial differential equations (PDEs) for the main structure, the segmented distributed sensor signals, and the control moments of the segmented distributed actuator of the PSPS are transformed into finite difference equations. A dynamics matrix model based on the Newmark-β integration method is established. The output voltage signal characteristics of the lower modes (m <= 3, n <= 3) with different finite difference mesh dimensions and different integration time steps are analyzed with digital signal processing (DSP) circuit simulation software. The control effects of segmented distributed actuators with different effective areas are consistent with the results of the analytical model in the relevant references. Therefore, the digital simulation method for vibration analysis of segmented distributed PSPSs presented in this paper can provide a reference for further research into the electrical simulation of PSPSs.
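
    A minimal sketch of a Newmark-β time stepper (average-acceleration variant, β = 1/4, γ = 1/2) for an undamped system M x'' + K x = f(t); the toy 2-DOF matrices and load are illustrative, not the plate model:

        import numpy as np

        M = np.diag([1.0, 1.0])
        K = np.array([[4.0, -2.0], [-2.0, 4.0]])
        beta, gamma, dt, nsteps = 0.25, 0.5, 0.01, 1000

        x, v = np.zeros(2), np.zeros(2)
        a = np.linalg.solve(M, -K @ x)                   # initial acceleration
        Keff = M / (beta * dt**2) + K                    # effective stiffness

        for n in range(1, nsteps + 1):
            f = np.array([np.sin(0.5 * n * dt), 0.0])   # external load (illustrative)
            rhs = f + M @ (x / (beta * dt**2) + v / (beta * dt)
                           + (1.0 / (2.0 * beta) - 1.0) * a)
            x_new = np.linalg.solve(Keff, rhs)           # implicit displacement update
            a_new = ((x_new - x) / (beta * dt**2) - v / (beta * dt)
                     - (1.0 / (2.0 * beta) - 1.0) * a)
            v += dt * ((1.0 - gamma) * a + gamma * a_new)
            x, a = x_new, a_new
        print("displacement at t =", nsteps * dt, ":", x)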

  4. Rank-Ordered Multifractal Analysis of Probability Distributions in Fluid Turbulence

    NASA Astrophysics Data System (ADS)

    Wu, Cheng-Chin; Chang, Tien

    2015-11-01

    Rank-Ordered Multifractal Analysis (ROMA) was introduced by Chang and Wu (2008) to describe the multifractal characteristic of intermittent events. The procedure provides a natural connection between the rank-ordered spectrum and the idea of one-parameter scaling for monofractals. This technique has successfully been applied to MHD turbulence simulations and turbulence data observed in various space plasmas. In this paper, the technique is applied to the probability distributions in the inertial range of the turbulent fluid flow, as given in the vast Johns Hopkins University (JHU) turbulence database. In addition, a refined method of finding the continuous ROMA spectrum and the scaled probability distribution function (PDF) simultaneously is introduced.

  5. Mathematical modeling and numerical analysis of thermal distribution in arch dams considering solar radiation effect.

    PubMed

    Mirzabozorg, H; Hariri-Ardebili, M A; Shirkhan, M; Seyed-Kolbadi, S M

    2014-01-01

    The effect of solar radiation on the thermal distribution in thin high arch dams is investigated. The differential equation governing the thermal behavior of mass concrete in three-dimensional space is solved applying appropriate boundary conditions. Solar radiation is implemented considering the dam face direction relative to the sun, the slope relative to the horizon, the regional cloud cover, and the surrounding topography. It is observed that solar radiation changes the surface temperature drastically and leads to a nonuniform temperature distribution. Solar radiation effects should be considered in the thermal transient analysis of thin arch dams. PMID:24695817

  6. Using Micro-Synchrophasor Data for Advanced Distribution Grid Planning and Operations Analysis

    SciTech Connect

    Stewart, Emma; Kiliccote, Sila; McParland, Charles; Roberts, Ciaran

    2014-07-01

    This report reviews the potential for distribution-grid phase-angle data that will be available from new micro-synchrophasors (µPMUs) to be utilized in existing distribution-grid planning and operations analysis. This data could augment the current diagnostic capabilities of grid analysis software, used in both planning and operations for applications such as fault location, and provide data for more accurate modeling of the distribution system. µPMUs are new distribution-grid sensors that will advance measurement and diagnostic capabilities and provide improved visibility of the distribution grid, enabling analysis of the grid's increasingly complex loads, which include features such as large volumes of distributed generation. Large volumes of DG lead to concerns about continued reliable operation of the grid, due to changing power flow characteristics and active generation with its own protection and control capabilities. Using µPMU data on the change in voltage phase angle between two points, in conjunction with new and existing distribution-grid planning and operational tools, is expected to enable model validation, state estimation, fault location, and renewable resource/load characterization. Our findings include: data measurement is outstripping the processing capabilities of planning and operational tools; not every tool can visualize a voltage phase-angle measurement to the degree of accuracy measured by advanced sensors, and the degree of accuracy in measurement required for the distribution grid is not defined; solving methods cannot handle the high volumes of data generated by modern sensors, so new models and solving methods (such as graph trace analysis) are needed; and standardization of sensor-data communications platforms in planning and applications tools would allow integration of different vendors' sensors and advanced measurement devices. In addition, data from advanced sources such as µPMUs could be used to validate models to improve
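
    A sketch of the basic quantity a pair of µPMUs exposes: active power inferred from the voltage phase-angle difference across a line section, using the textbook relation P = V1·V2·sin(θ1-θ2)/X (all values below are illustrative):

        import math

        v1, v2 = 7200.0, 7185.0        # voltage magnitudes at the two ends (V)
        theta1, theta2 = 0.250, 0.243  # synchronized phase angles (rad)
        x_line = 1.8                   # series reactance of the section (ohm)

        p = v1 * v2 * math.sin(theta1 - theta2) / x_line
        print(f"estimated active power flow ~ {p / 1e6:.2f} MW")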

  7. Analysis of large-scale distributed knowledge sources via autonomous cooperative graph mining

    NASA Astrophysics Data System (ADS)

    Levchuk, Georgiy; Ortiz, Andres; Yan, Xifeng

    2014-05-01

    In this paper, we present a model for processing distributed relational data across multiple autonomous heterogeneous computing resources in environments with limited control, resource failures, and communication bottlenecks. Our model exploits dependencies in the data to enable collaborative distributed querying in noisy data. The collaboration policy for computational resources is efficiently constructed from the belief propagation algorithm. To scale to large data sizes, we employ a combination of priority-based filtering, incremental processing, and communication compression techniques. Our solution achieved high accuracy of analysis results and orders of magnitude improvements in computation time compared to the centralized graph matching solution.

  8. Mathematical Modeling and Numerical Analysis of Thermal Distribution in Arch Dams considering Solar Radiation Effect

    PubMed Central

    Mirzabozorg, H.; Hariri-Ardebili, M. A.; Shirkhan, M.; Seyed-Kolbadi, S. M.

    2014-01-01

    The effect of solar radiation on thermal distribution in thin high arch dams is investigated. The differential equation governing thermal behavior of mass concrete in three-dimensional space is solved applying appropriate boundary conditions. Solar radiation is implemented considering the dam face direction relative to the sun, the slop relative to horizon, the region cloud cover, and the surrounding topography. It has been observed that solar radiation changes the surface temperature drastically and leads to nonuniform temperature distribution. Solar radiation effects should be considered in thermal transient analysis of thin arch dams. PMID:24695817

  9. Evolution of the ATLAS PanDA Production and Distributed Analysis System

    NASA Astrophysics Data System (ADS)

    Maeno, T.; De, K.; Wenaus, T.; Nilsson, P.; Walker, R.; Stradling, A.; Fine, V.; Potekhin, M.; Panitkin, S.; Compostella, G.

    2012-12-01

    The PanDA (Production and Distributed Analysis) system has been developed to meet ATLAS production and analysis requirements for a data-driven workload management system capable of operating at LHC data processing scale. PanDA has performed well with high reliability and robustness during the two years of LHC data-taking, while being actively evolved to meet the rapidly changing requirements for analysis use cases. We will present an overview of system evolution including automatic rebrokerage and reattempt for analysis jobs, adaptation for the CernVM File System, support for the multi-cloud model through which Tier-2 sites act as members of multiple clouds, pledged resource management and preferential brokerage, and monitoring improvements. We will also describe results from the analysis of two years of PanDA usage statistics, current issues, and plans for the future.

  10. Do Insect Populations Die at Constant Rates as They Become Older? Contrasting Demographic Failure Kinetics with Respect to Temperature According to the Weibull Model.

    PubMed

    Damos, Petros; Soulopoulou, Polyxeni

    2015-01-01

    Temperature implies contrasting biological causes of demographic aging in poikilotherms. In this work, we used reliability theory to describe the consistency of mortality with age in moth populations and to show that differentiation in hazard rates is related to extrinsic environmental causes such as temperature. Moreover, experiments that manipulate extrinsic mortality were used to distinguish temperature-related death rates and the pertinence of the Weibull aging model. The Newton-Raphson optimization method was applied to calculate parameters for small samples of ages at death by estimating the maximum likelihood surfaces using scored gradient vectors and the Hessian matrix. The study reveals for the first time that the Weibull function is able to describe contrasting biological causes of demographic aging for moth populations maintained at different temperature regimes. We demonstrate that under favourable conditions the insect death rate accelerates as age advances, in contrast to the extreme temperatures, at which each individual drifts toward death in a linear fashion and has a constant chance of dying. Moreover, the slope of the hazard rate shifts towards a constant initial rate, a pattern demonstrated by systems that are not wearing out (i.e. non-aging), since failure, or death, is a random event independent of time. This finding may appear surprising because, traditionally, it was widely held that in an aging population the force of mortality increases exponentially until all individuals have died. Moreover, in contrast to other studies, we have not observed any typical decelerating aging patterns at late life (mortality leveling-off), but rather accelerated hazard rates at optimum temperatures and a stabilized increase at the extremes. In most cases, the increase in aging-related mortality was simulated reasonably well by the Weibull survivorship model that was applied. Moreover, semi-log probability hazard rate model
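
    A sketch of the two hazard regimes described above, using the Weibull hazard h(t) = (k/λ)(t/λ)^(k-1): k > 1 gives an accelerating death rate (wear-out), while k = 1 gives an age-independent, constant risk. The parameter values are illustrative:

        import numpy as np

        def weibull_hazard(t, k, lam):
            return (k / lam) * (t / lam) ** (k - 1)

        t = np.linspace(1.0, 30.0, 4)                                # age (days)
        print("k=2.5:", np.round(weibull_hazard(t, 2.5, 15.0), 3))   # accelerating
        print("k=1.0:", np.round(weibull_hazard(t, 1.0, 15.0), 3))   # constant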

  11. Do Insect Populations Die at Constant Rates as They Become Older? Contrasting Demographic Failure Kinetics with Respect to Temperature According to the Weibull Model

    PubMed Central

    Damos, Petros; Soulopoulou, Polyxeni

    2015-01-01

    Temperature implies contrasting biological causes of demographic aging in poikilotherms. In this work, we used reliability theory to describe the consistency of mortality with age in moth populations and to show that differentiation in hazard rates is related to extrinsic environmental causes such as temperature. Moreover, experiments that manipulate extrinsic mortality were used to distinguish temperature-related death rates and the pertinence of the Weibull aging model. The Newton-Raphson optimization method was applied to calculate parameters for small samples of ages at death by estimating the maximum likelihood surfaces using scored gradient vectors and the Hessian matrix. The study reveals for the first time that the Weibull function is able to describe contrasting biological causes of demographic aging for moth populations maintained at different temperature regimes. We demonstrate that under favourable conditions the insect death rate accelerates as age advances, in contrast to the extreme temperatures, at which each individual drifts toward death in a linear fashion and has a constant chance of dying. Moreover, the slope of the hazard rate shifts towards a constant initial rate, a pattern demonstrated by systems that are not wearing out (i.e. non-aging), since failure, or death, is a random event independent of time. This finding may appear surprising because, traditionally, it was widely held that in an aging population the force of mortality increases exponentially until all individuals have died. Moreover, in contrast to other studies, we have not observed any typical decelerating aging patterns at late life (mortality leveling-off), but rather accelerated hazard rates at optimum temperatures and a stabilized increase at the extremes. In most cases, the increase in aging-related mortality was simulated reasonably well by the Weibull survivorship model that was applied. Moreover, semi-log probability hazard rate model

  12. GADS software for parametric linkage analysis of quantitative traits distributed as a point-mass mixture.

    PubMed

    Axenovich, Tatiana I; Zorkoltseva, Irina V

    2012-02-01

    Often the quantitative data coming from proteomics and metabolomics studies have an irregular distribution with a spike. None of the widely used methods for human QTL mapping is applicable to such traits. Researchers have to reduce the sample by excluding the spike and analyze only the continuous measurements. In this study, we propose a method for the parametric linkage analysis of traits with a spike in the distribution, and a software package, GADS, which implements this method. Our software includes not only the programs for parametric linkage analysis, but also a program for complex segregation analysis, which allows estimation of the model parameters used in linkage. We tested our method on real data on the vertical cup-to-disc ratio, an important characteristic of the optic disc associated with glaucoma, in a large pedigree from a Dutch isolated population. A significant linkage signal was identified on chromosome 6 with the help of GADS, whereas the analysis of the normally distributed part of the sample demonstrated only a suggestive linkage peak on this chromosome. The software GADS is freely available at http://mga.bionet.nsc.ru/soft/index.html. PMID:22340440

  13. Strain analysis from objects with a random distribution: A generalized center-to-center method

    NASA Astrophysics Data System (ADS)

    Shan, Yehua; Liang, Xinquan

    2014-03-01

    Existing methods of strain analysis such as the center-to-center method and the Fry method estimate strain from the spatial relationship between point objects in the deformed state. They assume a truncated Poisson distribution of point objects in the pre-deformed state. Significant deviations from this assumption occur in nature and diffuse the central vacancy in a Fry plot, limiting its effectiveness as a strain gauge. Therefore, a generalized center-to-center method is proposed to deal with point objects with the more general Poisson distribution, in which the method's outcome does not depend on the analysis of a graphical central vacancy. This new method relies upon the probability mass function of the Poisson distribution and adopts the maximum likelihood method to solve for strain. The feasibility of the method is demonstrated by applying it to artificial data sets generated for known strains. Further analysis of these sets by use of the bootstrap method shows that the accuracy of the strain estimate has a strong tendency to increase either with point number or with the inclusion of more pre-deformation nearest neighbors. A poorly sorted, well packed, deformed conglomerate is analyzed, yielding a strain estimate similar to the vector mean of the major-axis directions of the pebbles and the harmonic mean of their axial ratios from a shape-based strain determination method. These outcomes support the applicability of the new method to the analysis of deformed rocks with appropriate strain markers.

  14. Nucleation, adatom capture, and island size distributions: Unified scaling analysis for submonolayer deposition

    SciTech Connect

    Evans, J. W.; Bartelt, M. C.

    2001-06-15

    We consider the irreversible nucleation and growth of two-dimensional islands during submonolayer deposition in the regime of large island sizes. A quasihydrodynamic analysis of rate equations for island densities yields an ordinary differential equation (ODE) for the scaling function describing the island size distribution. This ODE involves the scaling function for the dependence on island size of "capture numbers" describing the aggregation of diffusing adatoms. The latter is determined via a quasihydrodynamic analysis of rate equations for the areas of "capture zones" surrounding islands. Alternatively, a more complicated analysis yields a partial differential equation (PDE) for the scaling function describing the joint probability distribution for island sizes and capture zone areas. Then, applying a moment analysis to this PDE, we obtain refined versions of the above ODEs, together with a third equation for the variance of the cell area distribution (for islands of a given size). The key nontrivial input to the above equations is a detailed characterization of nucleation. We analyze these equations for a general formulation of nucleation, as well as for an idealized picture considered previously, wherein nucleated islands have capture zones lying completely within those of existing islands.

  15. Age Dating Fluvial Sediment Storage Reservoirs to Construct Sediment Waiting Time Distributions

    NASA Astrophysics Data System (ADS)

    Skalak, K.; Pizzuto, J. E.; Benthem, A.; Karwan, D. L.; Mahan, S.

    2015-12-01

    Suspended sediment transport is an important geomorphic process that can often control the transport of nutrients and contaminants. The time a particle spends in storage remains a critical knowledge gap in understanding particle trajectories through landscapes. We dated floodplain deposits in South River, VA, using fallout radionuclides (Pb-210, Cs-137), optically stimulated luminescence (OSL), and radiocarbon dating to determine sediment ages and construct sediment waiting time distributions. We have a total of 14 age dates in two eroding banks. We combine these age dates with a well-constrained history of mercury concentrations on suspended sediment in the river from an industrial release. Ages from fallout radionuclides document sedimentation from the early 1900s to the present, and agree with the history of mercury contamination. OSL dates span approximately 200 to 17,000 years old. We performed a standard Weibull analysis of nonexceedance to construct a waiting time distribution of floodplain sediment for the South River. The mean waiting time for floodplain sediment is 2930 years, while the median is approximately 710 years. When the floodplain waiting time distribution is combined with the waiting time distribution for in-channel sediment storage (available from previous studies), the mean waiting time shifts to approximately 680 years, suggesting that quantifying sediment waiting times for both channel and floodplain storage is critical in advancing knowledge of particle trajectories through watersheds.
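
    A sketch of the nonexceedance step: ranking the dated deposits and assigning Weibull plotting positions F_i = i/(n+1); the ages below are illustrative, not the South River dates:

        import numpy as np

        ages = np.sort(np.array([80., 150., 240., 420., 710.,
                                 1300., 2900., 9000., 17000.]))   # yr
        n = ages.size
        F = np.arange(1, n + 1) / (n + 1.0)      # nonexceedance probability

        median_age = np.interp(0.5, F, ages)     # waiting time at F = 0.5
        print(f"median ~ {median_age:.0f} yr, mean ~ {ages.mean():.0f} yr")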

  16. Simulation of flight maneuver-load distributions by utilizing stationary, non-Gaussian random load histories

    NASA Technical Reports Server (NTRS)

    Leybold, H. A.

    1971-01-01

    Random numbers were generated with the aid of a digital computer and transformed such that the probability density function of a discrete random load history composed of these random numbers had one of the following non-Gaussian distributions: Poisson, binomial, log-normal, Weibull, and exponential. The resulting random load histories were analyzed to determine their peak statistics and were compared with cumulative peak maneuver-load distributions for fighter and transport aircraft in flight.
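
    A sketch of the generation step: inverse-transform sampling turns computer-generated uniform random numbers into a discrete load history with a prescribed non-Gaussian distribution (the Weibull case is shown; the peak definition is simplified):

        import numpy as np

        rng = np.random.default_rng(9)
        u = rng.random(100000)                          # uniform random numbers
        k, lam = 1.4, 1.0
        loads = lam * (-np.log(1.0 - u)) ** (1.0 / k)   # Weibull inverse CDF

        # peaks: samples larger than both neighbours
        mid = loads[1:-1]
        peaks = mid[(mid > loads[:-2]) & (mid > loads[2:])]
        print(f"{peaks.size} peaks, largest = {peaks.max():.2f}")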

  17. Long-term mechanical life testing of polymeric post insulators for distribution and a comparison to porcelain

    SciTech Connect

    Cherney, E.A. )

    1988-07-01

    The paper presents the results and analyses of long-term cantilever strength tests on polymeric line post insulators. The time-to-failure data for static cantilever loads are represented by the Weibull distribution. The life distribution, obtained from the maximum likelihood estimates of the accelerated failure times, fits an exponential model. An extrapolation of the life distribution to normal loads provides an estimate of the strength rating and mechanical equivalence to porcelain line post insulators.

  18. [Analysis of streamer properties and emission spectroscopy of 2-D OH distribution of pulsed corona discharge].

    PubMed

    Zhao, Lei; Gao, Xiang; Luo, Zhong-Yang; Xuan, Jian-Yong; Jiang, Jian-Ping; Cen, Ke-Fa

    2011-11-01

    Streamers play a key role in the process of OH radical generation. The propagation of the primary and secondary streamers of a positive wire-plate pulsed corona discharge was observed using a short-gate ICCD camera in air. The influence of the applied voltage on the streamer properties was investigated. It was shown that the primary streamer propagation velocity, the electric coverage, and the length of the secondary streamer increased significantly with increasing applied voltage. The 2-D OH distribution was then investigated via the emission spectrum. Analysis of the OH emission spectra showed that the distribution of OH radicals decreases from the wire electrode towards its surroundings. Comparing this with the streamer propagation traces, the authors found that the OH radical distribution and the streamers occupy the same area; both the OH radical concentration and the streamer intensity decreased with distance from the wire electrode. PMID:22242481

  19. Detecting Distributed Network Traffic Anomaly with Network-Wide Correlation Analysis

    NASA Astrophysics Data System (ADS)

    Zonglin, Li; Guangmin, Hu; Xingmiao, Yao; Dan, Yang

    2008-12-01

    A distributed network traffic anomaly is an abnormal traffic behavior that involves many links of a network and is caused by the same source (e.g., a DDoS attack or worm propagation). The anomaly transiting a single link might be unnoticeable and hard to detect, while the anomalous aggregate over many links can be prominent and does more harm to the network. Exploiting the similar features of a distributed traffic anomaly across many links, this paper proposes a network-wide detection method that performs anomalous correlation analysis of the instantaneous parameters of traffic signals. In our method, the traffic signals' instantaneous parameters are first computed, and their network-wide anomalous space is then extracted via traffic prediction. Finally, an anomaly is detected by a global correlation coefficient of the anomalous space. Our evaluation using Abilene traffic traces demonstrates the excellent performance of this approach for distributed traffic anomaly detection.
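
    A sketch of a network-wide indicator in this spirit: the mean pairwise correlation of the links' anomalous components rises when a common-source anomaly hits many links at once (the residual signals are synthetic; the paper's prediction-based extraction step is not shown):

        import numpy as np

        rng = np.random.default_rng(4)
        residuals = rng.normal(0.0, 1.0, (6, 500))   # 6 links x 500 time bins
        residuals[:, 300:320] += 4.0                 # same-source spike on all links

        C = np.corrcoef(residuals)                   # link-by-link correlation matrix
        off_diag = C[~np.eye(6, dtype=bool)]
        print(f"global correlation coefficient ~ {off_diag.mean():.2f}")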

  20. Analysis of electron energy distribution function in the Linac4 H⁻ source.

    PubMed

    Mochizuki, S; Mattei, S; Nishida, K; Hatayama, A; Lettry, J

    2016-02-01

    To understand the electron energy distribution function (EEDF) in the radio-frequency inductively coupled plasmas (RF-ICPs) of hydrogen negative ion sources, a detailed analysis of the EEDFs using numerical simulation and a theoretical approach based on the Boltzmann equation has been performed. It is shown that the EEDF of RF-ICPs consists of two parts: a low-energy part that obeys a Maxwellian distribution, and a high-energy part that deviates from the Maxwellian distribution. These simulation results have been confirmed to be reasonable by the analytical approach. The results suggest that it is possible to enhance the dissociation of molecules, and the resultant H(-) negative ion production, by reducing the gas pressure. PMID:26931990

  1. Principal Effects of Axial Load on Moment-Distribution Analysis of Rigid Structures

    NASA Technical Reports Server (NTRS)

    James, Benjamin Wylie

    1935-01-01

    This thesis presents the method of moment distribution modified to include the effect of axial load upon the bending moments. This modification makes it possible to analyze accurately complex structures, such as rigid fuselage trusses, that heretofore had to be analyzed by approximate formulas and empirical rules. The method is simple enough to be practicable even for complex structures, and it gives a means of analysis for continuous beams that is simpler than the extended three-moment equation now in common use. When the effect of axial load is included, it is found that the basic principles of moment distribution remain unchanged, the only difference being that the factors used, instead of being constants for a given member, become functions of the axial load. Formulas have been developed for these factors, and curves plotted, so that their application requires no more work than moment distribution without axial load. Simple problems have been included to illustrate the use of the curves.

  2. Analysis of temperature distribution during tension test of glass fiber reinforced plastic by fiber orientation variation.

    PubMed

    Kim, Jin-Woo; Kim, Hyoung-Seok; Lee, Dong-Gi

    2014-10-01

    In this paper, an analysis of the temperature distribution as a function of fiber orientation during tension testing is presented, based on IR thermography. The lock-in method, an IR thermography technique for measuring minute changes in temperature, was utilized to monitor the temperature distribution and its change during crack propagation. A method to analyze the temperature distribution of GFRP under tension testing for varying fiber orientations via an IR thermography camera is suggested. At the maximum stress point, the temperature increased significantly. Specimens with shorter fracture times showed an abrupt temperature increase at the maximum stress point, whereas specimens with longer fracture times displayed the temperature increase after the maximum stress point. PMID:25942822

  3. Numerical Analysis of Magnetic Field Distribution of Magnetic Micro-barcodes for Suspension Assay Technology

    NASA Astrophysics Data System (ADS)

    Son, Vo Thanh; Anandakumar, S.; Kim, CheolGi; Jeong, Jong-Ruyl

    2011-12-01

    In this study, we have investigated the feasibility of real-time decoding of magnetic micro-barcodes in a microfluidic channel by numerical analysis of the magnetic field distribution of the micro-barcodes. A vector potential model based on molecular currents was used to obtain the magnetic stray field distribution of the ferromagnetic bars that constitute the micro-barcodes. It reveals that the stray field distribution of the micro-barcodes depends strongly on the geometry of the ferromagnetic bars. Interestingly, we have found that the miniaturization of the magnetic sensor device, otherwise needed to increase the sensitivity, can be avoided by optimizing the geometry of the micro-barcodes. We also estimated the magnetic sensor response as a function of the flying height and lateral misalignment of the micro-barcodes over the sensor position, and found that control of the flying height is a crucial factor for enhancing the detection sensitivity and the reproducibility of the magnetic sensor signal in suspension assay technology.

  4. A Meta-Analysis of Distributed Leadership from 2002 to 2013: Theory Development, Empirical Evidence and Future Research Focus

    ERIC Educational Resources Information Center

    Tian, Meng; Risku, Mika; Collin, Kaija

    2016-01-01

    This article provides a meta-analysis of research conducted on distributed leadership from 2002 to 2013. It continues the review of distributed leadership commissioned by the English National College for School Leadership (NCSL) ("Distributed Leadership: A Desk Study," Bennett et al., 2003), which identified two gaps in the research…

  5. Validation results of the IAG Dancer project for distributed GPS analysis

    NASA Astrophysics Data System (ADS)

    Boomkamp, H.

    2012-12-01

    The number of permanent GPS stations in the world has grown far too large to allow processing of all this data at analysis centers. The majority of these GPS sites do not even make their observation data available to the analysis centers, for various valid reasons. The current ITRF solution is still based on centralized analysis by the IGS, and subsequent densification of the reference frame via regional network solutions. Minor inconsistencies in analysis methods, software systems and data quality imply that this centralized approach is unlikely to ever reach the ambitious accuracy objectives of GGOS. The dependence on published data also makes it clear that a centralized approach will never provide a true global ITRF solution for all GNSS receivers in the world. If the data does not come to the analysis, the only alternative is to bring the analysis to the data. The IAG Dancer project has implemented a distributed GNSS analysis system on the internet in which each receiver can have its own analysis center in the form of a freely distributed JAVA peer-to-peer application. Global parameters for satellite orbits, clocks and polar motion are solved via a distributed least squares solution among all participating receivers. A Dancer instance can run on any computer that has simultaneous access to the receiver data and to the public internet. In the future, such a process may be embedded in the receiver firmware directly. GPS network operators can join the Dancer ITRF realization without having to publish their observation data or estimation products. GPS users can run a Dancer process without contributing to the global solution, to have direct access to the ITRF in near real-time. The Dancer software has been tested on-line since late 2011. A global network of processes has gradually evolved to allow stabilization and tuning of the software in order to reach a fully operational system. This presentation reports on the current performance of the Dancer system, and

  6. Development of a Web Service for Analysis in a Distributed Network

    PubMed Central

    Jiang, Xiaoqian; Wu, Yuan; Marsolo, Keith; Ohno-Machado, Lucila

    2014-01-01

    Objective: We describe functional specifications and practicalities in the software development process for a web service that allows the construction of the multivariate logistic regression model, Grid Logistic Regression (GLORE), by aggregating partial estimates from distributed sites, with no exchange of patient-level data. Background: We recently developed and published a web service for model construction and data analysis in a distributed environment. This recent paper provided an overview of the system that is useful for users, but included very few details that are relevant for biomedical informatics developers or network security personnel who may be interested in implementing this or similar systems. We focus here on how the system was conceived and implemented. Methods: We followed a two-stage development approach by first implementing the backbone system and incrementally improving the user experience through interactions with potential users during the development. Our system went through various stages such as concept proof, algorithm validation, user interface development, and system testing. We used the Zoho Project management system to track tasks and milestones. We leveraged Google Code and Apache Subversion to share code among team members, and developed an applet-servlet architecture to support the cross platform deployment. Discussion: During the development process, we encountered challenges such as Information Technology (IT) infrastructure gaps and limited team experience in user-interface design. We figured out solutions as well as enabling factors to support the translation of an innovative privacy-preserving, distributed modeling technology into a working prototype. Conclusion: Using GLORE (a distributed model that we developed earlier) as a pilot example, we demonstrated the feasibility of building and integrating distributed modeling technology into a usable framework that can support privacy-preserving, distributed data analysis among
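
    GLORE's defining property is that sites exchange only partial estimates, never patient-level rows. A minimal sketch in that spirit, assuming a central coordinator that aggregates per-site gradients and Hessians in a Newton-Raphson loop (a simplified illustration, not the GLORE web service code), follows.

      # Sketch of privacy-preserving logistic regression in the spirit of
      # GLORE: each site shares only its local gradient and Hessian per
      # Newton iteration. Data below are synthetic.
      import numpy as np

      rng = np.random.default_rng(1)
      beta_true = np.array([0.5, -1.0, 2.0])

      def make_site(n):
          X = np.c_[np.ones(n), rng.normal(size=(n, 2))]
          p = 1 / (1 + np.exp(-X @ beta_true))
          return X, rng.binomial(1, p)

      sites = [make_site(400) for _ in range(3)]

      def local_grad_hess(X, y, beta):
          p = 1 / (1 + np.exp(-X @ beta))
          g = X.T @ (y - p)                       # local score vector
          H = -(X.T * (p * (1 - p))) @ X          # local Hessian
          return g, H

      beta = np.zeros(3)
      for _ in range(25):                         # central Newton-Raphson
          g = sum(local_grad_hess(X, y, beta)[0] for X, y in sites)
          H = sum(local_grad_hess(X, y, beta)[1] for X, y in sites)
          beta = beta - np.linalg.solve(H, g)

      print(beta)  # close to beta_true, with no patient-level exchange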

  7. Phenotype Clustering of Breast Epithelial Cells in Confocal Imagesbased on Nuclear Protein Distribution Analysis

    SciTech Connect

    Long, Fuhui; Peng, Hanchuan; Sudar, Damir; Levievre, Sophie A.; Knowles, David W.

    2006-09-05

    Background: The distribution of chromatin-associated proteins plays a key role in directing nuclear function. Previously, we developed an image-based method to quantify the nuclear distributions of proteins and showed that these distributions depended on the phenotype of human mammary epithelial cells. Here we describe a method that creates a hierarchical tree of the given cell phenotypes and calculates the statistical significance between them, based on the clustering analysis of nuclear protein distributions. Results: Nuclear distributions of nuclear mitotic apparatus protein were previously obtained for non-neoplastic S1 and malignant T4-2 human mammary epithelial cells cultured for up to 12 days. Cell phenotype was defined as S1 or T4-2 and the number of days in culture. A probabilistic ensemble approach was used to define a set of consensus clusters from the results of multiple traditional cluster analysis techniques applied to the nuclear distribution data. Cluster histograms were constructed to show how cells in any one phenotype were distributed across the consensus clusters. Grouping various phenotypes allowed us to build phenotype trees and calculate the statistical difference between each group. The results showed that non-neoplastic S1 cells could be distinguished from malignant T4-2 cells with 94.19 percent accuracy; that proliferating S1 cells could be distinguished from differentiated S1 cells with 92.86 percent accuracy; and showed no significant difference between the various phenotypes of T4-2 cells corresponding to increasing tumor sizes. Conclusion: This work presents a cluster analysis method that can identify significant cell phenotypes, based on the nuclear distribution of specific proteins, with high accuracy.
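
    The ensemble step can be sketched in outline: run several base clusterings, accumulate a co-association ("consensus") matrix, and cut a hierarchical tree on it. The code below is an illustrative toy on synthetic data, not the paper's probabilistic ensemble or its phenotype-tree statistics.

      # Sketch of consensus clustering: combine labels from several base
      # clusterings into a co-association matrix, then cut a hierarchical
      # tree on the consensus distances.
      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster
      from scipy.spatial.distance import squareform
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(2)
      X = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(4, 1, (30, 5))])
      n = len(X)

      # Base clusterings with different k and seeds.
      labelings = [KMeans(n_clusters=k, n_init=5, random_state=s).fit_predict(X)
                   for k in (2, 3, 4) for s in range(3)]

      # Co-association: fraction of base clusterings pairing i with j.
      co = np.zeros((n, n))
      for lab in labelings:
          co += (lab[:, None] == lab[None, :])
      co /= len(labelings)

      # Hierarchical tree on the consensus distances; cut into 2 clusters.
      dist = squareform(1.0 - co, checks=False)
      consensus = fcluster(linkage(dist, method="average"), t=2,
                           criterion="maxclust")
      print(consensus)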

  8. Stress distribution around osseointegrated implants with different internal-cone connections: photoelastic and finite element analysis.

    PubMed

    Anami, Lilian Costa; da Costa Lima, Júlia Magalhães; Takahashi, Fernando Eidi; Neisser, Maximiliano Piero; Noritomi, Pedro Yoshito; Bottino, Marco Antonio

    2015-04-01

    The goal of this study was to evaluate the distribution of stresses generated around implants with different internal-cone abutments by photoelastic analysis (PA) and finite element analysis (FEA). For the FEA, implants and abutments with different internal-cone connections (H, hexagonal; S, solid) were scanned, 3D meshes were modeled, and the objects were loaded in computer software. Trabecular and cortical bone and photoelastic resin blocks were simulated. The PA was performed with photoelastic resin blocks in which the implants were embedded and the different abutments were bolted. Specimens were observed in a circular polariscope with the loading device attached, and loads were applied under the same conditions as in the FEA. The FEA images showed very similar stress distributions between the two models with different abutments. Differences were observed between the stress distributions in the bone and resin blocks; the PA images resembled those obtained in the resin-block FEA. The PA images were also analyzed quantitatively by comparing the values assigned to the fringes. For both analysis methods, the S abutment distributed loads more evenly to the bone adjacent to the implant than the H abutment did. The PA generated results very similar to those obtained in the FEA with the resin block. PMID:23750560

  9. Analysis of spatial distribution of mining tremors occurring in Rudna copper mine (Poland)

    NASA Astrophysics Data System (ADS)

    Kozłowska, Maria

    2013-10-01

    The distribution of mining tremors is strictly related to the progress of mining works and, consequently, to the local stress field. If this distribution is known, it is possible to determine future areas of intensive seismicity in an exploited mining panel. This paper presents an analysis of the working-face-to-tremor distance for the Rudna copper mine in Poland. In order to develop a spatial model of tremor occurrence in the exploited mine, the seismicity of four mining sections over a five-month period was investigated and the tremor distribution was obtained. It was compared with the spatial distribution of tremors in coal mines reported in the literature. The results show that the places where tremors mostly occur (in the vicinity of the face, in front of it) coincide with the high-stress area predicted by literature models. The obtained results help to predict the future seismic zone connected with a planned mining section, which can be used in seismic hazard analysis.

  10. Sensitivity analysis for large-deflection and postbuckling responses on distributed-memory computers

    NASA Technical Reports Server (NTRS)

    Watson, Brian C.; Noor, Ahmed K.

    1995-01-01

    A computational strategy is presented for calculating sensitivity coefficients for the nonlinear large-deflection and postbuckling responses of laminated composite structures on distributed-memory parallel computers. The strategy is applicable to any message-passing distributed computational environment. The key elements of the proposed strategy are: (1) a multiple-parameter reduced basis technique; (2) a parallel sparse equation solver based on a nested dissection (or multilevel substructuring) node ordering scheme; and (3) a multilevel parallel procedure for evaluating hierarchical sensitivity coefficients. The hierarchical sensitivity coefficients measure the sensitivity of the composite structure response to variations in three sets of interrelated parameters; namely, laminate, layer and micromechanical (fiber, matrix, and interface/interphase) parameters. The effectiveness of the strategy is assessed by performing hierarchical sensitivity analysis for the large-deflection and postbuckling responses of stiffened composite panels with cutouts on three distributed-memory computers. The panels are subjected to combined mechanical and thermal loads. The numerical studies presented demonstrate the advantages of the reduced basis technique for hierarchical sensitivity analysis on distributed-memory machines.

  11. Performance analysis of a brushless dc motor due to magnetization distribution in a continuous ring magnet

    NASA Astrophysics Data System (ADS)

    Hur, Jin; Jung, In-Soung; Sung, Ha-Gyeong; Park, Soon-Sup

    2003-05-01

    This paper presents the force performance of a brushless dc motor with a continuous ring-type permanent magnet (PM), considering its magnetization patterns: trapezoidal, trapezoidal with dead zone, and unbalanced trapezoidal with dead zone. The radial force density in a PM motor causes vibration, because vibration is induced by the traveling force from the rotating PM acting on the stator. The magnetization distribution of the PM, as well as the shape of the teeth, determines the distribution of the force density. In particular, the distribution has a three-dimensional (3-D) pattern because of the overhang; that is, it is not uniform in the axial direction. Thus, the analysis of radial force density requires dynamic analysis considering the 3-D shape of the teeth and the overhang. The results show that the force density as a source of vibration varies considerably depending on the overhang and the magnetization distribution pattern. In addition, the validity of the developed method (a coupled 3-D equivalent magnetic circuit network method with driving circuit and motion equation) is confirmed by comparison with a conventional method using the 3-D finite element method.

  12. Stress distribution on a valgus knee prosthetic inclined interline -- a finite element analysis.

    PubMed

    Orban, H; Stan, G; Gruionu, L; Orban, C

    2013-01-01

    Total knee arthroplasty following valgus deformity is a challenging procedure due to the unique set of problems that must be addressed. The aim of this study is to determine, with a finite element analysis, the load distribution for a prosthetic knee balanced on an inclined valgus interline and to compare these results with those of a prosthetic knee balanced on an uninclined interline. Computational simulations, using finite element analysis, focused on a comparison of load intensity and distribution between these situations. We studied valgus inclinations of 3 and 8 degrees. We noticed that for an inclination of 3 degrees, the forces are distributed almost symmetrically on both condyles, similar to the distribution of forces in the uninclined-interline case. The maximum contact pressure is greater, increasing from 15 MPa to 19.3 MPa (28%). At 8 degrees of inclination, the contact patch moved anterolaterally on the tibia, meaning that the tibial condyles will be unequally loaded. The maximum contact pressure increases to 25 MPa (66%). These greater forces could lead to polyethylene wear and collapse. Additional tibial resection could be a useful method for balancing a severe valgus knee when the valgus inclination does not exceed 3 degrees. PMID:23464776

  13. A landscape analysis of cougar distribution and abundance in Montana, USA.

    PubMed

    Riley, S J; Malecki, R A

    2001-09-01

    Recent growth in the distribution and abundance of cougars (Puma concolor) throughout western North America has created opportunities, challenges, and problems for wildlife managers and raises questions about what factors affect cougar populations. We present an analysis of factors thought to affect cougar distribution and abundance across the broad geographical scales on which most population management decisions are made. Our objectives were to: (1) identify and evaluate landscape parameters that can be used to predict the capability of habitats to support cougars, and (2) evaluate factors that may account for the recent expansion in cougar numbers. Habitat values based on terrain ruggedness and forested cover explained 73% of the variation in a cougar abundance index. Indices of cougar abundance also were spatially and temporally correlated with ungulate abundance. An increase in the number and total biomass of ungulate prey species is hypothesized to account for recent increases in cougars. Cougar populations in Montana are coping with land development by humans when other components of habitat and prey populations are sufficient. Our analysis provides a better understanding of what may have influenced recent growth in cougar distribution and abundance in Montana and, when combined with insights about stakeholder acceptance capacity, offers a basis for cougar management at broad scales. Long-term conservation of cougars necessitates a better understanding of ecosystem functions that affect prey distribution and abundance, more accurate estimates of cougar populations, and management abilities to integrate these components with human values. PMID:11531235

  14. Modeling Exon-Specific Bias Distribution Improves the Analysis of RNA-Seq Data.

    PubMed

    Liu, Xuejun; Zhang, Li; Chen, Songcan

    2015-01-01

    RNA-seq technology has become an important tool for quantifying gene and transcript expression in transcriptome studies. The two major difficulties in gene and transcript expression quantification are read-mapping ambiguity and the overdispersion of the read distribution along the reference sequence. Many approaches have been proposed to deal with these difficulties. A number of existing methods use the Poisson distribution to model the read counts, which easily splits the counts into the contributions from multiple transcripts. Meanwhile, various solutions have been put forward to account for the overdispersion in the Poisson models. By checking the similarities among the variation patterns of read counts for individual genes, we found that the count variation is exon-specific and has a conserved pattern across samples for each individual gene. We introduce Gamma-distributed latent variables to model the read sequencing preference for each exon. These variables are embedded in the rate parameter of a Poisson model to account for the overdispersion of the read distribution. The model is tractable because the Gamma priors can be integrated out in the maximum likelihood estimation. We evaluate the proposed approach, PGseq, using four real datasets and one simulated dataset, and compare its performance with other popular methods. Results show that PGseq presents competitive performance compared to other alternatives in terms of accuracy in gene and transcript expression calculation and in downstream differential expression analysis. In particular, we show the advantage of our method in the analysis of low expression. PMID:26448625
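
    The mathematical core, integrating the Gamma prior out of the Poisson rate, yields a negative binomial marginal likelihood that can be maximized directly. The sketch below demonstrates only this step on synthetic counts; PGseq's exon-specific, multi-transcript model is substantially richer.

      # Gamma-Poisson sketch: a Gamma(r, r/mu) prior on the Poisson rate
      # integrates out to a negative binomial, fit here by maximum
      # likelihood on synthetic overdispersed "read counts".
      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import gammaln

      rng = np.random.default_rng(3)
      mu_true, r_true = 50.0, 4.0
      lam = rng.gamma(shape=r_true, scale=mu_true / r_true, size=500)
      counts = rng.poisson(lam)                    # overdispersed counts

      def neg_loglik(params):
          mu, r = np.exp(params)                   # positivity via log-params
          k = counts
          ll = (gammaln(k + r) - gammaln(r) - gammaln(k + 1)
                + r * np.log(r / (r + mu)) + k * np.log(mu / (r + mu)))
          return -ll.sum()

      res = minimize(neg_loglik, x0=[np.log(counts.mean()), 0.0],
                     method="Nelder-Mead")
      print(np.exp(res.x))                         # approx (mu_true, r_true)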

  15. Modeling Exon-Specific Bias Distribution Improves the Analysis of RNA-Seq Data

    PubMed Central

    Liu, Xuejun; Zhang, Li; Chen, Songcan

    2015-01-01

    RNA-seq technology has become an important tool for quantifying gene and transcript expression in transcriptome studies. The two major difficulties in gene and transcript expression quantification are read-mapping ambiguity and the overdispersion of the read distribution along the reference sequence. Many approaches have been proposed to deal with these difficulties. A number of existing methods use the Poisson distribution to model the read counts, which easily splits the counts into the contributions from multiple transcripts. Meanwhile, various solutions have been put forward to account for the overdispersion in the Poisson models. By checking the similarities among the variation patterns of read counts for individual genes, we found that the count variation is exon-specific and has a conserved pattern across samples for each individual gene. We introduce Gamma-distributed latent variables to model the read sequencing preference for each exon. These variables are embedded in the rate parameter of a Poisson model to account for the overdispersion of the read distribution. The model is tractable because the Gamma priors can be integrated out in the maximum likelihood estimation. We evaluate the proposed approach, PGseq, using four real datasets and one simulated dataset, and compare its performance with other popular methods. Results show that PGseq presents competitive performance compared to other alternatives in terms of accuracy in gene and transcript expression calculation and in downstream differential expression analysis. In particular, we show the advantage of our method in the analysis of low expression. PMID:26448625

  16. Identifying Synonymy between SNOMED Clinical Terms of Varying Length Using Distributional Analysis of Electronic Health Records

    PubMed Central

    Henriksson, Aron; Conway, Mike; Duneld, Martin; Chapman, Wendy W.

    2013-01-01

    Medical terminologies and ontologies are important tools for natural language processing of health record narratives. To account for the variability of language use, synonyms need to be stored in a semantic resource as textual instantiations of a concept. Developing such resources manually is, however, prohibitively expensive and likely to result in low coverage. To facilitate and expedite the process of lexical resource development, distributional analysis of large corpora provides a powerful data-driven means of (semi-)automatically identifying semantic relations, including synonymy, between terms. In this paper, we demonstrate how distributional analysis of a large corpus of electronic health records – the MIMIC-II database – can be employed to extract synonyms of SNOMED CT preferred terms. A distinctive feature of our method is its ability to identify synonymous relations between terms of varying length. PMID:24551362
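
    The general recipe of such distributional analysis can be sketched as follows: treat each candidate (multiword) term as a single token, accumulate context co-occurrence vectors over the corpus, and rank term pairs by cosine similarity. The toy corpus and terms below are invented; the paper's method operates on MIMIC-II at a far larger scale and with a more refined model.

      # Toy distributional-similarity sketch: multiword terms are treated
      # as single tokens, so terms of varying length become comparable.
      import numpy as np

      docs = [
          "patient denies chest pain and shortness_of_breath today",
          "chest pain resolved , no shortness_of_breath reported",
          "dyspnea worse on exertion , patient reports dyspnea at rest",
          "no dyspnea , no chest pain on admission",
      ]
      window = 2
      vocab = sorted({w for d in docs for w in d.split()})
      idx = {w: i for i, w in enumerate(vocab)}
      vec = {w: np.zeros(len(vocab)) for w in vocab}

      # Accumulate symmetric context-window co-occurrence counts.
      for d in docs:
          toks = d.split()
          for i, w in enumerate(toks):
              for j in range(max(0, i - window), min(len(toks), i + window + 1)):
                  if j != i:
                      vec[w][idx[toks[j]]] += 1

      def cosine(a, b):
          return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

      # The synonym candidate scores higher than an unrelated token.
      print(cosine(vec["shortness_of_breath"], vec["dyspnea"]))
      print(cosine(vec["shortness_of_breath"], vec["admission"]))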

  17. Passive-scheme analysis for solving the untrusted source problem in quantum key distribution

    NASA Astrophysics Data System (ADS)

    Peng, Xiang; Xu, Bingjie; Guo, Hong

    2010-04-01

    As a practical method, the passive scheme is useful to monitor the photon statistics of an untrusted source in a “Plug & Play” quantum key distribution (QKD) system. In a passive scheme, three kinds of monitor mode can be adopted: average photon number (APN) monitor, photon number analyzer (PNA), and photon number distribution (PND) monitor. In this paper, the security analysis is rigorously given for the APN monitor, while for the PNA, the analysis, including statistical fluctuation and random noise, is addressed with a confidence level. The results show that the PNA can achieve better performance than the APN monitor and can asymptotically approach the theoretical limit of the PND monitor. Also, the passive scheme with the PNA works efficiently when the signal-to-noise ratio (R^SN) is not too low and so is highly applicable to solve the untrusted source problem in the QKD system.

  18. Risk analysis of highly combustible gas storage, supply, and distribution systems in PWR plants

    SciTech Connect

    Simion, G.P.; VanHorn, R.L.; Smith, C.L.; Bickel, J.H.; Sattison, M.B.; Bulmahn, K.D.

    1993-06-01

    This report presents the evaluation of the potential safety concerns for pressurized water reactors (PWRs) identified in Generic Safety Issue 106, Piping and the Use of Highly Combustible Gases in Vital Areas. A Westinghouse four-loop PWR plant was analyzed for the risk due to the use of combustible gases (predominantly hydrogen) within the plant. The analysis evaluated an actual hydrogen distribution configuration and conducted several sensitivity studies to determine the potential variability among PWRs. The sensitivity studies were based on hydrogen and safety-related equipment configurations observed at other PWRs within the United States. Several options for improving the hydrogen distribution system design were identified and evaluated for their effect on risk and core damage frequency. A cost/benefit analysis was performed to determine whether alternatives considered were justifiable based on the safety improvement and economics of each possible improvement.

  19. Impact of hadronic and nuclear corrections on global analysis of spin-dependent parton distributions

    SciTech Connect

    Jimenez-Delgado, Pedro; Accardi, Alberto; Melnitchouk, Wally

    2014-02-01

    We present the first results of a new global next-to-leading order analysis of spin-dependent parton distribution functions from the most recent world data on inclusive polarized deep-inelastic scattering, focusing in particular on the large-x and low-Q^2 regions. By directly fitting polarization asymmetries we eliminate biases introduced by using polarized structure function data extracted under nonuniform assumptions for the unpolarized structure functions. For analysis of the large-x data we implement nuclear smearing corrections for deuterium and ^3He nuclei, and systematically include target mass and higher twist corrections to the g_1 and g_2 structure functions at low Q^2. We also explore the effects of Q^2 and W^2 cuts in the data sets, and the potential impact of future data on the behavior of the spin-dependent parton distributions at large x.

  20. Passive-scheme analysis for solving the untrusted source problem in quantum key distribution

    SciTech Connect

    Peng Xiang; Xu Bingjie; Guo Hong

    2010-04-15

    As a practical method, the passive scheme is useful to monitor the photon statistics of an untrusted source in a 'Plug and Play' quantum key distribution (QKD) system. In a passive scheme, three kinds of monitor mode can be adopted: average photon number (APN) monitor, photon number analyzer (PNA), and photon number distribution (PND) monitor. In this paper, the security analysis is rigorously given for the APN monitor, while for the PNA, the analysis, including statistical fluctuation and random noise, is addressed with a confidence level. The results show that the PNA can achieve better performance than the APN monitor and can asymptotically approach the theoretical limit of the PND monitor. Also, the passive scheme with the PNA works efficiently when the signal-to-noise ratio (R^SN) is not too low and so is highly applicable to solve the untrusted source problem in the QKD system.

  1. A performance analysis method for distributed real-time robotic systems: A case study of remote teleoperation

    NASA Technical Reports Server (NTRS)

    Lefebvre, D. R.; Sanderson, A. C.

    1994-01-01

    Robot coordination and control systems for remote teleoperation applications are by necessity implemented on distributed computers. Modeling and performance analysis of these distributed robotic systems is difficult, but important for economic system design. Performance analysis methods originally developed for conventional distributed computer systems are often unsatisfactory for evaluating real-time systems. This paper introduces a formal model of distributed robotic control systems and a performance analysis method, based on scheduling theory, that can handle concurrent hard-real-time response specifications. Use of the method is illustrated by a remote teleoperation case study which assesses the effect of communication delays and of the allocation of robot control functions on control system hardware requirements.
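
    The abstract does not spell out its scheduling-theoretic method, but a standard building block for verifying hard-real-time response specifications is fixed-priority response-time analysis. The sketch below implements the classical iterative test with hypothetical task parameters.

      # Classical fixed-priority response-time analysis: iterate the
      # interference equation to a fixed point for each task. Task
      # parameters (C = worst-case execution time, T = period = deadline)
      # are hypothetical; highest priority comes first (rate-monotonic).
      import math

      tasks = [(1, 4), (2, 6), (3, 13)]

      def response_time(i, tasks):
          """Worst-case response time of task i, or None if unschedulable."""
          C, T = tasks[i]
          r = C
          while True:
              interference = sum(math.ceil(r / Tj) * Cj
                                 for Cj, Tj in tasks[:i])
              r_next = C + interference
              if r_next == r:
                  return r            # fixed point reached
              if r_next > T:
                  return None         # deadline miss
              r = r_next

      for i, (C, T) in enumerate(tasks):
          r = response_time(i, tasks)
          ok = r is not None and r <= T
          print(f"task {i}: C={C} T={T} R={r} {'OK' if ok else 'MISS'}")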

  2. Bayesian analysis of nanodosimetric ionisation distributions due to alpha particles and protons.

    PubMed

    De Nardo, L; Ferretti, A; Colautti, P; Grosswendt, B

    2011-02-01

    Track-nanodosimetry aims to investigate the stochastic aspect of ionisation events in particle tracks by evaluating the probability distribution of the number of ionisations produced in a nanometric target volume positioned at distance d from a particle track. Such measurements make use of electron (or ion) gas detectors with detection efficiencies that are non-uniformly distributed inside the target volume. This makes the reconstruction of the true ionisation distributions, which correspond to an ideal efficiency of 100%, non-trivial. Bayesian unfolding has been applied to ionisation distributions produced by 5.4 MeV alpha particles and 20 MeV protons in cylindrical volumes of propane of 20 nm equivalent size, positioned at different impact parameters with respect to the primary beam. It is shown that a Bayesian analysis performed by subdividing the target volume into sub-regions of different detection efficiencies provides a good reconstruction of the true nanodosimetric ionisation distributions. PMID:21112893
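
    A minimal sketch of iterative Bayesian unfolding (in the style of D'Agostini) illustrates the idea: given a known response matrix encoding the detector's imperfect efficiency, repeatedly apply Bayes' theorem to reallocate the measured spectrum. The response matrix below is a toy binomial-thinning model, not the nanodosimeter's actual efficiency map.

      # Iterative Bayesian unfolding of a smeared cluster-size spectrum.
      import math
      import numpy as np

      n = 8                                   # cluster sizes 0..7
      true = np.array([0.30, 0.25, 0.18, 0.12, 0.07, 0.04, 0.03, 0.01])

      # R[j, i] = P(measure j | true i): each ionisation is detected with
      # probability 0.7, so the measured size is binomially thinned.
      R = np.zeros((n, n))
      for i in range(n):
          for j in range(i + 1):
              R[j, i] = math.comb(i, j) * 0.7**j * 0.3**(i - j)

      measured = R @ true                      # expected measured spectrum

      p = np.full(n, 1.0 / n)                  # flat prior on true spectrum
      for _ in range(50):
          joint = R * p[None, :]               # joint[j, i] = P(j | i) p[i]
          post = joint / joint.sum(axis=1, keepdims=True)   # P(i | j)
          p = post.T @ measured                # Bayes-weighted reallocation
          p /= p.sum()

      print(np.round(p, 3))                    # converges toward `true`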

  3. Mapping of aerosols' elemental distribution in two zones in Romania by PIXE analysis

    NASA Astrophysics Data System (ADS)

    Amemiya, Susumu; Masuda, Toshio; Popa-Simil, Liviu; Mateescu, Liviu

    1996-09-01

    In the summer of 1994, aerosol particles were collected from different places using a portable stacked filter unit with filters of 8 and 0.4 μm. Sampling was performed in order to obtain the spatial distribution of the elemental concentrations of aerosols. The Van de Graaff machine at Nagoya University was used for PIXE analysis of the samples. Results were processed both in Bucharest and in Nagoya. Iso-level maps of the concentration of each element of interest were drawn, and correlations were drawn between industry, vegetation, weather, local geography and these concentrations. Major industrial pollution sources were identified. For example, the Si distribution in the Bucharest and Dobrogea regions turned out to be closely linked with the vegetation and surface water distribution. The ratio between coarse (8 μm) and fine (0.4 μm) particles is related to human activity (traffic, mining, buildings). Sulphur, in its turn, follows the territorial distribution of thermal power plants and refineries (fine particles), while its coarse particles seem to concentrate in high-traffic areas (Diesel engines). Pb concentrations also follow the traffic density distribution. More than 15 elements were mapped and interesting observations could be made.

  4. Systematic analysis of mutation distribution in three dimensional protein structures identifies cancer driver genes

    PubMed Central

    Fujimoto, Akihiro; Okada, Yukinori; Boroevich, Keith A.; Tsunoda, Tatsuhiko; Taniguchi, Hiroaki; Nakagawa, Hidewaki

    2016-01-01

    Protein tertiary structure determines the molecular function, interactions, and stability of a protein; therefore, the distribution of mutations in the tertiary structure can facilitate the identification of new driver genes in cancer. To analyze mutation distribution in protein tertiary structures, we applied a novel three-dimensional permutation test to the mutation positions. We analyzed somatic mutation datasets of 21 types of cancers obtained from exome sequencing conducted by the TCGA project. Of the 3,622 genes that had ≥3 mutations in the regions with tertiary structure data, 106 genes showed significant skew in mutation distribution. Known tumor suppressors and oncogenes were significantly enriched in these identified cancer gene sets. Physical distances between mutations in known oncogenes were significantly smaller than those in tumor suppressors. Twenty-three genes were detected in multiple cancers. Candidate genes with significant skew of the 3D mutation distribution included kinases (MAPK1, EPHA5, ERBB3, and ERBB4), an apoptosis-related gene (APP), an RNA splicing factor (SF1), a miRNA processing factor (DICER1), an E3 ubiquitin ligase (CUL1) and transcription factors (KLF5 and EEF1B2). Our study suggests that systematic analysis of mutation distribution in the tertiary protein structure can help identify cancer driver genes. PMID:27225414
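
    The essence of a 3D permutation test for mutation clustering can be sketched briefly: compare the mean pairwise distance between mutated residues with a null distribution obtained by repeatedly sampling the same number of residue positions at random. Coordinates and mutation positions below are synthetic stand-ins for PDB/TCGA data.

      # 3D permutation test sketch: are the "mutated" residues closer
      # together in space than random residue sets of the same size?
      import numpy as np

      rng = np.random.default_rng(5)
      coords = rng.normal(size=(300, 3)) * 20   # fake C-alpha coordinates
      # Fake mutated residues chosen to be spatially clustered (near the
      # origin), so the test should flag them.
      mutated = np.argsort(np.linalg.norm(coords, axis=1))[:6]

      def mean_pairwise(points):
          d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
          return d[np.triu_indices(len(points), k=1)].mean()

      obs = mean_pairwise(coords[mutated])
      null = np.array([
          mean_pairwise(coords[rng.choice(len(coords), len(mutated),
                                          replace=False)])
          for _ in range(2000)
      ])
      p_value = (1 + (null <= obs).sum()) / (1 + len(null))   # one-sided
      print(f"observed mean distance {obs:.1f}, p = {p_value:.4f}")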

  5. Systematic analysis of mutation distribution in three dimensional protein structures identifies cancer driver genes.

    PubMed

    Fujimoto, Akihiro; Okada, Yukinori; Boroevich, Keith A; Tsunoda, Tatsuhiko; Taniguchi, Hiroaki; Nakagawa, Hidewaki

    2016-01-01

    Protein tertiary structure determines the molecular function, interactions, and stability of a protein; therefore, the distribution of mutations in the tertiary structure can facilitate the identification of new driver genes in cancer. To analyze mutation distribution in protein tertiary structures, we applied a novel three-dimensional permutation test to the mutation positions. We analyzed somatic mutation datasets of 21 types of cancers obtained from exome sequencing conducted by the TCGA project. Of the 3,622 genes that had ≥3 mutations in the regions with tertiary structure data, 106 genes showed significant skew in mutation distribution. Known tumor suppressors and oncogenes were significantly enriched in these identified cancer gene sets. Physical distances between mutations in known oncogenes were significantly smaller than those in tumor suppressors. Twenty-three genes were detected in multiple cancers. Candidate genes with significant skew of the 3D mutation distribution included kinases (MAPK1, EPHA5, ERBB3, and ERBB4), an apoptosis-related gene (APP), an RNA splicing factor (SF1), a miRNA processing factor (DICER1), an E3 ubiquitin ligase (CUL1) and transcription factors (KLF5 and EEF1B2). Our study suggests that systematic analysis of mutation distribution in the tertiary protein structure can help identify cancer driver genes. PMID:27225414

  6. Individual loss distribution measurement in 32-branched PON using pulsed pump-probe Brillouin analysis.

    PubMed

    Takahashi, Hiroshi; Ito, Fumihiko; Kito, Chihiro; Toge, Kunihiro

    2013-03-25

    We describe loss distribution measurement in a passive optical network (PON) using pulsed pump-probe Brillouin analysis. A preliminary experiment is demonstrated using a 32-branched PON constructed in the laboratory. We analyze the signal-to-noise ratio of this measurement and show that the method can realize a 25 dB dynamic range in 90 seconds (10,000 averages), with an event location resolution of 10 m and a fiber length identification resolution of 2 m. PMID:23546056

  7. An exploratory spatial analysis of soil organic carbon distribution in Canadian eco-regions

    NASA Astrophysics Data System (ADS)

    Tan, S.-Y.; Li, J.

    2014-11-01

    As the largest carbon reservoir in ecosystems, soil accounts for more than twice as much carbon storage as that of vegetation biomass or the atmosphere. This paper examines spatial patterns of soil organic carbon (SOC) in Canadian forest areas at an eco-region scale of analysis. The goal is to explore the relationship of SOC levels with various climatological variables, including temperature and precipitation. The first Canadian forest soil database published in 1997 by the Canada Forest Service was analyzed along with other long-term eco-climatic data (1961 to 1991) including precipitation, air temperature, slope, aspect, elevation, and Normalized Difference Vegetation Index (NDVI) derived from remote sensing imagery. In addition, the existing eco-region framework established by Environment Canada was evaluated for mapping SOC distribution. Exploratory spatial data analysis techniques, including spatial autocorrelation analysis, were employed to examine how forest SOC is spatially distributed in Canada. Correlation analysis and spatial regression modelling were applied to determine the dominant ecological factors influencing SOC patterns at the eco-region level. At the national scale, a spatial error regression model was developed to account for spatial dependency and to estimate SOC patterns based on ecological and ecosystem factors. Based on the significant variables derived from the spatial error model, a predictive SOC map in Canadian forest areas was generated. Although overall SOC distribution is influenced by climatic and topographic variables, distribution patterns are shown to differ significantly between eco-regions. These findings help to validate the eco-region classification framework for SOC zonation mapping in Canada.
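
    A standard statistic in this kind of exploratory spatial data analysis is global Moran's I. The sketch below computes it on a toy lattice with a rook-adjacency weight matrix; the grid values stand in for eco-region SOC estimates and are not the paper's data.

      # Global Moran's I on a toy grid: I = (n / S0) * (z'Wz) / (z'z)
      # with mean-centred values z and spatial weights W.
      import numpy as np

      rng = np.random.default_rng(6)
      side = 10
      vals = (np.add.outer(np.arange(side), np.arange(side))
              + rng.normal(0, 2, (side, side)))
      z = vals.ravel()                         # smooth spatial gradient

      # Rook-adjacency weight matrix on the grid.
      n = side * side
      W = np.zeros((n, n))
      for r in range(side):
          for c in range(side):
              i = r * side + c
              for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  rr, cc = r + dr, c + dc
                  if 0 <= rr < side and 0 <= cc < side:
                      W[i, rr * side + cc] = 1.0

      zc = z - z.mean()
      I = (n / W.sum()) * (zc @ W @ zc) / (zc @ zc)
      print(I)   # clearly positive for the gradient, near 0 for pure noise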

  8. Three-dimensional gamma analysis of dose distributions in individual structures for IMRT dose verification.

    PubMed

    Tomiyama, Yuuki; Araki, Fujio; Oono, Takeshi; Hioki, Kazunari

    2014-07-01

    Our purpose in this study was to implement three-dimensional (3D) gamma analysis for structures of interest, such as the planning target volume (PTV) or clinical target volume (CTV) and organs at risk (OARs), for intensity-modulated radiation therapy (IMRT) dose verification. IMRT dose distributions for prostate and head and neck (HN) cancer patients were calculated with an analytical anisotropic algorithm in an Eclipse (Varian Medical Systems) treatment planning system (TPS) and by Monte Carlo (MC) simulation. The MC dose distributions were calculated with the EGSnrc/BEAMnrc and DOSXYZnrc user codes under conditions identical to those for the TPS. The prescribed doses were 76 Gy/38 fractions with five-field IMRT for the prostate and 33 Gy/17 fractions with seven-field IMRT for the HN. TPS dose distributions were verified by the gamma passing rates for the whole calculated volume, the PTV or CTV, and the OARs by use of 3D gamma analysis with reference to the MC dose distributions. The acceptance criteria for the 3D gamma analysis were 3 %/3 mm and 2 %/2 mm for the dose difference and the distance to agreement. The gamma passing rates in the PTV and OARs for the prostate IMRT plan were close to 100 %. For the HN IMRT plan, the passing rates at 2 %/2 mm in the CTV and OARs were substantially lower because inhomogeneous tissues such as bone and air in the HN are included in the calculation area. 3D gamma analysis for individual structures is useful for IMRT dose verification. PMID:24796955
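
    A brute-force sketch of the gamma index conveys the computation: for each reference voxel, search nearby evaluated voxels for the minimum combined dose-difference/distance-to-agreement metric and count the fraction with gamma at most 1. This toy uses global normalization, tiny grids and no interpolation, so it is a teaching illustration rather than a clinical tool.

      # Brute-force 3D gamma passing rate (Low-et-al-style metric).
      import numpy as np

      def gamma_pass_rate(ref, ev, spacing, dd=0.02, dta=2.0, search=3):
          """dd: dose criterion (fraction of max dose); dta: distance (mm)."""
          norm = ref.max()
          offs = range(-search, search + 1)
          passed, total = 0, 0
          for idx in np.ndindex(ref.shape):
              total += 1
              best = np.inf
              for dx in offs:
                  for dy in offs:
                      for dz in offs:
                          j = (idx[0] + dx, idx[1] + dy, idx[2] + dz)
                          if any(a < 0 or a >= s
                                 for a, s in zip(j, ev.shape)):
                              continue
                          dist2 = sum((d * sp) ** 2 for d, sp
                                      in zip((dx, dy, dz), spacing))
                          ddiff = (ev[j] - ref[idx]) / (norm * dd)
                          best = min(best, ddiff**2 + dist2 / dta**2)
              passed += best <= 1.0               # gamma^2 <= 1 passes
          return passed / total

      rng = np.random.default_rng(7)
      ref = rng.random((8, 8, 8)) * 2.0
      ev = ref + rng.normal(0, 0.01, ref.shape)   # nearly identical plan
      print(gamma_pass_rate(ref, ev, spacing=(2.0, 2.0, 2.0)))  # ~1.0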

  9. Sensitivity analysis and parameter estimation for distributed hydrological modeling: potential of variational methods

    NASA Astrophysics Data System (ADS)

    Castaings, W.; Dartus, D.; Le Dimet, F.-X.; Saulnier, G.-M.

    2009-04-01

    Variational methods are widely used for the analysis and control of computationally intensive spatially distributed systems. In particular, the adjoint state method enables a very efficient calculation of the derivatives of an objective function (a response function to be analysed or a cost function to be optimised) with respect to model inputs. In this contribution, it is shown that variational methods hold considerable potential for distributed catchment-scale hydrology. A distributed flash flood model, coupling kinematic wave overland flow and Green-Ampt infiltration, is applied to a small catchment of the Thoré basin and used as a relatively simple (synthetic observations) but didactic application case. It is shown that forward and adjoint sensitivity analysis provide a local but extensive insight into the relation between the assigned model parameters and the simulated hydrological response. Spatially distributed parameter sensitivities can be obtained for a very modest computational effort (about 6 times the computing time of a single model run), and the singular value decomposition (SVD) of the Jacobian matrix provides an interesting perspective for the analysis of the rainfall-runoff relation. For the estimation of model parameters, adjoint-based derivatives were found exceedingly efficient in driving a bound-constrained quasi-Newton algorithm. The reference parameter set is retrieved independently of the optimization initial condition when the very common dimension-reduction strategy (i.e. scalar multipliers) is adopted. Furthermore, the sensitivity analysis results suggest that most of the variability in this high-dimensional parameter space can be captured with a few orthogonal directions. A parametrization based on the SVD leading singular vectors was found very promising but should be combined with another regularization strategy in order to prevent overfitting.
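
    The efficiency claim rests on the adjoint-state identity: for a scalar objective, one extra (adjoint) solve yields the derivatives with respect to all parameters. A minimal sketch on a steady linear model A(p)u = b with objective J = c^T u, using toy matrices rather than a hydrological model, verifies the identity against finite differences.

      # Adjoint-state sketch: dJ/dp_k = -lam^T (dA/dp_k) u, where
      # A(p) u = b (forward solve) and A(p)^T lam = c (adjoint solve).
      import numpy as np

      rng = np.random.default_rng(8)
      n = 6
      A0 = np.eye(n) * 5 + rng.normal(0, 0.1, (n, n))
      dA = [rng.normal(0, 0.1, (n, n)) for _ in range(3)]   # dA/dp_k
      b = rng.normal(size=n)
      c = rng.normal(size=n)

      def assemble(p):
          return A0 + sum(pk * dAk for pk, dAk in zip(p, dA))

      p = np.array([0.3, -0.2, 0.1])
      u = np.linalg.solve(assemble(p), b)          # one forward solve
      lam = np.linalg.solve(assemble(p).T, c)      # one adjoint solve
      grad_adj = np.array([-lam @ (dAk @ u) for dAk in dA])

      # Finite-difference check: one extra forward solve PER parameter.
      eps, grad_fd = 1e-6, []
      for k in range(3):
          pk = p.copy()
          pk[k] += eps
          grad_fd.append((c @ np.linalg.solve(assemble(pk), b) - c @ u) / eps)

      print(np.allclose(grad_adj, np.array(grad_fd), atol=1e-4))  # True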

  10. Mapping drug distribution in brain tissue using liquid extraction surface analysis mass spectrometry imaging.

    PubMed

    Swales, John G; Tucker, James W; Spreadborough, Michael J; Iverson, Suzanne L; Clench, Malcolm R; Webborn, Peter J H; Goodwin, Richard J A

    2015-10-01

    Liquid extraction surface analysis mass spectrometry (LESA-MS) is a surface sampling technique that incorporates liquid extraction from the surface of tissue sections with nanoelectrospray mass spectrometry. Traditional tissue analysis techniques usually require homogenization of the sample prior to analysis via high-performance liquid chromatography mass spectrometry (HPLC-MS), but an intrinsic weakness of this is the loss of all spatial information and the inability of the technique to distinguish between actual tissue penetration and response caused by residual blood contamination. LESA-MS, in contrast, has the ability to spatially resolve drug distributions and has historically been used to profile discrete spots on the surface of tissue sections. Here, we use the technique as a mass spectrometry imaging (MSI) tool, extracting points at 1 mm spatial resolution across tissue sections to build an image of xenobiotic and endogenous compound distribution to assess drug blood-brain barrier penetration into brain tissue. A selection of penetrant and "nonpenetrant" drugs was dosed to rats via oral and intravenous administration. Whole brains were snap-frozen at necropsy and subsequently sectioned prior to analysis by matrix-assisted laser desorption ionization mass spectrometry imaging (MALDI-MSI) and LESA-MSI. MALDI-MSI, as expected, was shown to effectively map the distribution of brain-penetrative compounds but lacked sufficient sensitivity when compounds were marginally penetrative. LESA-MSI was used to effectively map the distribution of these poorly penetrative compounds, highlighting its value as a complementary technique to MALDI-MSI. The technique also showed benefits when compared to traditional homogenization, particularly for drugs that were considered nonpenetrant by homogenization but were shown to have measurable penetration using LESA-MSI. PMID:26350423

  11. Modeling human mortality using mixtures of bathtub shaped failure distributions.

    PubMed

    Bebbington, Mark; Lai, Chin-Diew; Zitikis, Ricardas

    2007-04-01

    Aging and mortality are usually modeled by the Gompertz-Makeham distribution, where the mortality rate accelerates with age in adult humans. The resulting parameters are interpreted as the frailty and the decrease in vitality with age. This fits well to life data from 'westernized' societies, where the data are accurate, of high resolution, and show the effects of high-quality post-natal care. We show, however, that when the data are of lower resolution and contain considerable structure in the infant mortality, the fit can be poor. Moreover, the Gompertz-Makeham distribution is consistent with neither the force of natural selection nor the recently identified 'late-life mortality deceleration'. Although actuarial models such as the Heligman-Pollard distribution can, in theory, achieve an improved fit, the lack of a closed form for the survival function makes fitting extremely arduous, and the biological interpretation can be lacking. We show that a mixture assigning mortality to exogenous or endogenous causes, using the reduced additive and flexible Weibull distributions, models human mortality well over the entire life span. The components of the mixture are asymptotically consistent with the reliability and biological theories of aging. The relative simplicity of the mixture distribution makes feasible a technique in which the curvature functions of the corresponding survival and hazard rate functions are used to identify the beginning and the end of various life phases, such as infant mortality, the end of the force of natural selection, and late-life mortality deceleration. We illustrate our results with a comparative analysis of Canadian and Indonesian mortality data. PMID:17188716
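
    A maximum-likelihood fit of a two-component mixture can be sketched compactly. One hedge is needed: the paper's components are the reduced additive and flexible Weibull distributions, whereas for brevity the sketch below mixes two standard two-parameter Weibulls (an infant and a senescent component) fitted to synthetic ages at death.

      # Two-component Weibull mixture fit by maximum likelihood.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(9)
      ages = np.concatenate([
          rng.weibull(0.6, 200) * 3,       # infant/exogenous component
          rng.weibull(6.0, 800) * 75,      # senescent/endogenous component
      ])

      def weibull_logpdf(x, k, lam):
          return np.log(k / lam) + (k - 1) * np.log(x / lam) - (x / lam) ** k

      def neg_loglik(theta):
          k1, l1, k2, l2 = np.exp(theta[:4])       # positivity via logs
          w = 1 / (1 + np.exp(-theta[4]))          # mixing weight in (0, 1)
          comp = np.logaddexp(np.log(w) + weibull_logpdf(ages, k1, l1),
                              np.log1p(-w) + weibull_logpdf(ages, k2, l2))
          return -comp.sum()

      theta0 = np.append(np.log([1.0, 5.0, 5.0, 70.0]), 0.0)  # rough start
      res = minimize(neg_loglik, theta0, method="Nelder-Mead",
                     options={"maxiter": 5000, "maxfev": 5000})
      k1, l1, k2, l2 = np.exp(res.x[:4])
      w = 1 / (1 + np.exp(-res.x[4]))
      print(w, (k1, l1), (k2, l2))   # approx 0.2, (0.6, 3), (6, 75)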

  12. Single-phase power distribution system power flow and fault analysis

    NASA Technical Reports Server (NTRS)

    Halpin, S. M.; Grigsby, L. L.

    1992-01-01

    Alternative methods for power flow and fault analysis of single-phase distribution systems are presented. The algorithms for both power flow and fault analysis utilize a generalized approach to network modeling. The generalized admittance matrix, formed using elements of linear graph theory, is an accurate network model for all possible single-phase network configurations. Unlike the standard nodal admittance matrix formulation algorithms, the generalized approach uses generalized component models for the transmission line and transformer. The standard assumption of a common node voltage reference point is not required to construct the generalized admittance matrix. Therefore, truly accurate simulation results can be obtained for networks that cannot be modeled using traditional techniques.
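
    The flavor of admittance-matrix network modeling can be shown with a minimal linear example: assemble the nodal admittance matrix of a small single-phase feeder from branch admittances and solve for the node voltages. Note that this toy retains the common voltage reference that the paper's generalized graph-theoretic formulation specifically avoids, so it illustrates the baseline rather than the paper's contribution; all network values are hypothetical.

      # Nodal admittance (Y-bus) assembly and linear solve for a small
      # single-phase feeder. Node 0 is the source bus, held at 1.0 pu.
      import numpy as np

      # (from_node, to_node, series admittance in siemens)
      branches = [(0, 1, 1 / (0.10 + 0.20j)),
                  (1, 2, 1 / (0.15 + 0.30j)),
                  (1, 3, 1 / (0.20 + 0.40j))]
      n = 4
      Y = np.zeros((n, n), dtype=complex)
      for a, b, y in branches:
          Y[a, a] += y                       # self terms
          Y[b, b] += y
          Y[a, b] -= y                       # mutual terms
          Y[b, a] -= y

      V0 = 1.0 + 0j                          # source bus voltage
      I = np.array([0, 0, -0.5 + 0.1j, -0.3 + 0.05j])  # load injections

      # Unknowns are V[1:]; move the source contribution to the RHS.
      V_rest = np.linalg.solve(Y[1:, 1:], I[1:] - Y[1:, 0] * V0)
      print(np.abs(np.r_[V0, V_rest]))       # voltage drop along the feeder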

  13. Some physics and system issues in the security analysis of quantum key distribution protocols

    NASA Astrophysics Data System (ADS)

    Yuen, Horace P.

    2014-10-01

    In this paper, we review a number of issues on the security of quantum key distribution (QKD) protocols that bear directly on the relevant physics or mathematical representation of the QKD cryptosystem. It is shown that the cryptosystem representation itself may miss out many possible attacks, which are not accounted for in the security analysis and proofs. Hence, the final security claims drawn from such analysis are not reliable, apart from foundational issues about the security criteria that are discussed elsewhere. The cases of continuous-variable QKD and multi-photon sources are elaborated upon.

  14. Shared and Distributed Memory Parallel Security Analysis of Large-Scale Source Code and Binary Applications

    SciTech Connect

    Quinlan, D; Barany, G; Panas, T

    2007-08-30

    Many forms of security analysis of large-scale applications can be substantially automated, but the size and complexity of such applications can exceed the time and memory available on conventional desktop computers. Most commercial tools are understandably focused on such conventional desktop resources. This paper presents research on the parallelization of security analysis of both source code and binaries within our Compass tool, which is implemented using the ROSE source-to-source open compiler infrastructure. We have focused on both shared- and distributed-memory parallelization of the evaluation of rules implemented as checkers for a wide range of secure programming rules, applicable to desktop machines, networks of workstations and dedicated clusters. While Compass as a tool focuses on source code analysis and reports violations of an extensible set of rules, the binary analysis work uses the exact same infrastructure but is less well developed into an equivalent final tool.

  15. Improving the analysis, storage and sharing of neuroimaging data using relational databases and distributed computing.

    PubMed

    Hasson, Uri; Skipper, Jeremy I; Wilde, Michael J; Nusbaum, Howard C; Small, Steven L

    2008-01-15

    The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, to allow collaborative research, and to achieve these aims securely and with minimum management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data. PMID:17964812

  16. A new parallel-vector finite element analysis software on distributed-memory computers

    NASA Technical Reports Server (NTRS)

    Qin, Jiangning; Nguyen, Duc T.

    1993-01-01

    A new parallel-vector finite element analysis software package MPFEA (Massively Parallel-vector Finite Element Analysis) is developed for large-scale structural analysis on massively parallel computers with distributed-memory. MPFEA is designed for parallel generation and assembly of the global finite element stiffness matrices as well as parallel solution of the simultaneous linear equations, since these are often the major time-consuming parts of a finite element analysis. Block-skyline storage scheme along with vector-unrolling techniques are used to enhance the vector performance. Communications among processors are carried out concurrently with arithmetic operations to reduce the total execution time. Numerical results on the Intel iPSC/860 computers (such as the Intel Gamma with 128 processors and the Intel Touchstone Delta with 512 processors) are presented, including an aircraft structure and some very large truss structures, to demonstrate the efficiency and accuracy of MPFEA.

  17. Improving the Analysis, Storage and Sharing of Neuroimaging Data using Relational Databases and Distributed Computing

    PubMed Central

    Hasson, Uri; Skipper, Jeremy I.; Wilde, Michael J.; Nusbaum, Howard C.; Small, Steven L.

    2007-01-01

    The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, to allow collaborative research, and to achieve these aims securely and with minimum management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data. PMID:17964812

  18. Integration of enzyme kinetic models and isotopomer distribution analysis for studies of in situ cell operation

    PubMed Central

    Selivanov, Vitaly A; Sukhomlin, Tatiana; Centelles, Josep J; Lee, Paul WN; Cascante, Marta

    2006-01-01

    A current trend in neuroscience research is the use of stable isotope tracers in order to address metabolic processes in vivo. The tracers produce a huge number of metabolite forms that differ according to the number and position of labeled isotopes in the carbon skeleton (isotopomers), and such a large variety makes the analysis of isotopomer data highly complex. On the other hand, this multiplicity of forms does provide sufficient information to address cell operation in vivo. By the end of the last millennium, a number of tools had been developed for estimation of the metabolic flux profile from any possible isotopomer distribution data. However, although well elaborated, these tools were limited to steady-state analysis, and the obtained set of fluxes remained disconnected from its biochemical context. In this review we focus on a new numerical analytical approach that integrates kinetic and metabolic flux analysis. The related computational algorithm estimates the dynamic flux based on the time-dependent distribution of all possible isotopomers of metabolic pathway intermediates that are generated from a labeled substrate. The new algorithm connects specific tracer data with enzyme kinetic characteristics, thereby extending the amount of data available for analysis: it uses enzyme kinetic data to estimate the flux profile and, vice versa, for the kinetic analysis it uses in vivo tracer data to reveal the biochemical basis of the estimated metabolic fluxes. PMID:17118161

  19. Combining Different Tools for EEG Analysis to Study the Distributed Character of Language Processing.

    PubMed

    Rocha, Armando Freitas da; Foz, Flávia Benevides; Pereira, Alfredo

    2015-01-01

    Recent studies on language processing indicate that language cognition is better understood if assumed to be supported by a distributed intelligent processing system enrolling neurons located all over the cortex, in contrast to reductionism that proposes to localize cognitive functions to specific cortical structures. Here, brain activity was recorded using electroencephalogram while volunteers were listening or reading small texts and had to select pictures that translate the meaning of these texts. Several techniques for EEG analysis were used to show this distributed character of neuronal enrollment associated with the comprehension of oral and written descriptive texts. Low Resolution Tomography identified the many different sets (s_i) of neurons activated in several distinct cortical areas by text understanding. Linear correlation was used to calculate the information H(e_i) provided by each electrode of the 10/20 system about the identified s_i. Principal Component Analysis (PCA) of H(e_i) was used to study the temporal and spatial activation of these sources s_i. This analysis evidenced 4 different patterns of H(e_i) covariation that are generated by neurons located at different cortical locations. These results show that the distributed character of language processing is clearly evidenced by combining available EEG technologies. PMID:26713089

  20. Pedestrian simulation and distribution in urban space based on visibility analysis and agent simulation

    NASA Astrophysics Data System (ADS)

    Ying, Shen; Li, Lin; Gao, Yurong

    2009-10-01

    Spatial visibility analysis is an important direction in the study of pedestrian behavior, because visual perception of space is the most direct way to acquire environmental information and navigate one's actions. Based on agent modeling and a top-down method, this paper develops a framework for analyzing pedestrian flow as a function of visibility. We use viewsheds in the visibility analysis and impose the resulting parameters on an agent simulation to direct agent motion in urban space. We analyze pedestrian behaviors at the micro-scale and macro-scale of urban open space. An individual agent uses visual affordances to determine its direction of motion in a micro-scale urban street or district. At the macro-scale, we compare the distribution of pedestrian flow with the spatial configuration of the urban environment, and mine the relationship between pedestrian flow and the distribution of urban facilities and urban functions. The paper first computes the visibility situation at vantage points in urban open space, such as the street network, and quantifies the visibility parameters. The multiple agents use these visibility parameters to decide their directions of motion, and pedestrian flow finally reaches a stable state in the urban environment through the multi-agent simulation. We compare the morphology of the visibility parameters and the pedestrian distribution with the urban function and facility layout to confirm their consistency, which can be used for decision support in urban design.
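
    A toy version of the viewshed computation that drives such agents can be sketched with simple ray casting on an occupancy grid; an agent might, for example, move toward the direction with the largest visible area. The grid, wall and parameters below are invented for illustration; the paper's viewshed and agent rules are not specified in the abstract.

      # Grid viewshed/isovist sketch: cast rays from an agent's cell and
      # mark visible free cells until a ray hits an obstacle.
      import numpy as np

      grid = np.zeros((20, 20), dtype=bool)     # False = free, True = wall
      grid[5:15, 10] = True                     # a wall casting a shadow

      def viewshed(grid, origin, n_rays=360, max_range=25.0):
          visible = np.zeros_like(grid)
          oy, ox = origin
          for ang in np.linspace(0, 2 * np.pi, n_rays, endpoint=False):
              dy, dx = np.sin(ang), np.cos(ang)
              for t in np.arange(0.0, max_range, 0.25):  # march the ray
                  y = int(round(oy + t * dy))
                  x = int(round(ox + t * dx))
                  if not (0 <= y < grid.shape[0] and 0 <= x < grid.shape[1]):
                      break
                  if grid[y, x]:
                      break                              # ray blocked
                  visible[y, x] = True
          return visible

      vis = viewshed(grid, origin=(10, 5))
      print(vis.sum(), "cells visible")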

  1. A CLASS OF DISTRIBUTION-FREE MODELS FOR LONGITUDINAL MEDIATION ANALYSIS

    PubMed Central

    Gunzler, D.; Tang, W.; Lu, N.; Wu, P.; Tu, X.M.

    2016-01-01

    Mediation analysis constitutes an important part of treatment studies, identifying the mechanisms by which an intervention achieves its effect. The structural equation model (SEM) is a popular framework for modeling such causal relationships. However, current methods impose various restrictions on study designs and data distributions, limiting the utility of the information they provide in real study applications. In particular, in longitudinal studies missing data are commonly addressed under the assumption of missing at random (MAR), and current methods are unable to handle such missing data if parametric assumptions are violated. In this paper, we propose a new, robust approach that addresses the limitations of current SEMs within the context of longitudinal mediation analysis by utilizing a class of functional response models (FRM). Being distribution-free, the FRM-based approach does not impose any parametric assumption on the data distributions. In addition, by extending inverse probability weighted (IPW) estimates to the current context, the FRM-based SEM provides valid inference for longitudinal mediation analysis under the two most popular missing data mechanisms: missing completely at random (MCAR) and missing at random (MAR). We illustrate the approach with both real and simulated data. PMID:24271505

  2. Space station electrical power distribution analysis using a load flow approach

    NASA Technical Reports Server (NTRS)

    Emanuel, Ervin M.

    1987-01-01

    The space station's electrical power system will evolve and grow in a manner much like that of present terrestrial electrical power utilities. The initial baseline reference configuration will contain more than 50 nodes or busses, inverters, transformers, overcurrent protection devices, distribution lines, solar arrays, and/or solar dynamic power generating sources. The system is designed to manage and distribute 75 kW of power, single-phase or three-phase at 20 kHz, grow to a level of 300 kW steady state, and be capable of operating at a peak of 450 kW for 5 to 10 min. In order to plan far into the future and keep pace with load growth, a load flow power system analysis approach must be developed and utilized. This method is a well-known energy assessment and management tool that is widely used throughout the electrical power utility industry. The results of a comprehensive evaluation and assessment of an Electrical Distribution System Analysis Program (EDSA) are discussed. Its potential use as an analysis and design tool for the 20 kHz space station electrical power system is addressed.

  3. [Soil Heavy Metal Spatial Distribution and Source Analysis Around an Aluminum Plant in Baotou].

    PubMed

    Zhang, Lian-ke; Li, Hai-peng; Huang, Xue-min; Li, Yu-mei; Jiao, Kun-ling; Sun, Peng; Wang, Wei-da

    2016-03-15

    Soil within 500 m of an aluminum plant in Baotou was studied. A total of 64 soil samples were taken from the 0-5 cm, 5-20 cm, 20-40 cm and 40-60 cm layers, and the contents of Cu, Pb, Zn, Cr, Cd, Ni and Mn were measured. Correlation analysis and principal component analysis were used to identify the sources of these heavy metals in the soils. The results suggested that the contents of Cu, Pb, Zn, Cr, Cd, Ni and Mn in the study area were 32.9, 50.35, 69.92, 43.78, 0.54, 554.42 and 36.65 mg · kg⁻¹, respectively. All seven heavy metals tested exceeded the background values of soil in Inner Mongolia. The spatial distribution of the heavy metals showed that, horizontally, they were clearly enriched in the southwest, while vertically the heavy metal content was highest in the surface soil (0 to 5 cm), decreased with increasing depth, and tended to stabilize below 20 cm. Source analysis showed that Cu, Zn, Cr and Mn might be influenced by the aluminum plant and the surrounding industrial activity; Pb and Cd might be mainly related to road transportation; and Ni may be affected by agricultural activities and the soil parent material together. PMID:27337911
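
    The source-apportionment pattern (correlation plus principal component analysis on a standardized concentration table) can be sketched with synthetic data built from two hypothetical source signals; the table below is invented and only the analysis pattern mirrors the abstract.

      # PCA source-apportionment sketch on a fake 64-sample x 7-metal table.
      import numpy as np

      rng = np.random.default_rng(11)
      n = 64
      industrial = rng.lognormal(0, 0.4, n)     # hypothetical source signals
      traffic = rng.lognormal(0, 0.4, n)
      noise = lambda: rng.normal(0, 0.05, n)

      # Columns: Cu, Zn, Cr, Mn (industrial); Pb, Cd (traffic); Ni (mixed).
      metals = np.column_stack([
          industrial + noise(), industrial + noise(), industrial + noise(),
          industrial + noise(), traffic + noise(), traffic + noise(),
          0.5 * industrial + 0.5 * traffic + noise(),
      ])

      Z = (metals - metals.mean(0)) / metals.std(0)      # standardize
      evals, evecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
      order = np.argsort(evals)[::-1]
      print("explained variance:", np.round(evals[order] / evals.sum(), 2))
      print("PC1 loadings:", np.round(evecs[:, order[0]], 2))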

  4. Mathematical Ecology Analysis of Geographical Distribution of Soybean-Nodulating Bradyrhizobia in Japan

    PubMed Central

    Saeki, Yuichi; Shiro, Sokichi; Tajima, Toshiyuki; Yamamoto, Akihiro; Sameshima-Saito, Reiko; Sato, Takashi; Yamakawa, Takeo

    2013-01-01

    We characterized the relationship between the genetic diversity of indigenous soybean-nodulating bradyrhizobia from weakly acidic soils in Japan and their geographical distribution in an ecological study of indigenous soybean rhizobia. We isolated bradyrhizobia from three kinds of Rj-genotype soybeans. Their genetic diversity and community structure were analyzed by PCR-RFLP analysis of the 16S–23S rRNA gene internal transcribed spacer (ITS) region with 11 Bradyrhizobium USDA strains as references. We used data from the present study and previous studies to carry out mathematical ecological analyses, multidimensional scaling analysis with the Bray-Curtis index, polar ordination analysis, and multiple regression analyses to characterize the relationship between soybean-nodulating bradyrhizobial community structures and their geographical distribution. The mathematical ecological approaches used in this study demonstrated the presence of ecological niches and suggested the geographical distribution of soybean-nodulating bradyrhizobia to be a function of latitude and the related climate, with clusters in the order Bj123, Bj110, Bj6, and Be76 from north to south in Japan. PMID:24240318
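
    The mathematical-ecology toolkit named here (multidimensional scaling on Bray-Curtis dissimilarities) can be sketched compactly. The community count table below is invented; the analysis pattern, not the data, is the point.

      # Bray-Curtis dissimilarity + 2D metric MDS ordination sketch.
      import numpy as np
      from sklearn.manifold import MDS

      # Rows: sampling sites (north -> south); columns: rhizobial clusters.
      counts = np.array([[30.0, 5, 2, 0],
                         [18, 12, 4, 1],
                         [6, 15, 10, 3],
                         [1, 6, 14, 12]])

      def bray_curtis(u, v):
          return np.abs(u - v).sum() / (u + v).sum()

      n = len(counts)
      D = np.array([[bray_curtis(counts[i], counts[j]) for j in range(n)]
                    for i in range(n)])

      # Two-dimensional ordination of the dissimilarity matrix.
      coords = MDS(n_components=2, dissimilarity="precomputed",
                   random_state=0).fit_transform(D)
      print(coords)   # sites order themselves along the latitude gradient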

  5. Combining Different Tools for EEG Analysis to Study the Distributed Character of Language Processing

    PubMed Central

    da Rocha, Armando Freitas; Foz, Flávia Benevides; Pereira, Alfredo

    2015-01-01

    Recent studies on language processing indicate that language cognition is better understood if assumed to be supported by a distributed intelligent processing system enrolling neurons located all over the cortex, in contrast to reductionism, which proposes to localize cognitive functions in specific cortical structures. Here, brain activity was recorded using electroencephalogram while volunteers listened to or read short texts and had to select pictures that conveyed the meaning of those texts. Several techniques for EEG analysis were used to show this distributed character of neuronal enrollment associated with the comprehension of oral and written descriptive texts. Low Resolution Tomography identified the many different sets (si) of neurons activated in several distinct cortical areas by text understanding. Linear correlation was used to calculate the information H(ei) provided by each electrode of the 10/20 system about the identified si. Principal Component Analysis (PCA) of H(ei) was used to study the temporal and spatial activation of these sources si. This analysis revealed four different patterns of H(ei) covariation that are generated by neurons located at different cortical locations. These results show that the distributed character of language processing is clearly evidenced by combining available EEG technologies. PMID:26713089

  6. A distributed analysis and visualization system for model and observational data

    NASA Technical Reports Server (NTRS)

    Wilhelmson, Robert B.

    1994-01-01

    Software was developed with NASA support to aid in the analysis and display of the massive amounts of data generated from satellites, observational field programs, and from model simulations. This software was developed in the context of the PATHFINDER (Probing ATmospHeric Flows in an Interactive and Distributed EnviRonment) Project. The overall aim of this project is to create a flexible, modular, and distributed environment for data handling, modeling simulations, data analysis, and visualization of atmospheric and fluid flows. Software completed with NASA support includes GEMPAK analysis, data handling, and display modules for which collaborators at NASA had primary responsibility, and prototype software modules for three-dimensional interactive and distributed control and display as well as data handling, for which NCSA was responsible. Overall process control was handled through a scientific and visualization application builder from Silicon Graphics known as the Iris Explorer. In addition, the GEMPAK related work (GEMVIS) was also ported to the Advanced Visualization System (AVS) application builder. Many modules were developed to enhance those already available in Iris Explorer including HDF file support, improved visualization and display, simple lattice math, and the handling of metadata through development of a new grid datatype. Complete source and runtime binaries along with on-line documentation are available via the World Wide Web at: http://redrock.ncsa.uiuc.edu/PATHFINDER/pathre12/top/top.html.

  7. Oxygen distribution in tumors: A qualitative analysis and modeling study providing a novel Monte Carlo approach

    SciTech Connect

    Lagerlöf, Jakob H.; Kindblom, Jon; Bernhardt, Peter

    2014-09-15

    lower end, due to anoxia, but smaller tumors showed undisturbed oxygen distributions. The six different models with correlated parameters generated three classes of oxygen distributions. The first was a hypothetical, negative covariance between vessel proximity and pO{sub 2} (VPO-C scenario); the second was a hypothetical positive covariance between vessel proximity and pO{sub 2} (VPO+C scenario); and the third was the hypothesis of no correlation between vessel proximity and pO{sub 2} (UP scenario). The VPO-C scenario produced a distinctly different oxygen distribution than the two other scenarios. The shape of the VPO-C scenario was similar to that of the nonvariable DOC model, and the larger the tumor, the greater the similarity between the two models. For all simulations, the mean oxygen tension decreased and the hypoxic fraction increased with tumor size. The absorbed dose required for definitive tumor control was highest for the VPO+C scenario, followed by the UP and VPO-C scenarios. Conclusions: A novel MC algorithm was presented which simulated oxygen distributions and radiation response for various biological parameter values. The analysis showed that the VPO-C scenario generated a clearly different oxygen distribution from the VPO+C scenario; the former exhibited a lower hypoxic fraction and higher radiosensitivity. In future studies, this modeling approach might be valuable for qualitative analyses of factors that affect oxygen distribution as well as analyses of specific experimental and clinical situations.

  8. Estimating the probability distribution of the incubation period for rabies using data from the 1948-1954 rabies epidemic in Tokyo.

    PubMed

    Tojinbara, Kageaki; Sugiura, K; Yamada, A; Kakitani, I; Kwan, N C L; Sugiura, K

    2016-01-01

    Data of 98 rabies cases in dogs and cats from the 1948-1954 rabies epidemic in Tokyo were used to estimate the probability distribution of the incubation period. Lognormal, gamma and Weibull distributions were used to model the incubation period. The maximum likelihood estimates of the mean incubation period ranged from 27.30 to 28.56 days according to different distributions. The mean incubation period was shortest with the lognormal distribution (27.30 days), and longest with the Weibull distribution (28.56 days). The best distribution in terms of AIC value was the lognormal distribution with mean value of 27.30 (95% CI: 23.46-31.55) days and standard deviation of 20.20 (15.27-26.31) days. There were no significant differences between the incubation periods for dogs and cats, or between those for male and female dogs. PMID:26688561
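    The distribution comparison described above can be sketched with standard maximum likelihood fits; the incubation periods below are simulated stand-ins, not the Tokyo case data, and fixing the location parameter at zero is an assumption made for simplicity:

```python
import numpy as np
from scipy import stats

# Fit lognormal, gamma and Weibull models to simulated incubation periods
# (days) by maximum likelihood, then compare them with AIC.
rng = np.random.default_rng(1)
days = rng.lognormal(mean=np.log(24), sigma=0.65, size=98)

candidates = {
    "lognormal": stats.lognorm,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
}
for name, dist in candidates.items():
    params = dist.fit(days, floc=0)        # MLE with origin fixed at 0
    ll = dist.logpdf(days, *params).sum()
    k = len(params) - 1                    # free parameters (loc is fixed)
    aic = 2 * k - 2 * ll
    print(f"{name:9s} AIC={aic:7.1f} mean={dist.mean(*params):5.1f} days")
```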

  9. Comparative Study on the Selection Criteria for Fitting Flood Frequency Distribution Models with Emphasis on Upper-Tail Behavior

    NASA Astrophysics Data System (ADS)

    Xiaohong, C.

    2014-12-01

    Many probability distributions have been proposed for flood frequency analysis, and several criteria have been used for selecting the distribution that best fits an observed or generated data set. The upper tail of the flood frequency distribution is of particular concern for flood control. However, different model selection criteria often identify different optimal distributions when the focus is on the upper tail. In this study, with emphasis on upper-tail behavior, 5 distribution selection criteria, including 2 hypothesis tests and 3 information-based criteria, are evaluated in selecting the best fitted distribution from 8 widely used distributions (Pearson 3, Log-Pearson 3, two-parameter lognormal, three-parameter lognormal, Gumbel, Weibull, generalized extreme value and generalized logistic distributions) using datasets from the Thames River (UK), the Wabash River (USA), and the Beijiang and Huai Rivers (China), all of which lie between 23.5 and 66.5 degrees north latitude. The performance of the 5 selection criteria is verified using a composite criterion, defined in this study, that focuses on upper-tail events. This paper shows the approach for the optimal selection of suitable flood frequency distributions for different river basins. Results illustrate that (1) different distributions are selected by the hypothesis tests and the information-based criteria for each river; (2) the information-based criteria perform better than hypothesis tests in most cases when the focus is on the goodness of predictions of extreme upper-tail events; (3) to decide on a particular distribution to fit the high flows, it is better to use combined criteria, in which the information-based criteria are first used to rank the models and the results are then inspected with hypothesis testing methods. In addition, if the information-based criteria and hypothesis tests provide different results, the composite criterion will be taken for
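    As a small illustration of fitting one upper-tail-oriented candidate and checking it with both an information criterion and a hypothesis test, the sketch below fits a GEV distribution to synthetic annual maxima and reports the 100-year return level; all values are invented:

```python
import numpy as np
from scipy import stats

# Fit a GEV to synthetic annual maxima, then report AIC, a KS test,
# and the 100-year return level (upper-tail quantile).
annual_max = stats.genextreme.rvs(c=-0.1, loc=1000, scale=300,
                                  size=60, random_state=2)

params = stats.genextreme.fit(annual_max)
ll = stats.genextreme.logpdf(annual_max, *params).sum()
aic = 2 * len(params) - 2 * ll
ks = stats.kstest(annual_max, "genextreme", args=params)

q100 = stats.genextreme.ppf(1 - 1 / 100, *params)  # 100-year return level
print(f"AIC={aic:.1f}  KS p-value={ks.pvalue:.2f}  Q100={q100:.0f}")
```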

  10. A quantitative quantum-chemical analysis tool for the distribution of mechanical force in molecules

    SciTech Connect

    Stauch, Tim; Dreuw, Andreas

    2014-04-07

    The promising field of mechanochemistry suffers from a general lack of understanding of the distribution and propagation of force in a stretched molecule, which limits its applicability up to the present day. In this article, we introduce the JEDI (Judgement of Energy DIstribution) analysis, which is the first quantum chemical method that provides a quantitative understanding of the distribution of mechanical stress energy among all degrees of freedom in a molecule. The method is carried out on the basis of static or dynamic calculations under the influence of an external force and makes use of a Hessian matrix in redundant internal coordinates (bond lengths, bond angles, and dihedral angles), so that all relevant degrees of freedom of a molecule are included and mechanochemical processes can be interpreted in a chemically intuitive way. The JEDI method is characterized by its modest computational effort, with the calculation of the Hessian being the rate-determining step, and delivers, except for the harmonic approximation, exact ab initio results. We apply the JEDI analysis to several example molecules in both static quantum chemical calculations and Born-Oppenheimer Molecular Dynamics simulations in which molecules are subject to an external force, thus studying not only the distribution and the propagation of strain in mechanically deformed systems, but also gaining valuable insights into the mechanochemically induced isomerization of trans-3,4-dimethylcyclobutene to trans,trans-2,4-hexadiene. The JEDI analysis can potentially be used in the discussion of sonochemical reactions, molecular motors, mechanophores, and photoswitches as well as in the development of molecular force probes.
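    The energy partition underlying a JEDI-type analysis can be sketched in a few lines: within the harmonic approximation, the strain energy (1/2)Δq^T H Δq is split into per-coordinate contributions. The Hessian, coordinates, and displacements below are toy values, not output of the actual JEDI implementation, and the symmetric partition used here is an assumption for illustration:

```python
import numpy as np

# Toy harmonic energy partition in redundant internal coordinates:
# E = 1/2 dq^T H dq is split as E_i = 1/2 dq_i * (H dq)_i.
coords = ["C-C stretch", "C-H stretch", "C-C-C bend", "dihedral"]
H = np.diag([6.0, 5.5, 1.2, 0.3])        # force constants (toy, a.u.)
H[0, 2] = H[2, 0] = 0.4                  # small stretch-bend coupling
dq = np.array([0.08, 0.01, 0.15, 0.40])  # displacements under external force

contrib = 0.5 * dq * (H @ dq)            # per-coordinate strain energy
total = contrib.sum()
for name, e in zip(coords, contrib):
    print(f"{name:12s} {e:6.4f} a.u. ({100 * e / total:4.1f}%)")
print(f"total strain energy: {total:.4f} a.u.")
```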

  11. A quantitative quantum-chemical analysis tool for the distribution of mechanical force in molecules

    NASA Astrophysics Data System (ADS)

    Stauch, Tim; Dreuw, Andreas

    2014-04-01

    The promising field of mechanochemistry suffers from a general lack of understanding of the distribution and propagation of force in a stretched molecule, which limits its applicability up to the present day. In this article, we introduce the JEDI (Judgement of Energy DIstribution) analysis, which is the first quantum chemical method that provides a quantitative understanding of the distribution of mechanical stress energy among all degrees of freedom in a molecule. The method is carried out on the basis of static or dynamic calculations under the influence of an external force and makes use of a Hessian matrix in redundant internal coordinates (bond lengths, bond angles, and dihedral angles), so that all relevant degrees of freedom of a molecule are included and mechanochemical processes can be interpreted in a chemically intuitive way. The JEDI method is characterized by its modest computational effort, with the calculation of the Hessian being the rate-determining step, and delivers, except for the harmonic approximation, exact ab initio results. We apply the JEDI analysis to several example molecules in both static quantum chemical calculations and Born-Oppenheimer Molecular Dynamics simulations in which molecules are subject to an external force, thus studying not only the distribution and the propagation of strain in mechanically deformed systems, but also gaining valuable insights into the mechanochemically induced isomerization of trans-3,4-dimethylcyclobutene to trans,trans-2,4-hexadiene. The JEDI analysis can potentially be used in the discussion of sonochemical reactions, molecular motors, mechanophores, and photoswitches as well as in the development of molecular force probes.

  12. Nanomaterial size distribution analysis via liquid nebulization coupled with ion mobility spectrometry (LN-IMS).

    PubMed

    Jeon, Seongho; Oberreit, Derek R; Van Schooneveld, Gary; Hogan, Christopher J

    2016-02-21

    We apply liquid nebulization (LN) in series with ion mobility spectrometry (IMS, using a differential mobility analyzer coupled to a condensation particle counter) to measure the size distribution functions (the number concentration per unit log diameter) of gold nanospheres in the 5-30 nm range, 70 nm × 11.7 nm gold nanorods, and albumin proteins originally in aqueous suspensions. In prior studies, IMS measurements have only been carried out for colloidal nanoparticles in this size range using electrosprays for aerosolization, as traditional nebulizers produce supermicrometer droplets which leave residue particles from non-volatile species. Residue particles mask the size distribution of the particles of interest. Uniquely, the LN employed in this study uses both online dilution (with dilution factors of up to 10(4)) with ultra-high purity water and a ball-impactor to remove droplets larger than 500 nm in diameter. This combination enables hydrosol-to-aerosol conversion preserving the size and morphology of particles, and also enables higher non-volatile residue tolerance than electrospray based aerosolization. Through LN-IMS measurements we show that the size distribution functions of narrowly distributed but similarly sized particles can be distinguished from one another, which is not possible with Nanoparticle Tracking Analysis in the sub-30 nm size range. Through comparison to electron microscopy measurements, we find that the size distribution functions inferred via LN-IMS measurements correspond to the particle sizes coated by surfactants, i.e. as they persist in colloidal suspensions. Finally, we show that the gas phase particle concentrations inferred from IMS size distribution functions are functions only of the liquid phase particle concentration, and are independent of particle size, shape, and chemical composition. Therefore LN-IMS enables characterization of the size, yield, and polydispersity of sub-30 nm particles. PMID:26750519
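    For reference, the size distribution function named above (number concentration per unit log diameter) is computed from binned counts as in this sketch; the bin edges and counts are invented, not LN-IMS data:

```python
import numpy as np

# Turn binned diameter counts into dN/dlogDp, the number concentration
# per unit log diameter; values are illustrative.
edges = np.array([5.0, 7.0, 10.0, 14.0, 20.0, 28.0])   # nm
counts = np.array([120.0, 480.0, 900.0, 410.0, 90.0])  # particles/cm^3 per bin

dlogDp = np.log10(edges[1:]) - np.log10(edges[:-1])
dNdlogDp = counts / dlogDp
centers = np.sqrt(edges[:-1] * edges[1:])               # geometric bin centers

for d, y in zip(centers, dNdlogDp):
    print(f"Dp = {d:5.1f} nm   dN/dlogDp = {y:8.1f} cm^-3")
```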

  13. Laws prohibiting peer distribution of injecting equipment in Australia: A critical analysis of their effects.

    PubMed

    Lancaster, Kari; Seear, Kate; Treloar, Carla

    2015-12-01

    The law is a key site for the production of meanings around the 'problem' of drugs in public discourse. In this article, we critically consider the material-discursive 'effects' of laws prohibiting peer distribution of needles and syringes in Australia. Taking the laws and regulations governing possession and distribution of injecting equipment in one jurisdiction (New South Wales, Australia) as a case study, we use Carol Bacchi's poststructuralist approach to policy analysis to critically consider the assumptions and presuppositions underpinning this legislative and regulatory framework, with a particular focus on examining the discursive, subjectification and lived effects of these laws. We argue that legislative prohibitions on the distribution of injecting equipment except by 'authorised persons' within 'approved programs' constitute people who inject drugs as irresponsible, irrational, and untrustworthy and re-inscribe a familiar stereotype of the drug 'addict'. These constructions of people who inject drugs fundamentally constrain how the provision of injecting equipment may be thought about in policy and practice. We suggest that prohibitions on the distribution of injecting equipment among peers may also have other, material, effects and may be counterproductive to various public health aims and objectives. However, the actions undertaken by some people who inject drugs to distribute equipment to their peers may disrupt and challenge these constructions, through a counter-discourse in which people who inject drugs are constituted as active agents with a vital role to play in blood-borne virus prevention in the community. Such activity continues to bring with it the risk of criminal prosecution, and so it remains a vexed issue. These insights have implications of relevance beyond Australia, particularly for other countries around the world that prohibit peer distribution, but also for other legislative practices with material-discursive effects in

  14. First experience and adaptation of existing tools to ATLAS distributed analysis

    NASA Astrophysics Data System (ADS)

    de La Hoz, S. G.; Ruiz, L. M.; Liko, D.

    2008-02-01

    The ATLAS production system has been successfully used to run production of simulation data at an unprecedented scale in ATLAS. Up to 10000 jobs were processed on about 100 sites in one day. The experience obtained operating the system on several grid flavours was essential to perform a user analysis using grid resources. First tests of the distributed analysis system were then performed. In the preparation phase, data were registered in the LHC file catalog (LFC) and replicated to external sites. For the main test, only a few resources were used. All these tests are only a first step towards the validation of the computing model. The ATLAS Computing Management Board decided to integrate the collaboration efforts in distributed analysis into a single project, GANGA. The goal is to test the reconstruction and analysis software in a large-scale data production using grid flavours at several sites. GANGA allows trivial switching between running test jobs on a local batch system and running large-scale analyses on the grid; it provides job splitting and merging, and includes automated job monitoring and output retrieval.

  15. Analysis of discrete and continuous distributions of ventilatory time constants from dynamic computed tomography

    NASA Astrophysics Data System (ADS)

    Doebrich, Marcus; Markstaller, Klaus; Karmrodt, Jens; Kauczor, Hans-Ulrich; Eberle, Balthasar; Weiler, Norbert; Thelen, Manfred; Schreiber, Wolfgang G.

    2005-04-01

    In this study, an algorithm was developed to measure the distribution of pulmonary time constants (TCs) from dynamic computed tomography (CT) data sets during a sudden airway pressure step up. Simulations with synthetic data were performed to test the methodology as well as the influence of experimental noise. Furthermore the algorithm was applied to in vivo data. In five pigs sudden changes in airway pressure were imposed during dynamic CT acquisition in healthy lungs and in a saline lavage ARDS model. The fractional gas content in the imaged slice (FGC) was calculated by density measurements for each CT image. Temporal variations of the FGC were analysed assuming a model with a continuous distribution of exponentially decaying time constants. The simulations proved the feasibility of the method. The influence of experimental noise could be well evaluated. Analysis of the in vivo data showed that in healthy lungs ventilation processes can be more likely characterized by discrete TCs whereas in ARDS lungs continuous distributions of TCs are observed. The temporal behaviour of lung inflation and deflation can be characterized objectively using the described new methodology. This study indicates that continuous distributions of TCs reflect lung ventilation mechanics more accurately compared to discrete TCs.
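    A minimal sketch of recovering a distribution of time constants, assuming the response can be modeled as a non-negative mixture of exponential modes over a grid of candidate TCs and solved by non-negative least squares (the paper's exact algorithm is not reproduced, and the data below are synthetic):

```python
import numpy as np
from scipy.optimize import nnls

# Model a washout-like fractional gas content signal as a non-negative
# mixture of exponential modes over a grid of candidate time constants.
t = np.linspace(0, 10, 80)                       # s
true = 1 - 0.7 * np.exp(-t / 0.8) - 0.3 * np.exp(-t / 3.0)
rng = np.random.default_rng(3)
y = true + rng.normal(0, 0.01, t.size)           # add experimental noise

taus = np.logspace(-1, 1, 40)                    # candidate time constants
A = 1 - np.exp(-t[:, None] / taus[None, :])      # design matrix
w, _ = nnls(A, y)                                # non-negative weights

for tau, wi in zip(taus, w):
    if wi > 0.02:
        print(f"TC = {tau:5.2f} s  weight = {wi:.2f}")
```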

  16. Further Progress Applying the Generalized Wigner Distribution to Analysis of Vicinal Surfaces

    NASA Astrophysics Data System (ADS)

    Einstein, T. L.; Richards, Howard L.; Cohen, S. D.

    2001-03-01

    Terrace width distributions (TWDs) can be well fit by the generalized Wigner distribution (GWD), generally better than by conventional Gaussians, which offers a convenient way to estimate the dimensionless elastic repulsion strength Ã from σ², the TWD variance (T.L. Einstein and O. Pierre-Louis, Surface Sci. 424, L299 (1999)). The GWD σ² accurately reproduces values for the two exactly soluble cases at small Ã and in the asymptotic limit. Taxing numerical simulations show that the GWD σ² interpolates well between these limits. Extensive applications have been made to experimental data, especially on Cu (M. Giesen and T.L. Einstein, Surface Sci. 449, 191 (2000)). Recommended analysis procedures are catalogued (H.L. Richards, S.D. Cohen, T.L. Einstein, and M. Giesen, Surf. Sci. 453, 59 (2000)). Extensions of the GWD for multistep distributions are tested, with good agreement for second-neighbor distributions, less good for third (T.L. Einstein, H.L. Richards, S.D. Cohen, and O. Pierre-Louis, Proc. ISSI-PDSC2000, cond-mat/0012xxxxx). Alternatively, step-step correlation functions, about which there is more theoretical information, should be measured.
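    A hedged sketch of estimating Ã from measured terrace widths by a maximum likelihood GWD fit; the unit-mean GWD form P(s) = a s^ϱ exp(−b s²) and the mapping Ã = ϱ(ϱ−2)/4 are taken here as assumptions from the cited literature, and the widths are synthetic:

```python
import numpy as np
from scipy.special import gammaln
from scipy.optimize import minimize_scalar

# MLE fit of the generalized Wigner distribution P(s) = a s^rho exp(-b s^2)
# (unit mean) to rescaled terrace widths; A~ = rho*(rho-2)/4 is assumed.
rng = np.random.default_rng(4)
s = rng.gamma(shape=16, scale=1 / 16, size=2000)   # toy TWD with unit mean

def neg_loglik(rho):
    # b and a follow from normalization and unit mean
    b = np.exp(2 * (gammaln((rho + 2) / 2) - gammaln((rho + 1) / 2)))
    log_a = np.log(2) + (rho + 1) / 2 * np.log(b) - gammaln((rho + 1) / 2)
    return -(log_a + rho * np.log(s) - b * s**2).sum()

rho = minimize_scalar(neg_loglik, bounds=(0.5, 20), method="bounded").x
print(f"rho = {rho:.2f}, estimated A~ = {rho * (rho - 2) / 4:.2f}")
```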

  17. Inverse analysis of non-uniform temperature distributions using multispectral pyrometry

    NASA Astrophysics Data System (ADS)

    Fu, Tairan; Duan, Minghao; Tian, Jibin; Shi, Congling

    2016-05-01

    Optical diagnostics can be used to obtain sub-pixel temperature information in remote sensing. A multispectral pyrometry method was developed using multiple spectral radiation intensities to deduce the temperature area distribution in the measurement region. The method transforms a spot multispectral pyrometer with a fixed field of view into a pyrometer with enhanced spatial resolution that can give sub-pixel temperature information from a "one pixel" measurement region. A temperature area fraction function was defined to represent the spatial temperature distribution in the measurement region. The method is illustrated by simulations of a multispectral pyrometer with a spectral range of 8.0-13.0 μm measuring a non-isothermal region with a temperature range of 500-800 K in the spot pyrometer field of view. The inverse algorithm for the sub-pixel temperature distribution (temperature area fractions) in the "one pixel" verifies this multispectral pyrometry method. The results show that an improved Levenberg-Marquardt algorithm is effective for this ill-posed inverse problem with relative errors in the temperature area fractions of (-3%, 3%) for most of the temperatures. The analysis provides a valuable reference for the use of spot multispectral pyrometers for sub-pixel temperature distributions in remote sensing measurements.
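    The inverse problem has the following structure: each measured band radiance is a mixture of Planck radiances from sub-pixel zones, weighted by unknown area fractions. The paper uses an improved Levenberg-Marquardt algorithm; for brevity this sketch substitutes non-negative least squares, and all wavelengths, temperatures, and noise levels are illustrative:

```python
import numpy as np
from scipy.optimize import nnls

# Recover sub-pixel temperature area fractions from multispectral radiances.
h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

def planck(lam, T):
    # spectral radiance of a blackbody, SI units
    return 2 * h * c**2 / lam**5 / (np.exp(h * c / (lam * kB * T)) - 1)

lam = np.linspace(8e-6, 13e-6, 20)               # 8-13 um spectral samples
T_grid = np.arange(500, 801, 25.0)               # candidate temperatures (K)
f_true = np.zeros(T_grid.size)
f_true[2], f_true[8] = 0.6, 0.4                  # two-temperature target

A = np.array([[planck(l, T) for T in T_grid] for l in lam])
y = A @ f_true
y *= 1 + np.random.default_rng(5).normal(0, 0.01, y.size)  # 1% noise

f_hat, _ = nnls(A, y)                            # non-negative area fractions
f_hat /= f_hat.sum()                             # normalize to unit area
for T, f in zip(T_grid, f_hat):
    if f > 0.05:
        print(f"T = {T:5.0f} K  area fraction = {f:.2f}")
```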

  18. Analysis of the Effects of Streamwise Lift Distribution on Sonic Boom Signature

    NASA Technical Reports Server (NTRS)

    Yoo, Paul

    2013-01-01

    Investigation of sonic boom has been one of the major areas of study in aeronautics due to the benefits a low-boom aircraft has in both civilian and military applications. This work conducts a numerical analysis of the effects of streamwise lift distribution on the shock coalescence characteristics. A simple wing-canard-stabilator body model is used in the numerical simulation. The streamwise lift distribution is varied by fixing the canard at a deflection angle while trimming the aircraft with the wing and the stabilator at the desired lift coefficient. The lift and the pitching moment coefficients are computed using Missile DATCOM v. 707. The flow field around the wing-canard-stabilator body model is resolved using the OVERFLOW-2 flow solver. Overset/chimera grid topology is used to simplify the grid generation of various configurations representing different streamwise lift distributions. The numerical simulations are performed without viscosity unless it is required for numerical stability. All configurations are simulated at Mach 1.4, an angle of attack of 1.5°, a lift coefficient of 0.05, and a pitching moment coefficient of approximately 0. Four streamwise lift distribution configurations were tested.

  19. A spatial pattern analysis of the halophytic species distribution in an arid coastal environment.

    PubMed

    Badreldin, Nasem; Uria-Diez, J; Mateu, J; Youssef, Ali; Stal, Cornelis; El-Bana, Magdy; Magdy, Ahmed; Goossens, Rudi

    2015-05-01

    Obtaining information about the spatial distribution of desert plants is considered as a serious challenge for ecologists and environmental modeling due to the required intensive field work and infrastructures in harsh and remote arid environments. A new method was applied for assessing the spatial distribution of the halophytic species (HS) in an arid coastal environment. This method was based on the object-based image analysis for a high-resolution Google Earth satellite image. The integration of the image processing techniques and field work provided accurate information about the spatial distribution of HS. The extracted objects were based on assumptions that explained the plant-pixel relationship. Three different types of digital image processing techniques were implemented and validated to obtain an accurate HS spatial distribution. A total of 2703 individuals of the HS community were found in the case study, and approximately 82% were located above an elevation of 2 m. The micro-topography exhibited a significant negative relationship with pH and EC (r = -0.79 and -0.81, respectively, p < 0.001). The spatial structure was modeled using stochastic point processes, in particular a hybrid family of Gibbs processes. A new model is proposed that uses a hard-core structure at very short distances, together with a cluster structure in short-to-medium distances and a Poisson structure for larger distances. This model was found to fit the data perfectly well. PMID:25838060

  20. Mode-distribution analysis of quasielastic neutron scattering and application to liquid water

    NASA Astrophysics Data System (ADS)

    Kikuchi, Tatsuya; Nakajima, Kenji; Ohira-Kawamura, Seiko; Inamura, Yasuhiro; Yamamuro, Osamu; Kofu, Maiko; Kawakita, Yukinobu; Suzuya, Kentaro; Nakamura, Mitsutaka; Arai, Masatoshi

    2013-06-01

    A quasielastic neutron scattering (QENS) experiment is a particular technique that endeavors to define a relationship between time and space for the diffusion dynamics of atoms and molecules. However, in most cases, analyses of QENS data are model dependent, which may distort attempts to elucidate the actual diffusion dynamics. We have developed a method for processing QENS data without a specific model, wherein all modes can be described as combinations of the relaxations based on the exponential law. By this method, we can obtain a distribution function B(Q,Γ), which we call the mode-distribution function (MDF), to represent the number of relaxation modes and distributions of the relaxation times in the modes. The deduction of MDF is based on the maximum entropy method and is very versatile in QENS data analysis. To verify this method, reproducibility was checked against several analytical models, such as that with a mode of distributed relaxation time, that with two modes closely located, and that represented by the Kohlrausch-Williams-Watts function. We report the first application to experimental data of liquid water. In addition to the two known modes, the existence of a relaxation mode of water molecules with an intermediate time scale has been discovered. We propose that the fast mode might be assigned to an intermolecular motion and the intermediate motion might be assigned to a rotational motion of the water molecules instead of to the fast mode.

  1. Quantitative Analysis of Subcellular Distribution of the SUMO Conjugation System by Confocal Microscopy Imaging.

    PubMed

    Mas, Abraham; Amenós, Montse; Lois, L Maria

    2016-01-01

    Different studies point to an enrichment in SUMO conjugation in the cell nucleus, although non-nuclear SUMO targets also exist. In general, the study of the subcellular localization of proteins is essential for understanding their function within a cell. Fluorescence microscopy is a powerful tool for studying subcellular protein partitioning in living cells, since fluorescent proteins can be fused to proteins of interest to determine their localization. The subcellular distribution of proteins can be influenced by binding to other biomolecules and by posttranslational modifications. Sometimes these changes affect only a portion of the protein pool or have a partial effect, and a quantitative evaluation of fluorescence images is required to identify protein redistribution among subcellular compartments. In order to obtain accurate data about the relative subcellular distribution of SUMO conjugation machinery members, and to identify the molecular determinants involved in their localization, we have applied quantitative confocal microscopy imaging. In this chapter, we describe the fluorescent protein fusions used in these experiments, and how to measure, evaluate, and compare average fluorescence intensities in cellular compartments by image-based analysis. We show the distribution of some components of the Arabidopsis SUMOylation machinery in onion epidermal cells and how they change their distribution in the presence of interacting partners or even when their activity is affected. PMID:27424751

  2. Rural tourism spatial distribution based on multi-criteria decision analysis and GIS

    NASA Astrophysics Data System (ADS)

    Zhang, Hongxian; Yang, Qingsheng

    2008-10-01

    Studying the spatial distribution of rural tourism can provide a scientific basis for decisions on developing rural economies. Traditional approaches to tourism spatial distribution have limitations in quantifying priority locations for tourism development on small units: they only produce overall distribution locations and indicate whether locations are suitable for tourism development, whereas the development ranking under different decision objectives should also be considered. This paper presents a way to rank locations for rural tourism development spatially by integrating multi-criteria decision analysis (MCDA) and geographic information systems (GIS). Under the objective of developing rural economies, areas with inconvenient transportation, undeveloped economies and good tourism resources should be the first to develop rural tourism. Based on this objective, the tourism development priority utility of each town is calculated with MCDA and GIS, and towns with higher priority utility can be selected to develop rural tourism first. The method was used successfully to rank locations for rural tourism in Ningbo City. The result shows that MCDA is an effective way to distribute rural tourism spatially based on specific decision objectives, and that rural tourism can promote economic development.
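    A toy weighted-sum MCDA ranking in the spirit of the described method; the towns, criterion scores, and weights are invented, and a real application would derive the criterion layers from GIS data:

```python
# Weighted-sum MCDA sketch for ranking towns by tourism development
# priority. Scores are 0-1 and oriented so that higher means higher
# priority (poor transport, weak economy, rich tourism resources).
towns = {
    #          transport  economy  resources
    "Town A": (0.8,       0.9,     0.7),
    "Town B": (0.3,       0.4,     0.9),
    "Town C": (0.6,       0.7,     0.5),
}
weights = (0.3, 0.3, 0.4)   # criterion weights, summing to 1

utility = {
    name: sum(w * s for w, s in zip(weights, scores))
    for name, scores in towns.items()
}
for name, u in sorted(utility.items(), key=lambda kv: -kv[1]):
    print(f"{name}: priority utility = {u:.2f}")
```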

  3. Pore space analysis of NAPL distribution in sand-clay media

    USGS Publications Warehouse

    Matmon, D.; Hayden, N.J.

    2003-01-01

    This paper introduces a conceptual model of clays and non-aqueous phase liquids (NAPLs) at the pore scale that has been developed from a mathematical unit cell model, and direct micromodel observation and measurement of clay-containing porous media. The mathematical model uses a unit cell concept with uniform spherical grains for simulating the sand in the sand-clay matrix (about 10% clay). Micromodels made with glass slides and including different clay-containing porous media were used to investigate the two clays (kaolinite and montmorillonite) and NAPL distribution within the pore space. The results were used to understand the distribution of NAPL advancing into initially saturated sand and sand-clay media, and provided a detailed analysis of the pore-scale geometry, pore size distribution, NAPL entry pressures, and the effect of clay on this geometry. Interesting NAPL saturation profiles were observed as a result of the complexity of the pore space geometry with the different packing angles and the presence of clays. The unit cell approach has applications for enhancing the mechanistic understanding and conceptualization, both visually and mathematically, of pore-scale processes such as NAPL and clay distribution. © 2003 Elsevier Science Ltd. All rights reserved.

  4. Empirical analysis on the connection between power-law distributions and allometries for urban indicators

    NASA Astrophysics Data System (ADS)

    Alves, L. G. A.; Ribeiro, H. V.; Lenzi, E. K.; Mendes, R. S.

    2014-09-01

    We report on the existing connection between power-law distributions and allometries. As first reported in Gomez-Lievano et al. (2012) for the relationship between homicides and population, when these urban indicators present asymptotic power-law distributions, they can also display specific allometries among themselves. Here, we present an extensive characterization of this connection when considering all possible pairs of relationships from twelve urban indicators of Brazilian cities (such as child labor, illiteracy, income, sanitation and unemployment). Our analysis reveals that all our urban indicators are asymptotically distributed as power laws and that the proposed connection also holds for our data when the allometric relationship displays enough correlations. We have also found that not all allometric relationships are independent and that they can be understood as a consequence of the allometric relationship between the urban indicator and the population size. We further show that the residual fluctuations surrounding the allometries are characterized by an almost constant variance and log-normal distributions.
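    Both ingredients of the analysis, an allometric exponent and a power-law tail exponent, can be estimated as in this sketch on synthetic city data; the Hill estimator used here is one standard maximum likelihood choice, not necessarily the authors' method:

```python
import numpy as np

# Estimate an allometric scaling exponent (log-log OLS) and a power-law
# tail exponent (Hill estimator) from synthetic city-level data.
rng = np.random.default_rng(6)
pop = (rng.pareto(1.2, 500) + 1) * 1e4                      # populations
indicator = 2e-3 * pop**1.15 * rng.lognormal(0, 0.3, 500)   # allometric + noise

# allometric exponent beta in indicator ~ pop^beta
beta, log_c = np.polyfit(np.log(pop), np.log(indicator), 1)
print(f"allometric exponent beta = {beta:.2f}")

# tail exponent alpha in P(X > x) ~ x^-alpha, above a chosen xmin
xmin = np.quantile(pop, 0.8)
tail = pop[pop >= xmin]
alpha = tail.size / np.log(tail / xmin).sum()               # Hill estimator
print(f"tail exponent alpha = {alpha:.2f}")
```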

  5. Iterative Monte Carlo analysis of spin-dependent parton distributions

    DOE PAGESBeta

    Sato, Nobuo; Melnitchouk, Wally; Kuhn, Sebastian E.; Ethier, Jacob J.; Accardi, Alberto

    2016-04-05

    We present a comprehensive new global QCD analysis of polarized inclusive deep-inelastic scattering, including the latest high-precision data on longitudinal and transverse polarization asymmetries from Jefferson Lab and elsewhere. The analysis is performed using a new iterative Monte Carlo fitting technique which generates stable fits to polarized parton distribution functions (PDFs) with statistically rigorous uncertainties. Inclusion of the Jefferson Lab data leads to a reduction in the PDF errors for the valence and sea quarks, as well as in the gluon polarization uncertainty at x ≳ 0.1. Furthermore, the study provides the first determination of the flavor-separated twist-3 PDFs and the d2 moment of the nucleon within a global PDF analysis.

  6. Multimedia content analysis and indexing: evaluation of a distributed and scalable architecture

    NASA Astrophysics Data System (ADS)

    Mandviwala, Hasnain; Blackwell, Scott; Weikart, Chris; Van Thong, Jean-Manuel

    2003-11-01

    Multimedia search engines facilitate the retrieval of documents from large media content archives now available via intranets and the Internet. Over the past several years, many research projects have focused on algorithms for analyzing and indexing media content efficiently. However, special system architectures are required to process large amounts of content from real-time feeds or existing archives. Possible solutions include dedicated distributed architectures for analyzing content rapidly and for making it searchable. The system architecture we propose implements such an approach: a highly distributed and reconfigurable batch media content analyzer that can process media streams and static media repositories. Our distributed media analysis application handles media acquisition, content processing, and document indexing. This collection of modules is orchestrated by a task flow management component, exploiting data and pipeline parallelism in the application. A scheduler manages load balancing and prioritizes the different tasks. Workers implement application-specific modules that can be deployed on an arbitrary number of nodes running different operating systems. Each application module is exposed as a web service, implemented with industry-standard interoperable middleware components such as Microsoft ASP.NET and Sun J2EE. Our system architecture is the next generation system for the multimedia indexing application demonstrated by www.speechbot.com. It can process large volumes of audio recordings with minimal support and maintenance, while running on low-cost commodity hardware. The system has been evaluated on a server farm running concurrent content analysis processes.

  7. Statistical analysis of secondary particle distributions in relativistic nucleus-nucleus collisions

    NASA Technical Reports Server (NTRS)

    Mcguire, Stephen C.

    1987-01-01

    The use is described of several statistical techniques to characterize structure in the angular distributions of secondary particles from nucleus-nucleus collisions in the energy range 24 to 61 GeV/nucleon. The objective of this work was to determine whether there are correlations between emitted particle intensity and angle that may be used to support the existence of the quark gluon plasma. The techniques include chi-square null hypothesis tests, the method of discrete Fourier transform analysis, and fluctuation analysis. We have also used the method of composite unit vectors to test for azimuthal asymmetry in a data set of 63 JACEE-3 events. Each method is presented in a manner that provides the reader with some practical detail regarding its application. Of those events with relatively high statistics, an Fe event at 55 GeV/nucleon was found to possess an azimuthal distribution with a highly non-random structure. No evidence of non-statistical fluctuations was found in the pseudo-rapidity distributions of the events studied. It is seen that the most effective application of these methods relies upon the availability of many events or single events that possess very high multiplicities.
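    The composite-unit-vector test for azimuthal asymmetry mentioned above amounts to a Rayleigh-type statistic, sketched here on simulated angles:

```python
import numpy as np

# Rayleigh-type test: sum unit vectors at the azimuthal emission angles
# and compare the normalized resultant length against the uniform case.
rng = np.random.default_rng(7)
n = 200
phi = rng.uniform(0, 2 * np.pi, n)        # azimuthal angles of secondaries

R = np.hypot(np.cos(phi).sum(), np.sin(phi).sum()) / n
p_value = np.exp(-n * R**2)               # large-n Rayleigh approximation
print(f"resultant length R = {R:.3f}, p = {p_value:.3f}")
# a small p would indicate non-random azimuthal structure
```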

  8. Independent Orbiter Assessment (IOA): Analysis of the electrical power distribution and control subsystem, volume 1

    NASA Technical Reports Server (NTRS)

    Schmeckpeper, K. R.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Electrical Power Distribution and Control (EPD and C) hardware. The EPD and C hardware performs the functions of distributing, sensing, and controlling 28 volt DC power and of inverting, distributing, sensing, and controlling 117 volt 400 Hz AC power to all Orbiter subsystems from the three fuel cells in the Electrical Power Generation (EPG) subsystem. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Of the 1671 failure modes analyzed, 9 single failures were determined to result in loss of crew or vehicle. Three single failures unique to intact abort were determined to result in possible loss of the crew or vehicle. A possible loss of mission could result if any of 136 single failures occurred. Six of the criticality 1/1 failures are in two rotary and two pushbutton switches that control External Tank and Solid Rocket Booster separation. The other 6 criticality 1/1 failures are fuses, one each per Aft Power Control Assembly (APCA) 4, 5, and 6 and one each per Forward Power Control Assembly (FPCA) 1, 2, and 3, that supply power to certain Main Propulsion System (MPS) valves and Forward Reaction Control System (RCS) circuits.

  9. Spatial Intensity Distribution Analysis Reveals Abnormal Oligomerization of Proteins in Single Cells.

    PubMed

    Godin, Antoine G; Rappaz, Benjamin; Potvin-Trottier, Laurent; Kennedy, Timothy E; De Koninck, Yves; Wiseman, Paul W

    2015-08-18

    Knowledge of membrane receptor organization is essential for understanding the initial steps in cell signaling and trafficking mechanisms, but quantitative analysis of receptor interactions at the single-cell level and in different cellular compartments has remained highly challenging. To achieve this, we apply a quantitative image analysis technique, spatial intensity distribution analysis (SpIDA), that can measure fluorescent particle concentrations and oligomerization states within different subcellular compartments in live cells. An important technical challenge faced by fluorescence microscopy-based measurement of oligomerization is the fidelity of receptor labeling. In practice, imperfect labeling biases the distribution of oligomeric states measured within an aggregated system. We extend SpIDA to enable analysis of high-order oligomers from fluorescence microscopy images, by including a probability weighted correction algorithm for nonemitting labels. We demonstrated that this fraction of nonemitting probes could be estimated in single cells using SpIDA measurements on model systems with known oligomerization state. Previously, this artifact was measured using single-step photobleaching. This approach was validated using computer-simulated data and the imperfect labeling was quantified in cells with ion channels of known oligomer subunit count. It was then applied to quantify the oligomerization states in different cell compartments of the proteolipid protein (PLP) expressed in COS-7 cells. Expression of a mutant PLP linked to impaired trafficking resulted in the detection of PLP tetramers that persist in the endoplasmic reticulum, while no difference was measured at the membrane between the distributions of wild-type and mutated PLPs. Our results demonstrate that SpIDA allows measurement of protein oligomerization in different compartments of intact cells, even when fractional mislabeling occurs as well as photobleaching during the imaging process, and

  10. Spatio-Temporal Distribution Characteristics and Trajectory Similarity Analysis of Tuberculosis in Beijing, China.

    PubMed

    Li, Lan; Xi, Yuliang; Ren, Fu

    2016-03-01

    Tuberculosis (TB) is an infectious disease with one of the highest reported incidences in China. The detection of the spatio-temporal distribution characteristics of TB is indicative of its prevention and control conditions. Trajectory similarity analysis detects variations and loopholes in prevention and provides urban public health officials and related decision makers more information for the allocation of public health resources and the formulation of prioritized health-related policies. This study analysed the spatio-temporal distribution characteristics of TB from 2009 to 2014 by utilizing spatial statistics, spatial autocorrelation analysis, and space-time scan statistics. Spatial statistics measured the TB incidence rate (TB patients per 100,000 residents) at the district level to determine its spatio-temporal distribution and to identify characteristics of change. Spatial autocorrelation analysis was used to detect global and local spatial autocorrelations across the study area. Purely spatial, purely temporal and space-time scan statistics were used to identify purely spatial, purely temporal and spatio-temporal clusters of TB at the district level. The other objective of this study was to compare the trajectory similarities between the incidence rates of TB and new smear-positive (NSP) TB patients in the resident population (NSPRP)/new smear-positive TB patients in the TB patient population (NSPTBP)/retreated smear-positive (RSP) TB patients in the resident population (RSPRP)/retreated smear-positive TB patients in the TB patient population (RSPTBP) to detect variations and loopholes in TB prevention and control among the districts in Beijing. The incidence rates in Beijing exhibited a gradual decrease from 2009 to 2014. Although global spatial autocorrelation was not detected overall across all of the districts of Beijing, individual districts did show evidence of local spatial autocorrelation: Chaoyang and Daxing were Low-Low districts over the six
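    For reference, global spatial autocorrelation of district-level rates, as tested above, is commonly summarized by Moran's I; the sketch below uses an invented five-district adjacency, not the Beijing geography:

```python
import numpy as np

# Global Moran's I for district incidence rates with a row-standardized
# binary adjacency matrix; layout and rates are invented.
rates = np.array([40.0, 38.0, 25.0, 24.0, 45.0])   # per 100,000
W = np.array([                                     # 1 = shared border
    [0, 1, 1, 0, 1],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 1, 1, 0, 1],
    [1, 0, 0, 1, 0],
], dtype=float)
W /= W.sum(axis=1, keepdims=True)                  # row-standardize

z = rates - rates.mean()
n = rates.size
I = (n / W.sum()) * (z @ W @ z) / (z @ z)          # Moran's I
print(f"Moran's I = {I:.3f} (expectation under no autocorrelation: {-1/(n-1):.3f})")
```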

  11. Spatio-Temporal Distribution Characteristics and Trajectory Similarity Analysis of Tuberculosis in Beijing, China

    PubMed Central

    Li, Lan; Xi, Yuliang; Ren, Fu

    2016-01-01

    Tuberculosis (TB) is an infectious disease with one of the highest reported incidences in China. The detection of the spatio-temporal distribution characteristics of TB is indicative of its prevention and control conditions. Trajectory similarity analysis detects variations and loopholes in prevention and provides urban public health officials and related decision makers more information for the allocation of public health resources and the formulation of prioritized health-related policies. This study analysed the spatio-temporal distribution characteristics of TB from 2009 to 2014 by utilizing spatial statistics, spatial autocorrelation analysis, and space-time scan statistics. Spatial statistics measured the TB incidence rate (TB patients per 100,000 residents) at the district level to determine its spatio-temporal distribution and to identify characteristics of change. Spatial autocorrelation analysis was used to detect global and local spatial autocorrelations across the study area. Purely spatial, purely temporal and space-time scan statistics were used to identify purely spatial, purely temporal and spatio-temporal clusters of TB at the district level. The other objective of this study was to compare the trajectory similarities between the incidence rates of TB and new smear-positive (NSP) TB patients in the resident population (NSPRP)/new smear-positive TB patients in the TB patient population (NSPTBP)/retreated smear-positive (RSP) TB patients in the resident population (RSPRP)/retreated smear-positive TB patients in the TB patient population (RSPTBP) to detect variations and loopholes in TB prevention and control among the districts in Beijing. The incidence rates in Beijing exhibited a gradual decrease from 2009 to 2014. Although global spatial autocorrelation was not detected overall across all of the districts of Beijing, individual districts did show evidence of local spatial autocorrelation: Chaoyang and Daxing were Low-Low districts over the six

  12. Bayesian Combination of Regional and Local Information Using Some Common Distributions in Hydrology

    NASA Astrophysics Data System (ADS)

    Seidou, O.; Ouarda, T. B.

    2007-05-01

    The main challenge in flood frequency analysis is to find relevant and sufficient information to fit a local distribution with an acceptable precision to the variable of interest. This precision impacts the cost and reliability of hydraulic structures as well as the safety of downstream communities. If the site of interest has been monitored for a sufficiently long period (more than 30-40 years), at-site frequency analysis can be used to estimate flood quantiles with a fair precision. Otherwise, regional estimation may be used to mitigate the lack of data, but local information is then ignored. The authors propose a Bayesian method in this paper that uses both sources of information for even more precise quantile estimation. The proposed method uses the classical log-linear regression as regional model and assumes that the local flood peaks are GEV, Gamma, Weibull, Log-Normal or exponentially distributed. The method works even with a single local observation besides relaxing the hypothesis of normality of the quantiles probability distribution that is used in the empirical Bayes approach. A thorough performance assessment was made with the GEV distribution and it was shown that a) when the regional model is unbiased, the proposed method gives better estimation of the GEV quantiles and parameters than the local, regional and empirical Bayes estimators; b) even when the regional model displays a severe relative bias when estimating the quantiles, the proposed method still gives the best estimation of the GEV shape parameter and outperforms the other approaches on higher quantiles provided that the relative bias is the same for all quantiles; c) the gain in performance with the new approach is considerable for sites with very short records. Theoretical developments and some preliminary results are presented for the other distributions. Keywords: regionalization, probability distribution, linear regression, empirical Bayesian method, flood frequency analysis
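    A heavily simplified sketch of the combination idea: the regional model supplies a prior for a local distribution parameter, which a handful of local observations then update. A Gumbel local model with known scale and a lognormal prior are assumed purely for brevity; the paper itself treats GEV, Gamma, Weibull, Log-Normal and exponential local models:

```python
import numpy as np
from scipy import stats

# Grid posterior for a Gumbel location parameter: regional prior times
# local likelihood from five annual maxima. All numbers are invented.
rng = np.random.default_rng(8)
local = stats.gumbel_r.rvs(loc=120, scale=40, size=5, random_state=rng)

mu = np.linspace(50, 250, 400)                   # candidate locations
prior = stats.lognorm.pdf(mu, s=0.3, scale=100)  # from the regional model
loglik = np.array([stats.gumbel_r.logpdf(local, loc=m, scale=40).sum()
                   for m in mu])                 # local data likelihood
post = prior * np.exp(loglik - loglik.max())
post /= np.trapz(post, mu)                       # normalize the posterior

print(f"posterior mean location: {np.trapz(mu * post, mu):.1f}")
```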

  13. Distribution water quality anomaly detection from UV optical sensor monitoring data by integrating principal component analysis with chi-square distribution.

    PubMed

    Hou, Dibo; Zhang, Jian; Yang, Zheling; Liu, Shu; Huang, Pingjie; Zhang, Guangxin

    2015-06-29

    Ensuring the security of distribution water quality has recently attracted global attention due to the potential threat from harmful contaminants. Real-time monitoring based on ultraviolet optical sensors is a promising technique: it is reagent-free, has low maintenance cost, provides rapid analysis and covers a wide range of contaminants. However, the ultraviolet absorption spectra are of large size and easily interfered with. In on-site applications, there is almost no prior knowledge, such as the spectral characteristics of potential contaminants, before they are determined. Meanwhile, what constitutes normal water quality also varies with operating conditions. In this paper, a procedure based on multivariate statistical analysis is proposed to detect distribution water quality anomalies using ultraviolet optical sensors. First, principal component analysis is employed to capture the main variation features from the spectral matrix and reduce its dimensionality. A new statistical variable is then constructed and used to evaluate the local outlying degree according to the chi-square distribution in the principal component subspace. The probability that the latest observation is anomalous is calculated by accumulating the outlying degrees of the adjacent previous observations. To develop a more reliable anomaly detection procedure, several key parameters are discussed. Using the proposed methods, distribution water quality anomalies and abnormal optical changes can be detected. A contaminant intrusion experiment was conducted in a pilot-scale distribution system by injecting phenol solution. The effectiveness of the proposed procedure is finally verified using the experimental spectral data. PMID:26191757
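    The two-step procedure (PCA projection, then a chi-square score in the principal component subspace) can be sketched as follows on simulated spectra:

```python
import numpy as np
from scipy.stats import chi2

# Project spectra onto leading principal components, then score a new
# observation against the chi-square distribution of its squared
# standardized distance in that subspace. Spectra are simulated.
rng = np.random.default_rng(9)
normal = rng.normal(0, 1, (200, 50)) @ rng.normal(0, 1, (50, 50))  # training
x_new = normal[0] + 4.0                     # a shifted, anomalous observation

mean = normal.mean(axis=0)
U, S, Vt = np.linalg.svd(normal - mean, full_matrices=False)
k = 5                                       # retained components
scores = (x_new - mean) @ Vt[:k].T          # projection onto the PCs
var = (S[:k]**2) / (len(normal) - 1)        # variance along each PC

t2 = np.sum(scores**2 / var)                # approximately chi-square(k)
p = chi2.sf(t2, df=k)
print(f"T^2 = {t2:.1f}, p = {p:.2e}  ->  anomaly if p is very small")
```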

  14. A Grid-based solution for management and analysis of microarrays in distributed experiments

    PubMed Central

    Porro, Ivan; Torterolo, Livia; Corradi, Luca; Fato, Marco; Papadimitropoulos, Adam; Scaglione, Silvia; Schenone, Andrea; Viti, Federica

    2007-01-01

    Several systems have been presented in recent years in order to manage the complexity of large microarray experiments. Although good results have been achieved, most systems tend to lack in one or more fields. A Grid based approach may provide a shared, standardized and reliable solution for storage and analysis of biological data, in order to maximize the results of experimental efforts. A Grid framework has therefore been adopted due to the necessity of remotely accessing large amounts of distributed data as well as to scale computational performance for terabyte datasets. Two different biological studies have been planned in order to highlight the benefits that can emerge from our Grid based platform. The described environment relies on storage services and computational services provided by the gLite Grid middleware. The Grid environment is also able to exploit the added value of metadata in order to let users better classify and search experiments. A state-of-the-art Grid portal has been implemented in order to hide the complexity of the framework from end users and to make them able to easily access available services and data. The functional architecture of the portal is described. As a first test of the system performance, a gene expression analysis has been performed on a dataset of Affymetrix GeneChip® Rat Expression Array RAE230A, from the ArrayExpress database. The sequence of analysis includes three steps: (i) group opening and image set uploading, (ii) normalization, and (iii) model based gene expression (based on the PM/MM difference model). Two different Linux versions (sequential and parallel) of the dChip software have been developed to implement the analysis and have been tested on a cluster. From the results, it emerges that the parallelization of the analysis process and the execution of parallel jobs on distributed computational resources actually improve the performance. Moreover, the Grid environment has been tested both against the possibility of

  15. Exposure models for the prior distribution in bayesian decision analysis for occupational hygiene decision making.

    PubMed

    Lee, Eun Gyung; Kim, Seung Won; Feigley, Charles E; Harper, Martin

    2013-01-01

    This study introduces two semi-quantitative methods, Structured Subjective Assessment (SSA) and Control of Substances Hazardous to Health (COSHH) Essentials, in conjunction with two-dimensional Monte Carlo simulations for determining prior probabilities. Prior distribution using expert judgment was included for comparison. Practical applications of the proposed methods were demonstrated using personal exposure measurements of isoamyl acetate in an electronics manufacturing facility and of isopropanol in a printing shop. Applicability of these methods in real workplaces was discussed based on the advantages and disadvantages of each method. Although these methods could not be completely independent of expert judgments, this study demonstrated a methodological improvement in the estimation of the prior distribution for the Bayesian decision analysis tool. The proposed methods provide a logical basis for the decision process by considering determinants of worker exposure. PMID:23252451

  16. Bifurcation analysis on the globally coupled Kuramoto oscillators with distributed time delays

    NASA Astrophysics Data System (ADS)

    Niu, Ben; Guo, Yuxiao

    2014-01-01

    Distributed delay interactions among a group of Kuramoto phase oscillators are studied from the viewpoint of bifurcation analysis. After restricting the system on the Ott-Antonsen manifold, a simplified model consisting of delay differential equations is obtained. Hopf bifurcation diagrams are drawn on some two-parameter planes around the incoherent state when delay follows Dirac, uniform, Gamma and normal distributions, respectively, and it is illustrated that stronger coupling is needed to achieve synchrony when increasing the variance of either natural frequency or time delay. With the aid of center manifold reduction and the normal form method, the direction of Hopf bifurcation and stability of bifurcating periodic solutions are investigated, and the existence of the hysteresis loop is explained theoretically.
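    A direct simulation corresponding to the setting above, with Gamma-distributed pairwise delays and the global order parameter as the coherence measure; parameters are illustrative, and the phase history is initialized to zero:

```python
import numpy as np

# Euler integration of globally coupled Kuramoto oscillators with
# Gamma-distributed pairwise delays tau_ij; r measures coherence.
rng = np.random.default_rng(10)
N, K, dt, steps = 100, 1.5, 0.01, 5000
omega = rng.normal(0.0, 0.5, N)                       # natural frequencies
tau = rng.gamma(shape=4.0, scale=0.05, size=(N, N))   # delays tau_ij
lag = np.minimum(np.rint(tau / dt).astype(int), 999)  # delays in steps

hist = np.zeros((1000, N))                            # ring buffer of phases
theta = rng.uniform(0, 2 * np.pi, N)
cols = np.arange(N)
for n in range(steps):
    hist[n % 1000] = theta
    past = hist[(n - lag) % 1000, cols]               # theta_j(t - tau_ij)
    theta = theta + dt * (omega + K * np.sin(past - theta[:, None]).mean(axis=1))

r = abs(np.exp(1j * theta).mean())                    # order parameter
print(f"order parameter r = {r:.2f}")
```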

  17. Analysis of temperature distribution in a pipe with inner mineral deposit

    NASA Astrophysics Data System (ADS)

    Joachimiak, Magda; Ciałkowski, Michał; Bartoszewicz, Jarosław

    2014-06-01

    The paper presents the results of calculations related to the determination of temperature distributions in a steel pipe of a heat exchanger, taking into account inner mineral deposits. Calculations have been carried out for silicate-based scale, which is characterized by a low heat conduction coefficient. Deposits with the lowest heat conduction coefficients have a particularly strong impact on the strength of thermally loaded elements. In the analysis, the location of the thermocouple and the imperfection of its installation were taken into account. The paper presents the influence of the accuracy with which the heat flux on the external pipe wall is determined on the temperature distribution. The influence of the heat flux disturbance value on the thickness of the deposit has also been analyzed.
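    In steady state, the temperature drop across each wall layer follows from series thermal resistances per unit length, R = ln(r_out/r_in)/(2πk); this sketch uses illustrative geometry and properties, not the exchanger studied in the paper:

```python
import numpy as np

# Steady radial temperature drops across an inner silicate deposit layer
# and a steel pipe wall, from series thermal resistances per unit length.
r_deposit_in, r_steel_in, r_steel_out = 0.018, 0.020, 0.024  # m
k_deposit, k_steel = 0.5, 45.0                               # W/(m K)
q_len = 3000.0                                               # W per metre

R_dep = np.log(r_steel_in / r_deposit_in) / (2 * np.pi * k_deposit)
R_steel = np.log(r_steel_out / r_steel_in) / (2 * np.pi * k_steel)

print(f"temperature drop across deposit: {q_len * R_dep:6.1f} K")
print(f"temperature drop across steel:   {q_len * R_steel:6.1f} K")
# the thin, low-conductivity scale dominates the total resistance
```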

  18. Preliminary analysis of the span-distributed-load concept for cargo aircraft design

    NASA Technical Reports Server (NTRS)

    Whitehead, A. H., Jr.

    1975-01-01

    A simplified computer analysis of the span-distributed-load airplane (in which payload is placed within the wing structure) has shown that the span-distributed-load concept has high potential for application to future air cargo transport design. Significant increases in payload fraction over current wide-bodied freighters are shown for gross weights in excess of 0.5 Gg (1,000,000 lb). A cruise-matching calculation shows that the trend toward higher aspect ratio improves overall efficiency; that is, less thrust and fuel are required. The optimal aspect ratio probably is not determined by structural limitations. Terminal-area constraints and increasing design-payload density, however, tend to limit aspect ratio.

  19. Methods and apparatuses for information analysis on shared and distributed computing systems

    DOEpatents

    Bohn, Shawn J [Richland, WA; Krishnan, Manoj Kumar [Richland, WA; Cowley, Wendy E [Richland, WA; Nieplocha, Jarek [Richland, WA

    2011-02-22

    Apparatuses and computer-implemented methods for analyzing, on shared and distributed computing systems, information comprising one or more documents are disclosed according to some aspects. In one embodiment, information analysis can comprise distributing one or more distinct sets of documents among each of a plurality of processes, wherein each process performs operations on a distinct set of documents substantially in parallel with other processes. Operations by each process can further comprise computing term statistics for terms contained in each distinct set of documents, thereby generating a local set of term statistics for each distinct set of documents. Still further, operations by each process can comprise contributing the local sets of term statistics to a global set of term statistics, and participating in generating a major term set from an assigned portion of a global vocabulary.
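    The claimed pattern (local term statistics computed per distinct document set, then contributed to a global set) maps naturally onto a process pool, as in this minimal sketch with invented documents:

```python
from collections import Counter
from multiprocessing import Pool

# Each worker computes local term statistics for its own distinct set of
# documents; local counts are then merged into a global set.
doc_sets = [
    ["the cat sat", "the dog ran"],
    ["a cat and a dog", "the bird flew"],
]

def local_term_stats(docs):
    """Count terms in one distinct set of documents (runs in a worker)."""
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

if __name__ == "__main__":
    with Pool(processes=2) as pool:
        local_stats = pool.map(local_term_stats, doc_sets)

    global_stats = Counter()
    for c in local_stats:            # contribute local sets to the global set
        global_stats.update(c)

    major_terms = [t for t, n in global_stats.most_common(3)]
    print(global_stats, major_terms)
```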

  20. Analysis of the size, shape, and spatial distribution of microinclusions by neutron-activation autoradiography

    SciTech Connect

    Flitsiyan, E.S.; Romanovskii, A.V.; Gurvich, L.G.; Kist, A.A.

    1987-02-01

    The local concentration and spatial distribution of some elements in minerals, rocks, and ores can be determined by means of neutron-activation autoradiography. The local element concentration is measured in this method by placing an activated section of the rock to be analyzed, together with an irradiated standard, against a photographic emulsion which acts as a radiation detector. The photographic density of the exposed emulsion varies as a function of the tested element content in the part of the sample next to the detector. In order to assess the value of neutron-activation autoradiography in the analysis of element distribution, we considered the main factors affecting the production of selective autoradiographs, viz., resolution, detection limit, and optimal irradiation conditions, holding time, and exposure.