q-exponential, Weibull, and q-Weibull distributions: an empirical analysis
NASA Astrophysics Data System (ADS)
Picoli, S.; Mendes, R. S.; Malacarne, L. C.
2003-06-01
In a comparative study, the q-exponential and Weibull distributions are employed to investigate the frequency distributions of basketball baskets, cyclone victims, brand-name drugs by retail sales, and highway lengths. In order to analyze the intermediate cases, a distribution that interpolates between the q-exponential and Weibull ones, the q-Weibull distribution, is introduced. It is verified that the basketball baskets distribution is well described by a q-exponential, whereas the cyclone victims and brand-name drugs by retail sales distributions are better fitted by a Weibull distribution. On the other hand, for highway lengths the q-exponential and Weibull distributions do not give a satisfactory fit, making it necessary to employ the q-Weibull distribution. Furthermore, the introduction of this interpolating distribution sheds light on the controversy between the stretched exponential and the inverse power law (q-exponential with q>1).
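The interpolation described above can be sketched with the q-exponential function. The parameterization below (e_q and a q-Weibull-type survival function) is one common convention, not necessarily the authors' exact normalization:

```python
import math

def e_q(u, q):
    """q-exponential: [1 + (1-q)u]^(1/(1-q)), reducing to exp(u) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(u)
    base = 1.0 + (1.0 - q) * u
    return base ** (1.0 / (1.0 - q)) if base > 0.0 else 0.0

def qweibull_survival(x, q, kappa, lam):
    """Survival of a q-Weibull-type law: e_q(-(x/lam)**kappa).
    kappa = 1 gives a q-exponential; q -> 1 gives an ordinary Weibull."""
    return e_q(-((x / lam) ** kappa), q)

# q -> 1 recovers the Weibull survival exp(-(x/lam)**kappa)
x, kappa, lam = 2.0, 1.5, 1.0
print(qweibull_survival(x, 1.0001, kappa, lam), math.exp(-(x / lam) ** kappa))
```

For q slightly above 1 the q-Weibull survival is numerically indistinguishable from the Weibull one, while larger q produces the power-law tail that the abstract contrasts with the stretched exponential.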
The beta modified Weibull distribution.
Silva, Giovana O; Ortega, Edwin M M; Cordeiro, Gauss M
2010-07-01
A five-parameter distribution, the so-called beta modified Weibull distribution, is defined and studied. The new distribution contains, as special submodels, several important distributions discussed in the literature, such as the generalized modified Weibull, beta Weibull, exponentiated Weibull, beta exponential, modified Weibull, and Weibull distributions, among others. The new distribution can be used effectively in the analysis of survival data since it accommodates monotone, unimodal, and bathtub-shaped hazard functions. We derive the moments and examine the order statistics and their moments. We propose the method of maximum likelihood for estimating the model parameters and obtain the observed information matrix. A real data set is used to illustrate the importance and flexibility of the new distribution. PMID:20238163
Using Weibull Distribution Analysis to Evaluate ALARA Performance
E. L. Frome, J. P. Watkins, and D. A. Hagemeyer
2009-10-01
As Low as Reasonably Achievable (ALARA) is the underlying principle for protecting nuclear workers from potential health outcomes related to occupational radiation exposure. Radiation protection performance is currently evaluated by measures such as collective dose and average measurable dose, which do not indicate ALARA performance. The purpose of this work is to show how statistical modeling of individual doses using the Weibull distribution can provide objective supplemental performance indicators for comparing ALARA implementation among sites and for insights into ALARA practices within a site. Maximum likelihood methods were employed to estimate the Weibull shape and scale parameters used for performance indicators. The shape parameter reflects the effectiveness of maximizing the number of workers receiving lower doses and is represented as the slope of the fitted line on a Weibull probability plot. Additional performance indicators derived from the model parameters include the 99th percentile and the exceedance fraction. When grouping sites by collective total effective dose equivalent (TEDE) and ranking by 99th percentile with confidence intervals, differences in performance among sites can be readily identified. Applying this methodology will enable more efficient and complete evaluation of the effectiveness of ALARA implementation.
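As a sketch of the estimation the abstract describes, the pure-Python example below fits a two-parameter Weibull by maximum likelihood and derives the 99th-percentile and exceedance-fraction indicators. The dose data are synthetic and the parameters illustrative, not the report's data:

```python
import math
import random

def weibull_mle(x, iters=100):
    """Two-parameter Weibull MLE: Newton iteration on the shape c,
    then the scale b in closed form."""
    n = len(x)
    mlog = sum(math.log(v) for v in x) / n
    c = 1.0
    for _ in range(iters):
        s0 = sum(v ** c for v in x)
        s1 = sum(v ** c * math.log(v) for v in x)
        s2 = sum(v ** c * math.log(v) ** 2 for v in x)
        g = s1 / s0 - 1.0 / c - mlog              # score equation g(c) = 0
        dg = (s2 * s0 - s1 * s1) / (s0 * s0) + 1.0 / (c * c)
        c = max(c - g / dg, 0.05)                 # keep the shape positive
    b = (sum(v ** c for v in x) / n) ** (1.0 / c)
    return c, b

def percentile99(c, b):
    """99th percentile of the fitted Weibull."""
    return b * (-math.log(0.01)) ** (1.0 / c)

def exceedance_fraction(limit, c, b):
    """Model-based fraction of doses above a given limit."""
    return math.exp(-(limit / b) ** c)

random.seed(1)
# synthetic dose data from a Weibull with scale 2.0, shape 0.8 (illustrative only)
doses = [random.weibullvariate(2.0, 0.8) for _ in range(5000)]
c, b = weibull_mle(doses)
print(c, b, percentile99(c, b), exceedance_fraction(10.0, c, b))
```

A shape below 1 here mimics the typical dose pattern the abstract alludes to, in which most workers receive low doses; the fitted shape is then the slope of the line on a Weibull probability plot.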
Reliability analysis of structural ceramic components using a three-parameter Weibull distribution
NASA Technical Reports Server (NTRS)
Duffy, Stephen F.; Powers, Lynn M.; Starlinger, Alois
1992-01-01
Described here are nonlinear regression estimators for the three-parameter Weibull distribution. Issues relating to the bias and invariance associated with these estimators are examined numerically using Monte Carlo simulation methods. The estimators were used to extract parameters from sintered silicon nitride failure data. A reliability analysis was performed on a turbopump blade utilizing the three-parameter Weibull distribution and the estimates from the sintered silicon nitride data.
NASA Astrophysics Data System (ADS)
Iskandar, Ismed; Satria Gondokaryono, Yudi
2016-02-01
In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. The weakness of most reliability theories is that the systems are described and explained as simply functioning or failed. In many real situations, the failures may arise from many causes, depending upon the age and the environment of the system and its components. Another problem in reliability theory is that of estimating the parameters of the assumed failure models. The estimation may be based on data collected over censored or uncensored life tests. In many reliability problems, the failure data are simply quantitatively inadequate, especially in engineering design and maintenance systems. Bayesian analyses are more beneficial than classical ones in such cases. Bayesian estimation allows us to combine past knowledge or experience, in the form of an a priori distribution, with life test data to make inferences about the parameter of interest. In this paper, we have investigated the application of Bayesian estimation to competing risk systems. The cases are limited to models with independent causes of failure, using the Weibull distribution as our model. A simulation is conducted for this distribution with the objectives of verifying the models and the estimators and investigating the performance of the estimators for varying sample sizes. The simulation data are analyzed using both Bayesian and maximum likelihood analyses. The simulation results show that a change in the true value of one parameter relative to another changes the standard deviation in the opposite direction. With perfect information on the prior distribution, the Bayesian estimation methods are better than the maximum likelihood ones. The sensitivity analyses show some amount of sensitivity to shifts of the prior locations. They also show the robustness of the Bayesian analysis within the range between the true-value and maximum-likelihood-estimate lines.
NASA Technical Reports Server (NTRS)
Krantz, Timothy L.
2002-01-01
The Weibull distribution has been widely adopted for the statistical description and inference of fatigue data. This document provides user instructions, examples, and verification for software to analyze gear fatigue test data. The software was developed presuming the data are adequately modeled using a two-parameter Weibull distribution. The calculations are based on likelihood methods, and the approach taken is valid for data that include type I censoring. The software was verified by reproducing results published by others.
Estimation problems associated with the Weibull distribution
Bowman, K O; Shenton, L R
1981-09-01
Series in descending powers of the sample size are developed for the moments of the coefficient of variation v* for the Weibull distribution F(t) = 1 - exp(-(t/b)^c). A similar series for the moments of the estimator c* of the shape parameter c is derived from these. Comparisons are made with basic asymptotic assessments for the means and variances. From the first four moments, approximations are given to the distributions of v* and c*. In addition, an almost unbiased estimator of c is given when a sample is provided with the value of v*. Comments are given on the validity of the asymptotically normal assessments of the distributions.
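The link between v* and c* exploited above rests on the fact that the Weibull coefficient of variation depends only on the shape parameter c, so a sample CV can be inverted to estimate c. A minimal sketch using the standard CV formula (not the authors' series expansions):

```python
import math

def weibull_cv(c):
    """Coefficient of variation of F(t) = 1 - exp(-(t/b)^c); depends on c only."""
    g1 = math.gamma(1.0 + 1.0 / c)
    g2 = math.gamma(1.0 + 2.0 / c)
    return math.sqrt(g2 - g1 * g1) / g1

def shape_from_cv(v, lo=0.05, hi=50.0):
    """Invert v = CV(c) by bisection; CV is strictly decreasing in c."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if weibull_cv(mid) > v:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(weibull_cv(2.0))
print(shape_from_cv(1.0))  # CV = 1 corresponds to c = 1 (the exponential case)
```

The scale b cancels out of the CV entirely, which is why the moment series for v* translate directly into a series for the shape estimator c*.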
Independent Orbiter Assessment (IOA): Weibull analysis report
NASA Technical Reports Server (NTRS)
Raffaelli, Gary G.
1987-01-01
The Auxiliary Power Unit (APU) and Hydraulic Power Unit (HPU) Space Shuttle Subsystems were reviewed as candidates for demonstrating the Weibull analysis methodology. Three hardware components were identified as analysis candidates: the turbine wheel, the gearbox, and the gas generator. Detailed review of subsystem level wearout and failure history revealed the lack of actual component failure data. In addition, component wearout data were not readily available or would require a separate data accumulation effort by the vendor. Without adequate component history data being available, the Weibull analysis methodology application to the APU and HPU subsystem group was terminated.
NASA Astrophysics Data System (ADS)
Imai, Kuniharu
In this study, an experimental model is made on the basis of the Whitehead abc model and the properties of partial discharge (PD) breakdown in the micro gap are investigated. Quite unique properties of the PD breakdown in the micro gap are obtained: the voltage dependence of the time to PD breakdown and the spread of residual negative charge distributions in the micro gap become discontinuous at a certain voltage amplitude. This discontinuity in the PD breakdown properties is found to be due to the difference in surface discharge patterns in the micro gap: Polbüschel-type at lower voltage and Gleitbüschel-type at higher voltage. This paper aims at discussing the PD breakdown process at higher voltage. Time to PD breakdown in the micro gap is analyzed using the Weibull probability distribution. The result suggests that the PD breakdown processes are classified into two different types: fatigue-failure type and early/random-failure type. Assuming that the PD breakdown of the fatigue-failure type is a thermally activated degradation process of Arrhenius type, the activation energy is calculated. The value is in good accord with the activation energy required for polymer bond scission caused by interaction with oxygen and ozone. The result suggests that the PD breakdown of the fatigue-failure type is caused by oxidative degradation. On the other hand, residual negative charges in the micro gap play an important role in the PD breakdown process of the early/random-failure type. In view of previous studies, it is possible that the PD breakdown process of the early/random-failure type is governed by the probability that structural defects exist where a great deal of the negative charge accumulates.
NASA Technical Reports Server (NTRS)
Giuntini, Michael E.; Giuntini, Ronald E.
1991-01-01
A Bayesian inference process for system logistical planning is presented which provides a method for incorporating actual failures with prediction data for an ongoing and improving reliability estimates. The process uses the Weibull distribution, and provides a means for examining and updating logistical and maintenance support needs.
Investigation of Weibull statistics in fracture analysis of cast aluminum
NASA Technical Reports Server (NTRS)
Holland, Frederic A., Jr.; Zaretsky, Erwin V.
1989-01-01
The fracture strengths of two large batches of A357-T6 cast aluminum coupon specimens were compared by using two-parameter Weibull analysis. The minimum number of these specimens necessary to find the fracture strength of the material was determined. The applicability of three-parameter Weibull analysis was also investigated. A design methodology based on the combination of elementary stress analysis and Weibull statistical analysis is advanced and applied to the design of a spherical pressure vessel shell. The results from this design methodology are compared with results from the applicable ASME pressure vessel code.
Table for estimating parameters of Weibull distribution
NASA Technical Reports Server (NTRS)
Mann, N. R.
1971-01-01
Table yields best linear invariant (BLI) estimates for the log of reliable life under censored life tests, permitting reliability estimation in failure analysis of items with multiple flaws. These BLI estimates have uniformly smaller expected loss than Gauss-Markov best linear unbiased estimates.
Lower bound on reliability for Weibull distribution when shape parameter is not estimated accurately
NASA Technical Reports Server (NTRS)
Huang, Zhaofeng; Porter, Albert A.
1990-01-01
The mathematical relationships between the shape parameter Beta and estimates of reliability and a life limit lower bound for the two-parameter Weibull distribution are investigated. It is shown that under rather general conditions, both the reliability lower bound and the allowable life limit lower bound (often called a tolerance limit) have unique global minimums over a range of Beta. Hence, lower bound solutions can be obtained without assuming or estimating Beta. The existence and uniqueness of these lower bounds are proven. Some real data examples are given to show how these lower bounds can be easily established and to demonstrate their practicality. The method developed here has proven to be extremely useful when using the Weibull distribution in analysis of no-failure or few-failures data. The results are applicable not only in the aerospace industry but anywhere that system reliabilities are high.
A comparison of the generalized gamma and exponentiated Weibull distributions.
Cox, Christopher; Matheson, Matthew
2014-09-20
This paper provides a comparison of the three-parameter exponentiated Weibull (EW) and generalized gamma (GG) distributions. The connection between these two different families is that the hazard functions of both have the four standard shapes (increasing, decreasing, bathtub, and arc shaped), and in fact, the shape of the hazard is the same for identical values of the three parameters. For a given EW distribution, we define a matching GG using simulation and also by matching the 5th, 50th, and 95th percentiles. We compare EW and matching GG distributions graphically and using the Kullback-Leibler distance. We find that the survival functions for the EW and matching GG are graphically indistinguishable, and only the hazard functions can sometimes be seen to be slightly different. The Kullback-Leibler distances are very small and decrease with increasing sample size. We conclude that the similarity between the two distributions is striking, and therefore, the EW represents a convenient alternative to the GG with the identical richness of hazard behavior. More importantly, these results suggest that having the four basic hazard shapes may to some extent be an important structural characteristic of any family of distributions. PMID:24700647
Large-Scale Weibull Analysis of H-451 Nuclear- Grade Graphite Specimen Rupture Data
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Walker, Andrew; Baker, Eric H.; Murthy, Pappu L.; Bratton, Robert L.
2012-01-01
A Weibull analysis was performed of the strength distribution and size effects for 2000 specimens of H-451 nuclear-grade graphite. The data, generated elsewhere, measured the tensile and four-point-flexure room-temperature rupture strength of specimens excised from a single extruded graphite log. Strength variation was compared with specimen location, size, and orientation relative to the parent body. In our study, data were progressively and extensively pooled into larger data sets to discriminate overall trends from local variations and to investigate the strength distribution. The CARES/Life and WeibPar codes were used to investigate issues regarding the size effect, Weibull parameter consistency, and nonlinear stress-strain response. Overall, the Weibull distribution described the behavior of the pooled data very well. However, the issue regarding the smaller-than-expected size effect remained. This exercise illustrated that a conservative approach using a two-parameter Weibull distribution is best for designing graphite components with low probability of failure for the in-core structures in the proposed Generation IV (Gen IV) high-temperature gas-cooled nuclear reactors. This exercise also demonstrated the continuing need to better understand the mechanisms driving stochastic strength response. Extensive appendixes are provided with this report to show all aspects of the rupture data and analytical results.
NASA Technical Reports Server (NTRS)
Gross, Bernard
1996-01-01
Material characterization parameters obtained from naturally flawed specimens are necessary for reliability evaluation of non-deterministic advanced ceramic structural components. The least squares best fit method is applied to the three parameter uniaxial Weibull model to obtain the material parameters from experimental tests on volume or surface flawed specimens subjected to pure tension, pure bending, four point or three point loading. Several illustrative example problems are provided.
NASA Astrophysics Data System (ADS)
Sazuka, Naoya; Inoue, Jun-Ichi
2007-03-01
A Weibull distribution with power-law tails is confirmed as a good candidate to describe the first passage time process of foreign currency exchange rates. The Lorenz curve and the corresponding Gini coefficient for a Weibull distribution are derived analytically. We show that the coefficient is in good agreement with the same quantity calculated from the empirical data. We also calculate the average waiting time, which is an important measure to estimate the time for customers to wait until the next price change after they log in to their computer systems. By assuming that the first passage time distribution might change its shape from the Weibull to the power-law at some critical time, we evaluate the averaged waiting time by means of the renewal-reward theorem. We find that our correction of the tails of the distribution makes the averaged waiting time much closer to the value obtained from empirical data analysis. We also discuss the deviation from the estimated average waiting time by deriving the waiting time distribution directly. These results make us conclude that the first passage process of the foreign currency exchange rates is well described by a Weibull distribution with power-law tails.
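The analytic Gini coefficient mentioned above has, for a plain Weibull with shape k, the closed form G = 1 - 2^(-1/k) (the scale cancels). A stdlib-only check against direct numerical integration, with illustrative shape values rather than the empirical tick-data fit:

```python
import math

def gini_weibull(k):
    """Closed-form Gini coefficient of a Weibull with shape k (scale-free)."""
    return 1.0 - 2.0 ** (-1.0 / k)

def gini_numeric(k, lam=1.0, n=100000, xmax=50.0):
    """Gini = 1 - (1/mu) * integral of S(x)^2 dx, with S(x) = exp(-(x/lam)**k),
    evaluated by the trapezoid rule."""
    mu = lam * math.gamma(1.0 + 1.0 / k)
    h = xmax / n
    total = 0.0
    for i in range(n + 1):
        x = i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(-2.0 * (x / lam) ** k)
    return 1.0 - h * total / mu

print(gini_weibull(1.5), gini_numeric(1.5))
```

The identity Gini = 1 - (1/mu) * integral of S(x)^2 holds for any nonnegative random variable, which makes the numerical cross-check straightforward.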
Predictive Failure of Cylindrical Coatings Using Weibull Analysis
NASA Technical Reports Server (NTRS)
Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.
2002-01-01
Rotating, coated wiping rollers used in a high-speed printing application failed primarily from fatigue. Two coating materials were evaluated: a hard, cross-linked, plasticized polyvinyl chloride (PVC) and a softer, plasticized PVC. A total of 447 tests was conducted with these coatings in a production facility. The data were evaluated using Weibull analysis. The softer coating produced more than twice the life of the harder cross-linked coating and reduced the wiper replacement rate by two-thirds, resulting in minimum production interruption.
Composite Weibull-Inverse Transformed Gamma distribution and its actuarial application
NASA Astrophysics Data System (ADS)
Maghsoudi, Mastoureh; Bakar, Shaiful Anuar Abu; Hamzah, Nor Aishah
2014-07-01
This paper introduces a new composite model, namely, the composite Weibull-Inverse Transformed Gamma distribution, which assumes a Weibull distribution for the head up to a specified threshold and an inverse transformed gamma distribution beyond it. The closed form of the probability density function (pdf) as well as the estimation of parameters by the maximum likelihood method is presented. The model is compared with several benchmark distributions and their performances are measured. A well-known data set, the Danish fire loss data, is used for this purpose and its Value at Risk (VaR) using the new model is computed. In comparison to several standard models, the composite Weibull-Inverse Transformed Gamma model proved to be a competitive candidate.
Weibull statistical analysis of Krouse type bending fatigue of nuclear materials
NASA Astrophysics Data System (ADS)
Haidyrah, Ahmed S.; Newkirk, Joseph W.; Castaño, Carlos H.
2016-03-01
A bending fatigue mini-specimen (Krouse-type) was used to study the fatigue properties of nuclear materials. The objective of this paper is to study fatigue for Grade 91 ferritic-martensitic steel using a mini-specimen (Krouse-type) suitable for reactor irradiation studies. These mini-specimens are similar in design (but smaller) to those described in the ASTM B593 standard. The mini-specimen was machined by waterjet and tested as-received. The bending fatigue machine was modified to test the mini-specimen with a specially designed adapter. The cyclic bending fatigue behavior of Grade 91 was studied under constant deflection. The S-N curve was created and the mean fatigue life was analyzed. In this study, the Weibull function was used to predict the probability of failure at stress levels from high to low: 563, 310, and 265 MPa. The commercial software Minitab 17 was used to calculate the distribution of fatigue life under different stress levels. Both 2- and 3-parameter Weibull analyses were used to model the probability of failure. The plots indicated that the 3-parameter Weibull distribution fits the data well.
NASA Astrophysics Data System (ADS)
Brown, Wilbur K.; Wohletz, Kenneth H.
1995-08-01
We describe a physically based derivation of the Weibull distribution with respect to fragmentation processes. In this approach we consider the result of a single-event fragmentation leading to a branching tree of cracks that show geometric scale invariance (fractal behavior). With this approach, because the Rosin-Rammler type distribution is just the integral form of the Weibull distribution, it, too, has a physical basis. In further consideration of mass distributions developed by fragmentation processes, we show that one particular mass distribution closely resembles the empirical lognormal distribution. This result suggests that the successful use of the lognormal distribution to describe fragmentation distributions may have been simply fortuitous.
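The integral-form relation noted above is simply that the Rosin-Rammler retained mass fraction equals the Weibull survival function. A tiny check with illustrative parameters:

```python
import math

def weibull_cdf(d, d0, n):
    """Weibull cumulative distribution: mass fraction finer than size d."""
    return 1.0 - math.exp(-(d / d0) ** n)

def rosin_rammler_retained(d, d0, n):
    """Rosin-Rammler form: mass fraction retained on a sieve of size d."""
    return math.exp(-(d / d0) ** n)

# The retained fraction and the Weibull CDF are exact complements:
for d in (0.5, 1.0, 2.0):
    r = rosin_rammler_retained(d, 1.0, 1.2)
    print(d, r, 1.0 - weibull_cdf(d, 1.0, 1.2))
```

This is why a physical basis for the Weibull distribution carries over directly to the Rosin-Rammler description of particle-size data.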
Kang, Suk-Bok; Han, Jun-Tae
2015-01-01
Many studies have considered truncated and censored samples under type-I, type-II, and hybrid censoring schemes. The inverse Weibull distribution has been utilized for the analysis of life testing and reliability data. It is also a very flexible distribution. The inverse Rayleigh distribution and inverse exponential distribution are special cases of the inverse Weibull distribution. In this paper, we derive the approximate maximum likelihood estimators (AMLEs) of the scale parameter and the shape parameter in the inverse Weibull distribution under multiply type-II censoring. We also propose a simple graphical method for a goodness-of-fit test based on multiply type-II censored samples using AMLEs. PMID:26688782
An EOQ Model with Two-Parameter Weibull Distribution Deterioration and Price-Dependent Demand
ERIC Educational Resources Information Center
Mukhopadhyay, Sushanta; Mukherjee, R. N.; Chaudhuri, K. S.
2005-01-01
An inventory replenishment policy is developed for a deteriorating item and price-dependent demand. The rate of deterioration is taken to be time-proportional and the time to deterioration is assumed to follow a two-parameter Weibull distribution. A power law form of the price dependence of demand is considered. The model is solved analytically…
Weibull-distributed dyke thickness reflects probabilistic character of host-rock strength.
Krumbholz, Michael; Hieronymus, Christoph F; Burchardt, Steffi; Troll, Valentin R; Tanner, David C; Friese, Nadine
2014-01-01
Magmatic sheet intrusions (dykes) constitute the main form of magma transport in the Earth's crust. The size distribution of dykes is a crucial parameter that controls volcanic surface deformation and eruption rates and is required to realistically model volcano deformation for eruption forecasting. Here we present statistical analyses of 3,676 dyke thickness measurements from different tectonic settings and show that dyke thickness consistently follows the Weibull distribution. Known from materials science, power law-distributed flaws in brittle materials lead to Weibull-distributed failure stress. We therefore propose a dynamic model in which dyke thickness is determined by variable magma pressure that exploits differently sized host-rock weaknesses. The observed dyke thickness distributions are thus site-specific because rock strength, rather than magma viscosity and composition, exerts the dominant control on dyke emplacement. Fundamentally, the strength of geomaterials is scale-dependent and should be approximated by a probability distribution. PMID:24513695
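A common way to test whether measurements such as these follow a Weibull distribution is rank regression on a Weibull probability plot: ln(-ln(1-F)) against ln(thickness) should be linear, with slope equal to the shape parameter. A sketch on synthetic data (Benard's median-rank plotting positions; not the paper's 3,676 measurements):

```python
import math
import random

def weibull_rank_regression(samples):
    """Fit Weibull shape/scale by least squares on the probability plot:
    y = ln(-ln(1 - F_i)) versus x = ln(t_i)."""
    xs = sorted(samples)
    n = len(xs)
    X, Y = [], []
    for i, t in enumerate(xs, start=1):
        F = (i - 0.3) / (n + 0.4)   # Benard's median-rank approximation
        X.append(math.log(t))
        Y.append(math.log(-math.log(1.0 - F)))
    mx = sum(X) / n
    my = sum(Y) / n
    sxx = sum((a - mx) ** 2 for a in X)
    sxy = sum((a - mx) * (b - my) for a, b in zip(X, Y))
    shape = sxy / sxx               # slope of the probability plot
    scale = math.exp(mx - my / shape)
    return shape, scale

random.seed(7)
# synthetic "thickness" data from a Weibull with scale 2.0, shape 1.4
# (illustrative only, not dyke measurements from any tectonic setting)
data = [random.weibullvariate(2.0, 1.4) for _ in range(3000)]
sh, sc = weibull_rank_regression(data)
print(sh, sc)
```

Straightness of this plot, rather than of a log-log (power-law) or semi-log (exponential) plot, is the kind of evidence behind the claim that dyke thickness consistently follows the Weibull distribution.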
NASA Astrophysics Data System (ADS)
Drobinski, Philippe; Coulais, Corentin; Jourdier, Bénédicte
2015-10-01
Wind-speed statistics are generally modelled using the Weibull distribution. However, the Weibull distribution is based on empirical rather than physical justification and might display strong limitations for its applications. Here, we derive wind-speed distributions analytically with different assumptions on the wind components to model wind anisotropy, wind extremes and multiple wind regimes. We quantitatively confront these distributions with an extensive set of meteorological data (89 stations covering various sub-climatic regions in France) to identify distributions that perform best and the reasons for this, and we analyze the sensitivity of the proposed distributions to the diurnal to seasonal variability. We find that local topography, unsteady wind fluctuations as well as persistent wind regimes are determinants for the performances of these distributions, as they induce anisotropy or non-Gaussian fluctuations of the wind components. A Rayleigh-Rice distribution is proposed to model the combination of weak isotropic wind and persistent wind regimes. It outperforms all other tested distributions (Weibull, elliptical and non-Gaussian) and is the only proposed distribution able to catch accurately the diurnal and seasonal variability.
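The physically derived special cases above include the Rayleigh distribution, which arises when both horizontal wind components are independent zero-mean Gaussians: the speed is then Rayleigh, i.e. Weibull with shape 2. A Monte Carlo check with an illustrative component standard deviation:

```python
import math
import random

random.seed(0)
sigma = 3.0  # illustrative standard deviation of each wind component (m/s)

# isotropic Gaussian components (u, w) -> Rayleigh-distributed speed,
# i.e. Weibull with shape k = 2 and scale sigma * sqrt(2)
speeds = [math.hypot(random.gauss(0.0, sigma), random.gauss(0.0, sigma))
          for _ in range(100000)]

mean_speed = sum(speeds) / len(speeds)
print(mean_speed, sigma * math.sqrt(math.pi / 2.0))  # Rayleigh mean

# survival check at one point: P(V > v) = exp(-v^2 / (2 sigma^2))
v = 5.0
emp = sum(s > v for s in speeds) / len(speeds)
print(emp, math.exp(-v * v / (2.0 * sigma * sigma)))
```

Anisotropy, non-zero mean winds, or non-Gaussian fluctuations break this construction, which is what motivates the elliptical, Rice-type, and non-Gaussian alternatives the abstract tests.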
Lu, Qingshu; Tse, Siu-Keung; Chow, Shein-Chung; Lin, Min
2012-01-01
In the pharmaceutical industry, a two-stage seamless adaptive design that combines two separate independent clinical trials into a single clinical study is commonly employed in clinical research and development. In practice, in the interest of shortening the development process, it is not uncommon to consider study endpoints with different treatment durations at different stages (Chow and Chang, 2006 ; Maca et al., 2006 ). In this study, our attention is placed on the case where the study endpoints of interest are time-to-event data where the durations at the two stages are different with nonuniform patient entry and losses to follow-up or dropouts. Test statistics for the final analysis based on the combined data are developed under various hypotheses for testing equality, superiority, noninferiority, and equivalence. In addition, formulas for sample size calculation and allocation between the two stages based on the proposed test statistic are derived. PMID:22651114
Zipf's law in phonograms and Weibull distribution in ideograms: comparison of English with Japanese.
Nabeshima, Terutaka; Gunji, Yukio-Pegio
2004-02-01
The frequency distribution of word usage in a word sequence generated by capping is estimated in terms of the number of "hits" in retrieval of web pages, to evaluate the structure of semantics proper not to a particular text but to a language. In particular, we compare the distributions of English sequences with Japanese ones and find that, for English and Japanese phonograms, the frequency of word usage against rank follows a power-law function with exponent 1, while for Japanese ideograms it follows a stretched exponential (Weibull distribution) function. We also discuss how such a difference can result from the difference between a phonogram-based language (English) and an ideogram-based language (Japanese). PMID:15013225
Motoiu-Raileanu, I; Motoiu, R; Gociu, M; Berceanu, S
1976-01-01
Investigations were carried out in 38 patients with acute leukemias or with chronic myeloid leukemias in the blast phase and a correlation was made between the cytogenetic aspect and the survival time. The interpretation of results was made by the Weibull distribution function. It was mathematically demonstrated that in the leukemic patients with chromosomal aberrations there is a preclinical period of over 40 months necessary for the formation of these anomalies. Chromosomal aberrations, the absence of mitoses and age over 70 proved to be aggravating factors in the diseases investigated. PMID:1063430
Flexural strength of infrared-transmitting window materials: bimodal Weibull statistical analysis
NASA Astrophysics Data System (ADS)
Klein, Claude A.
2011-02-01
The results of flexural strength testing performed on brittle materials are usually interpreted in light of a "Weibull plot," i.e., by fitting the estimated cumulative failure probability (CFP) to a linearized semiempirical Weibull distribution. This procedure ignores the impact of the testing method on the measured stresses at fracture (specifically, the stressed area and the stress profile), thus resulting in inadequate characterization of the material under investigation. In a previous publication, the author reformulated Weibull's statistical theory of fracture in a manner that emphasizes how the stressed area and the stress profile control the failure probability distribution, which led to the concept of a characteristic strength, that is, the effective strength of a 1-cm2 uniformly stressed area. Fitting the CFP of IR-transmitting materials (AlON, fusion-cast CaF2, oxyfluoride glass, fused SiO2, CVD-ZnSe, and CVD-ZnS) was performed by means of nonlinear regressions but produced evidence of slight, systematic deviations. The purpose of this contribution is to demonstrate that, upon extending the previously elaborated model to distributions involving two distinct types of defects (bimodal distributions), the fit agrees with the estimated CFPs. Furthermore, the availability of two sets of statistical parameters (characteristic strength and shape parameter) can be taken advantage of to evaluate the failure-probability density, thus providing a means of assessing the nature, the critical size, and the size distribution of surface/subsurface flaws.
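The bimodal extension can be sketched with a weakest-link survival function for two concurrent flaw populations: the specimen survives a stress only if it survives both populations, so the survival probabilities multiply. The parameters below are illustrative, not fitted values from the paper:

```python
import math

def unimodal_fail_prob(s, s0, m):
    """Weibull failure probability for a single flaw population
    (s0 = characteristic strength, m = shape parameter)."""
    return 1.0 - math.exp(-(s / s0) ** m)

def bimodal_fail_prob(s, s1, m1, s2, m2):
    """Two independent flaw populations: survivals multiply, so the
    exponents add in the combined failure probability."""
    return 1.0 - math.exp(-(s / s1) ** m1 - (s / s2) ** m2)

# illustrative parameters (hypothetical, not any material from the paper)
for s in (50.0, 100.0, 150.0):
    print(s,
          unimodal_fail_prob(s, 120.0, 8.0),
          bimodal_fail_prob(s, 120.0, 8.0, 300.0, 2.0))
```

A low-shape second population dominates at low stress and a high-shape one at high stress, which is how a bimodal fit can capture the systematic deviations a single Weibull leaves behind.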
NASA Astrophysics Data System (ADS)
Okabe, Shigemitsu; Tsuboi, Toshihiro; Takami, Jun
Power-frequency withstand voltage tests on electric power equipment are regulated in JEC by evaluating lifetime reliability with a Weibull distribution function. The evaluation method is still controversial in terms of how a plural number of faults should be considered, and some alternative methods have been proposed on this subject. The present paper first discusses the physical meanings of the various evaluation methods and then examines their effects on the power-frequency withstand voltage tests. Further, an appropriate method is investigated for an oil-filled transformer and a gas-insulated switchgear, taking note of the dielectric breakdown or partial discharge mechanisms under various insulating material and structure conditions; the tentative conclusion is that the conventional method would be the most pertinent under the present conditions.
Probabilistic Analysis for Comparing Fatigue Data Based on Johnson-Weibull Parameters
NASA Technical Reports Server (NTRS)
Hendricks, Robert C.; Zaretsky, Erwin V.; Vlcek, Brian L.
2007-01-01
Probabilistic failure analysis is essential when analysis of stress-life (S-N) curves is inconclusive in determining the relative ranking of two or more materials. In 1964, L. Johnson published a methodology for establishing the confidence that two populations of data are different. Simplified algebraic equations for confidence numbers were derived based on the original work of L. Johnson. Using the ratios of mean life, the resultant values of confidence numbers deviated less than one percent from those of Johnson. It is possible to rank the fatigue lives of different materials with a reasonable degree of statistical certainty based on combined confidence numbers. These equations were applied to rotating beam fatigue tests that were conducted on three aluminum alloys at three stress levels each. These alloys were AL 2024, AL 6061, and AL 7075. The results were analyzed and compared using ASTM Standard E739-91 and the Johnson-Weibull analysis. The ASTM method did not statistically distinguish between AL 6061 and AL 7075. Based on Johnson-Weibull analysis confidence numbers greater than 99 percent, AL 2024 was found to have the longest fatigue life, followed by AL 7075, and then AL 6061. The ASTM Standard and the Johnson-Weibull analysis result in the same stress-life exponent p for each of the three aluminum alloys at the median or L(sub 50) lives.
Rodrigo, D; Barbosa-Cánovas, G V; Martínez, A; Rodrigo, M
2003-06-01
The pulsed electric field inactivation kinetics of Escherichia coli suspended in orange juices with three different concentrations of carrot juice (0, 20, and 60%) was studied. Electric field strengths ranged from 25 to 40 kV/cm, and treatment times ranged from 40 to 340 µs. Experimental data were fitted to the Bigelow, Hülsheger, and Weibull distribution functions, and the Weibull function provided the best fit (with the lowest mean square error). The dependency of each model's kinetic constant on electric field strength and carrot juice concentration was studied. A secondary model was developed to describe the relationship of the Weibull parameters a and n to electric field strength and carrot juice concentration. An empirical mathematical model based on the Weibull distribution function, relating the natural logarithm of the survival fraction to treatment time, electric field strength, and carrot juice concentration, was developed. Parameters were estimated by nonlinear regression. The results of this study indicate that the error rate for the model's predictions was 6.5% and that the model was suitable for describing E. coli inactivation. PMID:12801001
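A minimal sketch of the Weibull primary inactivation model referred to above, relating the natural logarithm of the survival fraction to treatment time; the parameter values used here are illustrative, not the fitted values from the study.

```python
import math

def weibull_log_survival(t, a, n):
    # Weibull primary inactivation model: ln(N/N0) = -(t/a)**n
    # a is a characteristic treatment time; n < 1 gives tailing,
    # n > 1 gives a downward-concave (shouldered) survival curve.
    return -((t / a) ** n)

def survivors(n0, t, a, n):
    # surviving population after treatment time t
    return n0 * math.exp(weibull_log_survival(t, a, n))
```

Whatever the shape parameter n, the survival fraction at t = a is exp(-1), about 36.8%, which is what makes a a convenient characteristic time.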
Statistical analysis of bivariate failure time data with Marshall–Olkin Weibull models
Li, Yang; Sun, Jianguo; Song, Shuguang
2013-01-01
This paper discusses parametric analysis of bivariate failure time data, which often occur in medical studies, among other fields. For this, as in the case of univariate failure time data, exponential and Weibull models are probably the most commonly used ones. However, it is surprising that there seem to be no general estimation procedures available for fitting the bivariate Weibull model to bivariate right-censored failure time data, except some methods for special situations. We present and investigate two general but simple estimation procedures for the problem, one being a graphical approach and the other a marginal approach. An extensive simulation study is conducted to assess the performance of the proposed approaches and shows that they work well for practical situations. An illustrative example is provided. PMID:26294802
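The Marshall–Olkin bivariate Weibull can be simulated by the standard shock construction: two component-specific exponential shocks plus one common shock, followed by a power transform to obtain Weibull margins. This sketch shows only that textbook construction, not the estimation procedures proposed in the paper; the rate and shape values are illustrative.

```python
import math, random

def mo_bivariate_weibull(lam1, lam2, lam12, shape, rng=random):
    # Marshall-Olkin construction: three independent exponential shocks.
    # The shared shock (rate lam12) hits both components, inducing positive
    # dependence (and a positive probability of simultaneous failure).
    # The power transform t -> t**(1/shape) makes the margins Weibull.
    z1 = rng.expovariate(lam1)
    z2 = rng.expovariate(lam2)
    z12 = rng.expovariate(lam12)
    x = min(z1, z12) ** (1.0 / shape)
    y = min(z2, z12) ** (1.0 / shape)
    return x, y
```

Raising the common-shock rate lam12 relative to lam1 and lam2 strengthens the dependence between the two failure times.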
We compared two regression models, which are based on the Weibull and probit functions, for the analysis of pesticide toxicity data from laboratory studies on Illinois crop and native plant species. Both mathematical models are continuous, differentiable, strictly positive, and...
Probabilistic Analysis for Comparing Fatigue Data Based on Johnson-Weibull Parameters
NASA Technical Reports Server (NTRS)
Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.
2013-01-01
Leonard Johnson published a methodology for establishing the confidence that two populations of data are different. Johnson's methodology is dependent on limited combinations of test parameters (Weibull slope, mean life ratio, and degrees of freedom) and a set of complex mathematical equations. In this report, a simplified algebraic equation for confidence numbers is derived based on the original work of Johnson. The confidence numbers calculated with this equation are compared to those obtained graphically by Johnson. Using the ratios of mean life, the resultant values of confidence numbers at the 99 percent level deviate less than 1 percent from those of Johnson. At a 90 percent confidence level, the calculated values differ between +2 and 4 percent. The simplified equation is used to rank the experimental lives of three aluminum alloys (AL 2024, AL 6061, and AL 7075), each tested at three stress levels in rotating beam fatigue, analyzed using the Johnson-Weibull method, and compared to the ASTM Standard (E739-91) method of comparison. The ASTM Standard did not statistically distinguish between AL 6061 and AL 7075. However, it is possible to rank the fatigue lives of different materials with a reasonable degree of statistical certainty based on combined confidence numbers using the Johnson-Weibull analysis. AL 2024 was found to have the longest fatigue life, followed by AL 7075, and then AL 6061. The ASTM Standard and the Johnson-Weibull analysis result in the same stress-life exponent p for each of the three aluminum alloys at the median, or L(sub 50), lives.
NASA Astrophysics Data System (ADS)
Ben Ali, Jaouher; Chebel-Morello, Brigitte; Saidi, Lotfi; Malinowski, Simon; Fnaiech, Farhat
2015-05-01
Accurate remaining useful life (RUL) prediction of critical assets is an important challenge in condition-based maintenance, aimed at improving reliability and reducing machine breakdowns and maintenance costs. Bearings are among the most important components in industry that need to be monitored, and their RUL should be predicted. The challenge of this study is to propose an original feature able to evaluate the health state of bearings and to estimate their RUL with Prognostics and Health Management (PHM) techniques. The proposed method follows the data-driven prognostic approach, exploring the combination of a Simplified Fuzzy Adaptive Resonance Theory Map (SFAM) neural network and the Weibull distribution (WD). WD is used only in the training phase, to fit the measurements and to avoid areas of fluctuation in the time domain. The SFAM training process takes the fitted measurements at the present and previous inspection time points as input, whereas the SFAM testing process uses real measurements at the present and previous inspections. Thanks to its fuzzy learning process, SFAM has a strong ability to learn nonlinear time series. As output, seven classes are defined: a healthy bearing and six states of bearing degradation. In order to find the optimal RUL prediction, a smoothing phase is also proposed in this paper. Experimental results show that the proposed method can reliably predict the RUL of rolling element bearings (REBs) from vibration signals, and the prediction approach can be applied to the prognostics of various other mechanical assets.
Martinez, Edson Z; Achcar, Jorge A; Jácome, Alexandre A A; Santos, José S
2013-12-01
The cure fraction models are usually used to model lifetime time data with long-term survivors. In the present article, we introduce a Bayesian analysis of the four-parameter generalized modified Weibull (GMW) distribution in presence of cure fraction, censored data and covariates. In order to include the proportion of "cured" patients, mixture and non-mixture formulation models are considered. To demonstrate the ability of using this model in the analysis of real data, we consider an application to data from patients with gastric adenocarcinoma. Inferences are obtained by using MCMC (Markov Chain Monte Carlo) methods. PMID:24008248
Bayesian Weibull tree models for survival analysis of clinico-genomic data
Clarke, Jennifer; West, Mike
2008-01-01
An important goal of research involving gene expression data for outcome prediction is to establish the ability of genomic data to define clinically relevant risk factors. Recent studies have demonstrated that microarray data can successfully cluster patients into low- and high-risk categories. However, the need exists for models which examine how genomic predictors interact with existing clinical factors and provide personalized outcome predictions. We have developed clinico-genomic tree models for survival outcomes which use recursive partitioning to subdivide the current data set into homogeneous subgroups of patients, each with a specific Weibull survival distribution. These trees can provide personalized predictive distributions of the probability of survival for individuals of interest. Our strategy is to fit multiple models; within each model we adopt a prior on the Weibull scale parameter and update this prior via Empirical Bayes whenever the sample is split at a given node. The decision to split is based on a Bayes factor criterion. The resulting trees are weighted according to their relative likelihood values and predictions are made by averaging over models. In a pilot study of survival in advanced stage ovarian cancer we demonstrate that clinical and genomic data are complementary sources of information relevant to survival, and we use the exploratory nature of the trees to identify potential genomic biomarkers worthy of further study. PMID:18618012
Roy, Aparna; Chakraborty, Sumit; Kundu, Sarada Prasad; Basak, Ratan Kumar; Majumder, Subhasish Basu; Adhikari, Basudam
2012-03-01
Chemically modified jute fibres are potentially useful as natural reinforcement in composite materials. Jute fibres were treated with 0.25%-1.0% sodium hydroxide (NaOH) solution for 0.5-48 h. The hydrophilicity, surface morphology, crystallinity index, thermal and mechanical characteristics of untreated and alkali treated fibres were studied. The two-parameter Weibull distribution model was applied to deal with the variation in mechanical properties of the natural fibres. Alkali treatment enhanced the tensile strength and elongation at break by 82% and 45%, respectively, but decreased the hydrophilicity by 50.5% and the diameter of the fibres by 37%. PMID:22209134
Bimodal Weibull statistical analysis of CVD-ZnSe and CVD-ZnS flexural strength data
NASA Astrophysics Data System (ADS)
Klein, Claude A.
2011-06-01
The results of flexural strength testing performed on brittle materials are usually interpreted in the light of a "Weibull plot," i.e., by fitting the estimated cumulative failure probability (CFP) to a linearized semiempirical Weibull distribution. This procedure ignores the impact of the testing method on the measured stresses at fracture (specifically, the stressed area and the stress profile), thus resulting in an inadequate characterization of the material under consideration. In a previous publication [Opt. Eng. 41, 3151 (2002)], the author reformulated Weibull's statistical theory of fracture in a manner that emphasizes how the stressed area and the stress profile control the CFP, which led to the concept of a characteristic strength, that is, the effective strength of a 1-sq.cm uniformly stressed area. Fitting the CFP of IR-transmitting materials was performed by means of nonlinear regressions but produced evidence of systematic deviations. In this paper we demonstrate that, upon extending the previously elaborated model to distributions involving two distinct types of defects (bimodal distributions), fitting the estimated CFP of CVD-ZnS or CVD-ZnSe leads to a much improved description of the fracture process. In particular, the availability of two sets of statistical parameters (characteristic strength and shape parameter) can be taken advantage of for evaluating the failure-probability density, thus providing means of assessing the nature, the critical size, and the size distribution of the surface/subsurface flaws.
NASA Technical Reports Server (NTRS)
Wheeler, J. T.
1990-01-01
The Weibull process, identified as the inhomogeneous Poisson process with the Weibull intensity function, is used to model the reliability growth assessment of the space shuttle main engine test and flight failure data. Additional tables of percentage-point probabilities for several different values of the confidence coefficient have been generated for setting (1-alpha)100-percent two-sided confidence interval estimates on the mean time between failures. The tabled data pertain to two cases: (1) time-terminated testing, and (2) failure-terminated testing. The critical values of the three test statistics, namely Cramer-von Mises, Kolmogorov-Smirnov, and chi-square, were calculated and tabled for use in the goodness-of-fit tests for the engine reliability data. Numerical results are presented for five different groupings of the engine data that reflect the actual response to the failures.
NASA Astrophysics Data System (ADS)
Abas, Norzaida; Daud, Zalina M.; Yusof, Fadhilah
2014-11-01
A stochastic rainfall model is presented for the generation of hourly rainfall data in an urban area in Malaysia. In view of the high temporal and spatial variability of rainfall within the tropical rain belt, the Spatial-Temporal Neyman-Scott Rectangular Pulse model was used. The model, which is governed by the Neyman-Scott process, employs a reasonable number of parameters to represent the physical attributes of rainfall. A common approach is to attach each attribute to a mathematical distribution. With respect to rain cell intensity, this study proposes the use of a mixed exponential distribution. The performance of the proposed model was compared to a model that employs the Weibull distribution. Hourly and daily rainfall data from four stations in the Damansara River basin in Malaysia were used as input to the models, and simulations of hourly series were performed for an independent site within the basin. The performance of the models was assessed based on how closely the statistical characteristics of the simulated series resembled the statistics of the observed series. The findings obtained based on graphical representation revealed that the statistical characteristics of the simulated series for both models compared reasonably well with the observed series. However, a further assessment using the AIC, BIC and RMSE showed that the proposed model yields better results. The results of this study indicate that for tropical climates, the proposed model, using a mixed exponential distribution, is the best choice for generation of synthetic data for ungauged sites or for sites with insufficient data within the limit of the fitted region.
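The modelling choice compared above, attaching a mixed-exponential rather than a Weibull distribution to rain-cell intensity, amounts to a two-component exponential mixture. A minimal sampler sketch follows; the mixture weight and component means are illustrative, not the fitted Damansara basin parameters.

```python
import random

def mixed_exponential(p, mu1, mu2, rng=random):
    # Rain-cell intensity drawn from a two-component exponential mixture:
    # with probability p the cell has mean intensity mu1, otherwise mu2.
    # The mixture captures the coexistence of many light cells with a few
    # heavy ones better than a single-parameter exponential can.
    mean = mu1 if rng.random() < p else mu2
    return rng.expovariate(1.0 / mean)
```

The mixture mean is p*mu1 + (1-p)*mu2, while its variance exceeds that of a single exponential with the same mean, giving the heavier tail needed for tropical rainfall.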
NASA Astrophysics Data System (ADS)
Wang, Ping; Yang, Bensheng; Guo, Lixin; Shang, Tao
2015-11-01
In this work, the symbol error rate (SER) performance of the multiple pulse position modulation (MPPM) based free-space optical communication (FSO) system with three different decision thresholds, fixed decision threshold (FDT), optimized decision threshold (ODT) and dynamic decision threshold (DDT) over exponentiated Weibull (EW) fading channels has been investigated in detail. The effects of aperture averaging on each decision threshold under weak-to-strong turbulence conditions are further studied and compared. The closed-form SER expressions for three thresholds derived with the help of generalized Gauss-Laguerre quadrature rule are verified by the Monte Carlo simulations. This work is helpful for the design of receivers for FSO communication systems.
NASA Astrophysics Data System (ADS)
Baadeche, Mohamed; Soltani, Faouzi
2015-12-01
In this paper, we analyze the distributed FGO-CFAR detector in homogeneous and non-homogeneous Weibull clutter under the assumption of a known shape parameter. The non-homogeneity is modeled by the presence of a clutter edge in the reference window. We derive a membership function that maps the observations to the false alarm space and compute the threshold at the data fusion center. Applying the 'Maximum', 'Minimum', 'Algebraic Sum' and 'Algebraic Product' fuzzy rules for L detectors at the data fusion center, the results show that the best performance is obtained with the 'Algebraic Product' fuzzy rule, followed by the 'Minimum' one; in these two cases the probability of detection increases significantly with the number of detectors.
Janković, Bojan
2014-01-01
A new approach to kinetic modeling of the thermo-oxidative degradation of starch granules extracted from Cassava roots was developed. Based on the thermoanalytical measurements, three reaction stages were detected. Using Weibull and Weibull-derived (inverse) models, it was found that the first two reaction stages could be described by the change of the apparent activation energy (Ea) with the conversion fraction (α(T)) (using "model-free" analysis). The first reaction stage, which involves dehydration and evaporation of lower molecular mass fractions, can be described with an inverse Weibull model. This model, with its distribution of Ea values and derived distribution parameters, includes the occurrence of a three-dimensional diffusion mechanism. The second reaction stage is very complex; it was found to contain a system of simultaneous reactions (where depolymerization occurs) and can be described with the standard Weibull model. The identified statistical model, with its distribution of Ea values and derived distribution parameters, includes a kinetic model that gives variable reaction order values. Based on the established models, shelf-life studies for the first two stages were carried out. Shelf-life testing showed that the optimal dehydration time is achieved by programmed heating at a medium heating rate, whereas the optimal degradation time is achieved at the highest heating rate. PMID:23640748
Weibull-Based Design Methodology for Rotating Aircraft Engine Structures
NASA Technical Reports Server (NTRS)
Zaretsky, Erwin; Hendricks, Robert C.; Soditus, Sherry
2002-01-01
The NASA Energy Efficient Engine (E(sup 3)-Engine) is used as the basis of a Weibull-based life and reliability analysis. Each component's life, and thus the engine's life, is defined by high-cycle fatigue (HCF) or low-cycle fatigue (LCF). Knowing the cumulative life distribution of each of the components making up the engine, as represented by a Weibull slope, is a prerequisite to predicting the life and reliability of the entire engine. As the engine Weibull slope increases, the predicted lives decrease. The predicted engine lives L(sub 5) (95% probability of survival) of approximately 17,000 and 32,000 hr do correlate with current engine maintenance practices without and with refurbishment, respectively. The individual high pressure turbine (HPT) blade lives necessary to obtain a blade system life L(sub 0.1) (99.9% probability of survival) of 9000 hr for Weibull slopes of 3, 6, and 9 are 47,391, 20,652, and 15,658 hr, respectively. For a design life of the HPT disks having probable points of failure equal to or greater than 36,000 hr at a probability of survival of 99.9%, the predicted disk system life L(sub 0.1) can vary from 9,408 to 24,911 hr.
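The L-lives quoted above follow directly from the two-parameter Weibull survival function S(L) = exp[-(L/eta)^beta]: an L(sub p) life is the life at which the probability of survival is 1 - p/100, and identical-slope components roll up to a system life as a weakest-link (series) system whose hazards add. A sketch of that arithmetic follows; the numbers used are illustrative, not the E3-Engine values.

```python
import math

def life_at_reliability(eta, beta, survival):
    # life L with P(survive to L) = survival for a Weibull(eta, beta) component
    return eta * (-math.log(survival)) ** (1.0 / beta)

def eta_from_life(life, beta, survival):
    # characteristic life implied by a quoted L_p life
    # (e.g. an L0.1 life corresponds to survival = 0.999)
    return life / (-math.log(survival)) ** (1.0 / beta)

def system_life(component_lives, beta, survival=0.999):
    # series (weakest-link) system of components sharing one Weibull slope:
    # hazards add, so eta_sys**(-beta) = sum(eta_i**(-beta))
    etas = [eta_from_life(life, beta, survival) for life in component_lives]
    eta_sys = sum(e ** (-beta) for e in etas) ** (-1.0 / beta)
    return life_at_reliability(eta_sys, beta, survival)
```

For a fixed system life target, each component's own L0.1 must far exceed the system value, and the gap narrows as the Weibull slope steepens, which mirrors the blade-system figures quoted above.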
NASA Astrophysics Data System (ADS)
Gryning, Sven-Erik; Floors, Rogier; Peña, Alfredo; Batchvarova, Ekaterina; Brümmer, Burghard
2015-11-01
Wind-speed observations from tall towers are used in combination with observations up to 600 m in altitude from a Doppler wind lidar to study the long-term conditions over suburban (Hamburg), rural coastal (Høvsøre) and marine (FINO3) sites. The variability in the wind field among the sites is expressed in terms of mean wind speed and Weibull distribution shape-parameter profiles. The consequences of the carrier-to-noise-ratio (CNR) threshold-value choice on the wind-lidar observations are revealed as follows. When the wind-lidar CNR is lower than a prescribed threshold value, the observations are often filtered out as the uncertainty in the wind-speed measurements increases. For a pulsed heterodyne Doppler lidar, use of the traditional -22 dB CNR threshold value at all measuring levels up to 600 m results in a ≈7% overestimation in the long-term mean wind speed over land, and a ≈12% overestimation in coastal and marine environments. In addition, the height of the profile maximum of the shape parameter of the Weibull distribution (the so-called reversal height) is found to depend on the applied CNR threshold; it is found to be lower at small CNR threshold values. The reversal height is greater in the suburban (high roughness) than in the rural (low roughness) area. In coastal areas the reversal height is lower than that over land and relates to the internal boundary layer that develops downwind from the coastline. Over the sea the shape parameter increases towards the sea surface. A parametrization of the vertical profile of the shape parameter fits well with observations over land, coastal regions and over the sea. An applied model for the dependence of the reversal height on the surface roughness is in good agreement with the observations over land.
Sewalem, A; Kistemaker, G J; Miglior, F; Van Doormaal, B J
2004-11-01
The aim of this study was to explore the impact of type traits on the functional survival of Canadian Holstein cows using a Weibull proportional hazards model. The data set consisted of 1,130,616 registered cows from 13,606 herds calving from 1985 to 2003. Functional survival was defined as the number of days from first calving to culling, death, or censoring. Type information consisted of phenotypic type scores for 8 composite traits (with 18 classes each) and 23 linear descriptive traits (with 9 classes each). The statistical model included the effects of stage of lactation, season of production, the annual change in herd size, type of milk recording supervision, age at first calving, the effects of milk, fat and protein yields calculated as within herd-year-parity deviations, herd-year-season of calving, each type trait, and the sire. Analysis was done one trait at a time for each of the 31 type traits. The relative culling risk was calculated for animals in each class after accounting for the previously mentioned effects. Among the composite type traits with the greatest contribution to the likelihood function were final score, mammary system, and feet and legs, all having a strong relationship with functional survival. Cows with low scores for these traits had a higher risk of culling compared with higher-scoring cows. For instance, cows classified as poor plus 1 vs. excellent plus 1 have relative risks of culling of 3.66 and 0.28, respectively. The corresponding figures for mammary system are 4.19 and 0.46, and for feet and legs are 2.34 and 0.50. Linear type traits with the greatest contribution to the likelihood function were fore udder attachment, udder texture, udder depth, rear udder attachment height, and rear udder attachment width. Stature and size had no strong relationship with functional survival. PMID:15483178
NASA Astrophysics Data System (ADS)
Barbiero, Alessandro
2015-12-01
Researchers in applied sciences are often concerned with multivariate random variables. In particular, multivariate discrete data often arise in many fields (statistical quality control, biostatistics, failure analysis, etc.). Here we consider the discrete Weibull distribution as an alternative to the popular Poisson random variable and propose a procedure for simulating correlated discrete Weibull random variables, with marginal distributions and correlation matrix assigned by the user. The procedure relies upon the Gaussian copula model and an iterative algorithm for recovering the proper correlation matrix for the copula, ensuring the desired correlation matrix on the discrete margins. A simulation study is presented, which empirically shows the performance of the procedure.
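The core simulation idea can be sketched in a few lines: draw correlated standard normals, push them through the normal CDF to obtain dependent uniforms, and invert the type-I discrete Weibull CDF, F(x) = 1 - q^((x+1)^beta). Unlike the paper's procedure, this sketch feeds a correlation value straight into the Gaussian copula without the iterative correction that matches the target correlation on the discrete margins; the parameter values are illustrative.

```python
import math, random

def discrete_weibull_ppf(u, q, beta):
    # smallest integer x with F(x) = 1 - q**((x + 1)**beta) >= u
    x = 0
    while 1.0 - q ** ((x + 1) ** beta) < u:
        x += 1
    return x

def phi(z):
    # standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def sample_pair(rho, q1, b1, q2, b2, rng=random):
    # Gaussian copula: correlated normals -> dependent uniforms -> margins
    z1 = rng.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
    return (discrete_weibull_ppf(phi(z1), q1, b1),
            discrete_weibull_ppf(phi(z2), q2, b2))
```

Because the inverse-CDF step is a discontinuous rounding, the correlation realized on the discrete margins is generally weaker than rho, which is exactly why the paper's iterative adjustment of the copula matrix is needed.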
Aragao, Glaucia M F; Corradini, Maria G; Normand, Mark D; Peleg, Micha
2007-11-01
Published survival curves of Escherichia coli in two growth media, with and without the presence of salt, at various temperatures and in a Greek eggplant salad having various levels of essential oil, all had a characteristic downward concavity when plotted on semilogarithmic coordinates. Some also exhibited what appeared as a 'shoulder' of considerable length. Regardless of whether a shoulder was noticed, the survival pattern could be considered as a manifestation of an underlying unimodal distribution of the cells' death times. Mathematically, the data could be described equally well by the Weibull and lognormal distribution functions, which had similar modes, means, standard deviations and coefficients of skewness. When plotted in their probability density function (PDF) form, the curves also appeared very similar visually. This enabled us to quantify and compare the effect of temperature or essential oil concentration on the organism's survival in terms of these temporal distributions' characteristics. Increased lethality was generally expressed in a shorter mean and mode, a smaller standard deviation and increased overall symmetry as judged by the distributions' degree of skewness. The 'shoulder', as expected, simply indicated that the distribution's standard deviation was much smaller than its mode. Rate models based on the two distribution functions could be used to predict non-isothermal survival patterns. They were derived on the assumption that the momentary inactivation rate is the isothermal rate at the momentary temperature, at a time that corresponds to the momentary survival ratio. In this application, however, the Weibullian model with a fixed power was not only simpler and more convenient mathematically than the one based on the lognormal distribution, but it also provided more accurate estimates of the dynamic inactivation patterns. PMID:17869362
NASA Technical Reports Server (NTRS)
Pai, Shantaram S.; Gyekenyesi, John P.
1989-01-01
The calculation of shape and scale parameters of the two-parameter Weibull distribution is described using the least-squares analysis and maximum likelihood methods for volume- and surface-flaw-induced fracture in ceramics with complete and censored samples. Detailed procedures are given for evaluating 90 percent confidence intervals for maximum likelihood estimates of shape and scale parameters, the unbiased estimates of the shape parameters, and the Weibull mean values and corresponding standard deviations. Furthermore, the necessary steps are described for detecting outliers and for calculating the Kolmogorov-Smirnov and the Anderson-Darling goodness-of-fit statistics and 90 percent confidence bands about the Weibull distribution. It is also shown how to calculate the Batdorf flaw-density constants by using the Weibull distribution statistical parameters. The techniques described were verified with several example problems from the open literature and were coded in the Structural Ceramics Analysis and Reliability Evaluation (SCARE) design program.
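For complete samples, the maximum likelihood estimation described above reduces to a one-dimensional root-finding problem in the shape parameter, after which the scale parameter follows in closed form. The sketch below (not the SCARE program itself) solves the profile likelihood equation by bisection, exploiting the fact that the left-hand side is monotone increasing in the shape parameter.

```python
import math

def weibull_mle(data, tol=1e-9):
    # MLE for a complete (uncensored) sample x_1..x_n:
    # solve  sum(x**m * ln x)/sum(x**m) - 1/m = mean(ln x)  for shape m,
    # then  scale = (mean(x**m))**(1/m).
    logs = [math.log(x) for x in data]
    mean_log = sum(logs) / len(logs)

    def g(m):
        s = sum(x ** m for x in data)
        sl = sum((x ** m) * math.log(x) for x in data)
        return sl / s - 1.0 / m - mean_log

    lo, hi = 1e-3, 100.0          # g is increasing: g(lo) < 0 < g(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    m = 0.5 * (lo + hi)
    scale = (sum(x ** m for x in data) / len(data)) ** (1.0 / m)
    return m, scale
```

Censored samples and the confidence intervals discussed in the abstract require the more elaborate machinery of the cited methods; this sketch covers only the complete-sample point estimates.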
Harris, S.; Gross, R.; Mitchell, E.
2011-01-18
The Savannah River Site (SRS) spring operated pressure relief valve (SORV) maintenance intervals were evaluated using an approach provided by the American Petroleum Institute (API RP 581) for risk-based inspection technology (RBI). In addition, the impact of extending the inspection schedule was evaluated using Monte Carlo Simulation (MCS). The API RP 581 approach is characterized as a Weibull analysis with modified Bayesian updating provided by SRS SORV proof testing experience. Initial Weibull parameter estimates were updated as per SRS's historical proof test records contained in the Center for Chemical Process Safety (CCPS) Process Equipment Reliability Database (PERD). The API RP 581 methodology was used to estimate the SORV's probability of failing on demand (PFD), and the annual expected risk. The API RP 581 methodology indicates that the current SRS maintenance plan is conservative. Cost savings may be attained in certain mild service applications that present low PFD and overall risk. Current practices are reviewed and recommendations are made for extending inspection intervals. The paper gives an illustration of the inspection costs versus the associated risks by using API RP 581 Risk Based Inspection (RBI) Technology. A cost effective maintenance frequency balancing both financial risk and inspection cost is demonstrated.
A Weibull characterization for tensile fracture of multicomponent brittle fibers
NASA Technical Reports Server (NTRS)
Barrows, R. G.
1977-01-01
A statistical characterization for multicomponent brittle fibers is presented. The method, which is an extension of usual Weibull distribution procedures, statistically considers the components making up a fiber (e.g., substrate, sheath, and surface) as separate entities and, taken together, as a fiber. Tensile data for silicon carbide fiber and for an experimental carbon-boron alloy fiber are evaluated in terms of the proposed multicomponent Weibull characterization.
Finite-size effects on return interval distributions for weakest-link-scaling systems.
Hristopulos, Dionissios T; Petrakis, Manolis P; Kaniadakis, Giorgio
2014-05-01
The Weibull distribution is a commonly used model for the strength of brittle materials and earthquake return intervals. Deviations from Weibull scaling, however, have been observed in earthquake return intervals and the fracture strength of quasibrittle materials. We investigate weakest-link scaling in finite-size systems and deviations of empirical return interval distributions from the Weibull distribution function. Our analysis employs the ansatz that the survival probability function of a system with complex interactions among its units can be expressed as the product of the survival probability functions for an ensemble of representative volume elements (RVEs). We show that if the system comprises a finite number of RVEs, it obeys the κ-Weibull distribution. The upper tail of the κ-Weibull distribution declines as a power law in contrast with Weibull scaling. The hazard rate function of the κ-Weibull distribution decreases linearly after a waiting time τ(c) ∝ n^(1/m), where m is the Weibull modulus and n is the system size in terms of representative volume elements. We conduct statistical analysis of experimental data and simulations which show that the κ-Weibull provides competitive fits to the return interval distributions of seismic data and of avalanches in a fiber bundle model. In conclusion, using theoretical and statistical analysis of real and simulated data, we demonstrate that the κ-Weibull distribution is a useful model for extreme-event return intervals in finite-size systems. PMID:25353774
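The κ-Weibull survival function can be sketched from the Kaniadakis κ-exponential, exp_κ(u) = (sqrt(1 + κ²u²) + κu)^(1/κ), which recovers the ordinary exponential (and hence plain Weibull scaling) as κ → 0 while producing the power-law upper tail described above. The parameter values below are illustrative.

```python
import math

def kappa_exp(x, kappa):
    # Kaniadakis kappa-exponential; reduces to exp(x) in the limit kappa -> 0
    if kappa == 0:
        return math.exp(x)
    return (math.sqrt(1.0 + kappa**2 * x**2) + kappa * x) ** (1.0 / kappa)

def kappa_weibull_survival(t, tau, m, kappa):
    # survival (exceedance) probability of a return interval t;
    # tau is a scale, m the Weibull modulus, kappa the finite-size deformation
    return kappa_exp(-((t / tau) ** m), kappa)
```

For large t the survival probability behaves like t^(-m/κ), a power-law tail, instead of the stretched-exponential decay of the ordinary Weibull.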
Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu
2015-09-01
Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrates, enzymes and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and analysis of the glucose production levels when λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior, and we can use the λ parameter to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and the λ value in saccharification performance assessment are discussed. PMID:26121186
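A sketch of the kind of Weibull conversion curve described above, assuming the standard cumulative form y(t) = y_max(1 − exp(−(t/λ)^n)); the paper's exact formulation may differ. Under this form λ is the characteristic time at which 63.2% of the final yield y_max is reached, regardless of n:

```python
import math

def saccharification_yield(t, y_max, lam, n):
    """Weibull-type conversion curve: product released by time t.
    lam is the characteristic time (time to reach ~63.2% of y_max);
    n controls the initial curvature of the curve."""
    return y_max * (1.0 - math.exp(-((t / lam) ** n)))
```

Scanning λ across substrate/enzyme systems then gives a single comparable performance number, as the abstract suggests.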
Modeling root reinforcement using root-failure Weibull survival function
NASA Astrophysics Data System (ADS)
Schwarz, M.; Giadrossich, F.; Cohen, D.
2013-03-01
Root networks contribute to slope stability through complicated interactions that include mechanical compression and tension. Due to the spatial heterogeneity of root distribution and the dynamics of root turnover, the quantification of root reinforcement on steep slopes is challenging, and consequently so is the calculation of slope stability. Despite considerable advances in root reinforcement modeling, some important aspects remain neglected. In this study we address in particular the role of root-strength variability in the mechanical behavior of a root bundle. Many factors may contribute to the variability of root mechanical properties even within a single diameter class. This work presents a new approach for quantifying root reinforcement that considers the variability of mechanical properties of each root diameter class. Using the data of laboratory tensile tests and field pullout tests, we calibrate the parameters of the Weibull survival function to implement the variability of root strength in a numerical model for the calculation of root reinforcement (RBMw). The results show that, for both laboratory and field datasets, the parameters of the Weibull distribution may be considered constant, with the exponent equal to 2 and the normalized failure displacement equal to 1. Moreover, the results show that the variability of root strength in each root diameter class has a major influence on the behavior of a root bundle, with important implications when considering different approaches in slope stability calculation. Sensitivity analysis shows that the calibration of the tensile force, the elasticity of the roots, and the root distribution are the most important steps. The new model allows the characterization of root reinforcement in terms of maximum pullout force, stiffness, and energy. Moreover, it simplifies the implementation of root reinforcement in slope stability models. 
The realistic quantification of root reinforcement for tensile, shear and compression behavior allows the consideration of the stabilization effects of root networks on steep slopes and the influence that this has on the triggering of shallow landslides.
A Weibull characterization for tensile fracture of multicomponent brittle fibers
NASA Technical Reports Server (NTRS)
Barrows, R. G.
1977-01-01
Necessary to the development and understanding of brittle fiber reinforced composites is a means to statistically describe fiber strength and strain-to-failure behavior. A statistical characterization for multicomponent brittle fibers is presented. The method, which is an extension of usual Weibull distribution procedures, statistically considers the components making up a fiber (e.g., substrate, sheath, and surface) as separate entities and taken together as in a fiber. Tensile data for silicon carbide fiber and for an experimental carbon-boron alloy fiber are evaluated in terms of the proposed multicomponent Weibull characterization.
Chakraborty, Snehasis; Rao, Pavuluri Srinivasa; Mishra, Hari Niwas
2015-10-15
High pressure inactivation of natural microbiota viz. aerobic mesophiles (AM), psychrotrophs (PC), yeasts and molds (YM), total coliforms (TC) and lactic acid bacteria (LAB) in pineapple puree was studied within the experimental domain of 0.1-600 MPa and 30-50 °C with a treatment time up to 20 min. A complete destruction of yeasts and molds was obtained at 500 MPa/50 °C/15 min, whereas no counts were detected for TC and LAB at 300 MPa/30 °C/15 min. A maximum of two log cycle reductions was obtained for YM during pulse pressurization at the severe process intensity of 600 MPa/50 °C/20 min. The Weibull model clearly described the non-linearity of the survival curves during the isobaric period. The tailing effect, as confirmed by the shape parameter (β) of the survival curve, was obtained in the case of YM (β<1), whereas a shouldering effect (β>1) was observed for the other microbial groups. Analogous to thermal death kinetics, the activation energy (Ea, kJ·mol⁻¹) and the activation volume (Va, mL·mol⁻¹) values were computed further to describe the temperature and pressure dependencies of the scale parameter (δ, min), respectively. A higher δ value was obtained for each microbe at a lower temperature, and it decreased with an increase in pressure. A secondary kinetic model was developed describing the inactivation rate (k, min⁻¹) as a function of pressure (P, MPa) and temperature (T, K), including the dependencies of Ea and Va on P and T, respectively. PMID:26202323
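A minimal sketch of the Weibull survival model for isobaric inactivation, using the β (shape) and δ (scale, min) notation common in food-process microbiology; the abstract's own symbols did not survive encoding, so this notation is an assumption. Shape below 1 produces tailing (the decline slows) and shape above 1 produces shouldering (the decline accelerates):

```python
def log10_survivors(t, delta, beta):
    """Weibull survival model for microbial inactivation:
    log10(N/N0) = -(t/delta)**beta,
    with delta the scale (time for the first log reduction) and
    beta the shape (beta < 1: tailing; beta > 1: shouldering)."""
    return -((t / delta) ** beta)
```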
Gas turbine safety improvement through risk analysis
Crosby, T.M.; Reinman, G.L.
1988-04-01
This paper is intended to provide the engineer with the information necessary to understand certain statistical methods that are used to improve system safety. It will provide an understanding of Weibull analysis, in that it describes when the Weibull distribution is appropriate, how to construct a Weibull plot, and how to use the parameters of the Weibull distribution to calculate risk. The paper will also provide the engineer with a comprehension of Monte Carlo simulation as it relates to quantifying safety risk. The basic components of Monte Carlo simulation are discussed as well as the formulation of a system model and its application in the gas turbine industry.
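The entry above mentions constructing a Weibull plot. One common construction (a sketch here, assuming a complete, uncensored failure sample; the paper's exact plotting-position choice may differ) is median-rank regression: plot ln(−ln(1−F)) against ln(t) and fit a line, whose slope is the shape parameter:

```python
import math

def weibull_plot_fit(failure_times):
    """Estimate Weibull shape and scale by median-rank regression:
    regress y = ln(-ln(1-F_i)) on x = ln(t_i); slope = shape,
    scale = exp(-intercept/slope)."""
    t = sorted(failure_times)
    n = len(t)
    xs, ys = [], []
    for i, ti in enumerate(t, start=1):
        f = (i - 0.3) / (n + 0.4)  # Benard's median-rank approximation
        xs.append(math.log(ti))
        ys.append(math.log(-math.log(1.0 - f)))
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    return slope, math.exp(-intercept / slope)
```

A straight-line Weibull plot is also the usual visual check that the Weibull distribution is appropriate for the data at all.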
Bass, B.R.; McAfee, W.J.; Williams, P.T.
1999-08-01
Cruciform beam fracture mechanics specimens have been developed in the Heavy Section Steel Technology (HSST) Program at Oak Ridge National Laboratory (ORNL) to introduce a prototypic, far-field, out-of-plane biaxial bending stress component in the test section that approximates the nonlinear biaxial stresses resulting from pressurized-thermal-shock or pressure-temperature loading of a nuclear reactor pressure vessel (RPV). Matrices of cruciform beam tests were developed to investigate and quantify the effects of temperature, biaxial loading, and specimen size on fracture initiation toughness of two-dimensional (constant depth), shallow, surface flaws. Tests were conducted under biaxial load ratios ranging from uniaxial to equibiaxial. These tests demonstrated that biaxial loading can have a pronounced effect on shallow-flaw fracture toughness in the lower transition temperature region for RPV materials. Two- and three-parameter Weibull models have been calibrated using a new scheme (developed at the University of Illinois) that maps toughness data from test specimens with distinctly different levels of crack-tip constraint to a small-scale-yielding (SSY) Weibull stress space. These models, using the new hydrostatic stress criterion in place of the more commonly used maximum principal stress in the kernel of the Weibull stress (σW) integral definition, have been shown to correlate the experimentally observed biaxial effect in cruciform specimens, thereby providing a scaling mechanism between uniaxial and biaxial loading states.
NASA Technical Reports Server (NTRS)
Pai, Shantaram S.; Gyekenyesi, John P.
1988-01-01
The calculation of shape and scale parameters of the two-parameter Weibull distribution is described using the least-squares analysis and maximum likelihood methods for volume- and surface-flaw-induced fracture in ceramics with complete and censored samples. Detailed procedures are given for evaluating 90 percent confidence intervals for maximum likelihood estimates of shape and scale parameters, the unbiased estimates of the shape parameters, and the Weibull mean values and corresponding standard deviations. Furthermore, the necessary steps are described for detecting outliers and for calculating the Kolmogorov-Smirnov and the Anderson-Darling goodness-of-fit statistics and 90 percent confidence bands about the Weibull distribution. It is also shown how to calculate the Batdorf flaw-density constants by using the Weibull distribution statistical parameters. The techniques described were verified with several example problems from the open literature, and were coded in the Structural Ceramics Analysis and Reliability Evaluation (SCARE) design program.
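A sketch of the maximum likelihood method mentioned above for the two-parameter Weibull, restricted to a complete (uncensored) sample; this is a generic textbook procedure, not the SCARE implementation. The profile likelihood reduces the problem to a one-dimensional root-find for the shape parameter, solved here by bisection:

```python
import math

def weibull_mle(times, lo=0.05, hi=50.0, iters=100):
    """Maximum-likelihood estimates of Weibull shape and scale from a
    complete sample, via bisection on the profile score equation
    g(k) = sum(t^k ln t)/sum(t^k) - 1/k - mean(ln t) = 0."""
    n = len(times)
    mean_log = sum(math.log(t) for t in times) / n

    def g(k):
        s = sum(t ** k for t in times)
        sl = sum((t ** k) * math.log(t) for t in times)
        return sl / s - 1.0 / k - mean_log

    a, b = lo, hi  # g(a) < 0 < g(b) for any non-degenerate sample
    for _ in range(iters):
        mid = 0.5 * (a + b)
        if g(a) * g(mid) <= 0:
            b = mid
        else:
            a = mid
    shape = 0.5 * (a + b)
    scale = (sum(t ** shape for t in times) / n) ** (1.0 / shape)
    return shape, scale
```

Censored samples replace the complete-sample score equation with one that includes survivor terms; the closed-form profile trick still applies.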
NASA Technical Reports Server (NTRS)
Szatmary, Steven A.; Gyekenyesi, John P.; Nemeth, Noel N.
1990-01-01
This manual describes the operation and theory of the PC-CARES (Personal Computer-Ceramic Analysis and Reliability Evaluation of Structures) computer program for the IBM PC and compatibles running the PC-DOS/MS-DOS or IBM/MS OS/2 (version 1.1 or higher) operating systems. The primary purpose of this code is to estimate Weibull material strength parameters, the Batdorf crack density coefficient, and other related statistical quantities. Included in the manual is the description of the calculation of shape and scale parameters of the two-parameter Weibull distribution using the least-squares analysis and maximum likelihood methods for volume- and surface-flaw-induced fracture in ceramics with complete and censored samples. The methods for detecting outliers and for calculating the Kolmogorov-Smirnov and the Anderson-Darling goodness-of-fit statistics and 90 percent confidence bands about the Weibull line, as well as the techniques for calculating the Batdorf flaw-density constants, are also described.
Transmission overhaul and replacement predictions using Weibull and renewal theory
NASA Technical Reports Server (NTRS)
Savage, M.; Lewicki, D. G.
1989-01-01
A method to estimate the frequency of transmission overhauls is presented. This method is based on the two-parameter Weibull statistical distribution for component life. A second method is presented to estimate the number of replacement components needed to support the transmission overhaul pattern. The second method is based on renewal theory. Confidence statistics are applied with both methods to improve the statistical estimate of sample behavior. A transmission example is also presented to illustrate the use of the methods. Transmission overhaul frequency and component replacement calculations are included in the example.
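A sketch of the renewal-theory idea behind the second method: when a component is replaced at each failure, the expected number of replacements over a horizon can be estimated by Monte Carlo simulation of Weibull lifetimes. This is an illustrative simulation, not the paper's analytical renewal-function procedure, and the parameter values are hypothetical:

```python
import random

def expected_overhauls(shape, scale, horizon, n_runs=2000, seed=42):
    """Monte Carlo estimate of the expected number of renewals
    (failure-driven replacements) in [0, horizon] when component
    life is Weibull(shape, scale)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_runs):
        t, count = 0.0, 0
        while True:
            # random.weibullvariate takes (alpha=scale, beta=shape)
            t += rng.weibullvariate(scale, shape)
            if t > horizon:
                break
            count += 1
        total += count
    return total / n_runs
```

For shape = 1 the process is Poisson and the expected count reduces to horizon/scale, a convenient sanity check.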
Gomez-Lazaro, Emilio; Bueso, Maria C.; Kessler, Mathieu; Martin-Martinez, Sergio; Zhang, Jie; Hodge, Bri -Mathias; Molina-Garcia, Angel
2016-02-02
Here, the Weibull probability distribution has been widely applied to characterize wind speeds for wind energy resources. Wind power generation modeling is different, however, due in particular to power curve limitations, wind turbine control methods, and transmission system operation requirements. These differences are even greater for aggregated wind power generation in power systems with high wind penetration. Consequently, models based on one Weibull component can provide poor characterizations for aggregated wind power generation. With this aim, the present paper focuses on discussing Weibull mixtures to characterize the probability density function (PDF) for aggregated wind power generation. PDFs of wind power data are first classified according to hourly and seasonal patterns. The selection of the number of components in the mixture is analyzed through two well-known criteria: the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Finally, the optimal number of Weibull components for maximum likelihood is explored for the defined patterns, including the estimated weight, scale, and shape parameters. Results show that multi-Weibull models are more suitable for characterizing aggregated wind power data due to the impact of distributed generation, the variety of wind speed values, and wind power curtailment.
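A minimal sketch of the ingredients above: a weighted Weibull mixture density and the AIC used to pick the number of components. The component parameters here are illustrative, and the fitting step (EM or direct likelihood maximization) is omitted:

```python
import math

def weibull_pdf(x, shape, scale):
    """Density of Weibull(shape, scale) at x >= 0."""
    return (shape / scale) * (x / scale) ** (shape - 1) * \
        math.exp(-((x / scale) ** shape))

def mixture_pdf(x, components):
    """Weibull mixture density.
    components: list of (weight, shape, scale); weights sum to 1."""
    return sum(w * weibull_pdf(x, k, c) for w, k, c in components)

def aic(log_likelihood, n_params):
    """Akaike information criterion: lower is better."""
    return 2 * n_params - 2 * log_likelihood
```

In practice one fits mixtures with 1, 2, 3, ... components and keeps the count minimizing AIC (or BIC, which penalizes parameters more strongly).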
NASA Astrophysics Data System (ADS)
Dewhurst, A.; Legger, F.
2015-12-01
The ATLAS experiment accumulated more than 140 PB of data during the first run of the Large Hadron Collider (LHC) at CERN. The analysis of such an amount of data is a challenging task for the distributed physics community. The Distributed Analysis (DA) system of the ATLAS experiment is an established and stable component of the ATLAS distributed computing operations. About half a million user jobs are running daily on DA resources, submitted by more than 1500 ATLAS physicists. The reliability of the DA system during the first run of the LHC and the following shutdown period has been high thanks to the continuous automatic validation of the distributed analysis sites and the user support provided by a dedicated team of expert shifters. During the LHC shutdown, the ATLAS computing model has undergone several changes to improve the analysis workflows, including the re-design of the production system, a new analysis data format and event model, and the development of common reduction and analysis frameworks. We report on the impact such changes have on the DA infrastructure, describe the new DA components, and include recent performance measurements.
NASA Astrophysics Data System (ADS)
Bonadonna, Costanza; Costa, Antonio
2013-08-01
The Weibull distribution between volume and square root of isopach area has been recently introduced for determining the volume of tephra deposits, which is crucial to the assessment of the magnitude and hazards of explosive volcanoes. We show how the decay of the size of the largest lithics with the square root of isopleth area can also be well described using a Weibull function and how plume height correlates strongly with the corresponding Weibull parameters. Variations of median grain size (Mdϕ) values with the square root of the area of the associated contours can similarly be well fitted with a Weibull function. Weibull parameters, derived for both the thinning of tephra deposits and the decrease of grain size (both maximum lithic diameter and Mdϕ) with a proxy for the distance from the vent (e.g., square root of isoline areas), can be combined to classify the style of explosive volcanic eruptions. Accounting for the uncertainty in the derivation of eruptive parameters (e.g., plume height and volume of tephra deposits) is crucial to any classification of eruptive style and hazard assessment. Considering a typical uncertainty of 20% for the determination of plume height, a new eruption classification scheme based on selected Weibull parameters is proposed. Ultraplinian, Plinian, Subplinian, and small-moderate explosive eruptions are defined on the grounds of plume height and mass eruption rate. Overall, the Weibull fitting represents a versatile and reliable strategy for the estimation of both the volume of tephra deposits and the height of volcanic plumes and for the classification of eruptive style. Nonetheless, due to the typically large uncertainties (mainly due to availability of data, compilation of isopach and isopleth maps, and discrepancies from empirical best fits), plume height, volume, and magnitude of explosive eruptions cannot be considered as absolute values, regardless of the technique used. 
It is important that various empirical and analytical methods are applied in order to assess such an uncertainty.
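A sketch of the Weibull thinning relation behind the volume estimates above, assuming the form T(x) = θ(x/λ)^(n−2) exp(−(x/λ)^n) with x the square root of isopach area, which integrates in closed form to a volume V = 2θλ²/n; parameter values below are illustrative only. The code checks the closed form against direct numerical integration over the deposit area:

```python
import math

def weibull_thickness(x, theta, lam, n):
    """Weibull thinning relation: deposit thickness versus square root
    of isopach area x (theta: thickness scale, lam: distance scale,
    n: shape)."""
    return theta * (x / lam) ** (n - 2) * math.exp(-((x / lam) ** n))

def tephra_volume(theta, lam, n):
    """Closed-form deposit volume implied by the thinning relation:
    V = integral of T(x) * 2x dx from 0 to infinity = 2*theta*lam^2/n."""
    return 2.0 * theta * lam ** 2 / n
```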
NASA Astrophysics Data System (ADS)
Mbaruku, A. L.; Le, Q. V.; Song, H.; Schwartz, J.
2010-11-01
The development of superconducting magnets requires not only a conductor that is capable of carrying sufficient critical current density (Jc) at high magnetic field, but also one that is mechanically robust and predictable. Here, the electromechanical behavior of AgMg-sheathed Bi2Sr2CaCu2O8+x (Bi2212) round wires and YBa2Cu3O7−δ (YBCO) coated conductors is studied using a statistical approach based upon three-parameter Weibull statistics, where the three parameters α, β, and γ describe the scale, shape, and location of the resulting distribution function. The results show that Bi2212 round wire has significantly different behavior than previously studied Bi2212 tape conductors, with evidence of an underlying mechanically strong but poorly connected electrical 'backbone' in the round wire that is not found in the tape conductor. Furthermore, the Bi2212 round wire results indicate a distribution in the dependence of critical current upon strain (Ic(ε)) at the microscopic level, consistent with reports that a complex network of interfilamentary bridges plays a key role in connectivity. Unlike the behavior of either Bi2212 round wire or tape, the YBCO coated conductor shows a universal behavior for strains below yield, consistent with the presence of a strong, stiff NiW substrate that dominates the mechanical behavior, and a high purity, high density, highly textured YBCO layer with reversible electromechanical properties. These results indicate that, in particular for Bi2212 conductors, the strain dependence of the location parameter, γ(ε), which defines the minimum critical current for any segment of conductor at a particular value of strain, is a more important function for magnet design than Ic(ε) or the critical strain, εc. Using the approach reported previously and applied here, this curve is readily obtained using a limited length of conductor, but provides an important level of conservatism to the design of magnets using long lengths of conductor.
Modeling root reinforcement using a root-failure Weibull survival function
NASA Astrophysics Data System (ADS)
Schwarz, M.; Giadrossich, F.; Cohen, D.
2013-11-01
Root networks contribute to slope stability through complex interactions with soil that include mechanical compression and tension. Due to the spatial heterogeneity of root distribution and the dynamics of root turnover, the quantification of root reinforcement on steep slopes is challenging and consequently the calculation of slope stability also. Although considerable progress has been made, some important aspects of root mechanics remain neglected. In this study we address specifically the role of root-strength variability on the mechanical behavior of a root bundle. Many factors contribute to the variability of root mechanical properties even within a single class of diameter. This work presents a new approach for quantifying root reinforcement that considers the variability of mechanical properties of each root diameter class. Using the data of laboratory tensile tests and field pullout tests, we calibrate the parameters of the Weibull survival function to implement the variability of root strength in a numerical model for the calculation of root reinforcement (RBMw). The results show that, for both laboratory and field data sets, the parameters of the Weibull distribution may be considered constant with the exponent equal to 2 and the normalized failure displacement equal to 1. Moreover, the results show that the variability of root strength in each root diameter class has a major influence on the behavior of a root bundle with important implications when considering different approaches in slope stability calculation. Sensitivity analysis shows that the calibration of the equations of the tensile force, the elasticity of the roots, and the root distribution are the most important steps. The new model allows the characterization of root reinforcement in terms of maximum pullout force, stiffness, and energy. Moreover, it simplifies the implementation of root reinforcement in slope stability models. 
The realistic quantification of root reinforcement for tensile, shear and compression behavior allows for the consideration of the stabilization effects of root networks on steep slopes and the influence that this has on the triggering of shallow landslides.
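An illustrative simplification of the survival-weighted bundle idea above (not the full RBMw model): each root is treated as linear-elastic up to a mean failure displacement, and the fraction of a diameter class still intact at normalized displacement u follows the calibrated Weibull survival function exp(−u²) (exponent 2, normalized failure displacement 1). Stiffness and displacement values are hypothetical:

```python
import math

def root_bundle_force(displacement, roots):
    """Survival-weighted pullout force of a root bundle.
    roots: list of (stiffness, mean_failure_displacement) per diameter
    class. Each class contributes its elastic force times the Weibull
    survival fraction exp(-(displacement/mean_failure_displacement)**2)."""
    total = 0.0
    for stiffness, dfail in roots:
        u = displacement / dfail
        total += stiffness * displacement * math.exp(-(u ** 2))
    return total
```

Scanning displacement and taking the maximum of this curve yields the bundle's maximum pullout force; the initial slope gives its stiffness.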
Bartsch, R.R.
1995-09-01
Key elements of the 36 MJ ATLAS capacitor bank have been evaluated for individual probabilities of failure. These have been combined to estimate system reliability which is to be greater than 95% on each experimental shot. This analysis utilizes Weibull or Weibull-like distributions with increasing probability of failure with the number of shots. For transmission line insulation, a minimum thickness is obtained and for the railgaps, a method for obtaining a maintenance interval from forthcoming life tests is suggested.
Survival extrapolation using the poly-Weibull model
Demiris, Nikolaos; Lunn, David; Sharples, Linda D
2015-01-01
Recent studies of (cost-) effectiveness in cardiothoracic transplantation have required estimation of mean survival over the lifetime of the recipients. In order to calculate mean survival, the complete survivor curve is required but is often not fully observed, so that survival extrapolation is necessary. After transplantation, the hazard function is bathtub-shaped, reflecting latent competing risks which operate additively in overlapping time periods. The poly-Weibull distribution is a flexible parametric model that may be used to extrapolate survival and has a natural competing risks interpretation. In addition, treatment effects and subgroups can be modelled separately for each component of risk. We describe the model and develop inference procedures using freely available software. The methods are applied to two problems from cardiothoracic transplantation. PMID:21937472
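A sketch of why the poly-Weibull model captures bathtub-shaped hazards: with competing risks acting additively, the overall hazard is the sum of the component Weibull hazards, so one decreasing component (shape < 1) plus one increasing component (shape > 1) produces an early decline followed by a late rise. Parameter values here are illustrative:

```python
def poly_weibull_hazard(t, components):
    """Hazard of a poly-Weibull model: latent competing risks act
    additively, so h(t) = sum of Weibull hazards
    (shape/scale) * (t/scale)**(shape-1).
    components: list of (shape, scale)."""
    return sum((shape / scale) * (t / scale) ** (shape - 1)
               for shape, scale in components)
```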
Stratified Weibull Regression Model for Interval-Censored Data
Gu, Xiangdong; Shapiro, David; Hughes, Michael D.; Balasubramanian, Raji
2016-01-01
Interval censored outcomes arise when a silent event of interest is known to have occurred within a specific time period determined by the times of the last negative and first positive diagnostic tests. There is a rich literature on parametric and non-parametric approaches for the analysis of interval-censored outcomes. A commonly used strategy is to use a proportional hazards (PH) model with the baseline hazard function parameterized. The proportional hazards assumption can be relaxed in stratified models by allowing the baseline hazard function to vary across strata defined by a subset of explanatory variables. In this paper, we describe and implement a new R package straweib, for fitting a stratified Weibull model appropriate for interval censored outcomes. We illustrate the R package straweib by analyzing data from a longitudinal oral health study on the timing of the emergence of permanent teeth in 4430 children.
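A sketch of the likelihood construction underlying models for interval-censored data: an event known only to lie in (L, R] contributes S(L) − S(R) to the likelihood. This is a plain (unstratified, covariate-free) Weibull version for illustration, not the straweib package's API:

```python
import math

def weibull_survival(t, shape, scale):
    """Weibull survival function S(t)."""
    return math.exp(-((t / scale) ** shape))

def interval_censored_loglik(intervals, shape, scale):
    """Log-likelihood for interval-censored observations: each (L, R)
    contributes log(S(L) - S(R)), the probability that the event
    occurred inside the interval."""
    return sum(math.log(weibull_survival(L, shape, scale) -
                        weibull_survival(R, shape, scale))
               for L, R in intervals)
```

Maximizing this over (shape, scale), with strata given their own baseline parameters and covariates entering through a proportional-hazards term, recovers the stratified model's structure.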
NASA Astrophysics Data System (ADS)
Chen, Chien-Chih; Tseng, Chih-Yuan; Telesca, Luciano; Chi, Sung-Ching; Sun, Li-Chung
2012-02-01
Analogous to crustal earthquakes in natural fault systems, we here consider the dynasty collapses as extreme events in human society. Duration data of ancient Chinese and Egyptian dynasties provides a good chance of exploring the collective behavior of the so-called social atoms. By means of the rank-ordering statistics, we demonstrate that the duration data of those ancient dynasties could be described with good accuracy by the Weibull distribution. It is thus amazing that the distribution of time to failure of human society, i.e. the disorder of a historical dynasty, follows the widely accepted Weibull process as natural material fails.
Aerospace Applications of Weibull and Monte Carlo Simulation with Importance Sampling
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.
1998-01-01
Recent developments in reliability modeling and computer technology have made it practical to use the Weibull time to failure distribution to model the system reliability of complex fault-tolerant computer-based systems. These system models are becoming increasingly popular in space systems applications as a result of mounting data that support the decreasing Weibull failure distribution and the expectation of increased system reliability. This presentation introduces the new reliability modeling developments and demonstrates their application to a novel space system application. The application is a proposed guidance, navigation, and control (GN&C) system for use in a long duration manned spacecraft for a possible Mars mission. Comparisons to the constant failure rate model are presented and the ramifications of doing so are discussed.
The distribution of first-passage times and durations in FOREX and future markets
NASA Astrophysics Data System (ADS)
Sazuka, Naoya; Inoue, Jun-ichi; Scalas, Enrico
2009-07-01
Possible distributions are discussed for intertrade durations and first-passage processes in financial markets. The viewpoint of renewal theory is assumed. In order to represent market data with relatively long durations, two types of distributions are used, namely a distribution derived from the Mittag-Leffler survival function and the Weibull distribution. For the Mittag-Leffler type distribution, the average waiting time (residual life time) is strongly dependent on the choice of a cut-off parameter tmax, whereas the results based on the Weibull distribution do not depend on such a cut-off. Therefore, a Weibull distribution is more convenient than a Mittag-Leffler type if one wishes to evaluate relevant statistics such as average waiting time in financial markets with long durations. On the other hand, we find that the Gini index is rather independent of the cut-off parameter. Based on the above considerations, we propose a good candidate for describing the distribution of first-passage times in a market: the Weibull distribution with a power-law tail. This distribution compensates for the gap between theoretical and empirical results more efficiently than a simple Weibull distribution. It should be stressed that a Weibull distribution with a power-law tail is more flexible than the Mittag-Leffler distribution, which itself can be approximated by a Weibull distribution and a power law. Indeed, the key point is that in the former case there is freedom of choice for the exponent of the power law attached to the Weibull distribution, which can exceed 1 in order to reproduce decays faster than possible with a Mittag-Leffler distribution. We also give a useful formula to determine an optimal crossover point minimizing the difference between the empirical average waiting time and the one predicted from renewal theory. 
Moreover, we discuss the limitation of our distributions by applying our distribution to the analysis of the BTP future and calculating the average waiting time. We find that our distribution is applicable as long as durations follow a Weibull law for short times and do not have too heavy a tail.
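A sketch of the renewal-theory average waiting time referred to above: for a renewal process with interval distribution τ, a random observer's mean wait is E[τ²]/(2E[τ]) (the inspection-paradox formula). For pure Weibull intervals both moments are available in closed form via the gamma function, which is what makes the Weibull case cut-off free:

```python
import math

def weibull_moment(k, shape, scale):
    """k-th raw moment of Weibull(shape, scale):
    scale**k * Gamma(1 + k/shape)."""
    return scale ** k * math.gamma(1.0 + k / shape)

def mean_waiting_time(shape, scale):
    """Renewal-theory average waiting time of a random observer:
    E[tau^2] / (2 * E[tau])."""
    return weibull_moment(2, shape, scale) / (2.0 * weibull_moment(1, shape, scale))
```

For shape = 1 (exponential intervals) this reduces to the scale itself, while heavier-tailed intervals (shape < 1) inflate the observer's wait well beyond the mean interval.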
A Weibull brittle material failure model for the ABAQUS computer program
Bennett, J.
1991-08-01
A statistical failure theory for brittle materials that traces its origins to the Weibull distribution function is developed for use in the general purpose ABAQUS finite element computer program. One of the fundamental assumptions for this development is that Mode 1 microfractures perpendicular to the direction of the principal stress contribute independently to the fast fracture. The theory is implemented by a user subroutine for ABAQUS. Example problems illustrating the capability and accuracy of the model are given. 24 refs., 12 figs.
Spatial and temporal patterns of global onshore wind speed distribution
NASA Astrophysics Data System (ADS)
Zhou, Yuyu; Smith, Steven J.
2013-09-01
Wind power, a renewable energy source, can play an important role in electrical energy generation. Information regarding wind energy potential is important both for energy related modeling and for decision-making in the policy community. While wind speed datasets with high spatial and temporal resolution are often ultimately used for detailed planning, simpler assumptions are often used in analysis work. An accurate representation of the wind speed frequency distribution is needed in order to properly characterize wind energy potential. Using a power density method, this study estimated global variation in wind parameters as fitted to a Weibull density function using NCEP/climate forecast system reanalysis (CFSR) data over land areas. The Weibull distribution performs well in fitting the time series wind speed data at most locations according to R², root mean square error, and power density error. The wind speed frequency distribution, as represented by the Weibull k parameter, exhibits a large amount of spatial variation, a regionally varying amount of seasonal variation, and relatively low decadal variation. We also analyzed the potential error in wind power estimation when a commonly assumed Rayleigh distribution (Weibull k = 2) is used. We find that the assumption of the same Weibull parameter across large regions can result in non-negligible errors. While large-scale wind speed data are often presented in the form of mean wind speeds, these results highlight the need to also provide information on the wind speed frequency distribution.
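A sketch of the Rayleigh-assumption error discussed above: wind power density is proportional to E[v³] = c³Γ(1 + 3/k) for Weibull wind speeds, so fixing k = 2 while matching only the mean speed misestimates power wherever the true shape differs. This is an illustrative calculation, not the paper's power density method:

```python
import math

def mean_cubed_speed(k, c):
    """E[v^3] for Weibull(shape=k, scale=c) wind speeds;
    proportional to wind power density."""
    return c ** 3 * math.gamma(1.0 + 3.0 / k)

def rayleigh_power_error(k, c):
    """Relative error in power density from assuming Rayleigh (k = 2)
    while preserving the observed mean wind speed."""
    mean = c * math.gamma(1.0 + 1.0 / k)
    c_ray = mean / math.gamma(1.5)  # Rayleigh scale with the same mean
    return mean_cubed_speed(2.0, c_ray) / mean_cubed_speed(k, c) - 1.0
```

For k < 2 (heavier-tailed sites) the Rayleigh assumption underestimates power density; for k > 2 it overestimates it.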
Time-dependent fiber bundles with local load sharing. II. General Weibull fibers
NASA Astrophysics Data System (ADS)
Phoenix, S. Leigh; Newman, William I.
2009-12-01
Fiber bundle models (FBMs) are useful tools in understanding failure processes in a variety of material systems. While the fibers and load sharing assumptions are easily described, FBM analysis is typically difficult. Monte Carlo methods are also hampered by the severe computational demands of large bundle sizes, which overwhelm just as behavior relevant to real materials starts to emerge. For large size scales, interest continues in idealized FBMs that assume either equal load sharing (ELS) or local load sharing (LLS) among fibers, rules that reflect features of real load redistribution in elastic lattices. The present work focuses on a one-dimensional bundle of N fibers under LLS where life consumption in a fiber follows a power law in its load, with exponent ρ, integrated over time. This life consumption function is further embodied in a functional form resulting in a Weibull distribution for lifetime under constant fiber stress, with Weibull exponent β. Thus the failure rate of a fiber depends on its past load history, except for β=1. We develop asymptotic results validated by Monte Carlo simulation using a computational algorithm developed in our previous work [Phys. Rev. E 63, 021507 (2001)] that greatly increases the size, N, of treatable bundles (e.g., 10⁶ fibers in 10³ realizations). In particular, our algorithm is O(N ln N), in contrast with former algorithms, which were O(N²), making this investigation possible. Regimes are found for (ρ, β) pairs that yield contrasting behavior for large N. For β>1 and large N, brittle weakest-volume behavior emerges in terms of characteristic elements (groupings of fibers) derived from critical cluster formation, and the lifetime eventually goes to zero as N→∞, unlike ELS, which yields a finite limiting mean. 
For 1/2 ≤ ρ ≤ 1, however, LLS has remarkably similar behavior to ELS (appearing to be virtually identical for ρ = 1) with an asymptotic Gaussian lifetime distribution and a finite limiting mean for large N. The coefficient of variation follows a power law in increasing N but, except for ρ = 1, the value of the negative exponent is clearly less than 1/2, unlike in ELS bundles, where the exponent remains 1/2 for 1/2 ≤ ρ ≤ 1. For ρ < 1/2, the lifetime approaches the extreme-value distribution for the longest lived of a parallel group of independent elements, which applies exactly to ρ = 0. The lower the value of ρ, the higher the transition value of β, below which such extreme-value behavior occurs. No evidence was found for limiting Gaussian behavior for ρ > 1 but with 0 < β < 1.
Weibull exponent for fiber strength
Hirose, H
1997-01-01
This paper proposes a new treatment for electrical insulation degradation. Some types of insulation that have been used under various circumstances are considered to degrade at various rates in accordance with their stress circumstances. The cross-linked polyethylene (XLPE) insulated cables inspected by major Japanese electric companies clearly indicate such phenomena. By assuming that the inspected specimen is sampled from one of the clustered groups, a mixed degradation model can be constructed. Since the degradation of insulation under common circumstances is considered to follow a Weibull distribution, a mixture model and a Weibull power law can be combined; this is called the mixture Weibull power law model. By applying maximum likelihood estimation for the newly proposed model to Japanese 22 and 33 kV class insulated cables, the cables are clustered into a certain number of groups using the AIC and the generalized likelihood ratio test method. The reliability of the cables at specified years is assessed. PMID:9384621
Comparison of Weibull strength parameters from flexure and spin tests of brittle materials
NASA Technical Reports Server (NTRS)
Holland, Frederic A., Jr.; Zaretsky, Erwin V.
1991-01-01
Fracture data from five series of four-point bend tests of beams and spin tests of flat annular disks were reanalyzed. Silicon nitride and graphite were the test materials. The experimental fracture strengths of the disks were compared with the predicted strengths based on both volume-flaw and surface-flaw analyses of the four-point bend data. Volume-flaw analysis resulted in a better correlation between disks and beams in three of the five test series than did surface-flaw analysis. The Weibull moduli and characteristic gage strengths of the disks and beams were also compared. Differences in the experimental Weibull slopes were not statistically significant. It was shown that results from the beam tests can predict the fracture strength of rotating disks.
Distributed and Collaborative Software Analysis
NASA Astrophysics Data System (ADS)
Ghezzi, Giacomo; Gall, Harald C.
Throughout the years software engineers have come up with a myriad of specialized tools and techniques that focus on a certain type of
The ATLAS distributed analysis system
NASA Astrophysics Data System (ADS)
Legger, F.; Atlas Collaboration
2014-06-01
In the LHC operations era, analysis of the multi-petabyte ATLAS data sample by globally distributed physicists is a challenging task. To attain the required scale the ATLAS Computing Model was designed around the concept of Grid computing, realized in the Worldwide LHC Computing Grid (WLCG), the largest distributed computational resource existing in the sciences. The ATLAS experiment currently stores over 140 PB of data and runs about 140,000 concurrent jobs continuously at WLCG sites. During the first run of the LHC, the ATLAS Distributed Analysis (DA) service has operated stably and scaled as planned. More than 1600 users submitted jobs in 2012, with 2 million or more analysis jobs per week, peaking at about a million jobs per day. The system dynamically distributes popular data to expedite processing and maximally utilize resources. The reliability of the DA service is high and steadily improving; Grid sites are continually validated against a set of standard tests, and a dedicated team of expert shifters provides user support and communicates user problems to the sites. Both the user support techniques and the direct feedback of users have been effective in improving the success rate and user experience when utilizing the distributed computing environment. In this contribution a description of the main components, activities and achievements of ATLAS distributed analysis is given. Several future improvements being undertaken will be described.
Effect of Individual Component Life Distribution on Engine Life Prediction
NASA Technical Reports Server (NTRS)
Zaretsky, Erwin V.; Hendricks, Robert C.; Soditus, Sherry M.
2003-01-01
The effect of individual engine component life distributions on engine life prediction was determined. A Weibull-based life and reliability analysis of the NASA Energy Efficient Engine was conducted. The engine's life at a 95 and 99.9 percent probability of survival was determined based upon the engine manufacturer's original life calculations and assumed values of each of the components' cumulative life distributions as represented by a Weibull slope. The lives of the high-pressure turbine (HPT) disks and blades were also evaluated individually and as a system in a similar manner. Knowing the statistical cumulative distribution of each engine component with reasonable engineering certainty is a condition precedent to predicting the life and reliability of an entire engine. The life of a system at a given reliability will be less than the life of the lowest-lived component in the system at the same reliability (probability of survival). Where the Weibull slopes of all the engine components are equal, the Weibull slope had a minimal effect on engine L(sub 0.1) life prediction. However, at a probability of survival of 95 percent (L(sub 5) life), life decreased with increasing Weibull slope.
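The series-system point above can be sketched numerically: for independent Weibull components, system survival is the product of component survivals, S(t) = Π exp[-(t/η_i)^β_i], and the life at a given reliability follows by root finding. This is a minimal illustration with hypothetical component values, not the NASA engine data.

```python
import math

def weibull_survival(t, eta, beta):
    """Two-parameter Weibull probability of survival at time t."""
    return math.exp(-((t / eta) ** beta))

def system_survival(t, components):
    """Series system: every component must survive (independence assumed)."""
    s = 1.0
    for eta, beta in components:
        s *= weibull_survival(t, eta, beta)
    return s

def life_at_reliability(components, reliability, t_hi=1e9):
    """Bisection for the time t at which system survival equals `reliability`."""
    lo, hi = 0.0, t_hi
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if system_survival(mid, components) > reliability:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical components: (characteristic life eta in hours, Weibull slope beta)
components = [(30000.0, 2.0), (25000.0, 3.0), (40000.0, 1.5)]
l5 = life_at_reliability(components, 0.95)    # life at 95% probability of survival
l01 = life_at_reliability(components, 0.999)  # life at 99.9% probability of survival
```

As the abstract states, the system life at a given reliability comes out below the life of the lowest-lived component at that same reliability.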
Moment series for moment estimators of the parameters of a Weibull density
Bowman, K.O.; Shenton, L.R.
1982-01-01
Taylor series for the first four moments of the coefficient of variation in sampling from a 2-parameter Weibull density are given; they are taken as far as the coefficient of n^-24. From these, a four-moment approximating distribution is set up using summatory techniques on the series. The shape parameter is treated in a similar way, but here the moment equations are no longer explicit estimators, and terms only as far as those in n^-12 are given. The validity of assessed moments and percentiles of the approximating distributions is studied. Consideration is also given to properties of the moment estimator for 1/c.
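The moment-estimator idea can be illustrated numerically: for a 2-parameter Weibull, the coefficient of variation depends only on the shape parameter, so the sample CV can be inverted for the shape and the mean then fixes the scale. This is a minimal sketch of the estimator itself, not the series expansions studied in the paper.

```python
import math
import numpy as np
from scipy.optimize import brentq

def cv_of_shape(k):
    """Coefficient of variation of a Weibull distribution with shape k."""
    g1 = math.gamma(1.0 + 1.0 / k)
    g2 = math.gamma(1.0 + 2.0 / k)
    return math.sqrt(g2 / g1**2 - 1.0)

def weibull_moment_estimates(x):
    """Method-of-moments estimates (shape k, scale c) from a sample."""
    x = np.asarray(x, dtype=float)
    cv = x.std(ddof=1) / x.mean()
    # cv_of_shape decreases monotonically in k, so bracket and solve
    k = brentq(lambda kk: cv_of_shape(kk) - cv, 0.1, 50.0)
    c = x.mean() / math.gamma(1.0 + 1.0 / k)
    return k, c

rng = np.random.default_rng(1)
sample = 5.0 * rng.weibull(2.5, size=20000)  # synthetic data: true k = 2.5, c = 5
k_hat, c_hat = weibull_moment_estimates(sample)
```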
Mixture distributions of wind speed in the UAE
NASA Astrophysics Data System (ADS)
Shin, J.; Ouarda, T.; Lee, T. S.
2013-12-01
Wind speed probability distributions are commonly used to estimate potential wind energy. The 2-parameter Weibull distribution has been most widely used to characterize the distribution of wind speed. However, it is unable to properly model wind speed regimes when the wind speed distribution presents bimodal and kurtotic shapes. Several studies have concluded that the Weibull distribution should not be used for frequency analysis of wind speed without investigation of the wind speed distribution. Given these mixture distributional characteristics of wind speed data, the application of mixture distributions should be further investigated in the frequency analysis of wind speed. A number of studies have investigated the potential wind energy in different parts of the Arabian Peninsula, and mixture distributional characteristics of wind speed were detected in some of them. Nevertheless, mixture distributions have not been employed for wind speed modeling in the Arabian Peninsula. In order to improve our understanding of wind energy potential in the Arabian Peninsula, mixture distributions should be tested for the frequency analysis of wind speed. The aim of the current study is to assess the suitability of mixture distributions for the frequency analysis of wind speed in the UAE. Hourly mean wind speed data at 10-m height from 7 stations were used. The Weibull and Kappa distributions were employed as representatives of conventional non-mixture distributions. Ten mixture distributions were constructed by mixing four probability distributions: Normal, Gamma, Weibull and Extreme Value type-one (EV-1). Three parameter estimation methods, the Expectation-Maximization algorithm, the Least Squares method and the Meta-Heuristic Maximum Likelihood (MHML) method, were employed to estimate the parameters of the mixture distributions. 
In order to compare the goodness-of-fit of tested distributions and parameter estimation methods for sample wind data, the adjusted coefficient of determination, Bayesian Information Criterion (BIC) and Chi-squared statistics were computed. Results indicate that MHML presents the best performance of parameter estimation for the used mixture distributions. In most of the employed 7 stations, mixture distributions give the best fit. When the wind speed regime shows mixture distributional characteristics, most of these regimes present the kurtotic statistical characteristic. Particularly, applications of mixture distributions for these stations show a significant improvement in explaining the whole wind speed regime. In addition, the Weibull-Weibull mixture distribution presents the best fit for the wind speed data in the UAE.
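A minimal sketch of the mixture-fitting idea on synthetic bimodal "wind speed" data, using direct likelihood maximization rather than the EM or MHML methods of the study (all parameter values below are hypothetical). The Weibull-Weibull mixture is compared with a single Weibull by BIC, as in the paper.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(7)
# Synthetic bimodal sample: two Weibull regimes mixed 60/40
v = np.concatenate([3.0 * rng.weibull(2.0, 6000), 9.0 * rng.weibull(4.0, 4000)])

def nll_mixture(theta):
    """Negative log-likelihood of a two-component Weibull mixture."""
    w, k1, c1, k2, c2 = theta
    pdf = (w * stats.weibull_min.pdf(v, k1, scale=c1)
           + (1.0 - w) * stats.weibull_min.pdf(v, k2, scale=c2))
    return -np.sum(np.log(pdf + 1e-300))

res = minimize(nll_mixture, x0=[0.5, 2.0, 2.0, 3.0, 8.0],
               bounds=[(0.01, 0.99)] + [(0.2, 20.0), (0.1, 30.0)] * 2)

# Single Weibull fit for comparison
k_s, _, c_s = stats.weibull_min.fit(v, floc=0)
nll_single = -np.sum(stats.weibull_min.logpdf(v, k_s, scale=c_s))

n = len(v)
bic_mix = 5 * np.log(n) + 2 * res.fun       # 5 free parameters
bic_single = 2 * np.log(n) + 2 * nll_single  # 2 free parameters
```

For clearly bimodal data the mixture wins on BIC despite its three extra parameters, mirroring the study's finding that mixture models fit best where the speed regime is multimodal.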
Distributed data analysis in ATLAS
NASA Astrophysics Data System (ADS)
Nilsson, Paul; Atlas Collaboration
2012-12-01
Data analysis using grid resources is one of the fundamental challenges to be addressed before the start of LHC data taking. The ATLAS detector will produce petabytes of data per year, and roughly one thousand users will need to run physics analyses on this data. Appropriate user interfaces and helper applications have been made available to ensure that the grid resources can be used without requiring expertise in grid technology. These tools enlarge the number of grid users from a few production administrators to potentially all participating physicists. ATLAS makes use of three grid infrastructures for the distributed analysis: the EGEE sites, the Open Science Grid, and NorduGrid. These grids are managed by the gLite workload management system, the PanDA workload management system, and ARC middleware; many sites can be accessed via both the gLite WMS and PanDA. Users can choose between two front-end tools to access the distributed resources. Ganga is a tool co-developed with LHCb to provide a common interface to the multitude of execution backends (local, batch, and grid). The PanDA workload management system provides a set of utilities called PanDA Client; with these tools users can easily submit Athena analysis jobs to the PanDA-managed resources. Distributed data is managed by Don Quixote 2 (DQ2), a system developed by ATLAS; DQ2 is used to replicate datasets according to the data distribution policies and maintains a central catalog of file locations. The operation of the grid resources is continually monitored by the Ganga Robot functional testing system, and infrequent site stress tests are performed using the HammerCloud system. In addition, the DAST shift team is a group of power users who take shifts to provide distributed analysis user support; this team has effectively relieved the burden of support from the developers.
Peampring, Chaimongkon; Sanohkan, Sasiwimol
2014-12-01
To evaluate the durability of machinable dental restorative materials, this study performed an experiment to evaluate the flexural strength and Weibull statistics of a machinable lithium disilicate glass-ceramic and a machinable composite resin after thermocycling. A total of 40 bar-shaped specimens were prepared with the dimensions 20 mm × 4 mm × 2 mm and divided into four groups of 10 specimens. Ten specimens of machinable lithium disilicate glass-ceramic (IPS e.max CAD, Ivoclar Vivadent, Liechtenstein) and 10 specimens of machinable composite resin (Paradigm MZ 100, 3M ESPE, USA) were subjected to a 3-point flexural strength test. Another 10 specimens of each material were thermocycled between water temperatures of 5 and 55 °C for 10,000 cycles and then tested in 3-point flexure. Statistical analysis was performed using two-way analysis of variance and Tukey multiple comparisons. Weibull analysis was performed to evaluate the reliability of the strength. Mean strengths (standard deviations) were: thermocycled IPS e.max CAD 389.10 (50.75), non-thermocycled IPS e.max CAD 349.96 (38.34), thermocycled Paradigm MZ 100 157.51 (12.85), non-thermocycled Paradigm MZ 100 153.33 (19.97). Within each material group, there was no significant difference in flexural strength between thermocycled and non-thermocycled specimens. Considering the Weibull analysis, there was no statistical difference in Weibull modulus among the experimental groups. Within the limitations of this study, the results showed no significant effect of thermocycling on the flexural strength and Weibull modulus of a machinable glass-ceramic and a machinable composite resin. PMID:25489161
Application of Weibull Criterion to failure prediction in composites
Cain, W. D.; Knight, Jr., C. E.
1981-04-20
Fiber-reinforced composite materials are being widely used in engineered structures. This report examines how the general form of the Weibull Criterion, including the evaluation of the parameters and the scaling of the parameter values, can be used for the prediction of component failure.
Distribution-free discriminant analysis
Burr, T.; Doak, J.
1997-05-01
This report describes our experience in implementing a non-parametric (distribution-free) discriminant analysis module for use in a wide range of pattern recognition problems. Issues discussed include performance results on both real and simulated data sets, comparisons to other methods, and the computational environment. In some cases, this module performs better than other existing methods. Nearly all cases can benefit from the application of multiple methods.
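One simple distribution-free discriminant, sketched below, scores each class by a kernel density estimate of its training data and assigns new points to the class with the highest weighted density. This is an illustration of the general nonparametric approach, not the specific module described in the report.

```python
import numpy as np
from scipy.stats import gaussian_kde

class KDEDiscriminant:
    """Nonparametric discriminant: class-conditional kernel density estimates."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        # gaussian_kde expects data with shape (n_features, n_samples)
        self.kdes_ = {c: gaussian_kde(X[y == c].T) for c in self.classes_}
        self.priors_ = {c: np.mean(y == c) for c in self.classes_}
        return self

    def predict(self, X):
        scores = np.array([self.priors_[c] * self.kdes_[c](X.T)
                           for c in self.classes_])
        return self.classes_[np.argmax(scores, axis=0)]

# Simulated two-class data, well separated in 2-D
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (200, 2)), rng.normal(4.0, 1.0, (200, 2))])
y = np.repeat([0, 1], 200)
clf = KDEDiscriminant().fit(X, y)
acc = np.mean(clf.predict(X) == y)
```

No distributional form is assumed for the class densities, which is the sense in which the method is distribution-free.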
Sugar Cane Nutrient Distribution Analysis
NASA Astrophysics Data System (ADS)
Zamboni, C. B.; da Silveira, M. A. G.; Gennari, R. F.; Garcia, I.; Medina, N. H.
2011-08-01
Neutron Activation Analysis (NAA), Molecular Absorption Spectrometry (UV-Vis), and Flame Photometry techniques were applied to measure plant nutrient concentrations of Br, Ca, Cl, K, Mn, N, Na and P in sugar-cane root, stalk and leaves. These data will be used to explore the behavior of element concentration in different parts of the sugar-cane to better understand the plant nutrient distribution during its development.
Weibull Statistics for Upper Ocean Currents with the Fokker-Planck Equation
NASA Astrophysics Data System (ADS)
Chu, P. C.
2012-12-01
Upper oceans typically exhibit a surface mixed layer with a thickness of a few to several hundred meters. This mixed layer is a key component in studies of climate, biological productivity and marine pollution. It is the link between the atmosphere and the deep ocean and directly affects the air-sea exchange of heat, momentum and gases. Vertically averaged horizontal currents across the mixed layer are driven by the residual between the Ekman transport and surface wind stress, and damped by Rayleigh friction. A set of stochastic differential equations is derived for the two components of the current vector (u, v). The joint probability distribution function of (u, v) satisfies the Fokker-Planck equation (Chu, 2008, 2009), with the Weibull distribution as the solution for the current speed. To test this, the PDF of the upper (0-50 m) tropical Pacific current speeds (w) was calculated from hourly ADCP data (1990-2007) at six stations of the Tropical Atmosphere Ocean project. It satisfies the two-parameter Weibull distribution reasonably well, with different characteristics between El Nino and La Nina events: in the western Pacific, the PDF of w has a larger peakedness during La Nina events than during El Nino events, and vice versa in the eastern Pacific. However, the PDF of w for the lower layer (100-200 m) does not fit the Weibull distribution as well as the upper layer. This is due to the different stochastic differential equations between the upper and lower layers in the tropical Pacific. For the upper layer, the stochastic differential equations, established on the basis of Ekman dynamics, have an analytical solution, i.e., the Rayleigh distribution (the simplest form of the Weibull distribution), for constant eddy viscosity K. Knowledge of the PDF of w during El Nino and La Nina events will improve ensemble horizontal flux calculations, which contribute to climate studies. 
Besides, the Weibull distribution is also identified in the near-real-time ocean surface currents derived from satellite altimeter (JASON-1, GFO, ENVISAT) and scatterometer (QSCAT) data on a 1° × 1° grid for the world oceans (60°S to 60°N), the "Ocean Surface Current Analyses - Real Time" (OSCAR) product. Such a PDF has little seasonal and interannual variation. References: Chu, P. C., 2008: Probability distribution function of the upper equatorial Pacific current speeds. Geophysical Research Letters, 35, doi:10.1029/2008GL033669. Chu, P. C., 2009: Statistical characteristics of the global surface current speeds obtained from satellite altimeter and scatterometer data. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2(1), 27-32.
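The Rayleigh connection mentioned above can be checked directly: a speed formed from two independent, zero-mean Gaussian velocity components follows a Rayleigh law, which is a Weibull distribution with shape parameter k = 2. A quick numerical illustration (the 0.3 m/s component standard deviation is an arbitrary choice):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Isotropic Gaussian current components (u, v); their magnitude is Rayleigh
sigma = 0.3  # m/s, illustrative
u = rng.normal(0.0, sigma, 50000)
v = rng.normal(0.0, sigma, 50000)
w = np.hypot(u, v)

# Fit a two-parameter Weibull (location fixed at zero); expect k close to 2
k_hat, _, c_hat = stats.weibull_min.fit(w, floc=0)
```

The fitted scale should come out near sigma * sqrt(2), the Rayleigh mode relation for this construction.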
Spatial and Temporal Patterns of Global Onshore Wind Speed Distribution
Zhou, Yuyu; Smith, Steven J.
2013-09-09
Wind power, a renewable energy source, can play an important role in electrical energy generation. Information regarding wind energy potential is important both for energy related modeling and for decision-making in the policy community. While wind speed datasets with high spatial and temporal resolution are often ultimately used for detailed planning, simpler assumptions are often used in analysis work. An accurate representation of the wind speed frequency distribution is needed in order to properly characterize wind energy potential. Using a power density method, this study estimated global variation in wind parameters as fitted to a Weibull density function using NCEP/CFSR reanalysis data. The estimated Weibull distribution performs well in fitting the time series wind speed data at the global level according to R2, root mean square error, and power density error. The spatial, decadal, and seasonal patterns of wind speed distribution were then evaluated. We also analyzed the potential error in wind power estimation when a commonly assumed Rayleigh distribution (Weibull k = 2) is used. We find that the assumption of the same Weibull parameter across large regions can result in substantial errors. While large-scale wind speed data is often presented in the form of average wind speeds, these results highlight the need to also provide information on the wind speed distribution.
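The Rayleigh-assumption error analyzed above can be sketched with a small calculation: wind power density scales with E[v^3] = c^3 Γ(1 + 3/k), so fixing k = 2 at a site whose true shape parameter differs biases the power estimate even when the mean speed is matched. The mean speed and shape value below are hypothetical, for illustration only.

```python
import math

def weibull_scale_from_mean(mean_v, k):
    """Scale c of a Weibull distribution with the given mean and shape k."""
    return mean_v / math.gamma(1.0 + 1.0 / k)

def mean_cubed_speed(mean_v, k):
    """E[v^3] for Weibull wind speeds: c^3 * Gamma(1 + 3/k)."""
    c = weibull_scale_from_mean(mean_v, k)
    return c**3 * math.gamma(1.0 + 3.0 / k)

mean_v = 6.0   # m/s, illustrative mean wind speed
true_k = 1.6   # hypothetical local shape parameter
e3_true = mean_cubed_speed(mean_v, true_k)
e3_rayleigh = mean_cubed_speed(mean_v, 2.0)  # Rayleigh assumption (k = 2)
error_pct = 100.0 * (e3_rayleigh - e3_true) / e3_true
```

For these values the Rayleigh assumption underestimates the mean cubed speed by roughly a fifth, illustrating why the abstract calls such errors non-negligible.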
Weibull Effective Area for Hertzian Ring Crack Initiation Stress
Jadaan, Osama M.; Wereszczak, Andrew A; Johanns, Kurt E
2011-01-01
Spherical or Hertzian indentation is used to characterize and guide the development of engineered ceramics under consideration for diverse applications involving contact, wear, rolling fatigue, and impact. Ring crack initiation can be one important damage mechanism of Hertzian indentation. It is caused by sufficiently high, surface-located radial tensile stresses in an annular ring located adjacent to and outside of the Hertzian contact circle. While the maximum radial tensile stress is known to depend on the elastic properties of the sphere and target, the diameter of the sphere, the applied compressive force, and the coefficient of friction, the Weibull effective area will also be affected by those parameters. However, estimates of the maximum radial tensile stress and Weibull effective area are difficult to obtain because the coefficient of friction during Hertzian indentation is complex, likely intractable, and not known a priori. Circumventing this, the Weibull effective area expressions are derived here for the two extremes that bracket all coefficients of friction; namely, (1) the classical, frictionless Hertzian case where only complete slip occurs, and (2) the case where no slip occurs or where the coefficient of friction is infinite.
Weibull models of fracture strengths and fatigue behavior of dental resins in flexure and shear.
Baran, G R; McCool, J I; Paul, D; Boberick, K; Wunder, S
1998-01-01
In estimating lifetimes of dental restorative materials, it is useful to have available data on the fatigue behavior of these materials. Current efforts at estimation include several untested assumptions related to the equivalence of flaw distributions sampled by shear, tensile, and compressive stresses. Environmental influences on material properties are not accounted for, and it is unclear if fatigue limits exist. In this study, the shear and flexural strengths of three resins used as matrices in dental restorative composite materials were characterized by Weibull parameters. It was found that shear strengths were lower than flexural strengths, liquid sorption had a profound effect on characteristic strengths, and the Weibull shape parameter obtained from shear data differed for some materials from that obtained in flexure. In shear and flexural fatigue, a power law relationship applied for up to 250,000 cycles; no fatigue limits were found, and the data thus imply only one flaw population is responsible for failure. Again, liquid sorption adversely affected strength levels in most materials (decreasing shear strengths and flexural strengths by factors of 2-3) and to a greater extent than did the degree of cure or material chemistry. PMID:9730059
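Estimating Weibull parameters from strength data, as above, is commonly done with the linearized CDF (the classic Weibull probability plot): ln(-ln(1-F)) is linear in ln(strength) with slope equal to the Weibull modulus. A minimal sketch on synthetic data standing in for the resin strengths:

```python
import numpy as np

def weibull_plot_fit(strengths):
    """Estimate Weibull modulus m and characteristic strength s0 by
    least squares on the linearized CDF: ln(-ln(1-F)) = m*ln(s) - m*ln(s0)."""
    s = np.sort(np.asarray(strengths, dtype=float))
    n = len(s)
    f = (np.arange(1, n + 1) - 0.3) / (n + 0.4)  # median-rank plotting positions
    x = np.log(s)
    y = np.log(-np.log(1.0 - f))
    m, b = np.polyfit(x, y, 1)
    return m, np.exp(-b / m)

rng = np.random.default_rng(5)
# Synthetic strengths: true modulus 8, characteristic strength 80 (arbitrary units)
sample = 80.0 * rng.weibull(8.0, size=500)
m_hat, s0_hat = weibull_plot_fit(sample)
```

Comparing the shape parameter fitted to shear data against the one fitted to flexure data, as the study does, amounts to running such a fit per loading mode and testing whether the moduli differ.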
Characteristic strength, Weibull modulus, and failure probability of fused silica glass
NASA Astrophysics Data System (ADS)
Klein, Claude A.
2009-11-01
The development of high-energy lasers has focused attention on the requirement to assess the mechanical strength of optical components made of fused silica or fused quartz (SiO2). The strength of this material is known to be highly dependent on the stressed area and the surface finish, but has not yet been properly characterized in the published literature. Recently, Detrio and collaborators at the University of Dayton Research Institute (UDRI) performed extensive ring-on-ring flexural strength measurements on fused SiO2 specimens ranging in size from 1 to 9 in. in diameter and of widely differing surface qualities. We report on a Weibull statistical analysis of the UDRI data, an analysis based on the procedure outlined in Proc. SPIE 4375, 241 (2001). We demonstrate that (1) a two-parameter Weibull model, including the area-scaling principle, applies; (2) the shape parameter (m ≈ 10) is essentially independent of the stressed area as well as the surface finish; and (3) the characteristic strength (1-cm^2 uniformly stressed area) obeys a linear law, σ_C (in megapascals) ≈ 160 - 2.83 × PBS (in parts per million per steradian), where PBS characterizes the surface/subsurface "damage" of an appropriate set of test specimens. In this light, we evaluate the cumulative failure probability and the failure probability density of polished and superpolished fused SiO2 windows as a function of the biaxial tensile stress, for stressed areas ranging from 0.3 to 100 cm^2.
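The failure-probability evaluation implied by the abstract's two-parameter model with area scaling can be sketched directly: P_f = 1 - exp[-(A/A0)(σ/σ_C)^m], with σ_C taken from the reported linear law in PBS. The PBS value below is hypothetical, and m = 10 follows the quoted shape parameter.

```python
import math

def char_strength(pbs):
    """Characteristic strength (MPa, 1-cm^2 stressed area) from the linear law."""
    return 160.0 - 2.83 * pbs

def failure_probability(stress, area_cm2, pbs, m=10.0, a0=1.0):
    """Two-parameter Weibull cumulative failure probability with area scaling."""
    sc = char_strength(pbs)
    return 1.0 - math.exp(-(area_cm2 / a0) * (stress / sc) ** m)

# Hypothetical window: PBS = 10 ppm/sr, biaxial stress 100 MPa
pf_small = failure_probability(100.0, area_cm2=1.0, pbs=10.0)
pf_large = failure_probability(100.0, area_cm2=10.0, pbs=10.0)
```

At σ equal to σ_C with a 1-cm^2 stressed area, P_f is 1 - 1/e by construction, and enlarging the stressed area raises the failure probability, which is the area-scaling principle at work.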
Petry, M.D.; Mah, T.I.; Kerans, R.J.
1997-10-01
Strengths and Weibull moduli for alumina/yttrium aluminum garnet eutectic (AYE) filaments and for Si-C-O (Nicalon) filaments were calculated using measured and average filament diameters. The strengths agreed closely. Thus an average filament diameter could be used instead of the measured filament diameter in calculating strengths. The Weibull modulus obtained from an average filament diameter approximates the Weibull modulus obtained using the measured filament diameter.
Statistical modeling of tornado intensity distributions
NASA Astrophysics Data System (ADS)
Dotzek, Nikolai; Grieser, Jürgen; Brooks, Harold E.
We address the issue of determining an appropriate general functional shape for observed tornado intensity distributions. Recently, it was suggested that in the limit of long and large tornado records, exponential distributions over all positive Fujita or TORRO scale classes would result. Yet, our analysis shows that even for large databases, observations contradict the validity of exponential distributions for weak (F0) and violent (F5) tornadoes. We show that observed tornado intensities can be much better described by Weibull distributions, for which the exponential remains a special case. Weibull fits in either the v or F scale reproduce the observations significantly better than exponentials. In addition, we suggest applying the original definition of negative intensity scales down to F-2 and T-4 (corresponding to v = 0 m s^-1), at least for climatological analyses. Weibull distributions allow for an improved risk assessment of violent tornadoes up to F6, and better estimates of total tornado occurrence, the degree of underreporting, and the existence of subcritical tornadic circulations below damaging intensity. Therefore, our results are relevant for climatologists and risk assessment managers alike.
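The Weibull-versus-exponential comparison above can be sketched on synthetic intensities: the exponential is exactly the k = 1 special case of the Weibull, so the two fits are nested and can be compared by log-likelihood or AIC. All values below are hypothetical, not the tornado databases of the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
# Synthetic intensity sample (e.g. tornado wind speeds), Weibull with k = 1.5
v = 40.0 * rng.weibull(1.5, size=5000)

# Weibull fit (two free parameters, location fixed at zero)
k, _, c = stats.weibull_min.fit(v, floc=0)
ll_weib = np.sum(stats.weibull_min.logpdf(v, k, scale=c))

# Exponential fit (one free parameter; Weibull with k fixed at 1)
lam = v.mean()
ll_exp = np.sum(stats.expon.logpdf(v, scale=lam))

aic_weib = 2 * 2 - 2 * ll_weib
aic_exp = 2 * 1 - 2 * ll_exp
```

When the underlying shape parameter departs from 1, the Weibull wins on AIC despite its extra parameter, which mirrors the paper's finding for the observed intensity records.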
The Weibull functional form for SEP event spectra
NASA Astrophysics Data System (ADS)
Laurenza, M.; Consolini, G.; Storini, M.; Damiani, A.
2015-08-01
The evolution of the kinetic energy spectra of two Solar Energetic Particle (SEP) events has been investigated through Shannon's differential entropy during the different phases of the selected events, as proposed by [1]. Data from the LET and HET instruments onboard the STEREO spacecraft were used to cover a wide energy range from ~4 MeV to 100 MeV, as well as EPAM and ERNE data, on board the ACE and SOHO spacecraft, respectively, in the range 1.6-112 MeV. The spectral features were found to be consistent with the Weibull-like shape, both during the main phase of the SEP events and over their whole duration. Comparison of results obtained for energetic particles accelerated at corotating interaction regions (CIRs) and at transient-related interplanetary shocks is presented in the framework of shock acceleration.
Test Population Selection from Weibull-Based, Monte Carlo Simulations of Fatigue Life
NASA Technical Reports Server (NTRS)
Vlcek, Brian L.; Zaretsky, Erwin V.; Hendricks, Robert C.
2008-01-01
Fatigue life is probabilistic and not deterministic. Experimentally establishing the fatigue life of materials, components, and systems is both time consuming and costly. As a result, conclusions regarding fatigue life are often inferred from a statistically insufficient number of physical tests. A proposed methodology for comparing life results as a function of variability due to Weibull parameters, variability between successive trials, and variability due to size of the experimental population is presented. Using Monte Carlo simulation of randomly selected lives from a large Weibull distribution, the variation in the L10 fatigue life of aluminum alloy AL6061 rotating rod fatigue tests was determined as a function of population size. These results were compared to the L10 fatigue lives of small (10 each) populations from AL2024, AL7075 and AL6061. For aluminum alloy AL6061, a simple algebraic relationship was established for the upper and lower L10 fatigue life limits as a function of the number of specimens failed. For most engineering applications where less than 30 percent variability can be tolerated in the maximum and minimum values, at least 30 to 35 test samples are necessary. The variability of test results based on small sample sizes can be greater than the actual differences, if any, that exist between materials and can result in erroneous conclusions. The fatigue life of AL2024 is statistically longer than that of AL6061 and AL7075. However, there is no statistical difference between the fatigue lives of AL6061 and AL7075, even though AL7075 had a fatigue life 30 percent greater than AL6061.
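The Monte Carlo methodology can be sketched as a toy version with hypothetical Weibull parameters (not the AL6061 data): repeatedly draw small populations from a known Weibull life distribution, estimate the L10 life (the 10th-percentile life) from each, and compare the scatter of the estimates at two population sizes.

```python
import numpy as np

rng = np.random.default_rng(42)

def l10_estimates(n_specimens, n_trials=2000, beta=2.0, eta=1.0):
    """L10 life estimated from `n_trials` repeated samples of size `n_specimens`
    drawn from a Weibull life distribution (slope beta, characteristic life eta)."""
    lives = eta * rng.weibull(beta, size=(n_trials, n_specimens))
    return np.percentile(lives, 10, axis=1)

spread_small = np.std(l10_estimates(10))   # 10 specimens per test program
spread_large = np.std(l10_estimates(35))   # 35 specimens per test program
```

The estimate scatter shrinks with population size, which is the abstract's point that small samples can show more variability than any real difference between materials.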
Distributed computing and nuclear reactor analysis
Brown, F.B.; Derstine, K.L.; Blomquist, R.N.
1994-03-01
Large-scale scientific and engineering calculations for nuclear reactor analysis can now be carried out effectively in a distributed computing environment, at costs far lower than for traditional mainframes. The distributed computing environment must include support for traditional system services, such as a queuing system for batch work, reliable filesystem backups, and parallel processing capabilities for large jobs. All ANL computer codes for reactor analysis have been adapted successfully to a distributed system based on workstations and X-terminals. Distributed parallel processing has been demonstrated to be effective for long-running Monte Carlo calculations.
Distribution analysis of airborne nicotine concentrations in hospitality facilities.
Schorp, Matthias K; Leyden, Donald E
2002-02-01
A number of publications report statistical summaries for environmental tobacco smoke (ETS) concentrations. Despite compelling evidence for the data not being normally distributed, these publications typically report the arithmetic mean and standard deviation of the data, thereby losing important information related to the distribution of values contained in the original data. We were interested in the frequency distributions of reported nicotine concentrations in hospitality environments and subjected available data to distribution analyses. The distribution of experimental indoor airborne nicotine concentration data taken from hospitality facilities worldwide was fit to lognormal, Weibull, exponential, Pearson (Type V), logistic, and loglogistic distribution models. Comparison of goodness of fit (GOF) parameters and indications from the literature verified the selection of a lognormal distribution as the overall best model. When individual data were not reported in the literature, statistical summaries of results were used to model sets of lognormally distributed data that are intended to mimic the original data distribution. Grouping the data into various categories led to 31 frequency distributions that were further interpreted. The median values in nonsmoking environments are about half of the median values in smoking sections. When different continents are compared, Asian, European, and North American median values in restaurants are about a factor of three below levels encountered in other hospitality facilities. On a comparison of nicotine concentrations in North American smoking sections and nonsmoking sections, median values are about one-third of the European levels. The results obtained may be used to address issues related to exposure to ETS in the hospitality sector. PMID:11868665
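The distribution-model selection described above can be sketched by fitting lognormal and Weibull candidates to concentration-like data and comparing goodness of fit, here with the Kolmogorov-Smirnov statistic rather than the paper's full GOF battery. The data are synthetic (lognormal with a geometric mean of 10 and geometric standard deviation of 2, both hypothetical).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
# Synthetic airborne-concentration data: lognormal, GM = 10, GSD = 2
conc = rng.lognormal(mean=np.log(10.0), sigma=np.log(2.0), size=3000)

# Fit candidate models (location fixed at zero) and compare KS statistics
s, _, scale_ln = stats.lognorm.fit(conc, floc=0)
ks_ln = stats.kstest(conc, 'lognorm', args=(s, 0, scale_ln)).statistic
k, _, c = stats.weibull_min.fit(conc, floc=0)
ks_wb = stats.kstest(conc, 'weibull_min', args=(k, 0, c)).statistic
```

A smaller KS statistic for the lognormal fit points to it as the better model, consistent with the paper's selection of the lognormal as the overall best distribution.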
Cox, Christopher; Chu, Haitao; Schneider, Michael F; Muñoz, Alvaro
2007-10-15
The widely used Cox proportional hazards regression model for the analysis of censored survival data has limited utility when either hazard functions themselves are of primary interest, or when relative times instead of relative hazards are the relevant measures of association. Parametric regression models are an attractive option in situations such as this, although the choice of a particular model from the available families of distributions can be problematic. The generalized gamma (GG) distribution is an extensive family that contains nearly all of the most commonly used distributions, including the exponential, Weibull, log normal and gamma. More importantly, the GG family includes all four of the most common types of hazard function: monotonically increasing and decreasing, as well as bathtub and arc-shaped hazards. We present here a taxonomy of the hazard functions of the GG family, which includes various special distributions and allows depiction of effects of exposures on hazard functions. We applied the proposed taxonomy to study survival after a diagnosis of clinical AIDS during different eras of HIV therapy, where proportionality of hazard functions was clearly not fulfilled and flexibility in estimating hazards with very different shapes was needed. Comparisons of survival after AIDS in different eras of therapy are presented in terms of both relative times and relative hazards. Standard errors for these and other derived quantities are computed using the delta method and checked using the bootstrap. Description of standard statistical software (Stata, SAS and S-Plus) for the computations is included and available at http://statepi.jhsph.edu/software. PMID:17342754
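The flexibility of the GG family described above can be seen directly from its hazard function, the ratio of density to survival. The sketch below uses SciPy's `gengamma` parameterisation; the specific parameter values are illustrative choices showing increasing and decreasing hazards, not the paper's taxonomy.

```python
# Sketch: hazard functions from the generalized gamma (GG) family.
# With c = 1, gengamma(a, c) reduces to the gamma distribution.
import numpy as np
from scipy import stats

t = np.linspace(0.1, 5.0, 200)

def gg_hazard(a, c, scale=1.0):
    # hazard(t) = density(t) / survival(t)
    dist = stats.gengamma(a, c, scale=scale)
    return dist.pdf(t) / dist.sf(t)

h_inc = gg_hazard(a=2.0, c=1.0)   # gamma shape > 1: monotonically increasing hazard
h_dec = gg_hazard(a=0.5, c=1.0)   # gamma shape < 1: monotonically decreasing hazard
print(h_inc[:3], h_dec[:3])
```

Other corners of the parameter space give the bathtub and arc-shaped hazards mentioned above, which is what makes the family useful when proportionality of hazards fails.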
Towards Distributed Memory Parallel Program Analysis
Quinlan, D; Barany, G; Panas, T
2008-06-17
This paper presents a parallel attribute evaluation for distributed memory parallel computer architectures, where previously only shared memory parallel support for this technique had been developed. Attribute evaluation is part of how attribute grammars are used for program analysis within modern compilers. In this work, we have extended ROSE, an open compiler infrastructure, with a distributed memory parallel attribute evaluation mechanism to support user-defined global program analysis required for some forms of security analysis which cannot be addressed by a file-by-file view of large-scale applications. As a result, user-defined security analyses may now run in parallel without the user having to specify the way data is communicated between processors. The automation of communication enables an extensible open-source parallel program analysis infrastructure.
Shuttle Electrical Power Analysis Program (SEPAP) distribution circuit analysis report
NASA Technical Reports Server (NTRS)
Torina, E. M.
1975-01-01
An analysis and evaluation were made of the operating parameters of the shuttle electrical power distribution circuit under load conditions encountered during a normal Sortie 2 Mission, with emphasis on the main periods of liftoff and landing.
Distributional Cost-Effectiveness Analysis: A Tutorial.
Asaria, Miqdad; Griffin, Susan; Cookson, Richard
2016-01-01
Distributional cost-effectiveness analysis (DCEA) is a framework for incorporating health inequality concerns into the economic evaluation of health sector interventions. In this tutorial, we describe the technical details of how to conduct DCEA, using an illustrative example comparing alternative ways of implementing the National Health Service (NHS) Bowel Cancer Screening Programme (BCSP). The 2 key stages in DCEA are 1) modeling social distributions of health associated with different interventions, and 2) evaluating social distributions of health with respect to the dual objectives of improving total population health and reducing unfair health inequality. As well as describing the technical methods used, we also identify the data requirements and the social value judgments that have to be made. Finally, we demonstrate the use of sensitivity analyses to explore the impacts of alternative modeling assumptions and social value judgments. PMID:25908564
Analysis of fire size distribution in Portugal
NASA Astrophysics Data System (ADS)
Pereira, Mário; Calado, Teresa; Camara, Carlos; Trigo, Ricardo
2010-05-01
In this work we have applied statistical methods to characterize the variability of forest fires in Portugal and, additionally, to assess the role of meteorological conditions on fire size. Appropriate distribution functions were tested to fit the highly positively skewed fire size samples. Maximum likelihood estimates (MLE) of distribution parameters were derived from a 28-year database of fire occurrences, and the goodness of fit was assessed by standard Kolmogorov-Smirnov, Cramér-von Mises and Anderson-Darling statistical tests as well as by qq-plots. Weather conditions, namely air temperature, precipitation and wind, have significant influence on vegetation physiological state, and their impact on fire size was studied by using these variables as meteorological covariates of the above derived statistical distributions. The following datasets covering the 1980-2007 period were used: 1) the Portuguese Rural Fire database, provided by the Forest National Authority, and 2) daily values of meteorological variables, as well as atmospheric circulation indices obtained from weather typing analysis and fire risk indices. The methodology was applied considering all fire records in the database and fires registered in particular periods and/or locations. Results reveal the usefulness of parametric models to characterize the observed fire size distribution and to assess the role of meteorological conditions on fire size distribution.
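The fit-and-test workflow above can be sketched as follows: obtain MLEs for a candidate distribution and run a Cramér-von Mises goodness-of-fit check. The burned-area sample is synthetic (the Portuguese database is not reproduced here), and since the parameters are estimated from the same data, the p-value should be read as indicative only.

```python
# Sketch: MLE fit of a heavy-tailed fire-size sample plus a Cramér-von Mises
# goodness-of-fit check (synthetic stand-in data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
burned_area = rng.pareto(a=1.5, size=1000) + 1.0  # strongly right-skewed stand-in

shape, loc, scale = stats.lognorm.fit(burned_area, floc=0)
# Note: the tabulated p-value assumes parameters known in advance, so with
# fitted parameters it is only a rough guide.
res = stats.cramervonmises(burned_area, 'lognorm', args=(shape, loc, scale))
print(f"CvM statistic: {res.statistic:.3f}, p-value: {res.pvalue:.3f}")
```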
NASA Astrophysics Data System (ADS)
Gu, Jian; Lei, YongPing; Lin, Jian; Fu, HanGuang; Wu, Zhongwei
2016-01-01
The scattering of fatigue life data is a common problem and is usually described using the normal distribution or Weibull distribution. For solder joints under drop impact, due to the complicated stress distribution, the relationship between the stress and the drop life is so far unknown. Furthermore, it is important to establish a function describing the change in standard deviation for solder joints under different drop impact levels. Therefore, in this study, a novel conditional probability density distribution surface (CPDDS) was established for the analysis of the drop life of solder joints. The relationship between the drop impact acceleration and the drop life is proposed, which comprehensively considers the stress distribution. A novel exponential model was adopted for describing the change of the standard deviation with the impact acceleration (0 → +∞). To validate the model, the drop life of Sn-3.0Ag-0.5Cu solder joints was analyzed. The probability density curve of the logarithm of the fatigue life distribution can be easily obtained for a certain acceleration level fixed on the acceleration axis of the CPDDS. The P-A-N curve was also obtained using the functions μ(A) and σ(A), which can reflect the regularity of the life data for an overall reliability P.
Bayesian Inference of the Weibull Model Based on Interval-Censored Survival Data
Guure, Chris Bambey; Ibrahim, Noor Akma; Adam, Mohd Bakri
2013-01-01
Interval-censored data consist of adjacent inspection times that surround an unknown failure time. In this paper we have reviewed the classical approach, maximum likelihood, for estimating the Weibull parameters with interval-censored data. We have also considered the Bayesian approach for estimating the Weibull parameters with interval-censored data under three loss functions. This study was motivated by the limited discussion in the literature, if any, of Bayesian estimation of the Weibull parameters with interval-censored data. A simulation study is carried out to compare the performances of the methods. A real data application is also illustrated. It has been observed from the study that the Bayesian estimator is preferred to the classical maximum likelihood estimator for both the scale and shape parameters. PMID:23476718
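The classical approach reviewed above maximises an interval-censored likelihood in which each observation contributes F(right) − F(left), the probability that the failure fell between its two inspection times. This is a minimal sketch with simulated inspections (unit-spaced, an assumption) rather than the paper's design.

```python
# Sketch: maximum-likelihood Weibull estimation from interval-censored data.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(2)
true_shape, true_scale = 1.5, 10.0
failures = weibull_min.rvs(true_shape, scale=true_scale, size=300, random_state=rng)
# Each failure is only known to lie between two adjacent inspection times.
left = np.floor(failures)          # last inspection before failure
right = left + 1.0                 # first inspection after failure

def neg_log_lik(params):
    shape, scale = params
    if shape <= 0 or scale <= 0:
        return np.inf
    # Interval contribution: F(right) - F(left)
    p = weibull_min.cdf(right, shape, scale=scale) - weibull_min.cdf(left, shape, scale=scale)
    return -np.sum(np.log(np.clip(p, 1e-300, None)))

fit = minimize(neg_log_lik, x0=[1.0, 5.0], method='Nelder-Mead')
print("MLE shape, scale:", fit.x)
```

With 300 intervals of unit width, the estimates land close to the true (1.5, 10.0), illustrating why the classical estimator remains a sensible baseline for the Bayesian comparison.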
Performance optimisations for distributed analysis in ALICE
NASA Astrophysics Data System (ADS)
Betev, L.; Gheata, A.; Gheata, M.; Grigoras, C.; Hristov, P.
2014-06-01
Performance is a critical issue in a production system accommodating hundreds of analysis users. Compared to a local session, distributed analysis is exposed to services and network latencies, remote data access and heterogeneous computing infrastructure, creating a more complex performance and efficiency optimization matrix. During the last 2 years, ALICE analysis shifted from a fast development phase to more mature and stable code. At the same time, the frameworks and tools for deployment, monitoring and management of large productions have evolved considerably too. The ALICE Grid production system is currently used by a fair share of organized and individual user analysis, consuming up to 30% of the available resources and ranging from fully I/O-bound analysis code to CPU-intensive correlation or resonance studies. While the intrinsic analysis performance is unlikely to improve by a large factor during the LHC long shutdown (LS1), the overall efficiency of the system still has to be improved by a significant factor to satisfy the analysis needs. We have instrumented all analysis jobs with "sensors" collecting comprehensive monitoring information on the job running conditions and performance in order to identify bottlenecks in the data processing flow. These data are collected by the MonALISa-based ALICE Grid monitoring system and are used to steer and improve the job submission and management policy, to identify operational problems in real time and to perform automatic corrective actions. In parallel with an upgrade of our production system, we are aiming for low-level improvements related to data format, data management and merging of results to allow for better performing ALICE analysis.
Bayesian inference based on dual generalized order statistics from the exponentiated Weibull model
NASA Astrophysics Data System (ADS)
Al Sobhi, Mashail M.
2015-02-01
Bayesian estimation for the two parameters and the reliability function of the exponentiated Weibull model are obtained based on dual generalized order statistics (DGOS). Also, Bayesian prediction bounds for future DGOS from exponentiated Weibull model are obtained. The symmetric and asymmetric loss functions are considered for Bayesian computations. The Markov chain Monte Carlo (MCMC) methods are used for computing the Bayes estimates and prediction bounds. The results have been specialized to the lower record values. Comparisons are made between Bayesian and maximum likelihood estimators via Monte Carlo simulation.
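The exponentiated Weibull model at the heart of the abstract is available directly in SciPy, whose parameterisation takes the CDF as F(x) = (1 − exp(−(x/scale)^c))^a. The parameter values below are illustrative, not those of the paper; the reliability (survival) function shown is the quantity being estimated above.

```python
# Sketch: CDF and reliability function of the exponentiated Weibull model.
import numpy as np
from scipy.stats import exponweib

a, c, scale = 2.0, 1.5, 1.0           # exponentiation and Weibull shape parameters
x = np.linspace(0.01, 5, 100)

cdf = exponweib.cdf(x, a, c, scale=scale)
# Reliability (survival) function, the target of the Bayesian estimation above:
reliability = exponweib.sf(x, a, c, scale=scale)
print(reliability[:3])
```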
Analysis of Jingdong Mall Logistics Distribution Model
NASA Astrophysics Data System (ADS)
Shao, Kang; Cheng, Feng
In recent years, the development of electronic commerce in China has accelerated. The role of logistics has been highlighted, and more and more e-commerce enterprises are beginning to realize the importance of logistics to the success or failure of the enterprise. In this paper, the author takes Jingdong Mall as an example, performing a SWOT analysis of the current situation of its self-built logistics system, identifying the problems existing in the current Jingdong Mall logistics distribution, and giving appropriate recommendations.
Analysis and control of distributed cooperative systems.
Feddema, John Todd; Parker, Eric Paul; Wagner, John S.; Schoenwald, David Alan
2004-09-01
As part of the DARPA Information Processing Technology Office (IPTO) Software for Distributed Robotics (SDR) Program, Sandia National Laboratories has developed analysis and control software for coordinating tens to thousands of autonomous cooperative robotic agents (primarily unmanned ground vehicles) performing military operations such as reconnaissance, surveillance and target acquisition; countermine and explosive ordnance disposal; force protection and physical security; and logistics support. Due to the nature of these applications, the control techniques must be distributed, and they must not rely on high bandwidth communication between agents. At the same time, a single soldier must easily direct these large-scale systems. Finally, the control techniques must be provably convergent so as not to cause undue harm to civilians. In this project, provably convergent, moderate communication bandwidth, distributed control algorithms have been developed that can be regulated by a single soldier. We have simulated in great detail the control of small numbers of vehicles (up to 20) navigating throughout a building, and we have simulated in lesser detail the control of larger numbers of vehicles (up to 1000) trying to locate several targets in a large outdoor facility. Finally, we have experimentally validated the resulting control algorithms on smaller numbers of autonomous vehicles.
Probability Distributions for Offshore Wind Speeds
NASA Astrophysics Data System (ADS)
Morgan, E. C.; Lackner, M.; Vogel, R. M.; Baise, L. G.
2009-12-01
In planning offshore wind farms, short-term wind speeds play a central role in estimating various engineering parameters, such as power output, design load, fatigue load, etc. Lacking wind speed data at a specific site, the probability distribution of wind speed serves as the primary substitute for such measurements during parameter estimation. It is common practice to model wind speeds with the Weibull distribution, but recent literature suggests that many other distributions may perform better. Such studies are often limited in either the time span or geographic scope of their datasets. Using 10-minute average wind speed time series ranging from 1 month to 20 years in duration, collected from 178 buoys around North America, we show that the widely accepted Weibull distribution provides a poor fit to the distribution of wind speeds when compared with other models. For example, several other distributions, including the bimodal Weibull, Kappa and Wakeby models, fit the data remarkably well, yielding goodness-of-fit values significantly closer to 1 than the Weibull and many other distributions. Additionally, we show that the Kappa and Wakeby models predict average wind turbine power output better than other distributions, including the bimodal Weibull. Our results show that more complicated models than the two-parameter Weibull are needed to capture the complex behavior of wind, and that using such models leads to improved engineering decisions.
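The power-output comparison above boils down to integrating a turbine power curve against the fitted wind-speed density. The sketch below does this for a plain two-parameter Weibull fit; the idealised power curve and the cut-in/rated/cut-out speeds are assumptions, and the wind sample is synthetic rather than buoy data.

```python
# Sketch: estimating average turbine power from a fitted wind-speed distribution.
import numpy as np
from scipy import stats
from scipy.integrate import quad

rng = np.random.default_rng(3)
wind = stats.weibull_min.rvs(2.0, scale=8.0, size=2000, random_state=rng)  # m/s
shape, loc, scale = stats.weibull_min.fit(wind, floc=0)

def power_kw(v, cut_in=3.0, rated_v=12.0, cut_out=25.0, rated_kw=2000.0):
    # Idealised cubic power curve between cut-in and rated speed.
    if v < cut_in or v > cut_out:
        return 0.0
    if v >= rated_v:
        return rated_kw
    return rated_kw * (v**3 - cut_in**3) / (rated_v**3 - cut_in**3)

pdf = lambda v: stats.weibull_min.pdf(v, shape, loc, scale)
avg_power, _ = quad(lambda v: power_kw(v) * pdf(v), 0, 30, limit=200)
print(f"Estimated average power: {avg_power:.0f} kW")
```

Swapping in a better-fitting density (e.g. a bimodal Weibull mixture) changes only the `pdf` line, which is exactly where the choice of distribution propagates into the engineering estimate.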
Rectangular shape distributed piezoelectric actuator: analytical analysis
NASA Astrophysics Data System (ADS)
Sun, Bohua; Qiu, Yan
2004-04-01
This paper focuses on the development of distributed piezoelectric actuators (DPAs) with rectangular shapes using PZT materials. Analytical models of rectangular DPAs have been constructed in order to analyse and test the performance of DPA products. First, based on the theory of electromagnetics, DPAs are treated as a type of capacitor. The charge density distributed on the interdigitated electrodes (IDEs) applied in the actuators, and the capacitance of the DPAs, have been calculated. The distribution and intensity of the electric field in the DPA element have also been calculated exactly. Second, based on the piezoelectric constitutive relations and compound plate theory, models for the mechanical strain and stress fields of DPAs have been developed, and the performance of rectangular DPAs has been discussed. Finally, on the basis of these models, an improved design of a rectangular DPA is discussed and summarized. Because a minimum of hypotheses is used in the calculations, an exact description of the distribution and intensity of the electric field in DPAs is obtained. These exact calculations have not previously appeared in the literature, and can be used in DPA design and manufacturing to improve mechanical performance and reduce the cost of DPA products in further applications. All analysis and calculation in this paper were performed with MATLAB and MathCAD; the FEM results used for comparison were obtained using the ABAQUS program.
CMS distributed data analysis with CRAB3
NASA Astrophysics Data System (ADS)
Mascheroni, M.; Balcas, J.; Belforte, S.; Bockelman, B. P.; Hernandez, J. M.; Ciangottini, D.; Konstantinov, P. B.; Silva, J. M. D.; Ali, M. A. B. M.; Melo, A. M.; Riahi, H.; Tanasijczuk, A. J.; Yusli, M. N. B.; Wolf, M.; Woodard, A. E.; Vaandering, E.
2015-12-01
The CMS Remote Analysis Builder (CRAB) is a distributed workflow management tool which facilitates analysis tasks by isolating users from the technical details of the Grid infrastructure. Throughout LHC Run 1, CRAB has been successfully employed by an average of 350 distinct users each week executing about 200,000 jobs per day. CRAB has been significantly upgraded in order to face the new challenges posed by LHC Run 2. Components of the new system include 1) a lightweight client, 2) a central primary server which communicates with the clients through a REST interface, 3) secondary servers which manage user analysis tasks and submit jobs to the CMS resource provisioning system, and 4) a central service to asynchronously move user data from temporary storage in the execution site to the desired storage location. The new system improves the robustness, scalability and sustainability of the service. Here we provide an overview of the new system, operation, and user support, report on its current status, and identify lessons learned from the commissioning phase and production roll-out.
Buffered Communication Analysis in Distributed Multiparty Sessions
NASA Astrophysics Data System (ADS)
Deniélou, Pierre-Malo; Yoshida, Nobuko
Many communication-centred systems today rely on asynchronous messaging among distributed peers to make efficient use of parallel execution and resource access. With such asynchrony, the communication buffers can grow excessively over time. This paper proposes a static verification methodology based on multiparty session types which can efficiently compute upper bounds on buffer sizes. Our analysis relies on a uniform causality audit of the entire collaboration pattern - an examination that is not always possible from each end-point type. We extend this method to design algorithms that allocate communication channels in order to optimise the memory requirements of session executions. From these analyses, we propose two refinement methods which respect buffer bounds: a global protocol refinement that automatically inserts confirmation messages to guarantee stipulated buffer sizes, and a local protocol refinement to optimise asynchronous messaging without buffer overflow. Finally, our work is applied to overcome a buffer overflow problem of the multi-buffering algorithm.
NASA Technical Reports Server (NTRS)
Welch, R. M.; Sengupta, S. K.; Chen, D. W.
1990-01-01
Stratocumulus cloud fields in the FIRE IFO region are analyzed using LANDSAT Thematic Mapper imagery. Structural properties such as cloud cell size distribution, cell horizontal aspect ratio, fractional coverage and fractal dimension are determined. It is found that stratocumulus cloud number densities are represented by a power law. Cell horizontal aspect ratio has a tendency to increase at large cell sizes, and cells are bi-fractal in nature. Using LANDSAT Multispectral Scanner imagery for twelve selected stratocumulus scenes acquired during previous years, similar structural characteristics are obtained. Cloud field spatial organization also is analyzed. Nearest-neighbor spacings are fit with a number of functions, with Weibull and Gamma distributions providing the best fits. Poisson tests show that the spatial separations are not random. Second order statistics are used to examine clustering.
Ceramics Analysis and Reliability Evaluation of Structures (CARES). Users and programmers manual
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Manderscheid, Jane M.; Gyekenyesi, John P.
1990-01-01
This manual describes how to use the Ceramics Analysis and Reliability Evaluation of Structures (CARES) computer program. The primary function of the code is to calculate the fast fracture reliability or failure probability of macroscopically isotropic ceramic components. These components may be subjected to complex thermomechanical loadings, such as those found in heat engine applications. The program uses results from MSC/NASTRAN or ANSYS finite element analysis programs to evaluate component reliability due to inherent surface and/or volume type flaws. CARES utilizes the Batdorf model and the two-parameter Weibull cumulative distribution function to describe the effect of multiaxial stress states on material strength. The principle of independent action (PIA) and the Weibull normal stress averaging models are also included. Weibull material strength parameters, the Batdorf crack density coefficient, and other related statistical quantities are estimated from four-point bend bar or uniform uniaxial tensile specimen fracture strength data. Parameter estimation can be performed for single or multiple failure modes by using least-squares analysis or the maximum likelihood method. Kolmogorov-Smirnov and Anderson-Darling goodness-of-fit tests, ninety percent confidence intervals on the Weibull parameters, and Kanofsky-Srinivasan ninety percent confidence band values are also provided. The probabilistic fast-fracture theories used in CARES, along with the input and output for CARES, are described. Example problems demonstrating various features of the program are also included. This manual describes the MSC/NASTRAN version of the CARES program.
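The two-parameter Weibull strength statistics underlying CARES can be sketched as follows: estimate the Weibull modulus m and characteristic strength from specimen fracture data, then evaluate the failure probability at a service stress. The specimen strengths here are synthetic, and the simple uniaxial F = 1 − exp(−(σ/σ₀)^m) form below ignores the size-scaling and multiaxial (Batdorf/PIA) corrections that CARES itself applies.

```python
# Sketch: two-parameter Weibull strength statistics for ceramic reliability.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Fracture strengths (MPa), e.g. from four-point bend tests (synthetic):
strengths = stats.weibull_min.rvs(10.0, scale=400.0, size=50, random_state=rng)

# Maximum-likelihood estimates of Weibull modulus m and characteristic strength:
m, loc, sigma0 = stats.weibull_min.fit(strengths, floc=0)

def failure_probability(sigma):
    # Uniaxial two-parameter Weibull failure probability (same effective volume).
    return 1.0 - np.exp(-(sigma / sigma0) ** m)

print(f"Weibull modulus m = {m:.1f}, characteristic strength = {sigma0:.0f} MPa")
print(f"P_f at 300 MPa: {failure_probability(300.0):.3f}")
```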
Likelihood analysis of earthquake focal mechanism distributions
NASA Astrophysics Data System (ADS)
Kagan, Yan Y.; Jackson, David D.
2015-06-01
In a paper published earlier we discussed forecasts of earthquake focal mechanisms and ways to test forecast efficiency. Several verification methods were proposed, but they were based on ad hoc, empirical assumptions, so their performance is questionable. We apply a conventional likelihood method to measure the skill of earthquake focal mechanism orientation forecasts. The advantage of such an approach is that earthquake rate prediction can be adequately combined with focal mechanism forecasts, if both are based on the likelihood scores, resulting in a general forecast optimization. We measure the difference between two double-couple sources as the minimum rotation angle that transforms one into the other. We measure the uncertainty of a focal mechanism forecast (the variability), and the difference between observed and forecasted orientations (the prediction error), in terms of these minimum rotation angles. To calculate the likelihood score we need to compare actual forecasts or occurrences of predicted events with the null hypothesis that the mechanism's 3-D orientation is random (or equally probable). For 3-D rotation the random rotation angle distribution is not uniform. To better understand the resulting complexities, we calculate the information (likelihood) score for two theoretical rotational distributions (Cauchy and von Mises-Fisher), which are used to approximate earthquake source orientation patterns. We then calculate the likelihood score for earthquake source forecasts and for their validation by future seismicity data. Several issues need to be explored when analyzing observational results: their dependence on forecast and data resolution, the internal dependence of scores on the forecasted angle, and the random variability of likelihood scores. Here we propose a simple tentative solution, but extensive theoretical and statistical analysis is needed.
Development of pair distribution function analysis
Vondreele, R.; Billinge, S.; Kwei, G.; Lawson, A.
1996-09-01
This is the final report of a 3-year LDRD project at LANL. It has become increasingly evident that structural coherence in the CuO2 planes of high-Tc superconducting materials over some intermediate length scale (nm range) is important to superconductivity. In recent years, the pair distribution function (PDF) analysis of powder diffraction data has been developed for extracting structural information on these length scales. This project sought to expand and develop this technique, use it to analyze neutron powder diffraction data, and apply it to problems. In particular, our interest is in the area of high-Tc superconductors, although we planned to extend the study to the closely related perovskite ferroelectric materials and other materials where the local structure affects the properties and where detailed knowledge of the local and intermediate-range structure is important. In addition, we planned to carry out single crystal experiments to look for diffuse scattering. This information augments the information from the PDF.
Distributed Design and Analysis of Computer Experiments
Energy Science and Technology Software Center (ESTSC)
2002-11-11
DDACE is a C++ object-oriented software library for the design and analysis of computer experiments. DDACE can be used to generate samples from a variety of sampling techniques. These samples may be used as input to an application code. DDACE also contains statistical tools such as response surface models and correlation coefficients to analyze input/output relationships between variables in an application code. DDACE can generate input values for uncertain variables within a user's application. For example, a user might like to vary a temperature variable as well as some material variables in a series of simulations. Through the series of simulations the user might be looking for optimal settings of parameters based on some user criteria, or the user may be interested in the sensitivity to input variability shown by an output variable. In either case, the user may provide information about the suspected ranges and distributions of a set of input variables, along with a sampling scheme, and DDACE will generate input points based on these specifications. The input values generated by DDACE and the one or more outputs computed through the user's application code can be analyzed with a variety of statistical methods. This can lead to a wealth of information about the relationships between the variables in the problem. While statistical and mathematical packages may be employed to carry out the analysis on the input/output relationships, DDACE also contains some tools for analyzing the simulation data. DDACE incorporates a software package called MARS (Multivariate Adaptive Regression Splines), developed by Jerome Friedman. MARS is used for generating a spline surface fit of the data. With MARS, a model simplification may be calculated using the input and corresponding output values for the user's application problem. The MARS grid data may be used for generating 3-dimensional response surface plots of the simulation data.
DDACE also contains an implementation of an algorithm by Michael McKay to compute variable correlations. DDACE can also be used to carry out a main-effects analysis to calculate the sensitivity of an output variable to each of the varied inputs taken individually.
A Mixed Kijima Model Using the Weibull-Based Generalized Renewal Processes
2015-01-01
Generalized Renewal Processes are useful for modeling the rejuvenation of dynamical systems resulting from planned or unplanned interventions. We present new perspectives for Generalized Renewal Processes in general and for Weibull-based Generalized Renewal Processes in particular. Departing from the existing literature, we present a mixed Generalized Renewal Processes approach involving Kijima Type I and II models, allowing one to infer the impact of distinct interventions on the performance of the system under study. The first and second theoretical moments of this model are introduced, as well as its maximum likelihood estimation and random sampling approaches. In order to illustrate the usefulness of the proposed Weibull-based Generalized Renewal Processes model, some real data sets involving improving, stable, and deteriorating systems are used. PMID:26197222
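The random sampling side of a Weibull-based GRP can be sketched with the Kijima Type I virtual age: each repair with rejuvenation parameter q retains only a fraction q of the age accumulated in the last sojourn, and the next time-between-failures is drawn from the Weibull conditional on the current virtual age. All numeric values below are illustrative, not from the paper's data sets.

```python
# Sketch: simulating a Weibull-based Generalized Renewal Process
# with a Kijima Type I virtual age.
import numpy as np

def simulate_grp_kijima1(alpha, beta, q, n_failures, rng):
    """Successive times-between-failures under Kijima Type I (0 <= q <= 1)."""
    v = 0.0                      # virtual age after the last repair
    times = []
    for _ in range(n_failures):
        u = rng.random()
        # Inverse of the conditional Weibull survival given virtual age v:
        # S(x | v) = exp((v/alpha)**beta - ((v + x)/alpha)**beta)
        x = alpha * ((v / alpha) ** beta - np.log(1.0 - u)) ** (1.0 / beta) - v
        times.append(x)
        v = v + q * x            # Kijima I: repair keeps fraction q of last sojourn's age
    return np.array(times)

rng = np.random.default_rng(5)
tbf = simulate_grp_kijima1(alpha=100.0, beta=2.0, q=0.3, n_failures=1000, rng=rng)
print("mean time between failures:", tbf.mean())
```

With beta > 1 and q > 0 the virtual age accumulates, so the simulated times-between-failures shrink over the sequence, i.e. a deteriorating system.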
Distribution entropy analysis of epileptic EEG signals.
Peng Li; Chang Yan; Karmakar, Chandan; Changchun Liu
2015-08-01
It is an open-ended challenge to accurately detect epileptic seizures through electroencephalogram (EEG) signals. Recently published studies have made elaborate attempts to distinguish between normal and epileptic EEG signals by advanced nonlinear entropy methods, such as the approximate entropy, sample entropy, fuzzy entropy, and permutation entropy, etc. Most recently, a novel distribution entropy (DistEn) has been reported to have superior performance compared with the conventional entropy methods, especially for short data. We thus aimed, in the present study, to show the potential of DistEn in the analysis of epileptic EEG signals. The publicly accessible Bonn database, which consists of normal, interictal, and ictal EEG signals, was used in this study. Three different measurement protocols were set for better understanding the performance of DistEn: i) calculate the DistEn of a specific EEG signal using the full recording; ii) calculate the DistEn by averaging the results for all its possible non-overlapped 5 second segments; and iii) calculate it by averaging the DistEn values for all the possible non-overlapped segments of 1 second length, respectively. Results for all three protocols indicated a statistically significantly increased DistEn for the ictal class compared with both the normal and interictal classes. Besides, the results obtained under the third protocol, which only used very short segments (1 s) of EEG recordings, showed a significantly (p < 0.05) increased DistEn for the interictal class in comparison with the normal class, whereas both analyses using relatively long EEG signals failed to track this difference between them, which may be due to a nonstationarity effect on the entropy algorithm. The capability of discriminating between normal and interictal EEG signals is of great clinical relevance, since it may provide helpful tools for the detection of seizure onset.
Therefore, our study suggests that the DistEn analysis of EEG signals is very promising for clinical and even portable EEG monitoring. PMID:26737213
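As a rough illustration of the DistEn idea described in this abstract, the following Python sketch embeds the signal, histograms all pairwise Chebyshev distances, and takes the normalized Shannon entropy of that histogram. The embedding dimension and bin count defaults are assumptions for illustration, not the authors' settings.

```python
import math

def dist_en(signal, m=2, bins=32):
    """Sketch of distribution entropy (DistEn).
    m    : embedding dimension (assumed default)
    bins : number of histogram bins M (assumed default)
    """
    n = len(signal)
    # build embedding vectors of length m
    vectors = [signal[i:i + m] for i in range(n - m + 1)]
    # Chebyshev (maximum-coordinate) distance between every vector pair
    dists = []
    for i in range(len(vectors)):
        for j in range(i + 1, len(vectors)):
            dists.append(max(abs(a - b) for a, b in zip(vectors[i], vectors[j])))
    lo, hi = min(dists), max(dists)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for d in dists:
        counts[min(int((d - lo) / width), bins - 1)] += 1
    total = len(dists)
    # Shannon entropy of the empirical distance histogram, normalized by log2(bins)
    ent = -sum((c / total) * math.log2(c / total) for c in counts if c)
    return ent / math.log2(bins)
```

Because the statistic is built from the full distance histogram rather than a tolerance threshold, it stays well defined even for very short segments, which is the property the study exploits.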
Distributed energy store railguns experiment and analysis
Holland, L.D.
1984-01-01
Electromagnetic acceleration of projectiles holds the potential for achieving higher velocities than yet achieved by any other means. A railgun is the simplest form of electromagnetic macroparticle accelerator and can generate the highest sustained accelerating force. The practical length of conventional railguns is limited by the impedance of the rails because current must be carried along the entire length of the rails. A railgun and power supply system called the distributed energy store railgun was proposed as a solution to this limitation. The distributed energy store railgun used multiple current sources connected to the rails of a railgun at points distributed along the bore. These current sources (energy stores) are turned on in sequence as the projectile moves down the bore so that current is fed to the railgun from behind the armature. In this system the length of the rails that carry the full armature current is less than the total length of the railgun. If a sufficient number of energy stores is used, this removes the limitation on the length of a railgun. An additional feature of distributed energy store type railguns is that they can be designed to maintain a constant pressure on the projectile being accelerated. A distributed energy store railgun was constructed and successfully operated. In addition to this first demonstration of the distributed energy store railgun principle, a theoretical model of the system was also constructed.
Analysis of Temperature Distributions in Nighttime Inversions
NASA Astrophysics Data System (ADS)
Telyak, Oksana; Krasouski, Aliaksandr; Svetashev, Alexander; Turishev, Leonid; Barodka, Siarhei
2015-04-01
Adequate prediction of temperature inversions in the atmospheric boundary layer is one of the prerequisites for successful forecasting of meteorological parameters and severe weather events. Examples include surface air temperature and precipitation forecasting as well as prediction of fog, frosts and smog with hazardous levels of atmospheric pollution. At the same time, reliable forecasting of temperature inversions remains an unsolved problem. For prediction of nighttime inversions over a specific territory, it is important to study characteristic features of local circulation cell formation and to properly take local factors into account in order to develop custom modeling techniques for operational use. The present study aims to investigate and analyze vertical temperature distributions in tropospheric inversions (isotherms) over the territory of Belarus. We study several specific cases of formation, evolution and decay of deep nighttime temperature inversions in Belarus by means of mesoscale numerical simulations with the WRF model, considering basic mechanisms of isothermal and inverse temperature layer formation in the troposphere and the impact of these layers on local circulation cells. Our primary goal is to assess the feasibility of advance prediction of inversion formation with WRF. Modeling results reveal that all cases under consideration have characteristic features of radiative inversions (e.g., their formation times, development phases, inversion intensities, etc.). Regions of "blocking" layer formation are extensive and often spread over the entire territory of Belarus. Inversion decay starts from the lowermost (near-surface) layer (altitudes of 5 to 50 m). In all cases, one can observe formation of temperature gradients that substantially differ from the basic inversion gradient, i.e. the layer splits into smaller layers, each having a different temperature stratification (isothermal, adiabatic, etc.).
As opposed to various empirical techniques as well as theoretical approaches based on discriminant analysis, mesoscale modeling with WRF provides fairly successful forecasts of formation times and regions for all types of temperature inversions up to 3 days in advance. Furthermore, we conclude that without proper adjustment for the presence of thin isothermal layers (adiabatic and/or inversion layers), temperature data can affect results of statistical climate studies. In regions where a long-term, persistent inversion is present (e.g., Antarctica or regions with continental climate), these data can contribute an uncompensated systematic error of 2 to 10 °C. We argue that this very fact may lead to inconsistencies in long-term temperature data interpretations (e.g., conclusions ranging from "global warming" to "global cooling" based on temperature observations for the same region and time period). Due to the importance of this problem from the scientific as well as practical point of view, our plans for further studies include analysis of autumn and wintertime inversions and convective inversions. At the same time, it seems promising to develop an algorithm for automatic recognition of temperature inversions based on a combination of WRF modeling results, surface and satellite observations.
Survival Analysis of Patients with End Stage Renal Disease
NASA Astrophysics Data System (ADS)
Urrutia, J. D.; Gayo, W. S.; Bautista, L. A.; Baccay, E. B.
2015-06-01
This paper provides a survival analysis of End Stage Renal Disease (ESRD) using Kaplan-Meier estimates and the Weibull distribution. The data were obtained from the records of V. L. Makabali Memorial Hospital with respect to time t (patient's age), covariates such as developed secondary disease (Pulmonary Congestion and Cardiovascular Disease), gender, and the event of interest: the death of ESRD patients. Survival and hazard rates were estimated using NCSS for the Weibull distribution and SPSS for Kaplan-Meier estimates. Both lead to the same conclusion: the hazard rate increases and the survival rate decreases over time for ESRD patients diagnosed with Pulmonary Congestion, Cardiovascular Disease, or both. It also shows that female patients have a greater risk of death compared to males. The probability of risk was given by the equation R = 1 - e^(-H(t)), where e^(-H(t)) is the survival function and H(t) is the cumulative hazard function, obtained using Cox regression.
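The relation between risk and cumulative hazard quoted above, R = 1 - e^(-H(t)), can be sketched directly. Here H(t) is taken as a Weibull cumulative hazard (t/scale)^shape; the parameter values in the usage below are illustrative, not the values fitted in the paper.

```python
import math

def weibull_survival(t, shape, scale):
    """Survival S(t) = exp(-H(t)) with Weibull cumulative hazard
    H(t) = (t / scale) ** shape (illustrative parameterization)."""
    return math.exp(-((t / scale) ** shape))

def risk(t, shape, scale):
    """Probability of the event by time t: R = 1 - S(t) = 1 - e^(-H(t))."""
    return 1.0 - weibull_survival(t, shape, scale)
```

For example, with shape 1.5 and scale 10 (hypothetical values), risk(t, 1.5, 10.0) rises monotonically from 0 toward 1 as t grows, mirroring the increasing hazard reported for patients with secondary disease.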
Analysis of the chaotic maps generating different statistical distributions
NASA Astrophysics Data System (ADS)
Lawnik, M.
2015-09-01
An analysis of chaotic maps enabling the generation of numbers from given statistical distributions is presented. The analyzed chaotic maps have the form x_(k+1) = F^(-1)(U(F(x_k))), where F is the cumulative distribution function, U is the skew tent map and F^(-1) is the inverse function of F. The analysis is presented on the example of a chaotic map with the standard normal distribution, chosen in view of its computational efficiency and accuracy. On the grounds of the conducted analysis, it should be noted that the method does not always generate values from the given distribution.
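A minimal sketch of the map x_(k+1) = F^(-1)(U(F(x_k))) for the standard normal case can be written with Python's statistics.NormalDist. The tent-map break point p and the numerical clamp are assumptions for illustration, not values from the paper.

```python
from statistics import NormalDist

_norm = NormalDist()  # standard normal: F = cdf, F^(-1) = inv_cdf

def skew_tent(u, p=0.4):
    """Skew tent map U on [0, 1]; p is the break point (an assumed value)."""
    return u / p if u <= p else (1.0 - u) / (1.0 - p)

def chaotic_normal(x0, n, p=0.4):
    """Iterate x_{k+1} = F^(-1)(U(F(x_k))) to generate values that should be
    approximately standard-normal distributed."""
    eps = 1e-12  # clamp to keep the uniform iterate strictly inside (0, 1)
    xs, x = [], x0
    for _ in range(n):
        u = skew_tent(min(max(_norm.cdf(x), eps), 1.0 - eps), p)
        x = _norm.inv_cdf(min(max(u, eps), 1.0 - eps))
        xs.append(x)
    return xs
```

The clamp hints at the paper's caveat: near the tails, finite floating-point precision distorts F and F^(-1), so the generated values do not always follow the target distribution exactly.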
Stochastic Analysis of Wind Energy for Wind Pump Irrigation in Coastal Andhra Pradesh, India
NASA Astrophysics Data System (ADS)
Raju, M. M.; Kumar, A.; Bisht, D.; Rao, D. B.
2014-09-01
The rapid escalation in the prices of oil and gas, as well as increasing demand for energy, has attracted the attention of scientists and researchers to explore the possibility of generating and utilizing alternative and renewable wind energy along the long coastal belt of India, which has considerable wind energy resources. A detailed analysis of wind potential is a prerequisite to harvest wind energy resources efficiently. Keeping this in view, the present study was undertaken to analyze the wind energy potential to assess the feasibility of a wind-pump operated irrigation system in the coastal region of Andhra Pradesh, India, where high ground water table conditions are available. The wind speed data were fitted to probability distributions that describe the wind energy potential in the region. The normal and Weibull probability distributions were tested, and on the basis of the chi-square test, the Weibull distribution gave better results. Hence, it was concluded that the Weibull probability distribution may be used to stochastically describe the annual wind speed data of coastal Andhra Pradesh with better accuracy. The size of the complete irrigation system was determined with mass curve analysis to satisfy various daily irrigation demands at different risk levels.
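A common empirical-moment Weibull fit for wind speed data (shape from the coefficient of variation, scale from the mean) can be sketched as follows. This is a standard wind-energy approximation, not necessarily the estimation method used in the study.

```python
import math

def fit_weibull(speeds):
    """Empirical-moment Weibull fit often used for wind data:
    shape k ~ (sigma/mu)^-1.086, scale c = mu / Gamma(1 + 1/k)."""
    n = len(speeds)
    mu = sum(speeds) / n
    sigma = math.sqrt(sum((s - mu) ** 2 for s in speeds) / (n - 1))
    k = (sigma / mu) ** -1.086
    c = mu / math.gamma(1.0 + 1.0 / k)
    return k, c

def weibull_cdf(x, k, c):
    """Weibull CDF, usable for a chi-square goodness-of-fit comparison."""
    return 1.0 - math.exp(-((x / c) ** k))
```

Binning observed speeds and comparing observed counts against weibull_cdf-implied expected counts is the kind of chi-square comparison the abstract refers to.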
Harmonic analysis of electrical distribution systems
1996-03-01
This report presents data pertaining to research on harmonics of electric power distribution systems. Harmonic data is presented on RMS and average measurements for determination of harmonics in buildings; fluorescent ballast; variable frequency drive; georator geosine harmonic data; uninterruptible power supply; delta-wye transformer; westinghouse suresine; liebert datawave; and active injection mode filter data.
DISTRIBUTION SYSTEM RELIABILITY ANALYSIS USING A MICROCOMPUTER
Distribution system reliability for most utilities is maintained by the knowledge of a few key personnel. Generally, these water maintenance personnel use a good memory, repair records, a large wall map and a hydraulic model of the larger transmission mains to help identify probl...
Economic analysis of efficient distribution transformer trends
Downing, D.J.; McConnell, B.W.; Barnes, P.R.; Hadley, S.W.; Van Dyke, J.W.
1998-03-01
This report outlines an approach that will account for uncertainty in the development of evaluation factors used to identify transformer designs with the lowest total owning cost (TOC). The TOC methodology is described and the most highly variable parameters are discussed. The model is developed to account for uncertainties as well as statistical distributions for the important parameters. Sample calculations are presented. The TOC methodology is applied to data provided by two utilities in order to test its validity.
A Comparison of Distribution Free and Non-Distribution Free Factor Analysis Methods
ERIC Educational Resources Information Center
Ritter, Nicola L.
2012-01-01
Many researchers recognize that factor analysis can be conducted on both correlation matrices and variance-covariance matrices. Although most researchers extract factors from non-distribution free or parametric methods, researchers can also extract factors from distribution free or non-parametric methods. The nature of the data dictates the method…
Fukuta, Masahiro; Inami, Wataru; Ono, Atsushi; Kawata, Yoshimasa
2016-01-01
We present an intensity distribution analysis of cathodoluminescence (CL) excited with a focused electron beam in a luminescent thin film. The energy loss distribution is applied to the developed analysis method in order to determine the arrangement of the dipole locations along the path of the electron traveling in the film. Propagating light emitted from each dipole is analyzed with the finite-difference time-domain (FDTD) method. The CL distribution near the film surface is evaluated as a nanometric light source. It is found that a light source with a width of 30 nm is generated in the film by the focused electron beam. We also discuss the accuracy of the developed analysis method by comparison with experimental results. The analysis results are brought into good agreement with the experimental results by introducing the energy loss distribution. PMID:26550930
Grammatical Analysis as a Distributed Neurobiological Function
Bozic, Mirjana; Fonteneau, Elisabeth; Su, Li; Marslen-Wilson, William D
2015-01-01
Language processing engages large-scale functional networks in both hemispheres. Although it is widely accepted that left perisylvian regions have a key role in supporting complex grammatical computations, patient data suggest that some aspects of grammatical processing could be supported bilaterally. We investigated the distribution and the nature of grammatical computations across language processing networks by comparing two types of combinatorial grammatical sequences—inflectionally complex words and minimal phrases—and contrasting them with grammatically simple words. Novel multivariate analyses revealed that they engage a coalition of separable subsystems: inflected forms triggered left-lateralized activation, dissociable into dorsal processes supporting morphophonological parsing and ventral, lexically driven morphosyntactic processes. In contrast, simple phrases activated a consistently bilateral pattern of temporal regions, overlapping with inflectional activations in L middle temporal gyrus. These data confirm the role of the left-lateralized frontotemporal network in supporting complex grammatical computations. Critically, they also point to the capacity of bilateral temporal regions to support simple, linear grammatical computations. This is consistent with a dual neurobiological framework where phylogenetically older bihemispheric systems form part of the network that supports language function in the modern human, and where significant capacities for language comprehension remain intact even following severe left hemisphere damage. PMID:25421880
Statistical analysis of bidirectional reflectance distribution functions
NASA Astrophysics Data System (ADS)
Zubiaga, Carlos J.; Belcour, Laurent; Bosch, Carles; Muñoz, Adolfo; Barla, Pascal
2015-03-01
Bidirectional Reflectance Distribution Functions (BRDFs) are commonly employed in Computer Graphics and Computer Vision to model opaque materials. On the one hand, a BRDF is a complex 4D function of both light and view directions, which should ensure reciprocity and energy conservation laws. On the other hand, when computing radiance reaching the eye from a surface point, the view direction is held fixed. In this respect, we are only interested in a 2D BRDF slice that acts as a filter on the local environment lighting. The goal of our work is to understand the statistical properties of such a filter as a function of viewing elevation. To this end, we have conducted a study of measured BRDFs where we have computed statistical moments for each viewing angle. We show that some moments are correlated together across dimensions and orders, while some others are close to zero and may safely be discarded. Our study opens the way to novel applications such as moment-based manipulation of measured BRDFs, material estimation and image-based material editing. It also puts empirical and physically-based material models in a new perspective, by revealing their effect as view-dependent filters.
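The per-view statistical moments discussed above can be illustrated on a simplified 1-D stand-in for a BRDF slice, treating the slice as a filter whose weights define a distribution over directions. The moment definitions are the standard ones; the reduction to 1-D is an assumption for brevity.

```python
def directional_moments(weights, angles):
    """Mean, variance, and skewness of a 1-D filter over directions
    (a simplified stand-in for a 2-D BRDF slice at fixed view)."""
    total = sum(weights)
    probs = [w / total for w in weights]  # normalize the filter to a distribution
    mean = sum(p * a for p, a in zip(probs, angles))
    var = sum(p * (a - mean) ** 2 for p, a in zip(probs, angles))
    skew = sum(p * (a - mean) ** 3 for p, a in zip(probs, angles)) / var ** 1.5
    return mean, var, skew
```

In the spirit of the study, tracking how these moments drift with viewing elevation (and which higher moments stay near zero) is what enables moment-based manipulation of measured BRDFs.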
WATER DISTRIBUTION SYSTEM ANALYSIS: FIELD STUDIES, MODELING AND MANAGEMENT
The user's guide entitled “Water Distribution System Analysis: Field Studies, Modeling and Management” is a reference guide for water utilities and an extensive summarization of information designed to provide drinking water utility personnel (and related consultants and research...
Distributed bearing fault diagnosis based on vibration analysis
NASA Astrophysics Data System (ADS)
Dolenc, Boštjan; Boškoski, Pavle; Juričić, Đani
2016-01-01
Distributed bearing faults appear under various circumstances, for example due to electroerosion or the progression of localized faults. Bearings with distributed faults tend to generate more complex vibration patterns than those with localized faults. Despite the frequent occurrence of such faults, their diagnosis has attracted limited attention. This paper examines a method for the diagnosis of distributed bearing faults employing vibration analysis. The vibrational patterns generated are modeled by incorporating the geometrical imperfections of the bearing components. Comparing envelope spectra of vibration signals shows that one can distinguish between localized and distributed faults. Furthermore, a diagnostic procedure for the detection of distributed faults is proposed. This is evaluated on several bearings with naturally born distributed faults, which are compared with fault-free bearings and bearings with localized faults. It is shown experimentally that features extracted from vibrations in fault-free, localized and distributed fault conditions form clearly separable clusters, thus enabling diagnosis.
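Envelope spectra of the kind compared in this study can be sketched via the analytic signal (Hilbert transform): take the DFT, zero the negative frequencies, invert, and compute the spectrum of the resulting magnitude. A naive O(n^2) DFT is used here for self-containment; bearing geometry and characteristic fault frequencies are omitted.

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform, O(n^2), kept for self-containment."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def envelope_spectrum(signal):
    """Envelope spectrum via the analytic signal: zero negative frequencies,
    double positive ones, invert, then transform the magnitude (even n assumed)."""
    n = len(signal)
    X = dft(signal)
    analytic = [X[0]] + [2 * v for v in X[1:n // 2]] + [X[n // 2]] + [0] * (n - n // 2 - 1)
    env = [abs(v) for v in idft(analytic)]          # instantaneous envelope
    mean = sum(env) / n
    return [abs(v) / n for v in dft([e - mean for e in env])][:n // 2]
```

For a localized fault the envelope spectrum shows sharp peaks at the modulation frequency, whereas the distributed faults studied here produce broader, more complex envelope patterns, which is what the proposed features discriminate.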
Performance analysis of static locking in replicated distributed database systems
NASA Technical Reports Server (NTRS)
Kuang, Yinghong; Mukkamala, Ravi
1991-01-01
Data replication and transaction deadlocks can severely affect the performance of distributed database systems. Many current evaluation techniques ignore these aspects, because it is difficult to evaluate through analysis and time consuming to evaluate through simulation. A technique is used that combines simulation and analysis to closely illustrate the impact of deadlock and evaluate performance of replicated distributed database with both shared and exclusive locks.
Two-Component Extreme Value Distribution for Flood Frequency Analysis
NASA Astrophysics Data System (ADS)
Rossi, Fabio; Fiorentino, Mauro; Versace, Pasquale
1984-07-01
Theoretical considerations, supported by statistical analysis of 39 annual flood series (AFS) of Italian basins, suggest that the two-component extreme value (TCEV) distribution can be assumed as a parent flood distribution, i.e., one closely representative of the real flood experience. This distribution belongs to the family of distributions of the annual maximum of a compound Poisson process, which is a solid theoretical basis for AFS analysis. However, the two-parameter distribution of this family, obtained on the assumption of identically distributed floods, does not account for the high variability of both observed skewness and largest order statistics, so that a significant number of observed floods qualify as outliers under this distribution. The more general TCEV distribution assumes individual floods to arise from a mixture of two exponential components. Its four parameters can be estimated by the maximum likelihood method. A regionalized TCEV distribution, with parameters representative of a set of 39 Italian AFS's, was shown to closely reproduce the observed distribution of skewness and that of the largest order statistic.
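A sketch of the TCEV cumulative distribution function follows: the annual maximum of two compound-Poisson exponential components, F(x) = exp(-Λ1 e^(-x/θ1) - Λ2 e^(-x/θ2)), which reduces to a Gumbel distribution when one component vanishes. Parameter values in the test are illustrative, not the regionalized Italian estimates.

```python
import math

def tcev_cdf(x, lam1, theta1, lam2, theta2):
    """TCEV CDF for annual flood maxima: lam1, lam2 are mean annual counts of
    the ordinary and outlying exponential components; theta1, theta2 their scales."""
    return math.exp(-lam1 * math.exp(-x / theta1) - lam2 * math.exp(-x / theta2))
```

The second (rarer, larger-scale) component is what lets the TCEV absorb observations that would qualify as outliers under a single two-parameter extreme value distribution.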
Time-dependent reliability analysis of ceramic engine components
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.
1993-01-01
The computer program CARES/LIFE calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. This program is an extension of the CARES (Ceramics Analysis and Reliability Evaluation of Structures) computer program. CARES/LIFE accounts for the phenomenon of subcritical crack growth (SCG) by utilizing either the power or Paris law relations. The two-parameter Weibull cumulative distribution function is used to characterize the variation in component strength. The effects of multiaxial stresses are modeled using either the principle of independent action (PIA), the Weibull normal stress averaging method (NSA), or the Batdorf theory. Inert strength and fatigue parameters are estimated from rupture strength data of naturally flawed specimens loaded in static, dynamic, or cyclic fatigue. Two example problems demonstrating proof testing and fatigue parameter estimation are given.
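The two-parameter Weibull strength model and the principle of independent action (PIA) mentioned above can be sketched as follows. The stress values and Weibull parameters are illustrative, and this is a simplification: CARES/LIFE integrates such terms over the component volume or surface rather than evaluating them pointwise.

```python
import math

def weibull_failure_prob(stress, m, sigma0):
    """Two-parameter Weibull failure probability P_f = 1 - exp(-(sigma/sigma0)^m),
    with Weibull modulus m and characteristic strength sigma0."""
    return 1.0 - math.exp(-((stress / sigma0) ** m))

def pia_failure_prob(principal_stresses, m, sigma0):
    """Principle of independent action (PIA): each tensile principal stress acts
    independently, so the survival probabilities multiply."""
    survival = 1.0
    for s in principal_stresses:
        if s > 0:  # only tensile stresses drive fast fracture in this model
            survival *= math.exp(-((s / sigma0) ** m))
    return 1.0 - survival
```

With a single tensile stress the PIA result collapses to the uniaxial Weibull expression; adding further tensile principal stresses can only increase the failure probability.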
NASA Technical Reports Server (NTRS)
Gyekenyesi, John P.; Nemeth, Noel N.
1987-01-01
The SCARE (Structural Ceramics Analysis and Reliability Evaluation) computer program on statistical fast fracture reliability analysis with quadratic elements for volume distributed imperfections is enhanced to include the use of linear finite elements and the capability of designing against concurrent surface flaw induced ceramic component failure. The SCARE code is presently coupled as a postprocessor to the MSC/NASTRAN general purpose, finite element analysis program. The improved version now includes the Weibull and Batdorf statistical failure theories for both surface and volume flaw based reliability analysis. The program uses the two-parameter Weibull fracture strength cumulative failure probability distribution model with the principle of independent action for poly-axial stress states, and Batdorf's shear-sensitive as well as shear-insensitive statistical theories. The shear-sensitive surface crack configurations include the Griffith crack and Griffith notch geometries, using the total critical coplanar strain energy release rate criterion to predict mixed-mode fracture. Weibull material parameters based on both surface and volume flaw induced fracture can also be calculated from modulus of rupture bar tests, using the least squares method with known specimen geometry and grouped fracture data. The statistical fast fracture theories for surface flaw induced failure, along with selected input and output formats and options, are summarized. An example problem to demonstrate various features of the program is included.
Sun, Huarui; Bajo, Miguel Montes; Uren, Michael J.; Kuball, Martin
2015-01-26
Gate leakage degradation of AlGaN/GaN high electron mobility transistors under OFF-state stress is investigated using a combination of electrical, optical, and surface morphology characterizations. The generation of leakage "hot spots" at the edge of the gate is found to be strongly temperature accelerated. The time for the formation of each failure site follows a Weibull distribution with a shape parameter in the range of 0.7-0.9 from room temperature up to 120 °C. The average leakage per failure site is only weakly temperature dependent. The stress-induced structural degradation at the leakage sites exhibits a temperature dependence in the surface morphology, which is consistent with a surface defect generation process involving temperature-associated changes in the breakdown sites.
Performance analysis of static locking in distributed database systems
Shyu, S.C.; Li, V.O.K. (Dept. of Electrical Engineering)
1990-06-01
Numerous performance models have been proposed for locking algorithms in centralized database systems, but few have been developed for distributed ones. Existing results on distributed locking usually ignore the deadlock problem so as to simplify the analysis. In this paper, a new performance model for static locking in distributed database systems is developed. A queuing model is used to approximate static locking in distributed database systems without deadlocks. Then a random graph model is proposed to find the deadlock probability of each transaction. The two models are integrated, so that given the transaction arrival rate, the response time and the effective throughput can be calculated.
Modeling and analysis of solar distributed generation
NASA Astrophysics Data System (ADS)
Ortiz Rivera, Eduardo Ivan
Recent changes in the global economy are creating a big impact on our daily life. The price of oil is increasing and the number of reserves is smaller every day. Also, dramatic demographic changes are impacting the viability of the electric infrastructure and ultimately the economic future of the industry. These are some of the reasons that many countries are looking to alternative energy sources to produce electric energy. The most common form of green energy in our daily life is solar energy. Converting solar energy into electrical energy requires solar panels, dc-dc converters, power control, sensors, and inverters. In this work, a photovoltaic module (PVM) model using the electrical characteristics provided by the manufacturer data sheet is presented for power system applications. Experimental results from testing are shown, verifying the proposed PVM model. Also in this work, three maximum power point tracker (MPPT) algorithms are presented to obtain the maximum power from a PVM. The first MPPT algorithm is a method based on Rolle's and Lagrange's theorems and can provide at least an approximate answer to a family of transcendental functions that cannot be solved using differential calculus. The second MPPT algorithm is based on the approximation of the proposed PVM model using fractional polynomials, where the shape, boundary conditions and performance of the proposed PVM model are satisfied. The third MPPT algorithm is based on the determination of the optimal duty cycle for a dc-dc converter and prior knowledge of the load or load matching conditions. Also, four algorithms to calculate the effective irradiance level and temperature over a photovoltaic module are presented in this work.
The main reasons to develop these algorithms are for monitoring climate conditions, the elimination of temperature and solar irradiance sensors, reductions in cost for a photovoltaic inverter system, and development of new algorithms to be integrated with maximum power point tracking algorithms. Finally, several PV power applications are presented, such as circuit analysis for a load connected to two different PV arrays, speed control for a dc motor connected to a PVM, and a novel single-phase photovoltaic inverter system using the Z-source converter.
Near field light intensity distribution analysis in bimodal polymer waveguide
NASA Astrophysics Data System (ADS)
Herzog, T.; Gut, K.
2015-12-01
The paper presents an analysis of the light intensity distribution and sensitivity in a differential interferometer based on a bimodal polymer waveguide. A key part is the analysis of the optimal waveguide layer thickness in the SiO2/SU-8/H2O structure for maximum bulk refractive index sensitivity. The paper presents a new approach to detecting the phase difference between modes by registering only part of the energy propagating in the waveguide. Additionally, an analysis of the changes in the light distribution when the energy in the modes is not equal is performed.
Strength statistics and the distribution of earthquake interevent times
NASA Astrophysics Data System (ADS)
Hristopulos, Dionissios T.; Mouslopoulou, Vasiliki
2013-02-01
The Weibull distribution is often used to model the earthquake interevent times distribution (ITD). We propose a link between the earthquake ITD on single faults with the Earth’s crustal shear strength distribution by means of a phenomenological stick-slip model. For single faults or fault systems with homogeneous strength statistics and power-law stress accumulation we obtain the Weibull ITD. We prove that the moduli of the interevent times and crustal shear strength are linearly related, while the time scale is an algebraic function of the scale of crustal shear strength. We also show that logarithmic stress accumulation leads to the log-Weibull ITD. We investigate deviations of the ITD tails from the Weibull model due to sampling bias, magnitude cutoff thresholds, and non-homogeneous strength parameters. Assuming the Gutenberg-Richter law and independence of the Weibull modulus on the magnitude threshold, we deduce that the interevent time scale drops exponentially with the magnitude threshold. We demonstrate that a microearthquake sequence from the island of Crete and a seismic sequence from Southern California conform reasonably well to the Weibull model.
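The Weibull interevent-time distribution (ITD) used in this model can be sketched with its inverse CDF and mean. The modulus (shape) and time scale below are illustrative, not values fitted to the Cretan or Californian sequences.

```python
import math

def weibull_itd_quantile(u, modulus, tau):
    """Inverse CDF of the Weibull ITD: t = tau * (-ln(1 - u))**(1/modulus)
    for u in (0, 1); useful for inverse-transform sampling of interevent times."""
    return tau * (-math.log(1.0 - u)) ** (1.0 / modulus)

def weibull_itd_mean(modulus, tau):
    """Mean interevent time: tau * Gamma(1 + 1/modulus)."""
    return tau * math.gamma(1.0 + 1.0 / modulus)
```

In the stick-slip picture of the abstract, the modulus of this ITD tracks the modulus of the crustal shear strength distribution, while tau is an algebraic function of the strength scale.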
Distributed transit compartments for arbitrary lifespan distributions in aging populations.
Koch, Gilbert; Schropp, Johannes
2015-09-01
Transit compartment models (TCM) are often used to describe aging populations where every individual has its own lifespan. However, in the TCM approach these lifespans are gamma-distributed, which is a serious limitation because the Weibull or more complex distributions are often more realistic. Therefore, we extend the TCM concept to approximately describe any lifespan distribution and call this generalized concept distributed transit compartment models (DTCMs). The validity of DTCMs is obtained by convergence investigations. From the mechanistic perspective the transit rates are directly controlled by the lifespan distribution. Further, DTCMs could be used to approximate the convolution of a signal with a probability density function. As an example, a stimulatory effect of a drug in an aging population with a Weibull-distributed lifespan is presented, where distribution and model parameters are estimated based on simulated data. PMID:26100181
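The gamma-distributed lifespan of the classical TCM, which the DTCM generalizes, can be sketched by summing exponential stage durations. The compartment count and rate are illustrative.

```python
import random

def tcm_lifespan(n_compartments, k, rng):
    """In a classical TCM, a lifespan is the total time spent traversing
    n identical transit compartments, each an Exp(k) stage, hence
    Gamma(n, 1/k)-distributed with mean n/k. DTCMs instead tune the
    per-compartment rates to approximate other lifespan distributions
    (e.g. Weibull), per the abstract."""
    return sum(rng.expovariate(k) for _ in range(n_compartments))
```

The restriction is visible here: with identical rates the only achievable lifespan laws are gamma distributions, which is exactly the limitation the DTCM construction removes.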
An analysis of the geographical distribution of Plasmodium ovale
Lysenko, A. Ja.; Beljaev, A. E.
1969-01-01
For a long time Plasmodium ovale was considered a very rare causal agent of malaria, but recently it has been shown to be a fairly common parasite in Africa. The authors analyse all the findings of P. ovale outside tropical Africa and describe its distribution. This species is distributed in 2 areas, the first confined to tropical Africa and the second to islands in the Western Pacific. The authors make a medico-geographical analysis of the distribution of P. ovale, and attempt to explain particular features of it. PMID:5306622
ANALYSIS OF DISTRIBUTION FEEDER LOSSES DUE TO ADDITION OF DISTRIBUTED PHOTOVOLTAIC GENERATORS
Tuffner, Francis K.; Singh, Ruchi
2011-08-09
Distributed generators (DG) are small-scale power sources owned by customers or utilities and scattered throughout the power system distribution network. Distributed generation can be both renewable and non-renewable. Distributed generation is added primarily to increase feeder capacity and to provide peak load reduction. However, this addition comes with several impacts on the distribution feeder. Several studies have shown that the addition of DG leads to a reduction in feeder losses. However, most of these studies have considered lumped and distributed load models to analyze the effects on system losses, ignoring the dynamic variation of load due to seasonal changes. It is very important for utilities to minimize losses under all scenarios to decrease revenue losses, promote efficient asset utilization, and thereby increase feeder capacity. This paper investigates an IEEE 13-node feeder populated with photovoltaic generators serving detailed residential houses with water heaters, heating, ventilation, and air conditioning (HVAC) units, lights, and other plug and convenience loads. An analysis of losses for different power system components, such as transformers, underground and overhead lines, and triplex lines, is performed. The analysis covers different seasons and different solar penetration levels (15%, 30%).
First Experiences with LHC Grid Computing and Distributed Analysis
Fisk, Ian
2010-12-01
This presentation described the experiences of the LHC experiments using grid computing, with a focus on distributed analysis. After many years of development, preparation, exercises, and validation, the LHC (Large Hadron Collider) experiments are in operation. The computing infrastructure was heavily utilized in the first 6 months of data collection. The general experience of exploiting the grid infrastructure for organized processing and preparation is described, as well as the successes in employing the infrastructure for distributed analysis. Finally, the expected evolution and future plans are outlined.
Entropy Methods For Univariate Distributions in Decision Analysis
NASA Astrophysics Data System (ADS)
Abbas, Ali E.
2003-03-01
One of the most important steps in decision analysis practice is the elicitation of the decision-maker's belief about an uncertainty of interest in the form of a representative probability distribution. However, the probability elicitation process is a task that involves many cognitive and motivational biases. Alternatively, the decision-maker may provide other information about the distribution of interest, such as its moments, and the maximum entropy method can be used to obtain a full distribution subject to the given moment constraints. In practice, however, decision makers cannot readily provide moments for the distribution, and are much more comfortable providing information about the fractiles of the distribution of interest or bounds on its cumulative probabilities. In this paper we present a graphical method to determine the maximum entropy distribution between upper and lower probability bounds and provide an interpretation for the shape of the maximum entropy distribution subject to fractile constraints (FMED). We also discuss the limitations of the FMED, namely that it is discontinuous and flat over each fractile interval. We present a heuristic approximation to a distribution when, in addition to its fractiles, we also know it is continuous, and work through full examples to illustrate the approach.
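The fractile-constrained maximum entropy density (FMED) described above is piecewise uniform, flat over each fractile interval; a minimal sketch with assumed (not elicited) fractiles:

```python
# Sketch of the FMED: given fractile points and their cumulative
# probabilities, maximum entropy spreads each interval's probability
# mass uniformly. The numbers below are illustrative assumptions.
import numpy as np

fractiles = np.array([0.0, 2.0, 5.0, 10.0])  # interval endpoints
cum_probs = np.array([0.0, 0.25, 0.5, 1.0])  # elicited cumulative probabilities

widths = np.diff(fractiles)
masses = np.diff(cum_probs)
density = masses / widths  # flat (constant) density on each interval

# By construction the piecewise-uniform density integrates to one
print(density, float((density * widths).sum()))
```

The discontinuities at the fractile points are exactly the FMED limitation the abstract mentions.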
Effect of Porosity on Strength Distribution of Microcrystalline Cellulose.
Keleş, Özgür; Barcenas, Nicholas P; Sprys, Daniel H; Bowman, Keith J
2015-12-01
Fracture strength of pharmaceutical compacts varies even for nominally identical samples, which directly affects compaction, comminution, and tablet dosage forms. However, the relationships between porosity and mechanical behavior of compacts are not clear. Here, the effects of porosity on fracture strength and fracture statistics of microcrystalline cellulose compacts were investigated through diametral compression tests. Weibull modulus, a key parameter in Weibull statistics, was observed to decrease with increasing porosity from 17 to 56 vol.%, based on eight sets of compacts at different porosity levels, each set containing ≈50 samples, a total of 407 tests. Normal distribution fits better to fracture data for porosity less than 20 vol.%, whereas Weibull distribution is a better fit in the limit of highest porosity. Weibull moduli from 840 unique finite element simulations of isotropic porous materials were compared to experimental Weibull moduli from this research and results on various pharmaceutical materials. Deviations from Weibull statistics are observed. The effect of porosity on fracture strength can be described by a recently proposed micromechanics-based formula. PMID:26022545
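A Weibull modulus like the one studied above is commonly estimated by the linearized ln-ln regression on sorted strengths; a sketch on synthetic data (the assumed modulus and scale are not the paper's values):

```python
# Sketch: estimating a Weibull modulus from fracture strengths via the
# standard linearized ln(-ln(1-F)) vs ln(strength) regression.
# Synthetic strengths; m_true and s0 are assumed for illustration.
import numpy as np

rng = np.random.default_rng(1)
m_true, s0 = 8.0, 120.0                     # assumed modulus and scale (MPa)
strengths = s0 * rng.weibull(m_true, size=400)

x = np.sort(strengths)
n = len(x)
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)  # median-rank failure probability
slope, _ = np.polyfit(np.log(x), np.log(-np.log(1.0 - F)), 1)
print(f"estimated Weibull modulus {slope:.1f}")
```

The fitted slope recovers the modulus; repeating this per porosity level is one way to observe the modulus-vs-porosity trend the abstract reports.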
Sewalem, A; Kistemaker, G J; Van Doormaal, B J
2005-04-01
The aim of this study was to use a Weibull proportional hazards model to explore the impact of type traits on the functional survival of Canadian Jersey and Ayrshire cows. The data set consisted of 49,791 registered Jersey cows from 900 herds calving from 1985 to 2003. The corresponding figures for Ayrshire were 77,109 cows and 921 herds. Functional survival was defined as the number of days from first calving to culling, death, or censoring. Type information consisted of phenotypic type scores for 8 composite traits and 19 linear descriptive traits. The statistical model included the effects of stage of lactation; season of production; annual change in herd size; type of milk recording supervision; age at first calving; effects of milk, fat, and protein yields calculated as within herd-year-parity deviations; herd-year-season of calving; each type trait; and the animal's sire. Analysis was done one trait at a time for each of 27 type traits in each breed. The relative culling risk was calculated for animals in each class after accounting for the previously mentioned effects. For the Jersey breed, the composite type traits with the greatest contribution to the likelihood function were final score followed by mammary system, while in the Ayrshire breed feet and legs was the second most important trait after final score. Cows classified as Poor for final score in both breeds were >5 times more likely to be culled compared with the cows classified as Good Plus. In both breeds, cows classified as Poor for feet and legs were 5 times more likely to be culled than were cows classified as Excellent, and cows classified as Excellent for mammary system were >9 times more likely to survive than were cows classified as Poor. PMID:15778325
GIS-based poverty and population distribution analysis in China
NASA Astrophysics Data System (ADS)
Cui, Jing; Wang, Yingjie; Yan, Hong
2009-07-01
Geographically, poverty status is related not only to socio-economic factors but is also strongly affected by the geographical environment. In this paper, a GIS-based poverty and population distribution analysis method is introduced for revealing their regional differences. More than 100,000 poor villages and 592 national key poor counties are chosen for the analysis. The results show that poverty distribution tends to concentrate in most of west China and the mountainous rural areas of mid China. Furthermore, the fifth census data are overlaid on those poor areas in order to capture the internal diversity of their socio-economic characteristics. By overlaying poverty-related socio-economic parameters, such as sex ratio, illiteracy, education level, percentage of ethnic minorities, and family composition, the findings show that poverty distribution is strongly correlated with high illiteracy rates, a high percentage of ethnic minorities, and larger families.
Energy loss analysis of an integrated space power distribution system
NASA Technical Reports Server (NTRS)
Kankam, M. D.; Ribeiro, P. F.
1992-01-01
The results of studies related to conceptual topologies of an integrated utility-like space power system are described. The system topologies are comparatively analyzed by considering their transmission energy losses as functions of mainly distribution voltage level and load composition. The analysis is expedited by use of the Distribution System Analysis and Simulation (DSAS) software. This recently developed computer program by the Electric Power Research Institute (EPRI) uses improved load models to solve the power flow within the system. However, present shortcomings of the software with regard to space applications, and incompletely defined characteristics of a space power system, make the results applicable only to the fundamental trends of energy losses of the topologies studied. The accounting included for the effects of the various parameters on system performance can form part of a planning tool for a space power distribution system.
Data synthesis and display programs for wave distribution function analysis
NASA Technical Reports Server (NTRS)
Storey, L. R. O.; Yeh, K. J.
1992-01-01
At the National Space Science Data Center (NSSDC) software was written to synthesize and display artificial data for use in developing the methodology of wave distribution analysis. The software comprises two separate interactive programs, one for data synthesis and the other for data display.
Distributed response analysis of conductive behavior in single molecules
in het Panhuis, Marc; Munn, Robert W.; Popelier, Paul L. A.; Coleman, Jonathan N.; Foley, Brian; Blau, Werner J.
2002-01-01
The ab initio computational approach of distributed response analysis is used to quantify how electrons move across conjugated molecules in an electric field, in analogy to conduction. The method promises to be valuable for characterizing the conductive behavior of single molecules in electronic devices. PMID:11983925
WATER DISTRIBUTION SYSTEM ANALYSIS: FIELD STUDIES, MODELING AND MANAGEMENT
The user's guide entitled "Water Distribution System Analysis: Field Studies, Modeling and Management" is a reference guide for water utilities and an extensive summarization of information designed to provide drinking water utility personnel (and related consultants and research...
Analyzing Distributed Functions in an Integrated Hazard Analysis
NASA Technical Reports Server (NTRS)
Morris, A. Terry; Massie, Michael J.
2010-01-01
Large scale integration of today's aerospace systems is achievable through the use of distributed systems. Validating the safety of distributed systems is significantly more difficult as compared to centralized systems because of the complexity of the interactions between simultaneously active components. Integrated hazard analysis (IHA), a process used to identify unacceptable risks and to provide a means of controlling them, can be applied to either centralized or distributed systems. IHA, though, must be tailored to fit the particular system being analyzed. Distributed systems, for instance, must be analyzed for hazards in terms of the functions that rely on them. This paper will describe systems-oriented IHA techniques (as opposed to traditional failure-event or reliability techniques) that should be employed for distributed systems in aerospace environments. Special considerations will be addressed when dealing with specific distributed systems such as active thermal control, electrical power, command and data handling, and software systems (including the interaction with fault management systems). Because of the significance of second-order effects in large scale distributed systems, the paper will also describe how to analyze secondary functions to secondary functions through the use of channelization.
Bayesian analysis of a disability model for lung cancer survival.
Armero, C; Cabras, S; Castellanos, M E; Perra, S; Quirós, A; Oruezábal, M J; Sánchez-Rubio, J
2016-02-01
Bayesian reasoning, survival analysis and multi-state models are used to assess survival times for Stage IV non-small-cell lung cancer patients and the evolution of the disease over time. Bayesian estimation is done using minimum informative priors for the Weibull regression survival model, leading to an automatic inferential procedure. Markov chain Monte Carlo methods have been used for approximating posterior distributions and the Bayesian information criterion has been considered for covariate selection. In particular, the posterior distribution of the transition probabilities, resulting from the multi-state model, constitutes a very interesting tool which could be useful to help oncologists and patients make efficient and effective decisions. PMID:22767866
Adaptive walks and distribution of beneficial fitness effects.
Seetharaman, Sarada; Jain, Kavita
2014-04-01
We study the adaptation dynamics of a maladapted asexual population on rugged fitness landscapes with many local fitness peaks. The distribution of beneficial fitness effects is assumed to belong to one of the three extreme value domains, viz. Weibull, Gumbel, and Fréchet. We work in the strong selection-weak mutation regime in which beneficial mutations fix sequentially, and the population performs an uphill walk on the fitness landscape until a local fitness peak is reached. A striking prediction of our analysis is that the fitness difference between successive steps follows a pattern of diminishing returns in the Weibull domain and accelerating returns in the Fréchet domain, as the initial fitness of the population is increased. These trends are found to be robust with respect to fitness correlations. We believe that this result can be exploited in experiments to determine the extreme value domain of the distribution of beneficial fitness effects. Our work here differs significantly from the previous ones that assume the selection coefficient to be small. On taking large effect mutations into account, we find that the length of the walk shows different qualitative trends from those derived using small selection coefficient approximation. PMID:24274696
Multi-State Load Models for Distribution System Analysis
Schneider, Kevin P.; Fuller, Jason C.; Chassin, David P.
2011-11-01
Recent work in the field of distribution system analysis has shown that the traditional method of peak load analysis is not adequate for the analysis of emerging distribution system technologies. Voltage optimization, demand response, electric vehicle charging, and energy storage are examples of technologies with characteristics having daily, seasonal, and/or annual variations. In addition to the seasonal variations, emerging technologies such as demand response and plug-in electric vehicle charging have the potential to send control signals to the end-use loads which will affect how they consume energy. In order to support time-series analysis over different time frames and to incorporate potential control signal inputs, it is necessary to develop detailed end-use load models which accurately represent the load under various conditions, not just during the peak load period. This paper builds on previous work on detailed end-use load modeling to outline a general method of multi-state load models for distribution system analysis.
Analysis and interpretation of DNA distributions measured by flow cytometry
Dean, P.N.; Gray, J.W.; Dolbeare, F.A.
1982-01-01
A principal use of flow cytometers is for the measurement of fluorescence distributions of cells stained with DNA specific dyes. A large amount of effort has been and is being expended currently in the analysis of these distributions for the fractions of cells in the G1, S, and G2 + M phases of the cell cycle. Several methods of analysis have been proposed and are being used; new methods continue to be introduced. Many, if not most, of these methods differ only in the mathematical function used to represent the phases of the cell cycle and represent attempts to fit exactly distributions with known phase fractions or unusual shapes. In this paper we show that these refinements probably are not necessary because of cell staining and sampling variability. This hypothesis was tested by measuring fluorescence distributions for Chinese hamster ovary and KHT mouse sarcoma cells stained with Hoechst-33258, chromomycin A3, propidium iodide, and acriflavine. Our results show that: a) single measurements can result in phase fraction estimates that are in error by as much as 40% for the G2 + M phase and 15 to 20% for the G1 and S phases; b) different dyes can yield phase fraction estimates that differ by as much as 40% due to differences in DNA specificity; c) the shapes of fluorescence distributions and their interpretation are very dependent on the dye being used and on its binding mechanism. 7 figures, 2 tables.
Generalized Exponential Distribution in Flood Frequency Analysis for Polish Rivers
Markiewicz, Iwona; Strupczewski, Witold G.; Bogdanowicz, Ewa; Kochanek, Krzysztof
2015-01-01
Many distributions have been used in flood frequency analysis (FFA) for fitting the flood extremes data. However, as shown in the paper, the scatter of Polish data plotted on the moment ratio diagram shows that there is still room for a new model. In the paper, we study the usefulness of the generalized exponential (GE) distribution in flood frequency analysis for Polish Rivers. We investigate the fit of GE distribution to the Polish data of the maximum flows in comparison with the inverse Gaussian (IG) distribution, which in our previous studies showed the best fitting among several models commonly used in FFA. Since the use of a discrimination procedure without the knowledge of its performance for the considered probability density functions may lead to erroneous conclusions, we compare the probability of correct selection for the GE and IG distributions along with the analysis of the asymptotic model error in respect to the upper quantile values. As an application, both GE and IG distributions are alternatively assumed for describing the annual peak flows for several gauging stations of Polish Rivers. To find the best fitting model, four discrimination procedures are used. In turn, they are based on the maximized logarithm of the likelihood function (K procedure), on the density function of the scale transformation maximal invariant (QK procedure), on the Kolmogorov-Smirnov statistics (KS procedure) and the fourth procedure based on the differences between the ML estimate of 1% quantile and its value assessed by the method of moments and linear moments, in sequence (R procedure). Due to the uncertainty of choosing the best model, the method of aggregation is applied to estimate the maximum flow quantiles. PMID:26657239
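The likelihood-based discrimination (the K procedure above) amounts to fitting each candidate and keeping the larger maximized log-likelihood; a sketch on synthetic flows, where scipy's inverse Gaussian and gamma distributions stand in for the paper's IG and GE candidates (the GE distribution itself is not in scipy):

```python
# Sketch of likelihood-based model discrimination between two candidate
# flood models. Synthetic peak flows; gamma is a stand-in for the GE
# distribution, which scipy does not provide.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
flows = stats.invgauss.rvs(0.5, scale=800.0, size=80, random_state=rng)

candidates = {"invgauss": stats.invgauss, "gamma": stats.gamma}
loglik = {}
for name, dist in candidates.items():
    params = dist.fit(flows, floc=0)          # ML fit, location fixed at 0
    loglik[name] = float(np.sum(dist.logpdf(flows, *params)))

best = max(loglik, key=loglik.get)
print(best, loglik)
```

The other procedures in the abstract (QK, KS, R) replace the log-likelihood with a different selection statistic but follow the same fit-and-compare pattern.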
Integrating software architectures for distributed simulations and simulation analysis communities.
Goldsby, Michael E.; Fellig, Daniel; Linebarger, John Michael; Moore, Patrick Curtis; Sa, Timothy J.; Hawley, Marilyn F.
2005-10-01
The one-year Software Architecture LDRD (No.79819) was a cross-site effort between Sandia California and Sandia New Mexico. The purpose of this research was to further develop and demonstrate integrating software architecture frameworks for distributed simulation and distributed collaboration in the homeland security domain. The integrated frameworks were initially developed through the Weapons of Mass Destruction Decision Analysis Center (WMD-DAC), sited at SNL/CA, and the National Infrastructure Simulation & Analysis Center (NISAC), sited at SNL/NM. The primary deliverable was a demonstration of both a federation of distributed simulations and a federation of distributed collaborative simulation analysis communities in the context of the same integrated scenario, which was the release of smallpox in San Diego, California. To our knowledge this was the first time such a combination of federations under a single scenario has ever been demonstrated. A secondary deliverable was the creation of the standalone GroupMeld™ collaboration client, which uses the GroupMeld™ synchronous collaboration framework. In addition, a small pilot experiment that used both integrating frameworks allowed a greater range of crisis management options to be performed and evaluated than would have been possible without the use of the frameworks.
Volumetric relief map for intracranial cerebrospinal fluid distribution analysis.
Lebret, Alain; Kenmochi, Yukiko; Hodel, Jérôme; Rahmouni, Alain; Decq, Philippe; Petit, Éric
2015-09-01
Cerebrospinal fluid imaging plays a significant role in the clinical diagnosis of brain disorders, such as hydrocephalus and Alzheimer's disease. While three-dimensional images of cerebrospinal fluid are very detailed, the complex structures they contain can be time-consuming and laborious to interpret. This paper presents a simple technique that represents the intracranial cerebrospinal fluid distribution as a two-dimensional image in such a way that the total fluid volume is preserved. We call this a volumetric relief map, and show its effectiveness in a characterization and analysis of fluid distributions and networks in hydrocephalus patients and healthy adults. PMID:26125975
A Study of Thread Load Distribution Using Optical Deformation Analysis
NASA Astrophysics Data System (ADS)
Bennett, J. M.; Graham, A. J.
2012-07-01
It is important to measure the load distribution on bolt threads in a way which represents in-service conditions. Previous methods to experimentally find thread loads have had limitations, since polymer replicas were used and the joint was dead-weight tested. In this investigation an aluminium nut is torqued on a stainless steel bolt. By applying optical deformation analysis, using a GOM ARAMIS stereo camera system, this investigation finds the nut load distribution. This is a novel method for measuring load and allows for a more representative experimental setup. Results show that the first thread carries almost twice the average load of the consecutive pitches.
Influence Of Lateral Load Distributions On Pushover Analysis Effectiveness
Colajanni, P.; Potenzone, B.
2008-07-08
The effectiveness of two simple load distributions for pushover analysis recently proposed by the authors is investigated through a comparative study, involving static and dynamic analyses of seismic response of eccentrically braced frames. It is shown that in the upper floors only multimodal pushover procedures provide results close to the dynamic profile, while the proposed load patterns are always conservative in the lower floors. They over-estimate the seismic response less than the uniform distribution, representing a reliable alternative to the uniform or more sophisticated adaptive procedures proposed by seismic codes.
Vibrational Energy Distribution Analysis (VEDA): Scopes and limitations
NASA Astrophysics Data System (ADS)
Jamróz, Michał H.
2013-10-01
The principle of operation of the VEDA program, written by the author for Potential Energy Distribution (PED) analysis of theoretical vibrational spectra, is described. Nowadays, PED analysis is an indispensable tool in any serious analysis of vibrational spectra. To perform a PED analysis it is necessary to define 3N-6 linearly independent local mode coordinates; already for 20-atom molecules this is a difficult task. The VEDA program reads the input data automatically from Gaussian output files and then automatically proposes an introductory set of local mode coordinates. Next, more adequate coordinates are proposed by the program and optimized to obtain maximal elements in each column (internal coordinate) of the PED matrix (the EPM parameter). The possibility of automatic optimization of PED contributions is a unique feature of the VEDA program, absent from other programs performing PED analysis.
HammerCloud: A Stress Testing System for Distributed Analysis
NASA Astrophysics Data System (ADS)
van der Ster, Daniel C.; Elmsheuser, Johannes; Úbeda García, Mario; Paladin, Massimo
2011-12-01
Distributed analysis of LHC data is an I/O-intensive activity which places large demands on the internal network, storage, and local disks at remote computing facilities. Commissioning and maintaining a site to provide an efficient distributed analysis service is therefore a challenge which can be aided by tools to help evaluate a variety of infrastructure designs and configurations. HammerCloud is one such tool; it is a stress testing service which is used by central operations teams, regional coordinators, and local site admins to (a) submit an arbitrary number of analysis jobs to a number of sites, (b) maintain at steady state a predefined number of jobs running at the sites under test, (c) produce web-based reports summarizing the efficiency and performance of the sites under test, and (d) present a web interface for historical test results to both evaluate progress and compare sites. HammerCloud was built around the distributed analysis framework Ganga, exploiting its API for grid job management. HammerCloud has been employed by the ATLAS experiment for continuous testing of many sites worldwide, and also during large-scale computing challenges such as STEP'09 and UAT'09, where the scale of the tests exceeded 10,000 concurrently running jobs and 1,000,000 total jobs over multi-day periods. In addition, HammerCloud is being adopted by the CMS experiment; the plugin structure of HammerCloud allows the execution of CMS jobs using their official tool (CRAB).
An integrated architecture for distributed real-time systems analysis
Tsai, J.J.P.; Yang, S.J.; Liu, A.
1996-12-31
Analyzing distributed real-time software systems is considered difficult because of imposed timing constraints and non-deterministic behavior. These two factors inhibit the reproduction of an error without interfering with program execution, where error reproduction is the cornerstone of conventional software analysis techniques. Statistical evidence indicates that testing and debugging represent approximately 50% of the cost of new system development. In the case of distributed real-time software, the percentage is as high as 70% because the errors are "immune" to conventional debugging aids. This paper presents an integrated architecture which supports monitoring, visualization, and debugging methods for analyzing the dynamic behavior of distributed real-time systems. Based on our approach, an execution environment will be developed to simulate the program execution and data collection activities. Different levels of logical views are then reconstructed from the monitored information. A dynamic visualization and timing analysis method will then be used to study and analyze the timing behavior of distributed real-time software systems.
Microtopographic analysis of plant distribution in polar desert
NASA Astrophysics Data System (ADS)
Okuda, Masaki; Imura, Satoshi; Tanemura, Masaharu
We suggest methods for the analysis of the spatial distribution of plant species in a research area divided into a quadrat lattice. In particular, information about the topography and the spaces without plants is used for the analysis. At sites with a homogeneous substratum, we classify the topography by whether a target grid is concave or convex with respect to a standard surface of altitude. At other sites, we classify the topography according to whether the grid is located at the edge of rock and/or at a water pool. Information about the topography and the plant existence is used for constructing 2 × 2 contingency tables. In order to determine the strength of dependence between the topography and plant existence, the Akaike information criterion (AIC) is used. The methods are applied to data of the microtopography and distribution of mosses in continental Antarctica.
Electrical Power Distribution and Control Modeling and Analysis
NASA Technical Reports Server (NTRS)
Fu, Johnny S.; Liffring, Mark; Mehdi, Ishaque S.
2001-01-01
This slide presentation reviews the use of Electrical Power Distribution and Control (EPD&C) Modeling and how modeling can support analysis. The presentation discusses using the EASY5 model to simulate and analyze the Space Shuttle Electric Auxiliary Power Unit. Diagrams of the model schematics are included, as well as graphs of the battery cell impedance, hydraulic load dynamics, and EPD&C response to hydraulic load variations.
Reliability Estimation and Failure Analysis of Multilayer Ceramic Chip Capacitors
NASA Astrophysics Data System (ADS)
Yang, Seok Jun; Kim, Jin Woo; Ryu, Dong Su; Kim, Myung Soo; Jang, Joong Soon
This paper presents the failure analysis and reliability estimation of a multilayer ceramic chip capacitor (MLCC). For failed samples from an automobile engine control unit, failure analysis was performed to identify the root cause of failure; migration and avalanche breakdown were shown to be the dominant failure mechanisms. Next, an accelerated life test was designed to estimate the life of the MLCC, assuming a Weibull lifetime distribution and the life-stress relationship proposed by Prokopowicz and Vaskas. The life-stress relationship and the acceleration factor are estimated by analyzing the accelerated life test data.
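The Prokopowicz-Vaskas life-stress relationship gives an acceleration factor of the form L1/L2 = (V2/V1)^n * exp((Ea/k)(1/T1 - 1/T2)); a sketch with assumed voltage exponent and activation energy (the paper's estimated values are not reproduced here):

```python
# Sketch: acceleration factor under the Prokopowicz-Vaskas model for
# MLCC life testing. The exponent n and activation energy Ea below are
# assumed illustrative values, not the paper's estimates.
import math

def pv_acceleration(v_use, t_use, v_test, t_test, n=3.0, ea=1.1):
    """Acceleration factor between use and test conditions.

    v_*: voltage (V); t_*: absolute temperature (K);
    n: voltage exponent; ea: activation energy (eV).
    """
    k = 8.617e-5  # Boltzmann constant, eV/K
    return (v_test / v_use) ** n * math.exp((ea / k) * (1.0 / t_use - 1.0 / t_test))

af = pv_acceleration(v_use=5.0, t_use=328.0, v_test=15.0, t_test=398.0)
print(f"acceleration factor ~ {af:.0f}")
```

Dividing test-condition Weibull lifetimes by this factor maps them back to use conditions, which is how an accelerated life test yields a field-life estimate.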
Distribution System Reliability Analysis for Smart Grid Applications
NASA Astrophysics Data System (ADS)
Aljohani, Tawfiq Masad
Reliability of power systems is a key aspect in modern power system planning, design, and operation. The ascendance of the smart grid concept has provided high hopes of developing an intelligent network that is capable of being a self-healing grid, offering the ability to overcome the interruption problems that face utilities and cost them tens of millions in repairs and losses. To address these reliability concerns, the power utilities and interested parties have spent an extensive amount of time and effort analyzing and studying the reliability of the generation and transmission sectors of the power grid. Only recently has attention shifted to improving the reliability of the distribution network, the connection joint between the power providers and the consumers where most electricity problems occur. In this work, we examine the effect of smart grid applications in improving the reliability of power distribution networks. The test system used in this thesis is the IEEE 34 node test feeder, released in 2003 by the Distribution System Analysis Subcommittee of the IEEE Power Engineering Society. The objective is to analyze the feeder for the optimal placement of automatic switching devices and quantify their proper installation based on the performance of the distribution system. The measures are the changes in the reliability system indices, including SAIDI, SAIFI, and EUE. The goal is to design and simulate the effect of installing Distributed Generators (DGs) on the utility's distribution system and measure the potential improvement of its reliability. The software used in this work is DISREL, an intelligent power distribution analysis package developed by General Reliability Co.
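The reliability indices named above are simple ratios over interruption records; a minimal sketch with an illustrative outage list (not data from the thesis):

```python
# Sketch: computing SAIDI and SAIFI from a list of interruption events.
# The customer count and outage records below are illustrative.
n_customers = 1000  # customers served on the feeder

# Each record: (customers interrupted, outage duration in hours)
outages = [(200, 1.5), (50, 4.0), (400, 0.5)]

saifi = sum(c for c, _ in outages) / n_customers      # interruptions per customer
saidi = sum(c * h for c, h in outages) / n_customers  # interruption hours per customer
caidi = saidi / saifi                                 # hours per interruption
print(saifi, saidi, caidi)
```

Placing switches or DGs so that fewer customers see each fault shrinks the per-event terms, which is exactly how the optimization in the abstract improves SAIFI and SAIDI.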
Distributed and interactive visual analysis of omics data.
Farag, Yehia; Berven, Frode S; Jonassen, Inge; Petersen, Kjell; Barsnes, Harald
2015-11-01
The amount of publicly shared proteomics data has grown exponentially over the last decade as the solutions for sharing and storing the data have improved. However, the use of the data is often limited by the manner in which it is made available. There are two main approaches: download and inspect the proteomics data locally, or interact with the data via one or more web pages. The first is limited by having to download the data and thus requires local computational skills and resources, while the latter is most often limited in terms of interactivity and the analysis options available. A solution is to develop web-based systems supporting distributed and fully interactive visual analysis of proteomics data. The use of a distributed architecture makes it possible to perform the computational analysis at the server, while the results of the analysis can be displayed via a web browser without the need to download the whole dataset. Here the challenges related to developing such systems for omics data are discussed, in particular how this approach allows for multiple connected interactive visual displays of omics datasets in a web-based setting, and the benefits it provides for computational analysis of proteomics data. This article is part of a Special Issue entitled: Computational Proteomics. PMID:26047716
Automatic analysis of attack data from distributed honeypot network
NASA Astrophysics Data System (ADS)
Safarik, Jakub; Voznak, Miroslav; Rezac, Filip; Partila, Pavol; Tomala, Karel
2013-05-01
There are many ways of getting real data about malicious activity in a network. One of them relies on masquerading monitoring servers as production ones. These servers are called honeypots, and data about attacks on them provides valuable information about actual attacks and the techniques used by hackers. The article describes a distributed topology of honeypots, which was developed with a strong orientation toward monitoring of IP telephony traffic. IP telephony servers can be easily exposed to various types of attacks, and without protection this situation can lead to loss of money and other unpleasant consequences. Using a distributed topology with honeypots placed in different geographical locations and networks provides more valuable and independent results. With an automatic system for gathering information from all honeypots, it is possible to work with all the information at one centralized point. Communication between the honeypots and the centralized data store uses secure SSH tunnels, and the server communicates only with authorized honeypots. The centralized server also automatically analyses the data from each honeypot. The results of this analysis, along with other statistical data about malicious activity, are easily accessible through a built-in web server. All statistical and analysis reports serve as the information basis for an algorithm which classifies the different types of VoIP attacks used. The web interface then provides a tool for quick comparison and evaluation of actual attacks in all monitored networks. The article describes both the honeypot nodes in the distributed architecture, which monitor suspicious activity, and the methods and algorithms used on the server side for analysis of the gathered data.
GPS FOM Chimney Analysis using Generalized Extreme Value Distribution
NASA Technical Reports Server (NTRS)
Ott, Rick; Frisbee, Joe; Saha, Kanan
2004-01-01
An objective of a statistical analysis is often to estimate a limit value, such as a 3-sigma 95% confidence upper limit, from a data sample. The generalized extreme value (GEV) distribution method can be profitably employed in many situations for such an estimate. It is well known that, according to the central limit theorem, the mean value of a large data set is normally distributed irrespective of the distribution of the data from which the mean value is derived. In a somewhat similar fashion, it is observed that the extreme value of a data set often has a distribution that can be formulated with a generalized extreme value distribution. In space shuttle entry with 3-string GPS navigation, the Figure Of Merit (FOM) value gives a measure of GPS navigated state accuracy. A GPS navigated state with a FOM of 6 or higher is deemed unacceptable and is said to form a FOM 6 or higher chimney. A FOM chimney is a period of time during which the FOM value stays higher than 5. A longer period of FOM values of 6 or higher causes the navigated state to accumulate more error for lack of state updates. For an acceptable landing it is imperative that the state error remains low; hence, at low altitude during entry, GPS data with FOM greater than 5 must not last more than 138 seconds. To test the GPS performance, many entry test cases were simulated at the Avionics Development Laboratory. Only high-value FOM chimneys are consequential. The extreme value statistical technique is applied to analyze high-value FOM chimneys. The maximum likelihood method is used to determine the parameters that characterize the GEV distribution, and then the limit value statistics are estimated.
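The procedure sketched in this abstract (fit a GEV distribution to block maxima by maximum likelihood, then read off a limit quantile) can be reproduced with standard tools. The simulated data below stand in for the chimney-duration samples, which are not given in the abstract.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
# Block maxima of simulated durations: the maximum of each batch of 50
# samples plays the role of the extreme value of a data set.
samples = rng.exponential(scale=20.0, size=(200, 50))
block_maxima = samples.max(axis=1)

# Maximum-likelihood fit of the GEV distribution to the block maxima.
shape, loc, scale = genextreme.fit(block_maxima)

# 95th-percentile limit value for the extreme duration.
limit_95 = genextreme.ppf(0.95, shape, loc=loc, scale=scale)
```

For exponential parent data the block maxima approach a Gumbel law (GEV shape near zero), so the fitted shape parameter also serves as a sanity check on the data.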
Componential distribution analysis of food using near infrared ray image
NASA Astrophysics Data System (ADS)
Yamauchi, Hiroki; Kato, Kunihito; Yamamoto, Kazuhiko; Ogawa, Noriko; Ohba, Kimie
2008-11-01
The components of food related to its "deliciousness" are usually evaluated by componential analysis, which determines the content and type of components in the food. However, componential analysis cannot resolve measurements spatially, and the measurement is time consuming. We propose a method to measure the two-dimensional distribution of a component in food using a near infrared ray (IR) image. The advantage of our method is the ability to visualize invisible components. Many components in food have characteristics such as absorption and reflection of light in the IR range. The component content is measured using subtraction between images at two wavelengths of near IR light. In this paper, we describe a method to measure food components using near IR image processing, and we show an application visualizing saccharose in pumpkin.
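A minimal sketch of the two-wavelength subtraction step described above; the function name and the normalization are assumptions, since the paper's exact processing chain is not given.

```python
import numpy as np

def component_map(img_absorbing, img_reference):
    """Two-wavelength difference image.

    img_absorbing: IR image at a wavelength the component absorbs.
    img_reference: IR image at a nearby non-absorbing wavelength.
    Pixels with a large difference indicate high component content.
    """
    a = img_absorbing.astype(float)
    r = img_reference.astype(float)
    diff = r - a                  # absorption lowers the recorded intensity
    diff -= diff.min()            # normalize to [0, 1] for display
    span = diff.max()
    return diff / span if span > 0 else diff
```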
PERISCOPE: An Online-Based Distributed Performance Analysis Tool
NASA Astrophysics Data System (ADS)
Benedict, Shajulin; Petkov, Ventsislav; Gerndt, Michael
This paper presents PERISCOPE, an online distributed performance analysis tool that searches for a wide range of performance bottlenecks in parallel applications. It consists of a set of agents that capture and analyze application and hardware-related properties in an autonomous fashion. The paper focuses on the Periscope design, the different search methodologies, and the steps involved in doing an online performance analysis. A new user-friendly graphical interface based on Eclipse is introduced. Through the use of this new easy-to-use graphical interface, remote execution, selection of the type of analysis, and inspection of the found properties can be performed in an intuitive and easy way. In addition, a real-world application, namely the GENE code, a grand challenge problem of plasma physics, is analyzed using Periscope. The results are illustrated in terms of found properties and scalability issues.
Scalable Visual Reasoning: Supporting Collaboration through Distributed Analysis
Pike, William A.; May, Richard A.; Baddeley, Bob; Riensche, Roderick M.; Bruce, Joe; Younkin, Katarina
2007-05-21
We present a visualization environment called the Scalable Reasoning System (SRS) that provides a suite of tools for the collection, analysis, and dissemination of reasoning products. This environment is designed to function across multiple platforms, bringing the display of visual information and the capture of reasoning associated with that information to both mobile and desktop clients. The service-oriented architecture of SRS promotes collaboration and interaction between users regardless of their location or platform. Visualization services allow data processing to be centralized and analysis results collected from distributed clients in real time. We use the concept of “reasoning artifacts” to capture the analytic value attached to individual pieces of information and collections thereof, helping to fuse the foraging and sense-making loops in information analysis. Reasoning structures composed of these artifacts can be shared across platforms while maintaining references to the analytic activity (such as interactive visualization) that produced them.
Stochastic Sensitivity Analysis and Kernel Inference via Distributional Data
Li, Bochong; You, Lingchong
2014-01-01
Cellular processes are noisy due to the stochastic nature of biochemical reactions. As such, it is impossible to predict the exact quantity of a molecule or other attributes at the single-cell level. However, the distribution of a molecule over a population is often deterministic and is governed by the underlying regulatory networks relevant to the cellular functionality of interest. Recent studies have started to exploit this property to infer network states. To facilitate the analysis of distributional data in a general experimental setting, we introduce a computational framework to efficiently characterize the sensitivity of distributional output to changes in external stimuli. Further, we establish a probability-divergence-based kernel regression model to accurately infer signal level based on distribution measurements. Our methodology is applicable to any biological system subject to stochastic dynamics and can be used to elucidate how population-based information processing may contribute to organism-level functionality. It also lays the foundation for engineering synthetic biological systems that exploit population decoding to more robustly perform various biocomputation tasks, such as disease diagnostics and environmental-pollutant sensing. PMID:25185560
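A toy version of the probability-divergence-based kernel regression idea: a Nadaraya-Watson estimator whose kernel is built from the Jensen-Shannon divergence between measured distributions. The kernel choice and bandwidth are illustrative assumptions, not the authors' exact model.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def kernel_regress(train_dists, train_signals, query_dist, bandwidth=0.1):
    """Nadaraya-Watson regression with a Jensen-Shannon divergence kernel.

    train_dists: (n, bins) array of normalized histograms, one per known
                 stimulus level.
    train_signals: (n,) known stimulus levels.
    query_dist: measured histogram whose signal level we want to infer.
    """
    # scipy's jensenshannon returns the JS *distance*; squaring recovers
    # the divergence used as the kernel argument.
    d = np.array([jensenshannon(p, query_dist) ** 2 for p in train_dists])
    w = np.exp(-d / bandwidth)
    return float(np.dot(w, train_signals) / w.sum())
```

Querying with a distribution close to one of the training distributions returns a signal estimate close to that training level, which is the "population decoding" step described above.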
Performance Analysis of an Actor-Based Distributed Simulation
NASA Technical Reports Server (NTRS)
Schoeffler, James D.
1998-01-01
Object-oriented design of simulation programs appears to be very attractive because of the natural association of components in the simulated system with objects. There is great potential in distributing the simulation across several computers for the purpose of parallel computation and its consequent handling of larger problems in less elapsed time. One approach to such a design is to use "actors", that is, active objects with their own thread of control. Because these objects execute concurrently, communication is via messages. This is in contrast to an object-oriented design using passive objects where communication between objects is via method calls (direct calls when they are in the same address space and remote procedure calls when they are in different address spaces or different machines). This paper describes a performance analysis program for the evaluation of a design for distributed simulations based upon actors.
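A minimal Python sketch of the actor pattern the paper builds on: an active object with its own thread of control that communicates only through messages. The `Actor` class and its doubling "simulation step" are illustrative, not the paper's design.

```python
import queue
import threading

class Actor:
    """A minimal active object: its own thread, communicating via messages."""

    def __init__(self):
        self._mailbox = queue.Queue()
        self._results = queue.Queue()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, message):
        """Asynchronous message send; never blocks the caller."""
        self._mailbox.put(message)

    def receive(self, timeout=None):
        """Collect a result produced by the actor's own thread."""
        return self._results.get(timeout=timeout)

    def _run(self):
        while True:
            msg = self._mailbox.get()
            if msg is None:               # poison pill shuts the actor down
                break
            self._results.put(msg * 2)    # placeholder "simulation step"

    def stop(self):
        self.send(None)
        self._thread.join()
```

Because the actor executes concurrently, the sender never calls its methods directly; this is the contrast with passive objects and method calls described in the abstract.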
Principal Component Analysis for Normal-Distribution-Valued Symbolic Data.
Wang, Huiwen; Chen, Meiling; Shi, Xiaojun; Li, Nan
2016-02-01
This paper puts forward a new approach to principal component analysis (PCA) for normal-distribution-valued symbolic data, which has a vast potential of applications in the economic and management fields. We derive a full set of numerical characteristics and the variance-covariance structure for such data, which forms the foundation for our analytical PCA approach. Our approach is able to use all of the variance information in the original data, unlike the prevailing representative-type approaches in the literature, which use only centers, vertices, etc. The paper also provides an accurate approach to constructing the observations in a PC space based on the linear additivity property of the normal distribution. The effectiveness of the proposed method is illustrated by simulated numerical experiments. Finally, our method is applied to explain the puzzle of the risk-return tradeoff in China's stock market. PMID:25095276
Growing axons analysis by using Granulometric Size Distribution
NASA Astrophysics Data System (ADS)
Gonzalez, Mariela A.; Ballarin, Virginia L.; Rapacioli, Melina; Celín, A. R.; Sánchez, V.; Flores, V.
2011-09-01
Neurite growth (neuritogenesis) in vitro is a common methodology in the field of developmental neurobiology. Morphological analyses of growing neurites are usually difficult because their thinness and low contrast usually prevent clear observation of their shape, number, length and spatial orientation. This paper presents the use of the granulometric size distribution to automatically obtain information about the shape, size and spatial orientation of growing axons in tissue cultures. The results presented here show that the granulometric size distribution is a very useful morphological tool, since it allows the automatic detection of growing axons and the precise characterization of a relevant parameter indicative of the axonal growth spatial orientation, namely the angle of deviation of the growing direction. The developed algorithms quantify this orientation automatically, facilitating the analysis of these images, which is important given the large number of images that need to be processed in this type of study.
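The granulometric size distribution itself can be sketched as a pattern spectrum computed from morphological openings of increasing size; the orientation analysis in the paper goes further, but this is the underlying tool.

```python
import numpy as np
from scipy import ndimage

def granulometric_size_distribution(image, max_size=10):
    """Pattern spectrum by greyscale openings of increasing size.

    The drop in total intensity between successive openings measures how
    much of the image consists of structures of that size: openings are
    anti-extensive, so the remaining mass decreases monotonically.
    """
    sums = [float(image.sum())]
    for size in range(1, max_size + 1):
        opened = ndimage.grey_opening(image, size=(size, size))
        sums.append(float(opened.sum()))
    return -np.diff(sums)          # mass removed at each size step
```

For a single bright square of side 5, the spectrum is zero until the opening window exceeds the square, at which point all of its mass is removed in one step.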
Probabilistic Life and Reliability Analysis of Model Gas Turbine Disk
NASA Technical Reports Server (NTRS)
Holland, Frederic A.; Melis, Matthew E.; Zaretsky, Erwin V.
2002-01-01
In 1939, W. Weibull developed what is now commonly known as the "Weibull Distribution Function," primarily to determine the cumulative strength distribution of small sample sizes of elemental fracture specimens. In 1947, G. Lundberg and A. Palmgren, using the Weibull Distribution Function, developed a probabilistic lifing protocol for ball and roller bearings. In 1987, E. V. Zaretsky, using the Weibull Distribution Function, modified the Lundberg and Palmgren approach to life prediction. His method incorporates the results of coupon fatigue testing to compute the lives of elemental stress volumes of a complex machine element in order to predict system life and reliability. This paper examines the Zaretsky method to determine the probabilistic life and reliability of a model gas turbine disk using experimental data from coupon specimens. The predicted results are compared to experimental disk endurance data.
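Under a common Weibull slope, the strict-series system-life relation used in Lundberg-Palmgren-style analyses follows from multiplying element survival probabilities. A minimal sketch (the function name is an assumption):

```python
def weibull_system_life(element_lives, slope):
    """Combine elemental Weibull lives into a system life.

    With a common Weibull slope e, multiplying element survival
    probabilities S_i(t) = exp(-(t/L_i)^e) gives the strict-series
    relation  (1/L_sys)^e = sum_i (1/L_i)^e.
    """
    return sum((1.0 / life) ** slope for life in element_lives) ** (-1.0 / slope)
```

Two identical elements of life L and slope e give a system life of L / 2^(1/e), i.e. the system always fails sooner than its weakest element's characteristic life.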
Time series power flow analysis for distribution connected PV generation.
Broderick, Robert Joseph; Quiroz, Jimmy Edward; Ellis, Abraham; Reno, Matthew J.; Smith, Jeff; Dugan, Roger
2013-01-01
Distributed photovoltaic (PV) projects must go through an interconnection study process before connecting to the distribution grid. These studies are intended to identify the likely impacts and mitigation alternatives. In the majority of the cases, system impacts can be ruled out or mitigation can be identified without an involved study, through a screening process or a simple supplemental review study. For some proposed projects, expensive and time-consuming interconnection studies are required. The challenges to performing the studies are twofold. First, every study scenario is potentially unique, as the studies are often highly specific to the amount of PV generation capacity that varies greatly from feeder to feeder and is often unevenly distributed along the same feeder. This can cause location-specific impacts and mitigations. The second challenge is the inherent variability in PV power output which can interact with feeder operation in complex ways, by affecting the operation of voltage regulation and protection devices. The typical simulation tools and methods in use today for distribution system planning are often not adequate to accurately assess these potential impacts. This report demonstrates how quasi-static time series (QSTS) simulation and high time-resolution data can be used to assess the potential impacts in a more comprehensive manner. The QSTS simulations are applied to a set of sample feeders with high PV deployment to illustrate the usefulness of the approach. The report describes methods that can help determine how PV affects distribution system operations. The simulation results are focused on enhancing the understanding of the underlying technical issues. The examples also highlight the steps needed to perform QSTS simulation and describe the data needed to drive the simulations. 
The goal of this report is to make the methodology of time series power flow analysis readily accessible to utilities and others responsible for evaluating potential PV impacts.
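A toy illustration of why time resolution matters in QSTS studies: a regulator model that carries its tap state across time steps counts operations that disappear when the voltage series is averaged. The deadband and step values are typical placeholders, not from the report.

```python
def count_tap_operations(voltage_pu, band=0.0125, step=0.00625):
    """Quasi-static sweep over a feeder-voltage time series.

    Carries the regulator tap position from one time step to the next
    and counts the tap changes needed to hold the compensated voltage
    inside the deadband around 1.0 per unit. Coarse averaging of the
    series hides fast PV ramps and undercounts operations, which is the
    motivation for QSTS simulation at high time resolution.
    """
    tap = 0
    operations = 0
    for v in voltage_pu:
        regulated = v + tap * step
        while regulated < 1.0 - band:
            tap += 1
            operations += 1
            regulated = v + tap * step
        while regulated > 1.0 + band:
            tap -= 1
            operations += 1
            regulated = v + tap * step
    return operations
```

A cloud-driven series swinging between 0.96 and 1.04 pu forces repeated tap changes, while its time-averaged value of 1.0 pu requires none.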
Multiobjective sensitivity analysis and optimization of distributed hydrologic model MOBIDIC
NASA Astrophysics Data System (ADS)
Yang, J.; Castelli, F.; Chen, Y.
2014-10-01
Calibration of distributed hydrologic models usually involves dealing with a large number of distributed parameters and with optimization problems that have multiple, often conflicting, objectives arising in a natural fashion. This study presents a multiobjective sensitivity and optimization approach to handle these problems for the MOBIDIC (MOdello di Bilancio Idrologico DIstribuito e Continuo) distributed hydrologic model, which combines two sensitivity analysis techniques (the Morris method and the state-dependent parameter (SDP) method) with the multiobjective optimization (MOO) approach ε-NSGAII (Non-dominated Sorting Genetic Algorithm-II). This approach was implemented to calibrate MOBIDIC in an application to the Davidson watershed, North Carolina, with three objective functions: the standardized root mean square error (SRMSE) of logarithmically transformed discharge, the water balance index, and the mean absolute error of the logarithmically transformed flow duration curve. Its results were compared with those of a single-objective optimization (SOO) using the traditional Nelder-Mead simplex algorithm used in MOBIDIC, taking as the objective function the Euclidean norm of these three objectives. Results show that (1) the two sensitivity analysis techniques are effective and efficient for determining the sensitive processes and insensitive parameters: surface runoff and evaporation are very sensitive processes for all three objective functions, while groundwater recession and soil hydraulic conductivity are not sensitive and were excluded from the optimization. (2) Both MOO and SOO lead to acceptable simulations; e.g., for MOO, the average Nash-Sutcliffe value is 0.75 in the calibration period and 0.70 in the validation period. (3) Evaporation and surface runoff show similar importance for the watershed water balance, while the contribution of baseflow can be ignored.
(4) Compared to SOO, which was dependent on the initial starting location, MOO provides more insight into parameter sensitivity and the conflicting characteristics of these objective functions. Multiobjective sensitivity analysis and optimization provide an alternative way for future MOBIDIC modeling.
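The Morris method mentioned above screens parameters via elementary effects along random trajectories. A compact sketch (the trajectory scheme is simplified relative to Morris's original design):

```python
import numpy as np

def morris_elementary_effects(model, n_params, n_trajectories=20,
                              delta=0.1, seed=0):
    """Crude Morris screening: mean and std of absolute elementary effects.

    model maps a parameter vector in [0, 1]^n_params to a scalar
    objective. A large mean absolute effect (mu*) flags a sensitive
    parameter; a large standard deviation flags nonlinearity or
    interactions.
    """
    rng = np.random.default_rng(seed)
    effects = np.zeros((n_trajectories, n_params))
    for t in range(n_trajectories):
        x = rng.uniform(0.0, 1.0 - delta, size=n_params)
        base = model(x)
        # Perturb each parameter once, in random order, along a trajectory.
        for i in rng.permutation(n_params):
            x_new = x.copy()
            x_new[i] += delta
            y = model(x_new)
            effects[t, i] = (y - base) / delta
            x, base = x_new, y
    abs_eff = np.abs(effects)
    return abs_eff.mean(axis=0), abs_eff.std(axis=0)
```

Parameters with near-zero mu* (like soil hydraulic conductivity in the study above) can then be fixed before running the expensive multiobjective optimization.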
EST analysis pipeline: use of distributed computing resources.
González, Francisco Javier; Vizcaíno, Juan Antonio
2011-01-01
This chapter describes how a pipeline for the analysis of expressed sequence tag (EST) data can be implemented, based on our previous experience generating ESTs from Trichoderma spp. We focus on key steps in the workflow, such as the processing of raw data from the sequencers, the clustering of ESTs, and the functional annotation of the sequences using BLAST, InterProScan, and BLAST2GO. Some of the steps require the use of intensive computing power. Since these resources are not available for small research groups or institutes without bioinformatics support, an alternative is described: the use of distributed computing resources (local grids and Amazon EC2). PMID:21590415
Numerical analysis of decoy state quantum key distribution protocols
Harrington, Jim W; Rice, Patrick R
2008-01-01
Decoy state protocols are a useful tool for many quantum key distribution systems implemented with weak coherent pulses, allowing significantly better secret bit rates and longer maximum distances. In this paper we present a method to numerically find optimal three-level protocols, and we examine how the secret bit rate and the optimized parameters are dependent on various system properties, such as session length, transmission loss, and visibility. Additionally, we show how to modify the decoy state analysis to handle partially distinguishable decoy states as well as uncertainty in the prepared intensities.
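For intuition, a commonly used two-decoy (vacuum + weak) lower bound on the single-photon yield can be evaluated against simulated gains. The formula below is the standard two-intensity bound, a simpler special case than the numerically optimized three-level protocols the paper studies; the channel model (transmittance eta, dark-count yield y0) is an illustrative assumption.

```python
import math

def simulated_gain(intensity, eta, y0=1e-5):
    """Overall gain of a phase-randomized coherent pulse of the given
    mean photon number through a channel with transmittance eta and
    dark-count yield y0:  Q = y0 + 1 - exp(-eta * mu)."""
    return y0 + 1.0 - math.exp(-eta * intensity)

def y1_lower_bound(mu, nu, q_mu, q_nu, y0):
    """Standard vacuum + weak-decoy lower bound on the single-photon
    yield Y1, given the measured gains of the signal (mu) and weak
    decoy (nu) states, with mu > nu."""
    return (mu / (mu * nu - nu ** 2)) * (
        q_nu * math.exp(nu)
        - q_mu * math.exp(mu) * nu ** 2 / mu ** 2
        - (mu ** 2 - nu ** 2) / mu ** 2 * y0)
```

In a lossy channel the bound stays below the true single-photon yield y0 + eta while remaining positive, which is what makes a positive secret bit rate provable.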
Calibration of Boltzmann distribution priors in Bayesian data analysis.
Mechelke, Martin; Habeck, Michael
2012-12-01
The Boltzmann distribution is commonly used as a prior probability in Bayesian data analysis. Examples include the Ising model in statistical image analysis and the canonical ensemble based on molecular dynamics force fields in protein structure calculation. These models involve a temperature or weighting factor that needs to be inferred from the data. Bayesian inference stipulates to determine the temperature based on the model evidence. This is challenging because the model evidence, a ratio of two high-dimensional normalization integrals, cannot be calculated analytically. We outline a replica-exchange Monte Carlo scheme that allows us to estimate the model evidence by use of multiple histogram reweighting. The method is illustrated for an Ising model and examples in protein structure determination. PMID:23368076
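The basic reweighting identity underlying multiple histogram reweighting can be illustrated in one line: the ratio of normalization constants at two temperatures is an average over samples drawn at one of them. A sketch (single-histogram version only; the paper combines many replica temperatures):

```python
import numpy as np

def partition_ratio(energies_b1, beta1, beta2):
    """Estimate Z(beta2)/Z(beta1) by reweighting samples drawn at beta1:

        Z(b2)/Z(b1) = < exp(-(b2 - b1) E) >_{b1}

    Model evidences at different temperatures differ by exactly such
    normalization ratios, which is why reweighting lets one compare
    temperatures without computing either integral directly.
    """
    e = np.asarray(energies_b1, dtype=float)
    return float(np.mean(np.exp(-(beta2 - beta1) * e)))
```

For a two-level system with energies {0, 1}, the exact ratio is (1 + e^(-beta2)) / (1 + e^(-beta1)), which the estimator recovers from samples.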
Conditions for transmission path analysis in energy distribution models
NASA Astrophysics Data System (ADS)
Aragonès, Àngels; Guasch, Oriol
2016-02-01
In this work, we explore under which conditions transmission path analysis (TPA), developed for statistical energy analysis (SEA), can be applied to the less restrictive energy distribution (ED) models. It is shown that TPA can be extended without problems to proper-SEA systems, whereas the situation is not so clear for quasi-SEA systems. In the general case, it has been found that a TPA can always be performed on an ED model if its inverse influence energy coefficient (EIC) matrix turns out to have negative off-diagonal entries. If this condition is satisfied, it can be shown that the inverse EIC matrix automatically becomes an M-matrix. An ED graph can then be defined for it, and use can be made of graph-theoretic path ranking algorithms, previously developed for SEA systems, to classify dominant paths in ED models. A small mechanical system consisting of connected plates is used to illustrate some of the exposed theoretical results.
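A toy version of the path-ranking step: once the inverse EIC matrix passes the off-diagonal sign check, dominant paths can be ranked by a shortest-path search on costs -log|a_ij|, so the strongest-coupling path is the cheapest one. This sketch assumes coupling magnitudes below one so the costs stay non-negative for Dijkstra; the matrix values are made up for illustration.

```python
import heapq
import math

def dominant_path(inv_eic, source, target):
    """Strongest transmission path on an ED graph.

    inv_eic: inverse influence energy coefficient matrix; off-diagonal
    entries must be non-positive (M-matrix condition) for the ED graph
    to be defined. Edge strength |a_ij| becomes a cost -log|a_ij|.
    """
    n = len(inv_eic)
    for i in range(n):
        for j in range(n):
            if i != j and inv_eic[i][j] > 0:
                raise ValueError("off-diagonal entries must be non-positive")
    best = {source: 0.0}
    heap = [(0.0, source, [source])]
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == target:
            return path
        if cost > best.get(node, math.inf):
            continue                      # stale heap entry
        for nxt in range(n):
            a = inv_eic[node][nxt]
            if nxt != node and a < 0:
                c = cost - math.log(-a)
                if c < best.get(nxt, math.inf):
                    best[nxt] = c
                    heapq.heappush(heap, (c, nxt, path + [nxt]))
    return None
```

With a weak direct coupling and two strong intermediate couplings, the ranked dominant path goes through the intermediate subsystem rather than directly.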
Analysis of vegetation distribution in relation to surface morphology
NASA Astrophysics Data System (ADS)
Savio, Francesca; Prosdocimi, Massimo; Tarolli, Paolo; Rulli, Cristina
2013-04-01
The scaling relationships between the curvature and local slope of a given point on the landscape and its drainage area reveal information about the dominant erosion processes over geomorphic time scales. Vegetation is known to influence erosion rates and landslide initiation, and it is in turn influenced by such processes and by climatic regimes. Understanding the influence of vegetation dynamics on landscape organization is a fundamental challenge in the Earth sciences. In this study we considered two headwater catchments with vegetation mostly characterized by grass species (high-altitude grassland), but where shrubs (mainly Alnus viridis) and high forest (mainly Picea abies) are also common. We then analyzed the statistics relating vegetation distribution to different morphological patterns. High-resolution LiDAR data served as the basis from which to derive Digital Terrain Models (DTMs) and mathematical attributes of landscape morphology, including slope gradient, drainage area, aspect, surface curvature, topographic wetness index, and slope-area and curvature-area log-log diagrams. The results reveal distinct differences in the curvature-area and slope-area relationships of each vegetation type. For a given drainage area, mean landscape slope is generally found to increase with woody vegetation. A profound landsliding signature is detected in areas occupied by Alnus viridis, underlining the relation between this pioneer species and slope instability. This preliminary analysis suggests that, when high-resolution topography is available, it is possible to better characterize the vegetation distribution based on surface morphology, thus providing a useful tool for better understanding these processes and the role of vegetation in landscape evolution.
Distributed Data Analysis in the ATLAS Experiment: Challenges and Solutions
NASA Astrophysics Data System (ADS)
Elmsheuser, Johannes; van der Ster, Daniel
2012-12-01
The ATLAS experiment at the LHC at CERN is recording and simulating several tens of petabytes of data per year. To analyse these data, the ATLAS experiment has developed and operates a mature and stable distributed analysis (DA) service on the Worldwide LHC Computing Grid. The service is actively used: more than 1400 users submitted jobs in 2011, and more than 1 million jobs run every week in total. Users are provided with a suite of tools to submit Athena, ROOT or generic jobs to the Grid, and the PanDA workload management system is responsible for their execution. The reliability of the DA service is high and steadily improving; Grid sites are continually validated against a set of standard tests, and a dedicated team of expert shifters provides user support and communicates user problems to the sites. This paper reviews the state of the DA tools and services, summarizes the past year of distributed analysis activity, and presents the directions for future improvements to the system.
NASA Astrophysics Data System (ADS)
Govoni, A.; Lomax, A.; Michelini, A.
The Web now provides a single, universal infrastructure for developing client/server data access applications, and the seismology community can greatly benefit from this situation, both for routine observatory data analysis and for research purposes. The Web has reduced the myriad of platforms and technologies used to handle and exchange data to a single user interface (HTML), a single client platform (the Web browser), a single network protocol (HTTP), and a single server platform (the Web server). In this context we have designed a system that, taking advantage of the latest developments in client/server data access technologies based on JAVA, JAVA RMI and XML, may solve the most common problems in data access and manipulation experienced in the seismological community. Key concepts in this design are the thin client approach, minimum standards for data exchange, and distributed computing. Thin client means that any PC with a JAVA-enabled Web browser can interact with a set of remote data servers distributed across the world computer network, querying for data and for services. Minimum standards relate to the language needed for client/server interaction, which must be abstract enough that not everybody needs to know all the details of the transaction; this is solved by XML. Distribution means that a set of servers is able to provide to the client not only a data object (the actual data and the methods to work on it) but also the computing power to perform a particular task (a remote method in the JAVA RMI context), limiting the exchange of data to the results. This allows for client interaction even in situations of very limited communication bandwidth. We also describe in detail the implementation of the toolkit's main modules: a data eater module that gathers and archives seismological data from a variety of sources, ranging from portable digitizer data to real-time network data; a picking/location server that allows multi-user Web-based analysis of the waveforms using Lomax's SeisGram2K JAVA application and handles the posting of picks and the hypocentral location of the earthquakes; and a seismological bulletin generator that produces both dynamic (for Web site use) and static (for data distribution) HTML observatory bulletins in which the user can easily browse both the earthquake event parameters and the associated waveform data. All these modules have been developed and used to manage real data in current projects of the Friuli (NE Italy) network.
A theoretical analysis of basin-scale groundwater temperature distribution
NASA Astrophysics Data System (ADS)
An, Ran; Jiang, Xiao-Wei; Wang, Jun-Zhi; Wan, Li; Wang, Xu-Sheng; Li, Hailong
2015-03-01
The theory of regional groundwater flow is critical for explaining heat transport by moving groundwater in basins. Domenico and Palciauskas's (1973) pioneering study on convective heat transport in a simple basin assumed that convection has a small influence on redistributing groundwater temperature. Moreover, there has been no research focused on the temperature distribution around stagnation zones among flow systems. In this paper, the temperature distribution in the simple basin is reexamined and that in a complex basin with nested flow systems is explored. In both basins, compared to the temperature distribution due to conduction, convection leads to a lower temperature in most parts of the basin except for a small part near the discharge area. There is a high-temperature anomaly around the basin-bottom stagnation point where two flow systems converge due to a low degree of convection and a long travel distance, but there is no anomaly around the basin-bottom stagnation point where two flow systems diverge. In the complex basin, there are also high-temperature anomalies around internal stagnation points. Temperature around internal stagnation points could be very high when they are close to the basin bottom, for example, due to the small permeability anisotropy ratio. The temperature distribution revealed in this study could be valuable when using heat as a tracer to identify the pattern of groundwater flow in large-scale basins. Domenico PA, Palciauskas VV (1973) Theoretical analysis of forced convective heat transfer in regional groundwater flow. Geological Society of America Bulletin 84:3803-3814
Evaluation of Distribution Analysis Software for DER Applications
Staunton, RH
2003-01-23
The term "distributed energy resources," or DER, refers to a variety of compact, mostly self-contained power-generating technologies that can be combined with energy management and storage systems and used to improve the operation of the electricity distribution system, whether or not those technologies are connected to an electricity grid. Implementing DER can be as simple as installing a small electric generator to provide backup power at an electricity consumer's site. Or it can be a more complex system, highly integrated with the electricity grid and consisting of electricity generation, energy storage, and power management systems. DER devices provide opportunities for greater local control of electricity delivery and consumption. They also enable more efficient utilization of waste heat in combined cooling, heating and power (CHP) applications, boosting efficiency and lowering emissions. CHP systems can provide electricity, heat and hot water for industrial processes, space heating and cooling, refrigeration, and humidity control to improve indoor air quality. DER technologies are playing an increasingly important role in the nation's energy portfolio. They can be used to meet base load power, peaking power, backup power, remote power, and power quality needs, as well as cooling and heating needs. DER systems, ranging in size and capacity from a few kilowatts up to 50 MW, can include a number of technologies (e.g., supply-side and demand-side) that can be located at or near the location where the energy is used. Information pertaining to DER technologies, application solutions, successful installations, etc., can be found at the U.S. Department of Energy's DER Internet site [1]. Market forces in the restructured electricity markets are making DER both more common and more active in distribution systems throughout the US [2]. If DER devices can be made even more competitive with central generation sources, this trend will become unstoppable.
In response, energy providers will be forced to both fully acknowledge the trend and plan for accommodating DER [3]. With bureaucratic barriers [4], lack of time/resources, tariffs, etc. still seen in certain regions of the country, changes still need to be made. Given continued technical advances in DER, the time is fast approaching when the industry, nation-wide, must not only accept DER freely but also provide or review in-depth technical assessments of how DER should be integrated into and managed throughout the distribution system. Characterization studies are needed to fully understand how both the utility system and DER devices themselves will respond to all reasonable events (e.g., grid disturbances, faults, rapid growth, diverse and multiple DER systems, large reactive loads). Some of this work has already begun as it relates to operation and control of DER [5] and microturbine performance characterization [6,7]. One of the most urgently needed tools that can provide these types of analyses is a distribution network analysis program in combination with models for various DER. Together, they can be used for (1) analyzing DER placement in distribution networks and (2) helping to ensure that adequate transmission reliability is maintained. Surveys of the market show products that represent a partial match to these needs; specifically, software that has been developed to plan electrical distribution systems and analyze reliability (in a near total absence of DER). The first part of this study (Sections 2 and 3 of the report) looks at a number of these software programs and provides both summary descriptions and comparisons. The second part of this study (Section 4 of the report) considers the suitability of these analysis tools for DER studies. It considers steady state modeling and assessment work performed by ORNL using one commercially available tool on feeder data provided by a southern utility. 
Appendix A provides a technical report on the results of this modeling effort.
A New Lifetime Distribution with Bathtub and Unimodal Hazard Function
NASA Astrophysics Data System (ADS)
Barriga, Gladys D. C.; Louzada-Neto, Francisco; Cancho, Vicente G.
2008-11-01
In this paper we propose a new lifetime distribution which accommodates bathtub-shaped, unimodal, increasing, and decreasing hazard functions. Some particular cases are derived, including the standard Weibull distribution. Maximum likelihood estimation is considered to estimate the three parameters present in the model. The methodology is illustrated with a real data set on industrial devices under a life test.
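The abstract does not reproduce the new distribution's density, so a worked example can only cover the Weibull special case it mentions. The sketch below, which assumes nothing beyond standard Weibull likelihood theory, fits the two Weibull parameters by maximum likelihood, solving the profile score equation for the shape by bisection.

```python
import numpy as np

def weibull_mle(x, k_lo=0.05, k_hi=20.0, tol=1e-10):
    """Two-parameter Weibull MLE. The shape k is the root of
    g(k) = sum(x^k ln x)/sum(x^k) - 1/k - mean(ln x),
    found by bisection; the scale is then lam = mean(x^k)**(1/k)."""
    x = np.asarray(x, float)
    logx = np.log(x)

    def g(k):
        xk = x ** k
        return (xk @ logx) / xk.sum() - 1.0 / k - logx.mean()

    lo, hi = k_lo, k_hi
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    k = 0.5 * (lo + hi)
    lam = np.mean(x ** k) ** (1.0 / k)
    return k, lam

# Synthetic lifetimes with known parameters, to check the recovery
rng = np.random.default_rng(0)
sample = 2.0 * rng.weibull(1.5, size=5000)   # true shape 1.5, scale 2.0
k_hat, lam_hat = weibull_mle(sample)
```

The same likelihood machinery extends to the three-parameter model of the paper once its density is written down.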
Silk Fiber Mechanics from Multiscale Force Distribution Analysis
Cetinkaya, Murat; Xiao, Senbo; Markert, Bernd; Stacklies, Wolfram; Gräter, Frauke
2011-01-01
Here we decipher the molecular determinants for the extreme toughness of spider silk fibers. Our bottom-up computational approach incorporates molecular dynamics and finite element simulations. Therefore, the approach allows the analysis of the internal strain distribution and load-carrying motifs in silk fibers on scales of both molecular and continuum mechanics. We thereby dissect the contributions from the nanoscale building blocks, the soft amorphous and the strong crystalline subunits, to silk fiber mechanics. We identify the amorphous subunits not only to give rise to high elasticity, but to also ensure efficient stress homogenization through the friction between entangled chains, which also allows the crystals to withstand stresses as high as 2 GPa in the context of the amorphous matrix. We show that the maximal toughness of silk is achieved at 10–40% crystallinity depending on the distribution of crystals in the fiber. We also determined a serial arrangement of the crystalline and amorphous subunits in lamellae to outperform a random or a parallel arrangement, putting forward what we believe to be a new structural model for silk and other semicrystalline materials. The multiscale approach, not requiring any empirical parameters, is applicable to other partially ordered polymeric systems. Hence, it is an efficient tool for the design of artificial silk fibers. PMID:21354403
A Distributed Flocking Approach for Information Stream Clustering Analysis
Cui, Xiaohui; Potok, Thomas E
2006-01-01
Intelligence analysts are currently overwhelmed with the amount of information streams generated every day. There is a lack of comprehensive tools that can analyze information streams in real time. Document clustering analysis plays an important role in improving the accuracy of information retrieval. However, most clustering technologies can only be applied to static document collections because they normally require a large amount of computational resources and a long time to obtain accurate results. It is very difficult to cluster dynamically changing text information streams on an individual computer. Our early research resulted in a dynamic reactive flock clustering algorithm which can continually refine the clustering result and quickly react to changes in document contents. This characteristic makes the algorithm suitable for cluster analysis of dynamically changing document information, such as text information streams. Because of the decentralized character of this algorithm, a distributed approach is a very natural way to increase the clustering speed of the algorithm. In this paper, we present a distributed multi-agent flocking approach for text information stream clustering and discuss the decentralized architectures and communication schemes for load balancing and status information synchronization in this approach.
Phylogenetic analysis reveals a scattered distribution of autumn colours
Archetti, Marco
2009-01-01
Background and Aims Leaf colour in autumn is rarely considered informative for taxonomy, but there is now growing interest in the evolution of autumn colours and different hypotheses are debated. Research efforts are hindered by the lack of basic information: the phylogenetic distribution of autumn colours. It is not known when and how autumn colours evolved. Methods Data are reported on the autumn colours of 2368 tree species belonging to 400 genera of the temperate regions of the world, and an analysis is made of their phylogenetic relationships in order to reconstruct the evolutionary origin of red and yellow in autumn leaves. Key Results Red autumn colours are present in at least 290 species (70 genera), and evolved independently at least 25 times. Yellow is present independently from red in at least 378 species (97 genera) and evolved at least 28 times. Conclusions The phylogenetic reconstruction suggests that autumn colours have been acquired and lost many times during evolution. This scattered distribution could be explained by hypotheses involving some kind of coevolutionary interaction or by hypotheses that rely on the need for photoprotection. PMID:19126636
Analysis of the Spectral Energy Distributions of Fermi bright blazars
Gasparrini, D.; Cutini, S.; Colafrancesco, S.; Giommi, P.; Raino, S.
2010-03-26
Blazars are a small fraction of all extragalactic sources but, unlike other objects, they are strong emitters across the entire electromagnetic spectrum. In this study we have conducted a detailed investigation of the broad-band spectral properties of the gamma-ray selected blazars of the Fermi LAT Bright AGN Sample (LBAS). By combining the accurately estimated Fermi gamma-ray spectra with Swift, radio, NIR-Optical and hard-X/gamma-ray data, collected within three months of the LBAS data taking period, we were able to assemble high-quality and quasi-simultaneous Spectral Energy Distributions (SED) for 48 LBAS blazars. Here we show the procedure for the multi-wavelength analysis.
Phylogenetic analysis on the soil bacteria distributed in karst forest
Zhou, JunPei; Huang, Ying; Mo, MingHe
2009-01-01
Phylogenetic composition of the bacterial community in soil of a karst forest was analyzed by a culture-independent molecular approach. The bacterial 16S rRNA gene was amplified directly from soil DNA and cloned to generate a library. After screening the clone library by RFLP, 16S rRNA genes of representative clones were sequenced and the bacterial community was analyzed phylogenetically. The 16S rRNA gene inserts of 190 randomly selected clones were analyzed by RFLP and generated 126 different RFLP types. After sequencing, 126 non-chimeric sequences were obtained, generating 113 phylotypes. Phylogenetic analysis revealed that the bacteria distributed in soil of the karst forest included members assigned to Proteobacteria, Acidobacteria, Planctomycetes, Chloroflexi (Green nonsulfur bacteria), Bacteroidetes, Verrucomicrobia, Nitrospirae, Actinobacteria (High G+C Gram-positive bacteria), Firmicutes (Low G+C Gram-positive bacteria) and candidate divisions (including the SPAM and GN08). PMID:24031430
Size distribution measurement for densely binding bubbles via image analysis
NASA Astrophysics Data System (ADS)
Ma, Ye; Yan, Guanxi; Scheuermann, Alexander; Bringemeier, Detlef; Kong, Xiang-Zhao; Li, Ling
2014-12-01
For densely binding bubble clusters, conventional image analysis methods are unable to provide an accurate measurement of the bubble size distribution because of the difficulties with clearly identifying the outline edges of individual bubbles. In contrast, the bright centroids of individual bubbles can be distinctly defined and thus accurately measured. Taking advantage of this, we developed a new measurement method based on a linear relationship between the bubble radius and the radius of its bright centroid, so as to avoid the need to identify the bubble outline edges. The linear relationship and method were thoroughly tested for 2D bubble clusters in a highly binding condition and found to be effective and robust for measuring the bubble sizes.
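The paper's calibration data are not given, so the sketch below illustrates the idea on synthetic numbers: the assumed linear law between bubble radius and bright-centroid radius is fitted by least squares on images where edges are resolvable, then applied where only centroids can be measured.

```python
import numpy as np

# Hypothetical calibration pairs from images where bubble edges ARE
# resolvable; the coefficients 1.8 and 0.5 are invented for illustration.
rng = np.random.default_rng(1)
r_centroid = rng.uniform(2.0, 10.0, 40)            # centroid radii (pixels)
r_bubble = 1.8 * r_centroid + 0.5 + rng.normal(0, 0.1, 40)

# Least-squares fit of the assumed linear law r_bubble = a*r_centroid + b
a, b = np.polyfit(r_centroid, r_bubble, 1)

# Apply to a densely packed cluster where only centroids are measurable
new_centroids = np.array([3.0, 5.5, 8.2])
est_radii = a * new_centroids + b
```

The size distribution of the cluster then follows directly from `est_radii`, without ever segmenting bubble outlines.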
Global sensitivity analysis in wind energy assessment
NASA Astrophysics Data System (ADS)
Tsvetkova, O.; Ouarda, T. B.
2012-12-01
Wind energy is one of the most promising renewable energy sources. Nevertheless, it is not yet a common source of energy, although there is enough wind potential to supply the world's energy demand. One of the most prominent obstacles on the way to employing wind energy is the uncertainty associated with wind energy assessment. Global sensitivity analysis (SA) studies how the variation of input parameters in an abstract model affects the variation of the variable of interest, the output variable. It also provides ways to calculate explicit measures of importance of input variables (first-order and total-effect sensitivity indices) with regard to their influence on the variation of the output variable. Two methods of determining the above-mentioned indices were applied and compared: the brute force method and the best practice estimation procedure. In this study a methodology for conducting global SA of wind energy assessment at a planning stage is proposed. Three sampling strategies which are a part of the SA procedure were compared: sampling based on Sobol' sequences (SBSS), Latin hypercube sampling (LHS) and pseudo-random sampling (PRS). A case study of Masdar City, a showcase of sustainable living in the UAE, is used to exemplify application of the proposed methodology. Sources of uncertainty in wind energy assessment are very diverse. In the case study the following were identified as uncertain input parameters: the Weibull shape parameter, the Weibull scale parameter, availability of a wind turbine, lifetime of a turbine, air density, electrical losses, blade losses, ineffective time losses. Ineffective time losses are defined as losses during the time when the actual wind speed is lower than the cut-in speed or higher than the cut-out speed. The output variable in the case study is the lifetime energy production. The most influential factors for lifetime energy production are identified through the ranking of the total-effect sensitivity indices.
The results of the present research show that the brute force method is best for wind assessment purposes, and that SBSS outperforms other sampling strategies in the majority of cases. The results indicate that the Weibull scale parameter, turbine lifetime and Weibull shape parameter are the three most influential variables in the case study setting. The following conclusions can be drawn from these results: 1) SBSS should be recommended for use in Monte Carlo experiments, 2) The brute force method should be recommended for conducting sensitivity analysis in wind resource assessment, and 3) Little variation in the Weibull scale causes significant variation in energy production. The presence of the two distribution parameters among the top three influential variables (the Weibull shape and scale) emphasizes the importance of accuracy in (a) choosing the distribution to model the wind regime at a site and (b) estimating the probability distribution parameters. This can be labeled as the most important conclusion of this research because it opens a field for further research, which the authors see could change the wind energy field tremendously.
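The study's actual wind-energy model is not reproduced in the abstract. The following minimal sketch uses an invented stand-in for it (the cubic dependence of production on the Weibull scale is only illustrative) to show how first-order and total-effect sensitivity indices can be estimated with the Saltelli sampling scheme and Jansen's estimators.

```python
import numpy as np

def sobol_indices(f, bounds, n=4096, seed=0):
    """Monte Carlo estimate of first-order (S1) and total-effect (ST)
    Sobol indices via the Saltelli A/B/AB_i scheme."""
    rng = np.random.default_rng(seed)
    d = len(bounds)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    A = lo + (hi - lo) * rng.random((n, d))
    B = lo + (hi - lo) * rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]), ddof=1)
    S1, ST = np.empty(d), np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                      # column i swapped in from B
        fABi = f(ABi)
        S1[i] = np.mean(fB * (fABi - fA)) / var  # Saltelli first-order
        ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var  # Jansen total-effect
    return S1, ST

# Toy stand-in for a wind-energy model: production driven mostly by the
# Weibull scale c, weakly by shape k and availability a (illustrative only).
def production(X):
    k, c, a = X[:, 0], X[:, 1], X[:, 2]
    return a * c**3 * (1 + 0.1 * np.sin(k))

S1, ST = sobol_indices(production, [(1.5, 2.5), (6.0, 10.0), (0.9, 1.0)])
```

With these bounds the Weibull scale dominates the output variance, mirroring the paper's conclusion that small scale-parameter variation drives large production variation.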
Earthquake interevent time distribution in Kachchh, Northwestern India
NASA Astrophysics Data System (ADS)
Pasari, Sumanta; Dikshit, Onkar
2015-12-01
Statistical properties of earthquake interevent times have long been a topic of interest to seismologists and earthquake professionals, mainly for hazard-related concerns. In this paper, we present a comprehensive study on the temporal statistics of earthquake interoccurrence times of the seismically active Kachchh peninsula (western India) using thirteen probability distributions. Those distributions are exponential, gamma, lognormal, Weibull, Levy, Maxwell, Pareto, Rayleigh, inverse Gaussian (Brownian passage time), inverse Weibull (Frechet), exponentiated exponential, exponentiated Rayleigh (Burr type X), and exponentiated Weibull distributions. Statistical inference of the scale and shape parameters of these distributions is carried out via maximum likelihood estimation and the Fisher information matrices. The latter are used as a surrogate tool to appraise the parametric uncertainty in the estimation process. Goodness of fit was assessed with two criteria: the maximum likelihood criterion in its Akaike information criterion (AIC) modification, and the Kolmogorov-Smirnov (K-S) minimum distance criterion. These results reveal that (i) the exponential model provides the best fit, (ii) the gamma, lognormal, Weibull, inverse Gaussian, exponentiated exponential, exponentiated Rayleigh, and exponentiated Weibull models provide an intermediate fit, and (iii) the rest, namely Levy, Maxwell, Pareto, Rayleigh, and inverse Weibull, fit poorly to the earthquake catalog of Kachchh and its adjacent regions. This study also analyzes the present-day seismicity in terms of the estimated recurrence interval and conditional probability curves (hazard curves). The estimated cumulative probability and the conditional probability of a magnitude 5.0 or higher event reach 0.8-0.9 by 2027-2036 and 2034-2043, respectively.
These values have significant implications in a variety of practical applications including earthquake insurance, seismic zonation, location identification of lifeline structures, and revision of building codes.
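The catalog itself is not reproduced here, so synthetic interevent times stand in for it. The sketch below shows the two model-selection tools the study uses, AIC and the K-S distance, applied to exponential and Weibull fits with `scipy.stats`.

```python
import numpy as np
from scipy import stats

# Hypothetical interevent times (days); a real analysis would use the
# Kachchh earthquake catalog instead.
rng = np.random.default_rng(2)
dt = rng.exponential(scale=30.0, size=400)

def aic(log_likelihood, n_params):
    """Akaike information criterion: smaller is better."""
    return 2 * n_params - 2 * log_likelihood

# Exponential fit: one free parameter (scale), location fixed at zero
loc_e, sc_e = stats.expon.fit(dt, floc=0)
aic_exp = aic(stats.expon.logpdf(dt, loc_e, sc_e).sum(), 1)

# Weibull fit: shape and scale free, location fixed at zero
c, loc_w, sc_w = stats.weibull_min.fit(dt, floc=0)
aic_wei = aic(stats.weibull_min.logpdf(dt, c, loc_w, sc_w).sum(), 2)

# Kolmogorov-Smirnov distances of the fitted models
ks_exp = stats.kstest(dt, 'expon', args=(loc_e, sc_e)).statistic
ks_wei = stats.kstest(dt, 'weibull_min', args=(c, loc_w, sc_w)).statistic
```

Extending the comparison to all thirteen distributions is a matter of repeating the fit/AIC/K-S triple for each `scipy.stats` family (or a hand-written density where none exists).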
Detailed End Use Load Modeling for Distribution System Analysis
Schneider, Kevin P.; Fuller, Jason C.
2010-04-09
The field of distribution system analysis has made significant advances in the past ten years. It is now standard practice when performing a power flow simulation to use an algorithm that is capable of unbalanced per-phase analysis. Recent work has also focused on examining the need for time-series simulations instead of examining a single time period, i.e., peak loading. One area that still requires a significant amount of work is the proper modeling of end use loads. Currently it is common practice to use a simple load model consisting of a combination of constant power, constant impedance, and constant current elements. While this simple form of end use load modeling is sufficient for a single point in time, the exact model values are difficult to determine and it is inadequate for some time-series simulations. This paper will examine how to improve simple time invariant load models as well as develop multi-state time variant models.
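The "combination of constant power, constant impedance, and constant current elements" mentioned above is conventionally written as a ZIP polynomial in the per-unit voltage. A minimal sketch follows; the coefficient values are illustrative, not taken from the paper.

```python
def zip_load(P0, Q0, V, V0=1.0,
             zp=(0.4, 0.3, 0.3), zq=(0.4, 0.3, 0.3)):
    """ZIP composite load model: real and reactive power expressed as
    fractions of constant-impedance (Z), constant-current (I) and
    constant-power (P) behaviour. Each coefficient triple must sum to 1."""
    v = V / V0
    az, ai, ap = zp
    bz, bi, bp = zq
    P = P0 * (az * v**2 + ai * v + ap)   # Z term ~ v^2, I term ~ v, P term const
    Q = Q0 * (bz * v**2 + bi * v + bp)
    return P, Q

# 100 kW / 30 kvar nominal load evaluated at 0.95 pu voltage
P, Q = zip_load(100.0, 30.0, 0.95)
```

A time-variant model in the spirit of the paper would make `P0`, `Q0`, and the coefficient triples functions of time or of end-use state rather than constants.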
NASA Astrophysics Data System (ADS)
Alves, Nelson A.; Morero, Lucas D.; Rizzi, Leandro G.
2015-06-01
Microcanonical thermostatistics analysis has become an important tool to reveal essential aspects of phase transitions in complex systems. An efficient way to estimate the microcanonical inverse temperature β(E) and the microcanonical entropy S(E) is achieved with the statistical temperature weighted histogram analysis method (ST-WHAM). The strength of this method lies in its flexibility, as it can be used to analyse data produced by algorithms with generalised sampling weights. However, for any sampling weight, ST-WHAM requires the calculation of derivatives of energy histograms H(E), which leads to non-trivial and tedious binning tasks for models with continuous energy spectrum such as those for biomolecular and colloidal systems. Here, we discuss two alternative methods that avoid the need for such energy binning to obtain continuous estimates for H(E) in order to evaluate β(E) by using ST-WHAM: (i) a series expansion to estimate probability densities from the empirical cumulative distribution function (CDF), and (ii) a Bayesian approach to model this CDF. Comparison with a simple linear regression method is also carried out. The performance of these approaches is evaluated considering coarse-grained protein models for folding and peptide aggregation.
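The paper's actual proposals (a series expansion and a Bayesian CDF model) are more refined, but the simplest bin-free estimate in the same spirit is a finite difference of the empirical CDF, sketched here on synthetic data.

```python
import numpy as np

# Hypothetical "energies" from a simulation with a continuous spectrum
rng = np.random.default_rng(3)
E = np.sort(rng.normal(0.0, 1.0, 4000))

def ecdf(x):
    """Empirical CDF of the sample E, evaluated without any binning."""
    return np.searchsorted(E, x, side='right') / E.size

def density(x, h=0.2):
    """Central finite difference of the ECDF: a bin-free density estimate,
    i.e. a continuous stand-in for the histogram H(E)."""
    return (ecdf(x + h) - ecdf(x - h)) / (2.0 * h)

dens_at_0 = float(density(0.0))   # true N(0,1) density at 0 is ~0.399
```

The derivative of the resulting density estimate (or of a smooth CDF model, as in the paper) is what ST-WHAM needs in place of histogram differences.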
An Open Architecture for Distributed Malware Collection and Analysis
NASA Astrophysics Data System (ADS)
Cavalca, Davide; Goldoni, Emanuele
Honeynets have become an important tool for researchers and network operators. However, the lack of a unified honeynet data model has impeded their effectiveness, resulting in multiple unrelated data sources, each with its own proprietary access method and format. Moreover, the deployment and management of a honeynet is a time-consuming activity and the interpretation of collected data is far from trivial. HIVE (Honeynet Infrastructure in Virtualized Environment) is a novel highly scalable automated data collection and analysis architecture we designed. Our infrastructure is based on top of proven FLOSS (Free, Libre and Open Source) solutions, which have been extended and integrated with new tools we developed. We use virtualization to ease honeypot management and deployment, combining both high-interaction and low-interaction sensors in a common infrastructure. We also address the need for rapid comprehension and detailed data analysis by harnessing the power of a relational database system, which provides centralized storage and access to the collected data while ensuring its constant integrity. This chapter presents our malware data collection architecture, offering some insight in the structure and benefits of a distributed virtualized honeynet and its development. Finally, we present some techniques for the active monitoring of centralized botnets we integrated in HIVE, which allow us to track the menaces evolution and timely deploy effective countermeasures.
Specimen type and size effects on lithium hydride tensile strength distributions
Oakes, Jr, R E
1991-12-01
Weibull's two-parameter statistical-distribution function is used to account for the effects of specimen size and loading differences on strength distributions of lithium hydride. Three distinctly differing uniaxial specimen types (i.e., an elliptical-transition pure tensile specimen, an internally pressurized ring tensile, and two sizes of four-point-flexure specimens) are shown to provide different strength distributions as expected, because of their differing sizes and modes of loading. After separation of strengths into volumetric- and surface-initiated failure distributions, the Weibull characteristic strength parameters for the higher-strength tests associated with internal fracture initiations are shown to vary as predicted by the effective specimen volume Weibull relationship. Lower-strength results correlate with the effective area to a much lesser degree, probably because of the limited number of surface-related failures and the different machining methods used to prepare the specimens. The strength distribution from the fourth specimen type, the predominantly equibiaxially stressed disk-flexure specimen, is well below that predicted by the two-parameter Weibull-derived effective volume or surface area relations. The two-parameter Weibull model cannot account for the increased failure probability associated with multiaxial stress fields. Derivations of effective volume and area relationships for those specimens for which none were found in the literature, the elliptical-transition tensile, the ring tensile, and the disk flexure (including the outer region), are also included.
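The effective-volume relationship invoked above is the standard weakest-link scaling of the two-parameter Weibull model. A minimal sketch, with invented numbers rather than the report's lithium hydride data:

```python
def scale_strength(sigma1, V1, V2, m):
    """Two-parameter Weibull effective-volume scaling:
    sigma2 / sigma1 = (V1 / V2)**(1/m), so a larger effective volume
    lowers the characteristic strength (weakest-link statistics)."""
    return sigma1 * (V1 / V2) ** (1.0 / m)

# Illustrative numbers only (not from the report): a specimen with
# 100 mm^3 effective volume and 120 MPa characteristic strength,
# extrapolated to a 400 mm^3 specimen with Weibull modulus m = 10.
sigma2 = scale_strength(120.0, 100.0, 400.0, 10.0)
```

The same formula with effective areas in place of effective volumes gives the surface-initiated analogue; the report's point is that neither form captures multiaxial stress states.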
Prediction of the Inert Strength Distribution of Si3N4 Diesel Valves
Andrews, M.J.; Breder, K.; Wereszczak, A.A.
1999-01-25
Censored Weibull strength distributions were generated with NT551 silicon nitride four-point flexure data using the ASTM C1161-B and 5.0 mm diameter cylindrical specimens. Utilizing finite element models and AlliedSignal's life prediction codes, the inert or fast-fracture strength failure probability of a ceramic diesel valve was estimated from these data sets. The failure probability predictions derived from each data set were found to be more conservative than the valve strength data. Fractographic analysis of the test specimens and valves showed that the cylindrical specimens failed from a different flaw population than the prismatic flexure bars and the valves. The study emphasizes the prerequisite of having coincident flaw populations homogeneously distributed in both the test specimen and the ceramic component. Lastly, it suggests that unless material homogeneity exists, any meaningful life prediction or reliability analysis of a component may not be possible.
Reliability analysis of a structural ceramic combustion chamber
NASA Technical Reports Server (NTRS)
Salem, Jonathan A.; Manderscheid, Jane M.; Freedman, Marc R.; Gyekenyesi, John P.
1991-01-01
The Weibull modulus, fracture toughness and thermal properties of a silicon nitride material used to make a gas turbine combustor were experimentally measured. The location and nature of failure origins resulting from bend tests were determined with fractographic analysis. The measured Weibull parameters were used along with thermal and stress analysis to determine failure probabilities of the combustor with the CARES design code. The effect of data censoring, FEM mesh refinement, and fracture criterion were considered in the analysis.
Reliability analysis of a structural ceramic combustion chamber
NASA Technical Reports Server (NTRS)
Salem, Jonathan A.; Manderscheid, Jane M.; Freedman, Marc R.; Gyekenyesi, John P.
1990-01-01
The Weibull modulus, fracture toughness and thermal properties of a silicon nitride material used to make a gas turbine combustor were experimentally measured. The location and nature of failure origins resulting from bend tests were determined with fractographic analysis. The measured Weibull parameters were used along with thermal and stress analysis to determine failure probabilities of the combustor with the CARES design code. The effect of data censoring, FEM mesh refinement, and fracture criterion were considered in the analysis.
NASA Astrophysics Data System (ADS)
Chi, Se-Hwan
2015-09-01
Changes in flexural strength and Weibull modulus due to specimen size were investigated for three nuclear graphite grades, IG-110, NBG-18, and PCEA, using four-point-1/3 point (4-1/3) loading with specimens of three different sizes: 3.18 (Thickness) × 6.35 (Width) × 50.8 (Length) mm, 6.50 (T) × 12.0 (W) × 52.0 (L) mm, and 18.0 (T) × 16.0 (W) × 64 (L) mm (total: 210 specimens). Results showed that the specimen size effect was grade dependent: while NBG-18 showed a rather significant specimen size effect (37% difference between the 3 T and 18 T specimens), the differences in IG-110 and PCEA were 7.6-15%. The maximum differences in flexural strength due to specimen size were larger in PCEA and NBG-18, which have larger coke particles (medium grain size: >300 μm), than in IG-110 with its super-fine coke particle size (25 μm). The Weibull modulus showed a data population dependency, in that it decreased with increasing numbers of data used for modulus determination. A good correlation between the fracture surface roughness and the flexural strength was confirmed.
Identification and Analysis of Issues in Distributive Education.
ERIC Educational Resources Information Center
Weatherford, John Wilson
The purpose of the study was to analyze the opinions of distributive education leaders about issues in distributive education and to ascertain their opinions on the importance of these issues in determining effective operating procedures in distributive education. The 30 leaders were determined on the basis of the number of times their names were…
Distribution and Phylogenetic Analysis of Family 19 Chitinases in Actinobacteria
Kawase, Tomokazu; Saito, Akihiro; Sato, Toshiya; Kanai, Ryo; Fujii, Takeshi; Nikaidou, Naoki; Miyashita, Kiyotaka; Watanabe, Takeshi
2004-01-01
In organisms other than higher plants, family 19 chitinase was first discovered in Streptomyces griseus HUT6037, and later, the general occurrence of this enzyme in Streptomyces species was demonstrated. In the present study, the distribution of family 19 chitinases in the class Actinobacteria and the phylogenetic relationship of Actinobacteria family 19 chitinases with family 19 chitinases of other organisms were investigated. Forty-nine strains were chosen to cover almost all the suborders of the class Actinobacteria, and chitinase production was examined. Of the 49 strains, 22 formed cleared zones on agar plates containing colloidal chitin and thus appeared to produce chitinases. These 22 chitinase-positive strains were subjected to Southern hybridization analysis by using a labeled DNA fragment corresponding to the catalytic domain of ChiC, and the presence of genes similar to chiC of S. griseus HUT6037 in at least 13 strains was suggested by the results. PCR amplification and sequencing of the DNA fragments corresponding to the major part of the catalytic domains of the family 19 chitinase genes confirmed the presence of family 19 chitinase genes in these 13 strains. The strains possessing family 19 chitinase genes belong to 6 of the 10 suborders in the order Actinomycetales, which account for the greatest part of the Actinobacteria. Phylogenetic analysis suggested that there is a close evolutionary relationship between family 19 chitinases found in Actinobacteria and plant class IV chitinases. The general occurrence of family 19 chitinase genes in Streptomycineae and the high sequence similarity among the genes found in Actinobacteria suggest that the family 19 chitinase gene was first acquired by an ancestor of the Streptomycineae and spread among the Actinobacteria through horizontal gene transfer. PMID:14766598
Rod internal pressure quantification and distribution analysis using Frapcon
Bratton, Ryan N; Jessee, Matthew Anderson; Wieselquist, William A
2015-09-30
This report documents work performed supporting the Department of Energy (DOE) Office of Nuclear Energy (NE) Fuel Cycle Technologies Used Fuel Disposition Campaign (UFDC) under work breakdown structure element 1.02.08.10, ST Analysis. In particular, this report fulfills the M4 milestone M4FT- 15OR0810036, Quantify effects of power uncertainty on fuel assembly characteristics, within work package FT-15OR081003 ST Analysis-ORNL. This research was also supported by the Consortium for Advanced Simulation of Light Water Reactors (http://www.casl.gov), an Energy Innovation Hub (http://www.energy.gov/hubs) for Modeling and Simulation of Nuclear Reactors under U.S. Department of Energy Contract No. DE-AC05-00OR22725. The discharge rod internal pressure (RIP) and cladding hoop stress (CHS) distributions are quantified for Watts Bar Nuclear Unit 1 (WBN1) fuel rods by modeling core cycle design data, operation data (including modeling significant trips and downpowers), and as-built fuel enrichments and densities of each fuel rod in FRAPCON-3.5. A methodology is developed which tracks inter-cycle assembly movements and assembly batch fabrication information to build individual FRAPCON inputs for each evaluated WBN1 fuel rod. An alternate model for the amount of helium released from the zirconium diboride (ZrB2) integral fuel burnable absorber (IFBA) layer is derived and applied to FRAPCON output data to quantify the RIP and CHS for these types of fuel rods. SCALE/Polaris is used to quantify fuel rod-specific spectral quantities and the amount of gaseous fission products produced in the fuel for use in FRAPCON inputs. Fuel rods with ZrB2 IFBA layers (i.e., IFBA rods) are determined to have RIP predictions that are elevated when compared to fuel rods without IFBA layers (i.e., standard rods), despite the fact that IFBA rods often have reduced fill pressures and annular fuel pellets.
The primary contributor to elevated RIP predictions at burnups less than and greater than 30 GWd/MTU is determined to be the total fuel rod void volume and the amount of released fission gas in the fuel rod, respectively. Cumulative distribution functions (CDFs) are prepared from the distribution of RIP and CHS predictions for all standard and IFBA rods. The provided CDFs allow for the determination of the portion of WBN1 fuel rods that exceed a specified RIP or CHS limit. Results are separated into IFBA and standard rods so that the two groups may be analyzed individually. FRAPCON results are provided in sufficient detail to enable the recalculation of the RIP while considering any desired plenum gas temperature, total void volume, or total amount of gas present in the void volume. A method to predict the CHS from a determined or assumed RIP is also proposed, which is based on the approximately linear relationship between the CHS and the RIP. Finally, improvements to the computational methodology of FRAPCON are proposed.
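The FRAPCON outputs themselves are not reproduced here, so the sketch below uses synthetic per-rod values to show the two post-processing steps described: building an empirical CDF of RIP to read off exceedance fractions, and fitting the approximately linear CHS-RIP relationship.

```python
import numpy as np

# Hypothetical per-rod RIP predictions (MPa) and CHS values; real numbers
# would come from the FRAPCON runs. The linear coefficients are invented.
rng = np.random.default_rng(4)
rip = rng.normal(6.0, 1.0, 1000)
chs = 12.0 * rip - 30.0 + rng.normal(0, 1.0, 1000)

# Empirical CDF: fraction of rods at or below a given RIP
rip_sorted = np.sort(rip)
cdf = np.arange(1, rip_sorted.size + 1) / rip_sorted.size

# Fraction of rods exceeding a specified limit, read from the CDF
limit = 8.0
frac_exceeding = 1.0 - np.searchsorted(rip_sorted, limit,
                                       side='right') / rip.size

# Linear model CHS = a*RIP + b, per the approximately linear relationship
a, b = np.polyfit(rip, chs, 1)
chs_at_limit = a * limit + b
```

Separating `rip` into IFBA and standard subsets before building the CDFs reproduces the report's per-group analysis.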
A distributed analysis of Human impact on global sediment dynamics
NASA Astrophysics Data System (ADS)
Cohen, S.; Kettner, A.; Syvitski, J. P.
2012-12-01
Understanding riverine sediment dynamics is an important undertaking both for socially relevant issues such as agriculture, water security and infrastructure management, and for scientific analysis of landscapes, river ecology, oceanography and other disciplines. Providing good quantitative and predictive tools is therefore timely, particularly in light of predicted climate and land-use changes. Ever-increasing human activity during the Anthropocene has affected sediment dynamics in two major ways: (1) an increase in hillslope erosion due to agriculture, deforestation and landscape engineering, and (2) trapping of sediment in dams and other man-made reservoirs. The intensity and dynamics of these man-made factors vary widely across the globe and in time and are therefore hard to predict. Using sophisticated numerical models is therefore warranted. Here we use a distributed global riverine sediment flux and water discharge model (WBMsed) to compare a pristine (without human input) and a disturbed (with human input) simulation. Using these 50-year simulations, we will show and discuss the complex spatial and temporal patterns of human effects on riverine sediment flux and water discharge.
A distributed model of solid waste anaerobic digestion: sensitivity analysis.
Vavilin, V A; Rytov, S V; Pavlostathis, S G; Jokela, J; Rintala, J
2003-01-01
A distributed model of anaerobic digestion of solid waste was developed to describe the balance between the rates of polymer hydrolysis and methanogenesis during the anaerobic conversion of rich and lean wastes in batch and continuous-flow reactors. Waste, volatile fatty acids (VFAs), methanogenic biomass and sodium concentrations are the model variables. Diffusion and advection of VFAs inhibiting both polymer hydrolysis and methanogenesis were considered. A sensitivity analysis by changing the key model parameter values was carried out. The model simulations showed that the effective distance between the areas of hydrolysis/acidogenesis and methanogenesis is very important. An initial spatial separation of rich waste and inoculum enhances the methane production and waste degradation at high waste loading if relatively low VFA diffusion into the methanogenic area is taking place. When both hydrolysis and methanogenesis are strongly inhibited by high levels of VFA, fluctuations in biomass concentration are thought to be responsible for initiating the expansion of methanogenic area over the reactor space. PMID:14531433
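The model equations are not given in the abstract; the sketch below only illustrates its core ingredient, diffusion of VFA out of the waste zone with first-order methanogenic consumption, using assumed round-number rate constants rather than the paper's values.

```python
import numpy as np

# One spatial dimension, explicit finite differences. D and k are assumed
# illustrative values, not calibrated model parameters.
length, nx, nt = 0.1, 51, 2000      # reactor length (m), grid points, steps
dx = length / (nx - 1)
dt = 10.0                           # time step (s)
D = 1e-9                            # VFA diffusion coefficient (m^2/s)
k = 1e-5                            # first-order consumption rate (1/s)

c = np.zeros(nx)
c[:10] = 5.0                        # VFA initially confined to the waste zone

for _ in range(nt):
    lap = np.zeros(nx)
    lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
    c = c + dt * (D * lap - k * c)  # diffusion + methanogenic consumption
    c[0], c[-1] = c[1], c[-2]       # no-flux boundaries
```

The spatial-separation effect discussed in the paper corresponds to how slowly the VFA front reaches the methanogenic region for small D relative to the consumption timescale 1/k.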
Fourier analysis of polar cap electric field and current distributions
NASA Technical Reports Server (NTRS)
Barbosa, D. D.
1984-01-01
A theoretical study of high-latitude electric fields and currents, using analytic Fourier analysis methods, is conducted. A two-dimensional planar model of the ionosphere with an enhanced conductivity auroral belt and field-aligned currents at the edges is employed. Two separate topics are treated. A field-aligned current element near the cusp region of the polar cap is included to investigate the modifications to the convection pattern by the east-west component of the interplanetary magnetic field. It is shown that a sizable one-cell structure is induced near the cusp which diverts equipotential contours to the dawnside or duskside, depending on the sign of the cusp current. This produces characteristic dawn-dusk asymmetries to the electric field that have been previously observed over the polar cap. The second topic is concerned with the electric field configuration obtained in the limit of perfect shielding, where the field is totally excluded equatorward of the auroral oval. When realistic field-aligned current distributions are used, the result is to produce severely distorted, crescent-shaped equipotential contours over the cap. Exact, analytic formulae applicable to this case are also provided.
Analysis of gamma-ray burst duration distribution using mixtures of skewed distributions
NASA Astrophysics Data System (ADS)
Tarnopolski, M.
2016-02-01
Two classes of GRBs have been confidently identified thus far and are ascribed to different physical scenarios: NS-NS or NS-BH mergers for short GRBs, and collapse of massive stars for long GRBs. A third class, intermediate in duration, was suggested to be present in previous catalogs, such as BATSE and Swift, based on statistical tests regarding a mixture of two or three log-normal distributions of T90. However, this may not be an adequate model. This paper investigates whether the distributions of log T90 from BATSE, Swift, and Fermi are described better by a mixture of skewed distributions than by standard Gaussians. Mixtures of standard normal, skew-normal, sinh-arcsinh and alpha-skew-normal distributions are fitted using a maximum likelihood method, and the preferred model is chosen based on the Akaike information criterion. It is found that mixtures of two skew-normal or two sinh-arcsinh distributions are more likely to describe the observed duration distribution of Fermi than a mixture of three standard Gaussians, and that mixtures of two sinh-arcsinh or two skew-normal distributions compete with the conventional three-Gaussian model in the case of BATSE and Swift. Based on statistical reasoning, it is shown that other phenomenological models may describe the observed Fermi, BATSE, and Swift duration distributions at least as well as a mixture of standard normal distributions, and the existence of a third (intermediate) class of GRBs in Fermi data is rejected.
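The model-selection machinery described here rests on two ingredients: a skewed density (the skew-normal, in the simplest case) and the Akaike information criterion. A minimal sketch of both, not the paper's code; the numerical likelihood maximization itself is omitted.

```python
import math

def skew_normal_pdf(x, loc, scale, alpha):
    """Skew-normal density: (2/scale) * phi(z) * Phi(alpha * z),
    with z = (x - loc) / scale; alpha = 0 recovers the normal pdf."""
    z = (x - loc) / scale
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(alpha * z / math.sqrt(2.0)))
    return 2.0 * phi * Phi / scale

def aic(log_likelihood, n_params):
    """Akaike information criterion; the model with the lower AIC wins."""
    return 2.0 * n_params - 2.0 * log_likelihood

# A two-component skew-normal mixture has 7 free parameters
# (2 x (loc, scale, alpha) plus 1 mixing weight), versus 8 for
# three Gaussians (3 x (loc, scale) plus 2 weights), so it can win
# on AIC even at a slightly lower maximized likelihood.
```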
Particle shape analysis of volcanic clast samples with the Matlab tool MORPHEO
NASA Astrophysics Data System (ADS)
Charpentier, Isabelle; Sarocchi, Damiano; Rodriguez Sedano, Luis Angel
2013-02-01
This paper presents a modular Matlab tool, namely MORPHEO, devoted to the study of particle morphology by Fourier analysis. A benchmark made of four sample images with different features (digitized coins, a pebble chart, gears, digitized volcanic clasts) is then proposed to assess the abilities of the software. Attention is brought to the Weibull distribution introduced to enhance fine variations of particle morphology. Finally, as an example, samples pertaining to a lahar deposit located in La Lumbre ravine (Colima Volcano, Mexico) are analysed. MORPHEO and the benchmark are freely available for research purposes.
Frequency distribution histograms for the rapid analysis of data
NASA Technical Reports Server (NTRS)
Burke, P. V.; Bullen, B. L.; Poff, K. L.
1988-01-01
The mean and standard error are good representations for the response of a population to an experimental parameter and are frequently used for this purpose. Frequency distribution histograms show, in addition, responses of individuals in the population. Both the statistics and a visual display of the distribution of the responses can be obtained easily using a microcomputer and available programs. The type of distribution shown by the histogram may suggest different mechanisms to be tested.
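The microcomputer workflow the abstract describes (mean, standard error, plus a frequency-distribution histogram of the raw responses) is a few lines in any modern language. A sketch with invented response data; note how the histogram counts expose an individual outlier that the mean and standard error alone would hide.

```python
import statistics

def summarize(data, bins=5):
    """Mean, standard error of the mean, and a frequency-distribution
    histogram (counts per equal-width bin) for a sample of responses."""
    mean = statistics.fmean(data)
    sem = statistics.stdev(data) / len(data) ** 0.5
    lo, hi = min(data), max(data)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for x in data:
        counts[min(int((x - lo) / width), bins - 1)] += 1
    return mean, sem, counts

data = [4, 5, 5, 6, 6, 6, 7, 7, 8, 14]   # invented responses; note the 14
mean, sem, counts = summarize(data)
```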
CRACK GROWTH ANALYSIS OF SOLID OXIDE FUEL CELL ELECTROLYTES
S. Bandopadhyay; N. Nagabhushana
2003-10-01
Defects and flaws control the structural and functional properties of ceramics. In determining the reliability and lifetime of ceramic structures, it is very important to quantify the crack growth behavior of the ceramics. In addition, because of the high variability of the strength and the relatively low toughness of ceramics, a statistical design approach is necessary. The statistical nature of the strength of ceramics is currently well recognized, and is usually accounted for by utilizing Weibull or similar statistical distributions. Design tools such as CARES, using a combination of strength measurements, stress analysis, and statistics, are available and reasonably well developed. These design codes also incorporate material data such as elastic constants as well as flaw distributions and time-dependent properties. The fast-fracture reliability of ceramics is often different from their time-dependent reliability. Further confounding the design complexity, the time-dependent reliability varies with the environment/temperature/stress combination. Therefore, it becomes important to be able to accurately determine the behavior of ceramics under simulated application conditions to provide a better prediction of the lifetime and reliability of a given component. In the present study, yttria-stabilized zirconia (YSZ) of 9.6 mol% yttria composition was procured in the form of tubes of length 100 mm. The composition is of interest for tubular electrolytes for Solid Oxide Fuel Cells. Rings cut from the tubes were characterized for microstructure, phase stability, mechanical strength (Weibull modulus) and fracture mechanisms. The strength at the operating temperature of SOFCs (1000 C) decreased to 95 MPa, compared to a room-temperature strength of 230 MPa; however, the Weibull modulus remained relatively unchanged. A slow crack growth (SCG) parameter of n = 17, evaluated at room temperature in air, was representative of well-studied brittle materials.
Based on the results, further work was planned to evaluate the strength degradation, modulus and failure in more representative environment of the SOFCs.
Analysis Model for Domestic Hot Water Distribution Systems: Preprint
Maguire, J.; Krarti, M.; Fang, X.
2011-11-01
A thermal model was developed to estimate the energy losses from prototypical domestic hot water (DHW) distribution systems for homes. The developed model, using the TRNSYS simulation software, allows researchers and designers to better evaluate the performance of hot water distribution systems in homes. Modeling results were compared with past experimental study results and showed good agreement.
Bivariate extreme value distributions
NASA Technical Reports Server (NTRS)
Elshamy, M.
1992-01-01
In certain engineering applications, such as those occurring in the analyses of ascent structural loads for the Space Transportation System (STS), some of the load variables have a lower bound of zero. Thus, the need for practical models of bivariate extreme value probability distribution functions with lower limits was identified. We discuss the Gumbel models and present practical forms of bivariate extreme probability distributions of Weibull and Frechet types with two parameters. Bivariate extreme value probability distribution functions can be expressed in terms of the marginal extremal distributions and a 'dependence' function subject to certain analytical conditions. Properties of such bivariate extreme distributions, sums and differences of paired extremals, and the corresponding forms of conditional distributions are discussed. Practical estimation techniques are also given.
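The dependence-function construction mentioned here has a standard closed form in the logistic (Gumbel-Hougaard) case: F(x, y) = exp(-[(-ln F1(x))^m + (-ln F2(y))^m]^(1/m)), where m = 1 gives independence and large m approaches total dependence. A sketch with a Weibull-type marginal; the parameters are illustrative and this is one standard construction consistent with the abstract, not necessarily the report's exact form.

```python
import math

def weibull_cdf(x, shape, scale):
    """Weibull marginal, defined for x > 0 (lower bound of zero)."""
    return 1.0 - math.exp(-((x / scale) ** shape)) if x > 0 else 0.0

def bivariate_logistic(f1, f2, m):
    """Bivariate extreme-value CDF under the logistic (Gumbel-Hougaard)
    dependence function; m = 1 is independence (product of marginals),
    m -> infinity approaches total dependence min(f1, f2)."""
    return math.exp(-(((-math.log(f1)) ** m + (-math.log(f2)) ** m) ** (1.0 / m)))

u = weibull_cdf(2.0, shape=1.5, scale=1.0)   # illustrative marginal values
v = weibull_cdf(1.0, shape=2.0, scale=1.5)
joint = bivariate_logistic(u, v, m=2.0)
```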
Application of extreme learning machine for estimation of wind speed distribution
NASA Astrophysics Data System (ADS)
Shamshirband, Shahaboddin; Mohammadi, Kasra; Tong, Chong Wen; Petkovi?, Dalibor; Porcu, Emilio; Mostafaeipour, Ali; Ch, Sudheer; Sedaghat, Ahmad
2015-06-01
The knowledge of the probabilistic wind speed distribution is of particular significance in reliable evaluation of the wind energy potential and effective adoption of site-specific wind turbines. Among all proposed probability density functions, the two-parameter Weibull function has been extensively endorsed and utilized to model wind speeds and express wind speed distributions in various locations. In this research work, an extreme learning machine (ELM) is employed to compute the shape (k) and scale (c) factors of the Weibull distribution function. The developed ELM model is trained and tested based upon two widely used methods for estimating the k and c parameters. The efficiency and accuracy of ELM are compared against support vector machine, artificial neural network and genetic programming for estimating the same Weibull parameters. The results reveal that the ELM approach attains higher precision in estimating both Weibull parameters than the other methods evaluated. Mean absolute percentage error, mean absolute bias error and root mean square error for k are 8.4600 %, 0.1783 and 0.2371, while for c they are 0.2143 %, 0.0118 and 0.0192 m/s, respectively. In conclusion, the application of ELM is particularly promising as an alternative method to estimate the Weibull k and c factors.
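The abstract does not name the two estimation methods used for training targets; one common choice for wind work is the empirical (moment) method, which maps the sample mean and standard deviation of wind speed directly to the Weibull shape k and scale c. A sketch under that assumption:

```python
import math

def weibull_from_moments(mean, std):
    """Empirical (moment) method for wind speed data:
    k = (std / mean) ** -1.086,  c = mean / Gamma(1 + 1/k).
    A common approximation, valid for typical wind regimes
    (roughly 1 <= k <= 10); not necessarily the paper's method."""
    k = (std / mean) ** -1.086
    c = mean / math.gamma(1.0 + 1.0 / k)
    return k, c

# illustrative monthly wind statistics in m/s (invented numbers)
k, c = weibull_from_moments(mean=6.0, std=2.0)
```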
Distributions of Autocorrelated First-Order Kinetic Outcomes: Illness Severity
Englehardt, James D.
2015-01-01
Many complex systems produce outcomes having recurring, power law-like distributions over wide ranges. However, the form necessarily breaks down at extremes, whereas the Weibull distribution has been demonstrated over the full observed range. Here the Weibull distribution is derived as the asymptotic distribution of generalized first-order kinetic processes, with convergence driven by autocorrelation, and entropy maximization subject to finite positive mean, of the incremental compounding rates. Process increments represent multiplicative causes. In particular, illness severities are modeled as such, occurring in proportion to products of, e.g., chronic toxicant fractions passed by organs along a pathway, or rates of interacting oncogenic mutations. The Weibull form is also argued theoretically and by simulation to be robust to the onset of saturation kinetics. The Weibull exponential parameter is shown to indicate the number and widths of the first-order compounding increments, the extent of rate autocorrelation, and the degree to which process increments are exponentially distributed. In contrast with the Gaussian result in linear independent systems, the form is driven not by independence and multiplicity of process increments, but by increment autocorrelation and entropy. In some physical systems the form may be attracting, due to multiplicative evolution of outcome magnitudes towards extreme values potentially much larger and smaller than control mechanisms can contain. The Weibull distribution is demonstrated in preference to the lognormal and Pareto I for illness severities versus (a) toxicokinetic models, (b) biologically-based network models, (c) scholastic and psychological test score data for children with prenatal mercury exposure, and (d) time-to-tumor data of the ED01 study. PMID:26061263
Determination analysis of energy conservation standards for distribution transformers
Barnes, P.R.; Van Dyke, J.W.; McConnell, B.W.; Das, S.
1996-07-01
This report contains information for US DOE to use in making a determination on proposing energy conservation standards for distribution transformers as required by the Energy Policy Act of 1992. Potential for saving energy with more efficient liquid-immersed and dry-type distribution transformers could be significant because these transformers account for an estimated 140 billion kWh of the annual energy lost in the delivery of electricity. Objective was to determine whether energy conservation standards for distribution transformers would have the potential for significant energy savings, be technically feasible, and be economically justified from a national perspective. It was found that energy conservation for distribution transformers would be technically and economically feasible. Based on the energy conservation options analyzed, 3.6-13.7 quads of energy could be saved from 2000 to 2030.
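The scale of the opportunity can be sanity-checked with unit arithmetic: converting the quoted 140 billion kWh of annual distribution-transformer losses into quads (1 quad = 10^15 Btu, about 2.931e11 kWh) and accumulating over 2000-2030 gives an upper bound consistent with the 3.6-13.7 quad savings estimate, since only a fraction of the losses is recoverable.

```python
KWH_PER_QUAD = 2.931e11          # 1 quad = 1e15 Btu / ~3412 Btu per kWh
annual_loss_kwh = 140e9          # quoted annual transformer losses
annual_loss_quads = annual_loss_kwh / KWH_PER_QUAD   # ~0.48 quad per year
total_loss_quads = annual_loss_quads * 31            # years 2000 through 2030
```

Total losses over the period come to roughly 14.8 quads, so the quoted 3.6-13.7 quad savings range corresponds to recovering roughly a quarter to nearly all of the loss, depending on the conservation option.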
Stability analysis of linear fractional differential system with distributed delays
NASA Astrophysics Data System (ADS)
Veselinova, Magdalena; Kiskinov, Hristo; Zahariev, Andrey
2015-11-01
In the present work we study the Cauchy problem for a linear incommensurate fractional differential system with distributed delays. For the autonomous case with distributed delays, with derivatives in the Riemann-Liouville or Caputo sense, we establish sufficient conditions under which the zero solution is globally asymptotically stable. The established conditions coincide with the conditions which guarantee the same result in the particular case of a system with constant delays, and for the case of a system without delays in the commensurate setting as well.
Nanocrystal size distribution analysis from transmission electron microscopy images
NASA Astrophysics Data System (ADS)
van Sebille, Martijn; van der Maaten, Laurens J. P.; Xie, Ling; Jarolimek, Karol; Santbergen, Rudi; van Swaaij, René A. C. M. M.; Leifer, Klaus; Zeman, Miro
2015-12-01
We propose a method, with minimal bias caused by user input, to quickly detect and measure the nanocrystal size distribution from transmission electron microscopy (TEM) images using a combination of Laplacian of Gaussian filters and non-maximum suppression. We demonstrate the proposed method on bright-field TEM images of an a-SiC:H sample containing embedded silicon nanocrystals with varying magnifications and we compare the accuracy and speed with size distributions obtained by manual measurements, a thresholding method and PEBBLES. Finally, we analytically consider the error induced by slicing nanocrystals during TEM sample preparation on the measured nanocrystal size distribution and formulate an equation to correct this effect. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr06292f
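The core of the detection step, a scale-normalized Laplacian-of-Gaussian filter whose response peaks at the scale of a blob-like particle, can be sketched in one dimension. The paper works on 2D TEM images with non-maximum suppression on top; the signal, kernel sizes and candidate scales below are invented for illustration.

```python
import math

def gauss_smooth(sig, sigma):
    """Gaussian smoothing by direct convolution, clamped at the borders."""
    r = int(4 * sigma)
    k = [math.exp(-i * i / (2.0 * sigma * sigma)) for i in range(-r, r + 1)]
    s = sum(k)
    k = [w / s for w in k]
    n = len(sig)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(k):
            idx = min(max(i + j - r, 0), n - 1)
            acc += w * sig[idx]
        out.append(acc)
    return out

def log_response(sig, sigma):
    """Scale-normalized 1D Laplacian of Gaussian: sigma^2 times the
    second finite difference of the smoothed signal."""
    sm = gauss_smooth(sig, sigma)
    return [sigma * sigma * (sm[i - 1] - 2.0 * sm[i] + sm[i + 1])
            for i in range(1, len(sm) - 1)]

# synthetic "nanocrystal": a Gaussian bump of width sigma_b = 4 at x = 50
sigma_b = 4.0
sig = [math.exp(-(x - 50) ** 2 / (2.0 * sigma_b ** 2)) for x in range(101)]
# |response| at the bump centre peaks near sigma = sqrt(2) * sigma_b in 1D,
# so of the candidate scales below, 6 should win
best = max([2.0, 4.0, 6.0, 8.0], key=lambda s: abs(log_response(sig, s)[49]))
```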
Analysis of Fermi gamma-ray burst duration distribution
NASA Astrophysics Data System (ADS)
Tarnopolski, M.
2015-09-01
Context. Two classes of gamma-ray bursts (GRBs), short and long, have been confidently identified and are usually ascribed to different physical scenarios. A third class, intermediate in T90 duration, has been reported in the datasets of BATSE, Swift, RHESSI, and possibly BeppoSAX. The latest release of >1500 GRBs observed by Fermi gives an opportunity to further investigate the duration distribution. Aims: The aim of this paper is to investigate whether a third class is present in the log T90 distribution, or whether it is described by a bimodal distribution. Methods: A standard χ² fitting of a mixture of Gaussians was applied to 25 histograms with different binnings. Results: Different binnings give various values of the fitting parameters, as well as different shapes of the fitted curve. Among five statistically significant fits, none is trimodal. Conclusions: Locations of the Gaussian components are in agreement with previous works. However, a trimodal distribution, understood in the sense of having three distinct peaks, is not found for any binning. It is concluded that the duration distribution in the Fermi data is well described by a mixture of three log-normal distributions, but it is intrinsically bimodal, hence no third class is present in the T90 data of Fermi. It is suggested that the log-normal fit may not be an adequate model.
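The χ² fitting procedure compares binned counts of log T90 with the expected counts under a Gaussian mixture. A minimal sketch of the two pieces, with illustrative mixture parameters rather than the paper's fitted values:

```python
import math

def gaussian_mixture_pdf(x, weights, means, stds):
    """Density of a mixture of Gaussians; weights must sum to 1."""
    total = 0.0
    for w, m, s in zip(weights, means, stds):
        z = (x - m) / s
        total += w * math.exp(-0.5 * z * z) / (s * math.sqrt(2.0 * math.pi))
    return total

def chi_square(observed, expected):
    """Pearson chi-square statistic between binned counts."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# bimodal short/long toy mixture on the log T90 axis (invented parameters)
pdf = lambda x: gaussian_mixture_pdf(x, [0.3, 0.7], [-0.5, 1.5], [0.5, 0.45])
# crude Riemann-sum check that the density integrates to ~1
area = sum(pdf(-4.0 + 0.01 * i) * 0.01 for i in range(900))
```

Expected bin counts follow as N times the integral of the pdf over each bin; the sensitivity to binning that the paper reports enters through both the observed counts and those integrals.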
Performance Analysis of Distributed Object-Oriented Applications
NASA Technical Reports Server (NTRS)
Schoeffler, James D.
1998-01-01
The purpose of this research was to evaluate the efficiency of a distributed simulation architecture which creates individual modules which are made self-scheduling through the use of a message-based communication system used for requesting input data from another module which is the source of that data. To make the architecture as general as possible, the message-based communication architecture was implemented using standard remote object architectures (Common Object Request Broker Architecture (CORBA) and/or Distributed Component Object Model (DCOM)). A series of experiments were run in which different systems are distributed in a variety of ways across multiple computers and the performance evaluated. The experiments were duplicated in each case so that the overhead due to message communication and data transmission can be separated from the time required to actually perform the computational update of a module each iteration. The software used to distribute the modules across multiple computers was developed in the first year of the current grant and was modified considerably to add a message-based communication scheme supported by the DCOM distributed object architecture. The resulting performance was analyzed using a model created during the first year of this grant which predicts the overhead due to CORBA and DCOM remote procedure calls and includes the effects of data passed to and from the remote objects. A report covering the distributed simulation software and the results of the performance experiments has been submitted separately. The above report also discusses possible future work to apply the methodology to dynamically distribute the simulation modules so as to minimize overall computation time.
Comparative hypsometric analysis of both Earth and Venus topographic distributions
NASA Technical Reports Server (NTRS)
Rosenblatt, P.; Pinet, P. C.; Thouvenot, E.
1993-01-01
Previous studies have compared the global topographic distribution of both planets by means of differential hypsometric curves. For the purpose of comparison, the terrestrial oceanic load was removed, and a reference base level was required. It was chosen on the basis of geometric considerations and reflected the geometric shape of the mean dynamical equilibrium figure of the planetary surface in both cases. This reference level corresponds to the well-known sea level for the Earth; for Venus, given its slow rate of rotation, a sphere of radius close to the mean, median and modal values of the planetary radii distribution was considered, and the radius value of 6051 km arbitrarily taken. These studies were based on the low-resolution (100 x 100 sq km) coverage of Venus obtained by the Pioneer Venus altimeter and on the 1 deg x 1 deg terrestrial topography. But, apart from revealing the distinct contrast existing between the Earth's bimodal and Venus' strongly unimodal topographic distribution, the choice of such a reference level is inadequate and even misleading for the comparative geophysical understanding of the planetary relief distribution. The present work reinvestigates the comparison between Earth and Venus hypsometric distributions on the basis of the high-resolution data provided, on one hand, by the recent Magellan global topographic coverage of Venus' surface, and on the other hand, by the detailed NCAR 5' x 5' grid topographic database currently available for the Earth's surface.
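A hypsometric curve of the kind compared here is just the cumulative fraction of surface area lying at or above each elevation level; with a gridded topography it reduces to counting cells (real grids also need cos-latitude area weighting, omitted here). A minimal sketch on invented, Earth-like bimodal toy data: ocean floor near -4 km, continents near +0.5 km.

```python
def hypsometric_curve(elevations, levels):
    """Fraction of (equal-area) cells at or above each elevation level."""
    n = len(elevations)
    return [sum(1 for e in elevations if e >= z) / n for z in levels]

# toy equal-area elevation samples in km (invented, Earth-like bimodal)
earth_like = [-4.2, -4.0, -3.8, -4.1, -3.9, -4.0, -4.3, 0.3, 0.5, 0.8]
curve = hypsometric_curve(earth_like, levels=[-5.0, -2.0, 0.0, 1.0])
```

The flat step in the curve between -2 and 0 km is the signature of bimodality; a Venus-like unimodal distribution would descend smoothly instead.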
Income distribution dependence of poverty measure: A theoretical analysis
NASA Astrophysics Data System (ADS)
Chattopadhyay, Amit K.; Mallick, Sushanta K.
2007-04-01
Using a modified deprivation (or poverty) function, in this paper we theoretically study the changes in poverty with respect to the 'global' mean and variance of the income distribution using Indian survey data. We show that when income obeys a log-normal distribution, a rising mean income generally indicates a reduction in poverty, while an increase in the variance of the income distribution increases poverty. This optimistic view for a developing economy, however, is no longer tenable once the poverty index is found to follow a Pareto distribution. Here, although a rising mean income indicates a reduction in poverty, due to the presence of an inflexion point in the poverty function there is a critical value of the variance below which poverty decreases with increasing variance, while beyond this value poverty undergoes a steep increase followed by a decrease with respect to higher variance. Identifying this inflexion point as the poverty line, we show that the Pareto poverty function satisfies all three standard axioms of a poverty index [N.C. Kakwani, Econometrica 43 (1980) 437; A.K. Sen, Econometrica 44 (1976) 219], whereas the log-normal distribution falls short of this requisite. Following these results, we make quantitative predictions to correlate a developing with a developed economy.
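The log-normal part of the argument is easy to make concrete: with log-income normal with location mu and dispersion sigma, the poverty headcount below a line z is Phi((ln z - mu) / sigma), which falls as mu rises and, for z below the median, rises with sigma. A sketch with invented numbers, holding the log-location fixed while varying the dispersion (this is the headcount ratio only, not the paper's modified deprivation function):

```python
import math

def headcount(z, mu, sigma):
    """P(income < z) when ln(income) ~ Normal(mu, sigma^2)."""
    return 0.5 * (1.0 + math.erf((math.log(z) - mu) / (sigma * math.sqrt(2.0))))

z = 1.0                                   # poverty line, illustrative units
base   = headcount(z, mu=0.5, sigma=0.8)
richer = headcount(z, mu=0.8, sigma=0.8)  # higher log-location: poverty falls
wider  = headcount(z, mu=0.5, sigma=1.1)  # higher dispersion: poverty rises
```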
Nanocrystal size distribution analysis from transmission electron microscopy images.
van Sebille, Martijn; van der Maaten, Laurens J P; Xie, Ling; Jarolimek, Karol; Santbergen, Rudi; van Swaaij, René A C M M; Leifer, Klaus; Zeman, Miro
2015-12-28
We propose a method, with minimal bias caused by user input, to quickly detect and measure the nanocrystal size distribution from transmission electron microscopy (TEM) images using a combination of Laplacian of Gaussian filters and non-maximum suppression. We demonstrate the proposed method on bright-field TEM images of an a-SiC:H sample containing embedded silicon nanocrystals with varying magnifications and we compare the accuracy and speed with size distributions obtained by manual measurements, a thresholding method and PEBBLES. Finally, we analytically consider the error induced by slicing nanocrystals during TEM sample preparation on the measured nanocrystal size distribution and formulate an equation to correct this effect. PMID:26593390
Analysis and machine mapping of the distribution of band recoveries
Cowardin, L.M.
1977-01-01
A method of calculating distance and bearing from banding site to recovery location, based on the solution of a spherical triangle, is presented. X and Y distances on an ordinate grid were applied to computer plotting of recoveries on a map. The advantages and disadvantages of tables of recoveries by State or degree block, axial lines, and distance of recovery from banding site for presentation and comparison of the spatial distribution of band recoveries are discussed. A special web-shaped partition formed by concentric circles about the point of banding and great circles at 30-degree intervals through the point of banding has certain advantages over other methods. Comparison of distributions by means of a χ² contingency test is illustrated. The statistic V = χ²/N can be used as a measure of difference between two distributions of band recoveries, and its possible use as a measure of the degree of migrational homing is illustrated.
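The spherical-triangle solution for distance and bearing has a standard modern form (haversine distance plus initial bearing); the paper's exact formulation may differ, and the coordinates below are hypothetical.

```python
import math

R_EARTH_KM = 6371.0   # mean Earth radius

def distance_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (haversine form) and initial bearing, in
    degrees clockwise from north, from a banding site to a recovery
    location. Inputs are in decimal degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin((p2 - p1) / 2.0) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2.0) ** 2)
    dist = 2.0 * R_EARTH_KM * math.asin(math.sqrt(a))
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = math.degrees(math.atan2(y, x)) % 360.0
    return dist, bearing

# hypothetical banding site and recovery location
d, b = distance_bearing(45.0, -100.0, 30.0, -90.0)
```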
A fractal approach to dynamic inference and distribution analysis
van Rooij, Marieke M. J. W.; Nash, Bertha A.; Rajaraman, Srinivasan; Holden, John G.
2013-01-01
Event-distributions inform scientists about the variability and dispersion of repeated measurements. This dispersion can be understood from a complex systems perspective, and quantified in terms of fractal geometry. The key premise is that a distribution's shape reveals information about the governing dynamics of the system that gave rise to the distribution. Two categories of characteristic dynamics are distinguished: additive systems governed by component-dominant dynamics and multiplicative or interdependent systems governed by interaction-dominant dynamics. A logic by which systems governed by interaction-dominant dynamics are expected to yield mixtures of lognormal and inverse power-law samples is discussed. These mixtures are described by a so-called cocktail model of response times derived from human cognitive performances. The overarching goals of this article are twofold: First, to offer readers an introduction to this theoretical perspective and second, to offer an overview of the related statistical methods. PMID:23372552
Inductance and Current Distribution Analysis of a Prototype HTS Cable
NASA Astrophysics Data System (ADS)
Zhu, Jiahui; Zhang, Zhenyu; Zhang, Huiming; Zhang, Min; Qiu, Ming; Yuan, Weijia
2014-05-01
This project is partly supported by NSFC Grant 51207146, the RAEng Research Exchange Scheme of the UK, and EPSRC EP/K01496X/1. Superconducting cable is an emerging technology for electric power transmission. Since high-capacity HTS transmission cables are manufactured using a multi-layer conductor structure, the current distribution among the layers would be nonuniform without proper optimization and hence lead to large transmission losses. Therefore a novel optimization method has been developed to achieve evenly distributed current among the different layers, considering the HTS cable structure parameters: radius, pitch angle and winding direction, which determine the self and mutual inductance. A prototype HTS cable has been built using BSCCO tape and tested to validate the optimal design method. A superconductor characterization system has been developed using LabVIEW and an NI data acquisition system. It can be used to measure the AC loss and current distribution of short HTS cables.
NASA Astrophysics Data System (ADS)
Zamani, A. R.; Badri, M. A.
2015-04-01
Statistical analysis was performed on simultaneous wave and wind data recorded by a discus-shaped wave buoy. The area is located in the southern Caspian Sea near the Anzali Port. Recorded wave data were obtained through directional spectrum wave analysis, and recorded wind direction and wind speed were obtained through the related time series. For 12 months of measurements (May 25, 2007 to May 25, 2008), statistical calculations were done to specify the value of nonlinear auto-correlation of wave and wind using the probability distribution function of wave characteristics and statistical analysis over various time periods. The paper also presents and analyzes the amount of wave energy for the area mentioned on the basis of the available database. Analyses allowed a suitable comparison between the amounts of wave energy in different seasons, so the best period for the largest amount of wave energy could be identified. Results showed that in the research period, the mean wave and wind auto-correlation times were about three hours. Among the probability distribution functions, i.e. Weibull, Normal, Lognormal and Rayleigh, the Weibull had the best consistency with the experimental distribution function, shown in different diagrams for each season. Results also showed that the mean wave energy in the research period was about 49.88 kW/m and the maximum density of wave energy was found in February and March, 2010.
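The quoted mean of about 49.88 kW/m is a wave energy flux per metre of wave crest. For deep-water irregular seas the standard estimate is P = rho g^2 Hs^2 Te / (64 pi), roughly 0.49 Hs^2 Te kW/m. The paper's exact method is not stated; this is a sketch of the conventional formula with invented sea-state inputs.

```python
import math

RHO = 1025.0   # sea water density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def wave_power_kw_per_m(hs, te):
    """Deep-water wave energy flux per metre of crest, in kW/m, from
    significant wave height hs (m) and energy period te (s)."""
    return RHO * G * G * hs * hs * te / (64.0 * math.pi) / 1000.0

# illustrative sea state: Hs = 2 m, Te = 8 s
p = wave_power_kw_per_m(hs=2.0, te=8.0)
```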
ERIC Educational Resources Information Center
Hayton, James C.
2009-01-01
In the article "Exploring the Sensitivity of Horn's Parallel Analysis to the Distributional Form of Random Data," Dinno (this issue) provides strong evidence that the distribution of random data does not have a significant influence on the outcome of the analysis. Hayton appreciates the thorough approach to evaluating this assumption, and agrees…
ERIC Educational Resources Information Center
Campos, Jose Alejandro Gonzalez; Moraga, Paulina Saavedra; Del Pozo, Manuel Freire
2013-01-01
This paper introduces the generalized beta (GB) model as a new modeling tool in the educational assessment area and evaluation analysis, specifically. Unlike normal model, GB model allows us to capture some real characteristics of data and it is an important tool for understanding the phenomenon of learning. This paper develops a contrast with the…
Metagenomic Analysis of Water Distribution System Bacterial Communities
The microbial quality of drinking water is assessed using culture-based methods that are highly selective and that tend to underestimate the densities and diversity of microbial populations inhabiting distribution systems. In order to better understand the effect of different dis...
ERIC Data Base; Pagination Field Frequency Distribution Analysis.
ERIC Educational Resources Information Center
Brandhorst, Wesley T.; Marra, Samuel J.; Price, Douglas S.
A definitive study of the sizes of documents in the ERIC Data Base is reported by the ERIC Processing and Reference Facility. This is a statistical and frequency distribution study encompassing every item in the file whose record contains pagination data in machine readable form. The study provides pagination data that could be used by present and…
Social Distribution, Ghettoization, and Educational Triage: A Marxist Analysis.
ERIC Educational Resources Information Center
Cameron, Jeanne
2000-01-01
Discusses how many urban students are written off as unworthy of scant educational resources, using Weber and Marx to discuss how educational triage is best understood theoretically, exploring how broader processes of social distribution and triage link up with daily practices and policies in urban classrooms, and highlighting the need for a…
High Resolution PV Power Modeling for Distribution Circuit Analysis
Norris, B. L.; Dise, J. H.
2013-09-01
NREL has contracted with Clean Power Research to provide 1-minute simulation datasets of PV systems located at three high penetration distribution feeders in the service territory of Southern California Edison (SCE): Porterville, Palmdale, and Fontana, California. The resulting PV simulations will be used to separately model the electrical circuits to determine the impacts of PV on circuit operations.
THE EPANET PROGRAMMER'S TOOLKIT FOR ANALYSIS OF WATER DISTRIBUTION SYSTEMS
The EPANET Programmer's Toolkit is a collection of functions that helps simplify computer programming of water distribution network analyses. The functions can be used to read in a pipe network description file, modify selected component properties, run multiple hydraulic and wa...
Statistical analysis and modelling of small satellite reliability
NASA Astrophysics Data System (ADS)
Guo, Jian; Monas, Liora; Gill, Eberhard
2014-05-01
This paper attempts to characterize failure behaviour of small satellites through statistical analysis of actual in-orbit failures. A unique Small Satellite Anomalies Database comprising empirical failure data of 222 small satellites has been developed. A nonparametric analysis of the failure data has been implemented by means of a Kaplan-Meier estimation. An innovative modelling method, i.e. Bayesian theory in combination with Markov Chain Monte Carlo (MCMC) simulations, has been proposed to model the reliability of small satellites. An extensive parametric analysis using the Bayesian/MCMC method has been performed to fit a Weibull distribution to the data. The influence of several characteristics such as the design lifetime, mass, launch year, mission type and the type of satellite developers on the reliability has been analyzed. The results clearly show the infant mortality of small satellites. Compared with the classical maximum-likelihood estimation methods, the proposed Bayesian/MCMC method results in better fitting Weibull models and is especially suitable for reliability modelling where only very limited failures are observed.
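The nonparametric step of this analysis can be illustrated with a minimal Kaplan-Meier estimator that handles censored lifetimes; the failure times below are invented, not the Small Satellite Anomalies Database.

```python
import numpy as np

def kaplan_meier(times, observed):
    """Return event times and Kaplan-Meier survival estimates S(t).

    observed[i] is 1 for an in-orbit failure, 0 for a censored satellite
    (still operating, or retired for unrelated reasons).
    """
    order = np.argsort(times, kind="stable")
    times, observed = np.asarray(times)[order], np.asarray(observed)[order]
    n = len(times)
    s, event_times, survival = 1.0, [], []
    for i, (t, failed) in enumerate(zip(times, observed)):
        at_risk = n - i              # units still under observation at time t
        if failed:
            s *= 1.0 - 1.0 / at_risk
            event_times.append(t)
            survival.append(s)
    return np.array(event_times), np.array(survival)

# Lifetimes in months; zeros mark censored observations.
t, s = kaplan_meier([2, 3, 3, 5, 8, 9], [1, 1, 0, 1, 0, 1])
```

A Weibull model with shape parameter below 1, as the Bayesian/MCMC fit in the paper suggests, would then reproduce the steep early drop of such a curve (infant mortality).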
Conduits and dike distribution analysis in San Rafael Swell, Utah
NASA Astrophysics Data System (ADS)
Kiyosugi, K.; Connor, C.; Wetmore, P. H.; Ferwerda, B. P.; Germa, A.
2011-12-01
Volcanic fields generally consist of scattered monogenetic volcanoes, such as cinder cones and maars. The temporal and spatial distribution of monogenetic volcanoes and the probability of future activity within volcanic fields are studied with the goals of understanding the origins of these volcano groups and forecasting potential future volcanic hazards. The subsurface magmatic plumbing systems associated with volcanic fields, however, are rarely observed or studied. Therefore, we investigated a highly eroded and exposed magmatic plumbing system in the San Rafael Swell (UT) that consists of dikes, volcano conduits and sills. San Rafael Swell is part of the Colorado Plateau and is located east of the Rocky Mountain seismic belt and the Basin and Range. The overburden thickness at the time of mafic magma intrusion (Pliocene; ca. 4 Ma) into Jurassic sandstone is estimated to be ~800 m based on paleotopographical reconstructions. Based on a geologic map by P. Delaney and colleagues, and new field research, a total of 63 conduits are mapped in this former volcanic field. The conduits each reveal features of the root zone and/or lower diatreme, including rapid dike expansion, peperite, and brecciated intrusive and host rocks. A recrystallized baked zone in the host rock is also observed around many conduits. Most conduits are basaltic or shonkinitic, with thicknesses of >10 m, and are associated with feeder dikes intruded along N-S trending joints in the host rock, whereas two conduits are syenitic, suggesting development from underlying cognate sills. The conduit distribution, analyzed by a kernel function method with elliptical bandwidth, shows an N-S elongated area of higher conduit density regardless of the azimuth of alignments of closely spaced conduits (nearest-neighbor distance <200 m). In addition, dike density was calculated as total dike length per unit area (km/km^2). The conduit and sill distribution is concordant with the area of high dike density.
Notably, the distribution of conduits is non-random with respect to the dike distribution, with greater than 99% confidence on the basis of a Kolmogorov-Smirnov test. On the other hand, the dike density at each conduit location suggests that there is no threshold of dike density for conduit formation. In other words, conduits may develop even from short mapped dikes in areas of low dike density. These results show the effectiveness of studying volcanic vent distributions to infer the size of the magmatic system below volcanic fields, and they highlight the uncertainty of forecasting the location of new monogenetic volcanoes in active fields, which may be associated with a single dike intrusion.
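A simplified version of the vent-density estimation might look like the following, using an ordinary 2-D Gaussian kernel (scipy's `gaussian_kde`) rather than the elliptical-bandwidth kernel of the paper; the conduit coordinates are synthetic.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
# Hypothetical conduit coordinates (km), elongated N-S like the mapped cluster.
x = rng.normal(0.0, 0.5, 63)   # E-W spread
y = rng.normal(0.0, 2.0, 63)   # N-S spread
kde = gaussian_kde(np.vstack([x, y]))

# Density at the cluster centre vs. a point well outside it.
centre = kde([[0.0], [0.0]])[0]
outside = kde([[5.0], [5.0]])[0]
```

Evaluating such a kernel surface on a grid gives the conduit-density map that is then compared against dike density.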
Analysis of phase distribution phenomena in microgravity environments
NASA Technical Reports Server (NTRS)
Lahey, Richard T., Jr.; Bonetto, F.
1994-01-01
The purpose of the research presented in this paper is to demonstrate the ability of multidimensional two-fluid models for bubbly two-phase flow to accurately predict lateral phase distribution phenomena in microgravity environments. If successful, this research should provide NASA with mechanistically-based analytical methods which can be used for multiphase space system design and evaluation, and should be the basis for future shuttle experiments for model verification.
Secure analysis of distributed chemical databases without data integration.
Karr, Alan F; Feng, Jun; Lin, Xiaodong; Sanil, Ashish P; Young, S Stanley; Reiter, Jerome P
2005-01-01
We present a method for performing statistically valid linear regressions on the union of distributed chemical databases that preserves confidentiality of those databases. The method employs secure multi-party computation to share local sufficient statistics necessary to compute least squares estimators of regression coefficients, error variances and other quantities of interest. We illustrate our method with an example containing four companies' rather different databases. PMID:16267693
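The core idea, that pooled least squares needs only each party's local sufficient statistics X'X and X'y, can be verified in a few lines; the secure multi-party sharing itself is omitted and the three "company" databases are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)
beta_true = np.array([1.0, -2.0, 0.5])

def make_db(n):
    """One company's private database: predictors X and response y."""
    X = rng.normal(size=(n, 3))
    y = X @ beta_true + 0.01 * rng.normal(size=n)
    return X, y

dbs = [make_db(n) for n in (40, 60, 80)]

# Each party contributes only its local sufficient statistics.
XtX = sum(X.T @ X for X, _ in dbs)
Xty = sum(X.T @ y for X, y in dbs)
beta_pooled = np.linalg.solve(XtX, Xty)

# Same estimate as regressing on the (never shared) union of the rows.
X_all = np.vstack([X for X, _ in dbs])
y_all = np.concatenate([y for _, y in dbs])
beta_union, *_ = np.linalg.lstsq(X_all, y_all, rcond=None)
```

Because X'X and X'y are sums over parties, secure summation of these small matrices is all the protocol needs to exchange.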
Belcour, Laurent; Pacanowski, Romain; Delahaie, Marion; Laville-Geay, Aude; Eupherte, Laure
2014-12-01
We compare the performance of various analytical retroreflecting bidirectional reflectance distribution function (BRDF) models to assess how accurately they reproduce measured data of retroreflecting materials. We introduce a new parametrization, the back vector parametrization, to analyze retroreflecting data, and we show that this parametrization better preserves the isotropy of the data. Furthermore, we update existing BRDF models to improve the representation of retroreflective data. PMID:25606744
Analysis of magnetic electron lens with secant hyperbolic field distribution
NASA Astrophysics Data System (ADS)
Pany, S. S.; Ahmed, Z.; Dubey, B. P.
2014-12-01
Electron-optical imaging instruments like the Scanning Electron Microscope (SEM) and Transmission Electron Microscope (TEM) use specially designed solenoid electromagnets for focusing of the electron beam. Indicators of the imaging performance of these instruments, like spatial resolution, have a strong correlation with the focal characteristics of the magnetic lenses, which in turn have been shown to be sensitive to the details of the spatial distribution of the axial magnetic field. Owing to the complexity of designing practical lenses, empirical mathematical expressions are important to obtain the desired focal properties. Thus, the degree of accuracy of such models in representing the actual field distribution determines the accuracy of the calculations and, ultimately, the performance of the lens. Historically, the mathematical models proposed by Glaser [1] and Ramberg [2] have been extensively used. In this paper the authors discuss another model, with a secant-hyperbolic magnetic field distribution function, and present a comparison between models, utilizing results from finite element-based field simulations as the reference for evaluating performance.
Periodic analysis of total ozone and its vertical distribution
NASA Technical Reports Server (NTRS)
Wilcox, R. W.; Nastrom, G. D.; Belmont, A. D.
1975-01-01
Both total ozone and vertical distribution ozone data from the period 1957 to 1972 are analyzed. For total ozone, improved monthly zonal means for both hemispheres are computed by weighting individual station monthly means by a factor which compensates for the close grouping of stations in certain latitude bands. Longitudinal variability shows maxima in summer in both hemispheres but, in winter, only in the Northern Hemisphere. The geographical distributions of the long-term mean and of the annual, quasi-biennial and semiannual waves in total ozone over the Northern Hemisphere are presented. The extratropical amplitude of the annual wave is by far the largest of the three, as much as 120 m atm cm over northern Siberia. There is a tendency for all three waves to have maxima at high latitudes. Monthly means of the vertical distribution of ozone determined from 3 to 8 years of ozonesonde data over North America are presented. Number density is highest in the Arctic near 18 km. The region of maximum number density slopes upward toward 10 N, where the long-term mean is 45 x 10^11 molecules/cm^3 near 26 km.
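The wave amplitudes quoted above come from harmonic fits to monthly means; a least-squares version of such an annual-plus-semiannual fit, on a synthetic total-ozone series (m atm cm), might look like:

```python
import numpy as np

t = np.arange(12) / 12.0                       # month midpoints as fraction of year
# Synthetic monthly-mean total ozone: mean 300, annual amplitude 120,
# semiannual amplitude 20 (values chosen for illustration only).
series = 300 + 120 * np.cos(2 * np.pi * (t - 0.2)) + 20 * np.cos(4 * np.pi * t)

# Design matrix: constant term plus annual and semiannual harmonics.
X = np.column_stack([np.ones_like(t),
                     np.cos(2 * np.pi * t), np.sin(2 * np.pi * t),
                     np.cos(4 * np.pi * t), np.sin(4 * np.pi * t)])
coef, *_ = np.linalg.lstsq(X, series, rcond=None)

annual_amp = np.hypot(coef[1], coef[2])        # amplitude of the annual wave
semiannual_amp = np.hypot(coef[3], coef[4])    # amplitude of the semiannual wave
```

The phase of each wave follows from `arctan2` of the same coefficient pairs; a quasi-biennial term simply adds two more columns at the appropriate frequency.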
Statistical analysis of dendritic spine distributions in rat hippocampal cultures
2013-01-01
Background Dendritic spines serve as key computational structures in brain plasticity. Much remains to be learned about their spatial and temporal distribution among neurons. Our aim in this study was to perform exploratory analyses based on the population distributions of dendritic spines with regard to their morphological characteristics and period of growth in dissociated hippocampal neurons. We fit a log-linear model to the contingency table of spine features such as spine type and distance from the soma to first determine which features were important in modeling the spines, as well as the relationships between such features. A multinomial logistic regression was then used to predict the spine types using the features suggested by the log-linear model, along with neighboring spine information. Finally, an important variant of Ripley’s K-function applicable to linear networks was used to study the spatial distribution of spines along dendrites. Results Our study indicated that in the culture system, (i) dendritic spine densities were "completely spatially random", (ii) spine type and distance from the soma were independent quantities, and most importantly, (iii) spines had a tendency to cluster with other spines of the same type. Conclusions Although these results may vary with other systems, our primary contribution is the set of statistical tools for morphological modeling of spines which can be used to assess neuronal cultures following gene manipulation such as RNAi, and to study induced pluripotent stem cells differentiated to neurons. PMID:24088199
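As a one-dimensional analogue of the "completely spatially random" finding, spine positions along a single dendrite can be tested against a uniform null with a Kolmogorov-Smirnov test; the positions and dendrite length below are assumptions, not the paper's linear-network K-function analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
L = 100.0                                # dendrite length, micrometres (assumed)
positions = rng.uniform(0, L, 80)        # synthetic spine locations along the shaft

# Under complete spatial randomness, positions are uniform on [0, L].
stat, pvalue = stats.kstest(positions, stats.uniform(loc=0, scale=L).cdf)
```

A small p-value would indicate clustering or inhibition; the network K-function used in the paper extends this idea to branched dendritic geometries.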
A distributed system for two-dimensional gel analysis.
Monardo, P J; Boutell, T; Garrels, J I; Latter, G I
1994-04-01
The Quest II system is a new two-dimensional (2D) gel analysis software system for the construction and analysis of 2D gel protein databases. A new architectural approach to 2D gel software systems has been utilized. This architecture is based on a tightly coupled client/server model. There are three layers to the system architecture: (i) a database layer consisting of three database servers, (ii) a compute layer consisting of three compute servers and (iii) an extensible user interface layer currently consisting of user interface tools for linearization and merging of scanned images, the segmentation and detection of protein spots on the images, matching, editing, and analysis of gels. The ability to store and retrieve the large volume of spot data inherent in 2D gel analysis while utilizing database technology is demonstrated. PMID:8019861
Flaw strength distributions and statistical parameters for ceramic fibers: The normal distribution
NASA Astrophysics Data System (ADS)
R'Mili, M.; Godin, N.; Lamon, J.
2012-05-01
The present paper investigates large sets of ceramic fibre failure strengths (500 to 1000 data points) produced using tensile tests on tows that contained either 500 or 1000 filaments. The probability density function was determined through acoustic emission monitoring, which allowed detection and counting of filament fractures. The statistical distribution of filament strengths was described using the normal distribution. The Weibull equation was then fitted to this normal distribution for estimation of statistical parameters. A perfect agreement between both distributions was obtained, and quite negligible scatter in statistical parameters was observed, as opposed to the wide variability reported in the literature. It was thus concluded that flaw strengths are distributed normally and that the derived statistical parameters are the true ones. In a second step, the conventional method of estimation of Weibull parameters was applied to these sets of data and then to subsets selected randomly. The influence of other factors involved in the conventional method of determination of statistical parameters is discussed. It is demonstrated that the selection of specimens, the sample size, and the method of construction of so-called Weibull plots are responsible for the variability of statistical parameters.
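The "conventional method" the paper critiques, median-rank plotting positions and a least-squares line in ln-ln coordinates, can be sketched as follows on synthetic strength data (true Weibull modulus 5, characteristic strength 2; both values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
strengths = np.sort(rng.weibull(5.0, 500) * 2.0)   # synthetic strengths, GPa

n = len(strengths)
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)        # median-rank plotting positions

# Weibull plot: ln(-ln(1-F)) is linear in ln(strength) with slope = modulus m.
x = np.log(strengths)
y = np.log(-np.log(1.0 - F))
m, c = np.polyfit(x, y, 1)

scale = np.exp(-c / m)                             # characteristic strength sigma_0
```

Re-running this on random subsets of `strengths` reproduces the sampling scatter in (m, sigma_0) that the paper attributes to specimen selection and sample size.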
Analysis of the Galaxy Distribution using Multiscale Methods
NASA Astrophysics Data System (ADS)
Querre, Philippe; Starck, Jean-Luc; Martinez, Vicent J.
2002-12-01
Galaxies are arranged in interconnected walls and filaments forming a cosmic web encompassing huge, nearly empty, regions between the structures. Many statistical methods have been proposed in the past in order to describe the galaxy distribution and discriminate between the different cosmological models. We present in this paper preliminary results relative to the use of new statistical tools based on the 3D à trous algorithm, the 3D ridgelet transform and the 3D beamlet transform. We show that such multiscale methods produce a new way to measure in a coherent and statistically reliable way the degree of clustering, filamentarity, sheetedness, and voidedness of a dataset.
Complexity analysis of pipeline mapping problems in distributed heterogeneous networks
Lin, Ying; Wu, Qishi; Zhu, Mengxia; Rao, Nageswara S
2009-04-01
Large-scale scientific applications require using various system resources to execute complex computing pipelines in distributed networks to support collaborative research. System resources are typically shared in the Internet or over dedicated connections based on their location, availability, capability, and capacity. Optimizing the network performance of computing pipelines in such distributed environments is critical to the success of these applications. We consider two types of large-scale distributed applications: (1) interactive applications where a single dataset is sequentially processed along a pipeline; and (2) streaming applications where a series of datasets continuously flow through a pipeline. The computing pipelines of these applications consist of a number of modules executed in a linear order in network environments with heterogeneous resources under different constraints. Our goal is to find an efficient mapping scheme that allocates the modules of a pipeline to network nodes for minimum end-to-end delay or maximum frame rate. We formulate the pipeline mappings in distributed environments as optimization problems and categorize them into six classes with different optimization goals and mapping constraints: (1) Minimum End-to-end Delay with No Node Reuse (MEDNNR), (2) Minimum End-to-end Delay with Contiguous Node Reuse (MEDCNR), (3) Minimum End-to-end Delay with Arbitrary Node Reuse (MEDANR), (4) Maximum Frame Rate with No Node Reuse or Share (MFRNNRS), (5) Maximum Frame Rate with Contiguous Node Reuse and Share (MFRCNRS), and (6) Maximum Frame Rate with Arbitrary Node Reuse and Share (MFRANRS). Here, 'contiguous node reuse' means that multiple contiguous modules along the pipeline may run on the same node, and 'arbitrary node reuse' imposes no restriction on node reuse. Note that in interactive applications, a node can be reused but its resource is not shared. We prove that MEDANR is polynomially solvable and the rest are NP-complete.
MEDANR, where either contiguous or noncontiguous modules in the pipeline can be mapped onto the same node, is essentially the Maximum n-hop Shortest Path problem, and can be solved using a dynamic programming method. In MEDNNR and MFRNNRS, any network node can be used only once, which requires selecting the same number of nodes for a one-to-one onto mapping. We show its NP-completeness by reducing from the Hamiltonian Path problem. Node reuse is allowed in MEDCNR, MFRCNRS and MFRANRS, which are similar to the Maximum n-hop Shortest Path problem that considers resource sharing. We prove their NP-completeness by reducing from the Disjoint-Connecting-Path problem and the Widest Path with Linear Capacity Constraints problem, respectively.
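For the polynomially solvable class (MEDANR), a dynamic program over modules and nodes suffices: the best delay for placing the first i modules with module i on node v depends only on the best placements of module i-1. This sketch uses invented compute and link costs, not the paper's network model.

```python
def map_pipeline(compute, link):
    """Minimum end-to-end delay with arbitrary node reuse (MEDANR sketch).

    compute[i][v]: cost of running module i on node v.
    link[u][v]: transfer cost between nodes u and v (0 on the same node).
    """
    n_mod, n_node = len(compute), len(compute[0])
    # best[v] = min delay of modules 0..i with module i placed on node v
    best = list(compute[0])
    for i in range(1, n_mod):
        best = [min(best[u] + link[u][v] for u in range(n_node)) + compute[i][v]
                for v in range(n_node)]
    return min(best)

# Toy instance: 3 modules, 2 nodes.
compute = [[3, 1], [2, 4], [1, 1]]
link = [[0, 2], [2, 0]]
delay = map_pipeline(compute, link)   # optimal end-to-end delay
```

Runtime is O(modules x nodes^2), which is why forbidding reuse (MEDNNR), and thus coupling all placement decisions, is what pushes the problem into NP-completeness.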
Studying bubble-particle interactions by zeta potential distribution analysis.
Wu, Chendi; Wang, Louxiang; Harbottle, David; Masliyah, Jacob; Xu, Zhenghe
2015-07-01
Over a decade ago, Xu and Masliyah pioneered an approach to characterize the interactions between particles in dynamic environments of multicomponent systems by measuring zeta potential distributions of individual components and their mixtures. Using a Zetaphoremeter, the measured zeta potential distributions of individual components and their mixtures were used to determine the conditions of preferential attachment in multicomponent particle suspensions. The technique has been applied to study the attachment of nano-sized silica and alumina particles to sub-micron size bubbles in solutions with and without the addition of surface active agents (SDS, DAH and DF250). The degree of attachment between gas bubbles and particles is shown to be a function of the interaction energy governed by the dispersion, electrostatic double layer and hydrophobic forces. Under certain chemical conditions, the attachment of nano-particles to sub-micron size bubbles is shown to be enhanced by in-situ gas nucleation induced by hydrodynamic cavitation for the weakly interacting systems, where mixing of the two individual components results in negligible attachment. Preferential interaction in complex tertiary particle systems demonstrated strong attachment between micron-sized alumina and gas bubbles, with little attachment between micron-sized alumina and silica, possibly due to instability of the aggregates in the shear flow environment. PMID:25731913
Visualization and analysis of lipopolysaccharide distribution in binary phospholipid bilayers
Henning, Maria Florencia; Sanchez, Susana; Bakas, Laura; Departamento de Ciencias Biologicas, Facultad de Ciencias Exactas, UNLP, Calles 47 y 115, 1900 La Plata
2009-05-22
Lipopolysaccharide (LPS) is an endotoxin released from the outer membrane of Gram-negative bacteria during infections. It has been reported that LPS may play a role in the outer membrane of bacteria similar to that of cholesterol in eukaryotic plasma membranes. In this article we compare the effect of introducing LPS or cholesterol into liposomes made of dipalmitoylphosphatidylcholine/dioleoylphosphatidylcholine on the solubilization process by Triton X-100. The results show that liposomes containing LPS or cholesterol are more resistant to solubilization by Triton X-100 than the binary phospholipid mixtures at 4 °C. The LPS distribution was analyzed on GUVs of DPPC:DOPC using FITC-LPS. Solid and liquid-crystalline domains were visualized by labeling the GUVs with LAURDAN, and GP images were acquired using a two-photon microscope. The images show a selective distribution of LPS in gel domains. Our results support the hypothesis that LPS could aggregate and concentrate selectively in biological membranes, providing a mechanism to bring together several components of the LPS-sensing machinery.
Analysis of an algorithm for distributed recognition and accountability
Ko, C.; Frincke, D.A.; Goan, T. Jr.; Heberlein, L.T.; Levitt, K.; Mukherjee, B.; Wee, C.
1993-08-01
Computer and network systems are vulnerable to attacks. Abandoning the existing huge infrastructure of possibly-insecure computer and network systems is impossible, and replacing them by totally secure systems may not be feasible or cost effective. A common element in many attacks is that a single user will often attempt to intrude upon multiple resources throughout a network. Detecting the attack can become significantly easier by compiling and integrating evidence of such intrusion attempts across the network rather than attempting to assess the situation from the vantage point of only a single host. To solve this problem, we suggest an approach for distributed recognition and accountability (DRA), which consists of algorithms which "process," at a central location, distributed and asynchronous "reports" generated by computers (or a subset thereof) throughout the network. Our highest-priority objectives are to observe the ways in which an individual moves around in a network of computers, including changing user names to possibly hide his/her true identity, and to associate all activities of multiple instances of the same individual with the same network-wide user. We present the DRA algorithm and a sketch of its proof under an initial set of simplifying albeit realistic assumptions. Later, we relax these assumptions to accommodate pragmatic aspects such as missing or delayed "reports," clock skew, tampered "reports," etc. We believe that such algorithms will have widespread applications in the future, particularly in intrusion-detection systems.
NASA Astrophysics Data System (ADS)
Zajac, Z. B.; Munoz-Carpena, R.; Vanderlinden, K.
2009-12-01
This research addresses two aspects of spatially distributed modeling: uncertainty analysis (UA), described as propagation of uncertainty from spatially distributed input factors onto model outputs; and sensitivity analysis (SA), defined as assessment of the relative importance of spatially distributed factors on the model output variance. An evaluation framework for spatially distributed models is proposed based on a combination of sequential Gaussian simulation (sGs) and the global, variance-based SA method of Sobol to quantify model output uncertainty due to spatially distributed input factors, together with the corresponding sensitivity measures. The framework is independent of model assumptions; it explores the whole space of input factors, provides measures of factors' importance (first-order effects) and of their interactions (higher-order effects), and assesses the effect of the spatial resolution of the model input factors, one of the least understood contributors to uncertainty and sensitivity of distributed models. A spatially distributed hydrological model (Regional Simulation Model, RSM), applied to a site in South Florida (Water Conservation Area-2A), is used as a benchmark for the study. The model domain is spatially represented by triangular elements (average size of 1.1 km2). High-resolution land elevation measurements (400 x 400 m, +/-0.15 m vertical error) obtained by the USGS' Airborne Height Finder survey are used in the study. The original survey data (approximately 2,600 points), together with smaller-density subsets drawn from these data (1/2, 1/4, 1/8, 1/16, 1/32 of the original density), are used for generating equiprobable maps of effective land elevation factor values via sGs. These alternative realizations are sampled pseudo-randomly and used as inputs for model runs. In this way, uncertainty regarding the spatial representation of the elevation surface is transferred into uncertainty of model outputs.
The results show that below a specific threshold of data density (1/8), model uncertainty and sensitivity are impacted by the density of the land elevation data used for deriving effective land elevation factor values. Below this threshold, the uncertainty of model outputs is observed to increase as the density of elevation data decreases. A similar pattern is observed for the relative importance of the sensitivity indexes of the land elevation factor. The results indicate that reduced land elevation data density could be used without significantly compromising the certainty of RSM predictions and the subsequent decision-making process for the specific WCA-2A conditions. The methodology proposed in this research is useful for model quality control and for guiding field measurement campaigns by optimizing data collection in terms of cost-benefit analysis.
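The variance-based (Sobol) sensitivity measures used in the framework can be illustrated with a pick-freeze estimator of first-order indices on a toy two-factor model; the model and sample sizes are assumptions, not the RSM.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 20000
A = rng.uniform(-1, 1, (N, 2))   # two independent input factors, base sample
B = rng.uniform(-1, 1, (N, 2))   # second independent sample

def model(x):
    # Toy "distributed model": factor 0 dominates the output variance.
    return 4.0 * x[:, 0] + x[:, 1]

yA, yB = model(A), model(B)
V = np.var(np.concatenate([yA, yB]))   # total output variance

def first_order(i):
    """Saltelli-style pick-freeze estimate of the first-order index S_i."""
    ABi = A.copy()
    ABi[:, i] = B[:, i]                # resample only factor i
    return np.mean(yB * (model(ABi) - yA)) / V

S = [first_order(i) for i in range(2)]  # analytically 16/17 and 1/17
```

In the paper's framework the columns of A and B would be whole sGs realizations of the elevation surface, so S measures how much of the output variance each spatially distributed factor explains.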
Sensitivity Analysis of Distributed Soil Moisture Profiles by Active Distributed Temperature Sensing
NASA Astrophysics Data System (ADS)
Ciocca, F.; Van De Giesen, N.; Assouline, S.; Huwald, H.; Lunati, I.
2012-12-01
Monitoring and measuring the fluctuations of soil moisture at large scales in the field remains a challenge. Although sensors based on the measurement of dielectric properties, such as Time Domain Reflectometers (TDR) and capacity-based probes, can guarantee reasonable responses, they always operate over limited spatial ranges. On the other hand, optical fibers attached to a Distributed Temperature Sensing (DTS) system allow for high-precision soil temperature measurements over distances of kilometers. A recently developed technique called Active DTS (ADTS), consisting of a heat pulse of a certain duration and power applied along the metal sheath covering the optical fiber buried in the soil, has proven a promising alternative to spatially limited probes. Two approaches have been investigated to infer distributed soil moisture profiles in the region surrounding the fiber optic cable by analyzing the temperature variations during the heating and cooling phases. One directly relates the change of temperature to the soil moisture (independently measured) to develop a specific calibration curve for the soil used; the other requires inferring the thermal properties and then obtaining the soil moisture by inversion of known relationships. To test and compare the two approaches over a broad range of saturation conditions, a large lysimeter has been homogeneously filled with loamy soil and 52 meters of fiber optic cable have been buried in the shallowest 0.8 meters in a double-coil rigid structure of 15 loops, along with a series of capacity-based sensors (calibrated for the soil used) to provide independent soil moisture measurements at the same depths as the optical fiber. Thermocouples have also been wrapped around the fiber, to investigate the effects of the insulating cover surrounding the cable, and placed between layers to monitor heat diffusion at several centimeters. A high-performance DTS has been used to measure the temperature along the fiber optic cable.
Several soil moisture profiles have been generated in the lysimeter either varying the water table height or by wetting the soil from the top. The sensitivity of the ADTS method for heat pulses of different duration and power and ranges of spatial and temporal resolution are presented.
Channel flow analysis. [velocity distribution throughout blade flow field
NASA Technical Reports Server (NTRS)
Katsanis, T.
1973-01-01
The design of a proper blade profile requires calculation of the blade row flow field in order to determine the velocities on the blade surfaces. An analysis theory is presented for several methods used for this calculation and associated computer programs that were developed are discussed.
Analysis of the tropospheric water distribution during FIRE 2
NASA Technical Reports Server (NTRS)
Westphal, Douglas L.
1993-01-01
The Penn State/NCAR mesoscale model, as adapted for use at ARC, was used as a testbed for the development and validation of cloud models for use in General Circulation Models (GCMs). This modeling approach also allows us to intercompare the predictions of the various cloud schemes within the same dynamical framework. The use of the PSU/NCAR mesoscale model also allows us to compare our results with FIRE-II (First International Satellite Cloud Climatology Project Regional Experiment) observations, instead of climate statistics. Though the approach is promising, our work to date revealed several difficulties. First, the model is by design limited in spatial coverage and is only run for 12 to 48 hours at a time. Hence the quality of the simulation depends heavily on the initial conditions. The poor quality of upper-tropospheric measurements of water vapor is well known, and the situation is particularly bad in mid-latitude winter: the coupling with the surface is less direct than in summer, so relying on the model to spin up a reasonable moisture field is not always successful. Though it is one of the most common atmospheric constituents, water vapor is relatively difficult to measure accurately, especially operationally over large areas. The standard NWS sondes have little sensitivity at the low temperatures where cirrus form, and the data from the GOES 6.7-micron channel are difficult to quantify. For this reason, the goals of FIRE Cirrus II included characterizing the three-dimensional distribution of water vapor and clouds. In studying the data from FIRE Cirrus II, it was found that no single special observation technique provides accurate regional distributions of water vapor. The Raman lidar provides accurate measurements, but only at the Hub, for levels up to 10 km, and during nighttime hours.
The CLASS sondes are more sensitive to moisture at low temperatures than are the NWS sondes, but the four stations only cover an area of two hundred kilometers on a side. The aircraft give the most accurate measurements of water vapor, but are limited in spatial and temporal coverage. This problem is partly alleviated by the use of the MAPS analyses, a four-dimensional data assimilation system that combines the previous 3-hour forecast with the available observations, but its upper-level moisture analyses are sometimes deficient because of the vapor measurement problem. An attempt was made to create a consistent four-dimensional description of the water vapor distribution during the second IFO by subjectively combining data from a variety of sources, including MAPS analyses, CLASS sondes, SPECTRE sondes, NWS sondes, GOES satellite analyses, radars, lidars, and microwave radiometers.
Parallelization of Finite Element Analysis Codes Using Heterogeneous Distributed Computing
NASA Technical Reports Server (NTRS)
Ozguner, Fusun
1996-01-01
Performance gains in computer design are quickly consumed as users seek to analyze larger problems to a higher degree of accuracy. Innovative computational methods, such as parallel and distributed computing, seek to multiply the power of existing hardware technology to satisfy the computational demands of large applications. In the early stages of this project, experiments were performed using two large, coarse-grained applications, CSTEM and METCAN. These applications were parallelized on an Intel iPSC/860 hypercube. It was found that the overall speedup was very low, due to large, inherently sequential code segments present in the applications. The overall execution time of the application, T_par, depends on these sequential segments. If these segments make up a significant fraction of the overall code, the application will have a poor speedup measure.
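The speedup limit described above is Amdahl's law. A minimal sketch; the serial fractions and processor counts below are illustrative, not measurements from CSTEM or METCAN:

```python
def amdahl_speedup(serial_fraction, n_processors):
    """Amdahl's law: speedup bound when part of the code is inherently serial."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

# A 20% serial fraction caps speedup at 5x no matter how many processors:
print(amdahl_speedup(0.20, 32))       # ~4.4 on a 32-node hypercube-sized machine
print(amdahl_speedup(0.20, 10**9))    # ~5.0 even with a billion processors
```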
Analysis of phase distribution phenomena in microgravity environments
NASA Technical Reports Server (NTRS)
Lahey, Richard, Jr.; Bonetto, Fabian
1994-01-01
In the past one of NASA's primary emphases has been on identifying single and multiphase flow experiments which can produce new discoveries that are not possible except in a microgravity environment. While such experiments are obviously of great scientific interest, they do not necessarily provide NASA with the ability to use multiphase processes for power production and/or utilization in space. The purpose of the research presented in this paper is to demonstrate the ability of multidimensional two-fluid models for bubbly two-phase flow to accurately predict lateral phase distribution phenomena in microgravity environments. If successful, this research should provide NASA with mechanistically-based analytical methods which can be used for multiphase space design and evaluation, and should be the basis for future shuttle experiments for model verification.
Southern Arizona riparian habitat: Spatial distribution and analysis
NASA Technical Reports Server (NTRS)
Lacey, J. R.; Ogden, P. R.; Foster, K. E.
1975-01-01
The objectives of this study were centered around the demonstration of remote sensing as an inventory tool and researching the multiple uses of riparian vegetation. Specific study objectives were to: (1) map riparian vegetation along the Gila River, San Simon Creek, San Pedro River, Pantano Wash, (2) determine the feasibility of automated mapping using LANDSAT-1 computer compatible tapes, (3) locate and summarize existing maps delineating riparian vegetation, (4) summarize data relevant to Southern Arizona's riparian products and uses, (5) document recent riparian vegetation changes along a selected portion of the San Pedro River, (6) summarize historical changes in composition and distribution of riparian vegetation, and (7) summarize sources of available photography pertinent to Southern Arizona.
Finite key analysis for symmetric attacks in quantum key distribution
Meyer, Tim; Kampermann, Hermann; Kleinmann, Matthias; Bruss, Dagmar
2006-10-15
We introduce a constructive method to calculate the achievable secret key rate for a generic class of quantum key distribution protocols, when only a finite number n of signals is given. Our approach is applicable to all scenarios in which the quantum state shared by Alice and Bob is known. In particular, we consider the six-state protocol with symmetric eavesdropping attacks, and show that for a small number of signals, i.e., below n ≈ 10^4, the finite key rate differs significantly from the asymptotic value for n → ∞. However, for larger n, a good approximation of the asymptotic value is found. We also study secret key rates for protocols using higher-dimensional quantum systems.
Completion report harmonic analysis of electrical distribution systems
Tolbert, L.M.
1996-03-01
Harmonic currents have increased dramatically in electrical distribution systems in the last few years due to the growth in non-linear loads found in most electronic devices. Because electrical systems have been designed for linear voltage and current waveforms; (i.e. nearly sinusoidal), non-linear loads can cause serious problems such as overheating conductors or transformers, capacitor failures, inadvertent circuit breaker tripping, or malfunction of electronic equipment. The U.S. Army Center for Public Works has proposed a study to determine what devices are best for reducing or eliminating the effects of harmonics on power systems typical of those existing in their Command, Control, Communication and Intelligence (C3I) sites.
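The distortion caused by such non-linear loads is commonly quantified by total harmonic distortion (THD), the RMS of the harmonic components relative to the fundamental. A hedged sketch with invented current magnitudes (the abstract itself does not define THD):

```python
import math

def thd(rms_components):
    """Total harmonic distortion: RMS of harmonics 2..N over the fundamental."""
    fundamental, *harmonics = rms_components
    return math.sqrt(sum(h * h for h in harmonics)) / fundamental

# Invented example: 100 A fundamental with 30 A third and 40 A fifth harmonics.
print(thd([100.0, 30.0, 40.0]))  # 0.5, i.e. 50% THD
```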
Structural reliability analysis of laminated CMC components
NASA Technical Reports Server (NTRS)
Duffy, Stephen F.; Palko, Joseph L.; Gyekenyesi, John P.
1991-01-01
For laminated ceramic matrix composite (CMC) materials to realize their full potential in aerospace applications, design methods and protocols are a necessity. The focus here is the time-independent failure response of these materials, and a reliability analysis associated with the initiation of matrix cracking is presented. A public domain computer algorithm is highlighted that was coupled with the laminate analysis of a finite element code and serves as a design aid for analyzing structural components made from laminated CMC materials. Issues relevant to the effect of component size are discussed, and a parameter estimation procedure is presented. The estimation procedure allows three parameters to be calculated from a failure population that has an underlying Weibull distribution.
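As a rough illustration of fitting a failure population with an underlying Weibull distribution, here is a two-parameter profile maximum-likelihood fit on synthetic data; the shape and scale values are invented, and this is not the paper's three-parameter procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic failure data from a Weibull(shape=2.5, scale=300) distribution.
data = 300.0 * rng.weibull(2.5, size=1000)

# Profile likelihood: for a candidate shape c, the scale MLE is (mean(x^c))^(1/c).
def weibull_loglik(x, c):
    scale = np.mean(x**c) ** (1.0 / c)
    z = x / scale
    return np.sum(np.log(c / scale) + (c - 1) * np.log(z) - z**c)

shapes = np.linspace(0.5, 5.0, 451)
c_hat = shapes[np.argmax([weibull_loglik(data, c) for c in shapes])]
scale_hat = np.mean(data**c_hat) ** (1.0 / c_hat)
print(c_hat, scale_hat)  # close to the true 2.5 and 300
```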
Photoelastic analysis of stress distribution with different implant systems.
Pellizzer, Eduardo Piza; Carli, Rafael Imai; Falcón-Antenucci, Rosse Mary; Verri, Fellippo Ramos; Goiato, Marcelo Coelho; Villa, Luiz Marcelo Ribeiro
2014-04-01
The aim of this study was to evaluate stress distribution with different implant systems through photoelasticity. Five models were fabricated with photoelastic resin PL-2. Each model was composed of a block of photoelastic resin (10 × 40 × 45 mm) with an implant and a healing abutment: model 1, internal hexagon implant (4.0 × 10 mm; Conect AR, Conexão, São Paulo, Brazil); model 2, Morse taper/internal octagon implant (4.1 × 10 mm; Standard, Straumann ITI, Andover, Mass); model 3, Morse taper implant (4.0 × 10 mm; AR Morse, Conexão); model 4, locking taper implant (4.0 × 11 mm; Bicon, Boston, Mass); model 5, external hexagon implant (4.0 × 10 mm; Master Screw, Conexão). Axial and oblique (45°) loads of 150 N were applied by a universal testing machine (EMIC-DL 3000), and a circular polariscope was used to visualize the stress. The results were photographed and analyzed qualitatively using Adobe Photoshop software. For the axial load, the greatest stress concentration was exhibited in the cervical and apical thirds. However, for the oblique load, the highest number of isochromatic fringes was observed in the implant apex and in the cervical region adjacent to the load direction in all models. Model 2 (Morse taper, internal octagon, Straumann ITI) presented the lowest stress concentration, while model 5 (external hexagon, Master Screw, Conexão) exhibited the greatest stress. It was concluded that Morse taper implants presented a more favorable stress distribution among the test groups. The external hexagon implant showed the highest stress concentration. Oblique load generated the highest stress in all models analyzed. PMID:22208909
NASA Astrophysics Data System (ADS)
Kovalev, I. V.; Zelenkov, P. V.; Karaseva, M. V.; Tsarev, M. Yu; Tsarev, R. Yu
2015-01-01
The paper considers the problem of reliability analysis of distributed computer systems with client-server architecture. A distributed computer system is a set of hardware and software implementing the following main functions: processing, storage, transmission and protection of data. This paper discusses the "client-server" architecture of distributed computer systems. The paper presents a scheme of distributed computer system functioning represented as a graph whose vertices are the functional states of the system and whose arcs are transitions from one state to another depending on the prevailing conditions. In the reliability analysis we consider such reliability indicators as the probabilities of the system transitioning into the stopping and accident states, as well as the intensities of these transitions. The proposed model allows us to obtain relations for the reliability parameters of the distributed computer system without any assumptions about the distribution laws of random variables or the number of elements in the system.
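A toy version of such a state-graph reliability model can be written as an absorbing Markov chain; the states and transition probabilities below are hypothetical, not taken from the paper:

```python
import numpy as np

# Hypothetical three-state sketch: 0 = working, 1 = stopped, 2 = accident.
# Per-step transition probabilities (invented for illustration).
P = np.array([
    [0.90, 0.08, 0.02],   # from working
    [0.50, 0.45, 0.05],   # from stopped (repair can return it to working)
    [0.00, 0.00, 1.00],   # accident is absorbing
])
Q = P[:2, :2]                        # transient-to-transient block
N = np.linalg.inv(np.eye(2) - Q)     # fundamental matrix: expected visits
steps_to_accident = N.sum(axis=1)    # expected steps before absorption
print(steps_to_accident[0])          # from "working": ~42 steps
```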
Advanced analysis of metal distributions in human hair
Kempson, Ivan M.; Skinner, William M.
2008-06-09
A variety of techniques (secondary electron microscopy with energy dispersive X-ray analysis, time-of-flight-secondary ion mass spectrometry, and synchrotron X-ray fluorescence) were utilized to distinguish metal contamination occurring in hair arising from endogenous uptake from an individual exposed to a polluted environment, in this case a lead smelter. Evidence was sought for elements less affected by contamination and potentially indicative of biogenic activity. The unique combination of surface sensitivity, spatial resolution, and detection limits used here has provided new insight regarding hair analysis. Metals such as Ca, Fe, and Pb appeared to have little representative value of endogenous uptake and were mainly due to contamination. Cu and Zn, however, demonstrate behaviors worthy of further investigation into relating hair concentrations to endogenous function.
Taravat, Alireza; Oppelt, Natascha
2014-01-01
Oil spills represent a major threat to ocean ecosystems and their environmental status. Previous studies have shown that Synthetic Aperture Radar (SAR), as its recording is independent of clouds and weather, can be effectively used for the detection and classification of oil spills. Dark formation detection is the first and critical stage in oil-spill detection procedures. In this paper, a novel approach for automated dark-spot detection in SAR imagery is presented. A new approach from the combination of adaptive Weibull Multiplicative Model (WMM) and MultiLayer Perceptron (MLP) neural networks is proposed to differentiate between dark spots and the background. The results have been compared with the results of a model combining non-adaptive WMM and pulse coupled neural networks. The presented approach overcomes the non-adaptive WMM filter setting parameters by developing an adaptive WMM model which is a step ahead towards a full automatic dark spot detection. The proposed approach was tested on 60 ENVISAT and ERS2 images which contained dark spots. For the overall dataset, an average accuracy of 94.65% was obtained. Our experimental results demonstrate that the proposed approach is very robust and effective where the non-adaptive WMM & pulse coupled neural network (PCNN) model generates poor accuracies. PMID:25474376
Distributed Finite Element Analysis Using a Transputer Network
NASA Technical Reports Server (NTRS)
Watson, James; Favenesi, James; Danial, Albert; Tombrello, Joseph; Yang, Dabby; Reynolds, Brian; Turrentine, Ronald; Shephard, Mark; Baehmann, Peggy
1989-01-01
The principal objective of this research effort was to demonstrate the extraordinarily cost effective acceleration of finite element structural analysis problems using a transputer-based parallel processing network. This objective was accomplished in the form of a commercially viable parallel processing workstation. The workstation is a desktop size, low-maintenance computing unit capable of supercomputer performance yet costs two orders of magnitude less. To achieve the principal research objective, a transputer based structural analysis workstation termed XPFEM was implemented with linear static structural analysis capabilities resembling commercially available NASTRAN. Finite element model files, generated using the on-line preprocessing module or external preprocessing packages, are downloaded to a network of 32 transputers for accelerated solution. The system currently executes at about one third Cray X-MP24 speed but additional acceleration appears likely. For the NASA selected demonstration problem of a Space Shuttle main engine turbine blade model with about 1500 nodes and 4500 independent degrees of freedom, the Cray X-MP24 required 23.9 seconds to obtain a solution while the transputer network, operated from an IBM PC-AT compatible host computer, required 71.7 seconds. Consequently, the $80,000 transputer network demonstrated a cost-performance ratio about 60 times better than the $15,000,000 Cray X-MP24 system.
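The quoted cost-performance figure can be checked from the numbers in the abstract, taking cost-performance as the product of price and solution time (an assumption; the authors' exact metric is not stated):

```python
# Figures quoted in the abstract; cost-performance taken here as price x time
# (an assumed metric -- lower is better).
cray_cost_usd, cray_seconds = 15_000_000, 23.9
net_cost_usd, net_seconds = 80_000, 71.7
ratio = (cray_cost_usd * cray_seconds) / (net_cost_usd * net_seconds)
print(ratio)  # ~62, consistent with the quoted "about 60 times better"
```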
Microwave circuit analysis and design by a massively distributed computing network
NASA Astrophysics Data System (ADS)
Vai, Mankuan; Prasad, Sheila
1995-05-01
The advances in microelectronic engineering have rendered massively distributed computing networks practical and affordable. This paper describes one application of this distributed computing paradigm to the analysis and design of microwave circuits. A distributed computing network, constructed in the form of a neural network, is developed to automate the operations typically performed on a normalized Smith chart. Examples showing the use of this computing network for impedance matching and stabilizing are provided.
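The basic Smith chart operation such a network automates is the bilinear map from normalized impedance to reflection coefficient; a minimal sketch (the 50-ohm line and load values are illustrative):

```python
# Normalized impedance z = Z/Z0 maps to the reflection coefficient via the
# bilinear transform the Smith chart encodes graphically.
def reflection_coefficient(z_load, z0=50.0):
    z = z_load / z0
    return (z - 1) / (z + 1)

gamma = reflection_coefficient(100 + 50j)    # illustrative mismatched load
vswr = (1 + abs(gamma)) / (1 - abs(gamma))   # standing-wave ratio
print(gamma, vswr)
```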
Evolution History of Asteroid Itokawa Based on Block Distribution Analysis
NASA Astrophysics Data System (ADS)
Mazrouei, Sara; Daly, Michael; Barnouin, Olivier; Ernst, Carolyn
2013-04-01
This work investigates trends in the global and regional distribution of blocks on asteroid 25143 Itokawa in order to better understand the history of this asteroid. Itokawa is a near-Earth object, and the first asteroid that was targeted for a sample return mission. Trends in block population provide new insights with regard to Itokawa's current appearance following the disruption of a possible parent body, and how its surface might have changed since then. Here blocks are defined as rocks or features with distinctive positive relief that are larger than a few meters in size. The size and distribution of blocks are measured by mapping the outline of the blocks using the Small Body Mapping Tool (SBMT) created by the Johns Hopkins University Applied Physics Laboratory [1]. The SBMT allows the user to overlay correctly geo-located Hayabusa images [2] onto the Itokawa shape model. This study provides additional inferences on the original disruption and subsequent re-accretion of Itokawa's "head" and "body" from block analyses. A new approach is taken by analyzing the population of blocks with respect to latitude for both Itokawa's current state, and a hypothetical elliptical body. Itokawa currently rotates approximately about its axis of maximum moment of inertia, which is expected due to conservation of momentum and minimum energy arguments. After the possible disruption of the parent body of Itokawa, the "body" of Itokawa would have tended to a similar rotation. The shape of this body is made by removing the head of Itokawa and applying a semispherical cap. Using the method of [3], the inertial properties of this object are calculated. With the assumption that this object had settled to its stable rotational axis, it is found that the pole axis could have been tilted about 13° away from the current axis in the direction opposite the head, equivalent to a 33 meter change in the center of mass.
The results of this study provide means to test the hypothesis of whether or not Itokawa is a contact binary. References: [1] E. G. Kahn, et al. A tool for the visualization of small body data. In LPSC XLII, 2011. [2] A. Fujiwara, et al. The rubble-pile asteroid Itokawa as observed by Hayabusa. Science, 312(5778):1330-1334, June 2006. [3] A. F. Cheng, et al. Small-scale topography of 433 Eros from laser altimetry and imaging. Icarus, 155(1):51-74, 2002
[Vertical Distribution Characteristics and Analysis in Sediments of Xidahai Lake].
Duan, Mu-chun; Xiao, Hai-feng; Zang, Shu-ying
2015-07-01
The organic matter (OM), total nitrogen (TN), total phosphorus (TP), the phosphorus fractions and the particle size in a columnar sediment core of Xidahai Lake were analyzed to discuss the vertical distribution characteristics and influencing factors. The results showed that the contents of OM, TN and TP were 0.633%-2.756%, 0.150%-0.429% and 648.00-1480.67 mg·kg(-1), respectively. From 1843 to 1970, the contents of Ca-P, IP and OM changed little, while the contents of Fe/Al-P, OP, TP and TN fluctuated; from 1970 to 1996, the contents of Ca-P, IP and TP tended to decrease, the contents of Fe/Al-P, OP and OM first decreased and then increased to different degrees, and TN fluctuated strongly; from 1996 to 2009, the nutrient contents showed relatively large fluctuations, and the average contents of Fe/Al-P, OP and OM were the highest of the three stages. The nutrient pollution recorded in the sediment core came mainly from industrial wastewater, sewage and fertilizer loss around Xidahai Lake. The C/N ratio in the sediments showed that organic matter was mainly from aquatic organisms. The sediment particle size composition was dominated by clay and fine silt. Correlation analysis showed that Ca-P, IP and TP were significantly positively correlated, indicating that the contribution of Ca-P to IP and TP growth was large. PMID:26489314
Secure distributed genome analysis for GWAS and sequence comparison computation
2015-01-01
Background: The rapid increase in the availability and volume of genomic data makes significant advances in biomedical research possible, but sharing of genomic data poses challenges due to the highly sensitive nature of such data. To address the challenges, a competition for secure distributed processing of genomic data was organized by the iDASH research center. Methods: In this work we propose techniques for securing computation with real-life genomic data for minor allele frequency and chi-squared statistics computation, as well as distance computation between two genomic sequences, as specified by the iDASH competition tasks. We put forward novel optimizations, including a generalization of a version of mergesort, which might be of independent interest. Results: We provide implementation results of our techniques based on secret sharing that demonstrate practicality of the suggested protocols and also report on performance improvements due to our optimization techniques. Conclusions: This work describes our techniques, findings, and experimental results developed and obtained as part of iDASH 2015 research competition to secure real-life genomic computations and shows feasibility of securely computing with genomic data in practice. PMID:26733307
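For reference, plaintext versions of two of the competition statistics, minor allele frequency and a 2x2 chi-squared, can be sketched as follows (the paper evaluates these under secret sharing; the genotype and table values here are invented):

```python
# Plaintext reference implementations; the secure protocols compute the same
# statistics over secret-shared inputs. All data values are invented.
def minor_allele_frequency(genotypes):
    """genotypes: 0/1/2 copies of the candidate allele per individual."""
    total = 2 * len(genotypes)
    count = sum(genotypes)
    return min(count, total - count) / total

def chi_squared_2x2(a, b, c, d):
    """Pearson chi-squared statistic for a 2x2 allele-count table."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

print(minor_allele_frequency([0, 1, 2, 1, 0]))  # 0.4
print(chi_squared_2x2(30, 10, 10, 30))          # 20.0
```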
Distributed-sensor-system decision analysis using team strategies
NASA Astrophysics Data System (ADS)
Choe, Howard C.; Kazakos, Demetrios
1992-11-01
A distributed (or decentralized) multiple sensor system is considered under binary hypothesis environments. The system is deployed with a host sensor (HS) and multiple slave sensors (SSs). All sensors have their own independent decision makers which are capable of declaring local decisions based solely on their own observation of the environment. The communication between the HS and the SSs is conditional upon the HS's command. Each communication that takes place involves a communication cost which plays an important role in the approaches taken in this study. The conditional communication with the cost initiates the team strategy in making the final decisions at the HS. The objectives are not only to apply the team strategy method in the decision making process, but also to minimize the expected system cost (or the probability of error in making decisions) by optimizing thresholds in the HS. The analytical expression of the expected system cost (C) is numerically evaluated for Gaussian statistics over threshold locations in the HS to find an optimal threshold location for a given communication cost. The computer simulations of various sensor systems for Gaussian observations are also performed in order to understand the behavior of each system with respect to correct detections, false alarms, and target misses.
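The threshold optimization at the heart of this setup can be sketched for a single Gaussian binary-hypothesis sensor; the priors, costs, and means below are assumptions, and communication costs are omitted:

```python
from statistics import NormalDist

# One-sensor sketch: equal priors, unit-cost errors, Gaussian observations.
h0 = NormalDist(mu=0.0, sigma=1.0)   # hypothesis H0: noise only
h1 = NormalDist(mu=2.0, sigma=1.0)   # hypothesis H1: target present

def expected_cost(t):
    p_false_alarm = 1.0 - h0.cdf(t)  # decide H1 when H0 is true
    p_miss = h1.cdf(t)               # decide H0 when H1 is true
    return 0.5 * (p_false_alarm + p_miss)

# Grid search for the threshold minimizing the expected cost.
best_cost, best_t = min((expected_cost(i / 100), i / 100) for i in range(-100, 300))
print(best_t)  # 1.0: the midpoint, as symmetry predicts for equal priors
```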
Motion synthesis and force distribution analysis for a biped robot.
Trojnacki, Maciej T.; Zielińska, Teresa
2011-01-01
In this paper, the method of generating biped robot motion using recorded human gait is presented. The recorded data were modified taking into account the velocity available for robot drives. Data includes only selected joint angles, therefore the missing values were obtained considering the dynamic postural stability of the robot, which means obtaining an adequate motion trajectory of the so-called Zero Moment Point (ZMP). Also, the method of determining the ground reaction forces' distribution during the biped robot's dynamically stable walk is described. The method was developed by the authors. Following the description of equations characterizing the dynamics of robot's motion, the values of the components of ground reaction forces were symbolically determined as well as the coordinates of the points of robot's feet contact with the ground. The theoretical considerations have been supported by computer simulation and animation of the robot's motion. This was done using the Matlab/Simulink package and Simulink 3D Animation Toolbox, and it validated the proposed method. PMID:21761810
Preliminary evaluation of diabatic heating distribution from FGGE level 3b analysis data
NASA Technical Reports Server (NTRS)
Kasahara, A.; Mizzi, A. P.
1985-01-01
A method is presented for calculating the global distribution of diabatic heating rate. Preliminary results of the global heating rate evaluated from the European Centre for Medium-Range Weather Forecasts Level IIIb analysis data are also presented.
Factor analysis applied to distribution of elements in western Turkey.
Bakaç, M; Kumru, M N
2001-11-01
This paper examines the results of R-mode factor analysis performed on four radionuclides and two elements from a geochemical survey of western Turkey. A total of 321 sediment and soil samples were collected along the Gediz river and analysed for 238U, 232Th, 40K, 226Ra, Mg and Pb. Two factors in soil and sediment, which account for 59% and 56% of the total variance, respectively, were extracted and named the geological-structure and volcano factors. PMID:11573808
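The principal-factor step of an R-mode factor analysis amounts to an eigendecomposition of the variable-by-variable correlation matrix; a synthetic sketch (the data are simulated around one invented latent factor, not the Gediz river measurements):

```python
import numpy as np

# Simulate 321 samples of four variables driven by one latent factor, mirroring
# the survey's sample count; loadings and noise level are invented.
rng = np.random.default_rng(1)
latent = rng.normal(size=(321, 1))
data = np.hstack([latent + 0.3 * rng.normal(size=(321, 1)) for _ in range(4)])

corr = np.corrcoef(data, rowvar=False)     # variable-by-variable correlations
eigvals = np.linalg.eigvalsh(corr)         # ascending eigenvalues
explained = eigvals[-1] / eigvals.sum()    # variance share of factor 1
print(explained)                           # close to 1: one strong common factor
```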
Photoelastic analysis of stress distribution in oral rehabilitation.
Turcio, Karina Helga Leal; Goiato, Marcelo Coelho; Gennari Filho, Humberto; dos Santos, Daniela Micheline
2009-03-01
The purpose of this study was to present a literature review about photoelasticity, a laboratory method for evaluation of implants prosthesis behavior. Fixed or removable prostheses function as levers on supporting teeth, allowing forces to cause tooth movement if not carefully planned. Hence, during treatment planning, the dentist must be aware of the biomechanics involved and prevent movement of supporting teeth, decreasing lever-type forces generated by these prosthesis. Photoelastic analysis has great applicability in restorative dentistry as it allows prediction and minimization of biomechanical critical points through modifications in treatment planning. PMID:19305247
NASA Technical Reports Server (NTRS)
Heldenfels, Richard R
1951-01-01
A numerical method is presented for the stress analysis of stiffened-shell structures of arbitrary cross section under nonuniform temperature distributions. The method is based on a previously published procedure that is extended to include temperature effects and multicell construction. The application of the method to practical problems is discussed and an illustrative analysis is presented of a two-cell box beam under the combined action of vertical loads and a nonuniform temperature distribution.
Neti, Prasad V.S.V.; Howell, Roger W.
2010-01-01
Recently, the distribution of radioactivity among a population of cells labeled with 210Po was shown to be well described by a log-normal (LN) distribution function (J Nucl Med. 2006;47:1049–1058) with the aid of autoradiography. To ascertain the influence of Poisson statistics on the interpretation of the autoradiographic data, the present work reports on a detailed statistical analysis of these earlier data. Methods: The measured distributions of α-particle tracks per cell were subjected to statistical tests with Poisson, LN, and Poisson-lognormal (P-LN) models. Results: The LN distribution function best describes the distribution of radioactivity among cell populations exposed to 0.52 and 3.8 kBq/mL of 210Po-citrate. When cells were exposed to 67 kBq/mL, the P-LN distribution function gave a better fit; however, the underlying activity distribution remained log-normal. Conclusion: The present analysis generally provides further support for the use of LN distributions to describe the cellular uptake of radioactivity. Care should be exercised when analyzing autoradiographic data on activity distributions to ensure that Poisson processes do not distort the underlying LN distribution. PMID:18483086
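The distinction between the underlying log-normal activity distribution and the Poisson-distorted track counts can be illustrated by simulation (the parameters are invented, not fitted to the paper's data):

```python
import numpy as np

rng = np.random.default_rng(42)
# Underlying cellular activity: log-normal mean tracks per cell (invented params).
activity = rng.lognormal(mean=1.0, sigma=0.5, size=10_000)
# Observed autoradiographic counts: Poisson sampling around each cell's activity.
tracks = rng.poisson(activity)
# Poisson counting adds variance (roughly the mean) on top of the LN spread:
print(tracks.var() - activity.var())  # positive, near activity.mean()
```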
Wavelet analysis of baryon acoustic structures in the galaxy distribution
NASA Astrophysics Data System (ADS)
Arnalte-Mur, P.; Labatie, A.; Clerc, N.; Martínez, V. J.; Starck, J.-L.; Lachièze-Rey, M.; Saar, E.; Paredes, S.
2012-06-01
Context. Baryon acoustic oscillations (BAO) are imprinted in the density field by acoustic waves travelling in the plasma of the early universe. Their fixed scale can be used as a standard ruler to study the geometry of the universe. Aims: The BAO have been previously detected using correlation functions and power spectra of the galaxy distribution. We present a new method to detect the real-space structures associated with BAO. These baryon acoustic structures are spherical shells of relatively small density contrast, surrounding high density central regions. Methods: We design a specific wavelet adapted to search for shells, and exploit the physics of the process by making use of two different mass tracers, introducing a specific statistic to detect the BAO features. We show the effect of the BAO signal in this new statistic when applied to the Λ cold dark matter (ΛCDM) model, using an analytical approximation to the transfer function. We confirm the reliability and stability of our method by using cosmological N-body simulations from the MareNostrum Institut de Ciències de l'Espai (MICE). Results: We apply our method to the detection of BAO in a galaxy sample drawn from the Sloan Digital Sky Survey (SDSS). We use the "main" catalogue to trace the shells, and the luminous red galaxies (LRG) as tracers of the high density central regions. Using this new method, we detect, with a high significance, that the LRG in our sample are preferentially located close to the centres of shell-like structures in the density field, with characteristics similar to those expected from BAO. We show that, by stacking selected shells, we can find their characteristic density profile. Conclusions: We delineate a new feature of the cosmic web, the BAO shells. As these are real spatial structures, the BAO phenomenon can be studied in detail by examining those shells.
Full Table 1 is only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/542/A34
Extending the LWS Data Environment: Distributed Data Processing and Analysis
NASA Technical Reports Server (NTRS)
Narock, Thomas
2005-01-01
The final stages of this work saw changes to the original framework, as well as the completion and integration of several data processing services. Initially, it was thought that a peer-to-peer architecture was necessary to make this work possible. The peer-to-peer architecture provided many benefits, including the dynamic discovery of new services that would be continually added. A prototype example was built and, while it showed promise, a major disadvantage was seen in that it was not easily integrated into the existing data environment. While the peer-to-peer system worked well for finding and accessing distributed data processing services, it was found that its use was limited by the difficulty in calling it from existing tools and services. After collaborations with members of the data community, it was determined that our data processing system was of high value and that a new interface should be pursued in order for the community to take full advantage of it. As such, the framework was modified from a peer-to-peer architecture to a more traditional web service approach. Following this change, multiple data processing services were added. These services include such things as coordinate transformations and subsetting of data. A collaboration with the Virtual Heliospheric Observatory (VHO) assisted with integrating the new architecture into the VHO. This allows anyone using the VHO to search for data, and then pass that data through our processing services prior to downloading it. As a second attempt at demonstrating the new system, a collaboration was established with the Collaborative Sun Earth Connector (CoSEC) group at Lockheed Martin. This group is working on a graphical user interface to the Virtual Observatories and data processing software. The intent is to provide a high-level, easy-to-use graphical interface that will allow access to the existing Virtual Observatories and data processing services from one convenient application.
Working with the CoSEC group we provided access to our data processing tools from within their software. This now allows the CoSEC community to take advantage of our services and also demonstrates another means of accessing our system.
NASA Technical Reports Server (NTRS)
Schmeckpeper, K. R.
1987-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Electrical Power Distribution and Control (EPD and C) hardware. The EPD and C hardware performs the functions of distributing, sensing, and controlling 28 volt DC power and of inverting, distributing, sensing, and controlling 117 volt 400 Hz AC power to all Orbiter subsystems from the three fuel cells in the Electrical Power Generation (EPG) subsystem. Volume 2 continues the presentation of IOA analysis worksheets and contains the potential critical items list.
An empirical analysis of the distribution of overshoots in a stationary Gaussian stochastic process
NASA Technical Reports Server (NTRS)
Carter, M. C.; Madison, M. W.
1973-01-01
The frequency distribution of overshoots in a stationary Gaussian stochastic process is analyzed. The primary tools in this analysis are computer simulation and statistical estimation. Computer simulation is used to generate stationary Gaussian stochastic processes with selected autocorrelation functions. An analysis of the simulation results reveals a frequency distribution for overshoots with a functional dependence on the mean and variance of the process. Statistical estimation is then used to estimate the mean and variance of a process. It is shown that, given an autocorrelation function and estimates of the mean and variance of the process, a frequency distribution for the number of overshoots can be estimated.
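The simulation-and-counting step can be illustrated with a minimal sketch. The white-noise process, zero crossing level, and sample size below are illustrative assumptions for the simplest stationary Gaussian case, not the autocorrelated processes studied in the report.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
x = rng.normal(size=n)          # white-noise stationary Gaussian sequence
level = 0.0                     # threshold whose overshoots we count

# an "overshoot" begins at an upcrossing: a sample below the level
# followed by a sample at or above it
upcrossings = np.sum((x[:-1] < level) & (x[1:] >= level))
rate = upcrossings / (n - 1)    # empirical upcrossing rate per step
```

For independent symmetric samples the exact upcrossing probability per step is 0.25; for a correlated process the rate depends on the autocorrelation function, which is the dependence the report estimates.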
Distributional benefit analysis of a national air quality rule.
Post, Ellen S; Belova, Anna; Huang, Jin
2011-06-01
Under Executive Order 12898, the U.S. Environmental Protection Agency (EPA) must perform environmental justice (EJ) reviews of its rules and regulations. EJ analyses address the hypothesis that environmental disamenities are experienced disproportionately by poor and/or minority subgroups. Such analyses typically use communities as the unit of analysis. While community-based approaches make sense when considering where polluting sources locate, they are less appropriate for national air quality rules affecting many sources and pollutants that can travel thousands of miles. We compare exposures and health risks of EJ-identified individuals rather than communities to analyze EPA's Heavy Duty Diesel (HDD) rule as an example national air quality rule. Air pollutant exposures are estimated within grid cells by air quality models; all individuals in the same grid cell are assigned the same exposure. Using an inequality index, we find that inequality within racial/ethnic subgroups far outweighs inequality between them. We find, moreover, that the HDD rule leaves between-subgroup inequality essentially unchanged. Changes in health risks depend also on subgroups' baseline incidence rates, which differ across subgroups. Thus, health risk reductions may not follow the same pattern as reductions in exposure. These results are likely representative of other national air quality rules as well. PMID:21776207
CARES/LIFE Ceramics Analysis and Reliability Evaluation of Structures Life Prediction Program
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Powers, Lynn M.; Janosik, Lesley A.; Gyekenyesi, John P.
2003-01-01
This manual describes the Ceramics Analysis and Reliability Evaluation of Structures Life Prediction (CARES/LIFE) computer program. The program calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. CARES/LIFE is an extension of the CARES (Ceramic Analysis and Reliability Evaluation of Structures) computer program. The program uses results from MSC/NASTRAN, ABAQUS, and ANSYS finite element analysis programs to evaluate component reliability due to inherent surface and/or volume type flaws. CARES/LIFE accounts for the phenomenon of subcritical crack growth (SCG) by utilizing the power law, Paris law, or Walker law. The two-parameter Weibull cumulative distribution function is used to characterize the variation in component strength. The effects of multiaxial stresses are modeled by using either the principle of independent action (PIA), the Weibull normal stress averaging method (NSA), or the Batdorf theory. Inert strength and fatigue parameters are estimated from rupture strength data of naturally flawed specimens loaded in static, dynamic, or cyclic fatigue. The probabilistic time-dependent theories used in CARES/LIFE, along with the input and output for CARES/LIFE, are described. Example problems to demonstrate various features of the program are also included.
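The strength-variation model named in the abstract, the two-parameter Weibull cumulative distribution function, can be written down directly. This is a generic sketch with illustrative parameter names, not code from CARES/LIFE itself.

```python
import math

def weibull_failure_probability(sigma, sigma_0, m):
    """Two-parameter Weibull CDF: P_f = 1 - exp(-(sigma/sigma_0)**m).

    sigma   : applied (effective) stress
    sigma_0 : characteristic strength (scale parameter)
    m       : Weibull modulus (shape parameter)
    """
    return 1.0 - math.exp(-((sigma / sigma_0) ** m))
```

At sigma equal to sigma_0 the failure probability is 1 - 1/e (about 0.632) regardless of the modulus, which is why sigma_0 is called the characteristic strength.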
Advancing Collaborative Climate Studies through Globally Distributed Geospatial Analysis
NASA Astrophysics Data System (ADS)
Singh, R.; Percivall, G.
2009-12-01
(note: acronym glossary at end of abstract) For scientists to have confidence in the veracity of data sets and computational processes not under their control, operational transparency must be much greater than previously required. Being able to have a universally understood and machine-readable language for describing such things as the completeness of metadata, data provenance and uncertainty, and the discrete computational steps in a complex process takes on increased importance. OGC has been involved with technological issues associated with climate change since 2005 when we, along with the IEEE Committee on Earth Observation, began a close working relationship with GEO and GEOSS (http://earthobservations.org). GEO/GEOSS provide the technology platform to GCOS, which in turn represents the earth observation community to UNFCCC. OGC and IEEE are the organizers of the GEO/GEOSS Architecture Implementation Pilot (see http://www.ogcnetwork.net/AIpilot). This continuing work involves working closely with GOOS (Global Ocean Observing System) and WMO (World Meteorological Organization). This session reports on the findings of recent work within the OGC's community of software developers and users to apply geospatial web services to the climate studies domain. The value of this work is to evolve OGC web services, moving from data access and query to geo-processing and workflows. Two projects will be described, the GEOSS AIP-2 and the CCIP. AIP is a task of the GEOSS Architecture and Data Committee. During its duration, two GEO Tasks defined the project: AIP-2 began as GEO Task AR-07-02, to lead the incorporation of contributed components consistent with the GEOSS Architecture using a GEO Web Portal and a Clearinghouse search facility to access services through GEOSS Interoperability Arrangements in support of the GEOSS Societal Benefit Areas.
AIP-2 concluded as GEO Task AR-09-01b, to develop and pilot new process and infrastructure components for the GEOSS Common Infrastructure and the broader GEOSS architecture. Of specific interest to this session is the work on geospatial workflows, geo-processing, and data discovery and access. CCIP demonstrates standards-based interoperability between geospatial applications in the service of climate change analysis. CCIP is planned to be a yearly exercise. It consists of a network of online data services (WCS, WFS, SOS), analysis services (WPS, WCPS, WMS), and clients that exercise those services. In 2009, CCIP focuses on Australia, and the initial application of existing OGC services to climate studies. The results of the 2009 CCIP will serve as requirements for more complex geo-processing services to be developed for CCIP 2010. The benefits of CCIP include accelerating the implementation of the GCOS, and building confidence that implementations using multi-vendor interoperable technologies can help resolve vexing climate change questions. AIP-2: Architecture Implementation Pilot, Phase 2; CCIP: Climate Challenge Integration Plugfest; GEO: Group on Earth Observations; GEOSS: Global Earth Observing System of Systems; GCOS: Global Climate Observing System; OGC: Open Geospatial Consortium; SOS: Sensor Observation Service; WCS: Web Coverage Service; WCPS: Web Coverage Processing Service; WFS: Web Feature Service; WMS: Web Mapping Service
Radar signal analysis of ballistic missile with micro-motion based on time-frequency distribution
NASA Astrophysics Data System (ADS)
Wang, Jianming; Liu, Lihua; Yu, Hua
2015-12-01
The micro-motion of ballistic missile targets induces micro-Doppler modulation on the radar return signal, which is a unique feature for the warhead discrimination during flight. In order to extract the micro-Doppler feature of ballistic missile targets, time-frequency analysis is employed to process the micro-Doppler modulated time-varying radar signal. The images of time-frequency distribution (TFD) reveal the micro-Doppler modulation characteristic very well. However, there are many existing time-frequency analysis methods to generate the time-frequency distribution images, including the short-time Fourier transform (STFT), Wigner distribution (WD) and Cohen class distribution, etc. Under the background of ballistic missile defence, the paper aims at working out an effective time-frequency analysis method for ballistic missile warhead discrimination from the decoys.
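As a minimal illustration of how an STFT-based time-frequency distribution image is produced, the sketch below uses SciPy. The pure 120 Hz tone standing in for a micro-Doppler-modulated radar return, and the sampling parameters, are assumptions for demonstration only.

```python
import numpy as np
from scipy import signal

fs = 1000.0                          # sample rate in Hz (illustrative)
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 120.0 * t)    # tone standing in for a radar return

# short-time Fourier transform -> time-frequency distribution image
f, tt, Z = signal.stft(x, fs=fs, nperseg=256)
tfd = np.abs(Z)                      # magnitude spectrogram

# frequency bin with peak energy in a middle time slice
peak_f = f[np.argmax(tfd[:, tfd.shape[1] // 2])]
```

A micro-Doppler signature would appear as a time-varying ridge in `tfd` rather than the constant line produced here; WD and Cohen-class distributions trade the STFT's leakage for cross-term artifacts.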
Numerical analysis of atomic density distribution in arc driven negative ion sources
Yamamoto, T.; Shibata, T.; Hatayama, A.; Kashiwagi, M.; Hanada, M.; Sawada, K.
2014-02-15
The purpose of this study is to calculate the atomic (H⁰) density distribution in the JAEA 10 ampere negative ion source. A collisional radiative model is developed for the calculation of the H⁰ density distribution. The non-equilibrium feature of the electron energy distribution function (EEDF), which mainly determines the H⁰ production rate, is included by substituting the EEDF calculated from a 3D electron transport analysis. In this paper, the H⁰ production rate, the ionization rate, and the density distribution in the source chamber are calculated. In the region where high energy electrons exist, H⁰ production and ionization are enhanced. The calculated H⁰ density distribution without the effect of H⁰ transport is relatively small in the upper region. In the next step, this effect should be taken into account to obtain a more realistic H⁰ distribution.
NASA Technical Reports Server (NTRS)
Gyekenyesi, J. P.
1986-01-01
A computer program was developed for calculating the statistical fast fracture reliability and failure probability of ceramic components. The program includes the two-parameter Weibull material fracture strength distribution model, using the principle of independent action for polyaxial stress states and Batdorf's shear-sensitive as well as shear-insensitive crack theories, all for volume distributed flaws in macroscopically isotropic solids. Both penny-shaped cracks and Griffith cracks are included in the Batdorf shear-sensitive crack response calculations, using Griffith's maximum tensile stress or critical coplanar strain energy release rate criteria to predict mixed mode fracture. Weibull material parameters can also be calculated from modulus of rupture bar tests, using the least squares method with known specimen geometry and fracture data. The reliability prediction analysis uses MSC/NASTRAN stress, temperature and volume output, obtained from the use of three-dimensional, quadratic, isoparametric, or axisymmetric finite elements. The statistical fast fracture theories employed, along with selected input and output formats and options, are summarized. An example problem to demonstrate various features of the program is included.
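The least-squares estimation of Weibull parameters from rupture data mentioned above can be sketched via the standard linearization of the Weibull CDF. The plotting positions and synthetic data below are illustrative assumptions, not the program's implementation.

```python
import numpy as np

def weibull_params_lsq(strengths):
    """Estimate Weibull modulus m and characteristic strength sigma_0
    by least squares on the linearized CDF:
        ln(-ln(1 - F)) = m*ln(sigma) - m*ln(sigma_0)
    """
    s = np.sort(np.asarray(strengths, dtype=float))
    n = s.size
    F = (np.arange(1, n + 1) - 0.5) / n      # simple plotting positions
    y = np.log(-np.log(1.0 - F))
    m, c = np.polyfit(np.log(s), y, 1)       # slope m, intercept c
    return m, np.exp(-c / m)

# synthetic rupture strengths with known parameters (m=10, sigma_0=300)
rng = np.random.default_rng(0)
u = rng.uniform(size=2000)
sample = 300.0 * (-np.log(1.0 - u)) ** (1.0 / 10.0)
m_hat, s0_hat = weibull_params_lsq(sample)
```

The slope of the Weibull plot recovers the modulus and the intercept recovers the scale; maximum likelihood is the usual refinement when the specimen count is small.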
Regression Analysis of Physician Distribution to Identify Areas of Need: Some Preliminary Findings.
ERIC Educational Resources Information Center
Morgan, Bruce B.; And Others
A regression analysis was conducted of factors that help to explain the variance in physician distribution and which identify those factors that influence the maldistribution of physicians. Models were developed for different geographic areas to determine the most appropriate unit of analysis for the Western Missouri Area Health Education Center…
Neti, Prasad V.S.V.; Howell, Roger W.
2008-01-01
Recently, the distribution of radioactivity among a population of cells labeled with 210Po was shown to be well described by a log normal distribution function (J Nucl Med 47, 6 (2006) 1049-1058) with the aid of an autoradiographic approach. To ascertain the influence of Poisson statistics on the interpretation of the autoradiographic data, the present work reports a detailed statistical analysis of these data. Methods: The measured distributions of alpha particle tracks per cell were subjected to statistical tests with Poisson (P), log normal (LN), and Poisson-log normal (P-LN) models. Results: The LN distribution function best describes the distribution of radioactivity among cell populations exposed to 0.52 and 3.8 kBq/mL 210Po-citrate. When cells were exposed to 67 kBq/mL, the P-LN distribution function gave a better fit; however, the underlying activity distribution remained log normal. Conclusions: The present analysis generally provides further support for the use of LN distributions to describe the cellular uptake of radioactivity. Care should be exercised when analyzing autoradiographic data on activity distributions to ensure that Poisson processes do not distort the underlying LN distribution. PMID:16741316
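The Poisson-log normal idea can be sketched by simulation: if underlying cellular activities are log normal and observed track counts are Poisson given the activity, the counts are overdispersed relative to a pure Poisson with the same mean. The parameters below are illustrative, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
# hypothetical per-cell activities: log normal, as in the paper's LN model
lam = rng.lognormal(mean=1.0, sigma=0.7, size=n)
# observed alpha-track counts: Poisson blurring of the underlying activity
counts = rng.poisson(lam)

# for a Poisson-log normal mixture:
#   E[N] = E[lam]  but  Var[N] = E[lam] + Var[lam] > E[N]
mean_n = counts.mean()
var_n = counts.var()
```

A pure Poisson would give `var_n` approximately equal to `mean_n`; the excess variance is the signature of the underlying LN activity distribution that the statistical tests look for.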
NASA Technical Reports Server (NTRS)
Gurgiolo, Chris; Vinas, Adolfo F.
2009-01-01
This paper presents a spherical harmonic analysis of the plasma velocity distribution function using high-angular, energy, and time resolution Cluster data obtained from the PEACE spectrometer instrument to demonstrate how this analysis models the particle distribution function and its moments and anisotropies. The results show that spherical harmonic analysis produced a robust physical representation model of the velocity distribution function, resolving the main features of the measured distributions. From the spherical harmonic analysis, a minimum set of nine spectral coefficients was obtained from which the moment (up to the heat flux), anisotropy, and asymmetry calculations of the velocity distribution function were obtained. The spherical harmonic method provides a potentially effective "compression" technique that can be easily carried out onboard a spacecraft to determine the moments and anisotropies of the particle velocity distribution function for any species. These calculations were implemented using three different approaches, namely, the standard traditional integration, the spherical harmonic (SPH) spectral coefficients integration, and the singular value decomposition (SVD) on the spherical harmonic methods. A comparison among the various methods shows that both SPH and SVD approaches provide remarkable agreement with the standard moment integration method.
NASA Astrophysics Data System (ADS)
Clough, Emily; Bell, Derek
2016-02-01
This letter presents a distributive environmental justice analysis of unconventional gas development in the area of Pennsylvania lying over the Marcellus Shale, the largest shale gas formation in play in the United States. The extraction of shale gas using unconventional wells, which are hydraulically fractured (fracking), has increased dramatically since 2005. As the number of wells has grown, so have concerns about the potential public health effects on nearby communities. These concerns make shale gas development an environmental justice issue. This letter examines whether the hazards associated with proximity to wells and the economic benefits of shale gas production are fairly distributed. We distinguish two types of distributive environmental justice: traditional and benefit sharing. We ask the traditional question: are there a disproportionate number of minority or low-income residents in areas near to unconventional wells in Pennsylvania? However, we extend this analysis in two ways: we examine income distribution and level of education; and we compare before and after shale gas development. This contributes to discussions of benefit sharing by showing how the income distribution of the population has changed. We use a binary dasymetric technique to remap the data from the 2000 US Census and the 2009–2013 American Communities Survey and combine that data with a buffer containment analysis of unconventional wells to compare the characteristics of the population living nearer to unconventional wells with those further away before and after shale gas development. Our analysis indicates that there is no evidence of traditional distributive environmental injustice: there is not a disproportionate number of minority or low-income residents in areas near to unconventional wells.
However, our analysis is consistent with the claim that there is benefit sharing distributive environmental injustice: the income distribution of the population nearer to shale gas wells has not been transformed since shale gas development.
A network analysis of food flows within the United States of America.
Lin, Xiaowen; Dang, Qian; Konar, Megan
2014-05-20
The world food system is globalized and interconnected, in which trade plays an increasingly important role in facilitating food availability. We present a novel application of network analysis to domestic food flows within the USA, a country with global importance as a major agricultural producer and trade power. We find normal node degree distributions and Weibull node strength and betweenness centrality distributions. An unassortative network structure with high clustering coefficients exists. These network properties indicate that the USA food flow network is highly social and well-mixed. However, a power law relationship between node betweenness centrality and node degree indicates potential network vulnerability to the disturbance of key nodes. We perform an equality analysis which serves as a benchmark for global food trade, where the Gini coefficient = 0.579, Lorenz asymmetry coefficient = 0.966, and Hoover index = 0.442. These findings shed insight into trade network scaling and proxy free trade and equitable network architectures. PMID:24773310
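The inequality measures quoted in the abstract can be computed with short routines. These are generic textbook definitions applied to a vector of node flows, not the paper's code.

```python
import numpy as np

def gini(x):
    """Gini coefficient of non-negative flows (0 = perfect equality)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    ranks = np.arange(1, n + 1)
    return 2.0 * np.sum(ranks * x) / (n * np.sum(x)) - (n + 1) / n

def hoover(x):
    """Hoover index: share of total flow to redistribute for equality."""
    x = np.asarray(x, dtype=float)
    return 0.5 * np.sum(np.abs(x / np.sum(x) - 1.0 / x.size))
```

Applied to node strengths of a trade network, values near 0 indicate evenly spread flows and values near 1 indicate concentration in a few hub nodes, the benchmark use the authors describe.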
FASEP ultra-automated analysis of fibre length distribution in glass-fibre-reinforced products
NASA Astrophysics Data System (ADS)
Hartwich, Mark R.; Höhn, Norbert; Mayr, Helga; Sandau, Konrad; Stengler, Ralph
2009-06-01
Reinforced plastic materials are widely used in highly sophisticated applications. The length distribution of the fibres influences the mechanical properties of the final product. A method for automatic determination of this length distribution was developed. After separating the fibres out of the composite material without any damage and preparing them for microscopical analysis, a mosaic of microscope pictures is taken. After image processing and analysis with mathematical methods, a complete statistic of the fibre length distribution can be determined. A correlation between fibre length distribution and mechanical properties, measured e.g. with material tests such as tensile and impact tests, was found. This is a method to optimize the process and the selection of material for plastic parts. As a result, this enhances customer satisfaction and, perhaps much more important, reduces costs for the manufacturer.
Monte Carlo models and analysis of galactic disk gamma-ray burst distributions
NASA Technical Reports Server (NTRS)
Hakkila, Jon
1989-01-01
Gamma-ray bursts are transient astronomical phenomena which have no quiescent counterparts in any region of the electromagnetic spectrum. Although temporal and spectral properties indicate that these events are likely energetic, their unknown spatial distribution complicates astrophysical interpretation. Monte Carlo samples of gamma-ray burst sources are created which belong to Galactic disk populations. Spatial analysis techniques are used to compare these samples to the observed distribution. From this, both quantitative and qualitative conclusions are drawn concerning allowed luminosity and spatial distributions of the actual sample. Although the Burst and Transient Source Experiment (BATSE) on the Gamma Ray Observatory (GRO) will significantly improve knowledge of the gamma-ray burst source spatial characteristics within only a few months of launch, the analysis techniques described herein will not be superseded. Rather, they may be used with BATSE results to obtain detailed information about both the luminosity and spatial distributions of the sources.
NASA Astrophysics Data System (ADS)
Rice, Stephen B.; Chan, Christopher; Brown, Scott C.; Eschbach, Peter; Han, Li; Ensor, David S.; Stefaniak, Aleksandr B.; Bonevich, John; Vladár, András E.; Hight Walker, Angela R.; Zheng, Jiwen; Starnes, Catherine; Stromberg, Arnold; Ye, Jia; Grulke, Eric A.
2013-12-01
This paper reports an interlaboratory comparison that evaluated a protocol for measuring and analysing the particle size distribution of discrete, metallic, spheroidal nanoparticles using transmission electron microscopy (TEM). The study was focused on automated image capture and automated particle analysis. NIST RM8012 gold nanoparticles (30 nm nominal diameter) were measured for area-equivalent diameter distributions by eight laboratories. Statistical analysis was used to (1) assess the data quality without using size distribution reference models, (2) determine reference model parameters for different size distribution reference models and non-linear regression fitting methods and (3) assess the measurement uncertainty of a size distribution parameter by using its coefficient of variation. The interlaboratory area-equivalent diameter mean, 27.6 nm ± 2.4 nm (computed based on a normal distribution), was quite similar to the area-equivalent diameter, 27.6 nm, assigned to NIST RM8012. The lognormal reference model was the preferred choice for these particle size distributions as, for all laboratories, its parameters had lower relative standard errors (RSEs) than the other size distribution reference models tested (normal, Weibull and Rosin-Rammler-Bennett). The RSEs for the fitted standard deviations were two orders of magnitude higher than those for the fitted means, suggesting that most of the parameter estimate errors were associated with estimating the breadth of the distributions. The coefficients of variation for the interlaboratory statistics also confirmed the lognormal reference model as the preferred choice. From quasi-linear plots, the typical range for good fits between the model and cumulative number-based distributions was 1.9 fitted standard deviations less than the mean to 2.3 fitted standard deviations above the mean. 
Automated image capture, automated particle analysis and statistical evaluation of the data and fitting coefficients provide a framework for assessing nanoparticle size distributions using TEM for image acquisition.
NASA Astrophysics Data System (ADS)
Hotate, Kazuo; Watanabe, Ryuji; He, Zuyuan; Kishi, Masato
We have measured the Brillouin frequency shift distribution in a planar lightwave circuit (PLC) by Brillouin optical correlation domain analysis (BOCDA). We have built an experimental system specialized for the measurement of PLCs, realizing a spatial resolution of 5.9 mm with a standard deviation of 0.34 MHz in Brillouin frequency shift (BFS) measurement. From the data obtained in the experiments, we have found that the BFS distribution shape along the waveguide corresponds to its route pattern in the PLC.
Rees, T.F.
1990-01-01
Photon correlation spectroscopy (PCS) utilizes the Doppler frequency shift of photons scattered off particles undergoing Brownian motion to determine the size of colloids suspended in water. Photosedimentation analysis (PSA) measures the time-dependent change in optical density of a suspension of colloidal particles undergoing centrifugation. A description of both techniques, their important underlying assumptions, and their limitations is given. Results for a series of river water samples show that the colloid-size distribution means are statistically identical as determined by both techniques. This also is true of the mass median diameter (MMD), even though MMD values determined by PSA are consistently smaller than those determined by PCS. Because of this small negative bias, the skew parameters for the distributions are generally smaller for the PCS-determined distributions than for the PSA-determined distributions. Smaller polydispersity indices for the distributions are also determined by PCS.
NASA Astrophysics Data System (ADS)
Aritomo, Y.
2009-12-01
We analyze experimental data obtained for the mass distribution of fission fragments in the reactions 36S+238U and 30Si+238U at several incident energies, which were performed by the Japan Atomic Energy Agency (JAEA) group. The analysis of the mass distribution of fission fragments is a powerful tool for understanding the mechanism of the reaction in the heavy and superheavy-mass regions. Using the dynamical model with the Langevin equation, we precisely investigate the incident energy dependence of the mass distribution of fission fragments. This study is the first attempt to treat such experimental data systematically. We also consider the fine structures in the mass distribution of fission fragments caused by the nuclear structure at a low incident energy. It is explained why the mass distribution of fission fragments has different features in the two reactions. The fusion cross sections are also estimated.
Li, Gang; Zhao, Zhe; Wang, Hui-Quan; Lin, Ling; Zhang, Bao-Ju; Wu, Xiao-Rong
2012-07-01
In order to discuss the effect of different distributions of component concentrations on the accuracy of quantitative spectral analysis, ideal absorption spectra of samples with three components were established according to the Lambert-Beer law. Gaussian noise was added to the spectra. Calibration and prediction models were built by partial least squares regression to reflect the unequal modeling and prediction results between different distributions of components. Results show that, in the case of pure linear absorption, the accuracy of the model is related to the distribution of component concentrations. For both the component of interest and the non-tested components, a wider and more uniform concentration coverage in the calibration set is essential for establishing a universal model with satisfactory accuracy. This research supplies theoretical guidance for the reasonable choice of samples with a suitable concentration distribution, which enhances the quality of the model and reduces the prediction error on the prediction set. PMID:23016350
Abanto-Valle, C A; Bandyopadhyay, D; Lachos, V H; Enriquez, I
2010-12-01
A Bayesian analysis of stochastic volatility (SV) models using the class of symmetric scale mixtures of normal (SMN) distributions is considered. In the face of non-normality, this provides an appealing robust alternative to the routine use of the normal distribution. Specific distributions examined include the normal, Student-t, slash, and variance gamma distributions. Using a Bayesian paradigm, an efficient Markov chain Monte Carlo (MCMC) algorithm is introduced for parameter estimation. Moreover, the mixing parameters obtained as a by-product of the scale mixture representation can be used to identify outliers. The methods developed are applied to analyze daily stock return data on the S&P500 index. Bayesian model selection criteria as well as out-of-sample forecasting results reveal that the SV models based on heavy-tailed SMN distributions provide significant improvement in model fit as well as prediction to the S&P500 index data over the usual normal model. PMID:20730043
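The scale-mixture representation at the heart of this class can be sketched by sampling: a Student-t draw is a standard normal draw divided by the square root of an independent gamma mixing variable. The degrees of freedom and sample size below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
nu = 5.0                  # degrees of freedom (illustrative)
n = 200_000

# Student-t as a scale mixture of normals:
#   x = z / sqrt(g),  z ~ N(0, 1),  g ~ Gamma(nu/2, rate = nu/2)
g = rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=n)
x = rng.normal(size=n) / np.sqrt(g)
# small mixing values g inflate the scale of individual draws,
# which is how the mixture produces heavy tails and flags outliers
```

In an MCMC scheme the per-observation mixing variables `g` are sampled alongside the other parameters, which is what makes outlier identification a by-product of estimation.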
Evaluating Domestic Hot Water Distribution System Options with Validated Analysis Models
Weitzel, E.; Hoeschele, E.
2014-09-01
A developing body of work is forming that collects data on domestic hot water consumption, water use behaviors, and the energy efficiency of various distribution systems. A full distribution system model was developed in the Transient System Simulation Tool (TRNSYS), validated using field monitoring data, and then exercised in a number of climates to understand the impact of climate on performance. In this study, the Building America team built upon previous analysis modeling work to evaluate differing distribution systems and the sensitivities of water heating energy and water use efficiency to variations in climate, load, distribution type, insulation, and compact plumbing practices. Overall, 124 different TRNSYS models were simulated. The results of this work are useful in informing future development of water heating best practices guides as well as more accurate (and simulation-time efficient) distribution models for annual whole-house simulation programs.
A mixture of exponentials distribution for a simple and precise assessment of the volcanic hazard
NASA Astrophysics Data System (ADS)
Mendoza-Rosas, A. T.; de La Cruz-Reyna, S.
2009-03-01
The assessment of volcanic hazard is the first step for disaster mitigation. The distribution of repose periods between eruptions provides important information about the probability of new eruptions occurring within given time intervals. The quality of the probability estimate, i.e., of the hazard assessment, depends on the capacity of the chosen statistical model to describe the actual distribution of the repose times. In this work, we use a mixture of exponentials distribution, namely the sum of exponential distributions characterized by the different eruption occurrence rates that may be recognized by inspecting the cumulative number of eruptions with time in specific VEI (Volcanic Explosivity Index) categories. The most striking property of an exponential mixture density is that the shape of the density function is flexible in a way similar to the frequently used Weibull distribution, matching long-tailed distributions and allowing clustering and time dependence of the eruption sequence, with distribution parameters that can be readily obtained from the observed occurrence rates. Thus, the mixture of exponentials turns out to be more precise and much easier to apply than the Weibull distribution. We recommend the use of a mixture of exponentials distribution when regimes with well-defined eruption rates can be identified in the cumulative series of events. As an example, we apply the mixture of exponential distributions to the repose-time sequences between explosive eruptions of the Colima and Popocatépetl volcanoes, México, and compare the results obtained with the Weibull and other distributions.
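The mixture-of-exponentials survival function can be written in a few lines. The rates and weights are assumed to come from the occurrence-rate regimes read off the cumulative eruption series; the function names and numbers here are illustrative, not the authors' code.

```python
import math

def mixture_exp_survival(t, rates, weights):
    """Survival function of a mixture of exponentials:
    S(t) = sum_i w_i * exp(-lambda_i * t), with weights summing to 1."""
    return sum(w * math.exp(-lam * t) for w, lam in zip(weights, rates))

def eruption_probability(t, rates, weights):
    """Probability of at least one eruption within a repose time <= t."""
    return 1.0 - mixture_exp_survival(t, rates, weights)
```

With a single component the mixture reduces to the ordinary exponential (Poisson) repose-time model; additional components with distinct rates reproduce the long tails and clustering the abstract attributes to the Weibull alternative.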
Does a powder surface contain all necessary information for particle size distribution analysis?
Laitinen, Niklas; Antikainen, Osmo; Yliruusi, Jouko
2002-12-01
The aim of this study was to utilise a new approach in which digital image information is used to characterise the particle size distributions of a large set of pharmaceutical powders. A novel optical set-up was employed to create images and calculate a stereometric parameter from the digital images of powder surfaces. Forty granule batches with varying particle sizes and compositions, prepared with fluidised bed granulation, were analysed. The extracted digital image information was then connected to particle size using multivariate modelling. The modelled particle size distributions were compared to particle size determinations obtained with sieve analysis and laser diffraction. The results revealed that the created models corresponded well with the particle size distributions measured with sieve analysis and laser diffraction. This study shows that digital images taken from powder surfaces contain the data needed for particle size distribution analysis. To obtain this information from images, careful consideration has to be given to the imaging conditions. In conclusion, the results of this study suggest that the new approach is a powerful means of analysis in particle size determination. The method is fast, the sample size needed is very small and the technique enables non-destructive analysis of samples. The method is suitable in the particle size range of approximately 20-1500 microm. However, further investigations with a broad range of powders have to be made to obtain information on the possibilities and limitations of the introduced method in powder characterisation. PMID:12453611
powerlaw: A Python Package for Analysis of Heavy-Tailed Distributions
Alstott, Jeff; Bullmore, Ed; Plenz, Dietmar
2014-01-01
Power laws are theoretically interesting probability distributions that are also frequently used to describe empirical data. In recent years, effective statistical methods for fitting power laws have been developed, but appropriate use of these techniques requires significant programming and statistical insight. In order to greatly decrease the barriers to using good statistical methods for fitting power law distributions, we developed the powerlaw Python package. This software package provides easy commands for basic fitting and statistical analysis of distributions. Notably, it also seeks to support a variety of user needs by being exhaustive in the options available to the user. The source code is publicly available and easily extensible. PMID:24489671
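As a minimal illustration of the kind of fit the package automates, the continuous maximum-likelihood exponent estimator (the standard Clauset-style form) can be written in a few lines. This sketch assumes `xmin` is given rather than optimized, which is one of the steps the package handles for you.

```python
import math

def fit_power_law_alpha(data, xmin):
    """Continuous maximum-likelihood estimate of the power-law exponent:
    alpha = 1 + n / sum(ln(x_i / xmin)) over the tail x_i >= xmin."""
    tail = [x for x in data if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)
```

With the package itself, the equivalent workflow is roughly `fit = powerlaw.Fit(data)` followed by `fit.power_law.alpha` and `fit.distribution_compare('power_law', 'exponential')` for likelihood-ratio model comparison, per the paper's description.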
Analysis of the 3D distribution of stacked self-assembled quantum dots by electron tomography
2012-01-01
The 3D distribution of self-assembled stacked quantum dots (QDs) is a key parameter to obtain the highest performance in a variety of optoelectronic devices. In this work, we have measured this distribution in 3D using a combined procedure of needle-shaped specimen preparation and electron tomography. We show that conventional 2D measurements of the distribution of QDs are not reliable, and only 3D analysis allows an accurate correlation between the growth design and the structural characteristics. PMID:23249477
Analysis and synthesis of distributed-lumped-active networks by digital computer
NASA Technical Reports Server (NTRS)
1973-01-01
The use of digital computational techniques in the analysis and synthesis of DLA (distributed lumped active) networks is considered. This class of networks consists of three distinct types of elements, namely, distributed elements (modeled by partial differential equations), lumped elements (modeled by algebraic relations and ordinary differential equations), and active elements (modeled by algebraic relations). Such a characterization is applicable to a broad class of circuits, especially including those usually referred to as linear integrated circuits, since the fabrication techniques for such circuits readily produce elements which may be modeled as distributed, as well as the more conventional lumped and active ones.
An information flow analysis of a distributed information system for space medical support.
Zhang, Tao; Aranzamendez, Gina; Rinkus, Susan; Gong, Yang; Rukab, Jamie; Johnson-Throop, Kathy A; Malin, Jane T; Zhang, Jiajie
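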
2004-01-01
In this study, we applied the methodology grounded in human-centered distributed cognition principles to the information flow analysis of a highly intensive, distributed and complex environment--the Biomedical Engineer (BME) console system at NASA Johnson Space Center. This system contains disparate human and artificial agents and artifacts. Users and tasks of this system were analyzed. An ethnographic study and a detailed communication pattern analysis were conducted to gain deeper insight and better understanding of the information flow patterns and the organizational memory of the current BME console system. From this study, we identified some major problems and offered recommendations to improve the efficiency and effectiveness of this system. We believe that this analysis methodology can be used in other distributed information systems, such as a healthcare environment. PMID:15360961
Design of a ridge filter structure based on the analysis of dose distributions.
Fujimoto, Rintaro; Takayanagi, Taisuke; Fujitaka, Shinichiro
2009-07-01
Dose distributions distorted by a periodic structure, such as a ridge filter, are analytically investigated. Based on the beam optics, the fluence distributions of scanned beams passing through the ridge filter are traced. It is shown that the periodic lateral dose distribution blurred by multiple Coulomb scattering can be expressed by a sum of cosine functions through Fourier transform. The result shows that the dose homogeneity decreases exponentially as the period of the structure becomes longer. This analysis is applied to the example case of a mini-ridge filter. The mini-ridge filter is designed to broaden sharp Bragg peaks for an energy-stacking irradiation method. The dose distributions depend on the period of the ridge filter structure and the angular straggling at the ridge filter position. Several cases are prepared where the period and angular straggling are supposed to be probable values. In these cases, the lateral distributions obtained by the analytical method are compared to Monte Carlo simulation results. Both distributions show good agreement with each other within 1%, which means that this analysis allows estimation of the dose distribution downstream of the ridge filter quantitatively. The appropriate period of grooves and scatterer width can be determined which ensures sufficient homogeneity. PMID:19531845
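A minimal sketch of the stated Fourier result, assuming a Gaussian lateral blur of standard deviation `sigma`: the k-th cosine harmonic of a pattern with spatial period L is attenuated by exp(-2*pi^2*k^2*sigma^2/L^2), so the residual ripple grows as the period lengthens. The Fourier coefficient values in the test are hypothetical, not taken from the paper.

```python
import math

def harmonic_attenuation(k, period, sigma):
    """Gaussian blur of std. dev. sigma attenuates the k-th cosine
    harmonic of a pattern with period L by exp(-2*pi^2*k^2*sigma^2/L^2)."""
    return math.exp(-2.0 * math.pi ** 2 * k ** 2 * sigma ** 2 / period ** 2)

def dose_ripple(period, sigma, coeffs):
    """Worst-case peak-to-mean ripple of the blurred periodic dose:
    sum of attenuated Fourier coefficient magnitudes (coeffs[k-1] is
    the k-th harmonic amplitude relative to the mean dose)."""
    return sum(abs(c) * harmonic_attenuation(k, period, sigma)
               for k, c in enumerate(coeffs, start=1))
```

This reproduces the qualitative conclusion of the abstract: for fixed scattering (fixed `sigma`), shortening the ridge-filter period drives the ripple down exponentially, so a sufficiently short period guarantees the required homogeneity.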
NASA Technical Reports Server (NTRS)
Gyekenyesi, J. P.
1985-01-01
A computer program was developed for calculating the statistical fast fracture reliability and failure probability of ceramic components. The program includes the two-parameter Weibull material fracture strength distribution model, using the principle of independent action for polyaxial stress states and Batdorf's shear-sensitive as well as shear-insensitive crack theories, all for volume distributed flaws in macroscopically isotropic solids. Both penny-shaped cracks and Griffith cracks are included in the Batdorf shear-sensitive crack response calculations, using Griffith's maximum tensile stress or critical coplanar strain energy release rate criteria to predict mixed mode fracture. Weibull material parameters can also be calculated from modulus of rupture bar tests, using the least squares method with known specimen geometry and fracture data. The reliability prediction analysis uses MSC/NASTRAN stress, temperature and volume output, obtained from the use of three-dimensional, quadratic, isoparametric, or axisymmetric finite elements. The statistical fast fracture theories employed, along with selected input and output formats and options, are summarized. An example problem to demonstrate various features of the program is included.
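The volume-flaw Weibull reliability evaluation described above can be sketched for the simplest case: a two-parameter Weibull strength model with tensile element stresses combined in the principle-of-independent-action style. The element volumes and stresses below are placeholders, not output of the actual finite-element program.

```python
import math

def weibull_failure_probability(elements, m, sigma0):
    """Fast-fracture failure probability for volume-distributed flaws:
    Pf = 1 - exp(-sum_e V_e * (sigma_e / sigma0)^m), summing only
    tensile element stresses sigma_e > 0 over element volumes V_e.
    m is the Weibull modulus, sigma0 the characteristic strength."""
    risk = sum(v * (s / sigma0) ** m for v, s in elements if s > 0.0)
    return 1.0 - math.exp(-risk)
```

In the full program each element's (volume, stress) pair would come from MSC/NASTRAN output, and the shear-sensitive Batdorf variants replace the simple stress sum with an integral over crack orientations.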
Hydraulic model analysis of water distribution system, Rockwell International, Rocky Flats, Colorado
Perstein, J.; Castellano, J.A.
1989-01-20
Rockwell International requested an analysis of the existing plant site water supply distribution system at Rocky Flats, Colorado, to determine its adequacy. On September 26--29, 1988, Hughes Associates, Inc., Fire Protection Engineers, accompanied by Rocky Flats Fire Department engineers and suppression personnel, conducted water flow tests at the Rocky Flats plant site. Thirty-seven flows from various points throughout the plant site were taken on the existing domestic supply/fire main installation to assure comprehensive and thorough representation of the Rocky Flats water distribution system capability. The analysis was completed in four phases which are described, together with a summary of general conclusions and recommendations.
NASA Astrophysics Data System (ADS)
Ogawa, Yutaro; Ikeda, Akira; Kotani, Kiyoshi; Jimbo, Yasuhiko
In this study, we propose an EEG phase synchronization analysis that includes not only the average strength of synchronization but also its distribution and directions, under conditions in which emotion is evoked by musical stimuli. The experiment is performed with two different musical stimuli that evoke happiness or sadness for 150 seconds. It is found that while the average strength of synchronization indicates no difference between the right and left sides of the frontal lobe during the happy stimulus, the distribution and directions indicate significant differences. Therefore, the proposed analysis is useful for detecting emotional condition because it provides information that cannot be obtained from the average strength of synchronization alone.
Shabani, Farzin; Kumar, Lalit
2014-01-01
Using CLIMEX and the Taguchi Method, a process-based niche model was developed to estimate potential distributions of Phoenix dactylifera L. (date palm), an economically important crop in many countries. Development of the model was based on both its native and invasive distribution and validation was carried out in terms of its extensive distribution in Iran. To identify the model parameters having greatest influence on the distribution of date palm, a sensitivity analysis was carried out. Changes in suitability were established by mapping of regions where the estimated distribution changed with parameter alterations. This facilitated the assessment of certain areas in Iran where parameter modifications impacted the most, particularly in relation to suitable and highly suitable locations. Parameter sensitivities were also evaluated by the calculation of area changes within the suitable and highly suitable categories. The upper optimal temperature (DV2), high temperature limit (DV3), upper optimal soil moisture (SM2) and high soil moisture limit (SM3) had the greatest impact on sensitivity, while other parameters showed relatively less sensitivity or were insensitive to change. For an accurate fit in species distribution models, highly sensitive parameters require more extensive research and data collection methods. Results of this study demonstrate a more cost effective method for developing date palm distribution models, an integral element in species management, and may prove useful for streamlining requirements for data collection in potential distribution modeling for other species as well. PMID:24722140
Sensitivity Analysis of CLIMEX Parameters in Modelling Potential Distribution of Lantana camara L.
Taylor, Subhashni; Kumar, Lalit
2012-01-01
A process-based niche model of L. camara L. (lantana), a highly invasive shrub species, was developed to estimate its potential distribution using CLIMEX. Model development was carried out using its native and invasive distribution and validation was carried out with the extensive Australian distribution. A good fit was observed, with 86.7% of herbarium specimens collected in Australia occurring within the suitable and highly suitable categories. A sensitivity analysis was conducted to identify the model parameters that had the most influence on lantana distribution. The changes in suitability were assessed by mapping the regions where the distribution changed with each parameter alteration. This allowed an assessment of where, within Australia, the modification of each parameter was having the most impact, particularly in terms of the suitable and highly suitable locations. The sensitivity of various parameters was also evaluated by calculating the changes in area within the suitable and highly suitable categories. The limiting low temperature (DV0), limiting high temperature (DV3) and limiting low soil moisture (SM0) showed highest sensitivity to change. The other model parameters were relatively insensitive to change. Highly sensitive parameters require extensive research and data collection to be fitted accurately in species distribution models. The results from this study can inform more cost effective development of species distribution models for lantana. Such models form an integral part of the management of invasive species and the results can be used to streamline data collection requirements for potential distribution modelling. PMID:22815881
Jin, Qiang; Yang, Yan; Dong, Xianbin; Fang, Jimin
2016-01-01
Many models (e.g., the Langmuir model, Freundlich model and surface complexation model) have been successfully used to explain the mechanism of metal ion adsorption on pure mineral materials. These materials usually have a homogeneous surface where all sites have the same adsorption energies. However, such models are hardly appropriate for describing adsorption on heterogeneous surfaces (e.g., sediment surfaces); site energy distribution analysis can be used instead. In the present study, site energy distribution analysis was used to describe the surface properties and adsorption behavior of the non-residual and residual components extracted from natural aquatic sediment samples. The residues were prepared "in-situ" by using the sequential extraction procedure. The present study is intended to investigate the roles of different components, and the change of site energy distribution at different temperatures, of the sediment samples in controlling Cu(II) adsorption. The results of the site energy distribution analysis indicated, firstly, that the sorption sites of iron/manganese hydrous oxides (IMHO) and organic matter (OM) have higher energy. Secondly, the light fraction (LF) and carbonates have little influence on the site energy distribution. Finally, site energies increased with increasing temperature. In particular, low temperature (5 °C) significantly influenced the site energies of IMHO and OM, and also had an obvious effect on the energy distribution of the sediments after removing target components. The site energy distribution analysis proved to be a useful method for further understanding the energetic characteristics of sediment in comparison with those previously obtained. PMID:26552542
Igo, Robert P; Wijsman, Ellen M
2008-02-01
Variance-components (VC) linkage analysis is a powerful model-free method for assessing linkage, but the distribution of VC logarithm of the odds ratio (LOD) scores may deviate substantially from the assumed asymptotic distribution. Typically, the null distribution of the VC-LOD score and other linkage statistics has been estimated by generating new genotype data independently of the trait data, and computing a linkage statistic for many such marker-simulated data sets. However, marker simulation is susceptible to errors in the assumed marker and map model and is computationally intensive. Here, we describe a method for generating posterior distributions of linkage statistics through simulation of trait data based on the original sample and on results from an initial scan using a Bayesian Markov-chain Monte Carlo (MCMC) approach for oligogenic segregation analysis. We use samples of oligogenic trait models taken from the posterior distribution to generate new samples of trait data, which were paired with the original marker data for analysis. Empirical P-values obtained from trait and marker simulation were similar when derived for several strong linkage signals from published linkage scans, and for analysis of data with a known, simulated, trait model. Furthermore, trait simulation produces the expected null distribution of VC-LOD scores and is computationally fast when marker identity-by-descent estimates from the original data could be reused. These results suggest that trait simulation gives valid estimates of statistical significance of linkage signals. Finally, these results also demonstrate the feasibility of obtaining empirical significance levels for evaluating Bayesian oligogenic linkage signals with either marker or trait simulation. PMID:17849492
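The empirical significance step common to both marker and trait simulation can be sketched as the usual plus-one-corrected tail count over null replicates; the LOD values below are illustrative only.

```python
def empirical_p_value(observed_lod, simulated_lods):
    """Empirical P-value of an observed linkage statistic against a set
    of null-replicate statistics. The +1 correction keeps the estimate
    strictly positive even when no replicate exceeds the observation."""
    exceed = sum(1 for lod in simulated_lods if lod >= observed_lod)
    return (1 + exceed) / (1 + len(simulated_lods))
```

The two approaches compared in the abstract differ only in how `simulated_lods` is produced: by simulating new marker genotypes, or by simulating new trait data from posterior oligogenic models while reusing the original marker data.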
New approach on analysis of pathologic cardiac murmurs based on WPD energy distribution.
Jiang, Zhongwei; Tao, Ting; Wang, Haibin
2014-01-01
In this paper, an approach on analysis of the pathologic cardiac murmurs for congenital heart defects was proposed based on the wavelet packet (WP) technique. Considering the difference of the energy intensity distributions for the innocent and pathologic murmurs in frequency domain, the WP decomposition was introduced and the WP energies at each frequency band were calculated and compared. Based on the analysis of a large amount of clinic heart sound data, the murmurs energy distributions were divided into five frequency bands, and the relative evaluation indexes for cardiac murmurs (ICM) were proposed for analysis of the pathologic murmurs. Finally, the threshold values between the innocent and pathologic cardiac murmurs were determined based on the statistical results of the normal heart sounds. The analysis results validate the proposed evaluation indexes and the corresponding thresholds. PMID:25516123
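A hedged sketch of the band-energy idea, using a two-level Haar wavelet packet in natural node order. The paper's actual wavelet basis, number of levels, five-band grouping and ICM index definitions are not specified here; this only illustrates how relative WP band energies are computed.

```python
import math

def haar_split(x):
    """One orthonormal Haar step: approximation and detail halves
    (requires even length)."""
    a = [(x[i] + x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
    d = [(x[i] - x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
    return a, d

def wp_band_energies(signal, levels):
    """Relative energy per wavelet-packet band after `levels` full
    decompositions (signal length must be divisible by 2**levels).
    A production version would use a clinically chosen wavelet and
    frequency-ordered nodes."""
    bands = [list(signal)]
    for _ in range(levels):
        bands = [half for b in bands for half in haar_split(b)]
    energies = [sum(c * c for c in b) for b in bands]
    total = sum(energies)
    return [e / total for e in energies]
```

Because the Haar steps are orthonormal, total energy is conserved, so the relative band energies always sum to one; murmur indexes would then be formed from ratios of selected band energies.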
NASA Astrophysics Data System (ADS)
Egozcue, J. J.; Pawlowsky-Glahn, V.; Ortego, M. I.
2005-03-01
Standard practice of wave-height hazard analysis often pays little attention to the uncertainty of assessed return periods and occurrence probabilities. This fact favors the opinion that, when large events happen, the hazard assessment should change accordingly. However, uncertainty of the hazard estimates is normally able to hide the effect of those large events. This is illustrated using data from the Mediterranean coast of Spain, where the last years have been extremely disastrous. Thus, it is possible to compare the hazard assessment based on data previous to those years with the analysis including them. With our approach, no significant change is detected when the statistical uncertainty is taken into account. The hazard analysis is carried out with a standard model. Time-occurrence of events is assumed Poisson distributed. The wave-height of each event is modelled as a random variable which upper tail follows a Generalized Pareto Distribution (GPD). Moreover, wave-heights are assumed independent from event to event and also independent of their occurrence in time. A threshold for excesses is assessed empirically. The other three parameters (Poisson rate, shape and scale parameters of GPD) are jointly estimated using Bayes' theorem. Prior distribution accounts for physical features of ocean waves in the Mediterranean sea and experience with these phenomena. Posterior distribution of the parameters allows to obtain posterior distributions of other derived parameters like occurrence probabilities and return periods. Predictives are also available. Computations are carried out using the program BGPE v2.0.
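The stated occurrence model, Poisson arrivals combined with Generalized Pareto excesses over a threshold, yields return periods as sketched below. Parameter values in the test are hypothetical, not posterior estimates from the study, and the Bayesian uncertainty propagation itself is not reproduced.

```python
import math

def gpd_exceedance(x, threshold, shape, scale):
    """P(X > x | X > threshold) for a Generalized Pareto upper tail
    with shape xi and scale sigma; shape -> 0 gives the exponential tail."""
    z = (x - threshold) / scale
    if abs(shape) < 1e-12:
        return math.exp(-z)
    arg = 1.0 + shape * z
    return arg ** (-1.0 / shape) if arg > 0.0 else 0.0

def return_period(x, threshold, shape, scale, poisson_rate):
    """Mean return period of events exceeding x, with threshold
    exceedances arriving as a Poisson process (events per year)."""
    p = gpd_exceedance(x, threshold, shape, scale)
    return float('inf') if p == 0.0 else 1.0 / (poisson_rate * p)
```

In the Bayesian setting of the abstract, this calculation would be repeated over posterior draws of (rate, shape, scale), producing a posterior distribution of return periods rather than a single number.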
Doris, E.; Krasko, V.A.
2012-10-01
State and local policymakers show increasing interest in spurring the development of customer-sited distributed generation (DG), in particular solar photovoltaic (PV) markets. Prompted by that interest, this analysis examines the use of state policy as a tool to support the development of a robust private investment market. This analysis builds on previous studies that focus on government subsidies to reduce installation costs of individual projects and provides an evaluation of the impacts of policies on stimulating private market development.
Li, Gang; Zhao, Zhe; Wang, Hui-quan; Lin, Ling; Zhang, Bao-Ju; Wu, Xiao-rong
2012-04-01
In order to ensure the feasibility of complex liquid spectroscopy analysis, to quantify the accuracy gain of multi-wavelength modeling, and to determine the appropriate concentration distribution for building a high-quality and general quantitative analysis model, the precision of detecting composition concentration by spectral analysis is illustrated through an error analysis that takes into account three contributions: spectral instrument noise, multi-wavelength modeling, and the distribution of composition concentration. Concentration resolution analysis gives the concentration resolution that can be achieved for a given spectrometer noise level, and also provides the theoretical basis for selecting a spectrometer that meets the resolution requirement of quantitative analysis. The over-sampling technique indicates that modeling with multiple wavelengths can achieve higher concentration detection sensitivity. The sparse-dense ratio and Euclidean distance of both measured and non-measured components provide theoretical guidance for choosing a suitable concentration distribution, which improves the model's quality and reduces the prediction error of the sample set. PMID:22715788
An improved fault analysis algorithm for unbalanced multi-phase power distribution systems
Halpin, S.M.; Grigsby, L.L.; Gross, C.A.; Nelms, R.M.
1994-07-01
The results of an improved method for fault calculations in unbalanced multi-phase power distribution systems containing non-utility generators and large induction motor loads are presented in this paper. The method utilizes a combined time- and frequency-domain analysis approach to produce results that are superior to those obtained in classical fault analysis without demanding the large increase in computer time associated with complete time-domain solutions. Sources and loads can be represented by either classical frequency-domain models or detailed differential equation models. The potentially unbalanced power distribution system is represented by an admittance matrix formed using a linear graph-based application of ac circuit theory. The time-domain differential equation source and load models are interfaced with the frequency-domain distribution system model using time series analyses to estimate equivalent voltage and current phasors from discrete data sets.
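The phasor-estimation step, extracting equivalent voltage and current phasors from discrete time-domain data, can be sketched with a single-bin DFT over one fundamental cycle. This is a generic textbook form used to illustrate the time-/frequency-domain interface, not the paper's exact algorithm.

```python
import math
import cmath

def estimate_phasor(samples):
    """Estimate the fundamental-frequency phasor A*exp(j*phi) from one
    full cycle of N evenly spaced samples of A*cos(w*t + phi), via the
    single-bin discrete Fourier transform at the fundamental."""
    n = len(samples)
    acc = sum(samples[k] * cmath.exp(-2j * math.pi * k / n) for k in range(n))
    return 2.0 * acc / n
```

In the fault-analysis loop, phasors like this (from the differential-equation source and load models) would be injected into the frequency-domain admittance-matrix model of the unbalanced distribution network at each solution step.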
An Ecological Analysis of the Geographic Distribution of Veterinarians in the United States
ERIC Educational Resources Information Center
Richards, James M., Jr.
1977-01-01
Measures of the ecological characteristics of states were developed through factor analysis. Then ecological characteristics of states and cities were related to the geographic distribution of veterinarians and physicians. Population size is the strongest correlate of the number of health professionals. Results for pet veterinarians resemble…
Global Distribution of Tropospheric Aerosols: A 3-D Model Analysis of Satellite Data
NASA Technical Reports Server (NTRS)
Chin, Mian
2002-01-01
This report describes objectives completed for the GACP (Global Climatology Aerosol Project). The objectives included the analysis of satellite aerosol data, including the optical properties and global distributions of major aerosol types, and human contributions to major aerosol types. The researchers have conducted simulations and field work.
Systematic analysis of transverse momentum distribution and non-extensive thermodynamics theory
Sena, I.; Deppman, A.
2013-03-25
A systematic analysis of transverse momentum distribution of hadrons produced in ultrarelativistic p+p and A+A collisions is presented. We investigate the effective temperature and the entropic parameter from the non-extensive thermodynamic theory of strong interaction. We conclude that the existence of a limiting effective temperature and of a limiting entropic parameter is in accordance with experimental data.
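The non-extensive (Tsallis) factor underlying such transverse-momentum fits can be written down directly; it reduces to the Boltzmann exponential as q approaches 1 and develops a power-law tail for q > 1 (the regime linked to the q-exponential discussion elsewhere in this collection). The T and q values in the test are arbitrary, not fitted values from the paper.

```python
import math

def q_exponential(x, q, temperature):
    """Tsallis q-exponential factor [1 + (q-1)*x/T]^(-1/(q-1)),
    which tends to exp(-x/T) in the limit q -> 1."""
    if abs(q - 1.0) < 1e-9:
        return math.exp(-x / temperature)
    arg = 1.0 + (q - 1.0) * x / temperature
    return arg ** (-1.0 / (q - 1.0)) if arg > 0.0 else 0.0
```

A spectrum fit would then model dN/dpT as proportional to this factor (times phase-space factors), with the effective temperature T and entropic parameter q as the fit parameters.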
An Analysis of Variance Approach for the Estimation of Response Time Distributions in Tests
ERIC Educational Resources Information Center
Attali, Yigal
2010-01-01
Generalizability theory and analysis of variance methods are employed, together with the concept of objective time pressure, to estimate response time distributions and the degree of time pressure in timed tests. By estimating response time variance components due to person, item, and their interaction, and fixed effects due to item types and…
Sievers, D.; Kuhn, E.; Tucker, M.; Stickel, J.; Wolfrum, E.
2013-06-01
Measurement and analysis of residence time distribution (RTD) data is the focus of this study where data collection methods were developed specifically for the pretreatment reactor environment. Augmented physical sampling and automated online detection methods were developed and applied. Both the measurement techniques themselves and the produced RTD data are presented and discussed.
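The standard moment analysis applied to RTD data can be sketched as below, using simple rectangle-rule sums over a tracer response curve; the study's actual sampling and online-detection details are not reproduced here.

```python
def rtd_moments(times, concentrations):
    """Mean residence time and variance from tracer response data C(t):
    t_mean = sum(t*C*dt) / sum(C*dt), var = sum((t-t_mean)^2*C*dt) / sum(C*dt).
    Uses left-point rectangle sums over possibly uneven time steps."""
    dt = [times[i + 1] - times[i] for i in range(len(times) - 1)]
    area = sum(c * d for c, d in zip(concentrations, dt))
    t_mean = sum(t * c * d for t, c, d in zip(times, concentrations, dt)) / area
    var = sum((t - t_mean) ** 2 * c * d
              for t, c, d in zip(times, concentrations, dt)) / area
    return t_mean, var
```

The mean gives the effective residence time in the pretreatment reactor, while the variance characterizes axial dispersion; comparing these across operating conditions is the usual use of RTD data.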
NASA Technical Reports Server (NTRS)
Levine, R. D.; Bernstein, R. B.
1973-01-01
A thermodynamic-like approach to the characterization of product state distributions is outlined. A moment analysis of the surprisal and the entropy deficiency is presented from a statistical mechanical viewpoint. The role of reactant state selection is discussed using the 'state function' property of the entropy.
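The two central quantities of this approach can be written down directly: the surprisal of an observed product-state probability relative to its statistical prior, and the entropy deficiency (a Kullback-Leibler divergence, hence non-negative). The distributions in the test are generic placeholders.

```python
import math

def surprisal(p, p0):
    """Surprisal -ln(P/P0) of an observed product-state probability P
    relative to the statistical (prior) probability P0."""
    return -math.log(p / p0)

def entropy_deficiency(p, p0):
    """Entropy deficiency sum_i P_i * ln(P_i / P0_i) >= 0, vanishing
    only when the observed distribution equals the prior."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, p0) if pi > 0.0)
```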
An investigation on the intra-sample distribution of cotton color by using image analysis
Technology Transfer Automated Retrieval System (TEKTRAN)
The colorimeter principle is widely used to measure cotton color. This method provides the sample's color grade; but the result does not include information about the color distribution and any variation within the sample. We conducted an investigation that used an image analysis method to study the ...
Mächtle, W
1999-02-01
Sedimentation velocity is a powerful tool for the analysis of complex solutions of macromolecules. However, sample turbidity imposes an upper limit to the size of molecular complexes currently amenable to such analysis. Furthermore, the breadth of the particle size distribution, combined with possible variations in the density of different particles, makes it difficult to analyze extremely complex mixtures. These same problems are faced in the polymer industry, where dispersions of latices, pigments, lacquers, and emulsions must be characterized. There is a rich history of methods developed for the polymer industry finding use in the biochemical sciences. Two such methods are presented. These use analytical ultracentrifugation to determine the density and size distributions for submicron-sized particles. Both methods rely on Stokes' equations to estimate particle size and density, whereas turbidity, corrected using Mie's theory, provides the concentration measurement. The first method uses the sedimentation time in dispersion media of different densities to evaluate the particle density and size distribution. This method works provided the sample is chemically homogeneous. The second method splices together data gathered at different sample concentrations, thus permitting the high-resolution determination of the size distribution of particle diameters ranging from 10 to 3000 nm. By increasing the rotor speed exponentially from 0 to 40,000 rpm over a 1-h period, size distributions may be measured for extremely broadly distributed dispersions. Presented here is a short history of particle size distribution analysis using the ultracentrifuge, along with a description of the newest experimental methods. Several applications of the methods are provided that demonstrate the breadth of its utility, including extensions to samples containing nonspherical and chromophoric particles. PMID:9916040
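A minimal sketch of the Stokes-law sizing step for the first method, assuming constant rotor speed and a chemically homogeneous particle; the paper's exponential speed ramp and Mie turbidity correction are not modeled, and the usage values are illustrative.

```python
import math

def stokes_diameter(r_start, r_end, t, omega, eta, rho_particle, rho_fluid):
    """Particle diameter (m) from centrifugal sedimentation of the
    boundary from radius r_start to r_end (m) in time t (s), via
    Stokes' law: d = sqrt(18*eta*ln(r_end/r_start) /
    ((rho_p - rho_f) * omega^2 * t)), with omega in rad/s,
    eta in Pa*s and densities in kg/m^3."""
    return math.sqrt(18.0 * eta * math.log(r_end / r_start)
                     / ((rho_particle - rho_fluid) * omega ** 2 * t))
```

Running the same experiment in media of different densities changes the effective (rho_particle - rho_fluid) term, which is how the first method extracts the particle density itself.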
Zapp, E N; Ramsey, C R; Townsend, L W; Badhwar, G D
1999-06-01
Calculations of total dose and equivalent dose as functions of time, as well as dose-rate and equivalent dose rate since event start are presented for fifteen of the larger solar particle events that occurred during the period between November 1987 and August 1991. The doses, dose-equivalents, and rates presented are for exposures to the skin, ocular lens, and bone marrow behind a thickness of aluminum shielding which provides protection comparable to that of a thin spacecraft. The calculated dose vs time profiles are parameterized using a Weibull cumulative distribution as the fitting function. Parameters are determined using least-squares techniques. Fitted curves are then differentiated to produce smoothed dose-rate curves for each of the events. These results provide a useful starting point for the development of methods to predict the cumulative doses and times to reach various dose limits from a limited number of dosimeter measurements early in the evolution of a solar particle event. PMID:11543143
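The Weibull-CDF fitting step can be sketched with a least-squares fit via the usual linearization, assuming the asymptotic dose D_inf is known; this is a simplification of the paper's general least-squares procedure, with hypothetical parameter names.

```python
import math

def fit_weibull_cdf(times, doses, d_inf):
    """Fit D(t) = D_inf * (1 - exp(-(t/tau)^gamma)) by linear least
    squares on ln(-ln(1 - D/D_inf)) = gamma*ln(t) - gamma*ln(tau).
    Returns (gamma, tau); requires 0 < D < D_inf at all points."""
    xs = [math.log(t) for t in times]
    ys = [math.log(-math.log(1.0 - d / d_inf)) for d in doses]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    gamma = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    tau = math.exp(xbar - ybar / gamma)
    return gamma, tau
```

Differentiating the fitted curve, dD/dt = D_inf * (gamma/tau) * (t/tau)^(gamma-1) * exp(-(t/tau)^gamma), gives the smoothed dose-rate profile described in the abstract.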
Can Data Recognize Its Parent Distribution?
Marshall, A. W.; Meza, J. C.; Olkin, I.
1999-05-01
This study is concerned with model selection of lifetime and survival distributions arising in engineering reliability or in the medical sciences. We compare various distributions, including the gamma, Weibull and lognormal, with a new distribution called the geometric extreme exponential. Except for the lognormal distribution, the other three distributions all have the exponential distribution as a special case. A Monte Carlo simulation was performed to determine the sample sizes for which survival distributions can distinguish data generated by their own families. Two decision methods are considered: maximum likelihood and Kolmogorov distance. Neither method is uniformly best. The probability of correct selection with more than one alternative shows some surprising results when the choices are close to the exponential distribution.
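The Kolmogorov-distance criterion can be sketched for the exponential special case, with the rate fitted by maximum likelihood; this is a generic implementation for illustration, not the study's code, and in the study's setting the same distance would be computed for each candidate family and the smallest one selected.

```python
import math

def kolmogorov_distance_exponential(data):
    """Kolmogorov distance between the empirical CDF of `data` and an
    exponential distribution fitted by maximum likelihood (rate = 1/mean).
    Checks both sides of each empirical-CDF step."""
    xs = sorted(data)
    n = len(xs)
    lam = n / sum(xs)
    d = 0.0
    for i, x in enumerate(xs, start=1):
        f = 1.0 - math.exp(-lam * x)
        d = max(d, abs(i / n - f), abs(f - (i - 1) / n))
    return d
```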
Consideration of tip speed limitations in preliminary analysis of minimum COE wind turbines
NASA Astrophysics Data System (ADS)
Cuerva-Tejero, A.; Yeow, T. S.; Lopez-Garcia, O.; Gallego-Castillo, C.
2014-12-01
A relation between Cost Of Energy, COE, maximum allowed tip speed, and rated wind speed, is obtained for wind turbines with a given goal rated power. The wind regime is characterised by the corresponding parameters of the probability density function of wind speed. The non-dimensional characteristics of the rotor: number of blades, the blade radial distributions of local solidity, twist angle, and airfoil type, play the role of parameters in the mentioned relation. The COE is estimated using a cost model commonly used by the designers. This cost model requires basic design data such as the rotor radius and the ratio between the hub height and the rotor radius. Certain design options, DO, related to the technology of the power plant, tower and blades are also required as inputs. The function obtained for the COE can be explored to find those values of rotor radius that give rise to minimum cost of energy for a given wind regime as the tip speed limitation changes. The analysis reveals that iso-COE lines evolve parallel to iso-radius lines for large values of limit tip speed but that this is not the case for small values of the tip speed limits. It is concluded that, as the tip speed limit decreases, the optimum decision for keeping minimum COE values can be: a) reducing the rotor radius for places with high Weibull scale parameter or b) increasing the rotor radius for places with low Weibull scale parameter.
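The role of the Weibull wind regime in such analyses can be illustrated by computing a capacity factor from the Weibull probability density of wind speed. The cubic-below-rated power curve, cut-in/cut-out speeds and all numeric values below are simplifying assumptions for illustration, not the paper's cost model.

```python
import math

def weibull_pdf(v, c, k):
    """Wind-speed probability density with scale c (m/s) and shape k."""
    return (k / c) * (v / c) ** (k - 1) * math.exp(-((v / c) ** k))

def capacity_factor(c, k, v_rated, v_cut_in=3.0, v_cut_out=25.0, dv=0.01):
    """Average fraction of rated power for an assumed power curve that is
    cubic below rated speed and flat between rated and cut-out speed."""
    cf, v = 0.0, v_cut_in
    while v < v_cut_out:
        p = min((v / v_rated) ** 3, 1.0)   # normalized power output
        cf += p * weibull_pdf(v, c, k) * dv
        v += dv
    return cf
```

A site with a larger Weibull scale parameter yields a higher capacity factor for the same rotor, which is the mechanism behind the paper's site-dependent optimum-radius conclusion.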
Efficient, Distributed and Interactive Neuroimaging Data Analysis Using the LONI Pipeline.
Dinov, Ivo D; Van Horn, John D; Lozev, Kamen M; Magsipoc, Rico; Petrosyan, Petros; Liu, Zhizhong; Mackenzie-Graham, Allan; Eggert, Paul; Parker, Douglas S; Toga, Arthur W
2009-01-01
The LONI Pipeline is a graphical environment for construction, validation and execution of advanced neuroimaging data analysis protocols (Rex et al., 2003). It enables automated data format conversion, allows Grid utilization, facilitates data provenance, and provides a significant library of computational tools. There are two main advantages of the LONI Pipeline over other graphical analysis workflow architectures. It is built as a distributed Grid computing environment and permits efficient tool integration, protocol validation and broad resource distribution. To integrate existing data and computational tools within the LONI Pipeline environment, no modification of the resources themselves is required. The LONI Pipeline provides several types of process submissions based on the underlying server hardware infrastructure. Only workflow instructions and references to data, executable scripts and binary instructions are stored within the LONI Pipeline environment. This makes it portable, computationally efficient, distributed and independent of the individual binary processes involved in pipeline data-analysis workflows. We have expanded the LONI Pipeline (V.4.2) to include server-to-server (peer-to-peer) communication and a 3-tier failover infrastructure (Grid hardware, Sun Grid Engine/Distributed Resource Management Application API middleware, and the Pipeline server). Additionally, the LONI Pipeline provides three layers of background-server executions for all users/sites/systems. These new LONI Pipeline features facilitate resource-interoperability, decentralized computing, construction and validation of efficient and robust neuroimaging data-analysis workflows. Using brain imaging data from the Alzheimer's Disease Neuroimaging Initiative (Mueller et al., 2005), we demonstrate integration of disparate resources, graphical construction of complex neuroimaging analysis protocols and distributed parallel computing. 
The LONI Pipeline, its features, specifications, documentation and usage are available online (http://Pipeline.loni.ucla.edu). PMID:19649168
NASA Technical Reports Server (NTRS)
Chao, Luen-Yuan; Shetty, Dinesh K.
1992-01-01
Statistical analysis and correlation between pore-size distribution and fracture strength distribution using the theory of extreme-value statistics is presented for a sintered silicon nitride. The pore-size distribution on a polished surface of this material was characterized, using an automatic optical image analyzer. The distribution measured on the two-dimensional plane surface was transformed to a population (volume) distribution, using the Schwartz-Saltykov diameter method. The population pore-size distribution and the distribution of the pore size at the fracture origin were correlated by extreme-value statistics. Fracture strength distribution was then predicted from the extreme-value pore-size distribution, using a linear elastic fracture mechanics model of an annular crack around a pore and the fracture toughness of the ceramic. The predicted strength distribution was in good agreement with strength measurements in bending. In particular, the extreme-value statistics analysis explained the nonlinear trend in the linearized Weibull plot of measured strengths without postulating a lower-bound strength.
Study of Solid State Drives performance in PROOF distributed analysis system
NASA Astrophysics Data System (ADS)
Panitkin, S. Y.; Ernst, M.; Petkus, R.; Rind, O.; Wenaus, T.
2010-04-01
Solid State Drives (SSDs) are a promising storage technology for High Energy Physics parallel analysis farms. Their combination of low random access time and relatively high read speed is very well suited to situations where multiple jobs concurrently access data located on the same drive. SSDs also have lower energy consumption and higher vibration tolerance than Hard Disk Drives (HDDs), which makes them an attractive choice in many applications ranging from personal laptops to large analysis farms. The Parallel ROOT Facility (PROOF) is a distributed analysis system which allows one to exploit the inherent event-level parallelism of high energy physics data. PROOF is especially efficient together with distributed local storage systems like Xrootd, when data are distributed over computing nodes. In such an architecture the local disk subsystem I/O performance becomes a critical factor, especially when computing nodes use multi-core CPUs. We will discuss our experience with SSDs in the PROOF environment. We will compare the performance of HDDs with SSDs in I/O intensive analysis scenarios. In particular we will discuss PROOF system performance scaling with the number of simultaneously running analysis jobs.
Fractal analysis of the dark matter and gas distributions in the Mare-Nostrum universe
Gaite, José
2010-03-01
We develop a method of multifractal analysis of N-body cosmological simulations that improves on the customary counts-in-cells method by taking special care of the effects of discreteness and large scale homogeneity. The analysis of the Mare-Nostrum simulation with our method provides strong evidence of self-similar multifractal distributions of dark matter and gas, with a halo mass function that is of Press-Schechter type but has a power-law exponent -2, as corresponds to a multifractal. Furthermore, our analysis shows that the dark matter and gas distributions are indistinguishable as multifractals. To determine if there is any gas biasing, we calculate the cross-correlation coefficient, with negative but inconclusive results. Hence, we develop an effective Bayesian analysis connected with information theory, which clearly demonstrates that the gas is biased over a long range of scales, up to the scale of homogeneity. However, entropic measures related to the Bayesian analysis show that this gas bias is small (in a precise sense) and is such that the fractal singularities of both distributions coincide and are identical. We conclude that this common multifractal cosmic web structure is determined by the dynamics and is independent of the initial conditions.
de Oliveira, Thales Leandro Coutinho; Soares, Rodrigo de Araújo; Piccoli, Roberta Hilsdorf
2013-03-01
The antimicrobial effect of oregano (Origanum vulgare L.) and lemongrass (Cymbopogon citratus (DC.) Stapf.) essential oils (EOs) against Salmonella enterica serotype Enteritidis was evaluated in in vitro experiments and in inoculated ground bovine meat during refrigerated storage (4±2 °C) for 6 days. The Weibull model was tested to fit survival/inactivation bacterial curves (estimating the p and δ parameters). The minimum inhibitory concentration (MIC) value for both EOs on S. Enteritidis was 3.90 μl/ml. The EO concentrations applied in the ground beef were 3.90, 7.80 and 15.60 μl/g, based on MIC levels and possible activity reduction by food constituents. Both evaluated EOs, at all tested levels, showed antimicrobial effects, with microbial populations decreasing (p≤0.05) over storage time. Evaluating fit-quality parameters (RSS and RSE), Weibull models are able to describe the inactivation curves of EOs against S. Enteritidis. The application of EOs in processed meats can be used to control pathogens during refrigerated shelf-life. PMID:23273476
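The Weibull survival model referred to above, log10(N/N0) = -(t/δ)^p, and the RSS fit-quality measure can be written directly. The times and parameter values below are illustrative placeholders, not the study's estimates.

```python
def weibull_log_survival(t, delta, p):
    """Log10 reduction predicted by the Weibull inactivation model:
    log10(N/N0) = -(t / delta) ** p.  delta is the time to the first
    decimal (1-log) reduction; p controls curve shape."""
    return -((t / delta) ** p)

def rss(times, log_reductions, delta, p):
    """Residual sum of squares, one of the fit-quality parameters cited."""
    return sum((obs - weibull_log_survival(t, delta, p)) ** 2
               for t, obs in zip(times, log_reductions))
```

Note that at t = δ the model predicts exactly one log10 reduction, which is the usual interpretation of the δ parameter.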
Zhang, Zhengdong D.; Paccanaro, Alberto; Fu, Yutao; Weissman, Sherman; Weng, Zhiping; Chang, Joseph; Snyder, Michael; Gerstein, Mark B.
2007-01-01
The comprehensive inventory of functional elements in 44 human genomic regions carried out by the ENCODE Project Consortium enables for the first time a global analysis of the genomic distribution of transcriptional regulatory elements. In this study we developed an intuitive and yet powerful approach to analyze the distribution of regulatory elements found in many different ChIP–chip experiments on a 10–100-kb scale. First, we focus on the overall chromosomal distribution of regulatory elements in the ENCODE regions and show that it is highly nonuniform. We demonstrate, in fact, that regulatory elements are associated with the location of known genes. Further examination on a local, single-gene scale shows an enrichment of regulatory elements near both transcription start and end sites. Our results indicate that overall these elements are clustered into regulatory rich “islands” and poor “deserts.” Next, we examine how consistent the nonuniform distribution is between different transcription factors. We perform on all the factors a multivariate analysis in the framework of a biplot, which enhances biological signals in the experiments. This groups transcription factors into sequence-specific and sequence-nonspecific clusters. Moreover, with experimental variation carefully controlled, detailed correlations show that the distribution of sites was generally reproducible for a specific factor between different laboratories and microarray platforms. Data sets associated with histone modifications have particularly strong correlations. Finally, we show how the correlations between factors change when only regulatory elements far from the transcription start sites are considered. PMID:17567997
Vinogradov, S A; Wilson, D F
1994-01-01
A new method for analysis of phosphorescence lifetime distributions in heterogeneous systems has been developed. This method is based on decomposition of the data vector into a linearly independent set of exponentials and uses quadratic programming principles for χ2 minimization. Solution of the resulting algorithm requires a finite number of calculations (it is not iterative) and is computationally fast and robust. The algorithm has been tested on various simulated decays and for analysis of phosphorescence measurements of experimental systems with discrete distributions of lifetimes. Critical analysis of the effect of signal-to-noise on the resolving capability of the algorithm is presented. This technique is recommended for resolution of the distributions of quencher concentration in heterogeneous samples, of which oxygen distributions in tissue are an important example. Phosphors of practical importance for biological oxygen measurements, Pd-meso-tetra (4-carboxyphenyl) porphyrin (PdTCPP) and Pd-meso-porphyrin (PdMP), have been used to provide an experimental test of the algorithm. PMID:7858142
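The core idea, decomposing a decay into a nonnegative combination of exponentials on a lifetime grid, can be illustrated with a simple solver. The paper uses a non-iterative quadratic-programming algorithm; the sketch below deliberately substitutes a different, simpler technique (iterative multiplicative nonnegative-least-squares updates) on a hypothetical lifetime grid, purely to show the decomposition.

```python
import math

def nnls_multiplicative(A, y, n_iter=5000):
    """Nonnegative least squares via multiplicative updates: a simple
    iterative stand-in for the paper's quadratic-programming solver.
    A is m x n (rows = time points, cols = exponential basis), y >= 0."""
    m, n = len(A), len(A[0])
    At_y = [sum(A[i][j] * y[i] for i in range(m)) for j in range(n)]
    AtA = [[sum(A[i][j] * A[i][k] for i in range(m)) for k in range(n)]
           for j in range(n)]
    a = [1.0] * n                      # positive start keeps a >= 0 throughout
    for _ in range(n_iter):
        for j in range(n):
            denom = sum(AtA[j][k] * a[k] for k in range(n)) + 1e-12
            a[j] *= At_y[j] / denom
    return a

# Hypothetical lifetime grid and a synthetic two-component decay
taus = [1.0, 2.0, 4.0]
ts = [0.1 * i for i in range(101)]
A = [[math.exp(-t / tau) for tau in taus] for t in ts]
true_amp = [0.0, 1.0, 0.5]
y = [sum(c * math.exp(-t / tau) for c, tau in zip(true_amp, taus)) for t in ts]
amp = nnls_multiplicative(A, y)
```

The recovered amplitude vector is the lifetime distribution; components absent from the signal are driven toward zero while the true components are recovered.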
Measurement of bubble and pellet size distributions: past and current image analysis technology.
Junker, Beth
2006-08-01
Measurements of bubble and pellet size distributions are useful for biochemical process optimizations. The accuracy, representation, and simplicity of these measurements improve when the measurement is performed on-line and in situ rather than off-line using a sample. Historical and currently available measurement systems for photographic methods are summarized for bubble and pellet (morphology) measurement applications. Applications to cells, mycelia, and pellets measurements have driven key technological developments that have been applied for bubble measurements. Measurement trade-offs exist to maximize accuracy, extend range, and attain reasonable cycle times. Mathematical characterization of distributions using standard statistical techniques is straightforward, facilitating data presentation and analysis. For the specific application of bubble size distributions, selected bioreactor operating parameters and physicochemical conditions alter distributions. Empirical relationships have been established in some cases where sufficient data have been collected. In addition, parameters and conditions with substantial effects on bubble size distributions were identified and their relative effects quantified. This information was used to guide required accuracy and precision targets for bubble size distribution measurements from newly developed novel on-line and in situ bubble measurement devices. PMID:16855822
Evaluating Domestic Hot Water Distribution System Options With Validated Analysis Models
Weitzel, E.; Hoeschele, M.
2014-09-01
A developing body of work is forming that collects data on domestic hot water consumption, water use behaviors, and energy efficiency of various distribution systems. A full distribution system developed in TRNSYS has been validated using field monitoring data and then exercised in a number of climates to understand climate impact on performance. This study builds upon previous analysis modelling work to evaluate differing distribution systems and the sensitivities of water heating energy and water use efficiency to variations of climate, load, distribution type, insulation and compact plumbing practices. Overall 124 different TRNSYS models were simulated. Of the configurations evaluated, distribution losses account for 13-29% of the total water heating energy use and water use efficiency ranges from 11-22%. The base case, an uninsulated trunk and branch system, sees the most improvement in energy consumption by insulating and locating the water heater central to all fixtures. Demand recirculation systems are not projected to provide significant energy savings and in some cases increase energy consumption. Water use is most efficient with demand recirculation systems, followed by the insulated trunk and branch system with a central water heater. Compact plumbing practices and insulation have the most impact on energy consumption (2-6% for insulation and 3-4% per 10 gallons of enclosed volume reduced). The results of this work are useful in informing future development of water heating best practices guides as well as more accurate (and simulation time efficient) distribution models for annual whole house simulation programs.
Mirzabozorg, H; Hariri-Ardebili, M A; Shirkhan, M; Seyed-Kolbadi, S M
2014-01-01
The effect of solar radiation on thermal distribution in thin high arch dams is investigated. The differential equation governing thermal behavior of mass concrete in three-dimensional space is solved applying appropriate boundary conditions. Solar radiation is implemented considering the dam face direction relative to the sun, the slope relative to the horizon, the region's cloud cover, and the surrounding topography. It has been observed that solar radiation changes the surface temperature drastically and leads to nonuniform temperature distribution. Solar radiation effects should be considered in thermal transient analysis of thin arch dams. PMID:24695817
Analysis of non-homogeneous Timoshenko beams with generalized damping distributions
NASA Astrophysics Data System (ADS)
Sorrentino, S.; Fasana, A.; Marchesiello, S.
2007-07-01
This paper presents a study on the effects of generalized damping distributions on non-homogeneous Timoshenko beams. On the basis of some fundamentals of modal analysis for damped continuous systems applied to the particular case of the Timoshenko beam model, the eigenproblem is solved by applying a method combining a state-space representation with a transfer matrix technique, yielding closed-form expressions for the eigenfunctions. After validation by means of numerical examples using the finite element method, response functions in both the time and the frequency domain are discussed and compared according to different damping distributions.
Using Micro-Synchrophasor Data for Advanced Distribution Grid Planning and Operations Analysis
Stewart, Emma; Kiliccote, Sila; McParland, Charles; Roberts, Ciaran
2014-07-01
This report reviews the potential for distribution-grid phase-angle data that will be available from new micro-synchrophasors (µPMUs) to be utilized in existing distribution-grid planning and operations analysis. This data could augment the current diagnostic capabilities of grid analysis software, used in both planning and operations for applications such as fault location, and provide data for more accurate modeling of the distribution system. µPMUs are new distribution-grid sensors that will advance measurement and diagnostic capabilities and provide improved visibility of the distribution grid, enabling analysis of the grid's increasingly complex loads that include features such as large volumes of distributed generation. Large volumes of DG lead to concerns about continued reliable operation of the grid, due to changing power flow characteristics and active generation, with its own protection and control capabilities. Using µPMU data on change in voltage phase angle between two points in conjunction with new and existing distribution-grid planning and operational tools is expected to enable model validation, state estimation, fault location, and renewable resource/load characterization. Our findings include: data measurement is outstripping the processing capabilities of planning and operational tools; not every tool can visualize a voltage phase-angle measurement to the degree of accuracy measured by advanced sensors, and the degree of accuracy in measurement required for the distribution grid is not defined; solving methods cannot handle the high volumes of data generated by modern sensors, so new models and solving methods (such as graph trace analysis) are needed; standardization of sensor-data communications platforms in planning and applications tools would allow integration of different vendors' sensors and advanced measurement devices.
In addition, data from advanced sources such as µPMUs could be used to validate models to improve/ensure accuracy, providing information on normally estimated values such as underground conductor impedance, and characterization of complex loads. Although the input of high-fidelity data to existing tools will be challenging, µPMU data on phase angle (as well as other data from advanced sensors) will be useful for basic operational decisions that are based on a trend of changing data.
NASA Astrophysics Data System (ADS)
Nöther, Nils; Wosniok, Aleksander; Krebber, Katerina; Thiele, Elke
2008-03-01
We report on the development of a complete system for spatially resolved detection of critical soil displacement in river embankments. The system uses Brillouin frequency domain analysis (BOFDA) for distributed measurement of strain in silica optical fibers. Our development consists of the measurement unit, an adequate coating for the optical fibers and a technique to integrate the coated optical fibers into geotextiles as they are commonly used in dike construction. We present several laboratory and field tests that prove the capability of the system to detect areas of soil displacement as small as 2 meters. These are the first tests of truly distributed strain measurements on optical fibers embedded into geosynthetics.
Rank-Ordered Multifractal Analysis (ROMA) of probability distributions in fluid turbulence
NASA Astrophysics Data System (ADS)
Wu, C. C.; Chang, T.
2011-04-01
Rank-Ordered Multifractal Analysis (ROMA) was introduced by Chang and Wu (2008) to describe the multifractal characteristic of intermittent events. The procedure provides a natural connection between the rank-ordered spectrum and the idea of one-parameter scaling for monofractals. This technique has successfully been applied to MHD turbulence simulations and turbulence data observed in various space plasmas. In this paper, the technique is applied to the probability distributions in the inertial range of the turbulent fluid flow, as given in the vast Johns Hopkins University (JHU) turbulence database. In addition, a new way of finding the continuous ROMA spectrum and the scaled probability distribution function (PDF) simultaneously is introduced.
SpatTrack: an imaging toolbox for analysis of vesicle motility and distribution in living cells.
Lund, Frederik W; Jensen, Maria Louise V; Christensen, Tanja; Nielsen, Gitte K; Heegaard, Christian W; Wüstner, Daniel
2014-12-01
The endocytic pathway is a complex network of highly dynamic organelles, which has been traditionally studied by quantitative fluorescence microscopy. The data generated by this method can be overwhelming and its analysis, even for the skilled microscopist, is tedious and error-prone. We developed SpatTrack, an open source, platform-independent program collecting a variety of methods for analysis of vesicle dynamics and distribution in living cells. SpatTrack performs 2D particle tracking, trajectory analysis and fitting of diffusion models to the calculated mean square displacement. It allows for spatial analysis of detected vesicle patterns including calculation of the radial distribution function and particle-based colocalization. Importantly, all analysis tools are supported by Monte Carlo simulations of synthetic images. This allows the user to assess the reliability of the analysis and to study alternative scenarios. We demonstrate the functionality of SpatTrack by performing a detailed imaging study of internalized fluorescence-tagged Niemann Pick C2 (NPC2) protein in human disease fibroblasts. Using SpatTrack, we show that NPC2 rescued the cholesterol-storage phenotype from a subpopulation of late endosomes/lysosomes (LE/LYSs). This was paralleled by repositioning and active transport of NPC2-containing vesicles to the cell surface. The potential of SpatTrack for other applications in intracellular transport studies will be discussed. PMID:25243614
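The mean square displacement to which tools such as SpatTrack fit diffusion models can be computed as below. This is a generic time-averaged MSD sketch for a 2D trajectory, not SpatTrack's actual code; the function name is an assumption.

```python
def mean_square_displacement(track, max_lag):
    """Time-averaged MSD of a 2D trajectory given as [(x, y), ...].
    Returns one MSD value per lag, 1..max_lag (in frame units)."""
    msd = []
    for lag in range(1, max_lag + 1):
        disp = [(track[i + lag][0] - track[i][0]) ** 2 +
                (track[i + lag][1] - track[i][1]) ** 2
                for i in range(len(track) - lag)]
        msd.append(sum(disp) / len(disp))
    return msd
```

Fitting MSD(lag) with a linear model indicates free diffusion, while a quadratic term indicates directed (active) transport, the distinction relevant to the NPC2 vesicle repositioning reported above.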
NASA Astrophysics Data System (ADS)
Thanh Son, Vo; Anandakumar, S.; Kim, CheolGi; Jeong, Jong-Ruyl
2011-12-01
In this study, we have investigated the real-time decoding feasibility of magnetic micro-barcodes in a microfluidic channel by using numerical analysis of the magnetic field distribution of the micro-barcodes. The vector potential model based on a molecular current has been used to obtain the magnetic stray field distribution of the ferromagnetic bars that make up the micro-barcodes. It reveals that the stray field distribution of the micro-barcodes strongly depends on the geometry of the ferromagnetic bar. Interestingly enough, we have found that one can avoid the miniaturization process of a magnetic sensor device, otherwise needed to increase the sensitivity, by optimizing the geometry of the micro-barcodes. We also estimate the magnetic sensor response as a function of flying height and lateral misalignment of the micro-barcodes over the sensor position, and found that control of the flying height is a crucial factor in enhancing the detection sensitivity and reproducibility of the magnetic sensor signal in the suspension assay technology.
Sub-population analysis of deformability distribution in heterogeneous red blood cell population.
Lee, Dong Woo; Doh, Il; Kuypers, Frans A; Cho, Young-Ho
2015-12-01
We present a method for sub-population analysis of deformability distribution using single-cell microchamber array (SiCMA) technology. It is a unique method allowing the correlation of overall cellular characteristics with surface and cytosolic characteristics to define the distribution of individual cellular characteristics in heterogeneous cell populations. As a proof of principle, reticulocytes, the immature sub-population of red blood cells (RBCs), were recognized within the RBC population by a surface marker, and differences in deformability between these populations were characterized. The proposed technology can be used in a variety of applications that would benefit from the ability to measure the distribution of cellular characteristics in complex populations, especially important to define hematologic disorders. PMID:26383009
Principal Effects of Axial Load on Moment-Distribution Analysis of Rigid Structures
NASA Technical Reports Server (NTRS)
James, Benjamin Wylie
1935-01-01
This thesis presents the method of moment distribution modified to include the effect of axial load upon the bending moments. This modification makes it possible to analyze accurately complex structures, such as rigid fuselage trusses, that heretofore had to be analyzed by approximate formulas and empirical rules. The method is simple enough to be practicable even for complex structures, and it gives a means of analysis for continuous beams that is simpler than the extended three-moment equation now in common use. When the effect of axial load is included, it is found that the basic principles of moment distribution remain unchanged, the only difference being that the factors used, instead of being constants for a given member, become functions of the axial load. Formulas have been developed for these factors, and curves plotted so that their application requires no more work than moment distribution without axial load. Simple problems have been included to illustrate the use of the curves.
NASA Astrophysics Data System (ADS)
Schreiner, L. J.; Holmes, O.; Salomons, G.
2013-06-01
One component of clinical treatment validation, for example in the commissioning of new radiotherapy techniques or in patient specific quality assurance, is the evaluation and verification of planned and delivered dose distributions. Gamma and related tests (such as the chi evaluation) have become standard clinical tools for such work. Both functions provide quantitative comparisons between dose distributions, combining dose difference and distance to agreement criteria. However, there are some practical considerations in their utilization that can compromise the integrity of the tests, and these are occasionally overlooked especially when the tests are too readily adopted from commercial software. In this paper we review the evaluation tools and describe some practical concerns. The intent is to provide users with some guidance so that their use of these evaluations will provide valid rapid analysis and visualization of the agreement between planned and delivered dose distributions.
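The dose-difference / distance-to-agreement combination at the heart of the gamma test can be sketched for a 1D dose profile. Function and parameter names are illustrative; a clinical implementation would interpolate the evaluated distribution between sample points (one of the practical concerns the paper discusses) rather than search only the sampled grid as this sketch does.

```python
import math

def gamma_index(ref, eval_dose, dx, dose_tol, dist_tol):
    """1D gamma evaluation between a reference and an evaluated dose
    profile sampled every dx mm.  dose_tol is a fraction of the maximum
    reference dose (e.g. 0.03 for 3%); dist_tol is in mm (e.g. 3.0).
    A point passes the test when its gamma value is <= 1."""
    d_max = max(ref)
    gammas = []
    for i, dr in enumerate(ref):
        best = float("inf")
        for j, de in enumerate(eval_dose):
            dd = (de - dr) / (dose_tol * d_max)      # normalized dose difference
            dist = (j - i) * dx / dist_tol           # normalized distance
            best = min(best, math.sqrt(dd * dd + dist * dist))
        gammas.append(best)
    return gammas
```

With the common 3%/3 mm criteria, a small spatial shift of a steep gradient can still pass (gamma below 1) because the distance term absorbs it, which is exactly the behavior the combined test is designed to provide.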
Analysis of electron energy distribution function in the Linac4 H- source
NASA Astrophysics Data System (ADS)
Mochizuki, S.; Mattei, S.; Nishida, K.; Hatayama, A.; Lettry, J.
2016-02-01
To understand the Electron Energy Distribution Function (EEDF) in the Radio Frequency Inductively Coupled Plasmas (RF-ICPs) in hydrogen negative ion sources, a detailed analysis of the EEDFs using numerical simulation and a theoretical approach based on the Boltzmann equation has been performed. It is shown that the EEDF of RF-ICPs consists of two parts: a low-energy part which obeys a Maxwellian distribution and a high-energy part which deviates from a Maxwellian distribution. These simulation results have been confirmed to be reasonable by the analytical approach. The results suggest that it is possible to enhance the dissociation of molecules and the resultant H- negative ion production by reducing the gas pressure.
Detecting Distributed Network Traffic Anomaly with Network-Wide Correlation Analysis
NASA Astrophysics Data System (ADS)
Zonglin, Li; Guangmin, Hu; Xingmiao, Yao; Dan, Yang
2008-12-01
Distributed network traffic anomaly refers to a traffic abnormal behavior involving many links of a network and caused by the same source (e.g., DDoS attack, worm propagation). The anomaly transiting in a single link might be unnoticeable and hard to detect, while the anomalous aggregation from many links can be prevailing, and does more harm to the networks. Given the similar features that a distributed traffic anomaly exhibits on many links, this paper proposes a network-wide detection method that performs anomalous correlation analysis of traffic signals' instantaneous parameters. In our method, traffic signals' instantaneous parameters are first computed, and their network-wide anomalous space is then extracted via traffic prediction. Finally, an anomaly is detected by a global correlation coefficient of the anomalous space. Our evaluation using Abilene traffic traces demonstrates the excellent performance of this approach for distributed traffic anomaly detection.
Category induction via distributional analysis: Evidence from a serial reaction time task.
Hunt, Ruskin H; Aslin, Richard N
2010-02-01
Category formation lies at the heart of a number of higher-order behaviors, including language. We assessed the ability of human adults to learn, from distributional information alone, categories embedded in a sequence of input stimuli using a serial reaction time task. Artificial grammars generated corpora of input strings containing a predetermined and constrained set of sequential statistics. After training, learners were presented with novel input strings, some of which contained violations of the category membership defined by distributional context. Category induction was assessed by comparing performance on novel and familiar strings. Results indicate that learners develop increasing sensitivity to the category structure present in the input, and become sensitive to fine-grained differences in the pre- and post-element contexts that define category membership. Results suggest that distributional analysis plays a significant role in the development of visuomotor categories, and may play a similar role in the induction of linguistic form-class categories. PMID:20177430
Validation results of the IAG Dancer project for distributed GPS analysis
NASA Astrophysics Data System (ADS)
Boomkamp, H.
2012-12-01
The number of permanent GPS stations in the world has grown far too large to allow processing of all this data at analysis centers. The majority of these GPS sites do not even make their observation data available to the analysis centers, for various valid reasons. The current ITRF solution is still based on centralized analysis by the IGS, and subsequent densification of the reference frame via regional network solutions. Minor inconsistencies in analysis methods, software systems and data quality imply that this centralized approach is unlikely to ever reach the ambitious accuracy objectives of GGOS. The dependence on published data also makes it clear that a centralized approach will never provide a true global ITRF solution for all GNSS receivers in the world. If the data does not come to the analysis, the only alternative is to bring the analysis to the data. The IAG Dancer project has implemented a distributed GNSS analysis system on the internet in which each receiver can have its own analysis center in the form of a freely distributed JAVA peer-to-peer application. Global parameters for satellite orbits, clocks and polar motion are solved via a distributed least squares solution among all participating receivers. A Dancer instance can run on any computer that has simultaneous access to the receiver data and to the public internet. In the future, such a process may be embedded in the receiver firmware directly. GPS network operators can join the Dancer ITRF realization without having to publish their observation data or estimation products. GPS users can run a Dancer process without contributing to the global solution, to have direct access to the ITRF in near real-time. The Dancer software has been tested on-line since late 2011. A global network of processes has gradually evolved to allow stabilization and tuning of the software in order to reach a fully operational system. 
This presentation reports on the current performance of the Dancer system and demonstrates the benefits of distributed analysis of geodetic data in general.
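The distributed least-squares idea described above can be sketched in a few lines: each receiver accumulates its own normal equations, and only those small matrices are exchanged and summed, never the raw observations. The three-parameter model, receiver sizes, and noise level below are invented for illustration and are not taken from the Dancer project.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-parameter global state (e.g. a few orbit/clock corrections).
x_true = np.array([2.0, -1.0, 0.5])

def local_normal_equations(n_obs):
    """One receiver's contribution: it keeps A and b private and shares
    only the accumulated normal equations A^T A and A^T b."""
    A = rng.normal(size=(n_obs, 3))
    b = A @ x_true + rng.normal(scale=1e-3, size=n_obs)
    return A.T @ A, A.T @ b

# "Peers" exchange and sum the small matrices, never the raw observations.
N_total = np.zeros((3, 3))
y_total = np.zeros(3)
for n_obs in (50, 80, 120):                  # three participating receivers
    N_i, y_i = local_normal_equations(n_obs)
    N_total += N_i
    y_total += y_i

x_hat = np.linalg.solve(N_total, y_total)    # global least-squares solution
print(np.round(x_hat, 3))
```

The summed normal equations are identical to those of a centralized fit on all observations, which is what lets a peer-to-peer network reproduce a central solution.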
Development of a Web Service for Analysis in a Distributed Network
Jiang, Xiaoqian; Wu, Yuan; Marsolo, Keith; Ohno-Machado, Lucila
2014-01-01
Objective: We describe functional specifications and practicalities in the software development process for a web service that allows the construction of the multivariate logistic regression model, Grid Logistic Regression (GLORE), by aggregating partial estimates from distributed sites, with no exchange of patient-level data. Background: We recently developed and published a web service for model construction and data analysis in a distributed environment. That paper provided an overview of the system that is useful for users, but included very few details that are relevant for biomedical informatics developers or network security personnel who may be interested in implementing this or similar systems. We focus here on how the system was conceived and implemented. Methods: We followed a two-stage development approach, first implementing the backbone system and then incrementally improving the user experience through interactions with potential users during development. Our system went through several stages, including proof of concept, algorithm validation, user interface development, and system testing. We used the Zoho Project management system to track tasks and milestones, leveraged Google Code and Apache Subversion to share code among team members, and developed an applet-servlet architecture to support cross-platform deployment. Discussion: During the development process, we encountered challenges such as Information Technology (IT) infrastructure gaps and limited team experience in user-interface design. We identified solutions, as well as enabling factors, that supported the translation of an innovative privacy-preserving, distributed modeling technology into a working prototype.
Conclusion: Using GLORE (a distributed model that we developed earlier) as a pilot example, we demonstrated the feasibility of building and integrating distributed modeling technology into a usable framework that can support privacy-preserving, distributed data analysis among researchers at geographically dispersed institutes. PMID:25848586
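The core aggregation step behind GLORE can be sketched as follows: each site contributes only its gradient and Hessian to a central Newton-Raphson iteration, so no patient-level data leave the site. The site sizes, true coefficients, and iteration count below are illustrative assumptions, not values from the paper. A useful property of this scheme is that the aggregated fit coincides with a fit on the pooled data.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Three "sites" with private patient-level data (sizes and coefficients invented).
beta_true = np.array([0.8, -1.2, 0.4])
sites = []
for n in (200, 300, 250):
    X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
    y = rng.binomial(1, sigmoid(X @ beta_true))
    sites.append((X, y))

def site_contribution(X, y, beta):
    """A site shares only its gradient and Hessian, never X or y."""
    p = sigmoid(X @ beta)
    grad = X.T @ (y - p)
    hess = -(X * (p * (1.0 - p))[:, None]).T @ X
    return grad, hess

# Central Newton-Raphson on the summed contributions.
beta = np.zeros(3)
for _ in range(25):
    g, H = np.zeros(3), np.zeros((3, 3))
    for X, y in sites:
        g_i, H_i = site_contribution(X, y, beta)
        g += g_i
        H += H_i
    beta -= np.linalg.solve(H, g)
print(np.round(beta, 2))
```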
ERIC Educational Resources Information Center
Tian, Meng; Risku, Mika; Collin, Kaija
2016-01-01
This article provides a meta-analysis of research conducted on distributed leadership from 2002 to 2013. It continues the review of distributed leadership commissioned by the English National College for School Leadership (NCSL) ("Distributed Leadership: A Desk Study," Bennett et al., 2003), which identified two gaps in the research…
Modeling Exon-Specific Bias Distribution Improves the Analysis of RNA-Seq Data
Liu, Xuejun; Zhang, Li; Chen, Songcan
2015-01-01
RNA-seq technology has become an important tool for quantifying gene and transcript expression in transcriptome studies. The two major difficulties in gene and transcript expression quantification are read mapping ambiguity and the overdispersion of the read distribution along the reference sequence. Many approaches have been proposed to deal with these difficulties. A number of existing methods use the Poisson distribution to model the read counts, which makes it easy to split the counts into contributions from multiple transcripts, and various solutions have been put forward to account for the overdispersion in these Poisson models. By examining the similarities among the variation patterns of read counts for individual genes, we found that the count variation is exon-specific and has a conserved pattern across samples for each individual gene. We introduce Gamma-distributed latent variables to model the read sequencing preference for each exon. These variables are embedded into the rate parameter of a Poisson model to account for the overdispersion of the read distribution. The model is tractable since the Gamma priors can be integrated out in the maximum likelihood estimation. We evaluate the proposed approach, PGseq, using four real datasets and one simulated dataset, and compare its performance with other popular methods. Results show that PGseq is competitive with the alternatives in terms of accuracy in gene and transcript expression calculation and in the downstream differential expression analysis. In particular, we show the advantage of our method in the analysis of genes with low expression. PMID:26448625
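The overdispersion mechanism PGseq exploits, a Poisson rate multiplied by a Gamma-distributed latent preference, can be illustrated numerically: marginalizing the Gamma variable yields negative-binomial counts whose variance exceeds the Poisson prediction. The hyperparameters below are invented for illustration, not PGseq's fitted values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative (not fitted) hyperparameters: a Gamma-distributed latent
# sequencing preference for one exon, scaling a base Poisson read rate.
shape, scale = 2.0, 0.5
base_rate = 40.0

# Poisson counts with a Gamma-mixed rate are negative binomial, hence
# overdispersed: the variance exceeds the mean.
lam = base_rate * rng.gamma(shape, scale, size=100_000)
counts = rng.poisson(lam)
mean, var = counts.mean(), counts.var()

# Theoretical moments of the Poisson-Gamma mixture for comparison:
# E[N] = r*k*theta,  Var[N] = E[N] + (r*theta)*E[N]  (r = base_rate, k = shape).
mean_th = base_rate * shape * scale
var_th = mean_th + base_rate * scale * mean_th
print(round(mean, 1), round(var, 1), mean_th, var_th)
```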
A landscape analysis of cougar distribution and abundance in Montana, USA.
Riley, S J; Malecki, R A
2001-09-01
Recent growth in the distribution and abundance of cougars (Puma concolor) throughout western North America has created opportunities, challenges, and problems for wildlife managers and raises questions about what factors affect cougar populations. We present an analysis of factors thought to affect cougar distribution and abundance across the broad geographical scales on which most population management decisions are made. Our objectives were to: (1) identify and evaluate landscape parameters that can be used to predict the capability of habitats to support cougars, and (2) evaluate factors that may account for the recent expansion in cougar numbers. Habitat values based on terrain ruggedness and forested cover explained 73% of the variation in a cougar abundance index. Indices of cougar abundance also were spatially and temporally correlated with ungulate abundance. An increase in the number and total biomass of ungulate prey species is hypothesized to account for recent increases in cougars. Cougar populations in Montana are coping with land development by humans when other components of habitat and prey populations are sufficient. Our analysis provides a better understanding of what may have influenced recent growth in cougar distribution and abundance in Montana and, when combined with insights about stakeholder acceptance capacity, offers a basis for cougar management at broad scales. Long-term conservation of cougars necessitates a better understanding of ecosystem functions that affect prey distribution and abundance, more accurate estimates of cougar populations, and management abilities to integrate these components with human values. PMID:11531235
Anami, Lilian Costa; da Costa Lima, Júlia Magalhães; Takahashi, Fernando Eidi; Neisser, Maximiliano Piero; Noritomi, Pedro Yoshito; Bottino, Marco Antonio
2015-04-01
The goal of this study was to evaluate the distribution of stresses generated around implants with different internal-cone abutments by photoelastic analysis (PA) and finite element analysis (FEA). For FEA, implants and abutments with different internal-cone connections (H, hexagonal; S, solid) were scanned, 3D meshes were modeled, and the objects were loaded using computer software. Trabecular and cortical bone and photoelastic resin blocks were simulated. The PA was performed with photoelastic resin blocks in which implants were embedded and the different abutments were bolted. Specimens were observed in a circular polariscope with the load application device attached, and loads were applied under the same conditions as in the FEA. The FEA images showed very similar stress distributions for the two abutment models. Differences were observed between the stress distributions in bone and in the resin blocks; the PA images resembled those obtained from the resin-block FEA. The PA images were also analyzed quantitatively by comparing the values assigned to the fringes. For both analysis methods, the S abutment distributed loads more evenly to the bone adjacent to the implant than the H abutment, and the PA generated results very similar to those obtained in the FEA with the resin block. PMID:23750560
Comparative Analysis between ROCOF and Vector Surge Relays for Distributed Generation Applications
Freitas, Walmir; Xu, Wilsun; Affonso, Carolina M.; Huang, Zhenyu
2005-04-01
This paper presents a comprehensive comparative analysis between rate of change of frequency (ROCOF) and vector surge (VS) relays for distributed generation islanding detection. The analysis is based on the concepts of detection-time versus active power-imbalance curves and critical active power imbalance. Such curves are obtained through dynamic simulations. The performance of these devices considering different scenarios is determined and compared. Factors such as voltage-dependent loads, generator inertia constant and multi-distributed generator system are analyzed. False operation of these relays due to faults in adjacent feeders is also addressed. Results show that ROCOF relays are more reliable to detect islanding than vector surge relays when the active power imbalance in the islanded system is small. However, ROCOF relays are more susceptible to false operation than vector surge relays.
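A minimal sketch of the ROCOF principle compared in the paper: estimate df/dt over a short measurement window and trip when it exceeds a setting. The window length, threshold, and frequency traces below are invented for illustration and do not reproduce the paper's relay models or simulation scenarios.

```python
# Minimal ROCOF islanding check. Window length, threshold, and the frequency
# traces are invented for illustration; real relays use measured phasor data.
def rocof_trip(freq_samples, dt, threshold_hz_per_s=1.0, window=5):
    """Trip if the mean df/dt over the last `window` intervals exceeds the setting."""
    if len(freq_samples) < window + 1:
        return False
    recent = freq_samples[-(window + 1):]
    dfdt = (recent[-1] - recent[0]) / (window * dt)
    return abs(dfdt) > threshold_hz_per_s

dt = 0.02                                      # one sample per 50 Hz cycle
# Grid-connected: frequency drifts only slightly, so the relay stays quiet.
steady = [50.0 + 0.001 * i for i in range(10)]
# Islanded with an active-power deficit: frequency falls fast, so the relay trips.
islanded = [50.0 - 0.15 * i for i in range(10)]
print(rocof_trip(steady, dt), rocof_trip(islanded, dt))
```

The detection-time versus power-imbalance trade-off studied in the paper corresponds to how quickly a given imbalance drives df/dt past the chosen threshold.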
Henriksson, Aron; Conway, Mike; Duneld, Martin; Chapman, Wendy W.
2013-01-01
Medical terminologies and ontologies are important tools for natural language processing of health record narratives. To account for the variability of language use, synonyms need to be stored in a semantic resource as textual instantiations of a concept. Developing such resources manually is, however, prohibitively expensive and likely to result in low coverage. To facilitate and expedite the process of lexical resource development, distributional analysis of large corpora provides a powerful data-driven means of (semi-)automatically identifying semantic relations, including synonymy, between terms. In this paper, we demonstrate how distributional analysis of a large corpus of electronic health records – the MIMIC-II database – can be employed to extract synonyms of SNOMED CT preferred terms. A distinctive feature of our method is its ability to identify synonymous relations between terms of varying length. PMID:24551362
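The distributional hypothesis behind the method, that terms used in similar contexts tend to be semantically related, can be sketched with sentence-level co-occurrence vectors and cosine similarity. The toy corpus below stands in for the MIMIC-II narratives, and this bag-of-words model is far simpler than the approach in the paper.

```python
import math
from collections import Counter, defaultdict

# Toy "clinical notes"; the real input would be large de-identified narratives.
corpus = [
    "patient denies chest pain and shortness of breath",
    "patient reports chest ache and shortness of breath",
    "no ache or pain on palpation of the chest",
    "mild pain noted in left arm",
    "mild ache noted in left arm",
]

# Sentence-level co-occurrence counts, the simplest distributional model.
vectors = defaultdict(Counter)
for sent in corpus:
    words = sent.split()
    for w in words:
        for c in words:
            if c != w:
                vectors[w][c] += 1

def cosine(u, v):
    keys = set(u) | set(v)
    dot = sum(u[k] * v[k] for k in keys)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv)

# Terms used in interchangeable contexts score higher than unrelated ones.
print(round(cosine(vectors["pain"], vectors["ache"]), 2),
      round(cosine(vectors["pain"], vectors["arm"]), 2))
```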
Impact of hadronic and nuclear corrections on global analysis of spin-dependent parton distributions
Jimenez-Delgado, Pedro; Accardi, Alberto; Melnitchouk, Wally
2014-02-01
We present the first results of a new global next-to-leading order analysis of spin-dependent parton distribution functions from the most recent world data on inclusive polarized deep-inelastic scattering, focusing in particular on the large-x and low-Q^2 regions. By directly fitting polarization asymmetries we eliminate biases introduced by using polarized structure function data extracted under nonuniform assumptions for the unpolarized structure functions. For analysis of the large-x data we implement nuclear smearing corrections for deuterium and 3He nuclei, and systematically include target mass and higher twist corrections to the g_1 and g_2 structure functions at low Q^2. We also explore the effects of Q^2 and W^2 cuts in the data sets, and the potential impact of future data on the behavior of the spin-dependent parton distributions at large x.
Risk analysis of highly combustible gas storage, supply, and distribution systems in PWR plants
Simion, G.P.; VanHorn, R.L.; Smith, C.L.; Bickel, J.H.; Sattison, M.B.; Bulmahn, K.D.
1993-06-01
This report presents the evaluation of the potential safety concerns for pressurized water reactors (PWRs) identified in Generic Safety Issue 106, Piping and the Use of Highly Combustible Gases in Vital Areas. A Westinghouse four-loop PWR plant was analyzed for the risk due to the use of combustible gases (predominantly hydrogen) within the plant. The analysis evaluated an actual hydrogen distribution configuration and conducted several sensitivity studies to determine the potential variability among PWRs. The sensitivity studies were based on hydrogen and safety-related equipment configurations observed at other PWRs within the United States. Several options for improving the hydrogen distribution system design were identified and evaluated for their effect on risk and core damage frequency. A cost/benefit analysis was performed to determine whether alternatives considered were justifiable based on the safety improvement and economics of each possible improvement.
Lanza, L G; Stagi, L
2012-01-01
An analysis of the counting and catching errors of rain intensity (RI) gauges, covering both catching and non-catching instrument types, recently became possible over a wide variety of measuring principles and instrument design solutions, based on the work performed during the recent Field Intercomparison of Rainfall Intensity Gauges promoted by the World Meteorological Organization (WMO). The analysis reported here concerns the assessment of accuracy and precision of various types of instruments based on extensive calibration tests performed in the laboratory during the first phase of this WMO Intercomparison. The non-parametric analysis of relative errors allowed us to conclude that the accuracy of the investigated RI gauges is generally high, under the assumption that it should be at least contained within the limits set forth by the WMO in this respect. The measuring principle exploited by an instrument is generally not decisive in obtaining such good results in the laboratory; rather, the attention paid by the manufacturer to suitably accounting for and correcting systematic errors and time-constant-related effects was shown to be influential. The analysis of precision showed that the observed frequency distribution of relative errors around their mean value is not indicative of an underlying Gaussian population, being in most cases much more peaked than would be expected of samples drawn from a Gaussian distribution. The analysis of variance (one-way ANOVA), assuming the instrument model as the only potentially affecting factor, does not confirm the hypothesis of a single common underlying distribution for all instruments. Pair-wise multiple comparison analysis revealed cases in which significant differences could be observed. PMID:22546787
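The one-way ANOVA mentioned above tests whether the instrument model affects the distribution of relative errors; its F statistic compares between-group to within-group variance. A minimal sketch on synthetic calibration errors (group means and sizes are invented, not the intercomparison data):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic relative calibration errors (%) for three gauge models; the
# third model has a shifted mean, mimicking an instrument-model effect.
groups = [rng.normal(mu, 1.0, size=30) for mu in (0.0, 0.0, 1.5)]

def one_way_anova(groups):
    """One-way ANOVA F statistic: between-group vs within-group mean squares."""
    all_x = np.concatenate(groups)
    grand = all_x.mean()
    k, n = len(groups), len(all_x)
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

F = one_way_anova(groups)
print(round(float(F), 1))
```

A large F relative to the F distribution's critical value rejects the hypothesis of a single common underlying distribution, which is the conclusion the abstract reports.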
Stability analysis of a discrete Hutchinson equation with discrete and distributed delay
NASA Astrophysics Data System (ADS)
Suryanto, A.; Yanti, I.; Kusumawinahyu, W. M.
2014-02-01
In this paper a Hutchinson equation with discrete and distributed delay is discretized by the Euler method. The dynamics of the obtained discrete system is then investigated. Specifically the stability of the positive fixed point is analyzed. It is found that for sufficiently small time-step of integration, the positive equilibrium undergoes a Neimark-Sacker bifurcation which is controlled by the discrete time delay. The results of analysis are then confirmed by some numerical simulations.
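A sketch of the discretization described above, keeping only the single discrete delay for brevity: an Euler step of the Hutchinson equation with the delayed density term. Parameter values are illustrative; the paper's bifurcation analysis covers the combined discrete and distributed delay.

```python
# Euler discretization of the Hutchinson (delayed logistic) equation
#   N'(t) = r N(t) (1 - N(t - tau)/K),
# keeping only the discrete delay; parameter values are illustrative.
def simulate(r, K, tau_steps, h, n_steps, N0=0.5):
    N = [N0] * (tau_steps + 1)               # constant history before t = 0
    for _ in range(n_steps):
        delayed = N[-1 - tau_steps]
        N.append(N[-1] + h * r * N[-1] * (1.0 - delayed / K))
    return N

# Short delay: the positive fixed point N* = K is stable.
stable = simulate(r=0.5, K=1.0, tau_steps=2, h=0.1, n_steps=2000)
# Long delay (r*tau > pi/2 in the continuous limit): the fixed point loses
# stability and the orbit settles on sustained oscillations, the kind of
# behavior associated with the Neimark-Sacker bifurcation described above.
oscillating = simulate(r=0.5, K=1.0, tau_steps=40, h=0.1, n_steps=2000)
print(round(stable[-1], 3))
```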
Tomiyama, Yuuki; Araki, Fujio; Oono, Takeshi; Hioki, Kazunari
2014-07-01
Our purpose in this study was to implement three-dimensional (3D) gamma analysis for structures of interest, such as the planning target volume (PTV) or clinical target volume (CTV) and organs at risk (OARs), for intensity-modulated radiation therapy (IMRT) dose verification. IMRT dose distributions for prostate and head and neck (HN) cancer patients were calculated with an analytical anisotropic algorithm in an Eclipse (Varian Medical Systems) treatment planning system (TPS) and by Monte Carlo (MC) simulation. The MC dose distributions were calculated with the EGSnrc/BEAMnrc and DOSXYZnrc user codes under conditions identical to those for the TPS. The prescribed doses were 76 Gy/38 fractions with five-field IMRT for the prostate and 33 Gy/17 fractions with seven-field IMRT for the HN. TPS dose distributions were verified by the gamma passing rates for the whole calculated volume, the PTV or CTV, and the OARs by use of 3D gamma analysis with reference to the MC dose distributions. The acceptance criteria for the 3D gamma analysis were 3 %/3 mm and 2 %/2 mm for the dose difference and the distance to agreement, respectively. The gamma passing rates in the PTV and OARs for the prostate IMRT plan were close to 100 %. For the HN IMRT plan, the passing rates at 2 %/2 mm in the CTV and OARs were substantially lower because inhomogeneous tissues such as bone and air in the HN are included in the calculation area. 3D gamma analysis for individual structures is useful for IMRT dose verification. PMID:24796955
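The gamma analysis used above combines a dose-difference criterion with a distance-to-agreement (DTA) criterion; a point passes when the minimum combined metric over nearby evaluated points is at most 1. A one-dimensional sketch follows (the paper works on full 3D dose grids; the Gaussian profiles below are invented):

```python
import numpy as np

# One-dimensional gamma-index sketch; dose profiles are invented Gaussians.
def gamma_index(ref, ev, x, dose_crit, dist_crit):
    """Per-point gamma: minimum over evaluated points of the combined
    dose-difference / distance-to-agreement metric."""
    gam = np.empty(len(ref))
    for i, (xi, di) in enumerate(zip(x, ref)):
        dd = (ev - di) / dose_crit            # dose differences, in criterion units
        dx = (x - xi) / dist_crit             # distances, in criterion units
        gam[i] = np.sqrt(dd ** 2 + dx ** 2).min()
    return gam

x = np.linspace(0.0, 50.0, 501)                      # positions in mm
ref = 100.0 * np.exp(-(((x - 25.0) / 12.0) ** 2))    # reference (e.g. MC) dose
ev = 100.0 * np.exp(-(((x - 25.5) / 12.0) ** 2))     # evaluated dose, 0.5 mm shift

# Passing rate under a 2 %/2 mm criterion (2 % of the maximum 100 is 2.0).
gam = gamma_index(ref, ev, x, dose_crit=2.0, dist_crit=2.0)
passing = 100.0 * (gam <= 1.0).mean()
print(round(float(passing), 1))
```

A small spatial shift passes easily because the DTA term absorbs it; steep gradients near inhomogeneities are where passing rates drop, consistent with the HN results above.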
Measurement of water thickness in PEM fuel cells and analysis of gas velocity distributions
NASA Astrophysics Data System (ADS)
Murakawa, H.; Ueda, T.; Sugimoto, K.; Asano, H.; Takenaka, N.
2011-09-01
Fuel gas (hydrogen) and oxidant gas (air) are supplied to a polymer electrolyte fuel cell (PEFC). Condensation may occur on the cathode side, since the air may become super-saturated by the fuel cell reactions. If condensed water exists in the gas diffusion layer (GDL) or the gas channels, it may affect fuel cell performance by blocking the oxygen from reaching the cathode reaction sites. Water management in the PEFC is therefore important. In order to clarify the effects of water on the performance of a PEFC, visualization and quantitative measurements of water distributions in a PEFC were carried out by means of neutron radiography. Two-dimensional water distributions were obtained, and water ejection was confirmed. It was found that, at the beginning of operation, water accumulated more readily in the GDL under the rib than under the channel. Furthermore, a network analysis of the gas-velocity distribution was applied to the experimental results; it computes the gas-velocity distributions from the flow resistance, i.e., the pressure drop. Using the measured water-thickness data, gas-velocity distributions were obtained in the channel and the GDL. The calculation showed that the air supply in the GDL decreased dramatically with increasing water accumulation.
NASA Astrophysics Data System (ADS)
Boni, Giorgio; Ferraris, Luca; Giannoni, Francesca; Roth, Giorgio; Rudari, Roberto
2007-10-01
A methodology is proposed for the inference, at the regional and local scales, of flood magnitude and associated probability. Once properly set up, this methodology is able to provide flood frequency distributions at gauged and un-gauged river sections pertaining to the same homogeneous region, using information extracted from rainfall observations. A proper flood frequency distribution can therefore be predicted even in un-gauged watersheds, for which no discharge time series is available. In regions where objective considerations allow the assumption of probability distribution homogeneity, regional approaches are increasingly adopted as they offer higher reliability. The so-called "third level" in regional frequency analysis, that is, the derivation of the local dimensional probability distribution from its regional non-dimensional counterpart, is often a critical issue because of the high spatial variability of the position parameter, usually called the "index flood". While in gauged sites the time series average is often a good estimator of the index flood, in un-gauged sites as much information as possible about the site concerned should be taken into account. To solve this issue, the present work builds on the experience developed for regional rainfall and flood frequency analyses, and a hydrologic model, driven by a specific hyetograph, is used to bridge the gap between the rainfall and flood frequency distributions, identifying flood discharge magnitudes associated with given frequencies. Results obtained from the application in the Liguria region, Northern Italy, are reported, and validation is proposed in gauged sites against local flood frequency distributions, obtained either from local records or from the regional frequency distribution of non-dimensional annual discharge maxima, made dimensional with the local discharge record.
Single-phase power distribution system power flow and fault analysis
NASA Technical Reports Server (NTRS)
Halpin, S. M.; Grigsby, L. L.
1992-01-01
Alternative methods for power flow and fault analysis of single-phase distribution systems are presented. The algorithms for both power flow and fault analysis utilize a generalized approach to network modeling. The generalized admittance matrix, formed using elements of linear graph theory, is an accurate network model for all possible single-phase network configurations. Unlike the standard nodal admittance matrix formulation algorithms, the generalized approach uses generalized component models for the transmission line and transformer. The standard assumption of a common node voltage reference point is not required to construct the generalized admittance matrix. Therefore, truly accurate simulation results can be obtained for networks that cannot be modeled using traditional techniques.
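For contrast with the generalized formulation described above, a standard nodal admittance solve on a toy single-phase network looks like this; the paper's generalized approach removes the need for the common voltage reference assumed here. Line admittances and current injections are invented.

```python
import numpy as np

# Toy 3-node single-phase network: a standard bus admittance matrix solve.
# The generalized admittance formulation of the paper goes further (it drops
# the common voltage reference assumed here), but the core Y V = I step is
# the same.
y01, y12, y02 = 4.0, 2.0, 1.0          # line admittances in siemens (invented)

# Assemble Y by inspection: diagonal entries sum the admittances incident on
# a node; off-diagonal entries are the negated line admittances.
Y = np.array([
    [y01 + y02, -y01,       -y02      ],
    [-y01,       y01 + y12, -y12      ],
    [-y02,      -y12,        y12 + y02],
])

I = np.array([5.0, 0.0, -5.0])          # injections: 5 A in at node 0, out at node 2

# Y is singular (its rows sum to zero), so fix node 2 as the voltage
# reference (V = 0) and solve the reduced system for the remaining nodes.
V = np.zeros(3)
V[:2] = np.linalg.solve(Y[:2, :2], I[:2])
print(np.round(V, 3))
```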
Hasson, Uri; Skipper, Jeremy I.; Wilde, Michael J.; Nusbaum, Howard C.; Small, Steven L.
2007-01-01
The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, to allow collaborative research, and to achieve these aims securely and with minimum management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data. PMID:17964812
A new parallel-vector finite element analysis software on distributed-memory computers
NASA Technical Reports Server (NTRS)
Qin, Jiangning; Nguyen, Duc T.
1993-01-01
A new parallel-vector finite element analysis software package MPFEA (Massively Parallel-vector Finite Element Analysis) is developed for large-scale structural analysis on massively parallel computers with distributed-memory. MPFEA is designed for parallel generation and assembly of the global finite element stiffness matrices as well as parallel solution of the simultaneous linear equations, since these are often the major time-consuming parts of a finite element analysis. Block-skyline storage scheme along with vector-unrolling techniques are used to enhance the vector performance. Communications among processors are carried out concurrently with arithmetic operations to reduce the total execution time. Numerical results on the Intel iPSC/860 computers (such as the Intel Gamma with 128 processors and the Intel Touchstone Delta with 512 processors) are presented, including an aircraft structure and some very large truss structures, to demonstrate the efficiency and accuracy of MPFEA.
Quinlan, D; Barany, G; Panas, T
2007-08-30
Many forms of security analysis on large scale applications can be substantially automated but the size and complexity can exceed the time and memory available on conventional desktop computers. Most commercial tools are understandably focused on such conventional desktop resources. This paper presents research work on the parallelization of security analysis of both source code and binaries within our Compass tool, which is implemented using the ROSE source-to-source open compiler infrastructure. We have focused on both shared and distributed memory parallelization of the evaluation of rules implemented as checkers for a wide range of secure programming rules, applicable to desktop machines, networks of workstations and dedicated clusters. While Compass as a tool focuses on source code analysis and reports violations of an extensible set of rules, the binary analysis work uses the exact same infrastructure but is less well developed into an equivalent final tool.
NASA Technical Reports Server (NTRS)
Lefebvre, D. R.; Sanderson, A. C.
1994-01-01
Robot coordination and control systems for remote teleoperation applications are by necessity implemented on distributed computers. Modeling and performance analysis of these distributed robotic systems is difficult, but important for economic system design. Performance analysis methods originally developed for conventional distributed computer systems are often unsatisfactory for evaluating real-time systems. The paper introduces a formal model of distributed robotic control systems and a performance analysis method, based on scheduling theory, that can handle concurrent hard real-time response specifications. Use of the method is illustrated by a case of remote teleoperation which assesses the effect of communication delays and the allocation of robot control functions on control system hardware requirements.
Spatial distribution and source analysis of heavy metals in the marine sediments of Hong Kong.
Zhang, Xuan; Man, Xiaobing; Jiang, Honglei
2015-08-01
A data matrix, obtained during a 3-year monitoring period (2010-2012) from 45 sampling locations in the marine environment of Hong Kong, was subjected to pollution indicator and multivariate statistical analysis to investigate the spatial distribution and origin of 12 selected heavy metals. Results suggested that V, Ni, and Ba were at safe levels, whereas there was a significant anthropogenic effect on Zn, Pb, Cu, Cd, and Cr, which showed moderate to severe enrichment at some locations. Sampling locations 1, 2, 7, 9, 12, 13, 14, 30, 31, and 32 were identified as pollution hot spots. Principal component analysis and cluster analysis showed that Zn, Pb, Cu, Cd, and Cr were primarily controlled by anthropogenic sources and Ni, Ba, and V by natural sources, while Al, Fe, Mn, and As were controlled by both anthropogenic and natural sources. Cluster analysis classified the 45 sampling sites into five groups, and analysis of variance indicated significant differences between the groups. The pollution hot spots fell into the moderately or highly polluted groups, and the factors influencing the heavy metal distribution were analyzed. PMID:26173849
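A sketch of the PCA step used in the source apportionment above: with two latent sources driving correlated metal concentrations, the first two principal components of the centered data capture most of the variance. The dataset below is synthetic; the loadings and noise levels are invented, not the paper's measurements.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic sediment data: 45 sites x 4 metals driven by two latent sources
# (loadings and noise are invented, loosely mimicking the paper's setting).
anthropogenic = rng.normal(size=45)
natural = rng.normal(size=45)
X = np.column_stack([
    1.0 * anthropogenic,     # "Zn": loads on the anthropogenic factor
    0.9 * anthropogenic,     # "Pb"
    1.0 * natural,           # "Ni": loads on the natural factor
    0.8 * natural,           # "Ba"
]) + 0.1 * rng.normal(size=(45, 4))

# PCA via SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s ** 2 / (s ** 2).sum()          # variance fraction per component
print(np.round(explained, 2))
```

Inspecting the rows of `Vt` (the component loadings) is what lets an analyst attribute groups of metals to a common source, as the abstract describes.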
Selivanov, Vitaly A; Sukhomlin, Tatiana; Centelles, Josep J; Lee, Paul W N; Cascante, Marta
2006-01-01
A current trend in neuroscience research is the use of stable isotope tracers in order to address metabolic processes in vivo. The tracers produce a huge number of metabolite forms that differ according to the number and position of labeled isotopes in the carbon skeleton (isotopomers) and such a large variety makes the analysis of isotopomer data highly complex. On the other hand, this multiplicity of forms does provide sufficient information to address cell operation in vivo. By the end of last millennium, a number of tools have been developed for estimation of metabolic flux profile from any possible isotopomer distribution data. However, although well elaborated, these tools were limited to steady state analysis, and the obtained set of fluxes remained disconnected from their biochemical context. In this review we focus on a new numerical analytical approach that integrates kinetic and metabolic flux analysis. The related computational algorithm estimates the dynamic flux based on the time-dependent distribution of all possible isotopomers of metabolic pathway intermediates that are generated from a labeled substrate. The new algorithm connects specific tracer data with enzyme kinetic characteristics, thereby extending the amount of data available for analysis: it uses enzyme kinetic data to estimate the flux profile, and vice versa, for the kinetic analysis it uses in vivo tracer data to reveal the biochemical basis of the estimated metabolic fluxes. PMID:17118161
NASA Astrophysics Data System (ADS)
Terres, Maria A.; Gelfand, Alan E.
2015-07-01
Typical ecological gradient analyses consider variation in the response of plants along a gradient of covariate values, but generally constrain themselves to predetermined response curves and ignore spatial autocorrelation. In this paper, we develop a formal spatial gradient analysis. We adopt the mathematical definition of gradients as directional rates of change with regard to a spatial surface. We view both the response and the covariate as spatial surfaces over a region of interest with respective gradient behavior. The gradient analysis we propose enables local comparison of these gradients. At any spatial location, we compare the behavior of the response surface with the behavior of the covariate surface to provide a novel form of sensitivity analysis. More precisely, we first fit a joint hierarchical Bayesian spatial model for a response variable and an environmental covariate. Then, after model fitting, at a given location, for each variable, we can obtain the posterior distribution of the derivative in any direction. We use these distributions to compute spatial sensitivities and angular discrepancies enabling a more detailed picture of the spatial nature of the response-covariate relationship. This methodology is illustrated using species presence probability as a response to elevation for two species of South African protea. We also offer a comparison with sensitivity analysis using geographically weighted regression. We show that the spatial gradient analysis allows for more extensive inference and provides a much richer description of the spatially varying relationships.
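The core construction, comparing directional rates of change of a response surface and a covariate surface at each location, can be sketched with finite differences on analytic stand-in surfaces (the paper instead obtains posterior distributions of these derivatives from a hierarchical Bayesian model):

```python
import numpy as np

# Stand-in surfaces on a unit square grid: a covariate (elevation) and a
# response that increases with it. Both are invented analytic examples.
x = np.linspace(0.0, 1.0, 101)
y = np.linspace(0.0, 1.0, 101)
X, Y = np.meshgrid(x, y, indexing="ij")
elevation = X + 0.5 * Y                  # covariate surface
response = 2.0 * (X + 0.5 * Y)           # response surface

# Finite-difference gradients of each surface.
cov_gx, cov_gy = np.gradient(elevation, x, y)
res_gx, res_gy = np.gradient(response, x, y)

# Angular discrepancy between the two gradient fields at every location.
dot = cov_gx * res_gx + cov_gy * res_gy
norms = np.hypot(cov_gx, cov_gy) * np.hypot(res_gx, res_gy)
angle = np.degrees(np.arccos(np.clip(dot / norms, -1.0, 1.0)))

# Spatial sensitivity: ratio of the gradient magnitudes.
sensitivity = np.hypot(res_gx, res_gy) / np.hypot(cov_gx, cov_gy)
print(round(float(angle.max()), 3), round(float(sensitivity.mean()), 3))
```

Here the response is everywhere aligned with the covariate gradient (zero angular discrepancy) and twice as steep, the kind of local relationship the paper's posterior summaries quantify with uncertainty attached.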
NASA Technical Reports Server (NTRS)
Leybold, H. A.
1971-01-01
Random numbers were generated with the aid of a digital computer and transformed such that the probability density function of a discrete random load history composed of these random numbers had one of the following non-Gaussian distributions: Poisson, binomial, log-normal, Weibull, and exponential. The resulting random load histories were analyzed to determine their peak statistics and were compared with cumulative peak maneuver-load distributions for fighter and transport aircraft in flight.
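The transformation step described above can be done by inverse-CDF sampling: apply the inverse distribution function to uniform draws. A sketch for two of the listed distributions (the parameters are illustrative, not those of the study):

```python
import math
import random

random.seed(5)

# Inverse-CDF transforms: push uniform(0,1) draws through the inverse
# distribution function. Shown for two of the distributions listed above.
def weibull(u, shape, scale):
    # F(x) = 1 - exp(-(x/scale)**shape)  =>  x = scale * (-ln(1-u))**(1/shape)
    return scale * (-math.log(1.0 - u)) ** (1.0 / shape)

def exponential(u, rate):
    # F(x) = 1 - exp(-rate*x)  =>  x = -ln(1-u) / rate
    return -math.log(1.0 - u) / rate

# A synthetic "load history" of 50,000 Weibull draws (parameters invented).
loads = [weibull(random.random(), shape=2.0, scale=10.0) for _ in range(50_000)]

# Sanity check: the sample mean approaches scale * Gamma(1 + 1/shape).
mean_th = 10.0 * math.gamma(1.0 + 1.0 / 2.0)
print(round(sum(loads) / len(loads), 2), round(mean_th, 2))
```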
A distributed analysis and visualization system for model and observational data
NASA Technical Reports Server (NTRS)
Wilhelmson, Robert B.
1994-01-01
Software was developed with NASA support to aid in the analysis and display of the massive amounts of data generated from satellites, observational field programs, and from model simulations. This software was developed in the context of the PATHFINDER (Probing ATmospHeric Flows in an Interactive and Distributed EnviRonment) Project. The overall aim of this project is to create a flexible, modular, and distributed environment for data handling, modeling simulations, data analysis, and visualization of atmospheric and fluid flows. Software completed with NASA support includes GEMPAK analysis, data handling, and display modules for which collaborators at NASA had primary responsibility, and prototype software modules for three-dimensional interactive and distributed control and display as well as data handling, for which NCSA was responsible. Overall process control was handled through a scientific and visualization application builder from Silicon Graphics known as the Iris Explorer. In addition, the GEMPAK related work (GEMVIS) was also ported to the Advanced Visualization System (AVS) application builder. Many modules were developed to enhance those already available in Iris Explorer including HDF file support, improved visualization and display, simple lattice math, and the handling of metadata through development of a new grid datatype. Complete source and runtime binaries along with on-line documentation are available via the World Wide Web at: http://redrock.ncsa.uiuc.edu/PATHFINDER/pathre12/top/top.html.
Saeki, Yuichi; Shiro, Sokichi; Tajima, Toshiyuki; Yamamoto, Akihiro; Sameshima-Saito, Reiko; Sato, Takashi; Yamakawa, Takeo
2013-01-01
We characterized the relationship between the genetic diversity of indigenous soybean-nodulating bradyrhizobia from weakly acidic soils in Japan and their geographical distribution in an ecological study of indigenous soybean rhizobia. We isolated bradyrhizobia from three kinds of Rj-genotype soybeans. Their genetic diversity and community structure were analyzed by PCR-RFLP analysis of the 16S-23S rRNA gene internal transcribed spacer (ITS) region with 11 Bradyrhizobium USDA strains as references. We used data from the present study and previous studies to carry out mathematical ecological analyses, multidimensional scaling analysis with the Bray-Curtis index, polar ordination analysis, and multiple regression analyses to characterize the relationship between soybean-nodulating bradyrhizobial community structures and their geographical distribution. The mathematical ecological approaches used in this study demonstrated the presence of ecological niches and suggested the geographical distribution of soybean-nodulating bradyrhizobia to be a function of latitude and the related climate, with clusters in the order Bj123, Bj110, Bj6, and Be76 from north to south in Japan. PMID:24240318
Combining Different Tools for EEG Analysis to Study the Distributed Character of Language Processing
da Rocha, Armando Freitas; Foz, Flávia Benevides; Pereira, Alfredo
2015-01-01
Recent studies on language processing indicate that language cognition is better understood if assumed to be supported by a distributed intelligent processing system enrolling neurons located all over the cortex, in contrast to reductionism that proposes to localize cognitive functions to specific cortical structures. Here, brain activity was recorded using electroencephalogram while volunteers were listening or reading small texts and had to select pictures that translate meaning of these texts. Several techniques for EEG analysis were used to show this distributed character of neuronal enrollment associated with the comprehension of oral and written descriptive texts. Low Resolution Tomography identified the many different sets (si) of neurons activated in several distinct cortical areas by text understanding. Linear correlation was used to calculate the information H(ei) provided by each electrode of the 10/20 system about the identified si. H(ei) Principal Component Analysis (PCA) was used to study the temporal and spatial activation of these sources si. This analysis evidenced 4 different patterns of H(ei) covariation that are generated by neurons located at different cortical locations. These results show that the distributed character of language processing is clearly evidenced by combining available EEG technologies. PMID:26713089
Space station electrical power distribution analysis using a load flow approach
NASA Technical Reports Server (NTRS)
Emanuel, Ervin M.
1987-01-01
The space station's electrical power system will evolve and grow in a manner much like that of present terrestrial electrical power utilities. The initial baseline reference configuration will contain more than 50 nodes or busses, inverters, transformers, overcurrent protection devices, distribution lines, solar arrays, and/or solar dynamic power generating sources. The system is designed to manage and distribute 75 kW of power, single phase or three phase, at 20 kHz, to grow to a level of 300 kW steady state, and to be capable of operating at a peak of 450 kW for 5 to 10 min. In order to plan far into the future and keep pace with load growth, a load flow power system analysis approach must be developed and utilized. This method is a well-known energy assessment and management tool that is widely used throughout the electrical power utility industry. The results of a comprehensive evaluation and assessment of an Electrical Distribution System Analysis Program (EDSA) are discussed. Its potential use as an analysis and design tool for the 20 kHz space station electrical power system is addressed.
Cherney, E. A.
1988-07-01
The paper presents the results and analyses of long-term cantilever strength tests on polymeric line post insulators. The time-to-failure data for static cantilever loads are represented by the Weibull distribution. The life distribution, obtained from the maximum likelihood estimates of the accelerated failure times, fits an exponential model. An extrapolation of the life distribution to normal loads provides an estimate of the strength rating and mechanical equivalence to porcelain line post insulators.
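Representing time-to-failure data by a Weibull distribution, as done above, typically means estimating the shape and scale by maximum likelihood. A minimal sketch for uncensored failure times follows; the bisection bracket and tolerance are arbitrary choices, and the original cantilever-load data are not reproduced here.

```python
import math

def weibull_mle(times, tol=1e-10):
    """Maximum-likelihood fit of a two-parameter Weibull (shape k, scale lam)
    to uncensored failure times. The shape solves the profile equation
    sum(t^k ln t)/sum(t^k) - 1/k = mean(ln t), found here by bisection;
    the scale then follows in closed form."""
    logs = [math.log(t) for t in times]
    mlog = sum(logs) / len(logs)

    def g(k):
        tk = [t ** k for t in times]
        return sum(x * l for x, l in zip(tk, logs)) / sum(tk) - 1.0 / k - mlog

    lo, hi = 1e-3, 100.0          # g(lo) < 0 < g(hi) for well-behaved data
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    k = 0.5 * (lo + hi)
    lam = (sum(t ** k for t in times) / len(times)) ** (1.0 / k)
    return k, lam
```

With censored accelerated-life data, as in the study, the likelihood gains survival-function terms for unfailed units; this sketch covers only the complete-data case.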
Damos, Petros; Soulopoulou, Polyxeni
2015-01-01
Temperature implies contrasting biological causes of demographic aging in poikilotherms. In this work, we used the reliability theory to describe the consistency of mortality with age in moth populations and to show that differentiation in hazard rates is related to extrinsic environmental causes such as temperature. Moreover, experiments that manipulate extrinsic mortality were used to distinguish temperature-related death rates and the pertinence of the Weibull aging model. The Newton-Raphson optimization method was applied to calculate parameters for small samples of ages at death by estimating the maximum likelihood surfaces using scored gradient vectors and the Hessian matrix. The study reveals for the first time that the Weibull function is able to describe contrasting biological causes of demographic aging for moth populations maintained at different temperature regimes. We demonstrate that at favourable conditions the insect death rate accelerates as age advances, in contrast to the extreme temperatures in which each individual drifts toward death in a linear fashion and has a constant chance of passing away. Moreover, the slope of the hazard rate shifts towards a constant initial rate, a pattern demonstrated by systems which are not wearing out (e.g. non-aging), since the failure, or death, is a random event independent of time. This finding may appear surprising, because, traditionally, it was mostly taken as a rule that in an aging population the force of mortality increases exponentially until all individuals have died. Moreover, in relation to other studies, we have not observed any typical decelerating aging patterns at late life (mortality leveling-off), but rather, accelerated hazard rates at optimum temperatures and a stabilized increase at the extremes. In most cases, the increase in aging-related mortality was simulated reasonably well according to the Weibull survivorship model that is applied.
Moreover, semi-log probability hazard rate model illustrations and maximum likelihoods may be useful in defining periods of mortality leveling off and provide clear evidence that environmental variability may affect parameter estimates and insect population failure rate. From a reliability theory standpoint, failure rates vary according to a linear function of age at the extremes, indicating that the life system (i.e., population) is able to eliminate earlier failure and/or to keep later failure rates constant. The applied model was able to identify the major correlates of extended longevity and to suggest new ideas for using demographic concepts in both basic and applied population biology and aging. PMID:26317217
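The aging regimes discussed above map directly onto the Weibull hazard function: the shape parameter alone decides whether mortality accelerates with age or stays constant. A short sketch (the parameter values are illustrative, not the study's estimates):

```python
def weibull_hazard(t, shape, scale):
    """Weibull hazard h(t) = (k/lam) * (t/lam)**(k-1), with k=shape, lam=scale."""
    return (shape / scale) * (t / scale) ** (shape - 1.0)

# shape > 1: hazard rises with age (wear-out / demographic aging)
# shape == 1: constant hazard, the exponential case (death independent of age)
# shape < 1: hazard falls with age (early failures dominate)
```

In the abstract's terms, the optimum-temperature populations correspond to shape > 1 (accelerating death rate) and the extreme-temperature populations drift toward shape near 1 (a constant chance of dying at any age).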
Lagerlöf, Jakob H.; Kindblom, Jon; Bernhardt, Peter
2014-09-15
Purpose: To construct a Monte Carlo (MC)-based simulation model for analyzing the dependence of tumor oxygen distribution on different variables related to tumor vasculature [blood velocity, vessel-to-vessel proximity (vessel proximity), and inflowing oxygen partial pressure (pO2)]. Methods: A voxel-based tissue model containing parallel capillaries with square cross-sections (sides of 10 μm) was constructed. Green's function was used for diffusion calculations and Michaelis-Menten kinetics to manage oxygen consumption. The model was tuned to approximately reproduce the oxygenational status of a renal carcinoma; the depth oxygenation curves (DOC) were fitted with an analytical expression to facilitate rapid MC simulations of tumor oxygen distribution. DOCs were simulated with three variables at three settings each (blood velocity, vessel proximity, and inflowing pO2), which resulted in 27 combinations of conditions. To create a model that simulated variable oxygen distributions, the oxygen tension at a specific point was randomly sampled with trilinear interpolation in the dataset from the first simulation. Six correlations between blood velocity, vessel proximity, and inflowing pO2 were hypothesized. Variable models with correlated parameters were compared to each other and to a nonvariable, DOC-based model to evaluate the differences in simulated oxygen distributions and tumor radiosensitivities for different tumor sizes. Results: For tumors with radii ranging from 5 to 30 mm, the nonvariable DOC model tended to generate normal or log-normal oxygen distributions, with a cut-off at zero. The pO2 distributions simulated with the six variable DOC models were quite different from the distributions generated with the nonvariable DOC model; in the former case the variable models simulated oxygen distributions that were more similar to in vivo results found in the literature.
For larger tumors, the oxygen distributions became truncated in the lower end, due to anoxia, but smaller tumors showed undisturbed oxygen distributions. The six different models with correlated parameters generated three classes of oxygen distributions. The first was a hypothetical, negative covariance between vessel proximity and pO2 (VPO-C scenario); the second was a hypothetical positive covariance between vessel proximity and pO2 (VPO+C scenario); and the third was the hypothesis of no correlation between vessel proximity and pO2 (UP scenario). The VPO-C scenario produced a distinctly different oxygen distribution than the two other scenarios. The shape of the VPO-C scenario was similar to that of the nonvariable DOC model, and the larger the tumor, the greater the similarity between the two models. For all simulations, the mean oxygen tension decreased and the hypoxic fraction increased with tumor size. The absorbed dose required for definitive tumor control was highest for the VPO+C scenario, followed by the UP and VPO-C scenarios. Conclusions: A novel MC algorithm was presented which simulated oxygen distributions and radiation response for various biological parameter values. The analysis showed that the VPO-C scenario generated a clearly different oxygen distribution from the VPO+C scenario; the former exhibited a lower hypoxic fraction and higher radiosensitivity. In future studies, this modeling approach might be valuable for qualitative analyses of factors that affect oxygen distribution as well as analyses of specific experimental and clinical situations.
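The Michaelis-Menten consumption term used in the model above has a standard form: consumption rises with oxygen tension and saturates at vmax. A one-line sketch (the vmax and km values below are placeholders, not the parameters of the study):

```python
def mm_consumption(pO2, vmax, km):
    """Michaelis-Menten oxygen consumption rate: vmax * pO2 / (km + pO2).
    Rate is half of vmax when pO2 equals km, and approaches vmax as pO2 grows."""
    return vmax * pO2 / (km + pO2)

half_max = mm_consumption(2.5, 4.0, 2.5)  # pO2 == km, so rate = vmax / 2 = 2.0
```

This saturating form is what makes consumption nearly constant near well-oxygenated vessels while dropping off in hypoxic regions.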
Abadi, Alireza; Amanpour, Farzaneh; Bajdik, Chris; Yavari, Parvin
2012-01-01
Background: The goal of this study is to extend the applications of parametric survival models so that they include cases in which accelerated failure time (AFT) assumption is not satisfied, and examine parametric and semiparametric models under different proportional hazards (PH) and AFT assumptions. Methods: The data for 12,531 women diagnosed with breast cancer in British Columbia, Canada, during 1990–1999 were divided into eight groups according to patients’ ages and stage of disease, and each group was assumed to have different AFT and PH assumptions. For parametric models, we fitted the saturated generalized gamma (GG) distribution, and compared this with the conventional AFT model. Using a likelihood ratio statistic, both models were compared to the simpler forms including the Weibull and lognormal. For semiparametric models, either Cox's PH model or stratified Cox model was fitted according to the PH assumption and tested using Schoenfeld residuals. The GG family was compared to the log-logistic model using Akaike information criterion (AIC) and Bayesian information criterion (BIC). Results: When PH and AFT assumptions were satisfied, semiparametric and parametric models both provided valid descriptions of breast cancer patient survival. When PH assumption was not satisfied but AFT condition held, the parametric models performed better than the stratified Cox model. When neither the PH nor the AFT assumptions were met, the lognormal distribution provided a reasonable fit. Conclusions: When both the PH and AFT assumptions are satisfied, the parametric and semiparametric models provide complementary information. When PH assumption is not satisfied, the parametric models should be considered, whether the AFT assumption is met or not. PMID:23024854
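The AIC and BIC comparisons used above reduce to two short formulas, with lower values favouring a model; the example log-likelihoods in the comment are hypothetical, purely to show the direction of the comparison.

```python
import math

def aic(loglik, k):
    """Akaike information criterion: 2k - 2 ln L, k = number of parameters."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """Bayesian information criterion: k ln n - 2 ln L, n = sample size."""
    return k * math.log(n) - 2 * loglik

# e.g. comparing a 3-parameter generalized gamma fit to a 2-parameter Weibull:
# if aic(ll_gg, 3) < aic(ll_weibull, 2), the extra parameter earns its keep.
```

BIC penalizes parameters more heavily than AIC once n exceeds about 8, which is why the two criteria can disagree on large survival datasets.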
A quantitative quantum-chemical analysis tool for the distribution of mechanical force in molecules
NASA Astrophysics Data System (ADS)
Stauch, Tim; Dreuw, Andreas
2014-04-01
The promising field of mechanochemistry suffers from a general lack of understanding of the distribution and propagation of force in a stretched molecule, which limits its applicability up to the present day. In this article, we introduce the JEDI (Judgement of Energy DIstribution) analysis, which is the first quantum chemical method that provides a quantitative understanding of the distribution of mechanical stress energy among all degrees of freedom in a molecule. The method is carried out on the basis of static or dynamic calculations under the influence of an external force and makes use of a Hessian matrix in redundant internal coordinates (bond lengths, bond angles, and dihedral angles), so that all relevant degrees of freedom of a molecule are included and mechanochemical processes can be interpreted in a chemically intuitive way. The JEDI method is characterized by its modest computational effort, with the calculation of the Hessian being the rate-determining step, and delivers, except for the harmonic approximation, exact ab initio results. We apply the JEDI analysis to several example molecules in both static quantum chemical calculations and Born-Oppenheimer Molecular Dynamics simulations in which molecules are subject to an external force, thus studying not only the distribution and the propagation of strain in mechanically deformed systems, but also gaining valuable insights into the mechanochemically induced isomerization of trans-3,4-dimethylcyclobutene to trans,trans-2,4-hexadiene. The JEDI analysis can potentially be used in the discussion of sonochemical reactions, molecular motors, mechanophores, and photoswitches as well as in the development of molecular force probes.
Modelling European dry spell length distributions, years 1951-2000
NASA Astrophysics Data System (ADS)
Serra, Carina; Lana, Xavier; Burgueño, August; Martinez, Maria-Dolors
2010-05-01
Daily precipitation records of 267 European rain gauges are considered to obtain dry spell length, DSL, series along the second half of the twentieth century (1951-2000). A dry spell is defined as a set of consecutive days with daily rain amount below a given threshold, R0, which are equal to 0.1, 1.0, 5.0 and 10.0 mm/day. DSL series are fitted to four different statistical models: Pearson type III (PE3), Weibull (WEI), generalized Pareto (GPA) and lognormal (LN) distributions. The parameters of every model are estimated by L-moments, and the goodness of fit is assessed by quantifying discrepancies between empirical and theoretical distributions in the L-skewness-kurtosis diagrams. The most common best fitting model across Europe is PE3, especially for 0.1 and 1.0 mm/day thresholds. Nevertheless, a few stations in southern Europe are better modelled by the WEI distribution. For 5.0 and 10.0 mm/day, the spatial distribution of the best fitting model is more heterogeneous than for the lowest thresholds. Maps of DSL average and standard deviation, and expected lengths for return periods of 2, 5, 10, 25 and 50 years are also obtained. A common feature for all these maps is that, whereas for thresholds of 0.1 and 1.0 mm/day a N-S gradient is detected, especially strong in Mediterranean areas, for 5.0 and 10.0 mm/day a NW-SE gradient is observed in the Iberian Peninsula and a SW-NE gradient in the Scandinavian Peninsula. Finally, a regional frequency analysis based on a clustering algorithm is attempted for the four threshold levels R0; it is observed that the PE3 model is the parent distribution for the groups with the highest number of stations.
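The L-moment estimation used above starts from sample probability-weighted moments. A minimal sketch of the first four sample L-moments and the L-skewness/L-kurtosis ratios used in the goodness-of-fit diagrams (this is the standard unbiased-PWM construction, not code from the paper):

```python
def sample_lmoments(data):
    """First four sample L-moments via probability-weighted moments b0..b3.
    Returns (l1, l2, t3, t4): mean, L-scale, L-skewness, L-kurtosis."""
    x = sorted(data)
    n = len(x)
    b = [0.0] * 4
    for j, xj in enumerate(x):  # j = 0 .. n-1 over the order statistics
        b[0] += xj
        b[1] += xj * j / (n - 1)
        b[2] += xj * j * (j - 1) / ((n - 1) * (n - 2))
        b[3] += xj * j * (j - 1) * (j - 2) / ((n - 1) * (n - 2) * (n - 3))
    b = [bi / n for bi in b]
    l1 = b[0]
    l2 = 2 * b[1] - b[0]
    l3 = 6 * b[2] - 6 * b[1] + b[0]
    l4 = 20 * b[3] - 30 * b[2] + 12 * b[1] - b[0]
    return l1, l2, l3 / l2, l4 / l2
```

Plotting (t3, t4) for each station against the theoretical curves of PE3, WEI, GPA and LN is what the L-skewness-kurtosis diagram assessment amounts to.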
NASA Astrophysics Data System (ADS)
Matsuda, Takashi S.; Nakamura, Takuji; Ejiri, Mitsumu K.; Tsutsumi, Masaki; Shiokawa, Kazuo
2014-08-01
We have developed a new analysis method for obtaining the power spectrum in the horizontal phase velocity domain from airglow intensity image data to study atmospheric gravity waves. This method can deal with extensive amounts of imaging data obtained in different years and at various observation sites without bias caused by different event extraction criteria for the person processing the data. The new method was applied to sodium airglow data obtained in 2011 at Syowa Station (69°S, 40°E), Antarctica. The results were compared with those obtained from a conventional event analysis in which the phase fronts were traced manually in order to estimate horizontal characteristics, such as wavelengths, phase velocities, and wave periods. The horizontal phase velocity of each wave event in the airglow images corresponded closely to a peak in the spectrum. The statistical results of spectral analysis showed an eastward offset of the horizontal phase velocity distribution. This could be interpreted as the existence of wave sources around the stratospheric eastward jet. Similar zonal anisotropy was also seen in the horizontal phase velocity distribution of the gravity waves by the event analysis. Both methods produce similar statistical results about directionality of atmospheric gravity waves. Galactic contamination of the spectrum was examined by calculating the apparent velocity of the stars and found to be limited to phase speeds lower than 30 m/s. In conclusion, our new method is suitable for deriving the horizontal phase velocity characteristics of atmospheric gravity waves from an extensive amount of imaging data.
Jeon, Seongho; Oberreit, Derek R; Van Schooneveld, Gary; Hogan, Christopher J
2016-02-01
We apply liquid nebulization (LN) in series with ion mobility spectrometry (IMS, using a differential mobility analyzer coupled to a condensation particle counter) to measure the size distribution functions (the number concentration per unit log diameter) of gold nanospheres in the 5-30 nm range, 70 nm × 11.7 nm gold nanorods, and albumin proteins originally in aqueous suspensions. In prior studies, IMS measurements have only been carried out for colloidal nanoparticles in this size range using electrosprays for aerosolization, as traditional nebulizers produce supermicrometer droplets which leave residue particles from non-volatile species. Residue particles mask the size distribution of the particles of interest. Uniquely, the LN employed in this study uses both online dilution (with dilution factors of up to 10^4) with ultra-high purity water and a ball-impactor to remove droplets larger than 500 nm in diameter. This combination enables hydrosol-to-aerosol conversion preserving the size and morphology of particles, and also enables higher non-volatile residue tolerance than electrospray based aerosolization. Through LN-IMS measurements we show that the size distribution functions of narrowly distributed but similarly sized particles can be distinguished from one another, which is not possible with Nanoparticle Tracking Analysis in the sub-30 nm size range. Through comparison to electron microscopy measurements, we find that the size distribution functions inferred via LN-IMS measurements correspond to the particle sizes coated by surfactants, i.e. as they persist in colloidal suspensions. Finally, we show that the gas phase particle concentrations inferred from IMS size distribution functions are functions only of the liquid phase particle concentration, and are independent of particle size, shape, and chemical composition. Therefore LN-IMS enables characterization of the size, yield, and polydispersity of sub-30 nm particles. PMID:26750519
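The quantity measured above, number concentration per unit log diameter, is obtained by dividing bin counts by the log-width of each size bin. A minimal sketch with hypothetical diameters and bin edges (nanometres):

```python
import math

def size_distribution_function(diameters, bin_edges):
    """dN/dlog10(D): counts in each size bin divided by the bin's
    width in log10(diameter). bin_edges must be ascending."""
    counts = [0] * (len(bin_edges) - 1)
    for d in diameters:
        for i in range(len(bin_edges) - 1):
            if bin_edges[i] <= d < bin_edges[i + 1]:
                counts[i] += 1
                break
    widths = [math.log10(bin_edges[i + 1] / bin_edges[i])
              for i in range(len(bin_edges) - 1)]
    return [c / w for c, w in zip(counts, widths)]

# Hypothetical sample: three particles near 5-7 nm, one at 15 nm, one at 25 nm.
sdf = size_distribution_function([5.0, 6.0, 7.0, 15.0, 25.0], [5.0, 10.0, 20.0, 30.0])
```

Using log-width rather than linear width makes bins spanning 5-10 nm and 50-100 nm directly comparable, which matters when a distribution covers more than a decade of sizes.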
Lancaster, Kari; Seear, Kate; Treloar, Carla
2015-12-01
The law is a key site for the production of meanings around the 'problem' of drugs in public discourse. In this article, we critically consider the material-discursive 'effects' of laws prohibiting peer distribution of needles and syringes in Australia. Taking the laws and regulations governing possession and distribution of injecting equipment in one jurisdiction (New South Wales, Australia) as a case study, we use Carol Bacchi's poststructuralist approach to policy analysis to critically consider the assumptions and presuppositions underpinning this legislative and regulatory framework, with a particular focus on examining the discursive, subjectification and lived effects of these laws. We argue that legislative prohibitions on the distribution of injecting equipment except by 'authorised persons' within 'approved programs' constitute people who inject drugs as irresponsible, irrational, and untrustworthy and re-inscribe a familiar stereotype of the drug 'addict'. These constructions of people who inject drugs fundamentally constrain how the provision of injecting equipment may be thought about in policy and practice. We suggest that prohibitions on the distribution of injecting equipment among peers may also have other, material, effects and may be counterproductive to various public health aims and objectives. However, the actions undertaken by some people who inject drugs to distribute equipment to their peers may disrupt and challenge these constructions, through a counter-discourse in which people who inject drugs are constituted as active agents with a vital role to play in blood-borne virus prevention in the community. Such activity continues to bring with it the risk of criminal prosecution, and so it remains a vexed issue. 
These insights have implications of relevance beyond Australia, particularly for other countries around the world that prohibit peer distribution, but also for other legislative practices with material-discursive effects in association with injecting drug use. PMID:26118796
Stevenson, Paul; Sederman, Andrew J; Mantle, Mick D; Li, Xueliang; Gladden, Lynn F
2010-12-01
Pulsed-field gradient nuclear magnetic resonance, previously used for measuring droplet size distributions in emulsions, has been used to measure bubble size distributions in a non-overflowing pneumatic gas-liquid foam that has been created by sparging propane into an aqueous solution of 1.5 g/l (5.20 mM) SDS. The bubble size distributions measured were reproducible and approximated a Weibull distribution. However, the bubble size distributions did not materially change with position at which they were measured within the froth. An analysis of foam coarsening due to Ostwald ripening in a non-overflowing foam indicates that, for the experimental conditions employed, one would not expect this to be a significant effect. It is therefore apparent that the eventual collapse of the foam is due to bubble bursting (or surface coalescence) rather than Ostwald ripening. This surface coalescence occurs because of evaporation from the free surface of the foam. An analytical solution for the liquid fraction profile for a certain class of non-overflowing pneumatic foam is given, and a mean bubble size that is appropriate for drainage calculations is suggested. PMID:20832808
Tojinbara, Kageaki; Sugiura, K; Yamada, A; Kakitani, I; Kwan, N C L; Sugiura, K
2016-01-01
Data of 98 rabies cases in dogs and cats from the 1948-1954 rabies epidemic in Tokyo were used to estimate the probability distribution of the incubation period. Lognormal, gamma and Weibull distributions were used to model the incubation period. The maximum likelihood estimates of the mean incubation period ranged from 27.30 to 28.56 days according to different distributions. The mean incubation period was shortest with the lognormal distribution (27.30 days), and longest with the Weibull distribution (28.56 days). The best distribution in terms of AIC value was the lognormal distribution with mean value of 27.30 (95% CI: 23.46-31.55) days and standard deviation of 20.20 (15.27-26.31) days. There were no significant differences between the incubation periods for dogs and cats, or between those for male and female dogs. PMID:26688561
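For the lognormal model that the abstract selects, the maximum-likelihood fit and the implied mean and standard deviation of the incubation period have closed forms. A minimal sketch (the two-point dataset in the test is synthetic; the study's 98 case records are not reproduced here):

```python
import math
import statistics

def lognormal_mle_summary(data):
    """MLE lognormal fit to positive incubation periods: mu and sigma^2 are
    the mean and (1/n) variance of the log data; the fitted distribution's
    mean is exp(mu + sigma^2/2) and its sd follows from exp(sigma^2) - 1."""
    logs = [math.log(x) for x in data]
    mu = statistics.fmean(logs)
    sigma2 = statistics.pvariance(logs)  # population (1/n) variance = MLE
    mean = math.exp(mu + sigma2 / 2)
    sd = mean * math.sqrt(math.exp(sigma2) - 1)
    return mean, sd
```

Fitting gamma and Weibull alternatives needs numerical optimization, after which the three models can be ranked by AIC as in the abstract.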
NASA Astrophysics Data System (ADS)
Alfeld, Matthias; Wahabzada, Mirwaes; Bauckhage, Christian; Kersting, Kristian; Wellenreuther, Gerd; Falkenberg, Gerald
2014-04-01
Stacks of elemental distribution images acquired by XRF can be difficult to interpret, if they contain high degrees of redundancy and components differing in their quantitative but not qualitative elemental composition. Factor analysis, mainly in the form of Principal Component Analysis (PCA), has been used to reduce the level of redundancy and highlight correlations. PCA, however, does not yield physically meaningful representations as they often contain negative values. This limitation can be overcome by employing factor analysis that is restricted to non-negativity. In this paper we present the first application of the Python Matrix Factorization Module (pymf) on XRF data. This is done in a case study on the painting Saul and David from the studio of Rembrandt van Rijn. We show how the discrimination between two different Co-containing compounds with minimum user intervention and a priori knowledge is supported by Non-Negative Matrix Factorization (NMF).
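The non-negativity-restricted factorization described above can be sketched with the classic Lee-Seung multiplicative updates; this is a generic NMF illustration, not the pymf module's implementation, and the matrix sizes are arbitrary.

```python
import numpy as np

def nmf(X, k, iters=2000, seed=0):
    """Factor a non-negative matrix X (pixels x elements) as X ~ W @ H with
    W, H >= 0, by Lee-Seung multiplicative updates minimizing ||X - W H||_F.
    Unlike PCA loadings, both factors stay non-negative by construction."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, m)) + 0.1
    eps = 1e-12  # guards against division by zero
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

For an XRF stack, rows of X would be pixels and columns elemental channels; each row of H is then interpretable as a non-negative "endmember" composition, which is what makes separating two Co-containing compounds feasible.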
Godin, Antoine G; Rappaz, Benjamin; Potvin-Trottier, Laurent; Kennedy, Timothy E; De Koninck, Yves; Wiseman, Paul W
2015-08-18
Knowledge of membrane receptor organization is essential for understanding the initial steps in cell signaling and trafficking mechanisms, but quantitative analysis of receptor interactions at the single-cell level and in different cellular compartments has remained highly challenging. To achieve this, we apply a quantitative image analysis technique, spatial intensity distribution analysis (SpIDA), that can measure fluorescent particle concentrations and oligomerization states within different subcellular compartments in live cells. An important technical challenge faced by fluorescence microscopy-based measurement of oligomerization is the fidelity of receptor labeling. In practice, imperfect labeling biases the distribution of oligomeric states measured within an aggregated system. We extend SpIDA to enable analysis of high-order oligomers from fluorescence microscopy images, by including a probability weighted correction algorithm for nonemitting labels. We demonstrated that this fraction of nonemitting probes could be estimated in single cells using SpIDA measurements on model systems with known oligomerization state. Previously, this artifact was measured using single-step photobleaching. This approach was validated using computer-simulated data and the imperfect labeling was quantified in cells with ion channels of known oligomer subunit count. It was then applied to quantify the oligomerization states in different cell compartments of the proteolipid protein (PLP) expressed in COS-7 cells. Expression of a mutant PLP linked to impaired trafficking resulted in the detection of PLP tetramers that persist in the endoplasmic reticulum, while no difference was measured at the membrane between the distributions of wild-type and mutated PLPs.
Our results demonstrate that SpIDA allows measurement of protein oligomerization in different compartments of intact cells, even when fractional mislabeling occurs as well as photobleaching during the imaging process, and reveals insights into the mechanism underlying impaired trafficking of PLP. PMID:26287623
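The labeling bias can be illustrated with a simple binomial model: if each subunit of an n-mer carries an emitting probe with probability p, the apparent (emitting) subunit count follows Binomial(n, p). This is a simplified sketch of the idea behind the correction, not the SpIDA algorithm itself:

```python
from math import comb

def apparent_size_distribution(n, p):
    """P(k of n subunits emit), k = 0..n, for per-subunit labeling
    probability p: a binomial model of imperfect labeling."""
    return [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

# A tetramer with 70% effective labeling: only ~24% of complexes show all
# four labels, so uncorrected counts underestimate the true oligomer size.
probs = apparent_size_distribution(4, 0.7)
```

Inverting this weighting, given an estimate of p, is what allows the true oligomer distribution to be recovered from the observed one.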
NASA Technical Reports Server (NTRS)
Schmeckpeper, K. R.
1987-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Electrical Power Distribution and Control (EPD and C) hardware. The EPD and C hardware performs the functions of distributing, sensing, and controlling 28 volt DC power and of inverting, distributing, sensing, and controlling 117 volt 400 Hz AC power to all Orbiter subsystems from the three fuel cells in the Electrical Power Generation (EPG) subsystem. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Of the 1671 failure modes analyzed, 9 single failures were determined to result in loss of crew or vehicle. Three single failures unique to intact abort were determined to result in possible loss of the crew or vehicle. A possible loss of mission could result if any of 136 single failures occurred. Six of the criticality 1/1 failures are in two rotary and two pushbutton switches that control External Tank and Solid Rocket Booster separation. The other 6 criticality 1/1 failures are fuses, one each per Aft Power Control Assembly (APCA) 4, 5, and 6 and one each per Forward Power Control Assembly (FPCA) 1, 2, and 3, that supply power to certain Main Propulsion System (MPS) valves and Forward Reaction Control System (RCS) circuits.
Application of digital image analysis for size distribution measurements of microbubbles
Burns, S.E.; Yiacoumi, S.; Frost, D.; Tsouris, C.
1997-03-01
This work employs digital image analysis to measure the size distribution of microbubbles generated by the process of electroflotation for use in solid/liquid separation processes. Microbubbles are used for separations in the mineral processing industry and also in the treatment of potable water and wastewater. As the bubbles move upward in a solid/liquid column due to buoyancy, particles collide with and attach to the bubbles and are carried to the surface of the column, where they are removed by skimming. The removal efficiency of solids is strongly affected by the size of the bubbles; in general, higher separation efficiency is achieved with smaller bubbles. The primary focus of this study was to characterize the size and size distribution of bubbles generated in electroflotation using image analysis. The study found that bubble diameter increased slightly as the current density applied to the system was increased. Additionally, electroflotation produces a uniform bubble size with a narrow distribution, which optimizes the removal of fine particles from solution.
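Once bubbles are segmented in an image, the size measurement reduces to geometry: each bubble's pixel area is converted to an equivalent circular diameter. A minimal sketch, with invented pixel areas and an assumed calibration:

```python
import numpy as np

# Pixel areas of segmented bubbles and the image calibration
# (both are illustrative values, not data from the study).
areas_px = np.array([120.0, 95.0, 130.0, 110.0, 101.0, 88.0, 140.0])
um_per_px = 3.2  # assumed micrometres per pixel

# Equivalent circular diameter d = 2*sqrt(A/pi), converted to micrometres
diam_um = 2.0 * np.sqrt(areas_px / np.pi) * um_per_px
mean_d, sd_d = diam_um.mean(), diam_um.std()
```

The mean and standard deviation of `diam_um` then characterize the bubble size distribution for a given current density.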
Statistical analysis of secondary particle distributions in relativistic nucleus-nucleus collisions
NASA Technical Reports Server (NTRS)
Mcguire, Stephen C.
1987-01-01
The use is described of several statistical techniques to characterize structure in the angular distributions of secondary particles from nucleus-nucleus collisions in the energy range 24 to 61 GeV/nucleon. The objective of this work was to determine whether there are correlations between emitted particle intensity and angle that may be used to support the existence of the quark gluon plasma. The techniques include chi-square null hypothesis tests, the method of discrete Fourier transform analysis, and fluctuation analysis. We have also used the method of composite unit vectors to test for azimuthal asymmetry in a data set of 63 JACEE-3 events. Each method is presented in a manner that provides the reader with some practical detail regarding its application. Of those events with relatively high statistics, Fe approaches 0 at 55 GeV/nucleon was found to possess an azimuthal distribution with a highly non-random structure. No evidence of non-statistical fluctuations was found in the pseudo-rapidity distributions of the events studied. It is seen that the most effective application of these methods relies upon the availability of many events or single events that possess very high multiplicities.
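The chi-square null-hypothesis test mentioned above can be sketched for azimuthal angles: bin the angles, compare observed counts with the uniform expectation, and refer the statistic to a chi-square with nbins - 1 degrees of freedom (a generic illustration, not the authors' code):

```python
import numpy as np

def chi2_azimuthal(phi, nbins=12):
    """Chi-square statistic against the null hypothesis that the
    azimuthal angles phi (radians in [0, 2*pi)) are uniform."""
    counts, _ = np.histogram(phi, bins=nbins, range=(0.0, 2.0 * np.pi))
    expected = len(phi) / nbins
    return float(((counts - expected) ** 2 / expected).sum())

rng = np.random.default_rng(0)
stat_uniform = chi2_azimuthal(rng.uniform(0, 2 * np.pi, 600))       # ~ chi2(11)
stat_peaked = chi2_azimuthal(rng.normal(np.pi, 0.3, 600) % (2 * np.pi))
```

A strongly non-random azimuthal structure, as reported for the high-statistics event, drives the statistic far above the chi-square expectation for a uniform sample.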
2010-01-01
Background The well-established connection between HIV risk behavior and place of residence points to the importance of geographic clustering in the potential transmission of HIV and other sexually transmitted infections (STI). Methods To investigate the geospatial distribution of prevalent sexually transmitted infections and sexual behaviors in a sample of 18-24 year-old sexually active men in urban and rural areas of Kisumu, Kenya, we mapped the residences of 649 men and conducted spatial cluster analysis. Spatial distribution of the study participants was assessed in terms of the demographic, behavioral, and sexual dysfunction variables, as well as laboratory-diagnosed STIs. To test for the presence and location of clusters we used Kulldorff's spatial scan statistic as implemented in the SaTScan program. Results The results of this study suggest that sexual risk behaviors and STIs are evenly distributed in our sample throughout the Kisumu district. No behavioral or STI clusters were detected, except for condom use. Neither urban nor rural residence significantly impacted risk behavior or STI prevalence. Conclusion We found no association between place of residence and sexual risk behaviors in our sample. While our results cannot be generalized to other populations, the study shows that geospatial analysis can be an important tool for investigating study sample characteristics; for evaluating HIV/STI risk factors; and for development and implementation of targeted HIV and STI control programs in specifically defined populations and in areas where the underlying population dynamic is poorly understood. PMID:20492703
Multimedia content analysis and indexing: evaluation of a distributed and scalable architecture
NASA Astrophysics Data System (ADS)
Mandviwala, Hasnain; Blackwell, Scott; Weikart, Chris; Van Thong, Jean-Manuel
2003-11-01
Multimedia search engines facilitate the retrieval of documents from large media content archives now available via intranets and the Internet. Over the past several years, many research projects have focused on algorithms for analyzing and indexing media content efficiently. However, special system architectures are required to process large amounts of content from real-time feeds or existing archives. Possible solutions include dedicated distributed architectures for analyzing content rapidly and for making it searchable. The system architecture we propose implements such an approach: a highly distributed and reconfigurable batch media content analyzer that can process media streams and static media repositories. Our distributed media analysis application handles media acquisition, content processing, and document indexing. This collection of modules is orchestrated by a task flow management component, exploiting data and pipeline parallelism in the application. A scheduler manages load balancing and prioritizes the different tasks. Workers implement application-specific modules that can be deployed on an arbitrary number of nodes running different operating systems. Each application module is exposed as a web service, implemented with industry-standard interoperable middleware components such as Microsoft ASP.NET and Sun J2EE. Our system architecture is the next generation system for the multimedia indexing application demonstrated by www.speechbot.com. It can process large volumes of audio recordings with minimal support and maintenance, while running on low-cost commodity hardware. The system has been evaluated on a server farm running concurrent content analysis processes.
Cipelletti, Luca; Biron, Jean-Philippe; Martin, Michel; Cottet, Hervé
2015-08-18
Taylor dispersion analysis is an absolute and straightforward characterization method that allows determination of the diffusion coefficient, or equivalently the hydrodynamic radius, over the angstrom to submicron size range. In this work, we investigated the use of the Constrained Regularized Linear Inversion approach as a new data processing method to extract the probability density functions of the diffusion coefficient (or hydrodynamic radius) from experimental taylorgrams. This new approach can be applied to arbitrary polydisperse samples and gives access to the whole distribution of diffusion coefficients, thereby significantly enhancing the potential of Taylor dispersion analysis. The method was successfully applied to both simulated and real experimental data for solutions of moderately polydisperse polymers and their binary and ternary mixtures. Distributions of diffusion coefficients obtained by this method compared favorably with those derived from size exclusion chromatography. The influence of noise in the simulated taylorgrams on the data processing is discussed. Finally, we discuss the ability of the method to correctly resolve bimodal distributions as a function of the relative separation between the two constituent species. PMID:26243023
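The inversion idea can be sketched as a non-negative, regularized least-squares fit of the taylorgram to a dictionary of basis responses, one per candidate diffusion coefficient. The basis shape, grid, and solver below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def crli(signal, B, lam=1e-4, iters=5000):
    """Constrained regularized linear inversion sketch: minimize
    0.5*||B c - s||^2 + 0.5*lam*||c||^2 subject to c >= 0, by projected
    gradient descent with a step set from the spectral norm of B."""
    c = np.zeros(B.shape[1])
    step = 1.0 / (np.linalg.norm(B, 2) ** 2 + lam)
    for _ in range(iters):
        grad = B.T @ (B @ c - signal) + lam * c
        c = np.maximum(c - step * grad, 0.0)   # gradient step, then project
    return c

# Dictionary of Gaussian peak shapes whose widths stand in for a grid of
# candidate diffusion coefficients (an invented forward model).
t = np.linspace(-1.0, 1.0, 200)
widths = np.linspace(0.05, 0.5, 20)
B = np.exp(-t[:, None] ** 2 / (2.0 * widths[None, :] ** 2))
s = B[:, 7]                     # noise-free monodisperse "taylorgram"
c_hat = crli(s, B)              # mass should concentrate near grid index 7
```

The recovered coefficient vector plays the role of the probability density over the diffusion-coefficient grid; the regularization weight trades resolution against noise sensitivity.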
Pore space analysis of NAPL distribution in sand-clay media
Matmon, D.; Hayden, N.J.
2003-01-01
This paper introduces a conceptual model of clays and non-aqueous phase liquids (NAPLs) at the pore scale that has been developed from a mathematical unit cell model, and direct micromodel observation and measurement of clay-containing porous media. The mathematical model uses a unit cell concept with uniform spherical grains for simulating the sand in the sand-clay matrix (≤10% clay). Micromodels made with glass slides and including different clay-containing porous media were used to investigate the two clays (kaolinite and montmorillonite) and NAPL distribution within the pore space. The results were used to understand the distribution of NAPL advancing into initially saturated sand and sand-clay media, and provided a detailed analysis of the pore-scale geometry, pore size distribution, NAPL entry pressures, and the effect of clay on this geometry. Interesting NAPL saturation profiles were observed as a result of the complexity of the pore space geometry with the different packing angles and the presence of clays. The unit cell approach has applications for enhancing the mechanistic understanding and conceptualization, both visually and mathematically, of pore-scale processes such as NAPL and clay distribution. © 2003 Elsevier Science Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Alves, L. G. A.; Ribeiro, H. V.; Lenzi, E. K.; Mendes, R. S.
2014-09-01
We report on the connection between power-law distributions and allometries. As first reported by Gomez-Lievano et al. (2012) for the relationship between homicides and population, when urban indicators present asymptotic power-law distributions, they can also display specific allometries among themselves. Here, we present an extensive characterization of this connection for all possible pairs of relationships among twelve urban indicators of Brazilian cities (such as child labor, illiteracy, income, sanitation and unemployment). Our analysis reveals that all our urban indicators are asymptotically distributed as power laws and that the proposed connection also holds for our data when the allometric relationship displays strong enough correlations. We have also found that not all allometric relationships are independent: they can be understood as a consequence of the allometric relationship between each urban indicator and the population size. We further show that the residual fluctuations around the allometries are characterized by an almost constant variance and log-normal distributions.
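The allometric exponent and the log-normal residual structure can be checked with an ordinary least-squares fit in log-log space; the data and exponent below are synthetic, for illustration only:

```python
import numpy as np

# Synthetic allometry Y = a * X^beta with log-normal noise of constant
# log-variance (the residual structure reported above); the exponent 1.15
# and all other numbers are invented for illustration.
rng = np.random.default_rng(0)
X = rng.lognormal(10.0, 1.0, 500)                    # population-like indicator
Y = 2.5 * X ** 1.15 * rng.lognormal(0.0, 0.1, 500)   # allometrically related indicator
beta, log_a = np.polyfit(np.log(X), np.log(Y), 1)    # OLS fit in log-log space
```

The fitted slope recovers the allometric exponent, and the scatter of the log-log residuals about the fit line is what the abstract describes as log-normal with near-constant variance.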
Analysis of the Effects of Streamwise Lift Distribution on Sonic Boom Signature
NASA Technical Reports Server (NTRS)
Yoo, Paul
2013-01-01
Investigation of sonic boom has been one of the major areas of study in aeronautics due to the benefits a low-boom aircraft has in both civilian and military applications. This work conducts a numerical analysis of the effects of streamwise lift distribution on the shock coalescence characteristics. A simple wing-canard-stabilator body model is used in the numerical simulation. The streamwise lift distribution is varied by fixing the canard at a deflection angle while trimming the aircraft with the wing and the stabilator at the desired lift coefficient. The lift and the pitching moment coefficients are computed using Missile DATCOM v. 707. The flow field around the wing-canard-stabilator body model is resolved using the OVERFLOW-2 flow solver. Overset/chimera grid topology is used to simplify the grid generation of the various configurations representing different streamwise lift distributions. The numerical simulations are performed without viscosity unless it is required for numerical stability. All configurations are simulated at Mach 1.4, an angle of attack of 1.5°, a lift coefficient of 0.05, and a pitching moment coefficient of approximately 0. Four streamwise lift distribution configurations were tested.
NASA Astrophysics Data System (ADS)
Baitimirova, M.; Osite, A.; Katkevics, J.; Viksna, A.
2012-08-01
Burning candles generate fine particulate matter that degrades indoor air quality and may therefore harm human health. In this study, solid aerosol particles from the burning of candles of different composition and from kerosene combustion were collected in a closed laboratory system. The present work describes particulate matter collection for structure analysis and the relationship between the source and the size distribution of the particulate matter. The formation mechanism of the particles and their tendency to agglomerate are also described. Particles obtained from kerosene combustion have a normal size distribution, whereas particles generated from the burning of stearin candles have a distribution shifted towards finer sizes. When stearin is added to a paraffin candle, the particle size distribution likewise shifts towards finer particles. Particles obtained from kerosene combustion show a tendency to form agglomerates within a short time, while particles obtained from the burning of candles of different composition do not. Particles from candles and kerosene combustion are Aitken and accumulation mode particles.
A spatial pattern analysis of the halophytic species distribution in an arid coastal environment.
Badreldin, Nasem; Uria-Diez, J; Mateu, J; Youssef, Ali; Stal, Cornelis; El-Bana, Magdy; Magdy, Ahmed; Goossens, Rudi
2015-05-01
Obtaining information about the spatial distribution of desert plants is considered a serious challenge for ecologists and environmental modeling due to the intensive field work and infrastructure required in harsh and remote arid environments. A new method was applied for assessing the spatial distribution of the halophytic species (HS) in an arid coastal environment. This method was based on object-based image analysis of a high-resolution Google Earth satellite image. The integration of image processing techniques and field work provided accurate information about the spatial distribution of HS. The extracted objects were based on assumptions that explained the plant-pixel relationship. Three different types of digital image processing techniques were implemented and validated to obtain an accurate HS spatial distribution. A total of 2703 individuals of the HS community were found in the case study, and approximately 82% were located above an elevation of 2 m. The micro-topography exhibited a significant negative relationship with pH and EC (r = -0.79 and -0.81, respectively).
Li, Huijuan; Li, Fangxing; Xu, Yan; Rizy, D Tom; Kueck, John D
2010-01-01
Distributed energy resources (DE) or distributed generators (DG) with power electronics interfaces and logic control using local measurements are capable of providing reactive-power-related ancillary system services. In particular, local voltage regulation has drawn much attention with regard to power system reliability and voltage stability, especially in light of past major cascading outages. This paper addresses the challenges of controlling DEs to regulate local voltage in distribution systems. An adaptive voltage control method has been proposed to dynamically modify control parameters in response to system changes. Theoretical analysis shows that there exists a corresponding formulation of the dynamic control parameters; hence the adaptive control method is theoretically sound. Both simulation and field experiment results at the Distributed Energy Communications and Controls (DECC) Laboratory confirm that this method is capable of satisfying the fast response requirement for operational use without causing oscillation, inefficiency, or system equipment interference. Since this method has a high tolerance to real-time data shortage and adapts widely to variable power system operating conditions, it is quite suitable for broad utility application.
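A toy simulation can illustrate the adaptive idea: the controller re-estimates the local voltage/VAr sensitivity from observed responses and scales its correction accordingly. Everything here is an invented sketch of adaptive gain tuning, not the control law from the paper:

```python
def regulate(v0=0.95, v_ref=1.0, s_true=0.05, steps=8):
    """Toy adaptive local voltage regulation: the DE injects reactive
    power q, the feeder responds as v = v0 + s_true*q, and the controller
    re-estimates the sensitivity dV/dQ from observations to scale its
    next correction. Invented linearized dynamics, illustrative only."""
    q = q_prev = 0.0
    v_prev = v0
    s_est = 0.02                     # deliberately wrong initial guess of dV/dQ
    history = []
    for _ in range(steps):
        v = v0 + s_true * q          # plant response (linearized feeder model)
        if abs(q - q_prev) > 1e-9:
            s_est = (v - v_prev) / (q - q_prev)   # adapt from observed response
        v_prev, q_prev = v, q
        q += (v_ref - v) / s_est     # correction scaled by the adapted gain
        history.append(v)
    return history

hist = regulate()                    # settles at v_ref after a few steps
```

Because the gain is re-derived from the observed response rather than fixed, the loop converges without oscillation even when the initial sensitivity guess is wrong, which is the qualitative behavior the paper's adaptive parameters aim for.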
Rural tourism spatial distribution based on multi-criteria decision analysis and GIS
NASA Astrophysics Data System (ADS)
Zhang, Hongxian; Yang, Qingsheng
2008-10-01
Studying the spatial distribution of rural tourism can provide a scientific basis for decisions on developing rural economies. Traditional approaches to tourism spatial distribution have limitations in quantifying priority locations for tourism development on small units: they only produce overall distribution locations and indicate simply whether a location is suitable for tourism development, whereas a development ranking under different decision objectives should be considered. This paper presents a way to rank locations for rural tourism development spatially by integrating multi-criteria decision analysis (MCDA) and geographic information systems (GIS). Locations with inconvenient transportation, an undeveloped economy, and good tourism resources should be the first to develop rural tourism. Based on this objective, the tourism development priority utility of each town is calculated with MCDA and GIS, and towns with higher priority utility can be selected to develop rural tourism first. The method was successfully applied to rank rural tourism locations in Ningbo City. The results show that MCDA is an effective way to distribute rural tourism spatially according to specific decision objectives, and that rural tourism can promote economic development.
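The ranking step can be sketched as a weighted-sum utility over normalized criteria; the town names, criteria, and weights below are invented for illustration:

```python
# Hypothetical weighted-sum MCDA: each town scored 0..1 on each criterion,
# weights reflect the stated objective (all values invented).
towns = {
    "A": {"transport_inconvenience": 0.8, "economic_need": 0.9, "tourism_resource": 0.7},
    "B": {"transport_inconvenience": 0.3, "economic_need": 0.4, "tourism_resource": 0.9},
    "C": {"transport_inconvenience": 0.6, "economic_need": 0.7, "tourism_resource": 0.5},
}
weights = {"transport_inconvenience": 0.3, "economic_need": 0.4, "tourism_resource": 0.3}

def utility(scores):
    """Tourism development priority utility: weighted sum of criteria."""
    return sum(weights[c] * v for c, v in scores.items())

# Towns with the highest priority utility develop rural tourism first
ranking = sorted(towns, key=lambda t: utility(towns[t]), reverse=True)
```

In the GIS setting, each criterion layer would come from spatial data per town; the weighted sum then yields a ranked priority map rather than a binary suitable/unsuitable classification.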
Li, Lan; Xi, Yuliang; Ren, Fu
2016-01-01
Tuberculosis (TB) is an infectious disease with one of the highest reported incidences in China. The detection of the spatio-temporal distribution characteristics of TB is indicative of its prevention and control conditions. Trajectory similarity analysis detects variations and loopholes in prevention and provides urban public health officials and related decision makers more information for the allocation of public health resources and the formulation of prioritized health-related policies. This study analysed the spatio-temporal distribution characteristics of TB from 2009 to 2014 by utilizing spatial statistics, spatial autocorrelation analysis, and space-time scan statistics. Spatial statistics measured the TB incidence rate (TB patients per 100,000 residents) at the district level to determine its spatio-temporal distribution and to identify characteristics of change. Spatial autocorrelation analysis was used to detect global and local spatial autocorrelations across the study area. Purely spatial, purely temporal and space-time scan statistics were used to identify purely spatial, purely temporal and spatio-temporal clusters of TB at the district level. The other objective of this study was to compare the trajectory similarities between the incidence rates of TB and new smear-positive (NSP) TB patients in the resident population (NSPRP)/new smear-positive TB patients in the TB patient population (NSPTBP)/retreated smear-positive (RSP) TB patients in the resident population (RSPRP)/retreated smear-positive TB patients in the TB patient population (RSPTBP) to detect variations and loopholes in TB prevention and control among the districts in Beijing. The incidence rates in Beijing exhibited a gradual decrease from 2009 to 2014. 
Although global spatial autocorrelation was not detected overall across all of the districts of Beijing, individual districts did show evidence of local spatial autocorrelation: Chaoyang and Daxing were Low-Low districts over the six-year period. The purely spatial scan statistics analysis showed significant spatial clusters of high and low incidence rates; the purely temporal scan statistics showed the temporal cluster with a three-year period from 2009 to 2011 characterized by a high incidence rate; and the space-time scan statistics analysis showed significant spatio-temporal clusters. The distribution of the mean centres (MCs) showed that the general distributions of the NSPRP MCs and NSPTBP MCs were to the east of the incidence rate MCs. Conversely, the general distributions of the RSPRP MCs and the RSPTBP MCs were to the south of the incidence rate MCs. Based on the combined analysis of MC distribution characteristics and trajectory similarities, the NSP trajectory was most similar to the incidence rate trajectory. Thus, more attention should be focused on the discovery of NSP patients in the western part of Beijing, whereas the northern part of Beijing needs intensive treatment for RSP patients. PMID:26959048
Methods and apparatuses for information analysis on shared and distributed computing systems
Bohn, Shawn J [Richland, WA; Krishnan, Manoj Kumar [Richland, WA; Cowley, Wendy E [Richland, WA; Nieplocha, Jarek [Richland, WA
2011-02-22
Apparatuses and computer-implemented methods for analyzing, on shared and distributed computing systems, information comprising one or more documents are disclosed according to some aspects. In one embodiment, information analysis can comprise distributing one or more distinct sets of documents among each of a plurality of processes, wherein each process performs operations on a distinct set of documents substantially in parallel with other processes. Operations by each process can further comprise computing term statistics for terms contained in each distinct set of documents, thereby generating a local set of term statistics for each distinct set of documents. Still further, operations by each process can comprise contributing the local sets of term statistics to a global set of term statistics, and participating in generating a major term set from an assigned portion of a global vocabulary.
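The local-then-global term statistics scheme can be sketched in a few lines; in the claimed system each partition would run in its own process, with the merge acting as the contribution to the global set:

```python
from collections import Counter

def local_term_stats(docs):
    """Per-partition term counts: in the distributed setting, each
    worker process runs this on its own distinct set of documents."""
    stats = Counter()
    for doc in docs:
        stats.update(doc.lower().split())
    return stats

def merge_global(local_sets):
    """Contribute local sets of term statistics to a global set."""
    total = Counter()
    for s in local_sets:
        total.update(s)
    return total

partitions = [["big data analysis", "data grid"],
              ["term statistics", "data analysis"]]
global_stats = merge_global(local_term_stats(p) for p in partitions)
```

A major term set could then be derived by each process filtering its assigned slice of the global vocabulary against `global_stats`, mirroring the claim's final step.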
Lee, Eun Gyung; Kim, Seung Won; Feigley, Charles E.; Harper, Martin
2015-01-01
This study introduces two semi-quantitative methods, Structured Subjective Assessment (SSA) and Control of Substances Hazardous to Health (COSHH) Essentials, in conjunction with two-dimensional Monte Carlo simulations for determining prior probabilities. A prior distribution based on expert judgment was included for comparison. Practical applications of the proposed methods were demonstrated using personal exposure measurements of isoamyl acetate in an electronics manufacturing facility and of isopropanol in a printing shop. The applicability of these methods in real workplaces is discussed in light of the advantages and disadvantages of each method. Although these methods cannot be completely independent of expert judgment, this study demonstrates a methodological improvement in the estimation of the prior distribution for the Bayesian decision analysis tool. The proposed methods provide a logical basis for the decision process by considering the determinants of worker exposure. PMID:23252451
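The two-dimensional Monte Carlo structure can be sketched as an outer loop over uncertain distribution parameters and an inner loop over exposure variability given those parameters; all numbers below are invented, not values from the study:

```python
import numpy as np

def two_d_monte_carlo(n_outer=200, n_inner=500, seed=0):
    """Two-dimensional Monte Carlo sketch: the outer loop samples the
    uncertain parameters of a lognormal exposure distribution, the inner
    loop samples exposures given those parameters. All numbers are
    illustrative, not values from the study."""
    rng = np.random.default_rng(seed)
    p95s = []
    for _ in range(n_outer):
        gm = rng.normal(10.0, 1.0)     # uncertainty about the geometric mean
        gsd = rng.uniform(1.5, 2.5)    # uncertainty about the geometric SD
        exposures = np.exp(rng.normal(np.log(gm), np.log(gsd), n_inner))
        p95s.append(np.percentile(exposures, 95))
    return np.percentile(p95s, [5.0, 50.0, 95.0])  # band on the 95th percentile

band = two_d_monte_carlo()
```

Separating the two loops keeps parameter uncertainty (the quantity a prior distribution should express) distinct from day-to-day exposure variability.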
Preliminary analysis of the span-distributed-load concept for cargo aircraft design
NASA Technical Reports Server (NTRS)
Whitehead, A. H., Jr.
1975-01-01
A simplified computer analysis of the span-distributed-load airplane (in which payload is placed within the wing structure) has shown that the span-distributed-load concept has high potential for application to future air cargo transport design. Significant increases in payload fraction over current wide-bodied freighters are shown for gross weights in excess of 0.5 Gg (1,000,000 lb). A cruise-matching calculation shows that the trend toward higher aspect ratio improves overall efficiency; that is, less thrust and fuel are required. The optimal aspect ratio probably is not determined by structural limitations. Terminal-area constraints and increasing design-payload density, however, tend to limit aspect ratio.
Flitsiyan, E.S.; Romanovskii, A.V.; Gurvich, L.G.; Kist, A.A.
1987-02-01
The local concentration and spatial distribution of some elements in minerals, rocks, and ores can be determined by means of neutron-activation autoradiography. The local element concentration is measured in this method by placing an activated section of the rock to be analyzed, together with an irradiated standard, against a photographic emulsion which acts as a radiation detector. The photographic density of the exposed emulsion varies as a function of the tested element content in the part of the sample next to the detector. In order to assess the value of neutron-activation autoradiography in the analysis of element distribution, we considered the main factors affecting the production of selective autoradiographs, viz., resolution, detection limit, and optimal irradiation conditions, holding time, and exposure.
A distributed analysis and visualization system for model and observational data
NASA Technical Reports Server (NTRS)
Wilhelmson, Robert; Koch, Steven
1992-01-01
The objective of this proposal is to develop an integrated and distributed analysis and display software system which can be applied to all areas of the Earth System Science to study numerical model and earth observational data from storm to global scale. This system will be designed to be easy to use, portable, flexible and easily extensible and to adhere to current and emerging standards whenever possible. It will provide an environment for visualization of the massive amounts of data generated from satellites and other observational field measurements and from model simulations during or after their execution. Two- and three-dimensional animation will also be provided. This system will be based on a widely used software package from NASA called GEMPAK and prototype software for three dimensional interactive displays built at NCSA. The underlying foundation of the system will be a set of software libraries which can be distributed across a UNIX based supercomputer and workstations.
A distributed analysis and visualization system for model and observational data
NASA Technical Reports Server (NTRS)
Wilhelmson, Robert; Koch, Steven
1993-01-01
The objective of this proposal is to develop an integrated and distributed analysis and display software system which can be applied to all areas of the Earth System Science to study numerical model and earth observational data from storm to global scale. This system will be designed to be easy to use, portable, flexible and easily extensible and designed to adhere to current and emerging standards whenever possible. It will provide an environment for visualization of the massive amounts of data generated from satellites and other observational field measurements and from model simulations during or after their execution. Two- and three-dimensional animation will also be provided. This system will be based on a widely used software package from NASA called GEMPAK and prototype software for three-dimensional interactive displays built at NCSA. The underlying foundation of the system will be a set of software libraries which can be distributed across a UNIX based supercomputer and workstations.
Analysis and modeling of information flow and distributed expertise in space-related operations.
Caldwell, Barrett S
2005-01-01
Evolving space operations requirements and mission planning for long-duration expeditions require detailed examinations and evaluations of information flow dynamics, knowledge-sharing processes, and information technology use in distributed expert networks. This paper describes the work conducted with flight controllers in the Mission Control Center (MCC) of NASA's Johnson Space Center. This MCC work describes the behavior of experts in a distributed supervisory coordination framework, which extends supervisory control/command and control models of human task performance. Findings from this work are helping to develop analysis techniques, information architectures, and system simulation capabilities for knowledge sharing in an expert community. These findings are being applied to improve knowledge-sharing processes applied to a research program in advanced life support for long-duration space flight. Additional simulation work is being developed to create interoperating modules of information flow and novice/expert behavior patterns. PMID:15835058
EXERGY ANALYSIS OF THE CRYOGENIC HELIUM DISTRIBUTION SYSTEM FOR THE LARGE HADRON COLLIDER (LHC)
Claudet, S.; Lebrun, Ph.; Tavian, L.; Wagner, U.
2010-04-09
The Large Hadron Collider (LHC) at CERN features the world's largest helium cryogenic system, spreading over the 26.7 km circumference of the superconducting accelerator. With a total equivalent capacity of 145 kW at 4.5 K, including 18 kW at 1.8 K, the LHC refrigerators produce an unprecedented exergetic load, which must be distributed efficiently to the magnets in the tunnel over the 3.3 km length of each of the eight independent sectors of the machine. We recall the main features of the LHC cryogenic helium distribution system at its different temperature levels and present its exergy analysis, thus enabling us to quantify second-law efficiency and to identify the main remaining sources of irreversibility.
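The exergetic weight of low-temperature refrigeration follows from the Carnot factor: heat Q extracted at temperature T below ambient T0 carries exergy Q(T0/T - 1). A quick numeric illustration (the ambient temperature is an assumed value):

```python
def exergy_of_cooling(Q, T, T0=293.0):
    """Minimum ideal power (W) to extract heat Q (W) at temperature T (K)
    against an assumed ambient T0 (K): the Carnot/exergy factor T0/T - 1."""
    return Q * (T0 / T - 1.0)

# One watt removed at 1.8 K "costs" roughly 2.5x more ideal work
# than one watt removed at 4.5 K.
per_watt_45 = exergy_of_cooling(1.0, 4.5)
per_watt_18 = exergy_of_cooling(1.0, 1.8)
```

This steep scaling with 1/T is why the 1.8 K load, though a modest fraction of the total capacity, dominates the exergy budget and why distribution losses matter so much.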
Distributed and/or grid-oriented approach to BTeV data analysis
Joel N. Butler
2002-12-23
The BTeV collaboration will record approximately 2 petabytes of raw data per year. It plans to analyze this data using the distributed resources of the collaboration as well as dedicated resources, primarily residing in the very large BTeV trigger farm, and resources accessible through the developing world-wide data grid. The data analysis system is being designed from the very start with this approach in mind. In particular, we plan a fully disk-based data storage system with multiple copies of the data distributed across the collaboration to provide redundancy and to optimize access. We will also position ourselves to take maximum advantage of shared systems, as well as dedicated systems, at our collaborating institutions.
NASA Technical Reports Server (NTRS)
Costello, Thomas A.; Brandt, C. Maite
1989-01-01
Simulation and analysis results are described for a wideband fiber optic intermediate frequency distribution channel for a frequency division multiple access (FDMA) system in which antenna equipment is remotely located from the signal processing equipment. The fiber optic distribution channel accommodates multiple signals received from a single antenna with differing power levels. The performance parameters addressed are intermodulation degradation, laser noise, and adjacent channel interference, as they impact the overall system design. Simulation results showed that the laser diode modulation level can be allowed to reach 100 percent without considerable degradation. The laser noise must be controlled so as to provide a noise floor of less than -90 dBW/Hz. The fiber optic link increases the degradation due to power imbalance yet diminishes the effects of the transmit amplifier nonlinearity. Overall, optimal operating conditions can be found that yield a degradation level of about 0.1 dB caused by the fiber optic link.
Hou, Dibo; Zhang, Jian; Yang, Zheling; Liu, Shu; Huang, Pingjie; Zhang, Guangxin
2015-06-29
Ensuring the security of distribution water quality has recently attracted global attention due to the potential threat from harmful contaminants. Real-time monitoring based on ultraviolet optical sensors is a promising technique: it is reagent-free, has low maintenance cost, offers rapid analysis, and covers a wide range of contaminants. However, the ultraviolet absorption spectra are high-dimensional and easily subject to interference. In on-site applications there is almost no prior knowledge, such as the spectral characteristics of potential contaminants, before they are determined. Meanwhile, the notion of normal water quality also varies with operating conditions. In this paper, a procedure based on multivariate statistical analysis is proposed to detect distribution water quality anomalies using ultraviolet optical sensors. First, principal component analysis is employed to capture the main variation features of the spectral matrix and reduce its dimensionality. A new statistical variable is then constructed and used to evaluate the local outlying degree according to the chi-square distribution in the principal component subspace. The probability that the latest observation is anomalous is calculated by accumulating the outlying degrees of the adjacent previous observations. To develop a more reliable anomaly detection procedure, several key parameters are discussed. With the proposed methods, distribution water quality anomalies and abnormal optical changes can be detected. A contaminant intrusion experiment was conducted in a pilot-scale distribution system by injecting a phenol solution. The effectiveness of the proposed procedure is finally verified using the experimental spectral data. PMID:26191757
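The PCA-plus-chi-square outlying-degree scheme described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function names, the use of Hotelling's T-squared as the subspace statistic, and the 0.999 chi-square cut-off are assumptions.

```python
import numpy as np
from scipy.stats import chi2

def pca_t2_scores(X, n_components=2):
    """Hotelling's T^2 statistic of each observation (row of X) in the
    leading principal-component subspace; large values flag anomalies."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    proj = Xc @ Vt[:n_components].T               # PC projections
    var = s[:n_components] ** 2 / (len(X) - 1)    # PC variances
    return np.sum(proj ** 2 / var, axis=1)

def is_anomalous(t2, n_components=2, alpha=1e-3):
    """Compare T^2 to a chi-square quantile, as the paper's statistic is
    referred to the chi-square distribution in the PC subspace."""
    return t2 > chi2.ppf(1.0 - alpha, df=n_components)
```

In the paper's setting each row of X would be one absorption spectrum; the accumulation of outlying degrees over adjacent observations would be layered on top of these per-observation scores.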
NASA Astrophysics Data System (ADS)
Yang, Jing; Reichert, Peter; Abbaspour, Karim C.
2007-10-01
Calibration and uncertainty analysis in hydrologic modeling are affected by measurement errors in input and response and errors in model structure. Recently, extending similar approaches in discrete time, a continuous time autoregressive error model was proposed for statistical inference and uncertainty analysis in hydrologic modeling. The major advantages over discrete time formulation are the use of a continuous time error model for describing continuous processes, the possibility of accounting for seasonal variations of parameters in the error model, the easier treatment of missing data or omitted outliers, and the opportunity for continuous time predictions. The model was developed for the Chaohe Basin in China and had some features specific for this semiarid climatic region (in particular, the seasonal variation of parameters in the error model in response to seasonal variation in precipitation). This paper tests and extends this approach with an application to the Thur River basin in Switzerland, which is subject to completely different climatic conditions. This application corroborates the general applicability of the approach but also demonstrates the necessity of accounting for the heavy tails in the distributions of residuals and innovations. This is done by replacing the normal distribution of the innovations by a Student t distribution, the degrees of freedom of which are adapted to best represent the shape of the empirical distribution of the innovations. We conclude that with this extension, the continuous time autoregressive error model is applicable and flexible for hydrologic modeling under different climatic conditions. The major remaining conceptual disadvantage is that this class of approaches does not lead to a separate identification of model input and model structural errors. The major practical disadvantage is the high computational demand characteristic for all Markov chain Monte Carlo techniques.
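The heavy-tail extension above replaces normal innovations with a Student t distribution whose degrees of freedom are adapted to the empirical innovation distribution. A minimal sketch of that adaptation step, assuming scipy's maximum-likelihood t fit stands in for whatever estimator the authors used:

```python
import numpy as np
from scipy import stats

def innovation_tail_fit(innovations):
    """Fit a Student t distribution to model innovations; a small
    degrees-of-freedom estimate signals heavy tails that a normal
    error model would understate."""
    df, loc, scale = stats.t.fit(np.asarray(innovations, dtype=float))
    return df, loc, scale
```

A heavy-tailed residual series yields a small fitted df (e.g. near 3), whereas near-normal residuals drive df large, which is one practical way to decide whether the t extension is warranted.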
Benazzi, Stefano; Kullmer, Ottmar; Grosse, Ian R; Weber, Gerhard W
2011-09-01
Simulations based on finite element analysis (FEA) have attracted increasing interest in dentistry and dental anthropology for evaluating the stress and strain distribution in teeth under occlusal loading conditions. Nonetheless, FEA is usually applied without considering changes in contacts between antagonistic teeth during the occlusal power stroke. In this contribution we show how occlusal information can be used to investigate the stress distribution with 3D FEA in lower first molars (M(1)). The antagonistic crowns M(1) and P(2)-M(1) of two dried modern human skulls were scanned by µCT in maximum intercuspation (centric occlusion) contact. A virtual analysis of the occlusal power stroke between M(1) and P(2)-M(1) was carried out in the Occlusal Fingerprint Analyser (OFA) software, and the occlusal trajectory path was recorded, while contact areas per time-step were visualized and quantified. Stress distributions of the M(1) in selected occlusal stages were analyzed in Strand7, considering occlusal information taken from OFA results for individual loading direction and loading area. Our FEA results show that the stress pattern changes considerably during the power stroke, suggesting that wear facets have a crucial influence on the distribution of stress on the whole tooth. Grooves and fissures on the occlusal surface are seen as critical locations, as tensile stresses are concentrated at these features. Properly accounting for the power stroke kinematics of occluding teeth yields quite different results (less tensile stresses in the crown) than usual loading scenarios based on forces parallel to the long axis of the tooth. This leads to the conclusion that functional studies considering kinematics of teeth are important to understand biomechanics and interpret morphological adaptation of teeth. PMID:21615398
Statistical Distribution of Inflation on Lava Flows: Analysis of Flow Surfaces on Earth and Mars
NASA Technical Reports Server (NTRS)
Glazel, L. S.; Anderson, S. W.; Stofan, E. R.; Baloga, S.
2003-01-01
The surface morphology of a lava flow results from processes that take place during the emplacement of the flow. Certain types of features, such as tumuli, lava rises and lava rise pits, are indicators of flow inflation or endogenous growth of a lava flow. Tumuli in particular have been identified as possible indicators of tube location, indicating that their distribution on the surface of a lava flow is a function of the internal pathways of lava present during flow emplacement. However, the distribution of tumuli on lava flows has not been examined in a statistically thorough manner. In order to more rigorously examine the distribution of tumuli on a lava flow, we examined a discrete flow lobe with numerous lava rises and tumuli on the 1969-1974 Mauna Ulu flow at Kilauea, Hawaii. The lobe is located in the distal portion of the flow below Holei Pali, which is characterized by hummocky pahoehoe flows emplaced from tubes. We chose this flow due to its discrete nature allowing complete mapping of surface morphologies, well-defined boundaries, well-constrained emplacement parameters, and known flow thicknesses. In addition, tube locations for this Mauna Ulu flow were mapped by Holcomb (1976) during flow emplacement. We also examine the distribution of tumuli on the distal portion of the hummocky Thrainsskjoldur flow field provided by Rossi and Gudmundsson (1996). Analysis of the Mauna Ulu and Thrainsskjoldur flow lobes and the availability of high-resolution MOC images motivated us to look for possible tumuli-dominated flow lobes on the surface of Mars. We identified a MOC image of a lava flow south of Elysium Mons with features morphologically similar to tumuli. The flow is characterized by raised elliptical to circular mounds, some with axial cracks, that are similar in size to the tumuli measured on Earth.
One potential avenue of determining whether they are tumuli is to look at the spatial distribution to see if any patterns similar to those of tumuli-dominated terrestrial flows can be identified. Since tumuli form by the injection of lava beneath a crust, the distribution of tumuli on a flow should represent the distribution of thermally preferred pathways beneath the surface of the crust. That distribution of thermally preferred pathways may be a function of the evolution of a basaltic lava flow. As a longer-lived flow evolves, initially broad thermally preferred pathways would evolve to narrower, more well-defined tube-like pathways. The final flow morphology clearly preserves the growth of the flow over time, with inflation features indicating pathways that were not necessarily contemporaneously active. Here, we use statistical analysis to test whether this final flow morphology produces distinct distributions that can be used to readily determine the distribution of thermally preferred pathways beneath the surface of the crust.
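One standard way to quantify such spatial patterns is a nearest-neighbour statistic. The sketch below uses the Clark-Evans index; the abstract does not name the specific statistic the authors applied, so treat this as an illustrative choice.

```python
import numpy as np

def clark_evans_index(points, area):
    """Clark-Evans nearest-neighbour index R for mapped feature centres
    (e.g. tumuli) in a region of the given area: R ~ 1 for complete
    spatial randomness, R < 1 for clustering (features strung along a
    tube-like pathway), R > 1 for regular spacing."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)           # ignore self-distances
    mean_nn = d.min(axis=1).mean()        # observed mean NN distance
    expected = 0.5 / np.sqrt(len(pts) / area)  # CSR expectation
    return mean_nn / expected
```

Applied to mapped tumuli centres, R values well below 1 would be consistent with tumuli concentrating over discrete internal pathways rather than scattering randomly across the lobe.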
Clark, Haley; Wu, Jonn; Moiseenko, Vitali; Thomas, Steven
2014-08-15
Many have speculated about the future of computational technology in clinical radiation oncology. It has been advocated that the next generation of computational infrastructure will improve on the current generation by incorporating richer aspects of automation, more heavily and seamlessly featuring distributed and parallel computation, and providing more flexibility toward aggregate data analysis. In this report we describe how a recently created — but currently existing — analysis framework (DICOMautomaton) incorporates these aspects. DICOMautomaton supports a variety of use cases but is especially suited for dosimetric outcomes correlation analysis, investigation and comparison of radiotherapy treatment efficacy, and dose-volume computation. We describe: how it overcomes computational bottlenecks by distributing workload across a network of machines; how modern, asynchronous computational techniques are used to reduce blocking and avoid unnecessary computation; and how issues of out-of-date data are addressed using reactive programming techniques and data dependency chains. We describe internal architecture of the software and give a detailed demonstration of how DICOMautomaton could be used to search for correlations between dosimetric and outcomes data.
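The workload-distribution idea described for DICOMautomaton (independent dosimetric computations farmed out to workers) can be sketched with a thread pool standing in for the networked machines. This is not DICOMautomaton's actual API; the function names and the single-threshold dose-volume-histogram point are illustrative assumptions.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def dose_volume_fraction(dose_grid, threshold):
    """Fraction of voxels receiving at least `threshold` dose --
    one point of a dose-volume histogram (DVH)."""
    d = np.asarray(dose_grid, dtype=float)
    return float((d >= threshold).mean())

def distributed_dvh(dose_grid, thresholds, max_workers=4):
    """Farm independent DVH points out to a pool of workers; in a real
    deployment the pool would span a network of machines."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(dose_volume_fraction, dose_grid, t)
                   for t in thresholds]
        return [f.result() for f in futures]
```

Because each DVH point is independent, this decomposition parallelizes trivially, which is the property that makes distributing the workload across machines attractive.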
Constraints on color-octet fermions from a global parton distribution analysis.
Berger, E. L.; Guzzi, M.; Lai, H.-L.; Nadolsky, P. M.; Olness, F. I.; High Energy Physics; Southern Methodist Univ.; Taipei Municipal Univ. of Education
2010-01-01
We report a parton distribution function analysis of a complete set of hadron scattering data, in which a color-octet fermion (such as a gluino of supersymmetry) is incorporated as an extra parton constituent along with the usual standard model constituents. The data set includes the most up-to-date results from deep inelastic scattering and from jet production in hadron collisions. Another feature is the inclusion in the fit of data from determinations of the strong coupling α_s(Q) at large and small values of the hard scale Q. Our motivation is to determine the extent to which the global parton distribution function analysis may provide constraints on the new fermion, as a function of its mass and α_s(M_Z), independent of assumptions such as the mechanism of gluino decays. Based on this analysis, we find that gluino masses as low as 30 to 50 GeV may be compatible with the current hadronic data. Gluino masses below 15 GeV (25 GeV) are excluded if α_s(M_Z) varies freely (is equal to 0.118). At the outset, stronger constraints had been anticipated from jet production cross sections, but experimental systematic uncertainties, particularly in normalization, reduce the discriminating power of these data.
Local storage federation through XRootD architecture for interactive distributed analysis
NASA Astrophysics Data System (ADS)
Colamaria, F.; Colella, D.; Donvito, G.; Elia, D.; Franco, A.; Luparello, G.; Maggi, G.; Miniello, G.; Vallero, S.; Vino, G.
2015-12-01
A cloud-based Virtual Analysis Facility (VAF) for the ALICE experiment at the LHC has been deployed in Bari. Similar facilities are currently running in other Italian sites with the aim to create a federation of interoperating farms able to provide their computing resources for interactive distributed analysis. The use of cloud technology, along with elastic provisioning of computing resources as an alternative to the grid for running data intensive analyses, is the main challenge of these facilities. One of the crucial aspects of the user-driven analysis execution is the data access. A local storage facility has the disadvantage that the stored data can be accessed only locally, i.e. from within the single VAF. To overcome such a limitation a federated infrastructure, which provides full access to all the data belonging to the federation independently from the site where they are stored, has been set up. The federation architecture exploits both cloud computing and XRootD technologies, in order to provide a dynamic, easy-to-use and well performing solution for data handling. It should allow the users to store the files and efficiently retrieve the data, since it implements a dynamic distributed cache among many datacenters in Italy connected to one another through the high-bandwidth national network. Details on the preliminary architecture implementation and performance studies are discussed.
Gerstl, S.A.W.; LaBauve, R.J.; Young, P.G.
1980-05-01
On the example of General Atomic's well-documented Power Generating Fusion Reactor (PGFR) design, this report exercises a comprehensive neutron cross-section and secondary energy distribution (SED) uncertainty analysis. The LASL sensitivity and uncertainty analysis code SENSIT is used to calculate reaction cross-section sensitivity profiles and integral SED sensitivity coefficients. These are then folded with covariance matrices and integral SED uncertainties to obtain the resulting uncertainties of three calculated neutronics design parameters: two critical radiation damage rates and a nuclear heating rate. The report documents the first sensitivity-based data uncertainty analysis, which incorporates a quantitative treatment of the effects of SED uncertainties. The results demonstrate quantitatively that the ENDF/B-V cross-section data files for C, H, and O, including their SED data, are fully adequate for this design application, while the data for Fe and Ni are at best marginally adequate because they give rise to response uncertainties up to 25%. Much higher response uncertainties are caused by cross-section and SED data uncertainties in Cu (26 to 45%), tungsten (24 to 54%), and Cr (up to 98%). Specific recommendations are given for re-evaluations of certain reaction cross-sections, secondary energy distributions, and uncertainty estimates.
NASA Astrophysics Data System (ADS)
Sanchidrián, J. A.; Segarra, P.; Ouchterlony, F.; López, L. M.
2009-02-01
Size distributions of fragments of crushed rock in conveyor belts and of blasted rock in a muckpile obtained by sieving are compared with the size distributions obtained by digital image analysis of photographs of the same materials taken on-site. Several calculation methods are tested, based on the raw distribution of fragment areas and on the volume-transformed ones. The influence of the calibration of the system on the results and the performance of the system in a non-calibrated mode are evaluated. The capacity of some distributions (Rosin-Rammler, Swebrec and lognormal) to fit the data in the coarse region (where particles can be delineated, i.e. discriminated individually) and to extrapolate to the non-delineated fines (where particles cannot be outlined and their contour delineated) is assessed. The error between the sizes measured and the sizes of the reference distributions (determined by sieving) increases from the coarse to the fines region. The maximum error at a given size depends primarily on its value relative to the fines cut-off (FCO) of the image analysis. In general, at sizes greater than the FCO, where the system is able to delineate fragments reliably, both volume and surface-based, calibrated, calculations can determine the sizes with maximum error expectancy of about 30%. Below the FCO, only the calibrated, volume calculation maintains a maximum error of 30%, down to sizes of about one fourth the FCO, rapidly increasing for smaller sizes. Where the calibration is done based on data above the FCO, errors can be large below this point, in excess of 80% at sizes half the FCO. In the fines range (sizes smaller than 0.2 times the FCO) the maximum errors can be close to or greater than 100% for most of the calculations and function fittings. Of the distributions tested, all of them are acceptable at sizes above the FCO; below that, the Swebrec function seems to adapt better towards the fines than the Rosin-Rammler and lognormal.
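The Rosin-Rammler form tested above is a two-parameter Weibull cumulative curve; fitting it to sieving data can be sketched as follows. The parameterization (characteristic size xc, uniformity index n) and the least-squares fit are a minimal illustration, not the authors' fitting procedure (which the abstract notes also covered the Swebrec and lognormal functions).

```python
import numpy as np
from scipy.optimize import curve_fit

def rosin_rammler_cdf(x, xc, n):
    """Cumulative fraction passing size x: P(x) = 1 - exp(-(x/xc)^n)."""
    return 1.0 - np.exp(-(np.asarray(x, dtype=float) / xc) ** n)

def fit_rosin_rammler(sizes, passing):
    """Least-squares fit of (xc, n) to cumulative sieving data."""
    p0 = (float(np.median(sizes)), 1.0)   # rough starting guess
    (xc, n), _ = curve_fit(rosin_rammler_cdf, sizes, passing, p0=p0)
    return xc, n
```

In the paper's comparison, the interest is how well such a curve, fitted in the delineated coarse region, extrapolates below the fines cut-off; the abstract reports that the Swebrec function extrapolates toward the fines better than Rosin-Rammler or lognormal.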
Equity in the distribution of CT and MRI in China: a panel analysis
2013-01-01
Introduction: China is facing a daunting challenge to health equity in the context of rapid economic development. This study adds to the literature by examining equity in the distribution of high-technology medical equipment, such as CT and MRI, in China.
Methods: A panel analysis was conducted with information about four study sites in 2006 and 2009. The four provincial-level study sites included Shanghai, Zhejiang, Shaanxi, and Hunan, representing different geographical, economic, and medical technology levels in China. A random sample of 71 hospitals was selected from the four sites. Data were collected through questionnaire surveys. Equity status was assessed in terms of CT and MRI numbers, characteristics of machines, and financing sources. The assessment was conducted at multiple levels, including international, provincial, city, and hospital level. In addition to comparison among the study sites, the sample was compared with OECD countries in CT and MRI distributions.
Results: China had lower numbers of CTs and MRIs per million population in 2009 than most of the selected OECD countries, while the increases in its CT and MRI numbers from 2006 to 2009 were higher than in most of the OECD countries. The equity status of CT distribution remained at a low inequality level in both 2006 and 2009, while the equity status of MRI distribution improved from high inequality in 2006 to moderate inequality in 2009. Despite the equity improvement, the distributions of CTs and MRIs were significantly positively correlated with economic development level across all cities in the four study sites in both 2006 and 2009. Our analysis also revealed that Shanghai, the study site with the highest level of economic development, had more advanced CT and MRI machines, more imported CTs and MRIs, and higher government subsidies for these two types of equipment.
Conclusions: The number of CTs and MRIs increased considerably in China from 2006 to 2009. The equity status of CTs was better than that of MRIs, although the equity status of MRI distribution improved from 2006 to 2009. Considerable inequality still exists in the characteristics and financing of CTs and MRIs. PMID:23742755
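Equity assessments of this kind are commonly summarized with a Gini coefficient over per-region equipment density. The abstract does not state which inequality measure the authors used, so the Gini sketch below is an illustrative assumption.

```python
import numpy as np

def gini(values):
    """Gini coefficient of a non-negative array (0 = perfect equality,
    values near 1 = extreme inequality), via the Lorenz-curve formula
    on the sorted data."""
    v = np.sort(np.asarray(values, dtype=float))
    n = len(v)
    cum = np.cumsum(v)
    return (n + 1 - 2.0 * np.sum(cum) / cum[-1]) / n
```

Applied to, say, MRIs per million population across cities, a fall in the Gini between 2006 and 2009 would correspond to the improvement from high to moderate inequality reported above.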
Characterizing the distribution of an endangered salmonid using environmental DNA analysis
Laramie, Matthew B.; Pilliod, David S.; Goldberg, Caren S.
2015-01-01
Determining species distributions accurately is crucial to developing conservation and management strategies for imperiled species, but a challenging task for small populations. We evaluated the efficacy of environmental DNA (eDNA) analysis for improving detection and thus potentially refining the known distribution of Chinook salmon (Oncorhynchus tshawytscha) in the Methow and Okanogan Subbasins of the Upper Columbia River, which span the border between Washington, USA and British Columbia, Canada. We developed an assay to target a 90 base pair sequence of Chinook DNA and used quantitative polymerase chain reaction (qPCR) to quantify the amount of Chinook eDNA in triplicate 1-L water samples collected at 48 stream locations in June and again in August 2012. The overall probability of detecting Chinook with our eDNA method in areas within the known distribution was 0.77 (±0.05 SE). Detection probability was lower in June (0.62, ±0.08 SE) during high flows and at the beginning of spring Chinook migration than during base flows in August (0.93, ±0.04 SE). In the Methow subbasin, mean eDNA concentration was higher in August compared to June, especially in smaller tributaries, probably resulting from the arrival of spring Chinook adults, reduced discharge, or both. Chinook eDNA concentrations did not appear to change in the Okanogan subbasin from June to August. Contrary to our expectations about downstream eDNA accumulation, Chinook eDNA did not decrease in concentration in upstream reaches (0-120 km). Further examination of factors influencing spatial distribution of eDNA in lotic systems may allow for greater inference of local population densities along stream networks or watersheds. These results demonstrate the potential effectiveness of eDNA detection methods for determining landscape-level distribution of anadromous salmonids in large river systems.
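With triplicate samples per site, overall and per-replicate detection probabilities are related by a simple independence formula. The sketch below illustrates that relationship; the independence assumption and the function names are mine, not the authors' occupancy model.

```python
def overall_detection_probability(p_rep, n_reps=3):
    """P(at least one of n independent replicate water samples detects
    target eDNA), given a per-replicate detection probability."""
    return 1.0 - (1.0 - p_rep) ** n_reps

def per_replicate_probability(p_overall, n_reps=3):
    """Invert the relation: per-replicate detection probability implied
    by an observed overall (site-level) detection rate."""
    return 1.0 - (1.0 - p_overall) ** (1.0 / n_reps)
```

For example, the reported site-level probability of 0.77 with three 1-L replicates would, under independence, correspond to a per-replicate detection probability of roughly 0.39.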
Analysis and improvement of data-set level file distribution in Disk Pool Manager
NASA Astrophysics Data System (ADS)
Cadellin Skipsey, Samuel; Purdie, Stuart; Britton, David; Mitchell, Mark; Bhimji, Wahid; Smith, David
2014-06-01
Of the three most widely used implementations of the WLCG Storage Element specification, Disk Pool Manager[1, 2] (DPM) has the simplest implementation of file placement balancing (StoRM doesn't attempt this, leaving it up to the underlying filesystem, which can be very sophisticated in itself). DPM uses a round-robin algorithm (with optional filesystem weighting) for placing files across filesystems and servers. This does a reasonable job of evenly distributing files across the storage array provided to it. However, it does not offer any guarantees of the evenness of distribution of that subset of files associated with a given "dataset" (which often maps onto a "directory" in the DPM namespace (DPNS)). It is useful to consider a concept of "balance", where an optimally balanced set of files indicates that the files are distributed evenly across all of the pool nodes. The best case performance of the round-robin algorithm is to maintain balance; it has no mechanism to improve balance. In the past year or more, larger DPM sites have noticed load spikes on individual disk servers, and suspected that these were exacerbated by excesses of files from popular datasets on those servers. We present here a software tool which analyses file distribution for all datasets in a DPM SE, providing a measure of the poorness of file location in this context. Further, the tool provides a list of file movement actions which will improve dataset-level file distribution, and can action those file movements itself. We present results of such an analysis on the UKI-SCOTGRID-GLASGOW Production DPM.
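The "balance" concept and the greedy improvement step can be sketched as follows. The metric (ideal even share divided by the largest actual per-node share) and the move-selection rule are illustrative assumptions, not the tool's actual algorithm.

```python
from collections import Counter

def dataset_balance(file_nodes, n_pool_nodes):
    """Balance of one dataset's files across pool nodes: 1.0 means the
    files are spread perfectly evenly; small values mean one node holds
    a disproportionate share (a load-spike risk for popular datasets)."""
    counts = Counter(file_nodes)               # node -> number of files
    ideal = len(file_nodes) / n_pool_nodes     # even share per node
    return ideal / max(counts.values())

def suggest_move(file_nodes, pool_nodes):
    """One greedy file movement that improves balance: shift a file
    from the fullest node to the emptiest, or None if already even."""
    counts = Counter({n: 0 for n in pool_nodes})
    counts.update(file_nodes)
    src = max(counts, key=counts.get)
    dst = min(counts, key=counts.get)
    return (src, dst) if counts[src] - counts[dst] > 1 else None
```

Iterating suggest_move until it returns None yields a move list of the kind the tool produces, driving each dataset's balance toward 1.0.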
Jibson, R.W.; Keefer, D.K.
1989-01-01
More than 220 large landslides along the bluffs bordering the Mississippi alluvial plain between Cairo, Ill., and Memphis, Tenn., are analyzed by discriminant analysis and multiple linear regression to determine the relative effects of slope height and steepness, stratigraphic variation, slope aspect, and proximity to the hypocenters of the 1811-12 New Madrid, Mo., earthquakes on the distribution of these landslides. Three types of landslides are analyzed: (1) old, coherent slumps and block slides, which have eroded and revegetated features and no active analogs in the area; (2) old earth flows, which are also eroded and revegetated; and (3) young rotational slumps, which are present only along near-river bluffs, and which are the only young, active landslides in the area. Discriminant analysis shows that only one characteristic differs significantly between bluffs with and without young rotational slumps: failed bluffs tend to have sand and clay at their base, which may render them more susceptible to fluvial erosion. Bluffs having old coherent slides are significantly higher, steeper, and closer to the hypocenters of the 1811-12 earthquakes than bluffs without these slides. Bluffs having old earth flows are likewise higher and closer to the earthquake hypocenters. Multiple regression analysis indicates that the distribution of young rotational slumps is affected most strongly by slope steepness: about one-third of the variation in the distribution is explained by variations in slope steepness. The distribution of old coherent slides and earth flows is affected most strongly by slope height, but the proximity to the hypocenters of the 1811-12 earthquakes also significantly affects the distribution. The results of the statistical analyses indicate that the only recently active landsliding in the area is along actively eroding river banks, where rotational slumps formed as bluffs are undercut by the river. 
The analyses further indicate that the old coherent slides and earth flows in the area are spatially related to the 1811-12 earthquake hypocenters and were thus probably triggered by those earthquakes. These results are consistent with findings of other recent investigations of landslides in the area that presented field, historical, and analytical evidence to demonstrate that old landslides in the area formed during the 1811-12 New Madrid earthquakes. Results of the multiple linear regression can also be used to approximate the relative susceptibility of the bluffs in the study area to seismically induced landsliding. © 1989.
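The multiple-linear-regression step in this study (landslide distribution regressed on bluff attributes such as height, steepness, and hypocentral distance) can be sketched with an ordinary least-squares fit. This is a generic OLS illustration under assumed synthetic inputs, not the authors' model or data.

```python
import numpy as np

def fit_landslide_model(X, y):
    """Ordinary least squares of landslide density (y) on bluff
    attributes (columns of X, e.g. height, steepness, hypocentral
    distance); returns (coefficients incl. intercept, R^2)."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    A = np.column_stack([np.ones(len(X)), X])   # prepend intercept
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return beta, 1.0 - ss_res / ss_tot
```

The R-squared value plays the role of the "about one-third of the variation explained by slope steepness" statements in the abstract, and the fitted coefficients support the susceptibility approximation mentioned at the end.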
Reliability Analysis of Uniaxially Ground Brittle Materials
NASA Technical Reports Server (NTRS)
Salem, Jonathan A.; Nemeth, Noel N.; Powers, Lynn M.; Choi, Sung R.
1995-01-01
The fast fracture strength distribution of uniaxially ground, alpha silicon carbide was investigated as a function of grinding angle relative to the principal stress direction in flexure. Both as-ground and ground/annealed surfaces were investigated. The resulting flexural strength distributions were used to verify reliability models and predict the strength distribution of larger plate specimens tested in biaxial flexure. Complete fractography was done on the specimens. Failures occurred from agglomerates, machining cracks, or hybrid flaws that consisted of a machining crack located at a processing agglomerate. Annealing eliminated failures due to machining damage. Reliability analyses were performed using two and three parameter Weibull and Batdorf methodologies. The Weibull size effect was demonstrated for machining flaws. Mixed mode reliability models reasonably predicted the strength distributions of uniaxial flexure and biaxial plate specimens.
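The two-parameter Weibull analysis and the size effect mentioned above can be sketched as follows. The maximum-likelihood fit via scipy and the volume form of the size-effect relation are standard choices, not necessarily the exact estimators used in the paper.

```python
import numpy as np
from scipy import stats

def fit_weibull_strength(strengths):
    """Two-parameter Weibull MLE of fracture strengths (location fixed
    at zero): returns (Weibull modulus m, characteristic strength s0)."""
    m, _, s0 = stats.weibull_min.fit(strengths, floc=0)
    return m, s0

def size_scaled_strength(s1, v1, v2, m):
    """Weibull size effect: characteristic strength of a specimen of
    volume v2 predicted from one of volume v1, s2 = s1 * (v1/v2)^(1/m)."""
    return s1 * (v1 / v2) ** (1.0 / m)
```

The second function captures why the larger biaxial plate specimens are predicted to be weaker than the flexure bars: for v2 > v1 the predicted characteristic strength drops, more steeply for low Weibull modulus.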
Multiobjective sensitivity analysis and optimization of a distributed hydrologic model MOBIDIC
NASA Astrophysics Data System (ADS)
Yang, J.; Castelli, F.; Chen, Y.
2014-03-01
Calibration of distributed hydrologic models usually involves how to deal with the large number of distributed parameters and optimization problems with multiple but often conflicting objectives which arise in a natural fashion. This study presents a multiobjective sensitivity and optimization approach to handle these problems for a distributed hydrologic model MOBIDIC, which combines two sensitivity analysis techniques (Morris method and State Dependent Parameter method) with a multiobjective optimization (MOO) approach ϵ-NSGAII. This approach was implemented to calibrate MOBIDIC with its application to the Davidson watershed, North Carolina with three objective functions, i.e., standardized root mean square error of logarithmic transformed discharge, water balance index, and mean absolute error of logarithmic transformed flow duration curve, and its results were compared with those with a single objective optimization (SOO) with the traditional Nelder-Mead Simplex algorithm used in MOBIDIC by taking the objective function as the Euclidean norm of these three objectives. Results show: (1) the two sensitivity analysis techniques are effective and efficient to determine the sensitive processes and insensitive parameters: surface runoff and evaporation are very sensitive processes to all three objective functions, while groundwater recession and soil hydraulic conductivity are not sensitive and were excluded in the optimization; (2) both MOO and SOO lead to acceptable simulations, e.g., for MOO, average Nash-Sutcliffe is 0.75 in the calibration period and 0.70 in the validation period; (3) evaporation and surface runoff shows similar importance to watershed water balance while the contribution of baseflow can be ignored; (4) compared to SOO, which was dependent on the initial starting location, MOO provides more insight on parameter sensitivity and conflicting characteristics of these objective functions.
Multiobjective sensitivity analysis and optimization provides an alternative way for future MOBIDIC modelling.
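The Morris screening step used above (identifying sensitive processes and excluding insensitive parameters before optimization) can be sketched with a crude radial one-at-a-time design. This is a simplified variant in the spirit of the Morris method, not the exact trajectory design the authors used.

```python
import numpy as np

def morris_mu_star(model, n_params, n_trajectories=20, delta=0.5, seed=0):
    """Mean absolute elementary effect (mu*) per parameter on the unit
    hypercube; large mu* marks a sensitive parameter, small mu* a
    candidate for exclusion from calibration."""
    rng = np.random.default_rng(seed)
    effects = np.empty((n_trajectories, n_params))
    for t in range(n_trajectories):
        x = rng.uniform(0.0, 1.0 - delta, size=n_params)  # room for +delta
        y0 = model(x)
        for i in range(n_params):
            xi = x.copy()
            xi[i] += delta                 # perturb one parameter at a time
            effects[t, i] = (model(xi) - y0) / delta
    return np.abs(effects).mean(axis=0)
```

Parameters with mu* near zero correspond to the "not sensitive" cases (groundwater recession, soil hydraulic conductivity) that were fixed before running the multiobjective optimization.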
Knowledge-based assistance for science visualization and analysis using large distributed databases
NASA Technical Reports Server (NTRS)
Handley, Thomas H., Jr.; Jacobson, Allan S.; Doyle, Richard J.; Collins, Donald J.
1993-01-01
Within this decade, the growth in complexity of exploratory data analysis and the sheer volume of space data require new and innovative approaches to support science investigators in achieving their research objectives. To date, there have been numerous efforts addressing the individual issues involved in inter-disciplinary, multi-instrument investigations. However, while successful in small scale, these efforts have not proven to be open and scalable. This proposal addresses four areas of significant need: scientific visualization and analysis; science data management; interactions in a distributed, heterogeneous environment; and knowledge-based assistance for these functions. The fundamental innovation embedded within this proposal is the integration of three automation technologies, namely, knowledge-based expert systems, science visualization and science data management. This integration is based on a concept called the DataHub. With the DataHub concept, NASA will be able to apply a more complete solution to all nodes of a distributed system. Both computation nodes and interactive nodes will be able to effectively and efficiently use the data services (access, retrieval, update, etc.) with a distributed, interdisciplinary information system in a uniform and standard way. This will allow the science investigators to concentrate on their scientific endeavors, rather than to involve themselves in the intricate technical details of the systems and tools required to accomplish their work. Thus, science investigators need not be programmers. The emphasis will be on the definition and prototyping of system elements with sufficient detail to enable data analysis and interpretation leading to publishable scientific results. In addition, the proposed work includes all the required end-to-end components and interfaces to demonstrate the completed concept.
Knowledge-based assistance for science visualization and analysis using large distributed databases
NASA Technical Reports Server (NTRS)
Handley, Thomas H., Jr.; Jacobson, Allan S.; Doyle, Richard J.; Collins, Donald J.
1992-01-01
Within this decade, the growth in complexity of exploratory data analysis and the sheer volume of space data require new and innovative approaches to support science investigators in achieving their research objectives. To date, there have been numerous efforts addressing the individual issues involved in inter-disciplinary, multi-instrument investigations. However, while successful in small scale, these efforts have not proven to be open and scalable. This proposal addresses four areas of significant need: scientific visualization and analysis; science data management; interactions in a distributed, heterogeneous environment; and knowledge-based assistance for these functions. The fundamental innovation embedded within this proposal is the integration of three automation technologies, namely, knowledge-based expert systems, science visualization and science data management. This integration is based on the concept called the Data Hub. With the Data Hub concept, NASA will be able to apply a more complete solution to all nodes of a distributed system. Both computation nodes and interactive nodes will be able to effectively and efficiently use the data services (access, retrieval, update, etc.) with a distributed, interdisciplinary information system in a uniform and standard way. This will allow the science investigators to concentrate on their scientific endeavors, rather than to involve themselves in the intricate technical details of the systems and tools required to accomplish their work. Thus, science investigators need not be programmers. The emphasis will be on the definition and prototyping of system elements with sufficient detail to enable data analysis and interpretation leading to publishable scientific results. In addition, the proposed work includes all the required end-to-end components and interfaces to demonstrate the completed concept.
Predictive analysis of thermal distribution and damage in thermotherapy on biological tissue
NASA Astrophysics Data System (ADS)
Fanjul-Vélez, Félix; Arce-Diego, José Luis
2007-05-01
The use of optical techniques is increasing the possibilities and success of medical praxis in certain cases, either in tissue characterization or treatment. Photodynamic therapy (PDT) and low intensity laser treatment (LILT) are two examples of the latter. Another very interesting implementation is thermotherapy, which consists of controlling the temperature increase in a pathological biological tissue. With this method it is possible to achieve an improvement in specific diseases, but a prior analysis of the treatment is needed so that the patient does not suffer collateral damage, an essential point given the safety margins required in medical procedures. In this work, a predictive analysis of the thermal distribution in a biological tissue irradiated by an optical source is presented. Optical propagation is based on a Radiation Transport Theory (RTT) model solved via a numerical Monte Carlo method in a multi-layered tissue. The data obtained are included in a bio-heat equation that models heat transfer, taking into account conduction, convection, radiation, blood perfusion and vaporization, depending on the specific problem. The spatio-temporal differential bio-heat equation is solved via a numerical finite-difference approach. Experimental temperature distributions on animal tissue irradiated by laser radiation are shown. From the thermal distribution in tissue, thermal damage is studied by means of an Arrhenius analysis, as a way of predicting harmful effects. The complete model can be used for concrete treatment proposals, as a way of predicting treatment effects and consequently deciding which optical source parameters, mainly wavelength and optical power, are appropriate for the specific disease, with reasonable safety margins in the process.
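A minimal sketch of the Arrhenius damage analysis mentioned in the abstract: accumulated damage is the integral of A·exp(-Ea/(R·T(t))) over the exposure, with Omega = 1 the conventional damage threshold. The A and Ea values below are the classic Henriques skin-coagulation constants, used purely for illustration; they are not the paper's fitted parameters.

```python
import math

# Illustrative Arrhenius damage integral: Omega = sum A*exp(-Ea/(R*T)) * dt.
A = 3.1e98    # frequency factor (1/s), assumed (Henriques value)
Ea = 6.28e5   # activation energy (J/mol), assumed (Henriques value)
R = 8.314     # universal gas constant (J/(mol K))

def arrhenius_damage(temps_K, dt):
    """Accumulate the damage integral over a sampled temperature history."""
    return sum(A * math.exp(-Ea / (R * T)) * dt for T in temps_K)

# 60 s exposures at constant tissue temperature: 44 C stays below the
# Omega = 1 threshold, while 57 C exceeds it.
omega_mild = arrhenius_damage([317.15] * 60, dt=1.0)
omega_hot = arrhenius_damage([330.15] * 60, dt=1.0)
```

The steep exponential dependence on temperature is why a predictive thermal model is needed before treatment: a few degrees separate a harmless exposure from irreversible damage.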
Flow distribution analysis on the cooling tube network of ITER thermal shield
NASA Astrophysics Data System (ADS)
Nam, Kwanwoo; Chung, Wooho; Noh, Chang Hyun; Kang, Dong Kwon; Kang, Kyoung-O.; Ahn, Hee Jae; Lee, Hyeon Gon
2014-01-01
The thermal shield (TS) is to be installed between the vacuum vessel or the cryostat and the magnets in the ITER tokamak to reduce the thermal radiation load on the magnets operating at 4.2 K. The TS is cooled by pressurized helium gas at an inlet temperature of 80 K. The cooling tube is welded on the TS panel surface, and the resulting flow network of TS cooling tubes is complex. The flow rate in each panel should be matched to the thermal design value for effective radiation shielding. This paper presents a one-dimensional analysis of the flow distribution in the cooling tube network of the ITER TS. The hydraulic cooling tube network is modeled by an electrical analogy. Only the cooling tube on the TS surface and its connecting pipe from the manifold are considered in the analysis model. Accounting for the friction factor and local losses in the cooling tube, the hydraulic resistance is expressed as a linear function of the mass flow rate. Sub-circuits in the TS are analyzed separately because each circuit is independently controlled by its own valve. It is found that flow rates in some panels are insufficient compared with the design values. In order to improve the flow distribution, two design modifications are proposed. The first is to connect the tubes of adjacent panels, which increases the resistance of the tube on the panel where the flow rate is excessive. The other is to install an orifice at the exit of the tube routing where the flow rate is to be reduced. The analysis of these design suggestions shows that the flow maldistribution is improved significantly.
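A rough sketch (not the paper's model) of the electrical analogy described above: parallel cooling-tube branches share one pressure drop, each with a hydraulic resistance R_i(m) = a_i + b_i·m that is linear in the mass flow m. The (a_i, b_i) coefficients and the total flow are invented numbers.

```python
# Fixed-point solve of a tiny parallel network: equal pressure drops across
# branches, total mass flow conserved. Coefficients are illustrative only.
branches = [(2.0, 0.5), (1.0, 0.8), (3.0, 0.2)]   # (a_i, b_i), assumed units
m_total = 6.0                                     # total mass flow to split

def split_flow(m_total, branches, iters=300):
    flows = [m_total / len(branches)] * len(branches)
    for _ in range(iters):
        # average the branch pressure drops dP_i = R_i(m_i) * m_i
        dP = sum((a + b * m) * m for (a, b), m in zip(branches, flows)) / len(branches)
        # invert dP = a*m + b*m**2 per branch (positive root of the quadratic)
        flows = [((a * a + 4 * b * dP) ** 0.5 - a) / (2 * b) for a, b in branches]
        scale = m_total / sum(flows)              # enforce mass conservation
        flows = [m * scale for m in flows]
    return flows

flows = split_flow(m_total, branches)
drops = [(a + b * m) * m for (a, b), m in zip(branches, flows)]
```

At convergence the branch pressure drops equalize; adding an orifice corresponds to raising a branch's a_i, which shifts flow toward the other branches, the same lever the design modifications exploit.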
Flow distribution analysis on the cooling tube network of ITER thermal shield
Nam, Kwanwoo; Chung, Wooho; Noh, Chang Hyun; Kang, Dong Kwon; Kang, Kyoung-O; Ahn, Hee Jae; Lee, Hyeon Gon
2014-01-29
The thermal shield (TS) is to be installed between the vacuum vessel or the cryostat and the magnets in the ITER tokamak to reduce the thermal radiation load on the magnets operating at 4.2 K. The TS is cooled by pressurized helium gas at an inlet temperature of 80 K. The cooling tube is welded on the TS panel surface, and the resulting flow network of TS cooling tubes is complex. The flow rate in each panel should be matched to the thermal design value for effective radiation shielding. This paper presents a one-dimensional analysis of the flow distribution in the cooling tube network of the ITER TS. The hydraulic cooling tube network is modeled by an electrical analogy. Only the cooling tube on the TS surface and its connecting pipe from the manifold are considered in the analysis model. Accounting for the friction factor and local losses in the cooling tube, the hydraulic resistance is expressed as a linear function of the mass flow rate. Sub-circuits in the TS are analyzed separately because each circuit is independently controlled by its own valve. It is found that flow rates in some panels are insufficient compared with the design values. In order to improve the flow distribution, two design modifications are proposed. The first is to connect the tubes of adjacent panels, which increases the resistance of the tube on the panel where the flow rate is excessive. The other is to install an orifice at the exit of the tube routing where the flow rate is to be reduced. The analysis of these design suggestions shows that the flow maldistribution is improved significantly.
Complete Distributed Hyper-Entangled-Bell-State Analysis and Quantum Super Dense Coding
NASA Astrophysics Data System (ADS)
Zheng, Chunhong; Gu, Yongjian; Li, Wendong; Wang, Zhaoming; Zhang, Jiying
2016-02-01
We propose a protocol to implement the distributed hyper-entangled-Bell-state analysis (HBSA) for photonic qubits with weak cross-Kerr nonlinearities, QND photon-number-resolving detection, and some linear optical elements. The distinct feature of our scheme is that the BSA for two different degrees of freedom can be implemented deterministically and nondestructively. Based on the present HBSA, we achieve quantum super dense coding with double information capacity, which makes our scheme more significant for long-distance quantum communication.
Furrer, F; Franz, T; Berta, M; Leverrier, A; Scholz, V B; Tomamichel, M; Werner, R F
2012-09-01
We provide a security analysis for continuous variable quantum key distribution protocols based on the transmission of two-mode squeezed vacuum states measured via homodyne detection. We employ a version of the entropic uncertainty relation for smooth entropies to give a lower bound on the number of secret bits which can be extracted from a finite number of runs of the protocol. This bound is valid under general coherent attacks, and gives rise to keys which are composably secure. For comparison, we also give a lower bound valid under the assumption of collective attacks. For both scenarios, we find positive key rates using experimental parameters reachable today. PMID:23005270
Human and climate impact on global riverine water and sediment fluxes - a distributed analysis
NASA Astrophysics Data System (ADS)
Cohen, S.; Kettner, A.; Syvitski, J. P.
2013-05-01
Understanding riverine water and sediment dynamics is an important undertaking both for socially relevant issues such as agriculture, water security and infrastructure management, and for scientific analysis of climate, landscapes, river ecology, oceanography and other disciplines. Providing good quantitative and predictive tools is therefore timely, particularly in light of predicted climate and land-use changes. The intensity and dynamics of man-made and climatic factors vary widely across the globe and are therefore hard to predict, warranting the use of sophisticated numerical models. Here we use a distributed global riverine sediment and water discharge model (WBMsed) to simulate human and climate effects on our planet's large rivers.
Constraints on spin-dependent parton distributions at large x from global QCD analysis
NASA Astrophysics Data System (ADS)
Jimenez-Delgado, P.; Avakian, H.; Melnitchouk, W.
2014-11-01
We investigate the behavior of spin-dependent parton distribution functions (PDFs) at large parton momentum fractions x in the context of global QCD analysis. We explore the constraints from existing deep-inelastic scattering data, and from theoretical expectations for the leading x → 1 behavior based on hard gluon exchange in perturbative QCD. Systematic uncertainties from the dependence of the PDFs on the choice of parametrization are studied by considering functional forms motivated by orbital angular momentum arguments. Finally, we quantify the reduction in the PDF uncertainties that may be expected from future high-x data from Jefferson Lab at 12 GeV.
Statistics analysis of distribution of Bradysia Ocellaris insect on Oyster mushroom cultivation
NASA Astrophysics Data System (ADS)
Sari, Kurnia Novita; Amelia, Ririn
2015-12-01
The Bradysia Ocellaris insect is a pest in Oyster mushroom cultivation. The distribution of Bradysia Ocellaris has a particular pattern that can be observed every week under several assumptions such as independence, normality and homogeneity. We analyze the number of Bradysia Ocellaris for each week through descriptive analysis. Next, the distribution pattern of Bradysia Ocellaris is described by the semivariogram, a diagram of the variance of the differences between pairs of observations separated by a distance d. The semivariogram model that best suits the Bradysia Ocellaris data is the isotropic spherical model.
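A small sketch of the semivariogram machinery referred to above. The weekly insect counts are invented, and the spherical model parameters (sill, range) are illustrative, not the study's fitted values.

```python
# Empirical semivariance at a given lag, and the isotropic spherical model
# gamma(d) = sill*(1.5h - 0.5h^3) with h = d/range, flat at the sill beyond it.
def empirical_semivariance(z, lag):
    """gamma(lag) = half the mean squared difference over pairs 'lag' apart."""
    pairs = [(z[i], z[i + lag]) for i in range(len(z) - lag)]
    return sum((x - y) ** 2 for x, y in pairs) / (2 * len(pairs))

def spherical(d, sill, rng):
    if d >= rng:
        return sill
    h = d / rng
    return sill * (1.5 * h - 0.5 * h ** 3)

counts = [3, 5, 4, 8, 12, 9, 15, 18, 14, 20]   # weekly counts, invented
gammas = [empirical_semivariance(counts, lag) for lag in (1, 2, 3)]
```

Fitting amounts to choosing the sill and range so the spherical curve tracks the empirical gammas; a rising-then-flat empirical semivariogram is what motivates the spherical choice.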
Analysis of counterfactual quantum key distribution using error-correcting theory
NASA Astrophysics Data System (ADS)
Li, Yan-Bing
2014-10-01
Counterfactual quantum key distribution is an interesting direction in quantum cryptography and has been realized by some researchers. However, it has been pointed out that it is insecure in the information-theoretic sense when used over a highly lossy channel. In this paper, we revisit its security from an error-correcting theory point of view. The analysis indicates that the security flaw stems from the fact that the error rate in the users' raw key pair is as high as that under Eve's attack when the loss rate exceeds 50 %.
Khmyrova, I.; Seijyou, Yu.
2007-10-01
We develop a simple distributed circuit model of the high-electron mobility transistor (HEMT)-like structure for the analysis of effects associated with plasma oscillations excited in its two-dimensional electron gas (2DEG) channel. Circuit components of the model are related to physical and geometrical parameters of the structure. The developed model accounts for the dependence of the resistance and inductance of the gated region of the 2DEG channel on the gate voltage. Such an approach facilitates and improves understanding of the behavior of HEMT-like structures in the regime of plasma-oscillation excitation and is applicable to their performance evaluation and optimization.
CRAB3: Establishing a new generation of services for distributed analysis at CMS
NASA Astrophysics Data System (ADS)
Cinquilli, M.; Spiga, D.; Grandi, C.; Hernàndez, J. M.; Konstantinov, P.; Mascheroni, M.; Riahi, H.; Vaandering, E.
2012-12-01
In CMS Computing the highest priorities for analysis tools are the improvement of the end users’ ability to produce and publish reliable samples and analysis results as well as a transition to a sustainable development and operations model. To achieve these goals CMS decided to incorporate analysis processing into the same framework as data and simulation processing. This strategy foresees that all workload tools (Tier0, Tier1, production, analysis) share a common core with long term maintainability as well as the standardization of the operator interfaces. The re-engineered analysis workload manager, called CRAB3, makes use of newer technologies, such as RESTful web services and NoSQL databases, aiming to increase the scalability and reliability of the system. As opposed to CRAB2, in CRAB3 all work is centrally injected and managed in a global queue. A pool of agents, which can be geographically distributed, consumes work from the central services serving the user tasks. The new architecture of CRAB substantially changes the deployment model and operations activities. In this paper we present the implementation of CRAB3, emphasizing how the new architecture improves the workflow automation and simplifies maintainability. In particular, we will highlight the impact of the new design on daily operations.
Particle size distribution models, their characteristics and fitting capability
NASA Astrophysics Data System (ADS)
Bayat, Hossein; Rastgo, Mostafa; Mansouri Zadeh, Moharram; Vereecken, Harry
2015-10-01
Many attempts have been made to characterize particle size distribution (PSD) curves using different mathematical models, which are primarily used as a basis for estimating soil hydraulic properties. The principal step in using soil PSD to predict soil hydraulic properties is determining an accurate and continuous curve for PSD. So far, the characteristics of the PSD models, their fitting accuracy, and the effects of their parameters on the shape and position of PSD curves have not been investigated. In this study all developed PSD models, their characteristics, behavior of their parameters, and their fitting capability to the UNSODA database soil samples were investigated. Results showed that Beerkan estimation of soil transfer (BEST), two and three parameter Weibull, Rosin and Rammler (1 and 2), unimodal and bimodal Fredlund, and van Genuchten models were flexible over the entire range of soil PSD. Correspondingly, the BEST, two and three parameter Weibull, Rosin and Rammler (1 and 2), hyperbolic and offset renormalized log-normal models possessed a high fitting capability over the entire range of PSD. The few parameters of the BEST, Rosin and Rammler (1 and 2), and two parameter Weibull models provide ease of use in soil physics and mechanics research. Thus, they appear to fit the PSD curve with acceptable accuracy. Although the fractal models have a physical and mathematical basis, they lack adequate flexibility to describe the PSD curve. Different aspects of the PSD models should be considered in selecting a model to describe a soil PSD.
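A sketch of fitting the two-parameter Weibull PSD model F(d) = 1 − exp(−(d/a)^b) by the usual linearization ln(−ln(1 − F)) = b·ln(d) − b·ln(a). The diameters and the (a, b) pair used to generate F are synthetic, not UNSODA values.

```python
import math

# Generate noiseless "data" from a known Weibull PSD, then recover (a, b)
# with an ordinary least-squares fit on the linearized coordinates.
d = [2.0, 20.0, 50.0, 100.0, 250.0, 500.0, 1000.0]      # diameters (um), invented
a_true, b_true = 120.0, 0.9
F = [1 - math.exp(-(x / a_true) ** b_true) for x in d]  # cumulative mass fraction

xs = [math.log(x) for x in d]
ys = [math.log(-math.log(1 - f)) for f in F]
n = len(d)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
b_hat = (n * sxy - sx * sy) / (n * sxx - sx * sx)       # slope -> shape b
a_hat = math.exp(-((sy - b_hat * sx) / n) / b_hat)      # intercept -> scale a
```

On real sieve data the points are only approximately collinear, and the residuals around this line are one quick diagnostic of whether a two-parameter model is flexible enough for the soil in question.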
Pokhrel, Keshav P.; Vovoras, Dimitrios; Tsokos, Chris P.
2012-01-01
The examination of brain tumor growth and its variability among cancer patients is an important aspect of epidemiologic and medical data. Several studies of brain tumors have interpreted descriptive data; in this study we perform inference to the extent possible, suggesting possible explanations for the differentiation in the survival rates apparent in the epidemiologic data. Population-based information from nine registries in the USA is classified with respect to age, gender, race and tumor histology to study tumor size variation. The Weibull and Dagum distributions are fitted to the highly skewed tumor size distributions; the parametric analysis of the tumor sizes showed significant differentiation between sexes, increased skewness for both the male and female populations, as well as decreased kurtosis for the black female population. The effect of population characteristics on the distribution of tumor sizes is estimated by a quantile regression model and then compared with the ordinary least squares results. The higher quantiles of the distribution of tumor sizes for whites are significantly higher than those of other races. Our model predicted that the effect of age in the lower quantiles of the tumor size distribution is negative, given the variables race and sex. We apply probability and regression models to explore the effects of demographic and histology types and observe significant racial and gender differences in the form of the distributions. Efforts are made to link tumor size data with available survival rates in relation to other prognostic variables. PMID:23675268
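A sketch of the idea behind the quantile regression used above: minimizing the pinball (check) loss over a constant recovers an empirical tau-quantile, which is why quantile regression can target the upper tail of a skewed distribution directly. The "tumor sizes" below are fabricated to be right-skewed, like the registry data.

```python
# Pinball loss for quantile tau: asymmetric absolute error that penalizes
# underestimates by tau and overestimates by (1 - tau).
def pinball_loss(tau, sizes, q):
    return sum(tau * (v - q) if v >= q else (tau - 1) * (v - q) for v in sizes)

sizes = [5, 8, 12, 15, 20, 28, 40, 65, 90, 140]   # mm, invented, right-skewed
tau = 0.75
# Minimizing over the observed values lands on an empirical 0.75-quantile.
q75 = min(sizes, key=lambda q: pinball_loss(tau, sizes, q))
```

Replacing the constant q with a linear function of covariates (age, race, sex) and minimizing the same loss gives the quantile regression fits compared against ordinary least squares in the abstract.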
NASA Astrophysics Data System (ADS)
Ahmadpour, A.; Okhovat, A.; Darabi Mahboub, M. J.
2013-06-01
The application of Stoeckli theory to determine pore size distribution (PSD) of activated carbons using high pressure methane adsorption data is explored. Coconut shell was used as a raw material for the preparation of 16 different activated carbon samples. Four samples with higher methane adsorption were selected and nitrogen adsorption on these adsorbents was also investigated. Some differences are found between the PSD obtained from the analysis of nitrogen adsorption isotherms and their PSD resulting from the same analysis using methane adsorption data. It is suggested that these differences may arise from the specific interactions between nitrogen molecules and activated carbon surfaces; therefore caution is required in the interpretation of PSD obtained from the nitrogen isotherm data.
Muralisankar, S; Manivannan, A; Balasubramaniam, P
2015-09-01
The aim of this manuscript is to investigate the mean-square delay-dependent probability-distribution stability analysis of neutral-type stochastic neural networks with time-delays. The time-delays are assumed to be interval time-varying and randomly occurring. Based on a new Lyapunov-Krasovskii functional and a stochastic analysis approach, a novel sufficient condition is obtained in the form of a linear matrix inequality such that the delayed stochastic neural networks are globally robustly asymptotically stable in the mean-square sense for all admissible uncertainties. Finally, the derived theoretical results are validated through numerical examples in which maximum allowable upper bounds are calculated for different lower bounds of the time-delay. PMID:25862099
Analysis of a 7 year tropospheric ozone vertical distribution at the Observatoire de Haute Provence
NASA Technical Reports Server (NTRS)
Beekmann, Matthias; Ancellet, Gerard; Megie, Gerard
1994-01-01
A seven year (1984-90) climatology of tropospheric vertical ozone soundings, performed by electrochemical sondes at the OHP (44 deg N, 6 deg E, 700 m ASL) in Southern France, is presented. Its seasonal variation shows a broad spring/summer maximum in the troposphere. The contributions of photochemical ozone production and of transport from the stratosphere to this seasonal variation are studied by a correlative analysis of ozone concentrations and meteorological variables, with emphasis on potential vorticity. This analysis shows the impact of dynamical and photochemical processes on the spatial and temporal ozone variability. In particular, a positive correlation (r = 0.40, significance greater than 99.9 percent) of ozone with potential vorticity is observed in the middle troposphere, reflecting the impact of stratosphere-troposphere exchange on the vertical ozone distribution.
New limits on intrinsic charm in the nucleon from global analysis of parton distributions
Jimenez-Delgado, P.; Hobbs, T. J.; Londergan, J. T.; Melnitchouk, W.
2015-02-27
We present a new global QCD analysis of parton distribution functions, allowing for possible intrinsic charm (IC) contributions in the nucleon inspired by light-front models. The analysis makes use of the full range of available high-energy scattering data for Q2 ≥ 1 GeV2 and W2 ≥ 3.5 GeV2, including fixed-target proton and deuteron deep-inelastic cross sections at lower energies that were excluded in previous global analyses. The expanded data set places more stringent constraints on the momentum carried by IC, with ⟨x⟩IC at most 0.5% (corresponding to an IC normalization of ~1%) at the 4σ level for Δχ2 = 1. We also assess the impact of older EMC measurements of F2c at large x, which favor a nonzero IC, but with very large χ2 values.
System analysis for the Huntsville Operation Support Center distributed computer system
NASA Technical Reports Server (NTRS)
Ingels, F. M.
1986-01-01
A simulation model of the NASA Huntsville Operational Support Center (HOSC) was developed. This simulation model emulates the HYPERchannel Local Area Network (LAN) that ties together the various computers of HOSC. The HOSC system is a large installation of mainframe computers such as the Perkin Elmer 3200 series and the DEC VAX series. A series of six simulation exercises of the HOSC model is described using data sets provided by NASA. The analytical analysis of the ETHERNET LAN and the video terminal (VT) distribution system is presented. An interface analysis of the smart terminal network model, which allows the data flow requirements due to VTs on the ETHERNET LAN to be estimated, is presented.
NASA Astrophysics Data System (ADS)
Blomley, R.; Weinmann, M.; Leitloff, J.; Jutzi, B.
2014-08-01
Due to ever more efficient and accurate laser scanning technologies, the analysis of 3D point clouds has become an important task in modern photogrammetry and remote sensing. To exploit the full potential of such data for structural analysis and object detection, reliable geometric features are of crucial importance. Since multiscale approaches have proved very successful for image-based applications, efforts are currently being made to apply similar approaches to 3D point clouds. In this paper we analyse common geometric covariance features, pinpointing some severe limitations regarding their performance on varying scales. Instead, we propose a different feature type based on shape distributions known from object recognition. These novel features show a very reliable performance over a wide scale range, and their classification results outperform covariance features in all tested cases.
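A sketch of a shape-distribution feature in the spirit described above (the D2 distribution of Osada et al., assumed here as a representative choice): a normalized histogram of distances between random point pairs. Normalizing by the maximum sampled distance makes the feature invariant to uniform scaling, the kind of property the abstract credits for reliability across scales.

```python
import math
import random

# D2 shape distribution: histogram of pairwise distances, normalized so the
# feature does not change when the whole cloud is uniformly rescaled.
def d2_histogram(points, n_pairs=2000, bins=8):
    dists = [math.dist(*random.sample(points, 2)) for _ in range(n_pairs)]
    dmax = max(dists)
    hist = [0] * bins
    for d in dists:
        hist[min(int(bins * d / dmax), bins - 1)] += 1
    return [h / n_pairs for h in hist]

random.seed(0)
cloud = [(random.random(), random.random(), random.random()) for _ in range(200)]
scaled = [(2 * x, 2 * y, 2 * z) for x, y, z in cloud]

random.seed(1)
h_small = d2_histogram(cloud)
random.seed(1)           # same pair sampling, so scale is the only difference
h_big = d2_histogram(scaled)
```

Doubling the cloud's scale leaves the histogram unchanged, whereas raw covariance eigenvalue features would shift, which is the limitation the paper highlights.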
NASA Astrophysics Data System (ADS)
Dai, Mi; Wang, Yun
2016-01-01
In order to obtain robust cosmological constraints from Type Ia supernova (SN Ia) data, we have applied Markov Chain Monte Carlo (MCMC) to SN Ia lightcurve fitting. We develop a method for sampling the resultant probability density functions (PDFs) of the SN Ia lightcurve parameters in the MCMC likelihood analysis to constrain cosmological parameters. Applying this method to the Joint Lightcurve Analysis (JLA) data set of SNe Ia, we find that sampling the SN Ia lightcurve parameter PDFs leads to cosmological parameters closer to those of a flat Universe with a cosmological constant, compared to the usual practice of using only the best-fit values of the SN Ia lightcurve parameters. Our method will be useful in the use of SN Ia data for precision cosmology.
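A toy Metropolis sketch of the approach described above: rather than keeping only the best-fit value of a lightcurve parameter, draw samples from its posterior and propagate them. The Gaussian "posterior" (mean 1.2, width 0.1) is a stand-in, not an actual JLA lightcurve parameter.

```python
import math
import random

random.seed(42)

def log_post(x, mu=1.2, sigma=0.1):
    # toy log-posterior for one lightcurve parameter (illustrative values)
    return -0.5 * ((x - mu) / sigma) ** 2

def metropolis(n=20000, step=0.15, x0=0.0):
    """Random-walk Metropolis: accept with probability min(1, post_ratio)."""
    x, chain = x0, []
    for _ in range(n):
        prop = x + random.gauss(0.0, step)
        if math.log(random.random()) < log_post(prop) - log_post(x):
            x = prop
        chain.append(x)
    return chain[n // 4:]          # drop burn-in

chain = metropolis()
post_mean = sum(chain) / len(chain)
```

Using the whole chain instead of only the mode is what lets asymmetric or correlated parameter uncertainties flow through to the cosmological fit.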
NASA Astrophysics Data System (ADS)
Wolfsberg, A.; Kang, Q.; Li, C.; Ruskauff, G.; Bhark, E.; Freeman, E.; Prothro, L.; Drellack, S.
2007-12-01
The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. 
The simulations are challenged by the distributed sources in each of the Corrective Action Units, by complex mass transfer processes, and by the size and complexity of the field-scale flow models. An efficient methodology utilizing particle tracking results and convolution integrals provides in situ concentrations appropriate for Monte Carlo analysis. Uncertainty in source releases and transport parameters including effective porosity, fracture apertures and spacing, matrix diffusion coefficients, sorption coefficients, and colloid load and mobility are considered. With the distributions of input uncertainties and output plume volumes, global analysis methods including stepwise regression, contingency table analysis, and classification tree analysis are used to develop sensitivity rankings of parameter uncertainties for each model considered, thus assisting a variety of decisions. The National Security Technologies, LLC component of this work is DOE/NV/25946--xxx and was done under contract number DE-AC52-06NA25946 with the U.S. Department of Energy.
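A sketch of the convolution-integral idea mentioned above: the in situ concentration history is the source release s(t) convolved with a unit-release breakthrough curve h(t) derived from particle tracking. Both series here are invented placeholders, not UGTA model output.

```python
# Discrete convolution: c[t] = sum_k s[k] * h[t - k]. Once h is computed for a
# unit release, any release history can be propagated without re-running the
# transport model, which is what makes Monte Carlo over sources affordable.
def convolve(s, h):
    out = [0.0] * (len(s) + len(h) - 1)
    for i, si in enumerate(s):
        for j, hj in enumerate(h):
            out[i + j] += si * hj
    return out

release = [1.0, 0.5, 0.25, 0.0, 0.0]   # source mass released per step, invented
unit_btc = [0.0, 0.1, 0.3, 0.4, 0.2]   # unit-release breakthrough, sums to 1
conc = convolve(release, unit_btc)     # mass arriving per step downgradient
```

Because the breakthrough curve integrates to one, the convolution conserves the released mass, a useful sanity check on any particle-tracking-derived h(t).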
Fractographic principles applied to Y-TZP mechanical behavior analysis.
Ramos, Carla Müller; Cesar, Paulo Francisco; Bonfante, Estevam Augusto; Rubo, José Henrique; Wang, Linda; Borges, Ana Flávia Sanches
2016-04-01
The purpose of this study was to evaluate the use of fractography principles to determine the fracture toughness of Y-TZP dental ceramic in which KIc was measured fractographically using controlled-flaw beam bending techniques and to correlate the flaw distribution with the mechanical properties. The Y-TZP blocks studied were: Zirconia Zirklein (ZZ); Zirconcad (ZCA); IPS e.max ZirCad (ZMAX); and In Ceram YZ (ZYZ). Samples were prepared (16mm×4mm×2mm) according to ISO 6872 specifications and subjected to three-point bending at a crosshead speed of 0.5mm/min. Weibull probability curves (95% confidence bounds) were calculated and a contour plot with the Weibull modulus (m) versus characteristic strength (σ0) was used to examine the differences among groups. The fractured surface of each specimen was inspected in a scanning electron microscope (SEM) for qualitative and quantitative fractographic analysis. The critical defect size (c) and fracture toughness (KIc) were estimated. The fractured surfaces of the samples from all groups showed similar fractographic characteristics, except ZCA showed pores and defects. Fracture toughness and the flexural strength values were not different among the groups except for ZCA. The characteristic strength (p<0.05) of ZZ (η=920.4) was higher than the ZCA (η=651.1) and similar to the ZMAX (η=983.6) and ZYZ (η=1054.8). By means of quantitative and qualitative fractographic analysis, this study showed fracture toughness and strength that could be correlated to the observable microstructural features of the evaluated zirconia polycrystalline ceramics. PMID:26722988
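A sketch of the standard Weibull strength analysis behind the modulus-versus-characteristic-strength contour plot described above: rank the strengths, assign failure probabilities F_i = (i − 0.5)/n, and fit ln(ln(1/(1 − F))) = m·ln(s) − m·ln(s0). The strengths below are synthetic values in MPa, not the measured Y-TZP data.

```python
import math

# Median-rank-style Weibull fit by least squares on linearized coordinates.
strengths = sorted([780.0, 845.0, 910.0, 960.0, 1010.0, 1060.0, 1120.0, 1200.0])
n = len(strengths)
xs = [math.log(s) for s in strengths]
ys = [math.log(math.log(1.0 / (1.0 - (i + 0.5) / n))) for i in range(n)]
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
m = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # Weibull modulus (slope)
s0 = math.exp(-((sy - m * sx) / n) / m)         # characteristic strength, F = 63.2%
```

A higher m means a narrower flaw-size distribution (more reproducible strength), which is why the paper reads the contour plot alongside the fractographic defect analysis.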
Qi, Lu; Speliotes, Elizabeth K.; Thorleifsson, Gudmar; Willer, Cristen J.; Herrera, Blanca M.; Jackson, Anne U.; Lim, Noha; Scheet, Paul; Soranzo, Nicole; Amin, Najaf; Aulchenko, Yurii S.; Chambers, John C.; Drong, Alexander; Luan, Jian'an; Lyon, Helen N.; Rivadeneira, Fernando; Sanna, Serena; Timpson, Nicholas J.; Zillikens, M. Carola; Zhao, Jing Hua; Almgren, Peter; Bandinelli, Stefania; Bennett, Amanda J.; Bergman, Richard N.; Bonnycastle, Lori L.; Bumpstead, Suzannah J.; Chanock, Stephen J.; Cherkas, Lynn; Chines, Peter; Coin, Lachlan; Cooper, Cyrus; Crawford, Gabriel; Doering, Angela; Dominiczak, Anna; Doney, Alex S. F.; Ebrahim, Shah; Elliott, Paul; Erdos, Michael R.; Estrada, Karol; Ferrucci, Luigi; Fischer, Guido; Forouhi, Nita G.; Gieger, Christian; Grallert, Harald; Groves, Christopher J.; Grundy, Scott; Guiducci, Candace; Hadley, David; Hamsten, Anders; Havulinna, Aki S.; Hofman, Albert; Holle, Rolf; Holloway, John W.; Illig, Thomas; Isomaa, Bo; Jacobs, Leonie C.; Jameson, Karen; Jousilahti, Pekka; Karpe, Fredrik; Kuusisto, Johanna; Laitinen, Jaana; Lathrop, G. Mark; Lawlor, Debbie A.; Mangino, Massimo; McArdle, Wendy L.; Meitinger, Thomas; Morken, Mario A.; Morris, Andrew P.; Munroe, Patricia; Narisu, Narisu; Nordström, Anna; Nordström, Peter; Oostra, Ben A.; Palmer, Colin N. A.; Payne, Felicity; Peden, John F.; Prokopenko, Inga; Renström, Frida; Ruokonen, Aimo; Salomaa, Veikko; Sandhu, Manjinder S.; Scott, Laura J.; Scuteri, Angelo; Silander, Kaisa; Song, Kijoung; Yuan, Xin; Stringham, Heather M.; Swift, Amy J.; Tuomi, Tiinamaija; Uda, Manuela; Vollenweider, Peter; Waeber, Gerard; Wallace, Chris; Walters, G. Bragi; Weedon, Michael N.; Witteman, Jacqueline C. M.; Zhang, Cuilin; Zhang, Weihua; Caulfield, Mark J.; Collins, Francis S.; Davey Smith, George; Day, Ian N. 
M.; Franks, Paul W.; Hattersley, Andrew T.; Hu, Frank B.; Jarvelin, Marjo-Riitta; Kong, Augustine; Kooner, Jaspal S.; Laakso, Markku; Lakatta, Edward; Mooser, Vincent; Morris, Andrew D.; Peltonen, Leena; Samani, Nilesh J.; Spector, Timothy D.; Strachan, David P.; Tanaka, Toshiko; Tuomilehto, Jaakko; Uitterlinden, André G.; van Duijn, Cornelia M.; Wareham, Nicholas J.; Watkins for the PROCARDIS consortia, Hugh; Waterworth, Dawn M.; Boehnke, Michael; Deloukas, Panos; Groop, Leif; Hunter, David J.; Thorsteinsdottir, Unnur; Schlessinger, David; Wichmann, H.-Erich; Frayling, Timothy M.; Abecasis, Gonçalo R.; Hirschhorn, Joel N.; Loos, Ruth J. F.; Stefansson, Kari; Mohlke, Karen L.; Barroso, Inês; McCarthy for the GIANT consortium, Mark I.
2009-01-01
To identify genetic loci influencing central obesity and fat distribution, we performed a meta-analysis of 16 genome-wide association studies (GWAS, N = 38,580) informative for adult waist circumference (WC) and waist–hip ratio (WHR). We selected 26 SNPs for follow-up, for which the evidence of association with measures of central adiposity (WC and/or WHR) was strong and disproportionate to that for overall adiposity or height. Follow-up studies in a maximum of 70,689 individuals identified two loci strongly associated with measures of central adiposity; these map near TFAP2B (WC, P = 1.9×10⁻¹¹) and MSRA (WC, P = 8.9×10⁻⁹). A third locus, near LYPLAL1, was associated with WHR in women only (P = 2.6×10⁻⁸). The variants near TFAP2B appear to influence central adiposity through an effect on overall obesity/fat-mass, whereas LYPLAL1 displays a strong female-only association with fat distribution. By focusing on anthropometric measures of central obesity and fat distribution, we have identified three loci implicated in the regulation of human adiposity. PMID:19557161
Distributed Data-Flow for In-Situ Visualization and Analysis at Petascale
Laney, D E; Childs, H R
2009-03-13
We conducted a feasibility study to research modifications to data-flow architectures that enable data-flow to be distributed across multiple machines automatically. Distributed data-flow is a crucial technology for ensuring that tools like the VisIt visualization application can provide in-situ data analysis and post-processing for simulations on petascale machines. We modified a version of VisIt to study load-balancing trade-offs between light-weight kernel compute environments and dedicated post-processing cluster nodes. Our research focused on memory overheads for contouring operations, which involve variable amounts of generated geometry on each node and the computation of normal vectors for all generated vertices. Each compute node independently decided whether to send data to dedicated post-processing nodes at each stage of pipeline execution, depending on available memory. We instrumented the code to allow user-settable available memory amounts to test extremely low-overhead compute environments. We performed initial testing of this prototype distributed streaming framework, but did not have time to perform scaling studies at and beyond 1000 compute nodes.
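The per-stage routing decision described in this abstract can be sketched as follows. This is a minimal illustration, not VisIt's actual interface: the function name, the headroom factor, and the byte thresholds are all assumptions introduced for the example.

```python
def route_stage(geometry_bytes: int, available_bytes: int,
                headroom_factor: float = 2.0) -> str:
    """Decide where the next pipeline stage should run.

    geometry_bytes  -- size of the geometry generated so far on this node
    available_bytes -- user-settable memory budget for the compute node
    headroom_factor -- assume the next stage (e.g. normal-vector
                       computation) may grow memory use by this factor
    """
    if geometry_bytes * headroom_factor <= available_bytes:
        return "local"          # keep streaming through the local pipeline
    return "post-processing"    # ship data to a dedicated cluster node

# A node near its memory budget offloads to the post-processing cluster:
print(route_stage(geometry_bytes=300_000_000, available_bytes=400_000_000))
# -> post-processing
# A node with ample headroom keeps the stage local:
print(route_stage(geometry_bytes=10_000_000, available_bytes=400_000_000))
# -> local
```

Because each node decides independently at each stage, a light-weight kernel environment with a small user-settable budget naturally offloads early, which is the trade-off the study instrumented.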
Sun, Jie; Huang, Hui; Xiao, Ge Xin; Feng, Guo Shuang; Yu, Shi Cheng; Xue, Yu Tang; Wan, Xia; Yang, Gong Huan; Sun, Xin
2015-03-01
Liver cancer is a common and leading cause of cancer death in China. We used cancer registry data collected from 2009 to 2011 to describe the spatial distribution of liver cancer incidence at the village level in Shenqiu County, Henan Province, China. Spatial autocorrelation analysis was employed to detect significant departures from a random spatial distribution of liver cancer incidence. Spatial scan statistics were used to detect and evaluate clusters of liver cancer cases. Spatial clusters were mapped using ArcGIS 10.0 software in order to identify their physical location at the village level. High-incidence cluster areas were observed in 26 villages of 7 towns, and low-incidence cluster areas in 16 villages of 4 towns. The high-incidence cluster areas were distributed along the Sha Ying River, the largest tributary of the Huai River. The role of water pollution in Shenqiu County, where the high clusters were found, deserves further investigation. PMID:25800446
Analysis of synoptic scale controlling factors in the distribution of gravity wave potential energy
NASA Astrophysics Data System (ADS)
Yang, Shih-Sian; Pan, C. J.; Das, Uma; Lai, H. C.
2015-12-01
In past years, the global morphology and climatology of gravity waves have been widely studied and the effects of topography and convection systems have been evaluated, but the complete gravity wave distribution could not be explained by these effects. To find the missing controlling factors, a series of synoptic scale analyses is performed in the present study to investigate relationships between synoptic scale factors and the potential energy (Ep) associated with gravity waves. The global distribution of Ep during a 12-year period from 2002 to 2013 is derived using temperature profiles retrieved from observations of the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument onboard the Thermosphere Ionosphere Mesosphere Energetics and Dynamics (TIMED) satellite. Synoptic scale factors obtained from ECMWF Interim reanalysis data are employed to investigate the correlation between synoptic systems and Ep. It is found that Ep values are high around extratropical cyclones over mid-latitudes (30-60°) and around the Intertropical Convergence Zone (ITCZ) over low-latitudes (10-30°). Ep values are low around subtropical highs over both mid- and low-latitudes. This is the first time that a synoptic scale analysis of the Ep distribution has been performed, and the influence of synoptic scale factors on Ep is confirmed.
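The Ep quantity in this abstract is conventionally computed from temperature profiles as Ep = ½ (g/N)² ⟨(T′/T̄)²⟩, with N the Brunt-Väisälä frequency. The sketch below uses that standard definition, not anything taken from this paper; the numerical values (stratospheric g, an isothermal background, a 1 K² perturbation variance) are illustrative assumptions.

```python
g, c_p = 9.5, 1004.0  # gravity (m/s^2) at stratospheric heights, specific heat (J/kg/K)

def potential_energy(T_bar, dTdz, T_prime_var):
    """Gravity-wave potential energy per unit mass (J/kg):
    Ep = 1/2 * (g/N)^2 * <(T'/T_bar)^2>, with
    N^2 = (g/T_bar) * (dT_bar/dz + g/c_p)."""
    N2 = (g / T_bar) * (dTdz + g / c_p)   # Brunt-Vaisala frequency squared
    return 0.5 * (g ** 2 / N2) * (T_prime_var / T_bar ** 2)

# Illustrative values: 230 K background, near-isothermal lapse rate,
# 1 K^2 mean-square temperature perturbation.
Ep = potential_energy(T_bar=230.0, dTdz=0.0, T_prime_var=1.0)
print(round(Ep, 2))  # -> 2.18
```

Values of a few J/kg are typical of the mid-latitude enhancements the study maps around extratropical cyclones and the ITCZ.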
Stewart, F. M.; Gordon, D. M.; Levin, B. R.
1990-01-01
In the 47 years since fluctuation analysis was introduced by Luria and Delbruck, it has been widely used to calculate mutation rates. Up to now, in spite of the importance of such calculations, the probability distribution of the number of mutants that will appear in a fluctuation experiment has been known only under the restrictive, and possibly unrealistic, assumptions: (1) that the mutation rate is exactly proportional to the growth rate and (2) that all mutants grow at a rate that is a constant multiple of the growth rate of the original cells. In this paper, we approach the distribution of the number of mutants from a new point of view that will enable researchers to calculate the distribution to be expected using assumptions that they believe to be closer to biological reality. The new idea is to classify mutations according to the number of observable mutants that derive from the mutation when the culture is selectively plated. This approach also simplifies the calculations in situations where two, or many, kinds of mutation may occur in a single culture. PMID:2307353
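The two classical assumptions the paper relaxes can be made concrete with a small simulation of a fluctuation experiment under exactly those assumptions: mutations arise in proportion to growth, and mutants grow at the same rate as wild-type cells. The helper names and parameter values below are ours, chosen only to illustrate the characteristic heavy-tailed ("jackpot") distribution of mutant counts.

```python
import math
import random

def poisson(lam, rng):
    """Poisson sample via Knuth's algorithm; adequate for the small means here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def fluctuation_culture(n0, generations, mu, rng):
    """One culture under the classical Luria-Delbruck assumptions:
    each wild-type division mutates with probability mu, and mutants
    double at the same rate as wild-type cells."""
    wild, mutants = n0, 0
    for _ in range(generations):
        new = poisson(wild * mu, rng)  # mutations arising this generation
        mutants = 2 * mutants + new
        wild = 2 * wild - new
    return mutants

rng = random.Random(1)
counts = sorted(fluctuation_culture(1, 20, 1e-6, rng) for _ in range(200))
# Cultures in which a mutation happened early ("jackpots") carry many
# mutants, so the maximum far exceeds the median of the distribution.
print(counts[100], counts[-1])
```

The mutation-classification idea of the paper generalizes this: instead of assuming every mutation founds a clone that doubles deterministically, mutations are classified by the number of observable mutants they ultimately yield at plating.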
Spatial distribution analysis of chemical and biochemical properties across Koiliaris CZO
NASA Astrophysics Data System (ADS)
Tsiknia, Myrto; Varouchakis, Emmanouil A.; Paranychianakis, Nikolaos V.; Nikolaidis, Nikolaos P.
2015-04-01
Arid and semi-arid ecosystems cover approximately 47% of the Earth's surface. Soils in these climatic zones are often severely degraded and poor in organic carbon and nutrients. Anthropogenic activities like overgrazing and intensive agricultural practices further degrade soil quality, making soils more vulnerable to erosion and accelerating losses of nutrients, which may end up in surface waterways and degrade their quality. Data on the geospatial distribution of nutrient availability, as well as on the processes involved at the watershed level, can help identify areas that will potentially act as nutrient sources and allow appropriate management practices to be adopted to mitigate environmental impacts. In the present study we performed an extensive sampling campaign (50 points) across a typical Mediterranean watershed, the Koiliaris Critical Zone Observatory (CZO), organized so as to effectively capture the complex variability (climate, soil properties, hydrology, land use) of the watershed. Analyses of soil physico-chemical properties (texture, pH, EC, TOC, TN, NO3--N, and NH4+-N) and biochemical assays (potential nitrification rate, nitrogen mineralization rate, enzyme activities) were carried out. Geostatistical analysis, specifically kriging interpolation, was employed to generate distribution maps of nitrogen forms and of the related biochemical assays. Such maps could provide an important tool for effective ecosystem management and monitoring decisions.
Determination and analysis of distribution coefficients of 137Cs in soils from Biscay (Spain).
Elejalde, C; Herranz, M; Legarda, F; Romero, F
2000-10-01
The distribution coefficient of (137)Cs has been determined in 58 soils from 12 sampling points in Biscay by treating 10 g of soil with 25 ml of an aqueous solution containing 1765 Bq of the radionuclide, shaking for 64 h, and measuring the residual activity with a suitable detector. Soils were characterised by sampling depth, particle size analysis and the usual chemical parameters. The soils were then subjected to successive extractions to determine the chemical forms of (137)Cs speciation, namely the exchangeable, carbonate-associated, iron oxide and organic matter fractions, the amount taken up by the remaining soil constituents being obtained by difference. For this part of the work, 16 soils from four points were selected from the previous samples. The greatest mean percentages of (137)Cs sorption corresponded to the residual (69.93%), exchangeable (13.17%) and organic matter (12.54%) fractions. The paper also includes the calculation of partial distribution coefficients for the chemical species, as well as relations of the distribution coefficients both among themselves and with soil parameters. PMID:15092865
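The batch-experiment distribution coefficient described above follows the standard definition Kd = (activity sorbed per gram of soil) / (activity remaining per ml of solution). A minimal sketch using the stated experimental conditions; the residual activity of 100 Bq is a hypothetical value for illustration, not a result from the paper.

```python
def distribution_coefficient(a_initial_bq, a_residual_bq,
                             soil_mass_g=10.0, solution_ml=25.0):
    """Kd (ml/g) = (sorbed activity per g of soil) /
                   (residual activity per ml of solution)."""
    sorbed_per_g = (a_initial_bq - a_residual_bq) / soil_mass_g
    residual_per_ml = a_residual_bq / solution_ml
    return sorbed_per_g / residual_per_ml

# 10 g soil, 25 ml solution, 1765 Bq added (as in the experiment),
# with a hypothetical 100 Bq remaining in solution after 64 h:
print(round(distribution_coefficient(1765, 100), 1))  # -> 41.6
```

High Kd values indicate strong sorption to the solid phase, which is why the fraction-by-fraction speciation (exchangeable, carbonates, iron oxides, organic matter, residual) is informative about where the caesium is held.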
Merita, Keisham; Kattukunnel, Joseph John; Yadav, Shrirang Ramchandra; Bhat, Kangila Venkataramana; Rao, Satyawada Rama
2015-03-01
A comparative analysis of fluorochrome-binding patterns in nine taxa of Abelmoschus showed that the type, amount and distribution pattern of heterochromatin were characteristic for each taxon. The fluorescent chromosome-binding sites obtained by chromomycin A3 (CMA) and 4',6-diamidino-2-phenylindole (DAPI) staining in all nine species revealed constitutive heterochromatin of the CMA(+), DAPI(+) and CMA(+)/DAPI(+) types. A large amount of heterozygosity was observed with regard to the heterochromatin distribution pattern in all the taxa studied. CMA(+)-binding sites were comparatively fewer than DAPI(+)-binding sites, consistent with AT-rich regions outnumbering GC-rich regions in all nine taxa of Abelmoschus analysed. These CMA(+)- and DAPI(+)-binding sites apparently increase with the chromosome numbers of the different species, and this pattern of heterochromatin heterogeneity seems to be a general characteristic feature. Therefore, the differential distribution of GC- and AT-rich sequences might have played an important role in the diversification of the genus Abelmoschus. Polyploidy is an important factor in the evolution of Abelmoschus and the sole reason for the range in chromosome numbers in this genus. It may be noted that, although often, but not always, an increase in DNA is caused by an increase in the amount of heterochromatin, i.e. an increase of non-coding sections indicating restructuring of the heterochromatin. Thus, cumulative small and direct numerical changes might have played a role in the speciation of Abelmoschus. PMID:25300590
Rock size-frequency distribution analysis at the Chang'E-3 landing site
NASA Astrophysics Data System (ADS)
Di, Kaichang; Xu, Bin; Peng, Man; Yue, Zongyu; Liu, Zhaoqin; Wan, Wenhui; Li, Lichun; Zhou, Jianliang
2016-01-01
This paper presents a comprehensive analysis of the rock size-frequency distribution at the Chang'E-3 landing site. Using 84 Navcam stereo images acquired at 7 waypoints by the Yutu rover and an interactive stereo image processing system, a total of 582 rocks larger than 0.05 m in diameter were identified and measured. The statistical results of the size-frequency distribution show that the cumulative fractional area covered by rocks versus their diameter follows a simple exponential function and has a convex-up shape on log-log graphs with the slope increasing with diameter. The cumulative number of rocks versus diameter derived by numerically integrating the cumulative fractional area also shows a good fit with the data. A diameter-height relationship was also determined from height and diameter ratios. The observed rock statistics were also compared with those from other lunar missions, including the Surveyor, Apollo, and Lunokhod missions; results suggest that the rock distribution at the Chang'E-3 landing site is similar to that found by Surveyor III.
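The cumulative fractional area statistic used in this abstract is the fraction of surveyed terrain covered by rocks of diameter ≥ D, which the study then fits with a simple exponential model. A minimal sketch of how that statistic is built from measured diameters; the diameters and survey area below are made-up example values, and rocks are approximated as circles.

```python
import math

def cumulative_fractional_area(diameters_m, survey_area_m2):
    """For each measured rock diameter D, return the fraction of the
    surveyed area covered by rocks of diameter >= D (circular footprints)."""
    ds = sorted(diameters_m, reverse=True)
    cfa, total = [], 0.0
    for d in ds:  # accumulate coverage from the largest rock downward
        total += math.pi * (d / 2.0) ** 2 / survey_area_m2
        cfa.append(total)
    # return both lists in ascending-diameter order
    return list(reversed(ds)), list(reversed(cfa))

# Hypothetical measurements over a 100 m^2 patch
diams = [0.05, 0.07, 0.10, 0.12, 0.30]
ds, cfa = cumulative_fractional_area(diams, 100.0)
# the largest diameter has the smallest cumulative coverage
print(ds[-1], round(cfa[-1], 6))  # -> 0.3 0.000707
```

Plotting cfa against ds on log-log axes gives the convex-up curve the paper describes, to which an exponential model of the form F(D) = k·exp(-q·D) can then be fitted.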