Science.gov

Sample records for Weibull distribution analysis

  1. q-exponential, Weibull, and q-Weibull distributions: an empirical analysis

    NASA Astrophysics Data System (ADS)

    Picoli, S.; Mendes, R. S.; Malacarne, L. C.

    2003-06-01

    In a comparative study, the q-exponential and Weibull distributions are employed to investigate frequency distributions of basketball baskets, cyclone victims, brand-name drugs by retail sales, and highway length. To analyze the intermediate cases, the q-Weibull distribution, which interpolates between the q-exponential and Weibull distributions, is introduced. It is verified that the basketball baskets distribution is well described by a q-exponential, whereas the cyclone victims and brand-name drugs by retail sales distributions are better described by a Weibull distribution. For highway length, on the other hand, neither the q-exponential nor the Weibull distribution gives a satisfactory fit, making it necessary to employ the q-Weibull distribution. Furthermore, the introduction of this interpolating distribution sheds light on the controversy between stretched-exponential and inverse power-law (q-exponential with q>1) descriptions.
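
    The interpolation described above can be made concrete. A minimal Python sketch, assuming the common Tsallis parameterization of the q-exponential (function names and parameters are illustrative, not taken from the paper):

    ```python
    import numpy as np

    def q_exp(x, q):
        """Tsallis q-exponential; reduces to exp(x) in the limit q -> 1."""
        if np.isclose(q, 1.0):
            return np.exp(x)
        base = np.maximum(1.0 + (1.0 - q) * x, 0.0)
        return base ** (1.0 / (1.0 - q))

    def q_weibull_kernel(x, q, beta, eta):
        """Unnormalized q-Weibull density (x/eta)**(beta-1) * e_q(-(x/eta)**beta);
        recovers the Weibull kernel as q -> 1 and the q-exponential at beta = 1.
        The normalizing constant depends on the convention adopted."""
        return (x / eta) ** (beta - 1.0) * q_exp(-(x / eta) ** beta, q)
    ```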

  2. Using Weibull Distribution Analysis to Evaluate ALARA Performance

    SciTech Connect

    E. L. Frome, J. P. Watkins, and D. A. Hagemeyer

    2009-10-01

    As Low as Reasonably Achievable (ALARA) is the underlying principle for protecting nuclear workers from potential health outcomes related to occupational radiation exposure. Radiation protection performance is currently evaluated by measures such as collective dose and average measurable dose, which do not indicate ALARA performance. The purpose of this work is to show how statistical modeling of individual doses using the Weibull distribution can provide objective supplemental performance indicators for comparing ALARA implementation among sites and for insights into ALARA practices within a site. Maximum likelihood methods were employed to estimate the Weibull shape and scale parameters used for performance indicators. The shape parameter reflects the effectiveness of maximizing the number of workers receiving lower doses and is represented as the slope of the fitted line on a Weibull probability plot. Additional performance indicators derived from the model parameters include the 99th percentile and the exceedance fraction. When grouping sites by collective total effective dose equivalent (TEDE) and ranking by 99th percentile with confidence intervals, differences in performance among sites can be readily identified. Applying this methodology will enable more efficient and complete evaluation of the effectiveness of ALARA implementation.
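
    As a sketch of the computation the abstract describes, the two Weibull parameters and the derived indicators can be obtained with standard tools; the dose values and the exceedance threshold below are hypothetical, not ALARA data:

    ```python
    import numpy as np
    from scipy.stats import weibull_min

    doses = np.array([0.2, 0.5, 0.8, 1.1, 1.6, 2.4, 3.0, 4.7])  # hypothetical doses (mSv)
    shape, loc, scale = weibull_min.fit(doses, floc=0)    # ML fit of the two-parameter form
    p99 = weibull_min.ppf(0.99, shape, scale=scale)       # 99th-percentile indicator
    exceedance = weibull_min.sf(2.0, shape, scale=scale)  # fraction expected above 2 mSv
    ```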

  3. Reliability analysis of structural ceramic components using a three-parameter Weibull distribution

    SciTech Connect

    Duffy, S.F.; Powers, L.M.; Starlinger, A. (NASA, Lewis Research Center, Cleveland, OH)

    1992-01-01

    Described here are nonlinear regression estimators for the three-parameter Weibull distribution. Issues relating to the bias and invariance associated with these estimators are examined numerically using Monte Carlo simulation methods. The estimators were used to extract parameters from sintered silicon nitride failure data. A reliability analysis was performed on a turbopump blade utilizing the three-parameter Weibull distribution and the estimates from the sintered silicon nitride data. 16 refs.

  4. Software for Statistical Analysis of Weibull Distributions with Application to Gear Fatigue Data: User Manual with Verification

    NASA Technical Reports Server (NTRS)

    Kranz, Timothy L.

    2002-01-01

    The Weibull distribution has been widely adopted for the statistical description and inference of fatigue data. This document provides user instructions, examples, and verification for software to analyze gear fatigue test data. The software was developed presuming the data are adequately modeled using a two-parameter Weibull distribution. The calculations are based on likelihood methods, and the approach taken is valid for data that include type I censoring. The software was verified by reproducing results published by others.
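
    A minimal sketch of a likelihood that accommodates type I (right) censoring, the situation the abstract refers to; the cycle counts and suspensions below are invented for illustration:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def neg_log_lik(params, t, failed):
        """Two-parameter Weibull with right censoring: failures contribute the
        log-density, suspended (censored) runouts the log-survival function."""
        beta, eta = params
        if beta <= 0.0 or eta <= 0.0:
            return np.inf
        z = (t / eta) ** beta
        log_f = np.log(beta / eta) + (beta - 1.0) * np.log(t / eta) - z
        return -np.where(failed, log_f, -z).sum()

    t = np.array([1.2e6, 2.3e6, 3.1e6, 5.0e6, 5.0e6])   # cycles; last two suspended
    failed = np.array([True, True, True, False, False])
    fit = minimize(neg_log_lik, x0=[1.5, 3.0e6], args=(t, failed), method="Nelder-Mead")
    beta_hat, eta_hat = fit.x
    ```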

  5. Independent Orbiter Assessment (IOA): Weibull analysis report

    NASA Technical Reports Server (NTRS)

    Raffaelli, Gary G.

    1987-01-01

    The Auxiliary Power Unit (APU) and Hydraulic Power Unit (HPU) Space Shuttle Subsystems were reviewed as candidates for demonstrating the Weibull analysis methodology. Three hardware components were identified as analysis candidates: the turbine wheel, the gearbox, and the gas generator. Detailed review of subsystem level wearout and failure history revealed the lack of actual component failure data. In addition, component wearout data were not readily available or would require a separate data accumulation effort by the vendor. Without adequate component history data being available, the Weibull analysis methodology application to the APU and HPU subsystem group was terminated.

  6. Investigation of Weibull statistics in fracture analysis of cast aluminum

    NASA Technical Reports Server (NTRS)

    Holland, Frederic A., Jr.; Zaretsky, Erwin V.

    1989-01-01

    The fracture strengths of two large batches of A357-T6 cast aluminum coupon specimens were compared by using two-parameter Weibull analysis. The minimum number of these specimens necessary to find the fracture strength of the material was determined. The applicability of three-parameter Weibull analysis was also investigated. A design methodology based on the combination of elementary stress analysis and Weibull statistical analysis is advanced and applied to the design of a spherical pressure vessel shell. The results from this design methodology are compared with results from the applicable ASME pressure vessel code.

  7. Table for estimating parameters of Weibull distribution

    NASA Technical Reports Server (NTRS)

    Mann, N. R.

    1971-01-01

    Table yields best linear invariant (BLI) estimates for the logarithm of reliable life under censored life tests, permitting reliability estimation in failure analysis of items with multiple flaws. These BLI estimates have uniformly smaller expected loss than Gauss-Markov best linear unbiased estimates.

  8. Kinetic Analysis of Isothermal Decomposition Process of Sodium Bicarbonate Using the Weibull Probability Function—Estimation of Density Distribution Functions of the Apparent Activation Energies

    NASA Astrophysics Data System (ADS)

    Janković, Bojan

    2009-10-01

    The decomposition process of sodium bicarbonate (NaHCO3) has been studied by thermogravimetry under isothermal conditions at four operating temperatures (380 K, 400 K, 420 K, and 440 K). It was found that the experimental integral and differential conversion curves at the different operating temperatures can be successfully described by the isothermal Weibull distribution function with a unique value of the shape parameter (β = 1.07). It was also established that the Weibull distribution parameters (β and η) are independent of the operating temperature. Using the integral and differential (Friedman) isoconversional methods in the conversion range 0.20 ≤ α ≤ 0.80, the apparent activation energy (Ea) was approximately constant (Ea,int = 95.2 kJ mol^-1 and Ea,diff = 96.6 kJ mol^-1, respectively). The values of Ea calculated by both isoconversional methods are in good agreement with the value of Ea evaluated from the Arrhenius equation (94.3 kJ mol^-1), which was expressed through the scale distribution parameter (η). The Málek isothermal procedure was used to estimate the kinetic model for the investigated decomposition process. It was found that the two-parameter Šesták-Berggren (SB) autocatalytic model best describes the NaHCO3 decomposition process, with the conversion function f(α) = α^0.18(1-α)^1.19. It was also concluded that the calculated density distribution functions of the apparent activation energies (ddfEa's) do not depend on the operating temperature and exhibit highly symmetrical behavior (shape factor = 1.00). The isothermal decomposition results were compared with corresponding results for the nonisothermal decomposition of NaHCO3.
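
    The isothermal Weibull conversion function named here is straightforward to fit; a sketch with invented thermogravimetric data (the real curves and parameter values are in the paper):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def weibull_conversion(t, beta, eta):
        """Isothermal conversion alpha(t) = 1 - exp(-(t/eta)**beta); the rate
        constant 1/eta can then be given Arrhenius form A*exp(-Ea/(R*T))."""
        return 1.0 - np.exp(-(t / eta) ** beta)

    t = np.array([5.0, 10, 20, 40, 60, 90, 120])   # time, min (hypothetical)
    alpha = np.array([0.08, 0.17, 0.33, 0.58, 0.74, 0.88, 0.95])
    (beta_hat, eta_hat), _ = curve_fit(weibull_conversion, t, alpha, p0=(1.0, 50.0))
    ```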

  9. Lower bound on reliability for Weibull distribution when shape parameter is not estimated accurately

    NASA Technical Reports Server (NTRS)

    Huang, Zhaofeng; Porter, Albert A.

    1990-01-01

    The mathematical relationships between the shape parameter Beta and estimates of reliability and a life limit lower bound for the two parameter Weibull distribution are investigated. It is shown that under rather general conditions, both the reliability lower bound and the allowable life limit lower bound (often called a tolerance limit) have unique global minimums over a range of Beta. Hence lower bound solutions can be obtained without assuming or estimating Beta. The existence and uniqueness of these lower bounds are proven. Some real data examples are given to show how these lower bounds can be easily established and to demonstrate their practicality. The method developed here has proven to be extremely useful when using the Weibull distribution in analysis of no-failure or few-failures data. The results are applicable not only in the aerospace industry but anywhere that system reliabilities are high.
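
    The global-minimum idea can be sketched numerically: evaluate a lower reliability bound for each candidate shape parameter and take the minimum over the range. The Weibayes-style zero-failure bound used below is one standard construction, not necessarily the paper's; the run times are hypothetical:

    ```python
    import numpy as np

    def min_reliability_bound(run_times, t_mission, betas, conf=0.90):
        """For each candidate beta, a lower confidence bound on the Weibull scale
        from zero-failure data, eta_L = (sum t_i**beta / -ln(1-conf))**(1/beta);
        return the mission reliability bound minimized over the beta range."""
        t = np.asarray(run_times, float)
        denom = -np.log(1.0 - conf)
        bounds = []
        for b in betas:
            eta_l = ((t ** b).sum() / denom) ** (1.0 / b)
            bounds.append(np.exp(-(t_mission / eta_l) ** b))
        return min(bounds)

    r_lower = min_reliability_bound([520.0, 480, 610, 550], t_mission=100.0,
                                    betas=np.linspace(0.5, 5.0, 46))
    ```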

  10. Large-Scale Weibull Analysis of H-451 Nuclear-Grade Graphite Specimen Rupture Data

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Walker, Andrew; Baker, Eric H.; Murthy, Pappu L.; Bratton, Robert L.

    2012-01-01

    A Weibull analysis was performed of the strength distribution and size effects for 2000 specimens of H-451 nuclear-grade graphite. The data, generated elsewhere, measured the tensile and four-point-flexure room-temperature rupture strength of specimens excised from a single extruded graphite log. Strength variation was compared with specimen location, size, and orientation relative to the parent body. In our study, data were progressively and extensively pooled into larger data sets to discriminate overall trends from local variations and to investigate the strength distribution. The CARES/Life and WeibPar codes were used to investigate issues regarding the size effect, Weibull parameter consistency, and nonlinear stress-strain response. Overall, the Weibull distribution described the behavior of the pooled data very well. However, the issue regarding the smaller-than-expected size effect remained. This exercise illustrated that a conservative approach using a two-parameter Weibull distribution is best for designing graphite components with low probability of failure for the in-core structures in the proposed Generation IV (Gen IV) high-temperature gas-cooled nuclear reactors. This exercise also demonstrated the continuing need to better understand the mechanisms driving stochastic strength response. Extensive appendixes are provided with this report to show all aspects of the rupture data and analytical results.

  11. Predictive Failure of Cylindrical Coatings Using Weibull Analysis

    NASA Technical Reports Server (NTRS)

    Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.

    2002-01-01

    Rotating, coated wiping rollers used in a high-speed printing application failed primarily from fatigue. Two coating materials were evaluated: a hard, cross-linked, plasticized polyvinyl chloride (PVC) and a softer, plasticized PVC. A total of 447 tests was conducted with these coatings in a production facility. The data were evaluated using Weibull analysis. The softer coating produced more than twice the life of the harder cross-linked coating and reduced the wiper replacement rate by two-thirds, resulting in minimum production interruption.

  12. Least Squares Best Fit Method for the Three Parameter Weibull Distribution: Analysis of Tensile and Bend Specimens with Volume or Surface Flaw Failure

    NASA Technical Reports Server (NTRS)

    Gross, Bernard

    1996-01-01

    Material characterization parameters obtained from naturally flawed specimens are necessary for reliability evaluation of non-deterministic advanced ceramic structural components. The least squares best fit method is applied to the three parameter uniaxial Weibull model to obtain the material parameters from experimental tests on volume or surface flawed specimens subjected to pure tension, pure bending, four point or three point loading. Several illustrative example problems are provided.
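
    The general shape of such an estimator can be sketched on a Weibull probability plot: scan candidate location parameters and fit the remaining two parameters by least squares (an illustration of the idea, not the specific estimator of the report):

    ```python
    import numpy as np

    def weibull3_lsq(data, locations):
        """For each candidate location x0 < min(data), regress
        ln(-ln(1-F)) on ln(x - x0) using median-rank plotting positions;
        keep the (x0, shape, scale) triple with the smallest residual."""
        x = np.sort(np.asarray(data, float))
        n = len(x)
        F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)
        y = np.log(-np.log(1.0 - F))
        best = None
        for x0 in locations:
            if x0 >= x[0]:
                continue
            u = np.log(x - x0)
            m, c = np.polyfit(u, y, 1)               # slope m estimates the shape
            rss = ((m * u + c - y) ** 2).sum()
            if best is None or rss < best[0]:
                best = (rss, x0, m, np.exp(-c / m))  # scale = exp(-c/m)
        return best[1:]

    x0_hat, shape_hat, scale_hat = weibull3_lsq(
        [212.0, 231, 245, 258, 266, 281, 299, 318], np.linspace(0.0, 205.0, 42))
    ```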

  13. Fracture Strength: Stress Concentration, Extreme Value Statistics, and the Fate of the Weibull Distribution

    NASA Astrophysics Data System (ADS)

    Bertalan, Zsolt; Shekhawat, Ashivni; Sethna, James P.; Zapperi, Stefano

    2014-09-01

    The statistical properties of fracture strength of brittle and quasibrittle materials are often described in terms of the Weibull distribution. However, the weakest-link hypothesis, commonly used to justify it, is expected to fail when fracture occurs after significant damage accumulation. Here we show that this implies that the Weibull distribution is unstable in a renormalization-group sense for a large class of quasibrittle materials. Our theoretical arguments are supported by numerical simulations of disordered fuse networks. We also find that for brittle materials such as ceramics, the common assumption that the strength distribution can be derived from the distribution of preexisting microcracks by using Griffith's criteria is invalid. We attribute this discrepancy to crack bridging. Our findings raise questions about the applicability of Weibull statistics to most practical cases.

  14. The graphical method for goodness of fit test in the inverse Weibull distribution based on multiply type-II censored samples.

    PubMed

    Kang, Suk-Bok; Han, Jun-Tae

    2015-01-01

    Many studies have considered truncated and censored samples under type-I, type-II, and hybrid censoring schemes. The inverse Weibull distribution, a very flexible distribution whose special cases include the inverse Rayleigh and inverse exponential distributions, has been utilized for the analysis of life testing and reliability data. In this paper, we derive the approximate maximum likelihood estimators (AMLEs) of the scale parameter and the shape parameter of the inverse Weibull distribution under multiply type-II censoring. We also propose a simple graphical method for goodness-of-fit testing based on multiply type-II censored samples using AMLEs. PMID:26688782

  15. Weibull-distributed dyke thickness reflects probabilistic character of host-rock strength

    PubMed Central

    Krumbholz, Michael; Hieronymus, Christoph F.; Burchardt, Steffi; Troll, Valentin R.; Tanner, David C.; Friese, Nadine

    2014-01-01

    Magmatic sheet intrusions (dykes) constitute the main form of magma transport in the Earth’s crust. The size distribution of dykes is a crucial parameter that controls volcanic surface deformation and eruption rates and is required to realistically model volcano deformation for eruption forecasting. Here we present statistical analyses of 3,676 dyke thickness measurements from different tectonic settings and show that dyke thickness consistently follows the Weibull distribution. Known from materials science, power law-distributed flaws in brittle materials lead to Weibull-distributed failure stress. We therefore propose a dynamic model in which dyke thickness is determined by variable magma pressure that exploits differently sized host-rock weaknesses. The observed dyke thickness distributions are thus site-specific because rock strength, rather than magma viscosity and composition, exerts the dominant control on dyke emplacement. Fundamentally, the strength of geomaterials is scale-dependent and should be approximated by a probability distribution. PMID:24513695

  16. Surface Wind-Speed Statistics Modelling: Alternatives to the Weibull Distribution and Performance Evaluation

    NASA Astrophysics Data System (ADS)

    Drobinski, Philippe; Coulais, Corentin; Jourdier, Bénédicte

    2015-10-01

    Wind-speed statistics are generally modelled using the Weibull distribution. However, the Weibull distribution is based on empirical rather than physical justification and might display strong limitations for its applications. Here, we derive wind-speed distributions analytically with different assumptions on the wind components to model wind anisotropy, wind extremes and multiple wind regimes. We quantitatively confront these distributions with an extensive set of meteorological data (89 stations covering various sub-climatic regions in France) to identify distributions that perform best and the reasons for this, and we analyze the sensitivity of the proposed distributions to the diurnal to seasonal variability. We find that local topography, unsteady wind fluctuations as well as persistent wind regimes are determinants for the performances of these distributions, as they induce anisotropy or non-Gaussian fluctuations of the wind components. A Rayleigh-Rice distribution is proposed to model the combination of weak isotropic wind and persistent wind regimes. It outperforms all other tested distributions (Weibull, elliptical and non-Gaussian) and is the only proposed distribution able to catch accurately the diurnal and seasonal variability.
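
    The derivation route mentioned above, starting from assumptions on the wind components, can be illustrated with a short simulation: isotropic Gaussian components give a Rayleigh speed distribution, and a persistent mean wind shifts it toward a Rice distribution (all values arbitrary):

    ```python
    import numpy as np
    from scipy.stats import rayleigh, rice

    rng = np.random.default_rng(1)
    sigma = 2.0                           # std of each wind component, m/s
    u, v = rng.normal(0.0, sigma, (2, 100_000))
    speed_iso = np.hypot(u, v)            # isotropic case -> Rayleigh(scale=sigma)

    u0 = 5.0                              # persistent mean wind along x, m/s
    speed_per = np.hypot(u + u0, v)       # persistent regime -> Rice(b=u0/sigma)
    # e.g. compare np.mean(speed_per) with rice.mean(u0 / sigma, scale=sigma)
    ```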

  17. Analysis of time-to-event data with nonuniform patient entry and loss to follow-up under a two-stage seamless adaptive design with Weibull distribution.

    PubMed

    Lu, Qingshu; Tse, Siu-Keung; Chow, Shein-Chung; Lin, Min

    2012-01-01

    In the pharmaceutical industry, a two-stage seamless adaptive design that combines two separate independent clinical trials into a single clinical study is commonly employed in clinical research and development. In practice, in the interest of shortening the development process, it is not uncommon to consider study endpoints with different treatment durations at different stages (Chow and Chang, 2006; Maca et al., 2006). In this study, our attention is placed on the case where the study endpoints of interest are time-to-event data and the durations at the two stages are different, with nonuniform patient entry and losses to follow-up or dropouts. Test statistics for the final analysis based on the combined data are developed under various hypotheses for testing equality, superiority, noninferiority, and equivalence. In addition, formulas for sample size calculation and allocation between the two stages based on the proposed test statistic are derived. PMID:22651114

  18. Reliability Evaluation Method with Weibull Distribution for Temporary Overvoltages of Substation Equipment

    NASA Astrophysics Data System (ADS)

    Okabe, Shigemitsu; Tsuboi, Toshihiro; Takami, Jun

    Power-frequency withstand voltage tests on electric power equipment are regulated in JEC standards by evaluating lifetime reliability with a Weibull distribution function. The evaluation method is still controversial with respect to the treatment of a plural number of faults, and alternative methods have been proposed on this subject. The present paper first discusses the physical meanings of the various evaluation methods and then examines their effects on the power-frequency withstand voltage tests. Further, an appropriate method is investigated for an oil-filled transformer and a gas-insulated switchgear, taking note of the dielectric breakdown or partial discharge mechanism under various insulating material and structure conditions; the tentative conclusion is that the conventional method is the most pertinent under present conditions.

  19. USE OF WEIBULL FUNCTION FOR NON-LINEAR ANALYSIS OF EFFECTS OF LOW LEVELS OF SIMULATED HERBICIDE DRIFT ON PLANTS

    EPA Science Inventory

    We compared two regression models, which are based on the Weibull and probit functions, for the analysis of pesticide toxicity data from laboratory studies on Illinois crop and native plant species. Both mathematical models are continuous, differentiable, strictly positive, and...

  20. Statistical analysis of bivariate failure time data with Marshall–Olkin Weibull models

    PubMed Central

    Li, Yang; Sun, Jianguo; Song, Shuguang

    2013-01-01

    This paper discusses parametric analysis of bivariate failure time data, which occur often in medical studies among others. For this purpose, as for univariate failure time data, exponential and Weibull models are probably the most commonly used. Surprisingly, however, there seem to be no general estimation procedures available for fitting the bivariate Weibull model to bivariate right-censored failure time data, except some methods for special situations. We present and investigate two general but simple estimation procedures for the problem, one a graphical approach and the other a marginal approach. An extensive simulation study is conducted to assess the performance of the proposed approaches and shows that they work well in practical situations. An illustrative example is provided. PMID:26294802
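
    For intuition, bivariate pairs of the Marshall-Olkin type can be simulated with the classic common-shock construction plus a power transform; the parameterization below is one common convention, not necessarily the paper's:

    ```python
    import numpy as np

    def mo_bivariate_weibull(n, lam1, lam2, lam12, shape, seed=0):
        """X0 = min(Z1, Z12), Y0 = min(Z2, Z12) with independent exponential
        shocks is Marshall-Olkin bivariate exponential; raising each margin
        to the power 1/shape gives Weibull marginals (dependence via Z12)."""
        rng = np.random.default_rng(seed)
        z1 = rng.exponential(1.0 / lam1, n)
        z2 = rng.exponential(1.0 / lam2, n)
        z12 = rng.exponential(1.0 / lam12, n)
        x = np.minimum(z1, z12) ** (1.0 / shape)
        y = np.minimum(z2, z12) ** (1.0 / shape)
        return x, y

    x, y = mo_bivariate_weibull(10_000, lam1=0.5, lam2=0.8, lam12=0.3, shape=1.5)
    ```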

  21. Probabilistic Analysis for Comparing Fatigue Data Based on Johnson-Weibull Parameters

    NASA Technical Reports Server (NTRS)

    Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.

    2013-01-01

    Leonard Johnson published a methodology for establishing the confidence that two populations of data are different. Johnson's methodology depends on limited combinations of test parameters (Weibull slope, mean life ratio, and degrees of freedom) and a set of complex mathematical equations. In this report, a simplified algebraic equation for confidence numbers is derived based on the original work of Johnson. The confidence numbers calculated with this equation are compared to those obtained graphically by Johnson. Using the ratios of mean life, the resultant values of confidence numbers at the 99 percent level deviate less than 1 percent from those of Johnson. At a 90 percent confidence level, the calculated values differ by between +2 and -4 percent. The simplified equation is used to rank the experimental lives of three aluminum alloys (AL 2024, AL 6061, and AL 7075), each tested at three stress levels in rotating beam fatigue, analyzed using the Johnson-Weibull method, and compared to the ASTM Standard (E739-91) method of comparison. The ASTM Standard did not statistically distinguish between AL 6061 and AL 7075. However, it is possible to rank the fatigue lives of different materials with a reasonable degree of statistical certainty based on combined confidence numbers using the Johnson-Weibull analysis. AL 2024 was found to have the longest fatigue life, followed by AL 7075 and then AL 6061. The ASTM Standard and the Johnson-Weibull analysis result in the same stress-life exponent p for each of the three aluminum alloys at the median, or L(sub 50), lives.

  22. Weibull distribution function based on an empirical mathematical model for inactivation of Escherichia coli by pulsed electric fields.

    PubMed

    Rodrigo, D; Barbosa-Cánovas, G V; Martínez, A; Rodrigo, M

    2003-06-01

    The pulsed electric field inactivation kinetics of Escherichia coli suspended in orange juices with three different concentrations of carrot juice (0, 20, and 60%) was studied. Electric field strengths ranged from 25 to 40 kV/cm, and treatment times ranged from 40 to 340 μs. Experimental data were fitted to Bigelow, Hülsheger, and Weibull distribution functions, and the Weibull function provided the best fit (with the lowest mean square error). The dependency of each model's kinetic constant on electric field strength and carrot juice concentration was studied. A secondary model was developed to describe the relationship of Weibull parameters a and n to electric field strength and carrot juice concentration. An empirical mathematical model based on the Weibull distribution function, relating the natural logarithm of the survival fraction to treatment time, electric field strength, and carrot juice concentration, was developed. Parameters were estimated by nonlinear regression. The results of this study indicate that the error rate for the model's predictions was 6.5% and that the model was suitable for describing E. coli inactivation. PMID:12801001
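
    A minimal sketch of fitting the Weibull survival function referred to here, written in the log10 form common in inactivation kinetics (the data points are invented; the paper's parameters a and n play the roles of scale and shape):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def weibull_log_survival(t, a, n):
        """log10(N/N0) = -(t/a)**n with scale a (time units) and shape n."""
        return -((t / a) ** n)

    t_us = np.array([40.0, 80, 140, 200, 280, 340])          # treatment time, microseconds
    log_s = np.array([-0.4, -0.9, -1.5, -2.0, -2.6, -3.0])   # hypothetical log10 survival
    (a_hat, n_hat), _ = curve_fit(weibull_log_survival, t_us, log_s, p0=(100.0, 1.0))
    ```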

  23. Mixture and non-mixture cure fraction models based on the generalized modified Weibull distribution with an application to gastric cancer data.

    PubMed

    Martinez, Edson Z; Achcar, Jorge A; Jácome, Alexandre A A; Santos, José S

    2013-12-01

    Cure fraction models are usually used to model lifetime data with long-term survivors. In the present article, we introduce a Bayesian analysis of the four-parameter generalized modified Weibull (GMW) distribution in the presence of cure fraction, censored data, and covariates. In order to include the proportion of "cured" patients, mixture and non-mixture formulations of the model are considered. To demonstrate the ability of this model in the analysis of real data, we consider an application to data from patients with gastric adenocarcinoma. Inferences are obtained by using MCMC (Markov chain Monte Carlo) methods. PMID:24008248

  24. Accurate bearing remaining useful life prediction based on Weibull distribution and artificial neural network

    NASA Astrophysics Data System (ADS)

    Ben Ali, Jaouher; Chebel-Morello, Brigitte; Saidi, Lotfi; Malinowski, Simon; Fnaiech, Farhat

    2015-05-01

    Accurate remaining useful life (RUL) prediction of critical assets is an important challenge in condition-based maintenance, improving reliability and decreasing breakdowns and maintenance costs. Bearings are among the most important industrial components that need to be monitored, and their RUL should be predicted. The challenge of this study is to propose an original feature able to evaluate the health state of bearings and to estimate their RUL using Prognostics and Health Management (PHM) techniques. The proposed method follows the data-driven prognostic approach, exploring the combination of a Simplified Fuzzy Adaptive Resonance Theory Map (SFAM) neural network and the Weibull distribution (WD). The WD is used only in the training phase, to fit the measurements and to avoid areas of fluctuation in the time domain. SFAM training is based on fitted measurements at the present and previous inspection time points as input, whereas SFAM testing is based on real measurements at the present and previous inspections. Thanks to its fuzzy learning process, SFAM has a strong ability to learn nonlinear time series. As output, seven classes are defined: a healthy bearing and six states of bearing degradation. To find the optimal RUL prediction, a smoothing phase is also proposed. Experimental results show that the proposed method can reliably predict the RUL of rolling element bearings (REBs) based on vibration signals, and the prediction approach can be applied to the prognostics of various other mechanical assets.

  25. Improvement in mechanical properties of jute fibres through mild alkali treatment as demonstrated by utilisation of the Weibull distribution model.

    PubMed

    Roy, Aparna; Chakraborty, Sumit; Kundu, Sarada Prasad; Basak, Ratan Kumar; Majumder, Subhasish Basu; Adhikari, Basudam

    2012-03-01

    Chemically modified jute fibres are potentially useful as natural reinforcement in composite materials. Jute fibres were treated with 0.25%-1.0% sodium hydroxide (NaOH) solution for 0.5-48 h. The hydrophilicity, surface morphology, crystallinity index, and thermal and mechanical characteristics of untreated and alkali-treated fibres were studied. The two-parameter Weibull distribution model was applied to deal with the variation in mechanical properties of the natural fibres. Alkali treatment enhanced the tensile strength and elongation at break by 82% and 45%, respectively, but decreased the hydrophilicity by 50.5% and the diameter of the fibres by 37%. PMID:22209134

  26. A practical and systematic review of Weibull statistics for reporting strengths of dental materials

    PubMed Central

    Quinn, George D.; Quinn, Janet B.

    2011-01-01

    Objectives To review the history, theory and current applications of Weibull analyses sufficient to make informed decisions regarding practical use of the analysis in dental material strength testing. Data References are made to examples in the engineering and dental literature, but this paper also includes illustrative analyses of Weibull plots, fractographic interpretations, and Weibull distribution parameters obtained for a dense alumina, two feldspathic porcelains, and a zirconia. Sources Informational sources include Weibull's original articles, later articles specific to applications and theoretical foundations of Weibull analysis, texts on statistics and fracture mechanics and the international standards literature. Study Selection The chosen Weibull analyses are used to illustrate technique, the importance of flaw size distributions, physical meaning of Weibull parameters and concepts of “equivalent volumes” to compare measured strengths obtained from different test configurations. Conclusions Weibull analysis has a strong theoretical basis and can be of particular value in dental applications, primarily because of test specimen size limitations and the use of different test configurations. Also endemic to dental materials, however, is increased difficulty in satisfying application requirements, such as confirming fracture origin type and diligence in obtaining quality strength data. PMID:19945745
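
    The “equivalent volumes” idea rests on weakest-link size scaling; in its standard form (symbols as usually defined, not taken from this paper):

    ```latex
    % Weakest-link failure probability for volume V under uniform stress \sigma:
    P_f(\sigma, V) = 1 - \exp\!\left[-\frac{V}{V_0}\left(\frac{\sigma}{\sigma_0}\right)^{m}\right]
    % At equal failure probability, strengths from two effective volumes relate as:
    \frac{\sigma_1}{\sigma_2} = \left(\frac{V_2}{V_1}\right)^{1/m}
    ```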

  27. Reliability growth modeling analysis of the space shuttle main engines based upon the Weibull process

    NASA Technical Reports Server (NTRS)

    Wheeler, J. T.

    1990-01-01

    The Weibull process, identified as the inhomogeneous Poisson process with a Weibull intensity function, is used to model the reliability growth assessment of the space shuttle main engine test and flight failure data. Additional tables of percentage-point probabilities for several values of the confidence coefficient have been generated for setting (1-alpha)100-percent two-sided confidence interval estimates on the mean time between failures. The tabled data pertain to two cases: (1) time-terminated testing and (2) failure-terminated testing. The critical values of three test statistics, namely Cramer-von Mises, Kolmogorov-Smirnov, and chi-square, were calculated and tabled for use in goodness-of-fit tests for the engine reliability data. Numerical results are presented for five different groupings of the engine data that reflect the actual response to the failures.
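
    For the power-law intensity named here, closed-form maximum-likelihood estimates exist; a sketch for the time-terminated case with illustrative failure times (standard Crow/AMSAA-type formulas, not the report's tabled values):

    ```python
    import numpy as np

    def weibull_process_mle(failure_times, T):
        """MLE for the NHPP intensity lambda(t) = (beta/theta)*(t/theta)**(beta-1),
        testing time-terminated at T: beta = n / sum(ln(T/t_i)), theta = T/n**(1/beta)."""
        t = np.asarray(failure_times, float)
        n = len(t)
        beta = n / np.log(T / t).sum()
        theta = T / n ** (1.0 / beta)
        return beta, theta

    beta_hat, theta_hat = weibull_process_mle([35.0, 110, 250, 540, 870], T=1000.0)
    ```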

  28. A comparative study of mixed exponential and Weibull distributions in a stochastic model replicating a tropical rainfall process

    NASA Astrophysics Data System (ADS)

    Abas, Norzaida; Daud, Zalina M.; Yusof, Fadhilah

    2014-11-01

    A stochastic rainfall model is presented for the generation of hourly rainfall data in an urban area in Malaysia. In view of the high temporal and spatial variability of rainfall within the tropical rain belt, the Spatial-Temporal Neyman-Scott Rectangular Pulse model was used. The model, which is governed by the Neyman-Scott process, employs a reasonable number of parameters to represent the physical attributes of rainfall. A common approach is to attach each attribute to a mathematical distribution. With respect to rain cell intensity, this study proposes the use of a mixed exponential distribution. The performance of the proposed model was compared to a model that employs the Weibull distribution. Hourly and daily rainfall data from four stations in the Damansara River basin in Malaysia were used as input to the models, and simulations of hourly series were performed for an independent site within the basin. The performance of the models was assessed based on how closely the statistical characteristics of the simulated series resembled the statistics of the observed series. The findings obtained based on graphical representation revealed that the statistical characteristics of the simulated series for both models compared reasonably well with the observed series. However, a further assessment using the AIC, BIC and RMSE showed that the proposed model yields better results. The results of this study indicate that for tropical climates, the proposed model, using a mixed exponential distribution, is the best choice for generation of synthetic data for ungauged sites or for sites with insufficient data within the limit of the fitted region.

  29. Prediction of the curing time to achieve maturity of the nano-cement based concrete using the Weibull distribution model: A complementary data set

    PubMed Central

    Jo, Byung Wan; Chakraborty, Sumit; Kim, Heon

    2015-01-01

    This data article provides comparison data for nano-cement based concrete (NCC) and ordinary Portland cement based concrete (OPCC). Concrete samples (OPCC) were fabricated using ten different mix designs, and their characterization data are provided here. The curing time was optimized using the Weibull distribution model by analyzing the rate of change of the compressive strength of the OPCC. Initially, the compressive strength of the OPCC samples was measured after completion of four desired curing times. Thereafter, the curing time required to achieve a particular rate of change of compressive strength was predicted using the equation derived from the variation of the rate of change of compressive strength with curing time, prior to the optimization of the curing time (at the 99.99% confidence level) using the Weibull distribution model. This data article complements the research article entitled "Prediction of the curing time to achieve maturity of the nano-cement based concrete using the Weibull distribution model" [1]. PMID:26217804

  30. SER performance analysis of MPPM FSO system with three decision thresholds over exponentiated Weibull fading channels

    NASA Astrophysics Data System (ADS)

    Wang, Ping; Yang, Bensheng; Guo, Lixin; Shang, Tao

    2015-11-01

    In this work, the symbol error rate (SER) performance of the multiple pulse position modulation (MPPM) based free-space optical communication (FSO) system with three different decision thresholds, fixed decision threshold (FDT), optimized decision threshold (ODT) and dynamic decision threshold (DDT) over exponentiated Weibull (EW) fading channels has been investigated in detail. The effects of aperture averaging on each decision threshold under weak-to-strong turbulence conditions are further studied and compared. The closed-form SER expressions for three thresholds derived with the help of generalized Gauss-Laguerre quadrature rule are verified by the Monte Carlo simulations. This work is helpful for the design of receivers for FSO communication systems.

  31. Weibull-Based Design Methodology for Rotating Aircraft Engine Structures

    NASA Technical Reports Server (NTRS)

    Zaretsky, Erwin; Hendricks, Robert C.; Soditus, Sherry

    2002-01-01

    The NASA Energy Efficient Engine (E(sup 3)-Engine) is used as the basis of a Weibull-based life and reliability analysis. Each component's life, and thus the engine's life, is defined by high-cycle fatigue (HCF) or low-cycle fatigue (LCF). Knowing the cumulative life distribution of each of the components making up the engine, as represented by a Weibull slope, is a prerequisite to predicting the life and reliability of the entire engine. As the engine Weibull slope increases, the predicted lives decrease. The predicted engine lives L(sub 5) (95 percent probability of survival) of approximately 17,000 and 32,000 hr do correlate with current engine maintenance practices without and with refurbishment, respectively. The individual high pressure turbine (HPT) blade lives necessary to obtain a blade system life L(sub 0.1) (99.9 percent probability of survival) of 9000 hr for Weibull slopes of 3, 6, and 9 are 47,391, 20,652, and 15,658 hr, respectively. For a design life of the HPT disks having probable points of failure equal to or greater than 36,000 hr at a probability of survival of 99.9 percent, the predicted disk system life L(sub 0.1) can vary from 9,408 to 24,911 hr.
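
    The engine-level numbers come from combining component Weibulls in series; a sketch under the usual independence assumption (the component parameters below are invented, not E(sup 3) values):

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def series_system_life(etas, betas, survival):
        """Time t at which a series system of independent Weibull components
        reaches the given survival: exp(-sum((t/eta_i)**beta_i)) = survival."""
        def g(t):
            return sum((t / e) ** b for e, b in zip(etas, betas)) + np.log(survival)
        return brentq(g, 1e-9, 1e9)

    # e.g. L0.1 life (99.9 percent survival) of three hypothetical components:
    t_l01 = series_system_life([60000.0, 45000.0, 80000.0], [3.0, 6.0, 9.0], 0.999)
    ```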

  32. Calculation of Weibull strength parameters and Batdorf flaw-density constants for volume- and surface-flaw-induced fracture in ceramics

    NASA Technical Reports Server (NTRS)

    Pai, Shantaram S.; Gyekenyesi, John P.

    1989-01-01

    The calculation of the shape and scale parameters of the two-parameter Weibull distribution is described using the least-squares analysis and maximum likelihood methods for volume- and surface-flaw-induced fracture in ceramics with complete and censored samples. Detailed procedures are given for evaluating 90 percent confidence intervals for maximum likelihood estimates of shape and scale parameters, the unbiased estimates of the shape parameters, and the Weibull mean values and corresponding standard deviations. Furthermore, the necessary steps are described for detecting outliers and for calculating the Kolmogorov-Smirnov and Anderson-Darling goodness-of-fit statistics and 90 percent confidence bands about the Weibull distribution. It is also shown how to calculate the Batdorf flaw-density constants by using the Weibull distribution statistical parameters. The techniques described were verified with several example problems from the open literature and were coded in the Structural Ceramics Analysis and Reliability Evaluation (SCARE) design program.

  33. EVALUATION OF SPRING OPERATED RELIEF VALVE MAINTENANCE INTERVALS AND EXTENSION OF MAINTENANCE TIMES USING A WEIBULL ANALYSIS WITH MODIFIED BAYESIAN UPDATING

    SciTech Connect

    Harris, S.; Gross, R.; Mitchell, E.

    2011-01-18

    The Savannah River Site (SRS) spring operated pressure relief valve (SORV) maintenance intervals were evaluated using an approach provided by the American Petroleum Institute (API RP 581) for risk-based inspection technology (RBI). In addition, the impact of extending the inspection schedule was evaluated using Monte Carlo Simulation (MCS). The API RP 581 approach is characterized as a Weibull analysis with modified Bayesian updating provided by SRS SORV proof testing experience. Initial Weibull parameter estimates were updated as per SRS's historical proof test records contained in the Center for Chemical Process Safety (CCPS) Process Equipment Reliability Database (PERD). The API RP 581 methodology was used to estimate the SORV's probability of failing on demand (PFD), and the annual expected risk. The API RP 581 methodology indicates that the current SRS maintenance plan is conservative. Cost savings may be attained in certain mild service applications that present low PFD and overall risk. Current practices are reviewed and recommendations are made for extending inspection intervals. The paper gives an illustration of the inspection costs versus the associated risks by using API RP 581 Risk Based Inspection (RBI) Technology. A cost effective maintenance frequency balancing both financial risk and inspection cost is demonstrated.

  34. Finite-size effects on return interval distributions for weakest-link-scaling systems.

    PubMed

    Hristopulos, Dionissios T; Petrakis, Manolis P; Kaniadakis, Giorgio

    2014-05-01

    The Weibull distribution is a commonly used model for the strength of brittle materials and earthquake return intervals. Deviations from Weibull scaling, however, have been observed in earthquake return intervals and the fracture strength of quasibrittle materials. We investigate weakest-link scaling in finite-size systems and deviations of empirical return interval distributions from the Weibull distribution function. Our analysis employs the ansatz that the survival probability function of a system with complex interactions among its units can be expressed as the product of the survival probability functions for an ensemble of representative volume elements (RVEs). We show that if the system comprises a finite number of RVEs, it obeys the κ-Weibull distribution. The upper tail of the κ-Weibull distribution declines as a power law, in contrast with Weibull scaling. The hazard rate function of the κ-Weibull distribution decreases linearly after a waiting time τc ∝ n^(1/m), where m is the Weibull modulus and n is the system size in terms of representative volume elements. We conduct statistical analysis of experimental data and simulations which show that the κ-Weibull provides competitive fits to the return interval distributions of seismic data and of avalanches in a fiber bundle model. In conclusion, using theoretical and statistical analysis of real and simulated data, we demonstrate that the κ-Weibull distribution is a useful model for extreme-event return intervals in finite-size systems. PMID:25353774
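
    For reference, the κ-Weibull survival function discussed here is usually written with the Kaniadakis κ-exponential (standard form; the paper should be consulted for its exact notation):

    ```latex
    % Kaniadakis \kappa-exponential:
    \exp_\kappa(x) = \left(\sqrt{1+\kappa^2 x^2} + \kappa x\right)^{1/\kappa}
    % \kappa-Weibull survival function (Weibull modulus m, scale \lambda):
    S(t) = \exp_\kappa\!\left[-(t/\lambda)^m\right],
    \qquad S(t) \sim t^{-m/\kappa} \ \text{as } t \to \infty .
    ```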

  35. A Weibull characterization for tensile fracture of multicomponent brittle fibers

    NASA Technical Reports Server (NTRS)

    Barrows, R. G.

    1977-01-01

    A statistical characterization for multicomponent brittle fibers is presented. The method, an extension of the usual Weibull distribution procedures, statistically considers the components making up a fiber (e.g., substrate, sheath, and surface) both as separate entities and together as a fiber. Tensile data for a silicon carbide fiber and for an experimental carbon-boron alloy fiber are evaluated in terms of the proposed multicomponent Weibull characterization.

  36. Study of the use of the Weibull distribution and compliance of the reliability of lead-free solder in accelerated thermal cycling through varying solder volume, pitch, and thermal range with previously established materials models

    NASA Astrophysics Data System (ADS)

    Berger, Barry Nathaniel

    The research presented addresses the reliability of lead-free solder in accelerated thermal cycling, reevaluating the use of the two-parameter Weibull distribution and seeking a better understanding of the effects of changing solder joint volume, distance to neutral point, and thermal cycling temperature range. 10-mil, 12-mil, and 16-mil diameter Sn96.5Ag3.0Cu0.5 solder volumes are compared, along with five distances to neutral point and three thermal cycling ranges, to find their relationships and reliability during thermal cycling. The results of statistical tests and post-failure analyses are compared to popular reliability models to show discrepancies between them and the experimental results. These comparisons demonstrate that many studies do not use the Weibull distribution correctly, as they do not include the shape parameter as an integral part of their analyses. 10-mil components perform best at equivalent strain ranges; 12-mil components undergo an unusual shift in reliability depending on the strain; 16-mil components follow normal trends.

  37. A Weibull statistics-based lignocellulose saccharification model and a built-in parameter accurately predict lignocellulose hydrolysis performance.

    PubMed

    Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu

    2015-09-01

    Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrates, enzymes, and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and of experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and analysis of the glucose production levels when the λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior, and the λ parameter can be used to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and the λ value in saccharification performance assessment are discussed. PMID:26121186

  38. Non-parametric texture defect detection using Weibull features

    E-print Network

    Timm, Fabian; Barth, Erhardt

    A non-parametric approach for defect detection in textures that employs only two features computed from Weibull statistics. Keywords: defect detection, texture images, Weibull distribution, optical inspection, non-parametric models.

  39. Calculation of Weibull strength parameters and Batdorf flaw-density constants for volume- and surface-flaw-induced fracture in ceramics

    NASA Technical Reports Server (NTRS)

    Pai, Shantaram S.; Gyekenyesi, John P.

    1988-01-01

    The calculation of the shape and scale parameters of the two-parameter Weibull distribution is described using the least-squares analysis and maximum likelihood methods for volume- and surface-flaw-induced fracture in ceramics with complete and censored samples. Detailed procedures are given for evaluating 90 percent confidence intervals for maximum likelihood estimates of shape and scale parameters, the unbiased estimates of the shape parameters, and the Weibull mean values and corresponding standard deviations. Furthermore, the necessary steps are described for detecting outliers and for calculating the Kolmogorov-Smirnov and Anderson-Darling goodness-of-fit statistics and 90 percent confidence bands about the Weibull distribution. It is also shown how to calculate the Batdorf flaw-density constants by using the Weibull distribution statistical parameters. The techniques described were verified with several example problems from the open literature and were coded in the Structural Ceramics Analysis and Reliability Evaluation (SCARE) design program.

  40. Empirical model based on Weibull distribution describing the destruction kinetics of natural microbiota in pineapple (Ananas comosus L.) puree during high-pressure processing.

    PubMed

    Chakraborty, Snehasis; Rao, Pavuluri Srinivasa; Mishra, Hari Niwas

    2015-10-15

    High pressure inactivation of natural microbiota, viz. aerobic mesophiles (AM), psychrotrophs (PC), yeasts and molds (YM), total coliforms (TC), and lactic acid bacteria (LAB), in pineapple puree was studied within the experimental domain of 0.1-600 MPa and 30-50 °C with treatment times up to 20 min. A complete destruction of yeasts and molds was obtained at 500 MPa/50 °C/15 min, whereas no counts were detected for TC and LAB at 300 MPa/30 °C/15 min. A maximum of two log cycles of reduction was obtained for YM during pulse pressurization at the severe process intensity of 600 MPa/50 °C/20 min. The Weibull model clearly described the non-linearity of the survival curves during the isobaric period. A tailing effect, as confirmed by the shape parameter (β) of the survival curve, was obtained for YM (β<1), whereas a shouldering effect (β>1) was observed for the other microbial groups. Analogous to thermal death kinetics, the activation energy (Ea, kJ·mol(-1)) and the activation volume (Va, mL·mol(-1)) values were computed to describe the temperature and pressure dependencies, respectively, of the scale parameter (δ, min). A higher δ value was obtained for each microbe at a lower temperature, and it decreased with an increase in pressure. A secondary kinetic model was developed describing the inactivation rate (k, min(-1)) as a function of pressure (P, MPa) and temperature (T, K), including the dependencies of Ea and Va on P and T, respectively. PMID:26202323

  41. Impact of Three-Parameter Weibull Models in Probabilistic Assessment of Earthquake Hazards

    NASA Astrophysics Data System (ADS)

    Pasari, Sumanta; Dikshit, Onkar

    2014-07-01

    This paper investigates the suitability of a three-parameter (scale, shape, and location) Weibull distribution in probabilistic assessment of earthquake hazards. Its performance is compared with two other popular models from the same Weibull family, namely the two-parameter Weibull model and the inverse Weibull model. A complete and homogeneous earthquake catalog (Yadav et al. in Pure Appl Geophys 167:1331-1342, 2010) of 20 events (M ≥ 7.0), spanning the period 1846 to 1995, from north-east India and its surrounding region (20°-32°N and 87°-100°E) is used in this study. The model parameters are initially estimated from graphical plots and later confirmed by statistical estimation such as maximum likelihood estimation (MLE) and the method of moments (MoM). The asymptotic variance-covariance matrix of the MLE-estimated parameters is further calculated on the basis of the Fisher information matrix (FIM). The model suitability is appraised using different statistical goodness-of-fit tests. For the study area, the estimated conditional probability of an earthquake within a decade comes out to be very high (≈0.90) for an elapsed time of 18 years (i.e., 2013). The study also reveals that the location parameter provides more flexibility to the three-parameter Weibull model in comparison with the two-parameter Weibull model. It is therefore suggested that the three-parameter Weibull model has high importance in empirical modeling of earthquake recurrence and seismic hazard assessment.
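
    The conditional-probability figure quoted here follows from the fitted distribution in the standard way; a sketch with placeholder parameter values (not the paper's estimates):

    ```python
    from scipy.stats import weibull_min

    def conditional_probability(elapsed, window, shape, loc, scale):
        """P(event in (t, t+window] | quiet through t) = 1 - S(t+window)/S(t)
        for a three-parameter Weibull recurrence model."""
        s = weibull_min.sf
        return 1.0 - (s(elapsed + window, shape, loc=loc, scale=scale)
                      / s(elapsed, shape, loc=loc, scale=scale))

    p = conditional_probability(elapsed=18.0, window=10.0, shape=1.8, loc=5.0, scale=20.0)
    ```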

  42. Calculation of Weibull strength parameters, Batdorf flaw density constants and related statistical quantities using PC-CARES

    NASA Technical Reports Server (NTRS)

    Szatmary, Steven A.; Gyekenyesi, John P.; Nemeth, Noel N.

    1990-01-01

    This manual describes the operation and theory of the PC-CARES (Personal Computer-Ceramic Analysis and Reliability Evaluation of Structures) computer program for the IBM PC and compatibles running the PC-DOS/MS-DOS or IBM/MS-OS/2 (version 1.1 or higher) operating systems. The primary purpose of this code is to estimate Weibull material strength parameters, the Batdorf crack density coefficient, and other related statistical quantities. The manual describes the calculation of the shape and scale parameters of the two-parameter Weibull distribution using the least-squares analysis and maximum likelihood methods for volume- and surface-flaw-induced fracture in ceramics with complete and censored samples. The methods for detecting outliers and for calculating the Kolmogorov-Smirnov and Anderson-Darling goodness-of-fit statistics and 90 percent confidence bands about the Weibull line, as well as the techniques for calculating the Batdorf flaw-density constants, are also described.

  43. Balanced condition in networks leads to Weibull statistics

    E-print Network

    Jalan, Sarika

    2013-01-01

    The importance of the balance between inhibitory and excitatory couplings in the brain has increasingly been realized. Despite the key role played by inhibitory-excitatory couplings in the functioning of brain networks, the impact of a balanced condition on the stability properties of the underlying networks remains largely unknown. We investigate properties of the largest eigenvalues of networks having such couplings, and find that they follow completely different statistics in the balanced situation. Based on numerical simulations, we demonstrate that the transition from Weibull to Fréchet via the Gumbel distribution can be controlled by the variance of the column sums of the adjacency matrix, which depends monotonically on the denseness of the underlying network. As a balanced condition is imposed, the largest real part of the eigenvalue emulates a transition to the generalized extreme value statistics, independent of the inhibitory connection probability. Furthermore, the transition to the Weibull statistics...

  44. Distributed analysis in CMS

    SciTech Connect

    Fanfani, Alessandra; Afaq, Anzar; Sanches, Jose Afonso; Andreeva, Julia; Bagliesi, Giuseppe; Bauerdick, Lothar; Belforte, Stefano; Bittencourt Sampaio, Patricia; Bloom, Ken; Blumenfeld, Barry; Bonacorsi, Daniele; /INFN, Bologna /Bologna U. /Rutherford

    2010-03-20

    The CMS experiment expects to manage several petabytes of data each year during the LHC programme, distributing them over many computing sites around the world and enabling data access at those centers for analysis. CMS has identified the distributed sites as the primary location for physics analysis, to support a wide community of thousands of potential users. This represents an unprecedented experimental challenge in terms of the scale of distributed computing resources and the number of users. An overview of the computing architecture, the software tools, and the distributed infrastructure is reported. Summaries of the experience in establishing efficient and scalable operations in preparation for CMS distributed analysis are presented, followed by the user experience in current analysis activities.

  45. ATLAS reliability analysis

    SciTech Connect

    Bartsch, R.R.

    1995-09-01

    Key elements of the 36 MJ ATLAS capacitor bank have been evaluated for individual probabilities of failure. These have been combined to estimate system reliability which is to be greater than 95% on each experimental shot. This analysis utilizes Weibull or Weibull-like distributions with increasing probability of failure with the number of shots. For transmission line insulation, a minimum thickness is obtained and for the railgaps, a method for obtaining a maintenance interval from forthcoming life tests is suggested.

  46. Atlas Distributed Analysis Tools

    NASA Astrophysics Data System (ADS)

    de La Hoz, Santiago Gonzalez; Ruiz, Luis March; Liko, Dietrich

    2008-06-01

    The ATLAS production system has been successfully used to run production of simulation data at an unprecedented scale: up to 10000 jobs were processed in one day. The experience obtained operating the system on several grid flavours was essential for performing user analysis using grid resources. First tests of the distributed analysis system were then performed. In the preparation phase, data were registered in the LHC File Catalog (LFC) and replicated to external sites. For the main test, few resources were used. All these tests are only a first step towards the validation of the computing model. The ATLAS computing management board decided to integrate the collaboration's distributed-analysis efforts into a single project, GANGA. The goal is to test the reconstruction and analysis software in a large-scale data production using grid flavours at several sites. GANGA allows trivial switching between running test jobs on a local batch system and running large-scale analyses on the Grid; it provides job splitting and merging, and includes automated job monitoring and output retrieval.

  47. Modelling memory processes and Internet response times: Weibull or power-law?

    NASA Astrophysics Data System (ADS)

    Chessa, Antonio G.; Murre, Jaap M. J.

    2006-07-01

    The Weibull distribution is proposed as a model for response times. Theoretical support is offered by classical results for extreme-value distributions. Fits of the Weibull distribution to response time data in different contexts show that this distribution (and the exponential distribution on small time-scales) performs better than the often-suggested power-law and logarithmic functions. This study suggests that the power-law can be viewed as an approximation, at neural level, for the aggregate strength of superposed memory traces that have different decay rates in distinct parts of the brain. As we predict, this view does not find support at the level of induced response processes. The distinction between underlying and induced processes might also be considered in other fields, such as engineering, biology and physics.

  9. The distribution of first-passage times and durations in FOREX and future markets

    NASA Astrophysics Data System (ADS)

    Sazuka, Naoya; Inoue, Jun-ichi; Scalas, Enrico

    2009-07-01

    Possible distributions are discussed for intertrade durations and first-passage processes in financial markets. The viewpoint of renewal theory is adopted. In order to represent market data with relatively long durations, two types of distributions are used, namely a distribution derived from the Mittag-Leffler survival function and the Weibull distribution. For the Mittag-Leffler type distribution, the average waiting time (residual life time) is strongly dependent on the choice of a cut-off parameter tmax, whereas the results based on the Weibull distribution do not depend on such a cut-off. Therefore, a Weibull distribution is more convenient than a Mittag-Leffler type if one wishes to evaluate relevant statistics such as the average waiting time in financial markets with long durations. On the other hand, we find that the Gini index is rather independent of the cut-off parameter. Based on the above considerations, we propose a good candidate for describing the distribution of first-passage times in a market: the Weibull distribution with a power-law tail. This distribution bridges the gap between theoretical and empirical results more efficiently than a simple Weibull distribution. It should be stressed that a Weibull distribution with a power-law tail is more flexible than the Mittag-Leffler distribution, which itself can be approximated by a Weibull distribution and a power-law. Indeed, the key point is that in the former case there is freedom of choice for the exponent of the power-law attached to the Weibull distribution, which can exceed 1 in order to reproduce decays faster than is possible with a Mittag-Leffler distribution. We also give a useful formula to determine an optimal crossover point minimizing the difference between the empirical average waiting time and the one predicted from renewal theory. Moreover, we discuss the limitations of our distributions by applying them to the analysis of the BTP future and calculating the average waiting time. We find that our distribution is applicable as long as durations follow a Weibull law for short times and do not have too heavy a tail.

  10. An extension of Gompertzian growth dynamics: Weibull and Frechet models.

    PubMed

    Rocha, J Leonel; Aleixo, Sandra M

    2013-04-01

    In this work a new probabilistic and dynamical approach to an extension of the Gompertz law is proposed. A generalized family of probability density functions, designated by Beta*(p,q), which is proportional to the right-hand side of the Tsoularis-Wallace model, is studied. In particular, for p=2, the investigation is extended to the extreme value models of Weibull and Frechet type. These models, described by differential equations, are proportional to the hyper-Gompertz growth model. It is proved that the Beta*(2,q) densities are a power of betas mixture, and that their dynamics are determined by a non-linear coupling of probabilities. The dynamical analysis is performed using techniques of symbolic dynamics and the system complexity is measured using topological entropy. Generally, the natural history of a malignant tumour is reflected through bifurcation diagrams, in which regions of regression, stability, bifurcation, chaos and terminus are identified. PMID:23458306

  11. Collective Weibull behavior of social atoms: Application of the rank-ordering statistics to historical extreme events

    NASA Astrophysics Data System (ADS)

    Chen, Chien-Chih; Tseng, Chih-Yuan; Telesca, Luciano; Chi, Sung-Ching; Sun, Li-Chung

    2012-02-01

    Analogous to crustal earthquakes in natural fault systems, we here consider dynasty collapses as extreme events in human society. Duration data of ancient Chinese and Egyptian dynasties provide a good chance of exploring the collective behavior of the so-called social atoms. By means of the rank-ordering statistics, we demonstrate that the duration data of those ancient dynasties could be described with good accuracy by the Weibull distribution. It is thus remarkable that the distribution of time to failure of human society, i.e. the disorder of a historical dynasty, follows the widely accepted Weibull process, just as natural materials fail.

  12. Aerospace Applications of Weibull and Monte Carlo Simulation with Importance Sampling

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.

    1998-01-01

    Recent developments in reliability modeling and computer technology have made it practical to use the Weibull time to failure distribution to model the system reliability of complex fault-tolerant computer-based systems. These system models are becoming increasingly popular in space systems applications as a result of mounting data that support the decreasing Weibull failure distribution and the expectation of increased system reliability. This presentation introduces the new reliability modeling developments and demonstrates their application to a novel space system application. The application is a proposed guidance, navigation, and control (GN&C) system for use in a long duration manned spacecraft for a possible Mars mission. Comparisons to the constant failure rate model are presented and the ramifications of doing so are discussed.

  13. Incorporating finite element analysis into component life and reliability

    NASA Technical Reports Server (NTRS)

    August, Richard; Zaretsky, Erwin V.

    1991-01-01

    A method for calculating a component's design survivability by incorporating finite element analysis and probabilistic material properties was developed. The method evaluates design parameters through direct comparisons of component survivability expressed in terms of Weibull parameters. The analysis was applied to a rotating disk with mounting bolt holes. The highest probability of failure occurred at, or near, the maximum shear stress region of the bolt holes. Distribution of failure as a function of Weibull slope affects the probability of survival. Where Weibull parameters are unknown for a rotating disk, it may be permissible to assume Weibull parameters, as well as the stress-life exponent, in order to determine the disk speed where the probability of survival is highest.

  14. Effect of Individual Component Life Distribution on Engine Life Prediction

    NASA Technical Reports Server (NTRS)

    Zaretsky, Erwin V.; Hendricks, Robert C.; Soditus, Sherry M.

    2003-01-01

    The effect of individual engine component life distributions on engine life prediction was determined. A Weibull-based life and reliability analysis of the NASA Energy Efficient Engine was conducted. The engine's life at a 95 and 99.9 percent probability of survival was determined based upon the engine manufacturer's original life calculations and assumed values of each of the component's cumulative life distributions as represented by a Weibull slope. The lives of the high-pressure turbine (HPT) disks and blades were also evaluated individually and as a system in a similar manner. Knowing the statistical cumulative distribution of each engine component with reasonable engineering certainty is a condition precedent to predicting the life and reliability of an entire engine. The life of a system at a given reliability will be less than the lowest-lived component in the system at the same reliability (probability of survival). Where Weibull slopes of all the engine components are equal, the Weibull slope had a minimal effect on engine L0.1 life prediction. However, at a probability of survival of 95 percent (L5 life), life decreased with increasing Weibull slope.
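
    The abstract's central point, that a series system's life at a given reliability is below that of its lowest-lived component, can be illustrated with a short sketch. The component Weibull slopes and characteristic lives below are hypothetical placeholders, not the NASA Energy Efficient Engine values:

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical component Weibull parameters: (shape beta, characteristic life eta [h])
components = [(2.0, 30000.0),   # e.g. HPT blades
              (3.0, 45000.0),   # e.g. HPT disks
              (1.5, 60000.0)]   # e.g. remaining hot-section parts

def system_survival(t):
    """Series-system survival: product of the component Weibull survivals."""
    return np.prod([np.exp(-(t / eta) ** beta) for beta, eta in components])

def life_at(reliability):
    """Time at which the system survival equals the target reliability."""
    return brentq(lambda t: system_survival(t) - reliability, 1e-6, 1e6)

print(f"L5   (95%   survival): {life_at(0.95):9.0f} h")
print(f"L0.1 (99.9% survival): {life_at(0.999):9.0f} h")
```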

  15. Distributed and Collaborative Software Analysis

    NASA Astrophysics Data System (ADS)

    Ghezzi, Giacomo; Gall, Harald C.

    Throughout the years software engineers have come up with a myriad of specialized tools and techniques that focus on a certain type of software analysis, such as source code analysis, co-change analysis or bug prediction. However, easy and straightforward synergies between these analyses and tools rarely exist because of their stand-alone nature, their platform dependence, their different input and output formats and the variety of data to analyze. As a consequence, distributed and collaborative software analysis scenarios, and in particular interoperability, are severely limited. We describe a distributed and collaborative software analysis platform that allows for a seamless interoperability of software analysis tools across platform, geographical and organizational boundaries. We realize software analysis tools as services that can be accessed and composed over the Internet. These distributed analysis services shall be widely accessible in our incrementally augmented Software Analysis Broker, where organizations and tool providers can register and share their tools. To allow (semi-) automatic use and composition of these tools, they are classified and mapped into a software analysis taxonomy and adhere to specific meta-models and ontologies for their category of analysis.

  16. Additive noise, Weibull functions and the approximation of psychometric functions.

    PubMed

    Mortensen, U

    2002-09-01

    The Weibull function is frequently chosen to define psychometric functions. Tyler and Chen (Vis. Res. 40 (2000) 3121) criticised the high-threshold postulate implied by the Weibull function and argued that this function implies the assumption of multiplicative noise. It is shown in this paper that in fact the Weibull function is compatible with the assumption of additive noise, and that the Weibull function may be generalised to the case of detection not being high threshold. The derivations rest, however, on a representation of sensory activity lacking a satisfying degree of generality. Therefore, a more general representation of sensory activity in terms of stochastic processes is suggested, with detection being defined as a level-crossing process, containing the original representation as a special case. Two classes of stochastic processes are considered: one where the noise is assumed to be additive, stationary Gaussian, and another resulting from cascaded Poisson processes, representing a form of multiplicative noise. While Weibull functions turn out to approximate well the psychometric functions generated by both types of stochastic processes, it also becomes obvious that there is no simple interpretation of the parameters of the fitted Weibull functions. Moreover, in line with Tyler and Chen's discussion of the role of multiplicative noise, particular sources of this type of noise are considered and shown to be compatible with the Weibull function. It is indicated how multiplicative noise may be defined in general; however, it is argued that in the light of certain empirical data the role of this type of noise may be negligible in most detection tasks. PMID:12350425

  17. Moment series for moment estimators of the parameters of a Weibull density

    SciTech Connect

    Bowman, K.O.; Shenton, L.R.

    1982-01-01

    Taylor series for the first four moments of the coefficient of variation in sampling from a 2-parameter Weibull density are given; they are taken as far as the coefficient of n^-24. From these a four-moment approximating distribution is set up using summatory techniques on the series. The shape parameter is treated in a similar way, but here the moment equations are no longer explicit estimators, and terms only as far as those in n^-12 are given. The validity of assessed moments and percentiles of the approximating distributions is studied. Consideration is also given to properties of the moment estimator for 1/c.
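
    To make the moment-estimation setting concrete, here is a small sketch (not from the report) that solves the implicit moment equation for the Weibull shape parameter c from the sample coefficient of variation and then recovers the scale from the first moment; the bracketing interval for the root solve is an assumption:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma

def weibull_moment_estimates(x):
    """Moment estimators (c_hat, b_hat) for a 2-parameter Weibull density."""
    cv = x.std(ddof=1) / x.mean()          # sample coefficient of variation
    # CV of Weibull(c): sqrt(G(1+2/c) - G(1+1/c)^2) / G(1+1/c)
    def cv_of(c):
        g1, g2 = gamma(1 + 1 / c), gamma(1 + 2 / c)
        return np.sqrt(g2 - g1**2) / g1
    c_hat = brentq(lambda c: cv_of(c) - cv, 0.1, 50.0)   # implicit moment equation
    b_hat = x.mean() / gamma(1 + 1 / c_hat)              # scale from the first moment
    return c_hat, b_hat

x = np.random.default_rng(1).weibull(2.5, size=500) * 3.0
print(weibull_moment_estimates(x))
```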

  18. Distributed data analysis in LHCb

    NASA Astrophysics Data System (ADS)

    Paterson, S. K.; Maier, A.

    2008-07-01

    The LHCb distributed data analysis system consists of the Ganga job submission front-end and the DIRAC Workload and Data Management System (WMS). Ganga is jointly developed with ATLAS and allows LHCb users to submit jobs to several backends including: several batch systems, LCG and DIRAC. The DIRAC API provides a transparent and secure way for users to run jobs on the Grid and is the default mode of submission for the LHCb Virtual Organisation (VO). This is exploited by Ganga to perform distributed user analysis for LHCb. This system provides LHCb with a consistent, efficient and simple user experience in a variety of heterogeneous environments and facilitates the incremental development of user analysis from local test jobs to the Worldwide LHC Computing Grid. With a steadily increasing number of users, the LHCb distributed analysis system has been tuned and enhanced over the past two years. This paper will describe the recent developments to support distributed data analysis for the LHCb experiment on WLCG.

  19. On the q-type distributions

    NASA Astrophysics Data System (ADS)

    Nadarajah, Saralees; Kotz, Samuel

    2007-04-01

    Various q-type distributions have appeared in the physics literature in recent years; see e.g. L.C. Malacarne, R.S. Mendes, E.K. Lenzi, q-exponential distribution in urban agglomeration, Phys. Rev. E 65 (2002) 017106; S.M.D. Queiros, On a possible dynamical scenario leading to a generalised Gamma distribution, xxx.lanl.gov-physics/0411111; U.M.S. Costa, V.N. Freire, L.C. Malacarne, R.S. Mendes, S. Picoli Jr., E.A. de Vasconcelos, E.F. da Silva Jr., An improved description of the dielectric breakdown in oxides based on a generalized Weibull distribution, Physica A 361 (2006) 215; S. Picoli Jr., R.S. Mendes, L.C. Malacarne, q-exponential, Weibull, and q-Weibull distributions: an empirical analysis, Physica A 324 (2003) 678-688; A.M.C. de Souza, C. Tsallis, Student's t- and r-distributions: unified derivation from an entropic variational principle, Physica A 236 (1997) 52-57. It is pointed out in the paper that many of these are the same as, or particular cases of, distributions long known in the statistics literature. Several of these statistical distributions are discussed and references provided. We feel that this paper could be of assistance for modeling problems of the type considered by the authors cited above and others.

  20. Application of Weibull Criterion to failure prediction in composites

    SciTech Connect

    Cain, W. D.; Knight, Jr., C. E.

    1981-04-20

    Fiber-reinforced composite materials are being widely used in engineered structures. This report examines how the general form of the Weibull Criterion, including the evaluation of the parameters and the scaling of the parameter values, can be used for the prediction of component failure.
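
    The scaling of Weibull parameter values mentioned here usually refers to the weakest-link size effect, where characteristic strength scales with the stressed volume; this interpretation, and the numbers below, are illustrative assumptions rather than the report's procedure. A minimal sketch (m denotes the Weibull modulus):

```python
def scaled_strength(sigma1, v1, v2, m):
    """Weakest-link Weibull scaling of characteristic strength with volume:
    sigma2 = sigma1 * (v1 / v2) ** (1 / m)."""
    return sigma1 * (v1 / v2) ** (1.0 / m)

# Hypothetical coupon-to-component scaling: 10 cm^3 coupon, 500 cm^3 part, m = 12
print(scaled_strength(sigma1=900.0, v1=10.0, v2=500.0, m=12.0))  # strength in MPa
```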

  1. Distributed data analysis in ATLAS

    NASA Astrophysics Data System (ADS)

    Nilsson, Paul; Atlas Collaboration

    2012-12-01

    Data analysis using grid resources is one of the fundamental challenges to be addressed before the start of LHC data taking. The ATLAS detector will produce petabytes of data per year, and roughly one thousand users will need to run physics analyses on this data. Appropriate user interfaces and helper applications have been made available to ensure that the grid resources can be used without requiring expertise in grid technology. These tools enlarge the number of grid users from a few production administrators to potentially all participating physicists. ATLAS makes use of three grid infrastructures for the distributed analysis: the EGEE sites, the Open Science Grid, and NorduGrid. These grids are managed by the gLite workload management system, the PanDA workload management system, and ARC middleware; many sites can be accessed via both the gLite WMS and PanDA. Users can choose between two front-end tools to access the distributed resources. Ganga is a tool co-developed with LHCb to provide a common interface to the multitude of execution backends (local, batch, and grid). The PanDA workload management system provides a set of utilities called PanDA Client; with these tools users can easily submit Athena analysis jobs to the PanDA-managed resources. Distributed data is managed by Don Quijote 2, a system developed by ATLAS; DQ2 is used to replicate datasets according to the data distribution policies and maintains a central catalog of file locations. The operation of the grid resources is continually monitored by the GangaRobot functional testing system, and infrequent site stress tests are performed using the HammerCloud system. In addition, the DAST shift team is a group of power users who take shifts to provide distributed analysis user support; this team has effectively relieved the burden of support from the developers.

  2. Spatial and Temporal Patterns of Global Onshore Wind Speed Distribution

    SciTech Connect

    Zhou, Yuyu; Smith, Steven J.

    2013-09-09

    Wind power, a renewable energy source, can play an important role in electrical energy generation. Information regarding wind energy potential is important both for energy related modeling and for decision-making in the policy community. While wind speed datasets with high spatial and temporal resolution are often ultimately used for detailed planning, simpler assumptions are often used in analysis work. An accurate representation of the wind speed frequency distribution is needed in order to properly characterize wind energy potential. Using a power density method, this study estimated global variation in wind parameters as fitted to a Weibull density function using NCEP/CFSR reanalysis data. The estimated Weibull distribution performs well in fitting the time series wind speed data at the global level according to R2, root mean square error, and power density error. The spatial, decadal, and seasonal patterns of wind speed distribution were then evaluated. We also analyzed the potential error in wind power estimation when a commonly assumed Rayleigh distribution (Weibull k = 2) is used. We find that the assumption of the same Weibull parameter across large regions can result in substantial errors. While large-scale wind speed data is often presented in the form of average wind speeds, these results highlight the need to also provide information on the wind speed distribution.
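
    A minimal illustration of the kind of fit and error analysis described above, using synthetic hourly wind speeds in place of the NCEP/CFSR reanalysis data (the shape and scale values are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
v = stats.weibull_min.rvs(1.8, scale=7.0, size=8760, random_state=rng)  # hourly speeds [m/s]

# Fit a 2-parameter Weibull (location fixed at zero) to the speed record.
k, _, c = stats.weibull_min.fit(v, floc=0)

# Mean wind power density is proportional to the mean cubed speed.
pd_data = np.mean(v**3)

# Rayleigh assumption: k = 2 with the scale chosen to match the mean speed.
c_ray = v.mean() / stats.weibull_min.mean(2.0, scale=1.0)
pd_rayleigh = stats.weibull_min.moment(3, 2.0, scale=c_ray)

print(f"fitted k = {k:.2f}, c = {c:.2f} m/s")
print(f"power density error of Rayleigh assumption: "
      f"{100 * (pd_rayleigh - pd_data) / pd_data:+.1f}%")
```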

  3. Weibull Statistics for Upper Ocean Currents with the Fokker-Planck Equation

    NASA Astrophysics Data System (ADS)

    Chu, P. C.

    2012-12-01

    Upper oceans typically exhibit a surface mixed layer with a thickness of a few to several hundred meters. This mixed layer is a key component in studies of climate, biological productivity and marine pollution. It is the link between the atmosphere and the deep ocean and directly affects the air-sea exchange of heat, momentum and gases. Vertically averaged horizontal currents across the mixed layer are driven by the residual between the Ekman transport and surface wind stress, and damped by the Rayleigh friction. A set of stochastic differential equations are derived for the two components of the current vector (u, v). The joint probability distribution function of (u, v) satisfies the Fokker-Planck equation (Chu, 2008, 2009), with the Weibull distribution as the solution for the current speed. To verify this, the PDF of the upper (0-50 m) tropical Pacific current speeds (w) was calculated from hourly ADCP data (1990-2007) at six stations of the Tropical Atmosphere Ocean project. It satisfies the two-parameter Weibull distribution reasonably well, with different characteristics between El Nino and La Nina events: in the western Pacific, the PDF of w has a larger peakedness during the La Nina events than during the El Nino events, and vice versa in the eastern Pacific. However, the PDF of w for the lower layer (100-200 m) does not fit the Weibull distribution as well as the upper layer. This is due to the different stochastic differential equations between upper and lower layers in the tropical Pacific. For the upper layer, the stochastic differential equations, established on the basis of the Ekman dynamics, have an analytical solution, i.e., the Rayleigh distribution (simplest form of the Weibull distribution), for constant eddy viscosity K. Besides, the Weibull distribution is also identified in the near-real-time ocean surface currents derived from satellite altimeter (JASON-1, GFO, ENVISAT) and scatterometer (QSCAT) data on a 1° × 1° resolution for the world oceans (60°S to 60°N), the "Ocean Surface Current Analyses - Real Time (OSCAR)". Such a PDF has little seasonal and interannual variation. Knowledge of the PDF of w during the El Nino and La Nina events will improve the ensemble horizontal flux calculation, which contributes to climate studies. References: Chu, P. C., 2008: Probability distribution function of the upper equatorial Pacific current speeds. Geophysical Research Letters, 35, doi:10.1029/2008GL033669. Chu, P. C., 2009: Statistical characteristics of the global surface current speeds obtained from satellite altimeter and scatterometer data. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2(1), 27-32.

  4. Distribution-free discriminant analysis

    SciTech Connect

    Burr, T.; Doak, J.

    1997-05-01

    This report describes our experience in implementing a non-parametric (distribution-free) discriminant analysis module for use in a wide range of pattern recognition problems. Issues discussed include performance results on both real and simulated data sets, comparisons to other methods, and the computational environment. In some cases, this module performs better than other existing methods. Nearly all cases can benefit from the application of multiple methods.

  5. Is Weibull Distribution the Most Appropriate Statistical Strength

    E-print Network

    Kundu, Debasis

    ... storage and renewable energy devices, fiber optics for high-speed communications, and elements associated with chemical inertness, etc. The advancement of ceramic science in the last few decades has ...

  6. Weibull Effective Area for Hertzian Ring Crack Initiation Stress

    SciTech Connect

    Jadaan, Osama M.; Wereszczak, Andrew A; Johanns, Kurt E

    2011-01-01

    Spherical or Hertzian indentation is used to characterize and guide the development of engineered ceramics under consideration for diverse applications involving contact, wear, rolling fatigue, and impact. Ring crack initiation can be one important damage mechanism of Hertzian indentation. It is caused by sufficiently high, surface-located radial tensile stresses in an annular ring located adjacent to and outside of the Hertzian contact circle. While the maximum radial tensile stress is known to be dependent on the elastic properties of the sphere and target, the diameter of the sphere, the applied compressive force, and the coefficient of friction, the Weibull effective area will also be affected by those parameters. However, estimates of the maximum radial tensile stress and the Weibull effective area are difficult to obtain because the coefficient of friction during Hertzian indentation is complex, likely intractable, and not known a priori. Circumventing this, the Weibull effective area expressions are derived here for the two extremes that bracket all coefficients of friction; namely, (1) the classical, frictionless, Hertzian case where only complete slip occurs, and (2) the case where no slip occurs or where the coefficient of friction is infinite.
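
    For intuition on what a Weibull effective area computation looks like, the sketch below numerically integrates A_e = integral of (sigma(r)/sigma_max)^m dA for the frictionless extreme, assuming the classical surface radial stress decay sigma_r(r) = sigma_max * (a/r)^2 outside the contact circle; under that assumed field the integral reduces to pi*a^2/(m-1). This is a generic illustration of the concept, not the report's derivation:

```python
import numpy as np
from scipy.integrate import quad

def effective_area(a, m):
    """Weibull effective area: integral over the surface of
    (sigma_r(r)/sigma_max)^m dA, with sigma_r(r) = sigma_max*(a/r)^2 for r >= a
    (classical frictionless Hertzian surface stress outside the contact circle)."""
    integrand = lambda r: (a / r) ** (2 * m) * 2 * np.pi * r
    val, _ = quad(integrand, a, np.inf)
    return val

a, m = 0.1, 10.0                      # contact radius [mm], Weibull modulus
print(effective_area(a, m))           # numerical integration
print(np.pi * a**2 / (m - 1))         # closed form pi*a^2/(m-1) for this field
```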

  7. Weibull models of fracture strengths and fatigue behavior of dental resins in flexure and shear.

    PubMed

    Baran, G R; McCool, J I; Paul, D; Boberick, K; Wunder, S

    1998-01-01

    In estimating lifetimes of dental restorative materials, it is useful to have available data on the fatigue behavior of these materials. Current efforts at estimation include several untested assumptions related to the equivalence of flaw distributions sampled by shear, tensile, and compressive stresses. Environmental influences on material properties are not accounted for, and it is unclear if fatigue limits exist. In this study, the shear and flexural strengths of three resins used as matrices in dental restorative composite materials were characterized by Weibull parameters. It was found that shear strengths were lower than flexural strengths, liquid sorption had a profound effect on characteristic strengths, and the Weibull shape parameter obtained from shear data differed for some materials from that obtained in flexure. In shear and flexural fatigue, a power law relationship applied for up to 250,000 cycles; no fatigue limits were found, and the data thus imply only one flaw population is responsible for failure. Again, liquid sorption adversely affected strength levels in most materials (decreasing shear strengths and flexural strengths by factors of 2-3) and to a greater extent than did the degree of cure or material chemistry. PMID:9730059

  8. GRB brightness ratio distribution analysis

    NASA Astrophysics Data System (ADS)

    Laros, J. G.

    1996-08-01

    The objective of this analysis is to obtain insight into whether positionally close pairs of GRBs are due to repetitions, clustering, or random chance. We consider the Brightness Ratio Distribution (BRD) of pairs of events. Here, brightness is used as a generic term for any quantity related to the observed intensity of an event. The BRD has the interesting property that if one can select pairs whose components are at the same distance (such as by considering only close-together pairs), then the distance dependence "drops out" of each brightness ratio and the BRD becomes narrower, because its width no longer has a component caused by the sources' differing distances. We have begun to apply this analysis to the BATSE events for which location and brightness data are available, comparing the BRD for close-together event pairs to the BRDs for the other (presumably unrelated) pairs. Preliminary results do not show any clear indication that close-together pairs are related. However, this work is at a very early stage with regard to optimizing the method and understanding its properties.

  9. Statistical modeling of tornado intensity distributions

    NASA Astrophysics Data System (ADS)

    Dotzek, Nikolai; Grieser, Jürgen; Brooks, Harold E.

    We address the issue of determining an appropriate general functional shape of observed tornado intensity distributions. Recently, it was suggested that in the limit of long and large tornado records, exponential distributions over all positive Fujita or TORRO scale classes would result. Yet our analysis shows that even for large databases, observations contradict the validity of exponential distributions for weak (F0) and violent (F5) tornadoes. We show that observed tornado intensities can be much better described by Weibull distributions, of which the exponential remains a special case. Weibull fits in either the v or F scale reproduce the observations significantly better than exponentials. In addition, we suggest applying the original definition of negative intensity scales down to F-2 and T-4 (corresponding to v=0 m s-1), at least for climatological analyses. Weibull distributions allow for an improved risk assessment of violent tornadoes up to F6, and better estimates of total tornado occurrence, degree of underreporting and existence of subcritical tornadic circulations below damaging intensity. Therefore, our results are relevant for climatologists and risk assessment managers alike.

  10. Test Population Selection from Weibull-Based, Monte Carlo Simulations of Fatigue Life

    NASA Technical Reports Server (NTRS)

    Vlcek, Brian L.; Zaretsky, Erwin V.; Hendricks, Robert C.

    2008-01-01

    Fatigue life is probabilistic and not deterministic. Experimentally establishing the fatigue life of materials, components, and systems is both time consuming and costly. As a result, conclusions regarding fatigue life are often inferred from a statistically insufficient number of physical tests. A proposed methodology for comparing life results as a function of variability due to Weibull parameters, variability between successive trials, and variability due to size of the experimental population is presented. Using Monte Carlo simulation of randomly selected lives from a large Weibull distribution, the variation in the L10 fatigue life of aluminum alloy AL6061 rotating rod fatigue tests was determined as a function of population size. These results were compared to the L10 fatigue lives of small (10 each) populations from AL2024, AL7075 and AL6061. For aluminum alloy AL6061, a simple algebraic relationship was established for the upper and lower L10 fatigue life limits as a function of the number of specimens failed. For most engineering applications where less than 30 percent variability can be tolerated in the maximum and minimum values, at least 30 to 35 test samples are necessary. The variability of test results based on small sample sizes can be greater than actual differences, if any, that exist between materials and can result in erroneous conclusions. The fatigue life of AL2024 is statistically longer than AL6061 and AL7075. However, there is no statistical difference between the fatigue lives of AL6061 and AL7075 even though AL7075 had a fatigue life 30 percent greater than AL6061.
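
    The resampling experiment described here is easy to reproduce in outline. The following sketch (illustrative parameters, not the AL6061 values) draws repeated populations from a single Weibull life distribution and shows how the scatter of the sample L10 life shrinks with population size:

```python
import numpy as np

rng = np.random.default_rng(3)
beta, eta = 2.0, 1.0e6            # hypothetical Weibull slope and characteristic life

def l10_estimates(n_specimens, n_trials=2000):
    """Spread of the sample L10 life over repeated trials of a given size."""
    lives = eta * rng.weibull(beta, size=(n_trials, n_specimens))
    return np.percentile(lives, 10, axis=1)      # 10th-percentile life per trial

for n in (5, 10, 30, 100):
    est = l10_estimates(n)
    spread = (est.max() - est.min()) / np.median(est)
    print(f"n = {n:3d}: median L10 = {np.median(est):.3g}, "
          f"relative spread = {spread:.2f}")
```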

  11. Test Population Selection from Weibull-Based, Monte Carlo Simulations of Fatigue Life

    NASA Technical Reports Server (NTRS)

    Vlcek, Brian L.; Zaretsky, Erwin V.; Hendricks, Robert C.

    2012-01-01

    Fatigue life is probabilistic and not deterministic. Experimentally establishing the fatigue life of materials, components, and systems is both time consuming and costly. As a result, conclusions regarding fatigue life are often inferred from a statistically insufficient number of physical tests. A proposed methodology for comparing life results as a function of variability due to Weibull parameters, variability between successive trials, and variability due to size of the experimental population is presented. Using Monte Carlo simulation of randomly selected lives from a large Weibull distribution, the variation in the L10 fatigue life of aluminum alloy AL6061 rotating rod fatigue tests was determined as a function of population size. These results were compared to the L10 fatigue lives of small (10 each) populations from AL2024, AL7075 and AL6061. For aluminum alloy AL6061, a simple algebraic relationship was established for the upper and lower L10 fatigue life limits as a function of the number of specimens failed. For most engineering applications where less than 30 percent variability can be tolerated in the maximum and minimum values, at least 30 to 35 test samples are necessary. The variability of test results based on small sample sizes can be greater than actual differences, if any, that exist between materials and can result in erroneous conclusions. The fatigue life of AL2024 is statistically longer than AL6061 and AL7075. However, there is no statistical difference between the fatigue lives of AL6061 and AL7075 even though AL7075 had a fatigue life 30 percent greater than AL6061.

  12. Distributed computing and nuclear reactor analysis

    SciTech Connect

    Brown, F.B.; Derstine, K.L.; Blomquist, R.N.

    1994-03-01

    Large-scale scientific and engineering calculations for nuclear reactor analysis can now be carried out effectively in a distributed computing environment, at costs far lower than for traditional mainframes. The distributed computing environment must include support for traditional system services, such as a queuing system for batch work, reliable filesystem backups, and parallel processing capabilities for large jobs. All ANL computer codes for reactor analysis have been adapted successfully to a distributed system based on workstations and X-terminals. Distributed parallel processing has been demonstrated to be effective for long-running Monte Carlo calculations.

  13. Extreme value distributions and Renormalization Group

    E-print Network

    Iván Calvo; Juan C. Cuchí; José G. Esteve; Fernando Falceto

    2012-10-15

    In the classical theorems of extreme value theory the limits of suitably rescaled maxima of sequences of independent, identically distributed random variables are studied. So far, only affine rescalings have been considered. We show, however, that more general rescalings are natural and lead to new limit distributions, apart from the Gumbel, Weibull, and Fréchet families. The problem is approached using the language of Renormalization Group transformations in the space of probability densities. The limit distributions are fixed points of the transformation and the study of the differential around them allows a local analysis of the domains of attraction and the computation of finite-size corrections.
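
    As a reference point for the classical (affine) case that the paper generalizes, the sketch below checks numerically that maxima of iid exponential variables, shifted by ln(n), are close to the standard Gumbel law; the sample sizes are arbitrary choices:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Maxima of n iid Exp(1) variables, affinely rescaled by b_n = ln(n):
# M_n - ln(n) converges in distribution to the Gumbel law.
n, reps = 1000, 3000
maxima = rng.exponential(size=(reps, n)).max(axis=1) - np.log(n)

# Compare the empirical rescaled maxima with the standard Gumbel distribution.
ks = stats.kstest(maxima, stats.gumbel_r.cdf)
print(f"KS statistic vs Gumbel: {ks.statistic:.4f} (p = {ks.pvalue:.3f})")
```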

  14. Efficient and ethical response-adaptive randomization designs for multi-arm clinical trials with Weibull time-to-event outcomes.

    PubMed

    Sverdlov, Oleksandr; Ryeznik, Yevgen; Wong, Weng-Kee

    2014-01-01

    We consider a design problem for a clinical trial with multiple treatment arms and time-to-event primary outcomes that are modeled using the Weibull family of distributions. The D-optimal design for the most precise estimation of model parameters is derived, along with compound optimal allocation designs that provide targeted efficiencies for various estimation problems and ethical considerations. The proposed optimal allocation designs are studied theoretically and are implemented using response-adaptive randomization for a clinical trial with censored Weibull outcomes. We compare the merits of our multiple-objective response-adaptive designs with traditional randomization designs and show that our designs are more flexible, realistic, generally more ethical, and frequently provide higher efficiencies for estimating different sets of parameters. PMID:24697678

  15. Calibration of three-parameter Weibull stress model for 15Kh2NMFA RPV steel

    NASA Astrophysics Data System (ADS)

    Beleznai, Robert; Lenkey, Gyöngyvér B.; Lauerova, Dana

    2010-11-01

    The Weibull stress model is a basic local-approach model used in the ductile-to-brittle transition region for the description and prediction of cleavage fracture in materials of both PWR and WWER reactor pressure vessels. In the Weibull stress model used most frequently until now [1], the parameters are determined by a calibration procedure using the fracture toughness values of high- and low-constraint specimens. In the present paper, the results of SEN(B) pre-cracked specimens of 10 × 20 × 120 mm size, with deep and shallow cracks, are utilized. The specimens were made of the material of a WWER-1000 reactor pressure vessel and were tested at the Nuclear Research Institute Rez. The Weibull stress was determined both with and without the plastic strain correction in the Weibull stress formula.
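
    For orientation, a Beremin-type Weibull stress is typically evaluated from finite element output as sigma_w = [(1/V0) * sum_i sigma_1,i^m * V_i]^(1/m) over the plastically strained elements. The sketch below implements that sum with made-up element data and an assumed reference volume V0; it is a generic illustration of the quantity, not the calibration procedure of the paper:

```python
import numpy as np

def weibull_stress(sigma_1, volumes, plastic, m, v0=1.0e-3):
    """Beremin-type Weibull stress from finite-element results:
    sigma_w = [ (1/V0) * sum_i sigma_1_i^m * V_i ] ** (1/m),
    summed over elements in the plastically strained (process) zone."""
    s = sigma_1[plastic]
    v = volumes[plastic]
    return (np.sum(s**m * v) / v0) ** (1.0 / m)

# Hypothetical element data: maximum principal stresses [MPa] and volumes [mm^3]
sigma_1 = np.array([1200.0, 1500.0, 1800.0, 900.0])
volumes = np.array([0.02, 0.015, 0.01, 0.03])
plastic = np.array([True, True, True, False])     # elements with plastic strain
print(weibull_stress(sigma_1, volumes, plastic, m=20.0))
```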

  16. Limit Laws for Norms of IID Samples with Weibull Tails Leonid Bogachev1,2

    E-print Network

    Bogachev, Leonid V.

    We study limit laws for norms of IID samples {Xi} with Weibull tails, where {Xi} belong to the domain of attraction of Gumbel's double exponential law (in the sense of extreme value theory).

  17. Towards Distributed Memory Parallel Program Analysis

    SciTech Connect

    Quinlan, D; Barany, G; Panas, T

    2008-06-17

    This paper presents a parallel attribute evaluation for distributed memory parallel computer architectures, where previously only shared memory parallel support for this technique had been developed. Attribute evaluation is a part of how attribute grammars are used for program analysis within modern compilers. Within this work, we have extended ROSE, an open compiler infrastructure, with a distributed memory parallel attribute evaluation mechanism to support user-defined global program analysis required for some forms of security analysis which cannot be addressed by a file-by-file view of large-scale applications. As a result, user-defined security analyses may now run in parallel without the user having to specify the way data is communicated between processors. The automation of communication enables an extensible open-source parallel program analysis infrastructure.

  18. Performance of Weibull and Linear Semi-logarithmic Models in Simulating Inactivation in Waters.

    PubMed

    Stocker, M D; Pachepsky, Y A; Shelton, D R

    2014-09-01

    Modeling inactivation of indicator microorganisms is a necessary component of microbial water quality forecast and management recommendations. The linear semi-logarithmic (LSL) model is commonly used to simulate the dependence of bacterial concentrations in waters on time. There were indications that the assumption of semi-logarithmic linearity may not be accurate enough for waters. The objective of this work was to compare the performance of the LSL and the two-parameter Weibull inactivation models with data on survival of indicator organisms in various types of water from a representative database of 167 laboratory experiments. The Weibull model was preferred in >99% of all cases when the root mean squared errors and Nash-Sutcliffe statistics were compared. Comparison of corrected Akaike statistic values gave the preference to the Weibull model in only 35% of cases. This was caused by (i) a small number of experimental points on some inactivation curves, (ii) closeness of the shape parameter of the Weibull equation to one, and (iii) piecewise log-linear inactivation dynamics that could be well described by neither of the two models compared. Based on the Akaike test, the Weibull model was favored in agricultural, lake, and pristine waters, whereas the LSL model was preferred for groundwater, wastewater, rivers, and marine waters. The decimal reduction time parameter of both the LSL and Weibull models exhibited an Arrhenius-type dependence on temperature. Overall, the existing inactivation data indicate that the application of the Weibull model can improve the predictive capabilities of microbial water quality modeling. PMID:25603241
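
    The model comparison described above is straightforward to reproduce on any single survival curve. This sketch fits both models to hypothetical log-reduction data with least squares and compares RMSE and a corrected Akaike criterion; the data, starting values, and the particular AICc form (parameter count k, no variance term) are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical inactivation data: time [days] and log10 survival ratio log10(C/C0)
t = np.array([0.0, 1.0, 2.0, 4.0, 7.0, 10.0, 14.0])
y = np.array([0.0, -0.3, -0.5, -0.9, -1.3, -1.6, -1.9])

lsl = lambda t, d: -t / d                        # linear semi-logarithmic model
weibull = lambda t, delta, p: -(t / delta) ** p  # two-parameter Weibull model

def aicc(resid, k, n):
    """Corrected Akaike criterion from least-squares residuals."""
    return n * np.log(np.sum(resid**2) / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)

for name, f, p0 in [("LSL", lsl, [5.0]), ("Weibull", weibull, [5.0, 1.0])]:
    popt, _ = curve_fit(f, t, y, p0=p0)
    r = y - f(t, *popt)
    print(f"{name:7s}: params = {np.round(popt, 3)}, "
          f"RMSE = {np.sqrt(np.mean(r**2)):.4f}, AICc = {aicc(r, len(popt), len(t)):.2f}")
```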

  19. A Novel Conditional Probability Density Distribution Surface for the Analysis of the Drop Life of Solder Joints Under Board Level Drop Impact

    NASA Astrophysics Data System (ADS)

    Gu, Jian; Lei, YongPing; Lin, Jian; Fu, HanGuang; Wu, Zhongwei

    2015-10-01

    The scattering of fatigue life data is a common problem and is usually described using the normal distribution or Weibull distribution. For solder joints under drop impact, due to the complicated stress distribution, the relationship between the stress and the drop life is so far unknown. Furthermore, it is important to establish a function describing the change in standard deviation for solder joints under different drop impact levels. Therefore, in this study, a novel conditional probability density distribution surface (CPDDS) was established for the analysis of the drop life of solder joints. The relationship between the drop impact acceleration and the drop life is proposed, which comprehensively considers the stress distribution. A novel exponential model was adopted for describing the change of the standard deviation with the impact acceleration (0 → +∞). To validate the model, the drop life of Sn-3.0Ag-0.5Cu solder joints was analyzed. The probability density curve of the logarithm of the fatigue life distribution can be easily obtained for a certain acceleration level fixed on the acceleration level axis of the CPDDS. The P-A-N curve was also obtained using the functions μ(A) and σ(A), which can reflect the regularity of the life data for an overall reliability P.

  20. Distributional Cost-Effectiveness Analysis: A Tutorial.

    PubMed

    Asaria, Miqdad; Griffin, Susan; Cookson, Richard

    2016-01-01

    Distributional cost-effectiveness analysis (DCEA) is a framework for incorporating health inequality concerns into the economic evaluation of health sector interventions. In this tutorial, we describe the technical details of how to conduct DCEA, using an illustrative example comparing alternative ways of implementing the National Health Service (NHS) Bowel Cancer Screening Programme (BCSP). The 2 key stages in DCEA are 1) modeling social distributions of health associated with different interventions, and 2) evaluating social distributions of health with respect to the dual objectives of improving total population health and reducing unfair health inequality. As well as describing the technical methods used, we also identify the data requirements and the social value judgments that have to be made. Finally, we demonstrate the use of sensitivity analyses to explore the impacts of alternative modeling assumptions and social value judgments. PMID:25908564

  1. Ceramics Analysis and Reliability Evaluation of Structures (CARES). Users and programmers manual

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Manderscheid, Jane M.; Gyekenyesi, John P.

    1990-01-01

    This manual describes how to use the Ceramics Analysis and Reliability Evaluation of Structures (CARES) computer program. The primary function of the code is to calculate the fast fracture reliability or failure probability of macroscopically isotropic ceramic components. These components may be subjected to complex thermomechanical loadings, such as those found in heat engine applications. The program uses results from MSC/NASTRAN or ANSYS finite element analysis programs to evaluate component reliability due to inherent surface and/or volume type flaws. CARES utilizes the Batdorf model and the two-parameter Weibull cumulative distribution function to describe the effect of multiaxial stress states on material strength. The principle of independent action (PIA) and the Weibull normal stress averaging models are also included. Weibull material strength parameters, the Batdorf crack density coefficient, and other related statistical quantities are estimated from four-point bend bar or uniform uniaxial tensile specimen fracture strength data. Parameter estimation can be performed for single or multiple failure modes by using the least-squares analysis or the maximum likelihood method. Kolmogorov-Smirnov and Anderson-Darling goodness-of-fit tests, ninety percent confidence intervals on the Weibull parameters, and Kanofsky-Srinivasan ninety percent confidence band values are also provided. The probabilistic fast-fracture theories used in CARES, along with the input and output for CARES, are described. Example problems to demonstrate various features of the program are also included. This manual describes the MSC/NASTRAN version of the CARES program.
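
    As a minimal illustration of the parameter estimation step (not the CARES implementation), the sketch below fits a two-parameter Weibull to hypothetical bend-bar fracture strengths by maximum likelihood and evaluates a failure probability from the fitted cumulative distribution:

```python
import numpy as np
from scipy import stats

# Hypothetical four-point bend fracture strengths [MPa]
strengths = np.array([312., 355., 378., 391., 402., 415., 428., 440.,
                      455., 467., 480., 495., 512., 530., 561.])

# Maximum-likelihood fit of the two-parameter Weibull (location fixed at zero).
m_hat, _, s0_hat = stats.weibull_min.fit(strengths, floc=0)
print(f"Weibull modulus m = {m_hat:.2f}, characteristic strength = {s0_hat:.1f} MPa")

# Failure probability of a component loaded to a uniform stress sigma:
sigma = 350.0
pf = stats.weibull_min.cdf(sigma, m_hat, scale=s0_hat)
print(f"P_f at {sigma} MPa = {pf:.3f}")
```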

  2. Performance optimisations for distributed analysis in ALICE

    NASA Astrophysics Data System (ADS)

    Betev, L.; Gheata, A.; Gheata, M.; Grigoras, C.; Hristov, P.

    2014-06-01

    Performance is a critical issue in a production system accommodating hundreds of analysis users. Compared to a local session, distributed analysis is exposed to services and network latencies, remote data access and heterogeneous computing infrastructure, creating a more complex performance and efficiency optimization matrix. During the last 2 years, ALICE analysis shifted from a fast development phase to more mature and stable code. At the same time, the frameworks and tools for deployment, monitoring and management of large productions have evolved considerably too. The ALICE Grid production system is currently used by a fair share of organized and individual user analysis, consuming up to 30% of the available resources and ranging from fully I/O-bound analysis code to CPU-intensive correlation or resonance studies. While the intrinsic analysis performance is unlikely to improve by a large factor during the LHC long shutdown (LS1), the overall efficiency of the system still has to be improved by a significant factor to satisfy the analysis needs. We have instrumented all analysis jobs with "sensors" collecting comprehensive monitoring information on the job running conditions and performance in order to identify bottlenecks in the data processing flow. These data are collected by the MonALISA-based ALICE Grid monitoring system and are used to steer and improve the job submission and management policy, to identify operational problems in real time and to perform automatic corrective actions. In parallel with an upgrade of our production system, we are aiming for low-level improvements related to data format, data management and merging of results to allow for a better performing ALICE analysis.

  3. Exoribonuclease superfamilies: structural analysis and phylogenetic distribution

    PubMed Central

    Zuo, Yuhong; Deutscher, Murray P.

    2001-01-01

    Exoribonucleases play an important role in all aspects of RNA metabolism. Biochemical and genetic analyses in recent years have identified many new RNases and it is now clear that a single cell can contain multiple enzymes of this class. Here, we analyze the structure and phylogenetic distribution of the known exoribonucleases. Based on extensive sequence analysis and on their catalytic properties, all of the exoribonucleases and their homologs have been grouped into six superfamilies and various subfamilies. We identify common motifs that can be used to characterize newly-discovered exoribonucleases, and based on these motifs we correct some previously misassigned proteins. This analysis may serve as a useful first step for developing a nomenclature for this group of enzymes. PMID:11222749

  4. Analysis and control of distributed cooperative systems.

    SciTech Connect

    Feddema, John Todd; Parker, Eric Paul; Wagner, John S.; Schoenwald, David Alan

    2004-09-01

    As part of the DARPA Information Processing Technology Office (IPTO) Software for Distributed Robotics (SDR) Program, Sandia National Laboratories has developed analysis and control software for coordinating tens to thousands of autonomous cooperative robotic agents (primarily unmanned ground vehicles) performing military operations such as reconnaissance, surveillance and target acquisition; countermine and explosive ordnance disposal; force protection and physical security; and logistics support. Due to the nature of these applications, the control techniques must be distributed, and they must not rely on high bandwidth communication between agents. At the same time, a single soldier must easily direct these large-scale systems. Finally, the control techniques must be provably convergent so as not to cause undue harm to civilians. In this project, provably convergent, moderate communication bandwidth, distributed control algorithms have been developed that can be regulated by a single soldier. We have simulated in great detail the control of low numbers of vehicles (up to 20) navigating throughout a building, and we have simulated in lesser detail the control of larger numbers of vehicles (up to 1000) trying to locate several targets in a large outdoor facility. Finally, we have experimentally validated the resulting control algorithms on smaller numbers of autonomous vehicles.

  5. Limit Laws for Norms of IID Samples with Weibull Tails Leonid Bogachev 1,2

    E-print Network

    Bogachev, Leonid V.

    Limit laws for norms of IID samples with Weibull tails are studied. The problem was considered by Schlather (10), but the case where {Xi} belong to the domain of attraction of Gumbel's double exponential law (in the sense of extreme value theory) has largely remained open (even for an exponential ...

  6. Rectangular shape distributed piezoelectric actuator: analytical analysis

    NASA Astrophysics Data System (ADS)

    Sun, Bohua; Qiu, Yan

    2004-04-01

    This paper is focused on the development of distributed piezoelectric actuators (DPAs) with rectangular shapes using PZT materials. Analytical models of rectangular-shape DPAs have been constructed in order to analyse and test the performance of DPA products. Firstly, based on the theory of electromagnetics, DPAs have been considered as a type of capacitor. The charge density distributed on the interdigitated electrodes (IDEs) applied in the actuators, and the capacitance of the DPAs, have been calculated. The accurate distribution and intensity of the electric field in a DPA element have also been calculated. Secondly, based on the piezoelectric constitutive relations and compound plate theory, models for the mechanical strain and stress fields of DPAs have been developed, and the performance of rectangular-shape DPAs has been discussed. Finally, on the basis of the models developed in this paper, an improved design of a rectangular-shape DPA has been discussed and summed up. Because a minimum of simplifying assumptions is used in the calculations, an accurate description of the distribution and intensity of the electric field in DPAs is obtained; such calculations have not previously appeared in the literature and can be used in DPA design and manufacturing processes to improve mechanical performance and reduce the cost of DPA products in further applications. In this paper, all the analysis and calculation have been done with MATLAB and MathCAD. The FEM results used for comparison were obtained using the ABAQUS program.

  7. Survival Analysis of Patients with End Stage Renal Disease

    NASA Astrophysics Data System (ADS)

    Urrutia, J. D.; Gayo, W. S.; Bautista, L. A.; Baccay, E. B.

    2015-06-01

    This paper provides a survival analysis of End Stage Renal Disease (ESRD) using Kaplan-Meier estimates and the Weibull distribution. The data were obtained from the records of V. L. Makabali Memorial Hospital with respect to time t (patient's age), covariates such as developed secondary disease (pulmonary congestion and cardiovascular disease), gender, and the event of interest: the death of ESRD patients. Survival and hazard rates were estimated using NCSS for the Weibull distribution and SPSS for the Kaplan-Meier estimates. Both lead to the same conclusion: the hazard rate increases and the survival rate decreases with time for ESRD patients diagnosed with pulmonary congestion, cardiovascular disease, or both diseases. It also shows that female patients have a greater risk of death compared to males. The probability of the risk was given by the equation R = 1 - e^(-H(t)), where e^(-H(t)) is the survival function and H(t) the cumulative hazard function, which was estimated using Cox regression.
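
    To make the relation R = 1 - e^(-H(t)) concrete, here is a small self-contained sketch (synthetic follow-up data, not the hospital records) that computes a Kaplan-Meier survival curve, takes H(t) = -ln S(t), and reports the implied event risk; ties are handled observation-by-observation, which is a simplification:

```python
import numpy as np

def kaplan_meier(t, event):
    """Kaplan-Meier survival estimate; H(t) = -ln S(t) gives the cumulative
    hazard, so the probability of the event by time t is R = 1 - exp(-H(t))."""
    order = np.argsort(t)
    t, event = t[order], event[order]
    n = len(t)
    at_risk = n - np.arange(n)
    s = np.cumprod(1.0 - event / at_risk)      # steps down only at events
    return t, s

# Hypothetical follow-up times [months] and death indicators (1 = died)
t = np.array([2., 5., 6., 8., 11., 14., 14., 20., 25., 30.])
d = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 0])
times, surv = kaplan_meier(t, d)
risk = 1.0 - surv                               # R = 1 - e^(-H(t)) with H = -ln S
for ti, si, ri in zip(times, surv, risk):
    print(f"t = {ti:4.0f}  S(t) = {si:.3f}  R(t) = {ri:.3f}")
```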

  8. Stochastic Analysis of Wind Energy for Wind Pump Irrigation in Coastal Andhra Pradesh, India

    NASA Astrophysics Data System (ADS)

    Raju, M. M.; Kumar, A.; Bisht, D.; Rao, D. B.

    2014-09-01

    The rapid escalation in the prices of oil and gas, as well as the increasing demand for energy, has attracted the attention of scientists and researchers to explore the possibility of generating and utilizing the alternative and renewable source of wind energy in the long coastal belt of India, which has considerable wind energy resources. A detailed analysis of wind potential is a prerequisite to harvesting wind energy resources efficiently. Keeping this in view, the present study was undertaken to analyze the wind energy potential to assess the feasibility of a wind-pump operated irrigation system in the coastal region of Andhra Pradesh, India, where high ground water table conditions are available. Wind speed data were tested against candidate probability distributions to describe the wind energy potential in the region. The normal and Weibull probability distributions were tested, and on the basis of the chi-square test, the Weibull distribution gave better results. Hence, it was concluded that the Weibull probability distribution may be used to stochastically describe the annual wind speed data of coastal Andhra Pradesh with better accuracy. The size of the complete irrigation system was determined with mass curve analysis to satisfy various daily irrigation demands at different risk levels.

  9. Design and Analysis of a Distributed Regenerative Frequency Divider Using a Distributed Mixer

    E-print Network

    Heydari, Payam

    In this paper we present the design and analysis of a distributed regenerative frequency divider using a distributed mixer. First demonstrated in 1939, a regenerative frequency divider is essentially a non-linear feedback circuit consisting ...

  10. ATLAS Distributed Data Analysis: challenges and performance

    E-print Network

    Fassi, Farida; The ATLAS collaboration

    2015-01-01

    In the LHC operations era the key goal is to analyse the results of the collisions of high-energy particles as a way of probing the fundamental forces of nature. The ATLAS experiment at the LHC at CERN is recording and simulating several tens of petabytes of data per year. The ATLAS Computing Model was designed around the concepts of Grid Computing. Large data volumes from the detectors and simulations require a large number of CPUs and storage space for data processing. To cope with this challenge a global network known as the Worldwide LHC Computing Grid (WLCG) was built. This is the most sophisticated data taking and analysis system ever built. ATLAS accumulated more than 140 PB of data between 2009 and 2014. To analyse these data ATLAS developed, deployed and now operates a mature and stable distributed analysis (DA) service on the WLCG. The service is actively used: more than half a million user jobs run daily on DA resources, submitted by more than 1500 ATLAS physicists. A significant reliability of the...

  11. ATLAS Distributed Data Analysis: performance and challenges

    E-print Network

    Fassi, Farida; The ATLAS collaboration

    2015-01-01

    In the LHC operations era the key goal is to analyse the results of the collisions of high-energy particles as a way of probing the fundamental forces of nature. The ATLAS experiment at the LHC at CERN is recording and simulating several tens of petabytes of data per year. The ATLAS Computing Model was designed around the concepts of Grid Computing. Large data volumes from the detectors and simulations require a large number of CPUs and storage space for data processing. To cope with this challenge a global network known as the Worldwide LHC Computing Grid (WLCG) was built. This is the most sophisticated data taking and analysis system ever built. ATLAS accumulated more than 140 PB of data between 2009 and 2014. To analyse these data ATLAS developed, deployed and now operates a mature and stable distributed analysis (DA) service on the WLCG. The service is actively used: more than half a million user jobs run daily on DA resources, submitted by more than 1500 ATLAS physicists. A significant reliability of the...

  12. Likelihood analysis of earthquake focal mechanism distributions

    NASA Astrophysics Data System (ADS)

    Kagan, Yan Y.; Jackson, David D.

    2015-06-01

    In our paper published earlier we discussed forecasts of earthquake focal mechanism and ways to test the forecast efficiency. Several verification methods were proposed, but they were based on ad hoc, empirical assumptions, thus their performance is questionable. We apply a conventional likelihood method to measure the skill of earthquake focal mechanism orientation forecasts. The advantage of such an approach is that earthquake rate prediction can be adequately combined with focal mechanism forecast, if both are based on the likelihood scores, resulting in a general forecast optimization. We measure the difference between two double-couple sources as the minimum rotation angle that transforms one into the other. We measure the uncertainty of a focal mechanism forecast (the variability), and the difference between observed and forecasted orientations (the prediction error), in terms of these minimum rotation angles. To calculate the likelihood score we need to compare actual forecasts or occurrences of predicted events with the null hypothesis that the mechanism's 3-D orientation is random (or equally probable). For 3-D rotation the random rotation angle distribution is not uniform. To better understand the resulting complexities, we calculate the information (likelihood) score for two theoretical rotational distributions (Cauchy and von Mises-Fisher), which are used to approximate earthquake source orientation pattern. We then calculate the likelihood score for earthquake source forecasts and for their validation by future seismicity data. Several issues need to be explored when analyzing observational results: their dependence on forecast and data resolution, internal dependence of scores on forecasted angle and random variability of likelihood scores. Here, we propose a simple tentative solution but extensive theoretical and statistical analysis is needed.
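    The point that the rotation angle of a uniformly random 3-D rotation is not uniformly distributed can be checked directly: under the uniform (Haar) measure on rotations, the angle has density p(theta) = (1 - cos theta)/pi on [0, pi]. A small Monte Carlo verification with SciPy (all values here illustrative):

        # Verify the non-uniform random rotation angle density by sampling
        # uniformly random 3-D rotations and comparing with the analytic form.
        import numpy as np
        from scipy.spatial.transform import Rotation

        angles = Rotation.random(200_000, random_state=1).magnitude()  # radians in [0, pi]

        edges = np.linspace(0, np.pi, 7)
        hist, _ = np.histogram(angles, bins=edges, density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])
        analytic = (1 - np.cos(centers)) / np.pi

        for c, h, a in zip(centers, hist, analytic):
            print(f"theta={c:4.2f}  empirical={h:5.3f}  analytic={a:5.3f}")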

  13. Statistical wind analysis for near-space applications

    NASA Astrophysics Data System (ADS)

    Roney, Jason A.

    2007-09-01

    Statistical wind models were developed based on the existing observational wind data for near-space altitudes between 60 000 and 100 000 ft (18-30 km) above ground level (AGL) at two locations, Akron, OH, USA, and White Sands, NM, USA. These two sites are envisioned as playing a crucial role in the first flights of high-altitude airships. The analysis shown in this paper has not been previously applied to this region of the stratosphere for such an application. Standard statistics were compiled for these data, such as mean, median, maximum wind speed, and standard deviation, and the data were modeled with Weibull distributions. These statistics indicated that, on a yearly average, there is a lull or a “knee” in the wind between 65 000 and 72 000 ft AGL (20-22 km). From the standard statistics, trends at both locations indicated substantial seasonal variation in the mean wind speed at these heights. The yearly and monthly statistical modeling indicated that Weibull distributions were a reasonable model for the data. Forecasts and hindcasts were done by using a Weibull model based on 2004 data and comparing the model with the 2003 and 2005 data. The 2004 distribution was also a reasonable model for these years. Lastly, the Weibull distribution and cumulative distribution function were used to predict the 50%, 95%, and 99% winds, which are directly related to the expected power requirements of a near-space station-keeping airship. These values indicated that using only the standard deviation of the mean may underestimate the operational conditions.
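    The percentile winds mentioned above follow from inverting the fitted Weibull cumulative distribution function. A minimal sketch, with made-up shape and scale values standing in for a fitted altitude band:

        # Percentile wind speeds from a fitted Weibull model: the speed not
        # exceeded with probability p is v_p = c * (-ln(1 - p))**(1/k).
        import numpy as np

        k, c = 2.2, 18.0  # assumed Weibull shape and scale (m/s) for one altitude band

        def weibull_percentile(p, k, c):
            return c * (-np.log(1.0 - p)) ** (1.0 / k)

        for p in (0.50, 0.95, 0.99):
            print(f"{p:.0%} wind: {weibull_percentile(p, k, c):.1f} m/s")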

  14. A Mixed Kijima Model Using the Weibull-Based Generalized Renewal Processes

    PubMed Central

    2015-01-01

    Generalized Renewal Processes are useful for modeling the rejuvenation of dynamical systems resulting from planned or unplanned interventions. We present new perspectives for the Generalized Renewal Processes in general and for the Weibull-based Generalized Renewal Processes in particular. In a departure from the literature, we present a mixed Generalized Renewal Processes approach involving Kijima Type I and II models, allowing one to infer the impact of distinct interventions on the performance of the system under study. The first and second theoretical moments of this model are introduced as well as its maximum likelihood estimation and random sampling approaches. In order to illustrate the usefulness of the proposed Weibull-based Generalized Renewal Processes model, some real data sets involving improving, stable, and deteriorating systems are used. PMID:26197222
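    As a hedged sketch of the ingredients named in this record (not the authors' code), the following simulates a Weibull-based Generalized Renewal Process under Kijima Type I and Type II virtual ages; the shape, scale, and rejuvenation parameter q are invented:

        # Weibull-based GRP sampling with Kijima virtual ages. q is the
        # rejuvenation parameter: q=0 recovers perfect repair, q=1 minimal repair.
        import numpy as np

        rng = np.random.default_rng(0)
        alpha, beta, q = 1.8, 100.0, 0.4  # assumed Weibull shape, scale, repair factor

        def next_failure_time(v):
            """Inverse-transform sample of time to next failure given virtual age v."""
            u = rng.uniform()
            return beta * ((v / beta) ** alpha - np.log(u)) ** (1.0 / alpha) - v

        def simulate(n_failures, kijima_type=1):
            v, times = 0.0, []
            for _ in range(n_failures):
                x = next_failure_time(v)
                times.append(x)
                v = v + q * x if kijima_type == 1 else q * (v + x)  # Kijima I vs II
            return times

        print("Kijima I  inter-failure times:", np.round(simulate(5, 1), 1))
        print("Kijima II inter-failure times:", np.round(simulate(5, 2), 1))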

  15. Distributed Design and Analysis of Computer Experiments

    Energy Science and Technology Software Center (ESTSC)

    2002-11-11

    DDACE is a C++ object-oriented software library for the design and analysis of computer experiments. DDACE can be used to generate samples from a variety of sampling techniques. These samples may be used as input to an application code. DDACE also contains statistical tools such as response surface models and correlation coefficients to analyze input/output relationships between variables in an application code. DDACE can generate input values for uncertain variables within a user's application. For example, a user might like to vary a temperature variable as well as some material variables in a series of simulations. Through the series of simulations the user might be looking for optimal settings of parameters based on some user criteria, or the user may be interested in the sensitivity to input variability shown by an output variable. In either case, the user may provide information about the suspected ranges and distributions of a set of input variables, along with a sampling scheme, and DDACE will generate input points based on these specifications. The input values generated by DDACE and the one or more outputs computed through the user's application code can be analyzed with a variety of statistical methods. This can lead to a wealth of information about the relationships between the variables in the problem. While statistical and mathematical packages may be employed to carry out the analysis of the input/output relationships, DDACE also contains some tools for analyzing the simulation data. DDACE incorporates a software package called MARS (Multivariate Adaptive Regression Splines), developed by Jerome Friedman. MARS is used for generating a spline surface fit of the data. With MARS, a model simplification may be calculated using the input and corresponding output values for the user's application problem. The MARS grid data may be used for generating 3-dimensional response surface plots of the simulation data. DDACE also contains an implementation of an algorithm by Michael McKay to compute variable correlations. DDACE can also be used to carry out a main-effects analysis to calculate the sensitivity of an output variable to each of the varied inputs taken individually.
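    The workflow this record outlines (sample the inputs, run the application, assess sensitivity) can be illustrated with SciPy's generic Latin hypercube sampler; this sketch does not use DDACE's own API, and all names and ranges are invented:

        # Generic design-and-analysis-of-computer-experiments loop:
        # Latin hypercube sampling -> run application -> crude main effects.
        import numpy as np
        from scipy.stats import qmc

        def application_code(x):
            """Stand-in for a user's simulation; x = [temperature, density]."""
            return 2.0 * x[0] + 0.1 * x[1] ** 2

        sampler = qmc.LatinHypercube(d=2, seed=7)
        unit = sampler.random(n=64)
        # Scale unit-cube samples to the suspected parameter ranges.
        X = qmc.scale(unit, l_bounds=[300.0, 1.0], u_bounds=[400.0, 5.0])
        y = np.array([application_code(x) for x in X])

        # Crude main-effects indicator: correlation of each input with the output.
        for i, name in enumerate(["temperature", "density"]):
            r = np.corrcoef(X[:, i], y)[0, 1]
            print(f"{name:11s} correlation with output: {r:+.2f}")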

  17. Global Compiler Analysis for Optimizing Tuplespace Communication on Distributed Systems

    E-print Network

    Fenwick, Jay

    This paper provides concrete steps towards advanced compile-time analysis and optimization of the uncoupled communication of a shared tuplespace. Specifically, we present global analysis techniques...

  18. Time-dependent reliability analysis of ceramic engine components

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.

    1993-01-01

    The computer program CARES/LIFE calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. This program is an extension of the CARES (Ceramics Analysis and Reliability Evaluation of Structures) computer program. CARES/LIFE accounts for the phenomenon of subcritical crack growth (SCG) by utilizing either the power or Paris law relations. The two-parameter Weibull cumulative distribution function is used to characterize the variation in component strength. The effects of multiaxial stresses are modeled using either the principle of independent action (PIA), the Weibull normal stress averaging method (NSA), or the Batdorf theory. Inert strength and fatigue parameters are estimated from rupture strength data of naturally flawed specimens loaded in static, dynamic, or cyclic fatigue. Two example problems demonstrating proof testing and fatigue parameter estimation are given.
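    The two-parameter Weibull strength model underlying such codes can be sketched in its simplest weakest-link form (uniform uniaxial stress, invented parameter values; this is not the CARES/LIFE implementation):

        # Weakest-link Weibull strength model: for a component of volume V under
        # uniform uniaxial stress sigma,
        #   P_f = 1 - exp(-(V / V0) * (sigma / sigma0)**m)
        # with Weibull modulus m and characteristic strength sigma0 of volume V0.
        import numpy as np

        m, sigma0, V0 = 10.0, 500.0, 1.0  # assumed modulus, MPa, mm^3

        def failure_probability(sigma, V):
            return 1.0 - np.exp(-(V / V0) * (sigma / sigma0) ** m)

        for sigma in (300.0, 400.0, 500.0):
            print(f"sigma={sigma:.0f} MPa: Pf(V=1)={failure_probability(sigma, 1.0):.3f}, "
                  f"Pf(V=10)={failure_probability(sigma, 10.0):.3f}")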

  19. Statistical Analysis of Particle Distributions in Composite Materials

    E-print Network

    Wichert, Sofia

    Statistical Analysis of Particle Distributions in Composite Materials, by Sofia Mucharreira de Azeredo Lopes. Summary: The statistical analysis of particle distributions is of prime importance for better control of the production of particulate composite materials...

  20. Shark: Fast Data Analysis Using Coarse-grained Distributed Memory

    E-print Network

    California at Irvine, University of

    Shark is a research data analysis system built on a novel coarse-grained distributed shared-memory abstraction...

  1. Analysis of chord-length distributions.

    PubMed

    Burger, C; Ruland, W

    2001-09-01

    A closed-form analytical solution for the inversion of the integral equation relating small-angle scattering intensity distributions of two-phase systems to chord-length distributions is presented. The result is generalized to arbitrary derivatives of higher order of the autocorrelation function and to arbitrary projections of the scattering intensity (including slit collimation). This inverse transformation offers an elegant way to investigate the impact of certain features, e.g. singularities, in the chord-length distribution or its higher-order derivatives on the scattering curve, e.g. oscillatory components in the asymptotic behavior at a large scattering vector. Several examples are discussed. PMID:11526297

  2. Analysis of Temperature Distributions in Nighttime Inversions

    NASA Astrophysics Data System (ADS)

    Telyak, Oksana; Krasouski, Aliaksandr; Svetashev, Alexander; Turishev, Leonid; Barodka, Siarhei

    2015-04-01

    Adequate prediction of temperature inversions in the atmospheric boundary layer is one of the prerequisites for successful forecasting of meteorological parameters and severe weather events. Examples include surface air temperature and precipitation forecasting as well as prediction of fog, frosts and smog with hazardous levels of atmospheric pollution. At the same time, reliable forecasting of temperature inversions remains an unsolved problem. For prediction of nighttime inversions over some specific territory, it is important to study characteristic features of local circulation cell formation and to properly take local factors into account to develop custom modeling techniques for operational use. The present study aims to investigate and analyze vertical temperature distributions in tropospheric inversions (isotherms) over the territory of Belarus. We study several specific cases of formation, evolution and decay of deep nighttime temperature inversions in Belarus by means of mesoscale numerical simulations with the WRF model, considering basic mechanisms of isothermal and inverse temperature layer formation in the troposphere and the impact of these layers on local circulation cells. Our primary goal is to assess the feasibility of advance prediction of inversion formation with WRF. Modeling results reveal that all cases under consideration have characteristic features of radiative inversions (e.g., their formation times, development phases, inversion intensities, etc.). Regions of "blocking" layer formation are extensive and often spread over the entire territory of Belarus. Inversion decay starts from the lowermost (near-surface) layer (altitudes of 5 to 50 m). In all cases, one can observe formation of temperature gradients that substantially differ from the basic inversion gradient, i.e. the layer splits into smaller layers, each having a different temperature stratification (isothermal, adiabatic, etc.). As opposed to various empirical techniques as well as theoretical approaches based on discriminant analysis, mesoscale modeling with WRF provides fairly successful forecasts of formation times and regions for all types of temperature inversions up to 3 days in advance. Furthermore, we conclude that without proper adjustment for the presence of thin isothermal layers (adiabatic and/or inversion layers), temperature data can affect results of statistical climate studies. Provided there are regions where a long-term, constant inversion is present (e.g., Antarctica or regions with continental climate), these data can contribute an uncompensated systematic error of 2 to 10 °C. We argue that this very fact may lead to inconsistencies in long-term temperature data interpretations (e.g., conclusions ranging from "global warming" to "global cooling" based on temperature observations for the same region and time period). Due to the importance of this problem from the scientific as well as practical point of view, our plans for further studies include analysis of autumn and wintertime inversions and convective inversions. At the same time, it seems promising to develop an algorithm for automatic recognition of temperature inversions based on a combination of WRF modeling results, surface and satellite observations.

  3. DISTRIBUTION SYSTEM RELIABILITY ANALYSIS USING A MICROCOMPUTER

    EPA Science Inventory

    Distribution system reliability for most utilities is maintained by the knowledge of a few key personnel. Generally, these water maintenance personnel use a good memory, repair records, a large wall map and a hydraulic model of the larger transmission mains to help identify probl...

  4. Intensity distribution analysis of cathodoluminescence using the energy loss distribution of electrons.

    PubMed

    Fukuta, Masahiro; Inami, Wataru; Ono, Atsushi; Kawata, Yoshimasa

    2016-01-01

    We present an intensity distribution analysis of cathodoluminescence (CL) excited with a focused electron beam in a luminescent thin film. The energy loss distribution is applied to the developed analysis method in order to determine the arrangement of the dipole locations along the path of the electron traveling in the film. Propagating light emitted from each dipole is analyzed with the finite-difference time-domain (FDTD) method. The CL distribution near the film surface is evaluated as a nanometric light source. It is found that a light source about 30 nm wide is generated in the film by the focused electron beam. We also discuss the accuracy of the developed analysis method by comparison with experimental results. The analysis results are brought into good agreement with the experimental results by introducing the energy loss distribution. PMID:26550930

  5. Economic analysis of efficient distribution transformer trends

    SciTech Connect

    Downing, D.J.; McConnell, B.W.; Barnes, P.R.; Hadley, S.W.; Van Dyke, J.W.

    1998-03-01

    This report outlines an approach that will account for uncertainty in the development of evaluation factors used to identify transformer designs with the lowest total owning cost (TOC). The TOC methodology is described and the most highly variable parameters are discussed. The model is developed to account for uncertainties as well as statistical distributions for the important parameters. Sample calculations are presented. The TOC methodology is applied to data provided by two utilities in order to test its validity.
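    A rough sketch of the idea (assumed standard TOC form with A and B loss-evaluation factors in $/W; all prices, losses, and distributions below are invented):

        # Monte Carlo handling of uncertainty in loss-evaluation factors:
        # TOC = bid price + A * no-load loss + B * load loss, A and B in $/W.
        import numpy as np

        rng = np.random.default_rng(3)
        n = 10_000
        A = rng.normal(3.0, 0.5, n)   # $/W of no-load (core) loss
        B = rng.normal(1.0, 0.25, n)  # $/W of load (winding) loss

        designs = {                   # price ($), no-load loss (W), load loss (W)
            "low-loss core": (9_000, 150, 900),
            "standard":      (8_000, 250, 1_000),
        }
        for name, (price, nll, ll) in designs.items():
            toc = price + A * nll + B * ll
            print(f"{name:13s} mean TOC ${toc.mean():,.0f}, "
                  f"5-95% range ${np.percentile(toc, 5):,.0f}-${np.percentile(toc, 95):,.0f}")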

  6. An implementation and analysis of a randomized distributed stack 

    E-print Network

    Kirkland, Dustin Charles

    2013-02-22

    This thesis presents an algorithm for a randomized distributed stack, a coded simulator for examining its behavior, and an analysis of data collected from simulations configured to investigate its performance in particular situations...

  7. WATER DISTRIBUTION SYSTEM ANALYSIS: FIELD STUDIES, MODELING AND MANAGEMENT

    EPA Science Inventory

    The user's guide entitled “Water Distribution System Analysis: Field Studies, Modeling and Management” is a reference guide for water utilities and an extensive summarization of information designed to provide drinking water utility personnel (and related consultants and research...

  8. A Comparison of Distribution Free and Non-Distribution Free Factor Analysis Methods

    ERIC Educational Resources Information Center

    Ritter, Nicola L.

    2012-01-01

    Many researchers recognize that factor analysis can be conducted on both correlation matrices and variance-covariance matrices. Although most researchers extract factors from non-distribution free or parametric methods, researchers can also extract factors from distribution free or non-parametric methods. The nature of the data dictates the method…

  9. Building Finite Element Analysis Programs in Distributed Services Environment

    E-print Network

    Stanford University

    This paper describes an Internet-enabled framework that facilitates building finite element analysis programs as distributed services rather than as standalone desktop software, with structural analysis services distributed at remote sites. The goal is to provide a platform for building distributed applications that utilize software...

  10. Performance analysis of static locking in replicated distributed database systems

    NASA Technical Reports Server (NTRS)

    Kuang, Yinghong; Mukkamala, Ravi

    1991-01-01

    Data replication and transaction deadlocks can severely affect the performance of distributed database systems. Many current evaluation techniques ignore these aspects, because it is difficult to evaluate through analysis and time consuming to evaluate through simulation. Here, a technique is discussed that combines simulation and analysis to closely illustrate the impact of deadlock and evaluate the performance of replicated distributed databases with both shared and exclusive locks.

  12. Distributed bearing fault diagnosis based on vibration analysis

    NASA Astrophysics Data System (ADS)

    Dolenc, Boštjan; Boškoski, Pavle; Juričić, Đani

    2016-01-01

    Distributed bearing faults appear under various circumstances, for example due to electroerosion or the progression of localized faults. Bearings with distributed faults tend to generate more complex vibration patterns than those with localized faults. Despite the frequent occurrence of such faults, their diagnosis has attracted limited attention. This paper examines a method for the diagnosis of distributed bearing faults employing vibration analysis. The vibration patterns generated are modeled by incorporating the geometrical imperfections of the bearing components. Comparing envelope spectra of vibration signals shows that one can distinguish between localized and distributed faults. Furthermore, a diagnostic procedure for the detection of distributed faults is proposed. This is evaluated on several bearings with naturally developed distributed faults, which are compared with fault-free bearings and bearings with localized faults. It is shown experimentally that features extracted from vibrations in fault-free, localized and distributed fault conditions form clearly separable clusters, thus enabling diagnosis.
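    Envelope analysis of the kind referred to above is commonly implemented with a band-pass filter followed by a Hilbert-transform envelope; the following generic sketch (synthetic signal, invented fault frequency) shows standard practice rather than the authors' exact pipeline:

        # Envelope spectrum: band-pass around a resonance, take the Hilbert
        # envelope, then inspect its spectrum for fault-related frequencies.
        import numpy as np
        from scipy.signal import hilbert, butter, filtfilt

        fs = 20_000  # sampling rate (Hz), assumed
        t = np.arange(0, 1.0, 1 / fs)
        # Synthetic signal: 3 kHz resonance amplitude-modulated at 87 Hz
        # (a made-up fault frequency) plus noise.
        x = (1 + 0.8 * np.cos(2 * np.pi * 87 * t)) * np.sin(2 * np.pi * 3000 * t)
        x += 0.5 * np.random.default_rng(0).standard_normal(len(t))

        b, a = butter(4, [2000, 4000], btype="bandpass", fs=fs)
        envelope = np.abs(hilbert(filtfilt(b, a, x)))

        spec = np.abs(np.fft.rfft(envelope - envelope.mean()))
        freqs = np.fft.rfftfreq(len(envelope), 1 / fs)
        print(f"dominant envelope frequency: {freqs[spec.argmax()]:.0f} Hz")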

  13. Uncertainty in thermal process calculations due to variability in first-order and Weibull kinetic parameters.

    PubMed

    Halder, A; Datta, A K; Geedipalli, S S R

    2007-05-01

    Alternatives to the first-order model of death kinetics of microorganisms have been proposed as improvements in the calculation of lethality for a thermal process. Although such models can potentially improve predictions for many situations, this article tries to answer the question of whether the added complexities of these models are a worthwhile investment once we include the effect of uncertainties in various microbiological and process parameters. A Monte Carlo technique is used to include variability in kinetic parameters in lethality calculations for a number of heating processes, for both first-order and Weibull kinetics models. It is shown that uncertainties represented by the coefficient of variation in kinetic parameters lead to a wide range of final log-reduction predictions. With the same percent variability in kinetic parameters, uncertainty in the final log reduction for Weibull kinetics was smaller than or equal to that for first-order kinetics. Due to the large effect of variability in the input parameters on the final log reduction, the effort to move toward more accurate kinetic models needs to be weighed against inclusion of variability. PMID:17995767
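    A minimal Monte Carlo sketch of the comparison described here, using the standard log-reduction expressions (first-order: t/D; Weibull: (t/delta)**p) with invented parameter values and coefficients of variation:

        # Propagate kinetic-parameter variability into predicted log reduction
        # for an isothermal hold, for first-order and Weibull kinetics.
        import numpy as np

        rng = np.random.default_rng(11)
        n, t = 50_000, 3.0  # trials, heating time (min)
        cv = 0.20           # 20% coefficient of variation on each parameter

        D     = rng.normal(0.5, cv * 0.5, n).clip(min=1e-6)  # D-value (min)
        delta = rng.normal(0.4, cv * 0.4, n).clip(min=1e-6)  # Weibull scale (min)
        p     = rng.normal(0.9, cv * 0.9, n).clip(min=1e-6)  # Weibull shape

        first_order = t / D
        weibull     = (t / delta) ** p

        for name, lr in (("first-order", first_order), ("Weibull", weibull)):
            print(f"{name:11s} log reduction: median {np.median(lr):.1f}, "
                  f"5-95% range {np.percentile(lr, 5):.1f}-{np.percentile(lr, 95):.1f}")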

  14. Grammatical Analysis as a Distributed Neurobiological Function

    PubMed Central

    Bozic, Mirjana; Fonteneau, Elisabeth; Su, Li; Marslen-Wilson, William D

    2015-01-01

    Language processing engages large-scale functional networks in both hemispheres. Although it is widely accepted that left perisylvian regions have a key role in supporting complex grammatical computations, patient data suggest that some aspects of grammatical processing could be supported bilaterally. We investigated the distribution and the nature of grammatical computations across language processing networks by comparing two types of combinatorial grammatical sequences—inflectionally complex words and minimal phrases—and contrasting them with grammatically simple words. Novel multivariate analyses revealed that they engage a coalition of separable subsystems: inflected forms triggered left-lateralized activation, dissociable into dorsal processes supporting morphophonological parsing and ventral, lexically driven morphosyntactic processes. In contrast, simple phrases activated a consistently bilateral pattern of temporal regions, overlapping with inflectional activations in L middle temporal gyrus. These data confirm the role of the left-lateralized frontotemporal network in supporting complex grammatical computations. Critically, they also point to the capacity of bilateral temporal regions to support simple, linear grammatical computations. This is consistent with a dual neurobiological framework where phylogenetically older bihemispheric systems form part of the network that supports language function in the modern human, and where significant capacities for language comprehension remain intact even following severe left hemisphere damage. PMID:25421880

  15. Complex network analysis of water distribution systems

    E-print Network

    A. Yazdani; P. Jeffrey

    2011-04-01

    This paper explores a variety of strategies for understanding the formation, structure, efficiency and vulnerability of water distribution networks. Water supply systems are studied as spatially organized networks for which the practical applications of abstract evaluation methods are critically evaluated. Empirical data from benchmark networks are used to study the interplay between network structure and operational efficiency, reliability and robustness. Structural measurements are undertaken to quantify properties such as redundancy and optimal-connectivity, herein proposed as constraints in network design optimization problems. The role of the supply-demand structure towards system efficiency is studied and an assessment of the vulnerability to failures based on the disconnection of nodes from the source(s) is undertaken. The absence of conventional degree-based hubs (observed through uncorrelated non-heterogeneous sparse topologies) prompts an alternative approach to studying structural vulnerability based on the identification of network cut-sets and optimal connectivity invariants. A discussion on the scope, limitations and possible future directions of this research is provided.

  16. Effect of Porosity on Strength Distribution of Microcrystalline Cellulose.

    PubMed

    Keleş, Özgür; Barcenas, Nicholas P; Sprys, Daniel H; Bowman, Keith J

    2015-12-01

    Fracture strength of pharmaceutical compacts varies even for nominally identical samples, which directly affects compaction, comminution, and tablet dosage forms. However, the relationships between porosity and mechanical behavior of compacts are not clear. Here, the effects of porosity on fracture strength and fracture statistics of microcrystalline cellulose compacts were investigated through diametral compression tests. Weibull modulus, a key parameter in Weibull statistics, was observed to decrease with increasing porosity from 17 to 56 vol.%, based on eight sets of compacts at different porosity levels, each set containing ~50 samples, a total of 407 tests. Normal distribution fits better to fracture data for porosity less than 20 vol.%, whereas Weibull distribution is a better fit in the limit of highest porosity. Weibull moduli from 840 unique finite element simulations of isotropic porous materials were compared to experimental Weibull moduli from this research and results on various pharmaceutical materials. Deviations from Weibull statistics are observed. The effect of porosity on fracture strength can be described by a recently proposed micromechanics-based formula. PMID:26022545
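    For reference, the Weibull modulus in studies like this one is commonly estimated by linear regression on a Weibull probability plot; a sketch with synthetic strengths (the median-rank plotting position is an assumption, not necessarily the authors' choice):

        # Estimate the Weibull modulus m: assign median-rank failure
        # probabilities to sorted strengths and regress ln(-ln(1-F)) on ln(sigma);
        # the slope of the fitted line is m.
        import numpy as np

        rng = np.random.default_rng(5)
        strengths = rng.weibull(8.0, size=50) * 12.0  # synthetic strengths (MPa)

        s = np.sort(strengths)
        n = len(s)
        F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # median-rank estimator

        x = np.log(s)
        y = np.log(-np.log(1.0 - F))
        m, intercept = np.polyfit(x, y, 1)
        sigma0 = np.exp(-intercept / m)               # characteristic strength

        print(f"Weibull modulus m = {m:.1f}, characteristic strength = {sigma0:.1f} MPa")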

  17. Scaling analysis of the galaxy distribution in the SSRS catalog

    NASA Astrophysics Data System (ADS)

    Campos, A.; Dominguez-Tenreiro, R.; Yepes, G.

    1994-12-01

    A detailed analysis of the galaxy distribution in the Southern Sky Redshift Survey (SSRS) by means of the multifractal or scaling formalism is presented. It is shown that galaxies cluster in different ways according to their morphological type as well as their size. Elliptical galaxies are more clustered than spirals, even at scales up to 15 h^-1 Mpc, whereas no clear segregation between early and late spirals is found. It is also shown that smaller galaxies distribute more homogeneously than larger galaxies.

  19. Modeling and analysis of solar distributed generation

    NASA Astrophysics Data System (ADS)

    Ortiz Rivera, Eduardo Ivan

    Recent changes in the global economy are having a big impact on our daily lives. The price of oil is increasing and reserves are shrinking every day. Dramatic demographic changes are also affecting the viability of the electric infrastructure and ultimately the economic future of the industry. These are some of the reasons many countries are looking to alternative sources to produce electric energy. The most common form of green energy in our daily life is solar energy. Converting solar energy into electrical energy requires solar panels, dc-dc converters, power control, sensors, and inverters. In this work, a photovoltaic module (PVM) model using the electrical characteristics provided by the manufacturer data sheet is presented for power system applications. Experimental results from testing are shown, verifying the proposed PVM model. Three maximum power point tracking (MPPT) algorithms are also presented to obtain the maximum power from a PVM. The first MPPT algorithm is a method based on Rolle's and Lagrange's theorems and can provide at least an approximate answer to a family of transcendental functions that cannot be solved using differential calculus. The second MPPT algorithm is based on the approximation of the proposed PVM model using fractional polynomials, where the shape, boundary conditions, and performance of the proposed PVM model are satisfied. The third MPPT algorithm is based on the determination of the optimal duty cycle for a dc-dc converter and prior knowledge of the load or load matching conditions. In addition, four algorithms to calculate the effective irradiance level and temperature over a photovoltaic module are presented. The main motivations for these algorithms are monitoring climate conditions, eliminating temperature and solar irradiance sensors, reducing the cost of a photovoltaic inverter system, and developing new algorithms to be integrated with maximum power point tracking algorithms. Finally, several PV power applications are presented, such as circuit analysis for a load connected to two different PV arrays, speed control for a dc motor connected to a PVM, and a novel single-phase photovoltaic inverter system using the Z-source converter.

  20. Drop Size Distribution analysis in Sicily via optical disdrometer

    NASA Astrophysics Data System (ADS)

    Schiera, R.; Cancelliere, A.

    2009-04-01

    Knowledge of the drop size distribution of rainfall is fundamental for rain measurement with meteorological radars. In this paper, preliminary results from the analysis of drop size distributions obtained with an optical disdrometer in Sicily are presented. In particular, observed drop size distributions sampled at 1 minute have been analyzed to estimate the main parameters of the Ulbrich probability distribution. Different estimation methods have been applied, considering each minute of observation separately or pooling observations with rainfall intensities falling within a given class. Goodness of fit of the Ulbrich distribution has been assessed by means of statistical tests. The results indicate that the theoretical distribution fits the experimental values well. Furthermore, the terminal velocities of the drops have also been investigated, and the resulting pairs of values (diameter, velocity) have been compared with the analytical expression of Atlas et al. (1973), which also appears to fit them. Finally, an analysis of the relationship between the parameters of the Ulbrich distribution and the intensity of precipitation is presented.

  1. Adaptive walks and distribution of beneficial fitness effects.

    PubMed

    Seetharaman, Sarada; Jain, Kavita

    2014-04-01

    We study the adaptation dynamics of a maladapted asexual population on rugged fitness landscapes with many local fitness peaks. The distribution of beneficial fitness effects is assumed to belong to one of the three extreme value domains, viz. Weibull, Gumbel, and Fréchet. We work in the strong selection-weak mutation regime in which beneficial mutations fix sequentially, and the population performs an uphill walk on the fitness landscape until a local fitness peak is reached. A striking prediction of our analysis is that the fitness difference between successive steps follows a pattern of diminishing returns in the Weibull domain and accelerating returns in the Fréchet domain, as the initial fitness of the population is increased. These trends are found to be robust with respect to fitness correlations. We believe that this result can be exploited in experiments to determine the extreme value domain of the distribution of beneficial fitness effects. Our work here differs significantly from the previous ones that assume the selection coefficient to be small. On taking large effect mutations into account, we find that the length of the walk shows different qualitative trends from those derived using small selection coefficient approximation. PMID:24274696
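    One hedged way to identify the extreme value domain from data, suggested by peaks-over-threshold theory: fit a generalized Pareto distribution to threshold excesses and read the domain off the sign of the shape parameter (negative: Weibull; near zero: Gumbel; positive: Fréchet). A sketch on synthetic data:

        # Classify the extreme value domain via the generalized Pareto shape xi.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        fitness_effects = rng.exponential(0.05, size=5000)  # Gumbel-domain toy data

        threshold = np.quantile(fitness_effects, 0.95)
        excesses = fitness_effects[fitness_effects > threshold] - threshold

        xi, loc, scale = stats.genpareto.fit(excesses, floc=0)
        domain = "Weibull" if xi < -0.05 else ("Frechet" if xi > 0.05 else "~Gumbel")
        print(f"GPD shape xi = {xi:+.2f} -> {domain} domain")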

  2. First Experiences with LHC Grid Computing and Distributed Analysis

    SciTech Connect

    Fisk, Ian

    2010-12-01

    This presentation describes the experiences of the LHC experiments with grid computing, with a focus on distributed analysis. After many years of development, preparation, exercises, and validation the LHC (Large Hadron Collider) experiments are in operation. The computing infrastructure has been heavily utilized in the first 6 months of data collection. The general experience of exploiting the grid infrastructure for organized processing and preparation is described, as well as the successes employing the infrastructure for distributed analysis. Finally, the expected evolution and future plans are outlined.

  3. Entropy Methods For Univariate Distributions in Decision Analysis

    NASA Astrophysics Data System (ADS)

    Abbas, Ali E.

    2003-03-01

    One of the most important steps in decision analysis practice is the elicitation of the decision-maker's belief about an uncertainty of interest in the form of a representative probability distribution. However, the probability elicitation process is a task that involves many cognitive and motivational biases. Alternatively, the decision-maker may provide other information about the distribution of interest, such as its moments, and the maximum entropy method can be used to obtain a full distribution subject to the given moment constraints. In practice, however, decision-makers cannot readily provide moments for the distribution, and are much more comfortable providing information about the fractiles of the distribution of interest or bounds on its cumulative probabilities. In this paper we present a graphical method to determine the maximum entropy distribution between upper and lower probability bounds and provide an interpretation for the shape of the maximum entropy distribution subject to fractile constraints (FMED). We also discuss the problems with the FMED in that it is discontinuous and flat over each fractile interval. We present a heuristic approximation to a distribution when, in addition to its fractiles, we know it is continuous, and we work through full examples to illustrate the approach.
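    The flatness of the FMED over each fractile interval follows because, between finite bounds, the maximum entropy density matching given fractiles is piecewise uniform. A small numeric illustration with invented fractile assessments:

        # The maximum entropy density matching fractile constraints between
        # finite bounds is flat over each fractile interval.
        import numpy as np

        # (value, cumulative probability) pairs elicited from a decision-maker,
        # including the lower and upper bounds.
        fractiles = [(0.0, 0.0), (10.0, 0.25), (14.0, 0.50), (20.0, 0.75), (40.0, 1.0)]

        xs = np.array([x for x, _ in fractiles])
        ps = np.array([p for _, p in fractiles])
        densities = np.diff(ps) / np.diff(xs)  # flat density on each interval

        for (a, b), d in zip(zip(xs[:-1], xs[1:]), densities):
            print(f"density on [{a:4.1f}, {b:4.1f}) = {d:.4f}")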

  4. ANALYSIS OF DISTRIBUTION FEEDER LOSSES DUE TO ADDITION OF DISTRIBUTED PHOTOVOLTAIC GENERATORS

    SciTech Connect

    Tuffner, Francis K.; Singh, Ruchi

    2011-08-09

    Distributed generators (DG) are small scale power supplying sources owned by customers or utilities and scattered throughout the power system distribution network. Distributed generation can be both renewable and non-renewable. Addition of distributed generation is primarily to increase feeder capacity and to provide peak load reduction. However, this addition comes with several impacts on the distribution feeder. Several studies have shown that addition of DG leads to reduction of feeder loss. However, most of these studies have considered lumped load and distributed load models to analyze the effects on system losses, where the dynamic variation of load due to seasonal changes is ignored. It is very important for utilities to minimize the losses under all scenarios to decrease revenue losses, promote efficient asset utilization, and therefore, increase feeder capacity. This paper will investigate an IEEE 13-node feeder populated with photovoltaic generators on detailed residential houses with water heater, Heating Ventilation and Air conditioning (HVAC) units, lights, and other plug and convenience loads. An analysis of losses for different power system components, such as transformers, underground and overhead lines, and triplex lines, will be performed. The analysis will utilize different seasons and different solar penetration levels (15%, 30%).

  5. Energy loss analysis of an integrated space power distribution system

    NASA Technical Reports Server (NTRS)

    Kankam, M. David; Ribeiro, P. F.

    1992-01-01

    The results of studies related to conceptual topologies of an integrated utility-like space power system are described. The system topologies are comparatively analyzed by considering their transmission energy losses as functions of mainly distribution voltage level and load composition. The analysis is expedited by use of Distribution System Analysis and Simulation (DSAS) software. This computer program, recently developed by the Electric Power Research Institute (EPRI), uses improved load models to solve the power flow within the system. However, present shortcomings of the software with regard to space applications, and incompletely defined characteristics of a space power system, make the results applicable only to the fundamental trends of energy losses of the topologies studied. Accounting for the effects of the various parameters on system performance, as done here, can form part of a planning tool for a space power distribution system.

  6. GIS-based poverty and population distribution analysis in China

    NASA Astrophysics Data System (ADS)

    Cui, Jing; Wang, Yingjie; Yan, Hong

    2009-07-01

    Geographically, poverty status is not only related to socio-economic factors but also strongly affected by the geographical environment. In this paper, a GIS-based poverty and population distribution analysis method is introduced for revealing their regional differences. More than 100000 poor villages and 592 national key poor counties are chosen for the analysis. The results show that poverty distribution tends to concentrate in most of west China and mountainous rural areas of mid China. Furthermore, data from the fifth census are overlaid on those poor areas to reveal the internal diversity of their socio-economic characteristics. By overlaying poverty-related socio-economic parameters such as sex ratio, illiteracy, education level, percentage of ethnic minorities, and family composition, the findings show that poverty distribution is strongly correlated with high illiteracy rates, high percentages of ethnic minorities, and larger family sizes.

  7. Data synthesis and display programs for wave distribution function analysis

    NASA Technical Reports Server (NTRS)

    Storey, L. R. O.; Yeh, K. J.

    1992-01-01

    At the National Space Science Data Center (NSSDC) software was written to synthesize and display artificial data for use in developing the methodology of wave distribution analysis. The software comprises two separate interactive programs, one for data synthesis and the other for data display.

  8. Operadic Analysis of Distributed Systems

    E-print Network

    Spivak, David

    NASA/CR-2015-xxxxxx: Operadic Analysis of Distributed Systems. Kevin Schweiker, Srivatsan Varadarajan, et al. Cambridge, MA, September 2015.

  9. Temperature/stress distribution by finite element analysis

    E-print Network

    Pennycook, Steve

    The Corrosion Science and Technology Group, part of the Materials Science and Technology Division, does a broad range of research to support all forms of energy and transportation, including temperature/stress distribution studies by finite element analysis...

  10. Computational Analysis and Paleogenomics

    E-print Network

    Pritham, Ellen J.

    Transposable elements have had an enormous impact on the evolution of genes and the dynamics of genomes. ... the human genome, Alu and L1 (Deininger and Schmid, 1976; Rubin et al., 1980; Singer, 1982). ...

  11. Surface Analysis and Grain size Distribution of Flood Deposits in

    E-print Network

    Winglee, Robert M.

    Introduction and Motivation: The Siang River Valley in Northeast India preserves a record of Holocene floods, with deposits at elevations of 150 m above the modern river elevation. There was a well-documented flood from...

  12. An Analysis of Currency Options and Exchange Rate Distributions

    E-print Network

    This paper investigates several aspects of the pricing of currency and currency options. In Section 2, a standard model for the behavior of stocks is applied to currency exchange rates. This model...

  13. Principal Component Analysis for Distributed Data Sets with Updating

    E-print Network

    Chan, Raymond

    ... data sets is a key requirement in data mining. A powerful technique for this purpose is principal component analysis (PCA). PCA-based clustering algorithms are effective when the data sets are...

  14. Rapid Analysis of Mass Distribution of Radiation Shielding

    NASA Technical Reports Server (NTRS)

    Zapp, Edward

    2007-01-01

    Radiation Shielding Evaluation Toolset (RADSET) is a computer program that rapidly calculates the spatial distribution of mass of an arbitrary structure for use in ray-tracing analysis of the radiation-shielding properties of the structure. RADSET was written to be used in conjunction with unmodified commercial computer-aided design (CAD) software that provides access to data on the structure and generates selected three-dimensional-appearing views of the structure. RADSET obtains raw geometric, material, and mass data on the structure from the CAD software. From these data, RADSET calculates the distribution(s) of the masses of specific materials about any user-specified point(s). The results of these mass-distribution calculations are imported back into the CAD computing environment, wherein the radiation-shielding calculations are performed.

  15. Molecular Isotopic Distribution Analysis (MIDAs) with Adjustable Mass Accuracy

    NASA Astrophysics Data System (ADS)

    Alves, Gelio; Ogurtsov, Aleksey Y.; Yu, Yi-Kuo

    2014-01-01

    In this paper, we present Molecular Isotopic Distribution Analysis (MIDAs), a new software tool designed to compute molecular isotopic distributions with adjustable accuracies. MIDAs offers two algorithms, one polynomial-based and one Fourier-transform-based, both of which compute molecular isotopic distributions accurately and efficiently. The polynomial-based algorithm contains few novel aspects, whereas the Fourier-transform-based algorithm consists mainly of improvements to other existing Fourier-transform-based algorithms. We have benchmarked the performance of the two algorithms implemented in MIDAs with that of eight software packages (BRAIN, Emass, Mercury, Mercury5, NeutronCluster, Qmass, JFC, IC) using a consensus set of benchmark molecules. Under the proposed evaluation criteria, MIDAs's algorithms, JFC, and Emass compute with comparable accuracy the coarse-grained (low-resolution) isotopic distributions and are more accurate than the other software packages. For fine-grained isotopic distributions, we compared IC, MIDAs's polynomial algorithm, and MIDAs's Fourier transform algorithm. Among the three, IC and MIDAs's polynomial algorithm compute isotopic distributions that better resemble their corresponding exact fine-grained (high-resolution) isotopic distributions. MIDAs can be accessed freely through a user-friendly web-interface at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/midas/index.html.
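    As a toy illustration of the polynomial (convolution) idea behind such tools (this is not MIDAs's algorithm or interface), a coarse-grained isotopic distribution on a 1-Da grid can be built by repeatedly convolving element abundance vectors:

        # Coarse-grained isotopic distribution by repeated convolution of
        # per-element isotope abundance vectors (approximate abundances).
        import numpy as np

        ISOTOPES = {  # abundance indexed by nominal mass offset
            "C": [0.9893, 0.0107],
            "H": [0.999885, 0.000115],
            "N": [0.99636, 0.00364],
            "O": [0.99757, 0.00038, 0.00205],
        }

        def isotopic_distribution(formula):
            """formula: dict element -> count, e.g. glucose C6H12O6."""
            dist = np.array([1.0])
            for elem, count in formula.items():
                for _ in range(count):
                    dist = np.convolve(dist, ISOTOPES[elem])
            return dist / dist.sum()

        dist = isotopic_distribution({"C": 6, "H": 12, "O": 6})
        for i, p in enumerate(dist[:4]):
            print(f"M+{i}: {p:.4f}")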

  16. Analysis of exposure biomarker relationships with the Johnson SBB distribution.

    PubMed

    Flynn, Michael R

    2007-08-01

    Application of the Johnson bivariate SB distribution, or alternatively the SBB distribution, is presented here as a tool for the analysis of concentration data and in particular for characterizing the relationship between exposures and biomarkers. Methods for fitting the marginal SB distributions are enhanced by maximizing the Shapiro-Wilk W statistic. The subsequent goodness of fit for the SBB distribution is evaluated with a multivariate Z statistic. Median regression results are extended here with methods for calculating the mean and standard deviation of the conditional array distributions. Application of these methods to the evaluation of the relationship between exposure to airborne bromopropane and the biomarker of serum bromide concentration suggests that the SBB distribution may be useful in stratifying workers by exposure based on a biomarker. A comparison with the usual two-parameter log-normal approach shows that in some cases the SBB distribution may offer advantages. PMID:17693427

  17. Analyzing Distributed Functions in an Integrated Hazard Analysis

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry; Massie, Michael J.

    2010-01-01

    Large scale integration of today's aerospace systems is achievable through the use of distributed systems. Validating the safety of distributed systems is significantly more difficult than for centralized systems because of the complexity of the interactions between simultaneously active components. Integrated hazard analysis (IHA), a process used to identify unacceptable risks and to provide a means of controlling them, can be applied to either centralized or distributed systems. IHA, though, must be tailored to fit the particular system being analyzed. Distributed systems, for instance, must be analyzed for hazards in terms of the functions that rely on them. This paper will describe systems-oriented IHA techniques (as opposed to traditional failure-event or reliability techniques) that should be employed for distributed systems in aerospace environments. Special considerations will be addressed when dealing with specific distributed systems such as active thermal control, electrical power, command and data handling, and software systems (including the interaction with fault management systems). Because of the significance of second-order effects in large scale distributed systems, the paper will also describe how to analyze the effects of secondary functions on other secondary functions through the use of channelization.

  18. Probabilistic Life and Reliability Analysis of Model Gas Turbine Disk

    NASA Technical Reports Server (NTRS)

    Holland, Frederic A.; Melis, Matthew E.; Zaretsky, Erwin V.

    2002-01-01

    In 1939, W. Weibull developed what is now commonly known as the "Weibull Distribution Function" primarily to determine the cumulative strength distribution of small sample sizes of elemental fracture specimens. In 1947, G. Lundberg and A. Palmgren, using the Weibull Distribution Function developed a probabilistic lifing protocol for ball and roller bearings. In 1987, E. V. Zaretsky using the Weibull Distribution Function modified the Lundberg and Palmgren approach to life prediction. His method incorporates the results of coupon fatigue testing to compute the life of elemental stress volumes of a complex machine element to predict system life and reliability. This paper examines the Zaretsky method to determine the probabilistic life and reliability of a model gas turbine disk using experimental data from coupon specimens. The predicted results are compared to experimental disk endurance data.
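    The system-life combination step that such Weibull-based methods rest on can be sketched as follows, assuming a common Weibull slope across elements and invented elemental lives (this is not the paper's disk model):

        # If each elemental volume i has life L10_i at 90% reliability and all
        # share Weibull slope e, multiplying elemental reliabilities gives
        #   L10_sys = (sum_i L10_i**(-e))**(-1/e).
        import numpy as np

        e = 2.0                                       # assumed Weibull slope
        L10 = np.array([8.0e4, 1.2e5, 2.0e5, 5.0e5])  # elemental lives (cycles)

        L10_sys = np.sum(L10 ** (-e)) ** (-1.0 / e)
        print(f"system L10 = {L10_sys:,.0f} cycles")  # lower than any single element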

  19. Analysis and interpretation of DNA distributions measured by flow cytometry

    SciTech Connect

    Dean, P.N.; Gray, J.W.; Dolbeare, F.A.

    1982-01-01

    A principal use of flow cytometers is for the measurement of fluorescence distributions of cells stained with DNA specific dyes. A large amount of effort has been and is being expended currently in the analysis of these distributions for the fractions of cells in the G1, S, and G2+M phases of the cell cycle. Several methods of analysis have been proposed and are being used; new methods continue to be introduced. Many, if not most, of these methods differ only in the mathematical function used to represent the phases of the cell cycle and represent attempts to fit exactly distributions with known phase fractions or unusual shapes. In this paper we show that these refinements probably are not necessary because of cell staining and sampling variability. This hypothesis was tested by measuring fluorescence distributions for Chinese hamster ovary and KHT mouse sarcoma cells stained with Hoechst-33258, chromomycin A3, propidium iodide, and acriflavine. Our results show that: a) single measurements can result in phase fraction estimates that are in error by as much as 40% for G2+M phase and 15 to 20% for G1 and S phases; b) different dyes can yield phase fraction estimates that differ by as much as 40% due to differences in DNA specificity; c) the shapes of fluorescence distributions and their interpretation are very dependent on the dye being used and on its binding mechanism. 7 figures, 2 tables.

  20. Integrating software architectures for distributed simulations and simulation analysis communities.

    SciTech Connect

    Goldsby, Michael E.; Fellig, Daniel; Linebarger, John Michael; Moore, Patrick Curtis; Sa, Timothy J.; Hawley, Marilyn F.

    2005-10-01

    The one-year Software Architecture LDRD (No. 79819) was a cross-site effort between Sandia California and Sandia New Mexico. The purpose of this research was to further develop and demonstrate integrating software architecture frameworks for distributed simulation and distributed collaboration in the homeland security domain. The integrated frameworks were initially developed through the Weapons of Mass Destruction Decision Analysis Center (WMD-DAC), sited at SNL/CA, and the National Infrastructure Simulation & Analysis Center (NISAC), sited at SNL/NM. The primary deliverable was a demonstration of both a federation of distributed simulations and a federation of distributed collaborative simulation analysis communities in the context of the same integrated scenario, which was the release of smallpox in San Diego, California. To our knowledge this was the first time such a combination of federations under a single scenario has ever been demonstrated. A secondary deliverable was the creation of the standalone GroupMeld™ collaboration client, which uses the GroupMeld™ synchronous collaboration framework. In addition, a small pilot experiment that used both integrating frameworks allowed a greater range of crisis management options to be performed and evaluated than would have been possible without the use of the frameworks.

  1. Generalized Exponential Distribution in Flood Frequency Analysis for Polish Rivers

    PubMed Central

    Markiewicz, Iwona; Strupczewski, Witold G.; Bogdanowicz, Ewa; Kochanek, Krzysztof

    2015-01-01

    Many distributions have been used in flood frequency analysis (FFA) for fitting the flood extremes data. However, as shown in the paper, the scatter of Polish data plotted on the moment ratio diagram shows that there is still room for a new model. In the paper, we study the usefulness of the generalized exponential (GE) distribution in flood frequency analysis for Polish Rivers. We investigate the fit of GE distribution to the Polish data of the maximum flows in comparison with the inverse Gaussian (IG) distribution, which in our previous studies showed the best fitting among several models commonly used in FFA. Since the use of a discrimination procedure without the knowledge of its performance for the considered probability density functions may lead to erroneous conclusions, we compare the probability of correct selection for the GE and IG distributions along with the analysis of the asymptotic model error with respect to the upper quantile values. As an application, both GE and IG distributions are alternatively assumed for describing the annual peak flows for several gauging stations of Polish Rivers. To find the best fitting model, four discrimination procedures are used. In turn, they are based on the maximized logarithm of the likelihood function (K procedure), on the density function of the scale transformation maximal invariant (QK procedure), on the Kolmogorov-Smirnov statistics (KS procedure) and the fourth procedure based on the differences between the ML estimate of 1% quantile and its value assessed by the method of moments and linear moments, in sequence (R procedure). Due to the uncertainty of choosing the best model, the method of aggregation is applied to estimate the maximum flow quantiles. PMID:26657239
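    A hedged sketch (not the authors' code) of the central computation: maximum likelihood fitting of the GE distribution, F(x) = (1 - exp(-lam*x))**alpha, followed by the 1% exceedance quantile (F = 0.99) used in flood frequency analysis; the data are synthetic annual maxima:

        # MLE for the generalized exponential (Gupta-Kundu) distribution and
        # the quantile x_p = -ln(1 - p**(1/alpha)) / lam.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(9)
        true_alpha, true_lam = 3.0, 0.01
        flows = -np.log(1.0 - rng.uniform(size=60) ** (1.0 / true_alpha)) / true_lam

        def neg_loglik(theta):
            alpha, lam = np.exp(theta)  # optimize on log scale for positivity
            z = 1.0 - np.exp(-lam * flows)
            return -np.sum(np.log(alpha * lam) - lam * flows + (alpha - 1) * np.log(z))

        res = minimize(neg_loglik, x0=np.log([1.0, 1.0 / flows.mean()]),
                       method="Nelder-Mead")
        alpha_hat, lam_hat = np.exp(res.x)

        q99 = -np.log(1.0 - 0.99 ** (1.0 / alpha_hat)) / lam_hat
        print(f"alpha={alpha_hat:.2f}, lambda={lam_hat:.4f}, 1% quantile={q99:.0f} m^3/s")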

  2. Volumetric relief map for intracranial cerebrospinal fluid distribution analysis.

    PubMed

    Lebret, Alain; Kenmochi, Yukiko; Hodel, Jérôme; Rahmouni, Alain; Decq, Philippe; Petit, Éric

    2015-09-01

    Cerebrospinal fluid imaging plays a significant role in the clinical diagnosis of brain disorders, such as hydrocephalus and Alzheimer's disease. While three-dimensional images of cerebrospinal fluid are very detailed, the complex structures they contain can be time-consuming and laborious to interpret. This paper presents a simple technique that represents the intracranial cerebrospinal fluid distribution as a two-dimensional image in such a way that the total fluid volume is preserved. We call this a volumetric relief map, and show its effectiveness in a characterization and analysis of fluid distributions and networks in hydrocephalus patients and healthy adults. PMID:26125975

  3. Influence Of Lateral Load Distributions On Pushover Analysis Effectiveness

    SciTech Connect

    Colajanni, P.; Potenzone, B.

    2008-07-08

    The effectiveness of two simple load distributions for pushover analysis recently proposed by the authors is investigated through a comparative study, involving static and dynamic analyses of the seismic response of eccentrically braced frames. It is shown that in the upper floors only multimodal pushover procedures provide results close to the dynamic profile, while the proposed load patterns are always conservative in the lower floors. They overestimate the seismic response less than the uniform distribution, representing a reliable alternative to the uniform or more sophisticated adaptive procedures proposed by seismic codes.

  4. HammerCloud: A Stress Testing System for Distributed Analysis

    NASA Astrophysics Data System (ADS)

    van der Ster, Daniel C.; Elmsheuser, Johannes; Úbeda García, Mario; Paladin, Massimo

    2011-12-01

    Distributed analysis of LHC data is an I/O-intensive activity which places large demands on the internal network, storage, and local disks at remote computing facilities. Commissioning and maintaining a site to provide an efficient distributed analysis service is therefore a challenge which can be aided by tools to help evaluate a variety of infrastructure designs and configurations. HammerCloud is one such tool; it is a stress testing service which is used by central operations teams, regional coordinators, and local site admins to (a) submit an arbitrary number of analysis jobs to a number of sites, (b) maintain a predefined number of jobs running at steady state at the sites under test, (c) produce web-based reports summarizing the efficiency and performance of the sites under test, and (d) present a web interface for historical test results to both evaluate progress and compare sites. HammerCloud was built around the distributed analysis framework Ganga, exploiting its API for grid job management. HammerCloud has been employed by the ATLAS experiment for continuous testing of many sites worldwide, and also during large scale computing challenges such as STEP'09 and UAT'09, where the scale of the tests exceeded 10,000 concurrently running and 1,000,000 total jobs over multi-day periods. In addition, HammerCloud is being adopted by the CMS experiment; the plugin structure of HammerCloud allows the execution of CMS jobs using their official tool (CRAB).

  5. A comprehensive study of distribution laws for the fragments of Košice meteorite

    NASA Astrophysics Data System (ADS)

    Gritsevich, Maria; Vinnikov, Vladimir; Kohout, Tomáš; Tóth, Juraj; Peltoniemi, Jouni; Turchak, Leonid; Virtanen, Jenni

    2014-03-01

    In this study, we conduct a detailed analysis of the Košice meteorite fall (February 28, 2010), to derive a reliable law describing the mass distribution among the recovered fragments. In total, 218 fragments of the Košice meteorite, with a total mass of 11.285 kg, were analyzed. Bimodal Weibull, bimodal Grady, and bimodal lognormal distributions are found to be the most appropriate for describing the Košice fragmentation process. Based on the assumption of bimodal lognormal, bimodal Grady, bimodal sequential, and bimodal Weibull fragmentation distributions, we suggest that, prior to further extensive fragmentation in the lower atmosphere, the Košice meteoroid was initially represented by two independent pieces with cumulative residual masses of approximately 2 and 9 kg, respectively. The smaller piece produced about 2 kg of multiple lightweight meteorite fragments with a mean of around 12 g. The larger one resulted in 9 kg of meteorite fragments recovered on the ground, including the two heaviest pieces of 2.374 kg and 2.167 kg, with a mean of around 140 g. Based on our investigations, we conclude that two to three larger fragments of 500-1000 g each should exist, but were either not recovered or not reported by illegal meteorite hunters.
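
    Since a bimodal lognormal distribution of fragment mass is simply a two-component Gaussian mixture in log-mass, this kind of fit can be sketched with off-the-shelf tools; the masses below are synthetic stand-ins, not the Košice data.

```python
# Fit a bimodal lognormal mass distribution as a 2-component Gaussian mixture
# in log-mass space; compare BIC against a unimodal fit.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical stand-in for the 218 recovered fragment masses, in grams
masses = np.concatenate([rng.lognormal(np.log(12.0), 0.8, 170),
                         rng.lognormal(np.log(140.0), 0.9, 48)])

log_m = np.log(masses).reshape(-1, 1)
gm = GaussianMixture(n_components=2, random_state=0).fit(log_m)

for w, mu, var in zip(gm.weights_, gm.means_.ravel(), gm.covariances_.ravel()):
    print(f"weight={w:.2f}  median mass={np.exp(mu):.1f} g  sigma={np.sqrt(var):.2f}")

# Lower BIC for the 2-component fit indicates the bimodal description is warranted.
print("BIC 1-comp:", GaussianMixture(1, random_state=0).fit(log_m).bic(log_m))
print("BIC 2-comp:", gm.bic(log_m))
```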

  6. Distributed and interactive visual analysis of omics data.

    PubMed

    Farag, Yehia; Berven, Frode S; Jonassen, Inge; Petersen, Kjell; Barsnes, Harald

    2015-11-01

    The amount of publicly shared proteomics data has grown exponentially over the last decade as the solutions for sharing and storing the data have improved. However, the use of the data is often limited by the manner in which it is made available. There are two main approaches: download and inspect the proteomics data locally, or interact with the data via one or more web pages. The first is limited by having to download the data and thus requires local computational skills and resources, while the latter is most often limited in terms of interactivity and the analysis options available. A solution is to develop web-based systems supporting distributed and fully interactive visual analysis of proteomics data. The use of a distributed architecture makes it possible to perform the computational analysis at the server, while the results of the analysis can be displayed via a web browser without the need to download the whole dataset. Here the challenges related to developing such systems for omics data will be discussed, especially how this allows for multiple connected interactive visual displays of an omics dataset in a web-based setting, and the benefits this provides for computational analysis of proteomics data. This article is part of a Special Issue entitled: Computational Proteomics. PMID:26047716

  7. Economic Analysis of Trickle Distribution Systems, Texas High Plains.

    E-print Network

    Osborn, James E.; Young, Alan M.; Wilke, Otto C.; Wendt, Charles

    1977-01-01

    higher than similar calculations for furrow systems. The results were similar for sorghum. … a major source … water in the High Plains. Currently, 78 percent of … in the region is irrigated by furrow. The application efficiency for furrow irrigation in this region has been estimated to be as low as 50 percent (1). Trickle (drip) irrigation is a method for distributing water…

  8. Electrical Power Distribution and Control Modeling and Analysis

    NASA Technical Reports Server (NTRS)

    Fu, Johnny S.; Liffring, Mark; Mehdi, Ishaque S.

    2001-01-01

    This slide presentation reviews the use of Electrical Power Distribution and Control (EPD&C) Modeling and how modeling can support analysis. The presentation discusses using the EASY5 model to simulate and analyze the Space Shuttle Electric Auxiliary Power Unit. Diagrams of the model schematics are included, as well as graphs of the battery cell impedance, hydraulic load dynamics, and EPD&C response to hydraulic load variations.

  9. Automatic analysis of attack data from distributed honeypot network

    NASA Astrophysics Data System (ADS)

    Safarik, Jakub; Voznak, Miroslav; Rezac, Filip; Partila, Pavol; Tomala, Karel

    2013-05-01

    There are many ways of getting real data about malicious activity in a network. One of them relies on masquerading monitoring servers as production ones. These servers are called honeypots, and data about attacks on them provide valuable information about actual attacks and the techniques used by hackers. The article describes a distributed topology of honeypots, which was developed with a strong orientation toward monitoring IP telephony traffic. IP telephony servers can be easily exposed to various types of attacks, and without protection, this situation can lead to loss of money and other unpleasant consequences. Using a distributed topology with honeypots placed in different geographical locations and networks provides more valuable and independent results. With an automatic system gathering information from all honeypots, it is possible to work with all the information at one centralized point. Communication between the honeypots and the centralized data store uses secure SSH tunnels, and the server communicates only with authorized honeypots. The centralized server also automatically analyses data from each honeypot. Results of this analysis, as well as other statistical data about malicious activity, are easily accessible through a built-in web server. All statistical and analysis reports serve as the information basis for an algorithm which classifies the different types of VoIP attacks used. The web interface then provides a tool for quick comparison and evaluation of actual attacks in all monitored networks. The article describes both the honeypot nodes in the distributed architecture, which monitor suspicious activity, and the methods and algorithms used on the server side to analyze the gathered data.

  10. Global sensitivity analysis in wind energy assessment

    NASA Astrophysics Data System (ADS)

    Tsvetkova, O.; Ouarda, T. B.

    2012-12-01

    Wind energy is one of the most promising renewable energy sources. Nevertheless, it is not yet a common source of energy, although there is enough wind potential to supply the world's energy demand. One of the most prominent obstacles to employing wind energy is the uncertainty associated with wind energy assessment. Global sensitivity analysis (SA) studies how the variation of input parameters in an abstract model affects the variation of the variable of interest, or output variable. It also provides ways to calculate explicit measures of importance of input variables (first order and total effect sensitivity indices) with regard to their influence on the variation of the output variable. Two methods of determining these indices were applied and compared: the brute force method and the best practice estimation procedure. In this study, a methodology for conducting global SA of wind energy assessment at the planning stage is proposed. Three sampling strategies which are part of the SA procedure were compared: sampling based on Sobol' sequences (SBSS), Latin hypercube sampling (LHS) and pseudo-random sampling (PRS). A case study of Masdar City, a showcase of sustainable living in the UAE, is used to exemplify application of the proposed methodology. Sources of uncertainty in wind energy assessment are very diverse. In the case study the following were identified as uncertain input parameters: the Weibull shape parameter, the Weibull scale parameter, availability of a wind turbine, lifetime of a turbine, air density, electrical losses, blade losses, and ineffective time losses. Ineffective time losses are defined as losses during the time when the actual wind speed is lower than the cut-in speed or higher than the cut-out speed. The output variable in the case study is the lifetime energy production. The most influential factors for lifetime energy production are identified by ranking the total effect sensitivity indices. The results of the present research show that the brute force method is best for wind assessment purposes and that SBSS outperforms the other sampling strategies in the majority of cases. The results indicate that the Weibull scale parameter, turbine lifetime and Weibull shape parameter are the three most influential variables in the case study setting. The following conclusions can be drawn from these results: 1) SBSS should be recommended for use in Monte Carlo experiments, 2) the brute force method should be recommended for conducting sensitivity analysis in wind resource assessment, and 3) little variation in the Weibull scale causes significant variation in energy production. The presence of the two distribution parameters among the top three influential variables (the Weibull shape and scale) emphasizes the importance of accuracy in (a) choosing the distribution to model the wind regime at a site and (b) estimating the probability distribution parameters. This can be labeled as the most important conclusion of this research because it opens a field for further research, which the authors believe could change the wind energy field tremendously.
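
    As a rough illustration of variance-based SA, the sketch below estimates first-order Sobol' indices with a pick-freeze Monte Carlo estimator on a toy lifetime-energy model; the model form (energy proportional to lifetime times the Weibull third moment of wind speed) and the input ranges are assumptions for illustration, not the study's model.

```python
# First-order Sobol' indices by the pick-freeze (Saltelli-style) estimator on
# a toy lifetime-energy model driven by Weibull wind parameters.
import numpy as np
from scipy.special import gamma

rng = np.random.default_rng(1)
N = 100_000

def sample_inputs(n):
    k = rng.uniform(1.6, 2.4, n)        # Weibull shape (assumed range)
    c = rng.uniform(6.0, 9.0, n)        # Weibull scale, m/s (assumed range)
    life = rng.uniform(15.0, 25.0, n)   # turbine lifetime, years (assumed range)
    return np.column_stack([k, c, life])

def model(X):
    k, c, life = X.T
    # E[v^3] for Weibull(k, c) is c^3 * Gamma(1 + 3/k); energy ~ lifetime * E[v^3]
    return life * c**3 * gamma(1.0 + 3.0 / k)

A, B = sample_inputs(N), sample_inputs(N)
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

for i, name in enumerate(["Weibull shape k", "Weibull scale c", "lifetime"]):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                 # "freeze" all inputs except the i-th
    S_i = np.mean(yB * (model(ABi) - yA)) / var_y
    print(f"first-order S[{name}] = {S_i:.3f}")
```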

  11. Reliability analysis of a structural ceramic combustion chamber

    NASA Technical Reports Server (NTRS)

    Salem, Jonathan A.; Manderscheid, Jane M.; Freedman, Marc R.; Gyekenyesi, John P.

    1990-01-01

    The Weibull modulus, fracture toughness and thermal properties of a silicon nitride material used to make a gas turbine combustor were experimentally measured. The location and nature of failure origins resulting from bend tests were determined with fractographic analysis. The measured Weibull parameters were used along with thermal and stress analysis to determine failure probabilities of the combustor with the CARES design code. The effects of data censoring, FEM mesh refinement, and fracture criterion were considered in the analysis.

  12. Reliability analysis of a structural ceramic combustion chamber

    NASA Technical Reports Server (NTRS)

    Salem, Jonathan A.; Manderscheid, Jane M.; Freedman, Marc R.; Gyekenyesi, John P.

    1991-01-01

    The Weibull modulus, fracture toughness and thermal properties of a silicon nitride material used to make a gas turbine combustor were experimentally measured. The location and nature of failure origins resulting from bend tests were determined with fractographic analysis. The measured Weibull parameters were used along with thermal and stress analysis to determine failure probabilities of the combustor with the CARES design code. The effects of data censoring, FEM mesh refinement, and fracture criterion were considered in the analysis.

  13. Componential distribution analysis of food using near infrared ray image

    NASA Astrophysics Data System (ADS)

    Yamauchi, Hiroki; Kato, Kunihito; Yamamoto, Kazuhiko; Ogawa, Noriko; Ohba, Kimie

    2008-11-01

    The components of food related to its "deliciousness" are usually evaluated by componential analysis, which determines the component content and the types of components in the food. However, componential analysis cannot resolve spatial detail, and the measurement is time-consuming. We propose a method to measure the two-dimensional distribution of a component in food using a near infrared ray (IR) image. The advantage of our method is its ability to visualize otherwise invisible components. Many components in food have characteristics such as absorption and reflection of light in the IR range. The component content is measured using subtraction between two wavelengths of near IR light. In this paper, we describe a method to measure food components using near IR image processing, and we show an application visualizing the saccharose in a pumpkin.

  14. GPS FOM Chimney Analysis using Generalized Extreme Value Distribution

    NASA Technical Reports Server (NTRS)

    Ott, Rick; Frisbee, Joe; Saha, Kanan

    2004-01-01

    Often an objective of a statistical analysis is to estimate a limit value, such as a 3-sigma 95% confidence upper limit, from a data sample. The generalized extreme value (GEV) distribution method can be profitably employed in many situations for such an estimate. It is well known that, according to the central limit theorem, the mean value of a large data set is normally distributed irrespective of the distribution of the data from which the mean value is derived. In a somewhat similar fashion, it is observed that the extreme value of a data set often has a distribution that can be formulated as a generalized extreme value distribution. In space shuttle entry with 3-string GPS navigation, the Figure Of Merit (FOM) value gives a measure of GPS navigated state accuracy. A GPS navigated state with a FOM of 6 or higher is deemed unacceptable and is said to form a FOM 6 or higher chimney; a FOM chimney is a period of time during which the FOM value stays higher than 5. A longer period of FOM values of 6 or higher causes the navigated state to accumulate more error for lack of state updates. For an acceptable landing it is imperative that the state error remains low; hence, at low altitude during entry, GPS data with FOM greater than 5 must not last more than 138 seconds. To test GPS performance, many entry test cases were simulated at the Avionics Development Laboratory. Only high-value FOM chimneys are consequential, so the extreme value statistical technique is applied to analyze them. The maximum likelihood method is used to determine the parameters that characterize the GEV distribution, and the limit value statistics are then estimated.
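
    A hedged sketch of that workflow: fit a GEV to per-case maximum chimney durations by maximum likelihood and read off tail statistics; the duration sample here is invented.

```python
# Fit a GEV to block maxima (per-test-case maximum FOM-chimney durations) and
# estimate an upper percentile and an exceedance probability.
import numpy as np
from scipy import stats

chimney_max_durations = np.array([42., 55., 38., 61., 47., 73., 50., 66.,
                                  44., 58., 81., 52.])  # seconds, made up

shape, loc, scale = stats.genextreme.fit(chimney_max_durations)
p95 = stats.genextreme.ppf(0.95, shape, loc=loc, scale=scale)
print(f"GEV fit: shape={shape:.3f}, loc={loc:.1f}, scale={scale:.1f}")
print(f"95th-percentile chimney duration: {p95:.1f} s (limit of interest: 138 s)")
print("P(duration > 138 s) =",
      stats.genextreme.sf(138.0, shape, loc=loc, scale=scale))
```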

  15. Job optimization in ATLAS TAG-based distributed analysis

    NASA Astrophysics Data System (ADS)

    Mambelli, M.; Cranshaw, J.; Gardner, R.; Maeno, T.; Malon, D.; Novak, M.

    2010-04-01

    The ATLAS experiment is projected to collect over one billion events/year during the first few years of operation. The efficient selection of events for various physics analyses across all appropriate samples presents a significant technical challenge. ATLAS computing infrastructure leverages the Grid to tackle the analysis across large samples by organizing data into a hierarchical structure and exploiting distributed computing to churn through the computations. This includes events at different stages of processing: RAW, ESD (Event Summary Data), AOD (Analysis Object Data), DPD (Derived Physics Data). Event Level Metadata Tags (TAGs) contain information about each event stored using multiple technologies accessible by POOL and various web services. This allows users to apply selection cuts on quantities of interest across the entire sample to compile a subset of events that are appropriate for their analysis. This paper describes new methods for organizing jobs using the TAGs criteria to analyze ATLAS data. It further compares different access patterns to the event data and explores ways to partition the workload for event selection and analysis. Here analysis is defined as a broader set of event processing tasks including event selection and reduction operations ("skimming", "slimming" and "thinning") as well as DPD making. Specifically it compares analysis with direct access to the events (AOD and ESD data) to access mediated by different TAG-based event selections. We then compare different ways of splitting the processing to maximize performance.

  16. Stochastic Sensitivity Analysis and Kernel Inference via Distributional Data

    PubMed Central

    Li, Bochong; You, Lingchong

    2014-01-01

    Cellular processes are noisy due to the stochastic nature of biochemical reactions. As such, it is impossible to predict the exact quantity of a molecule or other attributes at the single-cell level. However, the distribution of a molecule over a population is often deterministic and is governed by the underlying regulatory networks relevant to the cellular functionality of interest. Recent studies have started to exploit this property to infer network states. To facilitate the analysis of distributional data in a general experimental setting, we introduce a computational framework to efficiently characterize the sensitivity of distributional output to changes in external stimuli. Further, we establish a probability-divergence-based kernel regression model to accurately infer signal level based on distribution measurements. Our methodology is applicable to any biological system subject to stochastic dynamics and can be used to elucidate how population-based information processing may contribute to organism-level functionality. It also lays the foundation for engineering synthetic biological systems that exploit population decoding to more robustly perform various biocomputation tasks, such as disease diagnostics and environmental-pollutant sensing. PMID:25185560

  17. Crystal size distribution analysis of plagioclase in experimentally decompressed hydrous rhyodacite magma

    E-print Network

    Hammer, Julia Eve

    Crystal size distribution analysis of plagioclase in experimentally decompressed hydrous rhyodacite. November 2010. Editor: T.M. Harrison. Keywords: crystal size distribution, plagioclase, decompression experiments, growth rate, nucleation rate, residence time. This study presents crystal size distributions (CSD…

  18. Growing axons analysis by using Granulometric Size Distribution

    NASA Astrophysics Data System (ADS)

    Gonzalez, Mariela A.; Ballarin, Virginia L.; Rapacioli, Melina; Celín, A. R.; Sánchez, V.; Flores, V.

    2011-09-01

    Neurite growth (neuritogenesis) in vitro is a common methodology in the field of developmental neurobiology. Morphological analyses of growing neurites are usually difficult because their thinness and low contrast usually prevent clear observation of their shape, number, length and spatial orientation. This paper presents the use of the granulometric size distribution to automatically obtain information about the shape, size and spatial orientation of growing axons in tissue cultures. The results presented here show that the granulometric size distribution is a very useful morphological tool, since it allows the automatic detection of growing axons and the precise characterization of a relevant parameter indicative of the spatial orientation of axonal growth: the angle of deviation of the growing direction. The developed algorithms quantify this orientation automatically, facilitating the analysis of these images, which is important given the large number of images that must be processed in this type of study.

  19. Performance Analysis of an Actor-Based Distributed Simulation

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D.

    1998-01-01

    Object-oriented design of simulation programs appears to be very attractive because of the natural association of components in the simulated system with objects. There is great potential in distributing the simulation across several computers for the purpose of parallel computation and its consequent handling of larger problems in less elapsed time. One approach to such a design is to use "actors", that is, active objects with their own thread of control. Because these objects execute concurrently, communication is via messages. This is in contrast to an object-oriented design using passive objects where communication between objects is via method calls (direct calls when they are in the same address space and remote procedure calls when they are in different address spaces or different machines). This paper describes a performance analysis program for the evaluation of a design for distributed simulations based upon actors.

  20. EECS 598 Special Topic Analysis of Electric Power Distribution Systems and Loads

    E-print Network

    Cafarella, Michael J.

    This course covers the fundamentals of electric power distribution systems and electric loads. Most power system courses focus on analysis of transmission systems…

  1. Time series power flow analysis for distribution connected PV generation.

    SciTech Connect

    Broderick, Robert Joseph; Quiroz, Jimmy Edward; Ellis, Abraham; Reno, Matthew J.; Smith, Jeff; Dugan, Roger

    2013-01-01

    Distributed photovoltaic (PV) projects must go through an interconnection study process before connecting to the distribution grid. These studies are intended to identify the likely impacts and mitigation alternatives. In the majority of the cases, system impacts can be ruled out or mitigation can be identified without an involved study, through a screening process or a simple supplemental review study. For some proposed projects, expensive and time-consuming interconnection studies are required. The challenges to performing the studies are twofold. First, every study scenario is potentially unique, as the studies are often highly specific to the amount of PV generation capacity that varies greatly from feeder to feeder and is often unevenly distributed along the same feeder. This can cause location-specific impacts and mitigations. The second challenge is the inherent variability in PV power output which can interact with feeder operation in complex ways, by affecting the operation of voltage regulation and protection devices. The typical simulation tools and methods in use today for distribution system planning are often not adequate to accurately assess these potential impacts. This report demonstrates how quasi-static time series (QSTS) simulation and high time-resolution data can be used to assess the potential impacts in a more comprehensive manner. The QSTS simulations are applied to a set of sample feeders with high PV deployment to illustrate the usefulness of the approach. The report describes methods that can help determine how PV affects distribution system operations. The simulation results are focused on enhancing the understanding of the underlying technical issues. The examples also highlight the steps needed to perform QSTS simulation and describe the data needed to drive the simulations. The goal of this report is to make the methodology of time series power flow analysis readily accessible to utilities and others responsible for evaluating potential PV impacts.
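
    The core of a QSTS study is a time-stepping loop around a power flow solve. The sketch below is conceptual only: apply_pv_profile and solve_power_flow are hypothetical placeholders for a real distribution-solver interface (e.g., OpenDSS or GridLAB-D bindings), and the feeder response is a toy.

```python
# Conceptual QSTS sketch: step through a high-resolution day, scale PV output,
# solve a power flow per step, and track the worst feeder voltage.
import numpy as np

def apply_pv_profile(network_state, pv_kw):
    """Placeholder: set the PV injection (kW) in the feeder model."""
    network_state["pv_kw"] = pv_kw
    return network_state

def solve_power_flow(network_state):
    """Placeholder: return per-bus voltages (p.u.) for the current state."""
    base = 1.0 + 0.0005 * network_state["pv_kw"]   # toy voltage response
    return base + np.zeros(3)

steps_per_day = 24 * 60 * 60   # 1-second resolution, as in high-res QSTS data
irradiance = np.clip(np.sin(np.linspace(0, np.pi, steps_per_day)), 0, None)
pv_rating_kw = 500.0

state = {"pv_kw": 0.0}
worst_v = 0.0
for t in range(0, steps_per_day, 60):              # evaluate once per minute
    state = apply_pv_profile(state, pv_rating_kw * irradiance[t])
    v = solve_power_flow(state)
    worst_v = max(worst_v, v.max())
print(f"Max bus voltage over the day: {worst_v:.4f} p.u. (flag if > 1.05)")
```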

  2. Numerical analysis of decoy state quantum key distribution protocols

    SciTech Connect

    Harrington, Jim W; Rice, Patrick R

    2008-01-01

    Decoy state protocols are a useful tool for many quantum key distribution systems implemented with weak coherent pulses, allowing significantly better secret bit rates and longer maximum distances. In this paper we present a method to numerically find optimal three-level protocols, and we examine how the secret bit rate and the optimized parameters are dependent on various system properties, such as session length, transmission loss, and visibility. Additionally, we show how to modify the decoy state analysis to handle partially distinguishable decoy states as well as uncertainty in the prepared intensities.

  3. Prediction of the Inert Strength Distribution of Si3N4 Diesel Valves

    SciTech Connect

    Andrews, M.J.; Breder, K.; Wereszczak, A.A.

    1999-01-25

    Censored Weibull strength distributions were generated with NT551 silicon nitride four-point flexure data using the ASTM C1161-B and 5.0 mm diameter cylindrical specimens. Utilizing finite element models and AlliedSignal's life prediction codes, the inert or fast fracture strength failure probability of a ceramic diesel valve was estimated from these data sets. The failure probability predictions derived from each data set were found to be more conservative than the valve strength data. Fractographic analysis of the test specimens and valves showed that the cylindrical specimens failed from a different flaw population than the prismatic flexure bars and the valves. The study emphasizes the prerequisite of having coincident flaw populations homogeneously distributed in both the test specimen and the ceramic component. Lastly, it suggests that unless material homogeneity exists, meaningful life prediction or reliability analysis of a component may not be possible.
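
    The censored-Weibull likelihood underlying such an analysis splits into pdf terms for failures from the flaw population of interest and survival terms for censored specimens; a minimal sketch with invented strength data:

```python
# Censored-Weibull maximum-likelihood fit: failures enter through the pdf,
# censored specimens through the survival function exp(-(x/s0)^m).
import numpy as np
from scipy.optimize import minimize

strength = np.array([612., 655., 701., 580., 690., 720., 645., 598., 675., 710.])
failed = np.array([1, 1, 0, 1, 1, 0, 1, 1, 1, 0], dtype=bool)  # False = censored

def negloglik(theta):
    m, s0 = np.exp(theta)               # Weibull modulus m, scale s0 (MPa), kept positive
    z = (strength / s0) ** m
    logpdf = np.log(m / s0) + (m - 1) * np.log(strength / s0) - z
    logsurv = -z                        # log survival for censored specimens
    return -(logpdf[failed].sum() + logsurv[~failed].sum())

res = minimize(negloglik, x0=np.log([10.0, strength.mean()]), method="Nelder-Mead")
m_hat, s0_hat = np.exp(res.x)
print(f"Weibull modulus m = {m_hat:.1f}, characteristic strength = {s0_hat:.0f} MPa")
```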

  4. Conditions for transmission path analysis in energy distribution models

    NASA Astrophysics Data System (ADS)

    Aragonès, Àngels; Guasch, Oriol

    2016-02-01

    In this work, we explore under which conditions transmission path analysis (TPA) developed for statistical energy analysis (SEA) can be applied to the less restrictive energy distribution (ED) models. It is shown that TPA can be extended without problems to proper-SEA systems, whereas the situation is not so clear for quasi-SEA systems. In the general case, it has been found that a TPA can always be performed on an ED model if its inverse influence energy coefficient (EIC) matrix turns out to have negative off-diagonal entries. If this condition is satisfied, it can be shown that the inverse EIC matrix automatically becomes an M-matrix. An ED graph can then be defined for it, and use can be made of graph theory ranking path algorithms, previously developed for SEA systems, to classify dominant paths in ED models. A small mechanical system consisting of connected plates has been used to illustrate some of the exposed theoretical results.
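
    The stated condition is straightforward to test numerically; a small sketch with an invented EIC matrix follows (the check covers the sign pattern, i.e. a Z-matrix with positive diagonal, rather than a full M-matrix verification):

```python
# Invert an energy influence coefficient (EIC) matrix and test whether its
# inverse has the M-matrix sign pattern required for a TPA on an ED model.
import numpy as np

EIC = np.array([[2.0, 0.6, 0.3],
                [0.6, 1.8, 0.5],
                [0.3, 0.5, 1.5]])        # illustrative coefficients, made up

A = np.linalg.inv(EIC)
off_diag = A[~np.eye(A.shape[0], dtype=bool)]
sign_pattern_ok = np.all(off_diag <= 0) and np.all(np.diag(A) > 0)
print("inverse EIC:\n", np.round(A, 3))
print("TPA applicable (inverse has M-matrix sign pattern):", sign_pattern_ok)
```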

  5. Earthquake interevent time distribution in Kachchh, Northwestern India

    NASA Astrophysics Data System (ADS)

    Pasari, Sumanta; Dikshit, Onkar

    2015-12-01

    Statistical properties of earthquake interevent times have long been a topic of interest to seismologists and earthquake professionals, mainly for hazard-related concerns. In this paper, we present a comprehensive study of the temporal statistics of earthquake interoccurrence times of the seismically active Kachchh peninsula (western India) using thirteen probability distributions: exponential, gamma, lognormal, Weibull, Levy, Maxwell, Pareto, Rayleigh, inverse Gaussian (Brownian passage time), inverse Weibull (Frechet), exponentiated exponential, exponentiated Rayleigh (Burr type X), and exponentiated Weibull. Statistical inferences of the scale and shape parameters of these distributions are drawn from the maximum likelihood estimations and the Fisher information matrices. The latter are used as a surrogate tool to appraise the parametric uncertainty in the estimation process. The results are based on two goodness-of-fit criteria: the maximum likelihood criterion with its modification to the Akaike information criterion (AIC), and the Kolmogorov-Smirnov (K-S) minimum distance criterion. These results reveal that (i) the exponential model provides the best fit, (ii) the gamma, lognormal, Weibull, inverse Gaussian, exponentiated exponential, exponentiated Rayleigh, and exponentiated Weibull models provide an intermediate fit, and (iii) the rest, namely Levy, Maxwell, Pareto, Rayleigh, and inverse Weibull, fit poorly to the earthquake catalog of Kachchh and its adjacent regions. This study also analyzes the present-day seismicity in terms of the estimated recurrence interval and conditional probability curves (hazard curves). The estimated cumulative probability and the conditional probability of a magnitude 5.0 or higher event reach 0.8-0.9 by 2027-2036 and 2034-2043, respectively. These values have significant implications in a variety of practical applications including earthquake insurance, seismic zonation, location identification of lifeline structures, and revision of building codes.
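
    A compact sketch of the two rankings for a handful of the thirteen candidates: maximum-likelihood fits scored by AIC and by the K-S distance; the interevent times are invented.

```python
# Fit several candidate distributions to interevent times by maximum
# likelihood, then rank by AIC and by the Kolmogorov-Smirnov distance.
import numpy as np
from scipy import stats

times = np.array([12., 30., 7., 55., 21., 90., 15., 40., 66., 25., 8., 120.])  # days, made up

candidates = {
    "exponential": stats.expon,
    "gamma": stats.gamma,
    "lognormal": stats.lognorm,
    "Weibull": stats.weibull_min,
    "inverse Gaussian": stats.invgauss,
}

for name, dist in candidates.items():
    params = dist.fit(times, floc=0.0)           # fix the origin at zero
    loglik = dist.logpdf(times, *params).sum()
    aic = 2 * (len(params) - 1) - 2 * loglik     # floc is fixed, not estimated
    ks = stats.kstest(times, dist.cdf, args=params).statistic
    print(f"{name:18s} AIC={aic:7.1f}  K-S={ks:.3f}")
```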

  6. Evaluation of Distribution Analysis Software for DER Applications

    SciTech Connect

    Staunton, RH

    2003-01-23

    The term ''Distributed energy resources'' or DER refers to a variety of compact, mostly self-contained power-generating technologies that can be combined with energy management and storage systems and used to improve the operation of the electricity distribution system, whether or not those technologies are connected to an electricity grid. Implementing DER can be as simple as installing a small electric generator to provide backup power at an electricity consumer's site. Or it can be a more complex system, highly integrated with the electricity grid and consisting of electricity generation, energy storage, and power management systems. DER devices provide opportunities for greater local control of electricity delivery and consumption. They also enable more efficient utilization of waste heat in combined cooling, heating and power (CHP) applications--boosting efficiency and lowering emissions. CHP systems can provide electricity, heat and hot water for industrial processes, space heating and cooling, refrigeration, and humidity control to improve indoor air quality. DER technologies are playing an increasingly important role in the nation's energy portfolio. They can be used to meet base load power, peaking power, backup power, remote power, and power quality, as well as cooling and heating needs. DER systems, ranging in size and capacity from a few kilowatts up to 50 MW, can include a number of technologies (e.g., supply-side and demand-side) that can be located at or near the location where the energy is used. Information pertaining to DER technologies, application solutions, successful installations, etc., can be found at the U.S. Department of Energy's DER Internet site [1]. Market forces in the restructured electricity markets are making DER both more common and more active in the distribution systems throughout the US [2]. If DER devices can be made even more competitive with central generation sources, this trend will become unstoppable. In response, energy providers will be forced to both fully acknowledge the trend and plan for accommodating DER [3]. With bureaucratic barriers [4], lack of time/resources, tariffs, etc. still seen in certain regions of the country, changes still need to be made. Given continued technical advances in DER, the time is fast approaching when the industry, nation-wide, must not only accept DER freely but also provide or review in-depth technical assessments of how DER should be integrated into and managed throughout the distribution system. Characterization studies are needed to fully understand how both the utility system and DER devices themselves will respond to all reasonable events (e.g., grid disturbances, faults, rapid growth, diverse and multiple DER systems, large reactive loads). Some of this work has already begun as it relates to operation and control of DER [5] and microturbine performance characterization [6,7]. One of the most urgently needed tools that can provide these types of analyses is a distribution network analysis program in combination with models for various DER. Together, they can be used for (1) analyzing DER placement in distribution networks and (2) helping to ensure that adequate transmission reliability is maintained. Surveys of the market show products that represent a partial match to these needs; specifically, software that has been developed to plan electrical distribution systems and analyze reliability (in a near total absence of DER).
The first part of this study (Sections 2 and 3 of the report) looks at a number of these software programs and provides both summary descriptions and comparisons. The second part of this study (Section 4 of the report) considers the suitability of these analysis tools for DER studies. It considers steady state modeling and assessment work performed by ORNL using one commercially available tool on feeder data provided by a southern utility. Appendix A provides a technical report on the results of this modeling effort.

  7. A Distributed Flocking Approach for Information Stream Clustering Analysis

    SciTech Connect

    Cui, Xiaohui; Potok, Thomas E

    2006-01-01

    Intelligence analysts are currently overwhelmed with the amount of information streams generated every day, and there is a lack of comprehensive tools that can analyze these information streams in real time. Document clustering analysis plays an important role in improving the accuracy of information retrieval. However, most clustering technologies can only be applied to analyzing static document collections, because they normally require a large amount of computational resources and a long time to produce accurate results. It is very difficult to cluster dynamically changing text information streams on an individual computer. Our early research resulted in a dynamic reactive flock clustering algorithm which can continually refine the clustering result and quickly react to changes in document contents. This characteristic makes the algorithm suitable for clustering dynamically changing document information, such as text information streams. Because of the decentralized character of this algorithm, a distributed approach is a very natural way to increase its clustering speed. In this paper, we present a distributed multi-agent flocking approach for text information stream clustering and discuss the decentralized architectures and communication schemes for load balance and status information synchronization in this approach.

  8. Phylogenetic analysis reveals a scattered distribution of autumn colours

    PubMed Central

    Archetti, Marco

    2009-01-01

    Background and Aims Leaf colour in autumn is rarely considered informative for taxonomy, but there is now growing interest in the evolution of autumn colours and different hypotheses are debated. Research efforts are hindered by the lack of basic information: the phylogenetic distribution of autumn colours. It is not known when and how autumn colours evolved. Methods Data are reported on the autumn colours of 2368 tree species belonging to 400 genera of the temperate regions of the world, and an analysis is made of their phylogenetic relationships in order to reconstruct the evolutionary origin of red and yellow in autumn leaves. Key Results Red autumn colours are present in at least 290 species (70 genera), and evolved independently at least 25 times. Yellow is present independently from red in at least 378 species (97 genera) and evolved at least 28 times. Conclusions The phylogenetic reconstruction suggests that autumn colours have been acquired and lost many times during evolution. This scattered distribution could be explained by hypotheses involving some kind of coevolutionary interaction or by hypotheses that rely on the need for photoprotection. PMID:19126636

  9. Silk Fiber Mechanics from Multiscale Force Distribution Analysis

    PubMed Central

    Cetinkaya, Murat; Xiao, Senbo; Markert, Bernd; Stacklies, Wolfram; Gräter, Frauke

    2011-01-01

    Here we decipher the molecular determinants for the extreme toughness of spider silk fibers. Our bottom-up computational approach incorporates molecular dynamics and finite element simulations. Therefore, the approach allows the analysis of the internal strain distribution and load-carrying motifs in silk fibers on scales of both molecular and continuum mechanics. We thereby dissect the contributions from the nanoscale building blocks, the soft amorphous and the strong crystalline subunits, to silk fiber mechanics. We identify the amorphous subunits not only to give rise to high elasticity, but to also ensure efficient stress homogenization through the friction between entangled chains, which also allows the crystals to withstand stresses as high as 2 GPa in the context of the amorphous matrix. We show that the maximal toughness of silk is achieved at 10–40% crystallinity depending on the distribution of crystals in the fiber. We also determined a serial arrangement of the crystalline and amorphous subunits in lamellae to outperform a random or a parallel arrangement, putting forward what we believe to be a new structural model for silk and other semicrystalline materials. The multiscale approach, not requiring any empirical parameters, is applicable to other partially ordered polymeric systems. Hence, it is an efficient tool for the design of artificial silk fibers. PMID:21354403

  10. Dose distribution of neutron beam and chromosome analysis

    SciTech Connect

    Matsubara, S.; Kuwabara, Y.; Horiuchi, J.; Suzuki, S.; Ito, A.

    1988-03-01

    Chromosome analysis using peripheral lymphocytes is a sensitive and reliable indicator of the biological effect of radiation at low radiation doses. In the present study, using this chromosome aberration analysis, we tried to investigate the radiation dose distribution within and around the radiation field of a 6 MeV neutron beam. The efficient induction of dicentrics and rings by neutron irradiation compared to gamma or X rays, especially in the lower dose range, results in a dominant linear component in the linear-quadratic model of the dose-response relation for chromosome aberration formation. A marked increase of RBE (relative biological effectiveness) in the lower dose range of neutrons was demonstrated. The radiation doses of the neutron beam as a function of depth, estimated from the yields of dicentrics and rings in a water phantom, revealed fairly good agreement with physically measured doses. The radiation portal margin of the neutron beam was shown to be less sharp due to a wide penumbra. This wide penumbra and the high RBE value, especially in the lower dose range of the neutron beam, may contribute to the induction of secondary malignancies in the normal tissue surrounding the tumor mass.

  11. Finite element analysis of hydrogen distribution in butt joints

    SciTech Connect

    Boellinghaus, T.; Hoffmeister, H.; Schubert, C.

    1996-12-31

    Due to its high diffusibility, it is very difficult to locate hydrogen in steel weldments experimentally. However, quantitative numerical analysis of hydrogen distribution is possible by finite element analysis based upon Fick's second law, if the diffusion coefficient is known for the different weld microstructures of the investigated steel type. This has been confirmed by comparing finite element calculations with carrier gas hot extractions of bead-on-plate welds, using a temperature-dependent scatterband for hydrogen diffusion coefficients in microalloyed low-carbon structural steels. Subsequently, this numerical approach has been applied to various root welds in butt joints, in order to quantify the geometric influences on the hydrogen concentration profile. By avoiding complicated mathematical models it is shown that high hydrogen concentrations occurred in crack-susceptible weld regions, dependent on the groove shape and the weld cross section geometry. It turned out that plate thickness affects hydrogen diffusion only to a limited extent. In particular, Y-bevel root welds retain hydrogen in the HAZ longer than other joint configurations, for instance a V-bevel preparation. Long hydrogen removal times in all investigated joint configurations generally indicate the problem of delayed hydrogen-assisted cracking in welds.

  12. An Open Architecture for Distributed Malware Collection and Analysis

    NASA Astrophysics Data System (ADS)

    Cavalca, Davide; Goldoni, Emanuele

    Honeynets have become an important tool for researchers and network operators. However, the lack of a unified honeynet data model has impeded their effectiveness, resulting in multiple unrelated data sources, each with its own proprietary access method and format. Moreover, the deployment and management of a honeynet is a time-consuming activity and the interpretation of collected data is far from trivial. HIVE (Honeynet Infrastructure in Virtualized Environment) is a novel highly scalable automated data collection and analysis architecture we designed. Our infrastructure is based on top of proven FLOSS (Free, Libre and Open Source) solutions, which have been extended and integrated with new tools we developed. We use virtualization to ease honeypot management and deployment, combining both high-interaction and low-interaction sensors in a common infrastructure. We also address the need for rapid comprehension and detailed data analysis by harnessing the power of a relational database system, which provides centralized storage and access to the collected data while ensuring its constant integrity. This chapter presents our malware data collection architecture, offering some insight into the structure and benefits of a distributed virtualized honeynet and its development. Finally, we present some techniques for the active monitoring of centralized botnets that we integrated in HIVE, which allow us to track the evolution of these menaces and deploy effective countermeasures in a timely manner.

  13. Microcanonical thermostatistics analysis without histograms: Cumulative distribution and Bayesian approaches

    NASA Astrophysics Data System (ADS)

    Alves, Nelson A.; Morero, Lucas D.; Rizzi, Leandro G.

    2015-06-01

    Microcanonical thermostatistics analysis has become an important tool to reveal essential aspects of phase transitions in complex systems. An efficient way to estimate the microcanonical inverse temperature β(E) and the microcanonical entropy S(E) is achieved with the statistical temperature weighted histogram analysis method (ST-WHAM). The strength of this method lies in its flexibility, as it can be used to analyse data produced by algorithms with generalised sampling weights. However, for any sampling weight, ST-WHAM requires the calculation of derivatives of energy histograms H(E), which leads to non-trivial and tedious binning tasks for models with a continuous energy spectrum, such as those for biomolecular and colloidal systems. Here, we discuss two alternative methods that avoid the need for such energy binning to obtain continuous estimates for H(E) in order to evaluate β(E) by using ST-WHAM: (i) a series expansion to estimate probability densities from the empirical cumulative distribution function (CDF), and (ii) a Bayesian approach to model this CDF. Comparison with a simple linear regression method is also carried out. The performance of these approaches is evaluated considering coarse-grained protein models for folding and peptide aggregation.
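
    Route (i) can be approximated in a few lines by smoothing the empirical CDF and differentiating it to estimate the density without any binning; the energy sample, spline order, and smoothing level below are assumptions.

```python
# Estimate a continuous density H(E) by fitting a smoothing spline to the
# empirical CDF and differentiating it, avoiding histogram binning entirely.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(3)
E = np.unique(rng.normal(-50.0, 4.0, 2000))    # synthetic energies, sorted and distinct
cdf = (np.arange(E.size) + 0.5) / E.size       # empirical CDF at the sample points

spline = UnivariateSpline(E, cdf, k=4, s=0.05) # smoothing level is a tunable assumption
density = spline.derivative()                  # H(E) estimate: d/dE of the smoothed CDF

for e in np.linspace(E[0], E[-1], 7):
    print(f"E = {e:7.2f}   H(E) ~ {float(density(e)):.4f}")
```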

  14. Statistical analysis and modelling of small satellite reliability

    NASA Astrophysics Data System (ADS)

    Guo, Jian; Monas, Liora; Gill, Eberhard

    2014-05-01

    This paper attempts to characterize the failure behaviour of small satellites through statistical analysis of actual in-orbit failures. A unique Small Satellite Anomalies Database comprising empirical failure data of 222 small satellites has been developed. A nonparametric analysis of the failure data has been implemented by means of a Kaplan-Meier estimation. An innovative modelling method, i.e. Bayesian theory in combination with Markov Chain Monte Carlo (MCMC) simulations, has been proposed to model the reliability of small satellites. An extensive parametric analysis using the Bayesian/MCMC method has been performed to fit a Weibull distribution to the data. The influence of several characteristics such as the design lifetime, mass, launch year, mission type and the type of satellite developers on the reliability has been analyzed. The results clearly show the infant mortality of small satellites. Compared with the classical maximum-likelihood estimation methods, the proposed Bayesian/MCMC method results in better-fitting Weibull models and is especially suitable for reliability modelling where only very limited failures are observed.
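
    A minimal Metropolis sketch of such a Bayesian Weibull fit, with right-censoring for satellites still operating at the end of observation; the lifetimes and priors are invented, and a production analysis would use a dedicated MCMC package rather than this hand-rolled sampler.

```python
# Metropolis sampling of the posterior over Weibull (shape beta, scale eta)
# with right-censored lifetimes; beta < 1 indicates infant mortality.
import numpy as np

rng = np.random.default_rng(4)
t = np.array([0.1, 0.4, 1.2, 2.5, 3.0, 4.8, 5.0, 5.0, 5.0, 5.0])  # years, made up
failed = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0], dtype=bool)     # False = still operating

def log_post(beta, eta):
    if beta <= 0 or eta <= 0:
        return -np.inf
    z = (t / eta) ** beta
    loglik = (np.log(beta / eta) + (beta - 1) * np.log(t / eta) - z)[failed].sum()
    loglik += -z[~failed].sum()              # survival terms for censored units
    return loglik - 0.01 * (beta + eta)      # weak exponential priors (assumed)

beta, eta = 1.0, 3.0
lp = log_post(beta, eta)
samples = []
for _ in range(20_000):
    b_new, e_new = beta + 0.1 * rng.normal(), eta + 0.3 * rng.normal()
    lp_new = log_post(b_new, e_new)
    if np.log(rng.uniform()) < lp_new - lp:  # Metropolis accept/reject
        beta, eta, lp = b_new, e_new, lp_new
    samples.append((beta, eta))

post = np.array(samples[5000:])              # drop burn-in
print("posterior mean shape beta =", post[:, 0].mean())
print("posterior mean scale eta  =", post[:, 1].mean())
```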

  15. Effects of specimen size on the flexural strength and Weibull modulus of nuclear graphite IG-110, NBG-18, and PCEA

    NASA Astrophysics Data System (ADS)

    Chi, Se-Hwan

    2015-09-01

    Changes in flexural strength and Weibull modulus due to specimen size were investigated for three nuclear graphite grades, IG-110, NBG-18, and PCEA, using four-point-1/3 point (4-1/3) loading with specimens of three different sizes: 3.18 (Thickness) × 6.35 (Width) × 50.8 (Length), 6.50 (T) × 12.0 (W) × 52.0 (L), and 18.0 (T) × 16.0 (W) × 64 (L) (mm) (total: 210 specimens). Results showed that the specimen size effects were grade dependent: while NBG-18 showed rather significant specimen size effects (37% difference between the 3 T and 18 T), the differences in IG-110 and PCEA were 7.6-15%. The maximum differences in flexural strength due to specimen size were larger in the PCEA and NBG-18, which have larger coke particles (medium grain size: >300 μm), than in the IG-110, with its super fine coke particle size (25 μm). The Weibull modulus showed a data population dependency, in that it decreased with increasing numbers of data used for modulus determination. A good correlation between the fracture surface roughness and the flexural strength was confirmed.

  16. Statistical structural analysis of rotor impact ice shedding

    NASA Technical Reports Server (NTRS)

    Kellacky, C. J.; Chu, M. L.; Scavuzzo, R. J.

    1991-01-01

    The statistical characteristics of impact ice shear strength are analyzed, with emphasis placed on the most probable shear strength and statistical distribution of an ice deposit. Several distribution types are considered: the Weibull, two-parameter Weibull, and exponential distributions, as well as the Gumbel distribution of the smallest extreme and the Gumbel distribution of the largest extreme. It is concluded that the Weibull distribution yields the best results; however, the expected life, shape parameter, and scale parameter should be determined separately for each case of varying wind speed and droplet size. The theoretical predictions of shear stresses in a specific rotating ice shape are compared, and it is noted that when the effects of lift are added to the theoretical model and the interference is calculated with a new mean and standard deviation, the probability of ice shedding is computed as 36.64 pct.

  17. Application of extreme learning machine for estimation of wind speed distribution

    NASA Astrophysics Data System (ADS)

    Shamshirband, Shahaboddin; Mohammadi, Kasra; Tong, Chong Wen; Petković, Dalibor; Porcu, Emilio; Mostafaeipour, Ali; Ch, Sudheer; Sedaghat, Ahmad

    2015-06-01

    The knowledge of the probabilistic wind speed distribution is of particular significance for reliable evaluation of the wind energy potential and effective adoption of site-specific wind turbines. Among all proposed probability density functions, the two-parameter Weibull function has been extensively endorsed and utilized to model wind speeds and express wind speed distributions in various locations. In this research work, an extreme learning machine (ELM) is employed to compute the shape (k) and scale (c) factors of the Weibull distribution function. The developed ELM model is trained and tested based upon two widely successful methods used to estimate the k and c parameters. The efficiency and accuracy of ELM are compared against support vector machines, artificial neural networks and genetic programming for estimating the same Weibull parameters. The results reveal that the ELM approach attains higher precision in estimating both Weibull parameters than the other methods evaluated. The mean absolute percentage error, mean absolute bias error and root mean square error for k are 8.4600%, 0.1783 and 0.2371, while for c they are 0.2143%, 0.0118 and 0.0192 m/s, respectively. In conclusion, the application of ELM is particularly promising as an alternative method to estimate the Weibull k and c factors.
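
    For reference, the classical empirical (moment-based) estimate of k and c that learning-based models are typically benchmarked against takes only a few lines; the wind speeds below are synthetic.

```python
# Empirical (Justus) method: k from the coefficient of variation of the wind
# speeds, c from the sample mean and the gamma function.
import numpy as np
from scipy.special import gamma

rng = np.random.default_rng(5)
v = rng.weibull(2.0, 10_000) * 7.5            # synthetic wind speeds, m/s

k = (v.std(ddof=1) / v.mean()) ** -1.086      # shape from CV, empirical approximation
c = v.mean() / gamma(1.0 + 1.0 / k)           # scale from the mean
print(f"estimated k = {k:.3f} (true 2.0), c = {c:.3f} m/s (true 7.5)")
```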

  18. Dynamic Analysis of the Arrow Distributed Protocol Fabian Kuhn

    E-print Network

    …and Problems--computations on discrete structures; G.2.2 [Discrete Mathematics]: Graph Theory--graph algorithms; G.2.2 [Discrete Mathematics]: Graph Theory--network problems. General Terms: Algorithms, Theory. Keywords: Distributed Queuing, Distributed Ordering, Competitive Analysis, Distributed Algorithms. The work…

  19. Random Distributions in Image Analysis Richard Emilion and Denis Pasquignon

    E-print Network

    Emilion, Richard

    Bayesian statistics with the Dirichlet RD introduced in a famous paper of T.S. Ferguson [11] … as being composed of distinct clusters, such that the local distributions in each cluster are generated by a fixed random distribution, while the whole set of local distributions is generated by a mixture…

  20. Qualitative Analysis of Distributed Physical Systems with Applications to Control Synthesis

    E-print Network

    Bailey-Kellogg, Chris

    …integrate and produce micro-electro-mechanical system (MEMS) devices on a massive scale…

  1. A distributed analysis of Human impact on global sediment dynamics

    NASA Astrophysics Data System (ADS)

    Cohen, S.; Kettner, A.; Syvitski, J. P.

    2012-12-01

    Understanding riverine sediment dynamics is an important undertaking both for socially relevant issues such as agriculture, water security and infrastructure management, and for scientific analysis of landscapes, river ecology, oceanography and other disciplines. Providing good quantitative and predictive tools is therefore timely, particularly in light of predicted climate and land-use changes. Ever-increasing human activity during the Anthropocene has affected sediment dynamics in two major ways: (1) an increase in hillslope erosion due to agriculture, deforestation and landscape engineering, and (2) trapping of sediment in dams and other man-made reservoirs. The intensity of and interplay between these man-made factors vary widely across the globe and over time, and are therefore hard to predict. Using sophisticated numerical models is therefore warranted. Here we use a distributed global riverine sediment flux and water discharge model (WBMsed) to compare pristine (without human input) and disturbed (with human input) simulations. Using these 50-year simulations, we show and discuss the complex spatial and temporal patterns of human effects on riverine sediment flux and water discharge.

  2. Fourier analysis of polar cap electric field and current distributions

    NASA Technical Reports Server (NTRS)

    Barbosa, D. D.

    1984-01-01

    A theoretical study of high-latitude electric fields and currents, using analytic Fourier analysis methods, is conducted. A two-dimensional planar model of the ionosphere with an enhanced conductivity auroral belt and field-aligned currents at the edges is employed. Two separate topics are treated. A field-aligned current element near the cusp region of the polar cap is included to investigate the modifications to the convection pattern by the east-west component of the interplanetary magnetic field. It is shown that a sizable one-cell structure is induced near the cusp which diverts equipotential contours to the dawnside or duskside, depending on the sign of the cusp current. This produces characteristic dawn-dusk asymmetries to the electric field that have been previously observed over the polar cap. The second topic is concerned with the electric field configuration obtained in the limit of perfect shielding, where the field is totally excluded equatorward of the auroral oval. When realistic field-aligned current distributions are used, the result is to produce severely distorted, crescent-shaped equipotential contours over the cap. Exact, analytic formulae applicable to this case are also provided.

  3. Distributions of Autocorrelated First-Order Kinetic Outcomes: Illness Severity

    PubMed Central

    Englehardt, James D.

    2015-01-01

    Many complex systems produce outcomes having recurring, power law-like distributions over wide ranges. However, the form necessarily breaks down at extremes, whereas the Weibull distribution has been demonstrated over the full observed range. Here the Weibull distribution is derived as the asymptotic distribution of generalized first-order kinetic processes, with convergence driven by autocorrelation, and entropy maximization subject to finite positive mean, of the incremental compounding rates. Process increments represent multiplicative causes. In particular, illness severities are modeled as such, occurring in proportion to products of, e.g., chronic toxicant fractions passed by organs along a pathway, or rates of interacting oncogenic mutations. The Weibull form is also argued theoretically and by simulation to be robust to the onset of saturation kinetics. The Weibull exponential parameter is shown to indicate the number and widths of the first-order compounding increments, the extent of rate autocorrelation, and the degree to which process increments are distributed exponential. In contrast with the Gaussian result in linear independent systems, the form is driven not by independence and multiplicity of process increments, but by increment autocorrelation and entropy. In some physical systems the form may be attracting, due to multiplicative evolution of outcome magnitudes towards extreme values potentially much larger and smaller than control mechanisms can contain. The Weibull distribution is demonstrated in preference to the lognormal and Pareto I for illness severities versus (a) toxicokinetic models, (b) biologically-based network models, (c) scholastic and psychological test score data for children with prenatal mercury exposure, and (d) time-to-tumor data of the ED01 study. PMID:26061263

  4. Bivariate extreme value distributions

    NASA Technical Reports Server (NTRS)

    Elshamy, M.

    1992-01-01

    In certain engineering applications, such as those occurring in the analyses of ascent structural loads for the Space Transportation System (STS), some of the load variables have a lower bound of zero. Thus, the need for practical models of bivariate extreme value probability distribution functions with lower limits was identified. We discuss the Gumbel models and present practical forms of bivariate extreme probability distributions of Weibull and Frechet types with two parameters. Bivariate extreme value probability distribution functions can be expressed in terms of the marginal extremal distributions and a 'dependence' function subject to certain analytical conditions. Properties of such bivariate extreme distributions, sums and differences of paired extremals, as well as the corresponding forms of conditional distributions, are discussed. Practical estimation techniques are also given.

  5. Probability distributions for offshore wind speeds

    E-print Network

    Vogel, Richard M.

    Keywords: wind turbine energy output; Weibull distribution; extreme wind. Abstract (fragment): In planning offshore wind … such as power output, extreme wind load, and fatigue load. Lacking wind speed time series of sufficient length…

  6. Analysis of gamma-ray burst duration distribution using mixtures of skewed distributions

    E-print Network

    Mariusz Tarnopolski

    2015-12-09

    Two classes of GRBs have been confidently identified thus far and are ascribed to different physical scenarios -- NS-NS or NS-BH mergers, and collapse of massive stars, for short and long GRBs, respectively. A third class, intermediate in duration, was suggested to be present in previous catalogs, such as BATSE and $Swift$, based on statistical tests regarding a mixture of two or three log-normal distributions of $T_{90}$. However, this might possibly not be an adequate model. This paper investigates whether the distributions of $\log T_{90}$ from BATSE, $Swift$, and $Fermi$ are described better by a mixture of skewed distributions rather than standard Gaussians. Mixtures of standard normal, skew-normal, sinh-arcsinh and alpha-skew-normal distributions are fitted using a maximum likelihood method. The preferred model is chosen based on the Akaike information criterion. It is found that mixtures of two skew-normal or two sinh-arcsinh distributions are more likely to describe the observed duration distribution of $Fermi$ than a mixture of three standard Gaussians, and that mixtures of two sinh-arcsinh or two skew-normal distributions are models competing with the conventional three-Gaussian model in the case of BATSE and $Swift$. Based on statistical reasoning, the existence of a third (intermediate) class of GRBs in $Fermi$ data is rejected, and it is shown that other phenomenological models may describe the observed $Fermi$, BATSE, and $Swift$ duration distributions at least as well as a mixture of standard normal distributions.
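
    The model-selection logic (maximum-likelihood fits compared via the Akaike information criterion) can be sketched as follows. The direct Nelder-Mead likelihood maximization below stands in for the paper's exact fitting procedure, and "log_t90.txt" is a placeholder for one catalog's log-durations:

        import numpy as np
        from scipy import stats
        from scipy.optimize import minimize

        log_t90 = np.loadtxt("log_t90.txt")  # placeholder: log durations of one catalog

        def neg_loglike(params, k):
            # k-component skew-normal mixture; params packs k-1 free weights followed
            # by (location xi, scale omega, skewness alpha) for each component.
            w = np.append(params[:k - 1], 1.0 - params[:k - 1].sum())
            comps = params[k - 1:].reshape(k, 3)
            if np.any(w <= 0) or np.any(comps[:, 1] == 0):
                return np.inf
            pdf = sum(wi * stats.skewnorm.pdf(log_t90, a, loc=xi, scale=abs(om))
                      for wi, (xi, om, a) in zip(w, comps))
            return -np.log(pdf + 1e-300).sum()

        def aic(k):
            # Crude starting point: equal weights, components spread over the data.
            locs = np.quantile(log_t90, np.linspace(0.2, 0.8, k))
            x0 = np.concatenate([np.full(k - 1, 1.0 / k),
                                 np.column_stack([locs, np.full(k, log_t90.std()),
                                                  np.zeros(k)]).ravel()])
            res = minimize(neg_loglike, x0, args=(k,), method="Nelder-Mead",
                           options={"maxiter": 50000, "fatol": 1e-8})
            return 2.0 * res.fun + 2.0 * ((k - 1) + 3 * k)  # AIC = -2 logL + 2p

        for k in (2, 3):
            print(f"{k}-component skew-normal mixture: AIC = {aic(k):.1f}")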

  7. Frequency distribution histograms for the rapid analysis of data

    NASA Technical Reports Server (NTRS)

    Burke, P. V.; Bullen, B. L.; Poff, K. L.

    1988-01-01

    The mean and standard error are good representations for the response of a population to an experimental parameter and are frequently used for this purpose. Frequency distribution histograms show, in addition, responses of individuals in the population. Both the statistics and a visual display of the distribution of the responses can be obtained easily using a microcomputer and available programs. The type of distribution shown by the histogram may suggest different mechanisms to be tested.

  8. Analysis of Multi-Band UWB Distributed Beaconing over Fading Channels

    E-print Network

    Savazzi, Stefano

    …distributed beaconing. In this paper, we analyze the distributed beaconing mechanism of ECMA-368. We focus on the transient phase in which newcomer devices attempt to join a network by accessing the standard-defined beacon…

  9. DATALITE: a Distributed Architecture for Traffic Analysis via Light-weight Traffic Digest

    E-print Network

    Chao, Jonathan

    DATALITE: a Distributed Architecture for Traffic Analysis via LIght-weight Traffic digEst, which introduces a set of new distributed algorithms … [based on the exchange of traffic] digests (TDs) amongst the network nodes. A TD for N packets only requires O(log log N) bits of memory…

  10. SOME PARAMETERS AFFECTING THE DISTRIBUTIONAL… (Journal of the Experimental Analysis of Behavior)

    E-print Network

    Premack, David

    …wheel-running. Operations included limited access to the wheel, food deprivation, and protracted maintenance on a 24-hr feeding schedule. A distributional analysis of response duration, burst duration, and inter-burst interval showed…

  11. Analysis Model for Domestic Hot Water Distribution Systems: Preprint

    SciTech Connect

    Maguire, J.; Krarti, M.; Fang, X.

    2011-11-01

    A thermal model was developed to estimate the energy losses from prototypical domestic hot water (DHW) distribution systems for homes. The developed model, using the TRNSYS simulation software, allows researchers and designers to better evaluate the performance of hot water distribution systems in homes. Modeling results were compared with past experimental study results and showed good agreement.

  12. Structural Vulnerability Analysis of Electric Power Distribution Grids

    E-print Network

    Koc, Yakup; Warnier, Martijn; Kumar, Tarun

    2015-01-01

    Power grid outages cause huge economic and societal costs. Disruptions in the power distribution grid are responsible for a significant fraction of electric power unavailability to customers. The impact of extreme weather conditions, continuously increasing demand, and the over-ageing of assets in the grid will further degrade the reliability of electric power delivery in the near future. It is this dependence on electric power that necessitates further research in power distribution grid security assessment. Measures to analyze the robustness characteristics and to identify vulnerabilities as they exist in the grid are therefore of utmost importance. This research investigates exactly those concepts: the vulnerability and robustness of power distribution grids from a topological point of view, and it proposes a metric to quantify them with respect to assets in a distribution grid. Real-world data is used to demonstrate the applicability of the proposed metric as a tool to assess the criticality of assets in a distribution...

  13. CARES/LIFE Ceramics Analysis and Reliability Evaluation of Structures Life Prediction Program

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Powers, Lynn M.; Janosik, Lesley A.; Gyekenyesi, John P.

    2003-01-01

    This manual describes the Ceramics Analysis and Reliability Evaluation of Structures Life Prediction (CARES/LIFE) computer program. The program calculates the time-dependent reliability of monolithic ceramic components subjected to thermomechanical and/or proof test loading. CARES/LIFE is an extension of the CARES (Ceramic Analysis and Reliability Evaluation of Structures) computer program. The program uses results from MSC/NASTRAN, ABAQUS, and ANSYS finite element analysis programs to evaluate component reliability due to inherent surface and/or volume type flaws. CARES/LIFE accounts for the phenomenon of subcritical crack growth (SCG) by utilizing the power law, Paris law, or Walker law. The two-parameter Weibull cumulative distribution function is used to characterize the variation in component strength. The effects of multiaxial stresses are modeled by using either the principle of independent action (PIA), the Weibull normal stress averaging method (NSA), or the Batdorf theory. Inert strength and fatigue parameters are estimated from rupture strength data of naturally flawed specimens loaded in static, dynamic, or cyclic fatigue. The probabilistic time-dependent theories used in CARES/LIFE, along with the input and output for CARES/LIFE, are described. Example problems to demonstrate various features of the program are also included.
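
    For reference, the two-parameter Weibull strength model named above has the closed form $P_f = 1 - \exp[-(V/V_0)(\sigma/\sigma_0)^m]$ for a uniformly stressed volume. A minimal sketch follows; the modulus, characteristic strength and stress values are invented for illustration and are not CARES/LIFE defaults:

        import numpy as np

        def weibull_failure_probability(stress, m, sigma_0, volume_ratio=1.0):
            """Two-parameter Weibull failure probability for a uniformly
            stressed volume: Pf = 1 - exp(-(V/V0) * (sigma/sigma_0)**m)."""
            return 1.0 - np.exp(-volume_ratio * (stress / sigma_0) ** m)

        # Illustrative values only: Weibull modulus m and characteristic
        # strength sigma_0 (MPa) loosely typical of a sintered ceramic.
        m, sigma_0 = 10.0, 400.0
        for stress in (200.0, 300.0, 400.0):
            pf = weibull_failure_probability(stress, m, sigma_0)
            print(f"sigma = {stress:5.0f} MPa -> Pf = {pf:.3f}")

    At the characteristic strength (stress equal to sigma_0, unit volume ratio) the failure probability is 1 - 1/e, about 0.632, which is a quick sanity check on any such implementation.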

  14. Numerical Analysis of a Cold Air Distribution System 

    E-print Network

    Zhu, L.; Li, R.; Yuan, D.

    2006-01-01

    Cold air distribution systems may reduce the operating energy consumption of air-conditioned air supply systems and improve outside air volume percentages and indoor air quality. However, indoor temperature patterns and the velocity field are easily...

  15. Thermal Analysis of Antenna Structures. Part 2: Panel Temperature Distribution

    NASA Technical Reports Server (NTRS)

    Schonfeld, D.; Lansing, F. L.

    1983-01-01

    This article is the second in a series that analyzes the temperature distribution in microwave antennas. An analytical solution in a series form is obtained for the temperature distribution in a flat plate analogous to an antenna surface panel under arbitrary temperature and boundary conditions. The solution includes the effects of radiation and air convection from the plate. Good agreement is obtained between the numerical and analytical solutions.

  16. Analysis of Fermi gamma-ray burst duration distribution

    E-print Network

    Mariusz Tarnopolski

    2015-07-07

    Two classes of GRBs, short and long, have been established beyond doubt, and are usually ascribed to different physical scenarios. A third class, intermediate in $T_{90}$ durations, has been reported to be present in the datasets of BATSE, Swift, RHESSI and possibly BeppoSAX. The latest release of $>1500$ GRBs observed by Fermi gives an opportunity to further investigate the duration distribution. The aim of this paper is to investigate whether a third class is present in the $\log T_{90}$ distribution, or whether it is described by a bimodal distribution. A standard $\chi^2$ fitting of a mixture of Gaussians is applied to 25 histograms with different binnings. Different binnings give various values of the fitting parameters, as well as different shapes of the fitted curve. Among five statistically significant fits none is trimodal. Locations of the Gaussian components are in agreement with previous works. However, a trimodal distribution, understood in the sense of having three separated peaks, is not found for any binning. It is concluded that the duration distribution in Fermi data is well described by a mixture of three log-normal distributions, but it is intrinsically bimodal, hence no third class is present in the $T_{90}$ data of Fermi. It is suggested that the log-normal fit may not be an adequate model.

  17. Performance Analysis of Distributed Object-Oriented Applications

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D.

    1998-01-01

    The purpose of this research was to evaluate the efficiency of a distributed simulation architecture which creates individual modules which are made self-scheduling through the use of a message-based communication system used for requesting input data from another module which is the source of that data. To make the architecture as general as possible, the message-based communication architecture was implemented using standard remote object architectures (Common Object Request Broker Architecture (CORBA) and/or Distributed Component Object Model (DCOM)). A series of experiments were run in which different systems are distributed in a variety of ways across multiple computers and the performance evaluated. The experiments were duplicated in each case so that the overhead due to message communication and data transmission can be separated from the time required to actually perform the computational update of a module each iteration. The software used to distribute the modules across multiple computers was developed in the first year of the current grant and was modified considerably to add a message-based communication scheme supported by the DCOM distributed object architecture. The resulting performance was analyzed using a model created during the first year of this grant which predicts the overhead due to CORBA and DCOM remote procedure calls and includes the effects of data passed to and from the remote objects. A report covering the distributed simulation software and the results of the performance experiments has been submitted separately. The above report also discusses possible future work to apply the methodology to dynamically distribute the simulation modules so as to minimize overall computation time.

  18. Flaw strength distributions and statistical parameters for ceramic fibers: the normal distribution.

    PubMed

    R'mili, M; Godin, N; Lamon, J

    2012-05-01

    The present paper investigates large sets of ceramic fibre failure strengths (500 to 1000 data points) produced using tensile tests on tows that contained either 500 or 1000 filaments. The probability density function was determined through acoustic emission monitoring, which allowed detection and counting of filament fractures. The statistical distribution of filament strengths was described using the normal distribution. The Weibull equation was then fitted to this normal distribution for estimation of statistical parameters. A perfect agreement between both distributions was obtained, and a quite negligible scatter in statistical parameters was observed, as opposed to the wide variability that is reported in the literature. Thus it was concluded that flaw strengths are distributed normally and that the statistical parameters derived here are the true ones. In a second step, the conventional method of estimation of Weibull parameters was applied to these sets of data and, then, to subsets selected randomly. The influence of other factors involved in the conventional method of determination of statistical parameters is discussed. It is demonstrated that selection of specimens, sample size, and the method of construction of so-called Weibull plots are responsible for the variability in statistical parameters. PMID:23004702
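
    The "conventional method" of Weibull parameter estimation referred to above is commonly a least-squares fit on a linearized Weibull probability plot. A minimal sketch, with synthetic strengths standing in for the fibre data and Bernard's median-rank approximation as an assumed plotting position:

        import numpy as np

        rng = np.random.default_rng(1)
        # Stand-in for ~500 fibre strengths (GPa); true shape 5.0, scale 2.5.
        strengths = np.sort(rng.weibull(5.0, 500) * 2.5)

        # Median-rank plotting positions and linearized Weibull coordinates:
        # ln(-ln(1 - F)) = m*ln(sigma) - m*ln(sigma_0)
        n = strengths.size
        F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # Bernard's approximation
        x = np.log(strengths)
        y = np.log(-np.log(1.0 - F))

        m, intercept = np.polyfit(x, y, 1)            # slope = Weibull modulus m
        sigma_0 = np.exp(-intercept / m)              # characteristic strength
        print(f"estimated m = {m:.2f}, sigma_0 = {sigma_0:.2f} GPa (true: 5.0, 2.5)")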

  19. Comparative hypsometric analysis of both Earth and Venus topographic distributions

    NASA Technical Reports Server (NTRS)

    Rosenblatt, P.; Pinet, P. C.; Thouvenot, E.

    1993-01-01

    Previous studies have compared the global topographic distribution of both planets by means of differential hypsometric curves. For the purpose of comparison, the terrestrial oceanic load was removed, and a reference base level was acquired. It was chosen on the basis of geometric considerations and reflected the geometric shape of the mean dynamical equilibrium figure of the planetary surface in both cases. This reference level corresponds to the well-known sea level for the Earth; for Venus, given its slow rate of rotation, a sphere of radius close to the mean, median and modal values of the planetary radii distribution was considered, and the radius value of 6051 km arbitrarily taken. These studies were based on the low resolution (100 x 100 sq km) coverage of Venus obtained by the Pioneer Venus altimeter and on the 1 deg x 1 deg terrestrial topography. But, apart from revealing the distinct contrast between the Earth's bimodal and Venus' strongly unimodal topographic distribution, the choice of such a reference level is inadequate and even misleading for the comparative geophysical understanding of the planetary relief distribution. The present work reinvestigates the comparison between Earth and Venus hypsometric distributions on the basis of the high-resolution data provided, on one hand, by the recent Magellan global topographic coverage of Venus' surface, and on the other hand, by the detailed NCAR 5 x 5 ft. grid topographic database currently available for the Earth's surface.

  20. Nanocrystal size distribution analysis from transmission electron microscopy images.

    PubMed

    van Sebille, Martijn; van der Maaten, Laurens J P; Xie, Ling; Jarolimek, Karol; Santbergen, Rudi; van Swaaij, René A C M M; Leifer, Klaus; Zeman, Miro

    2015-12-28

    We propose a method, with minimal bias caused by user input, to quickly detect and measure the nanocrystal size distribution from transmission electron microscopy (TEM) images using a combination of Laplacian of Gaussian filters and non-maximum suppression. We demonstrate the proposed method on bright-field TEM images of an a-SiC:H sample containing embedded silicon nanocrystals with varying magnifications and we compare the accuracy and speed with size distributions obtained by manual measurements, a thresholding method and PEBBLES. Finally, we analytically consider the error induced by slicing nanocrystals during TEM sample preparation on the measured nanocrystal size distribution and formulate an equation to correct this effect. PMID:26593390
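
    A minimal sketch of the detection step, assuming scikit-image's blob_log (which, as the method describes, combines Laplacian-of-Gaussian filtering with non-maximum suppression); the file name, sigma range, threshold and pixel calibration are placeholders:

        import numpy as np
        from skimage import io, util
        from skimage.feature import blob_log

        # Placeholder input: a bright-field TEM image containing nanocrystals.
        image = util.img_as_float(io.imread("tem_image.png", as_gray=True))

        # Laplacian-of-Gaussian blob detection with built-in non-maximum
        # suppression; returns one (row, col, sigma) triple per detected blob.
        blobs = blob_log(image, min_sigma=2, max_sigma=15, num_sigma=14,
                         threshold=0.05)

        # For a 2D LoG detection, blob radius is approximately sigma*sqrt(2).
        nm_per_px = 0.5                      # illustrative pixel calibration
        diameters = 2 * blobs[:, 2] * np.sqrt(2) * nm_per_px
        print(f"{len(diameters)} nanocrystals, mean diameter {diameters.mean():.1f} nm")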

  1. A fractal approach to dynamic inference and distribution analysis

    PubMed Central

    van Rooij, Marieke M. J. W.; Nash, Bertha A.; Rajaraman, Srinivasan; Holden, John G.

    2013-01-01

    Event-distributions inform scientists about the variability and dispersion of repeated measurements. This dispersion can be understood from a complex systems perspective, and quantified in terms of fractal geometry. The key premise is that a distribution's shape reveals information about the governing dynamics of the system that gave rise to the distribution. Two categories of characteristic dynamics are distinguished: additive systems governed by component-dominant dynamics and multiplicative or interdependent systems governed by interaction-dominant dynamics. A logic by which systems governed by interaction-dominant dynamics are expected to yield mixtures of lognormal and inverse power-law samples is discussed. These mixtures are described by a so-called cocktail model of response times derived from human cognitive performances. The overarching goals of this article are twofold: First, to offer readers an introduction to this theoretical perspective and second, to offer an overview of the related statistical methods. PMID:23372552

  2. Analysis and machine mapping of the distribution of band recoveries

    USGS Publications Warehouse

    Cowardin, L.M.

    1977-01-01

    A method of calculating distance and bearing from banding site to recovery location, based on the solution of a spherical triangle, is presented. X and Y distances on an ordinate grid were applied to computer plotting of recoveries on a map. The advantages and disadvantages of tables of recoveries by State or degree block, axial lines, and distance of recovery from banding site for presentation and comparison of the spatial distribution of band recoveries are discussed. A special web-shaped partition formed by concentric circles about the point of banding and great circles at 30-degree intervals through the point of banding has certain advantages over other methods. Comparison of distributions by means of a χ² contingency test is illustrated. The statistic V = χ²/N can be used as a measure of difference between two distributions of band recoveries, and its possible use as a measure of the degree of migrational homing is illustrated.
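
    The spherical-triangle solution referred to above is the standard great-circle computation. A compact sketch (haversine form for the distance plus the initial bearing; the coordinates in the example are illustrative):

        import math

        def distance_and_bearing(lat1, lon1, lat2, lon2, radius_km=6371.0):
            """Great-circle distance (haversine) and initial bearing from the
            banding site (lat1, lon1) to the recovery location (lat2, lon2)."""
            p1, p2 = math.radians(lat1), math.radians(lat2)
            dlat = math.radians(lat2 - lat1)
            dlon = math.radians(lon2 - lon1)
            a = (math.sin(dlat / 2) ** 2
                 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
            dist = 2 * radius_km * math.asin(math.sqrt(a))
            bearing = math.degrees(math.atan2(
                math.sin(dlon) * math.cos(p2),
                math.cos(p1) * math.sin(p2)
                - math.sin(p1) * math.cos(p2) * math.cos(dlon)))
            return dist, bearing % 360.0

        # Illustrative: banding site in North Dakota, recovery in Louisiana.
        print(distance_and_bearing(47.9, -99.8, 30.2, -91.1))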

  3. Inductance and Current Distribution Analysis of a Prototype HTS Cable

    NASA Astrophysics Data System (ADS)

    Zhu, Jiahui; Zhang, Zhenyu; Zhang, Huiming; Zhang, Min; Qiu, Ming; Yuan, Weijia

    2014-05-01

    This project is partly supported by NSFC Grant 51207146, the RAEng Research Exchange scheme of the UK, and EPSRC EP/K01496X/1. Superconducting cable is an emerging technology for electric power transmission. Since high-capacity HTS transmission cables are manufactured using a multi-layer conductor structure, the current distribution among the layers would be nonuniform without proper optimization and hence lead to large transmission losses. Therefore a novel optimization method has been developed to achieve evenly distributed current among the different layers, considering the HTS cable structure parameters (radius, pitch angle and winding direction) which determine the self and mutual inductance. A prototype HTS cable has been built using BSCCO tape and tested to validate the optimal design method. A superconductor characterization system has been developed using LabVIEW and an NI data acquisition system; it can be used to measure the AC loss and current distribution of short HTS cables.

  4. Image analysis of light distribution in a photobioreactor.

    PubMed

    Jung, Sang-Kyu; Lee, Sun Bok

    2003-11-01

    Light intensity is a crucial factor that determines the growth of photosynthetic cells. This study analyzed the light distribution in a photobioreactor by processing images, captured with a digital camera, of a rectangular photobioreactor containing Synechococcus sp. PCC6801 as a model microorganism. The gray-scale images obtained clearly demonstrate the variation of the light-distribution profiles upon changing cell concentrations and external light intensity. Image-processing techniques were also used to predict the cell density in the photobioreactor. By analyzing the digitized image data with a neural network model, we were able to predict the cell concentrations in the photobioreactor with a <5% error. PMID:12968294

  5. Analysis of inclusion distributions in silicon carbide armor ceramics

    NASA Astrophysics Data System (ADS)

    Bakas, Michael Paul

    It was determined that intrinsic microstructural defects (i.e. inclusions) are the preferential fragmentation path (initiation or propagation) for ballistically impacted SiC, and may contribute to variation in ballistic performance. Quasi-static and ballistic samples of SiC were studied and inclusions caused by common SiC sintering aids and/or impurities were identified. Ballistic rubble surfaces showed large inclusions of 10-400 micron size, while examination of polished cross-sections of the fragments showed only inclusions under 5 microns in size. The fact that large inclusions were found preferentially on rubble surfaces demonstrates a link between severe microstructural defects and the fragmentation of SiC armor. Rubble of both a "good" and "bad" performing SiC target were examined. Inclusion size data was gathered and fit to a distribution function. A difference was observed between the targets. The "good" target had twice the density of inclusions on its rubble in the size range less than 30 microns. No significant difference between distributions was observed for inclusion sizes greater than 40 microns. The "good" target fractured into an overall smaller fragment size distribution than the "bad" target, consistent with fragmentation at higher stresses. Literature suggests that the distribution of defects activated under dynamic conditions will be determined by the maximum stress reached locally in the target. On the basis of the defect distributions on its rubble, the "good" target appears to have withstood higher stresses. The fragment size distribution and inclusion size distribution on fragment surfaces both indicate higher stresses in the "good" target. Why the "good" target withstood a greater stress than the "bad" target remains a subject for conjecture. It is speculated that the position of severe "anomalous" defects may be influencing the target's performance, but this currently cannot be demonstrated conclusively. Certainly, this research shows that inclusion defects are involved in the fragmentation process, with differences in the distributions on the rubble of the targets suggesting a role in ballistic performance.

  6. High Resolution PV Power Modeling for Distribution Circuit Analysis

    SciTech Connect

    Norris, B. L.; Dise, J. H.

    2013-09-01

    NREL has contracted with Clean Power Research to provide 1-minute simulation datasets of PV systems located at three high penetration distribution feeders in the service territory of Southern California Edison (SCE): Porterville, Palmdale, and Fontana, California. The resulting PV simulations will be used to separately model the electrical circuits to determine the impacts of PV on circuit operations.

  7. Metagenomic Analysis of Water Distribution System Bacterial Communities

    EPA Science Inventory

    The microbial quality of drinking water is assessed using culture-based methods that are highly selective and that tend to underestimate the densities and diversity of microbial populations inhabiting distribution systems. In order to better understand the effect of different dis...

  8. THE EPANET PROGRAMMER'S TOOLKIT FOR ANALYSIS OF WATER DISTRIBUTION SYSTEMS

    EPA Science Inventory

    The EPANET Programmer's Toolkit is a collection of functions that helps simplify computer programming of water distribution network analyses. The functions can be used to read in a pipe network description file, modify selected component properties, run multiple hydraulic and wa...

  9. ANALYSIS OF COARSELY GROUPED DATA FROM THE LOGNORMAL DISTRIBUTION

    EPA Science Inventory

    A missing information technique is applied to blood lead data that is both grouped and assumed to be lognormally distributed. These maximum likelihood techniques are extended from the simple lognormal case to obtain solutions for a general linear model case. Various models are fi...

  10. Analysis of vegetation distribution in Interior Alaska and sensitivity to

    E-print Network

    McGuire, A. David

    …distribution of four major vegetation types: tundra, deciduous forest, black spruce forest and white spruce forest … by elevation, precipitation and south-to-north aspect. At the second step, forest was separated into deciduous … temperatures exceeded a critical limit (+2 °C). Deciduous forests expand their range the most when any two…

  11. Model of the reliability analysis of the distributed computer systems with architecture "client-server"

    NASA Astrophysics Data System (ADS)

    Kovalev, I. V.; Zelenkov, P. V.; Karaseva, M. V.; Tsarev, M. Yu; Tsarev, R. Yu

    2015-01-01

    The paper considers the problem of reliability analysis for distributed computer systems with client-server architecture. A distributed computer system is a set of hardware and software implementing the following main functions: processing, storage, transmission and protection of data. The paper presents a scheme of distributed computer system functioning, represented as a graph whose vertices are the functional states of the system and whose arcs are transitions from one state to another depending on the prevailing conditions. The reliability analysis considers indicators such as the probability of the system's transition into the stopping and accident states, as well as the intensities of these transitions. The proposed model allows correlations for the reliability parameters of the distributed computer system to be obtained without any assumptions about the distribution laws of the random variables or the number of elements in the system.
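
    The state graph described here can be illustrated as a continuous-time Markov model. In the sketch below, the states, transition intensities and block structure are invented for illustration; the probabilities of eventual absorption in the stopping versus accident state are obtained from the embedded jump chain:

        import numpy as np

        # Illustrative states: 0 = working, 1 = degraded, 2 = stopped (absorbing),
        # 3 = accident (absorbing). Off-diagonal entries are transition intensities.
        Q = np.array([[-0.30, 0.20, 0.08, 0.02],
                      [0.10, -0.40, 0.20, 0.10],
                      [0.0, 0.0, 0.0, 0.0],
                      [0.0, 0.0, 0.0, 0.0]])

        # Embedded jump chain: P[i, j] = Q[i, j] / -Q[i, i] for transient i.
        transient, absorbing = [0, 1], [2, 3]
        P = np.zeros_like(Q)
        for i in transient:
            P[i] = Q[i] / -Q[i, i]
            P[i, i] = 0.0

        # Absorption probabilities: B = (I - T)^-1 R, where T and R are the
        # transient-to-transient and transient-to-absorbing blocks of P.
        T = P[np.ix_(transient, transient)]
        R = P[np.ix_(transient, absorbing)]
        B = np.linalg.solve(np.eye(len(transient)) - T, R)
        print("P(stop), P(accident) from the 'working' state:", B[0])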

  12. Distributional Assumptions in Educational Assessments Analysis: Normal Distributions versus Generalized Beta Distribution in Modeling the Phenomenon of Learning

    ERIC Educational Resources Information Center

    Campos, Jose Alejandro Gonzalez; Moraga, Paulina Saavedra; Del Pozo, Manuel Freire

    2013-01-01

    This paper introduces the generalized beta (GB) model as a new modeling tool in the educational assessment area and in evaluation analysis specifically. Unlike the normal model, the GB model allows us to capture some real characteristics of the data, and it is an important tool for understanding the phenomenon of learning. This paper develops a contrast with the…

  13. A New Analysis of Charge Symmetry Violation in Parton Distributions

    E-print Network

    C. Boros; F. M. Steffens; J. T. Londergan; A. W. Thomas

    1999-08-06

    To date, the strongest indication of charge symmetry violation in parton distributions has been obtained by comparing the $F_2$ structure functions from CCFR neutrino data and NMC muon data. We show that in order to make precise tests of charge symmetry with the neutrino data, two conditions must be satisfied. First, the nuclear shadowing calculations must be made explicitly for neutrinos, not simply taken from muon data on nuclei. Second, the contribution of strange and charm quarks should be calculated explicitly using next-to-leading order [NLO] QCD, and the "slow rescaling" charm threshold correction should not be applied to the neutrino data. When these criteria are satisfied, the comparison is consistent with charge symmetry within the experimental errors and the present uncertainty in the strange quark distribution of the nucleon.

  14. Analysis of legal and institutional issues affecting distributed resource decisions

    SciTech Connect

    Nimmons, J.

    1995-12-01

    This paper describes research recently begun to investigate how legal, regulatory, and institutional factors may influence distributed resources (DR) decisions and shape utility participation in DR activities. The research, cosponsored by investor-owned utilities and the National Renewable Energy Laboratory (NREL) and performed by John Nimmons and Associates, Awad and Singer, and Energy and Environmental Economics, will define the institutional framework within which DR decisions will be made, and should yield a first-phase report early next year.

  15. Periodic analysis of total ozone and its vertical distribution

    NASA Technical Reports Server (NTRS)

    Wilcox, R. W.; Nastrom, G. D.; Belmont, A. D.

    1975-01-01

    Both total ozone and vertical distribution ozone data from the period 1957 to 1972 are analyzed. For total ozone, improved monthly zonal means for both hemispheres are computed by weighting individual station monthly means by a factor which compensates for the close grouping of stations in certain regions of latitude bands. Longitudinal variability shows maxima in summer in both hemispheres but, in winter, only in the Northern Hemisphere. The geographical distributions of the long-term mean and of the annual, quasibiennial and semiannual waves in total ozone over the Northern Hemisphere are presented. The extratropical amplitude of the annual wave is by far the largest of the three, as much as 120 m atm cm over northern Siberia. There is a tendency for all three waves to have maxima in high latitudes. Monthly means of the vertical distribution of ozone determined from 3 to 8 years of ozonesonde data over North America are presented. Number density is highest in the Arctic near 18 km. The region of maximum number density slopes upward toward 10° N, where the long-term mean is 45 x 10^11 molecules/cm^3 near 26 km.

  16. Analysis of magnetic electron lens with secant hyperbolic field distribution

    NASA Astrophysics Data System (ADS)

    Pany, S. S.; Ahmed, Z.; Dubey, B. P.

    2014-12-01

    Electron-optical imaging instruments like the Scanning Electron Microscope (SEM) and the Transmission Electron Microscope (TEM) use specially designed solenoid electromagnets for focusing the electron beam. Indicators of the imaging performance of these instruments, like spatial resolution, have a strong correlation with the focal characteristics of the magnetic lenses, which in turn have been shown to be sensitive to the details of the spatial distribution of the axial magnetic field. Owing to the complexity of designing practical lenses, empirical mathematical expressions are important for obtaining the desired focal properties. Thus the degree of accuracy of such models in representing the actual field distribution determines the accuracy of the calculations and ultimately the performance of the lens. Historically, the mathematical models proposed by Glaser [1] and Ramberg [2] have been extensively used. In this paper the authors discuss another model, with a secant-hyperbolic magnetic field distribution function, and present a comparison between the models, utilizing results from finite element-based field simulations as the reference for evaluating performance.

  17. Genome-Wide Association Scan Meta-Analysis Identifies Three Loci Influencing Adiposity and Fat Distribution

    E-print Network

    Hunter, David J.

    To identify genetic loci influencing central obesity and fat distribution, we performed a meta-analysis of 16 genome-wide association studies (GWAS, N = 38,580) informative for adult waist circumference (WC) and waist–hip ...

  18. Analysis and Planning for Improved Distribution of Nursing Personnel and Services. Final Report.

    ERIC Educational Resources Information Center

    Elliott, Jo Eleanor; Kearns, Jeanne M.

    The goal of the Analysis and Planning Project was to strengthen the nursing community's ability to analyze and plan for improved distribution of nursing personnel and services. The objectives aimed at (1) developing analytical models and a data base for planning purposes; (2) identifying new approaches to improve the distribution and delivery of…

  19. Analysis of Multi-Robot Play Effectiveness and of Distributed Incidental Play Recognition

    E-print Network

    Veloso, Manuela M.

    Summary: Distributed play-based approaches have been proposed as an effective means of switching … [Results] show that different plays have a significant effect on opponent performance in real robot soccer games…

  20. Lyapunov Analysis of a Distributed Optimization Scheme

    E-print Network

    Pavel, Lacra

    …asymptotically stable for this algorithm. … The development of distributed optimization schemes [is motivated by applications in] wireless networks [4], [5], voltage and frequency control in microgrids [6], and optimal sensor fusion…

  1. Security Analysis and Extensions of the PCB Algorithm for Distributed Key Generation

    E-print Network

    Poovendran, Radha

    …[Among] these methods is the distributed key generation method proposed by Poovendran, Corson and Baras in [PCB], which we call the PCB scheme in this paper. The PCB scheme made use of modulo arithmetic and generalized…

  2. Inverse Analysis to Estimate Surface Damage Distribution of Thin Panels

    E-print Network

    Nakamura, Toshio

    …may be degraded or damaged. Evaluations of these damages on panels and shells often demand complex … to estimate the unknown surface damage distribution. In order to verify its effectiveness, various models…

  3. Comparing transient storage modeling and residence time distribution (RTD) analysis in geomorphically varied

    E-print Network

    Gooseff, Michael N.

    …[the transient] storage model OTIS, which assumes an exponential residence time distribution. In this study, we compare…

  4. Statistical analysis of the distribution of gold particles over antigen sites after immunogold labelling

    E-print Network

    Stone, J. V.

    …statistically. The distributions were found to be inconsistent with a Poisson process. Instead, a physically…

  5. QUANTILE ANALYSIS OF IMAGE SENSOR NOISE DISTRIBUTION

    E-print Network

    Hirakawa, Keigo

    …the real image sensor noise distribution to the models of noise often assumed in image denoising … noise behavior. Noise model mismatch would likely result in image denoising that undersmoothes real…

  6. Spatial analysis of pallid sturgeon Scaphirhynchus albus distribution in the Missouri River, South Dakota

    E-print Network

    …and distribution of the endangered pallid sturgeon Scaphirhynchus albus has generally been documented using radio [telemetry] … (> 3 years). Standardized sampling for pallid sturgeon, which relies on a variety of gear types, has…

  7. Analysis of Saturn's thermal emission at 2.2-cm wavelength: Spatial distribution of ammonia vapor

    E-print Network

    This work focuses on determining the latitudinal structure of ammonia vapor in Saturn's cloud [layer] … of the spatial distribution of ammonia vapor, since ammonia gas is the only effective opacity source…

  8. Does Weather Explain the Cost and Quality? An Analysis of UK Electricity Distribution Companies

    E-print Network

    Yu, William; Jamasb, Tooraj; Pollitt, Michael G.

    EPRG Working Paper 0827; Cambridge Working Paper in Economics 0858.

  9. Analysis of the Galaxy Distribution using Multiscale Methods

    NASA Astrophysics Data System (ADS)

    Querre, Philippe; Starck, Jean-Luc; Martinez, Vicent J.

    2002-12-01

    Galaxies are arranged in interconnected walls and filaments forming a cosmic web that encompasses huge, nearly empty regions between the structures. Many statistical methods have been proposed in the past in order to describe the galaxy distribution and discriminate between the different cosmological models. We present in this paper preliminary results on the use of new statistical tools based on the 3D à trous algorithm, the 3D ridgelet transform and the 3D beamlet transform. We show that such multiscale methods provide a new way to measure, in a coherent and statistically reliable way, the degree of clustering, filamentarity, sheetedness, and voidedness of a dataset.

  10. Complexity analysis of pipeline mapping problems in distributed heterogeneous networks

    SciTech Connect

    Lin, Ying; Wu, Qishi; Zhu, Mengxia; Rao, Nageswara S

    2009-04-01

    Large-scale scientific applications require using various system resources to execute complex computing pipelines in distributed networks to support collaborative research. System resources are typically shared in the Internet or over dedicated connections based on their location, availability, capability, and capacity. Optimizing the network performance of computing pipelines in such distributed environments is critical to the success of these applications. We consider two types of large-scale distributed applications: (1) interactive applications where a single dataset is sequentially processed along a pipeline; and (2) streaming applications where a series of datasets continuously flow through a pipeline. The computing pipelines of these applications consist of a number of modules executed in a linear order in network environments with heterogeneous resources under different constraints. Our goal is to find an efficient mapping scheme that allocates the modules of a pipeline to network nodes for minimum end-to-end delay or maximum frame rate. We formulate the pipeline mappings in distributed environments as optimization problems and categorize them into six classes with different optimization goals and mapping constraints: (1) Minimum End-to-end Delay with No Node Reuse (MED-NNR), (2) Minimum End-to-end Delay with Contiguous Node Reuse (MED-CNR), (3) Minimum End-to-end Delay with Arbitrary Node Reuse (MED-ANR), (4) Maximum Frame Rate with No Node Reuse or Share (MFR-NNRS), (5) Maximum Frame Rate with Contiguous Node Reuse and Share (MFR-CNRS), and (6) Maximum Frame Rate with Arbitrary Node Reuse and Share (MFR-ANRS). Here, 'contiguous node reuse' means that multiple contiguous modules along the pipeline may run on the same node, and 'arbitrary node reuse' imposes no restriction on node reuse. Note that in interactive applications, a node can be reused but its resource is not shared. We prove that MED-ANR is polynomially solvable and the rest are NP-complete. MED-ANR, where either contiguous or noncontiguous modules in the pipeline can be mapped onto the same node, is essentially the Maximum n-hop Shortest Path problem, and can be solved using a dynamic programming method. In MED-NNR and MFR-NNRS, any network node can be used only once, which requires selecting the same number of nodes for a one-to-one onto mapping. We show their NP-completeness by reducing from the Hamiltonian Path problem. Node reuse is allowed in MED-CNR, MFR-CNRS and MFR-ANRS, which are similar to the Maximum n-hop Shortest Path problem that considers resource sharing. We prove their NP-completeness by reducing from the Disjoint-Connecting-Path problem and the Widest Path with Linear Capacity Constraints problem, respectively.
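
    The dynamic-programming idea for the polynomially solvable case (MED-ANR) admits a short sketch: after placing module i, d[v] holds the minimum end-to-end delay with module i on node v. The additive cost model and the toy instance below are illustrative assumptions, not the paper's exact formulation:

        def map_pipeline(modules, nodes, proc_cost, link_cost):
            """modules: pipeline module ids in order; nodes: network node ids.
            proc_cost[m][v]: delay of running module m on node v.
            link_cost[u][v]: transfer delay between consecutive modules placed
            on u and v (zero when u == v, so arbitrary node reuse is free).
            Returns (minimum end-to-end delay, node assignment)."""
            d = {v: proc_cost[modules[0]][v] for v in nodes}
            parent = {}
            for i, m in enumerate(modules[1:], start=1):
                new_d = {}
                for v in nodes:
                    best_u = min(nodes, key=lambda u: d[u] + link_cost[u][v])
                    new_d[v] = d[best_u] + link_cost[best_u][v] + proc_cost[m][v]
                    parent[(i, v)] = best_u
                d = new_d
            last = min(nodes, key=d.get)   # best placement of the final module
            path = [last]
            for i in range(len(modules) - 1, 0, -1):
                path.append(parent[(i, path[-1])])
            return d[last], list(reversed(path))

        # Tiny illustrative instance: 3 modules, 2 nodes.
        proc = {"m1": {"a": 2, "b": 1}, "m2": {"a": 1, "b": 4}, "m3": {"a": 3, "b": 1}}
        link = {"a": {"a": 0, "b": 2}, "b": {"a": 2, "b": 0}}
        print(map_pipeline(["m1", "m2", "m3"], ["a", "b"], proc, link))

    The table d is updated once per module over all node pairs, so the sketch runs in O(M V^2) time for M modules and V nodes, consistent with a polynomial-time result.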

  11. Independent Orbiter Assessment (IOA): Analysis of the Electrical Power Distribution and Control Subsystem, Volume 2

    NASA Technical Reports Server (NTRS)

    Schmeckpeper, K. R.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Electrical Power Distribution and Control (EPD and C) hardware. The EPD and C hardware performs the functions of distributing, sensing, and controlling 28 volt DC power and of inverting, distributing, sensing, and controlling 117 volt 400 Hz AC power to all Orbiter subsystems from the three fuel cells in the Electrical Power Generation (EPG) subsystem. Volume 2 continues the presentation of IOA analysis worksheets and contains the potential critical items list.

  12. Studying bubble-particle interactions by zeta potential distribution analysis.

    PubMed

    Wu, Chendi; Wang, Louxiang; Harbottle, David; Masliyah, Jacob; Xu, Zhenghe

    2015-07-01

    Over a decade ago, Xu and Masliyah pioneered an approach to characterize the interactions between particles in dynamic environments of multicomponent systems by measuring zeta potential distributions of individual components and their mixtures. Using a Zetaphoremeter, the measured zeta potential distributions of individual components and their mixtures were used to determine the conditions of preferential attachment in multicomponent particle suspensions. The technique has been applied to study the attachment of nano-sized silica and alumina particles to sub-micron size bubbles in solutions with and without the addition of surface active agents (SDS, DAH and DF250). The degree of attachment between gas bubbles and particles is shown to be a function of the interaction energy governed by the dispersion, electrostatic double layer and hydrophobic forces. Under certain chemical conditions, the attachment of nano-particles to sub-micron size bubbles is shown to be enhanced by in-situ gas nucleation induced by hydrodynamic cavitation for the weakly interacting systems, where mixing of the two individual components results in negligible attachment. Preferential interaction in complex tertiary particle systems demonstrated strong attachment between micron-sized alumina and gas bubbles, with little attachment between micron-sized alumina and silica, possibly due to instability of the aggregates in the shear flow environment. PMID:25731913

  13. Analysis of an algorithm for distributed recognition and accountability

    SciTech Connect

    Ko, C.; Frincke, D.A.; Goan, T. Jr.; Heberlein, L.T.; Levitt, K.; Mukherjee, B.; Wee, C.

    1993-08-01

    Computer and network systems are vulnerable to attacks. Abandoning the existing huge infrastructure of possibly-insecure computer and network systems is impossible, and replacing them by totally secure systems may not be feasible or cost effective. A common element in many attacks is that a single user will often attempt to intrude upon multiple resources throughout a network. Detecting the attack can become significantly easier by compiling and integrating evidence of such intrusion attempts across the network rather than attempting to assess the situation from the vantage point of only a single host. To solve this problem, we suggest an approach for distributed recognition and accountability (DRA), which consists of algorithms which "process," at a central location, distributed and asynchronous "reports" generated by computers (or a subset thereof) throughout the network. Our highest-priority objectives are to observe the ways by which an individual moves around in a network of computers, including changing user names to possibly hide his/her true identity, and to associate all activities of multiple instances of the same individual with the same network-wide user. We present the DRA algorithm and a sketch of its proof under an initial set of simplifying albeit realistic assumptions. Later, we relax these assumptions to accommodate pragmatic aspects such as missing or delayed "reports," clock skew, tampered "reports," etc. We believe that such algorithms will have widespread applications in the future, particularly in intrusion-detection systems.

  14. Visualization and analysis of lipopolysaccharide distribution in binary phospholipid bilayers

    SciTech Connect

    Henning, Maria Florencia; Sanchez, Susana; Bakas, Laura

    2009-05-22

    Lipopolysaccharide (LPS) is an endotoxin released from the outer membrane of Gram-negative bacteria during infections. It has been reported that LPS may play a role in the outer membrane of bacteria similar to that of cholesterol in eukaryotic plasma membranes. In this article we compare the effect of introducing LPS or cholesterol into liposomes made of dipalmitoylphosphatidylcholine/dioleoylphosphatidylcholine on the solubilization process by Triton X-100. The results show that liposomes containing LPS or cholesterol are more resistant to solubilization by Triton X-100 than the binary phospholipid mixtures at 4 °C. The LPS distribution was analyzed on GUVs of DPPC:DOPC using FITC-LPS. Solid and liquid-crystalline domains were visualized by labeling the GUVs with LAURDAN, and GP images were acquired using a two-photon microscope. The images show a selective distribution of LPS in gel domains. Our results support the hypothesis that LPS could aggregate and concentrate selectively in biological membranes, providing a mechanism to bring together several components of the LPS-sensing machinery.

  15. VISUALIZATION AND ANALYSIS OF LPS DISTRIBUTION IN BINARY PHOSPHOLIPID BILAYERS

    PubMed Central

    Henning, María Florencia; Sanchez, Susana; Bakás, Laura

    2010-01-01

    Lipopolysaccharide (LPS) is an endotoxin released from the outer membrane of Gram-negative bacteria during infections. It has been reported that LPS may play a role in the outer membrane of bacteria similar to that of cholesterol in eukaryotic plasma membranes. In this article we compare the effect of introducing LPS or cholesterol into liposomes made of dipalmitoylphosphatidylcholine/dioleoylphosphatidylcholine on the solubilization process by Triton X-100. The results show that liposomes containing LPS or cholesterol are more resistant to solubilization by Triton X-100 than the binary phospholipid mixtures at 4°C. The LPS distribution was analyzed on GUVs of DPPC:DOPC using FITC-LPS. Solid and liquid-crystalline domains were visualized by labeling the GUVs with LAURDAN, and GP images were acquired using a two-photon microscope. The images show a selective distribution of LPS in gel domains. Our results support the hypothesis that LPS could aggregate and concentrate selectively in biological membranes, providing a mechanism to bring together several components of the LPS-sensing machinery. PMID:19324006

  16. Distributed Network Analysis Using TOPAS and Wireshark

    E-print Network

    Carle, Georg

    …Wireshark makes use of the pcap library (libpcap) [2]. The analysis functions comprise stream reassembly … technical gimmicks, such as traffic redirection or the experimental remote capture functionality of WinPcap … into a stream of frames in pcap format. Wireshark is able to read this pcap stream from a Unix pipe…

  17. Particle size distributions by transmission electron microscopy: an interlaboratory comparison case study

    NASA Astrophysics Data System (ADS)

    Rice, Stephen B.; Chan, Christopher; Brown, Scott C.; Eschbach, Peter; Han, Li; Ensor, David S.; Stefaniak, Aleksandr B.; Bonevich, John; Vladár, András E.; Hight Walker, Angela R.; Zheng, Jiwen; Starnes, Catherine; Stromberg, Arnold; Ye, Jia; Grulke, Eric A.

    2013-12-01

    This paper reports an interlaboratory comparison that evaluated a protocol for measuring and analysing the particle size distribution of discrete, metallic, spheroidal nanoparticles using transmission electron microscopy (TEM). The study was focused on automated image capture and automated particle analysis. NIST RM8012 gold nanoparticles (30 nm nominal diameter) were measured for area-equivalent diameter distributions by eight laboratories. Statistical analysis was used to (1) assess the data quality without using size distribution reference models, (2) determine reference model parameters for different size distribution reference models and non-linear regression fitting methods and (3) assess the measurement uncertainty of a size distribution parameter by using its coefficient of variation. The interlaboratory area-equivalent diameter mean, 27.6 nm ± 2.4 nm (computed based on a normal distribution), was quite similar to the area-equivalent diameter, 27.6 nm, assigned to NIST RM8012. The lognormal reference model was the preferred choice for these particle size distributions as, for all laboratories, its parameters had lower relative standard errors (RSEs) than the other size distribution reference models tested (normal, Weibull and Rosin-Rammler-Bennett). The RSEs for the fitted standard deviations were two orders of magnitude higher than those for the fitted means, suggesting that most of the parameter estimate errors were associated with estimating the breadth of the distributions. The coefficients of variation for the interlaboratory statistics also confirmed the lognormal reference model as the preferred choice. From quasi-linear plots, the typical range for good fits between the model and cumulative number-based distributions was 1.9 fitted standard deviations less than the mean to 2.3 fitted standard deviations above the mean. Automated image capture, automated particle analysis and statistical evaluation of the data and fitting coefficients provide a framework for assessing nanoparticle size distributions using TEM for image acquisition.
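
    The reference-model fitting and the relative standard errors (RSEs) discussed above can be sketched as a non-linear least-squares fit of a lognormal CDF to a cumulative number-based size distribution; the synthetic diameters and starting values below are illustrative, and the parameter errors come from the covariance matrix returned by the fit:

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import lognorm

        rng = np.random.default_rng(2)
        # Synthetic stand-in for measured area-equivalent diameters (nm).
        diam = np.sort(rng.lognormal(mean=np.log(27.6), sigma=0.08, size=400))

        # Empirical cumulative number-based distribution.
        ecdf = np.arange(1, diam.size + 1) / diam.size

        def lognormal_cdf(d, mu, sigma):
            return lognorm.cdf(d, s=sigma, scale=np.exp(mu))

        popt, pcov = curve_fit(lognormal_cdf, diam, ecdf,
                               p0=[np.log(diam.mean()), 0.1])
        perr = np.sqrt(np.diag(pcov))
        for name, val, err in zip(("mu", "sigma"), popt, perr):
            print(f"{name} = {val:.4f}, RSE = {100 * err / abs(val):.2f}%")
        print(f"fitted geometric mean diameter: {np.exp(popt[0]):.1f} nm")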

  18. An empirical analysis of the distribution of overshoots in a stationary Gaussian stochastic process

    NASA Technical Reports Server (NTRS)

    Carter, M. C.; Madison, M. W.

    1973-01-01

    The frequency distribution of overshoots in a stationary Gaussian stochastic process is analyzed. The primary tools in this analysis are computer simulation and statistical estimation. Computer simulation is used to simulate stationary Gaussian stochastic processes that have selected autocorrelation functions. An analysis of the simulation results reveals a frequency distribution for overshoots with a functional dependence on the mean and variance of the process. Statistical estimation is then used to estimate the mean and variance of a process. It is shown that, given an autocorrelation function and estimates of the mean and variance of the process, a frequency distribution for overshoots can be estimated.
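
    The simulation half of such an analysis can be sketched by generating a stationary Gaussian process with a chosen autocorrelation (an AR(1) process stands in here) and counting threshold upcrossings and the samples spent above the level; all parameters are illustrative:

        import numpy as np

        rng = np.random.default_rng(3)

        # Stationary Gaussian process via AR(1): x_t = phi*x_{t-1} + e_t,
        # scaled to unit stationary variance; autocorrelation r(k) = phi**k.
        phi, n = 0.9, 200_000
        e = rng.normal(0.0, np.sqrt(1 - phi ** 2), size=n)
        x = np.empty(n)
        x[0] = rng.normal()
        for t in range(1, n):
            x[t] = phi * x[t - 1] + e[t]

        threshold = 1.5
        above = x > threshold
        # Indices where the process crosses the threshold from below.
        up = np.flatnonzero(~above[:-1] & above[1:])
        # Overshoot duration: samples spent above the level after each upcrossing
        # (the final upcrossing is dropped in case the series ends while above).
        durations = [np.argmin(above[i + 1:]) + 1 for i in up[:-1]]
        print(f"{up.size} overshoots; mean duration {np.mean(durations):.1f} samples")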

  19. Probabilistic approach to identify sensitive parameter distributions in multimedia pathway analysis.

    SciTech Connect

    Kamboj, S.; Gnanapragasam, E.; LePoire, D.; Biwer, B. M.; Cheng, J.; Arnish, J.; Yu, C.; Chen, S. Y.; Mo, T.; Abu-Eid, R.; Thaggard, M.; Environmental Assessment; NRC

    2002-01-01

    Sensitive parameter distributions were identified with the use of probabilistic analysis in the RESRAD computer code. RESRAD is a multimedia pathway analysis code designed to evaluate radiological exposures resulting from radiological contamination in soil. The dose distribution was obtained by using a set of default parameter distribution/values. Most of the variations in the output dose distribution could be attributed to uncertainty in a small set of input parameters that could be considered as sensitive parameter distributions. The identification of the sensitive parameters is a first step in the prioritization of future research and information gathering. When site-specific parameter distribution/values are available for an actual site, the same process should be used with these site-specific data. Regression analysis used to identify sensitive parameters indicated that the dominant pathways depended on the radionuclide and source configurations. However, two parameter distributions were sensitive for many radionuclides: the external shielding factor when external exposure was the dominant pathway and the plant transfer factor when plant ingestion was the dominant pathway. No single correlation or regression coefficient can be used alone to identify sensitive parameters in all the cases. The coefficients are useful guides, but they have to be used in conjunction with other aids, such as scatter plots, and should undergo further analysis.

  20. Analysis of the tropospheric water distribution during FIRE 2

    NASA Technical Reports Server (NTRS)

    Westphal, Douglas L.

    1993-01-01

    The Penn State/NCAR mesoscale model, as adapted for use at ARC, was used as a testbed for the development and validation of cloud models for use in General Circulation Models (GCM's). This modeling approach also allows us to intercompare the predictions of the various cloud schemes within the same dynamical framework. The use of the PSU/NCAR mesoscale model also allows us to compare our results with FIRE-II (First International Satellite Cloud Climatology Project Regional Experiment) observations, instead of climate statistics. Though a promising approach, our work to date revealed several difficulties. First, the model by design is limited in spatial coverage and is only run for 12 to 48 hours at a time. Hence the quality of the simulation will depend heavily on the initial conditions. The poor quality of upper-tropospheric measurements of water vapor is well known and the situation is particularly bad for mid-latitude winter since the coupling with the surface is less direct than in summer so that relying on the model to spin-up a reasonable moisture field is not always successful. Though one of the most common atmospheric constituents, water vapor is relatively difficult to measure accurately, especially operationally over large areas. The standard NWS sondes have little sensitivity at the low temperatures where cirrus form and the data from the GOES 6.7 micron channel is difficult to quantify. For this reason, the goals of FIRE Cirrus II included characterizing the three-dimensional distribution of water vapor and clouds. In studying the data from FIRE Cirrus II, it was found that no single special observation technique provides accurate regional distributions of water vapor. The Raman lidar provides accurate measurements, but only at the Hub, for levels up to 10 km, and during nighttime hours. The CLASS sondes are more sensitive to moisture at low temperatures than are the NWS sondes, but the four stations only cover an area of two hundred kilometers on a side. The aircraft give the most accurate measurements of water vapor, but are limited in spatial and temporal coverage. This problem is partly alleviated by the use of the MAPS analyses, a four-dimensional data assimilation system that combines the previous 3-hour forecast with the available observations, but its upper-level moisture analyses are sometimes deficient because of the vapor measurement problem. An attempt was made to create a consistent four-dimensional description of the water vapor distribution during the second IFO by subjectively combining data from a variety of sources, including MAPS analyses, CLASS sondes, SPECTRE sondes, NWS sondes, GOES satellite analyses, radars, lidars, and microwave radiometers.

  1. A Simultaneous Spectral Invariant Analysis of the GRB Count Distribution and Time Dilation

    E-print Network

    Ehud Cohen; Tsvi Piran

    1995-12-19

    The analysis of the BATSE's count distribution within cosmological models suffers from observational uncertainties due to the variability of the bursts' spectra: when BATSE observes bursts from different redshifts at a fixed energy band it detects photons from different energy bands at the source. This adds a spectral dependence to the count distribution $N(C)$. Similarly variation of the duration as a function of energy at the source complicates the time dilation analysis. We describe here a new statistical formalism that performs the required ``blue shifting" of the count number and the burst duration in a statistical manner. This formalism allows us to perform a combined best fit (maximal likelihood) to the count distribution, $N(C)$, and the duration distribution simultaneously. The outcome of this analysis is a single best fit value for the redshift of the observed bursts.

  2. Quantitative analysis of inclusion distributions in hot pressed silicon carbide

    SciTech Connect

    Michael Paul Bakas

    2012-12-01

Depth of penetration measurements in hot pressed SiC have exhibited significant variability that may be influenced by microstructural defects. To obtain a better understanding of the role of microstructural defects under highly dynamic conditions, fragments of hot pressed SiC plates subjected to impact tests were examined. Two types of inclusion defects were identified, carbonaceous and an aluminum-iron-oxide phase. A disproportionate number of large inclusions were found in the rubble, indicating that the inclusion defects took part in the fragmentation process. Distribution functions were plotted to compare the inclusion populations. Fragments from the superior performing sample had an inclusion population consisting of more numerous but smaller inclusions. One possible explanation for this result is that the superior sample withstood a greater stress before failure, causing a greater number of smaller inclusions to participate in fragmentation than in the weaker sample.

  3. Parallelization of Finite Element Analysis Codes Using Heterogeneous Distributed Computing

    NASA Technical Reports Server (NTRS)

    Ozguner, Fusun

    1996-01-01

Performance gains in computer design are quickly consumed as users seek to analyze larger problems to a higher degree of accuracy. Innovative computational methods, such as parallel and distributed computing, seek to multiply the power of existing hardware technology to satisfy the computational demands of large applications. In the early stages of this project, experiments were performed using two large, coarse-grained applications, CSTEM and METCAN. These applications were parallelized on an Intel iPSC/860 hypercube. It was found that the overall speedup was very low, due to large, inherently sequential code segments present in the applications. The overall execution time T(sub par) of the application depends on these sequential segments. If these segments make up a significant fraction of the overall code, the application will have a poor speedup measure.

  4. Southern Arizona riparian habitat: Spatial distribution and analysis

    NASA Technical Reports Server (NTRS)

    Lacey, J. R.; Ogden, P. R.; Foster, K. E.

    1975-01-01

The objectives of this study were centered around the demonstration of remote sensing as an inventory tool and researching the multiple uses of riparian vegetation. Specific study objectives were to: (1) map riparian vegetation along the Gila River, San Simon Creek, San Pedro River, and Pantano Wash, (2) determine the feasibility of automated mapping using LANDSAT-1 computer compatible tapes, (3) locate and summarize existing maps delineating riparian vegetation, (4) summarize data relevant to Southern Arizona's riparian products and uses, (5) document recent riparian vegetation changes along a selected portion of the San Pedro River, (6) summarize historical changes in composition and distribution of riparian vegetation, and (7) summarize sources of available photography pertinent to Southern Arizona.

  5. Finite key analysis for symmetric attacks in quantum key distribution

    SciTech Connect

    Meyer, Tim; Kampermann, Hermann; Kleinmann, Matthias; Bruss, Dagmar

    2006-10-15

    We introduce a constructive method to calculate the achievable secret key rate for a generic class of quantum key distribution protocols, when only a finite number n of signals is given. Our approach is applicable to all scenarios in which the quantum state shared by Alice and Bob is known. In particular, we consider the six state protocol with symmetric eavesdropping attacks, and show that for a small number of signals, i.e., below n{approx}10{sup 4}, the finite key rate differs significantly from the asymptotic value for n{yields}{infinity}. However, for larger n, a good approximation of the asymptotic value is found. We also study secret key rates for protocols using higher-dimensional quantum systems.

  6. Completion report harmonic analysis of electrical distribution systems

    SciTech Connect

    Tolbert, L.M.

    1996-03-01

Harmonic currents have increased dramatically in electrical distribution systems in the last few years due to the growth in non-linear loads found in most electronic devices. Because electrical systems have been designed for linear voltage and current waveforms (i.e., nearly sinusoidal), non-linear loads can cause serious problems such as overheating conductors or transformers, capacitor failures, inadvertent circuit breaker tripping, or malfunction of electronic equipment. The U.S. Army Center for Public Works has proposed a study to determine which devices are best for reducing or eliminating the effects of harmonics on power systems typical of those existing in their Command, Control, Communication and Intelligence (C3I) sites.
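
    A minimal sketch of the basic quantity behind such a harmonic survey, total harmonic distortion (THD) estimated from a sampled waveform; the toy load current below is an assumption for illustration, not measured C3I-site data.

      import numpy as np

      fs, f0 = 10_000, 60.0                      # sample rate (Hz), fundamental (Hz)
      t = np.arange(0, 1.0, 1 / fs)
      i_t = np.sin(2 * np.pi * f0 * t) + 0.2 * np.sin(2 * np.pi * 3 * f0 * t)  # toy non-linear load

      spectrum = np.abs(np.fft.rfft(i_t)) / len(t)
      freqs = np.fft.rfftfreq(len(t), 1 / fs)
      fund = spectrum[np.argmin(np.abs(freqs - f0))]
      harmonics = [spectrum[np.argmin(np.abs(freqs - h * f0))] for h in range(2, 26)]
      thd = np.sqrt(np.sum(np.square(harmonics))) / fund
      print(f"THD = {thd:.1%}")                  # about 20% for this toy waveform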

  7. A pair distribution function analysis of zeolite beta

    SciTech Connect

    Martinez-Inesta, M.M.; Peral, I.; Proffen, T.; Lobo, R.F.

    2010-07-20

    We describe the structural refinement of zeolite beta using the local structure obtained with the pair distribution function (PDF) method. A high quality synchrotron and two neutron scattering datasets were obtained on two samples of siliceous zeolite beta. The two polytypes that make up zeolite beta have the same local structure; therefore refinement of the two structures was possible using the same experimental PDF. Optimized structures of polytypes A and B were used to refine the structures using the program PDFfit. Refinements using only the synchrotron or the neutron datasets gave results inconsistent with each other but a cyclic refinement with the two datasets gave a good fit to both PDFs. The results show that the PDF method is a viable technique to analyze the local structure of disordered zeolites. However, given the complexity of most zeolite frameworks, the use of both X-ray and neutron radiation and high-resolution patterns is essential to obtain reliable refinements.

  8. Advanced analysis of metal distributions in human hair

    SciTech Connect

    Kempson, Ivan M.; Skinner, William M.

    2008-06-09

    A variety of techniques (secondary electron microscopy with energy dispersive X-ray analysis, time-of-flight-secondary ion mass spectrometry, and synchrotron X-ray fluorescence) were utilized to distinguish metal contamination occurring in hair arising from endogenous uptake from an individual exposed to a polluted environment, in this case a lead smelter. Evidence was sought for elements less affected by contamination and potentially indicative of biogenic activity. The unique combination of surface sensitivity, spatial resolution, and detection limits used here has provided new insight regarding hair analysis. Metals such as Ca, Fe, and Pb appeared to have little representative value of endogenous uptake and were mainly due to contamination. Cu and Zn, however, demonstrate behaviors worthy of further investigation into relating hair concentrations to endogenous function.

  9. Photoelastic analysis of stress distribution with different implant systems.

    PubMed

    Pellizzer, Eduardo Piza; Carli, Rafael Imai; Falcón-Antenucci, Rosse Mary; Verri, Fellippo Ramos; Goiato, Marcelo Coelho; Villa, Luiz Marcelo Ribeiro

    2014-04-01

The aim of this study was to evaluate stress distribution with different implant systems through photoelasticity. Five models were fabricated with photoelastic resin PL-2. Each model was composed of a block of photoelastic resin (10 × 40 × 45 mm) with an implant and a healing abutment: model 1, internal hexagon implant (4.0 × 10 mm; Conect AR, Conexão, São Paulo, Brazil); model 2, Morse taper/internal octagon implant (4.1 × 10 mm; Standard, Straumann ITI, Andover, Mass); model 3, Morse taper implant (4.0 × 10 mm; AR Morse, Conexão); model 4, locking taper implant (4.0 × 11 mm; Bicon, Boston, Mass); model 5, external hexagon implant (4.0 × 10 mm; Master Screw, Conexão). Axial and oblique (45°) loads of 150 N were applied by a universal testing machine (EMIC-DL 3000), and a circular polariscope was used to visualize the stress. The results were photographed and analyzed qualitatively using Adobe Photoshop software. For the axial load, the greatest stress concentration was exhibited in the cervical and apical thirds. However, the highest number of isochromatic fringes was observed at the implant apex and in the cervical region adjacent to the load direction in all models for the oblique load. Model 2 (Morse taper, internal octagon, Straumann ITI) presented the lowest stress concentration, while model 5 (external hexagon, Master Screw, Conexão) exhibited the greatest stress. It was concluded that Morse taper implants presented a more favorable stress distribution among the test groups. The external hexagon implant showed the highest stress concentration. Oblique load generated the highest stress in all models analyzed. PMID:22208909

  10. Distributed finite element analysis using a transputer network

    NASA Technical Reports Server (NTRS)

    Watson, James; Favenesi, James; Danial, Albert; Tombrello, Joseph; Yang, Dabby; Reynolds, Brian; Turrentine, Ronald; Shephard, Mark; Baehmann, Peggy

    1989-01-01

    The principal objective of this research effort was to demonstrate the extraordinarily cost effective acceleration of finite element structural analysis problems using a transputer-based parallel processing network. This objective was accomplished in the form of a commercially viable parallel processing workstation. The workstation is a desktop size, low-maintenance computing unit capable of supercomputer performance yet costs two orders of magnitude less. To achieve the principal research objective, a transputer based structural analysis workstation termed XPFEM was implemented with linear static structural analysis capabilities resembling commercially available NASTRAN. Finite element model files, generated using the on-line preprocessing module or external preprocessing packages, are downloaded to a network of 32 transputers for accelerated solution. The system currently executes at about one third Cray X-MP24 speed but additional acceleration appears likely. For the NASA selected demonstration problem of a Space Shuttle main engine turbine blade model with about 1500 nodes and 4500 independent degrees of freedom, the Cray X-MP24 required 23.9 seconds to obtain a solution while the transputer network, operated from an IBM PC-AT compatible host computer, required 71.7 seconds. Consequently, the $80,000 transputer network demonstrated a cost-performance ratio about 60 times better than the $15,000,000 Cray X-MP24 system.

  11. Extreme value statistics analysis of fracture strengths of a sintered silicon nitride failing from pores

    NASA Technical Reports Server (NTRS)

    Chao, Luen-Yuan; Shetty, Dinesh K.

    1992-01-01

Statistical analysis and correlation between pore-size distribution and fracture strength distribution using the theory of extreme-value statistics is presented for a sintered silicon nitride. The pore-size distribution on a polished surface of this material was characterized using an automatic optical image analyzer. The distribution measured on the two-dimensional plane surface was transformed to a population (volume) distribution using the Schwartz-Saltykov diameter method. The population pore-size distribution and the distribution of the pore size at the fracture origin were correlated by extreme-value statistics. Fracture strength distribution was then predicted from the extreme-value pore-size distribution, using a linear elastic fracture mechanics model of an annular crack around a pore and the fracture toughness of the ceramic. The predicted strength distribution was in good agreement with strength measurements in bending. In particular, the extreme-value statistics analysis explained the nonlinear trend in the linearized Weibull plot of measured strengths without postulating a lower-bound strength.
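
    For readers unfamiliar with the linearized Weibull plot mentioned above, a minimal sketch follows; the strength values are synthetic, not the paper's measurements, and the plotting-position convention is one common choice among several.

      import numpy as np

      strengths = np.sort(np.array([612., 655., 688., 701., 723., 748., 770., 802., 835., 880.]))
      n = len(strengths)
      F = (np.arange(1, n + 1) - 0.5) / n       # median-rank style plotting positions
      x = np.log(strengths)
      y = np.log(-np.log(1.0 - F))              # linear in x if strengths are two-parameter Weibull
      m, c = np.polyfit(x, y, 1)
      print(f"Weibull modulus m = {m:.1f}, characteristic strength = {np.exp(-c / m):.0f} MPa")

    A systematic curvature of (x, y) away from a straight line is the kind of nonlinear trend the extreme-value pore-size analysis accounts for.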

12. Models for HLI analysis of power systems with offshore wind farms and distributed generation

    E-print Network

    Bak-Jensen, Birgitte

Models for HLI analysis of power systems with offshore wind farms and distributed generation (Aalborg Universitet), produced within the project Integration of Wind Power and Transmission Networks for Offshore Wind Farms. [Title-page fragment; the abstract was not recovered.]

  13. Semi-Markov PEPA: Compositional Modelling and Analysis with Generally Distributed Actions

    E-print Network

    Bradley, Jeremy

Just as a PEPA model generates a Markov chain for analysis purposes, semi-Markov PEPA produces a semi-Markov chain. We discuss how semi-Markov PEPA models are analysed through Knottenbelt's semi-Markov DNAmaca ... [fragment truncated; remainder of abstract not recovered].

14. Analysis of the Distributions of Color Characteristics

    E-print Network

    Stanchev, Peter

    2008-01-01

Serdica J. Computing 2 (2008), 101-126. We study some of the characteristics of art painting image color semantics; a system (APICSS) for image analysis and retrieval was created. The obtained result can be used ... [fragment truncated; remainder of abstract not recovered].

15. Multifractal analysis of a semi-distributed urban hydrological model

    E-print Network

    Lovejoy, Shaun

Multifractal analysis of a semi-distributed urban hydrological model, submitted to Urban Water Journal (manuscript NURW-2011-0130.R2). [For-peer-review header fragment; the abstract was not recovered.]

  16. The Distribution of References Across Texts: Some Implications for Citation Analysis

    E-print Network

    Menczer, Filippo

In citation network analysis, complex behavior is reduced to a simple edge, namely, node A cites node B. It is often the case that the contributions of all citations are treated equally, even though some citations appear ... [fragment truncated; remainder of abstract not recovered].

  17. Composition and On Demand Deployment of Distributed Brain Activity Analysis Application on Global Grids

    E-print Network

    Buyya, Rajkumar

Application domains considered include brain science and high-energy physics. The analysis of brain activity data gathered from MEG (magnetoencephalography) instruments helps to understand and analyze brain functions and requires access to large-scale computational resources. [Fragment; remainder of abstract not recovered.]

  18. Sensitivity Analysis of Distributed Soil Moisture Profiles by Active Distributed Temperature Sensing

    NASA Astrophysics Data System (ADS)

    Ciocca, F.; Van De Giesen, N.; Assouline, S.; Huwald, H.; Lunati, I.

    2012-12-01

Monitoring and measuring the fluctuations of soil moisture at large scales in the field remains a challenge. Although sensors based on measurement of dielectric properties, such as Time Domain Reflectometers (TDR) and capacity-based probes, can guarantee reasonable responses, they always operate on limited spatial ranges. On the other hand, optical fibers attached to a Distributed Temperature Sensing (DTS) system allow high-precision soil temperature measurements over distances of kilometers. A recently developed technique called Active DTS (ADTS), consisting of a heat pulse of a certain duration and power along the metal sheath covering the optical fiber buried in the soil, has proven a promising alternative to spatially limited probes. Two approaches have been investigated to infer distributed soil moisture profiles in the region surrounding the optic fiber cable by analyzing the temperature variations during the heating and cooling phases. One directly relates the change of temperature to the soil moisture (independently measured) to develop a specific calibration curve for the soil used; the other requires inferring the thermal properties and then obtaining the soil moisture by inversion of known relationships. To test and compare the two approaches over a broad range of saturation conditions, a large lysimeter has been homogeneously filled with loamy soil, and 52 meters of fiber optic cable have been buried in the shallower 0.8 meters in a double-coil rigid structure of 15 loops, along with a series of capacity-based sensors (calibrated for the soil used) to provide independent soil moisture measurements at the same depths as the optical fiber. Thermocouples have also been wrapped around the fiber to investigate the effects of the insulating cover surrounding the cable, and placed between each layer in order to monitor heat diffusion at several centimeters. A high-performance DTS has been used to measure the temperature along the fiber optic cable. Several soil moisture profiles have been generated in the lysimeter, either by varying the water table height or by wetting the soil from the top. The sensitivity of the ADTS method for heat pulses of different duration and power, and the attainable ranges of spatial and temporal resolution, are presented.
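
    A sketch of the second (inversion) approach under a textbook assumption: the classic infinite line-source model dT = q/(4*pi*lambda)*ln(t) + const, which may differ from the authors' exact relationships; the heat input q and the synthetic data are illustrative only.

      import numpy as np

      q = 10.0                                  # heat input per unit cable length (W/m), assumed
      t = np.linspace(20.0, 120.0, 50)          # seconds into the heat pulse (late-time window)
      lam_true = 1.2                            # W/(m K), synthetic "truth"
      dT = q / (4 * np.pi * lam_true) * np.log(t) \
           + 0.05 * np.random.default_rng(1).normal(size=t.size)

      slope, _ = np.polyfit(np.log(t), dT, 1)
      lam = q / (4 * np.pi * slope)             # invert the slope for thermal conductivity
      print(f"recovered thermal conductivity = {lam:.2f} W/(m K)")
      # A soil-specific calibration curve would then map lam (or dT itself) to moisture.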

  19. Numerical analysis of atomic density distribution in arc driven negative ion sources

    SciTech Connect

    Yamamoto, T. Shibata, T.; Hatayama, A.; Kashiwagi, M.; Hanada, M.; Sawada, K.

    2014-02-15

    The purpose of this study is to calculate atomic (H{sup 0}) density distribution in JAEA 10 ampere negative ion source. A collisional radiative model is developed for the calculation of the H{sup 0} density distribution. The non-equilibrium feature of the electron energy distribution function (EEDF), which mainly determines the H{sup 0} production rate, is included by substituting the EEDF calculated from 3D electron transport analysis. In this paper, the H{sup 0} production rate, the ionization rate, and the density distribution in the source chamber are calculated. In the region where high energy electrons exist, the H{sup 0} production and the ionization are enhanced. The calculated H{sup 0} density distribution without the effect of the H{sup 0} transport is relatively small in the upper region. In the next step, the effect should be taken into account to obtain more realistic H{sup 0} distribution.

  20. Evolution History of Asteroid Itokawa Based on Block Distribution Analysis

    NASA Astrophysics Data System (ADS)

    Mazrouei, Sara; Daly, Michael; Barnouin, Olivier; Ernst, Carolyn

    2013-04-01

    This work investigates trends in the global and regional distribution of blocks on asteroid 25143 Itokawa in order to discover new findings to better understand the history of this asteroid. Itokawa is a near-Earth object, and the first asteroid that was targeted for a sample return mission. Trends in block population provide new insights in regards to Itokawa's current appearance following the disruption of a possible parent body, and how its surface might have changed since then. Here blocks are defined as rocks or features with distinctive positive relief that are larger than a few meters in size. The size and distribution of blocks are measured by mapping the outline of the blocks using the Small Body Mapping Tool (SBMT) created by the Johns Hopkins University Applied Physics Laboratory [1]. The SBMT allows the user to overlap correctly geo-located Hayabusa images [2] onto the Itokawa shape model. This study provides additional inferences on the original disruption and subsequent re-accretion of Itokawa's "head" and "body" from block analyses. A new approach is taken by analyzing the population of blocks with respect to latitude for both Itokawa's current state, and a hypothetical elliptical body. Itokawa currently rotates approximately about its maximum moment of inertia, which is expected due to conservation of momentum and minimum energy arguments. After the possible disruption of the parent body of Itokawa, the "body" of Itokawa would have tended to a similar rotation. The shape of this body is made by removing the head of Itokawa and applying a semispherical cap. Using the method of [3] inertial properties of this object are calculated. With the assumption that this object had settled to its stable rotational axis, it is found that the pole axis could have been tilted about 13° away from the current axis in the direction opposite the head, equivalent to a 33 meter change in the center of mass. The results of this study provide means to test the hypothesis of whether or not Itokawa is a contact binary. References: [1] E. G. Kahn, et al. A tool for the visualization of small body data. In LPSC XLII, 2011. [2] A. Fujiwara, et al. The rubble-pile asteroid Itokawa as observed by Hayabusa. Science, 312(5778):1330-1334, June 2006. [3] A. F. Cheng, et al. Small-scale topography of 433 Eros from laser altimetry and imaging. Icarus, 155(1):51-74, 2002

  1. Consideration of tip speed limitations in preliminary analysis of minimum COE wind turbines

    NASA Astrophysics Data System (ADS)

    Cuerva-Tejero, A.; Yeow, T. S.; Lopez-Garcia, O.; Gallego-Castillo, C.

    2014-12-01

A relation between Cost Of Energy, COE, maximum allowed tip speed, and rated wind speed is obtained for wind turbines with a given goal rated power. The wind regime is characterised by the corresponding parameters of the probability density function of wind speed. The non-dimensional characteristics of the rotor: number of blades, the blade radial distributions of local solidity, twist angle, and airfoil type, play the role of parameters in the mentioned relation. The COE is estimated using a cost model commonly used by designers. This cost model requires basic design data such as the rotor radius and the ratio between the hub height and the rotor radius. Certain design options, DO, related to the technology of the power plant, tower and blades are also required as inputs. The function obtained for the COE can be explored to find those values of rotor radius that give rise to minimum cost of energy for a given wind regime as the tip speed limitation changes. The analysis reveals that iso-COE lines evolve parallel to iso-radius lines for large values of limit tip speed, but that this is not the case for small values of the tip speed limits. It is concluded that, as the tip speed limit decreases, the optimum decision for keeping minimum COE values can be: a) reducing the rotor radius for places with a high Weibull scale parameter or b) increasing the rotor radius for places with a low Weibull scale parameter.
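
    The energy side of such a COE estimate reduces to integrating a power curve over the Weibull wind-speed density; the sketch below uses a generic 2 MW curve and assumed Weibull parameters, not the paper's design data or cost model.

      import numpy as np

      k, c = 2.0, 8.0                                   # assumed Weibull shape and scale (m/s)
      v = np.linspace(0.0, 30.0, 3001)
      pdf = (k / c) * (v / c) ** (k - 1) * np.exp(-(v / c) ** k)

      v_in, v_rated, v_out, P_rated = 3.0, 12.0, 25.0, 2.0e6   # generic 2 MW turbine
      power = np.where((v >= v_in) & (v < v_rated),
                       P_rated * ((v - v_in) / (v_rated - v_in)) ** 3, 0.0)
      power = np.where((v >= v_rated) & (v <= v_out), P_rated, power)

      aep = np.trapz(power * pdf, v) * 8760 / 1e6       # MWh per year
      print(f"AEP ~ {aep:.0f} MWh/yr")                  # feeds the denominator of a COE estimate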

  2. [Vertical Distribution Characteristics and Analysis in Sediments of Xidahai Lake].

    PubMed

    Duan, Mu-chun; Xiao, Hai-feng; Zang, Shu-ying

    2015-07-01

The organic matter (OM), total nitrogen (TN), total phosphorus (TP), the morphological changes of phosphorus, and the particle size in a columnar sediment core of Xidahai Lake were analyzed to discuss the vertical distribution characteristics and influencing factors. The results showed that the contents of OM, TN and TP were 0.633%-2.756%, 0.150%-0.429% and 648.00-1480.67 mg·kg-1, respectively. The contents of Ca-P, IP and OM changed little, while the contents of Fe/Al-P, OP, TP and TN fluctuated, from 1843 to 1970. The contents of Ca-P, IP and TP tended to decrease, the contents of Fe/Al-P, OP and OM first decreased and then increased to different degrees, and TN fluctuated largely, from 1970 to 1996. The nutrient element contents showed relatively large fluctuation from 1996 to 2009, and the average contents of Fe/Al-P, OP and OM were the highest of the three stages. The sediment core nutrient pollution sources were mainly industrial wastewater, sewage, and the loss of fertilizers around Xidahai Lake. The ratio of C/N in the sediments showed that organic matter was mainly from aquatic organisms. The sediment particle size composition was dominated by clay and fine silt. The correlation studies showed that Ca-P, IP and TP were significantly positively correlated, showing that the contribution of Ca-P to IP and TP growth was large. PMID:26489314

  3. Motion synthesis and force distribution analysis for a biped robot.

    PubMed

    Trojnacki, Maciej T; Zieli?ska, Teresa

    2011-01-01

In this paper, the method of generating biped robot motion using recorded human gait is presented. The recorded data were modified taking into account the velocity available for robot drives. The data include only selected joint angles; therefore the missing values were obtained considering the dynamic postural stability of the robot, which means obtaining an adequate motion trajectory of the so-called Zero Moment Point (ZMP). Also, the method of determining the ground reaction forces' distribution during the biped robot's dynamically stable walk is described. The method was developed by the authors. Following the description of equations characterizing the dynamics of the robot's motion, the values of the components of ground reaction forces were symbolically determined, as well as the coordinates of the points of the robot's feet contact with the ground. The theoretical considerations have been supported by computer simulation and animation of the robot's motion. This was done using the Matlab/Simulink package and the Simulink 3D Animation Toolbox, and it has proved the proposed method. PMID:21761810
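
    A minimal sketch of the ZMP computation for a set of point masses, using the standard formula with angular-momentum terms neglected; the masses and accelerations below are illustrative, not the robot model of the paper.

      import numpy as np

      g = 9.81
      m  = np.array([3.0, 1.5, 1.5])            # link masses (kg)
      x  = np.array([0.02, 0.10, -0.05])        # CoM x-positions (m)
      z  = np.array([0.60, 0.30, 0.30])         # CoM heights (m)
      ax = np.array([0.5, 1.0, -0.8])           # x-accelerations (m/s^2)
      az = np.array([0.0, 0.1, 0.0])            # z-accelerations (m/s^2)

      x_zmp = (np.sum(m * (az + g) * x) - np.sum(m * ax * z)) / np.sum(m * (az + g))
      print(f"x_ZMP = {x_zmp:.3f} m")           # must stay inside the support polygon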

  4. Spectral Energy Distribution Analysis of Luminous Infrared Galaxies from GOALS

    NASA Astrophysics Data System (ADS)

    U, Vivian; Sanders, D.; Evans, A.; Mazzarella, J.; Armus, L.; Iwasawa, K.; Vavilkin, T.; Surace, J.; Howell, J.; GOALS Team

    2009-05-01

    The spectral energy distributions (SEDs) of the local luminous and ultraluminous infrared galaxies (LIRGs and ULIRGs) were thought to be well understood and exemplified by that of Arp 220, the "poster child" of these objects; but in fact, Arp 220 has been shown to be special in more than one way. Here we present comprehensive SEDs (from radio through x-ray) for the 88 most luminous (U)LIRGs in the Great Observatories All-sky LIRG Survey (GOALS), which combines multiwavelength imaging and spectroscopic data from space telescopes (Spitzer, HST, GALEX, and Chandra) in an effort to fully understand galaxy evolution processes and the enhanced infrared emission in the local universe. Spanning the luminosity range 11.4 < log(L_ir/L_sun) < 12.5, our objects are a complete subset of the flux-limited IRAS Revised Bright Galaxy Sample. To complement spacecraft data, we also took optical imaging data from Mauna Kea as well as searched through literature in order to compile accurate and consistent photometry and fully characterize the spectral shapes of the SEDs. We then analyzed the ratios of the radio, infrared, optical, and x-ray emission as a function of infrared luminosity and discussed the trends observed.

  5. Spherical Harmonic Analysis of Particle Velocity Distribution Function: Comparison of Moments and Anisotropies using Cluster Data

    NASA Technical Reports Server (NTRS)

    Gurgiolo, Chris; Vinas, Adolfo F.

    2009-01-01

    This paper presents a spherical harmonic analysis of the plasma velocity distribution function using high-angular, energy, and time resolution Cluster data obtained from the PEACE spectrometer instrument to demonstrate how this analysis models the particle distribution function and its moments and anisotropies. The results show that spherical harmonic analysis produced a robust physical representation model of the velocity distribution function, resolving the main features of the measured distributions. From the spherical harmonic analysis, a minimum set of nine spectral coefficients was obtained from which the moment (up to the heat flux), anisotropy, and asymmetry calculations of the velocity distribution function were obtained. The spherical harmonic method provides a potentially effective "compression" technique that can be easily carried out onboard a spacecraft to determine the moments and anisotropies of the particle velocity distribution function for any species. These calculations were implemented using three different approaches, namely, the standard traditional integration, the spherical harmonic (SPH) spectral coefficients integration, and the singular value decomposition (SVD) on the spherical harmonic methods. A comparison among the various methods shows that both SPH and SVD approaches provide remarkable agreement with the standard moment integration method.
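
    As an illustration of the expansion idea only (not the PEACE/Cluster processing chain), the sketch below recovers the nine spherical-harmonic coefficients of degree l <= 2, which happens to match the nine-coefficient count mentioned above, from samples on the sphere by least squares.

      import numpy as np
      from scipy.special import sph_harm        # sph_harm(m, l, azimuth, polar)

      rng = np.random.default_rng(2)
      az = rng.uniform(0, 2 * np.pi, 400)       # azimuthal angles of the samples
      pol = np.arccos(rng.uniform(-1, 1, 400))  # polar angles, uniform on the sphere

      lm = [(l, m) for l in range(3) for m in range(-l, l + 1)]   # 1 + 3 + 5 = 9 terms
      A = np.column_stack([sph_harm(m, l, az, pol) for l, m in lm])

      f = 1.0 + 0.5 * np.cos(pol) ** 2          # toy "distribution function" on the sphere
      coeff, *_ = np.linalg.lstsq(A, f.astype(complex), rcond=None)
      for (l, m), cf in zip(lm, coeff):
          print(f"l={l:d} m={m:+d}  c={cf:.3f}")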

  6. Wavelet analysis of baryon acoustic structures in the galaxy distribution

    NASA Astrophysics Data System (ADS)

    Arnalte-Mur, P.; Labatie, A.; Clerc, N.; Martínez, V. J.; Starck, J.-L.; Lachièze-Rey, M.; Saar, E.; Paredes, S.

    2012-06-01

Context. Baryon acoustic oscillations (BAO) are imprinted in the density field by acoustic waves travelling in the plasma of the early universe. Their fixed scale can be used as a standard ruler to study the geometry of the universe. Aims: The BAO have been previously detected using correlation functions and power spectra of the galaxy distribution. We present a new method to detect the real-space structures associated with BAO. These baryon acoustic structures are spherical shells of relatively small density contrast, surrounding high density central regions. Methods: We design a specific wavelet adapted to search for shells, and exploit the physics of the process by making use of two different mass tracers, introducing a specific statistic to detect the BAO features. We show the effect of the BAO signal in this new statistic when applied to the Λ cold dark matter (ΛCDM) model, using an analytical approximation to the transfer function. We confirm the reliability and stability of our method by using cosmological N-body simulations from the MareNostrum Institut de Ciències de l'Espai (MICE). Results: We apply our method to the detection of BAO in a galaxy sample drawn from the Sloan Digital Sky Survey (SDSS). We use the "main" catalogue to trace the shells, and the luminous red galaxies (LRG) as tracers of the high density central regions. Using this new method, we detect, with a high significance, that the LRG in our sample are preferentially located close to the centres of shell-like structures in the density field, with characteristics similar to those expected from BAO. We show that stacking selected shells, we can find their characteristic density profile. Conclusions: We delineate a new feature of the cosmic web, the BAO shells. As these are real spatial structures, the BAO phenomenon can be studied in detail by examining those shells. Full Table 1 is only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/542/A34

  7. Extending the LWS Data Environment: Distributed Data Processing and Analysis

    NASA Technical Reports Server (NTRS)

    Narock, Thomas

    2005-01-01

The final stages of this work saw changes to the original framework, as well as the completion and integration of several data processing services. Initially, it was thought that a peer-to-peer architecture was necessary to make this work possible. The peer-to-peer architecture provided many benefits, including the dynamic discovery of new services that would be continually added. A prototype example was built and, while it showed promise, a major disadvantage was seen in that it was not easily integrated into the existing data environment. While the peer-to-peer system worked well for finding and accessing distributed data processing services, it was found that its use was limited by the difficulty in calling it from existing tools and services. After collaborations with members of the data community, it was determined that our data processing system was of high value and that a new interface should be pursued in order for the community to take full advantage of it. As such, the framework was modified from a peer-to-peer architecture to a more traditional web service approach. Following this change, multiple data processing services were added, including coordinate transformations and subsetting of data. A collaboration with the Virtual Heliospheric Observatory (VHO) helped integrate the new architecture into the VHO; this allows anyone using the VHO to search for data and then pass those data through our processing services prior to downloading. As a second attempt at demonstrating the new system, a collaboration was established with the Collaborative Sun Earth Connector (CoSEC) group at Lockheed Martin. This group is working on a graphical user interface to the Virtual Observatories and data processing software. The intent is to provide a high-level, easy-to-use graphical interface that will allow access to the existing Virtual Observatories and data processing services from one convenient application. Working with the CoSEC group, we provided access to our data processing tools from within their software. This now allows the CoSEC community to take advantage of our services and also demonstrates another means of accessing our system.

  8. Biomechanical analysis of force distribution in human finger extensor mechanisms.

    PubMed

    Hu, Dan; Ren, Lei; Howard, David; Zong, Changfu

    2014-01-01

    The complexities of the function and structure of human fingers have long been recognised. The in vivo forces in the human finger tendon network during different activities are critical information for clinical diagnosis, surgical treatment, prosthetic finger design, and biomimetic hand development. In this study, we propose a novel method for in vivo force estimation for the finger tendon network by combining a three-dimensional motion analysis technique and a novel biomechanical tendon network model. The extensor mechanism of a human index finger is represented by an interconnected tendinous network moving around the phalanx's dorsum. A novel analytical approach based on the "Principle of Minimum Total Potential Energy" is used to calculate the forces and deformations throughout the tendon network of the extensor mechanism when subjected to an external load and with the finger posture defined by measurement data. The predicted deformations and forces in the tendon network are in broad agreement with the results obtained by previous experimental in vitro studies. The proposed methodology provides a promising tool for investigating the biomechanical function of complex interconnected tendon networks in vivo. PMID:25126576

  9. Distributional benefit analysis of a national air quality rule.

    PubMed

    Post, Ellen S; Belova, Anna; Huang, Jin

    2011-06-01

    Under Executive Order 12898, the U.S. Environmental Protection Agency (EPA) must perform environmental justice (EJ) reviews of its rules and regulations. EJ analyses address the hypothesis that environmental disamenities are experienced disproportionately by poor and/or minority subgroups. Such analyses typically use communities as the unit of analysis. While community-based approaches make sense when considering where polluting sources locate, they are less appropriate for national air quality rules affecting many sources and pollutants that can travel thousands of miles. We compare exposures and health risks of EJ-identified individuals rather than communities to analyze EPA's Heavy Duty Diesel (HDD) rule as an example national air quality rule. Air pollutant exposures are estimated within grid cells by air quality models; all individuals in the same grid cell are assigned the same exposure. Using an inequality index, we find that inequality within racial/ethnic subgroups far outweighs inequality between them. We find, moreover, that the HDD rule leaves between-subgroup inequality essentially unchanged. Changes in health risks depend also on subgroups' baseline incidence rates, which differ across subgroups. Thus, health risk reductions may not follow the same pattern as reductions in exposure. These results are likely representative of other national air quality rules as well. PMID:21776207
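
    The abstract's inequality index is not restated here; as one common choice for this kind of comparison, a sketch of the Atkinson index applied to hypothetical exposure values follows.

      import numpy as np

      def atkinson(y, eps=0.75):
          """Atkinson inequality index for positive values y and aversion parameter eps."""
          y = np.asarray(y, dtype=float)
          mu = y.mean()
          if eps == 1.0:
              return 1.0 - np.exp(np.mean(np.log(y))) / mu    # geometric-mean form
          return 1.0 - (np.mean((y / mu) ** (1.0 - eps))) ** (1.0 / (1.0 - eps))

      exposures = np.array([1.2, 0.8, 0.9, 4.5, 1.1, 0.7])    # hypothetical subgroup exposures
      print(f"Atkinson index: {atkinson(exposures):.3f}")     # 0 = perfect equality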

  10. Monte Carlo models and analysis of galactic disk gamma-ray burst distributions

    NASA Technical Reports Server (NTRS)

    Hakkila, Jon

    1989-01-01

Gamma-ray bursts are transient astronomical phenomena which have no quiescent counterparts in any region of the electromagnetic spectrum. Although temporal and spectral properties indicate that these events are likely energetic, their unknown spatial distribution complicates astrophysical interpretation. Monte Carlo samples of gamma-ray burst sources are created which belong to Galactic disk populations. Spatial analysis techniques are used to compare these samples to the observed distribution. From this, both quantitative and qualitative conclusions are drawn concerning allowed luminosity and spatial distributions of the actual sample. Although the Burst and Transient Source Experiment (BATSE) on the Gamma Ray Observatory (GRO) will significantly improve knowledge of the gamma-ray burst source spatial characteristics within only a few months of launch, the analysis techniques described herein will not be superseded. Rather, they may be used with BATSE results to obtain detailed information about both the luminosity and spatial distributions of the sources.
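
    A minimal sketch of the Monte Carlo sampling step for an exponential-disk population of standard candles; the scale lengths and luminosity are assumptions for illustration, not the study's fitted values.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 10_000
      R = rng.exponential(scale=4.0, size=n)                # galactocentric radius (kpc)
      phi = rng.uniform(0, 2 * np.pi, n)
      z = rng.exponential(0.3, n) * rng.choice([-1, 1], n)  # disk thickness (kpc)

      R_sun = 8.5                                           # Sun's galactocentric distance (kpc)
      d2 = (R * np.cos(phi) - R_sun) ** 2 + (R * np.sin(phi)) ** 2 + z ** 2
      flux = 1.0 / d2                                       # standard candle, arbitrary units
      # The log N - log S curve of this sample can then be compared with the observed one.
      print("fraction brighter than 10x the median flux:", np.mean(flux > 10 * np.median(flux)))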

  11. Advancing Collaborative Climate Studies through Globally Distributed Geospatial Analysis

    NASA Astrophysics Data System (ADS)

    Singh, R.; Percivall, G.

    2009-12-01

(note: acronym glossary at end of abstract) For scientists to have confidence in the veracity of data sets and computational processes not under their control, operational transparency must be much greater than previously required. Being able to have a universally understood and machine-readable language for describing such things as the completeness of metadata, data provenance and uncertainty, and the discrete computational steps in a complex process takes on increased importance. OGC has been involved with technological issues associated with climate change since 2005 when we, along with the IEEE Committee on Earth Observation, began a close working relationship with GEO and GEOSS (http://earthobservations.org). GEO/GEOSS provide the technology platform to GCOS, which in turn represents the earth observation community to UNFCCC. OGC and IEEE are the organizers of the GEO/GEOSS Architecture Implementation Pilot (see http://www.ogcnetwork.net/AIpilot). This continuing work involves closely working with GOOS (Global Ocean Observing System) and WMO (World Meteorological Organization). This session reports on the findings of recent work within the OGC’s community of software developers and users to apply geospatial web services to the climate studies domain. The value of this work is to evolve OGC web services, moving from data access and query to geo-processing and workflows. Two projects will be described, the GEOSS AIP-2 and the CCIP. AIP is a task of the GEOSS Architecture and Data Committee. During its duration, two GEO Tasks defined the project: AIP-2 began as GEO Task AR-07-02, to lead the incorporation of contributed components consistent with the GEOSS Architecture using a GEO Web Portal and a Clearinghouse search facility to access services through GEOSS Interoperability Arrangements in support of the GEOSS Societal Benefit Areas. AIP-2 concluded as GEO Task AR-09-01b, to develop and pilot new process and infrastructure components for the GEOSS Common Infrastructure and the broader GEOSS architecture. Of specific interest to this session is the work on geospatial workflows and geo-processing and data discovery and access. CCIP demonstrates standards-based interoperability between geospatial applications in the service of Climate Change analysis. CCIP is planned to be a yearly exercise. It consists of a network of online data services (WCS, WFS, SOS), analysis services (WPS, WCPS, WMS), and clients that exercise those services. In 2009, CCIP focuses on Australia, and the initial application of existing OGC services to climate studies. The results of the 2009 CCIP will serve as requirements for more complex geo-processing services to be developed for CCIP 2010. The benefits of CCIP include accelerating the implementation of the GCOS, and building confidence that implementations using multi-vendor interoperable technologies can help resolve vexing climate change questions. AIP-2: Architecture Implementation Pilot, Phase 2 CCIP: Climate Challenge Integration Plugfest GEO: Group on Earth Observations GEOSS: Global Earth Observing System of Systems GCOS: Global Climate Observing System OGC: Open Geospatial Consortium SOS: Sensor Observation Service WCS: Web Coverage Service WCPS: Web Coverage Processing Service WFS: Web Feature Service WMS: Web Mapping Service

  12. Apportionment of multiple aerosol size distributions modes using factor analysis techniques

    SciTech Connect

    Williams, A.

    1990-12-31

This research in progress is concerned with developing the capability to objectively partition aerosol size distribution data into a small number of modes that together explain most of the variation of the observed data. It is desired to determine, from in-field analysis of the optical spectrometer data, when a particular mode is present, and to collect filter samples to determine the chemical composition of that mode. The results would relate aerosol size distributions to chemical composition.

  14. Comparison of photon correlation spectroscopy with photosedimentation analysis for the determination of aqueous colloid size distributions

    USGS Publications Warehouse

    Rees, T.F.

    1990-01-01

    Photon correlation spectroscopy (PCS) utilizes the Doppler frequency shift of photons scattered off particles undergoing Brownian motion to determine the size of colloids suspended in water. Photosedimentation analysis (PSA) measures the time-dependent change in optical density of a suspension of colloidal particles undergoing centrifugation. A description of both techniques, important underlying assumptions, and limitations are given. Results for a series of river water samples show that the colloid-size distribution means are statistically identical as determined by both techniques. This also is true of the mass median diameter (MMD), even though MMD values determined by PSA are consistently smaller than those determined by PCS. Because of this small negative bias, the skew parameters for the distributions are generally smaller for the PCS-determined distributions than for the PSA-determined distributions. Smaller polydispersity indices for the distributions are also determined by PCS. -from Author
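
    PCS sizing rests on the Stokes-Einstein relation d = kT / (3*pi*eta*D); a short worked computation with an assumed diffusion coefficient follows.

      import numpy as np

      k_B = 1.380649e-23          # Boltzmann constant (J/K)
      T = 298.15                  # temperature (K)
      eta = 0.89e-3               # viscosity of water at 25 C (Pa*s)
      D = 4.0e-12                 # measured diffusion coefficient (m^2/s), assumed value

      d = k_B * T / (3 * np.pi * eta * D)
      print(f"hydrodynamic diameter ~ {d * 1e9:.0f} nm")   # about 123 nm here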

  15. Evaluating Domestic Hot Water Distribution System Options with Validated Analysis Models

    SciTech Connect

    Weitzel, E.; Hoeschele, E.

    2014-09-01

A developing body of work collects data on domestic hot water consumption, water use behaviors, and the energy efficiency of various distribution systems. A full distribution system model was developed in the Transient System Simulation Tool (TRNSYS), validated using field monitoring data, and then exercised in a number of climates to understand climate impact on performance. In this study, the Building America team built upon previous analysis modeling work to evaluate differing distribution systems and the sensitivities of water heating energy and water use efficiency to variations of climate, load, distribution type, insulation and compact plumbing practices. Overall, 124 different TRNSYS models were simulated. The results of this work are useful in informing future development of water heating best practices guides as well as more accurate (and simulation time efficient) distribution models for annual whole house simulation programs.

  16. Powerlaw: a Python package for analysis of heavy-tailed distributions.

    PubMed

    Alstott, Jeff; Bullmore, Ed; Plenz, Dietmar

    2014-01-01

    Power laws are theoretically interesting probability distributions that are also frequently used to describe empirical data. In recent years, effective statistical methods for fitting power laws have been developed, but appropriate use of these techniques requires significant programming and statistical insight. In order to greatly decrease the barriers to using good statistical methods for fitting power law distributions, we developed the powerlaw Python package. This software package provides easy commands for basic fitting and statistical analysis of distributions. Notably, it also seeks to support a variety of user needs by being exhaustive in the options available to the user. The source code is publicly available and easily extensible. PMID:24489671
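
    A usage sketch of the package's documented interface on synthetic data (assuming pip install powerlaw):

      import numpy as np
      import powerlaw

      data = np.random.default_rng(4).pareto(2.5, 10_000) + 1    # synthetic heavy-tailed sample
      fit = powerlaw.Fit(data)
      print(fit.power_law.alpha, fit.xmin)                       # fitted exponent and cutoff
      R, p = fit.distribution_compare('power_law', 'lognormal')  # likelihood-ratio comparison
      print(R, p)                                                # R > 0 favors the power law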

  17. Analysis of the 3D distribution of stacked self-assembled quantum dots by electron tomography

    PubMed Central

    2012-01-01

    The 3D distribution of self-assembled stacked quantum dots (QDs) is a key parameter to obtain the highest performance in a variety of optoelectronic devices. In this work, we have measured this distribution in 3D using a combined procedure of needle-shaped specimen preparation and electron tomography. We show that conventional 2D measurements of the distribution of QDs are not reliable, and only 3D analysis allows an accurate correlation between the growth design and the structural characteristics. PMID:23249477

  18. Hydraulic model analysis of water distribution system, Rockwell International, Rocky Flats, Colorado

    SciTech Connect

    Perstein, J.; Castellano, J.A.

    1989-01-20

    Rockwell International requested an analysis of the existing plant site water supply distribution system at Rocky Flats, Colorado, to determine its adequacy. On September 26--29, 1988, Hughes Associates, Inc., Fire Protection Engineers, accompanied by Rocky Flats Fire Department engineers and suppression personnel, conducted water flow tests at the Rocky Flats plant site. Thirty-seven flows from various points throughout the plant site were taken on the existing domestic supply/fire main installation to assure comprehensive and thorough representation of the Rocky Flats water distribution system capability. The analysis was completed in four phases which are described, together with a summary of general conclusions and recommendations.

  19. The Analysis of the Strength, Distribution and Direction for the EEG Phase Synchronization by Musical Stimulus

    NASA Astrophysics Data System (ADS)

    Ogawa, Yutaro; Ikeda, Akira; Kotani, Kiyoshi; Jimbo, Yasuhiko

    In this study, we propose the EEG phase synchronization analysis including not only the average strength of the synchronization but also the distribution and directions under the conditions that evoked emotion by musical stimuli. The experiment is performed with the two different musical stimuli that evoke happiness or sadness for 150 seconds. It is found that the average strength of synchronization indicates no difference between the right side and the left side of the frontal lobe during the happy stimulus, the distribution and directions indicate significant differences. Therefore, proposed analysis is useful for detecting emotional condition because it provides information that cannot be obtained only by the average strength of synchronization.

  20. Empirical significance values for linkage analysis: trait simulation using posterior model distributions from MCMC oligogenic segregation analysis.

    PubMed

    Igo, Robert P; Wijsman, Ellen M

    2008-02-01

    Variance-components (VC) linkage analysis is a powerful model-free method for assessing linkage, but the distribution of VC logarithm of the odds ratio (LOD) scores may deviate substantially from the assumed asymptotic distribution. Typically, the null distribution of the VC-LOD score and other linkage statistics has been estimated by generating new genotype data independently of the trait data, and computing a linkage statistic for many such marker-simulated data sets. However, marker simulation is susceptible to errors in the assumed marker and map model and is computationally intensive. Here, we describe a method for generating posterior distributions of linkage statistics through simulation of trait data based on the original sample and on results from an initial scan using a Bayesian Markov-chain Monte Carlo (MCMC) approach for oligogenic segregation analysis. We use samples of oligogenic trait models taken from the posterior distribution to generate new samples of trait data, which were paired with the original marker data for analysis. Empirical P-values obtained from trait and marker simulation were similar when derived for several strong linkage signals from published linkage scans, and for analysis of data with a known, simulated, trait model. Furthermore, trait simulation produces the expected null distribution of VC-LOD scores and is computationally fast when marker identity-by-descent estimates from the original data could be reused. These results suggest that trait simulation gives valid estimates of statistical significance of linkage signals. Finally, these results also demonstrate the feasibility of obtaining empirical significance levels for evaluating Bayesian oligogenic linkage signals with either marker or trait simulation. PMID:17849492
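
    Both simulation schemes share the generic empirical-significance recipe sketched below; the add-one convention and the toy null LOD scores are illustrative assumptions, not the authors' exact procedure.

      import numpy as np

      def empirical_p(observed, simulated):
          """Add-one empirical p-value; avoids reporting p = 0 at finite N."""
          simulated = np.asarray(simulated)
          return (1 + np.sum(simulated >= observed)) / (1 + simulated.size)

      rng = np.random.default_rng(5)
      null_lods = rng.chisquare(1, size=5000) / (2 * np.log(10))  # toy null LOD scores
      print(empirical_p(3.0, null_lods))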

  1. New approach on analysis of pathologic cardiac murmurs based on WPD energy distribution.

    PubMed

    Jiang, Zhongwei; Tao, Ting; Wang, Haibin

    2014-01-01

In this paper, an approach to the analysis of pathologic cardiac murmurs associated with congenital heart defects is proposed, based on the wavelet packet (WP) technique. Considering the difference in the energy intensity distributions of innocent and pathologic murmurs in the frequency domain, the WP decomposition was introduced and the WP energies at each frequency band were calculated and compared. Based on the analysis of a large amount of clinical heart sound data, the murmur energy distributions were divided into five frequency bands, and relative evaluation indexes for cardiac murmurs (ICM) were proposed for analysis of the pathologic murmurs. Finally, the threshold values between the innocent and pathologic cardiac murmurs were determined based on the statistical results of the normal heart sounds. The analysis results validate the proposed evaluation indexes and the corresponding thresholds. PMID:25516123
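
    A sketch of the WP band-energy computation using PyWavelets; the wavelet choice, decomposition depth, and the paper's five-band grouping and ICM thresholds are not reproduced here, so treat all settings as assumptions.

      import numpy as np
      import pywt

      rng = np.random.default_rng(6)
      heart_sound = rng.normal(size=4096)                   # stand-in for a PCG recording

      wp = pywt.WaveletPacket(data=heart_sound, wavelet='db6', mode='symmetric', maxlevel=5)
      nodes = wp.get_level(5, order='freq')                 # 32 bands, low to high frequency
      energies = np.array([np.sum(np.square(n.data)) for n in nodes])
      rel_energy = energies / energies.sum()                # compare innocent vs pathologic
      print(rel_energy[:5])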

  2. Strategic Sequencing for State Distributed PV Policies: A Quantitative Analysis of Policy Impacts and Interactions

    SciTech Connect

    Doris, E.; Krasko, V.A.

    2012-10-01

    State and local policymakers show increasing interest in spurring the development of customer-sited distributed generation (DG), in particular solar photovoltaic (PV) markets. Prompted by that interest, this analysis examines the use of state policy as a tool to support the development of a robust private investment market. This analysis builds on previous studies that focus on government subsidies to reduce installation costs of individual projects and provides an evaluation of the impacts of policies on stimulating private market development.

  3. Wave-height hazard analysis in Eastern Coast of Spain - Bayesian approach using generalized Pareto distribution

    NASA Astrophysics Data System (ADS)

    Egozcue, J. J.; Pawlowsky-Glahn, V.; Ortego, M. I.

    2005-03-01

Standard practice of wave-height hazard analysis often pays little attention to the uncertainty of assessed return periods and occurrence probabilities. This fact favors the opinion that, when large events happen, the hazard assessment should change accordingly. However, uncertainty of the hazard estimates is normally able to hide the effect of those large events. This is illustrated using data from the Mediterranean coast of Spain, where the last years have been extremely disastrous. Thus, it is possible to compare the hazard assessment based on data previous to those years with the analysis including them. With our approach, no significant change is detected when the statistical uncertainty is taken into account. The hazard analysis is carried out with a standard model. Time-occurrence of events is assumed Poisson distributed. The wave-height of each event is modelled as a random variable whose upper tail follows a Generalized Pareto Distribution (GPD). Moreover, wave-heights are assumed independent from event to event and also independent of their occurrence in time. A threshold for excesses is assessed empirically. The other three parameters (Poisson rate, shape and scale parameters of GPD) are jointly estimated using Bayes' theorem. The prior distribution accounts for physical features of ocean waves in the Mediterranean sea and experience with these phenomena. The posterior distribution of the parameters allows us to obtain posterior distributions of other derived parameters like occurrence probabilities and return periods. Predictives are also available. Computations are carried out using the program BGPE v2.0.
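
    A frequentist skeleton of the same peaks-over-threshold model (Poisson occurrences, GPD excesses) is sketched below, without the paper's Bayesian priors; the data, record length, and threshold choice are synthetic assumptions.

      import numpy as np
      from scipy.stats import genpareto

      rng = np.random.default_rng(7)
      waves = rng.weibull(1.5, 2000) * 2.0      # synthetic storm wave heights (m)
      u = np.quantile(waves, 0.95)              # empirical threshold for excesses
      excess = waves[waves > u] - u
      lam = excess.size / 30.0                  # exceedances per year over an assumed 30-yr record

      xi, _, sigma = genpareto.fit(excess, floc=0)    # shape and scale, location pinned at 0
      T = 100.0                                       # return period in years
      x_T = u + sigma / xi * ((lam * T) ** xi - 1)    # peaks-over-threshold return level
      print(f"{T:.0f}-year wave height ~ {x_T:.1f} m")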

  4. When the tail counts: the advantage of bilingualism through the ex-gaussian distribution analysis.

    PubMed

    Calabria, Marco; Hernández, Mireia; Martin, Clara D; Costa, Albert

    2011-01-01

Several studies have documented the advantage of bilingualism with respect to the development of the executive control (EC) system. Two effects of bilingualism have been described in conflict resolution tasks: (a) bilinguals tend to perform the tasks faster overall, and (b) bilinguals tend to experience less interference from conflicting information, compared to monolinguals. The precise way in which the bilingual advantage relies on different EC mechanisms is still not well understood. The goal of the present article is to further explore how bilingualism impacts the EC system by performing a new analysis (ex-Gaussian) of already reported data in which bilinguals and monolinguals performed a flanker task. Ex-Gaussian distribution analysis allows us to partial out the contribution of the normal and the exponential components of the RT distribution of the two groups. The fit of the raw data to the ex-Gaussian distribution showed two main results. First, we found that the bilingualism advantage in the overall speed of processing is captured by group differences in the normal (μ) and the exponential (τ) components of the distribution. Second, the bilingual advantage in the magnitude of the conflict effect is captured by group differences only in the exponential component. The results are discussed in terms of: (a) usefulness of the ex-Gaussian analysis as a tool to better describe the RT distribution, and (b) a new approach to explore the cognitive processes purportedly involved in instantiating the bilingualism advantage with respect to EC. PMID:22007182
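
    SciPy's exponnorm is one convenient way to fit an ex-Gaussian (its shape parameter K equals tau/sigma); the sketch below recovers mu, sigma, and tau from simulated RTs, not the study's data.

      import numpy as np
      from scipy.stats import exponnorm

      rng = np.random.default_rng(8)
      mu, sigma, tau = 0.45, 0.05, 0.15                    # seconds, assumed "true" values
      rts = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

      K, loc, scale = exponnorm.fit(rts)
      print(f"mu={loc:.3f}  sigma={scale:.3f}  tau={K * scale:.3f}")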

  5. Space positional and motion SRC effects: A comparison with the use of reaction time distribution analysis

    PubMed Central

    Styrkowiec, Piotr; Szczepanowski, Remigiusz

    2013-01-01

    The analysis of reaction time (RT) distributions has become a recognized standard in studies on the stimulus response correspondence (SRC) effect as it allows exploring how this effect changes as a function of response speed. In this study, we compared the spatial SRC effect (the classic Simon effect) with the motion SRC effect using RT distribution analysis. Four experiments were conducted, in which we manipulated factors of space position and motion for stimulus and response, in order to obtain a clear distinction between positional SRC and motion SRC. Results showed that these two types of SRC effects differ in their RT distribution functions as the space positional SRC effect showed a decreasing function, while the motion SRC showed an increasing function. This suggests that different types of codes underlie these two SRC effects. Potential mechanisms and processes are discussed. PMID:24605178

  6. Waveform Analysis for High-Quality Loop-Based Audio Distribution Stuart Cunningham

    E-print Network

    Davies, John N.

    ... using compression procedures, but the quality of the audio suffers, to the detriment of the reproduced audio and, thus, the listening experience. Though acceptable audio quality can be achieved using ...

  7. GenoMap: A Distributed System for Unifying Genotyping and Genetic Linkage Analysis

    E-print Network

    Casavant, Tom

    ... to be managed include: informative sets of polymorphic markers; databases of patient demographic and pedigree information (Todd E. Scheetz, Dept. of Electrical and Computer Engineering and Dept. of Genetics, University of Iowa).

  8. Analysis of energy disposal - Thermodynamic aspects of the entropy deficiency of a product state distribution

    NASA Technical Reports Server (NTRS)

    Levine, R. D.; Bernstein, R. B.

    1973-01-01

    A thermodynamic-like approach to the characterization of product state distributions is outlined. A moment analysis of the surprisal and the entropy deficiency is presented from a statistical mechanical viewpoint. The role of reactant state selection is discussed using the 'state function' property of the entropy.

  9. Systematic analysis of transverse momentum distribution and non-extensive thermodynamics theory

    SciTech Connect

    Sena, I.; Deppman, A.

    2013-03-25

    A systematic analysis of transverse momentum distribution of hadrons produced in ultrarelativistic p+p and A+A collisions is presented. We investigate the effective temperature and the entropic parameter from the non-extensive thermodynamic theory of strong interaction. We conclude that the existence of a limiting effective temperature and of a limiting entropic parameter is in accordance with experimental data.

  10. PV Interconnection Risk Analysis through Distribution System Impact Signatures and Feeder Zones

    E-print Network

    ... reliability problems. In order to improve the interconnection study process, the use of feeder zones and PV impact signatures is proposed, taking into account the location and size of PV. PV impact signatures, hosting capacity, and feeder risk zones are demonstrated ...

  11. Nonsmooth analysis and sonar-based implementation of distributed coordination algorithms

    E-print Network

    Brennan, Sean

    ... mobile robots equipped with sonar. We develop novel approaches for improving single-point sonar scans. A platform equipped with rotating sonar sensors is used. Successful implementation of the algorithms is largely ...

  12. A Review of Integrated Analysis of Production-Distribution Systems Ana Maria Sarmiento

    E-print Network

    Nagi, Rakesh

    ... the relevance of logistics costs in the analysis, since we are interested in the following questions: (i) How have logistics ... Keywords: Production, Distribution, Inventory, Routing, Logistics.

  13. Exploratory Data Analysis to Identify Factors Influencing Spatial Distributions of Weed Seed Banks

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Comparing distributions of different species in multiple fields will help us understand the spatial dynamics of weed seed banks, but analyzing observational data requires non-traditional statistical methods. We used classification and regression tree analysis (CART) to investigate factors that influ...

  14. Data Reduction for the Scalable Automated Analysis of Distributed Darknet Traffic

    E-print Network

    Jahanian, Farnam

    ... address blocks (or darknets) with forensic honeypots (or honeyfarms). In this paper we examine such hybrid systems. We show that individual darknets are dominated by a small number of sources repeating ...

  15. An investigation on the intra-sample distribution of cotton color by using image analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The colorimeter principle is widely used to measure cotton color. This method provides the sample's color grade; but the result does not include information about the color distribution and any variation within the sample. We conducted an investigation that used an image analysis method to study the ...

  16. DataHub knowledge based assistance for science visualization and analysis using large distributed databases

    NASA Technical Reports Server (NTRS)

    Handley, Thomas H., Jr.; Collins, Donald J.; Doyle, Richard J.; Jacobson, Allan S.

    1991-01-01

    Viewgraphs on DataHub knowledge based assistance for science visualization and analysis using large distributed databases. Topics covered include: DataHub functional architecture; data representation; logical access methods; preliminary software architecture; LinkWinds; data knowledge issues; expert systems; and data management.

  17. Three-Dimensional Structure of Nanocomposites from Atomic Pair Distribution Function Analysis: Study of Polyaniline and

    E-print Network

    Trikalitis, Pantelis N.

    ... on the basis of an 84-atom orthorhombic unit cell. The nanocomposite (PANI)0.5V2O5·1.0H2O too is locally well ordered. Polymeric nanocomposites have attracted much attention because of their unique and novel properties ...

  18. Residence Time Distribution Measurement and Analysis of Pilot-Scale Pretreatment Reactors for Biofuels Production: Preprint

    SciTech Connect

    Sievers, D.; Kuhn, E.; Tucker, M.; Stickel, J.; Wolfrum, E.

    2013-06-01

    Measurement and analysis of residence time distribution (RTD) data is the focus of this study where data collection methods were developed specifically for the pretreatment reactor environment. Augmented physical sampling and automated online detection methods were developed and applied. Both the measurement techniques themselves and the produced RTD data are presented and discussed.

  19. Postmortem Analysis of Neuron Distributions in the Locus Coeruleus of Alcoholics and Suicidal Victims

    E-print Network

    ... of the human locus coeruleus (LC) in four groups of subjects: controls, suicidal non-alcoholics, non-suicidal alcoholics, and suicidal alcoholics. The data consist of postmortem neuron measurements, counts, and spatial ...

  20. Nonlocal approach to the analysis of the stress distribution in granular systems. II. Application to experiment

    E-print Network

    Kenkre, V.M.

    A theory of stress propagation in granular materials is applied to the compaction of ceramic and metal powders in pipes, with previously unexplained experimental features ...

  1. Distributed Sensor Analysis for Fault Detection in Tightly-Coupled Multi-Robot Team Tasks

    E-print Network

    Parker, Lynne E.

    Xingyan Li and Lynne E. Parker, Proc. of IEEE International Conference on Robotics and Automation, Kobe, Japan. ... multi-robot team tasks. While the centralized version of SAFDetection was shown to be successful ...

  2. The distribution of first-passage times and durations in FOREX and future markets

    E-print Network

    Sazuka, Naoya; Scalas, Enrico

    2008-01-01

    Possible distributions are discussed for intertrade durations and first-passage processes in financial markets. The viewpoint of renewal theory is assumed. In order to represent market data with relatively long durations, two types of distributions are used, namely, a distribution derived from the so-called Mittag-Leffler survival function and the Weibull distribution. For the Mittag-Leffler type distribution, the average waiting time (residual life time) is strongly dependent on the choice of a cut-off parameter t_max, whereas the results based on the Weibull distribution do not depend on such a cut-off. Therefore, a Weibull distribution is more convenient than a Mittag-Leffler type one if one wishes to evaluate relevant statistics such as average waiting time in financial markets with long durations. On the other hand, we find that the Gini index is rather independent of the cut-off parameter. Based on the above considerations, we propose a good candidate for describing the distribution of first-passage time ...
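
    A minimal sketch of the practical point made above: under a Weibull model the mean waiting time is finite and needs no cut-off t_max; the shape and scale values below are illustrative, not fitted to market data.

        # Mean intertrade duration under a Weibull model: scale * Gamma(1 + 1/shape).
        from math import gamma

        k, lam = 0.55, 12.0  # Weibull shape and scale in seconds (assumed)
        mean_duration = lam * gamma(1.0 + 1.0 / k)
        print(f"mean waiting time: {mean_duration:.1f} s")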

  3. High-resolution, submicron particle size distribution analysis using gravitational-sweep sedimentation.

    PubMed Central

    Mächtle, W

    1999-01-01

    Sedimentation velocity is a powerful tool for the analysis of complex solutions of macromolecules. However, sample turbidity imposes an upper limit to the size of molecular complexes currently amenable to such analysis. Furthermore, the breadth of the particle size distribution, combined with possible variations in the density of different particles, makes it difficult to analyze extremely complex mixtures. These same problems are faced in the polymer industry, where dispersions of latices, pigments, lacquers, and emulsions must be characterized. There is a rich history of methods developed for the polymer industry finding use in the biochemical sciences. Two such methods are presented. These use analytical ultracentrifugation to determine the density and size distributions for submicron-sized particles. Both methods rely on Stokes' equations to estimate particle size and density, whereas turbidity, corrected using Mie's theory, provides the concentration measurement. The first method uses the sedimentation time in dispersion media of different densities to evaluate the particle density and size distribution. This method works provided the sample is chemically homogeneous. The second method splices together data gathered at different sample concentrations, thus permitting the high-resolution determination of the size distribution of particle diameters ranging from 10 to 3000 nm. By increasing the rotor speed exponentially from 0 to 40,000 rpm over a 1-h period, size distributions may be measured for extremely broadly distributed dispersions. Presented here is a short history of particle size distribution analysis using the ultracentrifuge, along with a description of the newest experimental methods. Several applications of the methods are provided that demonstrate the breadth of its utility, including extensions to samples containing nonspherical and chromophoric particles. PMID:9916040

  4. Spatial sensitivity analysis of snow cover data in a distributed rainfall-runoff model

    NASA Astrophysics Data System (ADS)

    Berezowski, T.; Nossent, J.; Chormański, J.; Batelaan, O.

    2015-04-01

    As the availability of spatially distributed data sets for distributed rainfall-runoff modelling is strongly increasing, more attention should be paid to the influence of the quality of the data on the calibration. While a lot of progress has been made on using distributed data in simulations of hydrological models, sensitivity of spatial data with respect to model results is not well understood. In this paper we develop a spatial sensitivity analysis method for spatial input data (snow cover fraction - SCF) for a distributed rainfall-runoff model to investigate when the model is differently subjected to SCF uncertainty in different zones of the model. The analysis was focussed on the relation between the SCF sensitivity and the physical and spatial parameters and processes of a distributed rainfall-runoff model. The methodology is tested for the Biebrza River catchment, Poland, for which a distributed WetSpa model is set up to simulate 2 years of daily runoff. The sensitivity analysis uses the Latin-Hypercube One-factor-At-a-Time (LH-OAT) algorithm, which employs different response functions for each spatial parameter representing a 4 × 4 km snow zone. The results show that the spatial patterns of sensitivity can be easily interpreted by co-occurrence of different environmental factors such as geomorphology, soil texture, land use, precipitation and temperature. Moreover, the spatial pattern of sensitivity under different response functions is related to different spatial parameters and physical processes. The results clearly show that the LH-OAT algorithm is suitable for our spatial sensitivity analysis approach and that the SCF is spatially sensitive in the WetSpa model. The developed method can be easily applied to other models and other spatial data.
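
    A toy sketch of the LH-OAT idea under stated assumptions: Latin-hypercube base points, each perturbed one factor at a time to score the relative effect of every parameter. The three-parameter toy function stands in for the WetSpa response; nothing here reproduces the paper's snow zones or data.

        # LH-OAT sensitivity sketch: Latin-hypercube sampling plus
        # One-factor-At-a-Time perturbations around each base point.
        import numpy as np
        from scipy.stats import qmc

        def model(x):  # placeholder response function (assumed)
            return x[0] ** 2 + 3 * x[1] + 0.1 * x[2]

        n_params, n_base, frac = 3, 20, 0.05
        base = qmc.LatinHypercube(d=n_params, seed=1).random(n_base)

        effects = np.zeros(n_params)
        for x in base:
            y0 = model(x)
            for j in range(n_params):
                xp = x.copy()
                xp[j] *= 1.0 + frac  # perturb parameter j only
                effects[j] += abs((model(xp) - y0) / (y0 + 1e-12))

        print("relative sensitivities:", effects / n_base)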

  5. Can Data Recognize Its Parent Distribution?

    SciTech Connect

    A. W. Marshall; J. C. Meza; I. Olkin

    1999-05-01

    This study is concerned with model selection of lifetime and survival distributions arising in engineering reliability or in the medical sciences. We compare various distributions, including the gamma, Weibull and lognormal, with a new distribution called geometric extreme exponential. Except for the lognormal distribution, the other three distributions all have the exponential distribution as a special case. A Monte Carlo simulation was performed to determine sample sizes for which survival distributions can distinguish data generated by their own families. Two methods of decision are considered: maximum likelihood and Kolmogorov distance. Neither method is uniformly best. The probability of correct selection with more than one alternative shows some surprising results when the choices are close to the exponential distribution.
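
    A minimal sketch of the two decision criteria named above, applied to simulated lifetimes; the geometric extreme exponential is not available in SciPy, so only gamma, Weibull, and lognormal candidates are compared here.

        # Compare candidate lifetime distributions by log-likelihood
        # (maximum likelihood criterion) and by Kolmogorov distance.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        data = stats.weibull_min.rvs(1.5, scale=10.0, size=200, random_state=rng)

        candidates = {"gamma": stats.gamma,
                      "weibull": stats.weibull_min,
                      "lognormal": stats.lognorm}

        for name, dist in candidates.items():
            params = dist.fit(data, floc=0)  # location pinned at zero
            loglik = np.sum(dist.logpdf(data, *params))
            ks = stats.kstest(data, dist.cdf, args=params).statistic
            print(f"{name:9s} log-likelihood = {loglik:8.1f}  KS distance = {ks:.3f}")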

  6. Recurrence time distribution and temporal clustering properties of a cellular automaton modelling landslide events

    NASA Astrophysics Data System (ADS)

    Piegari, E.; Di Maio, R.; Avella, A.

    2013-12-01

    Reasonable prediction of landslide occurrences in a given area requires the choice of an appropriate probability distribution of recurrence time intervals. Although landslides are widespread and frequent in many parts of the world, complete databases of landslide occurrences over long periods are missing, and such natural disasters are often treated as processes uncorrelated in time and, therefore, Poisson distributed. In this paper, we examine the recurrence time statistics of landslide events simulated by a cellular automaton model that reproduces well the actual frequency-size statistics of landslide catalogues. The complex time series are analysed by varying both the threshold above which the time between events is recorded and the values of the key model parameters. The synthetic recurrence time probability distribution is shown to be strongly dependent on the rate at which instability is approached, providing a smooth crossover from a power-law regime to a Weibull regime. Moreover, a Fano factor analysis shows a clear indication of different degrees of correlation in landslide time series. Such a finding supports, at least in part, a recent first-time analysis of a historical landslide time series spanning fifty years.

  7. Fractal analysis of the dark matter and gas distributions in the Mare-Nostrum universe

    SciTech Connect

    Gaite, José

    2010-03-01

    We develop a method of multifractal analysis of N-body cosmological simulations that improves on the customary counts-in-cells method by taking special care of the effects of discreteness and large scale homogeneity. The analysis of the Mare-Nostrum simulation with our method provides strong evidence of self-similar multifractal distributions of dark matter and gas, with a halo mass function that is of Press-Schechter type but has a power-law exponent -2, as corresponds to a multifractal. Furthermore, our analysis shows that the dark matter and gas distributions are indistinguishable as multifractals. To determine if there is any gas biasing, we calculate the cross-correlation coefficient, with negative but inconclusive results. Hence, we develop an effective Bayesian analysis connected with information theory, which clearly demonstrates that the gas is biased in a long range of scales, up to the scale of homogeneity. However, entropic measures related to the Bayesian analysis show that this gas bias is small (in a precise sense) and is such that the fractal singularities of both distributions coincide and are identical. We conclude that this common multifractal cosmic web structure is determined by the dynamics and is independent of the initial conditions.

  8. URANIUM METAL POWDER PRODUCTION, PARTICLE DISTRIBUTION ANALYSIS, AND REACTION RATE STUDIES OF A HYDRIDE-DEHYDRIDE PROCESS 

    E-print Network

    Sames, William

    2011-08-08

    Work was done to study a hydride-dehydride method for producing uranium metal powder. Particle distribution analysis was conducted using digital microscopy and grayscale image analysis software. The particle size was found to be predominantly...

  9. A Weibull model to describe antimicrobial kinetics of oregano and lemongrass essential oils against Salmonella Enteritidis in ground beef during refrigerated storage.

    PubMed

    de Oliveira, Thales Leandro Coutinho; Soares, Rodrigo de Araújo; Piccoli, Roberta Hilsdorf

    2013-03-01

    The antimicrobial effect of oregano (Origanum vulgare L.) and lemongrass (Cymbopogon citratus (DC.) Stapf.) essential oils (EOs) against Salmonella enterica serotype Enteritidis was evaluated in vitro and in inoculated ground bovine meat during refrigerated storage (4±2 °C) for 6 days. The Weibull model was tested for fitting bacterial survival/inactivation curves (estimating the p and δ parameters). The minimum inhibitory concentration (MIC) of both EOs against S. Enteritidis was 3.90 μl/ml. The EO concentrations applied in the ground beef were 3.90, 7.80 and 15.60 μl/g, based on MIC levels and possible activity reduction by food constituents. Both EOs at all tested levels showed antimicrobial effects, with microbial populations decreasing (p ≤ 0.05) over storage time. Based on fit-quality parameters (RSS and RSE), Weibull models are able to describe the inactivation curves of EOs against S. Enteritidis. The application of EOs in processed meats can help control pathogens during refrigerated shelf life. PMID:23273476
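
    A minimal sketch of fitting the Weibull inactivation model, log10(N/N0) = -(t/delta)^p, to survival data; the log-reduction values below are invented, not the paper's measurements.

        # Fit the Weibull survival model to (invented) log-reduction data.
        import numpy as np
        from scipy.optimize import curve_fit

        days = np.array([0, 1, 2, 3, 4, 5, 6], dtype=float)
        log_reduction = np.array([0.0, -0.4, -0.9, -1.2, -1.6, -1.9, -2.1])

        def weibull_model(t, delta, p):
            return -(t / delta) ** p

        (delta, p), _ = curve_fit(weibull_model, days, log_reduction,
                                  p0=(2.0, 1.0), bounds=([0.1, 0.1], [20.0, 5.0]))
        print(f"delta = {delta:.2f} d (time to first decimal reduction), p = {p:.2f}")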

  10. Spatial sensitivity analysis of snow cover data in a distributed rainfall-runoff model

    NASA Astrophysics Data System (ADS)

    Berezowski, T.; Nossent, J.; Chormański, J.; Batelaan, O.

    2014-10-01

    As the availability of spatially distributed data sets for distributed rainfall-runoff modelling is strongly growing, more attention should be paid to the influence of the quality of the data on the calibration. While a lot of progress has been made on using distributed data in simulations of hydrological models, sensitivity of spatial data with respect to model results is not well understood. In this paper we develop a spatial sensitivity analysis (SA) method for snow cover fraction input data (SCF) for a distributed rainfall-runoff model to investigate if the model is differently subjected to SCF uncertainty in different zones of the model. The analysis was focused on the relation between the SCF sensitivity and the physical, spatial parameters and processes of a distributed rainfall-runoff model. The methodology is tested for the Biebrza River catchment, Poland, for which a distributed WetSpa model is set up to simulate two years of daily runoff. The SA uses the Latin-Hypercube One-factor-At-a-Time (LH-OAT) algorithm, which uses different response functions for each 4 km × 4 km snow zone. The results show that the spatial patterns of sensitivity can be easily interpreted by co-occurrence of different environmental factors such as geomorphology, soil texture, land use, precipitation and temperature. Moreover, the spatial pattern of sensitivity under different response functions is related to different spatial parameters and physical processes. The results clearly show that the LH-OAT algorithm is suitable for the spatial sensitivity analysis approach and that the SCF is spatially sensitive in the WetSpa model.

  11. Long-term mechanical life testing of polymeric post insulators for distribution and a comparison to porcelain

    SciTech Connect

    Cherney, E.A.

    1988-07-01

    The paper presents the results and analyses of long-term cantilever strength tests on polymeric line post insulators. The time-to-failure data for static cantilever loads are represented by the Weibull distribution. The life distribution, obtained from the maximum likelihood estimates of the accelerated failure times, fits an exponential model. An extrapolation of the life distribution to normal loads provides an estimate of the strength rating and mechanical equivalence to porcelain line post insulators.
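
    A minimal sketch of representing time-to-failure data by a Weibull distribution via maximum likelihood, in the spirit of the tests above; the failure times are simulated stand-ins, and the B10 life shown is one common summary, not the paper's strength rating.

        # Weibull MLE for (simulated) times to failure, location fixed at zero.
        import numpy as np
        from scipy.stats import weibull_min

        rng = np.random.default_rng(7)
        hours = weibull_min.rvs(1.2, scale=5000.0, size=30, random_state=rng)

        shape, loc, scale = weibull_min.fit(hours, floc=0)
        b10 = weibull_min.ppf(0.1, shape, scale=scale)  # 10% of units failed
        print(f"shape = {shape:.2f}, scale = {scale:.0f} h, B10 life = {b10:.0f} h")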

  12. Mathematical Modeling and Numerical Analysis of Thermal Distribution in Arch Dams considering Solar Radiation Effect

    PubMed Central

    Mirzabozorg, H.; Hariri-Ardebili, M. A.; Shirkhan, M.; Seyed-Kolbadi, S. M.

    2014-01-01

    The effect of solar radiation on thermal distribution in thin high arch dams is investigated. The differential equation governing thermal behavior of mass concrete in three-dimensional space is solved applying appropriate boundary conditions. Solar radiation is implemented considering the dam face direction relative to the sun, the slope relative to the horizon, the region cloud cover, and the surrounding topography. It has been observed that solar radiation changes the surface temperature drastically and leads to nonuniform temperature distribution. Solar radiation effects should be considered in thermal transient analysis of thin arch dams. PMID:24695817

  13. Fourier analysis of the flux-tube distribution in SU(3) lattice QCD

    E-print Network

    Arata Yamamoto

    2010-04-16

    This letter presents a novel analysis of the action/energy density distribution around a static quark-antiquark pair in SU(3) lattice quantum chromodynamics. Using the Fourier transformation of the link variable, we remove the high-momentum gluon and extract the flux-tube component from the action/energy density. When the high-momentum gluon is removed, the statistical fluctuation is drastically suppressed, and the singularities from the quark self-energy disappear. The obtained flux-tube component is broadly distributed around the line connecting the quark and the antiquark.

  14. Mathematical modeling and numerical analysis of thermal distribution in arch dams considering solar radiation effect.

    PubMed

    Mirzabozorg, H; Hariri-Ardebili, M A; Shirkhan, M; Seyed-Kolbadi, S M

    2014-01-01

    The effect of solar radiation on thermal distribution in thin high arch dams is investigated. The differential equation governing thermal behavior of mass concrete in three-dimensional space is solved applying appropriate boundary conditions. Solar radiation is implemented considering the dam face direction relative to the sun, the slope relative to the horizon, the region cloud cover, and the surrounding topography. It has been observed that solar radiation changes the surface temperature drastically and leads to nonuniform temperature distribution. Solar radiation effects should be considered in thermal transient analysis of thin arch dams. PMID:24695817

  15. Measurement of bubble and pellet size distributions: past and current image analysis technology.

    PubMed

    Junker, Beth

    2006-08-01

    Measurements of bubble and pellet size distributions are useful for biochemical process optimizations. The accuracy, representation, and simplicity of these measurements improve when the measurement is performed on-line and in situ rather than off-line using a sample. Historical and currently available measurement systems for photographic methods are summarized for bubble and pellet (morphology) measurement applications. Applications to cells, mycelia, and pellets measurements have driven key technological developments that have been applied for bubble measurements. Measurement trade-offs exist to maximize accuracy, extend range, and attain reasonable cycle times. Mathematical characterization of distributions using standard statistical techniques is straightforward, facilitating data presentation and analysis. For the specific application of bubble size distributions, selected bioreactor operating parameters and physicochemical conditions alter distributions. Empirical relationships have been established in some cases where sufficient data have been collected. In addition, parameters and conditions with substantial effects on bubble size distributions were identified and their relative effects quantified. This information was used to guide required accuracy and precision targets for bubble size distribution measurements from newly developed novel on-line and in situ bubble measurement devices. PMID:16855822

  16. Differentiation of distribution systems, source water, and clinical coliforms by DNA analysis.

    PubMed Central

    Edberg, S C; Patterson, J E; Smith, D B

    1994-01-01

    During a 2-week period, Enterobacter cloacae was isolated from throughout the water distribution system in New Haven County, Connecticut. There was no forewarning of this event and no apparent reasons for it. Several epidemiologic and public health questions required rapid answers. Were these E. cloacae isolates the result of treatment failure and breakthrough or was regrowth occurring within the system? Did the E. cloacae isolates represent a health threat and were they causing infection? Pulsed-field gel electrophoresis utilizing whole-cell DNA digestion with restriction endonuclease SpeI permitted the rapid generation of specific information to answer these questions. Gel bands were stained with ethidium bromide and photographed with UV illumination. Homogeneity among isolates was confirmed by repeat digestion with XbaI. From each of the water distribution isolates, a single pattern of restriction endonuclease fragments was generated, indicating that only one clone of E. cloacae was in the distribution system. There was no homogeneity between source and distribution water E. cloacae isolates. Moreover, E. cloacae clinical isolates from patients from New Haven area hospitals showed no identity with E. cloacae isolated from the distribution system. Therefore, pulsed-field gel electrophoresis DNA analysis demonstrated that the E. cloacae from the distribution system was the result of a regrowth bloom within the system and not the result of treatment failure and that this clone was not causing a public health risk. PMID:8126169

  17. Evaluating Domestic Hot Water Distribution System Options With Validated Analysis Models

    SciTech Connect

    Weitzel, E.; Hoeschele, M.

    2014-09-01

    A growing body of work collects data on domestic hot water consumption, water use behaviors, and the energy efficiency of various distribution systems. A full distribution system developed in TRNSYS has been validated using field monitoring data and then exercised in a number of climates to understand climate impact on performance. This study builds upon previous analysis modelling work to evaluate differing distribution systems and the sensitivities of water heating energy and water use efficiency to variations of climate, load, distribution type, insulation and compact plumbing practices. Overall, 124 different TRNSYS models were simulated. Of the configurations evaluated, distribution losses account for 13-29% of the total water heating energy use and water use efficiency ranges from 11-22%. The base case, an uninsulated trunk-and-branch system, sees the most improvement in energy consumption from insulating and locating the water heater central to all fixtures. Demand recirculation systems are not projected to provide significant energy savings and in some cases increase energy consumption. Water use is most efficient with demand recirculation systems, followed by the insulated trunk-and-branch system with a central water heater. Compact plumbing practices and insulation have the most impact on energy consumption (2-6% for insulation and 3-4% per 10 gallons of enclosed volume reduced). The results of this work are useful in informing future development of water heating best practices guides as well as more accurate (and simulation time efficient) distribution models for annual whole house simulation programs.

  18. Reliability Analysis of Uniaxially Ground Brittle Materials

    NASA Technical Reports Server (NTRS)

    Salem, Jonathan A.; Nemeth, Noel N.; Powers, Lynn M.; Choi, Sung R.

    1995-01-01

    The fast fracture strength distribution of uniaxially ground, alpha silicon carbide was investigated as a function of grinding angle relative to the principal stress direction in flexure. Both as-ground and ground/annealed surfaces were investigated. The resulting flexural strength distributions were used to verify reliability models and predict the strength distribution of larger plate specimens tested in biaxial flexure. Complete fractography was done on the specimens. Failures occurred from agglomerates, machining cracks, or hybrid flaws that consisted of a machining crack located at a processing agglomerate. Annealing eliminated failures due to machining damage. Reliability analyses were performed using two and three parameter Weibull and Batdorf methodologies. The Weibull size effect was demonstrated for machining flaws. Mixed mode reliability models reasonably predicted the strength distributions of uniaxial flexure and biaxial plate specimens.
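
    A minimal sketch contrasting two- and three-parameter Weibull fits to a strength sample, the third (location) parameter acting as a threshold stress; the strengths below are invented, and the generic MLE here is not the Batdorf methodology used in the paper.

        # Two- versus three-parameter Weibull fits to (invented) strengths in MPa.
        import numpy as np
        from scipy.stats import weibull_min

        strengths = np.array([412, 395, 433, 450, 388, 421, 405, 441, 398, 428.0])

        m2, _, s2 = weibull_min.fit(strengths, floc=0)  # two-parameter fit
        m3, loc3, s3 = weibull_min.fit(strengths)       # three-parameter fit

        print(f"2-p: modulus = {m2:.1f}, characteristic strength = {s2:.0f} MPa")
        print(f"3-p: modulus = {m3:.1f}, threshold = {loc3:.0f} MPa, scale = {s3:.0f} MPa")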

  19. Strain analysis from objects with a random distribution: A generalized center-to-center method

    NASA Astrophysics Data System (ADS)

    Shan, Yehua; Liang, Xinquan

    2014-03-01

    Existing methods of strain analysis such as the center-to-center method and the Fry method estimate strain from the spatial relationship between point objects in the deformed state. They assume a truncated Poisson distribution of point objects in the pre-deformed state. Significant deviations occur in nature and diffuse the central vacancy in a Fry plot, limiting its effectiveness as a strain gauge. Therefore, a generalized center-to-center method is proposed to deal with point objects with the more general Poisson distribution, where the method outcomes do not depend on an analysis of a graphical central vacancy. This new method relies upon the probability mass function for the Poisson distribution, and adopts the maximum likelihood function method to solve for strain. The feasibility of the method is demonstrated by applying it to artificial data sets generated for known strains. Further analysis of these sets by use of the bootstrap method shows that the accuracy of the strain estimate has a strong tendency to increase either with point number or with the inclusion of more pre-deformation nearest neighbors. A poorly sorted, well packed, deformed conglomerate is analyzed, yielding a strain estimate similar to the vector mean of the major axis directions of pebbles and the harmonic mean of their axial ratios from a shape-based strain determination method. These outcomes support the applicability of the new method to the analysis of deformed rocks with appropriate strain markers.

  20. A performance analysis method for distributed real-time robotic systems: A case study of remote teleoperation

    NASA Technical Reports Server (NTRS)

    Lefebvre, D. R.; Sanderson, A. C.

    1994-01-01

    Robot coordination and control systems for remote teleoperation applications are by necessity implemented on distributed computers. Modeling and performance analysis of these distributed robotic systems is difficult, but important for economic system design. Performance analysis methods originally developed for conventional distributed computer systems are often unsatisfactory for evaluating real-time systems. The paper introduces a formal model of distributed robotic control systems and a performance analysis method, based on scheduling theory, which can handle concurrent hard-real-time response specifications. Use of the method is illustrated by a case of remote teleoperation which assesses the effect of communication delays and the allocation of robot control functions on control system hardware requirements.

  1. Towards an XML and agent-based framework for the distributed management and analysis of active data archives

    E-print Network

    Walker, David W.

    Implementation and use of a prototype distributed active data archive system are outlined. This system is based on the Synthetic Aperture Radar Atlas (SARA) and utilises cooperative software agents for data access and analysis.

  2. Validation results of the IAG Dancer project for distributed GPS analysis

    NASA Astrophysics Data System (ADS)

    Boomkamp, H.

    2012-12-01

    The number of permanent GPS stations in the world has grown far too large to allow processing of all this data at analysis centers. The majority of these GPS sites do not even make their observation data available to the analysis centers, for various valid reasons. The current ITRF solution is still based on centralized analysis by the IGS, and subsequent densification of the reference frame via regional network solutions. Minor inconsistencies in analysis methods, software systems and data quality imply that this centralized approach is unlikely to ever reach the ambitious accuracy objectives of GGOS. The dependence on published data also makes it clear that a centralized approach will never provide a true global ITRF solution for all GNSS receivers in the world. If the data does not come to the analysis, the only alternative is to bring the analysis to the data. The IAG Dancer project has implemented a distributed GNSS analysis system on the internet in which each receiver can have its own analysis center in the form of a freely distributed JAVA peer-to-peer application. Global parameters for satellite orbits, clocks and polar motion are solved via a distributed least squares solution among all participating receivers. A Dancer instance can run on any computer that has simultaneous access to the receiver data and to the public internet. In the future, such a process may be embedded in the receiver firmware directly. GPS network operators can join the Dancer ITRF realization without having to publish their observation data or estimation products. GPS users can run a Dancer process without contributing to the global solution, to have direct access to the ITRF in near real-time. The Dancer software has been tested on-line since late 2011. A global network of processes has gradually evolved to allow stabilization and tuning of the software in order to reach a fully operational system. This presentation reports on the current performance of the Dancer system, and demonstrates the obvious benefits of distributed analysis of geodetic data in general.

  3. Sub-population analysis of deformability distribution in heterogeneous red blood cell population.

    PubMed

    Lee, Dong Woo; Doh, Il; Kuypers, Frans A; Cho, Young-Ho

    2015-12-01

    We present a method for sub-population analysis of deformability distribution using single-cell microchamber array (SiCMA) technology. It is a unique method allowing the correlation of overall cellular characteristics with surface and cytosolic characteristics to define the distribution of individual cellular characteristics in heterogeneous cell populations. As a proof of principle, reticulocytes, the immature sub-population of red blood cells (RBC), were recognized from RBC population by a surface marker and different characteristics on deformability between these populations were characterized. The proposed technology can be used in a variety of applications that would benefit from the ability to measure the distribution of cellular characteristics in complex populations, especially important to define hematologic disorders. PMID:26383009

  4. Coupling Lemma and Its Application to The Security Analysis of Quantum Key Distribution

    E-print Network

    Kentaro Kato

    2015-05-23

    It is known that the coupling lemma provides a useful tool in the study of probability theory and its related areas. It describes the relation between the variational distance of two probability distributions and the probability that outcomes from the two random experiments associated with each distribution are not identical. In this paper, the failure probability interpretation problem that has been presented by Yuen and Hirota is discussed from the viewpoint of the application of the coupling lemma. First, we introduce the coupling lemma and investigate its properties. Next, it is shown that the claims for this problem in the literature are justified by using the coupling lemma. Consequently, we see that the failure probability interpretation is not adequate in the security analysis of quantum key distribution.
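
    A minimal numerical illustration of the lemma's content for discrete distributions: the variational distance equals the minimum, over couplings, of the probability that the two outcomes differ, and the optimal coupling preserves the overlap min(P, Q). The distributions below are arbitrary examples.

        # Coupling lemma check: TV distance versus optimal mismatch probability.
        import numpy as np

        P = np.array([0.5, 0.3, 0.2])
        Q = np.array([0.4, 0.4, 0.2])

        tv = 0.5 * np.abs(P - Q).sum()
        overlap = np.minimum(P, Q).sum()  # mass kept identical by optimal coupling
        print(f"TV distance = {tv:.2f}, optimal Pr[X != Y] = {1 - overlap:.2f}")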

  5. Precise dipolar coupling constant distribution analysis in proton multiple-quantum NMR of elastomers.

    PubMed

    Chassé, Walter; López Valentín, Juan; Genesky, Geoffrey D; Cohen, Claude; Saalwächter, Kay

    2011-01-28

    In this work we present an improved approach for the analysis of (1)H double-quantum nuclear magnetic resonance build-up data, mainly for the determination of residual dipolar coupling constants and distributions thereof in polymer gels and elastomers, yielding information on crosslink density and potential spatial inhomogeneities. We introduce a new generic build-up function, for use as component fitting function in linear superpositions, or as kernel function in fast Tikhonov regularization (ftikreg). As opposed to the previously used inverted Gaussian build-up function based on a second-moment approximation, this method yields faithful coupling constant distributions, as limitations on the fitting limit are now lifted. A robust method for the proper estimation of the error parameter used for the regularization is established, and the approach is demonstrated for different inhomogeneous elastomers with coupling constant distributions. PMID:21280798

  6. Intrinsic Charm Parton Distribution Functions from CTEQ-TEA Global Analysis

    E-print Network

    Dulat, Sayipjamal; Gao, Jun; Huston, Joey; Pumplin, Jon; Schmidt, Carl; Stump, Daniel; Yuan, C -P

    2013-01-01

    We study the possibility of intrinsic (non-perturbative) charm in parton distribution functions (PDF) of the proton, within the context of the CT10 next-to-next-to-leading order (NNLO) global analysis. Three models for the intrinsic charm (IC) quark content are compared: (i) $\hat{c}(x) = 0$ (zero-IC model); (ii) $\hat{c}(x)$ is parametrized by a valence-like parton distribution (BHPS model); (iii) $\hat{c}(x)$ is parametrized by a sea-like parton distribution (SEA model). In these models, the intrinsic charm content, $\hat{c}(x)$, is included in the charm PDF at the matching scale $Q_c=m_c=1.3$ GeV. The best fits to data are constructed and compared. Correlations between the value of $m_c$ and the amount of IC are also considered.

  7. Intrinsic Charm Parton Distribution Functions from CTEQ-TEA Global Analysis

    E-print Network

    Sayipjamal Dulat; Tie-Jiun Hou; Jun Gao; Joey Huston; Jon Pumplin; Carl Schmidt; Daniel Stump; C. -P. Yuan

    2014-08-19

    We study the possibility of intrinsic (non-perturbative) charm in parton distribution functions (PDF) of the proton, within the context of the CT10 next-to-next-to-leading order (NNLO) global analysis. Three models for the intrinsic charm (IC) quark content are compared: (i) $\hat{c}(x) = 0$ (zero-IC model); (ii) $\hat{c}(x)$ is parametrized by a valence-like parton distribution (BHPS model); (iii) $\hat{c}(x)$ is parametrized by a sea-like parton distribution (SEA model). In these models, the intrinsic charm content, $\hat{c}(x)$, is included in the charm PDF at the matching scale $Q_c=m_c=1.3$ GeV. The best fits to data are constructed and compared. Correlations between the value of $m_c$ and the amount of IC are also considered.

  8. Modelling European dry spell length distributions, years 1951-2000

    NASA Astrophysics Data System (ADS)

    Serra, Carina; Lana, Xavier; Burgueño, August; Martinez, Maria-Dolors

    2010-05-01

    Daily precipitation records of 267 European rain gauges are considered to obtain dry spell length, DSL, series over the second half of the twentieth century (1951-2000). A dry spell is defined as a set of consecutive days with daily rain amount below a given threshold, R0, here equal to 0.1, 1.0, 5.0 and 10.0 mm/day. DSL series are fitted to four different statistical models: Pearson type III (PE3), Weibull (WEI), generalized Pareto (GPA) and lognormal (LN) distributions. The parameters of every model are estimated by L-moments, and the goodness of fit is assessed by quantifying discrepancies between empirical and theoretical distributions in the L-skewness-kurtosis diagrams. The most common best-fitting model across Europe is PE3, especially for the 0.1 and 1.0 mm/day thresholds. Nevertheless, a few stations in southern Europe are better modelled by the WEI distribution. For 5.0 and 10.0 mm/day, the spatial distribution of the best-fitting model is more heterogeneous than for the lowest thresholds. Maps of DSL average and standard deviation, and expected lengths for return periods of 2, 5, 10, 25 and 50 years are also obtained. A common feature of all these maps is that, whereas for thresholds of 0.1 and 1.0 mm/day a N-S gradient is detected, especially strong in Mediterranean areas, for 5.0 and 10.0 mm/day a NW-SE gradient is observed in the Iberian Peninsula and a SW-NE gradient in the Scandinavian Peninsula. Finally, a regional frequency analysis based on a clustering algorithm is attempted for the four threshold levels R0, with the PE3 model found to be the parent distribution for the groups with the highest number of stations.
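
    A minimal sketch of the sample L-moments behind such L-skewness/L-kurtosis diagrams, computed from probability-weighted moments; the dry-spell lengths below are invented example data, in days.

        # Sample L-moments (mean, L-scale, L-skewness, L-kurtosis) via PWMs.
        import numpy as np
        from math import comb

        def sample_l_moments(x):
            x = np.sort(np.asarray(x, dtype=float))
            n = len(x)
            b = [np.mean([comb(i, r) / comb(n - 1, r) * x[i] for i in range(n)])
                 for r in range(4)]
            l1, l2 = b[0], 2 * b[1] - b[0]
            l3 = 6 * b[2] - 6 * b[1] + b[0]
            l4 = 20 * b[3] - 30 * b[2] + 12 * b[1] - b[0]
            return l1, l2, l3 / l2, l4 / l2

        dsl = [3, 5, 2, 9, 14, 4, 6, 21, 2, 7, 11, 3, 5, 8, 30, 4]
        print("l1=%.2f  l2=%.2f  t3=%.3f  t4=%.3f" % sample_l_moments(dsl))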

  9. Development of a Web Service for Analysis in a Distributed Network

    PubMed Central

    Jiang, Xiaoqian; Wu, Yuan; Marsolo, Keith; Ohno-Machado, Lucila

    2014-01-01

    Objective: We describe functional specifications and practicalities in the software development process for a web service that allows the construction of the multivariate logistic regression model, Grid Logistic Regression (GLORE), by aggregating partial estimates from distributed sites, with no exchange of patient-level data. Background: We recently developed and published a web service for model construction and data analysis in a distributed environment. This recent paper provided an overview of the system that is useful for users, but included very few details that are relevant for biomedical informatics developers or network security personnel who may be interested in implementing this or similar systems. We focus here on how the system was conceived and implemented. Methods: We followed a two-stage development approach by first implementing the backbone system and incrementally improving the user experience through interactions with potential users during the development. Our system went through various stages such as proof of concept, algorithm validation, user interface development, and system testing. We used the Zoho Project management system to track tasks and milestones. We leveraged Google Code and Apache Subversion to share code among team members, and developed an applet-servlet architecture to support cross-platform deployment. Discussion: During the development process, we encountered challenges such as Information Technology (IT) infrastructure gaps and limited team experience in user-interface design. We worked out solutions as well as enabling factors to support the translation of an innovative privacy-preserving, distributed modeling technology into a working prototype. Conclusion: Using GLORE (a distributed model that we developed earlier) as a pilot example, we demonstrated the feasibility of building and integrating distributed modeling technology into a usable framework that can support privacy-preserving, distributed data analysis among researchers at geographically dispersed institutes. PMID:25848586

  10. Comparative Analysis between ROCOF and Vector Surge Relays for Distributed Generation Applications

    SciTech Connect

    Freitas, Walmir; Xu, Wilsun; Affonso, Carolina M.; Huang, Zhenyu

    2005-04-01

    This paper presents a comprehensive comparative analysis between rate of change of frequency (ROCOF) and vector surge (VS) relays for distributed generation islanding detection. The analysis is based on the concepts of detection-time versus active power-imbalance curves and critical active power imbalance. Such curves are obtained through dynamic simulations. The performance of these devices considering different scenarios is determined and compared. Factors such as voltage-dependent loads, generator inertia constant and multi-distributed generator system are analyzed. False operation of these relays due to faults in adjacent feeders is also addressed. Results show that ROCOF relays are more reliable to detect islanding than vector surge relays when the active power imbalance in the islanded system is small. However, ROCOF relays are more susceptible to false operation than vector surge relays.

  11. Passive-scheme analysis for solving the untrusted source problem in quantum key distribution

    SciTech Connect

    Peng Xiang; Xu Bingjie; Guo Hong

    2010-04-15

    As a practical method, the passive scheme is useful to monitor the photon statistics of an untrusted source in a 'Plug and Play' quantum key distribution (QKD) system. In a passive scheme, three kinds of monitor mode can be adopted: average photon number (APN) monitor, photon number analyzer (PNA), and photon number distribution (PND) monitor. In this paper, the security analysis is rigorously given for the APN monitor, while for the PNA the analysis, including statistical fluctuation and random noise, is addressed with a confidence level. The results show that the PNA can achieve better performance than the APN monitor and can asymptotically approach the theoretical limit of the PND monitor. Also, the passive scheme with the PNA works efficiently when the signal-to-noise ratio (R^SN) is not too low and so is highly applicable to solve the untrusted source problem in the QKD system.

  12. Impact of hadronic and nuclear corrections on global analysis of spin-dependent parton distributions

    SciTech Connect

    Jimenez-Delgado, Pedro; Accardi, Alberto; Melnitchouk, Wally

    2014-02-01

    We present the first results of a new global next-to-leading order analysis of spin-dependent parton distribution functions from the most recent world data on inclusive polarized deep-inelastic scattering, focusing in particular on the large-x and low-Q^2 regions. By directly fitting polarization asymmetries we eliminate biases introduced by using polarized structure function data extracted under nonuniform assumptions for the unpolarized structure functions. For analysis of the large-x data we implement nuclear smearing corrections for deuterium and ^3He nuclei, and systematically include target mass and higher twist corrections to the g_1 and g_2 structure functions at low Q^2. We also explore the effects of Q^2 and W^2 cuts in the data sets, and the potential impact of future data on the behavior of the spin-dependent parton distributions at large x.

  13. Shared and Distributed Memory Parallel Security Analysis of Large-Scale Source Code and Binary Applications

    SciTech Connect

    Quinlan, D; Barany, G; Panas, T

    2007-08-30

    Many forms of security analysis on large scale applications can be substantially automated but the size and complexity can exceed the time and memory available on conventional desktop computers. Most commercial tools are understandably focused on such conventional desktop resources. This paper presents research work on the parallelization of security analysis of both source code and binaries within our Compass tool, which is implemented using the ROSE source-to-source open compiler infrastructure. We have focused on both shared and distributed memory parallelization of the evaluation of rules implemented as checkers for a wide range of secure programming rules, applicable to desktop machines, networks of workstations and dedicated clusters. While Compass as a tool focuses on source code analysis and reports violations of an extensible set of rules, the binary analysis work uses the exact same infrastructure but is less well developed into an equivalent final tool.

  14. Stress distribution on a valgus knee prosthetic inclined interline -- a finite element analysis.

    PubMed

    Orban, H; Stan, G; Gruionu, L; Orban, C

    2013-01-01

    Total knee arthroplasty following valgus deformity is a challenging procedure due to the unique set of problems that must be addressed. The aim of this study is to determine, with a finite element analysis, the load distribution for an inclined valgus prosthetic balanced knee and to compare these results with those of a prosthetic balanced knee with an uninclined interline. Computational simulations, using finite element analysis, focused on a comparison of load intensity and distribution between these situations. We studied valgus inclination at 3 and 8 degrees. We noticed that for an inclination of 3 degrees, the forces are distributed almost symmetrically on both condyles, similar to the distribution of forces in the uninclined interline case. The maximum contact pressure is greater, increasing from 15 MPa to 19.3 MPa (28%). At 8 degrees of inclination, the contact patch moved anterolaterally on the tibia, meaning that the tibial condyles will be unequally loaded. The maximum contact pressure increases to 25 MPa (66%). These greater forces could lead to polyethylene wear and collapse. Additional tibial resection could be a useful method for balancing in severe valgus knee, when valgus inclination does not exceed 3 degrees. PMID:23464776

  15. Modeling Exon-Specific Bias Distribution Improves the Analysis of RNA-Seq Data

    PubMed Central

    Liu, Xuejun; Zhang, Li; Chen, Songcan

    2015-01-01

    RNA-seq technology has become an important tool for quantifying the gene and transcript expression in transcriptome study. The two major difficulties for the gene and transcript expression quantification are the read mapping ambiguity and the overdispersion of the read distribution along the reference sequence. Many approaches have been proposed to deal with these difficulties. A number of existing methods use the Poisson distribution to model the read counts, and this easily splits the counts into the contributions from multiple transcripts. Meanwhile, various solutions were put forward to account for the overdispersion in the Poisson models. By checking the similarities among the variation patterns of read counts for individual genes, we found that the count variation is exon-specific and has a conserved pattern across samples for each individual gene. We introduce Gamma-distributed latent variables to model the read sequencing preference for each exon. These variables are embedded in the rate parameter of a Poisson model to account for the overdispersion of read distribution. The model is tractable since the Gamma priors can be integrated out in the maximum likelihood estimation. We evaluate the proposed approach, PGseq, using four real datasets and one simulated dataset, and compare its performance with other popular methods. Results show that PGseq presents competitive performance compared to other alternatives in terms of accuracy in the gene and transcript expression calculation and in the downstream differential expression analysis. In particular, we show the advantage of our method in the analysis of low expression. PMID:26448625
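
    A minimal sketch of the Gamma-Poisson mixture idea described above: an exon-specific Gamma(r, r) rate multiplier integrates out to a negative binomial, giving heavier tails than a pure Poisson. The parameter values are illustrative, and this is not the PGseq implementation.

        # Gamma-Poisson mixture = negative binomial with n = r, p = r / (r + mu).
        import numpy as np
        from scipy.stats import nbinom, poisson

        mu, r = 50.0, 4.0  # mean reads per exon and Gamma shape (assumed)
        counts = np.arange(151)

        pg = nbinom.pmf(counts, r, r / (r + mu))  # overdispersed model
        po = poisson.pmf(counts, mu)              # pure Poisson baseline

        print(f"P(count > 80): Gamma-Poisson = {pg[counts > 80].sum():.4f}, "
              f"Poisson = {po[counts > 80].sum():.6f}")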

  16. Single-phase power distribution system power flow and fault analysis

    NASA Technical Reports Server (NTRS)

    Halpin, S. M.; Grigsby, L. L.

    1992-01-01

    Alternative methods for power flow and fault analysis of single-phase distribution systems are presented. The algorithms for both power flow and fault analysis utilize a generalized approach to network modeling. The generalized admittance matrix, formed using elements of linear graph theory, is an accurate network model for all possible single-phase network configurations. Unlike the standard nodal admittance matrix formulation algorithms, the generalized approach uses generalized component models for the transmission line and transformer. The standard assumption of a common node voltage reference point is not required to construct the generalized admittance matrix. Therefore, truly accurate simulation results can be obtained for networks that cannot be modeled using traditional techniques.
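
    For orientation, a minimal sketch of the standard nodal admittance formulation that the generalized admittance matrix extends; the branch admittances and current injections are invented, and this small example keeps the common voltage reference that the generalized approach avoids.

        # Standard nodal analysis: build Y and solve Y V = I for node voltages.
        import numpy as np

        y12, y10, y20 = 4.0 - 8.0j, 0.5j, 0.5j  # branch and shunt admittances (S)
        Y = np.array([[y12 + y10, -y12],
                      [-y12, y12 + y20]])

        I = np.array([1.0 + 0.0j, -0.8 - 0.2j])  # injected currents (A)
        V = np.linalg.solve(Y, I)
        print("node voltages:", np.round(V, 3))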

  17. Mapping Drug Distribution in Brain Tissue Using Liquid Extraction Surface Analysis Mass Spectrometry Imaging.

    PubMed

    Swales, John G; Tucker, James W; Spreadborough, Michael J; Iverson, Suzanne L; Clench, Malcolm R; Webborn, Peter J H; Goodwin, Richard J A

    2015-10-01

    Liquid extraction surface analysis mass spectrometry (LESA-MS) is a surface sampling technique that incorporates liquid extraction from the surface of tissue sections with nanoelectrospray mass spectrometry. Traditional tissue analysis techniques usually require homogenization of the sample prior to analysis via high-performance liquid chromatography mass spectrometry (HPLC-MS), but an intrinsic weakness of this is a loss of all spatial information and the inability of the technique to distinguish between actual tissue penetration and response caused by residual blood contamination. LESA-MS, in contrast, has the ability to spatially resolve drug distributions and has historically been used to profile discrete spots on the surface of tissue sections. Here, we use the technique as a mass spectrometry imaging (MSI) tool, extracting points at 1 mm spatial resolution across tissue sections to build an image of xenobiotic and endogenous compound distribution to assess drug blood-brain barrier penetration into brain tissue. A selection of penetrant and "nonpenetrant" drugs were dosed to rats via oral and intravenous administration. Whole brains were snap-frozen at necropsy and were subsequently sectioned prior to analysis by matrix-assisted laser desorption ionization mass spectrometry imaging (MALDI-MSI) and LESA-MSI. MALDI-MSI, as expected, was shown to effectively map the distribution of brain penetrative compounds but lacked sufficient sensitivity when compounds were marginally penetrative. LESA-MSI was used to effectively map the distribution of these poorly penetrative compounds, highlighting its value as a complementary technique to MALDI-MSI. The technique also showed benefits when compared to traditional homogenization, particularly for drugs that were considered nonpenetrant by homogenization but were shown to have a measurable penetration using LESA-MSI. PMID:26350423

  18. Time-cost analysis of a quantum key distribution system clocked at 100 MHz

    E-print Network

    Xiaofan Mo; Itzel Lucio Martinez; Philip Chan; Chris Healey; Steve Hosier; Wolfgang Tittel

    2011-05-18

    We describe the realization of a quantum key distribution (QKD) system clocked at 100 MHz. The system includes classical postprocessing implemented via software, and is operated over a 12 km standard telecommunication dark fiber in a real-world environment. A time-cost analysis of the sifted, error-corrected, and secret key rates relative to the raw key rate is presented, and the scalability of our implementation with respect to higher secret key rates is discussed.

  19. CONTAIN analysis of hydrogen distribution and combustion in PWR dry containments

    SciTech Connect

    Yang, J.W.; Nimnual, S.

    1991-01-01

    Hydrogen transport and combustion in a PWR dry containment are analyzed using the CONTAIN code for a multi-compartment model of the Zion plant. The analysis includes consideration of both degraded core and full core meltdown accidents initiated by a small break LOCA. The importance of intercell flow mixing for the distributions of gas composition and temperature in the various compartments is evaluated. Thermal stratification and combustion behavior are discussed. 4 refs., 8 figs., 2 tabs.

  20. Combining Different Tools for EEG Analysis to Study the Distributed Character of Language Processing

    PubMed Central

    da Rocha, Armando Freitas; Foz, Flávia Benevides; Pereira, Alfredo

    2015-01-01

    Recent studies on language processing indicate that language cognition is better understood if assumed to be supported by a distributed intelligent processing system enrolling neurons located all over the cortex, in contrast to reductionism that proposes to localize cognitive functions to specific cortical structures. Here, brain activity was recorded using electroencephalogram while volunteers were listening to or reading small texts and had to select pictures that translate the meaning of these texts. Several techniques for EEG analysis were used to show this distributed character of neuronal enrollment associated with the comprehension of oral and written descriptive texts. Low Resolution Tomography identified the many different sets (si) of neurons activated in several distinct cortical areas by text understanding. Linear correlation was used to calculate the information H(ei) provided by each electrode of the 10/20 system about the identified si. Principal Component Analysis (PCA) of H(ei) was used to study the temporal and spatial activation of these sources si. This analysis evidenced 4 different patterns of H(ei) covariation that are generated by neurons located at different cortical locations. These results show that the distributed character of language processing is clearly evidenced by combining available EEG technologies. PMID:26713089
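
    A minimal sketch of the covariation step, on synthetic numbers rather than the study's EEG records: principal component analysis of per-electrode information time series H(ei) exposes the dominant spatial patterns of joint activation.

    ```python
    # PCA of synthetic per-electrode information series via SVD.
    import numpy as np

    rng = np.random.default_rng(7)
    n_t, n_e = 200, 20                       # time samples x 10/20 electrodes
    patterns = rng.normal(size=(2, n_e))     # two spatial covariation patterns
    activations = rng.normal(size=(n_t, 2))  # their activation over time
    H = activations @ patterns + 0.3 * rng.normal(size=(n_t, n_e))

    Hc = H - H.mean(axis=0)                  # center before PCA
    U, s, Vt = np.linalg.svd(Hc, full_matrices=False)
    explained = s ** 2 / (s ** 2).sum()
    print("variance explained by first 4 PCs:", explained[:4].round(3))
    print("leading spatial pattern:", Vt[0].round(2))
    ```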

  1. Space station electrical power distribution analysis using a load flow approach

    NASA Technical Reports Server (NTRS)

    Emanuel, Ervin M.

    1987-01-01

    The space station's electrical power system will evolve and grow in a manner similar to present terrestrial electrical power utilities. The initial baseline reference configuration will contain more than 50 nodes or buses, inverters, transformers, overcurrent protection devices, distribution lines, solar arrays, and/or solar dynamic power generating sources. The system is designed to manage and distribute 75 kW of power, single phase or three phase, at 20 kHz, to grow to a level of 300 kW steady state, and to be capable of operating at a peak of 450 kW for 5 to 10 min. In order to plan far into the future and keep pace with load growth, a load flow power system analysis approach must be developed and utilized. This method is a well known energy assessment and management tool that is widely used throughout the electrical power utility industry. The results of a comprehensive evaluation and assessment of an Electrical Distribution System Analysis Program (EDSA) are discussed. Its potential use as an analysis and design tool for the 20 kHz space station electrical power system is addressed.

  2. A distributed analysis and visualization system for model and observational data

    NASA Technical Reports Server (NTRS)

    Wilhelmson, Robert B.

    1994-01-01

    Software was developed with NASA support to aid in the analysis and display of the massive amounts of data generated from satellites, observational field programs, and from model simulations. This software was developed in the context of the PATHFINDER (Probing ATmospHeric Flows in an Interactive and Distributed EnviRonment) Project. The overall aim of this project is to create a flexible, modular, and distributed environment for data handling, modeling simulations, data analysis, and visualization of atmospheric and fluid flows. Software completed with NASA support includes GEMPAK analysis, data handling, and display modules for which collaborators at NASA had primary responsibility, and prototype software modules for three-dimensional interactive and distributed control and display as well as data handling, for which NCSA was responsible. Overall process control was handled through a scientific and visualization application builder from Silicon Graphics known as the Iris Explorer. In addition, the GEMPAK related work (GEMVIS) was also ported to the Advanced Visualization System (AVS) application builder. Many modules were developed to enhance those already available in Iris Explorer including HDF file support, improved visualization and display, simple lattice math, and the handling of metadata through development of a new grid datatype. Complete source and runtime binaries along with on-line documentation are available via the World Wide Web at: http://redrock.ncsa.uiuc.edu/PATHFINDER/pathre12/top/top.html

  3. Measurement of water thickness in PEM fuel cells and analysis of gas velocity distributions

    NASA Astrophysics Data System (ADS)

    Murakawa, H.; Ueda, T.; Sugimoto, K.; Asano, H.; Takenaka, N.

    2011-09-01

    Fuel gas (hydrogen gas) and oxidant gas (air) are supplied to a polymer electrolyte fuel cell (PEFC). Condensation may occur on the cathode side, since the air might be super-saturated by the fuel cell reactions. If condensed water exists in a gas diffusion layer (GDL) or the gas channels, it may affect fuel cell performance because it blocks the oxygen from reaching the cathode reaction site. Thus, water management in the PEFC is important. In order to clarify the effects of water on the performance of a PEFC, visualization and quantitative measurements of water distributions in a PEFC were carried out by means of neutron radiography. Two-dimensional water distributions were obtained, and water ejection was confirmed. It was found that water accumulated more easily in the GDL under the rib than under the channel at the beginning of operation. Furthermore, a network analysis of the gas-velocity distribution was applied to the experimental results; it computes the gas-velocity distributions from the flow resistance, that is, the pressure drop. Applying the measured water-thickness data, gas-velocity distributions were obtained in the channel and the GDL. The calculation showed that the air supply in the GDL decreased dramatically with increasing water accumulation.

  4. Flood probability analysis for un-gauged watersheds by means of a simple distributed hydrologic model

    NASA Astrophysics Data System (ADS)

    Boni, Giorgio; Ferraris, Luca; Giannoni, Francesca; Roth, Giorgio; Rudari, Roberto

    2007-10-01

    A methodology is proposed for the inference, at the regional and local scales, of flood magnitude and associated probability. Once properly set up, this methodology is able to provide flood frequency distributions at gauged and un-gauged river sections pertaining to the same homogeneous region, using information extracted from rainfall observations. A proper flood frequency distribution can therefore be predicted even in un-gauged watersheds, for which no discharge time series is available. In regions where objective considerations allow the assumption of probability distribution homogeneity, regional approaches are increasingly adopted as they offer higher reliability. The so-called "third level" in regional frequency analysis, that is, the derivation of the local dimensional probability distribution from its regional non-dimensional counterpart, is often a critical issue because of the high spatial variability of the position parameter, usually called the "index flood". While in gauged sites the time series average is often a good estimator for the index flood, in un-gauged sites as much information as possible about the site concerned should be taken into account. To solve this issue, the present work builds on the experience developed for regional rainfall and flood frequency analyses, and a hydrologic model, driven by a specific hyetograph, is used to bridge the gap between rainfall and flood frequency distributions, identifying flood discharge magnitudes associated with given frequencies. Results obtained from the application in the Liguria region, Northern Italy, are reported, and validation is proposed in gauged sites against local flood frequency distributions, obtained either from local records or from the regional frequency distribution of non-dimensional annual discharge maxima, made dimensional with the local discharge record.
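
    The "third level" step lends itself to a compact illustration (invented growth curve and index flood, not the paper's Liguria data): a regional non-dimensional quantile curve is made dimensional at an un-gauged site by multiplying by a model-estimated index flood.

    ```python
    # Scale a regional growth curve by an index flood from a hydrologic model.
    import numpy as np

    return_periods = np.array([2, 10, 50, 100])      # years
    growth_curve = np.array([0.9, 1.6, 2.4, 2.8])    # regional Q / Q_index
    index_flood_model = 85.0                          # m^3/s, modeled at the site
    local_quantiles = index_flood_model * growth_curve
    for T, q in zip(return_periods, local_quantiles):
        print(f"T = {T:>3} yr: Q = {q:.0f} m^3/s")
    ```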

  5. Histological and Demographic Characteristics of the Distribution of Brain and Central Nervous System Tumors’ Sizes: Results from SEER Registries Using Statistical Methods

    PubMed Central

    Pokhrel, Keshav P.; Vovoras, Dimitrios; Tsokos, Chris P.

    2012-01-01

    The examination of brain tumor growth and its variability among cancer patients is an important aspect of epidemiologic and medical data. Several studies of brain tumors have interpreted descriptive data; in this study we perform inference to the extent possible, suggesting possible explanations for the differentiation in the survival rates apparent in the epidemiologic data. Population-based information from nine registries in the USA is classified with respect to age, gender, race, and tumor histology to study tumor size variation. The Weibull and Dagum distributions are fitted to the highly skewed tumor size distributions; the parametric analysis of the tumor sizes showed significant differentiation between sexes, increased skewness for both the male and female populations, as well as decreased kurtosis for the black female population. The effect of population characteristics on the distribution of tumor sizes is estimated by a quantile regression model and then compared with the ordinary least squares results. The higher quantiles of the distribution of tumor sizes for whites are significantly higher than those of other races. Our model predicted that the effect of age in the lower quantiles of the tumor size distribution is negative, given the variables race and sex. We apply probability and regression models to explore the effects of demographic and histology types and observe significant racial and gender differences in the form of the distributions. Efforts are made to link tumor size data with available survival rates in relation to other prognostic variables. PMID:23675268
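
    A minimal sketch of the two estimation steps named above, on synthetic data rather than SEER records: a two-parameter Weibull fit to skewed sizes, followed by quantile regression of size on age (scipy and statsmodels are assumed available).

    ```python
    # Weibull fit plus quantile regression on synthetic size/age data.
    import numpy as np
    import pandas as pd
    import scipy.stats as st
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    age = rng.uniform(20, 80, 400)
    size = st.weibull_min.rvs(1.4, scale=30 - 0.1 * age, random_state=rng)

    shape, loc, scale = st.weibull_min.fit(size, floc=0)  # location fixed at 0
    print(f"Weibull shape = {shape:.2f}, scale = {scale:.2f}")

    df = pd.DataFrame({"size": size, "age": age})
    for q in (0.25, 0.50, 0.90):              # age effect across quantiles
        res = smf.quantreg("size ~ age", df).fit(q=q)
        print(f"q = {q}: age coefficient = {res.params['age']:.3f}")
    ```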

  6. Do Insect Populations Die at Constant Rates as They Become Older? Contrasting Demographic Failure Kinetics with Respect to Temperature According to the Weibull Model

    PubMed Central

    Damos, Petros; Soulopoulou, Polyxeni

    2015-01-01

    Temperature implies contrasting biological causes of demographic aging in poikilotherms. In this work, we used reliability theory to describe the consistency of mortality with age in moth populations and to show that differentiation in hazard rates is related to extrinsic environmental causes such as temperature. Moreover, experiments that manipulate extrinsic mortality were used to distinguish temperature-related death rates and the pertinence of the Weibull aging model. The Newton-Raphson optimization method was applied to calculate parameters for small samples of ages at death by estimating the maximum likelihood surfaces using score gradient vectors and the Hessian matrix. The study reveals for the first time that the Weibull function is able to describe contrasting biological causes of demographic aging for moth populations maintained at different temperature regimes. We demonstrate that under favourable conditions the insect death rate accelerates as age advances, in contrast to the extreme temperatures, at which each individual drifts toward death in a linear fashion and has a constant chance of passing away. Moreover, the slope of the hazard rate shifts towards a constant initial rate, a pattern demonstrated by systems that are not wearing out (e.g. non-aging), since the failure, or death, is a random event independent of time. This finding may appear surprising because, traditionally, it was mostly taken as a rule that in an aging population the force of mortality increases exponentially until all individuals have died. Moreover, in relation to other studies, we have not observed any typical decelerating aging patterns at late life (mortality leveling-off), but rather accelerated hazard rates at optimum temperatures and a stabilized increase at the extremes. In most cases, the increase in aging-related mortality was simulated reasonably well by the applied Weibull survivorship model. Moreover, semi-log probability hazard rate plots and maximum likelihoods may be useful in defining periods of mortality leveling off and provide clear evidence that environmental variability may affect parameter estimates and insect population failure rates. From a reliability theory standpoint, failure rates vary according to a linear function of age at the extremes, indicating that the life system (i.e., population) is able to eliminate earlier failure and/or to keep later failure rates constant. The applied model was able to identify the major correlates of extended longevity and to suggest new ideas for using demographic concepts in both basic and applied population biology and aging. PMID:26317217
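
    The estimation strategy can be sketched as follows (a hedged illustration, not the authors' code): Newton-Raphson ascent on the Weibull log-likelihood, with the score vector and Hessian approximated by finite differences and synthetic ages at death.

    ```python
    # Newton-Raphson MLE for Weibull ages-at-death (log-parameterized).
    import numpy as np

    def loglik(theta, t):
        """Weibull log-likelihood in log-parameters (shape k, scale lam)."""
        k, lam = np.exp(theta)
        return (len(t) * (np.log(k) - k * np.log(lam))
                + (k - 1.0) * np.log(t).sum() - ((t / lam) ** k).sum())

    def grad_hess(f, x, eps=1e-4):
        """Finite-difference score vector and Hessian of f at x."""
        n = len(x)
        g, H = np.zeros(n), np.zeros((n, n))
        for i in range(n):
            ei = np.zeros(n); ei[i] = eps
            g[i] = (f(x + ei) - f(x - ei)) / (2.0 * eps)
            for j in range(n):
                ej = np.zeros(n); ej[j] = eps
                H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                           - f(x - ei + ej) + f(x - ei - ej)) / (4.0 * eps ** 2)
        return g, H

    rng = np.random.default_rng(2)
    deaths = 30.0 * rng.weibull(2.5, size=200)      # synthetic ages at death
    theta = np.array([0.0, np.log(deaths.mean())])  # start: k = 1, lam = mean
    for _ in range(50):                             # Newton-Raphson iterations
        g, H = grad_hess(lambda th: loglik(th, deaths), theta)
        step = np.linalg.solve(H, g)
        theta = theta - step
        if np.abs(step).max() < 1e-8:
            break
    print("shape k, scale lam:", np.exp(theta))     # true: k = 2.5, lam = 30
    ```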

  7. CARES/PC - CERAMICS ANALYSIS AND RELIABILITY EVALUATION OF STRUCTURES

    NASA Technical Reports Server (NTRS)

    Szatmary, S. A.

    1994-01-01

    The beneficial properties of structural ceramics include their high-temperature strength, light weight, hardness, and corrosion and oxidation resistance. For advanced heat engines, ceramics have demonstrated functional abilities at temperatures well beyond the operational limits of metals. This is offset by the fact that ceramic materials tend to be brittle. When a load is applied, their lack of significant plastic deformation causes the material to crack at microscopic flaws, destroying the component. CARES/PC performs statistical analysis of data obtained from the fracture of simple, uniaxial tensile or flexural specimens and estimates the Weibull and Batdorf material parameters from this data. CARES/PC is a subset of the program CARES (COSMIC program number LEW-15168) which calculates the fast-fracture reliability or failure probability of ceramic components utilizing the Batdorf and Weibull models to describe the effects of multi-axial stress states on material strength. CARES additionally requires that the ceramic structure be modeled by a finite element program such as MSC/NASTRAN or ANSYS. The more limited CARES/PC does not perform fast-fracture reliability estimation of components. CARES/PC estimates ceramic material properties from uniaxial tensile or from three- and four-point bend bar data. In general, the parameters are obtained from the fracture stresses of many specimens (30 or more are recommended) whose geometry and loading configurations are held constant. Parameter estimation can be performed for single or multiple failure modes by using the least-squares analysis or the maximum likelihood method. Kolmogorov-Smirnov and Anderson-Darling goodness-of-fit tests measure the accuracy of the hypothesis that the fracture data comes from a population with a distribution specified by the estimated Weibull parameters. Ninety-percent confidence intervals on the Weibull parameters and the unbiased value of the shape parameter for complete samples are provided when the maximum likelihood technique is used. CARES/PC is written and compiled with the Microsoft FORTRAN v5.0 compiler using the VAX FORTRAN extensions and dynamic array allocation supported by this compiler for the IBM/MS-DOS or OS/2 operating systems. The dynamic array allocation routines allow the user to match the number of fracture sets and test specimens to the memory available. Machine requirements include IBM PC compatibles with optional math coprocessor. Program output is designed to fit 80-column format printers. Executables for both DOS and OS/2 are provided. CARES/PC is distributed on one 5.25 inch 360K MS-DOS format diskette in compressed format. The expansion tool PKUNZIP.EXE is supplied on the diskette. CARES/PC was developed in 1990. IBM PC and OS/2 are trademarks of International Business Machines. MS-DOS and MS OS/2 are trademarks of Microsoft Corporation. VAX is a trademark of Digital Equipment Corporation.
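
    A scipy-based sketch of the analogous workflow (not the CARES/PC FORTRAN): maximum likelihood estimation of the two-parameter Weibull from a set of 30 synthetic fracture stresses, followed by a Kolmogorov-Smirnov goodness-of-fit check.

    ```python
    # Weibull MLE from fracture stresses, then a KS goodness-of-fit test.
    import numpy as np
    import scipy.stats as st

    rng = np.random.default_rng(3)
    stresses = st.weibull_min.rvs(10.0, scale=450.0, size=30,
                                  random_state=rng)        # synthetic, in MPa

    m, loc, s0 = st.weibull_min.fit(stresses, floc=0)      # MLE, location at 0
    # Note: the KS p-value is only approximate when the parameters are
    # estimated from the same data being tested.
    ks = st.kstest(stresses, "weibull_min", args=(m, 0, s0))
    print(f"Weibull modulus m = {m:.1f}, characteristic strength = {s0:.0f} MPa")
    print(f"KS statistic = {ks.statistic:.3f}, p = {ks.pvalue:.2f}")
    ```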

  8. Oxygen distribution in tumors: A qualitative analysis and modeling study providing a novel Monte Carlo approach

    SciTech Connect

    Lagerlöf, Jakob H.; Kindblom, Jon; Bernhardt, Peter

    2014-09-15

    Purpose: To construct a Monte Carlo (MC)-based simulation model for analyzing the dependence of tumor oxygen distribution on different variables related to tumor vasculature [blood velocity, vessel-to-vessel proximity (vessel proximity), and inflowing oxygen partial pressure (pO2)]. Methods: A voxel-based tissue model containing parallel capillaries with square cross-sections (sides of 10 µm) was constructed. Green's function was used for diffusion calculations and Michaelis-Menten kinetics to model oxygen consumption. The model was tuned to approximately reproduce the oxygenation status of a renal carcinoma; the depth oxygenation curves (DOC) were fitted with an analytical expression to facilitate rapid MC simulations of tumor oxygen distribution. DOCs were simulated with three variables at three settings each (blood velocity, vessel proximity, and inflowing pO2), which resulted in 27 combinations of conditions. To create a model that simulated variable oxygen distributions, the oxygen tension at a specific point was randomly sampled with trilinear interpolation in the dataset from the first simulation. Six correlations between blood velocity, vessel proximity, and inflowing pO2 were hypothesized. Variable models with correlated parameters were compared to each other and to a nonvariable, DOC-based model to evaluate the differences in simulated oxygen distributions and tumor radiosensitivities for different tumor sizes. Results: For tumors with radii ranging from 5 to 30 mm, the nonvariable DOC model tended to generate normal or log-normal oxygen distributions, with a cut-off at zero. The pO2 distributions simulated with the six variable DOC models were quite different from the distributions generated with the nonvariable DOC model; in the former case the variable models simulated oxygen distributions that were more similar to in vivo results found in the literature. For larger tumors, the oxygen distributions became truncated in the lower end, due to anoxia, but smaller tumors showed undisturbed oxygen distributions. The six different models with correlated parameters generated three classes of oxygen distributions. The first was a hypothetical, negative covariance between vessel proximity and pO2 (VPO-C scenario); the second was a hypothetical positive covariance between vessel proximity and pO2 (VPO+C scenario); and the third was the hypothesis of no correlation between vessel proximity and pO2 (UP scenario). The VPO-C scenario produced a distinctly different oxygen distribution than the two other scenarios. The shape of the VPO-C scenario was similar to that of the nonvariable DOC model, and the larger the tumor, the greater the similarity between the two models. For all simulations, the mean oxygen tension decreased and the hypoxic fraction increased with tumor size. The absorbed dose required for definitive tumor control was highest for the VPO+C scenario, followed by the UP and VPO-C scenarios. Conclusions: A novel MC algorithm was presented which simulated oxygen distributions and radiation response for various biological parameter values. The analysis showed that the VPO-C scenario generated a clearly different oxygen distribution from the VPO+C scenario; the former exhibited a lower hypoxic fraction and higher radiosensitivity. In future studies, this modeling approach might be valuable for qualitative analyses of factors that affect oxygen distribution as well as analyses of specific experimental and clinical situations.
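
    The sampling step described in the Methods can be sketched directly (placeholder grid values, not the paper's simulated DOCs): oxygen tensions tabulated at three settings of each of the three variables are interpolated trilinearly and then sampled with random parameter draws.

    ```python
    # Trilinear interpolation of a 3x3x3 pO2 grid, sampled Monte Carlo style.
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    velocity = np.array([0.5, 1.0, 2.0])       # blood velocity (arbitrary units)
    proximity = np.array([50.0, 100.0, 200.0]) # vessel-to-vessel distance (um)
    p_in = np.array([20.0, 40.0, 60.0])        # inflowing pO2 (mmHg)

    rng = np.random.default_rng(6)
    pO2_grid = rng.uniform(0, 60, (3, 3, 3))   # placeholder for simulated DOCs

    interp = RegularGridInterpolator((velocity, proximity, p_in), pO2_grid)
    draws = np.column_stack([rng.uniform(0.5, 2.0, 10000),
                             rng.uniform(50.0, 200.0, 10000),
                             rng.uniform(20.0, 60.0, 10000)])
    samples = interp(draws)                    # trilinear by default
    print("mean pO2:", samples.mean(), "hypoxic fraction:", (samples < 5).mean())
    ```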

  9. First experience and adaptation of existing tools to ATLAS distributed analysis

    NASA Astrophysics Data System (ADS)

    de La Hoz, S. G.; Ruiz, L. M.; Liko, D.

    2008-02-01

    The ATLAS production system has been successfully used to run production of simulation data at an unprecedented scale in ATLAS. Up to 10000 jobs were processed on about 100 sites in one day. The experience obtained operating the system on several grid flavours was essential to perform a user analysis using grid resources. First tests of the distributed analysis system were then performed. In the preparation phase, data were registered in the LHC file catalog (LFC) and replicated to external sites. For the main test, only a few resources were used. All these tests are only a first step towards the validation of the computing model. The ATLAS management computing board decided to integrate the collaboration efforts in distributed analysis into one project, GANGA. The goal is to test the reconstruction and analysis software in large-scale data production using grid flavours at several sites. GANGA allows trivial switching between running test jobs on a local batch system and running large-scale analyses on the grid; it provides job splitting and merging, and includes automated job monitoring and output retrieval.

  10. Analysis of dose distribution for heavily exposed workers in the first criticality accident in Japan.

    PubMed

    Endo, Akira; Yamaguchi, Yasuhiro

    2003-04-01

    The first criticality accident in Japan occurred in a uranium processing plant in Tokai-mura on September 30, 1999. The accident, which occurred while a large amount of enriched uranyl nitrate solution was being loaded into a tank, led to a chain reaction that continued for 20 h. Two workers who were pouring the uranium solution into the tank at the time were heterogeneously exposed to neutrons and gamma rays produced by nuclear fission. Analysis of dose distributions was essential for the understanding of the clinical course observed in the skin and organs of these workers. We developed a numerical simulation system, which consists of mathematical human models and Monte Carlo radiation transport programs, for analyzing dose distributions in various postures and applied the system to the dose analysis for the two workers. This analysis revealed the extreme heterogeneity of the doses from neutrons and gamma rays in the skin and body, which depended on the positions and postures of the workers. The detailed dose analysis presented here using color maps is indispensable for an understanding of the biological effects of high-dose exposure to a mixed field of neutrons and gamma rays as well as for the development of emergency treatments for victims of radiation exposure. PMID:12643798

  11. A quantitative quantum-chemical analysis tool for the distribution of mechanical force in molecules

    SciTech Connect

    Stauch, Tim; Dreuw, Andreas

    2014-04-07

    The promising field of mechanochemistry suffers from a general lack of understanding of the distribution and propagation of force in a stretched molecule, which limits its applicability up to the present day. In this article, we introduce the JEDI (Judgement of Energy DIstribution) analysis, which is the first quantum chemical method that provides a quantitative understanding of the distribution of mechanical stress energy among all degrees of freedom in a molecule. The method is carried out on the basis of static or dynamic calculations under the influence of an external force and makes use of a Hessian matrix in redundant internal coordinates (bond lengths, bond angles, and dihedral angles), so that all relevant degrees of freedom of a molecule are included and mechanochemical processes can be interpreted in a chemically intuitive way. The JEDI method is characterized by its modest computational effort, with the calculation of the Hessian being the rate-determining step, and delivers, except for the harmonic approximation, exact ab initio results. We apply the JEDI analysis to several example molecules in both static quantum chemical calculations and Born-Oppenheimer Molecular Dynamics simulations in which molecules are subject to an external force, thus studying not only the distribution and the propagation of strain in mechanically deformed systems, but also gaining valuable insights into the mechanochemically induced isomerization of trans-3,4-dimethylcyclobutene to trans,trans-2,4-hexadiene. The JEDI analysis can potentially be used in the discussion of sonochemical reactions, molecular motors, mechanophores, and photoswitches as well as in the development of molecular force probes.
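
    The central bookkeeping admits a tiny illustration (invented Hessian and displacements, not the authors' code): with a Hessian H in redundant internal coordinates and a displacement dq from the relaxed geometry, the harmonic strain energy stored in coordinate i is E_i = 0.5 * dq_i * (H dq)_i, and the E_i sum to the total harmonic energy.

    ```python
    # Harmonic strain-energy partitioning over internal coordinates.
    import numpy as np

    H = np.array([[0.50, 0.02, 0.00],   # toy Hessian in internal coords (a.u.)
                  [0.02, 0.30, 0.01],
                  [0.00, 0.01, 0.10]])
    dq = np.array([0.08, -0.03, 0.12])  # bond/angle/dihedral displacements
    E_i = 0.5 * dq * (H @ dq)           # per-coordinate stress energy
    print("per-coordinate energies:", E_i)
    print("total harmonic energy:", E_i.sum())  # equals 0.5 * dq.T @ H @ dq
    ```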

  12. A quantitative quantum-chemical analysis tool for the distribution of mechanical force in molecules

    NASA Astrophysics Data System (ADS)

    Stauch, Tim; Dreuw, Andreas

    2014-04-01

    The promising field of mechanochemistry suffers from a general lack of understanding of the distribution and propagation of force in a stretched molecule, which limits its applicability up to the present day. In this article, we introduce the JEDI (Judgement of Energy DIstribution) analysis, which is the first quantum chemical method that provides a quantitative understanding of the distribution of mechanical stress energy among all degrees of freedom in a molecule. The method is carried out on the basis of static or dynamic calculations under the influence of an external force and makes use of a Hessian matrix in redundant internal coordinates (bond lengths, bond angles, and dihedral angles), so that all relevant degrees of freedom of a molecule are included and mechanochemical processes can be interpreted in a chemically intuitive way. The JEDI method is characterized by its modest computational effort, with the calculation of the Hessian being the rate-determining step, and delivers, except for the harmonic approximation, exact ab initio results. We apply the JEDI analysis to several example molecules in both static quantum chemical calculations and Born-Oppenheimer Molecular Dynamics simulations in which molecules are subject to an external force, thus studying not only the distribution and the propagation of strain in mechanically deformed systems, but also gaining valuable insights into the mechanochemically induced isomerization of trans-3,4-dimethylcyclobutene to trans,trans-2,4-hexadiene. The JEDI analysis can potentially be used in the discussion of sonochemical reactions, molecular motors, mechanophores, and photoswitches as well as in the development of molecular force probes.

  13. VaR-Efficient Portfolios for a Class of Super- and Sub-Exponentially Decaying Assets Return Distributions

    E-print Network

    Y. Malevergne; D. Sornette

    2003-01-06

    Using a family of modified Weibull distributions, encompassing both sub-exponentials and super-exponentials, to parameterize the marginal distributions of asset returns and their multivariate generalizations with Gaussian copulas, we offer exact formulas for the tails of the distribution $P(S)$ of returns $S$ of a portfolio of arbitrary composition of these assets. We find that the tail of $P(S)$ is also asymptotically a modified Weibull distribution with a characteristic scale $\chi$ that is a function of the asset weights, with different functional forms depending on the super- or sub-exponential behavior of the marginals and on the strength of the dependence between the assets. We then treat in detail the problem of risk minimization using the Value-at-Risk and Expected Shortfall, which are shown to be (asymptotically) equivalent in this framework.
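
    A numerical companion to the asymptotic formulas (illustrative chi and c, and a simplified symmetric density, not the paper's exact parameterization): Value-at-Risk follows from inverting the survival function of a modified-Weibull-type tail.

    ```python
    # Numerical VaR for a stretched-exponential (modified-Weibull-type) tail.
    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import brentq

    chi, c = 0.02, 0.7           # scale and exponent (c < 1: sub-exponential)

    def pdf(x):
        # symmetric density ~ exp(-(|x|/chi)^c), normalized numerically below
        return np.exp(-(abs(x) / chi) ** c)

    Z, _ = quad(pdf, -np.inf, np.inf)

    def loss_tail(q):
        """P(return < -q): probability of a loss worse than q."""
        p, _ = quad(pdf, -np.inf, -q)
        return p / Z

    alpha = 0.01                 # 99% confidence level
    var99 = brentq(lambda q: loss_tail(q) - alpha, 1e-6, 1.0)
    print(f"99% VaR = {var99:.4f}")
    ```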

  14. Laws prohibiting peer distribution of injecting equipment in Australia: A critical analysis of their effects.

    PubMed

    Lancaster, Kari; Seear, Kate; Treloar, Carla

    2015-12-01

    The law is a key site for the production of meanings around the 'problem' of drugs in public discourse. In this article, we critically consider the material-discursive 'effects' of laws prohibiting peer distribution of needles and syringes in Australia. Taking the laws and regulations governing possession and distribution of injecting equipment in one jurisdiction (New South Wales, Australia) as a case study, we use Carol Bacchi's poststructuralist approach to policy analysis to critically consider the assumptions and presuppositions underpinning this legislative and regulatory framework, with a particular focus on examining the discursive, subjectification and lived effects of these laws. We argue that legislative prohibitions on the distribution of injecting equipment except by 'authorised persons' within 'approved programs' constitute people who inject drugs as irresponsible, irrational, and untrustworthy and re-inscribe a familiar stereotype of the drug 'addict'. These constructions of people who inject drugs fundamentally constrain how the provision of injecting equipment may be thought about in policy and practice. We suggest that prohibitions on the distribution of injecting equipment among peers may also have other, material, effects and may be counterproductive to various public health aims and objectives. However, the actions undertaken by some people who inject drugs to distribute equipment to their peers may disrupt and challenge these constructions, through a counter-discourse in which people who inject drugs are constituted as active agents with a vital role to play in blood-borne virus prevention in the community. Such activity continues to bring with it the risk of criminal prosecution, and so it remains a vexed issue. These insights have implications of relevance beyond Australia, particularly for other countries around the world that prohibit peer distribution, but also for other legislative practices with material-discursive effects in association with injecting drug use. PMID:26118796

  15. Global sensitivity analysis using a new approach based on cumulative distribution functions

    NASA Astrophysics Data System (ADS)

    Wagener, T.; Pianosi, F.; Sarrazin, F.

    2014-12-01

    Global Sensitivity Analysis (GSA) has become a key tool for the analysis of environmental models. Objectives for GSA include model simplification to support calibration, diagnostic analysis of model controls and subsequent comparison with underlying perceptual models, or decision-making analysis to understand over what range of uncertainty a specific action is robust. Variance-based approaches are most widely used for GSA of environmental models. However, methods that consider the entire Probability Density Function (PDF) of the model output, rather than its variance only, are preferable in cases where variance is not an adequate proxy of uncertainty, e.g. when the output distribution is highly skewed or multi-modal. Additionally, in contrast to variance-based strategies, they might allow for the mapping of the output on the input space, which is a prerequisite for the use of GSA in robust decision-making under uncertainty. Still, the adoption of density-based methods has been limited so far, possibly because they are relatively more difficult to implement. Here we present a novel GSA method, called PAWN, to efficiently compute density-based sensitivity indices, while also enabling the necessary input-output space mapping. The key idea is to characterize output distributions by their Cumulative Distribution Functions, which are easier to derive than PDFs. We discuss and demonstrate the advantages of PAWN by application to numerical and environmental examples. We expect PAWN to increase the application of density-based approaches and to be a necessary complementary approach to variance-based GSA.
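
    The key idea reduces to a few lines (a sketch with a toy model, not the authors' reference implementation): the PAWN index for an input is a statistic, here the median, of Kolmogorov-Smirnov distances between the unconditional output CDF and CDFs conditioned on fixing that input.

    ```python
    # Density-based sensitivity via CDF (KS) distances, PAWN style.
    import numpy as np

    def model(x1, x2):
        return np.sin(x1) + 5.0 * x2 ** 2   # toy model; x2 dominates

    rng = np.random.default_rng(4)
    N, n_cond = 5000, 15
    x1, x2 = rng.uniform(0, 1, (2, N))
    y_uncond = np.sort(model(x1, x2))

    def ks_distance(sample, reference_sorted):
        """Max vertical distance between two empirical CDFs."""
        s = np.sort(sample)
        grid = np.union1d(s, reference_sorted)
        F1 = np.searchsorted(s, grid, side="right") / len(s)
        F2 = np.searchsorted(reference_sorted, grid, side="right") / len(reference_sorted)
        return np.abs(F1 - F2).max()

    for name, idx in (("x1", 0), ("x2", 1)):
        ks = []
        for xc in rng.uniform(0, 1, n_cond):     # conditioning values
            u = rng.uniform(0, 1, N)
            y = model(xc, u) if idx == 0 else model(u, xc)
            ks.append(ks_distance(y, y_uncond))
        print(f"PAWN index for {name}: {np.median(ks):.3f}")
    ```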

  16. Independent Orbiter Assessment (IOA): Analysis of the electrical power distribution and control subsystem, volume 1

    NASA Technical Reports Server (NTRS)

    Schmeckpeper, K. R.

    1987-01-01

    The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA approach features a top-down analysis of the hardware to determine failure modes, criticality, and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. This report documents the independent analysis results corresponding to the Orbiter Electrical Power Distribution and Control (EPD and C) hardware. The EPD and C hardware performs the functions of distributing, sensing, and controlling 28 volt DC power and of inverting, distributing, sensing, and controlling 117 volt 400 Hz AC power to all Orbiter subsystems from the three fuel cells in the Electrical Power Generation (EPG) subsystem. Each level of hardware was evaluated and analyzed for possible failure modes and effects. Criticality was assigned based upon the severity of the effect for each failure mode. Of the 1671 failure modes analyzed, 9 single failures were determined to result in loss of crew or vehicle. Three single failures unique to intact abort were determined to result in possible loss of the crew or vehicle. A possible loss of mission could result if any of 136 single failures occurred. Six of the criticality 1/1 failures are in two rotary and two pushbutton switches that control External Tank and Solid Rocket Booster separation. The other 6 criticality 1/1 failures are fuses, one each per Aft Power Control Assembly (APCA) 4, 5, and 6 and one each per Forward Power Control Assembly (FPCA) 1, 2, and 3, that supply power to certain Main Propulsion System (MPS) valves and Forward Reaction Control System (RCS) circuits.

  17. Multimedia content analysis and indexing: evaluation of a distributed and scalable architecture

    NASA Astrophysics Data System (ADS)

    Mandviwala, Hasnain; Blackwell, Scott; Weikart, Chris; Van Thong, Jean-Manuel

    2003-11-01

    Multimedia search engines facilitate the retrieval of documents from large media content archives now available via intranets and the Internet. Over the past several years, many research projects have focused on algorithms for analyzing and indexing media content efficiently. However, special system architectures are required to process large amounts of content from real-time feeds or existing archives. Possible solutions include dedicated distributed architectures for analyzing content rapidly and for making it searchable. The system architecture we propose implements such an approach: a highly distributed and reconfigurable batch media content analyzer that can process media streams and static media repositories. Our distributed media analysis application handles media acquisition, content processing, and document indexing. This collection of modules is orchestrated by a task flow management component, exploiting data and pipeline parallelism in the application. A scheduler manages load balancing and prioritizes the different tasks. Workers implement application-specific modules that can be deployed on an arbitrary number of nodes running different operating systems. Each application module is exposed as a web service, implemented with industry-standard interoperable middleware components such as Microsoft ASP.NET and Sun J2EE. Our system architecture is the next generation system for the multimedia indexing application demonstrated by www.speechbot.com. It can process large volumes of audio recordings with minimal support and maintenance, while running on low-cost commodity hardware. The system has been evaluated on a server farm running concurrent content analysis processes.

  18. Statistical analysis of secondary particle distributions in relativistic nucleus-nucleus collisions

    NASA Technical Reports Server (NTRS)

    Mcguire, Stephen C.

    1987-01-01

    The use is described of several statistical techniques to characterize structure in the angular distributions of secondary particles from nucleus-nucleus collisions in the energy range 24 to 61 GeV/nucleon. The objective of this work was to determine whether there are correlations between emitted particle intensity and angle that may be used to support the existence of the quark gluon plasma. The techniques include chi-square null hypothesis tests, the method of discrete Fourier transform analysis, and fluctuation analysis. We have also used the method of composite unit vectors to test for azimuthal asymmetry in a data set of 63 JACEE-3 events. Each method is presented in a manner that provides the reader with some practical detail regarding its application. Of those events with relatively high statistics, Fe approaches 0 at 55 GeV/nucleon was found to possess an azimuthal distribution with a highly non-random structure. No evidence of non-statistical fluctuations was found in the pseudo-rapidity distributions of the events studied. It is seen that the most effective application of these methods relies upon the availability of many events or single events that possess very high multiplicities.
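
    The composite-unit-vector test admits a short sketch (synthetic angles and a Rayleigh-type approximation, not the JACEE analysis): for isotropic emission the resultant of an event's unit vectors stays short, while a long resultant signals non-random azimuthal structure.

    ```python
    # Rayleigh-type test for azimuthal asymmetry from unit-vector resultants.
    import numpy as np

    rng = np.random.default_rng(9)
    phi = rng.uniform(0.0, 2.0 * np.pi, 80)   # azimuthal angles of secondaries
    R = np.hypot(np.cos(phi).sum(), np.sin(phi).sum()) / len(phi)
    p_rayleigh = np.exp(-len(phi) * R ** 2)   # large-N approximation
    print(f"mean resultant length R = {R:.3f}, Rayleigh p = {p_rayleigh:.2f}")
    ```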

  19. Measuring arbitrary diffusion coefficient distributions of nano-objects by Taylor dispersion analysis.

    PubMed

    Cipelletti, Luca; Biron, Jean-Philippe; Martin, Michel; Cottet, Hervé

    2015-08-18

    Taylor dispersion analysis is an absolute and straightforward characterization method that allows determining the diffusion coefficient, or equivalently the hydrodynamic radius, from the angstrom to the submicron size range. In this work, we investigated the use of the Constrained Regularized Linear Inversion approach as a new data processing method to extract the probability density functions of the diffusion coefficient (or hydrodynamic radius) from experimental taylorgrams. This new approach can be applied to arbitrary polydisperse samples and gives access to the whole diffusion coefficient distribution, thereby significantly enhancing the potential of Taylor dispersion analysis. The method was successfully applied to both simulated and real experimental data for solutions of moderately polydisperse polymers and their binary and ternary mixtures. Distributions of diffusion coefficients obtained by this method were favorably compared with those derived from size exclusion chromatography. The influence of the noise of the simulated taylorgrams on the data processing is discussed. Finally, we discuss the ability of the method to correctly resolve bimodal distributions as a function of the relative separation between the two constituent species. PMID:26243023
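
    A sketch of the inversion idea (a simplified peak model with an assumed lumped instrument constant k, not the authors' code): the taylorgram is expanded on a grid of candidate diffusion coefficients, and a Tikhonov-regularized non-negative least-squares problem recovers the distribution.

    ```python
    # Constrained regularized linear inversion of a synthetic taylorgram.
    import numpy as np
    from scipy.optimize import nnls

    t0, k = 100.0, 4.8e9            # peak time (s), lumped constant (1/m^2), assumed
    t = np.linspace(60, 140, 400)
    D_grid = np.logspace(-11, -9, 60)   # candidate diffusion coeffs (m^2/s)

    # basis: co-eluting Gaussians whose width narrows as D grows
    A = np.array([np.exp(-(t - t0) ** 2 * D * k / t0) for D in D_grid]).T

    true = np.zeros_like(D_grid)        # synthetic bimodal sample
    true[15], true[40] = 1.0, 0.6
    signal = A @ true + np.random.default_rng(5).normal(0, 0.01, len(t))

    lam = 0.1                           # regularization strength
    A_aug = np.vstack([A, lam * np.eye(len(D_grid))])
    b_aug = np.concatenate([signal, np.zeros(len(D_grid))])
    dist, _ = nnls(A_aug, b_aug)        # non-negative regularized solution
    print("recovered modes near D =", D_grid[dist > 0.1 * dist.max()])
    ```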

  20. In situ sulfonation of alkyl benzene self-assembled monolayers: product distribution and kinetic analysis.

    PubMed

    Katash, Irit; Luo, Xianglin; Sukenik, Chaim N

    2008-10-01

    The sulfonation of aromatic rings held at the surface of a covalently anchored self-assembled monolayer has been analyzed in terms of the rates and isomer distribution of the sulfonation process. The observed product distributions are similar to those observed in solution, though the data obtained suggest that the reaction rate and the ortho/para product ratio depend on the length of the tether anchoring the aryl ring to the monolayer interface. It was also found that the interface becomes progressively more disordered and the observed reaction rates decrease as the reaction progresses. There is no evidence for a bias in favor of reaction at the more exposed para-position nor is there evidence for an enhanced reaction rate due to the increased disorder and/or improved wetting as the reaction proceeds. This is the first detailed study of electrophilic aromatic substitution at a monolayer interface. It introduces new approaches to the spectroscopic analysis of reactions on self-assembled monolayers and provides a new general approach to the analysis of isomeric product distribution in such a setting. PMID:18785720

  1. Empirical analysis on the connection between power-law distributions and allometries for urban indicators

    NASA Astrophysics Data System (ADS)

    Alves, L. G. A.; Ribeiro, H. V.; Lenzi, E. K.; Mendes, R. S.

    2014-09-01

    We report on the existing connection between power-law distributions and allometries. As first reported in Gomez-Lievano et al. (2012) for the relationship between homicides and population, when these urban indicators present asymptotic power-law distributions, they can also display specific allometries among themselves. Here, we present an extensive characterization of this connection when considering all possible pairs of relationships from twelve urban indicators of Brazilian cities (such as child labor, illiteracy, income, sanitation and unemployment). Our analysis reveals that all our urban indicators are asymptotically distributed as power laws and that the proposed connection also holds for our data when the allometric relationship displays enough correlation. We have also found that not all allometric relationships are independent and that they can be understood as a consequence of the allometric relationship between the urban indicator and the population size. We further show that the residual fluctuations surrounding the allometries are characterized by an almost constant variance and log-normal distributions.

  2. Modeling call holding time distributions for CCS network design and performance analysis

    NASA Astrophysics Data System (ADS)

    Bolotin, Vladimir A.

    1994-04-01

    The message traffic offered to the CCS signalling network depends on and is modulated by the traffic characteristics of the circuit switched calls supported by the CCS network. Most previous analyses of CCS network engineering, performance evaluation and congestion control protocols generally assume an exponential holding time of circuit switched calls. Analysis of actual holding time distributions in conversations, facsimile and voice mail connections revealed that these distributions radically differ from the exponential distribution. Especially significant is the large proportion of very short calls in real traffic in comparison with the exponential distribution model. The diversity of calls (partial dialing, subscriber busy, no answer) and services results in a multi-component call mix, with even larger proportion of short time intervals between message-generating events. Very short call holding times can have a significant impact on the traffic stream presented to the CCS network: for calls with short holding times, the different CCS messages arrive relatively close to each other, and this manifests as burstiness in the CCS traffic stream.

  3. Rural tourism spatial distribution based on multi-criteria decision analysis and GIS

    NASA Astrophysics Data System (ADS)

    Zhang, Hongxian; Yang, Qingsheng

    2008-10-01

    Studying the spatial distribution of rural tourism can provide a scientific basis for decisions on developing rural economies. Traditional approaches to mapping tourism distribution have limitations in quantifying priority locations for tourism development at the level of small units: they can only indicate the overall distribution of tourism and whether a location is suitable for development, whereas the development ranking under different decision objectives should also be considered. This paper presents a way to rank locations for rural tourism development spatially by integrating multi-criteria decision analysis (MCDA) and geographic information systems (GIS). The objective is to develop rural economies first in towns with inconvenient transportation, undeveloped economies, and good tourism resources. Based on this objective, a tourism development priority utility is calculated for each town with MCDA and GIS, and towns with higher utility are selected to develop rural tourism first. The method was successfully applied to rank rural tourism locations in Ningbo City. The results show that MCDA is an effective way to allocate rural tourism spatially under specific decision objectives and that rural tourism can promote economic development.
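
    The scoring step can be illustrated in a few lines (invented criteria values and weights): each town's development priority utility is a weighted sum of its criterion scores, and towns are ranked by that utility.

    ```python
    # Weighted-sum MCDA scoring and ranking of candidate towns.
    import numpy as np

    towns = ["A", "B", "C"]
    # criteria: transport inconvenience, economic need, resource quality (0-1)
    X = np.array([[0.8, 0.6, 0.9],
                  [0.3, 0.4, 0.7],
                  [0.9, 0.9, 0.2]])
    w = np.array([0.3, 0.3, 0.4])        # decision-maker weights, assumed
    utility = X @ w
    for t, u in sorted(zip(towns, utility), key=lambda p: -p[1]):
        print(f"town {t}: priority utility {u:.2f}")
    ```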

  4. A spatial pattern analysis of the halophytic species distribution in an arid coastal environment.

    PubMed

    Badreldin, Nasem; Uria-Diez, J; Mateu, J; Youssef, Ali; Stal, Cornelis; El-Bana, Magdy; Magdy, Ahmed; Goossens, Rudi

    2015-05-01

    Obtaining information about the spatial distribution of desert plants is considered a serious challenge for ecologists and environmental modeling due to the intensive field work and infrastructure required in harsh and remote arid environments. A new method was applied for assessing the spatial distribution of the halophytic species (HS) in an arid coastal environment. This method was based on object-based image analysis of a high-resolution Google Earth satellite image. The integration of the image processing techniques and field work provided accurate information about the spatial distribution of HS. The extracted objects were based on assumptions that explained the plant-pixel relationship. Three different types of digital image processing techniques were implemented and validated to obtain an accurate HS spatial distribution. A total of 2703 individuals of the HS community were found in the case study, and approximately 82% were located above an elevation of 2 m. The micro-topography exhibited a significant negative relationship with pH and EC (r = -0.79 and -0.81, respectively).

  5. Methods and apparatuses for information analysis on shared and distributed computing systems

    DOEpatents

    Bohn, Shawn J [Richland, WA]; Krishnan, Manoj Kumar [Richland, WA]; Cowley, Wendy E [Richland, WA]; Nieplocha, Jarek [Richland, WA]

    2011-02-22

    Apparatuses and computer-implemented methods for analyzing, on shared and distributed computing systems, information comprising one or more documents are disclosed according to some aspects. In one embodiment, information analysis can comprise distributing one or more distinct sets of documents among each of a plurality of processes, wherein each process performs operations on a distinct set of documents substantially in parallel with other processes. Operations by each process can further comprise computing term statistics for terms contained in each distinct set of documents, thereby generating a local set of term statistics for each distinct set of documents. Still further, operations by each process can comprise contributing the local sets of term statistics to a global set of term statistics, and participating in generating a major term set from an assigned portion of a global vocabulary.
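
    A minimal sketch of the general pattern claimed (plain Python multiprocessing, not the patent's shared/distributed machinery): each process computes local term statistics for its distinct document set, the local sets are contributed to a global set, and a major term set is extracted.

    ```python
    # Local term statistics per process, merged into global statistics.
    from collections import Counter
    from multiprocessing import Pool

    def local_term_stats(docs):
        """Each worker counts terms for its distinct set of documents."""
        c = Counter()
        for doc in docs:
            c.update(doc.lower().split())
        return c

    if __name__ == "__main__":
        corpus = [["the cat sat", "the dog ran"],
                  ["a cat and a dog", "the end"]]      # two distinct doc sets
        with Pool(2) as pool:
            local_sets = pool.map(local_term_stats, corpus)
        global_stats = sum(local_sets, Counter())      # contribute to global set
        major_terms = [t for t, n in global_stats.most_common(3)]
        print(major_terms)
    ```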

  6. Exposure models for the prior distribution in Bayesian decision analysis for occupational hygiene decision making.

    PubMed

    Lee, Eun Gyung; Kim, Seung Won; Feigley, Charles E; Harper, Martin

    2013-01-01

    This study introduces two semi-quantitative methods, Structured Subjective Assessment (SSA) and Control of Substances Hazardous to Health (COSHH) Essentials, in conjunction with two-dimensional Monte Carlo simulations for determining prior probabilities. Prior distribution using expert judgment was included for comparison. Practical applications of the proposed methods were demonstrated using personal exposure measurements of isoamyl acetate in an electronics manufacturing facility and of isopropanol in a printing shop. Applicability of these methods in real workplaces was discussed based on the advantages and disadvantages of each method. Although these methods could not be completely independent of expert judgments, this study demonstrated a methodological improvement in the estimation of the prior distribution for the Bayesian decision analysis tool. The proposed methods provide a logical basis for the decision process by considering determinants of worker exposure. PMID:23252451
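
    A loose sketch of a two-dimensional Monte Carlo prior (invented parameter bands, not the SSA or COSHH Essentials rules): an outer loop samples uncertain lognormal exposure parameters, an inner loop samples day-to-day variability, and the pooled draws yield a prior probability of exceeding a limit.

    ```python
    # Two-dimensional Monte Carlo: outer loop = uncertainty, inner = variability.
    import numpy as np

    rng = np.random.default_rng(8)
    oel = 100.0                          # occupational exposure limit (ppm), assumed
    n_outer, n_inner = 200, 500
    exceed = []
    for _ in range(n_outer):             # parameter uncertainty
        gm = rng.uniform(5.0, 40.0)      # geometric mean band (ppm), assumed
        gsd = rng.uniform(1.5, 3.0)      # geometric std. dev. band, assumed
        x = np.exp(rng.normal(np.log(gm), np.log(gsd), n_inner))  # variability
        exceed.append((x > oel).mean())
    print("prior P(exposure > OEL): mean = %.3f" % np.mean(exceed))
    ```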

  7. Exposure Models for the Prior Distribution in Bayesian Decision Analysis for Occupational Hygiene Decision Making

    PubMed Central

    Lee, Eun Gyung; Kim, Seung Won; Feigley, Charles E.; Harper, Martin

    2015-01-01

    This study introduces two semi-quantitative methods, Structured Subjective Assessment (SSA) and Control of Substances Hazardous to Health (COSHH) Essentials, in conjunction with two-dimensional Monte Carlo simulations for determining prior probabilities. Prior distribution using expert judgment was included for comparison. Practical applications of the proposed methods were demonstrated using personal exposure measurements of isoamyl acetate in an electronics manufacturing facility and of isopropanol in a printing shop. Applicability of these methods in real workplaces was discussed based on the advantages and disadvantages of each method. Although these methods could not be completely independent of expert judgments, this study demonstrated a methodological improvement in the estimation of the prior distribution for the Bayesian decision analysis tool. The proposed methods provide a logical basis for the decision process by considering determinants of worker exposure. PMID:23252451

  8. EXERGY ANALYSIS OF THE CRYOGENIC HELIUM DISTRIBUTION SYSTEM FOR THE LARGE HADRON COLLIDER (LHC)

    SciTech Connect

    Claudet, S.; Lebrun, Ph.; Tavian, L.; Wagner, U.

    2010-04-09

    The Large Hadron Collider (LHC) at CERN features the world's largest helium cryogenic system, spreading over the 26.7 km circumference of the superconducting accelerator. With a total equivalent capacity of 145 kW at 4.5 K, including 18 kW at 1.8 K, the LHC refrigerators produce an unprecedented exergetic load, which must be distributed efficiently to the magnets in the tunnel over the 3.3 km length of each of the eight independent sectors of the machine. We recall the main features of the LHC cryogenic helium distribution system at different temperature levels and present its exergy analysis, enabling second-principle efficiency to be qualified and the main remaining sources of irreversibility to be identified.
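
    The exergy figures invite a back-of-envelope check (the ambient temperature and the split of the quoted equivalent capacity are assumptions): the minimum, Carnot, input power to absorb a heat load Q at temperature T against ambient T0 is Q(T0/T - 1).

    ```python
    # Ideal (Carnot) input power for isothermal refrigeration loads.
    T0 = 300.0                             # ambient temperature (K), assumed
    loads = {4.5: 127e3, 1.8: 18e3}        # loads (W); split of 145 kW assumed
    for T, Q in loads.items():
        W_min = Q * (T0 / T - 1.0)         # minimum (exergetic) input power
        print(f"{Q/1e3:.0f} kW at {T} K -> minimum input {W_min/1e6:.1f} MW")
    ```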

  9. Preliminary analysis of the span-distributed-load concept for cargo aircraft design

    NASA Technical Reports Server (NTRS)

    Whitehead, A. H., Jr.

    1975-01-01

    A simplified computer analysis of the span-distributed-load airplane (in which payload is placed within the wing structure) has shown that the span-distributed-load concept has high potential for application to future air cargo transport design. Significant increases in payload fraction over current wide-bodied freighters are shown for gross weights in excess of 0.5 Gg (1,000,000 lb). A cruise-matching calculation shows that the trend toward higher aspect ratio improves overall efficiency; that is, less thrust and fuel are required. The optimal aspect ratio probably is not determined by structural limitations. Terminal-area constraints and increasing design-payload density, however, tend to limit aspect ratio.

  10. A distributed analysis and visualization system for model and observational data

    NASA Technical Reports Server (NTRS)

    Wilhelmson, Robert; Koch, Steven

    1992-01-01

    The objective of this proposal is to develop an integrated and distributed analysis and display software system which can be applied to all areas of the Earth System Science to study numerical model and earth observational data from storm to global scale. This system will be designed to be easy to use, portable, flexible and easily extensible and to adhere to current and emerging standards whenever possible. It will provide an environment for visualization of the massive amounts of data generated from satellites and other observational field measurements and from model simulations during or after their execution. Two- and three-dimensional animation will also be provided. This system will be based on a widely used software package from NASA called GEMPAK and prototype software for three dimensional interactive displays built at NCSA. The underlying foundation of the system will be a set of software libraries which can be distributed across a UNIX based supercomputer and workstations.

  11. A distributed analysis and visualization system for model and observational data

    NASA Technical Reports Server (NTRS)

    Wilhelmson, Robert; Koch, Steven

    1993-01-01

    The objective of this proposal is to develop an integrated and distributed analysis and display software system which can be applied to all areas of the Earth System Science to study numerical model and earth observational data from storm to global scale. This system will be designed to be easy to use, portable, flexible and easily extensible and designed to adhere to current and emerging standards whenever possible. It will provide an environment for visualization of the massive amounts of data generated from satellites and other observational field measurements and from model simulations during or after their execution. Two- and three-dimensional animation will also be provided. This system will be based on a widely used software package from NASA called GEMPAK and prototype software for three-dimensional interactive displays built at NCSA. The underlying foundation of the system will be a set of software libraries which can be distributed across a UNIX based supercomputer and workstations.

  12. Simulation and analysis of an intermediate frequency (IF) distribution system with applications for Space Station

    NASA Technical Reports Server (NTRS)

    Costello, Thomas A.; Brandt, C. Maite

    1989-01-01

    Simulation and analysis results are described for a wideband fiber optic intermediate frequency distribution channel for a frequency division multiple access (FDMA) system in which antenna equipment is remotely located from the signal processing equipment. The fiber optic distribution channel accommodates multiple signals received from a single antenna with differing power levels. The performance parameters addressed are intermodulation degradation, laser noise, and adjacent channel interference, as they impact the overall system design. Simulation results showed that the laser diode modulation level can be allowed to reach 100 percent without considerable degradation. The laser noise must be controlled so as to provide a noise floor of less than -90 dBW/Hz. The fiber optic link increases the degradation due to power imbalance yet diminishes the effects of the transmit amplifier nonlinearity. Overall, optimal operating conditions can be found that yield a degradation level of about 0.1 dB caused by the fiber optic link.

  13. Analysis and modeling of information flow and distributed expertise in space-related operations.

    PubMed

    Caldwell, Barrett S

    2005-01-01

    Evolving space operations requirements and mission planning for long-duration expeditions require detailed examinations and evaluations of information flow dynamics, knowledge-sharing processes, and information technology use in distributed expert networks. This paper describes the work conducted with flight controllers in the Mission Control Center (MCC) of NASA's Johnson Space Center. This MCC work describes the behavior of experts in a distributed supervisory coordination framework, which extends supervisory control/command and control models of human task performance. Findings from this work are helping to develop analysis techniques, information architectures, and system simulation capabilities for knowledge sharing in an expert community. These findings are being applied to improve knowledge-sharing processes applied to a research program in advanced life support for long-duration space flight. Additional simulation work is being developed to create interoperating modules of information flow and novice/expert behavior patterns. PMID:15835058

  14. Exergy Analysis of the Cryogenic Helium Distribution System for the Large Hadron Collider (LHC)

    E-print Network

    Claudet, S; Tavian, L; Wagner, U; 10.1063/1.3422294

    2010-01-01

    The Large Hadron Collider (LHC) at CERN features the world’s largest helium cryogenic system, spreading over the 26.7 km circumference of the superconducting accelerator. With a total equivalent capacity of 145 kW at 4.5 K including 18 kW at 1.8 K, the LHC refrigerators produce an unprecedented exergetic load, which must be distributed efficiently to the magnets in the tunnel over the 3.3 km length of each of the eight independent sectors of the machine. We recall the main features of the LHC cryogenic helium distribution system at different temperature levels and present its exergy analysis, enabling the second-law efficiency to be quantified and the main remaining sources of irreversibility to be identified.
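
    As a back-of-the-envelope companion to the exergy figures above, the ideal (Carnot) work needed per watt of heat extracted at cryogenic temperature already shows why refrigeration at 4.5 K and 1.8 K is so costly. A minimal sketch, assuming a 300 K ambient reference temperature (not stated in the abstract):

      T0 = 300.0  # assumed ambient reference temperature [K]

      def carnot_work_per_watt(t_cold):
          # Minimum reversible compressor work per watt of heat lifted at t_cold
          return (T0 - t_cold) / t_cold

      print(carnot_work_per_watt(4.5))   # ~65.7 W of input work per W at 4.5 K
      print(carnot_work_per_watt(1.8))   # ~165.7 W of input work per W at 1.8 K
      # Ideal input power for the quoted 145 kW equivalent load at 4.5 K:
      print(145e3 * carnot_work_per_watt(4.5) / 1e6, "MW")  # about 9.5 MW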

  15. Nanostructural analysis of water distribution in hydrated multicomponent gels using thermal analysis and NMR relaxometry.

    PubMed

    Codoni, Doroty; Belton, Peter; Qi, Sheng

    2015-06-01

    Highly complex, multicomponent gels and water-containing soft materials have varied applications in biomedical, pharmaceutical, and food sciences, but the characterization of these nanostructured materials is extremely challenging. The aim of this study was to use stearoyl macrogol-32 glycerides (Gelucire 50/13) gels containing seven different species of glycerides, PEG, and PEG-esters, as model, complex, multicomponent gels, to investigate the effect of water content on the micro- and nanoarchitecture of the gel interior. Thermal analysis and NMR relaxometry were used to probe the thermal and diffusional behavior of water molecules within the gel network. For the highly concentrated gels (low water content), the water activity was significantly lowered due to entrapment in the dense gel network. For the gels with intermediate water content, multiple populations of water molecules with different thermal responses and diffusion behavior were detected, indicating the presence of water in different microenvironments. This correlated with the network architecture of the freeze-dried gels observed using SEM. For the gels with high water content, increased quantities of water with diffusion characteristics similar to free water could be detected, indicating the presence of large water pockets in these gels. The results of this study provide new insights into the structure of Gelucire gels, which have not been reported before because of the complexity of the material. They also demonstrate that the combination of thermal analysis and NMR relaxometry offers insights into the structure of soft materials not available by the use of each technique alone. However, we also note that in some instances the results of these measurements are overinterpreted and we suggest limitations of the methods that must be considered when using them. PMID:25945869

  16. Poster — Thur Eve — 74: Distributed, asynchronous, reactive dosimetric and outcomes analysis using DICOMautomaton

    SciTech Connect

    Clark, Haley; Wu, Jonn; Moiseenko, Vitali; Thomas, Steven

    2014-08-15

    Many have speculated about the future of computational technology in clinical radiation oncology. It has been advocated that the next generation of computational infrastructure will improve on the current generation by incorporating richer aspects of automation, more heavily and seamlessly featuring distributed and parallel computation, and providing more flexibility toward aggregate data analysis. In this report we describe how a recently created and already operational analysis framework, DICOMautomaton, incorporates these aspects. DICOMautomaton supports a variety of use cases but is especially suited for dosimetric outcomes correlation analysis, investigation and comparison of radiotherapy treatment efficacy, and dose-volume computation. We describe: how it overcomes computational bottlenecks by distributing workload across a network of machines; how modern, asynchronous computational techniques are used to reduce blocking and avoid unnecessary computation; and how issues of out-of-date data are addressed using reactive programming techniques and data dependency chains. We describe the internal architecture of the software and give a detailed demonstration of how DICOMautomaton could be used to search for correlations between dosimetric and outcomes data.
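
    As a generic illustration of two of the ideas named above (this is not DICOMautomaton's actual API), the sketch below distributes independent per-patient computations across worker processes and consumes results as they complete, so one slow item does not block the rest:

      from concurrent.futures import ProcessPoolExecutor, as_completed

      def dose_volume_metric(patient_id):
          # Placeholder for an expensive per-patient dose-volume computation.
          acc = sum(i * i for i in range(10**5))
          return patient_id, (acc + patient_id) % 97

      if __name__ == "__main__":
          patients = list(range(20))
          with ProcessPoolExecutor(max_workers=4) as pool:
              futures = [pool.submit(dose_volume_metric, p) for p in patients]
              for fut in as_completed(futures):   # handle results as they arrive
                  pid, metric = fut.result()
                  print(f"patient {pid}: metric {metric}")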

  17. Bayesian uncertainty analysis in distributed hydrologic modeling: A case study in the Thur River basin (Switzerland)

    NASA Astrophysics Data System (ADS)

    Yang, Jing; Reichert, Peter; Abbaspour, Karim C.

    2007-10-01

    Calibration and uncertainty analysis in hydrologic modeling are affected by measurement errors in input and response and errors in model structure. Recently, extending similar approaches in discrete time, a continuous time autoregressive error model was proposed for statistical inference and uncertainty analysis in hydrologic modeling. The major advantages over the discrete time formulation are the use of a continuous time error model for describing continuous processes, the possibility of accounting for seasonal variations of parameters in the error model, the easier treatment of missing data or omitted outliers, and the opportunity for continuous time predictions. The model was developed for the Chaohe Basin in China and had some features specific to this semiarid climatic region (in particular, the seasonal variation of parameters in the error model in response to seasonal variation in precipitation). This paper tests and extends this approach with an application to the Thur River basin in Switzerland, which is subject to completely different climatic conditions. This application corroborates the general applicability of the approach but also demonstrates the necessity of accounting for the heavy tails in the distributions of residuals and innovations. This is done by replacing the normal distribution of the innovations by a Student t distribution, the degrees of freedom of which are adapted to best represent the shape of the empirical distribution of the innovations. We conclude that with this extension, the continuous time autoregressive error model is applicable and flexible for hydrologic modeling under different climatic conditions. The major remaining conceptual disadvantage is that this class of approaches does not lead to a separate identification of model input and model structural errors. The major practical disadvantage is the high computational demand characteristic of all Markov chain Monte Carlo techniques.
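
    A minimal discrete-time sketch of the key extension described above: an AR(1) error model whose normal innovations are replaced by heavy-tailed Student t innovations (the paper itself works in continuous time; coefficients here are illustrative):

      import numpy as np

      rng = np.random.default_rng(0)
      n, phi, sigma, nu = 1000, 0.8, 1.0, 4.0  # length, AR coefficient, scale, dof (assumed)

      def ar1_errors(n, phi, sigma, nu=None):
          # Simulate AR(1) residual errors; nu=None gives normal innovations.
          e = np.zeros(n)
          for t in range(1, n):
              innov = rng.standard_t(nu) if nu else rng.standard_normal()
              e[t] = phi * e[t - 1] + sigma * innov
          return e

      gauss = ar1_errors(n, phi, sigma)          # normal innovations
      heavy = ar1_errors(n, phi, sigma, nu=nu)   # Student t innovations (heavier tails)
      print(np.abs(gauss).max(), np.abs(heavy).max())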

  18. Particle size distribution models, their characteristics and fitting capability

    NASA Astrophysics Data System (ADS)

    Bayat, Hossein; Rastgo, Mostafa; Mansouri Zadeh, Moharram; Vereecken, Harry

    2015-10-01

    Many attempts have been made to characterize particle size distribution (PSD) curves using different mathematical models, which are primarily used as a basis for estimating soil hydraulic properties. The principal step in using soil PSD to predict soil hydraulic properties is determining an accurate and continuous curve for PSD. So far, the characteristics of the PSD models, their fitting accuracy, and the effects of their parameters on the shape and position of PSD curves have not been investigated. In this study, all developed PSD models, their characteristics, the behavior of their parameters, and their fitting capability on the UNSODA database soil samples were investigated. Results showed that the beerkan estimation of soil transfer (BEST), two and three parameter Weibull, Rosin and Rammler (1 and 2), unimodal and bimodal Fredlund, and van Genuchten models were flexible over the entire range of soil PSD. Correspondingly, the BEST, two and three parameter Weibull, Rosin and Rammler (1 and 2), hyperbolic and offset renormalized log-normal models possessed a high fitting capability over the entire range of PSD. The few parameters of the BEST, Rosin and Rammler (1 and 2), and two parameter Weibull models provide ease of use in soil physics and mechanics research; thus, they seemingly fit the PSD curve with acceptable accuracy. Although the fractal models have a physical and mathematical basis, they lack the flexibility needed to describe the PSD curve adequately. Different aspects of the PSD models should be considered in selecting a model to describe a soil PSD.
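
    As an illustration of the kind of model fitting discussed above, a two-parameter Weibull curve can be fitted to a cumulative PSD in a few lines; the sieve diameters and passing fractions below are made-up data, not UNSODA samples:

      import numpy as np
      from scipy.optimize import curve_fit

      def weibull_cdf(d, a, b):
          # Two-parameter Weibull: mass fraction finer than diameter d.
          return 1.0 - np.exp(-(d / a) ** b)

      d = np.array([2, 20, 50, 100, 250, 500, 1000, 2000.0])          # diameter [um]
      f = np.array([0.08, 0.22, 0.35, 0.48, 0.66, 0.80, 0.92, 0.99])  # fraction finer

      (a, b), _ = curve_fit(weibull_cdf, d, f, p0=(100.0, 0.5))
      print(f"scale a = {a:.1f} um, shape b = {b:.2f}")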

  19. Statistical Distribution of Inflation on Lava Flows: Analysis of Flow Surfaces on Earth and Mars

    NASA Technical Reports Server (NTRS)

    Glazel, L. S.; Anderson, S. W.; Stofan, E. R.; Baloga, S.

    2003-01-01

    The surface morphology of a lava flow results from processes that take place during the emplacement of the flow. Certain types of features, such as tumuli, lava rises and lava rise pits, are indicators of flow inflation or endogenous growth of a lava flow. Tumuli in particular have been identified as possible indicators of tube location, indicating that their distribution on the surface of a lava flow is a function of the internal pathways of lava present during flow emplacement. However, the distribution of tumuli on lava flows has not been examined in a statistically thorough manner. In order to more rigorously examine the distribution of tumuli on a lava flow, we examined a discrete flow lobe with numerous lava rises and tumuli on the 1969-1974 Mauna Ulu flow at Kilauea, Hawaii. The lobe is located in the distal portion of the flow below Holei Pali, which is characterized by hummocky pahoehoe flows emplaced from tubes. We chose this flow due to its discrete nature allowing complete mapping of surface morphologies, well-defined boundaries, well-constrained emplacement parameters, and known flow thicknesses. In addition, tube locations for this Mauna Ulu flow were mapped by Holcomb (1976) during flow emplacement. We also examine the distribution of tumuli on the distal portion of the hummocky Thrainsskjoldur flow field provided by Rossi and Gudmundsson (1996). Analysis of the Mauna Ulu and Thrainsskjoldur flow lobes and the availability of high-resolution MOC images motivated us to look for possible tumuli-dominated flow lobes on the surface of Mars. We identified a MOC image of a lava flow south of Elysium Mons with features morphologically similar to tumuli. The flow is characterized by raised elliptical to circular mounds, some with axial cracks, that are similar in size to the tumuli measured on Earth. One potential avenue of determining whether they are tumuli is to look at the spatial distribution to see if any patterns similar to those of tumuli-dominated terrestrial flows can be identified. Since tumuli form by the injection of lava beneath a crust, the distribution of tumuli on a flow should represent the distribution of thermally preferred pathways beneath the surface of the crust. That distribution of thermally preferred pathways may be a function of the evolution of a basaltic lava flow. As a longer-lived flow evolves, initially broad thermally preferred pathways would evolve to narrower, more well-defined tube-like pathways. The final flow morphology clearly preserves the growth of the flow over time, with inflation features indicating pathways that were not necessarily contemporaneously active. Here, we test using statistical analysis whether this final flow morphology produces distinct distributions that can be used to readily determine the distribution of thermally preferred pathways beneath the surface of the crust.

  20. Statistical analysis of factors affecting landslide distribution in the new Madrid seismic zone, Tennessee and Kentucky

    USGS Publications Warehouse

    Jibson, R.W.; Keefer, D.K.

    1989-01-01

    More than 220 large landslides along the bluffs bordering the Mississippi alluvial plain between Cairo, Ill., and Memphis, Tenn., are analyzed by discriminant analysis and multiple linear regression to determine the relative effects of slope height and steepness, stratigraphic variation, slope aspect, and proximity to the hypocenters of the 1811-12 New Madrid, Mo., earthquakes on the distribution of these landslides. Three types of landslides are analyzed: (1) old, coherent slumps and block slides, which have eroded and revegetated features and no active analogs in the area; (2) old earth flows, which are also eroded and revegetated; and (3) young rotational slumps, which are present only along near-river bluffs, and which are the only young, active landslides in the area. Discriminant analysis shows that only one characteristic differs significantly between bluffs with and without young rotational slumps: failed bluffs tend to have sand and clay at their base, which may render them more susceptible to fluvial erosion. Bluffs having old coherent slides are significantly higher, steeper, and closer to the hypocenters of the 1811-12 earthquakes than bluffs without these slides. Bluffs having old earth flows are likewise higher and closer to the earthquake hypocenters. Multiple regression analysis indicates that the distribution of young rotational slumps is affected most strongly by slope steepness: about one-third of the variation in the distribution is explained by variations in slope steepness. The distribution of old coherent slides and earth flows is affected most strongly by slope height, but the proximity to the hypocenters of the 1811-12 earthquakes also significantly affects the distribution. The results of the statistical analyses indicate that the only recently active landsliding in the area is along actively eroding river banks, where rotational slumps formed as bluffs are undercut by the river. The analyses further indicate that the old coherent slides and earth flows in the area are spatially related to the 1811-12 earthquake hypocenters and were thus probably triggered by those earthquakes. These results are consistent with findings of other recent investigations of landslides in the area that presented field, historical, and analytical evidence to demonstrate that old landslides in the area formed during the 1811-12 New Madrid earthquakes. Results of the multiple linear regression can also be used to approximate the relative susceptibility of the bluffs in the study area to seismically induced landsliding. © 1989.

  1. Predictive analysis of thermal distribution and damage in thermotherapy on biological tissue

    NASA Astrophysics Data System (ADS)

    Fanjul-Vélez, Félix; Arce-Diego, José Luis

    2007-05-01

    The use of optical techniques is increasing the possibilities and success of medical praxis in certain cases, either in tissue characterization or treatment. Photodynamic therapy (PDT) and low intensity laser treatment (LILT) are two examples of the latter. Another very interesting implementation is thermotherapy, which consists of controlling the temperature increase in a pathological biological tissue. With this method it is possible to bring about an improvement in specific diseases, but a prior analysis of the treatment is needed in order for the patient not to suffer any collateral damage, an essential point given the safety margins required in medical procedures. In this work, a predictive analysis of the thermal distribution in a biological tissue irradiated by an optical source is presented. Optical propagation is based on a Radiation Transport Theory (RTT) model solved via a numerical Monte Carlo method, in a multi-layered tissue. Data obtained are included in a bio-heat equation that models heat transference, taking into account conduction, convection, radiation, blood perfusion and vaporization depending on the specific problem. The spatial-temporal differential bio-heat equation is solved via a numerical finite difference approach. Experimental temperature distributions on animal tissue irradiated by laser radiation are shown. From the thermal distribution in tissue, thermal damage is studied, based on an Arrhenius analysis, as a way of predicting harmful effects. The complete model can be used for concrete treatment proposals, as a way of predicting treatment effects and consequently deciding which optical source parameters are appropriate for the specific disease, mainly wavelength and optical power, with reasonable safety margins in the process.
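
    A minimal sketch of the Arrhenius damage assessment mentioned above: given a simulated temperature history T(t), the damage index is the time integral of an Arrhenius rate. The rate parameters below are values commonly quoted for skin denaturation, used here only as placeholders, not values from the paper:

      import numpy as np

      R = 8.314     # gas constant [J/(mol K)]
      A = 3.1e98    # frequency factor [1/s] (placeholder)
      Ea = 6.28e5   # activation energy [J/mol] (placeholder)

      def arrhenius_damage(t, T):
          # Damage index Omega = integral of A*exp(-Ea/(R*T(t))) dt; Omega >= 1 ~ necrosis.
          return np.trapz(A * np.exp(-Ea / (R * T)), t)

      t = np.linspace(0.0, 10.0, 1001)                 # time [s]
      T = 310.0 + 30.0 * np.exp(-((t - 5) / 2) ** 2)   # synthetic heating pulse [K]
      print(f"Omega = {arrhenius_damage(t, T):.3f}")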

  2. Flow distribution analysis on the cooling tube network of ITER thermal shield

    SciTech Connect

    Nam, Kwanwoo; Chung, Wooho; Noh, Chang Hyun; Kang, Dong Kwon; Kang, Kyoung-O; Ahn, Hee Jae; Lee, Hyeon Gon

    2014-01-29

    A thermal shield (TS) is to be installed between the vacuum vessel or the cryostat and the magnets in the ITER tokamak to reduce the thermal radiation load on the magnets operating at 4.2K. The TS is cooled by pressurized helium gas at an inlet temperature of 80K. The cooling tubes are welded onto the TS panel surfaces, and the resulting flow network of TS cooling tubes is complex. The flow rate in each panel should be matched to the thermal design value for effective radiation shielding. This paper presents a one-dimensional analysis of the flow distribution in the cooling tube network of the ITER TS. The hydraulic cooling tube network is modeled by an electrical analogy. Only the cooling tube on the TS surface and its connecting pipe from the manifold are considered in the analysis model. Considering the friction factor and the local losses in the cooling tube, the hydraulic resistance is expressed as a linear function of mass flow rate. Sub-circuits in the TS are analyzed separately because each circuit is controlled by its own control valve independently. It is found that flow rates in some panels are insufficient compared with the design values. In order to improve the flow distribution, two kinds of design modifications are proposed. The first is to connect the tubes of adjacent panels; this increases the resistance of the tube on the panel where the flow rate is excessive. The other suggestion is to install an orifice at the exit of each tube routing where the flow rate is to be reduced. The analysis of these design suggestions shows that the flow maldistribution is improved significantly.
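
    A minimal sketch of the electrical analogy for a set of parallel panel branches between supply and return manifolds: with linearized resistances (pressure drop = R * m), the flow divides in inverse proportion to resistance. The resistance values below are made up:

      import numpy as np

      R = np.array([2.0, 3.5, 1.2, 4.0])  # linearized resistance of each panel branch
      m_total = 1.0                       # total mass flow from the manifold [kg/s]

      G = 1.0 / R                         # branch conductances
      m = m_total * G / G.sum()           # branch flows (same dp across all branches)
      dp = m_total / G.sum()              # common manifold-to-manifold pressure drop

      print("branch flows:", m, " dp:", dp)
      # An orifice at a branch exit raises that branch's R, diverting flow to the others.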

  3. Knowledge-based assistance for science visualization and analysis using large distributed databases

    NASA Technical Reports Server (NTRS)

    Handley, Thomas H., Jr.; Jacobson, Allan S.; Doyle, Richard J.; Collins, Donald J.

    1993-01-01

    Within this decade, the growth in complexity of exploratory data analysis and the sheer volume of space data require new and innovative approaches to support science investigators in achieving their research objectives. To date, there have been numerous efforts addressing the individual issues involved in inter-disciplinary, multi-instrument investigations. However, while successful in small scale, these efforts have not proven to be open and scalable. This proposal addresses four areas of significant need: scientific visualization and analysis; science data management; interactions in a distributed, heterogeneous environment; and knowledge-based assistance for these functions. The fundamental innovation embedded with this proposal is the integration of three automation technologies, namely, knowledge-based expert systems, science visualization and science data management. This integration is based on a concept called the DataHub. With the DataHub concept, NASA will be able to apply a more complete solution to all nodes of a distributed system. Both computation nodes and interactive nodes will be able to effectively and efficiently use the data services (access, retrieval, update, etc.) with a distributed, interdisciplinary information system in a uniform and standard way. This will allow the science investigators to concentrate on their scientific endeavors, rather than to involve themselves in the intricate technical details of the systems and tools required to accomplish their work. Thus, science investigators need not be programmers. The emphasis will be on the definition and prototyping of system elements with sufficient detail to enable data analysis and interpretation leading to publishable scientific results. In addition, the proposed work includes all the required end-to-end components and interfaces to demonstrate the completed concept.

  4. Knowledge-based assistance for science visualization and analysis using large distributed databases

    NASA Technical Reports Server (NTRS)

    Handley, Thomas H., Jr.; Jacobson, Allan S.; Doyle, Richard J.; Collins, Donald J.

    1992-01-01

    Within this decade, the growth in complexity of exploratory data analysis and the sheer volume of space data require new and innovative approaches to support science investigators in achieving their research objectives. To date, there have been numerous efforts addressing the individual issues involved in inter-disciplinary, multi-instrument investigations. However, while successful in small scale, these efforts have not proven to be open and scalable. This proposal addresses four areas of significant need: scientific visualization and analysis; science data management; interactions in a distributed, heterogeneous environment; and knowledge-based assistance for these functions. The fundamental innovation embedded within this proposal is the integration of three automation technologies, namely, knowledge-based expert systems, science visualization and science data management. This integration is based on the concept called the Data Hub. With the Data Hub concept, NASA will be able to apply a more complete solution to all nodes of a distributed system. Both computation nodes and interactive nodes will be able to effectively and efficiently use the data services (access, retrieval, update, etc.) with a distributed, interdisciplinary information system in a uniform and standard way. This will allow the science investigators to concentrate on their scientific endeavors, rather than to involve themselves in the intricate technical details of the systems and tools required to accomplish their work. Thus, science investigators need not be programmers. The emphasis will be on the definition and prototyping of system elements with sufficient detail to enable data analysis and interpretation leading to publishable scientific results. In addition, the proposed work includes all the required end-to-end components and interfaces to demonstrate the completed concept.

  5. CRAB3: Establishing a new generation of services for distributed analysis at CMS

    NASA Astrophysics Data System (ADS)

    Cinquilli, M.; Spiga, D.; Grandi, C.; Hernàndez, J. M.; Konstantinov, P.; Mascheroni, M.; Riahi, H.; Vaandering, E.

    2012-12-01

    In CMS Computing the highest priorities for analysis tools are the improvement of the end users’ ability to produce and publish reliable samples and analysis results as well as a transition to a sustainable development and operations model. To achieve these goals CMS decided to incorporate analysis processing into the same framework as data and simulation processing. This strategy foresees that all workload tools (Tier0, Tier1, production, analysis) share a common core with long term maintainability as well as the standardization of the operator interfaces. The re-engineered analysis workload manager, called CRAB3, makes use of newer technologies, such as RESTful web services and NoSQL databases, aiming to increase the scalability and reliability of the system. As opposed to CRAB2, in CRAB3 all work is centrally injected and managed in a global queue. A pool of agents, which can be geographically distributed, consumes work from the central services serving the user tasks. The new architecture of CRAB substantially changes the deployment model and operations activities. In this paper we present the implementation of CRAB3, emphasizing how the new architecture improves the workflow automation and simplifies maintainability. In particular, we will highlight the impact of the new design on daily operations.

  6. Characterizing the distribution of an endangered salmonid using environmental DNA analysis

    USGS Publications Warehouse

    Laramie, Matthew B.; Pilliod, David S.; Goldberg, Caren S.

    2015-01-01

    Determining species distributions accurately is crucial to developing conservation and management strategies for imperiled species, but a challenging task for small populations. We evaluated the efficacy of environmental DNA (eDNA) analysis for improving detection and thus potentially refining the known distribution of Chinook salmon (Oncorhynchus tshawytscha) in the Methow and Okanogan Subbasins of the Upper Columbia River, which span the border between Washington, USA and British Columbia, Canada. We developed an assay to target a 90 base pair sequence of Chinook DNA and used quantitative polymerase chain reaction (qPCR) to quantify the amount of Chinook eDNA in triplicate 1-L water samples collected at 48 stream locations in June and again in August 2012. The overall probability of detecting Chinook with our eDNA method in areas within the known distribution was 0.77 (±0.05 SE). Detection probability was lower in June (0.62, ±0.08 SE) during high flows and at the beginning of spring Chinook migration than during base flows in August (0.93, ±0.04 SE). In the Methow subbasin, mean eDNA concentration was higher in August compared to June, especially in smaller tributaries, probably resulting from the arrival of spring Chinook adults, reduced discharge, or both. Chinook eDNA concentrations did not appear to change in the Okanogan subbasin from June to August. Contrary to our expectations about downstream eDNA accumulation, Chinook eDNA did not decrease in concentration in upstream reaches (0–120 km). Further examination of factors influencing spatial distribution of eDNA in lotic systems may allow for greater inference of local population densities along stream networks or watersheds. These results demonstrate the potential effectiveness of eDNA detection methods for determining landscape-level distribution of anadromous salmonids in large river systems.

  7. Analysis and improvement of data-set level file distribution in Disk Pool Manager

    NASA Astrophysics Data System (ADS)

    Cadellin Skipsey, Samuel; Purdie, Stuart; Britton, David; Mitchell, Mark; Bhimji, Wahid; Smith, David

    2014-06-01

    Of the three most widely used implementations of the WLCG Storage Element specification, Disk Pool Manager[1, 2] (DPM) has the simplest implementation of file placement balancing (StoRM doesn't attempt this, leaving it up to the underlying filesystem, which can be very sophisticated in itself). DPM uses a round-robin algorithm (with optional filesystem weighting) for placing files across filesystems and servers. This does a reasonable job of evenly distributing files across the storage array provided to it. However, it does not offer any guarantees of the evenness of distribution of that subset of files associated with a given "dataset" (which often maps onto a "directory" in the DPM namespace (DPNS)). It is useful to consider a concept of "balance", where an optimally balanced set of files indicates that the files are distributed evenly across all of the pool nodes. The best-case performance of the round-robin algorithm is to maintain balance; it has no mechanism to improve balance. In the past year or more, larger DPM sites have noticed load spikes on individual disk servers, and suspected that these were exacerbated by excesses of files from popular datasets on those servers. We present here a software tool which analyses file distribution for all datasets in a DPM SE, providing a measure of the poorness of file location in this context. Further, the tool provides a list of file movement actions which will improve dataset-level file distribution, and can action those file movements itself. We present results of such an analysis on the UKI-SCOTGRID-GLASGOW Production DPM.
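
    An illustrative balance measure in the spirit of the one described above (not necessarily the tool's actual metric): the coefficient of variation of per-server file counts for one dataset, which is zero for a perfectly even spread and grows as files pile onto a few servers:

      import statistics

      def dataset_imbalance(files_per_server):
          # Coefficient of variation of file counts across pool nodes.
          counts = list(files_per_server.values())
          mean = statistics.mean(counts)
          return statistics.pstdev(counts) / mean if mean else 0.0

      placement = {"disk01": 40, "disk02": 38, "disk03": 5, "disk04": 41}
      print(f"imbalance (CV): {dataset_imbalance(placement):.2f}")
      # A rebalancing pass would move files from the fullest servers toward
      # disk03 until the CV falls below some threshold.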

  8. A Model Based on Environmental Factors for Diameter Distribution in Black Wattle in Brazil

    PubMed Central

    Sanquetta, Carlos Roberto; Behling, Alexandre; Dalla Corte, Ana Paula; Péllico Netto, Sylvio; Rodrigues, Aurelio Lourenço; Simon, Augusto Arlindo

    2014-01-01

    This article discusses the dynamics of a diameter distribution in stands of black wattle throughout its growth cycle using the Weibull probability density function. Moreover, the parameters of this distribution were related to environmental variables from meteorological data and the surface soil horizon, with the aim of finding a model for the diameter distribution whose coefficients were related to the environmental variables. We found that the diameter distribution of the stand changes only slightly over time and that the estimators of the Weibull function are correlated with various environmental variables, with accumulated rainfall foremost among them. Thus, a model was obtained in which the estimators of the Weibull function are dependent on rainfall. Such a function can have important applications, such as in simulating growth potential in regions where historical growth data is lacking, as well as the behavior of the stand under different environmental conditions. The model can also be used to project growth in diameter, based on the rainfall affecting the forest over a certain time period. PMID:24932909
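
    A small sketch of the idea of a rainfall-dependent diameter distribution; the linear link between rainfall and the Weibull scale parameter, and all coefficients, are assumptions for illustration, not the paper's fitted model:

      import numpy as np
      from scipy.stats import weibull_min

      def diameter_pdf(d, rainfall_mm, c=2.5, b0=5.0, b1=0.004):
          # Weibull pdf whose scale grows linearly with accumulated rainfall (assumed link).
          scale = b0 + b1 * rainfall_mm
          return weibull_min.pdf(d, c, scale=scale)

      d = np.linspace(0.1, 30, 300)        # diameter [cm]
      for rain in (500.0, 1500.0):         # accumulated rainfall [mm]
          p = diameter_pdf(d, rain)
          print(f"rain {rain:5.0f} mm -> modal diameter ~ {d[np.argmax(p)]:.1f} cm")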

  9. Development of a Database to Support a Multi-Scale Analysis of the Distribution of Westslope Cutthroat Trout

    E-print Network

    Development of a Database to Support a Multi-Scale Analysis of the Distribution of Westslope Cutthroat Trout. A petition was filed with the U.S. Fish and Wildlife Service to list WCT as threatened under the Endangered Species Act.

  10. Analysis of Cold Air Distribution System in an Office Building by the Numerical Simulation Method 

    E-print Network

    Jian, Y.; Li, D.; Xu, H.; Ma, X.

    2006-01-01

    The cold air distribution system studied uses HILT-type ceiling diffusers (single-side outlets and 8 double-side outlets), each with an air volume of 340 m3/h, together with 4 return air inlets. (Proceedings of ICEBO2006, Shenzhen, China: HVAC Technologies for Energy Efficiency, Vol. IV-2-2.)

  11. Study of Jet Substructure in the ATLAS Experiment using Distributed Analysis within Spanish Tier-2 Infrastructures

    E-print Network

    Oliver García, Elena; González de la Hoz, Santiago

    The first study of jet substructure on LHC data was performed by the ATLAS experiment. The jet algorithm chosen was AntiKt with R-parameter = 1.0. This study was important for checking the behavior of the substructure variables, which allow boosted objects to be distinguished from background. The computing part was of great importance in this study because of the work done within the ATLAS Spanish Tier-2 federation on understanding its performance and operations. This allowed access to hundreds of millions of events, using Grid technologies for distributed analysis, in order to obtain the results. This activity also helped in other physics studies of the ATLAS experiment.

  12. Continuous Variable Quantum Key Distribution: Finite-Key Analysis of Composable Security against Coherent Attacks

    E-print Network

    Furrer, Fabian; Berta, Mario; Scholz, Volkher B; Tomamichel, Marco; Werner, Reinhard F

    2011-01-01

    We provide a security analysis for continuous variable quantum key distribution protocols based on the transmission of squeezed vacuum states measured via homodyne detection. We employ a version of the entropic uncertainty relation for smooth entropies to give a lower bound on the number of secret bits which can be extracted from a finite number of runs of the protocol. This bound is valid under general coherent attacks, and gives rise to keys which are composably secure. For comparison, we also give a lower bound valid under the assumption of collective attacks. For both scenarios, we find positive key rates using experimental parameters reachable today.

  13. Continuous variable quantum key distribution: finite-key analysis of composable security against coherent attacks.

    PubMed

    Furrer, F; Franz, T; Berta, M; Leverrier, A; Scholz, V B; Tomamichel, M; Werner, R F

    2012-09-01

    We provide a security analysis for continuous variable quantum key distribution protocols based on the transmission of two-mode squeezed vacuum states measured via homodyne detection. We employ a version of the entropic uncertainty relation for smooth entropies to give a lower bound on the number of secret bits which can be extracted from a finite number of runs of the protocol. This bound is valid under general coherent attacks, and gives rise to keys which are composably secure. For comparison, we also give a lower bound valid under the assumption of collective attacks. For both scenarios, we find positive key rates using experimental parameters reachable today. PMID:23005270

  14. Single-molecule detection technologies in miniaturized high-throughput screening: fluorescence intensity distribution analysis.

    PubMed

    Haupts, Ulrich; Rüdiger, Martin; Ashman, Stephen; Turconi, Sandra; Bingham, Ryan; Wharton, Charlotte; Hutchinson, Jonathan; Carey, Charlotte; Moore, Keith J; Pope, Andrew J

    2003-02-01

    Single-molecule detection technologies are becoming a powerful readout format to support ultra-high-throughput screening. These methods are based on the analysis of fluorescence intensity fluctuations detected from a small confocal volume element. The fluctuating signal contains information about the mass and brightness of the different species in a mixture. The authors demonstrate a number of applications of fluorescence intensity distribution analysis (FIDA), which discriminates molecules by their specific brightness. Examples for assays based on brightness changes induced by quenching/dequenching of fluorescence, fluorescence energy transfer, and multiple-binding stoichiometry are given for important drug targets such as kinases and proteases. FIDA also provides a powerful method to extract correct biological data in the presence of compound fluorescence. PMID:12854995

  15. System analysis for the Huntsville Operation Support Center distributed computer system

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.

    1986-01-01

    A simulation model of the NASA Huntsville Operations Support Center (HOSC) was developed. This simulation model emulates the HYPERchannel Local Area Network (LAN) that ties together the various computers of HOSC. The HOSC system is a large installation of mainframe computers such as the Perkin Elmer 3200 series and the DEC VAX series. A series of six simulation exercises of the HOSC model is described, using data sets provided by NASA. An analytical analysis of the Ethernet LAN and of the video terminal (VT) distribution system is presented. An interface analysis of the smart-terminal network model, which allows the data flow requirements due to VTs on the Ethernet LAN to be estimated, is also presented.

  16. Pore size distribution analysis of activated carbons prepared from coconut shell using methane adsorption data

    NASA Astrophysics Data System (ADS)

    Ahmadpour, A.; Okhovat, A.; Darabi Mahboub, M. J.

    2013-06-01

    The application of Stoeckli theory to determine the pore size distribution (PSD) of activated carbons using high pressure methane adsorption data is explored. Coconut shell was used as a raw material for the preparation of 16 different activated carbon samples. Four samples with higher methane adsorption were selected, and nitrogen adsorption on these adsorbents was also investigated. Some differences are found between the PSD obtained from the analysis of nitrogen adsorption isotherms and the PSD resulting from the same analysis using methane adsorption data. It is suggested that these differences may arise from specific interactions between nitrogen molecules and activated carbon surfaces; therefore caution is required in the interpretation of PSD obtained from nitrogen isotherm data.

  17. New limits on intrinsic charm in the nucleon from global analysis of parton distributions

    DOE PAGESBeta

    Jimenez-Delgado, P.; Hobbs, T. J.; Londergan, J. T.; Melnitchouk, W.

    2015-02-27

    We present a new global QCD analysis of parton distribution functions, allowing for possible intrinsic charm (IC) contributions in the nucleon inspired by light-front models. The analysis makes use of the full range of available high-energy scattering data for Q^2 ≥ 1 GeV^2 and W^2 ≥ 3.5 GeV^2, including fixed-target proton and deuteron deep-inelastic cross sections at lower energies that were excluded in previous global analyses. The expanded data set places more stringent constraints on the momentum carried by IC, with ⟨x⟩_IC at most 0.5% (corresponding to an IC normalization of ~1%) at the 4σ level for Δχ^2 = 1. We also assess the impact of older EMC measurements of the charm structure function F_2^c at large x, which favor a nonzero IC, but with very large χ^2 values.

  18. Sampling the Probability Distribution of Type Ia Supernova Lightcurve Parameters in Cosmological Analysis

    NASA Astrophysics Data System (ADS)

    Dai, Mi; Wang, Yun

    2016-01-01

    In order to obtain robust cosmological constraints from Type Ia supernova (SN Ia) data, we have applied Markov Chain Monte Carlo (MCMC) to SN Ia lightcurve fitting. We develop a method for sampling the resultant probability density distributions (pdfs) of the SN Ia lightcurve parameters in the MCMC likelihood analysis to constrain cosmological parameters. Applying this method to the Joint Lightcurve Analysis (JLA) data set of SNe Ia, we find that sampling the SN Ia lightcurve parameter pdfs leads to cosmological parameters closer to those of a flat Universe with a cosmological constant, compared to the usual practice of using only the best fit values of the SN Ia lightcurve parameters. Our method will be useful in the use of SN Ia data for precision cosmology.
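
    A minimal Metropolis sketch of the general approach: draw samples from a lightcurve parameter's pdf rather than keeping only its best-fit value. The Gaussian target below is a stand-in for a real lightcurve-parameter likelihood:

      import numpy as np

      rng = np.random.default_rng(1)

      def log_pdf(x, mu=0.12, sigma=0.03):
          # Stand-in pdf for, e.g., a stretch parameter (illustrative values).
          return -0.5 * ((x - mu) / sigma) ** 2

      def metropolis(log_p, x0, step, n):
          x, out = x0, []
          for _ in range(n):
              prop = x + step * rng.standard_normal()
              if np.log(rng.random()) < log_p(prop) - log_p(x):
                  x = prop              # accept the proposed move
              out.append(x)
          return np.array(out)

      samples = metropolis(log_pdf, 0.1, 0.02, 20000)
      print(samples.mean(), samples.std())   # pdf summary fed into the cosmology fit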

  19. Analysis of a 7 year tropospheric ozone vertical distribution at the Observatoire de Haute Provence

    NASA Technical Reports Server (NTRS)

    Beekmann, Matthias; Ancellet, Gerard; Megie, Gerard

    1994-01-01

    A seven year (1984-90) climatology of tropospheric vertical ozone soundings, performed by electrochemical sondes at the OHP (44 deg N, 6 deg E, 700 m ASL) in Southern France, is presented. Its seasonal variation shows a broad spring/summer maximum in the troposphere. The contributions of photochemical ozone production and of transport from the stratosphere to this seasonal variation are studied by a correlative analysis of ozone concentrations and meteorological variables, with emphasis on potential vorticity. This analysis shows the impact of dynamical and photochemical processes on the spatial and temporal ozone variability. In particular, a positive correlation (r = 0.40, significance greater than 99.9 percent) of ozone with potential vorticity is observed in the middle troposphere, reflecting the impact of stratosphere-troposphere exchange on the vertical ozone distribution.

  20. Modeling Background Distributions for the SuperCDMS Soudan High Threshold Analysis

    NASA Astrophysics Data System (ADS)

    Cornell, Brett; SuperCDMS Collaboration

    2015-04-01

    The SuperCDMS Soudan experiment searches for interactions of WIMP dark matter particles with germanium detectors, using ionization yield and fiducialization to reject backgrounds. An exposure of 3000 kg-day has been accumulated with 9 kg of new-generation SuperCDMS iZIP detectors, which use a sophisticated ionization electrode and phonon sensor structure to report information about the three-dimensional position of each event. We report on the development of a model for background distributions that exploits this new position information as well as improved simulations of detector physics. This model will be used to optimize nuclear recoil acceptance and background rejection for a cut-based analysis, and it may be further developed for an eventual maximum likelihood analysis.

  1. Shape distribution features for point cloud analysis - a geometric histogram approach on multiple scales

    NASA Astrophysics Data System (ADS)

    Blomley, R.; Weinmann, M.; Leitloff, J.; Jutzi, B.

    2014-08-01

    Due to ever more efficient and accurate laser scanning technologies, the analysis of 3D point clouds has become an important task in modern photogrammetry and remote sensing. To exploit the full potential of such data for structural analysis and object detection, reliable geometric features are of crucial importance. Since multiscale approaches have proved very successful for image-based applications, efforts are currently being made to apply similar approaches to 3D point clouds. In this paper we analyse common geometric covariance features, pinpointing some severe limitations regarding their performance on varying scales. Instead, we propose a different feature type based on shape distributions known from object recognition. These novel features show a very reliable performance over a wide scale range, and their classification results outperform covariance features in all tested cases.
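
    A minimal sketch of a shape-distribution feature in the spirit of those proposed above: the D2 histogram of pairwise point distances within a neighborhood, computed here for a random toy point cloud:

      import numpy as np
      from scipy.spatial.distance import pdist

      rng = np.random.default_rng(2)
      points = rng.normal(size=(200, 3))     # toy 3D neighborhood

      def d2_histogram(pts, bins=16):
          # Normalized histogram of pairwise distances (a D2 shape distribution).
          d = pdist(pts)
          hist, _ = np.histogram(d, bins=bins, range=(0.0, d.max()))
          return hist / hist.sum()

      feature = d2_histogram(points)
      print(feature.round(3))                # a 16-bin feature vector for a classifier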

  2. Quantitative high-pressure pair distribution function analysis of nanocrystalline gold

    NASA Astrophysics Data System (ADS)

    Martin, C. David; Antao, Sytle M.; Chupas, Peter J.; Lee, Peter L.; Shastri, Sarvjit D.; Parise, John B.

    2005-02-01

    Using a diamond anvil cell with high-energy monochromatic x rays, we have studied the total scattering of nanocrystalline gold to 20 Å^-1 at pressures up to 10 GPa in a hydrostatic alcohol pressure medium. Through direct Fourier transformation of the structure function [S(Q)], pair distribution functions (PDFs) [G(r)] are calculated without Kaplow-type iterative corrections. Quantitative high-pressure PDF (QHP-PDF) analysis is performed via full-profile least-squares modeling and confirmed through comparison with Rietveld analysis of Bragg diffraction. The quality of the high pressure PDFs obtained demonstrates the integrity of our technique and suggests the feasibility of future QHP-PDF studies of liquids, disordered solids, and materials at phase transition under pressure.
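
    A minimal sketch of the direct Fourier transform from the structure function S(Q) to the reduced pair distribution function G(r); the S(Q) below is a synthetic placeholder rather than measured data:

      import numpy as np

      Q = np.linspace(0.3, 20.0, 2000)                    # [1/angstrom]
      S = 1.0 + np.exp(-0.02 * Q**2) * np.sin(2.88 * Q)   # synthetic S(Q)

      def pdf_G(r, Q, S):
          # G(r) = (2/pi) * integral of Q [S(Q) - 1] sin(Q r) dQ
          integrand = Q * (S - 1.0) * np.sin(np.outer(r, Q))
          return (2.0 / np.pi) * np.trapz(integrand, Q, axis=1)

      r = np.linspace(0.5, 10.0, 500)                     # [angstrom]
      G = pdf_G(r, Q, S)
      print(r[np.argmax(G)])    # rough first-neighbor distance estimate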

  3. Volcanic Hazard Assessment Through Analysis of Physical Characteristics and Distribution of Volcanic Projectiles

    NASA Astrophysics Data System (ADS)

    Alatorre-Ibarguengoitia, M. A.; Kueppers, U.; Delgado-Granados, H.; Dingwell, D. B.

    2006-12-01

    Dealing with hazards at active volcanoes requires detailed knowledge of eruptive history and a good understanding of pre- and syn-eruptive processes. Despite improvements in monitoring systems, such an understanding cannot be based on direct field observations alone. Experimental and theoretical modelling are two essential components of modern volcanic hazard analysis. Volcanic ballistic projectiles (VBP) are a major hazard related to volcanic explosions. They may affect people, ecology, infrastructure and aircraft. In order to determine the potential areas of VBP fall, maximum ranges must be estimated under different explosive scenarios. Each scenario is defined by the kinetic energy, calculated from the impact location and dimensions, and by the physical characteristics of the projectiles (e.g. density, drag coefficient). The kinetic energy derives from the excess pressure in the expanding volatile phase driving the explosion. The design and development of "fragmentation bomb" technology has provided volcanology with the capability of controlled and systematic analysis of the fragmentation behavior of magma upon rapid decompression. Study of samples from several volcanoes has demonstrated a close relationship between open porosity and the overpressure required for complete fragmentation of samples (fragmentation threshold). Fractal analysis of the experimentally generated pyroclasts shows that grain-size distribution is linearly dependent on open porosity and PEF (potential energy for fragmentation). Combining these two approaches (kinetic energy from the distribution of VBP, and potential energy (PEF) from scaled experiments and investigation of experimental and natural pyroclasts), together with seismic monitoring, provides the potential for a significantly more refined hazard assessment of active, explosive volcanoes.
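
    A minimal sketch of the kind of range estimate involved in defining VBP hazard zones: integrate 2D projectile motion with quadratic drag for a given launch kinetic energy. All parameter values are illustrative, not from the study:

      import numpy as np

      g, rho = 9.81, 1.0          # gravity [m/s^2]; air density [kg/m^3] (assumed)
      m, d, Cd = 1.0, 0.10, 0.6   # block mass [kg], diameter [m], drag coeff. (assumed)
      A = np.pi * (d / 2) ** 2
      k = 0.5 * rho * Cd * A / m  # drag deceleration per (speed^2)

      def ballistic_range(E_kin, angle_deg, dt=1e-3):
          # Forward-Euler integration until the projectile returns to launch height.
          v = np.sqrt(2 * E_kin / m)
          th = np.radians(angle_deg)
          x, y, vx, vy = 0.0, 0.0, v * np.cos(th), v * np.sin(th)
          while y >= 0.0:
              speed = np.hypot(vx, vy)
              vx -= k * speed * vx * dt
              vy -= (g + k * speed * vy) * dt
              x += vx * dt
              y += vy * dt
          return x

      print(f"range at 45 deg, 5 kJ: {ballistic_range(5e3, 45):.0f} m")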

  4. Determination and analysis of distribution coefficients of 137Cs in soils from Biscay (Spain).

    PubMed

    Elejalde, C; Herranz, M; Legarda, F; Romero, F

    2000-10-01

    The distribution coefficient of (137)Cs has been determined in 58 soils from 12 sampling points in Biscay by treating 10 g of soil with 25 ml of an aqueous solution with an activity of 1765 Bq of the radionuclide, shaking for 64 h and measuring the residual activity with a suitable detector. Soils were characterised by sampling depth, particle size analysis and the usual chemical parameters. Soils were thereafter subjected to successive extractions to determine the chemical speciation of (137)Cs: the exchangeable fraction, the fractions associated with carbonates, iron oxides and organic matter, and, by difference, the amount taken up by the rest of the soil constituents. For this part of the research, 16 soils from four points were selected from the previous samples. The greatest mean percentages of (137)Cs sorption corresponded to the residual (69.93%), exchangeable (13.17%) and organic matter (12.54%) fractions. This paper also includes the calculation of partial distribution coefficients for the chemical species, as well as relations of the distribution coefficients both among themselves and with soil parameters. PMID:15092865
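
    From the batch quantities given above (10 g of soil, 25 ml of solution, 1765 Bq added), the distribution coefficient follows directly; the residual solution activity used below is a made-up example value:

      def distribution_coefficient(a_added_bq, a_solution_bq, v_ml, m_g):
          # Kd [ml/g] = (activity sorbed per g of soil) / (activity per ml of solution)
          sorbed = a_added_bq - a_solution_bq
          return (sorbed / m_g) / (a_solution_bq / v_ml)

      kd = distribution_coefficient(a_added_bq=1765.0,
                                    a_solution_bq=120.0,  # assumed residual activity
                                    v_ml=25.0, m_g=10.0)
      print(f"Kd = {kd:.0f} ml/g")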

  5. Spatial distribution of liver cancer incidence in shenqiu county, henan province, china: a spatial analysis.

    PubMed

    Sun, Jie; Huang, Hui; Xiao, Ge Xin; Feng, Guo Shuang; Yu, Shi Cheng; Xue, Yu Tang; Wan, Xia; Yang, Gong Huan; Sun, Xin

    2015-03-01

    Liver cancer is a common and leading cause of cancer death in China. We used cancer registry data collected from 2009 to 2011 to describe the spatial distribution of liver cancer incidence at village level in Shenqiu county, Henan province, China. Spatial autocorrelation analysis was employed to detect significant departures from a random spatial distribution of liver cancer incidence. Spatial scan statistics were used to detect and evaluate clusters of liver cancer cases. Spatial clusters were mapped using ArcGIS 10.0 software in order to identify their physical location at village level. High-incidence cluster areas were observed in 26 villages of 7 towns, and low-incidence cluster areas in 16 villages of 4 towns. The high-incidence cluster areas were distributed along the Sha Ying River, the largest tributary of the Huai River. The role of water pollution in Shenqiu county, where the high cluster was found, deserves further investigation. PMID:25800446
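
    A minimal sketch of global Moran's I, the statistic underlying the spatial autocorrelation analysis described above; the incidence values and binary adjacency weights are made-up village data:

      import numpy as np

      x = np.array([4.0, 5.0, 6.0, 1.0, 0.5, 1.5])   # incidence per village (toy)
      W = np.array([[0, 1, 1, 0, 0, 0],              # W[i, j] = 1 if villages adjacent
                    [1, 0, 1, 0, 0, 0],
                    [1, 1, 0, 1, 0, 0],
                    [0, 0, 1, 0, 1, 1],
                    [0, 0, 0, 1, 0, 1],
                    [0, 0, 0, 1, 1, 0]], float)

      def morans_i(x, W):
          z = x - x.mean()
          return len(x) / W.sum() * (z @ W @ z) / (z @ z)

      print(f"Moran's I = {morans_i(x, W):.2f}")   # > 0 indicates spatial clustering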

  6. Fluctuation Analysis: The Probability Distribution of the Number of Mutants under Different Conditions

    PubMed Central

    Stewart, F. M.; Gordon, D. M.; Levin, B. R.

    1990-01-01

    In the 47 years since fluctuation analysis was introduced by Luria and Delbruck, it has been widely used to calculate mutation rates. Up to now, in spite of the importance of such calculations, the probability distribution of the number of mutants that will appear in a fluctuation experiment has been known only under the restrictive, and possibly unrealistic, assumptions: (1) that the mutation rate is exactly proportional to the growth rate and (2) that all mutants grow at a rate that is a constant multiple of the growth rate of the original cells. In this paper, we approach the distribution of the number of mutants from a new point of view that will enable researchers to calculate the distribution to be expected using assumptions that they believe to be closer to biological reality. The new idea is to classify mutations according to the number of observable mutants that derive from the mutation when the culture is selectively plated. This approach also simplifies the calculations in situations where two, or many, kinds of mutation may occur in a single culture. PMID:2307353
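
    A minimal simulation sketch of the setting analyzed above, under the classical assumptions: mutations arise at random during growth and mutant lineages double each generation, so cultures with early mutations yield "jackpots". Parameters are illustrative:

      import numpy as np

      rng = np.random.default_rng(3)

      def mutants_in_culture(n0=1, generations=20, mu=1e-6):
          # Final mutant count in one culture grown by discrete doublings.
          wild, mutants = n0, 0
          for _ in range(generations):
              new_mut = rng.poisson(mu * wild)  # mutations among this generation's divisions
              wild = 2 * wild - new_mut         # one daughter of each mutating division is mutant
              mutants = 2 * mutants + new_mut   # existing mutants double, plus new ones
          return mutants

      counts = np.array([mutants_in_culture() for _ in range(1000)])
      # Heavy-tailed: the mean far exceeds the median because of jackpot cultures.
      print(counts.mean(), np.median(counts), counts.max())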

  7. Comparative analysis of heterochromatin distribution in wild and cultivated Abelmoschus species based on fluorescent staining methods.

    PubMed

    Merita, Keisham; Kattukunnel, Joseph John; Yadav, Shrirang Ramchandra; Bhat, Kangila Venkataramana; Rao, Satyawada Rama

    2015-03-01

    A comparative analysis of fluorochrome-binding patterns in nine taxa of Abelmoschus has shown that the type, amount and distribution pattern of heterochromatin are characteristic for each taxon. The fluorescent chromosome-binding sites obtained by chromomycin A3 (CMA) and 4',6-diamidino-2-phenylindole (DAPI) staining in all nine species showed constitutive heterochromatin of the CMA(+), DAPI(+) and CMA(+)/DAPI(+) types. A large amount of heterozygosity was observed with regard to the heterochromatin distribution pattern in all the taxa studied. CMA(+)-binding sites are comparatively fewer than DAPI(+)-binding sites, consistent with AT-rich regions being more abundant than GC-rich regions in all nine Abelmoschus taxa analysed. These CMA(+) and DAPI(+)-binding sites apparently increase with the chromosome numbers of the different species. This pattern of heterochromatin heterogeneity seems to be a general characteristic feature. Therefore, the differential distribution of GC- and AT-rich sequences might have played an important role in the diversification of the genus Abelmoschus. Polyploidy is an important factor in the evolution of Abelmoschus and the sole reason for the range in chromosome numbers in this genus. It may be noted that, though often, but not always, an increase in DNA is caused by an increase in the amount of heterochromatin, i.e. an increase in non-coding sections indicating restructuring of the heterochromatin. Thus, cumulative small and direct numerical changes might have played a role in the speciation of Abelmoschus. PMID:25300590

  8. Design and analysis of multi-electrodes distribution for shaping of electrostatic stretched membrane mirror

    NASA Astrophysics Data System (ADS)

    Jiang, Long-jun; Wei, Xiao-ru; Yang, Bin; Tang, Min-xue

    2013-12-01

    One of the key problems to be solved in developing membrane mirrors is the surface shape control of the mirror. The pressure distribution and the calculation method for the shape of a membrane paraboloidal mirror are studied in this paper. According to the Karman equation of circular membrane theory, the analytic expression for the continuous radial pressure distribution needed to form a membrane paraboloidal mirror with a given aperture and F number is derived under a prescribed radial displacement condition. Taking as examples membrane paraboloidal mirrors with diameters of 200 mm, 300 mm and 500 mm and an F number of 10, the number and the radial widths of the sub-electrodes are optimized. It is found that by using a multi-electrode distribution in which the radial width of the central sub-electrode is 1.6 times that of the remaining concentric annular electrodes of equal radial width, the deviation between the membrane mirror shape and the standard paraboloid can be effectively reduced. The surface shape of a membrane mirror of 300 mm diameter and F/10, formed by electrostatic stretching through a multi-electrode plate with and without insulation gaps between sub-electrodes, is simulated using finite element analysis. This may provide a theoretical basis for practical control of membrane mirror shape.

  9. Distributed Data-Flow for In-Situ Visualization and Analysis at Petascale

    SciTech Connect

    Laney, D E; Childs, H R

    2009-03-13

    We conducted a feasibility study to research modifications to data-flow architectures to enable data-flow to be distributed across multiple machines automatically. Distributed data-flow is a crucial technology to ensure that tools like the VisIt visualization application can provide in-situ data analysis and post-processing for simulations on peta-scale machines. We modified a version of VisIt to study load-balancing trade-offs between light-weight kernel compute environments and dedicated post-processing cluster nodes. Our research focused on memory overheads for contouring operations, which involves variable amounts of generated geometry on each node and computation of normal vectors for all generated vertices. Each compute node independently decided whether to send data to dedicated post-processing nodes at each stage of pipeline execution, depending on available memory. We instrumented the code to allow user settable available memory amounts to test extremely low-overhead compute environments. We performed initial testing of this prototype distributed streaming framework, but did not have time to perform scaling studies at and beyond 1000 compute-nodes.

  10. Analysis of Stomata Distribution Patterns for Quantification of the Foliar Plasticity of Tradescantia Zebrina

    NASA Astrophysics Data System (ADS)

    Batista Florindo, Joao; Landini, Gabriel; Almeida Filho, Humberto; Martinez Bruno, Odemir

    2015-09-01

    Here we propose a method for the analysis of the stomata distribution patterns on the surface of plant leaves. We also investigate how light exposure during growth can affect stomata distribution and the plasticity of leaves. Understanding foliar plasticity (the ability of leaves to modify their structural organization to adapt to changing environmental resources) is a fundamental problem in agricultural and environmental sciences. Most published work on the quantification of stomata has concentrated on descriptions of their density per unit of leaf area; however, density alone does not provide a complete description of the problem and leaves several questions unanswered (e.g. whether the stomata patterns change across various areas of the leaf, or how the patterns change under varying observational scales). We used two approaches here, namely multiscale fractal dimension and complex networks, as a means of describing the complexity of these distributions. In the experiments, we used 18 samples from the plant Tradescantia zebrina grown under three different conditions (4 hours of artificial light each day, 24 hours of artificial light each day, and sunlight) for a total of 69 days. The network descriptors were capable of correctly discriminating the different conditions in 88% of cases, while the fractal descriptors discriminated 83% of the samples. This is a significant improvement over the correct classification rates achieved when using only stomata density (56% of the samples).
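
    A minimal box-counting sketch of the fractal-dimension descriptor used above, applied to a binary map of stomata positions (random toy points here):

      import numpy as np

      rng = np.random.default_rng(4)
      img = np.zeros((256, 256), bool)
      ij = rng.integers(0, 256, size=(400, 2))
      img[ij[:, 0], ij[:, 1]] = True                 # toy stomata positions

      def box_counting_dimension(img, sizes=(2, 4, 8, 16, 32)):
          counts = []
          for s in sizes:
              h, w = img.shape[0] // s, img.shape[1] // s
              blocks = img[: h * s, : w * s].reshape(h, s, w, s)
              counts.append(blocks.any(axis=(1, 3)).sum())  # occupied boxes at scale s
          slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
          return -slope                                     # dimension estimate

      print(f"box-counting dimension: {box_counting_dimension(img):.2f}")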

  11. Analysis of failed distribution transformers and application of low voltage arresters

    SciTech Connect

    Goedde, G.L.; Plummer, C.W.

    1995-06-01

    Cooper Power Systems was contracted by EPRI in 1987 to perform a feasibility study on reducing distribution transformer basic insulation level (BIL). The project goal was to examine the economics of removing insulation from distribution transformers. The study was done with the help of Wisconsin Power and Light (WP&L). At the time, the protective margins of distribution arresters were adequate but not excessive. Most of the economics could be realized in the costs of accessories, such as replacing the 125kV BIL transformer bushings with the high-volume 95kV BIL bushing. After the project's completion, discussions with WP&L led to transformer tear-downs to determine why their transformers were failing. Their method of protection was to use a cross-arm-mounted primary arrester in parallel with a fuse cutout and the transformer. Prior to the WP&L tear-downs, the utility industry was aware of secondary surge failures at Florida Power Corporation, where more than half of the non-interlaced transformer failures were due to low-side current surges. The analysis of WP&L failed transformers indicated a high failure rate due to secondary surges in both interlaced and non-interlaced transformers. Additional tear-downs at midwestern and southeastern utilities provided similar results.

  12. Analysis of synoptic scale controlling factors in the distribution of gravity wave potential energy

    NASA Astrophysics Data System (ADS)

    Yang, Shih-Sian; Pan, C. J.; Das, Uma; Lai, H. C.

    2015-12-01

    In past years, the global morphology and climatology of gravity waves have been widely studied and the effects of topography and convection systems have been evaluated, but the complete gravity wave distribution could not be explained by these effects. To find the missing controlling factors, a series of synoptic scale analyses is performed in the present study to investigate relationships between synoptic scale factors and the potential energy (Ep) associated with gravity waves. The global distribution of Ep during a 12-year period from 2002 to 2013 is derived using temperature profiles retrieved from observations of the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument onboard the Thermosphere Ionosphere Mesosphere Energetics and Dynamics (TIMED) satellite. Synoptic scale factors obtained from ECMWF Interim reanalysis data are employed to investigate the correlation between synoptic systems and Ep. It is found that Ep values are high around extratropical cyclones over mid-latitudes (30-60°) and around the Intertropical Convergence Zone (ITCZ) over low latitudes (10-30°). Ep values are low around subtropical highs over both mid- and low latitudes. This is the first time that a synoptic scale analysis of the Ep distribution has been performed, and it confirms the influence of synoptic scale factors on Ep.
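
    For reference, the gravity wave potential energy per unit mass in SABER-based studies of this kind is conventionally computed from temperature perturbations as (notation assumed; the abstract itself does not give the formula):

```latex
E_p = \frac{1}{2}\left(\frac{g}{N}\right)^{2}\,\overline{\left(\frac{T'}{\overline{T}}\right)^{2}}
```

    where g is the gravitational acceleration, N the Brunt-Väisälä frequency, T' the temperature perturbation, and T̄ the background temperature.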

  13. Spatial distribution analysis of chemical and biochemical properties across Koiliaris CZO

    NASA Astrophysics Data System (ADS)

    Tsiknia, Myrto; Varouchakis, Emmanouil A.; Paranychianakis, Nikolaos V.; Nikolaidis, Nikolaos P.

    2015-04-01

    Arid and semi-arid ecosystems cover approximately 47% of the Earth's surface. Soils in these climatic zones are often severely degraded and poor in organic carbon and nutrients. Anthropogenic activities like overgrazing and intensive agricultural practices further degrade the quality of the soils, making them more vulnerable to erosion and accelerating losses of nutrients, which might end up in surface waterways and degrade their quality. Data on the geospatial distribution of nutrient availability, as well as on the processes involved at the watershed level, might help us to identify areas which will potentially act as sources of nutrients, and will probably allow us to adopt appropriate management practices to mitigate environmental impacts. In the present study we performed an extensive sampling campaign (50 points) across a typical Mediterranean watershed, the Koiliaris Critical Zone Observatory (CZO), organized in such a way as to effectively capture the complex variability (climatic, soil properties, hydrology, land use) of the watershed. Analyses of soil physico-chemical properties (texture, pH, EC, TOC, TN, NO3--N, and NH4+-N) and biochemical assays (potential nitrification rate, nitrogen mineralization rate, enzyme activities) were carried out. Geostatistical analysis, and more specifically the kriging interpolation method, was employed to generate distribution maps of nitrogen forms and of the related biochemical assays. Such maps could provide an important tool for effective ecosystem management and monitoring decisions.
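
    A minimal sketch of the kriging step, assuming the pykrige package (the study does not name its software; coordinates and values below are synthetic):

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

# x, y: sampling-point coordinates; z: a measured property (e.g., NO3-N)
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 50)
y = rng.uniform(0, 10, 50)
z = rng.lognormal(mean=1.0, size=50)

# Fit an ordinary-kriging model with a spherical variogram and
# interpolate onto a regular grid to produce a distribution map.
ok = OrdinaryKriging(x, y, z, variogram_model="spherical")
gridx = np.linspace(0, 10, 100)
gridy = np.linspace(0, 10, 100)
z_map, kriging_variance = ok.execute("grid", gridx, gridy)
```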

  14. A regional analysis of the distribution of Rippled Scour Depressions along the California coast

    NASA Astrophysics Data System (ADS)

    Mueller, C.; Davis, A. C.

    2011-12-01

    Rippled scour depressions (RSDs) are prominent sediment features found on continental shelves worldwide. They are characterized by coarse-grain sand in comparison to the surrounding sediment plateau and by long-period sand waves inside the depressions (0.4 m-1 m depth). The coarse-grain sand composition of RSDs in relation to the surrounding finer sediment adds heterogeneity to the habitat available to benthic fauna. Bathymetric and acoustic backscatter data from the California Seafloor Mapping Project reveal that RSDs are common features of the California continental shelf, including areas within marine protected areas (MPAs). While many studies have described RSDs at specific locations, the study presented here is the first to address the spatial distribution of RSDs at the regional scale. The goals of this study were to: 1) quantify the abundance and patterns of distribution of RSDs along the entire California coast, and 2) determine the percentage of rock reef, sedimentary and RSD habitats within state waters, both inside and outside of the State's MPA network. Our general approach was to develop and use a landscape analysis algorithm based on the Topographic Position Index (TPI) to identify the distinct edges of RSDs in order to differentiate the features from other soft sediment and rocky reef habitat. GIS spatial analysis was then used to quantify the distribution and abundance of RSDs along the entire coast and test predicted relationships with proximity to rocky reef, depth, and latitude. RSDs were found to make up 3.4% of the continental shelf in California, compared to 8.0% for rocky reef. We also determined that RSD percent cover varied significantly with depth and increased with proximity to rocky reef habitat. Because RSDs are a unique habitat and are found throughout California's MPAs, their distribution likely affects the composition and abundance of benthic communities. For this reason, determining the patterns of distribution and abundance for RSDs on the California continental shelf will provide information valuable to the design, monitoring and performance assessment of California's newly designed MPA network mandated by the State's Marine Life Protection Act.
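    The TPI itself is simple: a cell's elevation minus the mean elevation of its surrounding neighborhood, with strongly negative values marking depressions such as RSDs. A minimal sketch, assuming a square moving window rather than the study's actual algorithm:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def topographic_position_index(bathymetry, window=25):
    """TPI = elevation minus the mean elevation of a moving window."""
    neighborhood_mean = uniform_filter(bathymetry, size=window)
    return bathymetry - neighborhood_mean

bathy = np.random.rand(500, 500)   # stand-in for a bathymetric grid
tpi = topographic_position_index(bathy)
rsd_candidates = tpi < -0.05       # threshold chosen purely for illustration
```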

  15. Water Distribution System Deficiencies and Gastrointestinal Illness: A Systematic Review and Meta-Analysis

    PubMed Central

    Gruber, Joshua S.; Colford, John M.

    2014-01-01

    Background: Water distribution systems are vulnerable to performance deficiencies that can cause (re)contamination of treated water and plausibly lead to increased risk of gastrointestinal illness (GII) in consumers. Objectives: It is well established that large system disruptions in piped water networks can cause GII outbreaks. We hypothesized that routine network problems can also contribute to background levels of waterborne illness and conducted a systematic review and meta-analysis to assess the impact of distribution system deficiencies on endemic GII. Methods: We reviewed published studies that compared direct tap water consumption to consumption of tap water re-treated at the point of use (POU) and studies of specific system deficiencies such as breach of physical or hydraulic pipe integrity and lack of disinfectant residual. Results: In settings with network malfunction, consumers of tap water versus POU-treated water had increased GII [incidence density ratio (IDR) = 1.34; 95% CI: 1.00, 1.79]. The subset of nonblinded studies showed a significant association between GII and tap water versus POU-treated water consumption (IDR = 1.52; 95% CI: 1.05, 2.20), but there was no association based on studies that blinded participants to their POU water treatment status (IDR = 0.98; 95% CI: 0.90, 1.08). Among studies focusing on specific network deficiencies, GII was associated with temporary water outages (relative risk = 3.26; 95% CI: 1.48, 7.19) as well as chronic outages in intermittently operated distribution systems (odds ratio = 1.61; 95% CI: 1.26, 2.07). Conclusions: Tap water consumption is associated with GII in malfunctioning distribution networks. System deficiencies such as water outages also are associated with increased GII, suggesting a potential health risk for consumers served by piped water networks. Citation: Ercumen A, Gruber JS, Colford JM Jr. 2014. Water distribution system deficiencies and gastrointestinal illness: a systematic review and meta-analysis. Environ Health Perspect 122:651–660; http://dx.doi.org/10.1289/ehp.1306912. PMID:24659576
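
    Pooled ratios such as the IDRs above are typically obtained with random-effects meta-analysis. A minimal DerSimonian-Laird sketch on the log scale, with illustrative inputs rather than the review's actual data:

```python
import numpy as np

def dersimonian_laird(log_effects, se):
    """Random-effects pooling of per-study log ratios with standard errors."""
    w = 1.0 / se**2                               # fixed-effect weights
    fixed = np.sum(w * log_effects) / np.sum(w)
    q = np.sum(w * (log_effects - fixed) ** 2)    # Cochran's Q
    df = len(log_effects) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                 # between-study variance
    w_star = 1.0 / (se**2 + tau2)                 # random-effects weights
    pooled = np.sum(w_star * log_effects) / np.sum(w_star)
    se_pooled = np.sqrt(1.0 / np.sum(w_star))
    ci = np.exp([pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled])
    return np.exp(pooled), ci

# Toy example: three studies reporting ratio estimates.
ratio, ci = dersimonian_laird(np.log([1.4, 1.1, 1.6]),
                              np.array([0.20, 0.15, 0.30]))
```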

  16. Nonlinear Reduced-Order Analysis with Time-Varying Spatial Loading Distributions

    NASA Technical Reports Server (NTRS)

    Prezekop, Adam

    2008-01-01

    Oscillating shocks acting in combination with high-intensity acoustic loadings present a challenge to the design of resilient hypersonic flight vehicle structures. This paper addresses some features of this loading condition and certain aspects of a nonlinear reduced-order analysis with emphasis on system identification leading to formation of a robust modal basis. The nonlinear dynamic response of a composite structure subject to the simultaneous action of locally strong oscillating pressure gradients and high-intensity acoustic loadings is considered. The reduced-order analysis used in this work has been previously demonstrated to be both computationally efficient and accurate for time-invariant spatial loading distributions, provided that an appropriate modal basis is used. The challenge of the present study is to identify a suitable basis for loadings with time-varying spatial distributions. Using a proper orthogonal decomposition and modal expansion, it is shown that such a basis can be developed. The basis is made more robust by incrementally expanding it to account for changes in the location, frequency and span of the oscillating pressure gradient.
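
    A proper orthogonal decomposition of response snapshots is conventionally computed via the singular value decomposition. A minimal sketch, with placeholder data rather than the paper's structural response:

```python
import numpy as np

def pod_basis(snapshots, n_modes):
    """snapshots: (n_dof, n_snapshots) array of response fields."""
    mean = snapshots.mean(axis=1, keepdims=True)
    # Left singular vectors of the centered snapshot matrix are the POD modes.
    u, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)   # cumulative 'energy' captured
    return u[:, :n_modes], energy[:n_modes]

X = np.random.rand(1000, 200)                 # stand-in snapshot matrix
basis, energy = pod_basis(X, n_modes=10)
```

    Incrementally expanding such a basis, as the paper describes, would amount to appending (and re-orthogonalizing) modes computed from snapshots at new gradient locations.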

  17. Optimal multiparameter analysis of source water distributions in the Southern Drake Passage

    NASA Astrophysics Data System (ADS)

    Frants, Marina; Gille, Sarah T.; Hewes, Christopher D.; Holm-Hansen, Osmund; Kahru, Mati; Lombrozo, Aaron; Measures, Christopher I.; Greg Mitchell, B.; Wang, Haili; Zhou, Meng

    2013-06-01

    In order to evaluate the effects of horizontal advection on iron supply in the vicinity of the Shackleton Transverse Ridge (STR) in the southern Drake Passage, the water composition in the region is estimated along the isopycnal containing the subsurface iron peak. Optimal Multiparameter (OMP) analysis of temperature, salinity, oxygen and nutrient data is used to estimate the water composition at CTD stations sampled in summer 2004 and winter 2006. The highest iron concentrations in the Ona Basin are found below the mixed layer, both in summer and in winter. The water composition derived from the OMP analysis is consistent with a scenario in which iron-rich shelf waters from the South Shetland Islands and the Antarctic Peninsula are advected northward on the eastern side of the STR, where they interact with the low-iron waters of the Antarctic Circumpolar Current (ACC) in the Ona Basin. The shelf waters and the ACC waters appear to interact through a stirring process without fully mixing, resulting in a filamented distribution that has also been inferred from the satellite data. To the west of the STR, the shelf waters are primarily confined to the continental shelf, and do not extend northwards. This source water distribution is consistent with the idea that iron enters the Ona Basin from the continental shelf through advection along an isopycnal, resulting in an iron concentration peak occurring below the winter mixed layer in the Ona Basin.
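
    At its core, OMP analysis solves a nonnegative least-squares problem for source water fractions, with mass conservation imposed as an extra, heavily weighted equation. A minimal sketch with placeholder tracer values (the study's actual source water matrix is not reproduced here):

```python
import numpy as np
from scipy.optimize import nnls

# Rows: normalized tracers (e.g., T, S, O2, nutrients);
# columns: candidate source water types.
G = np.array([[ 1.0, -0.5,  0.2],
              [ 0.3,  1.0, -0.1],
              [-0.2,  0.4,  1.0]])
d = np.array([0.4, 0.7, 0.5])       # normalized observations at one station

mass_weight = 100.0                 # large weight ~ a hard sum-to-one constraint
G_aug = np.vstack([G, mass_weight * np.ones(G.shape[1])])
d_aug = np.append(d, mass_weight)

fractions, residual = nnls(G_aug, d_aug)
print(fractions, fractions.sum())   # nonnegative fractions summing to ~1
```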

  18. Biometric analysis of the palm vein distribution by means of two different techniques of feature extraction

    NASA Astrophysics Data System (ADS)

    Castro-Ortega, R.; Toxqui-Quitl, C.; Solís-Villarreal, J.; Padilla-Vivanco, A.; Castro-Ramos, J.

    2014-09-01

    Vein patterns can be used for access, identification, and authentication purposes, and are more reliable than classical identification methods. Furthermore, these patterns can be used for venipuncture in health fields, to locate the veins of patients when they cannot be seen with the naked eye. In this paper, an image acquisition system is implemented in order to acquire digital images of people's hands in the near infrared. The image acquisition system consists of a CCD camera and a light source with peak emission at 880 nm. This radiation can penetrate the skin and is strongly absorbed by the deoxyhemoglobin present in the blood of the veins. Our method of analysis comprises several steps, the first of which is the enhancement of the acquired images by means of spatial filters. After that, adaptive thresholding and mathematical morphology operations are used in order to obtain the distribution of vein patterns. The above process is focused on recognizing people through images of their palm-dorsal vein distributions obtained under near infrared light. This work compares two different feature extraction techniques: moments and veincode. The classification task is achieved using Artificial Neural Networks. Two databases are used for the analysis of the performance of the algorithms. The first database used here is owned by the Hong Kong Polytechnic University and the second one is our own database.
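
    A minimal sketch of the enhancement, adaptive thresholding, and morphology chain described above, assuming an OpenCV implementation (the paper does not specify its library; the file path and parameters are illustrative):

```python
import cv2

img = cv2.imread("hand_nir.png", cv2.IMREAD_GRAYSCALE)  # placeholder path

# Contrast enhancement of the NIR image (CLAHE as one possible spatial filter).
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)

# Adaptive thresholding: veins appear darker than the surrounding tissue.
binary = cv2.adaptiveThreshold(enhanced, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY_INV, 31, 5)

# Morphological opening/closing to remove speckle and bridge small gaps.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
veins = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
veins = cv2.morphologyEx(veins, cv2.MORPH_CLOSE, kernel)
```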

  19. Multipath data analysis and exploitation for the design of distributed radar systems

    NASA Astrophysics Data System (ADS)

    Mitra, Atindra K.; Robinson, Philip; LaRue, James; Glett, John

    2007-04-01

    A description of the design parameters for a scaled RF environment is presented. This scaled RF environment was developed for purposes of simulating and investigating multipath phenomena in urban environments. A number of experiments were conducted with this scaled urban environment, including a series of tests with eight spatially distributed receivers and one transmitter. Details with regard to the instrumentation system along with the measurement philosophy are provided. The primary focus of this paper is a detailed treatment of data analysis and exploitation techniques for the multipath data generated by this scaled RF environment. A portion of the material on multipath data analysis and exploitation is focused on developing techniques for identifying an optimum placement of receiver pairs for purposes of maximizing information content on an embedded target. In other words, data from the eight distributed receiver locations are analyzed and techniques are presented that allow for the selection of receiver pairs that provide the most information on targets that are embedded within the multipath environment. The last section of the paper discusses visualization and pseudo-imaging techniques for targets embedded in multipath environments.

  20. Historical changes in Australian temperature extremes as inferred from extreme value distribution analysis

    NASA Astrophysics Data System (ADS)

    Wang, Xiaolan L.; Trewin, Blair; Feng, Yang; Jones, David

    2013-02-01

    This study develops a generalized extreme value (GEV) distribution analysis approach, namely, a GEV tree approach that allows for both stationary and nonstationary cases. This approach is applied to a century-long homogenized daily temperature data set for Australia to assess changes in temperature extremes from 1910 to 2010. Changes in 20 year return values are estimated from the most suitable GEV distribution chosen from a GEV tree. Twenty year return values of extreme low minimum temperature are found to have warmed strongly over the century in most parts of the continent. There is also a tendency toward warming of extreme high maximum temperatures, but it is weaker than that for minimum temperatures, with the majority of stations not showing significant trends. The observed changes in extreme temperatures are broadly consistent with observed changes in mean temperatures and in the frequency of temperatures above the ninetieth and below the tenth percentile (i.e., extreme indices). The GEV tree analysis provides insight into the behavior of extremes with recurrence times of several years to decades that are of importance to engineering design/applications, while extreme indices represent moderately extreme events with recurrence times of a year or shorter.
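
    As an illustration of the return-value calculation underlying this entry (a stationary GEV fit with scipy, not the paper's GEV tree model selection; the data are synthetic):

```python
import numpy as np
from scipy.stats import genextreme

annual_maxima = np.random.gumbel(loc=35, scale=2, size=100)  # stand-in series

shape, loc, scale = genextreme.fit(annual_maxima)
# 20-year return value: the quantile exceeded on average once in 20 years.
rv20 = genextreme.ppf(1 - 1 / 20, shape, loc=loc, scale=scale)
```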

  21. Interactive analysis of geographically distributed population imaging data collections over light-path data networks

    NASA Astrophysics Data System (ADS)

    van Lew, Baldur; Botha, Charl P.; Milles, Julien R.; Vrooman, Henri A.; van de Giessen, Martijn; Lelieveldt, Boudewijn P. F.

    2015-03-01

    The cohort size required in epidemiological imaging genetics studies often mandates the pooling of data from multiple hospitals. Patient data, however, are subject to strict privacy protection regimes, and physical data storage may be legally restricted to a hospital network. To enable biomarker discovery, fast data access and interactive data exploration must be combined with high-performance computing resources, while respecting privacy regulations. We present a system that uses fast and inherently secure light-paths to access distributed data, thereby obviating the need for a central data repository. A secure private cloud computing framework facilitates interactive, computationally intensive exploration of this geographically distributed, privacy-sensitive data. As a proof of concept, MRI brain imaging data hosted at two remote sites were processed in response to a user command at a third site. The system was able to automatically start virtual machines, run a selected processing pipeline and write results to a user-accessible database, while keeping the data locally stored in the hospitals. Individual tasks took approximately 50% longer compared to a locally hosted blade server, but the cloud infrastructure reduced the total elapsed time by a factor of 40 using 70 virtual machines in the cloud. We demonstrated that the combination of light-paths and a private cloud is a viable means of building an analysis infrastructure for secure data analysis. The system requires further work in the areas of error handling, load balancing and secure support of multiple users.

  22. Analysis of Regolith Simulant Ejecta Distributions from Normal Incident Hypervelocity Impact

    NASA Technical Reports Server (NTRS)

    Edwards, David L.; Cooke, William; Suggs, Rob; Moser, Danielle E.

    2008-01-01

    The National Aeronautics and Space Administration (NASA) has established the Constellation Program. The Constellation Program has defined one of its many goals as long-term lunar habitation. Critical to the design of a lunar habitat is an understanding of the lunar surface environment; of specific importance is the primary meteoroid and subsequent ejecta environment. The document NASA SP-8013, 'Meteoroid Environment Model Near Earth to Lunar Surface', was developed for the Apollo program in 1969 and contains the latest definition of the lunar ejecta environment. There is concern that NASA SP-8013 may over-estimate the lunar ejecta environment. NASA's Meteoroid Environment Office (MEO) has initiated several tasks to improve the accuracy of our understanding of the lunar surface ejecta environment. This paper reports the results of experiments on projectile impact into powdered pumice and unconsolidated JSC-1A Lunar Mare Regolith simulant targets. Projectiles were accelerated to velocities between 2.45 and 5.18 km/s at normal incidence using the Ames Vertical Gun Range (AVGR). The ejected particles were detected by thin aluminum foil targets strategically placed around the impact site, and angular ejecta distributions were determined. Two assumptions supported the analysis: that the ejecta from normal impact are spherically symmetric, and that all ejecta particles are of the mean target particle size. This analysis produces a hemispherical flux density distribution of ejecta with sufficient velocity to penetrate the aluminum foil detectors.

  23. Analysis of adipose tissue distribution using whole-body magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Wald, Diana; Schwarz, Tobias; Dinkel, Julien; Delorme, Stefan; Teucher, Birgit; Kaaks, Rudolf; Meinzer, Hans-Peter; Heimann, Tobias

    2011-03-01

    Obesity is an increasing problem in the western world and triggers diseases like cancer, type two diabetes, and cardiovascular diseases. In recent years, magnetic resonance imaging (MRI) has become a clinically viable method to measure the amount and distribution of adipose tissue (AT) in the body. However, analysis of MRI images by manual segmentation is a tedious and time-consuming process. In this paper, we propose a semi-automatic method to quantify the amount of different AT types from whole-body MRI data with less user interaction. Initially, body fat is extracted by automatic thresholding. A statistical shape model of the abdomen is then used to differentiate between subcutaneous and visceral AT. Finally, fat in the bone marrow is removed using morphological operators. The proposed method was evaluated on 15 whole-body MRI images using manual segmentation as ground truth for adipose tissue. The resulting overlap for total AT was 93.7% +/- 5.5, with a volumetric difference of 7.3% +/- 6.4. Furthermore, we tested the robustness of the segmentation results with regard to the initial, interactively defined position of the shape model. In conclusion, the developed method proved suitable for the analysis of AT distribution from whole-body MRI data. For large studies, a fully automatic version of the segmentation procedure is expected in the near future.

  24. Application-driven merging and analysis of person trajectories for distributed smart camera networks

    NASA Astrophysics Data System (ADS)

    Metzler, Jürgen; Monari, Eduardo; Kuntzsch, Colin

    2014-03-01

    Tracking of persons and analysis of their trajectories are important tasks of surveillance systems, as they support the monitoring personnel. However, this trend is accompanied by an increasing demand for smarter camera networks carrying out surveillance tasks autonomously. Thus, system complexity is increasing, and requirements on the video analysis algorithms are increasing as well. In this paper, we present a system concept and application for anonymously gathering, processing and analyzing trajectories in distributed smart camera networks. It allows a multitude of analysis techniques, such as inspecting individual properties of the observed movement in real time. Additionally, the anonymous movement data allow long-term storage and big-data analyses for statistical purposes. The system described in this paper has been implemented as a prototype system and deployed for proof of concept under real conditions at the entrance hall of the Leibniz University Hannover. It shows overall stable performance, particularly with respect to significant illumination changes over hours, as well as with regard to the reduction of false positives by post-processing and trajectory merging performed on top of a panorama-based person detection module.

  25. POPE: A distributed query system for high performance analysis of very large persistent object stores

    SciTech Connect

    Fischler, M.S.; Isely, M.C.; Nigri, A.M.; Rinaldo, F.J.

    1996-01-01

    Analysis of large physics data sets is a major computing task at Fermilab. One step in such an analysis involves culling 'interesting' events via the use of complex query criteria. What makes this unusual is the scale required: hundreds of gigabytes of event data must be scanned at tens of megabytes per second for the typical queries that are applied, and data must be extracted from tens of terabytes based on the result of the query. The Physics Object Persistency Manager (POPM) system is a solution tailored to this scale of problem. A running POPM environment can support multiple queries in progress, each scanning at rates exceeding 10 megabytes per second, all of which share access to a very large persistent address space distributed across multiple disks on multiple hosts. Specifically, POPM employs the following techniques to permit this scale of performance and access. Persistent objects: experimental data to be scanned are 'populated' as a data structure into the persistent address space supported by POPM; C++ classes with a few key overloaded operators provide nearly transparent semantics for access to the persistent storage. Distributed and parallel I/O: the persistent address space is automatically distributed across disks of multiple 'I/O nodes' within the POPM system; a striping unit concept is implemented in POPM, permitting fast parallel I/O across the storage nodes, even for small single queries. Efficient shared access: POPM implements an efficient mechanism for arbitration and multiplexing of I/O access among multiple queries on the same or separate compute nodes.

  26. Phase behavior and splitting analysis: An operational tool in gas transmission and distribution

    SciTech Connect

    Martinez A., F.F.; Infantini S., M.

    1998-12-31

    Most of the natural gas produced by Petroleos de Venezuela, S.A. (PDVSA) is associated gas that flows from the gathering systems to the processing plants before it arrives at the transmission systems. Even when the necessary processing of the gas has been completed, condensation still happens in the transmission, regulating stations and/or distribution systems. The quantity of condensate will depend not only on composition, pressure and temperature, but also on the unequal splitting phenomenon that takes place at tee junctions in a network system. The splitting phenomenon determines the liquid distribution at the junction. This situation is more drastic when the processing plant is partially or totally shut down in a maintenance program. This work shows how the gas transmission and distribution engineer can use phase behavior and splitting analysis as an operational tool in order to predict and prevent the presence of liquid in the system. Using process simulators, the phase behavior analysis is conducted to determine the bubble and dew point curves and to perform flash calculations at any pressure and temperature. Then, the operational pressure-temperature profile is over-plotted on the phase envelope diagram in order to evaluate the possibility of condensation in the gas pipeline. Afterwards, the pressure and temperature drops in regulating stations are incorporated in the phase envelope diagram and the two-phase gas condensate study is completed. Finally, considering the concepts of the splitting phenomenon and the knowledge that it can really happen, the presence of liquid at particular customers can be explained and solved. Operational experiences are included to evaluate the methodology and to present the effectiveness of the results.

  27. Large Survey Database: A Distributed Framework for Storage and Analysis of Large Datasets

    NASA Astrophysics Data System (ADS)

    Juric, Mario

    2011-01-01

    The Large Survey Database (LSD) is a Python framework and DBMS for distributed storage, cross-matching and querying of large survey catalogs (>10^9 rows, >1 TB). The primary driver behind its development is the analysis of Pan-STARRS PS1 data. It is specifically optimized for fast queries and parallel sweeps of positionally and temporally indexed datasets. It transparently scales to more than 10^2 nodes, and can be made to function in 'shared nothing' architectures. An LSD database consists of a set of vertically and horizontally partitioned tables, physically stored as compressed HDF5 files. Vertically, we partition the tables into groups of related columns ('column groups'), storing together logically related data (e.g., astrometry, photometry). Horizontally, the tables are partitioned into partially overlapping 'cells' by position in space (lon, lat) and time (t). This organization allows for fast lookups based on spatial and temporal coordinates, as well as data and task distribution. The design was inspired by the success of Google BigTable (Chang et al., 2006). Our programming model is a pipelined extension of MapReduce (Dean and Ghemawat, 2004). An SQL-like query language is used to access data. For complex tasks, map-reduce 'kernels' that operate on query results on a per-cell basis can be written, with the framework taking care of scheduling and execution. The combination leverages users' familiarity with SQL, while offering a fully distributed computing environment. LSD adds little overhead compared to direct Python file I/O. In tests, we swept through 1.1 gigarows of PanSTARRS+SDSS data (220 GB) in less than 15 minutes on a dual-CPU machine. In a cluster environment, we achieved bandwidths of 17 Gbit/s (I/O limited). Based on current experience, we believe LSD should scale to be useful for analysis and storage of LSST-scale datasets. It can be downloaded from http://mwscience.net/lsd.

  28. Computation of stress distribution in a mixed flow pump based on fluid-structure interaction analysis

    NASA Astrophysics Data System (ADS)

    Hu, F. F.; Chen, T.; Wu, D. Z.; Wang, L. Q.

    2013-12-01

    The internal flow of a pump evolves with impeller movement, and under various conditions the peak load on a centrifugal blade changes with rotational speed and flow rate. Applying an inertia load with a safety coefficient (which is difficult to ascertain) in structural design can therefore introduce error. In order to accurately analyze the impeller stress under various conditions and improve the reliability of the pump, the stress distribution characteristics were analyzed for different flow rates and rotational speeds, based on a mixed flow pump model. Using a three-dimensional calculation model including the impeller, guide blade, inlet and outlet, the three-dimensional incompressible turbulent flow in the pump was simulated with the standard k-epsilon turbulence model. Based on the sequentially coupled simulation approach, a three-dimensional finite element model of the impeller was established, and the fluid-structure interaction method of blade load transfer is discussed. The blade pressure from the flow simulation, together with the inertia force acting on the blade, was used as the blade loading on the solid surface. The Finite Element Method (FEM) was used to calculate the stress distribution of the blade under inertia load, fluid load, and combined load, respectively. The results showed that the blade stress changed with flow rate and rotational speed. In all cases, the maximum stress on the blade appeared on the pressure side near the hub, and the maximum static stress increased with decreasing flow rate and increasing rotational speed. There was a large difference in the static stress when inertia load, fluid load and combined loads were applied respectively. In order to calculate the stress distribution more accurately, the structural analysis should be conducted with combined loads. The results could provide a basis for the stress analysis and structural optimization of pumps.

  29. Software for analysis of chemical mixtures--composition, occurrence, distribution, and possible toxicity

    USGS Publications Warehouse

    Scott, Jonathon C.; Skach, Kenneth A.; Toccalino, Patricia L.

    2013-01-01

    The composition, occurrence, distribution, and possible toxicity of chemical mixtures in the environment are research concerns of the U.S. Geological Survey and others. The presence of specific chemical mixtures may serve as indicators of natural phenomena or human-caused events. Chemical mixtures may also have ecological, industrial, geochemical, or toxicological effects. Chemical-mixture occurrences vary by analyte composition and concentration. Four related computer programs have been developed by the National Water-Quality Assessment Program of the U.S. Geological Survey for research of chemical-mixture compositions, occurrences, distributions, and possible toxicities. The compositions and occurrences are identified for the user-supplied data, and therefore the resultant counts are constrained by the user's choices for the selection of chemicals, reporting limits for the analytical methods, spatial coverage, and time span for the data supplied. The distribution of chemical mixtures may be spatial, temporal, and (or) related to some other variable, such as chemical usage. Possible toxicities optionally are estimated from user-supplied benchmark data. The software for the analysis of chemical mixtures described in this report is designed to work with chemical-analysis data files retrieved from the U.S. Geological Survey National Water Information System, but can also be used with appropriately formatted data from other sources. Installation and usage of the mixture software are documented. This mixture software was designed to function with minimal changes on a variety of computer-operating systems. To obtain the software described herein and other U.S. Geological Survey software, visit http://water.usgs.gov/software/.

  30. Rainfall extremes: Toward reconciliation after the battle of distributions

    NASA Astrophysics Data System (ADS)

    Serinaldi, Francesco; Kilsby, Chris G.

    2014-01-01

    This study attempts to reconcile the conflicting results reported in the literature concerning the behavior of peak-over-threshold (POT) daily rainfall extremes and their distribution. By using two worldwide data sets, the impact of threshold selection and record length on the upper tail behavior of POT observations is investigated. The rainfall process is studied within the framework of generalized Pareto (GP) exceedances according to the classical extreme value theory (EVT), with particular attention paid to the study of the GP shape parameter, which controls the heaviness of the upper tail of the GP distribution. A twofold effect is recognized. First, as the threshold decreases, and nonextreme values are progressively incorporated in the POT samples, the variance of the GP shape parameter reduces and the mean converges to positive values denoting a tendency to heavy tail behavior. Simultaneously, the EVT asymptotic hypotheses are less and less realistic, and the GP asymptote tends to be replaced by the Weibull penultimate asymptote, whose upper tail is exponential but apparently heavy. Second, for a fixed high threshold, the variance of the GP shape parameter reduces as the record length (number of years) increases, and the mean values tend to be positive, thus denoting again the prevalence of heavy tail behavior. In both cases, i.e., threshold selection and record length effect, the heaviness of the tail may be ascribed to mechanisms such as the blend of extreme and nonextreme values, and fluctuations of the parent distributions. It is shown how these results provide a link between previous studies and pave the way for more comprehensive analyses which merge empirical, theoretical, and operational points of view. This study also provides several ancillary results, such as a set of formulae to correct the bias of the GP shape parameter estimates due to short record lengths accounting for uncertainty, thus avoiding the systematic underestimation of extremes which results from the analysis of short time series.
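
    The threshold effect described above is easy to reproduce in miniature: fit a generalized Pareto distribution to exceedances over progressively higher thresholds and track the fitted shape parameter. A sketch with synthetic data:

```python
import numpy as np
from scipy.stats import genpareto

daily_rain = np.random.exponential(scale=5.0, size=30 * 365)  # stand-in record

for q in (0.90, 0.95, 0.99):
    u = np.quantile(daily_rain, q)
    excesses = daily_rain[daily_rain > u] - u
    # Location fixed at zero: we model the excesses over the threshold.
    shape, _, scale = genpareto.fit(excesses, floc=0)
    print(f"threshold q={q}: shape={shape:+.3f}, "
          f"scale={scale:.2f}, n={excesses.size}")
```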

  31. Analysis of the access patterns at GSFC distributed active archive center

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore; Bedet, Jean-Jacques

    1996-01-01

    The Goddard Space Flight Center (GSFC) Distributed Active Archive Center (DAAC) has been operational for more than two years. Its mission is to support existing and pre-Earth Observing System (EOS) Earth science datasets, facilitate scientific research, and test Earth Observing System Data and Information System (EOSDIS) concepts. Over 550,000 files and documents have been archived, and more than six terabytes have been distributed to the scientific community. Information about user request and file access patterns, and their impact on system loading, is needed to optimize current operations and to plan for future archives. To facilitate the management of daily activities, the GSFC DAAC has developed a database system to track correspondence, requests, ingestion and distribution. In addition, several log files which record transactions on Unitree are maintained and periodically examined. This study identifies some of the user request and file access patterns at the GSFC DAAC during 1995. The analysis is limited to the subset of orders for which the data files are under the control of the Hierarchical Storage Management (HSM) system Unitree. The results show that most of the data volume ordered was for two data products. The volume was also mostly made up of level 3 and 4 data, and most of the volume was distributed on 8 mm and 4 mm tapes. In addition, most of the volume ordered was for deliveries in North America, although there was significant worldwide use. There was a wide range of request sizes in terms of volume and number of files ordered. On average, 78.6 files were ordered per request. Using the data managed by Unitree, several caching algorithms were evaluated for both hit rate and the overhead ('cost') associated with the movement of data from near-line devices to disks. The algorithm called LRU/2 bin was found to be the best for this workload, but the STbin algorithm also worked well.

  32. Current (re-)Distribution inside an ITER Full-Size Conductor: a Qualitative Analysis

    NASA Astrophysics Data System (ADS)

    di Zenobio, A.; Muzzi, L.; Turtù, S.; Della Corte, A.; Verdini, L.

    2006-06-01

    The comprehension of the current re-distribution phenomenon inside multi-filamentary conductors is a crucial point for the design of ITER-relevant coils, as it is now established that current non-uniformity among cable sub-stages may strongly degrade Cable-in-Conduit Conductor (CICC) performance. The only feasible way to get information about the current flowing inside CICC sub-stages is an indirect evaluation by self-field measurements in regions very close to the conductor surface. A 7 m full-size NbTi conductor (Bus-Bar III) was used as a short-circuit during the test of an ITER Toroidal Field Coil HTS current lead at FzK. Its relatively simple shape and the absence of any other magnetic field source (background coils, etc.) made BBIII one of the most desirable candidates for a reliable measurement of the current distribution under controlled conditions. For this reason it was instrumented ad hoc with different arrangements of Hall probes (rings and arrays), as well as with transverse and longitudinal voltage taps. This paper gives a qualitative interpretation of the current (re-)distribution events inside the conductor, as derived from the analysis of the Hall sensor and voltage tap signals during Tcs measurements and as a function of different dI/dt. It is shown that Hall probes represent a very reliable tool to investigate this issue. In fact, re-distribution phenomena were clearly observed during the transition, and even well before reaching Tcs, when the transverse voltage signals had not yet shown any appreciable onset.

  33. A Cost to Benefit Analysis of a Next Generation Electric Power Distribution System

    NASA Astrophysics Data System (ADS)

    Raman, Apurva

    This thesis provides a cost-to-benefit analysis of the proposed next generation of distribution systems: the Future Renewable Electric Energy Distribution Management (FREEDM) system. With the increasing penetration of renewable energy sources onto the grid, it becomes necessary to have an infrastructure that allows for easy integration of these resources, coupled with features like enhanced reliability of the system and fast protection from faults. The Solid State Transformer (SST) and the Fault Isolation Device (FID) form the core of the FREEDM system and have large investment costs. Some key features of the FREEDM system include improved power flow control, compact design and unity power factor operation. Customers may observe a reduction in the electricity bill by a certain fraction for using renewable sources of generation. There is also a possibility of large subsidies given to encourage the use of renewable energy. This thesis is an attempt to quantify the benefits offered by the FREEDM system in monetary terms and to calculate the time in years required to gain a return on the investments made. The elevated cost of FIDs needs to be justified by the advantages they offer. The effect of different interest rates on the payback period is also studied, and the calculated payback periods are assessed for viability. A comparison is made between the active power losses on a certain distribution feeder that uses distribution-level magnetic transformers versus one that uses SSTs. The reduction in annual active power losses in the case of the feeder using SSTs is translated into annual savings in terms of cost when compared to the conventional case with magnetic transformers. Since the FREEDM system encourages operation at unity power factor, the need for installing capacitor banks to improve the power factor is eliminated, and this is reflected in cost savings. The FREEDM system offers enhanced reliability when compared to a conventional system. The payback periods observed support the concept of introducing the FREEDM system.

  34. Analysis of the relationship between landslides size distribution and earthquake source area

    NASA Astrophysics Data System (ADS)

    Valagussa, Andrea; Crosta, Giovanni B.; Frattini, Paolo; Xu, Chong

    2014-05-01

    The spatial distribution of earthquake-induced landslides around the seismogenic source has been analysed to better understand the triggering of landslides in seismic areas and to forecast the maximum distance at which an earthquake, with a certain magnitude, can induce landslides (e.g., Keefer, 1984). However, when applying such approaches to old earthquakes (e.g., the 1929 Buller and 1968 Inangahua earthquakes, New Zealand; Parker, 2013; the 1976 Friuli earthquake, Italy), one should be concerned about the undersampling of smaller landslides, which can be erased by erosion and landscape evolution. For this reason, it is important to characterize carefully the relationship between landslide area and number with distance from the source, but also the size distribution of landslides as a function of distance from the source. In this paper, we analyse the 2008 Wenchuan earthquake landslide inventory (Xu et al., 2013). The earthquake triggered more than 197,000 landslides of different types, including rock avalanches, rockfalls, translational and rotational slides, lateral spreads and debris flows. First, we calculated the landslide intensity (number of landslides per unit area) and spatial density (landslide area per unit area) as a function of distance from the source area of the earthquake. Then, we developed magnitude frequency curves (MFC) for different distances from the source area. Comparing these curves, we can describe the relation between distance and the frequency density of landslides in seismic areas. Keefer, D.K. (1984) Landslides caused by earthquakes. Geological Society of America Bulletin, 95(4), 406-421. Parker, R.N. (2013) Hillslope memory and spatial and temporal distributions of earthquake-induced landslides. Durham theses, Durham University. Xu, C., Xu, X., Yao, X., & Dai, F. (2013) Three (nearly) complete inventories of landslides triggered by the May 12, 2008 Wenchuan Mw 7.9 earthquake of China and their spatial distribution statistical analysis. Landslides, 1-21.

  35. Spatial distribution of the active surveillance of sheep scrapie in Great Britain: an exploratory analysis

    PubMed Central

    Birch, Colin PD; Chikukwa, Ambrose C; Hyder, Kieran; Del Rio Vilas, Victor J

    2009-01-01

    Background: This paper explores the spatial distribution of sampling within the active surveillance of sheep scrapie in Great Britain. We investigated the geographic distribution of the birth holdings of sheep sampled for scrapie during 2002-2005, including samples taken in abattoir surveys (c. 83,100) and from sheep that died in the field ('fallen stock', c. 14,600). We mapped the birth holdings by county and calculated the sampling rate, defined as the proportion of the holdings in each county sampled by the surveys. The Moran index was used to estimate the global spatial autocorrelation across Great Britain. The contributions of each county to the global Moran index were analysed by a local indicator of spatial autocorrelation (LISA). Results: The sampling rate differed among counties in both surveys, which affected the distribution of detected cases of scrapie. Within each survey, the county sampling rates in different years were positively correlated during 2002-2005, with the abattoir survey being more strongly autocorrelated through time than the fallen stock survey. In the abattoir survey, spatial indices indicated that sampling rates in neighbouring counties tended to be similar, with few significant contrasts. Sampling rates were strongly correlated with sheep density, being highest in Wales, Southwest England and Northern England. This relationship with sheep density accounted for over 80% of the variation in sampling rate among counties. In the fallen stock survey, sampling rates in neighbouring counties tended to be different, with more statistically significant contrasts. The fallen stock survey also included a larger proportion of holdings providing many samples. Conclusion: Sampling will continue to be uneven unless action is taken to make it more uniform, if more uniform sampling becomes a target. Alternatively, analyses of scrapie occurrence in these datasets can take account of the distribution of sampling. Combining the surveys only partially reduces uneven sampling. Adjusting the distribution of sampling between abattoirs to reduce the bias in favour of regions with high sheep densities could probably achieve more even sampling. However, any adjustment of sampling should take account of the current understanding of the distribution of scrapie cases, which will be improved by further analysis of this dataset. PMID:19607705
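
    The global Moran index used in this study is a single formula over an attribute vector and a spatial weights matrix. A compact sketch with a toy adjacency (real county weights would come from a shapefile):

```python
import numpy as np

def morans_i(values, weights):
    """values: (n,) attribute per region; weights: (n, n) spatial weights."""
    n = len(values)
    z = values - values.mean()
    num = n * np.sum(weights * np.outer(z, z))
    den = weights.sum() * np.sum(z**2)
    return num / den

# Toy example: 4 regions on a line with rook adjacency.
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rates = np.array([0.12, 0.10, 0.03, 0.02])  # e.g., county sampling rates
print(morans_i(rates, w))                   # > 0 suggests spatial clustering
```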
  437. Mixture Factor Analysis for Approximating a Nonnormally Distributed Continuous Latent Factor with Continuous and Dichotomous Observed Variables

    ERIC Educational Resources Information Center

    Wall, Melanie M.; Guo, Jia; Amemiya, Yasuo

    2012-01-01

    Mixture factor analysis is examined as a means of flexibly estimating nonnormally distributed continuous latent factors in the presence of both continuous and dichotomous observed variables. A simulation study compares mixture factor analysis with normal maximum likelihood (ML) latent factor modeling. Different results emerge for continuous versus…

  438. Neuroscience Instrumentation and Distributed Analysis of Brain Activity Data: A Case for eScience on Global Grids

    E-print Network

    Melbourne, University of

    …commonly observed in scientific disciplines. Two popular scientific disciplines of this nature are brain science and high-energy physics. The analysis of brain activity data gathered from the MEG…

  439. New methods for analysis of spatial distribution and coaggregation of microbial populations in complex biofilms

    PubMed

    Almstrand, Robert; Daims, Holger; Persson, Frank; Sörensson, Fred; Hermansson, Malte

    2013-10-01

    In biofilms, microbial activities form gradients of substrates and electron acceptors, creating a complex landscape of microhabitats, often resulting in structured localization of the microbial populations present. To understand the dynamic interplay between and within these populations, quantitative measurements and statistical analysis of their localization patterns within the biofilms are necessary, and adequate automated tools for such analyses are needed. We have designed and applied new methods for fluorescence in situ hybridization (FISH) and digital image analysis of directionally dependent (anisotropic) multispecies biofilms. A sequential-FISH approach allowed multiple populations to be detected in a biofilm sample. This was combined with an automated tool for vertical-distribution analysis by generating in silico biofilm slices and the recently developed Inflate algorithm for coaggregation analysis of microbial populations in anisotropic biofilms. As a proof of principle, we show distinct stratification patterns of the ammonia oxidizers Nitrosomonas oligotropha subclusters I and II and the nitrite oxidizer Nitrospira sublineage I in three different types of wastewater biofilms, suggesting niche differentiation between the N. oligotropha subclusters, which could explain their coexistence in the same biofilms. Coaggregation analysis showed that N. oligotropha subcluster II aggregated closer to Nitrospira than did N. oligotropha subcluster I in a pilot plant nitrifying trickling filter (NTF) and a moving-bed biofilm reactor (MBBR), but not in a full-scale NTF, indicating important ecophysiological differences between these phylogenetically closely related subclusters. By using high-resolution quantitative methods applicable to any multispecies biofilm in general, the ecological interactions of these complex ecosystems can be understood in more detail. PMID:23892743

  440. [Distribution Characteristics and Source Analysis of Dustfall Trace Elements During Winter in Beijing]

    PubMed

    Xiong, Qiu-lin; Zhao, Wen-ji; Guo, Xiao-yu; Chen, Fan-tao; Shu, Tong-tong; Zheng, Xiao-xia; Zhao, Wen-hui

    2015-08-01

    The dustfall content is one of the evaluation indexes of atmospheric pollution. Trace elements, especially heavy metals, in dustfall can pose risks to the ecological environment and human health. In order to study the distribution characteristics of trace elements, heavy metal pollution and their sources in winter atmospheric dust, 49 dustfall samples were collected in Beijing City and nearby areas from November 2013 to March 2014. The contents (mass percentages) of 40 trace elements were then measured by an Elan DRC II inductively coupled plasma mass spectrometer (ICP-MS). Test results showed that more than half of the trace elements in the dust were present at less than 10 mg/kg; about a quarter were between 10 and 100 mg/kg; while 7 elements (Pb, Zr, Cr, Cu, Zn, Sr and Ba) exceeded 100 mg/kg. The contents of Pb, Cu, Zn, Bi, Cd and Mo in winter dustfall in Beijing city were respectively 4.18, 4.66, 5.35, 6.31, 6.62, and 8.62 times as high as those of the corresponding elements in the surface soil in the same period, exceeding the soil background values by more than 300%. The contribution of human activities to dustfall trace heavy metal content in Beijing city was larger than that in the surrounding region. Source analysis of dustfall and its 20 main trace elements (Cd, Mo, Nb, Ga, Co, Y, Nd, Li, La, Ni, Rb, V, Ce, Pb, Zr, Cr, Cu, Zn, Sr, Ba) was then conducted through a multi-method analysis, including Pearson correlation analysis, Kendall correlation coefficient analysis and principal component analysis. Research results indicated that sources of winter dustfall in Beijing city were mainly composed of crustal sources (including road dust, construction dust and remote transmission of dust) and the burning of fossil fuels (vehicle emissions, coal combustion, biomass combustion and industrial processes). PMID:26591998

  441. New Methods for Analysis of Spatial Distribution and Coaggregation of Microbial Populations in Complex Biofilms

    PubMed Central

    Almstrand, Robert; Daims, Holger; Persson, Frank; Sörensson, Fred

    2013-01-01

    In biofilms, microbial activities form gradients of substrates and electron acceptors, creating a complex landscape of microhabitats, often resulting in structured localization of the microbial populations present. To understand the dynamic interplay between and within these populations, quantitative measurements and statistical analysis of their localization patterns within the biofilms are necessary, and adequate automated tools for such analyses are needed. We have designed and applied new methods for fluorescence in situ hybridization (FISH) and digital image analysis of directionally dependent (anisotropic) multispecies biofilms. A sequential-FISH approach allowed multiple populations to be detected in a biofilm sample. This was combined with an automated tool for vertical-distribution analysis by generating in silico biofilm slices and the recently developed Inflate algorithm for coaggregation analysis of microbial populations in anisotropic biofilms. As a proof of principle, we show distinct stratification patterns of the ammonia oxidizers Nitrosomonas oligotropha subclusters I and II and the nitrite oxidizer Nitrospira sublineage I in three different types of wastewater biofilms, suggesting niche differentiation between the N. oligotropha subclusters, which could explain their coexistence in the same biofilms. Coaggregation analysis showed that N. oligotropha subcluster II aggregated closer to Nitrospira than did N. oligotropha subcluster I in a pilot plant nitrifying trickling filter (NTF) and a moving-bed biofilm reactor (MBBR), but not in a full-scale NTF, indicating important ecophysiological differences between these phylogenetically closely related subclusters. By using high-resolution quantitative methods applicable to any multispecies biofilm in general, the ecological interactions of these complex ecosystems can be understood in more detail. PMID:23892743

  442. A Geoinformatics Approach to LiDAR / ALSM Data Distribution, Interpolation, and Analysis

    NASA Astrophysics Data System (ADS)

    Crosby, C. J.; Conner, J.; Frank, E.; Arrowsmith, J. R.; Memon, A.; Nandigam, V.; Wurman, G.; Baru, C.

    2005-12-01

    Distribution, interpolation and analysis of large LiDAR (Light Detection And Ranging, also known as ALSM (Airborne Laser Swath Mapping)) datasets push the computational limits of typical data distribution and processing systems. The high point density of LiDAR datasets makes grid interpolation difficult for geoscience users who lack the computing and software resources necessary to handle these massive data volumes. We are using a geoinformatics approach to the distribution, interpolation and analysis of LiDAR data that capitalizes on cyberinfrastructure being developed as part of the GEON project (http://www.geongrid.org). Our approach utilizes a comprehensive workflow-based solution that begins with user-defined selection of a subset of raw data and ends with download and visualization of interpolated surfaces and derived products. The workflow environment allows us to modularize and generalize the procedure. It provides the freedom to easily plug in new processes, to utilize existing sub-workflows within an analysis, and to easily extend or modify the analysis using drag-and-drop functionality through the Kepler workflow management system. In this GEON-based workflow, the billions of points within a LiDAR dataset point cloud are hosted in an IBM DB2 spatial database running on the DataStar terascale computer at San Diego Supercomputer Center, a machine designed specifically for data-intensive computations. Data selection is performed via an ArcIMS-based interface that allows users to execute spatial and attribute subset queries on the larger dataset. The subset of data is then passed to a GRASS Open Source GIS-based web service, "lservice", that handles interpolation to grid and analysis of the data. Lservice was developed entirely within the open source domain and offers spline and inverse distance weighted (IDW) interpolation to grid with user-defined resolution and parameters. We also compute geomorphic metrics such as slope, curvature, and aspect. Users may choose to download their results in ESRI or ASCII grid formats as well as GeoTIFF. Additionally, our workflow feeds into GEON web services in development that will allow visualization of Lservice outputs in either a web browser window or in 3D through Fledermaus' free viewer iView3D (or our own OpenGL-based tools). This geoinformatics-based system will allow GEON to host LiDAR point cloud data for the greater geoscience community, including data collected by the National Center for Airborne Laser Mapping (NCALM). In addition, most of the functions within this workflow are not limited to LiDAR data and may be used for distributing, interpolating and visualizing any computationally intensive point dataset (such as gravity). By utilizing the computational infrastructure developed by GEON, this system can democratize LiDAR data access for the geoscience community.
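Record 442's "lservice" offers spline and inverse distance weighted (IDW) gridding; the IDW step itself is compact. A minimal NumPy sketch of IDW onto a regular grid follows; the point data, power parameter, and grid extent are invented for illustration, and this is not the GRASS implementation:

    import numpy as np

    def idw_grid(px, py, pz, gx, gy, power=2.0, eps=1e-12):
        """Inverse-distance-weighted interpolation of scattered points
        (px, py, pz) onto the grid defined by 1-D axes gx, gy."""
        GX, GY = np.meshgrid(gx, gy)                          # grid node coordinates
        d = np.hypot(GX[..., None] - px, GY[..., None] - py)  # node-to-point distances
        w = 1.0 / (d + eps) ** power                          # IDW weights
        return (w * pz).sum(axis=-1) / w.sum(axis=-1)

    # Toy "point cloud": elevations at scattered (x, y) locations.
    px = np.array([0.0, 1.0, 2.0, 1.5])
    py = np.array([0.0, 2.0, 1.0, 0.5])
    pz = np.array([10.0, 14.0, 12.0, 11.0])
    grid = idw_grid(px, py, pz, np.linspace(0, 2, 5), np.linspace(0, 2, 5))
    print(grid.round(2))

The power parameter controls how quickly a point's influence decays with distance; higher values make the surface track nearby points more tightly.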
  443. Analysis of Distribution of Ingredients in Commercially Available Clarithromycin Tablets Using Near-Infrared Chemical Imaging with Principal Component Analysis and Partial Least Squares

    PubMed

    Koide, Tatsuo; Yamamoto, Yoshihisa; Fukami, Toshiro; Katori, Noriko; Okuda, Haruhiro; Hiyama, Yukio

    2015-01-01

    The aim of this study was to evaluate pharmaceuticals using a near-infrared chemical imaging (NIR-CI) technique for visualizing the distribution of ingredients in solid dosage forms of commercially available clarithromycin tablets. The cross section of a tablet was measured using the NIR-CI system to evaluate the distribution of ingredients in the tablet. The chemical images were generated by applying multivariate analysis methods, principal component analysis (PCA) and partial least squares (PLS), to normalized near-infrared (NIR) spectral data. We obtained spectral and distributional information on clarithromycin, cornstarch, and magnesium stearate by using PCA. On the basis of this information, the distribution images of these ingredients were generated using PLS. The results of the PCA enabled us to analyze individual components by using PLS even when sufficient information on the products was not available. However, some ingredients such as binder could not be detected using NIR-CI, because their particle sizes were smaller than the pixel size (approximately 25×25×50 µm) and they were present in low concentrations. The combined analysis using both PCA and PLS with NIR-CI was useful for analyzing the distribution of ingredients in a commercially available pharmaceutical even when sufficient information on the product is not available. PMID:26329859
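The PCA-then-PLS workflow in record 443 can be illustrated with scikit-learn on synthetic normalized spectra. This is a hedged sketch: the spectra below are simulated two-component mixtures, not NIR-CI data, and the band shapes and concentrations are invented:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)

    # Simulate 200 "pixel spectra" as mixtures of two component spectra.
    wavelengths = np.linspace(0, 1, 50)
    comp_a = np.exp(-((wavelengths - 0.3) ** 2) / 0.01)  # stand-in for the API band
    comp_b = np.exp(-((wavelengths - 0.7) ** 2) / 0.02)  # stand-in for an excipient band
    conc = rng.uniform(0, 1, size=200)                   # fraction of component A per pixel
    spectra = np.outer(conc, comp_a) + np.outer(1 - conc, comp_b)
    spectra += rng.normal(0, 0.01, spectra.shape)        # measurement noise

    # Exploratory step: PCA reveals how many factors drive the variance.
    pca = PCA(n_components=3).fit(spectra)
    print(pca.explained_variance_ratio_.round(3))        # first PC dominates

    # Quantitative step: PLS regresses spectra onto known concentrations,
    # so the fitted model can map each pixel to a predicted abundance.
    pls = PLSRegression(n_components=2).fit(spectra, conc)
    print(float(pls.score(spectra, conc)))               # R^2 close to 1

In an imaging context the predicted abundances would be reshaped back to the pixel grid to form the distribution image.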
  444. Distributed analysis with CRAB: The client-server architecture evolution and commissioning

    SciTech Connect

    Codispoti, G.; Cinquilli, M.; Fanfani, A.; Fanzago, F.; Farina, F.; Lacaprara, S.; Miccio, V.; Spiga, D.; Vaandering, E.

    2008-01-01

    CRAB (CMS Remote Analysis Builder) is the tool used by CMS to enable running physics analysis in a transparent manner over data distributed across many sites. It abstracts out the interaction with the underlying batch farms, grid infrastructure and CMS workload management tools, such that it is easily usable by non-experts. CRAB can be used as a direct interface to the computing system or can delegate the user task to a server. Major efforts have been dedicated to the development of the client-server system, allowing the user to deal only with a simple and intuitive interface and to delegate all the work to a server. The server takes care of handling the user's jobs during the whole lifetime of the user's task. In particular, it takes care of data and resource discovery, process tracking and output handling. It also provides services such as automatic resubmission in case of failures, notification to the user of the task status, and automatic blacklisting of sites showing evident problems, beyond what is provided by the existing grid infrastructure. The CRAB server architecture and its deployment are presented, as well as the current status and future development. In addition, the experience in using the system for initial detector commissioning activities and data analysis is summarized.

  445. Multi-site identification of a distributed hydrological nitrogen model using Bayesian uncertainty analysis

    NASA Astrophysics Data System (ADS)

    Jiang, Sanyuan; Jomaa, Seifeddine; Büttner, Olaf; Meon, Günter; Rode, Michael

    2015-10-01

    For capturing spatial variations of runoff and nutrient fluxes attributed to catchment heterogeneity, multi-site hydrological water quality monitoring strategies are increasingly put into practice. This study aimed to investigate the impacts of spatially distributed streamflow and streamwater Inorganic Nitrogen (IN) concentration observations on the identification of a continuous-time, spatially semi-distributed and process-based hydrological water quality model, HYPE (HYdrological Predictions for the Environment). A Bayesian inference based approach, DREAM(ZS) (DiffeRential Evolution Adaptive Metropolis algorithm), was combined with HYPE to implement model optimisation and uncertainty analysis of streamflow and streamwater IN concentration simulations at a nested meso-scale catchment in central Germany. To this end, a 10-year period (1994-1999 for calibration and 1999-2004 for validation) was utilised. We compared the parameters' posterior distributions, modelling performance using the best estimated parameter set, and 95% prediction confidence intervals at the catchment outlet for the calibration period that were derived from single-site calibration (SSC) and multi-site calibration (MSC) modes. For SSC, streamflow and streamwater IN concentration observations at only the catchment outlet were used, while for MSC, streamflow and streamwater IN concentration observations from both the catchment outlet and two internal sites were considered. Results showed that the uncertainty intervals of hydrological water quality parameters' posterior distributions estimated from MSC were narrower than those obtained from SSC. In addition, it was found that MSC outperformed SSC on streamwater IN concentration simulations at internal sites for both calibration and validation periods, while the influence on streamflow modelling performance was small. This can be explained by the "nested" nature of the catchment and the high correlation between discharge observations from different sites. Results revealed, also, that the 95% prediction confidence intervals of streamflow and streamwater IN concentration estimated from MSC were more credible than those estimated from SSC, which is reflected by narrower confidence intervals and a higher percentage of observations bracketed in the estimated unit confidence intervals. The outcomes of this study point out the importance of spatially distributed hydrological water quality observations for improving model parameter identification and provide guidelines for the choice of an adequate calibration strategy and the design of hydrological water quality monitoring campaigns.
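DREAM(ZS) in record 445 is a specialized adaptive sampler, but the underlying Bayesian calibration idea can be shown with a plain random-walk Metropolis sketch against a toy one-parameter "model". Everything here (model, data, prior, proposal width) is invented for illustration; it is neither HYPE nor DREAM(ZS):

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy "hydrological model": streamflow = k * rainfall, true k = 0.6.
    rain = rng.uniform(1, 10, size=40)
    obs = 0.6 * rain + rng.normal(0, 0.5, size=40)       # noisy observations

    def log_post(k, sigma=0.5):
        if not 0.0 < k < 2.0:                            # uniform prior on (0, 2)
            return -np.inf
        resid = obs - k * rain
        return -0.5 * np.sum((resid / sigma) ** 2)       # Gaussian likelihood

    # Random-walk Metropolis: propose, then accept with prob min(1, ratio).
    chain, k = [], 1.0
    lp = log_post(k)
    for _ in range(5000):
        k_new = k + rng.normal(0, 0.05)
        lp_new = log_post(k_new)
        if np.log(rng.uniform()) < lp_new - lp:
            k, lp = k_new, lp_new
        chain.append(k)

    post = np.array(chain[1000:])                        # drop burn-in
    print(post.mean().round(3), np.percentile(post, [2.5, 97.5]).round(3))

The posterior percentile interval printed at the end plays the same role as the 95% prediction confidence intervals discussed in the record: more informative data (e.g. multi-site observations) narrows it.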
  446. Sieveless particle size distribution analysis of particulate materials through computer vision

    SciTech Connect

    Igathinathane, C.; Pordesimo, L. O.; Columbus, Eugene P.; Batchelor, William D.; Sokhansanj, Shahabaddine

    2009-05-01

    This paper explores the inconsistency of length-based separation by mechanical sieving of particulate materials with standard sieves, which is the standard method of particle size distribution (PSD) analysis. We observed inconsistencies of length-based separation of particles using standard sieves compared with manual measurements, which showed deviations of 17-22 times. In addition, we demonstrated that the falling-through effect of particles cannot be avoided irrespective of the wall thickness of the sieve. We proposed and utilized computer vision with image processing as an alternative approach, wherein a user-coded Java ImageJ plugin was developed to evaluate PSD based on the length of particles. A regular flatbed scanner acquired digital images of particulate material. The plugin determines particle lengths from Feret's diameter and widths from the pixel-march method, the minor axis, or the minimum dimension of the bounding rectangle, utilizing the digital images after assessing the particles' area and shape (convex or nonconvex). The plugin also includes the determination of several significant dimensions and PSD parameters. Test samples utilized were ground biomass obtained from the first thinning and mature stand of southern pine forest residues, oak hardwood, switchgrass, elephant grass, giant miscanthus, and wheat straw, as well as Basmati rice. The sieveless PSD analysis method utilized the true separation of all particles into groups based on their distinct lengths (419-639 particles in the samples studied), with each group truly represented by its exact length. This approach ensured length-based separation without the inconsistencies observed with mechanical sieving. Image-based sieve simulation (developed separately) indicated a significant effect (P < 0.05) of the number of sieves used in PSD analysis, especially with non-uniform material such as ground biomass, and more than 50 equally spaced sieves were required to match the sieveless all-distinct-particles PSD analysis. The results substantiate that mechanical sieving, owing to handling limitations and inconsistent length-based separation of particles, is inadequate for determining the PSD of non-uniform particulate samples. The developed computer vision sieveless PSD analysis approach has the potential to replace standard mechanical sieving. The plugin can be readily extended to model the PSD of materials (e.g., Rosin-Rammler) and to mass-based analysis, while providing several advantages such as accuracy, speed, low cost, automated analysis, and reproducible results.
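The Feret-diameter measurement at the heart of record 446 is available off the shelf in scikit-image (version 0.18 or later); a small sketch on a synthetic binary image follows. The image and particle shapes are invented, and this is not the authors' ImageJ plugin:

    import numpy as np
    from skimage.measure import label, regionprops

    # Synthetic binary image with two elongated "particles".
    img = np.zeros((40, 60), dtype=bool)
    img[5:9, 5:35] = True      # a 4 x 30 pixel particle
    img[20:30, 40:44] = True   # a 10 x 4 pixel particle

    for region in regionprops(label(img)):
        # feret_diameter_max is the longest caliper distance across the
        # particle, the same length notion used for sieveless PSD analysis.
        print(f"area={region.area}, length~{region.feret_diameter_max:.1f} px")

Binning these per-particle lengths (after converting pixels to millimetres with the scanner resolution) yields the length-based PSD directly, with no sieve simulation needed.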
  447. Federated Giovanni: A Distributed Web Service for Analysis and Visualization of Remote Sensing Data

    NASA Technical Reports Server (NTRS)

    Lynnes, Chris

    2014-01-01

    The Geospatial Interactive Online Visualization and Analysis Interface (Giovanni) is a popular tool for users of the Goddard Earth Sciences Data and Information Services Center (GES DISC) and has been in use for over a decade. It provides a wide variety of algorithms and visualizations to explore large remote sensing datasets without having to download the data and without having to write readers and visualizers for it. Giovanni is now being extended to enable its capabilities at other data centers within the Earth Observing System Data and Information System (EOSDIS). This Federated Giovanni will allow four other data centers to add and maintain their data within Giovanni on behalf of their user community. Those data centers are the Physical Oceanography Distributed Active Archive Center (PO.DAAC), MODIS Adaptive Processing System (MODAPS), Ocean Biology Processing Group (OBPG), and Land Processes Distributed Active Archive Center (LP DAAC). Three tiers are supported: Tier 1 (GES DISC-hosted) gives the remote data center a data management interface to add and maintain data, which are provided through the Giovanni instance at the GES DISC. Tier 2 packages Giovanni up as a virtual machine for distribution to and deployment by the other data centers. Data variables are shared among data centers by sharing documents from the Solr database that underpins Giovanni's data management capabilities. However, each data center maintains its own instance of Giovanni, exposing the variables of most interest to its user community. Tier 3 is a Shared Source model, in which the data centers cooperate to extend the infrastructure by contributing source code.
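Record 447 shares data variables between centers as documents in a Solr database, and Solr's standard REST query API makes such documents easy to inspect. The sketch below is hedged throughout: the host, core name, and field names are invented for illustration (the actual Giovanni schema is not described in the record); only the Solr query parameters (q, fl, wt, rows) are standard:

    import json
    import urllib.parse
    import urllib.request

    # Hypothetical Solr endpoint and core; field names are invented.
    base = "http://solr.example.org:8983/solr/giovanni_variables/select"
    params = {"q": "dataCenter:PODAAC", "fl": "variableId,units",
              "wt": "json", "rows": 10}

    with urllib.request.urlopen(base + "?" + urllib.parse.urlencode(params)) as resp:
        docs = json.load(resp)["response"]["docs"]

    for doc in docs:
        print(doc.get("variableId"), doc.get("units"))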
  448. Review and analysis of dense linear system solver package for distributed memory machines

    NASA Technical Reports Server (NTRS)

    Narang, H. N.

    1993-01-01

    A dense linear system solver package recently developed at the University of Texas at Austin for distributed memory machines (e.g. Intel Paragon) has been reviewed and analyzed. The package contains about 45 software routines, some written in FORTRAN and some in C, and forms the basis for parallel/distributed solutions of systems of linear equations encountered in many problems of scientific and engineering nature. The package, being studied by the Computer Applications Branch of the Analysis and Computation Division, may provide a significant computational resource for NASA scientists and engineers in parallel/distributed computing. Since the package is new and not well tested or documented, many of its underlying concepts and implementations were unclear; our task was to review, analyze, and critique the package as a step in the process that will enable scientists and engineers to apply it to the solution of their problems. All routines in the package were reviewed and analyzed. Underlying theory or concepts which exist in the form of published papers, technical reports, or memos were either obtained from the author or from the scientific literature; and general algorithms, explanations, examples, and critiques have been provided to explain the workings of these programs. Wherever things were still unclear, communications were made with the developer (author), either by telephone or by electronic mail, to understand the workings of the routines. Whenever possible, tests were made to verify the concepts and logic employed in the implementations. A detailed report is being separately documented to explain the workings of these routines.

  449. Federated Giovanni: A Distributed Web Service for Analysis and Visualization of Remote Sensing Data

    NASA Astrophysics Data System (ADS)

    Lynnes, C.; Hegde, M.; Acker, J. G.; Mattmann, C. A.; D'Sa, E. J.; Thompson, C. K.; Kalb, V.; Ramirez, P.; Franz, B. A.; Lossing, R.; Fang, F.; Torbert, C.; Hendrix, C.

    2014-12-01

    The Geospatial Interactive Online Visualization and Analysis Interface (Giovanni) is a popular tool for users of the Goddard Earth Sciences Data and Information Services Center (GES DISC) and has been in use for over a decade. It provides a wide variety of algorithms and visualizations to explore large remote sensing datasets without having to download the data and without having to write readers and visualizers for it. Giovanni is now being extended to enable its capabilities at other data centers within the Earth Observing System Data and Information System (EOSDIS). This "Federated Giovanni" will allow four other data centers to add and maintain their data within Giovanni on behalf of their user community. Those data centers are the Physical Oceanography Distributed Active Archive Center (PO.DAAC), MODIS Adaptive Processing System (MODAPS), Ocean Biology Processing Group (OBPG), and Land Processes Distributed Active Archive Center (LP DAAC). Three tiers are supported: Tier 1 (GES DISC-hosted) gives the remote data center a data management interface to add and maintain data, which are provided through the Giovanni instance at the GES DISC. Tier 2 packages Giovanni up as a virtual machine for distribution to and deployment by the other data centers. Data variables are shared among data centers by sharing documents from the Solr database that underpins Giovanni's data management capabilities. However, each data center maintains its own instance of Giovanni, exposing the variables of most interest to its user community. Tier 3 is a Shared Source model, in which the data centers cooperate to extend the infrastructure by contributing source code.

  450. Tight finite-key analysis for passive decoy-state quantum key distribution under general attacks

    E-print Network

    Chun Zhou; Wan-su Bao; Hong-wei Li; Yang Wang; Yuan Li; Zhen-qiang Yin; Wei Chen; Zheng-fu Han

    2014-06-02

    For quantum key distribution (QKD) using spontaneous parametric down-conversion sources (SPDCSs), the passive decoy-state protocol has been proved to be efficiently close to the theoretical limit of an infinite decoy-state protocol. In this paper, we apply a tight finite-key analysis for the passive decoy-state QKD using SPDCSs. Combining the security bound based on the uncertainty principle with the passive decoy-state protocol, a concise and stringent formula for calculating the key generation rate for QKD using SPDCSs is presented. The simulation shows that the secure distance under our formula can reach up to 182 km when the number of sifted data is 10^10. Our results also indicate that, under the same deviation of statistical fluctuation due to finite-size effects, the passive decoy-state QKD with SPDCSs can perform as well as the active decoy-state QKD with a weak coherent source.
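Record 450 does not reproduce its key-rate formula here, but the finite-size "statistical fluctuation" it refers to is typically a concentration-bound deviation term that shrinks as the sifted-block size n grows. The sketch below is a generic Hoeffding-style illustration of that scaling only, not the paper's actual bound:

    import math

    def deviation(n, eps=1e-10):
        """Hoeffding-style deviation of an observed rate from its mean:
        with probability >= 1 - eps, |observed - true| <= the value returned."""
        return math.sqrt(math.log(1.0 / eps) / (2.0 * n))

    for n in (10**6, 10**8, 10**10):  # 10^10 is the block size quoted in the record
        print(f"n=1e{int(math.log10(n)):2d}  delta={deviation(n):.2e}")

The printed deltas shrink as 1/sqrt(n), which is why large sifted-data blocks such as 10^10 bring finite-key rates close to their asymptotic values.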
  451. Mutational load distribution analysis yields metrics reflecting genetic instability during pancreatic carcinogenesis

    PubMed Central

    Tarafa, Gemma; Tuck, David; Ladner, Daniela; Topazian, Mark; Brand, Randall; Deters, Carolyn; Moreno, Victor; Capella, Gabriel; Lynch, Henry; Lizardi, Paul; Costa, Jose

    2008-01-01

    Considering carcinogenesis as a microevolutionary process, best described in the context of metapopulation dynamics, provides the basis for theoretical and empirical studies that indicate it is possible to estimate the relative contribution of genetic instability and selection to the process of tumor formation. We show that mutational load distribution analysis (MLDA) of DNA found in pancreatic fluids yields biometrics that reflect the interplay of instability, selection, accident, and gene function that determines the eventual emergence of a tumor. An in silico simulation of carcinogenesis indicates that MLDA may be a suitable tool for early detection of pancreatic cancer. We also present evidence indicating that, when performed serially in individuals harboring a p16 germ-line mutation bestowing a high risk for pancreatic cancer, MLDA may be an effective tool for the longitudinal assessment of risk and early detection of pancreatic cancer. PMID:18337498

  452. System analysis for the Huntsville Operational Support Center distributed computer system

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.; Mauldin, J.

    1984-01-01

    The Huntsville Operations Support Center (HOSC) is a distributed computer system used to provide real-time data acquisition, analysis and display during NASA space missions and to perform simulation and study activities during non-mission times. The primary purpose is to provide a HOSC system simulation model that is used to investigate the effects of various HOSC system configurations. Such a model would be valuable in planning the future growth of HOSC and in ascertaining the effects of data rate variations, update table broadcasting and smart display terminal data requirements on the HOSC HYPERchannel network system. A simulation model was developed in PASCAL and results of the simulation model for various system configurations were obtained. A tutorial of the model is presented and the results of simulation runs are presented. Some very high data rate situations were simulated to observe the effects of the HYPERchannel switchover from contention to priority mode under high channel loading.

  453. Polarimetric dehazing method for dense haze removal based on distribution analysis of angle of polarization

    PubMed

    Liang, Jian; Ren, Liyong; Ju, Haijuan; Zhang, Wenfei; Qu, Enshi

    2015-10-01

    Many dehazing methods have proven to be effective in removing haze from hazy images, but few of them are adaptive in handling dense haze. In this paper, based on an analysis of the angle of polarization (AOP) distribution, we propose a polarimetric dehazing method which is verified to be capable of substantially enhancing the contrast and the range of visibility of images taken in dense haze. It is found that the estimation precision of the intensity of airlight is a key factor that determines the dehazing quality; fortunately, our method inherently involves a high-precision estimation. In the experiments a good dehazing performance is demonstrated, especially for dense haze removal; we find that the visibility can be enhanced by at least 74%. Besides, the method can be used not only in dense haze but also in severe sea fog. PMID:26480129
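The angle of polarization in record 453 is computed pixel-wise from the linear Stokes parameters, which in turn come from intensity images taken through a polarizer at several orientations. A minimal sketch using the standard 0°/45°/90° measurement scheme follows; the input images are synthetic random fields, not haze data:

    import numpy as np

    # Intensities through a linear polarizer at 0, 45 and 90 degrees.
    rng = np.random.default_rng(2)
    i0, i45, i90 = rng.uniform(0.2, 1.0, size=(3, 4, 4))

    # Linear Stokes parameters per pixel.
    s1 = i0 - i90                # Q
    s2 = 2.0 * i45 - (i0 + i90)  # U
    # Angle of polarization (radians), defined modulo pi.
    aop = 0.5 * np.arctan2(s2, s1)
    print(np.degrees(aop).round(1))

Analyzing the distribution of these per-pixel angles is what lets the method separate airlight from scene radiance before the usual polarimetric dehazing reconstruction.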
  454. Finite-size analysis of a continuous-variable quantum key distribution

    SciTech Connect

    Leverrier, Anthony; Grangier, Philippe

    2010-06-15

    The goal of this paper is to extend the framework of finite-size analysis recently developed for quantum key distribution to continuous-variable protocols. We do not solve this problem completely here, and we mainly consider the finite-size effects on the parameter estimation procedure. Despite the fact that some questions are left open, we are able to give an estimation of the secret key rate for protocols which do not contain a postselection procedure. As expected, these results are significantly more pessimistic than those obtained in the asymptotic regime. However, we show that recent continuous-variable protocols are able to provide fully secure secret keys in the finite-size scenario, over distances larger than 50 km.

  455. Importance of neutron energy distribution in borehole activation analysis in relatively dry, low-porosity rocks

    USGS Publications Warehouse

    Senftle, F.E.; Moxham, R.M.; Tanner, A.B.; Philbin, P.W.; Boynton, G.R.; Wager, R.E.

    1977-01-01

    To evaluate the importance of variations in the neutron energy distribution in borehole activation analysis, capture gamma-ray measurements were made in relatively dry, low-porosity gabbro of the Duluth Complex. Although sections of over a meter of solid rock were encountered in the borehole, there was significant fracturing with interstitial water, leading to a substantial variation of water with depth in the borehole. The linear-correlation coefficients calculated for the peak intensities of several elements compared to the chemical core analyses were generally poor throughout the depth investigated. The data suggest, and arguments are given which indicate, that the variation of the thermal-to-intermediate-to-fast neutron flux density as a function of borehole depth is a serious source of error and a major cause of the changes observed in the capture gamma-ray peak intensities. These variations in neutron energy may also cause a shift in the observed capture gamma-ray energy.

  456. Finite element analysis of stress distribution in four different endodontic post systems in a model canine

    PubMed

    Chen, Aijie; Feng, Xiaoli; Zhang, Yanli; Liu, Ruoyu; Shao, Longquan

    2015-08-17

    To investigate the stress distribution in a maxillary canine restored with each of four different post systems at different levels of alveolar bone loss, two-dimensional finite element analysis (FEA) was performed by modeling a severely damaged canine with four different post systems: CAD/CAM zirconia, CAD/CAM glass fiber, cast titanium, and cast gold. A force of 100 N was applied to the crown, and the von Mises stresses were obtained. FEA revealed that the CAD/CAM zirconia post system produced the lowest maximum von Mises stress in the dentin layer at 115.8 MPa, while the CAD/CAM glass fiber post produced the highest stress in the dentin at 518.2 MPa. For a severely damaged anterior tooth, a zirconia post system is the best choice, while a cast gold post ranks second. The CAD/CAM glass fiber post is least recommended in terms of stress level in the dentin. PMID:26406057
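The von Mises stress reported in record 456 is a scalar derived from the full stress tensor at each element; a short sketch of the standard formula follows. The example tensor is arbitrary, not taken from the canine model:

    import numpy as np

    def von_mises(s):
        """Von Mises equivalent stress from a symmetric 3x3 stress tensor (MPa)."""
        sx, sy, sz = s[0, 0], s[1, 1], s[2, 2]
        txy, tyz, tzx = s[0, 1], s[1, 2], s[2, 0]
        return np.sqrt(0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
                       + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2))

    stress = np.array([[80.0, 25.0,  0.0],
                       [25.0, 40.0, 10.0],
                       [ 0.0, 10.0,  5.0]])
    print(f"{von_mises(stress):.1f} MPa")

Comparing this scalar against the material's yield or fracture strength is what allows a single "maximum stress" figure, such as the 115.8 MPa quoted for the zirconia post, to summarize a whole stress field.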
  457. Large Scale Distributed Parameter Model of Main Magnet System and Frequency Decomposition Analysis

    SciTech Connect

    Zhang, W.; Marneris, I.; Sandberg, J.

    2007-06-25

    A large accelerator main magnet system consists of hundreds, even thousands, of dipole magnets. They are linked together under selected configurations to provide highly uniform dipole fields when powered. Distributed capacitance, insulation resistance, coil resistance, magnet inductance, and coupling inductance of upper and lower pancakes make each magnet a complex network. When all dipole magnets are chained together in a circle, they become a coupled pair of very high order complex ladder networks. In this study, a network of more than a thousand inductive, capacitive or resistive elements is used to model an actual system. The circuit is a large-scale network, and its equivalent polynomial form has a degree of several hundred. Analysis of this high-order circuit and simulation of the response of any or all components is often computationally infeasible. We present methods that use a frequency decomposition approach to effectively simulate and analyze magnet configurations and power supply topologies.

  458. Risk analysis of Trojan-horse attacks on practical quantum key distribution systems

    E-print Network

    Nitin Jain; Birgit Stiller; Imran Khan; Vadim Makarov; Christoph Marquardt; Gerd Leuchs

    2014-12-19

    An eavesdropper Eve may probe a quantum key distribution (QKD) system by sending a bright pulse from the quantum channel into the system and analyzing the back-reflected pulses. Such Trojan-horse attacks can breach the security of the QKD system if appropriate safeguards are not installed or if they can be fooled by Eve. We present a risk analysis of such attacks based on extensive spectral measurements, such as transmittance, reflectivity, and detection sensitivity of some critical components used in typical QKD systems. Our results indicate the existence of wavelength regimes where the attacker gains considerable advantage as compared to launching an attack at 1550 nm. We also propose countermeasures to reduce the risk of such attacks.

  459. System Analysis for the Huntsville Operation Support Center, Distributed Computer System

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.; Massey, D.

    1985-01-01

    HOSC, as a distributed computing system, is responsible for data acquisition and analysis during Space Shuttle operations. HOSC also provides computing services for Marshall Space Flight Center's non-mission activities. As mission and non-mission activities change, so do the support functions of HOSC, demonstrating the need for some method of simulating activity at HOSC in various configurations. The simulation developed in this work primarily models the HYPERchannel network. The model simulates the activity of a steady-state network, reporting statistics such as transmitted bits, collision statistics, frame sequences transmitted, and average message delay. These statistics are used to evaluate such performance indicators as throughput, utilization, and delay. Thus the overall performance of the network is evaluated, as well as predicting possible overload conditions.

  460. Analysis of energy distribution in a loaded quantum anharmonic oscillator in a wide temperature range

    NASA Astrophysics Data System (ADS)

    Gilyarov, V. L.; Slutsker, A. I.

    2010-05-01

    Analysis of the energy distribution in an ensemble of quantum anharmonic oscillators loaded by an external force in a wide temperature range (from T = 0) is carried out using a general approach based on the virial theorem. At T = 0, anharmonic effects are observed: a linear variation of the zero-point energy of an oscillator under loading (energy decrease during extension and increase under compression) and a linear variation of the average kinetic and potential energy components. At high temperatures, at which the dynamics of the oscillators becomes classical, the anharmonic effects are manifested in a linear variation in the vibrational energy and a linear variation in the average kinetic and potential energy components upon an increase in force. Mutually compensating variation of the average kinetic and potential energy components of the internal dynamic energy of an oscillator (energy redistribution upon loading) takes place at both low and high temperatures.

  461. Convection-enhanced delivery of MANF - volume of distribution analysis in porcine putamen and substantia nigra

    PubMed

    Barua, N. U.; Bienemann, A. S.; Woolley, M.; Wyatt, M. J.; Johnson, D.; Lewis, O.; Irving, C.; Pritchard, G.; Gill, S.

    2015-10-15

    Mesencephalic astrocyte-derived neurotrophic factor (MANF) is a 20 kDa human protein which has both neuroprotective and neurorestorative activity on dopaminergic neurons and therefore may have application for the treatment of Parkinson's Disease. The aims of this study were to determine the translational potential of convection-enhanced delivery (CED) of MANF for the treatment of PD by studying its distribution in porcine putamen and substantia nigra and to correlate histological distribution with co-infused gadolinium-DTPA using real-time magnetic resonance imaging. We describe the distribution of MANF in porcine putamen and substantia nigra using an implantable CED catheter system, with co-infused gadolinium-DTPA allowing real-time MRI tracking of infusate distribution. The distribution of gadolinium-DTPA on MRI correlated well with immunohistochemical analysis of MANF distribution. Volumetric analysis of MANF IHC staining indicated a volume of infusion (Vi) to volume of distribution (Vd) ratio of 3 in putamen and 2 in substantia nigra.
    This study confirms the translational potential of CED of MANF as a novel treatment strategy in PD and also supports the co-infusion of gadolinium as a proxy measure of MANF distribution in future clinical studies. Further study is required to determine the optimum infusion regime, flow rate and frequency of infusions in human trials. PMID:26276514

  462. Monte Carlo analysis of the enhanced transcranial penetration using distributed near-infrared emitter array

    PubMed

    Yue, Lan; Humayun, Mark S.

    2015-08-01

    Transcranial near-infrared (NIR) treatment of neurological diseases has gained recent momentum. However, the low NIR dose available to the brain (a consequence of severe scattering and absorption of the photons by human tissues) largely limits its effectiveness in clinical use. Here we propose to take advantage of the strong scattering effect of the cranial tissues by applying an evenly distributed multiunit emitter array on the scalp to enhance the cerebral photon density while maintaining each single emitter under the safe thermal limit. By employing the Monte Carlo method, we simulated the transcranial propagation of the array-emitted light and demonstrated markedly enhanced intracranial photon flux as well as improved uniformity of the photon distribution. These enhancements are correlated with the source location, density, and wavelength of light. To the best of our knowledge, we present the first systematic analysis of the intracranial light field established by a scalp-applied multisource array and reveal a strategy for the optimization of the therapeutic effects of NIR radiation. PMID:26252627

  463. Clustering analysis of water distribution systems: identifying critical components and community impacts

    PubMed

    Diao, K.; Farmani, R.; Fu, G.; Astaraie-Imani, M.; Ward, S.; Butler, D.

    2014-01-01

    Large water distribution systems (WDSs) are networks with both topological and behavioural complexity. It is therefore usually difficult to identify the key features of the properties of the system, and subsequently all the critical components within the system, for a given purpose of design or control. One way, however, is to more explicitly visualize the network structure and interactions between components by dividing a WDS into a number of clusters (subsystems). Accordingly, this paper introduces a clustering strategy that decomposes WDSs into clusters with stronger internal connections than external connections. The detected cluster layout is very similar to the community structure of the served urban area. As WDSs may expand along with urban development in a community-by-community manner, the correspondingly formed distribution clusters may reveal some crucial configurations of WDSs. For verification, the method is applied to identify all the critical links during firefighting for the vulnerability analysis of a real-world WDS. Moreover, both the most critical pipes and clusters are addressed, given the consequences of pipe failure. Compared with the enumeration method, the method used in this study identifies the same group of the most critical components, and provides similar criticality prioritizations of them, in a more computationally efficient time. PMID:25500465
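The cluster decomposition in record 463, subsystems with stronger internal than external connections, is exactly what graph community detection computes. Below is a hedged sketch using NetworkX greedy modularity maximization on a toy pipe network; the network is invented, and the paper's exact algorithm is not specified in the record:

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    # Toy water distribution network: nodes are junctions, edges are pipes.
    G = nx.Graph()
    G.add_edges_from([
        ("a1", "a2"), ("a2", "a3"), ("a3", "a1"),  # densely connected block A
        ("b1", "b2"), ("b2", "b3"), ("b3", "b1"),  # densely connected block B
        ("a3", "b1"),                              # single inter-block main
    ])

    for i, community in enumerate(greedy_modularity_communities(G)):
        print(f"cluster {i}: {sorted(community)}")
    # Edges crossing clusters (here a3-b1) are natural candidates for the
    # criticality analysis the record describes: cutting them isolates a cluster.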
As WDSs may expand along with urban development in a community-by-community manner, the correspondingly formed <span class="hlt">distribution</span> clusters may reveal some crucial configurations of WDSs. For verification, the method is applied to identify all the critical links during firefighting for the vulnerability <span class="hlt">analysis</span> of a real-world WDS. Moreover, both the most critical pipes and clusters are addressed, given the consequences of pipe failure. Compared with the enumeration method, the method used in this study identifies the same group of the most critical components, and provides similar criticality prioritizations of them in a more computationally efficient time. PMID:25500465</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1998JNuM..258.1517N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1998JNuM..258.1517N"><span id="translatedtitle"><span class="hlt">Analysis</span> and measurement of residual stress <span class="hlt">distribution</span> of vanadium/ceramics joints for fusion reactor applications</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nemoto, Yoshiyuki; Ueda, Kazukiyo; Satou, Manabu; Hasegawa, Akira; Abe, Katsunori</p> <p>1998-10-01</p> <p>Vanadium alloys are considered as candidate structural materials for fusion reactor system. When vanadium alloys are used in fusion reactor system, joining with ceramics for insulating is one of material issues to be solved to make component of fusion reactor. In the application of ceramics/metal jointing and coating, residual stress caused by difference of thermal expansion rate between ceramics and metals is an important factor in obtaining good bonding strength and soundness of coating. In this work, residual stress <span class="hlt">distribution</span> in direct diffusion bonded vanadium/alumina joint (jointing temperature: 1400°C) was measured by small area X-ray diffraction method. And the comparison of Finite Element Method (FEM) <span class="hlt">analysis</span> and actual stress <span class="hlt">distribution</span> was carried out. Tensile stress concentration at the edge of the boundary of the joint in alumina was observed. The residual stress concentration may cause cracks in alumina, or failure of bonding. Actually, cracks in alumina caused by thermal stress after bonding at 1500°C was observed. The stress concentration of the joint must be reduced to obtain good bonded joint. Lower bonding temperature or to devise the shape of the outer surface of the joint will reduce the stress concentration.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3720290','PMC'); return false;" href="http://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3720290"><span id="translatedtitle">Software LS-MIDA for efficient mass isotopomer <span class="hlt">distribution</span> <span class="hlt">analysis</span> in metabolic modelling</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2013-01-01</p> <p>Background The knowledge of metabolic pathways and fluxes is important to understand the adaptation of organisms to their biotic and abiotic environment. 
The specific distribution of stable isotope labelled precursors into metabolic products can be taken as a fingerprint of the metabolic events and dynamics through the metabolic networks. An open-source software tool is required that easily and rapidly calculates global isotope excess and isotopomer distribution from the mass spectra of labelled metabolites, derivatives and their fragments. Results: The open-source software "Least Square Mass Isotopomer Analyzer" (LS-MIDA) is presented, which processes experimental mass spectrometry (MS) data on the basis of metabolite information such as the number of atoms in the compound, the mass-to-charge ratio (m/e or m/z) values of the compounds and fragments under study, and the experimental relative MS intensities reflecting the enrichments of isotopomers in 13C- or 15N-labelled compounds, in comparison to the natural abundances in the unlabelled molecules. The software uses Brauman's least square method of linear regression. As a result, the global isotope enrichments of the metabolite or fragment under study and the molar abundances of each isotopomer are obtained and displayed. Conclusions: The new software provides an open-source platform that easily and rapidly converts experimental MS patterns of labelled metabolites into isotopomer enrichments that are the basis for subsequent observation-driven analysis of pathways and fluxes, as well as for model-driven metabolic flux calculations. PMID:23837681
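    The least-squares step at the heart of such isotopomer analysis can be sketched in a few lines. The following Python fragment is a hedged illustration only: the 3x3 design matrix, the measured intensities, and the two-label normalization are invented toy values, not LS-MIDA's actual algorithm or data.

        # Toy isotopomer least-squares: measured MS intensities are modelled as
        # a linear mix of isotopomer contributions; non-negative least squares
        # returns the molar abundances.
        import numpy as np
        from scipy.optimize import nnls

        # A[i, j] = expected relative intensity at mass m0+i from isotopomer j
        A = np.array([[0.95, 0.01, 0.00],
                      [0.04, 0.94, 0.02],
                      [0.01, 0.05, 0.98]])
        b = np.array([0.60, 0.25, 0.15])   # measured, normalised MS intensities

        abundances, _ = nnls(A, b)         # molar abundance of each isotopomer
        abundances /= abundances.sum()
        enrichment = np.dot(np.arange(3), abundances) / 2  # fraction of labelled positions
        print(abundances, enrichment)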
  466. Analysis of probabilistic distribution and range of average stress in each phase of heterogeneous materials

    NASA Astrophysics Data System (ADS)

    Hori, Muneo; Kubo, Junichiro

    1998-03-01

    The estimation of the crust stress is essential for the prediction of an earthquake. However, in situ measurement of stress is difficult, since data often show large variance, probably due to the heterogeneity of the rock mass. To clarify the effects of material heterogeneity on the stress distribution, this paper carries out two analyses: the prediction of a probability density function (PDF) from a given PDF of the material heterogeneity, and the computation of a range which bounds possible maximum/minimum values. Recursive formulae for computing the PDF are derived from the perturbation analysis of fields disturbed by the heterogeneity, and the range is determined from approximate solutions which bound the strain energy of an infinite body with arbitrary ellipsoidal inclusions. Analytic expressions for the recursive formulae and the approximate solutions are obtained by applying new techniques based on the equivalent inclusion method. Monte Carlo computation of the stress distribution in a heterogeneous body is made to verify the validity of the present analyses. It is shown that the predicted PDF and range are in good agreement with those obtained from the simulation.

  467. The use of discriminant analysis in predicting the distribution of bluetongue virus in Queensland, Australia.

    PubMed

    Ward, M P

    1994-01-01

    The climatic variables that were most useful in classifying the infection status of Queensland cattle herds with bluetongue virus were assessed using stepwise linear discriminant analysis. A discriminant function that included average annual rainfall and average daily maximum temperature was found to correctly classify 82.6% of uninfected herds and 72.4% of infected herds. Overall, the infection status of 74.1% of herds was correctly classified. The spatial distribution of infected herds was found to parallel that of the suspected vector, Culicoides brevitarsis. This evidence supports the role of this arthropod species as a vector of bluetongue viruses in Queensland. The effect of potential changes in temperature and rainfall (the so-called 'global warming' scenario) on the distribution of bluetongue virus infection of cattle herds in Queensland was then investigated. With an increase in both rainfall and temperature, the area of endemic bluetongue virus infection was predicted to extend a further 150 km inland in southern Queensland. The implications for sheep-raising in Queensland are discussed. PMID:8091641
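    The two-variable discriminant function described in this record lends itself to a compact sketch. The Python snippet below is illustrative only: the rainfall/temperature herd data are synthetic stand-ins, not the study's data, and the fitted coefficients will not reproduce the reported classification rates.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(1)
        # columns: average annual rainfall (mm), average daily max temperature (degC)
        uninfected = np.column_stack([rng.normal(500, 80, 60), rng.normal(26, 2, 60)])
        infected = np.column_stack([rng.normal(900, 80, 40), rng.normal(30, 2, 40)])
        X = np.vstack([uninfected, infected])
        y = np.array([0] * 60 + [1] * 40)   # 0 = uninfected herd, 1 = infected herd

        lda = LinearDiscriminantAnalysis().fit(X, y)
        print("coefficients:", lda.coef_, "intercept:", lda.intercept_)
        print("correctly classified: %.1f%%" % (100 * lda.score(X, y)))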
  468. Microprobe analysis of element distribution in bovine extracellular matrices and muscle

    SciTech Connect

    Engel, M.B.; Catchpole, H.R.

    1989-09-01

    The concentrations of some essential elements, Na, K, P, S and Cl, were determined by microprobe analysis in bovine extracellular matrices of cartilage, tendon and elastic tissue (ligamentum nuchae) and in muscle cells. The values for the different tissues were compared and related to the blood electrolyte concentrations. Among the connective tissues, the highest Na and lowest Cl values were found for cartilage, which bears a high negative charge. The lowest concentrations of these elements occurred in elastic tissue, which is relatively non-polar. In the three extracellular matrices, sodium levels exceeded potassium. In myofibers, potassium was the major cation, at 30 times the blood value and about 3 times the concentration of sodium. Chlorine values were around 0.4 that of blood. Sulfur and phosphorus are components of the tissue macromolecules. The negative charge on the extracellular matrices is a function of carboxyl and sulfate radicals; in the myofiber this property is largely attributable to carboxyl and phosphate groups. Differences in potassium-sodium distribution in cells and extracellular matrices are attributed partly to the microtrabecular lattice and to the ordered state of cell water. In general, the element concentrations and selective distribution can be related to the chemical composition and organization of the tissue, the net immobile charge, the nature of the dispersion medium (water) and changes in its dielectric constant, and to the physico-chemical properties of the individual ions.

  469. Time-singularity multifractal spectrum distribution based on detrended fluctuation analysis

    NASA Astrophysics Data System (ADS)

    Xiong, Gang; Yu, Wenxian; Zhang, Shuning

    2015-11-01

    The time-singularity multifractal spectrum distribution (TS-MFSD) generalizes the singularity spectrum in a time-varying framework. In this paper, a new method to compute the MFSD based on detrended fluctuation analysis (DFA-MFSD) is introduced. We relate the DFA-MFSD method to the standard partition-function-based multifractal spectrum distribution formalism, and prove that both approaches are equivalent for fractal time series with compact support. Furthermore, we find that DFA-MFSD gives equivalent results, has a better mathematical foundation and lower computational cost, and is better suited to fractal time series of arbitrary length, compared with the MFSD based on wavelet transform modulus maxima (WTMM-MFSD). By analyzing several examples, this paper shows that DFAm-MFSD with different polynomial fitting orders m can reliably determine the time-varying multifractal scaling behavior of time series, including processes embodying chirp-type or oscillating singularities. To illustrate these results, simulations are executed using binomial multiplicative cascades, wavelet series and real sea clutter, and the simulations indicate that DFAm-MFSD benefits from excellent theoretical and practical performances.
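    As background to this record, plain detrended fluctuation analysis, the building block that DFA-MFSD extends, fits in a short function. This is a hedged sketch of first-order DFA on a toy white-noise signal (the scale grid is an arbitrary choice), not the authors' DFA-MFSD implementation.

        import numpy as np

        def dfa(x, scales, order=1):
            """Return the fluctuation function F(s) for each window size s."""
            y = np.cumsum(x - np.mean(x))                # integrated profile
            F = []
            for s in scales:
                rms = []
                for i in range(len(y) // s):
                    seg = y[i * s:(i + 1) * s]
                    t = np.arange(s)
                    trend = np.polyval(np.polyfit(t, seg, order), t)
                    rms.append(np.mean((seg - trend) ** 2))
                F.append(np.sqrt(np.mean(rms)))
            return np.array(F)

        rng = np.random.default_rng(2)
        scales = np.array([16, 32, 64, 128, 256])
        F = dfa(rng.standard_normal(4096), scales)
        alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
        print("scaling exponent alpha ~= %.2f" % alpha)  # ~0.5 for white noise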
  470. Distributed Energy Resources at Naval Base Ventura County Building 1512: A Sensitivity Analysis

    SciTech Connect

    Bailey, Owen C.; Marnay, Chris

    2005-06-05

    This report is the second of a two-part study by Berkeley Lab of a DER (distributed energy resources) system at Naval Base Ventura County (NBVC). First, a preliminary assessment of the cost effectiveness of distributed energy resources at NBVC Building 1512 was conducted in response to the base's request for design assistance to the Federal Energy Management Program (Bailey and Marnay, 2004). That report contains a detailed description of the site and the DER-CAM (Consumer Adoption Model) parameters used. This second report contains sensitivity analyses of key parameters in the DER system model of Building 1512 at NBVC and additionally considers the potential for absorption-powered refrigeration. The prior analysis found that under the current tariffs, and given assumptions about the performance and structure of building energy loads and available generating technology characteristics, installing a 600 kW DER system with absorption cooling and heat recovery capabilities could deliver cost savings of about 14 percent, worth $55,000 per year. However, under current conditions, this study also suggested that significant savings could be obtained if Building 1512 changed from its current direct access contract to an SCE TOU-8 (Southern California Edison time-of-use tariff number 8) rate without installing a DER system. Evaluated on this tariff, the potential savings from installation of a DER system would be about 4 percent of the total bill, or $16,000 per year.

  471. Glomerular anionic site distribution in nonproteinuric rats. A computer-assisted morphometric analysis.

    PubMed

    Pilia, P A; Swain, R P; Williams, A V; Loadholt, C B; Ainsworth, S K

    1985-12-01

    The cationic ultrastructural tracer polyethyleneimine (PEI; pI ≈ 11.0) binds electrostatically to uniformly spaced, discrete electron-dense anionic sites present in the laminae rarae of the rat glomerular basement membrane (GBM), mesangial reflections of the GBM, Bowman's capsule, and tubular basement membranes when administered intravenously. Computer-assisted morphometric analysis of glomerular anionic sites reveals that the maximum concentration of stainable lamina rara externa (lre) sites (21/10,000 Å of GBM) occurs 60 minutes after PEI injection, with a site-site interspacing of 460 Å. Lamina rara interna (lri) sites similarly demonstrate a maximum concentration (20/10,000 Å of GBM) at 60 minutes, with a periodicity of 497 Å. The concentration and distribution of anionic sites within the lri was irregular in pattern and markedly decreased in number, while the lre possesses an electrical field that is highly regular at all time intervals analyzed (15, 30, 60, 120, 180, 240, and 300 minutes). Immersion and perfusion of renal tissue with PEI reveals additional heavy staining of the epithelial and endothelial cell sialoprotein coatings. PEI appears to bind to glomerular anionic sites reversibly: i.e., between 60 and 180 minutes the concentration of stained sites decreases, and at 300 minutes the interspacing once again approaches the 60-minute concentration. This suggests a dynamic turnover, or dissociation followed by reassociation, of glomerular negatively charged PEI binding sites. In contrast, morphometric analysis of anionic sites stained with lysozyme and protamine sulfate reveals interspacings of 642 Å and 585 Å, respectively; in addition, these tracers produce major glomerular ultrastructural alterations and induce transient proteinuria.
PEI does not induce proteinuria in rats, nor does it produce glomerular morphologic alterations when ten times the tracer dosage is administered intravenously. These findings indicate that the choice of ultrastructural charge tracer, the method of administering the tracer, and the time selected for analysis of tissue after administration significantly influence results. Morphometric analysis of the distribution of glomerular anionic sites in nonproteinuric rats provides a method of evaluating quantitative alterations of the glomerular charge barrier in renal disease models. PMID:4073220

  472. Annually resolved grain-size distributions in varved sediments using image analysis - application to paleoclimatology

    NASA Astrophysics Data System (ADS)

    Francus, Pierre; Lapointe, François; Lamoureux, Scott

    2013-04-01

    Varved sediments are unique archives because they contain continuous and undisturbed records of past climatic conditions with a robust internal chronology. In many cases, conceptual models of varve formation can be established linking processes occurring in the watershed, such as river floods or snowmelt, to specific laminae within the varve structure. However, the physical properties of such layers, including grain size, are seldom measured despite their intrinsic value as indicators of hydrological processes. This paper reviews the development and improvement of an image analysis methodology to extract grain-size data from finely laminated sediments. The technique uses thin sections from sediment cores, scanning electron microscope images of carefully selected regions of interest from the thin sections, and an image analysis routine to extract grain-size data semi-automatically. An example from Cape Bounty in the Canadian High Arctic is presented: grain-size data within each varve were measured for the last 2845 years. Several particle-size distribution indices for each individual facies were calculated and combined to identify each type of sedimentary facies encountered within the sequence. For instance, high standard deviation and 98th-percentile index values are interpreted as high-energy events such as turbidites and debris flows. Moreover, some grain-size indicators from the most recent varves correlate well with instrumental climate data. For instance, the 98th-percentile grain size has a strong correlation (R^2 = 0.71) with summer rainfall. This kind of relationship allows for the calibration of the image-analysis-generated grain-size data set in terms of hydroclimatic parameters. The rainfall reconstruction suggests that Cape Bounty has experienced an unprecedented increase since ~1920 AD. These results contrast with other common varve measurements: for instance, varve thickness is not significantly correlated with the particle-size distribution and is poorly linked to the instrumental record.
Indeed, sediment accumulation can result from different successive hydroclimatic and geomorphic mechanisms such as spring snowmelt, rain events and landslides, as well as from changes in lake circulation and stratification. Therefore, detailed grain-size data obtained using image analysis appear to be a better approach for reconstructing past hydroclimatic conditions in this clastic sedimentary setting and hold tremendous potential to improve paleoclimatic reconstructions.

  473. MATTERCLIFF - software for the analysis of spatial distribution of discontinuities in cliffs

    NASA Astrophysics Data System (ADS)

    Délèze, J.-Y.; Jaboyedoff, M.; Baillifard, F.; Rouiller, J.-D.

    2003-04-01

    MATTERCLIFF is a freeware software program providing simple new tools for the geometrical characterization of discontinuity sets and rock slope stability analysis. The software is a multiple-document application composed of six modules which perform: (1) digitization of discontinuities from pictures or outcrop sketches, allowing the automatic characterization of their average spacing; this value is obtained by using a sampling window, rectangular or otherwise, estimates of persistence and spacing are performed by various methods, spacing can also be calculated by drawing scanlines through the window, and results are exported to the module described in point 2; (2) statistical analysis of the spatial distribution of discontinuity sets using spacing histograms and variograms, with conversion of apparent spacing to true spacing; (3) stereographic representation of structural data, including the representation of wedges on a Wulff or Schmidt stereonet; (4) simple geomechanical models (plane failure and wedge failure); (5) angle conversions: dip-direction to x-y-z coordinates and back, intersection of planes, and angles between planes or axes; (6) average volume estimates generated by 2 or 3 discontinuity sets, using the spacing distribution, or the density of intersection and connectivity of 2 or 3 discontinuity sets. As rock instabilities are caused mainly by discontinuities, this software provides new tools for the interpretation of the field observations necessary for the detection of rock instability. Results can be used to estimate the density of discontinuity sets on topographic surfaces, leading to a pre-zoning of rockfall hazard. This computer program can be freely downloaded from: http://www.crealp.ch/download/
  474. Assessing a Tornado Climatology from Global Tornado Intensity Distributions.

    NASA Astrophysics Data System (ADS)

    Feuerstein, Bernold; Dotzek, Nikolai; Grieser, Jürgen

    2005-02-01

    Recent work demonstrated that the shape of tornado intensity distributions from various regions worldwide is well described by Weibull functions. This statistical modeling revealed a strong correlation between the fit parameters c for shape and b for scale, regardless of the data source. In the present work it is shown that the quality of the Weibull fits is optimized if only tornado reports of F1 and higher intensity are used, and that the c-b correlation does indeed reflect a universal feature of the observed tornado intensity distributions. For regions with likely supercell tornado dominance, this feature is the number ratio of F4 to F3 tornado reports, R(F4/F3) = 0.238. The c-b diagram for the Weibull shape and scale parameters is used as a climatological chart, which allows different types of tornado climatology to be distinguished, presumably arising from supercell versus nonsupercell tornadogenesis. Assuming temporal invariance of the climatology and using a detection efficiency function for tornado observations, a stationary climatological probability distribution is extracted from large tornado records (U.S. decadal data, 1950-99). This can be used for risk assessment, comparative studies on tornado intensity distributions worldwide, and estimates of the degree of underreporting for areas with poor databases. For the 1990s U.S. data, a likely underreporting of the weak events (F0, F1) by a factor of 2 can be diagnosed, as well as asymptotic climatological c, b values of c = 1.79 and b = 2.13, to which convergence in the 1950-99 U.S. decadal data is verified.
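    Since Weibull fitting is the unifying theme of this collection, the fit described in this record is worth a concrete sketch. The Python fragment below is illustrative only: it generates a synthetic intensity sample (seeded with the paper's asymptotic c = 1.79, b = 2.13 for flavor), refits it by maximum likelihood, and derives the implied F4/F3 report ratio. The F1 cutoff at intensity 1.0 and the mapping of F-scale classes to unit intensity bins are simplifying assumptions, not the authors' exact procedure.

        import numpy as np
        from scipy.stats import weibull_min

        rng = np.random.default_rng(3)
        intensities = weibull_min.rvs(1.79, scale=2.13, size=5000, random_state=rng)

        # Mimic fitting only F1-and-stronger reports by discarding weak events.
        sample = intensities[intensities >= 1.0]
        c_hat, _, b_hat = weibull_min.fit(sample, floc=0.0)
        print("shape c ~= %.2f, scale b ~= %.2f" % (c_hat, b_hat))

        # Number ratio of F4 to F3 reports implied by the fitted distribution,
        # treating [3, 4) and [4, 5) as the F3 and F4 intensity bins (toy binning).
        p = weibull_min.cdf([3, 4, 5], c_hat, scale=b_hat)
        print("R(F4/F3) ~= %.3f" % ((p[2] - p[1]) / (p[1] - p[0])))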
  475. An analysis of monoclonal antibody distribution in microscopic tumor nodules: consequences of a "binding site barrier".

    PubMed

    van Osdol, W; Fujimori, K; Weinstein, J N

    1991-09-15

    Rational in vivo application of monoclonal antibodies for diagnosis and therapy of cancer requires an understanding of both the global and microscopic pharmacology of macromolecular ligands. Here, we introduce a new mathematical model for antibody distribution into small, prevascular, densely packed nodules (representing either primary or metastatic tumor). For the analysis, we link together several aspects of antibody pharmacology: the global (whole-body) pharmacokinetics; transcapillary transport into the normal tissue interstitium surrounding the nodule; diffusion into the nodule; nonspecific binding and/or partitioning; specific binding to tumor antigen; metabolism; and lymphatic outflow from the tissue space. Input parameter values are estimated from experimental studies in vitro, in animals, and in clinical trials. Our aim is to explore the sensitivity of antibody localization to variation in three of the important parameters of this model: the rate of transcapillary transport, the rate of lymphatic outflow, and the antigen density. Predictions based on this analysis include the following: (a) high rates of transcapillary influx or low rates of lymphatic efflux will enhance antibody percolation into the tumor nodule at early times after injection and increase the average antibody concentration in the tumor at all times; (b) changes in antibody influx rate will affect the antibody distribution in the tumor at earlier times than changes in the efflux rate; (c) reducing the antigen concentration will increase the uniformity of antibody penetration but lower the average concentration in the tumor at all times after injection; and (d) counter to intuition, lowering the antigen concentration can increase the peak concentrations achieved toward the center of the nodule. If, in addition, there is any metabolism of bound antibody, the concentration-time integral (i.e., the "area under the curve") for the center of the nodule will also be increased by decreasing the antigen concentration. These predictions directly reflect the "binding site barrier" hypothesis of Weinstein et al. (Ann. NY Acad. Sci., 507: 199-210, 1987) and Fujimori et al. (Cancer Res., 49: 5656-5663, 1989; J. Nucl. Med., 31: 1191-1198, 1990). In general, and perhaps surprisingly until one considers the problem carefully, the parameters governing antibody percolation can have opposite effects on the uniformity of antibody distribution at early and late times. These calculations, using the PERC program set, were done for antibodies, but we believe that the "binding site barrier" will also prove important for other injected macromolecules, for at least some highly bindable injected small molecules, for lymphokines and cytokines released from transfected cells injected in vivo, and, indeed, for endogenous species such as the autocrine-paracrine factors. PMID:1893370
  476. Grain boundary character distribution of nanocrystalline Cu thin films using stereological analysis of transmission electron microscope orientation maps.

    PubMed

    Darbal, A D; Ganesh, K J; Liu, X; Lee, S-B; Ledonne, J; Sun, T; Yao, B; Warren, A P; Rohrer, G S; Rollett, A D; Ferreira, P J; Coffey, K R; Barmak, K

    2013-02-01

    Stereological analysis has been coupled with transmission electron microscope (TEM) orientation mapping to investigate the grain boundary character distribution in nanocrystalline copper thin films. The use of a nanosized (<5 nm) beam in the TEM for collecting spot diffraction patterns yields an order-of-magnitude improvement in spatial resolution compared with the analysis of electron backscatter diffraction patterns in the scanning electron microscope. Electron beam precession is used to reduce dynamical effects and increase the reliability of orientation solutions. The misorientation distribution function shows a strong misorientation texture, with a peak at 60°/[111] corresponding to the Σ3 misorientation. The grain boundary plane distribution shows {111} as the most frequently occurring plane, indicating a significant population of coherent twin boundaries. This study demonstrates the use of nanoscale orientation mapping in the TEM to quantify the five-parameter grain boundary distribution in nanocrystalline materials. PMID:23380005

  477. Analysis of circulating DNA distribution in pregnant and nonpregnant dairy cows.

    PubMed

    Mayer, Jennifer; Beck, Julia; Soller, Jan T; Wemheuer, Wilhelm; Schütz, Ekkehard; Brenig, Bertram

    2013-02-01

    Circulating nucleic acids (CNAs) are free-floating, cell-free DNA and RNA molecules found in the circulation of healthy and diseased humans and animals. The aim of this study was to identify differences in CNA distribution in serum samples from multiparous pregnant (n = 24) and nonpregnant (n = 16) dairy cows at different days of gestation (Days 0, 20, and 40). A modified serial analysis of gene expression procedure was used to generate concatemerized short sequence tags from isolated serum DNA. A total of 6.1 × 10^6 tags were recovered from the analyzed samples (n = 40). Significant differences between the pregnant and nonpregnant groups were detected in chromosomal regions, protein-coding sequences, and single genes (P < 0.05). Approximately 23% (1.4 × 10^6 tags) of the total sequence pool were present exclusively in the analyzed serum samples of pregnant cows. Of these tag sequences, seven originated from genomic regions and 13 from repetitive elements.
Comparative BLAST analysis identified the repetitive tags as BovB (non-long terminal repeat retrotransposons/long interspersed nuclear elements), Art2A, BovA2, and Bov-tA2 (short interspersed nuclear elements). To our knowledge, this is the first study to comprehensively characterize the circulating, cell-free DNA profile in sera from pregnant and nonpregnant cows across early gestation. PMID:23255334

  478. In Silico Genome Comparison and Distribution Analysis of Simple Sequences Repeats in Cassava

    PubMed Central

    Vásquez, Andrea; López, Camilo

    2014-01-01

    We conducted an SSR density analysis in different cassava genomic regions. The information obtained was useful to establish comparisons between cassava's genomic SSR distribution and those of poplar, flax, and Jatropha. In general, cassava has a low SSR density (~50 SSRs/Mbp) and a high proportion of pentanucleotides (24.2 SSRs/Mbp). It was found that coding sequences have 15.5 SSRs/Mbp, introns have 82.3 SSRs/Mbp, 5' UTRs have 196.1 SSRs/Mbp, and 3' UTRs have 50.5 SSRs/Mbp. Through motif analysis of the SSRs in the cassava genome, the most abundant motif was AT/AT, while in intron sequences and UTR regions it was AG/CT. In addition, in coding sequences the motif AAG/CTT was also found to occur most frequently; in fact, it is the third most used codon in cassava. Sequences containing SSRs were classified according to their functional annotation in Gene Ontology categories. The SSRs identified here may be a valuable addition for genetic mapping and future studies in phylogenetic analyses and genomic evolution. PMID:25374887

  479. Transmission integral analysis of Mössbauer spectra displaying hyperfine parameter distributions with arbitrary profile

    SciTech Connect

    Klencsár, Zoltán

    2014-10-27

    Accurate quantitative analysis of Mössbauer spectra displaying thickness effects requires consideration of the so-called transmission integral when modeling the spectral shape. Whereas this is straightforward when the correct model for the decomposition of the absorber's nuclear resonance absorption cross-section into individual components is known a priori, in the absence of such knowledge, and notably in the presence of hyperfine parameter distributions with an unknown profile, so-called model-independent evaluation methods can be used to fit the spectra. However, the methods available for this purpose were developed for the analysis of spectra for which the thin-absorber approximation is valid, and thus they do not take the sample thickness and related effects into account.
Consequently, in order to use them for spectra displaying thickness effects, their usage needs to be generalized by combining them with transmission integral fitting. A new algorithm realizing such a generalized version of the Hesse-Rübartsch model-independent evaluation method was recently developed as an integral part of the MossWinn program. In the present work, the working principle of the newly developed algorithm is described in detail, along with examples illustrating the capabilities of the method for the case of 57Fe Mössbauer spectroscopy.

  480. Interactive statistical-distribution-analysis program utilizing numerical and graphical methods

    SciTech Connect

    Glandon, S. R.; Fields, D. E.

    1982-04-01

    The TERPED/P program is designed to facilitate the quantitative analysis of experimental data, determine the distribution function that best describes the data, and provide graphical representations of the data. This code differs from its predecessors, TEDPED and TERPED, in that a printer-plotter has been added for graphical output flexibility. The addition of the printer-plotter provides TERPED/P with a method of generating graphs that is not dependent on DISSPLA, Integrated Software Systems Corporation's confidential proprietary graphics package, making it possible to use TERPED/P on systems not equipped with DISSPLA. In addition, a printer plot is usually produced more rapidly than a high-resolution plot can be generated. Graphical and numerical tests are performed on the data in accordance with the user's assumption of normality or lognormality. Statistical analysis options include computation of the chi-squared statistic and its significance level, and the Kolmogorov-Smirnov one-sample test confidence level for data sets of more than 80 points. Plots can be produced on a Calcomp paper plotter, an FR80 film plotter, or a graphics terminal using the high-resolution, DISSPLA-dependent plotter, or on a character-type output device by the printer-plotter. The plots are of cumulative probability (abscissa) versus user-defined units (ordinate). The program was developed on a Digital Equipment Corporation (DEC) PDP-10 and consists of 1500 statements.
The language used is FORTRAN-10, DEC's extended version of FORTRAN-IV.
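    The two goodness-of-fit options named in this record translate directly into modern library calls. The snippet below is a hedged re-creation in Python/SciPy rather than the original FORTRAN: it tests a synthetic sample for normality and lognormality with a one-sample Kolmogorov-Smirnov test and a chi-squared test on equal-probability bins (the sample and the 10-bin choice are arbitrary).

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        data = rng.lognormal(mean=1.0, sigma=0.5, size=200)   # synthetic data set

        for label, x in (("normal", data), ("lognormal", np.log(data))):
            mu, sigma = x.mean(), x.std(ddof=1)
            ks = stats.kstest(x, "norm", args=(mu, sigma))    # KS one-sample test
            # chi-squared on 10 equal-probability bins of the fitted normal
            edges = stats.norm.ppf(np.linspace(0, 1, 11)[1:-1], mu, sigma)
            observed, _ = np.histogram(x, np.concatenate(([-np.inf], edges, [np.inf])))
            chi2 = stats.chisquare(observed)
            print(label, "KS p=%.3f" % ks.pvalue, "chi2 p=%.3f" % chi2.pvalue)

    On this example the lognormal hypothesis should be clearly favoured, mirroring the normal-versus-lognormal decision TERPED/P supports.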
  481. Computer-controlled flow injection analysis system for on-line determination of distribution ratios

    SciTech Connect

    Nekimken, H.L.; Smith, B.F.; Jarvinen, G.D.; Peterson, E.J.; Jones, M.M.

    1988-07-15

    An automated flow injection analysis (FIA) system has been developed for the rapid acquisition of liquid/liquid metal-ion distribution ratios (D). The system features automatic switching between aqueous metal sample and wash solutions, on-line solvent extraction, phase separation, and simultaneous detection of the separated phases by diode-array spectrophotometry. A comparative study of manual single-stage liquid/liquid extractions with the flow injection system was completed using a new extraction system, UO2^2+/benzene/TOPO (trioctylphosphine oxide)/HBMPPT (4-benzoyl-2,4-dihydro-5-methyl-2-phenyl-3H-pyrazol-3-thione). The batch and FIA methods yielded results generally within 5% of each other. The major differences between the two systems are that the FIA system is at least twice as fast, is less labor intensive, is more reproducible, and yields better statistics (a result of the FIA system's speed and automation features). Slope analysis of the plotted data from the uranyl extraction studies indicates that the extracted complex is UO2(BMPPT)2(TOPO).
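    The slope analysis mentioned in the last sentence is a standard log-log technique: the slope of log10(D) against log10([TOPO]) estimates how many TOPO molecules enter the extracted complex. The data points below are invented for illustration (a slope near 1 is consistent with one TOPO per complex, as in UO2(BMPPT)2(TOPO)); they are not the study's measurements.

        import numpy as np

        topo = np.array([0.001, 0.002, 0.005, 0.010, 0.020])  # [TOPO], mol/L (assumed)
        D = np.array([0.31, 0.62, 1.55, 3.1, 6.2])            # distribution ratios (invented)

        slope, intercept = np.polyfit(np.log10(topo), np.log10(D), 1)
        print("slope ~= %.2f -> about %d TOPO per complex" % (slope, int(round(slope))))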
  482. Stereophotogrammetric Mass Distribution Parameter Determination of the Lower Body Segments for Use in Gait Analysis

    NASA Astrophysics Data System (ADS)

    Sheffer, Daniel B.; Schaer, Alex R.; Baumann, Juerg U.

    1989-04-01

    Inclusion of mass distribution information in biomechanical analysis of motion is a requirement for the accurate calculation of the external moments and forces acting on the segmental joints during locomotion. Regression equations produced from a variety of photogrammetric, anthropometric and cadaveric studies have been developed and espoused in the literature. Because of limitations in the accuracy of inertial properties predicted by regression equations developed on one population and then applied to a different study population, a measurement technique that accurately defines the shape of each individual subject is desirable. This individual data acquisition method is especially needed when analyzing the gait of subjects whose extremity geometry differs greatly from those considered "normal", or who may possess gross asymmetries in shape between their own contralateral limbs. This study presents the photogrammetric acquisition and data analysis methodology used to assess the inertial tensors of two groups of subjects, one with spastic diplegic cerebral palsy and the other considered normal.

  483. Environmental risk factors and hotspot analysis of dengue distribution in Pakistan

    NASA Astrophysics Data System (ADS)

    Khalid, Bushra; Ghaffar, Abdul

    2015-11-01

    This study is an attempt to find out the factors responsible for the sudden dengue outbreak in different cities of Pakistan during 2011. For this purpose, the spatio-temporal distribution of dengue in Islamabad, Rawalpindi, Lahore, and Karachi has been taken into account. According to the available data, the factors responsible for this spread include climate covariates such as rainfall, temperature, and wind speed; social covariates such as population and area of locality; and environmental risk factors such as drainage pattern and geo-hydrological conditions. Reported dengue cases from localities and the Shuttle Radar Topography Mission (SRTM) 90 m digital elevation model (DEM) of the study areas have been processed for hotspots, a regression model, and stream density in the localities of high dengue incidence. The relationship of daily dengue incidence with climate covariates during the months of July-October of the study year is analyzed. Results show that each dry spell of 2-4 days provides suitable conditions for the development and survival of dengue vectors during the wet months of July and August in areas of high stream density and population. Very few cases were reported in July, while higher numbers of cases were reported in August, September, and until late October. Hotspot analysis highlights the areas of high dengue incidence, while regression analysis shows the relationship between the population and area of localities and dengue incidence.
  484. Technology Resource, Distribution, and Development Characteristics of Global Influenza Virus Vaccine: A Patent Bibliometric Analysis

    PubMed Central

    Liu, Long; Yan, Zhe; Tao, Lixin; Guo, Xiuhua; Luo, Yanxia; Yan, Aoshuang

    2015-01-01

    Influenza virus vaccine (IVV) is a promising research domain that is closely related to global health matters, which has been acknowledged not only by scientists and technology developers but also by policy-makers. Meanwhile, patents encompass valuable technological information and reflect the latest technological inventions as well as the innovative capability of a nation. However, little research has examined this up-and-coming research field using patent bibliometric methods. Thus, this paper (a) designs a technology classification system and search strategy for the identification of IVV patents, and (b) presents a longitudinal analysis of global IVV development based on European Patent Office (EPO) patents. Bibliometric analysis is used to rank the countries, institutions, inventors and technology subfields contributing to IVV technical progress. The results show that global IVV development is marked by a diverse, multi-track character but an uneven distribution of technical resources. Although the synthetic peptide vaccine is a comparatively young field, it already demonstrates powerful vitality and enormous development space. With worldwide competition increasing, all nations, and especially China, should look to increase investment in, enhance the capability of, and improve the effectiveness of technological innovation. PMID:26372160

  485. Environmental risk factors and hotspot analysis of dengue distribution in Pakistan.

    PubMed

    Khalid, Bushra; Ghaffar, Abdul

    2015-11-01

    This study is an attempt to find out the factors responsible for the sudden dengue outbreak in different cities of Pakistan during 2011. For this purpose, the spatio-temporal distribution of dengue in Islamabad, Rawalpindi, Lahore, and Karachi has been taken into account. According to the available data, the factors responsible for this spread include climate covariates such as rainfall, temperature, and wind speed; social covariates such as population and area of locality; and environmental risk factors such as drainage pattern and geo-hydrological conditions. Reported dengue cases from localities and the Shuttle Radar Topography Mission (SRTM) 90 m digital elevation model (DEM) of the study areas have been processed for hotspots, a regression model, and stream density in the localities of high dengue incidence. The relationship of daily dengue incidence with climate covariates during the months of July-October of the study year is analyzed. Results show that each dry spell of 2-4 days provides suitable conditions for the development and survival of dengue vectors during the wet months of July and August in areas of high stream density and population. Very few cases were reported in July, while higher numbers of cases were reported in August, September, and until late October. Hotspot analysis highlights the areas of high dengue incidence, while regression analysis shows the relationship between the population and area of localities and dengue incidence. PMID:25869291
  486. Traffic and trend analysis of local- and wide-area networks for a distributed PACS implementation

    NASA Astrophysics Data System (ADS)

    Gac, Robert J., Jr.; Harding, Douglas, Jr.; Weiser, John C.; Chacko, Anna K.; Radvany, Martin; Romlein, John R.

    2000-05-01

    Inductive Modeling Techniques (IMT) in a stand-alone, distributed Picture Archiving and Communication System (PACS) or telemedicine environment can be utilized to monitor SNMP (Simple Network Management Protocol) enabled devices such as network switches, servers or workstations. A comprehensive approach using IMT is presented across the stages of the PACS lifecycle: pre-PACS, implementation, and clinical use. At each stage of the cycle, the results of IMT can be utilized to assist in assessing and forecasting future system loading; this loading represents a clinical trend analysis equating to the clinical workflow and delivery of services. Specific attention is directed to an understanding and thorough depiction of the IMT methodology, focusing on the use of SNMP, the Management Information Base (MIB), and the data stream output that is mapped, placed in an object-oriented database, and made available for web-based, real-time, in-depth viewing and/or analysis. A thorough description of these outputs is presented, spotlighting potential report applications such as system failures; existing system, CPU, workstation, server and LAN/WAN link utilization; packet rates; application isolation; notification of system alarms; fault isolation; high/low bandwidth users; and data transfer rates. These types of data are increasingly required for programming LAN/WAN upgrades as digital imaging and PACS are implemented.

  487. Analysis of strain distribution in equal channel angular extrusion by finite element method simulation and experimental validation

    NASA Astrophysics Data System (ADS)

    Pond, Brett

    Equal channel angular extrusion (ECAE) has been shown to produce ultra-fine grains in materials. Previous studies on ECAE consider only two-dimensional analysis of primarily low-temperature ECAE processing. The current study focuses on the three-dimensional strain distribution in cylindrical samples resulting from ECAE processing at T > 0.4Tm. The effects of die geometry, sample size, friction coefficient, and backpressure on the strain distribution in ECAE are analyzed with the three-dimensional finite element program DEFORM 3D, as well as by experimental measurements and validation. Sharp-angle (SA) dies (Ψ ≤ 20°) produce more homogeneous and higher strains than streamlined (SL) dies (Ψ = 90°). Sample size has little effect on the strain distribution. The friction coefficient has a significant effect on pressing force and strain distribution. Increasing the amount of backpressure increases shape retention and decreases surface cracking and shear banding, resulting in increasingly more homogeneous strain distributions.
  488. Harness That S.O.B.: Distributing Remote Sensing Analysis in a Small Office/Business

    NASA Astrophysics Data System (ADS)

    Kramer, J.; Combe, J.; McCord, T. B.

    2009-12-01

    Researchers in a small office/business (SOB) operate with limited funding, equipment, and software availability. To mitigate these issues, we developed a distributed computing framework that: 1) leverages open-source software to implement functionality otherwise reliant on proprietary software and 2) harnesses the unused power of (semi-)idle office computers with mixed operating systems (OSes). This abstract outlines some reasons for the effort, its conceptual basis and implementation, and brief speedup results. The Multiple-Endmember Linear Spectral Unmixing Model (MELSUM) [1] processes remote-sensing (hyper-)spectral images. The algorithm is computationally expensive, sometimes taking a full week or more for a 1-million-pixel/100-wavelength image. Analysis of pixels is independent, so a large benefit can be gained from parallel processing techniques. Job concurrency is limited by the number of active processing units. MELSUM was originally written in the Interactive Data Language (IDL). Despite its multi-threading capabilities, an IDL instance executes on a single machine, so concurrency is limited by the machine's number of central processing units (CPUs). Network distribution can access more CPUs to provide a greater speedup, while also taking advantage of (often) underutilized existing equipment; appropriately integrating open-source software magnifies the impact by avoiding the purchase of additional licenses. Our method of distribution breaks into four conceptual parts: 1) the top- or task-level user interface; 2) a mid-level program that manages hosts and jobs, called the distribution server; 3) a low-level executable for individual pixel calculations; and 4) a control program to synchronize sequential sub-tasks. Each part is a separate OS process, passing information via shell commands and/or temporary files. While the control and low-level executables are short-lived, the top-level program and distribution server run (at least) for the entirety of a task. While any language that supports spawning of OS processes can serve as the top-level interface, our solution, d-MELSUM, has been integrated with the IDL code. Doing so extracts the core calculation from IDL, but otherwise preserves IDL features and functionality. The distribution server is an extension of ADE [2] mobile robot software, written in Java. Network connections rely on a secure shell (SSH) implementation, whether natively available (e.g., Linux or OS X) or user-installed (e.g., OpenSSH available via Cygwin on Windows). Both the low-level and control programs are relatively small C++ programs (~54K, or 1500 lines, total) that were developed in-house and use GNU's g++ compiler. The low-level code also relies on Linear Algebra PACKage (LAPACK) libraries for pixel calculations. Although performance depends on data size, CPU speed, and network communication rate and latency, results have generally demonstrated a time reduction by a factor proportional to the number of open connections (one per CPU). For example, the task mentioned above requiring a week to process took 18 hours with d-MELSUM, using 10 CPUs on 2 computers. [1] J.-Ph. Combe, et al., PSS 56, 2008. [2] J. Kramer and M. Scheutz, IROS 2006, 2006.
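    The fan-out idea described here (independent pixels farmed out to as many CPUs as are reachable) can be miniaturized to a few lines. The sketch below is a loose analogue in Python, not the authors' IDL/Java/C++ stack: it uses a local process pool instead of SSH-connected hosts, and the 3-endmember spectral library and image are random stand-ins for real hyperspectral data.

        import numpy as np
        from multiprocessing import Pool
        from scipy.optimize import nnls

        ENDMEMBERS = np.abs(np.random.default_rng(5).normal(size=(100, 3)))  # 100 bands

        def unmix(spectrum):
            """One pixel: abundances >= 0 minimizing ||E a - s|| (stub unmixing)."""
            abundances, _ = nnls(ENDMEMBERS, spectrum)
            return abundances

        if __name__ == "__main__":
            image = np.abs(np.random.default_rng(6).normal(size=(1000, 100)))
            with Pool() as pool:                 # one worker per available CPU
                result = pool.map(unmix, image)  # pixels processed in parallel
            print(np.asarray(result).shape)      # (1000, 3) abundance vectors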
  489. Non-regularized inversion method from light scattering applied to ferrofluid magnetization curves for magnetic size distribution analysis

    NASA Astrophysics Data System (ADS)

    van Rijssel, Jos; Kuipers, Bonny W. M.; Erné, Ben H.

    2014-03-01

    A numerical inversion method known from the analysis of light scattering by colloidal dispersions is now applied to magnetization curves of ferrofluids. The distribution of magnetic particle sizes or dipole moments is determined without assuming that the distribution is unimodal or of a particular shape. The inversion method enforces positive number densities via a non-negative least squares procedure. It is tested successfully on experimental and simulated data for ferrofluid samples with known multimodal size distributions. The resulting computer program, MINORIM, is made available on the web.
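    The inversion named in this record reduces to non-negative least squares on a kernel of Langevin curves, which is easy to sketch. Everything below is a hedged toy in arbitrary units (the field grid, candidate moments, and bimodal "truth" are invented); it is not the MINORIM program.

        import numpy as np
        from scipy.optimize import nnls

        def langevin(x):
            L = 1.0 / np.tanh(x) - 1.0 / x
            return np.where(np.abs(x) < 1e-8, x / 3.0, L)   # small-x limit

        H = np.linspace(0.01, 50, 200)           # applied field grid
        m = np.linspace(0.05, 2.0, 40)           # candidate dipole moments
        kernel = langevin(np.outer(H, m))        # kernel[i, j] = L(H_i * m_j)

        true_w = np.zeros(40); true_w[8] = 0.7; true_w[30] = 0.3   # bimodal truth
        M = kernel @ true_w + np.random.default_rng(7).normal(0, 1e-3, H.size)

        w, _ = nnls(kernel, M)                   # enforce positive number densities
        print("recovered peaks near moments:", m[w > 0.05])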
Spatial uncertainty analysis: Propagation of interpolation errors in spatially distributed models

USGS Publications Warehouse

Phillips, D.L.; Marks, D.G.

1996-01-01

In simulation modelling, it is desirable to quantify model uncertainties and provide not only point estimates for output variables but confidence intervals as well. Spatially distributed physical and ecological process models are becoming widely used, with runs being made over a grid of points that represent the landscape. This requires input values at each grid point, which often have to be interpolated from irregularly scattered measurement sites, e.g., weather stations. Interpolation introduces spatially varying errors which propagate through the model. We extended established uncertainty analysis methods to a spatial domain for quantifying spatial patterns of input variable interpolation errors and how they propagate through a model to affect the uncertainty of the model output. We applied this to a model of potential evapotranspiration (PET) as a demonstration. We modelled PET for three time periods in 1990 as a function of temperature, humidity, and wind on a 10-km grid across the U.S. portion of the Columbia River Basin. Temperature, humidity, and wind speed were interpolated using kriging from 700-1000 supporting data points. Kriging standard deviations (SDs) were used to quantify the spatially varying interpolation uncertainties. For each of 5693 grid points, 100 Monte Carlo simulations were run using the kriged values of temperature, humidity, and wind, plus random error terms determined by the kriging SDs and the correlations of interpolation errors among the three variables. For the spring season example, kriging SDs averaged 2.6 °C for temperature, 8.7% for relative humidity, and 0.38 m s^-1 for wind. The resultant PET estimates had coefficients of variation (CVs) ranging from 14% to 27% for the 10-km grid cells. Maps of PET means and CVs showed the spatial patterns of PET with a measure of its uncertainty due to interpolation of the input variables. This methodology should be applicable to a variety of spatially distributed models using interpolated inputs.

Measurements and Analysis of Oxygen Bubble Distributions in LiCl-KCl Molten Salt

SciTech Connect

Ryan W. Bezzant; Supathorn Phongikaroon; Michael F. Simpson

2013-03-01

Transparent-system experiments have been performed to measure and analyze oxygen bubble distributions and mass transfer coefficients at sparging rates ranging from 0.05 to 0.20 L/min in LiCl-KCl molten salt at 500 °C, using a high-speed digital camera and an oxygen sensor. The results reveal that bubble sizes and rise velocities increased with increasing oxygen sparging rate. The bubbles observed were ellipsoidal in shape, and an equivalent diameter based on the ellipsoid volume was calculated. The average equivalent bubble diameters at 500 °C over these sparging rates range from 2.63 to 4.07 mm, and the equivalent diameters at each sparging rate are normally distributed. A Fanning friction factor correlation was produced to predict a bubble's rise velocity from its equivalent diameter. The oxygen mass transfer coefficients for the four sparging rates were calculated using the oxygenation model; the values are of order 10^-2 cm/s and decrease with increasing bubble size and sparging rate. Diffusivities were calculated from two different mechanisms, one based on the physics of the bubbles and the other on systematic properties. The diffusivity values calculated from bubble physics are 1.65 to 8.40 x 10^-5 cm^2/s, within the range suggested by the literature for gases in liquids of similar viscosity.
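The volume-equivalent diameter used above can be computed directly from the two axes visible in a high-speed image, given an assumption about the unseen third axis. The sketch below assumes an oblate spheroid (out-of-plane axis equal to the minor axis) and then fits a normal distribution to the resulting diameters; the axis convention and the sample numbers are illustrative, not the paper's data.

```python
import numpy as np
from scipy import stats

def equivalent_diameter(major, minor):
    """Volume-equivalent sphere diameter of an ellipsoidal bubble.

    With the out-of-plane axis assumed equal to the minor axis,
    V = (pi/6) * major * minor^2, so d_eq = (major * minor^2)^(1/3).
    """
    major = np.asarray(major, dtype=float)
    minor = np.asarray(minor, dtype=float)
    return (major * minor ** 2) ** (1.0 / 3.0)

# Illustrative axis measurements (mm) at one sparging rate.
d_eq = equivalent_diameter(major=[4.1, 4.6, 3.9, 4.4], minor=[2.9, 3.3, 3.0, 3.1])
mu, sd = stats.norm.fit(d_eq)  # normality itself could be tested with stats.shapiro
print(f"mean {mu:.2f} mm, sd {sd:.2f} mm")
```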
AMIC: an expandable integrated analog front-end for light distribution moments analysis

NASA Astrophysics Data System (ADS)

Spaggiari, M.; Herrero, V.; Lerche, C. W.; Aliaga, R.; Monzó, J. M.; Gadea, R.

2011-01-01

In this article we introduce AMIC (Analog Moments Integrated Circuit), a novel analog Application Specific Integrated Circuit (ASIC) front-end for Positron Emission Tomography (PET) applications. Its working principle is based on mathematical analysis of the light distribution through moments calculation. Each moment provides useful information about the light distribution, such as energy, position, depth of interaction, skewness (deformation due to border effects), etc. A current buffer delivers a copy of each input current to several processing blocks. The current preamplifier is designed to achieve unconditional stability under high input capacitance, thus allowing the use of both Photomultiplier Tubes (PMTs) and Silicon Photomultipliers (SiPMs). Each processing block implements analog current filtering by multiplying each input current by a programmable 8-bit coefficient. The latter is implemented with a highly linear MOS current-divider ladder, whose sensitivity to variations in output voltages requires the integration of an extremely stable fully differential current collector. Output currents are then summed and sent to the output stage, which provides both a buffered output current and a linear rail-to-rail voltage for further digitization. Since the computation is purely additive, the 64 input channels of AMIC do not limit the number of detector outputs: current outputs of several AMIC chips can be combined as inputs to a final AMIC, providing a fully expandable structure. This version of AMIC integrates 8 programmable blocks for moment calculation, as well as an I2C interface for programming the coefficients. Extracted-layout simulation results demonstrate that the information provided by moment calculation in AMIC helps to improve three-dimensional positioning of the detected event. A two-detector test bench is now being used for AMIC prototype characterization, and preliminary results are presented.
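The moment computation that AMIC performs in the analog domain can be mirrored in software to see what each output encodes. The sketch below applies position-dependent weights to the channel currents, i.e., the same weighted-sum operation the coefficient ladders implement; the mapping of moments to energy, position, depth of interaction, and skewness follows the abstract, while the function itself is only a digital analogue.

```python
import numpy as np

def light_moments(currents, positions):
    """Raw and central moments of a scintillation light distribution.

    Choosing the per-channel weights as 1, x_i, x_i^2, x_i^3 (in AMIC,
    programmable 8-bit coefficients) yields the successive moments.
    """
    I = np.asarray(currents, dtype=float)
    x = np.asarray(positions, dtype=float)
    m0 = I.sum()                              # total light ~ deposited energy
    m1 = (x * I).sum() / m0                   # centroid ~ interaction position
    var = ((x - m1) ** 2 * I).sum() / m0      # spread ~ depth of interaction
    skew = ((x - m1) ** 3 * I).sum() / (m0 * var ** 1.5)  # border-effect deformation
    return m0, m1, var, skew
```

Because every output is a weighted sum of input currents, summing the outputs of several chips is equivalent to computing the moments over the union of their channels, which is what makes the structure expandable.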
Multicompartmental analysis of triiodothyronine (T3) distribution and metabolism in clinically euthyroid obese individuals.

PubMed

Curti, G; Romei, F; Fresco, G

1990-04-01

Triiodothyronine (T3) kinetic studies were performed on ten clinically euthyroid, non-fasting obese individuals and on eight normal subjects. Kinetic analysis was carried out according to a three-pool model whose basic approach involved the acquisition of data from both vascular and extravascular sources. The former were represented by the plasma disappearance curves of intravenously injected radiolabeled T3; the latter included fecal and urinary losses and liver, kidney, and thigh uptake. A detailed comparison of the T3 kinetics of obese and normal individuals did not uncover many differences between the two groups in the way the hormone is distributed, metabolized, and excreted. The mean plasma-equivalent distribution volume of T3 (VD3) in obese individuals (27.05 L) was not significantly different from that in controls (24.60 L), nor were significant differences observed between the mean plasma appearance rate of the hormone (PAR3) in obese subjects (29.80 micrograms/day) and that in controls (30.05 micrograms/day). The mean size of the slow pool (Qc), which includes fatty tissue as well as skeletal muscle, was unchanged in obese individuals compared with controls, although in the obese subjects the mean mass of fatty tissue was about 5 times greater. In addition, in obese individuals the mean fractional transfer rate from plasma to the slow pool (Kca), 9.06 day^-1, did not differ significantly from that in controls (9.22 day^-1). (ABSTRACT TRUNCATED AT 250 WORDS) PMID:1715752
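A linear three-pool model of this kind reduces to three coupled first-order ODEs for the tracer content of the plasma, fast, and slow pools. The sketch below integrates such a system for a unit bolus of labeled T3 into plasma; only the plasma-to-slow-pool rate (about 9 day^-1) is taken from the abstract, and every other rate constant is a placeholder.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Fractional transfer rates (day^-1); k["ca"] (plasma -> slow pool) is the
# only value from the abstract, the rest are hypothetical.
k = dict(ba=20.0, ab=15.0, ca=9.06, ac=1.5, loss=8.0)

def three_pool(t, q):
    """q = (plasma, fast pool, slow pool) tracer content."""
    qa, qb, qc = q
    dqa = -(k["ba"] + k["ca"] + k["loss"]) * qa + k["ab"] * qb + k["ac"] * qc
    dqb = k["ba"] * qa - k["ab"] * qb
    dqc = k["ca"] * qa - k["ac"] * qc
    return [dqa, dqb, dqc]

# Unit tracer bolus into plasma; q[0](t) is the multi-exponential plasma
# disappearance curve that the radiolabeled-T3 data constrain.
sol = solve_ivp(three_pool, (0.0, 2.0), [1.0, 0.0, 0.0], dense_output=True)
```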
Analysis and prediction of lightning strike distributions associated with synoptic map types over Florida

SciTech Connect

Reap, R.M.

1994-08-01

The temporal and spatial distributions of lightning activity associated with specific synoptic regimes of low-level wind flow were analyzed as part of an experiment to develop improved statistical thunderstorm forecasts for Florida. The synoptic regimes were identified by means of a linear correlation technique used to perform pattern classification, or 'map typing', of 18- and 30-h sea level pressure forecasts from the National Meteorological Center's Nested Grid Model (NGM). Lightning location data for the 1987-90 warm seasons were subsequently analyzed on a 12-km grid to determine the thunderstorm distribution for each of the predetermined map types. The analysis revealed organized coastal maxima in lightning activity related to land-sea-breeze convergence zones that form in direct response to the low-level wind flow. Surface effects were also indicated by the persistent minima in lightning activity over Lake Okeechobee and by the lightning maxima found in regions with shoreline curvature favoring localized convergence. Experimental thunderstorm probability equations for Florida were subsequently developed from climatological lightning frequencies and NGM forecast fields. The lightning frequencies were combined with the K stability index to form interactive predictors that take into account the temporal and spatial variations in lightning occurrence for each map type but modulate the climatology in response to the daily large-scale synoptic situation. The statistical forecast equations were developed for each map type in an attempt to simulate the effects of small-scale processes, such as land-sea-breeze convergence zones, on the subsequent development of peninsular-scale convection.

Analysis of catchment behavior using residence time distributions with application to the Thuringian Basin

NASA Astrophysics Data System (ADS)

Prykhodko, Vladyslav; Heße, Falk; Kumar, Rohini; Samaniego, Luis; Attinger, Sabine

2014-05-01

Residence time distributions (RTDs), as presented e.g. by Botter et al., are a novel mathematical framework for the quantitative characterization of hydrological systems. These distributions contain information about water storage, flow pathways, and water sources, and therefore improve on classical hydrograph methods by allowing both nonlinear and time-dependent dynamics. In our study we extend this previous work by applying the theoretical framework to real-world heterogeneous catchments. To that end we use a catchment-scale hydrological model (mHM) and apply the approach of Botter et al. to each spatial grid cell of mHM. To facilitate the coupling we amended Botter's approach by introducing additional fluxes (such as runoff from the unsaturated zone) and specifying the structure of the groundwater zone. By virtue of this coupling we could make use of the realistic hydrological fluxes and state variables provided by mHM. This allowed us to use both observed (precipitation, temperature, soil type, etc.) and modeled data sets and assess their impact on the behavior of the resulting RTDs. We extended the aforementioned framework to analyze large catchments by including the geomorphic effect of the actual arrangement of subcatchments around the channel network, using the flood routing algorithm of mHM. Additionally, we studied the dependence of the stochastic characteristics of RTDs on meteorological and hydrological processes as well as on the morphological structure of the catchment. As a result we obtained mean residence times (MRTs) of base flow and groundwater flow on the mesoscale (4 km x 4 km). We compare the spatial distribution of MRTs with land cover and soil moisture maps as well as driving forces such as precipitation and temperature. Results showed that land cover is a major predictor of MRTs, whereas its impact on the mean evapotranspiration time was much lower. Additionally, we determined the temporal evolution of mean travel times by using time series of all relevant hydrological processes (observed as well as modeled by mHM) from 1960-2010. Our analysis revealed the strong regularity of the catchment dynamics over long time periods. The strong seasonal changes of MRTs, usually modeled by a sine-wave approach, could be approximated by a sawtooth-wave model. Our future work will focus on comparing our numerical results with data from tracer experiments and isotope measurements.
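Once an RTD has been assembled for a grid cell, its mean residence time is simply the first moment of the distribution. The helper below computes that moment by quadrature; the exponential test case is only a sanity check, not a statement about Thuringian catchments.

```python
import numpy as np

def mean_residence_time(t, p):
    """First moment of a residence time distribution p(T) sampled at times t.

    p need not be normalized; np.gradient(t) supplies quadrature weights
    that also handle unevenly spaced time grids.
    """
    t = np.asarray(t, dtype=float)
    p = np.asarray(p, dtype=float)
    w = np.gradient(t)
    return np.sum(t * p * w) / np.sum(p * w)

# Sanity check: an exponential RTD with scale 100 days has MRT ~ 100 days.
t = np.linspace(0.0, 2000.0, 4001)
print(mean_residence_time(t, np.exp(-t / 100.0)))
```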
Particles emitted from indoor combustion sources: size distribution measurement and chemical analysis.

PubMed

Roy, A A; Baxla, S P; Gupta, Tarun; Bandyopadhyaya, R; Tripathi, S N

2009-08-01

This study is primarily focused on measuring the particle size distribution and chemical composition of particulate matter originating from combustion sources typically found in Indian urban homes. Four such sources were selected: cigarette, incense stick, mosquito coil, and dhoop, the latter being a thick form of incense stick. Altogether, seven of the most popular brands available in the Indian market were tested. Particle size distributions in the smoke were measured using a scanning mobility particle sizer, with both the long and nano forms of the differential mobility analyzer (DMA), averaging readings over four to six runs. The measurable particle size range of the nano DMA was 4.6 nm to 157.8 nm, whereas that of the long DMA was 15.7 nm to 637.8 nm. Readings obtained from the long and nano DMAs were therefore compared for different brands as well as for different sources, and an overlap was seen in the common range of measurement. The lowest peak concentration was seen for one brand of incense stick (0.9 x 10^6 cm^-3), whereas the highest (7.1 x 10^6 cm^-3) was seen for the dhoop. Generally, these sources showed a peak between 140 and 170 nm; however, two incense stick brands showed peaks at 79 nm and 89 nm. The dhoop showed results much different from the rest of the sources, with a mode at around 240 nm. Chemical analysis of three heavy metals (cadmium, zinc, and lead) was performed using a graphite tube atomizer and flame atomic absorption spectrophotometer. Calculations were made to assess the expected cancer and noncancer risks, using published toxicity potentials for these three heavy metals. Our calculations revealed that all the sources showed lead concentrations well below the American Conference of Governmental Industrial Hygienists (ACGIH) threshold limit value (TLV). One of the two mosquito coil brands (M2) showed cadmium concentrations two times higher than the California Environmental Protection Agency (Cal EPA) reference exposure level (REL); it also showed the highest carcinogenic risk, 350 per million population. The amount of zinc obtained from the sources, however, was well below the standard limits, implying no risk in terms of zinc. PMID:19591538
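The carcinogenic risk figures quoted above follow from a linear exposure-response calculation: lifetime excess risk = chronic concentration × inhalation unit risk. The sketch below shows the arithmetic with an illustrative cadmium unit risk of the order published by US EPA IRIS; the concentration and exposure scaling are invented for the example and are not the paper's measured values.

```python
def excess_cancer_risk(conc_ugm3, unit_risk_per_ugm3, exposure_fraction=1.0):
    """Lifetime excess cancer risk under a linear no-threshold model.

    exposure_fraction scales continuous lifetime exposure down to, e.g.,
    a few hours of indoor-source use per day.
    """
    return conc_ugm3 * unit_risk_per_ugm3 * exposure_fraction

# Illustrative only: a cadmium unit risk of ~1.8e-3 (ug/m3)^-1 and a chronic
# exposure of 0.2 ug/m3 give 360 excess cancers per million exposed.
risk = excess_cancer_risk(0.2, 1.8e-3)
print(f"{risk * 1e6:.0f} per million")
```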
Analysis of 2-cm Thermal Emission from Saturn: Distribution of Ammonia Gas in the Cloud Layer

NASA Astrophysics Data System (ADS)

Laraia, A.; Ingersoll, A. P.; Janssen, M. A.; Gulkis, S.

2012-12-01

Ammonia gas, a condensable gas in Saturn's atmosphere, is the main opacity source at 2-cm wavelength. Thus an analysis of spatially resolved 2-cm thermal emission from Saturn provides information about the distribution of ammonia within the atmosphere. Assuming that the temperature falls off adiabatically with height, high 2-cm brightness temperatures indicate low column ammonia vapor abundance, and low brightness temperatures indicate high abundance. In this work we analyze four 2-cm brightness temperature maps of Saturn obtained from the radar instrument on board the Cassini spacecraft. The weighting function for this instrument peaks in the cloud layer near the 1-bar pressure level, depending on the column abundance of ammonia. We observe anomalies in brightness temperature within 10° of the equator, with relatively high brightness temperatures around ±10° surrounding a band of lower brightness temperatures directly on the equator and extending to ±3°. The temperatures off the equator are ~10 K higher than those at the equator in some cases. These observations are qualitatively consistent with Fletcher et al. (2011), who presented an analysis of 4.6-5.1 μm thermal spectra from Cassini VIMS data and found high ammonia abundance at the equator and low abundance just off it. To compare with Fletcher et al., we used an atmospheric testbed model, which provides forward calculation of microwave radiances, to constrain the physical parameters of Saturn's atmosphere, namely the relative humidity (RH) of ammonia in the cloud layer and the deep abundance of ammonia. We present the range of possibilities for the parameter regime in which Saturn lies, in terms of ammonia RH and deep abundance. Our analysis raises the question: what dynamical processes in the equatorial region cause such a drastic difference between the ammonia vapor abundance directly on the equator and that just off it? We speculate on a few possibilities that may lead to this latitudinal structure of ammonia vapor, and compare the thermal emission from Saturn as a function of latitude to the outgoing longwave radiation from Earth in the vicinity of the Hadley Cell.
CARES - CERAMICS ANALYSIS AND RELIABILITY EVALUATION OF STRUCTURES

NASA Technical Reports Server (NTRS)

Nemeth, N. N.

1994-01-01

The beneficial properties of structural ceramics include their high-temperature strength, light weight, hardness, and corrosion and oxidation resistance. For advanced heat engines, ceramics have demonstrated functional abilities at temperatures well beyond the operational limits of metals. This is offset by the fact that ceramic materials tend to be brittle. When a load is applied, their lack of significant plastic deformation causes the material to crack at microscopic flaws, destroying the component. CARES calculates the fast-fracture reliability or failure probability of macroscopically isotropic ceramic components. These components may be subjected to complex thermomechanical loadings. The program uses results from a commercial structural analysis program (MSC/NASTRAN or ANSYS) to evaluate component reliability due to inherent surface and/or volume type flaws. A multiple-material capability allows the finite element model reliability to be a function of many different ceramic material statistical characterizations. The reliability analysis uses element stress, temperature, area, and volume output obtained from two-dimensional shell and three-dimensional solid isoparametric or axisymmetric finite elements. CARES utilizes the Batdorf model and the two-parameter Weibull cumulative distribution function to describe the effects of multiaxial stress states on material strength. The shear-sensitive Batdorf model requires a user-selected flaw geometry and a mixed-mode fracture criterion. Flaws intersecting the surface and imperfections embedded in the volume can be modeled. The total strain energy release rate theory is used as a mixed-mode fracture criterion for coplanar crack extension. Out-of-plane crack extension criteria are approximated by a simple equation with a semi-empirical constant that can model the maximum tangential stress theory, the minimum strain energy density criterion, the maximum strain energy release rate theory, or experimental results. For comparison, Griffith's maximum tensile stress theory, the principle of independent action, and the Weibull normal stress averaging models are also included. Weibull material strength parameters, the Batdorf crack density coefficient, and other related statistical quantities are estimated from four-point bend bar or uniform uniaxial tensile specimen fracture strength data. Parameter estimation can be performed for single or multiple failure modes by using least-squares analysis or the maximum likelihood method. A more limited program, CARES/PC (COSMIC number LEW-15248), runs on a personal computer and estimates ceramic material properties from three-point bend bar data; CARES/PC does not perform fast-fracture reliability estimation. CARES is written in FORTRAN 77 and has been implemented on DEC VAX series computers under VMS and on IBM 370 series computers under VM/CMS. On a VAX, CARES requires 10 Mb of main memory. Five MSC/NASTRAN example problems and two ANSYS example problems are provided. There are two versions of CARES supplied on the distribution tape, CARES1 and CARES2; CARES2 contains sub-elements and CARES1 does not. CARES is available on a 9-track 1600 BPI VAX FILES-11 format magnetic tape (standard media) or in VAX BACKUP format on a TK50 tape cartridge. The program requires a FORTRAN 77 compiler and about 12 Mb of memory. CARES was developed in 1990. DEC, VAX, and VMS are trademarks of Digital Equipment Corporation. IBM 370 is a trademark of International Business Machines. MSC/NASTRAN is a trademark of MacNeal-Schwendler Corporation. ANSYS is a trademark of Swanson Analysis Systems, Inc.
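The two-parameter Weibull machinery at the core of CARES is easy to exercise on a small data set: the fast-fracture failure probability at stress sigma is Pf = 1 - exp(-(sigma/sigma0)^m), and the Weibull modulus m and characteristic strength sigma0 can be estimated from specimen fracture strengths by maximum likelihood. The sketch below does both with SciPy on hypothetical strength data; it omits the size scaling and the multiaxial (Batdorf) corrections that CARES itself applies.

```python
import numpy as np
from scipy import stats

def weibull_failure_probability(sigma, m, sigma0):
    """Two-parameter Weibull CDF: Pf = 1 - exp(-(sigma/sigma0)^m)."""
    return 1.0 - np.exp(-(np.asarray(sigma) / sigma0) ** m)

# Hypothetical four-point-bend fracture strengths (MPa).
strengths = np.array([412., 455., 478., 490., 502., 515., 530., 548., 561., 590.])

# Maximum-likelihood fit with the location fixed at zero (two-parameter form);
# weibull_min.fit returns (shape, loc, scale) = (m, 0, sigma0).
m_hat, _, sigma0_hat = stats.weibull_min.fit(strengths, floc=0.0)
print(f"Weibull modulus m = {m_hat:.1f}, characteristic strength = {sigma0_hat:.0f} MPa")
print("Pf at 450 MPa:", weibull_failure_probability(450.0, m_hat, sigma0_hat))
```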