Recurrence interval analysis of trading volumes
NASA Astrophysics Data System (ADS)
Ren, Fei; Zhou, Wei-Xing
2010-06-01
We study the statistical properties of the recurrence intervals τ between successive trading volumes exceeding a certain threshold q . The recurrence interval analysis is carried out for the 20 liquid Chinese stocks covering a period from January 2000 to May 2009, and two Chinese indices from January 2003 to April 2009. Similar to the recurrence interval distribution of the price returns, the tail of the recurrence interval distribution of the trading volumes follows a power-law scaling, and the results are verified by the goodness-of-fit tests using the Kolmogorov-Smirnov (KS) statistic, the weighted KS statistic and the Cramér-von Mises criterion. The measurements of the conditional probability distribution and the detrended fluctuation function show that both short-term and long-term memory effects exist in the recurrence intervals between trading volumes. We further study the relationship between trading volumes and price returns based on the recurrence interval analysis method. It is found that large trading volumes are more likely to occur following large price returns, and the comovement between trading volumes and price returns is more pronounced for large trading volumes.
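As a minimal illustration of the recurrence-interval procedure described above (not the authors' code; the synthetic volume series and the 95th-percentile threshold below are stand-ins for their data and their choice of q):

```python
import numpy as np

def recurrence_intervals(series, q):
    """Gaps (in samples) between successive values exceeding the threshold q."""
    idx = np.flatnonzero(np.asarray(series) > q)
    return np.diff(idx)

# toy usage with a synthetic "volume" series
rng = np.random.default_rng(0)
volumes = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)
q = np.quantile(volumes, 0.95)              # threshold chosen as the 95th percentile
tau = recurrence_intervals(volumes, q)
print(tau.mean(), np.quantile(tau, [0.5, 0.9]))
```

The resulting τ values are what the distributional tests (KS, weighted KS, Cramér-von Mises) and the memory analyses (conditional distributions, detrended fluctuation function) would then operate on.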
Publication Bias in Meta-Analysis: Confidence Intervals for Rosenthal's Fail-Safe Number
Fragkos, Konstantinos C.; Tsagris, Michail; Frangos, Christos C.
2014-01-01
The purpose of the present paper is to assess the efficacy of confidence intervals for Rosenthal's fail-safe number. Although Rosenthal's estimator is widely used by researchers, its statistical properties are largely unexplored. First, we developed statistical theory which allowed us to produce confidence intervals for Rosenthal's fail-safe number. This was achieved by distinguishing whether the number of studies analysed in a meta-analysis is fixed or random. Each case produces different variance estimators. For a given number of studies and a given distribution, we provide five variance estimators. Confidence intervals are examined with a normal approximation and a nonparametric bootstrap. The accuracy of the different confidence interval estimates was then tested by simulation under different distributional assumptions. The half-normal distribution variance estimator has the best probability coverage. Finally, we provide a table of lower confidence intervals for Rosenthal's estimator. PMID:27437470
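For reference, Rosenthal's fail-safe number itself has a simple closed form under the Stouffer combination of one-tailed z values; a hedged sketch (the function name and toy z values are ours, and the paper's variance estimators and confidence intervals are not reproduced here):

```python
import numpy as np
from scipy.stats import norm

def rosenthal_failsafe_n(z_values, alpha=0.05):
    """Fail-safe N: number of null (z = 0) studies needed to pull the combined
    one-tailed Stouffer z down to the critical value."""
    z = np.asarray(z_values, dtype=float)
    k = z.size
    z_crit = norm.ppf(1 - alpha)          # 1.645 for alpha = 0.05, one-tailed
    return z.sum() ** 2 / z_crit ** 2 - k

print(rosenthal_failsafe_n([2.1, 1.8, 2.5, 1.3, 2.9]))
```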
Confidence Intervals for True Scores Using the Skew-Normal Distribution
ERIC Educational Resources Information Center
Garcia-Perez, Miguel A.
2010-01-01
A recent comparative analysis of alternative interval estimation approaches and procedures has shown that confidence intervals (CIs) for true raw scores determined with the Score method--which uses the normal approximation to the binomial distribution--have actual coverage probabilities that are closest to their nominal level. It has also recently…
A Limitation of the Applicability of Interval Shift Analysis to Program Evaluation
ERIC Educational Resources Information Center
Hardy, Roy
1975-01-01
Interval Shift Analysis (ISA) is an adaptation of the linear programming model used to determine maximum benefits or minimal losses in quantifiable economics problems. ISA is applied to pre and posttest score distributions for 43 classes of second graders. (RC)
The Applicability of Confidence Intervals of Quantiles for the Generalized Logistic Distribution
NASA Astrophysics Data System (ADS)
Shin, H.; Heo, J.; Kim, T.; Jung, Y.
2007-12-01
The generalized logistic (GL) distribution has been widely used for frequency analysis. However, few studies have addressed the confidence intervals that indicate the prediction accuracy of quantile estimates for the GL distribution. In this paper, the estimation of confidence intervals of quantiles for the GL distribution is presented based on the method of moments (MOM), maximum likelihood (ML), and probability weighted moments (PWM), and the asymptotic variances of each quantile estimator are derived as functions of sample size, return period, and the parameters. Monte Carlo simulation experiments are also performed to verify the applicability of the derived confidence intervals of quantiles. The results show that the relative bias (RBIAS) and relative root mean square error (RRMSE) of the confidence intervals generally increase as the return period increases and decrease as the sample size increases. PWM performs better than the other methods in terms of RRMSE when the data are nearly symmetric, while ML shows the smallest RBIAS and RRMSE when the data are more skewed and the sample size is moderately large. The GL model was applied to fit the distribution of annual maximum rainfall data. The results show little difference in the estimated quantiles between ML and PWM, while MOM gives distinctly different estimates.
Tsukerman, B M; Finkel'shteĭn, I E
1987-07-01
A statistical analysis of prolonged ECG records has been carried out in patients with various heart rhythm and conduction disorders. The distribution of absolute R-R interval durations and the relationships between adjacent intervals have been examined. A two-step algorithm has been constructed that excludes anomalous and "suspicious" intervals from a sample of consecutively recorded R-R intervals, until only the intervals between contractions of genuinely sinus origin remain in the sample. The algorithm has been implemented as a program for the Electronica NC-80 microcomputer. It operates reliably even in cases of complex combined rhythm and conduction disorders.
A random effects meta-analysis model with Box-Cox transformation.
Yamaguchi, Yusuke; Maruo, Kazushi; Partlett, Christopher; Riley, Richard D
2017-07-19
In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption, and misspecification of the random effects distribution may result in a misleading estimate of the overall mean treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. We focus on problems caused by an inappropriate normality assumption of the random effects distribution, and propose a novel random effects meta-analysis model where a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise the overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied to any kind of variable once the treatment effect estimate is defined from the variable. A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and the conventional I² from the normal random effects model could be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences in summary results, heterogeneity measures and prediction intervals from the normal random effects model. The random effects meta-analysis with the Box-Cox transformation may be an important tool for examining the robustness of traditional meta-analysis results against skewness in the observed treatment effect estimates. Further critical evaluation of the method is needed.
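The Box-Cox transformation at the core of this model has the standard form; a small sketch of the forward and inverse transforms (the Bayesian estimation of the transformation parameter and the meta-analysis model itself are not reproduced, and the function names are ours):

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox transform; y must be positive (shift the estimates beforehand if needed)."""
    y = np.asarray(y, dtype=float)
    if np.isclose(lam, 0.0):
        return np.log(y)
    return (y ** lam - 1.0) / lam

def inv_box_cox(z, lam):
    """Back-transform, e.g. to report an overall median or prediction interval
    on the original scale (requires lam * z + 1 > 0)."""
    if np.isclose(lam, 0.0):
        return np.exp(z)
    return (lam * z + 1.0) ** (1.0 / lam)
```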
NASA Astrophysics Data System (ADS)
Endreny, Theodore A.; Pashiardis, Stelios
2007-02-01
Robust and accurate estimates of rainfall frequencies are difficult to make from short, arid-climate rainfall records; here, new regional and global methods were used to supplement such a constrained 15-34 yr record in Cyprus. The impact of supplementing rainfall frequency analysis with the regional and global approaches was measured with relative bias and root mean square error (RMSE) values. The analysis considered 42 stations with 8 time intervals (5-360 min) in four regions delineated by proximity to the sea and elevation. Regional statistical algorithms found the sites passed discordancy tests of the coefficient of variation, skewness and kurtosis, while heterogeneity tests revealed the regions were homogeneous to mildly heterogeneous. Rainfall depths were simulated in the regional analysis method 500 times, and goodness-of-fit tests then identified the best candidate distribution as the generalized extreme value (GEV) Type II. In the regional analysis, the method of L-moments was used to estimate the location, shape, and scale parameters. In the global analysis, the distribution was prescribed a priori as GEV Type II, the shape parameter was set a priori to 0.15, and a time interval term was constructed to use one set of parameters for all time intervals. Relative RMSE values were approximately equal at 10% for the regional and global methods when regions were compared, but when time intervals were compared the global method RMSE showed a parabolic-shaped trend with time interval. Relative bias values were also approximately equal for both methods when regions were compared, but again a parabolic-shaped time interval trend was found for the global method. The global method relative RMSE and bias trended with time interval, which may be caused by fitting a single scale value for all time intervals.
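As a generic illustration of fitting a GEV distribution to block maxima and reading off a return level (the study's L-moment regional procedure, prescribed shape of 0.15, and time-interval term are not reproduced; the data and scipy's maximum-likelihood fit below are stand-ins):

```python
import numpy as np
from scipy.stats import genextreme

# made-up annual-maximum rainfall depths for a single station and duration
annual_max = np.array([23.0, 31.5, 18.2, 40.1, 27.9, 35.6, 22.4, 29.8,
                       45.3, 26.1, 33.7, 19.9, 38.4, 24.6, 30.2])

# scipy uses its own sign convention for the shape parameter, which differs
# from the hydrological kappa; the fit here is MLE rather than L-moments
c, loc, scale = genextreme.fit(annual_max)
T = 50                                              # return period in years
x_T = genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)
print(f"{T}-yr return level: {x_T:.1f}")
```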
Volatility return intervals analysis of the Japanese market
NASA Astrophysics Data System (ADS)
Jung, W.-S.; Wang, F. Z.; Havlin, S.; Kaizoji, T.; Moon, H.-T.; Stanley, H. E.
2008-03-01
We investigate scaling and memory effects in return intervals between price volatilities above a certain threshold q for the Japanese stock market using daily and intraday data sets. We find that the distribution of return intervals can be approximated by a scaling function that depends only on the ratio between the return interval τ and its mean <τ>. We also find memory effects such that a large (or small) return interval follows a large (or small) interval by investigating the conditional distribution and mean return interval. The results are similar to previous studies of other markets and indicate that similar statistical features appear in different financial markets. We also compare our results between the period before and after the big crash at the end of 1989. We find that scaling and memory effects of the return intervals show similar features although the statistical properties of the returns are different.
NASA Astrophysics Data System (ADS)
Glazner, Allen F.; Sadler, Peter M.
2016-12-01
The duration of a geologic interval, such as the time over which a given volume of magma accumulated to form a pluton, or the lifespan of a large igneous province, is commonly determined from a relatively small number of geochronologic determinations (e.g., 4-10) within that interval. Such sample sets can underestimate the true length of the interval by a significant amount. For example, the average interval determined from a sample of size n = 5, drawn from a uniform random distribution, will underestimate the true interval by 50%. Even for n = 10, the average sample only captures ˜80% of the interval. If the underlying distribution is known then a correction factor can be determined from theory or Monte Carlo analysis; for a uniform random distribution, this factor is
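The quoted figures are consistent with a standard order-statistics result: for n points drawn independently and uniformly over an interval, the expected range is (n-1)/(n+1) of the true length, which suggests a correction factor of (n+1)/(n-1); presumably this is the factor the truncated sentence goes on to give. A quick Monte Carlo check (variable names and trial counts are ours):

```python
import numpy as np

rng = np.random.default_rng(42)

def mean_sample_range(n, trials=100_000):
    """Average range of n dates drawn uniformly over a unit-length interval."""
    x = rng.uniform(0.0, 1.0, size=(trials, n))
    return (x.max(axis=1) - x.min(axis=1)).mean()

for n in (5, 10):
    r = mean_sample_range(n)
    print(n, round(r, 3), round((n - 1) / (n + 1), 3))   # simulation vs (n-1)/(n+1)
```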
Cronin, Matthew A.; Amstrup, Steven C.; Durner, George M.; Noel, Lynn E.; McDonald, Trent L.; Ballard, Warren B.
1998-01-01
There is concern that caribou (Rangifer tarandus) may avoid roads and facilities (i.e., infrastructure) in the Prudhoe Bay oil field (PBOF) in northern Alaska, and that this avoidance can have negative effects on the animals. We quantified the relationship between caribou distribution and PBOF infrastructure during the post-calving period (mid-June to mid-August) with aerial surveys from 1990 to 1995. We conducted four to eight surveys per year with complete coverage of the PBOF. We identified active oil field infrastructure and used a geographic information system (GIS) to construct ten 1 km wide concentric intervals surrounding the infrastructure. We tested whether caribou distribution is related to distance from infrastructure with a chi-squared habitat utilization-availability analysis and log-linear regression. We considered bulls, calves, and total caribou of all sex/age classes separately. The habitat utilization-availability analysis indicated there was no consistent trend of attraction to or avoidance of infrastructure. Caribou frequently were more abundant than expected in the intervals close to infrastructure, and this trend was more pronounced for bulls and for total caribou of all sex/age classes than for calves. Log-linear regression (with a Poisson error structure) of caribou numbers against distance from infrastructure was also performed, with and without combining data into the 1 km distance intervals. The analysis without intervals revealed no relationship between caribou distribution and distance from oil field infrastructure, or between caribou distribution and Julian date, year, or distance from the Beaufort Sea coast. The log-linear regression with caribou combined into distance intervals showed the density of bulls and total caribou of all sex/age classes declined with distance from infrastructure. Our results indicate that during the post-calving period: 1) caribou distribution is largely unrelated to distance from infrastructure; 2) caribou regularly use habitats in the PBOF; 3) caribou often occur close to infrastructure; and 4) caribou do not appear to avoid oil field infrastructure.
A single-loop optimization method for reliability analysis with second order uncertainty
NASA Astrophysics Data System (ADS)
Xie, Shaojun; Pan, Baisong; Du, Xiaoping
2015-08-01
Reliability analysis may involve random variables and interval variables. In addition, some of the random variables may have interval distribution parameters owing to limited information. This kind of uncertainty is called second order uncertainty. This article develops an efficient reliability method for problems involving the three aforementioned types of uncertain input variables. The analysis produces the maximum and minimum reliability and is computationally demanding because two loops are needed: a reliability analysis loop with respect to random variables and an interval analysis loop for extreme responses with respect to interval variables. The first order reliability method and nonlinear optimization are used for the two loops, respectively. For computational efficiency, the two loops are combined into a single loop by treating the Karush-Kuhn-Tucker (KKT) optimal conditions of the interval analysis as constraints. Three examples are presented to demonstrate the proposed method.
Voter model with non-Poissonian interevent intervals
NASA Astrophysics Data System (ADS)
Takaguchi, Taro; Masuda, Naoki
2011-09-01
Recent analysis of social communications among humans has revealed that the interval between interactions for a pair of individuals and for an individual often follows a long-tail distribution. We investigate the effect of such a non-Poissonian nature of human behavior on dynamics of opinion formation. We use a variant of the voter model and numerically compare the time to consensus of all the voters with different distributions of interevent intervals and different networks. Compared with the exponential distribution of interevent intervals (i.e., the standard voter model), the power-law distribution of interevent intervals slows down consensus on the ring. This is because of the memory effect; in the power-law case, the expected time until the next update event on a link is large if the link has not had an update event for a long time. On the complete graph, the consensus time in the power-law case is close to that in the exponential case. Regular graphs bridge these two results such that the slowing down of the consensus in the power-law case as compared to the exponential case is less pronounced as the degree increases.
Temporal Structure of Volatility Fluctuations
NASA Astrophysics Data System (ADS)
Wang, Fengzhong; Yamasaki, Kazuko; Stanley, H. Eugene; Havlin, Shlomo
Volatility fluctuations are of great importance for the study of financial markets, and the temporal structure is an essential feature of fluctuations. To explore the temporal structure, we employ a new approach based on the return interval, which is defined as the time interval between two successive volatility values that are above a given threshold. We find that the distribution of the return intervals follows a scaling law over a wide range of thresholds, and over a broad range of sampling intervals. Moreover, this scaling law is universal for stocks of different countries, for commodities, for interest rates, and for currencies. However, further and more detailed analysis of the return intervals shows some systematic deviations from the scaling law. We also demonstrate a significant memory effect in the return intervals time organization. We find that the distribution of return intervals is strongly related to the correlations in the volatility.
Study of temperature distributions in wafer exposure process
NASA Astrophysics Data System (ADS)
Lin, Zone-Ching; Wu, Wen-Jang
During the photolithography exposure process, the wafer absorbs the exposure energy, which raises its temperature and causes thermal expansion. This phenomenon was often neglected because of its limited effect in previous process generations. In the new process generation, however, it may well become a factor to be considered. In this paper, a finite element model for analyzing the transient behavior of the wafer temperature distribution during exposure was established under the assumption that the wafer is clamped by a vacuum chuck without warpage. The model is capable of simulating the wafer temperature distribution under different exposure conditions. The analysis begins with the simulation of the transient response of a single exposure region to variations in exposure energy, spacing between exposure locations and interval of exposure time under continuous exposure, in order to investigate the wafer temperature distribution. The simulation results indicate that widening the spacing between exposure locations has a greater effect on improving the wafer temperature distribution than extending the exposure time interval between neighboring image fields. Moreover, as long as the distance between the field centers of two neighboring exposure regions exceeds a straight-line distance of three image-field widths, the interacting thermal effect during wafer exposure can be ignored. The analysis flow proposed in this paper can serve as a supporting reference for engineers planning exposure paths.
Tian, Guo-Liang; Li, Hui-Qiong
2017-08-01
Some existing confidence interval methods and hypothesis testing methods in the analysis of a contingency table with incomplete observations in both margins entirely depend on an underlying assumption that the sampling distribution of the observed counts is a product of independent multinomial/binomial distributions for complete and incomplete counts. However, it can be shown that this independency assumption is incorrect and can result in unreliable conclusions because of the under-estimation of the uncertainty. Therefore, the first objective of this paper is to derive the valid joint sampling distribution of the observed counts in a contingency table with incomplete observations in both margins. The second objective is to provide a new framework for analyzing incomplete contingency tables based on the derived joint sampling distribution of the observed counts by developing a Fisher scoring algorithm to calculate maximum likelihood estimates of parameters of interest, the bootstrap confidence interval methods, and the bootstrap testing hypothesis methods. We compare the differences between the valid sampling distribution and the sampling distribution under the independency assumption. Simulation studies showed that average/expected confidence-interval widths of parameters based on the sampling distribution under the independency assumption are shorter than those based on the new sampling distribution, yielding unrealistic results. A real data set is analyzed to illustrate the application of the new sampling distribution for incomplete contingency tables and the analysis results again confirm the conclusions obtained from the simulation studies.
NASA Astrophysics Data System (ADS)
Bruggeman, M.; Baeten, P.; De Boeck, W.; Carchon, R.
1996-02-01
Neutron coincidence counting is commonly used for the non-destructive assay of plutonium-bearing waste or for safeguards verification measurements. A major drawback of conventional coincidence counting is that a valid calibration is needed to convert a neutron coincidence count rate to a ²⁴⁰Pu-equivalent mass (²⁴⁰Pu-eq). In waste assay, calibrations are made for representative waste matrices and source distributions. The actual waste, however, may have quite different matrices and source distributions than the calibration samples, which often biases the assay result. This paper presents a new neutron multiplicity-sensitive coincidence counting technique that includes an auto-calibration of the neutron detection efficiency. The coincidence counting principle is based on recording one- and two-dimensional Rossi-alpha distributions triggered by pulse pairs and by pulse triplets, respectively. Rossi-alpha distributions allow easy discrimination between real and accidental coincidences and are intended to be measured by a PC-based fast time interval analyser. The Rossi-alpha distributions can be easily expressed in terms of a limited number of factorial moments of the neutron multiplicity distributions. The presented technique allows an unbiased measurement of the ²⁴⁰Pu-eq mass. The presented theory, referred to as Time Interval Analysis (TIA), is complementary to Time Correlation Analysis (TCA) theories developed in the past, but is theoretically much simpler and allows a straightforward calculation of dead-time corrections and error propagation. Analytical expressions are derived for the Rossi-alpha distributions as functions of the factorial moments of the efficiency-dependent multiplicity distributions. The validity of the proposed theory is demonstrated and verified via Monte Carlo simulations of pulse trains and the subsequent analysis of the simulated data.
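A one-dimensional Rossi-alpha distribution is, in essence, a histogram of the time differences between each trigger pulse and the pulses that follow it within a fixed window; the sketch below shows only that basic construction (the paper's pulse-pair/triplet triggering, two-dimensional distributions, dead-time corrections and factorial-moment analysis are not reproduced, and the pulse train is synthetic):

```python
import numpy as np

def rossi_alpha(pulse_times, window):
    """Time differences from each trigger pulse to all later pulses within `window`."""
    t = np.sort(np.asarray(pulse_times, dtype=float))
    diffs = []
    for i, t0 in enumerate(t):
        later = t[i + 1:]
        later = later[later - t0 <= window]
        diffs.extend(later - t0)
    return np.asarray(diffs)

# toy pulse train: Poisson background only, so the histogram should be flat
rng = np.random.default_rng(1)
pulses = np.cumsum(rng.exponential(scale=100e-6, size=5000))   # seconds
counts, edges = np.histogram(rossi_alpha(pulses, 512e-6), bins=64)
print(counts[:8])
```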
Sun, J
1995-09-01
In this paper we discuss the non-parametric estimation of a distribution function based on incomplete data for which the measurement origin of a survival time, or the date of enrollment in a study, is known only to belong to an interval. In addition, the survival time of interest itself is observed from a truncated distribution and is known only to lie in an interval. To estimate the distribution function, a simple self-consistency algorithm, a generalization of Turnbull's (1976, Journal of the Royal Statistical Society, Series B 38, 290-295) self-consistency algorithm, is proposed. This method is then used to analyze two AIDS cohort studies, for which direct use of the EM algorithm (Dempster, Laird and Rubin, 1977, Journal of the Royal Statistical Society, Series B 39, 1-38), which is computationally complicated, has previously been the usual method of analysis.
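The self-consistency idea can be sketched for ordinary interval-censored data as an EM-style reweighting of probability mass over candidate support points; this is only the basic Turnbull-type iteration, not the paper's generalization to truncated and doubly interval-censored data, and all names and the toy data are ours:

```python
import numpy as np

def self_consistency(intervals, support, tol=1e-8, max_iter=10_000):
    """Self-consistency estimate of the mass on a fixed grid of support points,
    given interval-censored observations (L_i, R_i]. Assumes every observation
    is compatible with at least one support point."""
    L, R = np.asarray(intervals, dtype=float).T
    s = np.asarray(support, dtype=float)
    # alpha[i, j] = True if support point j lies inside observation i's interval
    alpha = (s[None, :] > L[:, None]) & (s[None, :] <= R[:, None])
    p = np.full(s.size, 1.0 / s.size)
    for _ in range(max_iter):
        w = alpha * p                          # expected allocation of each observation
        w /= w.sum(axis=1, keepdims=True)
        p_new = w.mean(axis=0)
        if np.max(np.abs(p_new - p)) < tol:
            break
        p = p_new
    return p

# toy example: near-exact observations mixed with genuinely interval-censored ones
obs = [(0.9, 1.0), (1.9, 2.0), (0.0, 2.0), (1.0, 3.0), (2.9, 3.0)]
print(self_consistency(obs, support=[1.0, 2.0, 3.0]).round(3))
```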
ERIC Educational Resources Information Center
Shieh, Gwowen
2006-01-01
This paper considers the problem of analysis of correlation coefficients from a multivariate normal population. A unified theorem is derived for the regression model with normally distributed explanatory variables and the general results are employed to provide useful expressions for the distributions of simple, multiple, and partial-multiple…
Power in Bayesian Mediation Analysis for Small Sample Research
Miočević, Milica; MacKinnon, David P.; Levy, Roy
2018-01-01
It was suggested that Bayesian methods have potential for increasing power in mediation analysis (Koopman, Howe, Hollenbeck, & Sin, 2015; Yuan & MacKinnon, 2009). This paper compares the power of Bayesian credibility intervals for the mediated effect to the power of normal theory, distribution of the product, percentile, and bias-corrected bootstrap confidence intervals at N≤ 200. Bayesian methods with diffuse priors have power comparable to the distribution of the product and bootstrap methods, and Bayesian methods with informative priors had the most power. Varying degrees of precision of prior distributions were also examined. Increased precision led to greater power only when N≥ 100 and the effects were small, N < 60 and the effects were large, and N < 200 and the effects were medium. An empirical example from psychology illustrated a Bayesian analysis of the single mediator model from prior selection to interpreting results. PMID:29662296
Statistical physics approaches to financial fluctuations
NASA Astrophysics Data System (ADS)
Wang, Fengzhong
2009-12-01
Complex systems attract many researchers from various scientific fields. Financial markets are one of these widely studied complex systems. Statistical physics, which was originally developed to study large systems, provides novel ideas and powerful methods to analyze financial markets. The study of financial fluctuations characterizes market behavior, and helps to better understand the underlying market mechanism. Our study focuses on volatility, a fundamental quantity to characterize financial fluctuations. We examine equity data of the entire U.S. stock market during 2001 and 2002. To analyze the volatility time series, we develop a new approach, called return interval analysis, which examines the time intervals between two successive volatilities exceeding a given value threshold. We find that the return interval distribution displays scaling over a wide range of thresholds. This scaling is valid for a range of time windows, from one minute up to one day. Moreover, our results are similar for commodities, interest rates, currencies, and for stocks of different countries. Further analysis shows some systematic deviations from a scaling law, which we can attribute to nonlinear correlations in the volatility time series. We also find a memory effect in return intervals for different time scales, which is related to the long-term correlations in the volatility. To further characterize the mechanism of price movement, we simulate the volatility time series using two different models, fractionally integrated generalized autoregressive conditional heteroscedasticity (FIGARCH) and fractional Brownian motion (fBm), and test these models with the return interval analysis. We find that both models can mimic time memory but only fBm shows scaling in the return interval distribution. In addition, we examine the volatility of daily opening to closing and of closing to opening. We find that each volatility distribution has a power law tail. Using the detrended fluctuation analysis (DFA) method, we show long-term auto-correlations in these volatility time series. We also analyze return, the actual price changes of stocks, and find that the returns over the two sessions are often anti-correlated.
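Since the study leans on detrended fluctuation analysis, a compact DFA sketch may help fix ideas (non-overlapping boxes and first-order detrending; the scales and test series are arbitrary, and this is not the author's implementation):

```python
import numpy as np

def dfa(x, scales, order=1):
    """Detrended fluctuation analysis: returns the fluctuation F(n) per box size n."""
    y = np.cumsum(np.asarray(x, dtype=float) - np.mean(x))   # integrated profile
    F = []
    for n in scales:
        n_boxes = y.size // n
        segs = y[: n_boxes * n].reshape(n_boxes, n)
        t = np.arange(n)
        resid_sq = []
        for seg in segs:
            coef = np.polyfit(t, seg, order)                 # local polynomial trend
            resid_sq.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(resid_sq)))
    return np.asarray(F)

# the slope of log F(n) vs log n estimates the scaling exponent alpha
scales = np.array([16, 32, 64, 128, 256])
x = np.random.default_rng(2).standard_normal(10_000)
alpha = np.polyfit(np.log(scales), np.log(dfa(x, scales)), 1)[0]
print(round(alpha, 2))   # roughly 0.5 for uncorrelated white noise
```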
NASA Astrophysics Data System (ADS)
Dietze, Michael; Fuchs, Margret; Kreutzer, Sebastian
2016-04-01
Many modern approaches to radiometric dating or geochemical fingerprinting rely on sampling sedimentary deposits. A key assumption of most concepts is that the extracted grain-size fraction of the sampled sediment adequately represents the actual process to be dated or the source area to be fingerprinted. However, these assumptions are not always well constrained. Rather, they have to align with arbitrary, method-determined size intervals, such as "coarse grain" or "fine grain", whose definitions partly differ between methods. Such arbitrary intervals violate basic process-based concepts of sediment transport and can thus introduce significant bias into the analysis outcome (i.e., a deviation of the measured from the true value). We present a flexible numerical framework (numOlum) for the statistical programming language R that allows quantifying the bias due to any given analysis size interval for different types of sediment deposits. This framework is applied to synthetic samples from the realms of luminescence dating and geochemical fingerprinting, i.e. a virtual reworked loess section. We show independent validation data from artificially dosed and subsequently mixed grain-size proportions, and we present a statistical approach (end-member modelling analysis, EMMA) that allows accounting for the effect of measuring the compound dosimetric history or geochemical composition of a sample. EMMA separates polymodal grain-size distributions into the underlying transport process-related distributions and their contribution to each sample. These underlying distributions can then be used to adjust grain-size preparation intervals to minimise the incorporation of "undesired" grain-size fractions.
Interval sampling methods and measurement error: a computer simulation.
Wirth, Oliver; Slaven, James; Taylor, Matthew A
2014-01-01
A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
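A bare-bones version of such a simulation, scoring one synthetic event record with the three methods (the interval count, event lengths and scoring conventions here are illustrative assumptions, not the study's parameter grid):

```python
import numpy as np

def score_intervals(event, n_intervals):
    """Score a boolean event series (True = behaviour occurring) with three
    interval sampling methods; returns the proportion of intervals scored."""
    segs = np.array_split(event, n_intervals)
    mts = np.mean([seg[-1] for seg in segs])          # momentary time sampling (last tick)
    partial = np.mean([seg.any() for seg in segs])    # partial-interval recording
    whole = np.mean([seg.all() for seg in segs])      # whole-interval recording
    return mts, partial, whole

# toy observation period: 3600 one-second ticks, ~20 s events scattered at random
rng = np.random.default_rng(3)
event = np.zeros(3600, dtype=bool)
for start in rng.integers(0, 3580, size=30):
    event[start:start + 20] = True
true_duration = event.mean()
print(true_duration, score_intervals(event, n_intervals=360))
```

Comparing the three scores with the true proportion of time the behaviour occurred reproduces the familiar pattern that partial-interval recording overestimates and whole-interval recording underestimates cumulative event duration.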
Kwon, Yong Hyun; Kwon, Jung Won; Lee, Myoung Hee
2015-01-01
[Purpose] The purpose of the current study was to compare the effectiveness of motor sequential learning under two different types of practice schedules, a distributed practice schedule (two 12-hour inter-session intervals) and a massed practice schedule (two 10-minute inter-session intervals), using a serial reaction time (SRT) task. [Subjects and Methods] Thirty healthy subjects were recruited and then randomly and evenly assigned to either the distributed practice group or the massed practice group. All subjects performed three consecutive sessions of the SRT task following one of the two types of practice schedules. Distributed practice was scheduled with two 12-hour inter-session intervals including sleeping time, whereas massed practice was administered with two 10-minute inter-session intervals. Response time (RT) and response accuracy (RA) were measured at pre-test, mid-test, and post-test. [Results] For RT, univariate analysis demonstrated significant main effects in the within-group comparison of the three tests as well as the interaction effect of two groups × three tests, whereas the between-group comparison showed no significant effect. The results for RA showed no significant differences in either the between-group comparison or the interaction effect of two groups × three tests, whereas the within-group comparison of the three tests showed a significant main effect. [Conclusion] Distributed practice led to greater enhancement of motor skill acquisition at the first inter-session interval as well as at the second inter-session interval the following day, compared to massed practice. Consequently, the results of this study suggest that a distributed practice schedule can enhance the effectiveness of motor sequential learning in one-day as well as two-day learning formats compared to massed practice. PMID:25931727
PV System Component Fault and Failure Compilation and Analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klise, Geoffrey Taylor; Lavrova, Olga; Gooding, Renee Lynne
This report describes data collection and analysis of solar photovoltaic (PV) equipment events, which consist of faults and failures that occur during the normal operation of a distributed PV system or PV power plant. We present summary statistics from locations where maintenance data is being collected at various intervals, as well as reliability statistics gathered from that data, consisting of fault/failure distributions and repair distributions for a wide range of PV equipment types.
Asymptotic confidence intervals for the Pearson correlation via skewness and kurtosis.
Bishara, Anthony J; Li, Jiexiang; Nash, Thomas
2018-02-01
When bivariate normality is violated, the default confidence interval of the Pearson correlation can be inaccurate. Two new methods were developed based on the asymptotic sampling distribution of Fisher's z' under the general case where bivariate normality need not be assumed. In Monte Carlo simulations, the most successful of these methods relied on the (Vale & Maurelli, 1983, Psychometrika, 48, 465) family to approximate a distribution via the marginal skewness and kurtosis of the sample data. In Simulation 1, this method provided more accurate confidence intervals of the correlation in non-normal data, at least as compared to no adjustment of the Fisher z' interval, or to adjustment via the sample joint moments. In Simulation 2, this approximate distribution method performed favourably relative to common non-parametric bootstrap methods, but its performance was mixed relative to an observed imposed bootstrap and two other robust methods (PM1 and HC4). No method was completely satisfactory. An advantage of the approximate distribution method, though, is that it can be implemented even without access to raw data if sample skewness and kurtosis are reported, making the method particularly useful for meta-analysis. Supporting information includes R code. © 2017 The British Psychological Society.
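For context, the default interval being improved upon here is the normal-theory Fisher z' interval; a minimal sketch of that baseline (the paper's skewness/kurtosis-adjusted variants are not reproduced):

```python
import numpy as np
from scipy.stats import norm

def fisher_z_ci(r, n, conf=0.95):
    """Default Fisher z' confidence interval for a Pearson correlation,
    i.e. the normal-theory interval whose robustness the paper examines."""
    z = np.arctanh(r)                     # Fisher z' transform
    se = 1.0 / np.sqrt(n - 3)             # approximate standard error of z'
    half = norm.ppf(0.5 + conf / 2) * se
    return np.tanh(z - half), np.tanh(z + half)

print(fisher_z_ci(r=0.40, n=50))
```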
One-way ANOVA based on interval information
NASA Astrophysics Data System (ADS)
Hesamian, Gholamreza
2016-08-01
This paper deals with extending one-way analysis of variance (ANOVA) to the case where the observed data are represented by closed intervals rather than real numbers. In this approach, a notion of interval random variable is first introduced. In particular, a normal distribution with interval parameters is introduced to investigate hypotheses about the equality of interval means or to test the homogeneity-of-interval-variances assumption. Moreover, the least significant difference (LSD) method for multiple comparison of interval means is developed for use when the null hypothesis about the equality of means is rejected. Then, at a given interval significance level, an index is applied to compare the interval test statistic and the related interval critical value as a criterion to accept or reject the null interval hypothesis of interest. Finally, the decision-making method yields degrees of acceptance or rejection of the interval hypotheses. An applied example is used to show the performance of this method.
Return Intervals Approach to Financial Fluctuations
NASA Astrophysics Data System (ADS)
Wang, Fengzhong; Yamasaki, Kazuko; Havlin, Shlomo; Stanley, H. Eugene
Financial fluctuations play a key role for financial markets studies. A new approach focusing on properties of return intervals can help to get better understanding of the fluctuations. A return interval is defined as the time between two successive volatilities above a given threshold. We review recent studies and analyze the 1000 most traded stocks in the US stock markets. We find that the distribution of the return intervals has a well approximated scaling over a wide range of thresholds. The scaling is also valid for various time windows from one minute up to one trading day. Moreover, these results are universal for stocks of different countries, commodities, interest rates as well as currencies. Further analysis shows some systematic deviations from a scaling law, which are due to the nonlinear correlations in the volatility sequence. We also examine the memory in return intervals for different time scales, which are related to the long-term correlations in the volatility. Furthermore, we test two popular models, FIGARCH and fractional Brownian motion (fBm). Both models can catch the memory effect but only fBm shows a good scaling in the return interval distribution.
Bayesian analyses of time-interval data for environmental radiation monitoring.
Luo, Peng; Sharp, Julia L; DeVol, Timothy A
2013-01-01
Time-interval (time difference between two consecutive pulses) analysis based on the principles of Bayesian inference was investigated for online radiation monitoring. Using experimental and simulated data, Bayesian analysis of time-interval data [Bayesian (ti)] was compared with Bayesian and a conventional frequentist analysis of counts in a fixed count time [Bayesian (cnt) and single interval test (SIT), respectively]. The performances of the three methods were compared in terms of average run length (ARL) and detection probability for several simulated detection scenarios. Experimental data were acquired with a DGF-4C system in list mode. Simulated data were obtained using Monte Carlo techniques to obtain a random sampling of the Poisson distribution. All statistical algorithms were developed using the R Project for statistical computing. Bayesian analysis of time-interval information provided a similar detection probability as Bayesian analysis of count information, but the authors were able to make a decision with fewer pulses at relatively higher radiation levels. In addition, for the cases with very short presence of the source (< count time), time-interval information is more sensitive to detect a change than count information since the source data is averaged by the background data over the entire count time. The relationships of the source time, change points, and modifications to the Bayesian approach for increasing detection probability are presented.
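One simple way to see how time-interval data feed a Bayesian decision rule is the conjugate gamma-exponential update for a Poisson count rate; the sketch below is generic and not the authors' algorithm (the prior parameters, background rate and data are made up):

```python
import numpy as np
from scipy.stats import gamma

def posterior_rate(intervals, a0=1.0, b0=1.0):
    """Conjugate gamma posterior for a Poisson count rate, given observed pulse
    time intervals modelled as exponential inter-arrival times."""
    t = np.asarray(intervals, dtype=float)
    a_post = a0 + t.size          # shape: prior pseudo-counts + observed pulses
    b_post = b0 + t.sum()         # rate:  prior pseudo-time  + observed time
    return gamma(a=a_post, scale=1.0 / b_post)

background_rate = 10.0            # counts per second, assumed known
post = posterior_rate(np.random.default_rng(4).exponential(1 / 25.0, size=40))
print(post.mean(), 1 - post.cdf(background_rate))   # P(rate > background | data)
```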
Lash, Timothy L
2007-11-26
The associations of pesticide exposure with disease outcomes are estimated without the benefit of a randomized design. For this reason and others, these studies are susceptible to systematic errors. I analyzed studies of the associations between alachlor and glyphosate exposure and cancer incidence, both derived from the Agricultural Health Study cohort, to quantify the bias and uncertainty potentially attributable to systematic error. For each study, I identified the prominent result and important sources of systematic error that might affect it. I assigned probability distributions to the bias parameters that allow quantification of the bias, drew a value at random from each assigned distribution, and calculated the estimate of effect adjusted for the biases. By repeating the draw and adjustment process over multiple iterations, I generated a frequency distribution of adjusted results, from which I obtained a point estimate and simulation interval. These methods were applied without access to the primary record-level dataset. The conventional estimates of effect associating alachlor and glyphosate exposure with cancer incidence were likely biased away from the null and understated the uncertainty by quantifying only random error. For example, the conventional p-value for a test of trend in the alachlor study equaled 0.02, whereas fewer than 20% of the bias analysis iterations yielded a p-value of 0.02 or lower. Similarly, the conventional fully-adjusted result associating glyphosate exposure with multiple myeloma equaled 2.6 with a 95% confidence interval of 0.7 to 9.4. The frequency distribution generated by the bias analysis yielded a median hazard ratio equal to 1.5 with a 95% simulation interval of 0.4 to 8.9, which was 66% wider than the conventional interval. Bias analysis provides a more complete picture of true uncertainty than conventional frequentist statistical analysis accompanied by a qualitative description of study limitations. The latter approach is likely to lead to overconfidence regarding the potential for causal associations, whereas the former safeguards against such overinterpretations. Furthermore, such analyses, once programmed, allow rapid implementation of alternative assignments of probability distributions to the bias parameters, and so elevate the plane of discussion regarding study bias from characterizing studies as "valid" or "invalid" to a critical and quantitative discussion of sources of uncertainty.
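The general recipe (assign a distribution to each bias parameter, draw a value, adjust the estimate, add conventional random error, repeat) can be sketched generically; the bias model, its lognormal parameters and the standard error below are illustrative assumptions, not the values used in the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

def bias_analysis(rr_conv, se_log_rr, n_iter=50_000):
    """Generic probabilistic bias analysis sketch: a single multiplicative bias
    on the rate ratio is drawn from an assigned distribution, removed from the
    conventional estimate, and conventional random error is added back."""
    bias = rng.lognormal(mean=np.log(1.3), sigma=0.2, size=n_iter)  # assigned distribution
    adjusted = np.log(rr_conv) - np.log(bias)                        # remove the drawn bias
    adjusted += rng.normal(0.0, se_log_rr, size=n_iter)              # conventional random error
    rr = np.exp(adjusted)
    return np.median(rr), np.percentile(rr, [2.5, 97.5])

# se_log_rr here is back-calculated from a hypothetical 95% CI, purely for illustration
print(bias_analysis(rr_conv=2.6, se_log_rr=0.66))
```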
A new variable interval schedule with constant hazard rate and finite time range.
Bugallo, Mehdi; Machado, Armando; Vasconcelos, Marco
2018-05-27
We propose a new variable interval (VI) schedule that achieves constant probability of reinforcement in time while using a bounded range of intervals. By sampling each trial duration from a uniform distribution ranging from 0 to 2T seconds, and then applying a reinforcement rule that depends linearly on trial duration, the schedule alternates reinforced and unreinforced trials, each less than 2T seconds, while preserving a constant hazard function. © 2018 Society for the Experimental Analysis of Behavior.
NASA Astrophysics Data System (ADS)
Huo, Chengyu; Huang, Xiaolin; Zhuang, Jianjun; Hou, Fengzhen; Ni, Huangjing; Ning, Xinbao
2013-09-01
The Poincaré plot is one of the most important approaches in human cardiac rhythm analysis. However, further investigations are still needed to concentrate on techniques that can characterize the dispersion of the points displayed by a Poincaré plot. Based on a modified Poincaré plot, we provide a novel measurement named distribution entropy (DE) and propose a quadrantal multi-scale distribution entropy analysis (QMDE) for the quantitative descriptions of the scatter distribution patterns in various regions and temporal scales. We apply this method to the heartbeat interval series derived from healthy subjects and congestive heart failure (CHF) sufferers, respectively, and find that the discriminations between them are most significant in the first quadrant, which implies significant impacts on vagal regulation brought about by CHF. We also investigate the day-night differences of young healthy people, and it is shown that the results present a clearly circadian rhythm, especially in the first quadrant. In addition, the multi-scale analysis indicates that the results of healthy subjects and CHF sufferers fluctuate in different trends with variation of the scale factor. The same phenomenon also appears in circadian rhythm investigations of young healthy subjects, which implies that the cardiac dynamic system is affected differently in various temporal scales by physiological or pathological factors.
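To make the geometry concrete, the Poincaré scatter is simply the set of successive interval pairs, and one plausible quadrant split is about the mean interval; the sketch below shows only that construction on a toy RR series and does not reproduce the authors' modified plot, their distribution entropy measure or the multi-scale procedure:

```python
import numpy as np

def poincare_points(rr):
    """Poincaré scatter of successive heartbeat intervals: x = RR_n, y = RR_{n+1}."""
    rr = np.asarray(rr, dtype=float)
    return rr[:-1], rr[1:]

def quadrant_counts(x, y):
    """Split the scatter into four quadrants about the mean interval
    (one plausible reading of a 'quadrantal' partition)."""
    cx, cy = x.mean(), y.mean()
    q1 = np.sum((x >= cx) & (y >= cy))   # both intervals longer than average
    q2 = np.sum((x <  cx) & (y >= cy))
    q3 = np.sum((x <  cx) & (y <  cy))
    q4 = np.sum((x >= cx) & (y <  cy))
    return q1, q2, q3, q4

rr = 0.8 + 0.05 * np.random.default_rng(7).standard_normal(2000)  # toy RR series (s)
x, y = poincare_points(rr)
print(quadrant_counts(x, y))
```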
Risk-based maintenance of ethylene oxide production facilities.
Khan, Faisal I; Haddara, Mahmoud R
2004-05-20
This paper discusses a methodology for the design of an optimum inspection and maintenance program. The methodology, called risk-based maintenance (RBM), is based on integrating a reliability approach and a risk assessment strategy to obtain an optimum maintenance schedule. First, the likely equipment failure scenarios are formulated. Out of the many likely failure scenarios, the most probable ones are subjected to a detailed study. A detailed consequence analysis is performed for the selected scenarios. Subsequently, these failure scenarios are subjected to a fault tree analysis to determine their probabilities. Finally, risk is computed by combining the results of the consequence and probability analyses. The calculated risk is compared against known acceptable criteria. The frequencies of the maintenance tasks are obtained by minimizing the estimated risk. A case study involving an ethylene oxide production facility is presented. Out of the five most hazardous units considered, the pipeline used for the transportation of ethylene is found to have the highest risk. Using available failure data and a lognormal reliability distribution function, human health risk factors are calculated. Both societal risk factors and individual risk factors exceeded the acceptable risk criteria. To determine an optimal maintenance interval, a reverse fault tree analysis was used. The maintenance interval was determined such that the original high risk is brought down to an acceptable level. A sensitivity analysis is also undertaken to study the impact of changing the distribution of the reliability model, as well as the error in the distribution parameters, on the maintenance interval.
Funke, K; Wörgötter, F
1995-01-01
1. The spike interval pattern during the light responses of 155 on- and 81 off-centre cells of the dorsal lateral geniculate nucleus (LGN) was studied in anaesthetized and paralysed cats by the use of a novel analysis. Temporally localized interval distributions were computed from a 100 ms time window, which was shifted along the time axis in 10 ms steps, resulting in a 90% overlap between two adjacent windows. For each step the interval distribution was computed inside the time window with 1 ms resolution, and plotted as a greyscale-coded pixel line orthogonal to the time axis. For visual stimulation, light or dark spots of different size and contrast were presented with different background illumination levels. 2. Two characteristic interval patterns were observed during the sustained response component of the cells. Mainly on-cells (77%) responded with multimodal interval distributions, resulting in elongated 'bands' in the 2-dimensional time window plots. In similar situations, the interval distributions for most (71%) off-cells were rather wide and featureless. In those cases where interval bands (i.e. multimodal interval distributions) were observed for off-cells (14%), they were always much wider than for the on-cells. This difference between the on- and off-cell population was independent of the background illumination and the contrast of the stimulus. Y on-cells also tended to produce wider interval bands than X on-cells. 3. For most stimulation situations the first interval band was centred around 6-9 ms, which has been called the fundamental interval; higher order bands are multiples thereof. The fundamental interval shifted towards larger sizes with decreasing stimulus contrast. Increasing stimulus size, on the other hand, resulted in a redistribution of the intervals into higher order bands, while at the same time the location of the fundamental interval remained largely unaffected. This was interpreted as an effect of the increasing surround inhibition at the geniculate level, by which individual retinal EPSPs were cancelled. A changing level of adaptation can result in a mixed shift/redistribution effect because of the changing stimulus contrast and changing level of tonic inhibition. 4. The occurrence of interval bands is not directly related to the shape of the autocorrelation function, which can be flat, weakly oscillatory or strongly oscillatory, regardless of the interval band pattern. 5. A simple computer model was devised to account for the observed cell behaviour. The model is highly robust against parameter variations.(ABSTRACT TRUNCATED AT 400 WORDS) PMID:7562612
Program for Weibull Analysis of Fatigue Data
NASA Technical Reports Server (NTRS)
Krantz, Timothy L.
2005-01-01
A Fortran computer program has been written for performing statistical analyses of fatigue-test data that are assumed to be adequately represented by a two-parameter Weibull distribution. This program calculates the following: (1) Maximum-likelihood estimates of the Weibull-distribution parameters; (2) Data for contour plots of relative likelihood for two parameters; (3) Data for contour plots of joint confidence regions; (4) Data for the profile likelihood of the Weibull-distribution parameters; (5) Data for the profile likelihood of any percentile of the distribution; and (6) Likelihood-based confidence intervals for parameters and/or percentiles of the distribution. The program can account for tests that are suspended without failure (the statistical term for such suspension of tests is "censoring"). The analytical approach followed in this program is valid for type-I censoring, which is the removal of unfailed units at pre-specified times. Confidence regions and intervals are calculated by use of the likelihood-ratio method.
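A rough modern equivalent of the basic fit (item 1) is shown below using scipy's maximum-likelihood Weibull fit; the censoring support, likelihood contours and likelihood-ratio intervals of the Fortran program are not reproduced, and the fatigue lives are made up:

```python
import numpy as np
from scipy.stats import weibull_min

# toy fatigue lives (cycles); a two-parameter fit is obtained by fixing loc = 0
lives = np.array([1.2e5, 2.3e5, 0.8e5, 3.1e5, 1.9e5, 2.7e5, 1.5e5, 2.0e5])

shape, loc, scale = weibull_min.fit(lives, floc=0)
b10 = weibull_min.ppf(0.10, shape, loc=0, scale=scale)   # 10th-percentile life
print(shape, scale, b10)
```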
Likelihood-based confidence intervals for estimating floods with given return periods
NASA Astrophysics Data System (ADS)
Martins, Eduardo Sávio P. R.; Clarke, Robin T.
1993-06-01
This paper discusses aspects of the calculation of likelihood-based confidence intervals for T-year floods, with particular reference to (1) the two-parameter gamma distribution; (2) the Gumbel distribution; (3) the two-parameter log-normal distribution, and other distributions related to the normal by Box-Cox transformations. Calculation of the confidence limits is straightforward using the Nelder-Mead algorithm with a constraint incorporated, although care is necessary to ensure convergence either of the Nelder-Mead algorithm, or of the Newton-Raphson calculation of maximum-likelihood estimates. Methods are illustrated using records from 18 gauging stations in the basin of the River Itajai-Acu, State of Santa Catarina, southern Brazil. A small and restricted simulation compared likelihood-based confidence limits with those given by use of the central limit theorem; for the same confidence probability, the confidence limits of the simulation were wider than those of the central limit theorem, which failed more frequently to contain the true quantile being estimated. The paper discusses possible applications of likelihood-based confidence intervals in other areas of hydrological analysis.
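For the Gumbel case, the T-year flood is the (1 - 1/T) quantile of the fitted distribution; a minimal sketch of the point estimate only (the likelihood-based confidence limits via constrained optimization, which are the paper's subject, are not reproduced, and the peak-flow series is invented):

```python
import numpy as np
from scipy.stats import gumbel_r

# made-up annual peak discharges (m^3/s)
annual_peaks = np.array([420., 515., 380., 610., 455., 530., 490., 700.,
                         365., 580., 440., 505.])

loc, scale = gumbel_r.fit(annual_peaks)                    # maximum-likelihood estimates
T = 100
q_T = gumbel_r.ppf(1.0 - 1.0 / T, loc=loc, scale=scale)    # T-year flood quantile
print(q_T)
```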
NASA Astrophysics Data System (ADS)
Yin, Shengwen; Yu, Dejie; Yin, Hui; Lü, Hui; Xia, Baizhan
2017-09-01
Considering the epistemic uncertainties within the hybrid Finite Element/Statistical Energy Analysis (FE/SEA) model when it is used for the response analysis of built-up systems in the mid-frequency range, the hybrid Evidence Theory-based Finite Element/Statistical Energy Analysis (ETFE/SEA) model is established by introducing evidence theory. Based on the hybrid ETFE/SEA model and the sub-interval perturbation technique, the hybrid Sub-interval Perturbation and Evidence Theory-based Finite Element/Statistical Energy Analysis (SIP-ETFE/SEA) approach is proposed. In the hybrid ETFE/SEA model, the uncertainty in the SEA subsystem is modeled by a non-parametric ensemble, while the uncertainty in the FE subsystem is described by focal elements and basic probability assignments (BPAs) and handled with evidence theory. Within the hybrid SIP-ETFE/SEA approach, the mid-frequency response of interest, such as the ensemble average of the energy response and the cross-spectrum response, is calculated analytically by using the conventional hybrid FE/SEA method. Inspired by probability theory, the intervals of the mean value, variance and cumulative distribution are used to describe the distribution characteristics of mid-frequency responses of built-up systems with epistemic uncertainties. To alleviate the computational burden of the extreme value analysis, the sub-interval perturbation technique based on the first-order Taylor series expansion is used in the ETFE/SEA model to acquire the lower and upper bounds of the mid-frequency responses over each focal element. Three numerical examples are given to illustrate the feasibility and effectiveness of the proposed method.
Quantitative analysis of ground penetrating radar data in the Mu Us Sandland
NASA Astrophysics Data System (ADS)
Fu, Tianyang; Tan, Lihua; Wu, Yongqiu; Wen, Yanglei; Li, Dawei; Duan, Jinlong
2018-06-01
Ground penetrating radar (GPR), which can reveal the sedimentary structure and development process of dunes, is widely used to evaluate aeolian landforms. The interpretations for GPR profiles are mostly based on qualitative descriptions of geometric features of the radar reflections. This research quantitatively analyzed the waveform parameter characteristics of different radar units by extracting the amplitude and time interval parameters of GPR data in the Mu Us Sandland in China, and then identified and interpreted different sedimentary structures. The results showed that different types of radar units had specific waveform parameter characteristics. The main waveform parameter characteristics of sand dune radar facies and sandstone radar facies included low amplitudes and wide ranges of time intervals, ranging from 0 to 0.25 and 4 to 33 ns respectively, and the mean amplitudes changed gradually with time intervals. The amplitude distribution curves of various sand dune radar facies were similar as unimodal distributions. The radar surfaces showed high amplitudes with time intervals concentrated in high-value areas, ranging from 0.08 to 0.61 and 9 to 34 ns respectively, and the mean amplitudes changed drastically with time intervals. The amplitude and time interval values of lacustrine radar facies were between that of sand dune radar facies and radar surfaces, ranging from 0.08 to 0.29 and 11 to 30 ns respectively, and the mean amplitude and time interval curve was approximately trapezoidal. The quantitative extraction and analysis of GPR reflections could help distinguish various radar units and provide evidence for identifying sedimentary structure in aeolian landforms.
NASA Astrophysics Data System (ADS)
Iskandar, I.
2018-03-01
The exponential distribution is the most widely used distribution in reliability analysis. It is well suited to representing the lengths of life in many cases and is available in a simple statistical form. The characteristic of this distribution is a constant hazard rate, and it is the simplest member of the Weibull family of distributions. In this paper we introduce the basic notions that constitute an exponential competing risks model in reliability analysis using a Bayesian approach and present the corresponding analytic methods. The cases are limited to models with independent causes of failure. A non-informative prior distribution is used in our analysis. We describe the likelihood function, followed by the posterior function and the point and interval estimates of the rate, the hazard function, and the reliability. The net probability of failure if only one specific risk is present, the crude probability of failure due to a specific risk in the presence of other causes, and partial crude probabilities are also included.
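As a minimal illustration of the Bayesian machinery for a single exponential failure cause (not the paper's full competing-risks model), a non-informative prior of the Jeffreys type, pi(lambda) proportional to 1/lambda, combined with n complete failure times gives a Gamma(n, sum of times) posterior for the rate. The failure times and mission time below are invented.

```python
import numpy as np
from scipy import stats

# Hypothetical complete failure times (hours) for one failure cause.
t = np.array([120., 340., 95., 410., 230., 180.])
n, total_time = len(t), t.sum()

# With the non-informative prior pi(lambda) ~ 1/lambda, the posterior of the
# exponential rate is Gamma(shape=n, rate=total_time).
posterior = stats.gamma(a=n, scale=1.0 / total_time)

lam_hat = posterior.mean()                # point estimate of the failure rate
lo, hi = posterior.ppf([0.025, 0.975])    # 95% credible interval for the rate
mission = 100.0                           # hours
reliability = np.exp(-lam_hat * mission)  # R(t) = exp(-lambda * t)
print(lam_hat, (lo, hi), reliability)
```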
Multiplicity distributions of charged hadrons in νp and ν̄p charged current interactions
NASA Astrophysics Data System (ADS)
Jones, G. T.; Jones, R. W. L.; Kennedy, B. W.; Morrison, D. R. O.; Mobayyen, M. M.; Wainstein, S.; Aderholz, M.; Hantke, D.; Katz, U. F.; Kern, J.; Schmitz, N.; Wittek, W.; Borner, H. P.; Myatt, G.; Radojicic, D.; Burke, S.
1992-03-01
Using data on νp and ν̄p charged current interactions from a bubble chamber experiment with BEBC at CERN, the multiplicity distributions of charged hadrons are investigated. The analysis is based on ˜20000 events with incident ν and ˜10000 events with incident ν̄. The invariant mass W of the total hadronic system ranges from 3 GeV to ˜14 GeV. The experimental multiplicity distributions are fitted by the negative binomial function (in different intervals of W and in different intervals of the rapidity y), by the Levy function and by the lognormal function. All three parametrizations give acceptable values of χ². For fixed W, forward and backward multiplicities are found to be uncorrelated. The normalized moments of the charged multiplicity distributions are measured as a function of W. They show a violation of KNO scaling.
Guo, Guang-Hui; Wu, Feng-Chang; He, Hong-Ping; Feng, Cheng-Lian; Zhang, Rui-Qing; Li, Hui-Xian
2012-04-01
Probabilistic approaches, such as Monte Carlo sampling (MCS) and Latin hypercube sampling (LHS), and non-probabilistic approaches, such as interval analysis, fuzzy set theory and variance propagation, were used to characterize uncertainties associated with the risk assessment of ΣPAH8 in the surface water of Taihu Lake. The results from MCS and LHS were represented by probability distributions of the hazard quotient of ΣPAH8 in the surface waters of Taihu Lake. These distributions indicated that the confidence intervals of the hazard quotient at the 90% confidence level were 0.000 18-0.89 and 0.000 17-0.92, with means of 0.37 and 0.35, respectively. In addition, the probabilities that the hazard quotients from MCS and LHS exceed the threshold of 1 were 9.71% and 9.68%, respectively. The sensitivity analysis suggested that the toxicity data contributed the most to the resulting distribution of quotients. The hazard quotient of ΣPAH8 to aquatic organisms ranged from 0.000 17 to 0.99 using interval analysis. The confidence interval was (0.001 5, 0.016 3) at the 90% confidence level calculated using fuzzy set theory, and (0.000 16, 0.88) at the 90% confidence level based on variance propagation. These results indicated that the ecological risk of ΣPAH8 to aquatic organisms was low. Each method is based on a different theory and has its own set of advantages and limitations; therefore, the appropriate method should be selected on a case-by-case basis to quantify the effects of uncertainties on the ecological risk assessment. The approach based on probabilistic theory was selected as the most appropriate method to assess the risk of ΣPAH8 in the surface water of Taihu Lake, providing an important scientific foundation for the risk management and control of organic pollutants in water.
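A hedged sketch of the probabilistic (MCS-type) part of such an assessment: hazard quotients are simulated as the ratio of an exposure distribution to a toxicity threshold distribution, and the 90% interval and exceedance probability are read off the simulated quotients. The lognormal parameters below are illustrative, not the study's fitted values.

```python
import numpy as np

rng = np.random.default_rng(12)
n = 100_000
# Hypothetical distributions (not the study's fitted ones): exposure
# concentration of PAHs (ng/L) and a toxicity threshold (ng/L).
exposure = rng.lognormal(mean=np.log(50.0), sigma=1.0, size=n)
toxicity = rng.lognormal(mean=np.log(2000.0), sigma=0.8, size=n)

hq = exposure / toxicity
lo, hi = np.percentile(hq, [5, 95])          # 90% interval of the hazard quotient
print(f"90% interval of HQ: ({lo:.4f}, {hi:.2f}), mean = {hq.mean():.3f}")
print(f"P(HQ > 1) = {np.mean(hq > 1.0) * 100:.2f}%")
```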
Xu, Yonghong; Gao, Xiaohuan; Wang, Zhengxi
2014-04-01
Missing data represent a general problem in many scientific fields, especially in medical survival analysis. Interpolation is one of the important methods for dealing with censored data. However, most interpolation methods replace the censored data with exact data, which distorts the real distribution of the censored data and reduces the probability of the real data falling into the interpolation data. In order to solve this problem, we propose a nonparametric method of estimating the survival function of right-censored and interval-censored data and compare its performance to the SC (self-consistent) algorithm. Compared to the average interpolation and the nearest-neighbor interpolation methods, the proposed method replaces the right-censored data with interval-censored data, which greatly improves the probability of the real data falling into the imputation interval. It then uses empirical distribution theory to estimate the survival function of right-censored and interval-censored data. The results of numerical examples and a real breast cancer data set demonstrated that the proposed method had higher accuracy and better robustness for different proportions of censored data. This paper provides a good method to compare the performance of clinical treatments by estimating the survival data of the patients, and thus offers some help to medical survival data analysis.
Toward a quantitative account of pitch distribution in spontaneous narrative: Method and validation
Matteson, Samuel E.; Streit Olness, Gloria; Caplow, Nancy J.
2013-01-01
Pitch is well-known both to animate human discourse and to convey meaning in communication. The study of the statistical population distributions of pitch in discourse will undoubtedly benefit from methodological improvements. The current investigation examines a method that parameterizes pitch in discourse as musical pitch interval H measured in units of cents and that disaggregates the sequence of peak word-pitches using tools employed in time-series analysis and digital signal processing. The investigators test the proposed methodology by its application to distributions in pitch interval of the peak word-pitch (collectively called the discourse gamut) that occur in simulated and actual spontaneous emotive narratives obtained from 17 middle-aged African-American adults. The analysis, in rigorous tests, not only faithfully reproduced simulated distributions imbedded in realistic time series that drift and include pitch breaks, but the protocol also reveals that the empirical distributions exhibit a common hidden structure when normalized to a slowly varying mode (called the gamut root) of their respective probability density functions. Quantitative differences between narratives reveal the speakers' relative propensity for the use of pitch levels corresponding to elevated degrees of a discourse gamut (the “e-la”) superimposed upon a continuum that conforms systematically to an asymmetric Laplace distribution. PMID:23654400
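The cent scale used for the pitch interval H is the standard logarithmic measure, H = 1200·log2(f/f_ref). A small sketch follows, assuming peak word-pitches in Hz and using a running median as a stand-in for the slowly varying reference; the running-median choice is an assumption for illustration, not the paper's procedure.

```python
import numpy as np

def cents(f, f_ref):
    """Musical pitch interval in cents: H = 1200 * log2(f / f_ref)."""
    return 1200.0 * np.log2(np.asarray(f, dtype=float) / f_ref)

# Hypothetical peak word-pitches (Hz) from a narrative.
peaks = np.array([180., 195., 170., 220., 210., 185., 240.])

# A crude slowly varying reference: running median over a 5-sample window.
ref = np.array([np.median(peaks[max(0, i - 2): i + 3]) for i in range(len(peaks))])
H = cents(peaks, ref)
print(np.round(H, 1))
```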
Analysis of aggregated tick returns: Evidence for anomalous diffusion
NASA Astrophysics Data System (ADS)
Weber, Philipp
2007-01-01
In order to investigate the origin of large price fluctuations, we analyze stock price changes of ten frequently traded NASDAQ stocks in the year 2002. Though the influence of the trading frequency on the aggregate return in a certain time interval is important, it cannot alone explain the heavy-tailed distribution of stock price changes. For this reason, we analyze intervals with a fixed number of trades in order to eliminate the influence of the trading frequency and investigate the relevance of other factors for the aggregate return. We show that in tick time the price follows a discrete diffusion process with a variable step width while the difference between the number of steps in positive and negative direction in an interval is Gaussian distributed. The step width is given by the return due to a single trade and is long-term correlated in tick time. Hence, its mean value can well characterize an interval of many trades and turns out to be an important determinant for large aggregate returns. We also present a statistical model reproducing the cumulative distribution of aggregate returns. For an accurate agreement with the empirical distribution, we also take into account asymmetries of the step widths in different directions together with cross correlations between these asymmetries and the mean step width as well as the signs of the steps.
How Statistics "Excel" Online.
ERIC Educational Resources Information Center
Chao, Faith; Davis, James
2000-01-01
Discusses the use of Microsoft Excel software and provides examples of its use in an online statistics course at Golden Gate University in the areas of randomness and probability, sampling distributions, confidence intervals, and regression analysis. (LRW)
Non-Gaussian distributions of melodic intervals in music: The Lévy-stable approximation
NASA Astrophysics Data System (ADS)
Niklasson, Gunnar A.; Niklasson, Maria H.
2015-11-01
The analysis of structural patterns in music is of interest in order to increase our fundamental understanding of music, as well as for devising algorithms for computer-generated music, so-called algorithmic composition. Musical melodies can be analyzed in terms of a “music walk” between the pitches of successive tones in a notescript, in analogy with the “random walk” model commonly used in physics. We find that the distribution of melodic intervals between tones can be approximated with a Lévy-stable distribution. Since music also exhibits self-affine scaling, we propose that the “music walk” should be modelled as a Lévy motion. We find that the Lévy motion model captures basic structural patterns in classical as well as in folk music.
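A hedged sketch of fitting a Lévy-stable distribution to melodic intervals with scipy.stats.levy_stable; the data below are synthetic stable draws standing in for intervals extracted from a notescript, and the maximum-likelihood fit can be slow (reduce the sample size if needed).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical melodic intervals (in semitones) between successive tones;
# real data would come from the "music walk" steps of a notescript.
intervals = stats.levy_stable.rvs(alpha=1.7, beta=0.0, scale=1.5, size=300,
                                  random_state=rng)

# Fit a Levy-stable distribution; alpha = 2 corresponds to the Gaussian limit,
# so alpha < 2 indicates heavier-than-Gaussian tails.
alpha, beta, loc, scale = stats.levy_stable.fit(intervals)
print(f"alpha={alpha:.2f}, beta={beta:.2f}, loc={loc:.2f}, scale={scale:.2f}")
```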
Yoganandan, Narayan; Arun, Mike W J; Pintar, Frank A; Szabo, Aniko
2014-01-01
Derive optimum injury probability curves to describe human tolerance of the lower leg using parametric survival analysis. The study reexamined lower leg postmortem human subjects (PMHS) data from a large group of specimens. Briefly, axial loading experiments were conducted by impacting the plantar surface of the foot. Both injury and noninjury tests were included in the testing process. They were identified by pre- and posttest radiographic images and detailed dissection following the impact test. Fractures included injuries to the calcaneus and distal tibia-fibula complex (including pylon), representing severities at the Abbreviated Injury Scale (AIS) level 2+. For the statistical analysis, peak force was chosen as the main explanatory variable and age was chosen as the covariable. Censoring statuses depended on experimental outcomes. Parameters from the parametric survival analysis were estimated using the maximum likelihood approach and the dfbetas statistic was used to identify overly influential samples. The best fit from the Weibull, log-normal, and log-logistic distributions was based on the Akaike information criterion. Plus and minus 95% confidence intervals were obtained for the optimum injury probability distribution. The relative sizes of the interval were determined at predetermined risk levels. Quality indices were described at each of the selected probability levels. The mean age, stature, and weight were 58.2±15.1 years, 1.74±0.08 m, and 74.9±13.8 kg, respectively. Excluding all overly influential tests resulted in the tightest confidence intervals. The Weibull distribution was the optimum function compared to the other 2 distributions. A majority of quality indices were in the good category for this optimum distribution when results were extracted for the 25-, 45-, and 65-year-old age groups at 5, 25, and 50% risk levels for lower leg fracture. For 25, 45, and 65 years, peak forces were 8.1, 6.5, and 5.1 kN at 5% risk; 9.6, 7.7, and 6.1 kN at 25% risk; and 10.4, 8.3, and 6.6 kN at 50% risk, respectively. This study derived axial loading-induced injury risk curves based on survival analysis using peak force and specimen age; adopting different censoring schemes; considering overly influential samples in the analysis; and assessing the quality of the distribution at discrete probability levels. Because procedures used in the present survival analysis are accepted by international automotive communities, current optimum human injury probability distributions can be used at all risk levels with more confidence in future crashworthiness applications for automotive and other disciplines.
An Elementary Algorithm for Autonomous Air Terminal Merging and Interval Management
NASA Technical Reports Server (NTRS)
White, Allan L.
2017-01-01
A central element of air traffic management is the safe merging and spacing of aircraft during the terminal-area flight phase. This paper derives and examines an algorithm for the merging and interval management problem for Standard Terminal Arrival Routes. It describes a factor analysis of performance based on the distribution of arrivals, the operating period of the terminal, and the topology of the arrival routes; it then presents results from a performance analysis and from a safety analysis for a realistic topology based on typical routes for a runway at Phoenix International Airport. The heart of the safety analysis is a statistical derivation of how to conduct a safety analysis for a local simulation when the safety requirement is given for the entire airspace.
The temporal organization of behavior on periodic food schedules.
Reid, A K; Bacha, G; Morán, C
1993-01-01
Various theories of temporal control and schedule induction imply that periodic schedules temporally modulate an organism's motivational states within interreinforcement intervals. This speculation has been fueled by frequently observed multimodal activity distributions created by averaging across interreinforcement intervals. We tested this hypothesis by manipulating the cost associated with schedule-induced activities and the availability of other activities to determine the degree to which (a) the temporal distributions of activities within the interreinforcement interval are fixed or can be temporally displaced, (b) rats can reallocate activities across different interreinforcement intervals, and (c) noninduced activities can substitute for schedule-induced activities. Obtained multimodal activity distributions created by averaging across interreinforcement intervals were not representative of the transitions occurring within individual intervals, so the averaged multimodal distributions should not be assumed to represent changes in the subject's motivational states within the interval. Rather, the multimodal distributions often result from averaging across interreinforcement intervals in which only a single activity occurs. A direct influence of the periodic schedule on the motivational states implies that drinking and running should occur at different periods within the interval, but in three experiments the starting times of drinking and running within interreinforcement intervals were equal. Thus, the sequential pattern of drinking and running on periodic schedules does not result from temporal modulation of motivational states within interreinforcement intervals. PMID:8433061
Pocket Handbook on Reliability
1975-09-01
Topics include the exponential and Weibull distributions, estimating reliability, confidence intervals, reliability growth, OC curves, and Bayesian analysis. ... an introduction for those not familiar with reliability and a good refresher for those who are currently working in the area. ... includes one or both of the following objectives: a) prediction of the current system reliability, b) projection of the system reliability for some future ...
Kwon, Deukwoo; Reis, Isildinha M
2015-08-12
When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions. The corresponding average relative error (ARE) approaches zero as sample size increases. In data generated from the normal distribution, our ABC performs well. However, the Wan et al. method is best for estimating standard deviation under normal distribution. In the estimation of the mean, our ABC method is best regardless of assumed distribution. ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can be applied using other reported summary statistics such as the posterior mean and 95 % credible interval when Bayesian analysis has been employed.
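A minimal ABC rejection sketch in the spirit of the approach described, assuming a normal data-generating model and reported quartiles and sample size; the tolerance, priors, and reported summaries below are illustrative assumptions, not the authors' algorithm or settings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Reported summary statistics from a hypothetical study.
n, q1_obs, med_obs, q3_obs = 80, 4.1, 5.0, 6.2

def summaries(x):
    return np.percentile(x, [25, 50, 75])

# ABC rejection: draw (mu, sigma) from broad priors, simulate a sample of
# size n, keep draws whose simulated summaries are close to the observed ones.
accepted = []
for _ in range(20_000):
    mu = rng.uniform(0.0, 10.0)
    sigma = rng.uniform(0.1, 5.0)
    sim = rng.normal(mu, sigma, size=n)
    dist = np.linalg.norm(summaries(sim) - np.array([q1_obs, med_obs, q3_obs]))
    if dist < 0.3:                       # loosen if too few draws are accepted
        accepted.append((mu, sigma))

accepted = np.array(accepted)
print("accepted draws:", len(accepted))
print("posterior mean of (mu, sigma):", accepted.mean(axis=0))
```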
Cooley, Richard L.
1993-01-01
Calibration data (observed values corresponding to model-computed values of dependent variables) are incorporated into a general method of computing exact Scheffé-type confidence intervals analogous to the confidence intervals developed in part 1 (Cooley, this issue) for a function of parameters derived from a groundwater flow model. Parameter uncertainty is specified by a distribution of parameters conditioned on the calibration data. This distribution was obtained as a posterior distribution by applying Bayes' theorem to the hydrogeologically derived prior distribution of parameters from part 1 and a distribution of differences between the calibration data and corresponding model-computed dependent variables. Tests show that the new confidence intervals can be much smaller than the intervals of part 1 because the prior parameter variance-covariance structure is altered so that combinations of parameters that give poor model fit to the data are unlikely. The confidence intervals of part 1 and the new confidence intervals can be effectively employed in a sequential method of model construction whereby new information is used to reduce confidence interval widths at each stage.
Yan, Cunling; Yang, Jia; Wei, Lianhua; Hu, Jian; Song, Jiaqi; Wang, Xiaoqin; Han, Ruilin; Huang, Ying; Zhang, Wei; Soh, Andrew; Beshiri, Agim; Fan, Zhuping; Zheng, Yijie; Chen, Wei
2018-02-01
Alpha-fetoprotein (AFP) has been widely used in clinical practice for decades. However, a large-scale survey of the serum reference interval for ARCHITECT AFP is still absent in the Chinese population. This study aimed to measure serum AFP levels in healthy Chinese Han subjects, as a sub-analysis of an ongoing prospective, cross-sectional, multi-center study (ClinicalTrials.gov Identifier: NCT03047603). This analysis included a total of 530 participants (41.43±12.14 years of age on average, 48.49% males), enrolled from 5 regional centers. Serum AFP level was measured by the ARCHITECT immunoassay. Statistical analysis was performed using SAS 9.4 and R software. The AFP distribution did not show a significant correlation with age or sex. The overall median and interquartile range of AFP was 2.87 (2.09, 3.83) ng/mL. The AFP level did not show a trend of increasing with age. The new reference interval was 2.0-7.07 ng/mL (LOQ to 97.5th percentile). The reference interval for ARCHITECT AFP is updated with data from an adequate number of healthy Han adults. This new reference interval is more practical and applicable in Chinese adults. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
Monitoring molecular interactions using photon arrival-time interval distribution analysis
Laurence, Ted A [Livermore, CA]; Weiss, Shimon [Los Angeles, CA]
2009-10-06
A method for analyzing/monitoring the properties of species that are labeled with fluorophores. A detector is used to detect photons emitted from species that are labeled with one or more fluorophores and located in a confocal detection volume. The arrival time of each of the photons is determined. The interval of time between various photon pairs is then determined to provide photon pair intervals. The number of photons that have arrival times within the photon pair intervals is also determined. The photon pair intervals are then used in combination with the corresponding counts of intervening photons to analyze properties and interactions of the molecules including brightness, concentration, coincidence and transit time. The method can be used for analyzing single photon streams and multiple photon streams.
Two Populations of Sunspots: Differential Rotation
NASA Astrophysics Data System (ADS)
Nagovitsyn, Yu. A.; Pevtsov, A. A.; Osipova, A. A.
2018-03-01
To investigate the differential rotation of sunspot groups using the Greenwich data, we propose an approach based on a statistical analysis of the histograms of particular longitudinal velocities in different latitude intervals. The general statistical velocity distributions for all such intervals are shown to be described by two rather than one normal distribution, so that two fundamental rotation modes exist simultaneously: fast and slow. The degree of differential rotation is the same for the two modes: the coefficient of the sin² term in Faye's law is 2.87-2.88 deg/day, while the equatorial rotation rates differ significantly, by 0.27 deg/day. On the other hand, an analysis of the longitudinal velocities for the previously revealed two differing populations of sunspot groups has shown that small short-lived groups (SSGs) are associated with the fast rotation mode, while large long-lived groups (LLGs) are associated with both fast and slow modes. The results obtained not only suggest a real physical difference between the two populations of sunspots but also give new empirical data for the development of a dynamo theory, in particular, for the theory of a spatially distributed dynamo.
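A hedged sketch of the two-component decomposition for one latitude interval, using scikit-learn's GaussianMixture on synthetic longitudinal velocities; the means, widths, and mixing weights below are invented for illustration only.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Synthetic longitudinal rotation velocities (deg/day) in one latitude bin:
# a "fast" and a "slow" mode separated by roughly 0.27 deg/day.
fast = rng.normal(14.55, 0.20, size=600)
slow = rng.normal(14.28, 0.20, size=400)
v = np.concatenate([fast, slow]).reshape(-1, 1)

# Fit a two-component Gaussian mixture and report the recovered modes.
gm = GaussianMixture(n_components=2, random_state=0).fit(v)
order = np.argsort(gm.means_.ravel())
print("component means (deg/day):", gm.means_.ravel()[order])
print("component weights:        ", gm.weights_[order])
```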
Return volatility interval analysis of stock indexes during a financial crash
NASA Astrophysics Data System (ADS)
Li, Wei-Shen; Liaw, Sy-Sang
2015-09-01
We investigate the interval between return volatilities above a certain threshold q for 10 countries data sets during the 2008/2009 global financial crisis, and divide these data into several stages according to stock price tendencies: plunging stage (stage 1), fluctuating or rebounding stage (stage 2) and soaring stage (stage 3). For different thresholds q, the cumulative distribution function always satisfies a power law tail distribution. We find the absolute value of the power-law exponent is lowest in stage 1 for various types of markets, and increases monotonically from stage 1 to stage 3 in emerging markets. The fractal dimension properties of the return volatility interval series provide some surprising results. We find that developed markets have strong persistence and transform to weaker correlation in the plunging and soaring stages. In contrast, emerging markets fail to exhibit such a transformation, but rather show a constant-correlation behavior with the recurrence of extreme return volatility in corresponding stages during a crash. We believe this long-memory property found in recurrence-interval series, especially for developed markets, plays an important role in volatility clustering.
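A minimal sketch of the interval construction and tail estimation: intervals between volatility exceedances of a threshold q are collected, and a tail exponent is read from a log-log fit to the empirical CCDF. The synthetic series below is i.i.d., so its intervals are roughly exponential rather than power-law; with real volatility data the log-log tail is closer to a straight line.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical return volatilities; the intervals are the waiting times
# between exceedances of a threshold q.
vol = np.abs(rng.standard_t(df=3, size=20_000))
q = np.quantile(vol, 0.95)
exceed = np.flatnonzero(vol > q)
tau = np.diff(exceed)                      # return intervals (in time steps)

# Empirical CCDF P(tau >= t) and a log-log regression over the tail.
t_sorted = np.sort(tau)
ccdf = 1.0 - np.arange(1, len(t_sorted) + 1) / len(t_sorted)
mask = (t_sorted > np.median(tau)) & (ccdf > 0)
slope, intercept = np.polyfit(np.log(t_sorted[mask]), np.log(ccdf[mask]), 1)
print("estimated tail exponent:", -slope)
```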
Daniels, Carter W; Sanabria, Federico
2017-03-01
The distribution of latencies and interresponse times (IRTs) of rats was compared between two fixed-interval (FI) schedules of food reinforcement (FI 30 s and FI 90 s), and between two levels of food deprivation. Computational modeling revealed that latencies and IRTs were well described by mixture probability distributions embodying two-state Markov chains. Analysis of these models revealed that only a subset of latencies is sensitive to the periodicity of reinforcement, and prefeeding only reduces the size of this subset. The distribution of IRTs suggests that behavior in FI schedules is organized in bouts that lengthen and ramp up in frequency with proximity to reinforcement. Prefeeding slowed down the lengthening of bouts and increased the time between bouts. When concatenated, latency and IRT models adequately reproduced sigmoidal FI response functions. These findings suggest that behavior in FI schedules fluctuates in and out of schedule control; an account of such fluctuation suggests that timing and motivation are dissociable components of FI performance. These mixture-distribution models also provide novel insights on the motivational, associative, and timing processes expressed in FI performance. These processes may be obscured, however, when performance in timing tasks is analyzed in terms of mean response rates.
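A hedged sketch of fitting a two-component exponential mixture to interresponse times with a simple EM loop (a "within-bout" and a "between-bout" component); this is not the paper's full two-state Markov-chain model, and the IRTs below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic IRTs (seconds): short within-bout IRTs mixed with long between-bout IRTs.
irt = np.concatenate([rng.exponential(0.5, 800), rng.exponential(8.0, 200)])

# EM for f(t) = p*w*exp(-w*t) + (1-p)*b*exp(-b*t).
p, w, b = 0.5, 1.0, 0.1   # initial guesses: mixing prob, within-bout rate, between-bout rate
for _ in range(200):
    d_w = p * w * np.exp(-w * irt)
    d_b = (1 - p) * b * np.exp(-b * irt)
    r = d_w / (d_w + d_b)          # responsibility of the within-bout component
    p = r.mean()
    w = r.sum() / (r * irt).sum()
    b = (1 - r).sum() / ((1 - r) * irt).sum()

print(f"p={p:.2f}, within-bout rate={w:.2f}/s, between-bout rate={b:.3f}/s")
```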
Uncertainty analysis for absorbed dose from a brain receptor imaging agent
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aydogan, B.; Miller, L.F.; Sparks, R.B.
Absorbed dose estimates are known to contain uncertainties. A recent literature search indicates that, prior to this study, no rigorous investigation of the uncertainty associated with absorbed dose had been undertaken. A method of uncertainty analysis for absorbed dose calculations has been developed and implemented for the brain receptor imaging agent ¹²³I-IPT. The two major sources of uncertainty considered were the uncertainty associated with the determination of residence time and that associated with the determination of the S values. There are many sources of uncertainty in the determination of the S values, but only the inter-patient organ mass variation was considered in this work. The absorbed dose uncertainties were determined for lung, liver, heart and brain. Ninety-five percent confidence intervals of the organ absorbed dose distributions for each patient and for a seven-patient population group were determined by the Latin Hypercube Sampling method. For an individual patient, the upper bound of the 95% confidence interval of the absorbed dose was found to be about 2.5 times larger than the estimated mean absorbed dose. For the seven-patient population, the upper bound of the 95% confidence interval of the absorbed dose distribution was around 45% more than the estimated population mean. For example, the 95% confidence interval of the population liver dose distribution was found to be between 1.49E+07 Gy/MBq and 4.65E+07 Gy/MBq with a mean of 2.52E+07 Gy/MBq. This study concluded that patients in a population receiving ¹²³I-IPT could receive absorbed doses as much as twice as large as the standard estimated absorbed dose due to these uncertainties.
The 1993 Mississippi river flood: A one hundred or a one thousand year event?
Malamud, B.D.; Turcotte, D.L.; Barton, C.C.
1996-01-01
Power-law (fractal) extreme-value statistics are applicable to many natural phenomena under a wide variety of circumstances. Data from a hydrologic station in Keokuk, Iowa, show that the great flood of the Mississippi River in 1993 has a recurrence interval on the order of 100 years using power-law statistics applied to the partial-duration flood series, and on the order of 1,000 years using a log-Pearson type 3 (LP3) distribution applied to the annual series. The LP3 analysis is the federally adopted probability distribution for flood-frequency estimation of extreme events. We suggest that power-law statistics are preferable to LP3 analysis. As a further test of the power-law approach we consider paleoflood data from the Colorado River. We compare power-law and LP3 extrapolations of historical data with these paleofloods. The results are remarkably similar to those obtained for the Mississippi River: recurrence intervals from power-law statistics applied to Lees Ferry discharge data are generally consistent with inferred 100- and 1,000-year paleofloods, whereas LP3 analysis gives recurrence intervals that are orders of magnitude longer. For both the Keokuk and Lees Ferry gauges, the use of an annual series introduces an artificial curvature in log-log space that leads to an underestimate of severe floods. Power-law statistics predict much shorter recurrence intervals than the federally adopted LP3 statistics. We suggest that if power-law behavior is applicable, then the likelihood of severe floods is much higher. More conservative dam designs and land-use restrictions may be required.
ERIC Educational Resources Information Center
Weber, Deborah A.
Greater understanding and use of confidence intervals is central to changes in statistical practice (G. Cumming and S. Finch, 2001). Reliability coefficients and confidence intervals for reliability coefficients can be computed using a variety of methods. Estimating confidence intervals includes both central and noncentral distribution approaches.…
Yoganandan, Narayan; Arun, Mike W.J.; Pintar, Frank A.; Szabo, Aniko
2015-01-01
Objective Derive optimum injury probability curves to describe human tolerance of the lower leg using parametric survival analysis. Methods The study re-examined lower leg PMHS data from a large group of specimens. Briefly, axial loading experiments were conducted by impacting the plantar surface of the foot. Both injury and non-injury tests were included in the testing process. They were identified by pre- and posttest radiographic images and detailed dissection following the impact test. Fractures included injuries to the calcaneus and distal tibia-fibula complex (including pylon), representing severities at the Abbreviated Injury Scale (AIS) level 2+. For the statistical analysis, peak force was chosen as the main explanatory variable and age was chosen as the co-variable. Censoring statuses depended on experimental outcomes. Parameters from the parametric survival analysis were estimated using the maximum likelihood approach and the dfbetas statistic was used to identify overly influential samples. The best fit from the Weibull, log-normal and log-logistic distributions was based on the Akaike Information Criterion. Plus and minus 95% confidence intervals were obtained for the optimum injury probability distribution. The relative sizes of the interval were determined at predetermined risk levels. Quality indices were described at each of the selected probability levels. Results The mean age, stature and weight were 58.2 ± 15.1 years, 1.74 ± 0.08 m and 74.9 ± 13.8 kg, respectively. Excluding all overly influential tests resulted in the tightest confidence intervals. The Weibull distribution was the optimum function compared to the other two distributions. A majority of quality indices were in the good category for this optimum distribution when results were extracted for the 25-, 45- and 65-year-old age groups at 5, 25 and 50% risk levels for lower leg fracture. For 25, 45 and 65 years, peak forces were 8.1, 6.5, and 5.1 kN at 5% risk; 9.6, 7.7, and 6.1 kN at 25% risk; and 10.4, 8.3, and 6.6 kN at 50% risk, respectively. Conclusions This study derived axial loading-induced injury risk curves based on survival analysis using peak force and specimen age; adopting different censoring schemes; considering overly influential samples in the analysis; and assessing the quality of the distribution at discrete probability levels. Because procedures used in the present survival analysis are accepted by international automotive communities, current optimum human injury probability distributions can be used at all risk levels with more confidence in future crashworthiness applications for automotive and other disciplines. PMID:25307381
Stochastic modeling of a serial killer
Simkin, M.V.; Roychowdhury, V.P.
2014-01-01
We analyze the time pattern of the activity of a serial killer, who during twelve years had murdered 53 people. The plot of the cumulative number of murders as a function of time is of “Devil’s staircase” type. The distribution of the intervals between murders (step length) follows a power law with the exponent of 1.4. We propose a model according to which the serial killer commits murders when neuronal excitation in his brain exceeds certain threshold. We model this neural activity as a branching process, which in turn is approximated by a random walk. As the distribution of the random walk return times is a power law with the exponent 1.5, the distribution of the inter-murder intervals is thus explained. We illustrate analytical results by numerical simulation. Time pattern activity data from two other serial killers further substantiate our analysis. PMID:24721476
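The power-law exponent of about 1.5 for random-walk return times invoked in the model can be checked with a short simulation; the walk-length cap and sample sizes below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

def first_return_time(max_steps=10_000):
    """Steps until a +/-1 symmetric random walk first returns to zero
    (None if it does not return within max_steps)."""
    steps = rng.choice((-1, 1), size=max_steps)
    path = np.cumsum(steps)
    hits = np.flatnonzero(path == 0)
    return int(hits[0]) + 1 if hits.size else None

times = np.array([t for t in (first_return_time() for _ in range(2000))
                  if t is not None])

# Tail exponent from a log-log fit of the CCDF: P(T > t) ~ t^(-1/2),
# so the density exponent is ~1.5.
ts = np.sort(times)
ccdf = 1.0 - np.arange(1, len(ts) + 1) / len(ts)
mask = (ts > 10) & (ccdf > 0)
slope, _ = np.polyfit(np.log(ts[mask]), np.log(ccdf[mask]), 1)
print("estimated density tail exponent ~", 1.0 - slope)   # expect roughly 1.5
```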
Stochastic modeling of a serial killer.
Simkin, M V; Roychowdhury, V P
2014-08-21
We analyze the time pattern of the activity of a serial killer, who during 12 years had murdered 53 people. The plot of the cumulative number of murders as a function of time is of "Devil's staircase" type. The distribution of the intervals between murders (step length) follows a power law with the exponent of 1.4. We propose a model according to which the serial killer commits murders when neuronal excitation in his brain exceeds certain threshold. We model this neural activity as a branching process, which in turn is approximated by a random walk. As the distribution of the random walk return times is a power law with the exponent 1.5, the distribution of the inter-murder intervals is thus explained. We illustrate analytical results by numerical simulation. Time pattern activity data from two other serial killers further substantiate our analysis. Copyright © 2014 Elsevier Ltd. All rights reserved.
Graphic analysis and multifractal on percolation-based return interval series
NASA Astrophysics Data System (ADS)
Pei, A. Q.; Wang, J.
2015-05-01
A financial time series model is developed and investigated using the oriented percolation system (one of the statistical physics systems). The nonlinear and statistical behaviors of the return interval time series are studied for the proposed model and for the real stock market by applying the visibility graph (VG) and multifractal detrended fluctuation analysis (MF-DFA). We investigate the fluctuation behaviors of return intervals of the model for different parameter settings, and also comparatively study these fluctuation patterns with those of the real financial data for different threshold values. The empirical research of this work exhibits the multifractal features of the corresponding financial time series. Further, the VGs derived from both the simulated data and the real data show the behaviors of small-world, hierarchy, high clustering and power-law tails in the degree distributions.
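A minimal natural visibility graph construction applied to a synthetic return-interval series, returning the degree sequence whose distribution is examined; the standard convexity criterion of Lacasa et al. is used here, which is an assumption about the VG variant intended.

```python
import numpy as np

rng = np.random.default_rng(6)
# Synthetic return-interval series (e.g., waiting times between threshold exceedances).
y = rng.pareto(2.5, size=200) + 1.0
t = np.arange(len(y))

def visibility_degrees(t, y):
    """Natural visibility graph: i and j are linked if no intermediate point
    rises above the straight line joining (t_i, y_i) and (t_j, y_j)."""
    n = len(y)
    degree = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            tc = t[i + 1:j]
            yc = y[i + 1:j]
            line = y[j] + (y[i] - y[j]) * (t[j] - tc) / (t[j] - t[i])
            if np.all(yc < line):
                degree[i] += 1
                degree[j] += 1
    return degree

deg = visibility_degrees(t, y)
print("max degree:", deg.max(), " mean degree:", deg.mean())
```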
Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi
2016-02-01
Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution using as data-points either the lower, mid or upper bounds of sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fittings to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull distributions) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) in relation to original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations that ranged from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates. We recommend the use of cumulative probability to fit parametric probability distributions to propagule retention time, specifically using maximum likelihood for parameter estimation. Furthermore, the experimental design for an optimal characterization of unimodal propagule retention time should contemplate at least 500 recovered propagules and sampling time-intervals not larger than the time peak of propagule retrieval, except in the tail of the distribution where broader sampling time-intervals may also produce accurate fits.
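A hedged sketch of the best-performing variant described, fitting a parametric CDF to the observed cumulative proportions at the sampling-interval bounds; here a lognormal is fitted by non-linear least squares with scipy, on invented sampling times and recovered-propagule counts.

```python
import numpy as np
from scipy import stats, optimize

# Hypothetical sampling-interval upper bounds (hours) and propagules recovered
# in each interval (interval-censored retention times).
t_upper = np.array([1, 2, 4, 8, 12, 24, 48], dtype=float)
counts = np.array([30, 85, 140, 120, 60, 45, 20], dtype=float)

# Observed cumulative proportion recovered by the end of each interval.
cum_obs = np.cumsum(counts) / counts.sum()

def lognorm_cdf(t, mu, sigma):
    return stats.lognorm.cdf(t, s=sigma, scale=np.exp(mu))

# Non-linear least squares fit of the lognormal CDF to the cumulative data.
(mu_hat, sigma_hat), _ = optimize.curve_fit(
    lognorm_cdf, t_upper, cum_obs, p0=(np.log(6.0), 1.0),
    bounds=([-np.inf, 1e-6], [np.inf, 10.0]))
print("fitted lognormal: mu =", round(mu_hat, 3), " sigma =", round(sigma_hat, 3))
```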
An approach to solve group-decision-making problems with ordinal interval numbers.
Fan, Zhi-Ping; Liu, Yang
2010-10-01
The ordinal interval number is a form of uncertain preference information in group decision making (GDM), while it is seldom discussed in the existing research. This paper investigates how the ranking order of alternatives is determined based on preference information of ordinal interval numbers in GDM problems. When ranking a large quantity of ordinal interval numbers, the efficiency and accuracy of the ranking process are critical. A new approach is proposed to rank alternatives using ordinal interval numbers when every ranking ordinal in an ordinal interval number is thought to be uniformly and independently distributed in its interval. First, we give the definition of possibility degree on comparing two ordinal interval numbers and the related theory analysis. Then, to rank alternatives, by comparing multiple ordinal interval numbers, a collective expectation possibility degree matrix on pairwise comparisons of alternatives is built, and an optimization model based on this matrix is constructed. Furthermore, an algorithm is also presented to rank alternatives by solving the model. Finally, two examples are used to illustrate the use of the proposed approach.
Latin Hypercube Sampling (LHS) UNIX Library/Standalone
DOE Office of Scientific and Technical Information (OSTI.GOV)
2004-05-13
The LHS UNIX Library/Standalone software provides the capability to draw random samples from over 30 distribution types. It performs the sampling by a stratified sampling method called Latin Hypercube Sampling (LHS). Multiple distributions can be sampled simultaneously, with user-specified correlations amongst the input distributions; LHS UNIX Library/Standalone thus provides a way to generate multi-variate samples. The LHS samples can be generated either through a callable library (e.g., from within the DAKOTA software framework) or as a standalone capability. LHS UNIX Library/Standalone uses the Latin Hypercube Sampling method (LHS) to generate samples. LHS is a constrained Monte Carlo sampling scheme. In LHS, the range of each variable is divided into non-overlapping intervals on the basis of equal probability. A sample is selected at random with respect to the probability density in each interval. If multiple variables are sampled simultaneously, then the values obtained for each are paired in a random manner with the n values of the other variables. In some cases, the pairing is restricted to obtain specified correlations amongst the input variables. Many simulation codes have input parameters that are uncertain and can be specified by a distribution. To perform uncertainty analysis and sensitivity analysis, random values are drawn from the input parameter distributions, and the simulation is run with these values to obtain output values. If this is done repeatedly, with many input samples drawn, one can build up a distribution of the output as well as examine correlations between input and output variables.
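A minimal Latin hypercube sketch matching this description (equal-probability strata, one random draw per stratum mapped through the inverse CDF, random pairing across variables); it is not the LHS UNIX Library code itself.

```python
import numpy as np
from scipy import stats

def latin_hypercube(n_samples, dists, rng):
    """dists: list of scipy.stats frozen distributions, one per input variable.
    Each variable's probability range is split into n_samples equal strata;
    one uniform draw is taken per stratum and mapped through the inverse CDF;
    the strata are then shuffled to pair variables randomly."""
    samples = np.empty((n_samples, len(dists)))
    for j, dist in enumerate(dists):
        strata = (np.arange(n_samples) + rng.random(n_samples)) / n_samples
        rng.shuffle(strata)                  # random pairing across variables
        samples[:, j] = dist.ppf(strata)
    return samples

rng = np.random.default_rng(7)
X = latin_hypercube(100, [stats.norm(0, 1), stats.lognorm(s=0.5)], rng)
print(X.mean(axis=0), X.std(axis=0))
```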
NASA Technical Reports Server (NTRS)
Rutledge, Charles K.
1988-01-01
The validity of applying chi-square based confidence intervals to far-field acoustic flyover spectral estimates was investigated. Simulated data, using a Kendall series, and experimental acoustic data from the NASA/McDonnell Douglas 500E acoustics test were analyzed. Statistical significance tests to determine the equality of distributions of the simulated and experimental data relative to theoretical chi-square distributions were performed. Bias and uncertainty errors associated with the spectral estimates were easily identified from the data sets. A model relating the uncertainty and bias errors to the estimates resulted, which aided in determining the appropriateness of the chi-square distribution based confidence intervals. Such confidence intervals were appropriate for nontonally associated frequencies of the experimental data but were inappropriate for tonally associated estimate distributions. The inappropriateness at the tonally associated frequencies was indicated by the presence of bias error and nonconformity of the distributions to the theoretical chi-square distribution. A technique for determining appropriate confidence intervals at the tonally associated frequencies was suggested.
Levine, M W
1991-01-01
Simulated neural impulse trains were generated by a digital realization of the integrate-and-fire model. The variability in these impulse trains had as its origin a random noise of specified distribution. Three different distributions were used: the normal (Gaussian) distribution (no skew, normokurtic), a first-order gamma distribution (positive skew, leptokurtic), and a uniform distribution (no skew, platykurtic). Despite these differences in the distribution of the variability, the distributions of the intervals between impulses were nearly indistinguishable. These inter-impulse distributions were better fit with a hyperbolic gamma distribution than a hyperbolic normal distribution, although one might expect a better approximation for normally distributed inverse intervals. Consideration of why the inter-impulse distribution is independent of the distribution of the causative noise suggests two putative interval distributions that do not depend on the assumed noise distribution: the log normal distribution, which is predicated on the assumption that long intervals occur with the joint probability of small input values, and the random walk equation, which is the diffusion equation applied to a random walk model of the impulse generating process. Either of these equations provides a more satisfactory fit to the simulated impulse trains than the hyperbolic normal or hyperbolic gamma distributions. These equations also provide better fits to impulse trains derived from the maintained discharges of ganglion cells in the retinae of cats or goldfish. It is noted that both equations are free from the constraint that the coefficient of variation (CV) have a maximum of unity.(ABSTRACT TRUNCATED AT 250 WORDS)
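A hedged sketch of the kind of simulation described: a perfect (non-leaky) integrate-and-fire generator driven by a constant drift plus zero-mean noise drawn from normal, gamma, or uniform distributions with matched variance. The parameters are illustrative, not the author's.

```python
import numpy as np

rng = np.random.default_rng(8)

def inter_impulse_intervals(noise_sampler, n_spikes=2000, drift=1.0,
                            threshold=20.0, dt=1.0):
    """Perfect integrate-and-fire: the 'membrane' accumulates drift*dt plus
    zero-mean noise each step; a spike is emitted and the integrator reset
    when the threshold is crossed."""
    intervals, v, steps = [], 0.0, 0
    while len(intervals) < n_spikes:
        v += drift * dt + noise_sampler()
        steps += 1
        if v >= threshold:
            intervals.append(steps * dt)
            v, steps = 0.0, 0
    return np.array(intervals)

# Three noise distributions with matched variance: Gaussian (normokurtic),
# shape-1 gamma re-centred to zero mean (positive skew), and uniform.
sd = 2.0
samplers = {
    "normal":  lambda: rng.normal(0.0, sd),
    "gamma":   lambda: rng.gamma(1.0, sd) - sd,
    "uniform": lambda: rng.uniform(-np.sqrt(3) * sd, np.sqrt(3) * sd),
}
for name, sampler in samplers.items():
    iii = inter_impulse_intervals(sampler)
    print(f"{name:8s} mean={iii.mean():5.1f}  CV={iii.std() / iii.mean():.2f}")
```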
Browne, Erica N; Rathinam, Sivakumar R; Kanakath, Anuradha; Thundikandy, Radhika; Babu, Manohar; Lietman, Thomas M; Acharya, Nisha R
2017-02-01
To conduct a Bayesian analysis of a randomized clinical trial (RCT) for non-infectious uveitis using expert opinion as a subjective prior belief. A RCT was conducted to determine which antimetabolite, methotrexate or mycophenolate mofetil, is more effective as an initial corticosteroid-sparing agent for the treatment of intermediate, posterior, and pan-uveitis. Before the release of trial results, expert opinion on the relative effectiveness of these two medications was collected via online survey. Members of the American Uveitis Society executive committee were invited to provide an estimate for the relative decrease in efficacy with a 95% credible interval (CrI). A prior probability distribution was created from experts' estimates. A Bayesian analysis was performed using the constructed expert prior probability distribution and the trial's primary outcome. A total of 11 of the 12 invited uveitis specialists provided estimates. Eight of 11 experts (73%) believed mycophenolate mofetil is more effective. The group prior belief was that the odds of treatment success for patients taking mycophenolate mofetil were 1.4-fold the odds of those taking methotrexate (95% CrI 0.03-45.0). The odds of treatment success with mycophenolate mofetil compared to methotrexate was 0.4 from the RCT (95% confidence interval 0.1-1.2) and 0.7 (95% CrI 0.2-1.7) from the Bayesian analysis. A Bayesian analysis combining expert belief with the trial's result did not indicate preference for one drug. However, the wide credible interval leaves open the possibility of a substantial treatment effect. This suggests clinical equipoise necessary to allow a larger, more definitive RCT.
Age-dependent biochemical quantities: an approach for calculating reference intervals.
Bjerner, J
2007-01-01
A parametric method is often preferred when calculating reference intervals for biochemical quantities, as non-parametric methods are less efficient and require more observations/study subjects. Parametric methods are complicated, however, because of three commonly encountered features. First, biochemical quantities seldom display a Gaussian distribution, and there must either be a transformation procedure to obtain such a distribution or a more complex distribution has to be used. Second, biochemical quantities are often dependent on a continuous covariate, exemplified by rising serum concentrations of MUC1 (episialin, CA15.3) with increasing age. Third, outliers often exert substantial influence on parametric estimations and therefore need to be excluded before calculations are made. The International Federation of Clinical Chemistry (IFCC) currently recommends that confidence intervals be calculated for the reference centiles obtained. However, common statistical packages allowing for the adjustment of a continuous covariate do not make this calculation. In the method described in the current study, Tukey's fence is used to eliminate outliers and two-stage transformations (modulus-exponential-normal) in order to render Gaussian distributions. Fractional polynomials are employed to model functions for mean and standard deviations dependent on a covariate, and the model is selected by maximum likelihood. Confidence intervals are calculated for the fitted centiles by combining parameter estimation and sampling uncertainties. Finally, the elimination of outliers was made dependent on covariates by reiteration. Though a good knowledge of statistical theory is needed when performing the analysis, the current method is rewarding because the results are of practical use in patient care.
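A minimal sketch of the outlier-exclusion step (Tukey's fence on the interquartile range) that precedes the parametric fit; the data below are invented, and the 1.5×IQR factor is the conventional choice, assumed rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(9)
# Hypothetical biochemical results with a few gross outliers appended.
x = np.concatenate([rng.lognormal(mean=1.0, sigma=0.3, size=200), [35.0, 50.0]])

q1, q3 = np.percentile(x, [25, 75])
iqr = q3 - q1
lower_fence, upper_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
kept = x[(x >= lower_fence) & (x <= upper_fence)]
print(f"removed {len(x) - len(kept)} outliers; "
      f"fences = ({lower_fence:.2f}, {upper_fence:.2f})")
```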
The Application of Nonstandard Analysis to the Study of Inviscid Shock Wave Jump Conditions
NASA Technical Reports Server (NTRS)
Farassat, F.; Baty, R. S.
1998-01-01
The use of conservation laws in nonconservative form for deriving shock jump conditions by Schwartz distribution theory leads to ambiguous products of generalized functions. Nonstandard analysis is used to define a class of Heaviside functions where the jump from zero to one occurs on an infinitesimal interval. These Heaviside functions differ by their microstructure near x = 0, i.e., by the nature of the rise within the infinitesimal interval. It is shown that the conservation laws in nonconservative form can relate the different Heaviside functions used to define jumps in different flow parameters. There are no mathematical or logical ambiguities in the derivation of the jump conditions. An important result is that the microstructure of the Heaviside function of the jump in entropy has a positive peak greater than one within the infinitesimal interval where the jump occurs. This phenomenon is known from more sophisticated studies of the structure of shock waves using the viscous fluid assumption. However, the present analysis is simpler and more direct.
A model of return intervals between earthquake events
NASA Astrophysics Data System (ADS)
Zhou, Yu; Chechkin, Aleksei; Sokolov, Igor M.; Kantz, Holger
2016-06-01
Application of the diffusion entropy analysis and the standard deviation analysis to the time sequence of the southern California earthquake events from 1976 to 2002 uncovered scaling behavior typical for anomalous diffusion. However, the origin of such behavior is still under debate. Some studies attribute the scaling behavior to the correlations in the return intervals, or waiting times, between aftershocks or mainshocks. To elucidate the nature of the scaling, we applied specific reshuffling techniques to eliminate correlations between different types of events and then examined how this affects the scaling behavior. We demonstrate that the origin of the observed scaling behavior is the interplay between the mainshock waiting time distribution and the structure of clusters of aftershocks, and not correlations in waiting times between the mainshocks and aftershocks themselves. Our findings are corroborated by numerical simulations of a simple model showing a very similar behavior. The mainshocks are modeled by a renewal process with a power-law waiting time distribution between events, and aftershocks follow a nonhomogeneous Poisson process with the rate governed by Omori's law.
Quantification of Uncertainty in the Flood Frequency Analysis
NASA Astrophysics Data System (ADS)
Kasiapillai Sudalaimuthu, K.; He, J.; Swami, D.
2017-12-01
Flood frequency analysis (FFA) is usually carried out for the planning and design of water resources and hydraulic structures. Owing to the variability in sample representation, the selection of the distribution and the estimation of distribution parameters, the estimation of flood quantiles has always been uncertain. Hence, suitable approaches must be developed to quantify the uncertainty in the form of a prediction interval as an alternative to the deterministic approach. The framework developed in the present study to include uncertainty in FFA uses a multi-objective optimization approach to construct the prediction interval from an ensemble of flood quantiles. Through this approach, an optimal variability of the distribution parameters is identified to carry out FFA. To demonstrate the proposed approach, annual maximum flow data from two gauge stations (Bow River at Calgary and Banff, Canada) are used. The major focus of the present study was to evaluate the changes in the magnitude of flood quantiles due to the recent extreme flood event that occurred in 2013. In addition, the efficacy of the proposed method was verified using standard bootstrap-based sampling approaches, and the proposed method was found to be reliable in modeling extreme floods as compared to the bootstrap methods.
The effect of sampling rate on observed statistics in a correlated random walk
Rosser, G.; Fletcher, A. G.; Maini, P. K.; Baker, R. E.
2013-01-01
Tracking the movement of individual cells or animals can provide important information about their motile behaviour, with key examples including migrating birds, foraging mammals and bacterial chemotaxis. In many experimental protocols, observations are recorded with a fixed sampling interval and the continuous underlying motion is approximated as a series of discrete steps. The size of the sampling interval significantly affects the tracking measurements, the statistics computed from observed trajectories, and the inferences drawn. Despite the widespread use of tracking data to investigate motile behaviour, many open questions remain about these effects. We use a correlated random walk model to study the variation with sampling interval of two key quantities of interest: apparent speed and angle change. Two variants of the model are considered, in which reorientations occur instantaneously and with a stationary pause, respectively. We employ stochastic simulations to study the effect of sampling on the distributions of apparent speeds and angle changes, and present novel mathematical analysis in the case of rapid sampling. Our investigation elucidates the complex nature of sampling effects for sampling intervals ranging over many orders of magnitude. Results show that inclusion of a stationary phase significantly alters the observed distributions of both quantities. PMID:23740484
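A hedged sketch of the simulation idea: a correlated random walk is generated, and apparent speed and absolute angle change are recomputed after subsampling the track at coarser intervals. The parameters are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(10)

# Correlated random walk: the heading changes by a small normal increment each
# time step; the speed is constant between reorientations.
n_steps, speed, kappa = 20_000, 1.0, 0.3   # kappa: sd of heading increments (rad)
headings = np.cumsum(rng.normal(0.0, kappa, n_steps))
xy = np.cumsum(speed * np.column_stack([np.cos(headings), np.sin(headings)]), axis=0)

def apparent_stats(xy, sample_every):
    """Apparent speed and absolute angle change when positions are only
    observed every `sample_every` time steps."""
    obs = xy[::sample_every]
    steps = np.diff(obs, axis=0)
    speeds = np.linalg.norm(steps, axis=1) / sample_every
    angles = np.arctan2(steps[:, 1], steps[:, 0])
    turns = np.abs((np.diff(angles) + np.pi) % (2 * np.pi) - np.pi)
    return speeds.mean(), turns.mean()

for k in (1, 5, 20):
    s, a = apparent_stats(xy, k)
    print(f"sampling interval {k:2d}: apparent speed {s:.2f}, "
          f"mean |angle change| {a:.2f} rad")
```

The coarser the sampling interval, the lower the apparent speed and the larger the apparent angle changes, which is the effect the paper quantifies.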
1985-03-01
distribution. Samples of suspended particulates will also be collected for later image and elemental analysis. ... Method of analysis for particle ... will be flow injection analysis. This method will allow rapid, continuous analysis of seawater nutrients. Measurements will be made at one minute ... 5 m intervals) as well as from the underway pumping system. Method of pigment analysis for porphyrin and carotenoid pigments will be separation by
Geffré, Anne; Concordet, Didier; Braun, Jean-Pierre; Trumel, Catherine
2011-03-01
International recommendations for determination of reference intervals have been recently updated, especially for small reference sample groups, and use of the robust method and Box-Cox transformation is now recommended. Unfortunately, these methods are not included in most software programs used for data analysis by clinical laboratories. We have created a set of macroinstructions, named Reference Value Advisor, for use in Microsoft Excel to calculate reference limits applying different methods. For any series of data, Reference Value Advisor calculates reference limits (with 90% confidence intervals [CI]) using a nonparametric method when n≥40 and by parametric and robust methods from native and Box-Cox transformed values; tests normality of distributions using the Anderson-Darling test and outliers using Tukey and Dixon-Reed tests; displays the distribution of values in dot plots and histograms and constructs Q-Q plots for visual inspection of normality; and provides minimal guidelines in the form of comments based on international recommendations. The critical steps in determination of reference intervals are correct selection of as many reference individuals as possible and analysis of specimens in controlled preanalytical and analytical conditions. Computing tools cannot compensate for flaws in selection and size of the reference sample group and handling and analysis of samples. However, if those steps are performed properly, Reference Value Advisor, available as freeware at http://www.biostat.envt.fr/spip/spip.php?article63, permits rapid assessment and comparison of results calculated using different methods, including currently unavailable methods. This allows for selection of the most appropriate method, especially as the program provides the CI of limits. It should be useful in veterinary clinical pathology when only small reference sample groups are available. ©2011 American Society for Veterinary Clinical Pathology.
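A minimal sketch of one of the calculations such a tool reports, the nonparametric reference interval (2.5th and 97.5th percentiles) with bootstrap 90% confidence intervals around each limit; this is not the Reference Value Advisor code, and the reference sample below is simulated.

```python
import numpy as np

rng = np.random.default_rng(11)
# Hypothetical reference sample (n >= 40, as required for the nonparametric method).
x = rng.lognormal(mean=1.2, sigma=0.25, size=120)

limits = np.percentile(x, [2.5, 97.5])

# 90% bootstrap confidence intervals around each reference limit.
boot = np.array([np.percentile(rng.choice(x, size=len(x), replace=True), [2.5, 97.5])
                 for _ in range(2000)])
ci_lower = np.percentile(boot[:, 0], [5, 95])
ci_upper = np.percentile(boot[:, 1], [5, 95])
print("reference interval:   ", np.round(limits, 2))
print("90% CI of lower limit:", np.round(ci_lower, 2))
print("90% CI of upper limit:", np.round(ci_upper, 2))
```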
Brackney, Ryan J; Cheung, Timothy H. C; Neisewander, Janet L; Sanabria, Federico
2011-01-01
Dissociating motoric and motivational effects of pharmacological manipulations on operant behavior is a substantial challenge. To address this problem, we applied a response-bout analysis to data from rats trained to lever press for sucrose on variable-interval (VI) schedules of reinforcement. Motoric, motivational, and schedule factors (effort requirement, deprivation level, and schedule requirements, respectively) were manipulated. Bout analysis found that interresponse times (IRTs) were described by a mixture of two exponential distributions, one characterizing IRTs within response bouts, another characterizing intervals between bouts. Increasing effort requirement lengthened the shortest IRT (the refractory period between responses). Adding a ratio requirement increased the length and density of response bouts. Both manipulations also decreased the bout-initiation rate. In contrast, food deprivation only increased the bout-initiation rate. Changes in the distribution of IRTs over time showed that responses during extinction were also emitted in bouts, and that the decrease in response rate was primarily due to progressively longer intervals between bouts. Taken together, these results suggest that changes in the refractory period indicate motoric effects, whereas selective alterations in bout initiation rate indicate incentive-motivational effects. These findings support the use of response-bout analyses to identify the influence of pharmacological manipulations on processes underlying operant performance. PMID:21765544
Estimation and confidence intervals for empirical mixing distributions
Link, W.A.; Sauer, J.R.
1995-01-01
Questions regarding collections of parameter estimates can frequently be expressed in terms of an empirical mixing distribution (EMD). This report discusses empirical Bayes estimation of an EMD, with emphasis on the construction of interval estimates. Estimation of the EMD is accomplished by substitution of estimates of prior parameters in the posterior mean of the EMD. This procedure is examined in a parametric model (the normal-normal mixture) and in a semi-parametric model. In both cases, the empirical Bayes bootstrap of Laird and Louis (1987, Journal of the American Statistical Association 82, 739-757) is used to assess the variability of the estimated EMD arising from the estimation of prior parameters. The proposed methods are applied to a meta-analysis of population trend estimates for groups of birds.
NASA Astrophysics Data System (ADS)
Barengoltz, Jack
2016-07-01
Monte Carlo (MC) is a common method to estimate probability, effectively by a simulation. For planetary protection, it may be used to estimate the probability of impact P_I of a protected planet by a launch vehicle (upper stage). The object of the analysis is to provide a value for P_I with a given level of confidence (LOC) that the true value does not exceed the maximum allowed value of P_I. In order to determine the number of MC histories required, one must also guess the maximum number of hits that will occur in the analysis. This extra parameter is needed because a LOC is desired. If more hits occur, the MC analysis would indicate that the true value may exceed the specification value with a higher probability than the LOC. (In the worst case, even the mean value of the estimated P_I might exceed the specification value.) After the analysis is conducted, the actual number of hits is, of course, the mean. The number of hits arises from a small probability per history and a large number of histories; these are the classic requirements for a Poisson distribution. For a known Poisson distribution (the mean is the only parameter), the probability for some interval in the number of hits is calculable. Before the analysis, this is not possible. Fortunately, there are methods that can bound the unknown mean of a Poisson distribution. F. Garwood (1936, "Fiduciary limits for the Poisson distribution," Biometrika 28, 437-442) published an appropriate method that uses the chi-squared function, actually its inverse (the integral chi-squared function would yield the probability α as a function of the mean μ and an actual value n), despite the notation used. The resulting formula for the upper and lower limits of the mean μ with two-tailed probability 1-α depends on the LOC α and an estimated value of the number of "successes" n. In an MC analysis for planetary protection, only the upper limit is of interest, i.e., the single-tailed distribution. (A smaller actual P_I is no problem.) One advantage of this method is that the function is available in EXCEL. Note that care must be taken with the definition of the CHIINV function (the inverse of the integral chi-squared distribution). The equivalent inequality in EXCEL is μ < CHIINV[1-α, 2(n+1)]. In practice, one calculates this upper limit for a specified LOC α and a guess of how many hits n will be found after the MC analysis. Then the estimate of the number of histories required is this upper limit divided by the specification for the allowed P_I (rounded up). However, if the number of hits actually exceeds the guess, the P_I requirement will be met only with a smaller LOC. A disadvantage is that the intervals about the mean are "in general too wide, yielding coverage probabilities much greater than 1-α" (G. Casella and C. Robert, 1988, Purdue University Technical Report #88-7 or Cornell University Technical Report BU-903-M). For planetary protection, this technical issue means that the upper limit of the interval and the probability associated with the interval (i.e., the LOC) are conservative.
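The sizing calculation can be sketched with the standard Garwood upper limit for a Poisson mean. The Python code below is an illustration, not the paper's worksheet; the α convention and the factor of 1/2 follow the usual statement of Garwood's formula and may differ from the Excel expression quoted in the abstract.

```python
# A minimal sketch: Garwood-style upper limit on a Poisson mean and the implied
# number of Monte Carlo histories for a given allowed impact probability.
from math import ceil
from scipy.stats import chi2

def poisson_upper_limit(n_hits, loc=0.99):
    """One-sided upper limit on the Poisson mean given n_hits observed events,
    at level of confidence `loc` (standard Garwood convention)."""
    return chi2.ppf(loc, 2 * (n_hits + 1)) / 2.0

def required_histories(max_allowed_pi, expected_hits, loc=0.99):
    """Number of MC histories so that, if at most `expected_hits` impacts occur,
    the upper limit on P_I stays below `max_allowed_pi`."""
    return ceil(poisson_upper_limit(expected_hits, loc) / max_allowed_pi)

# Example: allowed P_I = 1e-4, a guess of 3 hits, 99% level of confidence.
print(required_histories(1e-4, 3, loc=0.99))
```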
The Distribution of the Product Explains Normal Theory Mediation Confidence Interval Estimation.
Kisbu-Sakarya, Yasemin; MacKinnon, David P; Miočević, Milica
2014-05-01
The distribution of the product has several useful applications. One of these applications is its use to form confidence intervals for the indirect effect as the product of two regression coefficients. The purpose of this article is to investigate how the moments of the distribution of the product explain normal theory mediation confidence interval coverage and imbalance. Values of the critical ratio for each random variable are used to demonstrate how the moments of the distribution of the product change across values of the critical ratio observed in research studies. Results of the simulation study showed that as skewness in absolute value increases, coverage decreases, and as skewness in absolute value and kurtosis increase, imbalance increases. The difference between testing the significance of the indirect effect using the normal theory versus the asymmetric distribution of the product is further illustrated with a real data example. This article is the first study to show the direct link between the distribution of the product and indirect effect confidence intervals, and it clarifies the results of previous simulation studies by showing why normal theory confidence intervals for indirect effects are often less accurate than those obtained from the asymmetric distribution of the product or from resampling methods.
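A common resampling analogue of the distribution-of-the-product interval, mentioned in the abstract alongside normal-theory intervals, can be sketched as a Monte Carlo confidence interval for the product of two coefficient estimates. The code below is an illustrative assumption (independent normal estimates, made-up inputs), not the article's analytic treatment of the product's moments.

```python
# A minimal sketch: Monte Carlo confidence interval for the indirect effect a*b,
# treating the two regression estimates as independent normals.
import numpy as np

def product_ci(a_hat, se_a, b_hat, se_b, alpha=0.05, n_sim=100_000, seed=0):
    rng = np.random.default_rng(seed)
    ab = rng.normal(a_hat, se_a, n_sim) * rng.normal(b_hat, se_b, n_sim)
    return np.percentile(ab, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Illustrative inputs: the resulting interval around 0.4 * 0.3 is asymmetric.
print(product_ci(0.4, 0.15, 0.3, 0.12))
```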
Representation of analysis results involving aleatory and epistemic uncertainty.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Jay Dean; Helton, Jon Craig; Oberkampf, William Louis
2008-08-01
Procedures are described for the representation of results in analyses that involve both aleatory uncertainty and epistemic uncertainty, with aleatory uncertainty deriving from an inherent randomness in the behavior of the system under study and epistemic uncertainty deriving from a lack of knowledge about the appropriate values to use for quantities that are assumed to have fixed but poorly known values in the context of a specific study. Aleatory uncertainty is usually represented with probability and leads to cumulative distribution functions (CDFs) or complementary cumulative distribution functions (CCDFs) for analysis results of interest. Several mathematical structures are available for the representation of epistemic uncertainty, including interval analysis, possibility theory, evidence theory and probability theory. In the presence of epistemic uncertainty, there is not a single CDF or CCDF for a given analysis result. Rather, there is a family of CDFs and a corresponding family of CCDFs that derive from epistemic uncertainty and have an uncertainty structure that derives from the particular uncertainty structure (i.e., interval analysis, possibility theory, evidence theory, probability theory) used to represent epistemic uncertainty. Graphical formats for the representation of epistemic uncertainty in families of CDFs and CCDFs are investigated and presented for the indicated characterizations of epistemic uncertainty.
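A hedged sketch of the nested (double-loop) sampling that produces such a family of CDFs: the outer loop draws the epistemic, fixed-but-poorly-known parameter, and the inner loop samples the aleatory variability. The distributions and parameter ranges below are purely illustrative and are not taken from the report.

```python
# A minimal sketch: each epistemic draw fixes the poorly known parameter; the
# inner aleatory loop then yields one empirical CDF, giving a family of CDFs.
import numpy as np

def family_of_cdfs(n_epistemic=20, n_aleatory=1000, seed=0):
    rng = np.random.default_rng(seed)
    grid = np.linspace(0, 10, 200)
    cdfs = []
    for _ in range(n_epistemic):
        mu = rng.uniform(1.0, 3.0)             # epistemic: fixed but poorly known
        y = rng.normal(mu, 1.0, n_aleatory)    # aleatory: inherent randomness
        cdfs.append((y[:, None] <= grid).mean(axis=0))
    return grid, np.array(cdfs)                # one empirical CDF per epistemic draw
```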
NASA Astrophysics Data System (ADS)
Kawaguchi, Hiroshi; Hayashi, Toshiyuki; Kato, Toshinori; Okada, Eiji
2004-06-01
Near-infrared (NIR) topography can obtain a topographical distribution of the activated region in the brain cortex. Near-infrared light is strongly scattered in the head, and the volume of tissue sampled by a source-detector pair on the head surface is broadly distributed in the brain. This scattering effect results in poor resolution and contrast in the topographic image of the brain activity. In this study, a one-dimensional distribution of absorption change in a head model is calculated by mapping and reconstruction methods to evaluate the effect of the image reconstruction algorithm and the interval of measurement points for topographic imaging on the accuracy of the topographic image. The light propagation in the head model is predicted by Monte Carlo simulation to obtain the spatial sensitivity profile for a source-detector pair. The measurement points are one-dimensionally arranged on the surface of the model, and the distance between adjacent measurement points is varied from 4 mm to 28 mm. Small intervals of the measurement points improve the topographic image calculated by both the mapping and reconstruction methods. In the conventional mapping method, the limit of the spatial resolution depends upon the interval of the measurement points and the spatial sensitivity profile for source-detector pairs. The reconstruction method has advantages over the mapping method, improving the results of the one-dimensional analysis when the interval of measurement points is less than 12 mm. The effect of overlapping of spatial sensitivity profiles indicates that the reconstruction method may be effective in improving the spatial resolution of a two-dimensional reconstruction of the topographic image obtained with a larger interval of measurement points. Near-infrared topography with the reconstruction method potentially obtains an accurate distribution of absorption change in the brain even if the size of the absorption change is less than 10 mm.
TSP Symposium 2012 Proceedings
2012-11-01
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-08
... series in the pilot: (1) A time series analysis of open interest; and (2) An analysis of the distribution... times the number of shares outstanding. These are summed for all 500 stocks and divided by a... below $3.00 and $0.10 for all other series. Strike price intervals would be set no less than 5 points...
Oscillatory dynamics of an intravenous glucose tolerance test model with delay interval
NASA Astrophysics Data System (ADS)
Shi, Xiangyun; Kuang, Yang; Makroglou, Athena; Mokshagundam, Sriprakash; Li, Jiaxu
2017-11-01
Type 2 diabetes mellitus (T2DM) has become a prevalent pandemic disease in view of the modern lifestyle. Both the diabetic population and health expenses grow rapidly, according to the American Diabetes Association. Detecting the potential onset of T2DM is an essential focal point in the research of diabetes mellitus. The intravenous glucose tolerance test (IVGTT) is an effective protocol to determine insulin sensitivity, glucose effectiveness, and pancreatic β-cell functionality through the analysis and parameter estimation of a proper differential equation model. Delay differential equations have been used to study complex physiological phenomena, including glucose and insulin regulation. In this paper, we propose a novel approach to model the time delay in IVGTT modeling. This novel approach uses two parameters to simulate not only discrete time delay and time delay distributed over a past interval, but also time delay distributed over a past sub-interval. Normally, a larger time delay, either discrete or distributed, will destabilize the system. However, we find that a time delay over a sub-interval might not. We present analytically some basic model properties, which are desirable biologically and mathematically. We show that this relatively simple model provides a good fit to fluctuating patient data sets and reveals some intriguing dynamics. Moreover, our numerical simulation results indicate that our model may remove the defect in the well-known Minimal Model, which often overestimates the glucose effectiveness index.
Weblog patterns and human dynamics with decreasing interest
NASA Astrophysics Data System (ADS)
Guo, J.-L.; Fan, C.; Guo, Z.-H.
2011-06-01
In order to describe the phenomenon that people's interest in doing something starts high and gradually decreases until reaching a balance, a model describing the attenuation of interest is proposed, reflecting the fact that people's interest becomes more stable after a long time. We give a rigorous analysis of this model using non-homogeneous Poisson processes. Our analysis indicates that the interval distribution of arrival times is a mixed distribution with exponential and power-law features, that is, a power law with an exponential cutoff. We then collect blogs from ScienceNet.cn and carry out an empirical study of the interarrival time distribution. The empirical results agree well with the theoretical analysis, obeying a special power law with an exponential cutoff, that is, a special kind of Gamma distribution. These empirical results verify the model by providing evidence for a new class of phenomena in human dynamics. It can be concluded that, besides power-law distributions, there are other distributions in human dynamics. These findings demonstrate the variety of human behavior dynamics.
Multifractal analysis of mobile social networks
NASA Astrophysics Data System (ADS)
Zheng, Wei; Zhang, Zifeng; Deng, Yufan
2017-09-01
As Wireless Fidelity (Wi-Fi)-enabled handheld devices have become widely used, mobile social networks (MSNs) have been attracting extensive attention. Fractal approaches have also been widely applied to characterize natural networks, as useful tools to depict their spatial distribution and scaling properties. Moreover, when the complexity of the spatial distribution of MSNs cannot be properly characterized by a single fractal dimension, multifractal analysis is required. For further research, we introduce a multifractal analysis method based on a box-covering algorithm to describe the structure of MSNs. Using this method, we find that the networks are multifractal at different time intervals. The simulation results demonstrate that the proposed method is efficient for analyzing the multifractal characteristics of MSNs, providing a distribution of singularities that adequately describes both the heterogeneity of fractal patterns and the statistics of measurements across spatial scales in MSNs.
Browne, Erica N; Rathinam, Sivakumar R; Kanakath, Anuradha; Thundikandy, Radhika; Babu, Manohar; Lietman, Thomas M; Acharya, Nisha R
2017-01-01
Purpose To conduct a Bayesian analysis of a randomized clinical trial (RCT) for non-infectious uveitis using expert opinion as a subjective prior belief. Methods A RCT was conducted to determine which antimetabolite, methotrexate or mycophenolate mofetil, is more effective as an initial corticosteroid-sparing agent for the treatment of intermediate, posterior, and pan- uveitis. Before the release of trial results, expert opinion on the relative effectiveness of these two medications was collected via online survey. Members of the American Uveitis Society executive committee were invited to provide an estimate for the relative decrease in efficacy with a 95% credible interval (CrI). A prior probability distribution was created from experts’ estimates. A Bayesian analysis was performed using the constructed expert prior probability distribution and the trial’s primary outcome. Results 11 of 12 invited uveitis specialists provided estimates. Eight of 11 experts (73%) believed mycophenolate mofetil is more effective. The group prior belief was that the odds of treatment success for patients taking mycophenolate mofetil were 1.4-fold the odds of those taking methotrexate (95% CrI 0.03 – 45.0). The odds of treatment success with mycophenolate mofetil compared to methotrexate was 0.4 from the RCT (95% confidence interval 0.1–1.2) and 0.7 (95% CrI 0.2–1.7) from the Bayesian analysis. Conclusions A Bayesian analysis combining expert belief with the trial’s result did not indicate preference for one drug. However, the wide credible interval leaves open the possibility of a substantial treatment effect. This suggests clinical equipoise necessary to allow a larger, more definitive RCT. PMID:27982726
2014-01-01
Background Recurrent events data analysis is common in biomedicine. Literature review indicates that most statistical models used for such data are often based on time to the first event or consider events within a subject as independent. Even when taking into account the non-independence of recurrent events within subjects, data analyses are mostly done with continuous risk interval models, which may not be appropriate for treatments with sustained effects (e.g., drug treatments of malaria patients). Furthermore, results can be biased in cases of a confounding factor implying different risk exposure, e.g. in malaria transmission: if subjects are located at zones showing different environmental factors implying different risk exposures. Methods This work aimed to compare four different approaches by analysing recurrent malaria episodes from a clinical trial assessing the effectiveness of three malaria treatments [artesunate + amodiaquine (AS + AQ), artesunate + sulphadoxine-pyrimethamine (AS + SP) or artemether-lumefantrine (AL)], with continuous and discontinuous risk intervals: Andersen-Gill counting process (AG-CP), Prentice-Williams-Peterson counting process (PWP-CP), a shared gamma frailty model, and Generalized Estimating Equations model (GEE) using Poisson distribution. Simulations were also made to analyse the impact of the addition of a confounding factor on malaria recurrent episodes. Results Using the discontinuous interval analysis, AG-CP and Shared gamma frailty models provided similar estimations of treatment effect on malaria recurrent episodes when adjusted on age category. The patients had significant decreased risk of recurrent malaria episodes when treated with AS + AQ or AS + SP arms compared to AL arm; Relative Risks were: 0.75 (95% CI (Confidence Interval): 0.62-0.89), 0.74 (95% CI: 0.62-0.88) respectively for AG-CP model and 0.76 (95% CI: 0.64-0.89), 0.74 (95% CI: 0.62-0.87) for the Shared gamma frailty model. With both discontinuous and continuous risk intervals analysis, GEE Poisson distribution models failed to detect the effect of AS + AQ arm compared to AL arm when adjusted for age category. The discontinuous risk interval analysis was found to be the more appropriate approach. Conclusion Repeated event in infectious diseases such as malaria can be analysed with appropriate existing models that account for the correlation between multiple events within subjects with common statistical software packages, after properly setting up the data structures. PMID:25073652
Sun, Xing; Li, Xiaoyun; Chen, Cong; Song, Yang
2013-01-01
Frequent rise of interval-censored time-to-event data in randomized clinical trials (e.g., progression-free survival [PFS] in oncology) challenges statistical researchers in the pharmaceutical industry in various ways. These challenges exist in both trial design and data analysis. Conventional statistical methods treating intervals as fixed points, which are generally practiced by pharmaceutical industry, sometimes yield inferior or even flawed analysis results in extreme cases for interval-censored data. In this article, we examine the limitation of these standard methods under typical clinical trial settings and further review and compare several existing nonparametric likelihood-based methods for interval-censored data, methods that are more sophisticated but robust. Trial design issues involved with interval-censored data comprise another topic to be explored in this article. Unlike right-censored survival data, expected sample size or power for a trial with interval-censored data relies heavily on the parametric distribution of the baseline survival function as well as the frequency of assessments. There can be substantial power loss in trials with interval-censored data if the assessments are very infrequent. Such an additional dependency controverts many fundamental assumptions and principles in conventional survival trial designs, especially the group sequential design (e.g., the concept of information fraction). In this article, we discuss these fundamental changes and available tools to work around their impacts. Although progression-free survival is often used as a discussion point in the article, the general conclusions are equally applicable to other interval-censored time-to-event endpoints.
A hierarchical Bayesian GEV model for improving local and regional flood quantile estimates
NASA Astrophysics Data System (ADS)
Lima, Carlos H. R.; Lall, Upmanu; Troy, Tara; Devineni, Naresh
2016-10-01
We estimate local and regional Generalized Extreme Value (GEV) distribution parameters for flood frequency analysis in a multilevel, hierarchical Bayesian framework, to explicitly model and reduce uncertainties. As prior information for the model, we assume that the GEV location and scale parameters for each site come from independent log-normal distributions, whose mean parameter scales with the drainage area. From empirical and theoretical arguments, the shape parameter for each site is shrunk towards a common mean. Non-informative prior distributions are assumed for the hyperparameters and the MCMC method is used to sample from the joint posterior distribution. The model is tested using annual maximum series from 20 streamflow gauges located in an 83,000 km2 flood prone basin in Southeast Brazil. The results show a significant reduction of uncertainty estimates of flood quantile estimates over the traditional GEV model, particularly for sites with shorter records. For return periods within the range of the data (around 50 years), the Bayesian credible intervals for the flood quantiles tend to be narrower than the classical confidence limits based on the delta method. As the return period increases beyond the range of the data, the confidence limits from the delta method become unreliable and the Bayesian credible intervals provide a way to estimate satisfactory confidence bands for the flood quantiles considering parameter uncertainties and regional information. In order to evaluate the applicability of the proposed hierarchical Bayesian model for regional flood frequency analysis, we estimate flood quantiles for three randomly chosen out-of-sample sites and compare with classical estimates using the index flood method. The posterior distributions of the scaling law coefficients are used to define the predictive distributions of the GEV location and scale parameters for the out-of-sample sites given only their drainage areas and the posterior distribution of the average shape parameter is taken as the regional predictive distribution for this parameter. While the index flood method does not provide a straightforward way to consider the uncertainties in the index flood and in the regional parameters, the results obtained here show that the proposed Bayesian method is able to produce adequate credible intervals for flood quantiles that are in accordance with empirical estimates.
NASA Astrophysics Data System (ADS)
Meng, Hao; Ren, Fei; Gu, Gao-Feng; Xiong, Xiong; Zhang, Yong-Jie; Zhou, Wei-Xing; Zhang, Wei
2012-05-01
Understanding the statistical properties of recurrence intervals (also termed return intervals in econophysics literature) of extreme events is crucial to risk assessment and management of complex systems. The probability distributions and correlations of recurrence intervals for many systems have been extensively investigated. However, the impacts of microscopic rules of a complex system on the macroscopic properties of its recurrence intervals are less studied. In this letter, we adopt an order-driven stock model to address this issue for stock returns. We find that the distributions of the scaled recurrence intervals of simulated returns have a power-law scaling with stretched exponential cutoff and the intervals possess multifractal nature, which are consistent with empirical results. We further investigate the effects of long memory in the directions (or signs) and relative prices of the order flow on the characteristic quantities of these properties. It is found that the long memory in the order directions (Hurst index Hs) has a negligible effect on the interval distributions and the multifractal nature. In contrast, the power-law exponent of the interval distribution increases linearly with respect to the Hurst index Hx of the relative prices, and the singularity width of the multifractal nature fluctuates around a constant value when Hx<0.7 and then increases with Hx. No evident effects of Hs and Hx are found on the long memory of the recurrence intervals. Our results indicate that the nontrivial properties of the recurrence intervals of returns are mainly caused by traders' behaviors of persistently placing new orders around the best bid and ask prices.
An understanding of human dynamics in urban subway traffic from the Maximum Entropy Principle
NASA Astrophysics Data System (ADS)
Yong, Nuo; Ni, Shunjiang; Shen, Shifei; Ji, Xuewei
2016-08-01
We studied the distribution of entry time interval in Beijing subway traffic by analyzing the smart card transaction data, and then deduced the probability distribution function of entry time interval based on the Maximum Entropy Principle. Both theoretical derivation and data statistics indicated that the entry time interval obeys power-law distribution with an exponential cutoff. In addition, we pointed out the constraint conditions for the distribution form and discussed how the constraints affect the distribution function. It is speculated that for bursts and heavy tails in human dynamics, when the fitted power exponent is less than 1.0, it cannot be a pure power-law distribution, but with an exponential cutoff, which may be ignored in the previous studies.
Aagten-Murphy, David; Cappagli, Giulia; Burr, David
2014-03-01
Expert musicians are able to time their actions accurately and consistently during a musical performance. We investigated how musical expertise influences the ability to reproduce auditory intervals and how this generalises across different techniques and sensory modalities. We first compared various reproduction strategies and interval length, to examine the effects in general and to optimise experimental conditions for testing the effect of music, and found that the effects were robust and consistent across different paradigms. Focussing on a 'ready-set-go' paradigm subjects reproduced time intervals drawn from distributions varying in total length (176, 352 or 704 ms) or in the number of discrete intervals within the total length (3, 5, 11 or 21 discrete intervals). Overall, Musicians performed more veridical than Non-Musicians, and all subjects reproduced auditory-defined intervals more accurately than visually-defined intervals. However, Non-Musicians, particularly with visual stimuli, consistently exhibited a substantial and systematic regression towards the mean interval. When subjects judged intervals from distributions of longer total length they tended to regress more towards the mean, while the ability to discriminate between discrete intervals within the distribution had little influence on subject error. These results are consistent with a Bayesian model that minimizes reproduction errors by incorporating a central tendency prior weighted by the subject's own temporal precision relative to the current distribution of intervals. Finally a strong correlation was observed between all durations of formal musical training and total reproduction errors in both modalities (accounting for 30% of the variance). Taken together these results demonstrate that formal musical training improves temporal reproduction, and that this improvement transfers from audition to vision. They further demonstrate the flexibility of sensorimotor mechanisms in adapting to different task conditions to minimise temporal estimation errors. © 2013.
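The central-tendency prior described here can be sketched as a precision-weighted combination of a noisy sensory measurement and the mean of the current interval distribution. The code below is a minimal illustration of that Bayesian account, not the authors' fitted model; all parameter values are made up.

```python
# A minimal sketch: the reproduced interval is a precision-weighted average of
# the noisy measurement and the mean of the current stimulus distribution; lower
# sensory precision pulls responses toward the mean (regression effect).
import numpy as np

def reproduce(true_interval, prior_mean, sigma_sensory, sigma_prior, rng):
    measurement = rng.normal(true_interval, sigma_sensory)
    w = sigma_prior**2 / (sigma_prior**2 + sigma_sensory**2)   # weight on measurement
    return w * measurement + (1 - w) * prior_mean

rng = np.random.default_rng(1)
stimuli = rng.choice([300, 400, 500, 600, 700], size=1000)     # ms, illustrative
responses = np.array([reproduce(s, stimuli.mean(), 80.0, 120.0, rng) for s in stimuli])
```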
Robust allocation of a defensive budget considering an attacker's private information.
Nikoofal, Mohammad E; Zhuang, Jun
2012-05-01
Attackers' private information is one of the main issues in defensive resource allocation games in homeland security. The outcome of a defense resource allocation decision critically depends on the accuracy of estimations about the attacker's attributes. However, terrorists' goals may be unknown to the defender, necessitating robust decisions by the defender. This article develops a robust-optimization game-theoretical model for identifying optimal defense resource allocation strategies for a rational defender facing a strategic attacker while the attacker's valuation of targets, being the most critical attribute of the attacker, is unknown but belongs to bounded distribution-free intervals. To our best knowledge, no previous research has applied robust optimization in homeland security resource allocation when uncertainty is defined in bounded distribution-free intervals. The key features of our model include (1) modeling uncertainty in attackers' attributes, where uncertainty is characterized by bounded intervals; (2) finding the robust-optimization equilibrium for the defender using concepts dealing with budget of uncertainty and price of robustness; and (3) applying the proposed model to real data. © 2011 Society for Risk Analysis.
NASA Astrophysics Data System (ADS)
Lucas, Charles E.; Walters, Eric A.; Jatskevich, Juri; Wasynczuk, Oleg; Lamm, Peter T.
2003-09-01
In this paper, a new technique useful for the numerical simulation of large-scale systems is presented. This approach enables the overall system simulation to be formed by the dynamic interconnection of the various interdependent simulations, each representing a specific component or subsystem such as control, electrical, mechanical, hydraulic, or thermal. Each simulation may be developed separately using possibly different commercial-off-the-shelf simulation programs thereby allowing the most suitable language or tool to be used based on the design/analysis needs. These subsystems communicate the required interface variables at specific time intervals. A discussion concerning the selection of appropriate communication intervals is presented herein. For the purpose of demonstration, this technique is applied to a detailed simulation of a representative aircraft power system, such as that found on the Joint Strike Fighter (JSF). This system is comprised of ten component models each developed using MATLAB/Simulink, EASY5, or ACSL. When the ten component simulations were distributed across just four personal computers (PCs), a greater than 15-fold improvement in simulation speed (compared to the single-computer implementation) was achieved.
2014-01-01
Background Meta-regression is becoming increasingly used to model study level covariate effects. However this type of statistical analysis presents many difficulties and challenges. Here two methods for calculating confidence intervals for the magnitude of the residual between-study variance in random effects meta-regression models are developed. A further suggestion for calculating credible intervals using informative prior distributions for the residual between-study variance is presented. Methods Two recently proposed and, under the assumptions of the random effects model, exact methods for constructing confidence intervals for the between-study variance in random effects meta-analyses are extended to the meta-regression setting. The use of Generalised Cochran heterogeneity statistics is extended to the meta-regression setting and a Newton-Raphson procedure is developed to implement the Q profile method for meta-analysis and meta-regression. WinBUGS is used to implement informative priors for the residual between-study variance in the context of Bayesian meta-regressions. Results Results are obtained for two contrasting examples, where the first example involves a binary covariate and the second involves a continuous covariate. Intervals for the residual between-study variance are wide for both examples. Conclusions Statistical methods, and R computer software, are available to compute exact confidence intervals for the residual between-study variance under the random effects model for meta-regression. These frequentist methods are almost as easily implemented as their established counterparts for meta-analysis. Bayesian meta-regressions are also easily performed by analysts who are comfortable using WinBUGS. Estimates of the residual between-study variance in random effects meta-regressions should be routinely reported and accompanied by some measure of their uncertainty. Confidence and/or credible intervals are well-suited to this purpose. PMID:25196829
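For the simpler meta-analysis case, the Q-profile interval for the between-study variance can be sketched by inverting the generalized Q statistic; for meta-regression, the weighted mean would be replaced by a weighted regression fit and the degrees of freedom adjusted. The code below is a sketch under those assumptions (and assumes the roots lie below the supplied bracketing bound), not the paper's R implementation.

```python
# A minimal sketch: Q-profile confidence interval for tau^2 in a random effects
# meta-analysis. y = study effect estimates, v = within-study variances.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def q_stat(tau2, y, v):
    w = 1.0 / (v + tau2)
    mu = np.sum(w * y) / np.sum(w)               # weighted mean at this tau^2
    return np.sum(w * (y - mu) ** 2)

def q_profile_ci(y, v, alpha=0.05, upper_bound=100.0):
    k = len(y)
    lo_target = chi2.ppf(1 - alpha / 2, k - 1)   # Q decreases as tau^2 grows
    hi_target = chi2.ppf(alpha / 2, k - 1)
    f_lo = lambda t: q_stat(t, y, v) - lo_target
    f_hi = lambda t: q_stat(t, y, v) - hi_target
    lower = 0.0 if f_lo(0.0) < 0 else brentq(f_lo, 0.0, upper_bound)
    upper = 0.0 if f_hi(0.0) < 0 else brentq(f_hi, 0.0, upper_bound)
    return lower, upper
```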
NASA Astrophysics Data System (ADS)
Zi, Bin; Zhou, Bin
2016-07-01
For the prediction of dynamic response field of the luffing system of an automobile crane (LSOAAC) with random and interval parameters, a hybrid uncertain model is introduced. In the hybrid uncertain model, the parameters with certain probability distribution are modeled as random variables, whereas, the parameters with lower and upper bounds are modeled as interval variables instead of given precise values. Based on the hybrid uncertain model, the hybrid uncertain dynamic response equilibrium equation, in which different random and interval parameters are simultaneously included in input and output terms, is constructed. Then a modified hybrid uncertain analysis method (MHUAM) is proposed. In the MHUAM, based on random interval perturbation method, the first-order Taylor series expansion and the first-order Neumann series, the dynamic response expression of the LSOAAC is developed. Moreover, the mathematical characteristics of extrema of bounds of dynamic response are determined by random interval moment method and monotonic analysis technique. Compared with the hybrid Monte Carlo method (HMCM) and interval perturbation method (IPM), numerical results show the feasibility and efficiency of the MHUAM for solving the hybrid LSOAAC problems. The effects of different uncertain models and parameters on the LSOAAC response field are also investigated deeply, and numerical results indicate that the impact made by the randomness in the thrust of the luffing cylinder F is larger than that made by the gravity of the weight in suspension Q . In addition, the impact made by the uncertainty in the displacement between the lower end of the lifting arm and the luffing cylinder a is larger than that made by the length of the lifting arm L .
On the properties of stochastic intermittency in rainfall processes.
Molini, A; La Barbera, P; Lanza, L G
2002-01-01
In this work we propose a mixed approach to deal with the modelling of rainfall events, based on the analysis of geometrical and statistical properties of rain intermittency in time, combined with the predictability power derived from the analysis of no-rain periods distribution and from the binary decomposition of the rain signal. Some recent hypotheses on the nature of rain intermittency are reviewed too. In particular, the internal intermittent structure of a high resolution pluviometric time series covering one decade and recorded at the tipping bucket station of the University of Genova is analysed, by separating the internal intermittency of rainfall events from the inter-arrival process through a simple geometrical filtering procedure. In this way it is possible to associate no-rain intervals with a probability distribution both in virtue of their position within the event and their percentage. From this analysis, an invariant probability distribution for the no-rain periods within the events is obtained at different aggregation levels and its satisfactory agreement with a typical extreme value distribution is shown.
Prediction of future asset prices
NASA Astrophysics Data System (ADS)
Seong, Ng Yew; Hin, Pooi Ah; Ching, Soo Huei
2014-12-01
This paper attempts to incorporate trading volumes as an additional predictor for predicting asset prices. Denoting r(t) as the vector consisting of the time-t values of the trading volume and price of a given asset, we model the time-(t+1) asset price to be dependent on the present and l-1 past values r(t), r(t-1), ..., r(t-l+1) via a conditional distribution which is derived from a (2l+1)-dimensional power-normal distribution. A prediction interval based on the 100(α/2)% and 100(1-α/2)% points of the conditional distribution is then obtained. By examining the average lengths of the prediction intervals found by using the composite indices of the Malaysian stock market for the period 2008 to 2013, we found that the value 2 appears to be a good choice for l. With the omission of the trading volume in the vector r(t), the corresponding prediction interval exhibits a slightly longer average length, showing that it might be desirable to keep trading volume as a predictor. From the above conditional distribution, the probability that the time-(t+1) asset price will be larger than the time-t asset price is next computed. When the probability differs from 0 (or 1) by less than 0.03, the observed time-(t+1) change in price tends to be negative (or positive). Thus the above probability has good potential to be used as a market indicator in technical analysis.
PFA toolbox: a MATLAB tool for Metabolic Flux Analysis.
Morales, Yeimy; Bosque, Gabriel; Vehí, Josep; Picó, Jesús; Llaneras, Francisco
2016-07-11
Metabolic Flux Analysis (MFA) is a methodology that has been successfully applied to estimate metabolic fluxes in living cells. However, traditional frameworks based on this approach have some limitations, particularly when measurements are scarce and imprecise. This is very common in industrial environments. The PFA Toolbox can be used to face those scenarios. Here we present the PFA (Possibilistic Flux Analysis) Toolbox for MATLAB, which simplifies the use of Interval and Possibilistic Metabolic Flux Analysis. The main features of the PFA Toolbox are the following: (a) It provides reliable MFA estimations in scenarios where only a few fluxes can be measured or those available are imprecise. (b) It provides tools to easily plot the results as interval estimates or flux distributions. (c) It is composed of simple functions that MATLAB users can apply in flexible ways. (d) It includes a Graphical User Interface (GUI), which provides a visual representation of the measurements and their uncertainty. (e) It can use stoichiometric models in COBRA format. In addition, the PFA Toolbox includes a User's Guide with a thorough description of its functions and several examples. The PFA Toolbox for MATLAB is a freely available Toolbox that is able to perform Interval and Possibilistic MFA estimations.
Statistical regularities in the return intervals of volatility
NASA Astrophysics Data System (ADS)
Wang, F.; Weber, P.; Yamasaki, K.; Havlin, S.; Stanley, H. E.
2007-01-01
We discuss recent results concerning statistical regularities in the return intervals of volatility in financial markets. In particular, we show how the analysis of volatility return intervals, defined as the time between two volatilities larger than a given threshold, can help to get a better understanding of the behavior of financial time series. We find scaling in the distribution of return intervals for thresholds ranging over a factor of 25, from 0.6 to 15 standard deviations, and also for various time windows from one minute up to 390 min (an entire trading day). Moreover, these results are universal for different stocks, commodities, interest rates as well as currencies. We also analyze the memory in the return intervals which relates to the memory in the volatility and find two scaling regimes, ℓ<ℓ* with α1=0.64±0.02 and ℓ> ℓ* with α2=0.92±0.04; these exponent values are similar to results of Liu et al. for the volatility. As an application, we use the scaling and memory properties of the return intervals to suggest a possibly useful method for estimating risk.
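Extracting and rescaling the return intervals described here is straightforward; the following sketch (illustrative, not the authors' code) finds the waiting times between volatility values exceeding q standard deviations and normalizes them by their mean before any distributional comparison.

```python
# A minimal sketch: return intervals between successive volatility values
# exceeding a threshold of q standard deviations, scaled by their mean.
import numpy as np

def return_intervals(volatility, q=2.0):
    v = np.asarray(volatility, dtype=float)
    z = (v - v.mean()) / v.std()
    exceed = np.flatnonzero(z > q)       # indices where volatility exceeds q sigma
    tau = np.diff(exceed)                # waiting times between exceedances
    return tau / tau.mean()              # scaled intervals tau / <tau>
```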
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cumming, J.B.; Haustein, P.E.; Stoenner, R.W.
1986-03-01
Angular distributions are reported for ³⁷Ar and ¹²⁷Xe produced by the interaction of 8-GeV ²⁰Ne and 25-GeV ¹²C ions with Au. A shift from a forward to a sideward peaked distribution is observed for ³⁷Ar, similar to that known to occur for incident protons over the same energy interval. Analysis of these data and those for Z = 8 fragments indicates that reactions leading to heavy fragment emission become more peripheral as bombarding energies increase. A mechanistic analysis is presented which explores the ranges of applicability of several models and the reliability of their predictions for fragmentation reactions induced by both energetic heavy ions and protons.
A method for developing design diagrams for ceramic and glass materials using fatigue data
NASA Technical Reports Server (NTRS)
Heslin, T. M.; Magida, M. B.; Forrest, K. A.
1986-01-01
The service lifetime of glass and ceramic materials can be expressed as a plot of time-to-failure versus applied stress that is parametric in percent probability of failure. This type of plot is called a design diagram. Confidence interval estimates for such plots depend on the type of test used to generate the data, on assumptions made concerning the statistical distribution of the test results, and on the type of analysis used. This report outlines the development of design diagrams for glass and ceramic materials in engineering terms using static or dynamic fatigue tests, assuming either no particular statistical distribution of test results or a Weibull distribution, and using either median-value or homologous-ratio analysis of the test results.
Performance Analysis of the IEEE 802.11p Multichannel MAC Protocol in Vehicular Ad Hoc Networks
2017-01-01
Vehicular Ad Hoc Networks (VANETs) employ multiple channels to provide a variety of safety and non-safety applications, based on the IEEE 802.11p and IEEE 1609.4 protocols. The safety applications require timely and reliable transmissions, while the non-safety applications require efficiency and high throughput. In the IEEE 1609.4 protocol, the operating interval is divided into alternating Control Channel (CCH) and Service Channel (SCH) intervals of identical length. During the CCH interval, nodes transmit safety-related messages and control messages, and the Enhanced Distributed Channel Access (EDCA) mechanism is employed to allow four Access Categories (ACs) within a station, with different priorities according to their criticality for the vehicle's safety. During the SCH interval, the non-safety messages are transmitted. An analytical model is proposed in this paper to evaluate the performance, reliability and efficiency of the IEEE 802.11p and IEEE 1609.4 protocols. The proposed model improves on existing work by taking several aspects and the characteristics of multichannel switching into design consideration. Extensive performance evaluations based on analysis and simulation help to validate the accuracy of the proposed model and to analyze the capabilities and limitations of the IEEE 802.11p and IEEE 1609.4 protocols, and enhancement suggestions are given. PMID:29231882
Performance Analysis of the IEEE 802.11p Multichannel MAC Protocol in Vehicular Ad Hoc Networks.
Song, Caixia
2017-12-12
Vehicular Ad Hoc Networks (VANETs) employ multiple channels to provide a variety of safety and non-safety applications, based on the IEEE 802.11p and IEEE 1609.4 protocols. The safety applications require timely and reliable transmissions, while the non-safety applications require efficiency and high throughput. In the IEEE 1609.4 protocol, the operating interval is divided into alternating Control Channel (CCH) and Service Channel (SCH) intervals of identical length. During the CCH interval, nodes transmit safety-related messages and control messages, and the Enhanced Distributed Channel Access (EDCA) mechanism is employed to allow four Access Categories (ACs) within a station, with different priorities according to their criticality for the vehicle's safety. During the SCH interval, the non-safety messages are transmitted. An analytical model is proposed in this paper to evaluate the performance, reliability and efficiency of the IEEE 802.11p and IEEE 1609.4 protocols. The proposed model improves on existing work by taking several aspects and the characteristics of multichannel switching into design consideration. Extensive performance evaluations based on analysis and simulation help to validate the accuracy of the proposed model and to analyze the capabilities and limitations of the IEEE 802.11p and IEEE 1609.4 protocols, and enhancement suggestions are given.
Probability density function of non-reactive solute concentration in heterogeneous porous formations
Alberto Bellin; Daniele Tonina
2007-01-01
Available models of solute transport in heterogeneous formations lack in providing complete characterization of the predicted concentration. This is a serious drawback especially in risk analysis where confidence intervals and probability of exceeding threshold values are required. Our contribution to fill this gap of knowledge is a probability distribution model for...
Extended Poisson process modelling and analysis of grouped binary data.
Faddy, Malcolm J; Smith, David M
2012-05-01
A simple extension of the Poisson process results in binomially distributed counts of events in a time interval. A further extension generalises this to probability distributions under- or over-dispersed relative to the binomial distribution. Substantial levels of under-dispersion are possible with this modelling, but only modest levels of over-dispersion - up to Poisson-like variation. Although simple analytical expressions for the moments of these probability distributions are not available, approximate expressions for the mean and variance are derived, and used to re-parameterise the models. The modelling is applied in the analysis of two published data sets, one showing under-dispersion and the other over-dispersion. More appropriate assessment of the precision of estimated parameters and reliable model checking diagnostics follow from this more general modelling of these data sets. © 2012 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Nonstandard Analysis and Shock Wave Jump Conditions in a One-Dimensional Compressible Gas
NASA Technical Reports Server (NTRS)
Baty, Roy S.; Farassat, Fereidoun; Hargreaves, John
2007-01-01
Nonstandard analysis is a relatively new area of mathematics in which infinitesimal numbers can be defined and manipulated rigorously like real numbers. This report presents a fairly comprehensive tutorial on nonstandard analysis for physicists and engineers with many examples applicable to generalized functions. To demonstrate the power of the subject, the problem of shock wave jump conditions is studied for a one-dimensional compressible gas. It is assumed that the shock thickness occurs on an infinitesimal interval and the jump functions in the thermodynamic and fluid dynamic parameters occur smoothly across this interval. To use conservations laws, smooth pre-distributions of the Dirac delta measure are applied whose supports are contained within the shock thickness. Furthermore, smooth pre-distributions of the Heaviside function are applied which vary from zero to one across the shock wave. It is shown that if the equations of motion are expressed in nonconservative form then the relationships between the jump functions for the flow parameters may be found unambiguously. The analysis yields the classical Rankine-Hugoniot jump conditions for an inviscid shock wave. Moreover, non-monotonic entropy jump conditions are obtained for both inviscid and viscous flows. The report shows that products of generalized functions may be defined consistently using nonstandard analysis; however, physically meaningful products of generalized functions must be determined from the physics of the problem and not the mathematical form of the governing equations.
Dugué, Audrey Emmanuelle; Pulido, Marina; Chabaud, Sylvie; Belin, Lisa; Gal, Jocelyn
2016-12-01
We describe how to estimate progression-free survival while dealing with interval-censored data in the setting of clinical trials in oncology. Three procedures with SAS and R statistical software are described: one allowing for a nonparametric maximum likelihood estimation of the survival curve using the EM-ICM (Expectation and Maximization-Iterative Convex Minorant) algorithm as described by Wellner and Zhan in 1997; a sensitivity analysis procedure in which the progression time is assigned (i) at the midpoint, (ii) at the upper limit (reflecting the standard analysis when the progression time is assigned at the first radiologic exam showing progressive disease), or (iii) at the lower limit of the censoring interval; and finally, two multiple imputations are described considering a uniform or the nonparametric maximum likelihood estimation (NPMLE) distribution. Clin Cancer Res; 22(23); 5629-35. ©2016 AACR. ©2016 American Association for Cancer Research.
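The single-imputation sensitivity analysis described above can be sketched in Python (the paper uses SAS and R); the use of the lifelines Kaplan-Meier fitter and the handling of right-censored subjects below are assumptions of this sketch, not the paper's procedures.

```python
# A minimal sketch: assign the progression time at the lower limit, midpoint, or
# upper limit of the censoring interval, then fit a standard Kaplan-Meier curve.
import numpy as np
from lifelines import KaplanMeierFitter

def km_sensitivity(lower, upper, event, rule="midpoint"):
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    event = np.asarray(event)
    if rule == "midpoint":
        t = (lower + upper) / 2.0
    elif rule == "upper":
        t = upper            # standard analysis: first scan showing progression
    else:
        t = lower
    t = np.where(event == 1, t, lower)   # censored subjects: last assessment time
    kmf = KaplanMeierFitter()
    kmf.fit(t, event_observed=event, label=rule)
    return kmf
```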
The interval testing procedure: A general framework for inference in functional data analysis.
Pini, Alessia; Vantini, Simone
2016-09-01
We introduce in this work the Interval Testing Procedure (ITP), a novel inferential technique for functional data. The procedure can be used to test different functional hypotheses, e.g., distributional equality between two or more functional populations, equality of mean function of a functional population to a reference. ITP involves three steps: (i) the representation of data on a (possibly high-dimensional) functional basis; (ii) the test of each possible set of consecutive basis coefficients; (iii) the computation of the adjusted p-values associated to each basis component, by means of a new strategy here proposed. We define a new type of error control, the interval-wise control of the family wise error rate, particularly suited for functional data. We show that ITP is provided with such a control. A simulation study comparing ITP with other testing procedures is reported. ITP is then applied to the analysis of hemodynamical features involved with cerebral aneurysm pathology. ITP is implemented in the fdatest R package. © 2016, The International Biometric Society.
Geosocial process and its regularities
NASA Astrophysics Data System (ADS)
Vikulina, Marina; Vikulin, Alexander; Dolgaya, Anna
2015-04-01
Natural disasters and social events (wars, revolutions, genocides, epidemics, fires, etc.) accompany each other throughout human civilization, reflecting the close relationship of these seemingly different phenomena. In order to study this relationship, the authors compiled and analyzed a list of 2,400 natural disasters and social phenomena, weighted by their magnitude, that occurred during the last 36 centuries of our history. Statistical analysis was performed separately for each aggregate (natural disasters and social phenomena) and for particular statistically representative types of events; there were 5 + 5 = 10 types. It is shown that the numbers of events in the list follow a logarithmic law: the bigger the event, the less likely it happens. For each type of event and each aggregate, the existence of periodicities with periods of 280 ± 60 years was established. Statistical analysis of the time intervals between adjacent events for both aggregates showed good agreement with the Weibull-Gnedenko distribution with shape parameter less than 1, which is equivalent to the conclusion that events are grouped at small time intervals. Modeling the statistics of time intervals with a Pareto distribution allowed us to identify an emergent property for all events in the aggregate. This result allowed the authors to conclude that natural disasters and social phenomena interact. The list of events compiled by the authors, and the properties of cyclicity, grouping and interaction first identified from it, form the basis for modeling an essentially unified geosocial process at a sufficiently high statistical level. Proof of interaction between "lifeless" Nature and Society is fundamental and provides a new approach to forecasting demographic crises that takes into account both natural disasters and social phenomena.
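The Weibull check for grouping of events can be sketched as follows; the fitting choices (location fixed at zero, a Kolmogorov-Smirnov check) are assumptions of this illustration, not the authors' exact procedure.

```python
# A minimal sketch: fit a Weibull distribution to the intervals between
# successive events; a shape parameter below 1 indicates clustering of events
# at short time intervals.
import numpy as np
from scipy.stats import weibull_min, kstest

def interval_clustering(event_times):
    intervals = np.diff(np.sort(np.asarray(event_times, dtype=float)))
    shape, loc, scale = weibull_min.fit(intervals, floc=0.0)   # fix location at 0
    ks = kstest(intervals, weibull_min(shape, loc, scale).cdf)
    return shape, scale, ks.pvalue    # shape < 1 suggests grouping in time
```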
Xu, Qinghai; Shi, Wanzhong; Xie, Yuhong; Wang, Zhenfeng; Li, Xusheng; Tong, Chuanxin
2017-01-01
The Qiongdongnan Basin is a strongly overpressured basin with a maximum pressure coefficient (the ratio of the actual pore pressure to the hydrostatic pressure at the same depth) over 2.27. However, a widespread low-overpressure interval exists between the strong-overpressure intervals in the Yanan Sag of the western basin. The mechanisms of this low-overpressure interval are not well understood. Three main approaches were employed to illustrate the distribution and evolution of the low-overpressure interval: pore pressure test data and well-log analysis, pressure prediction based on the relationship between the velocity deviation and the pressure coefficients, and numerical modeling. We also analyzed and explained the phenomenon of a low-overpressure interval that is both underlain and overlain by high-overpressure intervals. The low-overpressure interval between the strong-overpressure intervals can be identified and modelled from drilling data (P-wave sonic logs and mud weights) and from numerical modeling using the PetroMod software. Results show that the low-overpressure interval is mainly composed of sandstone sediments. The porosities of sandstone in the low-overpressure interval primarily range from 15% to 20%, and the permeabilities range from 10 to 100 md. Analysis of the geochemical parameters C1, iC4/nC4 and ΔR3, together with numerical modeling, shows that oil and gas migrated upward into the sandstone in the low-overpressure interval and then migrated along this sandstone into the Yacheng uplift. The low overpressure, both underlain and overlain by overpressure, resulted from fluids migrating along the sandstones of the low-overpressure interval into the Yacheng uplift since 1.9 Ma. The mudstone in the strong-overpressure interval is a good cap overlying the sandstone of the low-overpressure interval; therefore, up-dip pinchouts or isolated sandstones in the low-overpressure interval that lie along the oil and gas migration path are good plays for hydrocarbon exploration. PMID:28934237
Characteristic Lifelength of Coherent Structure in the Turbulent Boundary Layer
NASA Technical Reports Server (NTRS)
Palumbo, Daniel L.
2006-01-01
A characteristic lifelength is defined by which a Gaussian distribution is fit to data correlated over a 3 sensor array sampling streamwise sidewall pressure. The data were acquired at subsonic, transonic and supersonic speeds aboard a Tu-144. Lifelengths are estimated using the cross spectrum and are shown to compare favorably with Efimtsov's prediction of correlation space scales. Lifelength distributions are computed in the time/frequency domain using an interval correlation technique on the continuous wavelet transform of the original time data. The median values of the lifelength distributions are found to be very close to the frequency averaged result. The interval correlation technique is shown to allow the retrieval and inspection of the original time data of each event in the lifelength distribution, thus providing a means to locate and study the nature of the coherent structure in the turbulent boundary layer. The lifelength data can be converted to lifetimes using the convection velocity. The lifetime of events in the time/frequency domain are displayed in Lifetime Maps. The primary purpose of the paper is to validate these new analysis techniques so that they can be used with confidence to further characterize coherent structure in the turbulent boundary layer.
Contraction frequency after administration of misoprostol in obese versus nonobese women.
Stefely, Erin; Warshak, Carri R
2018-04-30
To examine impact of obesity on contraction frequency following misoprostol. Our hypothesis is that an increased volume of distribution reduces the bioavailability of misoprostol and may be an explanation for reduced efficacy. We examined the contraction frequency as a surrogate marker for bioavailability of misoprostol. We compared the rate of contractions at five time intervals in 313 subjects: prior to administration, and at four intervals post administration. We compared number of contractions in obese versus nonobese. As a planned secondary analysis, we then compared the rate of change in contractions per hour at four time intervals: a repeated measures analysis to compare the rate of change in contractions per hour over the 5-hour window controlling for race (White versus non-White) and parity (primiparous versus multiparous). General linear model and repeated measures analysis were conducted to report the parameter estimates, least square means, difference of least square means, and p values. Nonobese women presented with more contractions at baseline, 7 ± 5 versus 4 ± 5 c/h, p < .001. At all four time intervals after misoprostol administration obese women had fewer contractions per hour. The rate of change in contraction frequency after administration found obese women had a lower rate of increase in contraction frequency over the course of all four hours. We found a least squares means estimate (c/h): first hour (-0.87), p = .08, second hour (-2.43), p = .01, third hour (-1.80), p = .96, and fourth hour (-2.98), p = .007. Obese women have a lower rate of contractions per hour at baseline and at four intervals after misoprostol administration. In addition, the rate of change in the increase in contractions/hour also was reduced in obese women versus nonobese women. This suggests a lower bioavailability of misoprostol in women with a larger volume of distribution which would likely impact the efficacy of misoprostol in obese women when given the same dose of misoprostol. It is unknown if higher misoprostol dosing would increase efficacy of misoprostol in obese women.
Pigeons' Choices between Fixed-Interval and Random-Interval Schedules: Utility of Variability?
ERIC Educational Resources Information Center
Andrzejewski, Matthew E.; Cardinal, Claudia D.; Field, Douglas P.; Flannery, Barbara A.; Johnson, Michael; Bailey, Kathleen; Hineline, Philip N.
2005-01-01
Pigeons' choosing between fixed-interval and random-interval schedules of reinforcement was investigated in three experiments using a discrete-trial procedure. In all three experiments, the random-interval schedule was generated by sampling a probability distribution at an interval (and in multiples of the interval) equal to that of the…
Are EUR and GBP different words for the same currency?
NASA Astrophysics Data System (ADS)
Ivanova, K.; Ausloos, M.
2002-05-01
The British Pound (GBP) is not part of the Euro (EUR) monetary system. In order to find arguments on whether the GBP should join the EUR or not, correlations are calculated between GBP exchange rates with respect to various currencies: USD, JPY, CHF, DKK, the currencies forming the EUR, and a reconstructed EUR, for the time interval from 1993 till June 30, 2000. The distribution of fluctuations of the exchange rates is Gaussian in the central part of the distribution, but has fat tails for the large-size fluctuations. Within the Detrended Fluctuation Analysis (DFA) statistical method, the power-law behavior describing the root-mean-square deviation from a linear trend of the exchange rate fluctuations is obtained as a function of time for the time interval of interest. The time-dependent evolution of the exponent of the exchange rate fluctuations is given. Statistical considerations imply that the GBP is already behaving as a true EUR.
Entropy Methods For Univariate Distributions in Decision Analysis
NASA Astrophysics Data System (ADS)
Abbas, Ali E.
2003-03-01
One of the most important steps in decision analysis practice is the elicitation of the decision-maker's belief about an uncertainty of interest in the form of a representative probability distribution. However, the probability elicitation process is a task that involves many cognitive and motivational biases. Alternatively, the decision-maker may provide other information about the distribution of interest, such as its moments, and the maximum entropy method can be used to obtain a full distribution subject to the given moment constraints. In practice, however, decision makers cannot readily provide moments for the distribution, and are much more comfortable providing information about the fractiles of the distribution of interest or bounds on its cumulative probabilities. In this paper we present a graphical method to determine the maximum entropy distribution between upper and lower probability bounds and provide an interpretation for the shape of the maximum entropy distribution subject to fractile constraints (FMED). We also discuss the problems with the FMED, namely that it is discontinuous and flat over each fractile interval. We present a heuristic approximation to a distribution when, in addition to its fractiles, we also know it is continuous, and work through full examples to illustrate the approach.
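A minimal numerical sketch of the FMED idea discussed above may help: under fractile constraints alone, the maximum-entropy density is flat between consecutive fractiles, which is exactly why it is discontinuous and flat over each fractile interval. The fractile values used below are hypothetical.

import numpy as np

def fmed_density(fractiles, probs):
    """Piecewise-constant maximum-entropy density given fractile constraints.

    fractiles: increasing support points x_0 < ... < x_n (x_0, x_n bound the support)
    probs:     cumulative probabilities at those points, p_0 = 0, ..., p_n = 1
    Returns pdf(x) and cdf(x) callables.
    """
    x = np.asarray(fractiles, dtype=float)
    p = np.asarray(probs, dtype=float)
    heights = np.diff(p) / np.diff(x)            # flat density on each fractile interval

    def pdf(v):
        v = np.atleast_1d(np.asarray(v, dtype=float))
        idx = np.clip(np.searchsorted(x, v, side="right") - 1, 0, len(heights) - 1)
        out = heights[idx]
        out[(v < x[0]) | (v > x[-1])] = 0.0      # zero outside the stated support
        return out

    def cdf(v):
        return np.interp(v, x, p)

    return pdf, cdf

# Example: median 10, quartiles 6 and 18, support [0, 40] (hypothetical values).
pdf, cdf = fmed_density([0, 6, 10, 18, 40], [0.0, 0.25, 0.5, 0.75, 1.0])
print(pdf([3, 8, 15, 30]))   # constant within each fractile interval
print(cdf(10))               # 0.5 at the median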
Anderton, D L; Bean, L L
1985-05-01
Our analysis of changing birth interval distributions over the course of a fertility transition from natural to controlled fertility has examined three closely related propositions. First, within both natural fertility populations (identified at the aggregate level) and cohorts following the onset of fertility limitation, we hypothesized that substantial groups of women with long birth intervals across the individually specified childbearing careers could be identified. That is, even during periods when fertility behavior at the aggregate level is consistent with a natural fertility regime, birth intervals at all parities are inversely related to completed family size. Our tabular analysis enables us to conclude that birth spacing patterns are parity dependent; there is stability in CEB-parity specific mean and birth interval variance over the entire transition. Our evidence does not suggest that the early group of women limiting and spacing births was marked by infecundity. Secondly, the transition appears to be associated with an increasingly larger proportion of women shifting to the same spacing schedules associated with smaller families in earlier cohorts. Thirdly, variations in birth spacing by age of marriage indicate that changes in birth intervals over time are at least indirectly associated with age of marriage, indicating an additional compositional effect. The evidence we have presented on spacing behavior does not negate the argument that parity-dependent stopping behavior was a powerful factor in the fertility transition. Our data also provide evidence of attempts to truncate childbearing. Specifically, the smaller the completed family size, the longer the ultimate birth interval; and ultimate birth intervals increase across cohorts controlling CEB and parity. But spacing appears to represent an additional strategy of fertility limitation. Thus, it may be necessary to distinguish spacing and stopping behavior if one wishes to clarify behavioral patterns within a population (Edlefsen, 1981; Friedlander et al., 1980; Rodriguez and Hobcraft, 1980). Because fertility transition theories imply increased attempts to limit family sizes, it is important to examine differential behavior within subgroups achieving different family sizes. It is this level of analysis which we have attempted to achieve in utilizing parity-specific birth intervals controlled by children ever born.(ABSTRACT TRUNCATED AT 400 WORDS)
Schaefer, Carina; Cawello, Willi; Waitzinger, Josef; Elshoff, Jan-Peer
2015-04-01
Age- and sex-related differences in body composition could affect the pharmacokinetic parameters of administered drugs. The purpose of this post hoc analysis was to investigate the influences of age and sex on the pharmacokinetics of lacosamide. This post hoc analysis used pharmacokinetic data taken at steady state from (i) two phase I studies of oral lacosamide in healthy adult subjects (n = 66), and (ii) a population pharmacokinetic analysis carried out using data from two phase III studies of adjunctive oral lacosamide in adults (n = 565) with focal epilepsy taking 1-3 concomitant anti-epileptic drugs. Phase I data were stratified by age and sex as 'younger female' (aged 18-40 years), 'younger male' (aged 18-45 years) or 'elderly male/female' (aged ≥65 years), then normalized by body weight (lean body weight or fat-free mass), height or volume of distribution, and analysed using non-compartmental analysis. Population pharmacokinetic data were stratified by sex and analysed using a one-compartment model. Minor numerical differences in lacosamide exposure [the area under the concentration-time curve at steady state over the dosage interval (AUCτ,ss)] and in the maximum plasma concentration at steady state (Cmax,ss) were noted between subjects of different ages or sexes. The differences could be explained by a scaling factor between the drug applied and the plasma concentration. Following normalization by lean body weight or volume of distribution, an analysis of relative bioavailability resulted in 90 % confidence intervals of the ratios for AUCτ,ss and Cmax,ss for age (elderly to younger) or sex (male to female) falling within the range accepted for equivalence (80-125 %); without normalization, the 90 % confidence intervals were outside this range. Minor numerical differences in lacosamide plasma concentrations were noted in the comparison between male and female patients (aged 16-71 years) with focal epilepsy. Simulations using different body weights demonstrated a minimal effect of body weight on lacosamide plasma concentrations in adult patients with focal epilepsy. Age and sex had no relevant effects on the rates of absorption and elimination of lacosamide in this post hoc analysis, as the minor numerical differences could be explained by the main scaling factor for body weight or volume of distribution. The pharmacokinetic profile of lacosamide was unaffected by age or sex in adults with focal epilepsy.
Ecohydrology of dry regions of the United States: Precipitation pulses and intraseasonal drought
William K. Lauenroth; John B. Bradford
2009-01-01
Distribution of precipitation event sizes and interval lengths between events are important characteristics of arid and semi-arid climates. Understanding their importance will contribute to our ability to understand ecosystem dynamics in these regions. Our objective for this paper was to present a comprehensive analysis of the daily precipitation regimes of arid and...
Yu, Chanki; Lee, Sang Wook
2016-05-20
We present a reliable and accurate global optimization framework for estimating parameters of isotropic analytical bidirectional reflectance distribution function (BRDF) models. This approach is based on a branch and bound strategy with linear programming and interval analysis. Conventional local optimization is often very inefficient for BRDF estimation since its fitting quality is highly dependent on initial guesses due to the nonlinearity of analytical BRDF models. The algorithm presented in this paper employs L1-norm error minimization to estimate BRDF parameters in a globally optimal way and interval arithmetic to derive our feasibility problem and lower bounding function. Our method is developed for the Cook-Torrance model but with several normal distribution functions such as the Beckmann, Berry, and GGX functions. Experiments have been carried out to validate the presented method using 100 isotropic materials from the MERL BRDF database, and our experimental results demonstrate that the L1-norm minimization provides a more accurate and reliable solution than the L2-norm minimization.
Forward and backward uncertainty propagation: an oxidation ditch modelling example.
Abusam, A; Keesman, K J; van Straten, G
2003-01-01
In the field of water technology, forward uncertainty propagation is frequently used, whereas backward uncertainty propagation is rarely used. In forward uncertainty analysis, one moves from a given (or assumed) parameter subspace towards the corresponding distribution of the output or objective function. In backward uncertainty propagation, however, one moves in the reverse direction, from the distribution function towards the parameter subspace. Backward uncertainty propagation, which is a generalisation of parameter estimation error analysis, gives information essential for designing experimental or monitoring programmes, and for tighter bounding of parameter uncertainty intervals. The procedure of carrying out backward uncertainty propagation is illustrated in this technical note by a working example for an oxidation ditch wastewater treatment plant. The results obtained demonstrate that essential information can be obtained by carrying out a backward uncertainty propagation analysis.
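The forward direction described above is straightforward to illustrate with Monte Carlo sampling. The model function and parameter ranges below are hypothetical stand-ins, not the oxidation-ditch model of the note.

import numpy as np

rng = np.random.default_rng(1)

def model(k_la, mu_max):
    """Hypothetical stand-in for an oxidation-ditch output (e.g. an effluent quality index)."""
    return 12.0 / (k_la * mu_max + 0.5)

# Forward propagation: sample the assumed parameter subspace ...
k_la = rng.uniform(2.0, 4.0, size=20000)        # assumed oxygen-transfer coefficient range
mu_max = rng.normal(0.3, 0.05, size=20000)      # assumed growth-rate distribution
y = model(k_la, mu_max)

# ... and summarise the induced distribution of the output.
print("output mean  :", y.mean())
print("95% interval :", np.percentile(y, [2.5, 97.5]))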
Multi-Sample Cluster Analysis Using Akaike’s Information Criterion.
1982-12-20
Intervals. For more details on these test procedures refer to Gabriel [7], Krishnaiah ([10], [11]), Srivastava [16], and others. As noted in Consul...723. [4] Consul, P. C. (1969), "The Exact Distributions of Likelihood Criteria for Different Hypotheses," in P. R. Krishnaiah (Ed.), Multivariate...1178. [7] Gabriel, K. R. (1969), "A Comparison of Some Methods of Simultaneous Inference in MANOVA," in P. R. Krishnaiah (Ed.), Multivariate Analysis-II
Quasi-periodic recurrence of large earthquakes on the southern San Andreas fault
Scharer, Katherine M.; Biasi, Glenn P.; Weldon, Ray J.; Fumal, Tom E.
2010-01-01
It has been 153 yr since the last large earthquake on the southern San Andreas fault (California, United States), but the average interseismic interval is only ~100 yr. If the recurrence of large earthquakes is periodic, rather than random or clustered, the length of this period is notable and would generally increase the risk estimated in probabilistic seismic hazard analyses. Unfortunately, robust characterization of a distribution describing earthquake recurrence on a single fault is limited by the brevity of most earthquake records. Here we use statistical tests on a 3000 yr combined record of 29 ground-rupturing earthquakes from Wrightwood, California. We show that earthquake recurrence there is more regular than expected from a Poisson distribution and is not clustered, leading us to conclude that recurrence is quasi-periodic. The observation of unimodal time dependence is persistent across an observationally based sensitivity analysis that critically examines alternative interpretations of the geologic record. The results support formal forecast efforts that use renewal models to estimate probabilities of future earthquakes on the southern San Andreas fault. Only four intervals (15%) from the record are longer than the present open interval, highlighting the current hazard posed by this fault.
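A hedged sketch of the kind of statistical test described above: the coefficient of variation of the inter-event intervals is compared against surrogates drawn from a Poisson process, whose intervals are exponential with a coefficient of variation of 1. The intervals below are illustrative, not the Wrightwood record.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative inter-event intervals in years (not the paleoseismic data).
intervals = np.array([95., 110., 80., 130., 100., 90., 120., 105., 85., 115.])

cv_obs = intervals.std(ddof=1) / intervals.mean()

# Under Poisson occurrence, intervals are exponential (CV = 1); build surrogate catalogues.
n, n_sim = len(intervals), 20000
cv_sim = np.empty(n_sim)
for i in range(n_sim):
    s = rng.exponential(scale=intervals.mean(), size=n)
    cv_sim[i] = s.std(ddof=1) / s.mean()

p_value = np.mean(cv_sim <= cv_obs)   # chance of intervals this regular under Poisson occurrence
print(f"observed CV = {cv_obs:.2f}, one-sided p versus Poisson = {p_value:.4f}")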
Semiparametric regression analysis of interval-censored competing risks data.
Mao, Lu; Lin, Dan-Yu; Zeng, Donglin
2017-09-01
Interval-censored competing risks data arise when each study subject may experience an event or failure from one of several causes and the failure time is not observed directly but rather is known to lie in an interval between two examinations. We formulate the effects of possibly time-varying (external) covariates on the cumulative incidence or sub-distribution function of competing risks (i.e., the marginal probability of failure from a specific cause) through a broad class of semiparametric regression models that captures both proportional and non-proportional hazards structures for the sub-distribution. We allow each subject to have an arbitrary number of examinations and accommodate missing information on the cause of failure. We consider nonparametric maximum likelihood estimation and devise a fast and stable EM-type algorithm for its computation. We then establish the consistency, asymptotic normality, and semiparametric efficiency of the resulting estimators for the regression parameters by appealing to modern empirical process theory. In addition, we show through extensive simulation studies that the proposed methods perform well in realistic situations. Finally, we provide an application to a study on HIV-1 infection with different viral subtypes. © 2017, The International Biometric Society.
Ehrenfest model with large jumps in finance
NASA Astrophysics Data System (ADS)
Takahashi, Hisanao
2004-02-01
Changes (returns) in stock index prices and exchange rates for currencies are argued, based on empirical data, to obey a stable distribution with characteristic exponent α<2 for short sampling intervals and a Gaussian distribution for long sampling intervals. In order to explain this phenomenon, an Ehrenfest model with large jumps (ELJ) is introduced, which reproduces the empirical density function of price changes for both short and long sampling intervals.
Interval timing in genetically modified mice: a simple paradigm
Balci, F.; Papachristos, E. B.; Gallistel, C. R.; Brunner, D.; Gibson, J.; Shumyatsky, G. P.
2009-01-01
We describe a behavioral screen for the quantitative study of interval timing and interval memory in mice. Mice learn to switch from a short-latency feeding station to a long-latency station when the short latency has passed without a feeding. The psychometric function is the cumulative distribution of switch latencies. Its median measures timing accuracy and its interquartile interval measures timing precision. Next, using this behavioral paradigm, we have examined mice with a gene knockout of the receptor for gastrin-releasing peptide that show enhanced (i.e. prolonged) freezing in fear conditioning. We have tested the hypothesis that the mutants freeze longer because they are more uncertain than wild types about when to expect the electric shock. The knockouts however show normal accuracy and precision in timing, so we have rejected this alternative hypothesis. Last, we conduct the pharmacological validation of our behavioral screen using D-amphetamine and methamphetamine. We suggest including the analysis of interval timing and temporal memory in tests of genetically modified mice for learning and memory and argue that our paradigm allows this to be done simply and efficiently. PMID:17696995
Interval timing in genetically modified mice: a simple paradigm.
Balci, F; Papachristos, E B; Gallistel, C R; Brunner, D; Gibson, J; Shumyatsky, G P
2008-04-01
We describe a behavioral screen for the quantitative study of interval timing and interval memory in mice. Mice learn to switch from a short-latency feeding station to a long-latency station when the short latency has passed without a feeding. The psychometric function is the cumulative distribution of switch latencies. Its median measures timing accuracy and its interquartile interval measures timing precision. Next, using this behavioral paradigm, we have examined mice with a gene knockout of the receptor for gastrin-releasing peptide that show enhanced (i.e. prolonged) freezing in fear conditioning. We have tested the hypothesis that the mutants freeze longer because they are more uncertain than wild types about when to expect the electric shock. The knockouts however show normal accuracy and precision in timing, so we have rejected this alternative hypothesis. Last, we conduct the pharmacological validation of our behavioral screen using d-amphetamine and methamphetamine. We suggest including the analysis of interval timing and temporal memory in tests of genetically modified mice for learning and memory and argue that our paradigm allows this to be done simply and efficiently.
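The accuracy and precision measures defined above are simple order statistics of the switch-latency distribution. The following sketch, with simulated latencies, shows how they might be computed; the latency values and the 6 s probe point are hypothetical.

import numpy as np

rng = np.random.default_rng(2)

# Hypothetical switch latencies (s) for one animal in the switch paradigm.
latencies = rng.normal(loc=5.5, scale=0.9, size=200)

median = np.median(latencies)                      # timing accuracy
q25, q75 = np.percentile(latencies, [25, 75])
iqr = q75 - q25                                    # timing precision (interquartile interval)

# Empirical psychometric function: cumulative distribution of switch latencies.
x = np.sort(latencies)
cdf = np.arange(1, len(x) + 1) / len(x)
p_by_6s = np.interp(6.0, x, cdf)                   # proportion of switches made by 6 s

print(f"accuracy (median) = {median:.2f} s, precision (IQR) = {iqr:.2f} s")
print(f"P(switch by 6 s) = {p_by_6s:.2f}")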
NASA Technical Reports Server (NTRS)
Packard, Michael H.
2002-01-01
Probabilistic Structural Analysis (PSA) is now commonly used for predicting the distribution of time/cycles to failure of turbine blades and other engine components. These distributions are typically based on fatigue/fracture and creep failure modes of these components. Additionally, reliability analysis is used for taking test data related to particular failure modes and calculating failure rate distributions of electronic and electromechanical components. How can these individual failure time distributions of structural, electronic and electromechanical component failure modes be effectively combined into a top level model for overall system evaluation of component upgrades, changes in maintenance intervals, or line replaceable unit (LRU) redesign? This paper shows an example of how various probabilistic failure predictions for turbine engine components can be evaluated and combined to show their effect on overall engine performance. A generic model of a turbofan engine was modeled using various Probabilistic Risk Assessment (PRA) tools (Quantitative Risk Assessment Software (QRAS) etc.). Hypothetical PSA results for a number of structural components along with mitigation factors that would restrict the failure mode from propagating to a Loss of Mission (LOM) failure were used in the models. The output of this program includes an overall failure distribution for LOM of the system. The rank and contribution to the overall Mission Success (MS) is also given for each failure mode and each subsystem. This application methodology demonstrates the effectiveness of PRA for assessing the performance of large turbine engines. Additionally, the effects of system changes and upgrades, the application of different maintenance intervals, inclusion of new sensor detection of faults and other upgrades were evaluated in determining overall turbine engine reliability.
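As a rough illustration of combining component failure-time distributions into a top-level loss-of-mission estimate, the following Monte Carlo sketch uses hypothetical component distributions and mitigation factors; it is not the QRAS model described above.

import numpy as np

rng = np.random.default_rng(3)
n = 100000   # simulated missions

# Hypothetical component time-to-failure distributions (hours) and the probability that a
# failure, if it occurs, escalates to Loss of Mission (i.e. 1 minus the mitigation factor).
components = {
    "turbine blade (fatigue)": (rng.weibull(3.0, n) * 9000.0, 0.10),
    "disk (creep)":            (rng.weibull(2.5, n) * 15000.0, 0.30),
    "controller channel":      (rng.exponential(20000.0, n),   0.05),
}

mission_time = 5000.0
lom = np.zeros(n, dtype=bool)
contribution = {}
for name, (t_fail, p_escalate) in components.items():
    fails = (t_fail < mission_time) & (rng.random(n) < p_escalate)
    contribution[name] = fails.mean()
    lom |= fails

print("P(LOM per mission) ~", lom.mean())
for name, p in sorted(contribution.items(), key=lambda kv: -kv[1]):
    print(f"  {name:26s} {p:.4f}")   # rank of each failure mode's contribution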
Matthews, D R; Hindmarsh, P C; Pringle, P J; Brook, C G
1991-09-01
To develop a method for quantifying the distribution of concentrations present in hormone profiles, which would allow an observer-unbiased estimate of the time concentration attribute and to make an assessment of the baseline. The log-transformed concentrations (regardless of their temporal attribute) are sorted and allocated to class intervals. The number of observations in each interval are then determined and expressed as a percentage of the total number of samples drawn in the study period. The data may be displayed as a frequency distribution or as a cumulative distribution. Cumulative distributions may be plotted as sigmoidal ogives or can be transformed into discrete probabilities (linear probits), which are then linear, and amenable to regression analysis. Probability analysis gives estimates of the mean (the value below which 50% of the observed concentrations lie, which we term 'OC50'). 'Baseline' can be defined in terms of percentage occupancy--the 'Observed Concentration for 5%' (which we term 'OC5') which is the threshold at or below which the hormone concentrations are measured 5% of the time. We report the use of applying this method to 24-hour growth hormone (GH) profiles from 63 children, 26 adults and one giant. We demonstrate that GH effects (growth or gigantism) in these groups are more related to the baseline OC5 concentration than peak concentration (OC5 +/- 95% confidence limits: adults 0.05 +/- 0.04, peak-height-velocity pubertal 0.39 +/- 0.22, giant 8.9 mU/l). Pulsatile hormone profiles can be analysed using this method in order to assess baseline and other concentration domains.
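A compact sketch of the occupancy analysis described above, using a simulated hormone profile: concentrations are log-transformed, binned into class intervals, and OC50 and OC5 are read off as the concentrations at or below which the observations lie 50% and 5% of the time. All numbers are synthetic.

import numpy as np

rng = np.random.default_rng(4)

# Hypothetical 24-h GH profile: 144 samples (every 10 min), pulses on a low baseline.
baseline = rng.lognormal(mean=np.log(0.3), sigma=0.4, size=144)
pulses = rng.binomial(1, 0.1, 144) * rng.lognormal(np.log(15.0), 0.5, 144)
conc = baseline + pulses

log_c = np.log10(conc)
edges = np.linspace(log_c.min(), log_c.max(), 21)                     # class intervals
occupancy = np.histogram(log_c, bins=edges)[0] / len(log_c) * 100.0   # % of time per class
cumulative = np.cumsum(occupancy)
# 'occupancy' and 'cumulative' can be displayed as the frequency and cumulative distributions.

oc50 = np.percentile(conc, 50)   # concentration below which values lie 50% of the time
oc5 = np.percentile(conc, 5)     # 'baseline': occupied at or below 5% of the time
print(f"OC50 = {oc50:.2f} mU/l, OC5 = {oc5:.2f} mU/l")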
Joint distribution approaches to simultaneously quantifying benefit and risk.
Shaffer, Michele L; Watterberg, Kristi L
2006-10-12
The benefit-risk ratio has been proposed to measure the tradeoff between benefits and risks of two therapies for a single binary measure of efficacy and a single adverse event. The ratio is calculated from the difference in risk and difference in benefit between therapies. Small sample sizes or expected differences in benefit or risk can lead to no solution or problematic solutions for confidence intervals. Alternatively, using the joint distribution of benefit and risk, confidence regions for the differences in risk and benefit can be constructed in the benefit-risk plane. The information in the joint distribution can be summarized by choosing regions of interest in this plane. Using Bayesian methodology provides a very flexible framework for summarizing information in the joint distribution. Data from a National Institute of Child Health & Human Development trial of hydrocortisone illustrate the construction of confidence regions and regions of interest in the benefit-risk plane, where benefit is survival without supplemental oxygen at 36 weeks postmenstrual age, and risk is gastrointestinal perforation. For the subgroup of infants exposed to chorioamnionitis the confidence interval based on the benefit-risk ratio is wide (Benefit-risk ratio: 1.52; 90% confidence interval: 0.23 to 5.25). Choosing regions of appreciable risk and acceptable risk in the benefit-risk plane confirms the uncertainty seen in the wide confidence interval for the benefit-risk ratio--there is a greater than 50% chance of falling into the region of acceptable risk--while visually allowing the uncertainty in risk and benefit to be shown separately. Applying Bayesian methodology, an incremental net health benefit analysis shows there is a 72% chance of having a positive incremental net benefit if hydrocortisone is used in place of placebo if one is willing to incur at most one gastrointestinal perforation for each additional infant that survives without supplemental oxygen. If the benefit-risk ratio is presented, the joint distribution of benefit and risk also should be shown. These regions avoid the ambiguity associated with collapsing benefit and risk to a single dimension. Bayesian methods allow even greater flexibility in simultaneously quantifying benefit and risk.
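The joint-distribution idea above can be sketched with independent Beta posteriors for benefit and risk in each arm; the counts below are hypothetical, not the trial data, and the "acceptable risk" region is chosen purely for illustration.

import numpy as np

rng = np.random.default_rng(5)
n_draw = 50000

# Hypothetical counts: events / arm size (not the NICHD trial data).
benefit_trt, n_trt = 30, 50      # survival without supplemental oxygen, treatment arm
benefit_ctl, n_ctl = 22, 50      # control arm
risk_trt, risk_ctl = 4, 2        # gastrointestinal perforations

# Beta(1,1) priors give Beta posteriors for each proportion.
pb_t = rng.beta(1 + benefit_trt, 1 + n_trt - benefit_trt, n_draw)
pb_c = rng.beta(1 + benefit_ctl, 1 + n_ctl - benefit_ctl, n_draw)
pr_t = rng.beta(1 + risk_trt,    1 + n_trt - risk_trt,    n_draw)
pr_c = rng.beta(1 + risk_ctl,    1 + n_ctl - risk_ctl,    n_draw)

d_benefit = pb_t - pb_c          # joint posterior draws in the benefit-risk plane
d_risk = pr_t - pr_c

# 'Acceptable risk' region: at most one extra adverse event per extra success.
p_acceptable = np.mean(d_risk <= d_benefit)
print("90% CrI benefit difference:", np.percentile(d_benefit, [5, 95]))
print("90% CrI risk difference   :", np.percentile(d_risk, [5, 95]))
print("P(acceptable-risk region) =", round(p_acceptable, 2))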
Tavakol, Najmeh; Kheiri, Soleiman; Sedehi, Morteza
2016-01-01
The time to donating blood plays a major role in a regular donor becoming a continuous one. The aim of this study was to determine the factors affecting the interval between blood donations. In a longitudinal study in 2008, 864 samples of first-time donors in Shahrekord Blood Transfusion Center, capital city of Chaharmahal and Bakhtiari Province, Iran, were selected by systematic sampling and were followed up for five years. Among these samples, a subset of 424 donors who had at least two successful blood donations were chosen for this study, and the time intervals between their donations were measured as the response variable. Sex, body weight, age, marital status, education, place of residence and job were recorded as independent variables. Data analysis was performed based on a log-normal hazard model with gamma correlated frailty, in which the frailties are the sum of two independent components assumed to follow a gamma distribution. The analysis was done via a Bayesian approach using a Markov Chain Monte Carlo algorithm in OpenBUGS. Convergence was checked via the Gelman-Rubin criteria using the BOA program in R. Age, job and education had a significant effect on the chance of donating blood (P<0.05). The chances of blood donation were higher for older donors, clerical staff, workers, those with free jobs, students and educated donors, and in return the time intervals between their blood donations were shorter. Due to the significant effect of some variables in the log-normal correlated frailty model, it is necessary to plan educational and cultural programs to encourage people with longer inter-donation intervals to donate more frequently.
Interval Estimation of Seismic Hazard Parameters
NASA Astrophysics Data System (ADS)
Orlecka-Sikora, Beata; Lasocki, Stanislaw
2017-03-01
The paper considers the Poisson temporal occurrence of earthquakes and presents a way to integrate uncertainties of the estimates of the mean activity rate and the magnitude cumulative distribution function into the interval estimation of the most widely used seismic hazard functions, such as the exceedance probability and the mean return period. The proposed algorithm can be used either when the Gutenberg-Richter model of magnitude distribution is accepted or when nonparametric estimation is in use. When the Gutenberg-Richter model of magnitude distribution is used, the interval estimation of its parameters is based on the asymptotic normality of the maximum likelihood estimator. When the nonparametric kernel estimation of magnitude distribution is used, we propose the iterated bias-corrected and accelerated method for interval estimation based on the smoothed bootstrap and second-order bootstrap samples. The changes resulting from the integrated approach to the interval estimation of the seismic hazard functions, with respect to the approach that neglects the uncertainty of the mean activity rate estimates, have been studied using Monte Carlo simulations and two real dataset examples. The results indicate that the uncertainty of the mean activity rate affects the interval estimates of the hazard functions significantly only when the product of the activity rate and the time period for which the hazard is estimated is no more than 5.0. When this product becomes greater than 5.0, the impact of the uncertainty of the cumulative distribution function of magnitude dominates the impact of the uncertainty of the mean activity rate in the aggregated uncertainty of the hazard functions, and the interval estimates with and without inclusion of the uncertainty of the mean activity rate converge. The presented algorithm is generic and can also be applied to capture the propagation of uncertainty of estimates that are parameters of a multiparameter function onto this function.
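A simplified sketch of the integrated interval estimation described above, for the Gutenberg-Richter case: uncertainty in both the mean activity rate and the magnitude-distribution parameter is propagated by sampling from the asymptotic normal distributions of their maximum likelihood estimators. The catalogue summary below is synthetic and the exponential (unbounded Gutenberg-Richter) form is an assumption of the sketch.

import numpy as np

rng = np.random.default_rng(6)

# Hypothetical catalogue summary: n events of magnitude >= m_min observed over T_obs years.
n_events, T_obs, m_min = 120, 40.0, 2.0
mags = m_min + rng.exponential(1.0 / 2.1, n_events)       # synthetic magnitudes

lam_hat = n_events / T_obs                                 # mean activity rate (ML)
beta_hat = 1.0 / (mags.mean() - m_min)                     # exponential G-R parameter (ML)

# Asymptotic normality of the ML estimators: simulate both sources of uncertainty.
n_sim, M, T_hz = 20000, 4.5, 1.0                           # hazard for M >= 4.5 within 1 year
lam_s = np.clip(rng.normal(lam_hat, np.sqrt(lam_hat / T_obs), n_sim), 1e-9, None)
beta_s = rng.normal(beta_hat, beta_hat / np.sqrt(n_events), n_sim)

tail = np.exp(-beta_s * (M - m_min))                       # P(magnitude >= M)
p_exc = 1.0 - np.exp(-lam_s * T_hz * tail)                 # exceedance probability
ret_per = 1.0 / (lam_s * tail)                             # mean return period

print("exceedance probability 95% interval:", np.percentile(p_exc, [2.5, 97.5]))
print("mean return period 95% interval    :", np.percentile(ret_per, [2.5, 97.5]))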
Modeling stream fish distributions using interval-censored detection times.
Ferreira, Mário; Filipe, Ana Filipa; Bardos, David C; Magalhães, Maria Filomena; Beja, Pedro
2016-08-01
Controlling for imperfect detection is important for developing species distribution models (SDMs). Occupancy-detection models based on the time needed to detect a species can be used to address this problem, but this is hindered when times to detection are not known precisely. Here, we extend the time-to-detection model to deal with detections recorded in time intervals and illustrate the method using a case study on stream fish distribution modeling. We collected electrofishing samples of six fish species across a Mediterranean watershed in Northeast Portugal. Based on a Bayesian hierarchical framework, we modeled the probability of water presence in stream channels, and the probability of species occupancy conditional on water presence, in relation to environmental and spatial variables. We also modeled time-to-first detection conditional on occupancy in relation to local factors, using modified interval-censored exponential survival models. Posterior distributions of occupancy probabilities derived from the models were used to produce species distribution maps. Simulations indicated that the modified time-to-detection model provided unbiased parameter estimates despite interval-censoring. There was a tendency for spatial variation in detection rates to be primarily influenced by depth and, to a lesser extent, stream width. Species occupancies were consistently affected by stream order, elevation, and annual precipitation. Bayesian P-values and AUCs indicated that all models had adequate fit and high discrimination ability, respectively. Mapping of predicted occupancy probabilities showed widespread distribution by most species, but uncertainty was generally higher in tributaries and upper reaches. The interval-censored time-to-detection model provides a practical solution to model occupancy-detection when detections are recorded in time intervals. This modeling framework is useful for developing SDMs while controlling for variation in detection rates, as it uses simple data that can be readily collected by field ecologists.
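A stripped-down sketch of the interval-censored exponential detection idea described above (single site, occupancy assumed), fitting a detection rate by maximum likelihood when first detections are recorded only as time intervals; the interval data below are invented.

import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical single-pass data at occupied sites: first detection known only to lie in a
# time interval [a, b] (minutes); b = inf marks sites with no detection by t_max.
t_max = 20.0
intervals = np.array([[0, 5], [5, 10], [0, 5], [10, 15], [15, 20],
                      [5, 10], [0, 5], [t_max, np.inf], [t_max, np.inf], [10, 15]])

def neg_loglik(lam):
    a, b = intervals[:, 0], intervals[:, 1]
    detected = np.isfinite(b)
    # Interval-censored contribution: P(a < T <= b) under an exponential detection time.
    ll = np.sum(np.log(np.exp(-lam * a[detected]) - np.exp(-lam * b[detected])))
    # Right-censored contribution: no detection by t_max.
    ll += np.sum(-lam * a[~detected])
    return -ll

fit = minimize_scalar(neg_loglik, bounds=(1e-4, 5.0), method="bounded")
lam_hat = fit.x
print(f"detection rate = {lam_hat:.3f} per minute; "
      f"P(detect within one 5-min interval) = {1 - np.exp(-5 * lam_hat):.2f}")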
Kistner, Emily O; Muller, Keith E
2004-09-01
Intraclass correlation and Cronbach's alpha are widely used to describe reliability of tests and measurements. Even with Gaussian data, exact distributions are known only for compound symmetric covariance (equal variances and equal correlations). Recently, large sample Gaussian approximations were derived for the distribution functions. New exact results allow calculating the exact distribution function and other properties of intraclass correlation and Cronbach's alpha, for Gaussian data with any covariance pattern, not just compound symmetry. Probabilities are computed in terms of the distribution function of a weighted sum of independent chi-square random variables. New F approximations for the distribution functions of intraclass correlation and Cronbach's alpha are much simpler and faster to compute than the exact forms. Assuming the covariance matrix is known, the approximations typically provide sufficient accuracy, even with as few as ten observations. Either the exact or approximate distributions may be used to create confidence intervals around an estimate of reliability. Monte Carlo simulations led to a number of conclusions. Correctly assuming that the covariance matrix is compound symmetric leads to accurate confidence intervals, as was expected from previously known results. However, assuming and estimating a general covariance matrix produces somewhat optimistically narrow confidence intervals with 10 observations. Increasing sample size to 100 gives essentially unbiased coverage. Incorrectly assuming compound symmetry leads to pessimistically large confidence intervals, with pessimism increasing with sample size. In contrast, incorrectly assuming general covariance introduces only a modest optimistic bias in small samples. Hence the new methods seem preferable for creating confidence intervals, except when compound symmetry definitely holds.
Buffered coscheduling for parallel programming and enhanced fault tolerance
Petrini, Fabrizio [Los Alamos, NM; Feng, Wu-chun [Los Alamos, NM
2006-01-31
A computer implemented method schedules processor jobs on a network of parallel machine processors or distributed system processors. Control information communications generated by each process performed by each processor during a defined time interval are accumulated in buffers, where adjacent time intervals are separated by strobe intervals for a global exchange of control information. A global exchange of the control information communications at the end of each defined time interval is performed during an intervening strobe interval so that each processor is informed by all of the other processors of the number of incoming jobs to be received by each processor in a subsequent time interval. The buffered coscheduling method of this invention also enhances the fault tolerance of a network of parallel machine processors or distributed system processors.
NASA Astrophysics Data System (ADS)
Li, Jian; Torres, Diego F.; Lin, Ting Ting; Grondin, Marie-Helene; Kerr, Matthew; Lemoine-Goumard, Marianne; de Oña Wilhelmi, Emma
2018-05-01
We present the results of the analysis of eight years of Fermi-LAT data of the pulsar/pulsar wind nebula complex PSR J0205+6449/3C 58. Using a contemporaneous ephemeris, we carried out a detailed analysis of PSR J0205+6449 both during its off-peak and on-peak phase intervals. 3C 58 is significantly detected during the off-peak phase interval. We show that the spectral energy distribution at high energies is the same disregarding the phases considered, and thus that this part of the spectrum is most likely dominated by the nebula radiation. We present results of theoretical models of the nebula and the magnetospheric emission that confirm this interpretation. Possible high-energy flares from 3C 58 were searched for, but none were unambiguously identified.
Guo, P; Huang, G H
2010-03-01
In this study, an interval-parameter semi-infinite fuzzy-chance-constrained mixed-integer linear programming (ISIFCIP) approach is developed for supporting long-term planning of waste-management systems under multiple uncertainties in the City of Regina, Canada. The method improves upon the existing interval-parameter semi-infinite programming (ISIP) and fuzzy-chance-constrained programming (FCCP) by incorporating uncertainties expressed as dual uncertainties of functional intervals and multiple uncertainties of distributions with fuzzy-interval admissible probability of violating constraint within a general optimization framework. The binary-variable solutions represent the decisions of waste-management-facility expansion, and the continuous ones are related to decisions of waste-flow allocation. The interval solutions can help decision-makers to obtain multiple decision alternatives, as well as provide bases for further analyses of tradeoffs between waste-management cost and system-failure risk. In the application to the City of Regina, Canada, two scenarios are considered. In Scenario 1, the City's waste-management practices would be based on the existing policy over the next 25 years. The total diversion rate for the residential waste would be approximately 14%. Scenario 2 is associated with a policy for waste minimization and diversion, where 35% diversion of residential waste should be achieved within 15 years, and 50% diversion over 25 years. In this scenario, not only landfill would be expanded, but also CF and MRF would be expanded. Through the scenario analyses, useful decision support for the City's solid-waste managers and decision-makers has been generated. Three special characteristics of the proposed method make it unique compared with other optimization techniques that deal with uncertainties. Firstly, it is useful for tackling multiple uncertainties expressed as intervals, functional intervals, probability distributions, fuzzy sets, and their combinations; secondly, it has capability in addressing the temporal variations of the functional intervals; thirdly, it can facilitate dynamic analysis for decisions of facility-expansion planning and waste-flow allocation within a multi-facility, multi-period and multi-option context. Copyright 2009 Elsevier Ltd. All rights reserved.
Biglands, John D; Ibraheem, Montasir; Magee, Derek R; Radjenovic, Aleksandra; Plein, Sven; Greenwood, John P
2018-05-01
This study sought to compare the diagnostic accuracy of visual and quantitative analyses of myocardial perfusion cardiovascular magnetic resonance against a reference standard of quantitative coronary angiography. Visual analysis of perfusion cardiovascular magnetic resonance studies for assessing myocardial perfusion has been shown to have high diagnostic accuracy for coronary artery disease. However, only a few small studies have assessed the diagnostic accuracy of quantitative myocardial perfusion. This retrospective study included 128 patients randomly selected from the CE-MARC (Clinical Evaluation of Magnetic Resonance Imaging in Coronary Heart Disease) study population such that the distribution of risk factors and disease status was proportionate to the full population. Visual analysis results of cardiovascular magnetic resonance perfusion images, by consensus of 2 expert readers, were taken from the original study reports. Quantitative myocardial blood flow estimates were obtained using Fermi-constrained deconvolution. The reference standard for myocardial ischemia was a quantitative coronary x-ray angiogram stenosis severity of ≥70% diameter in any coronary artery of >2 mm diameter, or ≥50% in the left main stem. Diagnostic performance was calculated using receiver-operating characteristic curve analysis. The area under the curve for visual analysis was 0.88 (95% confidence interval: 0.81 to 0.95) with a sensitivity of 81.0% (95% confidence interval: 69.1% to 92.8%) and specificity of 86.0% (95% confidence interval: 78.7% to 93.4%). For quantitative stress myocardial blood flow the area under the curve was 0.89 (95% confidence interval: 0.83 to 0.96) with a sensitivity of 87.5% (95% confidence interval: 77.3% to 97.7%) and specificity of 84.5% (95% confidence interval: 76.8% to 92.3%). There was no statistically significant difference between the diagnostic performance of quantitative and visual analyses (p = 0.72). Incorporating rest myocardial blood flow values to generate a myocardial perfusion reserve did not significantly increase the quantitative analysis area under the curve (p = 0.79). Quantitative perfusion has a high diagnostic accuracy for detecting coronary artery disease but is not superior to visual analysis. The incorporation of rest perfusion imaging does not improve diagnostic accuracy in quantitative perfusion analysis. Copyright © 2018 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
Fluctuations in Wikipedia access-rate and edit-event data
NASA Astrophysics Data System (ADS)
Kämpf, Mirko; Tismer, Sebastian; Kantelhardt, Jan W.; Muchnik, Lev
2012-12-01
Internet-based social networks often reflect extreme events in nature and society by drastic increases in user activity. We study and compare the dynamics of the two major complex processes necessary for information spread via the online encyclopedia ‘Wikipedia’, i.e., article editing (information upload) and article access (information viewing) based on article edit-event time series and (hourly) user access-rate time series for all articles. Daily and weekly activity patterns occur in addition to fluctuations and bursting activity. The bursts (i.e., significant increases in activity for an extended period of time) are characterized by a power-law distribution of durations of increases and decreases. For describing the recurrence and clustering of bursts we investigate the statistics of the return intervals between them. We find stretched exponential distributions of return intervals in access-rate time series, while edit-event time series yield simple exponential distributions. To characterize the fluctuation behavior we apply detrended fluctuation analysis (DFA), finding that most article access-rate time series are characterized by strong long-term correlations with fluctuation exponents α≈0.9. The results indicate significant differences in the dynamics of information upload and access and help in understanding the complex process of collecting, processing, validating, and distributing information in self-organized social networks.
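Detrended fluctuation analysis as used above is straightforward to sketch. The implementation below follows the standard DFA recipe with first-order detrending and is run on white noise (expected exponent near 0.5); it is not applied to Wikipedia access-rate data, for which the abstract reports exponents around 0.9.

import numpy as np

def dfa(x, scales, order=1):
    """Detrended fluctuation analysis; returns the fluctuation function F(s)."""
    y = np.cumsum(x - np.mean(x))                 # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        rms = []
        for seg in segs:
            coef = np.polyfit(t, seg, order)      # local polynomial trend
            rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(rms)))
    return np.array(F)

rng = np.random.default_rng(7)
x = rng.normal(size=20000)                        # white noise test series
scales = np.unique(np.logspace(1.0, 3.0, 15).astype(int))
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]   # fluctuation exponent
print(f"alpha = {alpha:.2f}  (about 0.5 for uncorrelated noise)")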
On Some Confidence Intervals for Estimating the Mean of a Skewed Population
ERIC Educational Resources Information Center
Shi, W.; Kibria, B. M. Golam
2007-01-01
A number of methods are available in the literature to measure confidence intervals. Here, confidence intervals for estimating the population mean of a skewed distribution are considered. This note proposes two alternative confidence intervals, namely, Median t and Mad t, which are simple adjustments to the Student's t confidence interval. In…
Stability of INFIT and OUTFIT Compared to Simulated Estimates in Applied Setting.
Hodge, Kari J; Morgan, Grant B
Residual-based fit statistics are commonly used as an indication of the extent to which item response data fit the Rasch model. Fit statistic estimates are influenced by sample size, and rule-of-thumb cutoffs may result in incorrect conclusions about the extent to which the model fits the data. Estimates obtained in this analysis were compared to 250 simulated data sets to examine the stability of the estimates. All INFIT estimates were within the rule-of-thumb range of 0.7 to 1.3. However, only 82% of the INFIT estimates fell within the 2.5th and 97.5th percentiles of the simulated items' INFIT distributions using this 95% confidence-like interval, an 18 percentage point difference in items classified as acceptable. Forty-eight percent of OUTFIT estimates fell within the 0.7 to 1.3 rule-of-thumb range, whereas 34% of OUTFIT estimates fell within the 2.5th and 97.5th percentiles of the simulated items' OUTFIT distributions, a 13 percentage point difference in items classified as acceptable. When using the rule-of-thumb ranges for fit estimates, the magnitude of misfit was smaller than with the 95% confidence interval of the simulated distribution. The findings indicate that the use of confidence intervals as critical values for fit statistics leads to different model-data fit conclusions than traditional rule-of-thumb critical values.
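The comparison above between rule-of-thumb cutoffs and simulation-based percentile intervals can be sketched generically; the observed and simulated fit values below are random placeholders rather than output from Rasch-model estimation.

import numpy as np

rng = np.random.default_rng(8)

# Hypothetical observed INFIT mean-square estimates for 20 items.
observed = rng.normal(1.0, 0.12, 20)

# Hypothetical item-specific simulated fit distributions (250 replications per item),
# standing in for fit statistics recomputed from data simulated under the fitted model.
simulated = rng.normal(1.0, 0.05, size=(20, 250))

rule_ok = (observed >= 0.7) & (observed <= 1.3)               # rule-of-thumb classification
lo, hi = np.percentile(simulated, [2.5, 97.5], axis=1)
sim_ok = (observed >= lo) & (observed <= hi)                  # 95% confidence-like interval

print(f"acceptable by 0.7-1.3 rule of thumb : {rule_ok.mean():.0%}")
print(f"acceptable by simulated 95% interval: {sim_ok.mean():.0%}")
print(f"items classified differently        : {(rule_ok != sim_ok).sum()}")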
Sublobar resection is equivalent to lobectomy for clinical stage 1A lung cancer in solid nodules.
Altorki, Nasser K; Yip, Rowena; Hanaoka, Takaomi; Bauer, Thomas; Aye, Ralph; Kohman, Leslie; Sheppard, Barry; Thurer, Richard; Andaz, Shahriyour; Smith, Michael; Mayfield, William; Grannis, Fred; Korst, Robert; Pass, Harvey; Straznicka, Michaela; Flores, Raja; Henschke, Claudia I
2014-02-01
A single randomized trial established lobectomy as the standard of care for the surgical treatment of early-stage non-small cell lung cancer. Recent advances in imaging/staging modalities and detection of smaller tumors have once again rekindled interest in sublobar resection for early-stage disease. The objective of this study was to compare lung cancer survival in patients with non-small cell lung cancer with a diameter of 30 mm or less with clinical stage 1 disease who underwent lobectomy or sublobar resection. We identified 347 patients diagnosed with lung cancer who underwent lobectomy (n = 294) or sublobar resection (n = 53) for non-small cell lung cancer manifesting as a solid nodule in the International Early Lung Cancer Action Program from 1993 to 2011. Differences in the distribution of the presurgical covariates between sublobar resection and lobectomy were assessed using unadjusted P values determined by logistic regression analysis. Propensity scoring was performed using the same covariates. Differences in the distribution of the same covariates between sublobar resection and lobectomy were assessed using adjusted P values determined by logistic regression analysis with adjustment for the propensity scores. Lung cancer-specific survival was determined by the Kaplan-Meier method. Cox survival regression analysis was used to compare sublobar resection with lobectomy, adjusted for the propensity scores, surgical, and pathology findings, when adjusted and stratified by propensity quintiles. Among 347 patients, 10-year Kaplan-Meier for 53 patients treated by sublobar resection compared with 294 patients treated by lobectomy was 85% (95% confidence interval, 80-91) versus 86% (confidence interval, 75-96) (P = .86). Cox survival analysis showed no significant difference between sublobar resection and lobectomy when adjusted for propensity scores or when using propensity quintiles (P = .62 and P = .79, respectively). For those with cancers 20 mm or less in diameter, the 10-year rates were 88% (95% confidence interval, 82-93) versus 84% (95% confidence interval, 73-96) (P = .45), and Cox survival analysis showed no significant difference between sublobar resection and lobectomy using either approach (P = .42 and P = .52, respectively). Sublobar resection and lobectomy have equivalent survival for patients with clinical stage IA non-small cell lung cancer in the context of computed tomography screening for lung cancer. Copyright © 2014 The American Association for Thoracic Surgery. Published by Mosby, Inc. All rights reserved.
Gengsheng Qin; Davis, Angela E; Jing, Bing-Yi
2011-06-01
For a continuous-scale diagnostic test, it is often of interest to find the range of the sensitivity of the test at the cut-off that yields a desired specificity. In this article, we first define a profile empirical likelihood ratio for the sensitivity of a continuous-scale diagnostic test and show that its limiting distribution is a scaled chi-square distribution. We then propose two new empirical likelihood-based confidence intervals for the sensitivity of the test at a fixed level of specificity by using the scaled chi-square distribution. Simulation studies are conducted to compare the finite sample performance of the newly proposed intervals with the existing intervals for the sensitivity in terms of coverage probability. A real example is used to illustrate the application of the recommended methods.
Ometto, Giovanni; Erlandsen, Mogens; Hunter, Andrew; Bek, Toke
2017-06-01
It has previously been shown that the intervals between screening examinations for diabetic retinopathy can be optimized by including individual risk factors for the development of the disease in the risk assessment. However, in some cases, the risk model calculating the screening interval may recommend a different interval than an experienced clinician. The purpose of this study was to evaluate the influence of factors unrelated to diabetic retinopathy and the distribution of lesions for discrepancies between decisions made by the clinician and the risk model. Therefore, fundus photographs from 90 screening examinations where the recommendations of the clinician and a risk model had been discrepant were evaluated. Forty features were defined to describe the type and location of the lesions, and classification and ranking techniques were used to assess whether the features could predict the discrepancy between the grader and the risk model. Suspicion of tumours, retinal degeneration and vascular diseases other than diabetic retinopathy could explain why the clinician recommended shorter examination intervals than the model. Additionally, the regional distribution of microaneurysms/dot haemorrhages was important for defining a photograph as belonging to the group where both the clinician and the risk model had recommended a short screening interval as opposed to the other decision alternatives. Features unrelated to diabetic retinopathy and the regional distribution of retinal lesions may affect the recommendation of the examination interval during screening for diabetic retinopathy. The development of automated computerized algorithms for extracting information about the type and location of retinal lesions could be expected to further optimize examination intervals during screening for diabetic retinopathy. © 2016 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
Modeling Fractal Structure of City-Size Distributions Using Correlation Functions
Chen, Yanguang
2011-01-01
Zipf's law is one of the most conspicuous empirical facts for cities; however, there is no convincing explanation for the scaling relation between rank and size and its scaling exponent. Using ideas from general fractals and scaling, I propose a dual competition hypothesis of city development to explain the value intervals and the special value, 1, of the power exponent. Zipf's law and Pareto's law can be mathematically transformed into one another, but represent different processes of urban evolution, respectively. Based on the Pareto distribution, a frequency correlation function can be constructed. By scaling analysis and the multifractal spectrum, the parameter interval of the Pareto exponent is derived as (0.5, 1]; based on the Zipf distribution, a size correlation function can be built, and it is opposite to the first one. By the second correlation function and the multifractal notion, the Pareto exponent interval is derived as [1, 2). Thus the process of urban evolution falls into two effects: one is the Pareto effect indicating city number increase (external complexity), and the other the Zipf effect indicating city size growth (internal complexity). Because of the struggle between the two effects, the scaling exponent varies from 0.5 to 2; but if the two effects reach equilibrium with each other, the scaling exponent approaches 1. A series of mathematical experiments on hierarchical correlation are employed to verify the models, and a conclusion can be drawn that if cities in a given region follow Zipf's law, the frequency and size correlations will follow the scaling law. This theory can be generalized to interpret the inverse power-law distributions in various fields of the physical and social sciences. PMID:21949753
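A small sketch of the rank-size relation discussed above, assuming synthetic Pareto-distributed city sizes rather than real census data: the Zipf exponent is fitted on log-log coordinates and converted to the corresponding Pareto exponent.

import numpy as np

rng = np.random.default_rng(9)

# Synthetic city sizes drawn from a Pareto distribution with exponent 1 (Zipf's ideal case).
sizes = np.sort((rng.pareto(a=1.0, size=500) + 1.0) * 1e4)[::-1]
ranks = np.arange(1, len(sizes) + 1)

# Zipf form: size ~ rank^(-q); fit q by least squares on log-log coordinates.
q = -np.polyfit(np.log(ranks), np.log(sizes), 1)[0]

# Pareto form: number of cities larger than x ~ x^(-a); the exponents are related by a = 1/q.
print(f"Zipf exponent q = {q:.2f}, implied Pareto exponent a = {1.0 / q:.2f}")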
NASA Astrophysics Data System (ADS)
Lü, Hui; Shangguan, Wen-Bin; Yu, Dejie
2017-09-01
Automotive brake systems are always subjected to various types of uncertainties and two types of random-fuzzy uncertainties may exist in the brakes. In this paper, a unified approach is proposed for squeal instability analysis of disc brakes with two types of random-fuzzy uncertainties. In the proposed approach, two uncertainty analysis models with mixed variables are introduced to model the random-fuzzy uncertainties. The first one is the random and fuzzy model, in which random variables and fuzzy variables exist simultaneously and independently. The second one is the fuzzy random model, in which uncertain parameters are all treated as random variables while their distribution parameters are expressed as fuzzy numbers. Firstly, the fuzziness is discretized by using α-cut technique and the two uncertainty analysis models are simplified into random-interval models. Afterwards, by temporarily neglecting interval uncertainties, the random-interval models are degraded into random models, in which the expectations, variances, reliability indexes and reliability probabilities of system stability functions are calculated. And then, by reconsidering the interval uncertainties, the bounds of the expectations, variances, reliability indexes and reliability probabilities are computed based on Taylor series expansion. Finally, by recomposing the analysis results at each α-cut level, the fuzzy reliability indexes and probabilities can be obtained, by which the brake squeal instability can be evaluated. The proposed approach gives a general framework to deal with both types of random-fuzzy uncertainties that may exist in the brakes and its effectiveness is demonstrated by numerical examples. It will be a valuable supplement to the systematic study of brake squeal considering uncertainty.
Robust Confidence Interval for a Ratio of Standard Deviations
ERIC Educational Resources Information Center
Bonett, Douglas G.
2006-01-01
Comparing variability of test scores across alternate forms, test conditions, or subpopulations is a fundamental problem in psychometrics. A confidence interval for a ratio of standard deviations is proposed that performs as well as the classic method with normal distributions and performs dramatically better with nonnormal distributions. A simple…
Overlap between treatment and control distributions as an effect size measure in experiments.
Hedges, Larry V; Olkin, Ingram
2016-03-01
The proportion π of treatment group observations that exceed the control group mean has been proposed as an effect size measure for experiments that randomly assign independent units into 2 groups. We give the exact distribution of a simple estimator of π based on the standardized mean difference and use it to study the small sample bias of this estimator. We also give the minimum variance unbiased estimator of π under 2 models, one in which the variance of the mean difference is known and one in which the variance is unknown. We show how to use the relation between the standardized mean difference and the overlap measure to compute confidence intervals for π and show that these results can be used to obtain unbiased estimators, large sample variances, and confidence intervals for 3 related effect size measures based on the overlap. Finally, we show how the effect size π can be used in a meta-analysis. (c) 2016 APA, all rights reserved.
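Under normality with equal variances, the proportion described above reduces to Φ of the standardized mean difference. The sketch below, on simulated data, computes this simple estimator and an approximate confidence interval obtained by transforming a large-sample interval for the standardized mean difference; it is not the exact or minimum variance unbiased procedure of the paper.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(10)
trt = rng.normal(0.5, 1.0, 40)          # hypothetical treatment group
ctl = rng.normal(0.0, 1.0, 40)          # hypothetical control group
n1, n2 = len(trt), len(ctl)

# Standardized mean difference (pooled SD) and the simple overlap-based estimator of pi.
sp = np.sqrt(((n1 - 1) * trt.var(ddof=1) + (n2 - 1) * ctl.var(ddof=1)) / (n1 + n2 - 2))
d = (trt.mean() - ctl.mean()) / sp
pi_hat = norm.cdf(d)                    # under normality, pi = Phi(delta)

# Large-sample interval for d, transformed through Phi to an interval for pi.
se_d = np.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
lo, hi = norm.cdf(d - 1.96 * se_d), norm.cdf(d + 1.96 * se_d)

direct = np.mean(trt > ctl.mean())      # raw proportion exceeding the control mean
print(f"pi_hat = {pi_hat:.2f} (direct {direct:.2f}), approx 95% CI [{lo:.2f}, {hi:.2f}]")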
Passive longitudes of solar cosmic rays in 19-24 solar cycles
NASA Astrophysics Data System (ADS)
Getselev, Igor; Podzolko, Mikhail; Shatov, Pavel; Tasenko, Sergey; Skorohodov, Ilya; Okhlopkov, Viktor
The distribution of solar proton event sources along the Carrington longitude in solar cycles 19-24 is considered. For this study an extensive database on ≈450 solar proton events has been constructed using various available sources and solar cosmic ray measurements; it includes the time of each event, the fluences of protons of various energies in it, and the coordinates of its source on the Sun. The analysis has shown a significant inhomogeneity of the distribution. In particular, a region of “passive longitudes” has been discovered, extensive over longitude (from ≈90-100° to 170°) and over lifetime (the whole period of observations). Of the 60 most powerful proton events during solar cycles 19-24, not more than 1 event originated from the interval of 100-170° Carrington longitude; of another 80 “medium” events, only 10 were injected from this interval. The summarized proton fluence of the events whose sources belong to the interval of 90-170° amounts to only 5%, and if the single “anomalous” powerful event is not taken into account, to only 1.2% of the total fluence for all the considered events. The existence of an extensive and stable interval of “passive” Carrington longitudes is a remarkable phenomenon in solar physics. It also confirms the physical relevance of the mean synodic period of the Sun’s rotation determined by R. C. Carrington.
NASA Astrophysics Data System (ADS)
Lagrosas, N.; Bautista, D. L. B.; Miranda, J. P.
2016-12-01
Aerosol optical properties and growth were measured during 2014 and 2016 New Year celebrations at Manila Observatory, Philippines. Measurements were done using a USB2000 spectrometer from 22:00 of 31 December 2013 to 03:00 of 01 January 2014 and from 18:00 of 31 December 2015 to 05:30 01 January 2016. A xenon lamp was used as a light source 150m from the spectrometer. Fireworks and firecrackers were the main sources of aerosols during these festivities. Data were collected every 60s and 10s for 2014 and 2016 respectively. The aerosol volume size distribution was derived using the parametric inversion method proposed by Kaijser (1983). The method is performed by selecting 8 wavelengths from 387.30nm to 600.00nm. The reference intensities were obtained when firework activities were considerably low and the air was assumed to be relatively clean. Using Mie theory and assuming that the volume size distribution is a linear combination of 33 bimodal lognormal distribution functions with geometric mean radii between 0.003um and 1.2um, a least-square minimization process was implemented between measured optical depths and computed optical depths. The 2016 New Year distribution showed mostly a unimodal size distribution (mean radius = 0.3um) from 23:00 to 05:30 (Fig. 1a). The mean Angstrom coefficient value during the same time interval was approximately 0.75. This could be attributed to a constant RH (100%) during this time interval. A bimodal distribution was observed when RH value was 94% from 18:30 to 21:30. The transition to a unimodal distribution was observed at 21:00 when the RH value changes from 94% to 100%. In contrast to the 2016 New Year celebration, the 2014 size distribution was bimodal from 23:30 to 02:30 (Fig 1b). The bimodal distribution is the result of firework activities before New Year. Aerosol growth was evident when the size distribution became unimodal after 02:30 (mean radius = 1.1um). The mean Angstrom coefficient, when the size distribution is unimodal, was around 0.5 and this could be attributed to increasing RH from 78% to 88% during this time interval. The two New Year celebrations showed different patterns of aerosols growth. Aerosols produced at high RH tend to be unimodal while aerosols produced at low RH tend to have a bimodal distribution. As RH increased, the bimodal distribution became unimodal.
Using recurrence plot for determinism analysis of EEG recordings in genetic absence epilepsy rats.
Ouyang, Gaoxiang; Li, Xiaoli; Dang, Chuangyin; Richards, Douglas A
2008-08-01
Understanding the transition of brain activity towards an absence seizure is a challenging task. In this paper, we use recurrence quantification analysis to indicate the deterministic dynamics of EEG series at the seizure-free, pre-seizure and seizure states in genetic absence epilepsy rats. The determinism measure, DET, based on recurrence plot, was applied to analyse these three EEG datasets, each dataset containing 300 single-channel EEG epochs of 5-s duration. Then, statistical analysis of the DET values in each dataset was carried out to determine whether their distributions over the three groups were significantly different. Furthermore, a surrogate technique was applied to calculate the significance level of determinism measures in EEG recordings. The mean (+/-SD) DET of EEG was 0.177+/-0.045 in pre-seizure intervals. The DET values of pre-seizure EEG data are significantly higher than those of seizure-free intervals, 0.123+/-0.023, (P<0.01), but lower than those of seizure intervals, 0.392+/-0.110, (P<0.01). Using surrogate data methods, the significance of determinism in EEG epochs was present in 25 of 300 (8.3%), 181 of 300 (60.3%) and 289 of 300 (96.3%) in seizure-free, pre-seizure and seizure intervals, respectively. Results provide some first indications that EEG epochs during pre-seizure intervals exhibit a higher degree of determinism than seizure-free EEG epochs, but lower than those in seizure EEG epochs in absence epilepsy. The proposed methods have the potential of detecting the transition between normal brain activity and the absence seizure state, thus opening up the possibility of intervention, whether electrical or pharmacological, to prevent the oncoming seizure.
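As a rough illustration of the determinism measure used here, the sketch below computes DET (the fraction of recurrence points that fall on diagonal lines of length at least l_min) from the recurrence plot of a time-delay-embedded scalar series. The embedding dimension, delay, threshold and minimum line length are illustrative defaults, not the settings used for the EEG epochs in the study.

```python
import numpy as np

def recurrence_det(x, dim=3, tau=2, eps=None, l_min=2):
    """Determinism (DET) of a scalar series from its recurrence plot:
    the fraction of recurrence points lying on diagonal lines of length
    >= l_min.  Embedding settings are illustrative defaults only."""
    x = np.asarray(x, float)
    n = len(x) - (dim - 1) * tau          # number of embedded points
    emb = np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])
    dist = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    if eps is None:
        eps = 0.1 * dist.max()            # heuristic recurrence threshold
    rp = (dist <= eps).astype(int)
    np.fill_diagonal(rp, 0)               # drop the trivial main diagonal
    total = rp.sum()
    on_lines = 0
    for k in range(1, n):                 # scan the upper diagonals
        run = 0
        for v in np.append(np.diagonal(rp, offset=k), 0):
            if v:
                run += 1
            else:
                if run >= l_min:
                    on_lines += run
                run = 0
    on_lines *= 2                         # recurrence matrix is symmetric
    return on_lines / total if total else 0.0

# Example on a noisy sine epoch standing in for a 5-s EEG segment
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
epoch = np.sin(2 * np.pi * t) + 0.3 * rng.standard_normal(t.size)
print(round(recurrence_det(epoch), 3))
```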
Interresponse Time Structures in Variable-Ratio and Variable-Interval Schedules
ERIC Educational Resources Information Center
Bowers, Matthew T.; Hill, Jade; Palya, William L.
2008-01-01
The interresponse-time structures of pigeon key pecking were examined under variable-ratio, variable-interval, and variable-interval plus linear feedback schedules. Whereas the variable-ratio and variable-interval plus linear feedback schedules generally resulted in a distinct group of short interresponse times and a broad distribution of longer…
NASA Astrophysics Data System (ADS)
Wu, Yunna; Chen, Kaifeng; Xu, Hu; Xu, Chuanbo; Zhang, Haobo; Yang, Meng
2017-12-01
There is insufficient research relating to offshore wind farm site selection in China. The current methods for site selection have some defects. First, information loss is caused by two aspects: the implicit assumption that the probability distribution on the interval number is uniform; and ignoring the value of decision makers' (DMs') common opinion on the criteria information evaluation. Secondly, the difference in DMs' utility function has failed to receive attention. An innovative method is proposed in this article to solve these drawbacks. First, a new form of interval number and its weighted operator are proposed to reflect the uncertainty and reduce information loss. Secondly, a new stochastic dominance degree is proposed to quantify the interval number with a probability distribution. Thirdly, a two-stage method integrating the weighted operator with stochastic dominance degree is proposed to evaluate the alternatives. Finally, a case from China proves the effectiveness of this method.
Comparing interval estimates for small sample ordinal CFA models
Natesan, Prathiba
2015-01-01
Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research. PMID:26579002
Comparing interval estimates for small sample ordinal CFA models.
Natesan, Prathiba
2015-01-01
Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davies, K.E.; Morrison, K.E.; Daniels, R.I.
1994-09-01
We previously reported that the 400 kb interval flanked by the polymorphic loci D5S435 and D5S557 contains blocks of a chromosome 5 specific repeat. This interval also defines the SMA candidate region by genetic analysis of recombinant families. A YAC contig of 2-3 Mb encompassing this area has been constructed and a 5.5 kb conserved fragment, isolated from a YAC end clone within the above interval, was used to obtain cDNAs from both fetal and adult brain libraries. We describe the identification of cDNAs with stretches of high DNA sequence homology to exons of β glucuronidase on human chromosome 7. The cDNAs map both to the candidate region and to an area of 5p using FISH and deletion hybrid analysis. Hybridization to bacteriophage and cosmid clones from the YACs localizes the β glucuronidase related sequences within the 400 kb region of the YAC contig. The cDNAs show a polymorphic pattern on hybridization to genomic BamH1 fragments in the size range of 10-250 kb. Further analysis using YAC fragmentation vectors is being used to determine how these β glucuronidase related cDNAs are distributed within 5q13. Dinucleotide repeats within the region are being investigated to determine linkage disequilibrium with the disease locus.
Li, Yongping; Huang, Guohe
2009-03-01
In this study, a dynamic analysis approach based on an inexact multistage integer programming (IMIP) model is developed for supporting municipal solid waste (MSW) management under uncertainty. Techniques of interval-parameter programming and multistage stochastic programming are incorporated within an integer-programming framework. The developed IMIP can deal with uncertainties expressed as probability distributions and interval numbers, and can reflect the dynamics in terms of decisions for waste-flow allocation and facility-capacity expansion over a multistage context. Moreover, the IMIP can be used for analyzing various policy scenarios that are associated with different levels of economic consequences. The developed method is applied to a case study of long-term waste-management planning. The results indicate that reasonable solutions have been generated for binary and continuous variables. They can help generate desired decisions of system-capacity expansion and waste-flow allocation with a minimized system cost and maximized system reliability.
Estimation of treatment effect in a subpopulation: An empirical Bayes approach.
Shen, Changyu; Li, Xiaochun; Jeong, Jaesik
2016-01-01
It is well recognized that the benefit of a medical intervention may not be distributed evenly in the target population due to patient heterogeneity, and conclusions based on conventional randomized clinical trials may not apply to every person. Given the increasing cost of randomized trials and difficulties in recruiting patients, there is a strong need to develop analytical approaches to estimate treatment effect in subpopulations. In particular, due to limited sample size for subpopulations and the need for multiple comparisons, standard analysis tends to yield wide confidence intervals of the treatment effect that are often noninformative. We propose an empirical Bayes approach to combine both information embedded in a target subpopulation and information from other subjects to construct confidence intervals of the treatment effect. The method is appealing in its simplicity and tangibility in characterizing the uncertainty about the true treatment effect. Simulation studies and a real data analysis are presented.
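The exact estimator of the paper is not reproduced here, but the idea of borrowing information across subjects can be illustrated with a standard normal-normal empirical Bayes shrinkage: each subgroup's estimated treatment effect is pulled toward the pooled effect, with weights set by the within-subgroup standard errors and a method-of-moments estimate of the between-subgroup variance. All names and numbers below are hypothetical.

```python
import numpy as np

def eb_subgroup_interval(theta_hat, se, z=1.96):
    """Empirical-Bayes (normal-normal) shrinkage of subgroup treatment
    effects toward the pooled effect, with approximate 95% intervals.
    Generic illustration only; the intervals ignore the uncertainty in
    the pooled mean and the between-subgroup variance."""
    theta_hat, se = np.asarray(theta_hat, float), np.asarray(se, float)
    w = 1.0 / se**2
    mu = np.sum(w * theta_hat) / np.sum(w)          # pooled overall effect
    # Method-of-moments estimate of the between-subgroup variance
    k = len(theta_hat)
    q = np.sum(w * (theta_hat - mu) ** 2)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    # Posterior mean and variance for each subgroup
    shrink = tau2 / (tau2 + se**2)
    post_mean = shrink * theta_hat + (1 - shrink) * mu
    post_sd = np.sqrt(shrink * se**2)
    ci = np.column_stack([post_mean - z * post_sd, post_mean + z * post_sd])
    return post_mean, ci

est, ci = eb_subgroup_interval([1.2, 0.1, 0.7, -0.4], [0.3, 0.3, 0.4, 0.5])
print(est.round(2))
print(ci.round(2))
```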
Klein, Mike E.; Zatorre, Robert J.
2015-01-01
In categorical perception (CP), continuous physical signals are mapped to discrete perceptual bins: mental categories not found in the physical world. CP has been demonstrated across multiple sensory modalities and, in audition, for certain over-learned speech and musical sounds. The neural basis of auditory CP, however, remains ambiguous, including its robustness in nonspeech processes and the relative roles of left/right hemispheres; primary/nonprimary cortices; and ventral/dorsal perceptual processing streams. Here, highly trained musicians listened to 2-tone musical intervals, which they perceive categorically while undergoing functional magnetic resonance imaging. Multivariate pattern analyses were performed after grouping sounds by interval quality (determined by frequency ratio between tones) or pitch height (perceived noncategorically, frequency ratios remain constant). Distributed activity patterns in spheres of voxels were used to determine sound sample identities. For intervals, significant decoding accuracy was observed in the right superior temporal and left intraparietal sulci, with smaller peaks observed homologously in contralateral hemispheres. For pitch height, no significant decoding accuracy was observed, consistent with the non-CP of this dimension. These results suggest that similar mechanisms are operative for nonspeech categories as for speech; espouse roles for 2 segregated processing streams; and support hierarchical processing models for CP. PMID:24488957
Model selection for identifying power-law scaling.
Ton, Robert; Daffertshofer, Andreas
2016-08-01
Long-range temporal and spatial correlations have been reported in a remarkable number of studies. In particular power-law scaling in neural activity raised considerable interest. We here provide a straightforward algorithm not only to quantify power-law scaling but to test it against alternatives using (Bayesian) model comparison. Our algorithm builds on the well-established detrended fluctuation analysis (DFA). After removing trends of a signal, we determine its mean squared fluctuations in consecutive intervals. In contrast to DFA we use the values per interval to approximate the distribution of these mean squared fluctuations. This allows for estimating the corresponding log-likelihood as a function of interval size without presuming the fluctuations to be normally distributed, as is the case in conventional DFA. We demonstrate the validity and robustness of our algorithm using a variety of simulated signals, ranging from scale-free fluctuations with known Hurst exponents, via more conventional dynamical systems resembling exponentially correlated fluctuations, to a toy model of neural mass activity. We also illustrate its use for encephalographic signals. We further discuss confounding factors like the finite signal size. Our model comparison provides a proper means to identify power-law scaling including the range over which it is present. Copyright © 2016 Elsevier Inc. All rights reserved.
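A minimal sketch of the DFA backbone the authors build on is given below: the signal is integrated, split into non-overlapping windows, linearly detrended per window, and the mean squared fluctuation of every window is returned so that its distribution across windows can be inspected (the Bayesian model comparison itself is not shown). Window sizes and the test signal are illustrative.

```python
import numpy as np

def dfa_fluctuations(x, scales):
    """Detrended fluctuation analysis.  For each window size in `scales`,
    returns the mean squared fluctuation of every non-overlapping window
    (not just their average), so their distribution can be examined."""
    y = np.cumsum(x - np.mean(x))             # integrated (profile) signal
    out = {}
    for n in scales:
        f2 = []
        for i in range(len(y) // n):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            coef = np.polyfit(t, seg, 1)       # linear detrending
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        out[n] = np.array(f2)
    return out

# Conventional DFA exponent: slope of log <F^2>^(1/2) versus log n
rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)                # white noise -> exponent ~0.5
scales = [16, 32, 64, 128, 256]
flucts = dfa_fluctuations(x, scales)
logF = [0.5 * np.log(np.mean(flucts[n])) for n in scales]
alpha = np.polyfit(np.log(scales), logF, 1)[0]
print(round(alpha, 2))
```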
ESTABLISHMENT OF A FIBRINOGEN REFERENCE INTERVAL IN ORNATE BOX TURTLES (TERRAPENE ORNATA ORNATA).
Parkinson, Lily; Olea-Popelka, Francisco; Klaphake, Eric; Dadone, Liza; Johnston, Matthew
2016-09-01
This study sought to establish a reference interval for fibrinogen in healthy ornate box turtles (Terrapene ornata ornata). A total of 48 turtles were enrolled, with 42 turtles deemed to be noninflammatory and thus fitting the inclusion criteria and utilized to estimate a fibrinogen reference interval. Turtles were excluded based upon physical examination and blood work abnormalities. A Shapiro-Wilk normality test indicated that the noninflammatory turtle fibrinogen values were normally distributed (Gaussian distribution), with an average of 108 mg/dl and a 95% confidence interval of the mean of 97.9-117 mg/dl. Those turtles excluded from the reference interval because of abnormalities affecting their health did not have significantly different fibrinogen values (P = 0.313). A reference interval for healthy ornate box turtles was calculated. Further investigation into the utility of fibrinogen measurement for clinical usage in ornate box turtles is warranted.
Lui, Kung-Jong; Chang, Kuang-Chao
2016-10-01
When the frequency of event occurrences follows a Poisson distribution, we develop procedures for testing equality of treatments and interval estimators for the ratio of mean frequencies between treatments under a three-treatment three-period crossover design. Using Monte Carlo simulations, we evaluate the performance of these test procedures and interval estimators in various situations. We note that all test procedures developed here can perform well with respect to Type I error even when the number of patients per group is moderate. We further note that the two weighted-least-squares (WLS) test procedures derived here are generally preferable to the other two commonly used test procedures in the contingency table analysis. We also demonstrate that both interval estimators based on the WLS method and interval estimators based on Mantel-Haenszel (MH) approach can perform well, and are essentially of equal precision with respect to the average length. We use a double-blind randomized three-treatment three-period crossover trial comparing salbutamol and salmeterol with a placebo with respect to the number of exacerbations of asthma to illustrate the use of these test procedures and estimators. © The Author(s) 2014.
NASA Astrophysics Data System (ADS)
Engelke, Julia; Esser, Klaus J. K.; Linnert, Christian; Mutterlose, Jörg; Wilmsen, Markus
2016-12-01
The benthic macroinvertebrates of the Lower Maastrichtian chalk of Saturn quarry at Kronsmoor (northern Germany) have been studied taxonomically based on more than 1,000 specimens. Two successive benthic macrofossil assemblages were recognised: the lower interval in the upper part of the Kronsmoor Formation (Belemnella obtusa Zone) is characterized by low abundances of macroinvertebrates while the upper interval in the uppermost Kronsmoor and lowermost Hemmoor formations (lower to middle Belemnella sumensis Zone) shows a high macroinvertebrate abundance (eight times more than in the B. obtusa Zone) and a conspicuous dominance of brachiopods. The palaeoecological analysis of these two assemblages indicates the presence of eight different guilds, of which epifaunal suspension feeders (fixo-sessile and libero-sessile guilds), comprising approximately half of the trophic nucleus of the lower interval, increased to a dominant 86% in the upper interval, including a considerable proportion of rhynchonelliform brachiopods. It is tempting to relate this shift from the lower to the upper interval to an increase in nutrient supply and/or a shallowing of the depositional environment but further data including geochemical proxies are needed to fully understand the macrofossil distribution patterns in the Lower Maastrichtian of Kronsmoor.
Virlogeux, Victor; Li, Ming; Tsang, Tim K; Feng, Luzhao; Fang, Vicky J; Jiang, Hui; Wu, Peng; Zheng, Jiandong; Lau, Eric H Y; Cao, Yu; Qin, Ying; Liao, Qiaohong; Yu, Hongjie; Cowling, Benjamin J
2015-10-15
A novel avian influenza virus, influenza A(H7N9), emerged in China in early 2013 and caused severe disease in humans, with infections occurring most frequently after recent exposure to live poultry. The distribution of A(H7N9) incubation periods is of interest to epidemiologists and public health officials, but estimation of the distribution is complicated by interval censoring of exposures. Imputation of the midpoint of intervals was used in some early studies, resulting in estimated mean incubation times of approximately 5 days. In this study, we estimated the incubation period distribution of human influenza A(H7N9) infections using exposure data available for 229 patients with laboratory-confirmed A(H7N9) infection from mainland China. A nonparametric model (Turnbull) and several parametric models accounting for the interval censoring in some exposures were fitted to the data. For the best-fitting parametric model (Weibull), the mean incubation period was 3.4 days (95% confidence interval: 3.0, 3.7) and the variance was 2.9 days; results were very similar for the nonparametric Turnbull estimate. Under the Weibull model, the 95th percentile of the incubation period distribution was 6.5 days (95% confidence interval: 5.9, 7.1). The midpoint approximation for interval-censored exposures led to overestimation of the mean incubation period. Public health observation of potentially exposed persons for 7 days after exposure would be appropriate. © The Author 2015. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
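The parametric fitting step can be sketched as an interval-censored maximum likelihood problem: each incubation time is known only to lie between a lower and an upper bound derived from the exposure window, and the likelihood contribution is the Weibull probability mass between those bounds. The sketch below, with toy data and illustrative starting values, shows this idea; it is not the authors' code and omits the Turnbull and other parametric fits.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def fit_weibull_interval_censored(lower, upper):
    """Fit a Weibull incubation-period distribution by maximum likelihood
    when each observation is only known to lie in [lower, upper] days.
    Exactly observed times can be passed with lower == upper (a tiny
    width is added for numerical stability).  Illustrative sketch only."""
    lower = np.asarray(lower, float)
    upper = np.asarray(upper, float)
    upper = np.where(upper <= lower, lower + 1e-6, upper)

    def neg_loglik(params):
        shape, scale = np.exp(params)          # keep parameters positive
        p = (weibull_min.cdf(upper, shape, scale=scale)
             - weibull_min.cdf(lower, shape, scale=scale))
        return -np.sum(np.log(np.clip(p, 1e-300, None)))

    res = minimize(neg_loglik, x0=np.log([1.5, 4.0]), method="Nelder-Mead")
    return np.exp(res.x)                       # (shape, scale)

# Toy data: incubation known only to within an exposure window
rng = np.random.default_rng(1)
true = weibull_min.rvs(2.2, scale=3.8, size=300, random_state=rng)
lo = np.maximum(0.0, true - rng.uniform(0, 2, true.size))
hi = true + rng.uniform(0, 2, true.size)
shape, scale = fit_weibull_interval_censored(lo, hi)
print(round(weibull_min.mean(shape, scale=scale), 2))   # mean incubation (days)
```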
Detection of Frauds and Other Non-technical Losses in Power Utilities using Smart Meters: A Review
NASA Astrophysics Data System (ADS)
Ahmad, Tanveer; Ul Hasan, Qadeer
2016-06-01
Analysis of losses in power distribution systems and techniques to mitigate them are two active areas of research, especially in energy-scarce countries like Pakistan, where reducing losses can increase the availability of power without installing new generation. Total energy losses comprise both technical losses (TL) and non-technical losses (NTL). Utility companies in developing countries incur major financial losses due to NTL, which also lead to a series of additional problems, such as damage to the network infrastructure and a reduction of network reliability. The purpose of this paper is to perform an introductory investigation of non-technical losses in power distribution systems. Additionally, an analysis of NTL using consumer energy consumption data and linear regression analysis has been carried out. The data focus on the low-voltage (LV) distribution network, which includes residential, commercial, agricultural and industrial consumers, using monthly kWh interval data acquired over a one-month period with smart meters. In this research, different prevention techniques are also discussed to prevent the illegal use of electricity in the electrical power distribution system.
Fixed-interval matching-to-sample: intermatching time and intermatching error runs
Nelson, Thomas D.
1978-01-01
Four pigeons were trained on a matching-to-sample task in which reinforcers followed either the first matching response (fixed interval) or the fifth matching response (tandem fixed-interval fixed-ratio) that occurred 80 seconds or longer after the last reinforcement. Relative frequency distributions of the matching-to-sample responses that concluded intermatching times and runs of mismatches (intermatching error runs) were computed for the final matching responses directly followed by grain access and also for the three matching responses immediately preceding the final match. Comparison of these two distributions showed that the fixed-interval schedule arranged for the preferential reinforcement of matches concluding relatively extended intermatching times and runs of mismatches. Differences in matching accuracy and rate during the fixed interval, compared to the tandem fixed-interval fixed-ratio, suggested that reinforcers following matches concluding various intermatching times and runs of mismatches influenced the rate and accuracy of the last few matches before grain access, but did not control rate and accuracy throughout the entire fixed-interval period. PMID:16812032
Improved confidence intervals when the sample is counted an integer times longer than the blank.
Potter, William Edward; Strzelczyk, Jadwiga Jodi
2011-05-01
Past computer solutions for confidence intervals in paired counting are extended to the case where the ratio of the sample count time to the blank count time is taken to be an integer, IRR. Previously, confidence intervals have been named Neyman-Pearson confidence intervals; more correctly they should have been named Neyman confidence intervals or simply confidence intervals. The technique utilized mimics a technique used by Pearson and Hartley to tabulate confidence intervals for the expected value of the discrete Poisson and Binomial distributions. The blank count and the contribution of the sample to the gross count are assumed to be Poisson distributed. The expected value of the blank count, in the sample count time, is assumed known. The net count, OC, is taken to be the gross count minus the product of IRR with the blank count. The probability density function (PDF) for the net count can be determined in a straightforward manner.
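The setup can be illustrated numerically: with the blank count and the sample contribution both Poisson distributed, the probability mass function of the net count OC = gross − IRR × blank follows by summing over the possible blank counts. The sketch below does this by brute-force truncation; the parameter values are arbitrary and the published confidence-interval construction itself is not reproduced.

```python
import numpy as np
from scipy.stats import poisson

def net_count_pmf(mu_blank, mu_sample, irr, n_max=200):
    """Probability mass of the net count OC = G - irr*B, where
    B ~ Poisson(mu_blank) is the blank count in the blank count time,
    G ~ Poisson(mu_sample + irr*mu_blank) is the gross count in the
    sample count time, and irr is the integer count-time ratio.
    Returns (values, probabilities), truncated at n_max counts."""
    g = np.arange(n_max + 1)
    b = np.arange(n_max + 1)
    pg = poisson.pmf(g, mu_sample + irr * mu_blank)
    pb = poisson.pmf(b, mu_blank)
    pmf = {}
    for gi, pgi in zip(g, pg):
        for bi, pbi in zip(b, pb):
            oc = gi - irr * bi
            pmf[oc] = pmf.get(oc, 0.0) + pgi * pbi
    vals = np.array(sorted(pmf))
    return vals, np.array([pmf[v] for v in vals])

vals, probs = net_count_pmf(mu_blank=3.0, mu_sample=5.0, irr=2)
print(round(np.sum(vals * probs), 2))   # mean net count ~= mu_sample
```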
Beach, Jeremy; Burstyn, Igor; Cherry, Nicola
2012-07-01
We previously described a method to identify the incidence of new-onset adult asthma (NOAA) in Alberta by industry and occupation, utilizing Workers' Compensation Board (WCB) and physician billing data. The aim of this study was to extend this method to data from British Columbia (BC) so as to compare the two provinces and to incorporate Bayesian methodology into estimates of risk. WCB claims for any reason 1995-2004 were linked to physician billing data. NOAA was defined as a billing for asthma (ICD-9 493) in the 12 months before a WCB claim without asthma in the previous 3 years. Incidence was calculated by occupation and industry. In a matched case-referent analysis, associations with exposures were examined using an asthma-specific job exposure matrix (JEM). Posterior distributions from the Alberta analysis and estimated misclassification parameters were used as priors in the Bayesian analysis of the BC data. Among 1 118 239 eligible WCB claims the incidence of NOAA was 1.4%. Sixteen occupations and 44 industries had a significantly increased risk; six industries had a decreased risk. The JEM identified wood dust [odds ratio (OR) 1.55, 95% confidence interval (CI) 1.08-2.24] and animal antigens (OR 1.66, 95% CI 1.17-2.36) as related to an increased risk of NOAA. Exposure to isocyanates was associated with decreased risk (OR 0.57, 95% CI 0.39-0.85). Bayesian analyses taking account of exposure misclassification and informative priors resulted in posterior distributions of ORs with lower boundary of 95% credible intervals >1.00 for almost all exposures. The distribution of NOAA in BC appeared somewhat similar to that in Alberta, except for isocyanates. Bayesian analyses allowed incorporation of prior evidence into risk estimates, permitting reconsideration of the apparently protective effect of isocyanate exposure.
Parsimonious nonstationary flood frequency analysis
NASA Astrophysics Data System (ADS)
Serago, Jake M.; Vogel, Richard M.
2018-02-01
There is now widespread awareness of the impact of anthropogenic influences on extreme floods (and droughts) and thus an increasing need for methods to account for such influences when estimating a frequency distribution. We introduce a parsimonious approach to nonstationary flood frequency analysis (NFFA) based on a bivariate regression equation which describes the relationship between annual maximum floods, x, and an exogenous variable which may explain the nonstationary behavior of x. The conditional mean, variance and skewness of both x and y = ln (x) are derived, and combined with numerous common probability distributions including the lognormal, generalized extreme value and log Pearson type III models, resulting in a very simple and general approach to NFFA. Our approach offers several advantages over existing approaches including: parsimony, ease of use, graphical display, prediction intervals, and opportunities for uncertainty analysis. We introduce nonstationary probability plots and document how such plots can be used to assess the improved goodness of fit associated with a NFFA.
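A stripped-down version of the regression idea, for the lognormal case only, is sketched below: the log of the annual maximum flood is regressed on an exogenous covariate, the residual standard deviation supplies the conditional variance, and a nonstationary quantile follows from the conditional normal distribution of y = ln(x). The variable names and the toy covariate are illustrative; the paper's treatment of skewed distributions, prediction intervals and probability plots is not reproduced.

```python
import numpy as np
from scipy.stats import norm

def nffa_lognormal_quantile(x, z, z_new, p=0.99):
    """Parsimonious nonstationary flood quantile under a lognormal model:
    regress y = ln(annual maximum flood) on an exogenous covariate z,
    hold the residual variance constant, and return the flood magnitude
    with non-exceedance probability p at covariate value z_new."""
    y = np.log(np.asarray(x, float))
    z = np.asarray(z, float)
    b, a = np.polyfit(z, y, 1)                 # y ~ a + b*z
    resid = y - (a + b * z)
    s = np.std(resid, ddof=2)                  # conditional std of y
    y_p = a + b * z_new + norm.ppf(p) * s      # conditional quantile of y
    return np.exp(y_p)

# Toy example: floods whose log-mean drifts upward with an urbanization index
rng = np.random.default_rng(2)
z = np.linspace(0, 1, 60)
x = np.exp(4.0 + 0.8 * z + 0.3 * rng.standard_normal(60))
print(round(nffa_lognormal_quantile(x, z, z_new=1.0, p=0.99), 1))
```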
NASA Astrophysics Data System (ADS)
Anikushina, T. A.; Naumov, A. V.
2013-12-01
This article demonstrates the principal advantages of analysing the long-term spectral evolution of single molecules (SM) for studying the microscopic nature of dynamic processes in low-temperature polymers. We performed a detailed analysis of the spectral trail of a single tetra-tert-butylterrylene (TBT) molecule in an amorphous polyisobutylene matrix, measured over 5 hours at T = 7 K. It has been shown that the slow temporal dynamics is in qualitative agreement with the standard model of two-level systems and the stochastic sudden-jump model. At the same time, the distributions of the first four moments (cumulants) of the spectra of the selected SM, measured at different time points, were found to be inconsistent with the prediction of the standard theory. This is considered evidence that, over the given time interval, the system is not ergodic.
NASA Astrophysics Data System (ADS)
Li, Jiaqiang; Choutko, Vitaly; Xiao, Liyi
2018-03-01
Based on the collection of error data from the Alpha Magnetic Spectrometer (AMS) Digital Signal Processors (DSP), on-orbit Single Event Upsets (SEUs) of the DSP program memory are analyzed. The daily error distribution and the time intervals between errors are calculated to evaluate the reliability of the system. The particle density distribution of the International Space Station (ISS) orbit is presented, and the effects from the South Atlantic Anomaly (SAA) and the geomagnetic poles are analyzed. The impact of solar events on the DSP program memory is assessed by combining data analysis and Monte Carlo (MC) simulation. From the analysis and simulation results, it is concluded that the area corresponding to the SAA is the main source of errors on the ISS orbit. Solar events can also cause errors in the DSP program memory, but the effect depends on the on-orbit particle density.
Data on the no-load performance analysis of a tomato postharvest storage system.
Ayomide, Orhewere B; Ajayi, Oluseyi O; Banjo, Solomon O; Ajayi, Adesola A
2017-08-01
In the present investigation, original and detailed empirical data on heat transfer in a tomato postharvest storage system are presented. No-load tests were performed over a period of 96 h. The heat distribution at different locations, namely the top, middle and bottom of the system, was acquired at 30 min intervals for the test period. The humidity inside the system was taken into consideration: no-load tests with and without the introduction of humidity were carried out, and data showing the effect of a rise in humidity level on temperature distribution were acquired. The temperatures at the external mechanical cooling components were also acquired and can be used for the performance analysis of the storage system.
Modelling volatility recurrence intervals in the Chinese commodity futures market
NASA Astrophysics Data System (ADS)
Zhou, Weijie; Wang, Zhengxin; Guo, Haiming
2016-09-01
The law of extreme event occurrence attracts much research. The volatility recurrence intervals of Chinese commodity futures market prices are studied: the results show that the probability distributions of the scaled volatility recurrence intervals have a uniform scaling curve for different thresholds q. So we can deduce the probability distribution of extreme events from normal events. The tail of a scaling curve can be well fitted by a Weibull form, which is significance-tested by KS measures. Both short-term and long-term memories are present in the recurrence intervals with different thresholds q, which denotes that the recurrence intervals can be predicted. In addition, similar to volatility, volatility recurrence intervals also have clustering features. Through Monte Carlo simulation, we artificially synthesise ARMA, GARCH-class sequences similar to the original data, and find out the reason behind the clustering. The larger the parameter d of the FIGARCH model, the stronger the clustering effect is. Finally, we use the Fractionally Integrated Autoregressive Conditional Duration model (FIACD) to analyse the recurrence interval characteristics. The results indicated that the FIACD model may provide a method to analyse volatility recurrence intervals.
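The basic computation behind this kind of analysis, extracting recurrence intervals of volatility above a threshold q and fitting a Weibull form to the scaled intervals, can be sketched as follows. A heavy-tailed i.i.d. series stands in for the futures volatility, the threshold is set at an illustrative quantile, and the KS check is naive (the parameters are estimated from the same data), so the numbers only demonstrate the machinery.

```python
import numpy as np
from scipy.stats import weibull_min, kstest

def recurrence_intervals(vol, q_quantile=0.95):
    """Recurrence intervals between volatilities exceeding threshold q."""
    vol = np.asarray(vol, float)
    q = np.quantile(vol, q_quantile)
    idx = np.flatnonzero(vol > q)
    return np.diff(idx)

# Toy volatility proxy: absolute values of a heavy-tailed i.i.d. series;
# real commodity-futures volatilities would replace this.
rng = np.random.default_rng(3)
vol = np.abs(rng.standard_t(df=4, size=100_000))
tau = recurrence_intervals(vol, 0.95)
scaled = tau / tau.mean()

# Fit a Weibull form to the scaled intervals; naive KS check only
shape, loc, scale = weibull_min.fit(scaled, floc=0)
stat, pval = kstest(scaled, "weibull_min", args=(shape, loc, scale))
print(round(shape, 2), round(pval, 3))
```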
Finn, Thomas M.
2014-01-01
The lower shaly member of the Cody Shale in the Bighorn Basin, Wyoming and Montana, is Coniacian to Santonian in age and is equivalent to the upper part of the Carlile Shale and basal part of the Niobrara Formation in the Powder River Basin to the east. The lower Cody ranges in thickness from 700 to 1,200 feet and underlies much of the central part of the basin. It is composed of gray to black shale, calcareous shale, bentonite, and minor amounts of siltstone and sandstone. Sixty-six samples of the lower Cody Shale, collected from well cuttings, were analyzed using Rock-Eval and total organic carbon analysis to determine the source rock potential. Total organic carbon content averages 2.28 weight percent for the Carlile equivalent interval and reaches a maximum of nearly 5 weight percent. The Niobrara equivalent interval averages about 1.5 weight percent and reaches a maximum of over 3 weight percent, indicating that both intervals are good to excellent source rocks. S2 values from pyrolysis analysis also indicate that both intervals have a good to excellent source rock potential. Plots of hydrogen index versus oxygen index, hydrogen index versus Tmax, and S2/S3 ratios indicate that the organic matter contains both Type II and Type III kerogen capable of generating oil and gas. Maps showing the distribution of kerogen types and organic richness for the lower shaly member of the Cody Shale show that it is more organic-rich and more oil-prone in the eastern and southeastern parts of the basin. Thermal maturity based on vitrinite reflectance (Ro), which ranges from 0.60 to 0.80 percent Ro around the margins of the basin and increases to greater than 2.0 percent Ro in the deepest part of the basin, indicates that the lower Cody is mature to overmature with respect to hydrocarbon generation.
Analysis of backward error recovery for concurrent processes with recovery blocks
NASA Technical Reports Server (NTRS)
Shin, K. G.; Lee, Y. H.
1982-01-01
Three different methods of implementing recovery blocks (RBs) are considered: the asynchronous, the synchronous, and the pseudo recovery point implementations. Pseudo recovery points are proposed so that unbounded rollback may be avoided while maintaining process autonomy. Probabilistic models for analyzing these three methods were developed under standard assumptions in computer performance analysis, i.e., exponential distributions for the related random variables. The interval between two successive recovery lines for asynchronous RBs, the mean loss in computation power for the synchronized method, and the additional overhead and rollback distance when PRPs are used were estimated.
Electromagnetic Cyclotron Waves in the Solar Wind: Wind Observation and Wave Dispersion Analysis
NASA Technical Reports Server (NTRS)
Jian, L. K.; Moya, P. S.; Vinas, A. F.; Stevens, M.
2016-01-01
Wind observed long-lasting electromagnetic cyclotron waves near the proton cyclotron frequency on 11 March 2005, in the descending part of a fast wind stream. Bi-Maxwellian velocity distributions are fitted for core protons, beam protons, and alpha-particles. Using the fitted plasma parameters, we conduct kinetic linear dispersion analysis and find that ion cyclotron and/or firehose instabilities grow in six of 10 wave intervals. After Doppler shift, some of the waves have frequency and polarization consistent with observation, and thus may correspond to the observed cyclotron waves.
Electromagnetic cyclotron waves in the solar wind: Wind observation and wave dispersion analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jian, L. K., E-mail: lan.jian@nasa.gov; Heliophysics Science Division, NASA Goddard Space Flight Center, Greenbelt, MD 20771; Moya, P. S.
2016-03-25
Wind observed long-lasting electromagnetic cyclotron waves near the proton cyclotron frequency on 11 March 2005, in the descending part of a fast wind stream. Bi-Maxwellian velocity distributions are fitted for core protons, beam protons, and α-particles. Using the fitted plasma parameters, we conduct kinetic linear dispersion analysis and find that ion cyclotron and/or firehose instabilities grow in six of 10 wave intervals. After Doppler shift, some of the waves have frequency and polarization consistent with observation, and thus may correspond to the observed cyclotron waves.
Bach, Alex; Busto, Isabel
2005-02-01
A database consisting of 35291 milking records from 83 cows was built over a period of 10 months with the objectives of studying the effect of teat cup attachment failures and milking interval regularity on milk production with an automated milking system (AMS). The database collected records of lactation number, days in milk (DIM), milk production, interval between milkings (for both the entire udder and individual quarters in case of a teat cup attachment failure) and average and peak milk flows for each milking. The weekly coefficient of variation (CV) of milking intervals was used as a measure of milking regularity. DIM, milking intervals, and CV of milking intervals were divided into four categories coinciding with the four quartiles of their respective distributions. The data were analysed by analysis of variance with cow as a random effect and lactation number, DIM, the occurrence of a milking failure, and the intervals between milkings or the weekly CV of milking intervals as fixed effects. The incidence of attachment failures was 7.6% of total milkings. Milk production by quarters affected by a milking failure following the failure was numerically greater owing to the longer interval between milkings. When accounting for the effect of milking intervals, milk production by affected quarters following a milking failure was 26% lower than with regular milkings. However, the decrease in milk production by quarters affected by milking failures was more severe as DIM increased. Average and peak milk flows by quarters affected by a milking failure were lower than when milkings occurred normally. However, milk production recovered its former level within seven milkings following a milking failure. Uneven frequency (weekly CV of milking intervals >27%) decreased daily milk yield, and affected multiparous more negatively than primiparous cows.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swift, T.E.; Marlow, R.E.; Wilhelm, M.H.
1981-11-01
This report describes part of the work done to fulfill a contract awarded to Gruy Federal, Inc., by the Department of Energy (DOE) on February 12, 1979. The work includes pressure-coring and associated logging and testing programs to provide data on in-situ oil saturation, porosity and permeability distribution, and other data needed for resource characterization of fields and reservoirs in which CO2 injection might have a high probability of success. This report details the second such project. Core porosities agreed well with computed log porosities. Core water saturation and computed log porosities agree fairly well from 3692 to 3712 feet, poorly from 3712 to 3820 feet and in a general way from 4035 to 4107 feet. Computer log analysis techniques incorporating the a, m, and n values obtained from Core Laboratories analysis did not improve the agreement of log versus core derived water saturations. However, both core and log analysis indicated the ninth zone had the highest residual hydrocarbon saturations, and production data confirmed the validity of oil saturation determinations. Residual oil saturations for the perforated and tested intervals were 259 STB/acre-ft for the interval from 4035 to 4055 feet, and 150 STB/acre-ft for the interval from 3692 to 3718 feet. Nine BOPD was produced from the interval 4035 to 4055 feet and no oil was produced from the interval 3692 to 3718 feet, qualitatively confirming the relative oil saturations as calculated. The low oil production in the zone from 4022 to 4055 and the lack of production from 3692 to 3718 feet indicated the zone to be at or near residual waterflood conditions as determined by log analysis. This project demonstrates the usefulness of integrating pressure core, log, and production data to realistically evaluate a reservoir for carbon dioxide flood.
NASA Astrophysics Data System (ADS)
Tiedeman, C. R.; Barrash, W.; Thrash, C. J.; Patterson, J.; Johnson, C. D.
2016-12-01
Hydraulic tomography was performed in a 100 m2 by 20 m thick volume of contaminated fractured mudstones at the former Naval Air Warfare Center (NAWC) in the Newark Basin, New Jersey, with the objective of estimating the detailed distribution of hydraulic conductivity (K). Characterizing the fine-scale K variability is important for designing effective remediation strategies in complex geologic settings such as fractured rock. In the tomography experiment, packers isolated two to six intervals in each of seven boreholes in the volume of investigation, and fiber-optic pressure transducers enabled collection of high-resolution drawdown observations. A hydraulic tomography dataset was obtained by conducting multiple aquifer tests in which a given isolated well interval was pumped and drawdown was monitored in all other intervals. The collective data from all tests display a wide range of behavior indicative of highly heterogeneous K within the tested volume, such as: drawdown curves for different intervals crossing one another on drawdown-time plots; unique drawdown curve shapes for certain intervals; and intervals with negligible drawdown adjacent to intervals with large drawdown. Tomographic inversion of data from 15 tests conducted in the first field season focused on estimating the K distribution at a scale of 1 m3 over approximately 25% of the investigated volume, where observation density was greatest. The estimated K field is consistent with prior geologic, geophysical, and hydraulic information, including: highly variable K within bedding-plane-parting fractures that are the primary flow and transport paths at NAWC, connected high-K features perpendicular to bedding, and a spatially heterogeneous distribution of low-K rock matrix and closed fractures. Subsequent tomographic testing was conducted in the second field season, with the region of high observation density expanded to cover a greater volume of the wellfield.
Govoni, V; Della Coletta, E; Cesnik, E; Casetta, I; Tugnoli, V; Granieri, E
2015-04-01
An ecological study in the resident population of the Health District (HD) of Ferrara, Italy, has been carried out to establish the distribution in space and time of the amyotrophic lateral sclerosis (ALS) incident cases according to the disease onset type and gender in the period 1964-2009. The hypothesis of a uniform distribution was assumed. The incident cases of spinal onset ALS and bulbar onset ALS were evenly distributed in space and time in both men and women. The spinal onset ALS incident cases distribution according to gender was significantly different from the expected in the extra-urban population (20 observed cases in men 95% Poisson confidence interval 12.22-30.89, expected cases in men 12.19; six observed cases in women 95% Poisson confidence interval 2.20-13.06, expected cases in women 13.81), whereas no difference was found in the urban population. The spinal onset ALS incidence was higher in men than in women in the extra-urban population (difference between the rates = 1.53, 95% CI associated with the difference 0.52-2.54), whereas no difference between sexes was found in the urban population. The uneven distribution according to gender of the spinal onset ALS incident cases only in the extra-urban population suggests the involvement of a gender related environmental risk factor associated with the extra-urban environment. Despite some limits of the spatial analysis in the study of rare diseases, the results appear consistent with the literature data. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Indication of multiscaling in the volatility return intervals of stock markets
NASA Astrophysics Data System (ADS)
Wang, Fengzhong; Yamasaki, Kazuko; Havlin, Shlomo; Stanley, H. Eugene
2008-01-01
The distribution of the return intervals τ between price volatilities above a threshold height q for financial records has been approximated by a scaling behavior. To explore how accurate the scaling is, and therefore to understand the underlying nonlinear mechanism, we investigate intraday data sets of the 500 stocks which constitute the Standard & Poor’s 500 index. We show that the cumulative distribution of return intervals has systematic deviations from scaling. We support this finding by studying the m-th moment μ_m ≡ ⟨(τ/⟨τ⟩)^m⟩^(1/m), which shows a certain trend with the mean interval ⟨τ⟩. We generate surrogate records using the Schreiber method, and find that their cumulative distributions almost collapse to a single curve and their moments are almost constant for most ranges of ⟨τ⟩. Those substantial differences suggest that nonlinear correlations in the original volatility sequence account for the deviations from a single scaling law. We also find that the original and surrogate records exhibit slight tendencies for short and long ⟨τ⟩, due to the discreteness and finite size effects of the records, respectively. To avoid those effects as far as possible when testing the multiscaling behavior, we investigate the moments in the range 10 < ⟨τ⟩ ≤ 100, and find that the exponent α from the power-law fit μ_m ~ ⟨τ⟩^α has a narrow distribution around a nonzero value α ≠ 0 which depends on m for the 500 stocks. The distribution of α for the surrogate records is very narrow and centered around α = 0. This suggests that the return interval distribution exhibits multiscaling behavior due to the nonlinear correlations in the original volatility.
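The moment-based scaling test described above can be sketched directly: for each threshold q one collects the return intervals τ, forms the scaled moment μ_m = ⟨(τ/⟨τ⟩)^m⟩^(1/m), and looks for a systematic trend of μ_m with ⟨τ⟩. The toy i.i.d. series below (no nonlinear correlations) should show essentially no trend; real intraday volatilities would replace it.

```python
import numpy as np

def return_intervals(vol, q):
    """Intervals between successive volatilities above threshold q."""
    idx = np.flatnonzero(np.asarray(vol) > q)
    return np.diff(idx)

def scaled_interval_moment(tau, m):
    """m-th scaled moment mu_m = <(tau/<tau>)^m>^(1/m) of return intervals."""
    tau = np.asarray(tau, float)
    return np.mean((tau / tau.mean()) ** m) ** (1.0 / m)

# For pure scaling, mu_m is independent of the mean interval <tau>;
# a systematic trend of mu_m with <tau> (i.e. with the threshold q)
# signals multiscaling.
rng = np.random.default_rng(4)
vol = np.abs(rng.standard_normal(200_000))
for q in np.quantile(vol, [0.90, 0.95, 0.98]):
    tau = return_intervals(vol, q)
    print(round(tau.mean(), 1), round(scaled_interval_moment(tau, 2), 3))
```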
1987-09-01
is practically the same as the one proposed by ARINC Research Corporation [4] for the construction of oil analysis decision tables. For each ... pooled over two different failure modes (auxiliary drive bearing and an oil pump). The plot seems to indicate two distinct groups of data and one of the ... decision would be to continue sampling at the same rate.
Effects of Distributed Practice on the Acquisition of Second Language English Syntax
ERIC Educational Resources Information Center
Bird, Steve
2010-01-01
A longitudinal study compared the effects of distributed and massed practice schedules on the learning of second language English syntax. Participants were taught distinctions in the tense and aspect systems of English at short and long practice intervals. They were then tested at short and long intervals. The results showed that distributed…
NASA Technical Reports Server (NTRS)
Palumbo, Dan
2008-01-01
The lifetimes of coherent structures are derived from data correlated over a 3-sensor array sampling streamwise sidewall pressure at high Reynolds number (>10^8). The data were acquired at subsonic, transonic and supersonic speeds aboard a Tupolev Tu-144. The lifetimes are computed from a variant of the correlation length termed the lifelength. Characteristic lifelengths are estimated by fitting a Gaussian distribution to the sensors' cross spectra and are shown to compare favorably with Efimtsov's prediction of correlation space scales. Lifelength distributions are computed in the time/frequency domain using an interval correlation technique on the continuous wavelet transform of the original time data. The median values of the lifelength distributions are found to be very close to the frequency averaged result. The interval correlation technique is shown to allow the retrieval and inspection of the original time data of each event in the lifelength distributions, thus providing a means to locate and study the nature of the coherent structure in the turbulent boundary layer. The lifelength data are converted to lifetimes using the convection velocity. The lifetimes of events in the time/frequency domain are displayed in Lifetime Maps. The primary purpose of the paper is to validate these new analysis techniques so that they can be used with confidence to further characterize the behavior of coherent structures in the turbulent boundary layer.
Casellas, J; Bach, R
2012-06-01
Lambing interval is a relevant reproductive indicator for sheep populations under continuous mating systems, although there is a shortage of selection programs accounting for this trait in the sheep industry. Both the historical assumption of small genetic background and its unorthodox distribution pattern have limited its implementation as a breeding objective. In this manuscript, statistical performances of 3 alternative parametrizations [i.e., symmetric Gaussian mixed linear (GML) model, skew-Gaussian mixed linear (SGML) model, and piecewise Weibull proportional hazard (PWPH) model] have been compared to elucidate the preferred methodology to handle lambing interval data. More specifically, flock-by-flock analyses were performed on 31,986 lambing interval records (257.3 ± 0.2 d) from 6 purebred Ripollesa flocks. Model performances were compared in terms of deviance information criterion (DIC) and Bayes factor (BF). For all flocks, PWPH models were clearly preferred; they generated a reduction of 1,900 or more DIC units and provided BF estimates larger than 100 (i.e., PWPH models against linear models). These differences were reduced when comparing PWPH models with different number of change points for the baseline hazard function. In 4 flocks, only 2 change points were required to minimize the DIC, whereas 4 and 6 change points were needed for the 2 remaining flocks. These differences demonstrated a remarkable degree of heterogeneity across sheep flocks that must be properly accounted for in genetic evaluation models to avoid statistical biases and suboptimal genetic trends. Within this context, all 6 Ripollesa flocks revealed substantial genetic background for lambing interval with heritabilities ranging between 0.13 and 0.19. This study provides the first evidence of the suitability of PWPH models for lambing interval analysis, clearly discarding previous parametrizations focused on mixed linear models.
Stability of Spatial Distributions of Stink Bugs, Boll Injury, and NDVI in Cotton.
Reay-Jones, Francis P F; Greene, Jeremy K; Bauer, Philip J
2016-10-01
A 3-yr study was conducted to determine the degree of aggregation of stink bugs and boll injury in cotton, Gossypium hirsutum L., and their spatial association with a multispectral vegetation index (normalized difference vegetation index [NDVI]). Using spatial analysis by distance indices (SADIE), stink bugs were less frequently aggregated (17% for adults and 4% for nymphs) than boll injury (36%). NDVI values were also significantly aggregated within fields in 19 of 48 analyses (40%), with the majority of significant indices occurring in July and August. Paired NDVI datasets from different sampling dates were frequently associated (86.5% for weekly intervals among datasets). Spatial distributions of both stink bugs and boll injury were less stable than for NDVI, with positive associations varying from 12.5 to 25% for adult stink bugs for weekly intervals, depending on species. Spatial distributions of boll injury from stink bug feeding were more stable than those of stink bugs, with 46% positive associations among paired datasets with weekly intervals. NDVI values were positively associated with boll injury from stink bug feeding in 11 out of 22 analyses, with no significant negative associations. This indicates that NDVI has potential as a component of site-specific management. Future work should continue to examine the value of remote sensing for insect management in cotton, with an aim to develop tools such as risk assessment maps that will help growers to reduce insecticide inputs. © The Authors 2016. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Dziak, John J.; Bray, Bethany C.; Zhang, Jieting; Zhang, Minqiang; Lanza, Stephanie T.
2016-01-01
Several approaches are available for estimating the relationship of latent class membership to distal outcomes in latent profile analysis (LPA). A three-step approach is commonly used, but has problems with estimation bias and confidence interval coverage. Proposed improvements include the correction method of Bolck, Croon, and Hagenaars (BCH; 2004), Vermunt’s (2010) maximum likelihood (ML) approach, and the inclusive three-step approach of Bray, Lanza, & Tan (2015). These methods have been studied in the related case of latent class analysis (LCA) with categorical indicators, but not as well studied for LPA with continuous indicators. We investigated the performance of these approaches in LPA with normally distributed indicators, under different conditions of distal outcome distribution, class measurement quality, relative latent class size, and strength of association between latent class and the distal outcome. The modified BCH implemented in Latent GOLD had excellent performance. The maximum likelihood and inclusive approaches were not robust to violations of distributional assumptions. These findings broadly agree with and extend the results presented by Bakk and Vermunt (2016) in the context of LCA with categorical indicators. PMID:28630602
Cooley, Richard L.
1993-01-01
A new method is developed to efficiently compute exact Scheffé-type confidence intervals for output (or other function of parameters) g(β) derived from a groundwater flow model. The method is general in that parameter uncertainty can be specified by any statistical distribution having a log probability density function (log pdf) that can be expanded in a Taylor series. However, for this study parameter uncertainty is specified by a statistical multivariate beta distribution that incorporates hydrogeologic information in the form of the investigator's best estimates of parameters and a grouping of random variables representing possible parameter values so that each group is defined by maximum and minimum bounds and an ordering according to increasing value. The new method forms the confidence intervals from maximum and minimum limits of g(β) on a contour of a linear combination of (1) the quadratic form for the parameters used by Cooley and Vecchia (1987) and (2) the log pdf for the multivariate beta distribution. Three example problems are used to compare characteristics of the confidence intervals for hydraulic head obtained using different weights for the linear combination. Different weights generally produced similar confidence intervals, whereas the method of Cooley and Vecchia (1987) often produced much larger confidence intervals.
Automatic image equalization and contrast enhancement using Gaussian mixture modeling.
Celik, Turgay; Tjahjadi, Tardi
2012-01-01
In this paper, we propose an adaptive image equalization algorithm that automatically enhances the contrast in an input image. The algorithm uses the Gaussian mixture model to model the image gray-level distribution, and the intersection points of the Gaussian components in the model are used to partition the dynamic range of the image into input gray-level intervals. The contrast equalized image is generated by transforming the pixels' gray levels in each input interval to the appropriate output gray-level interval according to the dominant Gaussian component and the cumulative distribution function of the input interval. To take account of the hypothesis that homogeneous regions in the image represent homogeneous silences (or set of Gaussian components) in the image histogram, the Gaussian components with small variances are weighted with smaller values than the Gaussian components with larger variances, and the gray-level distribution is also used to weight the components in the mapping of the input interval to the output interval. Experimental results show that the proposed algorithm produces better or comparable enhanced images than several state-of-the-art algorithms. Unlike the other algorithms, the proposed algorithm is free of parameter setting for a given dynamic range of the enhanced image and can be applied to a wide range of image types.
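A simplified sketch of the GMM-partition idea is given below: a Gaussian mixture is fitted to the gray-level distribution, the dynamic range is cut where the dominant (most responsible) component changes, and each input interval is stretched onto an output interval whose width is proportional to the pixel mass it holds. The component-variance weighting of the published algorithm is not reproduced, and the mixture size and synthetic image are arbitrary.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_partition_equalize(img, n_components=3, out_range=(0, 255)):
    """Contrast enhancement by partitioning the gray-level range at the
    points where the dominant GMM component changes, then mapping each
    input interval onto an output interval whose width is proportional
    to the pixel mass it contains.  Simplified illustration only."""
    x = img.reshape(-1, 1).astype(float)
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(x)

    # Boundaries where the most responsible component changes
    grid = np.linspace(img.min(), img.max(), 1024)
    dominant = gmm.predict(grid.reshape(-1, 1))
    cuts = grid[1:][np.diff(dominant) != 0]
    edges = np.concatenate([[float(img.min())], cuts, [img.max() + 1e-9]])

    # Output interval widths proportional to pixel counts per input interval
    counts, _ = np.histogram(img, bins=edges)
    widths = counts / counts.sum() * (out_range[1] - out_range[0])
    out_edges = out_range[0] + np.concatenate([[0.0], np.cumsum(widths)])

    # Piecewise-linear mapping of each input interval to its output interval
    out = np.zeros_like(img, dtype=float)
    for i in range(len(edges) - 1):
        mask = (img >= edges[i]) & (img < edges[i + 1])
        span = max(edges[i + 1] - edges[i], 1e-9)
        out[mask] = out_edges[i] + (img[mask] - edges[i]) / span * widths[i]
    return np.clip(out, *out_range).astype(np.uint8)

# Example on a synthetic low-contrast image
rng = np.random.default_rng(5)
img = np.clip(rng.normal(110, 12, (128, 128)), 0, 255).astype(np.uint8)
enh = gmm_partition_equalize(img)
print(img.min(), img.max(), enh.min(), enh.max())
```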
Recurrence time statistics for finite size intervals
NASA Astrophysics Data System (ADS)
Altmann, Eduardo G.; da Silva, Elton C.; Caldas, Iberê L.
2004-12-01
We investigate the statistics of recurrences to finite size intervals for chaotic dynamical systems. We find that the typical distribution presents an exponential decay for almost all recurrence times except for a few short times affected by a kind of memory effect. We interpret this effect as being related to the unstable periodic orbits inside the interval. Although it is restricted to a few short times it changes the whole distribution of recurrences. We show that for systems with strong mixing properties the exponential decay converges to the Poissonian statistics when the width of the interval goes to zero. However, we alert that special attention to the size of the interval is required in order to guarantee that the short time memory effect is negligible when one is interested in numerically or experimentally calculated Poincaré recurrence time statistics.
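The effect is easy to reproduce for a simple chaotic system. The sketch below collects Poincaré recurrence times of the fully chaotic logistic map into a small interval and compares the empirical tail with the exponential law suggested by the mean recurrence time; deviations at short times reflect the memory effect attributed to periodic orbits inside the interval. The interval, map and run length are arbitrary choices.

```python
import numpy as np

def recurrence_times(interval, n_steps=1_000_000, r=4.0, x0=0.2):
    """Poincare recurrence times of the logistic map x -> r*x*(1-x)
    into a finite interval [a, b]."""
    a, b = interval
    x, last, times = x0, None, []
    for t in range(n_steps):
        x = r * x * (1.0 - x)
        if a <= x <= b:
            if last is not None:
                times.append(t - last)
            last = t
    return np.array(times)

tau = recurrence_times((0.30, 0.32))
mean = tau.mean()
# Compare the empirical tail P(tau > t) with exp(-t/mean) at a few points;
# agreement is good for long times, while short times show the memory effect.
for t in [1, int(mean), int(3 * mean)]:
    print(t, round(np.mean(tau > t), 4), round(np.exp(-t / mean), 4))
```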
Loce, R P; Jodoin, R E
1990-09-10
Using the tools of Fourier analysis, a sampling requirement is derived that assures that sufficient information is contained within the samples of a distribution to calculate accurately geometric moments of that distribution. The derivation follows the standard textbook derivation of the Whittaker-Shannon sampling theorem, which is used for reconstruction, but further insight leads to a coarser minimum sampling interval for moment determination. The need for fewer samples to determine moments agrees with intuition since less information should be required to determine a characteristic of a distribution compared with that required to construct the distribution. A formula for calculation of the moments from these samples is also derived. A numerical analysis is performed to quantify the accuracy of the calculated first moment for practical nonideal sampling conditions. The theory is applied to a high speed laser beam position detector, which uses the normalized first moment to measure raster line positional accuracy in a laser printer. The effects of the laser irradiance profile, sampling aperture, number of samples acquired, quantization, and noise are taken into account.
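To make the moment-from-samples idea concrete, here is a minimal sketch of the normalized first moment (centroid) computed directly from uniformly spaced samples of a beam profile; the function and the synthetic profile are illustrative, not the detector's actual processing.

```python
# Minimal sketch: normalized first moment (centroid) from discrete, uniformly
# spaced samples of a 1-D irradiance distribution. Names are illustrative.
import numpy as np

def centroid(samples, dx=1.0, x0=0.0):
    """Normalized first moment of a sampled 1-D distribution.

    samples : irradiance values at uniformly spaced positions
    dx      : sampling interval
    x0      : position of the first sample
    """
    x = x0 + dx * np.arange(len(samples))
    return np.sum(x * samples) / np.sum(samples)

# Coarsely sampled Gaussian-like profile: the centroid is still recovered well,
# consistent with moments needing fewer samples than full reconstruction.
dx = 0.5
positions = dx * np.arange(40)
profile = np.exp(-0.5 * ((positions - 3.2) / 2.0) ** 2)
print(centroid(profile, dx=dx))   # close to 3.2
```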
Redshift data and statistical inference
NASA Technical Reports Server (NTRS)
Newman, William I.; Haynes, Martha P.; Terzian, Yervant
1994-01-01
Frequency histograms and the 'power spectrum analysis' (PSA) method, the latter developed by Yu & Peebles (1969), have been widely employed as techniques for establishing the existence of periodicities. We provide a formal analysis of these two classes of methods, including controlled numerical experiments, to better understand their proper use and application. In particular, we note that typical published applications of frequency histograms commonly employ far greater numbers of class intervals or bins than is advisable by statistical theory, sometimes giving rise to the appearance of spurious patterns. The PSA method generates a sequence of random numbers from observational data which, it is claimed, is exponentially distributed with unit mean and variance, essentially independent of the distribution of the original data. We show that the derived random process is nonstationary and produces a small but systematic bias in the usual estimate of the mean and variance. Although the derived variable may be reasonably described by an exponential distribution, the tail of the distribution is far removed from that of an exponential, thereby rendering statistical inference and confidence testing based on the tail of the distribution completely unreliable. Finally, we examine a number of astronomical examples wherein these methods have been used, giving rise to widespread acceptance of statistically unconfirmed conclusions.
Intertime jump statistics of state-dependent Poisson processes.
Daly, Edoardo; Porporato, Amilcare
2007-01-01
A method to obtain the probability distribution of the interarrival times of jump occurrences in systems driven by state-dependent Poisson noise is proposed. Such a method uses the survivor function obtained by a modified version of the master equation associated to the stochastic process under analysis. A model for the timing of human activities shows the capability of state-dependent Poisson noise to generate power-law distributions. The application of the method to a model for neuron dynamics and to a hydrological model accounting for land-atmosphere interaction elucidates the origin of characteristic recurrence intervals and possible persistence in state-dependent Poisson models.
Hyatt, M.W.; Hubert, W.A.
2001-01-01
We assessed relative weight (Wr) distributions among 291 samples of stock-to-quality-length brook trout Salvelinus fontinalis, brown trout Salmo trutta, rainbow trout Oncorhynchus mykiss, and cutthroat trout O. clarki from lentic and lotic habitats. Statistics describing Wr sample distributions varied slightly among species and habitat types. The average sample was leptokurtic and slightly skewed to the right with a standard deviation of about 10, but the shapes of Wr distributions varied widely among samples. Twenty-two percent of the samples had nonnormal distributions, suggesting the need to evaluate sample distributions before applying statistical tests to determine whether assumptions are met. In general, our findings indicate that samples of about 100 stock-to-quality-length fish are needed to obtain confidence interval widths of four Wr units around the mean. Power analysis revealed that samples of about 50 stock-to-quality-length fish are needed to detect a 2% change in mean Wr at a relatively high level of power (beta = 0.01, alpha = 0.05).
Yoganandan, Narayan; Arun, Mike W J; Pintar, Frank A; Banerjee, Anjishnu
2015-01-01
Derive lower leg injury risk functions using survival analysis and determine injury reference values (IRV) applicable to human mid-size male and small-size female anthropometries by conducting a meta-analysis of experimental data from different studies under axial impact loading to the foot-ankle-leg complex. Specimen-specific dynamic peak force, age, total body mass, and injury data were obtained from tests conducted by applying the external load to the dorsal surface of the foot of postmortem human subject (PMHS) foot-ankle-leg preparations. Calcaneus and/or tibia injuries, alone or in combination and with/without involvement of adjacent articular complexes, were included in the injury group. Injury and noninjury tests were included. Maximum axial loads recorded by a load cell attached to the proximal end of the preparation were used. Data were analyzed by treating force as the primary variable. Age was considered as the covariate. Data were censored based on the number of tests conducted on each specimen and whether it remained intact or sustained injury; that is, right, left, and interval censoring. The best fits from different distributions were based on the Akaike information criterion; mean and plus and minus 95% confidence intervals were obtained; and normalized confidence interval sizes (quality indices) were determined at 5, 10, 25, and 50% risk levels. The normalization was based on the mean curve. Using human-equivalent age as 45 years, data were normalized and risk curves were developed for the 50th and 5th percentile human size of the dummies. Out of the available 114 tests (76 fracture and 38 no injury) from 5 groups of experiments, survival analysis was carried out using 3 groups consisting of 62 tests (35 fracture and 27 no injury). Peak forces associated with 4 specific risk levels at 25, 45, and 65 years of age are given along with probability curves (mean and plus and minus 95% confidence intervals) for PMHS and normalized data applicable to male and female dummies. Quality indices increased (less tightness-of-fit) with decreasing age and risk level for all age groups and these data are given for all chosen risk levels. These PMHS-based probability distributions at different ages using information from different groups of researchers constituting the largest body of data can be used as human tolerances to lower leg injury from axial loading. Decreasing quality indices (increasing index value) at lower probabilities suggest the need for additional tests. The anthropometry-specific mid-size male and small-size female mean human risk curves along with plus and minus 95% confidence intervals from survival analysis and associated IRV data can be used as a first step in studies aimed at advancing occupant safety in automotive and other environments.
NASA Astrophysics Data System (ADS)
Klimenko, V. V.
2017-12-01
We obtain expressions for the probabilities of the normal-noise spikes with the Gaussian correlation function and for the probability density of the inter-spike intervals. As distinct from the delta-correlated noise, in which the intervals are distributed by the exponential law, the probability of the subsequent spike depends on the previous spike and the interval-distribution law deviates from the exponential one for a finite noise-correlation time (frequency-bandwidth restriction). This deviation is the most pronounced for a low detection threshold. Similarity of the behaviors of the distributions of the inter-discharge intervals in a thundercloud and the noise spikes for the varying repetition rate of the discharges/spikes, which is determined by the ratio of the detection threshold to the root-mean-square value of noise, is observed. The results of this work can be useful for the quantitative description of the statistical characteristics of the noise spikes and studying the role of fluctuations for the discharge emergence in a thundercloud.
NASA Astrophysics Data System (ADS)
Tice, Michael M.
2009-12-01
All mats are preserved in the shallowest-water interval of those rocks deposited below normal wave base and above storm wave base. This interval is bounded below by a transgressive lag formed during regional flooding and above by a small condensed section that marks a local relative sea-level maximum. Restriction of all mat morphotypes to the shallowest interval of the storm-active layer in the BRC ocean reinforces previous interpretations that these mats were constructed primarily by photosynthetic organisms. Morphotypes α and β dominate the lower half of this interval and grew during deposition of relatively coarse detrital carbonaceous grains, while morphotype γ dominates the upper half and grew during deposition of fine detrital carbonaceous grains. The observed mat distribution suggests that either light intensity or, more likely, small variations in ambient current energy acted as a first-order control on mat morphotype distribution. These results demonstrate significant environmental control on biological morphogenetic processes independent of influences from siliciclastic sedimentation.
NASA Technical Reports Server (NTRS)
Shantaram, S. Pai; Gyekenyesi, John P.
1989-01-01
The calculation of shape and scale parameters of the two-parameter Weibull distribution is described using least-squares analysis and maximum likelihood methods for volume- and surface-flaw-induced fracture in ceramics with complete and censored samples. Detailed procedures are given for evaluating 90 percent confidence intervals for maximum likelihood estimates of shape and scale parameters, the unbiased estimates of the shape parameters, and the Weibull mean values and corresponding standard deviations. Furthermore, the necessary steps are described for detecting outliers and for calculating the Kolmogorov-Smirnov and Anderson-Darling goodness-of-fit statistics and 90 percent confidence bands about the Weibull distribution. It also shows how to calculate the Batdorf flaw-density constants by using the Weibull distribution statistical parameters. The techniques described were verified with several example problems from the open literature and were coded in the Structural Ceramics Analysis and Reliability Evaluation (SCARE) design program.
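A minimal sketch of the core estimation step, not the SCARE code itself: a two-parameter Weibull maximum-likelihood fit (location fixed at zero) with a Kolmogorov-Smirnov check, run with SciPy on synthetic strength data. The 90 percent confidence intervals described above would additionally require likelihood-profiling or bootstrap machinery, omitted here for brevity.

```python
# Sketch: two-parameter Weibull MLE for fracture-strength data plus a KS check.
import numpy as np
from scipy import stats
from scipy.special import gamma

rng = np.random.default_rng(1)
strengths = stats.weibull_min.rvs(c=8.0, scale=350.0, size=30, random_state=rng)  # MPa, synthetic

shape, loc, scale = stats.weibull_min.fit(strengths, floc=0)   # shape = Weibull modulus m
ks_stat, ks_p = stats.kstest(strengths, 'weibull_min', args=(shape, loc, scale))

mean = scale * gamma(1.0 + 1.0 / shape)
std = scale * np.sqrt(gamma(1.0 + 2.0 / shape) - gamma(1.0 + 1.0 / shape) ** 2)
print(f"modulus m = {shape:.2f}, characteristic strength = {scale:.1f} MPa")
print(f"Weibull mean = {mean:.1f}, std = {std:.1f}, KS p-value = {ks_p:.3f}")
```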
NASA Astrophysics Data System (ADS)
Fei, S.; Xinong, X.
2017-12-01
The fifth organic-matter-rich interval (ORI 5) in the He-third Member of the Paleogene Hetaoyuan Formation is believed to be the main exploration target for shale oil in the Biyang Depression, eastern China. An important part of successfully exploring for and producing shale oil is to identify and predict organic-rich shale lithofacies with different reservoir capacities and rock geomechanical properties, which are related to organic matter content and mineral components. In this study, shale lithofacies are defined by core analysis data, well-logging and seismic data; the spatial-temporal distribution of the various lithologies is predicted qualitatively by seismic attribute technology and quantitatively by geostatistical inversion analysis; and the prediction results are confirmed by the logging data and geological background. ORI 5 is present in the lacustrine expansion system tract and can be further divided into four parasequence sets based on the analysis of conventional logs, TOC content and wavelet transform. Calcareous shale, dolomitic shale, argillaceous shale, silty shale and muddy siltstone are defined within ORI 5, and can be separated and predicted at regional scale by root mean square (RMS) amplitude analysis and wave impedance. The results indicate that in the early expansion system tract, dolomitic shale and calcareous shale widely developed in the study area, while argillaceous shale, silty shale, and muddy siltstone developed only on the periphery of the deep depression. With the lake level rising, argillaceous shale and calcareous shale are well developed, and argillaceous shale interbedded with silty shale or muddy siltstone developed in the deep or semi-deep lake. In the late expansion system tract, argillaceous shale is widely deposited in the deepest depression, and calcareous shale shows a band-like distribution in the east of the depression. Actual test results indicate that these methods are feasible for predicting the spatial distribution of shale lithofacies.
Cardiovascular response to acute stress in freely moving rats: time-frequency analysis.
Loncar-Turukalo, Tatjana; Bajic, Dragana; Japundzic-Zigon, Nina
2008-01-01
Spectral analysis of cardiovascular series is an important tool for assessing the features of the autonomic control of the cardiovascular system. In this experiment Wistar rats equipped with an intraarterial catheter for blood pressure (BP) recording were exposed to stress induced by blowing air. The problem of nonstationary data was overcome by applying the Smoothed Pseudo Wigner-Ville (SPWV) time-frequency distribution. Spectral analysis was done before stress, during stress, immediately after stress and later in recovery. The spectral indices were calculated for both systolic blood pressure (SBP) and pulse interval (PI) series. The time evolution of spectral indices showed perturbed sympathovagal balance.
Development of a national electronic interval cancer review for breast screening
NASA Astrophysics Data System (ADS)
Halling-Brown, M. D.; Patel, M. N.; Wallis, M. G.; Young, K. C.
2018-03-01
Review of interval cancers and prior screening mammograms is a key measure for monitoring screening performance. Radiological analysis of the imaging features in prior mammograms and retrospective classification is an important educational tool for readers to improve individual performance. The requirements of remote, collaborative image review sessions, such as those required to run a remote interval cancer review, are variable and demand a flexible and configurable software solution that is not currently available on commercial workstations. The wide range of requirements for both collection and remote review of interval cancers has precipitated the creation of extensible medical image viewers and accompanying systems. In order to allow remote viewing, an application has been designed to allow workstation-independent, PACS-less viewing and interaction with medical images in a remote, collaborative manner, providing centralised reporting and web-based feedback. A semi-automated process, which allows the centralisation of interval cancer cases, has been developed. This stand-alone, flexible image collection toolkit provides the extremely important function of bespoke, ad-hoc image collection at sites where there is no dedicated hardware. Web interfaces have been created which allow a national or regional administrator to organise, coordinate and administer interval cancer review sessions and deploy invites to session members to participate. The same interface allows feedback to be analysed and distributed. The eICR provides a uniform process for classifying interval cancers across the NHSBSP, which facilitates rapid access to a robust 'external' review for patients and their relatives seeking answers about why their cancer was 'missed'.
Determinants of birth interval in a rural Mediterranean population (La Alpujarra, Spain).
Polo, V; Luna, F; Fuster, V
2000-10-01
The fertility pattern, in terms of birth intervals, in a rural population not practicing contraception belonging to La Alta Alpujarra Oriental (southeast Spain) is analyzed. During the first half of the 20th century, this population experienced a considerable degree of geographical and cultural isolation. Because of this population's high variability in fertility and therefore in birth intervals, the analysis was limited to a homogenous subsample of 154 families, each with at least five pregnancies. This limitation allowed us to analyze, among and within families, effects of a set of variables on the interbirth pattern, and to avoid possible problems of pseudoreplication. Information on birth date of the mother, age at marriage, children's birth date and death date, birth order, and frequency of miscarriages was collected. Our results indicate that interbirth intervals depend on an exponential effect of maternal age, especially significant after the age of 35. This effect is probably related to the biological degenerative processes of female fertility with age. A linear increase of birth intervals with birth order within families was found as well as a reduction of intervals among families experiencing an infant death. Our sample size was insufficient to detect a possible replacement behavior in the case of infant death. High natality and mortality rates, a secular decrease of natality rates, a log-normal birth interval, and family-size distributions suggest that La Alpujarra has been a natural fertility population following a demographic transition process.
Krishnamoorthy, K; Oral, Evrim
2017-12-01
Standardized likelihood ratio test (SLRT) for testing the equality of means of several log-normal distributions is proposed. The properties of the SLRT and an available modified likelihood ratio test (MLRT) and a generalized variable (GV) test are evaluated by Monte Carlo simulation and compared. Evaluation studies indicate that the SLRT is accurate even for small samples, whereas the MLRT could be quite liberal for some parameter values, and the GV test is in general conservative and less powerful than the SLRT. Furthermore, a closed-form approximate confidence interval for the common mean of several log-normal distributions is developed using the method of variance estimate recovery, and compared with the generalized confidence interval with respect to coverage probabilities and precision. Simulation studies indicate that the proposed confidence interval is accurate and better than the generalized confidence interval in terms of coverage probabilities. The methods are illustrated using two examples.
Modeling and simulation of count data.
Plan, E L
2014-08-13
Count data, or number of events per time interval, are discrete data arising from repeated time to event observations. Their mean count, or piecewise constant event rate, can be evaluated by discrete probability distributions from the Poisson model family. Clinical trial data characterization often involves population count analysis. This tutorial presents the basics and diagnostics of count modeling and simulation in the context of pharmacometrics. Consideration is given to overdispersion, underdispersion, autocorrelation, and inhomogeneity.
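A minimal sketch of the kind of count simulation discussed above: Poisson counts per time interval and a negative-binomial alternative that introduces overdispersion. The rate and dispersion values are arbitrary illustrations, not parameters from any clinical data set.

```python
# Sketch: simulate counts per interval under a Poisson model and under a
# negative-binomial model that adds overdispersion (variance > mean).
import numpy as np

rng = np.random.default_rng(42)
rate = 2.5            # assumed mean events per interval
n_intervals = 1000

poisson_counts = rng.poisson(rate, size=n_intervals)

# Negative binomial with dispersion parameter k (smaller k = more overdispersion)
k = 1.0
p = k / (k + rate)
nb_counts = rng.negative_binomial(k, p, size=n_intervals)

for name, c in [("Poisson", poisson_counts), ("Neg. binomial", nb_counts)]:
    print(f"{name}: mean = {c.mean():.2f}, variance = {c.var():.2f}")
```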
Analysis of neuronal cells of dissociated primary culture on high-density CMOS electrode array
Matsuda, Eiko; Mita, Takeshi; Hubert, Julien; Bakkum, Douglas; Frey, Urs; Hierlemann, Andreas; Takahashi, Hirokazu; Ikegami, Takashi
2017-01-01
Spontaneous development of neuronal cells was recorded around 4–34 days in vitro (DIV) with high-density CMOS array, which enables detailed study of the spatio-temporal activity of neuronal culture. We used the CMOS array to characterize the evolution of the inter-spike interval (ISI) distribution from putative single neurons, and estimate the network structure based on transfer entropy analysis, where each node corresponds to a single neuron. We observed that the ISI distributions gradually obeyed the power law with maturation of the network. The amount of information transferred between neurons increased at the early stage of development, but decreased as the network matured. These results suggest that both ISI and transfer entropy were very useful for characterizing the dynamic development of cultured neural cells over a few weeks. PMID:24109870
NASA Astrophysics Data System (ADS)
Ivanov, A. A.
2018-04-01
The Yakutsk array data set in the energy interval (10^17, 10^19) eV is revisited in order to interpret the zenith angle distribution of an extensive air shower event rate of ultra-high-energy cosmic rays. The close relation of the distribution to the attenuation of the main measurable parameter of showers, ρ_600, is examined. Measured and expected distributions are used to analyze the arrival directions of cosmic rays on an equatorial map including the energy range below 10^18 eV, which was previously avoided due to the reduced trigger efficiency of the array in the range. While the null hypothesis cannot be rejected with data from the Yakutsk array, an upper limit on the fraction of cosmic rays from a separable source in the uniform background is derived as a function of declination and energy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kocharovsky, V. V., E-mail: vkochar@physics.tamu.edu; Department of Physics and Astronomy, Texas A&M University, College Station, Texas 77843-4242; Kocharovsky, VI. V.
Widespread use of a broken-power-law description of the spectra of synchrotron emission of various plasma objects requires an analysis of origin and a proper interpretation of spectral components. We show that, for a self-consistent magnetic configuration in a collisionless plasma, these components may be angle-dependent according to an anisotropic particle momentum distribution and may have no counterparts in a particle energy distribution. That has never been studied analytically and is in contrast to a usual model of synchrotron radiation, assuming an external magnetic field and a particle ensemble with isotropic momentum distribution. We demonstrate that for the wide intervals of observation angle the power-law spectra and, in particular, the positions and number of spectral breaks may be essentially different for the cases of the self-consistent and not-self-consistent magnetic fields in current structures responsible for the synchrotron radiation of the ensembles of relativistic particles with the multi-power-law energy distributions.
Analysis and machine mapping of the distribution of band recoveries
Cowardin, L.M.
1977-01-01
A method of calculating distance and bearing from banding site to recovery location based on the solution of a spherical triangle is presented. X and Y distances on an ordinate grid were applied to computer plotting of recoveries on a map. The advantages and disadvantages of tables of recoveries by State or degree block, axial lines, and distance of recovery from banding site for presentation and comparison of the spatial distribution of band recoveries are discussed. A special web-shaped partition formed by concentric circles about the point of banding and great circles at 30-degree intervals through the point of banding has certain advantages over other methods. Comparison of distributions by means of a χ² contingency test is illustrated. The statistic V = χ²/N can be used as a measure of difference between two distributions of band recoveries and its possible use is illustrated as a measure of the degree of migrational homing.
ERIC Educational Resources Information Center
Strazzeri, Kenneth Charles
2013-01-01
The purposes of this study were to investigate (a) undergraduate students' reasoning about the concepts of confidence intervals, (b) undergraduate students' interactions with "well-designed" screencast videos on sampling distributions and confidence intervals, and (c) how screencast videos improve undergraduate students' reasoning ability…
Time series models on analysing mortality rates and acute childhood lymphoid leukaemia.
Kis, Maria
2005-01-01
In this paper we demonstrate the application of time series models in medical research. Hungarian mortality rates were analysed by autoregressive integrated moving average (ARIMA) models, and seasonal time series models were used to examine data on acute childhood lymphoid leukaemia. Mortality data may be analysed by time series methods such as ARIMA modelling. This method is demonstrated by two examples: analysis of the mortality rates of ischemic heart diseases and analysis of the mortality rates of cancer of the digestive system. Mathematical expressions are given for the results of the analysis. The relationships between time series of mortality rates were studied with ARIMA models. Confidence intervals for the autoregressive parameters were calculated by three methods: the standard normal approximation, estimation based on White's theory, and the continuous-time estimation. Analysing the confidence intervals of the first-order autoregressive parameters, we conclude that the intervals obtained with the continuous-time estimation model were much smaller than those from the other estimations. We also present a new approach to analysing the occurrence of acute childhood lymphoid leukaemia by decomposing the time series into components. The periodicity of acute childhood lymphoid leukaemia in Hungary was examined using the seasonal decomposition time series method. The cyclic trend of the dates of diagnosis revealed that a higher percentage of the peaks fell within the winter months than in the other seasons. This demonstrates the seasonal occurrence of childhood leukaemia in Hungary.
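A minimal sketch of ARIMA fitting with confidence intervals for the autoregressive parameter, run on a synthetic series standing in for a mortality rate; the statsmodels usage and the AR(1)-with-constant specification are assumptions, not the paper's exact models.

```python
# Sketch: fit an ARIMA model to a synthetic mortality-like series and read off
# the (normal-approximation) confidence intervals for the fitted parameters.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120
y = np.empty(n)
y[0] = 100.0
for t in range(1, n):
    y[t] = 0.7 * y[t - 1] + 30.0 + rng.normal(scale=2.0)   # synthetic AR(1) around 100

model = sm.tsa.ARIMA(y, order=(1, 0, 0))   # AR(1) with constant
res = model.fit()
print(res.params)        # constant and AR(1) coefficient
print(res.conf_int())    # 95% confidence intervals
```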
Reclaimed mineland curve number response to temporal distribution of rainfall
Warner, R.C.; Agouridis, C.T.; Vingralek, P.T.; Fogle, A.W.
2010-01-01
The curve number (CN) method is a common technique to estimate runoff volume, and it is widely used in coal mining operations such as those in the Appalachian region of Kentucky. However, very little CN data are available for watersheds disturbed by surface mining and then reclaimed using traditional techniques. Furthermore, as the CN method does not readily account for variations in infiltration rates due to varying rainfall distributions, the selection of a single CN value to encompass all temporal rainfall distributions could lead engineers to substantially under- or over-size water detention structures used in mining operations or other land uses such as development. Using rainfall and runoff data from a surface coal mine located in the Cumberland Plateau of eastern Kentucky, CNs were computed for conventionally reclaimed lands. The effects of temporal rainfall distributions on CNs were also examined by classifying storms as intense, steady, multi-interval intense, or multi-interval steady. Results indicate that CNs for such reclaimed lands ranged from 62 to 94 with a mean value of 85. Temporal rainfall distributions were also shown to significantly affect CN values, with intense storms having significantly higher CNs than multi-interval storms. These results indicate that a period of recovery is present between rainfall bursts of a multi-interval storm that allows depressional storage and infiltration rates to rebound. © 2010 American Water Resources Association.
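For reference, the standard SCS curve-number runoff relation underlying this kind of analysis (depths in inches, with the conventional initial abstraction Ia = 0.2S) can be written as a short function; the example CN values echo the range reported above.

```python
# The standard SCS curve-number runoff relation (depths in inches).
def scs_runoff(p_in, cn):
    """Runoff depth (in) from rainfall depth p_in (in) for a given curve number."""
    s = 1000.0 / cn - 10.0        # potential maximum retention (in)
    ia = 0.2 * s                  # initial abstraction (in)
    if p_in <= ia:
        return 0.0
    return (p_in - ia) ** 2 / (p_in - ia + s)

# Example: the same 2-inch storm across the reported CN range 62-94
for cn in (62, 85, 94):
    print(cn, round(scs_runoff(2.0, cn), 2))
```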
Information distribution in distributed microprocessor based flight control systems
NASA Technical Reports Server (NTRS)
Montgomery, R. C.; Lee, P. S.
1977-01-01
This paper presents an optimal control theory that accounts for variable time intervals in the information distribution to control effectors in a distributed microprocessor based flight control system. The theory is developed using a linear process model for the aircraft dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved that provides the control law that minimizes the expected value of a quadratic cost function. An example is presented where the theory is applied to the control of the longitudinal motions of the F8-DFBW aircraft. Theoretical and simulation results indicate that, for the example problem, the optimal cost obtained using a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained using a known uniform information update interval.
Intermittency via moments and distributions in central O+Cu collisions at 14.6 A·GeV/c
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tannenbaum, M.J.
Fluctuations in pseudorapidity distributions of charged particles from central (ZCAL) collisions of ^16O+Cu at 14.6 A·GeV/c have been analyzed by Ju Kang using the method of scaled factorial moments as a function of the interval δη. An apparent power-law growth of moments with decreasing interval is observed down to δη ≈ 0.1, and the measured slope parameters are found to obey two scaling rules. Previous experience with E_T distributions suggested that fluctuations of multiplicity and transverse energy can be well described by Gamma or Negative Binomial Distributions (NBD), and excellent fits to NBD were obtained in all δη bins. The k parameter of the NBD fit was found to increase linearly with the δη interval, which, due to the well known property of the NBD under convolution, indicates that the multiplicity distributions in adjacent bins of pseudorapidity δη ≈ 0.1 are largely statistically independent.
Intermittency via moments and distributions in central O+Cu collisions at 14.6 A·GeV/c
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tannenbaum, M.J.; The E802 Collaboration
Fluctuations in pseudorapidity distributions of charged particles from central (ZCAL) collisions of ^16O+Cu at 14.6 A·GeV/c have been analyzed by Ju Kang using the method of scaled factorial moments as a function of the interval δη. An apparent power-law growth of moments with decreasing interval is observed down to δη ≈ 0.1, and the measured slope parameters are found to obey two scaling rules. Previous experience with E_T distributions suggested that fluctuations of multiplicity and transverse energy can be well described by Gamma or Negative Binomial Distributions (NBD), and excellent fits to NBD were obtained in all δη bins. The k parameter of the NBD fit was found to increase linearly with the δη interval, which, due to the well known property of the NBD under convolution, indicates that the multiplicity distributions in adjacent bins of pseudorapidity δη ≈ 0.1 are largely statistically independent.
Recent and Future Enhancements in NDI for Aircraft Structures (Postprint)
2015-11-01
found that different capabilities were being used to determine inspection intervals for different aircraft [7]. This led to an internal effort... capability of the NDI technique determines the inspection intervals... damage and that the aircraft structure had to be inspectable. The results of the damage tolerance assessments were incorporated into USAF Technical...
Recent and Future Enhancement in NDI for Aircraft Structures (Postprint)
2015-11-01
found that different capabilities were being used to determine inspection intervals for different aircraft [7]. This led to an internal effort... capability of the NDI technique determines the inspection intervals... damage and that the aircraft structure had to be inspectable. The results of the damage tolerance assessments were incorporated into USAF Technical...
Hu, X H; Li, Y P; Huang, G H; Zhuang, X W; Ding, X W
2016-05-01
In this study, a Bayesian-based two-stage inexact optimization (BTIO) method is developed for supporting water quality management through coupling Bayesian analysis with interval two-stage stochastic programming (ITSP). The BTIO method is capable of addressing uncertainties caused by insufficient inputs in the water quality model as well as uncertainties expressed as probabilistic distributions and interval numbers. The BTIO method is applied to a real case of water quality management for the Xiangxi River basin in the Three Gorges Reservoir region to seek optimal water quality management schemes under various uncertainties. Interval solutions for production patterns under a range of probabilistic water quality constraints have been generated. Results obtained demonstrate compromises between the system benefit and the system failure risk due to inherent uncertainties that exist in various system components. Moreover, information about pollutant emissions is obtained, which would help managers to adjust production patterns of regional industry and local policies considering the interactions of water quality requirements, economic benefit, and industry structure.
Correlation of physical and genetic maps of human chromosome 16
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sutherland, G.R.
1991-01-01
This project aimed to divide chromosome 16 into approximately 50 intervals of ~2 Mb in size by constructing a series of mouse/human somatic cell hybrids each containing a rearranged chromosome 16. Using these hybrids, DNA probes would be regionally mapped by Southern blot or PCR analysis. Preference would be given to mapping probes which demonstrated polymorphisms for which the CEPH panel of families had been typed. This would allow a correlation of the physical and linkage maps of this chromosome. The aims have been substantially achieved. 49 somatic cell hybrids have been constructed which have allowed definition of 46, and potentially 57, different physical intervals on the chromosome. 164 loci have been fully mapped into these intervals. A correlation of the physical and genetic maps of the chromosome is in an advanced stage of preparation. The somatic cell hybrids constructed have been widely distributed to groups working on chromosome 16 and other genome projects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sutherland, G.R.
1991-12-31
This project aimed to divide chromosome 16 into approximately 50 intervals of ~2 Mb in size by constructing a series of mouse/human somatic cell hybrids each containing a rearranged chromosome 16. Using these hybrids, DNA probes would be regionally mapped by Southern blot or PCR analysis. Preference would be given to mapping probes which demonstrated polymorphisms for which the CEPH panel of families had been typed. This would allow a correlation of the physical and linkage maps of this chromosome. The aims have been substantially achieved. 49 somatic cell hybrids have been constructed which have allowed definition of 46, and potentially 57, different physical intervals on the chromosome. 164 loci have been fully mapped into these intervals. A correlation of the physical and genetic maps of the chromosome is in an advanced stage of preparation. The somatic cell hybrids constructed have been widely distributed to groups working on chromosome 16 and other genome projects.
Optimizing preventive maintenance policy: A data-driven application for a light rail braking system.
Corman, Francesco; Kraijema, Sander; Godjevac, Milinko; Lodewijks, Gabriel
2017-10-01
This article presents a case study determining the optimal preventive maintenance policy for a light rail rolling stock system in terms of reliability, availability, and maintenance costs. The maintenance policy defines one of the three predefined preventive maintenance actions at fixed time-based intervals for each of the subsystems of the braking system. Based on work, maintenance, and failure data, we model the reliability degradation of the system and its subsystems under the current maintenance policy by a Weibull distribution. We then analytically determine the relation between reliability, availability, and maintenance costs. We validate the model against recorded reliability and availability and get further insights by a dedicated sensitivity analysis. The model is then used in a sequential optimization framework determining preventive maintenance intervals to improve on the key performance indicators. We show the potential of data-driven modelling to determine optimal maintenance policy: same system availability and reliability can be achieved with 30% maintenance cost reduction, by prolonging the intervals and re-grouping maintenance actions.
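A generic sketch, not the paper's model: Weibull reliability combined with the classical age-replacement cost-rate formula, scanned over candidate preventive-maintenance intervals. The shape, scale, and cost values are assumptions for illustration only.

```python
# Sketch: Weibull reliability and the classical age-replacement cost rate,
# evaluated over a grid of candidate preventive-maintenance intervals.
import numpy as np
from scipy.integrate import quad

beta, eta = 2.5, 400.0        # assumed Weibull shape and scale (days)
cp, cf = 1.0, 10.0            # assumed relative costs: preventive vs corrective action

def reliability(t):
    return np.exp(-(t / eta) ** beta)

def cost_rate(T):
    """Expected cost per unit time for preventive replacement at age T."""
    expected_cycle_cost = cp * reliability(T) + cf * (1.0 - reliability(T))
    expected_cycle_length, _ = quad(reliability, 0.0, T)
    return expected_cycle_cost / expected_cycle_length

intervals = np.linspace(50, 800, 76)
rates = [cost_rate(T) for T in intervals]
best = intervals[int(np.argmin(rates))]
print(f"lowest cost rate at a preventive interval of about {best:.0f} days")
```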
Optimizing preventive maintenance policy: A data-driven application for a light rail braking system
Corman, Francesco; Kraijema, Sander; Godjevac, Milinko; Lodewijks, Gabriel
2017-01-01
This article presents a case study determining the optimal preventive maintenance policy for a light rail rolling stock system in terms of reliability, availability, and maintenance costs. The maintenance policy defines one of the three predefined preventive maintenance actions at fixed time-based intervals for each of the subsystems of the braking system. Based on work, maintenance, and failure data, we model the reliability degradation of the system and its subsystems under the current maintenance policy by a Weibull distribution. We then analytically determine the relation between reliability, availability, and maintenance costs. We validate the model against recorded reliability and availability and get further insights by a dedicated sensitivity analysis. The model is then used in a sequential optimization framework determining preventive maintenance intervals to improve on the key performance indicators. We show the potential of data-driven modelling to determine optimal maintenance policy: same system availability and reliability can be achieved with 30% maintenance cost reduction, by prolonging the intervals and re-grouping maintenance actions. PMID:29278245
Kheifets, Aaron; Freestone, David; Gallistel, C R
2017-07-01
In three experiments with mice (Mus musculus) and rats (Rattus norvegicus), we used a switch paradigm to measure quantitative properties of the interval-timing mechanism. We found that: 1) Rodents adjusted the precision of their timed switches in response to changes in the interval between the short and long feed latencies (the temporal goalposts). 2) The variability in the timing of the switch response was reduced or unchanged in the face of large trial-to-trial random variability in the short and long feed latencies. 3) The adjustment in the distribution of switch latencies in response to changes in the relative frequency of short and long trials was sensitive to the asymmetry in the Kullback-Leibler divergence. The three results suggest that durations are represented with adjustable precision, that they are timed by multiple timers, and that there is a trial-by-trial (episodic) record of feed latencies in memory. © 2017 Society for the Experimental Analysis of Behavior.
Min and Max Exponential Extreme Interval Values and Statistics
ERIC Educational Resources Information Center
Jance, Marsha; Thomopoulos, Nick
2009-01-01
The extreme interval values and statistics (expected value, median, mode, standard deviation, and coefficient of variation) for the smallest (min) and largest (max) values of exponentially distributed variables with parameter λ = 1 are examined for different observation (sample) sizes. An extreme interval value g_a is defined as a…
Global distribution of moisture, evaporation-precipitation, and diabatic heating rates
NASA Technical Reports Server (NTRS)
Christy, John R.
1989-01-01
Global archives were established for the ECMWF 12-hour, multilevel analyses beginning 1 January 1985; day and night IR temperatures; and incoming and absorbed solar radiation. Routines were written to access these data conveniently from the NASA/MSFC MASSTOR facility for diagnostic analysis. Calculations of diabatic heating rates were performed from the ECMWF data using 4-day intervals. Calculations of precipitable water (W) from 1 May 1985 were carried out using the ECMWF data. Because a major operational change on 1 May 1985 had a significant impact on the moisture field, values prior to that date are incompatible with subsequent analyses.
Evaluation Of Statistical Models For Forecast Errors From The HBV-Model
NASA Astrophysics Data System (ADS)
Engeland, K.; Kolberg, S.; Renard, B.; Stensland, I.
2009-04-01
Three statistical models for the forecast errors for inflow to the Langvatn reservoir in Northern Norway have been constructed and tested according to how well the distribution and median values of the forecast errors fit the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first-order autoregressive model was constructed for the forecast errors, with parameters conditioned on climatic conditions. In the second model, the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first-order autoregressive model was constructed for the forecast errors. In the last model, positive and negative errors were modeled separately: the errors were first NQT-transformed, and then a model was constructed in which the mean values were conditioned on climate, forecasted inflow and the previous day's error. To test the three models we applied three criteria: a) the median values should be close to the observed values; b) the forecast intervals should be narrow; c) the distribution should be correct. The results showed that it is difficult to obtain a correct model for the forecast errors, and that the main challenge is to account for the autocorrelation in the errors. Models 1 and 2 gave similar results, and their main drawback is that the distributions are not correct. The 95% forecast intervals were well identified, but smaller forecast intervals were over-estimated and larger intervals were under-estimated. Model 3 gave a distribution that fits better, but the median values do not fit well since the autocorrelation is not properly accounted for. If the 95% forecast interval is of interest, Model 2 is recommended. If the whole distribution is of interest, Model 3 is recommended.
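A minimal sketch of the ingredients of the first model, on synthetic data: Box-Cox transform the inflows, form forecast errors, and fit a first-order autoregressive model. The climate conditioning used in the study is omitted, and the synthetic flows are purely illustrative.

```python
# Sketch: Box-Cox transform, forecast errors, and a least-squares AR(1) fit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
obs = rng.gamma(shape=3.0, scale=20.0, size=500) + 1.0       # synthetic inflows
fcst = obs * rng.lognormal(mean=0.0, sigma=0.15, size=500)   # synthetic forecasts

obs_bc, lam = stats.boxcox(obs)                 # estimate lambda from observations
fcst_bc = stats.boxcox(fcst, lmbda=lam)         # apply the same lambda to forecasts
err = obs_bc - fcst_bc

# AR(1): err[t] = phi * err[t-1] + noise, fitted by least squares
phi = np.sum(err[1:] * err[:-1]) / np.sum(err[:-1] ** 2)
resid_sd = np.std(err[1:] - phi * err[:-1], ddof=1)
print(f"lambda = {lam:.2f}, phi = {phi:.2f}, residual sd = {resid_sd:.3f}")
```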
NASA Astrophysics Data System (ADS)
Muhammed Naseef, T.; Sanil Kumar, V.
2017-10-01
An assessment of extreme wave characteristics during the design of marine facilities not only helps to ensure their safety but also assess the economic aspects. In this study, return levels of significant wave height (Hs) for different periods are estimated using the generalized extreme value distribution (GEV) and generalized Pareto distribution (GPD) based on the Waverider buoy data spanning 8 years and the ERA-Interim reanalysis data spanning 38 years. The analysis is carried out for wind-sea, swell and total Hs separately for buoy data. Seasonality of the prevailing wave climate is also considered in the analysis to provide return levels for short-term activities in the location. The study shows that the initial distribution method (IDM) underestimates return levels compared to GPD. The maximum return levels estimated by the GPD corresponding to 100 years are 5.10 m for the monsoon season (JJAS), 2.66 m for the pre-monsoon season (FMAM) and 4.28 m for the post-monsoon season (ONDJ). The intercomparison of return levels by block maxima (annual, seasonal and monthly maxima) and the r-largest method for GEV theory shows that the maximum return level for 100 years is 7.20 m in the r-largest series followed by monthly maxima (6.02 m) and annual maxima (AM) (5.66 m) series. The analysis is also carried out to understand the sensitivity of the number of observations for the GEV annual maxima estimates. It indicates that the variations in the standard deviation of the series caused by changes in the number of observations are positively correlated with the return level estimates. The 100-year return level results of Hs using the GEV method are comparable for short-term (2008 to 2016) buoy data (4.18 m) and long-term (1979 to 2016) ERA-Interim shallow data (4.39 m). The 6 h interval data tend to miss high values of Hs, and hence there is a significant difference in the 100-year return level Hs obtained using 6 h interval data compared to data at 0.5 h interval. The study shows that a single storm can cause a large difference in the 100-year Hs value.
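A minimal sketch of the annual-maxima GEV step, on synthetic data: fit SciPy's genextreme and read the N-year return level as the (1 - 1/N) quantile. Note that SciPy's shape parameter has the opposite sign to the usual GEV ξ; none of the values below come from the buoy or ERA-Interim data.

```python
# Sketch: GEV fit to annual maximum Hs and return-level estimation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
annual_max_hs = stats.genextreme.rvs(c=-0.1, loc=3.0, scale=0.5, size=38, random_state=rng)

c, loc, scale = stats.genextreme.fit(annual_max_hs)
for T in (10, 50, 100):
    rl = stats.genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)
    print(f"{T:3d}-year return level: {rl:.2f} m")
```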
Body size distributions signal a regime shift in a lake ecosystem
Spanbauer, Trisha; Allen, Craig R.; Angeler, David G.; Eason, Tarsha; Fritz, Sherilyn C.; Garmestani, Ahjond S.; Nash, Kirsty L.; Stone, Jeffery R.; Stow, Craig A.; Sundstrom, Shana M.
2016-01-01
Communities of organisms, from mammals to microorganisms, have discontinuous distributions of body size. This pattern of size structuring is a conservative trait of community organization and is a product of processes that occur at multiple spatial and temporal scales. In this study, we assessed whether body size patterns serve as an indicator of a threshold between alternative regimes. Over the past 7000 years, the biological communities of Foy Lake (Montana, USA) have undergone a major regime shift owing to climate change. We used a palaeoecological record of diatom communities to estimate diatom sizes, and then analysed the discontinuous distribution of organism sizes over time. We used Bayesian classification and regression tree models to determine that all time intervals exhibited aggregations of sizes separated by gaps in the distribution and found a significant change in diatom body size distributions approximately 150 years before the identified ecosystem regime shift. We suggest that discontinuity analysis is a useful addition to the suite of tools for the detection of early warning signals of regime shifts.
NASA Technical Reports Server (NTRS)
Glick, B. J.
1985-01-01
Techniques for classifying objects into groups or classes go under many different names including, most commonly, cluster analysis. Mathematically, the general problem is to find a best mapping of objects into an index set consisting of class identifiers. When an a priori grouping of objects exists, the process of deriving the classification rules from samples of classified objects is known as discrimination. When such rules are applied to objects of unknown class, the process is denoted classification. The specific problem addressed involves the group classification of a set of objects that are each associated with a series of measurements (ratio, interval, ordinal, or nominal levels of measurement). Each measurement produces one variable in a multidimensional variable space. Cluster analysis techniques are reviewed and methods for including geographic location, distance measures, and spatial pattern (distribution) as parameters in clustering are examined. For the case of patterning, measures of spatial autocorrelation are discussed in terms of the kind of data (nominal, ordinal, or interval scaled) to which they may be applied.
Link, William; Hesed, Kyle Miller
2015-01-01
Knowledge of organisms’ growth rates and ages at sexual maturity is important for conservation efforts and a wide variety of studies in ecology and evolutionary biology. However, these life history parameters may be difficult to obtain from natural populations: individuals encountered may be of unknown age, information on age at sexual maturity may be uncertain and interval-censored, and growth data may include both individual heterogeneity and measurement errors. We analyzed mark–recapture data for Red-backed Salamanders (Plethodon cinereus) to compare sex-specific growth rates and ages at sexual maturity. Aging of individuals was made possible by the use of a von Bertalanffy model of growth, complemented with models for interval-censored and imperfect observations at sexual maturation. Individual heterogeneity in growth was modeled through the use of Gamma processes. Our analysis indicates that female P. cinereus mature earlier and grow more quickly than males, growing to nearly identical asymptotic size distributions as males.
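A minimal sketch of the deterministic von Bertalanffy curve, L(t) = L∞(1 - exp(-k(t - t0))), fitted by nonlinear least squares to synthetic age-length data; the Gamma-process individual heterogeneity and the interval censoring handled in the paper are omitted.

```python
# Sketch: nonlinear least-squares fit of the von Bertalanffy growth curve.
import numpy as np
from scipy.optimize import curve_fit

def von_bertalanffy(t, linf, k, t0):
    return linf * (1.0 - np.exp(-k * (t - t0)))

rng = np.random.default_rng(5)
ages = rng.uniform(0.5, 8.0, size=60)                                   # years (synthetic)
lengths = von_bertalanffy(ages, 50.0, 0.45, -0.2) + rng.normal(0, 1.5, size=60)  # mm (synthetic)

params, cov = curve_fit(von_bertalanffy, ages, lengths, p0=[45.0, 0.5, 0.0])
linf, k, t0 = params
print(f"Linf = {linf:.1f} mm, k = {k:.2f}/yr, t0 = {t0:.2f} yr")
```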
Jitter Reduces Response-Time Variability in ADHD: An Ex-Gaussian Analysis.
Lee, Ryan W Y; Jacobson, Lisa A; Pritchard, Alison E; Ryan, Matthew S; Yu, Qilu; Denckla, Martha B; Mostofsky, Stewart; Mahone, E Mark
2015-09-01
"Jitter" involves randomization of intervals between stimulus events. Compared with controls, individuals with ADHD demonstrate greater intrasubject variability (ISV) performing tasks with fixed interstimulus intervals (ISIs). Because Gaussian curves mask the effect of extremely slow or fast response times (RTs), ex-Gaussian approaches have been applied to study ISV. This study applied ex-Gaussian analysis to examine the effects of jitter on RT variability in children with and without ADHD. A total of 75 children, aged 9 to 14 years (44 ADHD, 31 controls), completed a go/no-go test with two conditions: fixed ISI and jittered ISI. ADHD children showed greater variability, driven by elevations in exponential (tau), but not normal (sigma) components of the RT distribution. Jitter decreased tau in ADHD to levels not statistically different than controls, reducing lapses in performance characteristic of impaired response control. Jitter may provide a nonpharmacologic mechanism to facilitate readiness to respond and reduce lapses from sustained (controlled) performance. © 2012 SAGE Publications.
Bounding the Failure Probability Range of Polynomial Systems Subject to P-box Uncertainties
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2012-01-01
This paper proposes a reliability analysis framework for systems subject to multiple design requirements that depend polynomially on the uncertainty. Uncertainty is prescribed by probability boxes, also known as p-boxes, whose distribution functions have free or fixed functional forms. An approach based on the Bernstein expansion of polynomials and optimization is proposed. In particular, we search for the elements of a multi-dimensional p-box that minimize (i.e., the best-case) and maximize (i.e., the worst-case) the probability of inner and outer bounding sets of the failure domain. This technique yields intervals that bound the range of failure probabilities. The offset between this bounding interval and the actual failure probability range can be made arbitrarily tight with additional computational effort.
Brown, Angus M
2010-04-01
The objective of the method described in this paper is to develop a spreadsheet template for the purpose of comparing multiple sample means. An initial analysis of variance (ANOVA) test on the data returns F, the test statistic. If F is larger than the critical F value drawn from the F distribution at the appropriate degrees of freedom, convention dictates rejection of the null hypothesis and allows subsequent multiple comparison testing to determine where the inequalities between the sample means lie. A variety of multiple comparison methods are described that return the 95% confidence intervals for differences between means using an inclusive pairwise comparison of the sample means. © 2009 Elsevier Ireland Ltd. All rights reserved.
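A script equivalent of the workflow described above, on synthetic data: a one-way ANOVA followed by Tukey's HSD, which reports 95% confidence intervals for the pairwise differences between sample means. This is a generic sketch, not the spreadsheet template itself.

```python
# Sketch: one-way ANOVA followed by Tukey HSD multiple comparisons.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(2)
groups = {"A": rng.normal(10.0, 2.0, 20),
          "B": rng.normal(12.5, 2.0, 20),
          "C": rng.normal(10.5, 2.0, 20)}

f_stat, p_val = stats.f_oneway(*groups.values())
print(f"F = {f_stat:.2f}, p = {p_val:.4f}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))   # 95% CIs for pairwise differences
```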
HESS Opinions: The need for process-based evaluation of large-domain hyper-resolution models
NASA Astrophysics Data System (ADS)
Melsen, Lieke A.; Teuling, Adriaan J.; Torfs, Paul J. J. F.; Uijlenhoet, Remko; Mizukami, Naoki; Clark, Martyn P.
2016-03-01
A meta-analysis on 192 peer-reviewed articles reporting on applications of the variable infiltration capacity (VIC) model in a distributed way reveals that the spatial resolution at which the model is applied has increased over the years, while the calibration and validation time interval has remained unchanged. We argue that the calibration and validation time interval should keep pace with the increase in spatial resolution in order to resolve the processes that are relevant at the applied spatial resolution. We identified six time concepts in hydrological models, which all impact the model results and conclusions. Process-based model evaluation is particularly relevant when models are applied at hyper-resolution, where stakeholders expect credible results both at a high spatial and temporal resolution.
HESS Opinions: The need for process-based evaluation of large-domain hyper-resolution models
NASA Astrophysics Data System (ADS)
Melsen, L. A.; Teuling, A. J.; Torfs, P. J. J. F.; Uijlenhoet, R.; Mizukami, N.; Clark, M. P.
2015-12-01
A meta-analysis on 192 peer-reviewed articles reporting applications of the Variable Infiltration Capacity (VIC) model in a distributed way reveals that the spatial resolution at which the model is applied has increased over the years, while the calibration and validation time interval has remained unchanged. We argue that the calibration and validation time interval should keep pace with the increase in spatial resolution in order to resolve the processes that are relevant at the applied spatial resolution. We identified six time concepts in hydrological models, which all impact the model results and conclusions. Process-based model evaluation is particularly relevant when models are applied at hyper-resolution, where stakeholders expect credible results both at a high spatial and temporal resolution.
Confidence intervals for expected moments algorithm flood quantile estimates
Cohn, Timothy A.; Lane, William L.; Stedinger, Jery R.
2001-01-01
Historical and paleoflood information can substantially improve flood frequency estimates if appropriate statistical procedures are properly applied. However, the Federal guidelines for flood frequency analysis, set forth in Bulletin 17B, rely on an inefficient “weighting” procedure that fails to take advantage of historical and paleoflood information. This has led researchers to propose several more efficient alternatives including the Expected Moments Algorithm (EMA), which is attractive because it retains Bulletin 17B's statistical structure (method of moments with the Log Pearson Type 3 distribution) and thus can be easily integrated into flood analyses employing the rest of the Bulletin 17B approach. The practical utility of EMA, however, has been limited because no closed‐form method has been available for quantifying the uncertainty of EMA‐based flood quantile estimates. This paper addresses that concern by providing analytical expressions for the asymptotic variance of EMA flood‐quantile estimators and confidence intervals for flood quantile estimates. Monte Carlo simulations demonstrate the properties of such confidence intervals for sites where a 25‐ to 100‐year streamgage record is augmented by 50 to 150 years of historical information. The experiments show that the confidence intervals, though not exact, should be acceptable for most purposes.
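A minimal sketch of the Log Pearson Type 3 quantile computation that EMA builds on (not EMA itself, and not its analytic confidence intervals): fit SciPy's pearson3 to log10 annual peaks and bootstrap an approximate interval for the 100-year flood. The data are synthetic.

```python
# Sketch: Log Pearson Type 3 flood quantile with a crude bootstrap interval.
import numpy as np
from scipy import stats

rng = np.random.default_rng(17)
peaks = rng.lognormal(mean=6.5, sigma=0.6, size=60)      # synthetic annual peak flows (cfs)
log_q = np.log10(peaks)

skew, loc, scale = stats.pearson3.fit(log_q)
q100 = 10 ** stats.pearson3.ppf(0.99, skew, loc=loc, scale=scale)
print(f"estimated 100-year peak: {q100:,.0f} cfs")

# Crude bootstrap interval for the quantile (EMA's analytic intervals are not reproduced here).
boot = []
for _ in range(200):
    sample = rng.choice(log_q, size=log_q.size, replace=True)
    s_b, loc_b, scale_b = stats.pearson3.fit(sample)
    boot.append(10 ** stats.pearson3.ppf(0.99, s_b, loc=loc_b, scale=scale_b))
print("approximate 95% bootstrap interval:", np.percentile(boot, [2.5, 97.5]).round(0))
```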
NASA Astrophysics Data System (ADS)
Ono, T.; Takahashi, T.
2017-12-01
Non-structural mitigation measures such as flood hazard maps based on estimated inundation areas have become more important because heavy rains exceeding the design rainfall have occurred frequently in recent years. However, the conventional method may underestimate the inundation area because the assumed locations of dike breach in river flood analysis are limited to reaches where the water level exceeds the high-water level. The objective of this study is to examine the uncertainty in the estimated inundation area arising from the assumed location of the dike breach. This study proposes a multiple-flood-scenario approach that automatically sets multiple dike-breach locations in the river flood analysis. The underlying premise of this approach is that the location of a dike breach cannot be predicted precisely. The proposed method uses the dike-breach interval, defined as the distance between adjacent assumed breach locations; that is, breach locations are placed at every interval along the dike. The 2D shallow water equations were adopted as the governing equations of the river flood analysis, solved with a leap-frog scheme on a staggered grid. The river flood analysis was verified by application to the 2015 Kinugawa River flooding, and the proposed multiple flood scenarios were applied to the Akutagawa River in Takatsuki City. The computations for the Akutagawa River show that comparing the maximum inundation depths computed for adjacent breach locations prevents underestimation of the estimated inundation area. Further, analyses of the spatial distribution of inundation class and of the maximum inundation depth at each measurement point identified the optimum dike-breach interval, which allows the maximum inundation area to be evaluated with the minimum number of assumed breach locations. In brief, this study found the optimum dike-breach interval for the Akutagawa River, enabling the maximum inundation area to be estimated efficiently and accurately. River flood analysis using the proposed method will contribute to mitigating flood disasters by improving the accuracy of the estimated inundation area.
Methods of Stochastic Analysis of Complex Regimes in the 3D Hindmarsh-Rose Neuron Model
NASA Astrophysics Data System (ADS)
Bashkirtseva, Irina; Ryashko, Lev; Slepukhina, Evdokia
A problem of the stochastic nonlinear analysis of neuronal activity is studied by the example of the Hindmarsh-Rose (HR) model. For the parametric region of tonic spiking oscillations, it is shown that random noise transforms the spiking dynamic regime into the bursting one. This stochastic phenomenon is specified by qualitative changes in distributions of random trajectories and interspike intervals (ISIs). For a quantitative analysis of the noise-induced bursting, we suggest a constructive semi-analytical approach based on the stochastic sensitivity function (SSF) technique and the method of confidence domains that allows us to describe geometrically a distribution of random states around the deterministic attractors. Using this approach, we develop a new algorithm for estimation of critical values for the noise intensity corresponding to the qualitative changes in stochastic dynamics. We show that the obtained estimations are in good agreement with the numerical results. An interplay between noise-induced bursting and transitions from order to chaos is discussed.
Intra-tumor distribution of PEGylated liposome upon repeated injection: No possession by prior dose.
Nakamura, Hiroyuki; Abu Lila, Amr S; Nishio, Miho; Tanaka, Masao; Ando, Hidenori; Kiwada, Hiroshi; Ishida, Tatsuhiro
2015-12-28
Liposomes have proven to be a viable means for the delivery of chemotherapeutic agents to solid tumors. However, significant variability has been detected in their intra-tumor accumulation and distribution, resulting in compromised therapeutic outcomes. We recently examined the intra-tumor accumulation and distribution of weekly sequentially administered oxaliplatin (l-OHP)-containing PEGylated liposomes. In that study, the first and second doses of l-OHP-containing PEGylated liposomes were distributed diversely and broadly within tumor tissues, resulting in a potent anti-tumor efficacy. However, little is known about the mechanism underlying such a diverse and broad liposome distribution. Therefore, in the present study, we investigated the influence of dosage interval on the intra-tumor accumulation and distribution of "empty" PEGylated liposomes. Intra-tumor distribution of sequentially administered "empty" PEGylated liposomes was altered in a dosing interval-dependent manner. In addition, the intra-tumor distribution pattern was closely related to the chronological alteration of tumor blood flow as well as vascular permeability in the growing tumor tissue. These results suggest that the sequential administrations of PEGylated liposomes in well-spaced intervals might allow the distribution to different areas and enhance the total bulk accumulation within tumor tissue, resulting in better therapeutic efficacy of the encapsulated payload. This study may provide useful information for a better design of therapeutic regimens involving multiple administrations of nanocarrier drug delivery systems. Copyright © 2015 Elsevier B.V. All rights reserved.
Machine learning approaches for estimation of prediction interval for the model output.
Shrestha, Durga L; Solomatine, Dimitri P
2006-03-01
A novel method for estimating prediction uncertainty using machine learning techniques is presented. Uncertainty is expressed in the form of the two quantiles (constituting the prediction interval) of the underlying distribution of prediction errors. The idea is to partition the input space into different zones or clusters having similar model errors using fuzzy c-means clustering. The prediction interval is constructed for each cluster on the basis of the empirical distribution of the errors associated with all instances belonging to that cluster, and is propagated from the clusters to individual examples according to their membership grades in each cluster. Then a regression model is built for in-sample data using the computed prediction limits as targets, and finally, this model is applied to estimate the prediction intervals (limits) for out-of-sample data. The method was tested on artificial and real hydrologic data sets using various machine learning techniques. Preliminary results show that the method is superior to other methods for estimating the prediction interval. A new method for evaluating the performance of prediction interval estimation is proposed as well.
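A minimal sketch of the workflow described, with hard k-means standing in for fuzzy c-means to keep the example short; the heteroscedastic toy errors, cluster count and quantile levels are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# toy in-sample data: inputs X and model errors e = y_obs - y_pred
X = rng.uniform(0, 10, size=(500, 1))
e = rng.normal(0, 0.2 + 0.1 * X[:, 0], size=500)        # heteroscedastic errors

# 1) partition the input space into zones with similar errors
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
labels = km.predict(X)

# 2) empirical error quantiles per cluster define the prediction limits
lo = np.array([np.quantile(e[labels == k], 0.05) for k in range(4)])
hi = np.array([np.quantile(e[labels == k], 0.95) for k in range(4)])

# 3) regress the limits on the inputs so they can be predicted out of sample
rf_lo = RandomForestRegressor(random_state=0).fit(X, lo[labels])
rf_hi = RandomForestRegressor(random_state=0).fit(X, hi[labels])

x_new = np.array([[2.0], [8.0]])
print("lower/upper error limits:", rf_lo.predict(x_new), rf_hi.predict(x_new))
# add these limits to the deterministic model output to obtain the prediction interval
```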
Antarctic Pliocene Biotic and Environmental Change in a Global Context Changes
NASA Astrophysics Data System (ADS)
Quilty, P. G.; Whitehead, J.
2005-12-01
The Pliocene was globally an interval of dramatic climate change and is often compared with the environment evolving through human-induced global change. Antarctic history needs to be integrated into global patterns. The Prydz Bay-Prince Charles Mountains region of East Antarctica is a major source of data on Late Paleozoic-Recent changes in Antarctic biota and environment. This paper reviews what is known of 13 marine transgressions in the Late Neogene of the region and attempts to compare the Antarctic pattern with global patterns, such as those identified through global sequence stratigraphic analysis. Although temporal resolution in Antarctic sections is not always as good as for sections elsewhere, enough data exist to indicate that many events can be construed as part of global changes, and further correlation is expected. During much of the Pliocene there was less continental ice, reduced sea-ice cover, probably higher sea level, and penetration of marine conditions deep into the hinterland, with independent evidence indicating that this was due to warmth. The Antarctic Polar Frontal Zone was probably much farther south than at present. There have been major changes in the marine fauna and in the distribution of surviving species since the mid-Pliocene. Antarctic fish faunas underwent major changes during this interval, with the evolution of a major new subfamily and diversification in at least two subfamilies. No palynological evidence of terrestrial vegetation has been recovered from the Prydz Bay - Prince Charles Mountains region. Analysis of origin and extinction data for two global planktonic foraminiferal biostratigraphic zonations shows that the Late Miocene-Pliocene was an interval of enhanced extinction and evolution, consistent with more rapidly and strongly fluctuating environments.
NASA Astrophysics Data System (ADS)
Khan, Sahubar Ali Mohd. Nadhar; Ramli, Razamin; Baten, M. D. Azizul
2015-12-01
The agricultural production process typically produces two types of outputs: economically desirable outputs and environmentally undesirable outputs (such as greenhouse gas emissions, nitrate leaching, effects on humans and other organisms, and water pollution). In efficiency analysis these undesirable outputs cannot be ignored and need to be included in order to obtain a realistic estimate of firms' efficiency. Additionally, climatic factors as well as data uncertainty can significantly affect the efficiency analysis. A number of approaches have been proposed in the DEA literature to account for undesirable outputs. Many researchers have argued that the directional distance function (DDF) approach is the most suitable, as it allows a simultaneous increase in desirable outputs and reduction of undesirable outputs. It has also been found that the interval data approach is the most suitable way to account for data uncertainty, as it is much simpler to model and needs less information about the underlying distribution or membership function. In this paper, an enhanced DEA model based on the DDF approach that accounts for undesirable outputs, climatic factors and interval data is proposed. This model will be used to determine the efficiency of rice farmers who produce undesirable outputs and operate under uncertainty. It is hoped that the proposed model will provide a better estimate of rice farmers' efficiency.
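For reference, a standard directional distance function formulation with undesirable outputs, of the kind the proposed model builds on (the interval-data and climatic-factor extensions are not shown); notation is generic, with inputs x, desirable outputs y, undesirable outputs b and direction vector (g_y, -g_b).

```latex
\begin{align*}
\max_{\beta,\;\lambda\ge 0}\quad & \beta \\
\text{s.t.}\quad
  & \sum_{j=1}^{n}\lambda_j\, y_{rj} \;\ge\; y_{r0} + \beta\, g_{y_r},  & r &= 1,\dots,s \quad \text{(desirable outputs)}\\
  & \sum_{j=1}^{n}\lambda_j\, b_{kj} \;=\; b_{k0} - \beta\, g_{b_k},    & k &= 1,\dots,h \quad \text{(undesirable outputs)}\\
  & \sum_{j=1}^{n}\lambda_j\, x_{ij} \;\le\; x_{i0},                    & i &= 1,\dots,m \quad \text{(inputs)}
\end{align*}
```

The DMU under evaluation is efficient when the optimal β is zero; an interval-data extension would replace the observed x, y and b by their lower and upper bounds to obtain optimistic and pessimistic efficiency scores (a common treatment, though the paper's exact formulation may differ).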
An analysis of excessive running in the development of activity anorexia.
Beneke, W M; Schulte, S E; vander Tuig, J G
1995-09-01
Food restriction combined with activity wheel access produces activity anorexia: a combination of excessive running, reduced food intake and rapid weight loss. Temporal distributions of running in activity anorexia were examined in a reversal design with one of 2 x 2 x 2 factorial combinations (pelleted-vs-powdered food x deprivation x wheel access) as the treatment condition. Wheel revolutions were recorded in 30 min intervals; body weights, food and water intakes were measured daily. Only wheel access combined with food deprivation reliably produced activity anorexia. Excessive running occurred in the absence of schedule-induced polydipsia, was unaffected by food form, and showed distributional characteristics of facultative behavior. These results are inconsistent with schedule-induced behavior explanations. Running distributions appeared consistent with chronobiological models with light/dark onset and feeding serving as zeitgebers.
NASA Astrophysics Data System (ADS)
Jiang, Zhi-Qiang; Zhou, Wei-Xing; Tan, Qun-Zhao
2009-11-01
Massive multiplayer online role-playing games (MMORPGs) are very popular in China, which provides a potential platform for scientific research. We study the online-offline activities of avatars in an MMORPG to understand their game-playing behavior. The statistical analysis unveils that the active avatars can be classified into three types. The avatars of the first type are owned by game cheaters who go online and offline in preset time intervals with the online duration distributions dominated by pulses. The second type of avatars is characterized by a Weibull distribution in the online durations, which is confirmed by statistical tests. The distributions of online durations of the remaining individual avatars differ from the above two types and cannot be described by a simple form. These findings have potential applications in the game industry.
NASA Astrophysics Data System (ADS)
Rouillon, M.; Taylor, M. P.; Dong, C.
2016-12-01
This research assesses the advantages of integrating field portable X-ray fluorescence (pXRF) technology to reduce risk and increase confidence in decision making for metal-contaminated site assessments. Metal-contaminated sites are often highly heterogeneous and require a high sampling density to accurately characterize the distribution and concentration of contaminants. Current regulatory assessment approaches rely on a small number of samples processed using standard wet-chemistry methods. In New South Wales (NSW), Australia, the current notification trigger for characterizing metal-contaminated sites requires the upper 95% confidence interval of the site mean to equal or exceed the relevant guidelines. The method's low 'minimum' sampling requirements can misclassify sites due to the heterogeneous nature of soil contamination, leading to inaccurate decision making. To address this issue, we propose integrating in-field pXRF analysis with the established sampling method to overcome these sampling limitations. This approach increases the minimum sampling resolution and reduces the 95% CI of the site mean. In-field pXRF analysis at contamination hotspots enhances sample resolution efficiently and without the need to return to the site. In this study, the current and proposed pXRF site assessment methods are compared at five heterogeneous metal-contaminated sites by analysing the spatial distribution of contaminants, the 95% confidence intervals of site means, and the sampling and analysis uncertainty associated with each method. Finally, an analysis of the costs associated with both the current and proposed methods is presented to demonstrate the advantages of incorporating pXRF into metal-contaminated site assessments. The data show that pXRF-integrated site assessments allow faster, more cost-efficient characterisation of metal-contaminated sites with greater confidence for decision making.
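A minimal sketch of the confidence-interval mechanics behind the notification trigger, assuming independent samples and a t-based interval; the concentrations and guideline value are invented, and the second data set simply illustrates how added in-field pXRF points tighten the 95% CI of the site mean.

```python
import numpy as np
from scipy import stats

def mean_ci(conc, level=0.95):
    """t-based confidence interval for the site mean concentration (mg/kg)."""
    conc = np.asarray(conc, dtype=float)
    n, m, se = conc.size, conc.mean(), stats.sem(conc)
    half = se * stats.t.ppf(0.5 + level / 2, df=n - 1)
    return m - half, m + half

sparse = [120, 310, 95, 880, 150, 60, 420]                 # few wet-chemistry samples
dense = sparse + [130, 210, 75, 540, 180, 90, 350,
                  110, 260, 400, 140, 300]                 # plus in-field pXRF points
guideline = 300                                            # hypothetical trigger (mg/kg)
for label, data in [("current method", sparse), ("pXRF-augmented", dense)]:
    lo, hi = mean_ci(data)
    print(f"{label}: 95% CI of mean = ({lo:.0f}, {hi:.0f}); "
          f"upper CI {'exceeds' if hi >= guideline else 'is below'} guideline")
```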
Time interval between successive trading in foreign currency market: from microscopic to macroscopic
NASA Astrophysics Data System (ADS)
Sato, Aki-Hiro
2004-12-01
Recently, it has been shown that the inter-transaction interval (ITI) distribution of foreign currency rates has a fat tail. In order to understand the statistical properties of the ITI, a dealer model with N interacting agents is proposed. Numerical simulations confirm that the ITI distribution of the dealer model has a power-law tail. A random multiplicative process (RMP) can be approximately derived from the ITI of the dealer model. Consequently, we conclude that the power-law tail of the ITI distribution of the dealer model is a result of the RMP.
Statistical variability and confidence intervals for planar dose QA pass rates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, Daniel W.; Nelms, Benjamin E.; Attwood, Kristopher
Purpose: The most common metric for comparing measured to calculated dose, such as for pretreatment quality assurance of intensity-modulated photon fields, is a pass rate (%) generated using percent difference (%Diff), distance-to-agreement (DTA), or some combination of the two (e.g., gamma evaluation). For many dosimeters, the grid of analyzed points corresponds to an array with a low areal density of point detectors. In these cases, the pass rates for any given comparison criteria are not absolute but exhibit statistical variability that is a function, in part, of the detector sampling geometry. In this work, the authors analyze the statistics of various methods commonly used to calculate pass rates and propose methods for establishing confidence intervals for pass rates obtained with low-density arrays. Methods: Dose planes were acquired for 25 prostate and 79 head and neck intensity-modulated fields via diode array and electronic portal imaging device (EPID), and matching calculated dose planes were created via a commercial treatment planning system. Pass rates for each dose plane pair (both centered to the beam central axis) were calculated with several common comparison methods: %Diff/DTA composite analysis and gamma evaluation, using absolute dose comparison with both local and global normalization. Specialized software was designed to selectively sample the measured EPID response (very high data density) down to discrete points to simulate low-density measurements. The software was used to realign the simulated detector grid at many simulated positions with respect to the beam central axis, thereby altering the low-density sampled grid. Simulations were repeated with 100 positional iterations using a 1 detector/cm^2 uniform grid, a 2 detector/cm^2 uniform grid, and similar random detector grids. For each simulation, %/DTA composite pass rates were calculated with various %Diff/DTA criteria and for both local and global %Diff normalization techniques. Results: For the prostate and head/neck cases studied, the pass rates obtained with gamma analysis of high density dose planes were 2%-5% higher than respective %/DTA composite analysis on average (ranging as high as 11%), depending on tolerances and normalization. Meanwhile, the pass rates obtained via local normalization were 2%-12% lower than with global maximum normalization on average (ranging as high as 27%), depending on tolerances and calculation method. Repositioning of simulated low-density sampled grids leads to a distribution of possible pass rates for each measured/calculated dose plane pair. These distributions can be predicted using a binomial distribution in order to establish confidence intervals that depend largely on the sampling density and the observed pass rate (i.e., the degree of difference between measured and calculated dose). These results can be extended to apply to 3D arrays of detectors, as well. Conclusions: Dose plane QA analysis can be greatly affected by choice of calculation metric and user-defined parameters, and so all pass rates should be reported with a complete description of calculation method. Pass rates for low-density arrays are subject to statistical uncertainty (vs. the high-density pass rate), but these sampling errors can be modeled using statistical confidence intervals derived from the sampled pass rate and detector density. Thus, pass rates for low-density array measurements should be accompanied by a confidence interval indicating the uncertainty of each pass rate.
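A minimal sketch of the binomial reasoning, treating each sampled detector as an independent pass/fail trial and coding the Wilson score interval directly; the detector counts are illustrative, not taken from the study.

```python
import math

def wilson_interval(n_pass, n_total, z=1.96):
    """Wilson score confidence interval for a pass rate from n_total sampled detectors."""
    p = n_pass / n_total
    denom = 1 + z**2 / n_total
    center = (p + z**2 / (2 * n_total)) / denom
    half = z * math.sqrt(p * (1 - p) / n_total + z**2 / (4 * n_total**2)) / denom
    return center - half, center + half

# e.g. a 1 detector/cm^2 array samples ~400 points of a field; 380 of them pass the criteria
lo, hi = wilson_interval(380, 400)
print(f"observed pass rate 95.0%, 95% CI ({100*lo:.1f}%, {100*hi:.1f}%)")

# the same 95% pass rate measured on a high-density EPID grid (say 40000 points) is much tighter
lo, hi = wilson_interval(38000, 40000)
print(f"high-density grid: 95% CI ({100*lo:.1f}%, {100*hi:.1f}%)")
```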
Estimating equivalence with quantile regression
Cade, B.S.
2011-01-01
Equivalence testing and corresponding confidence interval estimates are used to provide more enlightened statistical statements about parameter estimates by relating them to intervals of effect sizes deemed to be of scientific or practical importance rather than just to an effect size of zero. Equivalence tests and confidence interval estimates are based on a null hypothesis that a parameter estimate is either outside (inequivalence hypothesis) or inside (equivalence hypothesis) an equivalence region, depending on the question of interest and assignment of risk. The former approach, often referred to as bioequivalence testing, is often used in regulatory settings because it reverses the burden of proof compared to a standard test of significance, following a precautionary principle for environmental protection. Unfortunately, many applications of equivalence testing focus on establishing average equivalence by estimating differences in means of distributions that do not have homogeneous variances. I discuss how to compare equivalence across quantiles of distributions using confidence intervals on quantile regression estimates that detect differences in heterogeneous distributions missed by focusing on means. I used one-tailed confidence intervals based on inequivalence hypotheses in a two-group treatment-control design for estimating bioequivalence of arsenic concentrations in soils at an old ammunition testing site and bioequivalence of vegetation biomass at a reclaimed mining site. Two-tailed confidence intervals based both on inequivalence and equivalence hypotheses were used to examine quantile equivalence for negligible trends over time for a continuous exponential model of amphibian abundance. © 2011 by the Ecological Society of America.
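A minimal sketch of a two-group quantile comparison with a one-sided confidence bound, assuming statsmodels' QuantReg and a simple group-dummy design; the simulated concentrations, the quantiles examined and the comparison of the bound with an equivalence margin are illustrative, not the published analyses.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(3)
ref = rng.lognormal(mean=1.0, sigma=0.6, size=80)      # e.g. background arsenic (mg/kg)
trt = rng.lognormal(mean=1.1, sigma=0.9, size=80)      # site of interest, more variable

y = np.concatenate([ref, trt])
X = sm.add_constant(np.r_[np.zeros(ref.size), np.ones(trt.size)])  # group dummy

for q in (0.5, 0.9):
    res = QuantReg(y, X).fit(q=q)
    diff = res.params[1]                               # treatment - reference at quantile q
    upper = res.conf_int(alpha=0.10)[1, 1]             # upper limit of 90% two-sided interval
    print(f"q={q}: difference {diff:.2f}, one-sided 95% upper bound {upper:.2f}")
# equivalence at quantile q is supported if the upper bound falls below the equivalence margin
```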
INFLUENCES OF RESPONSE RATE AND DISTRIBUTION ON THE CALCULATION OF INTEROBSERVER RELIABILITY SCORES
Rolider, Natalie U.; Iwata, Brian A.; Bullock, Christopher E.
2012-01-01
We examined the effects of several variations in response rate on the calculation of total, interval, exact-agreement, and proportional reliability indices. Trained observers recorded computer-generated data that appeared on a computer screen. In Study 1, target responses occurred at low, moderate, and high rates during separate sessions so that reliability results based on the four calculations could be compared across a range of values. Total reliability was uniformly high, interval reliability was spuriously high for high-rate responding, proportional reliability was somewhat lower for high-rate responding, and exact-agreement reliability was the lowest of the measures, especially for high-rate responding. In Study 2, we examined the separate effects of response rate per se, bursting, and end-of-interval responding. Response rate and bursting had little effect on reliability scores; however, the distribution of some responses at the end of intervals decreased interval reliability somewhat, proportional reliability noticeably, and exact-agreement reliability markedly. PMID:23322930
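A minimal sketch of the four indices under commonly used definitions (conventions vary between laboratories), computed from two observers' per-interval response counts; the counts are invented.

```python
import numpy as np

obs1 = np.array([0, 2, 5, 1, 0, 3, 4, 0, 6, 2])   # responses per 30-s interval, observer 1
obs2 = np.array([0, 2, 4, 1, 1, 3, 4, 0, 7, 2])   # observer 2

def total_reliability(a, b):
    # smaller session total divided by larger session total
    return 100 * min(a.sum(), b.sum()) / max(a.sum(), b.sum())

def interval_reliability(a, b):
    # agreement on occurrence vs nonoccurrence within each interval
    return 100 * np.mean((a > 0) == (b > 0))

def exact_agreement(a, b):
    # intervals in which both observers recorded exactly the same count
    return 100 * np.mean(a == b)

def proportional_reliability(a, b):
    # per-interval smaller/larger count (1 when both are zero), averaged
    hi = np.maximum(a, b).astype(float)
    lo = np.minimum(a, b).astype(float)
    ratio = np.where(hi == 0, 1.0, lo / np.where(hi == 0, 1.0, hi))
    return 100 * ratio.mean()

for name, f in [("total", total_reliability), ("interval", interval_reliability),
                ("exact agreement", exact_agreement), ("proportional", proportional_reliability)]:
    print(f"{name}: {f(obs1, obs2):.1f}%")
```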
Timing in a Variable Interval Procedure: Evidence for a Memory Singularity
Matell, Matthew S.; Kim, Jung S.; Hartshorne, Loryn
2013-01-01
Rats were trained in either a 30s peak-interval procedure, or a 15–45s variable interval peak procedure with a uniform distribution (Exp 1) or a ramping probability distribution (Exp 2). Rats in all groups showed peak shaped response functions centered around 30s, with the uniform group having an earlier and broader peak response function and rats in the ramping group having a later peak function as compared to the single duration group. The changes in these mean functions, as well as the statistics from single trial analyses, can be better captured by a model of timing in which memory is represented by a single, average, delay to reinforcement compared to one in which all durations are stored as a distribution, such as the complete memory model of Scalar Expectancy Theory or a simple associative model. PMID:24012783
Prediction of Malaysian monthly GDP
NASA Astrophysics Data System (ADS)
Hin, Pooi Ah; Ching, Soo Huei; Yeing, Pan Wei
2015-12-01
The paper uses a method based on the multivariate power-normal distribution to predict the Malaysian Gross Domestic Product one month ahead. Letting r(t) be the vector consisting of the month-t values of m selected macroeconomic variables and GDP, we model the month-(t+1) GDP as dependent on the present and l-1 past values r(t), r(t-1),…,r(t-l+1) via a conditional distribution derived from a [(m+1)l+1]-dimensional power-normal distribution. The 100(α/2)% and 100(1-α/2)% points of the conditional distribution may be used to form an out-of-sample prediction interval, and this interval together with the mean of the conditional distribution may be used to predict the month-(t+1) GDP. The mean absolute percentage error (MAPE), estimated coverage probability and average length of the prediction interval are used as the criteria for selecting the suitable lag value l-1 and the variable subset from a pool of 17 macroeconomic variables. It is found that the relatively better models are those with 2 ≤ l ≤ 3 involving one or two of the macroeconomic variables Market Indicative Yield, Oil Prices, Exchange Rate and Import Trade.
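A minimal sketch of the three selection criteria named above (MAPE, coverage probability and average interval length), assuming arrays of out-of-sample actual values, point predictions and interval endpoints; the numbers are invented.

```python
import numpy as np

actual = np.array([102.1, 98.7, 105.3, 110.2, 107.8])      # observed GDP (index values)
point = np.array([101.0, 99.5, 104.0, 108.9, 109.1])       # conditional-mean predictions
lower = np.array([97.5, 95.8, 100.2, 104.7, 104.9])        # 100(alpha/2)% points
upper = np.array([104.6, 103.1, 107.9, 113.0, 113.4])      # 100(1-alpha/2)% points

mape = 100 * np.mean(np.abs((actual - point) / actual))
coverage = np.mean((actual >= lower) & (actual <= upper))
avg_length = np.mean(upper - lower)

print(f"MAPE = {mape:.2f}%, coverage = {coverage:.2f}, average interval length = {avg_length:.2f}")
# lag values and variable subsets with low MAPE, coverage near the nominal level,
# and short intervals would be preferred
```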
NASA Astrophysics Data System (ADS)
van de Giesen, Nicolaas; Hut, Rolf; ten Veldhuis, Marie-claire
2017-04-01
If one can assume that drop size distributions can be effectively described by a generalized gamma function [1], one can estimate this function on the basis of the distribution of time intervals between drops hitting a certain area. The arrival of a single drop is relatively easy to measure with simple consumer devices such as cameras or piezoelectric elements. Here we present an open-hardware design for the electronics and statistical processing of an intervalometer that measures time intervals between drop arrivals. The specific hardware in this case is a piezoelectric element in an appropriate housing, combined with an instrumentation op-amp and an Arduino processor. Although it would not be too difficult to simply register the arrival times of all drops, it is more practical to only report the main statistics. For this purpose, all intervals below a certain threshold during a reporting interval are summed and counted. We also sum the scaled squares, cubes, and fourth powers of the intervals. On the basis of the first four moments, one can estimate the corresponding generalized gamma function and obtain some sense of the accuracy of the underlying assumptions. Special attention is needed to determine the lower threshold of the drop sizes that can be measured. This minimum size often varies over the area being monitored, such as is the case for piezoelectric elements. We describe a simple method to determine these (distributed) minimal drop sizes and present a bootstrap method to make the necessary corrections. Reference [1] Uijlenhoet, R., and J. N. M. Stricker. "A consistent rainfall parameterization based on the exponential raindrop size distribution." Journal of Hydrology 218, no. 3 (1999): 101-127.
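A minimal sketch of the running-statistics idea in plain Python (standing in for the microcontroller firmware): within a reporting interval, drop intervals below a threshold are counted and their scaled sums of first to fourth powers accumulated, from which the first four raw moments follow; the simulated arrival process and threshold are assumptions.

```python
import random

random.seed(7)
threshold = 2.0          # s; intervals longer than this are treated as gaps, not counted
scale = 1.0              # scaling applied before accumulating powers, to limit overflow

count = s1 = s2 = s3 = s4 = 0
for _ in range(5000):                       # simulated drop arrivals in one reporting window
    dt = random.expovariate(4.0)            # stand-in for a measured inter-drop interval (s)
    if dt < threshold:
        x = dt / scale
        count += 1
        s1 += x; s2 += x*x; s3 += x**3; s4 += x**4

# first four raw moments of the interval distribution for this reporting interval
m1, m2, m3, m4 = (s * scale**k / count for k, s in zip((1, 2, 3, 4), (s1, s2, s3, s4)))
print(f"n={count}, mean={m1:.4f}s, E[t^2]={m2:.4f}, E[t^3]={m3:.4f}, E[t^4]={m4:.4f}")
# these moments are what would be matched to a generalized gamma interval model offline
```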
2013-01-01
Background Macrosatellite repeats (MSRs), usually spanning hundreds of kilobases of genomic DNA, comprise a significant proportion of the human genome. Because of their highly polymorphic nature, MSRs represent an extreme example of copy number variation, but their structure and function is largely understudied. Here, we describe a detailed study of six autosomal and two X chromosomal MSRs among 270 HapMap individuals from Central Europe, Asia and Africa. Copy number variation, stability and genetic heterogeneity of the autosomal macrosatellite repeats RS447 (chromosome 4p), MSR5p (5p), FLJ40296 (13q), RNU2 (17q) and D4Z4 (4q and 10q) and X chromosomal DXZ4 and CT47 were investigated. Results Repeat array size distribution analysis shows that all of these MSRs are highly polymorphic with the most genetic variation among Africans and the least among Asians. A mitotic mutation rate of 0.4-2.2% was observed, exceeding meiotic mutation rates and possibly explaining the large size variability found for these MSRs. By means of a novel Bayesian approach, statistical support for a distinct multimodal rather than a uniform allele size distribution was detected in seven out of eight MSRs, with evidence for equidistant intervals between the modes. Conclusions The multimodal distributions with evidence for equidistant intervals, in combination with the observation of MSR-specific constraints on minimum array size, suggest that MSRs are limited in their configurations and that deviations thereof may cause disease, as is the case for facioscapulohumeral muscular dystrophy. However, at present we cannot exclude that there are mechanistic constraints for MSRs that are not directly disease-related. This study represents the first comprehensive study of MSRs in different human populations by applying novel statistical methods and identifies commonalities and differences in their organization and function in the human genome. PMID:23496858
Hips don't lie: Waist-to-hip ratio in trauma patients.
Joseph, Bellal; Zangbar, Bardiya; Haider, Ansab Abbas; Kulvatunyou, Naroung; Khalil, Mazhar; Tang, Andrew; O'Keeffe, Terence; Friese, Randall S; Orouji Jokar, Tahereh; Vercruysse, Gary; Latifi, Rifat; Rhee, Peter
2015-12-01
Obesity measured by body mass index (BMI) is known to be associated with worse outcomes in trauma patients. Recent studies have assessed the impact of distribution of body fat measured by waist-hip ratio (WHR) on outcomes in nontrauma patients. The aim of this study was to assess the impact of distribution of body fat (WHR) on outcomes in trauma patients. A 6-month (June to November 2013) prospective cohort analysis of all admitted trauma patients was performed at our Level 1 trauma center. WHR was measured in each patient on the first day of hospital admission. Patients were stratified into two groups: patients with WHR of 1 or greater and patients with WHR of less than 1. Outcome measures were complications and in-hospital mortality. Complications were defined as infectious, pulmonary, and renal complications. Regression and correlation analyses were performed. A total of 240 patients were enrolled, of which 28.8% patients (n = 69) had WHR of 1 or greater. WHR had a weak correlation with BMI (R = 0.231, R = 0.481). Eighteen percent (n = 43) of the patients developed complications, and the mortality rate was 10% (n = 24). Patients with a WHR of 1 or greater were more likely to develop in-hospital complications (32% vs. 13%, p = 0.001) and had a higher mortality rate (24% vs. 4%, p = 0.001) compared with the patients with a WHR of less than 1. In multivariate analysis, a WHR of 1 or greater was an independent predictor for the development of complications (odds ratio, 3.1; 95% confidence interval 1.08-9.2; p = 0.03) and mortality (odds ratio, 13.1; 95% confidence interval, 1.1-70; p = 0.04). Distribution of body fat as measured by WHR independently predicts mortality and complications in trauma patients. WHR is better than BMI in predicting adverse outcomes in trauma patients. Assessing the fat distribution pattern in trauma patients may help improve patient outcomes through focused targeted intervention. Prognostic study, level II.
Effect of Sampling Period on Flood Frequency Distributions in the Susquehanna Basin
NASA Astrophysics Data System (ADS)
Kargar, M.; Beighley, R. E.
2010-12-01
Flooding is a devastating natural hazard that claims many human lives and significantly impacts regional economies each year. Given the magnitude of flooding impacts, significant resources are dedicated to the development of forecasting models for early warning and evacuation planning, construction of flood defenses (levees/dams) to limit flooding, and the design of civil infrastructure (bridges, culverts, storm sewers) to convey flood flows without failing. In all these cases, it is particularly important to understand the potential flooding risk in terms of both recurrence interval (i.e., return period) and magnitude. Flood frequency analysis (FFA) is a form of risk analysis used to extrapolate the return periods of floods beyond the gauged record. The technique involves using observed annual peak flow discharge data to calculate statistical information such as mean values, standard deviations, skewness, and recurrence intervals. Since discharge data for most catchments have been collected for periods of time less than 100 years, the estimation of the design discharge requires a degree of extrapolation. This study focuses on the assessment and modification of flood-frequency-based discharges for sites with limited sampling periods. Here, limited sampling period is intended to capture two issues: (1) a limited number of observations to adequately capture the flood frequency signal (i.e., minimum number of annual peaks needed) and (2) climate variability (i.e., a sampling period containing primarily “wet” or “dry” periods only). A total of 34 gauges, each with more than 70 years of data, spread throughout the Susquehanna River basin (71,000 sq km) were used to investigate the impact of sampling period on flood frequency distributions. Data subsets ranging from 10 years to the total number of years available were created from the data for each gauging station. To estimate the flood frequency, the Log Pearson Type III distribution was fit to the logarithms of instantaneous annual peak flows following Bulletin 17B guidelines of the U.S. Interagency Advisory Committee on Water Data. The resulting flood frequencies from these subsets were compared to the results from the entire record at each gauge. Based on the analysis, the minimum number of years required to obtain a reasonable flood frequency distribution was determined for each gauge. In addition, a method to adjust the flood frequency distribution at a given gauging station with limited data, based on other locations with longer periods of record, was developed.
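A minimal sketch of the subset experiment, fitting a log-Pearson Type III distribution (Pearson III fitted by moments to log10 annual peaks, without Bulletin 17B's skew weighting or outlier adjustments) to progressively longer subsets of a synthetic record; the simulated peaks stand in for a Susquehanna gauge.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

def lp3_flood(peaks, return_period):
    """Fit Pearson III to log10(peaks) by moments and return the T-year discharge."""
    logq = np.log10(peaks)
    dist = stats.pearson3(stats.skew(logq, bias=False),
                          loc=logq.mean(), scale=logq.std(ddof=1))
    return 10 ** dist.ppf(1 - 1 / return_period)

# synthetic 80-year annual peak record (m^3/s) standing in for a long gauge record
full = 10 ** stats.pearson3(0.2, loc=2.8, scale=0.3).rvs(80, random_state=rng)

for n in (10, 20, 40, 80):
    sub = full[:n]                     # first n years only, mimicking a short sampling period
    print(f"{n:2d}-year subset: Q100 = {lp3_flood(sub, 100):,.0f}  "
          f"(full record: {lp3_flood(full, 100):,.0f})")
```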
Choi, Hyang-Ki; Jung, Jin Ah; Fujita, Tomoe; Amano, Hideki; Ghim, Jong-Lyul; Lee, Dong-Hwan; Tabata, Kenichi; Song, Il-Dae; Maeda, Mika; Kumagai, Yuji; Mendzelevski, Boaz; Shin, Jae-Gook
2016-12-01
The goal of this study was to evaluate the moxifloxacin-induced QT interval prolongation in healthy male and female Korean and Japanese volunteers to investigate interethnic differences. This multicenter, randomized, double-blind, placebo-controlled, 2-way crossover study was conducted in healthy male and female Korean and Japanese volunteers. In each period, a single dose of moxifloxacin or placebo 400 mg was administered orally under fasting conditions. Triplicate 12-lead ECGs were recorded at defined time points before, up to 24 hours after dosing, and at corresponding time points during baseline. Serial blood sampling was conducted for pharmacokinetic analysis of moxifloxacin. The pharmacokinetic-pharmacodynamic data between the 2 ethnic groups were compared by using a typical analysis based on the intersection-union test and a nonlinear mixed effects method. A total of 39 healthy subjects (Korean, male: 10, female: 10; Japanese, male: 10, female: 9) were included in the analysis. The concentration-effect analysis revealed that there was no change in slope (and confirmed that the difference was caused by a change in the pharmacokinetic model of moxifloxacin). A 2-compartment model with first-order absorption provided the best description of moxifloxacin's pharmacokinetic parameters. Weight and sex were selected as significant covariates for central volume of distribution and intercompartmental clearance, respectively. An Emax model (E(C) = (Emax · C)/(EC50 + C)) described the QT interval data of this study well. However, ethnicity was not found to be a significant factor in a pharmacokinetic-pharmacodynamic link model. The drug-induced QTc prolongations evaluated using moxifloxacin as the probe did not seem to be significantly different between these Korean and Japanese subjects. ClinicalTrials.gov identifier: NCT01876316. Copyright © 2016 Elsevier HS Journals, Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Pai, Shantaram S.; Gyekenyesi, John P.
1988-01-01
The calculation of shape and scale parameters of the two-parameter Weibull distribution is described using the least-squares analysis and maximum likelihood methods for volume- and surface-flaw-induced fracture in ceramics with complete and censored samples. Detailed procedures are given for evaluating 90 percent confidence intervals for maximum likelihood estimates of shape and scale parameters, the unbiased estimates of the shape parameters, and the Weibull mean values and corresponding standard deviations. Furthermore, the necessary steps are described for detecting outliers and for calculating the Kolmogorov-Smirnov and the Anderson-Darling goodness-of-fit statistics and 90 percent confidence bands about the Weibull distribution. It also shows how to calculate the Batdorf flaw-density constants by using the Weibull distribution statistical parameters. The techniques described were verified with several example problems from the open literature and were coded in the Structural Ceramics Analysis and Reliability Evaluation (SCARE) design program.
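A minimal sketch of the two-parameter Weibull workflow for a complete (uncensored) sample: maximum-likelihood shape and scale estimates, a bootstrap 90% confidence interval for the shape, and Kolmogorov-Smirnov and Anderson-Darling statistics against the fitted distribution; the strength values are invented, the SCARE-specific censoring and unbiasing steps are not reproduced, and the Anderson-Darling call assumes a recent SciPy (≥ 1.11).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
strength = stats.weibull_min(c=8.0, scale=450.0).rvs(30, random_state=rng)  # toy MPa data

# maximum-likelihood estimates of shape (m) and scale (sigma_0), location fixed at zero
m_hat, _, s_hat = stats.weibull_min.fit(strength, floc=0)

# bootstrap 90% confidence interval for the shape parameter
boot = [stats.weibull_min.fit(rng.choice(strength, strength.size, replace=True), floc=0)[0]
        for _ in range(1000)]
lo, hi = np.percentile(boot, [5, 95])

# goodness-of-fit statistics against the fitted distribution
ks = stats.kstest(strength, stats.weibull_min(c=m_hat, scale=s_hat).cdf)
ad = stats.anderson(strength, dist="weibull_min")   # requires SciPy >= 1.11

print(f"shape = {m_hat:.2f} (90% CI {lo:.2f}-{hi:.2f}), scale = {s_hat:.1f} MPa")
print(f"KS statistic = {ks.statistic:.3f}, AD statistic = {ad.statistic:.3f}")
```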
Waltemeyer, Scott D.
2008-01-01
Estimates of the magnitude and frequency of peak discharges are necessary for the reliable design of bridges and culverts, for open-channel hydraulic analysis, and for flood-hazard mapping in New Mexico and surrounding areas. The U.S. Geological Survey, in cooperation with the New Mexico Department of Transportation, updated estimates of peak-discharge magnitude for gaging stations in the region and updated regional equations for estimation of peak discharge and frequency at ungaged sites. Equations were developed for estimating the magnitude of peak discharges for recurrence intervals of 2, 5, 10, 25, 50, 100, and 500 years at ungaged sites by use of data collected through 2004 for 293 gaging stations on unregulated streams that have 10 or more years of record. Peak discharges for selected recurrence intervals were determined at gaging stations by fitting observed data to a log-Pearson Type III distribution with adjustments for a low-discharge threshold and a zero skew coefficient. A low-discharge threshold was applied to frequency analysis of 140 of the 293 gaging stations. This application provides an improved fit of the log-Pearson Type III frequency distribution. Use of the low-discharge threshold generally eliminated peak discharges having a recurrence interval of less than 1.4 years from the probability-density function. Within each of the nine regions, logarithms of the maximum peak discharges for selected recurrence intervals were related to logarithms of basin and climatic characteristics by using stepwise ordinary least-squares regression techniques for exploratory data analysis. Generalized least-squares regression techniques, an improved regression procedure that accounts for time and spatial sampling errors, then were applied to the same data used in the ordinary least-squares regression analyses. The average standard error of prediction, which includes average sampling error and average standard error of regression, ranged from 38 to 93 percent (mean value is 62, and median value is 59) for the 100-year flood. In the 1996 investigation, the standard error of prediction for the flood regions ranged from 41 to 96 percent (mean value is 67, and median value is 68) for the 100-year flood analyzed by using generalized least-squares regression analysis. Overall, the equations based on generalized least-squares regression techniques are more reliable than those in the 1996 report because of the increased length of record and an improved geographic information system (GIS) method to determine basin and climatic characteristics. Flood-frequency estimates can be made for ungaged sites upstream or downstream from gaging stations by using a method that transfers flood-frequency data at the gaging station to the ungaged site through a drainage-area ratio adjustment equation. The peak discharge for a given recurrence interval at the gaging station, the drainage-area ratio, and the drainage-area exponent from the regional regression equation of the respective region are used to transfer the peak discharge for the recurrence interval to the ungaged site. Maximum observed peak discharge as related to drainage area was determined for New Mexico. Extreme events are commonly used in the design and appraisal of bridge crossings and other structures. Bridge-scour evaluations are commonly made by using the 500-year peak discharge for these appraisals.
Peak-discharge data collected at 293 gaging stations and 367 miscellaneous sites were used to develop a maximum peak-discharge relation as an alternative method of estimating peak discharge of an extreme event such as a maximum probable flood.
Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr
2012-01-01
Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap based algorithm was used to simulate the probability distribution of the efficiency gain estimates and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed. Copyright © 2011 Elsevier Ltd. All rights reserved.
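A minimal sketch of the bootstrap idea for an efficiency-gain-like ratio, including a shortest 95% interval; the per-history scores are simulated stand-ins (with a few artificially heavy-weight histories), not brachytherapy tallies, and the computing-time ratio is assumed.

```python
import numpy as np

rng = np.random.default_rng(2)

# simulated per-history scores: conventional MC vs correlated sampling (dose differences)
conv = rng.normal(1.0, 1.0, size=10000)
corr = rng.normal(1.0, 0.25, size=10000)
heavy = rng.random(corr.size) < 1e-3            # a few histories with large statistical weights
corr[heavy] += rng.normal(0, 20, size=heavy.sum())

t_ratio = 1.3                                   # assumed computing-time ratio t_corr / t_conv

def gain(a, b):
    # efficiency gain ~ (variance * time) of conventional over correlated sampling
    return np.var(a, ddof=1) / (np.var(b, ddof=1) * t_ratio)

def shortest_interval(samples, level=0.95):
    s = np.sort(samples)
    k = int(np.ceil(level * s.size))
    widths = s[k - 1:] - s[:s.size - k + 1]
    i = np.argmin(widths)
    return s[i], s[i + k - 1]

boot = np.array([gain(rng.choice(conv, conv.size, replace=True),
                      rng.choice(corr, corr.size, replace=True)) for _ in range(1000)])
lo, hi = shortest_interval(boot)
print(f"efficiency gain = {gain(conv, corr):.1f}, shortest 95% CI ({lo:.1f}, {hi:.1f})")
```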
NASA Technical Reports Server (NTRS)
Moroz, V. I.; Moshkin, B. Y.; Ekonomov, A. P.; Sanko, N. F.; Parfentev, N. A.; Golovin, Y. M.
1979-01-01
The spectra of the daytime sky of Venus were recorded on the Venera-11 and Venera-12 descent vehicles at various altitudes above the planet's surface, within the interval of 4500 to 12,000 Angstroms. The angular distribution of the brightness of the scattered radiation was recorded, and the ratio of water to carbon dioxide was studied with respect to the cloud cover boundaries.
Bénet, Thomas; Voirin, Nicolas; Nicolle, Marie-Christine; Picot, Stephane; Michallet, Mauricette; Vanhems, Philippe
2013-02-01
The duration of the incubation of invasive aspergillosis (IA) remains unknown. The objective of this investigation was to estimate the time interval between aplasia onset and that of IA symptoms in acute myeloid leukemia (AML) patients. A single-centre prospective survey (2004-2009) included all patients with AML and probable/proven IA. Parametric survival models were fitted to the distribution of the time intervals between aplasia onset and IA. Overall, 53 patients had IA after aplasia, with the median observed time interval between the two being 15 days. Based on log-normal distribution, the median estimated IA incubation period was 14.6 days (95% CI; 12.8-16.5 days).
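A minimal sketch of the log-normal fit and the resulting median with a normal-approximation 95% CI on the log scale; the intervals below are invented, not the study data, and the published analysis fitted parametric survival models rather than this closed-form shortcut.

```python
import numpy as np
from scipy import stats

days = np.array([8, 10, 11, 12, 13, 14, 15, 15, 16, 17, 18, 20, 22, 25, 30], float)

logs = np.log(days)
mu, sigma = logs.mean(), logs.std(ddof=1)
median = np.exp(mu)                                   # log-normal median = exp(mu)
se_mu = sigma / np.sqrt(days.size)
lo, hi = np.exp(mu - 1.96 * se_mu), np.exp(mu + 1.96 * se_mu)
print(f"estimated median incubation: {median:.1f} days (95% CI {lo:.1f}-{hi:.1f})")

# equivalently, a full parametric fit with the location fixed at zero
shape, _, scale = stats.lognorm.fit(days, floc=0)
print(f"scipy fit: median = {scale:.1f} days, sigma = {shape:.2f}")
```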
Mixed-mode oscillations and interspike interval statistics in the stochastic FitzHugh-Nagumo model
NASA Astrophysics Data System (ADS)
Berglund, Nils; Landon, Damien
2012-08-01
We study the stochastic FitzHugh-Nagumo equations, modelling the dynamics of neuronal action potentials in parameter regimes characterized by mixed-mode oscillations. The interspike time interval is related to the random number of small-amplitude oscillations separating consecutive spikes. We prove that this number has an asymptotically geometric distribution, whose parameter is related to the principal eigenvalue of a substochastic Markov chain. We provide rigorous bounds on this eigenvalue in the small-noise regime and derive an approximation of its dependence on the system's parameters for a large range of noise intensities. This yields a precise description of the probability distribution of observed mixed-mode patterns and interspike intervals.
Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks
Lam, William H. K.; Li, Qingquan
2017-01-01
Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks. PMID:29210978
Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks.
Shi, Chaoyang; Chen, Bi Yu; Lam, William H K; Li, Qingquan
2017-12-06
Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks.
Automatic contouring of geologic fabric and finite strain data on the unit hyperboloid
NASA Astrophysics Data System (ADS)
Vollmer, Frederick W.
2018-06-01
Fabric and finite strain analysis, an integral part of studies of geologic structures and orogenic belts, is commonly done by the analysis of particles whose shapes can be approximated as ellipses. Given a sample of such particles, the mean and confidence intervals of particular parameters can be calculated, however, taking the extra step of plotting and contouring the density distribution can identify asymmetries or modes related to sedimentary fabrics or other factors. A common graphical strain analysis technique is to plot final ellipse ratios, Rf , versus orientations, ϕf on polar Elliott or Rf / ϕ plots to examine the density distribution. The plot may be contoured, however, it is desirable to have a contouring method that is rapid, reproducible, and based on the underlying geometry of the data. The unit hyperboloid, H2 , gives a natural parameter space for two-dimensional strain, and various projections, including equal-area and stereographic, have useful properties for examining density distributions for anisotropy. An index, Ia , is given to quantify the magnitude and direction of anisotropy. Elliott and Rf / ϕ plots can be understood by applying hyperbolic geometry and recognizing them as projections of H2 . These both distort area, however, so the equal-area projection is preferred for examining density distributions. The algorithm presented here gives fast, accurate, and reproducible contours of density distributions calculated directly on H2 . The algorithm back-projects the data onto H2 , where the density calculation is done at regular nodes using a weighting value based on the hyperboloid distribution, which is then contoured. It is implemented as an Octave compatible MATLAB function that plots ellipse data using a variety of projections, and calculates and displays contours of their density distribution on H2 .
NASA Technical Reports Server (NTRS)
Eigen, D. J.; Fromm, F. R.; Northouse, R. A.
1974-01-01
A new clustering algorithm is presented that is based on dimensional information. The algorithm includes an inherent feature selection criterion, which is discussed. Further, a heuristic method for choosing the proper number of intervals for a frequency distribution histogram, a feature necessary for the algorithm, is presented. The algorithm, although usable as a stand-alone clustering technique, is then utilized as a global approximator. Local clustering techniques and configuration of a global-local scheme are discussed, and finally the complete global-local and feature selector configuration is shown in application to a real-time adaptive classification scheme for the analysis of remote sensed multispectral scanner data.
Electrocardiogram reference intervals for clinically normal wild-born chimpanzees (Pan troglodytes).
Atencia, Rebeca; Revuelta, Luis; Somauroo, John D; Shave, Robert E
2015-08-01
The objective was to generate reference intervals for ECG variables in clinically normal chimpanzees (Pan troglodytes). The study included 100 clinically normal (51 young [< 10 years old] and 49 adult [≥ 10 years old]) wild-born chimpanzees. Electrocardiograms collected between 2009 and 2013 at the Tchimpounga Chimpanzee Rehabilitation Centre were assessed to determine heart rate, PR interval, QRS duration, QT interval, QRS axis, P axis, and T axis. Electrocardiographic characteristics for left ventricular hypertrophy (LVH) and morphology of the ST segment, T wave, and QRS complex were identified. Reference intervals for young and old animals were calculated as mean ± 1.96•SD for normally distributed data and as 5th to 95th percentiles for data not normally distributed. Differences between age groups were assessed by use of unpaired Student t tests. Reference intervals were generated for young and adult wild-born chimpanzees. Most animals had sinus rhythm with small or normal P wave morphology; 24 of 51 (47%) young chimpanzees and 30 of 49 (61%) adult chimpanzees had evidence of LVH as determined on the basis of criteria for humans. Cardiac disease has been implicated as the major cause of death in captive chimpanzees. Species-specific ECG reference intervals for chimpanzees may aid in the diagnosis and treatment of animals with, or at risk of developing, heart disease. Chimpanzees with ECG characteristics outside of these intervals should be considered for follow-up assessment and regular cardiac monitoring.
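A minimal sketch of the two reference-interval rules mentioned (mean ± 1.96 SD for approximately normal variables, 5th to 95th percentiles otherwise), with a normality check deciding between them; the QT values are simulated.

```python
import numpy as np
from scipy import stats

qt_ms = np.random.default_rng(4).normal(330, 25, size=100)   # toy QT intervals (ms)

def reference_interval(x, alpha=0.05):
    x = np.asarray(x, float)
    if stats.shapiro(x).pvalue > alpha:                       # roughly normally distributed
        m, s = x.mean(), x.std(ddof=1)
        return m - 1.96 * s, m + 1.96 * s, "mean ± 1.96 SD"
    return np.percentile(x, 5), np.percentile(x, 95), "5th-95th percentile"

lo, hi, rule = reference_interval(qt_ms)
print(f"QT reference interval: {lo:.0f}-{hi:.0f} ms ({rule})")
```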
Four applications of permutation methods to testing a single-mediator model.
Taylor, Aaron B; MacKinnon, David P
2012-09-01
Four applications of permutation tests to the single-mediator model are described and evaluated in this study. Permutation tests work by rearranging data in many possible ways in order to estimate the sampling distribution for the test statistic. The four applications to mediation evaluated here are the permutation test of ab, the permutation joint significance test, and the noniterative and iterative permutation confidence intervals for ab. A Monte Carlo simulation study was used to compare these four tests with the four best available tests for mediation found in previous research: the joint significance test, the distribution of the product test, and the percentile and bias-corrected bootstrap tests. We compared the different methods on Type I error, power, and confidence interval coverage. The noniterative permutation confidence interval for ab was the best performer among the new methods. It successfully controlled Type I error, had power nearly as good as the most powerful existing methods, and had better coverage than any existing method. The iterative permutation confidence interval for ab had lower power than do some existing methods, but it performed better than any other method in terms of coverage. The permutation confidence interval methods are recommended when estimating a confidence interval is a primary concern. SPSS and SAS macros that estimate these confidence intervals are provided.
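A minimal sketch of a permutation test of ab for a single-mediator model: a and b are estimated by least squares, and the observed product is compared with products obtained after permuting the mediator (one simple way to generate a null distribution; the paper evaluates several related procedures and confidence-interval variants). Data and seed are invented.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 200
x = rng.normal(size=n)
m = 0.4 * x + rng.normal(size=n)                     # mediator: a-path
y = 0.3 * m + 0.1 * x + rng.normal(size=n)           # outcome: b-path plus direct effect

def ab_estimate(x, m, y):
    a = np.polyfit(x, m, 1)[0]                       # slope of M on X
    X = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]      # slope of Y on M, controlling for X
    return a * b

obs = ab_estimate(x, m, y)
null = np.array([ab_estimate(x, rng.permutation(m), y) for _ in range(2000)])
p = (np.sum(np.abs(null) >= abs(obs)) + 1) / (null.size + 1)
print(f"ab = {obs:.3f}, permutation p-value = {p:.3f}")
```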
Scaling and memory in volatility return intervals in financial markets
Yamasaki, Kazuko; Muchnik, Lev; Havlin, Shlomo; Bunde, Armin; Stanley, H. Eugene
2005-01-01
For both stock and currency markets, we study the return intervals τ between the daily volatilities of the price changes that are above a certain threshold q. We find that the distribution function Pq(τ) scales with the mean return interval τ̄ as Pq(τ) = (1/τ̄) f(τ/τ̄). The scaling function f(x) is similar in form for all seven stocks and for all seven currency databases analyzed, and f(x) is consistent with a power-law form, f(x) ∼ x^-γ with γ ≈ 2. We also quantify how the conditional distribution Pq(τ|τ0) depends on the previous return interval τ0 and find that small (or large) return intervals are more likely to be followed by small (or large) return intervals. This “clustering” of the volatility return intervals is a previously unrecognized phenomenon that we relate to the long-term correlations known to be present in the volatility. PMID:15980152
Scaling and memory in volatility return intervals in financial markets
NASA Astrophysics Data System (ADS)
Yamasaki, Kazuko; Muchnik, Lev; Havlin, Shlomo; Bunde, Armin; Stanley, H. Eugene
2005-06-01
For both stock and currency markets, we study the return intervals τ between the daily volatilities of the price changes that are above a certain threshold q. We find that the distribution function Pq(τ) scales with the mean return interval τ̄ as Pq(τ) = (1/τ̄) f(τ/τ̄). The scaling function f(x) is similar in form for all seven stocks and for all seven currency databases analyzed, and f(x) is consistent with a power-law form, f(x) ∼ x^-γ with γ ≈ 2. We also quantify how the conditional distribution Pq(τ|τ0) depends on the previous return interval τ0 and find that small (or large) return intervals are more likely to be followed by small (or large) return intervals. This “clustering” of the volatility return intervals is a previously unrecognized phenomenon that we relate to the long-term correlations known to be present in the volatility.
Bootstrap Prediction Intervals in Non-Parametric Regression with Applications to Anomaly Detection
NASA Technical Reports Server (NTRS)
Kumar, Sricharan; Srivistava, Ashok N.
2012-01-01
Prediction intervals provide a measure of the probable interval in which the outputs of a regression model can be expected to occur. Subsequently, these prediction intervals can be used to determine if the observed output is anomalous or not, conditioned on the input. In this paper, a procedure for determining prediction intervals for outputs of nonparametric regression models using bootstrap methods is proposed. Bootstrap methods allow for a non-parametric approach to computing prediction intervals with no specific assumptions about the sampling distribution of the noise or the data. The asymptotic fidelity of the proposed prediction intervals is theoretically proved. Subsequently, the validity of the bootstrap based prediction intervals is illustrated via simulations. Finally, the bootstrap prediction intervals are applied to the problem of anomaly detection on aviation data.
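A minimal sketch of one common bootstrap recipe for prediction intervals (pairs bootstrap of the training data plus resampled residuals), with a k-nearest-neighbour regressor standing in for the nonparametric model; this is not the paper's exact construction.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(6)
x = rng.uniform(0, 10, 300)
y = np.sin(x) + rng.normal(0, 0.3, x.size)
X = x.reshape(-1, 1)

model = KNeighborsRegressor(n_neighbors=15).fit(X, y)
resid = y - model.predict(X)

x_new = np.array([[2.5], [7.0]])
boot_preds = []
for _ in range(500):
    idx = rng.integers(0, x.size, x.size)                 # pairs bootstrap of the training set
    m_b = KNeighborsRegressor(n_neighbors=15).fit(X[idx], y[idx])
    noise = rng.choice(resid, size=x_new.shape[0], replace=True)
    boot_preds.append(m_b.predict(x_new) + noise)         # model uncertainty + noise

lo, hi = np.percentile(boot_preds, [2.5, 97.5], axis=0)
pred = model.predict(x_new)
for xn, p, l, h in zip(x_new.ravel(), pred, lo, hi):
    print(f"x={xn:.1f}: prediction {p:.2f}, 95% prediction interval ({l:.2f}, {h:.2f})")
# an observed output falling far outside its interval would be flagged as anomalous
```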
Black, Andrew J.; Ross, Joshua V.
2013-01-01
The clinical serial interval of an infectious disease is the time between date of symptom onset in an index case and the date of symptom onset in one of its secondary cases. It is a quantity which is commonly collected during a pandemic and is of fundamental importance to public health policy and mathematical modelling. In this paper we present a novel method for calculating the serial interval distribution for a Markovian model of household transmission dynamics. This allows the use of Bayesian MCMC methods, with explicit evaluation of the likelihood, to fit to serial interval data and infer parameters of the underlying model. We use simulated and real data to verify the accuracy of our methodology and illustrate the importance of accounting for household size. The output of our approach can be used to produce posterior distributions of population level epidemic characteristics. PMID:24023679
An iterative method for analysis of hadron ratios and Spectra in relativistic heavy-ion collisions
NASA Astrophysics Data System (ADS)
Choi, Suk; Lee, Kang Seog
2016-04-01
A new iteration method is proposed for analyzing both the multiplicities and the transverse momentum spectra measured within a small rapidity interval with a low-momentum cut-off, without assuming invariance of the rapidity distribution under Lorentz boosts, and is applied to the hadron data measured by the ALICE collaboration for Pb+Pb collisions at √s_NN = 2.76 TeV. In order to correctly account for the resonance contribution to the small rapidity interval measured, we consider only ratios involving hadrons whose transverse momentum spectrum is available. In spite of the small number of ratios considered, the quality of the fit to both the ratios and the transverse momentum spectra is excellent. Moreover, the ratios involving strange baryons calculated with the fitted parameters agree with the data surprisingly well.
Distributed fiber sparse-wideband vibration sensing by sub-Nyquist additive random sampling
NASA Astrophysics Data System (ADS)
Zhang, Jingdong; Zheng, Hua; Zhu, Tao; Yin, Guolu; Liu, Min; Bai, Yongzhong; Qu, Dingrong; Qiu, Feng; Huang, Xianbing
2018-05-01
The round-trip time of the light pulse limits the maximum detectable vibration frequency response range of phase-sensitive optical time domain reflectometry (φ-OTDR). Unlike the uniform laser pulse interval in conventional φ-OTDR, we randomly modulate the pulse interval, so that an equivalent sub-Nyquist additive random sampling (sNARS) is realized for every sensing point of the long interrogation fiber. For a φ-OTDR system with 10 km sensing length, the sNARS method is optimized by theoretical analysis and Monte Carlo simulation, and the experimental results verify that a wideband sparse signal can be identified and reconstructed. Such a method can broaden the vibration frequency response range of φ-OTDR, which is of great significance in sparse-wideband-frequency vibration signal detection, such as rail track monitoring and metal defect detection.
Multichannel Spectrometer of Time Distribution
NASA Astrophysics Data System (ADS)
Akindinova, E. V.; Babenko, A. G.; Vakhtel, V. M.; Evseev, N. A.; Rabotkin, V. A.; Kharitonova, D. D.
2015-06-01
For the research and control of the characteristics of radiation fluxes, in particular from radioactive sources (see, for example, paper [1]), a spectrometer and methods of data measurement and processing were created based on the MC-2A multichannel counter of the arrival times of random events (pulses from a particle detector) (SPC "ASPECT"). The spectrometer has four independent channels for registering the arrival times of pulses, with corresponding amplitude-spectrometric channels used to monitor, via the energy spectra, the stationarity of operation of each path from the detector to the amplifier. Alpha radiation is registered by semiconductor detectors with an energy resolution of 16-30 keV. Using the spectrometer, measurements were made of oscillations of the 239-Pu alpha-radiation flux intensity, followed by an autocorrelation statistical analysis of the time series of counts.
NASA Astrophysics Data System (ADS)
Zoeller, G.
2017-12-01
Paleo- and historic earthquakes are the most important source of information for the estimation of long-term recurrence intervals in fault zones, because sequences of paleoearthquakes cover more than one seismic cycle. On the other hand, these events are often rare, dating uncertainties are enormous, and the problem of missing or misinterpreted events leads to additional complications. Taking these shortcomings into account, estimates of long-term recurrence intervals are usually unstable as long as no additional information is included. In the present study, we assume that the time to the next major earthquake depends on the rate of small and intermediate events between the large ones in terms of a "clock-change" model that leads to a Brownian Passage Time distribution for recurrence intervals. We take advantage of an earlier finding that the aperiodicity of this distribution can be related to the Gutenberg-Richter b-value, which is usually around one and can be estimated easily from instrumental seismicity in the region under consideration. This makes it possible to reduce the uncertainties in the estimation of the mean recurrence interval significantly, especially for short paleoearthquake sequences and high dating uncertainties. We present illustrative case studies from Southern California and compare the method with the commonly used approach of exponentially distributed recurrence times assuming a stationary Poisson process.
Measurements of geomagnetically trapped alpha particles, 1968-1970. I - Quiet time distributions
NASA Technical Reports Server (NTRS)
Krimigis, S. M.; Verzariu, P.
1973-01-01
Results are presented of observations of geomagnetically trapped alpha particles over the energy range from 1.18 to 8 MeV performed with the aid of the Injun 5 polar-orbiting satellite during the period from September 1968 to May 1970. Following a presentation of a time history covering this entire period, a detailed analysis is made of the magnetically quiet period from Feb. 11 to 28, 1970. During this period the alpha particle fluxes and the intensity ratio of alpha particles to protons attained their lowest values in approximately 20 months; the alpha particle intensity versus L profile was most similar to the proton profile at the same energy per nucleon interval; the intensity ratio was nearly constant as a function of L in the same energy per nucleon representation, but rose sharply with L when computed in the same total energy interval; the variation of alpha particle intensity with B suggested a steep angular distribution at small equatorial pitch angles, while the intensity ratio showed little dependence on B; and the alpha particle spectral parameter showed a markedly different dependence on L from the equivalent one for protons.
Wen, Zhe; Tong, Guansheng; Liu, Yong; Meeks, Jacqui K; Ma, Daqing; Yang, Jigang
2014-05-01
The aim of this study was to analyze the imaging characteristics of (99m)Tc-dextran ((99m)Tc-DX) lymphatic imaging in the diagnosis of primary intestinal lymphangiectasia (PIL). Forty-one patients were diagnosed with PIL, with the diagnosis subsequently confirmed by laparotomy, endoscopy, biopsy, or capsule colonoscopy. Nineteen patients were male and 22 were female. A whole-body (99m)Tc-DX scan was performed at 10 min, 1 h, 3 h, and 6 h after injection. The 10 min and 1 h postinjection intervals were considered the early phase, the 3 h postinjection interval was considered the middle phase, and the 6 h postinjection interval was considered the delayed phase. The imaging characteristics of (99m)Tc-DX lymphatic imaging in PIL were of five different types: (i) presence of dynamic radioactivity in the intestine, associated with radioactivity moving from the small intestine to the ascending and transverse colon; (ii) presence of delayed dynamic radioactivity in the intestine, no radioactivity or little radioactivity distributing in the intestine in the early phase, or significant radioactivity distributing in the intestine in the delayed phase; (iii) radioactivity distributing in the intestine and abdominal cavity; (iv) radioactivity distributing only in the abdominal cavity with no radioactivity in the intestines; and (v) no radioactivity distributing in the intestine or abdominal cavity. (99m)Tc-DX lymphatic imaging in PIL showed different imaging characteristics. Caution should be exercised in the diagnosis of PIL using lymphoscintigraphy. Lymphoscintigraphy is a safe and accurate examination method and is a significant diagnostic tool in the diagnosis of PIL.
H. T. Schreuder; M. S. Williams
2000-01-01
In simulation sampling from forest populations using sample sizes of 20, 40, and 60 plots respectively, confidence intervals based on the bootstrap (accelerated, percentile, and t-distribution based) were calculated and compared with those based on the classical t confidence intervals for mapped populations and subdomains within those populations. A 68.1 ha mapped...
Huang, Shi; MacKinnon, David P.; Perrino, Tatiana; Gallo, Carlos; Cruden, Gracelyn; Brown, C Hendricks
2016-01-01
Mediation analysis often requires larger sample sizes than main effect analysis to achieve the same statistical power. Combining results across similar trials may be the only practical option for increasing statistical power for mediation analysis in some situations. In this paper, we propose a method to estimate: 1) marginal means for mediation path a, the relation of the independent variable to the mediator; 2) marginal means for path b, the relation of the mediator to the outcome, across multiple trials; and 3) the between-trial level variance-covariance matrix based on a bivariate normal distribution. We present the statistical theory and an R computer program to combine regression coefficients from multiple trials to estimate a combined mediated effect and confidence interval under a random effects model. Values of coefficients a and b, along with their standard errors from each trial are the input for the method. This marginal likelihood based approach with Monte Carlo confidence intervals provides more accurate inference than the standard meta-analytic approach. We discuss computational issues, apply the method to two real-data examples and make recommendations for the use of the method in different settings. PMID:28239330
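A minimal sketch of the Monte Carlo confidence-interval step for a combined mediated effect a·b; here the trial-level coefficients are pooled with a simple inverse-variance (fixed-effect) weighting rather than the full random-effects model of the paper, and all coefficient values are invented for illustration.

```python
import numpy as np

# Hypothetical path coefficients (a: X->M, b: M->Y) and SEs from three trials.
a    = np.array([0.40, 0.55, 0.35]);  se_a = np.array([0.10, 0.12, 0.09])
b    = np.array([0.30, 0.25, 0.45]);  se_b = np.array([0.08, 0.07, 0.11])

def pool(est, se):
    """Inverse-variance weighted (fixed-effect) pooled estimate and its SE."""
    w = 1.0 / se**2
    return (w * est).sum() / w.sum(), np.sqrt(1.0 / w.sum())

a_bar, se_a_bar = pool(a, se_a)
b_bar, se_b_bar = pool(b, se_b)

# Monte Carlo confidence interval for the mediated effect a*b.
rng = np.random.default_rng(1)
draws = rng.normal(a_bar, se_a_bar, 100_000) * rng.normal(b_bar, se_b_bar, 100_000)
lo, hi = np.percentile(draws, [2.5, 97.5])
print(f"pooled mediated effect = {a_bar*b_bar:.3f}, 95% MC CI = ({lo:.3f}, {hi:.3f})")
```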
Non stationary analysis of heart rate variability during the obstructive sleep apnea.
Méndez, M O; Bianchi, A M; Cerutti, S
2004-01-01
Characteristic fluctuations of the heart rate are found during obstructive sleep apnea (OSA): bradycardia in the apneic phase and tachycardia at the recovery of ventilation. In order to assess the autonomic response to OSA, the Born-Jordan time-frequency distribution and evolutive Poincaré plots are used in this study. A database with ECG and respiratory signal recordings was taken from PhysioNet. During OSA, all spectral indexes showed oscillations corresponding to the alternation between bradycardia and tachycardia in the RR intervals, as well as greater values than during control epochs. The Born-Jordan distribution and evolutive Poincaré plots could help to characterize and develop an index for the evaluation of OSA. The very low frequency component could also be a good index of OSA.
TEMPERATURE DISTRIBUTION IN A DIFFUSION CLOUD CHAMBER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slavic, I.; Szymakowski, J.; Stachorska, D.
1961-03-01
A diffusion cloud chamber with working conditions within a pressure range from 10 mm Hg to 2 atmospheres and at variable boundary surface temperatures over a wide interval is described. A simple procedure is described for cooling and thermoregulating the bottom of the chamber by means of a vapor flow of liquid air, which makes possible the achievement of temperatures as low as -120 deg C with stability better than plus or minus 1 deg C. A method for the measurement of the temperature distribution by means of a thermistor is described, and a number of curves of the observed temperature gradient, dependent on the boundary surface temperature, are given. An analysis of other factors influencing the stable operation of the diffusion cloud chamber was made. (auth)
Statistical analysis of field data for aircraft warranties
NASA Astrophysics Data System (ADS)
Lakey, Mary J.
Air Force and Navy maintenance data collection systems were researched to determine their scientific applicability to the warranty process. New and unique algorithms were developed to extract failure distributions which were then used to characterize how selected families of equipment typically fail. Families of similar equipment were identified in terms of function, technology and failure patterns. Statistical analyses and applications such as goodness-of-fit tests, maximum likelihood estimation and derivation of confidence intervals for the probability density function parameters were applied to characterize the distributions and their failure patterns. Statistical and reliability theory, with relevance to equipment design and operational failures, were also determining factors in characterizing the failure patterns of the equipment families. Inferences about the families with relevance to warranty needs were then made.
Monte Carlo simulation of wave sensing with a short pulse radar
NASA Technical Reports Server (NTRS)
Levine, D. M.; Davisson, L. D.; Kutz, R. L.
1977-01-01
A Monte Carlo simulation is used to study the ocean wave sensing potential of a radar which scatters short pulses at small off-nadir angles. In the simulation, realizations of a random surface are created commensurate with an assigned probability density and power spectrum. Then the signal scattered back to the radar is computed for each realization using a physical optics analysis which takes wavefront curvature and finite radar-to-surface distance into account. In the case of a Pierson-Moskowitz spectrum and a normally distributed surface, reasonable assumptions for a fully developed sea, it has been found that the cumulative distribution of time intervals between peaks in the scattered power provides a measure of surface roughness. This observation is supported by experiments.
Evaluation of statistical models for forecast errors from the HBV model
NASA Astrophysics Data System (ADS)
Engeland, Kolbjørn; Renard, Benjamin; Steinsland, Ingelin; Kolberg, Sjur
2010-04-01
Three statistical models for the forecast errors for inflow into the Langvatn reservoir in Northern Norway have been constructed and tested according to the agreement between (i) the forecast distribution and the observations and (ii) median values of the forecast distribution and the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first order auto-regressive model was constructed for the forecast errors. The parameters were conditioned on weather classes. In the second model the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first order auto-regressive model was constructed for the forecast errors. For the third model positive and negative errors were modeled separately. The errors were first NQT-transformed before conditioning the mean error values on climate, forecasted inflow and yesterday's error. To test the three models we applied three criteria: we wanted (a) the forecast distribution to be reliable; (b) the forecast intervals to be narrow; (c) the median values of the forecast distribution to be close to the observed values. Models 1 and 2 gave almost identical results. The median values improved the forecast, with the Nash-Sutcliffe efficiency R_eff increasing from 0.77 for the original forecast to 0.87 for the corrected forecasts. Models 1 and 2 over-estimated the forecast intervals but gave the narrowest intervals. Their main drawback was that their distributions were less reliable than those of Model 3. For Model 3 the median values did not fit well since the auto-correlation was not accounted for. Since Model 3 did not benefit from the potential variance reduction that lies in bias estimation and removal, it gave on average wider forecast intervals than the two other models. At the same time Model 3 on average slightly under-estimated the forecast intervals, probably explained by the use of average measures to evaluate the fit.
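A stripped-down sketch of the structure of the first model described above (Box-Cox transformation followed by a first-order autoregressive model for the forecast errors), omitting the conditioning on weather classes; the synthetic inflow series and the fitted λ are placeholders.

```python
import numpy as np
from scipy.stats import boxcox
from scipy.special import boxcox as bc_transform, inv_boxcox

rng = np.random.default_rng(2)
obs = rng.gamma(4.0, 10.0, 365)                     # synthetic observed inflows
fcst = obs * rng.lognormal(0.0, 0.15, obs.size)     # synthetic forecasted inflows

z_obs, lam = boxcox(obs)                            # Box-Cox lambda fitted on observations
z_fcst = bc_transform(fcst, lam)                    # same lambda applied to forecasts
e = z_obs - z_fcst                                  # errors in transformed space

# First-order autoregressive model for the transformed forecast errors.
phi = np.corrcoef(e[1:], e[:-1])[0, 1]
sigma = np.std(e[1:] - phi * e[:-1], ddof=1)

# One-step-ahead corrected forecast and 95% interval, given tomorrow's raw
# forecast f_next and today's transformed error e_today.
f_next, e_today = fcst[-1], e[-1]
z_center = bc_transform(f_next, lam) + phi * e_today
z_lo, z_hi = z_center - 1.96 * sigma, z_center + 1.96 * sigma
print("median forecast:", inv_boxcox(z_center, lam))
print("95% interval:", inv_boxcox(z_lo, lam), inv_boxcox(z_hi, lam))
```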
Stochastic space interval as a link between quantum randomness and macroscopic randomness?
NASA Astrophysics Data System (ADS)
Haug, Espen Gaarder; Hoff, Harald
2018-03-01
For many stochastic phenomena, we observe statistical distributions that have fat tails and high peaks compared to the Gaussian distribution. In this paper, we will explain how observable statistical distributions in the macroscopic world could be related to the randomness in the subatomic world. We show that fat-tailed (leptokurtic) phenomena in our everyday macroscopic world are ultimately rooted in Gaussian (or very close to Gaussian) distributed subatomic particle randomness, but they are not, in a strict sense, Gaussian distributions. By running a truly random experiment over a three-and-a-half-year period, we observed a type of random behavior in trillions of photons. Combining our results with simple logic, we find that fat-tailed and high-peaked statistical distributions are exactly what we would expect to observe if the subatomic world is quantized and not continuously divisible. We extend our analysis to the fact that one typically observes fat tails and high peaks relative to the Gaussian distribution in stocks and commodity prices and many aspects of the natural world; these instances are all observable and documentable macro phenomena that strongly suggest that the ultimate building blocks of nature are discrete (e.g. they appear in quanta).
NASA Astrophysics Data System (ADS)
Sembiring, N.; Ginting, E.; Darnello, T.
2017-12-01
In a company that produces refined sugar, the production floor has not reached the desired level of critical machine availability because the machines often suffer damage (breakdowns). This results in sudden losses of production time and production opportunities. The problem can be addressed with the Reliability Engineering method, in which a statistical approach to the historical damage data is used to identify the pattern of the distribution. The method can provide values for the reliability, failure rate, and availability of a machine during the scheduled maintenance interval. The distribution test of the time-between-failures (MTTF) data gives a lognormal distribution for the flexible hose component and a Weibull distribution for the teflon cone lifting component, while the distribution test of the mean time to repair (MTTR) data gives an exponential distribution for the flexible hose component and a Weibull distribution for the teflon cone lifting component. For the flexible hose component on a replacement schedule of every 720 hours, the actual reliability obtained is 0.2451 and the availability 0.9960, while for the critical teflon cone lifting component on a replacement schedule of every 1944 hours, the actual reliability obtained is 0.4083 and the availability 0.9927.
NASA Astrophysics Data System (ADS)
Wang, Z.; Gu, Z.; Chen, B.; Yuan, J.; Wang, C.
2016-12-01
The CHAOS-6 geomagnetic field model, presented in 2016 by Denmark's national space institute (DTU Space), is a model of the near-Earth magnetic field. Using the CHAOS-6 model, the seven geomagnetic field components were calculated at 30 observatories in China for 2015 and at 3 observatories in China spanning the time interval 2008.0-2016.5. The same seven components were also obtained from practical geomagnetic observations in China. Based on the model-calculated data and the observational data, we have compared and analyzed the spatial distribution and the secular variation of the geomagnetic field in China. There are obvious differences between the two types of data. The CHAOS-6 model cannot describe the spatial distribution and the secular variation of the geomagnetic field in China with comparable precision because of the regional and local magnetic anomalies in China.
Rapid Temporal Changes of Midtropospheric Winds
NASA Technical Reports Server (NTRS)
Merceret, Francis J.
1997-01-01
The statistical distribution of the magnitude of the vector wind change over 0.25-, 1-, 2-, and 4-h periods based on data from October 1995 through March 1996 over central Florida is presented. The wind changes at altitudes from 6 to 17 km were measured using the Kennedy Space Center 50-MHz Doppler radar wind profiler. Quality-controlled profiles were produced every 5 min for 112 gates, each representing 150 m in altitude. Gates 28 through 100 were selected for analysis because of their significance to ascending space launch vehicles. The distribution was found to be lognormal. The parameters of the lognormal distribution depend systematically on the time interval. This dependence is consistent with the behavior of structure functions in the f^(5/3) spectral regime. There is a small difference between the 1995 data and the 1996 data, which may represent a weak seasonal effect.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spane, Frank A.; Newcomer, Darrell R.
2010-06-15
This report presents test descriptions and analysis results for multiple, stress-level slug tests that were performed at selected test/depth intervals within three Operable Unit (OU) UP-1 wells: 299-W19-48 (C4300/Well K), 699-30-66 (C4298/Well R), and 699-36-70B (C4299/Well P). These wells are located within, adjacent to, and to the southeast of the Hanford Site 200-West Area. The test intervals were characterized as the individual boreholes were advanced to their final drill depths. The primary objective of the hydrologic tests was to provide information pertaining to the areal variability and vertical distribution of hydraulic conductivity with depth at these locations within the OU UP-1 area. This type of characterization information is important for predicting/simulating contaminant migration (i.e., numerical flow/transport modeling) and designing proper monitor well strategies for OU and Waste Management Area locations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spane, Frank A.; Newcomer, Darrell R.
2010-06-21
The following report presents test descriptions and analysis results for multiple, stress-level slug tests that were performed at selected test/depth intervals within three Operable Unit (OU) ZP-1 wells: 299-W11-43 (C4694/Well H), 299-W15-50 (C4302/Well E), and 299-W18-16 (C4303/Well D). These wells are located within the south-central region of the Hanford Site 200-West Area (Figure 1.1). The test intervals were characterized as the individual boreholes were advanced to their final drill depths. The primary objective of the hydrologic tests was to provide information pertaining to the areal variability and vertical distribution of hydraulic conductivity with depth at these locations within the OU ZP-1 area. This type of characterization information is important for predicting/simulating contaminant migration (i.e., numerical flow/transport modeling) and designing proper monitor well strategies for OU and Waste Management Area locations.
Bansal, Ravi; Staib, Lawrence H.; Laine, Andrew F.; Xu, Dongrong; Liu, Jun; Posecion, Lainie F.; Peterson, Bradley S.
2010-01-01
Images from different individuals typically cannot be registered precisely because anatomical features within the images differ across the people imaged and because the current methods for image registration have inherent technological limitations that interfere with perfect registration. Quantifying the inevitable error in image registration is therefore of crucial importance in assessing the effects that image misregistration may have on subsequent analyses in an imaging study. We have developed a mathematical framework for quantifying errors in registration by computing the confidence intervals of the estimated parameters (3 translations, 3 rotations, and 1 global scale) for the similarity transformation. The presence of noise in images and the variability in anatomy across individuals ensures that estimated registration parameters are always random variables. We assume a functional relation among intensities across voxels in the images, and we use the theory of nonlinear, least-squares estimation to show that the parameters are multivariate Gaussian distributed. We then use the covariance matrix of this distribution to compute the confidence intervals of the transformation parameters. These confidence intervals provide a quantitative assessment of the registration error across the images. Because transformation parameters are nonlinearly related to the coordinates of landmark points in the brain, we subsequently show that the coordinates of those landmark points are also multivariate Gaussian distributed. Using these distributions, we then compute the confidence intervals of the coordinates for landmark points in the image. Each of these confidence intervals in turn provides a quantitative assessment of the registration error at a particular landmark point. Because our method is computationally intensive, however, its current implementation is limited to assessing the error of the parameters in the similarity transformation across images. We assessed the performance of our method in computing the error in estimated similarity parameters by applying that method to a real-world dataset. Our results showed that the size of the confidence intervals computed using our method decreased – i.e. our confidence in the registration of images from different individuals increased – for increasing amounts of blur in the images. Moreover, the size of the confidence intervals increased for increasing amounts of noise, misregistration, and differing anatomy. Thus, our method precisely quantified confidence in the registration of images that contain varying amounts of misregistration and varying anatomy across individuals. PMID:19138877
Baltzer, Pascal Andreas Thomas; Renz, Diane M; Kullnig, Petra E; Gajda, Mieczyslaw; Camara, Oumar; Kaiser, Werner A
2009-04-01
The identification of the most suspect enhancing part of a lesion is regarded as a major diagnostic criterion in dynamic magnetic resonance mammography. Computer-aided diagnosis (CAD) software allows the semi-automatic analysis of the kinetic characteristics of complete enhancing lesions, providing additional information about lesion vasculature. The diagnostic value of this information has not yet been quantified. Consecutive patients from routine diagnostic studies (1.5 T, 0.1 mmol gadopentetate dimeglumine, dynamic gradient-echo sequences at 1-minute intervals) were analyzed prospectively using CAD. Dynamic sequences were processed and reduced to a parametric map. Curve types were classified by initial signal increase (not significant, intermediate, and strong) and the delayed time course of signal intensity (continuous, plateau, and washout). Lesion enhancement was measured using CAD. The most suspect curve, the curve-type distribution percentage, and combined dynamic data were compared. Statistical analysis included logistic regression analysis and receiver-operating characteristic analysis. Fifty-one patients with 46 malignant and 44 benign lesions were enrolled. On receiver-operating characteristic analysis, the most suspect curve showed diagnostic accuracy of 76.7 +/- 5%. In comparison, the curve-type distribution percentage demonstrated accuracy of 80.2 +/- 4.9%. Combined dynamic data had the highest diagnostic accuracy (84.3 +/- 4.2%). These differences did not achieve statistical significance. With appropriate cutoff values, sensitivity and specificity, respectively, were found to be 80.4% and 72.7% for the most suspect curve, 76.1% and 83.6% for the curve-type distribution percentage, and 78.3% and 84.5% for both parameters. The integration of whole-lesion dynamic data tends to improve specificity. However, no statistical significance backs up this finding.
Liu, C C; Crone, N E; Franaszczuk, P J; Cheng, D T; Schretlen, D S; Lenz, F A
2011-08-25
The current model of fear conditioning suggests that it is mediated through modules involving the amygdala (AMY), hippocampus (HIP), and frontal lobe (FL). We now test the hypothesis that habituation and acquisition stages of a fear conditioning protocol are characterized by different event-related causal interactions (ERCs) within and between these modules. The protocol used the painful cutaneous laser as the unconditioned stimulus and ERC was estimated by analysis of local field potentials recorded through electrodes implanted for investigation of epilepsy. During the prestimulus interval of the habituation stage, FL>AMY ERC interactions were common. For comparison, in the poststimulus interval of the habituation stage, only a subdivision of the FL (dorsolateral prefrontal cortex, dlPFC) still exerted the FL>AMY ERC interaction (dlPFC>AMY). For a further comparison, during the poststimulus interval of the acquisition stage, the dlPFC>AMY interaction persisted and an AMY>FL interaction appeared. In addition to these ERC interactions between modules, the results also show ERC interactions within modules. During the poststimulus interval, HIP>HIP ERC interactions were more common during acquisition, and deep hippocampal contacts exerted causal interactions on superficial contacts, possibly explained by connectivity between the perihippocampal gyrus and the HIP. During the prestimulus interval of the habituation stage, AMY>AMY ERC interactions were commonly found, while interactions between the deep and superficial AMY (indirect pathway) were independent of intervals and stages. These results suggest that the network subserving fear includes distributed or widespread modules, some of which are themselves "local networks." ERC interactions between and within modules can be either static or change dynamically across intervals or stages of fear conditioning. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.
Falcaro, Milena; Pickles, Andrew
2007-02-10
We focus on the analysis of multivariate survival times with highly structured interdependency and subject to interval censoring. Such data are common in developmental genetics and genetic epidemiology. We propose a flexible mixed probit model that deals naturally with complex but uninformative censoring. The recorded ages of onset are treated as possibly censored ordinal outcomes with the interval censoring mechanism seen as arising from a coarsened measurement of a continuous variable observed as falling between subject-specific thresholds. This bypasses the requirement for the failure times to be observed as falling into non-overlapping intervals. The assumption of a normal age-of-onset distribution of the standard probit model is relaxed by embedding within it a multivariate Box-Cox transformation whose parameters are jointly estimated with the other parameters of the model. Complex decompositions of the underlying multivariate normal covariance matrix of the transformed ages of onset become possible. The new methodology is here applied to a multivariate study of the ages of first use of tobacco and first consumption of alcohol without parental permission in twins. The proposed model allows estimation of the genetic and environmental effects that are shared by both of these risk behaviours as well as those that are specific. 2006 John Wiley & Sons, Ltd.
Statistical inferences with jointly type-II censored samples from two Pareto distributions
NASA Astrophysics Data System (ADS)
Abu-Zinadah, Hanaa H.
2017-08-01
In several industrial fields the product comes from more than one production line, which calls for comparative life tests. This problem requires sampling from the different production lines, from which the joint censoring scheme arises. In this article we consider the Pareto lifetime distribution under a jointly type-II censoring scheme. The maximum likelihood estimators (MLEs) and the corresponding approximate confidence intervals, as well as the bootstrap confidence intervals, of the model parameters are obtained. Bayesian point estimates and credible intervals of the model parameters are also presented. A lifetime data set is analyzed for illustrative purposes. Monte Carlo results from simulation studies are presented to assess the performance of our proposed method.
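For orientation only, the sketch below works a much simpler case than the joint type-II censoring scheme of the article: maximum likelihood for a single complete Pareto sample, with a parametric-bootstrap confidence interval for the shape parameter; the true parameter values and sample size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

def pareto_mle(x):
    """MLEs of the Pareto scale (x_m) and shape (alpha) for a complete sample."""
    xm = x.min()
    alpha = x.size / np.log(x / xm).sum()
    return xm, alpha

# Synthetic lifetimes from a Pareto(x_m = 2, alpha = 3) distribution.
xm_true, alpha_true, n = 2.0, 3.0, 80
data = xm_true * (1.0 + rng.pareto(alpha_true, n))
xm_hat, alpha_hat = pareto_mle(data)

# Parametric bootstrap confidence interval for the shape parameter.
B = 2000
boot = np.empty(B)
for i in range(B):
    sample = xm_hat * (1.0 + rng.pareto(alpha_hat, n))
    boot[i] = pareto_mle(sample)[1]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"alpha_hat = {alpha_hat:.2f}, 95% bootstrap CI = ({lo:.2f}, {hi:.2f})")
```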
ERIC Educational Resources Information Center
Zhang, Guangjian; Preacher, Kristopher J.; Luo, Shanhong
2010-01-01
This article is concerned with using the bootstrap to assign confidence intervals for rotated factor loadings and factor correlations in ordinary least squares exploratory factor analysis. Coverage performances of "SE"-based intervals, percentile intervals, bias-corrected percentile intervals, bias-corrected accelerated percentile…
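The interval types compared in the article can be illustrated on a simpler statistic than a rotated factor loading; this sketch computes percentile and bias-corrected percentile bootstrap intervals for a sample correlation (the accelerated variant, which additionally needs a jackknife acceleration constant, is omitted). The data are synthetic.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
n = 150
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(scale=0.9, size=n)          # correlated synthetic data

def corr(ix):
    return np.corrcoef(x[ix], y[ix])[0, 1]

theta_hat = corr(np.arange(n))
B = 2000
boot = np.array([corr(rng.integers(0, n, n)) for _ in range(B)])

# Percentile interval.
perc = np.percentile(boot, [2.5, 97.5])

# Bias-corrected percentile interval (no acceleration constant).
z0 = norm.ppf(np.mean(boot < theta_hat))
lo_p = norm.cdf(2 * z0 + norm.ppf(0.025)) * 100
hi_p = norm.cdf(2 * z0 + norm.ppf(0.975)) * 100
bc = np.percentile(boot, [lo_p, hi_p])

print("estimate:", round(theta_hat, 3))
print("percentile CI:", perc.round(3), " bias-corrected CI:", bc.round(3))
```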
NASA Astrophysics Data System (ADS)
Naylor, M.; Main, I. G.; Greenhough, J.; Bell, A. F.; McCloskey, J.
2009-04-01
The Sumatran Boxing Day earthquake and subsequent large events provide an opportunity to re-evaluate the statistical evidence for characteristic earthquake events in frequency-magnitude distributions. Our aims are to (i) improve intuition regarding the properties of samples drawn from power laws, (ii) illustrate using random samples how appropriate Poisson confidence intervals can both aid the eye and provide an appropriate statistical evaluation of data drawn from power-law distributions, and (iii) apply these confidence intervals to test for evidence of characteristic earthquakes in subduction-zone frequency-magnitude distributions. We find no need for a characteristic model to describe frequency magnitude distributions in any of the investigated subduction zones, including Sumatra, due to an emergent skew in residuals of power law count data at high magnitudes combined with a sample bias for examining large earthquakes as candidate characteristic events.
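A small sketch of the kind of Poisson confidence bound advocated above, applied to binned counts from a synthetic Gutenberg-Richter sample using the exact (Garwood) chi-square form; the b-value, magnitude range, and bin width are arbitrary assumptions.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(5)
b_value, m_min, n_events = 1.0, 4.0, 2000
# Gutenberg-Richter magnitudes above m_min are exponential with rate b*ln(10).
mags = m_min + rng.exponential(1.0 / (b_value * np.log(10)), n_events)

edges = np.arange(4.0, 8.1, 0.5)
counts, _ = np.histogram(mags, edges)

def poisson_ci(k, conf=0.95):
    """Exact (Garwood) confidence interval for a Poisson count k."""
    a = 1.0 - conf
    lo = 0.0 if k == 0 else 0.5 * chi2.ppf(a / 2, 2 * k)
    hi = 0.5 * chi2.ppf(1 - a / 2, 2 * k + 2)
    return lo, hi

for left, k in zip(edges[:-1], counts):
    lo, hi = poisson_ci(k)
    print(f"M {left:.1f}-{left + 0.5:.1f}: count={k:5d}  95% CI=({lo:8.1f}, {hi:8.1f})")
# A 'characteristic' bump would have to fall outside these intervals to be significant.
```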
Urban Noise Recorded by Stationary Monitoring Stations
NASA Astrophysics Data System (ADS)
Bąkowski, Andrzej; Radziszewski, Leszek; Dekýš, Vladimir
2017-10-01
The paper presents the analysis results of equivalent sound level recorded by two road traffic noise monitoring stations. The stations were located in Kielce (an example of a medium-sized town in Poland) at the roads out of the town in the direction of Łódź and Lublin. The measurements were carried out through stationary stations monitoring the noise and traffic of motor vehicles. The RMS values based on A-weighted sound level were recorded every 1 s in the buffer and the results were registered every 1 min over the period of investigations. The registered data were the basis for calculating the equivalent sound level for three time intervals: from 6:00 to 18:00, from 18:00 to 22:00 and from 22:00 to 6:00. Analysis included the values of the equivalent sound level recorded for different days of the week split into 24 h periods, nights, days and evenings. The data analysed included recordings from 2013. The agreement of the distribution of the variable under analysis with a normal distribution was evaluated. It was demonstrated that in most cases (for both roads) there was sufficient evidence to reject the null hypothesis at the significance level of 0.05. It was noted that compared with Łódź Road, in the case of Lublin Road data, more cases were recorded for which the null hypothesis could not be rejected. Uncertainties of the equivalent sound level measurements were compared within the periods under analysis. The standard deviation, the coefficient of variation, the positional coefficient of variation, and the quartile deviation were proposed for performing a comparative analysis of the scatter of the obtained data. The investigations indicated that the recorded data varied depending on the traffic routes and time intervals. The differences concerned the values of uncertainties and coefficients of variation of the equivalent sound levels.
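For reference, the equivalent continuous sound level over an interval is the energy average of the short-term levels, L_eq = 10·log10((1/N)·Σ 10^(L_i/10)); the sketch below applies this to hypothetical 1-minute values grouped into the day, evening, and night periods used in the study.

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical 1-minute A-weighted equivalent levels for one 24-hour day (dB).
minutes = np.arange(1440)
leq_1min = 60 + 8 * np.sin(2 * np.pi * (minutes / 1440 - 0.25)) + rng.normal(0, 2, 1440)

def leq(levels_db):
    """Energy-average (equivalent continuous) sound level of a set of levels in dB."""
    return 10.0 * np.log10(np.mean(10.0 ** (levels_db / 10.0)))

hours = minutes / 60.0
periods = {
    "day (06:00-18:00)":     (hours >= 6) & (hours < 18),
    "evening (18:00-22:00)": (hours >= 18) & (hours < 22),
    "night (22:00-06:00)":   (hours >= 22) | (hours < 6),
}
for name, mask in periods.items():
    print(f"{name}: Leq = {leq(leq_1min[mask]):.1f} dB(A)")
```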
Bayesian analysis of energy and count rate data for detection of low count rate radioactive sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klumpp, John
We propose a radiation detection system which generates its own discrete sampling distribution based on past measurements of background. The advantage to this approach is that it can take into account variations in background with respect to time, location, energy spectra, detector-specific characteristics (i.e. different efficiencies at different count rates and energies), etc. This would therefore be a 'machine learning' approach, in which the algorithm updates and improves its characterization of background over time. The system would have a 'learning mode,' in which it measures and analyzes background count rates, and a 'detection mode,' in which it compares measurements from an unknown source against its unique background distribution. By characterizing and accounting for variations in the background, general purpose radiation detectors can be improved with little or no increase in cost. The statistical and computational techniques to perform this kind of analysis have already been developed. The necessary signal analysis can be accomplished using existing Bayesian algorithms which account for multiple channels, multiple detectors, and multiple time intervals. Furthermore, Bayesian machine-learning techniques have already been developed which, with trivial modifications, can generate appropriate decision thresholds based on the comparison of new measurements against a nonparametric sampling distribution. (authors)
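As a toy, frequentist stand-in for the idea of testing new measurements against a detector's own background history (the report describes a richer Bayesian, multi-channel treatment), the sketch below accumulates an empirical background distribution and flags a new count when it is extreme relative to that distribution; all counts and the alarm threshold are simulated assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# 'Learning mode': accumulate background counts per counting interval.
background = rng.poisson(12.0, 10_000)          # hypothetical background history

def empirical_p(count, history):
    """Fraction of background intervals with a count at least this large."""
    return np.mean(history >= count)

# 'Detection mode': compare new measurements against the background distribution.
threshold = 1e-3
for new_count in (15, 22, 30):
    p = empirical_p(new_count, background)
    flag = "ALARM" if p < threshold else "background-consistent"
    print(f"count={new_count:3d}  empirical p={p:.4f}  -> {flag}")
```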
NASA Astrophysics Data System (ADS)
Hemingway, Jordon D.; Rothman, Daniel H.; Rosengard, Sarah Z.; Galy, Valier V.
2017-11-01
Serial oxidation coupled with stable carbon and radiocarbon analysis of sequentially evolved CO2 is a promising method to characterize the relationship between organic carbon (OC) chemical composition, source, and residence time in the environment. However, observed decay profiles depend on experimental conditions and oxidation pathway. It is therefore necessary to properly assess serial oxidation kinetics before utilizing decay profiles as a measure of OC reactivity. We present a regularized inverse method to estimate the distribution of OC activation energy (E), a proxy for bond strength, using serial oxidation. Here, we apply this method to ramped temperature pyrolysis or oxidation (RPO) analysis but note that this approach is broadly applicable to any serial oxidation technique. RPO analysis directly compares thermal reactivity to isotope composition by determining the E range for OC decaying within each temperature interval over which CO2 is collected. By analyzing a decarbonated test sample at multiple masses and oven ramp rates, we show that OC decay during RPO analysis follows a superposition of parallel first-order kinetics and that resulting E distributions are independent of experimental conditions. We therefore propose the E distribution as a novel proxy to describe OC thermal reactivity and suggest that E vs. isotope relationships can provide new insight into the compositional controls on OC source and residence time.
Addressing data privacy in matched studies via virtual pooling.
Saha-Chaudhuri, P; Weinberg, C R
2017-09-07
Data confidentiality and shared use of research data are two desirable but sometimes conflicting goals in research with multi-center studies and distributed data. While ideal for straightforward analysis, confidentiality restrictions forbid creation of a single dataset that includes covariate information of all participants. Current approaches such as aggregate data sharing, distributed regression, meta-analysis and score-based methods can have important limitations. We propose a novel application of an existing epidemiologic tool, specimen pooling, to enable confidentiality-preserving analysis of data arising from a matched case-control, multi-center design. Instead of pooling specimens prior to assay, we apply the methodology to virtually pool (aggregate) covariates within nodes. Such virtual pooling retains most of the information used in an analysis with individual data and since individual participant data is not shared externally, within-node virtual pooling preserves data confidentiality. We show that aggregated covariate levels can be used in a conditional logistic regression model to estimate individual-level odds ratios of interest. The parameter estimates from the standard conditional logistic regression are compared to the estimates based on a conditional logistic regression model with aggregated data. The parameter estimates are shown to be similar to those without pooling and to have comparable standard errors and confidence interval coverage. Virtual data pooling can be used to maintain confidentiality of data from multi-center study and can be particularly useful in research with large-scale distributed data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khan, Sahubar Ali Mohd. Nadhar, E-mail: sahubar@uum.edu.my; Ramli, Razamin, E-mail: razamin@uum.edu.my; Baten, M. D. Azizul, E-mail: baten-math@yahoo.com
Agricultural production processes typically produce two types of outputs: economically desirable outputs as well as environmentally undesirable outputs (such as greenhouse gas emissions, nitrate leaching, effects on humans and organisms, and water pollution). In efficiency analysis, these undesirable outputs cannot be ignored and need to be included in order to obtain an accurate estimate of firms' efficiency. Additionally, climatic factors as well as data uncertainty can significantly affect the efficiency analysis. A number of approaches have been proposed in the DEA literature to account for undesirable outputs. Many researchers have pointed out that the directional distance function (DDF) approach is the best, as it allows for a simultaneous increase in desirable outputs and reduction of undesirable outputs. Additionally, it has been found that the interval data approach is the most suitable to account for data uncertainty, as it is much simpler to model and needs less information regarding its distribution and membership function. In this paper, an enhanced DEA model based on the DDF approach that considers undesirable outputs as well as climatic factors and interval data is proposed. This model will be used to determine the efficiency of rice farmers who produce undesirable outputs and operate under uncertainty. It is hoped that the proposed model will provide a better estimate of rice farmers' efficiency.
Resampling methods in Microsoft Excel® for estimating reference intervals
Theodorsson, Elvar
2015-01-01
Computer-intensive resampling/bootstrap methods are feasible when calculating reference intervals from non-Gaussian or small reference samples. Microsoft Excel® in version 2010 or later includes natural functions, which lend themselves well to this purpose, including recommended interpolation procedures for estimating 2.5 and 97.5 percentiles. The purpose of this paper is to introduce the reader to resampling estimation techniques in general and to using Microsoft Excel® 2010 for the purpose of estimating reference intervals in particular. Parametric methods are preferable to resampling methods when the distribution of observations in the reference samples is Gaussian or can be transformed to that distribution, even when the number of reference samples is less than 120. Resampling methods are appropriate when the distribution of data from the reference samples is non-Gaussian and when the number of reference individuals and corresponding samples is on the order of 40. At least 500-1000 random samples with replacement should be taken from the results of measurement of the reference samples. PMID:26527366
Resampling methods in Microsoft Excel® for estimating reference intervals.
Theodorsson, Elvar
2015-01-01
Computer-intensive resampling/bootstrap methods are feasible when calculating reference intervals from non-Gaussian or small reference samples. Microsoft Excel® in version 2010 or later includes natural functions, which lend themselves well to this purpose, including recommended interpolation procedures for estimating 2.5 and 97.5 percentiles. The purpose of this paper is to introduce the reader to resampling estimation techniques in general and to using Microsoft Excel® 2010 for the purpose of estimating reference intervals in particular. Parametric methods are preferable to resampling methods when the distribution of observations in the reference samples is Gaussian or can be transformed to that distribution, even when the number of reference samples is less than 120. Resampling methods are appropriate when the distribution of data from the reference samples is non-Gaussian and when the number of reference individuals and corresponding samples is on the order of 40. At least 500-1000 random samples with replacement should be taken from the results of measurement of the reference samples.
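A minimal sketch of the resampling estimate described above, implemented in plain NumPy rather than Microsoft Excel®: at least 1000 resamples with replacement are drawn from simulated reference results, and the 2.5th and 97.5th percentiles of each resample are summarised. The reference-sample distribution and size are invented.

```python
import numpy as np

rng = np.random.default_rng(8)
reference = rng.lognormal(mean=1.2, sigma=0.35, size=40)   # simulated reference results

B = 1000
lower = np.empty(B)
upper = np.empty(B)
for i in range(B):
    resample = rng.choice(reference, size=reference.size, replace=True)
    lower[i], upper[i] = np.percentile(resample, [2.5, 97.5])

print(f"reference interval (bootstrap means): {lower.mean():.2f} - {upper.mean():.2f}")
print("90% CI of lower limit:", np.percentile(lower, [5, 95]).round(2))
print("90% CI of upper limit:", np.percentile(upper, [5, 95]).round(2))
```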
Daluwatte, Chathuri; Vicente, Jose; Galeotti, Loriano; Johannesen, Lars; Strauss, David G; Scully, Christopher G
Performance of ECG beat detectors is traditionally assessed over long intervals (e.g., 30 min), but only incorrect detections within a short interval (e.g., 10 s) may cause incorrect (i.e., missed + false) heart rate limit alarms (tachycardia and bradycardia). We propose a novel performance metric based on the distribution of incorrect beat detections over a short interval and assess its relationship with incorrect heart rate limit alarm rates. Six ECG beat detectors were assessed using performance metrics over a long interval (sensitivity and positive predictive value over 30 min) and a short interval (Area Under the empirical cumulative distribution function (AUecdf) for short-interval (i.e., 10 s) sensitivity and positive predictive value) on two ECG databases. False heart rate limit and asystole alarm rates calculated using a third ECG database were then correlated (Spearman's rank correlation) with each calculated performance metric. False alarm rates correlated with sensitivity calculated over the long interval (i.e., 30 min) (ρ=-0.8 and p<0.05) and AUecdf for sensitivity (ρ=0.9 and p<0.05) in all assessed ECG databases. Sensitivity over 30 min grouped the two detectors with the lowest false alarm rates, while AUecdf for sensitivity provided further information that also identified the two beat detectors with the highest false alarm rates, which could not be separated using sensitivity over 30 min. Short-interval performance metrics can provide insights into the potential of a beat detector to generate incorrect heart rate limit alarms. Published by Elsevier Inc.
Belitz, Kenneth; Jurgens, Bryant C.; Landon, Matthew K.; Fram, Miranda S.; Johnson, Tyler D.
2010-01-01
The proportion of an aquifer with constituent concentrations above a specified threshold (high concentrations) is taken as a nondimensional measure of regional scale water quality. If computed on the basis of area, it can be referred to as the aquifer scale proportion. A spatially unbiased estimate of aquifer scale proportion and a confidence interval for that estimate are obtained through the use of equal area grids and the binomial distribution. Traditionally, the confidence interval for a binomial proportion is computed using either the standard interval or the exact interval. Research from the statistics literature has shown that the standard interval should not be used and that the exact interval is overly conservative. On the basis of coverage probability and interval width, the Jeffreys interval is preferred. If more than one sample per cell is available, cell declustering is used to estimate the aquifer scale proportion, and Kish's design effect may be useful for estimating an effective number of samples. The binomial distribution is also used to quantify the adequacy of a grid with a given number of cells for identifying a small target, defined as a constituent that is present at high concentrations in a small proportion of the aquifer. Case studies illustrate a consistency between approaches that use one well per grid cell and many wells per cell. The methods presented in this paper provide a quantitative basis for designing a sampling program and for utilizing existing data.
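The Jeffreys interval preferred above is simply a Beta credible interval under the Jeffreys prior Beta(1/2, 1/2); a short sketch with hypothetical grid-cell counts:

```python
from scipy.stats import beta

def jeffreys_interval(k, n, conf=0.95):
    """Jeffreys interval for a binomial proportion: Beta(k+1/2, n-k+1/2) quantiles."""
    a = (1.0 - conf) / 2.0
    lo = 0.0 if k == 0 else beta.ppf(a, k + 0.5, n - k + 0.5)
    hi = 1.0 if k == n else beta.ppf(1.0 - a, k + 0.5, n - k + 0.5)
    return lo, hi

# Hypothetical example: 7 of 60 equal-area grid cells show a high concentration.
k, n = 7, 60
lo, hi = jeffreys_interval(k, n)
print(f"aquifer scale proportion = {k/n:.3f}, 95% Jeffreys interval = ({lo:.3f}, {hi:.3f})")
```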
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Xiaofeng; Thornton, Peter E; Post, Wilfred M
2013-01-01
Soil microbes play a pivotal role in regulating land-atmosphere interactions; the soil microbial biomass carbon (C), nitrogen (N), phosphorus (P) and C:N:P stoichiometry are important regulators for soil biogeochemical processes; however, the current knowledge on magnitude, stoichiometry, storage, and spatial distribution of global soil microbial biomass C, N, and P is limited. In this study, 3087 pairs of data points were retrieved from 281 published papers and further used to summarize the magnitudes and stoichiometries of C, N, and P in soils and soil microbial biomass at global- and biome-levels. Finally, global stock and spatial distribution of microbial biomass C and N in 0-30 cm and 0-100 cm soil profiles were estimated. The results show that C, N, and P in soils and soil microbial biomass vary substantially across biomes; the fractions of soil nutrient C, N, and P in soil microbial biomass are 1.6% in a 95% confidence interval of (1.5%-1.6%), 2.9% in a 95% confidence interval of (2.8%-3.0%), and 4.4% in a 95% confidence interval of (3.9%-5.0%), respectively. The best estimates of C:N:P stoichiometries for soil nutrients and soil microbial biomass are 153:11:1, and 47:6:1, respectively, at global scale, and they vary in a wide range among biomes. Vertical distribution of soil microbial biomass follows the distribution of roots up to 1 m depth. The global stock of soil microbial biomass C and N were estimated to be 15.2 Pg C and 2.3 Pg N in the 0-30 cm soil profiles, and 21.2 Pg C and 3.2 Pg N in the 0-100 cm soil profiles. We did not estimate P in soil microbial biomass due to data shortage and insignificant correlation with soil total P and climate variables. The spatial patterns of soil microbial biomass C and N were consistent with those of soil organic C and total N, i.e. high density in northern high latitudes, and low density in low latitudes and the southern hemisphere.
Jones, Hayley E; Hickman, Matthew; Kasprzyk-Hordern, Barbara; Welton, Nicky J; Baker, David R; Ades, A E
2014-07-15
Concentrations of metabolites of illicit drugs in sewage water can be measured with great accuracy and precision, thanks to the development of sensitive and robust analytical methods. Based on assumptions about factors including the excretion profile of the parent drug, routes of administration and the number of individuals using the wastewater system, the level of consumption of a drug can be estimated from such measured concentrations. When presenting results from these 'back-calculations', the multiple sources of uncertainty are often discussed, but are not usually explicitly taken into account in the estimation process. In this paper we demonstrate how these calculations can be placed in a more formal statistical framework by assuming a distribution for each parameter involved, based on a review of the evidence underpinning it. Using a Monte Carlo simulation approach, it is then straightforward to propagate uncertainty in each parameter through the back-calculations, producing a distribution for, rather than a single estimate of, daily or average consumption. This can be summarised for example by a median and credible interval. To demonstrate this approach, we estimate cocaine consumption in a large urban UK population, using measured concentrations of two of its metabolites, benzoylecgonine and norbenzoylecgonine. We also demonstrate a more sophisticated analysis, implemented within a Bayesian statistical framework using Markov chain Monte Carlo simulation. Our model allows the two metabolites to simultaneously inform estimates of daily cocaine consumption and explicitly allows for variability between days. After accounting for this variability, the resulting credible interval for average daily consumption is appropriately wider, representing additional uncertainty. We discuss possibilities for extensions to the model, and whether analysis of wastewater samples has potential to contribute to a prevalence model for illicit drug use. Copyright © 2014. Published by Elsevier B.V.
Jones, Hayley E.; Hickman, Matthew; Kasprzyk-Hordern, Barbara; Welton, Nicky J.; Baker, David R.; Ades, A.E.
2014-01-01
Concentrations of metabolites of illicit drugs in sewage water can be measured with great accuracy and precision, thanks to the development of sensitive and robust analytical methods. Based on assumptions about factors including the excretion profile of the parent drug, routes of administration and the number of individuals using the wastewater system, the level of consumption of a drug can be estimated from such measured concentrations. When presenting results from these ‘back-calculations’, the multiple sources of uncertainty are often discussed, but are not usually explicitly taken into account in the estimation process. In this paper we demonstrate how these calculations can be placed in a more formal statistical framework by assuming a distribution for each parameter involved, based on a review of the evidence underpinning it. Using a Monte Carlo simulation approach, it is then straightforward to propagate uncertainty in each parameter through the back-calculations, producing a distribution for, rather than a single estimate of, daily or average consumption. This can be summarised for example by a median and credible interval. To demonstrate this approach, we estimate cocaine consumption in a large urban UK population, using measured concentrations of two of its metabolites, benzoylecgonine and norbenzoylecgonine. We also demonstrate a more sophisticated analysis, implemented within a Bayesian statistical framework using Markov chain Monte Carlo simulation. Our model allows the two metabolites to simultaneously inform estimates of daily cocaine consumption and explicitly allows for variability between days. After accounting for this variability, the resulting credible interval for average daily consumption is appropriately wider, representing additional uncertainty. We discuss possibilities for extensions to the model, and whether analysis of wastewater samples has potential to contribute to a prevalence model for illicit drug use. PMID:24636801
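A schematic Monte Carlo propagation in the spirit of the approach described above; every distribution and constant below (metabolite concentration, flow, excretion fraction, molar-mass ratio, population) is an invented placeholder rather than a value from the study.

```python
import numpy as np

rng = np.random.default_rng(9)
N = 100_000

# Placeholder input distributions (all values invented for illustration).
conc_ng_L  = rng.normal(800.0, 80.0, N)        # benzoylecgonine concentration, ng/L
flow_L_day = rng.normal(3.0e8, 3.0e7, N)       # daily wastewater flow, L/day
excreted   = rng.uniform(0.30, 0.40, N)        # fraction of dose excreted as metabolite
mw_ratio   = 303.35 / 289.33                   # molar-mass ratio, cocaine/benzoylecgonine
population = 1.0e6                             # served population (assumed fixed)

# Back-calculated consumption, mg per day per 1000 inhabitants.
consumption = (conc_ng_L * flow_L_day * mw_ratio / excreted) / 1e6 / (population / 1000.0)

med = np.median(consumption)
lo, hi = np.percentile(consumption, [2.5, 97.5])
print(f"median = {med:.0f} mg/day/1000 inh., 95% interval = ({lo:.0f}, {hi:.0f})")
```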
Vallejo, C; Perez, J; Rodriguez, R; Cuevas, M; Machiavelli, M; Lacava, J; Romero, A; Rabinovich, M; Leone, B
1994-03-01
The development of ultimate visceral metastases and the visceral metastases-free time interval were evaluated in patients with breast carcinoma bearing bone-only metastases. Ninety patients were identified and were subdivided into three groups according to the anatomic distribution of osseous lesions: group A with osseous involvement cranial to the lumbosacral junction, group B caudal to this, and group C with lesions in both areas. The purpose of this subdivision was to evaluate whether there is any correlation between bone-metastases distribution and the probability of developing visceral lesions. All patients received systemic therapy consisting of hormonal therapy, chemotherapy or both. The median survival for the whole group was 28 months, whereas it was 33, 43 and 26 months for patients in groups A, B and C, respectively (p=NS). No differences in subsequent visceral involvement and visceral-free time interval were observed among the three groups of patients regardless of tumor burden. In conclusion, our analyses did not show significant differences in the incidence of visceral metastases, visceral metastases-free time interval and overall survival in patients with breast cancer with bone-only lesions independently of anatomic distribution.
Virlogeux, Victor; Fang, Vicky J; Park, Minah; Wu, Joseph T; Cowling, Benjamin J
2016-10-24
The incubation period is an important epidemiologic distribution: it is often incorporated in case definitions, used to determine appropriate quarantine periods, and is an input to mathematical modeling studies. Middle East Respiratory Syndrome coronavirus (MERS) is an emerging infectious disease in the Arabian Peninsula. There was a large outbreak of MERS in South Korea in 2015. We examined the incubation period distribution of MERS coronavirus infection for cases in South Korea and in Saudi Arabia. Using parametric and nonparametric methods, we estimated a mean incubation period of 6.9 days (95% credibility interval: 6.3-7.5) for cases in South Korea and 5.0 days (95% credibility interval: 4.0-6.6) among cases in Saudi Arabia. In a log-linear regression model, the mean incubation period was 1.42 times longer (95% credibility interval: 1.18-1.71) among cases in South Korea compared to Saudi Arabia. The variation that we identified in the incubation period distribution between locations could be associated with differences in ascertainment or reporting of exposure dates and illness onset dates, differences in the source or mode of infection, or environmental differences.
Regional Classification of Traditional Japanese Folk Songs
NASA Astrophysics Data System (ADS)
Kawase, Akihiro; Tokosumi, Akifumi
In this study, we focus on the melodies of Japanese folk songs, and examine the basic structures of Japanese folk songs that represent the characteristics of different regions. We sample the five largest song genres within the music corpora of the Nihon Min-yo Taikan (Anthology of Japanese Folk Songs), consisting of 202,246 tones from 1,794 song pieces from 45 prefectures in Japan. Then, we calculate the probabilities of 24 transition patterns that fill the interval of the perfect fourth pitch, which is the interval that maintains most of the frequency for one-step and two-step pitch transitions within 11 regions, in order to determine the parameters for cluster analysis. As a result, we successively classify the regions into two basic groups, eastern Japan and western Japan, which corresponds to geographical factors and cultural backgrounds, and also match accent distributions in the Japanese language.
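A toy illustration of the feature-construction step (not the authors' exact 24-pattern scheme): count the one-step pitch transitions no larger than a perfect fourth (5 semitones) in each region's melodies and normalise them into probability vectors that could feed a cluster analysis. The melodies below are invented.

```python
from collections import Counter

# Hypothetical melodies per region, as sequences of pitches in semitones.
regions = {
    "region_A": [[0, 2, 5, 7, 5, 2, 0], [0, 3, 5, 3, 0]],
    "region_B": [[0, 2, 4, 7, 9, 7, 4], [0, 4, 7, 4, 2, 0]],
}

def transition_probabilities(melodies, max_step=5):
    """Probability of each one-step pitch transition within +/- a perfect fourth."""
    counts = Counter()
    for mel in melodies:
        for a, b in zip(mel[:-1], mel[1:]):
            step = b - a
            if abs(step) <= max_step:
                counts[step] += 1
    total = sum(counts.values())
    return {step: round(c / total, 3) for step, c in sorted(counts.items())}

for name, melodies in regions.items():
    print(name, transition_probabilities(melodies))
# The resulting probability vectors (one per region) are the inputs to clustering.
```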
Asthma and school commuting time.
McConnell, Rob; Liu, Feifei; Wu, Jun; Lurmann, Fred; Peters, John; Berhane, Kiros
2010-08-01
This study examined associations of asthma with school commuting time. Time on likely school commute route was used as a proxy for on-road air pollution exposure among 4741 elementary school children at enrollment into the Children's Health Study. Lifetime asthma and severe wheeze (including multiple attacks, nocturnal, or with shortness of breath) were reported by parents. In asthmatic children, severe wheeze was associated with commuting time (odds ratio, 1.54 across the 9-minute 5% to 95% exposure distribution; 95% confidence interval, 1.01 to 2.36). The association was stronger in analysis restricted to asthmatic children with commuting times 5 minutes or longer (odds ratio, 1.97; 95% confidence interval, 1.02 to 3.77). No significant associations were observed with asthma prevalence. Among asthmatics, severe wheeze was associated with relatively short school commuting times. Further investigation of effects of on-road pollutant exposure is warranted.
A DMAP Program for the Selection of Accelerometer Locations in MSC/NASTRAN
NASA Technical Reports Server (NTRS)
Peck, Jeff; Torres, Isaias
2004-01-01
A new program for selecting sensor locations has been written in the DMAP (Direct Matrix Abstraction Program) language of MSC/NASTRAN. The program implements the method of Effective Independence for selecting sensor locations, and is executed within a single NASTRAN analysis as a "rigid format alter" to the normal modes solution sequence (SOL 103). The user of the program is able to choose among various analysis options using Case Control and Bulk Data entries. Algorithms tailored for the placement of both uni-axial and tri-axial accelerometers are available, as well as several options for including the model's mass distribution in the calculations. Target modes for the Effective Independence analysis are selected from the MSC/NASTRAN ASET modes calculated by the "SOL 103" solution sequence. The initial candidate sensor set is also under user control, and is selected from the ASET degrees of freedom. Analysis results are printed to the MSC/NASTRAN output file (*.f06), and may include the current candidate sensor set, and its associated Effective Independence distribution, at user-specified iteration intervals. At the conclusion of the analysis, the model is reduced to the final sensor set, and frequencies and orthogonality checks are printed. Example results are given for a pre-test analysis of NASA's five-segment solid rocket booster modal test.
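A minimal sketch of the general Effective Independence idea (iteratively dropping the candidate degree of freedom that contributes least to the linear independence of the target modes), not the DMAP implementation described above. The mode-shape matrix `phi` and sensor counts are hypothetical.

```python
# Generic Effective Independence (EfI) sensor selection sketch: phi is a
# (candidate DOFs x target modes) mode-shape matrix; we keep n_sensors DOFs.
import numpy as np

def effective_independence(phi, n_sensors):
    keep = np.arange(phi.shape[0])
    while keep.size > n_sensors:
        A = phi[keep]
        # EfI value of each DOF: diagonal of the projection matrix A (A^T A)^-1 A^T
        E = np.einsum('ij,jk,ik->i', A, np.linalg.inv(A.T @ A), A)
        keep = np.delete(keep, np.argmin(E))   # drop the least informative DOF
    return keep

rng = np.random.default_rng(0)
phi = rng.standard_normal((50, 6))             # 50 candidate DOFs, 6 target modes
print(effective_independence(phi, 12))         # indices of the 12 retained sensor DOFs
```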
Bayesian alternative to the ISO-GUM's use of the Welch-Satterthwaite formula
NASA Astrophysics Data System (ADS)
Kacker, Raghu N.
2006-02-01
In certain disciplines, uncertainty is traditionally expressed as an interval about an estimate for the value of the measurand. Development of such uncertainty intervals with a stated coverage probability based on the International Organization for Standardization (ISO) Guide to the Expression of Uncertainty in Measurement (GUM) requires a description of the probability distribution for the value of the measurand. The ISO-GUM propagates the estimates and their associated standard uncertainties for various input quantities through a linear approximation of the measurement equation to determine an estimate and its associated standard uncertainty for the value of the measurand. This procedure does not yield a probability distribution for the value of the measurand. The ISO-GUM suggests that under certain conditions motivated by the central limit theorem the distribution for the value of the measurand may be approximated by a scaled-and-shifted t-distribution with effective degrees of freedom obtained from the Welch-Satterthwaite (W-S) formula. The approximate t-distribution may then be used to develop an uncertainty interval with a stated coverage probability for the value of the measurand. We propose an approximate normal distribution based on a Bayesian uncertainty as an alternative to the t-distribution based on the W-S formula. A benefit of the approximate normal distribution based on a Bayesian uncertainty is that it greatly simplifies the expression of uncertainty by eliminating altogether the need for calculating effective degrees of freedom from the W-S formula. In the special case where the measurand is the difference between two means, each evaluated from statistical analyses of independent normally distributed measurements with unknown and possibly unequal variances, the probability distribution for the value of the measurand is known to be a Behrens-Fisher distribution. We compare the performance of the approximate normal distribution based on a Bayesian uncertainty and the approximate t-distribution based on the W-S formula with respect to the Behrens-Fisher distribution. The approximate normal distribution is simpler and better in this case. A thorough investigation of the relative performance of the two approximate distributions would require comparison for a range of measurement equations by numerical methods.
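A small sketch contrasting the two routes described above for a measurand that combines two Type-A input uncertainties. The Welch-Satterthwaite effective degrees of freedom follow the standard formula; the "Bayesian" column uses one common adjustment that inflates each Type-A standard uncertainty by sqrt(nu/(nu-2)) and then applies a normal coverage factor. All numbers are hypothetical and this is an illustration, not the paper's full treatment.

```python
# Welch-Satterthwaite t-interval vs. a Bayesian-style normal approximation
# for a combined standard uncertainty (illustrative values only).
import numpy as np
from scipy import stats

def ws_effective_dof(u, nu):
    """Welch-Satterthwaite: nu_eff = u_c^4 / sum(u_i^4 / nu_i)."""
    u, nu = np.asarray(u, float), np.asarray(nu, float)
    return np.sum(u**2) ** 2 / np.sum(u**4 / nu)

u = np.array([0.8, 0.5])           # Type-A standard uncertainties of the inputs
nu = np.array([4, 9])              # degrees of freedom (n_i - 1)
uc = np.sqrt(np.sum(u**2))         # combined standard uncertainty

nu_eff = ws_effective_dof(u, nu)
k_t = stats.t.ppf(0.975, nu_eff)   # coverage factor from the scaled-and-shifted t

# Bayesian-style alternative: inflate each Type-A uncertainty by sqrt(nu/(nu-2)),
# then use the normal coverage factor 1.96 (no effective dof needed).
u_bayes = np.sqrt(np.sum(u**2 * nu / (nu - 2)))
print(f"W-S:      nu_eff = {nu_eff:.1f}, 95% interval half-width = {k_t * uc:.2f}")
print(f"Bayesian: normal approx,  95% interval half-width = {1.96 * u_bayes:.2f}")
```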
Turner, Rebecca M; Davey, Jonathan; Clarke, Mike J; Thompson, Simon G; Higgins, Julian PT
2012-01-01
Background: Many meta-analyses contain only a small number of studies, which makes it difficult to estimate the extent of between-study heterogeneity. Bayesian meta-analysis allows incorporation of external evidence on heterogeneity, and offers advantages over conventional random-effects meta-analysis. To assist in this, we provide empirical evidence on the likely extent of heterogeneity in particular areas of health care. Methods: Our analyses included 14 886 meta-analyses from the Cochrane Database of Systematic Reviews. We classified each meta-analysis according to the type of outcome, type of intervention comparison and medical specialty. By modelling the study data from all meta-analyses simultaneously, using the log odds ratio scale, we investigated the impact of meta-analysis characteristics on the underlying between-study heterogeneity variance. Predictive distributions were obtained for the heterogeneity expected in future meta-analyses. Results: Between-study heterogeneity variances for meta-analyses in which the outcome was all-cause mortality were found to be on average 17% (95% CI 10–26) of variances for other outcomes. In meta-analyses comparing two active pharmacological interventions, heterogeneity was on average 75% (95% CI 58–95) of variances for non-pharmacological interventions. Meta-analysis size was found to have only a small effect on heterogeneity. Predictive distributions are presented for nine different settings, defined by type of outcome and type of intervention comparison. For example, for a planned meta-analysis comparing a pharmacological intervention against placebo or control with a subjectively measured outcome, the predictive distribution for heterogeneity is a log-normal(−2.13, 1.58²) distribution, which has a median value of 0.12. In an example of meta-analysis of six studies, incorporating external evidence led to a smaller heterogeneity estimate and a narrower confidence interval for the combined intervention effect. Conclusions: Meta-analysis characteristics were strongly associated with the degree of between-study heterogeneity, and predictive distributions for heterogeneity differed substantially across settings. The informative priors provided will be very beneficial in future meta-analyses including few studies. PMID:22461129
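A quick sketch of how the reported predictive distribution can be used: sampling the between-study heterogeneity variance from the log-normal(−2.13, 1.58²) distribution quoted in the abstract. Only the two parameters come from the abstract; the usage shown is illustrative.

```python
# Sample tau^2 from the reported predictive distribution for between-study
# heterogeneity: log(tau^2) ~ Normal(-2.13, 1.58^2).
import numpy as np

rng = np.random.default_rng(42)
tau2 = np.exp(rng.normal(loc=-2.13, scale=1.58, size=100_000))

print(f"median tau^2 ~ {np.median(tau2):.2f}")   # ~ exp(-2.13) ~ 0.12, as reported
print(f"central 95% range ~ ({np.percentile(tau2, 2.5):.3f}, "
      f"{np.percentile(tau2, 97.5):.2f})")
```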
Reliability analysis based on the losses from failures.
Todinov, M T
2006-04-01
Conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. On the basis of counterexamples, it is demonstrated that this is valid only if all failures are associated with the same losses. In the case of failures associated with different losses, a system with larger reliability is not necessarily characterized by smaller losses from failures. Consequently, a theoretical framework and models are proposed for a reliability analysis linking reliability and the losses from failures. Equations related to the distributions of the potential losses from failure have been derived. It is argued that the classical risk equation only estimates the average value of the potential losses from failure and does not provide insight into the variability associated with the potential losses. Equations have also been derived for determining the potential and the expected losses from failures for nonrepairable and repairable systems with components arranged in series, with arbitrary life distributions. The equations are also valid for systems/components with multiple mutually exclusive failure modes. The expected loss given failure is a linear combination of the expected losses from failure associated with the separate failure modes, scaled by the conditional probabilities with which the failure modes initiate failure. On this basis, an efficient method for simplifying complex reliability block diagrams has been developed. Branches of components arranged in series whose failures are mutually exclusive can be reduced to single components with equivalent hazard rate, downtime, and expected costs associated with intervention and repair. A model for estimating the expected losses from early-life failures has also been developed. For a specified time interval, the expected losses from early-life failures are a sum of the products of the expected number of failures in the specified time intervals covering the early-life failures region and the expected losses given failure characterizing the corresponding time intervals. For complex systems whose components are not logically arranged in series, discrete simulation algorithms and software have been created for determining the losses from failures in terms of expected lost production time, cost of intervention, and cost of replacement. Different system topologies are assessed to determine the effect of modifications of the system topology on the expected losses from failures. It is argued that the reliability allocation in a production system should be done to maximize the profit/value associated with the system. Consequently, a method for setting reliability requirements and reliability allocation maximizing the profit by minimizing the total cost has been developed. Reliability allocation that maximizes the profit in the case of a system consisting of blocks arranged in series is achieved by determining for each block individually the reliabilities of the components in the block that minimize the sum of the capital costs, operation costs, and the expected losses from failures. A Monte Carlo simulation based net present value (NPV) cash-flow model has also been proposed, which has significant advantages over cash-flow models based on the expected value of the losses from failures per time interval. Unlike these models, the proposed model has the capability to reveal the variation of the NPV due to the different number of failures occurring during a specified time interval (e.g., during one year).
The model also permits tracking the impact of the distribution pattern of failure occurrences and the time dependence of the losses from failures.
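A one-line sketch of the stated relationship between mode-specific losses and the expected loss given failure, for mutually exclusive failure modes. The probabilities and loss values below are hypothetical.

```python
# Expected loss given failure as a weighted sum over mutually exclusive failure
# modes, weighted by the conditional probability that each mode initiates failure.
import numpy as np

p_mode = np.array([0.55, 0.30, 0.15])            # P(mode k initiated failure | failure)
loss_given_mode = np.array([12e3, 40e3, 150e3])  # expected loss per mode (currency units)

expected_loss_given_failure = np.sum(p_mode * loss_given_mode)
print(f"E[loss | failure] = {expected_loss_given_failure:,.0f}")
```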
NASA Astrophysics Data System (ADS)
Saidi, Helmi; Ciampittiello, Marzia; Dresti, Claudia; Ghiglieri, Giorgio
2013-07-01
Alpine and Mediterranean areas are undergoing a profound change in the typology and distribution of rainfall. In particular, there has been an increase in consecutive non-rainy days and an escalation of extreme rainy events. The climatic characteristics of extreme precipitation over short time intervals are studied in the watershed of Lake Maggiore, the second largest freshwater basin in Italy (located in the north-west of the country) and an important resource for tourism, fishing and commercial flower growing. Historical extreme rainfall series at high resolution, from 5 to 45 min and for longer durations of 1, 2, 3, 6, 12 and 24 h, collected at gauges located at representative sites in the watershed of Lake Maggiore, have been used to perform regional frequency analysis of annual maximum precipitation based on the L-moments approach and to produce growth curves for different return-period rainfall events. Because of different rainfall-generating mechanisms in the watershed of Lake Maggiore, related for example to elevation, no single parent distribution could be found for the entire study area. This paper gives a first view of the temporal change and evolution of annual maximum precipitation, focusing particularly on both heavy and extreme events recorded at time intervals ranging from a few minutes to 24 h, and also creates an extreme storm precipitation database starting from the historical sub-daily precipitation series distributed over the territory. Changes in the occurrence of extreme rainfall events are observed over the last 23 years, from 1987 to 2009. Little change is observed in 720-min and 24-h precipitation, but the change seen in 5, 10, 15, 20, 30, 45, 60, 120, 180 and 360 min events is significant. In fact, during the 2000s, the growth curves have flattened and the annual maxima have decreased.
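A compact sketch of the sample L-moments that underpin this kind of regional frequency analysis, computed from probability-weighted moments with the usual unbiased estimators. The annual-maximum values below are hypothetical.

```python
# Sample L-moments of an annual-maximum series via probability-weighted moments.
import numpy as np

def sample_l_moments(x):
    x = np.sort(np.asarray(x, float))            # ascending order
    n = x.size
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    l1 = b0                      # L-location (mean)
    l2 = 2 * b1 - b0             # L-scale
    l3 = 6 * b2 - 6 * b1 + b0
    return l1, l2, l2 / l1, l3 / l2   # mean, L-scale, L-CV, L-skewness

annual_maxima_mm = [42.0, 55.3, 38.1, 71.9, 49.5, 60.2, 45.8, 83.4, 52.6, 47.0]
l1, l2, lcv, lskew = sample_l_moments(annual_maxima_mm)
print(f"l1={l1:.1f} mm, l2={l2:.1f} mm, L-CV={lcv:.3f}, L-skewness={lskew:.3f}")
```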
NASA Astrophysics Data System (ADS)
Kozlowska, M.; Orlecka-Sikora, B.; Kwiatek, G.; Boettcher, M. S.; Dresen, G. H.
2014-12-01
Static stress changes following large earthquakes are known to affect the rate and spatio-temporal distribution of the aftershocks. Here we utilize a unique dataset of M ≥ -3.4 earthquakes following a MW 2.2 earthquake in Mponeng gold mine, South Africa, to investigate this process for nano- and picoscale seismicity at centimeter length scales in shallow mining conditions. The aftershock sequence was recorded during a quiet interval in the mine and thus enabled us to perform the analysis using Dieterich's (1994) rate- and state-dependent friction law. The formulation for earthquake productivity requires estimation of the Coulomb stress changes due to the mainshock, the reference seismicity rate, the frictional resistance parameter, and the duration of the aftershock relaxation time. We divided the area into six depth intervals and for each we estimated the parameters and modeled the spatio-temporal patterns of seismicity rates after the stress perturbation. Comparing the modeled patterns of seismicity with the observed distribution, we found that while the spatial patterns match well, the rate of modeled aftershocks is lower than the observed rate. To test our model, we used four goodness-of-fit metrics. The testing procedure allowed us to reject the null hypothesis of no significant difference between seismicity rates only for the one depth interval containing the mainshock; for the others, no significant differences were found. Results show that mining-induced earthquakes may be followed by a stress relaxation expressed through aftershocks located on the rupture plane and in regions of positive Coulomb stress change. Furthermore, we demonstrate that the main features of the temporal and spatial distribution of very small, mining-induced earthquakes at shallow depths can be successfully determined using rate- and state-based stress modeling.
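A minimal sketch of the Dieterich (1994) seismicity-rate response to a static Coulomb stress step, which is the building block of the modeling described above. The parameter values are hypothetical, chosen only to show the characteristic decay back to the reference rate.

```python
# Dieterich (1994) rate-and-state seismicity rate after a Coulomb stress step:
# R(t) = r / [ (exp(-dCFS/(A*sigma)) - 1) * exp(-t/t_a) + 1 ]
import numpy as np

def dieterich_rate(t, r_ref, dcfs, a_sigma, t_a):
    gamma = (np.exp(-dcfs / a_sigma) - 1.0) * np.exp(-t / t_a) + 1.0
    return r_ref / gamma

t = np.linspace(0.01, 30.0, 5)   # days after the mainshock (hypothetical)
rates = dieterich_rate(t, r_ref=2.0, dcfs=0.5, a_sigma=0.1, t_a=10.0)
print(np.round(rates, 2))        # elevated rate decaying back toward r_ref
```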
NASA Astrophysics Data System (ADS)
Zhang, Penghui; Zhang, Jinliang; Wang, Jinkai; Li, Ming; Liang, Jie; Wu, Yingli
2018-05-01
Flow unit classification can be used in reservoir characterization. In addition, characterizing the reservoir interval into flow units is an effective way to simulate the reservoir. Paraflow units (PFUs), the second level of flow units, are used to estimate the spatial distribution of continental clastic reservoirs at the detailed reservoir description stage. In this study, we investigate a nonroutine methodology to predict the external and internal distribution of PFUs. The methodology outlined enables the classification of PFUs using sandstone core samples and log data. Relationships between porosity, permeability and pore throat aperture radii (r35) were established for core and log data obtained from 26 wells from the Funing Formation, Gaoji Oilfield, Subei Basin, China. The present study refines predicted PFUs at logged (0.125-m) intervals, a much finer scale than that of routine methods. Meanwhile, three-dimensional models are built using sequential indicator simulation to characterize PFUs in wells. Four distinct PFUs are classified and located based on the statistical methodology of cluster analysis, and each PFU has different seepage ability. The results of this study demonstrate that the obtained models are able to quantify reservoir heterogeneity. Due to different petrophysical characteristics and seepage ability, PFUs have a significant impact on the distribution of the remaining oil. Considering these factors allows a more accurate understanding of reservoir quality, especially within non-marine sandstone reservoirs.
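For context, one widely used empirical relation between porosity, permeability and the r35 pore-throat radius is the Winland equation; the study above derives its own core-calibrated relationships, so the sketch below is only a conventional starting point, and the sample values are hypothetical.

```python
# Winland r35 relation (commonly quoted form): log r35 = 0.732 + 0.588 log k - 0.864 log phi,
# with k in mD, phi in percent, r35 in microns. Illustrative, not the paper's calibration.
import numpy as np

def winland_r35(perm_md, poro_pct):
    return 10 ** (0.732 + 0.588 * np.log10(perm_md) - 0.864 * np.log10(poro_pct))

perm = np.array([0.5, 10.0, 150.0])   # mD (hypothetical core plugs)
poro = np.array([12.0, 18.0, 24.0])   # percent
print(np.round(winland_r35(perm, poro), 2))   # microns; larger r35 suggests better flow capacity
```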
NASA Astrophysics Data System (ADS)
Lundberg, J.; Conrad, J.; Rolke, W.; Lopez, A.
2010-03-01
A C++ class was written for the calculation of frequentist confidence intervals using the profile likelihood method. Seven combinations of Binomial, Gaussian and Poissonian uncertainties are implemented. The package provides routines for the calculation of upper and lower limits, sensitivity and related properties. It also supports hypothesis tests which take uncertainties into account. It can be used in compiled C++ code, in Python or interactively via the ROOT analysis framework. Program summary: Program title: TRolke version 2.0. Catalogue identifier: AEFT_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFT_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: MIT license. No. of lines in distributed program, including test data, etc.: 3431. No. of bytes in distributed program, including test data, etc.: 21 789. Distribution format: tar.gz. Programming language: ISO C++. Computer: Unix, GNU/Linux, Mac. Operating system: Linux 2.6 (Scientific Linux 4 and 5, Ubuntu 8.10), Darwin 9.0 (Mac-OS X 10.5.8). RAM: ~20 MB. Classification: 14.13. External routines: ROOT (http://root.cern.ch/drupal/). Nature of problem: The problem is to calculate a frequentist confidence interval on the parameter of a Poisson process with statistical or systematic uncertainties in signal efficiency or background. Solution method: Profile likelihood method, analytical. Running time: <10 seconds per extracted limit.
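A generic numerical sketch of the profile likelihood construction for a Poisson signal with a sideband-constrained background, to illustrate the underlying method; this is not the TRolke API, and the counts, sideband ratio and bounds are hypothetical.

```python
# Profile likelihood interval for a Poisson signal s with nuisance background b,
# where a sideband measurement m ~ Pois(tau * b) constrains the background.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

n_obs, m_side, tau = 12, 20, 4.0   # signal-region count, sideband count, sideband/signal ratio

def nll(s, b):
    return -(poisson.logpmf(n_obs, s + b) + poisson.logpmf(m_side, tau * b))

def profiled_nll(s):
    # minimize over the nuisance background b for fixed signal s
    res = minimize_scalar(lambda b: nll(s, b), bounds=(1e-9, 50.0), method="bounded")
    return res.fun

s_grid = np.linspace(0.0, 20.0, 401)
prof = np.array([profiled_nll(s) for s in s_grid])
q = 2.0 * (prof - prof.min())          # profile likelihood ratio statistic
interval = s_grid[q <= 3.84]           # ~95% CL (chi-square with 1 dof)
print(f"signal interval ~ [{interval.min():.2f}, {interval.max():.2f}]")
```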
NASA Astrophysics Data System (ADS)
Paiva, F. M.; Batista, J. C.; Rêgo, F. S. C.; Lima, J. A.; Freire, P. T. C.; Melo, F. E. A.; Mendes Filho, J.; de Menezes, A. S.; Nogueira, C. E. S.
2017-01-01
Single crystals of DL-valine and DL-lysine hydrochloride were grown by the slow evaporation method and their crystallographic structures were confirmed by X-ray diffraction experiments and the Rietveld method. These two crystals have been studied by Raman spectroscopy in the 25-3600 cm-1 spectral range and by infrared spectroscopy over the interval 375-4000 cm-1 at room temperature. Experimental and theoretical vibrational spectra were compared and a complete analysis of the modes was done in terms of the Potential Energy Distribution (PED).
Robust portfolio selection based on asymmetric measures of variability of stock returns
NASA Astrophysics Data System (ADS)
Chen, Wei; Tan, Shaohua
2009-10-01
This paper introduces a new uncertainty set, the interval random uncertainty set, for robust optimization. The form of the interval random uncertainty set makes it suitable for capturing the downside and upside deviations of real-world data. These deviation measures capture distributional asymmetry and lead to better optimization results. We also apply our interval random chance-constrained programming to robust mean-variance portfolio selection under interval random uncertainty sets in the elements of the mean vector and covariance matrix. Numerical experiments with real market data indicate that our approach results in better portfolio performance.
Fluctuations of healthy and unhealthy heartbeat intervals
NASA Astrophysics Data System (ADS)
Lan, Boon Leong; Toda, Mikito
2013-04-01
We show that the RR-interval fluctuations, defined as the difference between successive natural-logarithm of the RR interval, for healthy, congestive-heart-failure (CHF) and atrial-fibrillation (AF) subjects are well modeled by non-Gaussian stable distributions. Our results suggest that healthy or unhealthy RR-interval fluctuation can generally be modeled as a sum of a large number of independent physiological effects which are identically distributed with infinite variance. Furthermore, we show for the first time that one indicator —the scale parameter of the stable distribution— is sufficient to robustly distinguish the three groups of subjects. The scale parameters for healthy subjects are smaller than those for AF subjects but larger than those for CHF subjects —this ordering suggests that the scale parameter could be used to objectively quantify the severity of CHF and AF over time and also serve as an early warning signal for a healthy person when it approaches either boundary of the healthy range.
Leonid Storm Flux Analysis From One Leonid MAC Video AL50R
NASA Technical Reports Server (NTRS)
Gural, Peter S.; Jenniskens, Peter; DeVincenzi, Donald L. (Technical Monitor)
2000-01-01
A detailed meteor flux analysis is presented of a seventeen-minute portion of one videotape, collected on November 18, 1999, during the Leonid Multi-Instrument Aircraft Campaign. The data were recorded around the peak of the Leonid meteor storm using an intensified CCD camera pointed towards the low southern horizon. Positions of meteors on the sky were measured. These measured meteor distributions were compared to a Monte Carlo simulation, which is a new approach to parameter estimation for mass ratio and flux. Comparison of simulated flux versus observed flux levels, seen between 1:50:00 and 2:06:41 UT, indicates a magnitude population index of r = 1.8 +/- 0.1 and a mass ratio of s = 1.64 +/- 0.06. The average spatial density of the material contributing to the Leonid storm peak is measured at 0.82 +/- 0.19 particles per square kilometer per hour for particles of at least absolute visual magnitude +6.5. Clustering analysis of the arrival times of Leonids impacting the earth's atmosphere over the total observing interval shows no enhancement or clumping down to time scales of the video frame rate. This indicates a uniformly random temporal distribution of particles in the stream encountered during the 1999 epoch. Based on the observed distribution of meteors on the sky and the model distribution, recommendations are made for the optimal pointing directions for video camera meteor counts during future ground and airborne missions.
Whitmore, Roy W; Chen, Wenlin
2013-12-04
The ability to infer human exposure to substances from drinking water using monitoring data helps determine and/or refine potential risks associated with drinking water consumption. We describe a survey sampling approach and its application to an atrazine groundwater monitoring study to adequately characterize upper exposure centiles and associated confidence intervals with predetermined precision. Study design and data analysis included sampling frame definition, sample stratification, sample size determination, allocation to strata, analysis weights, and weighted population estimates. The sampling frame encompassed 15 840 groundwater community water systems (CWS) in 21 states throughout the U.S. Median and 95th percentile atrazine concentrations were 0.0022 and 0.024 ppb, respectively, for all CWS. Statistical estimates agreed with historical monitoring results, suggesting that the study design was adequate and robust. This methodology makes no assumptions regarding the occurrence distribution (e.g., lognormality); thus analyses based on the design-induced distribution provide the most robust basis for making inferences from the sample to the target population.
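A small sketch of a design-weighted percentile estimate, the kind of weighted population statistic used in such survey-sampling analyses. The concentrations and analysis weights below are hypothetical.

```python
# Weighted percentile of an empirical distribution built from analysis weights.
import numpy as np

def weighted_percentile(values, weights, pct):
    """Percentile (pct in [0, 100]) of the weighted empirical distribution."""
    values, weights = np.asarray(values, float), np.asarray(weights, float)
    order = np.argsort(values)
    values, weights = values[order], weights[order]
    cum = np.cumsum(weights) / np.sum(weights)
    return values[np.searchsorted(cum, pct / 100.0)]

conc_ppb = [0.001, 0.002, 0.003, 0.0025, 0.010, 0.030, 0.0005, 0.020]
wts      = [120,   80,    60,    200,    40,    10,    150,    30]   # analysis weights
print(weighted_percentile(conc_ppb, wts, 50), weighted_percentile(conc_ppb, wts, 95))
```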
Ademi, Abdulakim; Grozdanov, Anita; Paunović, Perica; Dimitrov, Aleksandar T
2015-01-01
Summary: A model consisting of an equation that includes the graphene thickness distribution is used to calculate theoretical 002 X-ray diffraction (XRD) peak intensities. An analysis was performed on graphene samples produced by two different electrochemical procedures: electrolysis in aqueous electrolyte and electrolysis in molten salts, both using a nonstationary current regime. Herein, the model is enhanced by a partitioning of the corresponding 2θ interval, resulting in significantly improved accuracy of the results. The model curves obtained exhibit excellent fits to the XRD intensity curves of the studied graphene samples. The employed equation parameters make it possible to calculate the j-layer graphene region coverage of the graphene samples, and hence the number of graphene layers. The results of the thorough analysis are in agreement with the number of graphene layers calculated from Raman spectra C-peak position values and indicate that the graphene samples studied are few-layered. PMID:26665083
Error correction and diversity analysis of population mixtures determined by NGS
Burroughs, Nigel J.; Evans, David J.; Ryabov, Eugene V.
2014-01-01
The impetus for this work was the need to analyse nucleotide diversity in a viral mix taken from honeybees. The paper has two findings. First, a method for correction of next generation sequencing error in the distribution of nucleotides at a site is developed. Second, a package of methods for assessment of nucleotide diversity is assembled. The error correction method is statistically based and works at the level of the nucleotide distribution rather than the level of individual nucleotides. The method relies on an error model and a sample of known viral genotypes that is used for model calibration. A compendium of existing and new diversity analysis tools is also presented, allowing hypotheses about diversity and mean diversity to be tested and associated confidence intervals to be calculated. The methods are illustrated using honeybee viral samples. Software in both Excel and Matlab and a guide are available at http://www2.warwick.ac.uk/fac/sci/systemsbiology/research/software/, the Warwick University Systems Biology Centre software download site. PMID:25405074
A gentle introduction to quantile regression for ecologists
Cade, B.S.; Noon, B.R.
2003-01-01
Quantile regression is a way to estimate the conditional quantiles of a response variable distribution in the linear model that provides a more complete view of possible causal relationships between variables in ecological processes. Typically, not all the factors that affect ecological processes are measured and included in the statistical models used to investigate relationships between variables associated with those processes. As a consequence, there may be a weak or no predictive relationship between the mean of the response variable (y) distribution and the measured predictive factors (X). Yet there may be stronger, useful predictive relationships with other parts of the response variable distribution. This primer relates quantile regression estimates to prediction intervals in parametric error distribution regression models (e.g., least squares), and discusses the ordering characteristics, interval nature, sampling variation, weighting, and interpretation of the estimates for homogeneous and heterogeneous regression models.
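A short sketch of a quantile regression fit on simulated heterogeneous data, estimating the conditional 0.10, 0.50 and 0.90 quantiles rather than the mean; the data-generating model here is made up for illustration.

```python
# Quantile regression with statsmodels on fan-shaped (heteroscedastic) data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 300)
y = 2.0 + 0.5 * x + rng.normal(0, 0.5 + 0.3 * x)   # error spread grows with x
df = pd.DataFrame({"x": x, "y": y})

for q in (0.10, 0.50, 0.90):
    fit = smf.quantreg("y ~ x", df).fit(q=q)
    print(f"q={q:.2f}: slope = {fit.params['x']:.2f}")   # slopes differ across quantiles
```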
Nathenson, Manuel; Donnelly-Nolan, Julie M.; Champion, Duane E.; Lowenstern, Jacob B.
2007-01-01
Medicine Lake volcano has had 4 eruptive episodes in its postglacial history (since 13,000 years ago) comprising 16 eruptions. Time intervals between events within the episodes are relatively short, whereas time intervals between the episodes are much longer. An updated radiocarbon chronology for these eruptions is presented that uses paleomagnetic data to constrain the choice of calibrated ages. This chronology is used with exponential, Weibull, and mixed-exponential probability distributions to model the data for time intervals between eruptions. The mixed exponential distribution is the best match to the data and provides estimates for the conditional probability of a future eruption given the time since the last eruption. The probability of an eruption at Medicine Lake volcano in the next year from today is 0.00028.
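A minimal sketch of the conditional-probability calculation under a two-component mixed exponential model for repose intervals: given the time since the last eruption, the probability of an eruption within the next interval is 1 − S(t+Δ)/S(t). The mixture weights and mean reposes below are hypothetical, not the fitted Medicine Lake values.

```python
# Conditional eruption probability under a mixed exponential repose-time model.
import numpy as np

def survival(t, w, mean1, mean2):
    """Survivor function of a two-component exponential mixture (means in years)."""
    return w * np.exp(-t / mean1) + (1.0 - w) * np.exp(-t / mean2)

def conditional_prob(t_since, dt, w, mean1, mean2):
    return 1.0 - survival(t_since + dt, w, mean1, mean2) / survival(t_since, w, mean1, mean2)

# Hypothetical mixture: short intra-episode reposes (~100 yr) and long
# inter-episode reposes (~4000 yr), mixed 60/40.
print(conditional_prob(t_since=950.0, dt=1.0, w=0.6, mean1=100.0, mean2=4000.0))
```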
Diaconis, Persi; Holmes, Susan; Janson, Svante
2015-01-01
We work out a graph limit theory for dense interval graphs. The theory developed departs from the usual description of a graph limit as a symmetric function W (x, y) on the unit square, with x and y uniform on the interval (0, 1). Instead, we fix a W and change the underlying distribution of the coordinates x and y. We find choices such that our limits are continuous. Connections to random interval graphs are given, including some examples. We also show a continuity result for the chromatic number and clique number of interval graphs. Some results on uniqueness of the limit description are given for general graph limits. PMID:26405368
Minimax rational approximation of the Fermi-Dirac distribution.
Moussa, Jonathan E
2016-10-28
Accurate rational approximations of the Fermi-Dirac distribution are a useful component in many numerical algorithms for electronic structure calculations. The best known approximations use O(log(βΔ)log(ϵ⁻¹)) poles to achieve an error tolerance ϵ at temperature β⁻¹ over an energy interval Δ. We apply minimax approximation to reduce the number of poles by a factor of four and replace Δ with Δocc, the occupied energy interval. This is particularly beneficial when Δ ≫ Δocc, such as in electronic structure calculations that use a large basis set.
Minimax rational approximation of the Fermi-Dirac distribution
NASA Astrophysics Data System (ADS)
Moussa, Jonathan E.
2016-10-01
Accurate rational approximations of the Fermi-Dirac distribution are a useful component in many numerical algorithms for electronic structure calculations. The best known approximations use O(log(βΔ)log(ɛ-1)) poles to achieve an error tolerance ɛ at temperature β-1 over an energy interval Δ. We apply minimax approximation to reduce the number of poles by a factor of four and replace Δ with Δocc, the occupied energy interval. This is particularly beneficial when Δ ≫ Δocc, such as in electronic structure calculations that use a large basis set.
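To make the idea of a pole (rational) representation concrete, the sketch below evaluates the standard Matsubara expansion of the Fermi-Dirac function, f(x) = 1/2 − Σₙ 2x/(x² + (2n−1)²π²). This slowly converging series is shown only as a baseline; it is not the minimax construction of the paper, whose poles converge far faster.

```python
# Truncated Matsubara pole expansion of the Fermi-Dirac function vs. the exact value.
import numpy as np

def fermi_exact(x):
    return 1.0 / (np.exp(x) + 1.0)

def fermi_matsubara(x, n_poles):
    n = np.arange(1, n_poles + 1)
    return 0.5 - np.sum(2.0 * x / (x**2 + ((2 * n - 1) * np.pi) ** 2))

x = 3.0   # x = beta * (E - mu), hypothetical value
for n_poles in (10, 100, 1000):
    err = abs(fermi_matsubara(x, n_poles) - fermi_exact(x))
    print(f"{n_poles:5d} poles: error = {err:.2e}")   # error shrinks only ~1/N
```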
Long-range anticorrelations and non-Gaussian behavior of the heartbeat
NASA Technical Reports Server (NTRS)
Peng, C.-K.; Mietus, J.; Hausdorff, J. M.; Havlin, S.; Stanley, H. E.; Goldberger, A. L.
1993-01-01
We find that the successive increments in the cardiac beat-to-beat intervals of healthy subjects display scale-invariant, long-range anticorrelations (up to 10^4 heart beats). Furthermore, we find that the histogram of the heartbeat interval increments is well described by a Lévy (1991) stable distribution. For a group of subjects with severe heart disease, we find that the distribution is unchanged, but the long-range correlations vanish. Therefore, the different scaling behavior in health and disease must relate to the underlying dynamics of the heartbeat.
Karabatsos, George
2017-02-01
Most of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone and menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed from MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and model predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected functionals and values of covariates. The software is illustrated through the BNP regression analysis of real data.
Voice-onset time and buzz-onset time identification: A ROC analysis
NASA Astrophysics Data System (ADS)
Lopez-Bascuas, Luis E.; Rosner, Burton S.; Garcia-Albea, Jose E.
2004-05-01
Previous studies have employed signal detection theory to analyze data from speech and nonspeech experiments. Typically, signal distributions were assumed to be Gaussian. Schouten and van Hessen [J. Acoust. Soc. Am. 104, 2980-2990 (1998)] explicitly tested this assumption for an intensity continuum and a speech continuum. They measured response distributions directly and, assuming an interval scale, concluded that the Gaussian assumption held for both continua. However, Pastore and Macmillan [J. Acoust. Soc. Am. 111, 2432 (2002)] applied ROC analysis to Schouten and van Hessen's data, assuming only an ordinal scale. Their ROC curves supported the Gaussian assumption for the nonspeech signals only. Previously, Lopez-Bascuas [Proc. Audit. Bas. Speech Percept., 158-161 (1997)] found evidence with a rating scale procedure that the Gaussian model was inadequate for a voice-onset time continuum but not for a noise-buzz continuum. Both continua contained ten stimuli with asynchronies ranging from -35 ms to +55 ms. ROC curves (double-probability plots) are now reported for each pair of adjacent stimuli on the two continua. Both speech and nonspeech ROCs often appeared nonlinear, indicating non-Gaussian signal distributions under the usual zero-variance assumption for response criteria.
Sripada, Chandra Sekhar; Kessler, Daniel; Welsh, Robert; Angstadt, Michael; Liberzon, Israel; Phan, K Luan; Scott, Clayton
2013-11-01
Methylphenidate is a psychostimulant medication that produces improvements in functions associated with multiple neurocognitive systems. To investigate the potentially distributed effects of methylphenidate on the brain's intrinsic network architecture, we coupled resting state imaging with multivariate pattern classification. In a within-subject, double-blind, placebo-controlled, randomized, counterbalanced, cross-over design, 32 healthy human volunteers received either methylphenidate or placebo prior to two fMRI resting state scans separated by approximately one week. Resting state connectomes were generated by placing regions of interest at regular intervals throughout the brain, and these connectomes were submitted for support vector machine analysis. We found that methylphenidate produces a distributed, reliably detected, multivariate neural signature. Methylphenidate effects were evident across multiple resting state networks, especially visual, somatomotor, and default networks. Methylphenidate reduced coupling within visual and somatomotor networks. In addition, default network exhibited decoupling with several task positive networks, consistent with methylphenidate modulation of the competitive relationship between these networks. These results suggest that connectivity changes within and between large-scale networks are potentially involved in the mechanisms by which methylphenidate improves attention functioning. Copyright © 2013 Elsevier Inc. All rights reserved.
Non-uniform Solar Temperature Field on Large Aperture, Fully-Steerable Telescope Structure
NASA Astrophysics Data System (ADS)
Liu, Yan
2016-09-01
In this study, a 110-m fully steerable radio telescope was used as an analysis platform and the integral parametric finite element model of the antenna structure was built in the ANSYS thermal analysis module. The boundary conditions of periodic air temperature, solar radiation, long-wave radiation shadows of the surrounding environment, etc. were computed at 30 min intervals under a cloudless sky on a summer day, i.e., worst-case climate conditions. The transient structural temperatures were then analyzed under a period of several days of sunshine with a rational initial structural temperature distribution until the whole set of structural temperatures converged to the results obtained the day before. The non-uniform temperature field distribution of the entire structure and the main reflector surface RMS were acquired according to changes in pitch and azimuth angle over the observation period. Variations in the solar cooker effect over time and spatial distributions in the secondary reflector were observed to elucidate the mechanism of the effect. The results presented here not only provide valuable real-time data for the design, construction, sensor arrangement and thermal deformation control of actuators but also provide a troubleshooting reference for existing actuators.
Borgdorff, Martien W; Sebek, Maruschka; Geskus, Ronald B; Kremer, Kristin; Kalisvaart, Nico; van Soolingen, Dick
2011-08-01
There is limited information on the distribution of incubation periods of tuberculosis (TB). In The Netherlands, patients whose Mycobacterium tuberculosis isolates had identical DNA fingerprints in the period 1993-2007 were interviewed to identify epidemiological links between cases. We determined the incubation period distribution in secondary cases. Survival analysis techniques were used to include secondary cases not yet symptomatic at diagnosis, with weighting to adjust for the lower capture probabilities of source-secondary pairs with longer time intervals between their diagnoses. In order to deal with missing data, we used multiple imputation. We identified 1095 epidemiologically linked secondary cases, attributed to 688 source cases with pulmonary TB. Of those developing disease within 15 years, the Kaplan-Meier probability of falling ill within 1 year was 45%, within 2 years 62% and within 5 years 83%. The incubation time was shorter in secondary cases who were men, young, those with extra-pulmonary TB and those not reporting previous TB or previous preventive therapy. Molecular epidemiological analysis has allowed a more precise description of the incubation period of TB than was possible in previous studies, including the identification of risk factors for shorter incubation periods.
Extended-Interval Gentamicin Dosing in Achieving Therapeutic Concentrations in Malaysian Neonates
Tan, Sin Li; Wan, Angeline SL
2015-01-01
OBJECTIVE: To evaluate the usefulness of extended-interval gentamicin dosing practiced in neonatal intensive care unit (NICU) and special care nursery (SCN) of a Malaysian hospital. METHODS: Cross-sectional observational study with pharmacokinetic analysis of all patients aged ≤28 days who received gentamicin treatment in NICU/SCN. Subjects received dosing according to a regimen modified from an Australian-based pediatric guideline. During a study period of 3 months, subjects were evaluated for gestational age, body weight, serum creatinine concentration, gentamicin dose/interval, serum peak and trough concentrations, and pharmacokinetic parameters. Descriptive percentages were used to determine the overall dosing accuracy, while analysis of variance (ANOVA) was conducted to compare the accuracy rates among different gestational ages. Pharmacokinetic profile among different gestational age and body weight groups were compared by using ANOVA. RESULTS: Of the 113 subjects included, 82.3% (n = 93) achieved therapeutic concentrations at the first drug-monitoring assessment. There was no significant difference found between the percentage of term neonates who achieved therapeutic concentrations and the premature group (87.1% vs. 74.4%), p = 0.085. A total of 112 subjects (99.1%) achieved desired therapeutic trough concentration of <2 mg/L. Mean gentamicin peak concentration was 8.52 mg/L (95% confidence interval [Cl], 8.13–8.90 mg/L) and trough concentration was 0.54 mg/L (95% CI, 0.48–0.60 mg/L). Mean volume of distribution, half-life, and elimination rate were 0.65 L/kg (95% CI, 0.62–0.68 L/kg), 6.96 hours (95% CI, 6.52–7.40 hours), and 0.11 hour−1 (95% CI, 0.10–0.11 hour−1), respectively. CONCLUSION: The larger percentage of subjects attaining therapeutic range with extended-interval gentamicin dosing suggests that this regimen is appropriate and can be safely used among Malaysian neonates. PMID:25964729
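A simplified sketch of the one-compartment pharmacokinetic calculations typically derived from a peak and trough pair in aminoglycoside therapeutic drug monitoring. This uses bolus-like assumptions and hypothetical numbers; it is an illustration of the parameter definitions, not the study's dosing or monitoring procedure.

```python
# One-compartment PK estimates (elimination rate, half-life, volume of distribution)
# from a hypothetical gentamicin peak/trough pair in a neonate.
import numpy as np

dose_mg = 20.0                      # hypothetical dose
c_peak, t_peak = 8.5, 1.0           # mg/L, hours after start of dose
c_trough, t_trough = 0.5, 24.0      # mg/L, hours (end of the extended interval)

k_el = np.log(c_peak / c_trough) / (t_trough - t_peak)   # elimination rate constant (1/h)
t_half = np.log(2) / k_el                                # half-life (h)
c_max0 = c_peak * np.exp(k_el * t_peak)                  # concentration back-extrapolated to t = 0
vd_per_kg = dose_mg / c_max0 / 3.0                       # assume a 3 kg neonate (hypothetical)

print(f"k = {k_el:.2f} /h, t1/2 = {t_half:.1f} h, Vd ~ {vd_per_kg:.2f} L/kg")
```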
Verdugo, Cristobal; Valdes, Maria Francisca; Salgado, Miguel
2018-06-01
This study aimed to estimate the distributions of the within-herd true prevalence (TP) and the annual clinical incidence proportion (CIp) of Mycobacterium avium subsp. paratuberculosis (MAP) infection in dairy cattle herds in Chile. Forty-two commercial herds with antecedents of MAP infection were randomly selected to participate in the study. In small herds (≤30 cows), serum samples were collected from all animals present, whereas in larger herds, milk or serum samples were collected from all milking cows with 2 or more parities. Samples were analysed using the Pourquier® ELISA PARATUBERCULOSIS (Institut Pourquier, France) test. Moreover, a questionnaire gathering information on management practices and the frequency of clinical cases compatible with paratuberculosis (in the previous 12 months) was applied on the sampling date. A Bayesian latent class analysis was used to obtain TP and clinical incidence posterior distributions. The model adjusts for uncertainty in test sensitivity (serum or milk) and specificity, and prior TP and CIp estimates. A total of 4963 animals were tested, with an average contribution of 124 samples per herd. A mean apparent prevalence of 6.3% (95% confidence interval: 4.0-8.0%) was observed. Model outputs indicated an overall TP posterior distribution, across herds, with a median of 13.1% (95% posterior probability interval (PPI); 3.2-38.1%). A high TP variability was observed between herds. CIp presented a posterior median of 1.1% (95% PPI; 0.2-4.6%). Model results complement information missing from previously conducted epidemiological studies in the sector, and they could be used for further assessment of the disease impact and planning of control programs. Copyright © 2018 Elsevier B.V. All rights reserved.
Nash, Rebecca; Ward, Kevin C; Jemal, Ahmedin; Sandberg, David E; Tangpricha, Vin; Goodman, Michael
2018-03-09
Transgender people and persons with disorders of sex development (DSD) are two separate categories of gender minorities, each characterized by unique cancer risk factors. Although cancer registry data typically include only two categories of sex, registrars have the option of indicating that a patient is transgender or has a DSD. Data for primary cancer cases in 46 states and the District of Columbia were obtained from the North American Association of Central Cancer Registries (NAACCR) database for the period 1995-2013. The distributions of primary sites and categories of cancers with shared risk factors were examined separately for transgender and DSD patients and compared to the corresponding distributions in male and female cancer patients. Proportional incidence ratios were calculated by dividing the number of observed cases by the number of expected cases. Expected cases were calculated based on the age- and year of diagnosis-specific proportions of cases in each cancer category observed among male and female patients. Transgender patients have significantly elevated proportional incidence ratios (95% confidence intervals) for viral infection induced cancers compared to either males (2.3; 2.0-2.7) or females (3.3; 2.8-3.7). Adult DSD cancer patients have a similar distribution of primary sites compared to male or female patients but DSD children with cancer have ten times more cases of testicular malignancies than expected (95% confidence interval: 4.7-20). The proportions of certain primary sites and categories of malignancies among transgender and DSD cancer patients are different from the proportions observed for male or female patients. Copyright © 2018 Elsevier Ltd. All rights reserved.
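A small sketch of a proportional incidence ratio (PIR) with an approximate Poisson-based 95% confidence interval, the kind of observed/expected comparison described above. The counts are hypothetical, chosen only so the ratio is similar in magnitude to the values reported.

```python
# PIR = observed / expected, with a log-scale normal approximation for the CI.
import numpy as np

observed, expected = 138, 60.0
pir = observed / expected
se_log = 1.0 / np.sqrt(observed)                 # SE of log(PIR) under a Poisson model
ci = pir * np.exp(np.array([-1.96, 1.96]) * se_log)
print(f"PIR = {pir:.1f} (95% CI {ci[0]:.1f}-{ci[1]:.1f})")
```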
Yan, Cunling; Hu, Jian; Yang, Jia; Chen, Zhaoyun; Li, Huijun; Wei, Lianhua; Zhang, Wei; Xing, Hao; Sang, Guoyao; Wang, Xiaoqin; Han, Ruilin; Liu, Ping; Li, Zhihui; Li, Zhiyan; Huang, Ying; Jiang, Li; Li, Shunjun; Dai, Shuyang; Wang, Nianyue; Yang, Yongfeng; Ma, Li; Soh, Andrew; Beshiri, Agim; Shen, Feng; Yang, Tian; Fan, Zhuping; Zheng, Yijie; Chen, Wei
2018-04-01
Protein induced by vitamin K absence or antagonist-II (PIVKA-II) has been widely used as a biomarker for liver cancer diagnosis in Japan for decades. However, the reference intervals for serum ARCHITECT PIVKA-II have not been established in the Chinese population. Thus, this study aimed to measure serum PIVKA-II levels in healthy Chinese subjects. This is a sub-analysis of a prospective, cross-sectional and multicenter study (ClinicalTrials.gov Identifier: NCT03047603). A total of 892 healthy participants (777 Han and 115 Uygur) with complete health checkup results were recruited from 7 regional centers in China. Serum PIVKA-II level was measured by the ARCHITECT immunoassay. All 95% reference ranges were estimated by the nonparametric method. The distribution of PIVKA-II values showed significant differences with ethnicity and sex, but not age. The 95% reference range of PIVKA-II was 13.62-40.38 mAU/ml in Han Chinese subjects and 15.16-53.74 mAU/ml in Uygur subjects. PIVKA-II level was significantly higher in males than in females (P < 0.001). The 95% reference range of PIVKA-II was 15.39-42.01 mAU/ml in Han males and 11.96-39.13 mAU/ml in Han females. The reference interval of serum PIVKA-II on the Architect platform was established in healthy Chinese adults. This will be valuable for future clinical and laboratory studies performed using the Architect analyzer. Different ethnic backgrounds and analytical methods underline the need for redefining the reference intervals of analytes such as PIVKA-II in central laboratories in different countries. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
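A one-step sketch of the nonparametric 95% reference interval used in such studies: the central 95% of observed values (2.5th and 97.5th percentiles), with no distributional assumption. Simulated values stand in for the measured PIVKA-II results.

```python
# Nonparametric 95% reference interval from a (simulated) healthy cohort.
import numpy as np

rng = np.random.default_rng(7)
pivka_mau_ml = rng.lognormal(mean=3.2, sigma=0.25, size=777)   # hypothetical healthy values

lower, upper = np.percentile(pivka_mau_ml, [2.5, 97.5])
print(f"nonparametric 95% reference interval: {lower:.1f}-{upper:.1f} mAU/ml")
```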
Solar cycle variations in polar cap area measured by the superDARN radars
NASA Astrophysics Data System (ADS)
Imber, S. M.; Milan, S. E.; Lester, M.
2013-10-01
We present a long-term study, from January 1996 to August 2012, of the latitude of the Heppner-Maynard Boundary (HMB) measured at midnight using the northern hemisphere Super Dual Auroral Radar Network (SuperDARN). The HMB represents the equatorward extent of ionospheric convection and is used in this study as a measure of the global magnetospheric dynamics. We find that the yearly distribution of HMB latitudes is single peaked at 64° magnetic latitude for the majority of the 17 year interval. During 2003, the envelope of the distribution shifts to lower latitudes and a second peak in the distribution is observed at 61°. The solar wind-magnetosphere coupling function derived by Milan et al. (2012) suggests that the solar wind driving during this year was significantly higher than during the rest of the 17 year interval. In contrast, during the period 2008-2011, the HMB distribution shifts to higher latitudes, and a second peak in the distribution is again observed, this time at 68° magnetic latitude. This time interval corresponds to a period of extremely low solar wind driving during the recent extreme solar minimum. This is the first long-term study of the polar cap area and the results demonstrate that there is a close relationship between the solar activity cycle and the area of the polar cap on a large-scale, statistical basis.
Solar Cycle Variations in Polar Cap Area Measured by the SuperDARN Radars
NASA Astrophysics Data System (ADS)
Imber, S. M.; Milan, S. E.; Lester, M.
2013-12-01
We present a long term study, from January 1996 - August 2012, of the latitude of the Heppner-Maynard Boundary (HMB) measured at midnight using the northern hemisphere SuperDARN radars. The HMB represents the equatorward extent of ionospheric convection, and is used in this study as a measure of the global magnetospheric dynamics and activity. We find that the yearly distribution of HMB latitudes is single-peaked at 64° magnetic latitude for the majority of the 17-year interval. During 2003 the envelope of the distribution shifts to lower latitudes and a second peak in the distribution is observed at 61°. The solar wind-magnetosphere coupling function derived by Milan et al. (2012) suggests that the solar wind driving during this year was significantly higher than during the rest of the 17-year interval. In contrast, during the period 2008-2011 HMB distribution shifts to higher latitudes, and a second peak in the distribution is again observed, this time at 68° magnetic latitude. This time interval corresponds to a period of extremely low solar wind driving during the recent extreme solar minimum. This is the first statistical study of the polar cap area over an entire solar cycle, and the results demonstrate that there is a close relationship between the phase of the solar cycle and the area of the polar cap on a large scale statistical basis.
Characterization of Fissile Assemblies Using Low-Efficiency Detection Systems
Chapline, George F.; Verbeke, Jerome M.
2017-02-02
Here, we have investigated the possibility that the amount, chemical form, multiplication, and shape of the fissile material in an assembly can be passively assayed using scintillator detection systems by only measuring the fast neutron pulse height distribution and the distribution of time intervals Δt between fast neutrons. We have previously demonstrated that the alpha-ratio can be obtained from the observed pulse height distribution for fast neutrons. In this paper we report that when the distribution of time intervals is plotted as a function of log Δt, the position of the correlated neutron peak is nearly independent of detector efficiency and determines the internal relaxation rate for fast neutrons. If this information is combined with knowledge of the alpha-ratio, then the position of the minimum between the correlated and uncorrelated peaks can be used to rapidly estimate the mass, multiplication, and shape of fissile material. This method does not require a priori knowledge of either the efficiency for neutron detection or the alpha-ratio. Although our method neglects 3-neutron correlations, we have used previously obtained experimental data for metallic and oxide forms of Pu to demonstrate that our method yields good estimates for multiplications as large as 2, and that the only constraint on detector efficiency/observation time is that a peak in the interval time distribution due to correlated neutrons is visible.
Dorazio, Robert; Karanth, K. Ullas
2017-01-01
Motivation: Several spatial capture-recapture (SCR) models have been developed to estimate animal abundance by analyzing the detections of individuals in a spatial array of traps. Most of these models do not use the actual dates and times of detection, even though this information is readily available when using continuous-time recorders, such as microphones or motion-activated cameras. Instead most SCR models either partition the period of trap operation into a set of subjectively chosen discrete intervals and ignore multiple detections of the same individual within each interval, or they simply use the frequency of detections during the period of trap operation and ignore the observed times of detection. Both practices make inefficient use of potentially important information in the data. Model and data analysis: We developed a hierarchical SCR model to estimate the spatial distribution and abundance of animals detected with continuous-time recorders. Our model includes two kinds of point processes: a spatial process to specify the distribution of latent activity centers of individuals within the region of sampling and a temporal process to specify temporal patterns in the detections of individuals. We illustrated this SCR model by analyzing spatial and temporal patterns evident in the camera-trap detections of tigers living in and around the Nagarahole Tiger Reserve in India. We also conducted a simulation study to examine the performance of our model when analyzing data sets of greater complexity than the tiger data. Benefits: Our approach provides three important benefits: First, it exploits all of the information in SCR data obtained using continuous-time recorders. Second, it is sufficiently versatile to allow the effects of both space use and behavior of animals to be specified as functions of covariates that vary over space and time. Third, it allows both the spatial distribution and abundance of individuals to be estimated, effectively providing a species distribution model, even in cases where spatial covariates of abundance are unknown or unavailable. We illustrated these benefits in the analysis of our data, which allowed us to quantify differences between nocturnal and diurnal activities of tigers and to estimate their spatial distribution and abundance across the study area. Our continuous-time SCR model allows an analyst to specify many of the ecological processes thought to be involved in the distribution, movement, and behavior of animals detected in a spatial trapping array of continuous-time recorders. We plan to extend this model to estimate the population dynamics of animals detected during multiple years of SCR surveys.
Bayesian lead time estimation for the Johns Hopkins Lung Project data.
Jang, Hyejeong; Kim, Seongho; Wu, Dongfeng
2013-09-01
Lung cancer screening using X-rays has been controversial for many years. A major concern is whether lung cancer screening really brings any survival benefit, which depends on effective treatment after early detection. The problem was analyzed from a different point of view, and estimates of the projected lead time were presented for participants in a lung cancer screening program using the Johns Hopkins Lung Project (JHLP) data. The newly developed method of lead time estimation was applied, in which the lifetime T is treated as a random variable rather than a fixed value, so that the number of future screenings for a given individual is also a random variable. Using the actuarial life table available from the United States Social Security Administration, the lifetime distribution was first obtained, then the lead time distribution was projected using the JHLP data. The data analysis with the JHLP data shows that, for a male heavy smoker with initial screening ages of 50, 60, and 70, the probability of no-early-detection with semiannual screens will be 32.16%, 32.45%, and 33.17%, respectively, while the mean lead time is 1.36, 1.33 and 1.23 years. The probability of no-early-detection increases monotonically when the screening interval increases, and it increases slightly as the initial age increases for the same screening interval. The mean lead time and its standard error decrease when the screening interval increases for all age groups, and both decrease when initial age increases with the same screening interval. The overall mean lead time estimated with a random lifetime T is slightly less than that with a fixed value of T. This result is hoped to be of benefit in improving current screening programs. Copyright © 2013 Ministry of Health, Saudi Arabia. Published by Elsevier Ltd. All rights reserved.
Fiore, Lorenzo; Lorenzetti, Walter; Ratti, Giovannino
2005-11-30
A procedure is proposed to compare single-unit spiking activity elicited in repetitive cycles with an inhomogeneous Poisson process (IPP). Each spike sequence in a cycle is discretized and represented as a point process on a circle. The interspike interval probability density predicted for an IPP is computed on the basis of the experimental firing probability density, and differences from the experimental interval distribution are assessed. This procedure was applied to spike trains which were repetitively induced by opening-closing movements of the distal article of a lobster leg. As expected, the density of short interspike intervals, less than 20-40 ms in length, was found to lie greatly below the level predicted for an IPP, reflecting the occurrence of the refractory period. Conversely, longer intervals, ranging from 20-40 to 100-120 ms, were markedly more abundant than expected; this provided evidence for a time window of increased tendency to fire again after a spike. Less consistently, a weak depression of spike generation was observed for longer intervals. A Monte Carlo procedure, implemented for comparison, produced quite similar results, but was slightly less precise and more demanding in terms of computation time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santos, M. S. dos, E-mail: michel.santos@iffarroupilha.edu.br; Instituto Federal de Educação, Ciência e Tecnologia Farroupilha, 98590-000, Santo Augusto, RS; Ziebell, L. F., E-mail: luiz.ziebell@ufrgs.br
2016-01-15
We study the dispersion relation for low frequency waves in the whistler mode propagating along the ambient magnetic field, considering ions and electrons with product-bi-kappa (PBK) velocity distributions and taking into account the presence of a population of dust particles. The results obtained by numerical analysis of the dispersion relation show that the decrease in the κ indexes in the ion PBK distribution contributes to the increase in magnitude of the growth rates of the ion firehose instability and the size of the region in wave number space where the instability occurs. It is also shown that the decrease in the κ indexes in the electron PBK distribution contributes to a decrease in the growth rates of the instability, despite the fact that the instability occurs due to the anisotropy in the ion distribution function. For most of the interval of κ values which has been investigated, the ability of the non-thermal ions to increase the instability overcomes the tendency of decrease due to the non-thermal electron distribution, but for very small values of the kappa indexes the deleterious effect of the non-thermal electrons tends to overcome the effect due to the non-thermal ion distribution.
NASA Astrophysics Data System (ADS)
Yin, Hui; Yu, Dejie; Yin, Shengwen; Xia, Baizhan
2016-10-01
This paper introduces mixed fuzzy and interval parametric uncertainties into the FE components of the hybrid Finite Element/Statistical Energy Analysis (FE/SEA) model for mid-frequency analysis of built-up systems, yielding an uncertain ensemble that combines non-parametric uncertainties with mixed fuzzy and interval parametric uncertainties. A fuzzy interval Finite Element/Statistical Energy Analysis (FIFE/SEA) framework is proposed to obtain the uncertain responses of built-up systems, which are described as intervals with fuzzy bounds, termed fuzzy-bounded intervals (FBIs) in this paper. Based on the level-cut technique, a first-order fuzzy interval perturbation FE/SEA (FFIPFE/SEA) and a second-order fuzzy interval perturbation FE/SEA method (SFIPFE/SEA) are developed to handle the mixed parametric uncertainties efficiently. FFIPFE/SEA approximates the response functions by the first-order Taylor series, while SFIPFE/SEA improves the accuracy by retaining the second-order terms of the Taylor series, with all the mixed second-order terms neglected. To further improve the accuracy, a Chebyshev fuzzy interval method (CFIM) is proposed, in which Chebyshev polynomials are used to approximate the response functions. The FBIs are eventually reconstructed by assembling the extrema solutions at all cut levels. Numerical results on two built-up systems verify the effectiveness of the proposed methods.
NASA Astrophysics Data System (ADS)
Beach, Shaun E.; Semkow, Thomas M.; Remling, David J.; Bradt, Clayton J.
2017-07-01
We have developed accessible methods to demonstrate fundamental statistics in several phenomena, in the context of teaching electronic signal processing in a physics-based college-level curriculum. A relationship between the exponential time-interval distribution and Poisson counting distribution for a Markov process with constant rate is derived in a novel way and demonstrated using nuclear counting. Negative binomial statistics is demonstrated as a model for overdispersion and justified by the effect of electronic noise in nuclear counting. The statistics of digital packets on a computer network are shown to be compatible with the fractal-point stochastic process leading to a power-law as well as generalized inverse Gaussian density distributions of time intervals between packets.
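The exponential-interval/Poisson-count relationship described above is easy to demonstrate numerically. The following sketch (not the authors' classroom material; the rate, observation time and window length are assumed values) simulates a constant-rate Markov process and checks both properties.

```python
# Minimal sketch: for a Markov (Poisson) process with constant rate,
# inter-event times are exponential and counts per window are Poisson.
import numpy as np

rng = np.random.default_rng(0)
rate = 5.0          # events per second (assumed)
t_total = 10_000.0  # total observation time in seconds (assumed)

# Simulate event times as cumulative sums of exponential intervals.
intervals = rng.exponential(1.0 / rate, size=int(rate * t_total * 1.2))
times = np.cumsum(intervals)
times = times[times < t_total]

# Counts in 1-second windows should follow a Poisson(rate) distribution,
# so their mean and variance should both be close to the rate.
counts = np.bincount(times.astype(int), minlength=int(t_total))
print("mean interval:", intervals.mean(), "(expected", 1.0 / rate, ")")
print("count mean:", counts.mean(), "count variance:", counts.var(),
      "(both expected close to", rate, "for a Poisson distribution)")
```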
Prediction of future asset price which is non-concordant with the historical distribution
NASA Astrophysics Data System (ADS)
Seong, Ng Yew; Hin, Pooi Ah
2015-12-01
This paper attempts to predict the major characteristics of a future asset price which is non-concordant with the distribution estimated from the price today and the prices on a large number of previous days. The three major characteristics of the i-th non-concordant asset price are the length of the interval between the occurrence time of the previous non-concordant asset price and that of the present non-concordant asset price; the indicator which takes the values -1 and 1 according to whether the non-concordant price is extremely small or extremely large, respectively; and the degree of non-concordance, given by the negative logarithm of the probability of the left tail or right tail of which one of the end points is given by the observed future price. The vector of three major characteristics of the next non-concordant price is modelled to be dependent on the vectors corresponding to the present and l - 1 previous non-concordant prices via a 3-dimensional conditional distribution which is derived from a 3(l + 1)-dimensional power-normal mixture distribution. The marginal distribution for each of the three major characteristics can then be derived from the conditional distribution. The mean of the j-th marginal distribution is an estimate of the value of the j-th characteristic of the next non-concordant price. Meanwhile, the 100(α/2) % and 100(1 - α/2) % points of the j-th marginal distribution can be used to form a prediction interval for the j-th characteristic of the next non-concordant price. The performance measures of the above estimates and prediction intervals indicate that the fitted conditional distribution is satisfactory. Thus the incorporation of the distribution of the characteristics of the next non-concordant price in the model for asset price has good potential for yielding a more realistic model.
Chang, Wen-Shin; Tsai, Chia-Wen; Wang, Ju-Yu; Ying, Tsung-Ho; Hsiao, Tsan-Seng; Chuang, Chin-Liang; Yueh, Te-Cheng; Liao, Cheng-Hsi; Hsu, Chin-Mu; Liu, Shih-Ping; Gong, Chi-Li; Tsai, Chang-Hai; Bau, Da-Tian
2015-09-01
The present study aimed at investigating whether the X-ray repair cross complementing protein 3 (XRCC3) genotype may serve as a useful marker for detecting leiomyoma and predicting its risk. A total of 640 women (166 patients with leiomyoma and 474 healthy controls) were examined for their XRCC3 rs1799794, rs45603942, rs861530, rs3212057, rs1799796, rs861539 and rs28903081 genotypes. The distributions of genotypic and allelic frequencies between the two groups were compared. The results showed that the CT and TT genotypes of XRCC3 rs861539 were associated with increased leiomyoma risk (odds ratio=2.19, 95% confidence interval=1.23-3.90; odds ratio=3.72, 95% confidence interval=1.23-11.26, respectively). On allelic frequency analysis, we found a significant difference in the distribution of the T allelic frequency of XRCC3 rs861539 (p=5.88 × 10(-5)). None of the other six single nucleotide polymorphisms were associated with altered leiomyoma susceptibility. The T allele (CT and TT genotypes) of XRCC3 rs861539 contributes to increased risk of leiomyoma among Taiwanese women and may serve as an early detection and predictive marker. Copyright© 2015 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.
Ederli, Nicole Brand; de Oliveira, Francisco Carlos Rodrigues
2015-01-01
The ratite group is composed of ostriches, rheas, emus, cassowaries and kiwis. Little research has been done on parasitism in these birds. The aim of this study was to determine the distribution of infections by gastrointestinal nematodes in ostriches in the state of Rio de Janeiro. For this, fecal samples were collected from 192 ostriches on 13 farms. From each sample, four grams of feces were used to determine the eggs per gram of feces (EPG) count, by means of the McMaster technique. Part of the feces sample was used for fecal cultures, to identify 100 larvae per sample. The results were subjected to descriptive analysis of central tendency and dispersion, using confidence intervals at the 5% error probability level in accordance with the Student t distribution, and Tukey's test with a 95% confidence interval. The mean EPG in the state was 1,557, and the municipality of Três Rios had the lowest average (62). The city of Campos dos Goytacazes presented the highest mean EPG of all the municipalities analyzed. The northern region presented the highest mean EPG, followed by the southern, metropolitan, coastal lowland and central regions. Libyostrongylus species were observed on all the farms: L. douglassii predominated, followed by L. dentatus and Codiostomum struthionis.
The retest distribution of the visual field summary index mean deviation is close to normal.
Anderson, Andrew J; Cheng, Allan C Y; Lau, Samantha; Le-Pham, Anne; Liu, Victor; Rahman, Farahnaz
2016-09-01
When modelling optimum strategies for how best to determine visual field progression in glaucoma, it is commonly assumed that the summary index mean deviation (MD) is normally distributed on repeated testing. Here we tested whether this assumption is correct. We obtained 42 reliable 24-2 Humphrey Field Analyzer SITA standard visual fields from one eye of each of five healthy young observers, with the first two fields excluded from analysis. Previous work has shown that although MD variability is higher in glaucoma, the shape of the MD distribution is similar to that found in normal visual fields. A Shapiro-Wilk test determined any deviation from normality. Kurtosis values for the distributions were also calculated. Data from each observer passed the Shapiro-Wilk normality test. Bootstrapped 95% confidence intervals for kurtosis encompassed the value for a normal distribution in four of five observers. When examined with quantile-quantile plots, distributions were close to normal and showed no consistent deviations across observers. The retest distribution of MD is not significantly different from normal in healthy observers, and so is likely also normally distributed - or nearly so - in those with glaucoma. Our results increase our confidence in the results of influential modelling studies where a normal distribution for MD was assumed. © 2016 The Authors Ophthalmic & Physiological Optics © 2016 The College of Optometrists.
Daly, Caitlin H; Higgins, Victoria; Adeli, Khosrow; Grey, Vijay L; Hamid, Jemila S
2017-12-01
To statistically compare and evaluate commonly used methods of estimating reference intervals and to determine which method is best based on characteristics of the distribution of various data sets. Three approaches for estimating reference intervals, i.e. parametric, non-parametric, and robust, were compared with simulated Gaussian and non-Gaussian data. The hierarchy of the performances of each method was examined based on bias and measures of precision. The findings of the simulation study were illustrated through real data sets. In all Gaussian scenarios, the parametric approach provided the least biased and most precise estimates. In non-Gaussian scenarios, no single method provided the least biased and most precise estimates for both limits of a reference interval across all sample sizes, although the non-parametric approach performed the best for most scenarios. The hierarchy of the performances of the three methods was only impacted by sample size and skewness. Differences between reference interval estimates established by the three methods were inflated by variability. Whenever possible, laboratories should attempt to transform data to a Gaussian distribution and use the parametric approach to obtain optimal reference intervals. When this is not possible, laboratories should consider sample size and skewness as factors in their choice of reference interval estimation method. The consequences of false positives or false negatives may also serve as factors in this decision. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
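As a rough illustration of the parametric and non-parametric approaches compared in this study, the sketch below estimates a central 95% reference interval both ways on simulated Gaussian and skewed data; the sample size of 120 follows the IFCC convention mentioned elsewhere in this collection, and all other numbers are invented. The robust method and the authors' bias and precision metrics are not reproduced.

```python
# Minimal sketch (not the authors' simulation code): parametric (Gaussian)
# versus non-parametric (empirical percentile) 95% reference intervals.
import numpy as np

rng = np.random.default_rng(1)
gaussian = rng.normal(loc=50.0, scale=5.0, size=120)     # IFCC-style n = 120
skewed = rng.lognormal(mean=3.0, sigma=0.5, size=120)    # non-Gaussian example

def parametric_ri(x):
    m, s = np.mean(x), np.std(x, ddof=1)
    return m - 1.96 * s, m + 1.96 * s

def nonparametric_ri(x):
    return np.percentile(x, 2.5), np.percentile(x, 97.5)

for name, x in [("Gaussian", gaussian), ("skewed", skewed)]:
    print(name, "parametric:", parametric_ri(x),
          "non-parametric:", nonparametric_ri(x))

# For skewed data, transforming to (approximate) normality and applying the
# parametric approach on the transformed scale is the strategy recommended
# above; a log transform is shown here.
log_lo, log_hi = parametric_ri(np.log(skewed))
print("skewed, parametric on log scale:", np.exp(log_lo), np.exp(log_hi))
```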
Hyltoft Petersen, Per; Lund, Flemming; Fraser, Callum G; Sandberg, Sverre; Sölétormos, György
2018-01-01
Background: Many clinical decisions are based on comparison of patient results with reference intervals. Therefore, an estimation of the analytical performance specifications for the quality that would be required to allow sharing common reference intervals is needed. The International Federation of Clinical Chemistry (IFCC) recommended a minimum of 120 reference individuals to establish reference intervals. This number implies a certain level of quality, which could then be used for defining analytical performance specifications as the maximum combination of analytical bias and imprecision required for sharing common reference intervals, the aim of this investigation. Methods: Two methods were investigated for defining the maximum combination of analytical bias and imprecision that would give the same quality of common reference intervals as the IFCC recommendation. Method 1 is based on a formula for the combination of analytical bias and imprecision, and Method 2 is based on the Microsoft Excel formula NORMINV, including the fractional probability of reference individuals outside each limit and the Gaussian variables of mean and standard deviation. The combinations of normalized bias and imprecision are illustrated for both methods. The formulae are identical for Gaussian and log-Gaussian distributions. Results: Method 2 gives the correct results with a constant percentage of 4.4% for all combinations of bias and imprecision. Conclusion: The Microsoft Excel formula NORMINV is useful for the estimation of analytical performance specifications for both Gaussian and log-Gaussian distributions of reference intervals.
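The role played by NORMINV in Method 2 can be mimicked with any inverse/cumulative normal routine. The sketch below is a simplified illustration, not the authors' formula: it computes the fraction of a Gaussian reference population falling outside fixed 2.5th/97.5th percentile limits once a normalized analytical bias and added imprecision are imposed. The bias/imprecision grid is arbitrary, and the paper's 4.4% criterion follows from its own derivation based on the IFCC n = 120 recommendation.

```python
# Minimal sketch of the kind of calculation Excel's NORMINV/NORMDIST supports:
# the fraction of a Gaussian reference population that falls outside fixed
# reference limits once analytical bias and imprecision are added.
import numpy as np
from scipy.stats import norm

lower, upper = norm.ppf(0.025), norm.ppf(0.975)   # ±1.96 in normalized units

def fraction_outside(bias, imprecision):
    """bias and imprecision are normalized to the reference-population SD."""
    total_sd = np.sqrt(1.0 + imprecision ** 2)
    below = norm.cdf(lower, loc=bias, scale=total_sd)
    above = norm.sf(upper, loc=bias, scale=total_sd)
    return below + above

for bias in (0.0, 0.125, 0.25):
    for cv in (0.0, 0.25, 0.5):
        print(f"bias={bias:.3f}, imprecision={cv:.2f} -> "
              f"{100 * fraction_outside(bias, cv):.2f}% outside")
```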
Jolley, Sarah E; Hough, Catherine L; Clermont, Gilles; Hayden, Douglas; Hou, Suqin; Schoenfeld, David; Smith, Nicholas L; Thompson, Boyd Taylor; Bernard, Gordon R; Angus, Derek C
2017-09-01
Short-term follow-up in the Fluid and Catheter Treatment Trial (FACTT) suggested differential mortality by race with conservative fluid management, but no significant interaction. In a post hoc analysis of FACTT including 1-year follow-up, we sought to estimate long-term mortality by race and test for an interaction between fluids and race. We performed a post hoc analysis of FACTT and the Economic Analysis of Pulmonary Artery Catheters (EAPAC) study (which included 655 of the 1,000 FACTT patients with near-complete 1-year follow up). We fit a multistate Markov model to estimate 1-year mortality for all non-Hispanic black and white randomized FACTT subjects. The model estimated the distribution of time from randomization to hospital discharge or hospital death (available on all patients) and estimated the distribution of time from hospital discharge to death using data on patients after hospital discharge for patients in EAPAC. The 1-year mortality was found by combining these estimates. Non-Hispanic black (n = 217, 25%) or white identified subjects (n = 641, 75%) were included. There was a significant interaction between race and fluid treatment (P = 0.012). One-year mortality was lower for black subjects assigned to conservative fluids (38 vs. 54%; mean mortality difference, 16%; 95% confidence interval, 2-30%; P = 0.027 between conservative and liberal). Conversely, 1-year mortality for white subjects was 35% versus 30% for conservative versus liberal arms (mean mortality difference, -4.8%; 95% confidence interval, -13% to 3%; P = 0.23). In our cohort, conservative fluid management may have improved 1-year mortality for non-Hispanic black patients with ARDS. However, we found no long-term benefit of conservative fluid management in white subjects.
A General theory of Signal Integration for Fault-Tolerant Dynamic Distributed Sensor Networks
1993-10-01
Topics addressed include: (a) the architecture and fault-tolerance of the distributed sensor network; (b) the proper synchronisation of sensor signals; (c) the computational complexities of the problem of distributed detection; and issues related to the recording of events and synchronization in distributed sensor networks.
A method for analyzing temporal patterns of variability of a time series from Poincare plots.
Fishman, Mikkel; Jacono, Frank J; Park, Soojin; Jamasebi, Reza; Thungtong, Anurak; Loparo, Kenneth A; Dick, Thomas E
2012-07-01
The Poincaré plot is a popular two-dimensional, time series analysis tool because of its intuitive display of dynamic system behavior. Poincaré plots have been used to visualize heart rate and respiratory pattern variabilities. However, conventional quantitative analysis relies primarily on statistical measurements of the cumulative distribution of points, making it difficult to interpret irregular or complex plots. Moreover, the plots are constructed to reflect highly correlated regions of the time series, reducing the amount of nonlinear information that is presented and thereby hiding potentially relevant features. We propose temporal Poincaré variability (TPV), a novel analysis methodology that uses standard techniques to quantify the temporal distribution of points and to detect nonlinear sources responsible for physiological variability. In addition, the analysis is applied across multiple time delays, yielding a richer insight into system dynamics than the traditional circle return plot. The method is applied to data sets of R-R intervals and to synthetic point process data extracted from the Lorenz time series. The results demonstrate that TPV complements the traditional analysis and can be applied more generally, including Poincaré plots with multiple clusters, and more consistently than the conventional measures and can address questions regarding potential structure underlying the variability of a data set.
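For readers unfamiliar with the conventional descriptors that TPV is meant to complement, the sketch below builds lagged Poincaré maps from a synthetic R-R interval series and reports the standard SD1/SD2 measures at several delays. The TPV quantification of temporal ordering itself is not reproduced, and the synthetic series is purely illustrative.

```python
# Minimal sketch: lagged Poincaré maps of an R-R interval series and the
# conventional SD1/SD2 descriptors of the cumulative point distribution.
import numpy as np

rng = np.random.default_rng(2)
rr = 0.8 + 0.05 * np.sin(np.linspace(0, 20 * np.pi, 600)) \
         + 0.02 * rng.standard_normal(600)   # synthetic R-R intervals (s)

def poincare_sd1_sd2(x, lag=1):
    a, b = x[:-lag], x[lag:]                 # (RR_n, RR_{n+lag}) point cloud
    diff, summ = (b - a) / np.sqrt(2), (b + a) / np.sqrt(2)
    return diff.std(ddof=1), summ.std(ddof=1)

for lag in (1, 2, 5, 10):
    sd1, sd2 = poincare_sd1_sd2(rr, lag)
    print(f"lag {lag:2d}: SD1={sd1 * 1e3:.1f} ms, SD2={sd2 * 1e3:.1f} ms")
```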
Oldenburg, Catherine E; Amza, Abdou; Kadri, Boubacar; Nassirou, Beido; Cotter, Sun Y; Stoller, Nicole E; West, Sheila K; Bailey, Robin L; Porco, Travis C; Keenan, Jeremy D; Lietman, Thomas M; Gaynor, Bruce D
2018-06-01
Azithromycin has modest efficacy against malaria, and previous cluster randomized trials have suggested that mass azithromycin distribution for trachoma control may play a role in malaria control. We evaluated the effect of annual versus biannual mass azithromycin distribution over a 3-year period on malaria prevalence during the peak transmission season in a region with seasonal malaria transmission in Niger. Twenty-four communities in Matameye, Niger, were randomized to annual mass azithromycin distribution (3 distributions to the entire community during the peak transmission season) or biannual-targeted azithromycin distribution (6 distributions to children <12 years of age, including 3 in the peak transmission season and 3 in the low transmission season). Malaria indices were evaluated at 36 months during the high transmission season. Parasitemia prevalence was 42.6% (95% confidence interval: 31.7%-53.6%) in the biannual distribution arm compared with 50.6% (95% confidence interval: 40.3%-60.8%) in the annual distribution arm (P = 0.29). There was no difference in parasite density or hemoglobin concentration in the 2 treatment arms. Additional rounds of mass azithromycin distribution during low transmission may not have a significant impact on malaria parasitemia measured during the peak transmission season.
Silva, Wagner G; Zerfass, Geise S A; Souza, Paulo A; Helenes, Javier
2015-09-01
This paper presents the integration of micropaleontological (palynology and foraminifera) and isotopic (87Sr/86Sr) analysis of a selected interval from the well 2-TG-96-RS, drilled on the onshore portion of the Pelotas Basin, Rio Grande do Sul, Brazil. A total of eight samples of the section between 140.20 and 73.50 m in depth was selected for palynological analysis, revealing diversified and abundant palynomorph associations. Species of spores, pollen grains and dinoflagellate cysts are the most common palynomorphs found. Planktic and benthic calcareous foraminifera were recovered from the lowest two levels of the section (140.20 and 134.30 m). Based on the stratigraphic range of the species of dinoflagellate cysts and sporomorphs, an age span from Late Miocene to Early Pliocene is assigned. The relative age obtained from the 87Sr/86Sr ratio in shells of calcareous foraminifers indicates a Late Miocene (Messinian) correspondence, corroborating the biostratigraphic positioning performed with palynomorphs. Paleoenvironmental interpretations based on the quantitative distribution of organic components (palynomorphs, phytoclasts and amorphous organic matter) throughout the section and on foraminiferal associations indicate a shallow marine depositional environment for the section. Two palynological intervals were recognized based on palynofacies analysis, related to middle to outer shelf (140.20 to 128.90 m) and inner shelf (115.75 to 73.50 m) conditions.
Coefficient Omega Bootstrap Confidence Intervals: Nonnormal Distributions
ERIC Educational Resources Information Center
Padilla, Miguel A.; Divers, Jasmin
2013-01-01
The performance of the normal theory bootstrap (NTB), the percentile bootstrap (PB), and the bias-corrected and accelerated (BCa) bootstrap confidence intervals (CIs) for coefficient omega was assessed through a Monte Carlo simulation under conditions not previously investigated. Of particular interests were nonnormal Likert-type and binary items.…
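As a hedged illustration of one of the intervals studied (the percentile bootstrap), the sketch below bootstraps coefficient omega for simulated one-factor continuous data, using scikit-learn's FactorAnalysis as a stand-in measurement model; the study's Likert-type and binary conditions, and the NTB and BCa intervals, are not reproduced.

```python
# Minimal sketch: percentile bootstrap (PB) confidence interval for
# coefficient omega, with loadings and uniquenesses from a one-factor fit.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(3)
n, k = 300, 6
factor = rng.standard_normal(n)
loadings = np.array([0.7, 0.6, 0.8, 0.5, 0.7, 0.6])
X = factor[:, None] * loadings + rng.standard_normal((n, k)) * 0.6

def omega(data):
    fa = FactorAnalysis(n_components=1).fit(data)
    lam = np.abs(fa.components_[0])          # loadings (sign-indeterminate)
    theta = fa.noise_variance_               # unique variances
    return lam.sum() ** 2 / (lam.sum() ** 2 + theta.sum())

boot = np.array([omega(X[rng.integers(0, n, n)]) for _ in range(500)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"omega = {omega(X):.3f}, 95% PB CI = ({lo:.3f}, {hi:.3f})")
```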
Schwacke, Lori H; Hall, Ailsa J; Townsend, Forrest I; Wells, Randall S; Hansen, Larry J; Hohn, Aleta A; Bossart, Gregory D; Fair, Patricia A; Rowles, Teresa K
2009-08-01
To develop robust reference intervals for hematologic and serum biochemical variables by use of data derived from free-ranging bottlenose dolphins (Tursiops truncatus) and examine potential variation in distributions of clinicopathologic values related to sampling sites' geographic locations. 255 free-ranging bottlenose dolphins. Data from samples collected during multiple bottlenose dolphin capture-release projects conducted at 4 southeastern US coastal locations in 2000 through 2006 were combined to determine reference intervals for 52 clinicopathologic variables. A nonparametric bootstrap approach was applied to estimate 95th percentiles and associated 90% confidence intervals; the need for partitioning by length and sex classes was determined by testing for differences in estimated thresholds with a bootstrap method. When appropriate, quantile regression was used to determine continuous functions for 95th percentiles dependent on length. The proportion of out-of-range samples for all clinicopathologic measurements was examined for each geographic site, and multivariate ANOVA was applied to further explore variation in leukocyte subgroups. A need for partitioning by length and sex classes was indicated for many clinicopathologic variables. For each geographic site, few significant deviations from expected number of out-of-range samples were detected. Although mean leukocyte counts did not vary among sites, differences in the mean counts for leukocyte subgroups were identified. Although differences in the centrality of distributions for some variables were detected, the 95th percentiles estimated from the pooled data were robust and applicable across geographic sites. The derived reference intervals provide critical information for conducting bottlenose dolphin population health studies.
Repeat sample intraocular pressure variance in induced and naturally ocular hypertensive monkeys.
Dawson, William W; Dawson, Judyth C; Hope, George M; Brooks, Dennis E; Percicot, Christine L
2005-12-01
To compare the repeat-sample mean variance of laser-induced ocular hypertension (OH) in rhesus monkeys with the repeat-sample mean variance of natural OH in age-range matched monkeys of similar and dissimilar pedigrees. Multiple monocular, retrospective, intraocular pressure (IOP) measures were recorded repeatedly during a short sampling interval (SSI, 1-5 months) and a long sampling interval (LSI, 6-36 months). There were 5-13 eyes in each SSI and LSI subgroup. Each interval contained subgroups of Florida monkeys with natural hypertension (NHT), Florida monkeys with induced hypertension (IHT1), unrelated induced hypertensives from Strasbourg, France (IHT2), and Florida age-range matched controls (C). Repeat-sample individual variance means and related IOPs were analyzed by a parametric analysis of variance (ANOV), and the results were compared to a non-parametric Kruskal-Wallis ANOV. As designed, all group intraocular pressure distributions were significantly different (P < or = 0.009) except for the two (Florida/Strasbourg) induced OH groups. A parametric 2 x 4 design ANOV for mean variance showed large significant effects due to treatment group and sampling interval. Similar results were produced by the nonparametric ANOV. The induced OH sample variance mean (LSI) was 43x the natural OH sample variance mean. The same relationship for the SSI was 12x. Laser-induced ocular hypertension in rhesus monkeys produces large IOP repeat-sample variance mean results compared to controls and natural OH.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Monson, L.M.; Lund, D.F.
1991-06-01
Five shallow gas-bearing Cretaceous intervals have been identified on the Fort Peck Reservation of northeastern Montana. They include the Lower Judith River Sandstone and shaly sandstone intervals in the Gammon, Niobrara, Greenhorn, and Mowry Formations. Stratigraphic correlations have been carried from southwestern Saskatchewan through the Bowdoin gas field to the reservation. Sparse yet widely distributed gas shows confirm this relatively untested resource. Each of these gas-bearing intervals belongs to a recognized stratigraphic cycle characterized by thick shales overlain by progradational shaly sandstones and siltstones. The bottom cycle (Skull Creek to Mowry) contains considerable nonmarine deposits, especially within the Muddy Sandstone interval, which is thickly developed in the eastern part of the reservation as a large valley-fill network. Some individual sandstone units are not continuous across the reservation. These, and those that correlate, appear to be related to paleotectonic features defined by northwest-trending lineament zones, and by lineament zone intersections. Northeast-trending paleotectonic elements exert secondary influence on stratigraphic isopachs. Circular tectonic elements, which carry through to basement, also have anomalous stratigraphic expression. Conventional drilling has not been conducive to properly testing the Cretaceous gas potential on the reservation, but empirical well-log analysis suggests that gas can be identified by various crossover techniques. The Judith River Formation did produce gas for field use at East Poplar.
NASA Astrophysics Data System (ADS)
Eldardiry, H. A.; Habib, E. H.
2014-12-01
Radar-based technologies have made spatially and temporally distributed quantitative precipitation estimates (QPE) available in an operational environment, in contrast to rain gauges. The floods identified through flash flood monitoring and prediction systems are subject to at least three sources of uncertainty: (a) those related to rainfall estimation errors, (b) those due to streamflow prediction errors arising from model structural issues, and (c) those due to errors in defining a flood event. The current study focuses on the first source of uncertainty and its effect on deriving important climatological characteristics of extreme rainfall statistics. Examples of such characteristics are rainfall amounts with certain Average Recurrence Intervals (ARI) or Annual Exceedance Probabilities (AEP), which are highly valuable for hydrologic and civil engineering design purposes. Gauge-based precipitation frequency estimates (PFE) have been maturely developed and widely used over the last several decades. More recently, there has been a growing interest by the research community to explore the use of radar-based rainfall products for developing PFEs and to understand the associated uncertainties. This study will use radar-based multi-sensor precipitation estimates (MPE) for 11 years to derive PFEs corresponding to various return periods over a spatial domain that covers the state of Louisiana in the southern USA. The PFE estimation approach used in this study is based on fitting a generalized extreme value (GEV) distribution to hydrologic extreme rainfall data based on annual maximum series (AMS). Among the estimation problems that may arise from fitting GEV distributions at each radar pixel are large variance and seriously biased quantile estimators. Hence, a regional frequency analysis (RFA) approach is applied. The RFA involves the use of data from different pixels surrounding each pixel within a defined homogeneous region. In this study, the region-of-influence approach along with the index flood technique are used in the RFA. A bootstrap procedure is carried out to account for the uncertainty in the distribution parameters and to construct 90% confidence intervals (i.e., 5% and 95% confidence limits) on AMS-based precipitation frequency curves.
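The core PFE calculation described above can be sketched at a single pixel as follows: fit a GEV distribution to an 11-year annual maximum series, read off rainfall depths for several average recurrence intervals, and bootstrap 90% confidence limits. The data below are synthetic, the at-pixel fit ignores the regional (region-of-influence/index-flood) pooling the study actually uses, and all parameter values are assumptions.

```python
# Minimal sketch: GEV fit to an annual maximum series, ARI return levels,
# and a bootstrap 90% confidence interval on those levels.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(4)
ams = genextreme.rvs(c=-0.1, loc=80.0, scale=25.0, size=11,
                     random_state=rng)          # 11 years of annual maxima (mm)

def return_levels(sample, aris=(2, 10, 25, 100)):
    c, loc, scale = genextreme.fit(sample)
    return np.array([genextreme.isf(1.0 / t, c, loc, scale) for t in aris])

levels = return_levels(ams)
boot = np.array([return_levels(rng.choice(ams, size=ams.size, replace=True))
                 for _ in range(500)])
lo, hi = np.percentile(boot, [5, 95], axis=0)
for ari, est, l, h in zip((2, 10, 25, 100), levels, lo, hi):
    print(f"ARI {ari:3d} yr: {est:6.1f} mm  (90% CI {l:.1f}-{h:.1f} mm)")
```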
Distribution of polycyclic aromatic hydrocarbons in urban stormwater in Queensland, Australia.
Herngren, Lars; Goonetilleke, Ashantha; Ayoko, Godwin A; Mostert, Maria M M
2010-09-01
This paper reports the distribution of Polycyclic Aromatic Hydrocarbons (PAHs) in wash-off in urban stormwater in Gold Coast, Australia. Runoff samples collected from residential, industrial and commercial sites were separated into a dissolved fraction (<0.45 microm), and three particulate fractions (0.45-75 microm, 75-150 microm and >150 microm). Patterns in the distribution of PAHs in the fractions were investigated using Principal Component Analysis. Regardless of the land use and particle size fraction characteristics, the presence of organic carbon plays a dominant role in the distribution of PAHs. The PAHs concentrations were also found to decrease with rainfall duration. Generally, the 1- and 2-year average recurrence interval rainfall events were associated with the majority of the PAHs and the wash-off was a source limiting process. In the context of stormwater quality mitigation, targeting the initial part of the rainfall event is the most effective treatment strategy. The implications of the study results for urban stormwater quality management are also discussed. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
Xie, Y L; Li, Y P; Huang, G H; Li, Y F; Chen, L R
2011-04-15
In this study, an inexact-chance-constrained water quality management (ICC-WQM) model is developed for planning regional environmental management under uncertainty. This method is based on an integration of interval linear programming (ILP) and chance-constrained programming (CCP) techniques. ICC-WQM allows uncertainties presented as both probability distributions and interval values to be incorporated within a general optimization framework. Complexities in environmental management systems can be systematically reflected, thus applicability of the modeling process can be highly enhanced. The developed method is applied to planning chemical-industry development in Binhai New Area of Tianjin, China. Interval solutions associated with different risk levels of constraint violation have been obtained. They can be used for generating decision alternatives and thus help decision makers identify desired policies under various system-reliability constraints of water environmental capacity of pollutant. Tradeoffs between system benefits and constraint-violation risks can also be tackled. They are helpful for supporting (a) decision of wastewater discharge and government investment, (b) formulation of local policies regarding water consumption, economic development and industry structure, and (c) analysis of interactions among economic benefits, system reliability and pollutant discharges. Copyright © 2011 Elsevier B.V. All rights reserved.
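A stripped-down sketch of the two ingredients combined in ICC-WQM is given below: a chance constraint on pollutant discharge with a Gaussian environmental capacity is converted to a deterministic constraint through the normal quantile, and interval-valued benefit coefficients are handled by solving the LP at both bounds. The two-activity example and all coefficients are invented, not taken from the Binhai New Area case study.

```python
# Minimal sketch combining chance-constrained programming (CCP) with
# interval-valued objective coefficients (ILP-style interval solutions).
import numpy as np
from scipy.optimize import linprog
from scipy.stats import norm

discharge = np.array([3.0, 5.0])          # pollutant load per unit activity
cap_mean, cap_sd = 100.0, 10.0            # Gaussian water environmental capacity
benefit_lo = np.array([4.0, 6.0])         # interval net benefit per unit
benefit_hi = np.array([5.0, 8.0])

def solve(benefit, p_violation):
    # P(discharge @ x <= capacity) >= 1 - p  <=>  discharge @ x <= F^{-1}(p)
    cap_det = cap_mean + cap_sd * norm.ppf(p_violation)
    res = linprog(c=-benefit,                     # linprog minimizes
                  A_ub=[discharge], b_ub=[cap_det],
                  bounds=[(0, 15), (0, 15)])
    return res.x, -res.fun

for p in (0.01, 0.05, 0.10):
    (x_lo, f_lo), (x_hi, f_hi) = solve(benefit_lo, p), solve(benefit_hi, p)
    print(f"violation risk {p:.2f}: benefit interval [{f_lo:.1f}, {f_hi:.1f}], "
          f"activity levels {np.round(x_lo, 2)} to {np.round(x_hi, 2)}")
```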
Quantifying Safety Margin Using the Risk-Informed Safety Margin Characterization (RISMC)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grabaskas, David; Bucknor, Matthew; Brunett, Acacia
2015-04-26
The Risk-Informed Safety Margin Characterization (RISMC), developed by Idaho National Laboratory as part of the Light-Water Reactor Sustainability Project, utilizes a probabilistic safety margin comparison between a load and capacity distribution, rather than a deterministic comparison between two values, as is usually done in best-estimate plus uncertainty analyses. The goal is to determine the failure probability, or in other words, the probability of the system load equaling or exceeding the system capacity. While this method has been used in pilot studies, there has been little work conducted investigating the statistical significance of the resulting failure probability. In particular, it is difficult to determine how many simulations are necessary to properly characterize the failure probability. This work uses classical (frequentist) statistics and confidence intervals to examine the impact in statistical accuracy when the number of simulations is varied. Two methods are proposed to establish confidence intervals related to the failure probability established using a RISMC analysis. The confidence interval provides information about the statistical accuracy of the method utilized to explore the uncertainty space, and offers a quantitative method to gauge the increase in statistical accuracy due to performing additional simulations.
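One classical way to attach a confidence interval to a Monte Carlo failure probability is the exact (Clopper-Pearson) binomial interval; the sketch below shows how that interval tightens as the number of simulations grows. This is a generic illustration with an assumed true failure probability, not the specific methods proposed in the report.

```python
# Minimal sketch: frequentist confidence interval on a failure probability
# estimated from n Monte Carlo load-vs-capacity comparisons.
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(5)
p_true = 0.02                     # assumed probability that load >= capacity

def clopper_pearson(k, n, conf=0.95):
    a = (1.0 - conf) / 2.0
    lo = beta.ppf(a, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1.0 - a, k + 1, n - k) if k < n else 1.0
    return lo, hi

for n in (100, 1_000, 10_000, 100_000):
    k = rng.binomial(n, p_true)   # number of simulated failures
    lo, hi = clopper_pearson(k, n)
    print(f"n={n:>6}: p_hat={k / n:.4f}, 95% CI=({lo:.4f}, {hi:.4f}), "
          f"width={hi - lo:.4f}")
```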
Steady state, relaxation and first-passage properties of a run-and-tumble particle in one-dimension
NASA Astrophysics Data System (ADS)
Malakar, Kanaya; Jemseena, V.; Kundu, Anupam; Vijay Kumar, K.; Sabhapandit, Sanjib; Majumdar, Satya N.; Redner, S.; Dhar, Abhishek
2018-04-01
We investigate the motion of a run-and-tumble particle (RTP) in one dimension. We find the exact probability distribution of the particle with and without diffusion on the infinite line, as well as in a finite interval. In the infinite domain, this probability distribution approaches a Gaussian form in the long-time limit, as in the case of a regular Brownian particle. At intermediate times, this distribution exhibits unexpected multi-modal forms. In a finite domain, the probability distribution reaches a steady-state form with peaks at the boundaries, in contrast to a Brownian particle. We also study the relaxation to the steady-state analytically. Finally we compute the survival probability of the RTP in a semi-infinite domain with an absorbing boundary condition at the origin. In the finite interval, we compute the exit probability and the associated exit times. We provide numerical verification of our analytical results.
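A brief Monte Carlo sketch of the 1D run-and-tumble dynamics discussed above is given below; the speed, tumbling rate and diffusion constant are illustrative. It shows the strongly non-Gaussian position distribution at times of order the tumbling time and the approach to Gaussian behavior at long times, consistent with the paper's exact results, though none of the closed-form expressions are evaluated here.

```python
# Minimal Monte Carlo sketch of a run-and-tumble particle in one dimension:
# velocity flips at rate gamma, plus optional translational diffusion D.
import numpy as np

rng = np.random.default_rng(6)
v, gamma, D, dt = 1.0, 1.0, 0.1, 1e-2
n_particles = 20_000

def positions_at(t):
    steps = int(t / dt)
    x = np.zeros(n_particles)
    sigma = np.ones(n_particles)              # internal state = +1 or -1
    for _ in range(steps):
        flip = rng.random(n_particles) < gamma * dt
        sigma[flip] *= -1
        x += v * sigma * dt + np.sqrt(2 * D * dt) * rng.standard_normal(n_particles)
    return x

for t in (0.5, 20.0):
    x = positions_at(t)
    excess_kurtosis = np.mean((x - x.mean()) ** 4) / x.var() ** 2 - 3.0
    # Negative excess kurtosis at short times reflects weight near x = ±v t;
    # it approaches zero (Gaussian) at long times.
    print(f"t={t:5.1f}: var={x.var():.3f}, excess kurtosis={excess_kurtosis:+.3f}")
```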
Extensions of the MCNP5 and TRIPOLI4 Monte Carlo Codes for Transient Reactor Analysis
NASA Astrophysics Data System (ADS)
Hoogenboom, J. Eduard; Sjenitzer, Bart L.
2014-06-01
To simulate reactor transients for safety analysis with the Monte Carlo method, the generation and decay of delayed neutron precursors is implemented in the MCNP5 and TRIPOLI4 general purpose Monte Carlo codes. Important new variance reduction techniques, like forced decay of precursors in each time interval and the branchless collision method, are included to obtain reasonable statistics for the power production per time interval. For simulation of practical reactor transients, the feedback effect from the thermal-hydraulics must also be included. This requires coupling of the Monte Carlo code with a thermal-hydraulics (TH) code, providing the temperature distribution in the reactor, which affects the neutron transport via the cross section data. The TH code also provides the coolant density distribution in the reactor, directly influencing the neutron transport. Different techniques for this coupling are discussed. As a demonstration, a 3x3 mini fuel assembly with a moving control rod is considered for MCNP5, and a mini core consisting of 3x3 PWR fuel assemblies with control rods and burnable poisons for TRIPOLI4. Results are shown for reactor transients due to control rod movement or withdrawal. The TRIPOLI4 transient calculation is started at low power and includes thermal-hydraulic feedback. The power rises about 10 decades and the reactor power finally stabilises at a much higher level than the initial one. The examples demonstrate that the modified Monte Carlo codes are capable of performing correct transient calculations, taking into account all geometrical and cross section detail.
Forecasting overhaul or replacement intervals based on estimated system failure intensity
NASA Astrophysics Data System (ADS)
Gannon, James M.
1994-12-01
System reliability can be expressed in terms of the pattern of failure events over time. Assuming a nonhomogeneous Poisson process and Weibull intensity function for complex repairable system failures, the degree of system deterioration can be approximated. Maximum likelihood estimators (MLE's) for the system Rate of Occurrence of Failure (ROCOF) function are presented. Evaluating the integral of the ROCOF over annual usage intervals yields the expected number of annual system failures. By associating a cost of failure with the expected number of failures, budget and program policy decisions can be made based on expected future maintenance costs. Monte Carlo simulation is used to estimate the range and the distribution of the net present value and internal rate of return of alternative cash flows based on the distributions of the cost inputs and confidence intervals of the MLE's.
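A minimal sketch of this forecasting idea, assuming the power-law (Crow-AMSAA) form of the Weibull intensity and time-truncated data, is given below; the failure times are invented and the MLE formulas are the standard power-law-process estimators, which may differ in detail from the author's parameterization.

```python
# Minimal sketch: power-law NHPP intensity u(t) = lam * beta * t**(beta - 1),
# MLEs from time-truncated data, and expected failures per usage interval.
import numpy as np

failure_times = np.array([120.0, 310.0, 480.0, 560.0, 700.0,
                          760.0, 850.0, 910.0, 955.0])   # system ages (h), invented
T = 1000.0                                               # end of observation (h)

n = failure_times.size
beta_hat = n / np.sum(np.log(T / failure_times))         # shape (>1 suggests deterioration)
lam_hat = n / T ** beta_hat                              # scale

def expected_failures(t0, t1):
    """Integral of the estimated ROCOF over the usage interval [t0, t1]."""
    return lam_hat * (t1 ** beta_hat - t0 ** beta_hat)

print(f"beta_hat = {beta_hat:.2f}")
for year, (t0, t1) in enumerate([(1000, 2000), (2000, 3000), (3000, 4000)], 1):
    print(f"usage interval {year}: expected failures = {expected_failures(t0, t1):.2f}")
```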
Grid Resolution Study over Operability Space for a Mach 1.7 Low Boom External Compression Inlet
NASA Technical Reports Server (NTRS)
Anderson, Bernhard H.
2014-01-01
This paper presents a statistical methodology whereby the probability limits associated with CFD grid resolution of inlet flow analysis can be determined, providing quantitative information on the distribution of that error over the specified operability range. The objective of this investigation is to quantify the effects of both random (accuracy) and systemic (biasing) errors associated with grid resolution in the analysis of the Lockheed Martin Company (LMCO) N+2 Low Boom external compression supersonic inlet. The study covers the entire operability space as defined previously by the High Speed Civil Transport (HSCT) High Speed Research (HSR) program goals. The probability limits, in terms of a 95.0% confidence interval on the analysis data, were evaluated for four ARP1420 inlet metrics, namely (1) total pressure recovery (PFAIP), (2) radial hub distortion (DPH/P), (3) radial tip distortion (DPT/P), and (4) circumferential distortion (DPC/P). In general, the resulting +/-0.95 delta Y interval was unacceptably large in comparison to the stated goals of the HSCT program. Therefore, the conclusion was reached that the "standard grid" size was insufficient for this type of analysis. However, in examining the statistical data, it was determined that the CFD analysis results at the outer fringes of the operability space were the determining factor in the measure of statistical uncertainty. Adequate grids are grids that are free of biasing (systemic) errors and exhibit low random (precision) errors in comparison to their operability goals. In order to be 100% certain that the operability goals have indeed been achieved for each of the inlet metrics, the Y+/-0.95 delta Y limit must fall inside the stated operability goals. For example, if the operability goal for DPC/P circumferential distortion is <=0.06, then the forecast Y for DPC/P plus the 95% confidence interval on DPC/P, i.e. +/-0.95 delta Y, must all be less than or equal to 0.06.
NASA Astrophysics Data System (ADS)
Hoynant, G.
2007-12-01
Fourier analysis makes it possible to identify periodic components in a time series of measurements in the form of a spectrum of the periodic components mathematically contained in the series. The reading of a spectrum is often delicate, and contradictory interpretations have been presented in some cases, as for the luminosity of the Seyfert galaxy NGC 4151, despite the very large number of observations since 1968. The present study identifies the causes of these difficulties through an experimental approach based on the analysis of synthetic series with one periodic component only. The total duration of the campaign must be long compared to the periods to be identified: this ratio governs the separation capability of the spectral analysis. A large number of observations is obviously favourable, but the intervals between measurements are not critical: the analysis can accommodate intervals significantly longer than the periods to be identified. However, interruptions during the campaign, with separate sessions of observations, make the physical understanding of the analysis difficult and sometimes impossible. An analysis performed on an imperfect series shows peaks which reflect not the signal itself but the chronological distribution of the measurements. These chronological peaks become numerous and prominent when there are vacant periods in the campaign. A method for authenticating a peak as a peak of the signal is to cut the chronological series into pieces with the same length as the period to be identified and to superpose all these pieces. The present study shows that some chronological peaks can exhibit superposition graphics almost as clear as those for the signal peaks. In practice, the search for periodic components requires organising the campaign specifically, with a neutral chronological distribution of measurements and without vacancies, and the authentication of a peak as a peak of the signal requires a dominating amplitude or a graphic of periodical superposition significantly clearer than for any peak with a comparable or bigger amplitude.
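For unevenly sampled, gappy campaigns of the kind discussed above, the Lomb-Scargle periodogram is the usual numerical tool; the sketch below (with an invented period, seasonal gaps and noise level) shows the true peak alongside spurious structure tied to the sampling pattern.

```python
# Minimal sketch: Lomb-Scargle periodogram of an irregularly sampled,
# gappy synthetic light curve with one periodic component.
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(7)
true_period = 14.0                              # days (assumed)
t = np.sort(rng.uniform(0, 1000, 400))          # irregular observation epochs
t = t[(t % 365) < 200]                          # seasonal gaps in the campaign
y = np.sin(2 * np.pi * t / true_period) + 0.5 * rng.standard_normal(t.size)

periods = np.linspace(2, 50, 2000)
omega = 2 * np.pi / periods                     # angular frequencies for SciPy
power = lombscargle(t, y - y.mean(), omega, normalize=True)

best = periods[np.argmax(power)]
print(f"strongest peak at {best:.2f} d (true period {true_period:.1f} d); "
      f"secondary peaks reflect the sampling/window pattern")
```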
A Bayesian approach to meta-analysis of plant pathology studies.
Mila, A L; Ngugi, H K
2011-01-01
Bayesian statistical methods are used for meta-analysis in many disciplines, including medicine, molecular biology, and engineering, but have not yet been applied for quantitative synthesis of plant pathology studies. In this paper, we illustrate the key concepts of Bayesian statistics and outline the differences between Bayesian and classical (frequentist) methods in the way parameters describing population attributes are considered. We then describe a Bayesian approach to meta-analysis and present a plant pathological example based on studies evaluating the efficacy of plant protection products that induce systemic acquired resistance for the management of fire blight of apple. In a simple random-effects model assuming a normal distribution of effect sizes and no prior information (i.e., a noninformative prior), the results of the Bayesian meta-analysis are similar to those obtained with classical methods. Implementing the same model with a Student's t distribution and a noninformative prior for the effect sizes, instead of a normal distribution, yields similar results for all but acibenzolar-S-methyl (Actigard) which was evaluated only in seven studies in this example. Whereas both the classical (P = 0.28) and the Bayesian analysis with a noninformative prior (95% credibility interval [CRI] for the log response ratio: -0.63 to 0.08) indicate a nonsignificant effect for Actigard, specifying a t distribution resulted in a significant, albeit variable, effect for this product (CRI: -0.73 to -0.10). These results confirm the sensitivity of the analytical outcome (i.e., the posterior distribution) to the choice of prior in Bayesian meta-analyses involving a limited number of studies. We review some pertinent literature on more advanced topics, including modeling of among-study heterogeneity, publication bias, analyses involving a limited number of studies, and methods for dealing with missing data, and show how these issues can be approached in a Bayesian framework. Bayesian meta-analysis can readily include information not easily incorporated in classical methods, and allow for a full evaluation of competing models. Given the power and flexibility of Bayesian methods, we expect them to become widely adopted for meta-analysis of plant pathology studies.
Baeten; Bruggeman; Paepen; Carchon
2000-03-01
The non-destructive quantification of transuranic elements in nuclear waste management or in safeguards verifications is commonly performed by passive neutron assay techniques. To minimise the number of unknown sample-dependent parameters, Neutron Multiplicity Counting (NMC) is applied. We developed a new NMC-technique, called Time Interval Correlation Spectroscopy (TICS), which is based on the measurement of Rossi-alpha time interval distributions. Compared to other NMC-techniques, TICS offers several advantages.
NASA Astrophysics Data System (ADS)
Rimskaya-Korsavkova, L. K.
2017-07-01
To find the possible reasons for the midlevel elevation of the Weber fraction in intensity discrimination of a tone burst, a comparison was performed for the complementary distributions of spike activity of an ensemble of space nerves, such as the distribution of time instants when spikes occur, the distribution of interspike intervals, and the autocorrelation function. The distribution properties were detected in a poststimulus histogram, an interspike interval histogram, and an autocorrelation histogram, all obtained from the reaction of an ensemble of model space nerves in response to a complex consisting of an auditory noise burst and a tone burst. Two configurations were used: in the first, the peak amplitude of the tone burst was varied and the noise amplitude was fixed; in the other, the tone burst amplitude was fixed and the noise amplitude was varied. Noise could precede or follow the tone burst. The noise and tone-burst durations, as well as the interval between them, were fixed; the carrier frequency of 4 kHz corresponded to the characteristic frequencies of the model space nerves. The profiles of all the mentioned histograms had two maxima. The values and the positions of the maxima in the poststimulus histogram corresponded to the amplitudes and mutual time position of the noise and the tone burst. The maximum that occurred in response to the tone burst action could be a basis for the formation of the loudness of the latter (explicit loudness). However, the positions of the maxima in the other two histograms did not depend on the positions of tone bursts and noise in the combinations. The first maximum fell at short intervals and combined intervals corresponding to the noise and tone-burst durations. The second maximum fell at intervals corresponding to the tone-burst delay with respect to the noise, and its value was proportional to whichever of the noise or tone-burst amplitudes was smaller in the complex. An increase in the tone-burst or noise amplitude caused nonlinear variations in the two maxima and in the ratio between them. The size of the first maximum in the interspike interval distribution could be the basis for the formation of the loudness of the masked tone burst (implicit loudness), and the size of the second maximum, for the formation of the intensity of the periodicity pitch of the complex. The auditory effect of the midlevel enhancement of tone burst loudness could be the result of variations in the implicit tone burst loudness caused by variations in tone-burst or noise intensity. The reason for the enhancement of the Weber fraction could be competitive interaction between such subjective qualities as explicit and implicit tone-burst loudness and the intensity of the periodicity pitch of the complex.
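The three complementary histograms discussed above are straightforward to compute from spike times; the sketch below does so for synthetic spike trains with two elevated-rate epochs standing in for the noise burst and tone burst. The model nerve ensemble of the paper is not reproduced, and all rates and durations are invented.

```python
# Minimal sketch: post-stimulus time histogram (PSTH), interspike-interval
# histogram, and all-order interval (autocorrelation) histogram.
import numpy as np

rng = np.random.default_rng(8)
n_trials, t_max, dt = 200, 0.3, 1e-3            # 300 ms analysis window
# Assumed stimulus: two elevated-rate epochs (a "noise" and a "tone" burst).
rate = np.where((np.arange(0, t_max, dt) % 0.15) < 0.05, 400.0, 40.0)  # spikes/s

trials = []
for _ in range(n_trials):
    spikes = np.nonzero(rng.random(rate.size) < rate * dt)[0] * dt
    trials.append(spikes)

edges = np.arange(0, t_max + dt, 5e-3)
psth = np.histogram(np.concatenate(trials), bins=edges)[0] / (n_trials * 5e-3)
isis = np.concatenate([np.diff(s) for s in trials])
isi_hist = np.histogram(isis, bins=np.arange(0, 0.15, 5e-3))[0]
all_order = np.concatenate([(s[None, :] - s[:, None])[np.triu_indices(s.size, 1)]
                            for s in trials if s.size > 1])
auto_hist = np.histogram(all_order, bins=np.arange(0, 0.15, 5e-3))[0]

print("peak PSTH rate (spikes/s):", psth.max())
print("ISI histogram mode (ms):", 1e3 * 5e-3 * (np.argmax(isi_hist) + 0.5))
print("autocorrelation histogram mode (ms):", 1e3 * 5e-3 * (np.argmax(auto_hist) + 0.5))
```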
Visual Aggregate Analysis of Eligibility Features of Clinical Trials
He, Zhe; Carini, Simona; Sim, Ida; Weng, Chunhua
2015-01-01
Objective: To develop a method for profiling the collective populations targeted for recruitment by multiple clinical studies addressing the same medical condition using one eligibility feature each time. Methods: Using a previously published database COMPACT as the backend, we designed a scalable method for visual aggregate analysis of clinical trial eligibility features. This method consists of four modules for eligibility feature frequency analysis, query builder, distribution analysis, and visualization, respectively. This method is capable of analyzing (1) frequently used qualitative and quantitative features for recruiting subjects for a selected medical condition, (2) distribution of study enrollment on consecutive value points or value intervals of each quantitative feature, and (3) distribution of studies on the boundary values, permissible value ranges, and value range widths of each feature. All analysis results were visualized using Google Charts API. Five recruited potential users assessed the usefulness of this method for identifying common patterns in any selected eligibility feature for clinical trial participant selection. Results: We implemented this method as a Web-based analytical system called VITTA (Visual Analysis Tool of Clinical Study Target Populations). We illustrated the functionality of VITTA using two sample queries involving quantitative features BMI and HbA1c for conditions “hypertension” and “Type 2 diabetes”, respectively. The recruited potential users rated the user-perceived usefulness of VITTA with an average score of 86.4/100. Conclusions: We contributed a novel aggregate analysis method to enable the interrogation of common patterns in quantitative eligibility criteria and the collective target populations of multiple related clinical studies. A larger-scale study is warranted to formally assess the usefulness of VITTA among clinical investigators and sponsors in various therapeutic areas. PMID:25615940
Visual aggregate analysis of eligibility features of clinical trials.
He, Zhe; Carini, Simona; Sim, Ida; Weng, Chunhua
2015-04-01
To develop a method for profiling the collective populations targeted for recruitment by multiple clinical studies addressing the same medical condition using one eligibility feature each time. Using a previously published database COMPACT as the backend, we designed a scalable method for visual aggregate analysis of clinical trial eligibility features. This method consists of four modules for eligibility feature frequency analysis, query builder, distribution analysis, and visualization, respectively. This method is capable of analyzing (1) frequently used qualitative and quantitative features for recruiting subjects for a selected medical condition, (2) distribution of study enrollment on consecutive value points or value intervals of each quantitative feature, and (3) distribution of studies on the boundary values, permissible value ranges, and value range widths of each feature. All analysis results were visualized using Google Charts API. Five recruited potential users assessed the usefulness of this method for identifying common patterns in any selected eligibility feature for clinical trial participant selection. We implemented this method as a Web-based analytical system called VITTA (Visual Analysis Tool of Clinical Study Target Populations). We illustrated the functionality of VITTA using two sample queries involving quantitative features BMI and HbA1c for conditions "hypertension" and "Type 2 diabetes", respectively. The recruited potential users rated the user-perceived usefulness of VITTA with an average score of 86.4/100. We contributed a novel aggregate analysis method to enable the interrogation of common patterns in quantitative eligibility criteria and the collective target populations of multiple related clinical studies. A larger-scale study is warranted to formally assess the usefulness of VITTA among clinical investigators and sponsors in various therapeutic areas. Copyright © 2015 Elsevier Inc. All rights reserved.
Kumar, Sanjeev; Karmeshu
2018-04-01
A theoretical investigation is presented that characterizes the emerging sub-threshold membrane potential and inter-spike interval (ISI) distributions of an ensemble of IF neurons that group together and fire together. The squared-noise intensity σ² of the ensemble of neurons is treated as a random variable to account for the electrophysiological variations across a population of nearly identical neurons. Employing a superstatistical framework, both the ISI distribution and the sub-threshold membrane potential distribution of the neuronal ensemble are obtained in terms of the generalized K-distribution. The resulting distributions exhibit asymptotic behavior akin to the stretched exponential family. Extensive simulations of the underlying SDE with random σ² are carried out. The results are found to be in excellent agreement with the analytical results. The analysis has been extended to cover the case corresponding to independent random fluctuations in drift in addition to random squared-noise intensity. The novelty of the proposed analytical investigation for the ensemble of IF neurons is that it yields closed-form expressions of probability distributions in terms of the generalized K-distribution. Based on a record of spiking activity of thousands of neurons, the findings of the proposed model are validated. The squared-noise intensity σ² of identified neurons from the data is found to follow a gamma distribution. The proposed generalized K-distribution is found to be in excellent agreement with the empirically obtained ISI distribution of the neuronal ensemble. Copyright © 2018 Elsevier B.V. All rights reserved.
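A compact way to illustrate the superstatistical construction above is to use the fact that, for a perfect integrate-and-fire neuron with drift μ and noise intensity σ², the ISI is inverse-Gaussian distributed with mean v_th/μ and shape v_th²/σ²; mixing σ² over a gamma distribution (as found empirically in the paper) broadens the pooled ISI distribution. The sketch below does exactly this with invented parameters; the closed-form generalized K-distribution itself is not evaluated.

```python
# Minimal sketch: superstatistical mixing of inverse-Gaussian ISIs over a
# gamma-distributed squared-noise intensity, compared with a fixed intensity.
import numpy as np

rng = np.random.default_rng(9)
mu, v_th = 1.0, 1.0
n_neurons, isi_per_neuron = 500, 200

sigma2 = rng.gamma(shape=2.0, scale=0.05, size=n_neurons)   # random noise intensity
pooled = np.concatenate([rng.wald(v_th / mu, v_th**2 / s2, size=isi_per_neuron)
                         for s2 in sigma2])
fixed = rng.wald(v_th / mu, v_th**2 / sigma2.mean(), size=pooled.size)

for name, isi in [("superstatistical (random sigma^2)", pooled),
                  ("single sigma^2", fixed)]:
    cv = isi.std() / isi.mean()
    # The pooled (superstatistical) ISIs show a larger coefficient of variation.
    print(f"{name}: mean ISI = {isi.mean():.3f}, CV = {cv:.3f}")
```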
Sheng, Li; Wang, Zidong; Tian, Engang; Alsaadi, Fuad E
2016-12-01
This paper deals with the H ∞ state estimation problem for a class of discrete-time neural networks with stochastic delays subject to state- and disturbance-dependent noises (also called (x,v)-dependent noises) and fading channels. The time-varying stochastic delay takes values on certain intervals with known probability distributions. The system measurement is transmitted through fading channels described by the Rice fading model. The aim of the addressed problem is to design a state estimator such that the estimation performance is guaranteed in the mean-square sense against admissible stochastic time-delays, stochastic noises as well as stochastic fading signals. By employing the stochastic analysis approach combined with the Kronecker product, several delay-distribution-dependent conditions are derived to ensure that the error dynamics of the neuron states is stochastically stable with prescribed H ∞ performance. Finally, a numerical example is provided to illustrate the effectiveness of the obtained results. Copyright © 2016 Elsevier Ltd. All rights reserved.
Statistical mechanics of economics I
NASA Astrophysics Data System (ADS)
Kusmartsev, F. V.
2011-02-01
We show that statistical mechanics is useful in the description of financial crises and economics. Taking a large number of instant snapshots of a market over an interval of time, we construct their ensembles and study their statistical interference. This results in a probability description of the market and gives capital, money, income, wealth and debt distributions, which in most cases take the form of the Bose-Einstein distribution. In addition, statistical mechanics provides the main market equations and laws which govern the correlations between the amount of money, debt, product, prices and number of retailers. We applied the relations found to a study of the evolution of the economy in the USA between 1996 and 2008 and observe that over that time the income of the majority of the population is well described by the Bose-Einstein distribution, whose parameters are different for each year. Each financial crisis corresponds to a peak in the absolute activity coefficient. The analysis correctly indicates the past crises and predicts the future one.
New approach application of data transformation in mean centering of ratio spectra method
NASA Astrophysics Data System (ADS)
Issa, Mahmoud M.; Nejem, R.'afat M.; Van Staden, Raluca Ioana Stefan; Aboul-Enein, Hassan Y.
2015-05-01
Most mean centering of ratio spectra (MCR) methods are designed to be used with data sets whose values have a normal or nearly normal distribution. The errors associated with the values are also assumed to be independent and random. If the data are skewed, the results obtained may be doubtful. Most of the time, a normal distribution was assumed, and if a confidence interval included a negative value, it was cut off at zero. However, it is possible to transform the data so that at least an approximately normal distribution is attained. Taking the logarithm of each data point is one transformation frequently used. As a result, the geometric mean is considered a better measure of central tendency than the arithmetic mean. The developed MCR method using the geometric mean has been successfully applied to the analysis of a ternary mixture of aspirin (ASP), atorvastatin (ATOR) and clopidogrel (CLOP) as a model. The results obtained were statistically compared with those of a reported HPLC method.
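The transformation idea described here is easy to demonstrate: take logarithms, form the confidence interval on the log scale, and back-transform, which centres the interval on the geometric mean and keeps it strictly positive. The sketch below (synthetic skewed data, not the spectral data of the paper) contrasts this with the ordinary arithmetic-mean interval.

```python
# Minimal sketch: arithmetic-mean CI versus a back-transformed (geometric-mean)
# CI for positively skewed data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
x = rng.lognormal(mean=0.0, sigma=1.0, size=30)     # skewed, strictly positive data

# Arithmetic-mean interval (symmetric; can extend below zero for very skewed data).
m, se = x.mean(), x.std(ddof=1) / np.sqrt(x.size)
t = stats.t.ppf(0.975, df=x.size - 1)
print("arithmetic mean CI:", (m - t * se, m + t * se))

# Log-transform, then back-transform: centred on the geometric mean,
# and necessarily positive.
lx = np.log(x)
lm, lse = lx.mean(), lx.std(ddof=1) / np.sqrt(x.size)
print("geometric mean:", np.exp(lm),
      "back-transformed CI:", (np.exp(lm - t * lse), np.exp(lm + t * lse)))
```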
Roberts, Laura N. Robinson; Kirschbaum, Mark A.
1995-01-01
A synthesis of Late Cretaceous paleogeography of the Western Interior from Mexico to southwestern Canada emphasizes the areal distribution of peat-forming environments during six biostratigraphically constrained time intervals. Isopach maps of strata for each interval reveal the locations and magnitude of major depocenters. The paleogeographic framework provides insight into the relative importance of tectonism, eustasy, and climate on the accumulation of thick peats and their preservation as coals. A total of 123 basin summaries and their data provide the ground truth for construction of the isopach and paleogeographic maps.
Minimax rational approximation of the Fermi-Dirac distribution
Moussa, Jonathan E.
2016-10-27
Accurate rational approximations of the Fermi-Dirac distribution are a useful component in many numerical algorithms for electronic structure calculations. The best known approximations use O(log(βΔ) log(ε⁻¹)) poles to achieve an error tolerance ε at temperature β⁻¹ over an energy interval Δ. We apply minimax approximation to reduce the number of poles by a factor of four and replace Δ with Δ_occ, the occupied energy interval. Furthermore, this is particularly beneficial when Δ >> Δ_occ, such as in electronic structure calculations that use a large basis set.
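For orientation, the sketch below evaluates a plain truncated Matsubara pole expansion of the Fermi-Dirac function, f(x) = 1/2 − 2 Σ_{n≥1} x/(x² + ((2n−1)π)²) with x = β(E − μ); its slow convergence over a wide energy interval is exactly what motivates better rational approximations such as the minimax construction of the paper, which is not reproduced here.

```python
# Not the paper's minimax construction -- just the plain truncated Matsubara pole
# expansion, to illustrate how a finite number of simple poles approximates the
# Fermi-Dirac function over an energy interval (and how slowly it converges).
import numpy as np

def fermi_exact(x):
    return 1.0 / (np.exp(x) + 1.0)

def fermi_matsubara(x, n_poles):
    n = np.arange(1, n_poles + 1)
    omega = (2 * n - 1) * np.pi                     # Matsubara frequencies (beta = 1)
    x = np.asarray(x, dtype=float)[..., None]
    return 0.5 - 2.0 * np.sum(x / (x**2 + omega**2), axis=-1)

x = np.linspace(-20, 20, 401)                       # dimensionless energy interval
for n_poles in (10, 100, 1000):
    err = np.max(np.abs(fermi_matsubara(x, n_poles) - fermi_exact(x)))
    print(f"{n_poles:5d} poles: max error {err:.2e}")
```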
Whys and Hows of the Parameterized Interval Analyses: A Guide for the Perplexed
NASA Astrophysics Data System (ADS)
Elishakoff, I.
2013-10-01
Novel elements of the parameterized interval analysis developed in [1, 2] are emphasized in this response to Professor E.D. Popova, or to others who may be perplexed by the parameterized interval analysis. It is also shown that the overwhelming majority of the comments by Popova [3] are based on a misreading of our paper [1]. Partial responsibility for this misreading can be attributed to the fact that the explanations provided in [1] were laconic. These could have been more extensive in view of the novelty of our approach [1, 2]. It is our duty, therefore, to reiterate, in this response, the whys and hows of the parameterization of intervals, introduced in [1] to incorporate possibly available information on dependencies between the various intervals describing the problem at hand. This possibility appears to have been discarded by standard interval analysis, which may, as a result, lead to overdesign and to the possible divorce of engineers from the otherwise beautiful interval analysis.
On Some Nonclassical Algebraic Properties of Interval-Valued Fuzzy Soft Sets
2014-01-01
Interval-valued fuzzy soft sets realize a hybrid soft computing model in a general framework. Both Molodtsov's soft sets and interval-valued fuzzy sets can be seen as special cases of interval-valued fuzzy soft sets. In this study, we first compare four different types of interval-valued fuzzy soft subsets and reveal the relations among them. Then we concentrate on investigating some nonclassical algebraic properties of interval-valued fuzzy soft sets under the soft product operations. We show that some fundamental algebraic properties including the commutative and associative laws do not hold in the conventional sense, but hold in weaker forms characterized in terms of the relation =L. We obtain a number of algebraic inequalities of interval-valued fuzzy soft sets characterized by interval-valued fuzzy soft inclusions. We also establish the weak idempotent law and the weak absorptive law of interval-valued fuzzy soft sets using interval-valued fuzzy soft J-equal relations. It is revealed that the soft product operations ∧ and ∨ of interval-valued fuzzy soft sets do not always have similar algebraic properties. Moreover, we find that only distributive inequalities described by the interval-valued fuzzy soft L-inclusions hold for interval-valued fuzzy soft sets. PMID:25143964
On some nonclassical algebraic properties of interval-valued fuzzy soft sets.
Liu, Xiaoyan; Feng, Feng; Zhang, Hui
2014-01-01
Interval-valued fuzzy soft sets realize a hybrid soft computing model in a general framework. Both Molodtsov's soft sets and interval-valued fuzzy sets can be seen as special cases of interval-valued fuzzy soft sets. In this study, we first compare four different types of interval-valued fuzzy soft subsets and reveal the relations among them. Then we concentrate on investigating some nonclassical algebraic properties of interval-valued fuzzy soft sets under the soft product operations. We show that some fundamental algebraic properties including the commutative and associative laws do not hold in the conventional sense, but hold in weaker forms characterized in terms of the relation = L . We obtain a number of algebraic inequalities of interval-valued fuzzy soft sets characterized by interval-valued fuzzy soft inclusions. We also establish the weak idempotent law and the weak absorptive law of interval-valued fuzzy soft sets using interval-valued fuzzy soft J-equal relations. It is revealed that the soft product operations ∧ and ∨ of interval-valued fuzzy soft sets do not always have similar algebraic properties. Moreover, we find that only distributive inequalities described by the interval-valued fuzzy soft L-inclusions hold for interval-valued fuzzy soft sets.
Multifactor analysis of multiscaling in volatility return intervals.
Wang, Fengzhong; Yamasaki, Kazuko; Havlin, Shlomo; Stanley, H Eugene
2009-01-01
We study the volatility time series of the 1137 most traded stocks in the U.S. stock markets for the two-year period 2001-2002 and analyze their return intervals τ, which are time intervals between volatilities above a given threshold q. We explore the probability density function of τ, P_q(τ), assuming a stretched exponential function, P_q(τ) ~ e^(-τ^γ). We find that the exponent γ depends on the threshold in the range between q=1 and 6 standard deviations of the volatility. This finding supports the multiscaling nature of the return interval distribution. To better understand the multiscaling origin, we study how γ depends on four essential factors, capitalization, risk, number of trades, and return. We show that γ depends on the capitalization, risk, and return but almost does not depend on the number of trades. This suggests that γ relates to the portfolio selection but not to the market activity. To further characterize the multiscaling of individual stocks, we fit the moments of τ, μ_m ≡ ⟨(τ/⟨τ⟩)^m⟩^(1/m), in the range of 10 < ⟨τ⟩ ⩽ 100 by a power law, μ_m ~ ⟨τ⟩^δ. The exponent δ is found also to depend on the capitalization, risk, and return but not on the number of trades, and its tendency is opposite to that of γ. Moreover, we show that δ decreases with increasing γ approximately by a linear relation. The return intervals demonstrate the temporal structure of volatilities and our findings suggest that their multiscaling features may be helpful for portfolio optimization.
Multifactor analysis of multiscaling in volatility return intervals
NASA Astrophysics Data System (ADS)
Wang, Fengzhong; Yamasaki, Kazuko; Havlin, Shlomo; Stanley, H. Eugene
2009-01-01
We study the volatility time series of 1137 most traded stocks in the U.S. stock markets for the two-year period 2001-2002 and analyze their return intervals τ , which are time intervals between volatilities above a given threshold q . We explore the probability density function of τ , Pq(τ) , assuming a stretched exponential function, Pq(τ)˜e-τγ . We find that the exponent γ depends on the threshold in the range between q=1 and 6 standard deviations of the volatility. This finding supports the multiscaling nature of the return interval distribution. To better understand the multiscaling origin, we study how γ depends on four essential factors, capitalization, risk, number of trades, and return. We show that γ depends on the capitalization, risk, and return but almost does not depend on the number of trades. This suggests that γ relates to the portfolio selection but not on the market activity. To further characterize the multiscaling of individual stocks, we fit the moments of τ , μm≡⟨(τ/⟨τ⟩)m⟩1/m , in the range of 10<⟨τ⟩⩽100 by a power law, μm˜⟨τ⟩δ . The exponent δ is found also to depend on the capitalization, risk, and return but not on the number of trades, and its tendency is opposite to that of γ . Moreover, we show that δ decreases with increasing γ approximately by a linear relation. The return intervals demonstrate the temporal structure of volatilities and our findings suggest that their multiscaling features may be helpful for portfolio optimization.
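A hedged sketch of the central fitting step, estimating γ by fitting a stretched-exponential survival function to the recurrence intervals between threshold exceedances, is given below; it uses a synthetic heavy-tailed volatility proxy rather than the 1137-stock data set, and the threshold and starting values are arbitrary.

```python
# Hedged sketch: measure recurrence intervals tau between threshold exceedances of a
# synthetic volatility proxy and fit a stretched-exponential survival function
# S(tau) = exp(-(tau/tau0)**gamma). This only illustrates the fitting step, not the
# paper's analysis; the threshold q and the data are arbitrary.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
vol = np.abs(rng.standard_t(df=3, size=200_000))    # heavy-tailed volatility proxy
q = 2.0 * vol.std()                                 # threshold at 2 standard deviations

exceed = np.flatnonzero(vol > q)
tau = np.diff(exceed)                               # recurrence intervals

tau_sorted = np.sort(tau)
surv = 1.0 - np.arange(1, len(tau_sorted) + 1) / (len(tau_sorted) + 1.0)

def log_stretched_exp(t, tau0, gamma):
    return -(t / tau0) ** gamma

popt, _ = curve_fit(log_stretched_exp, tau_sorted, np.log(surv),
                    p0=[tau.mean(), 0.7], bounds=([1e-6, 0.05], [np.inf, 5.0]))
print("fitted tau0 = %.2f, gamma = %.2f" % tuple(popt))
```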
A computer program for uncertainty analysis integrating regression and Bayesian methods
Lu, Dan; Ye, Ming; Hill, Mary C.; Poeter, Eileen P.; Curtis, Gary
2014-01-01
This work develops a new functionality in UCODE_2014 to evaluate Bayesian credible intervals using the Markov Chain Monte Carlo (MCMC) method. The MCMC capability in UCODE_2014 is based on the FORTRAN version of the differential evolution adaptive Metropolis (DREAM) algorithm of Vrugt et al. (2009), which estimates the posterior probability density function of model parameters in high-dimensional and multimodal sampling problems. The UCODE MCMC capability provides eleven prior probability distributions and three ways to initialize the sampling process. It evaluates parametric and predictive uncertainties and it has parallel computing capability based on multiple chains to accelerate the sampling process. This paper tests and demonstrates the MCMC capability using a 10-dimensional multimodal mathematical function, a 100-dimensional Gaussian function, and a groundwater reactive transport model. The use of the MCMC capability is made straightforward and flexible by adopting the JUPITER API protocol. With the new MCMC capability, UCODE_2014 can be used to calculate three types of uncertainty intervals, which all can account for prior information: (1) linear confidence intervals which require linearity and Gaussian error assumptions and typically 10s–100s of highly parallelizable model runs after optimization, (2) nonlinear confidence intervals which require a smooth objective function surface and Gaussian observation error assumptions and typically 100s–1,000s of partially parallelizable model runs after optimization, and (3) MCMC Bayesian credible intervals which require few assumptions and commonly 10,000s–100,000s or more partially parallelizable model runs. Ready access allows users to select methods best suited to their work, and to compare methods in many circumstances.
Łoniewska, Beata; Kaczmarczyk, Mariusz; Clark, Jeremy Simon; Gorący, Iwona; Horodnicka-Józwa, Anita; Ciechanowicz, Andrzej
2015-03-16
A-Kinase Anchoring Proteins (AKAPs) coordinate the specificity of protein kinase A signaling by localizing the kinase to subcellular sites. The 1936G (V646) AKAP10 allele has been associated in adults with low cholinergic/vagus nerve sensitivity and shortened PR intervals in ECG recordings, and in newborns with increased blood pressure and higher cord blood cholesterol concentration. The aim of the study was to answer the question of whether the 1936A>G AKAP10 polymorphism is associated with newborn electrocardiographic variables. Electrocardiograms were recorded from 114 consecutive healthy Polish newborns (55 females, 59 males), born after 37 gestational weeks to healthy women with uncomplicated pregnancies. All recordings were made between the 3rd and 7th day of life to avoid QT variability. The heart rate per minute and the durations of the PR, QRS, RR and QT intervals were measured. The ECGs were evaluated independently by three observers. At birth, cord blood of the neonates was obtained for isolation of genomic DNA. The distribution of anthropometric and electrocardiographic variables in our cohort approached normality (skewness < 2 for all variables). No significant differences in anthropometric variables and electrocardiographic traits with respect to AKAP10 genotype were found. Multiple regression analysis with adjustment for gender, gestational age and birth mass revealed that the QTc interval in GG AKAP10 homozygotes was significantly longer, but in range, when compared with A allele carriers (AA + AG, recessive mode of inheritance). No rhythm disturbances were observed. The results demonstrate a possible association between the AKAP10 1936A>G variant and the QTc interval in Polish newborns.
NASA Astrophysics Data System (ADS)
Pustil'Nik, Lev A.; Dorman, L. I.; Yom Din, G.
2003-07-01
The database of Professor Rogers, with wheat prices in England in the Middle Ages (1249-1703), was used to search for possible manifestations of solar activity and cosmic ray variations. The main object of the statistical analysis is the investigation of bursts of prices. We present a conceptual model of possible modes for the sensitivity of wheat prices to weather conditions, caused by solar cycle variations in cosmic rays, and compare the expected price fluctuations with the wheat price variations recorded in Medieval England. We compared the statistical properties of the intervals between price bursts with the statistical properties of the intervals between extremes (minima) of solar cycles during the years 1700-2000. The statistical properties of these two samples are similar both in the average/median values of the intervals and in the standard deviation of these values. We show that the histograms of the interval distributions for price bursts and for solar minima coincide with a high confidence level. We analyzed direct links between wheat prices and solar activity in the 17th century, for which wheat prices and solar activity data as well as cosmic ray intensity (from the 10Be isotope) are available. We show that for all seven solar activity minima the observed prices were higher than the prices for the nine intervals of maximal solar activity preceding the minima. This result, combined with the conclusion on the similarity of the statistical properties of the price bursts and solar activity extremes, we consider as direct evidence of a causal connection between wheat price bursts and solar activity.
Fractal scaling analysis of groundwater dynamics in confined aquifers
NASA Astrophysics Data System (ADS)
Tu, Tongbi; Ercan, Ali; Kavvas, M. Levent
2017-10-01
Groundwater closely interacts with surface water and even climate systems in most hydroclimatic settings. Fractal scaling analysis of groundwater dynamics is of significance in modeling hydrological processes by considering potential temporal long-range dependence and scaling crossovers in the groundwater level fluctuations. In this study, it is demonstrated that the groundwater level fluctuations in confined aquifer wells with long observations exhibit site-specific fractal scaling behavior. Detrended fluctuation analysis (DFA) was utilized to quantify the monofractality, and multifractal detrended fluctuation analysis (MF-DFA) and multiscale multifractal analysis (MMA) were employed to examine the multifractal behavior. The DFA results indicated that fractals exist in groundwater level time series, and it was shown that the estimated Hurst exponent is closely dependent on the length and specific time interval of the time series. The MF-DFA and MMA analyses showed that different levels of multifractality exist, which may be partially due to a broad probability density distribution with infinite moments. Furthermore, it is demonstrated that the underlying distribution of groundwater level fluctuations exhibits either non-Gaussian characteristics, which may be fitted by the Lévy stable distribution, or Gaussian characteristics depending on the site characteristics. However, fractional Brownian motion (fBm), which has been identified as an appropriate model to characterize groundwater level fluctuation, is Gaussian with finite moments. Therefore, fBm may be inadequate for the description of physical processes with infinite moments, such as the groundwater level fluctuations in this study. It is concluded that there is a need for generalized governing equations of groundwater flow processes that can model both the long-memory behavior and the Brownian finite-memory behavior.
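The sketch below is a minimal order-1 DFA, the monofractal building block mentioned in the abstract (the MF-DFA and MMA extensions used in the study are not reproduced); it is run on white noise, for which the scaling exponent should come out near 0.5.

```python
# Minimal order-1 DFA sketch (monofractal only, not the MF-DFA/MMA used in the study):
# integrate the series, detrend it linearly in windows of size s, and estimate the
# scaling exponent from the slope of log F(s) vs log s.
import numpy as np

def dfa(x, scales):
    y = np.cumsum(x - np.mean(x))                   # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        rms = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)            # local linear trend
            rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(rms)))
    return np.array(F)

rng = np.random.default_rng(2)
x = rng.standard_normal(20_000)                     # white noise test signal
scales = np.unique(np.logspace(1, 3, 20).astype(int))
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print("DFA exponent alpha ~ %.2f (expect ~0.5 for white noise)" % alpha)
```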
Precipitation areal-reduction factor estimation using an annual-maxima centered approach
Asquith, W.H.; Famiglietti, J.S.
2000-01-01
The adjustment of precipitation depth of a point storm to an effective (mean) depth over a watershed is important for characterizing rainfall-runoff relations and for cost-effective designs of hydraulic structures when design storms are considered. A design storm is the precipitation point depth having a specified duration and frequency (recurrence interval). Effective depths are often computed by multiplying point depths by areal-reduction factors (ARF). ARF range from 0 to 1, vary according to storm characteristics, such as recurrence interval; and are a function of watershed characteristics, such as watershed size, shape, and geographic location. This paper presents a new approach for estimating ARF and includes applications for the 1-day design storm in Austin, Dallas, and Houston, Texas. The approach, termed 'annual-maxima centered,' specifically considers the distribution of concurrent precipitation surrounding an annual-precipitation maxima, which is a feature not seen in other approaches. The approach does not require the prior spatial averaging of precipitation, explicit determination of spatial correlation coefficients, nor explicit definition of a representative area of a particular storm in the analysis. The annual-maxima centered approach was designed to exploit the wide availability of dense precipitation gauge data in many regions of the world. The approach produces ARF that decrease more rapidly than those from TP-29. Furthermore, the ARF from the approach decay rapidly with increasing recurrence interval of the annual-precipitation maxima. (C) 2000 Elsevier Science B.V.
de Souza Rodrigues, Fernando; Cezar, Alfredo Skrebsky; de Menezes, Fernanda Rezer; Sangioni, Luis Antônio; Vogel, Fernanda Silveira Flores; de Avila Botton, Sônia
2017-11-01
This study evaluated the efficacy and the economic viability of two anticoccidial treatment regimens tested in lambs naturally exposed to Eimeria spp. re-infections in a grazing system during a 140-day period. Twenty-four suckling lambs were distributed into three groups based on the individual count of oocysts per gram of feces (OPG) and body weight. Animals were treated with toltrazuril 5% (20 mg/kg) at 14- (GI) or 21-day (GII) intervals, and GIII was kept as an untreated control. A cost-benefit analysis of each treatment regimen was calculated. Additionally, economic analysis was performed on four hypothetical scenarios, in which lambs were assumed to have a 10, 25, 50, or 85% decrease in their expected body weight gain due to clinical coccidiosis. The efficacy of toltrazuril against Eimeria spp. was 96.9-99.9% (GI) and 74.2-99.9% (GII). E. ovinoidalis was most frequently identified, but no clinical signs of coccidiosis were observed in the lambs. There were no differences in weight gain among the groups. The cost of treatment per lamb was $13.09 (GI) and $7.83 (GII). The estimation model showed that the cost-benefit ratio favored treatment with toltrazuril when lambs fail to gain weight. In the studied flock, the break-even point for toltrazuril administered at 14-day intervals was reached with an 85% decrease in mean weight gain. In conclusion, toltrazuril can be used at 14-day intervals to control Eimeria spp. (re)-infection in lambs raised on pasture. This treatment regimen was not economically feasible for subclinical coccidiosis; however, it may be feasible when used to prevent weight loss caused by clinical coccidiosis.
NASA Technical Reports Server (NTRS)
Goldman, A.; Murcray, F. J.; Rinsland, C. P.; Blatherwick, R. D.; Murcray, F. H.; Murcray, D. G.
1991-01-01
Results of ongoing studies of high-resolution solar absorption spectra aimed at the identification and quantification of trace constituents of importance in the chemistry of the stratosphere and upper troposphere are presented. An analysis of balloon-borne and ground-based spectra obtained at 0.0025 cm⁻¹ resolution covering the 700-2200 cm⁻¹ interval is presented. The 0.0025 cm⁻¹ spectra, along with corresponding laboratory spectra, improve the spectral line parameters, and thus the accuracy of quantifying trace constituents. Results for COF2, F22, SF6, and other species are presented. The retrieval methods used for total column density and altitude distribution for both ground-based and balloon-borne spectra are also discussed.
Dowsett, Harry J.
1999-01-01
Analysis of climate indicators from the North Atlantic, California Margin, and ice cores from Greenland suggest millennial scale climate variability is a component of earth's climate system during the last interglacial period (marine oxygen isotope stage 5). The USGS is involved in a survey of high resolution marine records covering the last interglacial period (MIS 5) to further document the variability of climate and assess the rate at which climate can change during warm intervals. The Gulf of Mexico (GOM) is an attractive area for analysis of climate variability and rapid change. Changes in the Mississippi River Basin presumably are translated to the GOM via the river and its effect on sediment distribution and type. Likewise, the summer monsoon in the southwestern US is driven by strong southerly winds. These winds may produce upwelling in the GOM which will be recorded in the sedimentary record. Several areas of high accumulation rate have been identified in the GOM. Ocean Drilling Program (ODP) Site 625 appears to meet the criteria of having a well preserved carbonate record and accumulation rate capable of discerning millennial scale changes.
Inherent length-scales of periodic solar wind number density structures
NASA Astrophysics Data System (ADS)
Viall, N. M.; Kepko, L.; Spence, H. E.
2008-07-01
We present an analysis of the radial length-scales of periodic solar wind number density structures. We converted 11 years (1995-2005) of solar wind number density data into radial length series segments and Fourier analyzed them to identify all spectral peaks with radial wavelengths between 72 (116) and 900 (900) Mm for slow (fast) wind intervals. Our window length for the spectral analysis was 9072 Mm, approximately equivalent to 7 (4) h of data for the slow (fast) solar wind. We required that spectral peaks pass both an amplitude test and a harmonic F-test at the 95% confidence level simultaneously. From the occurrence distributions of these spectral peaks for slow and fast wind, we find that periodic number density structures occur more often at certain radial length-scales than at others, and are consistently observed within each speed range over most of the 11-year interval. For the slow wind, those length-scales are L ˜ 73, 120, 136, and 180 Mm. For the fast wind, those length-scales are L ˜ 187, 270 and 400 Mm. The results argue for the existence of inherent radial length-scales in the solar wind number density.
Kumaraswamy autoregressive moving average models for double bounded environmental data
NASA Astrophysics Data System (ADS)
Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme
2017-12-01
In this paper we introduce the Kumaraswamy autoregressive moving average models (KARMA), which is a dynamic class of models for time series taking values in the double-bounded interval (a,b) following the Kumaraswamy distribution. The Kumaraswamy family of distributions is widely applied in many areas, especially hydrology and related fields. Classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing inference, diagnostic analysis and forecasting. In particular, we provide closed-form expressions for the conditional score vector and the conditional Fisher information matrix. An application to environmental real data is presented and discussed.
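For reference, the sketch below writes down the Kumaraswamy density rescaled to a double-bounded interval (a, b) and samples from it by inverting the CDF; it only illustrates the distribution underlying KARMA, not the conditional maximum likelihood machinery of the paper, and the shape parameters are arbitrary.

```python
# Sketch of the Kumaraswamy distribution on a double-bounded interval (a, b):
# density and inverse-CDF sampling. Shape parameters p, q and the interval are arbitrary.
import numpy as np

def kumaraswamy_pdf(y, p, q, a=0.0, b=1.0):
    x = (y - a) / (b - a)                           # map to (0, 1)
    return p * q * x**(p - 1) * (1 - x**p)**(q - 1) / (b - a)

def kumaraswamy_rvs(p, q, size, a=0.0, b=1.0, rng=None):
    rng = rng or np.random.default_rng()
    u = rng.uniform(size=size)
    x = (1 - (1 - u)**(1.0 / q))**(1.0 / p)         # inverse CDF of Kumaraswamy(p, q)
    return a + (b - a) * x

sample = kumaraswamy_rvs(2.0, 5.0, size=50_000, a=0.0, b=100.0,
                         rng=np.random.default_rng(3))
print("sample mean %.2f, min %.2f, max %.2f" % (sample.mean(), sample.min(), sample.max()))
```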
USDA-ARS?s Scientific Manuscript database
Accurate spatially distributed estimates of evapotranspiration (ET) derived from remotely sensed data are critical to a broad range of practical and operational applications. However, due to lengthy return intervals and cloud cover, data acquisition is not continuous over time. To fill the data gaps...
Exact intervals and tests for median when one sample value possibly an outlier
NASA Technical Reports Server (NTRS)
Keller, G. J.; Walsh, J. E.
1973-01-01
Available are independent observations (continuous data) that are believed to be a random sample. Desired are distribution-free confidence intervals and significance tests for the population median. However, there is the possibility that either the smallest or the largest observation is an outlier. Then, use of a procedure for rejection of an outlying observation might seem appropriate. Such a procedure would consider that two alternative situations are possible and would select one of them. Either (1) the n observations are truly a random sample, or (2) an outlier exists and its removal leaves a random sample of size n-1. For either situation, confidence intervals and tests are desired for the median of the population yielding the random sample. Unfortunately, satisfactory rejection procedures of a distribution-free nature do not seem to be available. Moreover, all rejection procedures impose undesirable conditional effects on the observations and can also select the wrong one of the two above situations. It is found that two-sided intervals and tests based on two symmetrically located order statistics (not the largest and smallest) of the n observations have the desired properties.
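The basic building block referred to above, a distribution-free confidence interval for the median from two symmetrically located order statistics, can be computed from the binomial distribution, as in the sketch below; this is the standard sign-test construction, not the authors' outlier-handling procedure, and the data values are made up.

```python
# Standard distribution-free confidence interval for the median from two symmetrically
# located order statistics x_(k) and x_(n+1-k): under continuity, the coverage is
# 1 - 2*BinomCDF(k-1; n, 1/2). Data values are made up.
import numpy as np
from scipy import stats

x = np.sort(np.array([3.1, 4.7, 5.0, 5.2, 5.9, 6.3, 6.4, 7.0, 7.8, 9.5, 12.4]))
n = len(x)

target = 0.95
for k in range(1, n // 2 + 1):
    coverage = 1.0 - 2.0 * stats.binom.cdf(k - 1, n, 0.5)
    if coverage >= target:
        best_k, best_cov = k, coverage              # largest k still covering >= 95%
print(f"interval [x_({best_k}), x_({n + 1 - best_k})] = "
      f"[{x[best_k - 1]}, {x[n - best_k]}], coverage {best_cov:.3f}")
```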
Aryal, Madhava P; Nagaraja, Tavarekere N; Brown, Stephen L; Lu, Mei; Bagher-Ebadian, Hassan; Ding, Guangliang; Panda, Swayamprava; Keenan, Kelly; Cabral, Glauber; Mikkelsen, Tom; Ewing, James R
2014-10-01
The distribution of dynamic contrast-enhanced MRI (DCE-MRI) parametric estimates in a rat U251 glioma model was analyzed. Using Magnevist as contrast agent (CA), 17 nude rats implanted with U251 cerebral glioma were studied by DCE-MRI twice in a 24 h interval. A data-driven analysis selected one of three models to estimate either (1) plasma volume (vp), (2) vp and forward volume transfer constant (K(trans)) or (3) vp, K(trans) and interstitial volume fraction (ve), constituting Models 1, 2 and 3, respectively. CA distribution volume (VD) was estimated in Model 3 regions by Logan plots. Regions of interest (ROIs) were selected by model. In the Model 3 ROI, descriptors of parameter distributions--mean, median, variance and skewness--were calculated and compared between the two time points for repeatability. All distributions of parametric estimates in Model 3 ROIs were positively skewed. Test-retest differences between population summaries for any parameter were not significant (p ≥ 0.10; Wilcoxon signed-rank and paired t tests). These and similar measures of parametric distribution and test-retest variance from other tumor models can be used to inform the choice of biomarkers that best summarize tumor status and treatment effects. Copyright © 2014 John Wiley & Sons, Ltd.
Ahlborn, W; Tuz, H J; Uberla, K
1990-03-01
In cohort studies the Mantel-Haenszel estimator OR_MH is computed from sample data and is used as a point estimator of relative risk. Test-based confidence intervals are estimated with the help of the asymptotically chi-squared distributed MH statistic χ²_MHS. The Mantel-extension chi-squared is used as a test statistic for a dose-response relationship. Both test statistics--the Mantel-Haenszel chi as well as the Mantel-extension chi--assume homogeneity of risk across strata, which is rarely present. Also an extended nonparametric statistic, proposed by Terpstra, which is based on the Mann-Whitney statistics, assumes homogeneity of risk across strata. We have earlier defined four risk measures RR_kj (k = 1,2,...,4) in the population and considered their estimates and the corresponding asymptotic distributions. In order to overcome the homogeneity assumption we use the delta-method to get "test-based" confidence intervals. Because the four risk measures RR_kj are presented as functions of four weights g_ik, we give, consequently, the asymptotic variances of these risk estimators also as functions of the weights g_ik in a closed form. Approximations to these variances are given. For testing a dose-response relationship we propose a new class of χ²(1)-distributed global measures G_k and the corresponding global χ²-test. In contrast to the Mantel-extension chi, homogeneity of risk across strata need not be assumed. These global test statistics are of the Wald type for composite hypotheses. (ABSTRACT TRUNCATED AT 250 WORDS)
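For concreteness, the sketch below computes the Mantel-Haenszel pooled odds-ratio point estimate across strata; the 2x2 counts are invented, and the delta-method variances and global G_k measures proposed in the abstract are not implemented.

```python
# Compact sketch of the Mantel-Haenszel pooled odds ratio across strata:
# OR_MH = sum_i(a_i*d_i/n_i) / sum_i(b_i*c_i/n_i) for 2x2 tables [[a, b], [c, d]].
import numpy as np

strata = [                                          # [[exposed cases, exposed non-cases],
    np.array([[12, 55], [6, 70]]),                  #  [unexposed cases, unexposed non-cases]]
    np.array([[20, 90], [10, 95]]),
    np.array([[8, 40], [5, 60]]),
]

num = sum(t[0, 0] * t[1, 1] / t.sum() for t in strata)
den = sum(t[0, 1] * t[1, 0] / t.sum() for t in strata)
print("Mantel-Haenszel OR = %.2f" % (num / den))
```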
Multiplicative point process as a model of trading activity
NASA Astrophysics Data System (ADS)
Gontis, V.; Kaulakys, B.
2004-11-01
Signals consisting of a sequence of pulses show that an inherent origin of 1/f noise is a Brownian fluctuation of the average interevent time between subsequent pulses of the pulse sequence. In this paper, we generalize the model of interevent time to reproduce a variety of self-affine time series exhibiting power spectral density S(f) scaling as a power of the frequency f. Furthermore, we analyze the relation between the power-law correlations and the origin of the power-law probability distribution of the signal intensity. We introduce a stochastic multiplicative model for the time intervals between point events and analyze the statistical properties of the signal analytically and numerically. Such a model system exhibits power-law spectral density S(f) ∼ 1/f^β for various values of β, including β = 1/2, 1 and 3/2. Explicit expressions for the power spectra in the low-frequency limit and for the distribution density of the interevent time are obtained. The counting statistics of the events is analyzed analytically and numerically as well. The specific interest of our analysis is related to the financial markets, where long-range correlations of price fluctuations largely depend on the number of transactions. We analyze the spectral density and counting statistics of the number of transactions. The model reproduces spectral properties of the real markets and explains the mechanism of the power-law distribution of trading activity. The study provides evidence that the statistical properties of the financial markets are enclosed in the statistics of the time intervals between trades. A multiplicative point process serves as a consistent model generating this statistics.
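The qualitative mechanism stated in the first sentence can be seen in a few lines of simulation: let the mean interevent time wander slowly, generate events, and inspect the low-frequency power of the binned event counts. The sketch below is illustrative only and is not the multiplicative stochastic model analyzed in the paper.

```python
# Illustrative sketch (not the paper's multiplicative model): the average interevent time
# wanders slowly like a reflected random walk, events are generated from it, binned into
# counts, and the periodogram is inspected. Slow modulation of the event rate produces
# excess power at low frequencies.
import numpy as np

rng = np.random.default_rng(4)
n_events = 200_000
mean_iet = np.empty(n_events)
mean_iet[0] = 1.0
for k in range(1, n_events):                        # reflected Brownian drift of the mean
    mean_iet[k] = max(abs(mean_iet[k - 1] + 0.01 * rng.standard_normal()), 1e-3)

iet = rng.exponential(mean_iet)                     # interevent times with wandering mean
events = np.cumsum(iet)

dt = 1.0
counts, _ = np.histogram(events, bins=np.arange(0.0, events[-1], dt))
counts = counts - counts.mean()
psd = np.abs(np.fft.rfft(counts))**2 / len(counts)
freqs = np.fft.rfftfreq(len(counts), d=dt)

# crude check: average power in a low-frequency band vs a higher band
low = psd[(freqs > 1e-4) & (freqs < 1e-3)].mean()
high = psd[(freqs > 1e-2) & (freqs < 1e-1)].mean()
print("low/high frequency power ratio: %.1f" % (low / high))
```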
[Influence of gender, age and season on thyroid hormone reference interval].
Qiu, L; Wang, D C; Xu, T; Cheng, X Q; Sun, Q; Hu, Y Y; Liu, H C; Lu, S Y; Yang, G H; Wang, Z J
2018-05-29
Objective: Using clinical "big data", to investigate the factors that affect the levels of thyroid hormones and to explore the partitioning criteria for reference intervals (RI) of these hormones. Methods: An observational study was conducted. Information on 107 107 individuals undergoing routine physical examination in Peking Union Medical College Hospital from September 1st, 2013 to August 31st, 2016 was collected, and the thyroid hormones of these subjects were measured. The distributions of and differences in the TSH, FT4 and FT3 results were explored by gender and age; according to the seasonal division standard of the China Meteorological Administration, the study period was divided into four seasons, and the seasonal fluctuation of TSH was analyzed. Appropriate partitions by gender, age and season were defined according to significant-difference analysis. Results: In males and females, the distributions of TSH were 1.779(0.578-4.758) and 2.023(0.420-5.343) mU/L, respectively, and the level of TSH in females was higher than in males (Z=-37.600, P<0.001). The distributions of FT4 were 0.127(0.098-0.162) and 0.117(0.091-0.151) μg/L, and the distributions of FT3 were 3.33(2.47-3.74) and 3.01(2.35-3.57) ng/L. The levels of FT4 and FT3 in females were significantly lower than in males (Z=-94.000, -154.600, all P<0.001). Furthermore, males were divided into two groups at 65 years old and females at 50 years old; the distributions of TSH in males and females of the older group were 1.818(0.528-5.240) and 2.111(0.348-5.735) mU/L, and in the younger group 1.778(0.582-4.696) and 1.991(0.427-5.316) mU/L. The level of TSH in the older group was significantly higher than in the younger group (Z=-2.269, -10.400, all P<0.05), and the distribution of TSH in the older group was wider than in the younger. The distribution for the whole cohort in spring, summer and autumn was 1.869(0.510-5.042) mU/L and in winter 1.978(0.527-5.250) mU/L, a difference that was statistically significant (Z=-15.000, P<0.001). Conclusions: Gender and age significantly affect the serum levels of TSH, FT4 and FT3; the distributions of TSH in females and in the older group are wider than in males, and those of FT4 and FT3 are lower. Season significantly affects the serum TSH level, with the peak value observed in winter. There are obvious differences between "rough" RIs and manufacturer-recommended RIs. Each laboratory should establish reference intervals for thyroid hormones on the premise of appropriate grouping.
Coupling detrended fluctuation analysis for multiple warehouse-out behavioral sequences
NASA Astrophysics Data System (ADS)
Yao, Can-Zhong; Lin, Ji-Nan; Zheng, Xu-Zhou
2017-01-01
Interaction patterns among different warehouses could make the warehouse-out behavioral sequences less predictable. We first apply coupling detrended fluctuation analysis to the warehouse-out quantities and find that the multivariate sequences exhibit significant coupling multifractal characteristics regardless of the type of steel product. Second, we track the sources of multifractality in the warehouse-out sequences by shuffling and surrogating the original ones, and we find that the fat-tailed distribution contributes more to the multifractal features than long-term memory, regardless of the type of steel product. From the perspective of warehouse contribution, some warehouses steadily contribute more to the multifractality than others. Finally, based on multiscale multifractal analysis, we propose a Hurst surface structure to investigate the coupling multifractality, and show that the multiple behavioral sequences exhibit significant coupling multifractal features that emerge at, and are usually restricted to, relatively large time scale intervals.
Lightness, chroma, and hue distributions of a shade guide as measured by a spectroradiometer.
Lee, Yong-Keun; Yu, Bin; Lim, Ho-Nam
2010-09-01
The color attributes of commercially available shade guides have been measured by spectrophotometers (SP), which are designed to measure flat surfaces. However, there is limited information on the color distribution of shade guides as measured by spectroradiometers (SR), which are capable of measuring the color of curved surfaces. The purpose of this study was to determine the distributions of lightness (CIE L*) and chroma (C*(ab)) step intervals between adjacent shade tabs of a shade guide based on the lightness, chroma, and hue attributes measured by an SR. Lightness, chroma, hue angle, and CIE a* and b* values of the shade tabs (n=26) from a shade guide (Vitapan 3D-Master) were measured by an SR under daylight conditions. The distributions of the ratios in lightness and chroma of each tab compared with the lowest lightness tab or the lowest chroma tab were determined. The values for each color parameter were analyzed by a 3-way ANOVA with the factors of lightness, chroma, and hue designations of the shade tabs (alpha=.05). The chroma and CIE a* and b* values were influenced by the lightness, chroma, and hue designations of the shade tabs (P<.001); however, the lightness and hue angle were influenced by the lightness and hue designations, but not by the chroma designation. Distributions for the CIE a* and b* values, in each lightness group, corresponded with the chroma designation. However, the intervals in the lightness and chroma scales between adjacent tabs were not uniform. The intervals in the color parameters between adjacent shade tabs were not uniform based on SR measurements. Therefore, a shade guide in which shade tabs are more equally spaced by the color attributes, based on the values as measured by an SR along with observers' responses with respect to the equality of the intervals, should be devised. Copyright © 2010 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eckert-Gallup, Aubrey C.; Sallaberry, Cédric J.; Dallman, Ann R.
Environmental contours describing extreme sea states are generated as the input for numerical or physical model simulations as a part of the standard current practice for designing marine structures to survive extreme sea states. These environmental contours are characterized by combinations of significant wave height (H s) and either energy period (T e) or peak period (T p) values calculated for a given recurrence interval using a set of data based on hindcast simulations or buoy observations over a sufficient period of record. The use of the inverse first-order reliability method (I-FORM) is a standard design practice for generating environmental contours. This paper develops enhanced methodologies for data analysis prior to the application of the I-FORM, including the use of principal component analysis (PCA) to create an uncorrelated representation of the variables under consideration as well as new distribution and parameter fitting techniques. As a result, these modifications better represent the measured data and, therefore, should contribute to the development of more realistic representations of environmental contours of extreme sea states for determining design loads for marine structures.
Eckert-Gallup, Aubrey C.; Sallaberry, Cédric J.; Dallman, Ann R.; ...
2016-01-06
Environmental contours describing extreme sea states are generated as the input for numerical or physical model simulations as a part of the standard current practice for designing marine structures to survive extreme sea states. These environmental contours are characterized by combinations of significant wave height (H s) and either energy period (T e) or peak period (T p) values calculated for a given recurrence interval using a set of data based on hindcast simulations or buoy observations over a sufficient period of record. The use of the inverse first-order reliability method (I-FORM) is a standard design practice for generating environmental contours. This paper develops enhanced methodologies for data analysis prior to the application of the I-FORM, including the use of principal component analysis (PCA) to create an uncorrelated representation of the variables under consideration as well as new distribution and parameter fitting techniques. As a result, these modifications better represent the measured data and, therefore, should contribute to the development of more realistic representations of environmental contours of extreme sea states for determining design loads for marine structures.
NASA Astrophysics Data System (ADS)
Panozzo, M.; Quintero-Quiroz, C.; Tiana-Alsina, J.; Torrent, M. C.; Masoller, C.
2017-11-01
Semiconductor lasers with time-delayed optical feedback display a wide range of dynamical regimes, which have found various practical applications. They also provide excellent testbeds for data analysis tools for characterizing complex signals. Recently, several of us have analyzed experimental intensity time traces and quantitatively identified the onset of different dynamical regimes as the laser current increases. Specifically, we identified the onset of low-frequency fluctuations (LFFs), where the laser intensity displays abrupt dropouts, and the onset of coherence collapse (CC), where the intensity fluctuations are highly irregular. Here we map these regimes as both the laser current and the feedback strength vary. We show that the shape of the distribution of intensity fluctuations (characterized by the standard deviation, the skewness, and the kurtosis) allows us to distinguish among noise, LFFs and CC, and to quantitatively determine (in spite of the gradual nature of the transitions) the boundaries of the three regimes. Ordinal analysis of the inter-dropout time intervals consistently identifies the three regimes as occurring in the same parameter regions as the analysis of the intensity distribution. Simulations of the well-known time-delayed Lang-Kobayashi model are in good qualitative agreement with the observations.
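A small sketch of the distribution-shape descriptors mentioned above (standard deviation, skewness, kurtosis) applied to a synthetic dropout-like intensity trace is given below; the trace and any implied classification thresholds are illustrative and are not the experimental data or regime boundaries of the paper.

```python
# Distribution-shape descriptors used to separate the regimes (standard deviation,
# skewness, kurtosis), applied to a toy "LFF-like" trace: Gaussian background with
# occasional deep dropouts. Illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
intensity = rng.normal(1.0, 0.05, size=100_000)
dropouts = rng.random(intensity.size) < 0.002
intensity[dropouts] -= rng.uniform(0.5, 0.9, size=dropouts.sum())

print("std      %.3f" % np.std(intensity))
print("skewness %.2f" % stats.skew(intensity))       # strongly negative for dropouts
print("kurtosis %.1f" % stats.kurtosis(intensity))   # heavy-tailed relative to Gaussian
```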
NASA Astrophysics Data System (ADS)
Anggit Maulana, Hiska; Haris, Abdul
2018-05-01
Reservoir and source rock identification has been performed to delineate the reservoir distribution of the Talangakar Formation, South Sumatra Basin. This study is based on integrated geophysical, geological and petrophysical data. The aims of the study are to determine the characteristics of the reservoir and source rock, to differentiate reservoir and source rock within the same Talangakar Formation, and to find out the distribution of the net pay reservoir and source rock layers. The geophysical methods included seismic data interpretation using time and depth structure maps, post-stack inversion and interval velocity; the geological interpretation included the analysis of structures and faults; and the petrophysical processing interpreted well log data penetrating the Talangakar Formation containing hydrocarbons (oil and gas). Based on the seismic interpretation, subsurface mapping was performed on Layer A and Layer I to determine the development of structures in the study area. Based on the geological interpretation, the trap in the study area is an anticline structure trending southwest-northeast and bounded by normal faults to the southwest-southeast of the structure. Based on the petrophysical analysis, the main reservoir in the study area is a layer at 1,375 m depth with a thickness of 2 to 8.3 meters.
Plasma Electrolyte Distributions in Humans-Normal or Skewed?
Feldman, Mark; Dickson, Beverly
2017-11-01
It is widely believed that plasma electrolyte levels are normally distributed. Statistical tests and calculations using plasma electrolyte data are often reported based on this assumption of normality. Examples include t tests, analysis of variance, correlations and confidence intervals. The purpose of our study was to determine whether plasma sodium (Na+), potassium (K+), chloride (Cl-) and bicarbonate (HCO3-) distributions are indeed normally distributed. We analyzed plasma electrolyte data from 237 consecutive adults (137 women and 100 men) who had normal results on a standard basic metabolic panel which included plasma electrolyte measurements. The skewness of each distribution (as a measure of its asymmetry) was compared to the zero skewness of a normal (Gaussian) distribution. The plasma Na+ distribution was skewed slightly to the right, but the skew was not significantly different from zero skew. The plasma Cl- distribution was skewed slightly to the left, but again the skew was not significantly different from zero skew. On the contrary, both the plasma K+ and HCO3- distributions were significantly skewed to the right (P < 0.01 vs zero skew). There was also a suggestion from examining the frequency distribution curves that the K+ and HCO3- distributions were bimodal. In adults with a normal basic metabolic panel, plasma potassium and bicarbonate levels are not normally distributed and may be bimodal. Thus, statistical methods to evaluate these 2 plasma electrolytes should be nonparametric tests and not parametric ones that require a normal distribution. Copyright © 2017 Southern Society for Clinical Investigation. Published by Elsevier Inc. All rights reserved.
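The comparison described above, testing whether a sample's skewness differs from the zero skew of a normal distribution, can be sketched with the D'Agostino skewness test available in scipy; synthetic data stand in for the plasma measurements and no clinical values are reproduced.

```python
# Sketch of testing whether sample skewness differs from the zero skew of a normal
# distribution, using the D'Agostino skewness test in scipy. Synthetic data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
symmetric = rng.normal(140.0, 2.0, size=237)         # Na+-like, roughly symmetric
right_skewed = 4.0 + rng.gamma(2.0, 0.25, size=237)  # K+-like, skewed to the right

for name, sample in [("symmetric", symmetric), ("right-skewed", right_skewed)]:
    stat, p = stats.skewtest(sample)
    print(f"{name:12s} skew={stats.skew(sample):+.2f}  z={stat:+.2f}  p={p:.4f}")
```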
Uncertainty estimation of the self-thinning process by Maximum-Entropy Principle
Shoufan Fang; George Z. Gertner
2000-01-01
When available information is scarce, the Maximum-Entropy Principle can estimate the distributions of parameters. In our case study, we estimated the distributions of the parameters of the forest self-thinning process based on literature information, and we derived the conditional distribution functions and estimated the 95 percent confidence interval (CI) of the self-...
A model of interval timing by neural integration.
Simen, Patrick; Balci, Fuat; de Souza, Laura; Cohen, Jonathan D; Holmes, Philip
2011-06-22
We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes, that correlations among them can be largely cancelled by balancing excitation and inhibition, that neural populations can act as integrators, and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys, and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule's predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior.
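A minimal sketch of the ramp-to-threshold idea is given below. It assumes, as is common for spike-count noise, that the accumulator's noise variance grows in proportion to its drift; under that assumption the first-passage times have a roughly constant coefficient of variation across target durations, which is the scale invariance the abstract refers to. The parameter values are arbitrary and this is not the authors' fitted model.

```python
# Ramp-to-threshold sketch: a noisy accumulator with drift threshold/T reaches the
# threshold at the target duration T on average; with noise variance proportional to
# the drift (an assumption), the coefficient of variation of the first-passage times
# is roughly constant across T.
import numpy as np

rng = np.random.default_rng(7)
dt, threshold, c = 0.002, 1.0, 0.1

def first_passage_times(target, n_trials=1000):
    drift = threshold / target
    sigma = c * np.sqrt(drift)                   # spike-count-like noise scaling (assumption)
    times = np.empty(n_trials)
    for i in range(n_trials):
        x, t = 0.0, 0.0
        while x < threshold:
            x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        times[i] = t
    return times

for target in (0.5, 1.0, 2.0):                   # target durations in seconds
    fpt = first_passage_times(target)
    print("T=%.1f s: mean %.2f s, CV %.2f" % (target, fpt.mean(), fpt.std() / fpt.mean()))
```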
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rahman, R.A.; Said, Md.J.; Bedingfield, J.R.
1994-07-01
The group J stratigraphic interval is lower Miocene (18.5-21 Ma) in age and was deposited during the early sag phase of the Malay Basin structural development. Reduction in depositional relief and first evidence of widespread marine influence characterize the transition into this interval. Twelve group J sequences have been identified. Reservoirs consist of progradational to aggradational tidally-dominated paralic to shallow marine sands deposited in the lowstand systems tract. Transgressive and highstand deposits are dominantly offshore shales. In PM-9, the original rift-related depocenters, coupled with changes in relative sea level, have strongly influenced group J unit thickness and the distribution of reservoir and seal facies. Two important reservoir intervals in PM-9 are the J18/20 and J15 sands. The reservoirs in these intervals are contained within the lowstand systems tracts of fourth-order sequences. These fourth-order sequences stack to form sequence sets in response to a third-order change in relative sea level. The sequences of the J18/20 interval stack to form part of a lowstand sequence set, whereas the J15 interval forms part of the transgressive sequence set. Reservoir facies range from tidal bars and subtidal shoals in the J18/20 interval to lower shoreface sands in the J15. Reservoir quality and continuity in group J reservoirs are dependent on depositional facies. An understanding of the controls on the distribution of facies types is crucial to the success of the current phase of field development and exploration programs in PM-9.
Apparatus and method for data communication in an energy distribution network
Hussain, Mohsin; LaPorte, Brock; Uebel, Udo; Zia, Aftab
2014-07-08
A system for communicating information on an energy distribution network is disclosed. In one embodiment, the system includes a local supervisor on a communication network, wherein the local supervisor can collect data from one or more energy generation/monitoring devices. The system also includes a command center on the communication network, wherein the command center can generate one or more commands for controlling the one or more energy generation devices. The local supervisor can periodically transmit a data signal indicative of the data to the command center via a first channel of the communication network at a first interval. The local supervisor can also periodically transmit a request for a command to the command center via a second channel of the communication network at a second interval shorter than the first interval. This channel configuration provides effective data communication without a significant increase in the use of network resources.
Examples of measurement uncertainty evaluations in accordance with the revised GUM
NASA Astrophysics Data System (ADS)
Runje, B.; Horvatic, A.; Alar, V.; Medic, S.; Bosnjakovic, A.
2016-11-01
The paper presents examples of the evaluation of uncertainty components in accordance with the current and revised Guide to the expression of uncertainty in measurement (GUM). In accordance with the proposed revision of the GUM, a Bayesian approach was conducted for both type A and type B evaluations. The law of propagation of uncertainty (LPU) and the law of propagation of distributions, applied through the Monte Carlo method (MCM), were used to evaluate the associated standard uncertainties, expanded uncertainties and coverage intervals. Furthermore, the influence of a non-Gaussian dominant input quantity and an asymmetric distribution of the output quantity y on the evaluation of measurement uncertainty was analyzed. In the case when the coverage interval is not probabilistically symmetric, the coverage interval for the probability P is estimated from the experimental probability density function using the Monte Carlo method. Key highlights of the proposed revision of the GUM were analyzed through a set of examples.
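A minimal sketch of propagation of distributions by the Monte Carlo method, in the spirit of the examples discussed, is given below for an illustrative measurement model Y = X1/X2 with one Gaussian and one rectangular input; the values are invented, and the coverage interval is read from the empirical percentiles, so it need not be symmetric.

```python
# Minimal sketch of propagation of distributions by the Monte Carlo method for an
# illustrative measurement model Y = X1 / X2, with one Gaussian and one rectangular
# (non-Gaussian) input. The 95 % coverage interval comes from empirical percentiles.
import numpy as np

rng = np.random.default_rng(8)
M = 1_000_000
x1 = rng.normal(10.0, 0.2, size=M)                  # type A-like Gaussian input
x2 = rng.uniform(1.9, 2.1, size=M)                  # type B rectangular input

y = x1 / x2
u_y = y.std(ddof=1)                                 # standard uncertainty of Y
lo, hi = np.percentile(y, [2.5, 97.5])              # probabilistic 95 % interval
print("y = %.3f, u(y) = %.3f, 95%% coverage interval [%.3f, %.3f]"
      % (y.mean(), u_y, lo, hi))
```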
Estimation of reference intervals from small samples: an example using canine plasma creatinine.
Geffré, A; Braun, J P; Trumel, C; Concordet, D
2009-12-01
According to international recommendations, reference intervals should be determined from at least 120 reference individuals, which is often impossible to achieve in veterinary clinical pathology, especially for wild animals. When only a small number of reference subjects is available, the possible bias cannot be known and the normality of the distribution cannot be evaluated. A comparison of reference intervals estimated by different methods could be helpful. The purpose of this study was to compare reference limits determined from a large set of canine plasma creatinine reference values, and large subsets of this data, with estimates obtained from small samples selected randomly. Twenty sets each of 120 and 27 samples were randomly selected from a set of 1439 plasma creatinine results obtained from healthy dogs in another study. Reference intervals for the whole sample and for the large samples were determined by a nonparametric method. The estimated reference limits for the small samples were the minimum and maximum, mean +/- 2 SD of native and Box-Cox-transformed values, the 2.5th and 97.5th percentiles by a robust method on native and Box-Cox-transformed values, and estimates from diagrams of cumulative distribution functions. The whole sample had a heavily skewed distribution, which approached Gaussian after Box-Cox transformation. The reference limits estimated from small samples were highly variable. The closest estimates to the 1439-result reference interval for 27-result subsamples were obtained by both parametric and robust methods after Box-Cox transformation but were grossly erroneous in some cases. For small samples, it is recommended that all values be reported graphically in a dot plot or histogram and that estimates of the reference limits be compared using different methods.
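Two of the estimators compared in the study can be sketched in a few lines: the nonparametric 2.5th/97.5th percentile interval on a large sample, and mean ± 2 SD of Box-Cox-transformed values on a small random subsample. Synthetic log-normal data stand in for the canine creatinine results; the sketch is illustrative, not the authors' analysis.

```python
# Two reference-interval estimators: nonparametric percentiles on a large sample and
# mean +/- 2 SD of Box-Cox-transformed values on a small subsample. Synthetic data only.
import numpy as np
from scipy import stats, special

rng = np.random.default_rng(9)
population = rng.lognormal(mean=4.4, sigma=0.25, size=1439)   # skewed "reference values"

# nonparametric limits from the full sample
ri_full = np.percentile(population, [2.5, 97.5])

# parametric limits after Box-Cox transformation of a small random subsample
small = rng.choice(population, size=27, replace=False)
z, lam = stats.boxcox(small)
limits_z = z.mean() + np.array([-2.0, 2.0]) * z.std(ddof=1)
ri_small = special.inv_boxcox(limits_z, lam)

print("n=1439 nonparametric RI: [%.1f, %.1f]" % tuple(ri_full))
print("n=27 Box-Cox mean+/-2SD: [%.1f, %.1f]" % tuple(ri_small))
```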
Characteristics of the April 2007 Flood at 10 Streamflow-Gaging Stations in Massachusetts
Zarriello, Phillip J.; Carlson, Carl S.
2009-01-01
A large 'nor'easter' storm on April 15-18, 2007, brought heavy rains to the southern New England region that, coupled with normal seasonal high flows and associated wet soil-moisture conditions, caused extensive flooding in many parts of Massachusetts and neighboring states. To characterize the magnitude of the April 2007 flood, a peak-flow frequency analysis was undertaken at 10 selected streamflow-gaging stations in Massachusetts to determine the magnitude of flood flows at 5-, 10-, 25-, 50-, 100-, 200-, and 500-year return intervals. The magnitude of flood flows at various return intervals was determined from the logarithms of the annual peaks fit to a Pearson Type III probability distribution. Analysis included augmenting the station record with longer-term records from one or more nearby stations to provide a common period of comparison that includes notable floods in 1936, 1938, and 1955. The April 2007 peak flow was among the highest recorded or estimated since 1936, often ranking between the 3rd and 5th highest peak for that period. In general, the peak-flow frequency analysis indicates the April 2007 peak flow has an estimated return interval between 25 and 50 years; at stations in the northeastern and central areas of the state, the storm was less severe resulting in flows with return intervals of about 5 and 10 years, respectively. At Merrimack River at Lowell, the April 2007 peak flow approached a 100-year return interval that was computed from post-flood control records and the 1936 and 1938 peak flows adjusted for flood control. In general, the magnitudes of flood flow for a given return interval computed from the streamflow-gaging station period-of-record were greater than those used to calculate flood profiles in various community flood-insurance studies. In addition, the magnitude of the updated flood flow and current (2008) stage-discharge relation at a given streamflow-gaging station often produced a flood stage that was considerably different from the flood stage indicated in the flood-insurance study flood profile at that station. Equations for estimating the flow magnitudes for 5-, 10-, 25-, 50-, 100-, 200-, and 500-year floods were developed from the relation of the magnitude of flood flows to drainage area calculated from the six streamflow-gaging stations with the longest unaltered record. These equations produced a more conservative estimate of flood flows (higher discharges) than the existing regional equations for estimating flood flows at ungaged rivers in Massachusetts. Large differences in the magnitude of flood flows for various return intervals determined in this study compared to results from existing regional equations and flood insurance studies indicate a need for updating regional analyses and equations for estimating the expected magnitude of flood flows in Massachusetts.
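The general fitting procedure described, a Pearson Type III distribution fitted to the logarithms of annual peaks and evaluated at selected recurrence intervals, can be sketched as below; the peak-flow series is synthetic rather than a USGS record, and the Bulletin 17-style regional skew weighting used in practice is not included.

```python
# Sketch of the log-Pearson Type III procedure: fit a Pearson Type III distribution to the
# base-10 logarithms of annual peak flows (method of moments) and read off quantiles for
# selected recurrence intervals. Synthetic peaks; no regional skew weighting.
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
annual_peaks = rng.lognormal(mean=8.0, sigma=0.5, size=70)    # synthetic annual peaks, cfs
logq = np.log10(annual_peaks)

skew = stats.skew(logq, bias=False)
mu, sd = logq.mean(), logq.std(ddof=1)

for T in (5, 10, 25, 50, 100, 200, 500):
    p = 1.0 - 1.0 / T                                          # non-exceedance probability
    q_T = 10 ** stats.pearson3.ppf(p, skew, loc=mu, scale=sd)
    print(f"{T:4d}-year flood: {q_T:10.0f} cfs")
```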
O'Gorman, Thomas W
2018-05-01
In the last decade, it has been shown that an adaptive testing method could be used, along with the Robbins-Monro search procedure, to obtain confidence intervals that are often narrower than traditional confidence intervals. However, these confidence interval limits require a great deal of computation and some familiarity with stochastic search methods. We propose a method for estimating the limits of confidence intervals that uses only a few tests of significance. We compare these limits to those obtained by a lengthy Robbins-Monro stochastic search and find that the proposed method is nearly as accurate as the Robbins-Monro search. Adaptive confidence intervals that are produced by the proposed method are often narrower than traditional confidence intervals when the distributions are long-tailed, skewed, or bimodal. Moreover, the proposed method of estimating confidence interval limits is easy to understand, because it is based solely on the p-values from a few tests of significance.
Saripella, Kalyan K; Mallipeddi, Rama; Neau, Steven H
2014-11-20
Polyplasdone samples of different particle sizes were used to study the sorption, desorption, and distribution of water, and to seek evidence that larger particles can internalize water. The three samples were Polyplasdone® XL, XL-10, and INF-10. Moisture sorption and desorption isotherms at 25 °C, at 5% intervals from 0 to 95% relative humidity (RH), were generated by dynamic vapor sorption analysis. The three products provided similar data, judged to be Type III with a small hysteresis that appears when RH is below 65%. The absence of a rounded knee in the sorption curve suggests that multilayers form before the monolayer is completed. The hysteresis indicates that internally absorbed moisture is trapped as the water is desorbed and the polymer sample shrinks, thus requiring a lower level of RH to continue desorption. The Guggenheim-Anderson-de Boer (GAB) and the Young and Nelson equations were fitted to the data. The W(m), C(G), and K values from GAB analysis are similar across the three samples, revealing 0.962 water molecules per repeating unit in the monolayer. A small amount of absorbed water is identified, but this is consistent across the three particle sizes. Copyright © 2014 Elsevier B.V. All rights reserved.
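A minimal sketch of fitting the Guggenheim-Anderson-de Boer equation to sorption data, assuming the standard three-parameter GAB form; the relative-humidity grid mirrors the 5% steps mentioned above, but the moisture values and starting guesses are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def gab(aw, Wm, C, K):
    """Guggenheim-Anderson-de Boer isotherm: equilibrium moisture content W
    as a function of water activity aw (aw = RH/100)."""
    return Wm * C * K * aw / ((1 - K * aw) * (1 - K * aw + C * K * aw))

# Hypothetical sorption data: relative humidity (%) and moisture content
# (g water / 100 g dry solid).
rh = np.arange(5, 96, 5)
aw = rh / 100.0
moisture = np.array([1.2, 2.0, 2.7, 3.4, 4.1, 4.9, 5.7, 6.6, 7.7, 9.0,
                     10.6, 12.6, 15.2, 18.6, 23.2, 29.6, 38.5, 51.2, 69.4])

# Fit the three GAB parameters; K is bounded below 1 to keep the model finite
# over the measured water-activity range.
(Wm, C, K), _ = curve_fit(gab, aw, moisture, p0=[5.0, 10.0, 0.8],
                          bounds=([0, 0, 0], [np.inf, np.inf, 0.9999]))
print(f"Wm = {Wm:.3f}, C = {C:.3f}, K = {K:.3f}")
```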
Detectability of auditory signals presented without defined observation intervals
NASA Technical Reports Server (NTRS)
Watson, C. S.; Nichols, T. L.
1976-01-01
Ability to detect tones in noise was measured without defined observation intervals. Latency density functions were estimated for the first response following a signal and, separately, for the first response following randomly distributed instances of background noise. Detection performance was measured by the maximum separation between the cumulative latency density functions for signal-plus-noise and for noise alone. Values of the index of detectability, estimated by this procedure, were approximately those obtained with a 2-dB weaker signal and defined observation intervals. Simulation of defined- and non-defined-interval tasks with an energy detector showed that this device performs very similarly to the human listener in both cases.
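The detection index described above (maximum separation between two cumulative latency distributions) can be computed directly from two latency samples. The gamma-distributed latencies below are simulated placeholders, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated response latencies (s) following signal onsets and following
# randomly chosen background-noise instants (values are illustrative only).
latency_after_signal = rng.gamma(shape=2.0, scale=0.4, size=500)
latency_after_noise = rng.gamma(shape=2.0, scale=0.9, size=500)

# Evaluate both empirical cumulative distributions on a common time grid and
# take their maximum separation as the detection index (a KS-like statistic).
grid = np.linspace(0, 5, 1001)
cdf_signal = np.searchsorted(np.sort(latency_after_signal), grid,
                             side="right") / latency_after_signal.size
cdf_noise = np.searchsorted(np.sort(latency_after_noise), grid,
                            side="right") / latency_after_noise.size

max_separation = np.max(cdf_signal - cdf_noise)
print(f"maximum CDF separation: {max_separation:.3f}")
```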
The population pharmacokinetics of R- and S-warfarin: effect of genetic and clinical factors.
Lane, Steven; Al-Zubiedi, Sameh; Hatch, Ellen; Matthews, Ivan; Jorgensen, Andrea L; Deloukas, Panos; Daly, Ann K; Park, B Kevin; Aarons, Leon; Ogungbenro, Kayode; Kamali, Farhad; Hughes, Dyfrig; Pirmohamed, Munir
2012-01-01
Warfarin is a drug with a narrow therapeutic index and large interindividual variability in daily dosing requirements. Patients commencing warfarin treatment are at risk of bleeding due to excessive anticoagulation caused by overdosing. The interindividual variability in dose requirements is influenced by a number of factors, including polymorphisms in genes mediating warfarin pharmacology, co-medication, age, sex, body size and diet. To develop population pharmacokinetic models of both R- and S-warfarin using clinical and genetic factors and to identify the covariates which influence the interindividual variability in the pharmacokinetic parameters of clearance and volume of distribution in patients on long-term warfarin therapy. Patients commencing warfarin therapy were followed up for 26 weeks. Plasma warfarin enantiomer concentrations were determined in 306 patients for S-warfarin and in 309 patients for R-warfarin at 1, 8 and 26 weeks. Patients were also genotyped for CYP2C9 variants (CYP2C9*1,*2 and *3), two single-nucleotide polymorphisms (SNPs) in CYP1A2, one SNP in CYP3A4 and six SNPs in CYP2C19. A base pharmacokinetic model was developed using NONMEM software to determine the warfarin clearance and volume of distribution. The model was extended to include covariates that influenced the between-subject variability. Bodyweight, age, sex and CYP2C9 genotype significantly influenced S-warfarin clearance. The S-warfarin clearance was estimated to be 0.144 l h⁻¹ (95% confidence interval 0.131, 0.157) in a 70 kg woman aged 69.8 years with the wild-type CYP2C9 genotype, and the volume of distribution was 16.6 l (95% confidence interval 13.5, 19.7). Bodyweight and age, along with the SNPs rs3814637 (in CYP2C19) and rs2242480 (in CYP3A4), significantly influenced R-warfarin clearance. The R-warfarin clearance was estimated to be 0.125 l h⁻¹ (95% confidence interval 0.115, 0.135) in a 70 kg individual aged 69.8 years with the wild-type CYP2C19 and CYP3A4 genotypes, and the volume of distribution was 10.9 l (95% confidence interval 8.63, 13.2). Our analysis, based on exposure rather than dose, provides quantitative estimates of the clinical and genetic factors impacting on the clearance of both the S- and R-enantiomers of warfarin, which can be used in developing improved dosing algorithms. © 2011 The Authors. British Journal of Clinical Pharmacology © 2011 The British Pharmacological Society.
NASA Astrophysics Data System (ADS)
Kurdhi, N. A.; Jamaluddin, A.; Jauhari, W. A.; Saputro, D. R. S.
2017-06-01
In this study, we consider a stochastic integrated manufacturer-retailer inventory model with a service level constraint. The model analyzed in this article considers the situation in which the vendor and the buyer establish a long-term contract and strategic partnership to jointly determine the best strategy. The lead time and setup cost are assumed to be controllable through an additional crashing cost and an investment, respectively. It is assumed that shortages are allowed and partially backlogged on the buyer's side, and that the protection interval (i.e., review period plus lead time) demand distribution is unknown but has given finite first and second moments. The objective is to apply the minmax distribution free approach to simultaneously optimize the review period, the lead time, the setup cost, the safety factor, and the number of deliveries in order to minimize the joint total expected annual cost. The service level constraint guarantees that the service level requirement can be satisfied in the worst case. By constructing a Lagrange function, the analysis regarding the solution procedure is conducted, and a solution algorithm is then developed. Moreover, a numerical example and sensitivity analysis are given to illustrate the proposed model and to provide some observations and managerial implications.
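The joint optimization itself is involved, but the distribution-free ingredient can be illustrated with the standard worst-case bound on expected shortage when only the first two moments of protection-interval demand are known (often attributed to Scarf and to Gallego and Moon); the parameter values are hypothetical:

```python
import math

def worst_case_expected_shortage(sigma, protection_interval, k):
    """Distribution-free upper bound on expected shortage per cycle when
    protection-interval demand has standard deviation sigma*sqrt(T+L) and the
    reorder level sits k standard deviations above the mean."""
    sd = sigma * math.sqrt(protection_interval)
    return 0.5 * sd * (math.sqrt(1.0 + k * k) - k)

# Hypothetical numbers: weekly demand std dev, review period + lead time (weeks).
sigma, T_plus_L = 40.0, 6.0
for k in (0.0, 0.5, 1.0, 1.5, 2.0):
    print(f"k = {k:.1f}: worst-case expected shortage <= "
          f"{worst_case_expected_shortage(sigma, T_plus_L, k):.1f} units")
```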
Distributed interactive communication in simulated space-dwelling groups.
Brady, Joseph V; Hienz, Robert D; Hursh, Steven R; Ragusa, Leonard C; Rouse, Charles O; Gasior, Eric D
2004-03-01
This report describes the development and preliminary application of an experimental test bed for modeling human behavior in the context of a computer generated environment to analyze the effects of variations in communication modalities, incentives and stressful conditions. In addition to detailing the methodological development of a simulated task environment that provides for electronic monitoring and recording of individual and group behavior, the initial substantive findings from an experimental analysis of distributed interactive communication in simulated space dwelling groups are described. Crews of three members each (male and female) participated in simulated "planetary missions" based upon a synthetic scenario task that required identification, collection, and analysis of geologic specimens with a range of grade values. The results of these preliminary studies showed clearly that cooperative and productive interactions were maintained between individually isolated and distributed individuals communicating and problem-solving effectively in a computer-generated "planetary" environment over extended time intervals without benefit of one another's physical presence. Studies on communication channel constraints confirmed the functional interchangeability between available modalities with the highest degree of interchangeability occurring between Audio and Text modes of communication. The effects of task-related incentives were determined by the conditions under which they were available with Positive Incentives effectively attenuating decrements in performance under stressful time pressure. c2003 Elsevier Ltd. All rights reserved.
Automatic, time-interval traffic counts for recreation area management planning
D. L. Erickson; C. J. Liu; H. K. Cordell
1980-01-01
Automatic, time-interval recorders were used to count directional vehicular traffic on a multiple entry/exit road network in the Red River Gorge Geological Area, Daniel Boone National Forest. Hourly counts of entering and exiting traffic differed according to recorder location, but an aggregated distribution showed a delayed peak in exiting traffic thought to be...
Optimal and Most Exact Confidence Intervals for Person Parameters in Item Response Theory Models
ERIC Educational Resources Information Center
Doebler, Anna; Doebler, Philipp; Holling, Heinz
2013-01-01
The common way to calculate confidence intervals for item response theory models is to assume that the standardized maximum likelihood estimator for the person parameter [theta] is normally distributed. However, this approximation is often inadequate for short and medium test lengths. As a result, the coverage probabilities fall below the given…
Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient
ERIC Educational Resources Information Center
Krishnamoorthy, K.; Xia, Yanping
2008-01-01
The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…
An interval model updating strategy using interval response surface models
NASA Astrophysics Data System (ADS)
Fang, Sheng-En; Zhang, Qiu-Hu; Ren, Wei-Xin
2015-08-01
Stochastic model updating provides an effective way of handling uncertainties existing in real-world structures. In general, probabilistic theories, fuzzy mathematics or interval analyses are involved in the solution of inverse problems. However, in practice, probability distributions or membership functions of structural parameters are often unavailable due to insufficient information about a structure. In this situation, an interval model updating procedure shows its superiority in the aspect of problem simplification, since only the upper and lower bounds of parameters and responses are sought. To this end, this study develops a new concept of interval response surface models for the purpose of efficiently implementing the interval model updating procedure. The frequent interval overestimation due to the use of interval arithmetic can be maximally avoided, leading to accurate estimation of parameter intervals. Meanwhile, the establishment of an interval inverse problem is highly simplified, accompanied by a saving of computational costs. By this means a relatively simple and cost-efficient interval updating process can be achieved. Lastly, the feasibility and reliability of the developed method have been verified against a numerical mass-spring system and also against a set of experimentally tested steel plates.
Perceptual basis of evolving Western musical styles
Rodriguez Zivic, Pablo H.; Shifres, Favio; Cecchi, Guillermo A.
2013-01-01
The brain processes temporal statistics to predict future events and to categorize perceptual objects. These statistics, called expectancies, are found in music perception, and they span a variety of different features and time scales. Specifically, there is evidence that music perception involves strong expectancies regarding the distribution of a melodic interval, namely, the distance between two consecutive notes within the context of another. The recent availability of a large Western music dataset, consisting of the historical record condensed as melodic interval counts, has opened new possibilities for data-driven analysis of musical perception. In this context, we present an analytical approach that, based on cognitive theories of music expectation and machine learning techniques, recovers a set of factors that accurately identifies historical trends and stylistic transitions between the Baroque, Classical, Romantic, and Post-Romantic periods. We also offer a plausible musicological and cognitive interpretation of these factors, allowing us to propose them as data-driven principles of melodic expectation. PMID:23716669
Characterizing the response of a scintillator-based detector to single electrons.
Sang, Xiahan; LeBeau, James M
2016-02-01
Here we report the response of a high angle annular dark field scintillator-based detector to single electrons. We demonstrate that care must be taken when determining the single electron intensity, as significant discrepancies can occur when quantifying STEM images with different methods. To account for the detector response, we first image the detector using very low beam currents (∼8 fA), and subsequently model the interval between consecutive single-electron events. We find that single electrons striking the detector present a wide distribution of intensities, which we show is not described by a simple function. Further, we present a method to accurately account for the electrons within the incident probe when conducting quantitative imaging. The role detector settings play in determining the single electron intensity is also explored. Finally, we extend our analysis to describe the response of the detector to multiple electron events within the dwell interval of each pixel. Copyright © 2015 Elsevier B.V. All rights reserved.
Shot Peening Numerical Simulation of Aircraft Aluminum Alloy Structure
NASA Astrophysics Data System (ADS)
Liu, Yong; Lv, Sheng-Li; Zhang, Wei
2018-03-01
After shot peening, the 7050 aluminum alloy has good anti-fatigue and anti-stress corrosion properties. In the shot peening process, the pellet collides with target material randomly, and generated residual stress distribution on the target material surface, which has great significance to improve material property. In this paper, a simplified numerical simulation model of shot peening was established. The influence of pellet collision velocity, pellet collision position and pellet collision time interval on the residual stress of shot peening was studied, which is simulated by the ANSYS/LS-DYNA software. The analysis results show that different velocity, different positions and different time intervals have great influence on the residual stress after shot peening. Comparing with the numerical simulation results based on Kriging model, the accuracy of the simulation results in this paper was verified. This study provides a reference for the optimization of the shot peening process, and makes an effective exploration for the precise shot peening numerical simulation.
Lott, B.; Escande, L.; Larsson, S.; ...
2012-07-19
Here, we present a method enabling the creation of constant-uncertainty/constant-significance light curves with the data of the Fermi Large Area Telescope (LAT). The adaptive-binning method enables more information to be encapsulated within the light curve than with the fixed-binning method. Although primarily developed for blazar studies, it can be applied to any sources. Furthermore, this method allows the starting and ending times of each interval to be calculated in a simple and quick way during a first step. The reported mean flux and spectral index (assuming the spectrum is a power-law distribution) in the interval are calculated via the standard LAT analysis during a second step. The absence of major caveats associated with this method has been established with Monte-Carlo simulations. We present the performance of this method in determining duty cycles as well as power-density spectra relative to the traditional fixed-binning method.
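As a rough illustration of adaptive binning in general (not the LAT pipeline itself), one can grow each bin until it holds enough counts for the Poisson relative uncertainty to reach a target, so that every bin has comparable significance; the photon arrival times below are simulated:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated photon arrival times (days) from a variable source: a quiet
# stretch followed by a flare with a ten-fold higher rate.
quiet = np.cumsum(rng.exponential(1 / 2.0, size=400))        # ~2 photons/day
flare = quiet[-1] + np.cumsum(rng.exponential(1 / 20.0, size=400))
times = np.concatenate([quiet, flare])

def adaptive_bins(event_times, target_rel_unc=0.15):
    """Close each bin once enough events have accumulated that the Poisson
    relative uncertainty sqrt(N)/N drops below the target, so every bin
    carries roughly the same statistical significance (a trailing partial
    bin is simply ignored in this sketch)."""
    n_required = int(np.ceil(1.0 / target_rel_unc ** 2))
    edges = [event_times[0]]
    for i in range(n_required, len(event_times), n_required):
        edges.append(event_times[i])
    return np.array(edges), n_required

edges, n_req = adaptive_bins(times)
widths = np.diff(edges)
print(f"{n_req} events per bin; bin widths range from "
      f"{widths.min():.2f} to {widths.max():.2f} days")
```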
Statistical summaries of fatigue data for design purposes
NASA Technical Reports Server (NTRS)
Wirsching, P. H.
1983-01-01
Two methods are discussed for constructing a design curve on the safe side of fatigue data. Both the tolerance interval and equivalent prediction interval (EPI) concepts provide such a curve while accounting for both the distribution of the estimators in small samples and the data scatter. The EPI is also useful as a mechanism for providing necessary statistics on S-N data for a full reliability analysis which includes uncertainty in all fatigue design factors. Examples of statistical analyses of the general strain life relationship are presented. The tolerance limit and EPI techniques for defining a design curve are demonstrated. Examples usng WASPALOY B and RQC-100 data demonstrate that a reliability model could be constructed by considering the fatigue strength and fatigue ductility coefficients as two independent random variables. A technique given for establishing the fatigue strength for high cycle lives relies on an extrapolation technique and also accounts for "runners." A reliability model or design value can be specified.
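A minimal sketch of the one-sided normal tolerance-limit calculation mentioned above, using the exact noncentral-t factor; the sample of log fatigue lives is simulated and the coverage/confidence levels are arbitrary choices, not values from the report:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical log fatigue lives (log10 cycles) from a small sample of coupons.
log_life = rng.normal(loc=5.2, scale=0.15, size=12)
n = log_life.size
xbar, s = log_life.mean(), log_life.std(ddof=1)

# One-sided lower tolerance limit: with confidence gamma, at least a
# proportion p of the population lies above xbar - k*s.  The exact factor k
# comes from the noncentral t distribution.
p, gamma = 0.95, 0.95
k = stats.nct.ppf(gamma, df=n - 1, nc=stats.norm.ppf(p) * np.sqrt(n)) / np.sqrt(n)
design_curve_value = xbar - k * s

print(f"k = {k:.3f}; lower tolerance limit = {design_curve_value:.3f} log10 cycles")
```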
Na, Youn; Park, Sungjin; Lee, Changhee; Kim, Dong-Kyu; Park, Joo Min; Sockanathan, Shanthini; Huganir, Richard L; Worley, Paul F
2016-08-03
The immediate early gene Arc (also Arg3.1) produces rapid changes in synaptic properties that are linked to de novo translation. Here we develop a novel translation reporter that exploits the rapid maturation and "flash" kinetics of Gaussia luciferase (Gluc) to visualize Arc translation. Following glutamate stimulation, discrete Arc-Gluc bioluminescent flashes representing sites of de novo translation are detected within 15 s at distributed sites in dendrites, but not spines. Flashes are episodic, lasting ∼20 s, and may be unitary or repeated at ∼minute intervals at the same sites. Analysis of flash amplitudes suggests they represent the quantal product of one or more polyribosomes, while inter-flash intervals appear random, suggesting they arise from a stochastic process. Surprisingly, glutamate-induced translation is dependent on Arc open reading frame. Combined observations support a model in which stalled ribosomes are reactivated to rapidly generate Arc protein. Copyright © 2016 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daigle, Hugh; Rice, Mary Anna
Relative permeabilities to water and gas are important parameters for accurate modeling of the formation of methane hydrate deposits and production of methane from hydrate reservoirs. Experimental measurements of gas and water permeability in the presence of hydrate are difficult to obtain. The few datasets that do exist suggest that relative permeability obeys a power law relationship with water or gas saturation with exponents ranging from around 2 to greater than 10. Critical path analysis and percolation theory provide a framework for interpreting the saturation-dependence of relative permeability based on percolation thresholds and the breadth of pore size distributions, which may be determined easily from 3-D images or gas adsorption-desorption hysteresis. We show that the exponent of the permeability-saturation relationship for relative permeability to water is related to the breadth of the pore size distribution, with broader pore size distributions corresponding to larger exponents. Relative permeability to water in well-sorted sediments with narrow pore size distributions, such as Berea sandstone or Toyoura sand, follows percolation scaling with an exponent of 2. On the other hand, pore-size distributions determined from argon adsorption measurements we performed on clays from the Nankai Trough suggest that relative permeability to water in fine-grained intervals may be characterized by exponents as large as 10 as determined from critical path analysis. We also show that relative permeability to the gas phase follows percolation scaling with a quadratic dependence on gas saturation, but the threshold gas saturation for percolation changes with hydrate saturation, which is an important consideration in systems in which both hydrate and gas are present, such as during production from a hydrate reservoir. Our work shows how measurements of pore size distributions from 3-D imaging or gas adsorption may be used to determine relative permeabilities.
Marinaccio, Christian; Giudice, Giuseppe; Nacchiero, Eleonora; Robusto, Fabio; Opinto, Giuseppina; Lastilla, Gaetano; Maiorano, Eugenio; Ribatti, Domenico
2016-08-01
The presence of interval sentinel lymph nodes in melanoma is documented in several studies, but controversies still exist about the management of these lymph nodes. In this study, an immunohistochemical evaluation of tumor cell proliferation and neo-angiogenesis has been performed with the aim of establishing a correlation between these two parameters between positive and negative interval sentinel lymph nodes. This retrospective study reviewed data of 23 patients diagnosed with melanoma. Bioptic specimens of interval sentinel lymph node were retrieved, and immunohistochemical reactions on tissue sections were performed using Ki67 as a marker of proliferation and CD31 as a blood vessel marker for the study of angiogenesis. The entire stained tissue sections for each case were digitized using Aperio Scanscope Cs whole-slide scanning platform and stored as high-resolution images. Image analysis was carried out on three selected fields of equal area using IHC Nuclear and Microvessel analysis algorithms to determine positive Ki67 nuclei and vessel number. Patients were divided into positive and negative interval sentinel lymph node groups, and the positive interval sentinel lymph node group was further divided into interval positive with micrometastasis and interval positive with macrometastasis subgroups. The analysis revealed a significant difference between positive and negative interval sentinel lymph nodes in the percentage of Ki67-positive nuclei and mean vessel number suggestive of an increased cellular proliferation and angiogenesis in positive interval sentinel lymph nodes. Further analysis in the interval positive lymph node group showed a significant difference between micro- and macrometastasis subgroups in the percentage of Ki67-positive nuclei and mean vessel number. Percentage of Ki67-positive nuclei was increased in the macrometastasis subgroup, while mean vessel number was increased in the micrometastasis subgroup. The results of this study suggest that the correlation between tumor cell proliferation and neo-angiogenesis in interval sentinel lymph nodes in melanoma could be used as a good predictive marker to distinguish interval positive sentinel lymph nodes with micrometastasis from interval positive lymph nodes with macrometastasis subgroups.
NASA Astrophysics Data System (ADS)
Christen, Alejandra; Escarate, Pedro; Curé, Michel; Rial, Diego F.; Cassetti, Julia
2016-10-01
Aims: Knowing the distribution of stellar rotational velocities is essential for understanding stellar evolution. Because we measure the projected rotational speed v sin I, we need to solve an ill-posed problem given by a Fredholm integral of the first kind to recover the "true" rotational velocity distribution. Methods: After discretization of the Fredholm integral we apply the Tikhonov regularization method to obtain directly the probability distribution function for stellar rotational velocities. We propose a simple and straightforward procedure to determine the Tikhonov parameter. We applied Monte Carlo simulations to prove that the Tikhonov method is a consistent estimator and asymptotically unbiased. Results: This method is applied to a sample of cluster stars. We obtain confidence intervals using a bootstrap method. Our results are in close agreement with those obtained using the Lucy method for recovering the probability density distribution of rotational velocities. Furthermore, Lucy estimation lies inside our confidence interval. Conclusions: Tikhonov regularization is a highly robust method that deconvolves the rotational velocity probability density function from a sample of v sin I data directly without the need for any convergence criteria.
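A minimal sketch of Tikhonov regularization applied to a discretized Fredholm problem of the first kind; the Gaussian kernel and the two-bump "true" distribution are generic stand-ins for the v sin i kernel and the rotational velocity distribution, and the regularization parameter is simply fixed rather than selected by the authors' procedure:

```python
import numpy as np

rng = np.random.default_rng(3)

# Discretized Fredholm problem  b = A @ x_true + noise, with a smoothing
# (Gaussian) kernel standing in for the projection kernel.
n = 100
t = np.linspace(0, 1, n)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.03 ** 2))
A /= A.sum(axis=1, keepdims=True)

x_true = np.exp(-((t - 0.4) ** 2) / 0.005) + 0.5 * np.exp(-((t - 0.75) ** 2) / 0.002)
b = A @ x_true + rng.normal(scale=0.01, size=n)

def tikhonov(A, b, lam):
    """Classical Tikhonov solution: minimize ||A x - b||^2 + lam^2 ||x||^2."""
    m = A.shape[1]
    return np.linalg.solve(A.T @ A + lam ** 2 * np.eye(m), A.T @ b)

x_hat = tikhonov(A, b, lam=0.05)
print(f"relative reconstruction error: "
      f"{np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true):.3f}")
```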
Sensitivity Analysis of Multicriteria Choice to Changes in Intervals of Value Tradeoffs
NASA Astrophysics Data System (ADS)
Podinovski, V. V.
2018-03-01
An approach to sensitivity (stability) analysis of nondominated alternatives to changes in the bounds of intervals of value tradeoffs, where the alternatives are selected based on interval data of criteria tradeoffs is proposed. Methods of computations for the analysis of sensitivity of individual nondominated alternatives and the set of such alternatives as a whole are developed.
Ayubi, Erfan; Sani, Mohadeseh; Safiri, Saeid; Khedmati Morasae, Esmaeil; Almasi-Hashiani, Amir; Nazarzadeh, Milad
2017-07-01
The effect of socioeconomic status on adolescent smoking behaviors is unclear, and sparse studies are available about the potential association. The present study aimed to measure and explain socioeconomic inequality in smoking behavior among a sample of Iranian adolescents. In a cross-sectional survey, a multistage sample of adolescents (n = 1,064) was recruited from high school students in Zanjan city, northwest of Iran. Principal component analysis was used to measure economic status of adolescents. Concentration index was used to measure socioeconomic inequality in smoking behavior, and then it was decomposed to reveal inequality contributors. Concentration index and its 95% confidence interval for never, experimental, and regular smoking behaviors were 0.004 [-0.03, 0.04], 0.05 [0.02, 0.11], and -0.10 [-0.04, -0.19], respectively. The contribution of economic status to measured inequality in experimental and regular smoking was 80.0% and 68.8%, respectively. Household economic status could be targeted as one of the relevant factors in the unequal distribution of smoking behavior among adolescents.
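The concentration index used above has a standard covariance form, C = 2 cov(h, r) / mean(h), with r the fractional rank by economic status. A sketch with simulated data (the wealth score and smoking prevalences are invented):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical data: a wealth score and a binary smoking indicator for 1,000
# adolescents, with smoking made slightly more common among the poorer half.
n = 1000
wealth = rng.normal(size=n)
smokes = rng.binomial(1, np.where(wealth < 0, 0.15, 0.10))

def concentration_index(health, ses):
    """Covariance form of the concentration index:
    C = 2 * cov(h, fractional SES rank) / mean(h)."""
    m = len(health)
    order = np.argsort(ses)
    rank = np.empty(m, dtype=float)
    rank[order] = (np.arange(m) + 0.5) / m   # fractional rank: poorest 0, richest 1
    h = np.asarray(health, dtype=float)
    return 2.0 * np.cov(h, rank, ddof=0)[0, 1] / h.mean()

print(f"concentration index: {concentration_index(smokes, wealth):.3f}")
```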
NASA Astrophysics Data System (ADS)
Godsey, S. E.; Kirchner, J. W.
2008-12-01
The mean residence time - the average time that it takes rainfall to reach the stream - is a basic parameter used to characterize catchment processes. Heterogeneities in these processes lead to a distribution of travel times around the mean residence time. By examining this travel time distribution, we can better predict catchment response to contamination events. A catchment system with shorter residence times or narrower distributions will respond quickly to contamination events, whereas systems with longer residence times or longer-tailed distributions will respond more slowly to those same contamination events. The travel time distribution of a catchment is typically inferred from time series of passive tracers (e.g., water isotopes or chloride) in precipitation and streamflow. Variations in the tracer concentration in streamflow are usually damped compared to those in precipitation, because precipitation inputs from different storms (with different tracer signatures) are mixed within the catchment. Mathematically, this mixing process is represented by the convolution of the travel time distribution and the precipitation tracer inputs to generate the stream tracer outputs. Because convolution in the time domain is equivalent to multiplication in the frequency domain, it is relatively straightforward to estimate the parameters of the travel time distribution in either domain. In the time domain, the parameters describing the travel time distribution are typically estimated by maximizing the goodness of fit between the modeled and measured tracer outputs. In the frequency domain, the travel time distribution parameters can be estimated by fitting a power-law curve to the ratio of precipitation spectral power to stream spectral power. Differences between the methods of parameter estimation in the time and frequency domain mean that these two methods may respond differently to variations in data quality, record length and sampling frequency. Here we evaluate how well these two methods of travel time parameter estimation respond to different sources of uncertainty and compare the methods to one another. We do this by generating synthetic tracer input time series of different lengths, and convolve these with specified travel-time distributions to generate synthetic output time series. We then sample both the input and output time series at various sampling intervals and corrupt the time series with realistic error structures. Using these 'corrupted' time series, we infer the apparent travel time distribution, and compare it to the known distribution that was used to generate the synthetic data in the first place. This analysis allows us to quantify how different record lengths, sampling intervals, and error structures in the tracer measurements affect the apparent mean residence time and the apparent shape of the travel time distribution.
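The forward model described here (stream tracer output as the convolution of the precipitation tracer input with a travel-time distribution) is easy to sketch; the gamma-shaped distribution, 40-week mean residence time, and noise level below are arbitrary illustrative choices:

```python
import numpy as np
from scipy import stats, signal

rng = np.random.default_rng(5)

# Weekly tracer concentrations in precipitation (synthetic, arbitrary units),
# and a gamma-shaped travel-time distribution with a hypothetical mean
# residence time of about 40 weeks.
n_weeks = 520
c_precip = rng.normal(loc=-8.0, scale=2.0, size=n_weeks)

lags = np.arange(0, 200)
ttd = stats.gamma.pdf(lags, a=2.0, scale=20.0)     # mean = a * scale = 40 weeks
ttd /= ttd.sum()                                   # normalize the discrete TTD

# Stream tracer output = convolution of the input with the travel-time
# distribution (plus measurement noise); note how the variance is damped.
c_stream = signal.convolve(c_precip, ttd, mode="full")[:n_weeks]
c_stream += rng.normal(scale=0.1, size=n_weeks)

print(f"input std = {c_precip.std():.2f}, output std = {c_stream[200:].std():.2f}")
```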
Bucchi, L; Pierri, C; Caprara, L; Cortecchia, S; De Lillo, M; Bondi, A
2003-02-01
This paper presents a computerised system for the monitoring of integrated cervical screening, i.e. the integration of spontaneous Pap smear practice into organised screening. The general characteristics of the system are described, including background and rationale (integrated cervical screening in European countries, impact of integration on monitoring, decentralised organization of screening and levels of monitoring), general methods (definitions, sections, software description, and setting of application), and indicators of participation (distribution by time interval since previous Pap smear, distribution by screening sector--organised screening centres vs public and private clinical settings--, distribution by time interval between the last two Pap smears, and movement of women between the two screening sectors). Also, the paper reports the results of the application of these indicators in the general database of the Pathology Department of Imola Health District in northern Italy.
NASA Technical Reports Server (NTRS)
Fedi, F.; Migliorini, P.
1981-01-01
Measurement results of attenuation due to rain are reported. Cumulative distribution functions of the attenuation found in three connections are described. Differences between the distribution functions and different polarization frequencies are demonstrated. The possibilty of establishing a bond between the statistics of annual attenuation and worst month attenuation is explored.
Practicability of monitoring soil Cd, Hg, and Pb pollution based on a geochemical survey in China.
Xia, Xueqi; Yang, Zhongfang; Li, Guocheng; Yu, Tao; Hou, Qingye; Mutelo, Admire Muchimamui
2017-04-01
Repeated visiting, i.e., sampling and analysis at two or more temporal points, is one of the important ways of monitoring soil heavy metal contamination. However, given concerns about cost, determining the number of samples and the temporal interval, and their capability to detect a certain change, is a key technical problem to be solved. This depends on the spatial variation of the parameters in the monitoring units. The "National Multi-Purpose Regional Geochemical Survey" (NMPRGS) project in China acquired the spatial distribution of heavy metals using a high-density sampling method in the most arable regions of China. Based on soil Cd, Hg, and Pb data and taking administrative regions as the monitoring units, the number of samples and temporal intervals that may be used for monitoring soil heavy metal contamination were determined. It was found that the spatial variation of the elements differs widely among the NMPRGS regions. This makes it difficult to determine the minimum detectable changes (MDC), the number of samples, and the temporal intervals for revisiting. This paper recommends a suitable number of samples (n_r) for each region, balancing cost, practicability, and monitoring precision. Under n_r, MDC values are acceptable for all the regions, and the minimum temporal intervals are practical, ranging from 3.3 to 13.3 years. Copyright © 2017 Elsevier Ltd. All rights reserved.
Armando García-Miranda, L; Contreras, I; Estrada, J A
2014-04-01
To determine reference values for full blood count parameters in a population of children 8 to 12 years old, living at an altitude of 2760 m above sea level. Our sample consisted of 102 individuals on whom a full blood count was performed. The parameters included: total number of red blood cells, platelets, white cells, and a differential count (millions/μl and %) of neutrophils, lymphocytes, monocytes, eosinophils and basophils. Additionally, we obtained values for hemoglobin, hematocrit, mean corpuscular volume, mean corpuscular hemoglobin, concentration of corpuscular hemoglobin and red blood cell distribution width. The results were statistically analyzed with a non-parametric test, to divide the sample into quartiles and obtain the lower and upper limits for our intervals. Moreover, the values for the intervals obtained from this analysis were compared to intervals estimated as ± 2 standard deviations above and below our mean values. Our results showed significant differences compared to normal interval values reported for the adult Mexican population in most of the parameters studied. The full blood count is an important laboratory test used routinely for the initial assessment of a patient. Values of full blood counts in healthy individuals vary according to gender, age and geographic location; therefore, each population should have its own reference values. Copyright © 2013 Asociación Española de Pediatría. Published by Elsevier Espana. All rights reserved.
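The two interval constructions being compared can be sketched side by side; the hemoglobin values are simulated, and the nonparametric interval below uses the conventional central 95% percentiles rather than the paper's exact quartile-based procedure:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical hemoglobin values (g/dL) for 102 children; the slight right
# skew is deliberate, to show why the two interval methods can disagree.
hgb = rng.normal(loc=14.0, scale=1.0, size=102) + rng.exponential(0.4, size=102)

# Nonparametric reference interval: central 95% of the observed values.
lower_np, upper_np = np.percentile(hgb, [2.5, 97.5])

# Parametric interval: mean +/- 2 standard deviations.
mean, sd = hgb.mean(), hgb.std(ddof=1)
lower_p, upper_p = mean - 2 * sd, mean + 2 * sd

print(f"percentile interval:  {lower_np:.2f} - {upper_np:.2f} g/dL")
print(f"mean +/- 2 SD:        {lower_p:.2f} - {upper_p:.2f} g/dL")
```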
Meng, Ran; Dennison, Philip E; D'Antonio, Carla M; Moritz, Max A
2014-01-01
Increased fire frequency has been shown to promote alien plant invasions in the western United States, resulting in persistent vegetation type change. Short interval fires are widely considered to be detrimental to reestablishment of shrub species in southern California chaparral, facilitating the invasion of exotic annuals and producing "type conversion". However, supporting evidence for type conversion has largely been at local, site scales and over short post-fire time scales. Type conversion has not been shown to be persistent or widespread in chaparral, and past range improvement studies present evidence that chaparral type conversion may be difficult and a relatively rare phenomenon across the landscape. With the aid of remote sensing data covering coastal southern California and a historical wildfire dataset, the effects of short interval fires (<8 years) on chaparral recovery were evaluated by comparing areas that burned twice to adjacent areas burned only once. Twelve pairs of once- and twice-burned areas were compared using normalized burn ratio (NBR) distributions. Correlations between measures of recovery and explanatory factors (fire history, climate and elevation) were analyzed by linear regression. Reduced vegetation cover was found in some lower elevation areas that were burned twice in short interval fires, where non-sprouting species are more common. However, extensive type conversion of chaparral to grassland was not evident in this study. Most variables, with the exception of elevation, were moderately or poorly correlated with differences in vegetation recovery.
Meng, Ran; Dennison, Philip E.; D’Antonio, Carla M.; Moritz, Max A.
2014-01-01
Increased fire frequency has been shown to promote alien plant invasions in the western United States, resulting in persistent vegetation type change. Short interval fires are widely considered to be detrimental to reestablishment of shrub species in southern California chaparral, facilitating the invasion of exotic annuals and producing “type conversion”. However, supporting evidence for type conversion has largely been at local, site scales and over short post-fire time scales. Type conversion has not been shown to be persistent or widespread in chaparral, and past range improvement studies present evidence that chaparral type conversion may be difficult and a relatively rare phenomenon across the landscape. With the aid of remote sensing data covering coastal southern California and a historical wildfire dataset, the effects of short interval fires (<8 years) on chaparral recovery were evaluated by comparing areas that burned twice to adjacent areas burned only once. Twelve pairs of once- and twice-burned areas were compared using normalized burn ratio (NBR) distributions. Correlations between measures of recovery and explanatory factors (fire history, climate and elevation) were analyzed by linear regression. Reduced vegetation cover was found in some lower elevation areas that were burned twice in short interval fires, where non-sprouting species are more common. However, extensive type conversion of chaparral to grassland was not evident in this study. Most variables, with the exception of elevation, were moderately or poorly correlated with differences in vegetation recovery. PMID:25337785
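The recovery metric in this study is the normalized burn ratio; the index itself is the standard NBR = (NIR - SWIR) / (NIR + SWIR), sketched below with made-up reflectance values for a once-burned and a twice-burned pixel:

```python
import numpy as np

def nbr(nir, swir2):
    """Normalized burn ratio from near-infrared and shortwave-infrared
    reflectance arrays (higher NBR ~ more vegetation cover)."""
    nir = nir.astype(float)
    swir2 = swir2.astype(float)
    return (nir - swir2) / (nir + swir2)

# Hypothetical reflectance values for a once-burned and a twice-burned pixel
# several years after fire.
nir_once, swir_once = np.array([0.32]), np.array([0.16])
nir_twice, swir_twice = np.array([0.25]), np.array([0.19])

print(f"once-burned NBR:  {nbr(nir_once, swir_once)[0]:.2f}")
print(f"twice-burned NBR: {nbr(nir_twice, swir_twice)[0]:.2f}")
```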
Automatic Error Analysis Using Intervals
ERIC Educational Resources Information Center
Rothwell, E. J.; Cloud, M. J.
2012-01-01
A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
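A minimal sketch of interval arithmetic used for error propagation, in the spirit of the approach described above (though without the directed rounding a package such as INTLAB provides); the parallel-resistance example and tolerances are invented. Note that naive interval arithmetic overestimates when a variable appears more than once, which is the dependency problem such tools must manage:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

    def __truediv__(self, other):
        if other.lo <= 0.0 <= other.hi:
            raise ZeroDivisionError("divisor interval contains zero")
        return self * Interval(1.0 / other.hi, 1.0 / other.lo)

# Example: parallel resistance R = R1*R2/(R1+R2) with each resistor known only
# to within its tolerance; the result interval bounds the error directly
# (conservatively, because R1 and R2 each appear twice in the formula).
R1 = Interval(99.0, 101.0)
R2 = Interval(218.0, 222.0)
R_parallel = (R1 * R2) / (R1 + R2)
print(f"R_parallel lies in [{R_parallel.lo:.2f}, {R_parallel.hi:.2f}] ohms")
```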
Cardiopulmonary resuscitation quality: Widespread variation in data intervals used for analysis.
Talikowska, Milena; Tohira, Hideo; Bailey, Paul; Finn, Judith
2016-05-01
There is a growing body of evidence for the relationship between CPR quality and survival in cardiac arrest patients. We sought to describe the characteristics of the analysis intervals used across studies. Relevant papers were selected as described in our recent systematic review. From these papers we collected information about (1) the time interval used for analysis; (2) the event that marked the beginning of the analysis interval; and (3) the minimum amount of CPR quality data required for a case to be included in the analysed cohort. We then compared this data across papers. Twenty-one studies reported on the association between CPR quality and cardiac arrest patient survival. In two thirds of studies data from the start of the resuscitation episode was analysed, in particular the first 5min. Commencement of the analysis interval was marked by various events including ECG pad placement and first chest compression. Nine studies specified a minimum amount of data that had to have been collected for the individual case to be included in the analysis; most commonly 1min of data. The use of shorter intervals allowed for inclusion of more cases as it included cases that did not have a complete dataset. To facilitate comparisons across studies, a standardised definition of the data analysis interval should be developed; one that maximises the amount of cases available without compromising the data's representability of the resuscitation effort. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Genter, Albert; Traineau, Hervé
1996-07-01
An exhaustive analysis of 3000 macroscopic fractures encountered in the geothermal Hot Dry Rock borehole, EPS-1, located inside the Rhine graben (Soultz-sous-Forêts, France), was done on a continuous core section over a depth interval from 1420 to 2230 m: 97% of the macroscopic structures were successfully reorientated with a good degree of confidence by comparison between core and acoustic borehole imagery. Detailed structural analysis of the fracture population indicates that fractures are grouped in two principal fracture sets striking N005° and N170°, and dipping 70°W and 70°E, respectively. This average attitude is closely related to the past tectonic rifting activity of the graben during the Tertiary, and is consistent with data obtained from nearby boreholes and from neighbouring crystalline outcrops. Fractures are distributed in clusters of hydrothermally altered and fractured zones. They constitute a complex network of fault strands dominated by N-S trends, except within some of the most fractured depth intervals (1650 m, 2170 m), where an E-W-striking fracture set occurs. The geometry of the pre-existing fracture system strikes in a direction nearly parallel to the maximum horizontal stress. In this favorable situation, hydraulic injections will tend both to reactivate natural fractures at low pressures, and to create a geothermal reservoir.
At least some errors are randomly generated (Freud was wrong)
NASA Technical Reports Server (NTRS)
Sellen, A. J.; Senders, J. W.
1986-01-01
An experiment was carried out to expose something about human error generating mechanisms. In the context of the experiment, an error was made when a subject pressed the wrong key on a computer keyboard or pressed no key at all in the time allotted. These might be considered, respectively, errors of substitution and errors of omission. Each of seven subjects saw a sequence of three digital numbers, made an easily learned binary judgement about each, and was to press the appropriate one of two keys. Each session consisted of 1,000 presentations of randomly permuted, fixed numbers broken into 10 blocks of 100. One of two keys should have been pressed within one second of the onset of each stimulus. These data were subjected to statistical analyses in order to probe the nature of the error generating mechanisms. Goodness of fit tests for a Poisson distribution for the number of errors per 50 trial interval and for an exponential distribution of the length of the intervals between errors were carried out. There is evidence for an endogenous mechanism that may best be described as a random error generator. Furthermore, an item analysis of the number of errors produced per stimulus suggests the existence of a second mechanism operating on task driven factors producing exogenous errors. Some errors, at least, are the result of constant probability generating mechanisms with error rate idiosyncratically determined for each subject.
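The two goodness-of-fit checks described above can be sketched with generic tests; the Bernoulli error simulation, category binning, and use of a chi-square test and a KS test with estimated parameters are illustrative choices, not the authors' exact procedure:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Simulate 1,000 trials in which an error occurs independently on each trial
# with constant probability (the "random error generator" hypothesis).
n_trials, p_err = 1000, 0.04
errors = rng.random(n_trials) < p_err

# (1) Errors per 50-trial block vs. a Poisson distribution with the same mean.
block_counts = errors.reshape(-1, 50).sum(axis=1)
lam = block_counts.mean()
observed = np.array([(block_counts == k).sum() for k in (0, 1, 2)]
                    + [(block_counts >= 3).sum()])
expected = np.array([stats.poisson.pmf(k, lam) for k in (0, 1, 2)]
                    + [1 - stats.poisson.cdf(2, lam)]) * block_counts.size
chi2, p_val = stats.chisquare(observed, expected, ddof=1)   # 1 df lost to lam
print(f"Poisson fit: chi2 = {chi2:.2f}, p = {p_val:.3f}")

# (2) Intervals (in trials) between successive errors vs. an exponential law.
# The scale is estimated from the data, so the nominal p-value is approximate.
error_trials = np.flatnonzero(errors)
intervals = np.diff(error_trials)
ks_stat, ks_p = stats.kstest(intervals, "expon", args=(0, intervals.mean()))
print(f"Exponential intervals: KS = {ks_stat:.3f}, p = {ks_p:.3f}")
```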
Scott, Laura L F; Maldonado, George
2015-10-15
The purpose of this analysis was to quantify and adjust for disease misclassification from loss to follow-up in a historical cohort mortality study of workers where exposure was categorized as a multi-level variable. Disease classification parameters were defined using 2008 mortality data for the New Zealand population and the proportions of known deaths observed for the cohort. The probability distributions for each classification parameter were constructed to account for potential differences in mortality due to exposure status, gender, and ethnicity. Probabilistic uncertainty analysis (bias analysis), which uses Monte Carlo techniques, was then used to sample each parameter distribution 50,000 times, calculating adjusted odds ratios (ORDM-LTF) that compared the mortality of workers with the highest cumulative exposure to those that were considered never-exposed. The geometric mean ORDM-LTF ranged between 1.65 (certainty interval (CI): 0.50-3.88) and 3.33 (CI: 1.21-10.48), and the geometric mean of the disease-misclassification error factor (εDM-LTF), which is the ratio of the observed odds ratio to the adjusted odds ratio, had a range of 0.91 (CI: 0.29-2.52) to 1.85 (CI: 0.78-6.07). Only when workers in the highest exposure category were more likely than those never-exposed to be misclassified as non-cases did the ORDM-LTF frequency distributions shift further away from the null. The application of uncertainty analysis to historical cohort mortality studies with multi-level exposures can provide valuable insight into the magnitude and direction of study error resulting from losses to follow-up.
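A generic sketch of the Monte Carlo bias-analysis loop for outcome misclassification; the 2x2 counts, the uniform distributions for sensitivity and specificity, and the decision to treat specificity as near perfect are all invented for illustration and do not reproduce the study's parameter distributions:

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical observed counts: deaths (cases) among the highest-exposure
# workers and among the never-exposed workers.
cases_hi, n_hi = 30, 400          # highest cumulative exposure
cases_ref, n_ref = 45, 1200       # never exposed

n_iter = 50_000
adj_or = np.full(n_iter, np.nan)
for i in range(n_iter):
    # Draw classification parameters; sensitivity of death ascertainment
    # (affected by loss to follow-up) may differ by exposure group.
    se_hi = rng.uniform(0.75, 0.95)
    se_ref = rng.uniform(0.80, 0.98)
    sp = rng.uniform(0.98, 1.00)

    # Back-correct observed counts: A = (a* - (1-Sp)N) / (Se + Sp - 1).
    a = (cases_hi - (1 - sp) * n_hi) / (se_hi + sp - 1)
    c = (cases_ref - (1 - sp) * n_ref) / (se_ref + sp - 1)
    if not (0 < a < n_hi and 0 < c < n_ref):
        continue                   # discard impossible corrections
    adj_or[i] = (a / (n_hi - a)) / (c / (n_ref - c))

adj_or = adj_or[~np.isnan(adj_or)]
gm = np.exp(np.mean(np.log(adj_or)))
lo, hi = np.percentile(adj_or, [2.5, 97.5])
print(f"adjusted OR: geometric mean {gm:.2f}, certainty interval {lo:.2f}-{hi:.2f}")
```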
High northern latitude temperature extremes, 1400-1999
NASA Astrophysics Data System (ADS)
Tingley, M. P.; Huybers, P.; Hughen, K. A.
2009-12-01
There is often an interest in determining which interval features the most extreme value of a reconstructed climate field, such as the warmest year or decade in a temperature reconstruction. Previous approaches to this type of question have not fully accounted for the spatial and temporal covariance in the climate field when assessing the significance of extreme values. Here we present results from applying BARSAT, a new, Bayesian approach to reconstructing climate fields, to a 600 year multiproxy temperature data set that covers land areas between 45N and 85N. The end result of the analysis is an ensemble of spatially and temporally complete realizations of the temperature field, each of which is consistent with the observations and the estimated values of the parameters that define the assumed spatial and temporal covariance functions. In terms of the spatial average temperature, 1990-1999 was the warmest decade in the 1400-1999 interval in each of 2000 ensemble members, while 1995 was the warmest year in 98% of the ensemble members. A similar analysis at each node of a regular 5 degree grid gives insight into the spatial distribution of warm temperatures, and reveals that 1995 was anomalously warm in Eurasia, whereas 1998 featured extreme warmth in North America. In 70% of the ensemble members, 1601 featured the coldest spatial average, indicating that the eruption of Huaynaputina in Peru in 1600 (with a volcanic explosivity index of 6) had a major cooling impact on the high northern latitudes. Repeating this analysis at each node reveals the varying impacts of major volcanic eruptions on the distribution of extreme cooling. Finally, we use the ensemble to investigate extremes in the time evolution of centennial temperature trends, and find that in more than half the ensemble members, the greatest rate of change in the spatial mean time series was a cooling centered at 1600. The largest rate of centennial scale warming, however, occurred in the 20th Century in more than 98% of the ensemble members.
Statistics of zero crossings in rough interfaces with fractional elasticity
NASA Astrophysics Data System (ADS)
Zamorategui, Arturo L.; Lecomte, Vivien; Kolton, Alejandro B.
2018-04-01
We study numerically the distribution of zero crossings in one-dimensional elastic interfaces described by an overdamped Langevin dynamics with periodic boundary conditions. We model the elastic forces with a Riesz-Feller fractional Laplacian of order z = 1 + 2ζ, such that the interfaces spontaneously relax, with a dynamical exponent z, to a self-affine geometry with roughness exponent ζ. By continuously increasing from ζ = -1/2 (macroscopically flat interface described by independent Ornstein-Uhlenbeck processes [Phys. Rev. 36, 823 (1930), 10.1103/PhysRev.36.823]) to ζ = 3/2 (super-rough Mullins-Herring interface), three different regimes are identified: (I) -1/2 < ζ < 0, (II) 0 < ζ < 1, and (III) 1 < ζ < 3/2. Starting from a flat initial condition, the mean number of zeros of the discretized interface (I) decays exponentially in time and reaches an extensive value in the system size, or decays as a power-law towards (II) a subextensive or (III) an intensive value. In the steady state, the distribution of intervals between zeros changes from an exponential decay in (I) to a power-law decay P(ℓ) ~ ℓ^(-γ) in (II) and (III). While in (II) γ = 1 - θ, with θ = 1 - ζ the steady-state persistence exponent, in (III) we obtain γ = 3 - 2ζ, different from the exponent γ = 1 expected from the prediction θ = 0 for infinite super-rough interfaces with ζ > 1. The effect on P(ℓ) of short-scale smoothening is also analyzed numerically and analytically. A tight relation between the mean interval, the mean width of the interface, and the density of zeros is also reported. The results drawn from our analysis of rough interfaces subject to particular boundary conditions or constraints, along with discretization effects, are relevant for the practical analysis of zeros in interface imaging experiments or in numerical analysis.
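As a loosely related numerical illustration (not the fractional-Laplacian Langevin dynamics studied in the paper), one can generate a correlated one-dimensional profile, locate its zeros, and examine the intervals between them and the density of zeros:

```python
import numpy as np

rng = np.random.default_rng(9)

# A stand-in "interface": periodic Gaussian noise smoothed in Fourier space so
# that the height profile is spatially correlated.
n = 2 ** 16
noise = rng.normal(size=n)
k = np.fft.rfftfreq(n)
spectrum = np.fft.rfft(noise)
spectrum[1:] /= k[1:] ** 0.75          # power-law filter -> correlated profile
profile = np.fft.irfft(spectrum)
profile -= profile.mean()

# Locate sign changes and measure the intervals between successive zeros.
signs = np.sign(profile)
crossings = np.flatnonzero(signs[:-1] * signs[1:] < 0)
intervals = np.diff(crossings)

density_of_zeros = crossings.size / n
print(f"density of zeros: {density_of_zeros:.4f}")
print(f"mean interval: {intervals.mean():.1f} sites, "
      f"largest interval: {intervals.max()} sites")
```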
Hou, Deyi; O'Connor, David; Nathanail, Paul; Tian, Li; Ma, Yan
2017-12-01
Heavy metal soil contamination is associated with potential toxicity to humans or ecotoxicity. Scholars have increasingly used a combination of geographical information science (GIS) with geostatistical and multivariate statistical analysis techniques to examine the spatial distribution of heavy metals in soils at a regional scale. A review of such studies showed that most soil sampling programs were based on grid patterns and composite sampling methodologies. Many programs intended to characterize various soil types and land use types. The most often used sampling depth intervals were 0-0.10 m, or 0-0.20 m, below surface; and the sampling densities used ranged from 0.0004 to 6.1 samples per km², with a median of 0.4 samples per km². The most widely used spatial interpolators were inverse distance weighted interpolation and ordinary kriging; and the most often used multivariate statistical analysis techniques were principal component analysis and cluster analysis. The review also identified several determining and correlating factors in heavy metal distribution in soils, including soil type, soil pH, soil organic matter, land use type, Fe, Al, and heavy metal concentrations. The major natural and anthropogenic sources of heavy metals were found to derive from lithogenic origin, roadway and transportation, atmospheric deposition, wastewater and runoff from industrial and mining facilities, fertilizer application, livestock manure, and sewage sludge. This review argues that the full potential of integrated GIS and multivariate statistical analysis for assessing heavy metal distribution in soils on a regional scale has not yet been fully realized. It is proposed that future research be conducted to map multivariate results in GIS to pinpoint specific anthropogenic sources, to analyze temporal trends in addition to spatial patterns, to optimize modeling parameters, and to expand the use of different multivariate analysis tools beyond principal component analysis (PCA) and cluster analysis (CA). Copyright © 2017 Elsevier Ltd. All rights reserved.
Distribution functions of probabilistic automata
NASA Technical Reports Server (NTRS)
Vatan, F.
2001-01-01
Each probabilistic automaton M over an alphabet A defines a probability measure Prob_M on the set of all finite and infinite words over A. We can identify a k-letter alphabet A with the set {0, 1,..., k-1}, and, hence, we can consider every finite or infinite word w over A as a radix k expansion of a real number X(w) in the interval [0, 1]. This makes X(w) a random variable, and the distribution function of M is defined as usual: F(x) := Prob_M { w: X(w) < x }. Utilizing the fixed-point semantics (denotational semantics), extended to probabilistic computations, we investigate the distribution functions of probabilistic automata in detail. Automata with continuous distribution functions are characterized. By a new and much easier method, it is shown that the distribution function F(x) is an analytic function if it is a polynomial. Finally, answering a question posed by D. Knuth and A. Yao, we show that a polynomial distribution function F(x) on [0, 1] can be generated by a probabilistic automaton iff all the roots of F'(x) = 0 in this interval, if any, are rational numbers. For this, we define two dynamical systems on the set of polynomial distributions and study attracting fixed points of random composition of these two systems.
Pozzi, Federico; Di Stasi, Stephanie; Zeni, Joseph A; Barrios, Joaquin A
2017-03-01
The purpose of this study was to characterize the magnitude and distribution of the total support moment during single-limb drop landings in individuals after anterior cruciate ligament reconstruction compared to a control group. Twenty participants after reconstruction and twenty control participants matched on sex, limb dominance and activity level were recruited. Motion analysis was performed during a single-limb drop landing task. Total support moment was determined by summing the internal extensor moments at the ankle, knee, and hip. Each relative joint contribution to the total support moment was calculated by dividing each individual contribution by the total support moment. Data were captured during a landing interval that started at initial contact and ended at the lowest vertical position of the pelvis. Data were then time-normalized and indexed at 25, 50, 75, and 100% of the landing interval. No between-group differences for total support moment magnitude were observed. At both 75% and 100% of the landing, the relative contribution of the knee joint was lower in those with a history of surgery (p<0.001). At the same instances, the relative contribution to the total support moment by the hip joint was greater in those with a history of surgery (p=0.004). In active participants after anterior cruciate ligament reconstruction, relative contributions to anti-gravity support of the center of mass shifted from the knee to the hip joint during single-limb landing, which became evident towards the end of the landing interval. Copyright © 2017 Elsevier Ltd. All rights reserved.
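A minimal sketch of the total support moment bookkeeping described above; the joint moments at the four normalized instants are invented values:

```python
import numpy as np

# Hypothetical internal extensor moments (N*m/kg) at four normalized instants
# of the landing interval (25, 50, 75, 100%), for one participant.
ankle = np.array([0.9, 1.4, 1.6, 1.3])
knee = np.array([1.1, 1.8, 1.5, 0.9])
hip = np.array([0.5, 0.9, 1.2, 1.1])

# Total support moment and each joint's relative contribution to it.
tsm = ankle + knee + hip
contrib = {"ankle": ankle / tsm, "knee": knee / tsm, "hip": hip / tsm}

for instant, pct in enumerate((25, 50, 75, 100)):
    parts = ", ".join(f"{joint}: {contrib[joint][instant]:.0%}" for joint in contrib)
    print(f"{pct:3d}% of landing -> TSM = {tsm[instant]:.2f} N*m/kg ({parts})")
```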
Reference interval for thyrotropin in a ultrasonography screened Korean population
Kim, Mijin; Kim, Soo Han; Lee, Yunkyoung; Park, Su-yeon; Kim, Hyung-don; Kwon, Hyemi; Choi, Yun Mi; Jang, Eun Kyung; Jeon, Min Ji; Kim, Won Gu; Shong, Young Kee; Kim, Won Bae
2015-01-01
Background/Aims The diagnostic accuracy of thyroid dysfunctions is primarily affected by the validity of the reference interval for serum thyroid-stimulating hormone (TSH). Thus, the present study aimed to establish a reference interval for TSH using a normal Korean population. Methods This study included 19,465 subjects who were recruited after undergoing routine health check-ups. Subjects with overt thyroid disease, a prior history of thyroid disease, or a family history of thyroid cancer were excluded from the present analyses. The reference range for serum TSH was evaluated in a normal Korean reference population which was defined according to criteria based on the guidelines of the National Academy of Clinical Biochemistry, ultrasound (US) findings, and smoking status. Sex and age were also taken into consideration when evaluating the distribution of serum TSH levels in different groups. Results In the presence of positive anti-thyroid peroxidase antibodies or abnormal US findings, the central 95 percentile interval of the serum TSH levels was widened. Additionally, the distribution of serum TSH levels shifted toward lower values in the current smokers group. The reference interval for TSH obtained using a normal Korean reference population was 0.73 to 7.06 mIU/L. The serum TSH levels were higher in females than in males in all groups, and there were no age-dependent shifts. Conclusions The present findings demonstrate that the serum TSH reference interval in a normal Korean reference population was higher than that in other countries. This result suggests that the upper and lower limits of the TSH reference interval, which was previously defined by studies from Western countries, should be raised for Korean populations. PMID:25995664
Lambert, Matthew A.; Weir-McCall, Jonathan R.; Gandy, Stephen J.; Levin, Daniel; Cavin, Ian; Littleford, Roberta; MacFarlane, Jennifer A.; Matthew, Shona Z.; Nicholas, Richard S.; Struthers, Allan D.; Sullivan, Frank; Henderson, Shelley A.; White, Richard D.; Belch, Jill J. F.
2018-01-01
Purpose To quantify the burden and distribution of asymptomatic atherosclerosis in a population with a low to intermediate risk of cardiovascular disease. Materials and Methods Between June 2008 and February 2013, 1528 participants with 10-year risk of cardiovascular disease less than 20% were prospectively enrolled. They underwent whole-body magnetic resonance (MR) angiography at 3.0 T by using a two-injection, four-station acquisition technique. Thirty-one arterial segments were scored according to maximum stenosis. Scores were summed and normalized for the number of assessable arterial segments to provide a standardized atheroma score (SAS). Multiple linear regression was performed to assess effects of risk factors on atheroma burden. Results A total of 1513 participants (577 [37.9%] men; median age, 53.5 years; range, 40–83 years) completed the study protocol. Among 46 903 potentially analyzable segments, 46 601 (99.4%) were interpretable. Among these, 2468 segments (5%) demonstrated stenoses, of which 1649 (3.5%) showed stenosis less than 50% and 484 (1.0%) showed stenosis greater than or equal to 50%. Vascular stenoses were distributed throughout the body with no localized distribution. Seven hundred forty-seven (49.4%) participants had at least one stenotic vessel, and 408 (27.0%) participants had multiple stenotic vessels. At multivariable linear regression, SAS correlated with age (B = 3.4; 95% confidence interval: 2.61, 4.20), heart rate (B = 1.23; 95% confidence interval: 0.51, 1.95), systolic blood pressure (B = 0.02; 95% confidence interval: 0.01, 0.03), smoking status (B = 0.79; 95% confidence interval: 0.44, 1.15), and socioeconomic status (B = −0.06; 95% confidence interval: −0.10, −0.02) (P < .01 for all). Conclusion Whole-body MR angiography identifies early vascular disease at a population level. Although disease prevalence is low on a per-vessel level, vascular disease is common on a per-participant level, even in this low- to intermediate-risk cohort. © RSNA, 2018 Online supplemental material is available for this article. PMID:29714681
Waltemeyer, Scott D.
2006-01-01
Estimates of the magnitude and frequency of peak discharges are necessary for reliable flood-hazard mapping in the Navajo Nation in Arizona, Utah, Colorado, and New Mexico. The Bureau of Indian Affairs, U.S. Army Corps of Engineers, and Navajo Nation requested that the U.S. Geological Survey update estimates of peak discharge magnitude for gaging stations in the region and update regional equations for estimation of peak discharge and frequency at ungaged sites. Equations were developed for estimating the magnitude of peak discharges for recurrence intervals of 2, 5, 10, 25, 50, 100, and 500 years at ungaged sites using data collected through 1999 at 146 gaging stations, which provides an additional 13 years of peak-discharge data beyond a 1997 investigation that used gaging-station data through 1986. The equations for estimation of peak discharges at ungaged sites were developed for flood regions 8, 11, high elevation, and 6, which are delineated on the basis of the hydrologic codes from the 1997 investigation. Peak discharges for selected recurrence intervals were determined at gaging stations by fitting observed data to a log-Pearson Type III distribution with adjustments for a low-discharge threshold and a zero skew coefficient. A low-discharge threshold was applied to the frequency analysis of 82 of the 146 gaging stations. This application provides an improved fit of the log-Pearson Type III frequency distribution. Use of the low-discharge threshold generally eliminated peak discharges having a recurrence interval of less than 1.4 years in the probability-density function. Within each region, logarithms of the peak discharges for selected recurrence intervals were related to logarithms of basin and climatic characteristics using stepwise ordinary least-squares regression techniques for exploratory data analysis. Generalized least-squares regression techniques, an improved regression procedure that accounts for time and spatial sampling errors, then were applied to the same data used in the ordinary least-squares regression analyses. The average standard error of prediction for the 100-year peak discharge in region 8 was 53 percent. The average standard error of prediction, which includes average sampling error and average standard error of regression, ranged from 45 to 83 percent for the 100-year flood. The estimated standard error of prediction for a hybrid method for region 11 was large in the 1997 investigation. No distinction of floods produced from a high-elevation region was presented in the 1997 investigation. Overall, the equations based on generalized least-squares regression techniques are considered to be more reliable than those in the 1997 report because of the increased length of record and improved GIS method. Flood-frequency relations can also be transferred to ungaged sites on the same stream: peak discharges at an ungaged site can be estimated by direct application of the regional regression equation, or, at an ungaged site on a stream that has a gaging station upstream or downstream, by using the drainage-area ratio and the drainage-area exponent from the regional regression equation of the respective region.
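As an illustration of the distribution fitting described above, the following minimal Python sketch fits a Pearson Type III distribution to log-transformed annual peaks and reads off quantiles for selected recurrence intervals. The peak-discharge values are hypothetical, and the sketch uses a simple maximum-likelihood fit rather than the moment-based fit with low-discharge threshold and zero-skew adjustments used in the report.

import numpy as np
from scipy import stats

# Hypothetical annual peak discharges (cubic feet per second) at one gaging station.
peaks = np.array([820., 1450., 610., 2300., 980., 1720., 540., 3100., 1260., 890.,
                  1980., 760., 1430., 2650., 1120., 670., 1540., 2040., 930., 1810.])

log_q = np.log10(peaks)

# Fit a Pearson Type III distribution to the log-transformed peaks
# (i.e., a log-Pearson Type III fit to the peaks themselves).
skew, loc, scale = stats.pearson3.fit(log_q)

# Peak discharge for recurrence interval T corresponds to exceedance probability 1/T.
for T in (2, 5, 10, 25, 50, 100, 500):
    q_T = 10 ** stats.pearson3.ppf(1.0 - 1.0 / T, skew, loc=loc, scale=scale)
    print(f"{T:4d}-year peak discharge: {q_T:10.0f} ft^3/s")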
Fluctuation behaviors of financial return volatility duration
NASA Astrophysics Data System (ADS)
Niu, Hongli; Wang, Jun; Lu, Yunfan
2016-04-01
Understanding the return volatility of financial markets is crucial because it helps to quantify investment risk, optimize portfolios, and provide a key input to option pricing models. The characteristics of isolated high-volatility events above a certain threshold in price fluctuations, and the distributions of return intervals between these events, have aroused great interest in financial research. In the present work, we introduce a new concept of daily return volatility duration, which is defined as the shortest passage time until the future volatility intensity rises above or falls below the current volatility intensity (without predefining a threshold). The statistical properties of the daily return volatility durations for seven representative stock indices from the world financial markets are investigated. Empirical results are obtained for the probability distributions, memory effects, and multifractal properties of these volatility duration series. These results suggest that the proposed volatility duration analysis of stock series is a meaningful and useful approach.
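A minimal sketch of the volatility duration idea, assuming daily absolute returns as the volatility proxy and simulated data (the paper's exact volatility measure and datasets are not reproduced here): for each day it finds the shortest passage time until volatility rises to or above, or falls to or below, the current level.

import numpy as np

def volatility_durations(vol):
    # For each day t, the smallest k >= 1 with vol[t+k] >= vol[t] (upward passage)
    # and likewise with vol[t+k] <= vol[t] (downward passage); NaN if never reached.
    n = len(vol)
    up = np.full(n, np.nan)
    down = np.full(n, np.nan)
    for t in range(n - 1):
        future = vol[t + 1:]
        hit_up = np.nonzero(future >= vol[t])[0]
        hit_down = np.nonzero(future <= vol[t])[0]
        if hit_up.size:
            up[t] = hit_up[0] + 1
        if hit_down.size:
            down[t] = hit_down[0] + 1
    return up, down

# Toy example: volatility proxied by absolute returns of a simulated series.
rng = np.random.default_rng(0)
returns = rng.standard_t(df=4, size=2000) * 0.01
up, down = volatility_durations(np.abs(returns))
print("mean upward duration:", np.nanmean(up))
print("mean downward duration:", np.nanmean(down))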
NASA Astrophysics Data System (ADS)
Uddin, Iftikhar; Khan, Muhammad Altaf; Ullah, Saif; Islam, Saeed; Israr, Muhammad; Hussain, Fawad
2018-03-01
This work is dedicated to the buoyancy effect on MHD stagnation point flow over a stretching sheet with convective boundary conditions. Thermophoresis and Brownian motion effects are included. An incompressible, electrically conducting fluid in the presence of a varying magnetic field is considered. Boundary layer analysis is used to develop the mathematical formulation, and a zero mass flux condition is imposed at the boundary. A system of non-linear ordinary differential equations is constructed by means of suitable transformations. The interval of convergence is established via numerical data and plots. The effects of the relevant variables on the velocity, temperature and concentration distributions are sketched and discussed, and the influence of the correlated parameters on Cf and Nu is examined by means of tables. It is found that the buoyancy ratio and magnetic parameters respectively increase and reduce the velocity field. Furthermore, opposite trends are noticed in the concentration distribution for higher values of the thermophoresis and Brownian motion parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peniche, C.; Zaldivar, D.; Bulay, A.
1993-12-20
The thermal behavior of random copolymers of furfuryl methacrylate (F) and N-vinyl-pyrrolidone (P) was studied by means of dynamic thermogravimetric analysis (TGA) in the range 100–600 °C. The dynamic experiments show that these copolymers exhibit two degradation steps, in the intervals 260–320 °C and 350–520 °C, respectively. The normalized weight loss in the low-temperature interval increases as the mole fraction of F in the copolymer, m_F, increases, whereas the inverse trend is observed in the high-temperature interval. The apparent activation energy E_a of the first degradation step for copolymers prepared with different compositions was obtained according to the treatment suggested by Broido. A plot of the values of E_a versus the F–F diad molar fraction in the copolymer chains, m_FF, gave a straight line, which indicates that there is a direct relationship between the thermogravimetric behavior of these systems and their corresponding microstructure, that is, the distribution of comonomeric units along the copolymer chains. The first decomposition step was also studied by isothermal TGA, and a good linearity of the weight loss percentage ΔW versus m_F, at least during the first 30 min of treatment, was obtained.
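For readers unfamiliar with the Broido treatment mentioned above, the sketch below shows the basic calculation: plotting ln(ln(1/y)) against 1/T for the undecomposed fraction y and extracting E_a from the slope. The temperatures and weights are hypothetical, not the paper's data.

import numpy as np

R = 8.314  # J/(mol*K)

# Hypothetical TGA data for the first degradation step (temperature in K, weight in mg).
T = np.array([533., 543., 553., 563., 573., 583.])   # roughly 260-310 C
W = np.array([9.60, 9.20, 8.55, 7.70, 6.75, 5.90])
W0, Winf = 10.0, 4.0                                  # weights before/after the step

y = (W - Winf) / (W0 - Winf)          # fraction not yet decomposed
x = 1.0 / T
z = np.log(np.log(1.0 / y))

slope, intercept = np.polyfit(x, z, 1)  # Broido plot: ln(ln(1/y)) vs 1/T
Ea = -slope * R
print(f"apparent activation energy: {Ea / 1000:.1f} kJ/mol")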
Gahlaut, Vijay; Jaiswal, Vandana; Tyagi, Bhudeva S.; Singh, Gyanendra; Sareen, Sindhu; Balyan, Harindra S.
2017-01-01
In bread wheat, QTL interval mapping was conducted for nine important drought responsive agronomic traits. For this purpose, a doubled haploid (DH) mapping population derived from Kukri/Excalibur was grown over three years at four separate locations in India, both under irrigated and rain-fed environments. Single locus analysis using composite interval mapping (CIM) allowed detection of 98 QTL, which included 66 QTL for nine individual agronomic traits and 32 QTL, which affected drought sensitivity index (DSI) for the same nine traits. Two-locus analysis allowed detection of 19 main effect QTL (M-QTL) for four traits (days to anthesis, days to maturity, grain filling duration and thousand grain weight) and 19 pairs of epistatic QTL (E-QTL) for two traits (days to anthesis and thousand grain weight). Eight QTL were common in single locus analysis and two locus analysis. These QTL (identified both in single- and two-locus analysis) were distributed on 20 different chromosomes (except 4D). Important genomic regions on chromosomes 5A and 7A were also identified (5A carried QTL for seven traits and 7A carried QTL for six traits). Marker-assisted recurrent selection (MARS) involving pyramiding of important QTL reported in the present study, together with important QTL reported earlier, may be used for improvement of drought tolerance in wheat. In future, more closely linked markers for the QTL reported here may be developed through fine mapping, and the candidate genes may be identified and used for developing a better understanding of the genetic basis of drought tolerance in wheat. PMID:28793327
NASA Technical Reports Server (NTRS)
Quemarais, E.; Lallement, R.; Bertaux, J. L.; Sandel, B. R.
1995-01-01
The all-sky interplanetary Lyman-alpha pattern is sensitive to the latitude distribution of the solar wind because of destruction of neutral H by charge-exchange with solar wind protons. Lyman-alpha intensities recorded by Prognoz 5 and 6 in 1976 in a few parts of the sky demonstrated a decrease of the solar wind mass flux by about 30% from equator to pole, when a sinusoidal variation of this mass flux (harmonic distribution) was assumed. A new analysis with a discrete variation with latitude has shown a decrease from 0 to 30 deg and then a plateau of constant mass flux up to the pole. This distribution bears a striking resemblance to Ulysses in-situ measurements, showing a clear similarity at a 19-year interval. The Ulysses measurements were then used as a model input to calculate an all-sky Lyman-alpha pattern, either with a discrete model or with a harmonic solar wind variation with the same Ulysses equator-to-pole variation. There are conspicuous differences between the two Lyman-alpha patterns, in particular in the downwind region, which are discussed in the context of future all-sky measurements with the SWAN experiment on SOHO.
Virtuality and transverse momentum dependence of the pion distribution amplitude
Radyushkin, Anatoly V.
2016-03-08
We describe the basics of a new approach to transverse momentum dependence in hard exclusive processes. We develop it in application to the transition process γ*γ → π⁰ at the handbag level. Our starting point is the coordinate representation for matrix elements of operators (in the simplest case, the bilocal O(0,z)) describing a hadron with momentum p. Treated as functions of (pz) and z², they are parametrized through virtuality distribution amplitudes (VDA) Φ(x,σ), with x being Fourier-conjugate to (pz) and σ Laplace-conjugate to z². For intervals with z₊ = 0, we introduce the transverse momentum distribution amplitude (TMDA) ψ(x, k), and write it in terms of the VDA Φ(x,σ). The results of covariant calculations, written in terms of Φ(x,σ), are converted into expressions involving ψ(x, k). Starting with scalar toy models, we extend the analysis to the case of spin-1/2 quarks and QCD. We propose simple models for soft VDAs/TMDAs, and use them for comparison of handbag results with experimental (BaBar and BELLE) data on the pion transition form factor. Furthermore, we discuss how one can generate high-k tails from primordial soft distributions.
Halo-independent determination of the unmodulated WIMP signal in DAMA: the isotropic case
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gondolo, Paolo; Scopel, Stefano, E-mail: paolo.gondolo@utah.edu, E-mail: scopel@sogang.ac.kr
2017-09-01
We present a halo-independent determination of the unmodulated signal corresponding to the DAMA modulation if interpreted as due to dark matter weakly interacting massive particles (WIMPs). First we show how a modulated signal gives information on the WIMP velocity distribution function in the Galactic rest frame from which the unmodulated signal descends. Then we describe a mathematically-sound profile likelihood analysis in which the likelihood is profiled over a continuum of nuisance parameters (namely, the WIMP velocity distribution). As a first application of the method, which is very general and valid for any class of velocity distributions, we restrict the analysis to velocity distributions that are isotropic in the Galactic frame. In this way we obtain halo-independent maximum-likelihood estimates and confidence intervals for the DAMA unmodulated signal. We find that the estimated unmodulated signal is in line with expectations for a WIMP-induced modulation and is compatible with the DAMA background+signal rate. Specifically, for the isotropic case we find that the modulated amplitude ranges between a few percent and about 25% of the unmodulated amplitude, depending on the WIMP mass.
Deriving injury risk curves using survival analysis from biomechanical experiments.
Yoganandan, Narayan; Banerjee, Anjishnu; Hsu, Fang-Chi; Bass, Cameron R; Voo, Liming; Pintar, Frank A; Gayzik, F Scott
2016-10-03
Injury risk curves from biomechanical experimental data analysis are used in automotive studies to improve crashworthiness and advance occupant safety. Metrics such as acceleration and deflection coupled with outcomes such as fractures and anatomical disruptions from impact tests are used in simple binary regression models. As an improvement, the International Standards Organization suggested a different approach based on survival analysis. Although probability curves for side-impact-induced thorax and abdominal injuries and frontal impact-induced foot-ankle-leg injuries have been developed using this approach, deficiencies are apparent. The objective of this study is to present an improved, robust, and generalizable methodology in an attempt to resolve these issues. It includes: (a) statistical identification of the most appropriate independent variable (metric) from a pool of candidate metrics, measured and/or derived during experimentation and analysis processes, based on the highest area under the receiver operating characteristic curve, (b) quantitative determination of the optimal probability distribution based on the lowest Akaike information criterion, (c) supplementing, with objective measures, the qualitative/visual inspection used to compare the selected distribution with a non-parametric distribution, (d) identification of overly influential observations using different methods, and (e) estimation of confidence intervals using techniques more appropriate to the underlying survival statistical model. These clear and quantified details can be easily implemented with commercial/open source packages. They can be used in retrospective analysis and prospective design of experiments, and in applications to different loading scenarios such as underbody blast events. The feasibility of the methodology is demonstrated using post mortem human subject experiments and 24 metrics associated with thoracic/abdominal injuries in side-impacts. Published by Elsevier Ltd.
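A simplified sketch of the kind of workflow described in (a)-(e), using hypothetical test data and assuming the lifelines and scikit-learn packages are available: a candidate metric is screened by ROC area, two parametric survival models are compared by AIC, and the risk curve is read off as 1 - S(x). It treats injuries as exactly observed events, whereas a full analysis would also handle censored observations and confidence intervals as the paper describes.

import numpy as np
from sklearn.metrics import roc_auc_score
from lifelines import WeibullFitter, LogNormalFitter

rng = np.random.default_rng(1)

# Hypothetical side-impact tests: one candidate metric (e.g., chest deflection, mm)
# and a binary injury outcome. A real analysis would screen many candidate metrics.
deflection = rng.uniform(10, 60, size=40)
injury = (rng.uniform(size=40) < 1 / (1 + np.exp(-(deflection - 35) / 5))).astype(int)

# (a) screen the metric by area under the ROC curve
print("AUC for deflection:", roc_auc_score(injury, deflection))

# (b) choose the distribution with the lowest AIC (here Weibull vs log-normal),
# treating non-injurious tests as right-censored at the applied level.
fits = {}
for name, Fitter in [("weibull", WeibullFitter), ("lognormal", LogNormalFitter)]:
    f = Fitter().fit(deflection, event_observed=injury)
    fits[name] = f
    aic = 2 * 2 - 2 * f.log_likelihood_   # both models have 2 parameters
    print(name, "AIC:", round(aic, 1))

# Injury risk curve from the Weibull fit: risk(x) = 1 - S(x)
grid = np.linspace(10, 60, 6)
risk = 1 - fits["weibull"].survival_function_at_times(grid).values
print(np.round(risk, 2))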
Human dynamic model co-driven by interest and social identity in the MicroBlog community
NASA Astrophysics Data System (ADS)
Yan, Qiang; Yi, Lanli; Wu, Lianren
2012-02-01
This paper analyzes message-posting behavior in the MicroBlog community and presents a human dynamic model co-driven by interest and social identity. According to the empirical analysis and simulation results, the messaging interval distribution follows a power law, which is mainly influenced by the degree of users' interest. Meanwhile, social identity plays a significant role in the change of interest and may slow down its decline. A positive correlation between social identity and the number of comments on or forwards of messages is illustrated. In addition, analysis of the data over each 24-hour period reveals clear differences between micro-blogging and website visits, email, instant messaging, and mobile phone use, reflecting how people use small amounts of time via mobile Internet technology.
Extraction and LOD control of colored interval volumes
NASA Astrophysics Data System (ADS)
Miyamura, Hiroko N.; Takeshima, Yuriko; Fujishiro, Issei; Saito, Takafumi
2005-03-01
An interval volume serves as a generalized isosurface and represents a three-dimensional subvolume for which the associated scalar field values lie within a user-specified closed interval. In general, it is not an easy task for novices to specify the scalar field interval corresponding to their ROIs. In order to extract interval volumes from which desirable geometric features can be mined effectively, we propose a suggestive technique which extracts interval volumes automatically based on the global examination of the field contrast structure. Also proposed here is a simplification scheme for decimating the resultant triangle patches to realize efficient transmission and rendering of large-scale interval volumes. Color distributions as well as geometric features are taken into account to select the best edges to collapse. In addition, when a user wants to selectively display and analyze the original dataset, the simplified dataset is restructured to the original quality. Several simulated and acquired datasets are used to demonstrate the effectiveness of the present methods.
Multilayer Perceptron for Robust Nonlinear Interval Regression Analysis Using Genetic Algorithms
2014-01-01
On the basis of fuzzy regression, computational models in intelligence such as neural networks have the capability to be applied to nonlinear interval regression analysis for dealing with uncertain and imprecise data. When training data are not contaminated by outliers, computational models perform well by including almost all given training data in the data interval. Nevertheless, since training data are often corrupted by outliers, robust learning algorithms employed to resist outliers for interval regression analysis have been an interesting area of research. Several approaches involving computational intelligence are effective for resisting outliers, but the required parameters for these approaches are related to whether the collected data contain outliers or not. Since it seems difficult to prespecify the degree of contamination beforehand, this paper uses multilayer perceptron to construct the robust nonlinear interval regression model using the genetic algorithm. Outliers beyond or beneath the data interval will impose slight effect on the determination of data interval. Simulation results demonstrate that the proposed method performs well for contaminated datasets. PMID:25110755
Multilayer perceptron for robust nonlinear interval regression analysis using genetic algorithms.
Hu, Yi-Chung
2014-01-01
On the basis of fuzzy regression, computational models in intelligence such as neural networks have the capability to be applied to nonlinear interval regression analysis for dealing with uncertain and imprecise data. When training data are not contaminated by outliers, computational models perform well by including almost all given training data in the data interval. Nevertheless, since training data are often corrupted by outliers, robust learning algorithms employed to resist outliers for interval regression analysis have been an interesting area of research. Several approaches involving computational intelligence are effective for resisting outliers, but the required parameters for these approaches are related to whether the collected data contain outliers or not. Since it seems difficult to prespecify the degree of contamination beforehand, this paper uses multilayer perceptron to construct the robust nonlinear interval regression model using the genetic algorithm. Outliers beyond or beneath the data interval will impose slight effect on the determination of data interval. Simulation results demonstrate that the proposed method performs well for contaminated datasets.
Sartain-Iverson, Autumn R.; Hart, Kristen M.; Fujisaki, Ikuko; Cherkiss, Michael S.; Pollock, Clayton; Lundgren, Ian; Hillis-Starr, Zandy
2016-01-01
Hawksbill sea turtles (Eretmochelys imbricata) are circumtropically distributed and listed as Critically Endangered by the IUCN (Meylan & Donnelly 1999; NMFS & USFWS 1993). To aid in population recovery and protection, the Hawksbill Recovery Plan identified the need to determine demographic information for hawksbills, such as distribution, abundance, seasonal movements, foraging areas (sections 121 and 2211), growth rates, and survivorship (section 2213, NMFS & USFWS 1993). Mark-recapture analyses are helpful in estimating demographic parameters and have been used for hawksbills throughout the Caribbean (e.g., Richardson et al. 1999; Velez-Zuazo et al. 2008); integral to these studies are recaptures at the nesting site as well as remigration interval estimates (Hays 2000). Estimates of remigration intervals (the duration between nesting seasons) are critical to marine turtle population estimates and measures of nesting success (Hays 2000; Richardson et al. 1999). Although hawksbills in the Caribbean generally show natal philopatry and nesting-site fidelity (Bass et al. 1996; Bowen et al. 2007), exceptions to this have been observed for hawksbills and other marine turtles (Bowen & Karl 2007; Diamond 1976; Esteban et al. 2015; Hart et al. 2013). This flexibility in choosing a nesting beach could therefore affect the apparent remigration interval and subsequently, region-wide population counts.
Normal probability plots with confidence.
Chantarangsi, Wanpen; Liu, Wei; Bretz, Frank; Kiatsupaibul, Seksan; Hayter, Anthony J; Wan, Fang
2015-01-01
Normal probability plots are widely used as a statistical tool for assessing whether an observed simple random sample is drawn from a normally distributed population. The users, however, have to judge subjectively, if no objective rule is provided, whether the plotted points fall close to a straight line. In this paper, we focus on how a normal probability plot can be augmented by intervals for all the points so that, if the population distribution is normal, then all the points should fall into the corresponding intervals simultaneously with probability 1-α. These simultaneous 1-α probability intervals therefore provide an objective means of judging whether the plotted points fall close to the straight line: the plotted points are judged to fall close to the straight line if and only if all the points fall into the corresponding intervals. The powers of several normal-probability-plot-based (graphical) tests and the most popular nongraphical Anderson-Darling and Shapiro-Wilk tests are compared by simulation. Based on this comparison, recommendations are given in Section 3 on which graphical tests should be used in what circumstances. An example is provided to illustrate the methods. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
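A minimal sketch of augmenting a normal probability plot with order-statistic intervals. For simplicity it uses pointwise Beta-based intervals at level 1-α, whereas the paper's intervals hold simultaneously (which requires widening the pointwise bands), and the sample mean and standard deviation stand in for the unknown population parameters.

import numpy as np
from scipy import stats

def probability_plot_with_intervals(x, alpha=0.05):
    # Pointwise (not simultaneous) order-statistic intervals for a normal
    # probability plot; a simultaneous version would widen these bands.
    x = np.sort(x)
    n = len(x)
    i = np.arange(1, n + 1)
    p = (i - 0.5) / n                                   # plotting positions
    theo = stats.norm.ppf(p, loc=x.mean(), scale=x.std(ddof=1))
    # the i-th order statistic of n uniforms is Beta(i, n - i + 1)
    lo_u = stats.beta.ppf(alpha / 2, i, n - i + 1)
    hi_u = stats.beta.ppf(1 - alpha / 2, i, n - i + 1)
    lo = stats.norm.ppf(lo_u, loc=x.mean(), scale=x.std(ddof=1))
    hi = stats.norm.ppf(hi_u, loc=x.mean(), scale=x.std(ddof=1))
    inside = (x >= lo) & (x <= hi)
    return theo, lo, hi, inside

rng = np.random.default_rng(2)
sample = rng.normal(10, 2, size=30)
theo, lo, hi, inside = probability_plot_with_intervals(sample)
print("all points inside their intervals:", bool(inside.all()))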
Modified stochastic fragmentation of an interval as an ageing process
NASA Astrophysics Data System (ADS)
Fortin, Jean-Yves
2018-02-01
We study a stochastic model based on modified fragmentation of a finite interval. The mechanism consists of cutting the interval at a random location and substituting a unique fragment on the right of the cut to regenerate and preserve the interval length. This leads to a set of segments of random sizes, with the accumulation of small fragments near the origin. This model is an example of record dynamics, with the presence of ‘quakes’ and slow dynamics. The fragment size distribution is a universal inverse power law with logarithmic corrections. The exact distribution for the fragment number as a function of time is simply related to the unsigned Stirling numbers of the first kind. Two-time correlation functions are defined, and computed exactly. They satisfy scaling relations, and exhibit aging phenomena. In particular, the probability that the same number of fragments is found at two different times t > s is asymptotically equal to [4π log(s)]^(-1/2) when s ≫ 1 and the ratio t/s is fixed, in agreement with the numerical simulations. The same process with a reset impedes the aging phenomenon beyond a typical time scale defined by the reset parameter.
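A small simulation under one reading of the mechanism described above (assumed, not taken from the paper): each new cut removes every boundary to its right and becomes a boundary itself, so the material right of the cut is replaced by a single fragment. Under this reading the surviving boundaries are the suffix minima of the cut sequence, so the expected fragment number grows like the harmonic number, consistent with the Stirling-number connection.

import numpy as np

def fragment_counts(n_steps, n_runs, rng):
    # Number of fragments after n_steps cuts, for each of n_runs independent runs.
    counts = np.empty(n_runs, dtype=int)
    for r in range(n_runs):
        boundaries = []                                    # interior boundaries in (0, 1)
        for _ in range(n_steps):
            x = rng.random()
            boundaries = [b for b in boundaries if b < x]  # merge everything right of x
            boundaries.append(x)
        counts[r] = len(boundaries) + 1                    # fragments = boundaries + 1
    return counts

rng = np.random.default_rng(3)
n_steps = 1000
counts = fragment_counts(n_steps, n_runs=2000, rng=rng)
expected = 1 + np.sum(1.0 / np.arange(1, n_steps + 1))     # 1 + harmonic number H_n
print("mean fragments:", counts.mean(), " expected ~", round(expected, 2))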
NASA Astrophysics Data System (ADS)
Warren, Aaron R.
2009-11-01
Time-series designs are an alternative to pretest-posttest methods that are able to identify and measure the impacts of multiple educational interventions, even for small student populations. Here, we use an instrument employing standard multiple-choice conceptual questions to collect data from students at regular intervals. The questions are modified by asking students to distribute 100 Confidence Points among the options in order to indicate the perceived likelihood of each answer option being the correct one. Tracking the class-averaged ratings for each option produces a set of time-series. ARIMA (autoregressive integrated moving average) analysis is then used to test for, and measure, changes in each series. In particular, it is possible to discern which educational interventions produce significant changes in class performance. Cluster analysis can also identify groups of students whose ratings evolve in similar ways. A brief overview of our methods and an example are presented.
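As a rough illustration of measuring an intervention effect with ARIMA-type models (the paper's exact model orders and software are not specified here), the sketch below fits a statsmodels SARIMAX model with the intervention encoded as an exogenous step regressor on simulated class-averaged ratings.

import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(4)

# Hypothetical class-averaged confidence ratings for one answer option,
# collected at 30 regular intervals, with an intervention after observation 15.
n = 30
series = 40 + rng.normal(0, 2, n).cumsum() * 0.3
series[15:] += 8                                   # built-in intervention effect of +8 points
intervention = (np.arange(n) >= 15).astype(float).reshape(-1, 1)

# ARIMA(1,0,0) with the intervention as an exogenous step regressor.
model = SARIMAX(series, exog=intervention, order=(1, 0, 0), trend="c")
result = model.fit(disp=False)
print(result.summary().tables[1])                  # the exog coefficient estimates the shift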
Probabilistic Parameter Uncertainty Analysis of Single Input Single Output Control Systems
NASA Technical Reports Server (NTRS)
Smith, Brett A.; Kenny, Sean P.; Crespo, Luis G.
2005-01-01
The current standards for handling uncertainty in control systems use interval bounds for the definition of the uncertain parameters. This approach gives no information about the likelihood of system performance, but simply gives the response bounds. When used in design, current methods such as μ-analysis can lead to overly conservative controller designs. With these methods, worst-case conditions are weighted equally with the most likely conditions. This research explores a unique approach for probabilistic analysis of control systems. Current reliability methods are examined, showing the strengths of each in handling probability. A hybrid method is developed using these reliability tools for efficiently propagating probabilistic uncertainty through classical control analysis problems. The method developed is applied to classical response analysis as well as analysis methods that explore the effects of the uncertain parameters on stability and performance metrics. The benefits of using this hybrid approach for calculating the means, variances, and cumulative distribution functions of responses are shown. Results of the probabilistic analysis of a missile pitch control system and a non-collocated mass-spring system show the added information provided by this hybrid analysis.
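To make the contrast with interval-bound analysis concrete, here is a plain Monte Carlo sketch (not the paper's hybrid reliability method) that propagates probabilistic parameter uncertainty through a hypothetical mass-spring-damper system and summarizes a performance metric probabilistically rather than by worst-case bounds.

import numpy as np

rng = np.random.default_rng(9)

# Hypothetical mass-spring-damper with uncertain stiffness and damping.
n = 20000
m = 1.0
k = rng.normal(100.0, 10.0, n)          # spring stiffness, N/m
c = rng.normal(2.0, 0.4, n)             # damping coefficient, N*s/m

wn = np.sqrt(k / m)                     # natural frequency, rad/s
zeta = c / (2.0 * np.sqrt(k * m))       # damping ratio
# Percent overshoot of a standard underdamped second-order step response.
overshoot = 100.0 * np.exp(-np.pi * zeta / np.sqrt(1.0 - zeta**2))

print(f"mean overshoot {overshoot.mean():.1f}%, std {overshoot.std():.1f}%")
print("P(overshoot > 75%):", np.mean(overshoot > 75.0))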
A model of interval timing by neural integration
Simen, Patrick; Balci, Fuat; deSouza, Laura; Cohen, Jonathan D.; Holmes, Philip
2011-01-01
We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes; that correlations among them can be largely cancelled by balancing excitation and inhibition; that neural populations can act as integrators; and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule’s predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior. PMID:21697374
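A toy simulation of the temporal-integration idea, under the assumption that balanced excitation and inhibition make the accumulator's noise variance proportional to its drift; with that assumption the coefficient of variation of the simulated response times stays roughly constant across target intervals, i.e., the timing is scale-invariant. Parameters are illustrative, not fitted to the paper's data.

import numpy as np

def timed_responses(target, n_trials, dt=0.002, noise=0.15, rng=None):
    # Noisy linear accumulator: drift is set so the threshold (=1) is reached,
    # on average, at the target interval; returns simulated response times.
    if rng is None:
        rng = np.random.default_rng()
    drift = 1.0 / target
    rts = np.empty(n_trials)
    for i in range(n_trials):
        x, t = 0.0, 0.0
        while x < 1.0:
            # noise variance proportional to drift (Poisson-like spiking assumption)
            x += drift * dt + noise * np.sqrt(drift * dt) * rng.standard_normal()
            t += dt
        rts[i] = t
    return rts

rng = np.random.default_rng(5)
for target in (1.0, 2.0, 4.0):
    rts = timed_responses(target, 200, rng=rng)
    print(f"target {target:.0f}s  mean {rts.mean():.2f}s  CV {rts.std()/rts.mean():.2f}")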
NASA Astrophysics Data System (ADS)
Olafsdottir, Kristin B.; Mudelsee, Manfred
2013-04-01
Estimation of Pearson's correlation coefficient between two time series, to evaluate the influence of one time-dependent variable on another, is one of the most frequently used statistical methods in climate sciences. Various methods are used to estimate a confidence interval to support the correlation point estimate. Many of them make strong mathematical assumptions regarding distributional shape and serial correlation, which are rarely met. More robust statistical methods are needed to increase the accuracy of the confidence intervals. Bootstrap confidence intervals are estimated in the Fortran 90 program PearsonT (Mudelsee, 2003), whose main intention was to obtain an accurate confidence interval for the correlation coefficient between two time series by taking into account the serial dependence of the process that generated the data. However, Monte Carlo experiments show that the coverage accuracy for smaller data sizes can be improved. Here we adapt the PearsonT program into a new version, called PearsonT3, by calibrating the confidence interval to increase the coverage accuracy. Calibration is a bootstrap resampling technique which basically performs a second bootstrap loop, that is, it resamples from the bootstrap resamples. It offers, like the non-calibrated bootstrap confidence intervals, robustness against the data distribution. A pairwise moving block bootstrap is used to preserve the serial correlation of both time series. The calibration is applied to standard-error-based bootstrap Student's t confidence intervals. The performance of the calibrated confidence intervals is examined with Monte Carlo simulations and compared with the performance of confidence intervals without calibration, that is, PearsonT. The coverage accuracy is evidently better for the calibrated confidence intervals, where the coverage error is acceptably small (i.e., within a few percentage points) already for data sizes as small as 20. One form of climate time series is output from numerical models which simulate the climate system. The method is applied to model data from the high-resolution ocean model INALT01, where the relationship between the Agulhas Leakage and the North Brazil Current is evaluated. Preliminary results show a significant correlation between the two variables when there is a 10-year lag between them, which is roughly the time it takes the Agulhas Leakage water to reach the North Brazil Current. Mudelsee, M., 2003. Estimating Pearson's correlation coefficient with bootstrap confidence interval from serially dependent time series. Mathematical Geology 35, 651-665.
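A minimal sketch of the pairwise moving-block bootstrap step on simulated serially correlated series; it returns a simple percentile interval, whereas PearsonT3 calibrates standard-error-based Student's t intervals with a second bootstrap loop, so this shows only the core resampling idea.

import numpy as np

def block_bootstrap_corr_ci(x, y, block_len=5, n_boot=2000, alpha=0.05, rng=None):
    # Pairwise moving-block bootstrap percentile CI for Pearson's r.
    if rng is None:
        rng = np.random.default_rng()
    n = len(x)
    starts = np.arange(n - block_len + 1)
    n_blocks = int(np.ceil(n / block_len))
    r_boot = np.empty(n_boot)
    for b in range(n_boot):
        idx = np.concatenate([np.arange(s, s + block_len)
                              for s in rng.choice(starts, size=n_blocks)])[:n]
        r_boot[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    lo, hi = np.quantile(r_boot, [alpha / 2, 1 - alpha / 2])
    return np.corrcoef(x, y)[0, 1], (lo, hi)

# Toy AR(1) series sharing a common signal.
rng = np.random.default_rng(6)
n = 200
z, e1, e2 = np.zeros(n), np.zeros(n), np.zeros(n)
for t in range(1, n):
    z[t] = 0.7 * z[t - 1] + rng.normal()
    e1[t] = 0.5 * e1[t - 1] + rng.normal()
    e2[t] = 0.5 * e2[t - 1] + rng.normal()
r, (lo, hi) = block_bootstrap_corr_ci(z + e1, z + e2, rng=rng)
print(f"r = {r:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")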
Counting Raindrops and the Distribution of Intervals Between Them.
NASA Astrophysics Data System (ADS)
Van De Giesen, N.; Ten Veldhuis, M. C.; Hut, R.; Pape, J. J.
2017-12-01
Drop size distributions are often assumed to follow a generalized gamma function, characterized by one parameter, Λ [1]. In principle, this Λ can be estimated by measuring the arrival rate of raindrops. The arrival rate should follow a Poisson distribution. By measuring the distribution of the time intervals between drops arriving at a certain surface area, one should be able to estimate not only the arrival rate but also the robustness of the underlying steady-state assumption. It is important to note that many rainfall radar systems also assume fixed drop size distributions, and associated arrival rates, to derive rainfall rates. By testing these relationships with a simple device, we will be able to improve both land-based and space-based radar rainfall estimates. Here, an open-hardware sensor design is presented, consisting of a 3D printed housing for a piezoelectric element, some simple electronics and an Arduino. The target audience for this device is citizen scientists who want to contribute to collecting rainfall information beyond the standard rain gauge. The core of the sensor is a simple piezo-buzzer, as found in many devices such as watches and fire alarms. When a raindrop falls on a piezo-buzzer, a small voltage is generated, which can be used to register the drop's arrival time. By registering the intervals between raindrops, the associated Poisson distribution can be estimated. In addition to the hardware, we will present the first results of a measuring campaign in Myanmar that will have run from August to October 2017. All design files and descriptions are available through GitHub: https://github.com/nvandegiesen/Intervalometer. This research is partially supported through the TWIGA project, funded by the European Commission's H2020 program under call SC5-18-2017 `Novel in-situ observation systems'. Reference [1]: Uijlenhoet, R., and J. N. M. Stricker. "A consistent rainfall parameterization based on the exponential raindrop size distribution." Journal of Hydrology 218, no. 3 (1999): 101-127.
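A small sketch, on simulated arrival times, of the estimation step the intervalometer data would feed: for Poisson arrivals the inter-drop intervals are exponential, so the arrival rate can be estimated from their mean and the exponential assumption checked with a Kolmogorov-Smirnov test. The rate and sample size used here are hypothetical.

import numpy as np
from scipy import stats

# Hypothetical drop arrival times (seconds) registered by the piezo sensor.
rng = np.random.default_rng(7)
true_rate = 12.0                               # drops per second
arrivals = np.cumsum(rng.exponential(1 / true_rate, size=500))
intervals = np.diff(arrivals)

# For a Poisson arrival process the intervals are exponential with mean 1/rate.
rate_hat = 1.0 / intervals.mean()
print(f"estimated arrival rate: {rate_hat:.1f} drops/s")

# Goodness-of-fit check of the exponential assumption (Kolmogorov-Smirnov).
d, p = stats.kstest(intervals, "expon", args=(0, intervals.mean()))
print(f"KS statistic {d:.3f}, p-value {p:.2f}")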
Magnetic storm generation by large-scale complex structure Sheath/ICME
NASA Astrophysics Data System (ADS)
Grigorenko, E. E.; Yermolaev, Y. I.; Lodkina, I. G.; Yermolaev, M. Y.; Riazantseva, M.; Borodkova, N. L.
2017-12-01
We study temporal profiles of interplanetary plasma and magnetic field parameters as well as magnetospheric indices. We use our catalog of large-scale solar wind phenomena for the 1976-2000 interval (see the catalog for 1976-2016 on the web site ftp://ftp.iki.rssi.ru/pub/omni/ prepared on the basis of the OMNI database (Yermolaev et al., 2009)) and the double superposed epoch analysis method (Yermolaev et al., 2010). Our analysis showed (Yermolaev et al., 2015) that the average profiles of the Dst and Dst* indices decrease in the Sheath interval (magnetic storm activity increases) and increase in the ICME interval. This profile coincides with the inverted distribution of storm numbers in the two intervals (Yermolaev et al., 2017). This behavior is explained by the following reasons. (1) The IMF magnitude in the Sheath is higher than in Ejecta and close to the value in MCs. (2) The Sheath has 1.5 times higher efficiency of storm generation than the ICME (Nikolaeva et al., 2015). Most so-called CME-induced storms are really Sheath-induced storms, and this fact should be taken into account in Space Weather prediction. The work was in part supported by the Russian Science Foundation, grant 16-12-10062. References. 1. Nikolaeva N.S., Y. I. Yermolaev and I. G. Lodkina (2015), Modeling of the corrected Dst* index temporal profile on the main phase of the magnetic storms generated by different types of solar wind, Cosmic Res., 53(2), 119-127 2. Yermolaev Yu. I., N. S. Nikolaeva, I. G. Lodkina and M. Yu. Yermolaev (2009), Catalog of Large-Scale Solar Wind Phenomena during 1976-2000, Cosmic Res., 47(2), 81-94 3. Yermolaev, Y. I., N. S. Nikolaeva, I. G. Lodkina, and M. Y. Yermolaev (2010), Specific interplanetary conditions for CIR-induced, Sheath-induced, and ICME-induced geomagnetic storms obtained by double superposed epoch analysis, Ann. Geophys., 28, 2177-2186 4. Yermolaev Yu. I., I. G. Lodkina, N. S. Nikolaeva and M. Yu. Yermolaev (2015), Dynamics of large-scale solar wind streams obtained by the double superposed epoch analysis, J. Geophys. Res. Space Physics, 120, doi:10.1002/2015JA021274 5. Yermolaev Y. I., I. G. Lodkina, N. S. Nikolaeva, M. Y. Yermolaev, M. O. Riazantseva (2017), Some Problems of Identification of Large-Scale Solar Wind types and Their Role in the Physics of the Magnetosphere, Cosmic Res., 55(3), pp. 178-189. DOI: 10.1134/S0010952517030029
A novel approach based on preference-based index for interval bilevel linear programming problem.
Ren, Aihong; Wang, Yuping; Xue, Xingsi
2017-01-01
This paper proposes a new methodology for solving the interval bilevel linear programming problem in which all coefficients of both objective functions and constraints are considered as interval numbers. In order to keep as much uncertainty of the original constraint region as possible, the original problem is first converted into an interval bilevel programming problem with interval coefficients in both objective functions only through normal variation of interval number and chance-constrained programming. With the consideration of different preferences of different decision makers, the concept of the preference level that the interval objective function is preferred to a target interval is defined based on the preference-based index. Then a preference-based deterministic bilevel programming problem is constructed in terms of the preference level and the order relation [Formula: see text]. Furthermore, the concept of a preference δ -optimal solution is given. Subsequently, the constructed deterministic nonlinear bilevel problem is solved with the help of estimation of distribution algorithm. Finally, several numerical examples are provided to demonstrate the effectiveness of the proposed approach.
Body size distributions signal a regime shift in a lake ...
Communities of organisms, from mammals to microorganisms, have discontinuous distributions of body size. This pattern of size structuring is a conservative trait of community organization and is a product of processes that occur at multiple spatial and temporal scales. In this study, we assessed whether body size patterns serve as an indicator of a threshold between alternative regimes. Over the past 7000 years, the biological communities of Foy Lake (Montana,USA) have undergone a major regime shift owing to climate change. We used a palaeoecological record of diatom communities to estimate diatom sizes, and then analysed the discontinuous distribution of organism sizes over time. We used Bayesian classification and regression tree models to determine that all time intervals exhibited aggregations of sizes separated by gaps in the distribution and found a significant change in diatom body size distributions approximately 150 years before the identified ecosystem regime shift. We suggest that discontinuity analysis is a useful addition to the suite of tools for the detection of early warning signals of regime shifts. Communities of organisms from mammals to microorganisms have discontinuous distributions of body size. This pattern of size structuring is a conservative trait of community organization and is a product of processes that occur at discrete spatial and temporal scales within ecosystems. Here, a paleoecological record of diatom community change is use
Consequences of Secondary Calibrations on Divergence Time Estimates.
Schenk, John J
2016-01-01
Secondary calibrations (calibrations based on the results of previous molecular dating studies) are commonly applied in divergence time analyses in groups that lack fossil data; however, the consequences of applying secondary calibrations in a relaxed-clock approach are not fully understood. I tested whether applying the posterior estimate from a primary study as a prior distribution in a secondary study results in consistent age and uncertainty estimates. I compared age estimates from simulations with 100 randomly replicated secondary trees. On average, the 95% credible intervals of node ages for secondary estimates were significantly younger and narrower than primary estimates. The primary and secondary age estimates were significantly different in 97% of the replicates after Bonferroni corrections. Greater error in magnitude was associated with deeper than shallower nodes, but the opposite was found when standardized by median node age, and a significant positive relationship was determined between the number of tips/age of secondary trees and the total amount of error. When two secondary calibrated nodes were analyzed, estimates remained significantly different, and although the minimum and median estimates were associated with less error, maximum age estimates and credible interval widths had greater error. The shape of the prior also influenced error, in which applying a normal, rather than uniform, prior distribution resulted in greater error. Secondary calibrations, in summary, lead to a false impression of precision and the distribution of age estimates shift away from those that would be inferred by the primary analysis. These results suggest that secondary calibrations should not be applied as the only source of calibration in divergence time analyses that test time-dependent hypotheses until the additional error associated with secondary calibrations is more properly modeled to take into account increased uncertainty in age estimates.
NASA Astrophysics Data System (ADS)
Zhang, X.; Zhao, W.; Liu, Y.; Fang, X.
2017-12-01
Soil water overconsumption is threatening the sustainability of regional vegetation rehabilitation on the Loess Plateau of China. The use of fractal geometry theory in describing soil quality improves the accuracy of the relevant research. Typical grasslands, shrublands, forests, croplands and orchards under different precipitation regimes were selected, and in this study the spatial distribution of the relationship between soil moisture and soil particle size on typical slopes of the Loess Plateau was investigated, to provide support for the prediction of soil moisture from soil physical characteristics on the Loess Plateau. During the sampling year, the mean annual precipitation gradient was divided at intervals of 70 mm from 370 mm to 650 mm. Grasslands with Medicago sativa L. or Stipa bungeana Trin., shrublands with Caragana korshinskii Kom. or Hippophae rhamnoides L., forests with Robinia pseudoacacia Linn., orchards with apple trees and croplands with corn or potatoes were chosen to represent the typical vegetation types. A soil auger with a diameter of 5 cm was used to obtain soil samples at depths of 0-5 m at intervals of 20 cm. The Van Genuchten model, fractal theory and redundancy analysis (RDA) were used to estimate and analyze the soil water characteristic curve, soil particle size distribution, fractal dimension, and the correlations between the relevant parameters. The results showed that (1) the change of the singular fractal dimension is positively correlated with soil water content, while D0 (capacity dimension) is negatively correlated with soil water content as depth increases; and (2) the relationship between soil moisture and soil particle size differs among plant types and precipitation gradients.
Ethnic Distribution of Microscopic Colitis in the United States.
Turner, Kevin; Genta, Robert M; Sonnenberg, Amnon
2015-11-01
A large electronic database of histopathology reports was used to study the ethnic distribution of microscopic colitis in the United States. Miraca Life Sciences is a nation-wide pathology laboratory that receives biopsy specimens submitted by 1500 gastroenterologists distributed throughout the United States. In a case-control study, the prevalence of microscopic colitis in 4 ethnic groups (East Asians, Indians, Hispanics, and Jews) was compared with that of all other ethnic groups (composed of American Caucasians and African Americans), serving as the reference group. A total of 11,706 patients with microscopic colitis were included in the analysis. In all ethnic groups alike, microscopic colitis was more common in women than men (78% versus 22%, odds ratio = 3.40, 95% confidence interval = 3.26-3.55). In all ethnic groups, the prevalence of microscopic colitis showed a continuous age-dependent rise. Hispanic patients with microscopic colitis were on average younger than the reference group (59.4 ± 16.2 years versus 64.2 ± 13.8 years, P < 0.001). Jewish patients with microscopic colitis were slightly older than the reference group (65.6 ± 13.4 years, P = 0.015). Compared with the reference group (prevalence = 1.20%), microscopic colitis was significantly less common among patients of Indian (prevalence = 0.28%, odds ratio = 0.32, 95% confidence interval = 0.13-0.65), East Asian (0.22%, 0.19, 0.14-0.26), or Hispanic descent (0.48%, 0.40, 0.36-0.45) and significantly more common among Jewish patients (1.30%, 1.10, 1.01-1.21). Microscopic colitis shows striking variations in its occurrence among different ethnic groups. Such variations could point to differences in the exposure to environmental risk factors.
Izadifar, Zahra; Belev, George; Babyn, Paul; Chapman, Dean
2015-10-19
The observation of ultrasound-generated cavitation bubbles deep in tissue is very difficult. The development of an imaging method capable of investigating cavitation bubbles in tissue would improve the efficiency and application of ultrasound in the clinic. Among the previous imaging modalities capable of detecting cavitation bubbles in vivo, the acoustic detection technique has the advantage of in vivo applicability. However, the size of the initial cavitation bubble and the amplitude of the ultrasound that produced it affect the timing and amplitude of the bubbles' emissions. The spatial distribution of cavitation bubbles, driven by a 0.8835 MHz therapeutic ultrasound system at an output power of 14 W, was studied in water using a synchrotron X-ray imaging technique, Analyzer Based Imaging (ABI). The cavitation bubble distribution was investigated by repeated application of the ultrasound and imaging of the water tank. The spatial frequency of the cavitation bubble pattern was evaluated by Fourier analysis. Acoustic cavitation was imaged at four different locations through the acoustic beam in water at a fixed power level. The pattern of cavitation bubbles in water was detected by synchrotron X-ray ABI. The spatial distribution of cavitation bubbles driven by the therapeutic ultrasound system was observed using the ABI X-ray imaging technique. It was observed that the cavitation bubbles appeared in a periodic pattern. The calculated spacing revealed that the distance between frequent cavitation lines (intervals) is one-half of the acoustic wavelength, consistent with standing waves. This set of experiments demonstrates the utility of synchrotron ABI for visualizing cavitation bubbles formed in water by clinical ultrasound systems working at high frequency and at output powers as low as those of a therapeutic system.
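The reported half-wavelength spacing can be checked with a short calculation, assuming a speed of sound in water of about 1480 m/s at room temperature:

# Expected spacing of standing-wave cavitation bands: half the acoustic wavelength.
c = 1480.0            # speed of sound in water at ~20 C, m/s (assumed)
f = 0.8835e6          # driving frequency, Hz
wavelength = c / f
print(f"wavelength ~ {wavelength*1e3:.2f} mm, half-wavelength ~ {wavelength/2*1e3:.2f} mm")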
Crackles and instabilities during lung inflation
NASA Astrophysics Data System (ADS)
Alencar, Adriano M.; Majumdar, Arnab; Hantos, Zoltan; Buldyrev, Sergey V.; Eugene Stanley, H.; Suki, Béla
2005-11-01
In a variety of physico-chemical reactions, the actual process takes place in a reactive zone, called the “active surface”. We define the active surface of the lung as the set of airway segments that are closed but connected to the trachea through an open pathway, which is the interface between closed and open regions in a collapsed lung. To study the active surface and the time interval between consecutive openings, we measured the sound pressure of crackles, associated with the opening of collapsed airway segments in isolated dog lungs, inflating from the collapsed state in 120 s. We analyzed the sequence of crackle amplitudes, inter-crackle intervals, and low-frequency energy from acoustic data. The series of spike amplitudes spans two orders of magnitude and the inter-crackle intervals span over five orders of magnitude. The distribution of spike amplitudes follows a power law for nearly two decades, while the distribution of time intervals between consecutive crackles shows two regimes of power law behavior, where the first region represents crackles coming from avalanches of openings whereas the second region is due to the time intervals between separate avalanches. Using the time interval between measured crackles, we estimated the time evolution of the active surface during lung inflation. In addition, we show that recruitment and instabilities along the pressure-volume curve are associated with airway opening and recruitment. We find good agreement between the theory of lung inflation dynamics and the experimental data, which, combined with numerical results, may prove useful in the clinical diagnosis of lung diseases.
SPA- STATISTICAL PACKAGE FOR TIME AND FREQUENCY DOMAIN ANALYSIS
NASA Technical Reports Server (NTRS)
Brownlow, J. D.
1994-01-01
The need for statistical analysis often arises when data is in the form of a time series. This type of data is usually a collection of numerical observations made at specified time intervals. Two kinds of analysis may be performed on the data. First, the time series may be treated as a set of independent observations using a time domain analysis to derive the usual statistical properties including the mean, variance, and distribution form. Secondly, the order and time intervals of the observations may be used in a frequency domain analysis to examine the time series for periodicities. In almost all practical applications, the collected data is actually a mixture of the desired signal and a noise signal which is collected over a finite time period with a finite precision. Therefore, any statistical calculations and analyses are actually estimates. The Spectrum Analysis (SPA) program was developed to perform a wide range of statistical estimation functions. SPA can provide the data analyst with a rigorous tool for performing time and frequency domain studies. In a time domain statistical analysis the SPA program will compute the mean, variance, standard deviation, mean square, and root mean square. It also lists the data maximum, data minimum, and the number of observations included in the sample. In addition, a histogram of the time domain data is generated, a normal curve is fit to the histogram, and a goodness-of-fit test is performed. These time domain calculations may be performed on both raw and filtered data. For a frequency domain statistical analysis the SPA program computes the power spectrum, cross spectrum, coherence, phase angle, amplitude ratio, and transfer function. The estimates of the frequency domain parameters may be smoothed with the use of Hann-Tukey, Hamming, Bartlett, or moving average windows. Various digital filters are available to isolate data frequency components. Frequency components with periods longer than the data collection interval are removed by least-squares detrending. As many as ten channels of data may be analyzed at one time. Both tabular and plotted output may be generated by the SPA program. This program is written in FORTRAN IV and has been implemented on a CDC 6000 series computer with a central memory requirement of approximately 142K (octal) of 60-bit words. This core requirement can be reduced by segmentation of the program. The SPA program was developed in 1978.
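For readers without access to the FORTRAN IV program, the following sketch reproduces a few of the computations described above with modern Python tools (numpy and scipy assumed available): basic time-domain statistics with a normality check, and a Hamming-windowed, detrended power-spectrum estimate. It is an illustrative equivalent, not the SPA implementation.

import numpy as np
from scipy import signal, stats

rng = np.random.default_rng(8)

# Hypothetical sampled signal: 5 Hz sine plus noise, sampled at 100 Hz for 20 s.
fs = 100.0
t = np.arange(0, 20, 1 / fs)
x = np.sin(2 * np.pi * 5 * t) + rng.normal(0, 0.5, t.size)

# Time-domain estimates (cf. SPA's mean, variance, RMS and goodness-of-fit test).
print("mean", x.mean(), "variance", x.var(ddof=1), "rms", np.sqrt(np.mean(x**2)))
print("normality test p-value:", stats.normaltest(x).pvalue)

# Frequency-domain estimate: power spectrum with a Hamming window and linear detrending.
freqs, pxx = signal.welch(x, fs=fs, window="hamming", nperseg=256, detrend="linear")
print("dominant frequency:", freqs[np.argmax(pxx)], "Hz")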
The assignment of scores procedure for ordinal categorical data.
Chen, Han-Ching; Wang, Nae-Sheng
2014-01-01
Ordinal data are the most frequently encountered type of data in the social sciences. Many statistical methods can be used to process such data. One common method is to assign scores to the data, convert them into interval data, and then perform further statistical analysis. Several authors have recently developed methods for assigning scores to ordered categorical data. This paper proposes an approach that defines a score-assignment system for an ordinal categorical variable based on an underlying continuous latent distribution, and illustrates it with three case study examples. The results show that the proposed score system works well for skewed ordinal categorical data.
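One common way to build such a score system, shown below as a hedged sketch rather than the paper's exact procedure, is to map the midpoint cumulative proportion of each category through the standard normal quantile function, which yields interval-scale scores under an assumed latent normal distribution.

import numpy as np
from scipy import stats

# Hypothetical 5-point Likert counts (strongly disagree ... strongly agree).
counts = np.array([8, 15, 30, 25, 12])
p = counts / counts.sum()

# Midpoint cumulative proportion of each category ...
cum = np.cumsum(p)
mid = cum - p / 2

# ... mapped through the standard normal quantile function to give scores
# on an interval scale under an assumed latent normal distribution.
scores = stats.norm.ppf(mid)
for k, s in enumerate(scores, start=1):
    print(f"category {k}: score {s:+.2f}")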