Sample records for time intervals distribution

  1. Optimal methods for fitting probability distributions to propagule retention time in studies of zoochorous dispersal.

    PubMed

    Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi

    2016-02-01

    Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution using as data-points either the lower, mid or upper bounds of sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fittings to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull distributions) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) in relation to original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations that ranged from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates. 
We recommend the use of cumulative probability to fit parametric probability distributions to propagule retention time, specifically using maximum likelihood for parameter estimation. Furthermore, the experimental design for an optimal characterization of unimodal propagule retention time should contemplate at least 500 recovered propagules and sampling time-intervals not larger than the time peak of propagule retrieval, except in the tail of the distribution where broader sampling time-intervals may also produce accurate fits.
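The recommended approach can be sketched as interval-censored maximum likelihood: each recovered propagule contributes the probability mass of its sampling interval under the candidate distribution. The sketch below is illustrative only (simulated lognormal retention times, hourly sampling intervals), not the authors' code.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)

# Simulate "true" retention times (hours) from a known lognormal,
# then discretize them into hourly sampling intervals.
true_times = rng.lognormal(mean=2.0, sigma=0.5, size=500)
lower = np.floor(true_times)
upper = lower + 1.0

# Interval-censored log-likelihood: each observation contributes
# log(F(upper) - F(lower)) under the candidate lognormal CDF.
def neg_loglik(params):
    mu, sigma = params[0], abs(params[1])
    p = (stats.lognorm.cdf(upper, s=sigma, scale=np.exp(mu))
         - stats.lognorm.cdf(lower, s=sigma, scale=np.exp(mu)))
    return -np.sum(np.log(np.clip(p, 1e-300, None)))

res = optimize.minimize(neg_loglik, x0=[1.0, 1.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], abs(res.x[1])
```

With 500 recovered propagules (the sample size recommended above), the estimates land close to the generating parameters (mu = 2.0, sigma = 0.5), even though only interval bounds were observed.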

  2. An understanding of human dynamics in urban subway traffic from the Maximum Entropy Principle

    NASA Astrophysics Data System (ADS)

    Yong, Nuo; Ni, Shunjiang; Shen, Shifei; Ji, Xuewei

    2016-08-01

    We studied the distribution of entry time interval in Beijing subway traffic by analyzing the smart card transaction data, and then deduced the probability distribution function of entry time interval based on the Maximum Entropy Principle. Both theoretical derivation and data statistics indicated that the entry time interval obeys power-law distribution with an exponential cutoff. In addition, we pointed out the constraint conditions for the distribution form and discussed how the constraints affect the distribution function. It is speculated that for bursts and heavy tails in human dynamics, when the fitted power exponent is less than 1.0, it cannot be a pure power-law distribution, but with an exponential cutoff, which may be ignored in the previous studies.
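The role of the exponential cutoff can be checked numerically: for a fitted exponent alpha < 1.0, a pure power law t^(-alpha) is not normalizable on [t0, ∞), so some cutoff must be present. A small sketch with illustrative parameters (alpha = 0.8, cutoff scale tau = 50, arbitrary time units; not values from the study):

```python
import numpy as np
from scipy.integrate import quad

alpha, t0, tau = 0.8, 1.0, 50.0  # illustrative values, not fitted ones

def powerlaw_mass(T):
    # Exact integral of t**(-alpha) over [t0, T] for alpha < 1: it grows
    # without bound as T grows, so the pure power law cannot be
    # normalized into a probability density on [t0, infinity).
    return (T**(1 - alpha) - t0**(1 - alpha)) / (1 - alpha)

m_1e8 = powerlaw_mass(1e8)
m_1e10 = powerlaw_mass(1e10)

# Adding the exponential cutoff exp(-t/tau) makes the normalization finite.
Z, _ = quad(lambda t: t**(-alpha) * np.exp(-t / tau), t0, np.inf)
```

The truncated power-law mass keeps growing with the upper limit, while the cutoff integral converges to a finite normalization constant.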

  3. Modeling stream fish distributions using interval-censored detection times.

    PubMed

    Ferreira, Mário; Filipe, Ana Filipa; Bardos, David C; Magalhães, Maria Filomena; Beja, Pedro

    2016-08-01

    Controlling for imperfect detection is important for developing species distribution models (SDMs). Occupancy-detection models based on the time needed to detect a species can be used to address this problem, but this is hindered when times to detection are not known precisely. Here, we extend the time-to-detection model to deal with detections recorded in time intervals and illustrate the method using a case study on stream fish distribution modeling. We collected electrofishing samples of six fish species across a Mediterranean watershed in Northeast Portugal. Based on a Bayesian hierarchical framework, we modeled the probability of water presence in stream channels, and the probability of species occupancy conditional on water presence, in relation to environmental and spatial variables. We also modeled time-to-first detection conditional on occupancy in relation to local factors, using modified interval-censored exponential survival models. Posterior distributions of occupancy probabilities derived from the models were used to produce species distribution maps. Simulations indicated that the modified time-to-detection model provided unbiased parameter estimates despite interval-censoring. There was a tendency for spatial variation in detection rates to be primarily influenced by depth and, to a lesser extent, stream width. Species occupancies were consistently affected by stream order, elevation, and annual precipitation. Bayesian P-values and AUCs indicated that all models had adequate fit and high discrimination ability, respectively. Mapping of predicted occupancy probabilities showed widespread distribution by most species, but uncertainty was generally higher in tributaries and upper reaches. The interval-censored time-to-detection model provides a practical solution to model occupancy-detection when detections are recorded in time intervals. 
This modeling framework is useful for developing SDMs while controlling for variation in detection rates, as it uses simple data that can be readily collected by field ecologists.

  4. Buffered coscheduling for parallel programming and enhanced fault tolerance

    DOEpatents

    Petrini, Fabrizio [Los Alamos, NM; Feng, Wu-chun [Los Alamos, NM

    2006-01-31

    A computer implemented method schedules processor jobs on a network of parallel machine processors or distributed system processors. Control information communications generated by each process performed by each processor during a defined time interval is accumulated in buffers, where adjacent time intervals are separated by strobe intervals for a global exchange of control information. A global exchange of the control information communications at the end of each defined time interval is performed during an intervening strobe interval so that each processor is informed by all of the other processors of the number of incoming jobs to be received by each processor in a subsequent time interval. The buffered coscheduling method of this invention also enhances the fault tolerance of a network of parallel machine processors or distributed system processors.

  5. Voter model with non-Poissonian interevent intervals

    NASA Astrophysics Data System (ADS)

    Takaguchi, Taro; Masuda, Naoki

    2011-09-01

    Recent analysis of social communications among humans has revealed that the interval between interactions for a pair of individuals and for an individual often follows a long-tail distribution. We investigate the effect of such a non-Poissonian nature of human behavior on dynamics of opinion formation. We use a variant of the voter model and numerically compare the time to consensus of all the voters with different distributions of interevent intervals and different networks. Compared with the exponential distribution of interevent intervals (i.e., the standard voter model), the power-law distribution of interevent intervals slows down consensus on the ring. This is because of the memory effect; in the power-law case, the expected time until the next update event on a link is large if the link has not had an update event for a long time. On the complete graph, the consensus time in the power-law case is close to that in the exponential case. Regular graphs bridge these two results such that the slowing down of the consensus in the power-law case as compared to the exponential case is less pronounced as the degree increases.
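A minimal numerical sketch of this setup (not the paper's exact update rule or parameters): voters on a ring each carry a renewal clock whose waiting times are either exponential or heavy-tailed; when a voter's clock fires it copies a random neighbour, and the time to consensus is recorded.

```python
import numpy as np

def consensus_time(N=16, dist="exp", seed=0, max_events=200000):
    """Time to consensus for a voter model on a ring with renewal clocks."""
    rng = np.random.default_rng(seed)
    state = rng.integers(0, 2, size=N)

    def wait():
        if dist == "exp":
            return rng.exponential(1.0)              # standard voter model
        return (1.0 - rng.random()) ** (-1.0 / 1.5)  # Pareto waiting time

    next_t = np.array([wait() for _ in range(N)])
    t = 0.0
    for _ in range(max_events):
        i = int(np.argmin(next_t))            # next voter to update
        t = next_t[i]
        j = (i + rng.choice([-1, 1])) % N     # random ring neighbour
        state[i] = state[j]
        next_t[i] = t + wait()
        if state.min() == state.max():        # all voters agree
            return t
    return np.inf
```

Because the two waiting-time distributions have different means, comparisons should be qualitative (e.g., averaged over many seeds and rescaled by the mean interevent interval) rather than a direct reading of the two times.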

  6. Interresponse Time Structures in Variable-Ratio and Variable-Interval Schedules

    ERIC Educational Resources Information Center

    Bowers, Matthew T.; Hill, Jade; Palya, William L.

    2008-01-01

    The interresponse-time structures of pigeon key pecking were examined under variable-ratio, variable-interval, and variable-interval plus linear feedback schedules. Whereas the variable-ratio and variable-interval plus linear feedback schedules generally resulted in a distinct group of short interresponse times and a broad distribution of longer…

  7. Recurrence time statistics for finite size intervals

    NASA Astrophysics Data System (ADS)

    Altmann, Eduardo G.; da Silva, Elton C.; Caldas, Iberê L.

    2004-12-01

    We investigate the statistics of recurrences to finite size intervals for chaotic dynamical systems. We find that the typical distribution presents an exponential decay for almost all recurrence times except for a few short times affected by a kind of memory effect. We interpret this effect as being related to the unstable periodic orbits inside the interval. Although it is restricted to a few short times it changes the whole distribution of recurrences. We show that for systems with strong mixing properties the exponential decay converges to the Poissonian statistics when the width of the interval goes to zero. However, we alert that special attention to the size of the interval is required in order to guarantee that the short time memory effect is negligible when one is interested in numerically or experimentally calculated Poincaré recurrence time statistics.
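These statistics are easy to probe numerically. A minimal sketch (not from the paper) using the fully chaotic logistic map, whose invariant density is known, so Kac's lemma gives the expected mean recurrence time as the reciprocal of the interval's measure:

```python
import numpy as np

# Recurrence times to the interval [0.2, 0.21] for the logistic map
# x -> 4x(1-x). Under the invariant density 1/(pi*sqrt(x(1-x))) the
# interval has measure ~0.0079, so by Kac's lemma the mean recurrence
# time is ~126 iterations. Consecutive in-interval points contribute
# recurrence time 1 -- the short-time effect discussed above.
x, lo, hi = 0.3, 0.2, 0.21
times, last = [], None
for n in range(200000):
    x = 4.0 * x * (1.0 - x)
    if lo <= x <= hi:
        if last is not None:
            times.append(n - last)
        last = n
times = np.array(times)
mean_rec = times.mean()
```

A histogram of `times` shows the exponential tail for long recurrences alongside the excess of very short ones.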

  8. Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks

    PubMed Central

    Lam, William H. K.; Li, Qingquan

    2017-01-01

    Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks. PMID:29210978

  9. Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks.

    PubMed

    Shi, Chaoyang; Chen, Bi Yu; Lam, William H K; Li, Qingquan

    2017-12-06

    Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks.
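The fusion step can be illustrated with Dempster's rule of combination over discrete travel-time bins. The masses below are hypothetical, purely to show the mechanics; the paper applies the rule to estimated link and path travel time distributions.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass assignments over frozenset focal sets."""
    combined, conflict = {}, 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + a * b
        else:
            conflict += a * b          # mass assigned to disjoint hypotheses
    # Normalize by the non-conflicting mass.
    return {A: v / (1.0 - conflict) for A, v in combined.items()}

# Hypothetical evidence about which travel-time bin a path falls into:
# one mass function from a point detector, one from an interval detector.
m_point = {frozenset({"short"}): 0.6,
           frozenset({"short", "medium"}): 0.3,
           frozenset({"short", "medium", "long"}): 0.1}
m_interval = {frozenset({"medium"}): 0.5,
              frozenset({"short", "medium"}): 0.4,
              frozenset({"short", "medium", "long"}): 0.1}
fused = dempster_combine(m_point, m_interval)
```

The rule concentrates mass on hypotheses supported by both sources and renormalizes away the conflicting mass.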

  10. Fixed-interval matching-to-sample: intermatching time and intermatching error runs

    PubMed Central

    Nelson, Thomas D.

    1978-01-01

    Four pigeons were trained on a matching-to-sample task in which reinforcers followed either the first matching response (fixed interval) or the fifth matching response (tandem fixed-interval fixed-ratio) that occurred 80 seconds or longer after the last reinforcement. Relative frequency distributions of the matching-to-sample responses that concluded intermatching times and runs of mismatches (intermatching error runs) were computed for the final matching responses directly followed by grain access and also for the three matching responses immediately preceding the final match. Comparison of these two distributions showed that the fixed-interval schedule arranged for the preferential reinforcement of matches concluding relatively extended intermatching times and runs of mismatches. Differences in matching accuracy and rate during the fixed interval, compared to the tandem fixed-interval fixed-ratio, suggested that reinforcers following matches concluding various intermatching times and runs of mismatches influenced the rate and accuracy of the last few matches before grain access, but did not control rate and accuracy throughout the entire fixed-interval period. PMID:16812032

  11. Improved confidence intervals when the sample is counted an integer times longer than the blank.

    PubMed

    Potter, William Edward; Strzelczyk, Jadwiga Jodi

    2011-05-01

    Past computer solutions for confidence intervals in paired counting are extended to the case where the ratio of the sample count time to the blank count time is taken to be an integer, IRR. Previously, confidence intervals have been named Neyman-Pearson confidence intervals; more correctly they should have been named Neyman confidence intervals or simply confidence intervals. The technique utilized mimics a technique used by Pearson and Hartley to tabulate confidence intervals for the expected value of the discrete Poisson and Binomial distributions. The blank count and the contribution of the sample to the gross count are assumed to be Poisson distributed. The expected value of the blank count, in the sample count time, is assumed known. The net count, OC, is taken to be the gross count minus the product of IRR with the blank count. The probability density function (PDF) for the net count can be determined in a straightforward manner.
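The PDF of the net count can be written down by conditioning on the blank count: P(OC = k) = Σ_b P(B = b) · P(G = k + IRR·b). A small sketch with illustrative means (not values from the paper):

```python
from scipy.stats import poisson

def net_count_pmf(k, mu_gross, mu_blank, irr, bmax=60):
    """P(OC = k) for OC = G - irr*B, G ~ Poisson(mu_gross), B ~ Poisson(mu_blank).

    poisson.pmf returns 0 for negative arguments, so infeasible terms vanish.
    """
    return sum(poisson.pmf(b, mu_blank) * poisson.pmf(k + irr * b, mu_gross)
               for b in range(bmax))

# Illustrative values: gross mean 20, blank mean 3, IRR = 2.
pmf = {k: net_count_pmf(k, 20.0, 3.0, 2) for k in range(-90, 100)}
```

Note that the net count can be negative (when the scaled blank exceeds the gross count), which is why the support extends below zero.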

  12. Temporal structure in the light response of relay cells in the dorsal lateral geniculate nucleus of the cat.

    PubMed Central

    Funke, K; Wörgötter, F

    1995-01-01

    1. The spike interval pattern during the light responses of 155 on- and 81 off-centre cells of the dorsal lateral geniculate nucleus (LGN) was studied in anaesthetized and paralysed cats by the use of a novel analysis. Temporally localized interval distributions were computed from a 100 ms time window, which was shifted along the time axis in 10 ms steps, resulting in a 90% overlap between two adjacent windows. For each step the interval distribution was computed inside the time window with 1 ms resolution, and plotted as a greyscale-coded pixel line orthogonal to the time axis. For visual stimulation, light or dark spots of different size and contrast were presented with different background illumination levels. 2. Two characteristic interval patterns were observed during the sustained response component of the cells. Mainly on-cells (77%) responded with multimodal interval distributions, resulting in elongated 'bands' in the 2-dimensional time window plots. In similar situations, the interval distributions for most (71%) off-cells were rather wide and featureless. In those cases where interval bands (i.e. multimodal interval distributions) were observed for off-cells (14%), they were always much wider than for the on-cells. This difference between the on- and off-cell population was independent of the background illumination and the contrast of the stimulus. Y on-cells also tended to produce wider interval bands than X on-cells. 3. For most stimulation situations the first interval band was centred around 6-9 ms, which has been called the fundamental interval; higher order bands are multiples thereof. The fundamental interval shifted towards larger sizes with decreasing stimulus contrast. Increasing stimulus size, on the other hand, resulted in a redistribution of the intervals into higher order bands, while at the same time the location of the fundamental interval remained largely unaffected. 
This was interpreted as an effect of the increasing surround inhibition at the geniculate level, by which individual retinal EPSPs were cancelled. A changing level of adaptation can result in a mixed shift/redistribution effect because of the changing stimulus contrast and changing level of tonic inhibition. 4. The occurrence of interval bands is not directly related to the shape of the autocorrelation function, which can be flat, weakly oscillatory or strongly oscillatory, regardless of the interval band pattern. 5. A simple computer model was devised to account for the observed cell behaviour. The model is highly robust against parameter variations.(ABSTRACT TRUNCATED AT 400 WORDS) PMID:7562612

  13. Information distribution in distributed microprocessor based flight control systems

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.; Lee, P. S.

    1977-01-01

    This paper presents an optimal control theory that accounts for variable time intervals in the information distribution to control effectors in a distributed microprocessor based flight control system. The theory is developed using a linear process model for the aircraft dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved that provides the control law that minimizes the expected value of a quadratic cost function. An example is presented where the theory is applied to the control of the longitudinal motions of the F8-DFBW aircraft. Theoretical and simulation results indicate that, for the example problem, the optimal cost obtained using a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained using a known uniform information update interval.

  14. Estimating the duration of geologic intervals from a small number of age determinations: A challenge common to petrology and paleobiology

    NASA Astrophysics Data System (ADS)

    Glazner, Allen F.; Sadler, Peter M.

    2016-12-01

    The duration of a geologic interval, such as the time over which a given volume of magma accumulated to form a pluton, or the lifespan of a large igneous province, is commonly determined from a relatively small number of geochronologic determinations (e.g., 4-10) within that interval. Such sample sets can underestimate the true length of the interval by a significant amount. For example, the average interval determined from a sample of size n = 5, drawn from a uniform random distribution, will underestimate the true interval by 50%. Even for n = 10, the average sample only captures ~80% of the interval. If the underlying distribution is known then a correction factor can be determined from theory or Monte Carlo analysis; for a uniform random distribution, this factor is (n + 1)/(n - 1). Systematic undersampling of interval lengths can have a large effect on calculated magma fluxes in plutonic systems. The problem is analogous to determining the duration of an extinct species from its fossil occurrences. Confidence interval statistics developed for species origination and extinction times are applicable to the onset and cessation of magmatic events.
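The correction factor for a uniform random distribution follows from the expected range of n uniform order statistics, E[max − min] = (n − 1)/(n + 1): five dates span two-thirds of the true interval on average, so the true interval is 50% longer than the observed one. A quick Monte Carlo check (illustrative, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(1)

def observed_fraction(n, trials=20000):
    """Mean fraction of a unit interval spanned by n uniform random dates."""
    x = rng.random((trials, n))
    return (x.max(axis=1) - x.min(axis=1)).mean()

for n in (5, 10):
    emp = observed_fraction(n)
    theory = (n - 1) / (n + 1)   # expected sample range of n uniforms
    print(n, emp, theory)
```

Multiplying the observed span by (n + 1)/(n − 1) undoes this bias on average, which is the correction factor quoted above.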

  15. The temporal organization of behavior on periodic food schedules.

    PubMed Central

    Reid, A K; Bacha, G; Morán, C

    1993-01-01

    Various theories of temporal control and schedule induction imply that periodic schedules temporally modulate an organism's motivational states within interreinforcement intervals. This speculation has been fueled by frequently observed multimodal activity distributions created by averaging across interreinforcement intervals. We tested this hypothesis by manipulating the cost associated with schedule-induced activities and the availability of other activities to determine the degree to which (a) the temporal distributions of activities within the interreinforcement interval are fixed or can be temporally displaced, (b) rats can reallocate activities across different interreinforcement intervals, and (c) noninduced activities can substitute for schedule-induced activities. Obtained multimodal activity distributions created by averaging across interreinforcement intervals were not representative of the transitions occurring within individual intervals, so the averaged multimodal distributions should not be assumed to represent changes in the subject's motivational states within the interval. Rather, the multimodal distributions often result from averaging across interreinforcement intervals in which only a single activity occurs. A direct influence of the periodic schedule on the motivational states implies that drinking and running should occur at different periods within the interval, but in three experiments the starting times of drinking and running within interreinforcement intervals were equal. Thus, the sequential pattern of drinking and running on periodic schedules does not result from temporal modulation of motivational states within interreinforcement intervals. PMID:8433061

  16. Temporal Structure of Volatility Fluctuations

    NASA Astrophysics Data System (ADS)

    Wang, Fengzhong; Yamasaki, Kazuko; Stanley, H. Eugene; Havlin, Shlomo

    Volatility fluctuations are of great importance for the study of financial markets, and the temporal structure is an essential feature of fluctuations. To explore the temporal structure, we employ a new approach based on the return interval, which is defined as the time interval between two successive volatility values that are above a given threshold. We find that the distribution of the return intervals follows a scaling law over a wide range of thresholds, and over a broad range of sampling intervals. Moreover, this scaling law is universal for stocks of different countries, for commodities, for interest rates, and for currencies. However, further and more detailed analysis of the return intervals shows some systematic deviations from the scaling law. We also demonstrate a significant memory effect in the return intervals time organization. We find that the distribution of return intervals is strongly related to the correlations in the volatility.
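The return-interval construction, and the uncorrelated baseline against which memory effects are measured, can be sketched in a few lines. For an iid series, threshold exceedances are independent, the mean return interval is 1/p for exceedance probability p, and the interval distribution is geometric; deviations from this baseline signal volatility correlations. White-noise stand-in below, not market data:

```python
import numpy as np

rng = np.random.default_rng(2)
series = np.abs(rng.standard_normal(100000))  # stand-in for a volatility series
q = np.quantile(series, 0.95)                 # threshold exceeded 5% of the time

above = np.flatnonzero(series > q)            # times of threshold exceedances
intervals = np.diff(above)                    # return intervals between them
# For an uncorrelated series the mean return interval is 1/p = 20 samples;
# correlated volatility departs from this geometric baseline.
```

Repeating this for a range of thresholds and rescaling each interval set by its mean is the scaling analysis described above.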

  17. Effectiveness of motor sequential learning according to practice schedules in healthy adults; distributed practice versus massed practice

    PubMed Central

    Kwon, Yong Hyun; Kwon, Jung Won; Lee, Myoung Hee

    2015-01-01

    [Purpose] The purpose of the current study was to compare the effectiveness of motor sequential learning according to two different types of practice schedules, a distributed practice schedule (two 12-hour inter-trial intervals) and a massed practice schedule (two 10-minute inter-trial intervals), using a serial reaction time (SRT) task. [Subjects and Methods] Thirty healthy subjects were recruited and then randomly and evenly assigned to either the distributed practice group or the massed practice group. All subjects performed three consecutive sessions of the SRT task following one of the two different types of practice schedules. Distributed practice was scheduled for two 12-hour inter-session intervals including sleeping time, whereas massed practice was administered for two 10-minute inter-session intervals. Response time (RT) and response accuracy (RA) were measured at pre-test, mid-test, and post-test. [Results] For RT, univariate analysis demonstrated significant main effects in the within-group comparison of the three tests as well as the interaction effect of two groups × three tests, whereas the between-group comparison showed no significant effect. The results for RA showed no significant differences in either the between-group comparison or the interaction effect of two groups × three tests, whereas the within-group comparison of the three tests showed a significant main effect. [Conclusion] Distributed practice led to enhancement of motor skill acquisition at the first inter-session interval as well as at the second inter-session interval the following day, compared to massed practice. Consequently, the results of this study suggest that a distributed practice schedule can enhance the effectiveness of motor sequential learning in one-day as well as two-day learning formats compared to massed practice. PMID:25931727

  18. Musical training generalises across modalities and reveals efficient and adaptive mechanisms for reproducing temporal intervals.

    PubMed

    Aagten-Murphy, David; Cappagli, Giulia; Burr, David

    2014-03-01

    Expert musicians are able to time their actions accurately and consistently during a musical performance. We investigated how musical expertise influences the ability to reproduce auditory intervals and how this generalises across different techniques and sensory modalities. We first compared various reproduction strategies and interval lengths, to examine the effects in general and to optimise experimental conditions for testing the effect of music, and found that the effects were robust and consistent across different paradigms. Focussing on a 'ready-set-go' paradigm, subjects reproduced time intervals drawn from distributions varying in total length (176, 352 or 704 ms) or in the number of discrete intervals within the total length (3, 5, 11 or 21 discrete intervals). Overall, Musicians performed more veridically than Non-Musicians, and all subjects reproduced auditory-defined intervals more accurately than visually-defined intervals. However, Non-Musicians, particularly with visual stimuli, consistently exhibited a substantial and systematic regression towards the mean interval. When subjects judged intervals from distributions of longer total length they tended to regress more towards the mean, while the ability to discriminate between discrete intervals within the distribution had little influence on subject error. These results are consistent with a Bayesian model that minimizes reproduction errors by incorporating a central tendency prior weighted by the subject's own temporal precision relative to the current distribution of intervals. Finally, a strong correlation was observed between all durations of formal musical training and total reproduction errors in both modalities (accounting for 30% of the variance). Taken together these results demonstrate that formal musical training improves temporal reproduction, and that this improvement transfers from audition to vision. 
They further demonstrate the flexibility of sensorimotor mechanisms in adapting to different task conditions to minimise temporal estimation errors.
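The central-tendency account above can be sketched as precision-weighted (Bayesian) cue combination: the reproduced interval is a weighted average of the noisy sensory measurement and the mean of the current interval distribution, with weights set by their relative precisions. All numbers below are hypothetical, chosen only to illustrate why lower sensory noise (as with musicians) yields less regression to the mean:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical parameters, in ms.
prior_mean, prior_sd = 500.0, 100.0   # current distribution of intervals
sd_musician, sd_novice = 30.0, 80.0   # assumed sensory noise levels

def reproduce(true_interval, sensory_sd):
    measured = true_interval + rng.normal(0.0, sensory_sd)
    w = prior_sd**2 / (prior_sd**2 + sensory_sd**2)  # weight on the measurement
    return w * measured + (1.0 - w) * prior_mean

# Reproducing a 300 ms interval: the noisier observer regresses further
# toward the 500 ms mean of the distribution.
novice = [reproduce(300.0, sd_novice) for _ in range(2000)]
musician = [reproduce(300.0, sd_musician) for _ in range(2000)]
```

Both simulated groups overshoot the short interval toward the distribution mean, but the low-noise observer does so far less, matching the pattern described in the abstract.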

  19. Estimation of the incubation period of invasive aspergillosis by survival models in acute myeloid leukemia patients.

    PubMed

    Bénet, Thomas; Voirin, Nicolas; Nicolle, Marie-Christine; Picot, Stephane; Michallet, Mauricette; Vanhems, Philippe

    2013-02-01

    The duration of the incubation of invasive aspergillosis (IA) remains unknown. The objective of this investigation was to estimate the time interval between aplasia onset and that of IA symptoms in acute myeloid leukemia (AML) patients. A single-centre prospective survey (2004-2009) included all patients with AML and probable/proven IA. Parametric survival models were fitted to the distribution of the time intervals between aplasia onset and IA. Overall, 53 patients had IA after aplasia, with the median observed time interval between the two being 15 days. Based on log-normal distribution, the median estimated IA incubation period was 14.6 days (95% CI; 12.8-16.5 days).

  20. A model of interval timing by neural integration.

    PubMed

    Simen, Patrick; Balci, Fuat; de Souza, Laura; Cohen, Jonathan D; Holmes, Philip

    2011-06-22

    We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes, that correlations among them can be largely cancelled by balancing excitation and inhibition, that neural populations can act as integrators, and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys, and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule's predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior.
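A minimal sketch of the core mechanism (not the authors' full model): a noisy accumulator ramps toward a fixed threshold theta at drift theta/T, with Poisson-like noise whose variance scales with the drift (the balanced excitation/inhibition assumption). This makes the coefficient of variation of crossing times approximately constant across timed durations, i.e., scale invariant. Parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def time_to_threshold(drift, k=0.2, theta=1.0, dt=0.005):
    """First crossing time of a noisy linear ramp toward a fixed threshold."""
    sigma = k * np.sqrt(drift)  # Poisson-like: noise variance scales with rate
    x, t = 0.0, 0.0
    while x < theta:
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t

# To time a duration T, set drift = theta / T; crossing times then have
# mean ~ T with a roughly T-independent coefficient of variation.
summary = {}
for T in (0.5, 2.0):
    times = np.array([time_to_threshold(1.0 / T) for _ in range(200)])
    summary[T] = (times.mean(), times.std() / times.mean())
```

If the noise amplitude were instead held fixed, the coefficient of variation would grow with T; the rate-scaled noise is what produces the scale-invariant response time distributions mentioned above.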

  1. Empirical estimation of a distribution function with truncated and doubly interval-censored data and its application to AIDS studies.

    PubMed

    Sun, J

    1995-09-01

    In this paper we discuss the non-parametric estimation of a distribution function based on incomplete data for which the measurement origin of a survival time or the date of enrollment in a study is known only to belong to an interval. Also the survival time of interest itself is observed from a truncated distribution and is known only to lie in an interval. To estimate the distribution function, a simple self-consistency algorithm, a generalization of Turnbull's (1976, Journal of the Royal Statistical Society, Series B 38, 290-295) self-consistency algorithm, is proposed. This method is then used to analyze two AIDS cohort studies, for which direct use of the EM algorithm (Dempster, Laird and Rubin, 1977, Journal of the Royal Statistical Society, Series B 39, 1-38), which is computationally complicated, has previously been the usual method of the analysis.

  2. Interval timing under a behavioral microscope: Dissociating motivational and timing processes in fixed-interval performance.

    PubMed

    Daniels, Carter W; Sanabria, Federico

    2017-03-01

    The distribution of latencies and interresponse times (IRTs) of rats was compared between two fixed-interval (FI) schedules of food reinforcement (FI 30 s and FI 90 s), and between two levels of food deprivation. Computational modeling revealed that latencies and IRTs were well described by mixture probability distributions embodying two-state Markov chains. Analysis of these models revealed that only a subset of latencies is sensitive to the periodicity of reinforcement, and prefeeding only reduces the size of this subset. The distribution of IRTs suggests that behavior in FI schedules is organized in bouts that lengthen and ramp up in frequency with proximity to reinforcement. Prefeeding slowed down the lengthening of bouts and increased the time between bouts. When concatenated, latency and IRT models adequately reproduced sigmoidal FI response functions. These findings suggest that behavior in FI schedules fluctuates in and out of schedule control; an account of such fluctuation suggests that timing and motivation are dissociable components of FI performance. These mixture-distribution models also provide novel insights on the motivational, associative, and timing processes expressed in FI performance. These processes may be obscured, however, when performance in timing tasks is analyzed in terms of mean response rates.

  3. Chronology of Postglacial Eruptive Activity and Calculation of Eruption Probabilities for Medicine Lake Volcano, Northern California

    USGS Publications Warehouse

    Nathenson, Manuel; Donnelly-Nolan, Julie M.; Champion, Duane E.; Lowenstern, Jacob B.

    2007-01-01

    Medicine Lake volcano has had 4 eruptive episodes in its postglacial history (since 13,000 years ago) comprising 16 eruptions. Time intervals between events within the episodes are relatively short, whereas time intervals between the episodes are much longer. An updated radiocarbon chronology for these eruptions is presented that uses paleomagnetic data to constrain the choice of calibrated ages. This chronology is used with exponential, Weibull, and mixed-exponential probability distributions to model the data for time intervals between eruptions. The mixed exponential distribution is the best match to the data and provides estimates for the conditional probability of a future eruption given the time since the last eruption. The probability of an eruption at Medicine Lake volcano in the next year from today is 0.00028.
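
    The advantage of the mixed-exponential model over a single exponential can be illustrated with a short sketch: for any survival function S(t), the conditional probability of an event in the next Δt years, given t years of quiescence, is 1 - S(t+Δt)/S(t). A single exponential is memoryless, so this probability never changes, whereas a mixture of a short-interval and a long-interval mode yields a conditional probability that falls as the quiescent time grows. The parameter values below are illustrative, not taken from the Medicine Lake chronology.

```python
import math

def cond_prob(survival, t, dt):
    """P(event in (t, t+dt] | no event by time t) = 1 - S(t+dt)/S(t)."""
    return 1.0 - survival(t + dt) / survival(t)

# single exponential with mean mu (illustrative value, not from the paper)
mu = 800.0
S_exp = lambda t: math.exp(-t / mu)

# mixed exponential: weight w on a short-interval mode, 1 - w on a long one
w, mu1, mu2 = 0.7, 100.0, 3000.0
S_mix = lambda t: w * math.exp(-t / mu1) + (1 - w) * math.exp(-t / mu2)

# memorylessness: the exponential's conditional probability ignores elapsed time
print(cond_prob(S_exp, 0.0, 1.0), cond_prob(S_exp, 500.0, 1.0))
# the mixture's conditional probability falls as quiescence lengthens
print(cond_prob(S_mix, 0.0, 1.0), cond_prob(S_mix, 500.0, 1.0))
```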

  4. Return Intervals Approach to Financial Fluctuations

    NASA Astrophysics Data System (ADS)

    Wang, Fengzhong; Yamasaki, Kazuko; Havlin, Shlomo; Stanley, H. Eugene

    Financial fluctuations play a key role in studies of financial markets. A new approach focusing on the properties of return intervals can help to achieve a better understanding of these fluctuations. A return interval is defined as the time between two successive volatilities above a given threshold. We review recent studies and analyze the 1000 most traded stocks in the US stock markets. We find that the distribution of return intervals is well approximated by a scaling form over a wide range of thresholds. The scaling is also valid for various time windows, from one minute up to one trading day. Moreover, these results are universal for stocks of different countries, commodities, interest rates, as well as currencies. Further analysis shows some systematic deviations from the scaling law, which are due to nonlinear correlations in the volatility sequence. We also examine the memory in return intervals for different time scales, which is related to the long-term correlations in the volatility. Furthermore, we test two popular models, FIGARCH and fractional Brownian motion (fBm). Both models can capture the memory effect, but only fBm shows good scaling in the return interval distribution.
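
    The return-interval construction defined above is straightforward to sketch: record the times at which the volatility exceeds the threshold and difference them. The volatility series and threshold below are illustrative toy values, not market data.

```python
def return_intervals(volatility, q):
    """Times between successive volatilities exceeding threshold q."""
    hits = [t for t, v in enumerate(volatility) if v > q]
    return [b - a for a, b in zip(hits, hits[1:])]

# toy volatility series (illustrative values, not market data)
vol = [0.1, 0.9, 0.2, 0.3, 1.1, 0.05, 0.8, 1.4, 0.2]
print(return_intervals(vol, 0.7))  # exceedances at t = 1, 4, 6, 7 -> intervals [3, 2, 1]
```

    In the scaling analyses described above, each interval sequence would then be rescaled by its mean before the distributions at different thresholds are compared.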

  5. The error and bias of supplementing a short, arid climate, rainfall record with regional vs. global frequency analysis

    NASA Astrophysics Data System (ADS)

    Endreny, Theodore A.; Pashiardis, Stelios

    2007-02-01

    Robust and accurate estimates of rainfall frequencies are difficult to make with short, arid-climate rainfall records; however, new regional and global methods were used to supplement such a constrained 15-34 yr record in Cyprus. The impact of supplementing rainfall frequency analysis with the regional and global approaches was measured with relative bias and root mean square error (RMSE) values. Analysis considered 42 stations with 8 time intervals (5-360 min) in four regions delineated by proximity to sea and elevation. Regional statistical algorithms found the sites passed discordancy tests of coefficient of variation, skewness and kurtosis, while heterogeneity tests revealed the regions were homogeneous to mildly heterogeneous. Rainfall depths were simulated in the regional analysis method 500 times, and then goodness of fit tests identified the best candidate distribution as the general extreme value (GEV) Type II. In the regional analysis, the method of L-moments was used to estimate location, shape, and scale parameters. In the global based analysis, the distribution was a priori prescribed as GEV Type II, a shape parameter was a priori set to 0.15, and a time interval term was constructed to use one set of parameters for all time intervals. Relative RMSE values were approximately equal at 10% for the regional and global method when regions were compared, but when time intervals were compared the global method RMSE had a parabolic-shaped time interval trend. Relative bias values were also approximately equal for both methods when regions were compared, but again a parabolic-shaped time interval trend was found for the global method. The global method relative RMSE and bias trended with time interval, which may be caused by fitting a single scale value for all time intervals.

  6. Quantification of transuranic elements by time interval correlation spectroscopy of the detected neutrons

    PubMed

    Baeten; Bruggeman; Paepen; Carchon

    2000-03-01

    The non-destructive quantification of transuranic elements in nuclear waste management or in safeguards verifications is commonly performed by passive neutron assay techniques. To minimise the number of unknown sample-dependent parameters, Neutron Multiplicity Counting (NMC) is applied. We developed a new NMC-technique, called Time Interval Correlation Spectroscopy (TICS), which is based on the measurement of Rossi-alpha time interval distributions. Compared to other NMC-techniques, TICS offers several advantages.

  7. A model of interval timing by neural integration

    PubMed Central

    Simen, Patrick; Balci, Fuat; deSouza, Laura; Cohen, Jonathan D.; Holmes, Philip

    2011-01-01

    We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes; that correlations among them can be largely cancelled by balancing excitation and inhibition; that neural populations can act as integrators; and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule’s predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior. PMID:21697374

  8. Quantitative analysis of ground penetrating radar data in the Mu Us Sandland

    NASA Astrophysics Data System (ADS)

    Fu, Tianyang; Tan, Lihua; Wu, Yongqiu; Wen, Yanglei; Li, Dawei; Duan, Jinlong

    2018-06-01

    Ground penetrating radar (GPR), which can reveal the sedimentary structure and development process of dunes, is widely used to evaluate aeolian landforms. The interpretations for GPR profiles are mostly based on qualitative descriptions of geometric features of the radar reflections. This research quantitatively analyzed the waveform parameter characteristics of different radar units by extracting the amplitude and time interval parameters of GPR data in the Mu Us Sandland in China, and then identified and interpreted different sedimentary structures. The results showed that different types of radar units had specific waveform parameter characteristics. The main waveform parameter characteristics of sand dune radar facies and sandstone radar facies included low amplitudes and wide ranges of time intervals, ranging from 0 to 0.25 and 4 to 33 ns respectively, and the mean amplitudes changed gradually with time intervals. The amplitude distribution curves of various sand dune radar facies were similar as unimodal distributions. The radar surfaces showed high amplitudes with time intervals concentrated in high-value areas, ranging from 0.08 to 0.61 and 9 to 34 ns respectively, and the mean amplitudes changed drastically with time intervals. The amplitude and time interval values of lacustrine radar facies were between that of sand dune radar facies and radar surfaces, ranging from 0.08 to 0.29 and 11 to 30 ns respectively, and the mean amplitude and time interval curve was approximately trapezoidal. The quantitative extraction and analysis of GPR reflections could help distinguish various radar units and provide evidence for identifying sedimentary structure in aeolian landforms.

  9. Statistical physics approaches to financial fluctuations

    NASA Astrophysics Data System (ADS)

    Wang, Fengzhong

    2009-12-01

    Complex systems attract many researchers from various scientific fields. Financial markets are one of these widely studied complex systems. Statistical physics, which was originally developed to study large systems, provides novel ideas and powerful methods to analyze financial markets. The study of financial fluctuations characterizes market behavior, and helps to better understand the underlying market mechanism. Our study focuses on volatility, a fundamental quantity to characterize financial fluctuations. We examine equity data of the entire U.S. stock market during 2001 and 2002. To analyze the volatility time series, we develop a new approach, called return interval analysis, which examines the time intervals between two successive volatilities exceeding a given value threshold. We find that the return interval distribution displays scaling over a wide range of thresholds. This scaling is valid for a range of time windows, from one minute up to one day. Moreover, our results are similar for commodities, interest rates, currencies, and for stocks of different countries. Further analysis shows some systematic deviations from a scaling law, which we can attribute to nonlinear correlations in the volatility time series. We also find a memory effect in return intervals for different time scales, which is related to the long-term correlations in the volatility. To further characterize the mechanism of price movement, we simulate the volatility time series using two different models, fractionally integrated generalized autoregressive conditional heteroscedasticity (FIGARCH) and fractional Brownian motion (fBm), and test these models with the return interval analysis. We find that both models can mimic time memory but only fBm shows scaling in the return interval distribution. In addition, we examine the volatility of daily opening to closing and of closing to opening. We find that each volatility distribution has a power law tail. 
Using the detrended fluctuation analysis (DFA) method, we show long-term auto-correlations in these volatility time series. We also analyze return, the actual price changes of stocks, and find that the returns over the two sessions are often anti-correlated.

  10. Monitoring molecular interactions using photon arrival-time interval distribution analysis

    DOEpatents

    Laurence, Ted A [Livermore, CA; Weiss, Shimon [Los Angeles, CA

    2009-10-06

    A method for analyzing/monitoring the properties of species that are labeled with fluorophores. A detector is used to detect photons emitted from species that are labeled with one or more fluorophores and located in a confocal detection volume. The arrival time of each of the photons is determined. The interval of time between various photon pairs is then determined to provide photon pair intervals. The number of photons that have arrival times within the photon pair intervals is also determined. The photon pair intervals are then used in combination with the corresponding counts of intervening photons to analyze properties and interactions of the molecules including brightness, concentration, coincidence and transit time. The method can be used for analyzing single photon streams and multiple photon streams.

  11. Demonstration of fundamental statistics by studying timing of electronics signals in a physics-based laboratory

    NASA Astrophysics Data System (ADS)

    Beach, Shaun E.; Semkow, Thomas M.; Remling, David J.; Bradt, Clayton J.

    2017-07-01

    We have developed accessible methods to demonstrate fundamental statistics in several phenomena, in the context of teaching electronic signal processing in a physics-based college-level curriculum. A relationship between the exponential time-interval distribution and Poisson counting distribution for a Markov process with constant rate is derived in a novel way and demonstrated using nuclear counting. Negative binomial statistics is demonstrated as a model for overdispersion and justified by the effect of electronic noise in nuclear counting. The statistics of digital packets on a computer network are shown to be compatible with the fractal-point stochastic process leading to a power-law as well as generalized inverse Gaussian density distributions of time intervals between packets.
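
    The relationship invoked above between exponential time intervals and the Poisson counting distribution for a constant-rate Markov process can be checked with a short simulation: generate arrivals with exponential waiting times and count how many fall in a fixed window; the counts should have mean and variance both equal to rate × window, the Poisson signature. This is a generic sketch, not the authors' laboratory code.

```python
import random
from statistics import mean, variance

random.seed(42)
rate, T, trials = 3.0, 1.0, 20000

counts = []
for _ in range(trials):
    t, n = 0.0, 0
    while True:
        t += random.expovariate(rate)  # exponential inter-event times
        if t > T:
            break
        n += 1
    counts.append(n)

# for a Poisson(rate * T) count distribution, mean == variance == rate * T
print(mean(counts), variance(counts))
```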

  12. Analysis of aggregated tick returns: Evidence for anomalous diffusion

    NASA Astrophysics Data System (ADS)

    Weber, Philipp

    2007-01-01

    In order to investigate the origin of large price fluctuations, we analyze stock price changes of ten frequently traded NASDAQ stocks in the year 2002. Though the influence of the trading frequency on the aggregate return in a certain time interval is important, it cannot alone explain the heavy-tailed distribution of stock price changes. For this reason, we analyze intervals with a fixed number of trades in order to eliminate the influence of the trading frequency and investigate the relevance of other factors for the aggregate return. We show that in tick time the price follows a discrete diffusion process with a variable step width while the difference between the number of steps in positive and negative direction in an interval is Gaussian distributed. The step width is given by the return due to a single trade and is long-term correlated in tick time. Hence, its mean value can well characterize an interval of many trades and turns out to be an important determinant for large aggregate returns. We also present a statistical model reproducing the cumulative distribution of aggregate returns. For an accurate agreement with the empirical distribution, we also take into account asymmetries of the step widths in different directions together with cross correlations between these asymmetries and the mean step width as well as the signs of the steps.

  13. Timing in a Variable Interval Procedure: Evidence for a Memory Singularity

    PubMed Central

    Matell, Matthew S.; Kim, Jung S.; Hartshorne, Loryn

    2013-01-01

    Rats were trained in either a 30s peak-interval procedure, or a 15–45s variable interval peak procedure with a uniform distribution (Exp 1) or a ramping probability distribution (Exp 2). Rats in all groups showed peak shaped response functions centered around 30s, with the uniform group having an earlier and broader peak response function and rats in the ramping group having a later peak function as compared to the single duration group. The changes in these mean functions, as well as the statistics from single trial analyses, can be better captured by a model of timing in which memory is represented by a single, average, delay to reinforcement compared to one in which all durations are stored as a distribution, such as the complete memory model of Scalar Expectancy Theory or a simple associative model. PMID:24012783

  14. Characterization of Fissile Assemblies Using Low-Efficiency Detection Systems

    DOE PAGES

    Chapline, George F.; Verbeke, Jerome M.

    2017-02-02

    Here, we have investigated the possibility that the amount, chemical form, multiplication, and shape of the fissile material in an assembly can be passively assayed using scintillator detection systems by measuring only the fast neutron pulse height distribution and the distribution of time intervals Δt between fast neutrons. We have previously demonstrated that the alpha-ratio can be obtained from the observed pulse height distribution for fast neutrons. In this paper we report that when the distribution of time intervals is plotted as a function of log Δt, the position of the correlated neutron peak is nearly independent of detector efficiency and determines the internal relaxation rate for fast neutrons. If this information is combined with knowledge of the alpha-ratio, then the position of the minimum between the correlated and uncorrelated peaks can be used to rapidly estimate the mass, multiplication, and shape of fissile material. This method does not require a priori knowledge of either the efficiency for neutron detection or the alpha-ratio. Although our method neglects 3-neutron correlations, we have used previously obtained experimental data for metallic and oxide forms of Pu to demonstrate that our method yields good estimates for multiplications as large as 2, and that the only constraint on detector efficiency/observation time is that a peak in the interval time distribution due to correlated neutrons is visible.

  15. Constraining the Long-Term Average of Earthquake Recurrence Intervals From Paleo- and Historic Earthquakes by Assimilating Information From Instrumental Seismicity

    NASA Astrophysics Data System (ADS)

    Zoeller, G.

    2017-12-01

    Paleo- and historic earthquakes are the most important source of information for the estimation of long-term recurrence intervals in fault zones, because sequences of paleoearthquakes cover more than one seismic cycle. On the other hand, these events are often rare, dating uncertainties are enormous, and the problem of missing or misinterpreted events leads to additional difficulties. Taking these shortcomings into account, estimates of long-term recurrence intervals are usually unstable as long as no additional information is included. In the present study, we assume that the time to the next major earthquake depends on the rate of small and intermediate events between the large ones, in terms of a "clock-change" model that leads to a Brownian Passage Time distribution for recurrence intervals. We take advantage of an earlier finding that the aperiodicity of this distribution can be related to the Gutenberg-Richter b-value, which is usually around one and can be estimated easily from instrumental seismicity in the region under consideration. This allows the uncertainties in the estimation of the mean recurrence interval to be reduced significantly, especially for short paleoearthquake sequences and high dating uncertainties. We present illustrative case studies from Southern California and compare the method with the commonly used approach of exponentially distributed recurrence times assuming a stationary Poisson process.

  16. Raindrop intervalometer

    NASA Astrophysics Data System (ADS)

    van de Giesen, Nicolaas; Hut, Rolf; ten Veldhuis, Marie-claire

    2017-04-01

    If one can assume that drop size distributions can be effectively described by a generalized gamma function [1], one can estimate this function on the basis of the distribution of time intervals between drops hitting a certain area. The arrival of a single drop is relatively easy to measure with simple consumer devices such as cameras or piezoelectric elements. Here we present an open-hardware design for the electronics and statistical processing of an intervalometer that measures time intervals between drop arrivals. The specific hardware in this case is a piezoelectric element in an appropriate housing, combined with an instrumentation op-amp and an Arduino processor. Although it would not be too difficult to simply register the arrival times of all drops, it is more practical to only report the main statistics. For this purpose, all intervals below a certain threshold during a reporting interval are summed and counted. We also sum the scaled squares, cubes, and fourth powers of the intervals. On the basis of the first four moments, one can estimate the corresponding generalized gamma function and obtain some sense of the accuracy of the underlying assumptions. Special attention is needed to determine the lower threshold of the drop sizes that can be measured. This minimum size often varies over the area being monitored, such as is the case for piezoelectric elements. We describe a simple method to determine these (distributed) minimal drop sizes and present a bootstrap method to make the necessary corrections. Reference [1] Uijlenhoet, R., and J. N. M. Stricker. "A consistent rainfall parameterization based on the exponential raindrop size distribution." Journal of Hydrology 218, no. 3 (1999): 101-127.
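
    The moment-accumulation scheme described above (sum the intervals and their powers, count them, and discard gaps beyond a threshold) might be sketched as follows. The class name, cutoff value, and arrival times are illustrative, not part of the published open-hardware design, and this sketch uses raw rather than scaled powers.

```python
class IntervalMoments:
    """Accumulate the first four raw moments of drop inter-arrival times."""

    def __init__(self, cutoff):
        self.cutoff = cutoff          # ignore implausibly long gaps (reporting threshold)
        self.n = 0                    # number of intervals retained
        self.sums = [0.0] * 4         # running sums of dt, dt^2, dt^3, dt^4
        self.last = None              # previous drop arrival time

    def record(self, arrival_time):
        if self.last is not None:
            dt = arrival_time - self.last
            if dt < self.cutoff:
                self.n += 1
                for k in range(4):
                    self.sums[k] += dt ** (k + 1)
        self.last = arrival_time

    def moments(self):
        if self.n == 0:
            return [0.0] * 4
        return [s / self.n for s in self.sums]

im = IntervalMoments(cutoff=5.0)
for t in [0.0, 0.4, 1.0, 1.5, 9.0, 9.2]:  # illustrative arrival times (seconds)
    im.record(t)
print(im.n, im.moments())  # the 7.5 s gap is discarded by the cutoff
```

    From the four moments one can then fit the generalized gamma function, as the abstract describes.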

  17. Oscillatory dynamics of an intravenous glucose tolerance test model with delay interval

    NASA Astrophysics Data System (ADS)

    Shi, Xiangyun; Kuang, Yang; Makroglou, Athena; Mokshagundam, Sriprakash; Li, Jiaxu

    2017-11-01

    Type 2 diabetes mellitus (T2DM) has become a prevalent pandemic disease in view of the modern lifestyle. Both the diabetic population and the associated health expenses are growing rapidly, according to the American Diabetes Association. Detecting the potential onset of T2DM is an essential focal point in the research of diabetes mellitus. The intravenous glucose tolerance test (IVGTT) is an effective protocol to determine the insulin sensitivity, glucose effectiveness, and pancreatic β-cell functionality, through the analysis and parameter estimation of a proper differential equation model. Delay differential equations have been used to study complex physiological phenomena, including the glucose and insulin regulations. In this paper, we propose a novel approach to model the time delay in IVGTT modeling. This approach uses two parameters to simulate not only a discrete time delay or a delay distributed over the whole past interval, but also a time delay distributed over a past sub-interval. Normally, a larger time delay, either discrete or distributed, will destabilize the system. However, we find that a time delay over a sub-interval might not. We present analytically some basic model properties, which are desirable biologically and mathematically. We show that this relatively simple model provides a good fit to fluctuating patient data sets and reveals some intriguing dynamics. Moreover, our numerical simulation results indicate that our model may remove a defect in the well-known Minimal Model, which often overestimates the glucose effectiveness index.

  18. Recurrence and interoccurrence behavior of self-organized complex phenomena

    NASA Astrophysics Data System (ADS)

    Abaimov, S. G.; Turcotte, D. L.; Shcherbakov, R.; Rundle, J. B.

    2007-08-01

    The sandpile, forest-fire and slider-block models are said to exhibit self-organized criticality. Associated natural phenomena include landslides, wildfires, and earthquakes. In all cases the frequency-size distributions are well approximated by power laws (fractals). Another important aspect of both the models and natural phenomena is the statistics of interval times. These statistics are particularly important for earthquakes. For earthquakes it is important to make a distinction between interoccurrence and recurrence times. Interoccurrence times are the interval times between earthquakes on all faults in a region, whereas recurrence times are interval times between earthquakes on a single fault or fault segment. In many, but not all, cases interoccurrence time statistics are exponential (Poissonian) and the events occur randomly. However, the distribution of recurrence times is often Weibull to a good approximation. In this paper we study the interval statistics of slip events using a slider-block model. The behavior of this model is sensitive to the stiffness α of the system, α=kC/kL, where kC is the spring constant of the connector springs and kL is the spring constant of the loader plate springs. For a soft system (small α) there are no system-wide events and the interoccurrence time statistics of the larger events are Poissonian. For a stiff system (large α), system-wide events dominate the energy dissipation and the statistics of the recurrence times between these system-wide events satisfy the Weibull distribution to a good approximation. We argue that this applicability of the Weibull distribution is due to the power-law (scale invariant) behavior of the hazard function, i.e. the probability that the next event will occur at a time t0 after the last event has a power-law dependence on t0. The Weibull distribution is the only distribution that has a scale-invariant hazard function.
We further show that the onset of system-wide events is a well defined critical point. We find that the number of system-wide events NSWE satisfies the scaling relation NSWE ∝(α-αC)δ where αC is the critical value of the stiffness. The system-wide events represent a new phase for the slider-block system.
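
    The scale-invariance property claimed for the Weibull hazard function can be verified directly: for a Weibull distribution with shape β and scale η, the hazard h(t) = f(t)/S(t) reduces to (β/η)(t/η)^(β-1), so h(ct)/h(t) = c^(β-1) independently of t. The numerical values below are illustrative.

```python
import math

def weibull_hazard(t, shape, scale):
    """Hazard h(t) = f(t)/S(t) for the Weibull distribution."""
    f = (shape / scale) * (t / scale) ** (shape - 1) * math.exp(-(t / scale) ** shape)
    s = math.exp(-(t / scale) ** shape)
    return f / s  # analytically: (shape/scale) * (t/scale)**(shape-1)

# scale-invariance check: h(c*t)/h(t) == c**(shape-1), regardless of t
shape, scale = 1.8, 10.0
ratio = weibull_hazard(4.0, shape, scale) / weibull_hazard(2.0, shape, scale)
print(ratio, 2.0 ** (shape - 1))  # the two values agree
```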

  19. Osseous metastatic pattern in breast-cancer - relation between anatomical distribution and ulterior visceral involvement.

    PubMed

    Vallejo, C; Perez, J; Rodriguez, R; Cuevas, M; Machiavelli, M; Lacava, J; Romero, A; Rabinovich, M; Leone, B

    1994-03-01

    The development of ultimate visceral metastases and the visceral metastases-free time interval was evaluated in patients with breast carcinoma bearing bone-only metastases. Ninety patients were identified and were subdivided into three groups according to the anatomic distribution of osseous lesions: group A with osseous involvement cranial to the lumbosacral junction, group B caudal to this, and group C with lesions in both areas. The purpose of this subdivision was to evaluate if there is any correlation between bone-metastases distribution and probability of developing visceral lesions. All patients received systemic therapy consisting of hormonal therapy, chemotherapy or both. The median survival for the whole group was 28 months, whereas it was 33, 43 and 26 months for patients in groups A, B and C, respectively (p=NS). No differences in subsequent visceral involvement and visceral-free time interval were observed among the three groups of patients regardless of tumor burden. In conclusion, our analyses did not show significant differences in the incidence of visceral metastases, visceral metastases-free time interval and overall survival in patients with breast cancer with bone-only lesions independently of anatomic distribution.

  20. Steady state, relaxation and first-passage properties of a run-and-tumble particle in one-dimension

    NASA Astrophysics Data System (ADS)

    Malakar, Kanaya; Jemseena, V.; Kundu, Anupam; Vijay Kumar, K.; Sabhapandit, Sanjib; Majumdar, Satya N.; Redner, S.; Dhar, Abhishek

    2018-04-01

    We investigate the motion of a run-and-tumble particle (RTP) in one dimension. We find the exact probability distribution of the particle with and without diffusion on the infinite line, as well as in a finite interval. In the infinite domain, this probability distribution approaches a Gaussian form in the long-time limit, as in the case of a regular Brownian particle. At intermediate times, this distribution exhibits unexpected multi-modal forms. In a finite domain, the probability distribution reaches a steady-state form with peaks at the boundaries, in contrast to a Brownian particle. We also study the relaxation to the steady-state analytically. Finally we compute the survival probability of the RTP in a semi-infinite domain with an absorbing boundary condition at the origin. In the finite interval, we compute the exit probability and the associated exit times. We provide numerical verification of our analytical results.
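
    A minimal simulation of the run-and-tumble dynamics makes the long-time diffusive behavior on the infinite line easy to check: the particle moves at fixed speed and reverses direction at a constant tumbling rate, and at times long compared with 1/γ its position distribution has zero mean and a variance growing linearly in time. This is a generic time-discretized sketch under those assumptions, not the authors' exact formulation.

```python
import random
from statistics import mean, variance

random.seed(1)

def rtp_position(v, gamma, t_max, dt=1e-3):
    """Simulate a 1-D run-and-tumble particle: speed v, direction flips at rate gamma."""
    x = 0.0
    s = random.choice([-1, 1])  # initial direction
    t = 0.0
    while t < t_max:
        x += s * v * dt
        if random.random() < gamma * dt:  # tumble event in this time step
            s = -s
        t += dt
    return x

# at times long compared with 1/gamma the walk is diffusive: zero mean,
# variance approximately (v**2 / gamma) * t_max
xs = [rtp_position(v=1.0, gamma=5.0, t_max=2.0) for _ in range(1000)]
print(mean(xs), variance(xs))
```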

  1. Statistical inferences with jointly type-II censored samples from two Pareto distributions

    NASA Astrophysics Data System (ADS)

    Abu-Zinadah, Hanaa H.

    2017-08-01

    In several industries, products come from more than one production line, which makes comparative life tests necessary. Such tests require sampling from the different production lines, which gives rise to a joint censoring scheme. In this article we consider the Pareto lifetime distribution under a jointly type-II censoring scheme. The maximum likelihood estimators (MLE) and the corresponding approximate confidence intervals, as well as the bootstrap confidence intervals, of the model parameters are obtained. Bayesian point estimates and credible intervals of the model parameters are also presented. A lifetime data set is analyzed for illustrative purposes. Monte Carlo results from simulation studies are presented to assess the performance of our proposed method.

  2. Study of temperature distributions in wafer exposure process

    NASA Astrophysics Data System (ADS)

    Lin, Zone-Ching; Wu, Wen-Jang

    During the exposure process of photolithography, the wafer absorbs the exposure energy, which results in a rising temperature and thermal expansion. This phenomenon was often neglected due to its limited effect in previous process generations. In the new generation of processes, however, it may well become a factor to be considered. In this paper, a finite element model for analyzing the transient behavior of the distribution of wafer temperature during exposure was established under the assumption that the wafer was clamped by a vacuum chuck without warpage. The model is capable of simulating the distribution of the wafer temperature under different exposure conditions. The analysis proceeds from the simulation of transient behavior in a single exposure region to the variation of exposure energy, interval of exposure locations, and interval of exposure time under continuous exposure, in order to investigate the distribution of wafer temperature. The simulation results indicate that widening the interval of exposure locations has a greater impact in improving the distribution of wafer temperature than extending the interval of exposure time between neighboring image fields. Moreover, as long as the distance between the field center locations of two neighboring exposure regions exceeds a straight-line distance of three image-field widths, the interacting thermal effect during wafer exposure can be ignored. The analysis flow proposed in this paper can serve as a supporting reference tool for engineers in planning exposure paths.

  3. Automatic, time-interval traffic counts for recreation area management planning

    Treesearch

    D. L. Erickson; C. J. Liu; H. K. Cordell

    1980-01-01

    Automatic, time-interval recorders were used to count directional vehicular traffic on a multiple entry/exit road network in the Red River Gorge Geological Area, Daniel Boone National Forest. Hourly counts of entering and exiting traffic differed according to recorder location, but an aggregated distribution showed a delayed peak in exiting traffic thought to be...

  4. More accurate, calibrated bootstrap confidence intervals for correlating two autocorrelated climate time series

    NASA Astrophysics Data System (ADS)

    Olafsdottir, Kristin B.; Mudelsee, Manfred

    2013-04-01

    Estimating Pearson's correlation coefficient between two time series, to evaluate the influence of one time-dependent variable on another, is one of the most frequently used statistical methods in the climate sciences. Various methods are used to estimate a confidence interval to support the correlation point estimate. Many of them make strong mathematical assumptions about distributional shape and serial correlation that are rarely met, so more robust statistical methods are needed to increase the accuracy of the confidence intervals. Bootstrap confidence intervals are estimated in the Fortran 90 program PearsonT (Mudelsee, 2003), whose main aim was an accurate confidence interval for the correlation coefficient between two time series, obtained by taking into account the serial dependence of the process that generated the data. However, Monte Carlo experiments show that the coverage accuracy for smaller data sizes can be improved. Here we adapt the PearsonT program into a new version, PearsonT3, calibrating the confidence interval to increase its coverage accuracy. Calibration is a bootstrap resampling technique that essentially performs a second bootstrap loop, resampling from the bootstrap resamples. Like non-calibrated bootstrap confidence intervals, it is robust against the data distribution. A pairwise moving block bootstrap is used to preserve the serial correlation of both time series. The calibration is applied to standard-error-based bootstrap Student's t confidence intervals. The performance of the calibrated confidence intervals is examined with Monte Carlo simulations and compared with that of the uncalibrated intervals, that is, PearsonT. Coverage accuracy is clearly better for the calibrated intervals, with the coverage error acceptably small (within a few percentage points) already for data sizes as small as 20.
One form of climate time series is the output of numerical models that simulate the climate system. The method is applied to data from the high-resolution ocean model INALT01, where the relationship between the Agulhas Leakage and the North Brazil Current is evaluated. Preliminary results show a significant correlation between the two variables at a lag of 10 years, roughly the time it takes Agulhas Leakage water to reach the North Brazil Current. Mudelsee, M., 2003. Estimating Pearson's correlation coefficient with bootstrap confidence interval from serially dependent time series. Mathematical Geology 35, 651-665.
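
    The pairwise moving block bootstrap described above can be sketched in a few lines. This is an illustration of the resampling idea only, not the PearsonT/PearsonT3 implementation: blocks of consecutive (x, y) pairs are drawn with replacement (the same start index for both series keeps the pairing and the within-block serial correlation intact), and percentile bounds are read off the bootstrap replications. The block length and the plain percentile construction here are simplifying assumptions.

```python
import math
import random

def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def block_bootstrap_ci(x, y, block_len=5, n_boot=2000, alpha=0.05, seed=1):
    """Pairwise moving block bootstrap percentile interval for Pearson's r.
    Blocks of consecutive (x, y) pairs are resampled with replacement;
    using the same start index for both series preserves the pairing."""
    rng = random.Random(seed)
    n = len(x)
    starts = range(n - block_len + 1)
    rs = []
    for _ in range(n_boot):
        bx, by = [], []
        while len(bx) < n:
            s = rng.choice(starts)
            bx.extend(x[s:s + block_len])
            by.extend(y[s:s + block_len])
        rs.append(pearson_r(bx[:n], by[:n]))
    rs.sort()
    return rs[int(alpha / 2 * n_boot)], rs[int((1 - alpha / 2) * n_boot) - 1]
```

    The calibrated (double-bootstrap) version described in the abstract would wrap an inner resampling loop around this one to adjust the nominal coverage level.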

  5. Eliciting interval beliefs: An experimental study

    PubMed Central

    Peeters, Ronald; Wolk, Leonard

    2017-01-01

    In this paper we study the interval scoring rule as a mechanism to elicit subjective beliefs under varying degrees of uncertainty. In our experiment, subjects forecast the termination time of a time series to be generated from a given but unknown stochastic process. Subjects gradually learn more about the underlying process over time and hence the true distribution over termination times. We conduct two treatments, one with a high and one with a low volatility process. We find that elicited intervals are better when subjects are facing a low volatility process. In this treatment, participants learn to position their intervals almost optimally over the course of the experiment. This is in contrast with the high volatility treatment, where subjects, over the course of the experiment, learn to optimize the location of their intervals but fail to provide the optimal length. PMID:28380020

  6. Stochastic modeling of a serial killer

    PubMed Central

    Simkin, M.V.; Roychowdhury, V.P.

    2014-01-01

    We analyze the time pattern of the activity of a serial killer, who during twelve years had murdered 53 people. The plot of the cumulative number of murders as a function of time is of “Devil’s staircase” type. The distribution of the intervals between murders (step lengths) follows a power law with exponent 1.4. We propose a model according to which the serial killer commits murders when neuronal excitation in his brain exceeds a certain threshold. We model this neural activity as a branching process, which in turn is approximated by a random walk. As the distribution of random walk return times is a power law with exponent 1.5, the distribution of the inter-murder intervals is thus explained. We illustrate the analytical results by numerical simulation. Time pattern activity data from two other serial killers further substantiate our analysis. PMID:24721476

  7. Stochastic modeling of a serial killer.

    PubMed

    Simkin, M V; Roychowdhury, V P

    2014-08-21

    We analyze the time pattern of the activity of a serial killer, who during 12 years had murdered 53 people. The plot of the cumulative number of murders as a function of time is of "Devil's staircase" type. The distribution of the intervals between murders (step lengths) follows a power law with exponent 1.4. We propose a model according to which the serial killer commits murders when neuronal excitation in his brain exceeds a certain threshold. We model this neural activity as a branching process, which in turn is approximated by a random walk. As the distribution of random walk return times is a power law with exponent 1.5, the distribution of the inter-murder intervals is thus explained. We illustrate the analytical results by numerical simulation. Time pattern activity data from two other serial killers further substantiate our analysis. Copyright © 2014 Elsevier Ltd. All rights reserved.
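
    The 1.5 exponent for random walk return times quoted above is a classical result and can be checked directly from the first-return probability of a simple symmetric walk, P(first return at step 2n) = C(2n, n) / ((2n - 1) · 4^n). A small sketch, independent of the paper's own simulations:

```python
from math import comb, log

def first_return_pmf(n):
    """P(first return to the origin of a simple symmetric random walk
    occurs at step 2n): the classical formula comb(2n, n) / ((2n-1) * 4**n).
    For n = 1 this gives 1/2, the chance of returning at step 2."""
    return comb(2 * n, n) / ((2 * n - 1) * 4 ** n)

def local_slope(n):
    """Local log-log slope of the pmf between n and 2n; the n**(-3/2)
    tail means this approaches -1.5 for large n."""
    return (log(first_return_pmf(2 * n)) - log(first_return_pmf(n))) / log(2)
```

    At n = 1000 the local slope is already within about 0.001 of -1.5, matching the exponent-1.5 power law invoked in the abstract.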

  8. A new variable interval schedule with constant hazard rate and finite time range.

    PubMed

    Bugallo, Mehdi; Machado, Armando; Vasconcelos, Marco

    2018-05-27

    We propose a new variable interval (VI) schedule that achieves constant probability of reinforcement in time while using a bounded range of intervals. By sampling each trial duration from a uniform distribution ranging from 0 to 2 T seconds, and then applying a reinforcement rule that depends linearly on trial duration, the schedule alternates reinforced and unreinforced trials, each less than 2 T seconds, while preserving a constant hazard function. © 2018 Society for the Experimental Analysis of Behavior.
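
    The schedule's mechanics, as described in the abstract, can be sketched as follows. The specific linear rule used here (reinforcement probability d / (2T) for a trial of duration d) is an assumption for illustration; the exact rule in the paper may differ.

```python
import random

def vi_trials(T=30.0, n=10000, seed=0):
    """Sketch of the proposed VI schedule: trial durations drawn uniformly
    on [0, 2T], with reinforcement arranged with probability d / (2T) for
    a trial of duration d (an assumed linear rule, for illustration)."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n):
        d = rng.uniform(0.0, 2.0 * T)
        reinforced = rng.random() < d / (2.0 * T)  # linear rule (assumption)
        trials.append((d, reinforced))
    return trials
```

    Under this rule every interval is bounded by 2T, the mean trial duration is T, and on average half of the trials are reinforced.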

  9. MEASUREMENT OF TIME INTERVALS FOR TIME CORRELATED RADIOACTIVE DECAY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lindeman, H.; Mornel, E.; Galil, U.

    1960-11-01

    The distribution of time intervals between successive counts was measured for radioactive decay in the thorium series. The measurements showed that the classical Marsden-Barratt law does not apply to this case of time-correlated decay. They appeared, however, to be in agreement with the theory of Lindeman-Rosen, taking into account the fact that the counter receives only the radiation emitted into a solid angle close to 2π. (auth)

  10. Fluctuations in time intervals of financial data from the view point of the Gini index

    NASA Astrophysics Data System (ADS)

    Sazuka, Naoya; Inoue, Jun-ichi

    2007-09-01

    We propose an approach to explaining fluctuations in the time intervals of financial market data from the viewpoint of the Gini index. We derive the explicit form of the Gini index for a Weibull distribution, a good candidate for describing the first-passage time of foreign exchange rates. The analytical expression for the Gini index agrees well with the value obtained from empirical data.
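
    The closed form alluded to in the abstract is the standard Gini index of a Weibull distribution with shape parameter k, namely G = 1 - 2^(-1/k); the scale parameter cancels. A minimal sketch comparing it with the empirical Gini index of a simulated sample:

```python
import random

def gini(xs):
    """Empirical Gini index via the sorted-data identity
    G = 2 * sum(i * x_(i)) / (n * sum(x)) - (n + 1) / n, with i = 1..n."""
    xs = sorted(xs)
    n = len(xs)
    total = sum(xs)
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n

def gini_weibull(shape):
    """Closed-form Gini index of a Weibull distribution:
    1 - 2**(-1/shape), independent of the scale parameter."""
    return 1.0 - 2.0 ** (-1.0 / shape)

# quick Monte Carlo check for shape 1.5 (arbitrary scale)
rng = random.Random(42)
sample = [rng.weibullvariate(1.0, 1.5) for _ in range(200_000)]
```

    For shape 1 the Weibull reduces to an exponential and the index is exactly 0.5; the simulated value for shape 1.5 matches the closed form to a few decimal places.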

  11. Modified stochastic fragmentation of an interval as an ageing process

    NASA Astrophysics Data System (ADS)

    Fortin, Jean-Yves

    2018-02-01

    We study a stochastic model based on a modified fragmentation of a finite interval. The mechanism consists of cutting the interval at a random location and substituting a unique fragment on the right of the cut to regenerate and preserve the interval length. This leads to a set of segments of random sizes, with the accumulation of small fragments near the origin. This model is an example of record dynamics, with the presence of ‘quakes’ and slow dynamics. The fragment size distribution is a universal inverse power law with logarithmic corrections. The exact distribution of the fragment number as a function of time is simply related to the unsigned Stirling numbers of the first kind. Two-time correlation functions are defined and computed exactly. They satisfy scaling relations and exhibit aging phenomena. In particular, the probability that the same number of fragments is found at two different times t > s is asymptotically equal to [4π log(s)]^(-1/2) when s ≫ 1 and the ratio t/s is fixed, in agreement with numerical simulations. The same process with a reset impedes the aging phenomenon beyond a typical time scale defined by the reset parameter.
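
    The connection to unsigned Stirling numbers of the first kind can be made concrete through their defining recurrence, c(n, k) = c(n-1, k-1) + (n-1) · c(n-1, k). The sketch below builds a row of the triangle and the normalized distribution c(n, k)/n!; treating this as the fragment-number distribution is an illustration of the abstract's claim, not a reproduction of the paper's derivation.

```python
from math import factorial

def stirling1_unsigned(n):
    """Row of unsigned Stirling numbers of the first kind, c(n, 0..n),
    built from the recurrence c(n, k) = c(n-1, k-1) + (n-1) * c(n-1, k)."""
    row = [1]  # c(0, 0) = 1
    for m in range(1, n + 1):
        new = [0] * (m + 1)
        for k in range(1, m + 1):
            right = row[k] if k < len(row) else 0
            new[k] = row[k - 1] + (m - 1) * right
        row = new
    return row

def fragment_number_pmf(n):
    """c(n, k) / n! -- the record-statistics distribution the abstract
    relates to the number of fragments after n cuts (illustrative)."""
    nf = factorial(n)
    return [c / nf for c in stirling1_unsigned(n)]
```

    The normalization check Σ_k c(n, k) = n! confirms that the row is a proper probability distribution once divided by n!.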

  12. Timescale- and Sensory Modality-Dependency of the Central Tendency of Time Perception.

    PubMed

    Murai, Yuki; Yotsumoto, Yuko

    2016-01-01

    When individuals are asked to reproduce stimulus intervals of various durations presented in intermixed order, longer intervals are often underestimated and shorter intervals overestimated. This phenomenon, attributed to the central tendency of time perception, suggests that our brain optimally encodes a stimulus interval based on the current stimulus input and prior knowledge of the distribution of stimulus intervals. Two distinct systems are thought to be recruited in the perception of sub- and supra-second intervals: sub-second timing is subject to local sensory processing, whereas supra-second timing depends on more centralized mechanisms. To clarify the factors that influence time perception, the present study investigated how both sensory modality and timescale affect the central tendency. In Experiment 1, participants were asked to reproduce sub- or supra-second intervals defined by visual or auditory stimuli. In the sub-second range, the magnitude of the central tendency was significantly larger for visual than for auditory intervals, while visual and auditory intervals exhibited a correlated and comparable central tendency in the supra-second range. In Experiment 2, the ability to discriminate sub-second intervals in the reproduction task was controlled across modalities using an interval discrimination task. Even with discrimination ability controlled, visual intervals exhibited a larger central tendency than auditory intervals in the sub-second range. In addition, the magnitudes of the central tendency for visual and auditory sub-second intervals were significantly correlated. These results suggest that a common modality-independent mechanism is responsible for the supra-second central tendency, and that both modality-dependent and modality-independent components of the timing system contribute to the central tendency in the sub-second range.
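
    The "optimal encoding" account of the central tendency is commonly formalized as a Bayesian observer that combines a noisy measurement with a Gaussian prior over intervals: the posterior mean shrinks toward the prior mean, and noisier measurements (e.g., a less precise modality) shrink more. The sketch below is this textbook model, not the authors' own analysis, and all parameter values are illustrative.

```python
def bayes_reproduction(x, prior_mean, prior_var, noise_var):
    """Posterior mean for a Gaussian prior over intervals combined with a
    Gaussian-noise measurement x: a weighted average that shrinks toward
    the prior mean.  Larger noise_var (a less reliable modality) gives a
    smaller weight w and hence a stronger central tendency."""
    w = prior_var / (prior_var + noise_var)
    return w * x + (1.0 - w) * prior_mean
```

    With a prior centered at 600 ms, a 900 ms interval is reproduced short and a 300 ms one long, and raising the measurement noise pulls both reproductions further toward 600 ms, mirroring the larger visual central tendency reported above.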

  13. Prediction of future asset prices

    NASA Astrophysics Data System (ADS)

    Seong, Ng Yew; Hin, Pooi Ah; Ching, Soo Huei

    2014-12-01

    This paper attempts to incorporate trading volume as an additional predictor of asset prices. Denoting by r(t) the vector consisting of the time-t values of the trading volume and price of a given asset, we model the time-(t+1) asset price as dependent on the present and l-1 past values r(t), r(t-1), ..., r(t-l+1) via a conditional distribution derived from a (2l+1)-dimensional power-normal distribution. A prediction interval based on the 100(α/2)% and 100(1-α/2)% points of the conditional distribution is then obtained. By examining the average lengths of the prediction intervals obtained using the composite indices of the Malaysian stock market for the period 2008 to 2013, we found that the value 2 appears to be a good choice for l. With the omission of the trading volume from the vector r(t), the corresponding prediction interval exhibits a slightly longer average length, suggesting that it is desirable to keep trading volume as a predictor. From the above conditional distribution, the probability that the time-(t+1) asset price will be larger than the time-t asset price is next computed. When this probability differs from 0 (or 1) by less than 0.03, the observed time-(t+1) price change tends to be negative (or positive). Thus the above probability has good potential to be used as a market indicator in technical analysis.

  14. The 22nd Annual Precise Time and Time Interval (PTTI) Applications and Planning Meeting

    NASA Technical Reports Server (NTRS)

    Sydnor, Richard L. (Editor)

    1990-01-01

    Papers presented at the 22nd Annual Precise Time and Time Interval (PTTI) Applications and Planning Meeting are compiled. The following subject areas are covered: Rb, Cs, and H-based frequency standards and cryogenic and trapped-ion technology; satellite laser tracking networks, GLONASS timing, intercomparison of national time scales and international telecommunications; telecommunications, power distribution, platform positioning, and geophysical survey industries; military communications and navigation systems; and dissemination of precise time and frequency by means of GPS, GLONASS, MILSTAR, LORAN, and synchronous communication satellites.

  15. Crackles and instabilities during lung inflation

    NASA Astrophysics Data System (ADS)

    Alencar, Adriano M.; Majumdar, Arnab; Hantos, Zoltan; Buldyrev, Sergey V.; Eugene Stanley, H.; Suki, Béla

    2005-11-01

    In a variety of physico-chemical reactions, the actual process takes place in a reactive zone, called the “active surface”. We define the active surface of the lung as the set of airway segments that are closed but connected to the trachea through an open pathway, which is the interface between closed and open regions in a collapsed lung. To study the active surface and the time interval between consecutive openings, we measured the sound pressure of crackles, associated with the opening of collapsed airway segments in isolated dog lungs, inflating from the collapsed state in 120 s. We analyzed the sequence of crackle amplitudes, inter-crackle intervals, and low-frequency energy from the acoustic data. The series of spike amplitudes spans two orders of magnitude, and the inter-crackle intervals span over five orders of magnitude. The distribution of spike amplitudes follows a power law for nearly two decades, while the distribution of time intervals between consecutive crackles shows two regimes of power-law behavior: the first region represents crackles coming from avalanches of openings, whereas the second region is due to the time intervals between separate avalanches. Using the time intervals between measured crackles, we estimated the time evolution of the active surface during lung inflation. In addition, we show that recruitment and instabilities along the pressure-volume curve are associated with airway opening and recruitment. We find good agreement between the theory of the dynamics of lung inflation and the experimental data, which, combined with numerical results, may prove useful in the clinical diagnosis of lung diseases.

  16. Optimal regulation in systems with stochastic time sampling

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.; Lee, P. S.

    1980-01-01

    An optimal control theory that accounts for stochastic variable time sampling in a distributed microprocessor based flight control system is presented. The theory is developed by using a linear process model for the airplane dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved for the control law that minimizes the expected value of a quadratic cost function. The optimal cost obtained with a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained with a known and uniform information update interval.

  17. Heterogeneous characters modeling of instant message services users’ online behavior

    PubMed Central

    Fang, Yajun; Horn, Berthold

    2018-01-01

    Research on the temporal characteristics of human dynamics has attracted much attention for its contributions to various areas such as communication, medical treatment, and finance. Existing studies show that the time intervals between two consecutive events exhibit various non-Poisson characteristics, such as power-law, Pareto, bimodal power-law, exponential, and piecewise power-law distributions. With the emergence of new services, new types of distributions may arise. In this paper, we study the distributions of the time intervals between two consecutive visits to the QQ and WeChat services, the two most popular instant messaging services in China, and present a new finding: when the statistical unit T is set to 0.001 s, the inter-event time distribution follows a piecewise distribution of exponential and power-law forms, indicating the heterogeneous character of IM service users’ online behavior on different time scales. We infer that this heterogeneous character is related to the communication mechanism of IM and to the habits of users. We then develop a combination of an exponential model and an interest model to characterize the heterogeneity. Furthermore, we find that the exponent of the inter-event time distribution of the same service differs between two cities, which is correlated with the popularity of the services. Our research is useful for applications in information diffusion, prediction of the economic development of cities, and so on. PMID:29734327

  18. Heterogeneous characters modeling of instant message services users' online behavior.

    PubMed

    Cui, Hongyan; Li, Ruibing; Fang, Yajun; Horn, Berthold; Welsch, Roy E

    2018-01-01

    Research on the temporal characteristics of human dynamics has attracted much attention for its contributions to various areas such as communication, medical treatment, and finance. Existing studies show that the time intervals between two consecutive events exhibit various non-Poisson characteristics, such as power-law, Pareto, bimodal power-law, exponential, and piecewise power-law distributions. With the emergence of new services, new types of distributions may arise. In this paper, we study the distributions of the time intervals between two consecutive visits to the QQ and WeChat services, the two most popular instant messaging services in China, and present a new finding: when the statistical unit T is set to 0.001 s, the inter-event time distribution follows a piecewise distribution of exponential and power-law forms, indicating the heterogeneous character of IM service users' online behavior on different time scales. We infer that this heterogeneous character is related to the communication mechanism of IM and to the habits of users. We then develop a combination of an exponential model and an interest model to characterize the heterogeneity. Furthermore, we find that the exponent of the inter-event time distribution of the same service differs between two cities, which is correlated with the popularity of the services. Our research is useful for applications in information diffusion, prediction of the economic development of cities, and so on.

  19. Neutron coincidence counting based on time interval analysis with one- and two-dimensional Rossi-alpha distributions: an application for passive neutron waste assay

    NASA Astrophysics Data System (ADS)

    Bruggeman, M.; Baeten, P.; De Boeck, W.; Carchon, R.

    1996-02-01

    Neutron coincidence counting is commonly used for the non-destructive assay of plutonium-bearing waste and for safeguards verification measurements. A major drawback of conventional coincidence counting is that a valid calibration is needed to convert a neutron coincidence count rate to a 240Pu-equivalent mass (240Pu-eq). In waste assay, calibrations are made for representative waste matrices and source distributions; the actual waste, however, may have quite different matrices and source distributions than the calibration samples, which often biases the assay result. This paper presents a new neutron multiplicity-sensitive coincidence counting technique that includes an auto-calibration of the neutron detection efficiency. The coincidence counting principle is based on recording one- and two-dimensional Rossi-alpha distributions triggered by pulse pairs and pulse triplets, respectively. Rossi-alpha distributions allow an easy discrimination between real and accidental coincidences and are intended to be measured with a PC-based fast time-interval analyser. The Rossi-alpha distributions can be easily expressed in terms of a limited number of factorial moments of the neutron multiplicity distributions. The presented technique allows an unbiased measurement of the 240Pu-eq mass. The presented theory, referred to as Time Interval Analysis (TIA), is complementary to the Time Correlation Analysis (TCA) theories developed in the past, but is theoretically much simpler and allows a straightforward calculation of dead-time corrections and error propagation. Analytical expressions are derived for the Rossi-alpha distributions as functions of the factorial moments of the efficiency-dependent multiplicity distributions. The validity of the proposed theory is demonstrated and verified via Monte Carlo simulations of pulse trains and subsequent analysis of the simulated data.
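
    A one-dimensional Rossi-alpha distribution is, in essence, a histogram of the delays between each trigger pulse and the pulses that follow it within a fixed window. A minimal sketch (the window and binning choices are assumptions): for a pure Poisson train the histogram is flat (accidentals only), while correlated fission chains would add a decaying excess at short delays.

```python
import random

def poisson_train(rate, t_max, seed=0):
    """Poisson pulse train on [0, t_max): i.i.d. exponential gaps."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t >= t_max:
            return times
        times.append(t)

def rossi_alpha(pulse_times, window, n_bins):
    """One-dimensional Rossi-alpha histogram: for every trigger pulse,
    bin the delays of all subsequent pulses that fall within `window`."""
    hist = [0] * n_bins
    width = window / n_bins
    for i, t in enumerate(pulse_times):
        for u in pulse_times[i + 1:]:
            dt = u - t
            if dt >= window:
                break
            idx = int(dt / width)
            if idx < n_bins:
                hist[idx] += 1
    return hist
```

    For a Poisson train of rate r, each bin's expected count is approximately (number of pulses) × r × (bin width), which provides a quick sanity check on the accidental background level.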

  20. The 26th Annual Precise Time and Time Interval (PTTI) Applications and Planning Meeting

    NASA Technical Reports Server (NTRS)

    Sydnor, Richard (Editor)

    1995-01-01

    This document is a compilation of technical papers presented at the 26th Annual PTTI Applications and Planning Meeting. Papers are in the following categories: (1) Recent developments in rubidium, cesium, and hydrogen-based frequency standards, and in cryogenic and trapped-ion technology; (2) International and transnational applications of Precise Time and Time Interval technology with emphasis on satellite laser tracking, GLONASS timing, intercomparison of national time scales and international telecommunications; (3) Applications of Precise Time and Time Interval technology to the telecommunications, power distribution, platform positioning, and geophysical survey industries; (4) Applications of PTTI technology to evolving military communications and navigation systems; and (5) Dissemination of precise time and frequency by means of GPS, GLONASS, MILSTAR, LORAN, and synchronous communications satellites.

  1. Fluctuations of healthy and unhealthy heartbeat intervals

    NASA Astrophysics Data System (ADS)

    Lan, Boon Leong; Toda, Mikito

    2013-04-01

    We show that the RR-interval fluctuations, defined as the differences between the natural logarithms of successive RR intervals, for healthy, congestive-heart-failure (CHF) and atrial-fibrillation (AF) subjects are well modeled by non-Gaussian stable distributions. Our results suggest that healthy or unhealthy RR-interval fluctuations can generally be modeled as a sum of a large number of independent, identically distributed physiological effects with infinite variance. Furthermore, we show for the first time that one indicator, the scale parameter of the stable distribution, is sufficient to robustly distinguish the three groups of subjects. The scale parameters for healthy subjects are smaller than those for AF subjects but larger than those for CHF subjects; this ordering suggests that the scale parameter could be used to objectively quantify the severity of CHF and AF over time and also serve as an early warning signal for a healthy person when it approaches either boundary of the healthy range.

  2. Time interval between successive trading in foreign currency market: from microscopic to macroscopic

    NASA Astrophysics Data System (ADS)

    Sato, Aki-Hiro

    2004-12-01

    Recently, it has been shown that the inter-transaction interval (ITI) distribution of foreign currency rates has a fat tail. In order to understand the statistical properties of the ITI, a dealer model with N interacting agents is proposed. Numerical simulations confirm that the ITI distribution of the dealer model has a power-law tail. A random multiplicative process (RMP) can be approximately derived from the ITI of the dealer model. Consequently, we conclude that the power-law tail of the ITI distribution of the dealer model is a result of the RMP.

  3. Bayesian analyses of time-interval data for environmental radiation monitoring.

    PubMed

    Luo, Peng; Sharp, Julia L; DeVol, Timothy A

    2013-01-01

    Time-interval (time difference between two consecutive pulses) analysis based on the principles of Bayesian inference was investigated for online radiation monitoring. Using experimental and simulated data, Bayesian analysis of time-interval data [Bayesian (ti)] was compared with Bayesian and conventional frequentist analyses of counts in a fixed count time [Bayesian (cnt) and the single interval test (SIT), respectively]. The performance of the three methods was compared in terms of average run length (ARL) and detection probability for several simulated detection scenarios. Experimental data were acquired with a DGF-4C system in list mode. Simulated data were obtained using Monte Carlo techniques to draw random samples from the Poisson distribution. All statistical algorithms were developed using the R Project for Statistical Computing. Bayesian analysis of time-interval information provided a detection probability similar to that of Bayesian analysis of count information, but the authors were able to make a decision with fewer pulses at relatively higher radiation levels. In addition, for cases where the source is present only briefly (less than the count time), time-interval information is more sensitive for detecting a change than count information, since the source counts are otherwise averaged with the background counts over the entire count time. The relationships of the source time, change points, and modifications to the Bayesian approach for increasing detection probability are presented.
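
    The core of the time-interval approach is that, for Poisson counting, inter-pulse intervals are exponentially distributed with the total count rate. A minimal Bayesian sketch along those lines, using a two-hypothesis posterior with known rates; this is a simplification for illustration, not the paper's Bayesian (ti) algorithm:

```python
import math
import random

def posterior_source(intervals, bkg_rate, src_rate, prior=0.5):
    """Posterior probability that the observed inter-pulse intervals come
    from background plus a source (total rate r1 = bkg + src) rather than
    background alone (rate r0).  Each interval is exponentially
    distributed under either hypothesis, so the log-odds accumulate a
    per-interval log likelihood ratio."""
    r0, r1 = bkg_rate, bkg_rate + src_rate
    log_odds = math.log(prior / (1.0 - prior))
    for dt in intervals:
        log_odds += (math.log(r1) - r1 * dt) - (math.log(r0) - r0 * dt)
    # numerically stable logistic transform
    if log_odds >= 0:
        return 1.0 / (1.0 + math.exp(-log_odds))
    return math.exp(log_odds) / (1.0 + math.exp(log_odds))
```

    On simulated data the posterior drops toward 0 for background-only intervals and rises toward 1 when the source is present, and it can be updated pulse by pulse, which is what lets a decision be made with fewer pulses at higher rates.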

  4. The 25th Annual Precise Time and Time Interval (PTTI) Applications and Planning Meeting

    NASA Technical Reports Server (NTRS)

    Sydnor, Richard L. (Editor)

    1994-01-01

    Papers in the following categories are presented: recent developments in rubidium, cesium, and hydrogen-based frequency standards, and in cryogenic and trapped-ion technology; international and transnational applications of precise time and time interval (PTTI) technology with emphasis on satellite laser tracking networks, GLONASS timing, intercomparison of national time scales and international telecommunication; applications of PTTI technology to the telecommunications, power distribution, platform positioning, and geophysical survey industries; application of PTTI technology to evolving military communications and navigation systems; and dissemination of precise time and frequency by means of GPS, GLONASS, MILSTAR, LORAN, and synchronous communications satellites.

  5. Estimating statistical uncertainty of Monte Carlo efficiency-gain in the context of a correlated sampling Monte Carlo code for brachytherapy treatment planning with non-normal dose distribution.

    PubMed

    Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr

    2012-01-01

    Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via the efficiency gain, defined as the reduction in computing time achieved by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. Determining the uncertainty of the efficiency gain arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and of standardized uncertainty propagation methods (widely used in metrology to estimate the uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap-based algorithm was used to simulate the probability distribution of the efficiency gain estimates, and the shortest 95% confidence interval was estimated from this distribution. The corresponding relative uncertainty was found to be as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; its main disadvantage, however, was that the uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights, which made extremely large contributions to the scored absorbed-dose difference. The mechanism by which high statistical weights are acquired in the fixed-collision correlated sampling method is explained, and a mitigation strategy is proposed. Copyright © 2011 Elsevier Ltd. All rights reserved.
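
    The "shortest 95% confidence interval from the bootstrap distribution" can be computed by sliding a fixed-coverage window over the sorted bootstrap replications and keeping the narrowest one. A minimal sketch (the bootstrap replications themselves are assumed given; for a skewed distribution the result is never wider than the equal-tailed percentile interval):

```python
import random

def shortest_interval(samples, level=0.95):
    """Shortest interval containing a `level` fraction of the samples:
    slide a window of fixed coverage over the sorted values and keep
    the narrowest one."""
    xs = sorted(samples)
    n = len(xs)
    k = max(1, int(level * n))
    best = (xs[0], xs[k - 1])
    for i in range(1, n - k + 1):
        if xs[i + k - 1] - xs[i] < best[1] - best[0]:
            best = (xs[i], xs[i + k - 1])
    return best
```

    For a right-skewed bootstrap distribution (as efficiency-gain estimates often are), the shortest interval shifts toward the mode and ends up narrower than the symmetric-tail percentile interval.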

  6. Non-homogeneous Behaviour of the Spatial Distribution of Macrospicules

    NASA Astrophysics Data System (ADS)

    Gyenge, N.; Bennett, S.; Erdélyi, R.

    2015-03-01

    In this paper the longitudinal and latitudinal spatial distributions of macrospicules are examined. We found a statistical relationship between the active longitude (determined by sunspot groups) and the longitudinal distribution of macrospicules. This distribution of macrospicules shows inhomogeneous and non-axisymmetric behaviour in the time interval between June 2010 and December 2012 covered by observations of the Solar Dynamics Observatory (SDO) satellite. The enhanced positions of the activity and their time variation have been calculated. The migration of the longitudinal distribution of macrospicules shows a behaviour similar to that of the sunspot groups.

  7. Statistical Characteristics of the Gaussian-Noise Spikes Exceeding the Specified Threshold as Applied to Discharges in a Thundercloud

    NASA Astrophysics Data System (ADS)

    Klimenko, V. V.

    2017-12-01

    We obtain expressions for the probabilities of normal-noise spikes with a Gaussian correlation function and for the probability density of the inter-spike intervals. In contrast to delta-correlated noise, in which the intervals follow an exponential distribution, the probability of a subsequent spike depends on the previous spike, and the interval-distribution law deviates from the exponential one for a finite noise-correlation time (frequency-bandwidth restriction). This deviation is most pronounced for a low detection threshold. We observe similar behaviour in the distributions of the inter-discharge intervals in a thundercloud and of the noise spikes as the repetition rate of the discharges/spikes, determined by the ratio of the detection threshold to the root-mean-square value of the noise, varies. The results of this work can be useful for the quantitative description of the statistical characteristics of noise spikes and for studying the role of fluctuations in discharge emergence in a thundercloud.

  8. Interval sampling methods and measurement error: a computer simulation.

    PubMed

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments.
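
The three sampling methods compared in this study can be sketched in a few lines. The session length, interval duration, and bout statistics below are arbitrary assumptions for illustration, not the parameters used by the authors:

```python
import numpy as np

rng = np.random.default_rng(1)

# One session: a 0/1 "behavior occurring" timeline at 1-s resolution.
session_s, interval_s = 3600, 30
behavior = np.zeros(session_s, dtype=bool)
for start in rng.integers(0, session_s - 60, size=40):      # 40 random bouts
    behavior[start:start + rng.integers(5, 60)] = True

true_prop = behavior.mean()                  # true fraction of time occupied
bins = behavior.reshape(-1, interval_s)      # 3600 s -> 120 intervals of 30 s

mts = bins[:, -1].mean()       # momentary time sampling: score the last moment
pir = bins.any(axis=1).mean()  # partial-interval: any occurrence scores the bin
wir = bins.all(axis=1).mean()  # whole-interval: behavior must span the bin

print(f"true={true_prop:.3f}  MTS={mts:.3f}  PIR={pir:.3f}  WIR={wir:.3f}")
```

Within each interval, "any occurrence" can only over-score and "whole occurrence" can only under-score, so partial-interval recording bounds the true time fraction from above and whole-interval recording bounds it from below, which is one of the well-known biases such simulations quantify.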

  9. Interval Estimation of Seismic Hazard Parameters

    NASA Astrophysics Data System (ADS)

    Orlecka-Sikora, Beata; Lasocki, Stanislaw

    2017-03-01

The paper considers Poisson temporal occurrence of earthquakes and presents a way to integrate uncertainties of the estimates of the mean activity rate and of the magnitude cumulative distribution function into the interval estimation of the most widely used seismic hazard functions, such as the exceedance probability and the mean return period. The proposed algorithm can be used either when the Gutenberg-Richter model of magnitude distribution is accepted or when nonparametric estimation is in use. When the Gutenberg-Richter model of magnitude distribution is used, the interval estimation of its parameters is based on the asymptotic normality of the maximum likelihood estimator. When the nonparametric kernel estimation of magnitude distribution is used, we propose the iterated bias-corrected and accelerated method for interval estimation, based on smoothed-bootstrap and second-order bootstrap samples. Using Monte Carlo simulations and two real-data examples, we studied how the interval estimates of the seismic hazard functions change under this integrated approach relative to the approach that neglects the uncertainty of the mean activity rate estimates. The results indicate that the uncertainty of the mean activity rate significantly affects the interval estimates of the hazard functions only when the product of the activity rate and the time period for which the hazard is estimated is no more than 5.0. When this product becomes greater than 5.0, the impact of the uncertainty of the magnitude cumulative distribution function dominates that of the mean activity rate in the aggregated uncertainty of the hazard functions, and the interval estimates with and without inclusion of the uncertainty of the mean activity rate converge. The presented algorithm is generic and can also be applied to capture the propagation of the uncertainty of estimates that are parameters of a multiparameter function onto this function.
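
For the Poisson occurrence model with a Gutenberg-Richter magnitude distribution, the two hazard functions named above have simple closed forms. A minimal sketch follows; the `m_min`, `b`, and `rate` values are illustrative assumptions, and no estimation uncertainty is propagated here (which is the paper's actual contribution):

```python
import math

def gr_sf(m, m_min=3.0, b=1.0):
    """Gutenberg-Richter survival function P(M >= m), unbounded form."""
    return 10.0 ** (-b * (m - m_min))

def exceedance_probability(m, t_years, rate=5.0):
    """P(at least one event of magnitude >= m within t_years), assuming
    Poisson occurrence with mean activity rate `rate` (events/yr >= m_min)."""
    return 1.0 - math.exp(-rate * gr_sf(m) * t_years)

def mean_return_period(m, rate=5.0):
    """Mean time between events of magnitude >= m (years)."""
    return 1.0 / (rate * gr_sf(m))

for m in (4.0, 5.0, 6.0):
    print(f"M>={m}: P(50 yr) = {exceedance_probability(m, 50):.3f}, "
          f"return period = {mean_return_period(m):.0f} yr")
```

The product "activity rate times time period" discussed in the abstract is the exponent `rate * gr_sf(m) * t_years` above, which controls how close the exceedance probability is to one.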

  10. Comparing neuronal spike trains with inhomogeneous Poisson distribution: evaluation procedure and experimental application in cases of cyclic activity.

    PubMed

    Fiore, Lorenzo; Lorenzetti, Walter; Ratti, Giovannino

    2005-11-30

    A procedure is proposed to compare single-unit spiking activity elicited in repetitive cycles with an inhomogeneous Poisson process (IPP). Each spike sequence in a cycle is discretized and represented as a point process on a circle. The interspike interval probability density predicted for an IPP is computed on the basis of the experimental firing probability density; differences from the experimental interval distribution are assessed. This procedure was applied to spike trains which were repetitively induced by opening-closing movements of the distal article of a lobster leg. As expected, the density of short interspike intervals, less than 20-40 ms in length, was found to lie greatly below the level predicted for an IPP, reflecting the occurrence of the refractory period. Conversely, longer intervals, ranging from 20-40 to 100-120 ms, were markedly more abundant than expected; this provided evidence for a time window of increased tendency to fire again after a spike. Less consistently, a weak depression of spike generation was observed for longer intervals. A Monte Carlo procedure, implemented for comparison, produced quite similar results, but was slightly less precise and more demanding as concerns computation time.

  11. Interval timing in genetically modified mice: a simple paradigm

    PubMed Central

    Balci, F.; Papachristos, E. B.; Gallistel, C. R.; Brunner, D.; Gibson, J.; Shumyatsky, G. P.

    2009-01-01

    We describe a behavioral screen for the quantitative study of interval timing and interval memory in mice. Mice learn to switch from a short-latency feeding station to a long-latency station when the short latency has passed without a feeding. The psychometric function is the cumulative distribution of switch latencies. Its median measures timing accuracy and its interquartile interval measures timing precision. Next, using this behavioral paradigm, we have examined mice with a gene knockout of the receptor for gastrin-releasing peptide that show enhanced (i.e. prolonged) freezing in fear conditioning. We have tested the hypothesis that the mutants freeze longer because they are more uncertain than wild types about when to expect the electric shock. The knockouts however show normal accuracy and precision in timing, so we have rejected this alternative hypothesis. Last, we conduct the pharmacological validation of our behavioral screen using D-amphetamine and methamphetamine. We suggest including the analysis of interval timing and temporal memory in tests of genetically modified mice for learning and memory and argue that our paradigm allows this to be done simply and efficiently. PMID:17696995
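
The two summary statistics of the psychometric function described here can be computed directly from switch latencies. A toy sketch with simulated latencies (the lognormal shape and its parameters are illustrative assumptions, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated switch latencies (s) for one animal.
latencies = rng.lognormal(mean=np.log(6.0), sigma=0.25, size=200)

accuracy = np.median(latencies)              # timing accuracy
q1, q3 = np.percentile(latencies, [25, 75])
precision = q3 - q1                          # timing precision (interquartile interval)

print(f"median = {accuracy:.2f} s, interquartile interval = {precision:.2f} s")
```

The median locates the psychometric function (accuracy), while the interquartile interval measures its spread (precision), exactly the two quantities the screen compares between genotypes.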

  13. Statistical Parameter Study of the Time Interval Distribution for Nonparalyzable, Paralyzable, and Hybrid Dead Time Models

    NASA Astrophysics Data System (ADS)

    Syam, Nur Syamsi; Maeng, Seongjin; Kim, Myo Gwang; Lim, Soo Yeon; Lee, Sang Hoon

    2018-05-01

A large dead time of a Geiger-Mueller (GM) detector may cause a large count loss in radiation measurements and may consequently distort the Poisson statistics of radiation events into a new distribution. The new distribution will have different statistical parameters compared with the original distribution. Therefore, the variance, skewness, and excess kurtosis, in association with the observed count rate, of the time interval distribution for the well-known nonparalyzable, paralyzable, and nonparalyzable-paralyzable hybrid dead time models of a GM detector were studied using Monte Carlo simulation (GMSIM). These parameters were then compared with the statistical parameters of a perfect detector to observe the change in the distribution. The results show that the behaviors of the statistical parameters for the three dead time models are different. The values of the skewness and the excess kurtosis of the nonparalyzable model are equal or very close to those of the perfect detector, which are ≅2 for skewness and ≅6 for excess kurtosis, whereas in the paralyzable and hybrid models these parameters reach minimum values that occur around the maximum observed count rate. The different trends of the three models resulting from the GMSIM simulation can be used to distinguish the dead time behavior of a GM counter, i.e., whether the counter is described best by the nonparalyzable, paralyzable, or hybrid model. In a future study, these statistical parameters need to be analyzed further to determine whether they can be used to estimate the dead time for each model, particularly for the paralyzable and hybrid models.
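
The nonparalyzable and paralyzable models referenced here can be simulated directly from a Poisson event stream and checked against the standard observed-rate formulas m = n/(1 + nτ) and m = n·e^(−nτ). A sketch with assumed rate and dead time (this is not the GMSIM code):

```python
import numpy as np

rng = np.random.default_rng(3)

n_rate = 2000.0          # true event rate (1/s), assumed
tau = 2e-4               # dead time (s), assumed
t_total = 50.0           # measurement time (s)

# Homogeneous Poisson arrival times covering [0, t_total]
gaps = rng.exponential(1 / n_rate, size=int(n_rate * t_total * 1.2))
events = np.cumsum(gaps)
events = events[events < t_total]

def nonparalyzable(times, tau):
    """Count events; the detector recovers tau after each *recorded* event."""
    kept, last = 0, -np.inf
    for t in times:
        if t - last >= tau:
            kept += 1
            last = t
    return kept

def paralyzable(times, tau):
    """Every event, recorded or not, restarts the dead period."""
    return int(np.count_nonzero(np.diff(times, prepend=-np.inf) >= tau))

m_np = nonparalyzable(events, tau) / t_total
m_p = paralyzable(events, tau) / t_total

print(f"nonparalyzable: sim {m_np:.0f}/s  theory {n_rate / (1 + n_rate * tau):.0f}/s")
print(f"paralyzable:    sim {m_p:.0f}/s  theory {n_rate * np.exp(-n_rate * tau):.0f}/s")
```

Extending the simulation to record the surviving inter-event intervals, rather than just the counts, would give the interval distributions whose variance, skewness, and excess kurtosis the paper analyzes.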

  14. Mixed-mode oscillations and interspike interval statistics in the stochastic FitzHugh-Nagumo model

    NASA Astrophysics Data System (ADS)

    Berglund, Nils; Landon, Damien

    2012-08-01

    We study the stochastic FitzHugh-Nagumo equations, modelling the dynamics of neuronal action potentials in parameter regimes characterized by mixed-mode oscillations. The interspike time interval is related to the random number of small-amplitude oscillations separating consecutive spikes. We prove that this number has an asymptotically geometric distribution, whose parameter is related to the principal eigenvalue of a substochastic Markov chain. We provide rigorous bounds on this eigenvalue in the small-noise regime and derive an approximation of its dependence on the system's parameters for a large range of noise intensities. This yields a precise description of the probability distribution of observed mixed-mode patterns and interspike intervals.

  15. [Computerized monitoring for integrated cervical screening. Rationale, methods and indicators of participation].

    PubMed

    Bucchi, L; Pierri, C; Caprara, L; Cortecchia, S; De Lillo, M; Bondi, A

    2003-02-01

This paper presents a computerised system for the monitoring of integrated cervical screening, i.e. the integration of spontaneous Pap smear practice into organised screening. The general characteristics of the system are described, including background and rationale (integrated cervical screening in European countries, impact of integration on monitoring, decentralised organization of screening and levels of monitoring), general methods (definitions, sections, software description, and setting of application), and indicators of participation (distribution by time interval since previous Pap smear, distribution by screening sector (organised screening centres vs public and private clinical settings), distribution by time interval between the last two Pap smears, and movement of women between the two screening sectors). Also, the paper reports the results of the application of these indicators in the general database of the Pathology Department of Imola Health District in northern Italy.

  16. Fluctuations in Wikipedia access-rate and edit-event data

    NASA Astrophysics Data System (ADS)

    Kämpf, Mirko; Tismer, Sebastian; Kantelhardt, Jan W.; Muchnik, Lev

    2012-12-01

    Internet-based social networks often reflect extreme events in nature and society by drastic increases in user activity. We study and compare the dynamics of the two major complex processes necessary for information spread via the online encyclopedia ‘Wikipedia’, i.e., article editing (information upload) and article access (information viewing) based on article edit-event time series and (hourly) user access-rate time series for all articles. Daily and weekly activity patterns occur in addition to fluctuations and bursting activity. The bursts (i.e., significant increases in activity for an extended period of time) are characterized by a power-law distribution of durations of increases and decreases. For describing the recurrence and clustering of bursts we investigate the statistics of the return intervals between them. We find stretched exponential distributions of return intervals in access-rate time series, while edit-event time series yield simple exponential distributions. To characterize the fluctuation behavior we apply detrended fluctuation analysis (DFA), finding that most article access-rate time series are characterized by strong long-term correlations with fluctuation exponents α≈0.9. The results indicate significant differences in the dynamics of information upload and access and help in understanding the complex process of collecting, processing, validating, and distributing information in self-organized social networks.
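
Return intervals between bursts are straightforward to extract from a thresholded series. A toy sketch using an uncorrelated surrogate series (an assumption for illustration, not Wikipedia data), for which the intervals come out simple exponential rather than stretched exponential:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy hourly "access rate" series: uncorrelated lognormal fluctuations
# (a stand-in; real access-rate series are long-range correlated).
x = rng.lognormal(0.0, 1.0, size=100_000)

q = np.quantile(x, 0.95)                  # burst threshold: top 5% of values
r = np.diff(np.flatnonzero(x > q))        # return intervals between exceedances

# Uncorrelated data give (geometrically / exponentially) distributed
# return intervals, so mean and standard deviation nearly coincide;
# long-range correlations would stretch the tail instead.
print(f"mean = {r.mean():.1f}, std = {r.std():.1f}")
```

Replacing the surrogate with a long-range-correlated series (fluctuation exponent α ≈ 0.9, as reported above for access rates) would broaden the return-interval distribution toward the stretched exponential form the authors observe.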

  17. The Theory of Distributed Practice as Related to Acquisition of Psychomotor Skills by Adolescents in a Selected Curricular Field.

    ERIC Educational Resources Information Center

    Drake, James Bob

    1981-01-01

    From results on the tensile strength and nick-break average jury evaluations test, it was concluded that with the same total practice time, different distributions of welding practice time intervals (15, 30, and 45 minutes) influence the quality of butt welds made by ninth-grade vocational agriculture students. (Author/SJL)

  18. Effect of the revisit interval on the accuracy of remote sensing-based estimates of evapotranspiration at field scales

    USDA-ARS?s Scientific Manuscript database

    Accurate spatially distributed estimates of evapotranspiration (ET) derived from remotely sensed data are critical to a broad range of practical and operational applications. However, due to lengthy return intervals and cloud cover, data acquisition is not continuous over time. To fill the data gaps...

  19. Characteristic Lifelength of Coherent Structure in the Turbulent Boundary Layer

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L.

    2006-01-01

A characteristic lifelength is defined by fitting a Gaussian distribution to data correlated over a 3-sensor array sampling streamwise sidewall pressure. The data were acquired at subsonic, transonic and supersonic speeds aboard a Tu-144. Lifelengths are estimated using the cross spectrum and are shown to compare favorably with Efimtsov's prediction of correlation space scales. Lifelength distributions are computed in the time/frequency domain using an interval correlation technique on the continuous wavelet transform of the original time data. The median values of the lifelength distributions are found to be very close to the frequency-averaged result. The interval correlation technique is shown to allow the retrieval and inspection of the original time data of each event in the lifelength distribution, thus providing a means to locate and study the nature of the coherent structure in the turbulent boundary layer. The lifelength data can be converted to lifetimes using the convection velocity. The lifetimes of events in the time/frequency domain are displayed in Lifetime Maps. The primary purpose of the paper is to validate these new analysis techniques so that they can be used with confidence to further characterize coherent structure in the turbulent boundary layer.

  20. A General theory of Signal Integration for Fault-Tolerant Dynamic Distributed Sensor Networks

    DTIC Science & Technology

    1993-10-01

related to a) the architecture and fault-tolerance of the distributed sensor network, b) the proper synchronisation of sensor signals, c) the ... Computational complexities of the problem of distributed detection. 5) Issues related to recording of events and synchronization in distributed sensor ... Intervals for Synchronization in Real Time Distributed Systems", Submitted to Electronic Encyclopedia. 3. V. G. Hegde and S. S. Iyengar "Efficient

  1. Solar cycle variations in polar cap area measured by the superDARN radars

    NASA Astrophysics Data System (ADS)

    Imber, S. M.; Milan, S. E.; Lester, M.

    2013-10-01

We present a long-term study, from January 1996 to August 2012, of the latitude of the Heppner-Maynard Boundary (HMB) measured at midnight using the northern hemisphere Super Dual Auroral Radar Network (SuperDARN). The HMB represents the equatorward extent of ionospheric convection and is used in this study as a measure of the global magnetospheric dynamics. We find that the yearly distribution of HMB latitudes is single peaked at 64° magnetic latitude for the majority of the 17-year interval. During 2003, the envelope of the distribution shifts to lower latitudes and a second peak in the distribution is observed at 61°. The solar wind-magnetosphere coupling function derived by Milan et al. (2012) suggests that the solar wind driving during this year was significantly higher than during the rest of the 17-year interval. In contrast, during the period 2008-2011, the HMB distribution shifts to higher latitudes, and a second peak in the distribution is again observed, this time at 68° magnetic latitude. This time interval corresponds to a period of extremely low solar wind driving during the recent extreme solar minimum. This is the first long-term study of the polar cap area, and the results demonstrate that there is a close relationship between the solar activity cycle and the area of the polar cap on a large-scale, statistical basis.

  2. Solar Cycle Variations in Polar Cap Area Measured by the SuperDARN Radars

    NASA Astrophysics Data System (ADS)

    Imber, S. M.; Milan, S. E.; Lester, M.

    2013-12-01

    We present a long term study, from January 1996 - August 2012, of the latitude of the Heppner-Maynard Boundary (HMB) measured at midnight using the northern hemisphere SuperDARN radars. The HMB represents the equatorward extent of ionospheric convection, and is used in this study as a measure of the global magnetospheric dynamics and activity. We find that the yearly distribution of HMB latitudes is single-peaked at 64° magnetic latitude for the majority of the 17-year interval. During 2003 the envelope of the distribution shifts to lower latitudes and a second peak in the distribution is observed at 61°. The solar wind-magnetosphere coupling function derived by Milan et al. (2012) suggests that the solar wind driving during this year was significantly higher than during the rest of the 17-year interval. In contrast, during the period 2008-2011 HMB distribution shifts to higher latitudes, and a second peak in the distribution is again observed, this time at 68° magnetic latitude. This time interval corresponds to a period of extremely low solar wind driving during the recent extreme solar minimum. This is the first statistical study of the polar cap area over an entire solar cycle, and the results demonstrate that there is a close relationship between the phase of the solar cycle and the area of the polar cap on a large scale statistical basis.

  3. Intra-abdominal temperature distribution during consolidation hyperthermic intraperitoneal chemotherapy with carboplatin in the treatment of advanced stage ovarian carcinoma.

    PubMed

    Rettenmaier, Mark A; Mendivil, Alberto A; Gray, Crystal M; Chapman, Amber P; Stone, Michelle K; Tinnerman, Erin J; Goldstein, Bram H

    2015-06-01

    Hyperthermic intraperitoneal chemotherapy (HIPEC) involves the continuous heating and circulation of chemotherapy throughout the abdominal cavity in an attempt to enhance cytotoxicity. Despite the potential of this chemotherapy procedure, there are scant anatomical temperature distribution studies reporting on this therapeutic process. We prospectively evaluated the temperature of select anatomical (e.g. upper abdominal, mid-abdominal and supra-pubic) sites in 11 advanced stage ovarian cancer patients who were treated with consolidation HIPEC carboplatin (AUC 10). The temperature of the aforementioned anatomical regions and the inflow/outflow tubing was measured at baseline and at 15-min intervals until the procedure's completion. The lowest observed mean composite temperature was 41.1 °C at the supra-pubic site whereas the highest temperature was 42.6 °C, in association with the inflow/outflow tubing. During the various time intervals we also ascertained that the lowest composite temperature was 40.9 °C at baseline (i.e. time 0), whereas the highest value (41.8 °C) occurred at multiple time periods (e.g., 15, 45 and 60 min). The HIPEC temperature variation amongst the various abdominal sites and time intervals was minimal. We also discerned that uniform temperature distribution throughout the abdominal cavity was facilitated when the abdomen was both maximally distended with fluid and a high flow rate was maintained.

  4. Are EUR and GBP different words for the same currency?

    NASA Astrophysics Data System (ADS)

    Ivanova, K.; Ausloos, M.

    2002-05-01

The British Pound (GBP) is not part of the Euro (EUR) monetary system. To find arguments on whether the GBP should join the EUR or not, correlations are calculated between GBP exchange rates with respect to various currencies: USD, JPY, CHF, DKK, the currencies forming the EUR, and a reconstructed EUR, for the time interval from 1993 to June 30, 2000. The distribution of fluctuations of the exchange rates is Gaussian in the central part of the distribution but has fat tails for the large-size fluctuations. Within the detrended fluctuation analysis (DFA) statistical method, the power-law behavior describing the root-mean-square deviation from a linear trend of the exchange rate fluctuations is obtained as a function of time for the time interval of interest. The evolution of the time-dependent exponent of the exchange rate fluctuations is given. Statistical considerations imply that the GBP is already behaving as a true EUR.
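
A minimal order-1 DFA implementation in the spirit of the method used here can be written compactly; the window sizes and the white-noise test series below are arbitrary choices for illustration, not the paper's exchange-rate data:

```python
import numpy as np

def dfa1(x, scales):
    """Order-1 detrended fluctuation analysis: fluctuation F(s) per scale."""
    y = np.cumsum(x - x.mean())                  # integrated profile
    out = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[: n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        # residuals after a least-squares linear detrend in each window
        resid = [seg - np.polyval(np.polyfit(t, seg, 1), t) for seg in segs]
        out.append(np.sqrt(np.mean(np.square(resid))))
    return np.array(out)

rng = np.random.default_rng(2)
x = rng.standard_normal(2 ** 14)                 # uncorrelated surrogate "returns"
scales = np.array([16, 32, 64, 128, 256])
f = dfa1(x, scales)
alpha = np.polyfit(np.log(scales), np.log(f), 1)[0]
print(f"alpha = {alpha:.2f}")
```

For uncorrelated increments the fluctuation function grows as F(s) ∝ s^0.5; persistent long-range correlations push the fitted exponent above 0.5, which is what a time-dependent DFA exponent of real exchange-rate fluctuations tracks.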

  5. Modeling of thin-film GaAs growth

    NASA Technical Reports Server (NTRS)

    Heinbockel, J. H.

    1981-01-01

    A solid Monte Carlo model is constructed for the simulation of crystal growth. The model assumes thermally accommodated adatoms impinge upon the surface during a delta time interval. The surface adatoms are assigned a random energy from a Boltzmann distribution, and this energy determines whether the adatoms evaporate, migrate, or remain stationary during the delta time interval. For each addition or migration of an adatom, potential wells are adjusted to reflect the absorption, migration, or desorption potential changes.

  6. Not All Prehospital Time is Equal: Influence of Scene Time on Mortality

    PubMed Central

    Brown, Joshua B.; Rosengart, Matthew R.; Forsythe, Raquel M.; Reynolds, Benjamin R.; Gestring, Mark L.; Hallinan, William M.; Peitzman, Andrew B.; Billiar, Timothy R.; Sperry, Jason L.

    2016-01-01

Background Trauma is time-sensitive and minimizing prehospital (PH) time is appealing. However, most studies have not linked increasing PH time with worse outcomes, as raw PH times are highly variable. It is unclear whether specific PH time patterns affect outcomes. Our objective was to evaluate the association of PH time interval distribution with mortality. Methods Patients transported by EMS in the Pennsylvania trauma registry 2000-2013 with total prehospital time (TPT) ≥20 min were included. TPT was divided into three PH time intervals: response, scene, and transport time. The number of minutes in each PH time interval was divided by TPT to determine the relative proportion each interval contributed to TPT. A prolonged interval was defined as any one PH interval contributing ≥50% of TPT. Patients were classified by prolonged PH interval or no prolonged PH interval (all intervals <50% of TPT). Patients were matched for TPT, and conditional logistic regression determined the association of mortality with PH time pattern, controlling for confounders. PH interventions were explored as potential mediators, and prehospital triage criteria were used to identify patients with time-sensitive injuries. Results There were 164,471 patients included. Patients with prolonged scene time had increased odds of mortality (OR 1.21; 95% CI 1.02–1.44, p=0.03). Prolonged response, transport, and no prolonged interval were not associated with mortality. When adjusting for mediators including extrication and PH intubation, prolonged scene time was no longer associated with mortality (OR 1.06; 0.90–1.25, p=0.50). Together these factors mediated 61% of the effect between prolonged scene time and mortality. Mortality remained associated with prolonged scene time in patients with hypotension, penetrating injury, and flail chest. Conclusions Prolonged scene time is associated with increased mortality. PH interventions partially mediate this association.
Further study should evaluate whether these interventions drive increased mortality because they prolong scene time or by another mechanism, as reducing scene time may be a target for intervention. Level of Evidence IV, prognostic study PMID:26886000
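
The interval-proportion classification described in the methods can be sketched as follows; the patient records are hypothetical, invented for illustration:

```python
import numpy as np

# Hypothetical records: (response, scene, transport) minutes per patient,
# all with total prehospital time >= 20 min as in the inclusion criterion.
times = np.array([
    [8.0, 25.0, 12.0],     # scene interval dominates
    [6.0, 10.0, 9.0],      # no dominant interval
    [22.0, 7.0, 11.0],     # response interval dominates
])

tpt = times.sum(axis=1)
prop = times / tpt[:, None]                  # share of TPT per interval

# "Prolonged" pattern: any one interval contributes >= 50% of TPT.
prolonged = prop.max(axis=1) >= 0.5
which = np.where(prolonged, prop.argmax(axis=1), -1)   # -1 = none prolonged

print(prop.round(2))
print(which)     # 0 = response, 1 = scene, 2 = transport
```

Working with proportions rather than raw minutes is what lets patients with very different total times be compared after matching on TPT, as in the study design above.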

  7. Estimating the Distribution of the Incubation Periods of Human Avian Influenza A(H7N9) Virus Infections.

    PubMed

    Virlogeux, Victor; Li, Ming; Tsang, Tim K; Feng, Luzhao; Fang, Vicky J; Jiang, Hui; Wu, Peng; Zheng, Jiandong; Lau, Eric H Y; Cao, Yu; Qin, Ying; Liao, Qiaohong; Yu, Hongjie; Cowling, Benjamin J

    2015-10-15

A novel avian influenza virus, influenza A(H7N9), emerged in China in early 2013 and caused severe disease in humans, with infections occurring most frequently after recent exposure to live poultry. The distribution of A(H7N9) incubation periods is of interest to epidemiologists and public health officials, but estimation of the distribution is complicated by interval censoring of exposures. Imputation of the midpoint of intervals was used in some early studies, resulting in estimated mean incubation times of approximately 5 days. In this study, we estimated the incubation period distribution of human influenza A(H7N9) infections using exposure data available for 229 patients with laboratory-confirmed A(H7N9) infection from mainland China. A nonparametric model (Turnbull) and several parametric models accounting for the interval censoring in some exposures were fitted to the data. For the best-fitting parametric model (Weibull), the mean incubation period was 3.4 days (95% confidence interval: 3.0, 3.7) and the variance was 2.9 days; results were very similar for the nonparametric Turnbull estimate. Under the Weibull model, the 95th percentile of the incubation period distribution was 6.5 days (95% confidence interval: 5.9, 7.1). The midpoint approximation for interval-censored exposures led to overestimation of the mean incubation period. Public health observation of potentially exposed persons for 7 days after exposure would be appropriate.
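
Fitting a parametric distribution to interval-censored data, as done here, amounts to maximizing a likelihood built from CDF differences over each censoring interval. A sketch using a Weibull model and day-wide intervals; all values are simulated for illustration, not the A(H7N9) data:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

rng = np.random.default_rng(4)

# Simulated incubation periods, each observed only as a day-wide
# censoring interval (left, right]; parameter values are illustrative.
true_shape, true_scale = 2.2, 3.8
t = weibull_min.rvs(true_shape, scale=true_scale, size=300, random_state=rng)
left, right = np.floor(t), np.floor(t) + 1.0

def neg_log_lik(log_params):
    shape, scale = np.exp(log_params)        # keep both parameters positive
    p = (weibull_min.cdf(right, shape, scale=scale)
         - weibull_min.cdf(left, shape, scale=scale))
    return -np.sum(np.log(np.clip(p, 1e-300, None)))

res = minimize(neg_log_lik, x0=np.log([1.5, 3.0]), method="Nelder-Mead")
shape_hat, scale_hat = np.exp(res.x)
mean_hat = weibull_min.mean(shape_hat, scale=scale_hat)
print(f"shape={shape_hat:.2f}, scale={scale_hat:.2f}, mean={mean_hat:.2f} d")
```

Replacing each interval by its midpoint and fitting an ordinary (uncensored) likelihood is the shortcut the abstract warns against: it discards the censoring structure and can bias the estimated mean upward.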

  8. Nonlogarithmic magnetization relaxation at the initial time intervals and magnetic field dependence of the flux creep rate in Bi2Sr2Ca(sub I)Cu2Ox single crystals

    NASA Technical Reports Server (NTRS)

    Moshchalcov, V. V.; Zhukov, A. A.; Kuznetzov, V. D.; Metlushko, V. V.; Leonyuk, L. I.

    1990-01-01

At the initial time intervals, preceding the thermally activated flux creep regime, fast nonlogarithmic relaxation is found. The full magnetic moment Pm(t) relaxation curve is shown. The magnetic measurements were made using a SQUID magnetometer. Two different relaxation regimes exist. The nonlogarithmic relaxation at the initial time intervals may be related to viscous Abrikosov vortex flow with j greater than j(sub c) at high enough temperature T and magnetic field induction B. This assumption correlates with the Pm(t) measurements. The characteristic time t(sub 0) separating the two relaxation regimes decreases as temperature and magnetic field are lowered. The logarithmic magnetization relaxation curves Pm(t) for fixed temperature and different external magnetic field inductions B are given. The relaxation rate dependence on magnetic field, R(B) = dPm(B, T sub 0)/d(ln t), has a sharp maximum similar to that found for the R(T) temperature dependences. The maximum shifts to lower fields as temperature goes up. The observed sharp maximum is related to a topological transition in the shielding critical current distribution and, consequently, in the Abrikosov vortex density. The nonlogarithmic magnetization relaxation at the initial time intervals has an almost exponential character, and the sharp relaxation rate R(B) maximum corresponds to a topological transition in the Abrikosov vortex distribution.

  9. Estimating a Markovian Epidemic Model Using Household Serial Interval Data from the Early Phase of an Epidemic

    PubMed Central

    Black, Andrew J.; Ross, Joshua V.

    2013-01-01

    The clinical serial interval of an infectious disease is the time between date of symptom onset in an index case and the date of symptom onset in one of its secondary cases. It is a quantity which is commonly collected during a pandemic and is of fundamental importance to public health policy and mathematical modelling. In this paper we present a novel method for calculating the serial interval distribution for a Markovian model of household transmission dynamics. This allows the use of Bayesian MCMC methods, with explicit evaluation of the likelihood, to fit to serial interval data and infer parameters of the underlying model. We use simulated and real data to verify the accuracy of our methodology and illustrate the importance of accounting for household size. The output of our approach can be used to produce posterior distributions of population level epidemic characteristics. PMID:24023679

  10. Accuracy of time-domain and frequency-domain methods used to characterize catchment transit time distributions

    NASA Astrophysics Data System (ADS)

    Godsey, S. E.; Kirchner, J. W.

    2008-12-01

    The mean residence time - the average time that it takes rainfall to reach the stream - is a basic parameter used to characterize catchment processes. Heterogeneities in these processes lead to a distribution of travel times around the mean residence time. By examining this travel time distribution, we can better predict catchment response to contamination events. A catchment system with shorter residence times or narrower distributions will respond quickly to contamination events, whereas systems with longer residence times or longer-tailed distributions will respond more slowly to those same contamination events. The travel time distribution of a catchment is typically inferred from time series of passive tracers (e.g., water isotopes or chloride) in precipitation and streamflow. Variations in the tracer concentration in streamflow are usually damped compared to those in precipitation, because precipitation inputs from different storms (with different tracer signatures) are mixed within the catchment. Mathematically, this mixing process is represented by the convolution of the travel time distribution and the precipitation tracer inputs to generate the stream tracer outputs. Because convolution in the time domain is equivalent to multiplication in the frequency domain, it is relatively straightforward to estimate the parameters of the travel time distribution in either domain. In the time domain, the parameters describing the travel time distribution are typically estimated by maximizing the goodness of fit between the modeled and measured tracer outputs. In the frequency domain, the travel time distribution parameters can be estimated by fitting a power-law curve to the ratio of precipitation spectral power to stream spectral power. Differences between the methods of parameter estimation in the time and frequency domain mean that these two methods may respond differently to variations in data quality, record length and sampling frequency. 
Here we evaluate how well these two methods of travel time parameter estimation respond to different sources of uncertainty and compare the methods to one another. We do this by generating synthetic tracer input time series of different lengths and convolving these with specified travel-time distributions to generate synthetic output time series. We then sample both the input and output time series at various sampling intervals and corrupt them with realistic error structures. Using these 'corrupted' time series, we infer the apparent travel time distribution and compare it to the known distribution that was used to generate the synthetic data in the first place. This analysis allows us to quantify how different record lengths, sampling intervals, and error structures in the tracer measurements affect the apparent mean residence time and the apparent shape of the travel time distribution.
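The time-domain estimation step described above (convolve a candidate travel-time distribution with the tracer input, then maximize the goodness of fit to the stream output) can be sketched in a few lines. This is a minimal illustration, not the authors' code: the exponential travel-time distribution, the noise-free output, and the integer grid search over the mean residence time are all simplifying assumptions.

```python
import math
import random

def exp_ttd(tau, n):
    """Discretized exponential travel-time distribution with mean tau, truncated at n steps."""
    w = [math.exp(-k / tau) for k in range(n)]
    s = sum(w)
    return [x / s for x in w]

def convolve(inp, ttd):
    """Stream tracer output = convolution of the precipitation tracer input with the TTD."""
    return [sum(ttd[k] * inp[t - k] for k in range(min(t + 1, len(ttd))))
            for t in range(len(inp))]

random.seed(1)
inp = [random.gauss(0.0, 1.0) for _ in range(300)]   # synthetic tracer input series
true_tau = 5
obs = convolve(inp, exp_ttd(true_tau, 60))           # noise-free synthetic stream output

# Time-domain fit: choose the mean residence time minimizing the squared misfit.
best_tau = min(range(1, 16),
               key=lambda tau: sum((a - b) ** 2
                                   for a, b in zip(convolve(inp, exp_ttd(tau, 60)), obs)))
```

With noise-free data the misfit vanishes at the true parameter, so the grid search recovers it exactly; the paper's point is precisely that corrupted, shorter, or coarsely sampled series degrade this recovery.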

  11. Dynamical behaviors of inter-out-of-equilibrium state intervals in Korean futures exchange markets

    NASA Astrophysics Data System (ADS)

    Lim, Gyuchang; Kim, SooYong; Kim, Kyungsik; Lee, Dong-In; Scalas, Enrico

    2008-05-01

A recently discovered feature of financial markets, the two-phase phenomenon, is utilized to categorize a financial time series into two phases, namely equilibrium and out-of-equilibrium states. For out-of-equilibrium states, we analyze the time intervals at which the state is revisited. The power-law distribution of inter-out-of-equilibrium state intervals is shown and we present an analogy with discrete-time heat bath dynamics, similar to random Ising systems. In the mean-field approximation, this model reduces to a one-dimensional multiplicative process. By varying global and local model parameters, the relationship between volatilities in financial markets and the interaction strengths between agents in the Ising model is investigated and discussed.

  12. Statistical regularities in the return intervals of volatility

    NASA Astrophysics Data System (ADS)

    Wang, F.; Weber, P.; Yamasaki, K.; Havlin, S.; Stanley, H. E.

    2007-01-01

We discuss recent results concerning statistical regularities in the return intervals of volatility in financial markets. In particular, we show how the analysis of volatility return intervals, defined as the time between two volatilities larger than a given threshold, can help to get a better understanding of the behavior of financial time series. We find scaling in the distribution of return intervals for thresholds ranging over a factor of 25, from 0.6 to 15 standard deviations, and also for various time windows from one minute up to 390 min (an entire trading day). Moreover, these results are universal for different stocks, commodities, interest rates as well as currencies. We also analyze the memory in the return intervals, which relates to the memory in the volatility, and find two scaling regimes, ℓ < ℓ* with α1 = 0.64±0.02 and ℓ > ℓ* with α2 = 0.92±0.04; these exponent values are similar to results of Liu et al. for the volatility. As an application, we use the scaling and memory properties of the return intervals to suggest a possibly useful method for estimating risk.
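The basic object in this record, the return interval between threshold exceedances, is simple to extract from a volatility series. A toy sketch (the series and threshold are invented for illustration):

```python
def return_intervals(series, threshold):
    """Times between successive exceedances of a volatility threshold."""
    hits = [i for i, v in enumerate(series) if v > threshold]
    return [b - a for a, b in zip(hits, hits[1:])]

vol = [0.2, 1.7, 0.3, 0.1, 2.4, 0.5, 1.9, 0.2]  # toy volatility series
print(return_intervals(vol, 1.0))               # exceedances at t = 1, 4, 6 -> [3, 2]
```

The scaling result above says that, once rescaled by their mean, the distributions of such intervals collapse across thresholds and time windows.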

  13. Hydraulic Tomography in Fractured Sedimentary Rocks to Estimate High-Resolution 3-D Distribution of Hydraulic Conductivity

    NASA Astrophysics Data System (ADS)

    Tiedeman, C. R.; Barrash, W.; Thrash, C. J.; Patterson, J.; Johnson, C. D.

    2016-12-01

Hydraulic tomography was performed in a 100 m² by 20 m thick volume of contaminated fractured mudstones at the former Naval Air Warfare Center (NAWC) in the Newark Basin, New Jersey, with the objective of estimating the detailed distribution of hydraulic conductivity (K). Characterizing the fine-scale K variability is important for designing effective remediation strategies in complex geologic settings such as fractured rock. In the tomography experiment, packers isolated two to six intervals in each of seven boreholes in the volume of investigation, and fiber-optic pressure transducers enabled collection of high-resolution drawdown observations. A hydraulic tomography dataset was obtained by conducting multiple aquifer tests in which a given isolated well interval was pumped and drawdown was monitored in all other intervals. The collective data from all tests display a wide range of behavior indicative of highly heterogeneous K within the tested volume, such as: drawdown curves for different intervals crossing one another on drawdown-time plots; unique drawdown curve shapes for certain intervals; and intervals with negligible drawdown adjacent to intervals with large drawdown. Tomographic inversion of data from 15 tests conducted in the first field season focused on estimating the K distribution at a scale of 1 m³ over approximately 25% of the investigated volume, where observation density was greatest. The estimated K field is consistent with prior geologic, geophysical, and hydraulic information, including: highly variable K within bedding-plane-parting fractures that are the primary flow and transport paths at NAWC, connected high-K features perpendicular to bedding, and a spatially heterogeneous distribution of low-K rock matrix and closed fractures. Subsequent tomographic testing was conducted in the second field season, with the region of high observation density expanded to cover a greater volume of the wellfield.

  14. Semi-Markov models for interval censored transient cognitive states with back transitions and a competing risk

    PubMed Central

    Wei, Shaoceng; Kryscio, Richard J.

    2015-01-01

    Continuous-time multi-state stochastic processes are useful for modeling the flow of subjects from intact cognition to dementia with mild cognitive impairment and global impairment as intervening transient, cognitive states and death as a competing risk (Figure 1). Each subject's cognition is assessed periodically resulting in interval censoring for the cognitive states while death without dementia is not interval censored. Since back transitions among the transient states are possible, Markov chains are often applied to this type of panel data. In this manuscript we apply a Semi-Markov process in which we assume that the waiting times are Weibull distributed except for transitions from the baseline state, which are exponentially distributed and in which we assume no additional changes in cognition occur between two assessments. We implement a quasi-Monte Carlo (QMC) method to calculate the higher order integration needed for likelihood estimation. We apply our model to a real dataset, the Nun Study, a cohort of 461 participants. PMID:24821001

  15. Semi-Markov models for interval censored transient cognitive states with back transitions and a competing risk.

    PubMed

    Wei, Shaoceng; Kryscio, Richard J

    2016-12-01

    Continuous-time multi-state stochastic processes are useful for modeling the flow of subjects from intact cognition to dementia with mild cognitive impairment and global impairment as intervening transient cognitive states and death as a competing risk. Each subject's cognition is assessed periodically resulting in interval censoring for the cognitive states while death without dementia is not interval censored. Since back transitions among the transient states are possible, Markov chains are often applied to this type of panel data. In this manuscript, we apply a semi-Markov process in which we assume that the waiting times are Weibull distributed except for transitions from the baseline state, which are exponentially distributed and in which we assume no additional changes in cognition occur between two assessments. We implement a quasi-Monte Carlo (QMC) method to calculate the higher order integration needed for likelihood estimation. We apply our model to a real dataset, the Nun Study, a cohort of 461 participants. © The Author(s) 2014.

  16. Availability and mean time between failures of redundant systems with random maintenance of subsystems

    NASA Technical Reports Server (NTRS)

    Schneeweiss, W.

    1977-01-01

It is shown how the availability and MTBF (Mean Time Between Failures) of a redundant system with subsystems maintained at the points of so-called stationary renewal processes can be determined from the distributions of the intervals between maintenance actions and of the failure-free operating intervals of the subsystems. The results make it possible, for example, to determine the frequency and duration of hidden failure states in computers which are incidentally corrected during the repair of observed failures.

  17. On the Determination of Poisson Statistics for Haystack Radar Observations of Orbital Debris

    NASA Technical Reports Server (NTRS)

    Stokely, Christopher L.; Benbrook, James R.; Horstman, Matt

    2007-01-01

A convenient and powerful method is used to determine whether radar detections of orbital debris are observed according to Poisson statistics. This is done by analyzing the time interval between detection events. For Poisson statistics, the probability distribution of the time interval between events is shown to be an exponential distribution. This distribution is a special case of the Erlang distribution that is used in estimating traffic loads on telecommunication networks. Poisson statistics form the basis of many orbital debris models, but the statistical basis of these models had not been clearly demonstrated empirically until now. Interestingly, for the fiscal year 2003 observations with the Haystack radar in a fixed staring mode, no statistically significant deviations from the behavior expected under Poisson statistics are observed, whether the data are considered as a whole or segregated by altitude or inclination. One would potentially expect some significant clustering of events in time as a result of satellite breakups, but the presence of Poisson statistics indicates that such debris disperse rapidly with respect to Haystack's very narrow radar beam. An exception to Poisson statistics is observed in the months following the intentional breakup of the Fengyun satellite in January 2007.
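The test described here reduces to checking whether inter-event times are exponentially distributed. A hedged sketch of one common way to do that, a Kolmogorov-Smirnov distance against an exponential with the sample-mean rate (the data and rate below are synthetic, not Haystack observations):

```python
import math
import random

def ks_exponential(intervals):
    """Kolmogorov-Smirnov distance between the empirical CDF of inter-event
    times and an exponential CDF parameterized by the sample mean."""
    n = len(intervals)
    mean = sum(intervals) / n
    d = 0.0
    for i, x in enumerate(sorted(intervals)):
        cdf = 1.0 - math.exp(-x / mean)
        d = max(d, abs(cdf - (i + 1) / n), abs(cdf - i / n))
    return d

random.seed(7)
# Synthetic Poisson process: exponential inter-arrival times.
waits = [random.expovariate(0.5) for _ in range(2000)]
d = ks_exponential(waits)          # small distance: consistent with Poisson statistics
```

Note that estimating the rate from the same data makes the classical KS critical values conservative (the Lilliefors correction applies); the sketch is only meant to show the shape of the computation.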

  18. A model of return intervals between earthquake events

    NASA Astrophysics Data System (ADS)

    Zhou, Yu; Chechkin, Aleksei; Sokolov, Igor M.; Kantz, Holger

    2016-06-01

Application of the diffusion entropy analysis and the standard deviation analysis to the time sequence of the southern California earthquake events from 1976 to 2002 uncovered scaling behavior typical for anomalous diffusion. However, the origin of such behavior is still under debate. Some studies attribute the scaling behavior to the correlations in the return intervals, or waiting times, between aftershocks or mainshocks. To elucidate the nature of the scaling, we applied specific reshuffling techniques to eliminate correlations between different types of events and then examined how this affects the scaling behavior. We demonstrate that the origin of the scaling behavior observed is the interplay between the mainshock waiting time distribution and the structure of clusters of aftershocks, but not correlations in waiting times between the mainshocks and aftershocks themselves. Our findings are corroborated by numerical simulations of a simple model showing a very similar behavior. The mainshocks are modeled by a renewal process with a power-law waiting time distribution between events, and aftershocks follow a nonhomogeneous Poisson process with the rate governed by Omori's law.

  19. Program for Weibull Analysis of Fatigue Data

    NASA Technical Reports Server (NTRS)

    Krantz, Timothy L.

    2005-01-01

A Fortran computer program has been written for performing statistical analyses of fatigue-test data that are assumed to be adequately represented by a two-parameter Weibull distribution. This program calculates the following: (1) Maximum-likelihood estimates of the Weibull distribution; (2) Data for contour plots of relative likelihood for two parameters; (3) Data for contour plots of joint confidence regions; (4) Data for the profile likelihood of the Weibull-distribution parameters; (5) Data for the profile likelihood of any percentile of the distribution; and (6) Likelihood-based confidence intervals for parameters and/or percentiles of the distribution. The program can account for tests that are suspended without failure (the statistical term for such suspension of tests is "censoring"). The analytical approach followed in this program is valid for type-I censoring, which is the removal of unfailed units at pre-specified times. Confidence regions and intervals are calculated by use of the likelihood-ratio method.
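The core computation, maximum-likelihood fitting of a two-parameter Weibull to type-I censored data, can be sketched compactly: given the shape, the scale has a closed-form MLE, so one can profile the likelihood over the shape alone. This is an illustrative re-implementation in Python (coarse grid search, synthetic data), not the NASA Fortran program:

```python
import math
import random

def weibull_mle_censored(times, failed):
    """Profile-likelihood Weibull fit for type-I censored data.
    times[i] = observed time; failed[i] is False for units suspended without failure."""
    r = sum(failed)                                  # number of actual failures
    def loglik(k):
        # Closed-form scale MLE given shape k: lam^k = sum(t_i^k) / r (sum over all units).
        lam = (sum(t ** k for t in times) / r) ** (1.0 / k)
        ll = 0.0
        for t, f in zip(times, failed):
            z = (t / lam) ** k
            if f:   # failure: density term
                ll += math.log(k) - k * math.log(lam) + (k - 1) * math.log(t) - z
            else:   # suspension: survival-function term
                ll += -z
        return ll
    return max((k / 100.0 for k in range(20, 501)), key=loglik)

random.seed(3)
shape, scale, censor_at = 1.5, 10.0, 14.0
raw = [scale * (-math.log(1 - random.random())) ** (1 / shape) for _ in range(400)]
times = [min(t, censor_at) for t in raw]             # remove unfailed units at a fixed time
failed = [t < censor_at for t in raw]
k_hat = weibull_mle_censored(times, failed)          # close to the true shape of 1.5
```

The likelihood-ratio confidence intervals the program produces would then come from the curvature of this same profile log-likelihood around its maximum.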

  20. Paleogeography and the Late Cretaceous of the Western Interior of middle North America; coal distribution and sediment accumulation

    USGS Publications Warehouse

    Roberts, Laura N. Robinson; Kirschbaum, Mark A.

    1995-01-01

    A synthesis of Late Cretaceous paleogeography of the Western Interior from Mexico to southwestern Canada emphasizes the areal distribution of peat-forming environments during six biostratigraphically constrained time intervals. Isopach maps of strata for each interval reveal the locations and magnitude of major depocenters. The paleogeographic framework provides insight into the relative importance of tectonism, eustasy, and climate on the accumulation of thick peats and their preservation as coals. A total of 123 basin summaries and their data provide the ground truth for construction of the isopach and paleogeographic maps.

  1. Preventive care and recall intervals. Targeting of services in child dental care in Norway.

    PubMed

    Wang, N J; Aspelund, G Ø

    2010-03-01

The skewed distribution of caries has made a high-risk strategy attractive in child dental services. The purpose of this study was to describe the preventive dental care given and the recall intervals used for children and adolescents in a low caries risk population, and to study how the time spent for preventive care and the length of intervals were associated with characteristics of the children and factors related to care delivery. Time spent for and type of preventive care, recall intervals, oral health and health behaviour of children and adolescents three to 18 years of age (n = 576) and the preventive services delivered were registered at routine dental examinations in the public dental services. The time used for preventive dental care was on average 22% of the total time used in a course of treatment (7.3 of 33.4 minutes). Less than 15% of the variation in time spent for prevention was explained by oral health, oral health behaviours and other characteristics of the children and the service delivery. The mean (SD) recall intervals were 15.4 (4.6) months and 55% of the children were given intervals equal to or longer than 18 months. Approximately 30% of the variation in the length of the recall intervals was explained by characteristics of the child and the service delivery. The time used for preventive dental care of children in a low risk population was standardized, while the recall intervals to a certain extent were individualized according to dental health and dental health behaviour.

  2. Deriving Lifetime Maps in the Time/Frequency Domain of Coherent Structures in the Turbulent Boundary Layer

    NASA Technical Reports Server (NTRS)

    Palumbo, Dan

    2008-01-01

The lifetimes of coherent structures are derived from data correlated over a three-sensor array sampling streamwise sidewall pressure at high Reynolds number (>10^8). The data were acquired at subsonic, transonic and supersonic speeds aboard a Tupolev Tu-144. The lifetimes are computed from a variant of the correlation length termed the lifelength. Characteristic lifelengths are estimated by fitting a Gaussian distribution to the sensors' cross spectra and are shown to compare favorably with Efimtsov's prediction of correlation space scales. Lifelength distributions are computed in the time/frequency domain using an interval correlation technique on the continuous wavelet transform of the original time data. The median values of the lifelength distributions are found to be very close to the frequency-averaged result. The interval correlation technique is shown to allow the retrieval and inspection of the original time data of each event in the lifelength distributions, thus providing a means to locate and study the nature of the coherent structure in the turbulent boundary layer. The lifelength data are converted to lifetimes using the convection velocity. The lifetimes of events in the time/frequency domain are displayed in Lifetime Maps. The primary purpose of the paper is to validate these new analysis techniques so that they can be used with confidence to further characterize the behavior of coherent structures in the turbulent boundary layer.

  3. Detection of abnormal item based on time intervals for recommender systems.

    PubMed

    Gao, Min; Yuan, Quan; Ling, Bin; Xiong, Qingyu

    2014-01-01

With the rapid development of e-business, personalized recommendation has become core competence for enterprises to gain profits and improve customer satisfaction. Although collaborative filtering is the most successful approach for building a recommender system, it suffers from "shilling" attacks. In recent years, the research on shilling attacks has been greatly improved. However, existing approaches suffer from serious problems of attack-model dependency and high computational cost. To address these problems, an approach for detecting abnormal items is proposed in this paper. First, two common features of all attack models are analyzed. A revised bottom-up discretized approach is then proposed for the detection, based on time intervals and these features. The distributions of ratings in different time intervals are compared to detect anomalies based on the calculation of the chi-square distribution (χ²). We evaluated our approach on four types of items, which are defined according to the life cycles of these items. The experimental results show that the proposed approach achieves a high detection rate with low computational cost when the number of attack profiles is more than 15. It improves the efficiency of shilling attack detection by narrowing down the suspicious users.
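The interval-by-interval comparison described here reduces to a Pearson chi-square statistic between a rating histogram observed in one time window and the counts expected from an item's other windows. A toy sketch (the counts and windowing below are invented for illustration, not the paper's data):

```python
def chi_square(observed, expected):
    """Pearson chi-square statistic comparing a rating histogram in one time
    interval against the counts expected from the item's other intervals."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected) if e > 0)

# Ratings 1-5 in a suspect window vs. counts expected from normal windows.
# A push/nuke attack piles ratings onto the extremes, inflating the statistic.
print(chi_square([40, 5, 5, 5, 45], [20, 20, 20, 20, 20]))  # -> 85.0
```

A window whose statistic exceeds the chi-square critical value for the chosen significance level would be flagged, and only users rating in that window need further inspection, which is where the computational saving comes from.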

  4. Invariance in the recurrence of large returns and the validation of models of price dynamics

    NASA Astrophysics Data System (ADS)

    Chang, Lo-Bin; Geman, Stuart; Hsieh, Fushing; Hwang, Chii-Ruey

    2013-08-01

    Starting from a robust, nonparametric definition of large returns (“excursions”), we study the statistics of their occurrences, focusing on the recurrence process. The empirical waiting-time distribution between excursions is remarkably invariant to year, stock, and scale (return interval). This invariance is related to self-similarity of the marginal distributions of returns, but the excursion waiting-time distribution is a function of the entire return process and not just its univariate probabilities. Generalized autoregressive conditional heteroskedasticity (GARCH) models, market-time transformations based on volume or trades, and generalized (Lévy) random-walk models all fail to fit the statistical structure of excursions.

  5. An Evaluation of the Reliability, Availability, Maintainability and Durability of the Utility Tactical Transport Aircraft System

    DTIC Science & Technology

    1980-01-01

specified condition. 10. Mean-Time-Between-Maintenance (MTDM). The mean of the distribution of the time intervals between all maintenance actions...which requires unscheduled maint. action. 3. MTDM: Mean-Time-Between-Maintenance. 4. MMH/OH: Maint Man-Hour/Engine Operating Hour. 5. CLASS III FAILURE

  6. The spatial return level of aggregated hourly extreme rainfall in Peninsular Malaysia

    NASA Astrophysics Data System (ADS)

    Shaffie, Mardhiyyah; Eli, Annazirin; Wan Zin, Wan Zawiah; Jemain, Abdul Aziz

    2015-07-01

This paper is intended to ascertain the spatial pattern of extreme rainfall distribution in Peninsular Malaysia at several short time intervals, i.e., on hourly basis. Motivation of this research is due to historical records of extreme rainfall in Peninsular Malaysia, whereby many hydrological disasters at this region occur within a short time period. The hourly periods considered are 1, 2, 3, 6, 12, and 24 h. Many previous hydrological studies dealt with daily rainfall data; thus, this study enables comparison to be made on the estimated performances between daily and hourly rainfall data analyses so as to identify the impact of extreme rainfall at a shorter time scale. Return levels based on the time aggregates considered are also computed. Parameter estimation using the L-moment method for four probability distributions, namely, the generalized extreme value (GEV), generalized logistic (GLO), generalized Pareto (GPA), and Pearson type III (PE3) distributions was conducted. Aided with the L-moment diagram test and mean square error (MSE) test, GLO was found to be the most appropriate distribution to represent the extreme rainfall data. For the return periods considered (10, 50, and 100 years), the spatial patterns revealed that the rainfall distribution across the peninsula differs between 1- and 24-h extreme rainfalls. The outcomes of this study would provide additional information regarding patterns of extreme rainfall in Malaysia which may not be detected when considering only a higher time scale such as daily; thus, appropriate measures for shorter time scales of extreme rainfall can be planned. The implementation of such measures would be beneficial to the authorities to reduce the impact of any disastrous natural event.

  7. Toward a quantitative account of pitch distribution in spontaneous narrative: Method and validation

    PubMed Central

    Matteson, Samuel E.; Streit Olness, Gloria; Caplow, Nancy J.

    2013-01-01

    Pitch is well-known both to animate human discourse and to convey meaning in communication. The study of the statistical population distributions of pitch in discourse will undoubtedly benefit from methodological improvements. The current investigation examines a method that parameterizes pitch in discourse as musical pitch interval H measured in units of cents and that disaggregates the sequence of peak word-pitches using tools employed in time-series analysis and digital signal processing. The investigators test the proposed methodology by its application to distributions in pitch interval of the peak word-pitch (collectively called the discourse gamut) that occur in simulated and actual spontaneous emotive narratives obtained from 17 middle-aged African-American adults. The analysis, in rigorous tests, not only faithfully reproduced simulated distributions imbedded in realistic time series that drift and include pitch breaks, but the protocol also reveals that the empirical distributions exhibit a common hidden structure when normalized to a slowly varying mode (called the gamut root) of their respective probability density functions. Quantitative differences between narratives reveal the speakers' relative propensity for the use of pitch levels corresponding to elevated degrees of a discourse gamut (the “e-la”) superimposed upon a continuum that conforms systematically to an asymmetric Laplace distribution. PMID:23654400

  8. Why noise is useful in functional and neural mechanisms of interval timing?

    PubMed Central

    2013-01-01

Background The ability to estimate durations in the seconds-to-minutes range - interval timing - is essential for survival and adaptation, and its impairment leads to severe cognitive and/or motor dysfunctions. The response rate near a memorized duration has a Gaussian shape centered on the to-be-timed interval (criterion time). The width of the Gaussian-like distribution of responses increases linearly with the criterion time, i.e., interval timing obeys the scalar property. Results We presented analytical and numerical results based on the striatal beat frequency (SBF) model showing that parameter variability (noise) mimics behavioral data. A key functional block of the SBF model is the set of oscillators that provide the time base for the entire timing network. The implementation of the oscillators block as simplified phase (cosine) oscillators has the additional advantage that it is analytically tractable. We also checked numerically that the scalar property emerges in the presence of memory variability by using biophysically realistic Morris-Lecar oscillators. First, we predicted analytically and tested numerically that in a noise-free SBF model the output function could be approximated by a Gaussian. However, in a noise-free SBF model the width of the Gaussian envelope is independent of the criterion time, which violates the scalar property. We showed analytically and verified numerically that small fluctuations of the memorized criterion time lead to the scalar property of interval timing. Conclusions Noise is ubiquitous in the form of small fluctuations of intrinsic frequencies of the neural oscillators, the errors in recording/retrieving stored information related to criterion time, fluctuation in neurotransmitters’ concentration, etc. Our model suggests that biological noise plays an essential functional role in SBF interval timing. PMID:23924391

  9. Estimating equivalence with quantile regression

    USGS Publications Warehouse

    Cade, B.S.

    2011-01-01

Equivalence testing and corresponding confidence interval estimates are used to provide more enlightened statistical statements about parameter estimates by relating them to intervals of effect sizes deemed to be of scientific or practical importance rather than just to an effect size of zero. Equivalence tests and confidence interval estimates are based on a null hypothesis that a parameter estimate is either outside (inequivalence hypothesis) or inside (equivalence hypothesis) an equivalence region, depending on the question of interest and assignment of risk. The former approach, often referred to as bioequivalence testing, is often used in regulatory settings because it reverses the burden of proof compared to a standard test of significance, following a precautionary principle for environmental protection. Unfortunately, many applications of equivalence testing focus on establishing average equivalence by estimating differences in means of distributions that do not have homogeneous variances. I discuss how to compare equivalence across quantiles of distributions using confidence intervals on quantile regression estimates that detect differences in heterogeneous distributions missed by focusing on means. I used one-tailed confidence intervals based on inequivalence hypotheses in a two-group treatment-control design for estimating bioequivalence of arsenic concentrations in soils at an old ammunition testing site and bioequivalence of vegetation biomass at a reclaimed mining site. Two-tailed confidence intervals based both on inequivalence and equivalence hypotheses were used to examine quantile equivalence for negligible trends over time for a continuous exponential model of amphibian abundance. © 2011 by the Ecological Society of America.

  10. Modeling the heterogeneity of human dynamics based on the measurements of influential users in Sina Microblog

    NASA Astrophysics Data System (ADS)

    Wang, Chenxu; Guan, Xiaohong; Qin, Tao; Yang, Tao

    2015-06-01

Online social networks have become an indispensable communication tool in the information age. The development of microblogging also provides us a great opportunity to study human dynamics that play a crucial role in the design of efficient communication systems. In this paper we study the characteristics of the tweeting behavior based on the data collected from Sina Microblog. The user activity level is measured to characterize how often a user posts a tweet. We find that the user activity level follows a bimodal distribution. That is, the microblog users tend to be either active or inactive. The inter-tweeting time distribution is then measured at both the aggregate and individual levels. We find that the inter-tweeting time follows a piecewise power-law distribution with two tails. Furthermore, the exponents of the two tails have different correlations with the user activity level. These findings demonstrate that the dynamics of the tweeting behavior are heterogeneous in different time scales. We then develop a dynamic model co-driven by the memory and the interest mechanism to characterize the heterogeneity. The numerical simulations validate the model and verify that the short time interval tweeting behavior is driven by the memory mechanism while the long time interval behavior by the interest mechanism.

  11. Forecasting overhaul or replacement intervals based on estimated system failure intensity

    NASA Astrophysics Data System (ADS)

    Gannon, James M.

    1994-12-01

System reliability can be expressed in terms of the pattern of failure events over time. Assuming a nonhomogeneous Poisson process and Weibull intensity function for complex repairable system failures, the degree of system deterioration can be approximated. Maximum likelihood estimators (MLEs) for the system Rate of Occurrence of Failure (ROCOF) function are presented. Evaluating the integral of the ROCOF over annual usage intervals yields the expected number of annual system failures. By associating a cost of failure with the expected number of failures, budget and program policy decisions can be made based on expected future maintenance costs. Monte Carlo simulation is used to estimate the range and the distribution of the net present value and internal rate of return of alternative cash flows based on the distributions of the cost inputs and confidence intervals of the MLEs.
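For the power-law (Weibull-intensity) NHPP used here, the integral of the ROCOF over a usage interval has a closed form, so the expected number of failures per year is a one-liner. A hedged sketch with illustrative parameter values (β > 1 indicating deterioration):

```python
def expected_failures(beta, eta, a, b):
    """Integral of the Weibull ROCOF u(t) = (beta/eta) * (t/eta)**(beta - 1)
    over the usage interval [a, b]: the expected number of failures in that interval."""
    return (b / eta) ** beta - (a / eta) ** beta

# Deteriorating system (beta = 2): each successive usage interval of equal
# length is expected to produce more failures than the last.
print(expected_failures(2.0, 100.0, 0, 100))    # -> 1.0
print(expected_failures(2.0, 100.0, 100, 200))  # -> 3.0
```

Multiplying these expected counts by a cost per failure gives the expected annual maintenance cost stream on which the overhaul-versus-replacement comparison in the abstract is based.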

  12. A quantile-based Time at Risk: A new approach for assessing risk in financial markets

    NASA Astrophysics Data System (ADS)

    Bolgorian, Meysam; Raei, Reza

    2013-11-01

In this paper, we provide a new measure for evaluation of risk in financial markets. This measure is based on the return interval of critical events in financial markets or other investment situations. Our main goal was to devise a model like Value at Risk (VaR). Just as VaR, for a given financial asset, probability level and time horizon, gives a critical value such that the probability that the loss on the asset over the time horizon exceeds this value equals the given probability level, our concept of Time at Risk (TaR), using a probability distribution function of return intervals, provides a critical time such that the probability that the return interval of a critical event exceeds this time equals the given probability level. As an empirical application, we applied our model to data from the Tehran Stock Exchange Price Index (TEPIX) as a financial asset (market portfolio) and reported the results.
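In the simplest empirical version of this idea, TaR at probability level p is just the (1 − p)-quantile of the observed return intervals. This sketch uses a plain empirical quantile on toy data; the paper fits a parametric distribution of return intervals instead, so treat this as an illustrative simplification:

```python
import math

def time_at_risk(intervals, p):
    """Critical time T such that the probability a return interval exceeds T
    is (approximately) p: the empirical (1 - p)-quantile of the intervals."""
    xs = sorted(intervals)
    idx = max(0, min(len(xs) - 1, math.ceil((1 - p) * len(xs)) - 1))
    return xs[idx]

intervals = list(range(1, 101))        # toy return intervals between critical events
print(time_at_risk(intervals, 0.05))   # -> 95: only 5% of intervals exceed this time
```

The analogy with VaR is exact: VaR inverts the loss distribution at a tail probability, TaR inverts the return-interval distribution at the same tail probability.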

  13. Graphic analysis and multifractal on percolation-based return interval series

    NASA Astrophysics Data System (ADS)

    Pei, A. Q.; Wang, J.

    2015-05-01

A financial time series model is developed and investigated by the oriented percolation system (one of the statistical physics systems). The nonlinear and statistical behaviors of the return interval time series are studied for the proposed model and the real stock market by applying visibility graph (VG) and multifractal detrended fluctuation analysis (MF-DFA). We investigate the fluctuation behaviors of return intervals of the model for different parameter settings, and also comparatively study these fluctuation patterns with those of the real financial data for different threshold values. The empirical research of this work exhibits the multifractal features for the corresponding financial time series. Further, the VGs derived from both the simulated data and the real data show small-world behavior, hierarchy, high clustering, and power-law tails in the degree distributions.

  14. Real-time hydraulic interval state estimation for water transport networks: a case study

    NASA Astrophysics Data System (ADS)

    Vrachimis, Stelios G.; Eliades, Demetrios G.; Polycarpou, Marios M.

    2018-03-01

    Hydraulic state estimation in water distribution networks is the task of estimating water flows and pressures in the pipes and nodes of the network based on some sensor measurements. This requires a model of the network as well as knowledge of demand outflow and tank water levels. Due to modeling and measurement uncertainty, standard state estimation may result in inaccurate hydraulic estimates without any measure of the estimation error. This paper describes a methodology for generating hydraulic state bounding estimates based on interval bounds on the parametric and measurement uncertainties. The estimation error bounds provided by this method can be applied to determine the existence of unaccounted-for water in water distribution networks. As a case study, the method is applied to a modified transport network in Cyprus, using actual data in real time.

  15. Spatial Distribution of the Coefficient of Variation for the Paleo-Earthquakes in Japan

    NASA Astrophysics Data System (ADS)

    Nomura, S.; Ogata, Y.

    2015-12-01

    Renewal processes, point processes in which the intervals between consecutive events are independently and identically distributed, are frequently used to describe the repeating mechanism of recurrent earthquakes and to forecast the next events. However, one of the difficulties in applying recurrent earthquake models is the scarcity of historical data. Most studied fault segments have few observed earthquakes, or only one, often with poorly constrained historic and/or radiocarbon ages. The maximum likelihood estimate from such a small data set can have a large bias and error, which tends to yield a high probability for the next event within a very short time span when the recurrence intervals have similar lengths. On the other hand, recurrence intervals at a fault depend on the long-term slip rate caused by tectonic motion. In addition, recurrence times fluctuate because of nearby earthquakes or fault activities, which encourage or discourage surrounding seismicity. These factors have spatial trends due to the heterogeneity of tectonic motion and seismicity. Thus, this paper introduces a spatial structure on the key parameters of renewal processes for recurrent earthquakes and estimates it using spatial statistics. The spatial variation of the mean and variance parameters of recurrence times is estimated in a Bayesian framework, and the next earthquakes are forecasted by Bayesian predictive distributions. The proposed model is applied to a catalog of recurrent earthquakes in Japan, and its results are compared with the current forecast adopted by the Earthquake Research Committee of Japan.
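
    A minimal sketch of the renewal-process forecasting step described above, using a moment-matched lognormal recurrence model. The paper's Bayesian spatial estimation is considerably more elaborate, and the interval data below are hypothetical.

```python
import math

def lognormal_cdf(t, mu, sigma):
    return 0.5 * (1 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2))))

def conditional_prob(intervals, elapsed, horizon):
    """P(next event within `horizon` years | quiet for `elapsed` years),
    for a lognormal renewal model fitted by moment matching."""
    n = len(intervals)
    mean = sum(intervals) / n
    var = sum((x - mean) ** 2 for x in intervals) / (n - 1)
    cv2 = var / mean ** 2                       # squared coefficient of variation
    sigma = math.sqrt(math.log(1 + cv2))        # lognormal shape from CV
    mu = math.log(mean) - 0.5 * sigma ** 2      # lognormal location from mean
    survivor = 1 - lognormal_cdf(elapsed, mu, sigma)
    return (lognormal_cdf(elapsed + horizon, mu, sigma)
            - lognormal_cdf(elapsed, mu, sigma)) / survivor
```

    The coefficient of variation, the quantity mapped spatially in the abstract, enters directly through the shape parameter sigma.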

  16. Monte Carlo Method for Determining Earthquake Recurrence Parameters from Short Paleoseismic Catalogs: Example Calculations for California

    USGS Publications Warehouse

    Parsons, Tom

    2008-01-01

    Paleoearthquake observations often lack enough events at a given site to directly define a probability density function (PDF) for earthquake recurrence. Sites with fewer than 10-15 intervals do not provide enough information to reliably determine the shape of the PDF using standard maximum-likelihood techniques [e.g., Ellsworth et al., 1999]. In this paper I present a method that attempts to fit wide ranges of distribution parameters to short paleoseismic series. From repeated Monte Carlo draws, it becomes possible to quantitatively estimate most likely recurrence PDF parameters, and a ranked distribution of parameters is returned that can be used to assess uncertainties in hazard calculations. In tests on short synthetic earthquake series, the method gives results that cluster around the mean of the input distribution, whereas maximum likelihood methods return the sample means [e.g., NIST/SEMATECH, 2006]. For short series (fewer than 10 intervals), sample means tend to reflect the median of an asymmetric recurrence distribution, possibly leading to an overestimate of the hazard should they be used in probability calculations. Therefore a Monte Carlo approach may be useful for assessing recurrence from limited paleoearthquake records. Further, the degree of functional dependence among parameters like mean recurrence interval and coefficient of variation can be established. The method is described for use with time-independent and time-dependent PDFs, and results from 19 paleoseismic sequences on strike-slip faults throughout the state of California are given.
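
    The Monte Carlo idea can be sketched as follows: draw candidate recurrence parameters, score each draw by the likelihood of the short observed series, and return a ranked set. This is a simplified illustration assuming a lognormal recurrence PDF and uniform parameter ranges, not Parsons' exact procedure.

```python
import math
import random

def mc_rank(intervals, n_draws=2000, seed=1):
    """Rank random (mean, CV) lognormal recurrence parameters by the
    likelihood of a short paleoseismic interval series (illustrative
    parameter ranges; units are years)."""
    random.seed(seed)
    ranked = []
    for _ in range(n_draws):
        mean = random.uniform(50, 500)      # candidate mean recurrence
        cv = random.uniform(0.1, 1.0)       # candidate coefficient of variation
        sigma = math.sqrt(math.log(1 + cv * cv))
        mu = math.log(mean) - 0.5 * sigma ** 2
        loglik = sum(-math.log(x * sigma * math.sqrt(2 * math.pi))
                     - (math.log(x) - mu) ** 2 / (2 * sigma ** 2)
                     for x in intervals)
        ranked.append((loglik, mean, cv))
    ranked.sort(reverse=True)               # best-fitting draws first
    return ranked
```

    The ranked list is what allows uncertainty in the hazard calculation to be propagated rather than collapsed to a single best fit.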

  17. Monte Carlo method for determining earthquake recurrence parameters from short paleoseismic catalogs: Example calculations for California

    USGS Publications Warehouse

    Parsons, T.

    2008-01-01

    Paleoearthquake observations often lack enough events at a given site to directly define a probability density function (PDF) for earthquake recurrence. Sites with fewer than 10-15 intervals do not provide enough information to reliably determine the shape of the PDF using standard maximum-likelihood techniques (e.g., Ellsworth et al., 1999). In this paper I present a method that attempts to fit wide ranges of distribution parameters to short paleoseismic series. From repeated Monte Carlo draws, it becomes possible to quantitatively estimate most likely recurrence PDF parameters, and a ranked distribution of parameters is returned that can be used to assess uncertainties in hazard calculations. In tests on short synthetic earthquake series, the method gives results that cluster around the mean of the input distribution, whereas maximum likelihood methods return the sample means (e.g., NIST/SEMATECH, 2006). For short series (fewer than 10 intervals), sample means tend to reflect the median of an asymmetric recurrence distribution, possibly leading to an overestimate of the hazard should they be used in probability calculations. Therefore a Monte Carlo approach may be useful for assessing recurrence from limited paleoearthquake records. Further, the degree of functional dependence among parameters like mean recurrence interval and coefficient of variation can be established. The method is described for use with time-independent and time-dependent PDFs, and results from 19 paleoseismic sequences on strike-slip faults throughout the state of California are given.

  18. Estimation of time-delayed mutual information and bias for irregularly and sparsely sampled time-series

    PubMed Central

    Albers, D. J.; Hripcsak, George

    2012-01-01

    A method to estimate the time-dependent correlation via an empirical bias estimate of the time-delayed mutual information for a time-series is proposed. In particular, the bias of the time-delayed mutual information is shown to often be equivalent to the mutual information between two distributions of points from the same system separated by infinite time. Thus intuitively, estimation of the bias is reduced to estimation of the mutual information between distributions of data points separated by large time intervals. The proposed bias estimation techniques are shown to work for Lorenz equations data and glucose time series data of three patients from the Columbia University Medical Center database. PMID:22536009
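
    A sketch of the underlying idea: estimate mutual information with a histogram estimator, and note that evaluating it on effectively independent samples (e.g., points separated by a very large delay) approximates the estimator's bias. The fixed-width binning scheme here is a simple illustrative choice, not the authors' estimator.

```python
import math

def mutual_information(x, y, bins=8):
    """Histogram estimate of I(X;Y) in nats. On independent samples the
    true MI is zero, so the returned value approximates the estimator bias."""
    lo_x, hi_x = min(x), max(x)
    lo_y, hi_y = min(y), max(y)

    def bucket(v, lo, hi):
        return min(int((v - lo) / (hi - lo + 1e-12) * bins), bins - 1)

    n = len(x)
    joint = {}
    px = [0] * bins
    py = [0] * bins
    for xi, yi in zip(x, y):
        i, j = bucket(xi, lo_x, hi_x), bucket(yi, lo_y, hi_y)
        joint[(i, j)] = joint.get((i, j), 0) + 1
        px[i] += 1
        py[j] += 1
    mi = 0.0
    for (i, j), c in joint.items():
        p_ij = c / n
        mi += p_ij * math.log(p_ij * n * n / (px[i] * py[j]))
    return mi
```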

  19. Comparison of incubation period distribution of human infections with MERS-CoV in South Korea and Saudi Arabia.

    PubMed

    Virlogeux, Victor; Fang, Vicky J; Park, Minah; Wu, Joseph T; Cowling, Benjamin J

    2016-10-24

    The incubation period is an important epidemiologic distribution: it is often incorporated in case definitions, used to determine appropriate quarantine periods, and serves as an input to mathematical modeling studies. Middle East Respiratory Syndrome (MERS), caused by the MERS coronavirus (MERS-CoV), is an emerging infectious disease in the Arabian Peninsula. There was a large outbreak of MERS in South Korea in 2015. We examined the incubation period distribution of MERS coronavirus infection for cases in South Korea and in Saudi Arabia. Using parametric and nonparametric methods, we estimated a mean incubation period of 6.9 days (95% credibility interval: 6.3-7.5) for cases in South Korea and 5.0 days (95% credibility interval: 4.0-6.6) among cases in Saudi Arabia. In a log-linear regression model, the mean incubation period was 1.42 times longer (95% credibility interval: 1.18-1.71) among cases in South Korea than in Saudi Arabia. The variation that we identified in the incubation period distribution between locations could be associated with differences in ascertainment or reporting of exposure dates and illness onset dates, differences in the source or mode of infection, or environmental differences.
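
    As a simplified illustration of fitting an incubation-period distribution, here is a lognormal fit by maximum likelihood to exactly observed incubation times and the implied mean. The study itself handles interval-censored exposure windows with Bayesian methods, so this is only a stand-in for the parametric step.

```python
import math

def fit_lognormal_mean(incubation_days):
    """MLE lognormal fit to exactly observed incubation periods (days);
    returns the implied mean incubation period exp(mu + sigma^2 / 2)."""
    logs = [math.log(d) for d in incubation_days]
    mu = sum(logs) / len(logs)
    sigma2 = sum((l - mu) ** 2 for l in logs) / len(logs)
    return math.exp(mu + sigma2 / 2)
```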

  20. DOD Dictionary of Military and Associated Terms

    DTIC Science & Technology

    2017-03-01

    Representative entries and abbreviations include: prime vendor (PV): distribution to regionally grouped military and federal customers from commercial distributors using electronic commerce; space environment: the environment corresponding to the magnetosphere, interplanetary space, and the solar atmosphere (JP 3-59); precise time and time interval (PTTI); PUK: packup kit; PVNTMED: preventive medicine; PVT: positioning, velocity, and timing.

  1. High time resolution characteristics of intermediate ion distributions upstream of the earth's bow shock

    NASA Technical Reports Server (NTRS)

    Potter, D. W.

    1985-01-01

    High time resolution particle data upstream of the bow shock during time intervals that have been identified as having intermediate ion distributions often show high amplitude oscillations in the ion fluxes at energies of 2 and 6 keV. These ion oscillations, observed with the particle instruments of the University of California, Berkeley, on the ISEE 1 and 2 spacecraft, are at the same frequency (about 0.04 Hz) as the magnetic field oscillations. Typically, the 6-keV ion flux increases, then the 2-keV flux increases, followed by a decrease in the 2-keV flux and then in the 6-keV flux. This process repeats many times. Although there is no entirely satisfactory explanation, the presence of these ion flux oscillations suggests that distributions often are misidentified as intermediate ion distributions.

  2. Time prediction of failure a type of lamps by using general composite hazard rate model

    NASA Astrophysics Data System (ADS)

    Riaman; Lesmana, E.; Subartini, B.; Supian, S.

    2018-03-01

    This paper discusses estimation of a basic survival model to obtain a predicted mean failure time for lamps. The estimate is for a parametric model, the general composite hazard rate model. The underlying random failure-time model is the exponential distribution, used as the basis, which has a constant hazard function. In this case, we discuss an example of survival-model estimation for a composite hazard function, using an exponential model as its basis. The model is estimated by estimating its parameters through construction of the survival function and the empirical cumulative function. The fitted model is then used to predict the mean failure time for the lamp type. The data are grouped into several intervals and the average failure value is computed for each interval; the mean failure time of the model is then calculated from each interval, and the p-value obtained from the test is 0.3296.
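
    The interval-averaging step described above can be sketched as follows, using interval midpoints weighted by failure counts. This is a simplification of the composite hazard model, and the grouped data shown in the test are hypothetical.

```python
def mean_failure_time(groups):
    """Average failure time from data grouped into intervals.
    `groups` is a list of (lower_bound, upper_bound, failure_count);
    each interval contributes its midpoint weighted by its count."""
    total = sum(count for _, _, count in groups)
    return sum((lo + hi) / 2 * count for lo, hi, count in groups) / total
```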

  3. Theoretical implications of quantitative properties of interval timing and probability estimation in mouse and rat.

    PubMed

    Kheifets, Aaron; Freestone, David; Gallistel, C R

    2017-07-01

    In three experiments with mice (Mus musculus) and rats (Rattus norvegicus), we used a switch paradigm to measure quantitative properties of the interval-timing mechanism. We found that: 1) rodents adjusted the precision of their timed switches in response to changes in the interval between the short and long feed latencies (the temporal goalposts); 2) the variability in the timing of the switch response was reduced or unchanged in the face of large trial-to-trial random variability in the short and long feed latencies; and 3) the adjustment in the distribution of switch latencies in response to changes in the relative frequency of short and long trials was sensitive to the asymmetry in the Kullback-Leibler divergence. The three results suggest that durations are represented with adjustable precision, that they are timed by multiple timers, and that there is a trial-by-trial (episodic) record of feed latencies in memory. © 2017 Society for the Experimental Analysis of Behavior.

  4. Observation and analysis of aerosol optical properties and aerosol growth in two New Year celebrations in Manila Observatory (14.64N, 127.07E)

    NASA Astrophysics Data System (ADS)

    Lagrosas, N.; Bautista, D. L. B.; Miranda, J. P.

    2016-12-01

    Aerosol optical properties and growth were measured during the 2014 and 2016 New Year celebrations at Manila Observatory, Philippines. Measurements were made using a USB2000 spectrometer from 22:00 of 31 December 2013 to 03:00 of 01 January 2014 and from 18:00 of 31 December 2015 to 05:30 of 01 January 2016. A xenon lamp was used as a light source 150 m from the spectrometer. Fireworks and firecrackers were the main sources of aerosols during these festivities. Data were collected every 60 s and 10 s for 2014 and 2016, respectively. The aerosol volume size distribution was derived using the parametric inversion method proposed by Kaijser (1983). The method is performed by selecting 8 wavelengths from 387.30 nm to 600.00 nm. The reference intensities were obtained when firework activities were considerably low and the air was assumed to be relatively clean. Using Mie theory and assuming that the volume size distribution is a linear combination of 33 bimodal lognormal distribution functions with geometric mean radii between 0.003 um and 1.2 um, a least-squares minimization was performed between measured and computed optical depths. The 2016 New Year data showed a mostly unimodal size distribution (mean radius = 0.3 um) from 23:00 to 05:30 (Fig. 1a). The mean Angstrom coefficient during the same time interval was approximately 0.75. This could be attributed to a constant RH (100%) during this time interval. A bimodal distribution was observed when the RH value was 94%, from 18:30 to 21:30. The transition to a unimodal distribution was observed at 21:00, when the RH value changed from 94% to 100%. In contrast to the 2016 New Year celebration, the 2014 size distribution was bimodal from 23:30 to 02:30 (Fig. 1b). The bimodal distribution is the result of firework activities before New Year. Aerosol growth was evident when the size distribution became unimodal after 02:30 (mean radius = 1.1 um).
The mean Angstrom coefficient, when the size distribution was unimodal, was around 0.5; this could be attributed to the RH increasing from 78% to 88% during this time interval. The two New Year celebrations showed different patterns of aerosol growth. Aerosols produced at high RH tended to be unimodal, while aerosols produced at low RH tended to have a bimodal distribution. As RH increased, the bimodal distribution became unimodal.

  5. Powerplexer

    NASA Technical Reports Server (NTRS)

    Woods, J. M. (Inventor)

    1973-01-01

    An electrical power distribution system is described for use in providing different dc voltage levels. A circuit is supplied with dc voltage levels and commutates pulses for timed intervals onto a pair of distribution wires. The circuit is driven by a command generator which places pulses on the wires in a timed sequence. The pair of wires extends to voltage strippers connected to the various loads. The voltage strippers each respond to the pulsed dc levels on the pair of wires and form the different output voltages delivered to each load.

  6. Exact Scheffé-type confidence intervals for output from groundwater flow models: 2. Combined use of hydrogeologic information and calibration data

    USGS Publications Warehouse

    Cooley, Richard L.

    1993-01-01

    Calibration data (observed values corresponding to model-computed values of dependent variables) are incorporated into a general method of computing exact Scheffé-type confidence intervals analogous to the confidence intervals developed in part 1 (Cooley, this issue) for a function of parameters derived from a groundwater flow model. Parameter uncertainty is specified by a distribution of parameters conditioned on the calibration data. This distribution was obtained as a posterior distribution by applying Bayes' theorem to the hydrogeologically derived prior distribution of parameters from part 1 and a distribution of differences between the calibration data and corresponding model-computed dependent variables. Tests show that the new confidence intervals can be much smaller than the intervals of part 1 because the prior parameter variance-covariance structure is altered so that combinations of parameters that give poor model fit to the data are unlikely. The confidence intervals of part 1 and the new confidence intervals can be effectively employed in a sequential method of model construction whereby new information is used to reduce confidence interval widths at each stage.
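
    The Bayes step the abstract describes, prior parameter information updated by calibration data, can be illustrated in one dimension with a conjugate normal update; the paper works with full parameter variance-covariance matrices rather than a single scalar parameter.

```python
def posterior(prior_mean, prior_var, obs, obs_var):
    """Conjugate normal update of a scalar parameter: combine a
    hydrogeologically derived prior with calibration observations
    (known observation variance) to get the posterior mean/variance."""
    n = len(obs)
    obs_mean = sum(obs) / n
    post_var = 1 / (1 / prior_var + n / obs_var)
    post_mean = post_var * (prior_mean / prior_var + n * obs_mean / obs_var)
    return post_mean, post_var
```

    The posterior variance is always smaller than the prior variance, which is the one-dimensional analogue of the abstract's observation that the new confidence intervals can be much smaller than those of part 1.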

  7. Spatially distributed potential evapotranspiration modeling and climate projections.

    PubMed

    Gharbia, Salem S; Smullen, Trevor; Gill, Laurence; Johnston, Paul; Pilla, Francesco

    2018-08-15

    Evapotranspiration integrates energy and mass transfer between the Earth's surface and atmosphere and is the most active mechanism linking the atmosphere, hydrosphere, lithosphere and biosphere. This study focuses on fine-resolution modeling and projection of spatially distributed potential evapotranspiration at the large catchment scale in response to climate change. Six potential evapotranspiration algorithms, systematically selected based on structured criteria and data availability, were applied and then validated against long-term mean monthly data for the Shannon River catchment with a 50 m² cell size. The best-validated algorithm was then applied to evaluate the possible effect of future climate change on potential evapotranspiration rates. Spatially distributed potential evapotranspiration projections were modeled based on climate change projections from multi-GCM ensembles for three future time intervals (2020, 2050 and 2080), using a range of Representative Concentration Pathways to produce four scenarios for each time interval. Finally, seasonal results were compared to baseline results to evaluate the impact of climate change on potential evapotranspiration and therefore on the catchment's dynamic water balance. The results present evidence that the modeled climate change scenarios would have a significant impact on future potential evapotranspiration rates. All the simulated scenarios predicted an increase in potential evapotranspiration for each modeled future time interval, which would significantly affect the dynamic catchment water balance. This study addresses the gap in the literature on using GIS-based algorithms to model fine-scale spatially distributed potential evapotranspiration in large catchment systems based on climatological observations and simulations in different climatological zones.
Providing fine-scale potential evapotranspiration data is crucial for assessing the dynamic catchment water balance and setting up management scenarios for water abstractions. This study illustrates a transferable, systematic method for designing GIS-based algorithms to simulate spatially distributed potential evapotranspiration in large catchment systems. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. A model for the statistical description of analytical errors occurring in clinical chemical laboratories with time.

    PubMed

    Hyvärinen, A

    1985-01-01

    The main purpose of the present study was to describe the statistical behaviour of daily analytical errors in the dimensions of place and time, providing a statistical basis for realistic estimates of the analytical error, and hence allowing the importance of the error and the relative contributions of its different sources to be re-evaluated. The observation material consists of creatinine and glucose results for control sera measured in daily routine quality control in five laboratories for a period of one year. The observation data were processed and computed by means of an automated data processing system. Graphic representations of time series of daily observations, as well as their means and dispersion limits when grouped over various time intervals, were investigated. For partition of the total variation several two-way analyses of variance were done with laboratory and various time classifications as factors. Pooled sets of observations were tested for normality of distribution and for consistency of variances, and the distribution characteristics of error variation in different categories of place and time were compared. Errors were found from the time series to vary typically between days. Due to irregular fluctuations in general and particular seasonal effects in creatinine, stable estimates of means or of dispersions for errors in individual laboratories could not be easily obtained over short periods of time but only from data sets pooled over long intervals (preferably at least one year). Pooled estimates of proportions of intralaboratory variation were relatively low (less than 33%) when the variation was pooled within days. However, when the variation was pooled over longer intervals this proportion increased considerably, even to a maximum of 89-98% (95-98% in each method category) when an outlying laboratory in glucose was omitted, with a concomitant decrease in the interaction component (representing laboratory-dependent variation with time). 
This indicates that a substantial part of the variation comes from intralaboratory variation with time rather than from constant interlaboratory differences. Normality and consistency of statistical distributions were best achieved in the long-term intralaboratory sets of the data, under which conditions the statistical estimates of error variability were also most characteristic of the individual laboratories rather than necessarily being similar to one another. Mixing of data from different laboratories may give heterogeneous and nonparametric distributions and hence is not advisable.

  9. Normal reference intervals and the effects of time and feeding on serum bile acid concentrations in llamas.

    PubMed

    Andreasen, C B; Pearson, E G; Smith, B B; Gerros, T C; Lassen, E D

    1998-04-01

    Fifty clinically healthy llamas, 0.5-13 years of age (22 intact males, 10 neutered males, 18 females), with no biochemical evidence of liver disease or hematologic abnormalities, were selected to establish serum bile acid reference intervals. Serum samples submitted to the clinical pathology laboratory were analyzed using a colorimetric enzymatic assay to establish bile acid reference intervals. A nonparametric distribution of llama bile acid concentrations was 1-23 micromol/liter for llamas >1 year of age and 10-44 micromol/liter for llamas < or = 1 year of age. A significant difference was found between these 2 age groups. No correlation was detected between gender and bile acid concentrations. The reference intervals were 1.1-22.9 micromol/liter for llamas >1 year of age and 1.8-49.8 micromol/liter for llamas < or = 1 year of age. Additionally, a separate group of 10 healthy adult llamas (5 males, 5 females, 5-11 years of age) without biochemical or hematologic abnormalities was selected to assess the effects of feeding and time intervals on serum bile acid concentrations. These 10 llamas were provided fresh water and hay ad libitum, and serum samples were obtained via an indwelling jugular catheter hourly for 11 hours. Llamas were then kept from food overnight (12 hours), and subsequent samples were taken prior to feeding (fasting baseline time, 23 hours after trial initiation) and postprandially at 0.5, 1, 2, 4, and 8 hours. In feeding trials, there was no consistent interaction between bile acid concentrations and time, feeding, or 12-hour fasting. Prior feeding or time of day did not result in serum bile acid concentrations outside the reference interval, but concentrations from individual llamas varied within this interval over time.
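
    Nonparametric reference intervals of the kind reported above are conventionally taken as the central 95% of the sample, i.e., the 2.5th and 97.5th empirical percentiles. A minimal sketch with linearly interpolated percentiles:

```python
def reference_interval(values, low=0.025, high=0.975):
    """Nonparametric reference interval: empirical percentiles with
    linear interpolation between order statistics."""
    s = sorted(values)

    def pct(p):
        k = p * (len(s) - 1)          # fractional rank
        f = int(k)                    # lower order statistic
        c = min(f + 1, len(s) - 1)    # upper order statistic
        return s[f] + (s[c] - s[f]) * (k - f)

    return pct(low), pct(high)
```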

  10. Counting Raindrops and the Distribution of Intervals Between Them.

    NASA Astrophysics Data System (ADS)

    Van De Giesen, N.; Ten Veldhuis, M. C.; Hut, R.; Pape, J. J.

    2017-12-01

    Drop size distributions are often assumed to follow a generalized gamma function characterized by one parameter, Λ [1]. In principle, this Λ can be estimated by measuring the arrival rate of raindrops. The arrival rate should follow a Poisson distribution. By measuring the distribution of the time intervals between drops arriving at a given surface area, one should be able to estimate not only the arrival rate but also the robustness of the underlying steady-state assumption. It is important to note that many rainfall radar systems also assume fixed drop size distributions, and associated arrival rates, to derive rainfall rates. By testing these relationships with a simple device, we will be able to improve both land-based and space-based radar rainfall estimates. Here, an open-hardware sensor design is presented, consisting of a 3D-printed housing for a piezoelectric element, some simple electronics and an Arduino. The target audience for this device is citizen scientists who want to contribute to collecting rainfall information beyond the standard rain gauge. The core of the sensor is a simple piezo buzzer, as found in many devices such as watches and fire alarms. When a raindrop falls on a piezo buzzer, a small voltage is generated, which can be used to register the drop's arrival time. By registering the intervals between raindrops, the associated Poisson distribution can be estimated. In addition to the hardware, we will present the first results of a measuring campaign in Myanmar that ran from August to October 2017. All design files and descriptions are available through GitHub: https://github.com/nvandegiesen/Intervalometer. This research is partially supported through the TWIGA project, funded by the European Commission's H2020 program under call SC5-18-2017 `Novel in-situ observation systems'. Reference [1]: Uijlenhoet, R., and J. N. M. Stricker.
"A consistent rainfall parameterization based on the exponential raindrop size distribution." Journal of Hydrology 218, no. 3 (1999): 101-127.

  11. Assessing and minimizing contamination in time of flight based validation data

    NASA Astrophysics Data System (ADS)

    Lennox, Kristin P.; Rosenfield, Paul; Blair, Brenton; Kaplan, Alan; Ruz, Jaime; Glenn, Andrew; Wurtz, Ronald

    2017-10-01

    Time of flight experiments are the gold standard method for generating labeled training and testing data for the neutron/gamma pulse shape discrimination problem. As the popularity of supervised classification methods increases in this field, there will also be increasing reliance on time of flight data for algorithm development and evaluation. However, time of flight experiments are subject to various sources of contamination that lead to neutron and gamma pulses being mislabeled. Such labeling errors have a detrimental effect on classification algorithm training and testing, and should therefore be minimized. This paper presents a method for identifying minimally contaminated data sets from time of flight experiments and estimating the residual contamination rate. This method leverages statistical models describing neutron and gamma travel time distributions and is easily implemented using existing statistical software. The method produces a set of optimal intervals that balance the trade-off between interval size and nuisance particle contamination, and its use is demonstrated on a time of flight data set for Cf-252. The particular properties of the optimal intervals for the demonstration data are explored in detail.
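
    The trade-off the abstract describes can be illustrated with Gaussian travel-time models: widening the gamma acceptance window captures more gammas but admits more neutrons. The parameters below are illustrative placeholders, not values fitted to the Cf-252 data.

```python
import math

def norm_cdf(x, mu, s):
    return 0.5 * (1 + math.erf((x - mu) / (s * math.sqrt(2))))

def contamination(lo, hi, mu_g, s_g, mu_n, s_n, frac_n):
    """Fraction of pulses inside the gamma acceptance window [lo, hi]
    that are actually neutrons, given Gaussian travel-time models for
    gammas (mu_g, s_g) and neutrons (mu_n, s_n) and a neutron fraction."""
    g = (1 - frac_n) * (norm_cdf(hi, mu_g, s_g) - norm_cdf(lo, mu_g, s_g))
    n = frac_n * (norm_cdf(hi, mu_n, s_n) - norm_cdf(lo, mu_n, s_n))
    return n / (g + n)
```

    An optimal interval balances this contamination rate against the fraction of genuine gamma pulses lost by narrowing the window.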

  12. Manifestation of peripheral coding in the effect of increasing loudness and enhanced discrimination of the intensity of tone bursts before and after tone burst noise

    NASA Astrophysics Data System (ADS)

    Rimskaya-Korsavkova, L. K.

    2017-07-01

    To find possible reasons for the midlevel elevation of the Weber fraction in intensity discrimination of a tone burst, a comparison was performed of complementary distributions of the spike activity of an ensemble of auditory nerve fibers: the distribution of the time instants at which spikes occur, the distribution of interspike intervals, and the autocorrelation function. The distribution properties were examined in a poststimulus histogram, an interspike-interval histogram, and an autocorrelation histogram, all obtained from the response of an ensemble of model auditory nerve fibers to a complex consisting of an auditory noise burst and a useful tone burst. Two configurations were used: in the first, the peak amplitude of the tone burst was varied and the noise amplitude was fixed; in the other, the tone burst amplitude was fixed and the noise amplitude was varied. Noise could precede or follow the tone burst. The noise and tone burst durations, as well as the interval between them, were fixed; the signal frequency was 4 kHz and corresponded to the characteristic frequencies of the model auditory nerve fibers. The profiles of all the mentioned histograms had two maxima. The values and positions of the maxima in the poststimulus histogram corresponded to the amplitudes and mutual time position of the noise and the tone burst. The maximum that occurred in response to the tone burst could be a basis for the formation of the loudness of the latter (explicit loudness). However, the positions of the maxima in the other two histograms did not depend on the positions of the tone bursts and noise in the combinations. The first maximum fell at short intervals and united intervals corresponding to the noise and tone burst durations. The second maximum fell at intervals corresponding to the tone burst's delay with respect to the noise, and its value was proportional to whichever of the noise amplitude or tone burst amplitude was smaller in the complex.
An increase in the tone burst or noise amplitude caused nonlinear variations in the two maxima and in the ratio between them. The size of the first maximum in the interspike-interval distribution could be the basis for the formation of the loudness of the masked tone burst (implicit loudness), and the size of the second maximum, for the formation of the intensity of the periodicity pitch of the complex. The auditory effect of the midlevel enhancement of tone burst loudness could be the result of variations in the implicit tone burst loudness caused by variations in tone-burst or noise intensity. The reason for the enhancement of the Weber fraction could be competitive interaction between such subjective qualities as explicit and implicit tone-burst loudness and the intensity of the periodicity pitch of the complex.

  13. Association between the physical activity and heart rate corrected-QT interval in older adults.

    PubMed

    Michishita, Ryoma; Fukae, Chika; Mihara, Rikako; Ikenaga, Masahiro; Morimura, Kazuhiro; Takeda, Noriko; Yamada, Yosuke; Higaki, Yasuki; Tanaka, Hiroaki; Kiyonaga, Akira

    2015-07-01

    Increased physical activity can reduce the incidence of cardiovascular disease and the mortality rate. In contrast, a prolonged heart rate corrected-QT (QTc) interval is associated with an increased risk of arrhythmias, sudden cardiac death and coronary artery disease. The present cross-sectional study was designed to clarify the association between the physical activity level and the QTc interval in older adults. The participants included 586 older adults (267 men and 319 women, age 71.2 ± 4.7 years) without a history of cardiovascular disease who were not taking cardioactive drugs. Electrocardiography was recorded with a standard resting 12-lead electrocardiograph, and the QTc interval was calculated according to Hodges' formula. The physical activity level was assessed using a triaxial accelerometer. The participants were divided into four categories defined by the quartiles of the QTc-interval distribution. After adjusting for age, body mass index, waist circumference and the number of steps, the time spent inactive was higher and the time spent in light physical activity was significantly lower in the longest QTc interval group than in the shortest QTc interval group in both sexes (P < 0.05, respectively). However, there were no significant differences in the time spent in moderate and vigorous physical activities among the four groups in either sex. These results suggest that a decreased physical activity level, especially more inactive time and less light-intensity physical activity, was associated with a longer QTc interval in older adults. © 2014 Japan Geriatrics Society.
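
    Hodges' formula, used above for heart-rate correction, is QTc = QT + 1.75 × (HR − 60), with QT in milliseconds and heart rate in beats per minute:

```python
def qtc_hodges(qt_ms, heart_rate_bpm):
    """Heart-rate corrected QT interval by Hodges' formula:
    QTc = QT + 1.75 * (HR - 60), QT in ms, HR in beats/min."""
    return qt_ms + 1.75 * (heart_rate_bpm - 60)
```

    At 60 beats/min the correction vanishes and QTc equals the measured QT; at faster heart rates the formula adds a linear correction.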

  14. Bayesian lead time estimation for the Johns Hopkins Lung Project data.

    PubMed

    Jang, Hyejeong; Kim, Seongho; Wu, Dongfeng

    2013-09-01

Lung cancer screening using X-rays has been controversial for many years. A major concern is whether lung cancer screening really brings any survival benefit, which depends on effective treatment after early detection. The problem was analyzed from a different point of view, and estimates were presented of the projected lead time for participants in a lung cancer screening program using the Johns Hopkins Lung Project (JHLP) data. A newly developed method of lead time estimation was applied, in which the lifetime T was treated as a random variable rather than a fixed value, so that the number of future screenings for a given individual is also a random variable. Using the actuarial life table available from the United States Social Security Administration, the lifetime distribution was first obtained, and then the lead time distribution was projected using the JHLP data. The data analysis shows that, for a male heavy smoker with an initial screening age of 50, 60, or 70, the probability of no early detection with semiannual screens is 32.16%, 32.45%, and 33.17%, respectively, while the mean lead time is 1.36, 1.33 and 1.23 years. The probability of no early detection increases monotonically as the screening interval increases, and it increases slightly as the initial age increases for the same screening interval. The mean lead time and its standard error decrease when the screening interval increases for all age groups, and both decrease when the initial age increases with the same screening interval. The overall mean lead time estimated with a random lifetime T is slightly less than that with a fixed value of T. It is hoped that these results will help to improve current screening programs. Copyright © 2013 Ministry of Health, Saudi Arabia. Published by Elsevier Ltd. All rights reserved.

  15. Space-time distribution of the ALS incident cases by onset type in the Health District of Ferrara, Italy.

    PubMed

    Govoni, V; Della Coletta, E; Cesnik, E; Casetta, I; Tugnoli, V; Granieri, E

    2015-04-01

An ecological study in the resident population of the Health District (HD) of Ferrara, Italy, was carried out to establish the distribution in space and time of amyotrophic lateral sclerosis (ALS) incident cases according to disease onset type and gender in the period 1964-2009. The hypothesis of a uniform distribution was assumed. The incident cases of spinal onset ALS and bulbar onset ALS were evenly distributed in space and time in both men and women. The distribution of spinal onset ALS incident cases according to gender differed significantly from that expected in the extra-urban population (20 observed cases in men, 95% Poisson confidence interval 12.22-30.89, versus 12.19 expected; six observed cases in women, 95% Poisson confidence interval 2.20-13.06, versus 13.81 expected), whereas no difference was found in the urban population. The spinal onset ALS incidence was higher in men than in women in the extra-urban population (difference between the rates = 1.53, 95% CI of the difference 0.52-2.54), whereas no difference between the sexes was found in the urban population. The uneven distribution according to gender of spinal onset ALS incident cases only in the extra-urban population suggests the involvement of a gender-related environmental risk factor associated with the extra-urban environment. Despite some limitations of spatial analysis in the study of rare diseases, the results appear consistent with the literature. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
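The exact ("Garwood") Poisson confidence interval behind figures such as 12.22-30.89 for 20 observed cases can be reproduced from chi-square quantiles. A minimal sketch, assuming SciPy is available (the function name is illustrative, not from the paper):

```python
from scipy.stats import chi2

def poisson_exact_ci(k: int, conf: float = 0.95):
    """Exact (Garwood) confidence interval for a Poisson count k,
    from chi-square quantiles with 2k and 2(k+1) degrees of freedom."""
    alpha = 1.0 - conf
    lower = 0.0 if k == 0 else chi2.ppf(alpha / 2.0, 2 * k) / 2.0
    upper = chi2.ppf(1.0 - alpha / 2.0, 2 * (k + 1)) / 2.0
    return lower, upper
```

`poisson_exact_ci(20)` gives roughly (12.22, 30.89) and `poisson_exact_ci(6)` roughly (2.20, 13.06), matching the intervals quoted for the extra-urban male and female counts.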

  16. Distribution of pre-course BLS/AED manuals does not influence skill acquisition and retention in lay rescuers: a randomised study.

    PubMed

    Papadimitriou, Lila; Xanthos, Theodoros; Bassiakou, Eleni; Stroumpoulis, Kostantinos; Barouxis, Dimitrios; Iacovidou, Nicolleta

    2010-03-01

    The present study aims to investigate whether the distribution of the Basic Life Support and Automated External Defibrillation (BLS/AED) manual, 4 weeks prior to the course, has an effect on skill acquisition, theoretical knowledge and skill retention, compared with courses where manuals were not distributed. A total of 303 laypeople were included in the present study. The courses were randomised with sealed envelopes in 12 courses, where manuals were distributed to participants (group A) and in 12 courses, where manuals were not distributed to participants (group B). The participants were formally evaluated at the end of the course, and at 1, 3 and 6 months after each course. The evaluation procedure was the same at all time intervals and consisted of two distinct parts: a written test and a simulated cardiac arrest scenario. No significant difference was observed between the two groups in skill acquisition at the time of initial training. Furthermore, there was no significant difference between the groups in performing BLS/AED skills at 1, 3 and 6 months after initial training. Theoretical knowledge in either group at the specified time intervals did not exhibit any significant difference. Significant deterioration of skills was observed in both groups between initial training and at 1 month after the course, as well as between the first and third month after the course. The present study shows that distribution of BLS/AED manuals 1 month prior to the course has no effect on theoretical knowledge, skill acquisition and skill retention in laypeople. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.

  17. Avalanches and power-law behaviour in lung inflation

    NASA Astrophysics Data System (ADS)

    Suki, Béla; Barabási, Albert-László; Hantos, Zoltán; Peták, Ferenc; Stanley, H. Eugene

    1994-04-01

When lungs are emptied during exhalation, peripheral airways close up [1]. For people with lung disease, they may not reopen for a significant portion of inhalation, impairing gas exchange [2,3]. A knowledge of the mechanisms that govern reinflation of collapsed regions of lungs is therefore central to the development of ventilation strategies for combating respiratory problems. Here we report measurements of the terminal airway resistance, Rt, during the opening of isolated dog lungs. When inflated by a constant flow, Rt decreases in discrete jumps. We find that the probability distributions of the sizes of the jumps and of the time intervals between them exhibit power-law behaviour over two decades. We develop a model of the inflation process in which 'avalanches' of airway openings are seen, with power-law distributions of both the size of avalanches and the time intervals between them, which agree quantitatively with those seen experimentally and are reminiscent of the power-law behaviour observed for self-organized critical systems [4]. Thus power-law distributions, arising from avalanches associated with threshold phenomena propagating down a branching tree structure, appear to govern the recruitment of terminal airspaces.
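A toy version of such an avalanche model can be sketched as follows: airways sit on a binary tree, each with a random opening threshold, and an airway opens once the inflation pressure exceeds its threshold and its parent airway is open. This is an illustrative reconstruction under assumed uniform thresholds, not the authors' implementation:

```python
import random

def inflate(depth: int = 10, steps: int = 200, seed: int = 1):
    """Cascade of airway openings on a full binary tree under rising
    pressure. Returns the avalanche size (number of newly opened
    airways) at each pressure step."""
    rng = random.Random(seed)
    n = 2 ** (depth + 1) - 1                  # nodes of a full binary tree
    threshold = [rng.random() for _ in range(n)]
    is_open = [False] * n
    avalanches = []
    for step in range(1, steps + 1):
        pressure = step / steps
        newly = 0
        changed = True
        while changed:                        # let each cascade run out
            changed = False
            for i in range(n):
                parent_open = True if i == 0 else is_open[(i - 1) // 2]
                if not is_open[i] and parent_open and pressure >= threshold[i]:
                    is_open[i] = True
                    newly += 1
                    changed = True
        avalanches.append(newly)
    return avalanches
```

At full pressure every airway has opened, so the avalanche sizes sum to the node count; histogramming the nonzero sizes exhibits the heavy tail that motivates the power-law analysis.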

  18. 27th Annual Precise Time and Time Interval (PTTI) Applications and Planning Meeting

    NASA Technical Reports Server (NTRS)

    Sydnor, Richard L. (Editor)

    1996-01-01

    This document is a compilation of technical papers presented at the 27th Annual Precise Time and Time Interval (PTTI) Applications and Planning Meeting, held November 29 - December 1, 1995 at San Diego, CA. Papers are in the following categories: Recent developments in rubidium, cesium, and hydrogen-based frequency standards; and in cryogenic and trapped-ion technology; International and transnational applications of PTTI technology with emphasis on satellite laser tracking, GLONASS timing, intercomparison of national time scales and international telecommunications; Applications of PTTI technology to the telecommunications, power distribution, platform positioning, and geophysical survey industries; Applications of PTTI technology to evolving military communications and navigation systems; and Dissemination of precise time and frequency by means of Global Positioning System (GPS), Global Satellite Navigation System (GLONASS), MILSTAR, LORAN, and synchronous communications satellites.

  19. A review of statistical issues with progression-free survival as an interval-censored time-to-event endpoint.

    PubMed

    Sun, Xing; Li, Xiaoyun; Chen, Cong; Song, Yang

    2013-01-01

The frequent occurrence of interval-censored time-to-event data in randomized clinical trials (e.g., progression-free survival [PFS] in oncology) challenges statistical researchers in the pharmaceutical industry in various ways. These challenges exist in both trial design and data analysis. Conventional statistical methods that treat intervals as fixed points, as generally practiced by the pharmaceutical industry, sometimes yield inferior or even flawed analysis results for interval-censored data in extreme cases. In this article, we examine the limitations of these standard methods under typical clinical trial settings and further review and compare several existing nonparametric likelihood-based methods for interval-censored data, methods that are more sophisticated but also more robust. Trial design issues involved with interval-censored data comprise another topic explored in this article. Unlike right-censored survival data, the expected sample size or power for a trial with interval-censored data relies heavily on the parametric distribution of the baseline survival function as well as the frequency of assessments. There can be substantial power loss in trials with interval-censored data if the assessments are very infrequent. Such an additional dependency controverts many fundamental assumptions and principles in conventional survival trial designs, especially the group sequential design (e.g., the concept of information fraction). In this article, we discuss these fundamental changes and the available tools to work around their impacts. Although progression-free survival is often used as a discussion point in the article, the general conclusions are equally applicable to other interval-censored time-to-event endpoints.
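The pitfall of treating intervals as fixed points can be illustrated with a toy exponential model: midpoint imputation biases the rate estimate when assessments are infrequent, while a likelihood that integrates over each censoring interval does not. A sketch with assumed parameters (true rate 1, assessments every 1 time unit; not from the article), using SciPy for the one-dimensional optimization:

```python
import math
import random

from scipy.optimize import minimize_scalar

def interval_mle(intervals):
    """MLE of an exponential rate from interval-censored times (L, R]."""
    def nll(rate):
        total = 0.0
        for L, R in intervals:
            p = math.exp(-rate * L) - math.exp(-rate * R)  # P(L < T <= R)
            if p <= 0.0:                                   # underflow guard
                return float('inf')
            total -= math.log(p)
        return total
    return minimize_scalar(nll, bounds=(0.01, 20.0), method='bounded').x

# Event times with true rate 1, observed only at visits every 1.0 units:
rng = random.Random(0)
width = 1.0
data = []
for _ in range(5000):
    t = rng.expovariate(1.0)
    left = math.floor(t / width) * width
    data.append((left, left + width))

rate_interval = interval_mle(data)                               # respects censoring
rate_midpoint = len(data) / sum(0.5 * (L + R) for L, R in data)  # naive imputation
```

With these wide assessment intervals the midpoint estimate is biased low, while the interval likelihood recovers the true rate; narrowing the intervals shrinks the gap, mirroring the article's point about assessment frequency.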

  20. Kinetics of Slow Neutrons in a Time-of-flight Spectrometer. II. Probability of Transmission Across a Rotating Slit and Distribution after the Flight of Neutrons with Velocity Spectrum F (v); CINETICA DEI NEUTRONI LENTI IN UNO SPETTROMETRO A TEMPO DI VOLO. II. PROBABILITA DI TRANSMISSIONE ATTRAVERSO UNA FENDITURA RUOTANTE E DISTRIBUZIONE DOPO IL VOLO DI NEUTRONI CON SPETTRO DI VELOCITA F (V)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marsequerra, M.; Pauli, G.

    1958-12-01

    On the basis of the results obtained in Part I (CNC-1), expressions are derived for the transmission probability through a revolving curved slit for neutrons having a velocity distribution f(v), the distribution shown by the neutrons after the flight, and the uncertainty in the energy of neutrons detected in an infinitesimal time interval. (auth)

  1. Emulation of Industrial Control Field Device Protocols

    DTIC Science & Technology

    2013-03-01

platforms such as the Arduino (based on the Atmel AVR architecture) or popular PIC architecture based devices, which are programmed for specific functions...UNIVERSITY AIR FORCE INSTITUTE OF TECHNOLOGY Wright-Patterson Air Force Base, Ohio DISTRIBUTION STATEMENT A. APPROVED FOR PUBLIC RELEASE; DISTRIBUTION...confidence intervals for the mean. Based on these results, extensive knowledge of the specific implementations of the protocols or timing profiles of the

  2. Semiparametric regression analysis of interval-censored competing risks data.

    PubMed

    Mao, Lu; Lin, Dan-Yu; Zeng, Donglin

    2017-09-01

    Interval-censored competing risks data arise when each study subject may experience an event or failure from one of several causes and the failure time is not observed directly but rather is known to lie in an interval between two examinations. We formulate the effects of possibly time-varying (external) covariates on the cumulative incidence or sub-distribution function of competing risks (i.e., the marginal probability of failure from a specific cause) through a broad class of semiparametric regression models that captures both proportional and non-proportional hazards structures for the sub-distribution. We allow each subject to have an arbitrary number of examinations and accommodate missing information on the cause of failure. We consider nonparametric maximum likelihood estimation and devise a fast and stable EM-type algorithm for its computation. We then establish the consistency, asymptotic normality, and semiparametric efficiency of the resulting estimators for the regression parameters by appealing to modern empirical process theory. In addition, we show through extensive simulation studies that the proposed methods perform well in realistic situations. Finally, we provide an application to a study on HIV-1 infection with different viral subtypes. © 2017, The International Biometric Society.

  3. Evaluation of statistical models for forecast errors from the HBV model

    NASA Astrophysics Data System (ADS)

    Engeland, Kolbjørn; Renard, Benjamin; Steinsland, Ingelin; Kolberg, Sjur

    2010-04-01

Three statistical models for the forecast errors for inflow into the Langvatn reservoir in Northern Norway have been constructed and tested according to the agreement between (i) the forecast distribution and the observations and (ii) the median values of the forecast distribution and the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first-order auto-regressive model was constructed for the forecast errors; the parameters were conditioned on weather classes. In the second model, the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first-order auto-regressive model was constructed for the forecast errors. For the third model, positive and negative errors were modeled separately; the errors were first NQT-transformed before conditioning the mean error values on climate, forecasted inflow and yesterday's error. To test the three models we applied three criteria: we wanted (a) the forecast distribution to be reliable; (b) the forecast intervals to be narrow; (c) the median values of the forecast distribution to be close to the observed values. Models 1 and 2 gave almost identical results. The median values improved the forecast, with the Nash-Sutcliffe R_eff increasing from 0.77 for the original forecast to 0.87 for the corrected forecasts. Models 1 and 2 over-estimated the forecast intervals but gave the narrowest intervals; their main drawback was that the distributions were less reliable than for Model 3. For Model 3 the median values did not fit well, since the auto-correlation was not accounted for. Since Model 3 did not benefit from the potential variance reduction that lies in bias estimation and removal, it gave on average wider forecast intervals than the other two models. At the same time, Model 3 on average slightly under-estimated the forecast intervals, probably explained by the use of average measures to evaluate the fit.
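The core of the first two models, an AR(1) structure fitted to transformed forecast errors, can be sketched as follows; the Box-Cox exponent and all other numbers are assumed for illustration, not taken from the study:

```python
import math
import random

def boxcox(x, lam=0.3):
    """Box-Cox transform; the exponent lam = 0.3 is an assumed value."""
    return (x ** lam - 1.0) / lam if lam != 0 else math.log(x)

def fit_ar1(errors):
    """Least-squares estimate of phi in e_t = phi * e_{t-1} + noise."""
    num = sum(errors[t] * errors[t - 1] for t in range(1, len(errors)))
    den = sum(e * e for e in errors[:-1])
    return num / den

# Synthetic check: recover phi from a simulated AR(1) error series.
rng = random.Random(42)
phi_true, e, series = 0.7, 0.0, []
for _ in range(20000):
    e = phi_true * e + rng.gauss(0.0, 1.0)
    series.append(e)
phi_hat = fit_ar1(series)
```

In the models' spirit, the errors would be differences between Box-Cox (or NQT) transformed observed and forecasted inflows; the fitted phi gives the one-step error forecast phi * e_t, and Gaussian quantiles of the residual give the forecast interval before back-transforming to flow units.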

  4. Uncertainty analysis for absorbed dose from a brain receptor imaging agent

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aydogan, B.; Miller, L.F.; Sparks, R.B.

Absorbed dose estimates are known to contain uncertainties. A recent literature search indicates that prior to this study no rigorous investigation of the uncertainty associated with absorbed dose had been undertaken. A method of uncertainty analysis for absorbed dose calculations has been developed and implemented for the brain receptor imaging agent {sup 123}I-IPT. The two major sources of uncertainty considered were the uncertainty associated with the determination of residence time and that associated with the determination of the S values. There are many sources of uncertainty in the determination of the S values, but only the inter-patient organ mass variation was considered in this work. The absorbed dose uncertainties were determined for lung, liver, heart and brain. Ninety-five percent confidence intervals of the organ absorbed dose distributions for each patient and for a seven-patient population group were determined by the Latin Hypercube Sampling method. For an individual patient, the upper bound of the 95% confidence interval of the absorbed dose was found to be about 2.5 times larger than the estimated mean absorbed dose. For the seven-patient population, the upper bound of the 95% confidence interval of the absorbed dose distribution was around 45% more than the estimated population mean. For example, the 95% confidence interval of the population liver dose distribution was found to be between 1.49E+07 Gy/MBq and 4.65E+07 Gy/MBq with a mean of 2.52E+07 Gy/MBq. This study concluded that patients in a population receiving {sup 123}I-IPT could receive absorbed doses as much as twice as large as the standard estimated absorbed dose due to these uncertainties.
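Latin Hypercube Sampling itself is straightforward to sketch: each dimension's unit interval is split into n equal strata, each stratum is sampled exactly once, and strata are randomly paired across dimensions. A minimal illustration (not the study's implementation):

```python
import random

def latin_hypercube(n_samples: int, n_dims: int, seed: int = 0):
    """Latin Hypercube sample on [0, 1)^d: exactly one point falls in
    each of the n_samples equal strata of every dimension."""
    rng = random.Random(seed)
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)              # random pairing of strata to rows
        for i in range(n_samples):
            samples[i][d] = (strata[i] + rng.random()) / n_samples
    return samples
```

For uncertainty propagation, each uniform coordinate would be mapped through the inverse CDF of the corresponding input (e.g., residence time or organ mass) before evaluating the dose model; percentiles of the resulting outputs give the confidence interval.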

  5. Constructing Confidence Intervals for Reliability Coefficients Using Central and Noncentral Distributions.

    ERIC Educational Resources Information Center

    Weber, Deborah A.

    Greater understanding and use of confidence intervals is central to changes in statistical practice (G. Cumming and S. Finch, 2001). Reliability coefficients and confidence intervals for reliability coefficients can be computed using a variety of methods. Estimating confidence intervals includes both central and noncentral distribution approaches.…

  6. Endogenous modulation of low frequency oscillations by temporal expectations

    PubMed Central

    Cravo, Andre M.; Rohenkohl, Gustavo; Wyart, Valentin

    2011-01-01

Recent studies have associated increasing temporal expectations with synchronization of higher frequency oscillations and suppression of lower frequencies. In this experiment, we explore a proposal that low-frequency oscillations provide a mechanism for regulating temporal expectations. We used a speeded Go/No-go task and manipulated temporal expectations by changing the probability of target presentation after certain intervals. Across two conditions, the temporal conditional probability of target events differed substantially at the first of three possible intervals. We found that reaction times differed significantly at this first interval across conditions, decreasing with higher temporal expectations. Interestingly, the power of theta activity (4–8 Hz), distributed over central midline sites, also differed significantly across conditions at this first interval. Furthermore, we found a transient coupling between theta phase and beta power after the first interval in the condition with high temporal expectation for targets at this time point. Our results suggest that the adjustments in theta power and the phase-power coupling between theta and beta contribute to a central mechanism for controlling neural excitability according to temporal expectations. PMID:21900508

  7. Selective Attention in Pigeon Temporal Discrimination.

    PubMed

    Subramaniam, Shrinidhi; Kyonka, Elizabeth

    2017-07-27

Cues can vary in how informative they are about when specific outcomes, such as food availability, will occur. This study was an experimental investigation of the functional relation between cue informativeness and temporal discrimination in a peak-interval (PI) procedure. Each session consisted of fixed-interval (FI) 2-s and 4-s schedules of food and occasional, 12-s PI trials during which pecks had no programmed consequences. Across conditions, the phi (ϕ) correlation between key light color and FI schedule value was manipulated. Red and green key lights signaled the onset of either or both FI schedules. Different colors were either predictive (ϕ = 1), moderately predictive (ϕ = 0.2-0.8), or not predictive (ϕ = 0) of a specific FI schedule. This study tested the hypothesis that temporal discrimination is a function of the momentary conditional probability of food; that is, that pigeons peck the most at either 2 s or 4 s when ϕ = 1 and peck at both intervals when ϕ < 1. Response distributions were bimodal Gaussian curves; distributions from red- and green-key PI trials converged when ϕ ≤ 0.6. Peak times estimated by summed Gaussian functions, averaged across conditions and pigeons, were 1.85 s and 3.87 s; however, pigeons did not always maximize the momentary probability of food. When key light color was highly correlated with FI schedules (ϕ ≥ 0.6), estimates of peak times indicated that temporal discrimination accuracy was reduced at the unlikely interval, but not the likely interval. The mechanism of this reduced temporal discrimination accuracy could be interpreted as an attentional process.

  8. Physical Layer Ethernet Clock Synchronization

    DTIC Science & Technology

    2010-11-01

42nd Annual Precise Time and Time Interval (PTTI) Meeting 77 PHYSICAL LAYER ETHERNET CLOCK SYNCHRONIZATION Reinhard Exel, Georg...oeaw.ac.at Nikolaus Kerö Oregano Systems, Mohsgasse 1, 1030 Wien, Austria E-mail: nikolaus.keroe@oregano.at Abstract Clock synchronization...is a service widely used in distributed networks to coordinate data acquisition and actions. As the requirement to achieve tighter synchronization

  9. On the appropriateness of applying chi-square distribution based confidence intervals to spectral estimates of helicopter flyover data

    NASA Technical Reports Server (NTRS)

    Rutledge, Charles K.

    1988-01-01

The validity of applying chi-square based confidence intervals to far-field acoustic flyover spectral estimates was investigated. Simulated data, using a Kendall series and experimental acoustic data from the NASA/McDonnell Douglas 500E acoustics test, were analyzed. Statistical significance tests to determine the equality of distributions of the simulated and experimental data relative to theoretical chi-square distributions were performed. Bias and uncertainty errors associated with the spectral estimates were easily identified from the data sets. A model relating the uncertainty and bias errors to the estimates resulted, which aided in determining the appropriateness of the chi-square distribution based confidence intervals. Such confidence intervals were appropriate for nontonally associated frequencies of the experimental data but were inappropriate for tonally associated estimate distributions. The inappropriateness at the tonally associated frequencies was indicated by the presence of bias error and nonconformity of the distributions to the theoretical chi-square distribution. A technique for determining appropriate confidence intervals at the tonally associated frequencies was suggested.
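The chi-square based interval in question has a standard form: for a spectral estimate P̂ with ν equivalent degrees of freedom, a (1 − α) confidence interval is [ν P̂ / χ²(ν, 1 − α/2), ν P̂ / χ²(ν, α/2)]. A sketch assuming SciPy:

```python
from scipy.stats import chi2

def psd_confidence_interval(p_hat: float, dof: int, conf: float = 0.95):
    """Chi-square based confidence interval for a spectral (PSD)
    estimate with `dof` equivalent degrees of freedom."""
    alpha = 1.0 - conf
    lower = dof * p_hat / chi2.ppf(1.0 - alpha / 2.0, dof)
    upper = dof * p_hat / chi2.ppf(alpha / 2.0, dof)
    return lower, upper
```

The interval is multiplicative and asymmetric about P̂; as the abstract notes, it is the tonally associated estimates, whose distributions depart from the chi-square assumption, for which this construction fails.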

  10. The distribution of the intervals between neural impulses in the maintained discharges of retinal ganglion cells.

    PubMed

    Levine, M W

    1991-01-01

    Simulated neural impulse trains were generated by a digital realization of the integrate-and-fire model. The variability in these impulse trains had as its origin a random noise of specified distribution. Three different distributions were used: the normal (Gaussian) distribution (no skew, normokurtic), a first-order gamma distribution (positive skew, leptokurtic), and a uniform distribution (no skew, platykurtic). Despite these differences in the distribution of the variability, the distributions of the intervals between impulses were nearly indistinguishable. These inter-impulse distributions were better fit with a hyperbolic gamma distribution than a hyperbolic normal distribution, although one might expect a better approximation for normally distributed inverse intervals. Consideration of why the inter-impulse distribution is independent of the distribution of the causative noise suggests two putative interval distributions that do not depend on the assumed noise distribution: the log normal distribution, which is predicated on the assumption that long intervals occur with the joint probability of small input values, and the random walk equation, which is the diffusion equation applied to a random walk model of the impulse generating process. Either of these equations provides a more satisfactory fit to the simulated impulse trains than the hyperbolic normal or hyperbolic gamma distributions. These equations also provide better fits to impulse trains derived from the maintained discharges of ganglion cells in the retinae of cats or goldfish. It is noted that both equations are free from the constraint that the coefficient of variation (CV) have a maximum of unity.(ABSTRACT TRUNCATED AT 250 WORDS)
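The simulation idea, identical drift with differently distributed noise feeding an integrate-and-fire generator, can be sketched as below; all parameters are assumed for illustration, with the noise variances matched across distributions as in the study's design:

```python
import random

def isi_train(noise, n_spikes=2000, drift=0.05, thresh=1.0, seed=0):
    """Integrate-and-fire ISIs: the state v accumulates drift + noise
    each time step and fires (resetting to 0) at threshold.
    `noise(rng)` draws one noise sample per step."""
    rng = random.Random(seed)
    v, steps, isis = 0.0, 0, []
    while len(isis) < n_spikes:
        v += drift + noise(rng)
        steps += 1
        if v >= thresh:
            isis.append(steps)
            v, steps = 0.0, 0
    return isis

gauss = lambda rng: rng.gauss(0.0, 0.05)
uniform = lambda rng: rng.uniform(-0.0866, 0.0866)  # same variance as gauss
isis_gauss = isi_train(gauss)
isis_uniform = isi_train(uniform, seed=1)
```

Both trains have a mean ISI near thresh/drift regardless of the noise shape; comparing the two ISI histograms illustrates the abstract's observation that the inter-impulse distribution is nearly independent of the causative noise distribution.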

  11. Slow diffusion by Markov random flights

    NASA Astrophysics Data System (ADS)

    Kolesnik, Alexander D.

    2018-06-01

We present a conception of slow diffusion processes in the Euclidean spaces R^m, m ≥ 1, based on the theory of random flights with small constant speed that are driven by a homogeneous Poisson process of small rate. The slow diffusion condition that, over long time intervals, leads to the stationary distributions is given. The stationary distributions of slow diffusion processes in some Euclidean spaces of low dimension are presented.

  12. The Poisson model limits in NBA basketball: Complexity in team sports

    NASA Astrophysics Data System (ADS)

    Martín-González, Juan Manuel; de Saá Guerra, Yves; García-Manso, Juan Manuel; Arriaza, Enrique; Valverde-Estévez, Teresa

    2016-12-01

Team sports are frequently studied by researchers. There is a presumption that scoring in basketball is a random process that can be described by the Poisson model. Basketball is a collaboration-opposition sport, where the non-linear local interactions among players are reflected in the evolution of the score that ultimately determines the winner. In the NBA, the outcomes of close games are often decided in the last minute, where fouls play a main role. We examined 6130 NBA games in order to analyze the time intervals between baskets and the scoring dynamics. Most numbers of baskets (n) over a time interval (ΔT) follow a Poisson distribution, but some (e.g., ΔT = 10 s, n > 3) behave as a power law. The Poisson distribution includes most baskets in any game, in most game situations, but in close games in the last minute the numbers of events are distributed following a power law. The number of events can be adjusted by a mixture of the two distributions. In close games, both teams try to maintain their advantage solely in order to reach the last minute: a completely different game. For this reason, we propose to use the Poisson model as a reference. The complex dynamics emerge at the limits of this model.
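A standard screen for the Poisson assumption on basket counts is the index of dispersion (the variance-to-mean ratio of counts per window), which is approximately 1 for a homogeneous Poisson process and larger when events cluster. A sketch on simulated scoring times (the rate and window are assumed, not taken from the paper):

```python
import random

def dispersion_index(event_times, window, t_total):
    """Variance-to-mean ratio of event counts in consecutive windows;
    approximately 1 for a homogeneous Poisson process."""
    n_windows = int(t_total / window)
    counts = [0] * n_windows
    for t in event_times:
        i = int(t / window)
        if i < n_windows:
            counts[i] += 1
    mean = sum(counts) / n_windows
    var = sum((c - mean) ** 2 for c in counts) / n_windows
    return var / mean

# Simulated "baskets" at about one per 30 s over a long stretch of play.
rng = random.Random(3)
t_total, rate, times, t = 500_000.0, 1 / 30, [], 0.0
while True:
    t += rng.expovariate(rate)
    if t >= t_total:
        break
    times.append(t)
di = dispersion_index(times, window=60.0, t_total=t_total)
```

Applied to real last-minute close-game segments, this ratio would drift above 1, signaling the departure from the Poisson reference that the paper attributes to fouls and clustered scoring.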

13. Stabilization of memory states by stochastic facilitating synapses.

    PubMed

    Miller, Paul

    2013-12-06

    Bistability within a small neural circuit can arise through an appropriate strength of excitatory recurrent feedback. The stability of a state of neural activity, measured by the mean dwelling time before a noise-induced transition to another state, depends on the neural firing-rate curves, the net strength of excitatory feedback, the statistics of spike times, and increases exponentially with the number of equivalent neurons in the circuit. Here, we show that such stability is greatly enhanced by synaptic facilitation and reduced by synaptic depression. We take into account the alteration in times of synaptic vesicle release, by calculating distributions of inter-release intervals of a synapse, which differ from the distribution of its incoming interspike intervals when the synapse is dynamic. In particular, release intervals produced by a Poisson spike train have a coefficient of variation greater than one when synapses are probabilistic and facilitating, whereas the coefficient of variation is less than one when synapses are depressing. However, in spite of the increased variability in postsynaptic input produced by facilitating synapses, their dominant effect is reduced synaptic efficacy at low input rates compared to high rates, which increases the curvature of neural input-output functions, leading to wider regions of bistability in parameter space and enhanced lifetimes of memory states. Our results are based on analytic methods with approximate formulae and bolstered by simulations of both Poisson processes and of circuits of noisy spiking model neurons.

  14. Stochastic nature of series of waiting times.

    PubMed

    Anvari, Mehrnaz; Aghamohammadi, Cina; Dashti-Naserabadi, H; Salehi, E; Behjat, E; Qorbani, M; Nezhad, M Khazaei; Zirak, M; Hadjihosseini, Ali; Peinke, Joachim; Tabar, M Reza Rahimi

    2013-06-01

    Although fluctuations in the waiting time series have been studied for a long time, some important issues such as its long-range memory and its stochastic features in the presence of nonstationarity have so far remained unstudied. Here we find that the "waiting times" series for a given increment level have long-range correlations with Hurst exponents belonging to the interval 1/2

  15. Determining probability distribution of coherent integration time near 133 Hz and 1346 km in the Pacific Ocean.

    PubMed

    Spiesberger, John L

    2013-02-01

The hypothesis tested is that internal gravity waves limit the coherent integration time of sound at 1346 km in the Pacific Ocean at 133 Hz and a pulse resolution of 0.06 s. Six months of continuous transmissions at about 18 min intervals are examined. The source and receiver are mounted on the bottom of the ocean with timing governed by atomic clocks, so the measured variability is due only to fluctuations in the ocean. A model for the propagation of sound through fluctuating internal waves is run without any tuning to the data. Excellent resemblance is found between the model's and the data's probability distributions of integration time up to five hours.

  16. Testing for time-based correlates of perceived gender discrimination.

    PubMed

    Blau, Gary; Tatum, Donna Surges; Ward-Cook, Kory; Dobria, Lidia; McCoy, Keith

    2005-01-01

    Using a sample of 201 medical technologists (MTs) over a five-year period, this study extends initial findings on perceived gender discrimination (PGD) by Blau and Tatum (2000) by applying organizational justice variables and internal-external locus of control as hypothesized correlates of PGD. Three types of organizational justice were measured: distributive, procedural, and interactional. General relationships found include locus of control being related to PGD such that internals perceived lower PGD. Also, distributive, procedural, and interactional justice were negatively related to PGD. However, increasing the time interval between these correlates weakened their relationships. The relationship of interactional justice to PGD remained the most "resistant" to attenuation over time.

  17. Geosocial process and its regularities

    NASA Astrophysics Data System (ADS)

    Vikulina, Marina; Vikulin, Alexander; Dolgaya, Anna

    2015-04-01

Natural disasters and social events (wars, revolutions, genocides, epidemics, fires, etc.) have accompanied each other throughout human civilization, reflecting the close relationship of these seemingly different phenomena. In order to study this relationship, the authors compiled and analyzed a list of 2,400 natural disasters and social phenomena, weighted by magnitude, that occurred during the last XXXVI centuries of our history. Statistical analysis was performed separately for each aggregate (natural disasters and social phenomena) and for particular statistically representative types of events; there were 5 + 5 = 10 types. It is shown that the numbers of events in the list follow a logarithmic distribution: the bigger the event, the less likely it is to happen. For each type of event and each aggregate, the existence of periodicities with periods of 280 ± 60 years was established. Statistical analysis of the time intervals between adjacent events for both aggregates showed good agreement with the Weibull-Gnedenko distribution with a shape parameter less than 1, which is equivalent to the conclusion that events group at small time intervals. Modeling the statistics of time intervals with a Pareto distribution made it possible to identify an emergent property for all events in the aggregate. This result allowed the authors to conclude that natural disasters and social phenomena interact. The list of events compiled by the authors, together with the properties of cyclicity, grouping and interaction first identified in the process it reflects, forms the basis for modeling an essentially unified geosocial process at a sufficiently high statistical level. Proof of interaction between "lifeless" Nature and Society is fundamental and provides a new approach to forecasting demographic crises that takes into account both natural disasters and social phenomena.

  18. Measurement of baseline and orientation between distributed aerospace platforms.

    PubMed

    Wang, Wen-Qin

    2013-01-01

Distributed platforms play an important role in aerospace remote sensing, radar navigation, and wireless communication applications. However, besides the requirement of highly accurate time and frequency synchronization for coherent signal processing, the baseline between the transmitting and receiving platforms and their orientation towards each other during data recording must be measured in real time. In this paper, we propose an improved pulsed duplex microwave ranging approach that determines the spatial baseline and orientation between distributed aerospace platforms using the proposed high-precision time-interval estimation method. This approach is novel in the sense that it cancels the effect of oscillator frequency synchronization errors caused by the separate oscillators used in the platforms. Several performance specifications are also discussed. The effectiveness of the approach is verified by simulation results.

  19. Photocounting distributions for exponentially decaying sources.

    PubMed

    Teich, M C; Card, H C

    1979-05-01

Exact photocounting distributions are obtained for a pulse of light whose intensity decays exponentially in time, when the underlying photon statistics are Poisson. It is assumed that the starting time for the sampling interval (which is of arbitrary duration) is uniformly distributed. The probability of registering n counts in the fixed time T is given in terms of the incomplete gamma function for n ≥ 1 and in terms of the exponential integral for n = 0. Simple closed-form expressions are obtained for the count mean and variance. The results are expected to be of interest in certain studies involving spontaneous emission, radiation damage in solids, and nuclear counting. They will also be useful in neurobiology and psychophysics, since habituation and sensitization processes may sometimes be characterized by the same stochastic model.
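A Monte Carlo sketch of this counting model (hypothetical parameter values; the paper's closed-form incomplete-gamma expressions are not reproduced here): the intensity decays exponentially, the start of the counting window is uniformly distributed, and the count in the window is Poisson given the integrated intensity. The random start time makes the counts over-dispersed relative to a pure Poisson distribution (variance exceeds the mean).

```python
import math
import random

rng = random.Random(42)

# Hypothetical parameters: decay constant, initial intensity,
# counting-window length, and range of possible start times.
TAU, I0, T, T_MAX = 1.0, 50.0, 0.5, 5.0

def poisson(mu, rng):
    """Knuth's algorithm; adequate for the moderate mu used here."""
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def sample_count(rng):
    t0 = rng.uniform(0.0, T_MAX)   # uniformly distributed start time
    # integrated intensity of I(t) = I0 * exp(-t / TAU) over [t0, t0 + T]
    mu = I0 * TAU * (math.exp(-t0 / TAU) - math.exp(-(t0 + T) / TAU))
    return poisson(mu, rng)

counts = [sample_count(rng) for _ in range(20_000)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
print(round(mean, 2), round(var, 2))
```

The printed variance is far larger than the mean, illustrating the over-dispersion introduced by averaging over the random start time.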

  20. Asthma and school commuting time.

    PubMed

    McConnell, Rob; Liu, Feifei; Wu, Jun; Lurmann, Fred; Peters, John; Berhane, Kiros

    2010-08-01

    This study examined associations of asthma with school commuting time. Time on likely school commute route was used as a proxy for on-road air pollution exposure among 4741 elementary school children at enrollment into the Children's Health Study. Lifetime asthma and severe wheeze (including multiple attacks, nocturnal, or with shortness of breath) were reported by parents. In asthmatic children, severe wheeze was associated with commuting time (odds ratio, 1.54 across the 9-minute 5% to 95% exposure distribution; 95% confidence interval, 1.01 to 2.36). The association was stronger in analysis restricted to asthmatic children with commuting times 5 minutes or longer (odds ratio, 1.97; 95% confidence interval, 1.02 to 3.77). No significant associations were observed with asthma prevalence. Among asthmatics, severe wheeze was associated with relatively short school commuting times. Further investigation of effects of on-road pollutant exposure is warranted.

  1. Stochastic nature of series of waiting times

    NASA Astrophysics Data System (ADS)

    Anvari, Mehrnaz; Aghamohammadi, Cina; Dashti-Naserabadi, H.; Salehi, E.; Behjat, E.; Qorbani, M.; Khazaei Nezhad, M.; Zirak, M.; Hadjihosseini, Ali; Peinke, Joachim; Tabar, M. Reza Rahimi

    2013-06-01

Although fluctuations in waiting-time series have been studied for a long time, some important issues, such as their long-range memory and their stochastic features in the presence of nonstationarity, have so far remained unstudied. Here we find that the “waiting times” series for a given increment level have long-range correlations with Hurst exponents belonging to the interval 1/2

  2. Earthquakes: Recurrence and Interoccurrence Times

    NASA Astrophysics Data System (ADS)

    Abaimov, S. G.; Turcotte, D. L.; Shcherbakov, R.; Rundle, J. B.; Yakovlev, G.; Goltz, C.; Newman, W. I.

    2008-04-01

    The purpose of this paper is to discuss the statistical distributions of recurrence times of earthquakes. Recurrence times are the time intervals between successive earthquakes at a specified location on a specified fault. Although a number of statistical distributions have been proposed for recurrence times, we argue in favor of the Weibull distribution. The Weibull distribution is the only distribution that has a scale-invariant hazard function. We consider three sets of characteristic earthquakes on the San Andreas fault: (1) The Parkfield earthquakes, (2) the sequence of earthquakes identified by paleoseismic studies at the Wrightwood site, and (3) an example of a sequence of micro-repeating earthquakes at a site near San Juan Bautista. In each case we make a comparison with the applicable Weibull distribution. The number of earthquakes in each of these sequences is too small to make definitive conclusions. To overcome this difficulty we consider a sequence of earthquakes obtained from a one million year “Virtual California” simulation of San Andreas earthquakes. Very good agreement with a Weibull distribution is found. We also obtain recurrence statistics for two other model studies. The first is a modified forest-fire model and the second is a slider-block model. In both cases good agreements with Weibull distributions are obtained. Our conclusion is that the Weibull distribution is the preferred distribution for estimating the risk of future earthquakes on the San Andreas fault and elsewhere.
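The scale-invariance claim can be checked directly: the Weibull hazard function h(t) = (k/λ)(t/λ)^(k−1) is a pure power law, so the ratio h(ct)/h(t) depends only on c and not on t. A small numerical check with arbitrary parameter values:

```python
# Weibull hazard: h(t) = (k / lam) * (t / lam) ** (k - 1), a pure power law.
def hazard(t, k, lam):
    return (k / lam) * (t / lam) ** (k - 1)

k, lam = 1.8, 3.0   # arbitrary shape and scale for the demonstration
# Scale invariance: h(2t) / h(t) equals 2 ** (k - 1) at every t.
for t in (0.5, 2.0, 10.0):
    ratio = hazard(2 * t, k, lam) / hazard(t, k, lam)
    print(t, round(ratio, 6))
```

The ratio is identical at all three times, which is the scale-invariance property that singles out the Weibull distribution among the usual recurrence-time candidates.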

  3. q-Gaussian distributions of leverage returns, first stopping times, and default risk valuations

    NASA Astrophysics Data System (ADS)

    Katz, Yuri A.; Tian, Li

    2013-10-01

    We study the probability distributions of daily leverage returns of 520 North American industrial companies that survive de-listing during the financial crisis, 2006-2012. We provide evidence that distributions of unbiased leverage returns of all individual firms belong to the class of q-Gaussian distributions with the Tsallis entropic parameter within the interval 1

  4. Priority queues with bursty arrivals of incoming tasks

    NASA Astrophysics Data System (ADS)

    Masuda, N.; Kim, J. S.; Kahng, B.

    2009-03-01

Recently increased accessibility of large-scale digital records enables one to monitor human activities such as the interevent time distributions between two consecutive visits to a web portal by a single user, two consecutive emails sent out by a user, two consecutive library loans made by a single individual, etc. Interestingly, those distributions exhibit a universal behavior, D(τ) ∼ τ^(-δ), where τ is the interevent time and δ ≃ 1 or 3/2. These universal behaviors have been modeled via the waiting-time distribution of a task in a queue operating on priority; the waiting time follows a power-law distribution P_w(τ) ∼ τ^(-α) with either α = 1 or 3/2, depending on the details of the queuing dynamics. In these models, the number of incoming tasks in a unit time interval has been assumed to follow a Poisson-type distribution. For an email system, however, we measured the number of emails delivered to a mailbox in a unit time and found that it follows a power-law distribution with general exponent γ. For this case, we obtain analytically the exponent α, which is not necessarily 1 or 3/2 and takes nonuniversal values depending on γ. We develop a generating-function formalism to obtain the exponent α, which is distinct from the continuous-time approximation used in previous studies.
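As a generic illustration of measuring such a power-law interevent exponent (this is not the authors' generating-function derivation; the sample and its exponent are synthetic): draw power-law-distributed waiting times by inverse-transform sampling and recover the tail exponent with the Hill estimator.

```python
import math
import random

rng = random.Random(7)

# Inverse-transform sampling of Pareto waiting times with density
# p(tau) = alpha * tau ** -(alpha + 1) for tau >= 1,
# i.e. a power-law tail D(tau) ~ tau ** -delta with delta = alpha + 1.
alpha_true = 0.5            # synthetic choice, giving delta = 1.5
n = 100_000
taus = [(1.0 - rng.random()) ** (-1.0 / alpha_true) for _ in range(n)]

# Hill estimator of the tail index (the minimum of the support is 1 here,
# so it reduces to the Pareto maximum-likelihood estimate).
alpha_hat = n / sum(math.log(t) for t in taus)
print(round(alpha_hat, 3), "-> delta ~", round(alpha_hat + 1, 3))
```

With 100,000 samples the estimate lands close to the true value 0.5, i.e. δ ≈ 3/2, one of the two universal exponents quoted in the abstract.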

  5. Reliability analysis of component of affination centrifugal 1 machine by using reliability engineering

    NASA Astrophysics Data System (ADS)

    Sembiring, N.; Ginting, E.; Darnello, T.

    2017-12-01

Problems arise in a company producing refined sugar because a critical machine on the production floor has not reached the required availability level: it frequently breaks down, causing sudden losses of production time and production opportunities. This problem can be addressed with reliability-engineering methods, in which a statistical approach to historical failure data reveals the underlying distribution. The method yields the reliability, failure rate, and availability of a machine over the scheduled maintenance interval. Distribution tests on the time-between-failures data (MTTF) show that the flexible hose component follows a lognormal distribution, while the teflon cone lifting component follows a Weibull distribution. Distribution tests on the mean time to repair (MTTR) show that the flexible hose component follows an exponential distribution, while the teflon cone lifting component follows a Weibull distribution. For the flexible hose component on a replacement schedule of every 720 hours, the reliability is 0.2451 and the availability 0.9960; for the critical teflon cone lifting component on a replacement schedule of every 1944 hours, the reliability is 0.4083 and the availability 0.9927.

  6. Passive longitudes of solar cosmic rays in 19-24 solar cycles

    NASA Astrophysics Data System (ADS)

    Getselev, Igor; Podzolko, Mikhail; Shatov, Pavel; Tasenko, Sergey; Skorohodov, Ilya; Okhlopkov, Viktor

The distribution of solar proton event sources along the Carrington longitude in solar cycles 19-24 is considered. For this study an extensive database of ≈450 solar proton events was constructed from various available sources and solar cosmic ray measurements; it includes the time of each event, the fluences of protons of various energies, and the coordinates of the source on the Sun. The analysis has shown significant inhomogeneity in the distribution. In particular, a region of “passive longitudes” has been discovered, extensive both in longitude (from ≈90-100° to 170°) and in lifetime (the whole period of observations). Of the 60 most powerful proton events during solar cycles 19-24, not more than one originated from the interval of 100-170° Carrington longitude; of another 80 “medium” events, only 10 were injected from this interval. The summed proton fluence of the events whose sources lie in the interval 90-170° amounts to only 5% of the total fluence of all considered events, and to just 1.2% if the single “anomalous” powerful event is excluded. The existence of this extensive and stable interval of “passive” Carrington longitudes is a remarkable phenomenon in solar physics. It also confirms the physical relevance of the mean synodic period of the Sun’s rotation determined by R. C. Carrington.

  7. Self-organized criticality in complex systems: Applicability to the interoccurrent and recurrent statistical behavior of earthquakes

    NASA Astrophysics Data System (ADS)

    Abaimov, Sergey G.

The concept of self-organized criticality is associated with scale-invariant, fractal behavior; this concept is also applicable to earthquake systems. It is known that the interoccurrent frequency-size distribution of earthquakes in a region is scale-invariant and obeys the Gutenberg-Richter power-law dependence. Also, the interoccurrent time-interval distribution is known to obey Poissonian statistics, excluding aftershocks. However, to estimate the hazard risk for a region it is also necessary to know the recurrent behavior of earthquakes at a given point on a fault. This behavior has been investigated in the literature; however, major questions remain unresolved. The reason is the small number of earthquakes in observed sequences. To overcome this difficulty, this research utilizes numerical simulations of a slider-block model and a sand-pile model. Also, experimental observations of creep events on the creeping section of the San Andreas fault are processed, and sequences of up to 100 events are studied. Then the recurrent behavior of earthquakes at a given point on a fault, or at a given fault, is investigated. It is shown that both the recurrent frequency-size and time-interval behaviors of earthquakes obey the Weibull distribution.

  8. Better Bet-Hedging with coupled positive and negative feedback loops

    NASA Astrophysics Data System (ADS)

    Narula, Jatin; Igoshin, Oleg

    2011-03-01

Bacteria use the phenotypic heterogeneity associated with bistable switches to distribute the risk of activating stress-response strategies like sporulation and persistence. However, bistable switches offer little control over the timing of phenotype switching, and first-passage times (FPT) for individual cells are found to be exponentially distributed. We show that a genetic circuit consisting of interlinked positive and negative feedback loops allows cells to control the timing of phenotypic switching. Using a mathematical model, we find that in this system a stable high-expression state and a stable low-expression limit cycle coexist, and the FPT distribution for stochastic transitions between them shows multiple peaks at regular intervals. A multimodal FPT distribution allows cells to detect the persistence of stress and control the rate of phenotype transition of the population. We further show that extracellular signals from cell-cell communication that change the strength of the feedback loops can modulate the FPT distribution, giving cells even greater control in a bet-hedging strategy.

  9. Estimation of the cloud transmittance from radiometric measurements at the ground level

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costa, Dario; Mares, Oana, E-mail: mareshoana@yahoo.com

    2014-11-24

The extinction of solar radiation due to the clouds is more significant than due to any other atmospheric constituent, but it is always difficult to be modeled because of the random distribution of clouds on the sky. Moreover, the transmittance of a layer of clouds is in a very complex relation with their type and depth. A method for estimating cloud transmittance was proposed in Paulescu et al. (Energ. Convers. Manage, 75 690–697, 2014). The approach is based on the hypothesis that the structure of the cloud covering the sun at a time moment does not change significantly in a short time interval (several minutes). Thus, the cloud transmittance can be calculated as the estimated coefficient of a simple linear regression for the computed versus measured solar irradiance in a time interval Δt. The aim of this paper is to optimize the length of the time interval Δt. Radiometric data measured on the Solar Platform of the West University of Timisoara during 2010 at a frequency of 1/15 seconds are used in this study.

  10. Estimation of the cloud transmittance from radiometric measurements at the ground level

    NASA Astrophysics Data System (ADS)

    Costa, Dario; Mares, Oana

    2014-11-01

    The extinction of solar radiation due to the clouds is more significant than due to any other atmospheric constituent, but it is always difficult to be modeled because of the random distribution of clouds on the sky. Moreover, the transmittance of a layer of clouds is in a very complex relation with their type and depth. A method for estimating cloud transmittance was proposed in Paulescu et al. (Energ. Convers. Manage, 75 690-697, 2014). The approach is based on the hypothesis that the structure of the cloud covering the sun at a time moment does not change significantly in a short time interval (several minutes). Thus, the cloud transmittance can be calculated as the estimated coefficient of a simple linear regression for the computed versus measured solar irradiance in a time interval Δt. The aim of this paper is to optimize the length of the time interval Δt. Radiometric data measured on the Solar Platform of the West University of Timisoara during 2010 at a frequency of 1/15 seconds are used in this study.
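The regression step described in these two records can be sketched as follows (synthetic irradiance values, not the Timisoara measurements; a through-origin fit is used for simplicity): within a window Δt, the measured irradiance is regressed on the computed clear-sky irradiance, and the slope is read off as the cloud transmittance.

```python
import random

rng = random.Random(3)

# Synthetic clear-sky (computed) irradiance over one window, in W/m^2.
computed = [400 + 10 * i for i in range(60)]
TRUE_TRANSMITTANCE = 0.6   # synthetic value for the demonstration

# Measured irradiance = transmittance * computed + sensor noise.
measured = [TRUE_TRANSMITTANCE * c + rng.gauss(0, 15) for c in computed]

# Through-origin least-squares slope: t = sum(x * y) / sum(x * x).
num = sum(c * m for c, m in zip(computed, measured))
den = sum(c * c for c in computed)
transmittance = num / den
print(round(transmittance, 3))
```

The recovered slope is close to the synthetic transmittance of 0.6; the paper's question of optimizing Δt amounts to choosing the window length over which this fit is most stable.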

  11. The Distribution of the Product Explains Normal Theory Mediation Confidence Interval Estimation.

    PubMed

    Kisbu-Sakarya, Yasemin; MacKinnon, David P; Miočević, Milica

    2014-05-01

The distribution of the product has several useful applications. One of these applications is its use to form confidence intervals for the indirect effect as the product of 2 regression coefficients. The purpose of this article is to investigate how the moments of the distribution of the product explain normal-theory mediation confidence interval coverage and imbalance. Values of the critical ratio for each random variable are used to demonstrate how the moments of the distribution of the product change across values of the critical ratio observed in research studies. Results of the simulation study showed that as skewness in absolute value increases, coverage decreases, and as skewness in absolute value and kurtosis increase, imbalance increases. The difference between testing the significance of the indirect effect using the normal theory versus the asymmetric distribution of the product is further illustrated with a real data example. This article is the first study to show the direct link between the distribution of the product and indirect effect confidence intervals, and it clarifies the results of previous simulation studies by showing why normal-theory confidence intervals for indirect effects are often less accurate than those obtained from the asymmetric distribution of the product or from resampling methods.
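A minimal Monte Carlo sketch of why the product of two normal coefficients yields asymmetric intervals (hypothetical coefficient values and standard errors, not the article's simulation design): at small critical ratios the product distribution is visibly skewed, so a symmetric normal-theory interval misplaces its endpoints relative to the percentile interval of the product.

```python
import random

rng = random.Random(11)

# Hypothetical path coefficients and standard errors (critical ratios = 1).
a, se_a = 1.0, 1.0
b, se_b = 1.0, 1.0

# Monte Carlo sample of the product of two independent normal coefficients.
n = 200_000
prods = sorted(rng.gauss(a, se_a) * rng.gauss(b, se_b) for _ in range(n))

mean = sum(prods) / n
var = sum((p - mean) ** 2 for p in prods) / n
skew = sum((p - mean) ** 3 for p in prods) / n / var ** 1.5
lo, hi = prods[int(0.025 * n)], prods[int(0.975 * n)]
print(round(skew, 2), round(lo, 2), round(hi, 2))
```

For these values the theoretical skewness is 6/3^(3/2) ≈ 1.15, and the upper percentile limit sits farther from the point estimate ab = 1 than the lower limit does; a symmetric ±1.96·SE interval cannot reproduce that asymmetry.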

  12. Multiplicative point process as a model of trading activity

    NASA Astrophysics Data System (ADS)

    Gontis, V.; Kaulakys, B.

    2004-11-01

For signals consisting of a sequence of pulses, an inherent origin of 1/f noise is Brownian fluctuation of the average interevent time between subsequent pulses of the pulse sequence. In this paper, we generalize the model of interevent time to reproduce a variety of self-affine time series exhibiting a power spectral density S(f) scaling as a power of the frequency f. Furthermore, we analyze the relation between the power-law correlations and the origin of the power-law probability distribution of the signal intensity. We introduce a stochastic multiplicative model for the time intervals between point events and analyze the statistical properties of the signal analytically and numerically. Such a model system exhibits a power-law spectral density S(f) ∼ 1/f^β for various values of β, including β = 1/2, 1, and 3/2. Explicit expressions for the power spectra in the low-frequency limit and for the distribution density of the interevent time are obtained. The counting statistics of the events are analyzed analytically and numerically as well. The specific interest of our analysis is the financial markets, where long-range correlations of price fluctuations largely depend on the number of transactions. We analyze the spectral density and counting statistics of the number of transactions. The model reproduces the spectral properties of real markets and explains the mechanism of the power-law distribution of trading activity. The study provides evidence that the statistical properties of the financial markets are enclosed in the statistics of the time intervals between trades. A multiplicative point process serves as a consistent model generating these statistics.

  13. Exponential synchronization of neural networks with discrete and distributed delays under time-varying sampling.

    PubMed

    Wu, Zheng-Guang; Shi, Peng; Su, Hongye; Chu, Jian

    2012-09-01

This paper investigates the problem of master-slave synchronization for neural networks with discrete and distributed delays under variable sampling with a known upper bound on the sampling intervals. An improved method is proposed, which captures the characteristics of sampled-data systems. Some delay-dependent criteria are derived to ensure the exponential stability of the error systems, so that the master systems synchronize with the slave systems. The desired sampled-data controller can be obtained by solving a set of linear matrix inequalities, which depend upon the maximum sampling interval and the decay rate. The obtained conditions are not only less conservative but also involve fewer decision variables than existing results. Simulation results are given to show the effectiveness and benefits of the proposed methods.

  14. Prediction future asset price which is non-concordant with the historical distribution

    NASA Astrophysics Data System (ADS)

    Seong, Ng Yew; Hin, Pooi Ah

    2015-12-01

This paper attempts to predict the major characteristics of a future asset price that is non-concordant with the distribution estimated from the price today and the prices on a large number of previous days. The three major characteristics of the i-th non-concordant asset price are: the length of the interval between the occurrence time of the previous non-concordant asset price and that of the present one; an indicator taking the values -1 and 1 according to whether the non-concordant price is extremely small or extremely large; and the degree of non-concordance, given by the negative logarithm of the probability of the left or right tail of which one of the end points is the observed future price. The vector of the three major characteristics of the next non-concordant price is modelled as dependent on the vectors corresponding to the present and l - 1 previous non-concordant prices via a 3-dimensional conditional distribution derived from a 3(l + 1)-dimensional power-normal mixture distribution. The marginal distribution for each of the three major characteristics can then be derived from the conditional distribution. The mean of the j-th marginal distribution is an estimate of the j-th characteristic of the next non-concordant price, while the 100(α/2)% and 100(1 - α/2)% points of the j-th marginal distribution can be used to form a prediction interval for the j-th characteristic. The performance measures of these estimates and prediction intervals indicate that the fitted conditional distribution is satisfactory. Thus, incorporating the distribution of the characteristics of the next non-concordant price in the asset-price model has good potential to yield a more realistic model.

  15. The Isolation of Motivational, Motoric, and Schedule Effects on Operant Performance: A Modeling Approach

    PubMed Central

    Brackney, Ryan J; Cheung, Timothy H. C; Neisewander, Janet L; Sanabria, Federico

    2011-01-01

    Dissociating motoric and motivational effects of pharmacological manipulations on operant behavior is a substantial challenge. To address this problem, we applied a response-bout analysis to data from rats trained to lever press for sucrose on variable-interval (VI) schedules of reinforcement. Motoric, motivational, and schedule factors (effort requirement, deprivation level, and schedule requirements, respectively) were manipulated. Bout analysis found that interresponse times (IRTs) were described by a mixture of two exponential distributions, one characterizing IRTs within response bouts, another characterizing intervals between bouts. Increasing effort requirement lengthened the shortest IRT (the refractory period between responses). Adding a ratio requirement increased the length and density of response bouts. Both manipulations also decreased the bout-initiation rate. In contrast, food deprivation only increased the bout-initiation rate. Changes in the distribution of IRTs over time showed that responses during extinction were also emitted in bouts, and that the decrease in response rate was primarily due to progressively longer intervals between bouts. Taken together, these results suggest that changes in the refractory period indicate motoric effects, whereas selective alterations in bout initiation rate indicate incentive-motivational effects. These findings support the use of response-bout analyses to identify the influence of pharmacological manipulations on processes underlying operant performance. PMID:21765544
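The bout-analysis idea of describing IRTs as a mixture of two exponentials can be sketched with a small EM fit (synthetic rates and mixing weight, not the rats' data): one fast component for within-bout responding and one slow component for bout-initiation intervals.

```python
import math
import random

rng = random.Random(5)

# Synthetic IRTs: 80% fast within-bout responding (rate 10/s),
# 20% slow bout-initiation intervals (rate 0.5/s).
W_TRUE, L1_TRUE, L2_TRUE = 0.8, 10.0, 0.5
data = [rng.expovariate(L1_TRUE if rng.random() < W_TRUE else L2_TRUE)
        for _ in range(10_000)]

def em_two_exponentials(xs, iters=150):
    """EM for a two-component exponential mixture; returns (w, lam1, lam2)
    with lam1 the faster (within-bout) rate and w its mixing weight."""
    xs = sorted(xs)
    half = len(xs) // 2
    # crude initialisation from the lower/upper halves of the sorted data
    lam1 = half / sum(xs[:half])
    lam2 = (len(xs) - half) / sum(xs[half:])
    w = 0.5
    for _ in range(iters):
        # E-step: responsibility of the fast component for each IRT
        r = [w * lam1 * math.exp(-lam1 * x)
             / (w * lam1 * math.exp(-lam1 * x)
                + (1 - w) * lam2 * math.exp(-lam2 * x)) for x in xs]
        # M-step: update the weight and both rates
        s = sum(r)
        w = s / len(xs)
        lam1 = s / sum(ri * x for ri, x in zip(r, xs))
        lam2 = (len(xs) - s) / sum((1 - ri) * x for ri, x in zip(r, xs))
    if lam1 < lam2:                      # keep lam1 as the faster rate
        w, lam1, lam2 = 1 - w, lam2, lam1
    return w, lam1, lam2

w, lam1, lam2 = em_two_exponentials(data)
print(round(w, 2), round(lam1, 1), round(lam2, 2))
```

The fit recovers the two well-separated rates and the mixing weight; in the bout framework, shifts in lam1 would be read as motoric effects and shifts in the bout-initiation process as motivational ones.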

  16. Multichannel Spectrometer of Time Distribution

    NASA Astrophysics Data System (ADS)

    Akindinova, E. V.; Babenko, A. G.; Vakhtel, V. M.; Evseev, N. A.; Rabotkin, V. A.; Kharitonova, D. D.

    2015-06-01

For research and control of the characteristics of radiation fluxes, from radioactive sources in particular (see, for example, paper [1]), a spectrometer and methods of data measurement and processing were created based on the MC-2A (SPC "ASPECT") multichannel counter of time intervals between the appearance of random events (particle-detector pulses). The spectrometer has four independent channels for registering the time intervals between pulse arrivals, with corresponding amplitude-spectrometric channels for monitoring, via the energy spectra, the stationarity of operation of each channel's path from detector to amplifier. Alpha radiation is registered by semiconductor detectors with an energy resolution of 16-30 keV. Using the spectrometer, oscillations of the 239-Pu alpha-radiation flux intensity were measured, followed by an autocorrelation statistical analysis of the time series of readings.

  17. The Applicability of Confidence Intervals of Quantiles for the Generalized Logistic Distribution

    NASA Astrophysics Data System (ADS)

    Shin, H.; Heo, J.; Kim, T.; Jung, Y.

    2007-12-01

The generalized logistic (GL) distribution has been widely used for frequency analysis. However, few studies address the confidence intervals that indicate the prediction accuracy of the GL distribution. In this paper, the estimation of confidence intervals of quantiles for the GL distribution is presented based on the method of moments (MOM), maximum likelihood (ML), and probability weighted moments (PWM), and the asymptotic variances of each quantile estimator are derived as functions of sample size, return period, and parameters. Monte Carlo simulation experiments are also performed to verify the applicability of the derived confidence intervals. The results show that the relative bias (RBIAS) and relative root mean square error (RRMSE) of the confidence intervals generally increase with return period and decrease with sample size. PWM performs better than the other methods in terms of RRMSE when the data are nearly symmetric, while ML shows the smallest RBIAS and RRMSE when the data are more skewed and the sample size is moderately large. The GL model was applied to fit the distribution of annual maximum rainfall data. The results show little difference between the quantiles estimated by ML and PWM, but distinct differences for MOM.

  18. Algorithms development for the GEM-based detection system

    NASA Astrophysics Data System (ADS)

    Czarski, T.; Chernyshova, M.; Malinowski, K.; Pozniak, K. T.; Kasprowicz, G.; Kolasinski, P.; Krawczyk, R.; Wojenski, A.; Zabolotny, W.

    2016-09-01

A measurement system based on a GEM (Gas Electron Multiplier) detector is developed for soft X-ray diagnostics of tokamak plasmas. The multi-channel setup is designed to estimate the energy and position distribution of an X-ray source. The central measurement issue is identifying each charge cluster by estimating its value and position. A fast and accurate serial data acquisition mode is applied for dynamic plasma diagnostics. The charge clusters are counted in a space determined by 2D position, charge value, and time interval. Radiation source characteristics are presented as histograms for a selected range of positions, time intervals, and cluster charge values corresponding to the energy spectra.

  19. Distribution of cattle grazing in a northeastern Oregon riparian pasture

    USDA-ARS?s Scientific Manuscript database

    Livestock grazing of a northeastern Oregon riparian pasture was monitored using high-frequency GPS tracking of cattle and high-resolution aerial photography. Tracking collars recorded positions, velocity, date, and time at 1-sec intervals. Areas where animals rested and moved were identified and re...

  20. Generation and Validation of Spatial Distribution of Hourly Wind Speed Time-Series using Machine Learning

    NASA Astrophysics Data System (ADS)

    Veronesi, F.; Grassi, S.

    2016-09-01

Wind resource assessment is a key aspect of wind farm planning since it allows one to estimate the long-term electricity production. Moreover, wind speed time-series at high resolution are helpful for estimating temporal changes in electricity generation and indispensable for designing stand-alone systems, which are affected by the mismatch of supply and demand. In this work, we present a new generalized statistical methodology to generate the spatial distribution of wind speed time-series, using Switzerland as a case study. This research is based upon a machine learning model and demonstrates that statistical wind resource assessment can successfully be used for estimating wind speed time-series. In fact, this method obtains reliable wind speed estimates and propagates all the sources of uncertainty (from the measurements to the mapping process) in an efficient way, i.e. minimizing computational time and load. This allows not only accurate estimation but also the creation of precise confidence intervals to map the stochasticity of the wind resource for a particular site. The validation shows that machine learning can minimize the bias of the hourly wind speed estimates. Moreover, for each mapped location this method delivers not only the mean wind speed but also its confidence interval, which are crucial data for planners.

  1. Prediction of mortality rates using a model with stochastic parameters

    NASA Astrophysics Data System (ADS)

    Tan, Chon Sern; Pooi, Ah Hin

    2016-10-01

Prediction of future mortality rates is crucial to insurance companies because they face longevity risks while providing retirement benefits to a population whose life expectancy is increasing. In previous literature, a time series model based on the multivariate power-normal distribution was applied to mortality data from the United States for the years 1933 to 2000 to forecast mortality rates for the years 2001 to 2010. In this paper, a more dynamic approach based on the multivariate time series is proposed, in which the model uses stochastic parameters that vary with time. The resulting prediction intervals obtained from the model with stochastic parameters perform better: apart from covering the observed future mortality rates well, they also tend to have distinctly shorter interval lengths.

  2. Asymptotic expansion of pair production probability in a time-dependent electric field

    NASA Astrophysics Data System (ADS)

    Arai, Takashi

    2015-12-01

We study particle creation in a single pulse of an electric field in scalar quantum electrodynamics. We investigate the parameter conditions under which dynamical pair creation and the Schwinger mechanism respectively dominate. Then, an asymptotic expansion for the particle distribution in terms of the time interval of the applied electric field is derived. We compare our result with particle creation in a constant electric field applied for a finite time interval. These results coincide for an extremely strong field but differ at general field strengths. We attribute this difference to a nonperturbative effect of high-frequency photons in external electric fields. Moreover, we find that the next-to-leading-order term in our asymptotic expansion coincides with the derivative expansion of the effective action.

  3. Long-Term Memory: A Natural Mechanism for the Clustering of Extreme Events and Anomalous Residual Times in Climate Records

    NASA Astrophysics Data System (ADS)

    Bunde, Armin; Eichner, Jan F.; Kantelhardt, Jan W.; Havlin, Shlomo

    2005-01-01

    We study the statistics of the return intervals between extreme events above a certain threshold in long-term persistent records. We find that the long-term memory leads (i) to a stretched exponential distribution of the return intervals, (ii) to a pronounced clustering of extreme events, and (iii) to an anomalous behavior of the mean residual time to the next event that depends on the history and increases with the elapsed time in a counterintuitive way. We present an analytical scaling approach and demonstrate that all these features can be seen in long climate records. The phenomena should also occur in heartbeat records, Internet traffic, and stock market volatility and have to be taken into account for an efficient risk evaluation.
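
    The return-interval statistic above is easy to compute. The sketch below (plain Python, synthetic uncorrelated data) shows the memoryless baseline against which the stretched exponential of long-term persistent records is measured; the threshold value and record length are illustrative assumptions.

```python
import random
from statistics import mean

# Sketch of the return-interval statistic, assuming an UNCORRELATED record:
# intervals between exceedances of a threshold q are then memoryless, so
# mean(interval) * P(x > q) ≈ 1. Long-term memory (the paper's subject)
# would stretch this distribution; generating correlated noise is omitted.
random.seed(1)
record = [random.gauss(0.0, 1.0) for _ in range(200_000)]

q = 1.5  # illustrative threshold
exceedances = [t for t, x in enumerate(record) if x > q]
intervals = [b - a for a, b in zip(exceedances, exceedances[1:])]

p = len(exceedances) / len(record)
print(round(mean(intervals) * p, 3))  # close to 1.0 for the memoryless case
```

    For persistent records the same computation yields a product still near 1, but with the intervals clustered: many short intervals and a fat tail of long ones.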

  4. HESS Opinions: The need for process-based evaluation of large-domain hyper-resolution models

    NASA Astrophysics Data System (ADS)

    Melsen, Lieke A.; Teuling, Adriaan J.; Torfs, Paul J. J. F.; Uijlenhoet, Remko; Mizukami, Naoki; Clark, Martyn P.

    2016-03-01

    A meta-analysis on 192 peer-reviewed articles reporting on applications of the variable infiltration capacity (VIC) model in a distributed way reveals that the spatial resolution at which the model is applied has increased over the years, while the calibration and validation time interval has remained unchanged. We argue that the calibration and validation time interval should keep pace with the increase in spatial resolution in order to resolve the processes that are relevant at the applied spatial resolution. We identified six time concepts in hydrological models, which all impact the model results and conclusions. Process-based model evaluation is particularly relevant when models are applied at hyper-resolution, where stakeholders expect credible results both at a high spatial and temporal resolution.

  5. HESS Opinions: The need for process-based evaluation of large-domain hyper-resolution models

    NASA Astrophysics Data System (ADS)

    Melsen, L. A.; Teuling, A. J.; Torfs, P. J. J. F.; Uijlenhoet, R.; Mizukami, N.; Clark, M. P.

    2015-12-01

    A meta-analysis on 192 peer-reviewed articles reporting applications of the Variable Infiltration Capacity (VIC) model in a distributed way reveals that the spatial resolution at which the model is applied has increased over the years, while the calibration and validation time interval has remained unchanged. We argue that the calibration and validation time interval should keep pace with the increase in spatial resolution in order to resolve the processes that are relevant at the applied spatial resolution. We identified six time concepts in hydrological models, which all impact the model results and conclusions. Process-based model evaluation is particularly relevant when models are applied at hyper-resolution, where stakeholders expect credible results both at a high spatial and temporal resolution.

  6. Atlas of interoccurrence intervals for selected thresholds of daily precipitation in Texas

    USGS Publications Warehouse

    Asquith, William H.; Roussel, Meghan C.

    2003-01-01

    A Poisson process model is used to define the distribution of interoccurrence intervals of daily precipitation in Texas. A precipitation interoccurrence interval is the time period between two successive rainfall events. Rainfall events are defined as daily precipitation equaling or exceeding a specified depth threshold. Ten precipitation thresholds are considered: 0.05, 0.10, 0.25, 0.50, 0.75, 1.0, 1.5, 2.0, 2.5, and 3.0 inches. Site-specific mean interoccurrence intervals and ancillary statistics are presented for each threshold and for each of 1,306 National Weather Service daily precipitation gages. Maps depicting the spatial variation across Texas of the mean interoccurrence interval for each threshold are presented. The percent change from the statewide standard deviation of the interoccurrence intervals to the root-mean-square error ranges in magnitude from -24 to -60 percent for the 0.05- and 2.0-inch thresholds, respectively. Because of this substantial negative percent change, the maps are considered more reliable estimators of the mean interoccurrence interval for most locations in Texas than the statewide mean values.
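
    Under a Poisson-type model like the one above, interoccurrence intervals for a given threshold are memoryless, so their sample mean is roughly the reciprocal of the daily exceedance probability. A minimal sketch on synthetic wet-day data; the probability value is an illustrative assumption, not a Texas estimate.

```python
import random
from statistics import mean

# Each day is "wet" (threshold exceeded) with probability p independently,
# so interoccurrence intervals are geometrically distributed with mean 1/p.
random.seed(42)
p = 0.1            # hypothetical daily exceedance probability for a threshold
days = 200_000
wet_days = [d for d in range(days) if random.random() < p]

intervals = [b - a for a, b in zip(wet_days, wet_days[1:])]
print(round(mean(intervals), 2))  # close to 1 / p = 10 days
```

    Rarer thresholds (smaller p) simply stretch the same geometric/exponential shape toward longer mean intervals.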

  7. 76 FR 12775 - Self-Regulatory Organizations; C2 Options Exchange, Incorporated; Notice of Filing of a Proposed...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-08

    ... series in the pilot: (1) A time series analysis of open interest; and (2) An analysis of the distribution... times the number of shares outstanding. These are summed for all 500 stocks and divided by a... below $3.00 and $0.10 for all other series. Strike price intervals would be set no less than 5 points...

  8. Weblog patterns and human dynamics with decreasing interest

    NASA Astrophysics Data System (ADS)

    Guo, J.-L.; Fan, C.; Guo, Z.-H.

    2011-06-01

    To describe the phenomenon that people's interest in an activity starts high and gradually decreases until reaching a balance, we propose a model of interest attenuation which reflects the fact that interest becomes more stable after a long time. We give a rigorous analysis of this model using non-homogeneous Poisson processes. The analysis indicates that the arrival-time interval distribution mixes exponential and power-law features: it is a power law with an exponential cutoff. We then collect blogs from ScienceNet.cn and carry out an empirical study of the interarrival time distribution. The empirical results agree well with the theoretical analysis, obeying a power law with an exponential cutoff, that is, a special kind of Gamma distribution. These empirical results support the model and provide evidence for a new class of phenomena in human dynamics. It can be concluded that, besides power-law distributions, other distributions arise in human dynamics. These findings demonstrate the variety of human behavioral dynamics.
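
    The Gamma family noted above (a power law with an exponential cutoff) can be fitted to interarrival times by simple moment matching. A sketch on synthetic data; the true shape and scale values are illustrative assumptions, not the paper's ScienceNet.cn estimates.

```python
import random
from statistics import mean, variance

# Moment-matching fit of a Gamma distribution: for Gamma(k, theta),
# mean = k * theta and variance = k * theta^2, so k = m^2/v, theta = v/m.
random.seed(7)
samples = [random.gammavariate(0.6, 5.0) for _ in range(100_000)]

m, v = mean(samples), variance(samples)
shape = m * m / v  # controls the power-law-like small-interval regime
scale = v / m      # sets the exponential cutoff scale
print(round(shape, 2), round(scale, 2))  # near the true values (0.6, 5.0)
```

    A shape parameter below 1 is what produces the heavy concentration of short intervals that distinguishes this family from a plain exponential.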

  9. Modeling the distribution of extreme share return in Malaysia using Generalized Extreme Value (GEV) distribution

    NASA Astrophysics Data System (ADS)

    Hasan, Husna; Radi, Noor Fadhilah Ahmad; Kassim, Suraiya

    2012-05-01

    Extreme share return in Malaysia is studied. The monthly, quarterly, half yearly and yearly maximum returns are fitted to the Generalized Extreme Value (GEV) distribution. The Augmented Dickey Fuller (ADF) and Phillips Perron (PP) tests are performed to test for stationarity, while Mann-Kendall (MK) test is for the presence of monotonic trend. Maximum Likelihood Estimation (MLE) is used to estimate the parameter while L-moments estimate (LMOM) is used to initialize the MLE optimization routine for the stationary model. Likelihood ratio test is performed to determine the best model. Sherman's goodness of fit test is used to assess the quality of convergence of the GEV distribution by these monthly, quarterly, half yearly and yearly maximum. Returns levels are then estimated for prediction and planning purposes. The results show all maximum returns for all selection periods are stationary. The Mann-Kendall test indicates the existence of trend. Thus, we ought to model for non-stationary model too. Model 2, where the location parameter is increasing with time is the best for all selection intervals. Sherman's goodness of fit test shows that monthly, quarterly, half yearly and yearly maximum converge to the GEV distribution. From the results, it seems reasonable to conclude that yearly maximum is better for the convergence to the GEV distribution especially if longer records are available. Return level estimates, which is the return level (in this study return amount) that is expected to be exceeded, an average, once every t time periods starts to appear in the confidence interval of T = 50 for quarterly, half yearly and yearly maximum.
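
    A minimal sketch of the block-maxima workflow behind such a study: extract "monthly" maxima from synthetic returns, fit the Gumbel distribution (the shape-zero member of the GEV family) by moment estimators, and compute a return level. This is a simplification: the paper fits the full GEV by MLE initialized with L-moments, and the data, block length and 50-period horizon here are illustrative assumptions.

```python
import math
import random
from statistics import mean, stdev

# Gumbel moment estimators: scale beta = s * sqrt(6) / pi,
# location mu = xbar - EulerGamma * beta.
random.seed(0)
daily_returns = [random.gauss(0.0, 1.0) for _ in range(252 * 30)]  # ~30 years

block = 21  # ~one trading month of synthetic "returns"
maxima = [max(daily_returns[i:i + block])
          for i in range(0, len(daily_returns), block)]

EULER_GAMMA = 0.5772156649
beta = stdev(maxima) * math.sqrt(6) / math.pi  # Gumbel scale estimate
mu = mean(maxima) - EULER_GAMMA * beta         # Gumbel location estimate

# Return level exceeded on average once every T = 50 blocks
T = 50
return_level = mu - beta * math.log(-math.log(1 - 1 / T))
print(round(mu, 3), round(beta, 3), round(return_level, 3))
```

    With longer blocks (quarterly, yearly maxima) the fitted distribution typically converges more cleanly to the GEV limit, at the cost of fewer maxima to estimate from.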

  10. Multiserver Queueing Model subject to Single Exponential Vacation

    NASA Astrophysics Data System (ADS)

    Vijayashree, K. V.; Janani, B.

    2018-04-01

    A multi-server queueing model subject to single exponential vacation is considered. Arrivals join the queue according to a Poisson process, and service takes place according to an exponential distribution. Whenever the system becomes empty, all the servers go on vacation and return after an interval of time; the vacation times are assumed to be exponentially distributed. On returning, the servers start providing service if there are waiting customers; otherwise they remain idle until the next busy period. In this paper, the stationary and transient probabilities for the number of customers during the idle and functional states of the servers are obtained explicitly. Numerical illustrations are also added to visualize the effect of various parameters.

  11. A model for foreign exchange markets based on glassy Brownian systems

    PubMed Central

    Trinidad-Segovia, J. E.; Clara-Rahola, J.; Puertas, A. M.; De las Nieves, F. J.

    2017-01-01

    In this work we extend a well-known model from arrested physical systems and employ it to efficiently describe the price fluctuation distributions of different currency pairs in the foreign exchange market. We consider the exchange rate price in the time range between 2010 and 2016 at yearly time intervals, resolved at one-minute frequency. We then fit the experimental datasets with this model, and find significant qualitative agreement between price fluctuation distributions from the currency market and those of colloidal particle positions in arrested states. The main contribution of this paper is the application of a well-established physical model that does not necessarily assume the restrictive independent and identically distributed (i.i.d.) condition. PMID:29206868

  12. Ion distributions in RC at different energy levels retrieved from TWINS ENA images by voxel CT tech

    NASA Astrophysics Data System (ADS)

    Ma, S. Y.; McComas, David; Xu, Liang; Goldstein, Jerry; Yan, Wei-Nan

    2012-07-01

    Distributions of energetic ions in the RC regions at different energy levels are retrieved using a 3-D voxel CT inversion method from ENA measurements onboard the TWINS constellation during the main phase of a moderate geomagnetic storm. It is assumed that the ion flux distribution in the RC is anisotropic with respect to pitch angle, complying with the adiabatic invariance of the magnetic moment as ions move in the dipole magnetic mirror field. A semi-empirical model of the RC ion distribution at the magnetic equator is used to form the ion flux distribution at off-equatorial latitudes by mapping. For the time interval concerned, the two TWINS satellites, flying in double Molniya orbits, were located in nearly the same meridian plane at vantage points widely separated in magnetic local time, both at more than 5 RE geocentric distance from the Earth. The ENA data used in this study are differential fluxes averaged over 12 sweeps (corresponding to an interval of 16 min.) at energy levels ranging from about 1 to 100 keV. The retrieved ion distributions show that the main part of the RC is located in the region with L values larger than 4, tending to increase at larger L. There are two distinct dominant energy bands at which the ion fluxes are of significantly larger magnitude than at other energy levels: one at a lower level around 2 keV and the other at a higher level of 30-100 keV. Furthermore, it is very interesting that the peak fluxes of the RC ions in the two energy bands occur at different magnetic local times: low-energy ions appear preferentially after midnight, while the higher-energy ions are mainly distributed around midnight and pre-midnight. This new profile is worthy of further study and needs to be confirmed by more cases.

  13. How people make friends in social networking sites—A microscopic perspective

    NASA Astrophysics Data System (ADS)

    Hu, Haibo; Wang, Xiaofan

    2012-02-01

    We study the detailed growth of a social networking site with full temporal information by examining the creation process of each friendship relation that can collectively lead to the macroscopic properties of the network. We first study the reciprocal behavior of users, and find that link requests are quickly responded to and that the distribution of reciprocation intervals decays in an exponential form. The degrees of inviters/accepters are slightly negatively correlative with reciprocation time. In addition, the temporal feature of the online community shows that the distributions of intervals of user behaviors, such as sending or accepting link requests, follow a power law with a universal exponent, and peaks emerge for intervals of an integral day. We finally study the preferential selection and linking phenomena of the social networking site and find that, for the former, a linear preference holds for preferential sending and reception, and for the latter, a linear preference also holds for preferential acceptance, creation, and attachment. Based on the linearly preferential linking, we put forward an analyzable network model which can reproduce the degree distribution of the network. The research framework presented in the paper could provide a potential insight into how the micro-motives of users lead to the global structure of online social networks.

  14. A hierarchical model for estimating the spatial distribution and abundance of animals detected by continuous-time recorders

    USGS Publications Warehouse

    Dorazio, Robert; Karanth, K. Ullas

    2017-01-01

    Motivation: Several spatial capture-recapture (SCR) models have been developed to estimate animal abundance by analyzing the detections of individuals in a spatial array of traps. Most of these models do not use the actual dates and times of detection, even though this information is readily available when using continuous-time recorders, such as microphones or motion-activated cameras. Instead, most SCR models either partition the period of trap operation into a set of subjectively chosen discrete intervals and ignore multiple detections of the same individual within each interval, or they simply use the frequency of detections during the period of trap operation and ignore the observed times of detection. Both practices make inefficient use of potentially important information in the data. Model and data analysis: We developed a hierarchical SCR model to estimate the spatial distribution and abundance of animals detected with continuous-time recorders. Our model includes two kinds of point processes: a spatial process to specify the distribution of latent activity centers of individuals within the region of sampling and a temporal process to specify temporal patterns in the detections of individuals. We illustrated this SCR model by analyzing spatial and temporal patterns evident in the camera-trap detections of tigers living in and around the Nagarahole Tiger Reserve in India. We also conducted a simulation study to examine the performance of our model when analyzing data sets of greater complexity than the tiger data. Benefits: Our approach provides three important benefits: First, it exploits all of the information in SCR data obtained using continuous-time recorders. Second, it is sufficiently versatile to allow the effects of both space use and behavior of animals to be specified as functions of covariates that vary over space and time. 
Third, it allows both the spatial distribution and abundance of individuals to be estimated, effectively providing a species distribution model, even in cases where spatial covariates of abundance are unknown or unavailable. We illustrated these benefits in the analysis of our data, which allowed us to quantify differences between nocturnal and diurnal activities of tigers and to estimate their spatial distribution and abundance across the study area. Our continuous-time SCR model allows an analyst to specify many of the ecological processes thought to be involved in the distribution, movement, and behavior of animals detected in a spatial trapping array of continuous-time recorders. We plan to extend this model to estimate the population dynamics of animals detected during multiple years of SCR surveys.

  15. Effects of long memory in the order submission process on the properties of recurrence intervals of large price fluctuations

    NASA Astrophysics Data System (ADS)

    Meng, Hao; Ren, Fei; Gu, Gao-Feng; Xiong, Xiong; Zhang, Yong-Jie; Zhou, Wei-Xing; Zhang, Wei

    2012-05-01

    Understanding the statistical properties of recurrence intervals (also termed return intervals in econophysics literature) of extreme events is crucial to risk assessment and management of complex systems. The probability distributions and correlations of recurrence intervals for many systems have been extensively investigated. However, the impacts of microscopic rules of a complex system on the macroscopic properties of its recurrence intervals are less studied. In this letter, we adopt an order-driven stock model to address this issue for stock returns. We find that the distributions of the scaled recurrence intervals of simulated returns have a power-law scaling with stretched exponential cutoff and the intervals possess multifractal nature, which are consistent with empirical results. We further investigate the effects of long memory in the directions (or signs) and relative prices of the order flow on the characteristic quantities of these properties. It is found that the long memory in the order directions (Hurst index Hs) has a negligible effect on the interval distributions and the multifractal nature. In contrast, the power-law exponent of the interval distribution increases linearly with respect to the Hurst index Hx of the relative prices, and the singularity width of the multifractal nature fluctuates around a constant value when Hx<0.7 and then increases with Hx. No evident effects of Hs and Hx are found on the long memory of the recurrence intervals. Our results indicate that the nontrivial properties of the recurrence intervals of returns are mainly caused by traders' behaviors of persistently placing new orders around the best bid and ask prices.

  16. Confidence Intervals for True Scores Using the Skew-Normal Distribution

    ERIC Educational Resources Information Center

    Garcia-Perez, Miguel A.

    2010-01-01

    A recent comparative analysis of alternative interval estimation approaches and procedures has shown that confidence intervals (CIs) for true raw scores determined with the Score method--which uses the normal approximation to the binomial distribution--have actual coverage probabilities that are closest to their nominal level. It has also recently…

  17. Publication Bias in Meta-Analysis: Confidence Intervals for Rosenthal's Fail-Safe Number.

    PubMed

    Fragkos, Konstantinos C; Tsagris, Michail; Frangos, Christos C

    2014-01-01

    The purpose of the present paper is to assess the efficacy of confidence intervals for Rosenthal's fail-safe number. Although Rosenthal's estimator is widely used by researchers, its statistical properties are largely unexplored. First, we developed statistical theory which allowed us to produce confidence intervals for Rosenthal's fail-safe number. This was done by distinguishing whether the number of studies analysed in a meta-analysis is fixed or random; each case produces different variance estimators. For a given number of studies and a given distribution, we provided five variance estimators. Confidence intervals are examined with a normal approximation and a nonparametric bootstrap. The accuracy of the different confidence interval estimates was then tested by simulation under different distributional assumptions. The half-normal distribution variance estimator has the best probability coverage. Finally, we provide a table of lower confidence intervals for Rosenthal's estimator.
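
    Rosenthal's point estimator itself is simple; the paper's contribution is the confidence-interval machinery around it. A sketch of the point estimate, using the standard formula with hypothetical study-level z values:

```python
# Rosenthal's fail-safe number: the number of unpublished null-result
# studies needed to drag a significant combined result down to the
# one-tailed alpha = .05 cutoff (z = 1.645). The z values are hypothetical.
def fail_safe_n(z_scores, z_alpha=1.645):
    k = len(z_scores)
    return (sum(z_scores) ** 2) / (z_alpha ** 2) - k

z_scores = [2.1, 1.8, 2.5, 1.9, 2.2]
print(round(fail_safe_n(z_scores), 1))  # ≈ 35.7
```

    The variance estimators developed in the paper attach uncertainty to this point value, which the formula alone does not convey.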

  18. Publication Bias in Meta-Analysis: Confidence Intervals for Rosenthal's Fail-Safe Number

    PubMed Central

    Fragkos, Konstantinos C.; Tsagris, Michail; Frangos, Christos C.

    2014-01-01

    The purpose of the present paper is to assess the efficacy of confidence intervals for Rosenthal's fail-safe number. Although Rosenthal's estimator is widely used by researchers, its statistical properties are largely unexplored. First, we developed statistical theory which allowed us to produce confidence intervals for Rosenthal's fail-safe number. This was done by distinguishing whether the number of studies analysed in a meta-analysis is fixed or random; each case produces different variance estimators. For a given number of studies and a given distribution, we provided five variance estimators. Confidence intervals are examined with a normal approximation and a nonparametric bootstrap. The accuracy of the different confidence interval estimates was then tested by simulation under different distributional assumptions. The half-normal distribution variance estimator has the best probability coverage. Finally, we provide a table of lower confidence intervals for Rosenthal's estimator. PMID:27437470

  19. Timescale dependent deformation of orogenic belts?

    NASA Astrophysics Data System (ADS)

    Hoth, S.; Friedrich, A. M.; Vietor, T.; Hoffmann-Rothe, A.; Kukowski, N.; Oncken, O.

    2004-12-01

    The principal aim of linking geodetic, paleoseismologic and geologic estimates of fault slip is to extrapolate the respective rates from one timescale to another and ultimately predict the recurrence interval of large earthquakes, which threaten human habitats. This approach, however, is based on two often implicitly made assumptions: a uniform slip distribution through time and space, and no changes of the boundary conditions during the time interval of interest. Both assumptions are often hard to verify. A recent study, which analysed an exceptionally complete record of seismic slip for the Wasatch and related faults (Basin and Range province) ranging from 10 yr to 10 Myr, suggests that such a link between geodetic and geologic rates might not exist, i.e., that our records of fault displacement may depend on the timescale over which they were measured. This view derives support from results of scaled 2D sandbox experiments, as well as numerical simulations with distinct elements, both of which investigated the effect of boundary conditions such as flexure, mechanical stratigraphy and erosion on the spatio-temporal distribution of deformation within bivergent wedges. We identified three types of processes based on their distinct spatio-temporal distributions of deformation. First, incremental strain and local strain rates are very short-lived, are broadly distributed within the bivergent wedge, and show no temporal pattern. Second, footwall shortcuts and the re-activation of either internal thrusts or of the retro shear-zone are irregularly distributed in time, and are thus not predictable either, but last for a longer time interval. Third, the stepwise initiation and propagation of the deformation front is very regular in time, since it depends on the thickness of the incoming layer and on its internal and basal material properties. We consider the propagation of the deformation front as an internal clock of a thrust belt, which is therefore predictable. 
    A deformation front advance cycle requires the longest timescale. Thus, despite known and constant boundary conditions during the simulations, we found only one regular temporal pattern of deformation in a steadily active bivergent wedge. We therefore propose that the structural inventory of an orogenic belt is hierarchically ordered with respect to accumulated slip, in analogy to the discharge pattern in a drainage network. The deformation front would have the highest order, a branching splay the lowest. Since kinematic boundary conditions control deformation front advance, its timing and the related maximum magnitude of finite strain, i.e. throw on the frontal thrust, are predictable. However, the number of factors responsible for the reactivation of faults, such as the degree of strain softening, the orientation of faults, or fluid flow and the resulting cementation of faults, increases with increasing distance from the deformation front. Since it is rarely possible to determine the complete network of forces within a wedge, the reactivation of lower-order structures is not predictable in time and space. Two implications for field studies emerge: A change in the propagation of deformation can only be determined if at least two accretion cycles are sampled. The link between geodetic, paleoseismologic and geologic fault slip estimates can only be successfully derived if the position of the investigated fault within the hierarchical order has not changed over the time interval of interest.

  20. Nonparametric change point estimation for survival distributions with a partially constant hazard rate.

    PubMed

    Brazzale, Alessandra R; Küchenhoff, Helmut; Krügel, Stefanie; Schiergens, Tobias S; Trentzsch, Heiko; Hartl, Wolfgang

    2018-04-05

    We present a new method for estimating a change point in the hazard function of a survival distribution assuming a constant hazard rate after the change point and a decreasing hazard rate before the change point. Our method is based on fitting a stump regression to p values for testing hazard rates in small time intervals. We present three real data examples describing survival patterns of severely ill patients, whose excess mortality rates are known to persist far beyond hospital discharge. For designing survival studies in these patients and for the definition of hospital performance metrics (e.g. mortality), it is essential to define adequate and objective end points. The reliable estimation of a change point will help researchers to identify such end points. By precisely knowing this change point, clinicians can distinguish between the acute phase with high hazard (time elapsed after admission and before the change point was reached), and the chronic phase (time elapsed after the change point) in which hazard is fairly constant. We show in an extensive simulation study that maximum likelihood estimation is not robust in this setting, and we evaluate our new estimation strategy including bootstrap confidence intervals and finite sample bias correction.

  1. Quasi-parton distribution functions, momentum distributions, and pseudo-parton distribution functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radyushkin, Anatoly V.

    Here, we show that quasi-PDFs may be treated as hybrids of PDFs and primordial rest-frame momentum distributions of partons. This results in a complicated convolution nature of quasi-PDFs that necessitates using large p₃ ≳ 3 GeV momenta to get reasonably close to the PDF limit. Furthermore, as an alternative approach, we propose to use pseudo-PDFs P(x, z₃²) that generalize the light-front PDFs onto spacelike intervals and are related to the Ioffe-time distributions M(ν, z₃²), functions of the Ioffe time ν = p₃z₃ and of the distance parameter z₃², with respect to which M displays perturbative evolution for small z₃. In this form, one may divide out the z₃² dependence coming from the primordial rest-frame distribution and from the problematic factor due to lattice renormalization of the gauge link. The ν-dependence remains intact and determines the shape of the PDFs.

  2. Quasi-parton distribution functions, momentum distributions, and pseudo-parton distribution functions

    DOE PAGES

    Radyushkin, Anatoly V.

    2017-08-28

    Here, we show that quasi-PDFs may be treated as hybrids of PDFs and primordial rest-frame momentum distributions of partons. This results in a complicated convolution nature of quasi-PDFs that necessitates using large p₃ ≳ 3 GeV momenta to get reasonably close to the PDF limit. Furthermore, as an alternative approach, we propose to use pseudo-PDFs P(x, z₃²) that generalize the light-front PDFs onto spacelike intervals and are related to the Ioffe-time distributions M(ν, z₃²), functions of the Ioffe time ν = p₃z₃ and of the distance parameter z₃², with respect to which M displays perturbative evolution for small z₃. In this form, one may divide out the z₃² dependence coming from the primordial rest-frame distribution and from the problematic factor due to lattice renormalization of the gauge link. The ν-dependence remains intact and determines the shape of the PDFs.

  3. Quantifying Mixed Uncertainties in Cyber Attacker Payoffs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatterjee, Samrat; Halappanavar, Mahantesh; Tipireddy, Ramakrishna

    Representation and propagation of uncertainty in cyber attacker payoffs is a key aspect of security games. Past research has primarily focused on representing the defender's beliefs about attacker payoffs as point utility estimates. More recently, within the physical security domain, attacker payoff uncertainties have been represented as Uniform and Gaussian probability distributions, and intervals. Within cyber settings, continuous probability distributions may still be appropriate for addressing statistical (aleatory) uncertainties where the defender may assume that the attacker's payoffs differ over time. However, systematic (epistemic) uncertainties may exist, where the defender may not have sufficient knowledge or there is insufficient information about the attacker's payoff generation mechanism. Such epistemic uncertainties are more suitably represented as probability boxes with intervals. In this study, we explore the mathematical treatment of such mixed payoff uncertainties.

  4. Intra-arterial bolus of 125I labeled meglumine diatrizoate. Early extravascular distribution.

    PubMed

    Dean, P B; Kormano, M

    1977-07-01

    A mixture of 125I labeled meglumine diatrizoate and 131I labeled human serum albumin was injected into the lower abdominal aorta of 30 anesthetized, laparotomized male rats. Measurements of the activities in cardiac blood and in different tissues of the hindlimbs and testes were performed at six time intervals ranging from 5 seconds to 2 minutes after injection, to determine the early uptake and distribution volumes of diatrizoate. Concentrations and distribution volumes were initially much greater than values obtained after intravenous injection, but these differences had considerably decreased or disappeared by 2 minutes.

  5. Benford's law first significant digit and distribution distances for testing the reliability of financial reports in developing countries

    NASA Astrophysics Data System (ADS)

    Shi, Jing; Ausloos, Marcel; Zhu, Tingting

    2018-02-01

    We discuss a common suspicion about reported financial data in 10 industrial sectors of the 6 so-called "main developing countries" over the time interval 2000-2014. These data are examined through Benford's law first significant digit test and through distribution distance tests. It is shown that several visually anomalous data have to be removed a priori. Thereafter, the distributions follow the first significant digit law much better, indicating the usefulness of a Benford's law test from the research starting line. The same holds true for the distance tests. A few outliers are pointed out.
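
    A first-significant-digit test like the one applied above can be sketched in a few lines: compare observed leading-digit frequencies with the Benford proportions via a mean absolute deviation, one of several possible distance statistics. Powers of 2, which are known to conform to the law, stand in here for the reported financial figures examined in the paper.

```python
import math
from collections import Counter

# Benford's first-significant-digit law: P(d) = log10(1 + 1/d), d = 1..9.
def first_digit(n):
    return int(str(n)[0])  # n is a positive integer here

data = [2 ** k for k in range(1, 1001)]
counts = Counter(first_digit(x) for x in data)
n = len(data)

# Mean absolute deviation between observed and Benford proportions
mad = sum(abs(counts[d] / n - math.log10(1 + 1 / d)) for d in range(1, 10)) / 9
print(round(mad, 4))  # small for Benford-conforming data
```

    Fabricated or rounded figures typically inflate this deviation, which is why the statistic is useful as a first screening test.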

  6. Communication interval selection in distributed heterogeneous simulation of large-scale dynamical systems

    NASA Astrophysics Data System (ADS)

    Lucas, Charles E.; Walters, Eric A.; Jatskevich, Juri; Wasynczuk, Oleg; Lamm, Peter T.

    2003-09-01

    In this paper, a new technique useful for the numerical simulation of large-scale systems is presented. This approach enables the overall system simulation to be formed by the dynamic interconnection of the various interdependent simulations, each representing a specific component or subsystem such as control, electrical, mechanical, hydraulic, or thermal. Each simulation may be developed separately using possibly different commercial-off-the-shelf simulation programs thereby allowing the most suitable language or tool to be used based on the design/analysis needs. These subsystems communicate the required interface variables at specific time intervals. A discussion concerning the selection of appropriate communication intervals is presented herein. For the purpose of demonstration, this technique is applied to a detailed simulation of a representative aircraft power system, such as that found on the Joint Strike Fighter (JSF). This system is comprised of ten component models each developed using MATLAB/Simulink, EASY5, or ACSL. When the ten component simulations were distributed across just four personal computers (PCs), a greater than 15-fold improvement in simulation speed (compared to the single-computer implementation) was achieved.

  7. A Hydraulic Tomography Experiment in Fractured Sedimentary Rocks, Newark Basin, New Jersey, USA

    NASA Astrophysics Data System (ADS)

    Tiedeman, C. R.; Barrash, W.; Thrash, C. J.; Johnson, C. D.

    2015-12-01

    Hydraulic tomography was performed in July 2015 in contaminated fractured mudstone beds at the former Naval Air Warfare Center (NAWC) in the Newark Basin near Trenton, NJ using seven existing wells. The spatial arrangement of wells (in a circle of 9 m radius with one central well), the use of packers to divide the wells into multiple monitoring intervals, and the deployment of fiber optic pressure transducers enabled collection of a hydraulic tomography dataset comprising high-resolution drawdown observations at an unprecedented level of spatial detail for fractured rocks. The experiment involved 45-minute cross-hole aquifer tests, conducted by pumping from a given packer-isolated well interval and continuously monitoring drawdowns in all other well intervals. The collective set of drawdown data from all tests and intervals displays a wide range of behavior suggestive of highly heterogeneous hydraulic conductivity (K) within the tested volume, such as: drawdown curves for different well intervals crossing one another on drawdown-time plots; variable drawdown curve shapes, including linear segments on log-log plots; variable order and magnitude of time-lag and/or drawdown for intervals of a given well in response to pumping from similar fractures or stratigraphic units in different wells; and variable groupings of wells and intervals showing similar responses for different pumping tests. The observed behavior is consistent with previous testing at the NAWC indicating that K within and across individual mudstone beds can vary by orders of magnitude over scales of meters. Preliminary assessment of the drawdown data together with a rich set of geophysical logs suggests an initial conceptual model that includes densely distributed fractures of moderate K at the shallowest depths of the tested volume, connected high-K bedding-plane-parting fractures at intermediate depths, and sparse low-K fractures in the deeper rocks. Future work will involve tomographic inversion of the data to estimate the K distribution at a scale of ~1 m3 in the upper two-thirds of the investigated volume where observation density is greatest.

  8. Dead time corrections for inbeam γ-spectroscopy measurements

    NASA Astrophysics Data System (ADS)

    Boromiza, M.; Borcea, C.; Negret, A.; Olacel, A.; Suliman, G.

    2017-08-01

    Relatively high counting rates were registered in a proton inelastic scattering experiment on 16O and 28Si using HPGe detectors, performed at the Tandem facility of IFIN-HH, Bucharest. Consequently, dead time corrections were needed in order to determine the absolute γ-production cross sections. Considering that the real counting rate follows a Poisson distribution, the dead time correction procedure is reformulated in statistical terms. The arrival time interval between incoming events (Δt) obeys an exponential distribution with a single parameter, the mean of the associated Poisson distribution. We use this mathematical connection to calculate and implement the dead time corrections for the counting rates of the mentioned experiment. Also, exploiting an idea introduced by Pommé et al., we describe a consistent method for calculating the dead time correction which completely avoids the complicated problem of measuring the dead time of a given detection system. Several comparisons are made between the corrections implemented through this method and those obtained using standard (phenomenological) dead time models, and we show how these results were used to correct our experimental cross sections.
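The Poisson reasoning above can be illustrated with a minimal sketch. This is not the Pommé-style procedure the abstract describes; it is the textbook non-paralyzable model, in which Poisson arrivals (exponential interarrival times) produce a measured rate m that is corrected back to the true rate via n = m / (1 − m·τ). The rate and dead-time values are illustrative assumptions.

```python
import random

def simulate_measured_rate(true_rate, dead_time, n_events=200000, seed=1):
    """Simulate Poisson arrivals (exponential interarrival times) through a
    non-paralyzable dead time and return the measured counting rate."""
    rng = random.Random(seed)
    t = 0.0
    last_recorded = -float("inf")
    recorded = 0
    for _ in range(n_events):
        t += rng.expovariate(true_rate)      # exponential interarrival times
        if t - last_recorded >= dead_time:   # event falls outside the dead time
            recorded += 1
            last_recorded = t
    return recorded / t

true_rate = 1.0e4    # events per second (illustrative)
dead_time = 5.0e-6   # 5 microseconds (illustrative)

m = simulate_measured_rate(true_rate, dead_time)
corrected = m / (1.0 - m * dead_time)        # non-paralyzable correction
```

The measured rate m comes out below the true rate, and the correction recovers the true rate to within statistical error.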

  9. A simulation model for determining the optimal size of emergency teams on call in the operating room at night.

    PubMed

    van Oostrum, Jeroen M; Van Houdenhoven, Mark; Vrielink, Manon M J; Klein, Jan; Hans, Erwin W; Klimek, Markus; Wullink, Gerhard; Steyerberg, Ewout W; Kazemier, Geert

    2008-11-01

    Hospitals that perform emergency surgery during the night (e.g., from 11:00 pm to 7:30 am) face decisions on optimal operating room (OR) staffing. Emergency patients need to be operated on within a predefined safety window to decrease morbidity and improve their chances of full recovery. We developed a process to determine the optimal OR team composition during the night, such that staffing costs are minimized, while providing adequate resources to start surgery within the safety interval. A discrete event simulation in combination with modeling of safety intervals was applied. Emergency surgery was allowed to be postponed safely. The model was tested using data from the main OR of Erasmus University Medical Center (Erasmus MC). Two outcome measures were calculated: violation of safety intervals and frequency with which OR and anesthesia nurses were called in from home. We used the following input data from Erasmus MC to estimate distributions of all relevant parameters in our model: arrival times of emergency patients, durations of surgical cases, length of stay in the postanesthesia care unit, and transportation times. In addition, surgeons and OR staff of Erasmus MC specified safety intervals. Reducing in-house team members from 9 to 5 increased the fraction of patients treated too late by 2.5% as compared to the baseline scenario. Substantially more OR and anesthesia nurses were called in from home when needed. The use of safety intervals benefits OR management during nights. Modeling of safety intervals substantially influences the number of emergency patients treated on time. Our case study showed that by modeling safety intervals and applying computer simulation, an OR can reduce its staff on call without jeopardizing patient safety.

  10. The cluster charge identification in the GEM detector for fusion plasma imaging by soft X-ray diagnostics

    NASA Astrophysics Data System (ADS)

    Czarski, T.; Chernyshova, M.; Malinowski, K.; Pozniak, K. T.; Kasprowicz, G.; Kolasinski, P.; Krawczyk, R.; Wojenski, A.; Zabolotny, W.

    2016-11-01

    The measurement system based on gas electron multiplier detector is developed for soft X-ray diagnostics of tokamak plasmas. The multi-channel setup is designed for estimation of the energy and the position distribution of an X-ray source. The focal measuring issue is the charge cluster identification by its value and position estimation. The fast and accurate mode of the serial data acquisition is applied for the dynamic plasma diagnostics. The charge clusters are counted in the space determined by 2D position, charge value, and time intervals. Radiation source characteristics are presented by histograms for a selected range of position, time intervals, and cluster charge values corresponding to the energy spectra.

  11. The cluster charge identification in the GEM detector for fusion plasma imaging by soft X-ray diagnostics.

    PubMed

    Czarski, T; Chernyshova, M; Malinowski, K; Pozniak, K T; Kasprowicz, G; Kolasinski, P; Krawczyk, R; Wojenski, A; Zabolotny, W

    2016-11-01

    The measurement system based on gas electron multiplier detector is developed for soft X-ray diagnostics of tokamak plasmas. The multi-channel setup is designed for estimation of the energy and the position distribution of an X-ray source. The focal measuring issue is the charge cluster identification by its value and position estimation. The fast and accurate mode of the serial data acquisition is applied for the dynamic plasma diagnostics. The charge clusters are counted in the space determined by 2D position, charge value, and time intervals. Radiation source characteristics are presented by histograms for a selected range of position, time intervals, and cluster charge values corresponding to the energy spectra.

  12. Intertime jump statistics of state-dependent Poisson processes.

    PubMed

    Daly, Edoardo; Porporato, Amilcare

    2007-01-01

    A method to obtain the probability distribution of the interarrival times of jump occurrences in systems driven by state-dependent Poisson noise is proposed. Such a method uses the survivor function obtained by a modified version of the master equation associated to the stochastic process under analysis. A model for the timing of human activities shows the capability of state-dependent Poisson noise to generate power-law distributions. The application of the method to a model for neuron dynamics and to a hydrological model accounting for land-atmosphere interaction elucidates the origin of characteristic recurrence intervals and possible persistence in state-dependent Poisson models.
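The survivor function the method builds on can be sketched in the degenerate constant-rate case, where the interarrival-time survivor function is S(t) = exp(−λt); the state-dependent case replaces the constant λ with a rate depending on the system state. All numbers below are illustrative.

```python
import math
import random

def empirical_survivor(samples, t):
    """Fraction of interarrival times exceeding t (empirical survivor function)."""
    return sum(1 for s in samples if s > t) / len(samples)

rng = random.Random(0)
lam = 2.0                                       # constant jump rate (illustrative)
samples = [rng.expovariate(lam) for _ in range(100000)]

t = 0.5
emp = empirical_survivor(samples, t)
theory = math.exp(-lam * t)                     # S(t) = exp(-lambda * t)
```

With a constant rate the empirical survivor function matches the exponential form closely; deviations from it are exactly what a state-dependent rate produces.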

  13. Enhancing the Selection of Backoff Interval Using Fuzzy Logic over Wireless Ad Hoc Networks

    PubMed Central

    Ranganathan, Radha; Kannan, Kathiravan

    2015-01-01

    IEEE 802.11 is the de facto standard for medium access over wireless ad hoc network. The collision avoidance mechanism (i.e., random binary exponential backoff—BEB) of IEEE 802.11 DCF (distributed coordination function) is inefficient and unfair especially under heavy load. In the literature, many algorithms have been proposed to tune the contention window (CW) size. However, these algorithms make every node select its backoff interval between [0, CW] in a random and uniform manner. This randomness is incorporated to avoid collisions among the nodes. But this random backoff interval can change the optimal order and frequency of channel access among competing nodes which results in unfairness and increased delay. In this paper, we propose an algorithm that schedules the medium access in a fair and effective manner. This algorithm enhances IEEE 802.11 DCF with additional level of contention resolution that prioritizes the contending nodes according to its queue length and waiting time. Each node computes its unique backoff interval using fuzzy logic based on the input parameters collected from contending nodes through overhearing. We evaluate our algorithm against IEEE 802.11, GDCF (gentle distributed coordination function) protocols using ns-2.35 simulator and show that our algorithm achieves good performance. PMID:25879066

  14. Time series models on analysing mortality rates and acute childhood lymphoid leukaemia.

    PubMed

    Kis, Maria

    2005-01-01

    In this paper we demonstrate the application of time series models in medical research. Hungarian mortality rates were analysed with autoregressive integrated moving average (ARIMA) models, and seasonal time series models were used to examine data on acute childhood lymphoid leukaemia. Mortality data may be analysed by time series methods such as ARIMA modelling. This method is demonstrated by two examples: analysis of the mortality rates of ischemic heart diseases and analysis of the mortality rates of cancer of the digestive system. Mathematical expressions are given for the results of the analysis. The relationships between time series of mortality rates were studied with ARIMA models. Confidence intervals for the autoregressive parameters were calculated by three methods: the standard normal approximation, estimation based on White's theory, and the continuous-time estimation. Analysing the confidence intervals of the first-order autoregressive parameters, we conclude that the intervals obtained with the continuous-time estimation model were much smaller than those of the other estimations. We also present a new approach to analysing the occurrence of acute childhood lymphoid leukaemia by decomposing the time series into components. The periodicity of acute childhood lymphoid leukaemia in Hungary was examined using the seasonal decomposition time series method. The cyclic trend of the dates of diagnosis revealed that a higher percentage of the peaks fell within the winter months than in the other seasons. This supports a seasonal occurrence of childhood leukaemia in Hungary.
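The standard normal approximation for a first-order autoregressive parameter can be sketched as φ̂ ± z·sqrt((1 − φ̂²)/n). This is a generic illustration only, not the White-theory or continuous-time estimators the abstract compares; the simulated series and its coefficient are assumptions.

```python
import math
import random

def ar1_ci(x, z=1.96):
    """Estimate the AR(1) coefficient by lag-1 autocorrelation and return a
    normal-approximation confidence interval: phi_hat +/- z*sqrt((1-phi^2)/n)."""
    n = len(x)
    mean = sum(x) / n
    num = sum((x[t] - mean) * (x[t - 1] - mean) for t in range(1, n))
    den = sum((v - mean) ** 2 for v in x)
    phi = num / den
    half = z * math.sqrt((1.0 - phi * phi) / n)
    return phi - half, phi, phi + half

# simulate an AR(1) series with known coefficient (illustrative)
rng = random.Random(42)
phi_true = 0.6
x, prev = [], 0.0
for _ in range(5000):
    prev = phi_true * prev + rng.gauss(0.0, 1.0)
    x.append(prev)

lo, phi_hat, hi = ar1_ci(x)
```

The interval shrinks as sqrt((1 − φ̂²)/n), so for a strongly autocorrelated series it is narrower than for a weakly autocorrelated one of the same length.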

  15. Transient statistics in stabilizing periodic orbits

    NASA Astrophysics Data System (ADS)

    Meucci, R.; Gadomski, W.; Ciofini, M.; Arecchi, F. T.

    1995-11-01

    The statistics of chaotic and periodic transient time intervals preceding the stabilization of a given periodic orbit have been experimentally studied in a CO2 laser with modulated losses, subjected to a small subharmonic perturbation. As predicted by the theory, an exponential tail has been found in the probability distribution of chaotic transients. Furthermore, a fine periodic structure in the distributions of the periodic transients, resulting from the interaction of the control signal and the local structure of the chaotic attractor, has been revealed.

  16. A flexible model for multivariate interval-censored survival times with complex correlation structure.

    PubMed

    Falcaro, Milena; Pickles, Andrew

    2007-02-10

    We focus on the analysis of multivariate survival times with highly structured interdependency and subject to interval censoring. Such data are common in developmental genetics and genetic epidemiology. We propose a flexible mixed probit model that deals naturally with complex but uninformative censoring. The recorded ages of onset are treated as possibly censored ordinal outcomes with the interval censoring mechanism seen as arising from a coarsened measurement of a continuous variable observed as falling between subject-specific thresholds. This bypasses the requirement for the failure times to be observed as falling into non-overlapping intervals. The assumption of a normal age-of-onset distribution of the standard probit model is relaxed by embedding within it a multivariate Box-Cox transformation whose parameters are jointly estimated with the other parameters of the model. Complex decompositions of the underlying multivariate normal covariance matrix of the transformed ages of onset become possible. The new methodology is here applied to a multivariate study of the ages of first use of tobacco and first consumption of alcohol without parental permission in twins. The proposed model allows estimation of the genetic and environmental effects that are shared by both of these risk behaviours as well as those that are specific. 2006 John Wiley & Sons, Ltd.

  17. Phase sensitive distributed vibration sensing based on ultraweak fiber Bragg grating array using double-pulse

    NASA Astrophysics Data System (ADS)

    Liu, Tao; Wang, Feng; Zhang, Xuping; Zhang, Lin; Yuan, Quan; Liu, Yu; Yan, Zhijun

    2017-08-01

    A distributed vibration sensing technique using a double optical pulse, based on phase-sensitive optical time-domain reflectometry (ϕ-OTDR) and an ultraweak fiber Bragg grating (UWFBG) array, is proposed for the first time. The single-mode sensing fiber is integrated with the UWFBG array, which has a uniform spatial interval and ultraweak reflectivity. The relatively high reflectivity of the UWFBGs, compared with Rayleigh scattering, gains a high signal-to-noise ratio for the signal, which allows the system to achieve the maximum detectable frequency limited by the round-trip time of the probe pulse in the fiber. A corresponding experimental ϕ-OTDR system with a 4.5 km sensing fiber integrated with the UWFBG array was set up to evaluate the system performance. Distributed vibration sensing is successfully realized with a spatial resolution of 50 m. The sensing range of the vibration frequency covers from 3 Hz to 9 kHz.

  18. Method and device for landing aircraft dependent on runway occupancy time

    NASA Technical Reports Server (NTRS)

    Ghalebsaz Jeddi, Babak (Inventor)

    2012-01-01

    A technique for landing aircraft using an aircraft landing accident avoidance device is disclosed. The technique includes determining at least two probability distribution functions; determining a safe lower limit on a separation between a lead aircraft and a trail aircraft on a glide slope to the runway; determining a maximum sustainable safe attempt-to-land rate on the runway based on the safe lower limit and the probability distribution functions; directing the trail aircraft to enter the glide slope with a target separation from the lead aircraft corresponding to the maximum sustainable safe attempt-to-land rate; while the trail aircraft is in the glide slope, determining an actual separation between the lead aircraft and the trail aircraft; and directing the trail aircraft to execute a go-around maneuver if the actual separation approaches the safe lower limit. Probability distribution functions include runway occupancy time, and landing time interval and/or inter-arrival distance.

  19. On the statistical properties of viral misinformation in online social media

    NASA Astrophysics Data System (ADS)

    Bessi, Alessandro

    2017-03-01

    The massive diffusion of online social media allows for the rapid and uncontrolled spreading of conspiracy theories, hoaxes, unsubstantiated claims, and false news. Such an impressive amount of misinformation can influence policy preferences and encourage behaviors strongly divergent from recommended practices. In this paper, we study the statistical properties of viral misinformation in online social media. By means of methods belonging to Extreme Value Theory, we show that the number of extremely viral posts over time follows a homogeneous Poisson process, and that the interarrival times between such posts are independent and identically distributed, following an exponential distribution. Moreover, we characterize the uncertainty around the rate parameter of the Poisson process through Bayesian methods. Finally, we are able to derive the predictive posterior probability distribution of the number of posts exceeding a certain threshold of shares over a finite interval of time.
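The Bayesian characterization of the Poisson rate described above can be sketched with a conjugate update: for exponential interarrival times, a Gamma(a0, b0) prior on the rate yields a Gamma(a0 + n, b0 + Σt) posterior. The prior, rate, and sample size below are illustrative assumptions, not the paper's exact specification.

```python
import random

def gamma_posterior(interarrivals, a0=1.0, b0=1.0):
    """Conjugate update for the rate of a Poisson process observed through
    exponential interarrival times:
    Gamma(a0, b0) prior -> Gamma(a0 + n, b0 + sum(t)) posterior."""
    n = len(interarrivals)
    return a0 + n, b0 + sum(interarrivals)

rng = random.Random(7)
true_rate = 0.5                       # viral posts per hour (illustrative)
times = [rng.expovariate(true_rate) for _ in range(2000)]

a, b = gamma_posterior(times)
posterior_mean = a / b                # E[lambda | data]
```

With many observations the posterior mean concentrates near the empirical rate n/Σt, and the full gamma posterior supplies the uncertainty around it.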

  20. Analysis of the Factors Affecting the Interval between Blood Donations Using Log-Normal Hazard Model with Gamma Correlated Frailties.

    PubMed

    Tavakol, Najmeh; Kheiri, Soleiman; Sedehi, Morteza

    2016-01-01

    The time to donating blood plays a major role in a first-time donor becoming a regular one. The aim of this study was to determine the factors affecting the interval between blood donations. In a longitudinal study in 2008, 864 first-time donors at the Shahrekord Blood Transfusion Center, in the capital of Chaharmahal and Bakhtiari Province, Iran, were selected by systematic sampling and followed up for five years. Among them, a subset of 424 donors who had at least two successful blood donations was chosen for this study, and the time intervals between their donations were measured as the response variable. Sex, body weight, age, marital status, education, residence and job were recorded as independent variables. Data analysis was performed with a log-normal hazard model with gamma correlated frailty. In this model, the frailty is the sum of two independent components, each assumed to follow a gamma distribution. The analysis was done via a Bayesian approach using a Markov chain Monte Carlo algorithm in OpenBUGS. Convergence was checked via the Gelman-Rubin criterion using the BOA program in R. Age, job and education had significant effects on the chance of donating blood (P<0.05). The chances of blood donation were higher for older, clerical, worker, self-employed, student and more educated donors, and correspondingly the time intervals between their blood donations were shorter. Given the significant effects of these variables in the log-normal correlated frailty model, educational and cultural programs should be planned to encourage people with longer inter-donation intervals to donate more frequently.

  1. Benchmark of multi-phase method for the computation of fast ion distributions in a tokamak plasma in the presence of low-amplitude resonant MHD activity

    NASA Astrophysics Data System (ADS)

    Bierwage, A.; Todo, Y.

    2017-11-01

    The transport of fast ions in a beam-driven JT-60U tokamak plasma subject to resonant magnetohydrodynamic (MHD) mode activity is simulated using the so-called multi-phase method, where 4 ms intervals of classical Monte-Carlo simulations (without MHD) are interlaced with 1 ms intervals of hybrid simulations (with MHD). The multi-phase simulation results are compared to results obtained with continuous hybrid simulations, which were recently validated against experimental data (Bierwage et al., 2017). It is shown that the multi-phase method, in spite of causing significant overshoots in the MHD fluctuation amplitudes, accurately reproduces the frequencies and positions of the dominant resonant modes, as well as the spatial profile and velocity distribution of the fast ions, while consuming only a fraction of the computation time required by the continuous hybrid simulation. The present paper is limited to low-amplitude fluctuations consisting of a few long-wavelength modes that interact only weakly with each other. The success of this benchmark study paves the way for applying the multi-phase method to the simulation of Abrupt Large-amplitude Events (ALE), which were seen in the same JT-60U experiments but at larger time intervals. Possible implications for the construction of reduced models for fast ion transport are discussed.

  2. TURBULENCE-GENERATED PROTON-SCALE STRUCTURES IN THE TERRESTRIAL MAGNETOSHEATH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vörös, Zoltán; Narita, Yasuhito; Yordanova, Emiliya

    2016-03-01

    Recent results of numerical magnetohydrodynamic simulations suggest that in collisionless space plasmas, turbulence can spontaneously generate thin current sheets. These coherent structures can partially explain the intermittency and the non-homogeneous distribution of localized plasma heating in turbulence. In this Letter, Cluster multi-point observations are used to investigate the distribution of magnetic field discontinuities and the associated small-scale current sheets in the terrestrial magnetosheath downstream of a quasi-parallel bow shock. It is shown experimentally, for the first time, that the strongest turbulence-generated current sheets occupy the long tails of probability distribution functions associated with extremal values of magnetic field partial derivatives. During the analyzed one-hour time interval, about a hundred strong discontinuities, possibly proton-scale current sheets, were observed.

  3. Migration and Extension of Solar Active Longitudinal Zones

    NASA Astrophysics Data System (ADS)

    Gyenge, N.; Baranyi, T.; Ludmány, A.

    2014-02-01

    Solar active longitudes show a characteristic migration pattern in the Carrington coordinate system if they can be identified at all. By following this migration, the longitudinal activity distribution around the center of the band can be determined. The half-width of the distribution is found to be varying in Cycles 21 - 23, and in some time intervals it was as narrow as 20 - 30 degrees. It was more extended around a maximum but it was also narrow when the activity jumped to the opposite longitude. Flux emergence exhibited a quasi-periodic variation within the active zone with a period of about 1.3 years. The path of the active-longitude migration does not support the view that it might be associated with the 11-year solar cycle. These results were obtained for a limited time interval of a few solar cycles and, bearing in mind uncertainties of the migration-path definition, are only indicative. For the major fraction of the dataset no systematic active longitudes were found. Sporadic migration of active longitudes was identified only for Cycles 21 - 22 in the northern hemisphere and Cycle 23 in the southern hemisphere.

  4. Filling in the juvenile magmatic gap: Evidence for uninterrupted Paleoproterozoic plate tectonics

    NASA Astrophysics Data System (ADS)

    Partin, C. A.; Bekker, A.; Sylvester, P. J.; Wodicka, N.; Stern, R. A.; Chacko, T.; Heaman, L. M.

    2014-02-01

    Despite several decades of research on growth of the continental crust, it remains unclear whether the production of juvenile continental crust has been continuous or episodic throughout the Precambrian. Models for episodic crustal growth have gained traction recently through compilations of global U-Pb zircon age frequency distributions interpreted to delineate peaks and lulls in crustal growth through geologic time. One such apparent trough in zircon age frequency distributions between ∼2.45 and 2.22 Ga is thought to represent a pause in crustal addition, resulting from a global shutdown of magmatic and tectonic processes. The ∼2.45-2.22 Ga magmatic shutdown model envisions a causal relationship between the cessation of plate tectonics and accumulation of atmospheric oxygen over the same period. Here, we present new coupled U-Pb, Hf, and O isotope data for detrital and magmatic zircon from the western Churchill Province and Trans-Hudson orogen of Canada, covering an area of approximately 1.3 million km2, that demonstrate significant juvenile crustal production during the ∼2.45-2.22 Ga time interval, and thereby argue against the magmatic shutdown hypothesis. Our data is corroborated by literature data showing an extensive 2.22-2.45 Ga record in both detrital and magmatic rocks on every continent, and suggests that the operation of plate tectonics continued throughout the early Paleoproterozoic, while atmospheric oxygen rose over the same time interval. We argue that uninterrupted plate tectonics between ∼2.45 and 2.22 Ga would have contributed to efficient burial of organic matter and sedimentary pyrite, and the consequent rise in atmospheric oxygen documented for this time interval.

  5. Timing of repetition suppression of event-related potentials to unattended objects.

    PubMed

    Stefanics, Gabor; Heinzle, Jakob; Czigler, István; Valentini, Elia; Stephan, Klaas Enno

    2018-05-26

    Current theories of object perception emphasize the automatic nature of perceptual inference. Repetition suppression (RS), the successive decrease of brain responses to repeated stimuli, is thought to reflect the optimization of perceptual inference through neural plasticity. While functional imaging studies revealed brain regions that show suppressed responses to the repeated presentation of an object, little is known about the intra-trial time course of repetition effects to everyday objects. Here we used event-related potentials (ERP) to task-irrelevant line-drawn objects, while participants engaged in a distractor task. We quantified changes in ERPs over repetitions using three general linear models (GLM) that modelled RS by an exponential, linear, or categorical "change detection" function in each subject. Our aim was to select the model with highest evidence and determine the within-trial time-course and scalp distribution of repetition effects using that model. Model comparison revealed the superiority of the exponential model, indicating that repetition effects are observable for trials beyond the first repetition. Model parameter estimates revealed a sequence of RS effects in three time windows (86-140 ms, 322-360 ms, and 400-446 ms) with occipital, temporo-parietal, and fronto-temporal distributions, respectively. An interval of repetition enhancement (RE) was also observed (320-340 ms) over occipito-temporal sensors. Our results show that automatic processing of task-irrelevant objects involves multiple intervals of RS with distinct scalp topographies. These sequential intervals of RS and RE might reflect the short-term plasticity required for optimization of perceptual inference and the associated changes in prediction errors (PE) and predictions, respectively, over stimulus repetitions during automatic object processing.
© 2018 The Authors European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  6. How to Deal with Interval-Censored Data Practically while Assessing the Progression-Free Survival: A Step-by-Step Guide Using SAS and R Software.

    PubMed

    Dugué, Audrey Emmanuelle; Pulido, Marina; Chabaud, Sylvie; Belin, Lisa; Gal, Jocelyn

    2016-12-01

    We describe how to estimate progression-free survival while dealing with interval-censored data in the setting of clinical trials in oncology. Three procedures with SAS and R statistical software are described: one allowing for a nonparametric maximum likelihood estimation of the survival curve using the EM-ICM (Expectation and Maximization-Iterative Convex Minorant) algorithm as described by Wellner and Zhan in 1997; a sensitivity analysis procedure in which the progression time is assigned (i) at the midpoint, (ii) at the upper limit (reflecting the standard analysis when the progression time is assigned at the first radiologic exam showing progressive disease), or (iii) at the lower limit of the censoring interval; and finally, two multiple imputation approaches considering a uniform or the nonparametric maximum likelihood estimation (NPMLE) distribution. Clin Cancer Res; 22(23); 5629-35. ©2016 American Association for Cancer Research.
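The three sensitivity-analysis assignments described above (midpoint, lower bound, upper bound of the censoring interval) can be sketched directly. Each interval is (last progression-free exam, first exam showing progression); the intervals below are toy values.

```python
def assign_progression_times(intervals, rule="midpoint"):
    """Sensitivity-analysis imputation for interval-censored progression times:
    place each event at the midpoint, lower, or upper bound of its censoring
    interval (last progression-free exam, first exam showing progression)."""
    if rule == "midpoint":
        return [(lo + hi) / 2.0 for lo, hi in intervals]
    if rule == "lower":
        return [lo for lo, hi in intervals]
    if rule == "upper":
        return [hi for lo, hi in intervals]
    raise ValueError("rule must be 'midpoint', 'lower' or 'upper'")

# toy censoring intervals in months (illustrative)
intervals = [(2.0, 4.0), (6.0, 9.0), (1.0, 1.5)]
mid = assign_progression_times(intervals, "midpoint")
upper = assign_progression_times(intervals, "upper")
```

Comparing survival estimates across the three rules shows how sensitive the analysis is to where, within the censoring interval, progression is assumed to occur.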

  7. Pigeons' Choices between Fixed-Interval and Random-Interval Schedules: Utility of Variability?

    ERIC Educational Resources Information Center

    Andrzejewski, Matthew E.; Cardinal, Claudia D.; Field, Douglas P.; Flannery, Barbara A.; Johnson, Michael; Bailey, Kathleen; Hineline, Philip N.

    2005-01-01

    Pigeons' choosing between fixed-interval and random-interval schedules of reinforcement was investigated in three experiments using a discrete-trial procedure. In all three experiments, the random-interval schedule was generated by sampling a probability distribution at an interval (and in multiples of the interval) equal to that of the…

  8. Statistical study of spatio-temporal distribution of precursor solar flares associated with major flares

    NASA Astrophysics Data System (ADS)

    Gyenge, N.; Ballai, I.; Baranyi, T.

    2016-07-01

    The aim of the present investigation is to study the spatio-temporal distribution of precursor flares during the 24 h interval preceding M- and X-class major flares, and the evolution of follower flares. Information on associated (precursor and follower) flares is provided by the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) flare list, while the major flares are observed by the Geostationary Operational Environmental Satellite (GOES) system satellites between 2002 and 2014. There are distinct evolutionary differences between the spatio-temporal distributions of associated flares in the roughly one-day period, depending on the type of the main flare. The spatial distribution was characterized by the normalized frequency distribution of the quantity δ (the distance between the major flare and its precursor flare normalized by the sunspot group diameter) in four 6 h time intervals before the major event. The precursors of X-class flares have a double-peaked spatial distribution for more than half a day prior to the major flare, but it changes to a lognormal-like distribution roughly 6 h prior to the event. The precursors of M-class flares show a lognormal-like distribution in each 6 h subinterval. The most frequent sites of the precursors in the active region are within a distance of about 0.1 sunspot-group diameters from the site of the major flare in each case. Our investigation shows that, because of precursors, the build-up of energy is more effective than the release of energy.

  9. Optimal investment horizons

    NASA Astrophysics Data System (ADS)

    Simonsen, I.; Jensen, M. H.; Johansen, A.

    2002-06-01

    In stochastic finance, one traditionally considers the return as a competitive measure of an asset, i.e., the profit generated by that asset after some fixed time span Δt, say one week or one year. This measures how well (or how badly) the asset performs over that given period of time. It has been established that the distribution of returns exhibits ``fat tails'' indicating that large returns occur more frequently than expected from standard Gaussian stochastic processes [1-3]. Instead of estimating this ``fat tail'' distribution of returns, we propose here an alternative approach, which is outlined by addressing the following question: What is the smallest time interval needed for an asset to cross a fixed return level of say 10%? For a particular asset, we refer to this time as the investment horizon and the corresponding distribution as the investment horizon distribution. This latter distribution complements that of returns and provides new and possibly crucial information for portfolio design and risk management, as well as for the pricing of more exotic options. By considering historical financial data, exemplified by the Dow Jones Industrial Average, we obtain a novel set of probability distributions for the investment horizons which can be used to estimate the optimal investment horizon for a stock or a futures contract.
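
    The investment-horizon idea can be sketched as a first-passage-time computation. The block below is a minimal illustration on a simulated log-price series (a geometric-Brownian-motion stand-in for the historical data; the drift, volatility, and 10% level are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated daily log-prices; drift and volatility values are illustrative only.
log_p = np.cumsum(rng.normal(0.0005, 0.01, 50_000))

def investment_horizons(log_prices, level=0.10):
    """Smallest waiting times for the log-return to first cross `level`,
    measured from successive non-overlapping starting points."""
    horizons, i, n = [], 0, len(log_prices)
    while i < n - 1:
        future = log_prices[i + 1:] - log_prices[i]
        hits = np.nonzero(future >= level)[0]
        if hits.size == 0:          # level never reached again
            break
        horizons.append(hits[0] + 1)
        i += hits[0] + 1            # restart the clock at the crossing
    return np.array(horizons)

tau = investment_horizons(log_p)
print(tau.size, int(np.median(tau)))
```

    A histogram of `tau` is the empirical investment-horizon distribution; its mode would be the "optimal investment horizon" discussed above.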

  10. Modeling and simulation of count data.

    PubMed

    Plan, E L

    2014-08-13

    Count data, or number of events per time interval, are discrete data arising from repeated time to event observations. Their mean count, or piecewise constant event rate, can be evaluated by discrete probability distributions from the Poisson model family. Clinical trial data characterization often involves population count analysis. This tutorial presents the basics and diagnostics of count modeling and simulation in the context of pharmacometrics. Consideration is given to overdispersion, underdispersion, autocorrelation, and inhomogeneity.
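
    The dispersion diagnostics mentioned above can be illustrated with simulated counts. A minimal sketch, assuming illustrative rates: Poisson counts are equidispersed (variance ≈ mean), while a gamma-mixed Poisson (equivalently, a negative binomial from the Poisson model family) is overdispersed:

```python
import numpy as np

rng = np.random.default_rng(1)

# Poisson counts: variance equals the mean (equidispersion).
lam = 4.0
poisson_counts = rng.poisson(lam, 100_000)

# Overdispersed counts via a gamma-mixed Poisson (i.e. negative binomial):
# each subject gets its own event rate drawn from a gamma distribution.
shape = 2.0                                   # smaller shape -> more overdispersion
rates = rng.gamma(shape, lam / shape, 100_000)
nb_counts = rng.poisson(rates)

print(poisson_counts.var() / poisson_counts.mean())  # ≈ 1
print(nb_counts.var() / nb_counts.mean())            # ≈ 3: overdispersed
```

    The variance-to-mean ratio is a standard first diagnostic when choosing between these models for clinical count data.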

  11. Recurrence interval analysis of trading volumes

    NASA Astrophysics Data System (ADS)

    Ren, Fei; Zhou, Wei-Xing

    2010-06-01

    We study the statistical properties of the recurrence intervals τ between successive trading volumes exceeding a certain threshold q . The recurrence interval analysis is carried out for the 20 liquid Chinese stocks covering a period from January 2000 to May 2009, and two Chinese indices from January 2003 to April 2009. Similar to the recurrence interval distribution of the price returns, the tail of the recurrence interval distribution of the trading volumes follows a power-law scaling, and the results are verified by the goodness-of-fit tests using the Kolmogorov-Smirnov (KS) statistic, the weighted KS statistic and the Cramér-von Mises criterion. The measurements of the conditional probability distribution and the detrended fluctuation function show that both short-term and long-term memory effects exist in the recurrence intervals between trading volumes. We further study the relationship between trading volumes and price returns based on the recurrence interval analysis method. It is found that large trading volumes are more likely to occur following large price returns, and the comovement between trading volumes and price returns is more pronounced for large trading volumes.
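
    The recurrence-interval construction can be sketched in a few lines. Everything below is an illustrative assumption rather than the paper's data: a lognormal stand-in for trading volumes and a 98th-percentile threshold q. For i.i.d. data the mean interval is simply 1/P(x > q) and the distribution is memoryless, which is exactly the baseline against which the power-law tails and memory effects reported above stand out:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in "trading volume" series with heavy-tailed fluctuations.
volumes = rng.lognormal(mean=0.0, sigma=1.0, size=200_000)

def recurrence_intervals(x, q):
    """Waiting times (in samples) between successive values exceeding threshold q."""
    idx = np.nonzero(x > q)[0]
    return np.diff(idx)

q = np.quantile(volumes, 0.98)   # threshold at the 98th percentile
tau = recurrence_intervals(volumes, q)
print(tau.mean())                # for i.i.d. data, ≈ 1 / P(x > q) = 50
```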

  12. Recurrence interval analysis of trading volumes.

    PubMed

    Ren, Fei; Zhou, Wei-Xing

    2010-06-01

    We study the statistical properties of the recurrence intervals τ between successive trading volumes exceeding a certain threshold q. The recurrence interval analysis is carried out for the 20 liquid Chinese stocks covering a period from January 2000 to May 2009, and two Chinese indices from January 2003 to April 2009. Similar to the recurrence interval distribution of the price returns, the tail of the recurrence interval distribution of the trading volumes follows a power-law scaling, and the results are verified by the goodness-of-fit tests using the Kolmogorov-Smirnov (KS) statistic, the weighted KS statistic and the Cramér-von Mises criterion. The measurements of the conditional probability distribution and the detrended fluctuation function show that both short-term and long-term memory effects exist in the recurrence intervals between trading volumes. We further study the relationship between trading volumes and price returns based on the recurrence interval analysis method. It is found that large trading volumes are more likely to occur following large price returns, and the comovement between trading volumes and price returns is more pronounced for large trading volumes.

  13. A Study of Transport in the Near-Earth Plasma Sheet During A Substorm Using Time-Dependent Large Scale Kinetics

    NASA Technical Reports Server (NTRS)

    El-Alaoui, M.; Ashour-Abdalla, M.; Raeder, J.; Frank, L. A.; Paterson, W. R.

    1998-01-01

    In this study we investigate the transport of H+ ions that made up the complex ion distribution function observed by the Geotail spacecraft at 0740 UT on November 24, 1996. This ion distribution function, observed by Geotail at approximately 20 R(sub E) downtail, was used to initialize a time-dependent large-scale kinetic (LSK) calculation of the trajectories of 75,000 ions forward in time. Time-dependent magnetic and electric fields were obtained from a global magnetohydrodynamic (MHD) simulation of the magnetosphere and its interaction with the solar wind and the interplanetary magnetic field (IMF) as observed during the interval of the distribution function measurement. Our calculations indicate that the particles observed by Geotail were scattered across the equatorial plane by multiple interactions with the current sheet and then convected sunward. They were energized by the dawn-dusk electric field during their transport from the Geotail location and ultimately were lost at the ionospheric boundary or through the magnetopause.

  14. Pacific Northwest GridWise™ Testbed Demonstration Projects; Part I. Olympic Peninsula Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hammerstrom, Donald J.; Ambrosio, Ron; Carlon, Teresa A.

    2008-01-09

    This report describes the implementation and results of a field demonstration wherein residential electric water heaters and thermostats, commercial building space conditioning, municipal water pump loads, and several distributed generators were coordinated to manage constrained feeder electrical distribution through the two-way communication of load status and electric price signals. The field demonstration took place in Washington and Oregon and was paid for by the U.S. Department of Energy and several northwest utilities. Price is found to be an effective control signal for managing transmission or distribution congestion. Real-time signals at 5-minute intervals are shown to shift controlled load in time. The behaviors of customers and their responses under fixed, time-of-use, and real-time price contracts are compared. Peak loads are effectively reduced on the experimental feeder. A novel application of portfolio theory is applied to the selection of an optimal mix of customer contract types.

  15. Contraction frequency after administration of misoprostol in obese versus nonobese women.

    PubMed

    Stefely, Erin; Warshak, Carri R

    2018-04-30

    To examine the impact of obesity on contraction frequency following misoprostol. Our hypothesis is that an increased volume of distribution reduces the bioavailability of misoprostol and may explain its reduced efficacy. We examined contraction frequency as a surrogate marker for the bioavailability of misoprostol. We compared the rate of contractions at five time intervals in 313 subjects, prior to administration and at four intervals post administration, comparing the number of contractions in obese versus nonobese women. As a planned secondary analysis, we then compared the rate of change in contractions per hour at the four post-administration intervals, using a repeated measures analysis over the 5-hour window controlling for race (White versus non-White) and parity (primiparous versus multiparous). General linear model and repeated measures analyses were conducted to report the parameter estimates, least square means, differences of least square means, and p values. Nonobese women presented with more contractions at baseline, 7 ± 5 versus 4 ± 5 c/h, p < .001. At all four time intervals after misoprostol administration obese women had fewer contractions per hour, and over the course of all four hours obese women showed a lower rate of increase in contraction frequency. We found least squares means estimates (c/h): first hour (-0.87), p = .08, second hour (-2.43), p = .01, third hour (-1.80), p = .96, and fourth hour (-2.98), p = .007. Obese women have a lower rate of contractions per hour at baseline and at four intervals after misoprostol administration. In addition, the rate of increase in contractions per hour was also reduced in obese versus nonobese women.
This suggests a lower bioavailability of misoprostol in women with a larger volume of distribution which would likely impact the efficacy of misoprostol in obese women when given the same dose of misoprostol. It is unknown if higher misoprostol dosing would increase efficacy of misoprostol in obese women.

  16. Diffusion with stochastic resetting at power-law times.

    PubMed

    Nagar, Apoorva; Gupta, Shamik

    2016-06-01

    What happens when a continuously evolving stochastic process is interrupted with large changes at random intervals τ distributed as a power law ∼τ^{-(1+α)};α>0? Modeling the stochastic process by diffusion and the large changes as abrupt resets to the initial condition, we obtain exact closed-form expressions for both static and dynamic quantities, while accounting for strong correlations implied by a power law. Our results show that the resulting dynamics exhibits a spectrum of rich long-time behavior, from an ever-spreading spatial distribution for α<1, to one that is time independent for α>1. The dynamics has strong consequences on the time to reach a distant target for the first time; we specifically show that there exists an optimal α that minimizes the mean time to reach the target, thereby offering a step towards a viable strategy to locate targets in a crowded environment.
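
    The α < 1 versus α > 1 dichotomy above can be checked with a small renewal simulation. A minimal sketch under stated assumptions: inter-reset intervals are sampled from a Pareto law with unit minimum (so τ has density ∼ τ^-(1+α)), and only the "age" (time since the last reset) is tracked, since for free diffusion the spatial variance at observation time is proportional to that age:

```python
import numpy as np

rng = np.random.default_rng(3)

def median_age(alpha, t_obs, n_walkers=1000):
    """Median time since the last reset at observation time t_obs, when
    inter-reset intervals are Pareto distributed with tail index alpha."""
    ages = np.empty(n_walkers)
    for w in range(n_walkers):
        t, last = 0.0, 0.0
        while t < t_obs:
            last = t
            t += rng.random() ** (-1.0 / alpha)   # inverse-CDF Pareto sample
        ages[w] = t_obs - last
    return np.median(ages)

# alpha < 1: the typical age grows with t_obs -> ever-spreading distribution.
# alpha > 1: the age distribution becomes stationary -> time-independent shape.
print(median_age(0.5, 50.0), median_age(0.5, 3000.0))   # grows roughly ∝ t_obs
print(median_age(1.5, 50.0), median_age(1.5, 3000.0))   # stays of order one
```

    The median (rather than the mean) age is compared because for 1 < α < 2 the stationary age distribution has a finite median but a divergent mean.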

  17. Body size distributions signal a regime shift in a lake ecosystem

    USGS Publications Warehouse

    Spanbauer, Trisha; Allen, Craig R.; Angeler, David G.; Eason, Tarsha; Fritz, Sherilyn C.; Garmestani, Ahjond S.; Nash, Kirsty L.; Stone, Jeffery R.; Stow, Craig A.; Sundstrom, Shana M.

    2016-01-01

    Communities of organisms, from mammals to microorganisms, have discontinuous distributions of body size. This pattern of size structuring is a conservative trait of community organization and is a product of processes that occur at multiple spatial and temporal scales. In this study, we assessed whether body size patterns serve as an indicator of a threshold between alternative regimes. Over the past 7000 years, the biological communities of Foy Lake (Montana, USA) have undergone a major regime shift owing to climate change. We used a palaeoecological record of diatom communities to estimate diatom sizes, and then analysed the discontinuous distribution of organism sizes over time. We used Bayesian classification and regression tree models to determine that all time intervals exhibited aggregations of sizes separated by gaps in the distribution and found a significant change in diatom body size distributions approximately 150 years before the identified ecosystem regime shift. We suggest that discontinuity analysis is a useful addition to the suite of tools for the detection of early warning signals of regime shifts.

  18. The cluster charge identification in the GEM detector for fusion plasma imaging by soft X-ray diagnostics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Czarski, T., E-mail: tomasz.czarski@ifpilm.pl; Chernyshova, M.; Malinowski, K.

    2016-11-15

    The measurement system based on gas electron multiplier detector is developed for soft X-ray diagnostics of tokamak plasmas. The multi-channel setup is designed for estimation of the energy and the position distribution of an X-ray source. The focal measuring issue is the charge cluster identification by its value and position estimation. The fast and accurate mode of the serial data acquisition is applied for the dynamic plasma diagnostics. The charge clusters are counted in the space determined by 2D position, charge value, and time intervals. Radiation source characteristics are presented by histograms for a selected range of position, time intervals,more » and cluster charge values corresponding to the energy spectra.« less

  19. Earthquake Clustering on Normal Faults: Insight from Rate-and-State Friction Models

    NASA Astrophysics Data System (ADS)

    Biemiller, J.; Lavier, L. L.; Wallace, L.

    2016-12-01

    Temporal variations in slip rate on normal faults have been recognized in Hawaii and the Basin and Range. The recurrence intervals of these slip transients range from 2 years on the flanks of Kilauea, Hawaii to 10 kyr timescale earthquake clustering on the Wasatch Fault in the eastern Basin and Range. In addition to these longer recurrence transients in the Basin and Range, recent GPS results there also suggest elevated deformation rate events with recurrence intervals of 2-4 years. These observations suggest that some active normal fault systems are dominated by slip behaviors that fall between the end-members of steady aseismic creep and periodic, purely elastic, seismic-cycle deformation. Recent studies propose that 200 year to 50 kyr timescale supercycles may control the magnitude, timing, and frequency of seismic-cycle earthquakes in subduction zones, where aseismic slip transients are known to play an important role in total deformation. Seismic cycle deformation of normal faults may be similarly influenced by its timing within long-period supercycles. We present numerical models (based on rate-and-state friction) of normal faults such as the Wasatch Fault showing that realistic rate-and-state parameter distributions along an extensional fault zone can give rise to earthquake clusters separated by 500 yr - 5 kyr periods of aseismic slip transients on some portions of the fault. The recurrence intervals of events within each earthquake cluster range from 200 to 400 years. Our results support the importance of stress and strain history as controls on a normal fault's present and future slip behavior and on the characteristics of its current seismic cycle. These models suggest that long- to medium-term fault slip history may influence the temporal distribution, recurrence interval, and earthquake magnitudes for a given normal fault segment.

  20. A New Insight into the Earthquake Recurrence Studies from the Three-parameter Generalized Exponential Distributions

    NASA Astrophysics Data System (ADS)

    Pasari, S.; Kundu, D.; Dikshit, O.

    2012-12-01

    Earthquake recurrence interval is one of the important ingredients in probabilistic seismic hazard assessment (PSHA) for any location. Exponential, gamma, Weibull and lognormal distributions are well-established probability models for this recurrence interval estimation, but each has certain shortcomings, so it is worthwhile to search for alternative, more flexible distributions. In this paper, we introduce a three-parameter (location, scale and shape) exponentiated exponential distribution and investigate its scope as an alternative to the afore-mentioned distributions in earthquake recurrence studies. This distribution is a particular member of the exponentiated Weibull family. Despite its somewhat complicated form, it is widely accepted in medical and biological applications, and it shares many physical properties with the gamma and Weibull families. Unlike the gamma distribution, the hazard function of the generalized exponential distribution can be computed easily even if the shape parameter is not an integer. To assess the plausibility of this model, a complete and homogeneous earthquake catalogue of 20 events (M ≥ 7.0) spanning the period 1846 to 1995 from the North-East Himalayan region (20-32 deg N and 87-100 deg E) has been used. The model parameters are estimated using the maximum likelihood estimator (MLE) and the method of moments estimator (MOME). No geological or geophysical evidence has been considered in this calculation. The estimated conditional probability becomes quite high after about a decade for an elapsed time of 17 years (i.e. 2012). Moreover, this study shows that the generalized exponential distribution fits the above data more closely than the conventional models, and hence it is tentatively concluded that the generalized exponential distribution can be effectively considered in earthquake recurrence studies.
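
    The closed-form hazard mentioned above is easy to verify. For the generalized (exponentiated) exponential distribution with shape α, rate λ and location μ, the CDF is F(x) = (1 - e^(-λ(x-μ)))^α, so the hazard h = f/(1-F) needs no incomplete gamma function, unlike the gamma distribution. A minimal sketch with illustrative parameter values:

```python
import numpy as np

def gen_exp_cdf(x, alpha, lam, mu=0.0):
    """Exponentiated exponential CDF: F(x) = (1 - exp(-lam*(x-mu)))**alpha."""
    z = np.maximum(x - mu, 0.0)
    return (1.0 - np.exp(-lam * z)) ** alpha

def gen_exp_pdf(x, alpha, lam, mu=0.0):
    z = np.maximum(x - mu, 0.0)
    return alpha * lam * np.exp(-lam * z) * (1.0 - np.exp(-lam * z)) ** (alpha - 1.0)

def gen_exp_hazard(x, alpha, lam, mu=0.0):
    """Closed-form hazard rate f(x) / (1 - F(x))."""
    return gen_exp_pdf(x, alpha, lam, mu) / (1.0 - gen_exp_cdf(x, alpha, lam, mu))

# alpha = 1 recovers the ordinary exponential with constant hazard lam;
# alpha > 1 gives an increasing hazard, relevant for elapsed-time conditioning.
print(gen_exp_hazard(np.array([1.0, 5.0, 20.0]), alpha=1.0, lam=0.1))
```

    The conditional probability used in the study follows directly: P(T ≤ t+s | T > t) = (F(t+s) - F(t)) / (1 - F(t)).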

  1. Time Analyzer for Time Synchronization and Monitor of the Deep Space Network

    NASA Technical Reports Server (NTRS)

    Cole, Steven; Gonzalez, Jorge, Jr.; Calhoun, Malcolm; Tjoelker, Robert

    2003-01-01

    A software package has been developed to measure, monitor, and archive the performance of timing signals distributed in the NASA Deep Space Network. Timing signals are generated from a central master clock and distributed to over 100 users at distances up to 30 kilometers. The time offset due to internal distribution delays and time jitter with respect to the central master clock are critical for successful spacecraft navigation, radio science, and very long baseline interferometry (VLBI) applications. The instrument controller and operator interface software is written in LabView and runs on the Linux operating system. The software controls a commercial multiplexer to switch 120 separate timing signals to measure offset and jitter with a time-interval counter referenced to the master clock. The offset of each channel is displayed in histogram form, and "out of specification" alarms are sent to a central complex monitor and control system. At any time, the measurement cycle of 120 signals can be interrupted for diagnostic tests on an individual channel. The instrument also routinely monitors and archives the long-term stability of all frequency standards or any other 1-pps source compared against the master clock. All data is stored and made available for

  2. RECONCILIATION OF WAITING TIME STATISTICS OF SOLAR FLARES OBSERVED IN HARD X-RAYS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aschwanden, Markus J.; McTiernan, James M., E-mail: aschwanden@lmsal.co, E-mail: jimm@ssl.berkeley.ed

    2010-07-10

    We study the waiting time distributions of solar flares observed in hard X-rays with ISEE-3/ICE, HXRBS/SMM, WATCH/GRANAT, BATSE/CGRO, and RHESSI. Although discordant results and interpretations have been published earlier, based on relatively small ranges (<2 decades) of waiting times, we find that all observed distributions, spanning over 6 decades of waiting times (Δt ≈ 10⁻³-10³ hr), can be reconciled with a single distribution function, N(Δt) ∝ λ₀(1 + λ₀Δt)⁻², which has a power-law slope of p ≈ 2.0 at large waiting times (Δt ≈ 1-1000 hr) and flattens out at short waiting times Δt ≈ …
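
    The single distribution function above, N(Δt) ∝ λ0(1 + λ0Δt)^-2, is a proper probability density (it integrates to one over Δt ∈ [0, ∞)) and its two asymptotic regimes are easy to verify numerically. A minimal sketch with an illustrative rate λ0 = 1 hr⁻¹:

```python
import numpy as np

def waiting_time_dist(dt, lam0=1.0):
    """N(dt) = lam0 * (1 + lam0*dt)**-2: flat at short dt, slope -2 at long dt."""
    return lam0 * (1.0 + lam0 * dt) ** -2

def local_slope(dt, lam0=1.0):
    """Local log-log slope d ln N / d ln dt, by finite differences."""
    eps = 1e-6
    n1 = waiting_time_dist(dt, lam0)
    n2 = waiting_time_dist(dt * (1 + eps), lam0)
    return (np.log(n2) - np.log(n1)) / np.log(1 + eps)

print(local_slope(1e-3))  # ≈ 0: the distribution flattens out
print(local_slope(1e3))   # ≈ -2: the power-law regime with p ≈ 2.0
```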

  3. Myocardium distribution of sertindole and its metabolite dehydrosertindole in guinea-pigs.

    PubMed

    Canal-Raffin, Mireille; Titier, Karine; Déridet, Evelyne; Martinez, Béatrice; Abouelfath, Abdelilah; Miras, Alain; Gromb, Sophie; Molimard, Mathieu; Moore, Nicholas

    2006-05-01

    Sertindole, like other atypical antipsychotics, has been shown in in vitro electrophysiological studies to increase the action potential duration and QT interval in a concentration-dependent manner. However, this does not always translate into increased duration of the QT interval, increased risk of torsade de pointes or sudden death in clinical practice. The reasons for these apparent discrepancies are unclear, and many studies have underscored the importance of interpreting in vitro electrophysiological data in the context of other pharmacodynamic (e.g. cardiac ion channel targets, receptor affinity) and pharmacokinetic parameters (total plasma drug concentration and drug distribution). To address the possible relevance of the concentrations used in experimental studies, the myocardium distribution of sertindole and its metabolite was determined after single and repeated intraperitoneal administration to guinea-pigs. The data suggest that the plasma concentration predicts the concentration in the myocardium and that myocardium concentrations of sertindole are 3.1 times higher than plasma concentrations. Using these data, the relevance of in vitro electrophysiological studies to clinical plasma concentrations has been appraised. Copyright 2006 John Wiley & Sons, Ltd.

  4. First Test of Stochastic Growth Theory for Langmuir Waves in Earth's Foreshock

    NASA Technical Reports Server (NTRS)

    Cairns, Iver H.; Robinson, P. A.

    1997-01-01

    This paper presents the first test of whether stochastic growth theory (SGT) can explain the detailed characteristics of Langmuir-like waves in Earth's foreshock. A period with unusually constant solar wind magnetic field is analyzed. The observed distributions P(logE) of wave fields E for two intervals with relatively constant spacecraft location (DIFF) are shown to agree well with the fundamental prediction of SGT, that P(logE) is Gaussian in log E. This stochastic growth can be accounted for semi-quantitatively in terms of standard foreshock beam parameters and a model developed for interplanetary type III bursts. Averaged over the entire period with large variations in DIFF, the P(logE) distribution is a power-law with index approximately -1; this is interpreted in terms of convolution of intrinsic, spatially varying P(logE) distributions with a probability function describing ISEE's residence time at a given DIFF. Wave data from this interval thus provide good observational evidence that SGT can sometimes explain the clumping, burstiness, persistence, and highly variable fields of the foreshock Langmuir-like waves.

  5. First test of stochastic growth theory for Langmuir waves in Earth's foreshock

    NASA Astrophysics Data System (ADS)

    Cairns, Iver H.; Robinson, P. A.

    This paper presents the first test of whether stochastic growth theory (SGT) can explain the detailed characteristics of Langmuir-like waves in Earth's foreshock. A period with unusually constant solar wind magnetic field is analyzed. The observed distributions P(log E) of wave fields E for two intervals with relatively constant spacecraft location (DIFF) are shown to agree well with the fundamental prediction of SGT, that P(log E) is Gaussian in log E. This stochastic growth can be accounted for semi-quantitatively in terms of standard foreshock beam parameters and a model developed for interplanetary type III bursts. Averaged over the entire period with large variations in DIFF, the P(log E) distribution is a power-law with index ∼ -1; this is interpreted in terms of convolution of intrinsic, spatially varying P(log E) distributions with a probability function describing ISEE's residence time at a given DIFF. Wave data from this interval thus provide good observational evidence that SGT can sometimes explain the clumping, burstiness, persistence, and highly variable fields of the foreshock Langmuir-like waves.

  6. Annoyance to Noise Produced by a Distributed Electric Propulsion High-Lift System

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Palumbo, Daniel L.; Rathsam, Jonathan; Christian, Andrew; Rafaelof, Menachem

    2017-01-01

    A psychoacoustic test was performed using simulated sounds from a distributed electric propulsion aircraft concept to help understand factors associated with human annoyance. A design space spanning the number of high-lift leading edge propellers and their relative operating speeds, inclusive of time varying effects associated with motor controller error and atmospheric turbulence, was considered. It was found that the mean annoyance response varies in a statistically significant manner with the number of propellers and with the inclusion of time varying effects, but does not differ significantly with the relative RPM between propellers. An annoyance model was developed, inclusive of confidence intervals, using the noise metrics of loudness, roughness, and tonality as predictors.

  7. Kumaraswamy autoregressive moving average models for double bounded environmental data

    NASA Astrophysics Data System (ADS)

    Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme

    2017-12-01

    In this paper we introduce the Kumaraswamy autoregressive moving average models (KARMA), a dynamic class of models for time series taking values in the double-bounded interval (a,b) and following the Kumaraswamy distribution. The Kumaraswamy family of distributions is widely applied in many areas, especially hydrology and related fields; classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing inference, diagnostic analysis and forecasting. In particular, we provide closed-form expressions for the conditional score vector and conditional Fisher information matrix. An application to real environmental data is presented and discussed.
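
    One reason the KARMA model works with the median rather than the mean is that the Kumaraswamy distribution has a closed-form quantile function. On (0, 1) its CDF is F(x) = 1 - (1 - x^a)^b, so both inverse-CDF sampling and the median are available in closed form (data on a general interval (a,b) can be rescaled to (0,1) first). A minimal sketch with illustrative shape parameters:

```python
import numpy as np

rng = np.random.default_rng(7)

def kumaraswamy_sample(a, b, size, rng):
    """Inverse-CDF sampling from F(x) = 1 - (1 - x**a)**b on (0, 1)."""
    u = rng.random(size)
    return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

def kumaraswamy_median(a, b):
    """Closed-form median -- the quantity driven by KARMA's dynamic structure."""
    return (1.0 - 2.0 ** (-1.0 / b)) ** (1.0 / a)

x = kumaraswamy_sample(2.0, 3.0, 200_000, rng)
print(np.median(x), kumaraswamy_median(2.0, 3.0))  # the two should nearly agree
```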

  8. Volatility return intervals analysis of the Japanese market

    NASA Astrophysics Data System (ADS)

    Jung, W.-S.; Wang, F. Z.; Havlin, S.; Kaizoji, T.; Moon, H.-T.; Stanley, H. E.

    2008-03-01

    We investigate scaling and memory effects in return intervals between price volatilities above a certain threshold q for the Japanese stock market using daily and intraday data sets. We find that the distribution of return intervals can be approximated by a scaling function that depends only on the ratio between the return interval τ and its mean <τ>. We also find memory effects such that a large (or small) return interval follows a large (or small) interval by investigating the conditional distribution and mean return interval. The results are similar to previous studies of other markets and indicate that similar statistical features appear in different financial markets. We also compare our results between the period before and after the big crash at the end of 1989. We find that scaling and memory effects of the return intervals show similar features although the statistical properties of the returns are different.
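
    The scaling form above (a distribution depending only on τ/<τ>) can be illustrated by a collapse test on synthetic data. The i.i.d. stand-in volatility below is an assumption for illustration only; it reproduces the collapse across thresholds but, being memoryless, none of the conditional-distribution memory effects reported for the Japanese market:

```python
import numpy as np

rng = np.random.default_rng(8)

# Stand-in volatility series (i.i.d. here; real data adds memory effects).
vol = np.abs(rng.standard_normal(500_000))

def scaled_intervals(x, q):
    """Return intervals above the q-quantile threshold, rescaled by their mean."""
    idx = np.nonzero(x > np.quantile(x, q))[0]
    tau = np.diff(idx)
    return tau / tau.mean()

s95 = scaled_intervals(vol, 0.95)
s99 = scaled_intervals(vol, 0.99)
# After rescaling by <tau>, quantiles nearly coincide across thresholds.
print(np.quantile(s95, 0.9), np.quantile(s99, 0.9))
```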

  9. Intravenous bolus of 125I labeled meglumine diatrizoate. Early extravascular distribution.

    PubMed

    Dean, P B; Kormano, M

    1977-05-01

    A mixture of 125I labeled meglumine diatrizoate and 131I labeled human serum albumin was injected into the femoral vein of 26 anesthetized male rats. Measurements of the activities in cardiac blood and in different tissues of the lower extremity and in the testis were performed at time intervals ranging from 5 s to 5 min after injection. The determination of tissue uptake and distribution volumes of diatrizoate showed widely differing accumulation of contrast medium. Over 50 per cent of the intravenous bolus of diatrizoate was extravascular at 40 s.

  10. Design and implementation of the NaI(Tl)/CsI(Na) detectors output signal generator

    NASA Astrophysics Data System (ADS)

    Zhou, Xu; Liu, Cong-Zhan; Zhao, Jian-Ling; Zhang, Fei; Zhang, Yi-Fei; Li, Zheng-Wei; Zhang, Shuo; Li, Xu-Fang; Lu, Xue-Feng; Xu, Zhen-Ling; Lu, Fang-Jun

    2014-02-01

    We designed and implemented a signal generator that can simulate the output of the pre-amplifiers of the NaI(Tl)/CsI(Na) detectors onboard the Hard X-ray Modulation Telescope (HXMT). Using FPGA (Field Programmable Gate Array) development in the VHDL language and adding a random constituent, we produced a double-exponential random pulse signal generator. The statistical distribution of the signal amplitude is programmable, and the occurrence time intervals of adjacent signals statistically follow a negative exponential distribution.
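
    A software model of such a generator is straightforward: a double-exponential pulse shape (fast rise, slow decay) triggered at Poisson arrival times, so that inter-pulse intervals follow the negative exponential distribution described above. The time constants, event rate, and amplitudes below are illustrative assumptions, not the HXMT design values:

```python
import numpy as np

rng = np.random.default_rng(9)

def double_exp_pulse(t, amp, tau_rise=0.1e-6, tau_fall=1.0e-6):
    """Pre-amplifier-like pulse shape: difference of two exponentials."""
    t = np.maximum(np.asarray(t, dtype=float), 0.0)   # pulse is zero before onset
    return amp * (np.exp(-t / tau_fall) - np.exp(-t / tau_rise))

# Poisson arrivals: inter-pulse intervals are exponentially distributed.
rate = 1e4                                  # mean event rate in Hz (illustrative)
intervals = rng.exponential(1.0 / rate, 10_000)
arrival_times = np.cumsum(intervals)

# Programmable amplitude statistics: draw amplitudes from any chosen law.
amplitudes = rng.uniform(0.5, 1.5, arrival_times.size)

print(intervals.mean())                     # ≈ 1/rate = 1e-4 s
```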

  11. Ehrenfest model with large jumps in finance

    NASA Astrophysics Data System (ADS)

    Takahashi, Hisanao

    2004-02-01

    Changes (returns) in stock index prices and exchange rates for currencies are argued, based on empirical data, to obey a stable distribution with characteristic exponent α<2 for short sampling intervals and a Gaussian distribution for long sampling intervals. In order to explain this phenomenon, an Ehrenfest model with large jumps (ELJ) is introduced to explain the empirical density function of price changes for both short and long sampling intervals.

  12. Highly accurate pulse-per-second timing distribution over optical fibre network using VCSEL side-mode injection

    NASA Astrophysics Data System (ADS)

    Wassin, Shukree; Isoe, George M.; Gamatham, Romeo R. G.; Leitch, Andrew W. R.; Gibbon, Tim B.

    2017-01-01

    Precise and accurate timing signals distributed between a centralized location and several end-users are widely used in both metro-access and speciality networks for Coordinated Universal Time (UTC), GPS satellite systems, banking, very long baseline interferometry and science projects such as the SKA radio telescope. Such systems utilize time and frequency technology to ensure phase coherence among data signals distributed across an optical fibre network. For accurate timing requirements, precise time intervals should be measured between successive pulses. In this paper we describe a novel, all-optical method for quantifying one-way propagation times and phase perturbations in the fibre length, using pulse-per-second (PPS) signals. The approach utilizes side-mode injection of a 1550 nm 10 Gbps vertical cavity surface emitting laser (VCSEL) at the remote end. A 125 μs one-way time of flight was accurately measured for 25 km of G.655 fibre. Since the approach is all-optical, it avoids measurement inaccuracies introduced by electro-optical conversion phase delays. Furthermore, the implementation uses cost-effective VCSEL technology and suits a flexible range of network architectures, supporting a number of end-users conducting measurements at the remote end.
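
    As a sanity check on the scale of the reported figure: the one-way time of flight is t = L·n/c. The group index below is an assumed, typical value for single-mode fibre at 1550 nm, not a number given in the abstract; with it, 25 km comes out close to the 125 μs measured (the small difference is absorbed by the actual fibre index and any extra path in the link):

```python
# One-way time of flight for light in optical fibre: t = L * n_group / c.
C = 299_792_458.0    # speed of light in vacuum, m/s
N_GROUP = 1.468      # assumed group refractive index at 1550 nm (illustrative)
L = 25_000.0         # fibre length, m

t_flight = L * N_GROUP / C
print(t_flight * 1e6)  # ≈ 122 us, of the same order as the measured ~125 us
```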

  13. Load estimator (LOADEST): a FORTRAN program for estimating constituent loads in streams and rivers

    USGS Publications Warehouse

    Runkel, Robert L.; Crawford, Charles G.; Cohn, Timothy A.

    2004-01-01

    LOAD ESTimator (LOADEST) is a FORTRAN program for estimating constituent loads in streams and rivers. Given a time series of streamflow, additional data variables, and constituent concentration, LOADEST assists the user in developing a regression model for the estimation of constituent load (calibration). Explanatory variables within the regression model include various functions of streamflow, decimal time, and additional user-specified data variables. The formulated regression model then is used to estimate loads over a user-specified time interval (estimation). Mean load estimates, standard errors, and 95 percent confidence intervals are developed on a monthly and(or) seasonal basis. The calibration and estimation procedures within LOADEST are based on three statistical estimation methods. The first two methods, Adjusted Maximum Likelihood Estimation (AMLE) and Maximum Likelihood Estimation (MLE), are appropriate when the calibration model errors (residuals) are normally distributed. Of the two, AMLE is the method of choice when the calibration data set (time series of streamflow, additional data variables, and concentration) contains censored data. The third method, Least Absolute Deviation (LAD), is an alternative to maximum likelihood estimation when the residuals are not normally distributed. LOADEST output includes diagnostic tests and warnings to assist the user in determining the appropriate estimation method and in interpreting the estimated loads. This report describes the development and application of LOADEST. Sections of the report describe estimation theory, input/output specifications, sample applications, and installation instructions.
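
    The core of such a rating-curve approach can be sketched as a log-linear regression of load on functions of streamflow and decimal time. The block below is a highly simplified illustration on synthetic data using plain ordinary least squares; the AMLE/MLE machinery, censored-data handling, and bias corrections that LOADEST actually implements are beyond this sketch, and all coefficients are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic calibration data: ln(load) as a linear function of ln(streamflow)
# and centred decimal time, with illustrative "true" coefficients.
n = 500
ln_q = rng.normal(2.0, 0.5, n)                    # ln streamflow
dtime = rng.uniform(2000.0, 2004.0, n) - 2002.0   # centred decimal time
ln_load = 1.5 + 0.9 * ln_q - 0.05 * dtime + rng.normal(0.0, 0.1, n)

# Fit the regression model by ordinary least squares (normal equations).
X = np.column_stack([np.ones(n), ln_q, dtime])
beta, *_ = np.linalg.lstsq(X, ln_load, rcond=None)
print(beta)   # ≈ [1.5, 0.9, -0.05]: intercept, flow term, time trend
```

    The calibrated model would then be applied to the full streamflow record to estimate loads over the user-specified estimation interval.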

  14. Earthquake number forecasts testing

    NASA Astrophysics Data System (ADS)

    Kagan, Yan Y.

    2017-10-01

    We study the distributions of earthquake numbers in two global earthquake catalogues: Global Centroid-Moment Tensor and Preliminary Determinations of Epicenters. The properties of these distributions are especially required to develop the number test for our forecasts of future seismic activity rate, tested by the Collaboratory for Study of Earthquake Predictability (CSEP). A common assumption, as used in the CSEP tests, is that the numbers are described by the Poisson distribution. It is clear, however, that the Poisson assumption for the earthquake number distribution is incorrect, especially for the catalogues with a lower magnitude threshold. In contrast to the one-parameter Poisson distribution so widely used to describe earthquake occurrences, the negative-binomial distribution (NBD) has two parameters. The second parameter can be used to characterize the clustering or overdispersion of a process. We also introduce and study a more complex three-parameter beta negative-binomial distribution. We investigate the dependence of parameters for both Poisson and NBD distributions on the catalogue magnitude threshold and on temporal subdivision of catalogue duration. First, we study whether the Poisson law can be statistically rejected for various catalogue subdivisions. We find that for most cases of interest, the Poisson distribution can be shown to be rejected statistically at a high significance level in favour of the NBD. Thereafter, we investigate whether these distributions fit the observed distributions of seismicity. For this purpose, we study upper statistical moments of earthquake numbers (skewness and kurtosis) and compare them to the theoretical values for both distributions. Empirical values for the skewness and the kurtosis increase for the smaller magnitude threshold and increase with even greater intensity for small temporal subdivision of catalogues. 
The Poisson distribution approaches the Gaussian law for large rate values, so its skewness and kurtosis both tend to zero for large earthquake rates: for the Gaussian law, these values are identically zero. A calculation of the NBD skewness and kurtosis, based on the values of the first two statistical moments of the distribution, shows a rapid increase of these upper moments. However, the observed catalogue values of skewness and kurtosis rise even faster. This means that for small time intervals, the earthquake number distribution is even more heavy-tailed than the NBD predicts. Therefore, for small time intervals, we propose using empirical number distributions, appropriately smoothed, for testing forecasted earthquake numbers.
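The moment comparison above is easy to check numerically: matched to the same mean count, an overdispersed negative-binomial has strictly larger skewness and excess kurtosis than the Poisson. The mean and variance below are illustrative values, not fits to the GCMT or PDE catalogues:

```python
# Compare skewness and excess kurtosis of Poisson vs. negative-binomial
# (NBD) count models with the same mean; illustrative parameters only.
import scipy.stats as st

mean = 50.0                  # mean earthquake count per time bin
var = 200.0                  # overdispersed: variance > mean
p = mean / var               # scipy nbinom success probability
n = mean * p / (1.0 - p)     # scipy nbinom number of successes

pois_skew, pois_kurt = st.poisson.stats(mean, moments='sk')
nbd_skew, nbd_kurt = st.nbinom.stats(n, p, moments='sk')
# Poisson: skewness = mean**-0.5, excess kurtosis = 1/mean, both -> 0
# as the rate grows; the matched NBD stays markedly heavier-tailed.
```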

  15. Quasi-periodic recurrence of large earthquakes on the southern San Andreas fault

    USGS Publications Warehouse

    Scharer, Katherine M.; Biasi, Glenn P.; Weldon, Ray J.; Fumal, Tom E.

    2010-01-01

    It has been 153 yr since the last large earthquake on the southern San Andreas fault (California, United States), but the average interseismic interval is only ~100 yr. If the recurrence of large earthquakes is periodic, rather than random or clustered, the length of this period is notable and would generally increase the risk estimated in probabilistic seismic hazard analyses. Unfortunately, robust characterization of a distribution describing earthquake recurrence on a single fault is limited by the brevity of most earthquake records. Here we use statistical tests on a 3000 yr combined record of 29 ground-rupturing earthquakes from Wrightwood, California. We show that earthquake recurrence there is more regular than expected from a Poisson distribution and is not clustered, leading us to conclude that recurrence is quasi-periodic. The observation of unimodal time dependence is persistent across an observationally based sensitivity analysis that critically examines alternative interpretations of the geologic record. The results support formal forecast efforts that use renewal models to estimate probabilities of future earthquakes on the southern San Andreas fault. Only four intervals (15%) from the record are longer than the present open interval, highlighting the current hazard posed by this fault.
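A minimal version of the periodicity logic (not the paper's formal tests) uses the coefficient of variation (COV) of recurrence intervals: a Poisson process has exponential inter-event times with COV near 1, while quasi-periodic recurrence gives COV well below 1. The intervals below are synthetic stand-ins for the Wrightwood record:

```python
# COV of recurrence intervals distinguishes random (Poisson) from
# quasi-periodic earthquake recurrence; synthetic intervals in years.
import numpy as np

rng = np.random.default_rng(0)
poisson_intervals = rng.exponential(100.0, 5000)    # random recurrence
periodic_intervals = rng.normal(100.0, 20.0, 5000)  # quasi-periodic

def cov(x):
    """Coefficient of variation: sample std over mean."""
    return x.std(ddof=1) / x.mean()
```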

  16. Rapid learning of visual ensembles.

    PubMed

    Chetverikov, Andrey; Campana, Gianluca; Kristjánsson, Árni

    2017-02-01

    We recently demonstrated that observers are capable of encoding not only summary statistics, such as mean and variance of stimulus ensembles, but also the shape of the ensembles. Here, for the first time, we show the learning dynamics of this process, investigate the possible priors for the distribution shape, and demonstrate that observers are able to learn more complex distributions, such as bimodal ones. We used speeding and slowing of response times between trials (intertrial priming) in visual search for an oddly oriented line to assess internal models of distractor distributions. Experiment 1 demonstrates that two repetitions are sufficient for enabling learning of the shape of uniform distractor distributions. In Experiment 2, we compared Gaussian and uniform distractor distributions, finding that following only two repetitions Gaussian distributions are represented differently than uniform ones. Experiment 3 further showed that when distractor distributions are bimodal (with a 30° distance between two uniform intervals), observers initially treat them as uniform, and only with further repetitions do they begin to treat the distributions as bimodal. In sum, observers do not have strong initial priors for distribution shapes and quickly learn simple ones but have the ability to adjust their representations to more complex feature distributions as information accumulates with further repetitions of the same distractor distribution.

  17. The impact of retirement account distributions on measures of family income.

    PubMed

    Iams, Howard M; Purcell, Patrick J

    2013-01-01

    In recent decades, employers have increasingly replaced defined benefit (DB) pensions with defined contribution (DC) retirement accounts for their employees. DB plans provide annuities, or lifetime benefits paid at regular intervals. The timing and amounts of DC distributions, however, may vary widely. Most surveys that provide data on the family income of the aged either collect no data on nonannuity retirement account distributions, or exclude such distributions from their summary measures of family income. We use Survey of Income and Program Participation (SIPP) data for 2009 to estimate the impact of including retirement account distributions on total family income calculations. We find that about one-fifth of aged families received distributions from retirement accounts in 2009. Measured mean income for those families would be about 15 percent higher and median income would be 18 percent higher if those distributions were included in the SIPP summary measure of family income.

  18. Statistical properties of share volume traded in financial markets

    NASA Astrophysics Data System (ADS)

    Gopikrishnan, Parameswaran; Plerou, Vasiliki; Gabaix, Xavier; Stanley, H. Eugene

    2000-10-01

    We quantitatively investigate the ideas behind the often-expressed adage ``it takes volume to move stock prices,'' and study the statistical properties of the number of shares traded QΔt for a given stock in a fixed time interval Δt. We analyze transaction data for the largest 1000 stocks for the two-year period 1994-95, using a database that records every transaction for all securities in three major US stock markets. We find that the distribution P(QΔt) displays a power-law decay, and that the time correlations in QΔt display long-range persistence. Further, we investigate the relation between QΔt and the number of transactions NΔt in a time interval Δt, and find that the long-range correlations in QΔt are largely due to those of NΔt. Our results are consistent with the interpretation that the large equal-time correlation previously found between QΔt and the absolute value of price change |GΔt| (related to volatility) is largely due to NΔt.
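A standard way to estimate a power-law decay exponent such as that of P(QΔt) is the Hill estimator on the largest order statistics. The sketch below uses synthetic Pareto draws with a known exponent rather than the transaction database described above:

```python
# Hill estimator for a power-law tail exponent, demonstrated on
# synthetic Pareto data (tail survival function ~ x**-alpha).
import numpy as np

rng = np.random.default_rng(7)
alpha_true = 1.7
q = rng.pareto(alpha_true, 100_000) + 1.0   # standard Pareto, x >= 1

def hill(x, k):
    """Hill estimate of the tail index from the k largest observations."""
    xs = np.sort(x)[::-1]
    return k / np.sum(np.log(xs[:k] / xs[k]))

alpha_hat = hill(q, 2000)
```

The choice of k trades bias against variance; in practice one inspects the estimate across a range of k before settling on a tail exponent.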

  19. A Gaussian Model-Based Probabilistic Approach for Pulse Transit Time Estimation.

    PubMed

    Jang, Dae-Geun; Park, Seung-Hun; Hahn, Minsoo

    2016-01-01

    In this paper, we propose a new probabilistic approach to pulse transit time (PTT) estimation using a Gaussian distribution model. It is motivated basically by the hypothesis that PTTs normalized by RR intervals follow the Gaussian distribution. To verify the hypothesis, we demonstrate the effects of arterial compliance on the normalized PTTs using the Moens-Korteweg equation. Furthermore, we observe a Gaussian distribution of the normalized PTTs on real data. In order to estimate the PTT using the hypothesis, we first assumed that R-waves in the electrocardiogram (ECG) can be correctly identified. The R-waves limit searching ranges to detect pulse peaks in the photoplethysmogram (PPG) and to synchronize the results with cardiac beats--i.e., the peaks of the PPG are extracted within the corresponding RR interval of the ECG as pulse peak candidates. Their probabilities of being the actual pulse peak are then calculated using a Gaussian probability function. The parameters of the Gaussian function are automatically updated when a new pulse peak is identified. This update makes the probability function adaptive to variations of cardiac cycles. Finally, the pulse peak is identified as the candidate with the highest probability. The proposed approach is tested on a database where ECG and PPG waveforms are collected simultaneously during the submaximal bicycle ergometer exercise test. The results are promising, suggesting that the method provides a simple but more accurate PTT estimation in real applications.
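The scoring-and-update loop can be sketched as follows, assuming (as the paper hypothesizes) that RR-normalized PTTs are approximately Gaussian. The class, method names and initial parameter values are ours, for illustration only:

```python
# Sketch of Gaussian scoring of pulse-peak candidates within one RR
# interval, with an incremental update of the model mean; hypothetical
# parameter values, not the published algorithm's constants.
import math

class PTTScorer:
    def __init__(self, mu=0.3, sigma=0.05):
        self.mu, self.sigma, self.n = mu, sigma, 1

    def score(self, ptt, rr):
        """Unnormalized Gaussian probability of the RR-normalized PTT."""
        z = (ptt / rr - self.mu) / self.sigma
        return math.exp(-0.5 * z * z)

    def pick(self, candidates, rr):
        """Choose the most probable candidate, then update the model."""
        best = max(candidates, key=lambda ptt: self.score(ptt, rr))
        x = best / rr
        self.n += 1
        self.mu += (x - self.mu) / self.n   # adaptive to cycle variation
        return best

scorer = PTTScorer()
peak = scorer.pick([0.18, 0.24, 0.55], rr=0.8)  # candidate PTTs in seconds
```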

  20. Financial factor influence on scaling and memory of trading volume in stock market

    NASA Astrophysics Data System (ADS)

    Li, Wei; Wang, Fengzhong; Havlin, Shlomo; Stanley, H. Eugene

    2011-10-01

    We study the daily trading volume volatility of 17 197 stocks in the US stock markets during the period 1989-2008 and analyze the time return intervals τ between volume volatilities above a given threshold q. For different thresholds q, the probability density function Pq(τ) scales with the mean interval <τ> as Pq(τ) = <τ>^-1 f(τ/<τ>), and the tails of the scaling function can be well approximated by a power law f(x) ~ x^-γ. We also study the relation between the form of the distribution function Pq(τ) and several financial factors: stock lifetime, market capitalization, volume, and trading value. We find a systematic tendency of Pq(τ) associated with these factors, suggesting a multiscaling feature in the volume return intervals. We analyze the conditional probability Pq(τ|τ0) for τ following a certain interval τ0, and find that Pq(τ|τ0) depends on τ0 such that immediately following a short (long) return interval a second short (long) return interval tends to occur. We also find indications that there is a long-term correlation in the daily volume volatility. We compare our results to those found earlier for price volatility.
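Extracting the return intervals τ between threshold exceedances and rescaling by the mean interval is the first step in testing the scaling form Pq(τ) = <τ>^-1 f(τ/<τ>). A sketch on a synthetic volatility series (not the 17 197-stock sample):

```python
# Return intervals between exceedances of a threshold q, rescaled by the
# mean interval so curves for different q can be compared on one axis.
import numpy as np

rng = np.random.default_rng(3)
vol = rng.lognormal(0.0, 1.0, 50_000)   # stand-in daily volume volatility

def return_intervals(x, q):
    """Gaps (in samples) between successive values exceeding q."""
    idx = np.flatnonzero(x > q)
    return np.diff(idx)

tau = return_intervals(vol, q=np.quantile(vol, 0.95))
scaled = tau / tau.mean()               # histogram of this tests f(x)
```

With a 95th-percentile threshold, roughly 5% of samples exceed q, so the mean interval is near 20 samples regardless of the underlying distribution.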

  1. The Use of Probabilistic Methods to Evaluate the Systems Impact of Component Design Improvements on Large Turbofan Engines

    NASA Technical Reports Server (NTRS)

    Packard, Michael H.

    2002-01-01

    Probabilistic Structural Analysis (PSA) is now commonly used for predicting the distribution of time/cycles to failure of turbine blades and other engine components. These distributions are typically based on fatigue/fracture and creep failure modes of these components. Additionally, reliability analysis is used for taking test data related to particular failure modes and calculating failure rate distributions of electronic and electromechanical components. How can these individual failure time distributions of structural, electronic and electromechanical component failure modes be effectively combined into a top level model for overall system evaluation of component upgrades, changes in maintenance intervals, or line replaceable unit (LRU) redesign? This paper shows an example of how various probabilistic failure predictions for turbine engine components can be evaluated and combined to show their effect on overall engine performance. A generic model of a turbofan engine was modeled using various Probabilistic Risk Assessment (PRA) tools (Quantitative Risk Assessment Software (QRAS) etc.). Hypothetical PSA results for a number of structural components along with mitigation factors that would restrict the failure mode from propagating to a Loss of Mission (LOM) failure were used in the models. The output of this program includes an overall failure distribution for LOM of the system. The rank and contribution to the overall Mission Success (MS) is also given for each failure mode and each subsystem. This application methodology demonstrates the effectiveness of PRA for assessing the performance of large turbine engines. Additionally, the effects of system changes and upgrades, the application of different maintenance intervals, inclusion of new sensor detection of faults and other upgrades were evaluated in determining overall turbine engine reliability.

  2. A distribution method for analysing the baseline of pulsatile endocrine signals as exemplified by 24-hour growth hormone profiles.

    PubMed

    Matthews, D R; Hindmarsh, P C; Pringle, P J; Brook, C G

    1991-09-01

    To develop a method for quantifying the distribution of concentrations present in hormone profiles, which would allow an observer-unbiased estimate of the time-concentration attribute and an assessment of the baseline. The log-transformed concentrations (regardless of their temporal attribute) are sorted and allocated to class intervals. The number of observations in each interval is then determined and expressed as a percentage of the total number of samples drawn in the study period. The data may be displayed as a frequency distribution or as a cumulative distribution. Cumulative distributions may be plotted as sigmoidal ogives or can be transformed into discrete probabilities (linear probits), which are then linear, and amenable to regression analysis. Probability analysis gives estimates of the mean (the value below which 50% of the observed concentrations lie, which we term 'OC50'). 'Baseline' can be defined in terms of percentage occupancy--the 'Observed Concentration for 5%' (which we term 'OC5') which is the threshold at or below which the hormone concentrations are measured 5% of the time. We report the use of applying this method to 24-hour growth hormone (GH) profiles from 63 children, 26 adults and one giant. We demonstrate that GH effects (growth or gigantism) in these groups are more related to the baseline OC5 concentration than peak concentration (OC5 +/- 95% confidence limits: adults 0.05 +/- 0.04, peak-height-velocity pubertal 0.39 +/- 0.22, giant 8.9 mU/l). Pulsatile hormone profiles can be analysed using this method in order to assess baseline and other concentration domains.
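In practice the OC50 and OC5 summaries reduce to percentiles of the pooled concentration distribution, ignoring sample timing. A sketch on a synthetic pulsatile GH profile (288 points, as a 24-hour profile sampled every 5 minutes would give; the mixture parameters are invented):

```python
# OC50 / OC5 as percentiles of the pooled concentration distribution
# from a synthetic pulsatile profile: low baseline most of the time,
# occasional secretory pulses.
import numpy as np

rng = np.random.default_rng(1)
is_baseline = rng.random(288) < 0.9
gh = np.where(is_baseline,
              rng.lognormal(-2.0, 0.5, 288),   # baseline values, mU/l
              rng.lognormal(2.0, 0.5, 288))    # pulse values, mU/l

oc50 = np.quantile(gh, 0.50)   # level below which 50% of samples lie
oc5 = np.quantile(gh, 0.05)    # occupancy-defined "baseline"
```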

  3. Changes in the distribution of high-risk births associated with changes in contraceptive prevalence.

    PubMed

    Stover, John; Ross, John

    2013-01-01

    Several birth characteristics are associated with high mortality risk: very young or old mothers, short birth intervals and high birth order. One justification for family planning programs is the health benefits associated with better spacing and timing of births. This study examines the extent to which the prevalence of these risk factors changes as a country transitions from high to low fertility. We use data from 194 national surveys to examine both cross section and within-country variation in these risk factors as they relate to the total fertility rate. Declines in the total fertility rate are associated with large declines in the proportion of high order births, those to mothers over the age of 34 and those with multiple risk factors; as well as to increasing proportions of first order births. There is little change in the proportion of births with short birth intervals except in sub-Saharan Africa. The use of family planning is strongly associated with fertility declines. The proportion of second and higher order births with demographic risk factors declines substantially as fertility declines. This creates a potential for reducing child mortality rates. Some of the reduction comes from modifying the birth interval distribution or by bringing maternal age at the time of birth into the 'safe' range of 18-35 years, and some comes from the actual elimination of births that would have a high mortality risk (high parity births).

  4. EXACT DISTRIBUTIONS OF INTRACLASS CORRELATION AND CRONBACH'S ALPHA WITH GAUSSIAN DATA AND GENERAL COVARIANCE.

    PubMed

    Kistner, Emily O; Muller, Keith E

    2004-09-01

    Intraclass correlation and Cronbach's alpha are widely used to describe reliability of tests and measurements. Even with Gaussian data, exact distributions are known only for compound symmetric covariance (equal variances and equal correlations). Recently, large sample Gaussian approximations were derived for the distribution functions. New exact results allow calculating the exact distribution function and other properties of intraclass correlation and Cronbach's alpha, for Gaussian data with any covariance pattern, not just compound symmetry. Probabilities are computed in terms of the distribution function of a weighted sum of independent chi-square random variables. New F approximations for the distribution functions of intraclass correlation and Cronbach's alpha are much simpler and faster to compute than the exact forms. Assuming the covariance matrix is known, the approximations typically provide sufficient accuracy, even with as few as ten observations. Either the exact or approximate distributions may be used to create confidence intervals around an estimate of reliability. Monte Carlo simulations led to a number of conclusions. Correctly assuming that the covariance matrix is compound symmetric leads to accurate confidence intervals, as was expected from previously known results. However, assuming and estimating a general covariance matrix produces somewhat optimistically narrow confidence intervals with 10 observations. Increasing sample size to 100 gives essentially unbiased coverage. Incorrectly assuming compound symmetry leads to pessimistically large confidence intervals, with pessimism increasing with sample size. In contrast, incorrectly assuming general covariance introduces only a modest optimistic bias in small samples. Hence the new methods seem preferable for creating confidence intervals, except when compound symmetry definitely holds.
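For reference, Cronbach's alpha itself is computed from the item variances and the variance of the total score; the exact-distribution and confidence-interval machinery discussed above is not reproduced in this sketch:

```python
# Cronbach's alpha from its standard variance formula, checked on
# synthetic parallel items sharing one common factor.
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (subjects x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var_sum / total_var)

rng = np.random.default_rng(5)
true_score = rng.normal(0.0, 1.0, (500, 1))            # common factor
scores = true_score + rng.normal(0.0, 1.0, (500, 4))   # 4 parallel items
alpha = cronbach_alpha(scores)   # theory predicts alpha near 0.8 here
```

With equal signal and noise variance, each item has reliability 0.5, and the Spearman-Brown formula gives 4 × 0.5 / (1 + 3 × 0.5) = 0.8 for the 4-item scale.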

  5. [Evaluation of the principles of distribution of electrocardiographic R-R intervals for elaboration of methods of automated diagnosis of cardiac rhythm disorders].

    PubMed

    Tsukerman, B M; Finkel'shteĭn, I E

    1987-07-01

    A statistical analysis of prolonged ECG records has been carried out in patients with various heart rhythm and conductivity disorders. The distribution of absolute R-R duration values and relationships between adjacent intervals have been examined. A two-step algorithm has been constructed that excludes anomalous and "suspicious" intervals from a sample of consecutively recorded R-R intervals, until only the intervals between contractions of veritably sinus origin remain in the sample. The algorithm has been implemented as a program for the Electronica NC-80 microcomputer. It operates reliably even in cases of complex combined rhythm and conductivity disorders.
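The two-step exclusion idea can be sketched roughly as below; the threshold, the neighbour rule and all names are illustrative guesses at this style of algorithm, not the published one:

```python
# Rough sketch of two-step R-R interval cleaning: (1) drop intervals far
# from the running median (anomalous), (2) drop the neighbours of excluded
# beats ("suspicious"), keeping intervals of presumed sinus origin.
import numpy as np

def sinus_intervals(rr, tol=0.2):
    rr = np.asarray(rr, dtype=float)
    med = np.median(rr)
    ok = np.abs(rr - med) <= tol * med      # step 1: anomalous out
    keep = ok.copy()
    for i in np.flatnonzero(~ok):           # step 2: neighbours out
        if i > 0:
            keep[i - 1] = False
        if i + 1 < keep.size:
            keep[i + 1] = False
    return rr[keep]

rr = [0.80, 0.82, 0.79, 0.40, 0.81, 0.83, 1.60, 0.80]   # seconds
clean = sinus_intervals(rr)   # the ectopic 0.40 and pause 1.60 are
                              # removed together with their neighbours
```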

  6. Consequences of Secondary Calibrations on Divergence Time Estimates.

    PubMed

    Schenk, John J

    2016-01-01

    Secondary calibrations (calibrations based on the results of previous molecular dating studies) are commonly applied in divergence time analyses in groups that lack fossil data; however, the consequences of applying secondary calibrations in a relaxed-clock approach are not fully understood. I tested whether applying the posterior estimate from a primary study as a prior distribution in a secondary study results in consistent age and uncertainty estimates. I compared age estimates from simulations with 100 randomly replicated secondary trees. On average, the 95% credible intervals of node ages for secondary estimates were significantly younger and narrower than primary estimates. The primary and secondary age estimates were significantly different in 97% of the replicates after Bonferroni corrections. Greater error in magnitude was associated with deeper than with shallower nodes, but the opposite was found when standardized by median node age, and a significant positive relationship was determined between the number of tips/age of secondary trees and the total amount of error. When two secondary calibrated nodes were analyzed, estimates remained significantly different, and although the minimum and median estimates were associated with less error, maximum age estimates and credible interval widths had greater error. The shape of the prior also influenced error, in which applying a normal, rather than uniform, prior distribution resulted in greater error. Secondary calibrations, in summary, lead to a false impression of precision, and the distribution of age estimates shifts away from those that would be inferred by the primary analysis. These results suggest that secondary calibrations should not be applied as the only source of calibration in divergence time analyses that test time-dependent hypotheses until the additional error associated with secondary calibrations is more properly modeled to take into account increased uncertainty in age estimates.

  7. Mining of hospital laboratory information systems: a model study defining age- and gender-specific reference intervals and trajectories for plasma creatinine in a pediatric population.

    PubMed

    Søeby, Karen; Jensen, Peter Bjødstrup; Werge, Thomas; Sørensen, Steen

    2015-09-01

    The knowledge of physiological fluctuation and variation of even commonly used biochemical quantities in extreme age groups and during development is sparse. This challenges the clinical interpretation and utility of laboratory tests in these age groups. To explore the utility of hospital laboratory data as a source of information, we analyzed enzymatic plasma creatinine as a model analyte in two large pediatric hospital samples. Plasma creatinine measurements from 9700 children aged 0-18 years were obtained from hospital laboratory databases and partitioned into high-resolution gender- and age-groups. Normal probability plots were used to deduce parameters of the normal distributions from healthy creatinine values in the mixed hospital datasets. Furthermore, temporal trajectories were generated from repeated measurements to examine developmental patterns in periods of changing creatinine levels. Creatinine shows great age dependence from birth throughout childhood. We computed and replicated 95% reference intervals in narrow gender and age bins and showed them to be comparable to those determined in healthy population studies. We identified pronounced transitions in creatinine levels at different time points after birth and around the early teens, which challenges the establishment and usefulness of reference intervals in those age groups. The study documents that hospital laboratory data may inform on the developmental aspects of creatinine, on periods with pronounced heterogeneity and valid reference intervals. Furthermore, part of the heterogeneity in creatinine distribution is likely due to differences in biological and chronological age of children and should be considered when using age-specific reference intervals.
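Age-partitioned nonparametric reference intervals are simply the 2.5th/97.5th percentiles within each age bin. A sketch on synthetic data (the study additionally uses normal probability plots to separate healthy values from the mixed hospital dataset, which is not reproduced here; the creatinine model below is invented):

```python
# 95% reference intervals per age bin as nonparametric percentiles;
# synthetic age-dependent creatinine values, arbitrary units.
import numpy as np

rng = np.random.default_rng(11)
age = rng.uniform(0.0, 18.0, 5000)                      # years
crea = 20.0 + 3.0 * age + rng.normal(0.0, 5.0, 5000)    # synthetic values

bins = np.array([0.0, 1.0, 3.0, 6.0, 12.0, 18.0])
which = np.digitize(age, bins) - 1
ref = {}
for b in range(len(bins) - 1):
    vals = crea[which == b]
    ref[(bins[b], bins[b + 1])] = (np.quantile(vals, 0.025),
                                   np.quantile(vals, 0.975))
```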

  8. Random local temporal structure of category fluency responses.

    PubMed

    Meyer, David J; Messer, Jason; Singh, Tanya; Thomas, Peter J; Woyczynski, Wojbor A; Kaye, Jeffrey; Lerner, Alan J

    2012-04-01

    The Category Fluency Test (CFT) provides a sensitive measurement of cognitive capabilities in humans related to retrieval from semantic memory. In particular, it is widely used to assess progress of cognitive impairment in patients with dementia. Previous research shows that, in the first approximation, the intensity of tested individuals' responses within a standard 60-s test period decays exponentially with time, with faster decay rates for more cognitively impaired patients. Such decay rate can then be viewed as a global (macro) diagnostic parameter of each test. In the present paper we focus on the statistical properties of the properly de-trended time intervals between consecutive responses (inter-call times) in the Category Fluency Test. In a sense, those properties reflect the local (micro) structure of the response generation process. We find that a good approximation for the distribution of the de-trended inter-call times is provided by the Weibull Distribution, a probability distribution that appears naturally in this context as a distribution of a minimum of independent random quantities and is the standard tool in industrial reliability theory. This insight leads us to a new interpretation of the concept of "navigating a semantic space" via patient responses.
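Fitting a Weibull distribution to already de-trended inter-call times is straightforward with scipy, fixing the location parameter at zero; the draws below are simulated Weibull data standing in for CFT responses:

```python
# Maximum-likelihood Weibull fit with location fixed at zero,
# checked on simulated inter-call times with known parameters.
from scipy import stats

shape_true, scale_true = 1.5, 2.0
gaps = stats.weibull_min.rvs(shape_true, scale=scale_true,
                             size=2000, random_state=42)

shape_hat, loc_hat, scale_hat = stats.weibull_min.fit(gaps, floc=0.0)
```

Fixing `floc=0` matters: the three-parameter Weibull fit is poorly identified for small samples, and inter-event times have a natural origin at zero.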

  9. An informational transition in conditioned Markov chains: Applied to genetics and evolution.

    PubMed

    Zhao, Lei; Lascoux, Martin; Waxman, David

    2016-08-07

    In this work we assume that we have some knowledge about the state of a population at two known times, when the dynamics is governed by a Markov chain such as a Wright-Fisher model. Such knowledge could be obtained, for example, from observations made on ancient and contemporary DNA, or during laboratory experiments involving long term evolution. A natural assumption is that the behaviour of the population, between observations, is related to (or constrained by) what was actually observed. The present work shows that this assumption has limited validity. When the time interval between observations is larger than a characteristic value, which is a property of the population under consideration, there is a range of intermediate times where the behaviour of the population has reduced or no dependence on what was observed and an equilibrium-like distribution applies. Thus, for example, if the frequency of an allele is observed at two different times, then for a large enough time interval between observations, the population has reduced or no dependence on the two observed frequencies for a range of intermediate times. Given observations of a population at two times, we provide a general theoretical analysis of the behaviour of the population at all intermediate times, and determine an expression for the characteristic time interval, beyond which the observations do not constrain the population's behaviour over a range of intermediate times. The findings of this work relate to what can be meaningfully inferred about a population at intermediate times, given knowledge of terminal states. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Appetitive and consummatory sexual behaviors of female rats in bilevel chambers. II. Patterns of estrus termination following vaginocervical stimulation.

    PubMed

    Pfaus, J G; Smith, W J; Byrne, N; Stephens, G

    2000-02-01

    Copulation with intromission or manual vaginocervical stimulation (VCS) shortens the duration that intact female rats maintain lordosis responding during estrus. The present study examined whether VCS could shorten the duration of both appetitive and consummatory measures of female sexual behavior, and whether these effects occur differentially in time and across different hormone priming intervals. Ovariectomized, sexually experienced female rats were administered subcutaneous injections of estradiol benzoate 48 h and progesterone 4 h, before receiving 50 manual VCSs with a lubricated glass rod distributed over 1 h. Control females received sham VCSs distributed over the same time. The females were then tested for sexual behavior in bilevel chambers with two sexually vigorous males (to one ejaculatory series or 10 min with each male, separated by 5 min) 12, 16, and 20 h after VCS. Prior to the final hormone treatment, different groups of females had been given the same hormone treatment either 28, 14, 7, or 4 days before. In females tested at 28- and 14-day hormone intervals, VCS induced both active and passive rejection responses at 12, 16, and 20 h. In contrast, females that received sham VCS displayed relatively normal sexual behavior at 12 h, although by 16 and 20 h these females displayed active and passive rejection. Females tested at 7- or 4-day intervals displayed normal levels of lordosis at all testing times, regardless of VCS treatment. These data indicate that VCS facilitates rejection responses that precede the decrease in lordosis responsiveness. However, the effects of VCS are dependent on the frequency of hormone priming, suggesting that hormone treatment may block some of the long-term inhibitory effects of VCS on female sexual behavior. Copyright 2000 Academic Press.

  11. Modeling the formation of methane hydrate-bearing intervals in fine-grained sediments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malinverno, Alberto; Cook, Ann; Daigle, Hugh

    Sediment grain size exerts a fundamental control on how methane hydrates are distributed within the pore space. Fine-grained muds are the predominant sediments in continental margins, and hydrates in these sediments have often been observed in semi-vertical veins and fractures. In several instances, these hydrate veins/fractures are found in discrete depth intervals a few tens of meters thick within the gas hydrate stability zone (GHSZ), surrounded by hydrate-free sediments above and below. As they are not obviously connected with free gas occurring beneath the base of the GHSZ, these isolated hydrate-bearing intervals have been interpreted as formed by microbial methane generated in situ. To investigate further the formation of these hydrate deposits, we applied a time-dependent advection-diffusion-reaction model that includes the effects of sedimentation, solute diffusion, and microbial methane generation. The microbial methane generation term depends on the amount of metabolizable organic carbon deposited at the seafloor, which is degraded at a prescribed rate resulting in methane formation beneath the sulfate reduction zone. In the model, methane hydrate precipitates once the dissolved methane concentration is greater than solubility, or hydrate dissolves if concentration goes below solubility. If the deposition of organic carbon at the seafloor is kept constant in time, we found that the predicted amounts of hydrate formed in discrete intervals within the GHSZ are much less than those estimated from observations. We then investigated the effect of temporal variations in the deposition of organic carbon. If greater amounts of organic carbon are deposited during some time interval, methane generation is enhanced during burial in the corresponding sediment interval.
With variations in organic carbon deposition that are consistent with observations in continental margin sediments, we were able to reproduce the methane hydrate contents estimated in discrete depth intervals. Our results support the suggestion that in situ microbial methane generation is the source for hydrates within fine-grained sediments.

  12. Delay-distribution-dependent H∞ state estimation for delayed neural networks with (x,v)-dependent noises and fading channels.

    PubMed

    Sheng, Li; Wang, Zidong; Tian, Engang; Alsaadi, Fuad E

    2016-12-01

    This paper deals with the H∞ state estimation problem for a class of discrete-time neural networks with stochastic delays subject to state- and disturbance-dependent noises (also called (x,v)-dependent noises) and fading channels. The time-varying stochastic delay takes values on certain intervals with known probability distributions. The system measurement is transmitted through fading channels described by the Rice fading model. The aim of the addressed problem is to design a state estimator such that the estimation performance is guaranteed in the mean-square sense against admissible stochastic time-delays, stochastic noises as well as stochastic fading signals. By employing the stochastic analysis approach combined with the Kronecker product, several delay-distribution-dependent conditions are derived to ensure that the error dynamics of the neuron states is stochastically stable with prescribed H∞ performance. Finally, a numerical example is provided to illustrate the effectiveness of the obtained results. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. The Superstatistical Nature and Interoccurrence Time of Atmospheric Mercury Concentration Fluctuations

    NASA Astrophysics Data System (ADS)

    Carbone, F.; Bruno, A. G.; Naccarato, A.; De Simone, F.; Gencarelli, C. N.; Sprovieri, F.; Hedgecock, I. M.; Landis, M. S.; Skov, H.; Pfaffhuber, K. A.; Read, K. A.; Martin, L.; Angot, H.; Dommergue, A.; Magand, O.; Pirrone, N.

    2018-01-01

    The probability density function (PDF) of the time intervals between subsequent extreme events in atmospheric Hg0 concentration data series from different latitudes has been investigated. The Hg0 dynamic possesses a long-term memory autocorrelation function. Above a fixed threshold Q in the data, the PDFs of the interoccurrence time of the Hg0 data are well described by a Tsallis q-exponential function. This PDF behavior has been explained in the framework of superstatistics, where the competition between multiple mesoscopic processes affects the macroscopic dynamics. An extensive parameter μ, encompassing all possible fluctuations related to mesoscopic phenomena, has been identified. It follows a χ2 distribution, indicative of the superstatistical nature of the overall process. Shuffling the data series destroys the long-term memory, the distributions become independent of Q, and the PDFs collapse on to the same exponential distribution. The possible central role of atmospheric turbulence on extreme events in the Hg0 data is highlighted.
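    The basic measurement behind such an analysis — extracting the waiting times between threshold exceedances — can be sketched as follows. This is a minimal illustration, not the authors' pipeline; the Tsallis q-exponential is shown in unnormalized form:

```python
import numpy as np

def interoccurrence_times(series, threshold):
    """Waiting times (in samples) between successive exceedances of a threshold Q."""
    idx = np.flatnonzero(np.asarray(series) > threshold)
    return np.diff(idx)

def q_exponential(t, q, beta):
    """Unnormalized Tsallis q-exponential e_q(-beta*t); tends to exp(-beta*t) as q -> 1."""
    return (1.0 + (q - 1.0) * beta * t) ** (1.0 / (1.0 - q))
```

    Fitting `q_exponential` to the histogram of `interoccurrence_times` at several thresholds Q, and checking whether the fitted q stays above 1, is the kind of diagnostic the superstatistical interpretation rests on.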

  14. Time Resolved Near Field Optical Microscopy

    NASA Astrophysics Data System (ADS)

    Stark, J. B.

    1996-03-01

    We use broadband pulses to image the carrier dynamics of semiconductor microstructures on a 150 nm spatial scale, with a time resolution of 60 femtoseconds. Etched disks of GaAs/AlGaAs multiple quantum well material, 10 microns in diameter, are excited with a 30 fs pump from a Ti:Sapphire laser, and probed using a near-field optical microscope. The nonlinear transmission of the microdisks is measured using a double-modulation technique, sensitive to transmission changes of 0.0005 within a 150 nm diameter spot on the sample. This spot is scanned to produce an image of the sample. The nonlinear response is produced by the occupation of phase space by the excited distribution. Images of this evolving distribution are collected at time intervals following excitation, measuring the relaxation of carriers at each point in the microdisk. The resulting data can be viewed as a movie of the carrier dynamics of nonequilibrium distributions in excited semiconductor structures. Work done in collaboration with U. Mohideen and R. E. Slusher.

  15. Improved radioimmunotherapy of hematologic malignancies. Progress report, November 1, 1993--October 31, 1994

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Press, O.W.

    1994-08-04

    This report summarizes progress made during the time interval between November 1, 1993 and October 31, 1994 and briefly describes studies on the metabolism of antibodies targeting B cell antigens, retention of labeled antibodies by human B cell lymphocytes, and tissue distribution of Chloramine T and tyramine cellobiose labeled antibodies in mice harboring a human erythroleukemia tumor transplant.

  16. Dopaminergic Actions of D-Amphetamine on Schedule-Induced Polydipsia in Rats

    ERIC Educational Resources Information Center

    Pellon, Ricardo; Ruiz, Ana; Rodriguez, Cilia; Flores, Pilar

    2007-01-01

    Schedule-induced polydipsia in rats was developed by means of a fixed-time 60-s schedule of food presentation. The acute administration of d-amphetamine sulfate (0.1-3.0 mg/kg) produced a dose-dependent decrease in the rate of licking. D-Amphetamine shifted the temporal distribution of adjunctive drinking within interfood intervals to the left.…

  17. Measurements of geomagnetically trapped alpha particles, 1968-1970. I - Quiet time distributions

    NASA Technical Reports Server (NTRS)

    Krimigis, S. M.; Verzariu, P.

    1973-01-01

    Results are presented from observations of geomagnetically trapped alpha particles over the energy range from 1.18 to 8 MeV, performed with the aid of the Injun 5 polar-orbiting satellite during the period from September 1968 to May 1970. Following a presentation of a time history covering this entire period, a detailed analysis is made of the magnetically quiet period from Feb. 11 to 28, 1970. During this period the alpha particle fluxes and the intensity ratio of alpha particles to protons attained their lowest values in approximately 20 months; the alpha particle intensity versus L profile was most similar to the proton profile at the same energy per nucleon interval; the intensity ratio was nearly constant as a function of L in the same energy per nucleon representation, but rose sharply with L when computed in the same total energy interval; the variation of alpha particle intensity with B suggested a steep angular distribution at small equatorial pitch angles, while the intensity ratio showed little dependence on B; and the alpha particle spectral parameter showed a markedly different dependence on L from the equivalent one for protons.

  18. Wave propagation model of heat conduction and group speed

    NASA Astrophysics Data System (ADS)

    Zhang, Long; Zhang, Xiaomin; Peng, Song

    2018-03-01

    In view of the finite relaxation model of non-Fourier's law, the Cattaneo and Vernotte (CV) model and Fourier's law are presented in this work for comparing wave propagation modes. Independent variable translation is applied to solve the partial differential equation. Results show that the general form of the time spatial distribution of temperature for the three media comprises two solutions: those corresponding to the positive and negative logarithmic heating rates. The former shows that a group of heat waves whose spatial distribution follows the exponential function law propagates at a group speed; the speed of propagation is related to the logarithmic heating rate. The total speed of all the possible heat waves can be combined to form the group speed of the wave propagation. The latter indicates that the spatial distribution of temperature, which follows the exponential function law, decays with time. These features show that propagation accelerates when heated and decelerates when cooled. For the model media that follow Fourier's law and correspond to the positive heat rate of heat conduction, the propagation mode is also considered the propagation of a group of heat waves because the group speed has no upper bound. For the finite relaxation model with non-Fourier media, the interval of group speed is bounded and the maximum speed can be obtained when the logarithmic heating rate is exactly the reciprocal of relaxation time. And for the CV model with a non-Fourier medium, the interval of group speed is also bounded and the maximum value can be obtained when the logarithmic heating rate is infinite.
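    For reference, the two constitutive laws being compared admit the standard forms below (generic notation; the paper's symbols may differ):

```latex
\mathbf{q} = -k\,\nabla T \quad \text{(Fourier)}, \qquad
\tau\,\frac{\partial \mathbf{q}}{\partial t} + \mathbf{q} = -k\,\nabla T \quad \text{(Cattaneo--Vernotte)}
```

    Combining the CV law with energy conservation, $\rho c\,\partial T/\partial t = -\nabla\cdot\mathbf{q}$, gives the hyperbolic heat equation $\tau\,\partial^2 T/\partial t^2 + \partial T/\partial t = (k/\rho c)\,\nabla^2 T$, whose characteristic signal speed $\sqrt{k/(\rho c\,\tau)}$ is finite; in the Fourier limit $\tau \to 0$ that speed is unbounded, which is consistent with the bounded versus unbounded group-speed intervals described above.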

  19. Quantile Regression Models for Current Status Data

    PubMed Central

    Ou, Fang-Shu; Zeng, Donglin; Cai, Jianwen

    2016-01-01

    Current status data arise frequently in demography, epidemiology, and econometrics where the exact failure time cannot be determined but is only known to have occurred before or after a known observation time. We propose a quantile regression model to analyze current status data, because it does not require distributional assumptions and the coefficients can be interpreted as direct regression effects on the distribution of failure time in the original time scale. Our model assumes that the conditional quantile of failure time is a linear function of covariates. We assume conditional independence between the failure time and observation time. An M-estimator is developed for parameter estimation which is computed using the concave-convex procedure and its confidence intervals are constructed using a subsampling method. Asymptotic properties for the estimator are derived and proven using modern empirical process theory. The small sample performance of the proposed method is demonstrated via simulation studies. Finally, we apply the proposed method to analyze data from the Mayo Clinic Study of Aging. PMID:27994307

  20. Criterion-free measurement of motion transparency perception at different speeds

    PubMed Central

    Rocchi, Francesca; Ledgeway, Timothy; Webb, Ben S.

    2018-01-01

    Transparency perception often occurs when objects within the visual scene partially occlude each other or move at the same time, at different velocities across the same spatial region. Although transparent motion perception has been extensively studied, we still do not understand how the distribution of velocities within a visual scene contribute to transparent perception. Here we use a novel psychophysical procedure to characterize the distribution of velocities in a scene that give rise to transparent motion perception. To prevent participants from adopting a subjective decision criterion when discriminating transparent motion, we used an “odd-one-out,” three-alternative forced-choice procedure. Two intervals contained the standard—a random-dot-kinematogram with dot speeds or directions sampled from a uniform distribution. The other interval contained the comparison—speeds or directions sampled from a distribution with the same range as the standard, but with a notch of different widths removed. Our results suggest that transparent motion perception is driven primarily by relatively slow speeds, and does not emerge when only very fast speeds are present within a visual scene. Transparent perception of moving surfaces is modulated by stimulus-based characteristics, such as the separation between the means of the overlapping distributions or the range of speeds presented within an image. Our work illustrates the utility of using objective, forced-choice methods to reveal the mechanisms underlying motion transparency perception. PMID:29614154

  1. On Some Confidence Intervals for Estimating the Mean of a Skewed Population

    ERIC Educational Resources Information Center

    Shi, W.; Kibria, B. M. Golam

    2007-01-01

    A number of methods are available in the literature to measure confidence intervals. Here, confidence intervals for estimating the population mean of a skewed distribution are considered. This note proposes two alternative confidence intervals, namely, Median t and Mad t, which are simple adjustments to the Student's t confidence interval. In…
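    The baseline being adjusted here is the classical Student's t interval. The sketch below shows that baseline plus a robust variant; the exact Median t and Mad t adjustments are not given in the truncated abstract, so the MAD-based version is only a hypothetical illustration of swapping the usual scale estimate for a robust one:

```python
import math
import statistics

def t_interval(data, t_crit):
    """Classical Student's t confidence interval for the population mean."""
    n = len(data)
    center = statistics.mean(data)
    half = t_crit * statistics.stdev(data) / math.sqrt(n)
    return (center - half, center + half)

def mad_t_interval(data, t_crit):
    """Hypothetical robust variant: center at the median, scale = 1.4826 * MAD
    (the constant makes the MAD consistent for sigma under normality)."""
    n = len(data)
    med = statistics.median(data)
    mad = statistics.median(abs(x - med) for x in data)
    half = t_crit * 1.4826 * mad / math.sqrt(n)
    return (med - half, med + half)
```

    For skewed data the two intervals can differ noticeably in both center and width, which is the kind of behavior such adjusted intervals are designed to exploit.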

  2. Distributed fiber sparse-wideband vibration sensing by sub-Nyquist additive random sampling

    NASA Astrophysics Data System (ADS)

    Zhang, Jingdong; Zheng, Hua; Zhu, Tao; Yin, Guolu; Liu, Min; Bai, Yongzhong; Qu, Dingrong; Qiu, Feng; Huang, Xianbing

    2018-05-01

    The round trip time of the light pulse limits the maximum detectable vibration frequency response range of phase-sensitive optical time domain reflectometry (φ-OTDR). Unlike the uniform laser pulse interval in conventional φ-OTDR, we randomly modulate the pulse interval, so that an equivalent sub-Nyquist additive random sampling (sNARS) is realized for every sensing point of the long interrogation fiber. For a φ-OTDR system with 10 km sensing length, the sNARS method is optimized by theoretical analysis and Monte Carlo simulation, and the experimental results verify that a wideband sparse signal can be identified and reconstructed. Such a method can broaden the vibration frequency response range of φ-OTDR, which is of great significance in sparse-wideband-frequency vibration signal detection, such as rail track monitoring and metal defect detection.
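    The core idea — randomizing the sampling instants breaks the aliasing of uniform sampling, so a tone above the mean-rate Nyquist limit can still be identified — can be sketched as follows. This is a toy demonstration using a least-squares spectrum scan, not the paper's reconstruction method:

```python
import numpy as np

rng = np.random.default_rng(0)
f_true = 40.0                    # Hz, above the 25 Hz Nyquist limit of uniform 50 Hz sampling
# additive random sampling: each interval is the 20 ms mean plus a random perturbation
dt = 0.02 + rng.uniform(-0.009, 0.009, size=400)
t = np.cumsum(dt)
x = np.sin(2 * np.pi * f_true * t)

# least-squares power at each trial frequency (a Lomb-Scargle-like scan)
freqs = np.arange(1.0, 100.0, 0.5)
power = np.empty_like(freqs)
for i, f in enumerate(freqs):
    basis = np.column_stack([np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)])
    coef, *_ = np.linalg.lstsq(basis, x, rcond=None)
    power[i] = np.sum(coef ** 2)
f_est = freqs[np.argmax(power)]  # recovers the tone despite the 50 Hz mean rate
```

    With uniform 20 ms sampling the 40 Hz tone would alias onto 10 Hz; the random intervals decorrelate the aliases, leaving a single dominant peak.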

  3. On the properties of stochastic intermittency in rainfall processes.

    PubMed

    Molini, A; La Barbera, P; Lanza, L G

    2002-01-01

    In this work we propose a mixed approach to deal with the modelling of rainfall events, based on the analysis of geometrical and statistical properties of rain intermittency in time, combined with the predictability power derived from the analysis of no-rain periods distribution and from the binary decomposition of the rain signal. Some recent hypotheses on the nature of rain intermittency are reviewed too. In particular, the internal intermittent structure of a high resolution pluviometric time series covering one decade and recorded at the tipping bucket station of the University of Genova is analysed, by separating the internal intermittency of rainfall events from the inter-arrival process through a simple geometrical filtering procedure. In this way it is possible to associate no-rain intervals with a probability distribution both in virtue of their position within the event and their percentage. From this analysis, an invariant probability distribution for the no-rain periods within the events is obtained at different aggregation levels and its satisfactory agreement with a typical extreme value distribution is shown.
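    The geometrical filtering step — decomposing a binary rain/no-rain sequence into its no-rain interval lengths — can be sketched as below (an illustrative reduction, not the authors' exact procedure):

```python
from itertools import groupby

def dry_spell_lengths(rain):
    """Lengths of consecutive no-rain runs in a binary (or thresholded) rain series."""
    return [sum(1 for _ in grp)
            for is_wet, grp in groupby(rain, key=lambda v: v > 0)
            if not is_wet]
```

    Applied at different aggregation levels (e.g., after block-averaging the series), the histogram of these lengths gives the no-rain period distribution whose invariance the paper examines.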

  4. Asynchronous Distributed Flow Control Algorithms.

    DTIC Science & Technology

    1984-10-01

    ...all y ∈ X, y < x. We may think of X as a "feasible" set. The fair allocation vector solves the following set of nested problems. The first problem is to...interval parameters that can be adjusted: the time between a successful update and the next update attempt, and the time between an unsuccessful attempt...

  5. Spatial distribution of diesel transit bus emissions and urban populations: implications of coincidence and scale on exposure.

    PubMed

    Gouge, Brian; Ries, Francis J; Dowlatabadi, Hadi

    2010-09-15

    Macroscale emissions modeling approaches have been widely applied in impact assessments of mobile source emissions. However, these approaches poorly characterize the spatial distribution of emissions and have been shown to underestimate emissions of some pollutants. To quantify the implications of these limitations on exposure assessments, CO, NO(X), and HC emissions from diesel transit buses were estimated at 50 m intervals along a bus rapid transit route using a microscale emissions modeling approach. The impacted population around the route was estimated using census, pedestrian count and transit ridership data. Emissions exhibited significant spatial variability. In intervals near major intersections and bus stops, emissions were 1.6-3.0 times higher than average. The coincidence of these emission hot spots and peaks in pedestrian populations resulted in a 20-40% increase in exposure compared to estimates that assumed homogeneous spatial distributions of emissions and/or populations along the route. An additional 19-30% increase in exposure resulted from the underestimate of CO and NO(X) emissions by macroscale modeling approaches. The results of this study indicate that macroscale modeling approaches underestimate exposure due to poor characterization of the influence of vehicle activity on the spatial distribution of emissions and total emissions.
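    The coincidence effect reported here is essentially the covariance between the emission and population profiles along the route. A minimal sketch of the two exposure estimates being contrasted, with hypothetical numbers and a deliberately crude exposure proxy:

```python
def colocated_exposure(emissions, population):
    """Exposure proxy when emissions and people are matched interval by interval."""
    return sum(e * p for e, p in zip(emissions, population))

def homogeneous_exposure(emissions, population):
    """Same totals, but assuming emissions are spread uniformly along the route."""
    mean_emission = sum(emissions) / len(emissions)
    return mean_emission * sum(population)

# hot spots (intersections, bus stops) coinciding with pedestrian peaks:
emissions = [1.0, 1.0, 4.0]      # per 50 m interval (hypothetical units)
population = [10.0, 10.0, 40.0]
```

    Here the co-located estimate (180) exceeds the homogeneous one (120) by 50%, the same kind of gap as the 20-40% increase reported in the study.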

  6. A model-free characterization of recurrences in stationary time series

    NASA Astrophysics Data System (ADS)

    Chicheportiche, Rémy; Chakraborti, Anirban

    2017-05-01

    Study of recurrences in earthquakes, climate, financial time-series, etc. is crucial to better forecast disasters and limit their consequences. Most of the previous phenomenological studies of recurrences have involved only a long-ranged autocorrelation function, and ignored the multi-scaling properties induced by potential higher order dependencies. We argue that copulas is a natural model-free framework to study non-linear dependencies in time series and related concepts like recurrences. Consequently, we arrive at the facts that (i) non-linear dependences do impact both the statistics and dynamics of recurrence times, and (ii) the scaling arguments for the unconditional distribution may not be applicable. Hence, fitting and/or simulating the intertemporal distribution of recurrence intervals is very much system specific, and cannot actually benefit from universal features, in contrast to the previous claims. This has important implications in epilepsy prognosis and financial risk management applications.

  7. Lévy walks with variable waiting time: A ballistic case

    NASA Astrophysics Data System (ADS)

    Kamińska, A.; Srokowski, T.

    2018-06-01

    The Lévy walk process for a lower interval of an excursion times distribution (α < 1) is discussed. The particle rests between the jumps, and the waiting time is position-dependent. Two cases are considered: a rising and diminishing waiting time rate ν(x), which require different approximations of the master equation. The process comprises two phases of the motion: particles at rest and in flight. The density distributions for them are derived, as a solution of corresponding fractional equations. For strongly falling ν(x), the resting particles density assumes the α-stable form (truncated at fronts), and the process resolves itself to the Lévy flights. The diffusion is enhanced for this case but no longer ballistic, in contrast to the case for the rising ν(x). The analytical results are compared with Monte Carlo trajectory simulations. The results qualitatively agree with observed properties of human and animal movements.
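    A minimal Monte Carlo sketch of the flight phase of such a walk, with heavy-tailed flight durations for α < 1; the position-dependent waiting rate ν(x) of the paper is omitted here for brevity:

```python
import random

def levy_walk_final_position(alpha, speed, t_total, rng):
    """One ballistic Lévy-walk trajectory: flights of Pareto(alpha) duration in a
    random +/- direction at constant speed, truncated at total time t_total."""
    x = t = 0.0
    while t < t_total:
        tau = min(rng.paretovariate(alpha), t_total - t)  # heavy-tailed for alpha < 1
        x += rng.choice((-1.0, 1.0)) * speed * tau
        t += tau
    return x

positions = [levy_walk_final_position(0.5, 1.0, 100.0, random.Random(s))
             for s in range(200)]
```

    Because motion within a flight is ballistic, |x| never exceeds speed × t_total; for α < 1 a single flight often spans most of the observation window, which is the ballistic regime the abstract contrasts with the enhanced-diffusion case.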

  8. Situational Lightning Climatologies

    NASA Technical Reports Server (NTRS)

    Bauman, William; Crawford, Winifred

    2010-01-01

    Research has revealed distinct spatial and temporal distributions of lightning occurrence that are strongly influenced by large-scale atmospheric flow regimes. It was believed there were two flow systems, but it has been discovered that actually there are seven distinct flow regimes. The Applied Meteorology Unit (AMU) has recalculated the lightning climatologies for the Shuttle Landing Facility (SLF), and the eight airfields in the National Weather Service in Melbourne (NWS MLB) County Warning Area (CWA) using individual lightning strike data to improve the accuracy of the climatologies. The software determines the location of each CG lightning strike with 5-, 10-, 20-, and 30-nmi (~9.3-, 18.5-, 37-, 55.6-km) radii from each airfield. Each CG lightning strike is binned at 1-, 3-, and 6-hour intervals at each specified radius. The software merges the CG lightning strike time intervals and distance with each wind flow regime and creates probability statistics for each time interval, radii, and flow regime, and stratifies them by month and warm season. The AMU also updated the graphical user interface (GUI) with the new data.
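    The binning step described above can be sketched roughly as follows. The record format and distance handling here are assumptions for illustration (flat-earth distances, tuple records); the AMU's actual software is not described in this abstract:

```python
import math
from collections import Counter

RADII_NMI = (5, 10, 20, 30)

def bin_strikes(strikes, center, hours_per_bin=3):
    """Count CG strikes inside each radius of `center`, binned by time of day.
    strikes: iterable of (lat, lon, hour); 1 degree of latitude ~ 60 nmi."""
    counts = Counter()
    for lat, lon, hour in strikes:
        d_nmi = 60.0 * math.hypot(lat - center[0],
                                  (lon - center[1]) * math.cos(math.radians(center[0])))
        for r in RADII_NMI:
            if d_nmi <= r:
                counts[(r, hour // hours_per_bin)] += 1
    return counts
```

    Dividing each count by the number of days observed under a given flow regime turns these counts into the per-interval, per-radius, per-regime probabilities the climatology reports.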

  9. Computing Role Assignments of Proper Interval Graphs in Polynomial Time

    NASA Astrophysics Data System (ADS)

    Heggernes, Pinar; van't Hof, Pim; Paulusma, Daniël

    A homomorphism from a graph G to a graph R is locally surjective if its restriction to the neighborhood of each vertex of G is surjective. Such a homomorphism is also called an R-role assignment of G. Role assignments have applications in distributed computing, social network theory, and topological graph theory. The Role Assignment problem has as input a pair of graphs (G,R) and asks whether G has an R-role assignment. This problem is NP-complete already on input pairs (G,R) where R is a path on three vertices. So far, the only known non-trivial tractable case consists of input pairs (G,R) where G is a tree. We present a polynomial time algorithm that solves Role Assignment on all input pairs (G,R) where G is a proper interval graph. Thus we identify the first graph class other than trees on which the problem is tractable. As a complementary result, we show that the problem is Graph Isomorphism-hard on chordal graphs, a superclass of proper interval graphs and trees.
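    The defining condition is easy to state operationally: f is an R-role assignment of G iff, for every vertex v, the set of roles appearing on v's neighbours equals the full neighbourhood of f(v) in R. A brute-force checker for tiny graphs (exponential in |G|, for illustration only; it is not the paper's polynomial-time algorithm):

```python
from itertools import product

def has_role_assignment(G, R):
    """G, R: adjacency dicts {vertex: set(neighbours)}. Tries every vertex->role map."""
    g_vertices, roles = list(G), list(R)
    for choice in product(roles, repeat=len(g_vertices)):
        f = dict(zip(g_vertices, choice))
        # locally surjective homomorphism: roles seen around v == neighbourhood of f(v)
        if all({f[u] for u in G[v]} == R[f[v]] for v in g_vertices):
            return True
    return False
```

    For example, the path on three vertices admits a role assignment onto itself (the identity works), while a single edge admits none onto that path, since no vertex of the edge can cover both neighbours of the middle role.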

  10. An Element of Determinism in a Stochastic Flagellar Motor Switch

    PubMed Central

    Xie, Li; Altindal, Tuba; Wu, Xiao-Lun

    2015-01-01

    Marine bacterium Vibrio alginolyticus uses a single polar flagellum to navigate in an aqueous environment. Similar to Escherichia coli cells, the polar flagellar motor has two states; when the motor is counter-clockwise, the cell swims forward and when the motor is clockwise, the cell swims backward. V. alginolyticus also incorporates a direction randomization step at the start of the forward swimming interval by flicking its flagellum. To gain an understanding on how the polar flagellar motor switch is regulated, distributions of the forward Δf and backward Δb intervals are investigated herein. We found that the steady-state probability density functions, P(Δf) and P(Δb), of freely swimming bacteria are strongly peaked at a finite time, suggesting that the motor switch is not Poissonian. The short-time inhibition is sufficiently strong and long lasting, i.e., several hundred milliseconds for both intervals, which is readily observed and characterized. Treating motor reversal dynamics as a first-passage problem, which results from conformation fluctuations of the motor switch, we calculated P(Δf) and P(Δb) and found good agreement with the measurements. PMID:26554590

  12. Empirical likelihood-based confidence intervals for the sensitivity of a continuous-scale diagnostic test at a fixed level of specificity.

    PubMed

    Qin, Gengsheng; Davis, Angela E; Jing, Bing-Yi

    2011-06-01

    For a continuous-scale diagnostic test, it is often of interest to find the range of the sensitivity of the test at the cut-off that yields a desired specificity. In this article, we first define a profile empirical likelihood ratio for the sensitivity of a continuous-scale diagnostic test and show that its limiting distribution is a scaled chi-square distribution. We then propose two new empirical likelihood-based confidence intervals for the sensitivity of the test at a fixed level of specificity by using the scaled chi-square distribution. Simulation studies are conducted to compare the finite sample performance of the newly proposed intervals with the existing intervals for the sensitivity in terms of coverage probability. A real example is used to illustrate the application of the recommended methods.
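    The quantity of interest can be computed directly from case/control marker samples. The sketch below pairs it with a plain percentile bootstrap rather than the paper's empirical-likelihood construction, purely for illustration:

```python
import random

def sensitivity_at_specificity(cases, controls, spec=0.90):
    """Sensitivity at the empirical cut-off yielding specificity `spec`
    (larger marker values taken to indicate disease)."""
    cut = sorted(controls)[int(spec * len(controls)) - 1]  # empirical quantile
    return sum(x > cut for x in cases) / len(cases)

def bootstrap_ci(cases, controls, spec=0.90, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for that sensitivity."""
    rng = random.Random(seed)
    stats = sorted(sensitivity_at_specificity(rng.choices(cases, k=len(cases)),
                                              rng.choices(controls, k=len(controls)),
                                              spec)
                   for _ in range(n_boot))
    return stats[int(n_boot * alpha / 2)], stats[int(n_boot * (1 - alpha / 2)) - 1]
```

    The empirical-likelihood intervals of the paper are designed to improve on exactly this kind of resampling interval in coverage probability, especially for small samples.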

  13. The role of retinopathy distribution and other lesion types for the definition of examination intervals during screening for diabetic retinopathy.

    PubMed

    Ometto, Giovanni; Erlandsen, Mogens; Hunter, Andrew; Bek, Toke

    2017-06-01

    It has previously been shown that the intervals between screening examinations for diabetic retinopathy can be optimized by including individual risk factors for the development of the disease in the risk assessment. However, in some cases, the risk model calculating the screening interval may recommend a different interval than an experienced clinician. The purpose of this study was to evaluate the influence of factors unrelated to diabetic retinopathy and the distribution of lesions for discrepancies between decisions made by the clinician and the risk model. Therefore, fundus photographs from 90 screening examinations where the recommendations of the clinician and a risk model had been discrepant were evaluated. Forty features were defined to describe the type and location of the lesions, and classification and ranking techniques were used to assess whether the features could predict the discrepancy between the grader and the risk model. Suspicion of tumours, retinal degeneration and vascular diseases other than diabetic retinopathy could explain why the clinician recommended shorter examination intervals than the model. Additionally, the regional distribution of microaneurysms/dot haemorrhages was important for defining a photograph as belonging to the group where both the clinician and the risk model had recommended a short screening interval as opposed to the other decision alternatives. Features unrelated to diabetic retinopathy and the regional distribution of retinal lesions may affect the recommendation of the examination interval during screening for diabetic retinopathy. The development of automated computerized algorithms for extracting information about the type and location of retinal lesions could be expected to further optimize examination intervals during screening for diabetic retinopathy. © 2016 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  14. Emergence of patterns in random processes

    NASA Astrophysics Data System (ADS)

    Newman, William I.; Turcotte, Donald L.; Malamud, Bruce D.

    2012-08-01

    Sixty years ago, it was observed that any independent and identically distributed (i.i.d.) random variable would produce a pattern of peak-to-peak sequences with, on average, three events per sequence. This outcome was employed to show that randomness could yield, as a null hypothesis for animal populations, an explanation for their apparent 3-year cycles. We show how we can explicitly obtain a universal distribution of the lengths of peak-to-peak sequences in time series and that this can be employed for long data sets as a test of their i.i.d. character. We illustrate the validity of our analysis utilizing the peak-to-peak statistics of a Gaussian white noise. We also consider the nearest-neighbor cluster statistics of point processes in time. If the time intervals are random, we show that cluster size statistics are identical to the peak-to-peak sequence statistics of time series. In order to study the influence of correlations in a time series, we determine the peak-to-peak sequence statistics for the Langevin equation of kinetic theory leading to Brownian motion. To test our methodology, we consider a variety of applications. Using a global catalog of earthquakes, we obtain the peak-to-peak statistics of earthquake magnitudes and the nearest neighbor interoccurrence time statistics. In both cases, we find good agreement with the i.i.d. theory. We also consider the interval statistics of the Old Faithful geyser in Yellowstone National Park. In this case, we find a significant deviation from the i.i.d. theory which we attribute to antipersistence. We consider the interval statistics using the AL index of geomagnetic substorms. We again find a significant deviation from i.i.d. behavior that we attribute to mild persistence. Finally, we examine the behavior of Standard and Poor's 500 stock index's daily returns from 1928-2011 and show that, while it is close to being i.i.d., there is, again, significant persistence. 
We expect that there will be many other applications of our methodology both to interoccurrence statistics and to time series.
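    The i.i.d. benchmark is easy to reproduce: for a continuous i.i.d. series the probability that an interior point is a local maximum is 1/3, so peaks sit on average three samples apart. A minimal check:

```python
import random

def peak_to_peak_lengths(x):
    """Spacings (in samples) between successive interior local maxima of x."""
    peaks = [i for i in range(1, len(x) - 1) if x[i - 1] < x[i] > x[i + 1]]
    return [b - a for a, b in zip(peaks, peaks[1:])]

rng = random.Random(42)
series = [rng.random() for _ in range(100_000)]
lengths = peak_to_peak_lengths(series)
mean_spacing = sum(lengths) / len(lengths)  # close to 3 for an i.i.d. sequence
```

    Comparing the empirical distribution of these lengths against the i.i.d. prediction is the test applied above to earthquake magnitudes, geyser intervals, the AL index, and stock returns.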

  15. Seasonal variation in size-dependent survival of juvenile Atlantic salmon (Salmo salar): Performance of multistate capture-mark-recapture models

    USGS Publications Warehouse

    Letcher, B.H.; Horton, G.E.

    2008-01-01

    We estimated the magnitude and shape of size-dependent survival (SDS) across multiple sampling intervals for two cohorts of stream-dwelling Atlantic salmon (Salmo salar) juveniles using multistate capture-mark-recapture (CMR) models. Simulations designed to test the effectiveness of multistate models for detecting SDS in our system indicated that error in SDS estimates was low and that both time-invariant and time-varying SDS could be detected with sample sizes of >250, average survival of >0.6, and average probability of capture of >0.6, except for cases of very strong SDS. In the field (N ≈ 750, survival 0.6-0.8 among sampling intervals, probability of capture 0.6-0.8 among sampling occasions), about one-third of the sampling intervals showed evidence of SDS, with poorer survival of larger fish during the age-2+ autumn and quadratic survival (opposite direction between cohorts) during age-1+ spring. The varying magnitude and shape of SDS among sampling intervals suggest a potential mechanism for the maintenance of the very wide observed size distributions. Estimating SDS using multistate CMR models appears complementary to established approaches, can provide estimates with low error, and can be used to detect intermittent SDS. © 2008 NRC Canada.

  16. Discrete Time Rescaling Theorem: Determining Goodness of Fit for Discrete Time Statistical Models of Neural Spiking

    PubMed Central

    Haslinger, Robert; Pipa, Gordon; Brown, Emery

    2010-01-01

    One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time rescaling theorem provides a goodness of fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model’s spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies upon assumptions of continuously defined time and instantaneous events. However spikes have finite width and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time rescaling theorem which analytically corrects for the effects of finite resolution. This allows us to define a rescaled time which is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting Generalized Linear Models (GLMs) to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false positive rate of the KS test and greatly increasing the reliability of model evaluation based upon the time rescaling theorem. PMID:20608868
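    In continuous time, the theorem's recipe is short: rescale each ISI by the model intensity and compare the result to Exp(1) with a KS test. A sketch for the constant-rate case (the discrete-time corrections the paper proposes are not reproduced here):

```python
import math
import random

def rescale_isis(spike_times, rate):
    """Under a constant-rate model, tau_k = rate * ISI_k should be Exp(1)."""
    return [rate * (b - a) for a, b in zip(spike_times, spike_times[1:])]

def ks_distance_to_exp1(taus):
    """Kolmogorov-Smirnov distance between the rescaled times and Exp(1)."""
    taus = sorted(taus)
    n = len(taus)
    return max(max((i + 1) / n - (1 - math.exp(-t)),
                   (1 - math.exp(-t)) - i / n)
               for i, t in enumerate(taus))

# simulate a 20 Hz homogeneous Poisson spike train and check goodness of fit
rng = random.Random(1)
spikes, t = [], 0.0
for _ in range(5000):
    t += rng.expovariate(20.0)
    spikes.append(t)
d = ks_distance_to_exp1(rescale_isis(spikes, 20.0))
```

    For a correct continuous-time model d stays near 1/sqrt(n); the paper's point is that once time is discretized into bins, this same statistic inflates even for an exactly correct model, which is what the proposed corrections repair.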

  17. Discrete time rescaling theorem: determining goodness of fit for discrete time statistical models of neural spiking.

    PubMed

    Haslinger, Robert; Pipa, Gordon; Brown, Emery

    2010-10-01

    One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time-rescaling theorem provides a goodness-of-fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model's spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov-Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies on assumptions of continuously defined time and instantaneous events. However, spikes have finite width, and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time-rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time-rescaling theorem that analytically corrects for the effects of finite resolution. This allows us to define a rescaled time that is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting generalized linear models to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false-positive rate of the KS test and greatly increasing the reliability of model evaluation based on the time-rescaling theorem.
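
    The classical rescaling that this paper adapts can be sketched in a few lines. The snippet below is a minimal pure-Python illustration, not the paper's discrete-time correction: it rescales ISIs under a hypothetical constant intensity and computes the KS statistic of the rescaled values against the uniform distribution (equivalent to testing the exponential distribution of rescaled times), for both a correct and a misspecified model.

```python
import math
import random

def rescale_isis(isis, rate):
    # Classical time rescaling with a constant intensity `rate`:
    # z = 1 - exp(-rate * isi) should be Uniform(0, 1) if the model is correct.
    return [1.0 - math.exp(-rate * t) for t in isis]

def ks_uniform(zs):
    # Kolmogorov-Smirnov statistic of a sample against Uniform(0, 1).
    zs = sorted(zs)
    n = len(zs)
    d = 0.0
    for i, z in enumerate(zs):
        d = max(d, abs((i + 1) / n - z), abs(z - i / n))
    return d

random.seed(0)
rate = 5.0
isis = [random.expovariate(rate) for _ in range(1000)]
d = ks_uniform(rescale_isis(isis, rate))          # correct intensity: small statistic
d_bad = ks_uniform(rescale_isis(isis, 2 * rate))  # misspecified intensity: large statistic
```

A misspecified intensity shows up as a large KS statistic, which is the failure mode the discrete-time corrections in the paper are designed to distinguish from mere binning artifacts.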

  18. GRB Diversity vs. Utility as Cosmological Probes

    NASA Technical Reports Server (NTRS)

    Norris, J. P.; Scargle, J. D.; Bonnell, J. T.; Nemiroff, R. J.; Young, Richard E. (Technical Monitor)

    1997-01-01

    Recent detections of apparent gamma-ray burst (GRB) counterparts in optical and radio wavebands strongly favor the cosmological distance scale, at least for some GRBs, opening the possibility of GRBs serving as cosmological probes. But GRBs exhibit great diversity: in total duration; in number, width, and configuration of pulses; and in pulse and overall spectral evolution. However, it is possible that a portion of this behavior reflects a luminosity distribution, and that evolution with cosmic time introduces dispersion into the average GRB characteristics -- issues analogous to those encountered with quasars. The temporal domain offers a rich avenue to investigate this problem. When corrected for assumed spectral redshift, time dilation of event durations, pulse widths, and intervals between pulses must yield the same time-dilation factor as a function of peak flux; otherwise, a luminosity distribution may be the cause of observed time-dilation effects. We describe results of burst analysis using an automated, Bayesian-based algorithm to determine burst temporal characteristics for different peak-flux groups, and derive constraints on any physical process that would introduce a luminosity distribution.

  19. Power law for the duration of recession and prosperity in Latin American countries

    NASA Astrophysics Data System (ADS)

    Redelico, Francisco O.; Proto, Araceli N.; Ausloos, Marcel

    2008-11-01

    Ormerod and Mounfield [P. Ormerod, C. Mounfield, Power law distribution of duration and magnitude of recessions in capitalist economies: Breakdown of scaling, Physica A 293 (2001) 573] and Ausloos et al. [M. Ausloos, J. Mikiewicz, M. Sanglier, The durations of recession and prosperity: Does their distribution follow a power or an exponential law? Physica A 339 (2004) 548] have independently analyzed the duration of recessions for developed countries through the evolution of the GDP in different time windows. It was found that there is a power law governing the duration distribution. We have analyzed data collected from 19 Latin American countries in order to observe whether such results also hold for developing countries. The case of prosperity years is also discussed. We observe that the power law for recession time intervals [1] is valid for Latin American countries as well. Thus an interesting point emerges: the same scaling time (ca. 1 year) is found in the case of recessions for all three data sets, and this could represent a universal feature. Other time-scale parameters differ significantly from each other.
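
    The power-law fitting discussed above can be illustrated with the standard maximum-likelihood estimator for a continuous power law (the Hill/Clauset form). This is a generic sketch on simulated durations, not the papers' GDP data or their exact fitting procedure.

```python
import math
import random

def power_law_mle(samples, xmin):
    # Continuous power-law MLE for p(x) ~ x^(-alpha), x >= xmin:
    # alpha_hat = 1 + n / sum(ln(x_i / xmin)).
    tail = [x for x in samples if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

random.seed(1)
alpha, xmin = 2.5, 1.0
# Inverse-transform sampling from a power law with exponent `alpha`.
data = [xmin * (1.0 - random.random()) ** (-1.0 / (alpha - 1.0)) for _ in range(5000)]
alpha_hat = power_law_mle(data, xmin)  # should recover a value near 2.5
```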

  20. Ethnicity-specific birthweight distributions improve identification of term newborns at risk for short-term morbidity.

    PubMed

    Hanley, Gillian E; Janssen, Patricia A

    2013-11-01

    We aimed to determine whether ethnicity-specific birthweight distributions more accurately identify newborns at risk for short-term neonatal morbidity associated with small for gestational age (SGA) birth than population-based distributions not stratified on ethnicity. We examined 100,463 singleton term infants born to parents in Washington State between Jan. 1, 2006, and Dec. 31, 2008. Using multivariable logistic regression models, we compared the ability of an ethnicity-specific growth distribution and a population-based growth distribution to predict which infants were at increased risk for Apgar score <7 at 5 minutes, admission to the neonatal intensive care unit, ventilation, extended length of stay in hospital, hypothermia, hypoglycemia, and infection. Newborns considered SGA by ethnicity-specific weight distributions had the highest rates of each of the adverse outcomes assessed, more than double those of infants only considered SGA by the population-based standards. When controlling for mother's age, parity, body mass index, education, gestational age, mode of delivery, and marital status, newborns considered SGA by ethnicity-specific birthweight distributions were between 2 and 7 times more likely to suffer from the adverse outcomes listed above than infants who were not SGA. In contrast, newborns considered SGA by population-based birthweight distributions alone were at no higher risk of any adverse outcome except hypothermia (adjusted odds ratio, 2.76; 95% confidence interval, 1.68-4.55) and neonatal intensive care unit admission (adjusted odds ratio, 1.40; 95% confidence interval, 1.18-1.67). Ethnicity-specific birthweight distributions were significantly better at identifying the infants at higher risk of short-term neonatal morbidity, suggesting that their use could save resources and unnecessary parental anxiety. Copyright © 2013 Mosby, Inc. All rights reserved.

  1. Reliability analysis based on the losses from failures.

    PubMed

    Todinov, M T

    2006-04-01

    The conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. On the basis of counterexamples, it is demonstrated that this is valid only if all failures are associated with the same losses. In case of failures associated with different losses, a system with larger reliability is not necessarily characterized by smaller losses from failures. Consequently, a theoretical framework and models are proposed for a reliability analysis, linking reliability and the losses from failures. Equations related to the distributions of the potential losses from failure have been derived. It is argued that the classical risk equation only estimates the average value of the potential losses from failure and does not provide insight into the variability associated with the potential losses. Equations have also been derived for determining the potential and the expected losses from failures for nonrepairable and repairable systems with components arranged in series, with arbitrary life distributions. The equations are also valid for systems/components with multiple mutually exclusive failure modes. The expected losses given failure is a linear combination of the expected losses from failure associated with the separate failure modes scaled by the conditional probabilities with which the failure modes initiate failure. On this basis, an efficient method for simplifying complex reliability block diagrams has been developed. Branches of components arranged in series whose failures are mutually exclusive can be reduced to single components with equivalent hazard rate, downtime, and expected costs associated with intervention and repair. A model for estimating the expected losses from early-life failures has also been developed. 
For a specified time interval, the expected losses from early-life failures are the sum, over the time intervals covering the early-life failures region, of the expected number of failures in each interval multiplied by the expected losses given failure characterizing that interval. For complex systems whose components are not logically arranged in series, discrete simulation algorithms and software have been created for determining the losses from failures in terms of expected lost production time, cost of intervention, and cost of replacement. Different system topologies are assessed to determine the effect of modifications of the system topology on the expected losses from failures. It is argued that the reliability allocation in a production system should be done to maximize the profit/value associated with the system. Consequently, a method for setting reliability requirements and reliability allocation maximizing the profit by minimizing the total cost has been developed. Reliability allocation that maximizes the profit in case of a system consisting of blocks arranged in series is achieved by determining for each block individually the reliabilities of the components in the block that minimize the sum of the capital costs, operation costs, and the expected losses from failures. A Monte Carlo simulation based net present value (NPV) cash-flow model has also been proposed, which has significant advantages over cash-flow models based on the expected value of the losses from failures per time interval. Unlike these models, the proposed model has the capability to reveal the variation of the NPV due to the different numbers of failures occurring during a specified time interval (e.g., during one year). The model also permits tracking the impact of the distribution pattern of failure occurrences and the time dependence of the losses from failures.

  2. Interval between onset of psoriasis and psoriatic arthritis comparing the UK Clinical Practice Research Datalink with a hospital-based cohort.

    PubMed

    Tillett, William; Charlton, Rachel; Nightingale, Alison; Snowball, Julia; Green, Amelia; Smith, Catherine; Shaddick, Gavin; McHugh, Neil

    2017-12-01

    To describe the time interval between the onset of psoriasis and PsA in the UK primary care setting and compare with a large, well-classified secondary care cohort. Patients with PsA and/or psoriasis were identified in the UK Clinical Practice Research Datalink (CPRD). The secondary care cohort comprised patients from the Bath PsA longitudinal observational cohort study. For incident PsA patients in the CPRD who also had a record of psoriasis, the time interval between PsA diagnosis and first psoriasis record was calculated. Comparisons were made with the time interval between diagnoses in the Bath cohort. There were 5272 eligible PsA patients in the CPRD and 815 in the Bath cohort. In both cohorts, the majority of patients (82.3 and 61.3%, respectively) had psoriasis before their PsA diagnosis or within the same calendar year (10.5 and 23.8%), with only a minority receiving their PsA diagnosis first (7.1 and 14.8%). Excluding those who presented with arthritis before psoriasis, the median time between diagnoses was 8 years [interquartile range (IQR) 2-15] in the CPRD and 7 years (IQR 0-20) in the Bath cohort. In the CPRD, 60.1 and 75.1% received their PsA diagnosis within 10 and 15 years of their psoriasis diagnosis, respectively; this was comparable with 57.2 and 67.7% in the Bath cohort. A similar distribution for the time interval between psoriasis and arthritis was observed in the CPRD and secondary care cohort. These data can inform screening strategies and support the validity of data from each cohort. © The Author 2017. Published by Oxford University Press on behalf of the British Society for Rheumatology. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  3. Determination of the boiling-point distribution by simulated distillation from n-pentane through n-tetratetracontane in 70 to 80 seconds.

    PubMed

    Lubkowitz, Joaquin A; Meneghini, Roberto I

    2002-01-01

    This work presents the determination of boiling-point distributions by simulated distillation with direct-column heating rather than oven-column heating. Column-heating rates of 300 degrees C/min are obtained, yielding retention times of 73 s for n-tetratetracontane. The calibration curves of the retention time versus the boiling point, in the range of n-pentane to n-tetratetracontane, are identical to those obtained by slower oven-heating rates. The boiling-point distribution of the reference gas oil is compared with that obtained with column oven heating at rates of 15 to 40 degrees C/min. The results show boiling-point distribution values nearly the same (1-2 degrees F) as those obtained with oven column heating from the initial boiling point to 80% distilled off. Slightly higher differences are obtained (3-4 degrees F) for the 80% distilled to final boiling-point interval. Nonetheless, allowed consensus differences are never exceeded. The precision of the boiling-point distributions (expressed as standard deviations) is 0.1-0.3% for the data obtained in the direct column-heating mode.

  4. Robust Confidence Interval for a Ratio of Standard Deviations

    ERIC Educational Resources Information Center

    Bonett, Douglas G.

    2006-01-01

    Comparing variability of test scores across alternate forms, test conditions, or subpopulations is a fundamental problem in psychometrics. A confidence interval for a ratio of standard deviations is proposed that performs as well as the classic method with normal distributions and performs dramatically better with nonnormal distributions. A simple…

  5. Classification mapping and species identification of salt marshes based on a short-time interval NDVI time-series from HJ-1 optical imagery

    NASA Astrophysics Data System (ADS)

    Sun, Chao; Liu, Yongxue; Zhao, Saishuai; Zhou, Minxi; Yang, Yuhao; Li, Feixue

    2016-03-01

    Salt marshes are seen as the most dynamic and valuable ecosystems in coastal zones, and in these areas, it is crucial to obtain accurate remote sensing information on the spatial distributions of species over time. However, discriminating various types of salt marsh is rather difficult because of their strong spectral similarities. Previous salt marsh mapping studies have focused mainly on high spatial and spectral (i.e., hyperspectral) resolution images combined with auxiliary information; however, the results are often limited to small regions. With a high temporal and moderate spatial resolution, the Chinese HuanJing-1 (HJ-1) satellite optical imagery can be used not only to monitor phenological changes of salt marsh vegetation over short-time intervals, but also to obtain coverage of large areas. Here, we apply HJ-1 satellite imagery to the middle coast of Jiangsu in east China to monitor changes in saltmarsh vegetation cover. First, we constructed a monthly NDVI time-series to classify various types of salt marsh and then we tested the possibility of using compressed time-series continuously, to broaden the applicability of this particular approach. Our principal findings are as follows: (1) the overall accuracy of salt marsh mapping based on the monthly NDVI time-series was 90.3%, which was ∼16.0% higher than the single-phase classification strategy; (2) a compressed time-series, including NDVI from six key months (April, June-September, and November), demonstrated very little reduction (2.3%) in overall accuracy but led to obvious improvements in unstable regions; and (3) a simple rule for Spartina alterniflora identification was established using a scene solely from November, which may provide an effective way for regularly monitoring its distribution.

  6. Beginning of a new age: How did freshwater gastropods respond to the Quaternary climate change in Europe?

    NASA Astrophysics Data System (ADS)

    Georgopoulou, Elisavet; Neubauer, Thomas A.; Strona, Giovanni; Kroh, Andreas; Mandic, Oleg; Harzhauser, Mathias

    2016-10-01

    The well documented fossil record of European Quaternary freshwater gastropods offers a unique resource for continental-scale biogeographical analyses. Here, we assembled a dataset including 338 freshwater gastropod taxa from 1058 localities across Europe, which we used to explore how freshwater gastropod communities varied in space and time across six distinct time intervals of the Quaternary, i.e. Gelasian, Calabrian, Middle Pleistocene, Last Interglacial, Last Glacial and Holocene. We took into consideration both species richness and qualitative structural patterns, comparing turnover rates between time intervals and examining variations in community nestedness-segregation patterns. Species richness differed significantly between time intervals. The Early Pleistocene showed the highest diversity, likely because of the contribution of long-lived aquatic systems like the lakes Bresse and Tiberino that promoted speciation and endemism. The rich Middle to Late Pleistocene and Holocene assemblages were mostly linked to fluvial and/or lacustrine systems with short temporal durations. We identified a major turnover event at the Plio-Pleistocene boundary, related to the demise of long-lived lakes and of their rich, endemic faunas at the end of the Pliocene. In the subsequent intervals, little or no turnover was observed. We also observed a pattern of high segregation in Early Pleistocene communities, associated with the abundance of endemic species with small distribution ranges, and reflecting the provincial character of the aquatic freshwater systems at that time. This structured pattern disintegrated gradually towards the Middle Pleistocene and remained unstructured up to present. In particular, spatial patterns of community nestedness-segregation in the Last Interglacial and Holocene suggest a random recolonization of freshwater habitats mostly by generalist species following deglaciation.

  7. Subjective and Real Time: Coding Under Different Drug States

    PubMed Central

    Sanchez-Castillo, Hugo; Taylor, Kathleen M.; Ward, Ryan D.; Paz-Trejo, Diana B.; Arroyo-Araujo, Maria; Castillo, Oscar Galicia; Balsam, Peter D.

    2016-01-01

    Organisms are constantly extracting information from the temporal structure of the environment, which allows them to select appropriate actions and predict impending changes. Several lines of research have suggested that interval timing is modulated by the dopaminergic system. It has been proposed that higher levels of dopamine cause an internal clock to speed up, whereas less dopamine causes a deceleration of the clock. In most experiments the subjects are first trained to perform a timing task while drug free. Consequently, most of what is known about the influence of dopaminergic modulation of timing is on well-established timing performance. In the current study the impact of altered DA on the acquisition of temporal control was the focal question. Thirty male Sprague-Dawley rats were distributed randomly into three different groups (haloperidol, d-amphetamine or vehicle). Each animal received an injection 15 min prior to the start of every session from the beginning of interval training. The subjects were trained on a Fixed Interval (FI) 16-s schedule followed by training on a peak procedure in which 64-s non-reinforced peak trials were intermixed with FI trials. In a final test session all subjects were given vehicle injections and 10 consecutive non-reinforced peak trials to see if training under drug conditions altered the encoding of time. The current study suggests that administration of drugs that modulate dopamine does not alter the encoding of temporal durations but does acutely affect the initiation of responding. PMID:27087743

  8. Performance of finite order distribution-generated universal portfolios

    NASA Astrophysics Data System (ADS)

    Pang, Sook Theng; Liew, How Hui; Chang, Yun Fah

    2017-04-01

    A Constant Rebalanced Portfolio (CRP) is an investment strategy which reinvests by redistributing wealth equally among a set of stocks. The empirical performance of the distribution-generated universal portfolio strategies is analysed experimentally using 10 high-volume stocks from different categories on the Kuala Lumpur Stock Exchange. The time interval of study is from January 2000 to December 2015, which includes the credit crisis from September 2008 to March 2009. The performance of the finite-order universal portfolio strategies is shown to be better than that of the Constant Rebalanced Portfolio for some selected parameters of the proposed universal portfolios.
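
    The CRP baseline mentioned above can be illustrated in a few lines: at every period, wealth is redistributed according to fixed weights, so the wealth multiplier is the product of weighted price relatives. This is a generic sketch with hypothetical price relatives, not the study's stock data.

```python
def crp_wealth(price_relatives, weights):
    # Wealth multiplier of a constant rebalanced portfolio: each period the
    # capital is redistributed according to `weights`, so wealth grows by the
    # weighted average of the assets' price relatives for that period.
    wealth = 1.0
    for period in price_relatives:
        wealth *= sum(w * x for w, x in zip(weights, period))
    return wealth

# Two hypothetical assets: one doubles then halves, the other does the opposite.
periods = [(2.0, 0.5), (0.5, 2.0)] * 10
buy_and_hold = 1.0                      # each asset alone ends where it started
crp = crp_wealth(periods, (0.5, 0.5))   # rebalancing compounds a 1.25x gain per period
```

This toy case shows why rebalanced strategies are an interesting benchmark: volatility alone, with no net drift in either asset, produces growth under rebalancing.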

  9. An innovative method for offshore wind farm site selection based on the interval number with probability distribution

    NASA Astrophysics Data System (ADS)

    Wu, Yunna; Chen, Kaifeng; Xu, Hu; Xu, Chuanbo; Zhang, Haobo; Yang, Meng

    2017-12-01

    There is insufficient research relating to offshore wind farm site selection in China, and current site-selection methods have two main defects. First, information is lost in two ways: the probability distribution on the interval number is implicitly assumed to be uniform, and the value of decision makers' (DMs') common opinion in evaluating criteria information is ignored. Second, differences in DMs' utility functions have received little attention. An innovative method is proposed in this article to address these drawbacks. First, a new form of interval number and its weighted operator are proposed to reflect the uncertainty and reduce information loss. Second, a new stochastic dominance degree is proposed to quantify the interval number with a probability distribution. Third, a two-stage method integrating the weighted operator with the stochastic dominance degree is proposed to evaluate the alternatives. Finally, a case from China demonstrates the effectiveness of this method.

  10. Multifractality in Cardiac Dynamics

    NASA Astrophysics Data System (ADS)

    Ivanov, Plamen Ch.; Rosenblum, Misha; Stanley, H. Eugene; Havlin, Shlomo; Goldberger, Ary

    1997-03-01

    Wavelet decomposition is used to analyze the fractal scaling properties of heart beat time series. The singularity spectrum D(h) of the variations in the beat-to-beat intervals is obtained from the wavelet transform modulus maxima which contain information on the hierarchical distribution of the singularities in the signal. Multifractal behavior is observed for healthy cardiac dynamics while pathologies are associated with loss of support in the singularity spectrum.

  11. Variable diffusion in stock market fluctuations

    NASA Astrophysics Data System (ADS)

    Hua, Jia-Chen; Chen, Lijian; Falcon, Liberty; McCauley, Joseph L.; Gunaratne, Gemunu H.

    2015-02-01

    We analyze intraday fluctuations in several stock indices to investigate the underlying stochastic processes using techniques appropriate for processes with nonstationary increments. Each of the five most actively traded stocks contains two time intervals during the day in which the variance of increments can be fitted by power-law scaling in time. The fluctuations in return within these intervals follow asymptotic bi-exponential distributions. The autocorrelation function for increments vanishes rapidly, but decays slowly for absolute and squared increments. Based on these results, we propose an intraday stochastic model with a linear variable diffusion coefficient as a lowest-order approximation to the real dynamics of financial markets, and use it to test the effects of the time-averaging techniques typically used for financial time series analysis. We find that our model replicates major stylized facts associated with empirical financial time series. We also find that ensemble averaging techniques can identify the underlying dynamics correctly, whereas time averages fail at this task. Our work indicates that ensemble-average approaches will yield new insight into the study of financial market dynamics, and the proposed model offers new insight into modeling those dynamics at microscopic time scales.

  12. Markov reward processes

    NASA Technical Reports Server (NTRS)

    Smith, R. M.

    1991-01-01

    Numerous applications in the area of computer system analysis can be effectively studied with Markov reward models. These models describe the behavior of the system with a continuous-time Markov chain, where a reward rate is associated with each state. In a reliability/availability model, up states may have reward rate 1 and down states reward rate 0 associated with them. In a queueing model, the number of jobs of a certain type in a given state may be the reward rate attached to that state. In a combined model of performance and reliability, the reward rate of a state may be the computational capacity, or a related performance measure. Expected steady-state reward rate and expected instantaneous reward rate are clearly useful measures of the Markov reward model. More generally, the distribution of accumulated reward or time-averaged reward over a finite time interval may be determined from the solution of the Markov reward model. This information is of great practical significance in situations where the workload can be well characterized (deterministically, or by continuous functions, e.g., distributions). The design process in the development of a computer system is an expensive and long term endeavor. For aerospace applications the reliability of the computer system is essential, as is the ability to complete critical workloads in a well defined real time interval. Consequently, effective modeling of such systems must take into account both performance and reliability. This fact motivates our use of Markov reward models to aid in the development and evaluation of fault tolerant computer systems.
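
    The expected accumulated reward described above can be sketched for the simplest availability model: a two-state CTMC with hypothetical failure rate lam and repair rate mu, reward rate 1 when up and 0 when down. The rough Euler integration below is an illustrative sketch, not a production CTMC solver.

```python
def expected_accumulated_reward(lam, mu, t_end, dt=1e-3):
    # States: up (reward rate 1) and down (reward rate 0).
    # Integrate the forward equations dp/dt = p Q for the 2-state CTMC with
    # failure rate `lam` and repair rate `mu`, accumulating the expected
    # reward E[∫ r(X_t) dt] = ∫ p_up(t) dt along the way.
    p_up, p_down = 1.0, 0.0   # system starts in the up state
    reward = 0.0
    for _ in range(int(t_end / dt)):
        reward += p_up * dt
        d_up = (-lam * p_up + mu * p_down) * dt
        p_up, p_down = p_up + d_up, p_down - d_up
    return reward, p_up

reward, p_up = expected_accumulated_reward(lam=0.1, mu=1.0, t_end=100.0)
steady = 1.0 / (1.0 + 0.1)  # mu / (lam + mu): steady-state availability
```

For long horizons the expected time-averaged reward approaches the steady-state availability mu / (lam + mu), which the numerical result reproduces.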

  13. Sampling effects on the identification of roadkill hotspots: Implications for survey design.

    PubMed

    Santos, Sara M; Marques, J Tiago; Lourenço, André; Medinas, Denis; Barbosa, A Márcia; Beja, Pedro; Mira, António

    2015-10-01

    Although locating wildlife roadkill hotspots is essential to mitigate road impacts, the influence of study design on hotspot identification remains uncertain. We evaluated how sampling frequency affects the accuracy of hotspot identification, using a dataset of vertebrate roadkills (n = 4427) recorded over a year of daily surveys along 37 km of roads. "True" hotspots were identified using this baseline dataset, as the 500-m segments where the number of road-killed vertebrates exceeded the upper 95% confidence limit of the mean, assuming a Poisson distribution of road-kills per segment. "Estimated" hotspots were identified likewise, using datasets representing progressively lower sampling frequencies, which were produced by extracting data from the baseline dataset at appropriate time intervals (1-30 days). Overall, 24.3% of segments were "true" hotspots, concentrating 40.4% of roadkills. For different groups, "true" hotspots accounted from 6.8% (bats) to 29.7% (small birds) of road segments, concentrating from <40% (frogs and toads, snakes) to >60% (lizards, lagomorphs, carnivores) of roadkills. Spatial congruence between "true" and "estimated" hotspots declined rapidly with increasing time interval between surveys, due primarily to increasing false negatives (i.e., missing "true" hotspots). There were also false positives (i.e., wrong "estimated" hotspots), particularly at low sampling frequencies. Spatial accuracy decay with increasing time interval between surveys was higher for smaller-bodied (amphibians, reptiles, small birds, small mammals) than for larger-bodied species (birds of prey, hedgehogs, lagomorphs, carnivores). Results suggest that widely used surveys at weekly or longer intervals may produce poor estimates of roadkill hotspots, particularly for small-bodied species. Surveying daily or at two-day intervals may be required to achieve high accuracy in hotspot identification for multiple species. Copyright © 2015 Elsevier Ltd. All rights reserved.
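
    The hotspot criterion described above can be sketched as follows. The per-segment counts are hypothetical, and the threshold used here is the 95th-percentile Poisson count at the observed mean, a simplification of the abstract's upper-confidence-limit construction.

```python
import math

def poisson_upper_limit(mean, coverage=0.95):
    # Smallest count k whose cumulative Poisson(mean) probability reaches
    # `coverage`; segments with more roadkills than this are flagged.
    k, cdf, pmf = 0, 0.0, math.exp(-mean)
    while True:
        cdf += pmf
        if cdf >= coverage:
            return k
        k += 1
        pmf *= mean / k  # recurrence: pmf(k) = pmf(k-1) * mean / k

counts = [2, 1, 0, 3, 9, 1, 2, 0, 12, 1, 2, 3]  # hypothetical roadkills per 500-m segment
mean = sum(counts) / len(counts)
limit = poisson_upper_limit(mean)
hotspots = [i for i, c in enumerate(counts) if c > limit]
```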

  14. ESTABLISHMENT OF A FIBRINOGEN REFERENCE INTERVAL IN ORNATE BOX TURTLES (TERRAPENE ORNATA ORNATA).

    PubMed

    Parkinson, Lily; Olea-Popelka, Francisco; Klaphake, Eric; Dadone, Liza; Johnston, Matthew

    2016-09-01

    This study sought to establish a reference interval for fibrinogen in healthy ornate box turtles (Terrapene ornata ornata). A total of 48 turtles were enrolled, with 42 turtles deemed to be noninflammatory, thus fitting the inclusion criteria, and utilized to estimate a fibrinogen reference interval. Turtles were excluded based upon physical examination and blood work abnormalities. A Shapiro-Wilk normality test indicated that the noninflammatory turtle fibrinogen values were normally distributed (Gaussian distribution), with an average of 108 mg/dl and a 95% confidence interval of the mean of 97.9-117 mg/dl. Turtles excluded from the reference interval because of abnormalities affecting their health did not have significantly different fibrinogen values (P = 0.313). A reference interval for healthy ornate box turtles was calculated. Further investigation into the utility of fibrinogen measurement for clinical usage in ornate box turtles is warranted.
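
    The reported summary (a mean with a normal-approximation 95% confidence interval of the mean) can be reproduced mechanically as below. The fibrinogen values are hypothetical illustrations, not the study's data.

```python
import math

def mean_ci(values, z=1.96):
    # Mean and normal-approximation 95% CI of the mean (z = 1.96):
    # m +/- z * sd / sqrt(n), with sd the sample standard deviation.
    n = len(values)
    m = sum(values) / n
    sd = math.sqrt(sum((v - m) ** 2 for v in values) / (n - 1))
    half = z * sd / math.sqrt(n)
    return m, m - half, m + half

# Hypothetical fibrinogen values (mg/dl), chosen only to average 108.
vals = [76, 139, 108, 90, 125, 99, 112, 140, 84, 107]
m, lo, hi = mean_ci(vals)
```

Note that a CI of the mean is narrower than a reference interval, which bounds the central 95% of individual values rather than the mean itself.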

  15. Complex reference values for endocrine and special chemistry biomarkers across pediatric, adult, and geriatric ages: establishment of robust pediatric and adult reference intervals on the basis of the Canadian Health Measures Survey.

    PubMed

    Adeli, Khosrow; Higgins, Victoria; Nieuwesteeg, Michelle; Raizman, Joshua E; Chen, Yunqi; Wong, Suzy L; Blais, David

    2015-08-01

    Defining laboratory biomarker reference values in a healthy population and understanding the fluctuations in biomarker concentrations throughout life and between sexes are critical to clinical interpretation of laboratory test results in different disease states. The Canadian Health Measures Survey (CHMS) has collected blood samples and health information from the Canadian household population. In collaboration with the Canadian Laboratory Initiative on Pediatric Reference Intervals (CALIPER), the data have been analyzed to determine reference value distributions and reference intervals for several endocrine and special chemistry biomarkers in pediatric, adult, and geriatric age groups. CHMS collected data and blood samples from thousands of community participants aged 3 to 79 years. We used serum samples to measure 13 immunoassay-based special chemistry and endocrine markers. We assessed reference value distributions and, after excluding outliers, calculated age- and sex-specific reference intervals, along with corresponding 90% CIs, according to CLSI C28-A3 guidelines. We observed fluctuations in biomarker reference values across the pediatric, adult, and geriatric age range, with stratification required on the basis of age for all analytes. Additional sex partitions were required for apolipoprotein AI, homocysteine, ferritin, and high sensitivity C-reactive protein. The unique collaboration between CALIPER and CHMS has enabled, for the first time, a detailed examination of the changes in various immunochemical markers that occur in healthy individuals of different ages. The robust age- and sex-specific reference intervals established in this study provide insight into the complex biological changes that take place throughout development and aging and will contribute to improved clinical test interpretation. © 2015 American Association for Clinical Chemistry.

  16. Measuring Skew in Average Surface Roughness as a Function of Surface Preparation

    NASA Technical Reports Server (NTRS)

    Stahl, Mark

    2015-01-01

    Characterizing surface roughness is important for predicting optical performance. Better measurement of surface roughness reduces polishing time, saves money and allows the science requirements to be better defined. This study characterized statistics of average surface roughness as a function of polishing time. Average surface roughness was measured at 81 locations using a Zygo white light interferometer at regular intervals during the polishing process. Each data set was fit to a normal and Largest Extreme Value (LEV) distribution; then tested for goodness of fit. We show that the skew in the average data changes as a function of polishing time.

  17. Proceedings of the Annual NASA and Department of Defense Precise Time and Time Interval (PTTI) Planning Meeting (5th), Held at Goddard Space Flight Center on December 4-6, 1973

    DTIC Science & Technology

    1972-01-01

    and police stations in Washington, and since 1877 to Western Union for nation-wide distribution. In 1904 the first operational radio time signals were...to do the job with the accuracy and low cost demanded in these days of tight operating budgets. In closing, I would like to acknowledge the fine...signal received from a celestial source is recorded at each antenna on magnetic tape, and the tapes are cross-correlated by matching the streams of

  18. The 1983 tail-era series. Volume 1: ISEE 3 plasma

    NASA Technical Reports Server (NTRS)

    Fairfield, D. H.; Phillips, J. L.

    1991-01-01

    Observations from the ISEE 3 electron analyzer are presented in plots. Electrons were measured in 15 contiguous energy levels between 8.5 and 1140 eV during individual 3-sec spacecraft spins. Times associated with each data point are the beginning time of the 3-sec data collection interval. Moments calculated from the measured distribution function are shown as density, temperature, velocity, and velocity azimuthal angle. Spacecraft ephemeris is shown at the bottom in GSE and GSM coordinates in units of Earth radii, with vertical ticks on the time axis corresponding to the printed positions.

  19. Real-time control systems: feedback, scheduling and robustness

    NASA Astrophysics Data System (ADS)

    Simon, Daniel; Seuret, Alexandre; Sename, Olivier

    2017-08-01

    The efficient control of real-time distributed systems, where continuous components are governed through digital devices and communication networks, needs a careful examination of the constraints arising from the different involved domains inside co-design approaches. Thanks to the robustness of feedback control, both new control methodologies and slackened real-time scheduling schemes are proposed beyond the frontiers between these traditionally separated fields. A methodology to design robust aperiodic controllers is provided, where the sampling interval is considered as a control variable of the system. Promising experimental results are provided to show the feasibility and robustness of the approach.

  20. Exponential Sum-Fitting of Dwell-Time Distributions without Specifying Starting Parameters

    PubMed Central

    Landowne, David; Yuan, Bin; Magleby, Karl L.

    2013-01-01

    Fitting dwell-time distributions with sums of exponentials is widely used to characterize histograms of open- and closed-interval durations recorded from single ion channels, as well as for other physical phenomena. However, it can be difficult to identify the contributing exponential components. Here we extend previous methods of exponential sum-fitting to present a maximum-likelihood approach that consistently detects all significant exponentials without the need for user-specified starting parameters. Instead of searching for exponentials, the fitting starts with a very large number of initial exponentials with logarithmically spaced time constants, so that none are missed. Maximum-likelihood fitting then determines the areas of all the initial exponentials keeping the time constants fixed. In an iterative manner, with refitting after each step, the analysis then removes exponentials with negligible area and combines closely spaced adjacent exponentials, until only those exponentials that make significant contributions to the dwell-time distribution remain. There is no limit on the number of significant exponentials and no starting parameters need be specified. We demonstrate fully automated detection for both experimental and simulated data, as well as for classical exponential-sum-fitting problems. PMID:23746510
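    The pruning-and-merging strategy described above can be sketched in a few lines. This is an illustrative reimplementation, not the authors' code: component areas are fitted by an EM-style maximum-likelihood update with the time constants held fixed, then negligible components are dropped and closely spaced neighbours are combined. The grid size, area floor, and merge ratio below are arbitrary choices for the sketch.

```python
import numpy as np

def fit_exponential_mixture(t, n_init=25, area_floor=1e-3, merge_ratio=2.0):
    """Sketch of the strategy in the abstract: start with many log-spaced
    time constants so none are missed, estimate their areas by maximum
    likelihood with the time constants fixed (EM), then iteratively drop
    negligible components and merge closely spaced neighbours."""
    t = np.asarray(t, float)
    taus = np.logspace(np.log10(t.min()), np.log10(t.max()), n_init)
    areas = np.full(len(taus), 1.0 / len(taus))

    def em_areas(taus, areas, n_iter=300):
        for _ in range(n_iter):
            dens = (areas / taus) * np.exp(-t[:, None] / taus)  # component densities
            resp = dens / dens.sum(axis=1, keepdims=True)       # responsibilities
            areas = resp.mean(axis=0)                           # ML area update
        return areas

    areas = em_areas(taus, areas)
    changed = True
    while changed:
        changed = False
        keep = areas > area_floor          # remove exponentials with negligible area
        if keep.sum() < len(taus):
            taus, areas, changed = taus[keep], areas[keep] / areas[keep].sum(), True
        for i in range(len(taus) - 1):     # combine closely spaced adjacent components
            if taus[i + 1] / taus[i] < merge_ratio:
                w = areas[i] + areas[i + 1]
                taus[i] = (areas[i] * taus[i] + areas[i + 1] * taus[i + 1]) / w
                areas[i] = w
                taus, areas = np.delete(taus, i + 1), np.delete(areas, i + 1)
                changed = True
                break
        if changed:
            areas = em_areas(taus, areas)
    return taus, areas
```

    Because only a pruned subset of the initial grid survives, no starting guesses for the number of exponentials or their time constants are required.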

  1. Human memory retrieval as Lévy foraging

    NASA Astrophysics Data System (ADS)

    Rhodes, Theo; Turvey, Michael T.

    2007-11-01

    When people attempt to recall as many words as possible from a specific category (e.g., animal names) their retrievals occur sporadically over an extended temporal period. Retrievals decline as recall progresses, but short retrieval bursts can occur even after tens of minutes of performing the task. To date, efforts to gain insight into the nature of retrieval from this fundamental phenomenon of semantic memory have focused primarily upon the exponential growth rate of cumulative recall. Here we focus upon the time intervals between retrievals. We expected and found that, for each participant in our experiment, these intervals conformed to a Lévy distribution suggesting that the Lévy flight dynamics that characterize foraging behavior may also characterize retrieval from semantic memory. The closer the exponent on the inverse square power-law distribution of retrieval intervals approximated the optimal foraging value of 2, the more efficient was the retrieval. At an abstract dynamical level, foraging for particular foods in one's niche and searching for particular words in one's memory must be similar processes if particular foods and particular words are randomly and sparsely located in their respective spaces at sites that are not known a priori. We discuss whether Lévy dynamics imply that memory processes, like foraging, are optimized in an ecological way.
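    The exponent of an inverse power-law interval distribution can be estimated directly by maximum likelihood. A minimal sketch follows (the standard continuous power-law MLE; the abstract does not state which estimator the authors used, so this is an assumption):

```python
import numpy as np

def power_law_exponent(intervals, xmin):
    """Continuous power-law MLE: alpha = 1 + n / sum(ln(x / xmin)),
    computed over intervals at or above the lower cutoff xmin."""
    x = np.asarray(intervals, float)
    x = x[x >= xmin]
    return 1.0 + len(x) / np.log(x / xmin).sum()
```

    An estimate near 2 would correspond to the optimal-foraging value discussed in the abstract.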

  2. Transformation to equivalent dimensions—a new methodology to study earthquake clustering

    NASA Astrophysics Data System (ADS)

    Lasocki, Stanislaw

    2014-05-01

    A seismic event is represented by a point in a parameter space, quantified by the vector of parameter values. Studies of earthquake clustering involve considering distances between such points in multidimensional spaces. However, the metrics of earthquake parameters are different, hence a metric in a multidimensional parameter space cannot be readily defined. The present paper proposes a solution to this metric problem based on a concept of probabilistic equivalence of earthquake parameters. Under this concept, the lengths of parameter intervals are equivalent if the probability for earthquakes to take values from either interval is the same. Earthquake clustering is studied in an equivalent rather than the original dimensions space, where the equivalent dimension (ED) of a parameter is its cumulative distribution function. All transformed parameters have a linear scale on the [0, 1] interval, and the distance between earthquakes represented by vectors in any ED space is Euclidean. The unknown, in general, cumulative distributions of earthquake parameters are estimated from earthquake catalogues by means of the model-free non-parametric kernel estimation method. The potential of the transformation to EDs is illustrated by two examples of use: to find hierarchically closest neighbours in time-space and to assess temporal variations of earthquake clustering in a specific 4-D phase space.
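    The transformation to equivalent dimensions is straightforward to sketch. The paper estimates the cumulative distributions with a non-parametric kernel method; the sketch below substitutes the plain empirical CDF for simplicity, which is an assumption of the sketch rather than the paper's estimator:

```python
import numpy as np

def to_equivalent_dimensions(X):
    """Map each parameter column (rows = earthquakes) to its empirical
    CDF value, so every transformed parameter lies on [0, 1]."""
    X = np.asarray(X, float)
    n = X.shape[0]
    ranks = X.argsort(axis=0).argsort(axis=0)  # 0..n-1 rank of each value per column
    return (ranks + 0.5) / n                   # mid-rank empirical CDF

def ed_distance(U, i, j):
    """Euclidean distance between events i and j in ED space."""
    return float(np.linalg.norm(U[i] - U[j]))
```

    After the transform, distances mix magnitude, time, and location on a common probabilistic footing, which is the point of the ED construction.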

  3. The duration of prograde garnet crystallization in the UHP eclogites at Lago di Cignana, Italy

    NASA Astrophysics Data System (ADS)

    Skora, Susanne; Lapen, Thomas J.; Baumgartner, Lukas P.; Johnson, Clark M.; Hellebrand, Eric; Mahlen, Nancy J.

    2009-10-01

    The distinct core-to-rim zonation of different REEs in garnet in metamorphic rocks, specifically Sm relative to Lu, suggests that Sm-Nd and Lu-Hf isochron ages will record different times along a prograde garnet growth history. Therefore, REE zonations in garnet must be measured in order to correctly interpret the isochron ages in terms of the garnet growth interval, which could span several m.y. New REE profiles, garnet crystal size distributions, and garnet growth modeling, combined with previously published Sm-Nd and Lu-Hf geochronology on a UHP eclogite of the Zermatt-Saas Fee (ZSF) ophiolite, Lago di Cignana (Italy), demonstrate that prograde garnet growth of this sample occurred over a ~ 30 to 40 m.y. interval. Relative to peak metamorphism at 38 to 40 Ma, garnet growth is estimated to have begun at ~ 11 to 14 kbar pressure at ~ 70 to 80 Ma. Although such a protracted garnet growth interval is surprising, this is supported by plate tectonic reconstructions which suggest that subduction of the Liguro-Piemont ocean occurred through slow and oblique convergence. These results demonstrate that REE zonations in garnet, coupled to crystal size distributions, provide a powerful means for understanding prograde metamorphic paths when combined with Sm-Nd and Lu-Hf geochronology.

  4. The disturbed geomagnetic field at European observatories. Sources and significance

    NASA Astrophysics Data System (ADS)

    Greculeasa, Razvan; Dobrica, Venera; Demetrescu, Crisan

    2014-05-01

    The disturbed geomagnetic field recorded at Earth's surface is given by the effects of electric current systems in the magnetosphere and ionosphere, as a result of the interaction of geomagnetic field with the solar wind and the interplanetary magnetic field. In this paper the geomagnetic disturbance recorded at European observatories has been investigated as regards its sources, for the time interval August 1-10, 2010, in which a moderate storm (Dstmin= -70 nT) occurred (August 3-4). The disturbance has been evidenced against the solar quiet daily variation, for each of the 29 observatories with minute data in the mentioned time interval. Data have been downloaded from the INTERMAGNET web page. The contribution of the magnetospheric ring current and of the auroral electrojet to the observed disturbance field in the X, Z, and D geomagnetic elements is discussed and the corresponding geographical distribution is presented.

  5. Modeling the incubation period of inhalational anthrax.

    PubMed

    Wilkening, Dean A

    2008-01-01

    Ever since the pioneering work of Philip Sartwell, the incubation period distribution for infectious diseases has most often been modeled using a lognormal distribution. Theoretical models based on underlying disease mechanisms in the host are less well developed. This article modifies a theoretical model originally developed by Brookmeyer and others for the inhalational anthrax incubation period distribution in humans by using a more accurate distribution to represent the in vivo bacterial growth phase and by extending the model to represent the time from exposure to death, thereby allowing the model to be fit to nonhuman primate time-to-death data. The resulting incubation period distribution and the dose dependence of the median incubation period are in good agreement with human data from the 1979 accidental atmospheric anthrax release in Sverdlovsk, Russia, and limited nonhuman primate data. The median incubation period for the Sverdlovsk victims is 9.05 (95% confidence interval = 8.0-10.3) days, shorter than previous estimates, and it is predicted to drop to less than 2.5 days at doses above 10^6 spores. The incubation period distribution is important because the left tail determines the time at which clinical diagnosis or syndromic surveillance systems might first detect an anthrax outbreak based on early symptomatic cases, the entire distribution determines the efficacy of medical intervention (which depends on the speed of the prophylaxis campaign relative to the incubation period), and the right tail of the distribution influences the recommended duration for antibiotic treatment.

  6. Ensemble-Based Source Apportionment of Fine Particulate Matter and Emergency Department Visits for Pediatric Asthma

    PubMed Central

    Gass, Katherine; Balachandran, Sivaraman; Chang, Howard H.; Russell, Armistead G.; Strickland, Matthew J.

    2015-01-01

    Epidemiologic studies utilizing source apportionment (SA) of fine particulate matter have shown that particles from certain sources might be more detrimental to health than others; however, it is difficult to quantify the uncertainty associated with a given SA approach. In the present study, we examined associations between source contributions of fine particulate matter and emergency department visits for pediatric asthma in Atlanta, Georgia (2002–2010) using a novel ensemble-based SA technique. Six daily source contributions from 4 SA approaches were combined into an ensemble source contribution. To better account for exposure uncertainty, 10 source profiles were sampled from their posterior distributions, resulting in 10 time series with daily SA concentrations. For each of these time series, Poisson generalized linear models with varying lag structures were used to estimate the health associations for the 6 sources. The rate ratios for the source-specific health associations from the 10 imputed source contribution time series were combined, resulting in health associations with inflated confidence intervals to better account for exposure uncertainty. Adverse associations with pediatric asthma were observed for 8-day exposure to particles generated from diesel-fueled vehicles (rate ratio = 1.06, 95% confidence interval: 1.01, 1.10) and gasoline-fueled vehicles (rate ratio = 1.10, 95% confidence interval: 1.04, 1.17). PMID:25776011
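    The abstract does not spell out how the rate ratios from the 10 imputed source-contribution series were combined; a standard choice for pooling estimates across imputations is Rubin's rules, sketched here on the log scale as an assumption. The between-imputation variance term is what inflates the confidence intervals to reflect exposure uncertainty.

```python
import numpy as np

def pool_rate_ratios(log_rr, se):
    """Rubin's rules applied to log rate ratios from m imputed exposure
    series: total variance = mean within-imputation variance plus an
    inflated between-imputation variance."""
    log_rr, se = np.asarray(log_rr, float), np.asarray(se, float)
    m = len(log_rr)
    qbar = log_rr.mean()                       # pooled log rate ratio
    within = (se ** 2).mean()                  # average sampling variance
    between = log_rr.var(ddof=1)               # variance across imputations
    total_var = within + (1 + 1 / m) * between # inflated for exposure uncertainty
    half = 1.96 * np.sqrt(total_var)
    return np.exp(qbar), (np.exp(qbar - half), np.exp(qbar + half))
```

    Because `between` is added to the sampling variance, the pooled interval is always at least as wide as a single-series interval, matching the abstract's description of inflated confidence intervals.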

  7. The matching law in and within groups of rats

    PubMed Central

    Graft, D. A.; Lea, S. E. G.; Whitworth, T. L.

    1977-01-01

    In each of the two experiments, a group of five rats lived in a complex maze containing four small single-lever operant chambers. In two of these chambers, food was available on variable-interval schedules of reinforcement. In Experiment I, nine combinations of variable intervals were used, and the aggregate lever-pressing rates (by the five rats together) were studied. The log ratio of the rates in the two chambers was linearly related to the log ratio of the reinforcement rates in them; this is an instance of Herrnstein's matching law, as generalized by Baum. Summing over the two food chambers, food consumption decreased, and response output increased, as the time required to earn each pellet increased. In Experiment II, the behavior of individual rats was observed by time-sampling on selected days, while different variable-interval schedules were arranged in the two chambers where food was available. Individual lever-pressing rates for the rats were obtained, and their median bore the same “matching” relationship to the reinforcement rates as the group aggregate in Experiment I. There were differences between the rats in their distribution of time and responses between the two food chambers; these differences were correlated with differences in the proportions of reinforcements the rats obtained from each chamber. PMID:16811975

  8. Generalized Riemann hypothesis and stochastic time series

    NASA Astrophysics Data System (ADS)

    Mussardo, Giuseppe; LeClair, André

    2018-06-01

    Using the Dirichlet theorem on the equidistribution of residue classes modulo q and the Lemke Oliver–Soundararajan conjecture on the distribution of pairs of residues on consecutive primes, we show that the domain of convergence of the infinite product of Dirichlet L-functions of non-principal characters can be extended from Re(s) > 1 down to Re(s) > 1/2, without encountering any zeros before reaching this critical line. The possibility of doing so can be traced back to a universal diffusive random walk behavior of a series C_N over the primes, which underlies the convergence of the infinite product of the Dirichlet functions. The series C_N presents several aspects in common with stochastic time series, and controlling it requires addressing a problem similar to the single Brownian trajectory problem in statistical mechanics. In the case of the Dirichlet functions of non-principal characters, we show that this problem can be solved in terms of a self-averaging procedure based on an ensemble of block variables computed on extended intervals of primes. These intervals, called inertial intervals, ensure the ergodicity and stationarity of the time series underlying the quantity C_N. The infinity of primes also ensures the absence of rare events that would otherwise have been responsible for a scaling behavior different from the universal law of random walks.

  9. Triggering of repeating earthquakes in central California

    USGS Publications Warehouse

    Wu, Chunquan; Gomberg, Joan; Ben-Naim, Eli; Johnson, Paul

    2014-01-01

    Dynamic stresses carried by transient seismic waves have been found capable of triggering earthquakes instantly in various tectonic settings. Delayed triggering may be even more common, but the mechanisms are not well understood. Catalogs of repeating earthquakes, earthquakes that recur repeatedly at the same location, provide ideal data sets to test the effects of transient dynamic perturbations on the timing of earthquake occurrence. Here we employ a catalog of 165 families containing ~2500 total repeating earthquakes to test whether dynamic perturbations from local, regional, and teleseismic earthquakes change recurrence intervals. The distance to the earthquake generating the perturbing waves is a proxy for the relative potential contributions of static and dynamic deformations, because static deformations decay more rapidly with distance. Clear changes followed the nearby 2004 Mw6 Parkfield earthquake, so we study only repeaters prior to its origin time. We apply a Monte Carlo approach to compare the observed number of shortened recurrence intervals following dynamic perturbations with the distribution of this number estimated for randomized perturbation times. We examine the comparison for a series of dynamic stress peak amplitude and distance thresholds. The results suggest a weak correlation between dynamic perturbations in excess of ~20 kPa and shortened recurrence intervals, for both nearby and remote perturbations.
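    The Monte Carlo comparison can be sketched as a permutation-style test. The sketch below uses a simplified proxy for a "shortened recurrence interval" (a repeating earthquake occurring within a fixed window after a perturbation); the window length and counting rule are illustrative assumptions, not the authors' exact criteria:

```python
import numpy as np

def mc_triggering_test(event_times, perturb_times, window, n_sim=2000, seed=1):
    """Count perturbations followed by an event within `window`, then
    compare the observed count against counts obtained when the
    perturbation times are randomized over the catalog span."""
    rng = np.random.default_rng(seed)
    event_times = np.asarray(event_times, float)
    t0, t1 = event_times.min(), event_times.max()

    def count(perts):
        return sum(np.any((event_times > p) & (event_times <= p + window))
                   for p in perts)

    observed = count(np.asarray(perturb_times, float))
    null = np.array([count(rng.uniform(t0, t1, len(perturb_times)))
                     for _ in range(n_sim)])
    p_value = (null >= observed).mean()   # one-sided Monte Carlo p-value
    return observed, p_value
```

    Sweeping `window` and a peak-amplitude threshold over a grid would mimic the paper's examination of different dynamic stress and distance thresholds.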

  10. Human instrumental performance in ratio and interval contingencies: A challenge for associative theory.

    PubMed

    Pérez, Omar D; Aitken, Michael R F; Zhukovsky, Peter; Soto, Fabián A; Urcelay, Gonzalo P; Dickinson, Anthony

    2016-12-15

    Associative learning theories regard the probability of reinforcement as the critical factor determining responding. However, the role of this factor in instrumental conditioning is not completely clear. In fact, free-operant experiments show that participants respond at a higher rate on variable ratio than on variable interval schedules even though the reinforcement probability is matched between the schedules. This difference has been attributed to the differential reinforcement of long inter-response times (IRTs) by interval schedules, which acts to slow responding. In the present study, we used a novel experimental design to investigate human responding under random ratio (RR) and regulated probability interval (RPI) schedules, a type of interval schedule that sets a reinforcement probability independently of the IRT duration. Participants responded on each type of schedule before a final choice test in which they distributed responding between two schedules similar to those experienced during training. Although response rates did not differ during training, the participants responded at a lower rate on the RPI schedule than on the matched RR schedule during the choice test. This preference cannot be attributed to a higher probability of reinforcement for long IRTs and questions the idea that similar associative processes underlie classical and instrumental conditioning.

  11. The benthic macrofauna from the Lower Maastrichtian chalk of Kronsmoor (northern Germany, Saturn quarry): taxonomic outline and palaeoecologic implications

    NASA Astrophysics Data System (ADS)

    Engelke, Julia; Esser, Klaus J. K.; Linnert, Christian; Mutterlose, Jörg; Wilmsen, Markus

    2016-12-01

    The benthic macroinvertebrates of the Lower Maastrichtian chalk of Saturn quarry at Kronsmoor (northern Germany) have been studied taxonomically based on more than 1,000 specimens. Two successive benthic macrofossil assemblages were recognised: the lower interval in the upper part of the Kronsmoor Formation (Belemnella obtusa Zone) is characterized by low abundances of macroinvertebrates while the upper interval in the uppermost Kronsmoor and lowermost Hemmoor formations (lower to middle Belemnella sumensis Zone) shows a high macroinvertebrate abundance (eight times more than in the B. obtusa Zone) and a conspicuous dominance of brachiopods. The palaeoecological analysis of these two assemblages indicates the presence of eight different guilds, of which epifaunal suspension feeders (fixo-sessile and libero-sessile guilds), comprising approximately half of the trophic nucleus of the lower interval, increased to a dominant 86% in the upper interval, including a considerable proportion of rhynchonelliform brachiopods. It is tempting to relate this shift from the lower to the upper interval to an increase in nutrient supply and/or a shallowing of the depositional environment but further data including geochemical proxies are needed to fully understand the macrofossil distribution patterns in the Lower Maastrichtian of Kronsmoor.

  12. Noise effects in bacterial motor switch

    NASA Astrophysics Data System (ADS)

    Tu, Yuhai

    2006-03-01

    The clockwise (CW) or counterclockwise (CCW) spinning of bacterial flagellar motors is controlled by the concentration of a phosphorylated protein, CheY-P. In this talk, we represent the stochastic switching behavior of a bacterial flagellar motor by a dynamical two-state (CW and CCW) model, with the energy levels of the two states fluctuating in time according to the variation of the CheY-P concentration in the cell. We show that with a generic normal distribution and a modest amplitude for CheY-P concentration fluctuations, the dynamical two-state model is capable of generating a power-law distribution (as opposed to an exponential Poisson-like distribution) for the durations of the CCW states, in agreement with recent experimental observations of Korobkova et al. (Nature 428, 574 (2004)). In addition, we show that the power spectrum for the flagellar motor switching time series is not determined solely by the power-law duration distribution, but also by the temporal correlation between the duration times of different CCW intervals. We point out the intrinsic connection between anomalously large fluctuations of the motor output and the overall high gain of the bacterial chemotaxis system. Suggestions for experimental verification of the dynamical two-state model will also be discussed.

  13. A Dual Power Law Distribution for the Stellar Initial Mass Function

    NASA Astrophysics Data System (ADS)

    Hoffmann, Karl Heinz; Essex, Christopher; Basu, Shantanu; Prehl, Janett

    2018-05-01

    We introduce a new dual power law (DPL) probability distribution function for the mass distribution of stellar and substellar objects at birth, otherwise known as the initial mass function (IMF). The model contains both deterministic and stochastic elements, and provides a unified framework within which to view the formation of brown dwarfs and stars resulting from an accretion process that starts from extremely low mass seeds. It does not depend upon a top down scenario of collapsing (Jeans) masses or an initial lognormal or otherwise IMF-like distribution of seed masses. Like the modified lognormal power law (MLP) distribution, the DPL distribution has a power law at the high mass end, as a result of exponential growth of mass coupled with equally likely stopping of accretion at any time interval. Unlike the MLP, a power law decay also appears at the low mass end of the IMF. This feature is closely connected to the accretion stopping probability rising from an initially low value up to a high value. This might be associated with physical effects of ejections sometimes (i.e., rarely) stopping accretion at early times followed by outflow driven accretion stopping at later times, with the transition happening at a critical time (therefore mass). Comparing the DPL to empirical data, the critical mass is close to the substellar mass limit, suggesting that the onset of nuclear fusion plays an important role in the subsequent accretion history of a young stellar object.

  14. Modelling volatility recurrence intervals in the Chinese commodity futures market

    NASA Astrophysics Data System (ADS)

    Zhou, Weijie; Wang, Zhengxin; Guo, Haiming

    2016-09-01

    The occurrence of extreme events attracts much research. The volatility recurrence intervals of Chinese commodity futures market prices are studied: the results show that the probability distributions of the scaled volatility recurrence intervals follow a uniform scaling curve for different thresholds q, so the probability distribution of extreme events can be deduced from that of normal events. The tail of the scaling curve is well fitted by a Weibull form, which passes Kolmogorov-Smirnov (KS) significance tests. Both short-term and long-term memory are present in the recurrence intervals for different thresholds q, which indicates that the recurrence intervals are predictable. In addition, similar to volatility itself, the volatility recurrence intervals exhibit clustering. Through Monte Carlo simulation, we artificially synthesise ARMA and GARCH-class sequences similar to the original data and identify the reason behind the clustering: the larger the parameter d of the FIGARCH model, the stronger the clustering effect. Finally, we use the Fractionally Integrated Autoregressive Conditional Duration (FIACD) model to analyse the recurrence interval characteristics. The results indicate that the FIACD model may provide a method for analysing volatility recurrence intervals.
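    The core recurrence-interval computation and the Weibull fit can be sketched with SciPy. The threshold choice and the decision to fix the Weibull location at zero are assumptions of the sketch, not details taken from the paper:

```python
import numpy as np
from scipy import stats

def recurrence_interval_analysis(volatility, q):
    """Recurrence intervals between exceedances of threshold q, scaled by
    their mean, with a Weibull fit to the scaled intervals (location fixed
    at 0) and a KS goodness-of-fit check."""
    idx = np.flatnonzero(np.asarray(volatility, float) > q)
    tau = np.diff(idx).astype(float)       # waiting times between exceedances
    tau_scaled = tau / tau.mean()          # scaling collapses curves across q
    shape, _, scale = stats.weibull_min.fit(tau_scaled, floc=0)
    ks = stats.kstest(tau_scaled, 'weibull_min', args=(shape, 0.0, scale))
    return tau_scaled, shape, scale, ks.pvalue
```

    For memoryless data the fitted shape parameter is close to 1 (an exponential); shapes below 1 signal the clustering of exceedances the paper describes.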

  15. System implications of the ambulance arrival-to-patient contact interval on response interval compliance.

    PubMed

    Campbell, J P; Gratton, M C; Salomone, J A; Lindholm, D J; Watson, W A

    1994-01-01

    In some emergency medical services (EMS) system designs, response time intervals are mandated, with monetary penalties for noncompliance. These times are set with the goal of providing rapid, definitive patient care. The time interval of vehicle at scene-to-patient access (VSPA) has been measured, but its effect on response time interval compliance has not been determined. The objective was to determine the effect of the VSPA interval on the mandated code 1 (< 9 min) and code 2 (< 13 min) response time interval compliance in an urban, public-utility model system. A prospective, observational study used independent third-party riders to collect the VSPA interval for emergency life-threatening (code 1) and emergency non-life-threatening (code 2) calls. The VSPA interval was added to the 9-1-1 call-to-dispatch and vehicle dispatch-to-scene intervals to determine the total time interval from call received until paramedic access to the patient (9-1-1 call-to-patient access). Compliance with the mandated response time intervals was determined using the traditional time intervals (9-1-1 call-to-scene) plus the VSPA time intervals (9-1-1 call-to-patient access). Chi-square was used to determine statistical significance. Of the 216 observed calls, 198 were matched to the traditional time intervals; 63 were code 1 and 135 were code 2. Of the code 1 calls, 90.5% were compliant using 9-1-1 call-to-scene intervals, dropping to 63.5% using 9-1-1 call-to-patient access intervals (p < 0.0005). Of the code 2 calls, 94.1% were compliant using 9-1-1 call-to-scene intervals; compliance decreased to 83.7% using 9-1-1 call-to-patient access intervals (p = 0.012). The addition of the VSPA interval to the traditional time intervals affects system response time compliance. Using 9-1-1 call-to-scene compliance as a basis for measuring system performance underestimates the time to delivery of definitive care. This must be considered when response time interval compliance is defined.
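    The chi-square comparison of compliance rates can be reproduced from the reported figures. The counts below are reconstructed from the stated percentages for the 63 code 1 calls and are therefore approximate:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Code 1 calls (n = 63 under each definition): 90.5% compliant by
# 9-1-1 call-to-scene vs 63.5% by 9-1-1 call-to-patient access.
# Counts are reconstructed from those percentages for illustration.
table = np.array([[57, 6],    # call-to-scene: compliant, non-compliant
                  [40, 23]])  # call-to-patient access
chi2, p, dof, expected = chi2_contingency(table)
```

    With these reconstructed counts the test gives p well below 0.005, consistent with the p < 0.0005 reported for the code 1 comparison.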

  16. Induced Ellipticity for Inspiraling Binary Systems

    NASA Astrophysics Data System (ADS)

    Randall, Lisa; Xianyu, Zhong-Zhi

    2018-01-01

    Although gravitational waves tend to erase eccentricity of an inspiraling binary system, ellipticity can be generated in the presence of surrounding matter. We present a semianalytical method for understanding the eccentricity distribution of binary black holes (BHs) in the presence of a supermassive BH in a galactic center. Given a matter distribution, we show how to determine the resultant eccentricity analytically in the presence of both tidal forces and evaporation up to one cutoff and one matter-distribution-independent function, paving the way for understanding the environment of detected inspiraling BHs. We furthermore generalize Kozai–Lidov dynamics to situations where perturbation theory breaks down for short time intervals, allowing more general angular momentum exchange, such that eccentricity is generated even when all bodies orbit in the same plane.

  17. A 22,000 year record of changing redox conditions from the Peruvian Oxygen Minimum Zone (OMZ): benthic foraminifera approach

    NASA Astrophysics Data System (ADS)

    Erdem, Z.; Schönfeld, J.; Glock, N.

    2015-12-01

    Benthic foraminifera have been used as proxies for the prevailing conditions at the sediment-water interface. Their distribution patterns are thought to facilitate reconstruction of past environmental conditions. Variations in bottom-water oxygenation can be traced by the downcore distribution of benthic foraminifera and some of their morphological characters. Being one of the strongest and most pronounced OMZs in today's world oceans, the Peruvian OMZ is a key area to study such variations in relation to changing climate. Spatial changes or an extension of the OMZ through time and space are investigated using sediment cores from the lower OMZ boundary. We focus on the time intervals of the Late Holocene, Early Holocene, Bølling-Allerød, Heinrich Stadial 1 and Last Glacial Maximum (LGM) to investigate changes in bottom-water oxygen and redox conditions. The recent distributions of benthic foraminiferal assemblages provide background data for an interpretation of past conditions. Living benthic foraminiferal faunas from the Peruvian margin are structured by the prevailing bottom-water oxygen concentrations today (Mallon et al., 2012). The downcore distribution of benthic foraminiferal assemblages showed fluctuations in the abundance of the indicator species, depicting variations and a decreasing trend in bottom-water oxygen conditions since the LGM. In addition, changes in bottom-water oxygen and nitrate concentrations are reconstructed for the same time intervals from the pore density in tests of Planulina limbata and Bolivina spissa (Glock et al., 2011), respectively. The pore densities also indicate higher oxygen and nitrate concentrations in the LGM compared to the Holocene. The combination of both proxies provides information on past bottom-water conditions and changes in oxygen concentrations for the Peruvian margin. References: Glock et al., 2011: Environmental influences on the pore density of Bolivina spissa (Cushman), Journal of Foraminiferal Research, v. 41, no. 1, p. 22-32. Mallon et al., 2012: The response of benthic foraminifera to low-oxygen conditions of the Peruvian oxygen minimum zone, in ANOXIA, pp. 305-322.

  18. The unexpected 2012 Draconid meteor storm

    NASA Astrophysics Data System (ADS)

    Ye, Quanzhi; Wiegert, Paul A.; Brown, Peter G.; Campbell-Brown, Margaret D.; Weryk, Robert J.

    2014-02-01

    An unexpected intense outburst of the Draconid meteor shower was detected by the Canadian Meteor Orbit Radar on 2012 October 8. The peak flux occurred at ˜16:40 UT on October 8 with a maximum of 2.4 ± 0.3 h⁻¹ km⁻² (appropriate to meteoroid mass larger than 10⁻⁷ kg), equivalent to a ZHRmax ≈ 9000 ± 1000 using 5-min intervals, using a mass distribution index of s = 1.88 ± 0.01 as determined from the amplitude distribution of underdense Draconid echoes. This makes the outburst among the strongest Draconid returns since 1946 and the highest flux shower since the 1966 Leonid meteor storm, assuming that a constant power-law distribution holds from radar to visual meteoroid sizes. The weighted mean geocentric radiant in the time interval of 15-19 h UT, 2012 October 8, was αg = 262.4° ± 0.1°, δg = 55.7° ± 0.1° (epoch J2000.0). Visual observers also reported increased activity around the peak time, but with a much lower rate (ZHR ˜ 200), suggesting that the magnitude-cumulative number relationship is not a simple power law. Ablation modelling of the observed meteors as a population does not yield a unique solution for the grain size and distribution of Draconid meteoroids, but is consistent with a typical Draconid meteoroid of mtotal between 10⁻⁶ and 10⁻⁴ kg being composed of 10-100 grains. Dynamical simulations indicate that the outburst was caused by dust particles released during the 1966 perihelion passage of the parent comet, 21P/Giacobini-Zinner, although there are discrepancies between the modelled and observed timing of the encounter, presumably caused by approaches of the comet to Jupiter during 1966-1972. Based on the results of our dynamical simulation, we predict possible increased activity of the Draconid meteor shower in 2018, 2019, 2021 and 2025.
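The quoted mass distribution index s fixes how the cumulative meteoroid flux scales with the limiting mass. A minimal sketch, assuming the standard cumulative power law N(>m) ∝ m^(1−s); only s = 1.88 comes from the abstract, while the function name and the comparison masses are illustrative:

```python
# Cumulative meteoroid mass distribution: N(>m) is proportional to
# m**(1 - s), the standard power law behind a mass distribution index s.

def cumulative_flux_ratio(m1: float, m2: float, s: float) -> float:
    """Ratio N(>m1)/N(>m2) for a power-law mass distribution with index s."""
    return (m1 / m2) ** (1.0 - s)

# With s = 1.88, lowering the limiting mass from 1e-6 kg to 1e-7 kg
# multiplies the cumulative flux by 10**0.88, roughly a factor of 7.6.
ratio = cumulative_flux_ratio(1e-7, 1e-6, 1.88)
print(round(ratio, 2))
```

This is why radar (small-particle) and visual (large-particle) rates can disagree so strongly when, as the abstract suggests, a single power law does not hold across the whole size range.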

  19. Magnetotail Structure and its Internal Particle Dynamics During Northward IMF

    NASA Technical Reports Server (NTRS)

    Ashour-Abdalla, M.; Raeder, J.; El-Alaoui, M.; Peroomian, V.

    1998-01-01

    This study uses Global magnetohydrodynamic (MHD) simulations driven by solar wind data along with Geotail observations of the magnetotail to investigate the magnetotail's response to changes in the interplanetary magnetic field (IMF); observed events used in the study occurred on March 29, 1993 and February 9, 1995. For events from February 9, 1995, we also use the time-dependent MHD magnetic and electric fields and the large-scale kinetic (LSK) technique to examine changes in the Geotail ion velocity distributions. Our MHD simulation shows that on March 29, 1993, during a long period of steady northward IMF, the tail was strongly squeezed and twisted around the Sun-Earth axis in response to variations in the IMF B(sub y) component. The mixed (magnetotail and magnetosheath) plasma observed by Geotail results from the spacecraft's close proximity to the magnetopause and its frequent crossings of this boundary. In our second example (February 9, 1995) the IMF was also steady and northward, and in addition had a significant B(sub y) component. Again the magnetotail was twisted, but not as strongly as on March 29, 1993. The Geotail spacecraft, located approximately 30 R(sub E) downtail, observed highly structured ion distribution functions. Using the time-dependent LSK technique, we investigate the ion sources and acceleration mechanisms affecting the Geotail distribution functions during this interval. At 1325 UT most ions are found to enter the magnetosphere on the dusk side earthward of Geotail with a secondary source on the dawn side in the low latitude boundary layer (LLBL). A small percentage come from the ionosphere. By 1347 UT the majority of the ions come from the dawn side LLBL. The distribution functions measured during the later time interval are much warmer, mainly because particles reaching the spacecraft from the dawn side are affected by nonadiabatic scattering and acceleration in the neutral sheet.

  1. Stochastic modeling for neural spiking events based on fractional superstatistical Poisson process

    NASA Astrophysics Data System (ADS)

    Konno, Hidetoshi; Tamura, Yoshiyasu

    2018-01-01

    In neural spike counting experiments, two main features are known: (i) the counting number exhibits fractional power-law growth with time, and (ii) the waiting-time (i.e., inter-spike-interval) distribution has a heavy tail. The method of superstatistical Poisson processes (SSPPs) is examined to determine whether these main features are properly modeled. Although various mixed/compound Poisson processes can be generated by selecting a suitable distribution for the birth rate of spiking neurons, only the second feature (ii) can be modeled by the method of SSPPs; the first one (i), associated with the effect of long memory, cannot be modeled properly. It is then shown that the two main features can be modeled successfully by a class of fractional SSPPs (FSSPPs).
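The SSPP construction can be sketched numerically: draw a fresh birth rate from a mixing distribution before each event, then draw an exponential waiting time at that rate. With a Gamma mixing density the marginal inter-spike-interval law is heavy-tailed (Lomax), reproducing feature (ii). All parameter values below are illustrative:

```python
import random

random.seed(42)

def sspp_waiting_times(n, shape=1.5, scale=1.0):
    """Superstatistical Poisson process sketch: each waiting time is
    exponential with a rate freshly drawn from a Gamma distribution.
    Marginally the waiting times then follow a heavy-tailed Lomax law."""
    times = []
    for _ in range(n):
        lam = random.gammavariate(shape, scale)   # fluctuating birth rate
        times.append(random.expovariate(lam))     # waiting time at that rate
    return times

isi = sspp_waiting_times(100_000)
# A heavy tail shows up as a maximum far larger than an exponential
# with the same mean would plausibly produce (for which max/mean ~ ln n).
print(max(isi) / (sum(isi) / len(isi)))
```

Note this mixing happens independently per interval, which is exactly why plain SSPPs cannot produce the long-memory counting growth of feature (i).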

  2. Moments distributions of single dye molecule spectra in a low-temperature polymer: Analysis of system ergodicity

    NASA Astrophysics Data System (ADS)

    Anikushina, T. A.; Naumov, A. V.

    2013-12-01

    This article demonstrates the principal advantages of the technique of analyzing the long-term spectral evolution of single molecules (SM) in the study of the microscopic nature of dynamic processes in low-temperature polymers. We performed a detailed analysis of the spectral trail of a single tetra-tert-butylterrylene (TBT) molecule in an amorphous polyisobutylene matrix, measured over 5 hours at T = 7 K. It has been shown that the slow temporal dynamics is in qualitative agreement with the standard model of two-level systems and the stochastic sudden-jump model. At the same time, the distributions of the first four moments (cumulants) of the spectra of the selected SM measured at different time points were found to be inconsistent with the standard theory prediction. This was taken as evidence that, over the given time interval, the system is not ergodic.

  3. Indication of multiscaling in the volatility return intervals of stock markets

    NASA Astrophysics Data System (ADS)

    Wang, Fengzhong; Yamasaki, Kazuko; Havlin, Shlomo; Stanley, H. Eugene

    2008-01-01

    The distribution of the return intervals τ between price volatilities above a threshold height q for financial records has been approximated by a scaling behavior. To explore how accurate the scaling is, and thereby understand the underlying nonlinear mechanism, we investigate intraday data sets of the 500 stocks that constitute the Standard & Poor's 500 index. We show that the cumulative distribution of return intervals has systematic deviations from scaling. We support this finding by studying the m-th moment μ_m ≡ ⟨(τ/⟨τ⟩)^m⟩^(1/m), which shows a certain trend with the mean interval ⟨τ⟩. We generate surrogate records using the Schreiber method, and find that their cumulative distributions almost collapse to a single curve and their moments are almost constant for most ranges of ⟨τ⟩. Those substantial differences suggest that nonlinear correlations in the original volatility sequence account for the deviations from a single scaling law. We also find that the original and surrogate records exhibit slight tendencies for short and long ⟨τ⟩, due to the discreteness and finite-size effects of the records, respectively. To avoid these effects as far as possible when testing the multiscaling behavior, we investigate the moments in the range 10 < ⟨τ⟩ ≤ 100, and find that the exponent α from the power-law fitting μ_m ˜ ⟨τ⟩^α has a narrow distribution around a nonzero value that depends on m for the 500 stocks. The distribution of α for the surrogate records is very narrow and centered around α = 0. This suggests that the return interval distribution exhibits multiscaling behavior due to the nonlinear correlations in the original volatility.
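A minimal sketch (not the authors' code) of the two quantities the analysis rests on: the return intervals τ between threshold exceedances, and the normalized moment μ_m = ⟨(τ/⟨τ⟩)^m⟩^(1/m). The toy series and threshold are invented for illustration:

```python
def return_intervals(series, q):
    """Gaps (in time steps) between successive values exceeding threshold q."""
    hits = [i for i, v in enumerate(series) if v > q]
    return [b - a for a, b in zip(hits, hits[1:])]

def mu_m(intervals, m):
    """Normalized m-th moment <(tau/<tau>)^m>**(1/m); equals 1 for m = 1."""
    mean = sum(intervals) / len(intervals)
    return (sum((t / mean) ** m for t in intervals) / len(intervals)) ** (1 / m)

vol = [0.1, 2.0, 0.3, 0.2, 2.5, 0.1, 0.4, 3.0, 2.2, 0.1]
taus = return_intervals(vol, 1.0)   # exceedances at indices 1, 4, 7, 8
print(taus)                         # [3, 3, 1]
```

Scaling holds when μ_m is independent of ⟨τ⟩; a systematic dependence μ_m ˜ ⟨τ⟩^α with α ≠ 0, as in the study, is the signature of multiscaling.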

  4. Effects of Distributed Practice on the Acquisition of Second Language English Syntax

    ERIC Educational Resources Information Center

    Bird, Steve

    2010-01-01

    A longitudinal study compared the effects of distributed and massed practice schedules on the learning of second language English syntax. Participants were taught distinctions in the tense and aspect systems of English at short and long practice intervals. They were then tested at short and long intervals. The results showed that distributed…

  5. Real-time distribution of pelagic fish: combining hydroacoustics, GIS and spatial modelling at a fine spatial scale.

    PubMed

    Muška, Milan; Tušer, Michal; Frouzová, Jaroslava; Mrkvička, Tomáš; Ricard, Daniel; Seďa, Jaromír; Morelli, Federico; Kubečka, Jan

    2018-03-29

    Understanding the spatial distribution of organisms in a heterogeneous environment remains one of the chief issues in ecology. The spatial organization of freshwater fish has been investigated predominantly at large scales, neglecting important local conditions and ecological processes. However, small-scale processes are of essential importance for individual habitat preferences and hence for structuring trophic cascades and species coexistence. In this work, we analysed the real-time spatial distribution of pelagic freshwater fish in the Římov Reservoir (Czechia), observed by hydroacoustics in relation to important environmental predictors over 48 hours at 3-h intervals. The diurnal cycle proved the most significant effect in all spatial models, with generally inverse trends between fish distribution and predictors by day and night. Our findings highlighted the daytime pelagic fish distribution as highly aggregated, with general fish preferences for central, deep and highly illuminated areas, whereas the nighttime distribution was more dispersed and fish preferred steeply sloping nearshore areas of greater depth. This turnover suggests prominent movements of a significant part of the fish assemblage between pelagic and nearshore areas on a diel basis. In conclusion, hydroacoustics, GIS and spatial modelling proved to be valuable tools for predicting local fish distribution and elucidating its drivers, which has far-reaching implications for understanding freshwater ecosystem functioning.

  6. Time distribution of injury-related in-hospital mortality in a trauma referral center in South of Iran (2010–2015)

    PubMed Central

    Abbasi, Hamidreza; Bolandparvaz, Shahram; Yadollahi, Mahnaz; Anvar, Mehrdad; Farahgol, Zahra

    2017-01-01

    Abstract In Iran, there are no studies addressing trauma death timing and factors affecting the time of death after injuries. This study aimed to examine the time distribution of trauma deaths in an urban major trauma referral center with respect to victims' injury characteristics during 2010 to 2015. This was a cross-sectional study of adult trauma-related in-hospital deaths resulting from traffic-related accidents, falls, and violence-related injuries. Information on injury characteristics and the time interval between admission and death was extracted from 3 hospital databases. The mortality time distribution was analyzed separately in the context of each baseline variable. A total of 1117 in-hospital deaths (mean age 47.6 ± 22.2 years, 80% male) were studied. Death timing followed an extremely positively skewed bimodal distribution with 1 peak during the first 24 hours of admission (41.6% of deaths) and another peak starting from the 7th day of hospitalization to the end of the first month (27.7% of total). Subjects older than 65 years were more likely to die after 24 hours than younger decedents (P = .031). More than 70% of firearm-related deaths and 48% of assault-related mortalities occurred early, whereas 67% and 66% of deaths from falls and motorcycle accidents occurred late (P < .001). Over 57% of deaths from severe thoracic injuries occurred early, whereas this value was only 37% for central nervous system injuries (P < .001). From 2010 to 2015, the percentage of late deaths decreased significantly from 68% to 54% (P < .001). Considering 1 prehospital peak of mortality and 2 in-hospital peaks, the mortality time distribution follows the old trimodal pattern in Shiraz. This distribution is affected by victims' age, injury mechanism, and injured body area. Although such a distribution reflects a relatively lower quality of care compared to mature trauma systems, a change toward the expected bimodal pattern has started. PMID:28538377

  7. Are Earthquake Clusters/Supercycles Real or Random?

    NASA Astrophysics Data System (ADS)

    Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.

    2016-12-01

    Long records of earthquakes at plate boundaries such as the San Andreas or Cascadia often show that large earthquakes occur in temporal clusters, also termed supercycles, separated by less active intervals. These are intriguing because the boundary is presumably being loaded by steady plate motion. If so, earthquakes resulting from seismic cycles - in which their probability is small shortly after the past one, and then increases with time - should occur quasi-periodically rather than be more frequent in some intervals than others. We are exploring this issue with two approaches. One is to assess whether the clusters result purely by chance from a time-independent process that has no "memory." Thus a future earthquake is equally likely immediately after the past one and much later, so earthquakes can cluster in time. We analyze the agreement between such a model and inter-event times for Parkfield, Pallet Creek, and other records. A useful tool is transformation by the inverse cumulative distribution function, so the inter-event times have a uniform distribution when the memorylessness property holds. The second is via a time-variable model in which earthquake probability increases with time between earthquakes and decreases after an earthquake. The probability of an event increases with time until one happens, after which it decreases, but not to zero. Hence after a long period of quiescence, the probability of an earthquake can remain higher than the long-term average for several cycles. Thus the probability of another earthquake is path dependent, i.e. depends on the prior earthquake history over multiple cycles. Time histories resulting from simulations give clusters with properties similar to those observed. The sequences of earthquakes result from both the model parameters and chance, so two runs with the same parameters look different. 
The model parameters control the average time between events and the variation of the actual times around this average, so models can be strongly or weakly time-dependent.
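The memorylessness check described above can be sketched as a probability integral transform: if inter-event times really are exponential (a "no memory" Poisson process), applying the fitted exponential CDF maps them onto a uniform distribution, which a Kolmogorov-Smirnov-type distance can test. The rate and sample size below are illustrative, not from any paleoseismic record:

```python
import math
import random

random.seed(7)

# If inter-event times g are exponential with mean tau, then
# F(g) = 1 - exp(-g/tau) is uniform on [0, 1]; deviations from
# uniformity indicate memory (clustering or quasi-periodicity).

n = 20_000
rate = 0.01                        # events per year, illustrative
gaps = [random.expovariate(rate) for _ in range(n)]
mean = sum(gaps) / n
u = sorted(1.0 - math.exp(-g / mean) for g in gaps)

# two-sided Kolmogorov-Smirnov distance from the uniform distribution
ks = max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(u))
print(ks)   # stays small when the "no memory" property holds
```

Applied to real inter-event catalogs, a large distance here would argue that apparent clusters are not purely a chance product of a time-independent process.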

  8. Exact Scheffé-type confidence intervals for output from groundwater flow models: 1. Use of hydrogeologic information

    USGS Publications Warehouse

    Cooley, Richard L.

    1993-01-01

    A new method is developed to efficiently compute exact Scheffé-type confidence intervals for output (or other function of parameters) g(β) derived from a groundwater flow model. The method is general in that parameter uncertainty can be specified by any statistical distribution having a log probability density function (log pdf) that can be expanded in a Taylor series. However, for this study parameter uncertainty is specified by a statistical multivariate beta distribution that incorporates hydrogeologic information in the form of the investigator's best estimates of parameters and a grouping of random variables representing possible parameter values so that each group is defined by maximum and minimum bounds and an ordering according to increasing value. The new method forms the confidence intervals from maximum and minimum limits of g(β) on a contour of a linear combination of (1) the quadratic form for the parameters used by Cooley and Vecchia (1987) and (2) the log pdf for the multivariate beta distribution. Three example problems are used to compare characteristics of the confidence intervals for hydraulic head obtained using different weights for the linear combination. Different weights generally produced similar confidence intervals, whereas the method of Cooley and Vecchia (1987) often produced much larger confidence intervals.

  9. Automatic image equalization and contrast enhancement using Gaussian mixture modeling.

    PubMed

    Celik, Turgay; Tjahjadi, Tardi

    2012-01-01

    In this paper, we propose an adaptive image equalization algorithm that automatically enhances the contrast in an input image. The algorithm uses the Gaussian mixture model to model the image gray-level distribution, and the intersection points of the Gaussian components in the model are used to partition the dynamic range of the image into input gray-level intervals. The contrast-equalized image is generated by transforming the pixels' gray levels in each input interval to the appropriate output gray-level interval according to the dominant Gaussian component and the cumulative distribution function of the input interval. To account for the hypothesis that homogeneous regions in the image represent homogeneous segments (or sets of Gaussian components) in the image histogram, the Gaussian components with small variances are weighted with smaller values than the Gaussian components with larger variances, and the gray-level distribution is also used to weight the components in the mapping of the input interval to the output interval. Experimental results show that the proposed algorithm produces enhanced images that are better than or comparable to those of several state-of-the-art algorithms. Unlike the other algorithms, the proposed algorithm is free of parameter setting for a given dynamic range of the enhanced image and can be applied to a wide range of image types.
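The partitioning step above hinges on the intersection points of neighbouring Gaussian components. For 1-D components, equating two weighted Gaussian pdfs and taking logs yields a quadratic in the gray level; a small sketch (illustrative, not the paper's implementation):

```python
import math

def gaussian_intersections(m1, s1, w1, m2, s2, w2):
    """Gray levels x where w1*N(x; m1, s1) == w2*N(x; m2, s2).
    Derived by taking logs of the weighted pdfs, which gives
    a*x**2 + b*x + c = 0 with the coefficients below."""
    a = 1 / (2 * s1**2) - 1 / (2 * s2**2)
    b = m2 / (s2**2) - m1 / (s1**2)
    c = (m1**2 / (2 * s1**2) - m2**2 / (2 * s2**2)
         + math.log((w2 * s1) / (w1 * s2)))
    if abs(a) < 1e-12:                       # equal variances: one crossing
        return [-c / b]
    disc = b * b - 4 * a * c
    if disc < 0:
        return []
    r = math.sqrt(disc)
    return sorted([(-b - r) / (2 * a), (-b + r) / (2 * a)])

# Hypothetical dark and bright components of an image histogram:
print(gaussian_intersections(60, 10, 0.4, 140, 25, 0.6))
```

The crossings between adjacent components then serve as the interval boundaries for the per-interval gray-level mapping.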

  10. Agent Based Computing Machine

    DTIC Science & Technology

    2005-12-09

    decision making logic that respond to the environment (concentration of operands - the state vector), and bias or "mood" as established by its history of...mentioned in the chart, there is no need for file management in an ABC Machine. Information is distributed, no history is maintained. The instruction set... Postgresql ) for collection of cluster samples/snapshots over intervals of time. A prototypical example of an XML file to configure and launch the ABC

  11. Challenges in risk estimation using routinely collected clinical data: The example of estimating cervical cancer risks from electronic health-records.

    PubMed

    Landy, Rebecca; Cheung, Li C; Schiffman, Mark; Gage, Julia C; Hyun, Noorie; Wentzensen, Nicolas; Kinney, Walter K; Castle, Philip E; Fetterman, Barbara; Poitras, Nancy E; Lorey, Thomas; Sasieni, Peter D; Katki, Hormuzd A

    2018-06-01

    Electronic health-records (EHR) are increasingly used by epidemiologists studying disease following surveillance testing to provide evidence for screening intervals and referral guidelines. Although cost-effective, undiagnosed prevalent disease and interval censoring (in which asymptomatic disease is only observed at the time of testing) raise substantial analytic issues when estimating risk that cannot be addressed using Kaplan-Meier methods. Based on our experience analysing EHR from cervical cancer screening, we previously proposed the logistic-Weibull model to address these issues. Here we demonstrate how the choice of statistical method can impact risk estimates. We use observed data on 41,067 women in the cervical cancer screening program at Kaiser Permanente Northern California, 2003-2013, as well as simulations, to evaluate the ability of different methods (Kaplan-Meier, Turnbull, Weibull and logistic-Weibull) to accurately estimate risk within a screening program. Cumulative risk estimates from the statistical methods varied considerably, with the largest differences occurring for prevalent disease risk when baseline disease ascertainment was random but incomplete. Kaplan-Meier underestimated risk at earlier times and overestimated risk at later times in the presence of interval censoring or undiagnosed prevalent disease. Turnbull performed well, though it was inefficient and not smooth. The logistic-Weibull model performed well, except when event times did not follow a Weibull distribution. We have demonstrated that methods for right-censored data, such as Kaplan-Meier, result in biased estimates of disease risks when applied to interval-censored data, such as screening programs using EHR data. The logistic-Weibull model is attractive, but the model fit must be checked against Turnbull non-parametric risk estimates. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
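The interval-censoring problem can be made concrete with a toy simulation (none of this is from the study; the annual screening schedule and Weibull onset times are invented): treating the first positive screen as the event time shifts every event to the right, which is the bias naive right-censored methods inherit:

```python
import random

random.seed(1)

visits = [1.0, 2.0, 3.0, 4.0, 5.0]       # annual screens, illustrative
n = 10_000
true_times, detected_times = [], []
for _ in range(n):
    t = random.weibullvariate(2.0, 1.5)   # latent disease onset time
    if t <= visits[-1]:
        true_times.append(t)
        # disease is only observed at the first visit at/after onset
        detected_times.append(next(v for v in visits if v >= t))

bias = sum(d - t for d, t in zip(detected_times, true_times)) / len(true_times)
print(bias)   # average delay between onset and detection (years)
```

Because every detection time is at least the onset time, a Kaplan-Meier curve of detection times necessarily understates early cumulative risk, matching the pattern the authors report.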

  12. Optimizing NEURON Simulation Environment Using Remote Memory Access with Recursive Doubling on Distributed Memory Systems.

    PubMed

    Shehzad, Danish; Bozkuş, Zeki

    2016-01-01

    The increase in complexity of neuronal network models has escalated efforts to make the NEURON simulation environment efficient. Computational neuroscientists divide the equations into subnets among multiple processors to achieve better hardware performance. On parallel machines, interprocessor spike exchange consumes a large fraction of the overall simulation time for neuronal networks. NEURON uses the Message Passing Interface (MPI) for communication between processors, and the MPI_Allgather collective is exercised for spike exchange after each interval across distributed memory systems. Increasing the number of processors achieves concurrency and better performance, but it adversely affects MPI_Allgather by increasing the communication time between processors. This necessitates an improved communication methodology to decrease the spike exchange time over distributed memory systems. This work improved the MPI_Allgather method by using Remote Memory Access (RMA) to move from two-sided to one-sided communication; a recursive doubling mechanism then achieves efficient communication between the processors in a precise number of steps. This approach enhanced communication concurrency and improved the overall runtime, making NEURON more efficient for the simulation of large neuronal network models.
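The recursive-doubling idea can be sketched independently of MPI (a toy model, not NEURON's code): in round k each rank exchanges everything it has accumulated so far with the partner whose rank differs in bit k, so P ranks share all blocks in log2(P) rounds instead of P − 1:

```python
import math

def allgather_recursive_doubling(blocks):
    """Simulate a recursive-doubling allgather over P ranks
    (P must be a power of two). Returns per-rank data and round count."""
    P = len(blocks)
    data = [{r: blocks[r]} for r in range(P)]   # rank r starts with block r
    rounds = int(math.log2(P))
    for k in range(rounds):
        step = 1 << k
        new = [dict(d) for d in data]
        for r in range(P):
            # one "get" of the partner's whole accumulated set per round
            new[r].update(data[r ^ step])
        data = new
    return data, rounds

data, rounds = allgather_recursive_doubling(["s%d" % i for i in range(8)])
print(rounds)           # 3 rounds suffice for 8 ranks
print(sorted(data[5]))  # rank 5 now holds blocks 0..7
```

Each round doubles the amount of data held per rank, which is why the step count stays logarithmic; the RMA variant in the paper realizes each exchange as one-sided communication.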

  14. Non-renewal statistics for electron transport in a molecular junction with electron-vibration interaction

    NASA Astrophysics Data System (ADS)

    Kosov, Daniel S.

    2017-09-01

    Quantum transport of electrons through a molecule is a series of individual electron tunneling events separated by stochastic waiting time intervals. We study the emergence of temporal correlations between successive waiting times for electron transport in a vibrating molecular junction. Using the master equation approach, we compute the joint probability distribution for the waiting times of two successive tunneling events. We show that the probability distribution is completely reset after each tunneling event if molecular vibrations are thermally equilibrated. If we treat vibrational dynamics exactly without imposing the equilibration constraint, the statistics of electron tunneling events become non-renewal. Non-renewal statistics between two waiting times τ1 and τ2 means that the density matrix of the molecule is not fully renewed after time τ1 and the probability of observing waiting time τ2 for the second electron transfer depends on the previous electron waiting time τ1. Strong electron-vibration coupling is required for the emergence of the non-renewal statistics. We show that in the Franck-Condon blockade regime, extremely rare tunneling events become positively correlated.
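The renewal/non-renewal distinction can be phrased operationally: for a renewal process, successive waiting times are independent draws, so their correlation vanishes. A sketch of the null (renewal) case using an exponential waiting-time density, with all parameters illustrative:

```python
import random

random.seed(3)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

# Renewal process: i.i.d. waiting times, so (tau1, tau2) pairs of
# successive intervals should be uncorrelated.
taus = [random.expovariate(1.0) for _ in range(50_000)]
r = pearson(taus[:-1], taus[1:])
print(r)   # near zero: the process "resets" after every event
```

A statistically significant nonzero value of this correlation, positive or negative, is the kind of signature the paper attributes to unequilibrated molecular vibrations.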

  15. A Validation Study of the Rank-Preserving Structural Failure Time Model: Confidence Intervals and Unique, Multiple, and Erroneous Solutions.

    PubMed

    Ouwens, Mario; Hauch, Ole; Franzén, Stefan

    2018-05-01

    The rank-preserving structural failure time model (RPSFTM) is used for health technology assessment submissions to adjust for switching patients from reference to investigational treatment in cancer trials. It uses counterfactual survival (survival when only reference treatment would have been used) and assumes that, at randomization, the counterfactual survival distribution for the investigational and reference arms is identical. Previous validation reports have assumed that patients in the investigational treatment arm stay on therapy throughout the study period. To evaluate the validity of the RPSFTM at various levels of crossover in situations in which patients are taken off the investigational drug in the investigational arm. The RPSFTM was applied to simulated datasets differing in percentage of patients switching, time of switching, underlying acceleration factor, and number of patients, using exponential distributions for the time on investigational and reference treatment. There were multiple scenarios in which two solutions were found: one corresponding to identical counterfactual distributions, and the other to two different crossing counterfactual distributions. The same was found for the hazard ratio (HR). Unique solutions were observed only when switching patients were on investigational treatment for <40% of the time that patients in the investigational arm were on treatment. Distributions other than exponential could have been used for time on treatment. An HR equal to 1 is a necessary but not always sufficient condition to indicate acceleration factors associated with equal counterfactual survival. Further assessment to distinguish crossing counterfactual curves from equal counterfactual curves is especially needed when the time that switchers stay on investigational treatment is relatively long compared to the time direct starters stay on investigational treatment.
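The counterfactual construction at the heart of the RPSFTM can be written in one line (a sketch of the common Robins-Tsiatis formulation; sign conventions for ψ differ between papers, and the numbers below are invented):

```python
import math

def counterfactual_time(t_off: float, t_on: float, psi: float) -> float:
    """RPSFTM counterfactual survival: U = T_off + exp(psi) * T_on.
    T_off is time off the investigational treatment, T_on time on it;
    exp(-psi) is the acceleration factor attributed to treatment."""
    return t_off + math.exp(psi) * t_on

# Illustrative: a control-arm patient switched after 2 years and then
# lived 3 more years on the investigational drug. With psi = -0.5 the
# treated years are shrunk by exp(-0.5), so the counterfactual
# (never-treated) survival is shorter than the observed 5 years.
u = counterfactual_time(2.0, 3.0, -0.5)
print(round(u, 3))
```

The model searches for the ψ at which the counterfactual distributions of the two randomized arms coincide; the paper's point is that this search can admit a second, spurious solution where the counterfactual curves cross rather than coincide.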

  16. The Third BATSE Gamma-Ray Burst Catalog

    NASA Technical Reports Server (NTRS)

    Meegan, Charles A.; Pendleton, Geoffrey N.; Briggs, Michael S.; Kouveliotou, Chryssa; Koshut, Thomas M.; Lestrade, John Patrick; Paciesas, William S.; McCollough, Michael L.; Brainerd, Jerome J.; Horack, John M.; hide

    1996-01-01

    The Burst and Transient Source Experiment (BATSE) on the Compton Gamma Ray Observatory (CGRO) has triggered on 1122 cosmic gamma-ray bursts between 1991 April 19 and 1994 September 19. These events constitute the Third BATSE (3B) burst catalog. This catalog includes the events previously reported in the 2B catalog, which covered the time interval 1991 April 19 to 1993 March 9. We present tables of the burst occurrence times, locations, peak fluxes, fluences, and durations. In general, results from previous BATSE catalogs are confirmed here with greater statistical significance. The angular distribution is consistent with isotropy. The mean galactic dipole and quadrupole moments are within 0.6σ and 0.3σ, respectively, of the values expected for isotropy. The intensity distribution is not consistent with a homogeneous distribution of burst sources, with V/V(sub max) = 0.33 +/- 0.01. The duration distribution (T(sub 90)) exhibits bimodality, with peaks at approx. 0.5 and approx. 30 s. There is no compelling evidence for burst repetition, but only weak limits can be placed on the repetition rate.
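The V/V(sub max) test quoted above has a compact numerical illustration (a sketch of the standard statistic, not the catalog's pipeline): for each burst, V/V_max = (C_p/C_lim)^(-3/2), where C_p is the peak flux and C_lim the detection threshold. Standard candles distributed homogeneously in Euclidean space give a mean of 0.5, so the BATSE value of 0.33 signals a deficit of faint, distant bursts:

```python
import random

random.seed(11)

R = 1.0                   # survey radius, arbitrary units
n = 100_000
v_over_vmax = []
for _ in range(n):
    r = R * random.random() ** (1 / 3)   # uniform in Euclidean volume
    # Standard candle: C_p/C_lim = (R/r)**2, hence
    # V/V_max = (C_p/C_lim)**(-3/2) = (r/R)**3
    v_over_vmax.append((r / R) ** 3)

print(sum(v_over_vmax) / n)   # mean near 0.5 for a homogeneous population
```

In this homogeneous case V/V_max is in fact uniform on [0, 1], which is why its mean, not just its range, is the diagnostic.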

  17. Some case studies of skewed (and other ab-normal) data distributions arising in low-level environmental research.

    PubMed

    Currie, L A

    2001-07-01

    Three general classes of skewed data distributions have been encountered in research on background radiation, chemical and radiochemical blanks, and low levels of 85Kr and 14C in the atmosphere and the cryosphere. The first class of skewed data can be considered to be theoretically, or fundamentally, skewed. It is typified by the exponential distribution of inter-arrival times for nuclear counting events in a Poisson process. As part of a study of the nature of low-level (anti-coincidence) Geiger-Muller counter background radiation, tests were performed on the Poisson distribution of counts, the uniform distribution of arrival times, and the exponential distribution of inter-arrival times. The real laboratory system, of course, failed the (inter-arrival time) test--for very interesting reasons, linked to the physics of the measurement process. The second, computationally skewed, class relates to skewness induced by non-linear transformations. It is illustrated by non-linear concentration estimates from inverse calibration, and bivariate blank corrections for low-level 14C-12C aerosol data that led to highly asymmetric uncertainty intervals for the biomass carbon contribution to urban "soot". The third, environmentally skewed, data class relates to a universal problem for the detection of excursions above blank or baseline levels: namely, the widespread occurrence of ab-normal distributions of environmental and laboratory blanks. This is illustrated by the search for fundamental factors that lurk behind skewed frequency distributions of sulfur laboratory blanks and 85Kr environmental baselines, and the application of robust statistical procedures for reliable detection decisions in the face of skewed isotopic carbon procedural blanks with few degrees of freedom.
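The "theoretically skewed" class has a textbook anchor: inter-arrival times of a Poisson counting process are exponential, and the exponential distribution has skewness exactly 2. A quick numerical check (rate and sample size are arbitrary):

```python
import random

random.seed(5)

n = 200_000
gaps = [random.expovariate(1.0) for _ in range(n)]

# sample skewness: third central moment over variance**1.5
mean = sum(gaps) / n
var = sum((g - mean) ** 2 for g in gaps) / n
skew = sum((g - mean) ** 3 for g in gaps) / n / var ** 1.5
print(skew)   # close to 2, the exact skewness of an exponential law
```

Departures of measured inter-arrival-time skewness from this theoretical value are one way a "real laboratory system" can fail the test, as described above.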

  18. Spatial and stratigraphic distribution of water in oil shale of the Green River Formation using Fischer Assay, Piceance Basin, northwestern Colorado

    USGS Publications Warehouse

    Johnson, Ronald C.; Mercier, Tracey J.; Brownfield, Michael E.

    2014-01-01

    The spatial and stratigraphic distribution of water in oil shale of the Eocene Green River Formation in the Piceance Basin of northwestern Colorado was studied in detail using some 321,000 Fischer assay analyses in the U.S. Geological Survey oil-shale database. The oil-shale section was subdivided into 17 roughly time-stratigraphic intervals, and the distribution of water in each interval was assessed separately. This study was conducted in part to determine whether water produced during retorting of oil shale could provide a significant amount of the water needed for an oil-shale industry. Recent estimates of water requirements vary from 1 to 10 barrels of water per barrel of oil produced, depending on the type of retort process used. Sources of water in Green River oil shale include (1) free water within clay minerals; (2) water from the hydrated minerals nahcolite (NaHCO3), dawsonite (NaAl(OH)2CO3), and analcime (NaAlSi2O6·H2O); and (3) minor water produced from the breakdown of organic matter in oil shale during retorting. The amounts represented by each of these sources vary both stratigraphically and areally within the basin. Clay is the most important source of water in the lower part of the oil-shale interval and in many basin-margin areas. Nahcolite and dawsonite are the dominant sources of water in the oil-shale and saline-mineral depocenter, and analcime is important in the upper part of the formation. Organic matter does not appear to be a major source of water. The ratio of water to oil generated with retorting is significantly less than 1:1 for most areas of the basin and for most stratigraphic intervals; thus water within oil shale can provide only a fraction of the water needed for an oil-shale industry.

  20. Substorm occurrence rates, substorm recurrence times, and solar wind structure

    NASA Astrophysics Data System (ADS)

    Borovsky, Joseph E.; Yakymenko, Kateryna

    2017-03-01

    Two collections of substorms are created: 28,464 substorms identified with jumps in the SuperMAG AL index in the years 1979-2015 and 16,025 substorms identified with electron injections into geosynchronous orbit in the years 1989-2007. Substorm occurrence rates and substorm recurrence-time distributions are examined as functions of the phase of the solar cycle, the season of the year, the Russell-McPherron favorability, the type of solar wind plasma at Earth, the geomagnetic-activity level, and as functions of various solar and solar wind properties. Three populations of substorm occurrences are seen: (1) quasiperiodically occurring substorms with recurrence times (waiting times) of 2-4 h, (2) randomly occurring substorms with recurrence times of about 6-15 h, and (3) long intervals wherein no substorms occur. A working model is suggested wherein (1) the period of periodic substorms is set by the magnetosphere with variations in the actual recurrence times caused by the need for a solar wind driving interval to occur, (2) the mesoscale structure of the solar wind magnetic field triggers the occurrence of the random substorms, and (3) the large-scale structure of the solar wind plasma is responsible for the long intervals wherein no substorms occur. Statistically, the recurrence period of periodically occurring substorms is slightly shorter when the ram pressure of the solar wind is high, when the magnetic field strength of the solar wind is strong, when the Mach number of the solar wind is low, and when the polar-cap potential saturation parameter is high.

  1. Environmental Controls on Photosynthetic Microbial Mat Distribution and Morphogenesis on a 3.42 Ga Clastic-Starved Platform

    NASA Astrophysics Data System (ADS)

    Tice, Michael M.

    2009-12-01

    All mats are preserved in the shallowest-water interval of those rocks deposited below normal wave base and above storm wave base. This interval is bounded below by a transgressive lag formed during regional flooding and above by a small condensed section that marks a local relative sea-level maximum. Restriction of all mat morphotypes to the shallowest interval of the storm-active layer in the BRC ocean reinforces previous interpretations that these mats were constructed primarily by photosynthetic organisms. Morphotypes α and β dominate the lower half of this interval and grew during deposition of relatively coarse detrital carbonaceous grains, while morphotype γ dominates the upper half and grew during deposition of fine detrital carbonaceous grains. The observed mat distribution suggests that either light intensity or, more likely, small variations in ambient current energy acted as a first-order control on mat morphotype distribution. These results demonstrate significant environmental control on biological morphogenetic processes independent of influences from siliciclastic sedimentation.

  2. Scaling and clustering effects of extreme precipitation distributions

    NASA Astrophysics Data System (ADS)

    Zhang, Qiang; Zhou, Yu; Singh, Vijay P.; Li, Jianfeng

    2012-08-01

    One of the impacts of climate change and human activities on the hydrological cycle is the change in the precipitation structure. Closely related to the precipitation structure are two characteristics: the volume (m) of wet periods (WPs) and the time interval between WPs or waiting time (t). Using daily precipitation data for a period of 1960-2005 from 590 rain gauge stations in China, these two characteristics are analyzed, involving scaling and clustering of precipitation episodes. Our findings indicate that m and t follow similar probability distribution curves, implying that precipitation processes are controlled by similar underlying thermodynamics. Analysis of conditional probability distributions shows a significant dependence of m and t on their previous values of similar volumes, and the dependence tends to be stronger when m is larger or t is longer. It indicates that a higher probability can be expected when high-intensity precipitation is followed by precipitation episodes with similar precipitation intensity and longer waiting time between WPs is followed by waiting time of similar duration. This result indicates the clustering of extreme precipitation episodes; severe droughts or floods are thus apt to occur in groups.
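
The two characteristics m and t can be extracted from a daily series with a short sketch; the toy series, the 0.1 mm wet-day threshold, and the function name below are illustrative assumptions, not the study's code.

```python
# Hypothetical daily precipitation series (mm); a wet day is >= 0.1 mm.
daily = [0, 0, 3.2, 5.1, 0, 0, 0, 12.0, 1.4, 0, 0.3, 0, 0, 0, 7.7]

def wet_periods(series, threshold=0.1):
    """Return (volume m of each wet period, waiting time t between wet periods)."""
    volumes, waits = [], []
    run, gap = 0.0, 0
    in_wet = False
    for p in series:
        if p >= threshold:
            if not in_wet:
                if volumes:          # a dry spell just ended between two WPs
                    waits.append(gap)
                gap, in_wet = 0, True
            run += p
        else:
            if in_wet:
                volumes.append(run)
                run, in_wet = 0.0, False
            gap += 1
    if in_wet:
        volumes.append(run)
    return volumes, waits

vols, waits = wet_periods(daily)
print(vols, waits)
```

The empirical distributions of `vols` (m) and `waits` (t) over many stations are what the scaling and clustering analysis operates on.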

  3. Asymptotic confidence intervals for the Pearson correlation via skewness and kurtosis.

    PubMed

    Bishara, Anthony J; Li, Jiexiang; Nash, Thomas

    2018-02-01

    When bivariate normality is violated, the default confidence interval of the Pearson correlation can be inaccurate. Two new methods were developed based on the asymptotic sampling distribution of Fisher's z' under the general case where bivariate normality need not be assumed. In Monte Carlo simulations, the most successful of these methods relied on the Vale and Maurelli (1983, Psychometrika, 48, 465) family to approximate a distribution via the marginal skewness and kurtosis of the sample data. In Simulation 1, this method provided more accurate confidence intervals of the correlation in non-normal data, at least as compared to no adjustment of the Fisher z' interval, or to adjustment via the sample joint moments. In Simulation 2, this approximate distribution method performed favourably relative to common non-parametric bootstrap methods, but its performance was mixed relative to an observed-imposed bootstrap and two other robust methods (PM1 and HC4). No method was completely satisfactory. An advantage of the approximate distribution method, though, is that it can be implemented even without access to raw data if sample skewness and kurtosis are reported, making the method particularly useful for meta-analysis. Supporting information includes R code. © 2017 The British Psychological Society.
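
For reference, the default Fisher z' interval that the abstract says can be inaccurate under non-normality is a standard textbook construction (this sketch is not the authors' adjusted method):

```python
import math

def fisher_z_ci(r, n, z_crit=1.96):
    """Default (bivariate-normal) 95% CI for a Pearson correlation via Fisher's z'."""
    z = math.atanh(r)                    # Fisher transformation z' = artanh(r)
    se = 1.0 / math.sqrt(n - 3)          # asymptotic standard error under normality
    lo, hi = z - z_crit * se, z + z_crit * se
    return math.tanh(lo), math.tanh(hi)  # back-transform to the correlation scale

lo, hi = fisher_z_ci(r=0.5, n=50)
print(round(lo, 3), round(hi, 3))
```

The adjusted methods in the paper replace the normal-theory standard error with one that accounts for the marginal skewness and kurtosis.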

  4. Statistical mechanics of economics I

    NASA Astrophysics Data System (ADS)

    Kusmartsev, F. V.

    2011-02-01

    We show that statistical mechanics is useful in the description of financial crises and economics. Taking a large number of instant snapshots of a market over an interval of time, we construct their ensembles and study their statistical inference. This results in a probability description of the market and gives capital, money, income, wealth and debt distributions, which in most cases take the form of the Bose-Einstein distribution. In addition, statistical mechanics provides the main market equations and laws which govern the correlations between the amount of money, debt, product, prices and number of retailers. We applied the relations obtained to a study of the evolution of the economy of the USA between 1996 and 2008 and observe that over that time the income of the majority of the population is well described by the Bose-Einstein distribution, whose parameters differ from year to year. Each financial crisis corresponds to a peak in the absolute activity coefficient. The analysis correctly indicates the past crises and predicts the future one.
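
A Bose-Einstein occupancy curve of the kind fitted to income data can be sketched as follows; the functional form is the standard one from statistical mechanics, and the parameter values and income units are purely illustrative assumptions:

```python
import math

def bose_einstein(x, mu, temp):
    """Bose-Einstein occupancy n(x) = 1 / (exp((x - mu)/temp) - 1), for x > mu.
    Here x plays the role of income and mu, temp are fit parameters."""
    return 1.0 / (math.exp((x - mu) / temp) - 1.0)

# Occupancy falls roughly exponentially far above mu and diverges as x -> mu+:
for income in (1.5, 2.0, 4.0, 8.0):
    print(income, round(bose_einstein(income, mu=1.0, temp=2.0), 4))
```

In a year-by-year fit, `mu` and `temp` would be re-estimated from that year's income data.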

  5. Estimated Accuracy of Three Common Trajectory Statistical Methods

    NASA Technical Reports Server (NTRS)

    Kabashnikov, Vitaliy P.; Chaikovsky, Anatoli P.; Kucsera, Tom L.; Metelskaya, Natalia S.

    2011-01-01

    Three well-known trajectory statistical methods (TSMs), namely concentration field (CF), concentration weighted trajectory (CWT), and potential source contribution function (PSCF) methods, were tested using known sources and artificially generated data sets to determine the ability of TSMs to reproduce the spatial distribution of the sources. In the works by other authors, the accuracy of the trajectory statistical methods was estimated for particular species and at specified receptor locations. We have obtained a more general statistical estimation of the accuracy of source reconstruction and have found optimum conditions to reconstruct source distributions of atmospheric trace substances. Only virtual pollutants of the primary type were considered. In real-world experiments, TSMs are intended for application to a priori unknown sources. Therefore, the accuracy of TSMs has to be tested with all possible spatial distributions of sources. An ensemble of geographical distributions of virtual sources was generated. Spearman's rank order correlation coefficient between spatial distributions of the known virtual and the reconstructed sources was taken to be a quantitative measure of the accuracy. Statistical estimates of the mean correlation coefficient and a range of the most probable values of correlation coefficients were obtained. All the TSMs that were considered here showed similar, close results. The maximum of the ratio of the mean correlation to the width of the correlation interval containing the most probable correlation values determines the optimum conditions for reconstruction. An optimal geographical domain roughly coincides with the area supplying most of the substance to the receptor. The optimal domain's size is dependent on the substance decay time. Under optimum reconstruction conditions, the mean correlation coefficients can reach 0.70-0.75. The boundaries of the interval with the most probable correlation values are 0.6-0.9 for the decay time of 240 h and 0.5-0.95 for the decay time of 12 h. The best results of source reconstruction can be expected for trace substances with a decay time on the order of several days. Although the methods considered in this paper do not guarantee high accuracy, they are computationally simple and fast. Using the TSMs in optimum conditions and taking into account the range of uncertainties, one can obtain a first hint of potential source areas.
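
Spearman's rank order correlation, used above as the accuracy measure, can be computed from scratch; the two five-point "source maps" below are toy stand-ins for the known virtual and reconstructed distributions:

```python
def rankdata(xs):
    """Average ranks (tied values share the mean rank), 1-based."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1            # average 1-based rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation = Pearson correlation of the ranks."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

true_src = [1.0, 3.0, 2.0, 5.0, 4.0]   # hypothetical known source strengths
recon    = [1.1, 2.9, 2.2, 4.7, 4.1]   # hypothetical reconstructed strengths
print(round(spearman(true_src, recon), 3))
```

Because the measure is rank-based, a reconstruction that preserves the ordering of source strengths scores 1.0 even if the absolute values differ.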

  7. Multifactor analysis of multiscaling in volatility return intervals

    NASA Astrophysics Data System (ADS)

    Wang, Fengzhong; Yamasaki, Kazuko; Havlin, Shlomo; Stanley, H. Eugene

    2009-01-01

    We study the volatility time series of 1137 most traded stocks in the U.S. stock markets for the two-year period 2001-2002 and analyze their return intervals τ, which are time intervals between volatilities above a given threshold q. We explore the probability density function of τ, P_q(τ), assuming a stretched exponential function, P_q(τ) ~ e^(-τ^γ). We find that the exponent γ depends on the threshold in the range between q=1 and 6 standard deviations of the volatility. This finding supports the multiscaling nature of the return interval distribution. To better understand the multiscaling origin, we study how γ depends on four essential factors: capitalization, risk, number of trades, and return. We show that γ depends on the capitalization, risk, and return but almost does not depend on the number of trades. This suggests that γ relates to portfolio selection but not to market activity. To further characterize the multiscaling of individual stocks, we fit the moments of τ, μ_m ≡ ⟨(τ/⟨τ⟩)^m⟩^(1/m), in the range of 10 < ⟨τ⟩ ≤ 100 by a power law, μ_m ~ ⟨τ⟩^δ. The exponent δ is found also to depend on the capitalization, risk, and return but not on the number of trades, and its tendency is opposite to that of γ. Moreover, we show that δ decreases with increasing γ approximately by a linear relation. The return intervals demonstrate the temporal structure of volatilities and our findings suggest that their multiscaling features may be helpful for portfolio optimization.
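
The stretched exponential form P_q(τ) ~ e^(-τ^γ) can be fitted by linearization, since ln(-ln P) = γ ln τ. This sketch recovers a known γ from synthetic values; it illustrates the functional form only and is not the authors' estimation procedure:

```python
import math

def fit_stretch_exponent(taus, probs):
    """Recover gamma in P(tau) ~ exp(-tau**gamma) by ordinary least squares
    through the origin on the linearized form ln(-ln P) = gamma * ln(tau)."""
    xs = [math.log(t) for t in taus]
    ys = [math.log(-math.log(p)) for p in probs]
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

gamma_true = 0.4
taus = [1.5, 2.0, 3.0, 5.0, 8.0, 13.0]
probs = [math.exp(-t ** gamma_true) for t in taus]
gamma_hat = fit_stretch_exponent(taus, probs)
print(round(gamma_hat, 3))  # → 0.4
```

With noisy empirical histograms one would fit the same linearized relation by weighted regression rather than through exact points.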

  8. The correlation between infectivity and incubation period of measles, estimated from households with two cases.

    PubMed

    Klinkenberg, Don; Nishiura, Hiroshi

    2011-09-07

    The generation time of an infectious disease is the time between infection of a primary case and infection of a secondary case by the primary case. Its distribution plays a key role in understanding the dynamics of infectious diseases in populations, e.g. in estimating the basic reproduction number. Moreover, the generation time and incubation period distributions together characterize the effectiveness of control by isolation and quarantine. In modelling studies, a relation between the two is often not made specific, but a correlation is biologically plausible. However, it is difficult to establish such correlation, because of the unobservable nature of infection events. We have quantified a joint distribution of generation time and incubation period by a novel estimation method for household data with two susceptible individuals, consisting of time intervals between disease onsets of two measles cases. We used two such datasets, and a separate incubation period dataset. Results indicate that the mean incubation period and the generation time of measles are positively correlated, and that both lie in the range of 11-12 days, suggesting that infectiousness of measles cases increases significantly around the time of symptom onset. The correlation between times from infection to secondary transmission and to symptom onset could critically affect the predicted effectiveness of isolation and quarantine. Copyright © 2011 Elsevier Ltd. All rights reserved.

  9. Sample size calculation for studies with grouped survival data.

    PubMed

    Li, Zhiguo; Wang, Xiaofei; Wu, Yuan; Owzar, Kouros

    2018-06-10

    Grouped survival data arise often in studies where the disease status is assessed at regular visits to clinic. The time to the event of interest can only be determined to be between two adjacent visits or is right censored at one visit. In data analysis, replacing the survival time with the endpoint or midpoint of the grouping interval leads to biased estimators of the effect size in group comparisons. Prentice and Gloeckler developed a maximum likelihood estimator for the proportional hazards model with grouped survival data and the method has been widely applied. Previous work on sample size calculation for designing studies with grouped data is based on either the exponential distribution assumption or the approximation of variance under the alternative with variance under the null. Motivated by studies in HIV trials, cancer trials and in vitro experiments to study drug toxicity, we develop a sample size formula for studies with grouped survival endpoints that use the method of Prentice and Gloeckler for comparing two arms under the proportional hazards assumption. We do not impose any distributional assumptions, nor do we use any approximation of variance of the test statistic. The sample size formula only requires estimates of the hazard ratio and survival probabilities of the event time of interest and the censoring time at the endpoints of the grouping intervals for one of the two arms. The formula is shown to perform well in a simulation study and its application is illustrated in the three motivating examples. Copyright © 2018 John Wiley & Sons, Ltd.

  10. Standardized likelihood ratio test for comparing several log-normal means and confidence interval for the common mean.

    PubMed

    Krishnamoorthy, K; Oral, Evrim

    2017-12-01

    Standardized likelihood ratio test (SLRT) for testing the equality of means of several log-normal distributions is proposed. The properties of the SLRT and an available modified likelihood ratio test (MLRT) and a generalized variable (GV) test are evaluated by Monte Carlo simulation and compared. Evaluation studies indicate that the SLRT is accurate even for small samples, whereas the MLRT could be quite liberal for some parameter values, and the GV test is in general conservative and less powerful than the SLRT. Furthermore, a closed-form approximate confidence interval for the common mean of several log-normal distributions is developed using the method of variance estimate recovery, and compared with the generalized confidence interval with respect to coverage probabilities and precision. Simulation studies indicate that the proposed confidence interval is accurate and better than the generalized confidence interval in terms of coverage probabilities. The methods are illustrated using two examples.

  11. Measuring skew in average surface roughness as a function of surface preparation

    NASA Astrophysics Data System (ADS)

    Stahl, Mark T.

    2015-08-01

    Characterizing surface roughness is important for predicting optical performance. Better measurement of surface roughness reduces polishing time, saves money and allows the science requirements to be better defined. This study characterized statistics of average surface roughness as a function of polishing time. Average surface roughness was measured at 81 locations using a Zygo® white light interferometer at regular intervals during the polishing process. Each data set was fit to normal and Largest Extreme Value (LEV) distributions and then tested for goodness of fit. We show that the skew in the average data changes as a function of polishing time.
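
A method-of-moments fit of the Largest Extreme Value (Gumbel) distribution of the kind used for roughness data can be sketched as follows; the synthetic sample and its parameters are illustrative assumptions, not the study's measurements:

```python
import math
import random
import statistics

EULER_GAMMA = 0.5772156649015329

def fit_lev_moments(data):
    """Method-of-moments fit of the Largest Extreme Value (Gumbel) distribution:
    scale beta = s*sqrt(6)/pi, location mu = mean - EulerGamma*beta."""
    m, s = statistics.mean(data), statistics.stdev(data)
    beta = s * math.sqrt(6) / math.pi
    mu = m - EULER_GAMMA * beta
    return mu, beta

random.seed(7)
# Synthetic "average roughness" sample drawn from Gumbel(mu=2.0, beta=0.3)
# via the inverse CDF x = mu - beta*ln(-ln(U)):
sample = [2.0 - 0.3 * math.log(-math.log(random.random())) for _ in range(5000)]
mu_hat, beta_hat = fit_lev_moments(sample)
print(round(mu_hat, 2), round(beta_hat, 2))
```

A goodness-of-fit test (e.g. Kolmogorov-Smirnov against the fitted CDF) would then decide between the normal and LEV candidates.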

  12. Cardiovascular response to acute stress in freely moving rats: time-frequency analysis.

    PubMed

    Loncar-Turukalo, Tatjana; Bajic, Dragana; Japundzic-Zigon, Nina

    2008-01-01

    Spectral analysis of cardiovascular series is an important tool for assessing the features of the autonomic control of the cardiovascular system. In this experiment, Wistar rats equipped with an intra-arterial catheter for blood pressure (BP) recording were exposed to stress induced by blowing air. The problem of non-stationary data was overcome by applying the Smoothed Pseudo Wigner-Ville (SPWV) time-frequency distribution. Spectral analysis was done before stress, during stress, immediately after stress and later in recovery. The spectral indices were calculated for both systolic blood pressure (SBP) and pulse interval (PI) series. The time evolution of spectral indices showed perturbed sympathovagal balance.

  13. An automated and objective method for age partitioning of reference intervals based on continuous centile curves.

    PubMed

    Yang, Qian; Lew, Hwee Yeong; Peh, Raymond Hock Huat; Metz, Michael Patrick; Loh, Tze Ping

    2016-10-01

    Reference intervals are the most commonly used decision support tool when interpreting quantitative laboratory results. They may require partitioning to better describe subpopulations that display significantly different reference values. Partitioning by age is particularly important for the paediatric population since there are marked physiological changes associated with growth and maturation. However, most partitioning methods are either technically complex or require prior knowledge of the underlying physiology/biological variation of the population. There is growing interest in the use of continuous centile curves, which provide seamless laboratory reference values as a child grows, as an alternative to rigidly described fixed reference intervals. However, the mathematical functions that describe these curves can be complex and may not be easily implemented in laboratory information systems. Hence, the use of fixed reference intervals is expected to continue for the foreseeable future. We developed a method that objectively proposes optimised age partitions and reference intervals for quantitative laboratory data (http://research.sph.nus.edu.sg/pp/ppResult.aspx), based on the sum of gradients that best describes the underlying distribution of the continuous centile curves. It is hoped that this method may improve the selection of age intervals for partitioning, which is receiving increasing attention in paediatric laboratory medicine. Copyright © 2016 Royal College of Pathologists of Australasia. Published by Elsevier B.V. All rights reserved.

  14. Using Screencast Videos to Enhance Undergraduate Students' Statistical Reasoning about Confidence Intervals

    ERIC Educational Resources Information Center

    Strazzeri, Kenneth Charles

    2013-01-01

    The purposes of this study were to investigate (a) undergraduate students' reasoning about the concepts of confidence intervals (b) undergraduate students' interactions with "well-designed" screencast videos on sampling distributions and confidence intervals, and (c) how screencast videos improve undergraduate students' reasoning ability…

  15. A method for developing design diagrams for ceramic and glass materials using fatigue data

    NASA Technical Reports Server (NTRS)

    Heslin, T. M.; Magida, M. B.; Forrest, K. A.

    1986-01-01

    The service lifetime of glass and ceramic materials can be expressed as a plot of time-to-failure versus applied stress, parametric in percent probability of failure. This type of plot is called a design diagram. Confidence interval estimates for such plots depend on the type of test that is used to generate the data, on assumptions made concerning the statistical distribution of the test results, and on the type of analysis used. This report outlines the development of design diagrams for glass and ceramic materials in engineering terms using static or dynamic fatigue tests, assuming either no particular statistical distribution of test results or a Weibull distribution and using either median value or homologous ratio analysis of the test results.
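
One common way to fit the Weibull distribution mentioned above is median-rank regression; this sketch (with synthetic strength data, Benard's median-rank approximation, and illustrative parameters) is a textbook variant, not the report's procedure:

```python
import math
import random

def weibull_fit_median_ranks(samples):
    """Two-parameter Weibull fit by median-rank regression:
    ln(-ln(1 - F_i)) = m*ln(x_i) - m*ln(eta), with Benard's F_i = (i-0.3)/(n+0.4)."""
    xs = sorted(samples)
    n = len(xs)
    pts = [(math.log(x), math.log(-math.log(1.0 - (i - 0.3) / (n + 0.4))))
           for i, x in enumerate(xs, start=1)]
    mx = sum(a for a, _ in pts) / n
    my = sum(b for _, b in pts) / n
    m = sum((a - mx) * (b - my) for a, b in pts) / sum((a - mx) ** 2 for a, _ in pts)
    eta = math.exp(mx - my / m)      # intercept -m*ln(eta) solved for eta
    return m, eta

random.seed(3)
# Synthetic strength data from Weibull(shape m=10, scale eta=100):
sample = [100.0 * (-math.log(random.random())) ** (1.0 / 10.0) for _ in range(200)]
m_hat, eta_hat = weibull_fit_median_ranks(sample)
print(round(m_hat, 1), round(eta_hat, 1))
```

The fitted shape and scale parameters then feed the failure-probability contours of a design diagram.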

  16. Reclaimed mineland curve number response to temporal distribution of rainfall

    USGS Publications Warehouse

    Warner, R.C.; Agouridis, C.T.; Vingralek, P.T.; Fogle, A.W.

    2010-01-01

    The curve number (CN) method is a common technique to estimate runoff volume, and it is widely used in coal mining operations such as those in the Appalachian region of Kentucky. However, very little CN data are available for watersheds disturbed by surface mining and then reclaimed using traditional techniques. Furthermore, as the CN method does not readily account for variations in infiltration rates due to varying rainfall distributions, the selection of a single CN value to encompass all temporal rainfall distributions could lead engineers to substantially under- or over-size water detention structures used in mining operations or other land uses such as development. Using rainfall and runoff data from a surface coal mine located in the Cumberland Plateau of eastern Kentucky, CNs were computed for conventionally reclaimed lands. The effects of temporal rainfall distributions on CNs were also examined by classifying storms as intense, steady, multi-interval intense, or multi-interval steady. Results indicate that CNs for such reclaimed lands ranged from 62 to 94 with a mean value of 85. Temporal rainfall distributions were also shown to significantly affect CN values with intense storms having significantly higher CNs than multi-interval storms. These results indicate that a period of recovery is present between rainfall bursts of a multi-interval storm that allows depressional storage and infiltration rates to rebound. © 2010 American Water Resources Association.
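
The CN method referred to above computes runoff depth Q from storm depth P via S = 1000/CN - 10 and Q = (P - 0.2S)^2 / (P + 0.8S) in English units. Evaluating a single storm across the reported CN range 62-94 shows how strongly the CN choice drives estimated runoff:

```python
def scs_runoff(p_inches, cn):
    """SCS curve-number runoff depth Q (inches) for storm depth P (inches)."""
    s = 1000.0 / cn - 10.0          # potential maximum retention
    ia = 0.2 * s                    # initial abstraction
    if p_inches <= ia:
        return 0.0
    return (p_inches - ia) ** 2 / (p_inches + 0.8 * s)

# The same 3-inch storm under the study's minimum, mean, and maximum CN:
for cn in (62, 85, 94):
    print(cn, round(scs_runoff(3.0, cn), 2))
```

For this hypothetical 3-inch storm, runoff varies several-fold across the observed CN range, which is why using a single CN for all temporal rainfall distributions can mis-size detention structures.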

  17. Population Pharmacokinetic Model-Based Evaluation of Standard Dosing Regimens for Cefuroxime Used in Coronary Artery Bypass Graft Surgery with Cardiopulmonary Bypass.

    PubMed

    Alqahtani, Saeed A; Alsultan, Abdullah S; Alqattan, Hussain M; Eldemerdash, Ahmed; Albacker, Turki B

    2018-04-01

    The purpose of this study was to investigate the population pharmacokinetics (PK) of cefuroxime in patients undergoing coronary artery bypass graft (CABG) surgery. In this observational pharmacokinetic study, multiple blood samples were collected over a 48-h interval of intravenous cefuroxime administration. The samples were analyzed by using a validated high-performance liquid chromatography (HPLC) method. Population pharmacokinetic models were developed using Monolix (version 4.4) software. Pharmacokinetic-pharmacodynamic (PD) simulations were performed to explore the ability of different dosage regimens to achieve the pharmacodynamic targets. A total of 468 blood samples from 78 patients were analyzed. The PK of cefuroxime were best described by a two-compartment model with between-subject variability on clearance, the volume of distribution of the central compartment, and the volume of distribution of the peripheral compartment. The clearance of cefuroxime was related to creatinine clearance (CLCR). Dosing simulations showed that standard dosing regimens of 1.5 g could achieve the PK-PD target of the percentage of the time that the free concentration is maintained above the MIC during a dosing interval (fT>MIC) of 65% for an MIC of 8 mg/liter in patients with a CLCR of 30, 60, or 90 ml/min, whereas this dosing regimen failed to achieve the PK-PD target in patients with a CLCR of ≥125 ml/min. In conclusion, administration of standard doses of 1.5 g three times daily provided adequate antibiotic prophylaxis in patients undergoing CABG surgery. Lower doses failed to achieve the PK-PD target. Patients with high CLCR values required either higher doses or shorter intervals of cefuroxime dosing. On the other hand, lower doses (1 g three times daily) produced adequate target attainment for patients with low CLCR values (≤30 ml/min). Copyright © 2018 American Society for Microbiology.
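
The fT>MIC target can be illustrated with a deliberately simplified one-compartment IV bolus model (the study itself used a two-compartment model; the dose, volume of distribution, and clearance below are hypothetical, illustrative values, not fitted parameters from the paper):

```python
import math

def ft_above_mic(dose_mg, vd_l, cl_l_per_h, mic_mg_l, tau_h):
    """Fraction of the dosing interval tau with concentration above the MIC
    for a single IV bolus in a one-compartment model: C(t) = (dose/Vd)*exp(-k*t)."""
    k = cl_l_per_h / vd_l                    # elimination rate constant
    c0 = dose_mg / vd_l                      # peak concentration after the bolus
    if c0 <= mic_mg_l:
        return 0.0
    t_above = math.log(c0 / mic_mg_l) / k    # time until C(t) falls to the MIC
    return min(t_above, tau_h) / tau_h

# Hypothetical regimen: 1500 mg every 8 h, Vd = 20 L, CL = 6 L/h, MIC = 8 mg/L
frac = ft_above_mic(dose_mg=1500, vd_l=20, cl_l_per_h=6, mic_mg_l=8, tau_h=8)
print(round(100 * frac, 1))
```

Raising clearance (as with a high CLCR) increases k, shortens `t_above`, and pushes `frac` below the 65% target, which is the mechanism behind the abstract's dosing conclusions.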

  19. Intermittency via moments and distributions in central O+Cu collisions at 14.6 A·GeV/c

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tannenbaum, M.J.; The E802 Collaboration

    Fluctuations in pseudorapidity distributions of charged particles from central (ZCAL) collisions of ¹⁶O+Cu at 14.6 A·GeV/c have been analyzed by Ju Kang using the method of scaled factorial moments as a function of the interval δη. An apparent power-law growth of moments with decreasing interval is observed down to δη ≈ 0.1, and the measured slope parameters are found to obey two scaling rules. Previous experience with E_T distributions suggested that fluctuations of multiplicity and transverse energy can be well described by Gamma or Negative Binomial Distributions (NBD), and excellent fits to NBD were obtained in all δη bins. The k parameter of the NBD fit was found to increase linearly with the δη interval, which, due to the well-known property of the NBD under convolution, indicates that the multiplicity distributions in adjacent bins of pseudorapidity δη ≈ 0.1 are largely statistically independent.
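
The convolution property invoked here is easy to check numerically: if counts in adjacent bins are independent and each is NBD with shape parameter k, the combined bin is NBD with the shapes added, so k grows linearly with bin size. A pure-Python sketch with illustrative parameters (not the experiment's values):

```python
import random

def nbd_sample(k, p, rng):
    """Negative binomial with integer shape k: number of failures before
    the k-th success in Bernoulli(p) trials."""
    failures = 0
    for _ in range(k):
        while rng.random() >= p:  # count failures until a success occurs
            failures += 1
    return failures

def k_hat(xs):
    """Moment estimate of the NBD shape parameter: k = mean^2 / (var - mean)."""
    n = len(xs)
    m = sum(xs) / n
    v = sum((x - m) ** 2 for x in xs) / n
    return m * m / (v - m)

rng = random.Random(42)
a = [nbd_sample(2, 0.4, rng) for _ in range(100_000)]  # counts in bin 1
b = [nbd_sample(2, 0.4, rng) for _ in range(100_000)]  # independent counts in bin 2
combined = [x + y for x, y in zip(a, b)]               # counts in the doubled bin
```

Here k_hat(a) comes out near 2 and k_hat(combined) near 4: the linear growth of k with bin size that the abstract takes as evidence of independence.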

  20. Statistical interpretation of transient current power-law decay in colloidal quantum dot arrays

    NASA Astrophysics Data System (ADS)

    Sibatov, R. T.

    2011-08-01

    A new statistical model of the charge transport in colloidal quantum dot arrays is proposed. It takes into account Coulomb blockade forbidding multiple occupancy of nanocrystals and the influence of energetic disorder of interdot space. The model explains power-law current transients and the presence of the memory effect. The fractional differential analogue of Ohm's law is found phenomenologically for nanocrystal arrays. The model combines ideas that were considered conflicting by other authors: the Scher-Montroll idea about the power-law distribution of waiting times in localized states for disordered semiconductors is applied taking into account Coulomb blockade; Novikov's condition about the asymptotic power-law distribution of time intervals between successful current pulses in conduction channels is fulfilled; and the carrier injection blocking predicted by Ginger and Greenham (2000 J. Appl. Phys. 87 1361) takes place.

  1. Modeling and Scaling of the Distribution of Trade Avalanches in a STOCK Market

    NASA Astrophysics Data System (ADS)

    Kim, Hyun-Joo

    We study the trading activity in the Korea Stock Exchange by considering trade avalanches. A series of successive trades separated by small inter-trade time intervals is regarded as a trade avalanche, whose size s is defined as the number of trades in the series. We measure the distribution P(s) of trade avalanche sizes and find that it follows the power-law behavior P(s) ~ s^(-α) with exponent α ≈ 2 for the two stocks with the largest number of trades. A simple stochastic model which describes the power-law behavior of the trade avalanche size distribution is introduced. In the model it is assumed that some trades induce accompanying trades, which results in trade avalanches, and we find that the distribution of trade avalanche sizes also follows a power law with exponent α ≈ 2.
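
The avalanche construction (grouping successive trades whose inter-trade gaps stay below a threshold) can be sketched as follows; the threshold stands in for whatever "small trade time interval" is adopted:

```python
def avalanche_sizes(trade_times, max_gap):
    """Split sorted trade timestamps into avalanches: maximal runs of
    successive trades separated by gaps of at most max_gap.
    Returns the size (number of trades) of each avalanche."""
    sizes, current = [], 1
    for prev, cur in zip(trade_times, trade_times[1:]):
        if cur - prev <= max_gap:
            current += 1           # still inside the same avalanche
        else:
            sizes.append(current)  # gap too large: avalanche ends
            current = 1
    sizes.append(current)
    return sizes
```

The resulting size list is what the distribution P(s) is estimated from.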

  2. Extended Poisson process modelling and analysis of grouped binary data.

    PubMed

    Faddy, Malcolm J; Smith, David M

    2012-05-01

    A simple extension of the Poisson process results in binomially distributed counts of events in a time interval. A further extension generalises this to probability distributions under- or over-dispersed relative to the binomial distribution. Substantial levels of under-dispersion are possible with this modelling, but only modest levels of over-dispersion - up to Poisson-like variation. Although simple analytical expressions for the moments of these probability distributions are not available, approximate expressions for the mean and variance are derived, and used to re-parameterise the models. The modelling is applied in the analysis of two published data sets, one showing under-dispersion and the other over-dispersion. More appropriate assessment of the precision of estimated parameters and reliable model checking diagnostics follow from this more general modelling of these data sets. © 2012 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
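
Dispersion relative to the binomial can be quantified as the ratio of the observed variance to the binomial variance implied by the observed mean; a minimal sketch:

```python
import statistics

def binomial_dispersion(counts, n_trials):
    """Variance ratio relative to a binomial with the same mean:
    1 is binomial-like, below 1 under-dispersed, above 1 over-dispersed."""
    m = statistics.fmean(counts)
    p = m / n_trials
    return statistics.pvariance(counts, mu=m) / (n_trials * p * (1 - p))
```

For example, counts that never vary are maximally under-dispersed (ratio 0), while counts that swing between the extremes are strongly over-dispersed.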

  3. Recent and Future Enhancements in NDI for Aircraft Structures (Postprint)

    DTIC Science & Technology

    2015-11-01

    found that different capabilities were being used to determine inspection intervals for different aircraft [7]. This led to an internal effort...capability of the NDI technique determines the inspection intervals and the Distribution Statement A. Approved for public release; distribution...damage and that the aircraft structure had to be inspectable. The results of the damage tolerance assessments were incorporated into USAF Technical

  4. Recent and Future Enhancement in NDI for Aircraft Structures (Postprint)

    DTIC Science & Technology

    2015-11-01

    found that different capabilities were being used to determine inspection intervals for different aircraft [7]. This led to an internal effort...capability of the NDI technique determines the inspection intervals and the Distribution Statement A. Approved for public release; distribution...damage and that the aircraft structure had to be inspectable. The results of the damage tolerance assessments were incorporated into USAF Technical

  5. Performance Analysis of the IEEE 802.11p Multichannel MAC Protocol in Vehicular Ad Hoc Networks

    PubMed Central

    2017-01-01

    Vehicular Ad Hoc Networks (VANETs) employ multiple channels to provide a variety of safety and non-safety applications, based on the IEEE 802.11p and IEEE 1609.4 protocols. The safety applications require timely and reliable transmissions, while the non-safety applications require efficient and high throughput. In the IEEE 1609.4 protocol, the operating interval is divided into alternating Control Channel (CCH) and Service Channel (SCH) intervals of identical length. During the CCH interval, nodes transmit safety-related messages and control messages, and the Enhanced Distributed Channel Access (EDCA) mechanism is employed to allow four Access Categories (ACs) within a station with different priorities according to their criticality for the vehicle’s safety. During the SCH interval, the non-safety messages are transmitted. An analytical model is proposed in this paper to evaluate the performance, reliability and efficiency of the IEEE 802.11p and IEEE 1609.4 protocols. The proposed model improves on existing work by taking several aspects and the characteristics of multichannel switching into design consideration. Extensive performance evaluations based on analysis and simulation help to validate the accuracy of the proposed model and analyze the capabilities and limitations of the IEEE 802.11p and IEEE 1609.4 protocols, and enhancement suggestions are given. PMID:29231882

  6. Performance Analysis of the IEEE 802.11p Multichannel MAC Protocol in Vehicular Ad Hoc Networks.

    PubMed

    Song, Caixia

    2017-12-12

    Vehicular Ad Hoc Networks (VANETs) employ multiple channels to provide a variety of safety and non-safety applications, based on the IEEE 802.11p and IEEE 1609.4 protocols. The safety applications require timely and reliable transmissions, while the non-safety applications require efficient and high throughput. In the IEEE 1609.4 protocol, the operating interval is divided into alternating Control Channel (CCH) and Service Channel (SCH) intervals of identical length. During the CCH interval, nodes transmit safety-related messages and control messages, and the Enhanced Distributed Channel Access (EDCA) mechanism is employed to allow four Access Categories (ACs) within a station with different priorities according to their criticality for the vehicle's safety. During the SCH interval, the non-safety messages are transmitted. An analytical model is proposed in this paper to evaluate the performance, reliability and efficiency of the IEEE 802.11p and IEEE 1609.4 protocols. The proposed model improves on existing work by taking several aspects and the characteristics of multichannel switching into design consideration. Extensive performance evaluations based on analysis and simulation help to validate the accuracy of the proposed model and analyze the capabilities and limitations of the IEEE 802.11p and IEEE 1609.4 protocols, and enhancement suggestions are given.

  7. Possible scenarios for occurrence of M ~ 7 interplate earthquakes prior to and following the 2011 Tohoku-Oki earthquake based on numerical simulation.

    PubMed

    Nakata, Ryoko; Hori, Takane; Hyodo, Mamoru; Ariyoshi, Keisuke

    2016-05-10

    We show possible scenarios for the occurrence of M ~ 7 interplate earthquakes prior to and following the M ~ 9 earthquake along the Japan Trench, such as the 2011 Tohoku-Oki earthquake. One such M ~ 7 earthquake is the so-called Miyagi-ken-Oki earthquake, for which we conducted numerical simulations of earthquake generation cycles by using realistic three-dimensional (3D) geometry of the subducting Pacific Plate. In a number of scenarios, the time interval between the M ~ 9 earthquake and the subsequent Miyagi-ken-Oki earthquake was equal to or shorter than the average recurrence interval during the later stage of the M ~ 9 earthquake cycle. The scenarios successfully reproduced important characteristics such as the recurrence of M ~ 7 earthquakes, coseismic slip distribution, afterslip distribution, the largest foreshock, and the largest aftershock of the 2011 earthquake. Thus, these results suggest that we should prepare for future M ~ 7 earthquakes in the Miyagi-ken-Oki segment even though this segment recently experienced large coseismic slip in 2011.

  8. Psychophysics of remembering.

    PubMed Central

    White, K G; Wixted, J T

    1999-01-01

    We present a new model of remembering in the context of conditional discrimination. For procedures such as delayed matching to sample, the effect of the sample stimuli at the time of remembering is represented by a pair of Thurstonian (normal) distributions of effective stimulus values. The critical assumption of the model is that, based on prior experience, each effective stimulus value is associated with a ratio of reinforcers obtained for previous correct choices of the comparison stimuli. That ratio determines the choice that is made on the basis of the matching law. The standard deviations of the distributions are assumed to increase with increasing retention-interval duration, and the distance between their means is assumed to be a function of other factors that influence overall difficulty of the discrimination. It is a behavioral model in that choice is determined by its reinforcement history. The model predicts that the biasing effects of the reinforcer differential increase with decreasing discriminability and with increasing retention-interval duration. Data from several conditions using a delayed matching-to-sample procedure with pigeons support the predictions. PMID:10028693
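
The Thurstonian part of the model can be illustrated with a Monte-Carlo sketch in which widening the effective-stimulus distributions (the assumed effect of longer retention intervals) lowers accuracy. This stripped-down version assumes an equal reinforcer ratio, so the matching-law choice reduces to a midpoint criterion:

```python
import random

def proportion_correct(separation, sd, trials=100_000, seed=1):
    """Two normal effective-stimulus distributions centred at 0 and
    `separation` with common sd; a trial counts as correct when the sample
    drawn from the 'signal' distribution falls above the midpoint."""
    rng = random.Random(seed)
    midpoint = separation / 2
    hits = sum(rng.gauss(separation, sd) > midpoint for _ in range(trials))
    return hits / trials
```

Doubling sd (a longer retention interval) drops accuracy from about 0.84 to about 0.69 at separation 2, mirroring the predicted loss of discriminability.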

  9. Possible scenarios for occurrence of M ~ 7 interplate earthquakes prior to and following the 2011 Tohoku-Oki earthquake based on numerical simulation

    PubMed Central

    Nakata, Ryoko; Hori, Takane; Hyodo, Mamoru; Ariyoshi, Keisuke

    2016-01-01

    We show possible scenarios for the occurrence of M ~ 7 interplate earthquakes prior to and following the M ~ 9 earthquake along the Japan Trench, such as the 2011 Tohoku-Oki earthquake. One such M ~ 7 earthquake is the so-called Miyagi-ken-Oki earthquake, for which we conducted numerical simulations of earthquake generation cycles by using realistic three-dimensional (3D) geometry of the subducting Pacific Plate. In a number of scenarios, the time interval between the M ~ 9 earthquake and the subsequent Miyagi-ken-Oki earthquake was equal to or shorter than the average recurrence interval during the later stage of the M ~ 9 earthquake cycle. The scenarios successfully reproduced important characteristics such as the recurrence of M ~ 7 earthquakes, coseismic slip distribution, afterslip distribution, the largest foreshock, and the largest aftershock of the 2011 earthquake. Thus, these results suggest that we should prepare for future M ~ 7 earthquakes in the Miyagi-ken-Oki segment even though this segment recently experienced large coseismic slip in 2011. PMID:27161897

  10. Interval Timing Is Preserved Despite Circadian Desynchrony in Rats: Constant Light and Heavy Water Studies.

    PubMed

    Petersen, Christian C; Mistlberger, Ralph E

    2017-08-01

    The mechanisms that enable mammals to time events that recur at 24-h intervals (circadian timing) and at arbitrary intervals in the seconds-to-minutes range (interval timing) are thought to be distinct at the computational and neurobiological levels. Recent evidence that disruption of circadian rhythmicity by constant light (LL) abolishes interval timing in mice challenges this assumption and suggests a critical role for circadian clocks in short interval timing. We sought to confirm and extend this finding by examining interval timing in rats in which circadian rhythmicity was disrupted by long-term exposure to LL or by chronic intake of 25% D2O. Adult, male Sprague-Dawley rats were housed in a light-dark (LD) cycle or in LL until free-running circadian rhythmicity was markedly disrupted or abolished. The rats were then trained and tested on 15- and 30-sec peak-interval procedures, with water restriction used to motivate task performance. Interval timing was found to be unimpaired in LL rats, but a weak circadian activity rhythm was apparently rescued by the training procedure, possibly due to binge feeding that occurred during the 15-min water access period that followed training each day. A second group of rats in LL were therefore restricted to 6 daily meals scheduled at 4-h intervals. Despite a complete absence of circadian rhythmicity in this group, interval timing was again unaffected. To eliminate all possible temporal cues, we tested a third group of rats in LL by using a pseudo-randomized schedule. Again, interval timing remained accurate. Finally, rats tested in LD received 25% D2O in place of drinking water. This markedly lengthened the circadian period and caused a failure of LD entrainment but did not disrupt interval timing. These results indicate that interval timing in rats is resistant to disruption by manipulations of circadian timekeeping previously shown to impair interval timing in mice.

  11. Min and Max Exponential Extreme Interval Values and Statistics

    ERIC Educational Resources Information Center

    Jance, Marsha; Thomopoulos, Nick

    2009-01-01

    The extreme interval values and statistics (expected value, median, mode, standard deviation, and coefficient of variation) for the smallest (min) and largest (max) values of exponentially distributed variables with parameter λ = 1 are examined for different observation (sample) sizes. An extreme interval value g_a is defined as a…
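
The expected values behind such tables follow from standard results for iid Exp(1) variables: the minimum of n of them is itself exponential with rate n, so E[min] = 1/n, while E[max] equals the n-th harmonic number:

```python
from fractions import Fraction

def expected_min_exponential(n):
    """E[min of n iid Exp(1)] = 1/n, since the minimum is Exp(n)."""
    return Fraction(1, n)

def expected_max_exponential(n):
    """E[max of n iid Exp(1)] = H_n = 1 + 1/2 + ... + 1/n."""
    return sum(Fraction(1, i) for i in range(1, n + 1))
```

So the gap between the extreme statistics widens with sample size: the expected minimum shrinks toward 0 while the expected maximum grows logarithmically.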

  12. A new framework of statistical inferences based on the valid joint sampling distribution of the observed counts in an incomplete contingency table.

    PubMed

    Tian, Guo-Liang; Li, Hui-Qiong

    2017-08-01

    Some existing confidence interval methods and hypothesis testing methods in the analysis of a contingency table with incomplete observations in both margins entirely depend on an underlying assumption that the sampling distribution of the observed counts is a product of independent multinomial/binomial distributions for complete and incomplete counts. However, it can be shown that this independency assumption is incorrect and can result in unreliable conclusions because of the under-estimation of the uncertainty. Therefore, the first objective of this paper is to derive the valid joint sampling distribution of the observed counts in a contingency table with incomplete observations in both margins. The second objective is to provide a new framework for analyzing incomplete contingency tables based on the derived joint sampling distribution of the observed counts by developing a Fisher scoring algorithm to calculate maximum likelihood estimates of parameters of interest, the bootstrap confidence interval methods, and the bootstrap testing hypothesis methods. We compare the differences between the valid sampling distribution and the sampling distribution under the independency assumption. Simulation studies showed that average/expected confidence-interval widths of parameters based on the sampling distribution under the independency assumption are shorter than those based on the new sampling distribution, yielding unrealistic results. A real data set is analyzed to illustrate the application of the new sampling distribution for incomplete contingency tables and the analysis results again confirm the conclusions obtained from the simulation studies.
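
For context, the generic percentile-bootstrap pattern the paper builds on can be sketched as follows (the paper itself resamples from its derived joint sampling distribution of the counts rather than from raw data, so this is only the generic machinery):

```python
import random
import statistics

def bootstrap_ci(data, stat, level=0.95, n_boot=2000, seed=0):
    """Percentile bootstrap confidence interval for an arbitrary statistic:
    resample with replacement, recompute the statistic, take quantiles."""
    rng = random.Random(seed)
    reps = sorted(
        stat([rng.choice(data) for _ in data]) for _ in range(n_boot)
    )
    lo_idx = int(n_boot * (1 - level) / 2)
    hi_idx = int(n_boot * (1 + level) / 2) - 1
    return reps[lo_idx], reps[hi_idx]
```

For example, bootstrapping the mean of the integers 1 to 100 returns an interval straddling the sample mean of 50.5.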

  13. Optimizing preventive maintenance policy: A data-driven application for a light rail braking system.

    PubMed

    Corman, Francesco; Kraijema, Sander; Godjevac, Milinko; Lodewijks, Gabriel

    2017-10-01

    This article presents a case study determining the optimal preventive maintenance policy for a light rail rolling stock system in terms of reliability, availability, and maintenance costs. The maintenance policy defines one of the three predefined preventive maintenance actions at fixed time-based intervals for each of the subsystems of the braking system. Based on work, maintenance, and failure data, we model the reliability degradation of the system and its subsystems under the current maintenance policy by a Weibull distribution. We then analytically determine the relation between reliability, availability, and maintenance costs. We validate the model against recorded reliability and availability and get further insights by a dedicated sensitivity analysis. The model is then used in a sequential optimization framework determining preventive maintenance intervals to improve on the key performance indicators. We show the potential of data-driven modelling to determine optimal maintenance policy: same system availability and reliability can be achieved with 30% maintenance cost reduction, by prolonging the intervals and re-grouping maintenance actions.
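
A minimal sketch of the Weibull reliability model that underlies this kind of interval optimization; the shape and scale values used below are illustrative, not the paper's fitted parameters:

```python
import math

def weibull_reliability(t, beta, eta):
    """Survival probability R(t) = exp(-(t/eta)^beta) under a Weibull
    degradation model with shape beta and scale eta."""
    return math.exp(-((t / eta) ** beta))

def max_interval_for_reliability(r_target, beta, eta):
    """Longest preventive-maintenance interval keeping R(t) >= r_target,
    obtained by inverting the reliability function."""
    return eta * (-math.log(r_target)) ** (1.0 / beta)
```

Prolonging the interval trades reliability against maintenance cost, which is the tension the article's optimization framework resolves.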

  14. Optimizing preventive maintenance policy: A data-driven application for a light rail braking system

    PubMed Central

    Corman, Francesco; Kraijema, Sander; Godjevac, Milinko; Lodewijks, Gabriel

    2017-01-01

    This article presents a case study determining the optimal preventive maintenance policy for a light rail rolling stock system in terms of reliability, availability, and maintenance costs. The maintenance policy defines one of the three predefined preventive maintenance actions at fixed time-based intervals for each of the subsystems of the braking system. Based on work, maintenance, and failure data, we model the reliability degradation of the system and its subsystems under the current maintenance policy by a Weibull distribution. We then analytically determine the relation between reliability, availability, and maintenance costs. We validate the model against recorded reliability and availability and get further insights by a dedicated sensitivity analysis. The model is then used in a sequential optimization framework determining preventive maintenance intervals to improve on the key performance indicators. We show the potential of data-driven modelling to determine optimal maintenance policy: same system availability and reliability can be achieved with 30% maintenance cost reduction, by prolonging the intervals and re-grouping maintenance actions. PMID:29278245

  15. Use of microcomputer in mapping depth of stratigraphic horizons in National Petroleum Reserve in Alaska

    USGS Publications Warehouse

    Payne, Thomas G.

    1982-01-01

    REGIONAL MAPPER is a menu-driven system in the BASIC language for computing and plotting (1) time, depth, and average velocity to geologic horizons, (2) interval time, thickness, and interval velocity of stratigraphic intervals, and (3) subcropping and onlapping intervals at unconformities. The system consists of three programs: FILER, TRAVERSER, and PLOTTER. A control point is a shot point with velocity analysis or a shot point at or near a well with a velocity check-shot survey. Reflection time to and code number of seismic horizons are filed by digitizing tablet from record sections. TRAVERSER starts at a point of geologic control and, in traversing to another, parallels seismic events, records loss of horizons by onlap and truncation, and stores reflection time for geologic horizons at traversed shot points. TRAVERSER is basically a phantoming procedure. Permafrost thickness and velocity variations, buried canyons with low-velocity fill, and error in seismically derived velocity cause velocity anomalies that complicate depth mapping. Two depths to the top of the pebble shale are computed for each control point. One depth, designated Zs, is based on seismically derived velocity. The other (Zw) is based on interval velocity interpolated linearly between wells and multiplied by interval time (isochron) to give interval thickness. Zw is computed for all geologic horizons by downward summation of interval thickness. Unknown true depth (Z) to the pebble shale may be expressed as Z = Zs + es and Z = Zw + ew, where the e terms represent error. Equating the two expressions gives the depth difference D = Zs - Zw = ew - es. A plot of D for the top of the pebble shale is readily contourable, but smoothing is required to produce a reasonably simple surface. Seismically derived velocity used in computing Zs includes the effect of velocity anomalies but is subject to some large randomly distributed errors, resulting in depth errors (es). Well-derived velocity used in computing Zw does not include the effect of velocity anomalies, but the error (ew) should reflect these anomalies and should be contourable (non-random). The D surface as contoured with smoothing is assumed to represent ew, that is, the depth effect of variations in permafrost thickness and velocity and in buried canyon depth. Estimated depth (Zest) to each geologic horizon is the sum of Zw for that horizon and a constant ew as contoured for the pebble shale, which is the first highly continuous seismic horizon below the zone of anomalous velocity. Results of this 'depthing' procedure are compared with those of Tetra Tech, Inc., the subcontractor responsible for geologic and geophysical interpretation and mapping.
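
The downward summation used to obtain Zw can be sketched directly from the abstract's description (interval thickness = interval velocity × interval time; one-way interval times are assumed here):

```python
def well_derived_depths(interval_velocities, interval_times):
    """Zw by downward summation: each stratigraphic interval contributes a
    thickness of interval velocity times interval time (isochron); depths
    to the base of each interval accumulate from the surface down."""
    depths, z = [], 0.0
    for v, t in zip(interval_velocities, interval_times):
        z += v * t
        depths.append(z)
    return depths
```

For example, two intervals at 2000 m/s for 0.5 s and 3000 m/s for 0.25 s give depths of 1000 m and 1750 m to the two interval bases.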

  16. The spike trains of inhibited pacemaker neurons seen through the magnifying glass of nonlinear analyses.

    PubMed

    Segundo, J P; Sugihara, G; Dixon, P; Stiber, M; Bersier, L F

    1998-12-01

    This communication describes the new information that may be obtained by applying nonlinear analytical techniques to neurobiological time-series. Specifically, we consider the sequence of interspike intervals Ti (the "timing") of trains recorded from synaptically inhibited crayfish pacemaker neurons. As reported earlier, different postsynaptic spike train forms (sets of timings with shared properties) are generated by varying the average rate and/or pattern (implying interval dispersions and sequences) of presynaptic spike trains. When the presynaptic train is Poisson (independent exponentially distributed intervals), the form is "Poisson-driven" (unperturbed and lengthened intervals succeed each other irregularly). When presynaptic trains are pacemaker (intervals practically equal), forms are either "p:q locked" (intervals repeat periodically), "intermittent" (mostly almost locked but disrupted irregularly), "phase walk throughs" (intermittencies with briefer regular portions), or "messy" (difficult to predict or describe succinctly). Messy trains are either "erratic" (some intervals natural and others lengthened irregularly) or "stammerings" (intervals are integral multiples of presynaptic intervals). The individual spike train forms were analysed using attractor reconstruction methods based on the lagged coordinates provided by successive intervals from the time-series Ti. Numerous models were evaluated in terms of their predictive performance by a trial-and-error procedure: the most successful model was taken as best reflecting the true nature of the system's attractor. Each form was characterized in terms of its dimensionality, nonlinearity and predictability. (1) The dimensionality of the underlying dynamical attractor was estimated by the minimum number of variables (coordinates Ti) required to model acceptably the system's dynamics, i.e. by the system's degrees of freedom. 
Each model tested was based on a different number of Ti; the smallest number whose predictions were judged successful provided the best integer approximation of the attractor's true dimension (not necessarily an integer). Dimensionalities from three to five provided acceptable fits. (2) The degree of nonlinearity was estimated by: (i) comparing the correlations between experimental results and data from linear and nonlinear models, and (ii) tuning model nonlinearity via a distance-weighting function and identifying either a local or a global neighborhood size. Lockings were compatible with linear models and stammerings were marginal; nonlinear models were best for Poisson-driven, intermittent and erratic forms. (3) Finally, prediction accuracy was plotted against increasingly long sequences of intervals forecast: the accuracies for Poisson-driven, locked and stammering forms were invariant, revealing irregularities due to uncorrelated noise, but those of intermittent and messy erratic forms decayed rapidly, indicating an underlying deterministic process. The excellent reconstructions possible for messy erratic and for some intermittent forms are especially significant because of their relatively low dimensionality (around 4), high degree of nonlinearity and prediction decay with time. This is characteristic of chaotic systems, and provides evidence that nonlinear couplings between relatively few variables are the major source of the apparent complexity seen in these cases. This demonstration of different dimensions, degrees of nonlinearity and predictabilities provides rigorous support for the categorization of different synaptically driven discharge forms proposed earlier on the basis of more heuristic criteria. This has significant implications. (1) It demonstrates that heterogeneous postsynaptic forms can indeed be induced by manipulating a few presynaptic variables.
(2) Each presynaptic timing induces a form with characteristic dimensionality, thus breaking up the preparation into subsystems such that the physical variables in each operate as one

  17. Evaluation Of Statistical Models For Forecast Errors From The HBV-Model

    NASA Astrophysics Data System (ADS)

    Engeland, K.; Kolberg, S.; Renard, B.; Stensland, I.

    2009-04-01

    Three statistical models for the forecast errors for inflow to the Langvatn reservoir in Northern Norway have been constructed and tested according to how well the distribution and median values of the forecast errors fit the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first order autoregressive model was constructed for the forecast errors. The parameters were conditioned on climatic conditions. In the second model the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first order autoregressive model was constructed for the forecast errors. For the last model, positive and negative errors were modeled separately. The errors were first NQT-transformed before a model was constructed in which the mean values were conditioned on climate, forecasted inflow and yesterday's error. To test the three models we applied three criteria: we wanted a) the median values to be close to the observed values; b) the forecast intervals to be narrow; c) the distribution to be correct. The results showed that it is difficult to obtain a correct model for the forecast errors, and that the main challenge is to account for the auto-correlation in the errors. Model 1 and 2 gave similar results, and the main drawback is that the distributions are not correct. The 95% forecast intervals were well identified, but smaller forecast intervals were over-estimated, and larger intervals were under-estimated. Model 3 gave a distribution that fits better, but the median values do not fit well since the auto-correlation is not properly accounted for. If the 95% forecast interval is of interest, Model 2 is recommended. If the whole distribution is of interest, Model 3 is recommended.
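
The ingredients of the first model (a Box-Cox transformation followed by a first-order autoregression on the forecast errors) can be sketched as follows, with a least-squares estimate of the AR(1) coefficient; all values are illustrative:

```python
import math

def box_cox(x, lam):
    """Box-Cox transform applied to inflows before modelling the errors."""
    return math.log(x) if lam == 0 else (x ** lam - 1) / lam

def ar1_coefficient(errors):
    """Least-squares estimate of phi in e_t = phi * e_{t-1} + noise
    (zero-mean errors assumed)."""
    num = sum(e1 * e0 for e1, e0 in zip(errors[1:], errors[:-1]))
    den = sum(e0 * e0 for e0 in errors[:-1])
    return num / den
```

A large estimated phi is exactly the strong auto-correlation the authors identify as the main modelling challenge.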

  18. Comparison between volatility return intervals of the S&P 500 index and two common models

    NASA Astrophysics Data System (ADS)

    Vodenska-Chitkushev, I.; Wang, F. Z.; Weber, P.; Yamasaki, K.; Havlin, S.; Stanley, H. E.

    2008-01-01

    We analyze the S&P 500 index data for the 13-year period, from January 1, 1984 to December 31, 1996, with one data point every 10 min. For this database, we study the distribution and clustering of volatility return intervals, which are defined as the time intervals between successive volatilities above a certain threshold q. We find that the long memory in the volatility leads to a clustering of above-median as well as below-median return intervals. In addition, it turns out that the short return intervals form larger clusters compared to the long return intervals. When comparing the empirical results to the ARMA-FIGARCH and fBm models for volatility, we find that the fBm model predicts scaling better than the ARMA-FIGARCH model, which is consistent with the argument that both ARMA-FIGARCH and fBm capture the long-term dependence in return intervals to a certain extent, but only fBm accounts for the scaling. We perform the Student's t-test to compare the empirical data with the shuffled records, ARMA-FIGARCH and fBm. We analyze separately the clusters of above-median return intervals and the clusters of below-median return intervals for different thresholds q. We find that the empirical data are statistically different from the shuffled data for all thresholds q. Our results also suggest that the ARMA-FIGARCH model is statistically different from the S&P 500 for intermediate q for both above-median and below-median clusters, while fBm is statistically different from S&P 500 for small and large q for above-median clusters and for small q for below-median clusters. Neither model can fully explain the entire regime of q studied.
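
The return-interval construction is simple to state in code: record the positions of volatilities exceeding the threshold q and take successive differences:

```python
def return_intervals(volatility, q):
    """Time intervals between successive volatility values above threshold q,
    measured in sampling steps (10-min points in the study)."""
    exceedances = [i for i, v in enumerate(volatility) if v > q]
    return [b - a for a, b in zip(exceedances, exceedances[1:])]
```

Clustering is then assessed by comparing runs of above-median and below-median intervals in this sequence against shuffled records.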

  19. Modelling foraging movements of diving predators: a theoretical study exploring the effect of heterogeneous landscapes on foraging efficiency

    PubMed Central

    Bartoń, Kamil A.; Scott, Beth E.; Travis, Justin M.J.

    2014-01-01

    Foraging in the marine environment presents particular challenges for air-breathing predators. Information about prey capture rates, the strategies that diving predators use to maximise prey encounter rates and foraging success are still largely unknown and difficult to observe. In addition, with the growing awareness of potential climate change impacts and the increasing interest in the development of renewable energy sources, it is unknown how the foraging activity of diving predators such as seabirds will respond to both the presence of underwater structures and the potential corresponding changes in prey distributions. Motivated by this issue, we developed a theoretical model to gain general understanding of how the foraging efficiency of diving predators may vary according to landscape structure and foraging strategy. Our theoretical model highlights that animal movements, intervals between prey capture and foraging efficiency are likely to critically depend on the distribution of the prey resource and the size and distribution of introduced underwater structures. For multiple prey loaders, changes in prey distribution affected the searching time necessary to catch a set amount of prey, which in turn affected the foraging efficiency. The spatial aggregation of prey around small devices (∼ 9 × 9 m) created a valuable habitat for a successful foraging activity, resulting in shorter intervals between prey captures and higher foraging efficiency. The presence of large devices (∼ 24 × 24 m) however represented an obstacle for predator movement, thus increasing the intervals between prey captures. In contrast, for single prey loaders the introduction of spatial aggregation of the resources did not represent an advantage, suggesting that their foraging efficiency is more strongly affected by other factors such as the time to find the first prey item, which was found to occur faster in the presence of large devices.
The development of this theoretical model represents a useful starting point to understand the energetic reasons for a range of potential predator responses to spatial heterogeneity and environmental uncertainties in terms of search behaviour and predator–prey interactions. We highlight future directions that integrated empirical and modelling studies should take to improve our ability to predict how diving predators will be impacted by the deployment of manmade structures in the marine environment. PMID:25250211

  20. The role of dopamine in the timing of Pavlovian conditioned keypecking in ring doves.

    PubMed

    Ohyama, T; Horvitz, J C; Kitsos, E; Balsam, P D

    2001-01-01

    The effect of dopaminergic drugs on the timing of conditioned keypecking in ring doves was studied in two experiments. Subjects were given pairings of a keylight with food and the temporal distribution of keypecks was obtained during unreinforced probe trials. Experiment 1 demonstrated that injections of pimozide before each session immediately decreased response rates but shifted timing distributions gradually to the right over several days of treatment. Experiment 2 showed similar results using a longer interstimulus interval (ISI). No shifts were observed when the drug was injected after training sessions, or when a delay, identical to each subject's average latency to eat during the drug condition, was inserted between keylight offset and food presentation. Consequently, the shifts in timing were mediated neither by mere accumulation of the drug nor a delay from keylight offset to food presentation resulting from the drug's ability to slow motor processes. The results suggest that pimozide modulates response rate through its effect on motor processes or incentive value, and response timing through a conditioned response (CR) to injection-related cues established via their repeated pairings with the drug.

  1. [A research on real-time ventricular QRS classification methods for single-chip-microcomputers].

    PubMed

    Peng, L; Yang, Z; Li, L; Chen, H; Chen, E; Lin, J

    1997-05-01

    Ventricular QRS classification is a key technique for ventricular arrhythmia detection in single-chip-microcomputer-based real-time dynamic electrocardiogram analysers. This paper adopts a morphological feature vector, including QRS amplitude and interval information, to represent QRS morphology. After studying the distribution of QRS morphology feature vectors in the MIT/BIH ventricular arrhythmia database files, we use clustering of morphological feature vectors to classify multi-morphology QRS complexes. Based on this method, a scheme for adapting the morphological feature parameters, suitable for catching occasional ventricular arrhythmias, is presented. Clinical experiments verify that fewer than 1% of ventricular arrhythmias are missed by this method.
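    The clustering idea described above can be illustrated with a small sketch. This is not the authors' implementation: the two-feature vector (normalized amplitude, RR interval) and the distance threshold are illustrative assumptions. Each beat is assigned to the nearest existing morphology cluster, or starts a new one.

```python
import math

def classify_qrs(beats, threshold=0.2):
    """Assign each QRS feature vector (normalized amplitude, RR interval)
    to the nearest existing morphology cluster, or start a new cluster.
    Toy template clustering; the threshold is illustrative."""
    clusters = []  # each: {"centroid": (amp, rr), "n": member count}
    labels = []
    for a, rr in beats:
        best, best_d = None, float("inf")
        for i, c in enumerate(clusters):
            ca, crr = c["centroid"]
            d = math.hypot(a - ca, rr - crr)
            if d < best_d:
                best, best_d = i, d
        if best is not None and best_d < threshold:
            c = clusters[best]
            ca, crr = c["centroid"]
            n = c["n"]
            # running-mean update of the cluster centroid
            c["centroid"] = ((ca * n + a) / (n + 1), (crr * n + rr) / (n + 1))
            c["n"] = n + 1
            labels.append(best)
        else:
            clusters.append({"centroid": (a, rr), "n": 1})
            labels.append(len(clusters) - 1)
    return labels, clusters
```

    With made-up beats, normal-looking feature vectors fall into one cluster while PVC-like vectors (taller QRS, shorter RR) form a second.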

  2. Spatial Distribution of the Coefficient of Variation and Bayesian Forecast for the Paleo-Earthquakes in Japan

    NASA Astrophysics Data System (ADS)

    Nomura, Shunichi; Ogata, Yosihiko

    2016-04-01

    We propose a Bayesian method of probability forecasting for recurrent earthquakes on inland active faults in Japan. Renewal processes with the Brownian Passage Time (BPT) distribution are applied to over half of the active faults in Japan by the Headquarters for Earthquake Research Promotion (HERP) of Japan. Long-term forecasting with the BPT distribution requires two parameters: the mean and the coefficient of variation (COV) of the recurrence intervals. The HERP applies a common COV parameter to all of these faults because most of them have very few dated paleoseismic events, too few to estimate reliable COV values for the respective faults. However, related works have proposed different COV estimates from the same paleoseismic catalog. Applying different COV estimates can make a critical difference in the forecast, so the COV should be selected carefully for individual faults. Recurrence intervals on a fault are, on average, determined by the long-term slip rate caused by tectonic motion, but are perturbed by nearby seismicity, which influences the surrounding stress field. The COVs of recurrence intervals depend on such stress perturbations and so show spatial trends due to the heterogeneity of tectonic motion and seismicity. We therefore introduce a spatial structure on the COV parameter through Bayesian modeling with a Gaussian process prior, so that the COVs of active faults are correlated and take similar values for closely located faults. We find that the spatial trends in the estimated COV values coincide with the density of active faults in Japan. We also present Bayesian forecasts from the proposed model, computed by a Markov chain Monte Carlo method. Our forecasts differ from HERP's especially on the active faults where HERP's forecasts are very high or low.
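    For readers unfamiliar with BPT forecasting, the conditional probability that such long-term forecasts report can be sketched in a few lines. This is the generic renewal-process calculation (the BPT distribution is the inverse Gaussian), not the authors' Bayesian model, and the mean and COV values in the test are illustrative.

```python
import math

def normal_cdf(x):
    # erfc keeps precision in the far tail, where 1 + erf(x) would cancel
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

def bpt_cdf(t, mean, cov):
    """CDF of the Brownian Passage Time (inverse Gaussian) distribution
    with the given mean recurrence interval and coefficient of variation."""
    if t <= 0:
        return 0.0
    lam = mean / cov**2  # inverse-Gaussian shape parameter
    a = math.sqrt(lam / t)
    return (normal_cdf(a * (t / mean - 1.0))
            + math.exp(2.0 * lam / mean) * normal_cdf(-a * (t / mean + 1.0)))

def forecast_prob(elapsed, horizon, mean, cov):
    """Probability of rupture within `horizon` years, given `elapsed` years
    since the last event (renewal-process conditional probability)."""
    surv = 1.0 - bpt_cdf(elapsed, mean, cov)
    if surv <= 0.0:
        return 1.0
    return (bpt_cdf(elapsed + horizon, mean, cov)
            - bpt_cdf(elapsed, mean, cov)) / surv
```

    The conditional probability grows sharply as the elapsed time approaches the mean recurrence interval, which is why the choice of COV matters so much for faults with few dated events.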

  3. Real-time hierarchically distributed processing network interaction simulation

    NASA Technical Reports Server (NTRS)

    Zimmerman, W. F.; Wu, C.

    1987-01-01

    The Telerobot Testbed is a hierarchically distributed processing system which is linked together through a standard, commercial Ethernet. Standard Ethernet systems are primarily designed to manage non-real-time information transfer. Therefore, collisions on the net (i.e., two or more sources attempting to send data at the same time) are managed by randomly rescheduling one of the sources to retransmit at a later time interval. Although acceptable for transmitting noncritical data such as mail, this particular feature is unacceptable for real-time hierarchical command and control systems such as the Telerobot. Data transfer and scheduling schemes, such as token ring, offer solutions to collision management but do not appropriately characterize real-time data transfers and interactions in robotic systems. Therefore, models like these do not provide a viable simulation environment for understanding real-time network loading. A real-time network loading model is being developed which allows processor-to-processor interactions to be simulated, collisions (and respective probabilities) to be logged, collision-prone areas to be identified, and network control variable adjustments to be re-entered as a means of examining and reducing collision-prone regimes that occur in the process of simulating a complete task sequence.
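    The kind of collision statistic such a loading model logs can be illustrated with a toy slotted-contention abstraction. This is our simplification, not the Testbed simulation; the node counts and send probabilities are made up, and a real CSMA/CD model would also capture carrier sensing and backoff.

```python
import random

def collision_rate(n_nodes, p_send, n_slots=100000, seed=1):
    """Fraction of time slots in which two or more nodes transmit at once
    (a toy slotted-contention abstraction of Ethernet-style collisions)."""
    rng = random.Random(seed)
    collisions = 0
    for _ in range(n_slots):
        senders = sum(1 for _ in range(n_nodes) if rng.random() < p_send)
        if senders >= 2:
            collisions += 1
    return collisions / n_slots

def collision_prob(n_nodes, p_send):
    """Analytic check: 1 - P(no sender) - P(exactly one sender)."""
    q = 1.0 - p_send
    return 1.0 - q**n_nodes - n_nodes * p_send * q**(n_nodes - 1)
```

    Even this toy model reproduces the qualitative point of the abstract: collision probability climbs quickly with offered load, which is what makes stock Ethernet awkward for hard real-time command and control.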

  4. No Additional Benefits of Block- Over Evenly-Distributed High-Intensity Interval Training within a Polarized Microcycle

    PubMed Central

    McGawley, Kerry; Juudas, Elisabeth; Kazior, Zuzanna; Ström, Kristoffer; Blomstrand, Eva; Hansson, Ola; Holmberg, Hans-Christer

    2017-01-01

    Introduction: The current study aimed to investigate the responses to block- versus evenly-distributed high-intensity interval training (HIT) within a polarized microcycle. Methods: Twenty well-trained junior cross-country skiers (10 males, age 17.6 ± 1.5 and 10 females, age 17.3 ± 1.5) completed two, 3-week periods of training (EVEN and BLOCK) in a randomized, crossover-design study. In EVEN, 3 HIT sessions (5 × 4-min of diagonal-stride roller-skiing) were completed at a maximal sustainable intensity each week while low-intensity training (LIT) was distributed evenly around the HIT. In BLOCK, the same 9 HIT sessions were completed in the second week while only LIT was completed in the first and third weeks. Heart rate (HR), session ratings of perceived exertion (sRPE), and perceived recovery (pREC) were recorded for all HIT and LIT sessions, while distance covered was recorded for each HIT interval. The recovery-stress questionnaire for athletes (RESTQ-Sport) was completed weekly. Before and after EVEN and BLOCK, resting saliva and muscle samples were collected and an incremental test and 600-m time-trial (TT) were completed. Results: Pre- to post-testing revealed no significant differences between EVEN and BLOCK for changes in resting salivary cortisol, testosterone, or IgA, or for changes in muscle capillary density, fiber area, fiber composition, enzyme activity (CS, HAD, and PFK) or the protein content of VEGF or PGC-1α. Neither were any differences observed in the changes in skiing economy, V˙O2max or 600-m time-trial performance between interventions. These findings were coupled with no significant differences between EVEN and BLOCK for distance covered during HIT, summated HR zone scores, total sRPE training load, overall pREC or overall recovery-stress state. 
However, 600-m TT performance improved from pre- to post-training, irrespective of intervention (P = 0.003), and a number of hormonal and muscle biopsy markers were also significantly altered post-training (P < 0.05). Discussion: The current study shows that well-trained junior cross-country skiers are able to complete 9 HIT sessions within 1 week without compromising total work done and without experiencing greater stress or reduced recovery over a 3-week polarized microcycle. However, the findings do not support block-distributed HIT as a superior method to a more even distribution of HIT in terms of enhancing physiological or performance adaptations. PMID:28659826

  6. Caribou distribution during the post-calving period in relation to infrastructure in the Prudhoe Bay oil field, Alaska

    USGS Publications Warehouse

    Cronin, Matthew A.; Amstrup, Steven C.; Durner, George M.; Noel, Lynn E.; McDonald, Trent L.; Ballard, Warren B.

    1998-01-01

    There is concern that caribou (Rangifer tarandus) may avoid roads and facilities (i.e., infrastructure) in the Prudhoe Bay oil field (PBOF) in northern Alaska, and that this avoidance can have negative effects on the animals. We quantified the relationship between caribou distribution and PBOF infrastructure during the post-calving period (mid-June to mid-August) with aerial surveys from 1990 to 1995. We conducted four to eight surveys per year with complete coverage of the PBOF. We identified active oil field infrastructure and used a geographic information system (GIS) to construct ten 1-km-wide concentric intervals surrounding the infrastructure. We tested whether caribou distribution was related to distance from infrastructure using a chi-squared habitat utilization-availability analysis and log-linear regression. We considered bulls, calves, and total caribou of all sex/age classes separately. The habitat utilization-availability analysis indicated no consistent trend of attraction to or avoidance of infrastructure. Caribou frequently were more abundant than expected in the intervals close to infrastructure, and this trend was more pronounced for bulls and for total caribou of all sex/age classes than for calves. Log-linear regressions (with Poisson error structure) of caribou numbers against distance from infrastructure were also performed, with and without combining the data into the 1 km distance intervals. The analysis without intervals revealed no relationship between caribou distribution and distance from oil field infrastructure, or between caribou distribution and Julian date, year, or distance from the Beaufort Sea coast. The log-linear regression with caribou combined into distance intervals showed that the density of bulls and of total caribou of all sex/age classes declined with distance from infrastructure. 
Our results indicate that during the post-calving period: 1) caribou distribution is largely unrelated to distance from infrastructure; 2) caribou regularly use habitats in the PBOF; 3) caribou often occur close to infrastructure; and 4) caribou do not appear to avoid oil field infrastructure.
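    The utilization-availability logic is simple to state in code: observed counts per distance interval are compared with the counts expected if use were proportional to each interval's available area. This is a generic sketch with made-up numbers, not the study's data.

```python
def chi_square_use_availability(observed, areas):
    """Chi-squared statistic comparing observed animal counts per distance
    interval with counts expected if use were proportional to interval area."""
    total_obs = sum(observed)
    total_area = sum(areas)
    expected = [total_obs * a / total_area for a in areas]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))
```

    A statistic of zero means use exactly tracks availability; under-use of the nearest interval (apparent avoidance) inflates the statistic.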

  7. eMODIS Expedited: Overview of a Near Real Time MODIS Production System for Operational Vegetation Monitoring

    NASA Astrophysics Data System (ADS)

    Jenkerson, C.; Meyer, D. J.; Werpy, J.; Evenson, K.; Merritt, M.

    2010-12-01

    The expedited MODIS (eMODIS) production system derives near-real-time Normalized Difference Vegetation Index (NDVI) data from Moderate Resolution Imaging Spectroradiometer (MODIS) surface reflectance provided by the Land and Atmosphere Near-real time Capability for EOS (LANCE). This U.S. Geological Survey (USGS) capability currently covers three regions: the continental U.S., Africa, and Central America/the Caribbean. Each eMODIS production stream is configured to output its data in map projections, compositing intervals, spatial resolutions, and file formats specific to its region and user community. The challenges of processing 1,000-meter, 500-m, and especially 250-m products by midnight on the last day of a product interval have been met with increasingly effective software and system architecture. An anonymous file transfer protocol (FTP) distribution site (ftp://emodisftp.cr.usgs.gov/eMODIS) gives users direct access to eMODIS NDVI products for operational (near-real-time) monitoring of vegetation conditions such as drought, crop failure, insect infestation, and other threats, thus supporting early warning of famine and the targeting of vulnerable populations in food-insecure situations.
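    As a concrete illustration of what an NDVI compositing stream computes, here is the index itself plus a per-pixel maximum-value composite. Maximum-value compositing is a common rule for suppressing cloud-contaminated observations, which depress NDVI; eMODIS' actual compositing algorithm may differ, and the reflectance values below are made up.

```python
def ndvi(red, nir):
    """Normalized Difference Vegetation Index from red and
    near-infrared surface reflectance."""
    return (nir - red) / (nir + red)

def max_value_composite(daily_ndvi):
    """Per-pixel maximum-value composite over a compositing interval:
    daily_ndvi is a list of days, each a list of per-pixel NDVI values;
    for each pixel, keep the highest NDVI seen in the interval."""
    return [max(pixel_series) for pixel_series in zip(*daily_ndvi)]
```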

  8. Reliability estimation of an N-M-cold-standby redundancy system in a multicomponent stress-strength model with generalized half-logistic distribution

    NASA Astrophysics Data System (ADS)

    Liu, Yiming; Shi, Yimin; Bai, Xuchao; Zhan, Pei

    2018-01-01

    In this paper, we study estimation of the reliability of a multicomponent system, an N-M-cold-standby redundancy system, based on a progressive Type-II censored sample. The system comprises N subsystems, each consisting of M statistically independent and identically distributed strength components; only one subsystem works under the impact of stresses at a time while the others remain on standby. Whenever the working subsystem fails, one of the standbys takes its place, and the system fails when all of the subsystems have failed. The underlying distributions of random strength and stress are both assumed to belong to the generalized half-logistic distribution, with different shape parameters. The reliability of the system is estimated using both classical and Bayesian statistical inference. The uniformly minimum variance unbiased estimator and the maximum likelihood estimator for the reliability of the system are derived. Under a squared error loss function, the exact expression of the Bayes estimator for the reliability of the system is developed using the Gauss hypergeometric function. Asymptotic confidence intervals and the corresponding coverage probabilities are derived based on both the Fisher and the observed information matrices. The approximate highest probability density credible interval is constructed using a Monte Carlo method. Monte Carlo simulations are performed to compare the performance of the proposed reliability estimators, and a real data set is analyzed for illustration of the findings.
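    A toy Monte Carlo version of the stress-strength setup helps fix ideas. It uses standard (not generalized) half-logistic variates via the inverse CDF, a simplified survival rule (an activated subsystem survives only if all M component strengths exceed one stress draw), and made-up scale parameters; the paper's estimators are analytic and Bayesian, not simulation-based.

```python
import math
import random

def sample_half_logistic(rng, scale=1.0):
    """Inverse-transform sample from the standard half-logistic
    distribution, F(x) = (1 - exp(-x/s)) / (1 + exp(-x/s)), x > 0."""
    u = rng.random()
    return scale * math.log((1.0 + u) / (1.0 - u))

def system_reliability(n_sub, m_comp, strength_scale, stress_scale,
                       trials=20000, seed=7):
    """Toy Monte Carlo reliability of an N-subsystem cold-standby system:
    subsystems are activated one after another, and the system works if
    any activated subsystem survives its stress."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        for _ in range(n_sub):
            stress = sample_half_logistic(rng, stress_scale)
            if all(sample_half_logistic(rng, strength_scale) > stress
                   for _ in range(m_comp)):
                ok += 1
                break  # a surviving subsystem keeps the system working
    return ok / trials
```

    Adding a standby subsystem raises reliability, which the test below checks for stochastically stronger components (strength scale 2 vs. stress scale 1).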

  9. Jitter Reduces Response-Time Variability in ADHD: An Ex-Gaussian Analysis.

    PubMed

    Lee, Ryan W Y; Jacobson, Lisa A; Pritchard, Alison E; Ryan, Matthew S; Yu, Qilu; Denckla, Martha B; Mostofsky, Stewart; Mahone, E Mark

    2015-09-01

    "Jitter" involves randomization of intervals between stimulus events. Compared with controls, individuals with ADHD demonstrate greater intrasubject variability (ISV) performing tasks with fixed interstimulus intervals (ISIs). Because Gaussian curves mask the effect of extremely slow or fast response times (RTs), ex-Gaussian approaches have been applied to study ISV. This study applied ex-Gaussian analysis to examine the effects of jitter on RT variability in children with and without ADHD. A total of 75 children, aged 9 to 14 years (44 ADHD, 31 controls), completed a go/no-go test with two conditions: fixed ISI and jittered ISI. ADHD children showed greater variability, driven by elevations in exponential (tau), but not normal (sigma) components of the RT distribution. Jitter decreased tau in ADHD to levels not statistically different than controls, reducing lapses in performance characteristic of impaired response control. Jitter may provide a nonpharmacologic mechanism to facilitate readiness to respond and reduce lapses from sustained (controlled) performance. © 2012 SAGE Publications.

  10. Shot Peening Numerical Simulation of Aircraft Aluminum Alloy Structure

    NASA Astrophysics Data System (ADS)

    Liu, Yong; Lv, Sheng-Li; Zhang, Wei

    2018-03-01

    After shot peening, 7050 aluminum alloy has good anti-fatigue and anti-stress-corrosion properties. In the shot peening process, pellets collide with the target material randomly and generate a residual stress distribution on the target surface, which is of great significance for improving material properties. In this paper, a simplified numerical simulation model of shot peening was established and simulated with the ANSYS/LS-DYNA software, and the influence of pellet collision velocity, collision position and collision time interval on the residual stress after shot peening was studied. The analysis results show that different velocities, positions and time intervals have a great influence on the residual stress after shot peening. The accuracy of the simulation results in this paper was verified by comparison with numerical simulation results based on a Kriging model. This study provides a reference for the optimization of the shot peening process and is an effective step toward precise numerical simulation of shot peening.

  11. Properties of single NMDA receptor channels in human dentate gyrus granule cells

    PubMed Central

    Lieberman, David N; Mody, Istvan

    1999-01-01

    Cell-attached single-channel recordings of NMDA channels were carried out in human dentate gyrus granule cells acutely dissociated from slices prepared from hippocampi surgically removed for the treatment of temporal lobe epilepsy (TLE). The channels were activated by l-aspartate (250–500 nM) in the presence of saturating glycine (8 μM). The main conductance was 51 ± 3 pS. In ten of thirty granule cells, clear subconductance states were observed with a mean conductance of 42 ± 3 pS, representing 8 ± 2% of the total openings. The mean open times varied from cell to cell, possibly owing to differences in the epileptogenicity of the tissue of origin. The mean open time was 2.70 ± 0.95 ms (range, 1.24–4.78 ms). In 87% of the cells, three exponential components were required to fit the apparent open time distributions. In the remaining neurons, as in control rat granule cells, two exponentials were sufficient. Shut time distributions were fitted by five exponential components. The average numbers of openings in bursts (1.74 ± 0.09) and clusters (3.06 ± 0.26) were similar to values obtained in rodents. The mean burst (6.66 ± 0.9 ms), cluster (20.1 ± 3.3 ms) and supercluster lengths (116.7 ± 17.5 ms) were longer than those in control rat granule cells, but approached the values previously reported for TLE (kindled) rats. As in rat NMDA channels, adjacent open and shut intervals appeared to be inversely related to each other, but it was only the relative areas of the three open time constants that changed with adjacent shut time intervals. The long openings of human TLE NMDA channels resembled those produced by calcineurin inhibitors in control rat granule cells. Yet the calcineurin inhibitor FK-506 (500 nM) did not prolong the openings of human channels, consistent with a decreased calcineurin activity in human TLE. Many properties of the human NMDA channels resemble those recorded in rat hippocampal neurons. 
Both have similar slope conductances, five exponential shut time distributions, complex groupings of openings, and a comparable number of openings per grouping. Other properties of human TLE NMDA channels correspond to those observed in kindling; the openings are considerably long, requiring an additional exponential component to fit their distributions, and inhibition of calcineurin is without effect in prolonging the openings. PMID:10373689

  12. A Variable Oscillator Underlies the Measurement of Time Intervals in the Rostral Medial Prefrontal Cortex during Classical Eyeblink Conditioning in Rabbits.

    PubMed

    Caro-Martín, C Rocío; Leal-Campanario, Rocío; Sánchez-Campusano, Raudel; Delgado-García, José M; Gruart, Agnès

    2015-11-04

    We were interested in determining whether rostral medial prefrontal cortex (rmPFC) neurons participate in the measurement of conditioned stimulus-unconditioned stimulus (CS-US) time intervals during classical eyeblink conditioning. Rabbits were conditioned with a delay paradigm consisting of a tone as CS. The CS started 50, 250, 500, 1000, or 2000 ms before and coterminated with an air puff (100 ms) directed at the cornea as the US. Eyelid movements were recorded with the magnetic search coil technique and the EMG activity of the orbicularis oculi muscle. Firing activities of rmPFC neurons were recorded across conditioning sessions. Reflex and conditioned eyelid responses presented a dominant oscillatory frequency of ≈12 Hz. The firing rate of each recorded neuron presented a single peak of activity with a frequency dependent on the CS-US interval (i.e., ≈12 Hz for 250 ms, ≈6 Hz for 500 ms, and ≈3 Hz for 1000 ms). Interestingly, rmPFC neurons presented their dominant firing peaks at three precise times evenly distributed with respect to CS start and also depending on the duration of the CS-US interval (only for intervals of 250, 500, and 1000 ms). No significant neural responses were recorded at very short (50 ms) or long (2000 ms) CS-US intervals. rmPFC neurons seem not to encode the oscillatory properties characterizing conditioned eyelid responses in rabbits, but are probably involved in the determination of CS-US intervals of an intermediate range (250-1000 ms). We propose that a variable oscillator underlies the generation of working memories in rabbits. The way in which brains generate working memories (those used for the transient processing and storage of newly acquired information) is still an intriguing question. Here, we report that the firing activities of neurons located in the rostromedial prefrontal cortex recorded in alert behaving rabbits are controlled by a dynamic oscillator. 
This oscillator generated firing frequencies in a variable band of 3-12 Hz depending on the conditioned stimulus-unconditioned stimulus intervals (1 s, 500 ms, 250 ms) selected for classical eyeblink conditioning of behaving rabbits. Shorter (50 ms) and longer (2 s) intervals failed to activate the oscillator and prevented the acquisition of conditioned eyelid responses. This is an unexpected mechanism to generate sustained firing activities in neural circuits generating working memories. Copyright © 2015 the authors.
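    The reported frequencies scale inversely with the CS-US interval: ≈12, ≈6 and ≈3 Hz at 250, 500 and 1000 ms all correspond to roughly three oscillator cycles per interval, consistent with the three evenly distributed firing peaks. This is our arithmetic reading of the result, not a formula given by the authors.

```python
def oscillator_freq_hz(cs_us_interval_ms, cycles_per_interval=3):
    """Dominant oscillator frequency implied by a fixed number of cycles
    per CS-US interval: f = cycles / interval (interval in ms)."""
    return cycles_per_interval / (cs_us_interval_ms / 1000.0)
```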

  13. The probability of lava inundation at the proposed and existing Kulani prison sites

    USGS Publications Warehouse

    Kauahikaua, J.P.; Trusdell, F.A.; Heliker, C.C.

    1998-01-01

    The State of Hawai`i has proposed building a 2,300-bed medium-security prison about 10 km downslope from the existing Kulani medium-security correctional facility. The proposed and existing facilities lie on the northeast rift zone of Mauna Loa, which last erupted in 1984 in this same general area. We use the best available geologic mapping and dating with GIS software to estimate the average recurrence interval between lava flows that inundate these sites. Three different methods are used to adjust the number of flows exposed at the surface for those flows that are buried, to allow a better representation of the recurrence interval. Probabilities are then computed, based on these recurrence intervals, assuming that the data match a Poisson distribution. The probability of lava inundation for the existing prison site is estimated to be 11–12% in the next 50 years. The probabilities of lava inundation for the proposed sites B and C are 2–3% and 1–2%, respectively, in the same period. The probabilities are based on estimated recurrence intervals for lava flows, which are approximately proportional to the area considered. The probability of having to evacuate the prison is certainly higher than the probability of lava entering the site. Maximum warning times between eruption and lava inundation of a site are estimated to be 24 hours for the existing prison site and 72 hours for proposed sites B and C. Evacuation plans should take these times into consideration.
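    Under the Poisson assumption, probabilities like those quoted follow from a one-line formula. The recurrence intervals used below are back-calculated for illustration only and are not taken from the paper.

```python
import math

def inundation_probability(years, recurrence_interval):
    """Probability of at least one inundating flow in `years`, assuming
    flow arrivals are a Poisson process with the given mean recurrence
    interval (same time units): P = 1 - exp(-t / T)."""
    return 1.0 - math.exp(-years / recurrence_interval)
```

    For example, a mean recurrence interval of roughly 400 years reproduces an 11-12% chance over 50 years, while roughly 2,000 years gives the 2-3% range.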

  14. Intra-tumor distribution of PEGylated liposome upon repeated injection: No possession by prior dose.

    PubMed

    Nakamura, Hiroyuki; Abu Lila, Amr S; Nishio, Miho; Tanaka, Masao; Ando, Hidenori; Kiwada, Hiroshi; Ishida, Tatsuhiro

    2015-12-28

    Liposomes have proven to be a viable means for the delivery of chemotherapeutic agents to solid tumors. However, significant variability has been detected in their intra-tumor accumulation and distribution, resulting in compromised therapeutic outcomes. We recently examined the intra-tumor accumulation and distribution of weekly sequentially administered oxaliplatin (l-OHP)-containing PEGylated liposomes. In that study, the first and second doses of l-OHP-containing PEGylated liposomes were distributed diversely and broadly within tumor tissues, resulting in potent anti-tumor efficacy. However, little is known about the mechanism underlying such a diverse and broad liposome distribution. Therefore, in the present study, we investigated the influence of dosing interval on the intra-tumor accumulation and distribution of "empty" PEGylated liposomes. The intra-tumor distribution of sequentially administered "empty" PEGylated liposomes was altered in a dosing interval-dependent manner. In addition, the intra-tumor distribution pattern was closely related to the chronological alteration of tumor blood flow, as well as vascular permeability, in the growing tumor tissue. These results suggest that sequential administrations of PEGylated liposomes at well-spaced intervals might allow distribution to different areas of the tumor and enhance the total accumulation within tumor tissue, resulting in better therapeutic efficacy of the encapsulated payload. This study may provide useful information for the better design of therapeutic regimens involving multiple administrations of nanocarrier drug delivery systems. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Beat-to-beat control of human optokinetic nystagmus slow phase durations

    PubMed Central

    Furman, Joseph M.

    2016-01-01

    This study provides the first clear evidence that the generation of optokinetic nystagmus fast phases (FPs) is a decision process that is influenced by performance of a concurrent disjunctive reaction time task (DRT). Ten subjects performed an auditory DRT during constant velocity optokinetic stimulation. Eye movements were measured in three dimensions with a magnetic search coil. Slow phase (SP) durations were defined as the interval between FPs. There were three main findings. Firstly, human optokinetic nystagmus SP durations are consistent with a model of a Gaussian basic interval generator (a type of biological clock), such that FPs can be triggered randomly at the end of a clock cycle (mean duration: 200–250 ms). Kolmogorov-Smirnov tests could not reject the modeled cumulative distribution for any data trials. Secondly, the FP need not be triggered at the end of a clock cycle, so that individual SP durations represent single or multiple clock cycles. Thirdly, the probability of generating a FP at the end of each interval generator cycle decreases significantly during performance of a DRT. These findings indicate that the alternation between SPs and FPs of optokinetic nystagmus is not purely reflexive. Rather, the triggering of the next FP is postponed more frequently if a recently presented DRT trial is pending action when the timing cycle expires. Hence, optokinetic nystagmus FPs show dual-task interference in a manner usually attributed to voluntary movements, including saccades. NEW & NOTEWORTHY This study provides the first clear evidence that the generation of optokinetic nystagmus (OKN) fast phases is a decision process that is influenced by performance of a concurrent disjunctive reaction time task (DRT). The slow phase (SP) durations are consistent with a Gaussian basic interval generator and multiple interval SP durations occur more frequently in the presence of the DRT. 
Hence, OKN shows dual-task interference in a manner observed in voluntary movements, such as saccades. PMID:27760815
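    The clock-cycle model in this abstract is straightforward to simulate: slow-phase durations are sums of Gaussian cycles, and lowering the per-cycle trigger probability (as when a DRT trial is pending) produces more multi-cycle, hence longer, durations. The parameter values here are illustrative, not fitted to the study's data.

```python
import random

def simulate_sp_durations(p_trigger, n=20000, cycle_mean=225.0,
                          cycle_sd=25.0, seed=3):
    """Simulate slow-phase durations as sums of Gaussian clock cycles:
    at the end of each cycle a fast phase is triggered with probability
    p_trigger, so durations span 1, 2, 3, ... cycles."""
    rng = random.Random(seed)
    durations = []
    for _ in range(n):
        d = rng.gauss(cycle_mean, cycle_sd)
        while rng.random() > p_trigger:  # FP postponed: add another cycle
            d += rng.gauss(cycle_mean, cycle_sd)
        durations.append(d)
    return durations
```

    The mean duration is cycle_mean / p_trigger, so a drop in trigger probability during the dual task directly predicts the longer multi-cycle slow phases the study reports.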

  16. Beat-to-beat control of human optokinetic nystagmus slow phase durations.

    PubMed

    Balaban, Carey D; Furman, Joseph M

    2017-01-01

    This study provides the first clear evidence that the generation of optokinetic nystagmus fast phases (FPs) is a decision process that is influenced by performance of a concurrent disjunctive reaction time task (DRT). Ten subjects performed an auditory DRT during constant velocity optokinetic stimulation. Eye movements were measured in three dimensions with a magnetic search coil. Slow phase (SP) durations were defined as the interval between FPs. There were three main findings. Firstly, human optokinetic nystagmus SP durations are consistent with a model of a Gaussian basic interval generator (a type of biological clock), such that FPs can be triggered randomly at the end of a clock cycle (mean duration: 200-250 ms). Kolmogorov-Smirnov tests could not reject the modeled cumulative distribution for any data trials. Secondly, the FP need not be triggered at the end of a clock cycle, so that individual SP durations represent single or multiple clock cycles. Thirdly, the probability of generating a FP at the end of each interval generator cycle decreases significantly during performance of a DRT. These findings indicate that the alternation between SPs and FPs of optokinetic nystagmus is not purely reflexive. Rather, the triggering of the next FP is postponed more frequently if a recently presented DRT trial is pending action when the timing cycle expires. Hence, optokinetic nystagmus FPs show dual-task interference in a manner usually attributed to voluntary movements, including saccades. This study provides the first clear evidence that the generation of optokinetic nystagmus (OKN) fast phases is a decision process that is influenced by performance of a concurrent disjunctive reaction time task (DRT). The slow phase (SP) durations are consistent with a Gaussian basic interval generator and multiple interval SP durations occur more frequently in the presence of the DRT. 
Hence, OKN shows dual-task interference in a manner observed in voluntary movements, such as saccades. Copyright © 2017 the American Physiological Society.
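
The clock-cycle model described above can be sketched numerically. A minimal simulation, assuming illustrative cycle and trigger parameters (not the paper's fitted values):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_sp_durations(n, cycle_mean=0.225, cycle_sd=0.03, p_trigger=0.7):
    """Simulate slow-phase (SP) durations under a Gaussian basic-interval-
    generator model: at the end of each Gaussian clock cycle, a fast phase
    fires with probability p_trigger, so each SP duration is a sum of one
    or more cycle lengths. Parameter values are illustrative only."""
    n_cycles = rng.geometric(p_trigger, size=n)  # cycles until a fast phase fires
    return np.array([rng.normal(cycle_mean, cycle_sd, k).sum() for k in n_cycles])

durations = simulate_sp_durations(5000)                     # baseline
durations_drt = simulate_sp_durations(5000, p_trigger=0.4)  # lower trigger probability, as under a concurrent DRT
print(durations.mean(), durations_drt.mean())
```

Lowering the trigger probability shifts mass toward multi-cycle durations and lengthens the mean slow phase, mirroring the dual-task effect reported above.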

  17. Machine learning approaches for estimation of prediction interval for the model output.

    PubMed

    Shrestha, Durga L; Solomatine, Dimitri P

    2006-03-01

    A novel method for estimating prediction uncertainty using machine learning techniques is presented. Uncertainty is expressed in the form of the two quantiles (constituting the prediction interval) of the underlying distribution of prediction errors. The idea is to partition the input space into different zones or clusters having similar model errors using fuzzy c-means clustering. The prediction interval is constructed for each cluster on the basis of the empirical distribution of the errors associated with all instances belonging to the cluster under consideration, and is propagated from each cluster to the examples according to their membership grades in each cluster. Then a regression model is built for in-sample data using the computed prediction limits as targets, and finally, this model is applied to estimate the prediction intervals (limits) for out-of-sample data. The method was tested on artificial and real hydrologic data sets using various machine learning techniques. Preliminary results show that the method is superior to other methods of estimating the prediction interval. A new method for evaluating the performance of prediction-interval estimators is proposed as well.
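
The cluster-then-quantile idea can be illustrated with a simplified numpy sketch. Hard nearest-centroid assignment stands in for fuzzy c-means, and propagation by membership grades is omitted; all numbers are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic heteroscedastic model errors: larger inputs produce larger errors.
x = rng.uniform(0, 10, 2000)
errors = rng.normal(0, 0.2 + 0.1 * x)

# Hard nearest-centroid "clustering" of the input space (two fixed centroids
# stand in for fuzzy c-means in this sketch).
centroids = np.array([2.5, 7.5])
labels = np.argmin(np.abs(x[:, None] - centroids), axis=1)

# 90% prediction limits per cluster from the empirical error quantiles.
limits = {c: (np.quantile(errors[labels == c], 0.05),
              np.quantile(errors[labels == c], 0.95))
          for c in range(len(centroids))}

width = {c: hi - lo for c, (lo, hi) in limits.items()}
print(width)
```

The cluster covering the large-input region gets wider prediction limits, reflecting its larger model errors; this is the behavior the clustering step is meant to capture.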

  18. The distribution of the therapeutic monoclonal antibodies cetuximab and trastuzumab within solid tumors.

    PubMed

    Lee, Carol M; Tannock, Ian F

    2010-06-03

    Poor distribution of some anticancer drugs in solid tumors may limit their anti-tumor activity. Here we used immunohistochemistry to quantify the distribution of the therapeutic monoclonal antibodies cetuximab and trastuzumab in relation to blood vessels and to regions of hypoxia in human tumor xenografts. The antibodies were injected into mice implanted with human epidermoid carcinoma A431 or human breast carcinoma MDA-MB-231 transfected with ERBB2 (231-H2N), which express high levels of ErbB1 and ErbB2 respectively, or wild-type MDA-MB-231, which expresses intermediate levels of ErbB1 and low levels of ErbB2. The distribution of cetuximab in A431 xenografts and trastuzumab in 231-H2N xenografts was time and dose dependent. At early intervals after injection of 1 mg cetuximab into A431 xenografts, the concentration of cetuximab decreased with increasing distance from blood vessels, but became more uniformly distributed at later times; however, distribution and binding remained limited in hypoxic regions of tumors. Injection of lower doses of cetuximab led to heterogeneous distributions. Similar results were observed with trastuzumab in 231-H2N xenografts. In MDA-MB-231 xenografts, which express lower levels of ErbB1, homogeneity of distribution of cetuximab was achieved more rapidly. Cetuximab and trastuzumab distribute slowly, but at higher doses achieve a relatively uniform distribution after about 24 hours, most likely due to their long half-lives in the circulation. Distribution within hypoxic regions of tumors remains poor.

  19. Algorithm of pulmonary emphysema extraction using thoracic 3D CT images

    NASA Astrophysics Data System (ADS)

    Saita, Shinsuke; Kubo, Mitsuru; Kawata, Yoshiki; Niki, Noboru; Nakano, Yasutaka; Ohmatsu, Hironobu; Tominaga, Keigo; Eguchi, Kenji; Moriyama, Noriyuki

    2007-03-01

    The number of emphysema patients has been increasing recently due to aging and smoking. Alveoli destroyed by emphysema cannot be restored, so early detection of emphysema is desired. We describe a quantitative algorithm for extracting emphysematous lesions and quantitatively evaluating their distribution patterns using low-dose thoracic 3-D CT images. The algorithm identified lung anatomies and extracted low attenuation areas (LAA) as emphysematous lesion candidates. Applying the algorithm to thoracic 3-D CT images and to follow-up 3-D CT images, we demonstrate its potential effectiveness in assisting radiologists and physicians to quantitatively evaluate the distribution of emphysematous lesions and their evolution over time.

  20. Algorithm of pulmonary emphysema extraction using low dose thoracic 3D CT images

    NASA Astrophysics Data System (ADS)

    Saita, S.; Kubo, M.; Kawata, Y.; Niki, N.; Nakano, Y.; Omatsu, H.; Tominaga, K.; Eguchi, K.; Moriyama, N.

    2006-03-01

    The number of emphysema patients has been increasing recently due to aging and smoking. Alveoli destroyed by emphysema cannot be restored, so early detection of emphysema is desired. We describe a quantitative algorithm for extracting emphysematous lesions and quantitatively evaluating their distribution patterns using low-dose thoracic 3-D CT images. The algorithm identified lung anatomies and extracted low attenuation areas (LAA) as emphysematous lesion candidates. Applying the algorithm to 100 thoracic 3-D CT images and to follow-up 3-D CT images, we demonstrate its potential effectiveness in assisting radiologists and physicians to quantitatively evaluate the distribution of emphysematous lesions and their evolution over time.

  1. Comparison of gamma-gamma prime Phase Coarsening Responses of Three Powder Metal Disk Superalloys

    NASA Technical Reports Server (NTRS)

    Gabb, T. P.; Gayda, J.; Johnson, D. F.; MacKay, R. A.; Rogers, R. B.; Sudbrack, C. K.; Garg, A.; Locci, I. E.; Semiatin, S. L.; Kang, E.

    2016-01-01

    The phase microstructures of several powder metal (PM) disk superalloys were quantitatively evaluated. Contents, chemistries, and lattice parameters of the gamma phase and the gamma prime strengthening phase were determined for conventionally heat treated Alloy 10, LSHR, and ME3 superalloys, after electrolytic phase extractions. Several long term heat treatments were then performed, to allow quantification of the precipitation, content, and size distribution of gamma prime at a long time interval approximating equilibrium conditions. Additional coarsening heat treatments were performed at multiple temperatures and shorter time intervals, to allow quantification of the precipitation, contents and size distributions of gamma prime at conditions diverging from equilibrium. Modest differences in gamma and gamma prime lattice parameters and their mismatch were observed among the alloys, which varied with heat treatment. Yet, gamma prime coarsening rates were very similar for all three alloys in the heat treatment conditions examined. Alloy 10 had higher gamma prime dissolution and formation temperatures than LSHR and ME3, but a lower lattice mismatch, which was slightly positive for all three alloys at room temperature. The gamma prime precipitates of Alloy 10 appeared to remain coherent at higher temperatures than for LSHR and ME3. Higher coarsening rates were observed for gamma prime precipitates residing along grain boundaries than for those within grains in all three alloys, during slow-moderate quenching from supersolvus solution heat treatments, and during aging at temperatures of 843 C and higher.
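
As a hedged aside, gamma prime coarsening in such alloys is commonly analyzed with an LSW-type rate law, r^3 - r0^3 = K*t, so the rate constant follows from a linear fit of the cubed mean precipitate radius against aging time. A toy fit with assumed numbers (not the paper's data):

```python
import numpy as np

# Synthetic coarsening data obeying r^3 - r0^3 = K*t exactly,
# with r0 = 50 nm and K = 120 nm^3/h (illustrative values only).
t = np.array([10.0, 50.0, 100.0, 200.0, 500.0])   # aging time, hours
r = (50.0**3 + 120.0 * t) ** (1.0 / 3.0)          # mean precipitate radius, nm

# Linear fit of r^3 vs t recovers the rate constant and initial size.
K, r0_cubed = np.polyfit(t, r**3, 1)
print(K, r0_cubed)
```

On real measurements the fit is noisy, but the same regression yields the coarsening rates being compared across the three alloys.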

  2. Optimization of a Deep Convective Cloud Technique in Evaluating the Long-Term Radiometric Stability of MODIS Reflective Solar Bands

    NASA Technical Reports Server (NTRS)

    Mu, Qiaozhen; Wu, Aisheng; Xiong, Xiaoxiong; Doelling, David R.; Angal, Amit; Chang, Tiejun; Bhatt, Rajendra

    2017-01-01

    MODIS reflective solar bands are calibrated on-orbit using a solar diffuser and near-monthly lunar observations. To monitor the performance and effectiveness of the on-orbit calibrations, pseudo-invariant targets such as deep convective clouds (DCCs), Libya-4, and Dome-C are used to track the long-term stability of the MODIS Level 1B product. However, the current MODIS operational DCC technique (DCCT) simply uses the criteria set for the 0.65-micrometer band. We optimize several critical DCCT parameters including the 11-micrometer IR-band Brightness Temperature (BT11) threshold for DCC identification, DCC core size and uniformity to help locate DCCs at convection centers, data collection time interval, and probability distribution function (PDF) bin increment for each channel. The mode reflectances corresponding to the PDF peaks are utilized as the DCC reflectances. Results show that the BT11 threshold and time interval are most critical for the Short Wave Infrared (SWIR) bands. The Bidirectional Reflectance Distribution Function model is most effective in reducing the DCC anisotropy for the visible channels. The uniformity filters and PDF bin size have minimal impacts on the visible channels and a larger impact on the SWIR bands. The newly optimized DCCT will be used for future evaluation of MODIS on-orbit calibration by the MODIS Characterization Support Team.

  3. Examining Temporal Sample Scale and Model Choice with Spatial Capture-Recapture Models in the Common Leopard Panthera pardus.

    PubMed

    Goldberg, Joshua F; Tempa, Tshering; Norbu, Nawang; Hebblewhite, Mark; Mills, L Scott; Wangchuk, Tshewang R; Lukacs, Paul

    2015-01-01

    Many large carnivores occupy a wide geographic distribution, and face threats from habitat loss and fragmentation, poaching, prey depletion, and human-wildlife conflicts. Conservation requires robust techniques for estimating population densities and trends, but the elusive nature and low densities of many large carnivores make them difficult to detect. Spatial capture-recapture (SCR) models provide a means for handling imperfect detectability, while linking population estimates to individual movement patterns to provide more accurate estimates than standard approaches. Within this framework, we investigate the effect of different sample interval lengths on density estimates, using simulations and a common leopard (Panthera pardus) model system. We apply Bayesian SCR methods to 89 simulated datasets and camera-trapping data from 22 leopards captured 82 times during winter 2010-2011 in Royal Manas National Park, Bhutan. We show that sample interval length from daily, weekly, monthly or quarterly periods did not appreciably affect median abundance or density, but did influence precision. We observed the largest gains in precision when moving from quarterly to shorter intervals. We therefore recommend daily sampling intervals for monitoring rare or elusive species where practicable, but note that monthly or quarterly sample periods can have similar informative value. We further develop a novel application of Bayes factors to select models where multiple ecological factors are integrated into density estimation. Our simulations demonstrate that these methods can help identify the "true" explanatory mechanisms underlying the data. Using this method, we found strong evidence for sex-specific movement distributions in leopards, suggesting that sexual patterns of space-use influence density. This model estimated a density of 10.0 leopards/100 km2 (95% credibility interval: 6.25-15.93), comparable to contemporary estimates in Asia.
These SCR methods provide a guide to monitor and observe the effect of management interventions on leopards and other species of conservation interest.

  4. Examining Temporal Sample Scale and Model Choice with Spatial Capture-Recapture Models in the Common Leopard Panthera pardus

    PubMed Central

    Goldberg, Joshua F.; Tempa, Tshering; Norbu, Nawang; Hebblewhite, Mark; Mills, L. Scott; Wangchuk, Tshewang R.; Lukacs, Paul

    2015-01-01

    Many large carnivores occupy a wide geographic distribution, and face threats from habitat loss and fragmentation, poaching, prey depletion, and human-wildlife conflicts. Conservation requires robust techniques for estimating population densities and trends, but the elusive nature and low densities of many large carnivores make them difficult to detect. Spatial capture-recapture (SCR) models provide a means for handling imperfect detectability, while linking population estimates to individual movement patterns to provide more accurate estimates than standard approaches. Within this framework, we investigate the effect of different sample interval lengths on density estimates, using simulations and a common leopard (Panthera pardus) model system. We apply Bayesian SCR methods to 89 simulated datasets and camera-trapping data from 22 leopards captured 82 times during winter 2010–2011 in Royal Manas National Park, Bhutan. We show that sample interval length from daily, weekly, monthly or quarterly periods did not appreciably affect median abundance or density, but did influence precision. We observed the largest gains in precision when moving from quarterly to shorter intervals. We therefore recommend daily sampling intervals for monitoring rare or elusive species where practicable, but note that monthly or quarterly sample periods can have similar informative value. We further develop a novel application of Bayes factors to select models where multiple ecological factors are integrated into density estimation. Our simulations demonstrate that these methods can help identify the “true” explanatory mechanisms underlying the data. Using this method, we found strong evidence for sex-specific movement distributions in leopards, suggesting that sexual patterns of space-use influence density. This model estimated a density of 10.0 leopards/100 km2 (95% credibility interval: 6.25–15.93), comparable to contemporary estimates in Asia.
These SCR methods provide a guide to monitor and observe the effect of management interventions on leopards and other species of conservation interest. PMID:26536231

  5. Experiment on Synchronous Timing Signal Detection from ISDB-T Terrestrial Digital TV Signal with Application to Autonomous Distributed ITS-IVC Network

    NASA Astrophysics Data System (ADS)

    Karasawa, Yoshio; Kumagai, Taichi; Takemoto, Atsushi; Fujii, Takeo; Ito, Kenji; Suzuki, Noriyoshi

    A novel timing synchronization scheme is proposed for use in inter-vehicle communication (IVC) with an autonomous distributed intelligent transport system (ITS). The scheme determines the timing of packet signal transmission in the IVC network and employs the guard interval (GI) timing in the orthogonal frequency division multiplexing (OFDM) signal currently used for terrestrial broadcasts in the Japanese digital television system (ISDB-T). This signal is used because the automotive market is expected to demand the capability for cars to receive terrestrial digital TV broadcasts in the near future. The use of broadcasts by automobiles presupposes that the on-board receivers are capable of accurately detecting the GI timing data in extremely low carrier-to-noise ratio (CNR) conditions, regardless of severe multipath environments that introduce broad scatter in signal arrival times. We therefore analyzed actual broadcast signals received in a moving vehicle in a field experiment and showed that the GI timing signal is detected with the desired accuracy even in extremely low-CNR environments. We also discuss how these findings can be applied.
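
The guard-interval correlation principle the scheme relies on can be sketched with toy FFT/GI sizes (not the actual ISDB-T parameters): a cyclic prefix repeats the tail of the useful symbol, so correlating the received signal with itself delayed by the useful-symbol length peaks where a symbol starts.

```python
import numpy as np

rng = np.random.default_rng(7)

n_fft, n_gi = 256, 32  # toy sizes; ISDB-T uses much larger FFTs
symbol = rng.normal(size=n_fft) + 1j * rng.normal(size=n_fft)
tx = np.concatenate([symbol[-n_gi:], symbol])  # guard interval + useful symbol

# Received stream: noise, then the OFDM symbol starting at sample 100.
noise = lambda n: 0.1 * (rng.normal(size=n) + 1j * rng.normal(size=n))
rx = np.concatenate([noise(100), tx, noise(100)])

# Delayed autocorrelation over a GI-length window: the guard interval matches
# the symbol tail n_fft samples later, producing a peak at the symbol start.
corr = np.array([np.abs(np.vdot(rx[k:k + n_gi], rx[k + n_fft:k + n_fft + n_gi]))
                 for k in range(len(rx) - n_fft - n_gi)])
start = int(np.argmax(corr))
print(start)  # near sample 100, the symbol start
```

In the field experiment this detection has to survive multipath and very low CNR, which is what averaging over many symbols buys.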

  6. Human comment dynamics in on-line social systems

    NASA Astrophysics Data System (ADS)

    Wu, Ye; Zhou, Changsong; Chen, Maoying; Xiao, Jinghua; Kurths, Jürgen

    2010-12-01

    Human commenting behavior is studied using data from ‘tianya’, which is one of the most popular on-line social systems in China. We found that the time interval between two consecutive comments on the same topic, called the inter-event time, follows a power-law distribution. This result shows that there is no characteristic decay time for a topic: very long periods without comments separate bursts of intensive commenting. Furthermore, the commenting frequency of different IDs on a topic also follows a power-law distribution, indicating that there are some “hubs” in the topic who lead the direction of public opinion. Based on personal commenting habits, a model is introduced to explain these phenomena. The numerical simulations of the model fit well with the empirical results. Our findings are helpful for discovering regular patterns of human behavior in on-line society and the evolution of public opinion in virtual as well as real society.
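
A power-law exponent for inter-event times like these is typically estimated by continuous maximum likelihood. A minimal sketch on synthetic data (the exponent value is illustrative, not fitted to the Tianya data):

```python
import numpy as np

rng = np.random.default_rng(2)

def powerlaw_mle(x, xmin):
    """Continuous maximum-likelihood estimate of a power-law exponent alpha
    for samples x >= xmin: alpha = 1 + n / sum(ln(x / xmin))."""
    x = x[x >= xmin]
    return 1.0 + len(x) / np.sum(np.log(x / xmin))

# Synthetic inter-event times drawn from p(t) ~ t^(-2.5) for t >= 1,
# via inverse-transform sampling.
alpha_true = 2.5
t = (1.0 - rng.uniform(size=50000)) ** (-1.0 / (alpha_true - 1.0))

alpha_hat = powerlaw_mle(t, xmin=1.0)
print(alpha_hat)
```

On empirical data, xmin itself must be chosen (e.g., by minimizing the Kolmogorov-Smirnov distance between data and fit above each candidate xmin).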

  7. Voice-onset time and buzz-onset time identification: A ROC analysis

    NASA Astrophysics Data System (ADS)

    Lopez-Bascuas, Luis E.; Rosner, Burton S.; Garcia-Albea, Jose E.

    2004-05-01

    Previous studies have employed signal detection theory to analyze data from speech and nonspeech experiments. Typically, signal distributions were assumed to be Gaussian. Schouten and van Hessen [J. Acoust. Soc. Am. 104, 2980-2990 (1998)] explicitly tested this assumption for an intensity continuum and a speech continuum. They measured response distributions directly and, assuming an interval scale, concluded that the Gaussian assumption held for both continua. However, Pastore and Macmillan [J. Acoust. Soc. Am. 111, 2432 (2002)] applied ROC analysis to Schouten and van Hessen's data, assuming only an ordinal scale. Their ROC curves supported the Gaussian assumption for the nonspeech signals only. Previously, Lopez-Bascuas [Proc. Audit. Bas. Speech Percept., 158-161 (1997)] found evidence with a rating scale procedure that the Gaussian model was inadequate for a voice-onset time continuum but not for a noise-buzz continuum. Both continua contained ten stimuli with asynchronies ranging from -35 ms to +55 ms. ROC curves (double-probability plots) are now reported for each pair of adjacent stimuli on the two continua. Both speech and nonspeech ROCs often appeared nonlinear, indicating non-Gaussian signal distributions under the usual zero-variance assumption for response criteria.

  8. Search for a Signature of Interaction between Relativistic Jet and Progenitor in Gamma-Ray Bursts

    NASA Astrophysics Data System (ADS)

    Yoshida, Kazuki; Yoneoku, Daisuke; Sawano, Tatsuya; Ito, Hirotaka; Matsumoto, Jin; Nagataki, Shigehiro

    2017-11-01

    The time variability of prompt emission in gamma-ray bursts (GRBs) is expected to originate from the temporal behavior of the central engine activity and the jet propagation in the massive stellar envelope. Using a pulse search algorithm for bright GRBs, we investigate the time variability of gamma-ray light curves to search for a signature of the interaction between the jet and the inner structure of the progenitor. Since this signature might appear in the earlier phase of prompt emission, we divide the light curves into the initial phase and the late phase by referring to the trigger time and the burst duration of each GRB. We also adopt this algorithm for GRBs associated with supernovae/hypernovae that certainly are accompanied by massive stars. However, there is no difference between the pulse interval distributions, each described by a lognormal distribution, in the two phases. We confirm that this result can be explained by the photospheric emission model if the energy injection of the central engine is not steady or completely periodic but episodic and described by a lognormal distribution with a mean of ˜1 s.

  9. Search for a Signature of Interaction between Relativistic Jet and Progenitor in Gamma-Ray Bursts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoshida, Kazuki; Yoneoku, Daisuke; Sawano, Tatsuya

    The time variability of prompt emission in gamma-ray bursts (GRBs) is expected to originate from the temporal behavior of the central engine activity and the jet propagation in the massive stellar envelope. Using a pulse search algorithm for bright GRBs, we investigate the time variability of gamma-ray light curves to search for a signature of the interaction between the jet and the inner structure of the progenitor. Since this signature might appear in the earlier phase of prompt emission, we divide the light curves into the initial phase and the late phase by referring to the trigger time and the burst duration of each GRB. We also adopt this algorithm for GRBs associated with supernovae/hypernovae that certainly are accompanied by massive stars. However, there is no difference between the pulse interval distributions, each described by a lognormal distribution, in the two phases. We confirm that this result can be explained by the photospheric emission model if the energy injection of the central engine is not steady or completely periodic but episodic and described by a lognormal distribution with a mean of ∼1 s.

  10. Online-offline activities and game-playing behaviors of avatars in a massive multiplayer online role-playing game

    NASA Astrophysics Data System (ADS)

    Jiang, Zhi-Qiang; Zhou, Wei-Xing; Tan, Qun-Zhao

    2009-11-01

    Massive multiplayer online role-playing games (MMORPGs) are very popular in China, which provides a potential platform for scientific research. We study the online-offline activities of avatars in an MMORPG to understand their game-playing behavior. The statistical analysis unveils that the active avatars can be classified into three types. The avatars of the first type are owned by game cheaters who go online and offline in preset time intervals with the online duration distributions dominated by pulses. The second type of avatars is characterized by a Weibull distribution in the online durations, which is confirmed by statistical tests. The distributions of online durations of the remaining individual avatars differ from the above two types and cannot be described by a simple form. These findings have potential applications in the game industry.
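
The Weibull hypothesis for online durations can be checked with a Weibull probability plot: for Weibull data, ln(-ln(1-F)) is linear in ln(t) with slope equal to the shape parameter. A sketch with assumed shape/scale values (not fitted to the game data):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic online durations: Weibull with shape 0.8, scale 30 (minutes);
# both values are illustrative only.
shape, scale = 0.8, 30.0
t = np.sort(scale * rng.weibull(shape, 5000))

# Midpoint plotting positions for the empirical CDF.
F = (np.arange(1, len(t) + 1) - 0.5) / len(t)

# Weibull plot: ln(-ln(1-F)) = shape*ln(t) - shape*ln(scale).
slope, intercept = np.polyfit(np.log(t), np.log(-np.log(1 - F)), 1)
print(slope)  # ~ shape parameter
```

The fitted slope recovers the shape parameter and the intercept the scale, so a straight plot is (informal) evidence for the Weibull form; a formal check would use a goodness-of-fit test as in the study.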

  11. Multifractal analysis of mobile social networks

    NASA Astrophysics Data System (ADS)

    Zheng, Wei; Zhang, Zifeng; Deng, Yufan

    2017-09-01

    As Wireless Fidelity (Wi-Fi)-enabled handheld devices have been widely used, the mobile social networks (MSNs) has been attracting extensive attention. Fractal approaches have also been widely applied to characterierize natural networks as useful tools to depict their spatial distribution and scaling properties. Moreover, when the complexity of the spatial distribution of MSNs cannot be properly charaterized by single fractal dimension, multifractal analysis is required. For further research, we introduced a multifractal analysis method based on box-covering algorithm to describe the structure of MSNs. Using this method, we find that the networks are multifractal at different time interval. The simulation results demonstrate that the proposed method is efficient for analyzing the multifractal characteristic of MSNs, which provides a distribution of singularities adequately describing both the heterogeneity of fractal patterns and the statistics of measurements across spatial scales in MSNs.

  12. INFLUENCES OF RESPONSE RATE AND DISTRIBUTION ON THE CALCULATION OF INTEROBSERVER RELIABILITY SCORES

    PubMed Central

    Rolider, Natalie U.; Iwata, Brian A.; Bullock, Christopher E.

    2012-01-01

    We examined the effects of several variations in response rate on the calculation of total, interval, exact-agreement, and proportional reliability indices. Trained observers recorded computer-generated data that appeared on a computer screen. In Study 1, target responses occurred at low, moderate, and high rates during separate sessions so that reliability results based on the four calculations could be compared across a range of values. Total reliability was uniformly high, interval reliability was spuriously high for high-rate responding, proportional reliability was somewhat lower for high-rate responding, and exact-agreement reliability was the lowest of the measures, especially for high-rate responding. In Study 2, we examined the separate effects of response rate per se, bursting, and end-of-interval responding. Response rate and bursting had little effect on reliability scores; however, the distribution of some responses at the end of intervals decreased interval reliability somewhat, proportional reliability noticeably, and exact-agreement reliability markedly. PMID:23322930
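
The four indices can be illustrated on a small made-up record of per-interval response counts; the definitions below follow common usage, and the study's exact computational details may differ:

```python
import numpy as np

# Hypothetical counts from two observers scoring the same session in 10-s intervals.
obs1 = np.array([0, 2, 1, 3, 0, 4, 2, 0, 1, 5])
obs2 = np.array([0, 2, 2, 3, 0, 3, 2, 1, 1, 5])

# Total reliability: smaller session total over larger session total.
total = min(obs1.sum(), obs2.sum()) / max(obs1.sum(), obs2.sum()) * 100

# Interval reliability: agreement on occurrence vs nonoccurrence per interval.
interval = np.mean((obs1 > 0) == (obs2 > 0)) * 100

# Exact-agreement reliability: intervals with identical counts.
exact = np.mean(obs1 == obs2) * 100

# Proportional reliability: smaller/larger count per interval (1.0 when both zero).
lo = np.minimum(obs1, obs2).astype(float)
hi = np.maximum(obs1, obs2).astype(float)
ratio = np.where(hi > 0, lo / np.where(hi == 0, 1.0, hi), 1.0)
proportional = ratio.mean() * 100

print(total, interval, exact, proportional)
```

Even on this toy record the ordering matches the pattern reported above: total reliability is highest and exact agreement lowest, because the stricter indices penalize within-interval disagreements that session totals hide.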

  13. flexsurv: A Platform for Parametric Survival Modeling in R

    PubMed Central

    Jackson, Christopher H.

    2018-01-01

    flexsurv is an R package for fully-parametric modeling of survival data. Any parametric time-to-event distribution may be fitted if the user supplies a probability density or hazard function, and ideally also their cumulative versions. Standard survival distributions are built in, including the three- and four-parameter generalized gamma and F distributions. Any parameter of any distribution can be modeled as a linear or log-linear function of covariates. The package also includes the spline model of Royston and Parmar (2002), in which both baseline survival and covariate effects can be arbitrarily flexible parametric functions of time. The main model-fitting function, flexsurvreg, uses the familiar syntax of survreg from the standard survival package (Therneau 2016). Censoring or left-truncation are specified in ‘Surv’ objects. The models are fitted by maximizing the full log-likelihood, and estimates and confidence intervals for any function of the model parameters can be printed or plotted. flexsurv also provides functions for fitting and predicting from fully-parametric multi-state models, and connects with the mstate package (de Wreede, Fiocco, and Putter 2011). This article explains the methods and design principles of the package, giving several worked examples of its use. PMID:29593450

  14. On intra-supply chain system with an improved distribution plan, multiple sales locations and quality assurance.

    PubMed

    Chiu, Singa Wang; Huang, Chao-Chih; Chiang, Kuo-Wei; Wu, Mei-Fang

    2015-01-01

    Transnational companies, operating in extremely competitive global markets, always seek to lower different operating costs, such as inventory holding costs, in their intra-supply chain system. This paper incorporates a cost-reducing product distribution policy into an intra-supply chain system with multiple sales locations and quality assurance studied by [Chiu et al., Expert Syst Appl, 40:2669-2676, (2013)]. Under the proposed cost-reducing distribution policy, an added initial delivery of end items is distributed to multiple sales locations to meet their demand during the production unit's uptime and rework time. After rework, when the remaining production lot goes through quality assurance, n fixed-quantity installments of finished items are then transported to sales locations at a fixed time interval. Mathematical modeling and optimization techniques are used to derive closed-form optimal operating policies for the proposed system. Furthermore, the study demonstrates significant savings in stock holding costs for both the production unit and sales locations. The alternative of outsourcing the product delivery task to an external distributor is also analyzed, to assist managerial decision making on potential outsourcing issues and facilitate further reduction in operating costs.

  15. Individual and group dynamics in purchasing activity

    NASA Astrophysics Data System (ADS)

    Gao, Lei; Guo, Jin-Li; Fan, Chao; Liu, Xue-Jiao

    2013-01-01

    As a major part of the daily operation in an enterprise, purchasing frequency is in constant change. Recent approaches from human dynamics can provide some new insights into the economic behavior of companies in the supply chain. This paper captures the attributes of the creation times of purchase orders to an individual vendor, as well as to all vendors, and further investigates whether they exhibit some kind of dynamics by applying logarithmic binning to the construction of distribution plots. It is found that the former displays a power-law distribution with approximate exponent 2.0, while the latter is fitted by a mixture distribution with both power-law and exponential characteristics. Thus, two distinctive characteristics are presented for the interval-time distribution from the perspectives of individual dynamics and group dynamics. This mixing feature can be attributed to fitting deviations: they are negligible for individual dynamics, but those of different vendors accumulate and lead to an exponential factor for group dynamics. To better describe the mechanism generating the heterogeneity of the purchase order assignment process from the objective company to all its vendors, a model driven by product life cycle is introduced, and then the analytical distribution and the simulation result are obtained, which are in good agreement with the empirical data.
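
Logarithmic binning, as used here for the distribution plots, can be sketched as follows: bin edges grow geometrically and each count is normalized by its bin width, which tames the noisy tail of a heavy-tailed sample (synthetic data and exponent chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic heavy-tailed sample: p(x) ~ x^(-2) for x >= 1.
x = (1.0 - rng.uniform(size=20000)) ** (-1.0)

# Geometrically spaced bin edges; normalize counts by bin width.
edges = np.logspace(0, np.log10(x.max()), 25)
counts, edges = np.histogram(x, bins=edges)
widths = np.diff(edges)
density = counts / widths / len(x)
centers = np.sqrt(edges[:-1] * edges[1:])  # geometric bin centers

# Slope of log(density) vs log(center) recovers the exponent (~ -2),
# excluding empty bins and the sparsely populated extreme tail.
mask = (density > 0) & (centers < np.quantile(x, 0.999))
slope = np.polyfit(np.log(centers[mask]), np.log(density[mask]), 1)[0]
print(slope)
```

With linear bins the tail would hold a handful of counts per bin and the log-log plot would fan out into noise; logarithmic bins keep the expected count per bin roughly comparable across decades.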

  16. The application of prototype point processes for the summary and description of California wildfires

    USGS Publications Warehouse

    Nichols, K.; Schoenberg, F.P.; Keeley, J.E.; Bray, A.; Diez, D.

    2011-01-01

    A method for summarizing repeated realizations of a space-time marked point process, known as prototyping, is discussed and applied to catalogues of wildfires in California. Prototype summaries are constructed for varying time intervals using California wildfire data from 1990 to 2006. Previous work on prototypes for temporal and space-time point processes is extended here to include methods for computing prototypes with marks and the incorporation of prototype summaries into hierarchical clustering algorithms, the latter of which is used to delineate fire seasons in California. Other results include summaries of patterns in the spatial-temporal distribution of wildfires within each wildfire season. © 2011 Blackwell Publishing Ltd.

  17. Feynman-Kac equations for reaction and diffusion processes

    NASA Astrophysics Data System (ADS)

    Hou, Ru; Deng, Weihua

    2018-04-01

    This paper provides a theoretical framework for deriving the forward and backward Feynman-Kac equations for the distribution of functionals of the path of a particle undergoing both diffusion and reaction processes. Once given the diffusion type and reaction rate, a specific forward or backward Feynman-Kac equation can be obtained. The results in this paper include those for normal/anomalous diffusions and reactions with linear/nonlinear rates. Using the derived equations, we apply our findings to compute some physical (experimentally measurable) statistics, including the occupation time in half-space, the first passage time, and the occupation time in half-interval with an absorbing or reflecting boundary, for the physical system with anomalous diffusion and spontaneous evanescence.
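
One of the functionals mentioned, the occupation time in a half-space, is easy to probe by direct simulation of free Brownian paths; this is a sanity check of the functional's distribution, not the paper's Feynman-Kac machinery:

```python
import numpy as np

rng = np.random.default_rng(9)

# Fraction of time each discretized Brownian path spends in x > 0.
# For free Brownian motion this follows Levy's arcsine law: mean 1/2,
# with a U-shaped density piling up near 0 and 1.
n_paths, n_steps = 2000, 1000
steps = rng.normal(size=(n_paths, n_steps))
paths = np.cumsum(steps, axis=1)
occupation = (paths > 0).mean(axis=1)  # T_+ / T for each path

print(occupation.mean())
```

The counterintuitive U-shape (a path tends to spend most of its time on one side) is exactly the kind of functional statistic the derived Feynman-Kac equations describe once diffusion type and reaction rate are specified.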

  18. Prediction of Malaysian monthly GDP

    NASA Astrophysics Data System (ADS)

    Hin, Pooi Ah; Ching, Soo Huei; Yeing, Pan Wei

    2015-12-01

    The paper attempts to use a method based on the multivariate power-normal distribution to predict the Malaysian Gross Domestic Product for the next month. Letting r(t) be the vector consisting of the month-t values of m selected macroeconomic variables and GDP, we model the month-(t+1) GDP to be dependent on the present and l-1 past values r(t), r(t-1),…,r(t-l+1) via a conditional distribution which is derived from a [(m+1)l+1]-dimensional power-normal distribution. The 100(α/2)% and 100(1-α/2)% points of the conditional distribution may be used to form an out-of-sample prediction interval. This interval, together with the mean of the conditional distribution, may be used to predict the month-(t+1) GDP. The mean absolute percentage error (MAPE), estimated coverage probability and average length of the prediction interval are used as the criteria for selecting the suitable lag value l-1 and the subset from a pool of 17 macroeconomic variables. It is found that the relatively better models are those with 2 ≤ l ≤ 3, involving one or two of the macroeconomic variables given by Market Indicative Yield, Oil Prices, Exchange Rate and Import Trade.
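
The three selection criteria named above (MAPE, estimated coverage probability, average interval length) are straightforward to compute. A sketch on synthetic data (illustrative numbers, not the Malaysian series or the paper's power-normal intervals):

```python
import numpy as np

rng = np.random.default_rng(4)

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return np.mean(np.abs((actual - predicted) / actual)) * 100

def interval_metrics(actual, lower, upper):
    """Estimated coverage probability and average length of prediction intervals."""
    coverage = np.mean((actual >= lower) & (actual <= upper))
    return coverage, np.mean(upper - lower)

# Synthetic monthly series with point predictions and symmetric intervals.
actual = 100 + rng.normal(0, 2, 120)
predicted = actual + rng.normal(0, 1, 120)      # prediction error ~ N(0, 1)
lower, upper = predicted - 2.5, predicted + 2.5

coverage, avg_len = interval_metrics(actual, lower, upper)
print(mape(actual, predicted), coverage, avg_len)
```

Model selection then trades these off: among candidate lag values and variable subsets, prefer low MAPE and short intervals whose empirical coverage stays near the nominal level.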

  19. Adaptive real time selection for quantum key distribution in lossy and turbulent free-space channels

    NASA Astrophysics Data System (ADS)

    Vallone, Giuseppe; Marangon, Davide G.; Canale, Matteo; Savorgnan, Ilaria; Bacco, Davide; Barbieri, Mauro; Calimani, Simon; Barbieri, Cesare; Laurenti, Nicola; Villoresi, Paolo

    2015-04-01

    The unconditional security of cryptographic keys obtained by quantum key distribution (QKD) protocols will induce a quantum leap in free-space communication privacy, in the same way that secure optical fiber connections are now being realized. However, free-space channels, in particular long links in the presence of atmospheric turbulence, are affected by losses, fluctuating transmissivity, and background light that impair the conditions for secure QKD. Here we introduce a method to counteract atmospheric turbulence in QKD experiments. Our adaptive real time selection (ARTS) technique at the receiver is based on selecting the intervals with higher channel transmissivity. We demonstrate, using data from the Canary Island 143-km free-space link, that conditions with an unacceptable average quantum bit error rate, which would prevent the generation of a secure key, can still be exploited once the data are parsed according to the instantaneous scintillation using the ARTS technique.
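The selection step described above amounts to thresholding the time slices on instantaneous transmissivity and recomputing the error rate over only the retained slices. A hedged sketch, assuming per-slice arrays of error counts and sifted-key counts (the data layout and threshold handling are assumptions, not from the paper):

```python
import numpy as np

def arts_select(transmissivity, errors, sifted, threshold):
    """Keep only time slices whose instantaneous transmissivity exceeds
    the threshold, then recompute the average QBER on the kept slices."""
    t = np.asarray(transmissivity, float)
    keep = t > threshold
    kept_errors = np.asarray(errors, float)[keep]
    kept_sifted = np.asarray(sifted, float)[keep]
    qber = kept_errors.sum() / kept_sifted.sum()
    return keep, qber
```

Raising the threshold discards more of the channel but lowers the average QBER of what remains, which is the trade-off the ARTS technique exploits.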

  20. Distributed communication and psychosocial performance in simulated space dwelling groups

    NASA Astrophysics Data System (ADS)

    Hienz, R. D.; Brady, J. V.; Hursh, S. R.; Ragusa, L. C.; Rouse, C. O.; Gasior, E. D.

    2005-05-01

    The present report describes the development and application of a distributed interactive multi-person simulation in a computer-generated planetary environment as an experimental test bed for modeling the human performance effects of variations in the available communication modes, and in the types of stress and incentive conditions underlying the completion of mission goals. The results demonstrated a high degree of interchangeability between communication modes (audio, text) when one mode was not available. In addition, imposing time-pressure stress on task completion reduced performance effectiveness, and these reductions were ameliorated by introducing positive incentives contingent upon improved performance. The results confirmed that cooperative and productive psychosocial interactions can be maintained between individually isolated and dispersed members of simulated spaceflight crews communicating and problem-solving effectively over extended time intervals without the benefit of one another's physical presence.

  1. Surface-distributed low-frequency asynchronous stimulation delays fatigue of stimulated muscles.

    PubMed

    Maneski, Lana Z Popović; Malešević, Nebojša M; Savić, Andrej M; Keller, Thierry; Popović, Dejan B

    2013-12-01

    One important reason why functional electrical stimulation (FES) has not gained widespread clinical use is the limitation imposed by rapid muscle fatigue due to non-physiological activation of the stimulated muscles. We aimed to show that asynchronous low-pulse-rate (LPR) electrical stimulation applied by multipad surface electrodes greatly postpones the occurrence of muscle fatigue compared with conventional stimulation (high pulse rate, HPR). We compared the produced force vs. time of the forearm muscles responsible for finger flexion in 2 stimulation protocols, LPR (fL = 10 Hz) and HPR (fH = 40 Hz). Surface-distributed low-frequency asynchronous stimulation (sDLFAS) doubles the time interval before the onset of fatigue (104 ± 80%) compared with conventional synchronous stimulation. Combining the performance of multipad electrodes (increased selectivity and facilitated positioning) with sDLFAS (decreased fatigue) can improve many FES applications in both the lower and upper extremities. Copyright © 2013 Wiley Periodicals, Inc.

  2. High resolution data acquisition

    DOEpatents

    Thornton, G.W.; Fuller, K.R.

    1993-04-06

    A high resolution event interval timing system measures short time intervals such as occur in high energy physics or laser ranging. Timing is provided from a clock, pulse train, and analog circuitry for generating a triangular wave synchronously with the pulse train (as shown in the patent diagram). The triangular wave has an amplitude and slope functionally related to the time elapsed during each clock pulse in the train. A converter forms a first digital value of the amplitude and slope of the triangle wave at the start of the event interval and a second digital value of the amplitude and slope of the triangle wave at the end of the event interval. A counter counts the clock pulse train during the interval to form a gross event interval time. A computer then combines the gross event interval time and the first and second digital values to output a high resolution value for the event interval.

  3. High resolution data acquisition

    DOEpatents

    Thornton, Glenn W.; Fuller, Kenneth R.

    1993-01-01

    A high resolution event interval timing system measures short time intervals such as occur in high energy physics or laser ranging. Timing is provided from a clock (38) pulse train (37) and analog circuitry (44) for generating a triangular wave (46) synchronously with the pulse train (37). The triangular wave (46) has an amplitude and slope functionally related to the time elapsed during each clock pulse in the train. A converter (18, 32) forms a first digital value of the amplitude and slope of the triangle wave at the start of the event interval and a second digital value of the amplitude and slope of the triangle wave at the end of the event interval. A counter (26) counts the clock pulse train (37) during the interval to form a gross event interval time. A computer (52) then combines the gross event interval time and the first and second digital values to output a high resolution value for the event interval.
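The interpolation scheme described in both patent abstracts (coarse counter plus fine phase decoded from the triangular wave) can be sketched as follows. The phase-decoding convention, with the rising half of the triangle mapping to the first half of the clock period, is an assumption for illustration; the patents' actual converter logic is not reproduced here:

```python
def phase_fraction(amplitude, slope, peak):
    """Decode the elapsed fraction of a clock period from one sample of
    the triangular wave: amplitude in [0, peak]; the sign of the slope
    distinguishes the rising half (fractions 0..0.5) from the falling
    half (fractions 0.5..1).  Assumed convention."""
    if slope >= 0:
        return 0.5 * amplitude / peak
    return 1.0 - 0.5 * amplitude / peak

def event_interval(n_pulses, period, start_frac, end_frac):
    """Combine the gross counter time with the fine phase fractions
    measured at the start and end of the event interval."""
    return (n_pulses + end_frac - start_frac) * period
```

The resolution is then set by the amplitude digitization of the triangular wave rather than by the clock period alone, which is the point of the vernier-style design.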

  4. Reflection of the State of Hunger in Impulse Activity of Nose Wing Muscles and Upper Esophageal Sphincter during Search behavior in Rabbits.

    PubMed

    Kromin, A A; Dvoenko, E E; Zenina, O Yu

    2016-07-01

    Reflection of the state of hunger in the impulse activity of the nose wing (alae nasi) muscles and the upper esophageal sphincter muscles was studied in chronic experiments on rabbits subjected to 24-h food deprivation, both in the absence of locomotion and during search behavior. In the absence of apparent behavioral activity, including sniffing, the alae nasi muscles of hungry rabbits constantly generated bursts of action potentials synchronous with breathing, while the upper esophageal sphincter muscles exhibited regular aperiodic low-amplitude impulse activity of the tonic type. The latent form of food motivation was reflected in the temporal organization of alae nasi impulse activity as a bimodal distribution of interpulse intervals, and in the temporal structure of upper esophageal sphincter impulse activity as a monomodal distribution. It was also manifested in the temporal organization of the periods of the burst-like action-potential rhythm generated by the alae nasi muscles as a monomodal distribution characterized by highly dispersed respiratory cycle periods. In the absence of physical activity, hungry animals sporadically exhibited sniffing, manifested as a change from burst-like alae nasi impulse activity to a single-burst activity type with a bimodal distribution of interpulse intervals and a monomodal distribution of burst-rhythm periods whose maximum was shifted towards lower values, the cause of the increased respiratory rate. At the same time, the monomodal temporal structure of upper esophageal sphincter impulse activity did not change.
As food motivation increased during search behavior, the temporal structure of the periods of the burst-like action-potential rhythm generated by the alae nasi muscles became similar to that observed during sniffing without locomotion, which is typical of an increased respiratory rhythm frequency. Increased hunger motivation was reflected in the temporal structure of upper esophageal sphincter impulse activity as a shift of the maximum of the monomodal interpulse-interval distribution towards lower values on the histogram, resulting in a higher impulse activity frequency. The simultaneous increase in the frequency of action-potential burst generation by the alae nasi muscles and of the regular impulse activity of the upper esophageal sphincter muscles is a reliable criterion of enhanced food motivation during search behavior in rabbits.

  5. Body size distributions signal a regime shift in a lake ...

    EPA Pesticide Factsheets

    Communities of organisms, from mammals to microorganisms, have discontinuous distributions of body size. This pattern of size structuring is a conservative trait of community organization and is a product of processes that occur at multiple spatial and temporal scales. In this study, we assessed whether body size patterns serve as an indicator of a threshold between alternative regimes. Over the past 7000 years, the biological communities of Foy Lake (Montana, USA) have undergone a major regime shift owing to climate change. We used a palaeoecological record of diatom communities to estimate diatom sizes, and then analysed the discontinuous distribution of organism sizes over time. We used Bayesian classification and regression tree models to determine that all time intervals exhibited aggregations of sizes separated by gaps in the distribution and found a significant change in diatom body size distributions approximately 150 years before the identified ecosystem regime shift. We suggest that discontinuity analysis is a useful addition to the suite of tools for the detection of early warning signals of regime shifts.

  6. Quasiperiodicity in time evolution of the Bloch vector under the thermal Jaynes-Cummings model

    NASA Astrophysics Data System (ADS)

    Azuma, Hiroo; Ban, Masashi

    2014-07-01

    We study a quasiperiodic structure in the time evolution of the Bloch vector, whose dynamics is governed by the thermal Jaynes-Cummings model (JCM). Putting the two-level atom into a certain pure state and the cavity field into a mixed state in thermal equilibrium at the initial time, we let the whole system evolve according to the JCM Hamiltonian. During this time evolution, the motion of the Bloch vector seems disordered: because of the thermal photon distribution, both the norm and the direction of the Bloch vector change erratically. In this paper, taking a viewpoint different from the usual ones, we investigate the quasiperiodicity of the Bloch vector's trajectories. Introducing the concept of quasiperiodic motion, we can explain the confused behaviour of the system as an intermediate state between periodic and chaotic motion. More specifically, we discuss the following two facts: (1) if we adjust the time interval Δt properly, figures consisting of dots plotted at the constant time interval acquire scale invariance under replacement of Δt by sΔt, where s(>1) is an arbitrary real but not transcendental number; and (2) we can compute values of the time variable t for which |Sz(t)| (the absolute value of the z-component of the Bloch vector) is very small using the Diophantine approximation (a rational approximation of an irrational number).

  7. On the (Non)Evolution of H I Gas in Galaxies Over Cosmic Time

    NASA Astrophysics Data System (ADS)

    Prochaska, J. Xavier; Wolfe, Arthur M.

    2009-05-01

    We present new results on the frequency distribution of projected H I column densities f(N H I , X), total comoving covering fraction, and integrated mass densities ρH I of high-redshift, H I galactic gas from a survey of damped Lyα systems (DLAs) in the Sloan Digital Sky Survey, Data Release 5. For the full sample spanning z = 2.2-5 (738 DLAs), f(N H I , X) is well fitted by a double power law with a break column density Nd = 1021.55±0.04 cm-2 and low/high-end exponents α = -2.00 ± 0.05, - 6.4+1.1 -1.6. The shape of f(N H I , X) is invariant during this redshift interval and also follows the projected surface density distribution of present-day H I disks as inferred from 21 cm observations. We conclude that H I gas has been distributed in a self-similar fashion for the past 12 Gyr. The normalization of f(N H I , X), in contrast, decreases by a factor of 2 during the ≈2 Gyr interval from z = 4-2.2 with coincident decreases in both the total covering fraction and ρH I . At z ≈ 2, these quantities match the present-day values suggesting no evolution during the past ≈10 Gyr. We argue that the evolution at early times is driven by "violent" processes that remove gas from nearly half the galaxies at z ≈ 3 establishing the antecedents of current early-type galaxies. The perceived constancy of ρH I , meanwhile, implies that H I gas is a necessary but insufficient precondition for star formation and that the global star formation rate is driven by the accretion and condensation of fresh gas from the intergalactic medium.

  8. Measuring the EMS patient access time interval and the impact of responding to high-rise buildings.

    PubMed

    Morrison, Laurie J; Angelini, Mark P; Vermeulen, Marian J; Schwartz, Brian

    2005-01-01

    To measure the patient access time interval and characterize its contribution to the total emergency medical services (EMS) response time interval; to compare the patient access time intervals for patients located three or more floors above ground with those less than three floors above or below ground, and specifically in the apartment subgroup; and to identify barriers that significantly impede EMS access to patients in high-rise apartments. An observational study of all patients treated by an emergency medical technician-paramedic (EMT-P) crew was conducted using a trained independent observer to collect time intervals and identify potential barriers to access. Of 118 observed calls, 25 (21%) originated from patients three or more floors above ground. The overall median and 90th percentile (95% confidence interval) patient access time intervals were 1.61 (1.27, 1.91) and 3.47 (3.08, 4.05) minutes, respectively. The median interval was 2.73 (2.22, 3.03) minutes among calls from patients located three or more stories above ground compared with 1.25 (1.07, 1.55) minutes among those at lower levels. The patient access time interval represented 23.5% of the total EMS response time interval among calls originating less than three floors above or below ground and 32.2% of those located three or more stories above ground. The most frequently encountered barriers to access included security code entry requirements, lack of directional signs, and inability to fit the stretcher into the elevator. The patient access time interval is significantly long and represents a substantial component of the total EMS response time interval, especially among ambulance calls originating three or more floors above ground. A number of barriers appear to contribute to delayed paramedic access.

  9. Joint time/frequency-domain inversion of reflection data for seabed geoacoustic profiles and uncertainties.

    PubMed

    Dettmer, Jan; Dosso, Stan E; Holland, Charles W

    2008-03-01

    This paper develops a joint time/frequency-domain inversion for high-resolution single-bounce reflection data, with the potential to resolve fine-scale profiles of sediment velocity, density, and attenuation over small seafloor footprints (approximately 100 m). The approach utilizes sequential Bayesian inversion of time- and frequency-domain reflection data, employing ray-tracing inversion for reflection travel times and a layer-packet stripping method for spherical-wave reflection-coefficient inversion. Posterior credibility intervals from the travel-time inversion are passed on as prior information to the reflection-coefficient inversion. Within the reflection-coefficient inversion, parameter information is passed from one layer packet inversion to the next in terms of marginal probability distributions rotated into principal components, providing an efficient approach to (partially) account for multi-dimensional parameter correlations with one-dimensional, numerical distributions. Quantitative geoacoustic parameter uncertainties are provided by a nonlinear Gibbs sampling approach employing full data error covariance estimation (including nonstationary effects) and accounting for possible biases in travel-time picks. Posterior examination of data residuals shows the importance of including data covariance estimates in the inversion. The joint inversion is applied to data collected on the Malta Plateau during the SCARAB98 experiment.

  10. Modulation of human time processing by subthalamic deep brain stimulation.

    PubMed

    Wojtecki, Lars; Elben, Saskia; Timmermann, Lars; Reck, Christiane; Maarouf, Mohammad; Jörgens, Silke; Ploner, Markus; Südmeyer, Martin; Groiss, Stefan Jun; Sturm, Volker; Niedeggen, Michael; Schnitzler, Alfons

    2011-01-01

    Timing in the range of seconds, referred to as interval timing, is crucial for cognitive operations and conscious time processing. According to recent models of interval timing, basal ganglia (BG) oscillatory loops are involved in time interval recognition. Parkinson's disease (PD) is a typical disease of the basal ganglia that shows distortions in interval timing. Deep brain stimulation (DBS) of the subthalamic nucleus (STN) is a powerful treatment of PD which modulates motor and cognitive functions depending on stimulation frequency by affecting subcortical-cortical oscillatory loops. Thus, for the understanding of BG-involvement in interval timing it is of interest whether STN-DBS can modulate timing in a frequency dependent manner by interference with oscillatory time recognition processes. We examined production and reproduction of 5 and 15 second intervals and millisecond timing in a double blind, randomised, within-subject repeated-measures design of 12 PD-patients applying no, 10-Hz- and ≥ 130-Hz-STN-DBS compared to healthy controls. We found under(re-)production of the 15-second interval and a significant enhancement of this under(re-)production by 10-Hz-stimulation compared to no stimulation, ≥ 130-Hz-STN-DBS and controls. Millisecond timing was not affected. We provide first evidence for a frequency-specific modulatory effect of STN-DBS on interval timing. Our results corroborate the involvement of BG in general and of the STN in particular in the cognitive representation of time intervals in the range of multiple seconds.

  11. Modulation of Human Time Processing by Subthalamic Deep Brain Stimulation

    PubMed Central

    Timmermann, Lars; Reck, Christiane; Maarouf, Mohammad; Jörgens, Silke; Ploner, Markus; Südmeyer, Martin; Groiss, Stefan Jun; Sturm, Volker; Niedeggen, Michael; Schnitzler, Alfons

    2011-01-01

    Timing in the range of seconds, referred to as interval timing, is crucial for cognitive operations and conscious time processing. According to recent models of interval timing, basal ganglia (BG) oscillatory loops are involved in time interval recognition. Parkinson's disease (PD) is a typical disease of the basal ganglia that shows distortions in interval timing. Deep brain stimulation (DBS) of the subthalamic nucleus (STN) is a powerful treatment of PD which modulates motor and cognitive functions depending on stimulation frequency by affecting subcortical-cortical oscillatory loops. Thus, for the understanding of BG-involvement in interval timing it is of interest whether STN-DBS can modulate timing in a frequency dependent manner by interference with oscillatory time recognition processes. We examined production and reproduction of 5 and 15 second intervals and millisecond timing in a double blind, randomised, within-subject repeated-measures design of 12 PD-patients applying no, 10-Hz- and ≥130-Hz-STN-DBS compared to healthy controls. We found under(re-)production of the 15-second interval and a significant enhancement of this under(re-)production by 10-Hz-stimulation compared to no stimulation, ≥130-Hz-STN-DBS and controls. Millisecond timing was not affected. We provide first evidence for a frequency-specific modulatory effect of STN-DBS on interval timing. Our results corroborate the involvement of BG in general and of the STN in particular in the cognitive representation of time intervals in the range of multiple seconds. PMID:21931767

  12. Likelihood-based confidence intervals for estimating floods with given return periods

    NASA Astrophysics Data System (ADS)

    Martins, Eduardo Sávio P. R.; Clarke, Robin T.

    1993-06-01

    This paper discusses aspects of the calculation of likelihood-based confidence intervals for T-year floods, with particular reference to (1) the two-parameter gamma distribution; (2) the Gumbel distribution; (3) the two-parameter log-normal distribution, and other distributions related to the normal by Box-Cox transformations. Calculation of the confidence limits is straightforward using the Nelder-Mead algorithm with a constraint incorporated, although care is necessary to ensure convergence either of the Nelder-Mead algorithm, or of the Newton-Raphson calculation of maximum-likelihood estimates. Methods are illustrated using records from 18 gauging stations in the basin of the River Itajai-Acu, State of Santa Catarina, southern Brazil. A small and restricted simulation compared likelihood-based confidence limits with those given by use of the central limit theorem; for the same confidence probability, the confidence limits of the simulation were wider than those of the central limit theorem, which failed more frequently to contain the true quantile being estimated. The paper discusses possible applications of likelihood-based confidence intervals in other areas of hydrological analysis.
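For the Gumbel case discussed in the abstract, a profile-likelihood confidence interval for the T-year quantile can be sketched with SciPy by reparameterizing the model in terms of the quantile itself and inverting the likelihood-ratio statistic. The bracketing widths and optimizer bounds below are ad hoc assumptions for illustration; the paper used the Nelder-Mead algorithm with a constraint rather than this exact scheme:

```python
import numpy as np
from scipy.optimize import minimize_scalar, brentq
from scipy.stats import gumbel_r, chi2

def gumbel_loglik(x, mu, beta):
    """Gumbel (EV1) log-likelihood."""
    return gumbel_r.logpdf(x, loc=mu, scale=beta).sum()

def profile_ci_quantile(x, T, level=0.95):
    """Profile-likelihood confidence interval for the T-year Gumbel
    quantile q_T = mu - beta * log(-log(1 - 1/T))."""
    x = np.asarray(x, float)
    c = np.log(-np.log(1.0 - 1.0 / T))
    mu_hat, beta_hat = gumbel_r.fit(x)          # maximum-likelihood fit
    q_hat = mu_hat - beta_hat * c
    l_max = gumbel_loglik(x, mu_hat, beta_hat)
    cut = chi2.ppf(level, df=1) / 2.0           # likelihood-ratio cutoff

    def profile(q):                             # maximize over beta, q fixed
        res = minimize_scalar(
            lambda b: -gumbel_loglik(x, q + b * c, b),
            bounds=(1e-6 * beta_hat, 100.0 * beta_hat), method="bounded")
        return -res.fun

    def g(q):                                   # roots of g bracket the CI
        return l_max - profile(q) - cut

    lo = brentq(g, q_hat - 10.0 * beta_hat, q_hat)
    hi = brentq(g, q_hat, q_hat + 10.0 * beta_hat)
    return q_hat, (lo, hi)
```

Unlike central-limit-theorem intervals, the resulting limits are generally asymmetric about the estimate, consistent with the paper's finding that likelihood-based limits are wider.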

  13. One-way ANOVA based on interval information

    NASA Astrophysics Data System (ADS)

    Hesamian, Gholamreza

    2016-08-01

    This paper deals with extending the one-way analysis of variance (ANOVA) to the case where the observed data are represented by closed intervals rather than real numbers. In this approach, a notion of interval random variable is first introduced. In particular, a normal distribution with interval parameters is introduced to investigate hypotheses about the equality of interval means or to test the homogeneity-of-interval-variances assumption. Moreover, the least significant difference (LSD) method for multiple comparison of interval means is developed for when the null hypothesis about the equality of means is rejected. Then, at a given interval significance level, an index is applied to compare the interval test statistic and the related interval critical value as a criterion to accept or reject the null interval hypothesis of interest. Finally, the decision-making method yields degrees of acceptance or rejection of the interval hypotheses. An applied example is used to show the performance of this method.

  14. Algorithm for Compressing Time-Series Data

    NASA Technical Reports Server (NTRS)

    Hawkins, S. Edward, III; Darlington, Edward Hugo

    2012-01-01

    An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
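The per-block fit-and-evaluate round trip described above can be sketched with NumPy's Chebyshev routines. The block length and polynomial degree here are illustrative choices, not values from the flight implementation, and the sketch omits the quantization and error-control logic a real compressor would need:

```python
import numpy as np

def compress_block(block, degree):
    """Fit a Chebyshev series to one fitting interval; the coefficient
    vector (degree + 1 numbers) is the compressed representation."""
    t = np.linspace(-1.0, 1.0, len(block))      # map the interval to [-1, 1]
    return np.polynomial.chebyshev.chebfit(t, block, degree)

def decompress_block(coeffs, n):
    """Reconstruct n samples of the fitting interval from the series."""
    t = np.linspace(-1.0, 1.0, n)
    return np.polynomial.chebyshev.chebval(t, coeffs)
```

For a smooth 64-sample block, a degree-8 fit stores 9 coefficients in place of 64 samples, a compression factor of about 7, and the near-uniform ("equal error") distribution of the residual over the interval is what keeps the worst-case loss small.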

  15. Mechanism-based pharmacokinetic-pharmacodynamic modeling of the antinociceptive effect of buprenorphine in healthy volunteers.

    PubMed

    Yassen, Ashraf; Olofsen, Erik; Romberg, Raymonda; Sarton, Elise; Danhof, Meindert; Dahan, Albert

    2006-06-01

    The objective of this investigation was to characterize the pharmacokinetic-pharmacodynamic relation of buprenorphine's antinociceptive effect in healthy volunteers. Data on the time course of the antinociceptive effect after intravenous administration of 0.05-0.6 mg/70 kg buprenorphine in healthy volunteers was analyzed in conjunction with plasma concentrations by nonlinear mixed-effects analysis. A three-compartment pharmacokinetic model best described the concentration time course. Four structurally different pharmacokinetic-pharmacodynamic models were evaluated for their appropriateness to describe the time course of buprenorphine's antinociceptive effect: (1) E(max) model with an effect compartment model, (2) "power" model with an effect compartment model, (3) receptor association-dissociation model with a linear transduction function, and (4) combined biophase equilibration/receptor association-dissociation model with a linear transduction function. The latter pharmacokinetic-pharmacodynamic model described the time course of effect best and was used to explain time dependencies in buprenorphine's pharmacodynamics. The model converged, yielding precise estimation of the parameters characterizing hysteresis and the relation between relative receptor occupancy and antinociceptive effect. The rate constant describing biophase equilibration (k(eo)) was 0.00447 min(-1) (95% confidence interval, 0.00299-0.00595 min(-1)). The receptor dissociation rate constant (k(off)) was 0.0785 min(-1) (95% confidence interval, 0.0352-0.122 min(-1)), and k(on) was 0.0631 ml . ng(-1) . min(-1) (95% confidence interval, 0.0390-0.0872 ml . ng(-1) . min(-1)). This is consistent with observations in rats, suggesting that the rate-limiting step in the onset and offset of the antinociceptive effect is biophase distribution rather than slow receptor association-dissociation. 
In the dose range studied, no saturation of receptor occupancy occurred, which explains the lack of a ceiling effect for antinociception.

  16. Temporal and spatial distribution of red tide outbreaks in the Yangtze River Estuary and adjacent waters, China.

    PubMed

    Liu, Lusan; Zhou, Juan; Zheng, Binghui; Cai, Wenqian; Lin, Kuixuan; Tang, Jingliang

    2013-07-15

    Between 1972 and 2009, evidence of red tide outbreaks in the Yangtze River Estuary and adjacent waters was collected. A geographic information system (GIS) was used to analyze the temporal and spatial distribution of these red tides and to map the distribution of these events. The results show the following. (1) There were three red tide-prone areas: outside the Yangtze River Estuary and the eastern coast of Sheshan, the Huaniaoshan-Shengshan-Gouqi waters, and the Zhoushan areas and eastern coast of Zhujiajian. In these areas, red tides occurred 174 times in total, 25 of which were larger than 1000 km(2) in areal extent. After 2000, the frequency of red tide outbreaks increased significantly. (2) During the months of May and June, red tide occurrence in these areas was 51% and 20%, respectively. (3) Outbreaks of the dominant red tide plankton species Prorocentrum donghaiense, Skeletonema costatum, Prorocentrum dentatum, and Noctiluca scintillans occurred 38, 35, 15, and 10 times, respectively, during the study interval. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. The time-evolution of DCIS size distributions with applications to breast cancer growth and progression.

    PubMed

    Dowty, James G; Byrnes, Graham B; Gertig, Dorota M

    2014-12-01

    Ductal carcinoma in situ (DCIS) lesions are non-invasive tumours of the breast that are thought to precede most invasive breast cancers (IBCs). As individual DCIS lesions are initiated, grow and invade (i.e. become IBC), the size distribution of the DCIS lesions present in a given human population will evolve. We derive a differential equation governing this evolution and show, for given assumptions about growth and invasion, that there is a unique distribution which does not vary with time. Further, we show that any initial distribution converges to this stationary distribution exponentially quickly. Therefore, it is reasonable to assume that the stationary distribution governs the size of DCIS lesions in human populations which are relatively stable with respect to the determinants of breast cancer. Based on this assumption and the size data of 110 DCIS lesions detected in a mammographic screening programme between 1993 and 2000, we produce maximum likelihood estimates for certain growth and invasion parameters. Assuming that DCIS size is proportional to a positive power p of the time since tumour initiation, we estimate p to be 0.50 with a 95% confidence interval of (0.35, 0.71). Therefore, we estimate that DCIS lesions follow a square-root growth law and hence that they grow rapidly when small and relatively slowly when large. Our approach and results should be useful for other mathematical studies of cancer, especially those investigating biological mechanisms of invasion. © The Authors 2013. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.

  18. A dynamic optimization model of the diel vertical distribution of a pelagic planktivorous fish

    NASA Astrophysics Data System (ADS)

    Rosland, Rune; Giske, Jarl

    A stochastic dynamic optimization model for the diel depth distribution of juveniles and adults of the mesopelagic planktivore Maurolicus muelleri (Gmelin) is developed and used for a winter situation. Observations from Masfjorden, western Norway, reveal differences in vertical distribution, growth and mortality between juveniles and adults in January. Juveniles stay within the upper 100m with high feeding rates, while adults stay within the 100-150m zone with very low feeding rates during the diel cycle. The difference in depth profitability is assumed to be caused by age-dependent processes, and are calculated from a mechanistic model for visual feeding. The environment is described as a set of habitats represented by discrete depth intervals along the vertical axis, differing with respect to light intensity, food abundance, predation risk and temperature. The short time interval (24h) allows fitness to be linearly related to growth (feeding), assuming that growth increases the future reproductive output of the fish. Optimal depth position is calculated from balancing feeding opportunity against mortality risk, where the fitness reward gained by feeding is weighted against the danger of being killed by a predator. A basic run is established, and the model is validated by comparing predictions and observations. The sensitivity for different parameter values is also tested. The modelled vertical distributions and feeding patterns of juvenile and adult fish correspond well with the observations, and the assumption of age differences in mortality-feeding trade-offs seems adequate to explain the different depth profitability of the two age groups. The results indicate a preference for crepuscular feeding activity of the juveniles, and the vertical distribution of zooplankton seems to be the most important environmental factor regulating the adult depth position during the winter months in Masfjorden.

  19. Development of schooling behaviour during the downstream migration of Atlantic salmon Salmo salar smolts in a chalk stream.

    PubMed

    Riley, W D; Ibbotson, A T; Maxwell, D L; Davison, P I; Beaumont, W R C; Ives, M J

    2014-10-01

    The downstream migratory behaviour of wild Atlantic salmon Salmo salar smolts was monitored using passive integrated transponder (PIT) antennae systems over 10 years in the lower reaches of a small chalk stream in southern England, U.K. The timing of smolt movements and the likely occurrence of schooling were investigated and compared to previous studies. In nine of the 10 consecutive years of study, the observed diel downstream patterns of S. salar smolt migration appeared to be synchronized with the onset of darkness. The distribution of time intervals between successive nocturnal detections of PIT-tagged smolts was as expected if generated randomly from observed hourly rates. There were, however, significantly more short intervals than expected for smolts detected migrating during the day. For each year from 2006 to 2011, the observed 10th percentile of the daytime intervals was <4 s, compared to ≥55 s for the simulated random times, indicating greater incidence of groups of smolts. Groups with the shortest time intervals between successive PIT tag detections originated from numerous parr tagging sites (used as a proxy for relatedness). The results suggest that the ecological drivers influencing daily smolt movements in the lower reaches of chalk stream catchments are similar to those previously reported at the onset of migration for smolts leaving their natal tributaries; that smolts detected migrating during the night are moving independently following initiation by a common environmental factor (presumably darkness), whereas those detected migrating during the day often move in groups, and that such schools may not be site (kin)-structured. The importance of understanding smolt migratory behaviour is considered with reference to stock monitoring programmes and enhancing downstream passage past barriers. © 2014 Crown copyright. Journal of Fish Biology © 2014 The Fisheries Society of the British Isles.
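The randomization check described above (comparing observed inter-detection intervals against intervals generated at random from the observed hourly detection rates) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the hourly counts are hypothetical.

```python
import random

def interval_percentile(times, p=0.10):
    """Percentile of the intervals between successive sorted detection times."""
    ts = sorted(times)
    gaps = sorted(b - a for a, b in zip(ts, ts[1:]))
    return gaps[int(p * (len(gaps) - 1))]

def simulate_from_hourly_rates(counts_per_hour, seed=1):
    """Null model: place each hour's detections uniformly at random in that hour (s)."""
    rng = random.Random(seed)
    times = []
    for hour, n in enumerate(counts_per_hour):
        times += [3600 * hour + rng.uniform(0, 3600) for _ in range(n)]
    return times

# Hypothetical daytime counts for three hours; an observed 10th percentile much
# smaller than the simulated one would indicate grouped (schooling) detections.
sim = simulate_from_hourly_rates([20, 35, 15])
print(interval_percentile(sim) >= 0)
```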

  20. The effect of methylphenidate and rearing environment on behavioral inhibition in adult male rats.

    PubMed

    Hill, Jade C; Covarrubias, Pablo; Terry, Joel; Sanabria, Federico

    2012-01-01

    The ability to withhold reinforced responses (behavioral inhibition) is impaired in various psychiatric conditions including Attention Deficit Hyperactivity Disorder (ADHD). Methodological and analytical limitations have constrained our understanding of the effects of pharmacological and environmental factors on behavioral inhibition. The aim of this study was to determine the effects of acute methylphenidate (MPH) administration and rearing conditions (isolated vs. pair-housed) on behavioral inhibition in adult rats. Inhibitory capacity was evaluated using two response-withholding tasks, differential reinforcement of low rates (DRL) and fixed minimum interval (FMI) schedules of reinforcement. Both tasks made sugar pellets contingent on intervals longer than 6 s between consecutive responses. Inferences on inhibitory and timing capacities were drawn from the distribution of withholding times (interresponse times, or IRTs). MPH increased the number of intervals produced in both tasks. Estimates of behavioral inhibition increased with MPH dose in FMI and with social isolation in DRL. Nonetheless, burst responding in DRL and the divergence of DRL data relative to past studies, among other limitations, undermined the reliability of DRL data as a basis for inferences on behavioral inhibition. Inhibitory capacity was more precisely estimated from FMI than from DRL performance. Based on FMI data, MPH, but not a socially enriched environment, appears to improve inhibitory capacity. The highest dose of MPH tested, 8 mg/kg, did not reduce inhibitory capacity but reduced responsiveness to waiting contingencies. These results support the use of the FMI schedule, complemented with appropriate analytic techniques, for the assessment of behavioral inhibition in animal models.

  1. Effect of specialized diagnostic assessment units on the time to diagnosis in screen-detected breast cancer patients.

    PubMed

    Jiang, L; Gilbert, J; Langley, H; Moineddin, R; Groome, P A

    2015-05-26

    The duration of the cancer diagnostic process has considerable influence on patients' psychosocial well-being. Breast diagnostic assessment units (DAUs) in Ontario, Canada, are designed to improve the quality and timeliness of care during a breast cancer diagnosis. We compared the diagnostic duration of patients diagnosed through a DAU vs usual care (UC). We conducted a retrospective population-based cohort study of 2499 screen-detected breast cancers (2011) using administrative health-care databases linked to the Ontario Cancer Registry. The diagnostic interval was measured from the initial screen to cancer diagnosis. Diagnostic assessment unit use was based on the biopsy and/or surgery hospital. We compared the length of the diagnostic interval between the DAU groups using multivariable quantile regression. Diagnostic assessment units had a higher proportion of patients diagnosed within the 7-week target compared with UC (79.1% vs 70.2%, P<0.001). The median time to diagnosis at DAUs was 26 days, which was 9 days shorter compared with UC (95% CI: 6.4-11.6). This effect was reduced to 8.3 days after adjusting for all study covariates. Adjusted DAU differences were similar at the 75th and 90th percentiles of the diagnostic interval distribution. Diagnosis through an Ontario DAU was associated with a reduced time to diagnosis for screen-detected breast cancer patients, which likely reduces the anxiety and distress associated with waiting for a diagnosis.
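The study above compared group medians with multivariable quantile regression. A much simpler analogue that conveys the idea is a percentile-bootstrap confidence interval for the difference in median diagnostic interval between the two pathways; the diagnostic durations below are invented for illustration.

```python
import random

def median(xs):
    """Median of a list (no dependencies)."""
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

def bootstrap_median_diff_ci(uc, dau, n_boot=2000, seed=0):
    """95% percentile-bootstrap CI for median(uc) - median(dau), i.e. days saved."""
    rng = random.Random(seed)
    diffs = sorted(
        median([rng.choice(uc) for _ in uc]) - median([rng.choice(dau) for _ in dau])
        for _ in range(n_boot)
    )
    return diffs[int(0.025 * n_boot)], diffs[int(0.975 * n_boot)]

# Invented diagnostic durations (days) for the usual-care and DAU pathways.
uc = [20, 35, 41, 28, 55, 33, 47, 30, 62, 38]
dau = [18, 22, 31, 26, 24, 29, 21, 27, 33, 25]
lo, hi = bootstrap_median_diff_ci(uc, dau)
print(lo <= hi)
```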

  2. Intermittency in electric brain activity in the perception of ambiguous images

    NASA Astrophysics Data System (ADS)

    Kurovskaya, Maria K.; Runnova, Anastasiya E.; Zhuravlev, Maxim O.; Grubov, Vadim V.; Koronovskii, Alexey A.; Pavlov, Alexey N.; Pisarchik, Alexander N.

    2017-04-01

    This paper is devoted to the study of intermittency during the perception of the bistable Necker cube image, a good example of an ambiguous object, with simultaneous measurement of EEG. Distributions of the lengths of the time intervals corresponding to left-oriented and right-oriented cube perception have been obtained. The EEG data were analyzed using the continuous wavelet transform, and it was shown that the destruction of the alpha rhythm, with the accompanying generation of high-frequency oscillations, can serve as a marker of the Necker cube recognition process.

  3. Event-Triggered Distributed Average Consensus Over Directed Digital Networks With Limited Communication Bandwidth.

    PubMed

    Li, Huaqing; Chen, Guo; Huang, Tingwen; Dong, Zhaoyang; Zhu, Wei; Gao, Lan

    2016-12-01

    In this paper, we consider the event-triggered distributed average consensus of discrete-time first-order multiagent systems with limited communication data rate and general directed network topology. In the framework of a digital communication network, each agent has a real-valued state but can only exchange finite-bit binary symbolic data sequences with its neighborhood agents at each time step, due to the digital communication channels with energy constraints. A novel event-triggered dynamic encoder and decoder for each agent are designed, based on which a distributed control algorithm is proposed. A scheme that selects the number of channel quantization levels (number of bits) at each time step is developed, under which all the quantizers in the network are never saturated. The convergence rate of consensus is explicitly characterized and is related to the scale of the network, the maximum degree of nodes, the network structure, the scaling function, the quantization interval, the initial states of agents, the control gain and the event gain. It is also found that under the designed event-triggered protocol, by selecting suitable parameters, for any directed digital network containing a spanning tree, distributed average consensus can always be achieved with an exponential convergence rate based on merely one bit of information exchange between each pair of adjacent agents at each time step. Two simulation examples are provided to illustrate the feasibility of the presented protocol and the correctness of the theoretical results.
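The protocol above layers event-triggering and finite-bit quantization on top of standard average consensus. Stripped of those features, the underlying consensus iteration is the familiar update sketched below; the undirected ring topology, gain, and initial values are illustrative only and are a simplification of the paper's directed digital network.

```python
def consensus_step(x, neighbors, eps):
    """One synchronous consensus update: x_i += eps * sum_j (x_j - x_i)."""
    return [xi + eps * sum(x[j] - xi for j in neighbors[i])
            for i, xi in enumerate(x)]

# Undirected ring of 4 agents; each agent only sees its two neighbors.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
x = [4.0, 0.0, 2.0, 6.0]          # initial states; average is 3.0
for _ in range(200):
    x = consensus_step(x, neighbors, eps=0.25)
print([round(v, 3) for v in x])   # all agents converge near the average
```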

  4. Electrocardiogram reference intervals for clinically normal wild-born chimpanzees (Pan troglodytes).

    PubMed

    Atencia, Rebeca; Revuelta, Luis; Somauroo, John D; Shave, Robert E

    2015-08-01

    To generate reference intervals for ECG variables in clinically normal chimpanzees (Pan troglodytes), 100 clinically normal (51 young [< 10 years old] and 49 adult [≥ 10 years old]) wild-born chimpanzees were studied. Electrocardiograms collected between 2009 and 2013 at the Tchimpounga Chimpanzee Rehabilitation Centre were assessed to determine heart rate, PR interval, QRS duration, QT interval, QRS axis, P axis, and T axis. Electrocardiographic characteristics for left ventricular hypertrophy (LVH) and the morphology of the ST segment, T wave, and QRS complex were identified. Reference intervals for young and adult animals were calculated as mean ± 1.96·SD for normally distributed data and as 5th to 95th percentiles for data not normally distributed. Differences between age groups were assessed by use of unpaired Student t tests. Reference intervals were generated for young and adult wild-born chimpanzees. Most animals had sinus rhythm with small or normal P wave morphology; 24 of 51 (47%) young chimpanzees and 30 of 49 (61%) adult chimpanzees had evidence of LVH as determined on the basis of criteria for humans. Cardiac disease has been implicated as the major cause of death in captive chimpanzees. Species-specific ECG reference intervals for chimpanzees may aid in the diagnosis and treatment of animals with, or at risk of developing, heart disease. Chimpanzees with ECG characteristics outside of these intervals should be considered for follow-up assessment and regular cardiac monitoring.
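The two interval rules quoted above (mean ± 1.96·SD for normally distributed variables, 5th to 95th percentiles otherwise) can be sketched as follows; the QRS durations in the example are hypothetical.

```python
import statistics

def reference_interval(values, normal=True):
    """Reference interval: mean +/- 1.96*SD if ~normal, else 5th-95th percentiles."""
    if normal:
        m = statistics.mean(values)
        sd = statistics.stdev(values)
        return m - 1.96 * sd, m + 1.96 * sd
    s = sorted(values)
    n = len(s)
    return s[int(0.05 * (n - 1))], s[int(0.95 * (n - 1))]

# Hypothetical QRS durations (ms) for illustration only.
lo, hi = reference_interval([88, 92, 95, 90, 97, 85, 93, 91])
print(round(lo, 1), round(hi, 1))
```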

  5. Four applications of permutation methods to testing a single-mediator model.

    PubMed

    Taylor, Aaron B; MacKinnon, David P

    2012-09-01

    Four applications of permutation tests to the single-mediator model are described and evaluated in this study. Permutation tests work by rearranging data in many possible ways in order to estimate the sampling distribution for the test statistic. The four applications to mediation evaluated here are the permutation test of ab, the permutation joint significance test, and the noniterative and iterative permutation confidence intervals for ab. A Monte Carlo simulation study was used to compare these four tests with the four best available tests for mediation found in previous research: the joint significance test, the distribution of the product test, and the percentile and bias-corrected bootstrap tests. We compared the different methods on Type I error, power, and confidence interval coverage. The noniterative permutation confidence interval for ab was the best performer among the new methods. It successfully controlled Type I error, had power nearly as good as the most powerful existing methods, and had better coverage than any existing method. The iterative permutation confidence interval for ab had lower power than some existing methods, but it performed better than any other method in terms of coverage. The permutation confidence interval methods are recommended when estimating a confidence interval is a primary concern. SPSS and SAS macros that estimate these confidence intervals are provided.

  6. Scaling and memory in volatility return intervals in financial markets

    PubMed Central

    Yamasaki, Kazuko; Muchnik, Lev; Havlin, Shlomo; Bunde, Armin; Stanley, H. Eugene

    2005-01-01

    For both stock and currency markets, we study the return intervals τ between the daily volatilities of the price changes that are above a certain threshold q. We find that the distribution function Pq(τ) scales with the mean return interval τ̄ as Pq(τ) = (1/τ̄) f(τ/τ̄). The scaling function f(x) is similar in form for all seven stocks and for all seven currency databases analyzed, and f(x) is consistent with a power-law form, f(x) ∼ x-γ with γ ≈ 2. We also quantify how the conditional distribution Pq(τ|τ0) depends on the previous return interval τ0 and find that small (or large) return intervals are more likely to be followed by small (or large) return intervals. This “clustering” of the volatility return intervals is a previously unrecognized phenomenon that we relate to the long-term correlations known to be present in the volatility. PMID:15980152
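The basic construction of return intervals and their rescaling by the mean interval (the collapse Pq(τ) = (1/τ̄) f(τ/τ̄)) can be illustrated with a toy series; the volatility values and threshold below are invented.

```python
def return_intervals(volatility, q):
    """Gaps (in days) between successive days whose volatility exceeds threshold q."""
    hits = [t for t, v in enumerate(volatility) if v > q]
    return [b - a for a, b in zip(hits, hits[1:])]

def rescale_by_mean(intervals):
    """Divide by the mean interval so distributions for different q can collapse."""
    m = sum(intervals) / len(intervals)
    return [tau / m for tau in intervals]

vol = [0.1, 0.9, 0.2, 0.8, 0.1, 0.1, 1.2, 0.7]   # toy daily volatility series
iv = return_intervals(vol, q=0.6)
print(iv, rescale_by_mean(iv))  # [2, 3, 1] [1.0, 1.5, 0.5]
```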

  7. Scaling and memory in volatility return intervals in financial markets

    NASA Astrophysics Data System (ADS)

    Yamasaki, Kazuko; Muchnik, Lev; Havlin, Shlomo; Bunde, Armin; Stanley, H. Eugene

    2005-06-01

    For both stock and currency markets, we study the return intervals τ between the daily volatilities of the price changes that are above a certain threshold q. We find that the distribution function Pq(τ) scales with the mean return interval τ̄ as Pq(τ) = (1/τ̄) f(τ/τ̄). The scaling function f(x) is similar in form for all seven stocks and for all seven currency databases analyzed, and f(x) is consistent with a power-law form, f(x) ∼ x-γ with γ ≈ 2. We also quantify how the conditional distribution Pq(τ|τ0) depends on the previous return interval τ0 and find that small (or large) return intervals are more likely to be followed by small (or large) return intervals. This “clustering” of the volatility return intervals is a previously unrecognized phenomenon that we relate to the long-term correlations known to be present in the volatility. Author contributions: S.H. and H.E.S. designed research; K.Y., L.M., S.H., and H.E.S. performed research; A.B. contributed new reagents/analytic tools; A.B. analyzed data; and S.H. wrote the paper. Abbreviations: pdf, probability density function; S&P 500, Standard and Poor's 500 Index; USD, U.S. dollar; JPY, Japanese yen; SEK, Swedish krona.

  8. Bootstrap Prediction Intervals in Non-Parametric Regression with Applications to Anomaly Detection

    NASA Technical Reports Server (NTRS)

    Kumar, Sricharan; Srivistava, Ashok N.

    2012-01-01

    Prediction intervals provide a measure of the probable interval in which the outputs of a regression model can be expected to occur. Subsequently, these prediction intervals can be used to determine if the observed output is anomalous or not, conditioned on the input. In this paper, a procedure for determining prediction intervals for outputs of nonparametric regression models using bootstrap methods is proposed. Bootstrap methods allow for a non-parametric approach to computing prediction intervals with no specific assumptions about the sampling distribution of the noise or the data. The asymptotic fidelity of the proposed prediction intervals is theoretically proved. Subsequently, the validity of the bootstrap based prediction intervals is illustrated via simulations. Finally, the bootstrap prediction intervals are applied to the problem of anomaly detection on aviation data.
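As a hedged sketch of the general idea (not the paper's exact procedure, whose asymptotic guarantees depend on the details of the resampling), a residual bootstrap around a simple nonparametric regressor, here k-nearest-neighbour, yields a percentile prediction interval; the data are invented.

```python
import random

def knn_predict(xs, ys, x0, k=3):
    """k-nearest-neighbour regression estimate at x0."""
    order = sorted(range(len(xs)), key=lambda i: abs(xs[i] - x0))
    return sum(ys[i] for i in order[:k]) / k

def bootstrap_prediction_interval(xs, ys, x0, k=3, n_boot=1000, seed=0):
    """Percentile interval: fitted value at x0 plus resampled in-sample residuals."""
    rng = random.Random(seed)
    resid = [y - knn_predict(xs, ys, x, k) for x, y in zip(xs, ys)]
    fit0 = knn_predict(xs, ys, x0, k)
    sims = sorted(fit0 + rng.choice(resid) for _ in range(n_boot))
    return sims[int(0.025 * n_boot)], sims[int(0.975 * n_boot)]

xs = [0, 1, 2, 3, 4]
ys = [0.1, 0.9, 2.2, 2.8, 4.1]   # noisy line, invented
lo, hi = bootstrap_prediction_interval(xs, ys, x0=2)
print(lo <= hi)
```

An observed output falling outside (lo, hi) would then be flagged as anomalous, conditioned on the input.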

  9. Real-Time Station Grouping under Dynamic Traffic for IEEE 802.11ah

    PubMed Central

    Tian, Le; Latré, Steven

    2017-01-01

    IEEE 802.11ah, marketed as Wi-Fi HaLow, extends Wi-Fi to the sub-1 GHz spectrum. Through a number of physical layer (PHY) and media access control (MAC) optimizations, it aims to bring greatly increased range, energy-efficiency, and scalability. This makes 802.11ah the perfect candidate for providing connectivity to Internet of Things (IoT) devices. One of these new features, referred to as the Restricted Access Window (RAW), focuses on improving scalability in highly dense deployments. RAW divides stations into groups and reduces contention and collisions by only allowing channel access to one group at a time. However, the standard does not dictate how to determine the optimal RAW grouping parameters. The optimal parameters depend on the current network conditions, and it has been shown that incorrect configuration severely impacts throughput, latency and energy efficiency. In this paper, we propose a traffic-adaptive RAW optimization algorithm (TAROA) to adapt the RAW parameters in real time based on the current traffic conditions, optimized for sensor networks in which each sensor transmits packets with a certain (predictable) frequency and may change the transmission frequency over time. The TAROA algorithm is executed at each target beacon transmission time (TBTT), and it first estimates the packet transmission interval of each station only based on packet transmission information obtained by access point (AP) during the last beacon interval. Then, TAROA determines the RAW parameters and assigns stations to RAW slots based on this estimated transmission frequency. The simulation results show that, compared to enhanced distributed channel access/distributed coordination function (EDCA/DCF), the TAROA algorithm can highly improve the performance of IEEE 802.11ah dense networks in terms of throughput, especially when hidden nodes exist, although it does not always achieve better latency performance. 
This paper contributes a practical approach to optimizing RAW grouping under dynamic traffic in real time, which is a major leap towards applying the RAW mechanism in real-life IoT networks. PMID:28677617

  10. Real-Time Station Grouping under Dynamic Traffic for IEEE 802.11ah.

    PubMed

    Tian, Le; Khorov, Evgeny; Latré, Steven; Famaey, Jeroen

    2017-07-04

    IEEE 802.11ah, marketed as Wi-Fi HaLow, extends Wi-Fi to the sub-1 GHz spectrum. Through a number of physical layer (PHY) and media access control (MAC) optimizations, it aims to bring greatly increased range, energy-efficiency, and scalability. This makes 802.11ah the perfect candidate for providing connectivity to Internet of Things (IoT) devices. One of these new features, referred to as the Restricted Access Window (RAW), focuses on improving scalability in highly dense deployments. RAW divides stations into groups and reduces contention and collisions by only allowing channel access to one group at a time. However, the standard does not dictate how to determine the optimal RAW grouping parameters. The optimal parameters depend on the current network conditions, and it has been shown that incorrect configuration severely impacts throughput, latency and energy efficiency. In this paper, we propose a traffic-adaptive RAW optimization algorithm (TAROA) to adapt the RAW parameters in real time based on the current traffic conditions, optimized for sensor networks in which each sensor transmits packets with a certain (predictable) frequency and may change the transmission frequency over time. The TAROA algorithm is executed at each target beacon transmission time (TBTT), and it first estimates the packet transmission interval of each station only based on packet transmission information obtained by access point (AP) during the last beacon interval. Then, TAROA determines the RAW parameters and assigns stations to RAW slots based on this estimated transmission frequency. The simulation results show that, compared to enhanced distributed channel access/distributed coordination function (EDCA/DCF), the TAROA algorithm can highly improve the performance of IEEE 802.11ah dense networks in terms of throughput, especially when hidden nodes exist, although it does not always achieve better latency performance. 
This paper contributes a practical approach to optimizing RAW grouping under dynamic traffic in real time, which is a major leap towards applying the RAW mechanism in real-life IoT networks.
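The first TAROA step, estimating each station's packet transmission interval from the timestamps the AP observed during the last beacon interval, might look like the following sketch. The station names, timestamps, and units (milliseconds) are hypothetical, and this is an illustration of the idea rather than the algorithm's exact estimator.

```python
def estimate_intervals(detections):
    """Mean inter-packet interval per station from timestamps seen in one beacon."""
    est = {}
    for station, times in detections.items():
        ts = sorted(times)
        gaps = [b - a for a, b in zip(ts, ts[1:])]
        # None marks stations seen too rarely to estimate an interval this beacon.
        est[station] = sum(gaps) / len(gaps) if gaps else None
    return est

obs = {"sta1": [0, 10, 20], "sta2": [5, 25], "sta3": [12]}   # ms, hypothetical
print(estimate_intervals(obs))  # {'sta1': 10.0, 'sta2': 20.0, 'sta3': None}
```

Stations would then be assigned to RAW slots so that each group's expected traffic fits its share of the channel time.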

  11. Department of Defense Precise Time and Time Interval program improvement plan

    NASA Technical Reports Server (NTRS)

    Bowser, J. R.

    1981-01-01

    The United States Naval Observatory is responsible for ensuring uniformity in precise time and time interval operations, including measurements, the establishment of overall DOD requirements for time and time interval, and the accomplishment of objectives requiring precise time and time interval with minimum cost. An overview of the objectives, the approach to the problem, the schedule, and a status report are presented, including significant findings relative to organizational relationships, current directives, principal PTTI users, and future requirements as currently identified by the users.

  12. The lymphoscintigraphic manifestation of (99m)Tc-dextran lymphatic imaging in primary intestinal lymphangiectasia.

    PubMed

    Wen, Zhe; Tong, Guansheng; Liu, Yong; Meeks, Jacqui K; Ma, Daqing; Yang, Jigang

    2014-05-01

    The aim of this study was to analyze the imaging characteristics of (99m)Tc-dextran ((99m)Tc-DX) lymphatic imaging in the diagnosis of primary intestinal lymphangiectasia (PIL). Forty-one patients were diagnosed as having PIL, with the diagnosis subsequently confirmed by laparotomy, endoscopy, biopsy, or capsule colonoscopy. Nineteen patients were male and 22 were female. A whole-body (99m)Tc-DX scan was performed at 10 min, 1 h, 3 h, and 6 h after injection. The 10 min and 1 h postinjection time points were considered the early phase, the 3 h time point the middle phase, and the 6 h time point the delayed phase. The imaging characteristics of (99m)Tc-DX lymphatic imaging in PIL were of five different types: (i) dynamic radioactivity in the intestine, with radioactivity moving from the small intestine to the ascending and transverse colon; (ii) delayed dynamic radioactivity in the intestine, with no or little radioactivity distributed in the intestine in the early phase but significant radioactivity in the delayed phase; (iii) radioactivity distributed in both the intestine and the abdominal cavity; (iv) radioactivity distributed only in the abdominal cavity, with none in the intestine; and (v) no radioactivity distributed in either the intestine or the abdominal cavity. (99m)Tc-DX lymphatic imaging in PIL showed different imaging characteristics. Caution should be exercised in the diagnosis of PIL using lymphoscintigraphy. Lymphoscintigraphy is a safe and accurate examination method and a significant diagnostic tool in the diagnosis of PIL.

  13. Reliability of confidence intervals calculated by bootstrap and classical methods using the FIA 1-ha plot design

    Treesearch

    H. T. Schreuder; M. S. Williams

    2000-01-01

    In simulation sampling from forest populations using sample sizes of 20, 40, and 60 plots, respectively, confidence intervals based on the bootstrap (accelerated, percentile, and t-distribution based) were calculated and compared with classical t confidence intervals for mapped populations and subdomains within those populations. A 68.1 ha mapped...

  14. Measuring Skew in Average Surface Roughness as a Function of Surface Preparation

    NASA Technical Reports Server (NTRS)

    Stahl, Mark T.

    2015-01-01

    Characterizing surface roughness is important for predicting optical performance. Better measurement of surface roughness reduces grinding, saving both time and money, and allows the science requirements to be better defined. In this study, various materials are polished from a fine grind to a fine polish. Each sample's RMS surface roughness is measured at 81 locations in a 9x9 square grid using a Zygo white-light interferometer at regular intervals during the polishing process. Each data set is fit with various standard distributions and tested for goodness of fit. We show that the skew in the RMS data changes as a function of polishing time.
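Skew in such RMS data can be quantified with the standard third-moment coefficient; a minimal version, independent of the distribution fitting the authors performed, is sketched below. The roughness readings are hypothetical.

```python
def skewness(data):
    """Fisher-Pearson moment coefficient of skewness, g1 = m3 / m2**1.5."""
    n = len(data)
    mean = sum(data) / n
    m2 = sum((x - mean) ** 2 for x in data) / n
    m3 = sum((x - mean) ** 3 for x in data) / n
    return m3 / m2 ** 1.5

# Hypothetical RMS roughness readings (nm); one high outlier skews the set right.
print(skewness([5.1, 4.9, 5.0, 5.2, 4.8, 7.5]) > 0)
```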

  15. Generating Discrete Power-Law Distributions from a Death- Multiple Immigration Population Process

    NASA Astrophysics Data System (ADS)

    Matthews, J. O.; Jakeman, E.; Hopcraft, K. I.

    2003-04-01

    We consider the evolution of a simple population process governed by deaths and multiple immigrations that arrive with rates particular to their order. For a particular choice of rates, the equilibrium solution has a discrete power-law form. The model is a generalization of a process investigated previously where immigrants arrived in pairs [1]. The general properties of this model are discussed in a companion paper. The population is initiated with precisely M individuals present and evolves to an equilibrium distribution with a power-law tail. However, the power-law tail of the equilibrium distribution is established immediately, so that moments and correlation properties of the population are undefined for any non-zero time. The technique we develop to characterize this process utilizes external monitoring that counts the emigrants leaving the population in specified time intervals. This counting distribution also possesses a power-law tail for all sampling times, and the resulting time series exhibits two features worthy of note: first, a large variation in the strength of the signal, reflecting the power-law PDF; and second, intermittency of the emissions. We show that counting with a detector of finite dynamic range naturally regularizes the fluctuations, in effect `clipping' the events. All previously undefined characteristics, such as the mean, the autocorrelation, and the distributions of the time to the first event and of the time between events, become well defined and are derived. These properties, although obtained by discarding much data, nevertheless possess embedded power-law regimes that characterize the population in a way that is analogous to box-averaging determination of fractal dimension.

  16. On the Use of the Beta Distribution in Probabilistic Resource Assessments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olea, Ricardo A., E-mail: olea@usgs.gov

    2011-12-15

    The triangular distribution is a popular choice when it comes to modeling bounded continuous random variables. Its wide acceptance derives mostly from its simple analytic properties and the ease with which modelers can specify its three parameters through the extremes and the mode. On the negative side, hardly any real process follows a triangular distribution, which from the outset puts at a disadvantage any model employing triangular distributions. At a time when numerical techniques such as the Monte Carlo method are displacing analytic approaches in stochastic resource assessments, easy specification remains the most attractive characteristic of the triangular distribution. The beta distribution is another continuous distribution defined within a finite interval offering wider flexibility in style of variation, thus allowing consideration of models in which the random variables closely follow the observed or expected styles of variation. Despite its more complex definition, generation of values following a beta distribution is as straightforward as generating values following a triangular distribution, leaving the selection of parameters as the main impediment to practically considering beta distributions. This contribution intends to promote the acceptance of the beta distribution by explaining its properties and offering several suggestions to facilitate the specification of its two shape parameters. In general, given the same distributional parameters, use of the beta distribution in stochastic modeling may yield significantly different results, yet better estimates, than the triangular distribution.
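One common convention for deriving the two beta shape parameters from the same three quantities used to specify a triangular distribution (minimum, mode, maximum) is the PERT parameterization. This is an assumption for illustration, not necessarily one of the paper's own suggestions; the bounds and mode below are invented.

```python
import random

def pert_beta_params(lo, mode, hi, lam=4.0):
    """PERT convention (assumed here): (min, mode, max) -> beta shapes (alpha, beta)."""
    alpha = 1 + lam * (mode - lo) / (hi - lo)
    beta = 1 + lam * (hi - mode) / (hi - lo)
    return alpha, beta

def scaled_beta_sample(alpha, beta, lo, hi, n, seed=0):
    """Draw n values from beta(alpha, beta) rescaled to the interval [lo, hi]."""
    rng = random.Random(seed)
    return [lo + (hi - lo) * rng.betavariate(alpha, beta) for _ in range(n)]

a, b = pert_beta_params(10, 25, 70)   # invented resource bounds and mode
print(a, b)  # 2.0 4.0
sample = scaled_beta_sample(a, b, 10, 70, 5)
```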

  17. Load Segmentation for Convergence of Distribution Automation and Advanced Metering Infrastructure Systems

    NASA Astrophysics Data System (ADS)

    Pamulaparthy, Balakrishna; KS, Swarup; Kommu, Rajagopal

    2014-12-01

    Distribution automation (DA) applications are limited to the feeder level today, with zero visibility outside of the substation feeder and no reach down to the low-voltage distribution network level. This has become a major obstacle in realizing many automated functions and enhancing existing DA capabilities. Advanced metering infrastructure (AMI) systems are being widely deployed by utilities across the world, creating system-wide communications access to every monitoring and service point; these systems collect data from smart meters and sensors in short time intervals, in response to utility needs. The convergence of DA and AMI systems provides unique opportunities and capabilities for distribution grid modernization, with the DA system acting as a controller and the AMI system acting as feedback to the DA system, for which DA applications have to understand and use the AMI data selectively and effectively. In this paper, we propose a load segmentation method that helps the DA system accurately understand and use the AMI data for various automation applications, with a suitable case study on power restoration.

  18. 1/f oscillations in a model of moth populations oriented by diffusive pheromones

    NASA Astrophysics Data System (ADS)

    Barbosa, L. A.; Martins, M. L.; Lima, E. R.

    2005-01-01

    An individual-based model for the population dynamics of Spodoptera frugiperda in a homogeneous environment is proposed. The model involves moths feeding on plants, mating through an anemotaxis search (i.e., oriented by odor dispersed in a current of air), and dying due to resource competition or at a maximum age. As observed in the laboratory, the females release pheromones at exponentially distributed time intervals, and it is assumed that the ranges of the male flights follow a power-law distribution. Computer simulations of the model reveal the central role of the anemotaxis search in the persistence of the moth population. Such stationary populations are exponentially distributed in age, exhibit random temporal fluctuations with a 1/f spectrum, and self-organize into disordered spatial patterns with long-range correlations. In addition, the model results demonstrate that pest control through pheromone mass trapping is effective only if the amount of pheromone released by the traps decays much more slowly than the exponential distribution for calling females.
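The model's two sampling assumptions, exponentially distributed calling intervals and power-law distributed flight ranges, can both be drawn by inverse-transform sampling; the rate and exponent below are placeholders, not the paper's fitted values.

```python
import math
import random

def exponential_interval(rate, rng):
    """Waiting time between pheromone releases, Exp(rate), by inverse transform."""
    return -math.log(1.0 - rng.random()) / rate

def power_law_flight(alpha, x_min, rng):
    """Flight range with tail P(X > x) ~ x**(1 - alpha), x >= x_min (alpha > 1)."""
    return x_min * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0))

rng = random.Random(42)
calls = [exponential_interval(0.5, rng) for _ in range(3)]     # placeholder rate
flights = [power_law_flight(2.5, 1.0, rng) for _ in range(3)]  # placeholder exponent
```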

  19. The storage time, age, and erosion hazard of laterally accreted sediment on the floodplain of a simulated meandering river

    USGS Publications Warehouse

    Bradley, D. Nathan; Tucker, Gregory E.

    2013-01-01

    A sediment particle traversing the fluvial system may spend the majority of the total transit time at rest, stored in various sedimentary deposits. Floodplains are among the most important of these deposits, with the potential to store large amounts of sediment for long periods of time. The virtual velocity of a sediment grain depends strongly on the amount of time spent in storage, but little is known about sediment storage times. Measurements of floodplain vegetation age have suggested that storage times are exponentially distributed, a case that arises when all the sediment on a floodplain is equally vulnerable to erosion in a given interval. This assumption has been incorporated into sediment routing models, despite some evidence that younger sediment is more likely to be eroded from floodplains than older sediment. We investigate the relationship between sediment age and erosion, which we term the “erosion hazard,” with a model of a meandering river that constructs its floodplain by lateral accretion. We find that the erosion hazard decreases with sediment age, leading to a storage time distribution that is not exponential. We propose an alternate model that requires that channel motion is approximately diffusive and results in a heavy tailed distribution of storage time. The model applies to timescales over which the direction of channel motion is uncorrelated. We speculate that the lower end of this range of time is set by the meander cutoff timescale and the upper end is set by processes that limit the width of the meander belt.
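The heavy tail that arises when channel motion is approximately diffusive can be illustrated with the classic analogue of the first-return time of a simple symmetric random walk, whose distribution has a power-law tail. This is a conceptual sketch, not the authors' floodplain model.

```python
import random

def first_return_time(rng, max_steps=10000):
    """Steps until a simple symmetric random walk first returns to the origin."""
    pos = 0
    for t in range(1, max_steps + 1):
        pos += rng.choice((-1, 1))
        if pos == 0:
            return t
    return max_steps  # censored: the walk had not yet returned

rng = random.Random(7)
times = [first_return_time(rng) for _ in range(200)]
print(sorted(times)[100], max(times))  # median vs maximum; the tail is heavy
```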

  20. Quantitative retrieving forest ecological parameters based on remote sensing in Liping County of China

    NASA Astrophysics Data System (ADS)

    Tian, Qingjiu; Chen, Jing M.; Zheng, Guang; Xia, Xueqi; Chen, Junying

    2006-09-01

    Forest ecosystems are an important component of the terrestrial ecosystem and play an important role in global change. Aboveground biomass (AGB) of forest ecosystems is an important factor in global carbon cycle studies. The purpose of this study was to retrieve the yearly Net Primary Productivity (NPP) of forest from the 8-day-interval MODIS-LAI images of a year and to produce a yearly NPP distribution map. LAI, DBH (diameter at breast height), tree height, and tree age were measured in the field in 80 plots of Chinese fir, Masson pine, bamboo, broadleaf, and mixed forest in Liping County. Based on the DEM image and Landsat TM images acquired on May 14th, 2000, geometric correction and terrain correction were performed. In addition, the "6S" model was used to obtain the surface reflectance image. Then the correlation between Leaf Area Index (LAI) and Reduced Simple Ratio (RSR) was established. Combined with the landcover map and the forest stand map, the LAI, aboveground biomass, and tree age maps were produced. After that, the 8-day-interval LAI images of a year, meteorological data, soil data, the forest stand image, and the landcover image were input into the BEPS model to obtain the NPP spatial distribution. Finally, the yearly NPP spatial distribution map with 30 m spatial resolution was produced. The values in these forest ecological parameter distribution maps were quite consistent with the field measurements, so it is possible, feasible, and time-saving to estimate forest ecological parameters at a large scale using remote sensing.

  1. A survey of solar wind conditions at 5 AU: A tool for interpreting solar wind-magnetosphere interactions at Jupiter

    NASA Astrophysics Data System (ADS)

    Ebert, Robert; Bagenal, Fran; McComas, David; Fowler, Christopher

    2014-09-01

    We examine Ulysses solar wind and interplanetary magnetic field (IMF) observations at 5 AU for two ~13 month intervals during the rising and declining phases of solar cycle 23 and the predicted response of the Jovian magnetosphere during these times. The declining phase solar wind, composed primarily of corotating interaction regions and high-speed streams, was, on average, faster, hotter, less dense, and more Alfvénic than the rising phase solar wind, composed mainly of slow wind and interplanetary coronal mass ejections. Interestingly, none of the solar wind and IMF distributions reported here were bimodal, a feature used to explain the bimodal distribution of bow shock and magnetopause standoff distances observed at Jupiter. Instead, many of these distributions had extended, non-Gaussian tails that resulted in large standard deviations and mean values much larger than the medians. The distributions of predicted Jupiter bow shock and magnetopause standoff distances during these intervals were also not bimodal, with mean/median values larger during the declining phase by ~1 - 4%. These results provide data-derived solar wind and IMF boundary conditions at 5 AU for models aimed at studying solar wind-magnetosphere interactions at Jupiter and can support the science investigations of upcoming Jupiter system missions. Here, we provide expectations for Juno, which is scheduled to arrive at Jupiter in July 2016. If the long-term decline in solar wind dynamic pressure reported by McComas et al. (2013) continues, Jupiter's bow shock and magnetopause are expected to be at least 8 - 12% farther from Jupiter.

  2. Calculation of the confidence intervals for transformation parameters in the registration of medical images

    PubMed Central

    Bansal, Ravi; Staib, Lawrence H.; Laine, Andrew F.; Xu, Dongrong; Liu, Jun; Posecion, Lainie F.; Peterson, Bradley S.

    2010-01-01

    Images from different individuals typically cannot be registered precisely because anatomical features within the images differ across the people imaged and because the current methods for image registration have inherent technological limitations that interfere with perfect registration. Quantifying the inevitable error in image registration is therefore of crucial importance in assessing the effects that image misregistration may have on subsequent analyses in an imaging study. We have developed a mathematical framework for quantifying errors in registration by computing the confidence intervals of the estimated parameters (3 translations, 3 rotations, and 1 global scale) for the similarity transformation. The presence of noise in images and the variability in anatomy across individuals ensures that estimated registration parameters are always random variables. We assume a functional relation among intensities across voxels in the images, and we use the theory of nonlinear, least-squares estimation to show that the parameters are multivariate Gaussian distributed. We then use the covariance matrix of this distribution to compute the confidence intervals of the transformation parameters. These confidence intervals provide a quantitative assessment of the registration error across the images. Because transformation parameters are nonlinearly related to the coordinates of landmark points in the brain, we subsequently show that the coordinates of those landmark points are also multivariate Gaussian distributed. Using these distributions, we then compute the confidence intervals of the coordinates for landmark points in the image. Each of these confidence intervals in turn provides a quantitative assessment of the registration error at a particular landmark point. Because our method is computationally intensive, however, its current implementation is limited to assessing the error of the parameters in the similarity transformation across images. 
    We assessed the performance of our method in computing the error in estimated similarity parameters by applying it to a real-world dataset. Our results showed that the size of the confidence intervals computed using our method decreased (i.e., our confidence in the registration of images from different individuals increased) for increasing amounts of blur in the images. Moreover, the size of the confidence intervals increased for increasing amounts of noise, misregistration, and differing anatomy. Thus, our method precisely quantified confidence in the registration of images that contain varying amounts of misregistration and varying anatomy across individuals. PMID:19138877
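
    The covariance-based construction described above can be illustrated on a toy 1-D analogue (a scale-and-translation fit rather than the full 7-parameter similarity transform; all data below are synthetic). The 95% confidence intervals come from the least-squares covariance sigma^2 (J^T J)^(-1):

```python
import math
import random

random.seed(3)

# Synthetic 1-D registration analogue: y = s*x + t + noise
true_s, true_t, sigma = 1.1, 4.0, 0.5
xs = [float(i) for i in range(100)]
ys = [true_s * x + true_t + random.gauss(0.0, sigma) for x in xs]

# Normal equations for the design matrix J with rows [x, 1]
n = len(xs)
Sx = sum(xs)
Sxx = sum(x * x for x in xs)
Sy = sum(ys)
Sxy = sum(x * y for x, y in zip(xs, ys))
det = n * Sxx - Sx * Sx
s_hat = (n * Sxy - Sx * Sy) / det
t_hat = (Sxx * Sy - Sx * Sxy) / det

# Residual variance estimate, then diag of (J^T J)^{-1} = [[n, -Sx],[-Sx, Sxx]]/det
resid = [y - (s_hat * x + t_hat) for x, y in zip(xs, ys)]
sigma2_hat = sum(r * r for r in resid) / (n - 2)
ci_s = 1.96 * math.sqrt(sigma2_hat * n / det)    # 95% half-width for scale
ci_t = 1.96 * math.sqrt(sigma2_hat * Sxx / det)  # 95% half-width for translation
```

    As in the paper, more noise widens the intervals: scaling `sigma` up directly inflates `sigma2_hat` and hence both half-widths.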

  3. Non stationary analysis of heart rate variability during the obstructive sleep apnea.

    PubMed

    Méndez, M O; Bianchi, A M; Cerutti, S

    2004-01-01

    Characteristic fluctuations of the heart rate are found during obstructive sleep apnea (OSA): bradycardia during the apneic phase and tachycardia at the recovery of ventilation. In order to assess the autonomic response, this study uses the Born-Jordan time-frequency distribution and evolutive Poincaré plots. A database of ECG and respiratory signal recordings was taken from PhysioNet. During OSA, all spectral indexes presented oscillations corresponding to the changes between bradycardia and tachycardia in the RR intervals, as well as greater values than during control epochs. The Born-Jordan distribution and evolutive Poincaré plots could help to characterize OSA and to develop an index for its evaluation. The very low frequency component could also be a good index of OSA.

  4. Data that describe at-a-point temporal variations in the transport rate and particle-size distribution of bedload; East Fork River, Wyoming, and Fall River, Colorado

    USGS Publications Warehouse

    Gomez, Basil; Emmett, W.W.

    1990-01-01

    Data from the East Fork River, Wyoming, and the Fall River, Colorado, that document at-a-point temporal variations in the transport rate and particle-size distribution of bedload, associated with the downstream migration of dunes, are presented. Bedload sampling was undertaken, using a 76.2 x 76.2 mm Helley-Smith sampler, on three separate occasions at each site in June 1988. In each instance, the sampling time was 30 seconds and the sampling interval was 5 minutes. The sampling periods ranged from 4.92 to 8.25 hours. Water stage did not vary appreciably during any of the sampling periods. (USGS)

  5. Monte Carlo simulation of wave sensing with a short pulse radar

    NASA Technical Reports Server (NTRS)

    Levine, D. M.; Davisson, L. D.; Kutz, R. L.

    1977-01-01

    A Monte Carlo simulation is used to study the ocean wave sensing potential of a radar which scatters short pulses at small off-nadir angles. In the simulation, realizations of a random surface are created commensurate with an assigned probability density and power spectrum. Then the signal scattered back to the radar is computed for each realization using a physical optics analysis which takes wavefront curvature and finite radar-to-surface distance into account. In the case of a Pierson-Moskowitz spectrum and a normally distributed surface, reasonable assumptions for a fully developed sea, it has been found that the cumulative distribution of time intervals between peaks in the scattered power provides a measure of surface roughness. This observation is supported by experiments.
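
    The interval-between-peaks statistic used above is easy to compute for any sampled record. In this sketch the "scattered power" is a synthetic two-tone signal plus noise (a stand-in for illustration only, not the physical-optics returns from Pierson-Moskowitz surfaces used in the paper):

```python
import math
import random

random.seed(5)

def peak_intervals(signal, dt):
    """Time intervals between successive local maxima of a sampled signal."""
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i - 1] < signal[i] >= signal[i + 1]]
    return [(b - a) * dt for a, b in zip(peaks, peaks[1:])]

def ecdf(values, x):
    """Empirical cumulative distribution function evaluated at x."""
    return sum(1 for v in values if v <= x) / len(values)

# Synthetic stand-in for the scattered power record
dt = 0.01
signal = [math.sin(2 * math.pi * 1.0 * k * dt)
          + 0.5 * math.sin(2 * math.pi * 3.1 * k * dt)
          + 0.1 * random.gauss(0.0, 1.0)
          for k in range(5000)]
intervals = peak_intervals(signal, dt)
```

    In the paper's setting, the shape of `ecdf` over `intervals` is the quantity that correlates with surface roughness.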

  6. Baseline-dependent sampling and windowing for radio interferometry: data compression, field-of-interest shaping, and outer field suppression

    NASA Astrophysics Data System (ADS)

    Atemkeng, M.; Smirnov, O.; Tasse, C.; Foster, G.; Keimpema, A.; Paragi, Z.; Jonas, J.

    2018-07-01

    Traditional radio interferometric correlators produce regular-gridded samples of the true uv-distribution by averaging the signal over constant, discrete time-frequency intervals. This regular sampling and averaging translates into irregular-gridded samples in uv-space and results in a baseline-length-dependent loss of amplitude and phase coherence, which depends on the distance from the image phase centre. The effect is often referred to as `decorrelation' in the uv-space, which is equivalent in the source domain to `smearing'. This work discusses and implements a regular-gridded sampling scheme in the uv-space (baseline-dependent sampling) and windowing that allow for data compression, field-of-interest shaping, and source suppression. Baseline-dependent sampling requires irregular-gridded sampling in the time-frequency space, i.e. the time-frequency interval becomes baseline dependent. Analytic models and simulations are used to show that decorrelation remains constant across all baselines when applying baseline-dependent sampling and windowing. Simulations using the MeerKAT telescope and the European Very Long Baseline Interferometry Network show that data compression, field-of-interest shaping, and outer field-of-interest suppression are all achieved.

  7. A Deadline-Aware Scheduling and Forwarding Scheme in Wireless Sensor Networks.

    PubMed

    Dao, Thi-Nga; Yoon, Seokhoon; Kim, Jangyoung

    2016-01-05

    Many applications in wireless sensor networks (WSNs) require energy consumption to be minimized and the data delivered to the sink within a specific delay. A usual solution for reducing energy consumption is duty cycling, in which nodes periodically switch between sleep and active states. By increasing the duty cycle interval, consumed energy can be reduced more. However, a large duty cycle interval causes a long end-to-end (E2E) packet delay. As a result, the requirement of a specific delay bound for packet delivery may not be satisfied. In this paper, we aim at maximizing the duty cycle while still guaranteeing that the packets arrive at the sink with the required probability, i.e., the required delay-constrained success ratio (DCSR) is achieved. In order to meet this objective, we propose a novel scheduling and forwarding scheme, namely the deadline-aware scheduling and forwarding (DASF) algorithm. In DASF, the E2E delay distribution with the given network model and parameters is estimated in order to determine the maximum duty cycle interval, with which the required DCSR is satisfied. Each node independently selects a wake-up time using the selected interval, and packets are forwarded to a node in the potential forwarding set, which is determined based on the distance between nodes and the sink. DASF does not require time synchronization between nodes, and a node does not need to maintain neighboring node information in advance. Simulation results show that the proposed scheme can satisfy a required delay-constrained success ratio and outperforms existing algorithms in terms of E2E delay and DCSR.
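
    The core selection step of a DASF-style scheme can be sketched under a deliberately simplified delay model (an assumption for illustration, not the paper's analysis): each relay wakes at an independent uniform offset within the duty-cycle interval T, so the per-hop wait is Uniform(0, T); Monte Carlo then estimates the delay-constrained success ratio (DCSR) and the largest T that still meets the requirement is chosen:

```python
import random

random.seed(11)

# Illustrative network parameters (assumptions)
HOPS = 5            # hops from source to sink
DEADLINE = 2.0      # required end-to-end delay bound, seconds
TARGET_DCSR = 0.95  # required delay-constrained success ratio
TRIALS = 20000

def dcsr(T):
    """Monte Carlo estimate of P(end-to-end delay <= DEADLINE) for interval T."""
    ok = 0
    for _ in range(TRIALS):
        delay = sum(random.uniform(0.0, T) for _ in range(HOPS))
        if delay <= DEADLINE:
            ok += 1
    return ok / TRIALS

def max_interval(candidates):
    """Largest candidate duty-cycle interval whose DCSR meets the target."""
    best = None
    for T in candidates:  # candidates given in ascending order
        if dcsr(T) >= TARGET_DCSR:
            best = T
    return best

best_T = max_interval([0.2, 0.4, 0.6, 0.8, 1.0])
```

    Larger T saves energy but lowers the DCSR, which is exactly the trade-off the abstract describes; here the estimated distribution rules out intervals of 0.6 s and above.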

  8. A Deadline-Aware Scheduling and Forwarding Scheme in Wireless Sensor Networks

    PubMed Central

    Dao, Thi-Nga; Yoon, Seokhoon; Kim, Jangyoung

    2016-01-01

    Many applications in wireless sensor networks (WSNs) require energy consumption to be minimized and the data delivered to the sink within a specific delay. A usual solution for reducing energy consumption is duty cycling, in which nodes periodically switch between sleep and active states. By increasing the duty cycle interval, consumed energy can be reduced more. However, a large duty cycle interval causes a long end-to-end (E2E) packet delay. As a result, the requirement of a specific delay bound for packet delivery may not be satisfied. In this paper, we aim at maximizing the duty cycle while still guaranteeing that the packets arrive at the sink with the required probability, i.e., the required delay-constrained success ratio (DCSR) is achieved. In order to meet this objective, we propose a novel scheduling and forwarding scheme, namely the deadline-aware scheduling and forwarding (DASF) algorithm. In DASF, the E2E delay distribution with the given network model and parameters is estimated in order to determine the maximum duty cycle interval, with which the required DCSR is satisfied. Each node independently selects a wake-up time using the selected interval, and packets are forwarded to a node in the potential forwarding set, which is determined based on the distance between nodes and the sink. DASF does not require time synchronization between nodes, and a node does not need to maintain neighboring node information in advance. Simulation results show that the proposed scheme can satisfy a required delay-constrained success ratio and outperforms existing algorithms in terms of E2E delay and DCSR. PMID:26742046

  9. EFFECT ON PERFUSION VALUES OF SAMPLING INTERVAL OF CT PERFUSION ACQUISITIONS IN NEUROENDOCRINE LIVER METASTASES AND NORMAL LIVER

    PubMed Central

    Ng, Chaan S.; Hobbs, Brian P.; Wei, Wei; Anderson, Ella F.; Herron, Delise H.; Yao, James C.; Chandler, Adam G.

    2014-01-01

    Objective To assess the effects of the sampling interval (SI) of CT perfusion acquisitions on CT perfusion values in normal liver and liver metastases from neuroendocrine tumors. Methods CT perfusion data in 16 patients with neuroendocrine liver metastases were analyzed by distributed parameter modeling to yield tissue blood flow, blood volume, mean transit time, permeability, and hepatic arterial fraction, for tumor and normal liver. CT perfusion values for the reference sampling interval of 0.5s (SI0.5) were compared with those of SI datasets of 1s, 2s, 3s and 4s, using mixed-effects model analyses. Results Increases in SI beyond 1s were associated with significant and increasing departures of CT perfusion parameters from reference values at SI0.5 (p≤0.0009). CT perfusion values deviated from the reference with increasing uncertainty as SI increased. Findings for normal liver were concordant. Conclusion Increasing SI beyond 1s yields significantly different CT perfusion parameter values compared with reference values at SI0.5. PMID:25626401

  10. Spatial Distribution and Secular Variation of the Geomagnetic Field in China Described by the CHAOS-6 Model and Its Error Analysis

    NASA Astrophysics Data System (ADS)

    Wang, Z.; Gu, Z.; Chen, B.; Yuan, J.; Wang, C.

    2016-12-01

    The CHAOS-6 geomagnetic field model, presented in 2016 by Denmark's national space institute (DTU Space), is a model of the near-Earth magnetic field. According to the CHAOS-6 model, seven geomagnetic field components were calculated at 30 observatories in China in 2015 and at 3 observatories in China spanning the time interval 2008.0-2016.5. Seven geomagnetic field components were also obtained from practical observations in China. Based on the model-calculated data and the practical data, we compared and analyzed the spatial distribution and the secular variation of the geomagnetic field in China. There are obvious differences between the two types of data. The CHAOS-6 model cannot describe the spatial distribution and the secular variation of the geomagnetic field in China with sufficient precision because of the regional and local magnetic anomalies in China.

  11. Entraining the topology and the dynamics of a network of phase oscillators

    NASA Astrophysics Data System (ADS)

    Sendiña-Nadal, I.; Leyva, I.; Buldú, J. M.; Almendral, J. A.; Boccaletti, S.

    2009-04-01

    We show that the topology and dynamics of a network of unsynchronized Kuramoto oscillators can be simultaneously controlled by means of a forcing mechanism which yields a phase locking of the oscillators to that of an external pacemaker in connection with the reshaping of the network’s degree distribution. The entrainment mechanism is based on the addition, at regular time intervals, of unidirectional links from oscillators that follow the dynamics of a pacemaker to oscillators in the pristine graph whose phases hold a prescribed phase relationship. Such a dynamically based rule in the attachment process leads to the emergence of a power-law shape in the final degree distribution of the graph whenever the network is entrained to the dynamics of the pacemaker. We show that the arousal of a scale-free distribution in connection with the success of the entrainment process is a robust feature, characterizing different networks’ initial configurations and parameters.
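
    A stripped-down version of the entrainment ingredient above (unidirectional pacemaker-to-oscillator links only, without the paper's degree-reshaping attachment rule; all parameters are illustrative assumptions) shows the phase locking:

```python
import math
import random

random.seed(2)

N, K = 20, 3.0          # oscillators and pacemaker coupling strength
W_P = 1.0               # pacemaker frequency
DT, STEPS = 0.01, 20000 # Euler time step and number of steps

# Natural frequencies spread around the pacemaker frequency
omega = [W_P + random.uniform(-0.2, 0.2) for _ in range(N)]
theta = [random.uniform(0.0, 2.0 * math.pi) for _ in range(N)]
phi = 0.0  # pacemaker phase

for _ in range(STEPS):
    # Each oscillator feels only the pacemaker: d(theta)/dt = w + K sin(phi - theta)
    theta = [t + (w + K * math.sin(phi - t)) * DT for t, w in zip(theta, omega)]
    phi += W_P * DT

# Order parameter of the phase differences to the pacemaker: r ~ 1 means entrained
cx = sum(math.cos(t - phi) for t in theta) / N
sy = sum(math.sin(t - phi) for t in theta) / N
r = math.hypot(cx, sy)
```

    With K well above the frequency spread, every oscillator settles at a small fixed phase lag behind the pacemaker, so the order parameter of the differences approaches 1.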

  12. Relativity, nonextensivity, and extended power law distributions.

    PubMed

    Silva, R; Lima, J A S

    2005-11-01

    A proof of the relativistic H-theorem including nonextensive effects is given. As happens in the nonrelativistic limit, the molecular chaos hypothesis advanced by Boltzmann does not remain valid, and the second law of thermodynamics combined with a duality transformation implies that the nonextensivity parameter lies in the interval [0,2]. It is also proven that the collisional equilibrium states (null entropy source term) are described by the relativistic power-law extension of the exponential Juttner distribution, which reduces, in the nonrelativistic domain, to the Tsallis power-law function. As a simple illustration of the basic approach, we derive the relativistic nonextensive equilibrium distribution for a dilute charged gas under the action of an electromagnetic field. Such results reduce to the standard ones in the extensive limit, thereby showing that the nonextensive entropic framework can be harmonized with the space-time ideas contained in the special relativity theory.

  13. Rapid Temporal Changes of Midtropospheric Winds

    NASA Technical Reports Server (NTRS)

    Merceret, Francis J.

    1997-01-01

    The statistical distribution of the magnitude of the vector wind change over 0.25-, 1-, 2-, and 4-h periods based on data from October 1995 through March 1996 over central Florida is presented. The wind changes at altitudes from 6 to 17 km were measured using the Kennedy Space Center 50-MHz Doppler radar wind profiler. Quality-controlled profiles were produced every 5 min for 112 gates, each representing 150 m in altitude. Gates 28 through 100 were selected for analysis because of their significance to ascending space launch vehicles. The distribution was found to be lognormal. The parameters of the lognormal distribution depend systematically on the time interval. This dependence is consistent with the behavior of structure functions in the f^(-5/3) spectral regime. There is a small difference between the 1995 data and the 1996 data, which may represent a weak seasonal effect.

  14. Ratio-based lengths of intervals to improve fuzzy time series forecasting.

    PubMed

    Huarng, Kunhuang; Yu, Tiffany Hui-Kuang

    2006-04-01

    The objective of this study is to explore ways of determining the useful lengths of intervals in fuzzy time series. It is suggested that ratios, instead of equal lengths of intervals, can more properly represent the intervals among observations. Ratio-based lengths of intervals are, therefore, proposed to improve fuzzy time series forecasting. Algebraic growth data, such as enrollments and the stock index, and exponential growth data, such as inventory demand, are chosen as the forecasting targets, before forecasting based on the various lengths of intervals is performed. Furthermore, sensitivity analyses are also carried out for various percentiles. The ratio-based lengths of intervals are found to outperform the effective lengths of intervals, as well as the arbitrary ones in regard to the different statistical measures. The empirical analysis suggests that the ratio-based lengths of intervals can also be used to improve fuzzy time series forecasting.
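
    One way to realize ratio-based lengths of intervals (a sketch of the idea only, not the authors' exact algorithm; the range, count, and ratio below are assumptions) is to make each interval a fixed multiple of the previous one, so interval widths track the scale of the observations rather than being equal:

```python
def ratio_based_intervals(lo, hi, n, ratio):
    """Return n+1 boundaries partitioning [lo, hi] into n intervals whose
    lengths grow geometrically by `ratio` (ratio=1.0 gives equal lengths)."""
    total = sum(ratio ** i for i in range(n))  # geometric series of weights
    first = (hi - lo) / total                  # length of the first interval
    bounds, x = [lo], lo
    for i in range(n):
        x += first * ratio ** i
        bounds.append(x)
    return bounds

# Four intervals over [0, 150] doubling in length: 10, 20, 40, 80
b = ratio_based_intervals(0.0, 150.0, 4, 2.0)
```

    Fuzzy sets for the time series would then be defined on these unequal intervals, with narrow intervals where observations are dense and wide ones where they are sparse.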

  15. Importance of the Time Interval between Bowel Preparation and Colonoscopy in Determining the Quality of Bowel Preparation for Full-Dose Polyethylene Glycol Preparation

    PubMed Central

    Kim, Tae Kyung; Kim, Hyung Wook; Kim, Su Jin; Ha, Jong Kun; Jang, Hyung Ha; Hong, Young Mi; Park, Su Bum; Choi, Cheol Woong; Kang, Dae Hwan

    2014-01-01

    Background/Aims The quality of bowel preparation (QBP) is an important factor in performing a successful colonoscopy. Several factors influencing QBP have been reported; however, some factors, such as the optimal preparation-to-colonoscopy time interval, remain controversial. This study aimed to determine the factors influencing QBP and the optimal time interval for full-dose polyethylene glycol (PEG) preparation. Methods A total of 165 patients who underwent colonoscopy from June 2012 to August 2012 were prospectively evaluated. QBP was assessed using the Ottawa Bowel Preparation Scale (Ottawa) score, and several factors influencing QBP were analyzed. Results Colonoscopies with a time interval of 5 to 6 hours had the best Ottawa score in all parts of the colon. Patients with time intervals of 6 hours or less had better QBP than those with time intervals of more than 6 hours (p=0.046). In the multivariate analysis, the time interval (odds ratio, 1.897; 95% confidence interval, 1.006 to 3.577; p=0.048) was the only significant contributor to a satisfactory bowel preparation. Conclusions The optimal time interval was 5 to 6 hours for the full-dose PEG method, and the time interval was the only significant contributor to a satisfactory bowel preparation. PMID:25368750

  16. Carboniferous paleogeographic, phytogeographic, and paleoclimatic reconstructions

    USGS Publications Warehouse

    Rowley, D.B.; Raymond, A.; Parrish, Judith T.; Lottes, A.L.; Scotese, C.R.; Ziegler, A.M.

    1985-01-01

    Two revised paleogeographic reconstructions of the Visean and Westphalian C-D stages are presented, based on recent paleomagnetic, phytogeographic, stratigraphic, and tectonic data. These data change the positions of some continental blocks and allow the definition of several new ones. The most important modifications incorporated in these reconstructions are: (1) a proposed isthmus linking North America and Siberia across the Bering Strait; and (2) the separation of China and Southeast Asia into six major blocks, including the South China, North China, Shan Thai-Malaya, Indochina, Qangtang, and Tarim blocks. Evidence is presented that suggests that at least the South China, Shan Thai-Malaya, and Qangtang blocks were derived from the northern margin of Gondwana. Multivariate statistical analysis of phytogeographic data from the middle and late Paleozoic allows the definition of a number of different phytogeographic units for four time intervals: (1) the Early Devonian, (2) the Tournaisian-early Visean, (3) the Visean, and (4) the late Visean-early Namurian A. Pre-late Visean-early Namurian A floral assemblages from South China show affinities with northern Gondwana floras, suggesting a southerly position and providing additional support for our reconstruction of South China against the northern margin of Gondwana. There is a marked decrease in the diversity of phytogeographic units in the Namurian and younger Carboniferous. This correlates closely with the time of assembly of most of Pangaea. The general pattern of Carboniferous phytogeographic units corresponds well with the global distribution of continents shown on our paleogeographic reconstructions. In addition, we have constructed paleoclimatic maps for the two Carboniferous time intervals. These maps stress the distribution of rainfall, as this should be strongly correlated with the floras. There is a marked change in the rainfall patterns between the Visean and Westphalian C-D.
    This change corresponds with the closing of the Appalachian-Ouachita ocean between Laurussia and Gondwana and reflects the removal of a low-latitude moisture source that probably gave rise to monsoonal conditions along the northern margin of Gondwana in the Visean and earlier times. In addition, the presence of a substantial heat source at high elevation in the Late Carboniferous significantly influenced the distribution of climatic belts. © 1986.

  17. Effects of nandrolone decanoate on time to consolidation of bone defects resulting from osteotomy for tibial tuberosity advancement.

    PubMed

    Marques, Danilo R C; Marques, Danilo; Ibanez, Jose F; Freitas, Itallo B; Hespanha, Ana C; Monteiro, Juliana F; Eggert, Mayara; Becker, Amanda

    2017-09-12

    Experimental study. The aim of this study was to evaluate the effect of nandrolone decanoate (ND) on the time taken for bone consolidation in dogs undergoing tibial tuberosity advancement surgery (TTA). Seventeen dogs that underwent TTA surgery were randomly divided into two groups: group C (TTA; 9 stifles) and group TTA+ND (TTA and systemic administration of ND; 8 stifles). Three observers (two radiologists and an orthopaedic surgeon) assessed bone consolidation by visual inspection of serial radiographs at intervals of 21 days following surgery. There were no differences in median weight and age between groups, nor between the medians for the right and left stifles. Only the weight and age values were normally distributed; the other variables (right and left stifle and time to consolidation) showed non-normal distributions. Meniscal injury was present in all animals in group C and all animals in group TTA+ND. There was a significant difference in time to consolidation between groups C and TTA+ND (p < 0.05). One animal in group TTA+ND showed increased libido. Kappa agreement among observers on the radiographs was 0.87. Administration of ND reduces time to bone consolidation in dogs undergoing TTA.

  18. Do weak global stresses synchronize earthquakes?

    NASA Astrophysics Data System (ADS)

    Bendick, R.; Bilham, R.

    2017-08-01

    Insofar as slip in an earthquake is related to the strain accumulated near a fault since a previous earthquake, and this process repeats many times, the earthquake cycle approximates an autonomous oscillator. Its asymmetric slow accumulation of strain and rapid release is quite unlike the harmonic motion of a pendulum and need not be time predictable, but still resembles a class of repeating systems known as integrate-and-fire oscillators, whose behavior has been shown to demonstrate a remarkable ability to synchronize to either external or self-organized forcing. Given sufficient time and even very weak physical coupling, the phases of sets of such oscillators, with similar though not necessarily identical period, approach each other. Topological and time series analyses presented here demonstrate that earthquakes worldwide show evidence of such synchronization. Though numerous studies demonstrate that the composite temporal distribution of major earthquakes in the instrumental record is indistinguishable from random, the additional consideration of event renewal interval serves to identify earthquake groupings suggestive of synchronization that are absent in synthetic catalogs. We envisage the weak forces responsible for clustering originate from lithospheric strain induced by seismicity itself, by finite strains over teleseismic distances, or by other sources of lithospheric loading such as Earth's variable rotation. For example, quasi-periodic maxima in rotational deceleration are accompanied by increased global seismicity at multidecadal intervals.

  19. Laser plasma interaction at an early stage of laser ablation

    NASA Astrophysics Data System (ADS)

    Lu, Y. F.; Hong, M. H.; Low, T. S.

    1999-03-01

    Laser scattering and its interaction with the plasma during KrF excimer laser ablation of silicon are investigated by ultrafast phototube detection. There are two peaks in the optical signal, with the first peak attributed to laser scattering and the second to plasma generation. For laser fluences above 5.8 J/cm2, the second peak rises earlier and overlaps with the first one. The optical signal is fitted by a pulse distribution for the scattered laser light and a drifted Maxwell-Boltzmann distribution with a center-of-mass velocity for the plasma. The peak amplitude and its arrival time, the full width at half maximum (FWHM), and the starting and termination times of the profiles are studied for different laser fluences and detection angles. The laser pulse is scattered from both the substrate and the plasma, with the latter being the dominant contribution during laser ablation. The peak amplitude of the scattered laser signal increases with laser fluence, but its FWHM decreases. The angular distribution of the peak amplitude can be fitted with cos^n θ (n = 4), while the detection angle has no obvious influence on the FWHM. In addition, the FWHM and peak amplitude of the plasma signal increase with laser fluence, whereas the starting time and peak arrival time of the plasma signal decrease. The time interval between plasma onset and the termination of the scattered laser pulse is proposed as a quantitative parameter to characterize the laser plasma interaction. The threshold fluence for the interaction is estimated to be 3.5 J/cm2. For laser fluences above 12.6 J/cm2, the plasma and scattered laser pulse distributions tend to saturate.

  20. Reaction Event Counting Statistics of Biopolymer Reaction Systems with Dynamic Heterogeneity.

    PubMed

    Lim, Yu Rim; Park, Seong Jun; Park, Bo Jung; Cao, Jianshu; Silbey, Robert J; Sung, Jaeyoung

    2012-04-10

    We investigate the reaction event counting statistics (RECS) of an elementary biopolymer reaction in which the rate coefficient depends on the states of the biopolymer and the surrounding environment, and discover a universal kinetic phase transition in the RECS of the reaction system with dynamic heterogeneity. From an exact analysis of a general model of elementary biopolymer reactions, we find that the variance in the number of reaction events depends on the square of the mean number of reaction events when the measurement time is short on the relaxation time scale of the rate coefficient fluctuations, which does not conform to renewal statistics. On the other hand, when the measurement time interval is much longer than the relaxation time of the rate coefficient fluctuations, the variance becomes linearly proportional to the mean reaction number, in accordance with renewal statistics. Gillespie's stochastic simulation method is generalized for the reaction system with a fluctuating rate coefficient. The simulation results confirm the correctness of the analytic results for the time-dependent mean and variance of the reaction event number distribution. On the basis of the obtained results, we propose a method of quantitative analysis for the reaction event counting statistics of reaction systems with rate coefficient fluctuations, which enables one to extract information about the magnitude and the relaxation times of the fluctuating reaction rate coefficient without the bias that can be introduced by assuming a particular kinetic model of conformational dynamics and conformation-dependent reactivity. An exact relationship is established between a higher moment of the reaction event number distribution and the multitime correlation of the reaction rate, for the reaction system with a nonequilibrium initial state distribution as well as for the system with an equilibrium initial state distribution.
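
    A minimal version of a Gillespie simulation generalized to a fluctuating rate coefficient can be sketched as follows (rates, switching time scale, and observation window are illustrative assumptions): the rate coefficient switches between two values at rate gamma, and when switching is slow relative to the measurement window the event counts become super-Poissonian (variance exceeding the mean), as the abstract describes for short measurement times:

```python
import random

random.seed(13)

# Illustrative parameters (assumptions): two rate states and slow switching
K0, K1 = 1.0, 10.0   # reaction rate in each environmental state
GAMMA = 0.05         # switching rate between states (slow vs. T_OBS)
T_OBS = 10.0         # measurement time window

def count_events():
    """One Gillespie trajectory: count reaction events in [0, T_OBS] while the
    rate coefficient stochastically switches between K0 and K1."""
    t, state, count = 0.0, random.randint(0, 1), 0
    while True:
        k = K0 if state == 0 else K1
        total = k + GAMMA                    # total rate of (reaction | switch)
        t += random.expovariate(total)       # time to next event of either kind
        if t > T_OBS:
            return count
        if random.random() < k / total:
            count += 1                       # reaction event
        else:
            state = 1 - state                # rate-coefficient fluctuation

counts = [count_events() for _ in range(5000)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / (len(counts) - 1)
```

    For a constant rate coefficient the same simulation would give Poisson counts (variance equal to the mean); the dynamic heterogeneity is what pushes the variance far above the mean in this slow-switching regime.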
