Sample records for large time interval

  1. Periodic, chaotic, and doubled earthquake recurrence intervals on the deep San Andreas Fault

    USGS Publications Warehouse

    Shelly, David R.

    2010-01-01

    Earthquake recurrence histories may provide clues to the timing of future events, but long intervals between large events obscure full recurrence variability. In contrast, small earthquakes occur frequently, and recurrence intervals are quantifiable on a much shorter time scale. In this work, I examine an 8.5-year sequence of more than 900 recurring low-frequency earthquake bursts composing tremor beneath the San Andreas fault near Parkfield, California. These events exhibit tightly clustered recurrence intervals that, at times, oscillate between ~3 and ~6 days, but the patterns sometimes change abruptly. Although the environments of large and low-frequency earthquakes are different, these observations suggest that similar complexity might underlie sequences of large earthquakes.

  2. Intermediate-term earthquake prediction

    USGS Publications Warehouse

    Knopoff, L.

    1990-01-01

    The problems in predicting earthquakes have been attacked by phenomenological methods from pre-historic times to the present. The associations of presumed precursors with large earthquakes often have been remarked upon. The difficulty in identifying whether such correlations are due to some chance coincidence or are real precursors is that usually one notes the associations only in the relatively short time intervals before the large events. Only rarely, if ever, is notice taken of whether the presumed precursor is to be found in the rather long intervals that follow large earthquakes, or in fact is absent in these post-earthquake intervals. If there are enough examples, the presumed correlation fails as a precursor in the former case, while in the latter case the precursor would be verified. Unfortunately, the observer is usually not concerned with the 'uninteresting' intervals that have no large earthquakes.

  3. Estimating the duration of geologic intervals from a small number of age determinations: A challenge common to petrology and paleobiology

    NASA Astrophysics Data System (ADS)

    Glazner, Allen F.; Sadler, Peter M.

    2016-12-01

    The duration of a geologic interval, such as the time over which a given volume of magma accumulated to form a pluton, or the lifespan of a large igneous province, is commonly determined from a relatively small number of geochronologic determinations (e.g., 4-10) within that interval. Such sample sets can underestimate the true length of the interval by a significant amount. For example, the average interval determined from a sample of size n = 5, drawn from a uniform random distribution, will underestimate the true interval by about a third. Even for n = 10, the average sample only captures ~80% of the interval. If the underlying distribution is known, then a correction factor can be determined from theory or Monte Carlo analysis; for a uniform random distribution, this factor is (n + 1)/(n - 1). Systematic undersampling of interval lengths can have a large effect on calculated magma fluxes in plutonic systems. The problem is analogous to determining the duration of an extinct species from its fossil occurrences. Confidence interval statistics developed for species origination and extinction times are applicable to the onset and cessation of magmatic events.
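The undersampling effect described above is easy to reproduce. Below is a minimal Monte Carlo sketch (not from the paper; the function name is illustrative) that draws n ages uniformly from a unit-length interval, measures the average spread between the oldest and youngest ages, and applies the (n + 1)/(n - 1) correction factor quoted in the abstract:

```python
import random

def mean_sampled_range(n, trials=20000, true_len=1.0, seed=1):
    """Average (max - min) age spread for n dates drawn uniformly at
    random from an interval of true length true_len."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        ages = [rng.uniform(0.0, true_len) for _ in range(n)]
        total += max(ages) - min(ages)
    return total / trials

for n in (5, 10):
    raw = mean_sampled_range(n)
    corrected = raw * (n + 1) / (n - 1)  # correction factor from the abstract
    print(f"n={n}: raw ~{raw:.2f} of true length, corrected ~{corrected:.2f}")
```

For n = 5 the raw estimate recovers roughly two-thirds of the true interval, and for n = 10 roughly 80%, matching the abstract; multiplying by (n + 1)/(n - 1) restores the full length on average.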

  4. Particle behavior simulation in thermophoresis phenomena by direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Wada, Takao

    2014-07-01

    A particle motion considering thermophoretic force is simulated using the direct simulation Monte Carlo (DSMC) method. Thermophoresis phenomena, which occur for particle sizes of about 1 μm, are treated in this paper. The difficulty in thermophoresis simulation is the computation time, which is proportional to the collision frequency; the time step interval becomes very small when the motion of a large particle is considered. Thermophoretic forces calculated by the DSMC method have been reported, but the particle motion was not computed because of the small time step interval. In this paper, a molecule-particle collision model is considered that computes the collision between a particle and multiple molecules in a single collision event. The momentum transfer to the particle is computed with a collision weight factor, which is the number of molecules colliding with the particle in one collision event. By using the collision weight factor, a large time step interval can be adopted; for a 1 μm particle it is about a million times longer than the conventional DSMC time step, so the computation time is reduced by a factor of about one million. We simulate graphite particle motion under thermophoretic force using DSMC-Neutrals (Particle-PLUS neutral module), commercial software adopting the DSMC method, with the above collision weight factor. The particle is a sphere 1 μm in size, and particle-particle collisions are ignored. We compute the thermophoretic forces in Ar and H2 gases over a pressure range from 0.1 to 100 mTorr. The results agree well with Gallis' analytical results; note that Gallis' analytical result in the continuum limit is the same as Waldmann's result.
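The collision weight factor can be illustrated with a toy calculation (a sketch only, not the DSMC-Neutrals implementation): lumping W molecular collisions into one weighted event leaves the total momentum transferred to the particle unchanged while reducing the number of events, and hence the number of time steps, by a factor of W:

```python
def impulse(total_collisions, dp_per_collision, weight=1):
    """Total momentum transferred to the particle when `weight` molecular
    collisions are lumped into a single weighted collision event.
    Returns (total impulse, number of simulated events)."""
    events = total_collisions // weight
    return events * weight * dp_per_collision, events

# One event per molecule vs. 1000 molecules per weighted event
# (illustrative numbers, not from the paper):
p_exact, n_exact = impulse(1_000_000, 1e-21)
p_fast, n_fast = impulse(1_000_000, 1e-21, weight=1000)
print(n_exact, n_fast)  # same impulse, 1000x fewer events to simulate
```

The same total impulse is delivered either way; the weighted version simply allows the time step between simulated events to be W times longer.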

  5. Military Applicability of Interval Training for Health and Performance.

    PubMed

    Gibala, Martin J; Gagnon, Patrick J; Nindl, Bradley C

    2015-11-01

    Militaries from around the globe have predominantly used endurance training as their primary mode of aerobic physical conditioning, with historical emphasis placed on the long distance run. In contrast to this traditional exercise approach to training, interval training is characterized by brief, intermittent bouts of intense exercise, separated by periods of lower intensity exercise or rest for recovery. Although hardly a novel concept, research over the past decade has shed new light on the potency of interval training to elicit physiological adaptations in a time-efficient manner. This work has largely focused on the benefits of low-volume interval training, which involves a relatively small total amount of exercise, as compared with the traditional high-volume approach to training historically favored by militaries. Studies that have directly compared interval and moderate-intensity continuous training have shown similar improvements in cardiorespiratory fitness and the capacity for aerobic energy metabolism, despite large differences in total exercise and training time commitment. Interval training can also be applied in a calisthenics manner to improve cardiorespiratory fitness and strength, and this approach could easily be incorporated into a military conditioning environment. Although interval training can elicit physiological changes in men and women, the potential for sex-specific differences in the adaptive response to interval training warrants further investigation. Additional work is needed to clarify adaptations occurring over the longer term; however, interval training deserves consideration from a military applicability standpoint as a time-efficient training strategy to enhance soldier health and performance. There is value for military leaders in identifying strategies that reduce the time required for exercise, but nonetheless provide an effective training stimulus.

  6. Kinetics of matching.

    PubMed

    Mark, T A; Gallistel, C R

    1994-01-01

    Rats responded on concurrent variable interval schedules of brain stimulation reward in 2-trial sessions. Between trials, there was a 16-fold reversal in the relative rate of reward. In successive, narrow time windows, the authors compared the ratio of the times spent on the 2 levers to the ratio of the rewards received. Time-allocation ratios tracked wide, random fluctuations in the reward ratio. The adjustment to the midsession reversal in relative rate of reward was largely completed within 1 interreward interval on the leaner schedule. Both results were unaffected by a 16-fold change in the combined rates of reward. The large, rapid, scale-invariant shifts in time-allocation ratios that underlie matching behavior imply that the subjective relative rate of reward can be determined by a very few of the most recent interreward intervals and that this estimate can directly determine the ratio of the expected stay durations.

  7. Finding Intervals of Abrupt Change in Earth Science Data

    NASA Astrophysics Data System (ADS)

    Zhou, X.; Shekhar, S.; Liess, S.

    2011-12-01

    In earth science data (e.g., climate data), it is often observed that a persistently abrupt change in value occurs in a certain time period or spatial interval. For example, abrupt climate change is defined as an unusually large shift of precipitation, temperature, etc., that occurs during a relatively short time period. A similar change pattern can also be found in geographical space, representing a sharp transition of the environment (e.g., vegetation between different ecological zones). Identifying such intervals of change from earth science datasets is a crucial step for understanding and attributing the underlying phenomenon. However, inconsistencies in these noisy datasets can obstruct the major change trend and, more importantly, can complicate the search for the beginning and end points of the interval of change. Also, the large volume of data makes it challenging to process the dataset reasonably fast. In this work, we analyze earth science data using a novel, automated data mining approach to identify spatial/temporal intervals of persistent, abrupt change.
We first propose a statistical model to quantitatively evaluate the change abruptness and persistence in an interval. Then we design an algorithm to exhaustively examine all the intervals using this model. Intervals passing a threshold test are kept as final results. We evaluate the proposed method with Climatic Research Unit (CRU) precipitation data, focusing on the Sahel rainfall index. Results show that this method can find periods of persistent and abrupt value changes at different temporal scales. We also further optimize the algorithm using a smart strategy that always examines longer intervals before their subsets. By doing this, we reduce the computational cost to only one third of that of the original algorithm for the above test case. More significantly, the optimized algorithm also scales well with data volume and number of changes. In particular, it achieves better performance when dealing with longer change intervals.
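A minimal sketch of the exhaustive scan is below. It is illustrative only: the score here is simply the shift in mean level from before a candidate interval to after it, standing in for the paper's statistical model of abruptness and persistence, and longer intervals are examined before their subsets as in the paper's optimization:

```python
def change_intervals(series, min_shift):
    """Toy exhaustive scan: score each candidate interval [i, j) by the
    shift between the mean level before i and the mean level from j on,
    examining longer intervals before shorter ones, and keep those that
    pass the threshold test."""
    n = len(series)
    candidates = sorted(((i, j) for i in range(1, n) for j in range(i, n)),
                        key=lambda ij: ij[1] - ij[0], reverse=True)
    hits = []
    for i, j in candidates:
        before = sum(series[:i]) / i
        after = sum(series[j:]) / (n - j)
        if abs(after - before) >= min_shift:
            hits.append((i, j, after - before))
    return hits

step = [0.0] * 10 + [5.0] * 10   # synthetic series with one abrupt jump
print(change_intervals(step, 4.0)[0])  # widest interval spanning the jump
```

On the synthetic step series, every qualifying interval brackets the jump at index 10, including the zero-length change point (10, 10).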

  8. A New Approach for Accuracy Improvement of Pulsed LIDAR Remote Sensing Data

    NASA Astrophysics Data System (ADS)

    Zhou, G.; Huang, W.; Zhou, X.; He, C.; Li, X.; Huang, Y.; Zhang, L.

    2018-05-01

    In remote sensing applications, the accuracy of time interval measurement is one of the most important parameters affecting the quality of pulsed lidar data. Traditional time interval measurement techniques have the disadvantages of low measurement accuracy, complicated circuit structure, and large error; high-precision time interval data cannot be obtained with these methods. In order to obtain higher-quality remote sensing cloud images based on time interval measurement, a higher-accuracy time interval measurement method is proposed. The method is based on charging a capacitor while simultaneously sampling the change of the capacitor voltage. First, an approximate model of the capacitor voltage curve over the time of flight of the pulse is fitted to the sampled data. Then, the whole charging time is obtained from the fitting function. In this method, only a high-speed A/D sampler and a capacitor are required in a single receiving channel, and the collected data are processed directly in the main control unit. The experimental results show that the proposed method achieves an error of less than 3 ps. Compared with other methods, the proposed method improves the time interval accuracy by at least 20%.
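As a rough illustration of the idea (assuming, for simplicity, constant-current charging so the capacitor voltage rises linearly; the actual method fits the true charging curve, and all numbers below are hypothetical), the time of flight can be recovered by fitting the mid-flight voltage samples and dividing the stop voltage by the fitted charge rate:

```python
def fit_slope_through_origin(ts, vs):
    """Least-squares slope k for voltage samples assumed to follow V = k*t."""
    return sum(t * v for t, v in zip(ts, vs)) / sum(t * t for t in ts)

k_true = 2.0e9                            # hypothetical charge rate, V/s
t_flight = 150e-9                         # true time of flight: 150 ns
ts = [10e-9 * i for i in range(1, 11)]    # A/D samples at 10..100 ns
vs = [k_true * t for t in ts]             # noiseless samples for the sketch
k_fit = fit_slope_through_origin(ts, vs)
v_stop = k_true * t_flight                # voltage frozen when the pulse returns
t_est = v_stop / k_fit
print(t_est * 1e9)                        # time of flight in ns
```

With noisy samples the fit averages out the A/D noise, which is the point of sampling the curve rather than timing a single threshold crossing.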

  9. Outcome-Dependent Sampling with Interval-Censored Failure Time Data

    PubMed Central

    Zhou, Qingning; Cai, Jianwen; Zhou, Haibo

    2017-01-01

    Epidemiologic studies and disease prevention trials often seek to relate an exposure variable to a failure time that suffers from interval-censoring. When the failure rate is low and the time intervals are wide, a large cohort is often required so as to yield reliable precision on the exposure-failure-time relationship. However, large cohort studies with simple random sampling could be prohibitive for investigators with a limited budget, especially when the exposure variables are expensive to obtain. Alternative cost-effective sampling designs and inference procedures are therefore desirable. We propose an outcome-dependent sampling (ODS) design with interval-censored failure time data, where we enrich the observed sample by selectively including certain more informative failure subjects. We develop a novel sieve semiparametric maximum empirical likelihood approach for fitting the proportional hazards model to data from the proposed interval-censoring ODS design. This approach employs the empirical likelihood and sieve methods to deal with the infinite-dimensional nuisance parameters, which greatly reduces the dimensionality of the estimation problem and eases the computation difficulty. The consistency and asymptotic normality of the resulting regression parameter estimator are established. The results from our extensive simulation study show that the proposed design and method work well in practical situations and are more efficient than the alternative designs and competing approaches. An example from the Atherosclerosis Risk in Communities (ARIC) study is provided for illustration. PMID:28771664

  10. Investigating Changes in the High-Latitude Topside Ionosphere During Large Magnetic Storms

    NASA Technical Reports Server (NTRS)

    Fainberg, Joseph; Benson, Robert F.; Osherovich, Vladimir; Truhlik, Vladimir; Wang, Yongli; Fung, Shing; Bilitza, Dieter

    2009-01-01

    A search was conducted to locate periods of nearly simultaneous solar-wind and high latitude topside-ionospheric data during magnetic storms. The focus was on the 20-yr interval from 1965 to 1985 when both solar-wind and Alouette/ISIS topside-sounder data are potentially available. The search yielded 125 large magnetic storms (minimum Dst less than -100) and 280 moderate magnetic storms (minimum Dst between -60 and -100). Solar wind data were available for most, but not all, of these storms. A search of the available high-latitude topside electron-density Ne(h) profiles available from the National Space Science Data Center (NSSDC), both from manual inspection of 35-mm film ionograms in the 1960s and more recent auto-processing of ISIS-2 topside digital ionograms using the TOPIST software, during 9-day intervals associated with the 125 large magnetic storm minimum Dst times yielded the following results: 31 intervals had 10 or more manual-scaled profiles (21 intervals had more than 100 profiles and 5 of these had more than 1,000 profiles), and 34 intervals had 10 or more TOPIST profiles (2 intervals had more than 100 profiles). In addition, a search of the available Alouette-2, ISIS-1 and ISIS-2 digital ionograms during the above periods has yielded encouraging initial results in that many ISIS-1 ionograms were found for the early time intervals. Future work will include the search for 35-mm film ionograms during selected intervals. This presentation will illustrate the results of this investigation to date.

  11. Large capacity storage of integrated objects before change blindness.

    PubMed

    Landman, Rogier; Spekreijse, Henk; Lamme, Victor A F

    2003-01-01

    Normal people have a strikingly low ability to detect changes in a visual scene. This has been taken as evidence that the brain represents only a few objects at a time, namely those currently in the focus of attention. In the present study, subjects were asked to detect changes in the orientation of rectangular figures in a textured display across a 1600 ms gray interval. In the first experiment, change detection improved when the location of a possible change was cued during the interval. The cue remained effective during the entire interval, but after the interval, it was ineffective, suggesting that an initially large representation was overwritten by the post-change display. To control for an effect of light intensity during the interval on the decay of the representation, we compared performance with a gray or a white interval screen in a second experiment. We found no difference between these conditions. In the third experiment, attention was occasionally misdirected during the interval by first cueing the wrong figure, before cueing the correct figure. This did not compromise performance compared to a single cue, indicating that when an item is attentionally selected, the representation of yet unchosen items remains available. In the fourth experiment, the cue was shown to be effective when changes in figure size and orientation were randomly mixed. At the time the cue appeared, subjects could not know whether size or orientation would change, therefore these results suggest that the representation contains features in their 'bound' state. Together, these findings indicate that change blindness involves overwriting of a large capacity representation by the post-change display.

  12. Finding Spatio-Temporal Patterns in Large Sensor Datasets

    ERIC Educational Resources Information Center

    McGuire, Michael Patrick

    2010-01-01

    Spatial or temporal data mining tasks are performed in the context of the relevant space, defined by a spatial neighborhood, and the relevant time period, defined by a specific time interval. Furthermore, when mining large spatio-temporal datasets, interesting patterns typically emerge where the dataset is most dynamic. This dissertation is…

  13. Reading a 400,000-year record of earthquake frequency for an intraplate fault

    NASA Astrophysics Data System (ADS)

    Williams, Randolph T.; Goodwin, Laurel B.; Sharp, Warren D.; Mozley, Peter S.

    2017-05-01

    Our understanding of the frequency of large earthquakes at timescales longer than instrumental and historical records is based mostly on paleoseismic studies of fast-moving plate-boundary faults. Similar study of intraplate faults has been limited until now, because intraplate earthquake recurrence intervals are generally long (10s to 100s of thousands of years) relative to conventional paleoseismic records determined by trenching. Long-term variations in the earthquake recurrence intervals of intraplate faults therefore are poorly understood. Longer paleoseismic records for intraplate faults are required both to better quantify their earthquake recurrence intervals and to test competing models of earthquake frequency (e.g., time-dependent, time-independent, and clustered). We present the results of U-Th dating of calcite veins in the Loma Blanca normal fault zone, Rio Grande rift, New Mexico, United States, that constrain earthquake recurrence intervals over much of the past ~550 ka—the longest direct record of seismic frequency documented for any fault to date. The 13 distinct seismic events delineated by this effort demonstrate that for >400 ka, the Loma Blanca fault produced periodic large earthquakes, consistent with a time-dependent model of earthquake recurrence. However, this time-dependent series was interrupted by a cluster of earthquakes at ~430 ka. The carbon isotope composition of calcite formed during this seismic cluster records rapid degassing of CO2, suggesting an interval of anomalous fluid source. In concert with U-Th dates recording decreased recurrence intervals, we infer seismicity during this interval records fault-valve behavior. These data provide insight into the long-term seismic behavior of the Loma Blanca fault and, by inference, other intraplate faults.

  14. Effects of practice on variability in an isochronous serial interval production task: asymptotical levels of tapping variability after training are similar to those of musicians.

    PubMed

    Madison, Guy; Karampela, Olympia; Ullén, Fredrik; Holm, Linus

    2013-05-01

    Timing permeates everyday activities such as walking, dancing and music, yet the effect of short-term practice in this ubiquitous activity is largely unknown. In two training experiments involving sessions spread across several days, we examined short-term practice effects on timing variability in a sequential interval production task. In Experiment 1, we varied the mode of response (e.g., drumstick and finger tapping) and the level of sensory feedback. In Experiment 2 we varied the interval in 18 levels ranging from 500 ms to 1624 ms. Both experiments showed a substantial decrease in variability within the first hour of practice, but little thereafter. This effect was similar across mode of response, amount of feedback, and interval duration, and was manifested as a reduction in both local variability (between neighboring intervals) and drift (fluctuation across multiple intervals). The results suggest mainly effects on motor implementation rather than on cognitive timing processes, and have methodological implications for timing studies that have not controlled for practice.

  15. Waking and scrambling in holographic heating up

    NASA Astrophysics Data System (ADS)

    Ageev, D. S.; Aref'eva, I. Ya.

    2017-10-01

    Using holographic methods, we study the heating-up process in quantum field theory. As a holographic dual of this process, we use absorption of a thin shell on a black brane. We find the explicit form of the time evolution of the quantum mutual information during heating up from the temperature T_i to the temperature T_f in a system of two intervals in two-dimensional space-time. We determine the geometric characteristics of the system under which the time dependence of the mutual information has a bell shape: it is equal to zero at the initial instant, becomes positive at some subsequent instant, further attains its maximum, and again decreases to zero. Such a behavior of the mutual information occurs in the process of photosynthesis. We show that if the distance x between the intervals is less than log 2/(2π T_i), then the evolution of the holographic mutual information has a bell shape only for intervals whose lengths are bounded from above and below. For sufficiently large x, i.e., for x > log 2/(2π T_i), the bell-like shape of the time dependence of the quantum mutual information is present only for sufficiently large intervals. Moreover, the zone narrows as T_i increases and widens as T_f increases.

  16. Mouse Activity across Time Scales: Fractal Scenarios

    PubMed Central

    Lima, G. Z. dos Santos; Lobão-Soares, B.; do Nascimento, G. C.; França, Arthur S. C.; Muratori, L.; Ribeiro, S.; Corso, G.

    2014-01-01

    In this work we devise a classification of mouse activity patterns based on accelerometer data using Detrended Fluctuation Analysis. We use two characteristic mouse behavioural states as benchmarks in this study: waking in free activity and slow-wave sleep (SWS). In both situations we find roughly the same pattern: for short time intervals we observe high correlation in activity - a typical 1/f complex pattern - while for large time intervals there is anti-correlation. High correlation over short intervals is related to highly coordinated muscle activity. In the waking state we associate high correlation both with muscle activity and with stereotyped mouse movements (grooming, walking, etc.). On the other hand, the anti-correlation observed over large time scales during SWS appears related to a feedback autonomic response. The transition from the correlated regime at short scales to the anti-correlated regime at large scales during SWS is set by the respiratory cycle interval, while during the waking state this transition occurs at the time scale corresponding to the duration of the stereotyped mouse movements. Furthermore, we find that the waking state is characterized by longer time scales than SWS and by a softer transition from correlation to anti-correlation. Moreover, this soft transition in the waking state encompasses a behavioural time-scale window that gives rise to a multifractal pattern. We believe that the observed multifractality in mouse activity is formed by the integration of several stereotyped movements, each with a characteristic time correlation. Finally, we compare scaling properties of body-acceleration fluctuation time series during sleep and wake periods for healthy mice. Interestingly, differences between sleep and wake in the scaling exponents are comparable to previous findings regarding the human heartbeat. 
Complementarily, the nature of these sleep-wake dynamics could lead to a better understanding of neuroautonomic regulation mechanisms. PMID:25275515
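Detrended Fluctuation Analysis itself is compact enough to sketch. The toy implementation below (illustrative, not the authors' code) integrates the series, linearly detrends it in boxes of each size s, and fits the scaling exponent alpha from log F(s) versus log s; white noise gives alpha near 0.5, with alpha > 0.5 indicating the kind of positive correlation seen at short time scales and alpha < 0.5 the anti-correlation seen at large ones:

```python
import math
import random

def dfa_alpha(x, scales=(8, 16, 32, 64)):
    """Minimal Detrended Fluctuation Analysis: returns the scaling
    exponent alpha from a log-log fit of fluctuation F(s) vs box size s."""
    mean = sum(x) / len(x)
    y, acc = [], 0.0
    for v in x:
        acc += v - mean
        y.append(acc)                                   # integrated profile
    logs, logF = [], []
    for s in scales:
        tm = (s - 1) / 2.0
        tt = sum((t - tm) ** 2 for t in range(s))
        sq, nb = 0.0, len(y) // s
        for b in range(nb):
            seg = y[b * s:(b + 1) * s]
            sm = sum(seg) / s
            # least-squares linear detrend within the box
            beta = sum((t - tm) * (v - sm) for t, v in enumerate(seg)) / tt
            sq += sum((v - sm - beta * (t - tm)) ** 2
                      for t, v in enumerate(seg))
        logs.append(math.log(s))
        logF.append(math.log(math.sqrt(sq / (nb * s))))
    lm, fm = sum(logs) / len(logs), sum(logF) / len(logF)
    return (sum((l - lm) * (f - fm) for l, f in zip(logs, logF))
            / sum((l - lm) ** 2 for l in logs))

rng = random.Random(0)
white = [rng.gauss(0.0, 1.0) for _ in range(4096)]
print(round(dfa_alpha(white), 2))  # close to 0.5 for uncorrelated noise
```

A crossover in alpha between short and long box sizes, as reported in the abstract, would show up as a change of slope in the log F(s) vs log s plot rather than a single exponent.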

  17. Statistical properties of share volume traded in financial markets

    NASA Astrophysics Data System (ADS)

    Gopikrishnan, Parameswaran; Plerou, Vasiliki; Gabaix, Xavier; Stanley, H. Eugene

    2000-10-01

    We quantitatively investigate the ideas behind the often-expressed adage "it takes volume to move stock prices," and study the statistical properties of the number of shares traded QΔt for a given stock in a fixed time interval Δt. We analyze transaction data for the largest 1000 stocks for the two-year period 1994-95, using a database that records every transaction for all securities in three major US stock markets. We find that the distribution P(QΔt) displays a power-law decay, and that the time correlations in QΔt display long-range persistence. Further, we investigate the relation between QΔt and the number of transactions NΔt in a time interval Δt, and find that the long-range correlations in QΔt are largely due to those of NΔt. Our results are consistent with the interpretation that the large equal-time correlation previously found between QΔt and the absolute value of the price change |GΔt| (related to volatility) is largely due to NΔt.
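Computing QΔt and NΔt from raw transaction data is a simple binning step; a minimal sketch (illustrative names and data, not the authors' pipeline):

```python
from collections import defaultdict

def aggregate(transactions, dt):
    """Bin (time, shares) transactions into windows of length dt, giving
    Q (total shares traded) and N (number of trades) per non-empty window."""
    Q, N = defaultdict(float), defaultdict(int)
    for t, q in transactions:
        w = int(t // dt)
        Q[w] += q
        N[w] += 1
    windows = sorted(Q)
    return [Q[w] for w in windows], [N[w] for w in windows]

trades = [(0.5, 100), (0.9, 50), (1.2, 200), (3.7, 25)]
print(aggregate(trades, 1.0))  # ([150.0, 200.0, 25.0], [2, 1, 1])
```

The power-law tail of P(QΔt) and the persistence of its autocorrelation would then be estimated from these per-window series.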

  18. Time-resolved speckle effects on the estimation of laser-pulse arrival times

    NASA Technical Reports Server (NTRS)

    Tsai, B.-M.; Gardner, C. S.

    1985-01-01

    A maximum-likelihood (ML) estimator of the pulse arrival in laser ranging and altimetry is derived for the case of a pulse distorted by shot noise and time-resolved speckle. The performance of the estimator is evaluated for pulse reflections from flat diffuse targets and compared with the performance of a suboptimal centroid estimator and a suboptimal Bar-David ML estimator derived under the assumption of no speckle. In the large-signal limit the accuracy of the estimator was found to improve as the width of the receiver observational interval increases. The timing performance of the estimator is expected to be highly sensitive to background noise when the received pulse energy is high and the receiver observational interval is large. Finally, in the speckle-limited regime the ML estimator performs considerably better than the suboptimal estimators.

  19. A model of interval timing by neural integration.

    PubMed

    Simen, Patrick; Balci, Fuat; de Souza, Laura; Cohen, Jonathan D; Holmes, Philip

    2011-06-22

    We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes, that correlations among them can be largely cancelled by balancing excitation and inhibition, that neural populations can act as integrators, and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys, and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule's predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior.
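The core of the model — a noisy accumulator whose drift is set by the interval being timed and whose noise variance scales with the drift (Poisson-like), yielding scale-invariant response-time distributions — can be sketched as a toy simulation (illustrative parameter values, not the paper's fits):

```python
import math
import random

def passage_times(T, n=800, z=1.0, c=0.02, dt=0.002, seed=7):
    """First-passage times of a noisy accumulator x rising to threshold z.
    Drift a = z/T gives a mean time near T; noise variance proportional
    to the drift (sigma^2 = c*a) makes the coefficient of variation
    independent of T, i.e. timing is scale-invariant."""
    rng = random.Random(seed)
    a = z / T
    sig = math.sqrt(c * a)
    times = []
    for _ in range(n):
        x, t = 0.0, 0.0
        while x < z:
            x += a * dt + sig * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            t += dt
        times.append(t)
    return times

for T in (1.0, 2.0):
    ts = passage_times(T)
    m = sum(ts) / len(ts)
    cv = (sum((t - m) ** 2 for t in ts) / len(ts)) ** 0.5 / m
    print(f"T={T}: mean {m:.2f}, CV {cv:.3f}")  # CV ~ sqrt(c/z) for both T
```

The resulting first-passage distribution is right-skewed (inverse-Gaussian-like), which is the skewness feature the paper highlights.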

  20. Voter model with non-Poissonian interevent intervals

    NASA Astrophysics Data System (ADS)

    Takaguchi, Taro; Masuda, Naoki

    2011-09-01

    Recent analysis of social communications among humans has revealed that the interval between interactions for a pair of individuals and for an individual often follows a long-tail distribution. We investigate the effect of such a non-Poissonian nature of human behavior on dynamics of opinion formation. We use a variant of the voter model and numerically compare the time to consensus of all the voters with different distributions of interevent intervals and different networks. Compared with the exponential distribution of interevent intervals (i.e., the standard voter model), the power-law distribution of interevent intervals slows down consensus on the ring. This is because of the memory effect; in the power-law case, the expected time until the next update event on a link is large if the link has not had an update event for a long time. On the complete graph, the consensus time in the power-law case is close to that in the exponential case. Regular graphs bridge these two results such that the slowing down of the consensus in the power-law case as compared to the exponential case is less pronounced as the degree increases.
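The memory effect described above can be checked with a short simulation (a sketch with illustrative parameters: both distributions have mean 1, and the Pareto exponent 2.5 stands in for a generic power law). For exponential interevent intervals the expected wait to the next event is independent of how long the link has been quiet; for power-law intervals it grows with the elapsed time:

```python
import random

def mean_residual(draw, age, samples=2000, seed=3):
    """Average remaining waiting time, given that `age` has already
    elapsed since the last event without a new one occurring."""
    rng = random.Random(seed)
    total, count = 0.0, 0
    while count < samples:
        x = draw(rng)
        if x > age:            # condition on surviving past `age`
            total += x - age
            count += 1
    return total / count

exp_draw = lambda rng: rng.expovariate(1.0)                      # mean-1 exponential
par_draw = lambda rng: 0.6 * (1.0 - rng.random()) ** (-1 / 2.5)  # mean-1 Pareto

print(mean_residual(exp_draw, 3.0))  # ~1: memoryless
print(mean_residual(par_draw, 3.0))  # ~2: grows with elapsed time
```

For the Pareto case the expected residual time given age t is t/(alpha - 1), so long-quiet links are expected to stay quiet, which is what slows consensus on the ring.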

  21. The interval between prothrombin time tests and the quality of oral anticoagulants treatment in patients with chronic atrial fibrillation.

    PubMed

    Shalev, Varda; Rogowski, Ori; Shimron, Orit; Sheinberg, Bracha; Shapira, Itzhak; Seligsohn, Uri; Berliner, Shlomo; Misgav, Mudi

    2007-01-01

    The incidence of stroke in patients with atrial fibrillation (AF) can be significantly reduced with warfarin therapy, especially if optimally controlled. The aim was to evaluate the effect of the interval between consecutive prothrombin time measurements on the time in therapeutic range (INR 2-3) in a cohort of patients with AF on chronic warfarin treatment in the community. All INR measurements available from a relatively large cohort of patients with chronic AF were reviewed, and the mean interval between consecutive INR tests for each patient was correlated with the time in therapeutic range (TTR). Altogether, 251,916 INR measurements performed in 4408 patients over a period of seven years were reviewed. Sixty percent of patients had their INR measured on average every 2 to 3 weeks, and most others were followed at intervals of 4 weeks or longer. A small proportion (3.6%) had their INR measured on average every week. A significant decline in the time in therapeutic range was observed as the intervals between tests increased: at intervals of one to three weeks the TTR was 48%, at 4 weeks 45%, and at 5 weeks 41% (P<0.0005). A five percent increment in TTR was observed if more tests were performed at multiples of exactly 7 days (43% vs. 48%, P<0.0001). Better control, with an increase in TTR, was observed in patients with atrial fibrillation when prothrombin time tests were performed at regular intervals of no longer than 3 weeks.
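TTR in such studies is commonly computed by Rosendaal-style linear interpolation between consecutive INR tests; the abstract does not state which method this study used, so the sketch below is illustrative only:

```python
def time_in_range(days, inrs, low=2.0, high=3.0, steps=1000):
    """Time in therapeutic range (TTR) by linear interpolation: assume
    the INR drifts linearly between consecutive tests and accumulate
    the fraction of elapsed time spent inside [low, high]."""
    in_range = total = 0.0
    for (d0, v0), (d1, v1) in zip(zip(days, inrs), zip(days[1:], inrs[1:])):
        span = d1 - d0
        total += span
        for k in range(steps):                 # midpoint subsampling
            v = v0 + (v1 - v0) * (k + 0.5) / steps
            if low <= v <= high:
                in_range += span / steps
    return in_range / total

# INR rising linearly from 1.5 to 3.5 over 10 days spends half its time in [2, 3]
print(time_in_range([0, 10], [1.5, 3.5]))  # ~0.5
```

Longer gaps between tests force the interpolation over wider spans, which is one mechanistic reason measured TTR degrades as the testing interval grows.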

  2. 75 FR 37311 - Airplane and Engine Certification Requirements in Supercooled Large Drop, Mixed Phase, and Ice...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-29

    ... maximum time interval between any engine run-ups from idle and the minimum ambient temperature associated with that run-up interval. This limitation is necessary because we do not currently have any specific requirements for run-up procedures for engine ground operation in icing conditions. The engine run-up procedure...

  3. A model of interval timing by neural integration

    PubMed Central

    Simen, Patrick; Balci, Fuat; deSouza, Laura; Cohen, Jonathan D.; Holmes, Philip

    2011-01-01

    We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes; that correlations among them can be largely cancelled by balancing excitation and inhibition; that neural populations can act as integrators; and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule’s predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior. PMID:21697374
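The core of the model, a noisy firing-rate representation that ramps linearly toward a response threshold, can be sketched in a few lines. This is an illustrative simplification (names and noise scaling are our own, chosen so that the response-time standard deviation grows in proportion to the mean, i.e., scale invariance), not the authors' full neural implementation:

```python
import random

def timed_response(T, c=0.2, dt=0.01, rng=random):
    """One trial: a noisy accumulator ramps from 0 to threshold 1 with
    drift 1/T, so the mean first-passage time is T."""
    drift = 1.0 / T
    sigma = c * drift ** 0.5   # noise scaling that yields scale-invariant timing
    x, t = 0.0, 0.0
    while x < 1.0:
        x += drift * dt + sigma * rng.gauss(0.0, dt ** 0.5)
        t += dt
    return t
```

Under this scaling the first-passage times follow an inverse-Gaussian (Wald) distribution with a coefficient of variation independent of T, the right-skewed, scale-invariant form the abstract refers to.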

  4. Increasing Recovery Time Between Injuries Improves Cognitive Outcome After Repetitive Mild Concussive Brain Injuries in Mice

    PubMed Central

    Zhang, Jimmy; Mannix, Rebekah; Whalen, Michael J.

    2012-01-01

BACKGROUND: Although previous evidence suggests that the cognitive effects of concussions are cumulative, the effect of time interval between repeat concussions is largely unknown. OBJECTIVE: To determine the effect of time interval between repeat concussions on the cognitive function of mice. METHODS: We used a weight-drop model of concussion to subject anesthetized mice to 1, 3, 5, or 10 concussions, each a day apart. Additional mice were subjected to 5 concussions at varying time intervals: daily, weekly, and monthly. Morris water maze performance was measured 24 hours, 1 month, and 1 year after final injury. RESULTS: After 1 concussion, injured and sham-injured mice performed similarly in the Morris water maze. As the number of concussions increased, injured mice performed worse than sham-injured mice. Mice sustaining 5 concussions either 1 day or 1 week apart performed worse than sham-injured mice. When 5 concussions were delivered at 1-month time intervals, no difference in Morris water maze performance was observed between injured and sham-injured mice. After a 1-month recovery period, mice that sustained 5 concussions at daily and weekly time intervals continued to perform worse than sham-injured mice. One year after the final injury, mice sustaining 5 concussions at a daily time interval still performed worse than sham-injured mice. CONCLUSION: When delivered within a period of vulnerability, the cognitive effects of multiple concussions are cumulative, persistent, and may be permanent. Increasing the time interval between concussions attenuates the effects on cognition. When multiple concussions are sustained by mice daily, the effects on cognition are long term. PMID:22743360

  5. Single photon detection and timing in the Lunar Laser Ranging Experiment.

    NASA Technical Reports Server (NTRS)

    Poultney, S. K.

    1972-01-01

    The goals of the Lunar Laser Ranging Experiment lead to the need for the measurement of a 2.5 sec time interval to an accuracy of a nanosecond or better. The systems analysis which included practical retroreflector arrays, available laser systems, and large telescopes led to the necessity of single photon detection. Operation under all background illumination conditions required auxiliary range gates and extremely narrow spectral and spatial filters in addition to the effective gate provided by the time resolution. Nanosecond timing precision at relatively high detection efficiency was obtained using the RCA C31000F photomultiplier and Ortec 270 constant fraction of pulse-height timing discriminator. The timing accuracy over the 2.5 sec interval was obtained using a digital interval with analog vernier ends. Both precision and accuracy are currently checked internally using a triggerable, nanosecond light pulser. Future measurements using sub-nanosecond laser pulses will be limited by the time resolution of single photon detectors.

  6. Analysis of aggregated tick returns: Evidence for anomalous diffusion

    NASA Astrophysics Data System (ADS)

    Weber, Philipp

    2007-01-01

    In order to investigate the origin of large price fluctuations, we analyze stock price changes of ten frequently traded NASDAQ stocks in the year 2002. Though the influence of the trading frequency on the aggregate return in a certain time interval is important, it cannot alone explain the heavy-tailed distribution of stock price changes. For this reason, we analyze intervals with a fixed number of trades in order to eliminate the influence of the trading frequency and investigate the relevance of other factors for the aggregate return. We show that in tick time the price follows a discrete diffusion process with a variable step width while the difference between the number of steps in positive and negative direction in an interval is Gaussian distributed. The step width is given by the return due to a single trade and is long-term correlated in tick time. Hence, its mean value can well characterize an interval of many trades and turns out to be an important determinant for large aggregate returns. We also present a statistical model reproducing the cumulative distribution of aggregate returns. For an accurate agreement with the empirical distribution, we also take into account asymmetries of the step widths in different directions together with cross correlations between these asymmetries and the mean step width as well as the signs of the steps.
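The fixed-trade-count aggregation described above can be sketched as follows; this is an illustrative helper (names assumed), not the authors' analysis code. Each block of a fixed number of trades yields an aggregate return and a mean absolute step width:

```python
def block_stats(tick_returns, n_trades):
    """Split a tick-return series into non-overlapping blocks of a fixed
    trade count; return (aggregate return, mean absolute step width) per block."""
    blocks = [tick_returns[i:i + n_trades]
              for i in range(0, len(tick_returns) - n_trades + 1, n_trades)]
    return [(sum(b), sum(abs(r) for r in b) / n_trades) for b in blocks]
```

On real tick data, blocks with a large mean step width are the ones that tend to produce the large aggregate returns discussed in the abstract.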

  7. Coupling between perception and action timing during sensorimotor synchronization.

    PubMed

    Serrien, Deborah J; Spapé, Michiel M

    2010-12-17

    Time is an important parameter in behaviour, especially when synchronization with external events is required. To evaluate the nature of the association between perception and action timing, this study introduced pitch accented tones during performance of a sensorimotor tapping task. Furthermore, regularity of the pacing cues was modified by small (subliminal) or large (conscious) timing perturbations. A global analysis across the intervals showed that repeated accented tones increased the tap-tone asynchrony in the regular (control) and irregular (subliminal) trials but not in the irregular trials with awareness of the perturbations. Asynchrony variability demonstrated no effect of accentuation in the regular and subliminal irregular trials, whereas it increased in the conscious irregular trials. A local analysis of the intervals showed that pitch accentuation lengthened the duration of the tapping responses, but only in the irregular trials with large timing perturbations. These data underline that common timing processes are automatically engaged for perception and action, although this arrangement can be overturned by cognitive intervention. Overall, the findings highlight a flexible association between perception and action timing within a functional information processing framework. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  8. A numerical study of the laminar necklace vortex system and its effect on the wake for a circular cylinder

    NASA Astrophysics Data System (ADS)

    Kirkil, Gokhan; Constantinescu, George

    2014-11-01

Large Eddy Simulation is used to investigate the structure of the laminar horseshoe vortex (HV) system and the dynamics of the necklace vortices as they fold around the base of a circular cylinder mounted on the flat bed of an open channel, for Reynolds numbers defined with the cylinder diameter, D, smaller than 4,460. The study concentrates on the analysis of the structure of the HV system in the periodic breakaway sub-regime, which is characterized by the formation of three main necklace vortices. For the relatively shallow flow conditions considered in this study (H/D ≈ 1, where H is the channel depth), at times the disturbances induced by the legs of the necklace vortices do not allow the separated shear layers (SSLs) on the two sides of the cylinder to interact in a way that allows the vorticity redistribution mechanism to lead to the formation of a new wake roller. As a result, the shedding of large-scale rollers in the turbulent wake is suppressed for relatively long periods of time. Simulation results show that the wake structure changes randomly between time intervals when large-scale rollers are forming and are convected in the wake (von Karman regime) and time intervals when the rollers do not form.

  9. Theoretical implications of quantitative properties of interval timing and probability estimation in mouse and rat.

    PubMed

    Kheifets, Aaron; Freestone, David; Gallistel, C R

    2017-07-01

In three experiments with mice (Mus musculus) and rats (Rattus norvegicus), we used a switch paradigm to measure quantitative properties of the interval-timing mechanism. We found that: 1) Rodents adjusted the precision of their timed switches in response to changes in the interval between the short and long feed latencies (the temporal goalposts). 2) The variability in the timing of the switch response was reduced or unchanged in the face of large trial-to-trial random variability in the short and long feed latencies. 3) The adjustment in the distribution of switch latencies in response to changes in the relative frequency of short and long trials was sensitive to the asymmetry in the Kullback-Leibler divergence. The three results suggest that durations are represented with adjustable precision, that they are timed by multiple timers, and that there is a trial-by-trial (episodic) record of feed latencies in memory. © 2017 Society for the Experimental Analysis of Behavior.
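Point 3 turns on the fact that the Kullback-Leibler divergence is asymmetric: D(p‖q) ≠ D(q‖p) in general. A short illustration for Bernoulli distributions (illustrative only, not the authors' analysis):

```python
from math import log

def kl_bernoulli(p, q):
    """KL divergence D(Bernoulli(p) || Bernoulli(q)) in nats, for 0 < p, q < 1."""
    return p * log(p / q) + (1 - p) * log((1 - p) / (1 - q))
```

For example, `kl_bernoulli(0.2, 0.5)` and `kl_bernoulli(0.5, 0.2)` differ, which is the asymmetry the switch-latency distributions were sensitive to.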

  10. An actual load forecasting methodology by interval grey modeling based on the fractional calculus.

    PubMed

    Yang, Yang; Xue, Dingyü

    2017-07-17

The operation processes of a thermal power plant are measured as real-time data, and a large number of historical interval data can be obtained from the dataset. Within defined periods of time, this interval information can support decision making and equipment maintenance. Actual load is one of the most important parameters, and the trends hidden in the historical data reflect the overall operating status of the equipment. However, with interval grey numbers the modeling and prediction process is more complicated than with real numbers. In order not to lose any information, this paper represents each interval by the geometric features of its area and middle-point lines, which are proved to carry the same information as the original interval data. A grey prediction model for interval grey numbers based on fractional-order accumulation calculus is proposed. Compared with the integer-order model, the proposed method has more degrees of freedom and better modeling and prediction performance, and it can be widely applied to small samples of historical interval sequences from industry. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  11. Hematology and biochemistry reference intervals for Ontario commercial nursing pigs close to the time of weaning

    PubMed Central

    Perri, Amanda M.; O’Sullivan, Terri L.; Harding, John C.S.; Wood, R. Darren; Friendship, Robert M.

    2017-01-01

    The evaluation of pig hematology and biochemistry parameters is rarely done largely due to the costs associated with laboratory testing and labor, and the limited availability of reference intervals needed for interpretation. Within-herd and between-herd biological variation of these values also make it difficult to establish reference intervals. Regardless, baseline reference intervals are important to aid veterinarians in the interpretation of blood parameters for the diagnosis and treatment of diseased swine. The objective of this research was to provide reference intervals for hematology and biochemistry parameters of 3-week-old commercial nursing piglets in Ontario. A total of 1032 pigs lacking clinical signs of disease from 20 swine farms were sampled for hematology and iron panel evaluation, with biochemistry analysis performed on a subset of 189 randomly selected pigs. The 95% reference interval, mean, median, range, and 90% confidence intervals were calculated for each parameter. PMID:28373729

  12. Modeling of the static recrystallization for 7055 aluminum alloy by cellular automaton

    NASA Astrophysics Data System (ADS)

    Zhang, Tao; Lu, Shi-hong; Zhang, Jia-bin; Li, Zheng-fang; Chen, Peng; Gong, Hai; Wu, Yun-xin

    2017-09-01

In order to simulate the flow behavior and microstructure evolution during the pass interval period of the multi-pass deformation process, models of static recovery (SR) and static recrystallization (SRX) by the cellular automaton (CA) method for the 7055 aluminum alloy were established. Double-pass hot compression tests were conducted to acquire flow stress and microstructure variation during the pass interval period. On the basis of the material constants obtained from the compression tests, models of the SR, incubation period, nucleation rate and grain growth were fitted by the least-squares method. A model of the grain topology and a statistical computation of the CA results were also introduced. The effects of the pass interval time, temperature, strain, strain rate and initial grain size on the microstructure variation for the SRX of the 7055 aluminum alloy were studied. The results show that a long pass interval time, large strain, high temperature and large strain rate are beneficial for finer grains during the pass interval period. The stable size of the static recrystallized grain is independent of the initial grain size and mainly depends on the strain rate and temperature. The SRX plays a vital role in grain refinement, while the SR has no effect on the variation of microstructure morphology. Comparisons of flow stress and microstructure between the simulated and experimental results show that the established CA models can accurately predict the flow stress and microstructure evolution during the pass interval period, and provide guidance for the selection of optimized parameters for the multi-pass deformation process.

  13. Stability of Predictors of Mortality after Spinal Cord Injury

    PubMed Central

    Krause, James S.; Saunders, Lee L.; Zhai, Yusheng

    2011-01-01

Objective To identify the stability of socio-environmental, behavioral, and health predictors of mortality over an eight-year time frame. Study Design Cohort study. Setting Data were analyzed at a large medical university in the Southeast United States of America (USA). Methods Adults with residual impairment from a spinal cord injury (SCI) who were at least one year post-injury at assessment were recruited through a large specialty hospital in the Southeast USA. 1209 participants were included in the final analysis. A piecewise exponential model with 2 equal time intervals (eight years total) was used to assess the stability of the hazard and the predictors over time. Results The hazard changed significantly over time: it was significantly lower in the first time interval than in the second. There were no interactions between the socio-environmental, behavioral, or health factors and time, although there was a significant interaction between age at injury (a demographic variable) and time. Conclusion These results suggest there is stability in the association between the predictors and mortality, even over an eight-year time period. Results reinforce the use of historic variables for prediction of mortality in persons with SCI. PMID:22231541

  14. MESOSCOPIC MODELING OF STOCHASTIC REACTION-DIFFUSION KINETICS IN THE SUBDIFFUSIVE REGIME

    PubMed Central

    BLANC, EMILIE; ENGBLOM, STEFAN; HELLANDER, ANDREAS; LÖTSTEDT, PER

    2017-01-01

Subdiffusion has been proposed as an explanation of various kinetic phenomena inside living cells. In order to facilitate large-scale computational studies of subdiffusive chemical processes, we extend a recently suggested mesoscopic model of subdiffusion into an accurate and consistent reaction-subdiffusion computational framework. Two different possible models of chemical reactions are revealed and some basic dynamic properties are derived. In certain cases these mesoscopic models have a direct interpretation at the macroscopic level as fractional partial differential equations on a bounded time interval. Through analysis and numerical experiments we estimate the macroscopic effects of reactions under subdiffusive mixing. The models display properties also observed in experiments: for a short time interval the behavior of the diffusion and the reaction is ordinary, in an intermediate interval the behavior is anomalous, and at long times the behavior is ordinary again. PMID:29046618

  15. Relationship Between Maximum Tsunami Amplitude and Duration of Signal

    NASA Astrophysics Data System (ADS)

    Kim, Yoo Yin; Whitmore, Paul M.

    2014-12-01

All available tsunami observations at tide gauges situated along the North American coast were examined to determine whether there is any clear relationship between maximum amplitude and signal duration. In total, 89 historical tsunami recordings generated by 13 major earthquakes between 1952 and 2011 were investigated. Tidal variations were filtered out of the signal, and the duration between the arrival time and the time at which the signal drops and stays below 0.3 m amplitude was computed. The processed tsunami time series were evaluated, and a linear least-squares fit with a 95 % confidence interval was examined to compare tsunami durations with maximum tsunami amplitudes in the study region. The confidence interval is roughly 20 h over the range of maximum tsunami amplitudes of interest. This relatively large confidence interval likely results from variations in local resonance effects, late-arriving reflections, and other effects.
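A least-squares fit of duration against amplitude with a crude confidence band can be sketched as below. This is a simplified illustration (names our own; the ±1.96·s large-sample band stands in for whatever exact t-based interval the authors used):

```python
def linear_fit_ci(x, y, z=1.96):
    """Ordinary least squares y ≈ a + b*x; returns (a, b, half_width), where
    half_width = z times the residual standard deviation (approximate 95% band)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    s = (sum(r * r for r in resid) / (n - 2)) ** 0.5   # residual std, n-2 dof
    return a, b, z * s
```

A wide half-width, like the roughly 20 h band reported above, indicates large scatter of durations about the fitted line.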

  16. Magnetospheric electric fields and auroral oval

    NASA Technical Reports Server (NTRS)

    Laakso, Harri; Pedersen, Arne; Craven, John D.; Frank, L. A.

    1992-01-01

DC electric field variations at synchronous orbit (GEOS 2) during four substorms in the time sector 19 to 01 LT were investigated. Simultaneously, the imaging photometer on board DE 1 provided auroral images that are also utilized. Substorm onset is defined here as a sudden appearance of large electric fields. During the growth phase, the orientation of the electric field begins to oscillate some 30 min prior to onset. About 10 min before the onset, GEOS 2 starts moving into a more tenuous plasma, probably due to a thinning of the current sheet. The onset is followed by a period of 10 to 15 min during which large electric fields occur. This interval can be divided into two parts. During the first interval, which lasts 4 to 8 min, very large fields of 8 to 20 mV/m are observed, while the second interval contains relatively large fields (2 to 5 mV/m). A few min after the onset, the spacecraft returns to a plasma region of higher electron fluxes, usually larger than before the substorm. Some 30 min after onset, enhanced activity, lasting about 10 min, appears in the electric field. One of the selected events offers a good opportunity to study the formation and development of the Westward Traveling Surge (WTS). During the traversal of the leading edge of the WTS (approximately 8 min), a stable wave mode at 5.7 mHz is detected.

  17. Generalization of Turbulent Pair Dispersion to Large Initial Separations

    NASA Astrophysics Data System (ADS)

    Shnapp, Ron; Liberzon, Alex; International Collaboration for Turbulence Research

    2018-06-01

    We present a generalization of turbulent pair dispersion to large initial separations (η

  18. Applying the Pseudo-Panel Approach to International Large-Scale Assessments: A Methodology for Analyzing Subpopulation Trend Data

    ERIC Educational Resources Information Center

    Hooper, Martin

    2017-01-01

    TIMSS and PIRLS assess representative samples of students at regular intervals, measuring trends in student achievement and student contexts for learning. Because individual students are not tracked over time, analysis of international large-scale assessment data is usually conducted cross-sectionally. Gustafsson (2007) proposed examining the data…

  19. Detection of timescales in evolving complex systems

    PubMed Central

    Darst, Richard K.; Granell, Clara; Arenas, Alex; Gómez, Sergio; Saramäki, Jari; Fortunato, Santo

    2016-01-01

    Most complex systems are intrinsically dynamic in nature. The evolution of a dynamic complex system is typically represented as a sequence of snapshots, where each snapshot describes the configuration of the system at a particular instant of time. This is often done by using constant intervals but a better approach would be to define dynamic intervals that match the evolution of the system’s configuration. To this end, we propose a method that aims at detecting evolutionary changes in the configuration of a complex system, and generates intervals accordingly. We show that evolutionary timescales can be identified by looking for peaks in the similarity between the sets of events on consecutive time intervals of data. Tests on simple toy models reveal that the technique is able to detect evolutionary timescales of time-varying data both when the evolution is smooth as well as when it changes sharply. This is further corroborated by analyses of several real datasets. Our method is scalable to extremely large datasets and is computationally efficient. This allows a quick, parameter-free detection of multiple timescales in the evolution of a complex system. PMID:28004820
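The similarity-between-consecutive-intervals idea can be sketched with a Jaccard measure over event sets; this is a minimal illustration with assumed names, not the authors' released implementation. Sweeping the window width and looking for peaks in the resulting similarities identifies candidate timescales:

```python
def interval_similarities(events, window):
    """Jaccard similarity between the sets of event labels observed in
    consecutive time windows of a given width. events: (timestamp, label)."""
    t_max = max(t for t, _ in events)
    sims = []
    t = 0.0
    while t + 2 * window <= t_max:
        first = {e for ts, e in events if t <= ts < t + window}
        second = {e for ts, e in events if t + window <= ts < t + 2 * window}
        union = first | second
        sims.append(len(first & second) / len(union) if union else 1.0)
        t += window
    return sims
```

High similarity between adjacent windows means the system's configuration is stable at that timescale; a sharp drop marks an evolutionary change.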

  20. Intertrial interval duration and learning in autistic children.

    PubMed Central

    Koegel, R L; Dunlap, G; Dyer, K

    1980-01-01

This study investigated the influence of intertrial interval duration on the performance of autistic children during teaching situations. The children were taught under the same conditions existing in their regular programs, except that the length of time between trials was systematically manipulated. With both multiple baseline and repeated reversal designs, two lengths of intertrial interval were employed: short intervals, with the SD for any given trial presented approximately one second following the reinforcer for the previous trial, versus long intervals, with the SD presented four or more seconds following the reinforcer for the previous trial. The results showed that: (1) the short intertrial intervals always produced higher levels of correct responding than the long intervals; and (2) there were improving trends in performance and rapid acquisition with the short intertrial intervals, in contrast to minimal or no change with the long intervals. The results are discussed in terms of utilizing information about child and task characteristics when selecting optimal intervals. The data suggest that manipulations made between trials have a large influence on autistic children's learning. PMID:7364701

  1. Individual Differences in Time Estimation Related to Cognitive Ability, Speed of Information Processing and Working Memory

    ERIC Educational Resources Information Center

    Fink, A.; Neubauer, A. C.

    2005-01-01

    In experimental time estimation research, it has consistently been found that the more a person is engaged in some kind of demanding cognitive activity within a given period of time, the more experienced duration of this time interval decreases. However, the role of individual differences has been largely ignored in this field of research. In a…

2. The Bay Area Earthquake Cycle: A Paleoseismic Perspective

    NASA Astrophysics Data System (ADS)

    Schwartz, D. P.; Seitz, G.; Lienkaemper, J. J.; Dawson, T. E.; Hecker, S.; William, L.; Kelson, K.

    2001-12-01

Stress changes produced by the 1906 San Francisco earthquake had a profound effect on Bay Area seismicity, dramatically reducing it in the 20th century. Whether the San Francisco Bay Region (SFBR) is still within, is just emerging from, or is out of the 1906 stress shadow is an issue of strong debate with important implications for earthquake mechanics and seismic hazards. Historically the SFBR has not experienced one complete earthquake cycle--the interval immediately following, then leading up to and repeating, a 1906-type (multi-segment rupture, M7.9) San Andreas event. The historical record of earthquake occurrence in the SFBR appears to be complete at about M5.5 back to 1850 (Bakun, 1999), which is less than half a cycle. For large events (qualitatively placed at M≥7) Toppozada and Borchardt (1998) suggest the record is complete back to 1776, which may represent about half a cycle. During this period only the southern Hayward fault (1868) and the San Andreas fault (1838?, 1906) have produced their expected large events. New paleoseismic data now provide, for the first time, a more complete view of the most recent pre-1906 SFBR earthquake cycle. Focused paleoseismic efforts under the Bay Area Paleoearthquake Experiment (BAPEX) have developed a chronology of the most recent large earthquakes (MRE) on major SFBR faults. The San Andreas (SA), northern Hayward (NH), southern Hayward (SH), Rodgers Creek (RC), and northern Calaveras (NC) faults provide clear paleoseismic evidence for large events post-1600 AD. The San Gregorio (SG) may have also produced a large earthquake after this date. The timing of the MREs, in years AD, follows. The age ranges are 2-sigma radiocarbon intervals; the dates in parentheses are 1-sigma. MRE ages are: SA 1600-1670 (1630-1660); NH 1640-1776 (1635-1776); SH 1635-1776 (1685-1676); RC 1670-1776 (1730-1776); NC 1670-1830?; and San Gregorio 1270-1776 but possibly 1640-1776 (1685-1776).
Based on present radiocarbon dating, the NH/SH/RC/NC/(SG?) sequence likely occurred subsequent to the penultimate San Andreas event. Although offset data, which reflect M, are limited, observations indicate that the penultimate SA event ruptured essentially the same fault length as 1906 (Schwartz et al, 1998). In addition, measured point-specific slip (RC, 1.8-2.3m; SG, 3.5-5m) and modeled average slip (SH, 1.9m) for the MREs indicate large magnitude earthquakes on the other regional faults. The major observation from the new paleoseismic data is that during a maximum interval of 176 years (1600 to 1776), significant seismic moment was released in the SFBR by large (M≥6.7) surface-faulting earthquakes on the SA, RC, SH, NH, NC and possibly SG faults. This places an upper limit on the duration of San Andreas interaction effects (stress shadow) on the regional fault system. In fact, the interval between the penultimate San Andreas rupture and large earthquakes on other SFBR faults could have been considerably shorter. We are now 95 years out from the 1906 earthquake, and the SFBR Working Group 99 probability time window extends to 2030, an interval of 124 years. The paleoearthquake data allow that within this amount of time following the penultimate San Andreas event one or more large earthquakes may have occurred on Bay Area faults. Longer paleoearthquake chronologies with more precise event dating in the SFBR and other locales provide the exciting potential for defining regional earthquake cycles and modeling long-term fault interactions.

  3. Current Fluctuations in Stochastic Lattice Gases

    NASA Astrophysics Data System (ADS)

    Bertini, L.; de Sole, A.; Gabrielli, D.; Jona-Lasinio, G.; Landim, C.

    2005-01-01

We study current fluctuations in lattice gases in the macroscopic limit, extending the dynamic approach for density fluctuations developed in previous articles. More precisely, we establish a large deviation theory for the space-time fluctuations of the empirical current which includes the previous results. We then estimate the probability of a fluctuation of the average current over a large time interval. It turns out that recent results by Bodineau and Derrida [Phys. Rev. Lett. 92, 180601 (2004)] in certain cases underestimate this probability due to the occurrence of dynamical phase transitions.
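Schematically, the estimate described above takes the standard large-deviation form (notation assumed here for illustration, not taken from the paper): the probability that the current time-averaged over an interval of length T stays close to a value J decays exponentially in T,

```latex
P\!\left( \frac{1}{T}\int_0^T j(t)\,dt \approx J \right) \asymp e^{-T\,\Phi(J)} ,
```

where \(\Phi\) is the rate function for current fluctuations; the dynamical phase transitions mentioned above correspond to singularities of \(\Phi\).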

  4. Monetary Shocks in Models with Inattentive Producers.

    PubMed

    Alvarez, Fernando E; Lippi, Francesco; Paciello, Luigi

    2016-04-01

    We study models where prices respond slowly to shocks because firms are rationally inattentive. Producers must pay a cost to observe the determinants of the current profit maximizing price, and hence observe them infrequently. To generate large real effects of monetary shocks in such a model the time between observations must be long and/or highly volatile. Previous work on rational inattentiveness has allowed for observation intervals that are either constant-but-long ( e.g . Caballero, 1989 or Reis, 2006) or volatile-but-short ( e.g . Reis's, 2006 example where observation costs are negligible), but not both. In these models, the real effects of monetary policy are small for realistic values of the duration between observations. We show that non-negligible observation costs produce both of these effects: intervals between observations are infrequent and volatile. This generates large real effects of monetary policy for realistic values of the average time between observations.

  5. Monetary Shocks in Models with Inattentive Producers

    PubMed Central

    Alvarez, Fernando E.; Lippi, Francesco; Paciello, Luigi

    2016-01-01

    We study models where prices respond slowly to shocks because firms are rationally inattentive. Producers must pay a cost to observe the determinants of the current profit maximizing price, and hence observe them infrequently. To generate large real effects of monetary shocks in such a model the time between observations must be long and/or highly volatile. Previous work on rational inattentiveness has allowed for observation intervals that are either constant-but-long (e.g. Caballero, 1989 or Reis, 2006) or volatile-but-short (e.g. Reis's, 2006 example where observation costs are negligible), but not both. In these models, the real effects of monetary policy are small for realistic values of the duration between observations. We show that non-negligible observation costs produce both of these effects: intervals between observations are infrequent and volatile. This generates large real effects of monetary policy for realistic values of the average time between observations. PMID:27516627

  6. Temporal structure in the light response of relay cells in the dorsal lateral geniculate nucleus of the cat.

    PubMed Central

    Funke, K; Wörgötter, F

    1995-01-01

    1. The spike interval pattern during the light responses of 155 on- and 81 off-centre cells of the dorsal lateral geniculate nucleus (LGN) was studied in anaesthetized and paralysed cats by the use of a novel analysis. Temporally localized interval distributions were computed from a 100 ms time window, which was shifted along the time axis in 10 ms steps, resulting in a 90% overlap between two adjacent windows. For each step the interval distribution was computed inside the time window with 1 ms resolution, and plotted as a greyscale-coded pixel line orthogonal to the time axis. For visual stimulation, light or dark spots of different size and contrast were presented with different background illumination levels. 2. Two characteristic interval patterns were observed during the sustained response component of the cells. Mainly on-cells (77%) responded with multimodal interval distributions, resulting in elongated 'bands' in the 2-dimensional time window plots. In similar situations, the interval distributions for most (71%) off-cells were rather wide and featureless. In those cases where interval bands (i.e. multimodal interval distributions) were observed for off-cells (14%), they were always much wider than for the on-cells. This difference between the on- and off-cell population was independent of the background illumination and the contrast of the stimulus. Y on-cells also tended to produce wider interval bands than X on-cells. 3. For most stimulation situations the first interval band was centred around 6-9 ms, which has been called the fundamental interval; higher order bands are multiples thereof. The fundamental interval shifted towards larger sizes with decreasing stimulus contrast. Increasing stimulus size, on the other hand, resulted in a redistribution of the intervals into higher order bands, while at the same time the location of the fundamental interval remained largely unaffected. 
This was interpreted as an effect of the increasing surround inhibition at the geniculate level, by which individual retinal EPSPs were cancelled. A changing level of adaptation can result in a mixed shift/redistribution effect because of the changing stimulus contrast and changing level of tonic inhibition. 4. The occurrence of interval bands is not directly related to the shape of the autocorrelation function, which can be flat, weakly oscillatory or strongly oscillatory, regardless of the interval band pattern. 5. A simple computer model was devised to account for the observed cell behaviour. The model is highly robust against parameter variations. (ABSTRACT TRUNCATED AT 400 WORDS) PMID:7562612
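
    The sliding-window interval analysis described above is easy to prototype. Below is a minimal sketch (the function name and the synthetic spike train are hypothetical, not from the paper): inter-spike intervals are histogrammed at 1 ms resolution inside a 100 ms window that is stepped along the record in 10 ms increments, giving 90% overlap between adjacent windows.

```python
import numpy as np

def windowed_interval_histograms(spike_times_ms, t_end_ms,
                                 win=100, step=10, max_interval=50):
    """Temporally localized interval distributions: for each 100 ms
    window (stepped by 10 ms, i.e. 90% overlap), histogram the
    inter-spike intervals inside it at 1 ms resolution."""
    spikes = np.asarray(spike_times_ms, dtype=float)
    starts = np.arange(0, t_end_ms - win + 1, step)
    rows = []
    for t0 in starts:
        in_win = spikes[(spikes >= t0) & (spikes < t0 + win)]
        hist, _ = np.histogram(np.diff(in_win),
                               bins=np.arange(0, max_interval + 1))
        rows.append(hist)
    return np.array(rows)   # shape: (n_windows, max_interval)

# A perfectly regular 8 ms firing pattern produces a single "band"
# at the fundamental interval of 8 ms in every window.
spikes = np.arange(0, 500, 8.0)
H = windowed_interval_histograms(spikes, 500)
```

    Plotting each row of H as a grey-scale pixel line orthogonal to the time axis reproduces the 2-dimensional time-window plots described above; real LGN data would show the multimodal band structure rather than a single band.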

  7. Evolution of motion uncertainty in rectal cancer: implications for adaptive radiotherapy

    NASA Astrophysics Data System (ADS)

    Kleijnen, Jean-Paul J. E.; van Asselen, Bram; Burbach, Johannes P. M.; Intven, Martijn; Philippens, Marielle E. P.; Reerink, Onne; Lagendijk, Jan J. W.; Raaymakers, Bas W.

    2016-01-01

    Reduction of motion uncertainty by applying adaptive radiotherapy strategies depends largely on the temporal behavior of this motion. To fully optimize adaptive strategies, insight into target motion is needed. The purpose of this study was to analyze the stability and evolution in time of the motion uncertainty of both the gross tumor volume (GTV) and clinical target volume (CTV) for patients with rectal cancer. We scanned 16 patients daily during one week, on a 1.5 T MRI scanner in treatment position, prior to each radiotherapy fraction. Single-slice sagittal cine MRIs were made at the beginning, middle, and end of each scan session, for one minute at 2 Hz temporal resolution. GTV and CTV motion were determined by registering a delineated reference frame to time-points later in time. The 95th percentile of observed motion (dist95%) was taken as a measure of motion. The stability of motion in time was evaluated within each cine-MRI separately. The evolution of motion was investigated between the reference frame and the cine-MRIs of a single scan session, and between the reference frame and the cine-MRIs of several days later in the course of treatment. The observed motion was then converted into a PTV-margin estimate. Within a one-minute cine-MRI scan, motion was found to be stable and small. Independent of the time-point within the scan session, the average dist95% remains below 3.6 mm and 2.3 mm for the CTV and GTV, respectively, 90% of the time. We found similar motion over time intervals from 18 min to 4 days. When reducing the time interval from 18 min to 1 min, a large reduction in motion uncertainty is observed: the motion uncertainty, and thus the PTV-margin estimate, was reduced by 71% for the CTV and by 75% for the GTV. Time intervals of 15 and 30 s yield no further reduction in motion uncertainty compared to a 1 min time interval.

  8. Fluctuations of healthy and unhealthy heartbeat intervals

    NASA Astrophysics Data System (ADS)

    Lan, Boon Leong; Toda, Mikito

    2013-04-01

    We show that the RR-interval fluctuations, defined as the differences between the natural logarithms of successive RR intervals, for healthy, congestive-heart-failure (CHF) and atrial-fibrillation (AF) subjects are well modeled by non-Gaussian stable distributions. Our results suggest that healthy or unhealthy RR-interval fluctuations can generally be modeled as a sum of a large number of independent physiological effects which are identically distributed with infinite variance. Furthermore, we show for the first time that one indicator, the scale parameter of the stable distribution, is sufficient to robustly distinguish the three groups of subjects. The scale parameters for healthy subjects are smaller than those for AF subjects but larger than those for CHF subjects; this ordering suggests that the scale parameter could be used to objectively quantify the severity of CHF and AF over time, and also serve as an early warning signal for a healthy person when it approaches either boundary of the healthy range.
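
    The fluctuation series defined above is one line of array code. Below is a sketch on synthetic data; the quartile-based scale proxy is my own stand-in for comparability, whereas the study itself fits a non-Gaussian stable distribution and uses its scale parameter.

```python
import numpy as np

def rr_fluctuations(rr_intervals_s):
    """Fluctuation series: differences of the natural logarithms of
    successive RR intervals, b_i = ln(RR_{i+1}) - ln(RR_i)."""
    rr = np.asarray(rr_intervals_s, dtype=float)
    return np.diff(np.log(rr))

# Synthetic "healthy-like" record: RR intervals near 0.8 s with small
# multiplicative noise, so the fluctuations are just the log-noise.
rng = np.random.default_rng(0)
rr = 0.8 * np.exp(rng.normal(0.0, 0.02, size=1000))
b = rr_fluctuations(rr)

# Robust scale proxy: half the interquartile range of the fluctuations.
scale = 0.5 * (np.quantile(b, 0.75) - np.quantile(b, 0.25))
```

    On real 24 h records one would fit a stable distribution to b (e.g. with a heavy-tail-aware estimator) and compare the fitted scale across the healthy, CHF and AF groups.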

  9. How large a dataset should be in order to estimate scaling exponents and other statistics correctly in studies of solar wind turbulence

    NASA Astrophysics Data System (ADS)

    Rowlands, G.; Kiyani, K. H.; Chapman, S. C.; Watkins, N. W.

    2009-12-01

    Quantitative analyses of solar wind fluctuations are often performed in the context of intermittent turbulence and center on methods to quantify statistical scaling, such as power spectra and structure functions, which assume a stationary process. The solar wind exhibits large-scale secular changes, and so the question arises as to whether the time series of the fluctuations is non-stationary. One approach is to seek local stationarity by parsing the time interval over which statistical analysis is performed. Hence, natural systems such as the solar wind unavoidably provide observations over restricted intervals. Consequently, due to a reduction of sample size leading to poorer estimates, a stationary stochastic process (time series) can yield anomalous time variation in the scaling exponents, suggestive of nonstationarity. The variance in the estimates of scaling exponents computed from an interval of N observations is known, for finite-variance processes, to vary as ~1/N as N becomes large for certain statistical estimators; however, the convergence to this behavior depends on the details of the process, and may be slow. We study the variation in the scaling of second-order moments of the time-series increments with N for a variety of synthetic and “real world” time series, and we find that, in particular for heavy-tailed processes, for realizable N one is far from this ~1/N limiting behavior. We propose a semiempirical estimate for the minimum N needed to make a meaningful estimate of the scaling exponents for model stochastic processes and compare these with some “real world” time series from the solar wind. With fewer data points the stationary time series becomes indistinguishable from a nonstationary process, and we illustrate this with nonstationary synthetic datasets. Reference article: K. H. Kiyani, S. C. Chapman and N. W. Watkins, Phys. Rev. E 79, 036109 (2009).
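
    The ~1/N shrinkage of estimator variance for a finite-variance process can be checked directly. The toy sketch below (names and parameters hypothetical) uses iid Gaussian increments, the easy case; the paper's point is that heavy-tailed processes approach this limit far more slowly at realizable N.

```python
import numpy as np

def second_moment_estimates(n_samples, n_trials, rng):
    """Estimate the second moment of iid standard-normal increments
    from n_samples points, repeated over n_trials realizations."""
    x = rng.normal(size=(n_trials, n_samples))
    return (x ** 2).mean(axis=1)

rng = np.random.default_rng(42)
var_small = second_moment_estimates(100, 4000, rng).var()
var_large = second_moment_estimates(1000, 4000, rng).var()
ratio = var_small / var_large   # roughly 10 for a 10x increase in N
```

    Repeating the experiment with heavy-tailed increments (e.g. a stable distribution with infinite variance of the squares) gives ratios far from 10, which is why anomalously varying exponent estimates alone cannot establish nonstationarity.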

  10. An Efficient Format for Nearly Constant-Time Access to Arbitrary Time Intervals in Large Trace Files

    DOE PAGES

    Chan, Anthony; Gropp, William; Lusk, Ewing

    2008-01-01

    A powerful method to aid in understanding the performance of parallel applications uses log or trace files containing time-stamped events and states (pairs of events). These trace files can be very large, often hundreds or even thousands of megabytes. Because of the cost of accessing and displaying such files, other methods are often used that reduce the size of the tracefiles at the cost of sacrificing detail or other information. This paper describes a hierarchical trace file format that provides for display of an arbitrary time window in a time independent of the total size of the file and roughly proportional to the number of events within the time window. This format eliminates the need to sacrifice data to achieve a smaller trace file size (since storage is inexpensive, it is necessary only to make efficient use of bandwidth to that storage). The format can be used to organize a trace file or to create a separate file of annotations that may be used with conventional trace files. We present an analysis of the time to access all of the events relevant to an interval of time and we describe experiments demonstrating the performance of this file format.
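
    The idea of window access in time roughly proportional to the output size can be illustrated with dyadic time bins. This is only a sketch of the general approach, not the paper's actual file format: each state is stored in the smallest power-of-two bin that fully contains it, so a window query inspects only the bins overlapping the window instead of scanning the whole trace.

```python
from collections import defaultdict

class HierarchicalTraceIndex:
    """Toy dyadic-bin index: each state (start, end, payload) is stored
    in the smallest power-of-two time bin that fully contains it."""

    def __init__(self, t_max, levels=16):
        self.t_max = float(t_max)
        self.levels = levels              # level 0 = one bin over [0, t_max)
        self.bins = defaultdict(list)     # (level, index) -> list of states

    def insert(self, start, end, payload):
        for level in range(self.levels - 1, -1, -1):   # finest bins first
            width = self.t_max / (2 ** level)
            i = int(start // width)
            if (i + 1) * width >= end:                 # bin fully contains state
                self.bins[(level, i)].append((start, end, payload))
                return
        raise ValueError("state lies outside the trace")

    def query(self, lo, hi):
        """Return every stored state overlapping the window [lo, hi)."""
        out = []
        for level in range(self.levels):
            width = self.t_max / (2 ** level)
            for i in range(int(lo // width), int(hi // width) + 1):
                for s, e, p in self.bins.get((level, i), ()):
                    if s < hi and e > lo:
                        out.append((s, e, p))
        return out

idx = HierarchicalTraceIndex(t_max=1024)
idx.insert(0, 1000, "long-running state")
idx.insert(10, 12, "early event")
idx.insert(500, 501, "mid event")
idx.insert(900, 905, "late event")
hits = {p for _, _, p in idx.query(495, 520)}
```

    A query touches at most a handful of bins per level, so its cost grows with the number of matching states and the number of levels, not with the total trace size; the paper's format achieves the analogous property on disk.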

  11. Method for Viterbi decoding of large constraint length convolutional codes

    NASA Technical Reports Server (NTRS)

    Hsu, In-Shek (Inventor); Truong, Trieu-Kie (Inventor); Reed, Irving S. (Inventor); Jing, Sun (Inventor)

    1988-01-01

    A new method of Viterbi decoding of convolutional codes lends itself to a pipeline VLSI architecture using a single sequential processor to compute the path metrics in the Viterbi trellis. An array method is used to store the path information for NK intervals, where N is an integer and K is the constraint length. The surviving path at the end of each NK interval is then read from the last entry in the array. A trace-back method is used for returning to the beginning of the selected path, i.e., to the first time unit of the interval NK, to read out the stored branch metrics of the selected path which correspond to the message bits. The decoding decision made in this way is no longer maximum likelihood, but can be almost as good, provided that the constraint length K is not too small. The advantage is that for a long message, it is not necessary to provide a large memory to store the trellis-derived information until the end of the message to select the path that is to be decoded; the selection is made at the end of every NK time units, thus decoding a long message in successive blocks.
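
    A minimal hard-decision Viterbi decoder makes the trace-back idea concrete. This sketch uses a standard rate-1/2, K=3 code (generators 7 and 5 octal, my choice for illustration) and traces back over the whole message at once; the array method described above would instead emit decisions once every NK time units to bound memory for long messages.

```python
def conv_encode(bits, m1=0, m2=0):
    """Rate-1/2, constraint length K=3 convolutional encoder
    (generator polynomials 7 and 5 in octal)."""
    out = []
    for b in bits:
        out += [b ^ m1 ^ m2, b ^ m2]
        m1, m2 = b, m1
    return out

def viterbi_decode(received, n_bits):
    """Hard-decision Viterbi decoder with a survivor (trace-back)
    array; here trace-back spans the whole message for simplicity."""
    INF = 10 ** 9
    metric = [0, INF, INF, INF]     # state = (m1, m2) packed as 2*m1 + m2
    survivors = []                  # survivors[t][state] = (prev_state, bit)
    for t in range(n_bits):
        r1, r2 = received[2 * t], received[2 * t + 1]
        new_metric = [INF] * 4
        back = [None] * 4
        for s in range(4):
            if metric[s] >= INF:
                continue
            m1, m2 = s >> 1, s & 1
            for b in (0, 1):
                o1, o2 = b ^ m1 ^ m2, b ^ m2      # branch outputs
                ns = (b << 1) | m1                # next state
                cost = metric[s] + (o1 != r1) + (o2 != r2)
                if cost < new_metric[ns]:
                    new_metric[ns] = cost
                    back[ns] = (s, b)
        survivors.append(back)
        metric = new_metric
    state = min(range(4), key=lambda s: metric[s])   # best end state
    bits = []
    for back in reversed(survivors):                 # trace back
        state, b = back[state]
        bits.append(b)
    return bits[::-1]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
decoded = viterbi_decode(conv_encode(msg), len(msg))
```

    Truncating the trace-back to blocks of NK steps, as in the method above, trades exact maximum-likelihood decisions for bounded survivor memory.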

  12. RISMA: A Rule-based Interval State Machine Algorithm for Alerts Generation, Performance Analysis and Monitoring Real-Time Data Processing

    NASA Astrophysics Data System (ADS)

    Laban, Shaban; El-Desouky, Aly

    2013-04-01

    The monitoring of real-time systems is a challenging and complicated process, so there is a continuous need to improve the monitoring process through the use of new intelligent techniques and algorithms for detecting exceptions and anomalous behaviours and for generating the necessary alerts during the workflow monitoring of such systems. Interval-based (period-based) theorems have been discussed, analysed, and used by many researchers in Artificial Intelligence (AI), philosophy, and linguistics. As explained by Allen, there are 13 relations between any two intervals. There have also been many studies of interval-based temporal reasoning and logics over the past decades. Interval-based theorems can be used for monitoring real-time interval-based data processing. However, increasing the number of processed intervals makes the implementation of such theorems a complex and time-consuming process, as the relationships between such intervals increase exponentially. To overcome this problem, this paper presents a Rule-based Interval State Machine Algorithm (RISMA) for processing, monitoring, and analysing the behaviour of interval-based data received from real-time sensors. The proposed intelligent algorithm uses the Interval State Machine (ISM) approach to model any number of interval-based data into well-defined states, as well as to infer them. An interval-based state transition model and methodology are presented to identify the relationships between the different states of the proposed algorithm. By using such a model, the unlimited number of relationships between large numbers of intervals can be reduced to only 18 direct relationships using the proposed well-defined states. For testing the proposed algorithm, the necessary inference rules and code have been designed and applied to the continuous data received in near real-time from the stations of the International Monitoring System (IMS) by the International Data Centre (IDC) of the Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO). The CLIPS expert system shell has been used as the main rule engine for implementing the algorithm rules. The Python programming language and the module "PyCLIPS" are used for building the necessary code for the algorithm implementation. More than 1.7 million intervals constituting the Concise List of Frames (CLF) from 20 different seismic stations have been used for evaluating the proposed algorithm and for evaluating station behaviour and performance. The initial results showed that the proposed algorithm can help in better understanding the operation and performance of those stations. Different important information, such as alerts and some station performance parameters, can be derived from the proposed algorithm. For IMS interval-based data and for any period of time, it is possible to analyse station behaviour, determine missing data, generate the necessary alerts, and measure some station performance attributes. The details of the proposed algorithm, methodology, implementation, experimental results, advantages, and limitations of this research are presented. Finally, future directions and recommendations are discussed.
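
    The 13 interval relations attributed to Allen above can be computed with simple endpoint comparisons. A minimal classifier sketch follows (the hyphenated relation names are one common convention; proper intervals with start < end are assumed):

```python
def allen_relation(a, b):
    """Classify two proper intervals a=(a0,a1), b=(b0,b1) into one of
    Allen's 13 interval relations."""
    a0, a1 = a
    b0, b1 = b
    if a1 < b0:  return "before"
    if b1 < a0:  return "after"
    if a1 == b0: return "meets"
    if b1 == a0: return "met-by"
    if a0 == b0 and a1 == b1: return "equal"
    if a0 == b0: return "starts" if a1 < b1 else "started-by"
    if a1 == b1: return "finishes" if a0 > b0 else "finished-by"
    if b0 < a0 and a1 < b1: return "during"
    if a0 < b0 and b1 < a1: return "contains"
    return "overlaps" if a0 < b0 else "overlapped-by"
```

    The pairwise-relation count growing quadratically with the number of intervals is exactly the blow-up that motivates collapsing intervals into a small set of machine states, as RISMA does.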

  13. Highly variable recurrence of tsunamis in the 7,400 years before the 2004 Indian Ocean tsunami

    NASA Astrophysics Data System (ADS)

    Horton, B.; Rubin, C. M.; Sieh, K.; Jessica, P.; Daly, P.; Ismail, N.; Parnell, A. C.

    2017-12-01

    The devastating 2004 Indian Ocean tsunami caught millions of coastal residents and the scientific community off-guard. Subsequent research in the Indian Ocean basin has identified prehistoric tsunamis, but the timing and recurrence intervals of such events are uncertain. Here, we identify coastal caves as a new depositional environment for reconstructing tsunami records and present a 5,000 year record of continuous tsunami deposits from a coastal cave in Sumatra, Indonesia, which shows the irregular recurrence of 11 tsunamis between 7,400 and 2,900 years BP. The data demonstrate that the 2004 tsunami was just the latest in a sequence of devastating tsunamis stretching back to at least the early Holocene and suggest a high likelihood of future tsunamis in the Indian Ocean. The sedimentary record in the cave shows that ruptures of the Sunda megathrust vary between large slip failures (such as the one that generated the 2004 Indian Ocean tsunami) and smaller ones. The chronology of events suggests the recurrence of multiple smaller tsunamis within relatively short time periods, interrupted by long periods of strain accumulation followed by giant tsunamis. The average time period between tsunamis is about 450 years, with intervals ranging from a long, dormant period of over 2,000 years to multiple tsunamis within the span of a century. The very long dormant period suggests that the Sunda megathrust is capable of accumulating large slip deficits between earthquakes. Such a high-slip rupture would produce a substantially larger earthquake than the 2004 event. Although there is evidence that the likelihood of another tsunamigenic earthquake in Aceh province is high, these variable recurrence intervals suggest that long dormant periods may follow Sunda megathrust ruptures as large as that of the 2004 Indian Ocean tsunami. The remarkable variability of recurrence suggests that regional hazard mitigation plans should be based upon the high likelihood of future destructive tsunamis demonstrated by the cave record and other paleotsunami sites, rather than on estimates of recurrence intervals.

  14. Dynamic response analysis of structure under time-variant interval process model

    NASA Astrophysics Data System (ADS)

    Xia, Baizhan; Qin, Yuan; Yu, Dejie; Jiang, Chao

    2016-10-01

    Due to the aggressiveness of environmental factors, the variation of dynamic loads, the degeneration of material properties and the wear of machine surfaces, parameters related to the structure are distinctly time-variant. The typical model for time-variant uncertainties is the random process model, which is constructed on the basis of a large number of samples. In this work, we propose a time-variant interval process model which can be effectively used to deal with time-variant uncertainties when only limited information is available. Two methods are then presented for the dynamic response analysis of a structure under the time-variant interval process model. The first is the direct Monte Carlo method (DMCM), whose computational burden is relatively high. The second is the Monte Carlo method based on the Chebyshev polynomial expansion (MCM-CPE), whose computational efficiency is high. In MCM-CPE, the dynamic response of the structure is approximated by Chebyshev polynomials, which can be efficiently calculated, and the variational range of the dynamic response is then estimated from the samples yielded by the Monte Carlo method. To address the dependency phenomenon of interval arithmetic, affine arithmetic is integrated into the Chebyshev polynomial expansion. The computational effectiveness and efficiency of MCM-CPE are verified by two numerical examples, including a spring-mass-damper system and a shell structure.
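
    The core of MCM-CPE, replacing an expensive response evaluation with a Chebyshev surrogate and then Monte Carlo sampling the uncertain interval, can be sketched as follows. The oscillator response here is a hypothetical stand-in for the paper's structural models, and the affine-arithmetic treatment of interval dependency is omitted.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def response(k, m=1.0, c=0.3):
    """Hypothetical 'expensive' model: steady-state amplitude of a
    unit-forced oscillator with uncertain stiffness k (forcing
    frequency 1), |H| = 1 / sqrt((k - m)^2 + c^2)."""
    return 1.0 / np.sqrt((k - m) ** 2 + c ** 2)

k_lo, k_hi = 0.5, 2.0
# Fit a Chebyshev surrogate of the response over the interval [k_lo, k_hi],
# sampling the model only at 64 Chebyshev nodes.
x = np.cos(np.pi * (np.arange(64) + 0.5) / 64)
k_nodes = 0.5 * (k_hi + k_lo) + 0.5 * (k_hi - k_lo) * x
coef = C.chebfit(x, response(k_nodes), 16)

# Monte Carlo over the uncertain interval using the cheap surrogate.
rng = np.random.default_rng(1)
k_mc = rng.uniform(k_lo, k_hi, 20000)
x_mc = (2 * k_mc - (k_hi + k_lo)) / (k_hi - k_lo)
y_mc = C.chebval(x_mc, coef)
bounds = (y_mc.min(), y_mc.max())   # variational range of the response
```

    The expensive model is evaluated only at the fitting nodes; every Monte Carlo sample costs a cheap polynomial evaluation, which is the source of MCM-CPE's efficiency over direct Monte Carlo.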

  15. Wrightwood and the earthquake cycle: What a long recurrence record tells us about how faults work

    USGS Publications Warehouse

    Weldon, R.; Scharer, K.; Fumal, T.; Biasi, G.

    2004-01-01

    The concept of the earthquake cycle is so well established that one often hears statements in the popular media like, "the Big One is overdue" and "the longer it waits, the bigger it will be." Surprisingly, data to critically test the variability in recurrence intervals, rupture displacements, and relationships between the two are almost nonexistent. To generate a long series of earthquake intervals and offsets, we have conducted paleoseismic investigations across the San Andreas fault near the town of Wrightwood, California, excavating 45 trenches over 18 years, and can now provide some answers to basic questions about recurrence behavior of large earthquakes. To date, we have characterized at least 30 prehistoric earthquakes in a 6000-yr-long record, complete for the past 1500 yr and for the interval 3000-1500 B.C. For the past 1500 yr, the mean recurrence interval is 105 yr (31-165 yr for individual intervals) and the mean slip is 3.2 m (0.7-7 m per event). The series is slightly more ordered than random and has a notable cluster of events, during which strain was released at 3 times the long-term average rate. Slip associated with an earthquake is not well predicted by the interval preceding it, and only the largest two earthquakes appear to affect the time interval to the next earthquake. Generally, short intervals tend to coincide with large displacements and long intervals with small displacements. The most significant correlation we find is that earthquakes are more frequent following periods of net strain accumulation spanning multiple seismic cycles. The extent of paleoearthquake ruptures may be inferred by correlating event ages between different sites along the San Andreas fault. Wrightwood and other nearby sites experience rupture that could be attributed to overlap of relatively independent segments that each behave in a more regular manner. 
However, the data are equally consistent with a model in which the irregular behavior seen at Wrightwood typifies the entire southern San Andreas fault; more long event series will be required to definitively outline prehistoric rupture extents.

  16. Impurity transport in fractal media in the presence of a degrading diffusion barrier

    NASA Astrophysics Data System (ADS)

    Kondratenko, P. S.; Leonov, K. V.

    2017-08-01

    We have analyzed the transport regimes and the asymptotic forms of the impurity concentration in a randomly inhomogeneous fractal medium in the case when an impurity source is surrounded by a weakly permeable degrading barrier. The systematization of transport regimes depends on the relation between the time t0 of emergence of impurity from the barrier and the time t* corresponding to the beginning of degradation. For t0 < t*, degradation processes are immaterial. In the opposite situation, when t0 > t*, the results on time intervals t < t* can be formally reduced to the problem with a stationary barrier. The characteristics of regimes with t* < t < t0 depend on the scenario of barrier degradation. For an exponentially fast scenario, the interval t* < t < t0 is very narrow, and the transport regime occurring over time intervals t < t* passes almost jumpwise to the regime of the problem without a barrier. In the slow power-law scenario, the transport over the long time interval t* < t < t0 occurs in a new regime, which is faster than in the problem with a stationary barrier, but slower than in the problem without a barrier. The asymptotic form of the concentration at large distances from the source over time intervals t < t0 has two steps, while for t > t0, it has only one step. The more remote step for t < t0 and the single step for t > t0 coincide with the asymptotic form in the problem without a barrier.

  17. Statistical physics approaches to financial fluctuations

    NASA Astrophysics Data System (ADS)

    Wang, Fengzhong

    2009-12-01

    Complex systems attract many researchers from various scientific fields. Financial markets are one of these widely studied complex systems. Statistical physics, which was originally developed to study large systems, provides novel ideas and powerful methods to analyze financial markets. The study of financial fluctuations characterizes market behavior, and helps to better understand the underlying market mechanism. Our study focuses on volatility, a fundamental quantity to characterize financial fluctuations. We examine equity data of the entire U.S. stock market during 2001 and 2002. To analyze the volatility time series, we develop a new approach, called return interval analysis, which examines the time intervals between two successive volatilities exceeding a given value threshold. We find that the return interval distribution displays scaling over a wide range of thresholds. This scaling is valid for a range of time windows, from one minute up to one day. Moreover, our results are similar for commodities, interest rates, currencies, and for stocks of different countries. Further analysis shows some systematic deviations from a scaling law, which we can attribute to nonlinear correlations in the volatility time series. We also find a memory effect in return intervals for different time scales, which is related to the long-term correlations in the volatility. To further characterize the mechanism of price movement, we simulate the volatility time series using two different models, fractionally integrated generalized autoregressive conditional heteroscedasticity (FIGARCH) and fractional Brownian motion (fBm), and test these models with the return interval analysis. We find that both models can mimic time memory but only fBm shows scaling in the return interval distribution. In addition, we examine the volatility of daily opening to closing and of closing to opening. We find that each volatility distribution has a power law tail. 
Using the detrended fluctuation analysis (DFA) method, we show long-term auto-correlations in these volatility time series. We also analyze return, the actual price changes of stocks, and find that the returns over the two sessions are often anti-correlated.
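
    The return-interval analysis at the heart of this study reduces to one line of array code. Below is a sketch on synthetic iid "volatility" (hypothetical data; real volatility records show the scaling and memory effects described above rather than the simple exponential statistics of this toy case):

```python
import numpy as np

def return_intervals(volatility, q):
    """Time gaps between successive exceedances of threshold q."""
    exceed = np.flatnonzero(np.asarray(volatility) > q)
    return np.diff(exceed)

# Toy series: iid absolute Gaussian values.  For iid data the mean
# return interval is ~1/P(vol > q); long-term correlations in real
# volatility reshape the whole interval distribution into the
# power-law-like forms discussed above.
rng = np.random.default_rng(7)
vol = np.abs(rng.standard_normal(100_000))
r = return_intervals(vol, q=2.0)
```

    Repeating this for a range of thresholds q and rescaling each interval distribution by its mean is how the scaling collapse described in the study is tested.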

  18. Disassembly time of deuterium-cluster-fusion plasma irradiated by an intense laser pulse.

    PubMed

    Bang, W

    2015-07-01

    Energetic deuterium ions from large deuterium clusters (>10 nm diameter) irradiated by an intense laser pulse (>10^16 W/cm^2) produce DD fusion neutrons for a time interval determined by the geometry of the resulting fusion plasma. We present an analytical solution for this time interval, the plasma disassembly time, for deuterium plasmas that are cylindrical in shape. Assuming a symmetrically expanding deuterium plasma, we calculate the expected fusion neutron yield and compare it with an independent calculation of the yield using the concept of a finite confinement time at a fixed plasma density. The calculated neutron yields agree quantitatively with the available experimental data. Our one-dimensional simulations indicate that one could expect a tenfold increase in total neutron yield by magnetically confining a 10-keV deuterium fusion plasma for 10 ns.

  19. A pacemaker with P = 2.48 h modulated the generator of flares in the X-ray light curve of Sgr A* in the year 2012

    NASA Astrophysics Data System (ADS)

    Leibowitz, Elia

    2017-01-01

    In an intensive observational campaign during the nine-month Chandra X-ray Visionary Project conducted in 2012, 39 large X-ray flares of Sgr A* were recorded. An analysis of the times of the observed flares reveals that the 39 flares are separated in time by intervals that are grouped around integer multiples of 0.10333 days. This time interval is thus the period of a uniform grid of equally spaced points on the time axis. The grouping of the flares around tick marks of this grid is derived from the data with at least a 3.2σ level of statistical significance. No signal of any period can be found among 22 flares recorded by Chandra in the years 2013-2014. If the 0.10333 day period is that of a nearly circular Keplerian orbit around the black hole at the center of the Galaxy, its radius is 7.6 Schwarzschild radii. Large flares were more likely to be triggered when the agent responsible for their outbursts was near the pericenter phase of its slightly eccentric orbit.
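
    The grouping of flare separations around integer multiples of a trial period can be quantified with a simple residual statistic. A sketch follows, with hypothetical event times placed exactly on the grid; for real data the residuals would merely cluster near zero, and for random times their mean would be about 0.25.

```python
import numpy as np

def grid_residuals(event_times_d, period_d):
    """For every pairwise separation |t_i - t_j|, the distance to the
    nearest integer multiple of the candidate period, in units of
    the period (0 = exactly on the grid, 0.5 = maximally off)."""
    t = np.asarray(event_times_d, dtype=float)
    seps = np.abs(t[:, None] - t[None, :])[np.triu_indices(len(t), 1)]
    frac = (seps / period_d) % 1.0
    return np.minimum(frac, 1.0 - frac)

# Synthetic check: events on an exact P = 0.10333 d grid give zero
# residuals for all 15 pairs.
P = 0.10333
times = P * np.array([0, 3, 7, 12, 30, 51])
res = grid_residuals(times, P)
```

    Scanning a range of trial periods and comparing the mean residual against the 0.25 expected for unstructured times is one simple way to assess the significance of a candidate pacemaker period.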

  20. Estimating statistical uncertainty of Monte Carlo efficiency-gain in the context of a correlated sampling Monte Carlo code for brachytherapy treatment planning with non-normal dose distribution.

    PubMed

    Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr

    2012-01-01

    Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via the efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency-gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate the uncertainty of physical measurements) for predicting confidence intervals about efficiency-gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap-based algorithm was used to simulate the probability distribution of the efficiency-gain estimates, and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however, its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed-dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed. Copyright © 2011 Elsevier Ltd. All rights reserved.
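
    The shortest 95% confidence interval from bootstrap draws is easy to construct: slide a window covering 95% of the sorted samples and keep the narrowest one. A sketch follows; the chi-square ratio is a hypothetical stand-in for the skewed, F-like sampling distribution of efficiency-gain estimates, for which a symmetric normal interval would be misleading.

```python
import numpy as np

def shortest_ci(samples, level=0.95):
    """Shortest interval containing `level` of the given draws."""
    s = np.sort(np.asarray(samples))
    k = int(np.ceil(level * len(s)))
    widths = s[k - 1:] - s[:len(s) - k + 1]   # all windows of k samples
    i = int(np.argmin(widths))
    return float(s[i]), float(s[i + k - 1])

# Hypothetical bootstrap draws of an efficiency gain: a ratio of two
# variance-like quantities, hence right-skewed.
rng = np.random.default_rng(3)
boot = (rng.chisquare(10, 5000) / 10) / (rng.chisquare(10, 5000) / 10)
lo, hi = shortest_ci(boot)
```

    Because the distribution is right-skewed, the shortest interval is asymmetric about the point estimate, which is exactly the behaviour a normal-theory (symmetric) interval fails to capture.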

  21. Statistics of return intervals between long heartbeat intervals and their usability for online prediction of disorders

    NASA Astrophysics Data System (ADS)

    Bogachev, Mikhail I.; Kireenkov, Igor S.; Nifontov, Eugene M.; Bunde, Armin

    2009-06-01

    We study the statistics of return intervals between large heartbeat intervals (above a certain threshold Q) in 24 h records obtained from healthy subjects. We find that both the linear and the nonlinear long-term memory inherent in the heartbeat intervals lead to power laws in the probability density function P_Q(r) of the return intervals. As a consequence, the probability W_Q(t; Δt) that at least one large heartbeat interval will occur within the next Δt heartbeat intervals, with an increasing elapsed number of intervals t after the last large heartbeat interval, follows a power law. Based on these results, we suggest a method of obtaining a priori information about the occurrence of the next large heartbeat interval, and thus to predict it. We show explicitly that the proposed method, which exploits long-term memory, is superior to the conventional precursory pattern recognition technique, which focuses solely on short-term memory. We believe that our results can be straightforwardly extended to obtain more reliable predictions in other physiological signals like blood pressure, as well as in other complex records exhibiting multifractal behaviour, e.g. turbulent flow, precipitation, river flows and network traffic.

  22. Ignition in tokamaks with modulated source of auxiliary heating

    NASA Astrophysics Data System (ADS)

    Morozov, D. Kh

    2017-12-01

    It is shown that the ignition may be achieved in tokamaks with the modulated power source. The time-averaged source power may be smaller than the steady-state source power, which is sufficient for the ignition. Nevertheless, the maximal power must be large enough, because the ignition must be achieved within a finite time interval.

  23. Synthesis and radiosensitization properties of hydrogen peroxide and sodium hyaluronate complex

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosli, Nur Ratasha Alia Md.; Mohamed, Faizal; Heng, Cheong Kai

    2014-09-03

    Cancer cells which are large in size are resistant towards radiation therapy due to the presence of a large amount of anti-oxidative enzymes and hypoxic cancer cells. Thus radiosensitizer agents have been developed to enhance the therapeutic effect of radiotherapy by increasing the sensitivity of these cancer cells towards radiation. This study investigates the radiosensitization properties of a radiosensitizer complex containing hydrogen peroxide and sodium hyaluronate. Combination with sodium hyaluronate may decrease the reactivity of hydrogen peroxide but maintain the oxygen concentration needed for the radiosensitizing effect. HepG2 cancer cells are cultured as the test subject. Cancer cell samples which are targeted and not targeted with these radiosensitizers are irradiated with a 2 Gy single fractionated dose. The results obtained show that the cancer cells which are not targeted with radiosensitizers have a cell viability of 98.80±0.37% after a time interval of 48 hours and have even repopulated to over 100% after a 72 hour time interval. This shows that the cancer cells are resistant towards radiation. However, when the cancer cells are targeted with radiosensitizers prior to irradiation, there is a reduction of cell viability by 25.50±10.81% and 10.30±5.10% at time intervals of 48 and 72 hours respectively. This indicates that through the use of these radiosensitizers, cancer cells are more sensitive towards radiation.

  4. Synthesis and radiosensitization properties of hydrogen peroxide and sodium hyaluronate complex

    NASA Astrophysics Data System (ADS)

    Rosli, Nur Ratasha Alia Md.; Mohamed, Faizal; Heng, Cheong Kai; Rahman, Irman Abdul; Ahmad, Ainee Fatimah; Mohamad, Hur Munawar Kabir

    2014-09-01

    Cancer cells which are large in size are resistant towards radiation therapy due to the presence of large amounts of anti-oxidative enzymes and hypoxic cancer cells. Thus, radiosensitizer agents have been developed to enhance the therapeutic effect of radiotherapy by increasing the sensitivity of these cancer cells towards radiation. This study was conducted to investigate the radiosensitization properties of a radiosensitizer complex containing hydrogen peroxide and sodium hyaluronate. Combination with sodium hyaluronate may decrease the reactivity of hydrogen peroxide while maintaining the oxygen concentration needed for the radiosensitizing effect. HepG2 cancer cells were cultured as the test subjects. Cancer cell samples either targeted or not targeted with these radiosensitizers were irradiated with a 2 Gy single fractionated dose. The results obtained show that the cancer cells not targeted with radiosensitizers had a cell viability of 98.80±0.37% after a time interval of 48 hours and had even repopulated to over 100% after a 72 hour time interval. This shows that the cancer cells are resistant towards radiation. However, when the cancer cells were targeted with radiosensitizers prior to irradiation, there was a reduction of cell viability by 25.50±10.81% and 10.30±5.10% at time intervals of 48 and 72 hours respectively. This indicates that through the use of these radiosensitizers, cancer cells are more sensitive towards radiation.

  5. A Benchmark for Comparing Different Approaches for Specifying and Verifying Real-Time Systems

    DTIC Science & Technology

    1993-01-01

    To be considered correct or useful, real-time systems must deliver results within specified time intervals, either without exception or with high probability. Recently, a large number of formal methods have been invented for specifying and verifying real-time systems. It has been suggested that these formal methods need to be tested out on actual real-time systems. Such testing will allow the scalability of the methods to be assessed.

  6. A Bimodal Hybrid Model for Time-Dependent Probabilistic Seismic Hazard Analysis

    NASA Astrophysics Data System (ADS)

    Yaghmaei-Sabegh, Saman; Shoaeifar, Nasser; Shoaeifar, Parva

    2018-03-01

    The evaluation of evidence provided by geological studies and historical catalogs indicates that in some seismic regions and faults, multiple large earthquakes occur in clusters. The occurrence of large earthquakes is then followed by quiescence, during which only small-to-moderate earthquakes take place. Clustering of large earthquakes is the most distinguishable departure from the assumption, made in conventional seismic hazard analysis, of a constant hazard of random earthquake occurrence. In the present study, a time-dependent recurrence model is proposed to account for series of large earthquakes that occur in clusters. The model is flexible enough to reflect the quasi-periodic behavior of large earthquakes with long-term clustering, and can be used in time-dependent probabilistic seismic hazard analysis for engineering purposes. In this model, the time-dependent hazard is estimated by a hazard function comprising three parts: a decreasing hazard associated with the last large-earthquake cluster, an increasing hazard associated with the next large-earthquake cluster, and a constant hazard of random occurrence of small-to-moderate earthquakes. In the final part of the paper, the time-dependent seismic hazard of the New Madrid Seismic Zone at different time intervals is calculated for illustrative purposes.
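
    The three-part hazard function can be sketched as follows; the functional forms and parameter values are illustrative assumptions, not the paper's calibration:

```python
import math

def hazard(t, h0=0.02, a=0.5, t_decay=50.0, b=0.5, t_rise=400.0):
    """Illustrative three-part hazard rate (events/yr) at time t (yr)
    since the last large-earthquake cluster.

    h0         : constant background hazard from small-to-moderate events
    a, t_decay : amplitude and e-folding time of the decreasing hazard
                 associated with the last cluster
    b, t_rise  : amplitude and rise time of the hazard associated with
                 the next cluster (logistic ramp, an assumed form)
    """
    decreasing = a * math.exp(-t / t_decay)                # last cluster fading
    increasing = b / (1.0 + math.exp(-(t - t_rise) / (0.1 * t_rise)))  # next cluster building
    return h0 + decreasing + increasing

# Hazard is high just after a cluster, dips mid-cycle, then rises again.
early, mid, late = hazard(1.0), hazard(200.0), hazard(800.0)
```

    The bimodal shape (high hazard at both ends, a dip between) is what distinguishes this model from a constant-hazard Poisson assumption.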

  7. A comparison of pairs figure skaters in repeated jumps.

    PubMed

    Sands, William A; Kimmel, Wendy L; McNeal, Jeni R; Murray, Steven Ross; Stone, Michael H

    2012-01-01

    Trends in pairs figure skating show that increasingly difficult jumps have become an essential aspect of high-level performance, especially in the latter part of a competitive program. We compared a repeated-jump power index in a 60 s repeated-jump test to determine the relationship of the test to competitive rank, and measured 2D hip, knee, and ankle angles and angular velocities at 0, 20, 40, and 60 s. Eighteen national team pairs figure skaters performed a 60 s repeated-jump test on a large switch-mat with timing of flight and ground durations and digital video recording. Each 60 s period was divided into six 10 s intervals, with a power index (W/kg) calculated for each interval. Repeated-measures ANOVAs (RMANOVAs) of power index by 10 s interval showed that males exceeded females at all intervals, and the highest power index interval was 10 to 20 s for both sexes. RMANOVAs of angles and angular velocities showed main effects for time only. Power index and jumping technique among figure skaters showed rapid and steady declines over the test duration. Power index can predict approximately 50% of competitive rank variance, and sex differences in jumping technique were rare. Key points: The repeated-jump test can account for about 50% of the variance in pairs ranks. Changes in technique are largely due to fatigue, but the athletes were able to maintain a maximum knee flexion angle very close to the desired 90 degrees; changes in angular velocity and jump heights occurred as expected, again probably due to fatigue. As expected from metabolic information, the athletes' power indexes peak around 20 s and decline thereafter; coaches should be aware of this time as a boundary beyond which fatigue becomes more manifest, and use careful choreographic choices to provide rest periods disguised as less demanding skating elements to afford recovery. The repeated-jump test may be a helpful off-ice test of power-endurance for figure skaters.
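
    A repeated-jump power index of this kind is commonly computed from flight and ground-contact durations using the Bosco repeated-jump formula; the study's exact index is not specified here, so this sketch should be read as an assumption:

```python
def power_index(flight_times, ground_times, g=9.81):
    """Approximate mechanical power per unit mass (W/kg) for a
    repeated-jump interval, from per-jump flight and ground-contact
    durations (s), using the Bosco formula (assumed here; the paper's
    exact index may differ):

        P = g^2 * Tf * T / (4 * n * (T - Tf))

    where Tf is total flight time, T total interval duration, and n
    the number of jumps.
    """
    n = len(flight_times)
    tf = sum(flight_times)          # total flight time
    t = tf + sum(ground_times)      # total interval duration
    return (g ** 2) * tf * t / (4.0 * n * (t - tf))

# Ten jumps in a 10 s interval, 0.5 s flight and 0.5 s contact each:
p = power_index([0.5] * 10, [0.5] * 10)   # ~24 W/kg
```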

  8. Development of two-framing camera with large format and ultrahigh speed

    NASA Astrophysics Data System (ADS)

    Jiang, Xiaoguo; Wang, Yuan; Wang, Yi

    2012-10-01

    A high-speed imaging facility is important and necessary for a time-resolved measurement system with multi-framing capability. A framing camera that satisfies the demands of both high speed and large format needs to be specially developed for the ultrahigh-speed research field. A two-framing camera system with high sensitivity and time resolution has been developed and used for the diagnosis of electron beam parameters of the Dragon-I linear induction accelerator (LIA). The camera system, which adopts the principle of light-beam splitting in the image space behind a long-focal-length lens, mainly consists of lens-coupled gated image intensifiers, CCD cameras and a high-speed shutter trigger device based on a programmable integrated circuit. The fastest gating time is about 3 ns, and the interval between the two frames can be adjusted discretely in steps of 0.5 ns. Both the gating time and the interval time can be tuned independently up to a maximum of about 1 s. Two images of 1024×1024 pixels each can be captured simultaneously by the camera. In addition, the camera system possesses good linearity, uniform spatial response and an equivalent background illumination as low as 5 electrons/pixel/s, which fully meets the measurement requirements of the Dragon-I LIA.

  9. New Madrid seismic zone recurrence intervals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schweig, E.S.; Ellis, M.A.

    1993-03-01

    Frequency-magnitude relations in the New Madrid seismic zone suggest that great earthquakes should occur every 700-1,200 yrs, implying relatively high strain rates. These estimates are supported by some geological and GPS results. Recurrence intervals of this order should have produced about 50 km of strike-slip offset since Miocene time. No subsurface evidence for such large displacements is known within the seismic zone. Moreover, the irregular fault pattern forming a compressive step that one sees today is not compatible with large displacements. There are at least three possible interpretations of the observations of short recurrence intervals and high strain rates, but apparently youthful fault geometry and lack of major post-Miocene deformation. One is that the seismological and geodetic evidence is misleading. A second possibility is that activity in the region is cyclic; that is, the geological and geodetic observations that suggest relatively short recurrence intervals reflect a time of high, but geologically temporary, pore-fluid pressure. Zoback and Zoback have suggested such a model for intraplate seismicity in general. Alternatively, the New Madrid seismic zone is a geologically young feature that has been active for only the last few tens of thousands of years. In support of this, we observe an irregular fault geometry associated with an unstable compressive step, a series of en echelon and discontinuous lineaments that may define the position of a youthful linking fault, and the general absence of significant post-Eocene faulting or topography.
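
    The ~50 km cumulative-offset figure can be checked with back-of-envelope arithmetic, assuming an illustrative characteristic slip per great earthquake (a value not given in the abstract):

```python
# Rough consistency check of the ~50 km cumulative offset claim.
# The slip per event is an assumed, illustrative value.
recurrence_yr = 1000.0       # midpoint of the 700-1,200 yr estimate
since_miocene_yr = 5.3e6     # ~end of the Miocene epoch
slip_per_event_m = 8.0       # assumed characteristic slip of a great event

n_events = since_miocene_yr / recurrence_yr          # ~5,300 events
total_offset_km = n_events * slip_per_event_m / 1000.0   # ~42 km, i.e. tens of km
```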

  10. Mixed-mode oscillations and interspike interval statistics in the stochastic FitzHugh-Nagumo model

    NASA Astrophysics Data System (ADS)

    Berglund, Nils; Landon, Damien

    2012-08-01

    We study the stochastic FitzHugh-Nagumo equations, modelling the dynamics of neuronal action potentials in parameter regimes characterized by mixed-mode oscillations. The interspike time interval is related to the random number of small-amplitude oscillations separating consecutive spikes. We prove that this number has an asymptotically geometric distribution, whose parameter is related to the principal eigenvalue of a substochastic Markov chain. We provide rigorous bounds on this eigenvalue in the small-noise regime and derive an approximation of its dependence on the system's parameters for a large range of noise intensities. This yields a precise description of the probability distribution of observed mixed-mode patterns and interspike intervals.
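
    The geometric mechanism can be illustrated by direct sampling; all parameter values below are illustrative, not fitted to the FitzHugh-Nagumo model:

```python
import random

random.seed(1)

def interspike_interval(p_spike=0.2, t_small=1.0, t_spike=5.0):
    """Sample one interspike interval when each small-amplitude
    oscillation (duration t_small) escapes into a spike with
    probability p_spike, so the number of small oscillations between
    spikes is geometrically distributed. Parameter values are
    illustrative only."""
    n_small = 0
    while random.random() > p_spike:   # survive as a small oscillation
        n_small += 1
    return n_small * t_small + t_spike

samples = [interspike_interval() for _ in range(20000)]
mean_isi = sum(samples) / len(samples)
# E[n_small] = (1 - p)/p = 4, so the mean interval is near 4*1 + 5 = 9.
```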

  11. Disassembly time of deuterium-cluster-fusion plasma irradiated by an intense laser pulse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bang, W.

    Energetic deuterium ions from large deuterium clusters (>10 nm diameter) irradiated by an intense laser pulse (>10¹⁶ W/cm²) produce DD fusion neutrons for a time interval determined by the geometry of the resulting fusion plasma. We present an analytical solution for this time interval, the plasma disassembly time, for deuterium plasmas that are cylindrical in shape. Assuming a symmetrically expanding deuterium plasma, we calculate the expected fusion neutron yield and compare it with an independent calculation of the yield using the concept of a finite confinement time at a fixed plasma density. The calculated neutron yields agree quantitatively with the available experimental data. Our one-dimensional simulations indicate that one could expect a tenfold increase in total neutron yield by magnetically confining a 10-keV deuterium fusion plasma for 10 ns.

  12. Disassembly time of deuterium-cluster-fusion plasma irradiated by an intense laser pulse

    DOE PAGES

    Bang, W.

    2015-07-02

    Energetic deuterium ions from large deuterium clusters (>10 nm diameter) irradiated by an intense laser pulse (>10¹⁶ W/cm²) produce DD fusion neutrons for a time interval determined by the geometry of the resulting fusion plasma. We present an analytical solution for this time interval, the plasma disassembly time, for deuterium plasmas that are cylindrical in shape. Assuming a symmetrically expanding deuterium plasma, we calculate the expected fusion neutron yield and compare it with an independent calculation of the yield using the concept of a finite confinement time at a fixed plasma density. The calculated neutron yields agree quantitatively with the available experimental data. Our one-dimensional simulations indicate that one could expect a tenfold increase in total neutron yield by magnetically confining a 10-keV deuterium fusion plasma for 10 ns.

  13. Constraining the Long-Term Average of Earthquake Recurrence Intervals From Paleo- and Historic Earthquakes by Assimilating Information From Instrumental Seismicity

    NASA Astrophysics Data System (ADS)

    Zoeller, G.

    2017-12-01

    Paleo- and historic earthquakes are the most important source of information for the estimation of long-term recurrence intervals in fault zones, because sequences of paleoearthquakes cover more than one seismic cycle. On the other hand, these events are often rare, dating uncertainties are enormous, and missing or misinterpreted events lead to additional problems. Given these shortcomings, estimates of long-term recurrence intervals are usually unstable unless additional information is included. In the present study, we assume that the time to the next major earthquake depends on the rate of small and intermediate events between the large ones, in terms of a "clock-change" model that leads to a Brownian Passage Time distribution for recurrence intervals. We take advantage of an earlier finding that the aperiodicity of this distribution can be related to the Gutenberg-Richter b-value, which is usually around one and can easily be estimated from instrumental seismicity in the region under consideration. This makes it possible to reduce the uncertainties in the estimation of the mean recurrence interval significantly, especially for short paleoearthquake sequences and high dating uncertainties. We present illustrative case studies from Southern California and compare the method with the commonly used approach of exponentially distributed recurrence times arising from a stationary Poisson process.
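
    The Brownian Passage Time (inverse Gaussian) density referred to above, with mean recurrence interval μ and aperiodicity α, can be sketched in pure Python:

```python
import math

def bpt_pdf(t, mu, alpha):
    """Brownian Passage Time (inverse Gaussian) density for recurrence
    time t, with mean recurrence interval mu and aperiodicity alpha:

        f(t) = sqrt(mu / (2*pi*alpha^2*t^3))
               * exp(-(t - mu)^2 / (2*alpha^2*mu*t))
    """
    if t <= 0:
        return 0.0
    coef = math.sqrt(mu / (2.0 * math.pi * alpha ** 2 * t ** 3))
    return coef * math.exp(-((t - mu) ** 2) / (2.0 * alpha ** 2 * mu * t))

# With mu = 200 yr, a lower aperiodicity concentrates mass near mu,
# i.e. more nearly periodic recurrence:
peaked = bpt_pdf(200.0, 200.0, 0.3)
broad = bpt_pdf(200.0, 200.0, 0.7)
```

    Constraining α from the b-value, as the abstract describes, pins down the shape of this density so that only the mean μ must be estimated from the sparse paleoearthquake record.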

  14. A New Correlation of Large Earthquakes Along the Southern San Andreas Fault

    NASA Astrophysics Data System (ADS)

    Scharer, K. M.; Weldon, R. J.; Biasi, G. P.

    2010-12-01

    There are now three sites on the southern San Andreas fault (SSAF) with records of 10 or more dated ground-rupturing earthquakes (Frazier Mountain, Wrightwood and Pallett Creek) and at least seven other sites with 3-5 dated events. Numerous sites have related information, including geomorphic offsets caused by one to a few earthquakes, a known amount of slip spanning a specific interval of time or number of earthquakes, or the number (but not necessarily the exact ages) of earthquakes in an interval of time. We use this information to construct a record of recent large earthquakes on the SSAF. Strongly overlapping C-14 age ranges, especially between closely spaced sites like Pallett Creek and Wrightwood on the Mojave segment and Thousand Palms, Indio, Coachella and Salt Creek on the southernmost 100 km of the fault, and overlap between the more distant Frazier Mountain and Bidart Fan sites on the northernmost part of the fault, suggest that the paleoseismic data are robust and can be explained by a relatively small number of events that span substantial portions of the fault. This is consistent with the extent of rupture of the two historic events (1857 was ~300 km long and 1812 was 100-200 km long), slip-per-event data that average 3-5 m at most sites, and the long historical hiatus since 1857. While some sites have smaller offsets for individual events, correlation between sites suggests that many small offsets occur near the end of long ruptures. While the long event series on the Mojave segment are quasi-periodic, individual intervals range over about an order of magnitude, from a few decades up to ~200 years. This wide range of intervals and the apparently anti-slip-predictable behavior of ruptures (small intervals are not followed by small events) suggest weak clustering, or periods of time spanning multiple intervals when strain release is higher or lower than average. These properties defy the application of simple hazard analysis but need to be understood to properly forecast hazard along the fault.

  15. The Innisfree meteorite: Dynamical history of the orbit - Possible family of meteor bodies

    NASA Astrophysics Data System (ADS)

    Galibina, I. V.; Terent'eva, A. K.

    1987-09-01

    The evolution of the Innisfree meteorite orbit caused by secular perturbations is studied over a time interval of 500,000 yr (from the current epoch backwards). Calculations are made by the Gauss-Halphen-Gorjatschew method, taking into account perturbations from the four outer planets: Jupiter, Saturn, Uranus and Neptune. Over this time interval the meteorite orbit has undergone no essential transformations. The Innisfree orbit intersected the orbit of the Earth in 91 cases and that of Mars in 94. A system of small and large meteor bodies (producing ordinary meteors and fireballs) which may be genetically related to the Innisfree meteorite has been found; that is, there probably exists an Innisfree family of meteor bodies.

  16. Demonstration of periodic nanostructure formation with less ablation by double-pulse laser irradiation on titanium

    NASA Astrophysics Data System (ADS)

    Furukawa, Yuki; Sakata, Ryoichi; Konishi, Kazuki; Ono, Koki; Matsuoka, Shusaku; Watanabe, Kota; Inoue, Shunsuke; Hashida, Masaki; Sakabe, Shuji

    2016-06-01

    By pairing femtosecond laser pulses (duration ˜40 fs and central wavelength ˜810 nm) at an appropriate time interval, a laser-induced periodic surface structure (LIPSS) is formed with much less ablation than one formed with a single pulse. On a titanium plate, a pair of laser pulses with fluences of 70 and 140 mJ/cm2 and a rather large time interval (>10 ps) creates a LIPSS with an interspace of 600 nm, the same as that formed by a single pulse of 210 mJ/cm2, while the double pulse ablates only 4 nm, a quarter of the ablation depth of a single pulse.

  17. Quasi-periodic recurrence of large earthquakes on the southern San Andreas fault

    USGS Publications Warehouse

    Scharer, Katherine M.; Biasi, Glenn P.; Weldon, Ray J.; Fumal, Tom E.

    2010-01-01

    It has been 153 yr since the last large earthquake on the southern San Andreas fault (California, United States), but the average interseismic interval is only ~100 yr. If the recurrence of large earthquakes is periodic, rather than random or clustered, the length of this period is notable and would generally increase the risk estimated in probabilistic seismic hazard analyses. Unfortunately, robust characterization of a distribution describing earthquake recurrence on a single fault is limited by the brevity of most earthquake records. Here we use statistical tests on a 3000 yr combined record of 29 ground-rupturing earthquakes from Wrightwood, California. We show that earthquake recurrence there is more regular than expected from a Poisson distribution and is not clustered, leading us to conclude that recurrence is quasi-periodic. The observation of unimodal time dependence is persistent across an observationally based sensitivity analysis that critically examines alternative interpretations of the geologic record. The results support formal forecast efforts that use renewal models to estimate probabilities of future earthquakes on the southern San Andreas fault. Only four intervals (15%) from the record are longer than the present open interval, highlighting the current hazard posed by this fault.
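
    A standard diagnostic behind such statistical tests is the coefficient of variation (aperiodicity) of the inter-event intervals: near 1 for a Poisson process, below 1 for quasi-periodic recurrence, above 1 for clustering. A minimal sketch, using an illustrative hypothetical event series rather than the Wrightwood data:

```python
import math

def coefficient_of_variation(event_times):
    """COV (aperiodicity) of the inter-event intervals: ~1 for a
    Poisson process, <1 for quasi-periodic recurrence, >1 for
    clustered recurrence."""
    intervals = [b - a for a, b in zip(event_times, event_times[1:])]
    mean = sum(intervals) / len(intervals)
    var = sum((x - mean) ** 2 for x in intervals) / len(intervals)
    return math.sqrt(var) / mean

# Illustrative (hypothetical) quasi-periodic series, ~100 yr spacing:
quasi = [0, 95, 210, 300, 415, 505, 610, 700]
cov = coefficient_of_variation(quasi)   # well below 1: quasi-periodic
```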

  18. HESS Opinions: The need for process-based evaluation of large-domain hyper-resolution models

    NASA Astrophysics Data System (ADS)

    Melsen, Lieke A.; Teuling, Adriaan J.; Torfs, Paul J. J. F.; Uijlenhoet, Remko; Mizukami, Naoki; Clark, Martyn P.

    2016-03-01

    A meta-analysis on 192 peer-reviewed articles reporting on applications of the variable infiltration capacity (VIC) model in a distributed way reveals that the spatial resolution at which the model is applied has increased over the years, while the calibration and validation time interval has remained unchanged. We argue that the calibration and validation time interval should keep pace with the increase in spatial resolution in order to resolve the processes that are relevant at the applied spatial resolution. We identified six time concepts in hydrological models, which all impact the model results and conclusions. Process-based model evaluation is particularly relevant when models are applied at hyper-resolution, where stakeholders expect credible results both at a high spatial and temporal resolution.

  19. HESS Opinions: The need for process-based evaluation of large-domain hyper-resolution models

    NASA Astrophysics Data System (ADS)

    Melsen, L. A.; Teuling, A. J.; Torfs, P. J. J. F.; Uijlenhoet, R.; Mizukami, N.; Clark, M. P.

    2015-12-01

    A meta-analysis on 192 peer-reviewed articles reporting applications of the Variable Infiltration Capacity (VIC) model in a distributed way reveals that the spatial resolution at which the model is applied has increased over the years, while the calibration and validation time interval has remained unchanged. We argue that the calibration and validation time interval should keep pace with the increase in spatial resolution in order to resolve the processes that are relevant at the applied spatial resolution. We identified six time concepts in hydrological models, which all impact the model results and conclusions. Process-based model evaluation is particularly relevant when models are applied at hyper-resolution, where stakeholders expect credible results both at a high spatial and temporal resolution.

  20. Invariance in the recurrence of large returns and the validation of models of price dynamics

    NASA Astrophysics Data System (ADS)

    Chang, Lo-Bin; Geman, Stuart; Hsieh, Fushing; Hwang, Chii-Ruey

    2013-08-01

    Starting from a robust, nonparametric definition of large returns (“excursions”), we study the statistics of their occurrences, focusing on the recurrence process. The empirical waiting-time distribution between excursions is remarkably invariant to year, stock, and scale (return interval). This invariance is related to self-similarity of the marginal distributions of returns, but the excursion waiting-time distribution is a function of the entire return process and not just its univariate probabilities. Generalized autoregressive conditional heteroskedasticity (GARCH) models, market-time transformations based on volume or trades, and generalized (Lévy) random-walk models all fail to fit the statistical structure of excursions.
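
    The recurrence construction can be sketched as follows, assuming a simple quantile-threshold definition of excursions (the paper's exact nonparametric definition may differ):

```python
import random

def excursion_waiting_times(returns, quantile=0.9):
    """Waiting times (in steps) between 'excursions', defined here as
    returns whose absolute value exceeds the given empirical quantile
    of |returns| -- an assumed, simplified excursion definition."""
    abs_r = sorted(abs(r) for r in returns)
    threshold = abs_r[int(quantile * (len(abs_r) - 1))]
    hits = [i for i, r in enumerate(returns) if abs(r) > threshold]
    return [b - a for a, b in zip(hits, hits[1:])]

# Example on a synthetic Gaussian return series:
random.seed(0)
rets = [random.gauss(0.0, 0.01) for _ in range(1000)]
waits = excursion_waiting_times(rets)
```

    Comparing the empirical distribution of `waits` across years, stocks, and return intervals is the invariance test the abstract describes; model validation then asks whether simulated GARCH or random-walk series reproduce the same waiting-time distribution.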

  1. Lone star tick abundance, fire, and bison grazing in tall-grass prairie

    USGS Publications Warehouse

    Cully, J.F.

    1999-01-01

    Lone star ticks (Amblyomma americanum L.) were collected by drag samples of 1 km transects on 12 watersheds at Konza Prairie Research Natural Area near Manhattan, Kans., during summer 1995-1996. Watersheds were treated to 2 experimental treatments: 3 burn intervals (1-year, 4-year, and 20-year) and 2 grazing treatments (grazed by bison (Bos bison L.) or ungrazed). The objectives were to determine whether fire interval, time since most recent burn, and the presence of large ungulate grazers would cause changes in lone star tick abundance in tallgrass prairie in central Kansas. Watersheds burned at 1-year intervals had fewer larvae and adults than watersheds burned at 4-year or 20-year intervals. Watersheds burned during the year of sampling had fewer ticks than watersheds burned one or more years in the past. For watersheds burned 1 or more years in the past there was no effect from time since burn. The presence of bison did not affect tick abundance. Spring burning is an effective method to reduce tick populations in tallgrass prairie during the year of the burn.

  2. Health-Terrain: Visualizing Large Scale Health Data

    DTIC Science & Technology

    2015-04-01

    diseases, Influenza, Typhoid Fever, and Hepatitis B. 1.2. Spiral Theme Plot: spatial texture provides overviews of health care data associated with ... 2012. The time interval is divided into 8 subintervals. Figure 5 (c-d) shows three diseases: Influenza, Typhoid Fever, and Hepatitis B.

  3. Experimental and numerical investigation of low-drag intervals in turbulent boundary layer

    NASA Astrophysics Data System (ADS)

    Park, Jae Sung; Ryu, Sangjin; Lee, Jin

    2017-11-01

    It is well documented that there is substantial intermittency between high- and low-drag states in wall-bounded shear flows. Recent experimental and computational studies in turbulent channel flow have identified low-drag time intervals based on wall shear stress measurements. These intervals are a weak-turbulence state characterized by low-speed streaks and weak streamwise vortices. In this study, the spatiotemporal dynamics of low-drag intervals in a turbulent boundary layer is investigated using experiments and simulations. The low-drag intervals are monitored based on wall shear stress measurements. We show that near the wall the conditionally sampled mean velocity profiles during low-drag intervals closely approach both that of a low-drag nonlinear traveling wave solution and the so-called maximum drag reduction asymptote. This observation is consistent with the channel flow studies. Interestingly, large spatial stretching of the streaks in the wall-normal direction is very evident during low-drag intervals. Lastly, a possible connection between the mean velocity profile during low-drag intervals and the Blasius profile will be discussed. This work was supported by startup funds from the University of Nebraska-Lincoln.

  4. Proceedings of the Annual Precise Time and Time Interval (PTTI) Planning Meeting (6th). Held at U.S. Naval Research Laboratory, December 3-5, 1974

    DTIC Science & Technology

    1974-01-01

    General agreement seems to be developing that the geophysical system should be defined in terms of a large number of points ... "A Laser-Interferometer System for the Absolute Determination of the Acceleration due to Gravity," in Proc. Int. Conf. on Precision Measurement ... The ratio of the plasmaspheric to the total time-delays due to free

  5. Aortic stiffness and the balance between cardiac oxygen supply and demand: the Rotterdam Study.

    PubMed

    Guelen, Ilja; Mattace-Raso, Francesco Us; van Popele, Nicole M; Westerhof, Berend E; Hofman, Albert; Witteman, Jacqueline Cm; Bos, Willem Jan W

    2008-06-01

    Aortic stiffness is an independent predictor of cardiovascular morbidity and mortality. We investigated whether aortic stiffness, estimated as aortic pulse wave velocity, is associated with decreased perfusion pressure, estimated as the cardiac oxygen supply potential. Aortic stiffness and aortic pressure waves, reconstructed from finger blood pressure waves, were obtained in 2490 older adults within the framework of the Rotterdam Study, a large population-based study. Cardiac oxygen supply and demand were estimated using pulse wave analysis techniques, and related to aortic stiffness by linear regression analyses after adjustment for age, sex, mean arterial pressure and heart rate. Cardiac oxygen demand, estimated as the Systolic Pressure Time Index and the Rate Pressure Product, increased with increasing aortic stiffness [0.27 mmHg s (95% confidence interval: 0.21; 0.34)] and [42.2 mmHg/min (95% confidence interval: 34.1; 50.3)], respectively. The cardiac oxygen supply potential, estimated as the Diastolic Pressure Time Index, decreased [-0.70 mmHg s (95% confidence interval: -0.86; -0.54)] with aortic stiffening. Accordingly, the supply/demand ratio (Diastolic Pressure Time Index/Systolic Pressure Time Index) decreased with increasing aortic stiffness [-1.11 (95% confidence interval: -0.14; -0.009)]. Aortic stiffness is associated with estimates of increased cardiac oxygen demand and a decreased cardiac oxygen supply potential. These results may offer additional explanation for the relation between aortic stiffness and cardiovascular morbidity and mortality.

  6. An operational definition of a statistically meaningful trend.

    PubMed

    Bryhn, Andreas C; Dimberg, Peter H

    2011-04-28

    Linear trend analysis of time series is standard procedure in many scientific disciplines. If the number of data points is large, a trend may be statistically significant even if the data are scattered far from the trend line. This study introduces and tests a quality criterion for time trends, referred to as statistical meaningfulness, which is a stricter quality criterion than high statistical significance. The time series is divided into intervals and interval mean values are calculated. Thereafter, r² and p values are calculated from regressions of interval mean values against time. If r² ≥ 0.65 at p ≤ 0.05 in any of these regressions, then the trend is regarded as statistically meaningful. Out of ten investigated time series from different scientific disciplines, five displayed statistically meaningful trends. A Microsoft Excel application (add-in) was developed which can perform statistical meaningfulness tests and which may increase the operationality of the test. The presented method for distinguishing statistically meaningful trends should be reasonably uncomplicated for researchers with basic statistics skills and may thus be useful for determining which trends are worth analysing further, for instance with respect to causal factors. The method can also be used for determining which segments of a time trend may be particularly worthwhile to focus on.
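
    The procedure can be sketched as follows; this is an assumed reading of the criterion, and the accompanying p ≤ 0.05 significance check is omitted for brevity:

```python
import random

def interval_means(series, n_intervals):
    """Split the series into n_intervals equal chunks; return chunk means."""
    size = len(series) // n_intervals
    return [sum(series[i * size:(i + 1) * size]) / size
            for i in range(n_intervals)]

def r_squared(y):
    """r^2 of a least-squares line fit of y against its index 0..n-1."""
    n = len(y)
    x = range(n)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    if syy == 0.0:
        return 0.0  # constant interval means: no trend at all
    return sxy * sxy / (sxx * syy)

def statistically_meaningful(series, n_intervals=5, r2_min=0.65):
    """Assumed reading of the criterion: regress interval means on time
    and require r^2 >= 0.65 (the p <= 0.05 check is omitted here)."""
    return r_squared(interval_means(series, n_intervals)) >= r2_min

# A noisy but steadily rising series passes; a flat oscillation does not.
random.seed(2)
rising = [0.1 * t + random.gauss(0, 1) for t in range(100)]
flat = [1.0, 2.0, 1.0, 2.0, 1.0] * 20
```

    Averaging within intervals suppresses the scatter that can make a weak trend merely "significant", which is the point of the stricter criterion.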

  7. Are EUR and GBP different words for the same currency?

    NASA Astrophysics Data System (ADS)

    Ivanova, K.; Ausloos, M.

    2002-05-01

    The British Pound (GBP) is not part of the Euro (EUR) monetary system. In order to find arguments on whether the GBP should join the EUR or not, correlations are calculated between GBP exchange rates with respect to various currencies: USD, JPY, CHF, DKK, the currencies forming the EUR, and a reconstructed EUR, for the time interval from 1993 until June 30, 2000. The distribution of fluctuations of the exchange rates is Gaussian in the central part of the distribution, but has fat tails for large fluctuations. Using the Detrended Fluctuation Analysis (DFA) statistical method, the power-law behavior describing the root-mean-square deviation of the exchange rate fluctuations from a linear trend is obtained as a function of time for the time interval of interest. The evolution of the time-dependent exponent of the exchange rate fluctuations is given. Statistical considerations imply that the GBP is already behaving as a true EUR.
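
    A minimal DFA sketch, assuming first-order (linear) detrending and a synthetic uncorrelated series in place of real exchange-rate data; for uncorrelated fluctuations the scaling exponent is close to 0.5:

```python
import math
import random

def dfa_fluctuation(series, box_size):
    """DFA fluctuation function F(n): RMS deviation of the integrated
    series from a per-box least-squares linear trend."""
    mean = sum(series) / len(series)
    profile, total = [], 0.0
    for x in series:               # integrate deviations from the mean
        total += x - mean
        profile.append(total)
    n_boxes = len(profile) // box_size
    sq = 0.0
    for b in range(n_boxes):
        seg = profile[b * box_size:(b + 1) * box_size]
        xs = range(box_size)
        mx, my = sum(xs) / box_size, sum(seg) / box_size
        slope = (sum((i - mx) * (y - my) for i, y in zip(xs, seg))
                 / sum((i - mx) ** 2 for i in xs))
        icpt = my - slope * mx
        sq += sum((y - (slope * i + icpt)) ** 2 for i, y in zip(xs, seg))
    return math.sqrt(sq / (n_boxes * box_size))

# Synthetic uncorrelated 'returns'; the exponent from two scales:
random.seed(3)
rets = [random.gauss(0, 1) for _ in range(4096)]
f1, f2 = dfa_fluctuation(rets, 16), dfa_fluctuation(rets, 256)
alpha = math.log(f2 / f1) / math.log(256 / 16)   # ~0.5 for white noise
```

    In practice one fits log F(n) against log n over many box sizes and tracks the exponent in a sliding window to obtain the time-dependent evolution the abstract mentions.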

  8. Computer-assisted recording of tensile tests for the evaluation of serrated flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weinhandl, H.; Mitter, F.; Bernt, W.

    1994-12-01

    In a previous paper the authors pointed out the difficulties which arise in the evaluation of serrated flow curves when the applied tensile strain rates are just above "normal". The recording systems of tensile testing machines built, say, twenty years ago are not capable of recording the full size of the load drops, due to the inertia of the recording pen. This handicap was then overcome by establishing correction factors determined from recording a small number of load drops with an oscilloscope. Modern testing machines are equipped with digital recording. The disadvantage of the common systems is, however, their limited capacity, so that not enough space for data points is available. Consequently, the time intervals between data points are of the order of tenths of seconds. It will become obvious from the present results that such a time interval is too large for recording a correct serration size. This report is concerned with the recording of complete load-extension relations during tensile tests using a computer which is capable of storing the data at sufficiently small time intervals.

  9. A pan-Precambrian link between deglaciation and environmental oxidation

    USGS Publications Warehouse

    Raub, T.J.; Kirschvink, J.L.

    2007-01-01

    Despite a continuous increase in solar luminosity to the present, Earth’s glaciations appear to become more frequent, though less severe, over geological time. At least two of the three major Precambrian glacial intervals were exceptionally intense, with solid evidence for widespread sea ice on or near the equator, well within a “Snowball Earth” zone produced by ice-albedo runaway in energy-balance models. The end of the first unambiguously low-latitude glaciation, the early Paleoproterozoic Makganyene event, is associated intimately with the first solid evidence for global oxygenation, including the world’s largest sedimentary manganese deposit. Subsequent low-latitude deglaciations during the Cryogenian interval of the Neoproterozoic Era are also associated with progressive oxidation, and these young Precambrian ice ages coincide with the time when basal animal phyla were diversifying. However, specifically testing hypotheses of cause and effect between Earth’s Neoproterozoic biosphere and glaciation is complicated because large and rapid True Polar Wander events appear to punctuate Neoproterozoic time and may have episodically dominated earlier and later intervals as well, rendering geographic reconstruction and age correlation challenging in the absence of an exceptionally well-defined global paleomagnetic database.

  10. Use of Individual Flight Corridors to Avoid Vortex Wakes

    NASA Technical Reports Server (NTRS)

    Rossow, Vernon J.

    2001-01-01

    Vortex wakes of aircraft pose a hazard to following aircraft until the energetic parts of their flow fields have decayed to a harmless level. It is suggested here that in-trail spacings between aircraft can be significantly and safely reduced by designing an individual, vortex-free flight corridor for each aircraft. Because each aircraft will then have its own flight corridor, which is free of vortex wakes while in use by the assigned aircraft, the time intervals between aircraft operations can be safely reduced to the order of seconds. The productivity of airports can then be substantially increased. How large the offset distances between operational corridors need to be to have them vortex free, and how airports need to be changed to accommodate an individual flight-corridor process for landing and takeoff operations, are explored. Estimates are then made of the productivity of an individual flight-corridor system as a function of the in-trail time interval between operations for various values of wake decay time, runway width, and the velocity of a sidewind. The results confirm the need for short time intervals between aircraft operations if smaller offset distances and increased productivity are to be achieved.
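As a back-of-the-envelope illustration (not the paper's productivity model, which also accounts for wake decay time, runway width, and sidewind), idealized corridor throughput scales inversely with the in-trail time interval:

```python
def operations_per_hour(in_trail_interval_s, n_corridors=1):
    """Idealized airport throughput if each vortex-free corridor supports
    one operation every in_trail_interval_s seconds (toy model)."""
    return n_corridors * 3600.0 / in_trail_interval_s

# halving the in-trail interval doubles idealized throughput
print(operations_per_hour(60))   # -> 60.0 operations/hour
print(operations_per_hour(30))   # -> 120.0
```

This is why the paper's conclusion hinges on short in-trail intervals: any residual spacing buffer directly caps the operations rate per corridor.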

  11. Estimating degradation in real time and accelerated stability tests with random lot-to-lot variation: a simulation study.

    PubMed

    Magari, Robert T

    2002-03-01

    The effect of different levels of lot-to-lot variability on the prediction of stability is studied based on two statistical models for estimating degradation in real-time and accelerated stability tests. Lot-to-lot variability is considered random in both models and is attributed to two sources: variability at time zero and variability of the degradation rate. Real-time stability tests are modeled as a function of time, while accelerated stability tests are modeled as a function of time and temperature. Several data sets were simulated, and a maximum likelihood approach was used for estimation. The 95% confidence intervals for the degradation rate depend on the amount of lot-to-lot variability. When lot-to-lot degradation rate variability is relatively large (CV >= 8%), the estimated confidence intervals do not represent the trend for individual lots. In such cases it is recommended to analyze each lot individually. Copyright 2002 Wiley-Liss, Inc. and the American Pharmaceutical Association. J Pharm Sci 91: 893-899, 2002.
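A toy version of the simulation idea (random lot-to-lot variability in both the time-zero value and the degradation rate, with per-lot fits); the variance levels and true rate are hypothetical, and ordinary least squares per lot stands in for the paper's maximum-likelihood estimation:

```python
import numpy as np

rng = np.random.default_rng(42)
true_rate, n_lots = -0.5, 8                 # % potency lost per month (hypothetical)
times = np.arange(0, 13, 3, dtype=float)    # assay times: months 0, 3, 6, 9, 12

slopes = []
for _ in range(n_lots):
    intercept = 100 + rng.normal(0, 1.0)    # lot-to-lot variability at time zero
    rate = true_rate + rng.normal(0, 0.05)  # lot-to-lot variability in rate
    y = intercept + rate * times + rng.normal(0, 0.3, times.size)  # assay noise
    slopes.append(np.polyfit(times, y, 1)[0])  # per-lot OLS degradation rate

mean_rate = np.mean(slopes)
# normal-approximation 95% CI for the mean degradation rate across lots
half_width = 1.96 * np.std(slopes, ddof=1) / np.sqrt(n_lots)
ci = (mean_rate - half_width, mean_rate + half_width)
```

Increasing the rate-variability standard deviation (0.05 here) widens `ci`, reproducing the abstract's point that a pooled interval stops representing individual lots when lot-to-lot rate CV is large.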

  12. Test plan for evaluating the operational performance of the prototype nested, fixed-depth fluidic sampler

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    REICH, F.R.

    The PHMC will provide Low Activity Waste (LAW) tank wastes for final treatment by a privatization contractor from two double-shell feed tanks, 241-AP-102 and 241-AP-104. Concerns about the inability of the baseline "grab" sampling to provide large-volume samples within time constraints have led to the development of a nested, fixed-depth sampling system. This sampling system will provide large-volume, representative samples without the environmental, radiation-exposure, and sample-volume impacts of the current baseline "grab" sampling method. A plan has been developed for the cold testing of this nested, fixed-depth sampling system with simulant materials. The sampling system will fill the 500-ml bottles and provide inner packaging to interface with the Hanford Site's cask shipping systems (PAS-1 and/or "safe-send"). The sampling system will provide a waste stream that will be used for on-line, real-time measurements with an at-tank analysis system. The cold tests evaluate the performance and the ability to provide samples that are representative of the tanks' content within a 95 percent confidence interval, to sample while mixing pumps are operating, to provide large sample volumes (1-15 liters) within a short time interval, to sample supernatant wastes with over 25 wt% solids content, to recover from precipitation- and settling-based plugging, and the potential to operate over the 20-year expected time span of the privatization contract.

  13. Demonstration of periodic nanostructure formation with less ablation by double-pulse laser irradiation on titanium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Furukawa, Yuki; Graduate School of Science, Kyoto University, Kitashirakawa, Sakyo, Kyoto 606-8502; Advanced Research Center for Beam Science, Institute for Chemical Research, Kyoto University, Gokasho, Uji, Kyoto 611-0011

    By pairing femtosecond laser pulses (duration ∼40 fs and central wavelength ∼810 nm) at an appropriate time interval, a laser-induced periodic surface structure (LIPSS) is formed with much less ablation than one formed with a single pulse. On a titanium plate, a pair of laser pulses with fluences of 70 and 140 mJ/cm² and a rather large time interval (>10 ps) creates a LIPSS with an interspace of 600 nm, the same as that formed by a single pulse of 210 mJ/cm², while the double pulse ablates only 4 nm, a quarter of the ablation depth of a single pulse.

  14. Interpreting Results from the Standardized UXO Test Sites

    DTIC Science & Technology

    2007-01-01

    [Figure II-1: G-858 cesium magnetometer sensor (Cs lamp, collimating lens, split polarizer filter, Cs cell, focusing lens, detector, RF coil, H1 coil).] ...conductive earth typically decay at a more rapid rate than the currents in metallic objects. Measurements are made in discrete "time gates," or time intervals, following the turnoff of the current pulse generated by the transmitter. The early time gates will detect both small and large metallic...

  15. Correlation of generation interval and scale of large-scale submarine landslides using 3D seismic data off Shimokita Peninsula, Northeast Japan

    NASA Astrophysics Data System (ADS)

    Nakamura, Yuki; Ashi, Juichiro; Morita, Sumito

    2016-04-01

    Clarifying the timing and scale of past submarine landslides is important for understanding their formation processes. The study area lies on a part of the continental slope of the Japan Trench, where a number of large-scale submarine landslide (slump) deposits have been identified in Pliocene and Quaternary formations by analysing METI's 3D seismic data "Sanrikuoki 3D" off Shimokita Peninsula (Morita et al., 2011). Among the structural features are swarms of parallel dikes, which are likely dewatering paths formed during the slumping deformation; slip directions are essentially perpendicular to these dikes, making them a good indicator for estimating slip directions. The slip direction of each slide was determined on a one-kilometre grid across the 40 km x 20 km survey area, and the dominant slip direction varies from the Pliocene to the Quaternary. Parallel dike structures are also useful for distinguishing slump deposits from normal deposits on time-slice images. By tracing the outlines of slump deposits at each depth, we identified the general morphology of each slump deposit and calculated its volume so as to estimate the scale of each event. We investigated the temporal and spatial variation of the depositional pattern of the slump deposits. Calculating the generation intervals of the slumps, some periodicity is apparent; in particular, large slumps do not occur in succession. Additionally, examining the relationship between cumulative volume and generation interval, a certain correlation is observed in both the Pliocene and the Quaternary. Key words: submarine landslides, 3D seismic data, Shimokita Peninsula

  16. Supporting Building Portfolio Investment and Policy Decision Making through an Integrated Building Utility Data Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aziz, Azizan; Lasternas, Bertrand; Alschuler, Elena

    The American Recovery and Reinvestment Act stimulus funding of 2009 for smart grid projects resulted in the tripling of smart meter deployment. In 2012, the Green Button initiative provided utility customers with access to their real-time energy usage. The availability of finely granular data provides an enormous potential for energy data analytics and energy benchmarking. The sheer volume of time-series utility data from a large number of buildings also poses challenges in data collection, quality control, and database management for rigorous and meaningful analyses. In this paper, we describe a building portfolio-level data analytics tool for operational optimization, business investment, and policy assessment using utility data at 15-minute to monthly intervals. The analytics tool is developed on top of the U.S. Department of Energy's Standard Energy Efficiency Data (SEED) platform, an open source software application that manages energy performance data of large groups of buildings. To support the significantly large volume of granular interval data, we integrated a parallel time-series database with the existing relational database. The time-series database improves on the current utility data input, focusing on real-time data collection, storage, analytics, and data quality control. The fully integrated data platform supports APIs for utility app development by third-party software developers. These apps will provide actionable intelligence for building owners and facilities managers. Unlike a commercial system, this platform is an open source platform funded by the U.S. Government, accessible to the public, researchers, and other developers, to support initiatives in reducing building energy consumption.

  17. Controls on hillslope stability in a mountain river catchment

    NASA Astrophysics Data System (ADS)

    Golly, Antonius; Turowski, Jens; Hovius, Niels; Badoux, Alexandre

    2015-04-01

    Sediment transport in fluvial systems accounts for a large fraction of natural hazard damage costs in mountainous regions and is an important factor for risk mitigation, engineering, and ecology. Although sediment transport in high-gradient channels has gathered research interest over the last decades, sediment dynamics in steep streams are generally not well understood. For instance, the sourcing of the sediment, and when and how it is actually mobilized, is largely undescribed. In the Erlenbach, a mountain torrent in the Swiss Prealps, we study the mechanistic relations between in-channel hydrology, channel morphology, external climatic controls, and the surrounding sediment sources to identify relevant process domains for sediment input and their characteristic scales. Here, we analyze the motion of a slow-moving landslide complex that was permanently monitored by time-lapse cameras over a period of 70 days at 30-minute intervals. In addition, data sets for stream discharge, air temperature, and precipitation rates are available. Apparent changes in the channel morphology, e.g. the destruction of channel-spanning bed forms, were manually determined from the time-lapse images and were treated as event marks in the time series. We identify five relevant types of sediment displacement processes emerging during the hillslope motion: concentrated mud flows, deep-seated hillslope failure, catastrophic cavity failure, hillslope bank erosion, and individual grain loss. Generally, sediment displacement occurs on a large range of temporal and spatial scales, and sediment dynamics in steep streams do not depend only on large floods with long recurrence intervals. We find that each type of displacement acts in a specific temporal and spatial domain with its characteristic scales. Different external climatic forcings (e.g. high-intensity vs. long-lasting precipitation events) promote different displacement processes. Stream morphology and the presence of boulders have a large effect on sediment input through deep-seated failures and cavity failures, while they have only minor impact on the other process types. In addition to large floods, which are generally recognized to produce huge amounts of sediment, we identify two relevant climatic regimes that play an important role in the sediment dynamics: a) long-lasting but low-intensity rainfall that triggers specific sediment displacement processes on the hillslopes, and b) smaller discharge events with recurrence intervals of approximately one year that mobilize sediment from the hillslopes' toes along the channel.

  18. Highly variable recurrence of tsunamis in the 7,400 years before the 2004 Indian Ocean tsunami

    PubMed Central

    Rubin, Charles M.; Horton, Benjamin P.; Sieh, Kerry; Pilarczyk, Jessica E.; Daly, Patrick; Ismail, Nazli; Parnell, Andrew C.

    2017-01-01

    The devastating 2004 Indian Ocean tsunami caught millions of coastal residents and the scientific community off-guard. Subsequent research in the Indian Ocean basin has identified prehistoric tsunamis, but the timing and recurrence intervals of such events are uncertain. Here we present an extraordinary 7,400 year stratigraphic sequence of prehistoric tsunami deposits from a coastal cave in Aceh, Indonesia. This record demonstrates that at least 11 prehistoric tsunamis struck the Aceh coast between 7,400 and 2,900 years ago. The average time period between tsunamis is about 450 years with intervals ranging from a long, dormant period of over 2,000 years, to multiple tsunamis within the span of a century. Although there is evidence that the likelihood of another tsunamigenic earthquake in Aceh province is high, these variable recurrence intervals suggest that long dormant periods may follow Sunda megathrust ruptures as large as that of the 2004 Indian Ocean tsunami. PMID:28722009
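The headline recurrence figure can be checked directly from the numbers in the abstract: 11 tsunamis spanning the interval from 7,400 to 2,900 years ago give 10 inter-event intervals. A minimal sketch:

```python
n_events = 11
span_years = 7400 - 2900              # oldest to youngest deposit in the record
# n events bound (n - 1) inter-event intervals
mean_interval = span_years / (n_events - 1)
print(mean_interval)                  # -> 450.0 years, as reported
```

The mean alone is misleading here, which is the abstract's point: the individual intervals range from over 2,000 years down to less than a century.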

  19. Highly variable recurrence of tsunamis in the 7,400 years before the 2004 Indian Ocean tsunami.

    PubMed

    Rubin, Charles M; Horton, Benjamin P; Sieh, Kerry; Pilarczyk, Jessica E; Daly, Patrick; Ismail, Nazli; Parnell, Andrew C

    2017-07-19

    The devastating 2004 Indian Ocean tsunami caught millions of coastal residents and the scientific community off-guard. Subsequent research in the Indian Ocean basin has identified prehistoric tsunamis, but the timing and recurrence intervals of such events are uncertain. Here we present an extraordinary 7,400 year stratigraphic sequence of prehistoric tsunami deposits from a coastal cave in Aceh, Indonesia. This record demonstrates that at least 11 prehistoric tsunamis struck the Aceh coast between 7,400 and 2,900 years ago. The average time period between tsunamis is about 450 years with intervals ranging from a long, dormant period of over 2,000 years, to multiple tsunamis within the span of a century. Although there is evidence that the likelihood of another tsunamigenic earthquake in Aceh province is high, these variable recurrence intervals suggest that long dormant periods may follow Sunda megathrust ruptures as large as that of the 2004 Indian Ocean tsunami.

  20. Highly variable recurrence of tsunamis in the 7,400 years before the 2004 Indian Ocean tsunami

    NASA Astrophysics Data System (ADS)

    Rubin, Charles M.; Horton, Benjamin P.; Sieh, Kerry; Pilarczyk, Jessica E.; Daly, Patrick; Ismail, Nazli; Parnell, Andrew C.

    2017-07-01

    The devastating 2004 Indian Ocean tsunami caught millions of coastal residents and the scientific community off-guard. Subsequent research in the Indian Ocean basin has identified prehistoric tsunamis, but the timing and recurrence intervals of such events are uncertain. Here we present an extraordinary 7,400 year stratigraphic sequence of prehistoric tsunami deposits from a coastal cave in Aceh, Indonesia. This record demonstrates that at least 11 prehistoric tsunamis struck the Aceh coast between 7,400 and 2,900 years ago. The average time period between tsunamis is about 450 years with intervals ranging from a long, dormant period of over 2,000 years, to multiple tsunamis within the span of a century. Although there is evidence that the likelihood of another tsunamigenic earthquake in Aceh province is high, these variable recurrence intervals suggest that long dormant periods may follow Sunda megathrust ruptures as large as that of the 2004 Indian Ocean tsunami.

  1. Variation in the production rate of biosonar signals in freshwater porpoises.

    PubMed

    Kimura, Satoko; Akamatsu, Tomonari; Wang, Ding; Li, Songhai; Wang, Kexiong; Yoda, Ken

    2013-05-01

    The biosonar (click train) production rate of ten Yangtze finless porpoises and their behavior were examined using animal-borne data loggers. The sound production rate varied from 0 to 290 click trains per 10-min time interval. Large individual differences were observed, regardless of body size. Taken together, however, sound production did not differ significantly between daytime and nighttime. Over the 172.5 h of analyzed recordings, an average of 99.0% of the click trains were produced within intervals of less than 60 s, indicating that during a 1-min interval, the number of click trains produced by each porpoise was typically greater than one. Most of the porpoises exhibited differences in average swimming speed and depth between day and night. Swimming speed reductions and usage of short-range sonar, which relates to prey-capture attempts, were observed more often during nighttime. However, biosonar appears to be affected not only by porpoise foraging, but also by their sensory environment, i.e., the turbid Yangtze River system. These features will be useful for passive acoustic detection of the porpoises. Calculations of porpoise density or abundance should be conducted carefully because large individual differences in the sound production rate will lead to large estimation error.

  2. Communication interval selection in distributed heterogeneous simulation of large-scale dynamical systems

    NASA Astrophysics Data System (ADS)

    Lucas, Charles E.; Walters, Eric A.; Jatskevich, Juri; Wasynczuk, Oleg; Lamm, Peter T.

    2003-09-01

    In this paper, a new technique useful for the numerical simulation of large-scale systems is presented. This approach enables the overall system simulation to be formed by the dynamic interconnection of the various interdependent simulations, each representing a specific component or subsystem such as control, electrical, mechanical, hydraulic, or thermal. Each simulation may be developed separately using possibly different commercial-off-the-shelf simulation programs thereby allowing the most suitable language or tool to be used based on the design/analysis needs. These subsystems communicate the required interface variables at specific time intervals. A discussion concerning the selection of appropriate communication intervals is presented herein. For the purpose of demonstration, this technique is applied to a detailed simulation of a representative aircraft power system, such as that found on the Joint Strike Fighter (JSF). This system is comprised of ten component models each developed using MATLAB/Simulink, EASY5, or ACSL. When the ten component simulations were distributed across just four personal computers (PCs), a greater than 15-fold improvement in simulation speed (compared to the single-computer implementation) was achieved.
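A minimal sketch of the distributed-simulation idea described above: two coupled subsystems are integrated separately and exchange their interface variables only at communication points. The subsystem equations, gains, and intervals are invented for illustration; the paper's aircraft power system models are far more detailed:

```python
def cosimulate(T_comm, dt=0.001, t_end=5.0):
    """Two coupled first-order subsystems integrated independently;
    interface variables are exchanged only every T_comm seconds."""
    x1, x2 = 1.0, 0.0
    u1, u2 = x2, x1            # interface values, frozen between exchanges
    t, next_comm = 0.0, T_comm
    while t < t_end:
        # each subsystem steps forward with its own (possibly stale) inputs
        x1 += dt * (-x1 + 0.5 * u1)
        x2 += dt * (-x2 + 0.5 * u2)
        t += dt
        if t >= next_comm:     # communication point: exchange interface vars
            u1, u2 = x2, x1
            next_comm += T_comm
    return x1, x2

tight = cosimulate(T_comm=0.01)   # frequent exchange: near fully coupled result
loose = cosimulate(T_comm=1.0)    # coarse interval: visible coupling error
```

Comparing `tight` and `loose` shows the trade-off the paper discusses: a longer communication interval reduces messaging overhead but lets the frozen interface variables go stale, degrading accuracy.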

  3. Lightning electromagnetic radiation field spectra in the interval from 0.2 to 20 MHz

    NASA Technical Reports Server (NTRS)

    Willett, J. C.; Bailey, J. C.; Leteinturier, C.; Krider, E. P.

    1990-01-01

    New Fourier transforms of wideband time-domain electric fields (E) produced by lightning are presented, based on records made at the Kennedy Space Center during the summers of 1985 and 1987. The records were made in such a way that several different events in each lightning flash could be captured. Average HF spectral amplitudes for first return strokes, stepped-leader steps, and 'characteristic pulses' are given for significantly more events, at closer ranges, and with better spectral resolution than in previous literature reports. The method of recording gives less bias toward the first large event in the flash and thus yields a large sample of a wide variety of lightning processes. As a result, reliable composite spectral amplitudes are obtained for a number of different processes in cloud-to-ground lightning over the frequency interval from 0.2 to 20 MHz.

  4. Job strain as a risk factor for leisure-time physical inactivity: an individual-participant meta-analysis of up to 170,000 men and women: the IPD-Work Consortium.

    PubMed

    Fransson, Eleonor I; Heikkilä, Katriina; Nyberg, Solja T; Zins, Marie; Westerlund, Hugo; Westerholm, Peter; Väänänen, Ari; Virtanen, Marianna; Vahtera, Jussi; Theorell, Töres; Suominen, Sakari; Singh-Manoux, Archana; Siegrist, Johannes; Sabia, Séverine; Rugulies, Reiner; Pentti, Jaana; Oksanen, Tuula; Nordin, Maria; Nielsen, Martin L; Marmot, Michael G; Magnusson Hanson, Linda L; Madsen, Ida E H; Lunau, Thorsten; Leineweber, Constanze; Kumari, Meena; Kouvonen, Anne; Koskinen, Aki; Koskenvuo, Markku; Knutsson, Anders; Kittel, France; Jöckel, Karl-Heinz; Joensuu, Matti; Houtman, Irene L; Hooftman, Wendela E; Goldberg, Marcel; Geuskens, Goedele A; Ferrie, Jane E; Erbel, Raimund; Dragano, Nico; De Bacquer, Dirk; Clays, Els; Casini, Annalisa; Burr, Hermann; Borritz, Marianne; Bonenfant, Sébastien; Bjorner, Jakob B; Alfredsson, Lars; Hamer, Mark; Batty, G David; Kivimäki, Mika

    2012-12-15

    Unfavorable work characteristics, such as low job control and too high or too low job demands, have been suggested to increase the likelihood of physical inactivity during leisure time, but this has not been verified in large-scale studies. The authors combined individual-level data from 14 European cohort studies (baseline years from 1985-1988 to 2006-2008) to examine the association between unfavorable work characteristics and leisure-time physical inactivity in a total of 170,162 employees (50% women; mean age, 43.5 years). Of these employees, 56,735 were reexamined after 2-9 years. In cross-sectional analyses, the odds for physical inactivity were 26% higher (odds ratio = 1.26, 95% confidence interval: 1.15, 1.38) for employees with high-strain jobs (low control/high demands) and 21% higher (odds ratio = 1.21, 95% confidence interval: 1.11, 1.31) for those with passive jobs (low control/low demands) compared with employees in low-strain jobs (high control/low demands). In prospective analyses restricted to physically active participants, the odds of becoming physically inactive during follow-up were 21% and 20% higher for those with high-strain (odds ratio = 1.21, 95% confidence interval: 1.11, 1.32) and passive (odds ratio = 1.20, 95% confidence interval: 1.11, 1.30) jobs at baseline. These data suggest that unfavorable work characteristics may have a spillover effect on leisure-time physical activity.
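The reported odds ratios and confidence intervals follow the standard log-odds construction; the standard error below is back-calculated from the published interval for the high-strain group, purely to illustrate the arithmetic:

```python
import math

or_point, ci_low, ci_high = 1.26, 1.15, 1.38   # high-strain vs low-strain jobs
beta = math.log(or_point)                      # log odds ratio
# recover the implied standard error from the 95% CI width on the log scale
se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)
# a 95% CI is symmetric on the log scale, then exponentiated back
lower = math.exp(beta - 1.96 * se)
upper = math.exp(beta + 1.96 * se)
print(round(lower, 2), round(upper, 2))        # -> 1.15 1.38
```

The same construction applies to the other reported intervals (e.g. 1.21 with CI 1.11-1.31 for passive jobs); note the bounds are symmetric about the point estimate on the log scale, not the raw scale.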

  5. Job Strain as a Risk Factor for Leisure-Time Physical Inactivity: An Individual-Participant Meta-Analysis of Up to 170,000 Men and Women

    PubMed Central

    Fransson, Eleonor I.; Heikkilä, Katriina; Nyberg, Solja T.; Zins, Marie; Westerlund, Hugo; Westerholm, Peter; Väänänen, Ari; Virtanen, Marianna; Vahtera, Jussi; Theorell, Töres; Suominen, Sakari; Singh-Manoux, Archana; Siegrist, Johannes; Sabia, Séverine; Rugulies, Reiner; Pentti, Jaana; Oksanen, Tuula; Nordin, Maria; Nielsen, Martin L.; Marmot, Michael G.; Magnusson Hanson, Linda L.; Madsen, Ida E. H.; Lunau, Thorsten; Leineweber, Constanze; Kumari, Meena; Kouvonen, Anne; Koskinen, Aki; Koskenvuo, Markku; Knutsson, Anders; Kittel, France; Jöckel, Karl-Heinz; Joensuu, Matti; Houtman, Irene L.; Hooftman, Wendela E.; Goldberg, Marcel; Geuskens, Goedele A.; Ferrie, Jane E.; Erbel, Raimund; Dragano, Nico; De Bacquer, Dirk; Clays, Els; Casini, Annalisa; Burr, Hermann; Borritz, Marianne; Bonenfant, Sébastien; Bjorner, Jakob B.; Alfredsson, Lars; Hamer, Mark; Batty, G. David; Kivimäki, Mika

    2012-01-01

    Unfavorable work characteristics, such as low job control and too high or too low job demands, have been suggested to increase the likelihood of physical inactivity during leisure time, but this has not been verified in large-scale studies. The authors combined individual-level data from 14 European cohort studies (baseline years from 1985–1988 to 2006–2008) to examine the association between unfavorable work characteristics and leisure-time physical inactivity in a total of 170,162 employees (50% women; mean age, 43.5 years). Of these employees, 56,735 were reexamined after 2–9 years. In cross-sectional analyses, the odds for physical inactivity were 26% higher (odds ratio = 1.26, 95% confidence interval: 1.15, 1.38) for employees with high-strain jobs (low control/high demands) and 21% higher (odds ratio = 1.21, 95% confidence interval: 1.11, 1.31) for those with passive jobs (low control/low demands) compared with employees in low-strain jobs (high control/low demands). In prospective analyses restricted to physically active participants, the odds of becoming physically inactive during follow-up were 21% and 20% higher for those with high-strain (odds ratio = 1.21, 95% confidence interval: 1.11, 1.32) and passive (odds ratio = 1.20, 95% confidence interval: 1.11, 1.30) jobs at baseline. These data suggest that unfavorable work characteristics may have a spillover effect on leisure-time physical activity. PMID:23144364

  6. Enhanced ionization of the Martian nightside ionosphere during solar energetic particle events

    NASA Astrophysics Data System (ADS)

    Nemec, F.; Morgan, D. D.; Dieval, C.; Gurnett, D. A.; Futaana, Y.

    2013-12-01

    The nightside ionosphere of Mars is highly variable and very irregular, controlled to a great extent by the configuration of the crustal magnetic fields. The ionospheric reflections observed by the MARSIS radar sounder on board the Mars Express spacecraft in this region are typically oblique (reflection by a distant feature), so they cannot be used to determine the peak altitude precisely. Nevertheless, the peak electron density can in principle be readily determined. However, in more than 90% of measurements the peak electron densities are too low to be detected. We focus on the time intervals of solar energetic particle (SEP) events, during which one may expect high-energy particle precipitation into the nightside ionosphere to increase the electron density there. Comparing ionospheric characteristics between SEP and no-SEP time intervals is thus important for understanding the formation mechanism of the nightside ionosphere. The time intervals of SEP events are determined using the increase in the background counts recorded by the ion sensor (IMA) of the ASPERA-3 particle instrument on board Mars Express. We then use MARSIS measurements to determine how much the nightside ionosphere is enhanced during these intervals. We show that the peak electron densities during these periods are large enough to be detected in more than 30% of measurements, while the reflections from the ground almost entirely disappear, indicating that the nightside electron densities are greatly increased compared with normal nightside conditions. The influence of various parameters on the formation of the nightside ionosphere is thoroughly discussed.

  7. Interval Management with Spacing to Parallel Dependent Runways (IMSPIDR) Experiment and Results

    NASA Technical Reports Server (NTRS)

    Baxley, Brian T.; Swieringa, Kurt A.; Capron, William R.

    2012-01-01

    An area in aviation operations that may offer an increase in efficiency is the use of continuous descent arrivals (CDA), especially during dependent parallel runway operations. However, variations in aircraft descent angle and speed can cause inaccuracies in estimated-time-of-arrival calculations, requiring an increase in the size of the buffer between aircraft. This in turn reduces airport throughput and limits the use of CDAs during high-density operations, particularly to dependent parallel runways. The Interval Management with Spacing to Parallel Dependent Runways (IMSPiDR) concept uses a trajectory-based spacing tool onboard the aircraft to achieve, by the runway, an air traffic control (ATC) assigned spacing interval behind the previous aircraft. This paper describes the first experiment on this concept at NASA Langley and its results. Pilots flew CDAs to the Dallas-Fort Worth airport using airspeed calculations from the spacing tool to achieve either a Required Time of Arrival (RTA) or an Interval Management (IM) spacing interval at the runway threshold. Results indicate flight crews were able to land aircraft on the runway with a mean of 2 seconds, and a standard deviation of less than 4 seconds, from the ATC-assigned time, even in the presence of forecast wind error and large time delay. Statistically significant differences in delivery precision and number of speed changes as a function of stream position were observed; however, there was no trend to the difference, and the error did not increase during the operation. Two aspects the flight crews indicated as not acceptable were the additional number of speed changes required during the wind-shear event and issuing an IM clearance via data link while at low altitude. A number of refinements and future spacing algorithm capabilities were also identified.

  8. System implications of the ambulance arrival-to-patient contact interval on response interval compliance.

    PubMed

    Campbell, J P; Gratton, M C; Salomone, J A; Lindholm, D J; Watson, W A

    1994-01-01

    In some emergency medical services (EMS) system designs, response time intervals are mandated with monetary penalties for noncompliance. These times are set with the goal of providing rapid, definitive patient care. The time interval of vehicle at scene-to-patient access (VSPA) has been measured, but its effect on response time interval compliance has not been determined. To determine the effect of the VSPA interval on the mandated code 1 (< 9 min) and code 2 (< 13 min) response time interval compliance in an urban, public-utility model system. A prospective, observational study used independent third-party riders to collect the VSPA interval for emergency life-threatening (code 1) and emergency nonlife-threatening (code 2) calls. The VSPA interval was added to the 9-1-1 call-to-dispatch and vehicle dispatch-to-scene intervals to determine the total time interval from call received until paramedic access to the patient (9-1-1 call-to-patient access). Compliance with the mandated response time intervals was determined using the traditional time intervals (9-1-1 call-to-scene) plus the VSPA time intervals (9-1-1 call-to-patient access). Chi-square was used to determine statistical significance. Of the 216 observed calls, 198 were matched to the traditional time intervals. Sixty-three were code 1, and 135 were code 2. Of the code 1 calls, 90.5% were compliant using 9-1-1 call-to-scene intervals dropping to 63.5% using 9-1-1 call-to-patient access intervals (p < 0.0005). Of the code 2 calls, 94.1% were compliant using 9-1-1 call-to-scene intervals. Compliance decreased to 83.7% using 9-1-1 call-to-patient access intervals (p = 0.012). The addition of the VSPA interval to the traditional time intervals impacts system response time compliance. Using 9-1-1 call-to-scene compliance as a basis for measuring system performance underestimates the time for the delivery of definitive care. This must be considered when response time interval compliances are defined.
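The drop in code 1 compliance can be reproduced from the reported figures; the 2x2 cell counts below are reconstructed from the percentages (63 code 1 calls, 90.5% vs. 63.5% compliant), and the statistic is the standard uncorrected chi-square, so this is an illustration rather than the paper's exact computation:

```python
def chi_square_2x2(a, b, c, d):
    """Uncorrected chi-square statistic for a 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# code 1 calls (n = 63): compliant vs. non-compliant under each definition
scene_based = (57, 6)       # 9-1-1 call-to-scene: 90.5% compliant
patient_based = (40, 23)    # 9-1-1 call-to-patient access: 63.5% compliant
chi2 = chi_square_2x2(*scene_based, *patient_based)
print(round(chi2, 1))       # -> 12.9, consistent with the reported p < 0.0005
```

With one degree of freedom, a statistic this large corresponds to p < 0.0005, matching the significance the abstract reports for the code 1 comparison.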

  9. Sex Differences in the Age of Peak Marathon Race Time.

    PubMed

    Nikolaidis, Pantelis T.; Rosemann, Thomas; Knechtle, Beat

    2018-04-30

    Recent studies showed that women were older than men when achieving their fastest marathon race time. These studies, however, investigated limited samples of athletes. We investigated the age of peak marathon performance in a large sample of female and male runners, using data from all finishers of a single race. We analyzed the age of peak marathon performance, in 1-year and 5-year age intervals, of 451,637 runners (168,702 women and 282,935 men) who finished the 'New York City Marathon' between 2006 and 2016, using analysis of variance and non-linear regression analysis. During these 11 years, men were faster and older than women, the participation of women increased disproportionately to that of men (resulting in a decrease of the male-to-female ratio), and relatively more women participated in the younger age groups. Most women were in the age group 30-34 years and most men in the age group 40-44 years. The fastest race times occurred at 29.7 years in women and 34.8 years in men in the 1-year age intervals, and in age group 30-34 years in women and 35-39 years in men in the 5-year age intervals. In contrast to existing findings reporting a higher age of peak marathon performance in women than in men, we found that women achieved their best marathon race time ~5 years earlier in life than men in both 1-year and 5-year age intervals. Female athletes and their coaches should plan to achieve their fastest marathon race time at the age of ~30 years.

  10. Structure-oriented versus process-oriented approach to enhance efficiency for emergency room operations: what lessons can we learn?

    PubMed

    Hwang, Taik Gun; Lee, Younsuk; Shin, Hojung

    2011-01-01

    The efficiency and quality of a healthcare system can be defined as interactions among the system structure, processes, and outcome. This article examines the effect of structural adjustment (change in floor plan or layout) and process improvement (critical pathway implementation) on performance of emergency room (ER) operations for acute cerebral infarction patients. Two large teaching hospitals participated in this study: Korea University (KU) Guro Hospital and KU Anam Hospital. The administration of Guro adopted a structure-oriented approach in improving its ER operations while the administration of Anam employed a process-oriented approach, facilitating critical pathways and protocols. To calibrate improvements, the data for time interval, length of stay, and hospital charges were collected, before and after the planned changes were implemented at each hospital. In particular, time interval is the most essential measure for handling acute stroke patients because patients' survival and recovery are affected by the promptness of diagnosis and treatment. Statistical analyses indicated that both redesign of layout at Guro and implementation of critical pathways at Anam had a positive influence on most of the performance measures. However, reduction in time interval was not consistent at Guro, demonstrating delays in processing time for a few processes. The adoption of critical pathways at Anam appeared more effective in reducing time intervals than the structural rearrangement at Guro, mainly as a result of the extensive employee training required for a critical pathway implementation. Thus, hospital managers should combine structure-oriented and process-oriented strategies to maximize effectiveness of improvement efforts.

  11. Hazard ratio estimation and inference in clinical trials with many tied event times.

    PubMed

    Mehrotra, Devan V; Zhang, Yiwei

    2018-06-13

    The medical literature contains numerous examples of randomized clinical trials with time-to-event endpoints in which large numbers of events accrued over relatively short follow-up periods, resulting in many tied event times. A generally common feature across such examples was that the logrank test was used for hypothesis testing and the Cox proportional hazards model was used for hazard ratio estimation. We caution that this common practice is particularly risky in the setting of many tied event times for two reasons. First, the estimator of the hazard ratio can be severely biased if the Breslow tie-handling approximation for the Cox model (the default in SAS and Stata software) is used. Second, the 95% confidence interval for the hazard ratio can include one even when the corresponding logrank test p-value is less than 0.05. To help establish a better practice, with applicability for both superiority and noninferiority trials, we use theory and simulations to contrast Wald and score tests based on well-known tie-handling approximations for the Cox model. Our recommendation is to report the Wald test p-value and corresponding confidence interval based on the Efron approximation. The recommended test is essentially as powerful as the logrank test, the accompanying point and interval estimates of the hazard ratio have excellent statistical properties even in settings with many tied event times, inferential alignment between the p-value and confidence interval is guaranteed, and implementation is straightforward using commonly used software. Copyright © 2018 John Wiley & Sons, Ltd.
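
    The difference between the two tie-handling approximations can be seen directly in the Cox partial likelihood for a single covariate. A minimal sketch in Python (toy data, not from the paper; `cox_loglik` is a hypothetical helper): with no tied event times the two approximations coincide exactly, and with ties they diverge, which is the root of the bias the authors warn about.

```python
import math

def cox_loglik(beta, data, method="efron"):
    """Log partial likelihood for a one-covariate Cox model.
    data: list of (time, event_indicator, covariate)."""
    ll = 0.0
    for t in sorted({ti for ti, e, _ in data if e}):
        tied = [x for (ti, e, x) in data if e and ti == t]
        risk = [x for (ti, _, x) in data if ti >= t]
        s_risk = sum(math.exp(beta * x) for x in risk)
        s_tied = sum(math.exp(beta * x) for x in tied)
        d = len(tied)
        ll += beta * sum(tied)
        if method == "breslow":
            # Breslow: full risk-set denominator repeated d times
            ll -= d * math.log(s_risk)
        else:
            # Efron: progressively down-weight the tied events' own mass
            ll -= sum(math.log(s_risk - (l / d) * s_tied) for l in range(d))
    return ll

no_ties = [(1, 1, 1), (2, 1, 0), (3, 0, 1), (4, 1, 1)]
ties = [(1, 1, 1), (1, 1, 1), (1, 1, 0), (2, 1, 1), (2, 0, 0), (3, 0, 0)]
print(cox_loglik(0.5, ties, "breslow"), cox_loglik(0.5, ties, "efron"))
```

    Maximizing each version over beta yields different hazard ratio estimates once ties are heavy; the Breslow denominator is systematically too large, attenuating the estimate toward a hazard ratio of one.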

  12. Cardiotachometer displays heart rate on a beat-to-beat basis

    NASA Technical Reports Server (NTRS)

    Rasquin, J. R.; Smith, H. E.; Taylor, R. A.

    1974-01-01

    Electronics for this system may be chosen so that complete calculation and display may be accomplished in a few milliseconds, far less than even the fastest heartbeat interval. Accuracy may be increased, if desired, by using higher-frequency timing oscillator, although this will require large capacity registers at increased cost.
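
    The beat-to-beat calculation reduces to dividing the timing-oscillator count for one beat interval into the oscillator frequency. A minimal sketch (the numbers are illustrative, not from the original instrument) showing why a higher-frequency oscillator improves resolution at the cost of larger registers:

```python
def beats_per_minute(counts, osc_hz):
    """Heart rate from the number of oscillator ticks counted
    between two successive heartbeats."""
    return 60.0 * osc_hz / counts

# A 0.8 s beat interval counted with a 1 kHz oscillator:
print(beats_per_minute(800, 1_000))  # 75.0 bpm

def resolution(bpm, osc_hz):
    """Rate error caused by a one-count error in the interval measurement."""
    counts = 60.0 * osc_hz / bpm
    return bpm - beats_per_minute(counts + 1, osc_hz)

print(resolution(75, 1_000))   # ~0.094 bpm per count
print(resolution(75, 10_000))  # ~0.0094 bpm, 10x finer, 10x larger count register
```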

  13. Techniques for obtaining regional radiation budgets from satellite radiometer observations, phase 4 and phase 5. Ph.D. Thesis. Final Report

    NASA Technical Reports Server (NTRS)

    Pina, J. F.; House, F. B.

    1976-01-01

    A scheme was developed which divides the earth-atmosphere system into 2060 elemental areas. The regions previously described are defined in terms of these elemental areas which are fixed in size and position as the satellite moves. One method, termed the instantaneous technique, yields values of the radiant emittance (We) and the radiant reflectance (Wr) which the regions have during the time interval of a single satellite pass. The number of observations matches the number of regions under study and a unique solution is obtained using matrix inversion. The other method (termed the best fit technique), yields time averages of We and Wr for large time intervals (e.g., months, seasons). The number of observations in this technique is much greater than the number of regions considered, and an approximate solution is obtained by the method of least squares.
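
    The two retrieval techniques differ only in the shape of the linear system: as many observations as unknowns (solved by matrix inversion) versus many more observations than unknowns (solved by least squares). A minimal noiseless sketch with two unknowns standing in for one region's (We, Wr); the weighting coefficients and flux values are made up for illustration, and the real scheme involves 2060 elemental areas:

```python
def solve2(a, b):
    """Solve a 2x2 system a x = b by Cramer's rule (matrix inversion)."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return ((b[0] * a[1][1] - a[0][1] * b[1]) / det,
            (a[0][0] * b[1] - b[0] * a[1][0]) / det)

def least_squares2(rows, y):
    """Overdetermined system: solve the normal equations (A^T A) x = A^T y."""
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(2)] for i in range(2)]
    aty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(2)]
    return solve2(ata, aty)

true_we, true_wr = 240.0, 100.0  # W m^-2, illustrative emittance/reflectance

# Instantaneous technique: observations match unknowns -> unique solution
a = [[0.8, 0.2], [0.3, 0.9]]
y = [row[0] * true_we + row[1] * true_wr for row in a]
print(solve2(a, y))  # recovers (240.0, 100.0)

# Best-fit technique: more observations than unknowns -> least squares
rows = [[0.8, 0.2], [0.3, 0.9], [0.5, 0.5], [0.9, 0.1], [0.2, 0.7], [0.6, 0.4]]
obs = [r[0] * true_we + r[1] * true_wr for r in rows]
print(least_squares2(rows, obs))  # also recovers (240.0, 100.0)
```

    With real, noisy radiometer data the overdetermined system is no longer exactly consistent, and the least-squares solution becomes an approximate, time-averaged estimate, as in the best-fit technique described above.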

  14. Design and interpretation of cell trajectory assays

    PubMed Central

    Bowden, Lucie G.; Simpson, Matthew J.; Baker, Ruth E.

    2013-01-01

    Cell trajectory data are often reported in the experimental cell biology literature to distinguish between different types of cell migration. Unfortunately, there is no accepted protocol for designing or interpreting such experiments and this makes it difficult to quantitatively compare different published datasets and to understand how changes in experimental design influence our ability to interpret different experiments. Here, we use an individual-based mathematical model to simulate the key features of a cell trajectory experiment. This shows that our ability to correctly interpret trajectory data is extremely sensitive to the geometry and timing of the experiment, the degree of motility bias and the number of experimental replicates. We show that cell trajectory experiments produce data that are most reliable when the experiment is performed in a quasi-one-dimensional geometry with a large number of identically prepared experiments conducted over a relatively short time-interval rather than a few trajectories recorded over particularly long time-intervals. PMID:23985736

  15. Reconciling short recurrence intervals with minor deformation in the new madrid seismic zone.

    PubMed

    Schweig, E S; Ellis, M A

    1994-05-27

    At least three great earthquakes occurred in the New Madrid seismic zone in 1811 and 1812. Estimates of present-day strain rates suggest that such events may have a repeat time of 1000 years or less. Paleoseismological data also indicate that earthquakes large enough to cause soil liquefaction have occurred several times in the past 5000 years. However, pervasive crustal deformation expected from such a high frequency of large earthquakes is not observed. This suggests that the seismic zone is a young feature, possibly as young as several tens of thousands of years old and no more than a few million years old.

  16. Measuring discharge with ADCPs: Inferences from synthetic velocity profiles

    USGS Publications Warehouse

    Rehmann, C.R.; Mueller, D.S.; Oberg, K.A.

    2009-01-01

    Synthetic velocity profiles are used to determine guidelines for sampling discharge with acoustic Doppler current profilers (ADCPs). The analysis allows the effects of instrument characteristics, sampling parameters, and properties of the flow to be studied systematically. For mid-section measurements, the averaging time required for a single profile measurement always exceeded the 40 s usually recommended for velocity measurements, and it increased with increasing sample interval and increasing time scale of the large eddies. Similarly, simulations of transect measurements show that discharge error decreases as the number of large eddies sampled increases. The simulations allow sampling criteria that account for the physics of the flow to be developed. © 2009 ASCE.

  17. Evolutionarily stable size of a megagametophyte: evolution of tiny megagametophytes of angiosperms from large ones of gymnosperms.

    PubMed

    Sakai, Satoki

    2013-02-01

    To examine the factors favoring large megagametophytes of gymnosperms and tiny ones of angiosperms, a game model for seed production was developed in which megagametophytes growing in the same female parent compete for resources provided by the parent. In the model, megagametophytes may continue to grow until seed completion or may cease to grow at a certain time and regrow at pollination or fertilization. Autonomous abortion of unpollinated or unfertilized megagametophytes may occur either at pollination or fertilization. Those megagametophytes absorb a certain amount of resources before abortion, due to constraints in the signal process, in addition to the resources absorbed before pollination or fertilization. It was found that both growth habits can be the ESS: megagametophytes continue to grow without cessation and monopolize resources, such as gymnosperms, or cease to grow until fertilization to reduce the loss of resources due to autonomous abortion, such as angiosperms. The former and the latter are the ESS if the time interval between pollination and fertilization is long and short, respectively. Thus, the fertilization interval may be a critical factor selecting for large megagametophytes of gymnosperms or tiny ones of angiosperms. © 2012 The Author(s). Evolution © 2012 The Society for the Study of Evolution.

  18. Development and interval testing of a naturalistic driving methodology to evaluate driving behavior in clinical research.

    PubMed

    Babulal, Ganesh M; Addison, Aaron; Ghoshal, Nupur; Stout, Sarah H; Vernon, Elizabeth K; Sellan, Mark; Roe, Catherine M

    2016-01-01

    Background: The number of older adults in the United States will double by 2056. Additionally, the number of licensed drivers will increase along with extended driving-life expectancy. Motor vehicle crashes are a leading cause of injury and death in older adults. Alzheimer's disease (AD) also negatively impacts driving ability and increases crash risk. Conventional methods to evaluate driving ability are limited in predicting decline among older adults. Innovations in GPS hardware and software can monitor driving behavior in the actual environments people drive in. Commercial off-the-shelf (COTS) devices are affordable, easy to install, and capture large volumes of data in real time. However, adapting these methodologies for research can be challenging. This study sought to adapt a COTS device and determine an interval that produced accurate data on the actual route driven, for use in future studies involving older adults with and without AD. Methods: Three subjects drove a single course in different vehicles at different intervals (30, 60 and 120 seconds), at different times of day: morning (9:00-11:59 AM), afternoon (2:00-5:00 PM) and night (7:00-10:00 PM). The nine datasets were examined to determine the optimal collection interval. Results: Compared to the 120-second and 60-second intervals, the 30-second interval was optimal in capturing the actual route driven, with the lowest number of incorrect paths, while remaining affordable with respect to data storage and curation. Discussion: Use of COTS devices offers minimal installation effort, unobtrusive monitoring, and discreet data extraction. However, these devices require strict protocols and controlled testing for adoption into research paradigms. After reliability and validity testing, these devices may provide valuable insight into daily driving behaviors and intraindividual change over time for populations of older adults with and without AD. Data can be aggregated over time to examine changes or adverse events and ascertain whether performance is declining.

  19. Assessing the Impact of Different Measurement Time Intervals on Observed Long-Term Wind Speed Trends

    NASA Astrophysics Data System (ADS)

    Azorin-Molina, C.; Vicente-Serrano, S. M.; McVicar, T.; Jerez, S.; Revuelto, J.; López Moreno, J. I.

    2014-12-01

    During the last two decades, climate studies have reported a tendency toward a decline in measured near-surface wind speed in some regions of Europe, North America, Asia and Australia. This weakening in observed wind speed has recently been termed "global stilling", showing a worldwide average trend of -0.140 m s-1 dec-1 during the last 50 years. The precise cause of "global stilling" remains largely uncertain and has been hypothetically attributed to several factors, mainly related to: (i) increasing surface roughness (i.e. forest growth, land use changes, and urbanization); (ii) a slowdown in large-scale atmospheric circulation; (iii) instrumental drifts, technological improvements, maintenance, shifts in measurement sites, and calibration issues; (iv) sunlight dimming due to air pollution; and (v) astronomical changes. This study proposes a novel investigation aimed at analyzing how the measurement time interval used to calculate a wind speed series can affect the sign and magnitude of long-term wind speed trends. For instance, National Weather Services across the globe estimate daily average wind speed using different time intervals and formulae that may affect the trend results. Firstly, we carried out a comprehensive review of wind studies reporting the sign and magnitude of wind speed trends and the sampling intervals used. Secondly, we analyzed near-surface wind speed trends recorded at 59 land-based stations across Spain, comparing monthly mean wind speed series obtained from: (a) daily mean wind speed data averaged from standard 10-min mean observations at 0000, 0700, 1300 and 1800 UTC; and (b) the average wind speed of 24 hourly measurements (i.e., wind run measurements) from 0000 to 2400 UTC. Thirdly and finally, we quantified the impact of anemometer drift (i.e. bearing malfunction) by presenting preliminary results (1 year of paired measurements) from a comparison of a new anemometer sensor against a malfunctioning anemometer sensor with worn bearings.

  20. Palaeoclimate: ocean tides and Heinrich events.

    PubMed

    Arbic, Brian K; Macayeal, Douglas R; Mitrovica, Jerry X; Milne, Glenn A

    2004-11-25

    Climate varied enormously over the most recent ice age--for example, large pulses of ice-rafted debris, originating mainly from the Labrador Sea, were deposited into the North Atlantic at roughly 7,000-year intervals, with global climatic implications. Here we show that ocean tides within the Labrador Sea were exceptionally large over the period spanning these huge, abrupt ice movements, which are known as Heinrich events. We propose that tides played a catalytic role in liberating iceberg armadas during that time.

  1. Pulse rate variability compared with Heart Rate Variability in children with and without sleep disordered breathing.

    PubMed

    Dehkordi, Parastoo; Garde, Ainara; Karlen, Walter; Wensley, David; Ansermino, J Mark; Dumont, Guy A

    2013-01-01

    Heart Rate Variability (HRV), the variation of time intervals between heartbeats, is one of the most promising and widely used quantitative markers of autonomic activity. Traditionally, HRV is measured as the series of instantaneous cycle intervals obtained from the electrocardiogram (ECG). In this study, we investigated the estimation of variation in heart rate from a photoplethysmography (PPG) signal, called pulse rate variability (PRV), and assessed its accuracy as an estimate of HRV in children with and without sleep disordered breathing (SDB). We recorded raw PPGs from 72 children using the Phone Oximeter, an oximeter connected to a mobile phone. Full polysomnography including ECG was simultaneously recorded for each subject. We used correlation and Bland-Altman analysis to compare the parameters of HRV and PRV between the two groups of children. Significant correlation (r > 0.90, p < 0.05) and close agreement were found between HRV and PRV for mean intervals, the standard deviation of intervals (SDNN) and the root-mean square of successive interval differences (RMSSD). However, Bland-Altman analysis showed a large divergence for the LF/HF ratio parameter. In addition, children with SDB had depressed SDNN and RMSSD and elevated LF/HF in comparison to children without SDB. In conclusion, PRV provides an accurate estimate of HRV in time-domain analysis but does not precisely estimate the frequency-domain parameters.
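
    The two time-domain measures compared above are simple statistics of the interval series. A minimal sketch (the RR intervals, in milliseconds, are made up for illustration):

```python
import math
import statistics

def sdnn(rr_ms):
    """Standard deviation of all normal-to-normal intervals (sample stdev)."""
    return statistics.stdev(rr_ms)

def rmssd(rr_ms):
    """Root mean square of successive interval differences."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rr = [800, 810, 790, 805, 795]  # illustrative interbeat intervals, ms
print(f"SDNN  = {sdnn(rr):.2f} ms")
print(f"RMSSD = {rmssd(rr):.2f} ms")
```

    The same computations apply whether the input is ECG R-R intervals (HRV) or PPG pulse-to-pulse intervals (PRV), which is what makes the time-domain agreement reported above directly comparable.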

  2. Time Determines the Neural Circuit Underlying Associative Fear Learning

    PubMed Central

    Guimarãis, Marta; Gregório, Ana; Cruz, Andreia; Guyon, Nicolas; Moita, Marta A.

    2011-01-01

    Ultimately, associative learning is a function of the temporal features of, and relationships between, experienced stimuli. Nevertheless, how time affects the neural circuit underlying this form of learning remains largely unknown. To address this issue, we used single-trial auditory trace fear conditioning and varied the length of the interval between tone and foot-shock. Through temporary inactivation of the amygdala, medial prefrontal cortex (mPFC), and dorsal hippocampus in rats, we tested the hypothesis that different temporal intervals between the tone and the shock influence the neuronal structures necessary for learning. With this study we provide the first experimental evidence showing that temporarily inactivating the amygdala before training impairs auditory fear learning when there is a temporal gap between the tone and the shock. Moreover, imposing a short interval (5 s) between the two stimuli also relies on the mPFC, while learning the association across a longer interval (40 s) becomes additionally dependent on a third structure, the dorsal hippocampus. Thus, our results suggest that increasing the interval length between tone and shock leads to the involvement of an increasing number of brain areas in order for the association between the two stimuli to be acquired normally. These findings demonstrate that the temporal relationship between events is a key factor in determining the neuronal mechanisms underlying associative fear learning. PMID:22207842

  3. Time from cervical conization to pregnancy and preterm birth.

    PubMed

    Himes, Katherine P; Simhan, Hyagriv N

    2007-02-01

    To estimate whether the time interval between cervical conization and subsequent pregnancy is associated with risk of preterm birth. Our study is a case-control study nested in a retrospective cohort. Women who underwent colposcopic biopsy or conization with loop electrosurgical excision procedure, large loop excision of the transformation zone, or cold knife cone and subsequently delivered at our hospital were identified with electronic databases. Variables considered as possible confounders included maternal race, age, marital status, payor status, years of education, self-reported tobacco use, history of preterm delivery, and dimensions of the cone specimen. Conization was not associated with preterm birth or any subtypes of preterm birth. Among women who underwent conization, those with a subsequent preterm birth had a shorter conization-to-pregnancy interval (337 days) than women with a subsequent term birth (581 days) (P=.004). The association between short conization-to-pregnancy interval and preterm birth remained significant when controlling for confounders including race and cone dimensions. The effect of short conization-to-pregnancy interval on subsequent preterm birth was more persistent among African Americans when compared with white women. Women with a short conization-to-pregnancy interval are at increased risk for preterm birth. Women of reproductive age who must have a conization procedure can be counseled that conceiving within 2 to 3 months of the procedure may be associated with an increased risk of preterm birth. II.

  4. Spatially distributed potential evapotranspiration modeling and climate projections.

    PubMed

    Gharbia, Salem S; Smullen, Trevor; Gill, Laurence; Johnston, Paul; Pilla, Francesco

    2018-08-15

    Evapotranspiration integrates energy and mass transfer between the Earth's surface and atmosphere and is the most active mechanism linking the atmosphere, hydrosphere, lithosphere and biosphere. This study focuses on fine-resolution modeling and projection of spatially distributed potential evapotranspiration at the large catchment scale in response to climate change. Six potential evapotranspiration algorithms, systematically selected based on structured criteria and data availability, were applied and then validated against long-term mean monthly data for the Shannon River catchment with a 50 m² cell size. The best-validated algorithm was then applied to evaluate the possible effect of future climate change on potential evapotranspiration rates. Spatially distributed potential evapotranspiration projections were modeled based on climate change projections from multi-GCM ensembles for three future time intervals (2020, 2050 and 2080), using a range of Representative Concentration Pathways to produce four scenarios for each time interval. Finally, seasonal results were compared to baseline results to evaluate the impact of climate change on potential evapotranspiration and therefore on the catchment's dynamic water balance. The results present evidence that the modeled climate change scenarios would have a significant impact on future potential evapotranspiration rates. All the simulated scenarios predicted an increase in potential evapotranspiration for each modeled future time interval, which would significantly affect the dynamic catchment water balance. This study addresses the gap in the literature on using GIS-based algorithms to model fine-scale spatially distributed potential evapotranspiration in large catchment systems based on climatological observations and simulations in different climatological zones. Providing fine-scale potential evapotranspiration data is crucial for assessing the dynamic catchment water balance and for setting up management scenarios for water abstractions. This study illustrates a transferable, systematic method for designing GIS-based algorithms to simulate spatially distributed potential evapotranspiration in large catchment systems. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. Dynamics of Stability of Orientation Maps Recorded with Optical Imaging.

    PubMed

    Shumikhina, S I; Bondar, I V; Svinov, M M

    2018-03-15

    Orientation selectivity is an important feature of visual cortical neurons. Optical imaging of the visual cortex allows for the generation of maps of orientation selectivity that reflect the activity of large populations of neurons. To estimate the statistical significance of effects of experimental manipulations, evaluation of the stability of cortical maps over time is required. Here, we performed optical imaging recordings of the visual cortex of anesthetized adult cats. Monocular stimulation with moving clockwise square-wave gratings that continuously changed orientation and direction was used as the mapping stimulus. Recordings were repeated at various time intervals, from 15 min to 16 h. Quantification of map stability was performed on a pixel-by-pixel basis using several techniques. Map reproducibility showed clear dynamics over time. The highest degree of stability was seen in maps recorded 15-45 min apart. Averaging across all time intervals and all stimulus orientations revealed a mean shift of 2.2 ± 0.1°. There was a significant tendency for larger shifts to occur at longer time intervals. Shifts between 2.8° (mean ± 2SD) and 5° were observed more frequently at oblique orientations, while shifts greater than 5° appeared more frequently at cardinal orientations. Shifts greater than 5° occurred rarely overall (5.4% of cases) and never exceeded 11°. Shifts of 10-10.6° (0.7%) were seen occasionally at time intervals of more than 4 h. Our findings should be considered when evaluating the potential effect of experimental manipulations on orientation selectivity mapping studies. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.

  6. An informational transition in conditioned Markov chains: Applied to genetics and evolution.

    PubMed

    Zhao, Lei; Lascoux, Martin; Waxman, David

    2016-08-07

    In this work we assume that we have some knowledge about the state of a population at two known times, when the dynamics is governed by a Markov chain such as a Wright-Fisher model. Such knowledge could be obtained, for example, from observations made on ancient and contemporary DNA, or during laboratory experiments involving long term evolution. A natural assumption is that the behaviour of the population, between observations, is related to (or constrained by) what was actually observed. The present work shows that this assumption has limited validity. When the time interval between observations is larger than a characteristic value, which is a property of the population under consideration, there is a range of intermediate times where the behaviour of the population has reduced or no dependence on what was observed and an equilibrium-like distribution applies. Thus, for example, if the frequency of an allele is observed at two different times, then for a large enough time interval between observations, the population has reduced or no dependence on the two observed frequencies for a range of intermediate times. Given observations of a population at two times, we provide a general theoretical analysis of the behaviour of the population at all intermediate times, and determine an expression for the characteristic time interval, beyond which the observations do not constrain the population's behaviour over a range of intermediate times. The findings of this work relate to what can be meaningfully inferred about a population at intermediate times, given knowledge of terminal states. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Paleoseismology under sea: First evidence for irregular seismic cycles during Holocene off Algeria from turbidites

    NASA Astrophysics Data System (ADS)

    Ratzov, Gueorgui; Cattaneo, Antonio; Babonneau, Nathalie; Déverchere, Jacques; Yelles, Karim; Bracene, Rabah

    2013-04-01

    According to simple models, stress build-up along a given fault is proportional to the time elapsed since the previous earthquake. Although the resulting "seismic gap" hypothesis works well for moderate-magnitude earthquakes (Mw 4-5), large events (Mw > 6) are hardly predictable and show great variation in recurrence intervals. Thus, models based on stress transfer and interactions between faults suggest that an earthquake may hasten or delay the occurrence of the next earthquake on an adjacent fault by raising or lowering the level of static stress. Here, we show that meaningful information on the recurrence intervals of large earthquakes over several seismic cycles may be obtained using the turbidite record offshore of the Algerian margin (Mediterranean Sea), an area prone to relatively large (M~7) earthquakes in historical times. Indeed, as evidenced on the Cascadia subduction zone, turbidites that are synchronous over a large area and originate from independent sources are most likely triggered by an earthquake. To test the method on this slowly convergent margin, we analysed turbidites in 3 sediment cores collected off the area shaken by the 1980 Ms 7.3 El Asnam and 1954 M 6.7 Orléansville earthquakes. We used X-ray radioscopy, XRF major-element counts, magnetic susceptibility, and grain-size distribution to accurately discriminate turbidites (~instantaneous deposits) from hemipelagites (continuous background sedimentation). We dated turbidites by calculating hemipelagic sedimentation rates obtained with AMS radiocarbon ages and applying these rates between turbidites. Finally, the ages of events were compared to the only paleoseismic investigation available onland. We found that 10 to 25 turbidites were deposited as single or multiple pulses over the last ~8 ka. Once correlated from site to site, they support 14 seismic events. Most events correlate with the paleoseismic record of the El Asnam fault, but the uncorrelated events indicate that other faults were also active. Only the first of the two major events of 1954 and 1980 triggered a turbidity current, implying that the sediment buffer on the continental shelf could not be reloaded in 26 years, thus giving information on the minimum time resolution of our method. The new paleoseismic catalog shows a recurrence interval of 300-700 years for most events, but also a long interval of >1200 years without any major earthquake. This result suggests that the level of static stress may have dropped drastically as a result of three main events occurring within the 800 years prior to the quiescence period. The quiescent period also supports stress transfer and interaction between neighbouring faults.

  8. Diffusion with stochastic resetting at power-law times.

    PubMed

    Nagar, Apoorva; Gupta, Shamik

    2016-06-01

    What happens when a continuously evolving stochastic process is interrupted with large changes at random intervals τ distributed as a power law ∼τ^{-(1+α)};α>0? Modeling the stochastic process by diffusion and the large changes as abrupt resets to the initial condition, we obtain exact closed-form expressions for both static and dynamic quantities, while accounting for strong correlations implied by a power law. Our results show that the resulting dynamics exhibits a spectrum of rich long-time behavior, from an ever-spreading spatial distribution for α<1, to one that is time independent for α>1. The dynamics has strong consequences on the time to reach a distant target for the first time; we specifically show that there exists an optimal α that minimizes the mean time to reach the target, thereby offering a step towards a viable strategy to locate targets in a crowded environment.
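
    Reset intervals with the stated power-law density can be drawn by inverse-transform sampling: if U is uniform on (0, 1], then τ = τ_min · U^(-1/α) has density proportional to τ^(-(1+α)) for τ ≥ τ_min. A minimal sketch (α = 3 and τ_min = 1 are illustrative choices, not values from the paper):

```python
import random

def powerlaw_interval(alpha, tau_min, rng):
    """Waiting time with density ~ tau^-(1+alpha) for tau >= tau_min."""
    u = 1.0 - rng.random()  # uniform on (0, 1], avoids u == 0
    return tau_min * u ** (-1.0 / alpha)

rng = random.Random(42)
taus = [powerlaw_interval(3.0, 1.0, rng) for _ in range(100_000)]

# For alpha > 1 the mean reset interval is finite: alpha * tau_min / (alpha - 1)
print(min(taus) >= 1.0)       # True: all intervals respect the cutoff
print(sum(taus) / len(taus))  # ~1.5 for alpha = 3, tau_min = 1
```

    The α = 1 boundary in the abstract reflects exactly this: for α < 1 the mean interval between resets diverges and the spatial distribution keeps spreading, while for α > 1 resets recur with finite mean waiting time and a time-independent distribution emerges.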

  9. Statistical Parameter Study of the Time Interval Distribution for Nonparalyzable, Paralyzable, and Hybrid Dead Time Models

    NASA Astrophysics Data System (ADS)

    Syam, Nur Syamsi; Maeng, Seongjin; Kim, Myo Gwang; Lim, Soo Yeon; Lee, Sang Hoon

    2018-05-01

    A large dead time in a Geiger-Mueller (GM) detector may cause a large count loss in radiation measurements and may consequently distort the Poisson statistics of radiation events into a new distribution. The new distribution has different statistical parameters from the original distribution. Therefore, the variance, skewness, and excess kurtosis of the time interval distribution, as functions of the observed count rate, were studied for the well-known nonparalyzable, paralyzable, and nonparalyzable-paralyzable hybrid dead time models of a GM detector using Monte Carlo simulation (GMSIM). These parameters were then compared with the statistical parameters of a perfect detector to observe the change in the distribution. The results show that the behaviors of the statistical parameters differ among the three dead time models. The skewness and excess kurtosis of the nonparalyzable model are equal or very close to those of a perfect detector (≅2 for skewness and ≅6 for excess kurtosis), whereas in the paralyzable and hybrid models these parameters reach minimum values near the maximum observed count rate. The different trends of the three models obtained from the GMSIM simulation can be used to distinguish the dead time behavior of a GM counter, i.e., whether the counter is best described by the nonparalyzable, paralyzable, or hybrid model. In a future study, these statistical parameters need to be analyzed further to determine whether they can be used to estimate the dead time for each model, particularly for the paralyzable and hybrid models.
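The dead-time distortion can be illustrated with a small Monte Carlo of our own (GMSIM itself is not reproduced here; the rate, dead time, and duration below are invented): Poisson arrivals at true rate n are thinned by a nonparalyzable or paralyzable dead time τ, and the observed rates are compared with the classic predictions m = n/(1 + nτ) and m = n·exp(−nτ).

```python
import random

# Illustrative dead-time Monte Carlo (not GMSIM); parameters are invented.

def poisson_arrivals(rate, t_max, rng):
    t, out = 0.0, []
    while True:
        t += rng.expovariate(rate)     # exponential inter-arrival times
        if t > t_max:
            return out
        out.append(t)

def observed_counts(arrivals, tau, paralyzable):
    counts, dead_until = 0, -1.0
    for t in arrivals:
        if t >= dead_until:
            counts += 1
            dead_until = t + tau
        elif paralyzable:
            dead_until = t + tau       # every event extends the dead period
    return counts

rng = random.Random(42)
n, tau, t_max = 1000.0, 1e-3, 200.0    # true rate (cps), dead time (s), duration (s)
events = poisson_arrivals(n, t_max, rng)
m_np = observed_counts(events, tau, paralyzable=False) / t_max
m_p  = observed_counts(events, tau, paralyzable=True) / t_max
# Analytic predictions at n*tau = 1: n/(1+n*tau) = 500 cps, n*exp(-n*tau) ~ 368 cps
```

At n·τ = 1 the paralyzable counter sits exactly at its maximum observed rate, the regime where the abstract reports the statistical parameters reaching their minima.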

  10. Interval between onset of psoriasis and psoriatic arthritis comparing the UK Clinical Practice Research Datalink with a hospital-based cohort.

    PubMed

    Tillett, William; Charlton, Rachel; Nightingale, Alison; Snowball, Julia; Green, Amelia; Smith, Catherine; Shaddick, Gavin; McHugh, Neil

    2017-12-01

    To describe the time interval between the onset of psoriasis and PsA in the UK primary care setting and compare it with a large, well-classified secondary care cohort. Patients with PsA and/or psoriasis were identified in the UK Clinical Practice Research Datalink (CPRD). The secondary care cohort comprised patients from the Bath PsA longitudinal observational cohort study. For incident PsA patients in the CPRD who also had a record of psoriasis, the time interval between PsA diagnosis and first psoriasis record was calculated. Comparisons were made with the time interval between diagnoses in the Bath cohort. There were 5272 eligible PsA patients in the CPRD and 815 in the Bath cohort. In both cohorts, the majority of patients (82.3 and 61.3%, respectively) had psoriasis before their PsA diagnosis or within the same calendar year (10.5 and 23.8%), with only a minority receiving their PsA diagnosis first (7.1 and 14.8%). Excluding those who presented with arthritis before psoriasis, the median time between diagnoses was 8 years [interquartile range (IQR) 2-15] in the CPRD and 7 years (IQR 0-20) in the Bath cohort. In the CPRD, 60.1 and 75.1% received their PsA diagnosis within 10 and 15 years of their psoriasis diagnosis, respectively; this was comparable with 57.2 and 67.7% in the Bath cohort. A similar distribution for the time interval between psoriasis and arthritis was observed in the CPRD and the secondary care cohort. These data can inform screening strategies and support the validity of data from each cohort. © The Author 2017. Published by Oxford University Press on behalf of the British Society for Rheumatology. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  11. Recurrence interval analysis of trading volumes

    NASA Astrophysics Data System (ADS)

    Ren, Fei; Zhou, Wei-Xing

    2010-06-01

    We study the statistical properties of the recurrence intervals τ between successive trading volumes exceeding a certain threshold q. The recurrence interval analysis is carried out for 20 liquid Chinese stocks covering a period from January 2000 to May 2009, and two Chinese indices from January 2003 to April 2009. Similar to the recurrence interval distribution of the price returns, the tail of the recurrence interval distribution of the trading volumes follows a power-law scaling, and the results are verified by goodness-of-fit tests using the Kolmogorov-Smirnov (KS) statistic, the weighted KS statistic and the Cramér-von Mises criterion. The measurements of the conditional probability distribution and the detrended fluctuation function show that both short-term and long-term memory effects exist in the recurrence intervals between trading volumes. We further study the relationship between trading volumes and price returns based on the recurrence interval analysis method. It is found that large trading volumes are more likely to occur following large price returns, and the comovement between trading volumes and price returns is more pronounced for large trading volumes.
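The basic recurrence-interval construction can be sketched in a few lines. The data below are synthetic i.i.d. lognormal draws (our own choice, not the Chinese stock data), so, unlike real volumes, they exhibit no memory effects; the sketch only shows how the intervals τ are extracted.

```python
import numpy as np

# Illustrative recurrence-interval extraction on synthetic data.

def recurrence_intervals(series, q):
    """Waiting times between successive observations exceeding threshold q."""
    idx = np.flatnonzero(np.asarray(series) > q)
    return np.diff(idx)            # intervals in units of the sampling step

rng = np.random.default_rng(0)
volumes = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)
q = np.quantile(volumes, 0.95)     # threshold exceeded by the top 5% of volumes
tau = recurrence_intervals(volumes, q)
# For i.i.d. data the mean interval is ~1/0.05 = 20 sampling steps; correlated
# volumes would instead show clustering (many short intervals, a heavy tail).
```

Conditional distributions (intervals following short vs. long intervals) and detrended fluctuation analysis, as used in the paper, would then be applied to `tau`.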

  12. Recurrence interval analysis of trading volumes.

    PubMed

    Ren, Fei; Zhou, Wei-Xing

    2010-06-01

    We study the statistical properties of the recurrence intervals τ between successive trading volumes exceeding a certain threshold q. The recurrence interval analysis is carried out for the 20 liquid Chinese stocks covering a period from January 2000 to May 2009, and two Chinese indices from January 2003 to April 2009. Similar to the recurrence interval distribution of the price returns, the tail of the recurrence interval distribution of the trading volumes follows a power-law scaling, and the results are verified by the goodness-of-fit tests using the Kolmogorov-Smirnov (KS) statistic, the weighted KS statistic and the Cramér-von Mises criterion. The measurements of the conditional probability distribution and the detrended fluctuation function show that both short-term and long-term memory effects exist in the recurrence intervals between trading volumes. We further study the relationship between trading volumes and price returns based on the recurrence interval analysis method. It is found that large trading volumes are more likely to occur following large price returns, and the comovement between trading volumes and price returns is more pronounced for large trading volumes.

  13. Wind and wave extremes over the world oceans from very large ensembles

    NASA Astrophysics Data System (ADS)

    Breivik, Øyvind; Aarnes, Ole Johan; Abdalla, Saleh; Bidlot, Jean-Raymond; Janssen, Peter A. E. M.

    2014-07-01

    Global return values of marine wind speed and significant wave height are estimated from very large aggregates of archived ensemble forecasts at +240 h lead time. Long lead time ensures that the forecasts represent independent draws from the model climate. Compared with ERA-Interim, a reanalysis, the ensemble yields higher return estimates for both wind speed and significant wave height. Confidence intervals are much tighter due to the large size of the data set. The period (9 years) is short enough to be considered stationary even with climate change. Furthermore, the ensemble is large enough for nonparametric 100 year return estimates to be made from order statistics. These direct return estimates compare well with extreme value estimates outside areas with tropical cyclones. Like any method employing modeled fields, it is sensitive to tail biases in the numerical model, but we find that the biases are moderate outside areas with tropical cyclones.
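The order-statistics idea can be illustrated with a toy calculation. The ensemble size, forecast frequency, and Gumbel-distributed "wave heights" below are all invented for the sketch; the point is only that when an aggregate of N effectively independent draws represents many model-years of weather, an m-year return value can be read directly off the sorted sample.

```python
import numpy as np

# Back-of-the-envelope sketch (numbers illustrative, not the paper's):
# 9 years of twice-daily forecasts from a 51-member ensemble at long lead time
# behave roughly like 9 * 51 = 459 years of independent weather, so a 100-year
# event should appear ~4.6 times in the aggregate.

def return_value(samples, years_represented, return_period):
    k = max(1, round(years_represented / return_period))  # expected exceedances
    return np.sort(samples)[-k]          # k-th largest sample

rng = np.random.default_rng(1)
n_dates, n_members = 9 * 730, 51         # twice daily for 9 years, 51 members
hs = rng.gumbel(loc=2.0, scale=0.5, size=n_dates * n_members)  # fake Hs field
rv100 = return_value(hs, years_represented=9 * n_members, return_period=100)
```

The tight confidence intervals reported in the abstract come from exactly this effect: the order statistic is estimated from millions of draws rather than extrapolated from a short fitted tail.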

  14. Estimating clinical chemistry reference values based on an existing data set of unselected animals.

    PubMed

    Dimauro, Corrado; Bonelli, Piero; Nicolussi, Paola; Rassu, Salvatore P G; Cappio-Borlino, Aldo; Pulina, Giuseppe

    2008-11-01

    In an attempt to standardise the determination of biological reference values, the International Federation of Clinical Chemistry (IFCC) has published a series of recommendations on developing reference intervals. The IFCC recommends the use of an a priori sample of at least 120 healthy individuals. However, such a large number of samples and laboratory analyses is expensive, time-consuming and not always feasible, especially in veterinary medicine. In this paper, an alternative (a posteriori) method is described and used to determine reference intervals for biochemical parameters of farm animals from an existing laboratory data set. The method is based on the detection and removal of outliers to obtain, from the existing data set, a large sample of animals likely to be healthy. This allowed the estimation of reliable reference intervals for biochemical parameters in Sarda dairy sheep. The method may also be useful for determining reference intervals for different species, ages and genders.
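The a posteriori approach can be sketched as follows. The iterated Tukey-fence outlier rule and all numbers below are our own illustrative choices, not necessarily the detection method used in the paper: outliers are stripped from the unselected data until none remain, and the central 95% of the surviving, presumably healthy, animals gives the reference interval.

```python
import numpy as np

# Illustrative a posteriori reference interval from unselected data
# (hypothetical analyte values; not the paper's algorithm or data).

def reference_interval(values, k=1.5, max_iter=10):
    x = np.asarray(values, dtype=float)
    for _ in range(max_iter):
        q1, q3 = np.percentile(x, [25, 75])
        keep = (x >= q1 - k * (q3 - q1)) & (x <= q3 + k * (q3 - q1))
        if keep.all():
            break                       # no outliers left
        x = x[keep]
    return np.percentile(x, [2.5, 97.5])

rng = np.random.default_rng(7)
healthy = rng.normal(60, 5, size=2000)  # plausible healthy analyte values
sick = rng.normal(95, 10, size=60)      # small diseased subpopulation
lo, hi = reference_interval(np.concatenate([healthy, sick]))
# Expect roughly mean ± 2 SD of the healthy animals, i.e. ~50-70
```

Because quartiles are robust, the small diseased tail is fenced out while the healthy distribution is left nearly intact.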

  15. Anomalous Fluctuations in Autoregressive Models with Long-Term Memory

    NASA Astrophysics Data System (ADS)

    Sakaguchi, Hidetsugu; Honjo, Haruo

    2015-10-01

    An autoregressive model with a power-law type memory kernel is studied as a stochastic process that exhibits a self-affine-fractal-like behavior on small time scales. We find numerically that the root-mean-square displacement Δ(m) for the time interval m increases as a power law m^α with α < 1/2 for small m but saturates at sufficiently large m. The exponent α changes with the power exponent of the memory kernel.
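A minimal numerical sketch of such a process (the kernel form, normalization, and all parameters are our own illustrative choices, not the paper's): an AR process with weights a_k ∝ k^{-β}, scaled so they sum to λ < 1 for stationarity, and the root-mean-square displacement Δ(m) = √⟨(x_{t+m} − x_t)²⟩ measured over increasing intervals m.

```python
import numpy as np

# Illustrative AR process with a power-law memory kernel (invented parameters).
rng = np.random.default_rng(3)
K, beta, lam, T = 100, 1.5, 0.9, 50_000
kernel = np.arange(1, K + 1, dtype=float) ** -beta
kernel *= lam / kernel.sum()            # weights sum to lam < 1 (stationary)

x = np.zeros(T)
noise = rng.normal(size=T)
for t in range(K, T):
    # x[t-K:t][::-1] lines up x[t-1], x[t-2], ..., x[t-K] with kernel[0..K-1]
    x[t] = kernel @ x[t - K:t][::-1] + noise[t]

def rms_displacement(x, m):
    d = x[m:] - x[:-m]
    return np.sqrt(np.mean(d * d))

deltas = [rms_displacement(x[K:], m) for m in (1, 4, 16, 64, 256)]
# Delta(m) grows for small m and saturates once m exceeds the memory range.
```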

  16. High sensitivity Troponin T: an audit of implementation of its protocol in a district general hospital.

    PubMed

    Kalim, Shahid; Nazir, Shaista; Khan, Zia Ullah

    2013-01-01

    Protocols based on newer high-sensitivity Troponin T (hsTropT) assays can rule in a suspected acute myocardial infarction (AMI) as early as 3 hours. We conducted this study to audit adherence to our Trust's newly introduced AMI diagnostic protocol based on paired hsTropT testing at 0 and 3 hours. We retrospectively reviewed data on all patients who had an hsTropT test done between 1st and 7th May 2012. Patients' demographics, use of single or paired samples, time interval between paired samples, presenting symptoms, and ECG findings were noted, and their means, medians, standard deviations, and proportions were calculated. A total of 66 patients had an hsTropT test done during this period. Mean age was 63.30 +/- 17.46 years and 38 (57.57%) were males. Twenty-four (36.36%) patients had only a single hsTropT sample taken, rather than the paired samples recommended by the protocol. Among the 42 (63.63%) patients with paired samples, the mean time interval was 4.41 +/- 5.7 hours. Contrary to the recommendations, 15 (22.73%) had a very long and 2 (3.03%) a very short time interval between the two samples. A subgroup analysis of patients with single samples found only 2 (3.03%) patients with ST-segment elevation, for whom single testing was appropriate. Our study confirmed that in a large number of patients the protocol for paired sampling, or the recommended 3-hour interval between the two samples, was not being followed.

  17. Situational Lightning Climatologies

    NASA Technical Reports Server (NTRS)

    Bauman, William; Crawford, Winifred

    2010-01-01

    Research has revealed distinct spatial and temporal distributions of lightning occurrence that are strongly influenced by large-scale atmospheric flow regimes. It was previously believed there were two flow systems, but seven distinct flow regimes have since been identified. The Applied Meteorology Unit (AMU) has recalculated the lightning climatologies for the Shuttle Landing Facility (SLF) and the eight airfields in the National Weather Service in Melbourne (NWS MLB) County Warning Area (CWA) using individual lightning strike data to improve the accuracy of the climatologies. The software determines the location of each CG lightning strike within 5-, 10-, 20-, and 30-nmi (9.3-, 18.5-, 37-, 55.6-km) radii of each airfield. Each CG lightning strike is binned at 1-, 3-, and 6-hour intervals at each specified radius. The software merges the CG lightning strike time intervals and distances with each wind flow regime, creates probability statistics for each time interval, radius, and flow regime, and stratifies them by month and warm season. The AMU also updated the graphical user interface (GUI) with the new data.

  18. Representation of time interval entrained by periodic stimuli in the visual thalamus of pigeons

    PubMed Central

    Wang, Shu-Rong

    2017-01-01

    Animals use the temporal information from previously experienced periodic events to instruct their future behaviors. The retina and cortex are involved in such behavior, but it remains largely unknown how the thalamus, transferring visual information from the retina to the cortex, processes the periodic temporal patterns. Here we report that the luminance cells in the nucleus dorsolateralis anterior thalami (DLA) of pigeons exhibited oscillatory activities in a temporal pattern identical to the rhythmic luminance changes of repetitive light/dark (LD) stimuli with durations in the seconds-to-minutes range. Particularly, after LD stimulation, the DLA cells retained the entrained oscillatory activities with an interval closely matching the duration of the LD cycle. Furthermore, the post-stimulus oscillatory activities of the DLA cells were sustained without feedback inputs from the pallium (equivalent to the mammalian cortex). Our study suggests that the experience-dependent representation of time interval in the brain might not be confined to the pallial/cortical level, but may occur as early as at the thalamic level. PMID:29284554

  19. Coalbed methane potential of the Upper Cretaceous Mesaverde and Meeteetse formations, Wind River Reservation, Wyoming

    USGS Publications Warehouse

    Johnson, R.C.; Clark, A.C.; Barker, C.E.; Crysdale, B.L.; Higley, D.K.; Szmajter, R.J.; Finn, T.M.

    1993-01-01

    The environments of deposition of the uppermost part of the Cody Shale and the Mesaverde and Meeteetse Formations of Late Cretaceous age were studied on outcrop in the Shotgun Butte area in the north-central part of the Wind River Reservation. A shoreface sandstone occurs in the lower part of the Mesaverde Formation at all localities studied, and is directly overlain by a coaly interval. Repetitive coarsening-upward cycles of mudstone, siltstone, and sandstone occur in the 200 ft interval of the upper part of the Cody Shale below the shoreface sandstone. These Cody sandstones are typically hummocky cross-stratified with symmetrical ripples near the top, indicating that they are largely storm surge deposits that were later reworked. Channel-form sandstones from 10 to 20 ft thick, with abundant locally derived clayey clasts, occur in a 75 ft thick interval below the shoreface at one locality. These unusual sandstones are largely confined to a narrow area of the outcrop and grade laterally into more typical storm surge deposits. They may be unusually large storm surge channels created when high-energy flow conditions were localized to a limited area of the shelf. The Mesaverde Formation above the shoreface sandstone is divided into a middle member and the Teapot Sandstone Member. The lower part of the middle member is everywhere coaly. Erosional-based sandstones in this coaly interval are highly variable in thickness and architecture. Thin, single channel sandstone bodies were deposited by moderate to high sinuosity streams, and thick, multistory channel sandstone bodies were deposited by rapidly switching fluvial channel systems that remained relatively stationary for extended periods of time. The architecture of the fluvial channel sandstones in the overlying noncoaly interval appears to be highly variable as well, with complex multistory sandstones occurring at different stratigraphic levels at different localities. 
This distribution may be explained by long-term stability of fluvial channel systems followed by major avulsion events. The Teapot Sandstone Member consists of fairly persistent to lenticular white multistory sandstone units that are as much as 85 ft thick and contain trough cross beds as much as 5 ft high. These sandstone units are interbedded with gray mudstones and carbonaceous shales. Paleosols are preserved at the tops of individual sandstones in the multistory units in some places. It is suggested that these sandstones were deposited largely by low-sinuosity to braided streams. The Meeteetse Formation consists of alternating coal-rich and sandstone-rich intervals. The coal-rich intervals have relatively thin fluvial channel sandstones probably deposited by medium to high sinuosity streams, whereas the sand-rich intervals have thick (up to 105 ft) multistory fluvial channel sandstones possibly deposited by low-sinuosity to braided streams.

  20. A comparison of confidence interval methods for the intraclass correlation coefficient in community-based cluster randomization trials with a binary outcome.

    PubMed

    Braschel, Melissa C; Svec, Ivana; Darlington, Gerarda A; Donner, Allan

    2016-04-01

    Many investigators rely on previously published point estimates of the intraclass correlation coefficient rather than on their associated confidence intervals to determine the required size of a newly planned cluster randomized trial. Although confidence interval methods for the intraclass correlation coefficient that can be applied to community-based trials have been developed for a continuous outcome variable, fewer methods exist for a binary outcome variable. The aim of this study is to evaluate confidence interval methods for the intraclass correlation coefficient applied to binary outcomes in community intervention trials enrolling a small number of large clusters. Existing methods for confidence interval construction are examined and compared to a new ad hoc approach based on dividing clusters into a large number of smaller sub-clusters and subsequently applying existing methods to the resulting data. Monte Carlo simulation is used to assess the width and coverage of confidence intervals for the intraclass correlation coefficient based on Smith's large sample approximation of the standard error of the one-way analysis of variance estimator, an inverted modified Wald test for the Fleiss-Cuzick estimator, and intervals constructed using a bootstrap-t applied to a variance-stabilizing transformation of the intraclass correlation coefficient estimate. In addition, a new approach is applied in which clusters are randomly divided into a large number of smaller sub-clusters with the same methods applied to these data (with the exception of the bootstrap-t interval, which assumes large cluster sizes). These methods are also applied to a cluster randomized trial on adolescent tobacco use for illustration. When applied to a binary outcome variable in a small number of large clusters, existing confidence interval methods for the intraclass correlation coefficient provide poor coverage. 
However, confidence intervals constructed using the new approach combined with Smith's method provide nominal or close to nominal coverage when the intraclass correlation coefficient is small (<0.05), as is the case in most community intervention trials. This study concludes that when a binary outcome variable is measured in a small number of large clusters, confidence intervals for the intraclass correlation coefficient may be constructed by dividing existing clusters into sub-clusters (e.g. groups of 5) and using Smith's method. The resulting confidence intervals provide nominal or close to nominal coverage across a wide range of parameters when the intraclass correlation coefficient is small (<0.05). Application of this method should provide investigators with a better understanding of the uncertainty associated with a point estimator of the intraclass correlation coefficient used for determining the sample size needed for a newly designed community-based trial. © The Author(s) 2015.
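The sub-cluster device combined with the one-way ANOVA estimator can be sketched as follows (synthetic beta-binomial data; Smith's standard error and the confidence-interval construction itself are omitted for brevity, and all parameter values are invented): each large cluster is split into sub-clusters of 5 before the ICC is estimated.

```python
import numpy as np

# Illustrative one-way ANOVA ICC with the sub-cluster splitting device.

def anova_icc(clusters):
    """One-way ANOVA (moment) estimator of the ICC for per-cluster arrays."""
    k = len(clusters)
    n_i = np.array([len(c) for c in clusters])
    N = n_i.sum()
    grand = np.concatenate(clusters).mean()
    msb = sum(n * (np.mean(c) - grand) ** 2
              for n, c in zip(n_i, clusters)) / (k - 1)
    msw = sum(((np.asarray(c) - np.mean(c)) ** 2).sum()
              for c in clusters) / (N - k)
    n0 = (N - (n_i ** 2).sum() / N) / (k - 1)   # average-cluster-size term
    return (msb - msw) / (msb + (n0 - 1) * msw)

def split_into_subclusters(cluster, size=5):
    c = np.asarray(cluster)
    return [c[i:i + size] for i in range(0, len(c) - len(c) % size, size)]

rng = np.random.default_rng(11)
# 8 large clusters of 300 subjects; beta-binomial with true ICC = 1/(2*24.5+1) = 0.02
clusters = [rng.binomial(1, p, size=300) for p in rng.beta(24.5, 24.5, size=8)]
subclusters = [s for c in clusters for s in split_into_subclusters(c)]
icc = anova_icc(subclusters)       # small positive estimate near 0.02
```

A confidence interval would then follow from Smith's large-sample standard error applied to the sub-cluster data, as the study recommends.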

  1. A biophysical model examining the role of low-voltage-activated potassium currents in shaping the responses of vestibular ganglion neurons.

    PubMed

    Hight, Ariel E; Kalluri, Radha

    2016-08-01

    The vestibular nerve is characterized by two broad groups of neurons that differ in the timing of their interspike intervals; some fire at highly regular intervals, whereas others fire at highly irregular intervals. Heterogeneity in ion channel properties has been proposed as shaping these firing patterns (Highstein SM, Politoff AL. Brain Res 150: 182-187, 1978; Smith CE, Goldberg JM. Biol Cybern 54: 41-51, 1986). Kalluri et al. (J Neurophysiol 104: 2034-2051, 2010) proposed that regularity is controlled by the density of low-voltage-activated potassium currents (IKL). To examine the impact of IKL on spike timing regularity, we implemented a single-compartment model with three conductances known to be present in the vestibular ganglion: transient sodium (gNa), low-voltage-activated potassium (gKL), and high-voltage-activated potassium (gKH). Consistent with in vitro observations, removing gKL depolarized the resting potential, increased the input resistance and membrane time constant, and converted current step-evoked firing patterns from transient (one spike at current onset) to sustained (many spikes). Modeled neurons were driven with a time-varying synaptic conductance that captured the random arrival times and amplitudes of glutamate-driven synaptic events. In the presence of gKL, spiking occurred only in response to large events with fast onsets. Models without gKL exhibited greater integration by responding to the superposition of rapidly arriving events. Three synaptic conductances were modeled, each with different kinetics, to represent a variety of synaptic processes. In response to all three types of synaptic conductance, models containing gKL produced spike trains with irregular interspike intervals. Only models lacking gKL, when driven by rapidly arriving small excitatory postsynaptic currents, were capable of generating regular spiking. Copyright © 2016 the American Physiological Society.

  2. Estimation of recurrence interval of large earthquakes on the central Longmen Shan fault zone based on seismic moment accumulation/release model.

    PubMed

    Ren, Junjie; Zhang, Shimin

    2013-01-01

    The recurrence interval of large earthquakes on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy among preseismic and postseismic estimates of the recurrence interval of large earthquakes based on slip rates and paleoseismologic results. Post-seismic trenches showed that the central Longmen Shan fault zone has probably undergone events similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data and construct a characteristic seismic moment accumulation/release model to estimate the recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and a recurrence interval of 3900 ± 400 yrs is necessary for accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimate of large earthquakes for seismic hazard analysis in the Longmen Shan region.
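As a rough cross-check of the moment-balance logic (not the paper's calculation, which uses its own seismogenic-zone model), the Hanks-Kanamori magnitude-moment relation together with the quoted moment rate already gives the right order of magnitude:

```python
# Order-of-magnitude check: seismic moment of an Mw 7.9 event divided by the
# abstract's moment accumulation rate. The paper's 3900 ± 400 yr estimate uses
# a more detailed seismogenic model; this sketch only checks the scale.

def seismic_moment(mw):
    """Hanks-Kanamori relation: M0 in N*m from moment magnitude."""
    return 10 ** (1.5 * mw + 9.1)

moment_rate = 2.7e17                      # N*m/yr, from the abstract
m0_wenchuan = seismic_moment(7.9)         # ~8.9e20 N*m
recurrence = m0_wenchuan / moment_rate    # ~3300 yr, same order as 3900 ± 400
```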

  3. Estimation of Recurrence Interval of Large Earthquakes on the Central Longmen Shan Fault Zone Based on Seismic Moment Accumulation/Release Model

    PubMed Central

    Zhang, Shimin

    2013-01-01

    The recurrence interval of large earthquakes on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy among preseismic and postseismic estimates of the recurrence interval of large earthquakes based on slip rates and paleoseismologic results. Post-seismic trenches showed that the central Longmen Shan fault zone has probably undergone events similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data and construct a characteristic seismic moment accumulation/release model to estimate the recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and a recurrence interval of 3900 ± 400 yrs is necessary for accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimate of large earthquakes for seismic hazard analysis in the Longmen Shan region. PMID:23878524

  4. Volatility return intervals analysis of the Japanese market

    NASA Astrophysics Data System (ADS)

    Jung, W.-S.; Wang, F. Z.; Havlin, S.; Kaizoji, T.; Moon, H.-T.; Stanley, H. E.

    2008-03-01

    We investigate scaling and memory effects in return intervals between price volatilities above a certain threshold q for the Japanese stock market using daily and intraday data sets. We find that the distribution of return intervals can be approximated by a scaling function that depends only on the ratio between the return interval τ and its mean <τ>. We also find memory effects such that a large (or small) return interval follows a large (or small) interval by investigating the conditional distribution and mean return interval. The results are similar to previous studies of other markets and indicate that similar statistical features appear in different financial markets. We also compare our results between the period before and after the big crash at the end of 1989. We find that scaling and memory effects of the return intervals show similar features although the statistical properties of the returns are different.

  5. Large time behavior of entropy solutions to one-dimensional unipolar hydrodynamic model for semiconductor devices

    NASA Astrophysics Data System (ADS)

    Huang, Feimin; Li, Tianhong; Yu, Huimin; Yuan, Difan

    2018-06-01

    We are concerned with the global existence and large time behavior of entropy solutions to the one-dimensional unipolar hydrodynamic model for semiconductors, in the form of the Euler-Poisson equations on a bounded interval. In this paper, we first prove the global existence of entropy solutions by the vanishing viscosity method and the compensated compactness framework. In particular, the solutions are uniformly bounded with respect to the space and time variables by introducing modified Riemann invariants and the theory of invariant regions. Based on the uniform estimates of the density, we further show that the entropy solution converges to the corresponding unique stationary solution exponentially in time. No smallness condition is assumed on the initial data or the doping profile. Moreover, the novelty of this paper is the uniform-in-time bound for the weak solutions of the isentropic Euler-Poisson system.

  6. Interval Timing Is Preserved Despite Circadian Desynchrony in Rats: Constant Light and Heavy Water Studies.

    PubMed

    Petersen, Christian C; Mistlberger, Ralph E

    2017-08-01

    The mechanisms that enable mammals to time events that recur at 24-h intervals (circadian timing) and at arbitrary intervals in the seconds-to-minutes range (interval timing) are thought to be distinct at the computational and neurobiological levels. Recent evidence that disruption of circadian rhythmicity by constant light (LL) abolishes interval timing in mice challenges this assumption and suggests a critical role for circadian clocks in short interval timing. We sought to confirm and extend this finding by examining interval timing in rats in which circadian rhythmicity was disrupted by long-term exposure to LL or by chronic intake of 25% D2O. Adult, male Sprague-Dawley rats were housed in a light-dark (LD) cycle or in LL until free-running circadian rhythmicity was markedly disrupted or abolished. The rats were then trained and tested on 15- and 30-sec peak-interval procedures, with water restriction used to motivate task performance. Interval timing was found to be unimpaired in LL rats, but a weak circadian activity rhythm was apparently rescued by the training procedure, possibly due to binge feeding that occurred during the 15-min water access period that followed training each day. A second group of rats in LL were therefore restricted to 6 daily meals scheduled at 4-h intervals. Despite a complete absence of circadian rhythmicity in this group, interval timing was again unaffected. To eliminate all possible temporal cues, we tested a third group of rats in LL by using a pseudo-randomized schedule. Again, interval timing remained accurate. Finally, rats tested in LD received 25% D2O in place of drinking water. This markedly lengthened the circadian period and caused a failure of LD entrainment but did not disrupt interval timing. These results indicate that interval timing in rats is resistant to disruption by manipulations of circadian timekeeping previously shown to impair interval timing in mice.

  7. A pharmacometric case study regarding the sensitivity of structural model parameter estimation to error in patient reported dosing times.

    PubMed

    Knights, Jonathan; Rohatagi, Shashank

    2015-12-01

    Although there is a body of literature focused on minimizing the effect of dosing inaccuracies on pharmacokinetic (PK) parameter estimation, most of the work centers on missing doses. No attempt has been made to specifically characterize the effect of error in reported dosing times. Additionally, existing work has largely dealt with cases in which the compound of interest is dosed at an interval no less than its terminal half-life. This work provides a case study investigating how error in patient-reported dosing times might affect the accuracy of structural model parameter estimation under sparse sampling conditions when the dosing interval is less than the terminal half-life of the compound and the underlying kinetics are monoexponential. Additional effects due to noncompliance with dosing events are not explored, and it is assumed that the structural model and reasonable initial estimates of the model parameters are known. Under the conditions of our simulations, with structural model CV% ranging from ~20 to 60%, parameter estimation inaccuracy derived from error in reported dosing times was largely controlled at around 10% on average. Given that no observed dosing was included in the design and sparse sampling was utilized, we believe these error results represent a practical ceiling given the variability and parameter estimates for the one-compartment model. The findings suggest additional investigations may be of interest and are noteworthy given the inability of current PK software platforms to accommodate error in dosing times.
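The monoexponential superposition underlying such a simulation can be sketched as follows (all PK parameters, the q12h schedule, and the 1-h reporting-error SD are invented for illustration, not taken from the study): each prior dose contributes an exponentially decayed term, and jittering the dose times perturbs the predicted concentration only mildly because many partially decayed doses are superposed.

```python
import numpy as np

# Illustrative one-compartment IV-bolus model with dosing-time error
# (hypothetical parameters; not the authors' simulation code).

def concentration(t, dose_times, dose=100.0, V=50.0, t_half=24.0):
    """Superposition of monoexponential decays from each prior dose."""
    k = np.log(2) / t_half
    dose_times = np.asarray(dose_times, dtype=float)
    elapsed = t - dose_times[dose_times <= t]
    return (dose / V) * np.exp(-k * elapsed).sum()

rng = np.random.default_rng(5)
true_doses = np.arange(0, 7 * 24, 12.0)       # a week of q12h dosing,
                                              # interval (12 h) < half-life (24 h)
reported = true_doses + rng.normal(0, 1.0, true_doses.size)  # ±1 h reporting error
t_obs = 7 * 24.0                              # one sparse trough-like sample
c_true = concentration(t_obs, true_doses)
c_reported = concentration(t_obs, reported)
rel_err = abs(c_reported - c_true) / c_true   # typically a few percent
```

This is the regime described in the abstract: with accumulation over many doses, random timing error propagates into a bounded, modest concentration error rather than a large one.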

  8. Time and resource limits on working memory: cross-age consistency in counting span performance.

    PubMed

    Ransdell, Sarah; Hecht, Steven

    2003-12-01

    This longitudinal study separated resource demand effects from those of retention interval in a counting span task among 100 children tested in grade 2 and again in grades 3 and 4. A last card large counting span condition had an equivalent memory load to a last card small, but the last card large required holding the count over a longer retention interval. In all three waves of assessment, the last card large condition was found to be less accurate than the last card small. A model predicting reading comprehension showed that age was a significant predictor when entered first accounting for 26% of the variance, but counting span accounted for a further 22% of the variance. Span at Wave 1 accounted for significant unique variance at Wave 2 and at Wave 3. Results were similar for math calculation with age accounting for 31% of the variance and counting span accounting for a further 34% of the variance. Span at Wave 1 explained unique variance in math at Wave 2 and at Wave 3.

  9. Longitudinal study of fingerprint recognition.

    PubMed

    Yoon, Soweon; Jain, Anil K

    2015-07-14

    Human identification by fingerprints is based on the fundamental premise that ridge patterns from distinct fingers are different (uniqueness) and a fingerprint pattern does not change over time (persistence). Although the uniqueness of fingerprints has been investigated by developing statistical models to estimate the probability of error in comparing two random samples of fingerprints, the persistence of fingerprints has remained a general belief based on only a few case studies. In this study, fingerprint match (similarity) scores are analyzed by multilevel statistical models with covariates such as time interval between two fingerprints in comparison, subject's age, and fingerprint image quality. Longitudinal fingerprint records of 15,597 subjects are sampled from an operational fingerprint database such that each individual has at least five 10-print records over a minimum time span of 5 y. In regard to the persistence of fingerprints, the longitudinal analysis on a single (right index) finger demonstrates that (i) genuine match scores tend to significantly decrease when time interval between two fingerprints in comparison increases, whereas the change in impostor match scores is negligible; and (ii) fingerprint recognition accuracy at operational settings, nevertheless, tends to be stable as the time interval increases up to 12 y, the maximum time span in the dataset. However, the uncertainty of temporal stability of fingerprint recognition accuracy becomes substantially large if either of the two fingerprints being compared is of poor quality. The conclusions drawn from 10-finger fusion analysis coincide with the conclusions from single-finger analysis.

  10. Longitudinal study of fingerprint recognition

    PubMed Central

    Yoon, Soweon; Jain, Anil K.

    2015-01-01

    Human identification by fingerprints is based on the fundamental premise that ridge patterns from distinct fingers are different (uniqueness) and a fingerprint pattern does not change over time (persistence). Although the uniqueness of fingerprints has been investigated by developing statistical models to estimate the probability of error in comparing two random samples of fingerprints, the persistence of fingerprints has remained a general belief based on only a few case studies. In this study, fingerprint match (similarity) scores are analyzed by multilevel statistical models with covariates such as time interval between two fingerprints in comparison, subject’s age, and fingerprint image quality. Longitudinal fingerprint records of 15,597 subjects are sampled from an operational fingerprint database such that each individual has at least five 10-print records over a minimum time span of 5 y. In regard to the persistence of fingerprints, the longitudinal analysis on a single (right index) finger demonstrates that (i) genuine match scores tend to significantly decrease when time interval between two fingerprints in comparison increases, whereas the change in impostor match scores is negligible; and (ii) fingerprint recognition accuracy at operational settings, nevertheless, tends to be stable as the time interval increases up to 12 y, the maximum time span in the dataset. However, the uncertainty of temporal stability of fingerprint recognition accuracy becomes substantially large if either of the two fingerprints being compared is of poor quality. The conclusions drawn from 10-finger fusion analysis coincide with the conclusions from single-finger analysis. PMID:26124106

  11. Mixture models for undiagnosed prevalent disease and interval-censored incident disease: applications to a cohort assembled from electronic health records.

    PubMed

    Cheung, Li C; Pan, Qing; Hyun, Noorie; Schiffman, Mark; Fetterman, Barbara; Castle, Philip E; Lorey, Thomas; Katki, Hormuzd A

    2017-09-30

    For cost-effectiveness and efficiency, many large-scale general-purpose cohort studies are being assembled within large health-care providers who use electronic health records. Two key features of such data are that incident disease is interval-censored between irregular visits and there can be pre-existing (prevalent) disease. Because prevalent disease is not always immediately diagnosed, some disease diagnosed at later visits is actually undiagnosed prevalent disease. We consider prevalent disease as a point mass at time zero for clinical applications where there is no interest in time of prevalent disease onset. We demonstrate that the naive Kaplan-Meier cumulative risk estimator underestimates risks at early time points and overestimates later risks. We propose a general family of mixture models for undiagnosed prevalent disease and interval-censored incident disease that we call prevalence-incidence models. Parameters for parametric prevalence-incidence models, such as the logistic regression and Weibull survival (logistic-Weibull) model, are estimated by direct likelihood maximization or by EM algorithm. Non-parametric methods are proposed to calculate cumulative risks for cases without covariates. We compare naive Kaplan-Meier, logistic-Weibull, and non-parametric estimates of cumulative risk in the cervical cancer screening program at Kaiser Permanente Northern California. Kaplan-Meier provided poor estimates while the logistic-Weibull model was a close fit to the non-parametric. Our findings support our use of logistic-Weibull models to develop the risk estimates that underlie current US risk-based cervical cancer screening guidelines. Published 2017. This article has been contributed to by US Government employees and their work is in the public domain in the USA.
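For a single covariate pattern, a parametric prevalence-incidence model of the kind described reduces to a point mass of prevalent disease at t = 0 plus a Weibull survival component for incident disease. A minimal sketch of that cumulative-risk function (the parameter values are hypothetical, not taken from the paper):

```python
import math

def cumulative_risk(t, logit_prev, shape, scale):
    """Mixture cumulative risk: prevalent disease as a point mass at t = 0
    plus Weibull-distributed incident disease among the non-prevalent."""
    p = 1.0 / (1.0 + math.exp(-logit_prev))             # prevalence probability (logistic part)
    incident = 1.0 - math.exp(-((t / scale) ** shape))  # Weibull CDF (incidence part)
    return p + (1.0 - p) * incident

# Hypothetical parameters: ~12% prevalence, Weibull shape 1.5, scale 10 time units
risk_at_0 = cumulative_risk(0.0, -2.0, 1.5, 10.0)  # equals the prevalence probability
risk_at_5 = cumulative_risk(5.0, -2.0, 1.5, 10.0)
```

The jump at t = 0 is exactly what a naive Kaplan-Meier estimator misses when undiagnosed prevalent cases surface at later visits, which is why it underestimates early risk.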

  12. Methods for estimating confidence intervals in interrupted time series analyses of health interventions.

    PubMed

    Zhang, Fang; Wagner, Anita K; Soumerai, Stephen B; Ross-Degnan, Dennis

    2009-02-01

    Interrupted time series (ITS) is a strong quasi-experimental research design, which is increasingly applied to estimate the effects of health services and policy interventions. We describe and illustrate two methods for estimating confidence intervals (CIs) around absolute and relative changes in outcomes calculated from segmented regression parameter estimates. We used multivariate delta and bootstrapping methods (BMs) to construct CIs around relative changes in level and trend, and around absolute changes in outcome based on segmented linear regression analyses of time series data corrected for autocorrelated errors. Using previously published time series data, we estimated CIs around the effect of prescription alerts for interacting medications with warfarin on the rate of prescriptions per 10,000 warfarin users per month. Both the multivariate delta method (MDM) and the BM produced similar results. BM is preferred for calculating CIs of relative changes in outcomes of time series studies, because it does not require large sample sizes when parameter estimates are obtained correctly from the model. Caution is needed when sample size is small.
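The bootstrap idea can be illustrated on the simplest possible case, a relative change in level between pre- and post-intervention means. The paper's segmented regression additionally models trends and autocorrelated errors, which this sketch deliberately ignores; all numbers are synthetic.

```python
import random
import statistics

random.seed(1)
# Hypothetical monthly prescription rates: 24 months before and 24 after an intervention
pre = [random.gauss(50, 3) for _ in range(24)]
post = [random.gauss(40, 3) for _ in range(24)]

def relative_change(before, after):
    """Relative change in level between the two segments."""
    return (statistics.mean(after) - statistics.mean(before)) / statistics.mean(before)

rc = relative_change(pre, post)

# Percentile bootstrap: resample each segment with replacement and recompute
boots = sorted(
    relative_change([random.choice(pre) for _ in pre],
                    [random.choice(post) for _ in post])
    for _ in range(2000)
)
ci = (boots[int(0.025 * len(boots))], boots[int(0.975 * len(boots))])
```

As the abstract notes, the appeal of the bootstrap here is that no large-sample normality argument is needed for the ratio; the CI comes directly from the resampling distribution of the relative change.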

  13. DETECT: a MATLAB toolbox for event detection and identification in time series, with applications to artifact detection in EEG signals.

    PubMed

    Lawhern, Vernon; Hairston, W David; Robbins, Kay

    2013-01-01

    Recent advances in sensor and recording technology have allowed scientists to acquire very large time-series datasets. Researchers often analyze these datasets in the context of events, which are intervals of time where the properties of the signal change relative to a baseline signal. We have developed DETECT, a MATLAB toolbox for detecting event time intervals in long, multi-channel time series. Our primary goal is to produce a toolbox that is simple for researchers to use, allowing them to quickly train a model on multiple classes of events, assess the accuracy of the model, and determine how closely the results agree with their own manual identification of events without requiring extensive programming knowledge or machine learning experience. As an illustration, we discuss application of the DETECT toolbox for detecting signal artifacts found in continuous multi-channel EEG recordings and show the functionality of the tools found in the toolbox. We also discuss the application of DETECT for identifying irregular heartbeat waveforms found in electrocardiogram (ECG) data as an additional illustration.
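In its simplest form, detecting event intervals relative to a baseline amounts to thresholding and merging contiguous samples. DETECT itself trains per-class models rather than using a fixed threshold, but the interval bookkeeping it returns can be sketched as follows (threshold and data are made up):

```python
def detect_events(signal, baseline, threshold):
    """Return (start, end) index intervals (end exclusive) where
    |signal - baseline| > threshold, merging contiguous samples into one event."""
    events, start = [], None
    for i, x in enumerate(signal):
        if abs(x - baseline) > threshold:
            if start is None:
                start = i       # event begins
        elif start is not None:
            events.append((start, i))  # event ends
            start = None
    if start is not None:            # event still open at end of record
        events.append((start, len(signal)))
    return events

sig = [0, 0, 5, 6, 0, 0, 7, 0]
intervals = detect_events(sig, baseline=0, threshold=2)  # → [(2, 4), (6, 7)]
```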

  14. DETECT: A MATLAB Toolbox for Event Detection and Identification in Time Series, with Applications to Artifact Detection in EEG Signals

    PubMed Central

    Lawhern, Vernon; Hairston, W. David; Robbins, Kay

    2013-01-01

    Recent advances in sensor and recording technology have allowed scientists to acquire very large time-series datasets. Researchers often analyze these datasets in the context of events, which are intervals of time where the properties of the signal change relative to a baseline signal. We have developed DETECT, a MATLAB toolbox for detecting event time intervals in long, multi-channel time series. Our primary goal is to produce a toolbox that is simple for researchers to use, allowing them to quickly train a model on multiple classes of events, assess the accuracy of the model, and determine how closely the results agree with their own manual identification of events without requiring extensive programming knowledge or machine learning experience. As an illustration, we discuss application of the DETECT toolbox for detecting signal artifacts found in continuous multi-channel EEG recordings and show the functionality of the tools found in the toolbox. We also discuss the application of DETECT for identifying irregular heartbeat waveforms found in electrocardiogram (ECG) data as an additional illustration. PMID:23638169

  15. Landslide early warning based on failure forecast models: the example of the Mt. de La Saxe rockslide, northern Italy

    NASA Astrophysics Data System (ADS)

    Manconi, A.; Giordan, D.

    2015-07-01

    We apply failure forecast models by exploiting near-real-time monitoring data for the La Saxe rockslide, a large unstable slope threatening Aosta Valley in northern Italy. Starting from the inverse velocity theory, we analyze landslide surface displacements automatically and in near real time on different temporal windows and apply straightforward statistical methods to obtain confidence intervals on the estimated time of failure. Based on this case study, we identify operational thresholds that are established on the reliability of the forecast models. Our approach is aimed at supporting the management of early warning systems in the most critical phases of the landslide emergency.
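The inverse velocity theory cited above (Fukuzono's method) forecasts failure by fitting a line to 1/v against time and extrapolating to the point where inverse velocity reaches zero. A minimal least-squares sketch on synthetic accelerating-creep data:

```python
def forecast_failure_time(times, velocities):
    """Inverse-velocity method: fit 1/v = a + b*t by least squares;
    failure is forecast where the fitted line crosses zero, t_f = -a / b."""
    inv_v = [1.0 / v for v in velocities]
    n = len(times)
    mt = sum(times) / n
    mi = sum(inv_v) / n
    b = (sum((t - mt) * (y - mi) for t, y in zip(times, inv_v))
         / sum((t - mt) ** 2 for t in times))
    a = mi - b * mt
    return -a / b

# Synthetic accelerating creep: v(t) = 1 / (10 - t), so 1/v is linear and hits zero at t = 10
ts = [0.0, 2.0, 4.0, 6.0, 8.0]
vs = [1.0 / (10.0 - t) for t in ts]
t_fail = forecast_failure_time(ts, vs)  # → 10.0
```

In an operational setting the fit would be repeated over sliding temporal windows, with the spread of the resulting t_fail estimates supplying the confidence intervals the abstract refers to.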

  16. Estimation of time-delayed mutual information and bias for irregularly and sparsely sampled time-series

    PubMed Central

    Albers, D. J.; Hripcsak, George

    2012-01-01

    A method to estimate the time-dependent correlation via an empirical bias estimate of the time-delayed mutual information for a time-series is proposed. In particular, the bias of the time-delayed mutual information is shown to often be equivalent to the mutual information between two distributions of points from the same system separated by infinite time. Thus intuitively, estimation of the bias is reduced to estimation of the mutual information between distributions of data points separated by large time intervals. The proposed bias estimation techniques are shown to work for Lorenz equations data and glucose time series data of three patients from the Columbia University Medical Center database. PMID:22536009
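The key intuition, that the bias of time-delayed mutual information can be estimated as the MI between samples separated by a lag long enough that any true dependence has decayed, can be illustrated with a plug-in histogram estimator. This is a sketch of the idea, not the authors' exact estimator: for i.i.d. noise the true MI is zero at every lag, so the large-lag estimate exposes the finite-sample bias floor.

```python
import math
import random

def mutual_information(x, y, bins=8):
    """Plug-in (histogram) mutual information, in nats, between paired samples."""
    def idx(v, lo, hi):
        return min(int((v - lo) / (hi - lo) * bins), bins - 1)
    lox, hix, loy, hiy = min(x), max(x), min(y), max(y)
    n = len(x)
    joint, px, py = {}, {}, {}
    for a, b in zip(x, y):
        i, j = idx(a, lox, hix), idx(b, loy, hiy)
        joint[(i, j)] = joint.get((i, j), 0) + 1
        px[i] = px.get(i, 0) + 1
        py[j] = py.get(j, 0) + 1
    return sum((c / n) * math.log((c / n) / ((px[i] / n) * (py[j] / n)))
               for (i, j), c in joint.items())

def lagged_mi(series, lag, bins=8):
    """MI between the series and itself shifted by `lag` samples."""
    return mutual_information(series[:-lag], series[lag:], bins)

random.seed(0)
noise = [random.gauss(0.0, 1.0) for _ in range(5000)]
# True MI of i.i.d. noise is zero at every lag, so this estimate is pure
# finite-sample bias (on the order of 0.01 nats for this sample size and binning)
bias_floor = lagged_mi(noise, 1000)
```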

  17. Sensitivity analysis of seismic hazard for the northwestern portion of the state of Gujarat, India

    USGS Publications Warehouse

    Petersen, M.D.; Rastogi, B.K.; Schweig, E.S.; Harmsen, S.C.; Gomberg, J.S.

    2004-01-01

    We test the sensitivity of seismic hazard to three fault source models for the northwestern portion of Gujarat, India. The models incorporate different characteristic earthquake magnitudes on three faults with individual recurrence intervals of either 800 or 1600 years. These recurrence intervals imply that large earthquakes occur on one of these faults every 266-533 years, similar to the rate of historic large earthquakes in this region during the past two centuries and for earthquakes in intraplate environments like the New Madrid region in the central United States. If one assumes a recurrence interval of 800 years for large earthquakes on each of three local faults, the peak ground accelerations (PGA; horizontal) and 1-Hz spectral acceleration ground motions (5% damping) are greater than 1 g over a broad region for the 2% probability of exceedance in 50 years hazard level. These probabilistic PGAs at this hazard level are similar to median deterministic ground motions. The PGAs for the 10% in 50 years hazard level are considerably lower, generally ranging between 0.2 g and 0.7 g across northwestern Gujarat. Ground motions calculated from our models that consider fault interevent times of 800 years are considerably higher than other published models even though they imply similar recurrence intervals. These higher ground motions are mainly caused by the application of intraplate attenuation relations, which account for less severe attenuation of seismic waves when compared to the crustal interplate relations used in these previous studies. For sites in Bhuj and Ahmedabad, magnitude (M) 7 3/4 earthquakes contribute most to the PGA and the 0.2- and 1-s spectral acceleration ground motion maps at the two considered hazard levels. © 2004 Elsevier B.V. All rights reserved.
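The quoted 266-533-year range follows directly from combining the three fault sources: treating the faults as independent, their rates add, so the combined recurrence interval is the per-fault interval divided by the number of faults. A quick check:

```python
# Rates of independent fault sources add: combined interval = per-fault interval / number of faults
faults = 3
combined = {per_fault: per_fault / faults for per_fault in (800, 1600)}
# 800-yr faults → one large earthquake every ~267 yr; 1600-yr faults → every ~533 yr
```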

  18. Fractal analyses reveal independent complexity and predictability of gait

    PubMed Central

    Dierick, Frédéric; Nivard, Anne-Laure

    2017-01-01

    Locomotion is a natural task that has been assessed for decades and used as a proxy to highlight impairments of various origins. So far, most studies adopted classical linear analyses of spatio-temporal gait parameters. Here, we use more advanced, yet not less practical, non-linear techniques to analyse gait time series of healthy subjects. We aimed at finding more sensitive indexes related to spatio-temporal gait parameters than those previously used, with the hope to better identify abnormal locomotion. We analysed large-scale stride interval time series and mean step width in 34 participants while altering walking direction (forward vs. backward walking) and with or without galvanic vestibular stimulation. The Hurst exponent α and the Minkowski fractal dimension D were computed and interpreted as indexes expressing predictability and complexity of stride interval time series, respectively. These holistic indexes can easily be interpreted in the framework of optimal movement complexity. We show that α and D accurately capture stride interval changes in function of the experimental condition. Walking forward exhibited maximal complexity (D) and hence, adaptability. In contrast, walking backward and/or stimulation of the vestibular system decreased D. Furthermore, walking backward increased predictability (α) through a more stereotyped pattern of the stride interval and galvanic vestibular stimulation reduced predictability. The present study demonstrates the complementary power of the Hurst exponent and the fractal dimension to improve walking classification. Our developments may have immediate applications in rehabilitation, diagnosis, and classification procedures. PMID:29182659
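The Hurst exponent α used above can be estimated in several ways; one simple approach (not necessarily the estimator used in the paper) is the aggregated-variance method, which exploits the scaling Var(block means of size m) ~ m^(2H-2). A sketch on synthetic data:

```python
import math
import random

def hurst_aggvar(series, block_sizes=(4, 8, 16, 32, 64)):
    """Aggregated-variance Hurst estimate: Var(block means of size m) ~ m^(2H - 2),
    so the slope of log(Var) against log(m) gives H = 1 + slope / 2."""
    log_m, log_v = [], []
    for m in block_sizes:
        means = [sum(series[i:i + m]) / m for i in range(0, len(series) - m + 1, m)]
        mu = sum(means) / len(means)
        var = sum((x - mu) ** 2 for x in means) / (len(means) - 1)
        log_m.append(math.log(m))
        log_v.append(math.log(var))
    n = len(log_m)
    mx, my = sum(log_m) / n, sum(log_v) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(log_m, log_v))
             / sum((a - mx) ** 2 for a in log_m))
    return 1.0 + slope / 2.0

random.seed(42)
white = [random.gauss(0.0, 1.0) for _ in range(4096)]
h = hurst_aggvar(white)  # close to 0.5 for uncorrelated noise
```

For white noise H ≈ 0.5 (no long-range correlation); persistent stride-interval series, like those of healthy forward walking, yield H above 0.5.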

  19. Novel method for high-throughput phenotyping of sleep in mice.

    PubMed

    Pack, Allan I; Galante, Raymond J; Maislin, Greg; Cater, Jacqueline; Metaxas, Dimitris; Lu, Shan; Zhang, Lin; Von Smith, Randy; Kay, Timothy; Lian, Jie; Svenson, Karen; Peters, Luanne L

    2007-01-17

    Assessment of sleep in mice currently requires initial implantation of chronic electrodes for assessment of electroencephalogram (EEG) and electromyogram (EMG) followed by time to recover from surgery. Hence, it is not ideal for high-throughput screening. To address this deficiency, a method of assessment of sleep and wakefulness in mice has been developed based on assessment of activity/inactivity either by digital video analysis or by breaking infrared beams in the mouse cage. It is based on the algorithm that any episode of continuous inactivity of ≥40 s is predicted to be sleep. The method gives excellent agreement in C57BL/6J male mice with simultaneous assessment of sleep by EEG/EMG recording. The average agreement over 8,640 10-s epochs in 24 h is 92% (n = 7 mice), with agreement in individual mice being 88-94%. Average EEG/EMG-determined sleep per 2-h interval across the day was 59.4 min. The estimated mean difference (bias) per 2-h interval between inactivity-defined sleep and EEG/EMG-defined sleep was only 1.0 min (95% confidence interval for mean bias -0.06 to +2.6 min). The standard deviation of differences (precision) was 7.5 min per 2-h interval with 95% limits of agreement ranging from -13.7 to +15.7 min. Although bias significantly varied by time of day (P = 0.0007), the magnitude of time-of-day differences was not large (average bias during lights on and lights off was +5.0 and -3.0 min per 2-h interval, respectively). This method has applications in chemical mutagenesis and for studies of molecular changes in brain with sleep/wakefulness.
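The ≥40-s inactivity rule maps naturally onto epoch-based scoring: with 10-s epochs, any run of four or more consecutive inactive epochs is scored as sleep. A sketch (the activity flags are made up; 1 = active, 0 = inactive):

```python
def score_sleep(active, epoch_s=10, min_inactive_s=40):
    """Score each epoch as sleep (True) if it lies in a run of continuous
    inactivity lasting at least min_inactive_s seconds."""
    need = min_inactive_s // epoch_s  # minimum run length in epochs
    sleep = [False] * len(active)
    i = 0
    while i < len(active):
        if not active[i]:
            j = i
            while j < len(active) and not active[j]:
                j += 1                # extend the inactive run
            if j - i >= need:
                for k in range(i, j):
                    sleep[k] = True   # whole run scored as sleep
            i = j
        else:
            i += 1
    return sleep

acts = [1, 0, 0, 0, 0, 1, 0, 0, 1]
# the four-epoch (40-s) inactive run is scored as sleep; the two-epoch run is not
scored = score_sleep(acts)
```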

  20. Moderate Recovery Unnecessary to Sustain High Stroke Volume during Interval Training. A Brief Report

    PubMed Central

    Stanley, Jamie; Buchheit, Martin

    2014-01-01

    It has been suggested that the time spent at a high stroke volume (SV) is important for improving maximal cardiac function. The aim of this study was to examine the effect of recovery intensity on cardiovascular parameters during a typical high-intensity interval training (HIIT) session in fourteen well-trained cyclists. Oxygen consumption (VO2), heart rate (HR), SV, cardiac output (Qc), and oxygenation of vastus lateralis (TSI) were measured during a HIIT (3×3-min work period, 2 min of recovery) session on two occasions. VO2, HR and Qc were largely higher during moderate-intensity (60%) compared with low-intensity (30%) (VO2, effect size; ES = +2.6; HR, ES = +2.8; Qc, ES = +2.2) and passive (HR, ES = +2.2; Qc, ES = +1.7) recovery. By contrast, there was no clear difference in SV between the three recovery conditions, with the SV during the two active recovery periods not being substantially different than during exercise (60%, ES = −0.1; 30%, ES = −0.2). To conclude, moderate-intensity recovery may not be required to maintain a high SV during HIIT. Key points: Moderate-intensity recovery periods may not be necessary to maintain high stroke volume during the exercise intervals of HIIT. Stroke volume did not surpass the levels attained during the exercise intervals during the recovery periods of HIIT. The practical implication of these findings is that reducing the intensity of the recovery period during a HIIT protocol may prolong the time to exhaustion, potentially allowing completion of additional high-intensity intervals increasing the time accumulated at maximal cardiac output. PMID:24790495

  1. [Waiting time for the first colposcopic examination in women with abnormal Papanicolaou test].

    PubMed

    Nascimento, Maria Isabel do; Rabelo, Irene Machado Moraes Alvarenga; Cardoso, Fabrício Seabra Polidoro; Musse, Ricardo Neif Vieira

    2015-08-01

    To evaluate the waiting times before obtaining the first colposcopic examination for women with abnormal Papanicolaou smears. Retrospective cohort study conducted on patients who required a colposcopic examination to clarify an abnormal pap test, between January 2002 and August 2008, in a metropolitan region of Brazil. The waiting times were defined as: Total Waiting Time (interval between the date of the pap test result and the date of the first colposcopic examination); Partial A Waiting Time (interval between the date of the pap test result and the date of referral); Partial B Waiting Time (interval between the date of referral and the date of the first colposcopic examination). Means, medians, relative and absolute frequencies were calculated. The Kruskal-Wallis test and Pearson's chi-square test were used to determine statistical significance. A total of 1,544 women with a mean age of 34 years (SD=12.6 years) were analyzed. Most of them had access to colposcopic examination within 30 days (65.8%) or 60 days (92.8%) from referral. Mean Total Waiting Time, Partial A Waiting Time, and Partial B Waiting Time were 94.5 days (SD=96.8 days), 67.8 days (SD=95.3 days) and 29.2 days (SD=35.1 days), respectively. A large part of the women studied had access to colposcopic examination within 60 days after referral, but Total Waiting Time was long. Measures to reduce the waiting time for obtaining the first colposcopic examination can help to improve the quality of care in the context of cervical cancer control in the region, and ought to be addressed at the phase between the date of the pap test results and the date of referral to the teaching hospital.

  2. Impact of heart disease and calibration interval on accuracy of pulse transit time-based blood pressure estimation.

    PubMed

    Ding, Xiaorong; Zhang, Yuanting; Tsang, Hon Ki

    2016-02-01

    Continuous blood pressure (BP) measurement without a cuff is advantageous for the early detection and prevention of hypertension. The pulse transit time (PTT) method has proven to be promising for continuous cuffless BP measurement. However, the problem of accuracy is one of the most challenging aspects before the large-scale clinical application of this method. Since PTT-based BP estimation relies primarily on the relationship between PTT and BP under certain assumptions, estimation accuracy will be affected by cardiovascular disorders that impair this relationship and by the calibration frequency, which may violate these assumptions. This study sought to examine the impact of heart disease and the calibration interval on the accuracy of PTT-based BP estimation. The accuracy of a PTT-BP algorithm was investigated in 37 healthy subjects and 48 patients with heart disease at different calibration intervals, namely 15 min, 2 weeks, and 1 month after initial calibration. The results showed that the overall accuracy of systolic BP estimation was significantly lower in subjects with heart disease than in healthy subjects, but diastolic BP estimation was more accurate in patients than in healthy subjects. The accuracy of systolic and diastolic BP estimation becomes less reliable with longer calibration intervals. These findings demonstrate that both heart disease and the calibration interval can influence the accuracy of PTT-based BP estimation and should be taken into consideration to improve estimation accuracy.

  3. Peculiarity of Seismicity in the Balakend-Zagatal Region, Azerbaijan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ismail-Zadeh, Tahir T.

    2006-03-23

    The study of seismicity in the Balakend-Zagatal region demonstrates a temporal correlation of small events in the region with the moderate events in Caucasus for the time interval of 1980 to 1990. It is shown that the processes resulting in deformation and tectonic movements of main structural elements of the Caucasus region are internal and are not related to large-scale tectonic processes. A weak dependence of the regional movements on the large-scale motion of the lithospheric plates and microplates is apparent from other geological and geodetic data as well.

  4. Low authority-threshold control for large flexible structures

    NASA Technical Reports Server (NTRS)

    Zimmerman, D. C.; Inman, D. J.; Juang, J.-N.

    1988-01-01

    An improved active control strategy for the vibration control of large flexible structures is presented. A minimum force, low authority-threshold controller is developed to bring a system with or without known external disturbances back into an 'allowable' state manifold over a finite time interval. The concept of a constrained, or allowable feedback form of the controller is introduced that reflects practical hardware implementation concerns. The robustness properties of the control strategy are then assessed. Finally, examples are presented which highlight the key points made within the paper.

  5. Short-Term Forecasting of Taiwanese Earthquakes Using a Universal Model of Fusion-Fission Processes

    PubMed Central

    Cheong, Siew Ann; Tan, Teck Liang; Chen, Chien-Chih; Chang, Wu-Lung; Liu, Zheng; Chew, Lock Yue; Sloot, Peter M. A.; Johnson, Neil F.

    2014-01-01

    Predicting how large an earthquake can be, where and when it will strike remains an elusive goal in spite of the ever-increasing volume of data collected by earth scientists. In this paper, we introduce a universal model of fusion-fission processes that can be used to predict earthquakes starting from catalog data. We show how the equilibrium dynamics of this model very naturally explains the Gutenberg-Richter law. Using the high-resolution earthquake catalog of Taiwan between Jan 1994 and Feb 2009, we illustrate how out-of-equilibrium spatio-temporal signatures in the time interval between earthquakes and the integrated energy released by earthquakes can be used to reliably determine the times, magnitudes, and locations of large earthquakes, as well as the maximum numbers of large aftershocks that would follow. PMID:24406467

  6. Impact of the Timing of Metoprolol Administration During STEMI on Infarct Size and Ventricular Function.

    PubMed

    García-Ruiz, Jose M; Fernández-Jiménez, Rodrigo; García-Alvarez, Ana; Pizarro, Gonzalo; Galán-Arriola, Carlos; Fernández-Friera, Leticia; Mateos, Alonso; Nuno-Ayala, Mario; Aguero, Jaume; Sánchez-González, Javier; García-Prieto, Jaime; López-Melgar, Beatriz; Martínez-Tenorio, Pedro; López-Martín, Gonzalo J; Macías, Angel; Pérez-Asenjo, Braulio; Cabrera, José A; Fernández-Ortiz, Antonio; Fuster, Valentín; Ibáñez, Borja

    2016-05-10

    Pre-reperfusion administration of intravenous (IV) metoprolol reduces infarct size in ST-segment elevation myocardial infarction (STEMI). This study sought to determine how this cardioprotective effect is influenced by the timing of metoprolol therapy having either a long or short metoprolol bolus-to-reperfusion interval. We performed a post hoc analysis of the METOCARD-CNIC (effect of METOprolol of CARDioproteCtioN during an acute myocardial InfarCtion) trial, which randomized anterior STEMI patients to IV metoprolol or control before mechanical reperfusion. Treated patients were divided into short- and long-interval groups, split by the median time from 15 mg metoprolol bolus to reperfusion. We also performed a controlled validation study in 51 pigs subjected to 45 min ischemia/reperfusion. Pigs were allocated to IV metoprolol with a long (-25 min) or short (-5 min) pre-perfusion interval, IV metoprolol post-reperfusion (+60 min), or IV vehicle. Cardiac magnetic resonance (CMR) was performed in the acute and chronic phases in both clinical and experimental settings. For 218 patients (105 receiving IV metoprolol), the median time from 15 mg metoprolol bolus to reperfusion was 53 min. Compared with patients in the short-interval group, those with longer metoprolol exposure had smaller infarcts (22.9 g vs. 28.1 g; p = 0.06) and higher left ventricular ejection fraction (LVEF) (48.3% vs. 43.9%; p = 0.019) on day 5 CMR. These differences occurred despite total ischemic time being significantly longer in the long-interval group (214 min vs. 160 min; p < 0.001). There was no between-group difference in the time from symptom onset to metoprolol bolus. In the animal study, the long-interval group (IV metoprolol 25 min before reperfusion) had the smallest infarcts (day 7 CMR) and highest long-term LVEF (day 45 CMR). 
In anterior STEMI patients undergoing primary angioplasty, the sooner IV metoprolol is administered in the course of infarction, the smaller the infarct and the higher the LVEF. These hypothesis-generating clinical data are supported by a dedicated experimental large animal study. Copyright © 2016 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  7. Ehrenfest model with large jumps in finance

    NASA Astrophysics Data System (ADS)

    Takahashi, Hisanao

    2004-02-01

    Changes (returns) in stock index prices and exchange rates for currencies are argued, based on empirical data, to obey a stable distribution with characteristic exponent α<2 for short sampling intervals and a Gaussian distribution for long sampling intervals. To explain this phenomenon, an Ehrenfest model with large jumps (ELJ) is introduced, which reproduces the empirical density function of price changes for both short and long sampling intervals.

  8. The Camden & Islington Research Database: Using electronic mental health records for research.

    PubMed

    Werbeloff, Nomi; Osborn, David P J; Patel, Rashmi; Taylor, Matthew; Stewart, Robert; Broadbent, Matthew; Hayes, Joseph F

    2018-01-01

    Electronic health records (EHRs) are widely used in mental health services. Case registers using EHRs from secondary mental healthcare have the potential to deliver large-scale projects evaluating mental health outcomes in real-world clinical populations. We describe the Camden and Islington NHS Foundation Trust (C&I) Research Database which uses the Clinical Record Interactive Search (CRIS) tool to extract and de-identify routinely collected clinical information from a large UK provider of secondary mental healthcare, and demonstrate its capabilities to answer a clinical research question regarding time to diagnosis and treatment of bipolar disorder. The C&I Research Database contains records from 108,168 mental health patients, of which 23,538 were receiving active care. The characteristics of the patient population are compared to those of the catchment area, of London, and of England as a whole. The median time to diagnosis of bipolar disorder was 76 days (interquartile range: 17-391) and median time to treatment was 37 days (interquartile range: 5-194). Compulsory admission under the UK Mental Health Act was associated with shorter intervals to diagnosis and treatment. Prior diagnoses of other psychiatric disorders were associated with longer intervals to diagnosis, though prior diagnoses of schizophrenia and related disorders were associated with decreased time to treatment. The CRIS tool, developed by the South London and Maudsley NHS Foundation Trust (SLaM) Biomedical Research Centre (BRC), functioned very well at C&I. It is reassuring that data from different organizations deliver similar results, and that applications developed in one Trust can then be successfully deployed in another. The information can be retrieved in a quicker and more efficient fashion than more traditional methods of health research. 
The findings support the secondary use of EHRs for large-scale mental health research in naturalistic samples and settings investigated across large, diverse geographical areas.

  9. Does major depression result in lasting personality change?

    PubMed

    Shea, M T; Leon, A C; Mueller, T I; Solomon, D A; Warshaw, M G; Keller, M B

    1996-11-01

    Individuals with a history of depression are characterized by high levels of certain personality traits, particularly neuroticism, introversion, and interpersonal dependency. The authors examined the "scar hypothesis," i.e., the possibility that episodes of major depression result in lasting personality changes that persist beyond recovery from the depression. A large sample of first-degree relatives, spouses, and comparison subjects ascertained in connection with the proband sample from the National Institute of Mental Health Collaborative Program on the Psychobiology of Depression were assessed at two points in time separated by an interval of 6 years. Subjects with a prospectively observed first episode of major depression during the interval were compared with subjects remaining well in terms of change from time 1 to time 2 in self-reported personality traits. All subjects studied were well (had no mental disorders) at the time of both assessments. There was no evidence of negative change from premorbid to postmorbid assessment in any of the personality traits for subjects with a prospectively observed first episode of major depression during the interval. The results suggested a possible association of number and length of episodes with increased levels of emotional reliance and introversion, respectively. The findings suggest that self-reported personality traits do not change after a typical episode of major depression. Future studies are needed to determine whether such change occurs following more severe, chronic, or recurrent episodes of depression.

  10. Characterization of Fissile Assemblies Using Low-Efficiency Detection Systems

    DOE PAGES

    Chapline, George F.; Verbeke, Jerome M.

    2017-02-02

    Here, we have investigated the possibility that the amount, chemical form, multiplication, and shape of the fissile material in an assembly can be passively assayed using scintillator detection systems by only measuring the fast neutron pulse height distribution and distribution of time intervals Δt between fast neutrons. We have previously demonstrated that the alpha-ratio can be obtained from the observed pulse height distribution for fast neutrons. In this paper we report that when the distribution of time intervals is plotted as a function of logΔt, the position of the correlated neutron peak is nearly independent of detector efficiency and determines the internal relaxation rate for fast neutrons. If this information is combined with knowledge of the alpha-ratio, then the position of the minimum between the correlated and uncorrelated peaks can be used to rapidly estimate the mass, multiplication, and shape of fissile material. This method does not require a priori knowledge of either the efficiency for neutron detection or the alpha-ratio. Although our method neglects 3-neutron correlations, we have used previously obtained experimental data for metallic and oxide forms of Pu to demonstrate that our method yields good estimates for multiplications as large as 2, and that the only constraint on detector efficiency/observation time is that a peak in the interval time distribution due to correlated neutrons is visible.
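The log-interval plotting described above can be sketched numerically. The block below is an illustrative toy with synthetic arrival times and invented rate constants (not the paper's data or a real detector model): it histograms consecutive neutron inter-arrival times in uniform bins of log10 Δt, producing the two peaks the abstract describes, correlated bursts at short Δt and uncorrelated background at longer Δt.

```python
import math
import random

random.seed(1)

# Synthetic neutron arrival times (hypothetical rates, illustration only):
# uncorrelated background arrivals plus short correlated fission bursts.
times = []
t = 0.0
for _ in range(2000):
    t += random.expovariate(1000.0)          # background, mean 1 ms
    times.append(t)
    if random.random() < 0.3:                # some events start a burst
        for _ in range(3):
            t += random.expovariate(1.0e6)   # correlated, mean 1 us
            times.append(t)
times.sort()

# Histogram consecutive inter-arrival times in uniform bins of log10(dt),
# as in the paper's log-interval plot.
log_dts = [math.log10(b - a) for a, b in zip(times, times[1:]) if b > a]
lo, hi, nbins = -8.0, 0.0, 32
hist = [0] * nbins
for x in log_dts:
    i = int((x - lo) / (hi - lo) * nbins)
    if 0 <= i < nbins:
        hist[i] += 1

# Two peaks emerge: correlated intervals near 1 us, background near 1 ms.
peak_corr = max(range(nbins // 2), key=lambda i: hist[i])
peak_unc = max(range(nbins // 2, nbins), key=lambda i: hist[i])
```

The correlated peak position tracks the chain relaxation time and, as the paper notes, barely moves with detection efficiency, since subsampling an exponential burst leaves its time scale unchanged.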

  11. A short review of paleoenvironments for Lower Beaufort (Upper Permian) Karoo sequences from southern to central Africa: A major Gondwana Lacustrine episode

    NASA Astrophysics Data System (ADS)

    Yemane, K.; Kelts, K.

    This paper compares Karoo deposits within the Lower Beaufort (Late Permian) time interval from southern to central Africa. Facies aspects are summarized for selected sequences and depositional environments assessed in connection with the palaeogeography. The comparison shows that the thickness of Lower Beaufort sequences varies greatly; sequences are over a kilometre thick at the southern tip, but decrease drastically in thickness to the north, northwest and northeast, and are commonly absent from the western part of the subcontinent. Depositional environments are continental except for small estuarine intervals from a sequence in Tanzania. The commonest lithologies comprise mudstones, siltstones, arkoses and carbonates. In spite of the dominance of fluvial facies, the records preserved by intervals of lacustrine sequences suggest that large lakes were major features of the palaeogeography, and that lacustrine environments may have been the dominant depositional environments. The Lower Beaufort landscape is generally interpreted as an expansive cratonic lowland with meandering rivers and streams crossing vast floodplains, which were indented by concomitant shallow lakes of various sizes. The lakes of the Karoo tectono-sedimentary terrain were often ephemeral and closely linked with fluvial processes, but large, anoxic lakes are also documented. On the other hand, giant freshwater lakes covered large areas of the Zambezian tectono-sedimentary terrain and may have been locally connected. Evidence from abundant freshwater fossil assemblages, particularly from the Zambezian tectono-sedimentary terrain, suggests that in spite of the generally semi-arid global climate of the Upper Permian, seasonal precipitation (monsoonal?) supplied enough moisture to sustain large perennial lakes. Because of the unique nature of the Permian continental configuration and palaeogeography, however, modern analogues of such large systems are lacking. 
The general lithological and palaeontological correlatability of Lower Beaufort sequences suggests a similar regional palaeoclimate, whereas the differences in distribution are attributed to the control of tectonic settings. From the widespread occurrence of lake deposits across the African subcontinent over a relatively long interval, we conclude that lake deposits provide more information for understanding Karoo palaeogeography than previously thought, since such lacustrine sequences should hold sensitive, high-resolution records for palaeoenvironmental interpretations.

  12. Dating human skeletal remains: investigating the viability of measuring the equilibrium between 210Po and 210Pb as a means of estimating the post-mortem interval.

    PubMed

    Swift, B

    1998-11-30

    Estimating the post-mortem interval in skeletal remains is a notoriously difficult task; forensic pathologists often rely heavily upon experience in recognising morphological appearances. Previous techniques have involved measuring physical or chemical changes within the hydroxyapatite matrix, radiocarbon dating and 90Sr dating, though no individual test has been advocated. Within this paper it is proposed that measuring the equilibrium between two naturally occurring radio-isotopes, 210Po and 210Pb, and comparison with post-mortem examination samples would produce a new method of dating human skeletal remains. Possible limitations exist, notably the effect of diagenesis, time limitations and relative cost, though this technique could provide a relatively accurate means of determining the post-mortem interval. It is therefore proposed that a large study be undertaken to provide a calibration scale against which bones uncovered can be dated.
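The proposed 210Po/210Pb approach can be illustrated with the standard two-member Bateman equations. This is a sketch under strong simplifying assumptions (the intermediate 210Bi, half-life about 5 days, is neglected, and the initial activity ratio r0 is taken as known; the function names are mine, not the paper's): the measured activity ratio is inverted numerically to yield a post-mortem interval.

```python
import math

# Known half-lives: 210Pb ~ 22.3 years, 210Po ~ 138.4 days.
# The intermediate 210Bi (t1/2 ~ 5 d) is neglected in this sketch.
YEAR = 365.25
LAM_PB = math.log(2) / (22.3 * YEAR)   # decay constants, per day
LAM_PO = math.log(2) / 138.4

def po_pb_activity_ratio(t_days, r0=0.0):
    """Activity ratio A(210Po)/A(210Pb) at t days post mortem, from the
    two-member Bateman solution, starting at an initial ratio r0."""
    k = LAM_PO / (LAM_PO - LAM_PB)
    decay = math.exp(-(LAM_PO - LAM_PB) * t_days)
    return k * (1.0 - decay) + r0 * decay

def estimate_pmi(measured_ratio, r0=0.0):
    """Invert the monotone ratio curve by bisection to estimate the
    post-mortem interval in days (assumes r0 is known)."""
    lo, hi = 0.0, 20.0 * YEAR
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if po_pb_activity_ratio(mid, r0) < measured_ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because the ratio saturates near transient equilibrium (about 1.017 here) within a few 210Po half-lives, the usable dating window is limited to roughly the first couple of years, one of the time limitations the abstract mentions.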

  13. Comment on short-term variation in subjective sleepiness.

    PubMed

    Eriksen, Claire A; Akerstedt, Torbjörn; Kecklund, Göran; Akerstedt, Anna

    2005-12-01

    Subjective sleepiness at different times is often measured in studies on sleep loss, night work, or drug effects. However, the context at the time of rating may influence results. The present study examined sleepiness throughout the day at hourly intervals and, during controlled activities [reading, writing, walking, social interaction (discussion), etc.], at 10-min intervals for 3 hr. Six subjects were studied on a normal working day preceded by a scheduled early rising (to induce sleepiness). Analysis showed a significant U-shaped pattern across the day with peaks in the early morning and late evening. A walk and social interaction were associated with low sleepiness, compared to sedentary and quiet office work. None of this was visible in the hourly ratings. There was also a pronounced afternoon increase in sleepiness that was not observable with hourly ratings. It was concluded that there are large variations in sleepiness related to time of day and also to context, and that sparse sampling of subjective sleepiness may miss much of this variation.

  14. Allan deviation analysis of financial return series

    NASA Astrophysics Data System (ADS)

    Hernández-Pérez, R.

    2012-05-01

    We perform a scaling analysis for the return series of different financial assets applying the Allan deviation (ADEV), which is used in time and frequency metrology to characterize quantitatively the stability of frequency standards, since it has been shown to be a robust quantity for analyzing fluctuations of non-stationary time series over different observation intervals. The data used are daily opening-price series for assets from different markets during a time span of around ten years. We found that the ADEV results for the return series at short scales resemble those expected for an uncorrelated series, consistent with the efficient market hypothesis. On the other hand, the ADEV results for the absolute return series at short scales (the first one or two decades) decrease following an approximate scaling relation up to a point that differs for almost each asset, after which the ADEV deviates from scaling; this suggests that clustering, long-range dependence and non-stationarity signatures in the series drive the results for large observation intervals.
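For readers unfamiliar with the statistic, a minimal non-overlapping Allan deviation estimator can be sketched as follows (with illustrative white-noise input rather than the paper's financial data); for uncorrelated samples the ADEV falls as the inverse square root of the averaging window, the short-scale behaviour the paper reports for return series.

```python
import math
import random

random.seed(7)

def allan_deviation(y, m):
    """Non-overlapping Allan deviation at averaging window m samples:
    sqrt(mean of (ybar_{k+1} - ybar_k)^2 / 2) over adjacent blocks."""
    nblocks = len(y) // m
    bars = [sum(y[k * m:(k + 1) * m]) / m for k in range(nblocks)]
    diffs = [(b - a) ** 2 for a, b in zip(bars, bars[1:])]
    return math.sqrt(sum(diffs) / (2 * len(diffs)))

# White-noise stand-in for a return series: ADEV at window m should be
# roughly ADEV at window 1 divided by sqrt(m).
y = [random.gauss(0.0, 1.0) for _ in range(4096)]
adev_1 = allan_deviation(y, 1)
adev_16 = allan_deviation(y, 16)
```

Clustering or long-range dependence in absolute returns would break this inverse-square-root decay at large windows, which is how deviations from scaling show up in an ADEV plot.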

  15. A comparative analysis of spectral exponent estimation techniques for 1/fβ processes with applications to the analysis of stride interval time series

    PubMed Central

    Schaefer, Alexander; Brach, Jennifer S.; Perera, Subashan; Sejdić, Ervin

    2013-01-01

    Background: The time evolution and complex interactions of many nonlinear systems, such as in the human body, result in fractal types of parameter outcomes that exhibit self-similarity over long time scales by a power law in the frequency spectrum S(f) = 1/fβ. The scaling exponent β is thus often interpreted as a “biomarker” of relative health and decline. New Method: This paper presents a thorough comparative numerical analysis of fractal characterization techniques with specific consideration given to experimentally measured gait stride interval time series. The ideal fractal signals generated in the numerical analysis are constrained under varying lengths and biases indicative of a range of physiologically conceivable fractal signals. This analysis complements previous investigations of fractal characteristics in healthy and pathological gait stride interval time series, with which this study is compared. Results: The results of our analysis showed that the averaged wavelet coefficient method consistently yielded the most accurate results. Comparison with Existing Methods: Class-dependent methods proved to be unsuitable for physiological time series. Detrended fluctuation analysis, the most prevalent method in the literature, exhibited large estimation variances. Conclusions: The comparative numerical analysis and experimental applications provide a thorough basis for determining an appropriate and robust method for measuring and comparing a physiologically meaningful biomarker, the spectral index β. In consideration of the constraints of application, we note the significant drawbacks of detrended fluctuation analysis and conclude that the averaged wavelet coefficient method can provide reasonable consistency and accuracy for characterizing these fractal time series. PMID:24200509

  16. A comparative analysis of spectral exponent estimation techniques for 1/f(β) processes with applications to the analysis of stride interval time series.

    PubMed

    Schaefer, Alexander; Brach, Jennifer S; Perera, Subashan; Sejdić, Ervin

    2014-01-30

    The time evolution and complex interactions of many nonlinear systems, such as in the human body, result in fractal types of parameter outcomes that exhibit self-similarity over long time scales by a power law in the frequency spectrum S(f)=1/f(β). The scaling exponent β is thus often interpreted as a "biomarker" of relative health and decline. This paper presents a thorough comparative numerical analysis of fractal characterization techniques with specific consideration given to experimentally measured gait stride interval time series. The ideal fractal signals generated in the numerical analysis are constrained under varying lengths and biases indicative of a range of physiologically conceivable fractal signals. This analysis complements previous investigations of fractal characteristics in healthy and pathological gait stride interval time series, with which this study is compared. The results of our analysis showed that the averaged wavelet coefficient method consistently yielded the most accurate results. Class-dependent methods proved to be unsuitable for physiological time series. Detrended fluctuation analysis, the most prevalent method in the literature, exhibited large estimation variances. The comparative numerical analysis and experimental applications provide a thorough basis for determining an appropriate and robust method for measuring and comparing a physiologically meaningful biomarker, the spectral index β. In consideration of the constraints of application, we note the significant drawbacks of detrended fluctuation analysis and conclude that the averaged wavelet coefficient method can provide reasonable consistency and accuracy for characterizing these fractal time series. Copyright © 2013 Elsevier B.V. All rights reserved.
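One of the simpler estimators compared in studies of this kind, log-log regression on the periodogram, can be sketched as follows. This is a naive O(n²) DFT illustration on synthetic signals (white noise, β ≈ 0, and a random walk, β ≈ 2), not the authors' implementation or data, and it is exactly the kind of high-variance estimator to which the averaged wavelet coefficient method is preferred.

```python
import cmath
import math
import random

random.seed(3)

def periodogram_beta(x):
    """Estimate the spectral exponent beta of a 1/f^beta process by OLS
    on log10 S(f) versus log10 f (naive O(n^2) DFT, sketch only)."""
    n = len(x)
    mean = sum(x) / n
    x = [v - mean for v in x]
    logf, logs = [], []
    for k in range(1, n // 2):
        xk = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        logf.append(math.log10(k / n))
        logs.append(math.log10(abs(xk) ** 2 / n))
    fm = sum(logf) / len(logf)
    sm = sum(logs) / len(logs)
    num = sum((f - fm) * (s - sm) for f, s in zip(logf, logs))
    den = sum((f - fm) ** 2 for f in logf)
    return -num / den          # S(f) ~ f^(-beta), so slope = -beta

n = 256
white = [random.gauss(0.0, 1.0) for _ in range(n)]     # beta ~ 0
walk = [0.0]
for _ in range(n - 1):
    walk.append(walk[-1] + random.gauss(0.0, 1.0))     # beta ~ 2
beta_white = periodogram_beta(white)
beta_walk = periodogram_beta(walk)
```

The scatter of individual periodogram ordinates is large (each is roughly exponentially distributed around the true spectrum), which is why slope estimates from short physiological series are noisy and biased by leakage.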

  17. The cyclic and fractal seismic series preceding an mb 4.8 earthquake on 1980 February 14 near the Virgin Islands

    USGS Publications Warehouse

    Varnes, D.J.; Bufe, C.G.

    1996-01-01

    Seismic activity in the 10 months preceding the 1980 February 14, mb 4.8 earthquake in the Virgin Islands, reported on by Frankel in 1982, consisted of four principal cycles. Each cycle began with a relatively large event or series of closely spaced events, and the duration of the cycles progressively shortened by a factor of about 3/4. Had this regular shortening of the cycles been recognized prior to the earthquake, the time of the next episode of seismicity (the main shock) might have been closely estimated 41 days in advance. That this event could be much larger than the previous events is indicated by time-to-failure analysis of the accelerating rise in released seismic energy, using a non-linear time- and slip-predictable foreshock model. Examination of the timing of all events in the sequence shows an even higher degree of order. Rates of seismicity, measured by consecutive interevent times, when plotted on an iteration diagram of each rate versus the succeeding rate, form a triangular circulating trajectory. The trajectory becomes an ascending helix if extended in a third dimension, time. This construction reveals additional and precise relations among the time intervals between times of relatively high or relatively low rates of seismic activity, including period halving and doubling. The set of 666 time intervals between all possible pairs of the 37 recorded events appears to be a fractal; the set of time points that define the intervals has a finite, non-integer correlation dimension of 0.70. In contrast, the average correlation dimension of 50 random sequences of 37 events is significantly higher, close to 1.0. In a similar analysis, the set of distances between pairs of epicentres has a fractal correlation dimension of 1.52. Well-defined cycles, numerous precise ratios among time intervals, and a non-random temporal fractal dimension suggest that the seismic series is not a random process, but rather the product of a deterministic dynamic system.
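The correlation dimension quoted above is conventionally estimated with the Grassberger-Procaccia correlation sum. The block below is a minimal two-scale sketch on synthetic 1-D point sets (a uniform set, dimension ≈ 1, and a middle-third Cantor set, dimension ≈ ln 2/ln 3 ≈ 0.63), not the earthquake catalogue itself; radii and set sizes are illustrative.

```python
import math
import random

random.seed(5)

def correlation_dimension(points, r1, r2):
    """Two-scale Grassberger-Procaccia estimate for a 1-D point set:
    slope of log C(r) between radii r1 < r2, where C(r) is the
    fraction of point pairs closer than r."""
    def corr_sum(r):
        n = len(points)
        close = sum(1 for i in range(n) for j in range(i + 1, n)
                    if abs(points[i] - points[j]) < r)
        return 2.0 * close / (n * (n - 1))
    return (math.log(corr_sum(r2)) - math.log(corr_sum(r1))) / \
           (math.log(r2) - math.log(r1))

# Uniform random points fill the line: dimension ~ 1
# (the paper's random-sequence control).
uniform = [random.random() for _ in range(200)]

# Random points of the middle-third Cantor set: dimension ~ ln2/ln3 ~ 0.63,
# a fractal set comparable to the 0.70 reported for the event times.
cantor = []
for _ in range(200):
    x, scale = 0.0, 1.0
    for _ in range(12):
        scale /= 3.0
        if random.random() < 0.5:
            x += 2.0 * scale
    cantor.append(x)

d_uniform = correlation_dimension(uniform, 0.01, 0.1)
d_cantor = correlation_dimension(cantor, 0.01, 0.1)
```

A dimension well below 1 for the event-time set, against ~1 for random surrogates, is what supports the paper's non-random, deterministic interpretation.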

  18. Graph-based real-time fault diagnostics

    NASA Technical Reports Server (NTRS)

    Padalkar, S.; Karsai, G.; Sztipanovits, J.

    1988-01-01

    A real-time fault detection and diagnosis capability is absolutely crucial in the design of large-scale space systems. Some of the existing AI-based fault diagnostic techniques, such as expert systems and qualitative modelling, are frequently ill-suited for this purpose. Expert systems are often inadequately structured, difficult to validate and suffer from knowledge-acquisition bottlenecks. Qualitative modelling techniques sometimes generate a large number of failure source alternatives, thus hampering speedy diagnosis. In this paper we present a graph-based technique which is well suited for real-time fault diagnosis, structured knowledge representation and acquisition, and testing and validation. A Hierarchical Fault Model of the system to be diagnosed is developed. At each level of the hierarchy, there exist fault propagation digraphs denoting causal relations between failure modes of subsystems. The edges of such a digraph are weighted with fault propagation time intervals. Efficient and restartable graph algorithms are used for on-line speedy identification of failure source components.
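The idea of diagnosing from time-weighted fault propagation digraphs can be sketched as follows. The component names, edge weights, and single-source search below are hypothetical illustrations (the paper's hierarchical model and restartable algorithms are more elaborate): a candidate failure source is accepted only if every observed alarm time falls inside the cumulative propagation window reachable from that source.

```python
# Hypothetical fault propagation digraph: edge (a, b) -> (tmin, tmax)
# means a failure in a reaches b within tmin..tmax seconds.
EDGES = {
    ("pump", "pressure"): (1, 3),
    ("pressure", "flow"): (2, 5),
    ("valve", "flow"): (1, 2),
}

def propagation_windows(source):
    """Components reachable from source, with cumulative [earliest,
    latest] propagation intervals (simple DFS; assumes no cycles and,
    for this sketch, a single path to each node)."""
    windows = {source: (0, 0)}
    stack = [source]
    while stack:
        node = stack.pop()
        lo, hi = windows[node]
        for (a, b), (tmin, tmax) in EDGES.items():
            if a == node and b not in windows:
                windows[b] = (lo + tmin, hi + tmax)
                stack.append(b)
    return windows

def diagnose(alarms):
    """Candidate failure sources consistent with every observed alarm
    time (times measured from the assumed failure onset)."""
    components = {a for a, _ in EDGES} | {b for _, b in EDGES}
    good = []
    for s in sorted(components):
        w = propagation_windows(s)
        if all(c in w and w[c][0] <= t <= w[c][1] for c, t in alarms.items()):
            good.append(s)
    return good
```

For example, a pressure alarm at t=2 s followed by a flow alarm at t=6 s is consistent only with a pump failure, while a lone early flow alarm points at the valve; timing windows prune the alternative sources that purely causal reasoning would keep.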

  19. Quaternary Geology and Surface Faulting Hazard: Active and Capable Faults in Central Apennines, Italy

    NASA Astrophysics Data System (ADS)

    Falcucci, E.; Gori, S.

    2015-12-01

    The 2009 L'Aquila earthquake (Mw 6.1), in central Italy, raised the issue of surface faulting hazard in Italy, since large urban areas were affected by surface displacement along the causative structure, the Paganica fault. Since then, guidelines for microzonation have been drawn up that take into consideration the problem of surface faulting in Italy, laying the basis for future regulations on the related hazard, similarly to other countries (e.g. the USA). More specific guidelines on the management of areas affected by active and capable faults (i.e. those able to produce surface faulting) are going to be released by the National Department of Civil Protection; these would define the zonation of areas affected by active and capable faults, with prescriptions for land use planning. As such, the guidelines raise the problem of the time interval and general operational criteria to assess fault capability for the Italian territory. As for the chronology, a review of the international literature and regulations allowed Galadini et al. (2012) to propose different time intervals depending on the ongoing tectonic regime - compressive or extensional - which encompass the Quaternary. As for the operational criteria, detailed analysis of the large number of works dealing with active faulting in Italy shows that investigations exclusively based on surface morphological features (e.g. fault plane exposure) or on indirect investigations (geophysical data) are not sufficient, or even unreliable, for defining the presence of an active and capable fault; instead, more accurate geological information on the Quaternary space-time evolution of the areas affected by such tectonic structures is needed. A test area for which active and capable faults can first be mapped based on such a classical but still effective methodological approach is the central Apennines. Reference: Galadini F., Falcucci E., Galli P., Giaccio B., Gori S., Messina P., Moro M., Saroli M., Scardia G., Sposato A. (2012). 
Time intervals to assess active and capable faults for engineering practices in Italy. Eng. Geol., 139/140, 50-65.

  20. Photoinduced diffusion molecular transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rozenbaum, Viktor M., E-mail: vik-roz@mail.ru, E-mail: litrakh@gmail.com; Dekhtyar, Marina L.; Lin, Sheng Hsien

    2016-08-14

    We consider a Brownian photomotor, namely, the directed motion of a nanoparticle in an asymmetric periodic potential under the action of periodic rectangular resonant laser pulses which cause charge redistribution in the particle. Based on the kinetics for the photoinduced electron redistribution between two or three energy levels of the particle, the time dependence of its potential energy is derived and the average directed velocity is calculated in the high-temperature approximation (when the spatial amplitude of potential energy fluctuations is small relative to the thermal energy). The thus developed theory of photoinduced molecular transport appears applicable not only to conventional dichotomous Brownian motors (with only two possible potential profiles) but also to a much wider variety of molecular nanomachines. The distinction between the realistic time dependence of the potential energy and that for a dichotomous process (a step function) is represented in terms of relaxation times (they can differ on the time intervals of the dichotomous process). As shown, a Brownian photomotor has the maximum average directed velocity at (i) large laser pulse intensities (resulting in short relaxation times on laser-on intervals) and (ii) excited state lifetimes long enough to permit efficient photoexcitation but still much shorter than laser-off intervals. A Brownian photomotor with optimized parameters is exemplified by a cylindrically shaped semiconductor nanocluster which moves directly along a polar substrate due to a periodically photoinduced dipole moment (caused by the repetitive excited electron transitions to a non-resonant level of the nanocylinder surface impurity).

  1. 78 FR 6232 - Energy Conservation Program: Test Procedures for Conventional Cooking Products With Induction...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-30

    [Flattened table fragment: cooking-surface efficiency test data by technology type (e.g., large electric coil units), listing measured efficiency (%), deviation, and a confidence interval for each test unit.]

  2. Timing of paleoearthquakes on the northern Hayward Fault: preliminary evidence in El Cerrito, California

    USGS Publications Warehouse

    Lienkaemper, J.J.; Schwartz, D.P.; Kelson, K.I.; Lettis, W.R.; Simpson, Gary D.; Southon, J.R.; Wanket, J.A.; Williams, P.L.

    1999-01-01

    The Working Group on California Earthquake Probabilities estimated that the northern Hayward fault had the highest probability (0.28) of producing a M7 Bay Area earthquake in 30 years (WGCEP, 1990). This probability was based, in part, on the assumption that the last large earthquake occurred on this segment in 1836. However, a recent study of historical documents concludes that the 1836 earthquake did not occur on the northern Hayward fault, thereby extending the elapsed time to at least 220 yr, the beginning of the written record. The average recurrence interval for a M7 on the northern Hayward fault is unknown. WGCEP (1990) assumed an interval of 167 years. The 1996 Working Group on Northern California Earthquake Potential estimated ~210 yr, based on extrapolations from southern Hayward paleoseismological studies and a revised estimate of 1868 slip on the southern Hayward fault. To help constrain the timing of paleoearthquakes on the northern Hayward fault for the 1999 Bay Area probability update, we excavated two trenches that cross the fault and a sag pond on the Mira Vista golf course. As the site is on the second fairway, we were limited to fewer than ten days to document these trenches. Analysis was aided by rapid C-14 dating of more than 90 samples, which gave near real-time results with the trenches still open. A combination of upward fault terminations, disrupted strata, and discordant angular relations indicates that at least four, and possibly seven or more, surface-faulting earthquakes occurred during a 1630-2130 yr interval. Hence, the average recurrence time could be <270 yr, but is no more than 710 yr. The most recent earthquake (MRE) occurred after AD 1640. Preliminary analysis of calibrated dates supports the assumption that no large historical (post-1776) earthquakes have ruptured the surface here, but the youngest dates need more corroboration. Analyses of pollen for the presence of non-native species helped to constrain the time of the MRE. 
The earthquake recurrence estimates described in this report are preliminary and should not be used as a basis for hazard estimates. Additional trenching is planned for this location to answer questions raised during the initial phase of trenching.

  3. Systolic time interval v heart rate regression equations using atropine: reproducibility studies.

    PubMed Central

    Kelman, A W; Sumner, D J; Whiting, B

    1981-01-01

    1. Systolic time intervals (STI) were recorded in six normal male subjects over a period of 3 weeks. On one day per week, each subject received incremental doses of atropine intravenously to increase heart rate, allowing the determination of individual STI v HR regression equations. On the other days STI were recorded with the subjects resting, in the supine position. 2. There were highly significant regression relationships between heart rate and both LVET and QS2, but not between heart rate and PEP. 3. The regression relationships showed little intra-subject variability, but a large degree of inter-subject variability: they proved adequate to correct the STI for the daily fluctuations in heart rate. 4. Administration of small doses of atropine intravenously provides a satisfactory and convenient method of deriving individual STI v HR regression equations which can be applied over a period of weeks. PMID:7248136

  4. Systolic time interval v heart rate regression equations using atropine: reproducibility studies.

    PubMed

    Kelman, A W; Sumner, D J; Whiting, B

    1981-07-01

    1. Systolic time intervals (STI) were recorded in six normal male subjects over a period of 3 weeks. On one day per week, each subject received incremental doses of atropine intravenously to increase heart rate, allowing the determination of individual STI v HR regression equations. On the other days STI were recorded with the subjects resting, in the supine position. 2. There were highly significant regression relationships between heart rate and both LVET and QS2, but not between heart rate and PEP. 3. The regression relationships showed little intra-subject variability, but a large degree of inter-subject variability: they proved adequate to correct the STI for the daily fluctuations in heart rate. 4. Administration of small doses of atropine intravenously provides a satisfactory and convenient method of deriving individual STI v HR regression equations which can be applied over a period of weeks.
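The individual STI v HR regression described in both versions of this abstract is ordinary least squares; a minimal sketch with invented LVET values (not the study's data) shows the fit and the heart-rate correction it enables.

```python
# Illustrative OLS fit of an individual LVET-vs-heart-rate regression
# (the LVET numbers are invented for the sketch, not the study's data).
hr = [60, 70, 80, 90, 100, 110]           # heart rate, beats/min
lvet = [300, 287, 271, 259, 246, 230]     # left ventricular ejection time, ms

n = len(hr)
mx = sum(hr) / n
my = sum(lvet) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(hr, lvet))
         / sum((x - mx) ** 2 for x in hr))
intercept = my - slope * mx

def lvet_corrected(lvet_obs, hr_obs, hr_ref=60.0):
    """Correct an observed LVET to a reference heart rate using the
    fitted individual regression (the daily-fluctuation correction)."""
    return lvet_obs - slope * (hr_obs - hr_ref)
```

Because inter-subject variability in these slopes is large, the study's point is that each subject's own regression, obtained once with atropine, should be reused for that subject over the following weeks.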

  5. Power frequency spectrum analysis of surface EMG signals of upper limb muscles during elbow flexion - A comparison between healthy subjects and stroke survivors.

    PubMed

    Angelova, Silvija; Ribagin, Simeon; Raikova, Rositsa; Veneva, Ivanka

    2018-02-01

    After a stroke, motor units stop working properly, and large, fast-twitch units are more frequently affected. Their impaired function can be investigated during dynamic tasks using electromyographic (EMG) signal analysis. The aim of this paper is to investigate changes in the parameters of the power/frequency function during elbow flexion between affected, non-affected, and healthy muscles. Fifteen healthy subjects and ten stroke survivors participated in the experiments. Electromyographic data from 6 muscles of the upper limbs during elbow flexion were filtered and normalized to the amplitudes of EMG signals during maximal isometric tasks. The moments when motion started and when the flexion angle reached its maximal value were identified. Equal intervals of 0.3407 s were defined between these two moments, and one additional interval before the start of the flexion was added as the first interval. For each of these intervals the power/frequency function of the EMG signals was calculated, along with the mean (MNF) and median (MDF) frequencies, the maximal power (MPw) and the area under the power function (APw). MNF was always higher than MDF. A significant decrease in these frequencies was found in only three post-stroke survivors. The frequencies in the first time interval were nearly always the highest among all intervals. The maximal power was nearly zero during the first time interval and increased during the following ones. The largest values of MPw and APw were found for the flexor muscles, and they were higher in the muscles of the affected arm compared to the non-affected one in stroke survivors. Copyright © 2017 Elsevier Ltd. All rights reserved.
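The MNF and MDF used above have standard definitions on a discrete power spectrum; the block below computes both for an illustrative right-skewed, EMG-like spectrum (the spectral shape and bin layout are assumptions for the sketch, not the study's data). As the abstract notes, MNF exceeds MDF for such skewed spectra.

```python
import math

def mean_median_frequency(freqs, power):
    """Mean frequency MNF = sum(f*P)/sum(P); median frequency MDF is
    the bin at which cumulative power first reaches half the total."""
    total = sum(power)
    mnf = sum(f * p for f, p in zip(freqs, power)) / total
    acc = 0.0
    for f, p in zip(freqs, power):
        acc += p
        if acc >= total / 2.0:
            return mnf, f
    return mnf, freqs[-1]

# Illustrative right-skewed, EMG-like spectrum over 0-250 Hz (5 Hz bins);
# the shape f*exp(-f/60) is an assumption for this sketch.
freqs = [5.0 * k for k in range(1, 51)]
power = [f * math.exp(-f / 60.0) for f in freqs]
mnf, mdf = mean_median_frequency(freqs, power)
```

A post-stroke shift toward slower motor units would compress this spectrum toward lower frequencies, lowering both MNF and MDF, which is the change the study tests for in each 0.3407 s interval.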

  6. Timescale- and Sensory Modality-Dependency of the Central Tendency of Time Perception.

    PubMed

    Murai, Yuki; Yotsumoto, Yuko

    2016-01-01

    When individuals are asked to reproduce intervals of stimuli of various durations presented in intermixed order, longer intervals are often underestimated and shorter intervals overestimated. This phenomenon may be attributed to the central tendency of time perception, and suggests that our brain optimally encodes a stimulus interval based on current stimulus input and prior knowledge of the distribution of stimulus intervals. Two distinct systems are thought to be recruited in the perception of sub- and supra-second intervals. Sub-second timing is subject to local sensory processing, whereas supra-second timing depends on more centralized mechanisms. To clarify the factors that influence time perception, the present study investigated how both sensory modality and timescale affect the central tendency. In Experiment 1, participants were asked to reproduce sub- or supra-second intervals, defined by visual or auditory stimuli. In the sub-second range, the magnitude of the central tendency was significantly larger for visual intervals compared to auditory intervals, while visual and auditory intervals exhibited a correlated and comparable central tendency in the supra-second range. In Experiment 2, the ability to discriminate sub-second intervals in the reproduction task was controlled across modalities by using an interval discrimination task. Even when the ability to discriminate intervals was controlled, visual intervals exhibited a larger central tendency than auditory intervals in the sub-second range. In addition, the magnitude of the central tendency for visual and auditory sub-second intervals was significantly correlated. These results suggest that a common modality-independent mechanism is responsible for the supra-second central tendency, and that both the modality-dependent and modality-independent components of the timing system contribute to the central tendency in the sub-second range.
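The optimal-encoding account of the central tendency is often formalized as Gaussian prior-likelihood fusion; the sketch below uses that textbook form with illustrative numbers (the sensory and prior widths are assumptions, not values fitted to these experiments). A noisier modality receives less weight, so its reproductions regress more toward the prior mean, matching the larger visual central tendency in the sub-second range.

```python
# Gaussian prior-likelihood fusion sketch of the central tendency: the
# perceived interval is a precision-weighted average of the noisy
# sensory measurement and the prior mean of the presented intervals.
# All widths and durations below are illustrative, not fitted values.

def perceived_interval(measured, prior_mean, sigma_sense, sigma_prior):
    """Posterior mean for a Gaussian prior times a Gaussian likelihood."""
    w = sigma_prior ** 2 / (sigma_prior ** 2 + sigma_sense ** 2)
    return w * measured + (1.0 - w) * prior_mean

PRIOR = 600.0   # ms, mean of the presented interval distribution
# Noisier sub-second visual timing (larger sigma_sense) gets a smaller
# weight, so its reproduction regresses further toward the prior mean.
visual = perceived_interval(900.0, PRIOR, sigma_sense=120.0, sigma_prior=150.0)
auditory = perceived_interval(900.0, PRIOR, sigma_sense=60.0, sigma_prior=150.0)
```

Both estimates land between the true 900 ms and the 600 ms prior mean, with the visual one pulled further in, which is the overestimation-of-short/underestimation-of-long pattern the abstract opens with.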

  7. High resolution data acquisition

    DOEpatents

    Thornton, G.W.; Fuller, K.R.

    1993-04-06

    A high resolution event interval timing system measures short time intervals such as occur in high energy physics or laser ranging. Timing is provided by a clock pulse train and analog circuitry for generating a triangular wave synchronously with the pulse train. The triangular wave has an amplitude and slope functionally related to the time elapsed during each clock pulse in the train. A converter forms a first digital value of the amplitude and slope of the triangle wave at the start of the event interval and a second digital value of the amplitude and slope of the triangle wave at the end of the event interval. A counter counts the clock pulse train during the interval to form a gross event interval time. A computer then combines the gross event interval time and the first and second digital values to output a high resolution value for the event interval.

  8. High resolution data acquisition

    DOEpatents

    Thornton, Glenn W.; Fuller, Kenneth R.

    1993-01-01

    A high resolution event interval timing system measures short time intervals such as occur in high energy physics or laser ranging. Timing is provided from a clock (38) pulse train (37) and analog circuitry (44) for generating a triangular wave (46) synchronously with the pulse train (37). The triangular wave (46) has an amplitude and slope functionally related to the time elapsed during each clock pulse in the train. A converter (18, 32) forms a first digital value of the amplitude and slope of the triangle wave at the start of the event interval and a second digital value of the amplitude and slope of the triangle wave at the end of the event interval. A counter (26) counts the clock pulse train (37) during the interval to form a gross event interval time. A computer (52) then combines the gross event interval time and the first and second digital values to output a high resolution value for the event interval.
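The final combination step in both patent records reduces to simple arithmetic: the counter supplies the whole number of clock periods, and the digitized triangular-wave samples supply the fractional positions within a clock cycle at the start and end of the event. A hypothetical sketch (the clock rate, names, and sign convention are assumptions, not values from the patent):

```python
def event_interval(gross_counts, frac_start, frac_end, clock_period_s):
    """Combine the coarse counter total with the fractional cycle positions
    (each in [0, 1)) recovered from the triangular wave's amplitude and
    slope at the start and end of the event."""
    return (gross_counts - frac_start + frac_end) * clock_period_s

# Assumed 100 MHz clock (10 ns period): 42 whole cycles counted, the event
# starting 0.25 of a cycle after a clock edge and ending 0.75 after one.
t = event_interval(42, 0.25, 0.75, 10e-9)
print(t)  # about 4.25e-07 s, i.e. 425 ns
```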

  9. Formulation of an explicit-multiple-time-step time integration method for use in a global primitive equation grid model

    NASA Technical Reports Server (NTRS)

    Chao, W. C.

    1982-01-01

    With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.

  10. Development of the USGS national land-cover database over two decades

    USGS Publications Warehouse

    Xian, George Z.; Homer, Collin G.; Yang, Limin; Weng, Qihao

    2011-01-01

    Land-cover composition and change have profound impacts on terrestrial ecosystems. Land-cover and land-use (LCLU) conditions and their changes can affect social and physical environments by altering ecosystem conditions and services. Information about LCLU change is often used to produce landscape-based metrics and evaluate landscape conditions to monitor LCLU status and trends over a specific time interval (Loveland et al. 2002; Coppin et al. 2004; Lunetta et al. 2006). Continuous, accurate, and up-to-date land-cover data are important for natural resource and ecosystem management and are needed to support consistent monitoring of landscape attributes over time. Large-area land-cover information at regional, national, and global scales is critical for monitoring landscape variations over large areas.

  11. The Influence of Pituitary Size on Outcome After Transsphenoidal Hypophysectomy in a Large Cohort of Dogs with Pituitary-Dependent Hypercortisolism.

    PubMed

    van Rijn, S J; Galac, S; Tryfonidou, M A; Hesselink, J W; Penning, L C; Kooistra, H S; Meij, B P

    2016-07-01

Transsphenoidal hypophysectomy is one of the treatment strategies in the comprehensive management of dogs with pituitary-dependent hypercortisolism (PDH). This study describes the influence of pituitary size at the time of pituitary gland surgery on long-term outcome in 306 dogs with PDH. Survival and disease-free fractions were analyzed and related to pituitary size, and dogs with and without recurrence were compared. Four weeks after surgery, 91% of dogs were alive and remission was confirmed in 92% of these dogs. The median survival time was 781 days, and the median disease-free interval was 951 days. Over time, 27% of dogs developed recurrence of hypercortisolism after a median period of 555 days. Dogs with recurrence had a significantly higher pituitary height/brain area (P/B) ratio and pre-operative basal urinary corticoid-to-creatinine ratio (UCCR) than dogs without recurrence. Survival time and disease-free interval of dogs with enlarged pituitary glands were significantly shorter than those of dogs with a non-enlarged pituitary gland. Pituitary size at the time of surgery significantly increased over the 20-year period. Although larger tumors have a less favorable prognosis, outcome in larger tumors improved over time. Transsphenoidal hypophysectomy is an effective treatment for PDH in dogs, with an acceptable long-term outcome. Survival time and disease-free fractions are correlated negatively with pituitary gland size, making the P/B ratio an important pre-operative prognosticator. However, with increasing experience, and for large tumors, pituitary gland surgery remains an option to control the pituitary mass and hypercortisolism. Copyright © 2016 The Authors. Journal of Veterinary Internal Medicine published by Wiley Periodicals, Inc. on behalf of the American College of Veterinary Internal Medicine.

  12. Spring and fall phytoplankton blooms in a productive subarctic ecosystem, the eastern Bering Sea, during 1995-2011

    NASA Astrophysics Data System (ADS)

    Sigler, Michael F.; Stabeno, Phyllis J.; Eisner, Lisa B.; Napp, Jeffrey M.; Mueter, Franz J.

    2014-11-01

The timing and magnitude of phytoplankton blooms in subarctic ecosystems often strongly influence the amount of energy that is transferred through subsequent trophic pathways. In the eastern Bering Sea, spring bloom timing has been linked to ice retreat timing and production of zooplankton and fish. A large part of the eastern Bering Sea shelf (~500 km wide) is ice-covered during winter and spring. Four oceanographic moorings have been deployed along the 70-m depth contour of the eastern Bering Sea shelf, with the southern location occupied annually since 1995, the two northern locations since 2004 and the remaining location since 2001. Chlorophyll a fluorescence data from the four moorings provide 37 realizations of a spring bloom and 33 realizations of a fall bloom. We found that in the eastern Bering Sea: if ice was present after mid-March, spring bloom timing was related to ice retreat timing (p<0.001, df=1, 24); if ice was absent or retreated before mid-March, a spring bloom usually occurred in May or early June (average day 148, SE=3.5, n=11). A fall bloom also commonly occurred, usually in late September (average day 274, SE=4.2, n=33), and its timing was not significantly related to the timing of storms (p=0.88, df=1, 27) or fall water column overturn (p=0.49, df=1, 27). The magnitudes of the spring and fall blooms were correlated (p=0.011, df=28). The interval between the spring and fall blooms varied from four to six months depending on year and location. We present a hypothesis to explain how the large crustacean zooplankton taxa Calanus spp. likely respond to variation in the interval between blooms (spring to fall and fall to spring).

  13. Triggering up states in all-to-all coupled neurons

    NASA Astrophysics Data System (ADS)

    Ngo, H.-V. V.; Köhler, J.; Mayer, J.; Claussen, J. C.; Schuster, H. G.

    2010-03-01

Slow-wave sleep in mammals is characterized by a change of large-scale cortical activity currently paraphrased as cortical Up/Down states. A recent experiment demonstrated a bistable collective behaviour in ferret slices, with the remarkable property that the Up states can be switched on and off with pulses, or excitations, of the same polarity, whereby the effect of the second pulse significantly depends on the time interval between the pulses. Here we present a simple time-discrete model of a neural network that exhibits this type of behaviour and quantitatively reproduces the time dependence found in the experiments.

  14. Detection of anomalous signals in temporally correlated data (Invited)

    NASA Astrophysics Data System (ADS)

    Langbein, J. O.

    2010-12-01

Detection of transient tectonic signals in data obtained from large geodetic networks requires the ability to detect signals that are both temporally and spatially coherent. In this report I will describe a modification to an existing method that estimates both the coefficients of a temporally correlated noise model and an efficient filter based on the noise model. This filter, when applied to the original time-series, effectively whitens (or flattens) the power spectrum. The filtered data provide the means to calculate running averages which are then used to detect deviations from the background trends. For large networks, time-series of signal-to-noise ratio (SNR) can be easily constructed since, by filtering, each of the original time-series has been transformed into one that is closer to having a Gaussian distribution with a variance of 1.0. Anomalous intervals may be identified by counting the number of GPS sites for which the SNR exceeds a specified value. For example, during one time interval, if there were 5 out of 20 time-series with SNR>2, this would be considered anomalous; typically, one would expect, at 95% confidence, at most 1 out of 20 time-series with an SNR>2. For time intervals with an anomalously large number of high SNR, the spatial distribution of the SNR is mapped to identify the location of the anomalous signal(s) and their degree of spatial clustering. Estimating the filter that should be used to whiten the data requires modification of the existing methods that employ maximum likelihood estimation to determine the temporal covariance of the data. In these methods, it is assumed that the noise components in the data are a combination of white, flicker and random-walk processes and that they are derived from three different and independent sources.
Instead, in this new method, the covariance matrix is constructed assuming that only one source is responsible for the noise and that source can be represented as a white-noise random-number generator convolved with a filter whose spectral properties are frequency (f) independent at its highest frequencies, 1/f at the middle frequencies, and 1/f^2 at the lowest frequencies. For data sets with no gaps in their time-series, construction of covariance and inverse covariance matrices is extremely efficient. Application of the above algorithm to real data potentially involves several iterations, as small tectonic signals of interest are often indistinguishable from background noise. Consequently, the time-series of each GPS site is simply plotted to identify the largest outliers and signals independent of their cause. Any analysis of the background noise levels must factor in these other signals, while the gross outliers need to be removed.
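The site-counting criterion in this abstract can be formalized with a binomial tail probability: if the whitened series are roughly independent unit-variance Gaussian, the chance that a given site exceeds SNR > 2 in an interval is about 2.3%, so 5 exceedances out of 20 sites is far beyond chance. A sketch under those independence assumptions (not code from the report):

```python
from math import comb, erf, sqrt

def p_at_least(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): chance that k or more of n
    independent sites exceed the SNR threshold in the same interval."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# One-sided tail of a standard Gaussian beyond 2: P(SNR > 2) = 1 - Phi(2)
p_exceed = 0.5 * (1.0 - erf(2.0 / sqrt(2.0)))  # ~0.0228

print(round(p_exceed, 4))
print(p_at_least(1, 20, p_exceed))   # >= 1 of 20 by chance: fairly common
print(p_at_least(5, 20, p_exceed))   # >= 5 of 20 by chance: ~1e-4, anomalous
```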

  15. Direct thermal effects of the Hadean bombardment did not limit early subsurface habitability

    NASA Astrophysics Data System (ADS)

    Grimm, R. E.; Marchi, S.

    2018-03-01

Intense bombardment is considered characteristic of the Hadean and early Archean eons, yet some detrital zircons indicate that near-surface water was present and thus at least intervals of clement conditions may have existed. We investigate the habitability of the top few kilometers of the subsurface by updating a prior approach to thermal evolution of the crust due to impact heating, using a revised bombardment history, a more accurate thermal model, and treatment of melt sheets from large projectiles (>100 km diameter). We find that subsurface habitable volume grows nearly continuously throughout the Hadean and early Archean (4.5-3.5 Ga) because impact heat is dissipated rapidly compared to the total duration and waning strength of the bombardment. Global sterilization was only achieved using an order of magnitude more projectiles in 1/10 the time. Melt sheets from large projectiles can completely resurface the Earth several times prior to ∼4.2 Ga but at most once since then. Even in the Hadean, melt sheets have little effect on habitability because cooling times are short compared to resurfacing intervals, allowing subsurface biospheres to be locally re-established by groundwater infiltration between major impacts. Therefore, the subsurface is always habitable somewhere, and production of a global steam or silicate-vapor atmosphere is the only remaining avenue to early surface sterilization by bombardment.

  16. Credible occurrence probabilities for extreme geophysical events: earthquakes, volcanic eruptions, magnetic storms

    USGS Publications Warehouse

    Love, Jeffrey J.

    2012-01-01

Statistical analysis is made of rare, extreme geophysical events recorded in historical data -- counting the number of events $k$ with sizes that exceed chosen thresholds during specific durations of time $\tau$. Under transformations that stabilize data and model-parameter variances, the most likely Poisson-event occurrence rate, $k/\tau$, applies for frequentist inference and, also, for Bayesian inference with a Jeffreys prior that ensures posterior invariance under changes of variables. Frequentist confidence intervals and Bayesian (Jeffreys) credibility intervals are approximately the same and easy to calculate: $(1/\tau)[(\sqrt{k} - z/2)^{2},(\sqrt{k} + z/2)^{2}]$, where $z$ is a parameter that specifies the width, $z=1$ ($z=2$) corresponding to $1\sigma$, $68.3\%$ ($2\sigma$, $95.4\%$). If only a few events have been observed, as is usually the case for extreme events, then these "error-bar" intervals might be considered to be relatively wide. From historical records, we estimate most likely long-term occurrence rates, 10-yr occurrence probabilities, and intervals of frequentist confidence and Bayesian credibility for large earthquakes, explosive volcanic eruptions, and magnetic storms.
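The interval formula quoted in the abstract is directly computable; the sketch below applies it to a hypothetical count of 4 events in a 100-year record (the counts are illustrative, not taken from the paper).

```python
from math import sqrt, exp

def poisson_rate_interval(k, tau, z=1.0):
    """Approximate frequentist confidence / Bayesian (Jeffreys) credibility
    interval for a Poisson occurrence rate, per the abstract:
    (1/tau) * [(sqrt(k) - z/2)**2, (sqrt(k) + z/2)**2].
    z=1 gives ~68.3% (1 sigma); z=2 gives ~95.4% (2 sigma)."""
    lo = max(sqrt(k) - z / 2.0, 0.0) ** 2 / tau
    hi = (sqrt(k) + z / 2.0) ** 2 / tau
    return lo, hi

k, tau = 4, 100.0            # e.g., 4 extreme storms in a 100-yr record
rate = k / tau               # most likely occurrence rate, k/tau
lo, hi = poisson_rate_interval(k, tau, z=2.0)
p10 = 1.0 - exp(-rate * 10)  # 10-yr occurrence probability at that rate
print(rate, lo, hi, round(p10, 3))
```

For k=4 and tau=100 yr, the 2-sigma interval is 0.01 to 0.09 events/yr around the most likely rate of 0.04/yr, illustrating how wide the "error bars" are when only a few events have been observed.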

  17. Chemical, Biological, and Radiological (CBR) Contamination Survivability, Large Item Interiors

    DTIC Science & Technology

    2016-08-03

    e.g., mud, grease, etc.). m. Pretest (baseline) and posttest (30 days after the first contamination and/or other defined long-term time interval...procedures used. f. Description of SUT-interior materials of construction, paint type, and surface condition (pretest and posttest ), including...difficult to decontaminate or allow liquid to penetrate. g. Pretest and posttest ME functional performance characteristics (when measured) used as

  18. The orbital evolution of the Aten asteroids over 11,550 years (9300 BC to 2250 AD)

    NASA Astrophysics Data System (ADS)

    Zausaev, A. F.; Pushkarev, A. N.

    1991-04-01

The orbital evolution of five Aten asteroids was monitored by the Everhart method in the time interval from 9300 BC to 2250 AD. The closest encounters with large planets in the evolution process are calculated. Four out of five asteroids exhibit stable resonances with Earth and Venus over the period from 9300 BC to 2250 AD.

  19. Dynamical phase transition in the simplest molecular chain model

    NASA Astrophysics Data System (ADS)

    Malyshev, V. A.; Muzychka, S. A.

    2014-04-01

We consider the dynamics of the simplest chain of a large number N of particles. In the double scaling limit, we find the partition of the parameter space into two domains: for one domain, the supremum over the time interval (0, ∞) of the relative extension of the chain tends to 1 as N → ∞, and for the other domain, to infinity.

  20. Measuring the EMS patient access time interval and the impact of responding to high-rise buildings.

    PubMed

    Morrison, Laurie J; Angelini, Mark P; Vermeulen, Marian J; Schwartz, Brian

    2005-01-01

To measure the patient access time interval and characterize its contribution to the total emergency medical services (EMS) response time interval; to compare the patient access time intervals for patients located three or more floors above ground with those less than three floors above or below ground, and specifically in the apartment subgroup; and to identify barriers that significantly impede EMS access to patients in high-rise apartments. An observational study of all patients treated by an emergency medical technician-paramedic (EMT-P) crew was conducted using a trained independent observer to collect time intervals and identify potential barriers to access. Of 118 observed calls, 25 (21%) originated from patients three or more floors above ground. The overall median and 90th percentile (95% confidence interval) patient access time intervals were 1.61 (1.27, 1.91) and 3.47 (3.08, 4.05) minutes, respectively. The median interval was 2.73 (2.22, 3.03) minutes among calls from patients located three or more stories above ground compared with 1.25 (1.07, 1.55) minutes among those at lower levels. The patient access time interval represented 23.5% of the total EMS response time interval among calls originating less than three floors above or below ground and 32.2% of those located three or more stories above ground. The most frequently encountered barriers to access included security code entry requirements, lack of directional signs, and inability to fit the stretcher into the elevator. The patient access time interval is significantly long and represents a substantial component of the total EMS response time interval, especially among ambulance calls originating three or more floors above ground. A number of barriers appear to contribute to delayed paramedic access.

  1. Substorm occurrence rates, substorm recurrence times, and solar wind structure

    NASA Astrophysics Data System (ADS)

    Borovsky, Joseph E.; Yakymenko, Kateryna

    2017-03-01

    Two collections of substorms are created: 28,464 substorms identified with jumps in the SuperMAG AL index in the years 1979-2015 and 16,025 substorms identified with electron injections into geosynchronous orbit in the years 1989-2007. Substorm occurrence rates and substorm recurrence-time distributions are examined as functions of the phase of the solar cycle, the season of the year, the Russell-McPherron favorability, the type of solar wind plasma at Earth, the geomagnetic-activity level, and as functions of various solar and solar wind properties. Three populations of substorm occurrences are seen: (1) quasiperiodically occurring substorms with recurrence times (waiting times) of 2-4 h, (2) randomly occurring substorms with recurrence times of about 6-15 h, and (3) long intervals wherein no substorms occur. A working model is suggested wherein (1) the period of periodic substorms is set by the magnetosphere with variations in the actual recurrence times caused by the need for a solar wind driving interval to occur, (2) the mesoscale structure of the solar wind magnetic field triggers the occurrence of the random substorms, and (3) the large-scale structure of the solar wind plasma is responsible for the long intervals wherein no substorms occur. Statistically, the recurrence period of periodically occurring substorms is slightly shorter when the ram pressure of the solar wind is high, when the magnetic field strength of the solar wind is strong, when the Mach number of the solar wind is low, and when the polar-cap potential saturation parameter is high.

  2. Modulation of human time processing by subthalamic deep brain stimulation.

    PubMed

    Wojtecki, Lars; Elben, Saskia; Timmermann, Lars; Reck, Christiane; Maarouf, Mohammad; Jörgens, Silke; Ploner, Markus; Südmeyer, Martin; Groiss, Stefan Jun; Sturm, Volker; Niedeggen, Michael; Schnitzler, Alfons

    2011-01-01

Timing in the range of seconds, referred to as interval timing, is crucial for cognitive operations and conscious time processing. According to recent models of interval timing, basal ganglia (BG) oscillatory loops are involved in time interval recognition. Parkinson's disease (PD) is a typical disease of the basal ganglia that shows distortions in interval timing. Deep brain stimulation (DBS) of the subthalamic nucleus (STN) is a powerful treatment of PD which modulates motor and cognitive functions depending on stimulation frequency by affecting subcortical-cortical oscillatory loops. Thus, for the understanding of BG involvement in interval timing it is of interest whether STN-DBS can modulate timing in a frequency-dependent manner by interference with oscillatory time recognition processes. We examined production and reproduction of 5- and 15-second intervals and millisecond timing in a double-blind, randomised, within-subject repeated-measures design of 12 PD patients applying no, 10-Hz- and ≥ 130-Hz-STN-DBS compared to healthy controls. We found under(re-)production of the 15-second interval and a significant enhancement of this under(re-)production by 10-Hz stimulation compared to no stimulation, ≥ 130-Hz-STN-DBS and controls. Millisecond timing was not affected. We provide first evidence for a frequency-specific modulatory effect of STN-DBS on interval timing. Our results corroborate the involvement of the BG in general and of the STN in particular in the cognitive representation of time intervals in the range of multiple seconds.

  3. Modulation of Human Time Processing by Subthalamic Deep Brain Stimulation

    PubMed Central

    Timmermann, Lars; Reck, Christiane; Maarouf, Mohammad; Jörgens, Silke; Ploner, Markus; Südmeyer, Martin; Groiss, Stefan Jun; Sturm, Volker; Niedeggen, Michael; Schnitzler, Alfons

    2011-01-01

Timing in the range of seconds, referred to as interval timing, is crucial for cognitive operations and conscious time processing. According to recent models of interval timing, basal ganglia (BG) oscillatory loops are involved in time interval recognition. Parkinson's disease (PD) is a typical disease of the basal ganglia that shows distortions in interval timing. Deep brain stimulation (DBS) of the subthalamic nucleus (STN) is a powerful treatment of PD which modulates motor and cognitive functions depending on stimulation frequency by affecting subcortical-cortical oscillatory loops. Thus, for the understanding of BG involvement in interval timing it is of interest whether STN-DBS can modulate timing in a frequency-dependent manner by interference with oscillatory time recognition processes. We examined production and reproduction of 5- and 15-second intervals and millisecond timing in a double-blind, randomised, within-subject repeated-measures design of 12 PD patients applying no, 10-Hz- and ≥130-Hz-STN-DBS compared to healthy controls. We found under(re-)production of the 15-second interval and a significant enhancement of this under(re-)production by 10-Hz stimulation compared to no stimulation, ≥130-Hz-STN-DBS and controls. Millisecond timing was not affected. We provide first evidence for a frequency-specific modulatory effect of STN-DBS on interval timing. Our results corroborate the involvement of the BG in general and of the STN in particular in the cognitive representation of time intervals in the range of multiple seconds. PMID:21931767

  4. Estimated preejection period (PEP) based on the detection of the R-wave and dZ/dt-min peaks in ECG and ICG

    NASA Astrophysics Data System (ADS)

    van Lien, René; Schutte, Nienke M.; Meijer, Jan H.; de Geus, Eco J. C.

    2013-04-01

    The validity of estimating the PEP from a fixed value for the Q-wave onset to the R-wave peak (QR) interval and from the R-wave peak to the dZ/dt-min peak (ISTI) interval is evaluated. Ninety-one subjects participated in a laboratory experiment in which a variety of physical and mental stressors were presented and 31 further subjects participated in a sequence of structured ambulatory activities in which large variation in posture and physical activity was induced. PEP, QR interval, and ISTI were scored. Across the diverse laboratory and ambulatory conditions the QR interval could be approximated by a fixed interval of 40 ms but 95% confidence intervals were large (25 to 54 ms). Multilevel analysis showed that 79% to 81% of the within and between-subject variation in the RB interval could be predicted by the ISTI. However, the optimal intercept and slope values varied significantly across subjects and study setting. Bland-Altman plots revealed a large discrepancy between the estimated PEP and the actual PEP based on the Q-wave onset and B-point. It is concluded that the estimated PEP can be a useful tool but cannot replace the actual PEP to index cardiac sympathetic control.

  5. An interval-parameter mixed integer multi-objective programming for environment-oriented evacuation management

    NASA Astrophysics Data System (ADS)

    Wu, C. Z.; Huang, G. H.; Yan, X. P.; Cai, Y. P.; Li, Y. P.

    2010-05-01

    Large crowds are increasingly common at political, social, economic, cultural and sports events in urban areas. This has led to attention on the management of evacuations under such situations. In this study, we optimise an approximation method for vehicle allocation and route planning in case of an evacuation. This method, based on an interval-parameter multi-objective optimisation model, has potential for use in a flexible decision support system for evacuation management. The modeling solutions are obtained by sequentially solving two sub-models corresponding to lower- and upper-bounds for the desired objective function value. The interval solutions are feasible and stable in the given decision space, and this may reduce the negative effects of uncertainty, thereby improving decision makers' estimates under different conditions. The resulting model can be used for a systematic analysis of the complex relationships among evacuation time, cost and environmental considerations. The results of a case study used to validate the proposed model show that the model does generate useful solutions for planning evacuation management and practices. Furthermore, these results are useful for evacuation planners, not only in making vehicle allocation decisions but also for providing insight into the tradeoffs among evacuation time, environmental considerations and economic objectives.

  6. Predecessors of the giant 1960 Chile earthquake.

    PubMed

    Cisternas, Marco; Atwater, Brian F; Torrejón, Fernando; Sawai, Yuki; Machuca, Gonzalo; Lagos, Marcelo; Eipert, Annaliese; Youlton, Cristián; Salgado, Ignacio; Kamataki, Takanobu; Shishikura, Masanobu; Rajendran, C P; Malik, Javed K; Rizal, Yan; Husni, Muhammad

    2005-09-15

    It is commonly thought that the longer the time since last earthquake, the larger the next earthquake's slip will be. But this logical predictor of earthquake size, unsuccessful for large earthquakes on a strike-slip fault, fails also with the giant 1960 Chile earthquake of magnitude 9.5 (ref. 3). Although the time since the preceding earthquake spanned 123 years (refs 4, 5), the estimated slip in 1960, which occurred on a fault between the Nazca and South American tectonic plates, equalled 250-350 years' worth of the plate motion. Thus the average interval between such giant earthquakes on this fault should span several centuries. Here we present evidence that such long intervals were indeed typical of the last two millennia. We use buried soils and sand layers as records of tectonic subsidence and tsunami inundation at an estuary midway along the 1960 rupture. In these records, the 1960 earthquake ended a recurrence interval that had begun almost four centuries before, with an earthquake documented by Spanish conquistadors in 1575. Two later earthquakes, in 1737 and 1837, produced little if any subsidence or tsunami at the estuary and they therefore probably left the fault partly loaded with accumulated plate motion that the 1960 earthquake then expended.

  7. Predictors of Functional Dependence Despite Successful Revascularization in Large-Vessel Occlusion Strokes

    PubMed Central

    Shi, Zhong-Song; Liebeskind, David S.; Xiang, Bin; Ge, Sijian Grace; Feng, Lei; Albers, Gregory W.; Budzik, Ronald; Devlin, Thomas; Gupta, Rishi; Jansen, Olav; Jovin, Tudor G.; Killer-Oberpfalzer, Monika; Lutsep, Helmi L.; Macho, Juan; Nogueira, Raul G.; Rymer, Marilyn; Smith, Wade S.; Wahlgren, Nils; Duckwiler, Gary R.

    2014-01-01

    Background and Purpose High revascularization rates in large-vessel occlusion strokes treated by mechanical thrombectomy are not always associated with good clinical outcomes. We evaluated predictors of functional dependence despite successful revascularization among patients with acute ischemic stroke treated with thrombectomy. Methods We analyzed the pooled data from the Multi Mechanical Embolus Removal in Cerebral Ischemia (MERCI), Thrombectomy Revascularization of Large Vessel Occlusions in Acute Ischemic Stroke (TREVO), and TREVO 2 trials. Successful revascularization was defined as thrombolysis in cerebral infarction score 2b or 3. Functional dependence was defined as a score of 3 to 6 on the modified Rankin Scale at 3 months. We assessed relationship of demographic, clinical, angiographic characteristics, and hemorrhage with functional dependence despite successful revascularization. Results Two hundred and twenty-eight patients with successful revascularization had clinical outcome follow-up. The rates of functional dependence with endovascular success were 48.6% for Trevo thrombectomy and 58.0% for Merci thrombectomy. Age (odds ratio, 1.04; 95% confidence interval, 1.02–1.06 per 1-year increase), National Institutes of Health Stroke Scale score (odds ratio, 1.08; 95% confidence interval, 1.02–1.15 per 1-point increase), and symptom onset to endovascular treatment time (odds ratio, 1.11; 95% confidence interval, 1.01–1.22 per 30-minute delay) were predictors of functional dependence despite successful revascularization. Symptom onset to reperfusion time beyond 5 hours was associated with functional dependence. All subjects with symptomatic intracranial hemorrhage had functional dependence. Conclusions One half of patients with successful mechanical thrombectomy do not have good outcomes. Age, severe neurological deficits, and delayed endovascular treatment were associated with functional dependence despite successful revascularization. 
Our data support efforts to minimize delays to endovascular therapy in patients with acute ischemic stroke to improve outcomes. PMID:24876082

  8. Department of Defense Precise Time and Time Interval program improvement plan

    NASA Technical Reports Server (NTRS)

    Bowser, J. R.

    1981-01-01

    The United States Naval Observatory is responsible for ensuring uniformity in precise time and time interval operations including measurements, the establishment of overall DOD requirements for time and time interval, and the accomplishment of objectives requiring precise time and time interval with minimum cost. An overview of the objectives, the approach to the problem, the schedule, and a status report, including significant findings relative to organizational relationships, current directives, principal PTTI users, and future requirements as currently identified by the users are presented.

  9. Use of microcomputer in mapping depth of stratigraphic horizons in National Petroleum Reserve in Alaska

    USGS Publications Warehouse

    Payne, Thomas G.

    1982-01-01

    REGIONAL MAPPER is a menu-driven system in the BASIC language for computing and plotting (1) time, depth, and average velocity to geologic horizons, (2) interval time, thickness, and interval velocity of stratigraphic intervals, and (3) subcropping and onlapping intervals at unconformities. The system consists of three programs: FILER, TRAVERSER, and PLOTTER. A control point is a shot point with velocity analysis or a shot point at or near a well with a velocity check-shot survey. Reflection time to and code number of seismic horizons are filed by digitizing tablet from record sections. TRAVERSER starts at a point of geologic control and, in traversing to another, parallels seismic events, records loss of horizons by onlap and truncation, and stores reflection time for geologic horizons at traversed shot points. TRAVERSER is basically a phantoming procedure. Permafrost thickness and velocity variations, buried canyons with low-velocity fill, and error in seismically derived velocity cause velocity anomalies that complicate depth mapping. Two depths to the top of the pebble shale are computed for each control point. One depth, designated Zs, is based on seismically derived velocity. The other (Zw) is based on interval velocity interpolated linearly between wells and multiplied by interval time (isochron) to give interval thickness. Zw is computed for all geologic horizons by downward summation of interval thickness. Unknown true depth (Z) to the pebble shale may be expressed as Z = Zs + es and Z = Zw + ew, where the e terms represent error. Equating the two expressions gives the depth difference D = Zs - Zw = ew - es. A plot of D for the top of the pebble shale is readily contourable, but smoothing is required to produce a reasonably simple surface. Seismically derived velocity used in computing Zs includes the effect of velocity anomalies but is subject to some large, randomly distributed errors resulting in depth errors (es).
    Well-derived velocity used in computing Zw does not include the effect of velocity anomalies, but the error (ew) should reflect these anomalies and should be contourable (non-random). The D surface as contoured with smoothing is assumed to represent ew, that is, the depth effect of variations in permafrost thickness and velocity and buried canyon depth. Estimated depth (Zest) to each geologic horizon is the sum of Zw for that horizon and the constant ew as contoured for the pebble shale, which is the first highly continuous seismic horizon below the zone of anomalous velocity. Results of this 'depthing' procedure are compared with those of Tetra Tech, Inc., the subcontractor responsible for geologic and geophysical interpretation and mapping.
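    The depth-reconciliation step described above can be sketched in a few lines of Python (the original REGIONAL MAPPER programs are in BASIC; the function names and numeric values below are illustrative only):

```python
# Sketch of the depth reconciliation described above (illustrative
# names and values; the original system is in BASIC).
#   Z = Zs + es  (seismic depth: anomalies included, errors random)
#   Z = Zw + ew  (well-derived depth: error contourable, non-random)
# => D = Zs - Zw = ew - es; smoothing the D surface approximates ew.

def depth_difference(z_s, z_w):
    """D at a control point, to be contoured and smoothed."""
    return z_s - z_w

def estimated_depth(z_w_horizon, e_w_smoothed):
    """Zest for any horizon: Zw for that horizon plus the smoothed
    D (= ew) value contoured at the pebble shale."""
    return z_w_horizon + e_w_smoothed

print(depth_difference(2435.0, 2400.0))  # -> 35.0
print(estimated_depth(2400.0, 35.0))     # -> 2435.0
```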

  10. A high-order time-parallel scheme for solving wave propagation problems via the direct construction of an approximate time-evolution operator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haut, T. S.; Babb, T.; Martinsson, P. G.

    2015-06-16

    Our manuscript demonstrates a technique for efficiently solving the classical wave equation, the shallow water equations, and, more generally, equations of the form ∂u/∂t = Lu, where L is a skew-Hermitian differential operator. The idea is to explicitly construct an approximation to the time-evolution operator exp(τL) for a relatively large time-step τ. Recently developed techniques for approximating oscillatory scalar functions by rational functions, and accelerated algorithms for computing functions of discretized differential operators, are exploited. Principal advantages of the proposed method include stability even for large time-steps, the possibility to parallelize in time over many characteristic wavelengths, and large speed-ups over existing methods in situations where simulation over long times is required. Numerical examples involving the 2D rotating shallow water equations and the 2D wave equation in an inhomogeneous medium are presented, and the method is compared to the fourth-order Runge-Kutta (RK4) method and to the use of Chebyshev polynomials. The new method achieved high accuracy over long time intervals, at speeds orders of magnitude faster than both RK4 and the use of Chebyshev polynomials.
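    As a minimal illustration of the time-stepping idea (not the paper's rational-approximation algorithm): for du/dt = Lu with skew-Hermitian L, the exact update is u(t + τ) = exp(τL) u(t), and the evolution operator can be built once and reused for every large step. The sketch below uses the 2×2 skew-symmetric generator L = [[0, -w], [w, 0]], for which exp(τL) is a plane rotation; all names and values are illustrative:

```python
import math

def evolution_operator(tau, w):
    """exp(tau * L) for L = [[0, -w], [w, 0]]: a rotation by w * tau."""
    c, s = math.cos(w * tau), math.sin(w * tau)
    return ((c, -s), (s, c))

def step(P, u):
    """One time step: apply the precomputed operator to the state."""
    return (P[0][0] * u[0] + P[0][1] * u[1],
            P[1][0] * u[0] + P[1][1] * u[1])

w, tau = 2.0, 0.7          # a comparatively large time step
P = evolution_operator(tau, w)   # built once...
u = (1.0, 0.0)
for _ in range(100):       # ...then reused for 100 large steps
    u = step(P, u)

# A skew-Hermitian generator gives exact norm conservation (no
# amplitude drift), unlike explicit schemes such as RK4.
print(round(math.hypot(*u), 6))  # -> 1.0
```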

  11. The Productive Ward Program™: A Two-Year Implementation Impact Review Using a Longitudinal Multilevel Study.

    PubMed

    Van Bogaert, Peter; Van Heusden, Danny; Verspuy, Martijn; Wouters, Kristien; Slootmans, Stijn; Van der Straeten, Johnny; Van Aken, Paul; White, Mark

    2017-03-01

    Aim To investigate the impact of the quality improvement program "Productive Ward - Releasing Time to Care™" using nurses' and midwives' reports of practice environment, burnout, quality of care, and job outcomes, as well as workload, decision latitude, social capital, and engagement. Background Despite the requirement for health systems to improve quality and the proliferation of quality improvement programs designed for healthcare, the empirical evidence supporting large-scale quality improvement programs impacting patient satisfaction, staff engagement, and quality care remains sparse. Method A longitudinal study was performed in a large 600-bed acute care university hospital at two measurement intervals for nurse practice environment, burnout, quality of care, and job outcomes, and three measurement intervals for workload, decision latitude, social capital, and engagement, between June 2011 and November 2014. Results Positive results were identified in practice environment, decision latitude, and social capital. Less favorable results were identified in relation to perceived workload, emotional exhaustion, and vigor. Moreover, measures of quality of care and job satisfaction were reported less favorably. Conclusion This study highlights the need to further understand how to implement large-scale quality improvement programs so that they integrate with daily practices and promote "quality improvement" as "business as usual."

  12. Associations between motor timing, music practice, and intelligence studied in a large sample of twins.

    PubMed

    Ullén, Fredrik; Mosing, Miriam A; Madison, Guy

    2015-03-01

    Music performance depends critically on precise processing of time. A common model behavior in studies of motor timing is isochronous serial interval production (ISIP), that is, hand/finger movements with a regular beat. ISIP accuracy is related to both music practice and intelligence. Here we present a study of these associations in a large twin cohort, demonstrating that the effects of music practice and intelligence on motor timing are additive, with no significant multiplicative (interaction) effect. Furthermore, the association between music practice and motor timing was analyzed with the use of a co-twin control design using intrapair differences. These analyses revealed that the phenotypic association disappeared when all genetic and common environmental factors were controlled. This suggests that the observed association may not reflect a causal effect of music practice on ISIP performance but rather reflect common influences (e.g., genetic effects) on both outcomes. The relevance of these findings for models of practice and expert performance is discussed. © 2014 New York Academy of Sciences.

  13. Testing 8000 years of submarine paleoseismicity record offshore western Algeria : First evidence for irregular seismic cycles

    NASA Astrophysics Data System (ADS)

    Ratzov, G.; Cattaneo, A.; Babonneau, N.; Yelles, K.; Bracene, R.; Deverchere, J.

    2012-12-01

    It is commonly assumed that stress buildup along a given fault is proportional to the time elapsed since the previous earthquake. Although the resulting "seismic gap" hypothesis works well for moderate-magnitude earthquakes (Mw 4-5), large events (Mw > 6) are hardly predictable and show great variation in recurrence intervals. Models based on stress transfer and interactions between faults argue that an earthquake may promote or delay the occurrence of the next earthquakes on adjacent faults by increasing or lowering the level of static stress. The Algerian margin is a Cenozoic passive margin presently inverted within the slow convergence between the Africa and Eurasia plates (~3-6 mm/yr). The western margin experienced two large earthquakes, in 1954 (Orléansville, M 6.7) and 1980 (El Asnam, M 7.3), supporting an interaction between the two faults. To obtain meaningful statistics of large-earthquake recurrence intervals over numerous seismic cycles, we conducted a submarine paleoseismicity investigation based on turbidite chronostratigraphy. As evidenced on the Cascadia subduction zone, synchronous turbidites accumulated over a large area and originating from independent sources are likely triggered by an earthquake. To test the method on a slowly convergent margin, we analyze turbidites from three sediment cores collected during the Maradja (2003) and Prisme (2007) cruises off the 1954-1980 source areas. We use X-ray radioscopy, XRF major-element counting, magnetic susceptibility, and grain-size distribution to accurately discriminate turbidites from hemipelagites. We date turbidites by calculating hemipelagic sedimentation rates obtained with radiocarbon ages and interpolating the rates between turbidites. Finally, the age of events is compared with the only paleoseismic study available on land (El Asnam fault). Fourteen possible seismic events are identified by the counting and correlation of turbidites over the last 8 ka.
    Most events are correlated with the paleoseismic record of the El Asnam fault, but uncorrelated events suggest that other faults were active. Only the 1954 event (not the 1980 one) triggered a turbidity current, implying that the sediment buffer on the continental shelf could not be reloaded within 26 years, thus arguing for a minimum time resolution of our method. The new paleoseismic catalog shows a recurrence interval of 300-700 years for most events, but also a long interval of >1200 years without any major earthquake. This result suggests that the level of static stress may have drastically dropped as a result of three main events occurring within the 800 years prior to the quiescence period.

  14. Myocardial performance index in female rats with myocardial infarction: relationship with ventricular function parameters by Doppler echocardiography.

    PubMed

    Cury, Alexandre Ferreira; Bonilha, Andre; Saraiva, Roberto; Campos, Orlando; Carvalho, Antonio Carlos C; De Paola, Angelo Amato V; Fischer, Claudio; Tucci, Paulo Ferreira; Moises, Valdir Ambrosio

    2005-05-01

    The aim of the study was to analyze the myocardial performance index (MPI), its relationship with the standard variables of systolic and diastolic function, and the influence of time intervals in an experimental model of female rats with myocardial infarction (MI). Forty-one Wistar female rats underwent surgery to induce MI. Six weeks later, Doppler echocardiography was performed to assess infarct size (IS, %), fractional area change (FAC, %), ejection fraction by the biplane Simpson method (EF), the E/A ratio of mitral inflow, and the MPI and its time intervals: isovolumetric contraction time (IVCT, ms), isovolumetric relaxation time (IVRT, ms), and ejection time (ET, ms); MPI = (IVCT + IVRT)/ET. EF and FAC were progressively lower in rats with small, medium, and large-size MI (P < .001). The E/A ratio was higher only in rats with large-size MI (6.25 +/- 2.69; P < .001). MPI was not different between control rats and small-size MI (0.37 +/- 0.03 vs 0.34 +/- 0.06, P = .87), but differed between large and medium-size MI (0.69 +/- 0.08 vs 0.47 +/- 0.07; P < .001) and between these two compared to small-size MI. MPI correlated with IS (r = 0.85; P < .001), EF (r = -0.86; P < .001), FAC (r = -0.77; P < .001), and the E/A ratio (r = 0.77; P < .001, non-linear). IVCT was longer in large-size MI compared to medium-size MI (31.87 +/- 7.99 vs 15.92 +/- 5.88; P < .001) and correlated with IS (r = 0.85; P < .001) and MPI (r = 0.92; P < .001). ET was shorter only in large-size MI (81.07 +/- 7.23; P < .001) and correlated with IS (r = -0.70; P < .001) and MPI (r = -0.85; P < .001). IVRT was shorter only in large-size compared to medium-size MI (24.40 +/- 5.38 vs 29.69 +/- 5.92; P < .037), had a borderline correlation with MPI (r = 0.34; P = .0534), and no correlation with IS (r = 0.26; P = .144). The MPI increased with IS, correlated inversely with systolic function parameters, and had a non-linear relationship with diastolic function.
    These changes were due to an increase of IVCT and a decrease of ET, without significant influence of IVRT.
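    The index itself is a simple ratio of Doppler time intervals. A minimal sketch, using hypothetical interval values (not the study's data) chosen so the ratio lands near the control-group MPI of about 0.37:

```python
# MPI from Doppler time intervals, all in ms (hypothetical values).
def myocardial_performance_index(ivct_ms, ivrt_ms, et_ms):
    """MPI = (IVCT + IVRT) / ET."""
    return (ivct_ms + ivrt_ms) / et_ms

print(round(myocardial_performance_index(16.0, 24.0, 108.0), 2))  # -> 0.37
```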

  15. Probing failure susceptibilities of earthquake faults using small-quake tidal correlations.

    PubMed

    Brinkman, Braden A W; LeBlanc, Michael; Ben-Zion, Yehuda; Uhl, Jonathan T; Dahmen, Karin A

    2015-01-27

    Mitigating the devastating economic and humanitarian impact of large earthquakes requires signals for forecasting seismic events. Daily tide stresses were previously thought to be insufficient for use as such a signal. Recently, however, they have been found to correlate significantly with small earthquakes, just before large earthquakes occur. Here we present a simple earthquake model to investigate whether correlations between daily tidal stresses and small earthquakes provide information about the likelihood of impending large earthquakes. The model predicts that intervals of significant correlations between small earthquakes and ongoing low-amplitude periodic stresses indicate increased fault susceptibility to large earthquake generation. The results agree with the recent observations of large earthquakes preceded by time periods of significant correlations between smaller events and daily tide stresses. We anticipate that incorporating experimentally determined parameters and fault-specific details into the model may provide new tools for extracting improved probabilities of impending large earthquakes.

  16. In situ time-series measurements of subseafloor sediment properties

    USGS Publications Warehouse

    Wheatcroft, R.A.; Stevens, A.W.; Johnson, R.V.

    2007-01-01

    The capabilities and diversity of subsurface sediment sensors lag significantly behind what is available for the water column, thereby limiting progress in understanding time-dependent seabed exchange and high-frequency acoustics. To help redress this imbalance, a new instrument, the autonomous sediment profiler (ASP), is described herein. The ASP consists of a four-electrode, Wenner-type resistivity probe and a thermistor that log data at 0.1-cm vertical intervals over a 58-cm vertical profile. To avoid resampling the same spot on the seafloor, the probes are moved horizontally within a 20 × 100 cm area in one of three preselected patterns. Memory and power capacities permit sampling at hourly intervals for up to 3-mo duration. The system was tested in a laboratory tank and shown to be able to resolve high-frequency sediment consolidation, as well as changes in sediment roughness. In a field test off the southern coast of France, the system collected resistivity and temperature data at hourly intervals for 16 d. Coupled with environmental data collected on waves, currents, and suspended sediment, the ASP is shown to be useful for understanding the temporal evolution of subsurface sediment porosity, although no large depositional or erosional events occurred during the deployment. Following a rapid decrease in bottom-water temperature, the evolution of the subsurface temperature field was consistent with the 1-D thermal diffusion equation coupled with advection in the upper 3-4 cm. Collectively, the laboratory and field tests yielded promising results on time-dependent seabed change.

  17. Spectral analysis of finite-time correlation matrices near equilibrium phase transitions

    NASA Astrophysics Data System (ADS)

    Vinayak; Prosen, T.; Buča, B.; Seligman, T. H.

    2014-10-01

    We study spectral densities for systems on lattices which, at a phase transition, display power-law spatial correlations. Constructing the spatial correlation matrix, we prove that its eigenvalue density shows a power law that can be derived from the spatial correlations. In practice, time series are short in the sense that they are either not stationary or not available over long time intervals; also, we usually do not have time series available for all variables. We make numerical simulations on a two-dimensional Ising model with the usual Metropolis algorithm as the time evolution. Using all spins on a grid with periodic boundary conditions, we find a power law that is, for large grids, compatible with the analytic result. We still find a power law even if we choose a fairly small subset of grid points at random, although the exponents of the power laws are smaller under such circumstances. For very short time series leading to singular correlation matrices, we use a recently developed technique to lift the degeneracy at zero in the spectrum and find a significant signature of critical behavior even in this case, as compared to high-temperature results, which tend to those of random matrix models.

  18. Ratio-based lengths of intervals to improve fuzzy time series forecasting.

    PubMed

    Huarng, Kunhuang; Yu, Tiffany Hui-Kuang

    2006-04-01

    The objective of this study is to explore ways of determining the useful lengths of intervals in fuzzy time series. It is suggested that ratios, instead of equal lengths of intervals, can more properly represent the intervals among observations. Ratio-based lengths of intervals are, therefore, proposed to improve fuzzy time series forecasting. Algebraic growth data, such as enrollments and the stock index, and exponential growth data, such as inventory demand, are chosen as the forecasting targets, before forecasting based on the various lengths of intervals is performed. Furthermore, sensitivity analyses are also carried out for various percentiles. The ratio-based lengths of intervals are found to outperform the effective lengths of intervals, as well as the arbitrary ones in regard to the different statistical measures. The empirical analysis suggests that the ratio-based lengths of intervals can also be used to improve fuzzy time series forecasting.
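    As a simplified sketch of the idea (not the authors' exact procedure), interval boundaries can be grown geometrically so that each interval's length is proportional to the magnitude of the observations, in contrast to equal-length partitioning; function and parameter names here are illustrative:

```python
# Ratio-based interval boundaries (simplified sketch): each boundary
# grows by a fixed ratio, so interval length scales with magnitude.
def ratio_based_boundaries(lo, hi, ratio):
    """Boundaries lo, lo*ratio, lo*ratio^2, ..., capped at hi."""
    bounds = [lo]
    while bounds[-1] * ratio < hi:
        bounds.append(bounds[-1] * ratio)
    bounds.append(hi)
    return bounds

# Example: partition the range 100..1000 with ratio 2.
print(ratio_based_boundaries(100.0, 1000.0, 2.0))
# -> [100.0, 200.0, 400.0, 800.0, 1000.0]
```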

  19. Importance of the Time Interval between Bowel Preparation and Colonoscopy in Determining the Quality of Bowel Preparation for Full-Dose Polyethylene Glycol Preparation

    PubMed Central

    Kim, Tae Kyung; Kim, Hyung Wook; Kim, Su Jin; Ha, Jong Kun; Jang, Hyung Ha; Hong, Young Mi; Park, Su Bum; Choi, Cheol Woong; Kang, Dae Hwan

    2014-01-01

    Background/Aims The quality of bowel preparation (QBP) is an important factor in performing a successful colonoscopy. Several factors influencing QBP have been reported; however, some, such as the optimal preparation-to-colonoscopy time interval, remain controversial. This study aimed to determine the factors influencing QBP and the optimal time interval for full-dose polyethylene glycol (PEG) preparation. Methods A total of 165 patients who underwent colonoscopy from June 2012 to August 2012 were prospectively evaluated. The QBP was assessed using the Ottawa Bowel Preparation Scale (Ottawa) score, and several factors influencing the QBP were analyzed. Results Colonoscopies with a time interval of 5 to 6 hours had the best Ottawa scores in all parts of the colon. Patients with time intervals of 6 hours or less had better QBP than those with time intervals of more than 6 hours (p=0.046). In the multivariate analysis, the time interval (odds ratio, 1.897; 95% confidence interval, 1.006 to 3.577; p=0.048) was the only significant contributor to a satisfactory bowel preparation. Conclusions The optimal time interval was 5 to 6 hours for the full-dose PEG method, and the time interval was the only significant contributor to a satisfactory bowel preparation. PMID:25368750

  20. A modified Wald interval for the area under the ROC curve (AUC) in diagnostic case-control studies

    PubMed Central

    2014-01-01

    Background The area under the receiver operating characteristic (ROC) curve, referred to as the AUC, is an appropriate measure for describing the overall accuracy of a diagnostic test or a biomarker in early phase trials without having to choose a threshold. There are many approaches for estimating the confidence interval for the AUC. However, all are relatively complicated to implement. Furthermore, many approaches perform poorly for large AUC values or small sample sizes. Methods The AUC is actually a probability. So we propose a modified Wald interval for a single proportion, which can be calculated on a pocket calculator. We performed a simulation study to compare this modified Wald interval (without and with continuity correction) with other intervals regarding coverage probability and statistical power. Results The main result is that the proposed modified Wald intervals maintain and exploit the type I error much better than the intervals of Agresti-Coull, Wilson, and Clopper-Pearson. The interval suggested by Bamber, the Mann-Whitney interval without transformation and also the interval of the binormal AUC are very liberal. For small sample sizes the Wald interval with continuity has a comparable coverage probability as the LT interval and higher power. For large sample sizes the results of the LT interval and of the Wald interval without continuity correction are comparable. Conclusions If individual patient data is not available, but only the estimated AUC and the total sample size, the modified Wald intervals can be recommended as confidence intervals for the AUC. For small sample sizes the continuity correction should be used. PMID:24552686

  1. A modified Wald interval for the area under the ROC curve (AUC) in diagnostic case-control studies.

    PubMed

    Kottas, Martina; Kuss, Oliver; Zapf, Antonia

    2014-02-19

    The area under the receiver operating characteristic (ROC) curve, referred to as the AUC, is an appropriate measure for describing the overall accuracy of a diagnostic test or a biomarker in early phase trials without having to choose a threshold. There are many approaches for estimating the confidence interval for the AUC. However, all are relatively complicated to implement. Furthermore, many approaches perform poorly for large AUC values or small sample sizes. The AUC is actually a probability. So we propose a modified Wald interval for a single proportion, which can be calculated on a pocket calculator. We performed a simulation study to compare this modified Wald interval (without and with continuity correction) with other intervals regarding coverage probability and statistical power. The main result is that the proposed modified Wald intervals maintain and exploit the type I error much better than the intervals of Agresti-Coull, Wilson, and Clopper-Pearson. The interval suggested by Bamber, the Mann-Whitney interval without transformation and also the interval of the binormal AUC are very liberal. For small sample sizes the Wald interval with continuity has a comparable coverage probability as the LT interval and higher power. For large sample sizes the results of the LT interval and of the Wald interval without continuity correction are comparable. If individual patient data is not available, but only the estimated AUC and the total sample size, the modified Wald intervals can be recommended as confidence intervals for the AUC. For small sample sizes the continuity correction should be used.
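    A plain Wald interval for a proportion, with the optional continuity correction mentioned above, can be sketched as follows and applied to an estimated AUC. This is the textbook Wald form, not the paper's exact modification; the clipping to [0, 1] and the example numbers are assumptions:

```python
import math

def wald_interval(p_hat, n, z=1.96, continuity=False):
    """Wald interval p_hat +/- z*sqrt(p_hat*(1-p_hat)/n), with an
    optional 0.5/n continuity correction, clipped to [0, 1]."""
    se = math.sqrt(p_hat * (1.0 - p_hat) / n)
    cc = 0.5 / n if continuity else 0.0
    lo = max(0.0, p_hat - z * se - cc)
    hi = min(1.0, p_hat + z * se + cc)
    return lo, hi

# Example: estimated AUC 0.85 from a total sample size of n = 50.
lo, hi = wald_interval(0.85, 50)
print(round(lo, 3), round(hi, 3))  # -> 0.751 0.949
```

Because the AUC is treated as a probability, only the point estimate and the total sample size are needed, which is what makes the interval computable "on a pocket calculator."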

  2. Interresponse Time Structures in Variable-Ratio and Variable-Interval Schedules

    ERIC Educational Resources Information Center

    Bowers, Matthew T.; Hill, Jade; Palya, William L.

    2008-01-01

    The interresponse-time structures of pigeon key pecking were examined under variable-ratio, variable-interval, and variable-interval plus linear feedback schedules. Whereas the variable-ratio and variable-interval plus linear feedback schedules generally resulted in a distinct group of short interresponse times and a broad distribution of longer…

  3. Accelerated plate tectonics.

    PubMed

    Anderson, D L

    1975-03-21

    The concept of a stressed elastic lithospheric plate riding on a viscous asthenosphere is used to calculate the recurrence interval of great earthquakes at convergent plate boundaries, the separation of decoupling and lithospheric earthquakes, and the migration pattern of large earthquakes along an arc. It is proposed that plate motions accelerate after great decoupling earthquakes and that most of the observed plate motions occur during short periods of time, separated by periods of relative quiescence.

  4. Comparison between volatility return intervals of the S&P 500 index and two common models

    NASA Astrophysics Data System (ADS)

    Vodenska-Chitkushev, I.; Wang, F. Z.; Weber, P.; Yamasaki, K.; Havlin, S.; Stanley, H. E.

    2008-01-01

    We analyze the S&P 500 index data for the 13-year period, from January 1, 1984 to December 31, 1996, with one data point every 10 min. For this database, we study the distribution and clustering of volatility return intervals, which are defined as the time intervals between successive volatilities above a certain threshold q. We find that the long memory in the volatility leads to a clustering of above-median as well as below-median return intervals. In addition, it turns out that the short return intervals form larger clusters compared to the long return intervals. When comparing the empirical results to the ARMA-FIGARCH and fBm models for volatility, we find that the fBm model predicts scaling better than the ARMA-FIGARCH model, which is consistent with the argument that both ARMA-FIGARCH and fBm capture the long-term dependence in return intervals to a certain extent, but only fBm accounts for the scaling. We perform the Student's t-test to compare the empirical data with the shuffled records, ARMA-FIGARCH and fBm. We analyze separately the clusters of above-median return intervals and the clusters of below-median return intervals for different thresholds q. We find that the empirical data are statistically different from the shuffled data for all thresholds q. Our results also suggest that the ARMA-FIGARCH model is statistically different from the S&P 500 for intermediate q for both above-median and below-median clusters, while fBm is statistically different from S&P 500 for small and large q for above-median clusters and for small q for below-median clusters. Neither model can fully explain the entire regime of q studied.
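    The return-interval construction itself is straightforward: record the time gaps between successive volatility values above the threshold q. A minimal sketch with an illustrative series (not the S&P 500 data):

```python
def return_intervals(series, q):
    """Time gaps between consecutive exceedances of threshold q."""
    hits = [t for t, v in enumerate(series) if v > q]
    return [b - a for a, b in zip(hits, hits[1:])]

# Illustrative volatility series; exceedances of q=0.7 at t = 1, 4, 5, 6.
vol = [0.1, 0.9, 0.2, 0.3, 1.1, 0.8, 1.5, 0.2]
print(return_intervals(vol, 0.7))  # -> [3, 1, 1]
```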

  5. Online Data Quality and Bad Interval Detection for the CUORE Neutrinoless Double Beta Decay Search

    NASA Astrophysics Data System (ADS)

    Welliver, Bradford; Cuore Collaboration

    2016-09-01

    The Cryogenic Underground Observatory for Rare Events (CUORE) is a large neutrinoless double beta decay (0νββ) search being installed underground at the Laboratori Nazionali del Gran Sasso (LNGS). 0νββ searches can address fundamental questions about the nature of the neutrino, such as whether it is a Dirac or Majorana fermion and its mass scale, and may provide insight into the observed matter-antimatter asymmetry in the universe. CUORE is the largest array of bolometer-instrumented crystals in the world, nineteen times larger than the previous implementation used in CUORE-0. It contains a total of 988 TeO2 crystals with a mass of 741 kg and is expected to achieve a sensitivity on the 130Te 0νββ half-life of T1/2 = 9.5 × 10^25 years (90% C.L.) after 5 years of operation. The large number of individual crystals in CUORE presents challenges for monitoring data quality and for determining bad intervals of time in detector operation. We will discuss the work being performed to provide expanded online detector quality monitoring tools, as well as the development of automated algorithms to test and identify periods of abnormal behavior across all of the individual detectors.

  6. Disordered eating and weight changes after deployment: longitudinal assessment of a large US military cohort.

    PubMed

    Jacobson, Isabel G; Smith, Tyler C; Smith, Besa; Keel, Pamela K; Amoroso, Paul J; Wells, Timothy S; Bathalon, Gaston P; Boyko, Edward J; Ryan, Margaret A K

    2009-02-15

    The effect of military deployments to combat environments on disordered eating and weight changes is unknown. Using longitudinal data from Millennium Cohort Study participants who completed baseline (2001-2003) and follow-up (2004-2006) questionnaires (n=48,378), the authors investigated new-onset disordered eating and weight changes in a large military cohort. Multivariable logistic regression was used to compare these outcomes among those who deployed and reported combat exposures, those who deployed but did not report combat exposures, and those who did not deploy in support of the wars in Iraq and Afghanistan. Deployment was not significantly associated with new-onset disordered eating in women or men, after adjustment for baseline demographic, military, and behavioral characteristics. However, in subgroup comparison analyses of deployers, deployed women reporting combat exposures were 1.78 times more likely to report new-onset disordered eating (95% confidence interval: 1.02, 3.11) and 2.35 times more likely to lose 10% or more of their body weight compared with women who deployed but did not report combat exposures (95% confidence interval: 1.17, 4.70). Despite no significant overall association between deployment and disordered eating and weight changes, deployed women reporting combat exposures represent a subgroup at higher risk for developing eating problems and weight loss.

  7. Rate Analysis of Two Photovoltaic Systems in San Diego

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doris, E.; Ong, S.; Van Geet, O.

    2009-07-01

    Analysts have found increasing evidence that rate structure has impacts on the economics of solar systems. This paper uses 2007 15-minute-interval photovoltaic (PV) system and load data from two San Diego City water treatment facilities to illustrate the impacts of different rate designs. The comparison is based on rates available in San Diego at the time of data collection and includes proportionately small to large demand charges (relative to volumetric consumption) and varying on- and off-peak times. Findings are twofold for these large commercial systems: 1) transferring costs into demand charges does not result in savings, and 2) changes in peak times do not result in a major cost difference during the course of a year. While lessons learned and discussion of rate components are based on the findings, the applicability is limited to buildings with similar systems, environments, rate options, and loads.

  8. Unfolding large-scale online collaborative human dynamics

    PubMed Central

    Zha, Yilong; Zhou, Tao; Zhou, Changsong

    2016-01-01

    Large-scale interacting human activities underlie all social and economic phenomena, but quantitative understanding of regular patterns and mechanism is very challenging and still rare. Self-organized online collaborative activities with a precise record of event timing provide unprecedented opportunity. Our empirical analysis of the history of millions of updates in Wikipedia shows a universal double power-law distribution of time intervals between consecutive updates of an article. We then propose a generic model to unfold collaborative human activities into three modules: (i) individual behavior characterized by Poissonian initiation of an action, (ii) human interaction captured by a cascading response to previous actions with a power-law waiting time, and (iii) population growth due to the increasing number of interacting individuals. This unfolding allows us to obtain an analytical formula that is fully supported by the universal patterns in empirical data. Our modeling approaches reveal “simplicity” beyond complex interacting human activities. PMID:27911766

  9. Performance analysis of a laser-propelled interorbital transfer vehicle

    NASA Technical Reports Server (NTRS)

    Minovitch, M. A.

    1976-01-01

    The performance capabilities of a laser-propelled interorbital transfer vehicle receiving propulsive power from one ground-based transmitter were investigated. The laser transmits propulsive energy to the vehicle during successive station fly-overs. By applying a series of these propulsive maneuvers, large payloads can be economically transferred between low earth orbits and synchronous orbits. Operations involving the injection of large payloads onto escape trajectories are also studied. The duration of each successive engine burn must be carefully timed so that the vehicle reappears over the laser station to receive additional propulsive power within the shortest possible time. The analytical solution for determining these time intervals is presented, as is a solution to the problem of determining maximum injection payloads. Parametric computer analysis based on these optimization studies is presented. The results show that relatively low beam powers, on the order of 50 MW to 60 MW, produce significant performance capabilities.

  10. Particle identification with the ALICE Time-Of-Flight detector at the LHC

    NASA Astrophysics Data System (ADS)

    Alici, A.

    2014-12-01

    A high-performance particle identification (PID) system is a distinguishing characteristic of the ALICE experiment at the CERN Large Hadron Collider (LHC). Charged particles in the intermediate momentum range are identified in ALICE by the Time-Of-Flight (TOF) detector. The TOF exploits the Multi-gap Resistive Plate Chamber (MRPC) technology, capable of an intrinsic time resolution of a few tens of ps with an overall efficiency close to 100% and a large operation plateau. The full system is made of 1593 MRPC chambers with a total area of 141 m2, covering the pseudorapidity interval [-0.9, +0.9] and the full azimuthal angle. The ALICE TOF system has shown very stable operation during the first 3 years of collisions at the LHC. In this paper, a summary of the system performance as well as the main results with data from collisions will be reported.

  11. Exact closed-form solution of the hyperbolic equation of string vibrations with material relaxation properties taken into account

    NASA Astrophysics Data System (ADS)

    Kudinov, I. V.; Kudinov, V. A.

    2014-09-01

    The differential equation of damped string vibrations was obtained with the finite speed of extension and strain propagation taken into account in the Hooke's law formula. In contrast to the well-known equations, the obtained equation contains the first and third time derivatives of the displacement and the mixed derivative with respect to the space and time variables. Separation of variables was used to obtain its exact closed-form solution, whose analysis showed that, for large values of the relaxation coefficient, the string's return to the initial state after its escape from equilibrium is accompanied by high-frequency, low-amplitude damped vibrations, which occur during the initial time interval only in the region of positive displacements. In the limit of very large relaxation coefficients, the string returns to the initial state with practically no oscillatory process.

  12. Not All Prehospital Time is Equal: Influence of Scene Time on Mortality

    PubMed Central

    Brown, Joshua B.; Rosengart, Matthew R.; Forsythe, Raquel M.; Reynolds, Benjamin R.; Gestring, Mark L.; Hallinan, William M.; Peitzman, Andrew B.; Billiar, Timothy R.; Sperry, Jason L.

    2016-01-01

    Background Trauma is time-sensitive and minimizing prehospital (PH) time is appealing. However, most studies have not linked increasing PH time with worse outcomes, as raw PH times are highly variable. It is unclear whether specific PH time patterns affect outcomes. Our objective was to evaluate the association of PH time interval distribution with mortality. Methods Patients transported by EMS in the Pennsylvania trauma registry 2000-2013 with total prehospital time (TPT)≥20min were included. TPT was divided into three PH time intervals: response, scene, and transport time. The number of minutes in each PH time interval was divided by TPT to determine the relative proportion each interval contributed to TPT. A prolonged interval was defined as any one PH interval contributing ≥50% of TPT. Patients were classified by prolonged PH interval or no prolonged PH interval (all intervals<50% of TPT). Patients were matched for TPT, and conditional logistic regression determined the association of mortality with PH time pattern, controlling for confounders. PH interventions were explored as potential mediators, and prehospital triage criteria were used to identify patients with time-sensitive injuries. Results There were 164,471 patients included. Patients with prolonged scene time had increased odds of mortality (OR 1.21; 95%CI 1.02–1.44, p=0.03). Prolonged response, transport, and no prolonged interval were not associated with mortality. When adjusting for mediators including extrication and PH intubation, prolonged scene time was no longer associated with mortality (OR 1.06; 0.90–1.25, p=0.50). Together these factors mediated 61% of the effect between prolonged scene time and mortality. Mortality remained associated with prolonged scene time in patients with hypotension, penetrating injury, and flail chest. Conclusions Prolonged scene time is associated with increased mortality. PH interventions partially mediate this association.
Further study should evaluate whether these interventions drive increased mortality because they prolong scene time or by another mechanism, as reducing scene time may be a target for intervention. Level of Evidence IV, prognostic study PMID:26886000
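    The interval-classification rule described above (an interval is "prolonged" when it contributes at least 50% of total prehospital time) can be written as a short sketch; the function name and labels are mine, not the registry's coding.

    ```python
    def classify_prehospital(response_min, scene_min, transport_min):
        """Label a transport 'prolonged <interval>' if any one phase is >=50% of
        total prehospital time (TPT), else 'no prolonged interval'."""
        intervals = {
            "response": response_min,
            "scene": scene_min,
            "transport": transport_min,
        }
        tpt = sum(intervals.values())
        for name, minutes in intervals.items():
            if minutes / tpt >= 0.5:
                return f"prolonged {name}"
        return "no prolonged interval"
    ```
    
    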

  13. Time interval measurement device based on surface acoustic wave filter excitation, providing 1 ps precision and stability.

    PubMed

    Panek, Petr; Prochazka, Ivan

    2007-09-01

    This article deals with a time interval measurement device based on a surface acoustic wave (SAW) filter as a time interpolator. The operating principle rests on the fact that a transversal SAW filter excited by a short pulse generates a finite signal with highly suppressed spectra outside a narrow frequency band. If the responses to two excitations are sampled at clock ticks, they can be precisely reconstructed from a finite number of samples and then compared so as to determine the time interval between the two excitations. We have designed and constructed a two-channel time interval measurement device which allows independent timing of two events and evaluation of the time interval between them. The device has been constructed using commercially available components. The experimental results proved the concept. We have assessed a single-shot time interval measurement precision of 1.3 ps rms, which corresponds to a time-of-arrival precision of 0.9 ps rms in each channel. The temperature drift of the measured time interval is lower than 0.5 ps/K, and the long-term stability is better than ±0.2 ps/h. These are, to our knowledge, the best values reported for a time interval measurement device. The results are in good agreement with the error budget based on the theoretical analysis.
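    The interpolation idea, reconstructing a band-limited response from clock-tick samples and comparing two responses to recover a sub-clock offset, can be sketched as follows. This minimal illustration substitutes a baseband Gaussian pulse for the real narrow-band SAW response; it demonstrates only the sub-sample estimation step, not the device itself.

    ```python
    import numpy as np

    def filter_response(t, bw=10e6):
        # Illustrative baseband stand-in for the SAW filter response:
        # a band-limited (Gaussian) pulse of bandwidth ~bw.
        return np.exp(-0.5 * (t * bw) ** 2)

    def subsample_delay(a, b, dt):
        """Delay of sampled response b relative to a, resolved to a small fraction
        of the sampling period dt, via cross-correlation with parabolic
        interpolation of the correlation peak."""
        xc = np.correlate(b, a, mode="full")
        k = int(np.argmax(xc))
        # Parabola through the peak and its neighbours gives the fractional lag.
        y0, y1, y2 = xc[k - 1], xc[k], xc[k + 1]
        frac = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
        return (k - (len(a) - 1) + frac) * dt
    ```

    With a 200 MHz sampling clock the routine recovers a delay of 3.3 samples to a small fraction of a tick, which is the essence of interpolating between clock ticks.
    
    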

  14. Interval Timing Accuracy and Scalar Timing in C57BL/6 Mice

    PubMed Central

    Buhusi, Catalin V.; Aziz, Dyana; Winslow, David; Carter, Rickey E.; Swearingen, Joshua E.; Buhusi, Mona C.

    2010-01-01

    In many species, interval timing behavior is accurate (estimated durations are appropriate) and scalar (errors vary linearly with estimated durations). While accuracy has been previously examined, scalar timing has not yet been clearly demonstrated in house mice (Mus musculus), raising concerns about mouse models of human disease. We estimated timing accuracy and precision in C57BL/6 mice, the most widely used background strain for genetic models of human disease, in a peak-interval procedure with multiple intervals. Whether timing two intervals (Experiment 1) or three intervals (Experiment 2), C57BL/6 mice demonstrated varying degrees of timing accuracy. Importantly, both at the individual and group levels, their precision varied linearly with the subjective estimated duration. Further evidence for scalar timing was obtained using an intraclass correlation statistic. This is the first report of consistent, reliable scalar timing in a sizable sample of house mice, thus validating the PI procedure as a valuable technique, the intraclass correlation statistic as a powerful test of the scalar property, and the C57BL/6 strain as a suitable background for behavioral investigations of genetically engineered mice modeling disorders of interval timing. PMID:19824777

  15. Assessment of cardiac time intervals using high temporal resolution real-time spiral phase contrast with UNFOLDed-SENSE.

    PubMed

    Kowalik, Grzegorz T; Knight, Daniel S; Steeden, Jennifer A; Tann, Oliver; Odille, Freddy; Atkinson, David; Taylor, Andrew; Muthurangu, Vivek

    2015-02-01

    To develop a real-time phase contrast MR sequence with high enough temporal resolution to assess cardiac time intervals. The sequence utilized spiral trajectories with an acquisition strategy that allowed a combination of temporal encoding (Unaliasing by fourier-encoding the overlaps using the temporal dimension; UNFOLD) and parallel imaging (Sensitivity encoding; SENSE) to be used (UNFOLDed-SENSE). An in silico experiment was performed to determine the optimum UNFOLD filter. In vitro experiments were carried out to validate the accuracy of time intervals calculation and peak mean velocity quantification. In addition, 15 healthy volunteers were imaged with the new sequence, and cardiac time intervals were compared to reference standard Doppler echocardiography measures. For comparison, in silico, in vitro, and in vivo experiments were also carried out using sliding window reconstructions. The in vitro experiments demonstrated good agreement between real-time spiral UNFOLDed-SENSE phase contrast MR and the reference standard measurements of velocity and time intervals. The protocol was successfully performed in all volunteers. Subsequent measurement of time intervals produced values in keeping with literature values and good agreement with the gold standard echocardiography. Importantly, the proposed UNFOLDed-SENSE sequence outperformed the sliding window reconstructions. Cardiac time intervals can be successfully assessed with UNFOLDed-SENSE real-time spiral phase contrast. Real-time MR assessment of cardiac time intervals may be beneficial in assessment of patients with cardiac conditions such as diastolic dysfunction. © 2014 Wiley Periodicals, Inc.

  16. Prevalence of QT interval prolongation in patients admitted to cardiac care units and frequency of subsequent administration of QT interval-prolonging drugs: a prospective, observational study in a large urban academic medical center in the US.

    PubMed

    Tisdale, James E; Wroblewski, Heather A; Overholser, Brian R; Kingery, Joanna R; Trujillo, Tate N; Kovacs, Richard J

    2012-06-01

    Cardiac arrest due to torsades de pointes (TdP) is a rare but catastrophic event in hospitals. Patients admitted to cardiac units are at higher risk of drug-induced QT interval prolongation and TdP, due to a preponderance of risk factors. Few data exist regarding the prevalence of QT interval prolongation in patients admitted to cardiac units or the frequency of administering QT interval-prolonging drugs to patients presenting with QT interval prolongation. The aim of this study was to determine the prevalence of Bazett's-corrected QT (QT(c)) interval prolongation upon admission to cardiac units and the proportion of patients presenting with QT(c) interval prolongation who are subsequently administered QT interval-prolonging drugs during hospitalization. This was a prospective, observational study conducted over a 1-year period (October 2008-October 2009) in 1159 consecutive patients admitted to two cardiac units in a large urban academic medical centre located in Indianapolis, IN, USA. Patients were enrolled into the study at the time of admission to the hospital and were followed daily during hospitalization. Exclusion criteria were age <18 years, ECG rhythm of complete ventricular pacing, and patient designation as 'outpatient' in a bed and/or duration of stay <24 hours. Data collected included demographic information, past medical history, daily progress notes, medication administration records, laboratory data, ECGs, telemetry monitoring strips and diagnostic reports. All patients underwent continuous cardiac telemetry monitoring and/or had a baseline 12-lead ECG obtained within 4 hours of admission. QT intervals were determined manually from lead II of 12-lead ECGs or from continuous lead II telemetry monitoring strips. QT(c) interval prolongation was defined as ≥470 ms for males and ≥480 ms for females. In both males and females, QT(c) interval >500 ms was considered abnormally high. 
A medication was classified as QT interval-prolonging if there were published data indicating that the drug causes QT interval prolongation and/or TdP. Study endpoints were (i) prevalence of QT(c) interval prolongation upon admission to the Cardiac Medical Critical Care Unit (CMCCU) or an Advanced Heart Care Unit (AHCU); (ii) proportion of patients admitted to the CMCCU/AHCU with QT(c) interval prolongation who subsequently were administered QT interval-prolonging drugs during hospitalization; (iii) proportion of these higher-risk patients in whom TdP risk factor monitoring was performed; (iv) proportion of patients with QT(c) interval prolongation who subsequently received QT-prolonging drugs and who experienced further QT(c) interval prolongation. Of 1159 patients enrolled, 259 patients met exclusion criteria, resulting in a final sample size of 900 patients (mean ± SD age 65 ± 15 years; 47% female; 70% Caucasian). Admitting diagnoses were heart failure (22%), myocardial infarction (16%), atrial fibrillation (9%), and sudden cardiac arrest (3%). QT(c) interval prolongation was present in 27.9% of patients on admission; 18.2% had QT(c) interval >500 ms. Of 251 patients admitted with QT(c) interval prolongation, 87 (34.7%) were subsequently administered QT interval-prolonging drugs. Of 166 patients admitted with QT(c) interval >500 ms, 70 (42.2%) were subsequently administered QT interval-prolonging drugs; additional QT(c) interval prolongation ≥60 ms occurred in 57.1% of these patients. QT(c) interval prolongation is common among patients admitted to cardiac units. QT interval-prolonging drugs are commonly prescribed to patients presenting with QT(c) interval prolongation.
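Bazett's correction, used above to define QT(c), divides the measured QT interval by the square root of the RR interval expressed in seconds. A minimal sketch using the study's admission thresholds (the function names are mine):

```python
import math

def bazett_qtc(qt_ms, rr_ms):
    """Bazett-corrected QT in ms: QTc = QT / sqrt(RR), with RR in seconds."""
    return qt_ms / math.sqrt(rr_ms / 1000.0)

def qtc_prolonged(qtc_ms, sex):
    # Thresholds as defined in the study: >=470 ms for males, >=480 ms for females.
    return qtc_ms >= (470 if sex == "male" else 480)
```

For example, a measured QT of 400 ms at a heart rate of ~94 bpm (RR = 640 ms) corrects to 500 ms, the level the study treated as abnormally high in both sexes.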

  17. VARIABLE TIME-INTERVAL GENERATOR

    DOEpatents

    Gross, J.E.

    1959-10-31

    This patent relates to a pulse generator and more particularly to a time interval generator wherein the time interval between pulses is precisely determined. The variable time generator comprises two oscillators with one having a variable frequency output and the other a fixed frequency output. A frequency divider is connected to the variable oscillator for dividing its frequency by a selected factor and a counter is used for counting the periods of the fixed oscillator occurring during a cycle of the divided frequency of the variable oscillator. This defines the period of the variable oscillator in terms of that of the fixed oscillator. A circuit is provided for selecting as a time interval a predetermined number of periods of the variable oscillator. The output of the generator consists of a first pulse produced by a trigger circuit at the start of the time interval and a second pulse marking the end of the time interval produced by the same trigger circuit.
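    The calibration arithmetic the patent describes, counting fixed-oscillator periods during one cycle of the divided variable frequency, can be sketched as follows; the function names and example frequencies are illustrative assumptions, not values from the patent.

    ```python
    def variable_period_in_fixed_units(f_fixed_hz, f_var_hz, divide_by):
        """Fixed-oscillator periods per variable-oscillator period: the counter
        tallies fixed ticks over one divided cycle, then the division factor
        scales the count back to a single variable period."""
        ticks_per_divided_cycle = f_fixed_hz * divide_by / f_var_hz
        return ticks_per_divided_cycle / divide_by

    def interval_duration(n_var_periods, f_fixed_hz, f_var_hz, divide_by):
        # A time interval is a preset number of variable-oscillator periods,
        # expressed here in seconds via the fixed-oscillator calibration.
        ticks = n_var_periods * variable_period_in_fixed_units(
            f_fixed_hz, f_var_hz, divide_by)
        return ticks / f_fixed_hz
    ```
    
    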

  18. Reconciling short recurrence intervals with minor deformation in the New Madrid seismic zone

    USGS Publications Warehouse

    Schweig, E.S.; Ellis, M.A.

    1994-01-01

    At least three great earthquakes occurred in the New Madrid seismic zone in 1811 and 1812. Estimates of present-day strain rates suggest that such events may have a repeat time of 1000 years or less. Paleoseismological data also indicate that earthquakes large enough to cause soil liquefaction have occurred several times in the past 5000 years. However, pervasive crustal deformation expected from such a high frequency of large earthquakes is not observed. This suggests that the seismic zone is a young feature, possibly as young as several tens of thousands of years old and no more than a few million years old.

  19. Noninvasive pH-Telemetric Measurement of Gastrointestinal Function

    NASA Technical Reports Server (NTRS)

    Tietze, Karen J.

    1991-01-01

    The purpose of this study was to gain experience with and validate the Heidelberg pH-telemetric methodology in order to determine if the pH-telemetric methodology would be a useful noninvasive measure of gastrointestinal transit time for future ground-based and in-flight drug evaluation studies. The Heidelberg pH metering system is a noninvasive, nonradioactive telemetric system that, following oral ingestion, continuously measures intraluminal pH of the stomach, duodenum, small bowel, ileocecal junction, and large bowel. Gastrointestinal motility profiles were obtained in normal volunteers using the lactulose breath-hydrogen and Heidelberg pH metering techniques. All profiles were obtained in the morning after an overnight fast. Heidelberg pH profiles were obtained in the fasting and fed states; lactulose breath-hydrogen profiles were obtained after a standard breakfast. Mouth-to-cecum transit time was measured as the interval from administration of lactulose (30 ml; 20 g) to a sustained increase in breath-hydrogen of 10 ppm or more. Gastric emptying time was measured as the interval from the administration of the Heidelberg capsule to a sustained increase in pH of three units or more.

  20. Effect of continuous quality improvement analysis on the delivery of primary percutaneous revascularization for acute myocardial infarction: a community hospital experience.

    PubMed

    Caputo, Ronald P; Kosinski, Robert; Walford, Gary; Giambartolomei, Alex; Grant, William; Reger, Mark J; Simons, Alan; Esente, Paolo

    2005-04-01

    As time to reperfusion correlates with outcomes, a door-to-balloon time of 90 ± 30 min for primary percutaneous coronary revascularization (PCI) for the treatment of acute myocardial infarction has recently been established as a guideline by the ACC/AHA. The purpose of this study is to assess the effects of a continuous quality assurance program designed to expedite primary angioplasty at a community hospital. A database of all primary PCI procedures was created in 1998. Two groups of consecutive patients treated with primary PCI were studied. Group 1 represented patients in the time period between 1 June 1998 and 1 November 1998, and group 2 represented patients in the period between 1 January 2000 and 16 June 2000. Continuous quality assurance analysis was performed. Modifications to the primary angioplasty program were initiated in the latter group. Time intervals to certain treatment landmarks were compared between the groups. Significant decreases in the time intervals from emergency room registration to initial electrocardiogram (8.4 ± 8.2 vs. 3.7 ± 19.5 min; P < 0.001), presentation to the catheterization laboratory to arterial access (13.5 ± 12.9 vs. 11.6 ± 5.8 min; P < 0.001), and emergency room registration to initial angioplasty balloon inflation (132.0 ± 69.2 vs. 112 ± 72.0 min; P < 0.001) were achieved. For the subgroup of patients presenting with diagnostic ST elevation myocardial infarction, a large decrease in the door-to-balloon time interval between group 1 and group 2 was demonstrated (114.15 ± 9.67 vs. 87.92 ± 10.93 min; P = NS), resulting in compliance with ACC/AHA guidelines. Continuous quality improvement analysis can expedite care for patients treated by primary PCI in the community hospital setting. Copyright 2005 Wiley-Liss, Inc.

  1. The effects of time compositing on obtaining clear-sky coverage for infrared temperature and moisture profiling from geosynchronous orbit

    NASA Technical Reports Server (NTRS)

    Shenk, William E.; Hope, William A.

    1994-01-01

    The impact of time compositing on infrared profiling from geosynchronous orbit was evaluated for two convective outbreak cases. Time compositing is the accumulation of the data from several successive images taken at short intervals to provide a single field of measurements with the temporal resolution equal to the time to take all of the images. This is especially effective when the variability of the measurements is slow compared to the image interval. Time compositing should be able to reduce the interference of clouds for infrared measurements, since clouds move and change. The convective outbreak cases were on 4 and 21 May 1990 over the eastern Midwest and southeastern United States, respectively. Geostationary Operational Environmental Satellite (GOES) imagery was used to outline clear areas at hourly intervals by two independent analysts. Time compositing was done every 3 h (1330-1530 UTC; 1630-1830 UTC) and over the full 5-h period. For both cases, a significant increase in coverage was measured with each 3-h compositing (about a factor of 2) and a further increase over the full period (approximately a factor of 3). The increase was especially useful in areas of broken cloud cover where large gaps between potential profiling areas on each image were reduced. To provide information on measurement variability over local areas, the regions where the clear-area analyses were done were subdivided into 0.5 deg latitude-longitude boxes, and if some portion of each box was clear, it was assumed that at least one profile could be obtained within the box. In the largest clear areas, at least some portion was clear every hour. Even in the cloudier regions, multiple clear looks were possible during the entire period.
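    The compositing step itself reduces to a union of per-image clear-sky masks: a location counts as usable for profiling if it was clear in at least one image of the compositing period. A minimal sketch with invented toy masks (the real analysis used hourly GOES clear-area outlines):

    ```python
    import numpy as np

    def composite_clear_coverage(clear_masks):
        """Combine boolean clear-sky masks (True = clear) from successive images
        into one composited field: clear in the composite if clear in any image."""
        return np.logical_or.reduce(clear_masks)

    def coverage_fraction(mask):
        """Fraction of the analysis grid that is clear."""
        return mask.mean()
    ```

    With three toy 2x2 masks, each clear in a different cell, the composite covers three times the area of a single image, mirroring the roughly factor-of-3 gain reported over the full period.
    
    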

  2. Hospital factors impact variation in emergency department length of stay more than physician factors.

    PubMed

    Krall, Scott P; Cornelius, Angela P; Addison, J Bruce

    2014-03-01

    To analyze the correlation between the many different emergency department (ED) treatment metric intervals and determine if the metrics directly impacted by the physician correlate to the "door to room" interval in an ED (an interval determined by ED bed availability). Our null hypothesis was that the cause of the variation in delay to receiving a room was multifactorial and does not correlate to any one metric interval. We collected daily interval averages from the ED information system, Meditech©. Patient flow metrics were collected on a 24-hour basis. We analyzed the relationship between the time intervals that make up an ED visit and the "arrival to room" interval using simple correlation (Pearson correlation coefficients). Summary statistics of industry-standard metrics were also computed by dividing the intervals into 2 groups, based on the average ED length of stay (LOS) from the National Hospital Ambulatory Medical Care Survey: 2008 Emergency Department Summary. Simple correlation analysis showed that the doctor-to-discharge time interval had no correlation with the "door to room (waiting room time)" interval (correlation coefficient, CC=0.000; p=0.96). "Room to doctor" had a low correlation to "door to room" (CC=0.143), while "decision to admitted patients departing the ED time" had a moderate correlation of 0.29 (p<0.001). "New arrivals" (daily patient census) had a strong correlation to longer "door to room" times (CC=0.657, p<0.001). The "door to discharge" times had a very strong correlation (CC=0.804, p<0.001) with the extended "door to room" time. Physician-dependent intervals had minimal correlation to the variation in arrival to room time. The "door to room" interval was a significant component of the variation in "door to discharge", i.e. LOS. The hospital-influenced "admit decision to hospital bed" interval, i.e. hospital inpatient capacity, had a correlation to delayed "door to room" time. 
The other major factor affecting department bed availability was the "total patients per day." The correlation to the increasing "door to room" time also reflects the effect of availability of ED resources (beds) on the patient evaluation time. The time that it took for a patient to receive a room appeared more dependent on the system resources, for example, beds in the ED, as well as in the hospital, than on the physician.
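The analysis above rests on simple Pearson correlation between daily interval averages. For reference, the coefficient can be computed directly from two equal-length series; the series in the test are toy data, not the study's registry.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series,
    e.g. daily 'door to room' averages vs. daily 'new arrivals'."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)
```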

  3. Intact interval timing in circadian CLOCK mutants.

    PubMed

    Cordes, Sara; Gallistel, C R

    2008-08-28

    While progress has been made in determining the molecular basis for the circadian clock, the mechanism by which mammalian brains time intervals measured in seconds to minutes remains a mystery. An obvious question is whether the interval-timing mechanism shares molecular machinery with the circadian timing mechanism. In the current study, we trained circadian CLOCK +/- and -/- mutant male mice in a peak-interval procedure with 10- and 20-s criteria. The mutant mice were more active than their wild-type littermates, but there were no reliable deficits in the accuracy or precision of their timing as compared with wild-type littermates. This suggests that expression of the CLOCK protein is not necessary for normal interval timing.

  4. Recurrence and interoccurrence behavior of self-organized complex phenomena

    NASA Astrophysics Data System (ADS)

    Abaimov, S. G.; Turcotte, D. L.; Shcherbakov, R.; Rundle, J. B.

    2007-08-01

    The sandpile, forest-fire and slider-block models are said to exhibit self-organized criticality. Associated natural phenomena include landslides, wildfires, and earthquakes. In all cases the frequency-size distributions are well approximated by power laws (fractals). Another important aspect of both the models and natural phenomena is the statistics of interval times. These statistics are particularly important for earthquakes. For earthquakes it is important to make a distinction between interoccurrence and recurrence times. Interoccurrence times are the interval times between earthquakes on all faults in a region, whereas recurrence times are interval times between earthquakes on a single fault or fault segment. In many, but not all cases, interoccurrence time statistics are exponential (Poissonian) and the events occur randomly. However, the distribution of recurrence times is often Weibull to a good approximation. In this paper we study the interval statistics of slip events using a slider-block model. The behavior of this model is sensitive to the stiffness α of the system, α=kC/kL, where kC is the spring constant of the connector springs and kL is the spring constant of the loader plate springs. For a soft system (small α) there are no system-wide events and the interoccurrence time statistics of the larger events are Poissonian. For a stiff system (large α), system-wide events dominate the energy dissipation and the statistics of the recurrence times between these system-wide events satisfy the Weibull distribution to a good approximation. We argue that this applicability of the Weibull distribution is due to the power-law (scale invariant) behavior of the hazard function, i.e. the probability that the next event will occur at a time t0 after the last event has a power-law dependence on t0. The Weibull distribution is the only distribution that has a scale invariant hazard function. 
We further show that the onset of system-wide events is a well-defined critical point. We find that the number of system-wide events NSWE satisfies the scaling relation NSWE ∝ (α - αC)^δ, where αC is the critical value of the stiffness. The system-wide events represent a new phase for the slider-block system.
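The scale-invariance argument can be checked directly: the Weibull hazard function h(t) = (k/λ)(t/λ)^(k-1) is a pure power law in t, so rescaling time only multiplies it by a constant factor, h(ct) = c^(k-1) h(t). A short sketch (parameter names are generic, not the paper's fitted values):

```python
import math

def weibull_hazard(t, shape, scale):
    """Weibull hazard function h(t) = (k/lam) * (t/lam)**(k-1), with
    shape k and scale lam; the only hazard that is a pure power law in t."""
    return (shape / scale) * (t / scale) ** (shape - 1)
```

For shape k = 1 the hazard is constant (the exponential/Poissonian case of interoccurrence times); for k > 1 it grows with elapsed time, as expected for recurrence on a reloading fault.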

  5. The Anaesthetic-ECT Time Interval in Electroconvulsive Therapy Practice--Is It Time to Time?

    PubMed

    Gálvez, Verònica; Hadzi-Pavlovic, Dusan; Wark, Harry; Harper, Simon; Leyden, John; Loo, Colleen K

    2016-01-01

    Because most common intravenous anaesthetics used in ECT have anticonvulsant properties, their plasma-brain concentration at the time of seizure induction might affect seizure expression. The quality of ECT seizure expression has been repeatedly associated with efficacy outcomes. The time interval between the anaesthetic bolus injection and the ECT stimulus (anaesthetic-ECT time interval) will determine the anaesthetic plasma-brain concentration when the ECT stimulus is administered. The aim of this study was to examine the effect of the anaesthetic-ECT time interval on ECT seizure quality and duration. The anaesthetic-ECT time interval was recorded in 771 ECT sessions (84 patients). Right unilateral brief pulse ECT was applied. Anaesthesia given was propofol (1-2 mg/kg) and succinylcholine (0.5-1.0 mg/kg). Seizure quality indices (slow wave onset, amplitude, regularity, stereotypy and post-ictal suppression) and duration were rated through a structured rating scale by a single blinded trained rater. Linear Mixed Effects Models analysed the effect of the anaesthetic-ECT time interval on seizure quality indices, controlling for propofol dose (mg), ECT charge (mC), ECT session number, days between ECT, age (years), initial seizure threshold (mC) and concurrent medication. Longer anaesthetic-ECT time intervals led to significantly higher-quality seizures (p < 0.001 for amplitude, regularity, stereotypy and post-ictal suppression). These results suggest that the anaesthetic-ECT time interval is an important factor to consider in ECT practice. This time interval should be extended to as long as practically possible to facilitate the production of better quality seizures. Close collaboration between the anaesthetist and the psychiatrist is essential. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. A leaf wax biomarker record of early Pleistocene hydroclimate from West Turkana, Kenya

    NASA Astrophysics Data System (ADS)

    Lupien, R. L.; Russell, J. M.; Feibel, C.; Beck, C.; Castañeda, I.; Deino, A.; Cohen, A. S.

    2018-04-01

    Climate is thought to play a critical role in human evolution; however, this hypothesis is difficult to test due to a lack of long, high-quality paleoclimate records from key hominin fossil locales. To address this issue, we analyzed organic geochemical indicators of climate in a drill core from West Turkana, Kenya that spans ∼1.9-1.4 Ma, an interval that includes several important hominin evolutionary transitions. We analyzed the hydrogen isotopic composition of terrestrial plant waxes (δDwax) to reconstruct orbital-timescale changes in regional hydrology and their relationship with global climate forcings and the hominin fossil record. Our data indicate little change in the long-term mean hydroclimate during this interval, in contrast to inferred changes in the level of Lake Turkana, suggesting that lake level may be responding dominantly to deltaic progradation or tectonically-driven changes in basin configuration as opposed to hydroclimate. Time-series spectral analyses of the isotopic data reveal strong precession-band (21 kyr) periodicity, indicating that regional hydroclimate was strongly affected by changes in insolation. We observe an interval of particularly high-amplitude hydrologic variation at ∼1.7 Ma, which occurs during a time of high orbital eccentricity and hence large changes in precessionally driven insolation amplitude. This interval overlaps with multiple hominin species turnovers, the appearance of new stone tool technology, and hominin dispersal out of Africa, supporting the notion that climate variability played an important role in hominin evolution.

  7. Evidence against decay in verbal working memory.

    PubMed

    Oberauer, Klaus; Lewandowsky, Stephan

    2013-05-01

    The article tests the assumption that forgetting in working memory for verbal materials is caused by time-based decay, using the complex-span paradigm. Participants encoded 6 letters for serial recall; each letter was preceded and followed by a processing period comprising 4 trials of difficult visual search. Processing duration, during which memory could decay, was manipulated via search set size. This manipulation increased retention interval by up to 100% without having any effect on recall accuracy. This result held with and without articulatory suppression. Two experiments using a dual-task paradigm showed that the visual search process required central attention. Thus, even when memory maintenance by central attention and by articulatory rehearsal was prevented, a large delay had no effect on memory performance, contrary to the decay notion. Most previous experiments that manipulated the retention interval and the opportunity for maintenance processes in complex span have confounded these variables with time pressure during processing periods. Three further experiments identified time pressure as the variable that affected recall. We conclude that time-based decay does not contribute to the capacity limit of verbal working memory. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  8. A Study of Transport in the Near-Earth Plasma Sheet During A Substorm Using Time-Dependent Large Scale Kinetics

    NASA Technical Reports Server (NTRS)

    El-Alaoui, M.; Ashour-Abdalla, M.; Raeder, J.; Frank, L. A.; Paterson, W. R.

    1998-01-01

In this study we investigate the transport of H+ ions that made up the complex ion distribution function observed by the Geotail spacecraft at 0740 UT on November 24, 1996. This ion distribution function, observed by Geotail at approximately 20 R_E downtail, was used to initialize a time-dependent large-scale kinetic (LSK) calculation of the trajectories of 75,000 ions forward in time. Time-dependent magnetic and electric fields were obtained from a global magnetohydrodynamic (MHD) simulation of the magnetosphere and its interaction with the solar wind and the interplanetary magnetic field (IMF) as observed during the interval when the distribution function was observed. Our calculations indicate that the particles observed by Geotail were scattered across the equatorial plane by multiple interactions with the current sheet and then convected sunward. They were energized by the dawn-dusk electric field during their transport from the Geotail location and ultimately were lost at the ionospheric boundary or at the magnetopause.

  9. Real Time Search Algorithm for Observation Outliers During Monitoring Engineering Constructions

    NASA Astrophysics Data System (ADS)

    Latos, Dorota; Kolanowski, Bogdan; Pachelski, Wojciech; Sołoducha, Ryszard

    2017-12-01

Real-time monitoring of engineering structures in case of an emergency or disaster requires collection of a large amount of data to be processed by specific analytical techniques. A quick and accurate assessment of the state of the object is crucial for a possible rescue action. One of the more significant evaluation methods for large sets of data, collected either during a specified interval of time or permanently, is time series analysis. This paper presents a search algorithm for those time series elements which deviate from their values expected during monitoring. Quick and proper detection of observations indicating anomalous behavior of the structure allows a variety of preventive actions to be taken. The mathematical formulae used in the algorithm provide maximal sensitivity to detect even minimal changes in the object's behavior. The sensitivity analyses were conducted for the moving average algorithm as well as for the Douglas-Peucker algorithm used in the generalization of linear objects in GIS. In addition to determining the size of deviations from the average, the so-called Hausdorff distance was used. The simulations carried out and the verification against laboratory survey data showed that the approach provides sufficient sensitivity for automatic real-time analysis of large amounts of data obtained from different and various sensors (total stations, levelling, cameras, radar).
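The moving-average part of such an approach can be illustrated with a minimal sketch (the window size, the 3-sigma threshold, and the function name are hypothetical choices, not the paper's exact formulae): a new observation is flagged when it deviates from the trailing-window mean by more than k standard deviations.

```python
import statistics

def moving_average_outliers(series, window=5, k=3.0):
    """Flag indices whose value deviates > k*sigma from the trailing mean."""
    flagged = []
    for i in range(window, len(series)):
        history = series[i - window:i]        # trailing window, excludes point i
        mean = statistics.fmean(history)
        sigma = statistics.pstdev(history)
        if sigma > 0 and abs(series[i] - mean) > k * sigma:
            flagged.append(i)
    return flagged

# A quiet monitored coordinate with one sudden jump at index 12:
data = [10.0, 10.1, 9.9, 10.0, 10.2, 10.1, 9.9, 10.0,
        10.1, 10.0, 9.9, 10.1, 12.5, 10.0]
print(moving_average_outliers(data))  # → [12]
```

In a real deployment the same test would run on each incoming total-station or radar reading, with the window and threshold tuned per sensor.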

  10. The orbital evolution of the Apollo asteroid group over 11,550 years

    NASA Astrophysics Data System (ADS)

    Zausaev, A. F.; Pushkarev, A. N.

    1992-08-01

The Everhart method was used to monitor the orbital evolution of 20 Apollo asteroids in the time interval from 2250 A.D. to 9300 B.C. The closest encounters with large planets during this evolution are calculated. Stable resonances with Venus and Earth over this period are obtained. Theoretical coordinates of radiants at the initial and final moments of integration are calculated.

  11. Late Intrahepatic Hematoma Complicating Transjugular Intrahepatic Portosystemic Shunt for Budd-Chiari Syndrome

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terreni, Natalia; Vangeli, Marcello; Raimondo, Maria Luisa

Late intrahepatic hematoma is a rare complication of the transjugular intrahepatic portosystemic shunt (TIPS) procedure. We describe a patient with Budd-Chiari syndrome (BCS), who presented with a large intrahepatic hematoma 13 days after TIPS. Review of the literature reveals only two previous cases, both occurring in patients with BCS and presenting after a similar time interval. This potentially serious complication appears to be specific to TIPS in BCS.

  12. Response of Late Carboniferous and Early Permian Plant Communities to Climate Change

    NASA Astrophysics Data System (ADS)

    Dimichele, William A.; Pfefferkorn, Hermann W.; Gastaldo, Robert A.

Late Carboniferous and Early Permian strata record the transition from a cold interval in Earth history, characterized by repeated periods of glaciation and deglaciation of the southern pole, to a warm-climate interval. Consequently, this time period is the best available analogue to the Recent in which to study patterns of vegetational response, both to glacial-interglacial oscillation and to the appearance of warm climate. Carboniferous wetland ecosystems were dominated by spore-producing plants and early gymnospermous seed plants. Global climate changes, largely drying, forced vegetational changes, resulting in a change to a seed plant-dominated world, beginning first at high latitudes during the Carboniferous and reaching the tropics near the Permo-Carboniferous boundary. For most of this time plant assemblages were very conservative in their composition. Change in the dominant vegetation was generally a rapid process, which suggests that environmental thresholds were crossed, and involved little mixing of elements from the wet and dry floras.

  13. An easy-to-use technique to characterize cardiodynamics from first-return maps on ΔRR-intervals

    NASA Astrophysics Data System (ADS)

    Fresnel, Emeline; Yacoub, Emad; Freitas, Ubiratan; Kerfourn, Adrien; Messager, Valérie; Mallet, Eric; Muir, Jean-François; Letellier, Christophe

    2015-08-01

Heart rate variability analysis using 24-h Holter monitoring is frequently performed to assess the cardiovascular status of a patient. The present retrospective study is based on the beat-to-beat interval variations, or ΔRR, which offer a better view of the underlying structures governing the cardiodynamics than the common RR-intervals. By investigating data for three groups of adults (with normal sinus rhythm, congestive heart failure, and atrial fibrillation, respectively), we showed that the first-return maps built on ΔRR can be classified according to three structures: (i) a moderate central disk, (ii) a reduced central disk with well-defined segments, and (iii) a large triangular shape. These three very different structures can be distinguished by computing a Shannon entropy based on a symbolic dynamics and an asymmetry coefficient, here introduced to quantify the balance between accelerations and decelerations in the cardiac rhythm. The probability P111111 of successive heart beats without large beat-to-beat fluctuations allows the regularity of the cardiodynamics to be assessed. A characteristic time scale, corresponding to the partition inducing the largest Shannon entropy, was also introduced to quantify the ability of the heart to modulate its rhythm: it was significantly different for the three structures of first-return maps. A blind validation of the technique was performed.
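A minimal sketch of the ingredients named above, under assumed conventions (the 20 ms symbolization threshold and the three-symbol alphabet are illustrative choices, not the authors' exact partition): compute ΔRR from an RR series, symbolize accelerations and decelerations, and take the Shannon entropy of the symbol distribution.

```python
import math
from collections import Counter

def delta_rr(rr_ms):
    """Beat-to-beat differences of successive RR intervals (ms)."""
    return [b - a for a, b in zip(rr_ms, rr_ms[1:])]

def symbolize(drr_ms, threshold=20.0):
    """Map each dRR to +1 (deceleration), 0 (steady), or -1 (acceleration)."""
    return [1 if d > threshold else -1 if d < -threshold else 0 for d in drr_ms]

def shannon_entropy(symbols):
    """Entropy (bits) of the empirical symbol distribution."""
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in Counter(symbols).values())

rr = [800, 810, 805, 900, 820, 815, 700, 805]   # toy RR series in ms
sym = symbolize(delta_rr(rr))
print(sym)                                      # → [0, 0, 1, -1, 0, -1, 1]
print(round(shannon_entropy(sym), 3))           # → 1.557
```

A longer symbol blocking (as in the probability P111111) would count runs of "steady" symbols rather than single beats.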

  14. Intact Interval Timing in Circadian CLOCK Mutants

    PubMed Central

    Cordes, Sara; Gallistel, C. R.

    2008-01-01

While progress has been made in determining the molecular basis for the circadian clock, the mechanism by which mammalian brains time intervals measured in seconds to minutes remains a mystery. An obvious question is whether the interval timing mechanism shares molecular machinery with the circadian timing mechanism. In the current study, we trained circadian CLOCK +/− and −/− mutant male mice in a peak-interval procedure with 10- and 20-s criteria. The mutant mice were more active than their wild-type littermates, but showed no reliable deficits in the accuracy or precision of their timing. This suggests that expression of the CLOCK protein is not necessary for normal interval timing. PMID:18602902

  15. SU-E-J-150: Impact of Intrafractional Prostate Motion On the Accuracy and Efficiency of Prostate SBRT Delivery: A Retrospective Analysis of Prostate Tracking Log Files

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiang, H; Hirsch, A; Willins, J

    2014-06-01

Purpose: To measure intrafractional prostate motion by time-based stereotactic x-ray imaging and investigate the impact on the accuracy and efficiency of prostate SBRT delivery. Methods: Prostate tracking log files with 1,892 x-ray image registrations from 18 SBRT fractions for 6 patients were retrospectively analyzed. Patient setup and beam delivery sessions were reviewed to identify extended periods of large prostate motion that caused delays in setup or interruptions in beam delivery. The 6D prostate motions were compared to the clinically used PTV margin of 3–5 mm (3 mm posterior, 5 mm all other directions), a hypothetical PTV margin of 2–3 mm (2 mm posterior, 3 mm all other directions), and the rotation correction limits (roll ±2°, pitch ±5° and yaw ±3°) of CyberKnife to quantify beam delivery accuracy. Results: Significant incidents of treatment start delay and beam delivery interruption were observed, mostly related to large pitch rotations of ≥±5°. Optimal setup time of 5–15 minutes was recorded in 61% of the fractions, and optimal beam delivery time of 30–40 minutes in 67% of the fractions. At a default imaging interval of 15 seconds, the percentage of prostate motion beyond the PTV margin of 3–5 mm varied among patients, with a mean of 12.8% (range 0.0%–31.1%); the percentage beyond the PTV margin of 2–3 mm had a mean of 36.0% (range 3.3%–83.1%). These timely detected offsets were all corrected in real time by the robotic manipulator or by operator intervention at the time of treatment interruptions. Conclusion: The durations of patient setup and beam delivery were directly affected by the occurrence of large prostate motion. Frequent imaging, at intervals as short as 15 seconds, is necessary for certain patients. Techniques for reducing prostate motion, such as using an endorectal balloon, can be considered to assure consistently higher accuracy and efficiency of prostate SBRT delivery.

  16. TIME DISTRIBUTIONS OF LARGE AND SMALL SUNSPOT GROUPS OVER FOUR SOLAR CYCLES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kilcik, A.; Yurchyshyn, V. B.; Abramenko, V.

    2011-04-10

Here we analyze solar activity by focusing on time variations of the number of sunspot groups (SGs) as a function of their modified Zurich class. We analyzed data for solar cycles 20-23 by using Rome (cycles 20 and 21) and Learmonth Solar Observatory (cycles 22 and 23) SG numbers. All SGs recorded during these time intervals were separated into two groups. The first group includes small SGs (A, B, C, H, and J classes by Zurich classification), and the second group consists of large SGs (D, E, F, and G classes). We then calculated small and large SG numbers from their daily mean numbers as observed on the solar disk during a given month. We report that the time variations of small and large SG numbers are asymmetric except for solar cycle 22. In general, large SG numbers appear to reach their maximum in the middle of the solar cycle (phases 0.45-0.5), while the international sunspot numbers and the small SG numbers generally peak much earlier (solar cycle phases 0.29-0.35). Moreover, the 10.7 cm solar radio flux, the facular area, and the maximum coronal mass ejection speed show better agreement with the large SG numbers than they do with the small SG numbers. Our results suggest that the large SG numbers are more likely to shed light on solar activity and its geophysical implications. Our findings may also influence our understanding of long-term variations of the total solar irradiance, which is thought to be an important factor in the Sun-Earth climate relationship.

  17. Lateral masking in cycling displays: the relative importance of separation, flanker duration, and interstimulus interval for object-mediated updating.

    PubMed

    Hein, Elisabeth; Moore, Cathleen M

    2010-01-01

A central bar repeatedly presented in alternation with two flanking bars can lead to the disappearance of the central bar. Recently it has been suggested that this masking effect could be explained by object-mediated updating: the information from the central bar is integrated into the representation of the flankers, leading not only to the disappearance of the central bar as a separate object, but also to the perception of the flankers in apparent motion between their real position and the position of the central bar. This account suggests that the visibility of the central bar should depend on the same factors as those that influence the construction and maintenance of object representations. Therefore, separation between the central bar and the flankers should not influence visibility as long as the time interval between them is adequate to make possible an interpretation of the scene in terms of one object moving from one location to the other. We found that if the time interval between the central bar and the flankers is neither too short nor too long, the central bar becomes invisible even at large separations. These findings are inconsistent with traditional accounts of the cycling lateral masking displays in terms of local inhibitory mechanisms.

  18. Predecessors of the giant 1960 Chile earthquake

    USGS Publications Warehouse

    Cisternas, M.; Atwater, B.F.; Torrejon, F.; Sawai, Y.; Machuca, G.; Lagos, M.; Eipert, A.; Youlton, C.; Salgado, I.; Kamataki, T.; Shishikura, M.; Rajendran, C.P.; Malik, J.K.; Rizal, Y.; Husni, M.

    2005-01-01

It is commonly thought that the longer the time since last earthquake, the larger the next earthquake's slip will be. But this logical predictor of earthquake size, unsuccessful for large earthquakes on a strike-slip fault, fails also with the giant 1960 Chile earthquake of magnitude 9.5 (ref. 3). Although the time since the preceding earthquake spanned 123 years (refs 4, 5), the estimated slip in 1960, which occurred on a fault between the Nazca and South American tectonic plates, equalled 250-350 years' worth of the plate motion. Thus the average interval between such giant earthquakes on this fault should span several centuries. Here we present evidence that such long intervals were indeed typical of the last two millennia. We use buried soils and sand layers as records of tectonic subsidence and tsunami inundation at an estuary midway along the 1960 rupture. In these records, the 1960 earthquake ended a recurrence interval that had begun almost four centuries before, with an earthquake documented by Spanish conquistadors in 1575. Two later earthquakes, in 1737 and 1837, produced little if any subsidence or tsunami at the estuary and they therefore probably left the fault partly loaded with accumulated plate motion that the 1960 earthquake then expended. © 2005 Nature Publishing Group.

  19. Working times of elastomeric impression materials determined by dimensional accuracy.

    PubMed

    Tan, E; Chai, J; Wozniak, W T

    1996-01-01

    The working times of five poly(vinyl siloxane) impression materials were estimated by evaluating the dimensional accuracy of stone dies of impressions of a standard model made at successive time intervals. The stainless steel standard model was represented by two abutments having known distances between landmarks in three dimensions. Three dimensions in the x-, y-, and z-axes of the stone dies were measured with a traveling microscope. A time interval was rejected as being within the working time if the percentage change of the resultant dies, in any dimension, was statistically different from those measured from stone dies from previous time intervals. The absolute dimensions of those dies from the rejected time interval also must have exceeded all those from previous time intervals. Results showed that the working times estimated with this method generally were about 30 seconds longer than those recommended by the manufacturers.

  20. Single-channel autocorrelation functions: the effects of time interval omission.

    PubMed Central

    Ball, F G; Sansom, M S

    1988-01-01

    We present a general mathematical framework for analyzing the dynamic aspects of single channel kinetics incorporating time interval omission. An algorithm for computing model autocorrelation functions, incorporating time interval omission, is described. We show, under quite general conditions, that the form of these autocorrelations is identical to that which would be obtained if time interval omission was absent. We also show, again under quite general conditions, that zero correlations are necessarily a consequence of the underlying gating mechanism and not an artefact of time interval omission. The theory is illustrated by a numerical study of an allosteric model for the gating mechanism of the locust muscle glutamate receptor-channel. PMID:2455553

  1. Forensic use of the Greulich and Pyle atlas: prediction intervals and relevance.

    PubMed

    Chaumoitre, K; Saliba-Serre, B; Adalian, P; Signoli, M; Leonetti, G; Panuel, M

    2017-03-01

The Greulich and Pyle (GP) atlas is one of the most frequently used methods of bone age (BA) estimation. Our aim is to assess its accuracy and to calculate the 95% prediction intervals for forensic use. The study was conducted on a multi-ethnic sample of 2614 individuals (1423 boys and 1191 girls) referred to the university hospital of Marseille (France) for simple injuries. Hand radiographs were analysed using the GP atlas. Reliability of the GP atlas and agreement between BA and chronological age (CA) were assessed, and 95% prediction intervals were calculated. The repeatability was excellent and the reproducibility was good. Pearson's linear correlation coefficient between CA and BA was 0.983. The mean difference between BA and CA was -0.18 years (boys) and 0.06 years (girls). The 95% prediction interval for CA was given for each GP category and ranged between 1.2 and more than 4.5 years. The GP atlas is a reproducible and repeatable method that is still accurate for the present population, with a high correlation between BA and CA. The 95% prediction intervals are wide, reflecting individual variability, and should be known when the method is used in forensic cases. • The GP atlas is still accurate at the present time. • There is a high correlation between bone age and chronological age. • Individual variability must be known when GP is used in forensic cases. • Prediction intervals (95%) are large; around 4 years beyond age 10.

  2. Excitability of the motor system: A transcranial magnetic stimulation study on singing and speaking.

    PubMed

    Royal, Isabelle; Lidji, Pascale; Théoret, Hugo; Russo, Frank A; Peretz, Isabelle

    2015-08-01

The perception of movements is associated with increased activity in the human motor cortex, which in turn may underlie our ability to understand actions, as it may be implicated in the recognition, understanding and imitation of actions. Here, we investigated the involvement and lateralization of the primary motor cortex (M1) in the perception of singing and speech. Transcranial magnetic stimulation (TMS) was applied independently for both hemispheres over the mouth representation of the motor cortex in healthy participants while they watched 4-s audiovisual excerpts of singers producing a 2-note ascending interval (singing condition) or 4-s audiovisual excerpts of a person explaining a proverb (speech condition). Subjects were instructed to determine whether a sung interval/written proverb matched a written interval/proverb. During both tasks, motor evoked potentials (MEPs) were recorded from the contralateral mouth muscle (orbicularis oris) of the stimulated motor cortex and compared to a control task. Moreover, to investigate the time course of motor activation, TMS pulses were randomly delivered at 7 different time points (ranging from 500 to 3500 ms after stimulus onset). Results show that stimulation of the right hemisphere had a similar effect on the MEPs for both the singing and speech perception tasks, whereas stimulation of the left hemisphere had a significantly different effect in the speech perception task than in the singing perception task. Furthermore, analysis of the MEPs in the singing task revealed that they decreased for small musical intervals but increased for large musical intervals, regardless of which hemisphere was stimulated. Overall, these results suggest a dissociation between the lateralization of M1 activity for speech perception and for singing perception, and that in the latter case its activity can be modulated by musical parameters such as the size of a musical interval.

  3. Seasonal temperature extremes in Potsdam

    NASA Astrophysics Data System (ADS)

    Kundzewicz, Zbigniew; Huang, Shaochun

    2010-12-01

    The awareness of global warming is well established and results from the observations made on thousands of stations. This paper complements the large-scale results by examining a long time-series of high-quality temperature data from the Secular Meteorological Station in Potsdam, where observation records over the last 117 years, i.e., from January 1893 are available. Tendencies of change in seasonal temperature-related climate extremes are demonstrated. "Cold" extremes have become less frequent and less severe than in the past, while "warm" extremes have become more frequent and more severe. Moreover, the interval of the occurrence of frost has been decreasing, while the interval of the occurrence of hot days has been increasing. However, many changes are not statistically significant, since the variability of temperature indices at the Potsdam station has been very strong.

  4. Infant Temperament: Stability by Age, Gender, Birth Order, Term Status, and SES

    PubMed Central

    Bornstein, Marc H.; Putnick, Diane L.; Gartstein, Maria A.; Hahn, Chun-Shin; Auestad, Nancy; O’Connor, Deborah L.

    2015-01-01

    Two complementary studies focused on stability of infant temperament across the first year and considered infant age, gender, birth order, term status, and socioeconomic status (SES) as moderators. Study 1 consisted of 73 mothers of firstborn term girls and boys queried at 2, 5, and 13 months of age. Study 2 consisted of 335 mothers of infants of different gender, birth order, term status, and SES queried at 6 and 12 months. Consistent positive and negative affectivity factors emerged at all time-points across both studies. Infant temperament proved stable and robust across gender, birth order, term status, and SES. Stability coefficients for temperament factors and scales were medium to large for shorter (<9 months) inter-assessment intervals and small to medium for longer (>10 months) intervals. PMID:25865034

5. Enhanced sensitivity to the time variation of the fine-structure constant and m_p/m_e in diatomic molecules.

    PubMed

    Flambaum, V V; Kozlov, M G

    2007-10-12

Sensitivity to temporal variation of the fundamental constants may be strongly enhanced in transitions between narrow close levels of different nature. This enhancement may be realized in a large number of molecules due to cancellation between the ground-state fine-structure interval ω_f and the vibrational interval ω_v [ω = ω_f − nω_v ≈ 0, δω/ω = K(2δα/α + 0.5 δμ/μ), K > 1, μ = m_p/m_e]. The intervals between the levels are conveniently located in the microwave frequency range and the level widths are very small. The required accuracy of the shift measurements is about 0.01-1 Hz. As examples, we consider the molecules Cl₂⁺, CuS, IrC, SiBr, and HfF⁺.
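The quoted enhancement can be seen with a toy numeric example (all frequencies below are made-up arbitrary units, not data for any of the molecules listed): when n vibrational quanta nearly cancel the fine-structure interval, the fractional shift of the small residual ω is amplified by K ≈ ω_f/ω.

```python
# Hypothetical numbers, purely to illustrate the cancellation mechanism.
omega_f = 100.0                # fine-structure interval (arbitrary units)
n, omega_v = 3, 33.0           # n vibrational quanta of omega_v each
omega = omega_f - n * omega_v  # small residual transition frequency: 1.0
K = omega_f / omega            # enhancement factor: 100.0

d_alpha = 1e-15                # assumed fractional drift of alpha
d_mu = 0.0                     # assume m_p/m_e constant here
d_omega_rel = K * (2 * d_alpha + 0.5 * d_mu)
print(K)                       # → 100.0
print(d_omega_rel)             # ~2e-13: a hundredfold amplification
```

The same fractional drift applied to an unenhanced transition would shift it by only 2e-15, which is why near-cancelling level pairs are attractive targets.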

  6. Photometric monitoring of three BL Lacertae objects in 1993-1998

    NASA Astrophysics Data System (ADS)

    Bai, J. M.; Xie, G. Z.; Li, K. H.; Zhang, X.; Liu, W. W.

    1999-05-01

The results of optical photometric (BVRI) monitoring of three BL Lac objects over a time interval of about four years are presented. The sources are three classical radio-selected BL Lac objects, BL Lac, OJ 287 and PKS 0735+178. During our observations OJ 287 was in the stage of a large periodic outburst which consisted of at least two peaks. Almost all the observations obtained over consecutive nights detected intranight variations. In 1995 and 1996 BL Lac remained in faint states, with fewer and smaller rapid flares and fluctuations. On the contrary, in late 1997 BL Lac was at the stage of a large outburst, accompanied by many more large-amplitude rapid flares and fluctuations. PKS 0735+178 was almost at its faint end from 1994 to early 1998. Over this time interval, the intraday variations and microvariations in PKS 0735+178 were rare and their amplitude was very small, except for a rapid darkening of ~ 0.4 mag on 24 January 1995. Previous work by Webb et al. (1988), Wagner et al. (1996) and Pian et al. (1997) also showed the same variability behaviour as BL Lac and PKS 0735+178 in BL Lac, S5 0716+714 and PKS 2155-304, respectively. We propose that changes in the orientation of the relativistic jet in a BL Lac object are responsible for these variability behaviours. Table 1 is only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/Abstract.html

  7. Nonelective surgery at night and in-hospital mortality: Prospective observational data from the European Surgical Outcomes Study.

    PubMed

    van Zaane, Bas; van Klei, Wilton A; Buhre, Wolfgang F; Bauer, Peter; Boerma, E Christiaan; Hoeft, Andreas; Metnitz, Philipp; Moreno, Rui P; Pearse, Rupert; Pelosi, Paolo; Sander, Michael; Vallet, Benoit; Pettilä, Ville; Vincent, Jean-Louis; Rhodes, Andrew

    2015-07-01

    Evidence suggests that sleep deprivation associated with night-time working may adversely affect performance resulting in a reduction in the safety of surgery and anaesthesia. Our primary objective was to evaluate an association between nonelective night-time surgery and in-hospital mortality. We hypothesised that urgent surgery performed during the night was associated with higher in-hospital mortality and also an increase in the duration of hospital stay and the number of admissions to critical care. A prospective cohort study. This is a secondary analysis of a large database related to perioperative care and outcome (European Surgical Outcome Study). Four hundred and ninety-eight hospitals in 28 European countries. Men and women older than 16 years who underwent nonelective, noncardiac surgery were included according to time of the procedure. None. Primary outcome was in-hospital mortality; the secondary outcome was the duration of hospital stay and critical care admission. Eleven thousand two hundred and ninety patients undergoing urgent surgery were included in the analysis with 636 in-hospital deaths (5.6%). Crude mortality odds ratios (ORs) increased sequentially from daytime [426 deaths (5.3%)] to evening [150 deaths (6.0%), OR 1.14; 95% confidence interval 0.94 to 1.38] to night-time [60 deaths (8.3%), OR 1.62; 95% confidence interval 1.22 to 2.14]. Following adjustment for confounding factors, surgery during the evening (OR 1.09; 95% confidence interval 0.91 to 1.31) and night (OR 1.20; 95% confidence interval 0.9 to 1.6) was not associated with an increased risk of postoperative death. Admittance rate to an ICU increased sequentially from daytime [891 (11.1%)], to evening [347 (13.8%)] to night time [149 (20.6%)]. 
In patients undergoing nonelective urgent noncardiac surgery, in-hospital mortality was associated with well known risk factors related to patients and surgery, but we did not identify any relationship with the time of day at which the procedure was performed. Clinicaltrials.gov identifier: NCT01203605.
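As a sanity check, the crude night-time odds ratio above can be recomputed from the reported counts with a standard Wald 95% confidence interval; the group totals used here (8038 daytime and 723 night-time patients) are back-calculated from the quoted death percentages and are therefore approximate assumptions.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald CI for a 2x2 table: (a deaths, b survivors) vs (c, d)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

night_deaths, night_total = 60, 723    # 8.3% of 723 (back-calculated total)
day_deaths, day_total = 426, 8038      # 5.3% of 8038 (back-calculated total)
or_, lo, hi = odds_ratio_ci(night_deaths, night_total - night_deaths,
                            day_deaths, day_total - day_deaths)
print(round(or_, 2), round(lo, 2), round(hi, 2))   # → 1.62 1.22 2.14
```

This reproduces the crude night-time OR of 1.62 (1.22 to 2.14); the adjusted ORs in the abstract come from a multivariable model, which this calculation does not attempt.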

  8. Persistence of non-Markovian Gaussian stationary processes in discrete time

    NASA Astrophysics Data System (ADS)

    Nyberg, Markus; Lizana, Ludvig

    2018-04-01

The persistence of a stochastic variable is the probability that it does not cross a given level during a fixed time interval. Although persistence is a simple concept to understand, it is in general hard to calculate. Here we consider zero-mean Gaussian stationary processes in discrete time n. Few results are known for the persistence P0(n) in discrete time, except the large-time behavior, which is characterized by the nontrivial constant θ through P0(n) ~ θ^n. Using a modified version of the independent interval approximation (IIA) that we developed before, we are able to calculate P0(n) analytically in z-transform space in terms of the autocorrelation function A(n). If A(n) → 0 as n → ∞, we extract θ numerically, while if A(n) = 0 for finite n > N, we find θ exactly (within the IIA). We apply our results to three special cases: the nearest-neighbor-correlated "first order moving average process", where A(n) = 0 for n > 1; the double-exponential-correlated "second order autoregressive process", where A(n) = c1 λ1^n + c2 λ2^n; and power-law-correlated variables, where A(n) ~ n^(−μ). Apart from the power-law case when μ < 5, we find excellent agreement with simulations.
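The moving-average special case lends itself to a quick Monte Carlo check (an illustrative simulation under assumed conventions, not the paper's IIA calculation): simulate X_t = (e_t + e_{t−1})/√2, which has A(1) = 1/2 and A(n) = 0 for n > 1, estimate P0(n) as the fraction of trajectories with no sign change through n steps, and read θ off the large-n ratio P0(n)/P0(n−1).

```python
import math
import random

def persistence(n_max=12, trials=200_000, seed=1):
    """Estimate P0(1..n_max) for the MA(1) process X_t = (e_t + e_{t-1})/sqrt(2)."""
    random.seed(seed)
    survive = [0] * (n_max + 1)
    for _ in range(trials):
        e_prev = random.gauss(0.0, 1.0)
        sign = None
        for n in range(1, n_max + 1):
            e = random.gauss(0.0, 1.0)
            x = (e + e_prev) / math.sqrt(2.0)
            e_prev = e
            if sign is None:
                sign = x > 0          # sign fixed by the first step
            elif (x > 0) != sign:
                break                 # first zero crossing: trajectory dies
            survive[n] += 1
    return [s / trials for s in survive[1:]]

p = persistence()
theta = p[-1] / p[-2]   # large-n ratio estimate of theta
print(round(theta, 2))
```

For a positively correlated process like this one, θ exceeds the 1/2 of uncorrelated noise; the estimate is statistical and fluctuates with the seed and trial count.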

  9. Bayesian dynamic regression models for interval censored survival data with application to children dental health.

    PubMed

    Wang, Xiaojing; Chen, Ming-Hui; Yan, Jun

    2013-07-01

Cox models with time-varying coefficients offer great flexibility in capturing the temporal dynamics of covariate effects on event times, which could be hidden from a Cox proportional hazards model. Methodology development for varying coefficient Cox models, however, has been largely limited to right censored data; only limited work on interval censored data has been done. In most existing methods for varying coefficient models, analysts need to specify which covariate coefficients are time-varying and which are not at the time of fitting. We propose a dynamic Cox regression model for interval censored data in a Bayesian framework, where the coefficient curves are piecewise constant but the number of pieces and the jump points are covariate specific and estimated from the data. The model automatically determines the extent to which the temporal dynamics is needed for each covariate, resulting in smoother and more stable curve estimates. The posterior computation is carried out via an efficient reversible jump Markov chain Monte Carlo algorithm. Inference of each coefficient is based on an average of models with different numbers of pieces and jump points. A simulation study with three covariates, each with a coefficient of different degree in temporal dynamics, confirmed that the dynamic model is preferred to the existing time-varying model in terms of model comparison criteria through conditional predictive ordinate. When applied to dental health data of children aged between 7 and 12 years, the dynamic model reveals that the relative risk of emergence of permanent tooth 24 between children with and without an infected primary predecessor is the highest at around age 7.5, and that it gradually reduces to one after age 11. These findings were not seen in the existing studies with Cox proportional hazards models.

  10. Evaluation of telephone first approach to demand management in English general practice: observational study.

    PubMed

    Newbould, Jennifer; Abel, Gary; Ball, Sarah; Corbett, Jennie; Elliott, Marc; Exley, Josephine; Martin, Adam; Saunders, Catherine; Wilson, Edward; Winpenny, Eleanor; Yang, Miaoqing; Roland, Martin

    2017-09-27

    Objective  To evaluate a "telephone first" approach, in which all patients wanting to see a general practitioner (GP) are asked to speak to a GP on the phone before being given an appointment for a face to face consultation. Design  Time series and cross sectional analysis of routine healthcare data, data from national surveys, and primary survey data. Participants  147 general practices adopting the telephone first approach compared with a 10% random sample of other practices in England. Intervention  Management support for workload planning and introduction of the telephone first approach provided by two commercial companies. Main outcome measures  Number of consultations, total time consulting (59 telephone first practices, no controls). Patient experience (GP Patient Survey, telephone first practices plus controls). Use and costs of secondary care (hospital episode statistics, telephone first practices plus controls). The main analysis was intention to treat, with sensitivity analyses restricted to practices thought to be closely following the companies' protocols. Results  After the introduction of the telephone first approach, face to face consultations decreased considerably (adjusted change within practices -38%, 95% confidence interval -45% to -29%; P<0.001). An average practice experienced a 12-fold increase in telephone consultations (1204%, 633% to 2290%; P<0.001). The average duration of both telephone and face to face consultations decreased, but there was an overall increase of 8% in the mean time spent consulting by GPs, albeit with large uncertainty on this estimate (95% confidence interval -1% to 17%; P=0.088). These average workload figures mask wide variation between practices, with some practices experiencing a substantial reduction in workload and others a large increase. 
Compared with other English practices in the national GP Patient Survey, in practices using the telephone first approach there was a large (20.0 percentage points, 95% confidence interval 18.2 to 21.9; P<0.001) improvement in length of time to be seen. In contrast, other scores on the GP Patient Survey were slightly more negative. Introduction of the telephone first approach was followed by a small (2.0%) increase in hospital admissions (95% confidence interval 1% to 3%; P=0.006), no initial change in emergency department attendance, but a small (2% per year) decrease in the subsequent rate of rise of emergency department attendance (1% to 3%; P=0.005). There was a small net increase in secondary care costs. Conclusions  The telephone first approach shows that many problems in general practice can be dealt with over the phone. The approach does not suit all patients or practices and is not a panacea for meeting demand. There was no evidence to support claims that the approach would, on average, save costs or reduce use of secondary care. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  11. Evaluation of telephone first approach to demand management in English general practice: observational study

    PubMed Central

    Newbould, Jennifer; Abel, Gary; Ball, Sarah; Corbett, Jennie; Elliott, Marc; Exley, Josephine; Martin, Adam; Saunders, Catherine; Wilson, Edward; Winpenny, Eleanor; Yang, Miaoqing

    2017-01-01

    Objective To evaluate a “telephone first” approach, in which all patients wanting to see a general practitioner (GP) are asked to speak to a GP on the phone before being given an appointment for a face to face consultation. Design Time series and cross sectional analysis of routine healthcare data, data from national surveys, and primary survey data. Participants 147 general practices adopting the telephone first approach compared with a 10% random sample of other practices in England. Intervention Management support for workload planning and introduction of the telephone first approach provided by two commercial companies. Main outcome measures Number of consultations, total time consulting (59 telephone first practices, no controls). Patient experience (GP Patient Survey, telephone first practices plus controls). Use and costs of secondary care (hospital episode statistics, telephone first practices plus controls). The main analysis was intention to treat, with sensitivity analyses restricted to practices thought to be closely following the companies’ protocols. Results After the introduction of the telephone first approach, face to face consultations decreased considerably (adjusted change within practices −38%, 95% confidence interval −45% to −29%; P<0.001). An average practice experienced a 12-fold increase in telephone consultations (1204%, 633% to 2290%; P<0.001). The average duration of both telephone and face to face consultations decreased, but there was an overall increase of 8% in the mean time spent consulting by GPs, albeit with large uncertainty on this estimate (95% confidence interval −1% to 17%; P=0.088). These average workload figures mask wide variation between practices, with some practices experiencing a substantial reduction in workload and others a large increase. 
Compared with other English practices in the national GP Patient Survey, in practices using the telephone first approach there was a large (20.0 percentage points, 95% confidence interval 18.2 to 21.9; P<0.001) improvement in length of time to be seen. In contrast, other scores on the GP Patient Survey were slightly more negative. Introduction of the telephone first approach was followed by a small (2.0%) increase in hospital admissions (95% confidence interval 1% to 3%; P=0.006), no initial change in emergency department attendance, but a small (2% per year) decrease in the subsequent rate of rise of emergency department attendance (1% to 3%; P=0.005). There was a small net increase in secondary care costs. Conclusions The telephone first approach shows that many problems in general practice can be dealt with over the phone. The approach does not suit all patients or practices and is not a panacea for meeting demand. There was no evidence to support claims that the approach would, on average, save costs or reduce use of secondary care. PMID:28954741

  12. Paleogeography and Depositional Systems of Cretaceous-Oligocene Strata: Eastern Precordillera, Argentina

    NASA Astrophysics Data System (ADS)

    Reat, Ellen J.; Fosdick, Julie C.

    2016-04-01

    New data from the Argentine Precordillera in the southern Central Andes document changes in depositional environment and sediment accumulation rates during Upper Cretaceous through Oligocene basin evolution, prior to the onset of Miocene foredeep sedimentation. This work presents new sedimentology, detrital geochronology, and geologic mapping from a series of continental strata within this interval to resolve the timing of sedimentation, nature of depositional environments, and basin paleogeography at the nascent phase of Andean orogenic events, prior to the uplift and deformation of the Precordillera to the west. Five stratigraphic sections were measured across both limbs of the Huaco Anticline, detailing sedimentology of the terrestrial siliciclastic upper Patquía, Ciénaga del Río Huaco (CRH), Puesto la Flecha, Vallecito, and lower Cerro Morado formations. Paleocurrent data indicate a flow direction change from predominantly NE-SW in the upper Patquía and the lower CRH to SW-NE directed flow in the upper CRH, consistent with a large meandering river system and a potential rise in topography towards the west. This interpretation is further supported by pebble lag intervals and 1-3 meter scale trough cross-bedding in the CRH. The thinly laminated gypsum deposits and siltstones of the younger Puesto la Flecha Formation indicate an upsection transition into overbank and lacustrine sedimentation during semi-arid climatic conditions, before the onset of aeolian dune formation. New maximum depositional age results from detrital zircon U-Pb analysis indicate that the Puesto la Flecha Formation spans ~57 Myr (~92 to ~35 Ma) across a ~48 m thick interval without evidence for major erosion, indicating very low sedimentation rates. This time interval may represent distal foredeep or forebulge migration resulting from western lithospheric loading due to the onset of Andean deformation at this latitude. 
Detrital zircon U-Pb age spectra also indicate shifts in sediment routing pathways over time, consistent with a transition from local basement-sourced quartz-rich sediments during the Triassic-Cretaceous to increased volcanic and sedimentary lithics from the rising Andes in the west during Paleocene-Eocene time. We therefore interpret these changes in depositional character as representing a transition from a large fluvial system with craton-sourced sediments during the Triassic-Cretaceous CRH to low energy lacustrine and ephemeral playa environments with an increase in westerly derived sediments during the Paleocene-Eocene Puesto la Flecha, prior to the reported Oligocene onset of the Andean continental foredeep represented by the Vallecito Formation.

  13. Examining Temporal Sample Scale and Model Choice with Spatial Capture-Recapture Models in the Common Leopard Panthera pardus.

    PubMed

    Goldberg, Joshua F; Tempa, Tshering; Norbu, Nawang; Hebblewhite, Mark; Mills, L Scott; Wangchuk, Tshewang R; Lukacs, Paul

    2015-01-01

    Many large carnivores occupy a wide geographic distribution, and face threats from habitat loss and fragmentation, poaching, prey depletion, and human-wildlife conflicts. Conservation requires robust techniques for estimating population densities and trends, but the elusive nature and low densities of many large carnivores make them difficult to detect. Spatial capture-recapture (SCR) models provide a means for handling imperfect detectability, while linking population estimates to individual movement patterns to provide more accurate estimates than standard approaches. Within this framework, we investigate the effect of different sample interval lengths on density estimates, using simulations and a common leopard (Panthera pardus) model system. We apply Bayesian SCR methods to 89 simulated datasets and camera-trapping data from 22 leopards captured 82 times during winter 2010-2011 in Royal Manas National Park, Bhutan. We show that sample interval lengths ranging from daily to weekly, monthly, or quarterly periods did not appreciably affect median abundance or density, but did influence precision. We observed the largest gains in precision when moving from quarterly to shorter intervals. We therefore recommend daily sampling intervals for monitoring rare or elusive species where practicable, but note that monthly or quarterly sample periods can have similar informative value. We further develop a novel application of Bayes factors to select models where multiple ecological factors are integrated into density estimation. Our simulations demonstrate that these methods can help identify the "true" explanatory mechanisms underlying the data. Using this method, we found strong evidence for sex-specific movement distributions in leopards, suggesting that sexual patterns of space-use influence density. This model estimated a density of 10.0 leopards/100 km2 (95% credibility interval: 6.25-15.93), comparable to contemporary estimates in Asia. 
These SCR methods provide a guide to monitor and observe the effect of management interventions on leopards and other species of conservation interest.
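
    The occasion-aggregation step at the heart of this comparison is easy to sketch. The snippet below is an illustration, not the authors' code: it assumes binary daily detection histories and the common SCR convention that an individual counts as detected in an occasion if it was detected on any day within it.

```python
import numpy as np

def aggregate_occasions(daily, k):
    """Collapse a (individuals x days) binary detection array into
    occasions of k days each: an individual is detected in an occasion
    if it was detected on any day within it. Trailing days that do not
    fill a whole occasion are dropped."""
    n_ind, n_days = daily.shape
    n_occ = n_days // k
    trimmed = daily[:, :n_occ * k].reshape(n_ind, n_occ, k)
    return trimmed.max(axis=2)

# Toy example: 3 individuals observed over 8 camera-trap days
daily = np.array([
    [1, 0, 0, 0, 1, 0, 0, 0],
    [0, 0, 1, 1, 0, 0, 0, 1],
    [0, 0, 0, 0, 0, 0, 0, 0],
])
weeklyish = aggregate_occasions(daily, 4)  # two 4-day occasions
print(weeklyish)
```

    Refitting the same SCR model to each aggregation level of the detection histories is what lets the authors compare estimate precision across temporal sample scales.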

  14. Examining Temporal Sample Scale and Model Choice with Spatial Capture-Recapture Models in the Common Leopard Panthera pardus

    PubMed Central

    Goldberg, Joshua F.; Tempa, Tshering; Norbu, Nawang; Hebblewhite, Mark; Mills, L. Scott; Wangchuk, Tshewang R.; Lukacs, Paul

    2015-01-01

    Many large carnivores occupy a wide geographic distribution, and face threats from habitat loss and fragmentation, poaching, prey depletion, and human-wildlife conflicts. Conservation requires robust techniques for estimating population densities and trends, but the elusive nature and low densities of many large carnivores make them difficult to detect. Spatial capture-recapture (SCR) models provide a means for handling imperfect detectability, while linking population estimates to individual movement patterns to provide more accurate estimates than standard approaches. Within this framework, we investigate the effect of different sample interval lengths on density estimates, using simulations and a common leopard (Panthera pardus) model system. We apply Bayesian SCR methods to 89 simulated datasets and camera-trapping data from 22 leopards captured 82 times during winter 2010–2011 in Royal Manas National Park, Bhutan. We show that sample interval lengths ranging from daily to weekly, monthly, or quarterly periods did not appreciably affect median abundance or density, but did influence precision. We observed the largest gains in precision when moving from quarterly to shorter intervals. We therefore recommend daily sampling intervals for monitoring rare or elusive species where practicable, but note that monthly or quarterly sample periods can have similar informative value. We further develop a novel application of Bayes factors to select models where multiple ecological factors are integrated into density estimation. Our simulations demonstrate that these methods can help identify the “true” explanatory mechanisms underlying the data. Using this method, we found strong evidence for sex-specific movement distributions in leopards, suggesting that sexual patterns of space-use influence density. This model estimated a density of 10.0 leopards/100 km2 (95% credibility interval: 6.25–15.93), comparable to contemporary estimates in Asia. 
These SCR methods provide a guide to monitor and observe the effect of management interventions on leopards and other species of conservation interest. PMID:26536231

  15. Evaluating the utility of hexapod species for calculating a confidence interval about a succession based postmortem interval estimate.

    PubMed

    Perez, Anne E; Haskell, Neal H; Wells, Jeffrey D

    2014-08-01

    Carrion insect succession patterns have long been used to estimate the postmortem interval (PMI) during a death investigation. However, no published carrion succession study included sufficient replication to calculate a confidence interval about a PMI estimate based on occurrence data. We exposed 53 pig carcasses (16±2.5 kg), near the likely minimum needed for such statistical analysis, at a site in north-central Indiana, USA, over three consecutive summer seasons. Insects and Collembola were sampled daily from each carcass for a total of 14 days, by which time each was skeletonized. The criteria for judging a life stage of a given species to be potentially useful for succession-based PMI estimation were (1) nonreoccurrence (observed during a single period of presence on a corpse), and (2) presence in a sufficiently large proportion of carcasses to support a PMI confidence interval. For this data set that proportion threshold is 45/53. Of the 266 species collected and identified, none was nonreoccurring, in that each showed at least a gap of one day on a single carcass. If the definition of nonreoccurrence is relaxed to include such a single one-day gap, the larval forms of Necrophila americana, Fannia scalaris, Cochliomyia macellaria, Phormia regina, and Lucilia illustris satisfied these two criteria. Adults of Creophilus maxillosus, Necrobia ruficollis, and Necrodes surinamensis were common and showed only a few single-day gaps in occurrence. C. maxillosus, P. regina, and L. illustris displayed exceptional forensic utility in that they were observed on every carcass. Although these observations were made at a single site during one season of the year, the species we found to be useful have large geographic ranges. We suggest that future carrion insect succession research focus only on a limited set of species with high potential forensic utility so as to reduce sampling effort per carcass and thereby enable increased experimental replication. 
Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  16. Variations in rupture process with recurrence interval in a repeated small earthquake

    USGS Publications Warehouse

    Vidale, J.E.; Ellsworth, W.L.; Cole, A.; Marone, Chris

    1994-01-01

    In theory and in laboratory experiments, friction on sliding surfaces such as rock, glass and metal increases with time since the previous episode of slip. This time dependence is a central pillar of the friction laws widely used to model earthquake phenomena. On natural faults, other properties, such as rupture velocity, porosity and fluid pressure, may also vary with the recurrence interval. Eighteen repetitions of the same small earthquake, separated by intervals ranging from a few days to several years, allow us to test these laboratory predictions in situ. The events with the longest time since the previous earthquake tend to have about 15% larger seismic moment than those with the shortest intervals, although this trend is weak. In addition, the rupture durations of the events with the longest recurrence intervals are more than a factor of two shorter than for the events with the shortest intervals. Both decreased duration and increased friction are consistent with progressive fault healing during the time of stationary contact.

  17. Spatial Distribution of the Coefficient of Variation for the Paleo-Earthquakes in Japan

    NASA Astrophysics Data System (ADS)

    Nomura, S.; Ogata, Y.

    2015-12-01

    Renewal processes, point processes in which the intervals between consecutive events are independently and identically distributed, are frequently used to describe repeating earthquake mechanisms and to forecast the next events. However, one of the difficulties in applying recurrent earthquake models is the scarcity of historical data. Most studied fault segments have few observed earthquakes, or only one, often with poorly constrained historic and/or radiocarbon ages. The maximum likelihood estimate from such a small data set can have a large bias and error, and when the observed recurrence intervals have similar lengths it tends to yield a high probability for the next event within a very short time span. On the other hand, recurrence intervals at a fault depend, on average, on the long-term slip rate driven by tectonic motion. In addition, recurrence times fluctuate because of nearby earthquakes or fault activity that encourages or discourages surrounding seismicity. These factors have spatial trends due to the heterogeneity of tectonic motion and seismicity. This paper therefore introduces a spatial structure on the key parameters of renewal processes for recurrent earthquakes and estimates it using spatial statistics. Spatial variation in the mean and variance of recurrence times is estimated in a Bayesian framework, and the next earthquakes are forecast with Bayesian predictive distributions. The proposed model is applied to a catalog of recurrent earthquakes in Japan, and its results are compared with the current forecast adopted by the Earthquake Research Committee of Japan.
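
    The small-sample overconfidence described above can be made concrete with a deliberately naive sketch: a plain (non-spatial, non-Bayesian) lognormal renewal model fitted by maximum likelihood to a handful of similar recurrence intervals, then used for a conditional forecast. All interval values below are hypothetical.

```python
import math

def lognormal_mle(intervals):
    """MLE of a lognormal fit: mean and (biased) std of log-intervals."""
    logs = [math.log(x) for x in intervals]
    mu = sum(logs) / len(logs)
    var = sum((l - mu) ** 2 for l in logs) / len(logs)
    return mu, math.sqrt(var)

def lognorm_cdf(t, mu, sigma):
    return 0.5 * (1.0 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2.0))))

def cond_prob(intervals, elapsed, horizon):
    """P(next event within `horizon` years | `elapsed` years since last),
    under a lognormal renewal model fitted by MLE."""
    mu, sigma = lognormal_mle(intervals)
    survive = 1.0 - lognorm_cdf(elapsed, mu, sigma)
    return (lognorm_cdf(elapsed + horizon, mu, sigma) -
            lognorm_cdf(elapsed, mu, sigma)) / survive

# Hypothetical paleoseismic record with near-equal intervals
intervals = [95.0, 105.0, 100.0, 98.0]
p = cond_prob(intervals, elapsed=90.0, horizon=20.0)
print(p)
```

    Because the four intervals are nearly equal, the fitted sigma is tiny and the model becomes almost certain the next event falls within the 20-year window, exactly the failure mode the abstract notes; pooling information across faults spatially, as the authors propose, is one way to temper such fits.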

  18. Walking through doorways causes forgetting: Event structure or updating disruption?

    PubMed

    Pettijohn, Kyle A; Radvansky, Gabriel A

    2016-11-01

    According to event cognition theory, people segment experience into separate event models. One consequence of this segmentation is that when people transport objects from one location to another, memory for those objects is worse than if people move the same distance within a single large location. In two experiments participants navigated through a virtual environment, and recognition memory was tested in either the presence or the absence of a location shift for objects that were recently interacted with (i.e., just picked up or set down). Of particular concern here is whether this location updating effect is due to (a) differences in retention intervals as a result of the navigation process, (b) a temporary disruption in cognitive processing that may occur as a result of the updating processes, or (c) a need to manage multiple event models, as has been suggested in prior research. Experiment 1 explored whether retention interval is driving this effect by recording travel times between the acquisition of an object and the probe time. The results revealed that travel times were similar, thereby rejecting a retention interval explanation. Experiment 2 explored whether a temporary disruption in processing is producing the effect by introducing a 3-second delay prior to the presentation of a memory probe. The pattern of results was not affected by adding a delay, thereby rejecting a temporary disruption account. These results are interpreted in the context of the event horizon model, which suggests that when multiple event models contain common elements there is interference at retrieval, which compromises performance.

  19. Variable selection in a flexible parametric mixture cure model with interval-censored data.

    PubMed

    Scolas, Sylvie; El Ghouch, Anouar; Legrand, Catherine; Oulhaj, Abderrahim

    2016-03-30

    In standard survival analysis, it is generally assumed that every individual will someday experience the event of interest. However, this is not always the case, as some individuals may not be susceptible to this event. Also, in medical studies, it is common for patients to be assessed only at scheduled visits, so that the time to the event is known only to lie between two visits. That is, the data are interval-censored with a cure fraction. Variable selection in such a setting is of outstanding interest. Covariates impacting survival are not necessarily the same as those impacting the probability of experiencing the event. The objective of this paper is to develop a parametric but flexible statistical model to analyze data that are interval-censored and include a fraction of cured individuals when the number of potential covariates may be large. We use the parametric mixture cure model with an accelerated failure time regression model for the survival, along with the extended generalized gamma distribution for the error term. To overcome the issue of non-stable and non-continuous variable selection procedures, we extend the adaptive LASSO to our model. By means of simulation studies, we show good performance of our method and discuss the behavior of the estimates with varying cure and censoring proportions. Lastly, our proposed method is illustrated with a real dataset studying the time until conversion to mild cognitive impairment, a possible precursor of Alzheimer's disease. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  20. Buffered coscheduling for parallel programming and enhanced fault tolerance

    DOEpatents

    Petrini, Fabrizio [Los Alamos, NM; Feng, Wu-chun [Los Alamos, NM

    2006-01-31

    A computer implemented method schedules processor jobs on a network of parallel machine processors or distributed system processors. Control information communications generated by each process performed by each processor during a defined time interval are accumulated in buffers, where adjacent time intervals are separated by strobe intervals for a global exchange of control information. A global exchange of the control information communications at the end of each defined time interval is performed during an intervening strobe interval so that each processor is informed by all of the other processors of the number of incoming jobs to be received by each processor in a subsequent time interval. The buffered coscheduling method of this invention also enhances the fault tolerance of a network of parallel machine processors or distributed system processors.
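
    The strobe-interval exchange described in the patent can be caricatured in a few lines: during each compute interval processors only queue their communication requests, and at the strobe every processor learns how many messages it will receive in the next interval. This is an illustrative single-process sketch of the idea, not the patented parallel implementation.

```python
def strobe_exchange(buffers, n_proc):
    """buffers[p] is the list of destination ids queued by processor p
    during the interval just ended. Returns incoming[q] = number of
    messages processor q should expect in the next interval, i.e. the
    result of the global control-information exchange at the strobe."""
    incoming = [0] * n_proc
    for p in range(n_proc):
        for dest in buffers[p]:
            incoming[dest] += 1
    return incoming

n_proc = 4
buffers = [[1, 2], [3], [], [0, 0, 1]]   # queued sends per processor
incoming = strobe_exchange(buffers, n_proc)
print(incoming)
```

    Because every processor sees the same global counts at the strobe, communication in the following interval can be scheduled without surprises, which is what makes the buffered approach amenable to fault tolerance.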

  1. Using Large-Scale Linkage Data to Evaluate the Effectiveness of a National Educational Program on Antithrombotic Prescribing and Associated Stroke Prevention in Primary Care.

    PubMed

    Liu, Zhixin; Moorin, Rachael; Worthington, John; Tofler, Geoffrey; Bartlett, Mark; Khan, Rabia; Zuo, Yeqin

    2016-10-13

    The National Prescribing Service (NPS) MedicineWise Stroke Prevention Program, which was implemented nationally in 2009-2010 in Australia, sought to improve antithrombotic prescribing in stroke prevention using dedicated interventions that target general practitioners. This study evaluated the impact of the NPS MedicineWise Stroke Prevention Program on antithrombotic prescribing and primary stroke hospitalizations. This population-based time series study used administrative health data linked to 45 and Up Study participants with a high risk of cardiovascular disease (CVD) to assess the possible impact of the NPS MedicineWise program on first-time aspirin prescriptions and primary stroke-related hospitalizations. Time series analysis showed that the NPS MedicineWise program was significantly associated with increased first-time prescribing of aspirin (P=0.03) and decreased hospitalizations for primary ischemic stroke (P=0.03) in the at-risk study population (n=90 023). First-time aspirin prescription was correlated with a reduction in the rate of hospitalization for primary stroke (P=0.02). Following intervention, the number of first-time aspirin prescriptions increased by 19.8% (95% confidence interval, 1.6-38.0), while the number of first-time stroke hospitalizations decreased by 17.3% (95% confidence interval, 1.8-30.0). Consistent with NPS MedicineWise program messages for the high-risk CVD population, the NPS MedicineWise Stroke Prevention Program (2009) was associated with increased initiation of aspirin and a reduced rate of hospitalization for primary stroke. The findings suggest that the provision of evidence-based multifaceted large-scale educational programs in primary care can be effective in changing prescriber behavior and positively impacting patient health outcomes. © 2016 The Authors and NPS MedicineWise. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
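
    Program evaluations of this kind typically rest on interrupted (segmented) time-series regression, with terms for a level change and a trend change at the intervention. The sketch below is a generic version of that design on synthetic data with hypothetical numbers; it is not the study's actual statistical model.

```python
import numpy as np

def its_fit(y, t0):
    """Segmented (interrupted) time-series regression: baseline level and
    trend, plus a level change and a trend change at index t0.
    Returns [baseline level, baseline trend, level change, trend change]."""
    n = len(y)
    t = np.arange(n, dtype=float)
    post = (t >= t0).astype(float)
    X = np.column_stack([np.ones(n), t, post, post * (t - t0)])
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return beta

# Synthetic monthly series: steady rise, then a step up after month 24
rng = np.random.default_rng(1)
y = 100 + 0.5 * np.arange(48) + rng.normal(0, 1.0, 48)
y[24:] += 10.0                                # simulated intervention effect
level0, trend0, jump, dtrend = its_fit(y, 24)
print(round(jump, 1))
```

    The fitted `jump` recovers the simulated step change; in a real evaluation the same coefficient, with its confidence interval, is what quantifies the post-program shift in prescribing or hospitalization rates.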

  2. Multiscale analysis of heart rate dynamics: entropy and time irreversibility measures.

    PubMed

    Costa, Madalena D; Peng, Chung-Kang; Goldberger, Ary L

    2008-06-01

    Cardiovascular signals are largely analyzed using traditional time and frequency domain measures. However, such measures fail to account for important properties related to multiscale organization and non-equilibrium dynamics. The complementary role of conventional signal analysis methods and emerging multiscale techniques, is, therefore, an important frontier area of investigation. The key finding of this presentation is that two recently developed multiscale computational tools--multiscale entropy and multiscale time irreversibility--are able to extract information from cardiac interbeat interval time series not contained in traditional methods based on mean, variance or Fourier spectrum (two-point correlation) techniques. These new methods, with careful attention to their limitations, may be useful in diagnostics, risk stratification and detection of toxicity of cardiac drugs.
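
    As a concrete illustration of the coarse-graining idea behind multiscale entropy, here is a minimal pure-Python sketch. It is not the authors' implementation, and for simplicity the tolerance r is taken in absolute units rather than the usual fraction of the series' standard deviation; the parameter choices are illustrative.

```python
import math

def coarse_grain(x, scale):
    """Average consecutive non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return [sum(x[i*scale:(i+1)*scale]) / scale for i in range(n)]

def sample_entropy(x, m=2, r=0.2):
    """SampEn = -ln(A/B): A counts template-pair matches of length m+1,
    B of length m, within absolute tolerance r (Chebyshev distance)."""
    def count(mm):
        c, n = 0, len(x)
        for i in range(n - mm):
            for j in range(i + 1, n - mm + 1):
                if max(abs(x[i+k] - x[j+k]) for k in range(mm)) <= r:
                    c += 1
        return c
    b, a = count(m), count(m + 1)
    return float('inf') if a == 0 or b == 0 else -math.log(a / b)

# A strictly alternating series is very regular, so its SampEn is low;
# coarse-graining it at scale 2 collapses it to a constant series.
x = [0.0, 1.0] * 20
print(coarse_grain(x, 2))
e = sample_entropy(x, m=2, r=0.5)
```

    Multiscale entropy is simply `sample_entropy` evaluated on `coarse_grain(x, s)` for a range of scales s, which is what lets it separate complex dynamics from uncorrelated noise.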

  3. Modeling global vector fields of chaotic systems from noisy time series with the aid of structure-selection techniques.

    PubMed

    Xu, Daolin; Lu, Fangfang

    2006-12-01

    We address the problem of reconstructing a set of nonlinear differential equations from chaotic time series. A method that combines implicit Adams integration and the structure-selection technique of an error reduction ratio is proposed for system identification and the corresponding parameter estimation of the model. The structure-selection technique identifies the significant terms from a pool of candidate functional basis terms and determines the optimal model through orthogonal characteristics of the data. Combining the technique with the Adams integration algorithm makes reconstruction feasible for data sampled at large time intervals. Numerical experiments on the Lorenz and Rössler systems show that the proposed strategy is effective for global vector field reconstruction from noisy time series.
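
    The flavor of this reconstruction can be conveyed with a simplified stand-in: instead of the implicit Adams scheme and error-reduction-ratio machinery of the paper, the toy below integrates the Lorenz system, estimates derivatives by central differences, least-squares fits a monomial library, and keeps the terms with non-negligible coefficients. Everything here is an illustrative simplification, not the authors' algorithm.

```python
import numpy as np

def lorenz_rk4(n, dt=0.01, s=10.0, r=28.0, b=8.0/3.0):
    """Integrate the Lorenz system with classical RK4."""
    f = lambda v: np.array([s * (v[1] - v[0]),
                            v[0] * (r - v[2]) - v[1],
                            v[0] * v[1] - b * v[2]])
    xyz = np.empty((n, 3))
    xyz[0] = (1.0, 1.0, 1.0)
    for i in range(n - 1):
        v = xyz[i]
        k1 = f(v); k2 = f(v + dt/2*k1); k3 = f(v + dt/2*k2); k4 = f(v + dt*k3)
        xyz[i+1] = v + dt/6 * (k1 + 2*k2 + 2*k3 + k4)
    return xyz

dt = 0.01
xyz = lorenz_rk4(4000, dt)
dxdt = (xyz[2:, 0] - xyz[:-2, 0]) / (2 * dt)     # central-difference dx/dt
x, y, z = xyz[1:-1].T

# Candidate library: monomials up to degree 2 in (x, y, z)
names = ['1', 'x', 'y', 'z', 'xx', 'xy', 'xz', 'yy', 'yz', 'zz']
theta = np.column_stack([np.ones_like(x), x, y, z,
                         x*x, x*y, x*z, y*y, y*z, z*z])
coef, *_ = np.linalg.lstsq(theta, dxdt, rcond=None)
# Keep only terms with clearly nonzero coefficients
model = {nm: round(float(c), 2) for nm, c in zip(names, coef) if abs(c) > 0.5}
print(model)
```

    The surviving terms recover dx/dt = 10(y - x), i.e. the first Lorenz equation; the paper's error-reduction-ratio selection plays the role of the coefficient thresholding here, and its Adams-based formulation is what keeps this workable when samples are widely spaced and noisy.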

  4. Multiscale Analysis of Heart Rate Dynamics: Entropy and Time Irreversibility Measures

    PubMed Central

    Peng, Chung-Kang; Goldberger, Ary L.

    2016-01-01

    Cardiovascular signals are largely analyzed using traditional time and frequency domain measures. However, such measures fail to account for important properties related to multiscale organization and nonequilibrium dynamics. The complementary role of conventional signal analysis methods and emerging multiscale techniques, is, therefore, an important frontier area of investigation. The key finding of this presentation is that two recently developed multiscale computational tools— multiscale entropy and multiscale time irreversibility—are able to extract information from cardiac interbeat interval time series not contained in traditional methods based on mean, variance or Fourier spectrum (two-point correlation) techniques. These new methods, with careful attention to their limitations, may be useful in diagnostics, risk stratification and detection of toxicity of cardiac drugs. PMID:18172763

  5. Important influence of respiration on human R-R interval power spectra is largely ignored

    NASA Technical Reports Server (NTRS)

    Brown, T. E.; Beightol, L. A.; Koh, J.; Eckberg, D. L.

    1993-01-01

    Frequency-domain analyses of R-R intervals are used widely to estimate levels of autonomic neural traffic to the human heart. Because respiration modulates autonomic activity, we determined for nine healthy subjects the influence of breathing frequency and tidal volume on R-R interval power spectra (fast-Fourier transform method). We also surveyed published literature to determine current practices in this burgeoning field of scientific inquiry. Supine subjects breathed at rates of 6, 7.5, 10, 15, 17.1, 20, and 24 breaths/min and with nominal tidal volumes of 1,000 and 1,500 ml. R-R interval power at respiratory and low (0.06-0.14 Hz) frequencies declined significantly as breathing frequency increased. R-R interval power at respiratory frequencies was significantly greater at a tidal volume of 1,500 than 1,000 ml. Neither breathing frequency nor tidal volume influenced average R-R intervals significantly. Our review of studies reporting human R-R interval power spectra showed that 51% of the studies controlled respiratory rate, 11% controlled tidal volume, and 11% controlled both respiratory rate and tidal volume. The major implications of our analyses are that breathing parameters strongly influence low-frequency as well as respiratory frequency R-R interval power spectra and that this influence is largely ignored in published research.
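
    The band-power computation at issue can be sketched as follows. The 0.06-0.14 Hz low-frequency band matches the abstract; the 4 Hz resampling rate and the synthetic R-R series are assumptions for illustration, since R-R intervals are inherently unevenly sampled and must be interpolated onto an even grid before an FFT.

```python
import numpy as np

def rr_band_power(rr_s, f_lo, f_hi, fs=4.0):
    """Spectral power of an R-R interval series in [f_lo, f_hi] Hz.
    rr_s: successive R-R intervals in seconds, resampled at fs Hz
    on the cumulative beat-time axis before the FFT."""
    t = np.cumsum(rr_s)                        # beat times
    grid = np.arange(t[0], t[-1], 1.0 / fs)
    x = np.interp(grid, t, rr_s)
    x = x - x.mean()                           # remove DC component
    spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return spec[band].sum()

# Synthetic series: 0.8 s mean R-R with a 0.25 Hz respiratory modulation
beats = np.zeros(300)
t = 0.0
for i in range(300):
    rr = 0.8 + 0.05 * np.sin(2 * np.pi * 0.25 * t)
    beats[i] = rr
    t += rr
hf = rr_band_power(beats, 0.15, 0.40)   # respiratory band
lf = rr_band_power(beats, 0.06, 0.14)   # low-frequency band
print(hf > lf)
```

    With the modulation placed at a typical breathing rate of 0.25 Hz, the power lands in the respiratory band; slowing the simulated breathing toward 0.1 Hz would move it into the low-frequency band, which is precisely why uncontrolled respiration confounds these spectra.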

  6. Dynamic Response of Metal-Polymer Bilayers - Viscoelasticity, Adhesion and Failure

    DTIC Science & Technology

    2013-11-25

    polymers, particularly at large stretches. In the method developed for this purpose, a gas gun is used to impact a flange and impart a known velocity...Mott in his fragmentation model. Winter (1979) developed a method based on a gas-gun launched projectile in order to generate a rapidly...6061-O The steady ring expansion speed reached in the time interval 15-50 μs is quoted. There is a brief period of acceleration for about 15

  7. On the predictability of individual events in power-law systems

    NASA Astrophysics Data System (ADS)

    de, Rumi

    2006-11-01

    We consider a modified Burridge-Knopoff model, with a view to understanding results of acoustic emission (AE) relevant to earthquakes, by adding a dissipative term which mimics bursts of acoustic signals. Interestingly, we find a precursor effect in the cumulative energy dissipated which allows identification of a large slip event. Further, the AE activity for several large slip events follows a universal stretched exponential behavior with corrections in terms of time-to-failure. We find that many features of the statistics of AE signals, such as their amplitudes, durations and the intervals between successive AE bursts, obey power laws consistent with recent experimental results. Large-magnitude events have a different power law from that of the small ones, the latter being sensitive to the pulling speed.

  8. Apportionment of motor vehicle emissions from fast changes in number concentration and chemical composition of ultrafine particles near a roadway intersection.

    PubMed

    Klems, Joseph P; Pennington, M Ross; Zordan, Christopher A; McFadden, Lauren; Johnston, Murray V

    2011-07-01

    High frequency spikes in ultrafine number concentration near a roadway intersection arise from motor vehicles that accelerate after a red light turns green. The present work describes a method to determine the contribution of motor vehicles to the total ambient ultrafine particle mass by correlating these number concentration spikes with fast changes in ultrafine particle chemical composition measured with the nano aerosol mass spectrometer, NAMS. Measurements were performed at an urban air quality monitoring site in Wilmington, Delaware during the summer and winter of 2009. Motor vehicles were found to contribute 48% of the ultrafine particle mass in the winter measurement period, but only 16% of the ultrafine particle mass in the summer period. Chemical composition profiles and contributions to the ultrafine particle mass of spark vs. diesel vehicles were estimated by correlating still camera images, chemical composition, and spike contribution at each time interval. The spark and diesel contributions were roughly equal, but the uncertainty in the split was large. The distribution of emissions from individual vehicles was determined by correlating camera images with the spike contribution to particle number concentration at each time interval. A small percentage of motor vehicles were found to emit a disproportionately large concentration of ultrafine particles, and these high emitters included both spark ignition and diesel vehicles.
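
    One simple way to caricature this spike-based apportionment is to treat the slowly varying regional background as a rolling low percentile of the concentration series and attribute everything above it to traffic spikes. The window, percentile, and all concentrations below are assumptions for illustration, not values from the study.

```python
import numpy as np

def spike_fraction(conc, window=25, pct=25):
    """Split a concentration time series into a slowly varying baseline
    (rolling low percentile) and spikes above it; return the fraction
    of the total signal attributable to spikes."""
    n = len(conc)
    base = np.empty(n)
    half = window // 2
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        base[i] = np.percentile(conc[lo:hi], pct)
    spikes = np.clip(conc - base, 0, None)
    return spikes.sum() / conc.sum()

rng = np.random.default_rng(0)
conc = 5000 + 100 * rng.standard_normal(500)   # regional background (#/cm3)
conc[::50] += 20000                            # traffic-light acceleration spikes
frac = spike_fraction(conc)
print(round(frac, 2))
```

    The rolling percentile ignores brief excursions, so the 20,000-unit spikes are assigned almost entirely to the "motor vehicle" component while the background is not.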

  9. Are There Multiple Populations of Fast Radio Bursts?

    NASA Astrophysics Data System (ADS)

    Palaniswamy, Divya; Li, Ye; Zhang, Bing

    2018-02-01

    The repeating FRB 121102 (the “repeater”) shows repetitive bursting activities and was localized in a host galaxy at z = 0.193. On the other hand, despite dozens of hours of telescope time spent on follow-up observations, no other fast radio bursts (FRBs) have been observed to repeat. Yet, it has been speculated that the repeater is the prototype of FRBs, and that other FRBs should show similar repeating patterns. Using the published data, we compare the repeater with other FRBs in the observed time interval (Δt)–flux ratio (S_i/S_{i+1}) plane. We find that whereas other FRBs occupy the upper (large S_i/S_{i+1}) and right (large Δt) regions of the plane due to the non-detections of other bursts, some of the repeater bursts fall into the lower left region of the plot (short interval and small flux ratio) excluded by the non-detection data of other FRBs. The trend also exists even if one only selects those bursts detectable by the Parkes radio telescope. If other FRBs were similar to the repeater, our simulations suggest that the probability that none of them have been detected to repeat with the current searches would be ∼10^(−4)–10^(−3). We suggest that the repeater is not representative of the entire FRB population, and that there is strong evidence of more than one population of FRBs.
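
    The (Δt, S_i/S_{i+1}) plane used in this comparison is simple to construct from a burst list; a minimal sketch with toy data (the function name and values are ours, not from the paper):

```python
def interval_flux_ratio_pairs(times, fluxes):
    """(Δt, S_i/S_{i+1}) pairs for consecutive bursts from one source."""
    return [(times[i + 1] - times[i], fluxes[i] / fluxes[i + 1])
            for i in range(len(times) - 1)]

# Toy burst list: arrival times and peak fluxes (illustrative units).
pairs = interval_flux_ratio_pairs([0, 3, 10], [2.0, 4.0, 1.0])
# Repeater-like bursts populate small Δt and flux ratios near 1; non-repeating
# FRBs, by non-detection, only constrain the large-Δt, large-ratio corner.
```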

  10. Eliminating livelock by assigning the same priority state to each message that is input into a flushable routing system during N time intervals

    DOEpatents

    Faber, V.

    1994-11-29

    Livelock-free message routing is provided in a network of interconnected nodes that is flushable in time T. An input message processor generates sequences of at least N time intervals, each of duration T. An input register provides for receiving and holding each input message, where the message is assigned a priority state p during an nth one of the N time intervals. At each of the network nodes a message processor reads the assigned priority state and awards priority to messages with priority state (p-1) during an nth time interval and to messages with priority state p during an (n+1)th time interval. The messages that are awarded priority are output on an output path toward the addressed output message processor. Thus, no message remains in the network for a time longer than T. 4 figures.
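
    The interval-indexed priority rule described in the abstract can be sketched as a small software model (the patent describes hardware; the function names and the modular wrap-around over N states are our assumptions):

```python
def assign_priority(arrival_time, T, N):
    """A message input during the nth interval of duration T is assigned
    priority state n mod N (toy model of the patent's input register)."""
    return int(arrival_time // T) % N

def wins_arbitration(msg_priority, current_interval, N):
    """During the nth interval a node awards priority to messages carrying
    priority state (n - 1) mod N, i.e. those injected one flush time earlier."""
    return msg_priority == (current_interval - 1) % N

# A message injected during interval 3 is awarded priority throughout
# interval 4, so it cannot remain in a network flushable in time T longer
# than T -- which is what rules out livelock.
p = assign_priority(arrival_time=7.0, T=2.0, N=8)   # enters during interval 3
```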

  11. Eliminating livelock by assigning the same priority state to each message that is inputted into a flushable routing system during N time intervals

    DOEpatents

    Faber, Vance

    1994-01-01

    Livelock-free message routing is provided in a network of interconnected nodes that is flushable in time T. An input message processor generates sequences of at least N time intervals, each of duration T. An input register provides for receiving and holding each input message, where the message is assigned a priority state p during an nth one of the N time intervals. At each of the network nodes a message processor reads the assigned priority state and awards priority to messages with priority state (p-1) during an nth time interval and to messages with priority state p during an (n+1)th time interval. The messages that are awarded priority are output on an output path toward the addressed output message processor. Thus, no message remains in the network for a time longer than T.

  12. Holographic calculation for large interval Rényi entropy at high temperature

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Wu, Jie-qiang

    2015-11-01

    In this paper, we study the holographic Rényi entropy of a large interval on a circle at high temperature for the two-dimensional conformal field theory (CFT) dual to pure AdS3 gravity. In the field theory, the Rényi entropy is encoded in the CFT partition function on n-sheeted tori connected with each other by a large branch cut. As proposed by Chen and Wu [Large interval limit of Rényi entropy at high temperature, arXiv:1412.0763], the effective way to read the entropy in the large interval limit is to insert a complete set of state bases of the twist sector at the branch cut. The calculation then transforms into an expansion of four-point functions in the twist sector with respect to e^(−2πTR/n). By using the operator product expansion of the twist operators at the branch points, we read off the first few terms of the Rényi entropy, including the leading and next-to-leading contributions in the large central charge limit. Moreover, we show that the leading contribution is actually captured by the twist vacuum module. In this case, by the Ward identity, the four-point functions can be derived from the correlation function of four twist operators, which is related to the double-interval entanglement entropy. Holographically, we apply the recipes in [T. Faulkner, The entanglement Rényi entropies of disjoint intervals in AdS/CFT, arXiv:1303.7221] and [T. Barrella et al., Holographic entanglement beyond classical gravity, J. High Energy Phys. 09 (2013) 109] to compute the classical Rényi entropy and its one-loop quantum correction, after imposing a new set of monodromy conditions. The holographic classical result matches exactly with the leading contribution in the field theory up to e^(−4πTR) and l^6, while the holographic one-loop contribution is in exact agreement with the next-to-leading results in field theory up to e^(−6πTR/n) and l^4.

  13. Recurrence time statistics for finite size intervals

    NASA Astrophysics Data System (ADS)

    Altmann, Eduardo G.; da Silva, Elton C.; Caldas, Iberê L.

    2004-12-01

    We investigate the statistics of recurrences to finite size intervals for chaotic dynamical systems. We find that the typical distribution presents an exponential decay for almost all recurrence times except for a few short times affected by a kind of memory effect. We interpret this effect as being related to the unstable periodic orbits inside the interval. Although it is restricted to a few short times, it changes the whole distribution of recurrences. We show that for systems with strong mixing properties the exponential decay converges to the Poissonian statistics when the width of the interval goes to zero. However, we caution that special attention to the size of the interval is required in order to guarantee that the short time memory effect is negligible when one is interested in numerically or experimentally calculated Poincaré recurrence time statistics.
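
    The statistics described above are easy to probe numerically; a minimal sketch using the fully chaotic r = 4 logistic map, whose invariant (arcsine) measure is known in closed form (the interval endpoints and iteration count are illustrative choices of ours):

```python
import math

def recurrence_times(a, b, n_steps=100000, x0=0.4):
    """Successive return times (in iterations) of the r = 4 logistic map
    orbit to the interval [a, b]."""
    x, last_hit, times = x0, None, []
    for t in range(n_steps):
        x = 4.0 * x * (1.0 - x)
        if a <= x <= b:
            if last_hit is not None:
                times.append(t - last_hit)
            last_hit = t
    return times

times = recurrence_times(0.30, 0.34)
mean_rt = sum(times) / len(times)
# Invariant measure of [a, b] under the arcsine density 1/(pi*sqrt(x(1-x))):
mu = (2.0 / math.pi) * (math.asin(math.sqrt(0.34)) - math.asin(math.sqrt(0.30)))
# By Kac's lemma the mean recurrence time is close to 1/mu, while the
# distribution itself decays exponentially apart from short-time memory effects.
```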

  14. Fast transfer of crossmodal time interval training.

    PubMed

    Chen, Lihan; Zhou, Xiaolin

    2014-06-01

    Sub-second time perception is essential for many important sensory and perceptual tasks including speech perception, motion perception, motor coordination, and crossmodal interaction. This study investigates to what extent the ability to discriminate sub-second time intervals acquired in one sensory modality can be transferred to another modality. To this end, we used perceptual classification of visual Ternus display (Ternus in Psychol Forsch 7:81-136, 1926) to implicitly measure participants' interval perception in pre- and posttests and implemented an intra- or crossmodal sub-second interval discrimination training protocol in between the tests. The Ternus display elicited either an "element motion" or a "group motion" percept, depending on the inter-stimulus interval between the two visual frames. The training protocol required participants to explicitly compare the interval length between a pair of visual, auditory, or tactile stimuli with a standard interval or to implicitly perceive the length of visual, auditory, or tactile intervals by completing a non-temporal task (discrimination of auditory pitch or tactile intensity). Results showed that after fast explicit training of interval discrimination (about 15 min), participants improved their ability to categorize the visual apparent motion in Ternus displays, although the training benefits were mild for visual timing training. However, the benefits were absent for implicit interval training protocols. This finding suggests that the timing ability in one modality can be rapidly acquired and used to improve timing-related performance in another modality and that there may exist a central clock for sub-second temporal processing, although modality-specific perceptual properties may constrain the functioning of this clock.

  15. An assessment of forest cover trends in South and North Korea, from 1980 to 2010.

    PubMed

    Engler, Robin; Teplyakov, Victor; Adams, Jonathan M

    2014-01-01

    It is generally believed that forest cover in North Korea has undergone a substantial decrease since 1980, while in South Korea, forest cover has remained relatively static during that same period of time. The United Nations Food and Agriculture Organization (FAO) Forest Resources Assessments--based on the reported forest inventories from North and South Korea--suggest a major forest cover decrease in North Korea, but only a slight decrease in South Korea during the last 30 years. In this study, we seek to check and validate those assessments by comparing them to independently derived forest cover maps compiled for three time intervals between 1990 and 2010, as well as to provide a spatially explicit view of forest cover change in the Korean Peninsula since the 1990s. We extracted tree cover data for the Korean Peninsula from existing global datasets derived from satellite imagery. Our estimates, while qualitatively supporting the FAO results, show that North Korea has lost a large number of densely forested areas, and thus in this sense has suffered heavier forest loss than the FAO assessment suggests. Given the limited time interval studied in our assessment, the overall forest loss from North Korea during the whole span of time since 1980 may have been even heavier than in our estimate. For South Korea, our results indicate that the forest cover has remained relatively stable at the national level, but that important variability in forest cover evolution exists at the regional level: While the northern and western provinces show an overall decrease in forested areas, large areas in the southeastern part of the country have increased their forest cover.

  16. A Deadline-Aware Scheduling and Forwarding Scheme in Wireless Sensor Networks.

    PubMed

    Dao, Thi-Nga; Yoon, Seokhoon; Kim, Jangyoung

    2016-01-05

    Many applications in wireless sensor networks (WSNs) require energy consumption to be minimized and the data delivered to the sink within a specific delay. A usual solution for reducing energy consumption is duty cycling, in which nodes periodically switch between sleep and active states. By increasing the duty cycle interval, consumed energy can be reduced more. However, a large duty cycle interval causes a long end-to-end (E2E) packet delay. As a result, the requirement of a specific delay bound for packet delivery may not be satisfied. In this paper, we aim at maximizing the duty cycle while still guaranteeing that the packets arrive at the sink with the required probability, i.e., the required delay-constrained success ratio (DCSR) is achieved. In order to meet this objective, we propose a novel scheduling and forwarding scheme, namely the deadline-aware scheduling and forwarding (DASF) algorithm. In DASF, the E2E delay distribution with the given network model and parameters is estimated in order to determine the maximum duty cycle interval, with which the required DCSR is satisfied. Each node independently selects a wake-up time using the selected interval, and packets are forwarded to a node in the potential forwarding set, which is determined based on the distance between nodes and the sink. DASF does not require time synchronization between nodes, and a node does not need to maintain neighboring node information in advance. Simulation results show that the proposed scheme can satisfy a required delay-constrained success ratio and outperforms existing algorithms in terms of E2E delay and DCSR.

  17. A Deadline-Aware Scheduling and Forwarding Scheme in Wireless Sensor Networks

    PubMed Central

    Dao, Thi-Nga; Yoon, Seokhoon; Kim, Jangyoung

    2016-01-01

    Many applications in wireless sensor networks (WSNs) require energy consumption to be minimized and the data delivered to the sink within a specific delay. A usual solution for reducing energy consumption is duty cycling, in which nodes periodically switch between sleep and active states. By increasing the duty cycle interval, consumed energy can be reduced more. However, a large duty cycle interval causes a long end-to-end (E2E) packet delay. As a result, the requirement of a specific delay bound for packet delivery may not be satisfied. In this paper, we aim at maximizing the duty cycle while still guaranteeing that the packets arrive at the sink with the required probability, i.e., the required delay-constrained success ratio (DCSR) is achieved. In order to meet this objective, we propose a novel scheduling and forwarding scheme, namely the deadline-aware scheduling and forwarding (DASF) algorithm. In DASF, the E2E delay distribution with the given network model and parameters is estimated in order to determine the maximum duty cycle interval, with which the required DCSR is satisfied. Each node independently selects a wake-up time using the selected interval, and packets are forwarded to a node in the potential forwarding set, which is determined based on the distance between nodes and the sink. DASF does not require time synchronization between nodes, and a node does not need to maintain neighboring node information in advance. Simulation results show that the proposed scheme can satisfy a required delay-constrained success ratio and outperforms existing algorithms in terms of E2E delay and DCSR. PMID:26742046
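
    The core selection step, picking the largest duty-cycle interval whose delay-constrained success ratio still meets the requirement, can be sketched with a toy Monte Carlo model. This is not the DASF estimator itself (which derives the E2E delay distribution analytically); the per-hop uniform-wait model, candidate list, and names are all our assumptions:

```python
import random

def e2e_delay(num_hops, interval, rng):
    """With unsynchronized wake-ups, a sender waits a uniform fraction of the
    duty-cycle interval at each hop (simplified per-hop delay model)."""
    return sum(rng.uniform(0.0, interval) for _ in range(num_hops))

def max_interval_for_dcsr(num_hops, deadline, required_dcsr, candidates,
                          trials=5000):
    """Largest candidate duty-cycle interval whose simulated delay-constrained
    success ratio (DCSR) still meets the requirement."""
    rng = random.Random(7)
    best = None
    for interval in sorted(candidates):
        hits = sum(e2e_delay(num_hops, interval, rng) <= deadline
                   for _ in range(trials))
        if hits / trials >= required_dcsr:
            best = interval   # DCSR falls as the interval grows, keep largest
    return best

chosen = max_interval_for_dcsr(num_hops=5, deadline=2.0, required_dcsr=0.9,
                               candidates=[0.25, 0.5, 0.75, 1.0])
```

    Longer intervals save energy but stretch the E2E delay, so the feasible set is the low end of the candidate list and the optimum is its upper edge.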

  18. Multi-year longitudinal profiles of cortisol and corticosterone recovered from baleen of North Atlantic right whales (Eubalaena glacialis).

    PubMed

    Hunt, Kathleen E; Lysiak, Nadine S; Moore, Michael; Rolland, Rosalind M

    2017-12-01

    Research into stress physiology of mysticete whales has been hampered by difficulty in obtaining repeated physiological samples from individuals over time. We investigated whether multi-year longitudinal records of glucocorticoids can be reconstructed from serial sampling along full-length baleen plates (representing ∼10 years of baleen growth), using baleen recovered from two female North Atlantic right whales (Eubalaena glacialis) of known reproductive history. Cortisol and corticosterone were quantified with immunoassay of subsamples taken every 4 cm (representing ∼60-day time intervals) along a full-length baleen plate from each female. In both whales, corticosterone was significantly elevated during known pregnancies (inferred from calf sightings and necropsy data) as compared to intercalving intervals; cortisol was significantly elevated during pregnancies in one female but not the other. Within intercalving intervals, corticosterone was significantly elevated during the first year (lactation year) and/or the second year (post-lactation year) as compared to later years of the intercalving interval, while cortisol showed more variable patterns. Cortisol occasionally showed brief high elevations ("spikes") not paralleled by corticosterone, suggesting that the two glucocorticoids might be differentially responsive to certain stressors. Generally, immunoreactive corticosterone was present in higher concentration in baleen than immunoreactive cortisol; the corticosterone:cortisol ratio was usually >4 and was highly variable in both individuals. Further investigation of baleen cortisol and corticosterone profiles could prove fruitful for elucidating long-term, multi-year patterns in stress physiology of large whales, determined retrospectively from stranded or archived specimens. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Maternal Asian ethnicity and the risk of anal sphincter injury.

    PubMed

    Davies-Tuck, Miranda; Biro, Mary-Anne; Mockler, Joanne; Stewart, Lynne; Wallace, Euan M; East, Christine

    2015-03-01

    To examine associations between maternal Asian ethnicity (South Asian and South East/East Asian) and anal sphincter injury. Retrospective cross-sectional study, comparing outcomes for Asian women with those of Australian and New Zealand women. A large metropolitan maternity service in Victoria, Australia. Australian/New Zealand, South Asian and South East/East Asian women who had a singleton vaginal birth from 2006 to 2012. The relation between maternal ethnicity and anal sphincter injury was assessed by logistic regression, adjusting for potential confounders. Anal sphincter injury was defined as a third or fourth degree tear (with or without episiotomy). Among 32,653 vaginal births there was a significant difference in the rate of anal sphincter injury by maternal region of birth (p < 0.001). After adjustment for confounders, nulliparous women born in South Asian and South East/East Asia were 2.6 (95% confidence interval 2.2-3.3; p < 0.001) and 2.1 (95% confidence interval 1.7-2.5; p < 0.001) times more likely to sustain an anal sphincter injury than Australian/New Zealand women, respectively. Parous women born in South Asian and South East/East Asia were 2.4 (95% confidence interval 1.8-3.2; p < 0.001) and 2.0 (95% confidence interval 1.5-2.7; p < 0.001) times more likely to sustain an anal sphincter injury than Australian/New Zealand women, respectively. There are ethnic differences in the rates of anal sphincter injury not fully explained by known risk factors for such trauma. This may have implications for care provision. © 2014 Nordic Federation of Societies of Obstetrics and Gynecology.

  20. Place avoidance learning and memory in a jumping spider.

    PubMed

    Peckmezian, Tina; Taylor, Phillip W

    2017-03-01

    Using a conditioned passive place avoidance paradigm, we investigated the relative importance of three experimental parameters on learning and memory in a salticid, Servaea incana. Spiders encountered an aversive electric shock stimulus paired with one side of a two-sided arena. Our three parameters were the ecological relevance of the visual stimulus, the time interval between trials and the time interval before test. We paired electric shock with either a black or white visual stimulus, as prior studies in our laboratory have demonstrated that S. incana prefer dark 'safe' regions to light ones. We additionally evaluated the influence of two temporal features (time interval between trials and time interval before test) on learning and memory. Spiders exposed to the shock stimulus learned to associate shock with the visual background cue, but the extent to which they did so was dependent on which visual stimulus was present and the time interval between trials. Spiders trained with a long interval between trials (24 h) maintained performance throughout training, whereas spiders trained with a short interval (10 min) maintained performance only when the safe side was black. When the safe side was white, performance worsened steadily over time. There was no difference between spiders tested after a short (10 min) or long (24 h) interval before test. These results suggest that the ecological relevance of the stimuli used and the duration of the interval between trials can influence learning and memory in jumping spiders.

  1. Estimation of postmortem interval based on colony development time for Anoplolepsis longipes (Hymenoptera: Formicidae).

    PubMed

    Goff, M L; Win, B H

    1997-11-01

    The postmortem interval for a set of human remains discovered inside a metal tool box was estimated using the development time required for a stratiomyid fly (Diptera: Stratiomyidae), Hermetia illucens, in combination with the time required to establish a colony of the ant Anoplolepsis longipes (Hymenoptera: Formicidae) capable of producing alate (winged) reproductives. This analysis resulted in a postmortem interval estimate of 14+ months, with a period of 14-18 months being the most probable time interval. The victim had been missing for approximately 18 months.

  2. TIME-INTERVAL MEASURING DEVICE

    DOEpatents

    Gross, J.E.

    1958-04-15

    An electronic device for measuring the time interval between two control pulses is presented. The device incorporates part of a previous approach to time measurement, in that pulses from a constant-frequency oscillator are counted during the interval between the control pulses. To reduce the possible counting error caused by the operation of the counter gating circuit at various points in the pulse cycle, the described device provides means for successively delaying the pulses by a fraction of the pulse period so that a final delay of one full period is obtained, and means for counting the pulses before and after each stage of delay during the time interval, whereby a plurality of totals is obtained which may be averaged and multiplied by the pulse period to obtain an accurate time-interval measurement.
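
    The averaging scheme can be illustrated numerically by modeling the oscillator as an ideal pulse train and sweeping the gate phase (a sketch with hypothetical helper names, not the patent's circuit):

```python
import math

def count_pulses(t_start, t_stop, period, phase):
    """Number of oscillator edges (at times phase + k*period, k = 0, 1, ...)
    falling inside the gated interval [t_start, t_stop]."""
    k_first = max(0, math.ceil((t_start - phase) / period))
    k_last = math.floor((t_stop - phase) / period)
    return max(0, k_last - k_first + 1)

def measure_interval(t_start, t_stop, period, n_delays):
    """Count with n_delays successive sub-period delays spanning one full
    period, then average the totals and multiply by the period, as the
    abstract describes."""
    totals = [count_pulses(t_start, t_stop, period, d * period / n_delays)
              for d in range(n_delays)]
    return (sum(totals) / n_delays) * period

# A single gated count is uncertain by up to one period; averaging the
# delayed counts recovers the interval to within period / n_delays.
est = measure_interval(0.13, 5.9, period=1.0, n_delays=10)
```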

  3. Monitoring molecular interactions using photon arrival-time interval distribution analysis

    DOEpatents

    Laurence, Ted A [Livermore, CA]; Weiss, Shimon [Los Angeles, CA]

    2009-10-06

    A method for analyzing/monitoring the properties of species that are labeled with fluorophores. A detector is used to detect photons emitted from species that are labeled with one or more fluorophores and located in a confocal detection volume. The arrival time of each of the photons is determined. The interval of time between various photon pairs is then determined to provide photon pair intervals. The number of photons that have arrival times within the photon pair intervals is also determined. The photon pair intervals are then used in combination with the corresponding counts of intervening photons to analyze properties and interactions of the molecules including brightness, concentration, coincidence and transit time. The method can be used for analyzing single photon streams and multiple photon streams.
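
    A much-simplified, fixed-lag version of the photon pair intervals described above can be sketched as follows (the actual method considers all photon pairs and the intervening counts jointly; the times and function name here are ours):

```python
def photon_pair_intervals(arrival_times, lag):
    """Time interval between each photon and the photon `lag` detections
    later; the number of intervening photons is lag - 1 by construction.
    (A fixed-lag simplification of the patent's all-pairs analysis.)"""
    return [arrival_times[i + lag] - arrival_times[i]
            for i in range(len(arrival_times) - lag)]

# Toy photon stream in integer detector ticks.
ts = [0, 8, 11, 25, 26, 40]
consecutive = photon_pair_intervals(ts, 1)   # 0 intervening photons
spanning = photon_pair_intervals(ts, 3)      # 2 intervening photons each
```

    Histogramming such pair intervals against the intervening-photon count is what lets brightness, concentration, and transit time be separated.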

  4. [Estimation of the atrioventricular time interval by pulse Doppler in the normal fetal heart].

    PubMed

    Hamela-Olkowska, Anita; Dangel, Joanna

    2009-08-01

    To assess normative values of the fetal atrioventricular (AV) time interval by pulse-wave Doppler methods on 5-chamber view. Fetal echocardiography exams were performed using Acuson Sequoia 512 in 140 singleton fetuses at 18 to 40 weeks of gestation with sinus rhythm and normal cardiac and extracardiac anatomy. Pulsed Doppler derived AV intervals were measured from left ventricular inflow/outflow view using transabdominal convex 3.5-6 MHz probe. The values of AV time interval ranged from 100 to 150 ms (mean 123 +/- 11.2). The AV interval was negatively correlated with the heart rhythm (p<0.001). Fetal heart rate decreased as gestation progressed (p<0.001). Thus, the AV intervals increased with the age of gestation (p=0.007). However, in the same subgroup of the fetal heart rate there was no relation between AV intervals and gestational age. Therefore, the AV intervals showed only the heart rate dependence. The 95th percentiles of AV intervals according to FHR ranged from 135 to 148 ms. 1. The AV interval duration was negatively correlated with the heart rhythm. 2. Measurement of AV time interval is easy to perform and has a good reproducibility. It may be used for the fetal heart block screening in anti-Ro and anti-La positive pregnancies. 3. Normative values established in the study may help obstetricians in assessing fetal abnormalities of the AV conduction.

  5. Hydraulic Tomography in Fractured Sedimentary Rocks to Estimate High-Resolution 3-D Distribution of Hydraulic Conductivity

    NASA Astrophysics Data System (ADS)

    Tiedeman, C. R.; Barrash, W.; Thrash, C. J.; Patterson, J.; Johnson, C. D.

    2016-12-01

    Hydraulic tomography was performed in a 100 m2 by 20 m thick volume of contaminated fractured mudstones at the former Naval Air Warfare Center (NAWC) in the Newark Basin, New Jersey, with the objective of estimating the detailed distribution of hydraulic conductivity (K). Characterizing the fine-scale K variability is important for designing effective remediation strategies in complex geologic settings such as fractured rock. In the tomography experiment, packers isolated two to six intervals in each of seven boreholes in the volume of investigation, and fiber-optic pressure transducers enabled collection of high-resolution drawdown observations. A hydraulic tomography dataset was obtained by conducting multiple aquifer tests in which a given isolated well interval was pumped and drawdown was monitored in all other intervals. The collective data from all tests display a wide range of behavior indicative of highly heterogeneous K within the tested volume, such as: drawdown curves for different intervals crossing one another on drawdown-time plots; unique drawdown curve shapes for certain intervals; and intervals with negligible drawdown adjacent to intervals with large drawdown. Tomographic inversion of data from 15 tests conducted in the first field season focused on estimating the K distribution at a scale of 1 m3 over approximately 25% of the investigated volume, where observation density was greatest. The estimated K field is consistent with prior geologic, geophysical, and hydraulic information, including: highly variable K within bedding-plane-parting fractures that are the primary flow and transport paths at NAWC, connected high-K features perpendicular to bedding, and a spatially heterogeneous distribution of low-K rock matrix and closed fractures. Subsequent tomographic testing was conducted in the second field season, with the region of high observation density expanded to cover a greater volume of the wellfield.

  6. Ultrasonic-assisted Aqueous Extraction and Physicochemical Characterization of Oil from Clanis bilineata.

    PubMed

    Sun, Mingmei; Xu, Xiao; Zhang, Qiuqin; Rui, Xin; Wu, Junjun; Dong, Mingsheng

    2018-02-01

    Ultrasound-assisted aqueous extraction (UAAE) was used to extract oil from Clanis bilineata (CB), a traditional edible insect that can be reared on a large scale in China, and the physicochemical property and antioxidant capacity of the UAAE-derived oil (UAAEO) were investigated for the first time. UAAE conditions of CB oil was optimized using response surface methodology (RSM) and the highest oil yield (19.47%) was obtained under optimal conditions for ultrasonic power, extraction temperature, extraction time, and ultrasonic interval time at 400 W, 40°C, 50 min, and 2 s, respectively. Compared with Soxhlet extraction-derived oil (SEO), UAAEO had lower acid (AV), peroxide (PV) and p-anisidine values (PAV) as well as higher polyunsaturated fatty acids contents and thermal stability. Furthermore, UAAEO showed stronger antioxidant activities than those of SEO, according to DPPH radical scavenging and β-carotene bleaching tests. Therefore, UAAE is a promising process for the large-scale production of CB oil and CB has a developing potential as functional oil resource.

  7. Bayesian analyses of time-interval data for environmental radiation monitoring.

    PubMed

    Luo, Peng; Sharp, Julia L; DeVol, Timothy A

    2013-01-01

    Time-interval (time difference between two consecutive pulses) analysis based on the principles of Bayesian inference was investigated for online radiation monitoring. Using experimental and simulated data, Bayesian analysis of time-interval data [Bayesian (ti)] was compared with Bayesian and a conventional frequentist analysis of counts in a fixed count time [Bayesian (cnt) and single interval test (SIT), respectively]. The performances of the three methods were compared in terms of average run length (ARL) and detection probability for several simulated detection scenarios. Experimental data were acquired with a DGF-4C system in list mode. Simulated data were obtained using Monte Carlo techniques to obtain a random sampling of the Poisson distribution. All statistical algorithms were developed using the R Project for statistical computing. Bayesian analysis of time-interval information provided a similar detection probability as Bayesian analysis of count information, but the authors were able to make a decision with fewer pulses at relatively higher radiation levels. In addition, for the cases with very short presence of the source (< count time), time-interval information is more sensitive to detect a change than count information since the source data is averaged by the background data over the entire count time. The relationships of the source time, change points, and modifications to the Bayesian approach for increasing detection probability are presented.
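
    Why interval data can trigger a decision after fewer pulses is visible in a minimal two-hypothesis Bayesian sketch: inter-pulse intervals under a Poisson process are exponential, so each pulse updates the posterior immediately. This is a simplification of the paper's approach; the rates, prior, and function names are our assumptions:

```python
import math
import random

def log_lik_exponential(intervals, rate):
    """Log-likelihood of inter-pulse intervals under a Poisson process of the
    given count rate (intervals are i.i.d. exponential with that rate)."""
    return sum(math.log(rate) - rate * dt for dt in intervals)

def posterior_source_present(intervals, bkg_rate, src_rate, prior=0.5):
    """Two-hypothesis Bayes: 'background + source' vs. 'background only'."""
    l1 = log_lik_exponential(intervals, bkg_rate + src_rate)
    l0 = log_lik_exponential(intervals, bkg_rate)
    bayes_factor = math.exp(l1 - l0)
    return prior * bayes_factor / (prior * bayes_factor + (1.0 - prior))

random.seed(1)
hot = [random.expovariate(12.0) for _ in range(40)]   # background 2 cps + source 10 cps
cold = [random.expovariate(2.0) for _ in range(40)]   # background only
p_hot = posterior_source_present(hot, bkg_rate=2.0, src_rate=10.0)
p_cold = posterior_source_present(cold, bkg_rate=2.0, src_rate=10.0)
```

    Because the update happens per pulse rather than per fixed count time, a brief source presence shorter than the count time is not diluted by background averaging.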

  8. Endovascular strategy for the elective treatment of concomitant aortoiliac aneurysm and symptomatic large bowel diverticular disease.

    PubMed

    Illuminati, Giulio; Ricco, Jean-Baptiste; Schneider, Fabrice; Caliò, Francesco G; Ceccanei, Gianluca; Pacilè, Maria A; Pizzardi, Giulia; Palumbo, Piergaspare; Vietri, Francesco

    2014-07-01

    The purpose of this study was to evaluate the strategy for treatment of patients presenting with asymptomatic diverticular disease of the large bowel associated with an asymptomatic aortoiliac aneurysmal (AAA) disease. Sixty-nine patients were included in this retrospective study. The patients were divided into 5 groups according to the type and sequence of the surgical treatment: 32 patients (47%) underwent colectomy followed by a staged open AAA repair (group A); 10 patients (14%) were treated with open AAA repair followed by a staged colectomy (group B); 13 patients (18%) received endovascular aneurysm repair (EVAR) followed by a staged bowel resection (group C); 8 patients (12%) had a bowel resection followed by staged EVAR (group D); and 6 patients (9%) underwent simultaneous open AAA repair and bowel resection (group E). Primary end points were mortality and complications after any of the procedures. Secondary end point was the time interval between the staged procedures. The cumulative death rate for delayed treatment of AAA was 6.5% and 0% for delayed treatment of diverticular disease (P=0.22). The mean time interval between the staged procedures was 11 days for EVAR/colon resection (group C and group D) and 73 days for open AAA repair/colon resection (group A and group B; P<0.01). EVAR allows a significant reduction in the time required between AAA repair and colon resection, but no definite rule can be established regarding the sequence of staged procedures. Combined procedures should be reserved for selected cases. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. Fixed-interval matching-to-sample: intermatching time and intermatching error runs

    PubMed Central

    Nelson, Thomas D.

    1978-01-01

    Four pigeons were trained on a matching-to-sample task in which reinforcers followed either the first matching response (fixed interval) or the fifth matching response (tandem fixed-interval fixed-ratio) that occurred 80 seconds or longer after the last reinforcement. Relative frequency distributions of the matching-to-sample responses that concluded intermatching times and runs of mismatches (intermatching error runs) were computed for the final matching responses directly followed by grain access and also for the three matching responses immediately preceding the final match. Comparison of these two distributions showed that the fixed-interval schedule arranged for the preferential reinforcement of matches concluding relatively extended intermatching times and runs of mismatches. Differences in matching accuracy and rate during the fixed interval, compared to the tandem fixed-interval fixed-ratio, suggested that reinforcers following matches concluding various intermatching times and runs of mismatches influenced the rate and accuracy of the last few matches before grain access, but did not control rate and accuracy throughout the entire fixed-interval period. PMID:16812032

  10. Improved confidence intervals when the sample is counted an integer times longer than the blank.

    PubMed

    Potter, William Edward; Strzelczyk, Jadwiga Jodi

    2011-05-01

    Past computer solutions for confidence intervals in paired counting are extended to the case where the ratio of the sample count time to the blank count time is taken to be an integer, IRR. Previously, confidence intervals have been named Neyman-Pearson confidence intervals; more correctly they should have been named Neyman confidence intervals or simply confidence intervals. The technique utilized mimics a technique used by Pearson and Hartley to tabulate confidence intervals for the expected value of the discrete Poisson and Binomial distributions. The blank count and the contribution of the sample to the gross count are assumed to be Poisson distributed. The expected value of the blank count, in the sample count time, is assumed known. The net count, OC, is taken to be the gross count minus the product of IRR with the blank count. The probability density function (PDF) for the net count can be determined in a straightforward manner.
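
    The "straightforward manner" mentioned for the net-count PDF can be illustrated with a short sketch (function names and parameters are illustrative, not the authors' code): with independent gross count G ~ Poisson(mu_g) and blank count B ~ Poisson(mu_b), and OC = G - IRR*B, we have P(OC = n) = sum over b of P(B = b) P(G = n + IRR*b):

```python
from math import exp, lgamma, log

def poisson_pmf(k, mu):
    """Poisson probability mass at integer k (0 for k < 0); log-space for stability."""
    if k < 0:
        return 0.0
    return exp(k * log(mu) - mu - lgamma(k + 1))

def net_count_pdf(n, mu_g, mu_b, irr, bmax=100):
    """P(OC = n) for OC = G - irr*B, with independent G ~ Poisson(mu_g)
    (gross count) and B ~ Poisson(mu_b) (blank count), irr a positive integer.
    Sums over blank counts b; bmax truncates the rapidly decaying tail."""
    return sum(poisson_pmf(b, mu_b) * poisson_pmf(n + irr * b, mu_g)
               for b in range(bmax + 1))
```

    Confidence intervals would then be obtained from this PDF; the sketch stops at the distribution itself.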

  11. The Time Is Up: Compression of Visual Time Interval Estimations of Bimodal Aperiodic Patterns

    PubMed Central

    Duarte, Fabiola; Lemus, Luis

    2017-01-01

    The ability to estimate time intervals subserves many of our behaviors and perceptual experiences. However, it is not clear how aperiodic (AP) stimuli affect our perception of time intervals across sensory modalities. To address this question, we evaluated the human capacity to discriminate between two acoustic (A), visual (V) or audiovisual (AV) time intervals of trains of scattered pulses. We first measured the periodicity of those stimuli and then sought for correlations with the accuracy and reaction times (RTs) of the subjects. We found that, for all time intervals tested in our experiment, the visual system consistently perceived AP stimuli as being shorter than the periodic (P) ones. In contrast, such a compression phenomenon was not apparent during auditory trials. Our conclusions are: first, the subjects exposed to P stimuli are more likely to measure their durations accurately. Second, perceptual time compression occurs for AP visual stimuli. Lastly, AV discriminations are determined by A dominance rather than by AV enhancement. PMID:28848406

  12. Animal choruses emerge from receiver psychology

    PubMed Central

    Greenfield, Michael D.; Esquer-Garrigos, Yareli; Streiff, Réjane; Party, Virginie

    2016-01-01

    Synchrony and alternation in large animal choruses are often viewed as adaptations by which cooperating males increase their attractiveness to females or evade predators. Alternatively, these seemingly composed productions may simply emerge by default from the receiver psychology of mate choice. This second, emergent property hypothesis has been inferred from findings that females in various acoustic species ignore male calls that follow a neighbor’s by a brief interval, that males often adjust the timing of their call rhythm and reduce the incidence of ineffective, following calls, and from simulations modeling the collective outcome of male adjustments. However, the purported connection between male song timing and female preference has never been tested experimentally, and the emergent property hypothesis has remained speculative. Studying a distinctive katydid species genetically structured as isolated populations, we conducted a comparative phylogenetic analysis of the correlation between male call timing and female preference. We report that across 17 sampled populations male adjustments match the interval over which females prefer leading calls; moreover, this correlation holds after correction for phylogenetic signal. Our study is the first demonstration that male adjustments coevolved with female preferences and thereby confirms the critical link in the emergent property model of chorus evolution. PMID:27670673

  13. An efficient solution to the decoherence enhanced trivial crossing problem in surface hopping

    NASA Astrophysics Data System (ADS)

    Bai, Xin; Qiu, Jing; Wang, Linjun

    2018-03-01

    We provide an in-depth investigation of the time interval convergence when both trivial crossing and decoherence corrections are applied to Tully's fewest switches surface hopping (FSSH) algorithm. Using one force-based and one energy-based decoherence strategy as examples, we show that decoherence corrections intrinsically enhance the trivial crossing problem. We propose a restricted decoherence (RD) strategy and incorporate it into the self-consistent (SC) fewest switches surface hopping algorithm [L. Wang and O. V. Prezhdo, J. Phys. Chem. Lett. 5, 713 (2014)]. The resulting SC-FSSH-RD approach is applied to general Hamiltonians with different electronic couplings and electron-phonon couplings to mimic charge transport in tens to hundreds of molecules. In all cases, SC-FSSH-RD allows us to use a large time interval of 0.1 fs for convergence, and the simulation time is reduced by over one order of magnitude. Both the band and hopping mechanisms of charge transport have been captured perfectly. SC-FSSH-RD makes surface hops in the adiabatic representation and can be implemented in both diabatic and locally diabatic representations for wave function propagation. SC-FSSH-RD can potentially describe general nonadiabatic dynamics of electrons and excitons in organics and other materials.

  14. Social tension as precursor of large damaging earthquake: legend or reality?

    NASA Astrophysics Data System (ADS)

    Molchanov, O.

    2008-11-01

    Using a case study of earthquake (EQ) activity and war conflicts in the Caucasus during the 1975-2002 time interval, and a correlation analysis of the global distribution of damaging EQs and war-related social tension during the 1901-2005 period, we conclude:

    • There is a statistically reliable increase of social tension several years (or several months in the case study) before damaging EQs,
    • There is an evident decrease of social tension several years after damaging EQs, probably due to society consolidation,
    • The preseismic effect is absent for large EQs in unpopulated areas,
    • There is some factual background for the legendary belief in Almighty retribution for abnormal social behavior.

  15. Time estimation by patients with frontal lesions and by Korsakoff amnesics.

    PubMed

    Mimura, M; Kinsbourne, M; O'Connor, M

    2000-07-01

    We studied time estimation in patients with frontal damage (F) and alcoholic Korsakoff (K) patients in order to differentiate between the contributions of working memory and episodic memory to temporal cognition. In Experiment 1, F and K patients estimated time intervals between 10 and 120 s less accurately than matched normal and alcoholic control subjects. F patients were less accurate than K patients at short (< 1 min) time intervals whereas K patients increasingly underestimated durations as intervals grew longer. F patients overestimated short intervals in inverse proportion to their performance on the Wisconsin Card Sorting Test. As intervals grew longer, overestimation yielded to underestimation for F patients. Experiment 2 involved time estimation while counting at a subjective 1/s rate. F patients' subjective tempo, though relatively rapid, did not fully explain their overestimation of short intervals. In Experiment 3, participants produced predetermined time intervals by depressing a mouse key. K patients underproduced longer intervals. F patients produced comparably to normal participants, but were extremely variable. Findings suggest that both working memory and episodic memory play an individual role in temporal cognition. Turnover within a short-term working memory buffer provides a metric for temporal decisions. The depleted working memory that typically attends frontal dysfunction may result in quicker turnover, and this may inflate subjective duration. On the other hand, temporal estimation beyond 30 s requires episodic remembering, and this puts K patients at a disadvantage.

  16. Method and apparatus for assessing cardiovascular risk

    NASA Technical Reports Server (NTRS)

    Albrecht, Paul (Inventor); Bigger, J. Thomas (Inventor); Cohen, Richard J. (Inventor)

    1998-01-01

    The method for assessing risk of an adverse clinical event includes detecting a physiologic signal in the subject and determining from the physiologic signal a sequence of intervals corresponding to time intervals between heart beats. The long-time structure of fluctuations in the intervals over a time period of more than fifteen minutes is analyzed to assess risk of an adverse clinical event. In a preferred embodiment, the physiologic signal is an electrocardiogram and the time period is at least fifteen minutes. A preferred method for analyzing the long-time structure variability in the intervals includes computing the power spectrum and fitting the power spectrum to a power law dependence on frequency over a selected frequency range such as 10^-4 to 10^-2 Hz. Characteristics of the long-time structure fluctuations in the intervals are used to assess risk of an adverse clinical event.
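
    The power-law fit described in the patent can be sketched as an ordinary least-squares fit in log-log space (a generic illustration; the function name and the spectrum arrays are assumptions, not the patent's implementation):

```python
import math

def power_law_slope(freqs, powers, fmin=1e-4, fmax=1e-2):
    """Ordinary least-squares slope of log10(power) vs. log10(frequency),
    restricted to [fmin, fmax]; the slope is the fitted power-law exponent."""
    pts = [(math.log10(f), math.log10(p))
           for f, p in zip(freqs, powers) if fmin <= f <= fmax and p > 0]
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)
```

    The fitted exponent would then serve as the risk-stratifying characteristic of the long-time interval fluctuations.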

  17. The implications of variable remigration intervals for the assessment of population size in marine turtles.

    PubMed

    Hays, G C

    2000-09-21

    Sea turtles nest on sandy beaches and tend to show high fidelity to specific nesting areas, but, despite this fidelity, the inter-annual variation in nesting numbers may be large. This variation may reflect the fact that turtles do not usually nest in consecutive years. Here, theoretical models are developed in which the interval between successive nesting years (the remigration interval) reflects conditions encountered on the feeding grounds, with good feeding years leading to a reduction in the remigration interval and vice versa. These simple models produce high levels of inter-annual variation in nesting numbers with, on occasion, almost no turtles nesting in some years even when the population is large and stable. The implications for assessing the size of sea turtle populations are considered. Copyright 2000 Academic Press.
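
    A toy simulation, with invented parameters, of the abstract's central idea (feeding conditions modulating the remigration interval of a fixed-size population) illustrates how large inter-annual swings in nesting numbers can arise even when the population is stable:

```python
import random

def simulate_nesting(n_turtles=1000, years=50, base_interval=3, seed=1):
    """Fixed-size turtle population; each year is randomly 'good' or 'bad' for
    feeding, shortening or lengthening the next remigration interval by one
    year. Returns the number of turtles nesting in each year."""
    rng = random.Random(seed)
    years_to_nesting = [rng.randrange(base_interval) for _ in range(n_turtles)]
    counts = []
    for _ in range(years):
        good = rng.random() < 0.5  # environmental condition shared by all turtles
        nesters = 0
        for i in range(n_turtles):
            if years_to_nesting[i] == 0:
                nesters += 1
                years_to_nesting[i] = max(1, base_interval + (-1 if good else 1))
            else:
                years_to_nesting[i] -= 1
        counts.append(nesters)
    return counts
```

    Even this crude version produces strongly fluctuating annual nesting counts, echoing the paper's caution about inferring population size from single-year censuses.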

  18. Heterogeneous Effects of Birth Spacing on Neonatal Mortality Risks in Bangladesh

    PubMed Central

    Molitoris, Joseph

    2018-01-01

    Abstract The negative relationship between birth interval length and neonatal mortality risks is well documented, but heterogeneity in this relationship has been largely ignored. Using the Bangladesh Maternal Mortality and Health Care Survey 2010, this study investigates how the effect of birth interval length on neonatal mortality risks varies by maternal age at birth and maternal education. There is significant variation in the effect of interval length on neonatal mortality along these dimensions. Young mothers and those with little education, both of which make up a large share of the Bangladeshi population, can disproportionately benefit from longer intervals. Because these results were obtained from within‐family models, they are not due to unobservable heterogeneity between mothers. Targeting women with these characteristics may lead to significant improvements in neonatal mortality rates, but there are significant challenges in reaching them. PMID:29508949

  19. The Precision of Effect Size Estimation From Published Psychological Research: Surveying Confidence Intervals.

    PubMed

    Brand, Andrew; Bradley, Michael T

    2016-02-01

    Confidence interval (CI) widths were calculated for reported Cohen's d standardized effect sizes and examined in two automated surveys of published psychological literature. The first survey reviewed 1,902 articles from Psychological Science. The second survey reviewed a total of 5,169 articles from across the following four APA journals: Journal of Abnormal Psychology, Journal of Applied Psychology, Journal of Experimental Psychology: Human Perception and Performance, and Developmental Psychology. The median CI width for d was greater than 1 in both surveys. Hence, CI widths were, as Cohen (1994) speculated, embarrassingly large. Additional exploratory analyses revealed that CI widths varied across psychological research areas and that CI widths were not discernibly decreasing over time. The theoretical implications of these findings are discussed along with ways of reducing the CI widths and thus improving precision of effect size estimation.

  20. Progress on the Cluster Mission

    NASA Technical Reports Server (NTRS)

    Kivelson, Margaret; Khurana, Krishan; Acuna, Mario (Technical Monitor)

    2002-01-01

    Prof. M. G. Kivelson and Dr. K. K. Khurana (UCLA (University of California, Los Angeles)) are co-investigators on the Cluster Magnetometer Consortium (CMC) that provided the fluxgate magnetometers and associated mission support for the Cluster Mission. The CMC designated UCLA as the site with primary responsibility for the inter-calibration of data from the four spacecraft and the production of fully corrected data critical to achieving the mission objectives. UCLA will also participate in the analysis and interpretation of the data. The UCLA group here reports its excellent progress in developing fully intra-calibrated data for large portions of the mission and an excellent start in developing inter-calibrated data for selected time intervals, especially the extended intervals in August 2001 that were the focus of a workshop held at ESTEC in March 2002. In addition, some scientific investigations were initiated and results were reported at meetings.

  1. Using Landslide Failure Forecast Models in Near Real Time: the Mt. de La Saxe case-study

    NASA Astrophysics Data System (ADS)

    Manconi, Andrea; Giordan, Daniele

    2014-05-01

    Forecasting the occurrence of landslide phenomena in space and time is a major scientific challenge. The approaches used to forecast landslides mainly depend on the spatial scale analyzed (regional vs. local), the temporal range of forecast (long- vs. short-term), as well as the triggering factor and the landslide typology considered. Focusing on short-term forecast methods for large, deep-seated slope instabilities, the potential time of failure (ToF) can be estimated by studying the evolution of the landslide deformation over time (i.e., strain rate), provided that, under constant stress conditions, landslide materials follow a creep mechanism before reaching rupture. In the last decades, different procedures have been proposed to estimate the ToF by applying simplified empirical and/or graphical methods to time series of deformation data. Fukuzono (1985) proposed a failure forecast method based on experience from large-scale laboratory experiments aimed at observing the kinematic evolution of a landslide induced by rain. This approach, also known as the inverse-velocity method, considers the evolution over time of the inverse of the surface velocity (v) as an indicator of the ToF, assuming that failure approaches as 1/v tends to zero. Here we present an innovative method aimed at forecasting the failure of landslide phenomena using near-real-time monitoring data. Starting from the inverse-velocity theory, we analyze landslide surface displacements over different temporal windows, and then apply straightforward statistical methods to obtain confidence intervals on the time of failure. Our results can be relevant to support the management of early warning systems during landslide emergency conditions, also when predefined displacement and/or velocity thresholds are exceeded.
In addition, our statistical approach for the definition of confidence intervals and forecast reliability can also be applied to different failure forecast methods. We applied the approach presented here for the first time in near real time during the emergency scenario relevant to the reactivation of the La Saxe rockslide, a large mass movement menacing the population of Courmayeur, northern Italy, and the important European route E25. We show how the application of simplified but robust forecast models can be a convenient method to manage and support early warning systems during critical situations. References: Fukuzono T. (1985), A New Method for Predicting the Failure Time of a Slope, Proc. IVth International Conference and Field Workshop on Landslides, Tokyo.
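
    The core of the inverse-velocity method can be sketched in a few lines (a generic illustration, not the authors' implementation; the function name and the synthetic data in the usage are invented): fit a straight line to 1/v versus time and extrapolate to where the line crosses zero.

```python
def inverse_velocity_tof(times, velocities):
    """Fit a straight line to 1/v versus t by least squares and return the
    extrapolated zero crossing of 1/v, i.e. the estimated time of failure."""
    inv_v = [1.0 / v for v in velocities]
    n = len(times)
    st = sum(times)
    si = sum(inv_v)
    stt = sum(t * t for t in times)
    sti = sum(t * y for t, y in zip(times, inv_v))
    slope = (n * sti - st * si) / (n * stt - st * st)
    intercept = (si - slope * st) / n
    return -intercept / slope  # where the fitted line reaches 1/v = 0
```

    Repeating this fit over different temporal windows, as the abstract describes, would yield a spread of ToF estimates from which confidence intervals can be derived.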

  2. A Review of Marine Geophysical Constraints on the Motion Between East and West Antarctica in the Cenozoic (Invited)

    NASA Astrophysics Data System (ADS)

    Cande, S. C.; Stock, J. M.

    2010-12-01

    Motion between East and West Antarctica in the Late Cretaceous and Cenozoic is derived by summing the plate circuit(s) linking East Antarctica to Australia to the Lord Howe Rise to the Pacific plate to West Antarctica (the Aus-Pac plate circuit). We discuss this motion in two parts: motion before and after 42 Ma. For the younger time interval, motion is directly constrained by magnetic anomalies in the Adare Basin, which opened by ultraslow seafloor spreading between 42 and 26 Ma (anomalies 18 to 9). The Adare Basin magnetic anomaly constraints can be combined with magnetic anomaly and fracture zone data from the SEIR (Aus-East Ant to the west of the Balleny FZ and Aus - West Ant to the east) to set up an Aus-East Ant - West Ant three-plate problem. The original solution of this three-plate configuration (Cande et al., 2000) only had data from a very short section of the Adare Basin and obtained an answer with very large uncertainties on the East-West Ant rotation. Better estimates of the East-West Ant rotation have been calculated by adding constraints based on seismically controlled estimates of extension in the Victoria Land Basin (Davey et al., 2006) and constraints from Damaske et al’s (2007) detailed aeromagnetic survey of the Adare Basin and adjacent Northern Basin (Granot et al., 2010). Currently, we are working on improving the accuracy of rotations for the post-42 Ma time interval by taking advantage of an unusual plate geometry that enables us to solve a five-boundary, four-plate configuration. Specifically, motion between the four plates (East Ant, West Ant, Aus and Pac) encompasses two related triple junction systems with five spreading ridge segments (Aus-East Ant, Aus-West Ant, Aus-Pac, Pac-West Ant and East Ant-West Ant) which can be combined and solved simultaneously. 
For the older, pre-42 Ma time interval, the only way to calculate motion between East and West Antarctica is via the long Aus-Pac plate circuit (although it is possible that magnetic anomalies formed by direct spreading between East and West Antarctica, akin to the Adare Basin anomalies, may exist in the poorly mapped Central Basin between the Hallett Ridge and the Iselin Bank). The weakest link in this time interval is the Aus-East Ant boundary; for the time interval from 84 to 42 Ma there are distinctly different results depending on how the tectonic constraints are prioritized (Royer and Rollett, 1997; Tikku and Cande, 1999; Whittaker et al., 2007). The disagreement over the pre-42 Ma motion between Australia and East Antarctica leads to large differences in the predicted motion in the Western Ross Sea and near Ellsworth Land. Another weak link in this circuit is the pattern of seafloor spreading in the Tasman Sea, which is difficult to unravel because of the complex history of motion between Australia, the Lord Howe Rise, and Tasmania (Gaina et al., 1999). Resolution of these issues is required before a well-constrained history of East-West Antarctic motion back to the Late Cretaceous is obtained.

  3. Holocene geologic and climatic history around the Gulf of Alaska

    USGS Publications Warehouse

    Mann, D.H.; Crowell, A.L.; Hamilton, T.D.; Finney, B.P.

    1998-01-01

    Though not as dramatic as during the last Ice Age, pronounced climatic changes occurred in the northeastern Pacific over the last 10,000 years. Summers warmer and drier than today's accompanied a Hypsithermal interval between 9 and 6 ka. Subsequent Neoglaciation was marked by glacier expansion after 5-6 ka and the assembly of modern-type plant communities by 3-4 ka. The Neoglacial interval contained alternating cold and warm intervals, each lasting several hundred years to one millennium, and including both the Medieval Warm Period (ca. AD 900-1350) and the Little Ice Age (ca. AD 1350-1900). Salmon abundance fluctuated during the Little Ice Age in response to local glaciation and probably also to changes in the intensity of the Aleutian Low. Although poorly understood at present, climate fluctuations at all time scales were intimately connected with oceanographic changes in the North Pacific Ocean. The Gulf of Alaska region is tectonically highly active, resulting in a history of frequent geological catastrophes during the Holocene. Twelve to 14 major volcanic eruptions occurred since 12 ka. At intervals of 20-100 years, large earthquakes have raised and lowered sea level instantaneously by meters and generated destructive tsunamis. Sea level has often varied markedly between sites only 50-100 km apart due to tectonism and the isostatic effects of glacier fluctuations.

  4. Patterns of tree growth in relation to environmental variability in the tropical dry deciduous forest at Mudumalai, southern India.

    PubMed

    Nath, Cheryl D; Dattaraja, H S; Suresh, H S; Joshi, N V; Sukumar, R

    2006-12-01

    Tree diameter growth is sensitive to environmental fluctuations and tropical dry forests experience high seasonal and inter-annual environmental variation. Tree growth rates in a large permanent plot at Mudumalai, southern India, were examined for the influences of rainfall and three intrinsic factors (size, species and growth form) during three 4-year intervals over the period 1988-2000. Most trees had lowest growth during the second interval when rainfall was lowest, and skewness and kurtosis of growth distributions were reduced during this interval. Tree diameter generally explained less than 10% of growth variation and had less influence on growth than species identity or time interval. Intraspecific variation was high, yet species identity accounted for up to 16% of growth variation in the community. There were no consistent differences between canopy and understory tree growth rates; however, a few subgroups of species may potentially represent canopy and understory growth guilds. Environmentally-induced temporal variations in growth generally did not reduce the odds of subsequent survival. Growth rates appear to be strongly influenced by species identity and environmental variability in the Mudumalai dry forest. Understanding and predicting vegetation dynamics in the dry tropics thus also requires information on temporal variability in local climate.

  5. Observation of Relativistic Electron Microbursts in Conjunction with Intense Radiation Belt Whistler-Mode Waves

    NASA Technical Reports Server (NTRS)

    Kersten, K.; Cattell, C. A.; Breneman, A.; Goetz, K.; Kellogg, P. J.; Wygant, J. R.; Wilson, L. B., III; Blake, J. B.; Looper, M. D.; Roth, I.

    2011-01-01

    We present multi-satellite observations of large amplitude radiation belt whistler-mode waves and relativistic electron precipitation. On separate occasions during the Wind petal orbits and STEREO phasing orbits, Wind and STEREO recorded intense whistler-mode waves in the outer nightside equatorial radiation belt with peak-to-peak amplitudes exceeding 300 mV/m. During these intervals of intense wave activity, SAMPEX recorded relativistic electron microbursts in near magnetic conjunction with Wind and STEREO. This evidence of microburst precipitation occurring at the same time and at nearly the same magnetic local time and L-shell with a bursty temporal structure similar to that of the observed large amplitude wave packets suggests a causal connection between the two phenomena. Simulation studies corroborate this idea, showing that nonlinear wave-particle interactions may result in rapid energization and scattering on timescales comparable to those of the impulsive relativistic electron precipitation.

  6. The error and bias of supplementing a short, arid climate, rainfall record with regional vs. global frequency analysis

    NASA Astrophysics Data System (ADS)

    Endreny, Theodore A.; Pashiardis, Stelios

    2007-02-01

    Summary: Robust and accurate estimates of rainfall frequencies are difficult to make with short, arid-climate rainfall records; however, new regional and global methods were used to supplement such a constrained 15-34 yr record in Cyprus. The impact of supplementing rainfall frequency analysis with the regional and global approaches was measured with relative bias and root mean square error (RMSE) values. Analysis considered 42 stations with 8 time intervals (5-360 min) in four regions delineated by proximity to sea and elevation. Regional statistical algorithms found the sites passed discordancy tests of coefficient of variation, skewness and kurtosis, while heterogeneity tests revealed the regions were homogeneous to mildly heterogeneous. Rainfall depths were simulated in the regional analysis method 500 times, and then goodness-of-fit tests identified the best candidate distribution as the general extreme value (GEV) Type II. In the regional analysis, the method of L-moments was used to estimate location, shape, and scale parameters. In the global analysis, the distribution was a priori prescribed as GEV Type II, the shape parameter was a priori set to 0.15, and a time interval term was constructed to use one set of parameters for all time intervals. Relative RMSE values were approximately equal at 10% for the regional and global methods when regions were compared, but when time intervals were compared the global method RMSE had a parabolic-shaped time interval trend. Relative bias values were also approximately equal for both methods when regions were compared, but again a parabolic-shaped time interval trend was found for the global method. The global method relative RMSE and bias trended with time interval, which may be caused by fitting a single scale value for all time intervals.
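
    The L-moment fitting step mentioned in the abstract can be sketched with Hosking's well-known approximation for GEV parameters from sample L-moments (a generic implementation under that assumption, not the authors' code; function names are illustrative):

```python
from math import gamma, log

def sample_lmoments(data):
    """First two sample L-moments and the L-skewness t3, computed from
    probability-weighted moments of the ascending order statistics."""
    x = sorted(data)
    n = len(x)
    b0 = sum(x) / n
    b1 = sum(i * x[i] for i in range(n)) / (n * (n - 1))
    b2 = sum(i * (i - 1) * x[i] for i in range(n)) / (n * (n - 1) * (n - 2))
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    return l1, l2, l3 / l2

def gev_fit_lmoments(data):
    """Hosking's approximation for GEV location (xi), scale (alpha), and
    shape (k, Hosking's sign convention) from sample L-moments."""
    l1, l2, t3 = sample_lmoments(data)
    c = 2.0 / (3.0 + t3) - log(2.0) / log(3.0)
    k = 7.8590 * c + 2.9554 * c * c
    alpha = l2 * k / ((1.0 - 2.0 ** (-k)) * gamma(1.0 + k))
    xi = l1 - alpha * (1.0 - gamma(1.0 + k)) / k
    return xi, alpha, k
```

    For short records such as the 15-34 yr series studied here, L-moment estimators are generally preferred over maximum likelihood because they are less sensitive to outliers.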

  7. Dissociations between interval timing and intertemporal choice following administration of fluoxetine, cocaine, or methamphetamine

    PubMed Central

    Heilbronner, Sarah R.; Meck, Warren. H.

    2014-01-01

    The goal of our study was to characterize the relationship between intertemporal choice and interval timing, including determining how drugs that modulate brain serotonin and dopamine levels influence these two processes. In Experiment 1, rats were tested on a standard 40-s peak-interval procedure following administration of fluoxetine (3, 5, or 8 mg/kg) or vehicle to assess basic effects on interval timing. In Experiment 2, rats were tested in a novel behavioral paradigm intended to simultaneously examine interval timing and impulsivity. Rats performed a variant of the bi-peak procedure using 10-s and 40-s target durations with an additional “defection” lever that provided the possibility of a small, immediate reward. Timing functions remained relatively intact, and ‘patience’ across subjects correlated with peak times, indicating a negative relationship between ‘patience’ and clock speed. We next examined the effects of fluoxetine (5 mg/kg), cocaine (15 mg/kg), or methamphetamine (1 mg/kg) on task performance. Fluoxetine reduced impulsivity as measured by defection time without corresponding changes in clock speed. In contrast, cocaine and methamphetamine both increased impulsivity and clock speed. Thus, variations in timing may mediate intertemporal choice via dopaminergic inputs. However, a separate, serotonergic system can affect intertemporal choice without affecting interval timing directly. PMID:24135569

  8. Insights from Smart Meters: Ramp-Up, Dependability, and Short-Term Persistence of Savings from Home Energy Reports

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Todd, A.; Perry, M.; Smith, B.

    2015-04-01

    Smart meters, smart thermostats, and other new technologies provide previously unavailable high-frequency and location-specific energy usage data. Many utilities are now able to capture real-time, customer-specific hourly interval usage data for a large proportion of their residential and small commercial customers. These vast, constantly growing streams of rich data (or big data) have the potential to provide novel insights into key policy questions about how people make energy decisions.

  9. Starting Circuit For Erasable Programmable Logic Device

    NASA Technical Reports Server (NTRS)

    Cole, Steven W.

    1990-01-01

    Voltage regulator bypassed to supply starting current. Starting or "pullup" circuit supplies large inrush of current required by erasable programmable logic device (EPLD) while being turned on. Operates only during such intervals of high demand for current and has little effect any other time. Performs needed bypass, acting as current-dependent shunt connecting battery or other source of power more nearly directly to EPLD. Input capacitor of regulator removed when starting circuit installed, reducing probability of damage to transistor in event of short circuit in or across load.

  10. Two-qubit correlations via a periodic plasmonic nanostructure

    NASA Astrophysics Data System (ADS)

    Iliopoulos, Nikos; Terzis, Andreas F.; Yannopapas, Vassilios; Paspalakis, Emmanuel

    2016-02-01

    We theoretically investigate the generation of quantum correlations by using two distant qubits in free space or mediated by a plasmonic nanostructure. We report both entanglement of formation as well as quantum discord and classical correlations. We have found that for proper initial state of the two-qubit system and distance between the two qubits we can produce quantum correlations taking significant value for a relatively large time interval so that it can be useful in quantum information and computation processes.

  11. Two-qubit correlations via a periodic plasmonic nanostructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iliopoulos, Nikos; Terzis, Andreas F.; Yannopapas, Vassilios

    2016-02-15

    We theoretically investigate the generation of quantum correlations by using two distant qubits in free space or mediated by a plasmonic nanostructure. We report both entanglement of formation as well as quantum discord and classical correlations. We have found that for proper initial state of the two-qubit system and distance between the two qubits we can produce quantum correlations taking significant value for a relatively large time interval so that it can be useful in quantum information and computation processes.

  12. Fast-dynamo action in unsteady flows and maps in three dimensions

    NASA Technical Reports Server (NTRS)

    Bayly, B. J.; Childress, S.

    1987-01-01

    Unsteady fast-dynamo action is obtained in a family of stretch-fold-shear maps applied to a spatially periodic magnetic field in three dimensions. Exponential growth of a mean field in the limit of vanishing diffusivity is demonstrated by a numerical method which alternates instantaneous deformations with molecular diffusion over a finite time interval. Analysis indicates that the dynamo is a coherent feature of the large scales, essentially independent of the cascade of structure to small scales.

  13. Big Data Analytics for Demand Response: Clustering Over Space and Time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chelmis, Charalampos; Kolte, Jahanvi; Prasanna, Viktor K.

    The pervasive deployment of advanced sensing infrastructure in Cyber-Physical systems, such as the Smart Grid, has resulted in an unprecedented data explosion. Such data exhibit both large volumes and high velocity characteristics, two of the three pillars of Big Data, and have a time-series notion, as datasets in this context typically consist of successive measurements made over a time interval. Time-series data can be valuable for data mining and analytics tasks such as identifying the “right” customers among a diverse population to target for Demand Response programs. However, time series are challenging to mine due to their high dimensionality. In this paper, we motivate this problem using a real application from the smart grid domain. We explore novel representations of time-series data for Big Data analytics, and propose a clustering technique for determining natural segmentation of customers and identification of temporal consumption patterns. Our method is generalizable to large-scale, real-world scenarios, without making any assumptions about the data. We evaluate our technique using real datasets from smart meters, totaling ~ 18,200,000 data points, and show the efficacy of our technique in efficiently detecting the optimal number of clusters.
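
    As a hedged illustration of the clustering step only (the paper's exact time-series representation and algorithm are not reproduced here), a plain k-means over equal-length consumption profiles might look like:

```python
import random

def kmeans(profiles, k, iters=50, seed=0):
    """Plain k-means over equal-length consumption profiles (squared
    Euclidean distance); returns cluster labels and final centers."""
    rng = random.Random(seed)
    centers = [list(p) for p in rng.sample(profiles, k)]
    labels = [0] * len(profiles)
    for _ in range(iters):
        # assignment step: nearest center by squared Euclidean distance
        for i, p in enumerate(profiles):
            labels[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(p, centers[c])))
        # update step: move each center to the mean of its members
        for c in range(k):
            members = [profiles[i] for i, lab in enumerate(labels) if lab == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels, centers
```

    Selecting the number of clusters k, which the paper addresses explicitly, would sit on top of such a routine, e.g. by sweeping k and scoring each segmentation.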

  14. Global Vertical Rates from VLBI

    NASA Technical Reports Server (NTRS)

    Ma, Chopo; MacMillan, D.; Petrov, L.

    2003-01-01

    The analysis of global VLBI observations provides vertical rates for 50 sites with formal errors less than 2 mm/yr and median formal error of 0.4 mm/yr. These sites are largely in Europe and North America with a few others in east Asia, Australia, South America and South Africa. The time interval of observations is up to 20 years. The error of the velocity reference frame is less than 0.5 mm/yr, but results from several sites with observations from more than one antenna suggest that the estimated vertical rates may have temporal variations or non-geophysical components. Comparisons with GPS rates and corresponding site position time series will be discussed.
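Vertical rates and formal errors of the kind quoted above come from least-squares fits to site position time series. Below is a minimal weighted-fit sketch; the diagonal error model and the toy series are assumptions for illustration, not the VLBI analysis itself.

```python
import math

def rate_with_formal_error(t, y, sigma):
    """Weighted least-squares slope of a position time series y(t) and its
    formal error, assuming independent Gaussian errors sigma[i]."""
    w = [1.0 / s ** 2 for s in sigma]
    W = sum(w)
    tbar = sum(wi * ti for wi, ti in zip(w, t)) / W
    S = sum(wi * (ti - tbar) ** 2 for wi, ti in zip(w, t))
    rate = sum(wi * (ti - tbar) * yi for wi, ti, yi in zip(w, t, y)) / S
    return rate, math.sqrt(1.0 / S)

# Toy series: 20 yearly positions rising at 2 mm/yr with 1 mm formal errors.
t = [float(i) for i in range(20)]
y = [2.0 * ti for ti in t]
rate, err = rate_with_formal_error(t, y, [1.0] * 20)
```

Longer observing spans enter through S, which grows with the spread of the epochs, so the formal error of the rate shrinks roughly as the 3/2 power of the time span.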

  15. Self-organization leads to supraoptimal performance in public transportation systems.

    PubMed

    Gershenson, Carlos

    2011-01-01

    The performance of public transportation systems affects a large part of the population. Current theory assumes that passengers are served optimally when vehicles arrive at stations with regular intervals. In this paper, it is shown that self-organization can improve the performance of public transportation systems beyond the theoretical optimum by responding adaptively to local conditions. This is possible because of a "slower-is-faster" effect, where passengers wait longer at stations but total travel times are reduced. The proposed self-organizing method uses "antipheromones" to regulate headways, an approach inspired by the stigmergy (communication via the environment) of some ant colonies.

  16. A note on windowing for the waveform relaxation

    NASA Technical Reports Server (NTRS)

    Zhang, Hong

    1994-01-01

    The technique of windowing has often been used in implementations of waveform relaxation for solving ODEs or time-dependent PDEs. Its efficiency depends upon problem stiffness and operator splitting. Using model problems, estimates for window length and convergence rate are derived. The effectiveness of windowing is then investigated for the non-stiff and stiff cases respectively. It concludes that for the former, windowing is highly recommended when a large discrepancy exists between the convergence rate on a time interval and the ones on its subintervals. For the latter, windowing does not provide any computational advantage if machine features are disregarded. The discussion is supported by experimental results.
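The windowing idea — iterating the relaxation to convergence on short subintervals instead of over the whole interval — can be illustrated on a two-variable model problem. The Jacobi splitting, forward Euler integrator, and coupling strength below are illustrative choices, not taken from the note.

```python
def wr_window(x0, y0, T, h, eps, tol=1e-10, max_sweeps=100):
    """Jacobi waveform relaxation on one window [0, T] for the model
    problem x' = -x + eps*y, y' = -y + eps*x (forward Euler, step h).
    Each sweep re-solves both equations against the other's previous
    waveform; returns the end-of-window state and the sweep count."""
    n = int(round(T / h))
    xs = [x0] * (n + 1)          # initial waveform guess: constant
    ys = [y0] * (n + 1)
    sweeps = 0
    for _ in range(max_sweeps):
        xn, yn = [x0], [y0]
        for i in range(n):
            xn.append(xn[i] + h * (-xn[i] + eps * ys[i]))  # uses old y waveform
            yn.append(yn[i] + h * (-yn[i] + eps * xs[i]))  # uses old x waveform
        err = max(max(abs(a - b) for a, b in zip(xn, xs)),
                  max(abs(a - b) for a, b in zip(yn, ys)))
        xs, ys = xn, yn
        sweeps += 1
        if err < tol:
            break
    return xs[-1], ys[-1], sweeps

def wr_windowed(x0, y0, total_T, n_windows, h, eps):
    """Apply waveform relaxation window by window, carrying the state."""
    x, y, counts = x0, y0, []
    for _ in range(n_windows):
        x, y, s = wr_window(x, y, total_T / n_windows, h, eps)
        counts.append(s)
    return x, y, counts
```

Comparing sweep counts shows the non-stiff effect described above: each short window converges in markedly fewer sweeps than one long window, while both reach the same discrete solution.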

  17. Characteristic analysis and simulation for polysilicon comb micro-accelerometer

    NASA Astrophysics Data System (ADS)

    Liu, Fengli; Hao, Yongping

    2008-10-01

    High force update rate is a key factor for achieving high-performance haptic rendering, which imposes a stringent real-time requirement upon the execution environment of the haptic system. This requirement confines the haptic system to simplified environments to reduce the computation cost of haptic rendering algorithms. In this paper, we present a novel "hyper-threading" architecture consisting of several threads for haptic rendering. The high force update rate is achieved with a relatively large computation time interval for each haptic loop. The proposed method was tested and proved effective in experiments on a virtual-wall prototype haptic system via the Delta Haptic Device.

  18. Kinetic energy budgets during the life cycle of intense convective activity

    NASA Technical Reports Server (NTRS)

    Fuelberg, H. E.; Scoggins, J. R.

    1978-01-01

    Synoptic-scale data at three- and six-hour intervals are employed to study the relationship between changing kinetic energy variables and the life cycles of two severe squall lines. The kinetic energy budgets indicate a high degree of kinetic energy generation, especially pronounced near the jet-stream level. Energy losses in the storm environment are due to the transfer of kinetic energy from grid to subgrid scales of motion; large-scale upward vertical motion carries aloft the kinetic energy generated by storm activity at lower levels. In general, the time of maximum storm intensity is also the time of maximum energy conversion and transport.

  19. Characterization of Cardiac Time Intervals in Healthy Bonnet Macaques (Macaca radiata) by Using an Electronic Stethoscope

    PubMed Central

    Kamran, Haroon; Salciccioli, Louis; Pushilin, Sergei; Kumar, Paraag; Carter, John; Kuo, John; Novotney, Carol; Lazar, Jason M

    2011-01-01

    Nonhuman primates are used frequently in cardiovascular research. Cardiac time intervals derived by phonocardiography have long been used to assess left ventricular function. Electronic stethoscopes are simple low-cost systems that display heart sound signals. We assessed the use of an electronic stethoscope to measure cardiac time intervals in 48 healthy bonnet macaques (age, 8 ± 5 y) based on recorded heart sounds. Technically adequate recordings were obtained from all animals and required 1.5 ± 1.3 min. The following cardiac time intervals were determined by simultaneously recording acoustic and single-lead electrocardiographic data: electromechanical activation time (QS1), electromechanical systole (QS2), the time interval between the first and second heart sounds (S1S2), and the time interval between the second and first sounds (S2S1). QS2 was correlated with heart rate, mean arterial pressure, diastolic blood pressure, and left ventricular ejection time determined by using echocardiography. S1S2 correlated with heart rate, mean arterial pressure, diastolic blood pressure, left ventricular ejection time, and age. S2S1 correlated with heart rate, mean arterial pressure, diastolic blood pressure, systolic blood pressure, and left ventricular ejection time. QS1 did not correlate with any anthropometric or echocardiographic parameter. The relation S1S2/S2S1 correlated with systolic blood pressure. On multivariate analyses, heart rate was the only independent predictor of QS2, S1S2, and S2S1. In conclusion, determination of cardiac time intervals is feasible and reproducible by using an electronic stethoscope in nonhuman primates. Heart rate is a major determinant of QS2, S1S2, and S2S1 but not QS1; regression equations for reference values for cardiac time intervals in bonnet macaques are provided. PMID:21439218
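Once the Q, S1, and S2 events have been detected and aligned per beat (the hard part in practice, which this sketch assumes away), the four intervals reduce to simple differences.

```python
def cardiac_intervals(q_times, s1_times, s2_times):
    """Per-beat intervals (in seconds) from aligned event times:
    q_times[i]  = Q-wave onset of beat i (ECG),
    s1_times[i] = first heart sound of beat i,
    s2_times[i] = second heart sound of beat i."""
    n = len(q_times)
    return {
        "QS1":  [s1_times[i] - q_times[i] for i in range(n)],
        "QS2":  [s2_times[i] - q_times[i] for i in range(n)],
        "S1S2": [s2_times[i] - s1_times[i] for i in range(n)],
        # diastolic gap: second sound of beat i to first sound of beat i+1
        "S2S1": [s1_times[i + 1] - s2_times[i] for i in range(n - 1)],
        "HR":   [60.0 / (q_times[i + 1] - q_times[i]) for i in range(n - 1)],
    }

# Two synthetic beats at 75 beats/min (RR = 0.8 s); event times are invented.
iv = cardiac_intervals([0.0, 0.8], [0.04, 0.84], [0.34, 1.14])
```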

  20. Prediction Models for Dynamic Demand Response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aman, Saima; Frincu, Marc; Chelmis, Charalampos

    2015-11-02

    As Smart Grids move closer to dynamic curtailment programs, Demand Response (DR) events will become necessary not only on fixed time intervals and weekdays predetermined by static policies, but also during changing decision periods and weekends to react to real-time demand signals. Unique challenges arise in this context vis-a-vis demand prediction and curtailment estimation and the transformation of such tasks into an automated, efficient dynamic demand response (D2R) process. While existing work has concentrated on increasing the accuracy of prediction models for DR, there is a lack of studies on prediction models for D2R, which we address in this paper. Our first contribution is the formal definition of D2R and the description of its challenges and requirements. Our second contribution is a feasibility analysis of very-short-term prediction of electricity consumption for D2R over a diverse, large-scale dataset that includes both small residential customers and large buildings. Our third, and major, contribution is a set of insights into the predictability of electricity consumption in the context of D2R. Specifically, we focus on prediction models that can operate at a very small data granularity (here 15-min intervals), for both weekdays and weekends - all conditions that characterize scenarios for D2R. We find that short-term time series and simple averaging models used by Independent Service Operators and utilities achieve superior prediction accuracy. We also observe that workdays are more predictable than weekends and holidays. Also, smaller customers have large variation in consumption and are less predictable than larger buildings. Key implications of our findings are that better models are required for small customers and for non-workdays, both of which are critical for D2R. Also, prediction models require just a few days' worth of data, indicating that small amounts of historical training data can be used to make reliable predictions, simplifying the complexity of the big data challenge associated with D2R.
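The "simple averaging models" credited with superior accuracy can be as plain as a same-slot historical mean; here is a sketch at 15-min granularity (the three-day lookback is an arbitrary choice for illustration, not the paper's setting).

```python
def averaging_forecast(history, slots_per_day=96, days=3):
    """Baseline predictor: tomorrow's value at each 15-min slot is the
    average of the readings at that slot over the last `days` days."""
    assert len(history) >= days * slots_per_day
    L = len(history)
    return [sum(history[L - (d + 1) * slots_per_day + s] for d in range(days)) / days
            for s in range(slots_per_day)]

# Three days of toy history whose per-slot levels are 1, 2 and 3.
history = [1.0] * 96 + [2.0] * 96 + [3.0] * 96
forecast = averaging_forecast(history)
```

A few days of history suffice for this kind of model, which is consistent with the finding that small amounts of training data can give reliable predictions.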

  1. Solar cycle variations in polar cap area measured by the superDARN radars

    NASA Astrophysics Data System (ADS)

    Imber, S. M.; Milan, S. E.; Lester, M.

    2013-10-01

    We present a long-term study, from January 1996 to August 2012, of the latitude of the Heppner-Maynard Boundary (HMB) measured at midnight using the northern hemisphere Super Dual Auroral Radar Network (SuperDARN). The HMB represents the equatorward extent of ionospheric convection and is used in this study as a measure of the global magnetospheric dynamics. We find that the yearly distribution of HMB latitudes is single peaked at 64° magnetic latitude for the majority of the 17-year interval. During 2003, the envelope of the distribution shifts to lower latitudes and a second peak in the distribution is observed at 61°. The solar wind-magnetosphere coupling function derived by Milan et al. (2012) suggests that the solar wind driving during this year was significantly higher than during the rest of the 17-year interval. In contrast, during the period 2008-2011, the HMB distribution shifts to higher latitudes, and a second peak in the distribution is again observed, this time at 68° magnetic latitude. This time interval corresponds to a period of extremely low solar wind driving during the recent extreme solar minimum. This is the first long-term study of the polar cap area and the results demonstrate that there is a close relationship between the solar activity cycle and the area of the polar cap on a large-scale, statistical basis.

  2. Solar Cycle Variations in Polar Cap Area Measured by the SuperDARN Radars

    NASA Astrophysics Data System (ADS)

    Imber, S. M.; Milan, S. E.; Lester, M.

    2013-12-01

    We present a long term study, from January 1996 - August 2012, of the latitude of the Heppner-Maynard Boundary (HMB) measured at midnight using the northern hemisphere SuperDARN radars. The HMB represents the equatorward extent of ionospheric convection, and is used in this study as a measure of the global magnetospheric dynamics and activity. We find that the yearly distribution of HMB latitudes is single-peaked at 64° magnetic latitude for the majority of the 17-year interval. During 2003 the envelope of the distribution shifts to lower latitudes and a second peak in the distribution is observed at 61°. The solar wind-magnetosphere coupling function derived by Milan et al. (2012) suggests that the solar wind driving during this year was significantly higher than during the rest of the 17-year interval. In contrast, during the period 2008-2011 the HMB distribution shifts to higher latitudes, and a second peak in the distribution is again observed, this time at 68° magnetic latitude. This time interval corresponds to a period of extremely low solar wind driving during the recent extreme solar minimum. This is the first statistical study of the polar cap area over an entire solar cycle, and the results demonstrate that there is a close relationship between the phase of the solar cycle and the area of the polar cap on a large scale statistical basis.

  3. Rehabilitation following pediatric traumatic brain injury: variability in adherence to psychosocial quality-of-care indicators.

    PubMed

    Ennis, Stephanie K; Jaffe, Kenneth M; Mangione-Smith, Rita; Konodi, Mark A; MacKenzie, Ellen J; Rivara, Frederick P

    2014-01-01

    To examine variations in processes of pediatric inpatient rehabilitation care related to family-centered care, management of neurobehavioral and psychosocial needs, and community reintegration after traumatic brain injury. Nine acute rehabilitation facilities from geographically diverse areas of the United States. A total of 174 children with traumatic brain injury. Retrospective chart review. Adherence to care indicators (the number of times recommended care was delivered or attempted divided by the number of times care was indicated). Across facilities, adherence rates (adjusted for difficulty of delivery) ranged from 33.6% to 73.1% (95% confidence interval, 13.4-53.9, 58.7-87.4) for family-centered processes, 21.3% to 82.5% (95% confidence interval, 6.6-36.1, 67.6-97.4) for neurobehavioral and psychosocial processes, and 22.7% to 80.3% (95% confidence interval, 5.3-40.1, 68.1-92.5) for community integration processes. Within facilities, standard deviations for adherence rates were large (24.3-34.9, family-centered domain; 22.6-34.2, neurobehavioral and psychosocial domain; and 21.6-40.5, community reintegration domain). The current state of acute rehabilitation care for children with traumatic brain injury is variable across different quality-of-care indicators addressing neurobehavioral and psychosocial needs and facilitating community reintegration of the patient and the family. Individual rehabilitation facilities demonstrate inconsistent adherence to different indicators and inconsistent performance across different care domains.
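Adherence rates with confidence intervals of the form reported above can be sketched with a plain Wald interval for a proportion; note the paper's rates are additionally adjusted for difficulty of delivery, which this simple version does not reproduce.

```python
import math

def adherence_ci(delivered, indicated, z=1.96):
    """Adherence rate (times recommended care was delivered or attempted,
    divided by times care was indicated) with a Wald 95% CI, clipped to [0, 1]."""
    p = delivered / indicated
    se = math.sqrt(p * (1 - p) / indicated)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)
```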

  4. Benchmark of multi-phase method for the computation of fast ion distributions in a tokamak plasma in the presence of low-amplitude resonant MHD activity

    NASA Astrophysics Data System (ADS)

    Bierwage, A.; Todo, Y.

    2017-11-01

    The transport of fast ions in a beam-driven JT-60U tokamak plasma subject to resonant magnetohydrodynamic (MHD) mode activity is simulated using the so-called multi-phase method, where 4 ms intervals of classical Monte-Carlo simulations (without MHD) are interlaced with 1 ms intervals of hybrid simulations (with MHD). The multi-phase simulation results are compared to results obtained with continuous hybrid simulations, which were recently validated against experimental data (Bierwage et al., 2017). It is shown that the multi-phase method, in spite of causing significant overshoots in the MHD fluctuation amplitudes, accurately reproduces the frequencies and positions of the dominant resonant modes, as well as the spatial profile and velocity distribution of the fast ions, while consuming only a fraction of the computation time required by the continuous hybrid simulation. The present paper is limited to low-amplitude fluctuations consisting of a few long-wavelength modes that interact only weakly with each other. The success of this benchmark study paves the way for applying the multi-phase method to the simulation of Abrupt Large-amplitude Events (ALE), which were seen in the same JT-60U experiments but at larger time intervals. Possible implications for the construction of reduced models for fast ion transport are discussed.

  5. Scaling and memory in volatility return intervals in financial markets

    PubMed Central

    Yamasaki, Kazuko; Muchnik, Lev; Havlin, Shlomo; Bunde, Armin; Stanley, H. Eugene

    2005-01-01

    For both stock and currency markets, we study the return intervals τ between the daily volatilities of the price changes that are above a certain threshold q. We find that the distribution function Pq(τ) scales with the mean return interval τ̄ as Pq(τ) = τ̄⁻¹f(τ/τ̄). The scaling function f(x) is similar in form for all seven stocks and for all seven currency databases analyzed, and f(x) is consistent with a power-law form, f(x) ∼ x⁻γ with γ ≈ 2. We also quantify how the conditional distribution Pq(τ|τ0) depends on the previous return interval τ0 and find that small (or large) return intervals are more likely to be followed by small (or large) return intervals. This “clustering” of the volatility return intervals is a previously unrecognized phenomenon that we relate to the long-term correlations known to be present in the volatility. PMID:15980152
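The analysis starts from the empirical return intervals themselves; extracting them and rescaling by the mean interval — the step behind the scaling collapse Pq(τ) = τ̄⁻¹f(τ/τ̄) — can be sketched as follows (the threshold and toy series are illustrative).

```python
def return_intervals(volatility, q):
    """Waiting times (in samples) between successive exceedances of q."""
    hits = [i for i, v in enumerate(volatility) if v > q]
    return [b - a for a, b in zip(hits, hits[1:])]

def rescale_by_mean(intervals):
    """Divide each interval by the mean interval, so that distributions
    for different thresholds q can be compared as f(tau/mean)."""
    m = sum(intervals) / len(intervals)
    return [t / m for t in intervals], m

# Toy volatility series: exceedances of q=0.5 at positions 1, 4 and 6.
ints = return_intervals([0, 1, 0, 0, 1, 0, 1, 0], 0.5)
scaled, mean_interval = rescale_by_mean(ints)
```

Repeating this for several thresholds q and histogramming the rescaled intervals is what reveals whether the curves collapse onto a single scaling function.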

  6. Scaling and memory in volatility return intervals in financial markets

    NASA Astrophysics Data System (ADS)

    Yamasaki, Kazuko; Muchnik, Lev; Havlin, Shlomo; Bunde, Armin; Stanley, H. Eugene

    2005-06-01

    For both stock and currency markets, we study the return intervals τ between the daily volatilities of the price changes that are above a certain threshold q. We find that the distribution function Pq(τ) scales with the mean return interval τ̄ as Pq(τ) = τ̄⁻¹f(τ/τ̄). The scaling function f(x) is similar in form for all seven stocks and for all seven currency databases analyzed, and f(x) is consistent with a power-law form, f(x) ∼ x⁻γ with γ ≈ 2. We also quantify how the conditional distribution Pq(τ|τ0) depends on the previous return interval τ0 and find that small (or large) return intervals are more likely to be followed by small (or large) return intervals. This “clustering” of the volatility return intervals is a previously unrecognized phenomenon that we relate to the long-term correlations known to be present in the volatility. Author contributions: S.H. and H.E.S. designed research; K.Y., L.M., S.H., and H.E.S. performed research; A.B. contributed new reagents/analytic tools; A.B. analyzed data; and S.H. wrote the paper. Abbreviations: pdf, probability density function; S&P 500, Standard and Poor's 500 Index; USD, U.S. dollar; JPY, Japanese yen; SEK, Swedish krona.

  7. More accurate, calibrated bootstrap confidence intervals for correlating two autocorrelated climate time series

    NASA Astrophysics Data System (ADS)

    Olafsdottir, Kristin B.; Mudelsee, Manfred

    2013-04-01

    Estimation of Pearson's correlation coefficient between two time series, to evaluate the influence of one time-dependent variable on another, is one of the most frequently used statistical methods in climate science. Various methods are used to estimate a confidence interval to support the correlation point estimate. Many of them make strong mathematical assumptions regarding distributional shape and serial correlation, which are rarely met. More robust statistical methods are needed to increase the accuracy of the confidence intervals. Bootstrap confidence intervals are estimated in the Fortran 90 program PearsonT (Mudelsee, 2003), whose main aim was to obtain an accurate confidence interval for the correlation coefficient between two time series by taking into account the serial dependence of the process that generated the data. However, Monte Carlo experiments show that the coverage accuracy for smaller data sizes can be improved. Here we adapt the PearsonT program into a new version, called PearsonT3, by calibrating the confidence interval to increase the coverage accuracy. Calibration is a bootstrap resampling technique that basically performs a second bootstrap loop (resampling from the bootstrap resamples). It offers, like the non-calibrated bootstrap confidence intervals, robustness against the data distribution. Pairwise moving block bootstrap is used to preserve the serial correlation of both time series. The calibration is applied to standard-error-based bootstrap Student's t confidence intervals. The performance of the calibrated confidence intervals is examined with Monte Carlo simulations and compared with the performance of confidence intervals without calibration, that is, PearsonT. The coverage accuracy is evidently better for the calibrated confidence intervals, where the coverage error is acceptably small (i.e., within a few percentage points) already for data sizes as small as 20.
One form of climate time series is output from numerical models that simulate the climate system. The method is applied to model data from the high-resolution ocean model INALT01, where the relationship between the Agulhas Leakage and the North Brazil Current is evaluated. Preliminary results show significant correlation between the two variables when there is a 10-year lag between them, which is roughly the time it takes Agulhas Leakage water to reach the North Brazil Current. Mudelsee, M., 2003. Estimating Pearson's correlation coefficient with bootstrap confidence interval from serially dependent time series. Mathematical Geology 35, 651-665.
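The pairwise moving-block bootstrap underlying PearsonT can be sketched as below; this is the plain percentile variant, without the Student's t construction or the calibration (second bootstrap loop) that PearsonT3 adds.

```python
import math, random

def pearson(x, y):
    """Pearson's correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def block_bootstrap_ci(x, y, block=10, n_boot=500, alpha=0.05, seed=1):
    """Pairwise moving-block bootstrap percentile CI for Pearson's r:
    blocks are drawn at the same positions in both series, preserving
    each series' serial correlation and the x-y pairing."""
    rng = random.Random(seed)
    n = len(x)
    stats = []
    for _ in range(n_boot):
        xs, ys = [], []
        while len(xs) < n:
            s = rng.randrange(n - block + 1)
            xs.extend(x[s:s + block])
            ys.extend(y[s:s + block])
        stats.append(pearson(xs[:n], ys[:n]))
    stats.sort()
    return (pearson(x, y),
            stats[int(alpha / 2 * n_boot)],
            stats[int((1 - alpha / 2) * n_boot) - 1])

# Two strongly correlated AR(1)-style toy series, generated deterministically.
rng = random.Random(42)
x = [0.0]
for _ in range(199):
    x.append(0.7 * x[-1] + rng.gauss(0, 1))
y = [xi + 0.3 * rng.gauss(0, 1) for xi in x]
r, lo, hi = block_bootstrap_ci(x, y)
```

Calibration would wrap a second, inner bootstrap around each resample to adjust the nominal coverage level before reading off the interval.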

  8. Optimal methods for fitting probability distributions to propagule retention time in studies of zoochorous dispersal.

    PubMed

    Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi

    2016-02-01

    Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution using as data-points either the lower, mid or upper bounds of sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fittings to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull distributions) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) in relation to original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations that ranged from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates. 
We recommend the use of cumulative probability to fit parametric probability distributions to propagule retention time, specifically using maximum likelihood for parameter estimation. Furthermore, the experimental design for an optimal characterization of unimodal propagule retention time should contemplate at least 500 recovered propagules and sampling time-intervals not larger than the time peak of propagule retrieval, except in the tail of the distribution where broader sampling time-intervals may also produce accurate fits.
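Fitting to the cumulative distribution, as recommended, means matching a parametric CDF to the cumulative proportion of propagules recovered by each sampling-interval bound. Below is a sketch for the lognormal case; the paper recommends maximum likelihood, so the least-squares objective and coarse grid search here are simplifications.

```python
import math

def lognorm_cdf(t, mu, sigma):
    """CDF of the lognormal distribution at time t > 0."""
    return 0.5 * (1.0 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2.0))))

def fit_lognormal_cumulative(bounds, cum_probs, mus, sigmas):
    """Least-squares fit of a lognormal CDF to the cumulative proportion
    of propagules recovered by each sampling-interval bound; a coarse
    grid search stands in for a real optimiser."""
    best = None
    for mu in mus:
        for s in sigmas:
            sse = sum((lognorm_cdf(t, mu, s) - p) ** 2
                      for t, p in zip(bounds, cum_probs))
            if best is None or sse < best[0]:
                best = (sse, mu, s)
    return best[1], best[2], best[0]

# Synthetic retention data drawn exactly from a lognormal(mu=1, sigma=0.5).
bounds = [1.0, 2.0, 3.0, 4.0, 6.0, 8.0, 12.0]   # hypothetical sampling bounds
observed = [lognorm_cdf(t, 1.0, 0.5) for t in bounds]
mu_hat, sigma_hat, sse = fit_lognormal_cumulative(
    bounds, observed, [0.8, 0.9, 1.0, 1.1], [0.4, 0.5, 0.6])
```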

  9. On the (a)symmetry between the perception of time and space in large-scale environments.

    PubMed

    Riemer, Martin; Shine, Jonathan P; Wolbers, Thomas

    2018-04-23

    Cross-dimensional interference between spatial and temporal processing is well documented in humans, but the direction of these interactions remains unclear. The theory of metaphoric structuring states that space is the dominant concept influencing time perception, whereas time has little effect upon the perception of space. In contrast, theories proposing a common neuronal mechanism representing magnitudes argue for a symmetric interaction between space and time perception. Here, we investigated space-time interactions in realistic, large-scale virtual environments. Our results demonstrate a symmetric relationship between the perception of temporal intervals in the supra-second range and room size (experiment 1), but an asymmetric relationship between the perception of travel time and traveled distance (experiment 2). While the perception of time was influenced by the size of virtual rooms and by the distance traveled within these rooms, time itself affected only the perception of room size, but had no influence on the perception of traveled distance. These results are discussed in the context of recent evidence from rodent studies suggesting that subsets of hippocampal place and entorhinal grid cells can simultaneously code for space and time, providing a potential neuronal basis for the interactions between these domains. © 2018 Wiley Periodicals, Inc.

  10. Factors influencing pre-hospital care time intervals in Iran: a qualitative study.

    PubMed

    Khorasani-Zavareh, Davoud; Mohammadi, Reza; Bohm, Katarina

    2018-06-23

    Pre-hospital time management provides better access to victims of road traffic crashes (RTCs) and can help minimize preventable deaths, injuries and disabilities. While most studies have focused on measuring various time intervals in the pre-hospital phase, to the best of our knowledge there is no study qualitatively exploring the barriers and facilitators that affect these intervals. The present study aimed to explore factors affecting various time intervals relating to road traffic incidents in the pre-hospital phase and provides suggestions for improvements in Iran. The study was conducted during 2013-2014 at both the national and local level in Iran. Overall, 18 face-to-face interviews with emergency medical services (EMS) personnel were used for data collection. Qualitative content analysis was employed to analyze the data. The most important barriers in relation to pre-hospital intervals related to the manner of cooperation by members of the public with the EMS and their involvement at the crash scene, as well as to pre-hospital system factors, including the number and location of EMS facilities, the type and number of ambulances, and manpower. These factors usually affect how rapidly the EMS can arrive at the scene of the crash and how quickly victims can be transferred to hospital. These two categories comprise six main themes: notification interval; activation interval; response interval; on-scene interval; transport interval; and delivery interval. Despite more focus on physical resources, cooperation from members of the public needs to be taken into account in order to achieve better pre-hospital management of the various intervals, possibly through the use of public education campaigns.

  11. Effect of aspirin in pregnant women is dependent on increase in bleeding time.

    PubMed

    Dumont, A; Flahault, A; Beaufils, M; Verdy, E; Uzan, S

    1999-01-01

    Randomized trials with low-dose aspirin to prevent preeclampsia and intrauterine growth restriction have yielded conflicting results. In particular, 3 recent large trials were not conclusive. Study designs, however, varied greatly regarding selection of patients, dose of aspirin, and timing of treatment, all of which can be determinants of the results. Retrospectively analyzing the conditions associated with failure or success of aspirin may therefore help to draw up new hypotheses and prepare for more specific randomized trials. We studied a historical cohort of 187 pregnant women who were considered at high risk for preeclampsia, intrauterine growth restriction, or both and were therefore treated with low-dose aspirin between 1989 and 1994. Various epidemiologic, clinical, and laboratory data were extracted from the files. Univariate and multivariate analyses were performed to search for independent parameters associated with the outcome of pregnancy. Age, parity, weight, height, and race had no influence on the outcome. The success rate was higher when treatment was given because of previous poor pregnancy outcomes than when it was given for other indications, and the patients with successful therapy had started aspirin earlier than had those with therapy failure (17.7 vs 20.0 weeks' gestation, P =.04). After multivariate analysis an increase in Ivy bleeding time after 10 days of treatment by >2 minutes was an independent predictor of a better outcome (odds ratio 0.22, 95% confidence interval 0.09-0.51). Borderline statistical significance was observed for aspirin initiation before 17 weeks' gestation (odds ratio 0.44, 95% confidence interval 0.18-1.08). Abnormal uterine artery Doppler velocimetric scan at 20-24 weeks' gestation (odds ratio 3.31, 95% confidence interval 1.41-7.7), abnormal umbilical artery Doppler velocimetric scan after 26 weeks' gestation (odds ratio 37.6, 95% confidence interval 3.96-357), and use of antihypertensive therapy (odds ratio 6.06, 95% confidence interval 2.45-15) were independent predictors of poor outcome. Efficacy of aspirin seems optimal when bleeding time increases ≥2 minutes with treatment, indicating a more powerful antiplatelet effect. This suggests that the dose of aspirin should be adjusted according to a biologic marker of the antiplatelet effect. A prospective trial is warranted to test this hypothesis.
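The odds ratios and confidence intervals quoted above are standard 2×2-table quantities; below is a minimal Wald sketch on the log-odds scale. The study's estimates come from a multivariate model, which this simple version does not reproduce, and the table counts are invented for illustration.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio of the 2x2 table [[a, b], [c, d]]
    (exposed/unexposed x outcome/no outcome) with a Wald 95% CI
    computed on the log-odds scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return (or_,
            math.exp(math.log(or_) - z * se),
            math.exp(math.log(or_) + z * se))
```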

  12. Interval stability for complex systems

    NASA Astrophysics Data System (ADS)

    Klinshov, Vladimir V.; Kirillov, Sergey; Kurths, Jürgen; Nekorkin, Vladimir I.

    2018-04-01

    Stability of dynamical systems against strong perturbations is an important problem of nonlinear dynamics relevant to many applications in various areas. Here, we develop a novel concept of interval stability, referring to the behavior of the perturbed system during a finite time interval. Based on this concept, we suggest new measures of stability, namely interval basin stability (IBS) and interval stability threshold (IST). IBS characterizes the likelihood that the perturbed system returns to the stable regime (attractor) in a given time. IST provides the minimal magnitude of the perturbation capable of disrupting the stable regime for a given interval of time. The suggested measures provide important information about the system's susceptibility to external perturbations, which may be useful for practical applications. Moreover, from a theoretical viewpoint the interval stability measures are shown to bridge the gap between linear and asymptotic stability. We also suggest numerical algorithms for quantification of the interval stability characteristics and demonstrate their potential for several dynamical systems of various nature, such as power grids and neural networks.
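The IST can be computed numerically by bisecting on the perturbation magnitude, testing each magnitude for recovery within the allotted time. Below is a sketch for a toy bistable system x' = x − x³; the system, the forward-Euler integrator, and the recovery tolerance are illustrative assumptions, not taken from the paper.

```python
def recovers(p, T=10.0, h=0.001, tol=0.05):
    """Does x' = x - x**3, perturbed from the attractor x=1 to x=1-p,
    return to within `tol` of 1 inside time T? (forward Euler)"""
    x = 1.0 - p
    t = 0.0
    while t < T:
        x += h * (x - x ** 3)
        t += h
        if abs(x - 1.0) < tol:
            return True
    return False

def interval_stability_threshold(T=10.0, lo=0.0, hi=1.5, iters=40):
    """Bisection for the minimal perturbation magnitude that the system
    cannot absorb within time T (an IST in the sense described above,
    for this toy system)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if recovers(mid, T):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

A longer allowed time yields a larger threshold, illustrating how the IST interpolates between short-time and asymptotic stability.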

  13. Tracking of large-scale structures in turbulent channel with direct numerical simulation of low Prandtl number passive scalar

    NASA Astrophysics Data System (ADS)

    Tiselj, Iztok

    2014-12-01

    Channel flow DNS (Direct Numerical Simulation) at friction Reynolds number 180 and with passive scalars of Prandtl numbers 1 and 0.01 was performed in various computational domains. The "normal" size domain was ˜2300 wall units long and ˜750 wall units wide, a size taken from the similar DNS of Moser et al. The "large" computational domain, which is supposed to be sufficient to describe the largest structures of the turbulent flow, was 3 times longer and 3 times wider than the "normal" domain. The "very large" domain was 6 times longer and 6 times wider than the "normal" domain. All simulations were performed with the same spatial and temporal resolution. Comparison of the standard and large computational domains shows velocity field statistics (mean velocity, root-mean-square (RMS) fluctuations, and turbulent Reynolds stresses) that agree within 1%-2%. Similar agreement is observed for the Pr = 1 temperature fields and for the mean temperature profiles at Pr = 0.01. These differences can be attributed to the statistical uncertainties of the DNS. However, the second-order moments, i.e., the RMS temperature fluctuations of the standard and large computational domains at Pr = 0.01, show significant differences of up to 20%. Stronger temperature fluctuations in the "large" and "very large" domains confirm the existence of the large-scale structures. Their influence is more or less invisible in the main velocity field statistics and in the statistics of the temperature fields at Prandtl numbers around 1. However, these structures play a visible role in the temperature fluctuations at low Prandtl number, where high temperature diffusivity effectively smears the small-scale structures in the thermal field and enhances the relative contribution of the large scales. These large thermal structures represent a kind of echo of the large-scale velocity structures: the highest temperature-velocity correlations are observed not between the instantaneous temperatures and instantaneous streamwise velocities, but between the instantaneous temperatures and velocities averaged over a certain time interval.
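
    The closing observation, that the low-Pr temperature field correlates best with a time-averaged rather than instantaneous velocity, can be illustrated with a small sketch (synthetic series, not the DNS data; the noise levels and the 201-sample averaging window are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a slowly varying "large-scale" component plus fast fluctuations
# in the velocity, and a temperature that tracks only the slow component.
n = 20000
slow = np.cumsum(rng.normal(size=n))
slow = (slow - slow.mean()) / slow.std()
u = slow + 2.0 * rng.normal(size=n)       # instantaneous velocity proxy
theta = slow + 0.5 * rng.normal(size=n)   # low-Pr temperature proxy

def moving_average(x, w):
    """Centered running mean over a window of w samples."""
    return np.convolve(x, np.ones(w) / w, mode="same")

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

r_inst = corr(theta, u)                      # temperature vs. instantaneous u
r_avg = corr(theta, moving_average(u, 201))  # temperature vs. time-averaged u
print(f"corr with instantaneous u: {r_inst:.2f}")
print(f"corr with time-averaged u: {r_avg:.2f}")
```

Averaging suppresses the fast fluctuations that the temperature does not follow, so the correlation with the averaged velocity comes out markedly higher.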

  14. The date-delay framing effect in temporal discounting depends on substance abuse.

    PubMed

    Klapproth, Florian

    2012-07-01

    In the present study, individuals with substance use disorders (n=30) and non-addicted controls (n=30) were presented with a delay-discounting task with time being described either as dates or as temporal intervals. Three main results were obtained. First, in both groups reward size had a large impact on discounting future rewards, with discount rates becoming larger with smaller reward sizes. Second, participants discounted future rewards less strongly when their time of delivery was presented as a date instead of a temporal distance. Third, whereas discount rates of individuals with substance use disorders varied substantially with regard to the presentation of time in the task, the controls changed their choices depending on time presentation only slightly. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. Atomic temporal interval relations in branching time: calculation and application

    NASA Astrophysics Data System (ADS)

    Anger, Frank D.; Ladkin, Peter B.; Rodriguez, Rita V.

    1991-03-01

    A practical method of reasoning about intervals in a branching-time model which is dense, unbounded, future-branching, and without rejoining branches is presented. The discussion is based on heuristic constraint-propagation techniques using the relation algebra of binary temporal relations among the intervals over the branching-time model. This technique has been applied with success to models of intervals over linear time by Allen and others, and is of cubic-time complexity. To extend it to branching-time models, it is necessary to calculate compositions of the relations; thus, the table of compositions for the 'atomic' relations is computed, enabling the rapid determination of the composition of arbitrary relations, expressed as disjunctions or unions of the atomic relations.

  16. Analysis of single ion channel data incorporating time-interval omission and sampling

    PubMed Central

    The, Yu-Kai; Timmer, Jens

    2005-01-01

    Hidden Markov models are widely used to describe single-channel currents from patch-clamp experiments. The inevitable anti-aliasing filter limits the time resolution of the measurements, so the standard hidden Markov model is no longer adequate. The notion of time-interval omission has been introduced, whereby brief events are not detected. The exact solutions developed for this problem do not take into account that the measured intervals are limited by the sampling time. In this case the dead-time that specifies the minimal detectable interval length is not defined unambiguously. We show that a wrong choice of the dead-time leads to considerably biased estimates and present the appropriate equations to describe sampled data. PMID:16849220
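
    The dead-time bias the authors describe can be illustrated with a toy simulation: closures shorter than the dead-time go undetected, fusing neighbouring open intervals and inflating the apparent dwell times. The two-state channel, exponential dwell times, and merging rule below are illustrative assumptions, not the paper's exact equations:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical one-open/one-closed-state channel with exponential dwell
# times (means in ms are arbitrary choices for the illustration).
tau_open, tau_closed = 2.0, 1.0
n = 50000
opens = rng.exponential(tau_open, n)
closeds = rng.exponential(tau_closed, n)

def apparent_open_times(opens, closeds, dead):
    """Merge open intervals separated by closures shorter than `dead`."""
    merged = []
    cur = opens[0]
    for o, c in zip(opens[1:], closeds[:-1]):
        if c < dead:
            cur += c + o        # missed closure: the two opens fuse
        else:
            merged.append(cur)
            cur = o
    merged.append(cur)
    return np.array(merged)

results = {}
for dead in (0.0, 0.2, 0.5):
    results[dead] = apparent_open_times(opens, closeds, dead).mean()
    print(f"dead-time {dead:.1f} ms -> apparent mean open time "
          f"{results[dead]:.2f} ms")
```

The apparent mean open time grows with the assumed dead-time, which is the bias the paper corrects for in sampled data.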

  17. Cross Time-Frequency Analysis for Combining Information of Several Sources: Application to Estimation of Spontaneous Respiratory Rate from Photoplethysmography

    PubMed Central

    Peláez-Coca, M. D.; Orini, M.; Lázaro, J.; Bailón, R.; Gil, E.

    2013-01-01

    A methodology that combines information from several nonstationary biological signals is presented. This methodology is based on time-frequency coherence, that quantifies the similarity of two signals in the time-frequency domain. A cross time-frequency analysis method, based on quadratic time-frequency distribution, has been used for combining information of several nonstationary biomedical signals. In order to evaluate this methodology, the respiratory rate from the photoplethysmographic (PPG) signal is estimated. The respiration provokes simultaneous changes in the pulse interval, amplitude, and width of the PPG signal. This suggests that the combination of information from these sources will improve the accuracy of the estimation of the respiratory rate. Another target of this paper is to implement an algorithm which provides a robust estimation. Therefore, respiratory rate was estimated only in those intervals where the features extracted from the PPG signals are linearly coupled. In 38 spontaneous breathing subjects, among which 7 were characterized by a respiratory rate lower than 0.15 Hz, this methodology provided accurate estimates, with the median error {0.00; 0.98} mHz ({0.00; 0.31}%) and the interquartile range error {4.88; 6.59} mHz ({1.60; 1.92}%). The estimation error of the presented methodology was largely lower than the estimation error obtained without combining different PPG features related to respiration. PMID:24363777

  18. 'Stroke Room': Diagnosis and Treatment at a Single Location for Rapid Intraarterial Stroke Treatment.

    PubMed

    Ragoschke-Schumm, Andreas; Yilmaz, Umut; Kostopoulos, Panagiotis; Lesmeister, Martin; Manitz, Matthias; Walter, Silke; Helwig, Stefan; Schwindling, Lenka; Fousse, Mathias; Haass, Anton; Garner, Dominique; Körner, Heiko; Roumia, Safwan; Grunwald, Iris; Nasreldein, Ali; Halmer, Ramona; Liu, Yang; Schlechtriemen, Thomas; Reith, Wolfgang; Fassbender, Klaus

    2015-01-01

    For patients with acute ischemic stroke, intra-arterial treatment (IAT) is considered to be an effective strategy for removing the obstructing clot. Because outcome crucially depends on time to treatment (the 'time-is-brain' concept), we assessed the effects of an intervention based on performing all the time-sensitive diagnostic and therapeutic procedures at a single location on the delay before intra-arterial stroke treatment. Consecutive acute stroke patients with large vessel occlusion who received IAT were evaluated before and after implementation (April 26, 2010) of an intervention focused on performing all the diagnostic and therapeutic measures at a single site ('stroke room'). After implementation of the intervention, the median intervals between admission and first angiography series were significantly shorter for 174 intervention patients (102 min, interquartile range (IQR) 85-120 min) than for 81 control patients (117 min, IQR 89-150 min; p < 0.05), as were the intervals between admission and clot removal or end of angiography (152 min, IQR 123-185 min vs. 190 min, IQR 163-227 min; p < 0.001). However, no significant differences in clinical outcome were observed. This study shows, to our knowledge for the first time, that for patients with acute ischemic stroke, stroke diagnosis and treatment at a single location ('stroke room') saves crucial time until IAT. © 2015 S. Karger AG, Basel.

  19. Seismic hazard assessment over time: Modelling earthquakes in Taiwan

    NASA Astrophysics Data System (ADS)

    Chan, Chung-Han; Wang, Yu; Wang, Yu-Ju; Lee, Ya-Ting

    2017-04-01

    To assess the seismic hazard in Taiwan with temporal change, we develop a new approach combining the Brownian Passage Time (BPT) model and the Coulomb stress change, implemented with the seismogenic source parameters of the Taiwan Earthquake Model (TEM). The BPT model was adopted to describe the rupture recurrence intervals of the specific fault sources, together with the time elapsed since the last fault rupture, to derive their long-term rupture probability. We also evaluate the short-term seismicity rate change based on the static Coulomb stress interaction between seismogenic sources. By considering the above time-dependent factors, our new combined model suggests an increased long-term seismic hazard in the vicinity of active faults along the western Coastal Plain and the Longitudinal Valley, where active faults have short recurrence intervals and long elapsed times since their last ruptures, and/or short-term elevated hazard levels right after the occurrence of large earthquakes due to the stress-triggering effect. The stress enhanced by the February 6th, 2016, Meinong ML 6.6 earthquake also significantly increased the rupture probabilities of several neighbouring seismogenic sources in southwestern Taiwan and raised the hazard level in the near future. Our approach draws on the advantage of incorporating long- and short-term models to provide time-dependent earthquake probability constraints. Our time-dependent model considers more detailed information than any other published model. It thus offers decision-makers and public officials an adequate basis for rapid evaluations of and response to future emergency scenarios such as victim relocation and sheltering.
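
    The long-term part of such a model, a BPT (inverse Gaussian) renewal density conditioned on the elapsed time since the last rupture, can be sketched as follows. The fault parameters are hypothetical, not TEM values, and the CDF is integrated at sketch-level accuracy:

```python
import numpy as np

def bpt_pdf(t, mu, alpha):
    """Brownian Passage Time (inverse Gaussian) density;
    mu = mean recurrence interval, alpha = aperiodicity."""
    return np.sqrt(mu / (2 * np.pi * alpha**2 * t**3)) * \
        np.exp(-(t - mu) ** 2 / (2 * mu * alpha**2 * t))

def bpt_cdf(t, mu, alpha, n=100000):
    """CDF by trapezoidal integration of the density."""
    x = np.linspace(1e-6, t, n)
    y = bpt_pdf(x, mu, alpha)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def conditional_prob(elapsed, horizon, mu, alpha):
    """P(rupture within `horizon` years | quiet for `elapsed` years)."""
    f_e = bpt_cdf(elapsed, mu, alpha)
    f_h = bpt_cdf(elapsed + horizon, mu, alpha)
    return (f_h - f_e) / (1.0 - f_e)

# Hypothetical fault: 200-yr mean recurrence, aperiodicity 0.5.
p_early = conditional_prob(elapsed=50.0, horizon=30.0, mu=200.0, alpha=0.5)
p_late = conditional_prob(elapsed=250.0, horizon=30.0, mu=200.0, alpha=0.5)
print(f"30-yr probability, 50 yr elapsed:  {p_early:.3f}")
print(f"30-yr probability, 250 yr elapsed: {p_late:.3f}")
```

A long elapsed time relative to the mean recurrence interval raises the conditional probability, which is the "short recurrence intervals and long elapsed time" effect the abstract describes.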

  20. Digital time delay

    DOEpatents

    Martin, A.D.

    1986-05-09

    Method and apparatus are provided for generating an output pulse following a trigger pulse at a time delay interval preset with a resolution which is high relative to the low resolution available from the supplied clock pulses. A first lumped constant delay (LCD) provides a first output signal at predetermined interpolation intervals corresponding to the desired high-resolution time interval. Latching circuits latch the high-resolution data to form a first synchronizing data set. A selected time interval has been preset to internal counters and corrected for circuit propagation delay times having the same order of magnitude as the desired high resolution. Internal system clock pulses count down the counters to generate an internal pulse delayed by an interval which is functionally related to the preset time interval. A second LCD corrects the internal signal with the high-resolution time delay. A second internal pulse is then applied to a third LCD to generate a second set of synchronizing data which is complementary with the first set of synchronizing data for presentation to logic circuits. The logic circuits further delay the internal output signal with the internal pulses. The final delayed output signal thereafter enables the output pulse generator to produce the desired output pulse at the preset time delay interval following input of the trigger pulse.

  1. Group selections among laboratory populations of Tribolium.

    PubMed

    Wade, M J

    1976-12-01

    Selection at the population level or group selection is defined as genetic change that is brought about or maintained by the differential extinction and/or proliferation of populations. Group selection for both increased and decreased adult population size was carried out among laboratory populations of Tribolium castaneum at 37-day intervals. The effect of individual selection within populations on adult population size was evaluated in an additional control series of populations. The response in the group selection treatments occurred rapidly, within three or four generations, and was large in magnitude, at times differing from the controls by over 200%. This response to selection at the populational level occurred despite strong individual selection which caused a decline in the mean size of the control populations from over 200 adults to near 50 adults in nine 37-day intervals. "Assay" experiments indicated that selective changes in fecundity, developmental time, body weight, and cannibalism rates were responsible in part for the observed treatment differences in adult population size. These findings have implications in terms of speciation in organisms whose range is composed of many partially isolated local populations.

  2. Cycles in oceanic teleconnections and global temperature change

    NASA Astrophysics Data System (ADS)

    Seip, Knut L.; Grøn, Øyvind

    2018-06-01

    Three large ocean currents are represented by proxy time series: the North Atlantic Oscillation (NAO), the Southern Oscillation Index (SOI), and the Pacific Decadal Oscillation (PDO). We here show how proxies for the currents interact with each other and with the global temperature anomaly (GTA). Our results are obtained by a novel method which identifies running-average leading-lagging (LL) relations between paired series. We find common cycle times of 6-7 and 25-28 years for the paired series and identify the years when the LL relations switch. Switching occurs at 18.4 ± 14.3-year intervals for the short 6-7-year cycles and at 27 ± 15-year intervals for the 25-28-year cycles. During the period 1940-1950, the LL relations for the long cycles were circular (nomenclature x leads y: x → y): GTA → NAO → SOI → PDO → GTA. However, after 1960, the LL relations become more complex, and there are indications that GTA leads both NAO and PDO. The switching years are related to ocean current tie points and reversals reported in the literature.
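
    A generic way to estimate running leading-lagging relations, windowed cross-correlation with the lag of peak correlation, can be sketched as below. This is a stand-in illustration, not the exact dual-series method of the paper; window and lag limits are arbitrary choices:

```python
import numpy as np

def lag_at_max_xcorr(x, y, max_lag):
    """Lag (in samples) at which corr(x[t], y[t+lag]) peaks.
    Positive lag means x leads y."""
    best_lag, best_r = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[: len(x) - lag], y[lag:]
        else:
            a, b = x[-lag:], y[: len(y) + lag]
        r = np.corrcoef(a, b)[0, 1]
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag

def running_lead_lag(x, y, window, max_lag):
    """Evaluate the lead-lag in half-overlapping windows to spot switches."""
    return [lag_at_max_xcorr(x[i:i + window], y[i:i + window], max_lag)
            for i in range(0, len(x) - window + 1, window // 2)]

# Synthetic check: y is x delayed by 3 samples, so x should lead by 3.
rng = np.random.default_rng(1)
x = np.sin(np.linspace(0, 40, 400)) + 0.1 * rng.normal(size=400)
y = np.roll(x, 3)
lags = running_lead_lag(x, y, window=100, max_lag=10)
print(lags)
```

A change of sign in the windowed lag sequence would mark a switching year in the sense used by the abstract.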

  3. Evaluation of the electromechanical properties of the cardiovascular system after prolonged weightlessness

    NASA Technical Reports Server (NTRS)

    Bergman, S. A., Jr.; Johnson, R. L.; Hoffler, G. W.

    1977-01-01

    Devices and techniques for measuring and analyzing systolic time intervals and quantitative phonocardiograms were initiated during Apollo 17. The data show that the systolic time intervals of the Apollo 17 crewmen remained elevated longer postflight than the response criteria of heart rate, blood pressure, and percent change in leg volume, all of which had returned to preflight levels by the second day postflight. Although the systolic time interval values were only slightly outside the preflight fiducial limits, this finding suggested that the analysis of systolic time intervals may help to identify the mechanisms of postflight orthostatic intolerance by virtue of measuring ventricular function more directly, and that the noninvasive technique may prove useful in determining the extent and duration of cardiovascular instability after long-duration space flight. The systolic time intervals obtained from the Apollo 17 crewmen during lower body negative pressure were similar to those noted in patients with significant heart disease.

  4. A semi-automated method for rapid detection of ripple events on interictal voltage discharges in the scalp electroencephalogram.

    PubMed

    Chu, Catherine J; Chan, Arthur; Song, Dan; Staley, Kevin J; Stufflebeam, Steven M; Kramer, Mark A

    2017-02-01

    High frequency oscillations are emerging as a clinically important indicator of epileptic networks. However, manual detection of these high frequency oscillations is difficult, time consuming, and subjective, especially in the scalp EEG, thus hindering further clinical exploration and application. Semi-automated detection methods augment manual detection by reducing inspection to a subset of time intervals. We propose a new method to detect high frequency oscillations that co-occur with interictal epileptiform discharges. The new method proceeds in two steps. The first step identifies candidate time intervals during which high frequency activity is increased. The second step computes a set of seven features for each candidate interval. These features require that the candidate event contain a high frequency oscillation approximately sinusoidal in shape, with at least three cycles, that co-occurs with a large amplitude discharge. Candidate events that satisfy these features are stored for validation through visual analysis. We evaluate the detector performance in simulation and on ten examples of scalp EEG data, and show that the proposed method successfully detects spike-ripple events, with high positive predictive value, low false positive rate, and high intra-rater reliability. The proposed method is less sensitive than the existing method of visual inspection, but much faster and much more reliable. Accurate and rapid detection of high frequency activity increases the clinical viability of this rhythmic biomarker of epilepsy. The proposed spike-ripple detector rapidly identifies candidate spike-ripple events, thus making clinical analysis of prolonged, multielectrode scalp EEG recordings tractable. Copyright © 2016 Elsevier B.V. All rights reserved.
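
    Step 1 of the two-step scheme, flagging candidate intervals of elevated high-frequency power, can be sketched with numpy. The band edges, threshold, and smoothing window below are illustrative assumptions, and the seven-feature validation of step 2 is omitted:

```python
import numpy as np

def bandpass_fft(x, fs, lo, hi):
    """Zero out FFT bins outside [lo, hi] Hz (sketch-level band-pass)."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    X[(f < lo) | (f > hi)] = 0.0
    return np.fft.irfft(X, n=len(x))

def candidate_intervals(x, fs, lo=80.0, hi=200.0, n_std=3.0):
    """Flag samples whose smoothed high-frequency power exceeds
    mean + n_std * std, and collapse them into (start, stop) intervals."""
    hf = bandpass_fft(x, fs, lo, hi)
    win = max(int(0.02 * fs), 1)          # 20 ms smoothing window
    power = np.convolve(hf ** 2, np.ones(win) / win, mode="same")
    mask = power > power.mean() + n_std * power.std()
    idx = np.flatnonzero(mask)
    if idx.size == 0:
        return []
    breaks = np.flatnonzero(np.diff(idx) > 1)
    starts = np.concatenate(([idx[0]], idx[breaks + 1]))
    stops = np.concatenate((idx[breaks], [idx[-1]]))
    return list(zip(starts, stops))

# Synthetic scalp-EEG-like trace: noise plus a 50 ms, 120 Hz ripple burst.
fs = 500
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(2)
eeg = 0.5 * rng.normal(size=t.size)
burst = (t > 1.0) & (t < 1.05)
eeg[burst] += 3.0 * np.sin(2 * np.pi * 120 * t[burst])
ivals = candidate_intervals(eeg, fs)
print([(s / fs, e / fs) for s, e in ivals])
```

Only the flagged intervals would then be passed to the feature tests (sinusoidal shape, at least three cycles, co-occurring discharge) for validation.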

  5. A semi-automated method for rapid detection of ripple events on interictal voltage discharges in the scalp electroencephalogram

    PubMed Central

    Chu, Catherine J.; Chan, Arthur; Song, Dan; Staley, Kevin J.; Stufflebeam, Steven M.; Kramer, Mark A.

    2017-01-01

    Summary Background High frequency oscillations are emerging as a clinically important indicator of epileptic networks. However, manual detection of these high frequency oscillations is difficult, time consuming, and subjective, especially in the scalp EEG, thus hindering further clinical exploration and application. Semi-automated detection methods augment manual detection by reducing inspection to a subset of time intervals. We propose a new method to detect high frequency oscillations that co-occur with interictal epileptiform discharges. New Method The new method proceeds in two steps. The first step identifies candidate time intervals during which high frequency activity is increased. The second step computes a set of seven features for each candidate interval. These features require that the candidate event contain a high frequency oscillation approximately sinusoidal in shape, with at least three cycles, that co-occurs with a large amplitude discharge. Candidate events that satisfy these features are stored for validation through visual analysis. Results We evaluate the detector performance in simulation and on ten examples of scalp EEG data, and show that the proposed method successfully detects spike-ripple events, with high positive predictive value, low false positive rate, and high intra-rater reliability. Comparison with Existing Method The proposed method is less sensitive than the existing method of visual inspection, but much faster and much more reliable. Conclusions Accurate and rapid detection of high frequency activity increases the clinical viability of this rhythmic biomarker of epilepsy. The proposed spike-ripple detector rapidly identifies candidate spike-ripple events, thus making clinical analysis of prolonged, multielectrode scalp EEG recordings tractable. PMID:27988323

  6. Step scaling and the Yang-Mills gradient flow

    NASA Astrophysics Data System (ADS)

    Lüscher, Martin

    2014-06-01

    The use of the Yang-Mills gradient flow in step-scaling studies of lattice QCD is expected to lead to results of unprecedented precision. Step scaling is usually based on the Schrödinger functional, where time ranges over an interval [0, T] and all fields satisfy Dirichlet boundary conditions at times 0 and T. In these calculations, potentially important sources of systematic errors are boundary lattice effects and the infamous topology-freezing problem. The latter is here shown to be absent if Neumann instead of Dirichlet boundary conditions are imposed on the gauge field at time 0. Moreover, the expectation values of gauge-invariant local fields at positive flow time (and of other well-localized observables) that reside in the center of the space-time volume are found to be largely insensitive to the boundary lattice effects.

  7. Time to relapse after epilepsy surgery in children: AED withdrawal policies are a contributing factor.

    PubMed

    Boshuisen, Kim; Schmidt, Dieter; Uiterwaal, Cuno S P M; Arzimanoglou, Alexis; Braun, Kees P J; Study Group, TimeToStop

    2014-09-01

    It was recently suggested that early postoperative seizure relapse implicates a failure to define and resect the epileptogenic zone, that late recurrences reflect the persistence or re-emergence of epileptogenic pathology, and that early recurrences are associated with poor treatment response. The timing of antiepileptic drug withdrawal policies, however, has never been taken into account when investigating time to relapse following epilepsy surgery. From the European paediatric epilepsy surgery cohort of the "TimeToStop" study, all 95 children with postoperative seizure recurrence following antiepileptic drug (AED) withdrawal were selected. We investigated how the time intervals from surgery to AED withdrawal, as well as other previously suggested determinants of (timing of) seizure recurrence, related to time to relapse and to relapse treatability. Uni- and multivariable linear and logistic regression models were used. Based on multivariable analysis, a shorter interval to AED reduction was the only independent predictor of a shorter time to relapse. Based on univariable analysis, incomplete resection of the epileptogenic zone related to a shorter time to recurrence. Timing of recurrence was not related to the chance of regaining seizure freedom after reinstallation of medical treatment. For children in whom AED reduction is initiated following epilepsy surgery, the time to relapse is largely influenced by the timing of AED withdrawal rather than by disease- or surgery-specific factors. We could not confirm a relationship between time to recurrence and treatment response. The timing of AED withdrawal should be taken into account when studying time to relapse following epilepsy surgery, as early withdrawal reveals more rapidly whether surgery had the intended curative effect, independently of the other factors involved.

  8. Technical Note: A minimally invasive experimental system for pCO2 manipulation in plankton cultures using passive gas exchange (atmospheric carbon control simulator)

    NASA Astrophysics Data System (ADS)

    Love, Brooke A.; Olson, M. Brady; Wuori, Tristen

    2017-05-01

    As research into the biotic effects of ocean acidification has increased, the methods for simulating these environmental changes in the laboratory have multiplied. Here we describe the atmospheric carbon control simulator (ACCS) for the maintenance of plankton under controlled pCO2 conditions, designed for species sensitive to the physical disturbance introduced by the bubbling of cultures and for studies involving trophic interaction. The system consists of gas mixing and equilibration components coupled with large-volume atmospheric simulation chambers. These chambers allow gas exchange to counteract the changes in carbonate chemistry induced by the metabolic activity of the organisms. The system is relatively low cost, very flexible, and when used in conjunction with semi-continuous culture methods, it increases the density of organisms kept under realistic conditions, increases the allowable time interval between dilutions, and/or decreases the metabolically driven change in carbonate chemistry during these intervals. It accommodates a large number of culture vessels, which facilitate multi-trophic level studies and allow the tracking of variable responses within and across plankton populations to ocean acidification. It also includes components that increase the reliability of gas mixing systems using mass flow controllers.

  9. Intermittency of solar wind on scale 0.01-16 Hz.

    NASA Astrophysics Data System (ADS)

    Riazantseva, Maria; Zastenker, Georgy; Chernyshov, Alexander; Petrosyan, Arakel

    The Earth's magnetosphere is formed by the solar wind flowing around the Earth's magnetic field. The solar wind is a turbulent plasma flow that displays a multifractal structure and an intermittent character, which is why studying the characteristics of solar wind turbulence is an important part of the problem of energy transport from the solar wind to the magnetosphere. A large degree of intermittency is observed in the solar wind ion flux and magnetic field time series. We investigated the intermittency of solar wind fluctuations using large statistics of high-time-resolution measurements onboard the Interball-1 spacecraft, on scales from 0.01 to 16 Hz. Importantly, this investigation covers, for the first time with plasma data, the previously unexplored region of comparatively fast variations (frequencies up to 16 Hz), significantly extending the range of intermittency observations for solar wind plasma. The intermittency is practically absent on scales longer than 1000 s and grows toward small scales down to about 30-60 s. The behavior of the intermittency on scales shorter than 30-60 s is rather variable. The boundary between these two regimes lies quantitatively near the well-known boundary between the dissipation and inertial ranges of fluctuations, which may point to a relation between them. Special attention is given to comparing the intermittency of solar wind observation intervals containing SCIF (Sudden Changes of Ion Flux) with that of intervals without SCIF. Such a comparison reveals the fundamental turbulent properties of the solar wind regions in which SCIF is observed more frequently. We use a nearly incompressible model of solar wind turbulence to interpret the data. According to this model, a regime is realized in turbulent solar wind flows in which density fluctuations are a passive scalar in a hydrodynamic velocity field. This hypothesis can be verified straightforwardly by investigating the density spectrum, which should be slaved to the incompressible velocity spectrum. Density discontinuities on time scales down to about 30-60 s are determined by the intermittency of the turbulent velocity field; in this range, solar wind intermittency and many or most of its discontinuities are produced by MHD turbulence, and it is possible that many or even most of the current structures in the solar wind, particularly inertial-range structures, contribute to the tails of the PDFs. The complex non-Gaussian behavior on shorter time scales is described by the nonhomogeneity of the dissipation rate in the statistical moments of the density field in a random flow.

  10. Compression based entropy estimation of heart rate variability on multiple time scales.

    PubMed

    Baumert, Mathias; Voss, Andreas; Javorka, Michal

    2013-01-01

    Heart rate fluctuates beat by beat in a complex manner. The aim of this study was to develop a framework for entropy assessment of heart rate fluctuations on multiple time scales. We employed the Lempel-Ziv algorithm for lossless data compression to investigate the compressibility of RR interval time series on different time scales, using a coarse-graining procedure. We estimated the entropy of the RR interval time series of 20 young and 20 old subjects and also investigated the compressibility of randomly shuffled surrogate RR time series. The original RR time series displayed significantly smaller compression entropy values than the randomized RR interval data. The RR interval time series of older subjects showed significantly different entropy characteristics over multiple time scales than those of younger subjects. In conclusion, data compression may be a useful approach for multiscale entropy assessment of heart rate variability.
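
    The compression-entropy idea can be sketched with the standard-library zlib (a DEFLATE/LZ77 coder, standing in for whatever Lempel-Ziv variant the authors used) plus the usual multiscale coarse-graining; the synthetic RR series and bin count are assumptions:

```python
import zlib
import numpy as np

def coarse_grain(x, scale):
    """Non-overlapping window averages (the multiscale coarse-graining step)."""
    n = len(x) // scale
    return x[: n * scale].reshape(n, scale).mean(axis=1)

def compression_entropy(x, n_bins=16):
    """Quantize to a small alphabet and report the zlib compression ratio
    (compressed bytes / original bytes) as an entropy estimate."""
    edges = np.linspace(x.min(), x.max(), n_bins - 1)
    raw = np.digitize(x, edges).astype(np.uint8).tobytes()
    return len(zlib.compress(raw, 9)) / len(raw)

# Hypothetical RR series (ms): sinusoidally modulated plus noise, and a
# shuffled surrogate that destroys ordering but keeps the distribution.
rng = np.random.default_rng(3)
rr = 800 + 50 * np.sin(0.1 * np.arange(2000)) + rng.normal(size=2000)
rr_shuffled = rng.permutation(rr)

h_orig = [compression_entropy(coarse_grain(rr, s)) for s in (1, 2, 4)]
h_shuf = [compression_entropy(coarse_grain(rr_shuffled, s)) for s in (1, 2, 4)]
for s, h, hs in zip((1, 2, 4), h_orig, h_shuf):
    print(f"scale {s}: original {h:.2f}, shuffled {hs:.2f}")
```

As in the study, the structured series compresses better (lower ratio) than its shuffled surrogate, because the compressor exploits its temporal regularity.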

  11. Cardiorespiratory interactions in patients with atrial flutter.

    PubMed

    Masè, Michela; Disertori, Marcello; Ravelli, Flavia

    2009-01-01

    Respiratory sinus arrhythmia (RSA) is generally known as the autonomically mediated modulation of the sinus node pacemaker frequency in synchrony with respiration. Cardiorespiratory interactions have been largely investigated during sinus rhythm, whereas little is known about interactions during reentrant arrhythmias. In this study, cardiorespiratory interactions at the atrial and ventricular level were investigated during atrial flutter (AFL), a supraventricular arrhythmia based on a reentry, by using cross-spectral analysis and computer modeling. The coherence and phase between respiration and atrial (gamma(AA)(2), phi(AA)) and ventricular (gamma(RR)(2), phi(RR)) interval series were estimated in 20 patients with typical AFL (68.0 +/- 8.8 yr) and some degree of atrioventricular (AV) conduction block. In all patients, atrial intervals displayed oscillations strongly coupled and in phase with respiration (gamma(AA)(2) = 0.97 +/- 0.05, phi(AA) = 0.71 +/- 0.31 rad), corresponding to a paradoxical lengthening of intervals during inspiration. The modulation pattern was frequency independent, with in-phase oscillations and short time delays (0.40 +/- 0.15 s) for respiratory frequencies in the range 0.1-0.4 Hz. Ventricular patterns were affected by AV conduction type. In patients with fixed AV conduction, ventricular intervals displayed oscillations strongly coupled (gamma(RR)(2) = 0.97 +/- 0.03) and in phase with respiration (phi(RR) = 1.08 +/- 0.80 rad). In contrast, in patients with variable AV conduction, respiratory oscillations were secondary to Wenckebach rhythmicity, resulting in a decreased level of coupling (gamma(RR)(2) = 0.50 +/- 0.21). Simulations with a simplified model of AV conduction showed the ventricular patterns to originate from the combination of a respiratory-modulated atrial input with the functional properties of the AV node. The paradoxical frequency-independent modulation pattern of the atrial intervals, the short time delays, and the complexity of the ventricular rhythm characterize respiratory arrhythmia during AFL and distinguish it from normal RSA. These peculiar features can be explained by assuming a direct mechanical action of respiration on the AFL reentrant circuit.
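
    Cross-spectral coherence and phase of the kind estimated here can be sketched with a Welch-style average in numpy (synthetic series, not patient data; the segment count and frequencies are arbitrary choices):

```python
import numpy as np

def coherence(x, y, fs, nseg=8):
    """Welch-style magnitude-squared coherence and cross-spectral phase,
    averaging over non-overlapping Hann-windowed segments."""
    n = len(x) // nseg
    w = np.hanning(n)
    Pxx = Pyy = Pxy = 0.0
    for k in range(nseg):
        xs = (x[k * n:(k + 1) * n] - x[k * n:(k + 1) * n].mean()) * w
        ys = (y[k * n:(k + 1) * n] - y[k * n:(k + 1) * n].mean()) * w
        X, Y = np.fft.rfft(xs), np.fft.rfft(ys)
        Pxx = Pxx + np.abs(X) ** 2
        Pyy = Pyy + np.abs(Y) ** 2
        Pxy = Pxy + X * np.conj(Y)
    f = np.fft.rfftfreq(n, 1 / fs)
    coh = np.abs(Pxy) ** 2 / (Pxx * Pyy + 1e-30)
    return f, coh, np.angle(Pxy)

# Toy "respiration" at 0.25 Hz modulating a noisy "interval" series.
fs = 4.0
t = np.arange(0, 256, 1 / fs)
rng = np.random.default_rng(4)
resp = np.sin(2 * np.pi * 0.25 * t)
intervals = 0.5 * resp + 0.5 * rng.normal(size=t.size)
f, coh, phase = coherence(resp, intervals, fs)
i = np.argmin(np.abs(f - 0.25))
print(f"coherence at 0.25 Hz: {coh[i]:.2f}, phase: {phase[i]:.2f} rad")
```

High coherence with near-zero phase at the respiratory frequency corresponds to the strongly coupled, in-phase oscillations reported in the abstract.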

  12. The Effects of High Intensity Interval Training vs Steady State Training on Aerobic and Anaerobic Capacity

    PubMed Central

    Foster, Carl; Farland, Courtney V.; Guidotti, Flavia; Harbin, Michelle; Roberts, Brianna; Schuette, Jeff; Tuuri, Andrew; Doberstein, Scott T.; Porcari, John P.

    2015-01-01

    High intensity interval training (HIIT) has become an increasingly popular form of exercise due to its potentially large effects on exercise capacity and small time requirement. This study compared the effects of two HIIT protocols vs steady-state training on aerobic and anaerobic capacity following 8 weeks of training. Fifty-five untrained college-aged subjects were randomly assigned to three training groups (3x weekly). Steady-state (n = 19) exercised (cycle ergometer) 20 minutes at 90% of ventilatory threshold (VT). Tabata (n = 21) completed eight intervals of 20s at 170% VO2max/10s rest. Meyer (n = 15) completed 13 sets of 30s (20 min) @ 100% PVO2max/60s recovery, average PO = 90% VT. Each subject did 24 training sessions during 8 weeks. Results: There were significant (p < 0.05) increases in VO2max (+19, +18 and +18%) and PPO (+17, +24 and +14%) for each training group, as well as significant increases in peak (+8, +9 and +5%) and mean (+4, +7 and +6%) power during Wingate testing, but no significant differences between groups. Measures of the enjoyment of the training program indicated that the Tabata protocol was significantly less enjoyable (p < 0.05) than the steady state and Meyer protocols, and that the enjoyment of all protocols declined (p < 0.05) across the duration of the study. The results suggest that although HIIT protocols are time efficient, they are not superior to conventional exercise training in sedentary young adults. Key points: Steady state training is equivalent to HIIT in untrained students; mild interval training presents a very similar physiologic challenge to steady state training; HIIT (particularly very high intensity variants) was less enjoyable than steady state or mild interval training; enjoyment of training decreases across the course of an 8-week experimental training program. PMID:26664271

  13. The Effects of High Intensity Interval Training vs Steady State Training on Aerobic and Anaerobic Capacity.

    PubMed

    Foster, Carl; Farland, Courtney V; Guidotti, Flavia; Harbin, Michelle; Roberts, Brianna; Schuette, Jeff; Tuuri, Andrew; Doberstein, Scott T; Porcari, John P

    2015-12-01

    High intensity interval training (HIIT) has become an increasingly popular form of exercise due to its potentially large effects on exercise capacity and small time requirement. This study compared the effects of two HIIT protocols vs steady-state training on aerobic and anaerobic capacity following 8 weeks of training. Fifty-five untrained college-aged subjects were randomly assigned to three training groups (3x weekly). Steady-state (n = 19) exercised (cycle ergometer) 20 minutes at 90% of ventilatory threshold (VT). Tabata (n = 21) completed eight intervals of 20s at 170% VO2max/10s rest. Meyer (n = 15) completed 13 sets of 30s (20 min) @ 100% PVO2max/60s recovery, average PO = 90% VT. Each subject did 24 training sessions during 8 weeks. There were significant (p < 0.05) increases in VO2max (+19, +18 and +18%) and PPO (+17, +24 and +14%) for each training group, as well as significant increases in peak (+8, +9 and +5%) and mean (+4, +7 and +6%) power during Wingate testing, but no significant differences between groups. Measures of the enjoyment of the training program indicated that the Tabata protocol was significantly less enjoyable (p < 0.05) than the steady state and Meyer protocols, and that the enjoyment of all protocols declined (p < 0.05) across the duration of the study. The results suggest that although HIIT protocols are time efficient, they are not superior to conventional exercise training in sedentary young adults. Key points: Steady state training is equivalent to HIIT in untrained students; mild interval training presents a very similar physiologic challenge to steady state training; HIIT (particularly very high intensity variants) was less enjoyable than steady state or mild interval training; enjoyment of training decreases across the course of an 8-week experimental training program.

  14. Relationship between cutoff frequency and accuracy in time-interval photon statistics applied to oscillating signals

    NASA Astrophysics Data System (ADS)

    Rebolledo, M. A.; Martinez-Betorz, J. A.

    1989-04-01

    In this paper the accuracy in the determination of the period of an oscillating signal, when obtained from the photon statistics time-interval probability, is studied as a function of the precision (the inverse of the cutoff frequency of the photon counting system) with which time intervals are measured. The results are obtained by means of an experiment with a square-wave signal, where the Fourier or square-wave transforms of the time-interval probability are measured. It is found that for values of the frequency of the signal near the cutoff frequency the errors in the period are small.

  15. Method of high precision interval measurement in pulse laser ranging system

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Lv, Xin-yuan; Mao, Jin-jin; Liu, Wei; Yang, Dong

    2013-09-01

    Laser ranging has the advantages of high measuring precision, fast measuring speed, no need for cooperative targets, and strong resistance to electromagnetic interference; the time-of-flight measurement is the key parameter affecting the performance of the whole system. The precision of a pulsed laser ranging system is decided by the precision of its time interval measurement. This paper introduces the principle and structure of the laser ranging system and establishes a method of high precision time interval measurement for pulse laser ranging systems. Based on an analysis of the factors that affect range measurement precision, a pulse rising-edge discriminator was adopted to produce the timing marks for start-stop time discrimination, and a TDC-GP2 high precision interval measurement system based on a TMS320F2812 DSP was designed to improve the measurement precision. Experimental results indicate that the time interval measurement method presented here can obtain higher range accuracy. Compared with traditional time interval measurement systems, the method simplifies the system design and reduces the influence of bad weather conditions; furthermore, it satisfies the requirements of low cost and miniaturization.
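
    To make the precision link concrete, round-trip timing resolution maps to range resolution as R = c·Δt/2. A minimal sketch (the ~65 ps bin width is a typical figure for a TDC-GP2-class chip, assumed here for illustration; it is not stated in the abstract):

```python
# Sketch: how timing resolution bounds range resolution in pulsed laser ranging.
C = 299_792_458.0  # speed of light, m/s

def range_resolution(dt_seconds: float) -> float:
    """Range resolution for a round-trip time-of-flight measurement."""
    return C * dt_seconds / 2.0  # divide by 2: the pulse travels out and back

# Assumed ~65 ps TDC bin width -> roughly 1 cm of range per bin.
print(range_resolution(65e-12))
```

    This is why the abstract ties the whole system's precision to the time interval measurement: every picosecond of timing uncertainty is about 0.15 mm of range uncertainty.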

  16. A numerical study of the laminar necklace vortex system and its effect on the wake for a circular cylinder

    NASA Astrophysics Data System (ADS)

    Kirkil, Gokhan; Constantinescu, George

    2012-07-01

    Large eddy simulation (LES) is used to investigate the structure of the laminar horseshoe vortex (HV) system and the dynamics of the necklace vortices as they fold around the base of a circular cylinder mounted on the flat bed of an open channel for Reynolds numbers defined with the cylinder diameter, D, smaller than 4460. The study concentrates on the analysis of the structure of the HV system in the periodic breakaway sub-regime, which is characterized by the formation of three main necklace vortices. Over one oscillation cycle of the previously observed breakaway sub-regime, the corner vortex and the primary vortex merge (amalgamate) and a developing vortex separates from the incoming laminar boundary layer (BL) to become the new primary vortex. Results show that while the classical breakaway sub-regime, in which one amalgamation event occurs per oscillation cycle, is present when the nondimensional displacement thickness of the incoming BL at the location of the cylinder is relatively large (δ*/D > 0.1), a new type of breakaway sub-regime is present for low values of δ*/D. This sub-regime, which we call the double-breakaway sub-regime, is characterized by the occurrence of two amalgamation events over one full oscillation cycle. LES results show that when the HV system is in one of the breakaway sub-regimes, the interactions between the highly coherent necklace vortices and the eddies shed inside the separated shear layers (SSLs) are very strong. For the relatively shallow flow conditions considered in this study (H/D ≅ 1, H is the channel depth), at times, the disturbances induced by the legs of the necklace vortices do not allow the SSLs on the two sides of the cylinder to interact in a way that allows the vorticity redistribution mechanism to lead to the formation of a new wake roller. As a result, the shedding of large-scale rollers in the turbulent wake is suppressed for relatively large periods of time. 
Simulation results show that the wake structure changes randomly between time intervals when large-scale rollers are forming and are convected in the wake (von Karman regime), and time intervals when the rollers do not form. When the wake is in the von Karman regime, the shedding frequency of the rollers is close to that observed for flow past infinitely long cylinders.

  17. Revising the "Rule of Three" for inferring seizure freedom.

    PubMed

    Westover, M Brandon; Cormier, Justine; Bianchi, Matt T; Shafi, Mouhsin; Kilbride, Ronan; Cole, Andrew J; Cash, Sydney S

    2012-02-01

    How long after starting a new medication must a patient go without seizures before they can be regarded as seizure-free? A recent International League Against Epilepsy (ILAE) task force proposed using a "Rule of Three" as an operational definition of seizure freedom, according to which a patient should be considered seizure-free following an intervention after a period without seizures has elapsed equal to three times the longest preintervention interseizure interval over the previous year. This rule was motivated in large part by statistical considerations advanced in a classic 1983 paper by Hanley and Lippman-Hand. However, strict adherence to the statistical logic of this rule generally requires waiting much longer than recommended by the ILAE task force. Therefore, we set out to determine whether an alternative approach to the Rule of Three might be possible, and under what conditions the rule may be expected to hold or would need to be extended. Our methods were probabilistic modeling and application of Bayes' rule. We find that an alternative approach to the problem of inferring seizure freedom supports using the Rule of Three in the way proposed by the ILAE in many cases, particularly in evaluating responses to a first trial of antiseizure medication, and to favorably-selected epilepsy surgical candidates. In cases where the a priori odds of success are less favorable, our analysis requires longer seizure-free observation periods before declaring seizure freedom, up to six times the average preintervention interseizure interval. The key to our approach is to take into account not only the time elapsed without seizures but also empirical data regarding the a priori probability of achieving seizure freedom conferred by a particular intervention. 
In many cases it may be reasonable to consider a patient seizure-free after they have gone without seizures for a period equal to three times the preintervention interseizure interval, as proposed on pragmatic grounds in a recent ILAE position paper, although in other commonly encountered cases a waiting time of up to six times this interval is required. In this work we have provided a coherent theoretical basis for a modified criterion for seizure freedom, which we call the "Rule of Three-To-Six."

  19. Oscillations of galactic cosmic rays and solar indices before the arrival of relativistic solar protons

    NASA Astrophysics Data System (ADS)

    Miroshnichenko, L. I.; Pérez-Peraza, J. A.; Velasco-Herrera, V. M.; Zapotitla, J.; Vashenyuk, E. V.

    2012-09-01

    Using modern wavelet analysis techniques, we have searched for oscillations in the intensity of galactic cosmic rays (GCR), sunspot numbers (SS) and magnitudes of the coronal index (CI), on the premise that the time evolution of those oscillations may serve as a precursor of Ground Level Enhancements (GLEs) of solar cosmic rays (SCR). From the total of 70 GLEs registered in 1942-2006, four large events (23 February 1956, 14 July 2000, 28 October 2003, and 20 January 2005) were chosen for this study. Our analysis shows that the frequency of GCR oscillations decreases as the event day approaches. We have also studied the behaviour of common periodicities of GCR and SCR within the time interval of an individual GLE. The oscillations of the GLE occurrence rate (OR) at different stages of the solar activity (SA) cycle are of special interest. We found common periodicities of SS and CI in the range of short (2.8, 5.2, 27 and 60 days), medium (0.3, 0.5, 0.7, 1.3, 1.8 and 3.2 years) and long (4.6 and 11.0 years) periods. Short and medium periodicities, in general, are concentrated around the maxima of solar cycles and display complex phase relations. Comparing these results with the behaviour of the OR oscillations, we found that the 11-year period is dominant (controlling); it is continuous over the entire 1942-2006 interval, and throughout this time it displays high synchronization and clear linear ratios between the phases of the oscillations of OR, SS and CI. This implies that SCR generation is not an isolated stochastic phenomenon characteristic exclusively of chromospheric and/or coronal structures. Rather, the process may have global features and involve large regions of the Sun's atmosphere.

  20. Floquet Engineering in Quantum Chains

    NASA Astrophysics Data System (ADS)

    Kennes, D. M.; de la Torre, A.; Ron, A.; Hsieh, D.; Millis, A. J.

    2018-03-01

    We consider a one-dimensional interacting spinless fermion model, which displays the well-known Luttinger liquid (LL) to charge density wave (CDW) transition as a function of the ratio between the strength of the interaction U and the hopping J . We subject this system to a spatially uniform drive which is ramped up over a finite time interval and becomes time periodic in the long-time limit. We show that by using a density matrix renormalization group approach formulated for infinite system sizes, we can access the large-time limit even when the drive induces finite heating. When both the initial and long-time states are in the gapless (LL) phase, the final state has power-law correlations for all ramp speeds. However, when the initial and final state are gapped (CDW phase), we find a pseudothermal state with an effective temperature that depends on the ramp rate, both for the Magnus regime in which the drive frequency is very large compared to other scales in the system and in the opposite limit where the drive frequency is less than the gap. Remarkably, quantum defects (instantons) appear when the drive tunes the system through the quantum critical point, in a realization of the Kibble-Zurek mechanism.

  1. Improving the Determination of Eastern Elongations of Planetary Satellites in the Astronomical Almanac

    NASA Astrophysics Data System (ADS)

    Rura, Christopher; Stollberg, Mark

    2018-01-01

    The Astronomical Almanac is an annual publication of the US Naval Observatory (USNO) containing a wide variety of astronomical data used by astronomers worldwide as a general reference or for planning observations. Included in the almanac are the times of greatest eastern and northern elongation of the natural satellites of the planets, accurate to 0.1 hour UT. The production code currently used to determine elongation times generates X and Y coordinates for each of the 16 satellites at 5-second intervals, which produces very large data files and makes the program devoted to determining the elongation times computationally intensive. To make this program more efficient, we wrote a Python program that fits a cubic spline to data generated with a 6-minute time step. The resulting elongation times agree with those determined from the 5-second data currently used in a large number of cases, as tested for 16 satellites between 2017 and 2019. The accuracy of this program is being tested for the years past 2019 and, if no problems are found, the code will be considered for production of this section of The Astronomical Almanac.
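
    The spline-based extremum search described above can be sketched as follows (a minimal illustration using SciPy's CubicSpline on a synthetic coordinate curve standing in for the almanac data; the actual production code is not shown in the abstract):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Synthetic stand-in for a satellite's apparent X coordinate vs. time;
# the coarse grid mimics the 6-minute sampling, in arbitrary units.
t = np.linspace(0.0, 3.0, 31)   # coarse time grid
x = np.sin(t)                   # smooth coordinate curve with one maximum

spline = CubicSpline(t, x)
crit = spline.derivative().roots(extrapolate=False)  # times where dx/dt = 0
t_max = float(crit[np.argmax(spline(crit))])         # time of greatest elongation
print(t_max)  # close to pi/2 ≈ 1.5708
```

    Solving for the roots of the spline's derivative recovers the extremum to far better than the coarse sampling step, which is what lets a 6-minute grid reproduce the 5-second result.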

  2. Arthroscopic repair of massive contracted rotator cuff tears: aggressive release with anterior and posterior interval slides do not improve cuff healing and integrity.

    PubMed

    Kim, Sung-Jae; Kim, Sung-Hwan; Lee, Su-Keon; Seo, Jae-Wan; Chun, Yong-Min

    2013-08-21

    Few studies of large-to-massive contracted rotator cuff tears have examined the arthroscopic complete repair obtained by a posterior interval slide and whether the clinical outcomes or structural integrity achieved are better than those after partial repair without the posterior interval slide. The study included forty-one patients with large-to-massive contracted rotator cuff tears, not amenable to complete repair with margin convergence alone. The patients underwent either arthroscopic complete repair with a posterior interval slide and side-to-side repair of the interval slide edge (twenty-two patients; Group P) or partial repair with margin convergence (nineteen patients; Group M). The patient assignment was not randomized. The Simple Shoulder Test (SST), American Shoulder and Elbow Surgeons (ASES) score, University of California at Los Angeles (UCLA) shoulder score, and range of motion were used to compare the functional outcomes. Preoperative and six-month postoperative magnetic resonance arthrography (MRA) images were compared within or between groups. At the two-year follow-up evaluation, the SST, ASES score, UCLA score, and range of motion had significantly improved (p < 0.001 for all) in both groups. However, no significant differences were detected between groups. Even though the difference in preoperative tear size on MRA images was not significant, follow-up MRA images identified a retear in twenty patients (91%) in Group P and a significant difference in tear size between groups (p = 0.007). The complete repair group with an aggressive release had no better clinical or structural outcomes compared with the partial repair group with margin convergence alone for large-to-massive contracted rotator cuff tears. In addition, the complete repair group had a 91% retear rate and a greater defect on follow-up MRA images. 
Even though this study had a relatively short-term follow-up, a complete repair of large-to-massive contracted rotator cuff tears with an aggressive release, such as a posterior interval slide, may not have an increased benefit compared with partial repair without the posterior interval slide.

  3. Can We Draw General Conclusions from Interval Training Studies?

    PubMed

    Viana, Ricardo Borges; de Lira, Claudio Andre Barbosa; Naves, João Pedro Araújo; Coswig, Victor Silveira; Del Vecchio, Fabrício Boscolo; Ramirez-Campillo, Rodrigo; Vieira, Carlos Alexandre; Gentil, Paulo

    2018-04-19

    Interval training (IT) has been used for many decades with the purpose of increasing performance and promoting health benefits while demanding a relatively small amount of time. IT can be defined as intermittent periods of intense exercise separated by periods of recovery, and has been divided into high-intensity interval training (HIIT), sprint interval training (SIT), and repeated sprint training (RST). IT use has resulted in the publication of many studies, many of them with conflicting results and positions. The aim of this article was to examine the studies' protocols in order to draw accurate conclusions, as well as to avoid previous mistakes and effectively reproduce previous protocols. When analyzing the literature, we found many inconsistencies, such as the controversial concept of 'supramaximal' effort, a misunderstanding with regard to the term 'high intensity,' and the use of different strategies to control intensity. The adequate definition and interpretation of training intensity seems vital, since the results of IT largely depend on it. These observations are only a few examples of the complexity involved in IT prescription, and are discussed to illustrate some problems with the current literature regarding IT. Therefore, it is our opinion that it is not possible to draw general conclusions about IT without considering all the variables used in IT prescription, such as exercise modality, intensity, effort and rest times, and participants' characteristics. To help guide researchers and health professionals in their practices, it is important that experimental studies report their methods in as much detail as possible, and future reviews and meta-analyses should critically discuss the included articles in light of their methods to avoid inappropriate generalizations.

  4. Uncertainty analysis for absorbed dose from a brain receptor imaging agent

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aydogan, B.; Miller, L.F.; Sparks, R.B.

    Absorbed dose estimates are known to contain uncertainties. A recent literature search indicates that prior to this study no rigorous investigation of the uncertainty associated with absorbed dose had been undertaken. A method of uncertainty analysis for absorbed dose calculations has been developed and implemented for the brain receptor imaging agent ¹²³I-IPT. The two major sources of uncertainty considered were the uncertainty associated with the determination of residence time and that associated with the determination of the S values. There are many sources of uncertainty in the determination of the S values, but only the inter-patient organ mass variation was considered in this work. The absorbed dose uncertainties were determined for lung, liver, heart and brain. Ninety-five percent confidence intervals of the organ absorbed dose distributions for each patient and for a seven-patient population group were determined by the Latin Hypercube Sampling method. For an individual patient, the upper bound of the 95% confidence interval of the absorbed dose was found to be about 2.5 times larger than the estimated mean absorbed dose. For the seven-patient population, the upper bound of the 95% confidence interval of the absorbed dose distribution was around 45% more than the estimated population mean. For example, the 95% confidence interval of the population liver dose distribution was found to be between 1.49E+07 Gy/MBq and 4.65E+07 Gy/MBq, with a mean of 2.52E+07 Gy/MBq. This study concluded that patients in a population receiving ¹²³I-IPT could receive absorbed doses as much as twice as large as the standard estimated absorbed dose due to these uncertainties.
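
    A minimal sketch of how Latin Hypercube Sampling yields a dose confidence interval (the multiplicative dose model and the lognormal parameter values below are hypothetical placeholders of mine, not the paper's fitted distributions):

```python
import numpy as np
from scipy.stats import norm, qmc

# Two uncertain inputs: residence time T and a mass-scaled S value.
sampler = qmc.LatinHypercube(d=2, seed=0)
u = sampler.random(n=10_000)              # stratified points in the unit square

# Map stratified uniforms to assumed lognormal uncertainties (arbitrary units).
T = np.exp(norm.ppf(u[:, 0], loc=np.log(2.0), scale=0.3))  # residence time
S = np.exp(norm.ppf(u[:, 1], loc=0.0, scale=0.4))          # S value

dose = T * S                              # simple multiplicative dose model
lo, hi = np.percentile(dose, [2.5, 97.5]) # 95% confidence interval
print(lo, dose.mean(), hi)
```

    The stratification is the point of LHS: each input's marginal distribution is covered evenly, so far fewer samples are needed than with plain Monte Carlo to pin down the interval endpoints.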

  5. Investigations of timing during the schedule and reinforcement intervals with wheel-running reinforcement.

    PubMed

    Belke, Terry W; Christie-Fougere, Melissa M

    2006-11-01

    Across two experiments, a peak procedure was used to assess the timing of the onset and offset of an opportunity to run as a reinforcer. The first experiment investigated the effect of reinforcer duration on temporal discrimination of the onset of the reinforcement interval. Three male Wistar rats were exposed to fixed-interval (FI) 30-s schedules of wheel-running reinforcement and the duration of the opportunity to run was varied across values of 15, 30, and 60s. Each session consisted of 50 reinforcers and 10 probe trials. Results showed that as reinforcer duration increased, the percentage of postreinforcement pauses longer than the 30-s schedule interval increased. On probe trials, peak response rates occurred near the time of reinforcer delivery and peak times varied with reinforcer duration. In a second experiment, seven female Long-Evans rats were exposed to FI 30-s schedules leading to 30-s opportunities to run. Timing of the onset and offset of the reinforcement period was assessed by probe trials during the schedule interval and during the reinforcement interval in separate conditions. The results provided evidence of timing of the onset, but not the offset of the wheel-running reinforcement period. Further research is required to assess if timing occurs during a wheel-running reinforcement period.

  6. Automatic location of L/H transition times for physical studies with a large statistical basis

    NASA Astrophysics Data System (ADS)

    González, S.; Vega, J.; Murari, A.; Pereira, A.; Dormido-Canto, S.; Ramírez, J. M.; JET-EFDA contributors

    2012-06-01

    Completely automatic techniques to estimate and validate L/H transition times can be essential in L/H transition analyses. The generation of databases with hundreds of transition times and without human intervention is an important step to accomplish (a) L/H transition physics analysis, (b) validation of L/H theoretical models and (c) creation of L/H scaling laws. An entirely unattended methodology is presented in this paper to build large databases of transition times in JET using time series. The proposed technique has been applied to a dataset of 551 JET discharges between campaigns C21 and C26. A prediction with discharges that show a clear signature in time series is made through the locating properties of the wavelet transform. It is an accurate prediction and the uncertainty interval is ±3.2 ms. The discharges with a non-clear pattern in the time series use an L/H mode classifier based on discharges with a clear signature. In this case, the estimation error shows a distribution with mean and standard deviation of 27.9 ms and 37.62 ms, respectively. Two different regression methods have been applied to the measurements acquired at the transition times identified by the automatic system. The obtained scaling laws for the threshold power are not significantly different from those obtained using the data at the transition times determined manually by the experts. The automatic methods allow performing physical studies with a large number of discharges, showing, for example, that there are statistically different types of transitions characterized by different scaling laws.
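
    The locating property of the wavelet transform that such a system exploits can be illustrated with a simple Haar-like sliding-difference detector on a synthetic step signal (a sketch of the idea only, not the authors' pipeline or the JET data):

```python
import numpy as np

def haar_locate(signal: np.ndarray, width: int) -> int:
    """Locate an abrupt level change by sliding a Haar-like wavelet:
    response = mean(after) - mean(before) over `width` samples each side."""
    n = len(signal)
    resp = np.zeros(n)
    for i in range(width, n - width):
        resp[i] = signal[i:i + width].mean() - signal[i - width:i].mean()
    return int(np.argmax(np.abs(resp)))

# Noisy step standing in for a confinement-related time series at an L/H transition.
rng = np.random.default_rng(1)
x = np.concatenate([np.zeros(500), np.ones(500)]) + 0.1 * rng.standard_normal(1000)
print(haar_locate(x, 50))  # near sample 500, where the level changes
```

    Discharges with a "clear signature" are exactly those where this kind of response has one sharp, dominant peak; the abstract's classifier handles the rest.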

  7. High resolution digital delay timer

    DOEpatents

    Martin, Albert D.

    1988-01-01

    Method and apparatus are provided for generating an output pulse following a trigger pulse at a time delay interval preset with a resolution which is high relative to a low resolution available from supplied clock pulses. A first lumped constant delay (20) provides a first output signal (24) at predetermined interpolation intervals corresponding to the desired high resolution time interval. Latching circuits (26, 28) latch the high resolution data (24) to form a first synchronizing data set (60). A selected time interval has been preset to internal counters (142, 146, 154) and corrected for circuit propagation delay times having the same order of magnitude as the desired high resolution. Internal system clock pulses (32, 34) count down the counters to generate an internal pulse delayed by an interval which is functionally related to the preset time interval. A second LCD (184) corrects the internal signal with the high resolution time delay. A second internal pulse is then applied to a third LCD (74) to generate a second set of synchronizing data (76) which is complementary with the first set of synchronizing data (60) for presentation to logic circuits (64). The logic circuits (64) further delay the internal output signal (72) to obtain a proper phase relationship of an output signal (80) with the internal pulses (32, 34). The final delayed output signal (80) thereafter enables the output pulse generator (82) to produce the desired output pulse (84) at the preset time delay interval following input of the trigger pulse (10, 12).

  8. No Clear Association between Impaired Short-Term or Working Memory Storage and Time Reproduction Capacity in Adult ADHD Patients.

    PubMed

    Mette, Christian; Grabemann, Marco; Zimmermann, Marco; Strunz, Laura; Scherbaum, Norbert; Wiltfang, Jens; Kis, Bernhard

    2015-01-01

    Altered time reproduction is exhibited by patients with adult attention deficit hyperactivity disorder (ADHD). It remains unclear whether memory capacity influences the ability of adults with ADHD to reproduce time intervals. We conducted a behavioral study on 30 ADHD patients who were medicated with methylphenidate, 29 unmedicated adult ADHD patients and 32 healthy controls (HCs). We assessed time reproduction using six time intervals (1 s, 4 s, 6 s, 10 s, 24 s and 60 s) and assessed memory performance using the Wechsler memory scale. The patients with ADHD exhibited lower memory performance scores than the HCs. No significant differences in the raw scores for any of the time intervals (p > .05), with the exception of the variability at the short time intervals (1 s, 4 s and 6 s) (p < .01), were found between the groups. The overall analyses failed to reveal any significant correlations between time reproduction at any of the time intervals examined in the time reproduction task and working memory performance (p > .05). We detected no findings indicating that working memory might influence time reproduction in adult patients with ADHD. Therefore, further studies concerning time reproduction and memory capacity among adult patients with ADHD must be performed to verify and replicate the present findings.

  9. Optical timing receiver for the NASA laser ranging system. Part 2: High precision time interval digitizer

    NASA Technical Reports Server (NTRS)

    Leskovar, B.; Turko, B.

    1977-01-01

    The development of a high precision time interval digitizer is described. The time digitizer is a 10 psec resolution stopwatch covering a range of up to 340 msec. The measured time interval is determined as the separation between the leading edges of a pair of pulses applied externally to the start and stop inputs of the digitizer. Employing an interpolation technique and a 50 MHz high precision master oscillator, the equivalent of a 100 GHz clock frequency standard is achieved. The absolute accuracy and stability of the digitizer are determined by the external 50 MHz master oscillator, which serves as a standard time marker. The start and stop pulses are fast signals with 1 nsec rise times, discriminated by means of tunnel diode discriminators. The firing levels of the discriminators define the start and stop points between which the time interval is digitized.
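
    The interpolation scheme can be sketched numerically: a coarse counter resolves whole 20 ns periods of the 50 MHz clock, and interpolators measure the fractions of a clock period at the start and stop edges (a simplified model of the method; the variable names and example numbers are illustrative):

```python
# Interpolation ("coarse counter + fine interpolators") time interval model.
CLOCK_PERIOD = 20e-9  # 50 MHz master oscillator -> 20 ns period

def interval(n_clocks: int, start_frac: float, stop_frac: float) -> float:
    """n_clocks:   whole clock periods counted between start and stop edges;
    start_frac: time from the start pulse to the next clock edge;
    stop_frac:  time from the stop pulse to the next clock edge."""
    return n_clocks * CLOCK_PERIOD + start_frac - stop_frac

# 123 whole periods plus a 10 ps net interpolator difference ≈ 2.46001 µs.
print(interval(123, 8.21e-9, 8.20e-9))
```

    Only the interpolators need picosecond-class resolution; the counter supplies the 340 ms range, which is how a 50 MHz clock yields an effective 100 GHz (10 ps) standard.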

  10. Eliciting interval beliefs: An experimental study

    PubMed Central

    Peeters, Ronald; Wolk, Leonard

    2017-01-01

    In this paper we study the interval scoring rule as a mechanism to elicit subjective beliefs under varying degrees of uncertainty. In our experiment, subjects forecast the termination time of a time series to be generated from a given but unknown stochastic process. Subjects gradually learn more about the underlying process over time and hence the true distribution over termination times. We conduct two treatments, one with a high and one with a low volatility process. We find that elicited intervals are better when subjects are facing a low volatility process. In this treatment, participants learn to position their intervals almost optimally over the course of the experiment. This is in contrast with the high volatility treatment, where subjects, over the course of the experiment, learn to optimize the location of their intervals but fail to provide the optimal length. PMID:28380020

  11. Filling the blanks in temporal intervals: the type of filling influences perceived duration and discrimination performance

    PubMed Central

    Horr, Ninja K.; Di Luca, Massimiliano

    2015-01-01

    In this work we investigate how judgments of perceived duration are influenced by the properties of the signals that define the intervals. Participants compared two auditory intervals that could be any combination of the following four types: intervals filled with continuous tones (filled intervals), intervals filled with regularly-timed short tones (isochronous intervals), intervals filled with irregularly-timed short tones (anisochronous intervals), and intervals demarcated by two short tones (empty intervals). Results indicate that the type of intervals to be compared affects discrimination performance and induces distortions in perceived duration. In particular, we find that duration judgments are most precise when comparing two isochronous and two continuous intervals, while the comparison of two anisochronous intervals leads to the worst performance. Moreover, we determined that the magnitude of the distortions in perceived duration (an effect akin to the filled duration illusion) is higher for tone sequences (no matter whether isochronous or anisochronous) than for continuous tones. Further analysis of how duration distortions depend on the type of filling suggests that distortions are not only due to the perceived duration of the two individual intervals, but they may also be due to the comparison of two different filling types. PMID:25717310

  12. Use of precision time and time interval (PTTI)

    NASA Technical Reports Server (NTRS)

    Taylor, J. D.

    1974-01-01

    Range time synchronization methods are reviewed as an important aspect of range operations. The overall capabilities of various missile ranges to determine the precise time of day by synchronizing to available references, and to apply this time point to instrumentation for time interval measurements, are described.

  13. Energetic Particle Sounding of the Magnetopause Deformed by Hot Flow Anomaly

    NASA Astrophysics Data System (ADS)

    Zhao, L.; Zong, Q.; Zhang, H.

    2017-12-01

    Hot flow anomalies (HFAs), frequently observed near Earth's bow shock, result from the interaction between interplanetary discontinuities and the bow shock. Such transient phenomena upstream of the bow shock can cause significant deformation of the bow shock and the magnetosphere, generating traveling convection vortices, field-aligned currents, and ULF waves in the Earth's magnetosphere. A large HFA lasting about 16 minutes was observed by MMS on November 19, 2015. In this study, the energetic particle sounding method, applied to high-time-resolution (150 ms) Fast Plasma Investigation (FPI) data, is used to determine the distances, orientations, and structures of the deformed magnetopause in the interval when MMS passes through it. Sounding from a single MMS satellite, at every moment when the spacecraft is within two proton gyroradii of the magnetopause, yields the profile of the deformed boundary.

  14. Glacial Cycles Influence Marine Methane Hydrate Formation

    NASA Astrophysics Data System (ADS)

    Malinverno, A.; Cook, A. E.; Daigle, H.; Oryan, B.

    2018-01-01

    Methane hydrates in fine-grained continental slope sediments often occupy isolated depth intervals surrounded by hydrate-free sediments. As they are not connected to deep gas sources, these hydrate deposits have been interpreted as sourced by in situ microbial methane. We investigate here the hypothesis that these isolated hydrate accumulations form preferentially in sediments deposited during Pleistocene glacial lowstands that contain relatively large amounts of labile particulate organic carbon, leading to enhanced microbial methanogenesis. To test this hypothesis, we apply an advection-diffusion-reaction model with a time-dependent organic carbon deposition controlled by glacioeustatic sea level variations. In the model, hydrate forms in sediments with greater organic carbon content deposited during the penultimate glacial cycle (~120-240 ka). The model predictions match hydrate-bearing intervals detected in three sites drilled on the northern Gulf of Mexico continental slope, supporting the hypothesis of hydrate formation driven by enhanced organic carbon burial during glacial lowstands.
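    The modeling approach described above couples transport with a methanogenesis source term tied to organic carbon content. A minimal 1D diffusion-reaction sketch in Python illustrates the idea (this is not the authors' model: advection is omitted, and the grid size, diffusivity, and source profile are arbitrary stand-ins):

```python
import numpy as np

# Minimal 1D diffusion-reaction sketch: methane concentration C(z, t) in
# pore water with a depth-dependent microbial source term standing in for
# an organic-rich layer deposited during a glacial lowstand.
nz, dz, dt = 100, 1.0, 0.1           # grid cells, cell size, time step
D = 1.0                              # diffusivity (illustrative units)
source = np.zeros(nz)
source[40:60] = 0.05                 # organic-rich interval -> methanogenesis
C = np.zeros(nz)
for _ in range(5000):
    lap = np.zeros(nz)
    lap[1:-1] = (C[2:] - 2 * C[1:-1] + C[:-2]) / dz**2
    C = C + dt * (D * lap + source)  # explicit Euler step
    C[0] = C[-1] = 0.0               # methane lost at the boundaries
print(int(C.argmax()))  # concentration peaks inside the organic-rich interval
```

With dt * D / dz**2 = 0.1 the explicit scheme is stable, and the concentration maximum develops inside the source interval, mimicking an isolated hydrate-prone depth range surrounded by leaner sediments.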

  15. Infant temperament: stability by age, gender, birth order, term status, and socioeconomic status.

    PubMed

    Bornstein, Marc H; Putnick, Diane L; Gartstein, Maria A; Hahn, Chun-Shin; Auestad, Nancy; O'Connor, Deborah L

    2015-01-01

    Two complementary studies focused on stability of infant temperament across the 1st year and considered infant age, gender, birth order, term status, and socioeconomic status (SES) as moderators. Study 1 consisted of 73 mothers of firstborn term girls and boys queried at 2, 5, and 13 months of age. Study 2 consisted of 335 mothers of infants of different gender, birth order, term status, and SES queried at 6 and 12 months. Consistent positive and negative affectivity factors emerged at all time points across both studies. Infant temperament proved stable and robust across gender, birth order, term status, and SES. Stability coefficients for temperament factors and scales were medium to large for shorter (< 9 months) interassessment intervals and small to medium for longer (> 10 months) intervals. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.

  16. Risk factors for brachial plexus injury in a large cohort with shoulder dystocia.

    PubMed

    Volpe, Katherine A; Snowden, Jonathan M; Cheng, Yvonne W; Caughey, Aaron B

    2016-11-01

    To examine birthweight and other predictors of brachial plexus injury (BPI) among births complicated by shoulder dystocia. A retrospective cohort study of term births complicated by shoulder dystocia in California between 1997 and 2006. Birthweight at the time of delivery was stratified into 500-g intervals, and women were further stratified by diabetes status, parity, and race/ethnicity. The perinatal outcome of BPI was assessed. This study included 62,762 deliveries complicated by shoulder dystocia, of which 3168 (5%) resulted in BPI. The association between birthweight and BPI remained significant after adjustment for confounders: each increasing birthweight interval was associated with an increasing risk of BPI compared with the 3000-3499-g reference group. Race/ethnicity, diabetes, and parity were also independently associated with BPI. Increasing birthweight increases the risk of BPI among births with shoulder dystocia, independent of advanced maternal age, race, parity, gestational diabetes, or operative vaginal delivery.

  17. Time intervals in the treatment of fractured femurs as indicators of the quality of trauma systems.

    PubMed

    Matityahu, Amir; Elliott, Iain; Marmor, Meir; Caldwell, Amber; Coughlin, Richard; Gosselin, Richard A

    2014-01-01

    To investigate the use of time intervals in the treatment of fractured femurs as indicators of the quality of trauma systems. Time intervals from injury to admission, admission to surgery and surgery to discharge for patients with isolated femur fractures in four low- and middle-income countries were compared with the corresponding values from one German hospital, an Israeli hospital and the National Trauma Data Bank of the United States of America by means of Student's t-tests. The correlations between the time intervals recorded in a country and that country's expenditure on health and gross domestic product (GDP) were also evaluated using Pearson's product moment correlation coefficient. Relative to patients from high-income countries, those from low- and middle-income countries were significantly more likely to be male and to have been treated by open femoral nailing, and their intervals from injury to admission, admission to surgery and surgery to discharge were significantly longer. Strong negative correlations were detected between the interval from injury to admission and government expenditure on health, and between the interval from admission to surgery and the per capita values for total expenditure on health, government expenditure on health and GDP. Strong positive correlations were detected between the interval from surgery to discharge and general government expenditure on health. The time intervals for the treatment of femur fractures are relatively long in low- and middle-income countries, can easily be measured, and are highly correlated with accessible and quantifiable country data on health and economics.
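    The correlation analysis described above uses Pearson's product-moment coefficient, which is straightforward to compute directly. A short Python sketch follows; the country-level numbers are hypothetical illustrations, not the study's data:

```python
from statistics import mean
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical country-level data: interval from injury to admission (days)
# versus government health expenditure (% of GDP) -- illustrative values only.
injury_to_admission = [0.5, 1.0, 3.0, 6.0, 9.0, 12.0]
gov_health_exp = [8.5, 7.9, 4.2, 3.1, 2.0, 1.5]

r = pearson_r(injury_to_admission, gov_health_exp)
print(round(r, 2))  # strong negative correlation, as in the study's findings
```

Longer waits pair with lower expenditure in this toy data, reproducing the sign (though of course not the magnitude) of the correlations the study reports.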

  18. Preventive care and recall intervals. Targeting of services in child dental care in Norway.

    PubMed

    Wang, N J; Aspelund, G Ø

    2010-03-01

    The skewed distribution of caries has made a high-risk strategy attractive in child dental services. The purpose of this study was to describe the preventive dental care given and the recall intervals used for children and adolescents in a low-caries-risk population, and to study how the time spent on preventive care and the length of the intervals were associated with characteristics of the children and factors related to care delivery. The time spent on and type of preventive care, recall intervals, and the oral health and health behaviour of children and adolescents three to 18 years of age (n = 576), together with the preventive services delivered, were registered at routine dental examinations in the public dental services. The time used for preventive dental care was on average 22% of the total time used in a course of treatment (7.3 of 33.4 minutes). Less than 15% of the variation in time spent on prevention was explained by oral health, oral health behaviours and other characteristics of the children and the service delivery. The mean (SD) recall interval was 15.4 (4.6) months, and 55% of the children were given intervals equal to or longer than 18 months. Approximately 30% of the variation in the length of the recall intervals was explained by characteristics of the child and the service delivery. The time used for preventive dental care of children in a low-risk population was standardized, while the recall intervals were to a certain extent individualized according to dental health and dental health behaviour.

  19. Ca, Sr, Mo and U isotopes evidence ocean acidification and deoxygenation during the Late Permian mass extinction

    NASA Astrophysics Data System (ADS)

    Silva-Tamayo, Juan Carlos; Payne, Jon; Wignall, Paul; Newton, Rob; Eisenhauer, Anton; Weyer, Stefan; Neubert, Nadja; Lau, Kim; Maher, Kate; Paytan, Adina; Lehrmann, Dan; Altiner, Demir; Yu, Meiyi

    2014-05-01

    The most catastrophic extinction event in the history of animal life occurred at the end of the Permian Period, ca. 252 Mya. Ocean acidification and global oceanic euxinia have each been proposed as causes of this biotic crisis, but the magnitude and timing of change in global ocean chemistry remains poorly constrained. Here we use multiple isotope systems - Ca, Sr, Mo and U - measured from well dated Upper Permian- Lower Triassic sedimentary sections to better constrain the magnitude and timing of change in ocean chemistry and the effects of ocean acidification and de-oxygenation through this interval. All the investigated carbonate successions (Turkey, Italy and China) exhibit decreasing δ44/40Ca compositions, from ~-1.4‰ to -2.0‰ in the interval preceding the main extinction. These values remain low during most of the Griesbachian, to finally return to -1.4‰ in the middle Dienerian. The limestone succession from southern Turkey also displays a major decrease in the δ88/86Sr values from 0.45‰ to 0.3‰ before the extinction. These values remain low during the Griesbachian and finally increase to 0.55‰ by the middle Dienerian. The paired negative anomalies on the carbonate δ44/40Ca and δ88/86Sr suggest a decrease in the carbonate precipitation and thus an episode of ocean acidification coincident with the major biotic crisis. The Mo and U isotope records also exhibit significant rapid negative anomalies at the onset of the main extinction interval, suggesting rapid expansion of anoxic and euxinic marine bottom waters during the extinction interval. The rapidity of the isotope excursions in Mo and U suggests substantially reduced residence times of these elements in seawater relative to the modern, consistent with expectations for a time of widespread anoxia. The large C-isotope variability within Lower Triassic rocks, which is similar to that of the Lower-Middle Cambrian, may reflect biologically controlled perturbations of the oceanic carbon cycle. 
These findings strengthen the evidence for a global ocean acidification event coupled with rapid expansion of anoxic zones as drivers of end-Permian extinction in the oceans.

  20. Preliminary Thermo-Chronometric and Paleo-Magnetic Results from the Western Margin of The Kırşehir Block: Implications for the Timing of Continental Collisions Occurred Along Neo-Tethyan Suture Zones (Central Anatolia, Turkey)

    NASA Astrophysics Data System (ADS)

    Gülyüz, Erhan; Özkaptan, Murat; Langereis, Cor G.; Kaymakcı, Nuretdin

    2017-04-01

    Closures of the Paleo-Tethys (largely Paleozoic) and Neo-Tethys (largely Mesozoic) Oceans, which developed between Europe, Africa and Arabia, are the main driving mechanisms behind the post-Triassic tectonics, magmatism and metamorphism of Anatolia. Although various scenarios have been suggested for the timing and character of the subduction systems, it is widely accepted that these blocks progressively collided and were amalgamated along the northern (İzmir-Ankara-Erzincan suture zone; IAESZ) and southern (Bitlis-Zagros suture zone; BZSZ) branches of the Neo-Tethys Ocean. The geographic positions of these suture zones in Anatolia are marked by imbricated stacks of largely metamorphosed remnants of the Paleo- and Neo-Tethys Oceans. In addition to this tectonic frame, another suture zone within the northern branch of the Neo-Tethys, separating the Kırşehir Block, a triangular (200 km x 200 km x 200 km) continental domain represented mainly by high-pressure (HP) meta-sedimentary rocks, from the Taurides, has been proposed and named the Intra-Tauride Suture Zone (ITSZ). Although traces of the Neo-Tethyan closure and continental collisions in Central Anatolia are recorded (1) in sedimentary basins as fold-and-thrust belts (the northern Taurides belt along the IAESZ and the central Taurides belt along the ITSZ), (2) in metamorphic rocks with Late Cretaceous to Late Paleocene peak metamorphism, and (3) in magmatic rocks with Late Cretaceous - Paleocene arc-related intrusions and post-Paleocene post-collisional magmatism, the timing of these continental collisions has been discussed in only a few studies, which moreover indicate a large time span (post-Paleocene to Miocene) for the collisions. This study aims to constrain the timing of the continental collisions in Central Anatolia. 
    To this end, low-temperature thermo-chronometric and paleo-magnetic studies were conducted on the sedimentary units cropping out along the western and north-western margins of the Kırşehir Block, where the two suture zones (IAESZ and ITSZ) coincide. Although the thermo-chronometric work is not yet complete, initial results consistently indicate Oligocene-Early Miocene continental uplift along the western margin of the Kırşehir Block. In keeping with the thermo-chronometric results, paleo-magnetic samples (400 cores) taken systematically from upper Cretaceous to Miocene sedimentary units exposed along the IAESZ and ITSZ suggest that vertical-axis block rotations are concentrated in the Oligocene-Early Miocene interval, indicating the timing of the main deformation events. Based on the paleo-magnetic and low-temperature thermo-chronometric results, we propose that the continental collisions along the IAESZ and ITSZ in Central Anatolia occurred during the Oligocene - Early Miocene, an interval that may also correspond to the onset of continental deposition and the base of the regional unconformities exposed in the region.

  1. TIMESERIESSTREAMING.VI: LabVIEW program for reliable data streaming of large analog time series

    NASA Astrophysics Data System (ADS)

    Czerwinski, Fabian; Oddershede, Lene B.

    2011-02-01

    With modern data acquisition devices that work fast and very precisely, scientists often face the task of dealing with huge amounts of data. These need to be rapidly processed and stored onto a hard disk. We present a LabVIEW program which reliably streams analog time series at MHz sampling rates; its run time has virtually no limitation. We explicitly show how to use the program to extract time series from two experiments: a photodiode detection system that tracks the position of an optically trapped particle, and a measurement of ionic current through a glass capillary. The program is easy to use and versatile, as the input can be any type of analog signal. The data streaming software is simple, highly reliable, and can easily be customized to include, e.g., real-time power spectral analysis and Allan variance noise quantification.
    Program summary
    Program title: TimeSeriesStreaming.VI
    Catalogue identifier: AEHT_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHT_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 250
    No. of bytes in distributed program, including test data, etc.: 63 259
    Distribution format: tar.gz
    Programming language: LabVIEW (http://www.ni.com/labview/)
    Computer: Any machine running LabVIEW 8.6 or higher
    Operating system: Windows XP and Windows 7
    RAM: 60-360 Mbyte
    Classification: 3
    Nature of problem: For numerous scientific and engineering applications it is highly desirable to have an efficient, reliable, and flexible program to perform data streaming of time series sampled at high frequencies, possibly over long time intervals. This type of data acquisition often produces very large amounts of data not easily streamed onto a computer hard disk using standard methods.
    Solution method: This LabVIEW program is developed to directly stream any kind of time series onto a hard disk. Due to optimized timing and usage of computational resources, such as multicores and protocols for memory usage, the program provides extremely reliable data acquisition. In particular, it is optimized to deal with large amounts of data, e.g., taken at high sampling frequencies and over long time intervals, and can easily be customized for time-series analyses.
    Restrictions: Only tested in Windows-operated LabVIEW environments; must use the TDMS format; acquisition cards must be LabVIEW compatible, with the DAQmx driver installed.
    Running time: as desired, from microseconds to hours
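    The LabVIEW implementation itself cannot be reproduced here, but the underlying idea — decoupling acquisition from disk writes through a bounded buffer so that slow disk I/O never stalls the sampling loop — can be sketched in Python (the function names and the dummy signal are illustrative, not part of the distributed program):

```python
import os
import queue
import struct
import tempfile
import threading

def acquire(samples, out_q):
    # Producer: stand-in for the DAQ loop; here we just emit dummy samples.
    for value in samples:
        out_q.put(value)
    out_q.put(None)  # sentinel: acquisition finished

def stream_to_disk(out_q, path):
    # Consumer: drain the queue and append binary float64 samples to disk,
    # so disk latency is absorbed by the buffer, not the acquisition loop.
    with open(path, "ab") as f:
        while True:
            value = out_q.get()
            if value is None:
                break
            f.write(struct.pack("<d", value))

q = queue.Queue(maxsize=4096)          # bounded buffer between DAQ and disk
path = os.path.join(tempfile.mkdtemp(), "series.bin")
writer = threading.Thread(target=stream_to_disk, args=(q, path))
writer.start()
acquire([0.1 * i for i in range(1000)], q)
writer.join()
print(os.path.getsize(path))  # 8000 bytes: 1000 float64 samples
```

The bounded `maxsize` also provides backpressure: if the disk falls behind, the producer blocks rather than exhausting memory, which is the same robustness concern the LabVIEW program addresses with its memory-usage protocols.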

  2. Intercepting beats in predesignated target zones.

    PubMed

    Craig, Cathy; Pepping, Gert-Jan; Grealy, Madeleine

    2005-09-01

    Moving to a rhythm necessitates precise timing between the movement of the chosen limb and the timing imposed by the beats. However, the temporal information specifying the moment when a beat will sound (the moment onto which one must synchronise one's movement) is not continuously provided by the acoustic array. Because of this informational void, the actors need some form of prospective information that will allow them to act sufficiently ahead of time in order to get their hand in the right place at the right time. In this acoustic interception study, where participants were asked to move between two targets in such a way that they arrived and stopped in the target zone at the same time as a beat sounded, we tested a model derived from tau-coupling theory (Lee DN (1998) Ecol Psychol 10:221-250). This model attempts to explain the form of a potential timing guide that specifies the duration of the inter-beat intervals and also describes how this informational guide can be used in the timing and guidance of movements. The results of our first experiment show that, for inter-beat intervals of less than 3 s, a large proportion of the movement (over 70%) can be explained by the proposed model. However, a second experiment, which augments the time between beats so that it surpasses 3 s, shows a marked decline in the percentage of information/movement coupling. A close analysis of the movement kinematics indicates a lack of control and anticipation in the participants' movements. The implications of these findings, in light of other research studies, are discussed.

  3. Geophysical Anomalies and Earthquake Prediction

    NASA Astrophysics Data System (ADS)

    Jackson, D. D.

    2008-12-01

    Finding anomalies is easy. Predicting earthquakes convincingly from such anomalies is far from easy. Why? Why have so many beautiful geophysical abnormalities not led to successful prediction strategies? What is earthquake prediction? By my definition it is convincing information that an earthquake of specified size is temporarily much more likely than usual in a specific region for a specified time interval. We know a lot about normal earthquake behavior, including locations where earthquake rates are higher than elsewhere, with estimable rates and size distributions. We know that earthquakes have power law size distributions over large areas, that they cluster in time and space, and that aftershocks follow with power-law dependence on time. These relationships justify prudent protective measures and scientific investigation. Earthquake prediction would justify exceptional temporary measures well beyond those normal prudent actions. Convincing earthquake prediction would result from methods that have demonstrated many successes with few false alarms. Predicting earthquakes convincingly is difficult for several profound reasons. First, earthquakes start in tiny volumes at inaccessible depth. The power law size dependence means that tiny unobservable ones are frequent almost everywhere and occasionally grow to larger size. Thus prediction of important earthquakes is not about nucleation, but about identifying the conditions for growth. Second, earthquakes are complex. They derive their energy from stress, which is perniciously hard to estimate or model because it is nearly singular at the margins of cracks and faults. Physical properties vary from place to place, so the preparatory processes certainly vary as well. Thus establishing the needed track record for validation is very difficult, especially for large events with immense interval times in any one location. Third, the anomalies are generally complex as well. 
Electromagnetic anomalies in particular require some understanding of their sources and the physical properties of the crust, which also vary from place to place and time to time. Anomalies are not necessarily due to stress or earthquake preparation, and separating the extraneous ones is a problem as daunting as understanding earthquake behavior itself. Fourth, the associations presented between anomalies and earthquakes are generally based on selected data. Validating a proposed association requires complete data on the earthquake record and the geophysical measurements over a large area and time, followed by prospective testing which allows no adjustment of parameters, criteria, etc. The Collaboratory for Study of Earthquake Predictability (CSEP) is dedicated to providing such prospective testing. Any serious proposal for prediction research should deal with the problems above, and anticipate the huge investment in time required to test hypotheses.

  4. Deformation Time-Series of the Lost-Hills Oil Field using a Multi-Baseline Interferometric SAR Inversion Algorithm with Finite Difference Smoothing Constraints

    NASA Astrophysics Data System (ADS)

    Werner, C. L.; Wegmüller, U.; Strozzi, T.

    2012-12-01

    The Lost-Hills oil field, located in Kern County, California, ranks sixth in total remaining reserves in California. Hundreds of densely packed wells characterize the field, with one well every 5000 to 20000 square meters. Subsidence due to oil extraction can be greater than 10 cm/year and is highly variable in both space and time. The RADARSAT-1 SAR satellite collected data over this area with a 24-day repeat during a 2-year period spanning 2002-2004. Relatively high interferometric correlation makes this an excellent region for the development and testing of deformation time-series inversion algorithms. Errors in deformation time series derived from a stack of differential interferograms are primarily due to errors in the digital terrain model, interferometric baselines, variability in tropospheric delay, thermal noise, and phase unwrapping errors. Particularly challenging is the separation of non-linear deformation from variations in tropospheric delay and phase unwrapping errors. In our algorithm a subset of interferometric pairs is selected from a set of N radar acquisitions based on criteria of connectivity, time interval, and perpendicular baseline. When possible, the subset consists of temporally connected interferograms; otherwise, the different groups of interferograms are selected to overlap in time. The maximum time interval is constrained to be less than a threshold value to minimize phase gradients due to deformation as well as temporal decorrelation. Large baselines are also avoided to minimize the effect of DEM errors on the interferometric phase. Based on an extension of the SVD-based inversion described by Lee et al. (USGS Professional Paper 1769), Schmidt and Burgmann (JGR, 2003), and the earlier work of Berardino (TGRS, 2002), our algorithm combines estimation of the DEM height error with a set of finite difference smoothing constraints. 
    A set of linear equations is formulated for each spatial point as a function of the deformation velocities during the time intervals spanned by each interferogram and a DEM height correction. The sensitivity of the phase to the height correction depends on the length of the perpendicular baseline of each interferogram. This design matrix is augmented with a set of additional weighted constraints on the acceleration that penalize rapid velocity variations. The weighting factor γ can be varied from 0 (no smoothing) to large values (> 10) that yield an essentially linear time-series solution, and can be tuned to take into account a priori knowledge of the deformation non-linearity. The difference between the constrained and unconstrained time-series solutions can be interpreted as a combination of tropospheric path delay and baseline error. Spatial smoothing of this residual phase leads to an improved atmospheric model that can be fed back into the inversion and iterated. Our analysis shows non-linear deformation related to changes in oil extraction, as well as local height corrections improving on the low-resolution 3 arc-sec SRTM DEM.
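    The augmented-design-matrix idea can be sketched for a single pixel as follows. This is a toy example with hypothetical acquisition times, velocities, and noise (not the paper's data, and the DEM-error column is omitted); numpy's least-squares solver stands in for the SVD-based inversion:

```python
import numpy as np

# N = 5 acquisitions define 4 time intervals; each interferogram connecting
# acquisitions (a, b) observes the sum of v_j * dt_j over intervals a..b-1.
t = np.array([0.0, 24.0, 48.0, 72.0, 96.0])        # acquisition days
dt = np.diff(t)                                     # interval lengths
true_v = np.array([-0.10, -0.30, -0.30, -0.10])     # cm/day per interval
pairs = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 2), (1, 3), (2, 4)]
A = np.zeros((len(pairs), len(dt)))
for i, (a, b) in enumerate(pairs):
    A[i, a:b] = dt[a:b]
obs = A @ true_v + np.random.default_rng(0).normal(0, 0.05, len(pairs))

# Augment with weighted second-difference (acceleration) constraints:
# gamma -> 0 gives the unconstrained solution; large gamma forces a nearly
# linear (constant-velocity) time series.
gamma = 1.0
D2 = np.diff(np.eye(len(dt)), n=2, axis=0)          # second-difference operator
A_aug = np.vstack([A, gamma * D2])
b_aug = np.concatenate([obs, np.zeros(D2.shape[0])])
v_hat, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
print(np.round(v_hat, 2))  # recovers subsidence-like (negative) velocities
```

Varying `gamma` trades fidelity to the interferometric observations against smoothness of the velocity history, exactly the tuning knob described above.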

  5. Relationship between menstruation status and work conditions in Japan.

    PubMed

    Nishikitani, Mariko; Nakao, Mutsuhiro; Tsurugano, Shinobu; Inoure, Mariko; Yano, Eiji

    2017-01-01

    Menstrual problems can significantly impact daily and work life. In reaction to a shrinking population, the Japanese government is encouraging more women to participate in the labor force, but actual success in achieving this aim is limited. In particular, workforce participation by women during their reproductive years is affected by their health, which reflects not only work conditions but also traditional family circumstances. It is therefore important to further assess the health status of women who work during their reproductive years in Japan, for which menstruation status is a pivotal indicator. In this study, we assessed the association between short rest periods between work days and menstruation and other health status indicators among female workers in Japan. Study participants were recruited from the alumnae of a university, which provided a uniform educational level. All 9864 female alumnae were asked to join the survey and 1630 (17%) accepted. The final sample of study participants (n = 505) were aged 23-43 years, had maintained the same job status for at least 1 year, were not shift workers, had no maternal status, and had no missing data for the relevant variables. The participants were divided into two groups according to interval time, with 11 h between the end of work and the resumption of work the next day as the benchmark; this interval was based on EU regulations and the goal set by the government of Japan. Health outcomes included menstrual cycle, dysmenorrhoea symptoms, anxiety regarding health, and satisfaction with health. Multiple logistic regression analyses were conducted to estimate odds ratios (ORs) and 95% confidence intervals (CIs) for the health indexes in association with interval time, adjusting for confounding variables that included both psychosocial and biological factors. 
    We compared the health status of women in the workforce with and without a sufficient interval time of 11 h/day. Workers with a short interval time had a significantly higher prevalence of anxiety about their health and of dissatisfaction with their health. For menstruation status, only abnormal menstrual cycles were observed more often among workers in the short-interval group than in the long-interval group, and this association disappeared when biological confounding factors were adjusted for in a multivariable regression model. Dysmenorrhoea symptoms showed no statistically significant association with short interval time. This study found a significant association between a short interval time of less than 11 h/day and subjective health indicators and the menstrual health status of women in the workforce. Menstrual health was more affected by biological factors than by psychosocial factors. A long work time and a short interval time could increase worker anxiety and dissatisfaction and may disrupt the menstrual cycle.
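    For a single binary exposure such as short versus long interval time, the odds ratio and 95% CI that such logistic models report can be illustrated with the standard 2x2-table approximation. The counts below are invented for illustration and are not the study's data:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    lo = exp(log(or_) - z * se)
    hi = exp(log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: abnormal menstrual cycles among short- vs.
# long-interval workers (illustrative numbers only).
or_, lo, hi = odds_ratio_ci(40, 160, 30, 275)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

A CI whose lower bound exceeds 1 indicates a significant crude association; in the study, the corresponding adjusted estimate lost significance once biological confounders entered the model.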

  6. Focusing and transport of high-intensity multi-MeV proton bunches from a compact laser-driven source

    NASA Astrophysics Data System (ADS)

    Busold, S.; Schumacher, D.; Deppert, O.; Brabetz, C.; Frydrych, S.; Kroll, F.; Joost, M.; Al-Omari, H.; Blažević, A.; Zielbauer, B.; Hofmann, I.; Bagnoud, V.; Cowan, T. E.; Roth, M.

    2013-10-01

    Laser ion acceleration provides compact, high-intensity ion sources in the multi-MeV range. Using a pulsed high-field solenoid, for the first time high-intensity laser-accelerated proton bunches were selected from the continuous exponential spectrum and delivered over large distances, containing more than 10⁹ particles in a narrow energy interval around a central energy of 9.4 MeV and showing ≤ 30 mrad envelope divergence. The bunches, of only a few nanoseconds duration, were characterized 2.2 m behind the laser-plasma source with respect to arrival time, energy width, and intensity as well as spatial and temporal bunch profile.

  7. Modeling of cw OIL energy performance based on similarity criteria

    NASA Astrophysics Data System (ADS)

    Mezhenin, Andrey V.; Pichugin, Sergey Y.; Azyazov, Valeriy N.

    2012-01-01

    A simplified two-level generation model predicts that power extraction from a cw oxygen-iodine laser (OIL) with a stable resonator depends on three similarity criteria. Criterion τd is the ratio of the residence time of the active medium in the resonator to the O2(1Δ) depletion time at infinitely large intraresonator intensity. Criterion Π is the ratio of the small-signal gain to the threshold gain. Criterion Λ is the ratio of the relaxation rate to the excitation rate for the electronically excited iodine atoms I(2P1/2). Effective power extraction from a cw OIL is achieved when the similarity criteria lie in the intervals τd = 5-8, Π = 3-8 and Λ ≤ 0.01.

  8. Elective change of surgeon during the OR day has an operationally negligible impact on turnover time.

    PubMed

    Austin, Thomas M; Lam, Humphrey V; Shin, Naomi S; Daily, Bethany J; Dunn, Peter F; Sandberg, Warren S

    2014-08-01

    To compare turnover times for a series of elective cases in which surgeons followed themselves with turnover times for a series of previously scheduled elective procedures in which the succeeding surgeon differed from the preceding surgeon. Retrospective cohort study. University-affiliated teaching hospital. The operating room (OR) statistical database was accessed to gather 32 months of turnover data from a large academic institution. Turnover time data for the same-surgeon and surgeon-swap groups were batched by month to minimize autocorrelation and achieve data normalization. Two-way analysis of variance (ANOVA) using the monthly batched data was performed with surgeon swapping and changes in procedure category as variables of turnover time. Similar analyses were performed using individual surgical services, hourly time intervals during the surgical day, and turnover frequency per OR as additional covariates to surgeon swapping. The mean (95% confidence interval [CI]) same-surgeon turnover time was 43.6 (43.2 - 44.0) minutes versus 51.0 (50.5 - 51.6) minutes for a planned surgeon swap (P < 0.0001), a difference (95% CI) of 7.4 (6.8 - 8.1) minutes. The exact increase in turnover time depended on the surgical service, the change in subsequent procedure type, the time of day when the turnover occurred, and the turnover frequency. The investigated institution averages 2.5 cases per OR per day. The cumulative additional turnover time from switching surgeons (far less than one hour per OR per day) would not allow the addition of another elective procedure even if the difference could be eliminated. A flexible scheduling policy that allows surgeon swapping, rather than requiring full blocks, incurs minimal additional staffed time during the OR day while allowing the schedule to be filled with available elective cases. Copyright © 2014 Elsevier Inc. All rights reserved.
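    The reported CI for the difference between groups can be approximately reconstructed from the two group-level CIs by back-computing standard errors and treating the batched monthly means as independent; this is a sketch of the standard arithmetic, not the authors' exact procedure:

```python
from math import sqrt

def se_from_ci(lo, hi, z=1.96):
    # Back out the standard error from a reported 95% CI.
    return (hi - lo) / (2 * z)

# Reported in the abstract: same-surgeon 43.6 (43.2 - 44.0) min,
# surgeon-swap 51.0 (50.5 - 51.6) min.
se_same = se_from_ci(43.2, 44.0)
se_swap = se_from_ci(50.5, 51.6)
diff = 51.0 - 43.6
se_diff = sqrt(se_same**2 + se_swap**2)   # independent-groups combination
lo, hi = diff - 1.96 * se_diff, diff + 1.96 * se_diff
print(round(diff, 1), round(lo, 1), round(hi, 1))
```

This reproduces the mean difference of 7.4 minutes exactly and a CI of roughly 6.7 - 8.1 minutes, close to the reported 6.8 - 8.1; the small discrepancy is consistent with rounding in the published group CIs.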

  9. Time between the first and second operations for staged total knee arthroplasties when the interval is determined by the patient.

    PubMed

    Ishii, Yoshinori; Noguchi, Hideo; Takeda, Mitsuhiro; Sato, Junko; Toyabe, Shin-Ichi

    2014-01-01

The purpose of this study was to evaluate the interval between the first and second operations for staged total knee arthroplasties (TKAs) in patients with bilateral knee osteoarthritis. Depending on satisfactory preoperative health status, the patients determined the timing of the second operation. We also analysed correlations between the interval and patient characteristics. Eighty-six patients with bilateral knee osteoarthritis were analysed. The mean follow-up time from the first TKA was 96 months. The side of the first TKA was chosen by the patients. The timing of the second TKA was determined by the patients, depending on their perceived ability to tolerate the additional pain and limitations to activities of daily living. The median interval between the first and second operations was 12.5 months, with a range of 2 to 113 months. In 43 (50%) patients, the interval was <12 months. There was no difference in the interval between females and males (p=0.861), and no correlation between the interval and body mass index or age. There was a weak correlation between the year of the first TKA and the interval (R=-0.251, p=0.020), with the interval getting significantly shorter as the years progressed (p=0.032). The median interval between the first and second operations in patients who underwent staged TKAs for bilateral knee osteoarthritis was about 1 year. The results of the current study may help patients and physicians to plan effective treatment strategies for staged TKAs. Level II. Copyright © 2013 Elsevier B.V. All rights reserved.

  10. Relationship between large horizontal electric fields and auroral arc elements

    NASA Astrophysics Data System (ADS)

    Lanchester, B. S.; Kailá, K.; McCrea, I. W.

    1996-03-01

    High time resolution optical measurements in the magnetic zenith are compared with European Incoherent Scatter (EISCAT) field-aligned measurements of electron density at 0.2-s resolution and with horizontal electric field measurements made at 278 km with resolution of 9 s. In one event, 20 min after a spectacular auroral breakup, a system of narrow and active arc elements moved southward into the magnetic zenith, where it remained for several minutes. During a 30-s interval of activity in a narrow arc element very close to the radar beam, the electric field vectors at 3-s resolution were found to be extremely large (up to 400 mVm-1) and to point toward the bright optical features in the arc, which moved along its length. It is proposed that the large electric fields are short-lived and are directly associated with the particle precipitation that causes the bright features in auroral arc elements.

  11. Why noise is useful in functional and neural mechanisms of interval timing?

    PubMed Central

    2013-01-01

Background The ability to estimate durations in the seconds-to-minutes range - interval timing - is essential for survival and adaptation, and its impairment leads to severe cognitive and/or motor dysfunctions. The response rate near a memorized duration has a Gaussian shape centered on the to-be-timed interval (criterion time). The width of the Gaussian-like distribution of responses increases linearly with the criterion time, i.e., interval timing obeys the scalar property. Results We presented analytical and numerical results based on the striatal beat frequency (SBF) model showing that parameter variability (noise) mimics behavioral data. A key functional block of the SBF model is the set of oscillators that provide the time base for the entire timing network. Implementing the oscillators block as simplified phase (cosine) oscillators has the additional advantage of being analytically tractable. We also checked numerically that the scalar property emerges in the presence of memory variability by using biophysically realistic Morris-Lecar oscillators. First, we predicted analytically and tested numerically that in a noise-free SBF model the output function could be approximated by a Gaussian. However, in a noise-free SBF model the width of the Gaussian envelope is independent of the criterion time, which violates the scalar property. We showed analytically and verified numerically that small fluctuations of the memorized criterion time lead to the scalar property of interval timing. Conclusions Noise is ubiquitous in the form of small fluctuations of the intrinsic frequencies of the neural oscillators, errors in recording/retrieving stored information related to the criterion time, fluctuations in neurotransmitter concentrations, etc. Our model suggests that biological noise plays an essential functional role in SBF interval timing. PMID:23924391
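    The mechanism the abstract describes - a bank of cosine oscillators whose phases at the criterion time are memorized, with the model's output given by the overlap between current and memorized phases - can be sketched numerically. This is an illustrative toy only, not the authors' implementation; the frequency band, oscillator count, and criterion time below are arbitrary choices:

    ```python
    import math
    import random

    def sbf_output(t, criterion, freqs, jitter=0.0, rng=random.Random(0)):
        """Toy striatal-beat-frequency readout: overlap between the oscillator
        phases at time t and the phases memorized at the criterion time.
        `jitter` adds Gaussian noise to the memorized criterion (memory variance)."""
        total = 0.0
        for f in freqs:
            memorized = criterion + (rng.gauss(0.0, jitter) if jitter else 0.0)
            total += math.cos(2 * math.pi * f * t) * math.cos(2 * math.pi * f * memorized)
        return total / len(freqs)

    freqs = [1.0 + 0.05 * k for k in range(40)]   # arbitrary 1-3 Hz oscillator bank
    criterion = 5.0                               # to-be-timed interval, seconds
    ts = [0.05 * i for i in range(1, 201)]
    resp = [sbf_output(t, criterion, freqs) for t in ts]
    peak_t = ts[resp.index(max(resp))]            # Gaussian-like peak at the criterion
    ```

    In the noise-free case the width of this envelope does not depend on the criterion; adding `jitter` to the memorized criterion time is what recovers the scalar property in the authors' analysis.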

  12. Natural fluoride in drinking water and myocardial infarction: A cohort study in Sweden.

    PubMed

    Näsman, Peggy; Granath, Fredrik; Ekstrand, Jan; Ekbom, Anders; Sandborgh-Englund, Gunilla; Fored, C Michael

    2016-08-15

    Large geographical variation in the coronary heart disease (CHD) incidence is seen worldwide and only a part of this difference is attributed to the classic risk factors. Several environmental factors, such as trace elements in the drinking water, have been implicated in the pathogenesis of CHD. The objective was to assess the association between drinking water fluoride exposure and myocardial infarction in Sweden using nationwide registers. This large cohort consisted of 455,619 individuals, born in Sweden between January 1, 1900 and December 31, 1919, alive and living in their municipality of birth at the time of start of follow-up. Estimated individual drinking water fluoride exposure was stratified into four categories: very low (<0.3mg/l), low (0.3-<0.7mg/l), medium (0.7-<1.5mg/l) and high (≥1.5mg/l). In Cox regression analyses, compared to the very low fluoride group, the adjusted Hazard Ratio for the low fluoride group was 0.99 (95% confidence interval, 0.98-1.00), for the medium fluoride group 1.01 (95% confidence interval, 0.99-1.03) and 0.98 (95% confidence interval, 0.96-1.01) for the highest fluoride group. Adding water hardness to the model did not change the results. We conclude that the investigated levels of natural drinking water fluoride content do not appear to be associated with myocardial infarction, nor related to the geographic myocardial infarction risk variation in Sweden. Potential misclassification of exposure and unmeasured confounding may have influenced the results. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Improved adjoin-list for quality-guided phase unwrapping based on red-black trees

    NASA Astrophysics Data System (ADS)

    Cruz-Santos, William; López-García, Lourdes; Rueda-Paz, Juvenal; Redondo-Galvan, Arturo

    2016-08-01

    Quality-guided phase unwrapping is an important technique based on quality maps that guide the unwrapping process. The efficiency of this technique depends on the implementation of the adjoin-list data structure. Several proposals improve the adjoin-list: Ming Zhao et al. proposed an Indexed Interwoven Linked List (I2L2) based on dividing the quality values into intervals of equal size and inserting into a linked list those pixels with quality values within a certain interval. Ming Zhao and Qian Kemao proposed an improved I2L2 replacing the linked list in each interval with a heap data structure, which allows efficient procedures for insertion and deletion. In this paper, we propose an improved I2L2 which uses Red-Black tree (RBT) data structures for each interval. The main goal of our proposal is to avoid the unbalanced behavior of the heap and thus reduce the time complexity of insertion. In order to match the efficiency of the heap when deleting an element, we provide an efficient way to remove the pixel with the highest quality value in the RBT using a pointer to the rightmost element in the tree. We also provide a new partition strategy of the phase values that is based on a density criterion. Experimental results applied to phase shifting profilometry are shown for large images.
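    The adjoin-list organization this line of work builds on - the quality range split into equal bins, each bin holding its pixels in an ordered structure with cheap access to the maximum - can be sketched as follows. Python's standard library has no red-black tree, so a bisect-maintained sorted list stands in for the RBT here; the class and method names are illustrative, not from the paper:

    ```python
    import bisect

    class AdjoinList:
        """Quality-partitioned adjoin list: equal-size quality bins, each kept
        ordered so the best pixel is the rightmost element (mirroring the
        paper's pointer to the rightmost node of each red-black tree)."""
        def __init__(self, q_min, q_max, n_bins):
            self.q_min, self.q_max, self.n_bins = q_min, q_max, n_bins
            self.bins = [[] for _ in range(n_bins)]

        def _bin(self, q):
            i = int((q - self.q_min) / (self.q_max - self.q_min) * self.n_bins)
            return min(max(i, 0), self.n_bins - 1)

        def insert(self, quality, pixel):
            # ordered insert; a real RBT would bound this at O(log n) worst case
            bisect.insort(self.bins[self._bin(quality)], (quality, pixel))

        def pop_best(self):
            # highest-quality pixel: rightmost element of the highest non-empty bin
            for b in reversed(self.bins):
                if b:
                    return b.pop()
            raise IndexError("adjoin list is empty")

    al = AdjoinList(0.0, 1.0, n_bins=4)
    for q, px in [(0.90, (1, 2)), (0.20, (0, 0)), (0.95, (3, 3))]:
        al.insert(q, px)
    best = al.pop_best()   # the (0.95, (3, 3)) entry
    ```

    The quality-guided unwrapper repeatedly calls `pop_best`, unwraps that pixel, and inserts its not-yet-processed neighbors, so both operations sit on the hot path.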

  14. Familiar Tonal Context Improves Accuracy of Pitch Interval Perception.

    PubMed

    Graves, Jackson E; Oxenham, Andrew J

    2017-01-01

    A fundamental feature of everyday music perception is sensitivity to familiar tonal structures such as musical keys. Many studies have suggested that a tonal context can enhance the perception and representation of pitch. Most of these studies have measured response time, which may reflect expectancy as opposed to perceptual accuracy. We instead used a performance-based measure, comparing participants' ability to discriminate between a "small, in-tune" interval and a "large, mistuned" interval in conditions that involved familiar tonal relations (diatonic, or major, scale notes), unfamiliar tonal relations (whole-tone or mistuned-diatonic scale notes), repetition of a single pitch, or no tonal context. The context was established with a brief sequence of tones in Experiment 1 (melodic context), and a cadence-like two-chord progression in Experiment 2 (harmonic context). In both experiments, performance significantly differed across the context conditions, with a diatonic context providing a significant advantage over no context; however, no correlation with years of musical training was observed. The diatonic tonal context also provided an advantage over the whole-tone scale context condition in Experiment 1 (melodic context), and over the mistuned scale or repetition context conditions in Experiment 2 (harmonic context). However, the relatively small benefit to performance suggests that the main advantage of tonal context may be priming of expected stimuli, rather than enhanced accuracy of pitch interval representation.

  15. Flood of May 23, 2004, in the Turkey and Maquoketa River basins, northeast Iowa

    USGS Publications Warehouse

    Eash, David A.

    2006-01-01

    Severe flooding occurred on May 23, 2004, in the Turkey River Basin in Clayton County and in the Maquoketa River Basin in Delaware County following intense thunderstorms over northeast Iowa. Rain gages at Postville and Waucoma, Iowa, recorded 72-hour rainfall of 6.32 and 6.55 inches, respectively, on May 23. Unofficial rainfall totals of 8 to 10 inches were reported in the Turkey River Basin. The peak discharge on May 23 at the Turkey River at Garber streamflow-gaging station was 66,700 cubic feet per second (recurrence interval greater than 500 years) and is the largest flood on record in the Turkey River Basin. The timing of flood crests on the Turkey and Volga Rivers, and local tributaries, coincided to produce a record flood on the lower part of the Turkey River. Three large floods have occurred at the Turkey River at Garber gaging station in a 13-year period. Peak discharges of the floods of June 1991 and May 1999 were 49,900 cubic feet per second (recurrence interval about 150 years) and 53,900 cubic feet per second (recurrence interval about 220 years), respectively. The peak discharge on May 23 at the Maquoketa River at Manchester gaging station was 26,000 cubic feet per second (recurrence interval about 100 years) and is the largest known flood in the upper part of the Maquoketa River Basin.

  16. Response of noctilucent cloud brightness to daily solar variations

    NASA Astrophysics Data System (ADS)

    Dalin, P.; Pertsev, N.; Perminov, V.; Dubietis, A.; Zadorozhny, A.; Zalcik, M.; McEachran, I.; McEwan, T.; Černis, K.; Grønne, J.; Taustrup, T.; Hansen, O.; Andersen, H.; Melnikov, D.; Manevich, A.; Romejko, V.; Lifatova, D.

    2018-04-01

    For the first time, long-term data sets of ground-based observations of noctilucent clouds (NLC) around the globe have been analyzed in order to investigate a response of NLC to solar UV irradiance variability on a day-to-day scale. NLC brightness has been considered versus variations of solar Lyman-alpha flux. We have found that day-to-day solar variability, whose effect is generally masked in the natural NLC variability, has a statistically significant effect when considering large statistics for more than ten years. Average increase in day-to-day solar Lyman-α flux results in average decrease in day-to-day NLC brightness that can be explained by robust physical mechanisms taking place in the summer mesosphere. Average time lags between variations of Lyman-α flux and NLC brightness are short (0-3 days), suggesting a dominant role of direct solar heating and of the dynamical mechanism compared to photodissociation of water vapor by solar Lyman-α flux. All found regularities are consistent between various ground-based NLC data sets collected at different locations around the globe and for various time intervals. Signatures of a 27-day periodicity seem to be present in the NLC brightness for individual summertime intervals; however, this oscillation cannot be unambiguously retrieved due to inevitable periods of tropospheric cloudiness.

  17. The recent and future health burden of air pollution apportioned across U.S. sectors.

    PubMed

    Fann, Neal; Fulcher, Charles M; Baker, Kirk

    2013-04-16

    Recent risk assessments have characterized the overall burden of recent PM2.5 and ozone levels on public health, but generally not the variability of these impacts over time or by sector. Using photochemical source apportionment modeling and a health impact function, we attribute PM2.5 and ozone air quality levels, population exposure and health burden to 23 industrial point, area, mobile and international emission sectors in the Continental U.S. in 2005 and 2016. Our modeled policy scenarios account for a suite of emission control requirements affecting many of these sectors. Between these two years, the number of PM2.5 and ozone-related deaths attributable to power plants and mobile sources falls from about 68,000 (90% confidence interval from 48,000 to 87,000) to about 36,000 (90% confidence intervals from 26,000 to 47,000). Area source mortality risk grows slightly between 2005 and 2016, due largely to population growth. Uncertainties relating to the timing and magnitude of the emission reductions may affect the size of these estimates. The detailed sector-level estimates of the size and distribution of mortality and morbidity risk suggest that the air pollution mortality burden has fallen over time but that many sectors continue to pose a substantial risk to human health.

  18. Modelling Short-Term Maximum Individual Exposure from Airborne Hazardous Releases in Urban Environments. Part ΙI: Validation of a Deterministic Model with Wind Tunnel Experimental Data.

    PubMed

    Efthimiou, George C; Bartzis, John G; Berbekar, Eva; Hertwig, Denise; Harms, Frank; Leitl, Bernd

    2015-06-26

    The capability to predict short-term maximum individual exposure is very important for several applications including, for example, deliberate/accidental release of hazardous substances, odour fluctuations or material flammability level exceedance. Recently, the authors proposed a simple approach relating maximum individual exposure to parameters such as the fluctuation intensity and the concentration integral time scale. In the first part of this study (Part I), the methodology was validated against field measurements, which are governed by the natural variability of atmospheric boundary conditions. In Part II of this study, an in-depth validation of the approach is performed using reference data recorded under truly stationary and well documented flow conditions. For this reason, a boundary-layer wind-tunnel experiment was used. The experimental dataset includes 196 time-resolved concentration measurements which capture the dispersion from a continuous point source within an urban model of semi-idealized complexity. The data analysis allowed the improvement of an important model parameter. The model performed very well in predicting the maximum individual exposure, with 95% of predictions falling within a factor of two of the observations. For large time intervals, an exponential correction term has been introduced in the model based on the experimental observations. The new model is capable of predicting all time intervals, with 100% of predictions falling within a factor of two of the observations.

  19. The Behavioral Economics of Choice and Interval Timing

    PubMed Central

    Jozefowiez, J.; Staddon, J. E. R.; Cerutti, D. T.

    2009-01-01

    We propose a simple behavioral economic model (BEM) describing how reinforcement and interval timing interact. The model assumes a Weber-law-compliant logarithmic representation of time. Associated with each represented time value are the payoffs that have been obtained for each possible response. At a given real time, the response with the highest payoff is emitted. The model accounts for a wide range of data from procedures such as simple bisection, metacognition in animals, economic effects in free-operant psychophysical procedures and paradoxical choice in double-bisection procedures. Although it assumes logarithmic time representation, it can also account for data from the time-left procedure usually cited in support of linear time representation. It encounters some difficulties in complex free-operant choice procedures, such as concurrent mixed fixed-interval schedules as well as some of the data on double bisection, that may involve additional processes. Overall, BEM provides a theoretical framework for understanding how reinforcement and interval timing work together to determine choice between temporally differentiated reinforcers. PMID:19618985
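    The model's core assumption - a Weber-law-compliant logarithmic representation of time - can be illustrated in a few lines: encoding intervals on a log scale with constant-variance noise makes variability in real time grow in proportion to the interval, i.e., the scalar property. The noise level and the intervals below are arbitrary choices for illustration:

    ```python
    import math
    import random

    rng = random.Random(1)

    def perceived(t, sigma=0.15):
        """Log-scale encoding with constant-variance Gaussian noise: the
        decoded time is lognormal around t, so its spread scales with t."""
        return math.exp(math.log(t) + rng.gauss(0.0, sigma))

    def spread(t, n=20000):
        """Interquartile range of the decoded times for a given interval t."""
        xs = sorted(perceived(t) for _ in range(n))
        return xs[int(0.75 * n)] - xs[int(0.25 * n)]

    # The IQR scales roughly linearly with the timed interval (here, 8 s vs 2 s):
    ratio = spread(8.0) / spread(2.0)
    ```

    A linear (non-logarithmic) encoding with the same constant-variance noise would give a ratio near 1 instead, which is why constant noise on a log scale is one standard way to obtain Weber-law timing.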

  20. Quantitative analysis of ground penetrating radar data in the Mu Us Sandland

    NASA Astrophysics Data System (ADS)

    Fu, Tianyang; Tan, Lihua; Wu, Yongqiu; Wen, Yanglei; Li, Dawei; Duan, Jinlong

    2018-06-01

    Ground penetrating radar (GPR), which can reveal the sedimentary structure and development process of dunes, is widely used to evaluate aeolian landforms. The interpretations for GPR profiles are mostly based on qualitative descriptions of geometric features of the radar reflections. This research quantitatively analyzed the waveform parameter characteristics of different radar units by extracting the amplitude and time interval parameters of GPR data in the Mu Us Sandland in China, and then identified and interpreted different sedimentary structures. The results showed that different types of radar units had specific waveform parameter characteristics. The main waveform parameter characteristics of sand dune radar facies and sandstone radar facies included low amplitudes and wide ranges of time intervals, ranging from 0 to 0.25 and 4 to 33 ns respectively, and the mean amplitudes changed gradually with time intervals. The amplitude distribution curves of the various sand dune radar facies were similar, all showing unimodal distributions. The radar surfaces showed high amplitudes with time intervals concentrated in high-value areas, ranging from 0.08 to 0.61 and 9 to 34 ns respectively, and the mean amplitudes changed drastically with time intervals. The amplitude and time interval values of lacustrine radar facies were between those of sand dune radar facies and radar surfaces, ranging from 0.08 to 0.29 and 11 to 30 ns respectively, and the mean amplitude and time interval curve was approximately trapezoidal. The quantitative extraction and analysis of GPR reflections could help distinguish various radar units and provide evidence for identifying sedimentary structure in aeolian landforms.

  1. Interval timing in genetically modified mice: a simple paradigm

    PubMed Central

    Balci, F.; Papachristos, E. B.; Gallistel, C. R.; Brunner, D.; Gibson, J.; Shumyatsky, G. P.

    2009-01-01

    We describe a behavioral screen for the quantitative study of interval timing and interval memory in mice. Mice learn to switch from a short-latency feeding station to a long-latency station when the short latency has passed without a feeding. The psychometric function is the cumulative distribution of switch latencies. Its median measures timing accuracy and its interquartile interval measures timing precision. Next, using this behavioral paradigm, we have examined mice with a gene knockout of the receptor for gastrin-releasing peptide that show enhanced (i.e. prolonged) freezing in fear conditioning. We have tested the hypothesis that the mutants freeze longer because they are more uncertain than wild types about when to expect the electric shock. The knockouts however show normal accuracy and precision in timing, so we have rejected this alternative hypothesis. Last, we conduct the pharmacological validation of our behavioral screen using D-amphetamine and methamphetamine. We suggest including the analysis of interval timing and temporal memory in tests of genetically modified mice for learning and memory and argue that our paradigm allows this to be done simply and efficiently. PMID:17696995

  2. Interval timing in genetically modified mice: a simple paradigm.

    PubMed

    Balci, F; Papachristos, E B; Gallistel, C R; Brunner, D; Gibson, J; Shumyatsky, G P

    2008-04-01

    We describe a behavioral screen for the quantitative study of interval timing and interval memory in mice. Mice learn to switch from a short-latency feeding station to a long-latency station when the short latency has passed without a feeding. The psychometric function is the cumulative distribution of switch latencies. Its median measures timing accuracy and its interquartile interval measures timing precision. Next, using this behavioral paradigm, we have examined mice with a gene knockout of the receptor for gastrin-releasing peptide that show enhanced (i.e. prolonged) freezing in fear conditioning. We have tested the hypothesis that the mutants freeze longer because they are more uncertain than wild types about when to expect the electric shock. The knockouts however show normal accuracy and precision in timing, so we have rejected this alternative hypothesis. Last, we conduct the pharmacological validation of our behavioral screen using d-amphetamine and methamphetamine. We suggest including the analysis of interval timing and temporal memory in tests of genetically modified mice for learning and memory and argue that our paradigm allows this to be done simply and efficiently.
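    The two summary statistics this screen relies on - the median of the switch-latency distribution as the measure of timing accuracy and its interquartile interval as the measure of timing precision - are straightforward to compute. The latencies below are synthetic stand-ins for one animal's data, not values from the paper:

    ```python
    import random
    import statistics

    rng = random.Random(7)
    # hypothetical switch latencies (s) from one mouse on a short/long schedule,
    # drawn here as Gaussian around 6 s purely for illustration
    latencies = [rng.gauss(6.0, 1.2) for _ in range(200)]

    accuracy = statistics.median(latencies)            # timing accuracy
    q1, _, q3 = statistics.quantiles(latencies, n=4)   # quartiles of the distribution
    precision = q3 - q1                                # interquartile interval = precision
    ```

    In the paradigm itself the psychometric function is the cumulative distribution of these latencies, so the median and interquartile interval are just its 50% point and central 50% width.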

  3. Measuring Cosmic Expansion and Large Scale Structure with Destiny

    NASA Technical Reports Server (NTRS)

    Benford, Dominic J.; Lauer, Tod R.

    2007-01-01

    Destiny is a simple, direct, low cost mission to determine the properties of dark energy by obtaining a cosmologically deep supernova (SN) type Ia Hubble diagram and by measuring the large-scale mass power spectrum over time. Its science instrument is a 1.65m space telescope, featuring a near-infrared survey camera/spectrometer with a large field of view. During its first two years, Destiny will detect, observe, and characterize ~3000 SN Ia events over the redshift interval 0.4 < z < 1.7; it will subsequently survey 1000 square degrees to measure the large-scale mass power spectrum. The combination of surveys is much more powerful than either technique on its own, and will have over an order of magnitude greater sensitivity than will be provided by ongoing ground-based projects.

  4. Changes in the High-Latitude Topside Ionospheric Vertical Electron-Density Profiles in Response to Solar-Wind Perturbations During Large Magnetic Storms

    NASA Technical Reports Server (NTRS)

    Benson, Robert F.; Fainberg, Joseph; Osherovich, Vladimir; Truhlik, Vladimir; Wang, Yongli; Arbacher, Becca

    2011-01-01

    The latest results from an investigation to establish links between solar-wind and topside-ionospheric parameters will be presented including a case where high-latitude topside electron-density Ne(h) profiles indicated dramatic rapid changes in the scale height during the main phase of a large magnetic storm (Dst < -200 nT). These scale-height changes suggest a large heat input to the topside ionosphere at this time. The topside profiles were derived from ISIS-1 digital ionograms obtained from the NASA Space Physics Data Facility (SPDF) Coordinated Data Analysis Web (CDA Web). Solar-wind data obtained from the NASA OMNIWeb database indicated that the magnetic storm was due to a magnetic cloud. This event is one of several large magnetic storms being investigated during the interval from 1965 to 1984 when both solar-wind and digital topside ionograms, from either Alouette-2, ISIS-1, or ISIS-2, are potentially available.

  5. Estimating short-run and long-run interaction mechanisms in interictal state.

    PubMed

    Ozkaya, Ata; Korürek, Mehmet

    2010-04-01

    We address the issue of analyzing electroencephalogram (EEG) recordings from seizure patients in order to test, model and determine the statistical properties that distinguish between EEG states (interictal, pre-ictal, ictal), introducing a new class of time series analysis methods. In the present study we firstly employ statistical methods to determine the non-stationary behavior of focal interictal epileptiform series within very short time intervals; secondly, for such intervals that are deemed non-stationary we suggest the concept of Autoregressive Integrated Moving Average (ARIMA) process modelling, well known in time series analysis. We finally address the question of causal relationships between epileptic states and between brain areas during epileptiform activity. We estimate the interaction between different EEG series (channels) in short time intervals by performing Granger-causality analysis, and estimate such interaction in long time intervals by employing Cointegration analysis; both methods are well known in econometrics. Here we find, first, that the causal relationship between neuronal assemblies can be identified according to the duration and the direction of their possible mutual influences; second, that although the estimated bidirectional causality in short time intervals indicates that the neuronal ensembles positively affect each other, in long time intervals neither of them is affected (increasing amplitudes) by this relationship. Moreover, Cointegration analysis of the EEG series enables us to identify whether there is a causal link from the interictal state to the ictal state.
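    The Granger-causality logic used above reduces to a nested-regression comparison: series y Granger-causes series x if adding y's past to x's own past shrinks the residual sum of squares by more than chance. A minimal one-lag sketch on synthetic zero-mean series (not the authors' code; real analyses use multivariate lag selection and proper F-test p-values) looks like this:

    ```python
    import random

    rng = random.Random(3)
    n = 400
    y = [rng.gauss(0, 1) for _ in range(n)]
    x = [0.0]
    for t in range(1, n):                    # x is driven by the past of y plus noise
        x.append(0.3 * x[t - 1] + 0.8 * y[t - 1] + 0.3 * rng.gauss(0, 1))

    def rss(target, preds):
        """Residual sum of squares of an OLS fit of `target` on one or two
        lagged predictors (no intercept; the series are near zero-mean)."""
        if len(preds) == 1:
            p = preds[0]
            beta = sum(a * b for a, b in zip(p, target)) / sum(a * a for a in p)
            resid = [t_ - beta * a for t_, a in zip(target, p)]
        else:
            p, q = preds
            # 2x2 normal equations solved by Cramer's rule
            spp = sum(a * a for a in p); sqq = sum(a * a for a in q)
            spq = sum(a * b for a, b in zip(p, q))
            spt = sum(a * b for a, b in zip(p, target))
            sqt = sum(a * b for a, b in zip(q, target))
            det = spp * sqq - spq * spq
            b1 = (spt * sqq - sqt * spq) / det
            b2 = (spp * sqt - spq * spt) / det
            resid = [t_ - b1 * a - b2 * b for t_, a, b in zip(target, p, q)]
        return sum(r * r for r in resid)

    target = x[1:]
    x_lag, y_lag = x[:-1], y[:-1]
    rss_restricted = rss(target, [x_lag])      # x explained by its own past only
    rss_full = rss(target, [x_lag, y_lag])     # ... plus the past of y
    f_stat = (rss_restricted - rss_full) / (rss_full / (len(target) - 2))
    ```

    A large F statistic here indicates that y's past carries information about x beyond x's own past, which is the short-interval direction-of-influence test described in the abstract.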

  6. Efficient Queries of Stand-off Annotations for Natural Language Processing on Electronic Medical Records.

    PubMed

    Luo, Yuan; Szolovits, Peter

    2016-01-01

    In natural language processing, stand-off annotation uses the starting and ending positions of an annotation to anchor it to the text and stores the annotation content separately from the text. We address the fundamental problem of efficiently storing stand-off annotations when applying natural language processing on narrative clinical notes in electronic medical records (EMRs) and efficiently retrieving such annotations that satisfy position constraints. Efficient storage and retrieval of stand-off annotations can facilitate tasks such as mapping unstructured text to electronic medical record ontologies. We first formulate this problem as the interval query problem, for which the optimal query/update time is in general logarithmic. We next perform a tight time complexity analysis on the basic interval tree query algorithm and show its nonoptimality when applied to a collection of 13 query types from Allen's interval algebra. We then study two closely related state-of-the-art interval query algorithms and propose query reformulations and augmentations to the second algorithm. Our proposed algorithm achieves logarithmic stabbing-max query time complexity and solves the stabbing-interval query tasks on all of Allen's relations in logarithmic time, attaining the theoretic lower bound. Updating time is kept logarithmic and the space requirement is kept linear at the same time. We also discuss interval management in external memory models and higher dimensions.

  7. Efficient Queries of Stand-off Annotations for Natural Language Processing on Electronic Medical Records

    PubMed Central

    Luo, Yuan; Szolovits, Peter

    2016-01-01

    In natural language processing, stand-off annotation uses the starting and ending positions of an annotation to anchor it to the text and stores the annotation content separately from the text. We address the fundamental problem of efficiently storing stand-off annotations when applying natural language processing on narrative clinical notes in electronic medical records (EMRs) and efficiently retrieving such annotations that satisfy position constraints. Efficient storage and retrieval of stand-off annotations can facilitate tasks such as mapping unstructured text to electronic medical record ontologies. We first formulate this problem as the interval query problem, for which the optimal query/update time is in general logarithmic. We next perform a tight time complexity analysis on the basic interval tree query algorithm and show its nonoptimality when applied to a collection of 13 query types from Allen’s interval algebra. We then study two closely related state-of-the-art interval query algorithms and propose query reformulations and augmentations to the second algorithm. Our proposed algorithm achieves logarithmic stabbing-max query time complexity and solves the stabbing-interval query tasks on all of Allen’s relations in logarithmic time, attaining the theoretic lower bound. Updating time is kept logarithmic and the space requirement is kept linear at the same time. We also discuss interval management in external memory models and higher dimensions. PMID:27478379
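    The stabbing-max query these papers optimize - among all stored intervals containing a query point, return the one with the maximum value - can be sketched with a centered interval tree. This toy scans each node's local list linearly, so it does not attain the paper's logarithmic stabbing-max bound; it only illustrates the shape of the query:

    ```python
    def build(intervals):
        """Centered interval tree over (lo, hi, priority) triples: intervals
        spanning the center stay at the node; the rest recurse left/right."""
        if not intervals:
            return None
        center = sorted(lo + hi for lo, hi, _ in intervals)[len(intervals) // 2] / 2.0
        left = [iv for iv in intervals if iv[1] < center]
        right = [iv for iv in intervals if iv[0] > center]
        here = [iv for iv in intervals if iv[0] <= center <= iv[1]]
        return (center, here, build(left), build(right))

    def stab_max(node, q):
        """Return the highest-priority stored interval containing q, or None."""
        best = None
        while node:
            center, here, lch, rch = node
            for lo, hi, pr in here:
                if lo <= q <= hi and (best is None or pr > best[2]):
                    best = (lo, hi, pr)
            node = lch if q < center else rch
        return best

    # e.g. annotation spans with priorities standing in for stabbing-max values
    intervals = [(0, 5, 1), (2, 8, 5), (6, 9, 2)]
    tree = build(intervals)
    ```

    For stand-off annotations, `q` would be a character offset and the priority whatever score the stabbing-max query maximizes; Allen-relation queries are then expressed as constraints on the stored endpoints.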

  8. No Clear Association between Impaired Short-Term or Working Memory Storage and Time Reproduction Capacity in Adult ADHD Patients

    PubMed Central

    Mette, Christian; Grabemann, Marco; Zimmermann, Marco; Strunz, Laura; Scherbaum, Norbert; Wiltfang, Jens; Kis, Bernhard

    2015-01-01

    Objective Altered time reproduction is exhibited by patients with adult attention deficit hyperactivity disorder (ADHD). It remains unclear whether memory capacity influences the ability of adults with ADHD to reproduce time intervals. Method We conducted a behavioral study on 30 ADHD patients who were medicated with methylphenidate, 29 unmedicated adult ADHD patients and 32 healthy controls (HCs). We assessed time reproduction using six time intervals (1 s, 4 s, 6 s, 10 s, 24 s and 60 s) and assessed memory performance using the Wechsler memory scale. Results The patients with ADHD exhibited lower memory performance scores than the HCs. No significant differences in the raw scores for any of the time intervals (p > .05), with the exception of the variability at the short time intervals (1 s, 4 s and 6 s) (p < .01), were found between the groups. The overall analyses failed to reveal any significant correlations between time reproduction at any of the time intervals examined in the time reproduction task and working memory performance (p > .05). Conclusion We detected no findings indicating that working memory might influence time reproduction in adult patients with ADHD. Therefore, further studies concerning time reproduction and memory capacity among adult patients with ADHD must be performed to verify and replicate the present findings. PMID:26221955

  9. Motor Synchronization in Patients With Schizophrenia: Preserved Time Representation With Abnormalities in Predictive Timing.

    PubMed

    Wilquin, Hélène; Delevoye-Turrell, Yvonne; Dione, Mariama; Giersch, Anne

    2018-01-01

    Objective: Basic temporal dysfunctions have been described in patients with schizophrenia, which may impact their ability to connect and synchronize with the outer world. The present study was conducted with the aim to distinguish between interval timing and synchronization difficulties and more generally the spatial-temporal organization disturbances for voluntary actions. A new sensorimotor synchronization task was developed to test these abilities. Method: Twenty-four chronic schizophrenia patients matched with 27 controls performed a spatial-tapping task in which finger taps were to be produced in synchrony with a regular metronome to six visual targets presented around a virtual circle on a tactile screen. Isochronous (time intervals of 500 ms) and non-isochronous auditory sequences (alternated time intervals of 300/600 ms) were presented. The capacity to produce time intervals accurately versus the ability to synchronize own actions (tap) with external events (tone) were measured. Results: Patients with schizophrenia were able to produce the tapping patterns of both isochronous and non-isochronous auditory sequences as accurately as controls producing inter-response intervals close to the expected interval of 500 and 900 ms, respectively. However, the synchronization performances revealed significantly more positive asynchrony means (but similar variances) in the patient group than in the control group for both types of auditory sequences. Conclusion: The patterns of results suggest that patients with schizophrenia are able to perceive and produce both simple and complex sequences of time intervals but are impaired in the ability to synchronize their actions with external events. These findings suggest a specific deficit in predictive timing, which may be at the core of early symptoms previously described in schizophrenia.

  10. Epidemiological data on US coal miners' pneumoconiosis, 1960 to 1988.

    PubMed

    Attfield, M D; Castellan, R M

    1992-07-01

    Statistics on prevalence of pneumoconiosis among working underground coal miners based on epidemiologic data collected between 1960 and 1988 are presented. The main intent was to examine the time-related trend in prevalence, particularly after 1969, when substantially lower dust levels were mandated by federal act. Data from studies undertaken between 1960 and 1968 were collected and compared. Information for the period 1969 to 1988 was extracted from a large ongoing national epidemiologic study. Tenure-specific prevalence rates and summary statistics derived from the latter data for four consecutive time intervals within the 19-year period were calculated and compared. The results indicate a reduction in pneumoconiosis over time. The trend is similar to that seen in a large radiologic surveillance program of underground miners operated concurrently. Although such factors as x-ray reader variation, changes in x-ray standards, and worker self-selection for examination may have influenced the findings to some extent, adjusted summary rates reveal a reduction in prevalence concurrent with reductions in coal mine dust levels mandated by federal act in 1969.
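The tenure-specific prevalence rates described above are simple proportions per survey interval; a minimal sketch, with hypothetical counts rather than the study's data:

```python
def prevalence_per_100(cases, examined):
    """Tenure-specific prevalence: affected miners per 100 examined."""
    return 100.0 * cases / examined

# Hypothetical counts for one tenure group across four survey intervals
# (invented numbers illustrating the reported downward trend).
surveys = {"1970-73": (120, 1000), "1973-78": (90, 1000),
           "1978-81": (60, 1000), "1981-88": (40, 1000)}
rates = {period: prevalence_per_100(c, n) for period, (c, n) in surveys.items()}
```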

  11. Optimizing NEURON Simulation Environment Using Remote Memory Access with Recursive Doubling on Distributed Memory Systems.

    PubMed

    Shehzad, Danish; Bozkuş, Zeki

    2016-01-01

The increasing complexity of neuronal network models has intensified efforts to make the NEURON simulation environment more efficient. Computational neuroscientists divide the model equations into subnets across multiple processors to achieve better hardware performance. On parallel machines, interprocessor spike exchange consumes a large share of overall simulation time. NEURON uses the Message Passing Interface (MPI) for communication between processors, exercising the MPI_Allgather collective for spike exchange after each interval across distributed memory systems. Increasing the number of processors improves concurrency and performance, but it adversely affects MPI_Allgather, increasing communication time between processors. This necessitates an improved communication methodology to decrease spike-exchange time over distributed memory systems. This work improves on MPI_Allgather using Remote Memory Access (RMA), moving from two-sided to one-sided communication, while a recursive doubling mechanism achieves efficient communication between processors in a precise number of steps. The approach enhanced communication concurrency and improved overall runtime, making NEURON more efficient for simulating large neuronal network models.
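The recursive doubling mechanism named above can be illustrated in plain Python; this toy simulation stands in for the actual MPI/RMA calls and only shows why every rank holds every spike buffer after log2(P) exchange steps instead of P-1:

```python
import math

def allgather_recursive_doubling(local_data):
    """Simulate an allgather over P (power-of-two) ranks via recursive doubling.

    Each rank starts with its own chunk; at step k, rank i exchanges its
    accumulated buffer with partner i XOR 2**k, so every rank holds all
    chunks after log2(P) steps.
    """
    p = len(local_data)
    assert p & (p - 1) == 0, "recursive doubling needs a power-of-two rank count"
    buffers = {rank: {rank: local_data[rank]} for rank in range(p)}
    for step in range(int(math.log2(p))):
        mask = 1 << step
        new = {}
        for rank in range(p):
            partner = rank ^ mask
            merged = dict(buffers[rank])
            merged.update(buffers[partner])  # one-sided "get" of partner's buffer
            new[rank] = merged
        buffers = new
    return buffers

# 8 ranks, each with its own spike buffer; 3 exchange steps suffice.
result = allgather_recursive_doubling([f"spikes{r}" for r in range(8)])
```

In real MPI the per-step exchange would be an `MPI_Get`/`MPI_Put` against a window rather than a dictionary merge; the pairing pattern is the same.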

  12. Optimizing NEURON Simulation Environment Using Remote Memory Access with Recursive Doubling on Distributed Memory Systems

    PubMed Central

    Bozkuş, Zeki

    2016-01-01

The increasing complexity of neuronal network models has intensified efforts to make the NEURON simulation environment more efficient. Computational neuroscientists divide the model equations into subnets across multiple processors to achieve better hardware performance. On parallel machines, interprocessor spike exchange consumes a large share of overall simulation time. NEURON uses the Message Passing Interface (MPI) for communication between processors, exercising the MPI_Allgather collective for spike exchange after each interval across distributed memory systems. Increasing the number of processors improves concurrency and performance, but it adversely affects MPI_Allgather, increasing communication time between processors. This necessitates an improved communication methodology to decrease spike-exchange time over distributed memory systems. This work improves on MPI_Allgather using Remote Memory Access (RMA), moving from two-sided to one-sided communication, while a recursive doubling mechanism achieves efficient communication between processors in a precise number of steps. The approach enhanced communication concurrency and improved overall runtime, making NEURON more efficient for simulating large neuronal network models. PMID:27413363

  13. Validity and Generalizability of Measuring Student Engaged Time in Physical Education.

    ERIC Educational Resources Information Center

    Silverman, Stephen; Zotos, Connee

    The validity of interval and time sampling methods of measuring student engaged time was investigated in a study estimating the actual time students spent engaged in relevant motor performance in physical education classes. Two versions of the interval Academic Learning Time in Physical Education (ALT-PE) instrument and an equivalent time sampling…

  14. Towards robust identification and tracking of nevi in sparse photographic time series

    NASA Astrophysics Data System (ADS)

    Vogel, Jakob; Duliu, Alexandru; Oyamada, Yuji; Gardiazabal, Jose; Lasser, Tobias; Ziai, Mahzad; Hein, Rüdiger; Navab, Nassir

    2014-03-01

In dermatology, photographic imagery is acquired in large volumes in order to monitor the progress of diseases, especially melanocytic skin cancers. For this purpose, overview (macro) images are taken of the region of interest and used as a reference map to re-localize highly magnified images of individual lesions. The latter are then used for diagnosis. These pictures are acquired at irregular intervals under only partially constrained circumstances, where patient positions as well as camera positions are not reliable. In the presence of a large number of nevi, correct identification of the same nevus in a series of such images is thus a time-consuming task with ample chances for error. This paper introduces a method for largely automatic and simultaneous identification of nevi in different images, thus allowing the tracking of a single nevus over time, as well as pattern evaluation. The method uses a rotation-invariant feature descriptor that uses the local neighborhood of a nevus to describe it. The texture, size and shape of the nevus are not used to describe it, as these can change over time, especially in the case of a malignancy. We then use the Random Walks framework to compute the correspondences based on the probabilities derived from comparing the feature vectors. Evaluation is performed on synthetic and patient data at the university clinic.

  15. Clinical course of untreated tonic-clonic seizures in childhood: prospective, hospital based study.

    PubMed Central

    van Donselaar, C. A.; Brouwer, O. F.; Geerts, A. T.; Arts, W. F.; Stroink, H.; Peters, A. C.

    1997-01-01

OBJECTIVE: To assess deceleration and acceleration in the disease process in the initial phase of epilepsy in children with new onset tonic-clonic seizures. STUDY DESIGN: Hospital based follow up study. SETTING: Two university hospitals, a general hospital, and a children's hospital in the Netherlands. PATIENTS: 204 children aged 1 month to 16 years with idiopathic or remote symptomatic, newly diagnosed, tonic-clonic seizures, of whom 123 were enrolled at time of their first ever seizure; all children were followed until the start of drug treatment (78 children), the occurrence of the fourth untreated seizure (41 children), or the end of the follow up period of two years (85 untreated children). MAIN OUTCOME MEASURES: Analysis of disease pattern from first ever seizure. The pattern was categorised as decelerating if the child became free of seizures despite treatment being withheld. In cases with four seizures, the pattern was categorised as decelerating if successive intervals increased or as accelerating if intervals decreased. Patterns in the remaining children were classified as uncertain. RESULTS: A decelerating pattern was found in 83 of 85 children who became free of seizures without treatment. Three of the 41 children with four or more untreated seizures showed a decelerating pattern and eight an accelerating pattern. In 110 children the disease process could not be classified, mostly because drug treatment was started after the first, second, or third seizure. The proportion of children with a decelerating pattern (42%, 95% confidence interval 35% to 49%) may be a minimum estimate because of the large number of patients with an uncertain disease pattern. CONCLUSIONS: Though untreated epilepsy is commonly considered to be a progressive disorder with decreasing intervals between seizures, a large proportion of children with newly diagnosed, unprovoked tonic-clonic seizures have a decelerating disease process. 
The fear that tonic-clonic seizures commonly evolve into a progressive disease should not be used as an argument in favour of early drug treatment in children with epilepsy. PMID:9040384
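The interval-based classification rule described above (lengthening intervals indicate deceleration, shortening intervals acceleration) can be sketched directly; the strict-monotonicity reading of "increased"/"decreased" is an assumption of this sketch:

```python
def classify_pattern(seizure_times):
    """Classify an untreated disease course from seizure timestamps (in days),
    following the rule described for children with four seizures:
    strictly lengthening intervals -> 'decelerating',
    strictly shortening intervals -> 'accelerating', otherwise 'uncertain'.
    """
    intervals = [b - a for a, b in zip(seizure_times, seizure_times[1:])]
    if len(intervals) < 2:
        return "uncertain"
    if all(b > a for a, b in zip(intervals, intervals[1:])):
        return "decelerating"
    if all(b < a for a, b in zip(intervals, intervals[1:])):
        return "accelerating"
    return "uncertain"
```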

  16. United Kingdom national paediatric bilateral project: Demographics and results of localization and speech perception testing.

    PubMed

    Cullington, H E; Bele, D; Brinton, J C; Cooper, S; Daft, M; Harding, J; Hatton, N; Humphries, J; Lutman, M E; Maddocks, J; Maggs, J; Millward, K; O'Donoghue, G; Patel, S; Rajput, K; Salmon, V; Sear, T; Speers, A; Wheeler, A; Wilson, K

    2017-01-01

    To assess longitudinal outcomes in a large and varied population of children receiving bilateral cochlear implants both simultaneously and sequentially. This observational non-randomized service evaluation collected localization and speech recognition in noise data from simultaneously and sequentially implanted children at four time points: before bilateral cochlear implants or before the sequential implant, 1 year, 2 years, and 3 years after bilateral implants. No inclusion criteria were applied, so children with additional difficulties, cochleovestibular anomalies, varying educational placements, 23 different home languages, a full range of outcomes and varying device use were included. 1001 children were included: 465 implanted simultaneously and 536 sequentially, representing just over 50% of children receiving bilateral implants in the UK in this period. In simultaneously implanted children the median age at implant was 2.1 years; 7% were implanted at less than 1 year of age. In sequentially implanted children the interval between implants ranged from 0.1 to 14.5 years. Children with simultaneous bilateral implants localized better than those with one implant. On average children receiving a second (sequential) cochlear implant showed improvement in localization and listening in background noise after 1 year of bilateral listening. The interval between sequential implants had no effect on localization improvement although a smaller interval gave more improvement in speech recognition in noise. Children with sequential implants on average were able to use their second device to obtain spatial release from masking after 2 years of bilateral listening. Although ranges were large, bilateral cochlear implants on average offered an improvement in localization and speech perception in noise over unilateral implants. These data represent the diverse population of children with bilateral cochlear implants in the UK from 2010 to 2012. 
Predictions of outcomes for individual patients are not possible from these data. However, there is no indication to preclude children with a long inter-implant interval from having the chance of a second cochlear implant.

  17. The efficacy of a whole body sprint-interval training intervention in an office setting: A feasibility study.

    PubMed

    Gurd, Brendon J; Patel, Jugal; Edgett, Brittany A; Scribbans, Trisha D; Quadrilatero, Joe; Fischer, Steven L

    2018-05-28

Whole body sprint-interval training (WB-SIT) represents a mode of exercise training that is both time-efficient and does not require access to an exercise facility. The current study examined the feasibility of implementing a WB-SIT intervention in a workplace setting. A total of 747 employees from a large office building were invited to participate, with 31 individuals enrolled in the study. Anthropometrics, aerobic fitness, core and upper body strength, and lower body mobility were assessed before and after a 12-week exercise intervention consisting of 2-4 training sessions per week. Each training session required participants to complete eight 20-second intervals (separated by 10 seconds of rest) of whole body exercise. The proportion of participation was 4.2%, while the response rate was 35% (11/31 participants completed post-training testing). In responders, compliance with prescribed training was 83±17%, and significant (p < 0.05) improvements were observed for aerobic fitness, push-up performance and lower body mobility. These results demonstrate the efficacy of WB-SIT for improving fitness and mobility in an office setting, but highlight the difficulties in achieving high rates of participation and response in this setting.

  18. Comparison of incubation period distribution of human infections with MERS-CoV in South Korea and Saudi Arabia.

    PubMed

    Virlogeux, Victor; Fang, Vicky J; Park, Minah; Wu, Joseph T; Cowling, Benjamin J

    2016-10-24

The incubation period is an important epidemiologic distribution: it is often incorporated in case definitions, used to determine appropriate quarantine periods, and serves as an input to mathematical modeling studies. Middle East respiratory syndrome (MERS), caused by the MERS coronavirus, is an emerging infectious disease in the Arabian Peninsula. There was a large outbreak of MERS in South Korea in 2015. We examined the incubation period distribution of MERS coronavirus infection for cases in South Korea and in Saudi Arabia. Using parametric and nonparametric methods, we estimated a mean incubation period of 6.9 days (95% credibility interval: 6.3-7.5) for cases in South Korea and 5.0 days (95% credibility interval: 4.0-6.6) among cases in Saudi Arabia. In a log-linear regression model, the mean incubation period was 1.42 times longer (95% credibility interval: 1.18-1.71) among cases in South Korea compared to Saudi Arabia. The variation that we identified in the incubation period distribution between locations could be associated with differences in ascertainment or reporting of exposure dates and illness onset dates, differences in the source or mode of infection, or environmental differences.
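A parametric estimate of the kind used above can be sketched by fitting a lognormal distribution by maximum likelihood; the delays below are hypothetical, and the study's Bayesian credibility intervals are not reproduced:

```python
import math
import statistics

def lognormal_mean_incubation(days):
    """Fit a lognormal by maximum likelihood (mu and sigma are the mean and
    population sd of log incubation times) and return the implied mean
    incubation period, exp(mu + sigma**2 / 2). Illustrative sketch only.
    """
    logs = [math.log(d) for d in days]
    mu = statistics.fmean(logs)
    sigma = statistics.pstdev(logs)  # MLE uses the population sd
    return math.exp(mu + sigma ** 2 / 2)

# Hypothetical onset-minus-exposure delays in days
sample = [4, 5, 6, 7, 8, 9, 10]
```

In practice interval-censored exposure windows (exposure dates known only to a range) require a likelihood that integrates over the window, which is where the parametric form earns its keep.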

  19. Study design and sampling intensity for demographic analyses of bear populations

    USGS Publications Warehouse

    Harris, R.B.; Schwartz, C.C.; Mace, R.D.; Haroldson, M.A.

    2011-01-01

The rate of population change through time (λ) is a fundamental element of a wildlife population's conservation status, yet estimating it with acceptable precision for bears is difficult. For studies that follow known (usually marked) bears, λ can be estimated during some defined time by applying either life-table or matrix projection methods to estimates of individual vital rates. Usually, however, confidence intervals surrounding the estimate are broader than one would like. Using an estimator suggested by Doak et al. (2005), we explored the precision to be expected in λ from demographic analyses of typical grizzly (Ursus arctos) and American black (U. americanus) bear data sets. We also evaluated some trade-offs among vital rates in sampling strategies. Confidence intervals around λ were more sensitive to adding to the duration of a short (e.g., 3 yrs) than a long (e.g., 10 yrs) study, and more sensitive to adding additional bears to studies with small (e.g., 10 adult females/yr) than large (e.g., 30 adult females/yr) sample sizes. Confidence intervals of λ projected using process-only variance of vital rates were only slightly smaller than those projected using total variances of vital rates. Under sampling constraints typical of most bear studies, it may be more efficient to invest additional resources into monitoring recruitment and juvenile survival rates of females already a part of the study, than to simply increase the sample size of study females. © 2011 International Association for Bear Research and Management.
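Estimating λ by matrix projection, as described above, amounts to finding the dominant eigenvalue of the projection matrix built from the vital rates; a minimal power-iteration sketch with a hypothetical two-stage bear matrix:

```python
def dominant_eigenvalue(matrix, iters=200):
    """Estimate lambda, the asymptotic population growth rate, as the
    dominant eigenvalue of a projection matrix via power iteration."""
    v = [1.0] * len(matrix)
    lam = 1.0
    for _ in range(iters):
        w = [sum(row[j] * v[j] for j in range(len(v))) for row in matrix]
        lam = max(abs(x) for x in w)      # rescaling factor converges to lambda
        v = [x / lam for x in w]
    return lam

# Hypothetical two-stage (juvenile, adult) projection matrix:
# row 0 holds fecundities, row 1 holds juvenile and adult survival.
leslie = [[0.00, 0.30],
          [0.60, 0.90]]
lam = dominant_eigenvalue(leslie)  # lambda > 1 means the population grows
```

The Doak et al. estimator addresses the harder part, the confidence interval around λ, by propagating sampling and process variance of the vital rates; the eigenvalue computation itself is the easy step shown here.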

  20. Changes in the distribution of high-risk births associated with changes in contraceptive prevalence.

    PubMed

    Stover, John; Ross, John

    2013-01-01

    Several birth characteristics are associated with high mortality risk: very young or old mothers, short birth intervals and high birth order. One justification for family planning programs is the health benefits associated with better spacing and timing of births. This study examines the extent to which the prevalence of these risk factors changes as a country transitions from high to low fertility. We use data from 194 national surveys to examine both cross section and within-country variation in these risk factors as they relate to the total fertility rate. Declines in the total fertility rate are associated with large declines in the proportion of high order births, those to mothers over the age of 34 and those with multiple risk factors; as well as to increasing proportions of first order births. There is little change in the proportion of births with short birth intervals except in sub-Saharan Africa. The use of family planning is strongly associated with fertility declines. The proportion of second and higher order births with demographic risk factors declines substantially as fertility declines. This creates a potential for reducing child mortality rates. Some of the reduction comes from modifying the birth interval distribution or by bringing maternal age at the time of birth into the 'safe' range of 18-35 years, and some comes from the actual elimination of births that would have a high mortality risk (high parity births).
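The risk factors discussed above can be expressed as a simple classifier. The 'safe' maternal age range of 18-35 follows the abstract; the interval and order thresholds below (24 months, order above 3) are illustrative assumptions, not the paper's exact definitions:

```python
def birth_risk_factors(mother_age, birth_interval_months, birth_order):
    """Flag the demographic risk factors discussed in the study.
    Thresholds are illustrative: maternal age outside 18-35 (per the
    abstract), preceding interval under 24 months, order above 3 (assumed).
    """
    factors = []
    if mother_age < 18 or mother_age > 35:
        factors.append("maternal age outside 18-35")
    if birth_order > 1 and birth_interval_months < 24:  # first births have no interval
        factors.append("short birth interval")
    if birth_order > 3:
        factors.append("high birth order")
    return factors
```

Tabulating these flags across survey births, by total fertility rate, is essentially how the cross-country comparison in the study proceeds.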

  1. Modelling foraging movements of diving predators: a theoretical study exploring the effect of heterogeneous landscapes on foraging efficiency

    PubMed Central

    Bartoń, Kamil A.; Scott, Beth E.; Travis, Justin M.J.

    2014-01-01

Foraging in the marine environment presents particular challenges for air-breathing predators. Prey capture rates, and the strategies that diving predators use to maximise prey encounter rates and foraging success, are still largely unknown and difficult to observe. Moreover, with growing awareness of potential climate-change impacts and increasing interest in the development of renewable energy sources, it is unknown how the foraging activity of diving predators such as seabirds will respond both to the presence of underwater structures and to potential corresponding changes in prey distributions. Motivated by this issue, we developed a theoretical model to gain a general understanding of how the foraging efficiency of diving predators may vary according to landscape structure and foraging strategy. Our theoretical model highlights that animal movements, intervals between prey captures and foraging efficiency are likely to depend critically on the distribution of the prey resource and the size and distribution of introduced underwater structures. For multiple prey loaders, changes in prey distribution affected the searching time necessary to catch a set amount of prey, which in turn affected foraging efficiency. The spatial aggregation of prey around small devices (∼9 × 9 m) created a valuable habitat for successful foraging, resulting in shorter intervals between prey captures and higher foraging efficiency. The presence of large devices (∼24 × 24 m), however, represented an obstacle to predator movement, thus increasing the intervals between prey captures. In contrast, for single prey loaders the introduction of spatial aggregation of resources did not represent an advantage, suggesting that their foraging efficiency is more strongly affected by other factors, such as the time to find the first prey item, which was found to occur sooner in the presence of large devices. 
The development of this theoretical model represents a useful starting point to understand the energetic reasons for a range of potential predator responses to spatial heterogeneity and environmental uncertainties in terms of search behaviour and predator–prey interactions. We highlight future directions that integrated empirical and modelling studies should take to improve our ability to predict how diving predators will be impacted by the deployment of manmade structures in the marine environment. PMID:25250211

  2. Mining of hospital laboratory information systems: a model study defining age- and gender-specific reference intervals and trajectories for plasma creatinine in a pediatric population.

    PubMed

    Søeby, Karen; Jensen, Peter Bjødstrup; Werge, Thomas; Sørensen, Steen

    2015-09-01

Knowledge of the physiological fluctuation and variation of even commonly used biochemical quantities in extreme age groups and during development is sparse. This challenges the clinical interpretation and utility of laboratory tests in these age groups. To explore the utility of hospital laboratory data as a source of information, we analyzed enzymatic plasma creatinine as a model analyte in two large pediatric hospital samples. Plasma creatinine measurements from 9700 children aged 0-18 years were obtained from hospital laboratory databases and partitioned into high-resolution gender- and age-groups. Normal probability plots were used to deduce parameters of the normal distributions from healthy creatinine values in the mixed hospital datasets. Furthermore, temporal trajectories were generated from repeated measurements to examine developmental patterns in periods of changing creatinine levels. Creatinine shows great age dependence from birth throughout childhood. We computed and replicated 95% reference intervals in narrow gender and age bins and showed them to be comparable to those determined in healthy population studies. We identified pronounced transitions in creatinine levels at different time points after birth and around the early teens, which challenges the establishment and usefulness of reference intervals in those age groups. The study documents that hospital laboratory data may inform on developmental aspects of creatinine, on periods of pronounced heterogeneity, and on valid reference intervals. Furthermore, part of the heterogeneity in creatinine distribution is likely due to differences in biological and chronological age of children and should be considered when using age-specific reference intervals.
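Once a gender/age bin is assumed normally distributed, as deduced from the normal probability plots above, the central 95% reference interval reduces to mean ± 1.96 SD; a sketch with hypothetical creatinine values:

```python
import statistics

def reference_interval(values, z=1.96):
    """Central 95% reference interval under a normality assumption:
    mean +/- 1.96 sd. In practice the analyte may first need a transform
    (e.g. log) if the probability plot is not linear."""
    m = statistics.fmean(values)
    s = statistics.stdev(values)
    return (m - z * s, m + z * s)

# Hypothetical plasma creatinine values (umol/L) for one gender/age bin
bin_values = [28, 30, 31, 33, 34, 35, 36, 38]
low, high = reference_interval(bin_values)
```

The study's harder problem, separating healthy from pathological values inside a mixed hospital dataset, happens before this step, by reading off the straight-line (healthy) segment of the probability plot.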

  3. First Definition of Reference Intervals of Liver Function Tests in China: A Large-Population-Based Multi-Center Study about Healthy Adults

    PubMed Central

    Zhang, Chuanbao; Guo, Wei; Huang, Hengjian; Ma, Yueyun; Zhuang, Junhua; Zhang, Jie

    2013-01-01

Background Reference intervals of liver function tests are very important for the screening, diagnosis, treatment, and monitoring of liver diseases. We aim to establish common reference intervals of liver function tests specifically for the Chinese adult population. Methods A total of 3210 individuals (20–79 years) were enrolled in six representative geographical regions in China. Analytes of ALT, AST, GGT, ALP, total protein, albumin and total bilirubin were measured using three analytical systems mainly used in China. The newly established reference intervals were based on the results of traceability or multiple systems, and were then validated in 21 large hospitals located nationwide and qualified by the National External Quality Assessment (EQA) of China. Results We established reference intervals for the seven liver function tests in the Chinese adult population and found apparent variance in reference values across partitioning variables such as gender (ALT, GGT, total bilirubin), age (ALP, albumin) and region (total protein). More than 86% of the 21 laboratories passed validation in every subgroup of reference intervals, and overall about 95.3% to 98.8% of the 1220 validation results fell within the range of the new reference interval for all liver function tests. Compared with the currently recommended reference intervals in China, the observed one-sided proportions of out-of-range reference values for most tests deviated significantly from the nominal 2.5%, for example total bilirubin (15.2%), ALP (0.2%), and albumin (0.0%). Most reference intervals in our study also differed markedly from those of other populations. Conclusion The previously recommended reference intervals are no longer applicable for the current Chinese population. 
We have established common reference intervals of liver function tests that are defined specifically for the Chinese population and can be universally used among EQA-approved laboratories located all over China. PMID:24058449

  4. VizieR Online Data Catalog: Fermi/GBM GRB time-resolved spectral catalog (Yu+, 2016)

    NASA Astrophysics Data System (ADS)

Yu, H.-F.; Preece, R. D.; Greiner, J.; Bhat, P. N.; Bissaldi, E.; Briggs, M. S.; Cleveland, W. H.; Connaughton, V.; Goldstein, A.; von Kienlin, A.; Kouveliotou, C.; Mailyan, B.; Meegan, C. A.; Paciesas, W. S.; Rau, A.; Roberts, O. J.; Veres, P.; Wilson-Hodge, C.; Zhang, B.-B.; van Eerten, H. J.

    2016-01-01

Time-resolved spectral analysis results of BEST models: for each spectrum, the GRB name using the Fermi GBM trigger designation, spectrum number within the individual burst, start time Tstart and end time Tstop for the time bin, BEST model, best-fit parameters of the BEST model, value of CSTAT per degrees of freedom, and 10 keV-1 MeV photon and energy flux are given. Ep evolutionary trends: for each burst, the GRB name, number of spectra with Ep, Spearman's Rank Correlation Coefficients between Ep and photon flux with 90%, 95%, and 99% confidence intervals, Spearman's Rank Correlation Coefficients between Ep and energy flux with 90%, 95%, and 99% confidence intervals, Spearman's Rank Correlation Coefficient between Ep and time with 90%, 95%, and 99% confidence intervals, trends as determined by computer for 90%, 95%, and 99% confidence intervals, and trends as determined by human eyes are given. (2 data files).
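Spearman's rank correlation coefficient, used throughout the catalog's trend analysis, is simply the Pearson correlation of the ranks; a self-contained sketch with average ranks for ties:

```python
def spearman_rho(xs, ys):
    """Spearman's rank correlation: Pearson correlation of the ranks,
    with average ranks assigned to tied values."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0.0] * len(vals)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and vals[order[j + 1]] == vals[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1          # average rank for the tied block
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    rx, ry = ranks(xs), ranks(ys)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

A monotonically hardening Ep-versus-flux trend gives rho near +1 regardless of the functional form, which is why the catalog pairs rho with confidence intervals rather than fitting a specific relation.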

  5. Interval Training

    MedlinePlus

Interval training can help you get the most out of your workout. By Mayo Clinic Staff. Are you ready to shake ... spending more time at the gym? Consider aerobic interval training. Once the domain of elite athletes, interval training ...

  6. Measuring Land Change in Coastal Zone around a Rapidly Urbanized Bay.

    PubMed

    Huang, Faming; Huang, Boqiang; Huang, Jinliang; Li, Shenghui

    2018-05-23

Urban development is a major cause of eco-degradation in many coastal regions. Understanding urbanization dynamics and underlying driving factors is crucial for urban planning and management. Land-use dynamic degree indices and intensity analysis were used to measure land changes among 1990, 2002, 2009, and 2017 in the coastal zone around Quanzhou bay, a rapidly urbanized bay in Southeast China. The comprehensive land-use dynamic degree and interval-level intensity analysis both revealed that land change was accelerating across the three time intervals in a three-kilometer-wide zone along the coastline (zone A), while land change was fastest during the second time interval, 2002-2009, in a separate terrestrial area within the coastal zone (zone B). Driven by urbanization, built-up gains and cropland losses were active for all time intervals in both zones. Mudflat losses were active except in the first time interval in zone A, owing to intensive sea reclamation. The gain of mangrove was active while the loss of mangrove was dormant for all three intervals in zone A. Transition-level analysis further revealed the similarities and differences in processes within patterns of land changes for both zones. The transition from cropland to built-up was systematically targeted and stationary, while the transition from woodland to built-up was a systematically avoiding transition in both zones. Built-up tended to target aquaculture for the second and third time intervals in zone A but to avoid aquaculture for all intervals in zone B. Land change in zone A was more significant than that in zone B during the second and third time intervals at all three intensity levels. The application of intensity analysis can enhance our understanding of the patterns and processes in land changes and support suitable land development plans in the Quanzhou bay area. 
This type of investigation is useful to provide information for developing sound land use policy to achieve urban sustainability in similar coastal areas.
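Interval-level intensity, as used above, normalizes the area changed within an interval by the study area and the interval's duration, so intervals of different lengths become comparable; a sketch with hypothetical areas:

```python
def interval_intensity(changed_area, total_area, years):
    """Interval-level intensity: percent of the study area changing
    per year within one time interval."""
    return 100.0 * changed_area / (total_area * years)

# Hypothetical areas (km^2) changed in each interval of a 1990-2017 period
intervals = {"1990-2002": (60.0, 12), "2002-2009": (70.0, 7), "2009-2017": (64.0, 8)}
total = 500.0
intensities = {k: interval_intensity(a, total, y) for k, (a, y) in intervals.items()}

# An interval is "fast" when its intensity exceeds the uniform intensity,
# i.e. the rate that would apply if change were spread evenly over all years.
total_years = sum(y for _, y in intervals.values())
uniform = 100.0 * sum(a for a, _ in intervals.values()) / (total * total_years)
```

With these invented numbers the middle interval is "fast" despite changing less area per interval than the first one would suggest, which is the kind of duration artifact the normalization removes.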

  7. Changes in genetic selection differentials and generation intervals in US Holstein dairy cattle as a result of genomic selection.

    PubMed

    García-Ruiz, Adriana; Cole, John B; VanRaden, Paul M; Wiggans, George R; Ruiz-López, Felipe J; Van Tassell, Curtis P

    2016-07-12

    Seven years after the introduction of genomic selection in the United States, it is now possible to evaluate the impact of this technology on the population. Selection differential(s) (SD) and generation interval(s) (GI) were characterized in a four-path selection model that included sire(s) of bulls (SB), sire(s) of cows (SC), dam(s) of bulls (DB), and dam(s) of cows (DC). Changes in SD over time were estimated for milk, fat, and protein yield; somatic cell score (SCS); productive life (PL); and daughter pregnancy rate (DPR) for the Holstein breed. In the period following implementation of genomic selection, dramatic reductions were seen in GI, especially the SB and SC paths. The SB GI reduced from ∼7 y to less than 2.5 y, and the DB GI fell from about 4 y to nearly 2.5 y. SD were relatively stable for yield traits, although modest gains were noted in recent years. The most dramatic response to genomic selection was observed for the lowly heritable traits DPR, PL, and SCS. Genetic trends changed from close to zero to large and favorable, resulting in rapid genetic improvement in fertility, lifespan, and health in a breed where these traits eroded over time. These results clearly demonstrate the positive impact of genomic selection in US dairy cattle, even though this technology has only been in use for a short time. Based on the four-path selection model, rates of genetic gain per year increased from ∼50-100% for yield traits and from threefold to fourfold for lowly heritable traits.
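The generation interval for one selection path is the mean age of parents when their selected offspring are born; a minimal sketch with hypothetical sire ages illustrating the kind of drop reported above:

```python
def generation_interval(parent_ages_at_birth):
    """Generation interval for one selection path (e.g. sires of bulls):
    mean age of parents, in years, when their selected offspring were born."""
    return sum(parent_ages_at_birth) / len(parent_ages_at_birth)

# Hypothetical sire-of-bulls ages (years) before and after genomic selection;
# invented values echoing the reported drop from ~7 y to under 2.5 y.
pre_genomic = [6.5, 7.0, 7.5, 7.0]
post_genomic = [2.2, 2.4, 2.5, 2.3]
```

Because the annual rate of genetic gain scales with selection differential divided by generation interval, shrinking the SB path from ~7 y to ~2.4 y multiplies the gain from that path even at a constant selection differential.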

  8. Changes in genetic selection differentials and generation intervals in US Holstein dairy cattle as a result of genomic selection

    PubMed Central

    García-Ruiz, Adriana; Cole, John B.; VanRaden, Paul M.; Wiggans, George R.; Ruiz-López, Felipe J.; Van Tassell, Curtis P.

    2016-01-01

    Seven years after the introduction of genomic selection in the United States, it is now possible to evaluate the impact of this technology on the population. Selection differential(s) (SD) and generation interval(s) (GI) were characterized in a four-path selection model that included sire(s) of bulls (SB), sire(s) of cows (SC), dam(s) of bulls (DB), and dam(s) of cows (DC). Changes in SD over time were estimated for milk, fat, and protein yield; somatic cell score (SCS); productive life (PL); and daughter pregnancy rate (DPR) for the Holstein breed. In the period following implementation of genomic selection, dramatic reductions were seen in GI, especially the SB and SC paths. The SB GI reduced from ∼7 y to less than 2.5 y, and the DB GI fell from about 4 y to nearly 2.5 y. SD were relatively stable for yield traits, although modest gains were noted in recent years. The most dramatic response to genomic selection was observed for the lowly heritable traits DPR, PL, and SCS. Genetic trends changed from close to zero to large and favorable, resulting in rapid genetic improvement in fertility, lifespan, and health in a breed where these traits eroded over time. These results clearly demonstrate the positive impact of genomic selection in US dairy cattle, even though this technology has only been in use for a short time. Based on the four-path selection model, rates of genetic gain per year increased from ∼50–100% for yield traits and from threefold to fourfold for lowly heritable traits. PMID:27354521

  9. On Intensive Late Holocene Iron Mining and Production in the Northern Congo Basin and the Environmental Consequences Associated with Metallurgy in Central Africa.

    PubMed

    Lupo, Karen D; Schmitt, Dave N; Kiahtipes, Christopher A; Ndanga, Jean-Paul; Young, D Craig; Simiti, Bernard

    2015-01-01

    An ongoing question in paleoenvironmental reconstructions of the central African rainforest concerns the role that prehistoric metallurgy played in shaping forest vegetation. Here we report evidence of intensive iron-ore mining and smelting in forested regions of the northern Congo Basin dating to the late Holocene. Volumetric estimates on extracted iron-ore and associated slag mounds from prehistoric sites in the southern Central African Republic suggest large-scale iron production on par with other archaeological and historically-known iron fabrication areas. These data document the first evidence of intensive iron mining and production spanning approximately 90 years prior to colonial occupation (circa AD 1889) and during an interval of time that is poorly represented in the archaeological record. Additional site areas pre-dating these remains by 3-4 centuries reflect an earlier period of iron production on a smaller scale. Microbotanical evidence from a sediment core collected from an adjacent riparian trap shows a reduction in shade-demanding trees in concert with an increase in light-demanding species spanning the time interval associated with iron intensification. This shift occurs during the same time interval when many portions of Central Africa witnessed forest transgressions associated with a return to moister and more humid conditions beginning 500-100 years ago. Although data presented here do not demonstrate that iron smelting activities caused widespread vegetation change in Central Africa, we argue that intense mining and smelting can have localized and potentially regional impacts on vegetation communities. These data further demonstrate the high value of pairing archaeological and paleoenvironmental analyses to reconstruct regional-scale forest histories.

  10. The association between calfhood bovine respiratory disease complex and subsequent departure from the herd, milk production, and reproduction in dairy cattle.

    PubMed

    Schaffer, Aaron P; Larson, Robert L; Cernicchiaro, Natalia; Hanzlicek, Gregg A; Bartle, Steven J; Thomson, Daniel U

    2016-05-15

    OBJECTIVE To describe the frequency of calfhood producer-identified bovine respiratory disease complex (BRDC) in Holstein replacement heifers on 1 large farm and determine associations between development of BRDC at ≤ 120 days of age (BRDC120) with milk production estimate, calving interval, and risk of departure from the herd (DFH). DESIGN Retrospective, observational study. ANIMALS 14,024 Holstein heifer calves born on 1 farm. PROCEDURES Data were obtained from herd management records. Cox proportional hazard and generalized linear mixed-effects models were used to assess associations for variables of interest (BRDC120 status, demographic data, and management factors) with DFH, milk production estimate, and calving interval. RESULTS Except for the year 2007, animals identified as having BRDC120 were 1.62 to 4.98 times as likely to leave the herd before first calving, compared with those that did not have this designation. Calves identified as having BRDC prior to weaning were 2.62 times as likely to have DFH before first calving as those classified as developing BRDC after weaning. Cows identified as having BRDC120 were 1.28 times as likely to have DFH between the first and second calving as were other cows. The BRDC120 designation was associated with a 233-kg (513-lb) lower 305-day mature equivalent value for first lactation milk production, but was not associated with longer or shorter calving intervals at maturity. CONCLUSIONS AND CLINICAL RELEVANCE Dairy cattle identified as having BRDC120 had increased risk of DFH before the first or second calving and lower first-lactation milk production estimates, compared with results for cattle without this finding. Further investigation of these associations is warranted.

  11. Cardiovascular Drift during Training for Fitness in Patients with Metabolic Syndrome.

    PubMed

    Morales-Palomo, Felix; Ramirez-Jimenez, Miguel; Ortega, Juan Fernando; Pallares, Jesus Garcia; Mora-Rodriguez, Ricardo

    2017-03-01

    The health benefits of a training program are largely influenced by the exercise dose and intensity. We sought to determine whether during a training bout of continuous versus interval exercise the workload needs to be reduced to maintain the prescribed target heart rate (HR). Fourteen obese (31 ± 4 kg·m(-2)) middle-age (57 ± 8 yr) individuals with metabolic syndrome underwent two exercise training bouts matched by energy expenditure (i.e., 70 ± 5 min of continuous exercise [CE] or 45 min of interval exercise, high-intensity interval training [HIIT]). All subjects completed both trials in a randomized order. HR, power output (W), percent dehydration, intestinal and skin temperature (TINT and TSK), mean arterial pressure, cardiac output (CO), stroke volume (SV), and blood lactate concentration (La) were measured at the initial and latter stages of each trial to assess time-dependent drift. During the HIIT trial, power output was lowered by 30 ± 16 W to maintain the target HR, whereas a 10 ± 11 W reduction was needed in the CE trial (P < 0.05). Energy expenditure, CO, and SV declined with exercise time only in the HIIT trial (15%, 10%, and 13%, respectively). During HIIT, percent dehydration, TINT, and TSK increased more than during the CE trial (all P = 0.001). Mean arterial pressure and La were higher in HIIT without time drift in any trial. Our findings suggest that while CE results in mild power output reductions to maintain target HR, the increasingly popular HIIT results in marked reductions in power output, energy expenditure, and CO (21%, 15%, and 10%, respectively). HIIT based on target HR may result in lower than expected training adaptations because of workload adjustments to avoid HR drift.

  12. On Intensive Late Holocene Iron Mining and Production in the Northern Congo Basin and the Environmental Consequences Associated with Metallurgy in Central Africa

    PubMed Central

    Lupo, Karen D.; Schmitt, Dave N.; Kiahtipes, Christopher A.; Ndanga, Jean-Paul; Young, D. Craig; Simiti, Bernard

    2015-01-01

    An ongoing question in paleoenvironmental reconstructions of the central African rainforest concerns the role that prehistoric metallurgy played in shaping forest vegetation. Here we report evidence of intensive iron-ore mining and smelting in forested regions of the northern Congo Basin dating to the late Holocene. Volumetric estimates on extracted iron-ore and associated slag mounds from prehistoric sites in the southern Central African Republic suggest large-scale iron production on par with other archaeological and historically-known iron fabrication areas. These data document the first evidence of intensive iron mining and production spanning approximately 90 years prior to colonial occupation (circa AD 1889) and during an interval of time that is poorly represented in the archaeological record. Additional site areas pre-dating these remains by 3-4 centuries reflect an earlier period of iron production on a smaller scale. Microbotanical evidence from a sediment core collected from an adjacent riparian trap shows a reduction in shade-demanding trees in concert with an increase in light-demanding species spanning the time interval associated with iron intensification. This shift occurs during the same time interval when many portions of Central Africa witnessed forest transgressions associated with a return to moister and more humid conditions beginning 500-100 years ago. Although data presented here do not demonstrate that iron smelting activities caused widespread vegetation change in Central Africa, we argue that intense mining and smelting can have localized and potentially regional impacts on vegetation communities. These data further demonstrate the high value of pairing archaeological and paleoenvironmental analyses to reconstruct regional-scale forest histories. PMID:26161540

  13. Local quenches and quantum chaos from higher spin perturbations

    NASA Astrophysics Data System (ADS)

    David, Justin R.; Khetrapal, Surbhi; Kumar, S. Prem

    2017-10-01

    We study local quenches in 1+1 dimensional conformal field theories at large-c by operators carrying higher spin charge. Viewing such states as solutions in Chern-Simons theory, representing infalling massive particles with spin-three charge in the BTZ background, we use the Wilson line prescription to compute the single-interval entanglement entropy (EE) and scrambling time following the quench. We find that the change in EE is finite (and real) only if the spin-three charge q is bounded by the energy of the perturbation E, as |q|/c < E²/c². We show that the Wilson line/EE correlator deep in the quenched regime and its expansion for small quench widths overlaps with the Regge limit for chaos of the out-of-time-ordered correlator. We further find that the scrambling time for the two-sided mutual information between two intervals in the thermofield double state increases with increasing spin-three charge, diverging when the bound is saturated. For larger values of the charge, the scrambling time is shorter than for pure gravity and controlled by the spin-three Lyapunov exponent 4π/β. In a CFT with higher spin chemical potential, dual to a higher spin black hole, we find that the chemical potential must be bounded to ensure that the mutual information is a concave function of time and entanglement speed is less than the speed of light. In this case, a quench with zero higher spin charge yields the same Lyapunov exponent as pure Einstein gravity.

  14. Genetic control and comparative genomic analysis of flowering time in Setaria (Poaceae).

    PubMed

    Mauro-Herrera, Margarita; Wang, Xuewen; Barbier, Hugues; Brutnell, Thomas P; Devos, Katrien M; Doust, Andrew N

    2013-02-01

    We report the first study on the genetic control of flowering in Setaria, a panicoid grass closely related to switchgrass, and in the same subfamily as maize and sorghum. A recombinant inbred line mapping population derived from a cross between domesticated Setaria italica (foxtail millet) and its wild relative Setaria viridis (green millet), was grown in eight trials with varying environmental conditions to identify a small number of quantitative trait loci (QTL) that control differences in flowering time. Many of the QTL across trials colocalize, suggesting that the genetic control of flowering in Setaria is robust across a range of photoperiod and other environmental factors. A detailed comparison of QTL for flowering in Setaria, sorghum, and maize indicates that several of the major QTL regions identified in maize and sorghum are syntenic orthologs with Setaria QTL, although the maize large effect QTL on chromosome 10 is not. Several Setaria QTL intervals had multiple LOD peaks and were composed of multiple syntenic blocks, suggesting that observed QTL represent multiple tightly linked loci. Candidate genes from flowering time pathways identified in rice and Arabidopsis were identified in Setaria QTL intervals, including those involved in the CONSTANS photoperiod pathway. However, only three of the approximately seven genes cloned for flowering time in maize colocalized with Setaria QTL. This suggests that variation in flowering time in separate grass lineages is controlled by a combination of conserved and lineage specific genes.

  15. Genetic Control and Comparative Genomic Analysis of Flowering Time in Setaria (Poaceae)

    PubMed Central

    Mauro-Herrera, Margarita; Wang, Xuewen; Barbier, Hugues; Brutnell, Thomas P.; Devos, Katrien M.; Doust, Andrew N.

    2013-01-01

    We report the first study on the genetic control of flowering in Setaria, a panicoid grass closely related to switchgrass, and in the same subfamily as maize and sorghum. A recombinant inbred line mapping population derived from a cross between domesticated Setaria italica (foxtail millet) and its wild relative Setaria viridis (green millet), was grown in eight trials with varying environmental conditions to identify a small number of quantitative trait loci (QTL) that control differences in flowering time. Many of the QTL across trials colocalize, suggesting that the genetic control of flowering in Setaria is robust across a range of photoperiod and other environmental factors. A detailed comparison of QTL for flowering in Setaria, sorghum, and maize indicates that several of the major QTL regions identified in maize and sorghum are syntenic orthologs with Setaria QTL, although the maize large effect QTL on chromosome 10 is not. Several Setaria QTL intervals had multiple LOD peaks and were composed of multiple syntenic blocks, suggesting that observed QTL represent multiple tightly linked loci. Candidate genes from flowering time pathways identified in rice and Arabidopsis were identified in Setaria QTL intervals, including those involved in the CONSTANS photoperiod pathway. However, only three of the approximately seven genes cloned for flowering time in maize colocalized with Setaria QTL. This suggests that variation in flowering time in separate grass lineages is controlled by a combination of conserved and lineage specific genes. PMID:23390604

  16. What Controls Subduction Earthquake Size and Occurrence?

    NASA Astrophysics Data System (ADS)

    Ruff, L. J.

    2008-12-01

    There is a long history of observational studies on the size and recurrence intervals of the large underthrusting earthquakes in subduction zones. In parallel with this documentation of the variability in both recurrence times and earthquake sizes -- both within and amongst subduction zones -- there have been numerous suggestions for what controls size and occurrence. In addition to the intrinsic scientific interest in these issues, there are direct applications to hazards mitigation. In this overview presentation, I review past progress, consider current paradigms, and look toward future studies that offer some resolution of long-standing questions. Given the definition of seismic moment, earthquake size is the product of overall static stress drop, down-dip fault width, and along-strike fault length. The long-standing consensus viewpoint is that for the largest earthquakes in a subduction zone: stress-drop is constant, fault width is the down-dip extent of the seismogenic portion of the plate boundary, but that along-strike fault length can vary from one large earthquake to the next. While there may be semi-permanent segments along a subduction zone, successive large earthquakes can rupture different combinations of segments. Many investigations emphasize the role of asperities within the segments, rather than segment edges. Thus, the question of earthquake size is translated into: "What controls the along-strike segmentation, and what determines which segments will rupture in a particular earthquake cycle?" There is no consensus response to these questions. Over the years, the suggestions for segmentation control include physical features in the subducted plate, physical features in the over-lying plate, and more obscure -- and possibly ever-changing -- properties of the plate interface such as the hydrologic conditions. 
It seems that the full global answer requires either some unforeseen breakthrough, or the long-term hard work of falsifying all candidate hypotheses except one. This falsification process requires both concentrated multidisciplinary efforts and patience. Large earthquake recurrence intervals in the same subduction zone segment display a significant, and therefore unfortunate, variability. Over the years, many of us have devised simple models to explain this variability. Of course, there are also more complicated explanations with many additional model parameters. While there has been important observational progress as both historical and paleo-seismological studies continue to add more data pairs of fault length and recurrence intervals, there has been a frustrating lack of progress in elimination of candidate models or processes that explain recurrence time variability. Some of the simple models for recurrence times offer a probabilistic or even deterministic prediction of future recurrence times - and have been used for hazards evaluation. It is important to know if these models are correct. Since we do not have the patience to wait for a strict statistical test, we must find other ways to test these ideas. For example, some of the simple deterministic models for along-strike segment interaction make predictions for variation in tectonic stress state that can be tested during the inter-seismic period. We have seen how some observational discoveries in the past decade (e.g., the episodic creep events down-dip of the seismogenic zone) give us additional insight into the physical processes in subduction zones; perhaps multi-disciplinary studies of subduction zones will discover a new way to reliably infer large-scale shear stresses on the plate interface?

  17. New Estimates of Intergenerational Time Intervals for the Calculation of Age and Origins of Mutations

    PubMed Central

    Tremblay, Marc; Vézina, Hélène

    2000-01-01

    Summary Intergenerational time intervals are frequently used in human population-genetics studies concerned with the ages and origins of mutations. In most cases, mean intervals of 20 or 25 years are used, regardless of the demographic characteristics of the population under study. Although these characteristics may vary from prehistoric to historical times, we suggest that this value is probably too low, and that the ages of some mutations may have been underestimated. Analyses were performed by using the BALSAC Population Register (Quebec, Canada), from which several intergenerational comparisons can be made. Family reconstitutions were used to measure interval lengths and variations in descending lineages. Various parameters were considered, such as spouse age at marriage, parental age, and reproduction levels. Mother-child and father-child intervals were compared. Intergenerational male and female intervals were also analyzed in 100 extended ascending genealogies. Results showed that a mean value of 30 years is a better estimate of intergenerational intervals than 20 or 25 years. As marked differences between male and female interval length were observed, specific values are proposed for mtDNA, autosomal, X-chromosomal, and Y-chromosomal loci. The applicability of these results for age estimates of mutations is discussed. PMID:10677323
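    The locus-specific values proposed above follow from transmission patterns: mtDNA is inherited only through mothers, the Y chromosome only through fathers, autosomes equally through both, and the X chromosome spends two-thirds of its history in females. A minimal sketch of converting an estimated number of generations into years, assuming hypothetical sex-specific mean intervals (not the BALSAC estimates):

```python
# Weight sex-specific mean intergenerational intervals by locus
# transmission to date mutations in years. Interval values are
# illustrative placeholders, not the paper's estimates.

def locus_interval(female_years, male_years, locus):
    """Effective intergenerational interval for a locus type."""
    weights = {             # (female weight, male weight)
        "mtDNA":     (1.0, 0.0),    # maternal transmission only
        "Y":         (0.0, 1.0),    # paternal transmission only
        "autosomal": (0.5, 0.5),    # equal transmission
        "X":         (2 / 3, 1 / 3),  # X spends 2/3 of time in females
    }
    wf, wm = weights[locus]
    return wf * female_years + wm * male_years

def mutation_age_years(generations, female_years, male_years, locus):
    """Age of a mutation given its estimated depth in generations."""
    return generations * locus_interval(female_years, male_years, locus)

female, male = 29.0, 32.0  # hypothetical sex-specific means (years)
for locus in ("mtDNA", "autosomal", "X", "Y"):
    print(locus, mutation_age_years(100, female, male, locus))
```

    The point made in the abstract follows immediately: replacing a blanket 20- or 25-year interval with a larger, locus-appropriate value proportionally increases every age estimate expressed in years.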

  18. The effects of morphine on fixed-interval patterning and temporal discrimination.

    PubMed Central

    Odum, A L; Schaal, D W

    2000-01-01

    Changes produced by drugs in response patterns under fixed-interval schedules of reinforcement have been interpreted to result from changes in temporal discrimination. To examine this possibility, this experiment determined the effects of morphine on the response patterning of 4 pigeons during a fixed-interval 1-min schedule of food delivery with interpolated temporal discrimination trials. Twenty of the 50 total intervals were interrupted by choice trials. Pecks to one key color produced food if the interval was interrupted after a short time (after 2 or 4.64 s). Pecks to another key color produced food if the interval was interrupted after a long time (after 24.99 or 58 s). Morphine (1.0 to 10.0 mg/kg) decreased the index of curvature (a measure of response patterning) during fixed intervals and accuracy during temporal discrimination trials. Accuracy was equally disrupted following short and long sample durations. Although morphine disrupted temporal discrimination in the context of a fixed-interval schedule, these effects are inconsistent with interpretations of the disruption of response patterning as a selective overestimation of elapsed time. The effects of morphine may be related to the effects of more conventional external stimuli on response patterning. PMID:11029024

  19. Context-Dependent Duration Signals in the Primate Prefrontal Cortex

    PubMed Central

    Genovesio, Aldo; Seitz, Lucia K.; Tsujimoto, Satoshi; Wise, Steven P.

    2016-01-01

    The activity of some prefrontal (PF) cortex neurons distinguishes short from long time intervals. Here, we examined whether this property reflected a general timing mechanism or one dependent on behavioral context. In one task, monkeys discriminated the relative duration of 2 stimuli; in the other, they discriminated the relative distance of 2 stimuli from a fixed reference point. Both tasks had a pre-cue period (interval 1) and a delay period (interval 2) with no discriminant stimulus. Interval 1 elapsed before the presentation of the first discriminant stimulus, and interval 2 began after that stimulus. Both intervals had durations of either 400 or 800 ms. Most PF neurons distinguished short from long durations in one task or interval, but not in the others. When neurons did signal something about duration for both intervals, they did so in an uncorrelated or weakly correlated manner. These results demonstrate a high degree of context dependency in PF time processing. The PF, therefore, does not appear to signal durations abstractedly, as would be expected of a general temporal encoder, but instead does so in a highly context-dependent manner, both within and between tasks. PMID:26209845

  20. Profile local linear estimation of generalized semiparametric regression model for longitudinal data.

    PubMed

    Sun, Yanqing; Sun, Liuquan; Zhou, Jie

    2013-07-01

    This paper studies the generalized semiparametric regression model for longitudinal data where the covariate effects are constant for some and time-varying for others. Different link functions can be used to allow more flexible modelling of longitudinal data. The nonparametric components of the model are estimated using a local linear estimating equation and the parametric components are estimated through a profile estimating function. The method automatically adjusts for heterogeneity of sampling times, allowing the sampling strategy to depend on the past sampling history as well as possibly time-dependent covariates without specifically modelling such dependence. A [Formula: see text]-fold cross-validation bandwidth selection is proposed as a working tool for locating an appropriate bandwidth. A criterion for selecting the link function is proposed to provide better fit of the data. Large sample properties of the proposed estimators are investigated. Large sample pointwise and simultaneous confidence intervals for the regression coefficients are constructed. Formal hypothesis testing procedures are proposed to check for the covariate effects and whether the effects are time-varying. A simulation study is conducted to examine the finite sample performances of the proposed estimation and hypothesis testing procedures. The methods are illustrated with a data example.
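    The k-fold cross-validation bandwidth selection step can be illustrated generically. The sketch below uses a plain Nadaraya-Watson kernel smoother as a stand-in (the paper's profile local linear estimating equations are not reproduced); the data, Gaussian kernel, and bandwidth grid are all illustrative assumptions:

```python
# Generic k-fold cross-validation for kernel-regression bandwidth
# selection: hold out each fold, predict it from the rest, and pick
# the bandwidth minimizing mean squared prediction error.
import math
import random

def nw_estimate(x, xs, ys, h):
    """Nadaraya-Watson estimate at x with a Gaussian kernel of bandwidth h."""
    w = [math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in xs]
    s = sum(w)
    return sum(wi * yi for wi, yi in zip(w, ys)) / s if s > 0 else 0.0

def cv_score(xs, ys, h, k=5):
    """Mean squared prediction error of bandwidth h over k folds."""
    idx = list(range(len(xs)))
    folds = [idx[i::k] for i in range(k)]
    err, n = 0.0, 0
    for fold in folds:
        held = set(fold)
        train = [i for i in idx if i not in held]
        tx, ty = [xs[i] for i in train], [ys[i] for i in train]
        for i in fold:
            err += (ys[i] - nw_estimate(xs[i], tx, ty, h)) ** 2
            n += 1
    return err / n

random.seed(0)
xs = [random.uniform(0, 3) for _ in range(120)]
ys = [math.sin(2 * x) + random.gauss(0, 0.2) for x in xs]
grid = (0.05, 0.1, 0.2, 0.4, 0.8)
best_h = min(grid, key=lambda h: cv_score(xs, ys, h))
print("selected bandwidth:", best_h)
```

    Too small a bandwidth fits noise; too large a bandwidth oversmooths the curvature; cross-validation trades the two off empirically, which is the working role the abstract assigns it.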

  1. Impact cratering through geologic time

    USGS Publications Warehouse

    Shoemaker, E.M.; Shoemaker, C.S.

    1998-01-01

    New data on lunar craters and recent discoveries about craters on Earth permit a reassessment of the bombardment history of Earth over the last 3.2 billion years. The combined lunar and terrestrial crater records suggest that the long-term average rate of production of craters larger than 20 km in diameter has increased, perhaps by as much as 60%, in the last 100 to 200 million years. Production of craters larger than 70 km in diameter may have increased, in the same time interval, by a factor of five or more over the average for the preceding three billion years. A large increase in the flux of long-period comets appears to be the most likely explanation for such a long-term increase in the cratering rate. Two large craters, in particular, appear to be associated with a comet shower that occurred about 35.5 million years ago. The infall of cosmic dust, as traced by 3He in deep sea sediments, and the ages of large craters, impact glass horizons, and other stratigraphic markers of large impacts seem to be approximately correlated with the estimated times of passage of the Sun through the galactic plane, at least for the last 65 million years. Those are predicted times for an increased near-Earth flux of comets from the Oort Cloud induced by the combined effects of galactic tidal perturbations and encounters of the Sun with passing stars. Long-term changes in the average comet flux may be related to changes in the amplitude of the z-motion of the Sun perpendicular to the galactic plane or to stripping of the outer Oort cloud by encounters with large passing stars, followed by restoration from the inner Oort cloud reservoir.

  2. Systolic and diastolic time intervals in pulsus alternans - Significance of alternating isovolumic relaxation

    NASA Technical Reports Server (NTRS)

    Spodick, D. H.; Quarry, V. M.; Khan, A. H.

    1974-01-01

    Systolic and diastolic time intervals in 14 cardiac patients with pulsus alternans revealed significant alternation of preinjection period (PEP), isovolumic contraction time (IVCT), left ventricular ejection time (LVET), ejection time index (ETI), PEP/LVET, and carotid dD/dt with better functional values in the strong beats. Alternation of cycle length, duration of electromechanical systole (EMS), and total diastole, i.e., isovolumic relaxation period (IRP) and diastolic filling period (DFP), occurred in 7 out of 8 patients. These diastolic intervals alternated reciprocally such that the IRP of the strong beats encroached upon the DFP of the next (weak) beats.

  3. Binary Interval Search: a scalable algorithm for counting interval intersections.

    PubMed

    Layer, Ryan M; Skadron, Kevin; Robins, Gabriel; Hall, Ira M; Quinlan, Aaron R

    2013-01-01

    The comparison of diverse genomic datasets is fundamental to understand genome biology. Researchers must explore many large datasets of genome intervals (e.g. genes, sequence alignments) to place their experimental results in a broader context and to make new discoveries. Relationships between genomic datasets are typically measured by identifying intervals that intersect, that is, they overlap and thus share a common genome interval. Given the continued advances in DNA sequencing technologies, efficient methods for measuring statistically significant relationships between many sets of genomic features are crucial for future discovery. We introduce the Binary Interval Search (BITS) algorithm, a novel and scalable approach to interval set intersection. We demonstrate that BITS outperforms existing methods at counting interval intersections. Moreover, we show that BITS is intrinsically suited to parallel computing architectures, such as graphics processing units by illustrating its utility for efficient Monte Carlo simulations measuring the significance of relationships between sets of genomic intervals. https://github.com/arq5x/bits.
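    The counting idea behind a binary-search approach to interval intersection can be sketched as follows. This is a generic serial illustration, not the BITS implementation itself (which adds the scalable parallel/GPU formulation): an interval intersects the query unless it ends before the query starts or starts after the query ends, so two binary searches over sorted endpoint arrays give the count without enumerating intersections.

```python
# Count database intervals intersecting a closed query [qs, qe]:
# total minus those ending before qs minus those starting after qe.
# Both counts come from binary search over sorted endpoints.
import bisect

def count_intersections(intervals, qs, qe):
    # For repeated queries, sort these arrays once outside the function.
    starts = sorted(s for s, _ in intervals)
    ends = sorted(e for _, e in intervals)
    ending_before = bisect.bisect_left(ends, qs)                    # end < qs
    starting_after = len(starts) - bisect.bisect_right(starts, qe)  # start > qe
    return len(intervals) - ending_before - starting_after

db = [(1, 5), (3, 8), (10, 12), (6, 7)]
print(count_intersections(db, 4, 6))  # -> 3: (1,5), (3,8), (6,7) intersect
```

    Because each query is just two O(log n) searches over shared read-only arrays, many queries parallelize trivially, which is the property exploited for Monte Carlo significance testing.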

  4. Market-based control strategy for long-span structures considering the multi-time delay issue

    NASA Astrophysics Data System (ADS)

    Li, Hongnan; Song, Jianzhu; Li, Gang

    2017-01-01

    To solve the different time delays that exist in the control device installed on spatial structures, in this study, discrete analysis using the 2^N precise integration algorithm was selected to solve the multi-time-delay issue for long-span structures based on the market-based control (MBC) method. The concept of interval mixed energy was introduced from computational structural mechanics and optimal control research areas, and it translates the design of the MBC multi-time-delay controller into a solution for the segment matrix. This approach transforms the serial algorithm in time to parallel computing in space, greatly improving the solving efficiency and numerical stability. The designed controller is able to consider the issue of time delay with a linear controlling force combination and is especially effective for large time-delay conditions. A numerical example of a long-span structure was selected to demonstrate the effectiveness of the presented controller, and the time delay was found to have a significant impact on the results.
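    The 2^N idea invoked above comes from the precise time-integration literature and can be sketched independently of the controller design: the state-transition matrix exp(Aτ) is approximated by a short Taylor series over a tiny sub-step τ/2^N, then recovered by N doubling steps. Tracking the increment Ta = T - I rather than T itself preserves precision while Ta is tiny. All numbers below are illustrative, not from the paper.

```python
# 2^N precise integration sketch: Taylor-expand exp(A*dt) - I over
# dt = tau/2^N, then apply the doubling identity
#   exp(2h) - I = 2*(exp(h) - I) + (exp(h) - I)^2
# N times. Plain nested-list matrices keep the sketch dependency-free.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scale(A, c):
    return [[c * a for a in row] for row in A]

def expm_precise(A, tau, N=20, taylor_terms=4):
    n = len(A)
    dt = tau / (2 ** N)
    term = scale(A, dt)              # first Taylor term A*dt
    Ta = [row[:] for row in term]    # Ta accumulates exp(A*dt) - I
    for k in range(2, taylor_terms + 1):
        term = scale(mat_mul(term, A), dt / k)
        Ta = mat_add(Ta, term)
    for _ in range(N):               # N doubling steps back to tau
        Ta = mat_add(scale(Ta, 2), mat_mul(Ta, Ta))
    I = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    return mat_add(I, Ta)

# Scalar sanity check: exp(1) for the 1x1 system A = [[1]]
print(expm_precise([[1.0]], 1.0)[0][0])  # ~2.718281828
```

    Because the doubling steps are identical matrix operations on independent blocks, the scheme maps naturally onto the serial-in-time to parallel-in-space restructuring the abstract describes.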

  5. Parturition in gilts: duration of farrowing, birth intervals and placenta expulsion in relation to maternal, piglet and placental traits.

    PubMed

    van Rens, Birgitte T T M; van der Lende, Tette

    2004-07-01

    Large White x Meishan F2 crossbred gilts (n = 57) were observed continuously during farrowing while the placentae of their offspring were labeled in order to examine the duration of farrowing and placenta expulsion in relation to maternal-, piglet- and placental traits and the duration of birth interval in relation to birth weight, birth order and placental traits. Independently from each other, litter size, gestation length and offspring directed aggression significantly (P < 0.05) affected duration of farrowing. An increase in litter size was associated with an increase of duration of farrowing and an increase in gestation length was associated with a decrease of duration of farrowing. Aggressive gilts took longer to farrow, compared to non-aggressive ones. After taking into account litter size, gestation length and offspring directed aggression, placental thickness (i.e., placental weight corrected for placental surface area) was significantly (P < 0.05) related to duration of farrowing, i.e., litters with on average thicker placentae took longer to farrow. The latter effect is the result of the fact that individual placental thickness significantly (P < 0.01) affected individual birth interval, independent of birth weight. The piglet has to break its own membranes to be able to start its journey through the uterus towards the birth channel. Apparently, a thicker placenta offers more resistance and thus prolongs the process of birth. Independent of placental thickness, birth interval significantly (P < 0.01) decreased with an increase in birth order (first born to last born). The high variation of birth intervals for the last born piglets caused a slight increase in average birth interval for the latter piglets. 
Litters with on average more areolae per placenta took significantly (P < 0.001) less time to be born than litters with on average fewer areolae per placenta (independent of total number of piglets born and other placental traits), while birth intervals within litters were not affected by this trait. Thus, these results are probably due to a gilt trait rather than a piglet trait. Since the number of areolae represents the number of uterine glands present, the gilt trait might be uterine development. Duration of placenta expulsion significantly (P < 0.01) increased with an increase of duration of farrowing. Furthermore, the first placenta was expelled significantly (P < 0.01) earlier relative to the last piglet when duration of farrowing was protracted, while there was no relation between the time interval from first placenta to last piglet and the duration of placenta expulsion. In conclusion, the most important finding of this study is that placental thickness rather than birth weight appears to play an important role in the duration of birth intervals and, as a result, of duration of parturition in gilts.

  6. Learned Interval Time Facilitates Associate Memory Retrieval

    ERIC Educational Resources Information Center

    van de Ven, Vincent; Kochs, Sarah; Smulders, Fren; De Weerd, Peter

    2017-01-01

    The extent to which time is represented in memory remains underinvestigated. We designed a time paired associate task (TPAT) in which participants implicitly learned cue-time-target associations between cue-target pairs and specific cue-target intervals. During subsequent memory testing, participants showed increased accuracy of identifying…

  7. Proceedings of the Eleventh Annual Precise Time and Time Interval (PTTI) Application and Planning Meeting. [conference

    NASA Technical Reports Server (NTRS)

    Wardrip, S. C. (Editor)

    1979-01-01

    Thirty-eight papers are presented addressing various aspects of precise time and time interval applications. Areas discussed include: past accomplishments; state-of-the-art systems; new and useful applications, procedures, and techniques; and fruitful directions for research efforts.

  8. A Role for Memory in Prospective Timing informs Timing in Prospective Memory

    PubMed Central

    Waldum, Emily R; Sahakyan, Lili

    2014-01-01

    Time-based prospective memory (TBPM) tasks require the estimation of time in passing – known as prospective timing. Prospective timing is said to depend on an attentionally-driven internal clock mechanism, and is thought to be unaffected by memory for interval information (for reviews see, Block, Hancock, & Zakay, 2010; Block & Zakay, 1997). A prospective timing task that required a verbal estimate following the entire interval (Experiment 1) and a TBPM task that required production of a target response during the interval (Experiment 2) were used to test an alternative view that episodic memory does influence prospective timing. In both experiments, participants performed an ongoing lexical decision task of fixed duration while a varying number of songs were played in the background. Experiment 1 results revealed that verbal time estimates became longer the more songs participants remembered from the interval, suggesting that memory for interval information influences prospective time estimates. In Experiment 2, participants who were asked to perform the TBPM task without the aid of an external clock made their target responses earlier as the number of songs increased, indicating that prospective estimates of elapsed time increased as more songs were experienced. For participants who had access to a clock, changes in clock-checking coincided with the occurrence of song boundaries, indicating that participants used both song information and clock information to estimate time. Finally, ongoing task performance and verbal reports in both experiments further substantiate a role for episodic memory in prospective timing. PMID:22984950

  9. The Behavioral Economics of Choice and Interval Timing

    ERIC Educational Resources Information Center

    Jozefowiez, J.; Staddon, J. E. R.; Cerutti, D. T.

    2009-01-01

    The authors propose a simple behavioral economic model (BEM) describing how reinforcement and interval timing interact. The model assumes a Weber-law-compliant logarithmic representation of time. Associated with each represented time value are the payoffs that have been obtained for each possible response. At a given real time, the response with…

  10. Generating variable and random schedules of reinforcement using Microsoft Excel macros.

    PubMed

    Bancroft, Stacie L; Bourret, Jason C

    2008-01-01

    Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time. Generating schedule values for variable and random reinforcement schedules can be difficult. The present article describes the steps necessary to write macros in Microsoft Excel that will generate variable-ratio, variable-interval, variable-time, random-ratio, random-interval, and random-time reinforcement schedule values.
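    The schedule-generation logic that the article implements as Excel macros can be sketched in any language; the Python below is an illustrative equivalent, not the article's code, and the function names and parameters are ours. A random-ratio schedule reinforces each response with a constant probability, and a random-interval schedule makes reinforcement available at a constant probability per unit time:

```python
import random

def random_ratio_values(mean_ratio, n, seed=None):
    """Random-ratio schedule values: each response is reinforced with
    constant probability p = 1/mean_ratio, so the number of responses
    until reinforcement is geometrically distributed."""
    rng = random.Random(seed)
    p = 1.0 / mean_ratio
    values = []
    for _ in range(n):
        count = 1
        while rng.random() >= p:  # keep responding until reinforced
            count += 1
        values.append(count)
    return values

def random_interval_values(mean_interval, n, seed=None):
    """Random-interval schedule values: reinforcement becomes available
    at a constant probability per unit time, i.e. exponentially
    distributed waiting times with the requested mean."""
    rng = random.Random(seed)
    return [rng.expovariate(1.0 / mean_interval) for _ in range(n)]
```

Averaged over many generated values, both functions converge on the requested mean ratio or mean interval, which is the defining property of these schedules.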

  11. The destructive 1946 Unimak near-field tsunami: New evidence for a submarine slide source from reprocessed marine geophysical data

    USGS Publications Warehouse

    von Huene, Roland E.; Kirby, Stephen; Miller, John J.; Dartnell, Peter

    2014-01-01

    The Mw 8.6 earthquake in 1946 off the Pacific shore of Unimak Island at the end of the Alaska Peninsula generated a far-field tsunami that crossed the Pacific to Antarctica. Its tsunami magnitude, 9.3, is comparable to the 9.1 magnitude of the 2011 Tohoku tsunami. On Unimak Island's Pacific shore, a runup of 42 m destroyed the lighthouse at Scotch Cap. Elsewhere, localized tsunamis with such high runups have been interpreted as caused by large submarine landslides. However, prior to this study, no landslide large enough to generate this runup had been found in the area that is limited by the time interval between earthquake shaking and tsunami inundation at Scotch Cap. Reprocessing of a seismic reflection transect and colocated multibeam bathymetric surveys reveals a landslide block that may explain the 1946 high runup. It is seaward of Scotch Cap on the midslope terrace and within the time-limited area.

  12. Simulations of fast crab cavity failures in the high luminosity Large Hadron Collider

    NASA Astrophysics Data System (ADS)

    Yee-Rendon, Bruce; Lopez-Fernandez, Ricardo; Barranco, Javier; Calaga, Rama; Marsili, Aurelien; Tomás, Rogelio; Zimmermann, Frank; Bouly, Frédéric

    2014-05-01

    Crab cavities (CCs) are a key ingredient of the high luminosity Large Hadron Collider (HL-LHC) project for increasing the luminosity of the LHC. At KEKB, CCs have exhibited abrupt changes of phase and voltage over a time period of the order of a few LHC turns; considering the significant stored energy in the HL-LHC beam, CC failures therefore represent a serious threat to LHC machine protection. In this paper, we discuss the effect of CC voltage or phase changes over a time interval similar to, or longer than, the one needed to dump the beam. The simulations assume a quasistationary-state distribution to assess the particle losses for the HL-LHC. These distributions produce beam losses below the safe-operation threshold for Gaussian tails, while for non-Gaussian tails the losses are of the same order as the limit. Additionally, some mitigation strategies are studied for reducing the damage caused by CC failures.

  13. Digital computing cardiotachometer

    NASA Technical Reports Server (NTRS)

    Smith, H. E.; Rasquin, J. R.; Taylor, R. A. (Inventor)

    1973-01-01

    A tachometer is described which instantaneously measures heart rate. During the two intervals between three succeeding heart beats, the electronic system: (1) measures the interval by counting cycles from a fixed-frequency source occurring between the two beats; and (2) computes heart rate during the interval between the next two beats by counting the number of times that the interval count must be counted down to zero in order to equal a total count of sixty (to convert to beats per minute) times the frequency of the fixed-frequency source.
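    The countdown computation described above amounts to dividing sixty times the clock frequency by the between-beat cycle count. A minimal sketch of both forms (the function names and the 1 kHz clock in the usage note are illustrative assumptions, not taken from the patent):

```python
def heart_rate_bpm(cycle_count, clock_hz):
    """Heart rate in beats per minute from the number of fixed-frequency
    clock cycles counted between two successive beats."""
    return 60 * clock_hz / cycle_count

def heart_rate_bpm_countdown(cycle_count, clock_hz):
    """The same computation done the way the circuit does it: count how
    many times the interval count can be subtracted from sixty times
    the clock frequency."""
    remaining, rate = 60 * clock_hz, 0
    while remaining >= cycle_count:
        remaining -= cycle_count
        rate += 1
    return rate
```

For example, with an assumed 1 kHz reference clock and 800 counted cycles (a 0.8 s beat interval), both forms give 75 beats per minute.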

  14. Pharmacological interventions for acceleration of the onset time of rocuronium: a meta-analysis.

    PubMed

    Dong, Jing; Gao, Lingqi; Lu, Wenqing; Xu, Zifeng; Zheng, Jijian

    2014-01-01

    Rocuronium is an acceptable alternative for facilitating endotracheal intubation when succinylcholine is contraindicated. However, the onset time of rocuronium for a good intubation condition is still slower than that of succinylcholine. This study systematically investigated the most efficacious pharmacological interventions for accelerating the onset time of rocuronium. Medline, Embase, the Cochrane Library databases, www.clinicaltrials.gov, and the reference lists of identified papers (searched by hand) were searched for randomized controlled trials comparing drug interventions with placebo or another drug to shorten the onset time of rocuronium. Statistical analyses were performed using RevMan5.2 and ADDIS 1.16.5 software. Mean differences (MDs) with their 95% confidence intervals (95% CIs) were used to analyze the effects of drug interventions on the onset time of rocuronium. Forty-three randomized controlled trials with 2,465 patients were analyzed. The average onset time of rocuronium was 102.4±24.9 s. Priming with rocuronium [mean difference (MD) -21.0 s, 95% confidence interval (95% CI) (-27.6 to -14.3 s)], pretreatment with ephedrine [-22.3 s (-29.1 to -15.5 s)], and pretreatment with magnesium sulphate [-28.2 s (-50.9 to -5.6 s)] were all effective in reducing the onset time of rocuronium. Statistical testing of indirect comparisons showed that rocuronium priming, pretreatment with ephedrine, and pretreatment with magnesium sulphate had similar efficacy. Rocuronium priming, pretreatment with ephedrine, and pretreatment with magnesium sulphate were all effective in accelerating the onset time of rocuronium, and their efficacies were similar. Considering convenience and efficacy, priming with rocuronium is recommended for accelerating the onset time of rocuronium. However, more rigorous clinical trials are still needed to reach a more solid conclusion, given the large heterogeneity among the studies.

  15. Post-KR Delay Intervals and Mental Practice: A Test of Adams' Closed Loop Theory

    ERIC Educational Resources Information Center

    Bole, Ronald

    1976-01-01

    The present study suggests that post-KR delay interval time or activity in the interval has little to do with learning on a self-paced positioning task, though it does not rule out that on ballistic tasks or more complex nonballistic tasks a learner could make use of additional time or strategy. (MB)

  16. It's time to fear! Interval timing in odor fear conditioning in rats

    PubMed Central

    Shionoya, Kiseko; Hegoburu, Chloé; Brown, Bruce L.; Sullivan, Regina M.; Doyère, Valérie; Mouly, Anne-Marie

    2013-01-01

    Time perception is crucial to goal attainment in humans and other animals, and interval timing also guides fundamental animal behaviors. Accumulating evidence has made it clear that in associative learning, temporal relations between events are encoded, and a few studies suggest this temporal learning occurs very rapidly. Most of these studies, however, have used methodologies that do not permit investigating the emergence of this temporal learning. In the present study we monitored respiration, ultrasonic vocalization (USV) and freezing behavior in rats in order to perform fine-grain analysis of fear responses during odor fear conditioning. In this paradigm an initially neutral odor (the conditioned stimulus, CS) predicted the arrival of an aversive unconditioned stimulus (US, footshock) at a fixed 20-s time interval. We first investigated the development of a temporal pattern of responding related to CS-US interval duration. The data showed that during acquisition with odor-shock pairings, a temporal response pattern of respiration rate was observed. Changing the CS-US interval duration from 20 s to 30 s resulted in a shift of the temporal response pattern appropriate to the new duration, thus demonstrating that the pattern reflected the learning of the CS-US interval. A temporal pattern was also observed during a retention test 24 h later for both respiration and freezing measures, suggesting that the animals had stored the interval duration in long-term memory. We then investigated the role of intra-amygdalar dopaminergic transmission in interval timing. For this purpose, the D1 dopaminergic receptor antagonist SCH23390 was infused in the basolateral amygdala before conditioning. This resulted in an alteration of timing behavior, as reflected in differential temporal patterns between groups observed in a 24 h retention test off drug. The present data suggest that D1 receptor dopaminergic transmission within the amygdala is involved in temporal processing. PMID:24098277

  17. Development of visual field defect after first-detected optic disc hemorrhage in preperimetric open-angle glaucoma.

    PubMed

    Kim, Hae Jin; Song, Yong Ju; Kim, Young Kook; Jeoung, Jin Wook; Park, Ki Ho

    2017-07-01

    To evaluate functional progression in preperimetric glaucoma (PPG) with disc hemorrhage (DH) and to determine the time interval between the first-detected DH and development of glaucomatous visual field (VF) defect. A total of 87 patients who had been first diagnosed with PPG were enrolled. The medical records of PPG patients without DH (Group 1) and with DH (Group 2) were reviewed. When glaucomatous VF defect appeared, the time interval from the diagnosis of PPG to the development of VF defect was calculated and compared between the two groups. In group 2, the time intervals from the first-detected DH to VF defect of the single- and recurrent-DH were compared. Of the enrolled patients, 45 had DH in the preperimetric stage. The median time interval from the diagnosis of PPG to the development of VF defect was 73.3 months in Group 1, versus 45.4 months in Group 2 (P = 0.042). The cumulative probability of development of VF defect after diagnosis of PPG was significantly greater in Group 2 than in Group 1. The median time interval from first-detected DH to the development of VF defect was 37.8 months. The median time interval from DH to VF defect and cumulative probability of VF defect after DH did not show a statistical difference between single and recurrent-DH patients. The median time interval between the diagnosis of PPG and the development of VF defect was significantly shorter in PPG with DH. The VF defect appeared 37.8 months after the first-detected DH in PPG.

  18. Performance of informative priors skeptical of large treatment effects in clinical trials: A simulation study.

    PubMed

    Pedroza, Claudia; Han, Weilu; Thanh Truong, Van Thi; Green, Charles; Tyson, Jon E

    2018-01-01

    One of the main advantages of Bayesian analyses of clinical trials is their ability to formally incorporate skepticism about large treatment effects through the use of informative priors. We conducted a simulation study to assess the performance of informative normal, Student-t, and beta distributions in estimating relative risk (RR) or odds ratio (OR) for binary outcomes. Simulation scenarios varied the prior standard deviation (SD; level of skepticism of large treatment effects), outcome rate in the control group, true treatment effect, and sample size. We compared the priors with regard to bias, mean squared error (MSE), and coverage of 95% credible intervals. Simulation results show that the prior SD influenced the posterior to a greater degree than the particular distributional form of the prior. For RR, priors with a 95% interval of 0.50-2.0 performed well in terms of bias, MSE, and coverage under most scenarios. For OR, priors with a wider 95% interval of 0.23-4.35 had good performance. We recommend the use of informative priors that exclude implausibly large treatment effects in analyses of clinical trials, particularly for major outcomes such as mortality.
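    Skeptical priors of this kind are usually specified through their 95% interval on the ratio scale; under a normal prior centred at zero on the log scale, that interval translates directly into a prior SD. A minimal sketch of the translation (the function name is ours, not from the paper):

```python
import math

def skeptical_prior_sd(upper_limit, z=1.96):
    """SD of a normal prior on log(RR) or log(OR), centred at 0 (no
    effect), whose 95% interval on the ratio scale is
    (1/upper_limit, upper_limit)."""
    return math.log(upper_limit) / z

sd_rr = skeptical_prior_sd(2.0)   # 95% RR interval 0.50-2.0 -> SD ~0.35
sd_or = skeptical_prior_sd(4.35)  # 95% OR interval 0.23-4.35 -> SD ~0.75
```

This reproduces the two intervals quoted in the abstract: an SD of about 0.35 on the log-RR scale for the 0.50-2.0 interval, and about 0.75 on the log-OR scale for the 0.23-4.35 interval.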

  19. Effects of time interval between primary melanoma excision and sentinel node biopsy on positivity rate and survival.

    PubMed

    Oude Ophuis, Charlotte M C; van Akkooi, Alexander C J; Rutkowski, Piotr; Voit, Christiane A; Stepniak, Joanna; Erler, Nicole S; Eggermont, Alexander M M; Wouters, Michel W J M; Grünhagen, Dirk J; Verhoef, Cornelis Kees

    2016-11-01

    Sentinel node biopsy (SNB) is essential for adequate melanoma staging. Most melanoma guidelines advocate performing wide local excision and SNB as soon as possible, causing time pressure. To investigate the role of the time interval between melanoma diagnosis and SNB on sentinel node (SN) positivity and survival. This is a retrospective observational study concerning a cohort of melanoma patients from four European Organization for Research and Treatment of Cancer Melanoma Group tertiary referral centres from 1997 to 2013. A total of 4124 melanoma patients underwent SNB. Patients were selected if date of diagnosis and follow-up (FU) information were available, and SNB was performed in <180 d. A total of 3546 patients were included. Multivariable logistic regression and Cox regression analyses were performed to investigate how baseline characteristics and time interval until SNB are related to positivity rate, disease-free survival (DFS) and melanoma-specific survival (MSS). Median time interval was 43 d (interquartile range [IQR] 29-60 d), and 705 (19.9%) of 3546 patients had a positive SN. Sentinel node positivity was equal for early surgery (≤43 d) versus late surgery (>43 d): 19.7% versus 20.1% (p = 0.771). Median FU was 50 months (IQR 24-84 months). Sentinel node metastasis (hazard ratio [HR] 3.17, 95% confidence interval [95% CI] 2.53-3.97), ulceration (HR 1.99, 95% CI 1.58-2.51), Breslow thickness (HR 1.06, 95% CI 1.04-1.08), and male gender (HR 1.58, 95% CI 1.26-1.98) (all p < 0.00001) were independently associated with worse MSS and DFS; time interval was not. No effect of the time interval between melanoma diagnosis and SNB on 5-year survival or SN positivity rate was found for a time interval of up to 3 months. This information can be used to counsel patients and remove strict time limits from melanoma guidelines. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Body size and extinction risk in terrestrial mammals above the species level.

    PubMed

    Tomiya, Susumu

    2013-12-01

    Mammalian body mass strongly correlates with life history and population properties at the scale of mouse to elephant. Large body size is thus often associated with elevated extinction risk. I examined the North American fossil record (28-1 million years ago) of 276 terrestrial genera to uncover the relationship between body size and extinction probability above the species level. Phylogenetic comparative analysis revealed no correlation between sampling-adjusted durations and body masses ranging over 7 orders of magnitude, an observation that was corroborated by survival analysis. Most of the ecological and temporal groups within the data set showed the same lack of relationship. Size-biased generic extinctions do not constitute a general feature of the Holarctic mammalian faunas in the Neogene. Rather, accelerated loss of large mammals occurred during intervals that experienced combinations of regional aridification and increased biomic heterogeneity within continents. The latter phenomenon is consistent with the macroecological prediction that large geographic ranges are critical to the survival of large mammals in evolutionary time. The frequent lack of size selectivity in generic extinctions can be reconciled with size-biased species loss if extinctions of large and small mammals at the species level are often driven by ecological perturbations of different spatial and temporal scales, while those at the genus level are more synchronized in time as a result of fundamental, multiscale environmental shifts.

  1. A prospective study of the natural history of urinary incontinence in women.

    PubMed

    Hagan, Kaitlin A; Erekson, Elisabeth; Austin, Andrea; Minassian, Vatche A; Townsend, Mary K; Bynum, Julie P W; Grodstein, Francine

    2018-05-01

    Symptoms of urinary incontinence are commonly perceived to vary over time; yet, there is limited quantitative evidence regarding the natural history of urinary incontinence, especially over the long term. We sought to delineate the course of urinary incontinence symptoms over time, using 2 large cohorts of middle-aged and older women, with data collected over 10 years. We studied 9376 women from the Nurses' Health Study, age 56-81 years at baseline, and 7491 women from the Nurses' Health Study II, age 39-56 years, with incident urinary incontinence in 2002 through 2003. Urinary incontinence severity was measured by the Sandvik severity index. We tracked persistence, progression, remission, and improvement of symptoms over 10 years. We also examined risk factors for urinary incontinence progression using logistic regression models. Among women age 39-56 years, 39% had slight, 45% had moderate, and 17% had severe urinary incontinence at onset. Among women age 56-81 years, 34% had slight, 45% had moderate, and 21% had severe urinary incontinence at onset. Across ages, most women reported persistence or progression of symptoms over follow-up; few (3-11%) reported remission. However, younger women and women with less severe urinary incontinence at onset were more likely to report remission or improvement of symptoms. We found that increasing age was associated with higher odds of progression only among older women (age 75-81 vs 56-60 years; odds ratio, 1.84; 95% confidence interval, 1.51-2.25). Among all women, higher body mass index was strongly associated with progression (younger women: odds ratio, 2.37; 95% confidence interval, 2.00-2.81; body mass index ≥30 vs <25 kg/m²; older women: odds ratio, 1.93; 95% confidence interval, 1.62-2.22).
Additionally, greater physical activity was associated with lower odds of progression to severe urinary incontinence (younger women: odds ratio, 0.86; 95% confidence interval, 0.71-1.03; highest vs lowest quartile of activity; older women: odds ratio, 0.68; 95% confidence interval, 0.59-0.80). Most women with incident urinary incontinence continued to experience symptoms over 10 years; few had complete remission. Identification of risk factors for urinary incontinence progression, such as body mass index and physical activity, could be important for reducing symptoms over time. Copyright © 2018 Elsevier Inc. All rights reserved.

  2. Quantification of transuranic elements by time interval correlation spectroscopy of the detected neutrons

    PubMed

    Baeten; Bruggeman; Paepen; Carchon

    2000-03-01

    The non-destructive quantification of transuranic elements in nuclear waste management or in safeguards verifications is commonly performed by passive neutron assay techniques. To minimise the number of unknown sample-dependent parameters, Neutron Multiplicity Counting (NMC) is applied. We developed a new NMC-technique, called Time Interval Correlation Spectroscopy (TICS), which is based on the measurement of Rossi-alpha time interval distributions. Compared to other NMC-techniques, TICS offers several advantages.
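    A Rossi-alpha time-interval distribution histograms, for each detected neutron, the waiting times to all later detections that fall within a fixed window. The sketch below is a generic illustration of that construction, assuming a sorted list of detection timestamps; it is not the authors' TICS implementation:

```python
from bisect import bisect_left

def rossi_alpha_histogram(times, window, n_bins):
    """Histogram of waiting times from each detection to every later
    detection within `window` (the Rossi-alpha construction)."""
    times = sorted(times)
    bin_width = window / n_bins
    hist = [0] * n_bins
    for i, t0 in enumerate(times):
        # index of the first later detection at or beyond t0 + window
        j = bisect_left(times, t0 + window, i + 1)
        for t in times[i + 1:j]:
            # clamp guards against floating-point edge cases
            hist[min(int((t - t0) / bin_width), n_bins - 1)] += 1
    return hist
```

For correlated (fission-chain) neutrons, this histogram shows an exponential excess of short intervals over the flat accidental background, which is what NMC techniques exploit.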

  3. Evaluation of the Trail Making Test and interval timing as measures of cognition in healthy adults: comparisons by age, education, and gender.

    PubMed

    Płotek, Włodzimierz; Łyskawa, Wojciech; Kluzik, Anna; Grześkowiak, Małgorzata; Podlewski, Roland; Żaba, Zbigniew; Drobnik, Leon

    2014-02-03

    Human cognitive functioning can be assessed using different methods of testing. Age, level of education, and gender may influence the results of cognitive tests. The well-known Trail Making Test (TMT), which is often used to measure frontal lobe function, and an experimental test of Interval Timing (IT) were compared. The methods used in IT included reproduction of auditory and visual stimuli, with the subsequent production of time intervals of 1-, 2-, 5-, and 7-second durations with no pattern. Subjects included 64 healthy adult volunteers aged 18-63 (33 women, 31 men). Comparisons were made based on age, education, and gender. The TMT was performed quickly and was influenced by age, education, and gender. All reproduced visual and produced intervals were shortened, and the reproduction of auditory stimuli was more complex. Age, education, and gender have a more pronounced impact on the cognitive test than on the interval timing test. The reproduction of short auditory stimuli was more accurate in comparison to other modalities used in the IT test. Interval timing, when compared to the TMT, offers an interesting possibility for testing. Further studies are necessary to confirm these initial observations.

  4. Cox model with interval-censored covariate in cohort studies.

    PubMed

    Ahn, Soohyun; Lim, Johan; Paik, Myunghee Cho; Sacco, Ralph L; Elkind, Mitchell S

    2018-05-18

    In cohort studies the outcome is often time to a particular event, and subjects are followed at regular intervals. Periodic visits may also monitor a secondary irreversible event influencing the event of primary interest, and a significant proportion of subjects develop the secondary event over the period of follow-up. The status of the secondary event serves as a time-varying covariate, but is recorded only at the times of the scheduled visits, generating incomplete time-varying covariates. While information on a typical time-varying covariate is missing for the entire follow-up period except at the visit times, the status of the secondary event is unavailable only between visits where the status has changed, and is thus interval-censored. One may view the interval-censored covariate of secondary event status as a missing time-varying covariate, yet the missingness is partial, since partial information is provided throughout the follow-up period. The current practice of using the latest observed status produces biased estimators, and the existing missing-covariate techniques cannot accommodate the special feature of missingness due to interval censoring. To handle interval-censored covariates in the Cox proportional hazards model, we propose an available-data estimator and a doubly robust-type estimator, as well as the maximum likelihood estimator via the EM algorithm, and present their asymptotic properties. We also present practical approaches that are valid. We demonstrate the proposed methods using our motivating example from the Northern Manhattan Study. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Regression analysis of case K interval-censored failure time data in the presence of informative censoring.

    PubMed

    Wang, Peijie; Zhao, Hui; Sun, Jianguo

    2016-12-01

    Interval-censored failure time data occur in many fields such as demography, economics, medical research, and reliability, and many inference procedures for them have been developed (Sun, 2006; Chen, Sun, and Peace, 2012). However, most of the existing approaches assume that the mechanism that yields interval censoring is independent of the failure time of interest, and it is clear that this may not be true in practice (Zhang et al., 2007; Ma, Hu, and Sun, 2015). In this article, we consider regression analysis of case K interval-censored failure time data when the censoring mechanism may be related to the failure time of interest. For the problem, an estimated sieve maximum-likelihood approach is proposed for data arising from the proportional hazards frailty model, and for estimation, a two-step procedure is presented. In addition, the asymptotic properties of the proposed estimators of regression parameters are established, and an extensive simulation study suggests that the method works well. Finally, we apply the method to a set of real interval-censored data that motivated this study. © 2016, The International Biometric Society.

  6. Timing During Interruptions in Timing

    ERIC Educational Resources Information Center

    Fortin, Claudette; Bedard, Marie-Claude; Champagne, Julie

    2005-01-01

    Duration and location of breaks in time interval production were manipulated in various conditions of stimulus presentation (Experiments 1-4). Produced intervals shortened and then stabilized as break duration lengthened, suggesting that participants used the break as a preparatory period to restart timing as quickly as possible at the end of the…

  7. Lack of spacing effects during piano learning.

    PubMed

    Wiseheart, Melody; D'Souza, Annalise A; Chae, Jacey

    2017-01-01

    Spacing effects during retention of verbal information are easily obtained, and the effect size is large. Relatively little evidence exists on whether motor skill retention benefits from distributed practice, with even less evidence on complex motor skills. We taught a 17-note musical sequence on a piano to individuals without prior formal training. There were five lags between learning episodes: 0-, 1-, 5-, 10-, and 15-min. After a 5-min retention interval, participants' performance was measured using three criteria: accuracy of note playing, consistency in pressure applied to the keys, and consistency in timing. No spacing effect was found, suggesting that the effect may not always be demonstrable for complex motor skills or non-verbal abilities (timing and motor skills). Additionally, we taught short phrases from five songs, using the same set of lags and retention interval, and did not find any spacing effect for accuracy of song reproduction. Our findings indicate that although the spacing effect is one of the most robust phenomena in the memory literature (as demonstrated by verbal learning studies), the effect may vary when considered in the novel realm of complex motor skills such as piano performance.

  8. Correlations between Geomagnetic Disturbances and Field-Aligned Currents during the 22-29 July 2004 Storm Time Interval

    NASA Astrophysics Data System (ADS)

    Hood, R.; Woodroffe, J. R.; Morley, S.; Aruliah, A. L.

    2017-12-01

    Using the CHAMP fluxgate magnetometer to calculate field-aligned current (FAC) densities and magnetic latitudes, with SuperMAG ground magnetometers analogously providing ground geomagnetic disturbance (GMD) magnetic perturbations and latitudes, we probe FAC locations and strengths as predictors of GMD locations and strengths. We also study the relationships of both FACs and GMDs to solar wind drivers and global magnetospheric activity, using IMF Bz and the Sym-H index. We present an event study of the 22-29 July 2004 storm time interval, which had particularly large GMDs given its storm intensity. We find no correlation between FAC and GMD magnitudes, perhaps due to CHAMP orbit limitations or ground magnetometer coverage. There is, however, a correlation between IMF Bz and nightside GMD magnitudes, supportive of their generation via tail reconnection. IMF Bz is also correlated with dayside FAC and GMD magnetic latitudes, indicating solar wind as an initial driver. The ring current influence increases during the final storm, with improved correlations between the Sym-H index and both FAC magnetic latitudes and GMD magnitudes. Sym-H index correlations may only be valid for higher-intensity storms; a statistical analysis of many storms is needed to verify this.

  9. Application Of Pulsed Laser Holography To Nondestructive Testing Of Aircraft Structures

    NASA Astrophysics Data System (ADS)

    Fagot, Hubert; Smigielski, Paul; Arnaud, Jean-Louis

    1983-03-01

    Subsequent to laboratory tests, experiments were conducted on an aircraft undergoing maintenance in order to assess the possible uses of holographic interferometry for non-destructive testing of large aircraft structures. A double ruby laser was used, delivering two pulses with a duration of 20 ns each. The two pulses are separated by an arbitrary time interval Δt which is determined as a function of both the amplitude and frequency of the surface displacement. Shocks of the order of 100 mJ cause the structure under investigation to vibrate, the time interval Δt thereby ranging from 10 to 100 µs for a delay of a few ms after shock initiation. The method used is relatively insensitive to environmental disturbances. Although the laser delivers pulses of light of less than 100 mJ in energy, it is possible to visualize a field of 0.5 × 1 m. Some results will be reported which have been obtained at the lower surface of an aerofoil, on a wheel well and on an air-brake. Finally a brief review will be made of the improvements envisaged on both the laser and the recording method in order to obtain an operational system for holographic non-destructive testing.

  10. Joint modelling of longitudinal outcome and interval-censored competing risk dropout in a schizophrenia clinical trial

    PubMed Central

    Gueorguieva, Ralitza; Rosenheck, Robert; Lin, Haiqun

    2011-01-01

    Summary The ‘Clinical Antipsychotic Trials of Intervention Effectiveness’ study was designed to evaluate whether there were significant differences between several antipsychotic medications in effectiveness, tolerability, cost and quality of life of subjects with schizophrenia. Overall, 74% of patients discontinued the study medication for various reasons before the end of 18 months in phase I of the study. When such a large percentage of study participants fail to complete the study schedule, it is not clear whether the apparent profile in effectiveness reflects genuine changes over time or is influenced by selection bias, with participants with worse (or better) outcome values being more likely to drop out or to discontinue. To assess the effect of dropouts for different reasons on inferences, we construct a joint model for the longitudinal outcome and cause-specific dropouts that allows for interval-censored dropout times. Incorporating the information regarding the cause of dropout improves inferences and provides better understanding of the association between cause-specific dropout and the outcome process. We use simulations to demonstrate the advantages of the joint modelling approach in terms of bias and efficiency. PMID:22468033

  11. Lack of spacing effects during piano learning

    PubMed Central

    D’Souza, Annalise A.; Chae, Jacey

    2017-01-01

    Spacing effects during retention of verbal information are easily obtained, and the effect size is large. Relatively little evidence exists on whether motor skill retention benefits from distributed practice, with even less evidence on complex motor skills. We taught a 17-note musical sequence on a piano to individuals without prior formal training. There were five lags between learning episodes: 0-, 1-, 5-, 10-, and 15-min. After a 5-min retention interval, participants’ performance was measured using three criteria: accuracy of note playing, consistency in pressure applied to the keys, and consistency in timing. No spacing effect was found, suggesting that the effect may not always be demonstrable for complex motor skills or non-verbal abilities (timing and motor skills). Additionally, we taught short phrases from five songs, using the same set of lags and retention interval, and did not find any spacing effect for accuracy of song reproduction. Our findings indicate that although the spacing effect is one of the most robust phenomena in the memory literature (as demonstrated by verbal learning studies), the effect may vary when considered in the novel realm of complex motor skills such as piano performance. PMID:28800631

  12. Chaotic dynamics of large-scale double-diffusive convection in a porous medium

    NASA Astrophysics Data System (ADS)

    Kondo, Shutaro; Gotoda, Hiroshi; Miyano, Takaya; Tokuda, Isao T.

    2018-02-01

    We have studied chaotic dynamics of large-scale double-diffusive convection of a viscoelastic fluid in a porous medium from the viewpoint of dynamical systems theory. A fifth-order nonlinear dynamical system modeling the double-diffusive convection is theoretically obtained by incorporating the Darcy-Brinkman equation into transport equations through a physical dimensionless parameter representing porosity. We clearly show that the chaotic convective motion becomes much more complicated with increasing porosity. The degree of dynamic instability during chaotic convective motion is quantified by two important measures: the network entropy of the degree distribution in the horizontal visibility graph and the Kaplan-Yorke dimension in terms of Lyapunov exponents. We also present an interesting on-off intermittent phenomenon in the probability distribution of time intervals exhibiting nearly complete synchronization.
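    The network entropy mentioned above can be illustrated with a short sketch. The following code is illustrative only: the chaotic logistic map stands in for the convection time series (the paper's fifth-order model is not reproduced here). It builds the horizontal visibility graph of a series and computes the Shannon entropy of its degree distribution:

```python
import math

def hvg_degrees(series):
    """Degree of each node in the horizontal visibility graph (HVG).

    Nodes are time points; i and j are linked when every sample strictly
    between them lies below both x_i and x_j.
    """
    n = len(series)
    deg = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if all(series[k] < min(series[i], series[j]) for k in range(i + 1, j)):
                deg[i] += 1
                deg[j] += 1
    return deg

def network_entropy(degrees):
    """Shannon entropy (nats) of the degree distribution."""
    n = len(degrees)
    counts = {}
    for d in degrees:
        counts[d] = counts.get(d, 0) + 1
    return -sum((c / n) * math.log(c / n) for c in counts.values())

# Chaotic logistic-map series as a stand-in for the convection signal
x, series = 0.4, []
for _ in range(500):
    x = 4.0 * x * (1.0 - x)
    series.append(x)

H = network_entropy(hvg_degrees(series))
```

    A broader degree distribution (higher entropy) indicates more irregular dynamics, which is how the measure quantifies the growing complexity with porosity.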

  13. A Metadata-Based Approach for Analyzing UAV Datasets for Photogrammetric Applications

    NASA Astrophysics Data System (ADS)

    Dhanda, A.; Remondino, F.; Santana Quintero, M.

    2018-05-01

    This paper proposes a methodology for pre-processing and analysing Unmanned Aerial Vehicle (UAV) datasets before photogrammetric processing. In cases where images are gathered without a detailed flight plan and at regular acquisition intervals, the datasets can be quite large and time-consuming to process. This paper proposes a method to calculate the image overlap and filter out images to reduce large block sizes and speed up photogrammetric processing. The Python-based algorithm that implements this methodology leverages the metadata in each image to determine the end and side overlap of grid-based UAV flights. Using user input, the algorithm filters out images that are not needed for photogrammetric processing. The result is an algorithm that can speed up photogrammetric processing and provide valuable information to the user about the flight path.
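    The end/side-overlap computation such an algorithm performs can be sketched from first principles. The functions below use a simplified nadir, flat-terrain model; the camera geometry and spacing values are hypothetical and not taken from the paper:

```python
def ground_footprint(altitude_m, sensor_dim_mm, focal_length_mm):
    """Ground coverage of one image dimension (nadir view, flat terrain)."""
    return altitude_m * sensor_dim_mm / focal_length_mm

def overlap_fraction(footprint_m, spacing_m):
    """Fractional overlap between two adjacent image footprints."""
    return max(0.0, 1.0 - spacing_m / footprint_m)

# Hypothetical values: 80 m flying height, 13.2 x 8.8 mm sensor, 8.8 mm lens
along = ground_footprint(80, 8.8, 8.8)    # 80 m along-track footprint
across = ground_footprint(80, 13.2, 8.8)  # 120 m across-track footprint

end_overlap = overlap_fraction(along, 16)    # photos every 16 m
side_overlap = overlap_fraction(across, 36)  # flight lines 36 m apart
```

    Images whose computed overlap with their neighbours exceeds the user's target can then be dropped before bundle adjustment.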

  14. [Study on the acquiring data time and intervals for measuring performance of air cleaner on formaldehyde].

    PubMed

    Tang, Zhigang; Wang, Guifang; Xu, Dongqun; Han, Keqin; Li, Yunpu; Zhang, Aijun; Dong, Xiaoyan

    2004-09-01

    Measuring times and measuring intervals for evaluating the formaldehyde-removal performance of different types of air cleaner were established. Natural decay and formaldehyde-removal measurements were conducted in 1.5 m3 and 30 m3 test chambers. The natural decay rate was determined by acquiring formaldehyde concentration data at 15-minute intervals for 2.5 hours, and the measured decay rate by acquiring data at 5-minute intervals for 1.2 hours. When the airflow of the air cleaner is smaller than 30 m3/h, or when measuring the performance of an air-cleaning product without airflow, the 1.5 m3 test chamber can be used; in that case, both the natural decay rate and the measured decay rate are determined by acquiring formaldehyde concentration data at 8-minute intervals for 64 minutes. Different measuring times and intervals are thus required to evaluate the formaldehyde-removal performance of different types of air cleaner.
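    The decay-rate measurement described above amounts to fitting an exponential to the sampled concentrations. A minimal sketch, using synthetic data: the decay constant comes from a log-linear least-squares fit, and the clean-air delivery rate uses the standard chamber-test relation CADR = V·(k_measured − k_natural). The chamber volume and natural-decay value here are assumed for illustration:

```python
import math

def decay_constant(times_h, concentrations):
    """Least-squares slope of ln(C) vs time for C(t) = C0 * exp(-k t)."""
    n = len(times_h)
    logs = [math.log(c) for c in concentrations]
    tbar = sum(times_h) / n
    ybar = sum(logs) / n
    slope = sum((t - tbar) * (y - ybar) for t, y in zip(times_h, logs)) \
            / sum((t - tbar) ** 2 for t in times_h)
    return -slope  # k, per hour

# Synthetic data: samples every 8 min (8/60 h) for 64 min, true k = 3.0 /h
times = [i * 8 / 60 for i in range(9)]
conc = [1.0 * math.exp(-3.0 * t) for t in times]
k_measured = decay_constant(times, conc)

# Clean-air delivery rate: chamber volume x (measured minus natural decay)
V, k_natural = 1.5, 0.2               # 1.5 m3 chamber, assumed natural decay
cadr = V * (k_measured - k_natural)   # m3/h
```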

  15. Sad facial cues inhibit temporal attention: evidence from an event-related potential study.

    PubMed

    Kong, Xianxian; Chen, Xiaoqiang; Tan, Bo; Zhao, Dandan; Jin, Zhenlan; Li, Ling

    2013-06-19

    We examined the influence of different emotional cues (happy or sad) on temporal attention (short or long interval) using behavioral as well as event-related potential recordings during a Stroop task. Emotional stimuli cued short and long time intervals, inducing 'sad-short', 'sad-long', 'happy-short', and 'happy-long' conditions. Following the intervals, participants performed a numeric Stroop task. Behavioral results showed temporal attention effects in the sad-long, happy-long, and happy-short conditions, in which valid cues quickened the reaction times, but not in the sad-short condition. N2 event-related potential components showed decreased activity for sad cues at short intervals compared with long intervals, whereas happy cues showed no such difference. Taken together, these findings provide evidence that sad and happy facial cues modulate temporal attention differently. Furthermore, sad cues inhibit temporal attention, resulting in longer reaction times and decreased neural activity in the short interval by diverting more attentional resources.

  16. ELECTRICAL LOAD ANTICIPATOR AND RECORDER

    DOEpatents

    Russell, J.B.; Thomas, R.J.

    1961-07-25

    A system is described in which an indication of the prevailing energy consumption in an electrical power metering system and a projected power demand for one demand interval is provided at selected increments of time within the demand interval. Each watthour meter in the system is provided with an impulse generator that generates two impulses for each revolution of the meter disc. The pulses received from all the meters are continuously totaled and are fed to a plurality of parallel-connected gated counters. Each counter has its gate opened at a different sub-time interval during the demand interval. A multiplier is connected to each of the gated counters except the last one, and each multiplier is provided with a different multiplier constant so as to provide an estimate of the power to be drawn over the entire demand interval at the end of each of the different sub-time intervals. Means are provided for recording the outputs from the different circuits in synchronism with the actuation of each gate circuit.
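    The multiplier constants in the patent simply extrapolate the energy counted so far to the full demand interval. A minimal sketch of that projection; the interval lengths and pulse weight below are hypothetical, not taken from the patent:

```python
def projected_demand(pulses_so_far, subintervals_elapsed, subintervals_total,
                     kwh_per_pulse, interval_hours):
    """Extrapolate full-interval demand (kW) from pulses counted so far,
    assuming the consumption rate observed to date holds for the rest of
    the demand interval.
    """
    energy = pulses_so_far * kwh_per_pulse              # kWh consumed so far
    projected = energy * subintervals_total / subintervals_elapsed
    return projected / interval_hours                   # average kW

# Hypothetical 15-min demand interval split into 3 sub-intervals of 5 min;
# the multiplier constants are then 3/1 and 3/2 (the final sub-interval
# needs none, since the whole interval has elapsed).
est = projected_demand(pulses_so_far=40, subintervals_elapsed=1,
                       subintervals_total=3, kwh_per_pulse=0.01,
                       interval_hours=0.25)  # 40 pulses in the first 5 min
```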

  17. Ventral striatum lesions do not affect reinforcement learning with deterministic outcomes on slow time scales.

    PubMed

    Vicario-Feliciano, Raquel; Murray, Elisabeth A; Averbeck, Bruno B

    2017-10-01

    A large body of work has implicated the ventral striatum (VS) in aspects of reinforcement learning (RL). However, less work has directly examined the effects of lesions in the VS, or other forms of inactivation, on 2-armed bandit RL tasks. We have recently found that lesions in the VS in macaque monkeys affect learning with stochastic schedules but have minimal effects with deterministic schedules. The reasons for this are not currently clear. Because our previous work used short intertrial intervals, one possibility is that the animals were using working memory to bridge stimulus-reward associations from 1 trial to the next. In the present study, we examined learning of 60 pairs of objects, in which the animals received only 1 trial per day with each pair. The large number of object pairs and the long interval (approximately 24 hr) between trials with a given pair minimized the chances that the animals could use working memory to bridge trials. We found that monkeys with VS lesions were unimpaired relative to controls, which suggests that animals with VS lesions can still learn to select rewarded objects even when they cannot make use of working memory. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
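    The 2-armed bandit task referred to above can be simulated with a simple epsilon-greedy learner. The sketch below uses our own illustrative parameters, not the study's task code; it shows how deterministic and stochastic schedules differ only in the payout probabilities assigned to the arms:

```python
import random

def run_bandit(p_reward, trials=200, alpha=0.2, epsilon=0.1, seed=0):
    """Epsilon-greedy Q-learning on a 2-armed bandit.

    p_reward[a] is the probability that arm a pays out: (1.0, 0.0) is a
    deterministic schedule, (0.8, 0.2) a stochastic one. Returns the
    fraction of trials on which the better arm (arm 0) was chosen.
    """
    rng = random.Random(seed)
    q = [0.0, 0.0]
    correct = 0
    for _ in range(trials):
        if rng.random() < epsilon:
            a = rng.randrange(2)          # occasional exploration
        else:
            a = 0 if q[0] >= q[1] else 1  # greedy choice
        r = 1.0 if rng.random() < p_reward[a] else 0.0
        q[a] += alpha * (r - q[a])        # incremental value update
        correct += (a == 0)
    return correct / trials

det = run_bandit((1.0, 0.0))   # deterministic schedule
sto = run_bandit((0.8, 0.2))   # stochastic schedule
```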

  18. Shared and Distinct Rupture Discriminants of Small and Large Intracranial Aneurysms.

    PubMed

    Varble, Nicole; Tutino, Vincent M; Yu, Jihnhee; Sonig, Ashish; Siddiqui, Adnan H; Davies, Jason M; Meng, Hui

    2018-04-01

    Many ruptured intracranial aneurysms (IAs) are small. Clinical presentations suggest that small and large IAs could have different phenotypes. It is unknown if small and large IAs have different characteristics that discriminate rupture. We analyzed morphological, hemodynamic, and clinical parameters of 413 retrospectively collected IAs (training cohort; 102 ruptured IAs). Hierarchal cluster analysis was performed to determine a size cutoff to dichotomize the IA population into small and large IAs. We applied multivariate logistic regression to build rupture discrimination models for small IAs, large IAs, and an aggregation of all IAs. We validated the ability of these 3 models to predict rupture status in a second, independently collected cohort of 129 IAs (testing cohort; 14 ruptured IAs). Hierarchal cluster analysis in the training cohort confirmed that small and large IAs are best separated at 5 mm based on morphological and hemodynamic features (area under the curve=0.81). For small IAs (<5 mm), the resulting rupture discrimination model included undulation index, oscillatory shear index, previous subarachnoid hemorrhage, and absence of multiple IAs (area under the curve=0.84; 95% confidence interval, 0.78-0.88), whereas for large IAs (≥5 mm), the model included undulation index, low wall shear stress, previous subarachnoid hemorrhage, and IA location (area under the curve=0.87; 95% confidence interval, 0.82-0.93). The model for the aggregated training cohort retained all the parameters in the size-dichotomized models. Results in the testing cohort showed that the size-dichotomized rupture discrimination model had higher sensitivity (64% versus 29%) and accuracy (77% versus 74%), marginally higher area under the curve (0.75; 95% confidence interval, 0.61-0.88 versus 0.67; 95% confidence interval, 0.52-0.82), and similar specificity (78% versus 80%) compared with the aggregate-based model. 
Small (<5 mm) and large (≥5 mm) IAs have different hemodynamic and clinical, but not morphological, rupture discriminants. Size-dichotomized rupture discrimination models performed better than the aggregate model. © 2018 American Heart Association, Inc.
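    The area-under-the-curve figures reported above measure how well a model's score separates ruptured from unruptured IAs. AUC can be computed directly as a rank statistic; the scores below are invented purely for illustration:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic: the
    probability that a random positive (ruptured) case scores higher
    than a random negative (unruptured) one, with ties counted as 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Toy model scores: ruptured IAs should tend to score higher
ruptured = [0.9, 0.8, 0.6, 0.55]
unruptured = [0.7, 0.5, 0.4, 0.3, 0.2]
a = auc(ruptured, unruptured)
```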

  19. Evidence for a twelfth large earthquake on the southern hayward fault in the past 1900 years

    USGS Publications Warehouse

    Lienkaemper, J.J.; Williams, P.L.; Guilderson, T.P.

    2010-01-01

    We present age and stratigraphic evidence for an additional paleoearthquake at the Tyson Lagoon site. The acquisition of 19 additional radiocarbon dates and the inclusion of this additional event has resolved a large age discrepancy in our earlier earthquake chronology. The age of event E10 was previously poorly constrained, thus increasing the uncertainty in the mean recurrence interval (RI), a critical factor in seismic hazard evaluation. Reinspection of many trench logs revealed substantial evidence suggesting that an additional earthquake occurred between E10 and E9 within unit u45. Strata in older u45 are faulted in the main fault zone and overlain by scarp colluvium in two locations. We conclude that an additional surface-rupturing event (E9.5) occurred between E9 and E10. Since 91 A.D. (±40 yr, 1σ), 11 paleoearthquakes preceded the M 6.8 earthquake in 1868, yielding a mean RI of 161 ± 65 yr (1σ, standard deviation of recurrence intervals). However, the standard error of the mean (SEM) is well determined at ±10 yr. Since ~1300 A.D., the mean rate has increased slightly, but is indistinguishable from the overall rate within the uncertainties. Recurrence for the 12-event sequence seems fairly regular: the coefficient of variation is 0.40, and it yields a 30-yr earthquake probability of 29%. The apparent regularity in timing implied by this earthquake chronology lends support for the use of time-dependent renewal models rather than assuming a random process to forecast earthquakes, at least for the southern Hayward fault.
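    The 30-yr probability quoted above comes from a time-dependent renewal calculation. As an illustration only (the abstract does not specify the authors' renewal distribution; a lognormal is one common choice, so the result below is close to, but not exactly, their 29%), the conditional probability given the mean RI, coefficient of variation, and time elapsed since the 1868 rupture can be computed as:

```python
import math

def lognormal_cdf(t, mu, sigma):
    """CDF of a lognormal distribution with log-space mean mu, sd sigma."""
    return 0.5 * (1.0 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2.0))))

def conditional_prob(mean_ri, cv, elapsed, horizon):
    """P(event within `horizon` yr | quiet for `elapsed` yr), lognormal renewal."""
    sigma2 = math.log(1.0 + cv ** 2)        # log-space variance from the CV
    sigma = math.sqrt(sigma2)
    mu = math.log(mean_ri) - sigma2 / 2.0   # chosen so the mean equals mean_ri
    f_now = lognormal_cdf(elapsed, mu, sigma)
    f_later = lognormal_cdf(elapsed + horizon, mu, sigma)
    return (f_later - f_now) / (1.0 - f_now)

# Mean RI = 161 yr and CV = 0.40 from the abstract; elapsed time since the
# 1868 rupture is measured to the 2010 publication year.
p30 = conditional_prob(mean_ri=161.0, cv=0.40, elapsed=2010 - 1868, horizon=30.0)
```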

  20. Contributors to the Excess Stroke Mortality in Rural Areas in the United States.

    PubMed

    Howard, George; Kleindorfer, Dawn O; Cushman, Mary; Long, D Leann; Jasne, Adam; Judd, Suzanne E; Higginbotham, John C; Howard, Virginia J

    2017-07-01

    Stroke mortality is 30% higher in the rural United States. This could be because of either higher incidence or higher case fatality from stroke in rural areas. The urban-rural status of 23 280 stroke-free participants recruited between 2003 and 2007 in the REGARDS study (Reasons for Geographic and Racial Differences in Stroke) was classified using the Rural-Urban Commuting Area scheme as residing in urban, large rural town/city, or small rural town or isolated areas. The risk of incident stroke was assessed using proportional hazards analysis, and case fatality (death within 30 days of stroke) was assessed using logistic regression. Models were adjusted for demographics, traditional stroke risk factors, and measures of socioeconomic status. After adjustment for demographic factors and relative to urban areas, stroke incidence was 1.23-times higher (95% confidence interval, 1.01-1.51) in large rural towns/cities and 1.30-times higher (95% confidence interval, 1.03-1.62) in small rural towns or isolated areas. Adjustment for risk factors and socioeconomic status only modestly attenuated this association, and the association became marginally nonsignificant (P=0.071). There was no association of rural-urban status with case fatality (P>0.47). The higher stroke mortality in rural regions seemed to be attributable to higher stroke incidence rather than case fatality. A higher prevalence of risk factors and lower socioeconomic status only modestly contributed to the increased risk of incident stroke in rural areas. There was no evidence of higher case fatality in rural areas. © 2017 American Heart Association, Inc.
