Sample records for defined time intervals

  1. Buffered coscheduling for parallel programming and enhanced fault tolerance

    DOEpatents

    Petrini, Fabrizio [Los Alamos, NM]; Feng, Wu-chun [Los Alamos, NM]

    2006-01-31

    A computer-implemented method schedules processor jobs on a network of parallel machine processors or distributed system processors. Control information communications generated by each process performed by each processor during a defined time interval are accumulated in buffers, where adjacent time intervals are separated by strobe intervals for a global exchange of control information. A global exchange of the control information communications at the end of each defined time interval is performed during an intervening strobe interval, so that each processor is informed by all of the other processors of the number of incoming jobs to be received in a subsequent time interval. The buffered coscheduling method of this invention also enhances the fault tolerance of a network of parallel machine processors or distributed system processors.

  2. Spectral analysis of electrocardiographic RR intervals to indicate atrial fibrillation

    NASA Astrophysics Data System (ADS)

    Nuryani, Nuryani; Satrio Nugroho, Anto

    2017-11-01

    Atrial fibrillation is a serious heart disease associated with an increased risk of death, so early detection of atrial fibrillation is necessary. We investigated the spectral pattern of the electrocardiogram in relation to atrial fibrillation. The electrocardiogram feature used is the RR interval, the time interval between two consecutive R peaks. A series of RR intervals in a time segment is transformed into the frequency domain, and the frequency components are examined to find those significantly associated with atrial fibrillation. A segment is labeled as atrial fibrillation or normal according to a defined number of atrial fibrillation RR intervals in the segment. Using clinical data from 23 patients with atrial fibrillation, we find that the frequency components can be used to indicate atrial fibrillation.
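The spectral approach described above can be sketched in a few lines: resample the irregularly spaced RR series onto a uniform time grid, then take its Fourier transform. This is a generic illustration, not the authors' pipeline; the resampling rate `fs` and the function name are assumptions.

```python
import numpy as np

def rr_spectrum(rr_intervals, fs=4.0):
    """Resample an RR-interval series to a uniform rate and return
    its one-sided amplitude spectrum.

    rr_intervals : RR intervals in seconds (irregularly spaced in time).
    fs           : resampling frequency in Hz (an illustrative choice).
    """
    rr = np.asarray(rr_intervals, dtype=float)
    # Beat times are the cumulative sum of the RR intervals.
    t = np.cumsum(rr)
    # Interpolate onto a uniform time grid so the FFT is meaningful.
    t_uniform = np.arange(t[0], t[-1], 1.0 / fs)
    rr_uniform = np.interp(t_uniform, t, rr)
    # Remove the mean so the DC component does not dominate.
    rr_uniform -= rr_uniform.mean()
    spectrum = np.abs(np.fft.rfft(rr_uniform))
    freqs = np.fft.rfftfreq(len(rr_uniform), d=1.0 / fs)
    return freqs, spectrum
```

The frequency bins returned could then be screened for components that discriminate atrial-fibrillation segments from normal ones.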

  3. Precise Interval Timer for Software Defined Radio

    NASA Technical Reports Server (NTRS)

    Pozhidaev, Aleksey (Inventor)

    2014-01-01

    A precise digital fractional interval timer for software-defined radios that vary their waveform on a packet-by-packet basis. The timer accommodates variable-length preambles in the RF packet and allows the boundaries of the TDMA (Time Division Multiple Access) slots of an SDR receiver to be adjusted based on reception of the RF packet of interest.

  4. VARIABLE TIME-INTERVAL GENERATOR

    DOEpatents

    Gross, J.E.

    1959-10-31

    This patent relates to a pulse generator and more particularly to a time-interval generator wherein the time interval between pulses is precisely determined. The variable time-interval generator comprises two oscillators, one having a variable-frequency output and the other a fixed-frequency output. A frequency divider is connected to the variable oscillator for dividing its frequency by a selected factor, and a counter counts the periods of the fixed oscillator occurring during a cycle of the divided frequency of the variable oscillator. This defines the period of the variable oscillator in terms of that of the fixed oscillator. A circuit is provided for selecting as a time interval a predetermined number of periods of the variable oscillator. The output of the generator consists of a first pulse produced by a trigger circuit at the start of the time interval and a second pulse, produced by the same trigger circuit, marking its end.
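The patent's counting scheme reduces to simple arithmetic: the number of fixed-oscillator periods that fit in one cycle of the divided variable-oscillator output. A minimal sketch under ideal, noise-free conditions (function and parameter names are illustrative, not from the patent):

```python
def measure_period_counts(f_fixed_hz, f_variable_hz, divide_by):
    """Count how many fixed-oscillator periods elapse during one cycle
    of the divided variable-oscillator output. Real hardware counts
    clock edges; this float version only illustrates the arithmetic.
    """
    divided_freq = f_variable_hz / divide_by     # output of the divider
    divided_period = 1.0 / divided_freq          # one cycle to count over
    fixed_period = 1.0 / f_fixed_hz              # resolution of the count
    return int(divided_period / fixed_period)

# e.g. a 1 MHz fixed clock counting a 10 kHz variable clock divided by 10
# accumulates 1 MHz / (10 kHz / 10) = 1000 counts per divided cycle.
```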

  5. Analysis of single ion channel data incorporating time-interval omission and sampling

    PubMed Central

    The, Yu-Kai; Timmer, Jens

    2005-01-01

    Hidden Markov models are widely used to describe single-channel currents from patch-clamp experiments. The inevitable anti-aliasing filter limits the time resolution of the measurements, so the standard hidden Markov model is no longer adequate. The notion of time-interval omission, in which brief events are not detected, has been introduced. The exact solutions developed for this problem do not take into account that the measured intervals are limited by the sampling time. In this case the dead-time that specifies the minimal detectable interval length is not defined unambiguously. We show that a wrong choice of the dead-time leads to considerably biased estimates, and we present the appropriate equations to describe sampled data. PMID:16849220

  6. Finding Spatio-Temporal Patterns in Large Sensor Datasets

    ERIC Educational Resources Information Center

    McGuire, Michael Patrick

    2010-01-01

    Spatial or temporal data mining tasks are performed in the context of the relevant space, defined by a spatial neighborhood, and the relevant time period, defined by a specific time interval. Furthermore, when mining large spatio-temporal datasets, interesting patterns typically emerge where the dataset is most dynamic. This dissertation is…

  7. Ohio River Denial as a Transportation Corridor and Its Economic Impacts on the Energy Industry

    DTIC Science & Technology

    2009-03-01

    of freight transport market demand elasticities, and mode choice probability elasticities of rail and full truck load carriers in the intercity ...obvious that under real conditions, travel demand would alter if time and duration of delay could be anticipated. First, cost of delay was defined...replication was expressed in terms of duration of the simulation interval, simulated travel time for this interval time, and expected travel time. Then

  8. Modified stochastic fragmentation of an interval as an ageing process

    NASA Astrophysics Data System (ADS)

    Fortin, Jean-Yves

    2018-02-01

    We study a stochastic model based on modified fragmentation of a finite interval. The mechanism consists of cutting the interval at a random location and substituting a unique fragment on the right of the cut to regenerate and preserve the interval length. This leads to a set of segments of random sizes, with the accumulation of small fragments near the origin. This model is an example of record dynamics, with the presence of ‘quakes’ and slow dynamics. The fragment size distribution is a universal inverse power law with logarithmic corrections. The exact distribution of the fragment number as a function of time is simply related to the unsigned Stirling numbers of the first kind. Two-time correlation functions are defined and computed exactly. They satisfy scaling relations and exhibit aging phenomena. In particular, the probability that the same number of fragments is found at two different times t > s is asymptotically equal to [4π log(s)]^(-1/2) when s ≫ 1 and the ratio t/s is fixed, in agreement with the numerical simulations. The same process with a reset impedes the aging phenomenon beyond a typical time scale defined by the reset parameter.
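The fragmentation rule is easy to simulate directly: cut at a uniform random point and merge everything to the right of the cut back into a single fragment. The Monte-Carlo sketch below illustrates that rule only; it does not reproduce the paper's exact analysis, and all names are illustrative.

```python
import random

def fragment_counts(steps, trials, seed=0):
    """Mean number of fragments after `steps` cuts of the modified
    fragmentation process: at each step cut [0, 1] at a uniform random
    point x and replace everything right of x by the fragment [x, 1].
    """
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        cuts = []                      # surviving cut positions
        for _ in range(steps):
            x = rng.random()
            # Cuts to the right of x disappear into the new fragment.
            cuts = [c for c in cuts if c < x]
            cuts.append(x)
        total += len(cuts) + 1         # k cuts -> k + 1 fragments
    return total / trials
```

Only cuts that land left of all later cuts survive, so small fragments pile up near the origin and the fragment count grows slowly (logarithmically) with time, consistent with the record-dynamics picture above.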

  9. Burst switching without guard interval in all-optical software-defined star intra-data center network

    NASA Astrophysics Data System (ADS)

    Ji, Philip N.; Wang, Ting

    2014-02-01

    Optical switching has been introduced in intra-data center networks (DCNs) to increase capacity and to reduce power consumption. Recently we proposed a star MIMO OFDM-based all-optical DCN with burst switching and software-defined networking. Here, we introduce the control procedure for the star DCN in detail for the first time. The timing, signaling, and operation are described for each step to achieve efficient bandwidth resource utilization. Furthermore, guidelines for selecting the burst assembling period that allows burst switching without a guard interval are discussed. The star all-optical DCN offers flexible and efficient control for next-generation data center applications.

  10. Optical timing receiver for the NASA laser ranging system. Part 2: High precision time interval digitizer

    NASA Technical Reports Server (NTRS)

    Leskovar, B.; Turko, B.

    1977-01-01

    The development of a high precision time interval digitizer is described. The time digitizer is a 10-psec-resolution stopwatch covering a range of up to 340 msec. The measured time interval is determined as the separation between the leading edges of a pair of pulses applied externally to the start input and the stop input of the digitizer. Employing an interpolation technique and a 50 MHz high-precision master oscillator, the equivalent of a 100 GHz clock frequency standard is achieved. Absolute accuracy and stability of the digitizer are determined by the external 50 MHz master oscillator, which serves as a standard time marker. The start and stop pulses are fast 1-nsec rise-time signals, conforming to the Nuclear Instrument Module standard, detected by means of tunnel diode discriminators. The firing levels of the discriminators define the start and stop points between which the time interval is digitized.
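The interpolation idea can be illustrated as arithmetic: the measured interval is a whole number of 20-ns periods of the 50 MHz clock plus fractional corrections from start and stop interpolators. This is a hedged sketch; the names and the sign convention are assumptions, not the digitizer's actual design.

```python
def digitized_interval(coarse_counts, start_fraction, stop_fraction,
                       clock_hz=50e6):
    """Time-interval digitizer arithmetic sketch.

    coarse_counts  : whole master-clock periods counted between start
                     and stop (20 ns each at 50 MHz).
    start_fraction : fraction of a clock period measured by the start
                     interpolator (assumed convention).
    stop_fraction  : same for the stop interpolator.
    """
    period = 1.0 / clock_hz      # 20 ns per coarse count at 50 MHz
    return (coarse_counts + start_fraction - stop_fraction) * period
```

With interpolators resolving 1/2000 of a clock period, each 20-ns count subdivides into 10-ps steps, which is the "equivalent 100 GHz clock" mentioned above.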

  11. Demodulation of OFDM Signals in the Presence of Deep Fading Channels and Signal Clipping

    DTIC Science & Technology

    2012-06-01

    s(t) = Re{x(t) e^(j2πF_C t)} (1), where F_C is the carrier frequency. The complex baseband signal is subdivided into time intervals of length T_Symbol. Within the m-th time interval the signal x(t) is defined as x(mT_Symbol + t) = Σ_k c_k^m e^(j2πkΔF t), 0 < t ≤ T_Symbol (2), where T_Symbol is the symbol length and consists of the guard interval T_g and data interval T_b: T_Symbol = T_g + T_b (3). In order to guarantee the subcarriers

  12. System implications of the ambulance arrival-to-patient contact interval on response interval compliance.

    PubMed

    Campbell, J P; Gratton, M C; Salomone, J A; Lindholm, D J; Watson, W A

    1994-01-01

    In some emergency medical services (EMS) system designs, response time intervals are mandated with monetary penalties for noncompliance. These times are set with the goal of providing rapid, definitive patient care. The time interval of vehicle at scene-to-patient access (VSPA) has been measured, but its effect on response time interval compliance has not been determined. To determine the effect of the VSPA interval on the mandated code 1 (< 9 min) and code 2 (< 13 min) response time interval compliance in an urban, public-utility model system. A prospective, observational study used independent third-party riders to collect the VSPA interval for emergency life-threatening (code 1) and emergency nonlife-threatening (code 2) calls. The VSPA interval was added to the 9-1-1 call-to-dispatch and vehicle dispatch-to-scene intervals to determine the total time interval from call received until paramedic access to the patient (9-1-1 call-to-patient access). Compliance with the mandated response time intervals was determined using the traditional time intervals (9-1-1 call-to-scene) plus the VSPA time intervals (9-1-1 call-to-patient access). Chi-square was used to determine statistical significance. Of the 216 observed calls, 198 were matched to the traditional time intervals. Sixty-three were code 1, and 135 were code 2. Of the code 1 calls, 90.5% were compliant using 9-1-1 call-to-scene intervals dropping to 63.5% using 9-1-1 call-to-patient access intervals (p < 0.0005). Of the code 2 calls, 94.1% were compliant using 9-1-1 call-to-scene intervals. Compliance decreased to 83.7% using 9-1-1 call-to-patient access intervals (p = 0.012). The addition of the VSPA interval to the traditional time intervals impacts system response time compliance. Using 9-1-1 call-to-scene compliance as a basis for measuring system performance underestimates the time for the delivery of definitive care. This must be considered when response time interval compliances are defined.
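The study's compliance comparison reduces to counting calls under the mandated limit with and without the VSPA interval added. A sketch with made-up numbers, not the study's data (function and parameter names are illustrative):

```python
def compliance(call_to_scene, vspa, limit):
    """Fraction of calls meeting a response-time limit, computed both
    the traditional way (9-1-1 call-to-scene) and with the
    vehicle-at-scene-to-patient-access (VSPA) interval added.
    Times are in minutes.
    """
    n = len(call_to_scene)
    trad = sum(t < limit for t in call_to_scene) / n
    full = sum(t + v < limit for t, v in zip(call_to_scene, vspa)) / n
    return trad, full

# e.g. three code-1 calls against the 9-minute limit:
# compliance([8.0, 8.5, 10.0], [1.5, 0.2, 0.1], 9) -> (2/3, 1/3)
```

As in the study, adding even short VSPA intervals pushes calls near the limit out of compliance, which is why call-to-scene compliance overstates system performance.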

  13. Timescale- and Sensory Modality-Dependency of the Central Tendency of Time Perception.

    PubMed

    Murai, Yuki; Yotsumoto, Yuko

    2016-01-01

    When individuals are asked to reproduce intervals of stimuli that are intermixedly presented at various times, longer intervals are often underestimated and shorter intervals overestimated. This phenomenon may be attributed to the central tendency of time perception, and suggests that our brain optimally encodes a stimulus interval based on current stimulus input and prior knowledge of the distribution of stimulus intervals. Two distinct systems are thought to be recruited in the perception of sub- and supra-second intervals. Sub-second timing is subject to local sensory processing, whereas supra-second timing depends on more centralized mechanisms. To clarify the factors that influence time perception, the present study investigated how both sensory modality and timescale affect the central tendency. In Experiment 1, participants were asked to reproduce sub- or supra-second intervals, defined by visual or auditory stimuli. In the sub-second range, the magnitude of the central tendency was significantly larger for visual intervals compared to auditory intervals, while visual and auditory intervals exhibited a correlated and comparable central tendency in the supra-second range. In Experiment 2, the ability to discriminate sub-second intervals in the reproduction task was controlled across modalities by using an interval discrimination task. Even when the ability to discriminate intervals was controlled, visual intervals exhibited a larger central tendency than auditory intervals in the sub-second range. In addition, the magnitude of the central tendency for visual and auditory sub-second intervals was significantly correlated. These results suggest that a common modality-independent mechanism is responsible for the supra-second central tendency, and that both the modality-dependent and modality-independent components of the timing system contribute to the central tendency in the sub-second range.

  14. Musical training generalises across modalities and reveals efficient and adaptive mechanisms for reproducing temporal intervals.

    PubMed

    Aagten-Murphy, David; Cappagli, Giulia; Burr, David

    2014-03-01

    Expert musicians are able to time their actions accurately and consistently during a musical performance. We investigated how musical expertise influences the ability to reproduce auditory intervals and how this generalises across different techniques and sensory modalities. We first compared various reproduction strategies and interval lengths, to examine the effects in general and to optimise experimental conditions for testing the effect of music, and found that the effects were robust and consistent across different paradigms. Focussing on a 'ready-set-go' paradigm, subjects reproduced time intervals drawn from distributions varying in total length (176, 352 or 704 ms) or in the number of discrete intervals within the total length (3, 5, 11 or 21 discrete intervals). Overall, Musicians performed more veridically than Non-Musicians, and all subjects reproduced auditory-defined intervals more accurately than visually-defined intervals. However, Non-Musicians, particularly with visual stimuli, consistently exhibited a substantial and systematic regression towards the mean interval. When subjects judged intervals from distributions of longer total length they tended to regress more towards the mean, while the ability to discriminate between discrete intervals within the distribution had little influence on subject error. These results are consistent with a Bayesian model that minimises reproduction errors by incorporating a central-tendency prior weighted by the subject's own temporal precision relative to the current distribution of intervals. Finally, a strong correlation was observed between duration of formal musical training and total reproduction errors in both modalities (accounting for 30% of the variance). Taken together these results demonstrate that formal musical training improves temporal reproduction, and that this improvement transfers from audition to vision. They further demonstrate the flexibility of sensorimotor mechanisms in adapting to different task conditions to minimise temporal estimation errors.
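The Bayesian central-tendency model invoked in the two abstracts above combines the sensed interval with the mean of the current interval distribution, weighted by their relative precisions. A minimal sketch of that standard Gaussian posterior-mean computation (variable names are assumptions, not the authors'):

```python
def bayes_reproduction(sample, prior_mean, sensory_var, prior_var):
    """Reproduced interval as a precision-weighted average of the
    sensed interval (`sample`) and the prior mean of the interval
    distribution. Noisier sensing (larger sensory_var) pulls the
    estimate harder towards the prior mean, i.e. central tendency.
    """
    w = prior_var / (prior_var + sensory_var)   # weight on the sample
    return w * sample + (1 - w) * prior_mean
```

With precise sensing (small `sensory_var`, e.g. musicians or auditory intervals) the reproduction tracks the sample; with imprecise sensing (e.g. visual sub-second intervals) it regresses towards the mean, matching the pattern both studies report.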

  15. Atlas of interoccurrence intervals for selected thresholds of daily precipitation in Texas

    USGS Publications Warehouse

    Asquith, William H.; Roussel, Meghan C.

    2003-01-01

    A Poisson process model is used to define the distribution of interoccurrence intervals of daily precipitation in Texas. A precipitation interoccurrence interval is the time period between two successive rainfall events, where a rainfall event is defined as daily precipitation equaling or exceeding a specified depth threshold. Ten precipitation thresholds are considered: 0.05, 0.10, 0.25, 0.50, 0.75, 1.0, 1.5, 2.0, 2.5, and 3.0 inches. Site-specific mean interoccurrence intervals and ancillary statistics are presented for each threshold and for each of 1,306 National Weather Service daily precipitation gages. Maps depicting the spatial variation across Texas of the mean interoccurrence interval for each threshold are presented. The percent change from the statewide standard deviation of the interoccurrence intervals to the root-mean-square error ranges in magnitude from -24 percent for the 0.05-inch threshold to -60 percent for the 2.0-inch threshold. Because of this substantial negative percent change, the maps are considered more reliable estimators of the mean interoccurrence interval for most locations in Texas than the statewide mean values.
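Under the Poisson-process view, the mean interoccurrence interval at a site is simply the reciprocal of the daily exceedance rate. A sketch on illustrative data, not the report's gage records:

```python
import numpy as np

def mean_interoccurrence(daily_depths, threshold):
    """Mean interoccurrence interval in days for a precipitation
    threshold: the reciprocal of the fraction of days on which the
    daily depth equals or exceeds the threshold.
    """
    depths = np.asarray(daily_depths, dtype=float)
    rate = np.mean(depths >= threshold)      # exceedances per day
    return np.inf if rate == 0 else 1.0 / rate

# e.g. 3 exceedances in 10 days -> a mean interval of 10/3 days.
```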

  16. Temporal Ventriloquism in a Purely Temporal Context

    ERIC Educational Resources Information Center

    Hartcher-O'Brien, Jessica; Alais, David

    2011-01-01

    This study examines how audiovisual signals are combined in time for a temporal analogue of the ventriloquist effect in a purely temporal context, that is, no spatial grounding of signals or other spatial facilitation. Observers were presented with two successive intervals, each defined by a 1250-ms tone, and indicated in which interval a brief…

  17. Fast temporal neural learning using teacher forcing

    NASA Technical Reports Server (NTRS)

    Toomarian, Nikzad (Inventor); Bahren, Jacob (Inventor)

    1992-01-01

    A neural network is trained to output a time dependent target vector defined over a predetermined time interval in response to a time dependent input vector defined over the same time interval by applying corresponding elements of the error vector, or difference between the target vector and the actual neuron output vector, to the inputs of corresponding output neurons of the network as corrective feedback. This feedback decreases the error and quickens the learning process, so that a much smaller number of training cycles is required to complete the learning process. A conventional gradient descent algorithm is employed to update the neural network parameters at the end of the predetermined time interval. The foregoing process is repeated in repetitive cycles until the actual output vector corresponds to the target vector. In the preferred embodiment, as the overall error of the neural network output decreases during successive training cycles, the portion of the error fed back to the output neurons is decreased accordingly, allowing the network to learn with greater freedom from teacher forcing as the network parameters converge to their optimum values. The invention may also be used to train a neural network with stationary training and target vectors.
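The corrective-feedback idea can be sketched independently of any particular network: a fraction of the output error is added back to the output units, with the fraction shrinking as training converges. The function name, signature, and exact feedback form below are illustrative, not the patent's claims.

```python
def teacher_forced_step(output, target, beta):
    """One teacher-forcing correction: move each output element a
    fraction `beta` of the way towards its target. beta near 1 means
    strong forcing early in training; beta is decreased towards 0 as
    the overall error shrinks, freeing the network from the teacher.
    """
    return [o + beta * (t - o) for o, t in zip(output, target)]
```

With `beta = 1` the output is clamped to the target (full teacher forcing); with `beta = 0` the network runs free, matching the annealing schedule described above.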

  18. Fast temporal neural learning using teacher forcing

    NASA Technical Reports Server (NTRS)

    Toomarian, Nikzad (Inventor); Bahren, Jacob (Inventor)

    1995-01-01

    A neural network is trained to output a time dependent target vector defined over a predetermined time interval in response to a time dependent input vector defined over the same time interval by applying corresponding elements of the error vector, or difference between the target vector and the actual neuron output vector, to the inputs of corresponding output neurons of the network as corrective feedback. This feedback decreases the error and quickens the learning process, so that a much smaller number of training cycles is required to complete the learning process. A conventional gradient descent algorithm is employed to update the neural network parameters at the end of the predetermined time interval. The foregoing process is repeated in repetitive cycles until the actual output vector corresponds to the target vector. In the preferred embodiment, as the overall error of the neural network output decreases during successive training cycles, the portion of the error fed back to the output neurons is decreased accordingly, allowing the network to learn with greater freedom from teacher forcing as the network parameters converge to their optimum values. The invention may also be used to train a neural network with stationary training and target vectors.

  19. Temporal Structure of Volatility Fluctuations

    NASA Astrophysics Data System (ADS)

    Wang, Fengzhong; Yamasaki, Kazuko; Stanley, H. Eugene; Havlin, Shlomo

    Volatility fluctuations are of great importance for the study of financial markets, and the temporal structure is an essential feature of fluctuations. To explore the temporal structure, we employ a new approach based on the return interval, which is defined as the time interval between two successive volatility values that are above a given threshold. We find that the distribution of the return intervals follows a scaling law over a wide range of thresholds, and over a broad range of sampling intervals. Moreover, this scaling law is universal for stocks of different countries, for commodities, for interest rates, and for currencies. However, further and more detailed analysis of the return intervals shows some systematic deviations from the scaling law. We also demonstrate a significant memory effect in the time organization of the return intervals. We find that the distribution of return intervals is strongly related to the correlations in the volatility.
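The return-interval definition used throughout these volatility studies is straightforward to compute from a series:

```python
import numpy as np

def return_intervals(volatility, threshold):
    """Return intervals: elapsed times (in sampling units) between
    successive volatility values that exceed a given threshold.
    """
    v = np.asarray(volatility, dtype=float)
    idx = np.flatnonzero(v > threshold)   # indices of exceedances
    return np.diff(idx)                   # gaps between them

# e.g. return_intervals([0.1, 2.0, 0.3, 0.2, 2.5, 2.1], 1.0)
# has exceedances at indices 1, 4, 5 -> intervals [3, 1]
```

Scaling analyses like those above then rescale these intervals by their mean for each threshold and compare the resulting distributions.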

  20. 49 CFR 572.133 - Neck assembly and test procedure.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... pendulum's longitudinal centerline between 77 degrees and 91 degrees. During the time interval while the... respect to the pendulum's longitudinal centerline between 99 degrees and 114 degrees. During the time... force to occipital condyle. (3) Time-zero is defined as the time of initial contact between the pendulum...

  1. 49 CFR 572.133 - Neck assembly and test procedure.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... pendulum's longitudinal centerline between 77 degrees and 91 degrees. During the time interval while the... respect to the pendulum's longitudinal centerline between 99 degrees and 114 degrees. During the time... force to occipital condyle. (3) Time-zero is defined as the time of initial contact between the pendulum...

  2. 49 CFR 572.133 - Neck assembly and test procedure.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... pendulum's longitudinal centerline between 77 degrees and 91 degrees. During the time interval while the... respect to the pendulum's longitudinal centerline between 99 degrees and 114 degrees. During the time... force to occipital condyle. (3) Time-zero is defined as the time of initial contact between the pendulum...

  3. 49 CFR 572.133 - Neck assembly and test procedure.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... pendulum's longitudinal centerline between 77 degrees and 91 degrees. During the time interval while the... respect to the pendulum's longitudinal centerline between 99 degrees and 114 degrees. During the time... force to occipital condyle. (3) Time-zero is defined as the time of initial contact between the pendulum...

  4. 49 CFR 572.133 - Neck assembly and test procedure.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... pendulum's longitudinal centerline between 77 degrees and 91 degrees. During the time interval while the... respect to the pendulum's longitudinal centerline between 99 degrees and 114 degrees. During the time... force to occipital condyle. (3) Time-zero is defined as the time of initial contact between the pendulum...

  5. Filling the blanks in temporal intervals: the type of filling influences perceived duration and discrimination performance

    PubMed Central

    Horr, Ninja K.; Di Luca, Massimiliano

    2015-01-01

    In this work we investigate how judgments of perceived duration are influenced by the properties of the signals that define the intervals. Participants compared two auditory intervals that could be any combination of the following four types: intervals filled with continuous tones (filled intervals), intervals filled with regularly-timed short tones (isochronous intervals), intervals filled with irregularly-timed short tones (anisochronous intervals), and intervals demarcated by two short tones (empty intervals). Results indicate that the type of intervals to be compared affects discrimination performance and induces distortions in perceived duration. In particular, we find that duration judgments are most precise when comparing two isochronous and two continuous intervals, while the comparison of two anisochronous intervals leads to the worst performance. Moreover, we determined that the magnitude of the distortions in perceived duration (an effect akin to the filled duration illusion) is higher for tone sequences (no matter whether isochronous or anisochronous) than for continuous tones. Further analysis of how duration distortions depend on the type of filling suggests that distortions are not only due to the perceived duration of the two individual intervals, but they may also be due to the comparison of two different filling types. PMID:25717310

  6. Dynamic MRI for distinguishing high-flow from low-flow peripheral vascular malformations.

    PubMed

    Ohgiya, Yoshimitsu; Hashimoto, Toshi; Gokan, Takehiko; Watanabe, Shouji; Kuroda, Masayoshi; Hirose, Masanori; Matsui, Seishi; Nobusawa, Hiroshi; Kitanosono, Takashi; Munechika, Hirotsugu

    2005-11-01

    The purpose of our study was to assess the usefulness of dynamic MRI in distinguishing high-flow vascular malformations from low-flow vascular malformations, which do not need angiography for treatment. Between September 2001 and January 2003, 16 patients who underwent conventional and dynamic MRI had peripheral vascular malformations (six high- and 10 low-flow). The temporal resolution of dynamic MRI was 5 sec. Time intervals between beginning of enhancement of an arterial branch in the vicinity of a lesion in the same slice and the onset of enhancement in the lesion were calculated. We defined these time intervals as "artery-lesion enhancement time." Time intervals between the onset of enhancement in the lesion and the time of the maximal percentage of enhancement above baseline of the lesion within 120 sec were measured. We defined these time intervals as "contrast rise time" of the lesion. Diagnosis of the peripheral vascular malformations was based on angiographic or venographic findings. The mean artery-lesion enhancement time of the high-flow vascular malformations (3.3 sec [range, 0-5 sec]) was significantly shorter than that of the low-flow vascular malformations (8.8 sec [range, 0-20 sec]) (Mann-Whitney test, p < 0.05). The mean maximal lesion enhancement time of the high-flow vascular malformations (5.8 sec [range, 5-10 sec]) was significantly shorter than that of the low-flow vascular malformations (88.4 sec [range, 50-100 sec]) (Mann-Whitney test, p < 0.01). Dynamic MRI is useful for distinguishing high-flow from low-flow vascular malformations, especially when the contrast rise time of the lesion is measured.

  7. RISMA: A Rule-based Interval State Machine Algorithm for Alerts Generation, Performance Analysis and Monitoring Real-Time Data Processing

    NASA Astrophysics Data System (ADS)

    Laban, Shaban; El-Desouky, Aly

    2013-04-01

    The monitoring of real-time systems is a challenging and complicated process, so there is a continuous need to improve it through new intelligent techniques and algorithms for detecting exceptions and anomalous behaviours and for generating the necessary alerts during workflow monitoring of such systems. Interval-based (or period-based) theorems have been discussed, analysed, and used by many researchers in Artificial Intelligence (AI), philosophy, and linguistics. As explained by Allen, there are 13 relations between any two intervals. There have also been many studies of interval-based temporal reasoning and logics over the past decades. Interval-based theorems can be used for monitoring real-time interval-based data processing. However, increasing the number of processed intervals makes the implementation of such theorems a complex and time-consuming process, as the relationships between the intervals grow exponentially. To overcome this problem, this paper presents a Rule-based Interval State Machine Algorithm (RISMA) for processing, monitoring, and analysing the behaviour of interval-based data received from real-time sensors. The proposed algorithm uses the Interval State Machine (ISM) approach to model any number of interval-based data into well-defined states, as well as to infer over them. An interval-based state transition model and methodology are presented to identify the relationships between the different states of the proposed algorithm. Using this model, the unlimited number of relationships between large numbers of intervals can be reduced to only 18 direct relationships using the proposed well-defined states.
    For testing the proposed algorithm, the necessary inference rules and code have been designed and applied to the continuous data received in near real time from the stations of the International Monitoring System (IMS) by the International Data Centre (IDC) of the Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO). The CLIPS expert system shell has been used as the main rule engine for implementing the algorithm rules, and the Python programming language with the "PyCLIPS" module is used to build the necessary implementation code. More than 1.7 million intervals constituting the Concise List of Frames (CLF) from 20 different seismic stations have been used to evaluate the proposed algorithm and to assess station behaviour and performance. The initial results showed that the proposed algorithm can help in better understanding the operation and performance of those stations. Important information, such as alerts and some station performance parameters, can be derived from the proposed algorithm. For IMS interval-based data, at any period of time it is possible to analyse station behaviour, determine missing data, generate necessary alerts, and measure some station performance attributes. The details of the proposed algorithm, methodology, implementation, experimental results, advantages, and limitations of this research are presented. Finally, future directions and recommendations are discussed.
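RISMA builds on Allen's 13 interval relations; classifying the relation between two intervals can be sketched directly. This is a generic Allen classifier, not RISMA's own 18-relationship state encoding.

```python
def allen_relation(a, b):
    """Classify the Allen relation between intervals a = (a0, a1) and
    b = (b0, b1), with a0 < a1 and b0 < b1. Returns one of the 13
    standard relation names.
    """
    a0, a1 = a
    b0, b1 = b
    if a1 < b0: return "before"
    if b1 < a0: return "after"
    if a1 == b0: return "meets"
    if b1 == a0: return "met-by"
    if (a0, a1) == (b0, b1): return "equals"
    if a0 == b0: return "starts" if a1 < b1 else "started-by"
    if a1 == b1: return "finishes" if a0 > b0 else "finished-by"
    if b0 < a0 and a1 < b1: return "during"
    if a0 < b0 and b1 < a1: return "contains"
    return "overlaps" if a0 < b0 else "overlapped-by"
```

Pairwise classification like this is what scales quadratically with the number of intervals, motivating the state-machine reduction the paper proposes.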

  8. Statistical regularities in the return intervals of volatility

    NASA Astrophysics Data System (ADS)

    Wang, F.; Weber, P.; Yamasaki, K.; Havlin, S.; Stanley, H. E.

    2007-01-01

    We discuss recent results concerning statistical regularities in the return intervals of volatility in financial markets. In particular, we show how the analysis of volatility return intervals, defined as the time between two volatilities larger than a given threshold, can help to get a better understanding of the behavior of financial time series. We find scaling in the distribution of return intervals for thresholds ranging over a factor of 25, from 0.6 to 15 standard deviations, and also for various time windows from one minute up to 390 min (an entire trading day). Moreover, these results are universal for different stocks, commodities, interest rates as well as currencies. We also analyze the memory in the return intervals which relates to the memory in the volatility and find two scaling regimes, ℓ<ℓ* with α1=0.64±0.02 and ℓ> ℓ* with α2=0.92±0.04; these exponent values are similar to results of Liu et al. for the volatility. As an application, we use the scaling and memory properties of the return intervals to suggest a possibly useful method for estimating risk.

  9. Return Intervals Approach to Financial Fluctuations

    NASA Astrophysics Data System (ADS)

    Wang, Fengzhong; Yamasaki, Kazuko; Havlin, Shlomo; Stanley, H. Eugene

    Financial fluctuations play a key role for financial markets studies. A new approach focusing on properties of return intervals can help to get better understanding of the fluctuations. A return interval is defined as the time between two successive volatilities above a given threshold. We review recent studies and analyze the 1000 most traded stocks in the US stock markets. We find that the distribution of the return intervals has a well approximated scaling over a wide range of thresholds. The scaling is also valid for various time windows from one minute up to one trading day. Moreover, these results are universal for stocks of different countries, commodities, interest rates as well as currencies. Further analysis shows some systematic deviations from a scaling law, which are due to the nonlinear correlations in the volatility sequence. We also examine the memory in return intervals for different time scales, which are related to the long-term correlations in the volatility. Furthermore, we test two popular models, FIGARCH and fractional Brownian motion (fBm). Both models can catch the memory effect but only fBm shows a good scaling in the return interval distribution.

  10. Evidence for impulsivity in the Spontaneously Hypertensive Rat drawn from complementary response-withholding tasks

    PubMed Central

    Sanabria, Federico; Killeen, Peter R

    2008-01-01

    Background The inability to inhibit reinforced responses is a defining feature of ADHD associated with impulsivity. The Spontaneously Hypertensive Rat (SHR) has been extolled as an animal model of ADHD, but there is no clear experimental evidence of inhibition deficits in SHR. Attempts to demonstrate these deficits may have suffered from methodological and analytical limitations. Methods We provide a rationale for using two complementary response-withholding tasks to doubly dissociate impulsivity from motivational and motor processes. In the lever-holding task (LHT), continual lever depression was required for a minimum interval. Under a differential reinforcement of low rates schedule (DRL), a minimum interval was required between lever presses. Both tasks were studied using SHR and two normotensive control strains, Wistar-Kyoto (WKY) and Long Evans (LE), over an overlapping range of intervals (1 – 5 s for LHT and 5 – 60 s for DRL). Lever-holding and DRL performance was characterized as the output of a mixture of two processes, timing and iterative random responding; we call this account of response inhibition the Temporal Regulation (TR) model. In the context of TR, impulsivity was defined as a bias toward premature termination of the timed intervals. Results The TR model provided an accurate description of LHT and DRL performance. On the basis of TR parameter estimates, SHRs were more impulsive than LE rats across tasks and target times. WKY rats produced substantially shorter timed responses in the lever-holding task than in DRL, suggesting a motivational or motor deficit. The precision of timing by SHR, as measured by the variance of their timed intervals, was excellent, flouting expectations from ADHD research. Conclusion This research validates the TR model of response inhibition and supports SHR as an animal model of ADHD-related impulsivity. It indicates, however, that SHR's impulse-control deficit is not caused by imprecise timing. 
The use of ad hoc impulsivity metrics and of WKY as control strain for SHR impulsivity are called into question. PMID:18261220

  11. Not All Prehospital Time is Equal: Influence of Scene Time on Mortality

    PubMed Central

    Brown, Joshua B.; Rosengart, Matthew R.; Forsythe, Raquel M.; Reynolds, Benjamin R.; Gestring, Mark L.; Hallinan, William M.; Peitzman, Andrew B.; Billiar, Timothy R.; Sperry, Jason L.

    2016-01-01

    Background Trauma is time-sensitive and minimizing prehospital (PH) time is appealing. However, most studies have not linked increasing PH time with worse outcomes, as raw PH times are highly variable. It is unclear whether specific PH time patterns affect outcomes. Our objective was to evaluate the association of PH time interval distribution with mortality. Methods Patients transported by EMS in the Pennsylvania trauma registry 2000-2013 with total prehospital time (TPT)≥20min were included. TPT was divided into three PH time intervals: response, scene, and transport time. The number of minutes in each PH time interval was divided by TPT to determine the relative proportion each interval contributed to TPT. A prolonged interval was defined as any one PH interval contributing ≥50% of TPT. Patients were classified by prolonged PH interval or no prolonged PH interval (all intervals<50% of TPT). Patients were matched for TPT and conditional logistic regression determined the association of mortality with PH time pattern, controlling for confounders. PH interventions were explored as potential mediators, and prehospital triage criteria were used to identify patients with time-sensitive injuries. Results There were 164,471 patients included. Patients with prolonged scene time had increased odds of mortality (OR 1.21; 95%CI 1.02–1.44, p=0.03). Prolonged response, transport, and no prolonged interval were not associated with mortality. When adjusting for mediators including extrication and PH intubation, prolonged scene time was no longer associated with mortality (OR 1.06; 0.90–1.25, p=0.50). Together these factors mediated 61% of the effect between prolonged scene time and mortality. Mortality remained associated with prolonged scene time in patients with hypotension, penetrating injury, and flail chest. Conclusions Prolonged scene time is associated with increased mortality. PH interventions partially mediate this association. 
Further study should evaluate whether these interventions drive increased mortality because they prolong scene time or by another mechanism, as reducing scene time may be a target for intervention. Level of Evidence IV, prognostic study PMID:26886000
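The interval-proportion classification described in the Methods can be expressed compactly. A sketch under the abstract's 50% cut-point; the function name and labels are ours:

```python
def classify_prehospital(response_min, scene_min, transport_min, cut=0.5):
    """Label a transport by whichever prehospital interval contributes
    at least `cut` of total prehospital time, else 'none prolonged'."""
    total = response_min + scene_min + transport_min
    shares = {"response": response_min, "scene": scene_min,
              "transport": transport_min}
    for name, minutes in shares.items():
        if minutes / total >= cut:
            return f"prolonged {name}"
    return "none prolonged"

print(classify_prehospital(5, 20, 10))   # scene is 20/35 ≈ 57% of TPT
print(classify_prehospital(10, 10, 10))  # all intervals below 50%
```

At most one interval can reach 50% of the total, so the first match is the classification.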

  12. iCARE

    Cancer.gov

    The iCARE R package allows researchers to quickly build models for absolute risk and apply them to estimate an individual's risk of developing disease during a specified time interval, based on a set of user-defined input parameters.

  13. Detectability of auditory signals presented without defined observation intervals

    NASA Technical Reports Server (NTRS)

    Watson, C. S.; Nichols, T. L.

    1976-01-01

    Ability to detect tones in noise was measured without defined observation intervals. Latency density functions were estimated for the first response following a signal and, separately, for the first response following randomly distributed instances of background noise. Detection performance was measured by the maximum separation between the cumulative latency density functions for signal-plus-noise and for noise alone. Values of the index of detectability, estimated by this procedure, were approximately those obtained with a 2-dB weaker signal and defined observation intervals. Simulation of defined- and non-defined-interval tasks with an energy detector showed that this device performs very similarly to the human listener in both cases.

  14. Quantum interference of position and momentum: A particle propagation paradox

    NASA Astrophysics Data System (ADS)

    Hofmann, Holger F.

    2017-08-01

    Optimal simultaneous control of position and momentum can be achieved by maximizing the probabilities of finding their experimentally observed values within two well-defined intervals. The assumption that particles move along straight lines in free space can then be tested by deriving a lower limit for the probability of finding the particle in a corresponding spatial interval at any intermediate time t. Here, it is shown that this lower limit can be violated by quantum superpositions of states confined within the respective position and momentum intervals. These violations of the particle propagation inequality show that quantum mechanics changes the laws of motion at a fundamental level, providing a different perspective on causality relations and time evolution in quantum mechanics.

  15. 14 CFR Appendix B to Part 420 - Method for Defining a Flight Corridor

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... trajectory simulation software. Trajectory time intervals shall be no greater than one second. If an... applicant shall construct a launch area of a flight corridor using the processes and equations of this paragraph for each trajectory position. An applicant shall repeat these processes at time points on the...

  16. Novel method for high-throughput phenotyping of sleep in mice.

    PubMed

    Pack, Allan I; Galante, Raymond J; Maislin, Greg; Cater, Jacqueline; Metaxas, Dimitris; Lu, Shan; Zhang, Lin; Von Smith, Randy; Kay, Timothy; Lian, Jie; Svenson, Karen; Peters, Luanne L

    2007-01-17

    Assessment of sleep in mice currently requires initial implantation of chronic electrodes for assessment of electroencephalogram (EEG) and electromyogram (EMG) followed by time to recover from surgery. Hence, it is not ideal for high-throughput screening. To address this deficiency, a method of assessment of sleep and wakefulness in mice has been developed based on assessment of activity/inactivity either by digital video analysis or by breaking infrared beams in the mouse cage. It is based on the algorithm that any episode of continuous inactivity of > or =40 s is predicted to be sleep. The method gives excellent agreement in C57BL/6J male mice with simultaneous assessment of sleep by EEG/EMG recording. The average agreement over 8,640 10-s epochs in 24 h is 92% (n = 7 mice) with agreement in individual mice being 88-94%. Average EEG/EMG determined sleep per 2-h interval across the day was 59.4 min. The estimated mean difference (bias) per 2-h interval between inactivity-defined sleep and EEG/EMG-defined sleep was only 1.0 min (95% confidence interval for mean bias -0.06 to +2.6 min). The standard deviation of differences (precision) was 7.5 min per 2-h interval with 95% limits of agreement ranging from -13.7 to +15.7 min. Although bias significantly varied by time of day (P = 0.0007), the magnitude of time-of-day differences was not large (average bias during lights on and lights off was +5.0 and -3.0 min per 2-h interval, respectively). This method has applications in chemical mutagenesis and for studies of molecular changes in brain with sleep/wakefulness.
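The ≥40-s inactivity rule above maps directly onto the 10-s epochs used for scoring. A minimal sketch, assuming activity has already been reduced to one boolean per epoch (the function name is ours):

```python
def score_sleep(active, epoch_s=10, min_inactive_s=40):
    """Label each epoch as sleep (True) if it lies within a run of
    continuous inactivity lasting at least min_inactive_s seconds."""
    n = len(active)
    sleep = [False] * n
    i = 0
    while i < n:
        if not active[i]:
            j = i
            while j < n and not active[j]:  # extend the inactive run
                j += 1
            if (j - i) * epoch_s >= min_inactive_s:
                for k in range(i, j):
                    sleep[k] = True
            i = j
        else:
            i += 1
    return sleep

# four inactive epochs = 40 s -> sleep; a single inactive epoch is not
print(score_sleep([True, False, False, False, False, True, False]))
```

Agreement with EEG/EMG scoring would then be computed epoch by epoch, as in the abstract's 92% figure.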

  17. Finding Intervals of Abrupt Change in Earth Science Data

    NASA Astrophysics Data System (ADS)

    Zhou, X.; Shekhar, S.; Liess, S.

    2011-12-01

    In earth science data (e.g., climate data), it is often observed that a persistently abrupt change in value occurs in a certain time-period or spatial interval. For example, abrupt climate change is defined as an unusually large shift of precipitation, temperature, etc, that occurs during a relatively short time period. A similar pattern can also be found in geographical space, representing a sharp transition of the environment (e.g., vegetation between different ecological zones). Identifying such intervals of change from earth science datasets is a crucial step for understanding and attributing the underlying phenomenon. However, inconsistencies in these noisy datasets can obstruct the major change trend, and more importantly can complicate the search of the beginning and end points of the interval of change. Also, the large volume of data makes it challenging to process the dataset reasonably fast. In this work, we analyze earth science data using a novel, automated data mining approach to identify spatial/temporal intervals of persistent, abrupt change. 
We first propose a statistical model to quantitatively evaluate the change abruptness and persistence in an interval. Then we design an algorithm to exhaustively examine all the intervals using this model. Intervals passing a threshold test will be kept as final results. We evaluate the proposed method with the Climate Research Unit (CRU) precipitation data, whereby we focus on the Sahel rainfall index. Results show that this method can find periods of persistent and abrupt value changes with different temporal scales. We also further optimize the algorithm using a smart strategy, which always examines longer intervals before its subsets. By doing this, we reduce the computational cost to only one third of that of the original algorithm for the above test case. More significantly, the optimized algorithm is also proven to scale up well with data volume and number of changes. Particularly, it achieves better performance when dealing with longer change intervals.
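The exhaustive examine-all-intervals step can be sketched as follows. The shift-in-mean score below is a simplified stand-in for the paper's abruptness/persistence statistic, which the abstract does not specify:

```python
import numpy as np

def abrupt_intervals(x, min_len=2, threshold=1.0):
    """Exhaustively score each candidate interval [i, j) by the shift in
    mean value from the data before it to the data after it (a simplified
    stand-in for the paper's statistical model), keeping intervals whose
    shift passes the threshold test."""
    x = np.asarray(x, dtype=float)
    hits = []
    for i in range(1, len(x)):
        for j in range(i + min_len, len(x)):
            shift = abs(x[j:].mean() - x[:i].mean())
            if shift >= threshold:
                hits.append((i, j, shift))
    return hits

# a clean step change: every interval spanning the jump passes
print(abrupt_intervals([0, 0, 0, 5, 5, 5], min_len=2, threshold=4))
```

The paper's optimization of examining longer intervals before their subsets would prune this O(n²) scan; the sketch keeps the naive order for clarity.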

  18. Time Perception and Depressive Realism: Judgment Type, Psychophysical Functions and Bias

    PubMed Central

    Kornbrot, Diana E.; Msetfi, Rachel M.; Grimwood, Melvyn J.

    2013-01-01

    The effect of mild depression on time estimation and production was investigated. Participants made both magnitude estimation and magnitude production judgments for five time intervals (specified in seconds) from 3 sec to 65 sec. The parameters of the best fitting psychophysical function (power law exponent, intercept, and threshold) were determined individually for each participant in every condition. There were no significant effects of mood (high BDI, low BDI) or judgment (estimation, production) on the mean exponent, n = .98, 95% confidence interval (.96–1.04) or on the threshold. However, the intercept showed a ‘depressive realism’ effect, where high BDI participants had a smaller deviation from accuracy and a smaller difference between estimation and judgment than low BDI participants. Accuracy bias was assessed using three measures of accuracy: difference, defined as psychological time minus physical time, ratio, defined as psychological time divided by physical time, and a new logarithmic accuracy measure defined as ln (ratio). The ln (ratio) measure was shown to have approximately normal residuals when subjected to a mixed ANOVA with mood as a between groups explanatory factor and judgment and time category as repeated measures explanatory factors. The residuals of the other two accuracy measures flagrantly violated normality. The mixed ANOVAs of accuracy also showed a strong depressive realism effect, just like the intercepts of the psychophysical functions. There was also a strong negative correlation between estimation and production judgments. Taken together these findings support a clock model of time estimation, combined with additional cognitive mechanisms to account for the depressive realism effect. The findings also suggest strong methodological recommendations. PMID:23990960
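The three accuracy measures compared in the study are simple transformations of the same pair of values, which can be computed together:

```python
import math

def accuracy_measures(psychological_t, physical_t):
    """Difference, ratio and ln(ratio) accuracy measures for one
    time judgment (times in seconds)."""
    ratio = psychological_t / physical_t
    return {"difference": psychological_t - physical_t,
            "ratio": ratio,
            "ln_ratio": math.log(ratio)}

# a 6-s estimate of a 3-s interval
print(accuracy_measures(6.0, 3.0))
```

The ln(ratio) measure is symmetric around zero for over- and underestimation by the same factor, which is consistent with the study's finding of approximately normal residuals.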

  19. An actual load forecasting methodology by interval grey modeling based on the fractional calculus.

    PubMed

    Yang, Yang; Xue, Dingyü

    2017-07-17

    The operation processes of a thermal power plant are measured as real-time data, and a large number of historical interval data can be obtained from the dataset. Within defined periods of time, the interval information can provide important input for decision making and equipment maintenance. Actual load is one of the most important parameters, and the trends hidden in the historical data reflect the overall operating status of the equipment. However, with interval grey numbers, the modeling and prediction process is more complicated than with real numbers. In order not to lose any information, this paper uses geometric coordinate features, namely the coordinates of the area and middle-point lines, which are proven to carry the same information as the original interval data. A grey prediction model for interval grey numbers based on fractional-order accumulation calculus is proposed. Compared with the integer-order model, the proposed method has more degrees of freedom and better performance for modeling and prediction, and it can be widely used for modeling and prediction with small samples of historical interval sequences from industry. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  20. Echo tracker/range finder for radars and sonars

    NASA Technical Reports Server (NTRS)

    Constantinides, N. J. (Inventor)

    1982-01-01

    An echo tracker/range finder or altimeter is described. The pulse repetition frequency (PRF) of a predetermined plurality of transmitted pulses is adjusted so that echo pulses received from a reflecting object are positioned between transmitted pulses, dividing their interpulse time interval into two time intervals having a predetermined ratio with respect to each other. The invention provides a means whereby the arrival time of a plurality of echo pulses is defined as the time at which a composite echo pulse, formed as the sum of the individual echo pulses, has the highest amplitude. The invention is applicable to radar systems, sonar systems, or any other kind of system in which pulses are transmitted and echoes received therefrom.

  1. Analysis of the equilibrium trip cost accounting for the fuel cost in a single-lane traffic system without late arrival

    NASA Astrophysics Data System (ADS)

    Tang, Tie-Qiao; Wang, Tao; Chen, Liang; Huang, Hai-Jun

    2018-01-01

    In this paper, we introduce the fuel cost into each commuter's trip cost, define a new trip cost without late arrival and its corresponding equilibrium state, and use a car-following model to explore the impacts of the fuel cost on each commuter's departure time, departure interval, arrival time, arrival interval, traveling time, early arrival time and trip cost at the above equilibrium state. The numerical results show that considering the fuel cost in each commuter's trip cost has positive impacts on his trip cost and fuel cost, and the traffic situation in the system without late arrival, i.e., each commuter should explicitly consider the fuel cost in his trip cost.

  2. Reference Intervals of Common Clinical Chemistry Analytes for Adults in Hong Kong.

    PubMed

    Lo, Y C; Armbruster, David A

    2012-04-01

    Defining reference intervals is a major challenge because of the difficulty in recruiting volunteers to participate and testing samples from a significant number of healthy reference individuals. Intervals cited in the historical literature are often suboptimal because they may be based on obsolete methods and/or only a small number of poorly defined reference samples. Blood donors in Hong Kong gave permission for additional blood to be collected for reference interval testing. The samples were tested for twenty-five routine analytes on the Abbott ARCHITECT clinical chemistry system. Results were analyzed using the Rhoads EP Evaluator software program, which is based on the CLSI/IFCC C28-A guideline and defines the reference interval as the 95% central range. Method-specific reference intervals were established for twenty-five common clinical chemistry analytes for a Chinese ethnic population. The intervals were defined for each gender separately and for both genders combined, and gender-specific or combined intervals were adopted as appropriate for each analyte. A large number of healthy, apparently normal blood donors from a local ethnic population were tested to provide current reference intervals for a new clinical chemistry system. Intervals were determined following an accepted international guideline. Laboratories using the same or similar methodologies may adopt these intervals if validated and deemed suitable for their patient population. Laboratories using different methodologies may be able to adapt the intervals for their facilities using the reference interval transference technique, based on a method comparison study.
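The 95% central range used above is, nonparametrically, the span between the 2.5th and 97.5th percentiles of the reference sample. A minimal sketch (the guideline also prescribes minimum sample sizes and outlier handling, omitted here):

```python
import numpy as np

def reference_interval(values):
    """Nonparametric 95% central range: the 2.5th and 97.5th
    percentiles of a healthy reference sample."""
    lo, hi = np.percentile(values, [2.5, 97.5])
    return lo, hi

# illustrative sample of 1000 ordered results
lo, hi = reference_interval(list(range(1, 1001)))
print(lo, hi)
```

Gender-specific intervals, as in the study, would simply apply this to each subgroup separately.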

  3. Maternal child-feeding practices and dietary inadequacy of 4-year-old children.

    PubMed

    Durão, Catarina; Andreozzi, Valeska; Oliveira, Andreia; Moreira, Pedro; Guerra, António; Barros, Henrique; Lopes, Carla

    2015-09-01

    This study aimed to evaluate the association between maternal perceived responsibility and child-feeding practices and dietary inadequacy of 4-year-old children. We studied 4122 mothers and children enrolled in the population-based birth cohort - Generation XXI (Porto, Portugal). Mothers self-completed the Child Feeding Questionnaire and a scale on covert and overt control, and answered to a food frequency questionnaire in face-to-face interviews. Using dietary guidelines for preschool children, adequacy intervals were defined: fruit and vegetables (F&V) 4-7 times/day; dairy 3-5 times/day; meat and eggs 5-10 times/week; fish 2-4 times/week. Inadequacy was considered as below or above these cut-points. For energy-dense micronutrient-poor foods and beverages (EDF), a tolerable limit was defined (<6 times/week). Associations between maternal perceived responsibility and child-feeding practices (restriction, monitoring, pressure to eat, overt and covert control) and children's diet were examined by logistic regression models. After adjustment for maternal BMI, education, and diet, and children's characteristics (sex, BMI z-scores), restriction, monitoring, overt and covert control were associated with 11-18% lower odds of F&V consumption below the interval defined as adequate. Overt control was also associated with 24% higher odds of their consumption above it. Higher perceived responsibility was associated with higher odds of children consuming F&V and dairy above recommendations. Pressure to eat was positively associated with consumption of dairy above the adequate interval. Except for pressure to eat, maternal practices were associated with 14-27% lower odds of inadequate consumption of EDF. In conclusion, children whose mothers had higher levels of covert control, monitoring, and restriction were less likely to consume F&V below recommendations and EDF above tolerable limits. 
Higher overt control and pressure to eat were associated, respectively, with a higher likelihood of children consuming F&V and dairy above recommendations. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. The cyclic and fractal seismic series preceding an mb 4.8 earthquake on 1980 February 14 near the Virgin Islands

    USGS Publications Warehouse

    Varnes, D.J.; Bufe, C.G.

    1996-01-01

    Seismic activity in the 10 months preceding the 1980 February 14, mb 4.8 earthquake in the Virgin Islands, reported on by Frankel in 1982, consisted of four principal cycles. Each cycle began with a relatively large event or series of closely spaced events, and the duration of the cycles progressively shortened by a factor of about 3/4. Had this regular shortening of the cycles been recognized prior to the earthquake, the time of the next episode of seismicity (the main shock) might have been closely estimated 41 days in advance. That this event could be much larger than the previous events is indicated from time-to-failure analysis of the accelerating rise in released seismic energy, using a non-linear time- and slip-predictable foreshock model. Examination of the timing of all events in the sequence shows an even higher degree of order. Rates of seismicity, measured by consecutive interevent times, when plotted on an iteration diagram of a rate versus the succeeding rate, form a triangular circulating trajectory. The trajectory becomes an ascending helix if extended in a third dimension, time. This construction reveals additional and precise relations among the time intervals between times of relatively high or relatively low rates of seismic activity, including period halving and doubling. The set of 666 time intervals between all possible pairs of the 37 recorded events appears to be a fractal; the set of time points that define the intervals has a finite, non-integer correlation dimension of 0.70. In contrast, the average correlation dimension of 50 random sequences of 37 events is significantly higher, close to 1.0. In a similar analysis, the set of distances between pairs of epicentres has a fractal correlation dimension of 1.52. Well-defined cycles, numerous precise ratios among time intervals, and a non-random temporal fractal dimension suggest that the seismic series is not a random process, but rather the product of a deterministic dynamic system.

  5. Evaluating Protocol Lifecycle Time Intervals in HIV/AIDS Clinical Trials

    PubMed Central

    Schouten, Jeffrey T.; Dixon, Dennis; Varghese, Suresh; Cope, Marie T.; Marci, Joe; Kagan, Jonathan M.

    2014-01-01

    Background Identifying efficacious interventions for the prevention and treatment of human diseases depends on the efficient development and implementation of controlled clinical trials. Essential to reducing the time and burden of completing the clinical trial lifecycle is determining which aspects take the longest, delay other stages, and may lead to better resource utilization without diminishing scientific quality, safety, or the protection of human subjects. Purpose In this study we modeled time-to-event data to explore relationships between clinical trial protocol development and implementation times, as well as identify potential correlates of prolonged development and implementation. Methods We obtained time interval and participant accrual data from 111 interventional clinical trials initiated between 2006 and 2011 by NIH’s HIV/AIDS Clinical Trials Networks. We determined the time (in days) required to complete defined phases of clinical trial protocol development and implementation. Kaplan-Meier estimates were used to assess the rates at which protocols reached specified terminal events, stratified by study purpose (therapeutic, prevention) and phase group (pilot/phase I, phase II, and phase III/IV). We also examined several potential correlates to prolonged development and implementation intervals. Results Even though phase grouping did not determine development or implementation times of either therapeutic or prevention studies, overall we observed wide variation in protocol development times. Moreover, we detected a trend toward phase III/IV therapeutic protocols exhibiting longer developmental (median 2½ years) and implementation times (>3 years). We also found that protocols exceeding the median number of days for completing the development interval had significantly longer implementation. Limitations The use of a relatively small set of protocols may have limited our ability to detect differences across phase groupings. 
Some timing effects present for a specific study phase may have been masked by combining protocols into phase groupings. Presence of informative censoring, such as withdrawal of some protocols from development if they began showing signs of lost interest among investigators, complicates interpretation of Kaplan-Meier estimates. Because this study constitutes a retrospective examination over an extended period of time, it does not allow for the precise identification of relative factors impacting timing. Conclusions Delays not only increase the time and cost to complete clinical trials, but they also diminish their usefulness by failing to answer research questions in time. We believe that research analyzing the time spent traversing defined intervals across the clinical trial protocol development and implementation continuum can stimulate business process analyses and reengineering efforts that could lead to reductions in the time from clinical trial concept to results, thereby accelerating progress in clinical research. PMID:24980279

  6. Real Time Correction of Aircraft Flight Configuration

    NASA Technical Reports Server (NTRS)

    Schipper, John F. (Inventor)

    2009-01-01

    Method and system for monitoring and analyzing, in real time, the variation with time of an aircraft flight parameter. A time-dependent recovery band, defined by first and second recovery band boundaries that are spaced apart at at least one time point, is constructed for a selected flight parameter and a selected recovery time interval length Δt(FP;rec). A flight parameter having a value FP(t=tp) at a time t=tp is likely to be able to recover to a reference flight parameter value FP(t′;ref), lying in a band of reference flight parameter values FP(t′;ref;CB), within a time interval given by tp ≤ t′ ≤ tp + Δt(FP;rec), if (or only if) the flight parameter value lies between the first and second recovery band boundary traces.

  7. Open Smart Energy Gateway (OpenSEG)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The Open Smart Energy Gateway (OpenSEG) aims to provide near-real time smart meter data to consumers without the delays or latencies associated with it being transported to the utility data center and then back to the consumer's application. To do this, the gateway queries the local Smart Meter to which it is bound to get energy consumption information at pre-defined intervals (minimum interval is 4 seconds). OpenSEG then stores the resulting data internally for retrieval by an external application.
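The gateway's query-and-store loop can be sketched as follows; `read_fn` and `store_fn` are hypothetical stand-ins for the meter query and OpenSEG's internal store, since the actual interfaces are not given in the summary:

```python
import time

def poll_meter(read_fn, store_fn, interval_s=4, cycles=3):
    """Poll the bound smart meter every `interval_s` seconds (OpenSEG's
    stated minimum is 4 s) and store each reading for later retrieval
    by an external application."""
    readings = []
    for _ in range(cycles):
        value = read_fn()  # query the bound meter
        store_fn(value)    # persist internally
        readings.append(value)
        time.sleep(interval_s)
    return readings
```

A real gateway would loop indefinitely and handle meter errors; the bounded `cycles` here just keeps the sketch testable.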

  8. Association between the physical activity and heart rate corrected-QT interval in older adults.

    PubMed

    Michishita, Ryoma; Fukae, Chika; Mihara, Rikako; Ikenaga, Masahiro; Morimura, Kazuhiro; Takeda, Noriko; Yamada, Yosuke; Higaki, Yasuki; Tanaka, Hiroaki; Kiyonaga, Akira

    2015-07-01

    Increased physical activity can reduce the incidence of cardiovascular disease and the mortality rate. In contrast, a prolonged heart rate corrected-QT (QTc) interval is associated with an increased risk of arrhythmias, sudden cardiac death and coronary artery disease. The present cross-sectional study was designed to clarify the association between the physical activity level and the QTc interval in older adults. The participants included 586 older adults (267 men and 319 women, age 71.2 ± 4.7 years) without a history of cardiovascular disease and not taking cardioactive drugs. Electrocardiography was recorded with a standard resting 12-lead electrocardiograph, while the QTc interval was calculated according to Hodges' formula. The physical activity level was assessed using a triaxial accelerometer. The participants were divided into four categories, defined by quartiles of the QTc interval distribution. After adjusting for age, body mass index, waist circumference and the number of steps, the time spent in inactivity was higher and the time spent in light physical activity was significantly lower in the longest QTc interval group than in the shortest QTc interval group in both sexes (P < 0.05, respectively). However, there were no significant differences in the time spent in moderate and vigorous physical activities among the four groups in either sex. These results suggest that a decreased physical activity level, especially inactivity and light intensity physical activity, was associated with a longer QTc interval in older adults. © 2014 Japan Geriatrics Society.
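Hodges' formula, referenced above, corrects QT linearly for heart rate: QTc = QT + 1.75 × (HR − 60), with QT in milliseconds. A minimal sketch (the function name is ours):

```python
def qtc_hodges(qt_ms, heart_rate_bpm):
    """Heart-rate-corrected QT interval by Hodges' formula.
    qt_ms: measured QT interval in milliseconds.
    heart_rate_bpm: heart rate in beats per minute."""
    return qt_ms + 1.75 * (heart_rate_bpm - 60)

print(qtc_hodges(400, 80))  # 435.0 ms
print(qtc_hodges(400, 60))  # 400.0 ms (no correction at 60 bpm)
```

Unlike Bazett's square-root correction, this linear form leaves QT unchanged at exactly 60 bpm.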

  9. On the local fractional derivative of everywhere non-differentiable continuous functions on intervals

    NASA Astrophysics Data System (ADS)

    Liu, Cheng-shi

    2017-01-01

    We first prove that for a continuous function f(x) defined on an open interval, the Kolvankar-Gangal's (or equivalently Chen-Yan-Zhang's) local fractional derivative f(α)(x) is not continuous, and then prove that it is impossible that the KG derivative f(α)(x) exists everywhere on the interval and satisfies f(α)(x) ≠ 0 in the same time. In addition, we give a criterion of the nonexistence of the local fractional derivative of everywhere non-differentiable continuous functions. Furthermore, we construct two simple nowhere differentiable continuous functions on (0, 1) and prove that they have no the local fractional derivatives everywhere.

  10. Detection of timescales in evolving complex systems

    PubMed Central

    Darst, Richard K.; Granell, Clara; Arenas, Alex; Gómez, Sergio; Saramäki, Jari; Fortunato, Santo

    2016-01-01

    Most complex systems are intrinsically dynamic in nature. The evolution of a dynamic complex system is typically represented as a sequence of snapshots, where each snapshot describes the configuration of the system at a particular instant of time. This is often done by using constant intervals but a better approach would be to define dynamic intervals that match the evolution of the system’s configuration. To this end, we propose a method that aims at detecting evolutionary changes in the configuration of a complex system, and generates intervals accordingly. We show that evolutionary timescales can be identified by looking for peaks in the similarity between the sets of events on consecutive time intervals of data. Tests on simple toy models reveal that the technique is able to detect evolutionary timescales of time-varying data both when the evolution is smooth as well as when it changes sharply. This is further corroborated by analyses of several real datasets. Our method is scalable to extremely large datasets and is computationally efficient. This allows a quick, parameter-free detection of multiple timescales in the evolution of a complex system. PMID:28004820
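
    The core idea, comparing the sets of events in consecutive time windows and looking for similarity peaks, can be sketched with a Jaccard similarity over non-overlapping windows. This is an illustrative reading of the method, not the authors' implementation; the window sweep and event representation are assumptions.

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity |a ∩ b| / |a ∪ b| of two event sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)


def similarity_profile(events, window):
    """Similarity between the event sets of consecutive windows of a
    given width, for a list of (timestamp, event_id) pairs. Peaks in
    this profile as the width varies suggest a natural timescale."""
    t_max = max(t for t, _ in events)
    sims = []
    t = 0
    while t + 2 * window <= t_max:
        w1 = {e for ts, e in events if t <= ts < t + window}
        w2 = {e for ts, e in events if t + window <= ts < t + 2 * window}
        sims.append(jaccard(w1, w2))
        t += window
    return sims
```

Scanning `window` over a range and averaging each profile gives a parameter-free way to compare candidate snapshot durations.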

  11. Timing of silicone stent removal in patients with post-tuberculosis bronchial stenosis.

    PubMed

    Eom, Jung Seop; Kim, Hojoong; Park, Hye Yun; Jeon, Kyeongman; Um, Sang-Won; Koh, Won-Jung; Suh, Gee Young; Chung, Man Pyo; Kwon, O Jung

    2013-10-01

    In patients with post-tuberculosis bronchial stenosis (PTBS), the severity of bronchial stenosis affects the restenosis rate after the silicone stent is removed. In PTBS patients with incomplete bronchial obstruction, who had a favorable prognosis, the timing of stent removal to ensure airway patency is not clear. We evaluated the time for silicone stent removal in patients with incomplete PTBS. A retrospective study examined PTBS patients who underwent stenting and removal of a silicone stent. Incomplete bronchial stenosis was defined as PTBS other than total bronchial obstruction, which had a luminal opening at the stenotic segment on bronchoscopic intervention. The duration of stenting was defined as the interval from stent insertion to removal. The study included 44 PTBS patients and the patients were grouped at intervals of 6 months according to the duration of stenting. Patients stented for more than 12 months had a significantly lower restenosis rate than those stented for less than 12 months (4% vs. 35%, P = 0.009). Multiple logistic regression revealed an association between stenting for more than 12 months and a low restenosis rate (odds ratio 12.095; 95% confidence interval 1.097-133.377). Moreover, no restenosis was observed in PTBS patients when the stent was placed more than 14 months previously. In patients with incomplete PTBS, stent placement for longer than 12 months reduced restenosis after stent removal.

  12. IEEE-1588(Trademark) Standard for a Precision Clock Synchronization Protocol for Networked Measurement and Control Systems

    DTIC Science & Technology

    2002-12-01

    34th Annual Precise Time and Time Interval (PTTI) Meeting 243 IEEE-1588™ STANDARD FOR A PRECISION CLOCK SYNCHRONIZATION PROTOCOL FOR... synchronization . 2. Cyclic-systems. In cyclic-systems, timing is periodic and is usually defined by the characteristics of a cyclic network or bus...incommensurate, timing schedules for each device are easily implemented. In addition, synchronization accuracy depends on the accuracy of the common

  13. Proportionality between Doppler noise and integrated signal path electron density validated by differenced S-X range

    NASA Technical Reports Server (NTRS)

    Berman, A. L.

    1977-01-01

    Observations of Viking differenced S-band/X-band (S-X) range are shown to correlate strongly with Viking Doppler noise. A ratio of proportionality between downlink S-band plasma-induced range error and two-way Doppler noise is calculated. A new parameter (similar to the parameter epsilon which defines the ratio of local electron density fluctuations to mean electron density) is defined as a function of observed data sample interval (Tau) where the time-scale of the observations is 15 Tau. This parameter is interpreted to yield the ratio of net observed phase (or electron density) fluctuations to integrated electron density (in RMS meters/meter). Using this parameter and the thin phase-changing screen approximation, a value for the scale size L is calculated. To be consistent with Doppler noise observations, it is seen necessary for L to be proportional to closest approach distance a, and a strong function of the observed data sample interval, and hence the time-scale of the observations.

  14. A Joint Replenishment Inventory Model with Lost Sales

    NASA Astrophysics Data System (ADS)

    Devy, N. L.; Ai, T. J.; Astanti, R. D.

    2018-04-01

    This paper deals with a two-item joint replenishment inventory problem in which the demand for each item is constant and deterministic. Inventory replenishment is conducted periodically every T time intervals, and joint replenishment of both items is possible. Item i is replenished every ZiT time intervals, and replenishments are instantaneous. All shortages are treated as lost sales, with a maximum allowance Si for lost sales of item i. A mathematical model is formulated to determine the basic time cycle T, the replenishment multipliers Zi, and the maximum lost sales Si that minimize the total cost per unit time. A solution methodology is proposed to solve the model, and a numerical example demonstrates the effectiveness of the proposed methodology.
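
    The structure of such a model can be sketched with a simplified cost rate and a brute-force search over the base cycle T and the integer multipliers Zi. This sketch omits the paper's lost-sales terms and uses a textbook-style joint-replenishment cost (major ordering cost per base cycle, minor cost per item cycle, cycle-stock holding cost); all parameter names are illustrative.

```python
from itertools import product


def total_cost_per_unit_time(T, Z, demand, major_cost, minor_cost, hold_cost):
    """Simplified joint-replenishment cost rate (no lost sales):
    a major ordering cost every base cycle T, a minor cost for item i
    every Zi*T, and average holding cost d_i * Zi * T * h_i / 2."""
    cost = major_cost / T
    for d, z, c, h in zip(demand, Z, minor_cost, hold_cost):
        cost += c / (z * T) + d * z * T * h / 2.0
    return cost


def search(demand, major_cost, minor_cost, hold_cost, T_grid, Z_max=5):
    """Enumerate T over a grid and each Zi in {1..Z_max}; return the
    cheapest (cost, T, Z) triple found."""
    best = None
    for T in T_grid:
        for Z in product(range(1, Z_max + 1), repeat=len(demand)):
            c = total_cost_per_unit_time(T, Z, demand, major_cost,
                                         minor_cost, hold_cost)
            if best is None or c < best[0]:
                best = (c, T, Z)
    return best
```

The paper's actual methodology optimizes T continuously and adds the Si decision; the enumeration above only shows how the decision variables interact.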

  15. Fluctuations of healthy and unhealthy heartbeat intervals

    NASA Astrophysics Data System (ADS)

    Lan, Boon Leong; Toda, Mikito

    2013-04-01

    We show that the RR-interval fluctuations, defined as the difference between successive natural logarithms of the RR interval, for healthy, congestive-heart-failure (CHF) and atrial-fibrillation (AF) subjects are well modeled by non-Gaussian stable distributions. Our results suggest that healthy or unhealthy RR-interval fluctuation can generally be modeled as a sum of a large number of independent physiological effects which are identically distributed with infinite variance. Furthermore, we show for the first time that one indicator, the scale parameter of the stable distribution, is sufficient to robustly distinguish the three groups of subjects. The scale parameters for healthy subjects are smaller than those for AF subjects but larger than those for CHF subjects; this ordering suggests that the scale parameter could be used to objectively quantify the severity of CHF and AF over time and also serve as an early warning signal for a healthy person when it approaches either boundary of the healthy range.
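
    The fluctuation series the authors model is straightforward to compute: for RR intervals r_1, r_2, …, take x_i = ln(r_{i+1}) − ln(r_i). A minimal helper (the stable-distribution fit itself is a separate step not shown here):

```python
import math


def rr_fluctuations(rr_intervals):
    """Difference of successive natural logarithms of RR intervals,
    i.e. x_i = ln(r_{i+1}) - ln(r_i), the series modeled by
    non-Gaussian stable distributions in the abstract above."""
    return [math.log(b) - math.log(a)
            for a, b in zip(rr_intervals, rr_intervals[1:])]
```

Equivalently, each x_i is the log of the ratio of consecutive intervals, which makes the series dimensionless.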

  16. Are polygonal faults the keystone for better understanding the timing of fluid migration in sedimentary basins?

    NASA Astrophysics Data System (ADS)

    Gay, Aurélien

    2017-06-01

    The initial sediment lithification starts with complex interactions involving minerals, surface water, decomposing organic matter and living organisms. This is the eogenesis domain (0 to 2 km below the seafloor), in which the sediments are subject to physical, chemical and mechanical transformations defining the early fabric of rocks. This interval is intensively prospected for its energy and mining resources (hydrocarbons, metal deposits, geothermal energy). In most basins worldwide it is composed of very fine-grained sediments and is assumed to act as a seal for fluid migration. However, it is affected by polygonal faulting, caused by a volume loss during burial through contraction of clay sediments with a high smectite content. This process is of high interest for fractured reservoirs and/or cover integrity, but it is not well constrained, leaving an uncertainty: the polygonal fault interval can promote the migration of deeper fluids, yet those mineralized fluids can intensify diagenesis in the fracture planes, rendering the interval all the more impermeable. The next challenge is to define where, when and how this polygonal fault interval occurs, which can only be done by understanding the behavior of clay grains and fluids during early burial.

  17. Effect of collection-maturation interval time and pregnancy status of donor mares on oocyte developmental competence in horse cloning.

    PubMed

    Gambini, A; Jarazo, J; Karlanian, F; De Stéfano, A; Salamone, D F

    2014-02-01

    The current limitations for obtaining ovaries from slaughterhouses and the low efficiency of in vivo follicular aspiration necessitate a complete understanding of the variables that affect oocyte developmental competence in the equine. For this reason, we assessed the effect on equine oocyte meiotic competence and the subsequent in vitro cloned embryo development of 1) the time interval between ovary collection and the onset of oocyte in vitro maturation (collection-maturation interval time) and 2) the pregnancy status of the donor mares. To define the collection-maturation interval time, collected oocytes were classified according to the slaughtering time and the pregnancy status of the mare. Maturation rate was recorded and some matured oocytes of each group were used to reconstruct zona free cloned embryos. Nuclear maturation rates were lower when the collection-maturation interval time exceeded 10 h as compared to 4 h (32/83 vs. 76/136, respectively; P = 0.0128) and when the donor mare was pregnant as compared to nonpregnant (53/146 vs. 177/329, respectively; P = 0.0004). Low rates of cleaved embryos were observed when the collection-maturation interval time exceeded 10 h as compared to 6 to 10 h (11/27 vs. 33/44, respectively; P = 0.0056), but the pregnancy status of donor mares did not affect cloned equine blastocyst development (3/49 vs. 1/27 for blastocyst rates of nonpregnant and pregnant groups, respectively; P = 1.00). These results indicate that, to apply assisted reproductive technologies in horses, oocytes should be harvested within approximately 10 h after ovary collection. Also, even though ovaries from pregnant mares are a potential source of oocytes, they should be processed at the end of the collection routine due to the lower collection and maturation rate in this group.

  18. Crackles and instabilities during lung inflation

    NASA Astrophysics Data System (ADS)

    Alencar, Adriano M.; Majumdar, Arnab; Hantos, Zoltan; Buldyrev, Sergey V.; Eugene Stanley, H.; Suki, Béla

    2005-11-01

    In a variety of physico-chemical reactions, the actual process takes place in a reactive zone, called the “active surface”. We define the active surface of the lung as the set of airway segments that are closed but connected to the trachea through an open pathway, which is the interface between closed and open regions in a collapsed lung. To study the active surface and the time interval between consecutive openings, we measured the sound pressure of crackles, associated with the opening of collapsed airway segments in isolated dog lungs, inflating from the collapsed state in 120 s. We analyzed the sequence of crackle amplitudes, inter-crackle intervals, and low frequency energy from acoustic data. The series of spike amplitudes spans two orders of magnitude, and the inter-crackle intervals span more than five orders of magnitude. The distribution of spike amplitudes follows a power law for nearly two decades, while the distribution of time intervals between consecutive crackles shows two regimes of power law behavior, where the first region represents crackles coming from avalanches of openings whereas the second region is due to the time intervals between separate avalanches. Using the time interval between measured crackles, we estimated the time evolution of the active surface during lung inflation. In addition, we show that recruitment and instabilities along the pressure-volume curve are associated with airway opening. We find a good agreement between the theory of the dynamics of lung inflation and the experimental data, which combined with numerical results may prove useful in the clinical diagnosis of lung diseases.

  19. The STEP model: Characterizing simultaneous time effects on practice for flight simulator performance among middle-aged and older pilots

    PubMed Central

    Kennedy, Quinn; Taylor, Joy; Noda, Art; Yesavage, Jerome; Lazzeroni, Laura C.

    2015-01-01

    Understanding the possible effects of the number of practice sessions (practice) and time between practice sessions (interval) among middle-aged and older adults in real world tasks has important implications for skill maintenance. Prior training and cognitive ability may impact practice and interval effects on real world tasks. In this study, we took advantage of existing practice data from five simulated flights among 263 middle-aged and older pilots with varying levels of flight expertise (defined by FAA proficiency ratings). We developed a new STEP (Simultaneous Time Effects on Practice) model to: (1) model the simultaneous effects of practice and interval on performance of the five flights, and (2) examine the effects of selected covariates (age, flight expertise, and three composite measures of cognitive ability). The STEP model demonstrated consistent positive practice effects, negative interval effects, and predicted covariate effects. Age negatively moderated the beneficial effects of practice. Additionally, cognitive processing speed and intra-individual variability (IIV) in processing speed moderated the benefits of practice and/or the negative influence of interval for particular flight performance measures. Expertise did not interact with either practice or interval. Results indicate that practice and interval effects occur in simulated flight tasks. However, processing speed and IIV may influence these effects, even among high functioning adults. Results have implications for the design and assessment of training interventions targeted at middle-aged and older adults for complex real world tasks. PMID:26280383

  20. Proceedings of the Annual Precise Time and Time Interval (PTTI) Planning Meeting (6th). Held at U.S. Naval Research Laboratory, December 3-5, 1974

    DTIC Science & Technology

    1974-01-01

    General agreement seems to be developing that the geophysical system should be defined in terms of a large number of points...34A Laser-Interferometer System for the Absolute Determination of the Acceleration due to Gravity," In Proc. Int. Conf. on Precision Measurement...MO %. The ratio of the plasmaspheric to the total time-delays due to free

  1. Rule-Based Runtime Verification

    NASA Technical Reports Server (NTRS)

    Barringer, Howard; Goldberg, Allen; Havelund, Klaus; Sen, Koushik

    2003-01-01

    We present a rule-based framework for defining and implementing finite trace monitoring logics, including future and past time temporal logic, extended regular expressions, real-time logics, interval logics, forms of quantified temporal logics, and so on. Our logic, EAGLE, is implemented as a Java library and involves novel techniques for rule definition, manipulation and execution. Monitoring is done on a state-by-state basis, without storing the execution trace.
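
    EAGLE itself is a Java library; as a language-neutral illustration of the state-by-state monitoring style described above (no stored trace, only a running summary state), here is a sketch of checking one past-time property, "the event occurs only if its precondition has held at some earlier or current state". The property and function name are invented for the example.

```python
def monitor_precedence(trace, event, precondition):
    """State-by-state check of a past-time precedence property:
    `event` may occur only if `precondition` was seen at an earlier
    or the same state. The trace is consumed one state at a time;
    only one boolean of summary state is kept."""
    seen_pre = False
    for state in trace:          # each state is a set of propositions
        if precondition in state:
            seen_pre = True
        if event in state and not seen_pre:
            return False         # violation detected at this state
    return True
```

A rule-based logic like EAGLE generalizes this pattern: each temporal operator is compiled into a small state update applied at every new state.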

  2. 40 CFR 63.2872 - What definitions apply to this subpart?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... NESHAP General Provisions. (c) In this section as follows: Accounting month means a time interval defined... solvent from the extracted meal. Oilseeds processed in a conventional desolventizer produce crude vegetable oil and crude meal products, such as animal feed. Corn germ dry milling means a source that...

  3. 40 CFR 63.2872 - What definitions apply to this subpart?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... NESHAP General Provisions. (c) In this section as follows: Accounting month means a time interval defined... consistent and regular basis. An accounting month will consist of approximately 4 to 5 calendar weeks and each accounting month will be of approximate equal duration. An accounting month may not correspond...

  4. Impact of Short Interval SMS Digital Data on Wind Vector Determination for a Severe Local Storms Area

    NASA Technical Reports Server (NTRS)

    Peslen, C. A.

    1979-01-01

    The impact of 5-minute-interval SMS-2 visible digital image data on analyzing severe local storms is examined using wind vectors derived from cloud tracking on a time-lapse sequence of geosynchronous satellite images. The cloud tracking areas are located in the Central Plains, where on 6 May 1975 hail-producing thunderstorms occurred ahead of a well-defined dry line. The results demonstrate that satellite-derived wind vectors and their associated divergence fields complement conventional meteorological analyses in describing the conditions preceding severe local storm development.

  5. Optimal investment strategies and hedging of derivatives in the presence of transaction costs (Invited Paper)

    NASA Astrophysics Data System (ADS)

    Muratore-Ginanneschi, Paolo

    2005-05-01

    Investment strategies in multiplicative Markovian market models with transaction costs are defined using growth optimal criteria. The optimal strategy is shown to consist in holding the amount of capital invested in stocks within an interval around an ideal optimal investment. The size of the holding interval is determined by the intensity of the transaction costs and the time horizon. The inclusion of financial derivatives in the models is also considered. All the results presented in this contribution were previously derived in collaboration with E. Aurell.
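
    The holding-interval policy described above is a band strategy: trade only when the invested fraction drifts outside an interval around the ideal target, and then trade just enough to return to the nearer boundary. The sketch below illustrates that rule; the function name and parameters are illustrative, and the band width would in practice come from the transaction-cost intensity and time horizon.

```python
def rebalance_if_outside(fraction_in_stock, target, band_halfwidth):
    """Band (holding-interval) policy: if the invested fraction lies
    inside [target - w, target + w], do nothing; otherwise trade to
    the nearer band edge, minimizing transaction volume."""
    lo, hi = target - band_halfwidth, target + band_halfwidth
    if fraction_in_stock < lo:
        return lo
    if fraction_in_stock > hi:
        return hi
    return fraction_in_stock
```

Trading to the band edge rather than to the target is what keeps transaction costs low while staying near the growth-optimal allocation.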

  6. Timing of silicone stent removal in patients with post-tuberculosis bronchial stenosis

    PubMed Central

    Eom, Jung Seop; Kim, Hojoong; Park, Hye Yun; Jeon, Kyeongman; Um, Sang-Won; Koh, Won-Jung; Suh, Gee Young; Chung, Man Pyo; Kwon, O. Jung

    2013-01-01

    CONTEXT: In patients with post-tuberculosis bronchial stenosis (PTBS), the severity of bronchial stenosis affects the restenosis rate after the silicone stent is removed. In PTBS patients with incomplete bronchial obstruction, who had a favorable prognosis, the timing of stent removal to ensure airway patency is not clear. AIMS: We evaluated the time for silicone stent removal in patients with incomplete PTBS. SETTINGS AND DESIGN: A retrospective study examined PTBS patients who underwent stenting and removal of a silicone stent. METHODS: Incomplete bronchial stenosis was defined as PTBS other than total bronchial obstruction, which had a luminal opening at the stenotic segment on bronchoscopic intervention. The duration of stenting was defined as the interval from stent insertion to removal. The study included 44 PTBS patients and the patients were grouped at intervals of 6 months according to the duration of stenting. RESULTS: Patients stented for more than 12 months had a significantly lower restenosis rate than those stented for less than 12 months (4% vs. 35%, P = 0.009). Multiple logistic regression revealed an association between stenting for more than 12 months and a low restenosis rate (odds ratio 12.095; 95% confidence interval 1.097-133.377). Moreover, no restenosis was observed in PTBS patients when the stent was placed more than 14 months previously. CONCLUSIONS: In patients with incomplete PTBS, stent placement for longer than 12 months reduced restenosis after stent removal. PMID:24250736

  7. Active, capable, and potentially active faults - a paleoseismic perspective

    USGS Publications Warehouse

    Machette, M.N.

    2000-01-01

    Maps of faults (geologically defined source zones) may portray seismic hazards in a wide range of completeness depending on which types of faults are shown. Three fault terms - active, capable, and potential - are used in a variety of ways for different reasons or applications. Nevertheless, to be useful for seismic-hazards analysis, fault maps should encompass a time interval that includes several earthquake cycles. For example, if the common recurrence in an area is 20,000-50,000 years, then maps should include faults that are 50,000-100,000 years old (two to five typical earthquake cycles), thus allowing for temporal variability in slip rate and recurrence intervals. Conversely, in more active areas such as plate boundaries, maps showing faults that are <10,000 years old should include those with at least 2 to as many as 20 paleoearthquakes. For the International Lithosphere Program's Task Group II-2 Project on Major Active Faults of the World, our maps and database will show five age categories and four slip rate categories that allow one to select differing time spans and activity rates for seismic-hazard analysis depending on tectonic regime. The maps are accompanied by a database that describes evidence for Quaternary faulting, geomorphic expression, and paleoseismic parameters (slip rate, recurrence interval and time of most recent surface faulting). These maps and databases provide an inventory of faults that would be defined as active, capable, and potentially active for seismic-hazard assessments.

  8. Mechanical dispersion is associated with poor outcome in heart failure with a severely depressed left ventricular function and bundle branch blocks.

    PubMed

    Stankovic, Ivan; Janicijevic, Aleksandra; Dimic, Aleksandra; Stefanovic, Milica; Vidakovic, Radosav; Putnikovic, Biljana; Neskovic, Aleksandar N

    2018-03-01

    Bundle branch blocks (BBB)-related mechanical dyssynchrony and dispersion may improve patient selection for device therapy, but their effect on the natural history of this patient population is unknown. A total of 155 patients with LVEF ≤ 35% and BBB, not treated with device therapy, were included. Mechanical dyssynchrony was defined as the presence of either septal flash or apical rocking. Contraction duration was assessed as the time interval from the electrocardiographic R-(Q-)wave to peak longitudinal strain in each of 17 left ventricular segments. Mechanical dispersion was defined as either the standard deviation of all time intervals (dispersion-SD) or as the difference between the longest and shortest time intervals (dispersion-delta). Patients were followed for cardiac mortality during a median period of 33 months. Mechanical dyssynchrony was not associated with survival. More pronounced dispersion-delta was found in patients with dyssynchrony than in those without. In the multivariate regression analysis, patients' functional class, diabetes mellitus and dispersion-delta were independently associated with mortality. Mechanical dispersion, but not dyssynchrony, was independently associated with mortality and it may be useful for risk stratification of patients with heart failure (HF) and BBB. Key messages: Mechanical dispersion, measured by strain echocardiography, is associated with poor outcome in heart failure with a severely depressed left ventricular function and bundle branch blocks. Mechanical dispersion may be useful for risk stratification of patients with heart failure and bundle branch blocks.
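
    Both dispersion indices defined in the abstract are simple statistics over the 17 segmental time-to-peak-strain intervals: the standard deviation (dispersion-SD) and the range between the longest and shortest interval (dispersion-delta). A direct computation:

```python
import statistics


def mechanical_dispersion(times_to_peak_ms):
    """Return (dispersion_SD, dispersion_delta) for a list of
    R-wave-to-peak-longitudinal-strain times, one per LV segment:
    the sample standard deviation and the max-min range."""
    return (statistics.stdev(times_to_peak_ms),
            max(times_to_peak_ms) - min(times_to_peak_ms))
```

In the study these would be fed, per patient, into the multivariate survival analysis.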

  9. Application of Millisecond Pulsar Timing to the Long-Term Stability of Clock Ensembles

    NASA Technical Reports Server (NTRS)

    Foster, Roger S.; Matsakis, Demetrios N.

    1996-01-01

    We review the application of millisecond pulsars to define a precise long-term standard and positional reference system in a nearly inertial reference frame. We quantify the current timing precision of the best millisecond pulsars and define the required precise time and time interval (PTTI) accuracy and stability to enable time transfer via pulsars. Pulsars may prove useful as independent standards to examine decade-long timing stability and provide an independent natural system within which to calibrate any new, perhaps vastly improved atomic time scale. Since pulsar stability appears to be related to the lifetime of the pulsar, the new millisecond pulsar J1713+0747 is projected to have a 100-day accuracy equivalent to a single HP5071 cesium standard. Over the last five years, dozens of new millisecond pulsars have been discovered. A few of the new millisecond pulsars may have even better timing properties.

  10. EAGLE can do Efficient LTL Monitoring

    NASA Technical Reports Server (NTRS)

    Barringer, Howard; Goldberg, Allen; Havelund, Klaus; Sen, Koushik

    2003-01-01

    We briefly present a rule-based framework, called EAGLE, that has been shown to be capable of defining and implementing finite trace monitoring logics, including future and past time temporal logic, extended regular expressions, real-time logics, interval logics, forms of quantified temporal logics, and so on. In this paper we show how EAGLE can do linear temporal logic (LTL) monitoring in an efficient way. We give an upper bound on the space and time complexity of this monitoring.

  11. Statistical evaluation of time-dependent metabolite concentrations: estimation of post-mortem intervals based on in situ 1H-MRS of the brain.

    PubMed

    Scheurer, Eva; Ith, Michael; Dietrich, Daniel; Kreis, Roland; Hüsler, Jürg; Dirnhofer, Richard; Boesch, Chris

    2005-05-01

    Knowledge of the time interval from death (post-mortem interval, PMI) has an enormous legal, criminological and psychological impact. Aiming to find an objective method for the determination of PMIs in forensic medicine, 1H-MR spectroscopy (1H-MRS) was used in a sheep head model to follow changes in brain metabolite concentrations after death. Following the characterization of newly observed metabolites (Ith et al., Magn. Reson. Med. 2002; 5: 915-920), the full set of acquired spectra was analyzed statistically to provide a quantitative estimation of PMIs with their respective confidence limits. In a first step, analytical mathematical functions are proposed to describe the time courses of 10 metabolites in the decomposing brain up to 3 weeks post-mortem. Subsequently, the inverted functions are used to predict PMIs based on the measured metabolite concentrations. Individual PMIs calculated from five different metabolites are then pooled, being weighted by their inverse variances. The predicted PMIs from all individual examinations in the sheep model are compared with known true times. In addition, four human cases with forensically estimated PMIs are compared with predictions based on single in situ MRS measurements. Interpretation of the individual sheep examinations gave a good correlation up to 250 h post-mortem, demonstrating that the predicted PMIs are consistent with the data used to generate the model. Comparison of the estimated PMIs with the forensically determined PMIs in the four human cases shows an adequate correlation. Current PMI estimations based on forensic methods typically suffer from uncertainties in the order of days to weeks without mathematically defined confidence information. In turn, a single 1H-MRS measurement of brain tissue in situ results in PMIs with defined and favorable confidence intervals in the range of hours, thus offering a quantitative and objective method for the determination of PMIs. 
Copyright 2004 John Wiley & Sons, Ltd.
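
    The pooling step described above, combining the five metabolite-based PMI estimates weighted by their inverse variances, is standard inverse-variance weighting. A minimal sketch (the per-metabolite estimates and variances would come from the inverted concentration-time functions, which are not reproduced here):

```python
def pool_inverse_variance(estimates, variances):
    """Combine independent estimates by inverse-variance weighting:
    pooled = sum(x_i / v_i) / sum(1 / v_i), with pooled variance
    1 / sum(1 / v_i). Smaller-variance estimates dominate."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * x for w, x in zip(weights, estimates)) / sum(weights)
    pooled_var = 1.0 / sum(weights)
    return pooled, pooled_var
```

The pooled variance is never larger than the smallest individual variance, which is why pooling several metabolites narrows the PMI confidence interval.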

  12. Identification of atrial fibrillation using electrocardiographic RR-interval difference

    NASA Astrophysics Data System (ADS)

    Eliana, M.; Nuryani, N.

    2017-11-01

    Automated detection of atrial fibrillation (AF) is an interesting topic because AF is very dangerous, not only as a trigger of embolic stroke but also through its association with other chronic diseases. In this study, we detect the presence of AF by quantifying the irregularity of RR intervals. We use interval comparison to measure the degree of irregularity within a defined segment: the series of RR intervals is divided into segments of 10 intervals, and every interval in a segment is compared with every other interval in that segment. A threshold is then applied to classify each pairwise difference as low or high (δ). A segment is classified as AF or normal sinus rhythm according to the number of high differences, using a tolerance (β) on the count of high δ. We tested this method on data from 23 patients in the MIT-BIH database and obtained accuracy, sensitivity, and specificity of 84.98%, 91.99%, and 77.85%, respectively.
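
    The interval-comparison idea can be sketched as follows. The threshold `delta_ms` and tolerance `beta` below are illustrative placeholders, not the tuned values from the study, and the fraction-based tolerance is one plausible reading of the count criterion.

```python
def classify_segment(rr_ms, delta_ms=50.0, beta=0.5):
    """Compare every RR interval in the segment with every other one;
    count pairwise differences exceeding delta_ms as 'high', and flag
    the segment as AF when the fraction of high differences exceeds
    the tolerance beta. delta_ms and beta are illustrative values."""
    n = len(rr_ms)
    pairs = high = 0
    for i in range(n):
        for j in range(i + 1, n):
            pairs += 1
            if abs(rr_ms[i] - rr_ms[j]) > delta_ms:
                high += 1
    return "AF" if pairs and high / pairs > beta else "normal"
```

A regular rhythm yields almost no high differences, while AF's irregular RR series pushes the high fraction past the tolerance.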

  13. Influence of polymorphisms within the methotrexate pathway genes on the toxicity and efficacy of methotrexate in patients with juvenile idiopathic arthritis

    PubMed Central

    Yanagimachi, Masakatsu; Naruto, Takuya; Hara, Takuma; Kikuchi, Masako; Hara, Ryoki; Miyamae, Takako; Imagawa, Tomoyuki; Mori, Masaaki; Kaneko, Tetsuji; Morita, Satoshi; Goto, Hiroaki; Yokota, Shumpei

    2011-01-01

    AIMS We investigated whether several polymorphisms within the methotrexate (MTX) pathway genes were related to the toxicity and efficacy of MTX in 92 Japanese patients with articular-type juvenile idiopathic arthritis (JIA). METHODS Eight gene polymorphisms within the MTX pathway genes, namely, RFC, BCRP, MTHFR (two), FPGS, γ-glutamyl hydrolase (GGH; two) and ATIC, were genotyped using TaqMan assays. Liver dysfunction was defined as an increase in alanine transaminase to five times the normal upper limit. Non-responders to MTX were defined as patients refractory to MTX and were therefore treated with biologics. RESULTS The non-TT genotype at GGH T16C was associated with a high risk of liver dysfunction (P = 0.028, odds ratio = 6.90, 95% confidence interval 1.38–34.5), even after adjustment for the duration of MTX treatment. A longer interval from disease onset to treatment (8.5 and 21.3 months, P = 0.029) and rheumatoid factor positivity (P = 0.026, odds ratio = 2.87, 95% confidence interval 1.11–7.39) were associated with lower efficacy of MTX. CONCLUSIONS The non-TT genotype at GGH T16C was associated with a high risk of liver dysfunction, presumably because the C allele of GGH C16T may reduce the activity of GGH. The time interval before MTX treatment and rheumatoid factor positivity were associated with the efficacy of MTX treatment. The pharmacogenetics of the MTX pathway genes affects the toxicity and efficacy of MTX in Japanese JIA patients. PMID:21219404

  14. Time-based partitioning model for predicting neurologically favorable outcome among adults with witnessed bystander out-of-hospital CPA.

    PubMed

    Abe, Toshikazu; Tokuda, Yasuharu; Cook, E Francis

    2011-01-01

    Optimal acceptable time intervals from collapse to bystander cardiopulmonary resuscitation (CPR) for neurologically favorable outcome among adults with witnessed out-of-hospital cardiopulmonary arrest (CPA) have been unclear. Our aim was to assess the optimal acceptable thresholds of the time intervals of CPR for neurologically favorable outcome and survival using a recursive partitioning model. From January 1, 2005 through December 31, 2009, we conducted a prospective population-based observational study across Japan involving consecutive out-of-hospital CPA patients (N = 69,648) who received witnessed bystander CPR. Of 69,648 patients, 34,605 were assigned to the derivation data set and 35,043 to the validation data set. The outcomes of interest were survival and neurologically favorable outcome at one month, the latter defined as category one (good cerebral performance) or two (moderate cerebral disability) of the cerebral performance categories. Based on the recursive partitioning model from the derivation dataset (n = 34,605) to predict the neurologically favorable outcome at one month, 5 min was the acceptable time interval from collapse to CPR initiation; 11 min from collapse to ambulance arrival; 18 min from collapse to return of spontaneous circulation (ROSC); and 19 min from collapse to hospital arrival. Among the validation dataset (n = 35,043), 209/2,292 (9.1%) of all patients with the acceptable time intervals and 1,388/2,706 (52.1%) in the subgroup with the acceptable time intervals and pre-hospital ROSC showed neurologically favorable outcome. Initiation of CPR should be within 5 min for obtaining neurologically favorable outcome among adults with witnessed out-of-hospital CPA. Patients with the acceptable time intervals of bystander CPR and pre-hospital ROSC within 18 min could have a 50% chance of neurologically favorable outcome.
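
    The four thresholds the partitioning model yields combine into a simple decision rule. The sketch below encodes exactly the cutoffs reported in the abstract (5, 11, 18, and 19 minutes from collapse); the function name is invented for the example.

```python
def favorable_time_profile(t_cpr, t_ambulance, t_rosc, t_hospital):
    """True when all four time intervals (minutes from collapse) fall
    within the thresholds from the recursive partitioning model:
    CPR initiation <= 5, ambulance arrival <= 11, ROSC <= 18,
    hospital arrival <= 19."""
    return (t_cpr <= 5 and t_ambulance <= 11
            and t_rosc <= 18 and t_hospital <= 19)
```

In the validation set, patients satisfying this profile who also achieved pre-hospital ROSC had roughly a 50% chance of a neurologically favorable outcome.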

  15. The STEP model: Characterizing simultaneous time effects on practice for flight simulator performance among middle-aged and older pilots.

    PubMed

    Kennedy, Quinn; Taylor, Joy; Noda, Art; Yesavage, Jerome; Lazzeroni, Laura C

    2015-09-01

Understanding the possible effects of the number of practice sessions (practice) and time between practice sessions (interval) among middle-aged and older adults in real-world tasks has important implications for skill maintenance. Prior training and cognitive ability may impact practice and interval effects on real-world tasks. In this study, we took advantage of existing practice data from 5 simulated flights among 263 middle-aged and older pilots with varying levels of flight expertise (defined by U.S. Federal Aviation Administration proficiency ratings). We developed a new Simultaneous Time Effects on Practice (STEP) model: (a) to model the simultaneous effects of practice and interval on performance of the 5 flights, and (b) to examine the effects of selected covariates (i.e., age, flight expertise, and 3 composite measures of cognitive ability). The STEP model demonstrated consistent positive practice effects, negative interval effects, and predicted covariate effects. Age negatively moderated the beneficial effects of practice. Additionally, cognitive processing speed and intraindividual variability (IIV) in processing speed moderated the benefits of practice and/or the negative influence of interval for particular flight performance measures. Expertise did not interact with practice or interval. Results indicated that practice and interval effects occur in simulated flight tasks. However, processing speed and IIV may influence these effects, even among high-functioning adults. Results have implications for the design and assessment of training interventions targeted at middle-aged and older adults for complex real-world tasks. (c) 2015 APA, all rights reserved.

  16. Generating nonlinear FM chirp radar signals by multiple integrations

    DOEpatents

    Doerry, Armin W [Albuquerque, NM

    2011-02-01

    A phase component of a nonlinear frequency modulated (NLFM) chirp radar pulse can be produced by performing digital integration operations over a time interval defined by the pulse width. Each digital integration operation includes applying to a respectively corresponding input parameter value a respectively corresponding number of instances of digital integration.
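The multiple-integration idea can be sketched numerically: integrating a constant chirp-rate parameter twice over the pulse width produces a quadratic (linear-FM) phase, and seeding further integration stages with their own input parameters would add the higher-order terms of an NLFM phase. This is a minimal sketch only; the parameter values and function names below are illustrative, not taken from the patent.

```python
def digital_integrate(samples, dt):
    """Cumulative (running-sum) digital integration of a sampled signal."""
    acc, out = 0.0, []
    for s in samples:
        acc += s * dt
        out.append(acc)
    return out

# Illustrative pulse parameters (not from the patent)
T, n = 10e-6, 1000            # pulse width (s) and sample count
dt = T / n
k = 2e12                      # chirp rate (Hz/s) for the quadratic phase term

# Two integrations of a constant give the linear-FM special case:
# first integration  -> instantaneous frequency f(t) = k * t
# second integration -> phase (in cycles) phi(t) = 0.5 * k * t^2
freq = digital_integrate([k] * n, dt)
phase = digital_integrate(freq, dt)
```

Additional integration stages, each seeded with its own input parameter value, would contribute the cubic and higher-order phase terms that make the chirp nonlinear.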

  17. Annual forest inventory estimates based on the moving average

    Treesearch

    Francis A. Roesch; James R. Steinman; Michael T. Thompson

    2002-01-01

    Three interpretations of the simple moving average estimator, as applied to the USDA Forest Service's annual forest inventory design, are presented. A corresponding approach to composite estimation over arbitrarily defined land areas and time intervals is given for each interpretation, under the assumption that the investigator is armed with only the spatial/...

  18. Grenvillian magmatism in the northern Virginia Blue Ridge: Petrologic implications of episodic granitic magma production and the significance of postorogenic A-type charnockite

    USGS Publications Warehouse

    Tollo, R.P.; Aleinikoff, J.N.; Borduas, E.A.; Dickin, A.P.; McNutt, R.H.; Fanning, C.M.

    2006-01-01

    Grenvillian (1.2 to 1.0 Ga) plutonic rocks in northern Virginia preserve evidence of episodic, mostly granitic magmatism that spanned more than 150 million years (m.y.) of crustal reworking. Crystallization ages determined by sensitive high resolution ion microprobe (SHRIMP) U-Pb isotopic analyses of zircon and monazite, combined with results from previous studies, define three periods of magmatic activity at 1183-1144 Ma (Magmatic Interval I), 1120-1111 Ma (Magmatic Interval II), and 1078-1028 Ma (Magmatic Interval III). Magmatic activity produced dominantly tholeiitic plutons composed of (1) low-silica charnockite, (2) leucogranite, (3) non-leucocratic granitoid (with or without orthopyroxene (opx)), and (4) intermediate biotite-rich granitoid. Field, petrologic, geochemical, and geochronologic data indicate that charnockite and non-charnockitic granitoids were closely associated in both space and time, indicating that presence of opx is related to magmatic conditions, not metamorphic grade. Geochemical and Nd isotopic data, combined with results from experimental studies, indicate that leucogranites (Magmatic Intervals I and III) and non-leucocratic granitoids (Magmatic Intervals I and II) were derived from parental magmas produced by either a high degree of partial melting of isotopically evolved tonalitic sources or less advanced partial melting of dominantly tonalitic sources that also included a more mafic component. Post-orogenic, circa 1050 Ma low-silica charnockite is characterized by A-type compositional affinity including high FeOt/(FeOt + MgO), Ga/Al, Zr, Nb, Y, and Zn, and was derived from parental magmas produced by partial melting of potassic mafic sources in the lower crust. Linear geochemical trends defined by leucogranites, low-silica charnockite, and biotite-rich monzogranite emplaced during Magmatic Interval III reflect differences in source-related characteristics; these features do not represent an igneous fractionation sequence. 
A compositional gap between circa 1160 Ma magnesian low-silica charnockite and penecontemporaneous higher silica lithologies likewise precludes a fractionation relationship among plutons intruded during Magmatic Interval I. Correspondence in timing of magmatic activity between the Blue Ridge and neighboring Mesoproterozoic terranes underscores the widespread nature of Grenvillian processes in the region.

  19. IBM system/360 assembly language interval arithmetic software

    NASA Technical Reports Server (NTRS)

    Phillips, E. J.

    1972-01-01

Computer software designed to perform interval arithmetic is described. An interval is defined as the set of all real numbers between two given numbers, including or excluding one or both endpoints. Interval arithmetic consists of the various elementary arithmetic operations defined on the set of all intervals, such as interval addition, subtraction, union, etc. One of the main applications of interval arithmetic is in the area of error analysis of computer calculations. For example, it has been used successfully to compute bounds on rounding errors in the solution of linear algebraic systems, error bounds in numerical solutions of ordinary differential equations, as well as integral equations and boundary value problems. The described software enables users to implement algorithms of the type described in the references efficiently on the IBM System/360.
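The operations the abstract lists can be sketched in a few lines. This is a hypothetical Python analogue of the System/360 assembly routines, shown only to illustrate the arithmetic on closed intervals; a production implementation would also direct rounding outward at each endpoint, which this sketch omits.

```python
class Interval:
    """Closed interval [lo, hi] with basic interval arithmetic."""
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # [a, b] + [c, d] = [a + c, b + d]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # [a, b] - [c, d] = [a - d, b - c]
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # endpoints of the product are among the four pairwise products
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def union(self, other):
        # interval hull: smallest interval containing both operands
        return Interval(min(self.lo, other.lo), max(self.hi, other.hi))
```

For instance, `Interval(-1, 2) * Interval(3, 4)` yields `[-4, 8]`, since the true product of any pair of values from the operands lies in that range.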

  20. Prevalence, Risk Factors and In-hospital Outcomes of QTc Interval Prolongation in Liver Cirrhosis.

    PubMed

    Zhao, Jiancheng; Qi, Xingshun; Hou, Feifei; Ning, Zheng; Zhang, Xintong; Deng, Han; Peng, Ying; Li, Jing; Wang, Xiaoxi; Li, Hongyu; Guo, Xiaozhong

    2016-09-01

QTc interval prolongation is an electrocardiographic abnormality in liver cirrhosis. The objective of this study was to evaluate the prevalence, risk factors and in-hospital outcomes of QTc interval prolongation in Chinese patients with liver cirrhosis. This was a retrospective analysis of a total of 1,268 patients with liver cirrhosis who were consecutively admitted to our hospital between January 2011 and June 2014. QTc interval data were collected from the medical records. QTc interval prolongation was defined as QTc interval > 440 milliseconds. The prevalence of QTc interval prolongation was 38.2% (485 of 1,268). In the entire cohort, the risk factors for QTc interval prolongation included an older age, a higher proportion of alcohol abuse and ascites, higher bilirubin, blood urea nitrogen, creatinine, prothrombin time, international normalized ratio, Child-Pugh score and model for end-stage liver diseases score, and lower red blood cell (RBC), hemoglobin (Hb), albumin (ALB), alanine aminotransferase and calcium. The in-hospital mortality was not significantly different between patients with and without QTc interval prolongation (2.1% versus 1.3%, P = 0.276). In the subgroup analyses of patients with hepatitis B virus or alcohol alone-related liver cirrhosis, the risk factors included higher bilirubin, creatinine, prothrombin time, international normalized ratio, Child-Pugh score and model for end-stage liver diseases score, and lower RBC, Hb and ALB. In the subgroup analyses of patients with acute upper gastrointestinal bleeding or ascites, the risk factors included lower RBC, Hb and ALB. QTc interval prolongation was frequent in liver cirrhosis. Although QTc interval prolongation was positively associated with alcohol-related liver cirrhosis and more severe liver dysfunction, it did not significantly influence the in-hospital mortality. Copyright © 2016 Southern Society for Clinical Investigation. Published by Elsevier Inc. All rights reserved.

  1. Evaluation of inhomogeneities of repolarization in patients with psoriasis vulgaris

    PubMed Central

    İnci, Sinan; Aksan, Gökhan; Nar, Gökay; Yüksel, Esra Pancar; Ocal, Hande Serra; Çapraz, Mustafa; Yüksel, Serkan; Şahin, Mahmut

    2016-01-01

Introduction: The arrhythmia potential has not been investigated adequately in psoriatic patients. In this study, we assessed the ventricular repolarization dispersion, using the Tp-e interval and the Tp-e/QT ratio, and investigated the association with inflammation. Material and methods: Seventy-one psoriasis vulgaris patients and 70 age- and gender-matched healthy individuals were enrolled in the study. The severity of the disease was calculated using Psoriasis Area and Severity Index scoring. The QTd was defined as the difference between the maximum and minimum QT intervals. The Tp-e interval was defined as the interval from the peak of the T wave to the end of the T wave. The Tp-e interval was corrected for heart rate. The Tp-e/QT ratio was calculated using these measurements. Results: There were no significant differences between the groups with respect to basal clinical and laboratory characteristics (p > 0.05). The Tp-e interval, the corrected Tp-e interval (cTp-e) and the Tp-e/QT ratio were also significantly higher in psoriasis patients compared to the control group (78.5 ±8.0 ms vs. 71.4 ±7.6 ms, p < 0.001, 86.3 ±13.2 ms vs. 77.6 ±9.0 ms, p < 0.001 and 0.21 ±0.02 vs. 0.19 ±0.02, p < 0.001, respectively). A significant correlation was detected between the cTp-e time and the Tp-e/QT ratio and the PASI score in the group of psoriatic patients (r = 0.51, p < 0.001; r = 0.59, p < 0.001, respectively). Conclusions: In our study, we detected a significant increase in the Tp-e interval and the Tp-e/QT ratio in patients with psoriasis vulgaris. The Tp-e interval and the Tp-e/QT ratio may be predictors for ventricular arrhythmias in patients with psoriasis vulgaris. PMID:27904512

  2. A Flexible Toolkit Supporting Knowledge-based Tactical Planning for Ground Forces

    DTIC Science & Technology

    2011-06-01

assigned to each of the Special Areas to model its temporal behaviour. In Figure 5 an optimal path going over two defined intermediate points is...which area can be reached by an armoured infantry platoon within a given time interval, which path should be taken by a support unit to minimize...al. 2008]. Although trained commanders and staff personnel may achieve very accurate planning results, time-consuming procedures are excluded when

  3. CIMP status of interval colon cancers: another piece to the puzzle.

    PubMed

    Arain, Mustafa A; Sawhney, Mandeep; Sheikh, Shehla; Anway, Ruth; Thyagarajan, Bharat; Bond, John H; Shaukat, Aasma

    2010-05-01

Colon cancers diagnosed in the interval after a complete colonoscopy may occur due to limitations of colonoscopy or due to the development of new tumors, possibly reflecting molecular and environmental differences in tumorigenesis resulting in rapid tumor growth. In a previous study from our group, interval cancers (colon cancers diagnosed within 5 years of a complete colonoscopy) were almost four times more likely to demonstrate microsatellite instability (MSI) than non-interval cancers. In this study we extended our molecular analysis to compare the CpG island methylator phenotype (CIMP) status of interval and non-interval colorectal cancers and investigate the relationship between the CIMP and MSI pathways in the pathogenesis of interval cancers. We searched our institution's cancer registry for interval cancers, defined as colon cancers that developed within 5 years of a complete colonoscopy. These were frequency matched in a 1:2 ratio by age and sex to patients with non-interval cancers (defined as colon cancers diagnosed on a patient's first recorded colonoscopy). Archived cancer specimens for all subjects were retrieved and tested for CIMP gene markers. The MSI status of subjects identified between 1989 and 2004 was known from our previous study. Tissue specimens of newly identified cases and controls (between 2005 and 2006) were tested for MSI. There were 1,323 cases of colon cancer diagnosed over the 17-year study period, of which 63 were identified as having interval cancer and matched to 131 subjects with non-interval cancer. Study subjects were almost all Caucasian men. CIMP was present in 57% of interval cancers compared to 33% of non-interval cancers (P=0.004). As shown previously, interval cancers were more likely than non-interval cancers to occur in the proximal colon (63% vs. 39%; P=0.002) and to have MSI (29% vs. 11%; P=0.004). 
In a multivariable logistic regression model, proximal location (odds ratio (OR) 1.85; 95% confidence interval (CI) 1.01-3.8), MSI (OR 2.7; 95% CI 1.1-6.8) and CIMP (OR 2.41; 95% CI 1.2-4.9) were independently associated with interval cancers. CIMP was associated with interval cancers independent of MSI status. There was no difference in 5-year survival between the two groups. Interval cancers are more likely to arise in the proximal colon and demonstrate CIMP, which suggests there may be differences in biology between these and non-interval CRC. Additional studies are needed to determine whether interval cancers arise as a result of missed lesions or accelerated neoplastic progression.

  4. The Time Course of the Probability of Transition Into and Out of REM Sleep

    PubMed Central

    Bassi, Alejandro; Vivaldi, Ennio A.; Ocampo-Garcés, Adrián

    2009-01-01

Study Objectives: A model of rapid eye movement (REM) sleep expression is proposed that assumes underlying regulatory mechanisms operating as inhomogeneous Poisson processes, the overt results of which are the transitions into and out of REM sleep. Design: Based on spontaneously occurring REM sleep episodes (“Episode”) and intervals without REM sleep (“Interval”), 3 variables are defined and evaluated over discrete 15-second epochs using a nonlinear logistic regression method: “Propensity” is the instantaneous rate of into-REM transition occurrence throughout an Interval, “Volatility” is the instantaneous rate of out-of-REM transition occurrence throughout an Episode, and “Opportunity” is the probability of being in non-REM (NREM) sleep at a given time throughout an Interval, a requisite for transition. Setting: 12:12 light:dark cycle, isolated boxes. Participants: Sixteen male Sprague-Dawley rats. Interventions: None. Spontaneous sleep cycles. Measurements and Results: The highest levels of volatility and propensity occur, respectively, at the very beginning of Episodes and Intervals. The new condition stabilizes rapidly, and variables reach nadirs at minute 1.25 and 2.50, respectively. Afterward, volatility increases markedly, reaching values close to the initial level. Propensity increases moderately, the increment being stronger through NREM sleep bouts occurring at the end of long Intervals. Short-term homeostasis is evidenced by longer REM sleep episodes lowering propensity in the following Interval. Conclusions: The stabilization after transitions into Episodes or Intervals and the destabilization after remaining for some time in either condition may be described as resulting from continuous processes building up during Episodes and Intervals. These processes underlie the overt occurrence of transitions. Citation: Bassi A; Vivaldi EA; Ocampo-Garcés A. The time course of the probability of transition into and out of REM sleep. 
SLEEP 2009;32(5):655-669 PMID:19480233
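The inhomogeneous-Poisson view of transitions can be illustrated with a toy discrete-epoch simulation. The 15-second epoch matches the study, but the rate profiles below are invented for illustration; the actual propensity and volatility curves are estimated in the paper by nonlinear logistic regression.

```python
import random

random.seed(42)  # reproducible toy run

EPOCH = 15.0  # seconds per epoch, as in the study

def propensity(t):
    """Into-REM transition rate (1/s) during an Interval; invented profile
    that rises slowly with the age t of the current Interval."""
    return 0.002 + 0.0004 * t / 60.0

def volatility(t):
    """Out-of-REM transition rate (1/s) during an Episode; invented profile,
    high right after the transition and lower afterward."""
    return 0.05 if t < 75.0 else 0.02

state, age, track = "Interval", 0.0, []
for _ in range(2000):  # ~8.3 h of 15-s epochs
    rate = propensity(age) if state == "Interval" else volatility(age)
    # thin the Poisson process: transition in this epoch with prob ~ rate * EPOCH
    if random.random() < rate * EPOCH:
        state = "Episode" if state == "Interval" else "Interval"
        age = 0.0
    else:
        age += EPOCH
    track.append(state)
```

Because the rates depend on the time since the last transition, the simulated Episode and Interval durations are not exponentially distributed, which is the qualitative point of the model.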

  5. Variance of discharge estimates sampled using acoustic Doppler current profilers from moving boats

    USGS Publications Warehouse

    Garcia, Carlos M.; Tarrab, Leticia; Oberg, Kevin; Szupiany, Ricardo; Cantero, Mariano I.

    2012-01-01

    This paper presents a model for quantifying the random errors (i.e., variance) of acoustic Doppler current profiler (ADCP) discharge measurements from moving boats for different sampling times. The model focuses on the random processes in the sampled flow field and has been developed using statistical methods currently available for uncertainty analysis of velocity time series. Analysis of field data collected using ADCP from moving boats from three natural rivers of varying sizes and flow conditions shows that, even though the estimate of the integral time scale of the actual turbulent flow field is larger than the sampling interval, the integral time scale of the sampled flow field is on the order of the sampling interval. Thus, an equation for computing the variance error in discharge measurements associated with different sampling times, assuming uncorrelated flow fields is appropriate. The approach is used to help define optimal sampling strategies by choosing the exposure time required for ADCPs to accurately measure flow discharge.
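Under the uncorrelated-flow-field conclusion the authors reach, the variance of a discharge estimate falls with the number of effectively independent samples in the exposure time. A minimal sketch of that scaling follows; the function name and numbers are illustrative, not from the paper.

```python
def discharge_variance(sigma2, exposure_time, integral_time_scale):
    """Variance of a mean-discharge estimate, assuming samples separated
    by the integral time scale of the sampled flow field are uncorrelated.

    sigma2:              variance of the instantaneous discharge signal
    exposure_time:       total ADCP sampling duration (s)
    integral_time_scale: decorrelation time of the sampled field (s)
    """
    n_eff = max(1.0, exposure_time / integral_time_scale)  # independent samples
    return sigma2 / n_eff
```

Doubling the exposure time halves the variance under this assumption, which is the basis for choosing an exposure time that meets a target uncertainty.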

  6. A pan-Precambrian link between deglaciation and environmental oxidation

    USGS Publications Warehouse

    Raub, T.J.; Kirschvink, J.L.

    2007-01-01

    Despite a continuous increase in solar luminosity to the present, Earth’s glacial record appears to become more frequent, though less severe, over geological time. At least two of the three major Precambrian glacial intervals were exceptionally intense, with solid evidence for widespread sea ice on or near the equator, well within a “Snowball Earth” zone produced by ice-albedo runaway in energy-balance models. The end of the first unambiguously low-latitude glaciation, the early Paleoproterozoic Makganyene event, is associated intimately with the first solid evidence for global oxygenation, including the world’s largest sedimentary manganese deposit. Subsequent low-latitude deglaciations during the Cryogenian interval of the Neoproterozoic Era are also associated with progressive oxidation, and these young Precambrian ice ages coincide with the time when basal animal phyla were diversifying. However, specifically testing hypotheses of cause and effect between Earth’s Neoproterozoic biosphere and glaciation is complicated because large and rapid True Polar Wander events appear to punctuate Neoproterozoic time and may have episodically dominated earlier and later intervals as well, rendering geographic reconstruction and age correlation challenging except for an exceptionally well-defined global paleomagnetic database.

  7. Extracting Hot spots of Topics from Time Stamped Documents

    PubMed Central

    Chen, Wei; Chundi, Parvathi

    2011-01-01

    Identifying time periods with a burst of activities related to a topic has been an important problem in analyzing time-stamped documents. In this paper, we propose an approach to extract a hot spot of a given topic in a time-stamped document set. Topics can be basic, containing a simple list of keywords, or complex. Logical relationships such as and, or, and not are used to build complex topics from basic topics. A concept of presence measure of a topic based on fuzzy set theory is introduced to compute the amount of information related to the topic in the document set. Each interval in the time period of the document set is associated with a numeric value which we call the discrepancy score. A high discrepancy score indicates that the documents in the time interval are more focused on the topic than those outside of the time interval. A hot spot of a given topic is defined as a time interval with the highest discrepancy score. We first describe a naive implementation for extracting hot spots. We then construct an algorithm called EHE (Efficient Hot Spot Extraction) using several efficient strategies to improve performance. We also introduce the notion of a topic DAG to facilitate an efficient computation of presence measures of complex topics. The proposed approach is illustrated by several experiments on a subset of the TDT-Pilot Corpus and DBLP conference data set. The experiments show that the proposed EHE algorithm significantly outperforms the naive one, and the extracted hot spots of given topics are meaningful. PMID:21765568
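The naive search that the EHE algorithm improves upon can be sketched as follows. The discrepancy score used here (mean presence inside the candidate interval minus mean presence outside it) is a plausible stand-in for the paper's definition, not the exact formula.

```python
def hot_spot(presence):
    """Naive hot-spot search: score every interval [i, j] over a sequence
    of per-epoch presence measures and return the best one.

    presence: list of presence-measure values, one per time epoch.
    Returns ((i, j), score) for the interval with the highest discrepancy.
    """
    n = len(presence)
    total = sum(presence)
    best, best_score = None, float("-inf")
    for i in range(n):
        inside = 0.0
        for j in range(i, n):
            inside += presence[j]
            m = j - i + 1
            outside = (total - inside) / (n - m) if n > m else 0.0
            score = inside / m - outside  # stand-in discrepancy score
            if score > best_score:
                best, best_score = (i, j), score
    return best, best_score
```

This brute-force version is O(n^2) in the number of epochs, which is exactly the cost the paper's efficient strategies are designed to avoid.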

  8. TOPSIS-based consensus model for group decision-making with incomplete interval fuzzy preference relations.

    PubMed

    Liu, Fang; Zhang, Wei-Guo

    2014-08-01

Due to the vagueness of real-world environments and the subjective nature of human judgments, it is natural for experts to estimate their judgments by using incomplete interval fuzzy preference relations. In this paper, based on the technique for order preference by similarity to ideal solution method, we present a consensus model for group decision-making (GDM) with incomplete interval fuzzy preference relations. To do this, we first define a new consistency measure for incomplete interval fuzzy preference relations. Second, a goal programming model is proposed to estimate the missing interval preference values and it is guided by the consistency property. Third, an ideal interval fuzzy preference relation is constructed by using the induced ordered weighted averaging operator, where the associated weights characterizing the operator are based on the defined consistency measure. Fourth, a similarity degree between complete interval fuzzy preference relations and the ideal one is defined. The similarity degree is related to the associated weights, and used to aggregate the experts' preference relations in such a way that more importance is given to those with a higher similarity degree. Finally, a new algorithm is given to solve the GDM problem with incomplete interval fuzzy preference relations, which is further applied to partnership selection in formation of virtual enterprises.
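The similarity-weighted aggregation step can be sketched for complete interval preference relations. The distance measure and weighting below are plausible choices for illustration, not necessarily the paper's exact formulas; entries are (lo, hi) interval pairs.

```python
def similarity(R, ideal):
    """1 minus the mean absolute endpoint distance between two interval
    fuzzy preference relations (matrices of (lo, hi) pairs)."""
    d, cnt = 0.0, 0
    for r_row, i_row in zip(R, ideal):
        for (rl, ru), (il, iu) in zip(r_row, i_row):
            d += (abs(rl - il) + abs(ru - iu)) / 2.0
            cnt += 1
    return 1.0 - d / cnt

def aggregate(relations, ideal):
    """Weight each expert's relation by its similarity to the ideal one,
    so more consistent experts count more, then average entrywise."""
    sims = [similarity(R, ideal) for R in relations]
    total = sum(sims)
    w = [s / total for s in sims]
    n = len(ideal)
    k_range = range(len(relations))
    return [[(sum(w[k] * relations[k][i][j][0] for k in k_range),
              sum(w[k] * relations[k][i][j][1] for k in k_range))
             for j in range(n)] for i in range(n)]
```

An expert whose relation coincides with the ideal relation gets similarity 1 and therefore the largest weight in the consensus matrix.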

  9. 40 CFR 63.2855 - How do I determine the quantity of oilseed processed?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... oilseed measurements must be determined on an as received basis, as defined in § 63.2872. The as received... accounting month rather than a calendar month basis, and you have 12 complete accounting months of approximately equal duration in a calendar year, you may substitute the accounting month time interval for the...

  10. Association of time-to-surgery with outcomes in clinical stage I-II pancreatic adenocarcinoma treated with upfront surgery.

    PubMed

    Swords, Douglas S; Zhang, Chong; Presson, Angela P; Firpo, Matthew A; Mulvihill, Sean J; Scaife, Courtney L

    2018-04-01

Time-to-surgery from cancer diagnosis has increased in the United States. We aimed to determine the association between time-to-surgery and oncologic outcomes in patients with resectable pancreatic ductal adenocarcinoma undergoing upfront surgery. The 2004-2012 National Cancer Database was reviewed for patients undergoing curative-intent surgery without neoadjuvant therapy for clinical stage I-II pancreatic ductal adenocarcinoma. A multivariable Cox model with restricted cubic splines was used to define time-to-surgery as short (1-14 days), medium (15-42), and long (43-120). Overall survival was examined using Cox shared frailty models. Secondary outcomes were examined using mixed-effects logistic regression models. Of 16,763 patients, time-to-surgery was short in 34.4%, medium in 51.6%, and long in 14.0%. More short time-to-surgery patients were young, privately insured, healthy, and treated at low-volume hospitals. Adjusted hazards of mortality were lower for medium (hazard ratio 0.94, 95% confidence interval, 0.90, 0.97) and long time-to-surgery (hazard ratio 0.91, 95% confidence interval, 0.86, 0.96) than short. There were no differences in adjusted odds of node positivity, clinical to pathologic upstaging, being unresectable or stage IV at exploration, and positive margins. Medium time-to-surgery patients had higher adjusted odds (odds ratio 1.11, 95% confidence interval, 1.03, 1.20) of receiving an adequate lymphadenectomy than short. Ninety-day mortality was lower in medium (odds ratio 0.75, 95% confidence interval, 0.65, 0.85) and long time-to-surgery (odds ratio 0.72, 95% confidence interval, 0.60, 0.88) than short. In this observational analysis, short time-to-surgery was associated with slightly shorter overall survival and higher perioperative mortality. These results may suggest that delays for medical optimization and referral to high volume surgeons are safe. Published by Elsevier Inc.

  11. Defining a procedure for predicting the duration of the approximately isothermal segments within the proposed drying regime as a function of the drying air parameters

    NASA Astrophysics Data System (ADS)

    Vasić, M.; Radojević, Z.

    2017-08-01

One of the main disadvantages of the recently reported method for setting up the drying regime, based on the theory of moisture migration during drying, lies in the fact that it requires a large number of isothermal experiments. In addition, each isothermal experiment requires the use of different drying air parameters. The main goal of this paper was to find a way to reduce the number of isothermal experiments without affecting the quality of the previously proposed calculation method. The first task was to define the lower and upper inputs as well as the output of the “black box” which will be used in the Box-Wilson orthogonal multi-factorial experimental design. Three inputs (drying air temperature, humidity and velocity) were used within the experimental design. The output parameter of the model represents the time interval between any two chosen characteristic points presented on the Deff–t curve. The second task was to calculate the output parameter for each planned experiment. The final output of the model is an equation which can predict the time interval between any two chosen characteristic points as a function of the drying air parameters. This equation is valid for any value of the drying air parameters within the defined area designated by the lower and upper limiting values.

  12. Influence of polymorphisms within the methotrexate pathway genes on the toxicity and efficacy of methotrexate in patients with juvenile idiopathic arthritis.

    PubMed

    Yanagimachi, Masakatsu; Naruto, Takuya; Hara, Takuma; Kikuchi, Masako; Hara, Ryoki; Miyamae, Takako; Imagawa, Tomoyuki; Mori, Masaaki; Kaneko, Tetsuji; Morita, Satoshi; Goto, Hiroaki; Yokota, Shumpei

    2011-02-01

    We investigated whether several polymorphisms within the methotrexate (MTX) pathway genes were related to the toxicity and efficacy of MTX in 92 Japanese patients with articular-type juvenile idiopathic arthritis (JIA). Eight gene polymorphisms within the MTX pathway genes, namely, RFC, BCRP, MTHFR (two), FPGS, γ-glutamyl hydrolase (GGH; two) and ATIC, were genotyped using TaqMan assays. Liver dysfunction was defined as an increase in alanine transaminase to five times the normal upper limit. Non-responders to MTX were defined as patients refractory to MTX and were therefore treated with biologics. The non-TT genotype at GGH T16C was associated with a high risk of liver dysfunction (P=0.028, odds ratio=6.90, 95% confidence interval 1.38-34.5), even after adjustment for the duration of MTX treatment. A longer interval from disease onset to treatment (8.5 and 21.3 months, P=0.029) and rheumatoid factor positivity (P=0.026, odds ratio=2.87, 95% confidence interval 1.11-7.39) were associated with lower efficacy of MTX. The non-TT genotype at GGH T16C was associated with a high risk of liver dysfunction, presumably because the C allele of GGH C16T may reduce the activity of GGH. The time interval before MTX treatment and rheumatoid factor positivity were associated with the efficacy of MTX treatment. The pharmacogenetics of the MTX pathway genes affects the toxicity and efficacy of MTX in Japanese JIA patients. © 2011 The Authors. British Journal of Clinical Pharmacology © 2011 The British Pharmacological Society.

  13. Estimation of Initial and Response Times of Laser Dew-Point Hygrometer by Measurement Simulation

    NASA Astrophysics Data System (ADS)

    Matsumoto, Sigeaki; Toyooka, Satoru

    1995-10-01

The initial and the response times of the laser dew-point hygrometer were evaluated by measurement simulation. The simulation was based on loop computations of the surface temperature of a plate with dew deposition, the quantity of dew deposited and the intensity of scattered light from the surface at each short interval of measurement. The initial time was defined as the time necessary for the hygrometer to reach a temperature within ±0.5°C of the measured dew point from the start time of measurement, and the response time was also defined for stepwise dew-point changes of +5°C and -5°C. The simulation results are in approximate agreement with the recorded temperature and intensity of scattered light of the hygrometer. The evaluated initial time ranged from 0.3 min to 5 min in the temperature range from 0°C to 60°C, and the response time was also evaluated to be from 0.2 min to 3 min.

  14. Introduction to the Neutrosophic Quantum Theory

    NASA Astrophysics Data System (ADS)

    Smarandache, Florentin

    2014-10-01

Neutrosophic Quantum Theory (NQT) is the study of the principle that certain physical quantities can assume neutrosophic values, instead of discrete values as in quantum theory. These quantities are thus neutrosophically quantized. A neutrosophic value (neutrosophic amount) is expressed by a set (mostly an interval) that approximates (or includes) a discrete value. An oscillator can lose or gain energy by some neutrosophic amount (we mean neither continuously nor discretely, but as a series of integral sets: S, 2S, 3S, ..., where S is a set). In the most general form, one has an ensemble of sets of sets, i.e. R1S1, R2S2, R3S3, ..., where all Rn and Sn are sets that may vary as a function of time and of other parameters. Several such sets may be equal, or may be reduced to points, or may be empty. (The multiplication of two sets A and B is classically defined as: AB = {ab : a ∈ A and b ∈ B}. Similarly, a number n times a set A is defined as: nA = {na : a ∈ A}.) The unit of neutrosophic energy is Hν, where H is a set (in particular an interval) that includes Planck's constant h, and ν is the frequency. Therefore, an oscillator could change its energy by a neutrosophic number of quanta: Hν, 2Hν, 3Hν, etc. For example, when H is an interval [h1, h2], with 0 ≤ h1 ≤ h2, that contains Planck's constant h, then one has: [h1ν, h2ν], [2h1ν, 2h2ν], [3h1ν, 3h2ν], ..., as a series of intervals of energy change of the oscillator. The most general form of the units of neutrosophic energy is Hnνn, where all Hn and νn are sets that, similarly as above, may vary as a function of time and of other oscillator and environment parameters. Neutrosophic quantum theory combines classical mechanics and quantum mechanics.
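The interval form of the energy units follows directly from the set-scaling definition nA = {na : a ∈ A}. A minimal sketch, with illustrative numerical values for the interval H and the frequency:

```python
def scale_interval(n, iv):
    """n * [lo, hi] = {n * a : a in [lo, hi]} for a real scalar n."""
    lo, hi = iv
    return (n * lo, n * hi) if n >= 0 else (n * hi, n * lo)

# Interval H = [h1, h2] chosen to contain Planck's constant h ≈ 6.626e-34 J·s
h1, h2 = 6.6e-34, 6.7e-34
nu = 5.0e14                      # an optical frequency, for illustration
H_nu = (h1 * nu, h2 * nu)        # the unit of neutrosophic energy, Hν

# Series of interval energy changes of the oscillator: Hν, 2Hν, 3Hν, ...
levels = [scale_interval(n, H_nu) for n in (1, 2, 3)]
```

Each element of `levels` is the interval [n·h1·ν, n·h2·ν], matching the series [h1ν, h2ν], [2h1ν, 2h2ν], [3h1ν, 3h2ν] in the abstract.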

  15. Using low-frequency earthquake families on the San Andreas fault as deep creepmeters

    NASA Astrophysics Data System (ADS)

    Thomas, A.; Beeler, N. M.; Bletery, Q.; Burgmann, R.; Shelly, D. R.

    2017-12-01

The San Andreas fault hosts tectonic tremor and low-frequency earthquakes (LFEs) similar to those in subduction zone environments. These LFEs are grouped into families based on waveform similarity and locate between 16 and 29 km depth along a 150-km-long section of the fault centered on Parkfield, CA. Within individual LFE families, event occurrence is not steady. In some families, bursts of a few events recur on timescales of days, while in other families nearly quiescent periods that often last for months are followed by episodes where hundreds of events occur over the course of a few days. These two styles of LFE occurrence are called continuous and episodic, respectively. LFEs are often assumed to reflect persistent regions that periodically fail during the aseismic shear of the surrounding fault, allowing them to be used as creepmeters. We test this idea by formalizing the definition of a creepmeter (the LFE occurrence rate is proportional to the local fault slip rate) and determining whether this definition is consistent with the observations, and if so, over what timescale. We use the recurrence intervals of LFEs within individual families to create a catalog of LFE bursts. For the episodic families, we consider both longer-duration (multiday) inferred creep episodes (dubbed long-timescale episodic) and the frequent short-term bursts of events that occur many times during inferred creep episodes (dubbed short-timescale episodic). We then use the recurrence intervals of LFE bursts to estimate the timing, duration, recurrence interval, slip, and slip rate associated with inferred slow slip events (SSEs). We find that continuous families and the short-timescale episodic families appear to be inconsistent with our definition of a creepmeter (defined on the recurrence-interval timescale) because their estimated durations are not physically meaningful. A straightforward interpretation of the frequent short-term bursts of the continuous and short-timescale episodic families is that they do not represent individual creep events but rather are persistent asperities that are driven to failure by quasi-continuous creep on the surrounding fault. In contrast, episodic families likely define sections of the fault where slip is distinctly episodic in well-defined SSEs that slip at 15 times the long-term rate.

  16. Reasoning about real-time systems with temporal interval logic constraints on multi-state automata

    NASA Technical Reports Server (NTRS)

    Gabrielian, Armen

    1991-01-01

Models of real-time systems using a single paradigm often turn out to be inadequate, whether the paradigm is based on states, rules, event sequences, or logic. A model-based approach to reasoning about real-time systems is presented in which a temporal interval logic called TIL is employed to define constraints on a new type of high-level automata. The combination, called hierarchical multi-state (HMS) machines, can be used to formally model a real-time system, a dynamic set of requirements, the environment, heuristic knowledge about planning-related problem solving, and the computational states of the reasoning mechanism. In this framework, mathematical techniques were developed for: (1) proving the correctness of a representation; (2) planning concurrent tasks to achieve goals; and (3) scheduling plans to satisfy complex temporal constraints. HMS machines allow reasoning about a real-time system from a model of how truth arises instead of merely depending on what is true in a system.

  17. qFeature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-09-14

This package contains statistical routines for extracting features from multivariate time-series data, which can then be used for subsequent multivariate statistical analysis to identify patterns and anomalous behavior. It calculates local linear or quadratic regression model fits to moving windows for each series and then summarizes the model coefficients across user-defined time intervals for each series. These methods are domain agnostic, but they have been successfully applied to a variety of domains, including commercial aviation and electric power grid data.
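A minimal sketch of the idea described above (an assumption for illustration, not the package's actual R implementation): fit a degree-1 model to each moving window of a series, then summarize the slope coefficients over a time interval:

```python
# Illustrative moving-window feature extraction: local linear fits, then a
# summary of the slope coefficient over an interval. Not qFeature itself.
import numpy as np

def window_slopes(y, width):
    """Least-squares slope of y against index within each moving window."""
    t = np.arange(width)
    slopes = []
    for start in range(len(y) - width + 1):
        w = y[start:start + width]
        b1 = np.polyfit(t, w, 1)[0]   # degree-1 fit; index 0 is the slope
        slopes.append(b1)
    return np.array(slopes)

y = np.array([0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0])
s = window_slopes(y, 3)
# Summarize over a user-defined interval, e.g. the mean slope of the first half.
mean_first = s[: len(s) // 2].mean()
```

The same summary (mean, variance, extremes, etc.) could be taken over any user-defined interval, and the quadratic case replaces the degree-1 fit with a degree-2 fit.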

  18. NUMERICAL METHODS FOR SOLVING THE MULTI-TERM TIME-FRACTIONAL WAVE-DIFFUSION EQUATION.

    PubMed

    Liu, F; Meerschaert, M M; McGough, R J; Zhuang, P; Liu, Q

    2013-03-01

    In this paper, the multi-term time-fractional wave-diffusion equations are considered. The multi-term time fractional derivatives are defined in the Caputo sense, whose orders belong to the intervals [0,1], [1,2), [0,2), [0,3), [2,3) and [2,4), respectively. Some computationally effective numerical methods are proposed for simulating the multi-term time-fractional wave-diffusion equations. The numerical results demonstrate the effectiveness of theoretical analysis. These methods and techniques can also be extended to other kinds of the multi-term fractional time-space models with fractional Laplacian.
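For reference, the Caputo fractional derivative of order α used in such formulations is (a standard textbook definition, not specific to this paper):

```latex
{}^{C}D_t^{\alpha}\,u(t)
  = \frac{1}{\Gamma(m-\alpha)} \int_0^t (t-\tau)^{m-\alpha-1}\, u^{(m)}(\tau)\, d\tau,
\qquad m-1 < \alpha < m,\ m \in \mathbb{N},
```

so that, for example, an order in the interval [1,2) requires m = 2.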

  19. NUMERICAL METHODS FOR SOLVING THE MULTI-TERM TIME-FRACTIONAL WAVE-DIFFUSION EQUATION

    PubMed Central

    Liu, F.; Meerschaert, M.M.; McGough, R.J.; Zhuang, P.; Liu, Q.

    2013-01-01

    In this paper, the multi-term time-fractional wave-diffusion equations are considered. The multi-term time fractional derivatives are defined in the Caputo sense, whose orders belong to the intervals [0,1], [1,2), [0,2), [0,3), [2,3) and [2,4), respectively. Some computationally effective numerical methods are proposed for simulating the multi-term time-fractional wave-diffusion equations. The numerical results demonstrate the effectiveness of theoretical analysis. These methods and techniques can also be extended to other kinds of the multi-term fractional time-space models with fractional Laplacian. PMID:23772179

  20. Winnowing sequences from a database search.

    PubMed

    Berman, P; Zhang, Z; Wolf, Y I; Koonin, E V; Miller, W

    2000-01-01

In database searches for sequence similarity, matches to a distinct sequence region (e.g., a protein domain) are frequently obscured by numerous matches to another region of the same sequence. To cope with this problem, algorithms are developed to discard redundant matches. One model for this problem begins with a list of intervals, each with an associated score; each interval gives the range of positions in the query sequence that align to a database sequence, and the score is that of the alignment. If interval I is contained in interval J, and I's score is less than J's, then I is said to be dominated by J. The problem is then to identify each interval that is dominated by at least K other intervals, where K is a given level of "tolerable redundancy." An algorithm is developed to solve the problem in O(N log N) time and O(N*) space, where N is the number of intervals and N* is a precisely defined value that never exceeds N and is frequently much smaller. This criterion for discarding database hits has been implemented in the BLAST program, as illustrated herein with examples. Several variations and extensions of this approach are also described.
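The domination criterion can be sketched as follows; this naive O(N²) version is for illustration only and is not the paper's O(N log N) algorithm:

```python
# Naive winnowing sketch: discard every interval dominated by at least K
# others. Interval I=(s, e, score) is dominated by J=(S, E, SCORE) when
# [s, e] is contained in [S, E] and score < SCORE.

def winnow(intervals, K):
    kept = []
    for i, (s, e, sc) in enumerate(intervals):
        dominators = sum(
            1
            for j, (S, E, SC) in enumerate(intervals)
            if j != i and S <= s and e <= E and sc < SC
        )
        if dominators < K:
            kept.append((s, e, sc))
    return kept

# Hypothetical alignment hits against one query region.
hits = [(10, 90, 50), (20, 80, 30), (30, 70, 10), (100, 200, 40)]
filtered = winnow(hits, K=2)
```

Here (30, 70, 10) sits inside two higher-scoring intervals and is discarded, while the hit to the separate region (100, 200, 40) survives untouched.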

  1. Integrated payload and mission planning, phase 3. Volume 2: Logic/Methodology for preliminary grouping of spacelab and mixed cargo payloads

    NASA Technical Reports Server (NTRS)

    Rodgers, T. E.; Johnson, J. F.

    1977-01-01

    The logic and methodology for a preliminary grouping of Spacelab and mixed-cargo payloads is proposed in a form that can be readily coded into a computer program by NASA. The logic developed for this preliminary cargo grouping analysis is summarized. Principal input data include the NASA Payload Model, payload descriptive data, Orbiter and Spacelab capabilities, and NASA guidelines and constraints. The first step in the process is a launch interval selection in which the time interval for payload grouping is identified. Logic flow steps are then taken to group payloads and define flight configurations based on criteria that includes dedication, volume, area, orbital parameters, pointing, g-level, mass, center of gravity, energy, power, and crew time.

  2. Long Time to Diagnosis of Medulloblastoma in Children Is Not Associated with Decreased Survival or with Worse Neurological Outcome

    PubMed Central

    Brasme, Jean-Francois; Grill, Jacques; Doz, Francois; Lacour, Brigitte; Valteau-Couanet, Dominique; Gaillard, Stephan; Delalande, Olivier; Aghakhani, Nozar; Puget, Stéphanie; Chalumeau, Martin

    2012-01-01

    Background The long time to diagnosis of medulloblastoma, one of the most frequent brain tumors in children, is the source of painful remorse and sometimes lawsuits. We analyzed its consequences for tumor stage, survival, and sequelae. Patients and Methods This retrospective population-based cohort study included all cases of pediatric medulloblastoma from a region of France between 1990 and 2005. We collected the demographic, clinical, and tumor data and analyzed the relations between the interval from symptom onset until diagnosis, initial disease stage, survival, and neuropsychological and neurological outcome. Results The median interval from symptom onset until diagnosis for the 166 cases was 65 days (interquartile range 31–121, range 3–457). A long interval (defined as longer than the median) was associated with a lower frequency of metastasis in the univariate and multivariate analyses and with a larger tumor volume, desmoplastic histology, and longer survival in the univariate analysis, but not after adjustment for confounding factors. The time to diagnosis was significantly associated with IQ score among survivors. No significant relation was found between the time to diagnosis and neurological disability. In the 62 patients with metastases, a long prediagnosis interval was associated with a higher T stage, infiltration of the fourth ventricle floor, and incomplete surgical resection; it nonetheless did not influence survival significantly in this subgroup. Conclusions We found complex and often inverse relations between time to diagnosis of medulloblastoma in children and initial severity factors, survival, and neuropsychological and neurological outcome. This interval appears due more to the nature of the tumor and its progression than to parental or medical factors. These conclusions should be taken into account in the information provided to parents and in expert assessments produced for malpractice claims. PMID:22485143

  3. Techniques for obtaining regional radiation budgets from satellite radiometer observations, phase 4 and phase 5. Ph.D. Thesis. Final Report

    NASA Technical Reports Server (NTRS)

    Pina, J. F.; House, F. B.

    1976-01-01

    A scheme was developed which divides the earth-atmosphere system into 2060 elemental areas. The regions previously described are defined in terms of these elemental areas which are fixed in size and position as the satellite moves. One method, termed the instantaneous technique, yields values of the radiant emittance (We) and the radiant reflectance (Wr) which the regions have during the time interval of a single satellite pass. The number of observations matches the number of regions under study and a unique solution is obtained using matrix inversion. The other method (termed the best fit technique), yields time averages of We and Wr for large time intervals (e.g., months, seasons). The number of observations in this technique is much greater than the number of regions considered, and an approximate solution is obtained by the method of least squares.

  4. Differences in night-time and daytime ambulatory blood pressure when diurnal periods are defined by self-report, fixed-times, and actigraphy: Improving the Detection of Hypertension study.

    PubMed

    Booth, John N; Muntner, Paul; Abdalla, Marwah; Diaz, Keith M; Viera, Anthony J; Reynolds, Kristi; Schwartz, Joseph E; Shimbo, Daichi

    2016-02-01

    To determine whether defining diurnal periods by self-report, fixed-time, or actigraphy produce different estimates of night-time and daytime ambulatory blood pressure (ABP). Over a median of 28 days, 330 participants completed two 24-h ABP and actigraphy monitoring periods with sleep diaries. Fixed night-time and daytime periods were defined as 0000-0600 h and 1000-2000 h, respectively. Using the first ABP period, within-individual differences for mean night-time and daytime ABP and kappa statistics for night-time and daytime hypertension (systolic/diastolic ABP≥120/70 mmHg and ≥135/85 mmHg, respectively) were estimated comparing self-report, fixed-time, or actigraphy for defining diurnal periods. Reproducibility of ABP was also estimated. Within-individual mean differences in night-time systolic ABP were small, suggesting little bias, when comparing the three approaches used to define diurnal periods. The distribution of differences, represented by 95% confidence intervals (CI), in night-time systolic and diastolic ABP and daytime systolic and diastolic ABP was narrowest for self-report versus actigraphy. For example, mean differences (95% CI) in night-time systolic ABP for self-report versus fixed-time was -0.53 (-6.61, +5.56) mmHg, self-report versus actigraphy was 0.91 (-3.61, +5.43) mmHg, and fixed-time versus actigraphy was 1.43 (-5.59, +8.46) mmHg. Agreement for night-time and daytime hypertension was highest for self-report versus actigraphy: kappa statistic (95% CI) = 0.91 (0.86,0.96) and 1.00 (0.98,1.00), respectively. The reproducibility of mean ABP and hypertension categories was similar using each approach. Given the high agreement with actigraphy, these data support using self-report to define diurnal periods on ABP monitoring. Further, the use of fixed-time periods may be a reasonable alternative approach.
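The fixed-time approach can be sketched with hypothetical readings (the window boundaries come from the abstract; the data and function are invented for illustration):

```python
# Illustrative only: mean night-time and daytime systolic ABP using the fixed
# windows 0000-0600 h and 1000-2000 h described in the abstract.

def fixed_time_means(readings):
    """readings: list of (hour_of_day, systolic_mmHg) tuples."""
    night = [bp for h, bp in readings if 0 <= h < 6]    # 0000-0600 h
    day = [bp for h, bp in readings if 10 <= h < 20]    # 1000-2000 h
    return sum(night) / len(night), sum(day) / len(day)

readings = [(1, 110), (3, 114), (5, 112), (11, 130), (15, 134), (19, 126)]
night_mean, day_mean = fixed_time_means(readings)
```

The self-report and actigraphy approaches differ only in how the night window is chosen per individual, not in the averaging itself.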

  5. Genetic analysis of longevity in Dutch dairy cattle using random regression.

    PubMed

    van Pelt, M L; Meuwissen, T H E; de Jong, G; Veerkamp, R F

    2015-06-01

    Longevity, productive life, or lifespan of dairy cattle is an important trait for dairy farmers, and it is defined as the time from first calving to the last test date for milk production. Methods for genetic evaluations need to account for censored data; that is, records from cows that are still alive. The aim of this study was to investigate whether these methods also need to take account of survival being genetically a different trait across the entire lifespan of a cow. The data set comprised 112,000 cows with a total of 3,964,449 observations for survival per month from first calving until 72 mo in productive life. A random regression model with second-order Legendre polynomials was fitted for the additive genetic effect. Alternative parameterizations were (1) different trait definitions for the length of time interval for survival after first calving (1, 3, 6, and 12 mo); (2) linear or threshold model; and (3) differing the order of the Legendre polynomial. The partial derivatives of a profit function were used to transform variance components on the survival scale to those for lifespan. Survival rates were higher in early life than later in life (99 vs. 95%). When survival was defined over 12-mo intervals survival curves were smooth compared with curves when 1-, 3-, or 6-mo intervals were used. Heritabilities in each interval were very low and ranged from 0.002 to 0.031, but the heritability for lifespan over the entire period of 72 mo after first calving ranged from 0.115 to 0.149. Genetic correlations between time intervals ranged from 0.25 to 1.00. Genetic parameters and breeding values for the genetic effect were more sensitive to the trait definition than to whether a linear or threshold model was used or to the order of Legendre polynomial used. 
Cumulative survival up to the first 6 mo predicted lifespan with an accuracy of only 0.79 to 0.85; that is, reliability of breeding value with many daughters in the first 6 mo can be, at most, 0.62 to 0.72, and changes of breeding values are still expected when daughters are getting older. Therefore, an improved model for genetic evaluation should treat survival as different traits during the lifespan by splitting lifespan in time intervals of 6 mo or less to avoid overestimated reliabilities and changes in breeding values when daughters are getting older. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  6. Modeling Finite-Time Failure Probabilities in Risk Analysis Applications.

    PubMed

    Dimitrova, Dimitrina S; Kaishev, Vladimir K; Zhao, Shouqi

    2015-10-01

    In this article, we introduce a framework for analyzing the risk of systems failure based on estimating the failure probability. The latter is defined as the probability that a certain risk process, characterizing the operations of a system, reaches a possibly time-dependent critical risk level within a finite-time interval. Under general assumptions, we define two dually connected models for the risk process and derive explicit expressions for the failure probability and also the joint probability of the time of the occurrence of failure and the excess of the risk process over the risk level. We illustrate how these probabilistic models and results can be successfully applied in several important areas of risk analysis, among which are systems reliability, inventory management, flood control via dam management, infectious disease spread, and financial insolvency. Numerical illustrations are also presented. © 2015 Society for Risk Analysis.
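A crude Monte Carlo sketch of the failure probability just defined (a simple Gaussian random walk stands in for the risk process, and the critical level is constant; the paper derives explicit expressions under its own, more general model assumptions):

```python
# Estimate P(risk process reaches the critical level within a finite horizon)
# by simulation. The random-walk dynamics and parameters are assumptions.
import random

def failure_probability(level, horizon, trials=20000, seed=1):
    random.seed(seed)
    failures = 0
    for _ in range(trials):
        x = 0.0
        for _ in range(horizon):
            x += random.gauss(0.0, 1.0)   # one step of the risk process
            if x >= level:                # first crossing = system failure
                failures += 1
                break
    return failures / trials

p = failure_probability(level=5.0, horizon=50)
```

Recording the step index at the first crossing, together with the overshoot `x - level`, would similarly estimate the joint law of failure time and excess mentioned in the abstract.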

  7. Partitioning the grapevine growing season in the Douro Valley of Portugal: accumulated heat better than calendar dates

    NASA Astrophysics Data System (ADS)

    Real, António C.; Borges, José; Cabral, J. Sarsfield; Jones, Gregory V.

    2015-08-01

    Temperature and water status profiles during the growing season are the most important factors influencing the ripening of wine grapes. To model weather influences on the quality and productivity of the vintages, it is necessary to partition the growing season into smaller growth intervals in which weather variables are evaluated. A significant part of past and ongoing research on the relationships between weather and wine quality uses calendar-defined intervals to partition the growing season. The phenology of grapevines is not determined by calendar dates but by several factors such as accumulated heat. To examine the accuracy of different approaches, this work analyzed the difference in average temperature and accumulated precipitation using growth intervals with boundaries defined by means of estimated historical phenological dates and intervals defined by means of accumulated heat or average calendar dates of the Douro Valley of Portugal. The results show that in situations where there is an absence of historical phenological dates and/or no available data that makes the estimation of those dates possible, it is more accurate to use grapevine heat requirements than calendar dates to define growth interval boundaries. Additionally, we analyzed the ability of the length of growth intervals with boundaries based on grapevine heat requirements to differentiate the best from the worst vintage years with the results showing that vintage quality is strongly related to the phenological events. Finally, we analyzed the variability of growth interval lengths in the Douro Valley during 1980-2009 with the results showing a tendency for earlier grapevine physiology.

  8. Using Low-Frequency Earthquake Families on the San Andreas Fault as Deep Creepmeters

    NASA Astrophysics Data System (ADS)

    Thomas, A. M.; Beeler, N. M.; Bletery, Q.; Burgmann, R.; Shelly, D. R.

    2018-01-01

The central section of the San Andreas Fault hosts tectonic tremor and low-frequency earthquakes (LFEs) similar to those in subduction zone environments. LFEs are often interpreted as persistent regions that repeatedly fail during the aseismic shear of the surrounding fault, allowing them to be used as creepmeters. We test this idea by using the recurrence intervals of individual LFEs within LFE families to estimate the timing, duration, recurrence interval, slip, and slip rate associated with inferred slow slip events. We formalize the definition of a creepmeter and determine whether this definition is consistent with our observations. We find that episodic families reflect surrounding creep over the interevent time, while the continuous families and the short-timescale bursts that occur as part of the episodic families do not. However, when these families are evaluated on timescales longer than the interevent time, these events can also be used to meter slip. A straightforward interpretation of episodic families is that they define sections of the fault where slip is distinctly episodic in well-defined slow slip events that slip at 16 times the long-term rate. In contrast, the frequent short-term bursts of the continuous and short-timescale episodic families likely do not represent individual creep events but rather are persistent asperities that are driven to failure by quasi-continuous creep on the surrounding fault. Finally, we find that the moment-duration scaling of our inferred creep events is inconsistent with the proposed linear moment-duration scaling. However, caution must be exercised when attempting to determine scaling with incomplete knowledge of scale.

  9. An IDS Alerts Aggregation Algorithm Based on Rough Set Theory

    NASA Astrophysics Data System (ADS)

    Zhang, Ru; Guo, Tao; Liu, Jianyi

    2018-03-01

Within a system in which several IDSs have been deployed, a great number of alerts can be triggered by a single security event, making real alerts harder to find. To deal with redundant alerts, we propose a scheme based on rough set theory. Using basic concepts from rough set theory, the importance of each attribute in an alert is calculated first. With the resulting attribute importances, we compute the similarity of two alerts, which is compared with a pre-defined threshold to determine whether the two alerts can be aggregated. The time interval must also be taken into consideration: the allowed time interval is computed individually for each alert type, since different types of alerts may have different time gaps between two alerts. At the end of this paper, we apply the proposed scheme to the DARPA98 dataset, and the experimental results show that our scheme can efficiently reduce the redundancy of alerts, so that administrators of a security system can avoid wasting time on useless alerts.
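The aggregation test can be sketched as follows. The attribute weights, threshold, and maximum time gap below are invented placeholders; in the paper the attribute importances come from the rough-set computation rather than being fixed by hand:

```python
# Illustrative alert aggregation: weighted attribute similarity plus an
# allowed time gap. Weights and threshold are assumed values.

def similarity(a, b, weights):
    """Weighted fraction of matching attributes between alerts a and b."""
    total = sum(weights.values())
    matched = sum(w for k, w in weights.items() if a.get(k) == b.get(k))
    return matched / total

def can_aggregate(a, b, weights, threshold, max_gap):
    return (similarity(a, b, weights) >= threshold
            and abs(a["time"] - b["time"]) <= max_gap)

weights = {"src_ip": 0.4, "dst_ip": 0.4, "type": 0.2}  # assumed importances
a1 = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.9", "type": "scan", "time": 100}
a2 = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.9", "type": "scan", "time": 130}
merged = can_aggregate(a1, a2, weights, threshold=0.8, max_gap=60)
```

Making `max_gap` depend on the alert type captures the per-type allowed interval described in the abstract.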

  10. Task Space Angular Velocity Blending for Real-Time Trajectory Generation

    NASA Technical Reports Server (NTRS)

    Volpe, Richard A. (Inventor)

    1997-01-01

    The invention is embodied in a method of controlling a robot manipulator moving toward a target frame F(sub 0) with a target velocity v(sub 0) including a linear target velocity v and an angular target velocity omega(sub 0) to smoothly and continuously divert the robot manipulator to a subsequent frame F(sub 1) by determining a global transition velocity v(sub 1), the global transition velocity including a linear transition velocity v(sub 1) and an angular transition velocity omega(sub 1), defining a blend time interval 2(tau)(sub 0) within which the global velocity of the robot manipulator is to be changed from a global target velocity v(sub 0) to the global transition velocity v(sub 1) and dividing the blend time interval 2(tau)(sub 0) into discrete time segments (delta)t. During each one of the discrete time segments delta t of the blend interval 2(tau)(sub 0), a blended global velocity v of the manipulator is computed as a blend of the global target velocity v(sub 0) and the global transition velocity v(sub 1), the blended global velocity v including a blended angular velocity omega and a blended linear velocity v, and then, the manipulator is rotated by an incremental rotation corresponding to an integration of the blended angular velocity omega over one discrete time segment (delta)t.

  11. Detection of abnormal item based on time intervals for recommender systems.

    PubMed

    Gao, Min; Yuan, Quan; Ling, Bin; Xiong, Qingyu

    2014-01-01

With the rapid development of e-business, personalized recommendation has become a core competence for enterprises to gain profits and improve customer satisfaction. Although collaborative filtering is the most successful approach for building a recommender system, it suffers from "shilling" attacks. In recent years, research on shilling attacks has greatly improved. However, existing approaches suffer from serious problems of attack-model dependency and high computational cost. To address these problems, an approach for the detection of abnormal items is proposed in this paper. First, two features common to all attack models are analyzed. A revised bottom-up discretization approach based on time intervals and these features is then proposed for the detection. The distributions of ratings in different time intervals are compared to detect anomalies based on the chi-square statistic (χ²). We evaluated our approach on four types of items, defined according to the life cycles of these items. The experimental results show that the proposed approach achieves a high detection rate with low computational cost when the number of attack profiles is more than 15. It improves the efficiency of shilling attack detection by narrowing down the suspicious users.
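A hand-computed chi-square comparison of rating distributions across time intervals, with invented counts (the paper's interval discretization and item typing are more involved than this sketch):

```python
# Compare an item's rating distribution in one time interval against its
# long-run profile. All counts and proportions are assumed example data.

def chi_square(observed, expected):
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected) if e > 0)

interval_counts = [1, 0, 1, 2, 46]          # ratings 1..5: burst of 5-stars
history_props = [0.2, 0.2, 0.2, 0.2, 0.2]   # assumed long-run distribution
n = sum(interval_counts)
expected = [p * n for p in history_props]

stat = chi_square(interval_counts, expected)
anomalous = stat > 9.488   # chi-square critical value, df=4, alpha=0.05
```

A large statistic flags the interval, and hence its raters, as suspicious; genuine intervals should track the item's historical profile.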

  12. An investigation of a hypothermic to ischemic ratio in patients following out-of-hospital cardiac arrest presenting with a shockable rhythm.

    PubMed

    Sawyer, Kelly N; Kurz, Michael C; Elswick, R K

    2014-06-01

    Targeted temperature management (TTM) improves outcome after out-of-hospital cardiac arrest (OHCA). We hypothesized that there may be a significant relationship between the dose of hypothermia, the time to return of spontaneous circulation (ROSC), and survival to discharge. Retrospective pilot investigation on 99 consecutive OHCA patients with initial shockable rhythm, surviving to admission, and undergoing TTM between 2008 and 2011. Dose of hypothermia was defined as the sum of the induction interval (time to target temperature [from ROSC to 33°C]); the controlled hypothermia interval (from reaching 33°C until rewarming); and the rewarming interval (from 33°C to 37°C). Time to ROSC was measured from pulselessness or 911 call time to ROSC. The ratio between the two was termed the hypothermic to ischemic ratio. Purposeful variable selection for logistic regression modeling was used to assess the influence of the hypothermic/ischemic ratio on survival. Odds ratios (OR) were used to examine the effects of predictor variables on survival. Of 99 patients, eight were excluded for deviation from protocol, death during protocol, or missing data. From the univariate models, survivors were more likely to be younger, have a shorter time to ROSC, and have a larger hypothermic/ischemic ratio. Survivors also had a nonsignificant trend toward a longer time to target temperature. In multivariable modeling, the hypothermic/ischemic ratio was the most significant predictor for survival (OR 2.161 [95% confidence interval 1.371, 3.404]). In this pilot study, the hypothermic to ischemic ratio was significantly associated with survival to discharge for patients with an initial shockable rhythm. Further investigation of the relationship between the dose of hypothermia and time to ROSC for postresuscitation TTM is needed.
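The ratio itself is simple arithmetic; with hypothetical times (not the study's data):

```python
# Toy calculation of the hypothermic to ischemic ratio, with the definitions
# paraphrased from the abstract and invented example durations.

induction = 4.0      # h, ROSC until reaching 33 °C (assumed)
controlled = 24.0    # h, held at 33 °C until rewarming begins (assumed)
rewarming = 8.0      # h, 33 °C back to 37 °C (assumed)
time_to_rosc = 0.5   # h, pulselessness (or 911 call) until ROSC (assumed)

dose = induction + controlled + rewarming   # total dose of hypothermia
ratio = dose / time_to_rosc                 # hypothermic to ischemic ratio
```

In the study, larger values of this ratio were associated with greater odds of survival to discharge.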

  13. Structure-oriented versus process-oriented approach to enhance efficiency for emergency room operations: what lessons can we learn?

    PubMed

    Hwang, Taik Gun; Lee, Younsuk; Shin, Hojung

    2011-01-01

    The efficiency and quality of a healthcare system can be defined as interactions among the system structure, processes, and outcome. This article examines the effect of structural adjustment (change in floor plan or layout) and process improvement (critical pathway implementation) on performance of emergency room (ER) operations for acute cerebral infarction patients. Two large teaching hospitals participated in this study: Korea University (KU) Guro Hospital and KU Anam Hospital. The administration of Guro adopted a structure-oriented approach in improving its ER operations while the administration of Anam employed a process-oriented approach, facilitating critical pathways and protocols. To calibrate improvements, the data for time interval, length of stay, and hospital charges were collected, before and after the planned changes were implemented at each hospital. In particular, time interval is the most essential measure for handling acute stroke patients because patients' survival and recovery are affected by the promptness of diagnosis and treatment. Statistical analyses indicated that both redesign of layout at Guro and implementation of critical pathways at Anam had a positive influence on most of the performance measures. However, reduction in time interval was not consistent at Guro, demonstrating delays in processing time for a few processes. The adoption of critical pathways at Anam appeared more effective in reducing time intervals than the structural rearrangement at Guro, mainly as a result of the extensive employee training required for a critical pathway implementation. Thus, hospital managers should combine structure-oriented and process-oriented strategies to maximize effectiveness of improvement efforts.

  14. Simulating recurrent event data with hazard functions defined on a total time scale.

    PubMed

    Jahn-Eimermacher, Antje; Ingel, Katharina; Ozga, Ann-Kathrin; Preussler, Stella; Binder, Harald

    2015-03-08

In medical studies with recurrent event data, a total time scale perspective is often needed to adequately reflect disease mechanisms. This means that the hazard process is defined on the time since some starting point, e.g. the beginning of some disease, in contrast to a gap time scale, where the hazard process restarts after each event. While techniques such as the Andersen-Gill model have been developed for analyzing data from a total time perspective, techniques for the simulation of such data, e.g. for sample size planning, have not been investigated so far. We have derived a simulation algorithm covering the Andersen-Gill model that can be used for sample size planning in clinical trials as well as for the investigation of modeling techniques. Specifically, we allow for fixed and/or random covariates and an arbitrary hazard function defined on a total time scale. Furthermore, we take into account that individuals may be temporarily insusceptible to a recurrent incidence of the event. The methods are based on the distributions of the inter-event times conditional on the total time of the preceding event or study start. Closed-form solutions are provided for common distributions. The derived methods have been implemented in a readily accessible R script. The proposed techniques are illustrated by planning the sample size for a clinical trial with complex recurrent event data. The required sample size is shown to be affected not only by censoring and intra-patient correlation, but also by the presence of risk-free intervals. This demonstrates the need for a simulation algorithm that allows for complex study designs for which no analytical sample size formulas exist. The derived simulation algorithm is useful for the simulation of recurrent event data that follow an Andersen-Gill model. In addition to using a total time scale, it allows for intra-patient correlation and risk-free intervals, as are often observed in clinical trial data. Its application therefore allows the simulation of data that closely resemble real settings and thus can improve the use of simulation studies for designing and analysing studies.

  15. Chemical, Biological, and Radiological (CBR) Contamination Survivability, Large Item Interiors

    DTIC Science & Technology

    2016-08-03

    e.g., mud, grease, etc.). m. Pretest (baseline) and posttest (30 days after the first contamination and/or other defined long-term time interval...procedures used. f. Description of SUT-interior materials of construction, paint type, and surface condition (pretest and posttest), including...difficult to decontaminate or allow liquid to penetrate. g. Pretest and posttest ME functional performance characteristics (when measured) used as

  16. Efavirenz versus boosted atazanavir-containing regimens and immunologic, virologic, and clinical outcomes

    PubMed Central

    Cain, Lauren E.; Caniglia, Ellen C.; Phillips, Andrew; Olson, Ashley; Muga, Roberto; Pérez-Hoyos, Santiago; Abgrall, Sophie; Costagliola, Dominique; Rubio, Rafael; Jarrín, Inma; Bucher, Heiner; Fehr, Jan; van Sighem, Ard; Reiss, Peter; Dabis, François; Vandenhende, Marie-Anne; Logan, Roger; Robins, James; Sterne, Jonathan A. C.; Justice, Amy; Tate, Janet; Touloumi, Giota; Paparizos, Vasilis; Esteve, Anna; Casabona, Jordi; Seng, Rémonie; Meyer, Laurence; Jose, Sophie; Sabin, Caroline; Hernán, Miguel A.

    2016-01-01

    Abstract Objective: To compare regimens consisting of either ritonavir-boosted atazanavir or efavirenz and a nucleoside reverse transcriptase inhibitor (NRTI) backbone with respect to clinical, immunologic, and virologic outcomes. Design: Prospective studies of human immunodeficiency virus (HIV)-infected individuals in Europe and the United States included in the HIV-CAUSAL Collaboration. Methods: HIV-positive, antiretroviral therapy-naive, and acquired immune deficiency syndrome (AIDS)-free individuals were followed from the time they started an atazanavir or efavirenz regimen. We estimated an analog of the “intention-to-treat” effect for efavirenz versus atazanavir regimens on clinical, immunologic, and virologic outcomes with adjustment via inverse probability weighting for time-varying covariates. Results: A total of 4301 individuals started an atazanavir regimen (83 deaths, 157 AIDS-defining illnesses or deaths) and 18,786 individuals started an efavirenz regimen (389 deaths, 825 AIDS-defining illnesses or deaths). During a median follow-up of 31 months, the hazard ratios (95% confidence intervals) were 0.98 (0.77, 1.24) for death and 1.09 (0.91, 1.30) for AIDS-defining illness or death comparing efavirenz with atazanavir regimens. The 5-year survival difference was 0.1% (95% confidence interval: −0.7%, 0.8%) and the AIDS-free survival difference was −0.3% (−1.2%, 0.6%). After 12 months, the mean change in CD4 cell count was 20.8 (95% confidence interval: 13.9, 27.8) cells/mm3 lower and the risk of virologic failure was 20% (14%, 26%) lower in the efavirenz regimens. Conclusion: Our estimates are consistent with a smaller 12-month increase in CD4 cell count, and a smaller risk of virologic failure at 12 months for efavirenz compared with atazanavir regimens. No overall differences could be detected with respect to 5-year survival or AIDS-free survival. PMID:27741139

  17. Does Reducing the Duration from Symptom Onset to Recanalization Improve the Results of Intracranial Mechanical Thrombectomy in the Elderly?

    PubMed Central

    KOMATSUBARA, Koichiro; DEMBO, Tomohisa; SATO, Eishi; SASAMORI, Hiroki; TORII, Masataka; SHIOKAWA, Yoshiaki; HIRANO, Teruyuki

    2017-01-01

    Endovascular recanalization for acute major cerebral artery occlusion is effective within a short time after symptom onset. However, its efficacy in the elderly remains unknown. We assessed the efficacy of our comprehensive stroke center's measures to reduce this time in 28 consecutive elderly patients (defined as aged ≥75 years) with acute major cerebral artery occlusion treated with intravenous injection of tissue plasminogen activator, followed by thrombus retrieval by endovascular therapy. The patients were divided into groups according to whether they were treated before implementation of the time reduction measure (from January 2012 to May 2014) or after (from June 2014 to May 2015). The onset-to-door, onset-to-needle, onset-to-recanalization (O2R), door-to-image (D2I), door-to-needle (D2N), door-to-puncture (D2P), door-to-recanalization (D2R), and puncture-to-recanalization time intervals were compared between the two groups. There were 14 patients (including 8 elderly patients ≥80 years) before and 14 patients (including 10 elderly patients ≥80 years) after the time reduction measure. Mean durations of these time intervals were significantly reduced after the time reduction measure (P < 0.05). To reduce the O2R time, the D2P time is the first interval that can be shortened. At our center, conferences were regularly held to raise awareness among staff and to make specific changes in the workflow, and an overall time reduction was achieved. Similar results were obtained in elderly patients. PMID:28132961

  18. Quaternary Geology and Surface Faulting Hazard: Active and Capable Faults in Central Apennines, Italy

    NASA Astrophysics Data System (ADS)

    Falcucci, E.; Gori, S.

    2015-12-01

    The 2009 L'Aquila earthquake (Mw 6.1), in central Italy, raised the issue of surface faulting hazard in Italy, since large urban areas were affected by surface displacement along the causative structure, the Paganica fault. Since then, guidelines for microzonation have been drawn up that take the problem of surface faulting in Italy into consideration, laying the groundwork for future regulations on the related hazard, as in other countries (e.g. the USA). More specific guidelines on the management of areas affected by active and capable faults (i.e. faults able to produce surface faulting) are going to be released by the National Department of Civil Protection; these would define the zonation of areas affected by active and capable faults, with prescriptions for land use planning. As such, the guidelines raise the problem of the time interval and the general operational criteria to assess fault capability for the Italian territory. As for the chronology, a review of the international literature and regulatory frameworks allowed Galadini et al. (2012) to propose different time intervals depending on the ongoing tectonic regime - compressive or extensional - which encompass the Quaternary. As for the operational criteria, a detailed analysis of the large body of work dealing with active faulting in Italy shows that investigations based exclusively on surface morphological features (e.g. the exposure of fault planes) or on indirect investigations (geophysical data) are insufficient or even unreliable for establishing the presence of an active and capable fault; instead, more accurate geological information on the Quaternary space-time evolution of the areas affected by such tectonic structures is needed. A test area where active and capable faults can first be mapped using such a classical but still effective methodological approach is the central Apennines. Reference: Galadini F., Falcucci E., Galli P., Giaccio B., Gori S., Messina P., Moro M., Saroli M., Scardia G., Sposato A. (2012). 
Time intervals to assess active and capable faults for engineering practices in Italy. Eng. Geol., 139/140, 50-65.

  19. Orbital time scale and new C-isotope record for Cenomanian-Turonian boundary stratotype

    NASA Astrophysics Data System (ADS)

    Sageman, Bradley B.; Meyers, Stephen R.; Arthur, Michael A.

    2006-02-01

    Previous time scales for the Cenomanian-Turonian boundary (CTB) interval containing Oceanic Anoxic Event II (OAE II) vary by a factor of three. In this paper we present a new orbital time scale for the CTB stratotype established independently of radiometric, biostratigraphic, or geochemical data sets, update revisions of CTB biostratigraphic zonation, and provide a new detailed carbon isotopic record for the CTB study interval. The orbital time scale allows an independent assessment of basal biozone ages relative to the new CTB date of 93.55 Ma (GTS04). The δ13Corg data document the abrupt onset of OAE II, significant variability in δ13Corg values, and values enriched to almost -22‰. These new data underscore the difficulty in defining OAE II termination. Using the new isotope curve and time scale, estimates of OAE II duration can be determined and exported to other sites based on integration of well-established chemostratigraphic and biostratigraphic datums. The new data will allow more accurate calculations of biogeochemical and paleobiologic rates across the CTB.

  20. Measuring Skew in Average Surface Roughness as a Function of Surface Preparation

    NASA Technical Reports Server (NTRS)

    Stahl, Mark

    2015-01-01

    Characterizing surface roughness is important for predicting optical performance. Better measurement of surface roughness reduces polishing time, saves money and allows the science requirements to be better defined. This study characterized statistics of average surface roughness as a function of polishing time. Average surface roughness was measured at 81 locations using a Zygo white light interferometer at regular intervals during the polishing process. Each data set was fit to a normal and Largest Extreme Value (LEV) distribution; then tested for goodness of fit. We show that the skew in the average data changes as a function of polishing time.

  1. Scripting Module for the Satellite Orbit Analysis Program (SOAP)

    NASA Technical Reports Server (NTRS)

    Carnright, Robert; Paget, Jim; Coggi, John; Stodden, David

    2008-01-01

    This add-on module to the SOAP software can perform changes to simulation objects based on the occurrence of specific conditions. This allows the software to encompass simulation response of scheduled or physical events. Users can manipulate objects in the simulation environment under programmatic control. Inputs to the scripting module are Actions, Conditions, and the Script. Actions are arbitrary modifications to constructs such as Platform Objects (i.e. satellites), Sensor Objects (representing instruments or communication links), or Analysis Objects (user-defined logical or numeric variables). Examples of actions include changes to a satellite orbit (Δv), changing a sensor-pointing direction, and the manipulation of a numerical expression. Conditions represent the circumstances under which Actions are performed and can be couched in If-Then-Else logic, like performing a Δv at specific times or adding to the spacecraft power only when it is being illuminated by the Sun. The SOAP script represents the entire set of conditions being considered over a specific time interval. The output of the scripting module is a series of events, which are changes to objects at specific times. As the SOAP simulation clock runs forward, the scheduled events are performed. If the user sets the clock back in time, the events within that interval are automatically undone. The scripting module offers an interface for defining scripts where the user does not have to remember the vocabulary of various keywords. Actions can be captured by employing the same user interface that is used to define the objects themselves. Conditions can be set to invoke Actions by selecting them from pull-down lists. Users define the script by selecting from the pool of defined conditions. Many space systems have to react to arbitrary events that can occur from scheduling or from the environment. For example, an instrument may cease to draw power when the area that it is tasked to observe is not in view. 
The contingency of the planetary body blocking the line of sight is a condition upon which the power being drawn is set to zero. It remains at zero until the observation objective is again in view. Computing the total power drawn by the instrument over a period of days or weeks can now take such factors into consideration. What makes the architecture especially powerful is that the scripting module can look ahead and behind in simulation time, and this temporal versatility can be leveraged in displays such as x-y plots. For example, a plot of a satellite's altitude as a function of time can take changes to the orbit into account.
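
    The forward/backward clock behavior described in this record (apply scheduled events going forward, automatically undo them going back) can be sketched with a small timeline that records undo information as each event is applied. This is a hypothetical illustration of the pattern only; the class and method names are invented and are not SOAP's actual API.

    ```python
    class EventTimeline:
        """Minimal event timeline: applying events forward records undo data,
        so moving the clock backward restores the earlier object state."""

        def __init__(self, state, events):
            # events: (time, key, new_value) tuples produced by the script
            self.state = dict(state)
            self.events = sorted(events)
            self.clock = 0.0
            self._undo = []  # stack of (event_time, key, old_value)

        def set_clock(self, t):
            if t >= self.clock:  # forward: apply events in (clock, t]
                for when, key, new in self.events:
                    if self.clock < when <= t:
                        self._undo.append((when, key, self.state[key]))
                        self.state[key] = new
            else:  # backward: undo events in (t, clock]
                while self._undo and self._undo[-1][0] > t:
                    when, key, old = self._undo.pop()
                    self.state[key] = old
            self.clock = t
    ```

    For the instrument-power example above, an event setting power to zero at occlusion start and back to its nominal value at occlusion end would be applied or undone as the user scrubs the clock.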

  2. Determining delayed admission to intensive care unit for mechanically ventilated patients in the emergency department.

    PubMed

    Hung, Shih-Chiang; Kung, Chia-Te; Hung, Chih-Wei; Liu, Ber-Ming; Liu, Jien-Wei; Chew, Ghee; Chuang, Hung-Yi; Lee, Wen-Huei; Lee, Tzu-Chi

    2014-08-23

    The adverse effects of delayed admission to the intensive care unit (ICU) have been recognized in previous studies. However, the definitions of delayed admission vary across studies. This study proposed a model to define "delayed admission", and explored the effect of ICU-waiting time on patients' outcomes. This retrospective cohort study included non-traumatic adult patients on mechanical ventilation in the emergency department (ED), from July 2009 to June 2010. The primary outcome measures were 21-ventilator-day mortality and prolonged hospital stay (over 30 days). Cox regression and logistic regression models were used for multivariate analysis. Non-delayed ICU-waiting was defined as a period in which the time effect on mortality was not statistically significant in a Cox regression model. To identify a suitable cut-off point between "delayed" and "non-delayed", subsets of the overall data were formed based on ICU-waiting time, and the hazard ratio of ICU-waiting hour in each subset was iteratively calculated. The cut-off time was then used to evaluate the impact of delayed ICU admission on mortality and prolonged length of hospital stay. The final analysis included 1,242 patients. The time effect on mortality emerged after 4 hours; we therefore regarded an ICU-waiting time in the ED of > 4 hours as delayed. By logistic regression analysis, delayed ICU admission affected the outcomes of 21-ventilator-day mortality and prolonged hospital stay, with odds ratios of 1.41 (95% confidence interval, 1.05 to 1.89) and 1.56 (95% confidence interval, 1.07 to 2.27), respectively. For patients on mechanical ventilation at the ED, delayed ICU admission is associated with higher probability of mortality and additional resource expenditure. A benchmark waiting time of no more than 4 hours for ICU admission is recommended.

  3. Natural History of Rotator Cuff Disease and Implications on Management

    PubMed Central

    Hsu, Jason

    2015-01-01

    Degenerative rotator cuff disease is commonly associated with ageing and is often asymptomatic. The factors related to tear progression and pain development are just now being defined through longitudinal natural history studies. The majority of studies that follow conservatively treated painful cuff tears or asymptomatic tears monitored at regular intervals show slow progression of tear enlargement and muscle degeneration over time. These studies have highlighted greater risks for disease progression for certain variables, such as the presence of a full-thickness tear and involvement of the anterior aspect of the supraspinatus tendon. Coupling the knowledge of the natural history of degenerative cuff tear progression with variables associated with greater likelihood of successful tendon healing following surgery will allow better refinement of surgical indications for rotator cuff disease. In addition, natural history studies may better define the risks of nonoperative treatment over time. This article will review pertinent literature regarding degenerative rotator cuff disease with emphasis on variables important to defining appropriate initial treatments and refining surgical indications. PMID:26726288

  4. Maternal Circadian Eating Time and Frequency Are Associated with Blood Glucose Concentrations during Pregnancy.

    PubMed

    Loy, See Ling; Chan, Jerry Kok Yen; Wee, Poh Hui; Colega, Marjorelee T; Cheung, Yin Bun; Godfrey, Keith M; Kwek, Kenneth; Saw, Seang Mei; Chong, Yap-Seng; Natarajan, Padmapriya; Müller-Riemenschneider, Falk; Lek, Ngee; Chong, Mary Foong-Fong; Yap, Fabian

    2017-01-01

    Synchronizing eating schedules to daily circadian rhythms may improve metabolic health, but its association with gestational glycemia is unknown. This study examined the association of maternal night-fasting intervals and eating episodes with blood glucose concentrations during pregnancy. This was a cross-sectional study within a prospective cohort in Singapore. Maternal 24-h dietary recalls, fasting glucose, and 2-h glucose concentrations were ascertained at 26-28 wk gestation for 1061 women (aged 30.7 ± 5.1 y). Night-fasting intervals were based on the longest fasting duration during the night (1900-0659). Eating episodes were defined as events that provided >50 kcal, with a time interval between eating episodes of ≥15 min. Multiple linear regressions with adjustment for confounders were conducted. Mean ± SD night-fasting intervals and eating episodes per day were 9.9 ± 1.6 h and 4.2 ± 1.3 times/d, respectively; fasting and 2-h glucose concentrations were 4.4 ± 0.5 and 6.6 ± 1.5 mmol/L, respectively. In adjusted models, each hourly increase in night-fasting intervals was associated with a 0.03 mmol/L decrease in fasting glucose (95% CI: -0.06, -0.01 mmol/L), whereas each additional daily eating episode was associated with a 0.15 mmol/L increase in 2-h glucose (95% CI: 0.03, 0.28 mmol/L). Conversely, night-fasting intervals and daily eating episodes were not associated with 2-h and fasting glucose, respectively. Increased maternal night-fasting intervals and reduced eating episodes per day were associated with decreased fasting glucose and 2-h glucose, respectively, in the late-second trimester of pregnancy. This points to potential alternative strategies to improve glycemic control in pregnant women. This study was registered at www.clinicaltrials.gov as NCT01174875. © 2017 American Society for Nutrition.
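
    The two exposure definitions above (eating episodes as events providing >50 kcal separated by ≥15 min, and the longest night fast within 1900-0659) are concrete enough to sketch in code. The snippet below is an illustrative simplification, not the study's actual computation; function names are ours, and times are minutes since midnight.

    ```python
    NIGHT_START = 19 * 60             # 19:00
    NIGHT_END = 6 * 60 + 59 + 1440    # 06:59 the next day

    def eating_episodes(intakes):
        """intakes: chronologically sorted (minute_of_day, kcal) events.
        An event providing >50 kcal starts a new episode unless it falls
        <15 min after the previous episode."""
        episodes = []
        for t, kcal in intakes:
            if kcal > 50 and (not episodes or t - episodes[-1] >= 15):
                episodes.append(t)
        return episodes

    def longest_night_fast(episode_times):
        """Longest episode-free gap (hours) inside the 1900-0659 window;
        early-morning times are shifted by 24 h so the window is contiguous."""
        shifted = sorted(t + 1440 if t < 7 * 60 else t for t in episode_times)
        bounds = ([NIGHT_START]
                  + [t for t in shifted if NIGHT_START < t < NIGHT_END]
                  + [NIGHT_END])
        return max(b - a for a, b in zip(bounds, bounds[1:])) / 60.0
    ```

    For example, a recall with meals at 08:00, 13:00, a 13:10 snack of 80 kcal, and dinner at 20:00 yields three episodes (the snack merges into lunch) and a night fast from 20:00 to 06:59.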

  5. Maternal circadian eating time and frequency are associated with blood glucose levels during pregnancy

    PubMed Central

    Loy, See Ling; Chan, Jerry Kok Yen; Wee, Poh Hui; Colega, Marjorelee T.; Cheung, Yin Bun; Godfrey, Keith M.; Kwek, Kenneth; Saw, Seang Mei; Chong, Yap-Seng; Natarajan, Padmapriya; Müller-Riemenschneider, Falk; Lek, Ngee; Chong, Mary Foong-Fong; Yap, Fabian

    2017-01-01

    Background Synchronizing eating schedules with daily circadian rhythms may improve metabolic health, but its association with gestational glycemia is unknown. Objective This study examined the association of maternal night-fasting intervals and eating episodes with blood glucose levels during pregnancy. Methods This was a cross-sectional study within a prospective cohort in Singapore. Maternal 24-hour dietary recalls, fasting glucose and 2-hour glucose concentrations were ascertained at 26-28 weeks’ gestation for 1061 women (age 30.7 ± 5.1 years). Night-fasting intervals were based on the longest fasting duration during the night (1900-0659h). Eating episodes were defined as events which provided >50 kcal, with a time interval between eating episodes of at least 15 minutes. Multiple linear regressions with adjustment for confounders were conducted. Results Mean ± standard deviation night-fasting intervals and eating episodes per day were 9.9 ± 1.6 hours and 4.2 ± 1.3 times per day, respectively; fasting and 2-hour glucose concentrations were 4.4 ± 0.5 and 6.6 ± 1.5 mmol/L, respectively. In adjusted models, each hourly increase in night-fasting interval was associated with a 0.03 mmol/L decrease in fasting glucose (95% CI: -0.06, -0.01 mmol/L), while each additional daily eating episode was associated with a 0.15 mmol/L increase in 2-hour glucose (95% CI: 0.03, 0.28 mmol/L). Conversely, night-fasting intervals and daily eating episodes were not associated with 2-hour and fasting glucose, respectively. Conclusions Increased maternal night-fasting intervals and reduced eating episodes per day were associated with decreased fasting glucose and 2-hour glucose, respectively, in the late-second trimester of pregnancy. This points to potential alternative strategies to improve glycemic control in pregnant women. This study was registered at www.clinicaltrials.gov as NCT01174875. PMID:27798346

  6. Real-time flight conflict detection and release based on Multi-Agent system

    NASA Astrophysics Data System (ADS)

    Zhang, Yifan; Zhang, Ming; Yu, Jue

    2018-01-01

    This paper defines two-aircraft, multi-aircraft, and fleet conflict modes, and constructs a three-dimensional space-time conflict reservation based on the safety interval and the conflict warning time. Flight conflicts are detected in real time by combining the predicted flight trajectories of the other aircraft in the same airspace, and resolution strategies are put forward for each of the three modes. When the flight conflict conditions are met, the conflict situation is determined and the corresponding conflict resolution procedure is entered, so that the conflict is avoided autonomously and the flight safety of the target aircraft is ensured. Finally, the correctness of the model is verified by numerical simulation.
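
    For the two-aircraft mode, the reservation check described above amounts to a pairwise separation test over the predicted trajectories within the warning horizon. The function, thresholds, and trajectory format below are invented for illustration and are not taken from the paper.

    ```python
    import math

    def detect_conflict(traj_a, traj_b, horiz_sep, vert_sep, warn_time, now=0.0):
        """traj_*: dict mapping time -> (x, y, z) predicted position at shared
        sample times.  Returns the first time within the warning horizon at
        which both the horizontal and vertical safety intervals are violated,
        else None."""
        for t in sorted(traj_a):
            if not (now <= t <= now + warn_time) or t not in traj_b:
                continue
            ax, ay, az = traj_a[t]
            bx, by, bz = traj_b[t]
            if math.hypot(ax - bx, ay - by) < horiz_sep and abs(az - bz) < vert_sep:
                return t  # predicted positions enter the space-time reservation
        return None
    ```

    Multi-aircraft and fleet modes would apply the same test over all pairs (or against a shared reservation volume) before entering the corresponding resolution procedure.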

  7. Estimating average annual per cent change in trend analysis

    PubMed Central

    Clegg, Limin X; Hankey, Benjamin F; Tiwari, Ram; Feuer, Eric J; Edwards, Brenda K

    2009-01-01

    Trends in incidence or mortality rates over a specified time interval are usually described by the conventional annual per cent change (cAPC), under the assumption of a constant rate of change. When this assumption does not hold over the entire time interval, the trend may be characterized using the annual per cent changes from segmented analysis (sAPCs). This approach assumes that the change in rates is constant over each time partition defined by the transition points, but varies among different time partitions. Different groups (e.g. racial subgroups), however, may have different transition points and thus different time partitions over which they have constant rates of change, making comparison of sAPCs problematic across groups over a common time interval of interest (e.g. the past 10 years). We propose a new measure, the average annual per cent change (AAPC), which uses sAPCs to summarize and compare trends for a specific time period. The advantage of the proposed AAPC is that it takes into account the trend transitions, whereas cAPC does not and can lead to erroneous conclusions. In addition, when the trend is constant over the entire time interval of interest, the AAPC has the advantage of reducing to both cAPC and sAPC. Moreover, because the estimated AAPC is based on the segmented analysis over the entire data series, any selected subinterval within a single time partition will yield the same AAPC estimate—that is it will be equal to the estimated sAPC for that time partition. The cAPC, however, is re-estimated using data only from that selected subinterval; thus, its estimate may be sensitive to the subinterval selected. The AAPC estimation has been incorporated into the segmented regression (free) software Joinpoint, which is used by many registries throughout the world for characterizing trends in cancer rates. Copyright © 2009 John Wiley & Sons, Ltd. PMID:19856324
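
    Concretely, the AAPC can be computed as a length-weighted average of the segment slopes on the log-rate scale, transformed back to a percentage. The sketch below assumes that convention (the helper name is ours); as the abstract notes, when all segments share one slope the AAPC reduces to the conventional APC.

    ```python
    import math

    def aapc(segments):
        """segments: (years_in_segment, slope_of_ln_rate_per_year) pairs from
        a segmented (joinpoint) fit over the time interval of interest.
        Returns the average annual percent change over that interval."""
        total_years = sum(years for years, _ in segments)
        mean_slope = sum(years * slope for years, slope in segments) / total_years
        return 100.0 * (math.exp(mean_slope) - 1.0)
    ```

    With a single segment of slope b this is exactly 100*(exp(b) - 1), the conventional APC, illustrating the reduction property described above.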

  8. Rectal temperature-based death time estimation in infants.

    PubMed

    Igari, Yui; Hosokai, Yoshiyuki; Funayama, Masato

    2016-03-01

    In determining the time of death in infants based on rectal temperature, the same methods used in adults are generally applied. However, whether the methods for adults are suitable for infants is unclear. In this study, we examined the following 3 methods in 20 infant death cases: computer simulation of rectal temperature based on the infinite cylinder model (Ohno's method), computer-based double exponential approximation based on Marshall and Hoare's double exponential model with Henssge's parameter determination (Henssge's method), and computer-based collinear approximation based on extrapolation of the rectal temperature curve (collinear approximation). The interval between the last time the infant was seen alive and the time that he/she was found dead was defined as the death time interval and compared with the estimated time of death. In Ohno's method, 7 cases were within the death time interval, and the average deviation in the other 12 cases was approximately 80 min. The results of both Henssge's method and collinear approximation were apparently inferior to those of Ohno's method. The corrective factor was set within the range of 0.7-1.3 in Henssge's method, and a modified program was newly developed to make it possible to change the corrective factors. Modification A, in which the upper limit of the corrective factor range was set as the maximum value for each body weight, produced the best results: 8 cases were within the death time interval, and the average deviation in the other 12 cases was approximately 80 min. There was a possibility that the influence of thermal isolation on the actual infants was stronger than that previously shown by Henssge. We conclude that Ohno's method and Modification A are useful for death time estimation in infants. However, it is important to accept the estimated time of death with certain latitude considering other circumstances. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  9. Introduction to Sample Size Choice for Confidence Intervals Based on "t" Statistics

    ERIC Educational Resources Information Center

    Liu, Xiaofeng Steven; Loudermilk, Brandon; Simpson, Thomas

    2014-01-01

    Sample size can be chosen to achieve a specified width in a confidence interval. The probability of obtaining a narrow width given that the confidence interval includes the population parameter is defined as the power of the confidence interval, a concept unfamiliar to many practitioners. This article shows how to utilize the Statistical Analysis…

  10. Non-Markovianity of Gaussian Channels.

    PubMed

    Torre, G; Roga, W; Illuminati, F

    2015-08-14

    We introduce a necessary and sufficient criterion for the non-Markovianity of Gaussian quantum dynamical maps based on the violation of divisibility. The criterion is derived by defining a general vectorial representation of the covariance matrix which is then exploited to determine the condition for the complete positivity of partial maps associated with arbitrary time intervals. Such construction does not rely on the Choi-Jamiolkowski representation and does not require optimization over states.

  11. Optimizing some 3-stage W-methods for the time integration of PDEs

    NASA Astrophysics Data System (ADS)

    Gonzalez-Pinto, S.; Hernandez-Abreu, D.; Perez-Rodriguez, S.

    2017-07-01

    The optimization of some W-methods for the time integration of time-dependent PDEs in several spatial variables is considered. In [2, Theorem 1] several three-parametric families of three-stage W-methods for the integration of IVPs in ODEs were studied. In addition, the optimization of several specific methods for PDEs when the Approximate Matrix Factorization Splitting (AMF) is used to define the approximate Jacobian matrix (W ≈ fy(yn)) was carried out, and some convergence and stability properties were presented [2]. The derived methods were optimized on the basis that the underlying explicit Runge-Kutta method is the one having the largest monotonicity interval among the three-stage, order-three Runge-Kutta methods [1]. Here, we propose an optimization of the methods by imposing an additional order condition [7] to keep order three for parabolic PDE problems [6], but at the price of substantially reducing the length of the nonlinear monotonicity interval of the underlying explicit Runge-Kutta method.

  12. Analysis of noise-induced temporal correlations in neuronal spike sequences

    NASA Astrophysics Data System (ADS)

    Reinoso, José A.; Torrent, M. C.; Masoller, Cristina

    2016-11-01

    We investigate temporal correlations in sequences of noise-induced neuronal spikes, using a symbolic method of time-series analysis. We focus on the sequence of time-intervals between consecutive spikes (inter-spike-intervals, ISIs). The analysis method, known as ordinal analysis, transforms the ISI sequence into a sequence of ordinal patterns (OPs), which are defined in terms of the relative ordering of consecutive ISIs. The ISI sequences are obtained from extensive simulations of two neuron models (FitzHugh-Nagumo, FHN, and integrate-and-fire, IF), with correlated noise. We find that, as the noise strength increases, temporal order gradually emerges, revealed by the existence of more frequent ordinal patterns in the ISI sequence. While in the FHN model the most frequent OP depends on the noise strength, in the IF model it is independent of the noise strength. In both models, the correlation time of the noise affects the OP probabilities but does not modify the most probable pattern.
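
    The ordinal-analysis step is easy to make concrete: each window of D consecutive ISIs is replaced by the permutation describing its rank order, and the pattern frequencies are then compared. A minimal sketch (function name ours):

    ```python
    from collections import Counter

    def ordinal_patterns(isis, D=3):
        """Count ordinal patterns of length D in a sequence of inter-spike
        intervals; each pattern is the index order that sorts the window
        (position 0 of the tuple holds the index of the smallest ISI)."""
        counts = Counter()
        for i in range(len(isis) - D + 1):
            window = isis[i:i + D]
            counts[tuple(sorted(range(D), key=window.__getitem__))] += 1
        return counts
    ```

    A "more frequent" pattern, as discussed above, is one whose count deviates significantly from the uniform expectation of 1/D! per pattern.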

  13. Two Billion Years of Magmatism in One Place on Mars

    NASA Astrophysics Data System (ADS)

    Taylor, G. J.

    2017-05-01

    Thomas Lapen and Minako Righter (University of Houston), and colleagues at Aarhus University (Denmark), the Universities of Washington (Seattle), Wisconsin (Madison), California (Berkeley), and Arizona (Tucson), and Purdue University (Indiana) show that a geochemically-related group of Martian meteorites formed over a much longer time span than thought previously. So-called depleted shergottites formed during the time interval 325 to 600 million years ago, but now age dating on a recently discovered Martian meteorite, Northwest Africa (NWA) 7635, extends that interval by 1800 million years to 2400 million years. NWA 7635 and almost all other depleted shergottites were ejected from Mars in the same impact event, as defined by their same cosmic-ray exposure age of 1 million years, so all resided in one small area on Mars. This long time span of volcanic activity in the same place on the planet indicates that magma production was continuous, consistent with geophysical calculations of magma generation in plumes of hot mantle rising from the core-mantle boundary deep inside Mars.

  14. [Neuroimaging follow-up of cerebral aneurysms treated with endovascular techniques].

    PubMed

    Delgado, F; Saiz, A; Hilario, A; Murias, E; San Román Manzanera, L; Lagares Gomez-Abascal, A; Gabarrós, A; González García, A

    2014-01-01

    There are no specific recommendations in clinical guidelines about the best time, imaging tests, or intervals for following up patients with intracranial aneurysms treated with endovascular techniques. We reviewed the literature, using the following keywords to search in the main medical databases: cerebral aneurysm, coils, endovascular procedure, and follow-up. Within the Cerebrovascular Disease Group of the Spanish Society of Neuroradiology, we aimed to propose recommendations and an orientative protocol based on the scientific evidence for using neuroimaging to monitor intracranial aneurysms that have been treated with endovascular techniques. We aimed to specify the most appropriate neuroimaging techniques, the interval, the time of follow-up, and the best approach to defining the imaging findings, with the ultimate goal of improving clinical outcomes while optimizing and rationalizing the use of available resources. Copyright © 2013 SERAM. Published by Elsevier Espana. All rights reserved.

  15. Development of defined microbial population standards using fluorescence activated cell sorting for the absolute quantification of S. aureus using real-time PCR.

    PubMed

    Martinon, Alice; Cronin, Ultan P; Wilkinson, Martin G

    2012-01-01

    In this article, four types of standards were assessed in a SYBR Green-based real-time PCR procedure for the quantification of Staphylococcus aureus (S. aureus) in DNA samples. The standards were purified S. aureus genomic DNA (type A), circular plasmid DNA containing a thermonuclease (nuc) gene fragment (type B), DNA extracted from defined populations of S. aureus cells generated by Fluorescence Activated Cell Sorting (FACS) technology with (type C) or without purification of DNA by boiling (type D). The optimal efficiency of 2.016 was obtained on Roche LightCycler(®) 4.1. software for type C standards, whereas the lowest efficiency (1.682) corresponded to type D standards. Type C standards appeared to be more suitable for quantitative real-time PCR because of the use of defined populations for construction of standard curves. Overall, Fieller Confidence Interval algorithm may be improved for replicates having a low standard deviation in Cycle Threshold values such as found for type B and C standards. Stabilities of diluted PCR standards stored at -20°C were compared after 0, 7, 14 and 30 days and were lower for type A or C standards compared with type B standards. However, FACS generated standards may be useful for bacterial quantification in real-time PCR assays once optimal storage and temperature conditions are defined.
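
    For context, amplification efficiencies like those quoted above (ideal value 2, i.e. doubling per cycle) are conventionally derived from the slope of the standard curve of Ct versus log10 template amount, with E = 10^(-1/slope). The sketch below illustrates that conventional calculation on invented data; it is not the LightCycler software's implementation.

    ```python
    import math

    def amplification_efficiency(log10_amounts, cts):
        """Ordinary least-squares slope of Ct vs. log10(template amount),
        converted to per-cycle efficiency E = 10**(-1/slope); a slope of
        about -3.32 corresponds to the ideal E of 2."""
        n = len(cts)
        mean_x = sum(log10_amounts) / n
        mean_y = sum(cts) / n
        slope = (sum((x - mean_x) * (y - mean_y)
                     for x, y in zip(log10_amounts, cts))
                 / sum((x - mean_x) ** 2 for x in log10_amounts))
        return 10.0 ** (-1.0 / slope)
    ```

    A perfectly efficient standard curve, where each 10-fold dilution shifts Ct by log2(10) ≈ 3.32 cycles, returns exactly 2.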

  16. Defining pollen exposure times for clinical trials of allergen immunotherapy for pollen-induced rhinoconjunctivitis - an EAACI position paper.

    PubMed

    Pfaar, O; Bastl, K; Berger, U; Buters, J; Calderon, M A; Clot, B; Darsow, U; Demoly, P; Durham, S R; Galán, C; Gehrig, R; Gerth van Wijk, R; Jacobsen, L; Klimek, L; Sofiev, M; Thibaudon, M; Bergmann, K C

    2017-05-01

    Clinical efficacy of pollen allergen immunotherapy (AIT) has been broadly documented in randomized controlled trials. The underlying clinical endpoints are analysed in seasonal time periods predefined on the basis of the background pollen concentration. However, no validated or generally accepted definition of this relevant pollen exposure intensity or time period (season) is currently available from academia or regulatory authorities. Therefore, this Task Force initiative of the European Academy of Allergy and Clinical Immunology (EAACI) aimed to propose definitions based on expert consensus. A Task Force of the Immunotherapy and Aerobiology and Pollution Interest Groups of the EAACI reviewed the literature on pollen exposure in the context of defining relevant time intervals for evaluation of efficacy in AIT trials. Underlying principles in measuring pollen exposure and associated methodological problems and limitations were considered to achieve a consensus. The Task Force achieved a comprehensive position in defining pollen exposure times for different pollen types. Definitions are presented for 'pollen season', 'high pollen season' (or 'peak pollen period') and 'high pollen days'. This EAACI position paper provides definitions of pollen exposures for different pollen types for use in AIT trials. Their validity as standards remains to be tested in future studies. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  17. Autonomous learning by simple dynamical systems with a discrete-time formulation

    NASA Astrophysics Data System (ADS)

    Bilen, Agustín M.; Kaluza, Pablo

    2017-05-01

    We present a discrete-time formulation of the autonomous learning conjecture. The main feature of this formulation is that the autonomous learning scheme can be applied to systems in which the errors with respect to target functions are not well defined for all times. This restriction on the evaluation of functionality is typical of systems that need a finite time interval to process a unit piece of information. We illustrate its application on an artificial neural network with feed-forward architecture for classification and on a phase oscillator system with synchronization properties. The main characteristics of the discrete-time formulation are shown by constructing these systems with predefined functions.

  18. A Semiparametric Approach for Composite Functional Mapping of Dynamic Quantitative Traits

    PubMed Central

    Yang, Runqing; Gao, Huijiang; Wang, Xin; Zhang, Ji; Zeng, Zhao-Bang; Wu, Rongling

    2007-01-01

    Functional mapping has emerged as a powerful tool for mapping quantitative trait loci (QTL) that control developmental patterns of complex dynamic traits. Original functional mapping has been constructed within the context of simple interval mapping, without consideration of separate multiple linked QTL for a dynamic trait. In this article, we present a statistical framework for mapping QTL that affect dynamic traits by capitalizing on the strengths of functional mapping and composite interval mapping. Within this so-called composite functional-mapping framework, functional mapping models the time-dependent genetic effects of a QTL tested within a marker interval using a biologically meaningful parametric function, whereas composite interval mapping models the time-dependent genetic effects of the markers outside the test interval to control the genome background using a flexible nonparametric approach based on Legendre polynomials. Such a semiparametric framework was formulated by a maximum-likelihood model and implemented with the EM algorithm, allowing for the estimation and the test of the mathematical parameters that define the QTL effects and the regression coefficients of the Legendre polynomials that describe the marker effects. Simulation studies were performed to investigate the statistical behavior of composite functional mapping and compare its advantage in separating multiple linked QTL as compared to functional mapping. We used the new mapping approach to analyze a genetic mapping example in rice, leading to the identification of multiple QTL, some of which are linked on the same chromosome, that control the developmental trajectory of leaf age. PMID:17947431
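    The nonparametric part of the model above reduces, at each time point, to evaluating a Legendre-polynomial expansion of a marker effect. A minimal sketch using the standard three-term recurrence (the coefficients and age range below are invented for illustration, not estimates from the rice data):

```python
def legendre_value(coeffs, t, t_min, t_max):
    """Evaluate a Legendre-polynomial expansion of a time-varying effect.

    The time t is first mapped to u in [-1, 1]; then sum_k c_k * P_k(u)
    is computed with the recurrence (k+1) P_{k+1} = (2k+1) u P_k - k P_{k-1}.
    """
    u = 2.0 * (t - t_min) / (t_max - t_min) - 1.0
    p_prev, p_curr = 1.0, u              # P_0(u), P_1(u)
    total = coeffs[0] * p_prev
    if len(coeffs) > 1:
        total += coeffs[1] * p_curr
    for k in range(1, len(coeffs) - 1):
        p_next = ((2 * k + 1) * u * p_curr - k * p_prev) / (k + 1)
        total += coeffs[k + 1] * p_next
        p_prev, p_curr = p_curr, p_next
    return total

# A hypothetical order-2 expansion of a marker effect over ages 1-10;
# at the midpoint u = 0, so P_0 = 1, P_1 = 0, P_2 = -0.5.
effect = legendre_value([0.5, 0.2, -0.1], 5.5, 1.0, 10.0)   # 0.5 + 0.05 = 0.55
```

Higher polynomial orders buy flexibility at the cost of more regression coefficients to estimate.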

  19. Noninertial coordinate time: A new concept affecting time standards, time transfers, and clock synchronization

    NASA Technical Reports Server (NTRS)

    Deines, Steven D.

    1992-01-01

    Relativity compensations must be made in precise and accurate measurements whenever an observer is accelerated. Although many believe the Earth-centered frame is sufficiently inertial, accelerations of the Earth, as evidenced by the tides, prove that it is technically a noninertial system even for an Earth-based observer. Using the constant speed of light, a set of fixed remote clocks in an inertial frame can be synchronized to a fixed master clock transmitting its time in that frame. The time on the remote clock defines the coordinate time at that coordinate position. However, the synchronization procedure for an accelerated frame is affected, because the distance between the master and remote clocks is altered by the acceleration of the remote clock toward or away from the master clock during the transmission interval. An exact metric that converts observations from noninertial frames to inertial frames was recently derived. Using this metric with other physical relationships, a new concept of noninertial coordinate time is defined. This noninertial coordinate time includes all relativity compensations. This new concept raises several timekeeping issues, such as proper time standards, the time transfer process, and clock synchronization, all in a noninertial frame such as that of Earth.

  20. Allopurinol and Cardiovascular Outcomes in Adults With Hypertension.

    PubMed

    MacIsaac, Rachael L; Salatzki, Janek; Higgins, Peter; Walters, Matthew R; Padmanabhan, Sandosh; Dominiczak, Anna F; Touyz, Rhian M; Dawson, Jesse

    2016-03-01

    Allopurinol lowers blood pressure in adolescents and has other vasoprotective effects. Whether similar benefits occur in older individuals remains unclear. We hypothesized that allopurinol is associated with improved cardiovascular outcomes in older adults with hypertension. Data from the United Kingdom Clinical Research Practice Datalink were used. Multivariate Cox-proportional hazard models were applied to estimate hazard ratios for stroke and cardiac events (defined as myocardial infarction or acute coronary syndrome) associated with allopurinol use over a 10-year period in adults aged >65 years with hypertension. A propensity-matched design was used to reduce potential for confounding. Allopurinol exposure was a time-dependent variable and was defined as any exposure and then as high (≥300 mg daily) or low-dose exposure. A total of 2032 allopurinol-exposed patients and 2032 matched nonexposed patients were studied. Allopurinol use was associated with a significantly lower risk of both stroke (hazard ratio, 0.50; 95% confidence interval, 0.32-0.80) and cardiac events (hazard ratio, 0.61; 95% confidence interval, 0.43-0.87) than nonexposed control patients. In exposed patients, high-dose treatment with allopurinol (n=1052) was associated with a significantly lower risk of both stroke (hazard ratio, 0.58; 95% confidence interval, 0.36-0.94) and cardiac events (hazard ratio, 0.65; 95% confidence interval, 0.46-0.93) than low-dose treatment (n=980). Allopurinol use is associated with lower rates of stroke and cardiac events in older adults with hypertension, particularly at higher doses. Prospective clinical trials are needed to evaluate whether allopurinol improves cardiovascular outcomes in adults with hypertension. © 2016 American Heart Association, Inc.

  1. [Waiting time for the first colposcopic examination in women with abnormal Papanicolaou test].

    PubMed

    Nascimento, Maria Isabel do; Rabelo, Irene Machado Moraes Alvarenga; Cardoso, Fabrício Seabra Polidoro; Musse, Ricardo Neif Vieira

    2015-08-01

    To evaluate the waiting times before obtaining the first colposcopic examination for women with abnormal Papanicolaou smears. Retrospective cohort study conducted on patients who required a colposcopic examination to clarify an abnormal pap test, between January 2002 and August 2008, in a metropolitan region of Brazil. The waiting times were defined as: Total Waiting Time (interval between the date of the pap test result and the date of the first colposcopic examination); Partial A Waiting Time (interval between the date of the pap test result and the date of referral); Partial B Waiting Time (interval between the date of referral and the date of the first colposcopic examination). Means, medians, and relative and absolute frequencies were calculated. The Kruskal-Wallis test and Pearson's chi-square test were used to determine statistical significance. A total of 1,544 women with a mean age of 34 years (SD=12.6 years) were analyzed. Most of them had access to colposcopic examination within 30 days (65.8%) or 60 days (92.8%) from referral. Mean Total Waiting Time, Partial A Waiting Time, and Partial B Waiting Time were 94.5 days (SD=96.8 days), 67.8 days (SD=95.3 days), and 29.2 days (SD=35.1 days), respectively. A large proportion of the women studied had access to colposcopic examination within 60 days after referral, but Total Waiting Time was long. Measures to reduce the waiting time for obtaining the first colposcopic examination can help to improve the quality of care in the context of cervical cancer control in the region, and ought to be addressed at the phase between the date of the pap test results and the date of referral to the teaching hospital.

  2. Learning to tell Neoproterozoic time

    NASA Technical Reports Server (NTRS)

    Knoll, A. H.

    2000-01-01

    In 1989, the International Commission on Stratigraphy established a Working Group on the Terminal Proterozoic Period. Nine years of intensive, multidisciplinary research by scientists from some two dozen countries have markedly improved the framework for the correlation and calibration of latest Proterozoic events. Three principal phenomena--the Marinoan ice age, Ediacaran animal diversification, and the beginning of the Cambrian Period--specify the limits and character of this interval, but chemostratigraphy and biostratigraphy based on single-celled microfossils (acritarchs), integrated with high-resolution radiometric dates, provide the temporal framework necessary to order and evaluate terminal Proterozoic tectonic, biogeochemical, climatic, and biological events. These data also provide a rational basis for choosing the Global Stratotype Section and Point (GSSP) that will define the beginning of this period. A comparable level of stratigraphic resolution may be achievable for the preceding Cryogenian Period, providing an opportunity to define this interval, as well, in chronostratigraphic terms--perhaps bounded at beginning and end by the onset of Sturtian glaciation and the decay of Marinoan ice sheets, respectively. Limited paleontological, isotopic, and radiometric data additionally suggest a real but more distant prospect of lower Neoproterozoic correlation and stratigraphic subdivision.

  3. Min and Max Exponential Extreme Interval Values and Statistics

    ERIC Educational Resources Information Center

    Jance, Marsha; Thomopoulos, Nick

    2009-01-01

    The extreme interval values and statistics (expected value, median, mode, standard deviation, and coefficient of variation) for the smallest (min) and largest (max) values of exponentially distributed variables with parameter λ = 1 are examined for different observation (sample) sizes. An extreme interval value g_a is defined as a…
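    For Exp(1) variables the expected extremes have simple closed forms: E[min] = 1/n, since the minimum of n such variables is itself Exp(n), and E[max] = H_n, the nth harmonic number. A short simulation can confirm this; the sketch below is a generic check, not the authors' tabulation:

```python
import random

def theoretical_min_max_means(n):
    """Expected min and max of n iid Exp(1) variables.

    E[min] = 1/n (the minimum is Exp(n));
    E[max] = H_n = 1 + 1/2 + ... + 1/n.
    """
    return 1.0 / n, sum(1.0 / k for k in range(1, n + 1))

def simulated_min_max_means(n, trials=200_000, seed=1):
    """Monte Carlo estimates of the same two expectations."""
    rng = random.Random(seed)
    tot_min = tot_max = 0.0
    for _ in range(trials):
        sample = [rng.expovariate(1.0) for _ in range(n)]
        tot_min += min(sample)
        tot_max += max(sample)
    return tot_min / trials, tot_max / trials

t_min, t_max = theoretical_min_max_means(5)   # 0.2 and H_5 ≈ 2.2833
s_min, s_max = simulated_min_max_means(5)     # should land close to the above
```

The same simulation scaffold extends to the median, mode, and dispersion statistics the abstract mentions.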

  4. The effects of sampling frequency on the climate statistics of the European Centre for Medium-Range Weather Forecasts

    NASA Astrophysics Data System (ADS)

    Phillips, Thomas J.; Gates, W. Lawrence; Arpe, Klaus

    1992-12-01

    The effects of sampling frequency on the first- and second-moment statistics of selected European Centre for Medium-Range Weather Forecasts (ECMWF) model variables are investigated in a simulation of "perpetual July" with a diurnal cycle included and with surface and atmospheric fields saved at hourly intervals. The shortest characteristic time scales (as determined by the e-folding time of lagged autocorrelation functions) are those of ground heat fluxes and temperatures, precipitation and runoff, convective processes, cloud properties, and atmospheric vertical motion, while the longest time scales are exhibited by soil temperature and moisture, surface pressure, and atmospheric specific humidity, temperature, and wind. The time scales of surface heat and momentum fluxes and of convective processes are substantially shorter over land than over oceans. An appropriate sampling frequency for each model variable is obtained by comparing the estimates of first- and second-moment statistics determined at intervals ranging from 2 to 24 hours with the "best" estimates obtained from hourly sampling. Relatively accurate estimation of first- and second-moment climate statistics (10% errors in means, 20% errors in variances) can be achieved by sampling a model variable at intervals that usually are longer than the bandwidth of its time series but that often are shorter than its characteristic time scale. For the surface variables, sampling at intervals that are nonintegral divisors of a 24-hour day yields relatively more accurate time-mean statistics because of a reduction in errors associated with aliasing of the diurnal cycle and higher-frequency harmonics. The superior estimates of first-moment statistics are accompanied by inferior estimates of the variance of the daily means due to the presence of systematic biases, but these probably can be avoided by defining a different measure of low-frequency variability. 
Estimates of the intradiurnal variance of accumulated precipitation and surface runoff also are strongly impacted by the length of the storage interval. In light of these results, several alternative strategies for storage of the EMWF model variables are recommended.
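    The aliasing point above, that sampling at an integral divisor of 24 h locks onto fixed diurnal phases while a nonintegral divisor drifts through the cycle, can be reproduced with a toy series: a constant plus a pure 24-h harmonic. The amplitudes are arbitrary illustrative values, not ECMWF output:

```python
import math

def sampled_mean(series_hourly, step_hours):
    """Time mean estimated from every step_hours-th hourly sample."""
    samples = series_hourly[::step_hours]
    return sum(samples) / len(samples)

# Hypothetical hourly series over 30 days: mean 10 plus a diurnal harmonic.
series = [10.0 + 3.0 * math.cos(2 * math.pi * t / 24) for t in range(24 * 30)]

true_mean = sum(series) / len(series)   # exactly 10 over whole days
mean_24h = sampled_mean(series, 24)     # always samples the same phase: biased to 13
mean_5h = sampled_mean(series, 5)       # phase drifts through the cycle: unbiased
```

Sampling every 24 h always lands on the diurnal peak here, biasing the mean by the full harmonic amplitude, whereas the 5-h interval (a nonintegral divisor of 24) cycles through all phases and recovers the true mean.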

  5. The Effect of Information Feedback Upon Psychophysical Judgments

    NASA Technical Reports Server (NTRS)

    Atkinson, Richard C.; Carterette, Edward C.; Kinchla, Ronald A.

    1964-01-01

    An analysis was made of the role of presentation schedules and information feedback on performance in a forced-choice signal detection task. The experimental results indicate that information feedback facilitates performance, but only for certain presentation schedules. The present study was designed to assess performance in a signal detection task under two conditions of information feedback. In the I-condition, S was told on each trial whether his detection response was correct or incorrect; in the Ī-condition S was given no feedback regarding the correctness of his response. The task involved a 2-response, forced-choice auditory detection problem. On each trial 2 temporal intervals were defined and S was required to report which interval he believed contained the signal; i.e., in one interval a tone burst in a background of white noise was presented, while the other interval contained only white noise. A trial will be denoted as s1 or s2, depending on whether the signal was embedded in the 1st or 2nd interval; the S's response will be denoted A1 or A2 to indicate which interval he reported contained the signal. The probability of an s1 trial will be denoted as γ. In this study two values of γ were used (.50 and .75) and, as indicated above, two conditions of information feedback. Thus there were 4 experimental conditions (50Ī, 50I, 75Ī, 75I); each S was run under all 4 conditions. Method: Gaussian noise was presented binaurally in S's headphones throughout a test session and the signal was a 1000-cps sinusoid tone; the tone was presented for 100 msec., including equal fall and rise times of 20 msec. The ratio of signal energy to noise power in a unit bandwidth was 2.9, and was constant throughout the study. The S was seated before a stimulus display board. On each trial a red warning light was flashed for 100 msec. Two amber lights then came on successively, each for 1 sec.; these lights defined the 2 observation intervals.
The onset of the signal occurred 500 msec. after the onset of one of the observation intervals. After the second amber light went off, S indicated his response by pressing 1 of 2 wand switches under cards reading "1st interval" and "2nd interval." For the I-condition a green light flashed on above the correct response key after S's response; the green light was omitted in the Ī-condition. Each trial lasted 6 sec. The Ss were 12 male college students with normal hearing. They were run for two practice sessions followed by 20 test sessions. Test sessions were run on consecutive days, 350 trials/day. Each day S ran on 1 of the 4 experimental conditions; in successive 4-day blocks S ran one day on each of the 4 experimental conditions in a random order. Thus, over 20 days each of the experimental conditions was repeated 5 times.

  6. Flight Deck Interval Management Avionics: Eye-Tracking Analysis

    NASA Technical Reports Server (NTRS)

    Latorella, Kara; Harden, John W.

    2015-01-01

    Interval Management (IM) is one NextGen method for achieving airspace efficiencies. In order to initiate IM procedures, Air Traffic Control provides an IM clearance to the IM aircraft's pilots that indicates an intended spacing from another aircraft (the target to follow, or TTF) and the point at which this should be achieved. Pilots enter the clearance in the flight deck IM (FIM) system; and once the TTF's Automatic Dependent Surveillance-Broadcast signal is available, the FIM algorithm generates target speeds to meet that IM goal. This study examined four Avionics Conditions (defined by the instrumentation and location presenting FIM information) and three Notification Methods (defined by the visual and aural alerts that notified pilots of IM-related events). Current commercial pilots flew descents into Dallas/Fort Worth in a high-fidelity commercial flight deck simulation environment with realistic traffic and communications. All 12 crews experienced each Avionics Condition, where order was counterbalanced over crews. Each crew used only one of the three Notification Methods. This paper presents results from eye-tracking data collected from both pilots, including: normalized number of samples falling within FIM displays, normalized heads-up time, noticing time, dwell time on first FIM display look after a new speed, a workload-related metric, and a measure comparing the scan paths of pilot flying and pilot monitoring; and discusses these in the context of other objective measures (vertical and speed profile deviations, response time to dial in commanded speeds, out-of-speed-conformance and reminder indications) and subjective measures (workload, situation awareness, usability, and operational acceptability).

  7. [Considerations for the Definition of an Interval of Vulnerability/Possibility in Adolescence].

    PubMed

    Orón Semper, José Víctor; Echarte Alonso, Luis Enrique

    2017-01-01

    This article explores the hypothesis that while maturation-related cognitive abilities reach maturity around the age of fifteen, maturation of social skills is delayed until well into the twenties. Our goal is to define this window of opportunity/vulnerability and the maturational status of young people within this interval. In this context, we argue that the maturational timing of the close of adolescence has an impact on how personal autonomy in decision-making is valued. In particular, we draw out implications for assessing the autonomy of youth in health issues and also in criminal liability. In conclusion, we offer some educational criteria that may provide guidance for implementing both social policy and educational programs.

  8. Interference-Detection Module in a Digital Radar Receiver

    NASA Technical Reports Server (NTRS)

    Fischman, Mark; Berkun, Andrew; Chu, Anhua; Freedman, Adam; Jourdan, Michael; McWatters, Dalia; Paller, Mimi

    2009-01-01

    A digital receiver in a 1.26-GHz spaceborne radar scatterometer now undergoing development includes a module for detecting radio-frequency interference (RFI) that could contaminate scientific data intended to be acquired by the scatterometer. The role of the RFI-detection module is to identify time intervals during which the received signal is likely to be contaminated by RFI and thereby to enable exclusion, from further scientific data processing, of signal data acquired during those intervals. The underlying concepts of detection of RFI and rejection of RFI-contaminated signal data are also potentially applicable in advanced terrestrial radio receivers, including software-defined radio receivers in general, receivers in cellular telephones and other wireless consumer electronic devices, and receivers in automotive collision-avoidance radar systems.
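    The abstract does not specify the detection algorithm; as a generic illustration of the underlying idea, a time interval can be flagged when its received power sits far above a robust noise baseline (here median + k·MAD, an assumed rule for illustration, not the flight design):

```python
def flag_rfi_intervals(power, k=5.0):
    """Return indices of time intervals likely contaminated by RFI.

    Generic sketch: an interval is flagged when its power exceeds
    median + k * MAD of the whole record. Median/MAD are used instead
    of mean/stddev so the interference itself does not inflate the
    baseline estimate.
    """
    n = len(power)
    ordered = sorted(power)
    median = ordered[n // 2] if n % 2 else 0.5 * (ordered[n // 2 - 1] + ordered[n // 2])
    devs = sorted(abs(p - median) for p in power)
    mad = devs[n // 2] if n % 2 else 0.5 * (devs[n // 2 - 1] + devs[n // 2])
    threshold = median + k * mad
    return [i for i, p in enumerate(power) if p > threshold]

# Mostly thermal-noise-level power with two strong interference bursts
record = [1.0, 1.1, 0.9, 1.05, 9.0, 1.0, 0.95, 12.0, 1.02, 1.0]
flagged = flag_rfi_intervals(record)   # → [4, 7]
```

Data acquired during the flagged intervals would then be excluded from further science processing, as the abstract describes.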

  9. Properties of behavior under different random ratio and random interval schedules: A parametric study.

    PubMed

    Dembo, M; De Penfold, J B; Ruiz, R; Casalta, H

    1985-03-01

    Four pigeons were trained to peck a key under different values of a temporally defined independent variable (T) and different probabilities of reinforcement (p). Parameter T is a fixed repeating time cycle, and p the probability of reinforcement for the first response of each cycle T. Two dependent variables were used: mean response rate and mean postreinforcement pause. For all values of p a critical value of the independent variable T was found (T = 1 sec) at which marked changes took place in response rate and postreinforcement pauses. Behavior typical of random ratio schedules was obtained at T < 1 sec and behavior typical of random interval schedules at T > 1 sec. Copyright © 1985. Published by Elsevier B.V.
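    The schedule above, a repeating cycle T whose first response is reinforced with probability p, is straightforward to simulate. The sketch below (response pattern and parameter values invented) shows why a cycle much shorter than the inter-response time behaves like a random ratio schedule, roughly p per response, while a long cycle behaves like a random interval schedule, capped near p reinforcers per cycle:

```python
import random

def simulate_schedule(T, p, response_times, seed=0):
    """Count reinforcements under a T-cycle schedule.

    Time is divided into repeating cycles of length T seconds; the first
    response falling in each cycle is reinforced with probability p.
    """
    rng = random.Random(seed)
    reinforced = 0
    last_cycle = -1
    for t in response_times:
        cycle = int(t // T)
        if cycle != last_cycle:            # first response of this cycle
            if rng.random() < p:
                reinforced += 1
            last_cycle = cycle
    return reinforced

# A steady responder: one response every 2 s for 1000 s.
responses = [2.0 * i for i in range(1, 501)]

# T << inter-response time: every response starts a new cycle,
# so each is reinforced with probability p (ratio-like, ~250 of 500).
short_T = simulate_schedule(0.5, 0.5, responses)

# T >> inter-response time: only ~50 cycles exist, so at most ~25
# reinforcers are possible however fast the animal responds (interval-like).
long_T = simulate_schedule(20.0, 0.5, responses)
```

This is why a critical T near the typical inter-response time separates the two behavioral regimes reported above.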

  10. Interpregnancy interval and risk of autistic disorder.

    PubMed

    Gunnes, Nina; Surén, Pål; Bresnahan, Michaeline; Hornig, Mady; Lie, Kari Kveim; Lipkin, W Ian; Magnus, Per; Nilsen, Roy Miodini; Reichborn-Kjennerud, Ted; Schjølberg, Synnve; Susser, Ezra Saul; Øyen, Anne-Siri; Stoltenberg, Camilla

    2013-11-01

    A recent California study reported increased risk of autistic disorder in children conceived within a year after the birth of a sibling. We assessed the association between interpregnancy interval and risk of autistic disorder using nationwide registry data on pairs of singleton full siblings born in Norway. We defined interpregnancy interval as the time from birth of the first-born child to conception of the second-born child in a sibship. The outcome of interest was autistic disorder in the second-born child. Analyses were restricted to sibships in which the second-born child was born in 1990-2004. Odds ratios (ORs) were estimated by fitting ordinary logistic models and logistic generalized additive models. The study sample included 223,476 singleton full-sibling pairs. In sibships with interpregnancy intervals <9 months, 0.25% of the second-born children had autistic disorder, compared with 0.13% in the reference category (≥ 36 months). For interpregnancy intervals shorter than 9 months, the adjusted OR of autistic disorder in the second-born child was 2.18 (95% confidence interval 1.42-3.26). The risk of autistic disorder in the second-born child was also increased for interpregnancy intervals of 9-11 months in the adjusted analysis (OR = 1.71 [95% CI = 1.07-2.64]). Consistent with a previous report from California, interpregnancy intervals shorter than 1 year were associated with increased risk of autistic disorder in the second-born child. A possible explanation is depletion of micronutrients in mothers with closely spaced pregnancies.
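    As an illustration of the crude (unadjusted) calculation behind such estimates: the 2×2 counts below are hypothetical, chosen only to reproduce the reported 0.25% and 0.13% proportions, and the resulting crude OR (≈1.93) naturally differs from the paper's covariate-adjusted 2.18:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases,   b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases.
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts matching the reported proportions:
# 0.25% of 20,000 short-interval births vs 0.13% of 100,000 referents.
or_, lo, hi = odds_ratio_ci(50, 19_950, 130, 99_870)   # crude OR ≈ 1.93
```

A confidence interval excluding 1.0, as here, is what marks the association as statistically significant.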

  11. Measuring skew in average surface roughness as a function of surface preparation

    NASA Astrophysics Data System (ADS)

    Stahl, Mark T.

    2015-08-01

    Characterizing surface roughness is important for predicting optical performance. Better measurement of surface roughness reduces polishing time, saves money and allows the science requirements to be better defined. This study characterized statistics of average surface roughness as a function of polishing time. Average surface roughness was measured at 81 locations using a Zygo® white light interferometer at regular intervals during the polishing process. Each data set was fit to a normal and Largest Extreme Value (LEV) distribution; then tested for goodness of fit. We show that the skew in the average data changes as a function of polishing time.
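    A common way to fit the Largest Extreme Value (Gumbel) law is the method of moments: scale = s·√6/π and loc = mean − γ·scale, with γ the Euler-Mascheroni constant. A minimal sketch on synthetic data (the location and scale values are invented, not the study's roughness measurements):

```python
import math
import random

EULER_GAMMA = 0.5772156649

def gumbel_fit_moments(data):
    """Method-of-moments fit of the largest-extreme-value (Gumbel) law:
    scale = s * sqrt(6) / pi, loc = mean - EULER_GAMMA * scale."""
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / (n - 1)
    scale = math.sqrt(6 * var) / math.pi
    return mean - EULER_GAMMA * scale, scale

def sample_skewness(data):
    """Moment estimator of skewness; Gumbel data should show ~1.14."""
    n = len(data)
    mean = sum(data) / n
    m2 = sum((x - mean) ** 2 for x in data) / n
    m3 = sum((x - mean) ** 3 for x in data) / n
    return m3 / m2 ** 1.5

# Synthetic "roughness" draws from Gumbel(loc=20, scale=2) via inverse CDF:
# x = loc - scale * ln(-ln(u)) for u uniform in (0, 1).
rng = random.Random(42)
data = [20.0 - 2.0 * math.log(-math.log(rng.random())) for _ in range(5000)]
loc, scale = gumbel_fit_moments(data)
```

The positive skewness is the practical signature distinguishing LEV-distributed roughness data from normally distributed data, whose skewness is zero.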

  12. ANALYSIS OF OIL-BEARING CRETACEOUS SANDSTONE HYDROCARBON RESERVOIRS, EXCLUSIVE OF THE DAKOTA SANDSTONE, ON THE JICARILLA APACHE INDIAN RESERVATION, NEW MEXICO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jennie Ridgley

    2000-01-21

    An additional 450 wells were added to the structural database; there are now 2550 wells in the database with corrected tops on the Juana Lopez, base of the Bridge Creek Limestone, and datum. This completes the structural database compilation. Fifteen oil and five gas fields from the Mancos-ElVado interval were evaluated with respect to the newly defined sequence stratigraphic model for this interval. The five gas fields are located away from the structural margins of the deep part of the San Juan Basin. All the fields have characteristics of basin-centered gas and can be considered continuous gas accumulations as recently defined by the U.S. Geological Survey. Oil production occurs in thinly interbedded sandstone and shale or in discrete sandstone bodies. Production is both from transgressive and regressive strata as redefined in this study. Oil production is both stratigraphically and structurally controlled, with production occurring along the Chaco slope or in steeply west-dipping rocks along the east margin of the basin. The ElVado Sandstone of subsurface usage is redefined to encompass a narrower interval; it appears to be more time-correlative with the Dalton Sandstone. Thus, it was deposited as part of a regressive sequence, in contrast to the underlying rock units, which were deposited during transgression.

  13. An institutional study of time delays for symptomatic carotid endarterectomy.

    PubMed

    Charbonneau, Philippe; Bonaventure, Paule Lessard; Drudi, Laura M; Beaudoin, Nathalie; Blair, Jean-François; Elkouri, Stéphane

    2016-12-01

    The aim of this study was to assess time delays between first cerebrovascular symptoms and carotid endarterectomy (CEA) at a single center and to systematically evaluate causes of these delays. Consecutive adult patients who underwent CEAs between January 2010 and September 2011 at a single university-affiliated center (Centre Hospitalier de l'Université Montréal-Hôtel-Dieu Hospital, Montreal) were identified from a clinical database and operative records. Covariates of interest were extracted from electronic medical records. Timing and nature of the first cerebrovascular symptoms were also documented. The first medical contact and pathway of referral were also assessed. When possible, the ABCD² score (age, blood pressure, clinical features, duration of symptoms, and diabetes) was calculated to estimate the further risk of stroke. The nonparametric Wilcoxon test was used to assess differences in time intervals between two variables. The Kruskal-Wallis test was used to assess differences in time intervals when comparing more than two variables. A multivariate linear regression analysis was performed using covariates that were determined to be statistically significant in our sensitivity analyses. The cohort consisted of 111 patients with documented symptomatic carotid stenosis undergoing surgical intervention. Thirty-nine percent of all patients were operated on within 2 weeks from the first cerebrovascular symptoms. The median time between the occurrence of the first neurologic symptom and the CEA procedure was 25 (interquartile range [IQR], 11-85) days. The patient-dependent delay, defined as the median delay between the first neurologic symptom and the first medical contact, was 1 (IQR, 0-14) day. The medical-dependent delay was defined as the time interval between the first medical contact and CEA. This included the delay between the first medical contact and the request for surgery consultation (median, 3 [IQR, 1-10] days).
The multivariate regression model demonstrated that the emergency physician as referral source (P = .0002) was statistically significant for reducing CEA delay. Outpatient investigation (P = .02), first medical contact with a general practitioner (P = .0002), and hospital center I as the referral center (P = .045) were also statistically significant in extending CEA delay when the model was adjusted over all covariates. In this center, there was no correlation between the ABCD² risk score and waiting time for surgery. The majority of our cohort falls short of the recommended 2-week interval for performing CEA. Factors contributing to reduced CEA delay were presentation to an emergency department, inpatient investigation, and a stroke center where a vascular surgeon is available. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.

  14. Stress-induced ST-segment deviation in relation to the presence and severity of coronary artery disease in patients with normal myocardial perfusion imaging.

    PubMed

    Weinsaft, Jonathan W; Manoushagian, Shant J; Patel, Taral; Shakoor, Aqsa; Kim, Robert J; Mirchandani, Sunil; Lin, Fay; Wong, Franklin J; Szulc, Massimiliano; Okin, Peter M; Kligfield, Paul D; Min, James K

    2009-01-01

    To assess the utility of stress electrocardiography (ECG) for identifying the presence and severity of obstructive coronary artery disease (CAD) defined by coronary computed tomographic angiography (CCTA) among patients with normal nuclear myocardial perfusion imaging (MPI). The study population comprised 119 consecutive patients with normal MPI who also underwent CCTA (interval 3.5±3.8 months). Stress ECG was performed at the time of MPI. CCTA and MPI were interpreted using established scoring systems, and CCTA was used to define the presence and extent of CAD, which was quantified by a coronary artery jeopardy score. Within this population, 28 patients (24%) had obstructive CAD identified by CCTA. The most common CAD pattern was single-vessel CAD (61%), although proximal vessel involvement was present in 46% of patients. Patients with CAD were nearly three times more likely to have positive standard test responses (≥1 mm ST-segment deviation) than patients with patent coronary arteries (36 vs. 13%, P=0.007). In multivariate analysis, a positive ST-segment test response was an independent marker for CAD (odds ratio: 2.02, confidence interval: 1.09-3.78, P=0.03) even after adjustment for a composite of clinical cardiac risk factors (odds ratio: 1.85, confidence interval: 1.05-3.23, P=0.03). Despite uniformly normal MPI, the mean coronary jeopardy score was three-fold higher among patients with a positive than among those with a negative ST-segment response to exercise or dobutamine stress (1.9±2.7 vs. 0.5±1.4, P=0.03). Stress-induced ST-segment deviation is an independent marker for obstructive CAD among patients with normal MPI. A positive stress ECG identifies patients with a greater anatomic extent of CAD as quantified by the coronary jeopardy score.

  15. Expectations of clinical teachers and faculty regarding development of the CanMEDS-Family Medicine competencies: Laval developmental benchmarks scale for family medicine residency training.

    PubMed

    Lacasse, Miriam; Théorêt, Johanne; Tessier, Sylvie; Arsenault, Louise

    2014-01-01

    The CanMEDS-Family Medicine (CanMEDS-FM) framework defines the expected terminal enabling competencies (EC) for family medicine (FM) residency training in Canada. However, benchmarks throughout the 2-year program are not yet defined. This study aimed to identify expected time frames for achievement of the CanMEDS-FM competencies during FM residency training and to create a developmental benchmarks scale for family medicine residency training. This 2011-2012 study followed a Delphi methodology. Selected faculty and clinical teachers identified, via questionnaire, the expected time of EC achievement from the beginning of residency to one year in practice (0, 6, 12, […] 36 months). The 15th-85th percentile interval became the expected competency achievement interval. Content validity of the obtained benchmarks was assessed through a second Delphi round. The 1st and 2nd rounds were completed by 33 and 27 respondents, respectively. A developmental benchmarks scale was designed after the 1st round to illustrate expectations regarding achievement of each EC. The 2nd round (content validation) led to minor adjustments (1.9±2.7 months) of the intervals for 44 of the 92 competencies, the others remaining unchanged. The Laval Developmental Benchmarks Scale for Family Medicine clarifies expectations regarding achievement of competencies throughout FM training. In a competency-based education system, this now allows identification and management of outlying residents, both those excelling and those needing remediation. Further research should focus on assessment of the scale's reliability after pilot implementation in family medicine clinical teaching units at Laval University, and should corroborate the established timeline at other sites.
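
    The percentile-interval step described above can be sketched directly; the respondent values below are hypothetical, not the study's data:

```python
import numpy as np

def achievement_interval(times_months, lo=15, hi=85):
    """Return the [15th, 85th] percentile interval of expected
    achievement times (in months) reported by Delphi respondents."""
    return (float(np.percentile(times_months, lo)),
            float(np.percentile(times_months, hi)))

# Hypothetical responses from 27 respondents for one enabling competency
responses = [6, 6, 12, 12, 12, 12, 18, 18, 18, 18, 18, 18, 18,
             18, 18, 24, 24, 24, 24, 24, 24, 24, 24, 30, 30, 36, 36]
interval = achievement_interval(responses)
```

    For these invented responses the expected achievement interval spans roughly 12 to 25 months.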

  16. Temporal patterns of radiographic infiltration in severely traumatized patients with and without adult respiratory distress syndrome.

    PubMed

    Johnson, K S; Bishop, M H; Stephen, C M; Jorgens, J; Shoemaker, W C; Shori, S K; Ordog, G; Thadepalli, H; Appel, P L; Kram, H B

    1994-05-01

    We prospectively evaluated the patterns of pulmonary structural and functional changes in 100 consecutive surgical intensive care unit trauma patients who had (1) emergent major surgery, (2) a pelvic fracture, or (3) two or more major long bone fractures. For each patient, arterial blood gas measurements (ABGs), central venous pressure (CVP), pulmonary capillary occlusion pressure (PAOP), thoracic compliance, arterial oxygen tension/fraction of inspired oxygen (PAO2/FIO2), pulmonary venous admixture (Qs/Qt), and portable chest roentgenograms were sequentially tracked. The senior staff radiologist interpreted all chest roentgenograms. Pulmonary infiltration was quantitated in each of six fields using a scale ranging from 0 to 4, with 0 being no infiltration and 4 being the maximum. Adult respiratory distress syndrome (ARDS) was defined as follows: Qs/Qt ≥ 20%, PAO2/FIO2 < 250, or both; dependence on mechanical ventilation for life support for ≥ 24 hours; PAOP or CVP or both < 20 mm Hg; and thoracic compliance < 50 mL/cm H2O. Time zero (T0), the time of onset of ARDS, was defined as the time these criteria were met. Eighty-three of the 100 study group patients had penetrating injuries, and 17 were admitted with blunt trauma. Fifty-one of 100 patients developed ARDS; 36 of the 51 died. Only 4 of 49 (8%) patients without ARDS died. The injured lungs of patients with and without ARDS had similar amounts of infiltration over most measured time intervals. The noninjured lungs of the ARDS patients, however, had significantly greater infiltration than those of patients without ARDS at T0 and over subsequent time intervals.
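
    The ARDS definition above is a conjunction of threshold criteria, which can be written as a simple predicate; the argument names below are hypothetical, not from the paper:

```python
def meets_ards_criteria(qs_qt_pct, pao2_fio2, vent_hours,
                        paop_mm_hg, cvp_mm_hg, compliance_ml_cm_h2o):
    """Apply the study's ARDS definition: Qs/Qt >= 20% or
    PaO2/FiO2 < 250 (or both); mechanical ventilation >= 24 h;
    PAOP or CVP (or both) < 20 mm Hg; compliance < 50 mL/cm H2O."""
    gas_exchange_impaired = qs_qt_pct >= 20 or pao2_fio2 < 250
    ventilator_dependent = vent_hours >= 24
    low_filling_pressure = paop_mm_hg < 20 or cvp_mm_hg < 20
    low_compliance = compliance_ml_cm_h2o < 50
    return (gas_exchange_impaired and ventilator_dependent
            and low_filling_pressure and low_compliance)

# Hypothetical patient: high shunt fraction, ventilated 48 h
ards = meets_ards_criteria(25, 300, 48, 15, 22, 40)
```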

  17. Wait Times Experienced by Lung Cancer Patients in the BC Southern Interior to Obtain Oncologic Care: Exploration of the Intervals from First Abnormal Imaging to Oncologic Treatment

    PubMed Central

    Chowdhury, Rezwan; Boyce, Andrew; Halperin, Ross

    2015-01-01

    Background: Lung cancer is associated with rapid disease progression; disease can progress significantly over a period of four to eight weeks. This study examines the time intervals lung cancer patients from the interior of British Columbia (BC) experience while undergoing diagnostic evaluation, biopsy, staging, and preparation for treatment. Methods: A chart review of lung cancer patients (n=231) referred to the BC Cancer Agency Centre for the Southern Interior between January 1, 2010 and December 31, 2011 was performed. Time zero was defined as the date of the first abnormal chest imaging. Time intervals, expressed as medians, to specialist consult, biopsy, oncologic referral, initial oncology consultation, and commencement of oncologic treatment were obtained. Results: The median time interval from first abnormal chest imaging to a specialist consultation was 18 days (interquartile range, IQR, 7-36). An additional nine days elapsed prior to biopsy in the form of bronchoscopy, CT-guided biopsy, or sputum cytology (median; IQR, 3-21); if lobectomy was required, 18 days elapsed (median; IQR, 9-28). Eight days were required for pathologic diagnosis and subsequent referral to the cancer centre (median; IQR, 3-16.5). Once the referral was received, 10 days elapsed prior to consultation with either a medical or radiation oncologist (median; IQR, 5-18). Finally, eight days were required for initiation of radiation and/or chemotherapy (median; IQR, 1-15). The median wait time from detection of lung cancer on imaging to oncologic treatment in the form of radiation and/or chemotherapy was 65.5 days (IQR, 41.5-104.3). Interpretation: Patients in the BC Southern Interior experience considerable delays in accessing lung cancer care. During this time, the disease has the potential to progress significantly, and it is possible that a subset of patients may lose their opportunity for curative intent treatment. PMID:26543688
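
    The interval statistics reported above (medians with interquartile ranges) can be reproduced with a short helper; the wait times below are invented for illustration:

```python
import numpy as np

def median_iqr(days):
    """Median and interquartile range (25th-75th percentile) of a
    list of wait times in days."""
    q25, q50, q75 = np.percentile(days, [25, 50, 75])
    return float(q50), (float(q25), float(q75))

# Hypothetical imaging-to-specialist waits for a handful of patients
waits = [7, 10, 14, 18, 22, 30, 36]
med, iqr = median_iqr(waits)
```

    For this toy sample the median is 18 days with an IQR of 12 to 26 days, the same form of summary the chart review reports.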

  18. Defining the Ideal Time Interval Between Planned Induction Therapy and Surgery for Stage IIIA Non-Small Cell Lung Cancer.

    PubMed

    Samson, Pamela; Crabtree, Traves D; Robinson, Cliff G; Morgensztern, Daniel; Broderick, Stephen; Krupnick, A Sasha; Kreisel, Daniel; Patterson, G Alexander; Meyers, Bryan; Puri, Varun

    2017-04-01

    Induction therapy leads to significant improvement in survival for selected patients with stage IIIA non-small cell lung cancer. The ideal time interval between induction therapy and surgery remains unknown. Clinical stage IIIA non-small cell lung cancer patients receiving induction therapy and surgery were identified in the National Cancer Database. Delayed surgery was defined as greater than or equal to 3 months after starting induction therapy. A logistic regression model identified variables associated with delayed surgery. Cox proportional hazards modeling and Kaplan-Meier analysis were performed to evaluate variables independently associated with overall survival. From 2006 to 2010, 1,529 of 2,380 patients (64.2%) received delayed surgery. Delayed surgery patients were older (61.2 ± 10.0 years versus 60.3 ± 9.2; p = 0.03), more likely to be non-white (12.4% versus 9.7%; p = 0.046), and less likely to have private insurance (50% versus 58.2%; p = 0.002). Delayed surgery patients were also more likely to have a sublobar resection (6.3% versus 2.9%). On multivariate analysis, age greater than 68 years (odds ratio [OR], 1.37; 95% confidence interval [CI], 1.1 to 1.7) was associated with delayed surgery, whereas white race (OR, 0.75; 95% CI, 0.57 to 0.99) and private insurance status (OR, 0.82; 95% CI, 0.68 to 0.99) were associated with early surgery. Delayed surgery was associated with a higher risk of long-term mortality (hazard ratio, 1.25; 95% CI, 1.07 to 1.47). Delayed surgery after induction therapy for stage IIIA lung cancer is associated with shorter survival and is influenced by both social and physiologic factors. Prospective work is needed to further characterize the relationship between patient comorbidities and functional status and receipt of timely surgery.

  19. Effect of inter-train interval on the induction of repetition suppression of motor-evoked potentials using transcranial magnetic stimulation.

    PubMed

    Pitkänen, Minna; Kallioniemi, Elisa; Julkunen, Petro

    2017-01-01

    Repetition suppression (RS) is evident as a weakened response to repeated stimuli after the initial response. RS has been demonstrated in motor-evoked potentials (MEPs) induced with transcranial magnetic stimulation (TMS). Here, we investigated the effect of inter-train interval (ITI) on the induction of RS of MEPs in an attempt to optimize the investigative protocols. Trains of TMS pulses, targeted to the primary motor cortex by neuronavigation, were applied at a stimulation intensity of 120% of the resting motor threshold. The stimulus trains included either four or twenty pulses with an inter-stimulus interval (ISI) of 1 s. The ITI was defined here as the interval between the last pulse in a train and the first pulse in the next train; the ITIs used were 1, 3, 4, 6, 7, 12, and 17 s. RS was observed with all ITIs except the 1-s ITI, for which the ITI was equal to the ISI. RS was more pronounced with longer ITIs. Shorter ITIs may not allow sufficient time for a return to baseline, whereas longer ITIs may allow more recovery time and in turn demonstrate greater RS. RS may reflect a startle-like response to the first pulse of a train followed by habituation. Our results indicate that RS can be studied with confidence at relatively short ITIs of 6 s and above.
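
    Repetition suppression in a pulse train is commonly quantified by comparing the responses to later pulses against the response to the first pulse. A minimal sketch of one such ratio, over hypothetical MEP amplitudes (mV) rather than the study's recordings:

```python
def repetition_suppression(mep_amplitudes):
    """Ratio of the mean MEP amplitude of pulses 2..n to the first
    pulse's amplitude; values < 1 indicate suppression."""
    first, rest = mep_amplitudes[0], mep_amplitudes[1:]
    return (sum(rest) / len(rest)) / first

# Hypothetical 4-pulse train at ISI = 1 s
train = [1.2, 0.7, 0.6, 0.65]
rs_ratio = repetition_suppression(train)
```

    A ratio well below 1, as in this toy train, is the signature the study looks for after each ITI.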

  20. 40 CFR 1066.705 - Symbols, abbreviations, acronyms, and units of measure.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... uses the following subscripts to define a quantity: Subscript Quantity int speed interval abs absolute... speed interval span span quantity test test quantity uncor uncorrected quantity zero zero quantity (e...

  1. Power frequency spectrum analysis of surface EMG signals of upper limb muscles during elbow flexion - A comparison between healthy subjects and stroke survivors.

    PubMed

    Angelova, Silvija; Ribagin, Simeon; Raikova, Rositsa; Veneva, Ivanka

    2018-02-01

    After a stroke, motor units stop working properly, and large, fast-twitch units are more frequently affected. Their impaired function can be investigated during dynamic tasks using electromyographic (EMG) signal analysis. The aim of this paper is to investigate changes in the parameters of the power/frequency function during elbow flexion between affected, non-affected, and healthy muscles. Fifteen healthy subjects and ten stroke survivors participated in the experiments. Electromyographic data from 6 muscles of the upper limbs during elbow flexion were filtered and normalized to the amplitudes of EMG signals during maximal isometric tasks. The moments when motion started and when the flexion angle reached its maximal value were found. Equal intervals of 0.3407 s were defined between these two moments, and one additional interval was added before the start of the flexion. For each of these intervals the power/frequency function of the EMG signals was calculated. The mean (MNF) and median (MDF) frequencies, the maximal power (MPw), and the area under the power function (APw) were calculated. MNF was always higher than MDF. A significant decrease in these frequencies was found in only three post-stroke survivors. The frequencies in the first time interval were nearly always the highest among all intervals. The maximal power was nearly zero during the first time interval and increased during the subsequent ones. The largest values of MPw and APw were found for the flexor muscles, and in stroke survivors they were larger for the muscles of the affected arm than for those of the non-affected one.
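
    The mean (MNF) and median (MDF) frequencies are standard spectral summaries: MNF is the power-weighted mean frequency, and MDF is the frequency that splits the power spectrum into two halves of equal area. A sketch on a synthetic spectrum (not real EMG data):

```python
import numpy as np

def mean_median_frequency(freqs, power):
    """MNF = sum(f*P)/sum(P); MDF = frequency where the cumulative
    power first reaches half of the total power."""
    freqs = np.asarray(freqs, dtype=float)
    power = np.asarray(power, dtype=float)
    mnf = float(np.sum(freqs * power) / np.sum(power))
    cum = np.cumsum(power)
    mdf = float(freqs[np.searchsorted(cum, cum[-1] / 2.0)])
    return mnf, mdf

# Synthetic power spectrum peaked around 80 Hz
f = np.arange(0, 250, 1.0)
p = np.exp(-((f - 80.0) / 40.0) ** 2)
mnf, mdf = mean_median_frequency(f, p)
```

    On a skewed EMG-like spectrum MNF typically exceeds MDF, consistent with the paper's observation; on this symmetric toy spectrum the two nearly coincide.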

  2. Advanced analysis of finger-tapping performance: a preliminary study.

    PubMed

    Barut, Cağatay; Kızıltan, Erhan; Gelir, Ethem; Köktürk, Fürüzan

    2013-06-01

    The finger-tapping test is a commonly employed quantitative assessment tool used to measure motor performance in the upper extremities. This task is a complex motion that is affected by external stimuli, mood, and health status. The complexity of this task is difficult to explain with a single average intertap-interval value (the time difference between successive taps), which only provides general information and neglects the temporal effects of the aforementioned factors. This study evaluated the time course of average intertap-interval values and the patterns of variation in both the right and left hands of right-handed subjects using a computer-based finger-tapping system. Cross-sectional study. Thirty-eight male individuals aged between 20 and 28 years (Mean±SD = 22.24±1.65) participated in the study. Participants were asked to perform a single-finger tapping test over a 10-second test period. Only the results of the 35 right-handed (RH) participants were considered in this study. The test records the time of each tap and saves the data as the time differences between successive taps for further analysis. The average number of taps and the temporal fluctuation patterns of the intertap-intervals were calculated and compared. The variations in the intertap-interval were evaluated with the best curve fit method. An average tapping speed or tapping rate can reliably be defined for a single-finger tapping test by analysing the graphically presented data of the number of taps within the test period. However, a different presentation of the same data, namely the intertap-interval values, shows temporal variation as the number of taps increases. Curve fitting applications indicate that the variation has a biphasic nature. The measures obtained in this study reflect the complex nature of the finger-tapping task and are suggested to provide reliable information regarding hand performance.
Moreover, the equation reflects both the variations in and the general patterns associated with the task.
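
    From a list of tap timestamps, the intertap intervals and the average tapping rate described above follow directly; the timestamps below are hypothetical:

```python
def intertap_intervals(tap_times):
    """Time differences between successive taps, in seconds."""
    return [t2 - t1 for t1, t2 in zip(tap_times, tap_times[1:])]

def tapping_rate(tap_times, duration_s):
    """Average number of taps per second over the test period."""
    return len(tap_times) / duration_s

# Hypothetical timestamps (s) from a 2-second excerpt of a tapping test
taps = [0.00, 0.18, 0.37, 0.55, 0.75, 0.96, 1.18, 1.41, 1.65, 1.90]
itis = intertap_intervals(taps)
rate = tapping_rate(taps, 2.0)
```

    The interval series `itis`, rather than the single rate, is what exposes the temporal drift the study fits a curve to.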

  3. An intermediate orbit calculated from three position vectors: accuracy of approximation of a perturbed motion. (Russian Title: Промежуточная орбита, вычисленная по трем векторам положения: точность аппроксимации возмущенного движения)

    NASA Astrophysics Data System (ADS)

    Shefer, V. A.

    2015-12-01

    We examine the intermediate perturbed orbit proposed previously by the author, determined from three position vectors of a small celestial body. It is shown theoretically that, for a small reference time interval covering the body's positions, the accuracy with which this orbit approximates the real motion corresponds approximately to fourth-order tangency; the smaller the reference time interval, the better this correspondence. Laws governing the variation of the methodical errors of the constructed intermediate orbit with the length of the reference time interval are deduced. According to these laws, the convergence rate of the method to the exact solution (as the reference time interval is reduced) is in general three orders higher than that of conventional methods based on an unperturbed Keplerian orbit. The considered orbit is among the most accurate in the set of orbits of its class, as determined by the order of tangency. The theoretical results are validated by numerical examples. The work was supported by the Ministry of Education and Science of the Russian Federation, project no. 2014/223(1567).

  4. New Approaches to Robust Confidence Intervals for Location: A Simulation Study.

    DTIC Science & Technology

    1984-06-01

    obtain a denominator for the test statistic. Those statistics based on location estimates derived from Hampel’s redescending influence function or v...defined an influence function for a test in terms of the behavior of its P-values when the data are sampled from a model distribution modified by point...proposal could be used for interval estimation as well as hypothesis testing, the extension is immediate. Once an influence function has been defined

  5. 76 FR 27356 - Exemptions From Certain Prohibited Transaction Restrictions

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-11

    ... IRA (as defined in Section V(e)) owner who is independent (as defined in Section V(d)) of Wachovia... dividend that is reset at specific intervals through a Dutch auction process; (d) A person is ``independent... defined in Section II(d)) owner who is independent (as defined in Section II(c)) of Baird. Notwithstanding...

  6. Film annotation system for a space experiment

    NASA Technical Reports Server (NTRS)

    Browne, W. R.; Johnson, S. S.

    1989-01-01

    This microprocessor system was designed to control and annotate a Nikon 35 mm camera for the purpose of obtaining photographs and data at predefined time intervals. The single STD BUSS interface card was designed in such a way as to allow it to be used either in a stand-alone application with minimum features or installed in a STD BUSS computer allowing for maximum features. This control system also allows the exposure of twenty-eight alphanumeric characters across the bottom of each photograph. The data contain such information as camera identification, frame count, user-defined text, and time to 0.01 second.

  7. User's manual for THPLOT, A FORTRAN 77 Computer program for time history plotting

    NASA Technical Reports Server (NTRS)

    Murray, J. E.

    1982-01-01

    A general purpose FORTRAN 77 computer program (THPLOT) for plotting time histories using Calcomp pen plotters is described. The program is designed to read a time history data file and to generate time history plots for selected time intervals and/or selected data channels. The capabilities of the program are described. The card input required to define the plotting operation is described and examples of card input and the resulting plotted output are given. The examples are followed by a description of the printed output, including both normal output and error messages. Lastly, implementation of the program is described. A complete listing of the program with reference maps produced by the CDC FTN 5.0 compiler is included.

  8. Effect of the number of request calls on the time from call to hospital arrival: a cross-sectional study of an ambulance record database in Nara prefecture, Japan.

    PubMed

    Hanaki, Nao; Yamashita, Kazuto; Kunisawa, Susumu; Imanaka, Yuichi

    2016-12-09

    In Japan, ambulance staff sometimes must make request calls to find hospitals that can accept patients because of an inadequate information sharing system. This study aimed to quantify effects of the number of request calls on the time interval between an emergency call and hospital arrival. A cross-sectional study of an ambulance records database in Nara prefecture, Japan. A total of 43 663 patients (50% women; 31.2% aged 80 years and over): (1) transported by ambulance from April 2013 to March 2014, (2) aged 15 years and over, and (3) with suspected major illness. The time from call to hospital arrival, defined as the time interval from receipt of an emergency call to ambulance arrival at a hospital. The mean time interval from emergency call to hospital arrival was 44.5 min, and the mean number of requests was 1.8. Multilevel linear regression analysis showed that ∼43.8% of variations in transportation times were explained by patient age, sex, season, day of the week, time, category of suspected illness, person calling for the ambulance, emergency status at request call, area and number of request calls. A higher number of request calls was associated with longer time intervals to hospital arrival (addition of 6.3 min per request call; p<0.001). In an analysis dividing areas into three groups, there were differences in transportation time for diseases needing cardiologists, neurologists, neurosurgeons and orthopaedists. The study revealed 6.3 additional minutes needed in transportation time for every refusal of a request call, and also revealed disease-specific delays among specific areas. An effective system should be collaboratively established by policymakers and physicians to ensure the rapid identification of an available hospital for patient transportation in order to reduce the time from the initial emergency call to hospital arrival. Published by the BMJ Publishing Group Limited. 

  9. Effect of the number of request calls on the time from call to hospital arrival: a cross-sectional study of an ambulance record database in Nara prefecture, Japan

    PubMed Central

    Hanaki, Nao; Yamashita, Kazuto; Kunisawa, Susumu; Imanaka, Yuichi

    2016-01-01

    Objectives In Japan, ambulance staff sometimes must make request calls to find hospitals that can accept patients because of an inadequate information sharing system. This study aimed to quantify effects of the number of request calls on the time interval between an emergency call and hospital arrival. Design and setting A cross-sectional study of an ambulance records database in Nara prefecture, Japan. Cases A total of 43 663 patients (50% women; 31.2% aged 80 years and over): (1) transported by ambulance from April 2013 to March 2014, (2) aged 15 years and over, and (3) with suspected major illness. Primary outcome measures The time from call to hospital arrival, defined as the time interval from receipt of an emergency call to ambulance arrival at a hospital. Results The mean time interval from emergency call to hospital arrival was 44.5 min, and the mean number of requests was 1.8. Multilevel linear regression analysis showed that ∼43.8% of variations in transportation times were explained by patient age, sex, season, day of the week, time, category of suspected illness, person calling for the ambulance, emergency status at request call, area and number of request calls. A higher number of request calls was associated with longer time intervals to hospital arrival (addition of 6.3 min per request call; p<0.001). In an analysis dividing areas into three groups, there were differences in transportation time for diseases needing cardiologists, neurologists, neurosurgeons and orthopaedists. Conclusions The study revealed 6.3 additional minutes needed in transportation time for every refusal of a request call, and also revealed disease-specific delays among specific areas. An effective system should be collaboratively established by policymakers and physicians to ensure the rapid identification of an available hospital for patient transportation in order to reduce the time from the initial emergency call to hospital arrival. PMID:27940625
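
    The headline estimate above (about 6.3 additional minutes per request call) comes from a multilevel regression; the single-covariate least-squares core of such an estimate can be sketched on synthetic records. All numbers below are invented and seeded for reproducibility:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic records: number of request calls and transport time (min),
# generated with a true slope of 6.3 min per call plus noise
calls = rng.integers(1, 6, size=500)
times = 32.0 + 6.3 * calls + rng.normal(0.0, 4.0, size=500)

# Ordinary least squares via lstsq: solves for [intercept, slope]
X = np.column_stack([np.ones_like(calls, dtype=float),
                     calls.astype(float)])
intercept, slope = np.linalg.lstsq(X, times, rcond=None)[0]
```

    The fitted slope recovers the per-call delay built into the synthetic data; the published analysis additionally adjusts for patient, temporal, and area covariates.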

  10. Contracting for Agile Software Development in the Department of Defense: An Introduction

    DTIC Science & Technology

    2015-08-01

    Requirements are fixed at a more granular level; reviews of the work product happen more frequently and assess each individual increment rather than a “ big bang ...boundaries than “ big - bang ” development. The implementation of incremental or progressive reviews enables just that—any issues identified at the time of the...the contract needs to support the delivery of deployable software at defined increments/intervals, rather than incentivizing “ big - bang ” efforts or

  11. Economy with the time delay of information flow—The stock market case

    NASA Astrophysics Data System (ADS)

    Miśkiewicz, Janusz

    2012-02-01

    Any decision process requires information about the past and present state of the system, but in an economy acquiring data and processing it is an expensive and time-consuming task. Therefore, the state of the system is often measured over some legal interval, analysed after the end of well-defined time periods, and the results announced much later, before any strategic decision is envisaged. The roles of these various time delays therefore have to be examined carefully. Here, a model of a stock market coupled with an economy is investigated to emphasise the role of the time-delay span in the information flow. It is shown that the larger the time delay, the more important the collective behaviour of agents, since one observes time oscillations in the absolute log-return autocorrelations.
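
    The diagnostic mentioned above, the autocorrelation of absolute log-returns, can be sketched in a few lines; the price series below is a toy example, not market data:

```python
import math

def abs_log_return_autocorr(prices, lag):
    """Lag-k autocorrelation of the absolute log-returns of a
    price series, using the biased (divide-by-n) estimator."""
    r = [abs(math.log(p2 / p1)) for p1, p2 in zip(prices, prices[1:])]
    n = len(r)
    mean = sum(r) / n
    var = sum((x - mean) ** 2 for x in r) / n
    cov = sum((r[i] - mean) * (r[i + lag] - mean)
              for i in range(n - lag)) / n
    return cov / var

# Toy price series
prices = [100, 101, 100.5, 102, 101, 103, 102.5, 104, 103, 105]
rho1 = abs_log_return_autocorr(prices, 1)
```

    Tracking this statistic as the announcement delay grows is what reveals the oscillations the model predicts.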

  12. Resolution of hypertension and proteinuria after preeclampsia.

    PubMed

    Berks, Durk; Steegers, Eric A P; Molas, Marek; Visser, Willy

    2009-12-01

    To estimate the time required for hypertension and proteinuria to resolve after preeclampsia, and to estimate how this time to resolution correlates with the levels of blood pressure and proteinuria during preeclampsia and with prolonging pregnancy after the development of preeclampsia. This is a historic prospective cohort study of 205 preeclamptic women who were admitted between 1990 and 1992 at the Erasmus MC Medical Centre, Rotterdam, The Netherlands. Data were collected at 1.5, 3, 6, 12, 18, and 24 months after delivery. Hypertension was defined as a blood pressure of 140/90 mm Hg or higher or use of antihypertensive drugs. Proteinuria was defined as 0.3 g/d or more. Resolution of hypertension and proteinuria was analyzed with the Turnbull extension to the Kaplan-Meier procedure. Correlations were calculated with an accelerated failure time model. At 3 months postpartum, 39% of women still had hypertension, which decreased to 18% at 2 years postpartum. Resolution time increased by 60% (P<.001) for every 10-mm Hg increase in maximal systolic blood pressure, 40% (P=.044) for every 10-mm Hg increase in maximal diastolic blood pressure, and 3.6% (P=.001) for every 1-day increase in the diagnosis-to-delivery interval. At 3 months postpartum, 14% still had proteinuria, which decreased to 2% at 2 years postpartum. Resolution time increased by 16% (P=.001) for every 1-g/d increase in maximal proteinuria. Gestational age at onset of preeclampsia was not correlated with resolution time of hypertension or proteinuria. The severity of preeclampsia and the time interval between diagnosis and delivery are associated with postpartum time to resolution of hypertension and proteinuria. After preeclampsia, it can take up to 2 years for hypertension and proteinuria to resolve. Therefore, the authors suggest that further invasive diagnostic tests for underlying renal disease may be postponed until 2 years postpartum. Level of evidence: III.

  13. Integration of paleoseismic data from multiple sites to develop an objective earthquake chronology: Application to the Weber segment of the Wasatch fault zone, Utah

    USGS Publications Warehouse

    DuRoss, Christopher B.; Personius, Stephen F.; Crone, Anthony J.; Olig, Susan S.; Lund, William R.

    2011-01-01

    We present a method to evaluate and integrate paleoseismic data from multiple sites into a single, objective measure of earthquake timing and recurrence on discrete segments of active faults. We apply this method to the Weber segment (WS) of the Wasatch fault zone using data from four fault-trench studies completed between 1981 and 2009. After systematically reevaluating the stratigraphic and chronologic data from each trench site, we constructed time-stratigraphic OxCal models that yield site probability density functions (PDFs) of the times of individual earthquakes. We next qualitatively correlated the site PDFs into a segment-wide earthquake chronology, which is supported by overlapping site PDFs, large per-event displacements, and prominent segment boundaries. For each segment-wide earthquake, we computed the product of the site PDF probabilities in common time bins, which emphasizes the overlap in the site earthquake times, and gives more weight to the narrowest, best-defined PDFs. The product method yields smaller earthquake-timing uncertainties compared to taking the mean of the site PDFs, but is best suited to earthquakes constrained by broad, overlapping site PDFs. We calculated segment-wide earthquake recurrence intervals and uncertainties using a Monte Carlo model. Five surface-faulting earthquakes occurred on the WS at about 5.9, 4.5, 3.1, 1.1, and 0.6 ka. With the exception of the 1.1-ka event, we used the product method to define the earthquake times. The revised WS chronology yields a mean recurrence interval of 1.3 kyr (0.7–1.9-kyr estimated two-sigma [2σ] range based on interevent recurrence). These data help clarify the paleoearthquake history of the WS, including the important question of the timing and rupture extent of the most recent earthquake, and are essential to the improvement of earthquake-probability assessments for the Wasatch Front region.
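
    The product step described above, multiplying site PDF probabilities bin-by-bin and renormalizing, can be sketched as follows; the bin values are hypothetical, not the trench-site data:

```python
def combine_site_pdfs(site_pdfs):
    """Multiply per-site earthquake-time PDFs bin-by-bin over common
    time bins and renormalize, so that narrow, well-defined site
    PDFs dominate the combined estimate."""
    n_bins = len(site_pdfs[0])
    product = [1.0] * n_bins
    for pdf in site_pdfs:
        for i, p in enumerate(pdf):
            product[i] *= p
    total = sum(product)
    return [p / total for p in product]

# Hypothetical site PDFs over five common time bins
site_a = [0.05, 0.20, 0.50, 0.20, 0.05]   # broad site PDF
site_b = [0.00, 0.10, 0.80, 0.10, 0.00]   # narrow, well-defined PDF
combined = combine_site_pdfs([site_a, site_b])
```

    In this toy case the combined PDF concentrates in the central bin, illustrating how the product rewards overlap and sharp site constraints.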

  15. Averaging interval selection for the calculation of Reynolds shear stress for studies of boundary layer turbulence.

    NASA Astrophysics Data System (ADS)

    Lee, Zoe; Baas, Andreas

    2013-04-01

    It is widely recognised that boundary layer turbulence plays an important role in sediment transport dynamics in aeolian environments. Improvements in the design and affordability of ultrasonic anemometers have provided significant contributions to studies of aeolian turbulence by facilitating high-frequency monitoring of three-dimensional wind velocities. Consequently, research has moved beyond studies of mean airflow properties to investigations into quasi-instantaneous turbulent fluctuations at high spatio-temporal scales. To fully understand how temporal fluctuations in shear stress drive wind erosivity and sediment transport, research into best practice for calculating shear stress is necessary. This paper builds upon work published by Lee and Baas (2012) on the influence of streamline correction techniques on Reynolds shear stress by investigating the time-averaging interval used in the calculation. Concerns relating to the selection of appropriate averaging intervals for turbulence research, where the data are typically non-stationary at all timescales, are well documented in the literature (e.g. Treviño and Andreas, 2000). For example, Finnigan et al. (2003) found that underestimating the required averaging interval can lead to a reduction in the calculated momentum flux, as contributions from turbulent eddies longer than the averaging interval are lost. To avoid the risk of underestimating fluxes, researchers have typically used the total measurement duration as a single averaging period. For non-stationary data, however, using the whole measurement run as a single block average is inadequate for defining turbulent fluctuations. The data presented in this paper were collected in a field study of boundary layer turbulence conducted at Tramore beach near Rosapenna, County Donegal, Ireland.
High-frequency (50 Hz) 3D wind velocity measurements were collected using ultrasonic anemometry at thirteen different heights between 0.11 and 1.62 metres above the bed. A technique for determining time-averaging intervals for a series of anemometers stacked in a close vertical array is presented. A minimum timescale is identified using spectral analysis to determine the inertial sub-range, where energy is neither produced nor dissipated but passed down to increasingly smaller scales. An autocorrelation function is then used to derive a scaling pattern between anemometer heights, which defines a series of averaging intervals of increasing length with height above the surface. Results demonstrate the effect of different averaging intervals on the calculation of Reynolds shear stress and highlight the inadequacy of using the total measurement duration as a single block average. Lee, Z. S. & Baas, A. C. W. (2012). Streamline correction for the analysis of boundary layer turbulence. Geomorphology, 171-172, 69-82. Treviño, G. and Andreas, E.L., 2000. Averaging Intervals For Spectral Analysis Of Nonstationary Turbulence. Boundary-Layer Meteorology, 95(2): 231-247. Finnigan, J.J., Clement, R., Malhi, Y., Leuning, R. and Cleugh, H.A., 2003. Re-evaluation of long-term flux measurement techniques. Part I: Averaging and coordinate rotation. Boundary-Layer Meteorology, 107(1): 1-48.
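The role of the averaging interval in the Reynolds stress calculation above can be sketched as follows; this is an illustrative numpy implementation on synthetic data (the window lengths, density, and flow values are assumptions, not the authors' field setup or code):

```python
import numpy as np

def reynolds_shear_stress(u, w, window, rho=1.2):
    """Mean Reynolds shear stress -rho*<u'w'>, with fluctuations u', w'
    defined against block averages of `window` samples."""
    n = (len(u) // window) * window          # trim to whole blocks
    ub = np.asarray(u[:n], float).reshape(-1, window)
    wb = np.asarray(w[:n], float).reshape(-1, window)
    up = ub - ub.mean(axis=1, keepdims=True)  # u' within each block
    wp = wb - wb.mean(axis=1, keepdims=True)  # w' within each block
    return -rho * float(np.mean(up * wp))

# Synthetic 10-minute run at 50 Hz with negatively correlated u' and w'
# riding on a slow trend (non-stationarity).
rng = np.random.default_rng(0)
g = rng.standard_normal(50 * 600)
u = 6.0 + g + 0.01 * np.arange(g.size) / 50.0
w = -0.4 * g + rng.standard_normal(g.size)
tau_30s = reynolds_shear_stress(u, w, window=50 * 30)   # 30 s blocks
tau_run = reynolds_shear_stress(u, w, window=g.size)    # whole run as one block
```

Choosing `window` too short removes low-frequency contributions from u'w'; the paper's contribution is a principled, height-dependent choice of this interval.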

  16. Optimizing preventive maintenance policy: A data-driven application for a light rail braking system.

    PubMed

    Corman, Francesco; Kraijema, Sander; Godjevac, Milinko; Lodewijks, Gabriel

    2017-10-01

This article presents a case study determining the optimal preventive maintenance policy for a light rail rolling stock system in terms of reliability, availability, and maintenance costs. The maintenance policy defines one of three predefined preventive maintenance actions at fixed time-based intervals for each of the subsystems of the braking system. Based on work, maintenance, and failure data, we model the reliability degradation of the system and its subsystems under the current maintenance policy by a Weibull distribution. We then analytically determine the relation between reliability, availability, and maintenance costs. We validate the model against recorded reliability and availability and gain further insights through a dedicated sensitivity analysis. The model is then used in a sequential optimization framework that determines preventive maintenance intervals to improve on the key performance indicators. We show the potential of data-driven modelling to determine the optimal maintenance policy: the same system availability and reliability can be achieved with a 30% maintenance cost reduction by prolonging the intervals and re-grouping maintenance actions.
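The reliability side of the model above reduces to a Weibull survival curve per subsystem; a minimal sketch, with hypothetical shape/scale parameters rather than the paper's fitted values:

```python
import math

def weibull_reliability(t, beta, eta):
    """Survival probability R(t) = exp(-(t/eta)**beta) for a subsystem whose
    failure behaviour has been fitted to a Weibull distribution."""
    return math.exp(-(t / eta) ** beta)

# Hypothetical parameters for one braking subsystem (illustrative only):
beta, eta = 2.1, 400.0    # beta > 1: wear-out dominated; eta in days
r_180 = weibull_reliability(180.0, beta, eta)
```

Sweeping the maintenance interval against R(t) and per-action costs gives the reliability/cost trade-off the sequential optimization framework explores.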

  17. Optimizing preventive maintenance policy: A data-driven application for a light rail braking system

    PubMed Central

    Corman, Francesco; Kraijema, Sander; Godjevac, Milinko; Lodewijks, Gabriel

    2017-01-01

This article presents a case study determining the optimal preventive maintenance policy for a light rail rolling stock system in terms of reliability, availability, and maintenance costs. The maintenance policy defines one of three predefined preventive maintenance actions at fixed time-based intervals for each of the subsystems of the braking system. Based on work, maintenance, and failure data, we model the reliability degradation of the system and its subsystems under the current maintenance policy by a Weibull distribution. We then analytically determine the relation between reliability, availability, and maintenance costs. We validate the model against recorded reliability and availability and gain further insights through a dedicated sensitivity analysis. The model is then used in a sequential optimization framework that determines preventive maintenance intervals to improve on the key performance indicators. We show the potential of data-driven modelling to determine the optimal maintenance policy: the same system availability and reliability can be achieved with a 30% maintenance cost reduction by prolonging the intervals and re-grouping maintenance actions. PMID:29278245

  18. Subjective versus objective evening chronotypes in bipolar disorder.

    PubMed

    Gershon, Anda; Kaufmann, Christopher N; Depp, Colin A; Miller, Shefali; Do, Dennis; Zeitzer, Jamie M; Ketter, Terence A

    2018-01-01

Disturbed sleep timing is common in bipolar disorder (BD). However, most research is based upon self-reports. We examined relationships between subjective versus objective assessments of sleep timing in BD patients versus controls. We studied 61 individuals with bipolar I or II disorder and 61 healthy controls. Structured clinical interviews assessed psychiatric diagnoses, and clinician-administered scales assessed current mood symptom severity. For subjective chronotype, we used the Composite Scale of Morningness (CSM) questionnaire, using original and modified (1, ¾, ⅔, and ½ SD below mean CSM score) thresholds to define evening chronotype. Objective chronotype was calculated as the percentage of nights (50%, 66.7%, 75%, or 90% of all nights) with sleep interval midpoints at or before (non-evening chronotype) vs. after (evening chronotype) 04:15:00 (4:15:00 a.m.), based on 25-50 days of continuous actigraph data. BD participants and controls differed significantly with respect to CSM mean scores and CSM evening chronotypes using modified, but not original, thresholds. Groups also differed significantly with respect to chronotype based on sleep interval midpoint means, and based on the threshold of 75% of sleep intervals with midpoints after 04:15:00. Subjective and objective chronotypes correlated significantly with one another. Twenty-one consecutive intervals were needed to yield an evening chronotype classification match of ≥ 95% with that made using the 75% of sleep intervals threshold. Limited sample size/generalizability. Subjective and objective chronotype measurements were correlated with one another in participants with BD. Using population-specific thresholds, participants with BD had a later chronotype than controls. Copyright © 2017 Elsevier B.V. All rights reserved.
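The objective classification rule described above (evening chronotype when a given share of sleep-interval midpoints falls after 04:15:00) can be sketched as follows; midpoints are assumed to have already been resolved to clock times, so midnight wrap-around is not handled here:

```python
from datetime import time

def evening_chronotype(midpoints, cutoff=time(4, 15), threshold=0.75):
    """True when at least `threshold` of sleep-interval midpoints fall
    after the cutoff clock time (04:15:00 in the study)."""
    late = sum(1 for m in midpoints if m > cutoff)
    return late / len(midpoints) >= threshold

# Hypothetical actigraphy summary: 8 of 10 midpoints after the cutoff.
nights = [time(5, 0)] * 8 + [time(3, 30)] * 2
```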

  19. Optimal Interval for Repeated Gastric Cancer Screening in Normal-Risk Healthy Korean Adults: A Retrospective Cohort Study

    PubMed Central

    Bae, Jong-Myon; Shin, Sang Yop; Kim, Eun Hee

    2015-01-01

    Purpose This retrospective cohort study was conducted to estimate the optimal interval for gastric cancer screening in Korean adults with initial negative screening results. Materials and Methods This study consisted of voluntary Korean screenees aged 40 to 69 years who underwent subsequent screening gastroscopies after testing negative in the baseline screening performed between January 2007 and December 2011. A new case was defined as the presence of gastric cancer cells in biopsy specimens obtained upon gastroscopy. The follow-up periods were calculated during the months between the date of baseline screening gastroscopy and positive findings upon subsequent screenings, stratified by sex and age group. The mean sojourn time (MST) for determining the screening interval was estimated using the prevalence/incidence ratio. Results Of the 293,520 voluntary screenees for the gastric cancer screening program, 91,850 (31.29%) underwent subsequent screening gastroscopies between January 2007 and December 2011. The MSTs in men and women were 21.67 months (95% confidence intervals [CI], 17.64 to 26.88 months) and 15.14 months (95% CI, 9.44 to 25.85 months), respectively. Conclusion These findings suggest that the optimal interval for subsequent gastric screening in both men and women is 24 months, supporting the 2-year interval recommended by the nationwide gastric cancer screening program. PMID:25687874
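The screening-interval estimate above rests on the prevalence/incidence ratio for the mean sojourn time; a minimal sketch with hypothetical inputs (the study's MSTs were estimated from cohort counts, not from these numbers):

```python
def mean_sojourn_time(prevalence, incidence_rate):
    """MST estimated by the prevalence/incidence ratio: prevalence of
    screen-detectable preclinical disease divided by the incidence rate
    (same time unit as the desired MST)."""
    return prevalence / incidence_rate

# Hypothetical values, per person and per person-month (illustrative only):
mst_months = mean_sojourn_time(6.5e-4, 3.0e-5)
```

The recommended screening interval is then chosen on the order of the MST, which is how the ~22-month estimate for men supports the 2-year national interval.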

  20. A Prescription for List-Mode Data Processing Conventions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beddingfield, David H.; Swinhoe, Martyn Thomas; Huszti, Jozsef

There are a variety of algorithmic approaches available to process list-mode pulse streams to produce multiplicity histograms for subsequent analysis. In the development of the INCC v6.0 code to include the processing of this data format, we have noted inconsistencies in the “processed time” between the various approaches. The processed time, tp, is the time interval over which the recorded pulses are analyzed to construct multiplicity histograms. This is the time interval that is used to convert measured counts into count rates. The observed inconsistencies in tp impact the reported count rate information and the determination of the error-values associated with the derived singles, doubles, and triples counting rates. This issue is particularly important in low count-rate environments. In this report we will present a prescription for the processing of list-mode counting data that produces values that are both correct and consistent with traditional shift-register technologies. It is our objective to define conventions for list-mode data processing to ensure that the results are physically valid and numerically aligned with the results from shift-register electronics.

  1. Daily and Long Term Variations of Out-Door Gamma Dose Rate in Khorasan Province, Iran

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toossi, M. T. Bahreyni; Bayani, SH.

    2008-08-07

In Iran before 1996, only a few hot spots had been identified; no systematic study had been envisaged. Since then, preparation of an out-door environmental gamma radiation map of Iran was defined as a long-term goal in our center; at the same time, simultaneous monitoring of outdoor gamma levels in Khorasan was also proposed. A Rados area monitoring system (AAM-90), including 10 intelligent RD-02 detectors and all associated components, was purchased. From 2003, seven stations have gradually been set up in Khorasan. For all seven stations, monthly averages and one-hour daily averages over four time intervals have been computed. Statistically, no significant differences have been observed. This is also true for the monthly averages. The overall average dose rate for the present seven stations varies from 0.11 μSv·h⁻¹ for Ferdows to 0.04 μSv·h⁻¹ for Dargaz. Based on our data, a 50-minute sample in any time interval is sufficient to estimate the outdoor gamma dose rate accurately.

  2. Failure-to-rescue after injury is associated with preventability: The results of mortality panel review of failure-to-rescue cases in trauma.

    PubMed

    Kuo, Lindsay E; Kaufman, Elinore; Hoffman, Rebecca L; Pascual, Jose L; Martin, Niels D; Kelz, Rachel R; Holena, Daniel N

    2017-03-01

    Failure-to-rescue is defined as the conditional probability of death after a complication, and the failure-to-rescue rate reflects a center's ability to successfully "rescue" patients after complications. The validity of the failure-to-rescue rate as a quality measure is dependent on the preventability of death and the appropriateness of this measure for use in the trauma population is untested. We sought to evaluate the relationship between preventability and failure-to-rescue in trauma. All adjudications from a mortality review panel at an academic level I trauma center from 2005-2015 were merged with registry data for the same time period. The preventability of each death was determined by panel consensus as part of peer review. Failure-to-rescue deaths were defined as those occurring after any registry-defined complication. Univariate and multivariate logistic regression models between failure-to-rescue status and preventability were constructed and time to death was examined using survival time analyses. Of 26,557 patients, 2,735 (10.5%) had a complication, of whom 359 died for a failure-to-rescue rate of 13.2%. Of failure-to-rescue deaths, 272 (75.6%) were judged to be non-preventable, 65 (18.1%) were judged potentially preventable, and 22 (6.1%) were judged to be preventable by peer review. After adjusting for other patient factors, there remained a strong association between failure-to-rescue status and potentially preventable (odds ratio 2.32, 95% confidence interval, 1.47-3.66) and preventable (odds ratio 14.84, 95% confidence interval, 3.30-66.71) judgment. Despite a strong association between failure-to-rescue status and preventability adjudication, only a minority of deaths meeting the definition of failure to rescue were judged to be preventable or potentially preventable. Revision of the failure-to-rescue metric before use in trauma care benchmarking is warranted. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Failure-to-rescue after injury is associated with preventability: The results of mortality panel review of failure-to-rescue cases in trauma

    PubMed Central

    Kuo, Lindsay E.; Kaufman, Elinore; Hoffman, Rebecca L.; Pascual, Jose L.; Martin, Niels D.; Kelz, Rachel R.; Holena, Daniel N.

    2018-01-01

    Background Failure-to-rescue is defined as the conditional probability of death after a complication, and the failure-to-rescue rate reflects a center’s ability to successfully “rescue” patients after complications. The validity of the failure-to-rescue rate as a quality measure is dependent on the preventability of death and the appropriateness of this measure for use in the trauma population is untested. We sought to evaluate the relationship between preventability and failure-to-rescue in trauma. Methods All adjudications from a mortality review panel at an academic level I trauma center from 2005–2015 were merged with registry data for the same time period. The preventability of each death was determined by panel consensus as part of peer review. Failure-to-rescue deaths were defined as those occurring after any registry-defined complication. Univariate and multivariate logistic regression models between failure-to-rescue status and preventability were constructed and time to death was examined using survival time analyses. Results Of 26,557 patients, 2,735 (10.5%) had a complication, of whom 359 died for a failure-to-rescue rate of 13.2%. Of failure-to-rescue deaths, 272 (75.6%) were judged to be non-preventable, 65 (18.1%) were judged potentially preventable, and 22 (6.1%) were judged to be preventable by peer review. After adjusting for other patient factors, there remained a strong association between failure-to-rescue status and potentially preventable (odds ratio 2.32, 95% confidence interval, 1.47–3.66) and preventable (odds ratio 14.84, 95% confidence interval, 3.30–66.71) judgment. Conclusion Despite a strong association between failure-to-rescue status and preventability adjudication, only a minority of deaths meeting the definition of failure to rescue were judged to be preventable or potentially preventable. Revision of the failure-to-rescue metric before use in trauma care benchmarking is warranted. PMID:27788924

  4. Seismic hazards at Kilauea and Mauna Loa volcanoes, Hawaii

    NASA Astrophysics Data System (ADS)

    Klein, Fred W.

    1994-04-01

A significant seismic hazard exists in south Hawaii from large tectonic earthquakes that can reach magnitude 8 and intensity XII. This paper quantifies the hazard by estimating the horizontal peak ground acceleration (PGA) in south Hawaii which occurs with a 90% probability of not being exceeded during exposure times from 10 to 250 years. The largest earthquakes occur beneath the active, unbuttressed and mobile flanks of volcanoes in their shield-building stage. The flanks are compressed and pushed laterally by rift zone intrusions. The largest earthquakes are thus not directly caused by volcanic activity. Historic earthquakes (since 1823) and the best Hawaiian Volcano Observatory catalog (since 1970) under the south side of the island define linear frequency-magnitude distributions that imply average recurrence intervals for M greater than 5.5 earthquakes of 3.4-5 years, for M greater than 7 events of 29-44 years, and for M greater than 8 earthquakes of 120-190 years. These estimated recurrences are compatible with the 107-year interval between the two major April 2, 1868 (M≈7.9) and November 29, 1975 (M=7.2) earthquakes. Frequency-magnitude distributions define the activity levels of 19 different seismic source zones for probabilistic ground motion estimations. The available measurements of PGA (33 from 7 moderate earthquakes) are insufficient to define a new attenuation curve. We use the Boore et al. (1993) curve shifted upward by a factor of 1.2 to fit Hawaiian data. Amplifications at sites on volcanic ash or unconsolidated soil are about twice those of hard lava sites. On a map for a 50-year exposure time with a 90% probability of not being exceeded, the peak ground accelerations are 1.0 g on Kilauea's and Mauna Loa's mobile south flanks and 0.9 g in the Kaoiki seismic zone. This hazard from strong ground shaking is comparable to that near the San Andreas Fault in California or the subduction zone in the Gulf of Alaska.
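The recurrence intervals quoted above follow from a linear (Gutenberg-Richter) frequency-magnitude fit; a minimal sketch, with `a` and `b` values assumed only to yield an interval in the paper's M>7 range, not the paper's fitted parameters:

```python
def recurrence_interval(a, b, m, catalog_years):
    """Mean recurrence interval (years) for magnitude >= m events from a
    Gutenberg-Richter fit log10 N(>=m) = a - b*m, where N is the event
    count over a catalog spanning `catalog_years` years."""
    events_per_year = 10.0 ** (a - b * m) / catalog_years
    return 1.0 / events_per_year

# Hypothetical fit over a 25-year catalog (illustrative values):
t_m7 = recurrence_interval(a=6.85, b=1.0, m=7.0, catalog_years=25.0)
```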

  5. Dynamic symmetries and quantum nonadiabatic transitions

    DOE PAGES

    Li, Fuxiang; Sinitsyn, Nikolai A.

    2016-05-30

The Kramers degeneracy theorem is one of the basic results in quantum mechanics. According to it, time-reversal symmetry makes each energy level of a half-integer spin system at least doubly degenerate, implying the absence of transitions or scattering between degenerate states if the Hamiltonian does not depend on time explicitly. Here we generalize this result to the case of explicitly time-dependent spin Hamiltonians. We prove that for a spin system with a half-integer total spin, if its Hamiltonian and the evolution time interval are symmetric under a specifically defined time-reversal operation, the scattering amplitude between an arbitrary initial state and its time-reversed counterpart is exactly zero. Finally, we discuss applications of this result to the multistate Landau–Zener (LZ) theory.

  6. Transmission of linear regression patterns between time series: From relationship in time series to complex networks

    NASA Astrophysics Data System (ADS)

    Gao, Xiangyun; An, Haizhong; Fang, Wei; Huang, Xuan; Li, Huajiao; Zhong, Weiqiong; Ding, Yinghui

    2014-07-01

The linear regression parameters between two time series can differ under different lengths of the observation period. If we study the whole period through a sliding window of a short period, the change of the linear regression parameters is a process of dynamic transmission over time. We present a simple and efficient computational scheme: a linear regression patterns transmission algorithm, which transforms linear regression patterns into directed and weighted networks. The linear regression patterns (nodes) are defined by the combination of intervals of the linear regression parameters and the results of the significance testing under different sizes of the sliding window. The transmissions between adjacent patterns are defined as edges, and the weights of the edges are the frequencies of the transmissions. The major patterns, the distance, and the medium in the process of the transmission can be captured. The statistical results of weighted out-degree and betweenness centrality are mapped onto timelines, which shows the features of the distribution of the results. Many measurements in different areas that involve two related time series variables could take advantage of this algorithm to characterize the dynamic relationships between the time series from a new perspective.
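A simplified sketch of the transmission idea described above, using non-overlapping windows, slope-only pattern labels, and no significance testing (all simplifications relative to the full scheme):

```python
import numpy as np
from collections import Counter

def pattern_transmission(x, y, window, slope_bins):
    """Fit an OLS slope in each non-overlapping window, discretize the
    slopes into pattern labels, and count transmissions between adjacent
    windows as weighted directed edges."""
    labels = []
    for s in range(0, len(x) - window + 1, window):
        slope = np.polyfit(x[s:s + window], y[s:s + window], 1)[0]
        labels.append(int(np.digitize(slope, slope_bins)))
    edges = Counter(zip(labels[:-1], labels[1:]))  # (from, to) -> weight
    return labels, edges

# Synthetic pair of series with a stable slope of ~2.
x = np.arange(200, dtype=float)
y = 2.0 * x + 0.01 * np.random.default_rng(1).standard_normal(200)
labels, edges = pattern_transmission(x, y, window=50, slope_bins=[-1.0, 1.0, 3.0])
```

Here a stationary relationship collapses onto a single self-loop node; changing regimes would populate off-diagonal edges.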

  7. Transmission of linear regression patterns between time series: from relationship in time series to complex networks.

    PubMed

    Gao, Xiangyun; An, Haizhong; Fang, Wei; Huang, Xuan; Li, Huajiao; Zhong, Weiqiong; Ding, Yinghui

    2014-07-01

The linear regression parameters between two time series can differ under different lengths of the observation period. If we study the whole period through a sliding window of a short period, the change of the linear regression parameters is a process of dynamic transmission over time. We present a simple and efficient computational scheme: a linear regression patterns transmission algorithm, which transforms linear regression patterns into directed and weighted networks. The linear regression patterns (nodes) are defined by the combination of intervals of the linear regression parameters and the results of the significance testing under different sizes of the sliding window. The transmissions between adjacent patterns are defined as edges, and the weights of the edges are the frequencies of the transmissions. The major patterns, the distance, and the medium in the process of the transmission can be captured. The statistical results of weighted out-degree and betweenness centrality are mapped onto timelines, which shows the features of the distribution of the results. Many measurements in different areas that involve two related time series variables could take advantage of this algorithm to characterize the dynamic relationships between the time series from a new perspective.

  8. Non-Markovianity quantifier of an arbitrary quantum process

    NASA Astrophysics Data System (ADS)

    Debarba, Tiago; Fanchini, Felipe F.

    2017-12-01

Calculating the degree of non-Markovianity of a quantum process for a high-dimensional system is a difficult task, given the complex maximization problems involved. Focusing on the entanglement-based measure of non-Markovianity, we propose a numerically feasible quantifier for finite-dimensional systems. We define the non-Markovianity measure in terms of a class of entanglement quantifiers named witnessed entanglement, which allows us to write several entanglement-based measures of non-Markovianity in a unique formalism. In this formalism, we show that the non-Markovianity, in a given time interval, can be witnessed by calculating the expectation value of an observable, making it attractive for experimental investigations. Following this property, we introduce a quantifier based on the entanglement witness in an interval of time and show that it is a bona fide measure of non-Markovianity. In our example, we use the generalized robustness of entanglement, an entanglement measure that can be readily calculated by a semidefinite programming method, to study impurity atoms coupled to a Bose-Einstein condensate.

  9. Report of the first Nimbus-7 SMMR Experiment Team Workshop

    NASA Technical Reports Server (NTRS)

    Campbell, W. J.; Gloersen, P.

    1983-01-01

Preliminary sea ice results and techniques for calculating sea ice concentration and multiyear fraction from the microwave radiances obtained from the Nimbus-7 SMMR were presented. From these results, it is evident that the groups used different and independent approaches in deriving sea ice emissivities and algorithms. This precluded precise comparisons of their results. A common set of sea ice emissivities was defined for all groups to use for a subsequent, more careful comparison of the results from the various sea ice parameter algorithms. To this end, three different geographical areas in two different time intervals were defined as typifying SMMR beam-filling conditions for first-year sea ice, multiyear sea ice, and open water, to be used for determining the required microwave emissivities.

  10. Very short (15s-15s) interval-training around the critical velocity allows middle-aged runners to maintain VO2 max for 14 minutes.

    PubMed

    Billat, V L; Slawinksi, J; Bocquet, V; Chassaing, P; Demarle, A; Koralsztein, J P

    2001-04-01

The purpose of this study was to compare the effectiveness of three very short interval training sessions (15-15 s of hard and easier runs) run at an average velocity equal to the critical velocity in eliciting VO2 max for more than 10 minutes. We hypothesized that the interval with the smallest amplitude (defined as the difference in velocity between the hard and the easy run divided by the average velocity and multiplied by 100) would be the most efficient at eliciting VO2 max for the longest time. The subjects were middle-aged runners (52 +/- 5 yr, VO2 max of 52.1 +/- 6 mL x min(-1) x kg(-1), vVO2 max of 15.9 +/- 1.8 km x h(-1), critical velocity of 85.6 +/- 1.2% vVO2 max) who were used to long slow distance-training rather than interval training. They performed three interval-training (IT) sessions on a synthetic track (400 m) whilst breathing through the COSMED K4b2 portable metabolic analyser. These three IT sessions were: A) 90-80% vVO2 max (for hard bouts and active recovery periods, respectively), amplitude = ((90-80)/85) x 100 = 11%; B) 100-70% vVO2 max, amplitude = 35%; and C) 110-60% vVO2 max, amplitude = 59%. Interval training A and B allowed the athletes to spend twice the time at VO2 max (14 min vs. 7 min) compared to interval training C. Moreover, at the end of interval training A and B the runners had a lower blood lactate than after procedure C (9 vs. 11 mmol x l(-1)). In conclusion, short interval-training of 15s-15s at 90-80% and 100-70% of vVO2 max proved to be the most efficient at stimulating oxygen consumption to its highest level in healthy middle-aged long-distance runners used to doing only long slow distance-training.
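The amplitude definition above is straightforward to compute; note that ((90-80)/85) x 100 is approximately 11.8%, which the abstract rounds down to 11%:

```python
def interval_amplitude(hard, easy):
    """Amplitude (%) of a 15s-15s session: velocity difference between the
    hard and easy bouts divided by their average velocity, times 100.
    Velocities are expressed in %vVO2max."""
    avg = (hard + easy) / 2.0
    return 100.0 * (hard - easy) / avg

amp_a = interval_amplitude(90.0, 80.0)    # session A, ~12% (reported as 11%)
amp_b = interval_amplitude(100.0, 70.0)   # session B, ~35%
amp_c = interval_amplitude(110.0, 60.0)   # session C, ~59%
```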

  11. Pain as a risk factor for disability or death.

    PubMed

    Andrews, James S; Cenzer, Irena Stijacic; Yelin, Edward; Covinsky, Kenneth E

    2013-04-01

    To determine whether pain predicts future activity of daily living (ADL) disability or death in individuals aged 60 and older. Prospective cohort study. The 1998 to 2008 Health and Retirement Study (HRS), a nationally representative study of older community-living individuals. Twelve thousand six hundred thirty-one participants in the 1998 HRS aged 60 and older who did not need help in any ADL. Participants reporting that they had moderate or severe pain most of the time were defined as having significant pain. The primary outcome was time to development of ADL disability or death over 10 yrs, assessed at five successive 2-year intervals. ADL disability was defined as needing help performing any ADL: bathing, dressing, transferring, toileting, eating, or walking across a room. A discrete hazards survival model was used to examine the relationship between pain and incident disability over each 2-year interval using only participants who started the interval with no ADL disability. Several potential confounders were adjusted for at the start of each interval: demographic factors, seven chronic health conditions, and functional limitations (ADL difficulty and difficulty with five measures of mobility). At baseline, 2,283 (18%) participants had significant pain. Participants with pain were more likely (all P < .001) to be female (65% vs 54%), have ADL difficulty (e.g., transferring 12% vs 2%, toileting 11% vs 2%), have difficulty walking several blocks (60% vs 21%), and have difficulty climbing one flight of stairs (40% vs 12%). Over 10 years, participants with pain were more likely to develop ADL disability or death (58% vs 43%, unadjusted hazard ratio (HR) = 1.67, 95% confidence interval (CI) = 1.57-1.79), although after adjustment for confounders, participants with pain were not at greater risk for ADL disability or death (HR = 0.98, 95% CI = 0.91-1.07). 
Adjustment for functional status almost entirely explained the difference between the unadjusted and adjusted results. Although there are strong cross-sectional relationships between pain and functional limitations, individuals with pain are not at higher risk of subsequent disability or death after accounting for functional limitations. Like many geriatric syndromes, pain and disability may represent interrelated phenomena that occur simultaneously and require unified treatment paradigms. © 2013, Copyright the Authors Journal compilation © 2013, The American Geriatrics Society.
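The discrete hazards setup above is typically fit on person-period data; a minimal sketch of that expansion step (the model fit itself, e.g. logistic regression with interval indicators on these rows, is omitted):

```python
def person_period_rows(subjects):
    """Expand each subject into one row per 2-year interval at risk, as
    used for a discrete-time hazards model; the event is coded only on
    the final interval. `subjects` maps id -> (intervals_at_risk, had_event)."""
    rows = []
    for sid, (n_intervals, had_event) in subjects.items():
        for k in range(1, n_intervals + 1):
            rows.append({"id": sid, "interval": k,
                         "event": int(had_event and k == n_intervals)})
    return rows

# Hypothetical cohort: subject 1 has the event in interval 3,
# subject 2 is censored after interval 2.
rows = person_period_rows({1: (3, True), 2: (2, False)})
```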

  12. Pain as a Risk Factor for Disability or Death

    PubMed Central

    Andrews, James S.; Cenzer, Irena Stijacic; Yelin, Edward; Covinsky, Kenneth E.

    2013-01-01

OBJECTIVES To determine whether pain predicts future activity of daily living (ADL) disability or death in individuals aged 60 years and above. DESIGN Prospective cohort study SETTING The 1998 to 2008 Health and Retirement Study (HRS), a nationally-representative study of older community-living individuals. PARTICIPANTS Twelve thousand six hundred and thirty-one participants in the 1998 HRS aged 60 years and older who did not need help in any activity of daily living (ADL). MEASUREMENTS Participants reporting that they were troubled by moderate or severe pain most of the time were defined as having significant pain. Our primary outcome was time to development of ADL disability or death over 10 years, assessed in 5 successive 2-year intervals. ADL disability was defined as needing help performing any ADL: bathing, dressing, transferring, toileting, eating, or walking across a room. We used a discrete hazards survival model to examine the relationship between pain and incident disability over each 2-year interval using only participants who started the interval with no ADL disability. We adjusted for several potential confounders at the start of each interval: demographic factors, 7 chronic health conditions, and functional limitations (ADL difficulty, and difficulty with 5 measures of mobility). RESULTS At baseline, 2,283 (18%) subjects had significant pain. Subjects with pain were more likely (all p<0.001) to be female (65% vs. 54%), have ADL difficulty (e.g., transferring 12% vs. 2%, toileting 11% vs. 2%), have difficulty walking several blocks (60% vs. 21%), and have difficulty climbing one flight of stairs (40% vs. 12%). Over 10 years, subjects with pain were more likely to develop ADL disability or death (58% vs. 43%, unadjusted HR 1.67, 95% confidence interval (1.57 to 1.79)). However, after adjustment for confounders, participants with pain were not at increased risk for ADL disability or death (HR 0.98 (0.91 to 1.07)). 
The difference between the unadjusted and adjusted results was almost entirely explained by adjustment for functional status. CONCLUSION While there are strong cross-sectional relationships between pain and functional limitations, individuals with pain are not at higher risk for subsequent disability or death, after accounting for functional limitations. Like many geriatric syndromes, pain and disability may represent interrelated phenomena that occur simultaneously and require unified treatment paradigms. PMID:23521614

  13. Monitoring Seasonal Evapotranspiration in Vulnerable Agriculture using Time Series VHSR Satellite Data

    NASA Astrophysics Data System (ADS)

    Dalezios, Nicolas; Spyropoulos, Nicos V.; Tarquis, Ana M.

    2015-04-01

The research work stems from the hypothesis that it is possible to estimate the seasonal water needs of olive tree farms under drought by cross-correlating high spatial, spectral and temporal resolution (~monthly) satellite data, acquired at well-defined time intervals of the phenological cycle of the crops, with ground-truth information collected simultaneously with the image acquisitions. The present research demonstrates, for the first time, the coordinated efforts of space engineers, satellite mission control planners, remote sensing scientists and ground teams to record, at specific time intervals of the phenological cycle of the trees, from ground "zero" and from 770 km above the Earth's surface, the status of the plants for subsequent cross-correlation and analysis regarding the estimation of seasonal evapotranspiration in a vulnerable agricultural environment. The ETo and ETc derived by the Penman-Monteith equation and reference Kc tables were compared with a new ETd using the Kc extracted from the time-series satellite data. Several vegetation indices were also used, especially the RedEdge and the chlorophyll index based on the WorldView-2 RedEdge and second NIR bands, to relate tree status to water and nutrition needs. Keywords: Evapotranspiration, Very High Spatial Resolution - VHSR, time series, remote sensing, vulnerability, agriculture, vegetation indices.
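The crop-coefficient step underlying the comparison above is ETc = Kc x ETo; a minimal sketch with hypothetical olive-orchard values (not figures from the study):

```python
def crop_et(eto, kc):
    """Crop evapotranspiration ETc = Kc * ETo (mm/day), with ETo from the
    Penman-Monteith equation and Kc taken from reference tables or, as in
    the study, extracted from time-series satellite vegetation indices."""
    return kc * eto

# Hypothetical mid-season values for an olive orchard (illustrative only):
etc_mm = crop_et(eto=5.2, kc=0.65)
```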

  14. Measurement of the timing behaviour of off-the-shelf cameras

    NASA Astrophysics Data System (ADS)

    Schatz, Volker

    2017-04-01

This paper presents a measurement method suitable for investigating the timing properties of cameras. A single light source illuminates the camera detector starting with a varying defined delay after the camera trigger. Pixels from the recorded camera frames are summed up and normalised, and the resulting function is indicative of the overlap between illumination and exposure. This allows one to infer the trigger delay and the exposure time with sub-microsecond accuracy. The method is therefore of interest when off-the-shelf cameras are used in reactive systems or synchronised with other cameras. It can supplement radiometric and geometric calibration methods for cameras in scientific use. A closer look at the measurement results reveals deviations from the ideal camera behaviour of constant sensitivity limited to the exposure interval. One of the industrial cameras investigated retains a small sensitivity long after the end of the nominal exposure interval. All three investigated cameras show non-linear variations of sensitivity at O(10⁻³) to O(10⁻²) during exposure. Due to its sign, the latter effect cannot be described by a sensitivity function depending on the time after triggering, but represents non-linear pixel characteristics.
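Under the ideal-camera assumption in the abstract (constant sensitivity inside the exposure window, zero outside), the summed, normalised pixel signal as a function of trigger-to-pulse delay is simply the overlap of two intervals. A minimal sketch of that idealised curve (names and units are illustrative):

```python
def overlap_signal(delay, exposure, pulse):
    """Normalised overlap between a light pulse occupying [delay, delay + pulse]
    and an ideal exposure window [0, exposure].  The measured, normalised pixel
    sums trace this shape; its edges give the trigger delay and exposure time."""
    start = max(0.0, delay)
    end = min(exposure, delay + pulse)
    return max(0.0, end - start) / pulse
```

Sweeping `delay` and locating the rising and falling edges of this function is what allows the trigger delay and exposure time to be inferred.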

  15. Characteristic Lifelength of Coherent Structure in the Turbulent Boundary Layer

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L.

    2006-01-01

    A characteristic lifelength is defined by which a Gaussian distribution is fit to data correlated over a 3 sensor array sampling streamwise sidewall pressure. The data were acquired at subsonic, transonic and supersonic speeds aboard a Tu-144. Lifelengths are estimated using the cross spectrum and are shown to compare favorably with Efimtsov's prediction of correlation space scales. Lifelength distributions are computed in the time/frequency domain using an interval correlation technique on the continuous wavelet transform of the original time data. The median values of the lifelength distributions are found to be very close to the frequency averaged result. The interval correlation technique is shown to allow the retrieval and inspection of the original time data of each event in the lifelength distribution, thus providing a means to locate and study the nature of the coherent structure in the turbulent boundary layer. The lifelength data can be converted to lifetimes using the convection velocity. The lifetime of events in the time/frequency domain are displayed in Lifetime Maps. The primary purpose of the paper is to validate these new analysis techniques so that they can be used with confidence to further characterize coherent structure in the turbulent boundary layer.

  16. Long-term Outcomes After Stepping Down Asthma Controller Medications: A Claims-Based, Time-to-Event Analysis.

    PubMed

    Rank, Matthew A; Johnson, Ryan; Branda, Megan; Herrin, Jeph; van Houten, Holly; Gionfriddo, Michael R; Shah, Nilay D

    2015-09-01

    Long-term outcomes after stepping down asthma medications are not well described. This study was a retrospective time-to-event analysis of individuals diagnosed with asthma who stepped down their asthma controller medications using a US claims database spanning 2000 to 2012. Four-month intervals were established and a step-down event was defined by a ≥ 50% decrease in days-supplied of controller medications from one interval to the next; this definition is inclusive of step-down that occurred without health-care provider guidance or as a consequence of a medication adherence lapse. Asthma stability in the period prior to step-down was defined by not having an asthma exacerbation (inpatient visit, ED visit, or dispensing of a systemic corticosteroid linked to an asthma visit) and having fewer than two rescue inhaler claims in a 4-month period. The primary outcome in the period following step-down was time-to-first asthma exacerbation. Thirty-two percent of the 26,292 included individuals had an asthma exacerbation in the 24-month period following step-down of asthma controller medication, though only 7% had an ED visit or hospitalization for asthma. The length of asthma stability prior to stepping down asthma medication was strongly associated with the risk of an asthma exacerbation in the subsequent 24-month period: < 4 months' stability, 44%; 4 to 7 months, 34%; 8 to 11 months, 30%; and ≥ 12 months, 21% (P < .001). In a large, claims-based, real-world study setting, 32% of individuals have an asthma exacerbation in the 2 years following a step-down event.
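The study's step-down definition, a ≥ 50% decrease in days-supplied from one 4-month interval to the next, can be sketched directly. This is a simplified illustration; the actual claims processing is more involved:

```python
def step_down_events(days_supplied):
    """Return indices i where interval i+1 shows a >= 50% drop in controller
    days-supplied relative to interval i (intervals are consecutive 4-month
    blocks, as in the study)."""
    return [i for i in range(len(days_supplied) - 1)
            if days_supplied[i] > 0
            and days_supplied[i + 1] <= 0.5 * days_supplied[i]]
```

Note that this definition, as the abstract states, cannot distinguish guided step-down from an adherence lapse.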

  17. Comparison between volatility return intervals of the S&P 500 index and two common models

    NASA Astrophysics Data System (ADS)

    Vodenska-Chitkushev, I.; Wang, F. Z.; Weber, P.; Yamasaki, K.; Havlin, S.; Stanley, H. E.

    2008-01-01

    We analyze the S&P 500 index data for the 13-year period, from January 1, 1984 to December 31, 1996, with one data point every 10 min. For this database, we study the distribution and clustering of volatility return intervals, which are defined as the time intervals between successive volatilities above a certain threshold q. We find that the long memory in the volatility leads to a clustering of above-median as well as below-median return intervals. In addition, it turns out that the short return intervals form larger clusters compared to the long return intervals. When comparing the empirical results to the ARMA-FIGARCH and fBm models for volatility, we find that the fBm model predicts scaling better than the ARMA-FIGARCH model, which is consistent with the argument that both ARMA-FIGARCH and fBm capture the long-term dependence in return intervals to a certain extent, but only fBm accounts for the scaling. We perform the Student's t-test to compare the empirical data with the shuffled records, ARMA-FIGARCH and fBm. We analyze separately the clusters of above-median return intervals and the clusters of below-median return intervals for different thresholds q. We find that the empirical data are statistically different from the shuffled data for all thresholds q. Our results also suggest that the ARMA-FIGARCH model is statistically different from the S&P 500 for intermediate q for both above-median and below-median clusters, while fBm is statistically different from S&P 500 for small and large q for above-median clusters and for small q for below-median clusters. Neither model can fully explain the entire regime of q studied.
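The basic quantity of the paper, the return intervals between successive volatilities above a threshold q, is simple to extract. A minimal sketch (the scaling and model-comparison analyses are beyond this illustration):

```python
import statistics

def return_intervals(volatility, q):
    """Intervals (in samples) between successive volatility values above q."""
    above = [i for i, v in enumerate(volatility) if v > q]
    return [b - a for a, b in zip(above, above[1:])]

def median_split(intervals):
    """Above-median and below-median intervals, the grouping used for the
    clustering analysis.  (Dropping ties at the median is a choice made here
    for the sketch, not taken from the paper.)"""
    m = statistics.median(intervals)
    return [x for x in intervals if x > m], [x for x in intervals if x < m]
```

The clustering result in the abstract says that short intervals from `median_split` tend to follow other short intervals, a signature of long memory in the volatility.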

  18. Rainfall-runoff data from small watersheds in Colorado, October 1974 through September 1977

    USGS Publications Warehouse

    Cochran, Betty J.; Hodges, H.E.; Livingston, R.K.; Jarret, R.D.

    1979-01-01

    Rainfall-runoff data from small watersheds in Colorado are being collected and analyzed for the purpose of defining the flood characteristics of these and other similar areas. Data collected from October 1974 through September 1977 at a total of 18 urban stations, 10 Denver Federal Center stations, and 48 rural (or highway) stations are tabulated at 5-minute time intervals. Additional information presented includes station descriptions and methods of data collection and analysis. (Kosco-USGS)

  19. An Extension of the Time-Spectral Method to Overset Solvers

    NASA Technical Reports Server (NTRS)

    Leffell, Joshua Isaac; Murman, Scott M.; Pulliam, Thomas

    2013-01-01

Relative motion in the Cartesian or overset framework causes certain spatial nodes to move in and out of the physical domain as they are dynamically blanked by moving solid bodies. This poses a problem for the conventional Time-Spectral approach, which expands the solution at every spatial node into a Fourier series spanning the period of motion. The proposed extension to the Time-Spectral method treats unblanked nodes in the conventional manner but expands the solution at dynamically blanked nodes in a basis of barycentric rational polynomials spanning partitions of contiguously defined temporal intervals. Rational polynomials avoid Runge's phenomenon on the equidistant time samples of these sub-periodic intervals. Fourier- and rational polynomial-based differentiation operators are used in tandem to provide a consistent hybrid Time-Spectral overset scheme capable of handling relative motion. The hybrid scheme is tested with a linear model problem and implemented within NASA's OVERFLOW Reynolds-averaged Navier-Stokes (RANS) solver. The hybrid Time-Spectral solver is then applied to inviscid and turbulent RANS cases of plunging and pitching airfoils and compared to time-accurate and experimental data. A limiter was applied in the turbulent case to avoid undershoots in the undamped turbulent eddy viscosity while maintaining accuracy. The hybrid scheme matches the performance of the conventional Time-Spectral method and converges to the time-accurate results with increased temporal resolution.
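As an illustration of the rational-polynomial ingredient: Berrut's first barycentric rational interpolant (weights alternating ±1) is the simplest member of the family that avoids Runge's phenomenon on equidistant nodes. The paper uses barycentric rational polynomials on sub-periodic intervals; the specific weights below are an assumption for this sketch, not taken from the paper:

```python
def berrut_interpolate(xs, ys, x):
    """Evaluate Berrut's barycentric rational interpolant at x.
    xs: distinct (e.g. equidistant) nodes; ys: samples at those nodes."""
    num = den = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        if x == xi:                  # exactly on a node: return the sample
            return yi
        w = (-1.0) ** i / (x - xi)   # Berrut weights +1, -1, +1, ...
        num += w * yi
        den += w
    return num / den
```

The interpolant reproduces the nodal values exactly and, unlike a high-degree polynomial fit, remains well behaved between equidistant samples.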

  20. Population-Based Pediatric Reference Intervals in General Clinical Chemistry: A Swedish Survey.

    PubMed

    Ridefelt, Peter

    2015-01-01

    Very few high quality studies on pediatric reference intervals for general clinical chemistry and hematology analytes have been performed. Three recent prospective community-based projects utilising blood samples from healthy children in Sweden, Denmark and Canada have substantially improved the situation. The Swedish survey included 701 healthy children. Reference intervals for general clinical chemistry and hematology were defined.

  1. 75 FR 35113 - Self-Regulatory Organizations; Notice of Filing and Immediate Effectiveness of Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-21

    ... Stock Market LLC To List Options on Trust Issued Receipts in $1 Strike Intervals June 15, 2010. Pursuant... options on Trust Issued Receipts in $1 strike price intervals. The text of the proposed rule change is...''), as defined in Supplementary Material to Section 6 at .01(b), in $1 or greater strike price intervals...

  2. Factors associated with suboptimal adherence to antiretroviral therapy in Asia

    PubMed Central

    Jiamsakul, Awachana; Kumarasamy, Nagalingeswaran; Ditangco, Rossana; Li, Patrick CK; Phanuphak, Praphan; Sirisanthana, Thira; Sungkanuparph, Somnuek; Kantipong, Pacharee; Lee, Christopher KC; Mustafa, Mahiran; Merati, Tuti; Kamarulzaman, Adeeba; Singtoroj, Thida; Law, Matthew

    2014-01-01

    Introduction Adherence to antiretroviral therapy (ART) plays an important role in treatment outcomes. It is crucial to identify factors influencing adherence in order to optimize treatment responses. The aim of this study was to assess the rates of, and factors associated with, suboptimal adherence (SubAdh) in the first 24 months of ART in an Asian HIV cohort. Methods As part of a prospective resistance monitoring study, the TREAT Asia Studies to Evaluate Resistance Monitoring Study (TASER-M) collected patients’ adherence based on the World Health Organization-validated Adherence Visual Analogue Scale. SubAdh was defined in two ways: (i) <100% and (ii) <95%. Follow-up time started from ART initiation and was censored at 24 months, loss to follow-up, death, treatment switch, or treatment cessation for >14 days. Time was divided into four intervals: 0–6, 6–12, 12–18 and 18–24 months. Factors associated with SubAdh were analysed using generalized estimating equations. Results Out of 1316 patients, 32% ever reported <100% adherence and 17% ever reported <95%. Defining the outcome as SubAdh <100%, the rates of SubAdh for the four time intervals were 26%, 17%, 12% and 10%. Sites with an average of >2 assessments per patient per year had an odds ratio (OR)=0.7 (95% confidence interval (CI) (0.55 to 0.90), p=0.006), compared to sites with ≤2 assessments per patient per year. Compared to heterosexual exposure, SubAdh was higher in injecting drug users (IDUs) (OR=1.92, 95% CI (1.23 to 3.00), p=0.004) and lower in homosexual exposure (OR=0.52, 95% CI (0.38 to 0.71), p<0.001). Patients taking a nucleoside transcriptase inhibitor and protease inhibitor (NRTI+PI) combination were less likely to report adherence <100% (OR=0.36, 95% CI (0.20 to 0.67), p=0.001) compared to patients taking an NRTI and non-nucleoside transcriptase inhibitor (NRTI+NNRTI) combination. SubAdh decreased with increasing time on ART (all p<0.001). 
Similar associations were found with adherence <95% as the outcome. Conclusions We found that SubAdh, defined as either <100% or <95%, was associated with mode of HIV exposure, ART regimen, time on ART and frequency of adherence measurement. The more frequently sites assessed patients, the lower the SubAdh, possibly reflecting site resourcing for patient counselling. Although social desirability bias could not be excluded, a greater emphasis on more frequent adherence counselling immediately following ART initiation and through the first six months may be valuable in promoting treatment and programme retention. PMID:24836775

  3. Degassing and microlite crystallization during pre-climactic events of the 1991 eruption of Mt. Pinatubo, Philippines

    USGS Publications Warehouse

    Hammer, J.E.; Cashman, K.V.; Hoblitt, R.P.; Newman, S.

    1999-01-01

    Dacite tephras produced by the 1991 pre-climactic eruptive sequence at Mt. Pinatubo display extreme heterogeneity in vesicularity, ranging in clast density from 700 to 2580 kg m-3. Observations of the 13 surge-producing blasts that preceded the climactic plinian event include radar-defined estimates of column heights and seismically defined eruptive and intra-eruptive durations. A comparison of the characteristics of erupted material, including microlite textures, chemical compositions, and H2O contents, with eruptive parameters suggests that devolatilization-induced crystallization of the magma occurred to a varying extent prior to at least nine of the explosive events. Although volatile loss progressed to the same approximate level in all of the clasts analyzed (weight percent H2O=1.26-1.73), microlite crystallization was extremely variable (0-22%). We infer that syn-eruptive volatile exsolution from magma in the conduit and intra-eruptive separation of the gas phase was facilitated by the development of permeability within magma residing in the conduit. Correlation of maximum microlite crystallinity with repose interval duration (28-262 min) suggests that crystallization occurred primarily intra-eruptively, in response to the reduction in dissolved H2O content that occurred during the preceding event. Detailed textural characterization, including determination of three-dimensional shapes and crystal size distributions (CSD), was conducted on a subset of clasts in order to determine rates of crystal nucleation and growth using repose interval as the time available for crystallization. Shape and size analysis suggests that crystallization proceeded in response to lessening degrees of feldspar supersaturation as repose interval durations increased. We thus propose that during repose intervals, a plug of highly viscous magma formed due to the collapse of vesicular magma that had exsolved volatiles during the previous explosive event. 
If plug thickness grew proportionally to the square root of time, and if magma pressurization increased during the eruptive sequence, the frequency of eruptive pulses may have been modulated by degassing of magma within the conduit. Dense clasts in surge deposits probably represent plug material entrained by each subsequent explosive event.

  4. Quality of Vitamin K Antagonist Anticoagulation in Spain: Prevalence of Poor Control and Associated Factors.

    PubMed

    Anguita Sánchez, Manuel; Bertomeu Martínez, Vicente; Cequier Fillat, Ángel

    2015-09-01

    To study the prevalence of poorly controlled vitamin K antagonist anticoagulation in Spain in patients with nonvalvular atrial fibrillation, and to identify associated factors. We studied 1056 consecutive patients seen at 120 cardiology clinics in Spain between November 2013 and March 2014. We analyzed the international normalized ratio from the 6 months prior to the patient's visit, calculating the prevalence of poorly controlled anticoagulation, defined as < 65% time in therapeutic range using the Rosendaal method. Mean age was 73.6 years (standard deviation, 9.8 years); women accounted for 42% of patients. The prevalence of poorly controlled anticoagulation was 47.3%. Mean time in therapeutic range was 63.8% (25.9%). The following factors were independently associated with poorly controlled anticoagulation: kidney disease (odds ratio = 1.53; 95% confidence interval, 1.08-2.18; P = .018), routine nonsteroidal anti-inflammatory drugs (odds ratio = 1.79; 95% confidence interval, 1.20-2.79; P = .004), antiplatelet therapy (odds ratio = 2.16; 95% confidence interval, 1.49-3.12; P < .0001) and absence of angiotensin receptor blockers (odds ratio = 1.39; 95% confidence interval, 1.08-1.79; P = .011). There is a high prevalence of poorly controlled vitamin K antagonist anticoagulation in Spain. Factors associated with poor control are kidney disease, routine nonsteroidal anti-inflammatory drugs, antiplatelet use, and absence of angiotensin receptor blockers. Copyright © 2014 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.
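Time in therapeutic range by the Rosendaal method, the control metric used above, linearly interpolates the INR between measurements and counts the fraction of person-time spent inside [2.0, 3.0]. A minimal sketch of that calculation (the input layout is illustrative):

```python
def rosendaal_ttr(days, inrs, low=2.0, high=3.0):
    """Percent time in therapeutic range by the Rosendaal method: linearly
    interpolate INR between measurement days (ascending) and accumulate the
    person-time with low <= INR <= high."""
    in_range = total = 0.0
    for (d0, i0), (d1, i1) in zip(zip(days, inrs), zip(days[1:], inrs[1:])):
        span = d1 - d0
        total += span
        if i0 == i1:
            in_range += span if low <= i0 <= high else 0.0
            continue
        # fraction of the segment inside [low, high] under linear interpolation
        lo_t = (low - i0) / (i1 - i0)
        hi_t = (high - i0) / (i1 - i0)
        a, b = sorted((lo_t, hi_t))
        in_range += span * max(0.0, min(1.0, b) - max(0.0, a))
    return 100.0 * in_range / total
```

Under the study's definition, a patient whose result from this function is below 65 would be classified as poorly controlled.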

  5. New Madrid seismic zone recurrence intervals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schweig, E.S.; Ellis, M.A.

    1993-03-01

Frequency-magnitude relations in the New Madrid seismic zone suggest that great earthquakes should occur every 700--1,200 yrs, implying relatively high strain rates. These estimates are supported by some geological and GPS results. Recurrence intervals of this order should have produced about 50 km of strike-slip offset since Miocene time. No subsurface evidence for such large displacements is known within the seismic zone. Moreover, the irregular fault pattern forming a compressive step that one sees today is not compatible with large displacements. There are at least three possible interpretations of the observations of short recurrence intervals and high strain rates, but an apparently youthful fault geometry and a lack of major post-Miocene deformation. One is that the seismological and geodetic evidence are misleading. A second possibility is that activity in the region is cyclic. That is, the geological and geodetic observations that suggest relatively short recurrence intervals reflect a time of high, but geologically temporary, pore-fluid pressure. Zoback and Zoback have suggested such a model for intraplate seismicity in general. Alternatively, the New Madrid seismic zone is a geologically young feature that has been active for only the last few tens of thousands of years. In support of this, we observe an irregular fault geometry associated with an unstable compressive step, a series of en echelon and discontinuous lineaments that may define the position of a youthful linking fault, and the general absence of significant post-Eocene faulting or topography.

  6. A Dynamical System Approach Explaining the Process of Development by Introducing Different Time-scales.

    PubMed

    Hashemi Kamangar, Somayeh Sadat; Moradimanesh, Zahra; Mokhtari, Setareh; Bakouie, Fatemeh

    2018-06-11

A developmental process can be described as changes through time within a complex dynamic system. The self-organized changes and emergent behaviour during development can be described and modeled as a dynamical system. We propose a dynamical system approach to answer the main question in human cognitive development, i.e. whether the changes during development happen continuously or in discontinuous stages. Within this approach there is a concept, the size of time-scales, which can be used to address the aforementioned question. We introduce a framework, based on the concept of time-scale, in which "fast" and "slow" are defined by the size of time-scales. According to our suggested model, the overall pattern of development can be seen as one continuous function, with different time-scales in different time intervals.

  7. Magnetostratigraphy susceptibility for the Guadalupian Series GSSPs (Middle Permian) in Guadalupe Mountains National Park and adjacent areas in West Texas

    USGS Publications Warehouse

    Wardlaw, Bruce R.; Ellwood, Brooks B.; Lambert, Lance L.; Tomkin, Jonathan H.; Bell, Gordon L.; Nestell, Galina P.

    2012-01-01

    Here we establish a magnetostratigraphy susceptibility zonation for the three Middle Permian Global boundary Stratotype Sections and Points (GSSPs) that have recently been defined, located in Guadalupe Mountains National Park, West Texas, USA. These GSSPs, all within the Middle Permian Guadalupian Series, define (1) the base of the Roadian Stage (base of the Guadalupian Series), (2) the base of the Wordian Stage and (3) the base of the Capitanian Stage. Data from two additional stratigraphic successions in the region, equivalent in age to the Kungurian–Roadian and Wordian–Capitanian boundary intervals, are also reported. Based on low-field, mass specific magnetic susceptibility (χ) measurements of 706 closely spaced samples from these stratigraphic sections and time-series analysis of one of these sections, we (1) define the magnetostratigraphy susceptibility zonation for the three Guadalupian Series Global boundary Stratotype Sections and Points; (2) demonstrate that χ datasets provide a proxy for climate cyclicity; (3) give quantitative estimates of the time it took for some of these sediments to accumulate; (4) give the rates at which sediments were accumulated; (5) allow more precise correlation to equivalent sections in the region; (6) identify anomalous stratigraphic horizons; and (7) give estimates for timing and duration of geological events within sections.

  8. Kinetic approach to the study of froth flotation applied to a lepidolite ore

    NASA Astrophysics Data System (ADS)

    Vieceli, Nathália; Durão, Fernando O.; Guimarães, Carlos; Nogueira, Carlos A.; Pereira, Manuel F. C.; Margarido, Fernanda

    2016-07-01

    The number of published studies related to the optimization of lithium extraction from low-grade ores has increased as the demand for lithium has grown. However, no study related to the kinetics of the concentration stage of lithium-containing minerals by froth flotation has yet been reported. To establish a factorial design of batch flotation experiments, we conducted a set of kinetic tests to determine the most selective alternative collector, define a range of pulp pH values, and estimate a near-optimum flotation time. Both collectors (Aeromine 3000C and Armeen 12D) provided the required flotation selectivity, although this selectivity was lost in the case of pulp pH values outside the range between 2 and 4. Cumulative mineral recovery curves were used to adjust a classical kinetic model that was modified with a non-negative parameter representing a delay time. The computation of the near-optimum flotation time as the maximizer of a separation efficiency (SE) function must be performed with caution. We instead propose to define the near-optimum flotation time as the time interval required to achieve 95%-99% of the maximum value of the SE function.
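The modified kinetic model described above, a classical first-order flotation model with a non-negative delay-time parameter, can be sketched as follows (the parameter values in the usage check are hypothetical):

```python
import math

def recovery(t, r_inf, k, delay):
    """Cumulative recovery under a first-order flotation model shifted by a
    non-negative delay time tau: R(t) = R_inf * (1 - exp(-k * (t - tau)))
    for t >= tau, and 0 before flotation effectively starts."""
    if t <= delay:
        return 0.0
    return r_inf * (1.0 - math.exp(-k * (t - delay)))

def time_to_fraction(f, k, delay):
    """Time needed to reach a fraction f of the ultimate recovery R_inf,
    analogous to defining the near-optimum flotation time as the interval
    required to achieve 95%-99% of the maximum separation efficiency."""
    return delay - math.log(1.0 - f) / k
```

Fitting `r_inf`, `k` and `delay` to the cumulative mineral recovery curves, then evaluating `time_to_fraction(0.95, ...)`, mirrors the proposed definition of the near-optimum flotation time.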

  9. Infrared Sky Imager (IRSI) Instrument Handbook

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, Victor R.

    2016-04-01

The Infrared Sky Imager (IRSI) deployed at the Atmospheric Radiation Measurement (ARM) Climate Research Facility is a Solmirus Corp. All Sky Infrared Visible Analyzer. The IRSI is an automatic, continuously operating, digital imaging and software system designed to capture hemispheric sky images and provide time series retrievals of fractional sky cover during both the day and night. The instrument provides diurnal, radiometrically calibrated sky imagery in the mid-infrared atmospheric window and imagery in the visible wavelengths for cloud retrievals during daylight hours. The software automatically identifies cloudy and clear regions at user-defined intervals and calculates fractional sky cover, providing a real-time display of sky conditions.
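Once pixels have been classified as cloudy or clear, the fractional-sky-cover retrieval reduces to the ratio of cloud-classified pixels to all classified pixels. A minimal sketch of that final step (the real software performs radiometric per-pixel classification, which this skips):

```python
def fractional_sky_cover(cloud_mask):
    """Fraction of sky covered by cloud, from a 2-D boolean mask in which
    True marks a pixel classified as cloudy."""
    flat = [bool(p) for row in cloud_mask for p in row]
    return sum(flat) / len(flat)
```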

  10. Measuring Skew in Average Surface Roughness as a Function of Surface Preparation

    NASA Technical Reports Server (NTRS)

    Stahl, Mark T.

    2015-01-01

Characterizing surface roughness is important for predicting optical performance. Better measurement of surface roughness reduces grinding, saving both time and money, and allows the science requirements to be better defined. In this study, various materials are polished from a fine grind to a fine polish. Each sample's RMS surface roughness is measured at 81 locations in a 9x9 square grid using a Zygo white light interferometer at regular intervals during the polishing process. Each data set is fit with various standard distributions and tested for goodness of fit. We show that the skew in the RMS data changes as a function of polishing time.
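The skew tracked in the study can be computed as the standard moment coefficient over each 81-point RMS grid. A minimal sketch (the study additionally fits full distributions and tests goodness of fit):

```python
def sample_skewness(xs):
    """Fisher-Pearson moment coefficient of skewness, g1 = m3 / m2**1.5,
    computed over one set of RMS roughness measurements (e.g. the 81 values
    of a 9x9 grid)."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5
```

A symmetric set of roughness values gives zero skew; a long tail of rough outliers gives a positive value, and tracking this number per polishing interval is the study's core measurement.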

  11. Supporting temporal queries on clinical relational databases: the S-WATCH-QL language.

    PubMed Central

    Combi, C.; Missora, L.; Pinciroli, F.

    1996-01-01

Due to the ubiquitous and special nature of time, clinical databases especially need dedicated temporal data types and operators. In this paper we describe S-WATCH-QL (Structured Watch Query Language), a temporal extension of SQL, the widespread query language based on the relational model. S-WATCH-QL extends the well-known SQL by the addition of: a) temporal data types that allow the storage of information with different levels of granularity; b) historical relations that can store together both instantaneous valid times and intervals; c) temporal clauses, functions and predicates that allow the definition of complex temporal queries. PMID:8947722

  12. Immortal time bias in drug safety cohort studies: spontaneous abortion following nonsteroidal antiinflammatory drug exposure.

    PubMed

    Daniel, Sharon; Koren, Gideon; Lunenfeld, Eitan; Levy, Amalia

    2015-03-01

Experimental research of drug safety in pregnancy is generally not feasible because of ethical issues. Therefore, most of the information about drug safety in general and teratogenicity in particular is obtained through observational studies, which require careful methodologic design to obtain unbiased results. Immortal time bias occurs when some cases do not "survive" sufficient time in the study, and as such, they have reduced chances of being defined as "exposed" simply because the durations of their follow-ups were shorter. For example, studies that examine the risk for spontaneous abortions in women exposed to a drug during pregnancy are susceptible to immortal time bias because the chance of drug exposure increases the longer a pregnancy lasts. Therefore, the drug tested may falsely be found protective against the outcome tested. The objective of the current study was to illustrate the extent of immortal time bias using a cohort study of pregnancies assessing the risk for spontaneous abortions following nonsteroidal antiinflammatory drug exposure. We assembled 3 databases containing data on spontaneous abortions, births and drug dispensings to create the present study's cohort. The risk for spontaneous abortion was assessed using 2 statistical analysis methods that were compared for 2 definitions of exposure (dichotomous, exposed vs unexposed, regular Cox regression vs Cox regression with time-varying exposure). Significant differences were found in the risk for spontaneous abortions between the 2 statistical methods, both for groups and for most specific nonsteroidal antiinflammatory drugs (nonselective COX inhibitors - hazard ratio, 0.70; 95% confidence interval, 0.61-0.94 vs hazard ratio, 1.10; 95% confidence interval, 0.99-1.22 for dichotomous vs time-varying exposure analyses, respectively). Furthermore, a significant correlation was found between the median misclassified immortal time for each drug and the extent of the bias. 
Immortal time bias can easily occur in cohort studies assessing the risk for adverse pregnancy outcomes following exposure to drugs. One way to prevent such a bias is by defining exposure only from the time of exposure during follow-up onward using a time-varying exposure analysis. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Emergency Department Overcrowding and Ambulance Turnaround Time

    PubMed Central

    Lee, Yu Jin; Shin, Sang Do; Lee, Eui Jung; Cho, Jin Seong; Cha, Won Chul

    2015-01-01

    Objective The aims of this study were to describe overcrowding in regional emergency departments in Seoul, Korea and evaluate the effect of crowdedness on ambulance turnaround time. Methods This study was conducted between January 2010 and December 2010. Patients who were transported by 119-responding ambulances to 28 emergency centers within Seoul were eligible for enrollment. Overcrowding was defined as the average occupancy rate, which was equal to the average number of patients staying in an emergency department (ED) for 4 hours divided by the number of beds in the ED. After selecting groups for final analysis, multi-level regression modeling (MLM) was performed with random-effects for EDs, to evaluate associations between occupancy rate and turnaround time. Results Between January 2010 and December 2010, 163,659 patients transported to 28 EDs were enrolled. The median occupancy rate was 0.42 (range: 0.10-1.94; interquartile range (IQR): 0.20-0.76). Overcrowded EDs were more likely to have older patients, those with normal mentality, and non-trauma patients. Overcrowded EDs were more likely to have longer turnaround intervals and traveling distances. The MLM analysis showed that an increase of 1% in occupancy rate was associated with 0.02-minute decrease in turnaround interval (95% CI: 0.01 to 0.03). In subgroup analyses limited to EDs with occupancy rates over 100%, we also observed a 0.03 minute decrease in turnaround interval per 1% increase in occupancy rate (95% CI: 0.01 to 0.05). Conclusions In this study, we found wide variation in emergency department crowding in a metropolitan Korean city. Our data indicate that ED overcrowding is negatively associated with turnaround interval with very small practical significance. PMID:26115183
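The crowding index defined above is the average number of patients staying in the ED for 4 hours divided by the number of ED beds. A trivial sketch with hypothetical numbers:

```python
def occupancy_rates(avg_patients_4h, beds):
    """Per-ED occupancy rate: average census of patients staying >= 4 h
    divided by the ED bed count.  Values above 1.0 indicate more such
    patients than beds (the range observed in the study was 0.10-1.94)."""
    return [p / b for p, b in zip(avg_patients_4h, beds)]
```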

  14. The BUMP model of response planning: intermittent predictive control accounts for 10 Hz physiological tremor.

    PubMed

    Bye, Robin T; Neilson, Peter D

    2010-10-01

    Physiological tremor during movement is characterized by ∼10 Hz oscillation observed both in the electromyogram activity and in the velocity profile. We propose that this particular rhythm occurs as the direct consequence of a movement response planning system that acts as an intermittent predictive controller operating at discrete intervals of ∼100 ms. The BUMP model of response planning describes such a system. It forms the kernel of Adaptive Model Theory which defines, in computational terms, a basic unit of motor production or BUMP. Each BUMP consists of three processes: (1) analyzing sensory information, (2) planning a desired optimal response, and (3) execution of that response. These processes operate in parallel across successive sequential BUMPs. The response planning process requires a discrete-time interval in which to generate a minimum acceleration trajectory to connect the actual response with the predicted future state of the target and compensate for executional error. We have shown previously that a response planning time of 100 ms accounts for the intermittency observed experimentally in visual tracking studies and for the psychological refractory period observed in double stimulation reaction time studies. We have also shown that simulations of aimed movement, using this same planning interval, reproduce experimentally observed speed-accuracy tradeoffs and movement velocity profiles. Here we show, by means of a simulation study of constant velocity tracking movements, that employing a 100 ms planning interval closely reproduces the measurement discontinuities and power spectra of electromyograms, joint-angles, and angular velocities of physiological tremor reported experimentally. We conclude that intermittent predictive control through sequential operation of BUMPs is a fundamental mechanism of 10 Hz physiological tremor in movement. Copyright © 2010 Elsevier B.V. All rights reserved.

  15. Beat-to-beat control of human optokinetic nystagmus slow phase durations

    PubMed Central

    Furman, Joseph M.

    2016-01-01

    This study provides the first clear evidence that the generation of optokinetic nystagmus fast phases (FPs) is a decision process that is influenced by performance of a concurrent disjunctive reaction time task (DRT). Ten subjects performed an auditory DRT during constant velocity optokinetic stimulation. Eye movements were measured in three dimensions with a magnetic search coil. Slow phase (SP) durations were defined as the interval between FPs. There were three main findings. Firstly, human optokinetic nystagmus SP durations are consistent with a model of a Gaussian basic interval generator (a type of biological clock), such that FPs can be triggered randomly at the end of a clock cycle (mean duration: 200–250 ms). Kolmogorov-Smirnov tests could not reject the modeled cumulative distribution for any data trials. Secondly, the FP need not be triggered at the end of a clock cycle, so that individual SP durations represent single or multiple clock cycles. Thirdly, the probability of generating a FP at the end of each interval generator cycle decreases significantly during performance of a DRT. These findings indicate that the alternation between SPs and FPs of optokinetic nystagmus is not purely reflexive. Rather, the triggering of the next FP is postponed more frequently if a recently presented DRT trial is pending action when the timing cycle expires. Hence, optokinetic nystagmus FPs show dual-task interference in a manner usually attributed to voluntary movements, including saccades. NEW & NOTEWORTHY This study provides the first clear evidence that the generation of optokinetic nystagmus (OKN) fast phases is a decision process that is influenced by performance of a concurrent disjunctive reaction time task (DRT). The slow phase (SP) durations are consistent with a Gaussian basic interval generator and multiple interval SP durations occur more frequently in the presence of the DRT. 
Hence, OKN shows dual-task interference in a manner observed in voluntary movements, such as saccades. PMID:27760815
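The clock model described in this abstract can be sketched numerically. The parameter values below are illustrative assumptions (the paper reports a 200-250 ms mean cycle; the spread and trigger probability are invented for the sketch): each slow phase spans one or more Gaussian clock cycles, and a fast phase fires at a cycle's end with probability p.

```python
# Hedged sketch of a Gaussian basic interval generator: SP duration is a
# geometric number of Gaussian cycles, so its mean is roughly mu / p.
import random

def simulate_sp_durations(n=10_000, mu=0.225, sigma=0.03, p=0.7, seed=1):
    """Simulate n slow-phase durations (seconds)."""
    random.seed(seed)
    durations = []
    for _ in range(n):
        d = 0.0
        while True:
            d += max(random.gauss(mu, sigma), 0.0)  # one clock cycle
            if random.random() < p:                  # FP fires at cycle end
                break
        durations.append(d)
    return durations

if __name__ == "__main__":
    ds = simulate_sp_durations()
    print(sum(ds) / len(ds))   # close to mu / p
```

Lowering p mimics the dual-task effect reported above: multiple-cycle slow phases become more frequent and the mean duration lengthens.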

  16. Beat-to-beat control of human optokinetic nystagmus slow phase durations.

    PubMed

    Balaban, Carey D; Furman, Joseph M

    2017-01-01

    This study provides the first clear evidence that the generation of optokinetic nystagmus fast phases (FPs) is a decision process that is influenced by performance of a concurrent disjunctive reaction time task (DRT). Ten subjects performed an auditory DRT during constant velocity optokinetic stimulation. Eye movements were measured in three dimensions with a magnetic search coil. Slow phase (SP) durations were defined as the interval between FPs. There were three main findings. Firstly, human optokinetic nystagmus SP durations are consistent with a model of a Gaussian basic interval generator (a type of biological clock), such that FPs can be triggered randomly at the end of a clock cycle (mean duration: 200-250 ms). Kolmogorov-Smirnov tests could not reject the modeled cumulative distribution for any data trials. Secondly, the FP need not be triggered at the end of a clock cycle, so that individual SP durations represent single or multiple clock cycles. Thirdly, the probability of generating a FP at the end of each interval generator cycle decreases significantly during performance of a DRT. These findings indicate that the alternation between SPs and FPs of optokinetic nystagmus is not purely reflexive. Rather, the triggering of the next FP is postponed more frequently if a recently presented DRT trial is pending action when the timing cycle expires. Hence, optokinetic nystagmus FPs show dual-task interference in a manner usually attributed to voluntary movements, including saccades. This study provides the first clear evidence that the generation of optokinetic nystagmus (OKN) fast phases is a decision process that is influenced by performance of a concurrent disjunctive reaction time task (DRT). The slow phase (SP) durations are consistent with a Gaussian basic interval generator and multiple interval SP durations occur more frequently in the presence of the DRT. 
Hence, OKN shows dual-task interference in a manner observed in voluntary movements, such as saccades. Copyright © 2017 the American Physiological Society.

  17. Prospective Predictors of Suicidal Behavior in BPD at 6 Year Follow-up

    PubMed Central

    Soloff, Paul H.; Chiappetta, Laurel

    2012-01-01

    Objective Recurrent suicidal behavior is a defining characteristic of BPD. Although most patients achieve remission of suicidal behaviors over time, 3% to 10% die by suicide, raising the question of whether there is a high risk suicidal subtype in BPD. We are conducting the first longitudinal study of suicidal behavior in BPD to identify prospective predictors of suicide attempts, and characterize BPD patients at highest risk for suicide completion. Method Demographic, diagnostic, clinical and psychosocial risk factors assessed at baseline were examined for predictive association with medically significant suicide attempts using Cox proportional hazards models. Prospective predictors were defined for subjects completing 6 or more years in the study and compared to earlier intervals. Results Among 90 subjects, 25 (27.8%) made at least one suicide attempt in the interval, most occurring in the first two years. Risk of attempt was increased by: a.) low socioeconomic status; b.) poor psychosocial adjustment; c.) a family history of suicide; d.) prior psychiatric hospitalization; e.) absence of any outpatient treatment prior to the attempt. Higher global functioning at baseline decreased risk. Conclusion Risk factors predictive of suicide attempts change over time. Acute stressors such as MDD were predictive only in the short term (12 mos.), while poor psychosocial functioning had persistent and long term effects on suicide risk. Half of BPD patients have poor psychosocial outcomes despite symptomatic improvement. A social and vocational rehabilitation model of treatment is needed to decrease suicide risk and optimize long term outcomes in BPD. PMID:22549208

  18. Surgical team turnover and operative time: An evaluation of operating room efficiency during pulmonary resection.

    PubMed

    Azzi, Alain Joe; Shah, Karan; Seely, Andrew; Villeneuve, James Patrick; Sundaresan, Sudhir R; Shamji, Farid M; Maziak, Donna E; Gilbert, Sebastien

    2016-05-01

    Health care resources are costly and should be used judiciously and efficiently. Predicting the duration of surgical procedures is key to optimizing operating room resources. Our objective was to identify factors influencing operative time, particularly surgical team turnover. We performed a single-institution, retrospective review of lobectomy operations. Univariate and multivariate analyses were performed to evaluate the impact of different factors on surgical time (skin-to-skin) and total procedure time. Staff turnover within the nursing component of the surgical team was defined as the number of instances any nurse had to leave the operating room over the total number of nurses involved in the operation. A total of 235 lobectomies were performed by 5 surgeons, most commonly for lung cancer (95%). On multivariate analysis, percent forced expiratory volume in 1 second, surgical approach, and lesion size had a significant effect on surgical time. Nursing turnover was associated with a significant increase in surgical time (53.7 minutes; 95% confidence interval, 6.4-101; P = .026) and total procedure time (83.2 minutes; 95% confidence interval, 30.1-136.2; P = .002). Active management of surgical team turnover may be an opportunity to improve operating room efficiency when the surgical team is engaged in a major pulmonary resection. Copyright © 2016 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.

  19. Clinical impact and predictors of complete ST segment resolution after primary percutaneous coronary intervention: A subanalysis of the ATLANTIC Trial.

    PubMed

    Fabris, Enrico; van 't Hof, Arnoud; Hamm, Christian W; Lapostolle, Frédéric; Lassen, Jens F; Goodman, Shaun G; Ten Berg, Jurriën M; Bolognese, Leonardo; Cequier, Angel; Chettibi, Mohamed; Hammett, Christopher J; Huber, Kurt; Janzon, Magnus; Merkely, Béla; Storey, Robert F; Zeymer, Uwe; Cantor, Warren J; Tsatsaris, Anne; Kerneis, Mathieu; Diallo, Abdourahmane; Vicaut, Eric; Montalescot, Gilles

    2017-08-01

    In the ATLANTIC (Administration of Ticagrelor in the catheterization laboratory or in the Ambulance for New ST elevation myocardial Infarction to open the Coronary artery) trial the early use of aspirin, anticoagulation, and ticagrelor coupled with very short medical contact-to-balloon times represent good indicators of optimal treatment of ST-elevation myocardial infarction and an ideal setting to explore which factors may influence coronary reperfusion beyond a well-established pre-hospital system. This study sought to evaluate predictors of complete ST-segment resolution after percutaneous coronary intervention in ST-elevation myocardial infarction patients enrolled in the ATLANTIC trial. ST-segment analysis was performed on electrocardiograms recorded at the time of inclusion (pre-hospital electrocardiogram), and one hour after percutaneous coronary intervention (post-percutaneous coronary intervention electrocardiogram) by an independent core laboratory. Complete ST-segment resolution was defined as ≥70% ST-segment resolution. Complete ST-segment resolution occurred post-percutaneous coronary intervention in 54.9% (n=800/1456) of patients and predicted lower 30-day composite major adverse cardiovascular and cerebrovascular events (odds ratio 0.35, 95% confidence interval 0.19-0.65; p<0.01), definite stent thrombosis (odds ratio 0.18, 95% confidence interval 0.02-0.88; p=0.03), and total mortality (odds ratio 0.43, 95% confidence interval 0.19-0.97; p=0.04). In multivariate analysis, independent negative predictors of complete ST-segment resolution were the time from symptoms to pre-hospital electrocardiogram (odds ratio 0.91, 95% confidence interval 0.85-0.98; p<0.01) and diabetes mellitus (odds ratio 0.6, 95% confidence interval 0.44-0.83; p<0.01); pre-hospital ticagrelor treatment showed a favorable trend for complete ST-segment resolution (odds ratio 1.22, 95% confidence interval 0.99-1.51; p=0.06). 
This study confirmed that post-percutaneous coronary intervention complete ST-segment resolution is a valid surrogate marker for cardiovascular clinical outcomes. In the current era of ST-elevation myocardial infarction reperfusion, patients' delay and diabetes mellitus are independent predictors of poor reperfusion and need specific attention in the future.
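The trial's endpoint definition (complete resolution = ≥70% reduction in ST-segment deviation between the pre-hospital and post-PCI ECGs) reduces to a one-line computation; the function names and sample values below are illustrative, not taken from the trial's analysis code.

```python
# Minimal sketch of the >=70% ST-segment resolution endpoint.
def st_resolution_pct(st_pre_mm, st_post_mm):
    """Percent reduction in summed ST-segment deviation."""
    return 100.0 * (st_pre_mm - st_post_mm) / st_pre_mm

def complete_resolution(st_pre_mm, st_post_mm, threshold=70.0):
    """True if ST resolution meets the 'complete' threshold."""
    return st_resolution_pct(st_pre_mm, st_post_mm) >= threshold

if __name__ == "__main__":
    # 10 mm of deviation pre-hospital, 2 mm after PCI: 80% resolution.
    print(complete_resolution(10.0, 2.0))
```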

  20. On the differentiation matrix for Daubechies-based wavelets on an interval

    NASA Technical Reports Server (NTRS)

    Jameson, Leland

    1993-01-01

    The differentiation matrix for a Daubechies-based wavelet basis defined on an interval will be constructed. It will be shown that the differentiation matrix based on the currently available boundary constructions does not maintain the superconvergence encountered under periodic boundary conditions.

  1. On a distinctive feature of problems of calculating time-average characteristics of nuclear reactor optimal control sets

    NASA Astrophysics Data System (ADS)

    Trifonenkov, A. V.; Trifonenkov, V. P.

    2017-01-01

    This article deals with a feature of problems of calculating time-average characteristics of nuclear reactor optimal control sets. The operation of a nuclear reactor during a threatened period is considered. The optimal control search problem is analysed. Xenon poisoning imposes limitations on the variety of statements of the problem of calculating time-average characteristics of a set of optimal reactor power-off controls. The level of xenon poisoning is limited. There is a problem of choosing an appropriate segment of the time axis to ensure that the optimal control problem is consistent. Two procedures for estimating the duration of this segment are considered. Two estimates were plotted as functions of the xenon limitation. The boundaries of the averaging interval are defined more precisely.

  2. Neuropathology of White Matter Lesions, Blood-Brain Barrier Dysfunction, and Dementia.

    PubMed

    Hainsworth, Atticus H; Minett, Thais; Andoh, Joycelyn; Forster, Gillian; Bhide, Ishaan; Barrick, Thomas R; Elderfield, Kay; Jeevahan, Jamuna; Markus, Hugh S; Bridges, Leslie R

    2017-10-01

    We tested whether blood-brain barrier dysfunction in subcortical white matter is associated with white matter abnormalities or risk of clinical dementia in older people (n=126; mean age 86.4, SD: 7.7 years) in the MRC CFAS (Medical Research Council Cognitive Function and Ageing Study). Using digital pathology, we quantified blood-brain barrier dysfunction (defined by immunohistochemical labeling for the plasma marker fibrinogen). This was assessed within subcortical white matter tissue samples harvested from postmortem T2 magnetic resonance imaging (MRI)-detected white matter hyperintensities, from normal-appearing white matter (distant from coexistent MRI-defined hyperintensities), and from equivalent areas in MRI normal brains. Histopathologic lesions were defined using a marker for phagocytic microglia (CD68, clone PGM1). Extent of fibrinogen labeling was not significantly associated with white matter abnormalities defined either by MRI (odds ratio, 0.90; 95% confidence interval, 0.79-1.03; P=0.130) or by histopathology (odds ratio, 0.93; 95% confidence interval, 0.77-1.12; P=0.452). Among participants with normal MRI (no detectable white matter hyperintensities), increased fibrinogen was significantly related to decreased risk of clinical dementia (odds ratio, 0.74; 95% confidence interval, 0.58-0.94; P=0.013). Among participants with histological lesions, increased fibrinogen was related to increased risk of dementia (odds ratio, 2.26; 95% confidence interval, 1.25-4.08; P=0.007). Our data suggest that some degree of blood-brain barrier dysfunction is common in older people and that this may be related to clinical dementia risk, additional to standard MRI biomarkers. © 2017 American Heart Association, Inc.

  3. Approaches for the accurate definition of geological time boundaries

    NASA Astrophysics Data System (ADS)

    Schaltegger, Urs; Baresel, Björn; Ovtcharova, Maria; Goudemand, Nicolas; Bucher, Hugo

    2015-04-01

    Which strategies lead to the most precise and accurate date of a given geological boundary? Geological units are usually defined by the occurrence of characteristic taxa and hence boundaries between these geological units correspond to dramatic faunal and/or floral turnovers and they are primarily defined using first or last occurrences of index species, or ideally by the separation interval between two consecutive, characteristic associations of fossil taxa. These boundaries need to be defined in a way that enables their worldwide recognition and correlation across different stratigraphic successions, using tools as different as bio-, magneto-, and chemo-stratigraphy, and astrochronology. Sedimentary sequences can be dated in numerical terms by applying high-precision chemical-abrasion, isotope-dilution, thermal-ionization mass spectrometry (CA-ID-TIMS) U-Pb age determination to zircon (ZrSiO4) in intercalated volcanic ashes. But, though volcanic activity is common in geological history, ashes are not necessarily close to the boundary we would like to date precisely and accurately. In addition, U-Pb zircon data sets may be very complex and difficult to interpret in terms of the age of ash deposition. To overcome these difficulties we applied a multi-proxy approach to the precise and accurate dating of the Permo-Triassic and Early-Middle Triassic boundaries in South China. a) Dense sampling of ashes across the critical time interval and a sufficiently large number of analysed zircons per ash sample can guarantee the recognition of all system complexities. Geochronological datasets from U-Pb dating of volcanic zircon may indeed combine effects of i) post-crystallization Pb loss from percolation of hydrothermal fluids (even using chemical abrasion), with ii) age dispersion from prolonged residence of earlier crystallized zircon in the magmatic system. 
    As a result, U-Pb dates of individual zircons are both apparently younger and older than the depositional age of the ash, therefore masking the true age of deposition. Trace element ratios such as Th/U, Yb/Gd, as well as Hf isotope analysis of dated zircon can be used to decipher the temporal evolution of the magmatic system before the eruption and deposition of the studied ashes, and resolve the complex system behaviour of the zircons. b) Changes in the source of the magma may happen between the deposition of two stratigraphically consecutive ash beds. They result in the modification of the trace element signature of zircon, but also of apatite (Ca5(PO4)3(F,Cl,OH)). Trace element characteristics in apatite (e.g. Mg, Mn, Fe, F, Cl, Ce, and Y) are a reliable tool for distinguishing chemically similar groups of apatite crystals to unravel the geochemical fingerprint of one single ash bed. By establishing this fingerprint, ash beds of geographically separated geologic sections can be correlated even if they have not all been dated by U-Pb techniques. c) The ultimate goal of quantitative stratigraphy is to establish an age model that predicts the age of a synchronous time line with an associated 95% confidence interval for any such line within a stratigraphic sequence. We show how a Bayesian, non-parametric interpolation approach can be applied to very complex data sets and leads to a well-defined age solution, possibly identifying changes in sedimentation rate. The age of a geological time boundary bracketed by dated samples in such an age model can be defined with an associated uncertainty.

  4. Atorvastatin Use Associated With Acute Pancreatitis

    PubMed Central

    Lai, Shih-Wei; Lin, Cheng-Li; Liao, Kuan-Fu

    2016-01-01

    Abstract Few data are present in the literature on the relationship between atorvastatin use and acute pancreatitis. The aim of this study was to explore this issue in Taiwan. Using representative claims data established from the Taiwan National Health Insurance Program, this case–control study consisted of 5810 cases aged 20 to 84 years with a first-time diagnosis of acute pancreatitis during the period 1998 to 2011 and 5733 randomly selected controls without acute pancreatitis. Both cases and controls were matched by sex, age, comorbidities, and index year of diagnosing acute pancreatitis. Subjects who received at least 1 prescription for other statins or nonstatin lipid-lowering drugs were excluded from the study. Subjects who never had a prescription for atorvastatin were defined as never use of atorvastatin. Current use of atorvastatin was defined as subjects whose last remaining 1 tablet of atorvastatin was noted ≤7 days before the date of diagnosing acute pancreatitis. Late use of atorvastatin was defined as subjects whose last remaining 1 tablet of atorvastatin was noted >7 days before the date of diagnosing acute pancreatitis. The odds ratio with 95% confidence interval of acute pancreatitis associated with atorvastatin use was calculated by using the logistic regression analysis. The logistic regression analysis revealed that the odds ratio of acute pancreatitis was 1.67 for subjects with current use of atorvastatin (95% confidence interval 1.18, 2.38), when compared with subjects with never use of atorvastatin. The odds ratio decreased to 1.15 for those with late use of atorvastatin (95% confidence interval 0.87, 1.52), but without statistical significance. Current use of atorvastatin is associated with the diagnosis of acute pancreatitis. 
Clinically, clinicians should consider the possibility of atorvastatin-associated acute pancreatitis when patients present with a diagnosis of acute pancreatitis without a definite etiology but are taking atorvastatin. PMID:26886597
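The effect measure used throughout this study can be reproduced mechanically: for a 2x2 case-control table, the odds ratio and its 95% confidence interval follow from the standard log-odds-ratio normal (Wald) approximation. The counts in the sketch below are illustrative, not the study's data.

```python
# Odds ratio with Wald 95% CI from a 2x2 case-control table.
import math

def odds_ratio_ci(exposed_cases, unexposed_cases,
                  exposed_controls, unexposed_controls, z=1.96):
    a, b = exposed_cases, unexposed_cases
    c, d = exposed_controls, unexposed_controls
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)       # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

if __name__ == "__main__":
    # Hypothetical counts: exposure among cases vs matched controls.
    print(odds_ratio_ci(40, 5770, 25, 5708))
```

A CI whose lower bound exceeds 1.0 corresponds to the "statistically significant" associations quoted in the abstract.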

  5. Recurrence and interoccurrence behavior of self-organized complex phenomena

    NASA Astrophysics Data System (ADS)

    Abaimov, S. G.; Turcotte, D. L.; Shcherbakov, R.; Rundle, J. B.

    2007-08-01

    The sandpile, forest-fire and slider-block models are said to exhibit self-organized criticality. Associated natural phenomena include landslides, wildfires, and earthquakes. In all cases the frequency-size distributions are well approximated by power laws (fractals). Another important aspect of both the models and natural phenomena is the statistics of interval times. These statistics are particularly important for earthquakes. For earthquakes it is important to make a distinction between interoccurrence and recurrence times. Interoccurrence times are the interval times between earthquakes on all faults in a region whereas recurrence times are interval times between earthquakes on a single fault or fault segment. In many, but not all cases, interoccurrence time statistics are exponential (Poissonian) and the events occur randomly. However, the distribution of recurrence times is often Weibull to a good approximation. In this paper we study the interval statistics of slip events using a slider-block model. The behavior of this model is sensitive to the stiffness α of the system, α=kC/kL where kC is the spring constant of the connector springs and kL is the spring constant of the loader plate springs. For a soft system (small α) there are no system-wide events and interoccurrence time statistics of the larger events are Poissonian. For a stiff system (large α), system-wide events dominate the energy dissipation and the statistics of the recurrence times between these system-wide events satisfy the Weibull distribution to a good approximation. We argue that this applicability of the Weibull distribution is due to the power-law (scale invariant) behavior of the hazard function, i.e. the probability that the next event will occur at a time t0 after the last event has a power-law dependence on t0. The Weibull distribution is the only distribution that has a scale invariant hazard function. 
We further show that the onset of system-wide events is a well defined critical point. We find that the number of system-wide events NSWE satisfies the scaling relation NSWE ∝(α-αC)δ where αC is the critical value of the stiffness. The system-wide events represent a new phase for the slider-block system.
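The scale-invariance argument in this abstract is easy to verify numerically: the Weibull hazard h(t) = (β/η)(t/η)^(β-1) is a pure power law, so h(ct)/h(t) = c^(β-1) is independent of t. A small check (the parameter values are arbitrary):

```python
# Weibull hazard and its scale invariance: h(c*t)/h(t) = c**(beta-1).
def weibull_hazard(t, beta, eta):
    """Weibull hazard rate with shape beta and scale eta."""
    return (beta / eta) * (t / eta) ** (beta - 1)

def scale_factor(t, c, beta, eta):
    """h(c*t) / h(t); for a power-law hazard this does not depend on t."""
    return weibull_hazard(c * t, beta, eta) / weibull_hazard(t, beta, eta)

if __name__ == "__main__":
    beta, eta = 1.8, 1.0
    # Same ratio at very different times t = 0.5 and t = 7.0:
    print(scale_factor(0.5, 3.0, beta, eta), scale_factor(7.0, 3.0, beta, eta))
```

β > 1 gives an increasing hazard (events become "overdue"), matching the quasi-periodic recurrence of system-wide events; β = 1 recovers the memoryless exponential case of interoccurrence times.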

  6. WE-G-BRD-08: Motion Analysis for Rectal Cancer: Implications for Adaptive Radiotherapy On the MR-Linac

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kleijnen, J; Asselen, B van; Burbach, M

    2015-06-15

    Purpose: The purpose of this study is to find the optimal trade-off between adaptation interval and margin reduction and to define the implications of motion for rectal cancer boost radiotherapy on a MR-linac. Methods: Daily MRI scans were acquired of 16 patients, diagnosed with rectal cancer, prior to each radiotherapy fraction in one week (N=76). Each scan session consisted of T2-weighted and three 2D sagittal cine-MRI, at begin (t=0 min), middle (t=9:30 min) and end (t=18:00 min) of scan session, for 1 minute at 2 Hz temporal resolution. Tumor and clinical target volume (CTV) were delineated on each T2-weighted scan and transferred to each cine-MRI. The start frame of the begin scan was used as reference and registered to frames at time-points 15, 30 and 60 seconds, 9:30 and 18:00 minutes and 1, 2, 3 and 4 days later. Per time-point, motion of delineated voxels was evaluated using the deformation vector fields of the registrations and the 95th percentile distance (dist95%) was calculated as measure of motion. Per time-point, the distance that includes 90% of all cases was taken as estimate of required planning target volume (PTV)-margin. Results: Highest motion reduction is observed going from 9:30 minutes to 60 seconds. We observe a reduction in margin estimates from 10.6 to 2.7 mm and 16.1 to 4.6 mm for tumor and CTV, respectively, when adapting every 60 seconds compared to not adapting treatment: a 75% and 71% reduction, respectively. Further reduction in adaptation time-interval yields only marginal motion reduction. For adaptation intervals longer than 18:00 minutes only small motion reductions are observed. Conclusion: The optimal adaptation interval for adaptive rectal cancer (boost) treatments on a MR-linac is 60 seconds. This results in substantially smaller PTV-margin estimates. Adaptation intervals of 18:00 minutes and higher show little improvement in motion reduction.
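The two-stage summary in the methods (per case, dist95% of voxel motion; then the distance covering 90% of cases as the margin estimate) can be sketched as follows. The nearest-rank percentile rule and the toy per-case data are assumptions for illustration, not the authors' pipeline.

```python
# Sketch of a dist95% / 90%-coverage margin estimate.
import math

def percentile(values, q):
    """Nearest-rank percentile: smallest value with at least q% of the
    sorted data at or below it (q in (0, 100])."""
    s = sorted(values)
    k = max(1, math.ceil(q / 100.0 * len(s)))
    return s[k - 1]

def margin_estimate(per_case_distances):
    """dist95% per case, then the distance that includes 90% of cases."""
    dist95 = [percentile(case, 95) for case in per_case_distances]
    return percentile(dist95, 90)

if __name__ == "__main__":
    # Ten hypothetical cases, each a list of voxel motion distances (mm).
    cases = [[0.5 * i] * 20 for i in range(1, 11)]
    print(margin_estimate(cases))
```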

  7. Single therapeutic and supratherapeutic doses of corifollitropin alfa, a sustained follicle stimulant, do not prolong the QTcF-interval in healthy postmenopausal volunteers.

    PubMed

    de Kam, Pieter-Jan; van Kuijk, Jacqueline H M; Zandvliet, Anthe S; Thomsen, Torben

    2015-09-01

    Corifollitropin alfa (Elonva®) is the first hybrid follicle-stimulating hormone molecule with demonstrated sustained follicle-stimulating activity after a single subcutaneous injection. This trial evaluated if corifollitropin alfa is associated with QT/QTc prolongation and/or proarrhythmic potential as compared to placebo in healthy post-menopausal women. Participants were healthy, postmenopausal women. Study treatments were corifollitropin alfa 150 μg, corifollitropin alfa 240 μg, and moxifloxacin 400 mg with placebo. This randomized, double blind, double-dummy, 4-period crossover trial compared single doses of corifollitropin alfa 150 μg (therapeutic dose), corifollitropin alfa 240 μg (supratherapeutic dose), and moxifloxacin 400 mg (positive control) with placebo. Corifollitropin alfa was administered on day 1 and moxifloxacin on day 2. The largest time-matched mean QTcF difference versus placebo for the therapeutic dose of corifollitropin alfa was 1.4 ms (upper limit of 1-sided 95% confidence interval (UL 95% CI) = 3.4 ms), and for the supratherapeutic dose was 1.2 ms (UL 95% CI = 3.6 ms). For both the therapeutic and the supratherapeutic dose of corifollitropin alfa and at all time points, the UL 95% CI for the time matched QTcF differences compared with placebo was below 10 ms, the threshold of relevance defined by the ICH E14 guideline. Single therapeutic and supratherapeutic doses of corifollitropin alfa are not associated with clinically relevant QT/QTc-interval prolongation in healthy post-menopausal women.
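For context, the QTcF endpoint used above is the QT interval corrected for heart rate with Fridericia's cube-root formula, QTcF = QT / RR^(1/3), with RR measured in seconds. A minimal sketch:

```python
# Fridericia heart-rate correction of the QT interval.
def qtcf(qt_ms, rr_s):
    """Fridericia-corrected QT interval (ms), RR interval in seconds."""
    return qt_ms / rr_s ** (1.0 / 3.0)

if __name__ == "__main__":
    # At 60 bpm (RR = 1 s) the correction is neutral: QTcF equals QT.
    print(qtcf(400.0, 1.0))  # 400.0
```

The correction lets QT values measured at different heart rates be compared against a common 10 ms regulatory threshold.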

  8. Post-Extinction Ecological Recovery of Marine Life Modes

    NASA Astrophysics Data System (ADS)

    Park, C.; de la Torre, N. G.; Heim, N.; Payne, J.

    2016-12-01

    A mass extinction is defined by a substantial increase in extinction rates, resulting in a loss of taxonomic and ecological diversity. Bush et al. (2007) defined ecological life modes as the feeding, motility, and tiering habits and organized them in a six-by-six "eco-cube" in which each section represented a life mode. In our research, we analyzed the ecological recovery of each life mode after the five mass extinctions. Using a fossil marine genera database, we compiled five heat maps that depict the recovery of the life modes by plotting the diversity of genera in each life mode two intervals before and five intervals after each mass extinction interval. New life modes seem to appear either immediately following or three or more intervals after a mass extinction, which indicates that ecological recovery is not a gradual process, but rather occurs in a punctuated manner. Furthermore, the "filling order" of new life modes differs in each extinction. However, some seem to have defined patterns, such as the Ordovician, where earlier post-extinction intervals experienced an increase in the diversity of erect (tiering) ecospaces, followed by that of surficial and shallow infaunal life modes. The Devonian mass extinction followed a similar pattern as the end Ordovician where erect organisms came first followed by surficial, deep-infaunal, and pelagic life modes. Conversely, intervals following the end-Permian mass extinction experienced a recovery in pelagic, freely-moving life modes, followed by a recovery in infaunal organisms and an explosion in semi-infaunal, erect, surficial, and pelagic ecospaces in the Ladinian. New life modes in the Triassic and Cretaceous mass extinctions did not seem to appear in a distinct pattern. 
Overall, we conclude that recovery patterns are unique depending on the cause of each mass extinction, and that any general tendency in post-extinction ecological recovery was most likely overridden by the environmental condition of the recovery intervals.

  9. Toward a continuous 405-kyr-calibrated Astronomical Time Scale for the Mesozoic Era

    NASA Astrophysics Data System (ADS)

    Hinnov, Linda; Ogg, James; Huang, Chunju

    2010-05-01

    Mesozoic cyclostratigraphy is being assembled into a continuous Astronomical Time Scale (ATS) tied to the Earth's cyclic orbital parameters. Recognition of a nearly ubiquitous, dominant ~400-kyr cycling in formations throughout the era has been particularly striking. Composite formations spanning contiguous intervals up to 50 myr clearly express these long-eccentricity cycles, and in some cases, this cycling is defined by third- or fourth-order sea-level sequences. This frequency is associated with the 405-kyr orbital eccentricity cycle, which provides a basic metronome and enables the extension of the well-defined Cenozoic ATS to scale the majority of the Mesozoic Era. This astronomical calibration has a resolution comparable to the 1% to 0.1% precision for radioisotope dating of Mesozoic ash beds, but with the added benefit of providing continuous stratigraphic coverage between dated beds. Extended portions of the Mesozoic ATS provide solutions to long-standing geologic problems of tectonics, eustasy, paleoclimate change, and rates of seafloor spreading.

  10. Gonadal morphogenesis and gene expression in reptiles with temperature-dependent sex determination.

    PubMed

    Merchant-Larios, H; Díaz-Hernández, V; Marmolejo-Valencia, A

    2010-01-01

    In reptiles with temperature-dependent sexual determination, the thermosensitive period (TSP) is the interval in which the sex is defined during gonadal morphogenesis. One-shift experiments in a group of eggs define the onset and the end of the TSP as all and none responses, respectively. Timing for sex-undetermined (UG) and -determined gonads (DG) differs at male- (MPT) or female-producing temperatures (FPT). During the TSP a decreasing number of embryos respond to temperature shifts indicating that in this period embryos with both UG and DG exist. Although most UG correspond to undifferentiated gonads, some embryos extend UG after the onset of histological differentiation. Thus, temperature affects gonadal cells during the process of morphogenesis, but timing of commitment depends on individual embryos. A correlation between gonadal morphogenesis, TSP, and gene expression suggests that determination of the molecular pathways modulated by temperature in epithelial cells (surface epithelium and medullary cords) holds the key for a unifying hypothesis on temperature-dependent sex determination. (c) 2010 S. Karger AG, Basel.

  11. A reliability analysis of cardiac repolarization time markers.

    PubMed

    Scacchi, S; Franzone, P Colli; Pavarino, L F; Taccardi, B

    2009-06-01

    Only a limited number of studies have addressed the reliability of extracellular markers of cardiac repolarization time, such as the classical marker RT(eg) defined as the time of maximum upslope of the electrogram T wave. This work presents an extensive three-dimensional simulation study of cardiac repolarization time, extending the previous one-dimensional simulation study of a myocardial strand by Steinhaus [B.M. Steinhaus, Estimating cardiac transmembrane activation and recovery times from unipolar and bipolar extracellular electrograms: a simulation study, Circ. Res. 64 (3) (1989) 449]. The simulations are based on the bidomain - Luo-Rudy phase I system with rotational fiber anisotropy and homogeneous or heterogeneous transmural intrinsic membrane properties. The classical extracellular marker RT(eg) is compared with the gold standard of fastest repolarization time RT(tap), defined as the time of minimum derivative during the downstroke of the transmembrane action potential (TAP). Additionally, a new extracellular marker RT90(eg) is compared with the gold standard of late repolarization time RT90(tap), defined as the time when the TAP reaches 90% of its resting value. The results show a good global match between the extracellular and transmembrane repolarization markers, with small relative mean discrepancy and high correlation coefficients (≥0.92), ensuring a reasonably good global match between the associated repolarization sequences. However, large local discrepancies of the extracellular versus transmembrane markers may ensue in regions where the curvature of the repolarization front changes abruptly (e.g. near front collisions) or is negligible (e.g. where repolarization proceeds almost uniformly across fiber). As a consequence, the spatial distribution of activation-recovery intervals (ARI) may provide an inaccurate estimate of (and weakly correlated with) the spatial distribution of action potential durations (APD).
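The two marker definitions quoted above, a time of maximum upslope (the RT-style markers) and a time of 90% recovery toward rest (the RT90-style markers), can be sketched on a sampled waveform. The discrete-derivative rule and the toy trace below are illustrative assumptions, not the study's bidomain pipeline.

```python
# Sketch of upslope- and recovery-based repolarization-time markers.
def rt_max_upslope(wave, dt):
    """Time of steepest upslope (analogue of the RT(eg) marker)."""
    slopes = [(wave[i + 1] - wave[i]) / dt for i in range(len(wave) - 1)]
    return slopes.index(max(slopes)) * dt

def rt90_recovery(wave, dt, resting=0.0):
    """First time the signal recovers 90% of the way back to resting
    after its peak (analogue of the RT90 marker)."""
    peak_i = wave.index(max(wave))
    threshold = resting + 0.1 * (wave[peak_i] - resting)
    for i in range(peak_i, len(wave)):
        if wave[i] <= threshold:
            return i * dt
    return None  # never recovered within the trace

if __name__ == "__main__":
    dt = 0.001  # 1 ms sampling
    wave = [0.0, 0.0, 0.0, 0.0, 0.0, 0.1, 0.3, 0.9, 1.0, 0.6, 0.2, 0.0]
    print(rt_max_upslope(wave, dt), rt90_recovery(wave, dt))  # 0.006 0.011
```

On real signals the two markers can disagree locally, which is exactly the discrepancy the study quantifies.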

  12. Metastasis-free interval in breast cancer patients: Thirty-year trends and time dependency of prognostic factors. A retrospective analysis based on a single institution experience.

    PubMed

    Houzé de l'Aulnoit, A; Rogoz, B; Pinçon, C; Houzé de l'Aulnoit, D

    2018-02-01

    Breast cancer remains the leading cause of cancer death in French women in spite of continuously improving management. The objectives of this study were to analyse trends in the metastasis-free interval over the past 30 years and to identify the prognostic factors of survival, while accounting for time dependency. A total of 1613 patients diagnosed with invasive non-metastatic breast cancer at Saint Vincent de Paul Hospital, Lille, France between 1977 and 2013, were followed for outcome (metastasis-free interval). Cohort entry time delay, a continuous temporal covariate, was defined to assess improvement of outcome. Data were analysed using the Cox proportional hazards model and presented as hazard ratio (HR). Metastatic disease developed during follow-up in 446 (27.6%) patients. Cohort entry time delay exhibited strong independent prognostic value while accounting for multiple prognostic factors including: tumour size (HR = 1.62, 95% CI 1.37-1.91); rapid tumour growth (HR = 1.59, 95% CI 1.17-2.16); lymph node ratio (HR = 2.29, 95% CI 1.97-2.66); histological grade (grade 2 was significant only during the first 10 years after diagnosis, grade 3 and progesterone receptor status only during the first 5 years after diagnosis); and oestrogen receptor status (significant only during the first 8 years (HR = 0.75, 95% CI 0.58-0.96)). The current study showed an improvement in the prognosis of breast cancer patients over the past 30 years and pointed to the importance of evaluating covariates with time-varying effects. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Timing of Occurrence Is the Most Important Characteristic of Spot Sign.

    PubMed

    Wang, Binli; Yan, Shenqiang; Xu, Mengjun; Zhang, Sheng; Liu, Keqin; Hu, Haitao; Selim, Magdy; Lou, Min

    2016-05-01

    Most previous studies have used single-phase computed tomographic angiography to detect the spot sign, a marker for hematoma expansion (HE) in spontaneous intracerebral hemorrhage. We investigated whether defining the spot sign based on timing on perfusion computed tomography (CTP) would improve its specificity for predicting HE. We prospectively enrolled supratentorial spontaneous intracerebral hemorrhage patients who underwent CTP within 6 hours of onset. Logistic regression was performed to assess the risk factors for HE and poor outcome. Predictive performance of individual CTP spot sign characteristics was examined with receiver operating characteristic analysis. Sixty-two men and 21 women with spontaneous intracerebral hemorrhage were included in this analysis. Spot sign was detected in 46% (38/83) of patients. Receiver operating characteristic analysis indicated that the timing of spot sign occurrence on CTP had the greatest area under receiver operating characteristic curve for HE (0.794; 95% confidence interval, 0.630-0.958; P=0.007); the cutoff time was 23.13 seconds. On multivariable analysis, the presence of early-occurring spot sign (ie, spot sign before 23.13 seconds) was an independent predictor not only of HE (odds ratio = 28.835; 95% confidence interval, 6.960-119.458; P<0.001), but also of mortality at 3 months (odds ratio = 22.377; 95% confidence interval, 1.773-282.334; P=0.016). Moreover, the predictive performance showed that the redefined early-occurring spot sign maintained a higher specificity for HE compared with spot sign (91% versus 74%). Redefining the spot sign based on timing of contrast leakage on CTP to determine early-occurring spot sign improves the specificity for predicting HE and 3-month mortality. The use of early-occurring spot sign could improve the selection of ICH patients for potential hemostatic therapy. © 2016 American Heart Association, Inc.

  14. Somatosensory Temporal Discrimination Threshold Involves Inhibitory Mechanisms in the Primary Somatosensory Area.

    PubMed

    Rocchi, Lorenzo; Casula, Elias; Tocco, Pierluigi; Berardelli, Alfredo; Rothwell, John

    2016-01-13

    Somatosensory temporal discrimination threshold (STDT) is defined as the shortest time interval necessary for a pair of tactile stimuli to be perceived as separate. Although STDT is altered in several neurological disorders, its neural bases are not entirely clear. We used continuous theta burst stimulation (cTBS) to condition the excitability of the primary somatosensory cortex in healthy humans to examine its possible contribution to STDT. Excitability was assessed using the recovery cycle of the N20 component of somatosensory evoked potentials (SEP) and the area of high-frequency oscillations (HFO). cTBS increased STDT and reduced inhibition in the N20 recovery cycle at an interstimulus interval of 5 ms. It also reduced the amplitude of late HFO. All three effects were correlated. There was no effect of cTBS over the secondary somatosensory cortex on STDT, although it reduced the N120 component of the SEP. STDT is assessed conventionally with a simple ascending method. To increase insight into the effect of cTBS, we measured temporal discrimination with a psychophysical method. cTBS reduced the slope of the discrimination curve, consistent with a reduction of the quality of sensory information caused by an increase in noise. We hypothesize that cTBS reduces the effectiveness of inhibitory interactions normally used to sharpen temporal processing of sensory inputs. This reduction in discriminability of sensory input is equivalent to adding neural noise to the signal. Precise timing of sensory information is crucial for nearly every aspect of human perception and behavior. One way to assess the ability to analyze temporal information in the somatosensory domain is to measure the somatosensory temporal discrimination threshold (STDT), defined as the shortest time interval necessary for a pair of tactile stimuli to be perceived as separate. In this study, we found that STDT depends on inhibitory mechanisms within the primary somatosensory area (S1). 
This finding helps interpret the sensory processing deficits in neurological diseases, such as focal dystonia and Parkinson's disease, and possibly prompts future studies using neurostimulation techniques over S1 for therapeutic purposes in dystonic patients. Copyright © 2016 the authors 0270-6474/16/360325-11$15.00/0.

  15. Prevalence of dry eye syndrome in an adult population.

    PubMed

    Hashemi, Hassan; Khabazkhoob, Mehdi; Kheirkhah, Ahmad; Emamian, Mohammad Hassan; Mehravaran, Shiva; Shariati, Mohammad; Fotouhi, Akbar

    2014-04-01

    To determine the prevalence of dry eye syndrome in the general 40- to 64-year-old population of Shahroud, Iran. Population-based cross-sectional study. Through cluster sampling, 6311 people were selected and 5190 participated. Assessment of dry eye was done in a random subsample of 1008 people. Subjective assessment for dry eye syndrome was performed using the Ocular Surface Disease Index questionnaire. In addition, the following objective tests of dry eye syndrome were employed: Schirmer test, tear break-up time, and fluorescein and Rose Bengal staining using the Oxford grading scheme. Those with an Ocular Surface Disease Index score ≥23 were considered symptomatic, and dry eye syndrome was defined as having symptoms and at least one positive objective sign. The prevalence of dry eye syndrome was 8.7% (95% confidence interval 6.9-10.6). Assessment of signs showed an abnormal Schirmer score in 17.8% (95% confidence interval 15.5-20.0), abnormal tear break-up time in 34.2% (95% confidence interval 29.5-38.8), corneal fluorescein staining (≥1) in 11.3% (95% confidence interval 8.5-14.1) and Rose Bengal staining (≥3 for cornea and/or conjunctiva) in 4.9% (95% confidence interval 3.4-6.5). According to the Ocular Surface Disease Index scores, 18.3% (95% confidence interval 15.9-20.6) had dry eye syndrome symptoms. The prevalence of dry eye syndrome was significantly higher in women (P = 0.010) and not significantly associated with age (P = 0.291). The objective dry eye syndrome signs significantly increased with age. Based on the findings, the prevalence of dry eye syndrome in the studied population is in the mid-range. The prevalence is higher in women. Also, objective tests tend to become abnormal at older ages. Pterygium was associated with dry eye syndrome and increased its symptoms. © 2013 Royal Australian and New Zealand College of Ophthalmologists.

  16. Predictors of Functional Dependence Despite Successful Revascularization in Large-Vessel Occlusion Strokes

    PubMed Central

    Shi, Zhong-Song; Liebeskind, David S.; Xiang, Bin; Ge, Sijian Grace; Feng, Lei; Albers, Gregory W.; Budzik, Ronald; Devlin, Thomas; Gupta, Rishi; Jansen, Olav; Jovin, Tudor G.; Killer-Oberpfalzer, Monika; Lutsep, Helmi L.; Macho, Juan; Nogueira, Raul G.; Rymer, Marilyn; Smith, Wade S.; Wahlgren, Nils; Duckwiler, Gary R.

    2014-01-01

    Background and Purpose High revascularization rates in large-vessel occlusion strokes treated by mechanical thrombectomy are not always associated with good clinical outcomes. We evaluated predictors of functional dependence despite successful revascularization among patients with acute ischemic stroke treated with thrombectomy. Methods We analyzed the pooled data from the Multi Mechanical Embolus Removal in Cerebral Ischemia (MERCI), Thrombectomy Revascularization of Large Vessel Occlusions in Acute Ischemic Stroke (TREVO), and TREVO 2 trials. Successful revascularization was defined as a thrombolysis in cerebral infarction score of 2b or 3. Functional dependence was defined as a score of 3 to 6 on the modified Rankin Scale at 3 months. We assessed the relationship of demographic, clinical, and angiographic characteristics and hemorrhage with functional dependence despite successful revascularization. Results Two hundred and twenty-eight patients with successful revascularization had clinical outcome follow-up. The rates of functional dependence with endovascular success were 48.6% for Trevo thrombectomy and 58.0% for Merci thrombectomy. Age (odds ratio, 1.04; 95% confidence interval, 1.02–1.06 per 1-year increase), National Institutes of Health Stroke Scale score (odds ratio, 1.08; 95% confidence interval, 1.02–1.15 per 1-point increase), and symptom onset to endovascular treatment time (odds ratio, 1.11; 95% confidence interval, 1.01–1.22 per 30-minute delay) were predictors of functional dependence despite successful revascularization. Symptom onset to reperfusion time beyond 5 hours was associated with functional dependence. All subjects with symptomatic intracranial hemorrhage had functional dependence. Conclusions One half of patients with successful mechanical thrombectomy do not have good outcomes. Age, severe neurological deficits, and delayed endovascular treatment were associated with functional dependence despite successful revascularization. 
Our data support efforts to minimize delays to endovascular therapy in patients with acute ischemic stroke to improve outcomes. PMID:24876082

  17. Nonparametric change point estimation for survival distributions with a partially constant hazard rate.

    PubMed

    Brazzale, Alessandra R; Küchenhoff, Helmut; Krügel, Stefanie; Schiergens, Tobias S; Trentzsch, Heiko; Hartl, Wolfgang

    2018-04-05

    We present a new method for estimating a change point in the hazard function of a survival distribution assuming a constant hazard rate after the change point and a decreasing hazard rate before the change point. Our method is based on fitting a stump regression to p values for testing hazard rates in small time intervals. We present three real data examples describing survival patterns of severely ill patients, whose excess mortality rates are known to persist far beyond hospital discharge. For designing survival studies in these patients and for the definition of hospital performance metrics (e.g. mortality), it is essential to define adequate and objective end points. The reliable estimation of a change point will help researchers to identify such end points. By precisely knowing this change point, clinicians can distinguish between the acute phase with high hazard (time elapsed after admission and before the change point was reached), and the chronic phase (time elapsed after the change point) in which hazard is fairly constant. We show in an extensive simulation study that maximum likelihood estimation is not robust in this setting, and we evaluate our new estimation strategy including bootstrap confidence intervals and finite sample bias correction.
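
The core fitting step described above, a stump (single-split piecewise-constant) regression on per-interval p values, can be sketched as follows. The p values below are hypothetical stand-ins, not data from the study: small while the hazard still exceeds the late constant rate, roughly uniform afterwards.

```python
import numpy as np

def fit_stump(x, y):
    """Least-squares one-split piecewise-constant ("stump") regression.
    Returns (split_location, left_mean, right_mean)."""
    best_i, best_sse = None, np.inf
    for i in range(1, len(x)):
        left, right = y[:i], y[i:]
        sse = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        if sse < best_sse:
            best_i, best_sse = i, sse
    return x[best_i], y[:best_i].mean(), y[best_i:].mean()

# hypothetical p values from per-interval tests against the late-phase
# constant hazard: small before the change point (near day 8), larger after
mid = np.arange(0.5, 20.5, 1.0)                     # interval midpoints (days)
rng = np.random.default_rng(0)
p = np.where(mid < 8, rng.uniform(0.0, 0.05, mid.size),
                      rng.uniform(0.2, 1.0, mid.size))

cp, lo_mean, hi_mean = fit_stump(mid, p)            # cp estimates the change point
```

The estimated split falls at the boundary where the p values jump; in the actual method this estimate would be complemented by bootstrap confidence intervals and bias correction.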

  18. Identification of speech transients using variable frame rate analysis and wavelet packets.

    PubMed

    Rasetshwane, Daniel M; Boston, J Robert; Li, Ching-Chung

    2006-01-01

    Speech transients are important cues for identifying and discriminating speech sounds. Yoo et al. and Tantibundhit et al. were successful in identifying speech transients and, by emphasizing them, in improving the intelligibility of speech in noise. However, their methods are computationally intensive and unsuitable for real-time applications. This paper presents a method to identify and emphasize speech transients that combines subband decomposition by the wavelet packet transform with variable frame rate (VFR) analysis and unvoiced consonant detection. The VFR analysis is applied to each wavelet packet to define a transitivity function that describes the extent to which the wavelet coefficients of that packet are changing. Unvoiced consonant detection is used to identify unvoiced consonant intervals, and the transitivity function is amplified during these intervals. The wavelet coefficients are multiplied by the transitivity function for that packet, amplifying the coefficients localized at times when they are changing and attenuating coefficients at times when they are steady. Inverse transform of the modified wavelet packet coefficients produces a signal corresponding to speech transients similar to the transients identified by Yoo et al. and Tantibundhit et al. A preliminary implementation of the algorithm runs more efficiently than these earlier methods.
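
The transitivity-weighting idea can be illustrated with a simplified, single-band stand-in for the authors' wavelet-packet implementation. Everything below is an assumption for illustration: the change measure (a moving RMS of coefficient differences), the signal, and all parameters.

```python
import numpy as np

def transitivity_weight(coeffs, win=32):
    """Illustrative 'transitivity' measure: moving RMS of coefficient
    changes, normalized to [0, 1]; large where coefficients vary quickly,
    small where they are steady."""
    d = np.abs(np.diff(coeffs, prepend=coeffs[0]))
    kernel = np.ones(win) / win
    trans = np.sqrt(np.convolve(d**2, kernel, mode='same'))
    return trans / (trans.max() + 1e-12)

# hypothetical signal: a steady tone with an abrupt transient burst
fs = 8000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 200 * t)
x[4000:4200] += 2 * np.sin(2 * np.pi * 2000 * t[4000:4200])   # transient

w = transitivity_weight(x)
emphasized = w * x      # amplify changing (transient) parts, attenuate steady ones
```

Multiplying by the weight plays the role of the paper's coefficient scaling: the burst region, where coefficients change fastest, receives weights near 1, while the steady tone is attenuated.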

  19. The Medical Duty Officer: An Attempt to Mitigate the Ambulance At-Hospital Interval

    PubMed Central

    Halliday, Megan H.; Bouland, Andrew J.; Lawner, Benjamin J.; Comer, Angela C.; Ramos, Daniel C.; Fletcher, Mark

    2016-01-01

    Introduction A lack of coordination between emergency medical services (EMS), emergency departments (ED) and systemwide management has contributed to extended ambulance at-hospital times at local EDs. In an effort to improve communication within the local EMS system, the Baltimore City Fire Department (BCFD) placed a medical duty officer (MDO) in the fire communications bureau. It was hypothesized that any real-time intervention suggested by the MDO would be manifested in a decrease in the EMS at-hospital time. Methods The MDO was implemented on November 11, 2013. A senior EMS paramedic was assigned to the position and was placed in the fire communication bureau from 9 a.m. to 9 p.m., seven days a week. We defined the pre-intervention period as August 2013 – October 2013 and the post-intervention period as December 2013 – February 2014. We also compared the post-intervention period to the “seasonal match control” one year earlier to adjust for seasonal variation in EMS volume. The MDO was tasked with the prospective management of city EMS resources through intensive monitoring of unit availability and hospital ED traffic. The MDO could suggest alternative transport destinations in the event of ED crowding. We collected and analyzed data from BCFD computer-aided dispatch (CAD) system for the following: ambulance response times, ambulance at-hospital interval, hospital diversion and alert status, and “suppression wait time” (defined as the total time suppression units remained on scene until ambulance arrival). The data analysis used a pre/post intervention design to examine the MDO impact on the BCFD EMS system. Results There were a total of 15,567 EMS calls during the pre-intervention period, 13,921 in the post-intervention period and 14,699 in the seasonal match control period one year earlier. 
    The average at-hospital time decreased by 1.35 minutes from the pre- to post-intervention period and by 4.53 minutes from the pre-intervention to the seasonal match control period, representing a statistically significant decrease in this interval. There was also a statistically significant decrease in hospital alert time (a decrease of approximately 1,700 hours from the pre- to post-intervention period) and in suppression wait time (a decrease of less than one minute both from pre- to post-intervention and from pre-intervention to seasonal match control periods). The decrease in ambulance response time was not statistically significant. Conclusion Proactive deployment of a designated MDO was associated with a small, contemporaneous reduction in at-hospital time within an urban EMS jurisdiction. This project emphasized the importance of better communication between EMS systems and area hospitals as well as uniform reporting of variables for future iterations of this and similar projects. PMID:27625737

  20. Birth order and postpartum psychiatric disorders.

    PubMed

    Munk-Olsen, Trine; Jones, Ian; Laursen, Thomas Munk

    2014-05-01

    Primiparity is a well-established and significant risk factor for postpartum psychosis and especially bipolar affective disorders. However, no studies have, to our knowledge, quantified the risk of psychiatric disorders after the first, second, or subsequent births. The overall aim of the present study was to study the risk of first-time psychiatric episodes requiring inpatient treatment after the birth of the first, second, or third child. A cohort comprising 750,127 women was defined using information from Danish population registries. Women were followed individually from the date of birth of their first, second, or third child through the following 12 months over the period 1970-2011. The outcome of interest was defined as first-time admissions to a psychiatric hospital with any type of psychiatric disorder. Women who had a first psychiatric episode which required inpatient treatment after their first (n = 1,327), second (n = 735), or third (n = 238) delivery were included. The highest risk was found in primiparous mothers 10-19 days postpartum [relative risk (RR) = 8.65; 95% confidence interval (CI): 6.89-10.85]. After the second birth, the highest risk was at 60-89 days postpartum (RR = 2.01; 95% CI: 1.52-2.65), and there was no increased risk after the third birth. The effect of primiparity was strongest for bipolar disorders. Primiparity is a significant risk factor for experiencing a first-time episode with a psychiatric disorder, especially bipolar disorders. A second birth was associated with a smaller risk, and there was no increased risk after the third birth. The risk of postpartum episodes after the second delivery increased with increasing inter-pregnancy intervals, a result which warrants further investigation. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  1. An investigation of routes to cancer diagnosis in 10 international jurisdictions, as part of the International Cancer Benchmarking Partnership: survey development and implementation

    PubMed Central

    Weller, David; Vedsted, Peter; Anandan, Chantelle; Zalounina, Alina; Fourkala, Evangelia Ourania; Desai, Rakshit; Liston, William; Jensen, Henry; Barisic, Andriana; Gavin, Anna; Grunfeld, Eva; Lambe, Mats; Law, Rebecca-Jane; Malmberg, Martin; Neal, Richard D; Kalsi, Jatinderpal; Turner, Donna; White, Victoria; Bomb, Martine

    2016-01-01

    Objectives This paper describes the methods used in the International Cancer Benchmarking Partnership Module 4 Survey (ICBPM4), which examines time intervals and routes to cancer diagnosis in 10 jurisdictions. We present the study design, including the definition and measurement of time intervals, identification of patients with cancer, questionnaire development, data management and analyses. Design and setting Recruitment of participants to the ICBPM4 survey is based on cancer registries in each jurisdiction. Questionnaires draw on previous instruments and have been through a process of cognitive testing and piloting in three jurisdictions, followed by standardised translation and adaptation. Data analysis focuses on comparing differences in time intervals and routes to diagnosis in the jurisdictions. Participants Our target is 200 patients with symptomatic breast, lung, colorectal and ovarian cancer in each jurisdiction. Patients are approached directly or via their primary care physician (PCP). Patients’ PCPs and cancer treatment specialists (CTSs) are surveyed, and ‘data rules’ are applied to combine and reconcile conflicting information. Where CTS information is unavailable, audit information is sought from treatment records and databases. Main outcomes Reliability testing of the patient questionnaire showed that agreement was complete (κ=1) in four items and substantial (κ=0.8, 95% CI 0.333 to 1) in one item. The identification of eligible patients is sufficient to meet the targets for breast, lung and colorectal cancer. Initial patient and PCP survey response rates from the UK and Sweden are comparable with similar published surveys. Data collection was completed in early 2016 for all cancer types. Conclusion An international questionnaire-based survey of patients with cancer, PCPs and CTSs has been developed and launched in 10 jurisdictions. 
ICBPM4 will help to further understand international differences in cancer survival by comparing time intervals and routes to cancer diagnosis. PMID:27456325

  2. Real-time contrast ultrasound muscle perfusion imaging with intermediate-power imaging coupled with acoustically durable microbubbles.

    PubMed

    Seol, Sang-Hoon; Davidson, Brian P; Belcik, J Todd; Mott, Brian H; Goodman, Reid M; Ammi, Azzdine; Lindner, Jonathan R

    2015-06-01

    There is growing interest in limb contrast-enhanced ultrasound (CEU) perfusion imaging for the evaluation of peripheral artery disease. Because of low resting microvascular blood flow in skeletal muscle, signal enhancement during limb CEU is prohibitively low for real-time imaging. The aim of this study was to test the hypothesis that this obstacle can be overcome by intermediate- rather than low-power CEU when performed with an acoustically resilient microbubble agent. Viscoelastic properties of Definity and Sonazoid were assessed by measuring bulk modulus during incremental increases in ambient pressure to 200 mm Hg. Comparison of in vivo microbubble destruction and signal enhancement at a mechanical index (MI) of 0.1 to 0.4 was performed by sequential reduction in pulsing interval from 10 to 0.05 sec during limb CEU at 7 MHz in mice and 1.8 MHz in dogs. Destruction was also assessed by broadband signal generation during passive cavitation detection. Real-time CEU perfusion imaging with destruction-replenishment was then performed at 1.8 MHz in dogs using an MI of 0.1, 0.2, or 0.3. Sonazoid had a higher bulk modulus than Definity (66 ± 12 vs 29 ± 2 kPa, P = .02) and exhibited less inertial cavitation (destruction) at MIs ≥ 0.2. On in vivo CEU, maximal signal intensity increased incrementally with MI for both agents and was equivalent between agents except at an MI of 0.1 (60% and 85% lower for Sonazoid at 7 and 1.8 MHz, respectively, P < .05). However, on progressive shortening of the pulsing interval, Definity was nearly completely destroyed at MIs ≥ 0.2 at 1.8 and 7 MHz, whereas Sonazoid was destroyed only at 1.8 MHz at MIs ≥ 0.3. As a result, real-time CEU perfusion imaging demonstrated approximately fourfold greater enhancement for Sonazoid at an MI of 0.3 to 0.4. Robust signal enhancement during real-time CEU perfusion imaging of the limb is possible when using intermediate-power imaging coupled with a durable microbubble contrast agent. 
Copyright © 2015 American Society of Echocardiography. All rights reserved.

  3. Preliminary evaluation of flood frequency relations in the urban areas of Memphis, Tennessee

    USGS Publications Warehouse

    Boning, Charles W.

    1977-01-01

    A storm-runoff relation for streams in the urban areas of Memphis was determined by a statistical evaluation of 59 flood discharges from 19 gaging stations. These flood discharges were related to drainage area, percent imperviousness of the drainage basin, and rainfall occurring over 120-minute periods. The defined relation is of the form Q = cA^0.777 (IMP + 1)^0.227 (I120)^0.539, where Q is flood discharge in cfs, c is a constant, A is drainage area in square miles, IMP is percent imperviousness in the basin, and I120 is rainfall, in inches, over a 120-minute time period. The defined relation was used to synthesize sets of annual flood peaks for drainage basins ranging from 0.05 square miles to 10 square miles and imperviousness ranging from 0 to 80 percent for the period of rainfall record at Memphis. From these series of flood peaks, frequency relations were defined and presented for 2-, 5-, 10-, 25-, 50-, and 100-year recurrence intervals.
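
A multiplicative relation of the form Q = c·A^0.777·(IMP + 1)^0.227·(I120)^0.539 can be evaluated directly; the coefficient c below is a hypothetical placeholder, not a value taken from the report.

```python
def flood_discharge(A, IMP, I120, c=100.0):
    """Flood discharge Q (cfs) from drainage area A (square miles),
    percent imperviousness IMP, and 120-minute rainfall I120 (inches).
    The coefficient c is hypothetical; only the exponents follow the
    form of the relation quoted in the text."""
    return c * A**0.777 * (IMP + 1)**0.227 * I120**0.539

q_pervious = flood_discharge(1.0, 0, 2.0)     # 1-sq-mi basin, fully pervious
q_urban = flood_discharge(1.0, 80, 2.0)       # same basin at 80% imperviousness
```

Because every exponent is positive, discharge grows monotonically with basin size, imperviousness, and rainfall, which is what makes the relation usable for synthesizing annual peaks across the ranges studied.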

  4. The third-stimulus temporal discrimination threshold: focusing on the temporal processing of sensory input within primary somatosensory cortex.

    PubMed

    Leodori, Giorgio; Formica, Alessandra; Zhu, Xiaoying; Conte, Antonella; Belvisi, Daniele; Cruccu, Giorgio; Hallett, Mark; Berardelli, Alfredo

    2017-10-01

    The somatosensory temporal discrimination threshold (STDT) has been used in recent years to investigate time processing of sensory information, but little is known about the physiological correlates of somatosensory temporal discrimination. The objective of this study was to investigate whether the time interval required to discriminate between two stimuli varies according to the number of stimuli in the task. We used the third-stimulus temporal discrimination threshold (ThirdDT), defined as the shortest time interval at which an individual distinguishes a third stimulus following a pair of stimuli delivered at the STDT. The STDT and ThirdDT were assessed in 31 healthy subjects. In a subgroup of 10 subjects, we evaluated the effects of the stimuli intensity on the ThirdDT. In a subgroup of 16 subjects, we evaluated the effects of S1 continuous theta-burst stimulation (S1-cTBS) on the STDT and ThirdDT. Results show that ThirdDT is shorter than STDT. We found a positive correlation between STDT and ThirdDT values. As long as the stimulus intensity was within the perceivable and painless range, it did not affect ThirdDT values. S1-cTBS significantly affected both STDT and ThirdDT, although the latter was affected to a greater extent and for a longer period of time. We conclude that the interval needed to discriminate between time-separated tactile stimuli is related to the number of stimuli used in the task. STDT and ThirdDT are encoded in S1, probably by a shared tactile temporal encoding mechanism whose performance rapidly changes during the perception process. ThirdDT is a new method to measure somatosensory temporal discrimination. NEW & NOTEWORTHY To investigate whether the time interval required to discriminate between stimuli varies according to changes in the stimulation pattern, we used the third-stimulus temporal discrimination threshold (ThirdDT). 
We found that the somatosensory temporal discrimination acuity varies according to the number of stimuli in the task. The ThirdDT is a new method to measure somatosensory temporal discrimination and a possible index of inhibitory activity at the S1 level. Copyright © 2017 the American Physiological Society.

  5. A numerical study of the laminar necklace vortex system and its effect on the wake for a circular cylinder

    NASA Astrophysics Data System (ADS)

    Kirkil, Gokhan; Constantinescu, George

    2014-11-01

    Large Eddy Simulation is used to investigate the structure of the laminar horseshoe vortex (HV) system and the dynamics of the necklace vortices as they fold around the base of a circular cylinder mounted on the flat bed of an open channel for Reynolds numbers defined with the cylinder diameter, D, smaller than 4,460. The study concentrates on the analysis of the structure of the HV system in the periodic breakaway sub-regime, which is characterized by the formation of three main necklace vortices. For the relatively shallow flow conditions considered in this study (H/D ≈ 1, where H is the channel depth), at times, the disturbances induced by the legs of the necklace vortices do not allow the separated shear layers (SSLs) on the two sides of the cylinder to interact in a way that allows the vorticity redistribution mechanism to lead to the formation of a new wake roller. As a result, the shedding of large-scale rollers in the turbulent wake is suppressed for relatively long periods of time. Simulation results show that the wake structure changes randomly between time intervals when large-scale rollers are forming and are convected in the wake (von Karman regime), and time intervals when the rollers do not form.

  6. Oversampling of digitized images. [effects on interpolation in signal processing]

    NASA Technical Reports Server (NTRS)

    Fischel, D.

    1976-01-01

    Oversampling is defined as sampling with a device whose characteristic width is greater than the interval between samples. This paper shows why oversampling should be avoided and discusses the limitations in data processing if circumstances dictate that oversampling cannot be circumvented. Principally, oversampling should not be used to provide interpolating data points. Rather, the time spent oversampling should be used to obtain more signal with less relative error, and the Sampling Theorem should be employed to provide any desired interpolated values. The concepts are applicable to single-element and multielement detectors.
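
The recommended workflow, sampling at an adequate rate without oversampling and then interpolating via the Sampling Theorem, corresponds to Whittaker-Shannon sinc interpolation. A minimal sketch with assumed signal parameters:

```python
import numpy as np

def sinc_interp(samples, fs, t_new):
    """Reconstruct a band-limited signal at arbitrary times t_new from
    uniform samples taken at rate fs, via x(t) = sum_n x[n] sinc(fs*t - n)."""
    n = np.arange(len(samples))
    return np.array([np.sum(samples * np.sinc(fs * t - n)) for t in t_new])

fs = 100.0                        # sampling rate, Hz (assumed)
n = np.arange(200)
x = np.sin(2 * np.pi * 5 * n / fs)   # 5 Hz tone, well below the 50 Hz Nyquist limit

t_new = np.array([0.505])         # midway between samples 50 and 51
approx = sinc_interp(x, fs, t_new)[0]
exact = np.sin(2 * np.pi * 5 * 0.505)
```

With a finite record the sum is truncated, so the interpolated value matches the true signal only up to a small truncation error, which is smallest away from the record edges.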

  7. A novel tool for continuous fracture aftercare - Clinical feasibility and first results of a new telemetric gait analysis insole.

    PubMed

    Braun, Benedikt J; Bushuven, Eva; Hell, Rebecca; Veith, Nils T; Buschbaum, Jan; Holstein, Joerg H; Pohlemann, Tim

    2016-02-01

    Weight bearing after lower extremity fractures still remains a highly controversial issue. Even in ankle fractures, the most common lower extremity injury, no standard aftercare protocol has been established. Average non-weight-bearing times range from 0 to 7 weeks, with standardised radiological healing controls at fixed time intervals. Recent literature calls for patient-adapted aftercare protocols based on individual fracture and load scenarios. We show the clinical feasibility and first results of a new, insole-embedded gait analysis tool for continuous monitoring of gait, load and activity. Ten patients were monitored with a new, independent gait analysis insole for up to 3 months postoperatively. Strict 20 kg partial weight bearing was ordered for 6 weeks. Overall activity, load spectrum, ground reaction forces, clinical scoring and general health data were recorded and correlated. Statistical analysis with power analysis, t-test and Spearman correlation was performed. Only one patient completely adhered to the set weight bearing limit. The average time over the limit was 374 minutes. Based on the parameters load, activity, gait time over 20 kg weight bearing and maximum ground reaction force, high and low performers were defined after 3 weeks. A significant difference in time to painless full weight bearing between high and low performers was shown. Correlation analysis revealed a significant correlation between weight bearing and clinical scoring as well as pain (American Orthopaedic Foot and Ankle Society (AOFAS) Score rs=0.74; Olerud-Molander Score rs=0.93; VAS pain rs=-0.95). Early, continuous gait analysis is able to define aftercare performers with significant differences in time to full painless weight bearing where clinical or radiographic controls could not. Patient compliance to standardised weight bearing limits and protocols is low. Highly individual rehabilitation patterns were seen in all patients. 
Aftercare protocols should be adjusted to real-time patient conditions, rather than fixed intervals and limits. With a real-time measuring device, high performers could be identified and influenced towards optimal healing conditions early, while low performers could be recognised and missing healing influences corrected according to patient condition. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Maternal Asian ethnicity and the risk of anal sphincter injury.

    PubMed

    Davies-Tuck, Miranda; Biro, Mary-Anne; Mockler, Joanne; Stewart, Lynne; Wallace, Euan M; East, Christine

    2015-03-01

    To examine associations between maternal Asian ethnicity (South Asian and South East/East Asian) and anal sphincter injury. Retrospective cross-sectional study, comparing outcomes for Asian women with those of Australian and New Zealand women. A large metropolitan maternity service in Victoria, Australia. Australian/New Zealand, South Asian and South East/East Asian women who had a singleton vaginal birth from 2006 to 2012. The relation between maternal ethnicity and anal sphincter injury was assessed by logistic regression, adjusting for potential confounders. Anal sphincter injury was defined as a third or fourth degree tear (with or without episiotomy). Among 32,653 vaginal births there was a significant difference in the rate of anal sphincter injury by maternal region of birth (p < 0.001). After adjustment for confounders, nulliparous women born in South Asia and South East/East Asia were 2.6 (95% confidence interval 2.2-3.3; p < 0.001) and 2.1 (95% confidence interval 1.7-2.5; p < 0.001) times more likely to sustain an anal sphincter injury than Australian/New Zealand women, respectively. Parous women born in South Asia and South East/East Asia were 2.4 (95% confidence interval 1.8-3.2; p < 0.001) and 2.0 (95% confidence interval 1.5-2.7; p < 0.001) times more likely to sustain an anal sphincter injury than Australian/New Zealand women, respectively. There are ethnic differences in the rates of anal sphincter injury not fully explained by known risk factors for such trauma. This may have implications for care provision. © 2014 Nordic Federation of Societies of Obstetrics and Gynecology.

  9. Characterization of progesterone profiles in fall-calving Norwegian Red cows.

    PubMed

    Garmo, R T; Martin, A D; Thuen, E; Havrevoll, Ø; Steinshamn, H; Prestløkken, E; Randby, A; Eknaes, M; Waldmann, A; Reksen, O

    2009-10-01

    Progesterone profiles in Norwegian Red cows were categorized, and associations between the occurrence of irregularities in the profiles and the commencement of luteal activity were investigated. The cows were managed in 3 feeding trials from 1994 to 2001 and from 2005 to 2008 at the Norwegian University of Life Sciences. The cows were followed from calving, and the milk samples collected represented 502 lactations from 302 cows. Milk samples for progesterone analysis were taken 3 times weekly from 1994 throughout 1998 and from 2005 to 2008 and 2 times weekly from 1999 to 2001. Commencement of luteal activity was defined as the first day of 2 consecutive measurements of progesterone concentration ≥3 ng/mL not earlier than 10 d after calving. Delayed ovulation type I was defined as consistently low progesterone concentration, <3 ng/mL for ≥50 d postpartum. Delayed ovulation type II was defined as prolonged interluteal interval with milk progesterone measurements <3 ng/mL for ≥12 d between 2 luteal phases. Persistent corpus luteum (PCL) type I was defined as delayed luteolysis with milk progesterone ≥3 ng/mL for ≥19 d during the first estrous cycle postpartum. Persistent corpus luteum type II was defined as delayed luteolysis with milk progesterone ≥3 ng/mL for ≥19 d during subsequent estrous cycles before first artificial insemination. Delayed ovulation type I was present in 14.7%, delayed ovulation type II in 2.8%, PCL type I in 6.7%, and PCL type II in 3.3% of the profiles. Commencement of luteal activity was related to milk yield, parity, PCL type I, and the summated occurrence of PCL type I and II. The least squares means for the interval to commencement of luteal activity were 24.2 d when PCL type I and II were present and 29.5 d when PCL type I and II were absent.
The likelihood of pregnancy to first service was not affected in cows with a history of PCL when artificial insemination was carried out at progesterone concentrations <3 ng/mL (i.e., during estrus); however, cows that had experienced PCL were more likely to be inseminated during a luteal phase. The occurrence of delayed ovulation and PCL in Norwegian Red cows was less than that reported in most other dairy populations.

  10. Preharvest Interval Periods and their relation to fruit growth stages and pesticide formulations.

    PubMed

    Alister, Claudio; Araya, Manuel; Becerra, Kevin; Saavedra, Jorge; Kogan, Marcelo

    2017-04-15

    The aim of this study was to evaluate the effect of pesticide formulations and fruit growth stages on the Pre-harvest Interval Period (PHI). Results showed that pesticide formulations did not affect the initial deposit and dissipation rate. However, the fruit growth stage at the application time showed a significant effect on the above-mentioned parameters. For each one-millimeter increase in fruit diameter, pesticide dissipation rates were reduced by 0.033 mg kg⁻¹ day⁻¹ (R² = 0.87; p < 0.001) for grapes and by 0.014 mg kg⁻¹ day⁻¹ (R² = 0.85; p < 0.001) for apples. The relation between solar radiation, air humidity and temperature, and pesticide dissipation rates was dependent on fruit type. PHI could change according to the application time, because of the initial amount of pesticide deposited on the fruit and the change in dissipation rates. Because Maximum Residue Levels are becoming more restrictive, it is increasingly important to consider fruit growth stage effects on pesticide dissipation when performing studies to define PHI. Copyright © 2016. Published by Elsevier Ltd.
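    The dependence of the PHI on the initial deposit and the dissipation rate follows from first-order decay kinetics. The sketch below is a minimal illustration of that relationship; the function name and numeric inputs are invented for the example and are not taken from the study:

```python
import math

def days_to_mrl(c0, k, mrl):
    """Days until a residue decays from an initial deposit c0 to the Maximum
    Residue Level under first-order kinetics: c(t) = c0 * exp(-k*t), so
    t = ln(c0/mrl) / k."""
    if c0 <= mrl:
        return 0.0
    return math.log(c0 / mrl) / k

# Illustrative values (not the study's data): 2.0 mg/kg initial deposit,
# rate constant 0.15/day, MRL of 0.5 mg/kg.
phi = days_to_mrl(2.0, 0.15, 0.5)
```

    A slower dissipation rate constant k, as the study reports for applications on larger fruit, directly lengthens the computed interval, which is why the growth stage at application matters for defining the PHI.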

  11. Group selections among laboratory populations of Tribolium.

    PubMed

    Wade, M J

    1976-12-01

    Selection at the population level or group selection is defined as genetic change that is brought about or maintained by the differential extinction and/or proliferation of populations. Group selection for both increased and decreased adult population size was carried out among laboratory populations of Tribolium castaneum at 37-day intervals. The effect of individual selection within populations on adult population size was evaluated in an additional control series of populations. The response in the group selection treatments occurred rapidly, within three or four generations, and was large in magnitude, at times differing from the controls by over 200%. This response to selection at the populational level occurred despite strong individual selection which caused a decline in the mean size of the control populations from over 200 adults to near 50 adults in nine 37-day intervals. "Assay" experiments indicated that selective changes in fecundity, developmental time, body weight, and cannibalism rates were responsible in part for the observed treatment differences in adult population size. These findings have implications in terms of speciation in organisms whose range is composed of many partially isolated local populations.

  12. Stochastic process approximation for recursive estimation with guaranteed bound on the error covariance

    NASA Technical Reports Server (NTRS)

    Menga, G.

    1975-01-01

    An approach is proposed for the design of approximate, fixed-order, discrete-time realizations of stochastic processes from the output covariance over a finite time interval. No restrictive assumptions are imposed on the process; it can be nonstationary and lead to a high-dimension realization. Classes of fixed-order models are defined, having the joint covariance matrix of the combined vector of the outputs in the interval of definition greater than or equal to the process covariance (the difference matrix is nonnegative definite). The design is achieved by minimizing, in one of those classes, a measure of the approximation between the model and the process, evaluated by the trace of the difference of the respective covariance matrices. Models belonging to these classes have the notable property that, under the same measurement system and estimator structure, the output estimation error covariance matrix computed on the model is an upper bound of the corresponding covariance on the real process. An application of the approach is illustrated by the modeling of random meteorological wind profiles from the statistical analysis of historical data.
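    The class constraint and design criterion described in this abstract can be stated compactly. The notation here is an assumption for illustration, with Σ the joint output covariance of the process and Σ̂ that of the fixed-order model over the interval of definition:

```latex
% Admissible class: the model's joint output covariance dominates the process's
\hat{\Sigma} - \Sigma \succeq 0 \quad \text{(nonnegative definite)}

% Design: minimize the approximation measure over the admissible class
\min \; J = \operatorname{tr}\!\left(\hat{\Sigma} - \Sigma\right)
```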

  13. Pediatric magnet ingestions: the dark side of the force.

    PubMed

    Brown, Julie C; Otjen, Jeffrey P; Drugas, George T

    2014-05-01

    Pediatric magnet ingestions are increasing. Commercial availability of rare-earth magnets poses a serious health risk. This study defines incidence, characteristics, and management of ingestions over time. Cases were identified by searching radiology reports from June 2002 to December 2012 at a children's hospital and verified by chart and imaging review. Relative risk (RR) regressions determined changes in incidence and interventions over time. In all, 98% of ingestions occurred since 2006; 57% involved multiple magnets. Median age was 8 years (range 0 to 18); 0% of single and 56% of multiple ingestions required intervention. Compared with 2007 to 2009, ingestions increased from 2010 to 2012 (RR = 1.9, 95% confidence interval 1.2 to 3.0). Intervention proportion was unchanged (RR = 0.94, 95% confidence interval 0.4 to 2.2). Small spherical magnets comprised 26.8% of ingestions since 2010; 86% involved multiple magnets and 47% required intervention. Pediatric magnet ingestions and interventions have increased. Multiple ingestions prompt more imaging and surgical interventions. Magnet safety standards are needed to decrease risk to children. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. Measuring the RC time constant with Arduino

    NASA Astrophysics Data System (ADS)

    Pereira, N. S. A.

    2016-11-01

    In this work we use the Arduino UNO R3 open source hardware platform to assemble an experimental apparatus for the measurement of the time constant of an RC circuit. With adequate programming, the Arduino is used as a signal generator, a data acquisition system and a basic signal visualisation tool. Theoretical calculations are compared with direct observations from an analogue oscilloscope. Data processing and curve fitting are performed on a spreadsheet. The results obtained for the six RC test circuits are within the expected interval of values defined by the tolerance of the components. The hardware and software prove to be adequate for the proposed measurements and therefore adaptable to a laboratory teaching and learning context.
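    The spreadsheet curve-fitting step can be sketched as a log-linear least-squares fit of the capacitor discharge curve V(t) = V0·exp(-t/RC). The component values and the noiseless synthetic samples below are assumptions for illustration, not the article's measured data:

```python
import math

# Assumed test circuit: R = 10 kΩ, C = 100 nF, so tau = R*C = 1 ms.
R, C = 10_000.0, 100e-9
tau_true = R * C
ts = [i * 1e-4 for i in range(1, 30)]             # sample times (s)
vs = [5.0 * math.exp(-t / tau_true) for t in ts]  # discharge voltages (V)

# ln V = ln V0 - t/tau is linear in t, so the least-squares slope is -1/tau.
n = len(ts)
mean_t = sum(ts) / n
mean_lnv = sum(math.log(v) for v in vs) / n
slope = (sum((t - mean_t) * (math.log(v) - mean_lnv) for t, v in zip(ts, vs))
         / sum((t - mean_t) ** 2 for t in ts))
tau_fit = -1.0 / slope
```

    On real Arduino samples the same fit would be applied to the logged ADC readings, and the recovered time constant compared against the band defined by the component tolerances.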

  15. Early-stage chunking of finger tapping sequences by persons who stutter and fluent speakers.

    PubMed

    Smits-Bandstra, Sarah; De Nil, Luc F

    2013-01-01

    This research note explored the hypothesis that chunking differences underlie the slow finger-tap sequencing performance reported in the literature for persons who stutter (PWS) relative to fluent speakers (PNS). Early-stage chunking was defined as an immediate and spontaneous tendency to organize a long sequence into pauses for motor planning and chunks of fluent motor performance. A previously published study in which 12 PWS and 12 matched PNS practised a 10-item finger tapping sequence 30 times was examined. Both groups significantly decreased the duration of between-chunk intervals (BCIs) and within-chunk intervals (WCIs) over practice. PNS had significantly shorter WCIs relative to PWS, but minimal differences between groups were found for the number or duration of BCIs. Results imply that sequencing differences found between PNS and PWS may be due to differences in automatizing movements within chunks or retrieving chunks from memory rather than chunking per se.

  16. Determination of the optimal atrioventricular interval in sick sinus syndrome during DDD pacing.

    PubMed

    Kato, Masaya; Dote, Keigo; Sasaki, Shota; Goto, Kenji; Takemoto, Hiroaki; Habara, Seiji; Hasegawa, Daiji; Matsuda, Osamu

    2005-09-01

    Although the AAI pacing mode has been shown to be electromechanically superior to the DDD pacing mode in sick sinus syndrome (SSS), there is evidence suggesting that during AAI pacing the presence of a natural ventricular activation pattern is not enough for hemodynamic benefit to occur. Myocardial performance index (MPI) is a simply measurable Doppler-derived index of combined systolic and diastolic myocardial performance. The aim of this study was to investigate whether the AAI pacing mode is electromechanically superior to the DDD mode in patients with SSS by using the Doppler-derived MPI. Thirty-nine SSS patients with dual-chamber pacing devices were evaluated using Doppler echocardiography in AAI mode and DDD mode. The optimal atrioventricular (AV) interval in DDD mode was determined and the atrial stimulus-R interval was measured in AAI mode. The ratio of the atrial stimulus-R interval to the optimal AV interval was defined as the relative AV interval (rAVI) and the ratio of MPI in AAI mode to that in DDD mode was defined as the relative MPI (rMPI). The rMPI was significantly correlated with the atrial stimulus-R interval and the rAVI (r = 0.57, P = 0.0002, and r = 0.67, P < 0.0001, respectively). A cutoff point of 1.73 for rAVI provided optimum sensitivity and specificity for rMPI >1 based on receiver operator curves. Even though the intrinsic AV conduction is moderately prolonged, some SSS patients with dual-chamber pacing devices benefit from ventricular pacing with an optimal AV interval. MPI is useful to determine the optimal pacing mode in acute experiments.

  17. Geographic variation in Northwest Atlantic fin whale (Balaenoptera physalus) song: implications for stock structure assessment.

    PubMed

    Delarue, Julien; Todd, Sean K; Van Parijs, Sofie M; Di Iorio, Lucia

    2009-03-01

    Passive acoustic data are increasingly being used as a tool for helping to define marine mammal populations and stocks. Fin whale (Balaenoptera physalus) songs present a unique opportunity to determine interstock differences. Their highly stereotyped interpulse interval has been shown to vary between geographic areas and to remain stable over time in some areas. In this study the structure of songs recorded at two geographically close feeding aggregations in the Gulf of St. Lawrence (GSL) and Gulf of Maine (GoM) was compared. Recordings were made from September 2005 through February 2006 in the GSL and intermittently between January 2006 and September 2007 at two locations in the GoM. 6257 pulse intervals corresponding to 19 GSL and 29 GoM songs were measured to characterize songs from both areas. Classification trees showed that GSL songs differ significantly from those in the GoM. The results are consistent with those derived from other stock structure assessment methodologies, such as chemical signature and photoidentification analysis, suggesting that fin whales in these areas may form separate management stocks. Song structure analysis could therefore provide a useful and cost-efficient tool for defining conservation units over temporal and geographical scales relevant to management objectives in fin whales.

  18. Incidence, Determinants, and Outcomes of Coronary Perforation During Percutaneous Coronary Intervention in the United Kingdom Between 2006 and 2013: An Analysis of 527 121 Cases From the British Cardiovascular Intervention Society Database.

    PubMed

    Kinnaird, Tim; Kwok, Chun Shing; Kontopantelis, Evangelos; Ossei-Gerning, Nicholas; Ludman, Peter; deBelder, Mark; Anderson, Richard; Mamas, Mamas A

    2016-08-01

    As coronary perforation (CP) is a rare but serious complication of percutaneous coronary intervention (PCI), the current evidence base is limited to small series. Using a national PCI database, the incidence, predictors, and outcomes of CP as a complication of PCI were defined. Data were prospectively collected and retrospectively analyzed from the British Cardiovascular Intervention Society data set on all PCI procedures performed in England and Wales between 2006 and 2013. Multivariate logistic regressions and propensity scores were used to identify predictors of CP and its association with outcomes. In total, 1762 CPs were recorded from 527 121 PCI procedures (incidence of 0.33%). Patients with CP were more often women or older, with a greater burden of comorbidity, and underwent more complex PCI procedures. Factors predictive of CP included age per year (odds ratio [OR], 1.03; 95% confidence interval, 1.02-1.03; P<0.001), previous coronary artery bypass graft (OR, 1.44; 95% confidence interval, 1.17-1.77; P<0.001), left main (OR, 1.54; 95% confidence interval, 1.21-1.96; P<0.001), use of rotational atherectomy (OR, 2.37; 95% confidence interval, 1.80-3.11; P<0.001), and chronic total occlusion intervention (OR, 3.96; 95% confidence interval, 3.28-4.78; P<0.001). Adjusted odds of adverse outcomes were higher in patients with CP for all major adverse coronary events, including stroke, bleeding, and mortality. Emergency surgery was required in 3% of cases. Predictors of mortality in patients with CP included age, diabetes mellitus, previous myocardial infarction, renal disease, ventilatory support, use of circulatory support, glycoprotein inhibitor use, and stent type. Using a national PCI database for the first time, the incidence, predictors, and outcomes of CP were defined. Although CP as a complication of PCI occurred rarely, it was strongly associated with poor outcomes. © 2016 American Heart Association, Inc.

  19. Mammography interval and breast cancer mortality in women over the age of 75.

    PubMed

    Simon, Michael S; Wassertheil-Smoller, Sylvia; Thomson, Cynthia A; Ray, Roberta M; Hubbell, F Allan; Lessin, Lawrence; Lane, Dorothy S; Kuller, Lew H

    2014-11-01

    The purpose of this study is to evaluate the relationship between mammography interval and breast cancer mortality among older women with breast cancer. The study population included 1,914 women diagnosed with invasive breast cancer at age 75 or later during their participation in the Women's Health Initiative, with an average follow-up of 4.4 years (3.1 SD). Cause of death was based on medical record review. Mammography interval was defined as the time between the last self-reported mammogram 7 or more months prior to diagnosis, and the date of diagnosis. Multivariable adjusted hazard ratios (HR) and 95% confidence intervals (CIs) for breast cancer mortality and all-cause mortality were computed from Cox proportional hazards analyses. Prior mammograms were reported by 73.0% of women from 7 months to ≤2 years of diagnosis (referent group), 19.4% (>2 to <5 years), and 7.5% (≥5 years or no prior mammogram). Women with the longest versus shortest intervals had more poorly differentiated (28.5% vs. 22.7%), advanced stage (25.7% vs. 22.9%), and estrogen receptor negative tumors (20.9% vs. 13.1%). Compared to the referent group, women with intervals of >2 to <5 years or ≥5 years had an increased risk of breast cancer mortality (HR 1.62, 95% CI 1.03-2.54) and (HR 2.80, 95% CI 1.57-5.00), respectively, p trend = 0.0002. There was no significant relationship between mammography interval and other causes of death. These results suggest a continued role for screening mammography among women 75 years of age and older.

  20. Valid analytical performance specifications for combined analytical bias and imprecision for the use of common reference intervals.

    PubMed

    Hyltoft Petersen, Per; Lund, Flemming; Fraser, Callum G; Sandberg, Sverre; Sölétormos, György

    2018-01-01

    Background Many clinical decisions are based on comparison of patient results with reference intervals. Therefore, an estimation of the analytical performance specifications for the quality that would be required to allow sharing common reference intervals is needed. The International Federation of Clinical Chemistry (IFCC) recommended a minimum of 120 reference individuals to establish reference intervals. This number implies a certain level of quality, which could then be used for defining analytical performance specifications as the maximum combination of analytical bias and imprecision required for sharing common reference intervals; defining these specifications was the aim of this investigation. Methods Two methods were investigated for defining the maximum combination of analytical bias and imprecision that would give the same quality of common reference intervals as the IFCC recommendation. Method 1 is based on a formula for the combination of analytical bias and imprecision, and Method 2 is based on the Microsoft Excel formula NORMINV, including the fractional probability of reference individuals outside each limit and the Gaussian variables of mean and standard deviation. The combinations of normalized bias and imprecision are illustrated for both methods. The formulae are identical for Gaussian and log-Gaussian distributions. Results Method 2 gives the correct results with a constant percentage of 4.4% for all combinations of bias and imprecision. Conclusion The Microsoft Excel formula NORMINV is useful for the estimation of analytical performance specifications for both Gaussian and log-Gaussian distributions of reference intervals.
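    The NORMINV-based calculation has a direct analogue in Python's `statistics.NormalDist`. The sketch below is an illustrative reconstruction, not the authors' spreadsheet: it computes the fraction of a Gaussian reference population falling outside the original 2.5th/97.5th percentile limits once an assay adds a normalized bias and extra imprecision (both expressed in units of the reference population SD):

```python
from statistics import NormalDist

_std = NormalDist()  # standard Gaussian
LO, HI = _std.inv_cdf(0.025), _std.inv_cdf(0.975)  # conventional reference limits

def fraction_outside(bias, imprecision):
    """Fraction of results outside the original reference limits when the assay
    shifts the distribution by `bias` and widens its SD to sqrt(1 + imprecision**2)."""
    total_sd = (1.0 + imprecision ** 2) ** 0.5
    shifted = NormalDist(mu=bias, sigma=total_sd)
    return shifted.cdf(LO) + (1.0 - shifted.cdf(HI))

p_no_error = fraction_outside(0.0, 0.0)  # with no analytical error: 5% outside
```

    Performance specifications would then be the bias/imprecision combinations that keep the fraction outside at the level implied by the 120-individual recommendation.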

  1. Defining travel-associated cases of enteric fever.

    PubMed

    Freedman, Joanne; Lighton, Lorraine; Jones, Jane

    2014-01-01

    There is no internationally recognized case-definition for travel-associated enteric fever in non-endemic countries. This study describes the patterns of case reporting between 2007 and 2011 as travel-associated or not from the surveillance data in England, Wales and Northern Ireland (EWNI), before and after a change in the time component of the case-definition in January 2011. It examines in particular the role of a time frame based on the reported typical incubation period in defining a case of travel-associated enteric fever. The results showed no significant differences in the distribution of cases of enteric fever with regard to the interval between onset and UK arrival in 2011 compared with 2007-2010 (p=0.98 for typhoid and paratyphoid A); the distribution for paratyphoid B was also similar in both time periods. During 2007-2010, 93% (1730/1853) of all of the cases were classified as travel-associated compared to 94% (448/477) in 2011. This difference was not statistically significant. Changing the time component of the definition of travel-associated enteric fever did not make a significant difference to the proportion of travel-associated cases reported by investigators. Our analysis suggests that time might be subordinate to other considerations when investigators classify a case as travel-associated. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.

  2. 5-Year Reoperation Risk and Causes for Revision After Idiopathic Scoliosis Surgery.

    PubMed

    Ahmed, Syed Imraan; Bastrom, Tracey P; Yaszay, Burt; Newton, Peter O

    2017-07-01

    An actuarial "survivorship" analysis. The aim of this study was to define the incidence and cause of surgical revision 5 years after scoliosis surgery. Data on contemporary revision surgery rates after idiopathic scoliosis surgery beyond the 2 years postoperatively in the adolescent and young adult population are limited. Patients enrolled in a prospective, multicenter, idiopathic scoliosis surgical registry from 1995 to 2009 were reviewed. Any spine reoperation was defined as a "terminal event." An actuarial survivorship analysis that adjusts for patients lost to follow-up was performed to determine cumulative survival. Time intervals were defined as 0 to <3 months, 3 months to <1 year, 1 to <2 years, 2 to <5 years, and 5 to 10 years. Registry data and radiographs were reviewed and five categories for reoperation assigned: 1) implant failure and/or pseudarthrosis, 2) implant misplacement and/or prominence, 3) wound complication and/or infection, 4) residual deformity and/or progression, and 5) other. One thousand four hundred thirty-five patients from 12 sites were included. The majority were female (80%), with major thoracic curves (76% Lenke 1-4), and average age of 15 ± 2 years (10-22) at surgery. Most had posterior spinal instrumentation and fusion (81%). At this time, 75 (5.2%) patients required reoperation. Twenty-two occurred within 3 months postop, 10 more before 1 year, 12 more before 2 years, another 20 by 5 years, and 10 more after 5 years. This corresponded to an actuarial cumulative survival of 98.3% at 3 months, 97.5% at 1 year, 96.6% at 2 years, 93.9% at 5 years, and 89.8% at the final interval (5-10 yrs). Revisions for scoliosis continue to occur well after 2 years with a 5-year survivorship of 93.9%. Reasons for reoperation are not uniformly distributed over time, with implant-related issues and infection the leading cause for early revision, while late infection was the most common cause after 2 years. 
Long-term follow-up of these postoperative patients remains important. Level of Evidence: 3.
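    The actuarial survivorship computation, which adjusts for patients lost to follow-up, can be sketched as a standard life-table estimator. The cohort numbers below are invented for illustration and are not the registry data:

```python
def actuarial_survival(intervals):
    """Life-table (actuarial) estimator. Patients censored within an interval are
    assumed at risk for half of it, so the effective number at risk is n - c/2.
    Cumulative survival is the running product of per-interval survival.
    intervals: list of (n_entering, n_events, n_censored) tuples."""
    surv, curve = 1.0, []
    for n, d, c in intervals:
        at_risk = n - c / 2.0
        surv *= 1.0 - d / at_risk
        curve.append(surv)
    return curve

# Illustrative cohort: 100 patients followed over three successive intervals.
curve = actuarial_survival([(100, 2, 10), (88, 1, 20), (67, 1, 30)])
```

    The half-interval censoring adjustment is what lets the registry report cumulative survival despite patients lost to follow-up at each time point.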

  3. Safety of intravenous thrombolysis in stroke mimics: prospective 5-year study and comprehensive meta-analysis.

    PubMed

    Tsivgoulis, Georgios; Zand, Ramin; Katsanos, Aristeidis H; Goyal, Nitin; Uchino, Ken; Chang, Jason; Dardiotis, Efthimios; Putaala, Jukka; Alexandrov, Anne W; Malkoff, Marc D; Alexandrov, Andrei V

    2015-05-01

    Shortening door-to-needle time may lead to inadvertent intravenous thrombolysis (IVT) administration in stroke mimics (SMs). We sought to determine the safety of IVT in SMs using prospective, single-center data and by conducting a comprehensive meta-analysis of reported case-series. We prospectively analyzed consecutive IVT-treated patients during a 5-year period at a tertiary care stroke center. A systematic review and meta-analysis of case-series reporting safety of IVT in SMs and confirmed acute ischemic stroke were conducted. Symptomatic intracerebral hemorrhage was defined as imaging evidence of ICH with an National Institutes of Health Stroke scale increase of ≥4 points. Favorable functional outcome at hospital discharge was defined as a modified Rankin Scale score of 0 to 1. Of 516 consecutive IVT patients at our tertiary care center (50% men; mean age, 60±14 years; median National Institutes of Health Stroke scale, 11; range, 3-22), SMs comprised 75 cases. Symptomatic intracerebral hemorrhage occurred in 1 patient, whereas we documented no cases of orolingual edema or major extracranial hemorrhagic complications. In meta-analysis of 9 studies (8942 IVT-treated patients), the pooled rates of symptomatic intracerebral hemorrhage and orolingual edema among 392 patients with SM treated with IVT were 0.5% (95% confidence interval, 0%-2%) and 0.3% (95% confidence interval, 0%-2%), respectively. Patients with SM were found to have a significantly lower risk for symptomatic intracerebral hemorrhage compared with patients with acute ischemic stroke (risk ratio=0.33; 95% confidence interval, 0.14-0.77; P=0.010), with no evidence of heterogeneity or publication bias. Favorable functional outcome was almost 3-fold higher in patients with SM in comparison with patients with acute ischemic stroke (risk ratio=2.78; 95% confidence interval, 2.07-3.73; P<0.00001). 
Our prospective, single-center experience coupled with the findings of the comprehensive meta-analysis underscores the safety of IVT in SMs. © 2015 American Heart Association, Inc.

  4. The history of late holocene surface-faulting earthquakes on the central segments of the Wasatch fault zone, Utah

    USGS Publications Warehouse

    Duross, Christopher; Personius, Stephen; Olig, Susan S; Crone, Anthony J.; Hylland, Michael D.; Lund, William R; Schwartz, David P.

    2017-01-01

    The Wasatch fault zone (WFZ)—Utah’s longest and most active normal fault—forms a prominent eastern boundary to the Basin and Range Province in northern Utah. To provide paleoseismic data for a Wasatch Front regional earthquake forecast, we synthesized paleoseismic data to define the timing and displacements of late Holocene surface-faulting earthquakes on the central five segments of the WFZ. Our analysis yields revised histories of large (M ~7) surface-faulting earthquakes on the segments, as well as estimates of earthquake recurrence and vertical slip rate. We constrain the timing of four to six earthquakes on each of the central segments, which together yields a history of at least 24 surface-faulting earthquakes since ~6 ka. Using earthquake data for each segment, inter-event recurrence intervals range from about 0.6 to 2.5 kyr, and have a mean of 1.2 kyr. Mean recurrence, based on closed seismic intervals, is ~1.1–1.3 kyr per segment, and when combined with mean vertical displacements per segment of 1.7–2.6 m, yields mean vertical slip rates of 1.3–2.0 mm/yr per segment. These data refine the late Holocene behavior of the central WFZ; however, a significant source of uncertainty is whether structural complexities that define the segments of the WFZ act as hard barriers to ruptures propagating along the fault. Thus, we evaluate fault rupture models including both single-segment and multi-segment ruptures, and define 3–17-km-wide spatial uncertainties in the segment boundaries. These alternative rupture models and segment-boundary zones honor the WFZ paleoseismic data, take into account the spatial and temporal limitations of paleoseismic data, and allow for complex ruptures such as partial-segment and spillover ruptures. Our data and analyses improve our understanding of the complexities in normal-faulting earthquake behavior and provide geological inputs for regional earthquake-probability and seismic hazard assessments.

  5. Maternal and neonatal outcomes of antenatal anemia in a Scottish population: a retrospective cohort study.

    PubMed

    Rukuni, Ruramayi; Bhattacharya, Sohinee; Murphy, Michael F; Roberts, David; Stanworth, Simon J; Knight, Marian

    2016-05-01

    Antenatal anemia is a major public health problem in the UK, yet there is limited high quality evidence for associated poor clinical outcomes. The objectives of this study were to estimate the incidence and clinical outcomes of antenatal anemia in a Scottish population. A retrospective cohort study of 80 422 singleton pregnancies was conducted using data from the Aberdeen Maternal and Neonatal Databank between 1995 and 2012. Antenatal anemia was defined as haemoglobin ≤ 10 g/dl during pregnancy. Incidence was calculated with 95% confidence intervals and compared over time using a chi-squared test for trend. Multivariable logistic regression was used to adjust for confounding variables. Results are presented as adjusted odds ratios with 95% confidence interval. The overall incidence of antenatal anemia was 9.3 cases/100 singleton pregnancies (95% confidence interval 9.1-9.5), decreasing from 16.9/100 to 4.1/100 singleton pregnancies between 1995 and 2012 (p < 0.001). Maternal anemia was associated with antepartum hemorrhage (adjusted odds ratio 1.26, 95% confidence interval 1.17-1.36), postpartum infection (adjusted odds ratio 1.89, 95% confidence interval 1.39-2.57), transfusion (adjusted odds ratio 1.87, 95% confidence interval 1.65-2.13) and stillbirth (adjusted odds ratio 1.42, 95% confidence interval 1.04-1.94), reduced odds of postpartum hemorrhage (adjusted odds ratio 0.92, 95% confidence interval 0.86-0.98) and low birthweight (adjusted odds ratio 0.77, 95% confidence interval 0.69-0.86). No other outcomes were statistically significant. This study shows the incidence of antenatal anemia is decreasing steadily within this Scottish population. However, given that anemia is a readily correctable risk factor for major causes of morbidity and mortality in the UK, further work is required to investigate appropriate preventive measures. © 2016 Nordic Federation of Societies of Obstetrics and Gynecology.

  6. Series expansion solutions for the multi-term time and space fractional partial differential equations in two- and three-dimensions

    NASA Astrophysics Data System (ADS)

    Ye, H.; Liu, F.; Turner, I.; Anh, V.; Burrage, K.

    2013-09-01

    Fractional partial differential equations with more than one fractional derivative in time describe some important physical phenomena, such as the telegraph equation, the power law wave equation, or the Szabo wave equation. In this paper, we consider two- and three-dimensional multi-term time and space fractional partial differential equations. The multi-term time-fractional derivative is defined in the Caputo sense, whose order belongs to the interval (1,2],(2,3],(3,4] or (0, m], and the space-fractional derivative is referred to as the fractional Laplacian form. We derive series expansion solutions based on a spectral representation of the Laplacian operator on a bounded region. Some applications are given for the two- and three-dimensional telegraph equation, power law wave equation and Szabo wave equation.
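As an illustration of the notation, the Caputo derivative of order α and a representative two-dimensional multi-term time- and space-fractional telegraph equation can be written as follows (the coefficients a, b are generic placeholders, not values from the paper):

```latex
% Caputo time-fractional derivative of order \alpha, with n-1 < \alpha \le n
{}^{C}_{0}D^{\alpha}_{t}\,u(t)
  = \frac{1}{\Gamma(n-\alpha)} \int_{0}^{t}
    \frac{u^{(n)}(s)}{(t-s)^{\alpha+1-n}}\,ds

% A representative multi-term form: two Caputo terms in time and a
% fractional Laplacian in space (telegraph-type equation)
{}^{C}_{0}D^{2\alpha}_{t}u + a\,{}^{C}_{0}D^{\alpha}_{t}u
  = -b\,(-\Delta)^{\beta/2}u,
  \qquad 1 < 2\alpha \le 2,\quad 0 < \beta \le 2
```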

  7. REPORT FROM THE STS NATIONAL DATABASE WORK FORCE

    PubMed Central

    Overman, David M.; Jacobs, Jeffrey P.; Prager, Richard L.; Wright, Cameron D.; Clarke, David R.; Pasquali, Sara; O’Brien, Sean M.; Dokholyan, Rachel S.; Meehan, Paul; McDonald, Donna E.; Jacobs, Marshall L.; Mavroudis, Constantine; Shahian, David M.

    2013-01-01

    Several distinct definitions of postoperative death have been used in various quality reporting programs. Some have defined postoperative mortality as the death of any patient while still in the hospital, while others have counted all deaths occurring within a predetermined, standardized time interval after surgery. While it continues to collect mortality data using both of these individual definitions, the Society of Thoracic Surgeons (STS) believes that either alone may be inadequate. Accordingly, the STS prefers a more encompassing metric, Operative Mortality, which is defined as (1) all deaths occurring during the hospitalization in which the operation was performed, even if after 30 days; and (2) all deaths occurring after discharge from the hospital, but before the end of the thirtieth postoperative day. This manuscript provides clarification for some uncommon but important scenarios where the correct application of this definition may be problematic. PMID:23799748
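The two-part Operative Mortality definition is straightforward to encode; a minimal sketch (the function name and date-field conventions are illustrative, not from the STS database specification; a `None` discharge date stands for a patient who was never discharged):

```python
from datetime import date

def is_operative_mortality(death, discharge, surgery):
    """STS Operative Mortality: (1) any death during the index hospitalization,
    even beyond postoperative day 30; (2) any death after discharge occurring
    on or before postoperative day 30."""
    if death is None:
        return False
    if discharge is None or death <= discharge:
        return True                          # died before leaving the hospital
    return (death - surgery).days <= 30      # died after discharge, within 30 d

# In-hospital death on postoperative day 44 still counts:
is_operative_mortality(date(2020, 2, 14), None, date(2020, 1, 1))
```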

  8. Minimizing pre- and post-defibrillation pauses increases the likelihood of return of spontaneous circulation (ROSC).

    PubMed

    Sell, Rebecca E; Sarno, Renee; Lawrence, Brenna; Castillo, Edward M; Fisher, Roger; Brainard, Criss; Dunford, James V; Davis, Daniel P

    2010-07-01

    The three-phase model of ventricular fibrillation (VF) arrest suggests a period of compressions to "prime" the heart prior to defibrillation attempts. In addition, post-shock compressions may increase the likelihood of return of spontaneous circulation (ROSC). The optimal intervals for shock delivery following cessation of compressions (pre-shock interval) and resumption of compressions following a shock (post-shock interval) remain unclear. To define optimal pre- and post-defibrillation compression pauses for out-of-hospital cardiac arrest (OOHCA). All patients suffering OOHCA from VF were identified over a 1-month period. Defibrillator data were abstracted and analyzed using the combination of ECG, impedance, and audio recording. Receiver-operator curve (ROC) analysis was used to define the optimal pre- and post-shock compression intervals. Multiple logistic regression analysis was used to quantify the relationship between these intervals and ROSC. Covariates included cumulative number of defibrillation attempts, intubation status, and administration of epinephrine in the immediate pre-shock compression cycle. Cluster adjustment was performed due to the possibility of multiple defibrillation attempts for each patient. A total of 36 patients with 96 defibrillation attempts were included. The ROC analysis identified an optimal pre-shock interval of <3s and an optimal post-shock interval of <6s. Increased likelihood of ROSC was observed with a pre-shock interval <3s (adjusted OR 6.7, 95% CI 2.0-22.3, p=0.002) and a post-shock interval of <6s (adjusted OR 10.7, 95% CI 2.8-41.4, p=0.001). Likelihood of ROSC was substantially increased with the optimization of both pre- and post-shock intervals (adjusted OR 13.1, 95% CI 3.4-49.9, p<0.001). Decreasing pre- and post-shock compression intervals increases the likelihood of ROSC in OOHCA from VF.
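The threshold-selection step can be illustrated with a small Youden-index sweep over candidate cutoffs; the data below are synthetic, not the study's 96 defibrillation attempts:

```python
def best_threshold(values, outcomes):
    """Sweep candidate cutoffs and return the one maximizing Youden's
    J = sensitivity + specificity - 1, treating 'interval below cutoff'
    as the test-positive condition (shorter pause -> predicted ROSC)."""
    best_cut, best_j = None, -1.0
    for cut in sorted(set(values)):
        tp = sum(1 for v, o in zip(values, outcomes) if v < cut and o)
        fn = sum(1 for v, o in zip(values, outcomes) if v >= cut and o)
        fp = sum(1 for v, o in zip(values, outcomes) if v < cut and not o)
        tn = sum(1 for v, o in zip(values, outcomes) if v >= cut and not o)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        if sens + spec - 1 > best_j:
            best_cut, best_j = cut, sens + spec - 1
    return best_cut, best_j

# synthetic pre-shock pauses (seconds) and ROSC outcomes, not the study data
pauses = [1, 2, 2, 4, 5, 7, 8, 2, 3, 9]
rosc   = [1, 1, 1, 0, 0, 0, 0, 1, 0, 0]
cut, j = best_threshold(pauses, rosc)   # here every ROSC pause was < 3 s
```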

  9. Interval Timing Is Preserved Despite Circadian Desynchrony in Rats: Constant Light and Heavy Water Studies.

    PubMed

    Petersen, Christian C; Mistlberger, Ralph E

    2017-08-01

    The mechanisms that enable mammals to time events that recur at 24-h intervals (circadian timing) and at arbitrary intervals in the seconds-to-minutes range (interval timing) are thought to be distinct at the computational and neurobiological levels. Recent evidence that disruption of circadian rhythmicity by constant light (LL) abolishes interval timing in mice challenges this assumption and suggests a critical role for circadian clocks in short interval timing. We sought to confirm and extend this finding by examining interval timing in rats in which circadian rhythmicity was disrupted by long-term exposure to LL or by chronic intake of 25% D2O. Adult, male Sprague-Dawley rats were housed in a light-dark (LD) cycle or in LL until free-running circadian rhythmicity was markedly disrupted or abolished. The rats were then trained and tested on 15- and 30-sec peak-interval procedures, with water restriction used to motivate task performance. Interval timing was found to be unimpaired in LL rats, but a weak circadian activity rhythm was apparently rescued by the training procedure, possibly due to binge feeding that occurred during the 15-min water access period that followed training each day. A second group of rats in LL were therefore restricted to 6 daily meals scheduled at 4-h intervals. Despite a complete absence of circadian rhythmicity in this group, interval timing was again unaffected. To eliminate all possible temporal cues, we tested a third group of rats in LL by using a pseudo-randomized schedule. Again, interval timing remained accurate. Finally, rats tested in LD received 25% D2O in place of drinking water. This markedly lengthened the circadian period and caused a failure of LD entrainment but did not disrupt interval timing. These results indicate that interval timing in rats is resistant to disruption by manipulations of circadian timekeeping previously shown to impair interval timing in mice.

  10. ParaExp Using Leapfrog as Integrator for High-Frequency Electromagnetic Simulations

    NASA Astrophysics Data System (ADS)

    Merkel, M.; Niyonzima, I.; Schöps, S.

    2017-12-01

    Recently, ParaExp was proposed for the time integration of linear hyperbolic problems. It splits the time interval of interest into subintervals and computes the solution on each subinterval in parallel. The overall solution is decomposed into a particular solution defined on each subinterval with zero initial conditions and a homogeneous solution propagated by the matrix exponential applied to the initial conditions. The efficiency of the method depends on fast approximations of this matrix exponential based on recent results from numerical linear algebra. This paper deals with the application of ParaExp in combination with Leapfrog to electromagnetic wave problems in time domain. Numerical tests are carried out for a simple toy problem and a realistic spiral inductor model discretized by the Finite Integration Technique.
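The particular/homogeneous splitting can be sketched on a small linear system u' = Au + g(t); this is an assumption-laden toy (a generic forced oscillator, with `solve_ivp` standing in for the subinterval integrator rather than Leapfrog, and a dense matrix exponential instead of the fast approximations the paper relies on):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-4.0, 0.0]])   # undamped oscillator (illustrative)
def g(t):
    return np.array([0.0, np.sin(3.0 * t)])
def rhs(t, u):
    return A @ u + g(t)

u0 = np.array([1.0, 0.0])
T, K = 2.0, 4                              # horizon, number of subintervals
edges = np.linspace(0.0, T, K + 1)

# u(T) = e^{AT} u0 + sum_j e^{A(T - t_{j+1})} v_j(t_{j+1}), where each
# particular solution v_j has zero initial conditions on its subinterval
# and can be computed in parallel.
uT = expm(A * T) @ u0
for tj, tj1 in zip(edges[:-1], edges[1:]):
    vj = solve_ivp(rhs, (tj, tj1), np.zeros(2), rtol=1e-10, atol=1e-12).y[:, -1]
    uT = uT + expm(A * (T - tj1)) @ vj

# Reference: one sequential solve over the whole interval.
ref = solve_ivp(rhs, (0.0, T), u0, rtol=1e-10, atol=1e-12).y[:, -1]
```

The point of the decomposition is that the K particular solves are independent, while the homogeneous propagation is a matrix-exponential action on the initial data.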

  11. Active play and screen time in US children aged 4 to 11 years in relation to sociodemographic and weight status characteristics: a nationally representative cross-sectional analysis.

    PubMed

    Anderson, Sarah E; Economos, Christina D; Must, Aviva

    2008-10-22

    The high prevalence of childhood obesity underscores the importance of monitoring population trends in children's activity and screen time, and describing associations with child age, gender, race/ethnicity, and weight status. Our objective was to estimate the proportion of young children in the US who have low levels of active play or high levels of screen time, or who have both these behaviors, and to describe associations with age, gender, race/ethnicity, and weight status. We analyzed data collected during the National Health and Nutrition Examination Surveys 2001-2004, a US nationally representative cross-sectional study. We studied 2964 children aged 4.00 to 11.99 years. Our main outcomes were reported weekly times that the child played or exercised hard enough to sweat or breathe hard (active play), daily hours the child watched television/videos, used computers, or played computer games (screen time), and the combination of low active play and high screen time. Low active play was defined as active play 6 times or less per week. High screen time was defined as more than 2 hours per day. We accounted for the complex survey design in analyses and report proportions and 95% confidence intervals. We used Wald Chi-square to test for differences between proportions. To identify factors associated with low active play and high screen time, we used multivariate logistic regression. Of US children aged 4 to 11 years, 37.3% (95% confidence interval, 34.1% to 40.4%) had low levels of active play, 65.0% (95% CI, 61.4% to 68.5%) had high screen time, and 26.3% (95% CI, 23.8% to 28.9%) had both these behaviors. Characteristics associated with a higher probability of simultaneously having low active play and high screen time were older age, female gender, non-Hispanic black race/ethnicity, and having a BMI-for-age > or =95th percentile of the CDC growth reference. 
Many young children in the US are reported to have physical activity and screen time behaviors that are inconsistent with recommendations for healthy pediatric development. Children who are overweight, approaching adolescence, girls, and non-Hispanic blacks may benefit most from public health policies and programs aimed at these behaviors.

  12. Paleoarchean and Cambrian observations of the geodynamo in light of new estimates of core thermal conductivity

    NASA Astrophysics Data System (ADS)

    Tarduno, John; Bono, Richard; Cottrell, Rory

    2015-04-01

    Recent estimates of core thermal conductivity are larger than prior values by a factor of approximately three. These new estimates suggest that the inner core is a relatively young feature, perhaps as young as 500 million years old, and that the core-mantle heat flux required to drive the early dynamo was greater than previously assumed (Nimmo, 2015). Here, we focus on paleomagnetic studies of two key time intervals important for understanding core evolution in light of the revisions of core conductivity values. 1. Hadean to Paleoarchean (4.4-3.4 Ga). Single silicate crystal paleointensity analyses suggest a relatively strong magnetic field at 3.4-3.45 Ga (Tarduno et al., 2010). Paleointensity data from zircons of the Jack Hills (Western Australia) further suggest the presence of a geodynamo between 3.5 and 3.6 Ga (Tarduno and Cottrell, 2014). We will discuss our efforts to test for the absence/presence of the geodynamo in older Eoarchean and Hadean times. 2. Ediacaran to Early Cambrian (~635-530 Ma). Disparate directions seen in some paleomagnetic studies from this time interval have been interpreted as recording inertial interchange true polar wander (IITPW). Recent single silicate paleomagnetic analyses fail to find evidence for IITPW; instead a reversing field overprinted by secondary magnetizations is defined (Bono and Tarduno, 2015). Preliminary analyses suggest the field may have been unusually weak. We will discuss our on-going tests of the hypothesis that this interval represents the time of onset of inner core growth. References: Bono, R.K. & Tarduno, J.A., Geology, in press (2015); Nimmo, F., Treatise Geophys., in press (2015); Tarduno, J.A., et al., Science (2010); Tarduno, J.A. & Cottrell, R.D., AGU Fall Meeting (2014).

  13. Characterization and referral patterns of ST-elevation myocardial infarction patients admitted to chest pain units rather than directly to catheterization laboratories. Data from the German Chest Pain Unit Registry.

    PubMed

    Schmidt, Frank P; Perne, Andrea; Hochadel, Matthias; Giannitsis, Evangelos; Darius, Harald; Maier, Lars S; Schmitt, Claus; Heusch, Gerd; Voigtländer, Thomas; Mudra, Harald; Gori, Tommaso; Senges, Jochen; Münzel, Thomas

    2017-03-15

    Direct transfer to the catheterization laboratory for primary percutaneous coronary intervention (PCI) is standard of care for patients with ST-segment elevation myocardial infarction (STEMI). Nevertheless, a significant number of STEMI-patients are initially treated in chest pain units (CPUs) of admitting hospitals. Thus, it is important to characterize these patients, to define why an important deviation from recommended clinical pathways occurs, and in particular to quantify the impact of this deviation on critical time intervals. 1679 STEMI patients admitted to a CPU in the period from 2010 to 2015 were enrolled in the German CPU registry (8.5% of 19,666). 55.9% of the patients were delivered by an emergency medical system (EMS), 16.1% transferred from other hospitals and 15.2% referred by a general practitioner (GP). 12.7% were self-referrals. 55% did not get a pre-hospital ECG. Compared to the EMS, referral by GPs markedly delayed critical time intervals while a pre-hospital ECG demonstrating ST-segment elevation reduced door-to-balloon time. When compared to STEMI patients (n=21,674) enrolled in the ALKK-registry, CPU-STEMI patients had a lower risk profile, their treatment in the CPU was guideline-conform and in-hospital mortality was low (1.5%). CPU-STEMI patients represent a numerically significant group because a pre-hospital ECG was not documented. Treatment in the CPU is guideline-conform and the intra-hospital mortality is low. The lack of a pre-hospital ECG and admission via the GP substantially delay critical time intervals suggesting that in patients with symptoms suggestive of an ACS, the EMS should be contacted and not the GP. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  14. Comparison of Prolonged Atrial Electromechanical Delays with Different Definitions in the Discrimination of Patients with Non-Valvular Paroxysmal Atrial Fibrillation

    PubMed Central

    Lee, Dong Hyun; Choi, Sun Young; Seo, Jeong-Min; Choi, Jae-Hyuk; Cho, Young-Rak; Park, Kyungil; Kim, Moo Hyun; Kim, Young-Dae

    2015-01-01

    Background and Objectives Previous studies have evaluated atrial electromechanical delays (AEMDs) with a number of different definitions to discriminate patients with paroxysmal atrial fibrillation (PAF) from controls without PAF. However, their discriminative values for PAF have not previously been directly compared. Subjects and Methods A total of 65 PAF patients and 130 control subjects matched for age, sex, history of hypertension, and diabetes mellitus were selected. The AEMDi and AEMDp were defined as the time intervals from the initiation of the P wave on the surface electrocardiogram to the initiation and peak of the late diastolic transmitral inflow on pulsed wave Doppler images, respectively. The AEMDim and AEMDpm were defined as the time intervals from the initiation of the P wave on the surface electrocardiogram to the initiation and peak of the late diastolic lateral mitral annular motion on tissue Doppler images, respectively. Results There were no significant differences in the clinical characteristics between the two groups. All 4 AEMDs were consistently longer in the PAF group, and proved effective in differentiating the PAF patients from the controls. The AEMDi measurement had a larger area under the curve (AUC) than the other AEMDs, left atrial volume index, and P wave amplitude. However, the AEMDp, AEMDim, and AEMDpm measurements had AUCs similar to those of the left atrial volume index and P wave amplitude. Conclusion The findings suggest that the AEMDi is better than the other AEMDs for the discrimination of PAF patients from the controls. PMID:26617650
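The AUC comparison rests on the Mann-Whitney interpretation of the area under the ROC curve; a minimal sketch with hypothetical AEMD values (not the study's measurements):

```python
def auc(cases, controls):
    """Area under the ROC curve via the Mann-Whitney statistic: the probability
    that a randomly chosen case has a longer delay than a randomly chosen
    control, with ties counted as half."""
    wins = ties = 0
    for c in cases:
        for k in controls:
            if c > k:
                wins += 1
            elif c == k:
                ties += 1
    return (wins + 0.5 * ties) / (len(cases) * len(controls))

# hypothetical AEMD values (ms) for PAF patients vs. matched controls
paf      = [150, 160, 145, 170]
controls = [120, 130, 145, 125]
result = auc(paf, controls)
```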

  15. Seismic hazard and risk assessment in the intraplate environment: The New Madrid seismic zone of the central United States

    USGS Publications Warehouse

    Wang, Z.

    2007-01-01

    Although the causes of large intraplate earthquakes are still not fully understood, they pose a real hazard and risk to societies. Estimating hazard and risk in these regions is difficult because of the lack of earthquake records. The New Madrid seismic zone is one such region where large and rare intraplate earthquakes (M = 7.0 or greater) pose significant hazard and risk. Many different definitions of hazard and risk have been used, and the resulting estimates differ dramatically. In this paper, seismic hazard is defined as the natural phenomenon generated by earthquakes, such as ground motion, and is quantified by two parameters: a level of hazard and its occurrence frequency or mean recurrence interval; seismic risk is defined as the probability of occurrence of a specific level of seismic hazard over a certain time and is quantified by three parameters: probability, a level of hazard, and exposure time. Probabilistic seismic hazard analysis (PSHA), a commonly used method for estimating seismic hazard and risk, derives a relationship between a ground motion parameter and its return period (hazard curve). The return period is not an independent temporal parameter but a mathematical extrapolation of the recurrence interval of earthquakes and the uncertainty of ground motion. Therefore, it is difficult to understand and use PSHA. A new method is proposed and applied here for estimating seismic hazard in the New Madrid seismic zone. This method provides hazard estimates that are consistent with the state of our knowledge and can be easily applied to other intraplate regions. © 2007 The Geological Society of America.
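The three-parameter risk definition (probability, hazard level, exposure time) reduces, under the common Poisson occurrence assumption, to a one-line relation between mean recurrence interval and exposure time; a sketch (not the new method the paper proposes):

```python
import math

def poisson_risk(recurrence_interval, exposure_time):
    """Probability that a hazard level with mean recurrence interval tau (years)
    is reached or exceeded at least once during exposure time t (years),
    assuming Poisson occurrence: P = 1 - exp(-t / tau)."""
    return 1.0 - math.exp(-exposure_time / recurrence_interval)

# e.g. a ground motion with a 500-year mean recurrence interval, 50-year exposure
risk = poisson_risk(500.0, 50.0)           # ~0.095, i.e. ~10% in 50 years
risk_2pct = poisson_risk(2475.0, 50.0)     # ~0.02, the familiar 2%-in-50-years
```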

  16. Perspective: The Case for an Evidence-Based Reference Interval for Serum Magnesium: The Time Has Come.

    PubMed

    Costello, Rebecca B; Elin, Ronald J; Rosanoff, Andrea; Wallace, Taylor C; Guerrero-Romero, Fernando; Hruby, Adela; Lutsey, Pamela L; Nielsen, Forrest H; Rodriguez-Moran, Martha; Song, Yiqing; Van Horn, Linda V

    2016-11-01

    The 2015 Dietary Guidelines Advisory Committee indicated that magnesium was a shortfall nutrient that was underconsumed relative to the Estimated Average Requirement (EAR) for many Americans. Approximately 50% of Americans consume less than the EAR for magnesium, and some age groups consume substantially less. A growing body of literature from animal, epidemiologic, and clinical studies has demonstrated a varied pathologic role for magnesium deficiency that includes electrolyte, neurologic, musculoskeletal, and inflammatory disorders; osteoporosis; hypertension; cardiovascular diseases; metabolic syndrome; and diabetes. Studies have also demonstrated that magnesium deficiency is associated with several chronic diseases and that a reduced risk of these diseases is observed with higher magnesium intake or supplementation. Subclinical magnesium deficiency can exist despite the presentation of a normal status as defined within the current serum magnesium reference interval of 0.75-0.95 mmol/L. This reference interval was derived from data from NHANES I (1974), which was based on the distribution of serum magnesium in a normal population rather than clinical outcomes. What is needed is an evidence-based serum magnesium reference interval that reflects optimal health and the current food environment and population. We present herein data from an array of scientific studies to support the perspective that subclinical deficiencies in magnesium exist, that they contribute to several chronic diseases, and that adopting a revised serum magnesium reference interval would improve clinical care and public health. © 2016 American Society for Nutrition.

  17. Perspective: The Case for an Evidence-Based Reference Interval for Serum Magnesium: The Time Has Come12345

    PubMed Central

    Elin, Ronald J; Rosanoff, Andrea; Lutsey, Pamela L; Nielsen, Forrest H; Rodriguez-Moran, Martha

    2016-01-01

    The 2015 Dietary Guidelines Advisory Committee indicated that magnesium was a shortfall nutrient that was underconsumed relative to the Estimated Average Requirement (EAR) for many Americans. Approximately 50% of Americans consume less than the EAR for magnesium, and some age groups consume substantially less. A growing body of literature from animal, epidemiologic, and clinical studies has demonstrated a varied pathologic role for magnesium deficiency that includes electrolyte, neurologic, musculoskeletal, and inflammatory disorders; osteoporosis; hypertension; cardiovascular diseases; metabolic syndrome; and diabetes. Studies have also demonstrated that magnesium deficiency is associated with several chronic diseases and that a reduced risk of these diseases is observed with higher magnesium intake or supplementation. Subclinical magnesium deficiency can exist despite the presentation of a normal status as defined within the current serum magnesium reference interval of 0.75–0.95 mmol/L. This reference interval was derived from data from NHANES I (1974), which was based on the distribution of serum magnesium in a normal population rather than clinical outcomes. What is needed is an evidence-based serum magnesium reference interval that reflects optimal health and the current food environment and population. We present herein data from an array of scientific studies to support the perspective that subclinical deficiencies in magnesium exist, that they contribute to several chronic diseases, and that adopting a revised serum magnesium reference interval would improve clinical care and public health. PMID:28140318

  18. Program Monitoring with LTL in EAGLE

    NASA Technical Reports Server (NTRS)

    Barringer, Howard; Goldberg, Allen; Havelund, Klaus; Sen, Koushik

    2004-01-01

    We briefly present a rule-based framework called EAGLE, shown to be capable of defining and implementing finite trace monitoring logics, including future and past time temporal logic, extended regular expressions, real-time and metric temporal logics (MTL), interval logics, forms of quantified temporal logics, and so on. In this paper we focus on a linear temporal logic (LTL) specialization of EAGLE. For an initial formula of size m, we establish upper bounds of O(m^2 2^m log m) and O(m^4 2^(2m) log^2 m) for the space and time complexity, respectively, of single-step evaluation over an input trace. This bound is close to the previously presented lower bound of O(2^(sqrt m)) for future-time LTL. EAGLE has been successfully used, in both LTL and metric LTL forms, to test a real-time controller of an experimental NASA planetary rover.
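Single-step evaluation of the kind EAGLE performs can be illustrated with a classic formula-progression (rewriting) monitor; a minimal sketch of the general technique, not EAGLE's actual rule engine:

```python
# Formulas as nested tuples: ("ap", p), ("not", f), ("and", f, g), ("or", f, g),
# ("X", f), ("U", f, g), ("G", f), ("F", f); True/False are final verdicts.

def _and(f, g):
    if f is False or g is False: return False
    if f is True: return g
    if g is True: return f
    return ("and", f, g)

def _or(f, g):
    if f is True or g is True: return True
    if f is False: return g
    if g is False: return f
    return ("or", f, g)

def prog(f, state):
    """Rewrite f into the obligation that must hold from the next state on."""
    if f is True or f is False: return f
    op = f[0]
    if op == "ap":  return f[1] in state
    if op == "not":
        r = prog(f[1], state)
        return (not r) if isinstance(r, bool) else ("not", r)
    if op == "and": return _and(prog(f[1], state), prog(f[2], state))
    if op == "or":  return _or(prog(f[1], state), prog(f[2], state))
    if op == "X":   return f[1]
    if op == "U":   return _or(prog(f[2], state), _and(prog(f[1], state), f))
    if op == "G":   return _and(prog(f[1], state), f)
    if op == "F":   return _or(prog(f[1], state), f)
    raise ValueError(op)

def monitor(f, trace):
    """Feed states one by one; return a True/False verdict, or the residue."""
    for state in trace:
        f = prog(f, state)
        if isinstance(f, bool):
            return f
    return f
```

Each step is one rewrite of the current obligation, which is exactly the "single-step evaluation" whose cost the complexity bounds describe.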

  19. Commentary on Holmes et al. (2007): resolving the debate on when extinction risk is predictable.

    PubMed

    Ellner, Stephen P; Holmes, Elizabeth E

    2008-08-01

    We reconcile the findings of Holmes et al. (Ecology Letters, 10, 2007, 1182) that 95% confidence intervals for quasi-extinction risk were narrow for many vertebrates of conservation concern, with previous theory predicting wide confidence intervals. We extend previous theory, concerning the precision of quasi-extinction estimates as a function of population dynamic parameters, prediction intervals and quasi-extinction thresholds, and provide an approximation that specifies the prediction interval and threshold combinations where quasi-extinction estimates are precise (vs. imprecise). This allows PVA practitioners to define the prediction interval and threshold regions of safety (low risk with high confidence), danger (high risk with high confidence), and uncertainty.

  20. Repeatability of Quantitative Whole-Body 18F-FDG PET/CT Uptake Measures as Function of Uptake Interval and Lesion Selection in Non-Small Cell Lung Cancer Patients.

    PubMed

    Kramer, Gerbrand Maria; Frings, Virginie; Hoetjes, Nikie; Hoekstra, Otto S; Smit, Egbert F; de Langen, Adrianus Johannes; Boellaard, Ronald

    2016-09-01

    Change in (18)F-FDG uptake may predict response to anticancer treatment. The PERCIST criteria suggest a threshold of 30% change in SUV to define partial response and progressive disease. Evidence underlying these thresholds consists of mixed stand-alone PET and PET/CT data with variable uptake intervals and no consensus on the number of lesions to be assessed. Additionally, there is increasing interest in alternative (18)F-FDG uptake measures such as metabolically active tumor volume and total lesion glycolysis (TLG). The aim of this study was to comprehensively investigate the repeatability of various quantitative whole-body (18)F-FDG metrics in non-small cell lung cancer (NSCLC) patients as a function of tracer uptake interval and lesion selection strategies. Eleven NSCLC patients, with at least 1 intrathoracic lesion 3 cm or greater, underwent double baseline whole-body (18)F-FDG PET/CT scans at 60 and 90 min after injection within 3 d. All (18)F-FDG-avid tumors were delineated with a 50% threshold of SUVpeak adapted for local background. SUVmax, SUVmean, SUVpeak, TLG, metabolically active tumor volume, and tumor-to-blood and -liver ratios were evaluated, as well as the influence of lesion selection and 2 methods for correction of uptake time differences. The best repeatability was found using the SUV metrics of the averaged PERCIST target lesions (repeatability coefficients < 10%). The correlation between test and retest scans was strong for all uptake measures at either uptake interval (intraclass correlation coefficient > 0.97 and R(2) > 0.98). There were no significant differences in repeatability between data obtained 60 and 90 min after injection. When only PERCIST-defined target lesions were included (n = 34), repeatability improved for all uptake values. Normalization to liver or blood uptake or glucose correction did not improve repeatability. 
However, after correction for uptake time the correlation of SUV measures and TLG between the 60- and 90-min data significantly improved without affecting test-retest performance. This study suggests that a 15% change of SUVmean/SUVpeak at 60 min after injection can be used to assess response in advanced NSCLC patients if up to 5 PERCIST target lesions are assessed. Lower thresholds could be used in averaged PERCIST target lesions (<10%). © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
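Repeatability coefficients of the kind reported here are commonly derived Bland-Altman style from paired test-retest differences; a sketch with hypothetical SUV pairs (the convention used, 1.96 × SD of the paired percentage differences, is one common choice and not necessarily the authors' exact formula):

```python
import math

def repeatability_coefficient(test, retest):
    """Test-retest repeatability coefficient, expressed as a percentage of the
    pair mean: 1.96 * SD of the paired percentage differences."""
    diffs = [200.0 * (a - b) / (a + b) for a, b in zip(test, retest)]
    mean = sum(diffs) / len(diffs)
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (len(diffs) - 1))
    return 1.96 * sd

# hypothetical SUVpeak values from a double-baseline (test-retest) design
suv_test   = [5.2, 7.1, 3.9, 10.4, 6.0]
suv_retest = [5.0, 7.4, 4.1, 10.1, 6.2]
rc = repeatability_coefficient(suv_test, suv_retest)  # in percent
```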

  1. NT-ProBNP and Troponin T and Risk of Rapid Kidney Function Decline and Incident CKD in Elderly Adults

    PubMed Central

    Katz, Ronit; Dalrymple, Lorien; de Boer, Ian; DeFilippi, Christopher; Kestenbaum, Bryan; Park, Meyeon; Sarnak, Mark; Seliger, Stephen; Shlipak, Michael

    2015-01-01

    Background and objectives Elevations in N-terminal pro–B-type natriuretic peptide and high-sensitivity troponin T are associated with poor cardiovascular outcomes. Whether elevations in these cardiac biomarkers are associated with decline in kidney function was evaluated. Design, setting, participants, & measurements N-terminal pro–B-type natriuretic peptide and troponin T were measured at baseline in 3752 participants free of heart failure in the Cardiovascular Health Study. eGFR was determined from the Chronic Kidney Disease Epidemiology Collaboration equation using serum cystatin C. Rapid decline in kidney function was defined as decline in serum cystatin C eGFR≥30%, and incident CKD was defined as the onset of serum cystatin C eGFR<60 among those without CKD at baseline (n=2786). Cox regression models were used to examine the associations of each biomarker with kidney function decline adjusting for demographics, baseline serum cystatin C eGFR, diabetes, and other CKD risk factors. Results In total, 503 participants had rapid decline in serum cystatin C eGFR over a mean follow-up time of 6.41 (1.81) years, and 685 participants developed incident CKD over a mean follow-up time of 6.41 (1.74) years. Participants in the highest quartile of N-terminal pro–B-type natriuretic peptide (>237 pg/ml) had a 67% higher risk of rapid decline and 38% higher adjusted risk of incident CKD compared with participants in the lowest quartile (adjusted hazard ratio for serum cystatin C eGFR rapid decline, 1.67; 95% confidence interval, 1.25 to 2.23; hazard ratio for incident CKD, 1.38; 95% confidence interval, 1.08 to 1.76). Participants in the highest category of troponin T (>10.58 pg/ml) had 80% greater risk of rapid decline compared with participants in the lowest category (adjusted hazard ratio, 1.80; 95% confidence interval, 1.35 to 2.40). 
The association of troponin T with incident CKD was not statistically significant (hazard ratio, 1.17; 95% confidence interval, 0.92 to 1.50). Conclusions Elevated N-terminal pro–B-type natriuretic peptide and troponin T are associated with rapid decline of kidney function and incident CKD. Additional studies are needed to evaluate the mechanisms that may explain this association. PMID:25605700

  2. Methane oxidation behind reflected shock waves: Ignition delay times measured by pressure and flame band emission

    NASA Technical Reports Server (NTRS)

    Brabbs, T. A.; Robertson, T. F.

    1986-01-01

    Ignition delay data were recorded for three methane-oxygen-argon mixtures (phi = 0.5, 1.0, 2.0) for the temperature range 1500 to 1920 K. Quiet pressure traces enabled us to obtain delay times for the start of the experimental pressure rise. These times were in good agreement with those obtained from the flame band emission at 3700 Å. The data correlated well with the oxygen and methane dependence of Lifshitz, but showed a much stronger temperature dependence (phi = 0.5, delta E = 51.9; phi = 1.0, delta E = 58.8; phi = 2.0, delta E = 58.7 kcal). The effect of probe location on the delay time measurement was studied. It appears that the probe located 83 mm from the reflecting surface measured delay times which may not be related to the initial temperature and pressure. It was estimated that for a probe located 7 mm from the reflecting surface, the measured delay time would be about 10 microseconds too short, and it was suggested that delay times less than 100 microseconds should not be used. The ignition period was defined as the time interval between the start of the experimental pressure rise and 50 percent of the ignition pressure. This time interval was measured for three gas mixtures and found to be similar (40 to 60 microseconds) for phi = 1.0 and 0.5 but much longer (100 to 120 microseconds) for phi = 2.0. It was suggested that the ignition period would be very useful to the kinetic modeler in judging the agreement between experimental and calculated delay times.
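Correlations of the Lifshitz type are usually fitted in log space, where the delay-time expression tau = A exp(E/(R T)) [CH4]^a [O2]^b becomes linear in the unknowns; a sketch on synthetic, noise-free data (A_true, E_true and the concentration exponents are illustrative values, not the measured ones):

```python
import numpy as np

rng = np.random.default_rng(0)
R = 1.987e-3  # gas constant, kcal/(mol K)

# Synthetic data generated from an assumed Lifshitz-style correlation
A_true, E_true, a_true, b_true = 1e-8, 52.0, 0.33, -1.03
T   = rng.uniform(1500.0, 1920.0, 40)   # K, the experimental range
ch4 = rng.uniform(1e-6, 1e-5, 40)       # mol/cm^3 (illustrative)
o2  = rng.uniform(1e-6, 1e-5, 40)
tau = A_true * np.exp(E_true / (R * T)) * ch4**a_true * o2**b_true

# ln tau = ln A + E/(R T) + a ln[CH4] + b ln[O2]  -> ordinary least squares
X = np.column_stack([np.ones_like(T), 1.0 / (R * T), np.log(ch4), np.log(o2)])
coef, *_ = np.linalg.lstsq(X, np.log(tau), rcond=None)
lnA, E_fit, a_fit, b_fit = coef         # recovers the generating parameters
```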

  3. Variable impact on mortality of AIDS-defining events diagnosed during combination antiretroviral therapy: not all AIDS-defining conditions are created equal.

    PubMed

    Mocroft, Amanda; Sterne, Jonathan A C; Egger, Matthias; May, Margaret; Grabar, Sophie; Furrer, Hansjakob; Sabin, Caroline; Fatkenheuer, Gerd; Justice, Amy; Reiss, Peter; d'Arminio Monforte, Antonella; Gill, John; Hogg, Robert; Bonnet, Fabrice; Kitahata, Mari; Staszewski, Schlomo; Casabona, Jordi; Harris, Ross; Saag, Michael

    2009-04-15

    The extent to which mortality differs following individual acquired immunodeficiency syndrome (AIDS)-defining events (ADEs) has not been assessed among patients initiating combination antiretroviral therapy. We analyzed data from 31,620 patients with no prior ADEs who started combination antiretroviral therapy. Cox proportional hazards models were used to estimate mortality hazard ratios for each ADE that occurred in >50 patients, after stratification by cohort and adjustment for sex, HIV transmission group, number of antiretroviral drugs initiated, regimen, age, date of starting combination antiretroviral therapy, and CD4+ cell count and HIV RNA load at initiation of combination antiretroviral therapy. ADEs that occurred in <50 patients were grouped together to form a "rare ADEs" category. During a median follow-up period of 43 months (interquartile range, 19-70 months), 2880 ADEs were diagnosed in 2262 patients; 1146 patients died. The most common ADEs were esophageal candidiasis (in 360 patients), Pneumocystis jiroveci pneumonia (320 patients), and Kaposi sarcoma (308 patients). The greatest mortality hazard ratio was associated with non-Hodgkin's lymphoma (hazard ratio, 17.59; 95% confidence interval, 13.84-22.35) and progressive multifocal leukoencephalopathy (hazard ratio, 10.0; 95% confidence interval, 6.70-14.92). Three groups of ADEs were identified on the basis of the ranked hazard ratios with bootstrapped confidence intervals: severe (non-Hodgkin's lymphoma and progressive multifocal leukoencephalopathy [hazard ratio, 7.26; 95% confidence interval, 5.55-9.48]), moderate (cryptococcosis, cerebral toxoplasmosis, AIDS dementia complex, disseminated Mycobacterium avium complex, and rare ADEs [hazard ratio, 2.35; 95% confidence interval, 1.76-3.13]), and mild (all other ADEs [hazard ratio, 1.47; 95% confidence interval, 1.08-2.00]). In the combination antiretroviral therapy era, mortality rates subsequent to an ADE depend on the specific diagnosis. 
The proposed classification of ADEs may be useful in clinical end point trials, prognostic studies, and patient management.

  4. Time to relapse after epilepsy surgery in children: AED withdrawal policies are a contributing factor.

    PubMed

    Boshuisen, Kim; Schmidt, Dieter; Uiterwaal, Cuno S P M; Arzimanoglou, Alexis; Braun, Kees P J; Study Group, TimeToStop

    2014-09-01

It was recently suggested that early postoperative seizure relapse implicates a failure to define and resect the epileptogenic zone, that late recurrences reflect the persistence or re-emergence of epileptogenic pathology, and that early recurrences are associated with poor treatment response. The timing of antiepileptic drug withdrawal policies, however, has never been taken into account when investigating time to relapse following epilepsy surgery. Of the European paediatric epilepsy surgery cohort from the "TimeToStop" study, all 95 children with postoperative seizure recurrence following antiepileptic drug (AED) withdrawal were selected. We investigated how time intervals from surgery to AED withdrawal, as well as other previously suggested determinants of (timing of) seizure recurrence, related to time to relapse and to relapse treatability. Uni- and multivariable linear and logistic regression models were used. Based on multivariable analysis, a shorter interval to AED reduction was the only independent predictor of a shorter time to relapse. Based on univariable analysis, incomplete resection of the epileptogenic zone related to a shorter time to recurrence. Timing of recurrence was not related to the chance of regaining seizure freedom after reinstallation of medical treatment. For children in whom AED reduction is initiated following epilepsy surgery, the time to relapse is largely influenced by the timing of AED withdrawal, rather than by disease- or surgery-specific factors. We could not confirm a relationship between time to recurrence and treatment response. Timing of AED withdrawal should be taken into account when studying time to relapse following epilepsy surgery, as early withdrawal reveals more rapidly whether surgery had the intended curative effect, independently of the other factors involved.

  5. Controls on the long term earthquake behavior of an intraplate fault revealed by U-Th and stable isotope analyses of syntectonic calcite veins

    NASA Astrophysics Data System (ADS)

    Williams, Randolph; Goodwin, Laurel; Sharp, Warren; Mozley, Peter

    2017-04-01

U-Th dates on calcite precipitated in coseismic extension fractures in the Loma Blanca normal fault zone, Rio Grande rift, NM, USA, constrain earthquake recurrence intervals from 150-565 ka. This is the longest direct record of seismicity documented for a fault in any tectonic environment. Combined U-Th and stable isotope analyses of these calcite veins define 13 distinct earthquake events. These data show that for more than 400 ka the Loma Blanca fault produced earthquakes with a mean recurrence interval of 40 ± 7 ka. The coefficient of variation for these events is 0.40, indicating strongly periodic seismicity consistent with a time-dependent model of earthquake recurrence. Stochastic statistical analyses further validate the inference that earthquake behavior on the Loma Blanca fault was time-dependent. The time-dependent nature of these earthquakes suggests that the seismic cycle was fundamentally controlled by a stress renewal process. However, this periodic cycle was punctuated by an episode of clustered seismicity at 430 ka. Recurrence intervals within the earthquake cluster were as low as 5-11 ka. Breccia veins formed during this episode exhibit carbon isotope signatures consistent with having formed through pronounced degassing of a CO2-charged brine during post-failure, fault-localized fluid migration. The 40 ka periodicity of the long-term earthquake record of the Loma Blanca fault is similar in magnitude to recurrence intervals documented through paleoseismic studies of other normal faults in the Rio Grande rift and Basin and Range Province. We propose that it represents a background rate of failure in intraplate extension. The short-term, clustered seismicity that occurred on the fault records an interruption of the stress renewal process, likely by elevated fluid pressure in deeper structural levels of the fault, consistent with fault-valve behavior.
The relationship between recurrence interval and inferred fluid degassing suggests that pore fluid pressure along the fault may have been driven by variations in CO2 content, thereby fundamentally affecting earthquake frequency. Thus, the Loma Blanca fault provides a record of "naturally induced" seismicity, with lessons for better understanding anthropogenic induced seismicity.
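The periodicity claim above rests on the coefficient of variation of the recurrence intervals (standard deviation divided by mean). As a minimal sketch, with hypothetical event ages standing in for the actual U-Th dates:

```python
import statistics

# Hypothetical U-Th event ages in ka (illustration only, not the actual
# Loma Blanca dates), listed from oldest to youngest
event_ages_ka = [520, 478, 441, 400, 362, 318, 281, 240, 199, 160]

# Recurrence intervals between successive events
intervals = [a - b for a, b in zip(event_ages_ka, event_ages_ka[1:])]

mean_ri = statistics.mean(intervals)          # mean recurrence interval (ka)
cov = statistics.stdev(intervals) / mean_ri   # coefficient of variation

# cov << 1 indicates quasi-periodic recurrence; cov near 1 would be Poissonian
```

A low coefficient of variation, as in the Loma Blanca record, is the quantitative basis for calling the seismicity "strongly periodic."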

  6. The role of retinopathy distribution and other lesion types for the definition of examination intervals during screening for diabetic retinopathy.

    PubMed

    Ometto, Giovanni; Erlandsen, Mogens; Hunter, Andrew; Bek, Toke

    2017-06-01

It has previously been shown that the intervals between screening examinations for diabetic retinopathy can be optimized by including individual risk factors for the development of the disease in the risk assessment. However, in some cases, the risk model calculating the screening interval may recommend a different interval than an experienced clinician. The purpose of this study was to evaluate the influence of factors unrelated to diabetic retinopathy and the distribution of lesions on discrepancies between decisions made by the clinician and the risk model. Therefore, fundus photographs from 90 screening examinations where the recommendations of the clinician and a risk model had been discrepant were evaluated. Forty features were defined to describe the type and location of the lesions, and classification and ranking techniques were used to assess whether the features could predict the discrepancy between the grader and the risk model. Suspicion of tumours, retinal degeneration and vascular diseases other than diabetic retinopathy could explain why the clinician recommended shorter examination intervals than the model. Additionally, the regional distribution of microaneurysms/dot haemorrhages was important for defining a photograph as belonging to the group where both the clinician and the risk model had recommended a short screening interval as opposed to the other decision alternatives. Features unrelated to diabetic retinopathy and the regional distribution of retinal lesions may affect the recommendation of the examination interval during screening for diabetic retinopathy. The development of automated computerized algorithms for extracting information about the type and location of retinal lesions could be expected to further optimize examination intervals during screening for diabetic retinopathy. © 2016 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  7. HIV testing, care referral and linkage to care intervals affect time to engagement in care for newly diagnosed HIV-infected adolescents in fifteen adolescent medicine clinics in the United States

    PubMed Central

    Philbin, Morgan M.; Tanner, Amanda E.; DuVal, Anna; Ellen, Jonathan M.; Xu, Jiahong; Kapogiannis, Bill; Bethel, Jim; Fortenberry, J. Dennis

    2016-01-01

    Objective To examine how the time from HIV testing to care referral and from referral to care linkage influenced time to care engagement for newly diagnosed HIV-infected adolescents. Methods We evaluated the Care Initiative, a care linkage and engagement program for HIV-infected adolescents in 15 U.S. clinics. We analyzed client-level factors, provider type and intervals from HIV testing to care referral and from referral to care linkage as predictors of care engagement. Engagement was defined as a second HIV-related medical visit within 16 weeks of initial HIV-related medical visit (linkage). Results At 32 months, 2,143 youth had been referred. Of these, 866 were linked to care through the Care Initiative within 42 days and thus eligible for study inclusion. Of the linked youth, 90.8% were ultimately engaged in care. Time from HIV testing to referral (e.g., ≤7 days versus >365 days) was associated with engagement (AOR=2.91; 95% CI: 1.43–5.94) and shorter time to engagement (Adjusted HR=1.41; 95% CI: 1.11–1.79). Individuals with shorter care referral to linkage intervals (e.g., ≤7 days versus 22–42 days) engaged in care faster (Adjusted HR=2.90; 95% CI: 2.34–3.60) and more successfully (AOR=2.01; 95% CI: 1.04–3.89). Conclusions These data address a critical piece of the care continuum, and can offer suggestions of where and with whom to intervene in order to best achieve the care engagement goals outlined in the U.S. National HIV/AIDS Strategy. These results may also inform programs and policies that set concrete milestones and strategies for optimal care linkage timing for newly diagnosed adolescents. PMID:26885804

  8. Estimating Time to Event From Longitudinal Categorical Data: An Analysis of Multiple Sclerosis Progression.

    PubMed

    Mandel, Micha; Gauthier, Susan A; Guttmann, Charles R G; Weiner, Howard L; Betensky, Rebecca A

    2007-12-01

The expanded disability status scale (EDSS) is an ordinal score that measures progression in multiple sclerosis (MS). Progression is defined as reaching an EDSS of a certain level (absolute progression) or an increase of one point in EDSS (relative progression). Survival methods for time to progression are not adequate for such data since they do not exploit the EDSS level at the end of follow-up. Instead, we suggest a Markov transitional model applicable for repeated categorical or ordinal data. This approach enables derivation of covariate-specific survival curves, obtained after estimation of the regression coefficients and manipulations of the resulting transition matrix. Large sample theory and resampling methods are employed to derive pointwise confidence intervals, which perform well in simulation. Methods for generating survival curves for time to EDSS of a certain level, time to an increase of EDSS of at least one point, and time to two consecutive visits with EDSS greater than three are described explicitly. The regression models described are easily implemented using standard software packages. Survival curves are obtained from the regression results using packages that support simple matrix calculation. We present and demonstrate our method on data collected at the Partners MS center in Boston, MA. We apply our approach to progression defined by time to two consecutive visits with EDSS greater than three, and calculate crude (without covariates) and covariate-specific curves.
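The survival-curve construction described above — propagate a state distribution through a transition matrix in which the progression state is made absorbing, then track the probability mass not yet absorbed — can be sketched as follows. The three-state matrix is invented for illustration, not estimated from the MS data:

```python
import numpy as np

# Toy transition matrix over three ordinal disability states (in the paper
# this matrix is built from estimated regression coefficients). The highest
# state is absorbing, so "time to progression" can be read off the chain.
P = np.array([
    [0.90, 0.08, 0.02],
    [0.05, 0.85, 0.10],
    [0.00, 0.00, 1.00],
])

def survival_curve(P, start, n_visits):
    """P(progression state not yet reached) after each scheduled visit."""
    probs = []
    state = np.asarray(start, dtype=float)
    for _ in range(n_visits):
        state = state @ P              # propagate the distribution one visit
        probs.append(1.0 - state[-1])  # mass outside the absorbing state
    return probs

curve = survival_curve(P, [1.0, 0.0, 0.0], 5)
```

Covariate-specific curves follow by plugging in a covariate-specific transition matrix; only simple matrix multiplication is needed, as the abstract notes.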

  9. Diagnostic reliability of the cervical vertebral maturation method and standing height in the identification of the mandibular growth spurt.

    PubMed

    Perinetti, Giuseppe; Contardo, Luca; Castaldo, Attilio; McNamara, James A; Franchi, Lorenzo

    2016-07-01

To evaluate the capability of both cervical vertebral maturation (CVM) stages 3 and 4 (CS3-4 interval) and the peak in standing height to identify the mandibular growth spurt through diagnostic reliability analysis. A previous longitudinal data set derived from 24 untreated growing subjects (15 females and nine males), detailed elsewhere, was reanalyzed. Mandibular growth was defined as annual increments in Condylion (Co)-Gnathion (Gn) (total mandibular length) and Co-Gonion Intersection (Goi) (ramus height) and their arithmetic mean (mean mandibular growth [mMG]). Subsequently, individual annual increments in standing height, Co-Gn, Co-Goi, and mMG were arranged according to annual age intervals, with the first and last intervals defined as 7-8 years and 15-16 years, respectively. An analysis was performed to establish the diagnostic reliability of the CS3-4 interval or the peak in standing height in the identification of the maximum individual increments of each Co-Gn, Co-Goi, and mMG measurement at each annual age interval. CS3-4 and the standing height peak show similar but variable accuracy across annual age intervals, registering values between 0.61 (standing height peak, Co-Gn) and 0.95 (standing height peak and CS3-4, mMG). Generally, satisfactory diagnostic reliability was seen when the mandibular growth spurt was identified on the basis of the Co-Goi and mMG increments. Both the CVM interval CS3-4 and the peak in standing height may be used in routine clinical practice to enhance the efficiency of treatments requiring identification of the mandibular growth spurt.
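The diagnostic reliability analysis reduces to standard accuracy measures computed per age interval; a small helper, with made-up counts rather than the study's 24 subjects:

```python
def diagnostic_measures(tp, fp, tn, fn):
    """Standard diagnostic reliability measures for an indicator (e.g. the
    CS3-4 interval) against the observed maximum annual mandibular increment."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# Made-up counts pooled over age intervals (not the study's data)
m = diagnostic_measures(tp=20, fp=3, tn=150, fn=4)
```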

  10. Can We Draw General Conclusions from Interval Training Studies?

    PubMed

    Viana, Ricardo Borges; de Lira, Claudio Andre Barbosa; Naves, João Pedro Araújo; Coswig, Victor Silveira; Del Vecchio, Fabrício Boscolo; Ramirez-Campillo, Rodrigo; Vieira, Carlos Alexandre; Gentil, Paulo

    2018-04-19

Interval training (IT) has been used for many decades with the purpose of increasing performance and promoting health benefits while demanding a relatively small amount of time. IT can be defined as intermittent periods of intense exercise separated by periods of recovery, and has been divided into high-intensity interval training (HIIT), sprint interval training (SIT), and repeated sprint training (RST). IT use has resulted in the publication of many studies, many of them with conflicting results and positions. The aim of this article was to move forward and understand the studies' protocols in order to draw accurate conclusions, as well as to avoid previous mistakes and effectively reproduce previous protocols. When analyzing the literature, we found many inconsistencies, such as the controversial concept of 'supramaximal' effort, a misunderstanding with regard to the term 'high intensity,' and the use of different strategies to control intensity. The adequate definition and interpretation of training intensity seems to be vital, since the results of IT are largely dependent on it. These observations are only a few examples of the complexity involved in IT prescription, and are discussed to illustrate some problems with the current literature regarding IT. Therefore, it is our opinion that it is not possible to draw general conclusions about IT without considering all variables used in IT prescription, such as exercise modality, intensity, effort and rest times, and participants' characteristics. In order to help guide researchers and health professionals in their practices, it is important that experimental studies report their methods in as much detail as possible, and future reviews and meta-analyses should critically discuss the articles included in the light of their methods to avoid inappropriate generalizations.

  11. Characteristic pulse trains of preliminary breakdown in four isolated small thunderstorms

    NASA Astrophysics Data System (ADS)

    Ma, Dong

    2017-03-01

Using a low-frequency six-station local network, preliminary breakdown (PB) pulses that are not followed by a negative return stroke (RS) and those that are, defined as PB-type and PB cloud-to-ground (PBCG)-type flashes, respectively, are analyzed in four isolated small thunderstorms for the first time. Only 22 of a total of 2155 flashes were PB-type, indicating that PB-type flashes are rare. At the early stage, PB-type flashes are observed in all four thunderstorms. At the active stage, PB-type flashes can still occur; meanwhile, there are few or no negative cloud-to-ground (CG) flashes. However, at the final stage no PB-type flashes occur. At the stage of distinct cell merging or splitting, PB-type flashes are also observed. Based on the 123 PBCG-type flashes, we discuss the percentage of PBCG-type flashes and also analyze the relationship between the electric field (E-field) amplitude of the largest pulse in the PB pulse train normalized to 100 km (PBA), the E-field amplitude of the first return stroke normalized to 100 km (RSA), the time interval between PBA and RSA (PB-RS interval), and the ratio between PBA and RSA (PB-RS ratio). We find that the percentage of PBCG-type flashes is not always dependent on PBA or the PB-RS ratio; the type of thunderstorm may also have an impact on this percentage. None of the PB-RS intervals is less than 20 ms; we speculate that such long PB-RS intervals are a feature of isolated small thunderstorms, but more observations are needed to further investigate this question.

  12. Exercise during pregnancy and risk of gestational hypertensive disorders: a systematic review and meta-analysis.

    PubMed

    Magro-Malosso, Elena R; Saccone, Gabriele; Di Tommaso, Mariarosaria; Roman, Amanda; Berghella, Vincenzo

    2017-08-01

Gestational hypertensive disorders, including gestational hypertension and preeclampsia, are one of the leading causes of maternal morbidity and mortality. The aim of our study was to evaluate the effect of exercise during pregnancy on the risk of gestational hypertensive disorders. Electronic databases were searched from their inception to February 2017. Selection criteria included only randomized controlled trials of uncomplicated pregnant women assigned before 23 weeks to an aerobic exercise regimen or not. The summary measures were reported as relative risk with 95% confidence intervals. The primary outcome was the incidence of gestational hypertensive disorders, defined as either gestational hypertension or preeclampsia. Seventeen trials, including 5075 pregnant women, were analyzed. Of them, seven contributed data to the quantitative meta-analysis for the primary outcome. Women who were randomized in early pregnancy to aerobic exercise for about 30-60 min two to seven times per week had a significantly lower incidence of gestational hypertensive disorders (5.9% vs. 8.5%; relative risk 0.70, 95% confidence interval 0.53-0.83; seven studies, 2517 participants), specifically a lower incidence of gestational hypertension (2.5% vs. 4.6%; relative risk 0.54, 95% confidence interval 0.40-0.74; 16 studies, 4641 participants) compared with controls. The incidence of preeclampsia (2.3% vs. 2.8%; relative risk 0.79, 95% confidence interval 0.45-1.38; six studies, 2230 participants) was similar in both groups. The incidence of cesarean delivery was decreased by 16% in the exercise group. Aerobic exercise for about 30-60 min two to seven times per week during pregnancy, as compared with being more sedentary, is associated with a significantly reduced risk of gestational hypertensive disorders overall, gestational hypertension, and cesarean delivery. © 2017 Nordic Federation of Societies of Obstetrics and Gynecology.
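The summary measure used throughout — a relative risk with a 95% confidence interval — is conventionally computed on the log scale. A sketch with invented counts (not the pooled trial data):

```python
import math

def relative_risk_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Relative risk of group A versus group B, with a 95% Wald confidence
    interval computed on the log scale."""
    rr = (events_a / n_a) / (events_b / n_b)
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Invented counts that roughly mimic a 5.9% vs. 8.5% incidence
rr, lo, hi = relative_risk_ci(74, 1254, 107, 1263)
```

A confidence interval that excludes 1 (as here) corresponds to a statistically significant difference in risk.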

  13. Estimating statistical uncertainty of Monte Carlo efficiency-gain in the context of a correlated sampling Monte Carlo code for brachytherapy treatment planning with non-normal dose distribution.

    PubMed

    Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr

    2012-01-01

Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap based algorithm was used to simulate the probability distribution of the efficiency gain estimates and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed. Copyright © 2011 Elsevier Ltd. All rights reserved.
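The bootstrap step — simulate the distribution of the efficiency-gain estimate and take the shortest interval containing 95% of the replicates — can be sketched generically. The run-time data below are synthetic stand-ins, not the paper's correlated-sampling output:

```python
import math
import random
import statistics

def shortest_interval(samples, coverage=0.95):
    """Shortest contiguous interval covering `coverage` of the sorted replicates."""
    s = sorted(samples)
    k = math.ceil(coverage * len(s))
    start = min(range(len(s) - k + 1), key=lambda i: s[i + k - 1] - s[i])
    return s[start], s[start + k - 1]

rng = random.Random(0)
# Synthetic per-run computing times (arbitrary units): the "correlated
# sampling" runs are roughly twice as fast as the conventional ones
t_conventional = [rng.gauss(2.0, 0.2) for _ in range(50)]
t_correlated = [rng.gauss(1.0, 0.1) for _ in range(50)]

# Bootstrap the distribution of the efficiency-gain estimate
gains = []
for _ in range(2000):
    ref = [rng.choice(t_conventional) for _ in range(50)]
    cs = [rng.choice(t_correlated) for _ in range(50)]
    gains.append(statistics.mean(ref) / statistics.mean(cs))

lo, hi = shortest_interval(gains)  # shortest 95% confidence interval
```

The shortest interval is preferred over equal-tailed percentiles when the bootstrap distribution is skewed, which is exactly the non-normal situation the paper addresses.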

  14. Intermediate-term and long-term outcome of piggyback drainage: connecting glaucoma drainage device to a device in-situ for improved intraocular pressure control.

    PubMed

    Dervan, Edward; Lee, Edward; Giubilato, Antonio; Khanam, Tina; Maghsoudlou, Panayiotis; Morgan, William H

    2017-11-01

This study provides results of a treatment option for patients with a failed primary glaucoma drainage device. The study aimed to describe and evaluate the long-term intraocular pressure control and complications of a new technique joining a second glaucoma drainage device directly to an existing glaucoma drainage device, termed 'piggyback drainage'. This is a retrospective, interventional cohort study. Eighteen eyes of 17 patients who underwent piggyback drainage between 2004 and 2013 inclusive have been studied. All patients had a prior glaucoma drainage device with uncontrolled intraocular pressure. The piggyback technique involved suturing a Baerveldt (250 or 350 mm) or Molteno3 glaucoma drainage device to an unused scleral quadrant and connecting the silicone tube to the primary plate bleb. Failure of intraocular pressure control was defined as an intraocular pressure greater than 21 mmHg on maximal therapy on two separate occasions or further intervention to control intraocular pressure. The intraocular pressure was controlled in seven eyes (39%) at last follow-up with a mean follow-up time of 74.2 months. The mean preoperative intraocular pressure was 27.1 mmHg (95% confidence interval 23.8-30.3) compared with 18.4 mmHg (95% confidence interval 13.9-22.8) at last follow-up. The mean time to failure was 57.1 months (95% confidence interval 32.2-82), and the mean time to further surgery was 72.3 months (95% confidence interval 49.9-94.7). Lower preoperative intraocular pressure was associated with longer duration of intraocular pressure control (P = 0.048). If the intraocular pressure was controlled over 2 years, it continued to be controlled over the long term. Two eyes (11%) experienced corneal decompensation. Piggyback drainage represents a viable surgical alternative for the treatment of patients with severe glaucoma with a failing primary glaucoma drainage device, particularly in those at high risk of corneal decompensation.
© 2017 Royal Australian and New Zealand College of Ophthalmologists.

  15. Permanent pacemaker implantation in octogenarians with unexplained syncope and positive electrophysiologic testing.

    PubMed

    Giannopoulos, Georgios; Kossyvakis, Charalampos; Panagopoulou, Vasiliki; Tsiachris, Dimitrios; Doudoumis, Konstantinos; Mavri, Maria; Vrachatis, Dimitrios; Letsas, Konstantinos; Efremidis, Michael; Katsivas, Apostolos; Lekakis, John; Deftereos, Spyridon

    2017-05-01

    Syncope is a common problem in the elderly, and a permanent pacemaker is a therapeutic option when a bradycardic etiology is revealed. However, the benefit of pacing when no association of symptoms to bradycardia has been shown is not clear, especially in the elderly. The aim of this study was to evaluate the effect of pacing on syncope-free mortality in patients aged 80 years or older with unexplained syncope and "positive" invasive electrophysiologic testing (EPT). This was an observational study. A positive EPT for the purposes of this study was defined by at least 1 of the following: a corrected sinus node recovery time of >525 ms, a basic HV interval of >55 ms, detection of infra-Hisian block, or appearance of second-degree atrioventricular block on atrial decremental pacing at a paced cycle length of >400 ms. Among the 2435 screened patients, 228 eligible patients were identified, 145 of whom were implanted with a pacemaker. Kaplan-Meier analysis determined that time to event (syncope or death) was 50.1 months (95% confidence interval 45.4-54.8 months) with a pacemaker vs 37.8 months (95% confidence interval 31.3-44.4 months) without a pacemaker (log-rank test, P = .001). The 4-year time-dependent estimate of the rate of syncope was 12% vs 44% (P < .001) and that of any-cause death was 41% vs 56% (P = .023), respectively. The multivariable odds ratio was 0.25 (95% confidence interval 0.15-0.40) after adjustment for potential confounders. In patients with unexplained syncope and signs of sinus node dysfunction or impaired atrioventricular conduction on invasive EPT, pacemaker implantation was independently associated with longer syncope-free survival. Significant differences were also shown in the individual components of the primary outcome measure (syncope and death from any cause). Copyright © 2017 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.
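The study's definition of a positive EPT is a simple disjunction of four criteria and can be encoded directly; the function name and argument layout are illustrative, not from the paper:

```python
def positive_ept(csnrt_ms, hv_ms, infra_hisian_block, av_block_cycle_ms=None):
    """True if at least one of the study's four positivity criteria is met:
    corrected sinus node recovery time > 525 ms, basic HV interval > 55 ms,
    infra-Hisian block, or second-degree AV block appearing at a paced cycle
    length > 400 ms (av_block_cycle_ms is None when no block was provoked)."""
    return (
        csnrt_ms > 525
        or hv_ms > 55
        or infra_hisian_block
        or (av_block_cycle_ms is not None and av_block_cycle_ms > 400)
    )

# Example: a prolonged HV interval alone is sufficient for positivity
hv_only_positive = positive_ept(csnrt_ms=450, hv_ms=60, infra_hisian_block=False)
```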

  16. Fourth-grade children's dietary recall accuracy is influenced by retention interval (target period and interview time).

    PubMed

    Baxter, Suzanne Domel; Hardin, James W; Guinn, Caroline H; Royer, Julie A; Mackelprang, Alyssa J; Smith, Albert F

    2009-05-01

    For a 24-hour dietary recall, two possible target periods are the prior 24 hours (24 hours immediately preceding the interview time) and previous day (midnight to midnight of the day before the interview), and three possible interview times are morning, afternoon, and evening. Target period and interview time determine the retention interval (elapsed time between to-be-reported meals and the interview), which, along with intervening meals, can influence reporting accuracy. The effects of target period and interview time on children's accuracy for reporting school meals during 24-hour dietary recalls were investigated. DESIGN AND SUBJECTS/SETTING: During the 2004-2005, 2005-2006, and 2006-2007 school years in Columbia, SC, each of 374 randomly selected fourth-grade children (96% African American) was observed eating two consecutive school meals (breakfast and lunch) and interviewed to obtain a 24-hour dietary recall using one of six conditions defined by crossing two target periods with three interview times. Each condition had 62 or 64 children (half boys). Accuracy for reporting school meals was quantified by calculating rates for omissions (food items observed eaten but unreported) and intrusions (food items reported eaten but unobserved); a measure of total inaccuracy combined errors for reporting food items and amounts. For each accuracy measure, analysis of variance was conducted with target period, interview time, their interaction, sex, interviewer, and school year in the model. There was a target-period effect and a target-period by interview-time interaction on omission rates, intrusion rates, and total inaccuracy (six P values <0.004). For prior-24-hour recalls compared to previous-day recalls, and for prior-24-hour recalls in the afternoon and evening compared to previous-day recalls in the afternoon and evening, omission rates were better by one third, intrusion rates were better by one half, and total inaccuracy was better by one third. 
To enhance children's dietary recall accuracy, target periods and interview times that minimize the retention interval should be chosen.

  17. Explicit analytical expression for the condition number of polynomials in power form

    NASA Astrophysics Data System (ADS)

    Rack, Heinz-Joachim

    2017-07-01

    In his influential papers [1-3] W. Gautschi has defined and reshaped the condition number κ∞ of polynomials Pn of degree ≤ n which are represented in power form on a zero-symmetric interval [-ω, ω]. Basically, κ∞ is expressed as the product of two operator norms: an explicit factor times an implicit one (the l∞-norm of the coefficient vector of the n-th Chebyshev polynomial of the first kind relative to [-ω, ω]). We provide a new proof, economize the second factor and express it by an explicit analytical formula.
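The implicit factor in κ∞ — the l∞-norm of the power-form coefficient vector of the Chebyshev polynomial of the first kind — is easy to evaluate numerically (shown here for ω = 1; rescaling to a general [-ω, ω] is omitted):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def tn_power_coeffs(n):
    """Power-form (monomial) coefficients of the Chebyshev polynomial T_n
    of the first kind on [-1, 1], lowest degree first."""
    cheb_basis = np.zeros(n + 1)
    cheb_basis[n] = 1.0           # T_n expressed in the Chebyshev basis
    return C.cheb2poly(cheb_basis)

# T_5(x) = 16x^5 - 20x^3 + 5x, so the coefficient vector's l_inf norm is 20
norm_inf = float(np.max(np.abs(tn_power_coeffs(5))))
```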

  18. Computing the variations in the self-similar properties of the various gait intervals in Parkinson disease patients.

    PubMed

    Manjeri Keloth, Sana; Arjunan, Sridhar P; Kumar, Dinesh

    2017-07-01

This study investigated the stride, swing, stance and double-support intervals of gait for Parkinson's disease (PD) patients with different levels of severity. Self-similar properties of the gait signal were analyzed to investigate the changes in the gait patterns of healthy subjects and PD patients. To quantify the self-similar property, detrended fluctuation analysis was performed. The analysis shows that PD patients have a less defined gait compared to healthy subjects. The study also shows that, between the stance and swing phases of the stride interval, self-similarity is lower for the swing interval than for the stance interval and decreases with disease severity. PD patients also show decreased self-similar patterns in the double-support interval of gait. This suggests that PD patients have less rhythmic gait intervals and a sense of urgency to remain in the support phase of gait.
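Detrended fluctuation analysis, as used in the study, can be sketched with NumPy: integrate the series, remove a local linear trend in non-overlapping windows of increasing size, and take the slope of log fluctuation versus log window size as the self-similarity exponent. Window sizes here are illustrative, not the study's:

```python
import numpy as np

def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
    """Self-similarity (scaling) exponent via detrended fluctuation analysis."""
    y = np.cumsum(x - np.mean(x))          # integrated profile of the series
    flucts = []
    for s in scales:
        rms = []
        for i in range(len(y) // s):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear trend
            rms.append(np.mean((seg - trend) ** 2))
        flucts.append(np.sqrt(np.mean(rms)))
    # Slope of log F(s) vs log s is the exponent alpha
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

rng = np.random.default_rng(0)
alpha_noise = dfa_alpha(rng.standard_normal(2048))  # white noise: alpha near 0.5
```

Healthy, strongly self-similar gait series yield higher alpha than uncorrelated (noise-like) series, which is the basis for the comparisons reported above.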

  19. Magnetospheric electric fields and auroral oval

    NASA Technical Reports Server (NTRS)

    Laakso, Harri; Pedersen, Arne; Craven, John D.; Frank, L. A.

    1992-01-01

DC electric field variations in a synchronous orbit (GEOS 2) during four substorms in the time sector 19 to 01 LT were investigated. Simultaneously, the imaging photometer on board DE 1 provided auroral images that are also utilized. Substorm onset is defined here as a sudden appearance of large electric fields. During the growth phase, the orientation of the electric field begins to oscillate some 30 min prior to onset. About 10 min before the onset GEOS 2 starts moving into a more tenuous plasma, probably due to a thinning of the current sheet. The onset is followed by a period of 10 to 15 min during which large electric fields occur. This interval can be divided into two intervals. During the first interval, which lasts 4 to 8 min, very large fields of 8 to 20 mV/m are observed, while the second interval contains relatively large fields (2 to 5 mV/m). A few minutes after the onset, the spacecraft returns to a plasma region of higher electron fluxes, which are usually larger than before the substorm. Some 30 min after onset, enhanced activity, lasting about 10 min, appears in the electric field. One of the events selected offers a good opportunity to study the formation and development of the Westward Traveling Surge (WTS). During the traversal of the leading edge of the WTS (approximately 8 min) a stable wave mode at 5.7 mHz is detected.

  20. Vectorcardiographic diagnostic & prognostic information derived from the 12-lead electrocardiogram: Historical review and clinical perspective.

    PubMed

    Man, Sumche; Maan, Arie C; Schalij, Martin J; Swenne, Cees A

    2015-01-01

In the course of time, electrocardiography has assumed several modalities with varying electrode numbers, electrode positions and lead systems. 12-lead electrocardiography and 3-lead vectorcardiography have become particularly popular. These modalities developed in parallel through the mid-twentieth century. In the same time interval, the physical concepts underlying electrocardiography were defined and worked out. In particular, the vector concept (heart vector, lead vector, volume conductor) appeared to be essential to understanding the manifestations of electrical heart activity, both in the 12-lead electrocardiogram (ECG) and in the 3-lead vectorcardiogram (VCG). Not universally appreciated in the clinic, the vectorcardiogram, and with it the vector concept, went out of use. A revival of vectorcardiography started in the 1990s, when VCGs were mathematically synthesized from standard 12-lead ECGs. This facilitated combined electrocardiography and vectorcardiography without the need for a special recording system. This paper gives an overview of these historical developments, elaborates on the vector concept and seeks to define where VCG analysis/interpretation can add diagnostic/prognostic value to conventional 12-lead ECG analysis. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. The Joint Associations of Sedentary Time and Physical Activity With Mobility Disability in Older People: The NIH-AARP Diet and Health Study.

    PubMed

    DiPietro, Loretta; Jin, Yichen; Talegawkar, Sameera; Matthews, Charles E

    2018-03-14

    The purpose of this study was to determine the joint associations of sedentary time and physical activity with mobility disability in older age. We analyzed prospective data from 134,269 participants in the National Institutes of Health (NIH)-American Association of Retired Persons (NIH-AARP) Diet and Health Study between 1995-1996 and 2004-2005. Total sitting time (h/d), TV viewing time (h/d) and light- and moderate-to-vigorous-intensity physical activity (h/wk) were self-reported at baseline, and mobility disability at follow-up was defined as being "unable to walk" or having an "easy usual walking pace (<2 mph)." Multivariable logistic regression determined the independent and joint associations of sedentary time and total physical activity with the odds of disability. Among the most active participants (>7 h/wk), sitting <6 h/d was not related to excess disability at follow-up, and those in the most active group reporting the highest level of sitting time (≥7 h/d) still had significantly lower odds (odds ratio = 1.11; 95% confidence interval = 1.02, 1.20) compared with those reporting the lowest level of sitting (<3 h/d) in the least active group (≤3 h/wk; odds ratio = 2.07; 95% confidence interval = 1.92, 2.23). Greater TV time was significantly related to increased disability within all levels of physical activity. Reduction of sedentary time, combined with increased physical activity, may be necessary to maintain function in older age.

  2. High resolution data acquisition

    DOEpatents

    Thornton, G.W.; Fuller, K.R.

    1993-04-06

    A high resolution event interval timing system measures short time intervals such as occur in high energy physics or laser ranging. Timing is provided by a clock pulse train and analog circuitry that generates a triangular wave synchronously with the pulse train (as shown in the diagram on the patent). The triangular wave has an amplitude and slope functionally related to the time elapsed during each clock pulse in the train. A converter forms a first digital value of the amplitude and slope of the triangle wave at the start of the event interval and a second digital value of the amplitude and slope of the triangle wave at the end of the event interval. A counter counts the clock pulse train during the interval to form a gross event interval time. A computer then combines the gross event interval time and the first and second digital values to output a high resolution value for the event interval.

  3. High resolution data acquisition

    DOEpatents

    Thornton, Glenn W.; Fuller, Kenneth R.

    1993-01-01

    A high resolution event interval timing system measures short time intervals such as occur in high energy physics or laser ranging. Timing is provided from a clock (38) pulse train (37) and analog circuitry (44) for generating a triangular wave (46) synchronously with the pulse train (37). The triangular wave (46) has an amplitude and slope functionally related to the time elapsed during each clock pulse in the train. A converter (18, 32) forms a first digital value of the amplitude and slope of the triangle wave at the start of the event interval and a second digital value of the amplitude and slope of the triangle wave at the end of the event interval. A counter (26) counts the clock pulse train (37) during the interval to form a gross event interval time. A computer (52) then combines the gross event interval time and the first and second digital values to output a high resolution value for the event interval.
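    The interpolation arithmetic described in this record can be sketched as follows. The function and parameter names are hypothetical, and the analog details (wave generation, digitization) are abstracted into two (amplitude, slope-sign) vernier samples:

    ```python
    # Sketch of the triangular-wave interpolation arithmetic, assuming the
    # wave rises 0 -> peak over the first half of each clock period and
    # falls back over the second half.

    def frac_time(amplitude, slope_sign, period, peak):
        """Time elapsed within the current clock period, decoded from the
        digitized triangle-wave amplitude and the sign of its slope."""
        rise = amplitude / peak * (period / 2.0)
        return rise if slope_sign > 0 else period - rise

    def event_interval(gross_count, start_sample, end_sample, period, peak):
        """Combine the coarse counter value with the two vernier samples.
        Each sample is an (amplitude, slope_sign) pair taken at the start
        and end of the event interval."""
        t_start = frac_time(*start_sample, period, peak)
        t_end = frac_time(*end_sample, period, peak)
        return gross_count * period - t_start + t_end

    # 10 ns clock, unit-amplitude triangle: start 2.5 ns into one period,
    # end 7.5 ns into another, 3 full clock pulses counted in between.
    print(event_interval(3, (0.5, +1), (0.5, -1), 10.0, 1.0))  # -> 35.0
    ```

    The coarse counter resolves whole clock periods, while the two digitized triangle samples place the start and end events within their respective periods.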

  4. Path integrals, supersymmetric quantum mechanics, and the Atiyah-Singer index theorem for twisted Dirac

    NASA Astrophysics Data System (ADS)

    Fine, Dana S.; Sawin, Stephen

    2017-01-01

    Feynman's time-slicing construction approximates the path integral by a product, determined by a partition of a finite time interval, of approximate propagators. This paper formulates general conditions to impose on a short-time approximation to the propagator in a general class of imaginary-time quantum mechanics on a Riemannian manifold which ensure that these products converge. The limit defines a path integral which agrees pointwise with the heat kernel for a generalized Laplacian. The result is a rigorous construction of the propagator for supersymmetric quantum mechanics, with potential, as a path integral. Further, the class of Laplacians includes the square of the twisted Dirac operator, which corresponds to an extension of N = 1/2 supersymmetric quantum mechanics. General results on the rate of convergence of the approximate path integrals suffice in this case to derive the local version of the Atiyah-Singer index theorem.

  5. Plasma Vitamin D Deficiency Is Associated with Poor Sleep Quality and Night-Time Eating at Mid-Pregnancy in Singapore

    PubMed Central

    Cheng, Tuck Seng; Loy, See Ling; Cheung, Yin Bun; Cai, Shirong; Colega, Marjorelee T.; Godfrey, Keith M.; Chong, Yap-Seng; Tan, Kok Hian; Shek, Lynette Pei-Chi; Lee, Yung Seng; Lek, Ngee; Chan, Jerry Kok Yen; Chong, Mary Foong-Fong; Yap, Fabian

    2017-01-01

    Plasma 25-hydroxyvitamin D (25OHD) deficiency, poor sleep quality, and night-time eating have been independently associated with adverse pregnancy outcomes, but their inter-relationships are yet to be evaluated. We aimed to investigate the associations between maternal plasma 25OHD status and sleep quality and circadian eating patterns during pregnancy. Data on pregnant women (n = 890) from a prospective cohort (Growing Up in Singapore Towards healthy Outcomes) were analyzed. Plasma 25OHD concentration was measured, while the Pittsburgh sleep quality index (PSQI) and 24-h dietary recall were administered to women at 26–28 weeks’ gestation. Plasma 25OHD status was defined as sufficient (>75 nmol/L), insufficient (50–75 nmol/L), or deficient (<50 nmol/L). Poor sleep quality was defined by a total global PSQI score >5. Predominantly day-time (pDT) and predominantly night-time (pNT) were defined according to consumption of a greater proportion of calories (i.e., >50%) from 07:00–18:59 and from 19:00–06:59, respectively. After adjustment for confounders, women with plasma 25OHD deficiency had higher odds of poor sleep quality (odds ratio (OR) 3.49; 95% confidence interval (CI) 1.84–6.63) and pNT eating (OR 1.85; 95% CI 1.00–3.41) than those who were 25OHD sufficient. Our findings show the association of maternal plasma 25OHD deficiency with poor sleep quality and pNT eating at mid-pregnancy. PMID:28353643
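    The pDT/pNT definition above is a simple majority rule over clock time, which can be sketched directly (function name and input shape are invented for illustration):

    ```python
    # Minimal sketch of the day-time/night-time eating classification:
    # predominantly day-time (pDT) if >50% of calories fall in
    # 07:00-18:59, predominantly night-time (pNT) otherwise.

    def eating_pattern(meals):
        """meals: iterable of (hour, kcal) pairs; hour in 0..23."""
        day = sum(k for h, k in meals if 7 <= h <= 18)
        total = sum(k for _, k in meals)
        return "pDT" if day / total > 0.5 else "pNT"

    print(eating_pattern([(8, 400), (13, 600), (20, 300)]))   # -> pDT
    print(eating_pattern([(12, 300), (21, 500), (23, 400)]))  # -> pNT
    ```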

  6. The impact of retirement account distributions on measures of family income.

    PubMed

    Iams, Howard M; Purcell, Patrick J

    2013-01-01

    In recent decades, employers have increasingly replaced defined benefit (DB) pensions with defined contribution (DC) retirement accounts for their employees. DB plans provide annuities, or lifetime benefits paid at regular intervals. The timing and amounts of DC distributions, however, may vary widely. Most surveys that provide data on the family income of the aged either collect no data on nonannuity retirement account distributions, or exclude such distributions from their summary measures of family income. We use Survey of Income and Program Participation (SIPP) data for 2009 to estimate the impact of including retirement account distributions on total family income calculations. We find that about one-fifth of aged families received distributions from retirement accounts in 2009. Measured mean income for those families would be about 15 percent higher and median income would be 18 percent higher if those distributions were included in the SIPP summary measure of family income.

  7. The socioeconomic impact of hearing loss in U.S. adults.

    PubMed

    Emmett, Susan D; Francis, Howard W

    2015-03-01

    To evaluate the associations between hearing loss and educational attainment, income, and unemployment/underemployment in U.S. adults. National cross-sectional survey. Ambulatory examination centers. Adults aged 20 to 69 years who participated in the 1999 to 2002 cycles of the NHANES (National Health and Nutrition Examination Survey) audiometric evaluation and income questionnaire (N = 3,379). Pure-tone audiometry, with hearing loss defined by World Health Organization criteria of bilateral pure-tone average of more than 25 dB (0.5, 1, 2, 4 kHz). Low educational attainment, defined as not completing high school; low income, defined as family income less than $20,000 per year; and unemployment or underemployment, defined as not having a job or working less than 35 hours per week. Individuals with hearing loss had 3.21 times higher odds of low educational attainment (95% confidence interval [95% CI], 2.20-4.68) compared with normal-hearing individuals. Controlling for education, age, sex, and race, individuals with hearing loss had 1.58 times higher odds of low income (95% CI, 1.16-2.15) and 1.98 times higher odds of being unemployed or underemployed (95% CI, 1.38-2.85) compared with normal-hearing individuals. Hearing loss is associated with low educational attainment in U.S. adults. Even after controlling for education and important demographic factors, hearing loss is independently associated with economic hardship, including both low income and unemployment/underemployment. The societal impact of hearing loss is profound in this nationally representative study and should be further evaluated with longitudinal cohorts. Received institutional review board approval (National Center for Health Statistics Institutional Review Board Protocol no. 98-12).

  8. Measuring the EMS patient access time interval and the impact of responding to high-rise buildings.

    PubMed

    Morrison, Laurie J; Angelini, Mark P; Vermeulen, Marian J; Schwartz, Brian

    2005-01-01

    To measure the patient access time interval and characterize its contribution to the total emergency medical services (EMS) response time interval; to compare the patient access time intervals for patients located three or more floors above ground with those less than three floors above or below ground, and specifically in the apartment subgroup; and to identify barriers that significantly impede EMS access to patients in high-rise apartments. An observational study of all patients treated by an emergency medical technician-paramedic (EMT-P) crew was conducted using a trained independent observer to collect time intervals and identify potential barriers to access. Of 118 observed calls, 25 (21%) originated from patients three or more floors above ground. The overall median and 90th percentile (95% confidence interval) patient access time intervals were 1.61 (1.27, 1.91) and 3.47 (3.08, 4.05) minutes, respectively. The median interval was 2.73 (2.22, 3.03) minutes among calls from patients located three or more stories above ground compared with 1.25 (1.07, 1.55) minutes among those at lower levels. The patient access time interval represented 23.5% of the total EMS response time interval among calls originating less than three floors above or below ground and 32.2% of those located three or more stories above ground. The most frequently encountered barriers to access included security code entry requirements, lack of directional signs, and inability to fit the stretcher into the elevator. The patient access time interval is long and represents a substantial component of the total EMS response time interval, especially among ambulance calls originating three or more floors above ground. A number of barriers appear to contribute to delayed paramedic access.

  9. Association between impaired fasting glycaemia in pediatric obesity and type 2 diabetes in young adulthood.

    PubMed

    Hagman, E; Danielsson, P; Brandt, L; Ekbom, A; Marcus, C

    2016-08-22

    In adults, impaired fasting glycemia (IFG) increases the risk for type 2 diabetes mellitus (T2DM). This study aimed to investigate to what extent children with obesity develop T2DM during early adulthood, and to determine whether IFG and elevated hemoglobin A1c (HbA1c) in obese children are risk markers for early development of T2DM. In this prospective cohort study, 1620 subjects from the Swedish Childhood Obesity Treatment Registry - BORIS who were ⩾18 years at follow-up and 8046 individuals in a population-based comparison group, matched on gender, age, and living area, were included. IFG was defined according to both the ADA (cut-off 5.6 mmol l(-1)) and the WHO (6.1 mmol l(-1)). Elevated HbA1c was defined according to the ADA (cut-off 39 mmol mol(-1)). Main outcome was T2DM medication, as a proxy for T2DM. Data on medications were retrieved from a national registry. The childhood obesity cohort was 24 times more likely to receive T2DM medication in early adulthood compared with the comparison group (95% confidence interval (CI): 12.52-46). WHO-defined IFG predicted future use of T2DM medication with an adjusted hazard ratio (HR) of 3.73 (95% CI: 1.87-7.45) compared with those who had fasting glucose levels <5.6 mmol l(-1). A fasting glucose level of 5.6-6.0 mmol l(-1), that is, the IFG interval added by the American Diabetes Association (ADA), did not increase the use of T2DM medication more than pediatric obesity itself, adjusted HR = 1.72 (0.84-3.52). Elevated levels of HbA1c resulted in an adjusted HR = 3.12 (1.50-6.52). A more severe degree of obesity also increased the future T2DM risk. IFG according to the WHO and elevated HbA1c (39-48 mmol mol(-1)), but not the additional fasting glucose interval added by the ADA (5.6-6.0 mmol l(-1)), can be considered as prediabetes in the obese pediatric population in Sweden.
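    The two IFG definitions contrasted in this record differ only in the lower cut-off, which a small sketch makes concrete. The lower cut-offs (5.6 mmol/L for ADA, 6.1 mmol/L for WHO) come from the abstract; the 7.0 mmol/L upper bound is the conventional diabetes threshold and is an assumption here:

    ```python
    # Hedged sketch of the fasting-glucose categories. The 7.0 mmol/L
    # diabetes threshold is assumed, not stated in the record.

    IFG_LOWER = {"ADA": 5.6, "WHO": 6.1}  # lower IFG cut-offs, mmol/L

    def fasting_glucose_category(fpg_mmol_l, criteria="WHO"):
        if fpg_mmol_l >= 7.0:              # assumed diabetes threshold
            return "diabetic range"
        if fpg_mmol_l >= IFG_LOWER[criteria]:
            return "IFG"
        return "normal"

    # The same reading can be IFG under one definition and normal under
    # the other, which is exactly the 5.6-6.0 mmol/L band the study
    # found uninformative beyond pediatric obesity itself.
    print(fasting_glucose_category(5.8, "ADA"))  # -> IFG
    print(fasting_glucose_category(5.8, "WHO"))  # -> normal
    ```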

  10. Modulation of human time processing by subthalamic deep brain stimulation.

    PubMed

    Wojtecki, Lars; Elben, Saskia; Timmermann, Lars; Reck, Christiane; Maarouf, Mohammad; Jörgens, Silke; Ploner, Markus; Südmeyer, Martin; Groiss, Stefan Jun; Sturm, Volker; Niedeggen, Michael; Schnitzler, Alfons

    2011-01-01

    Timing in the range of seconds, referred to as interval timing, is crucial for cognitive operations and conscious time processing. According to recent models of interval timing, basal ganglia (BG) oscillatory loops are involved in time interval recognition. Parkinson's disease (PD) is a typical disease of the basal ganglia that shows distortions in interval timing. Deep brain stimulation (DBS) of the subthalamic nucleus (STN) is a powerful treatment of PD which modulates motor and cognitive functions depending on stimulation frequency by affecting subcortical-cortical oscillatory loops. Thus, for the understanding of BG involvement in interval timing it is of interest whether STN-DBS can modulate timing in a frequency-dependent manner by interference with oscillatory time recognition processes. We examined production and reproduction of 5- and 15-second intervals and millisecond timing in a double-blind, randomised, within-subject repeated-measures design of 12 PD patients applying no, 10-Hz- and ≥ 130-Hz-STN-DBS compared to healthy controls. We found under(re-)production of the 15-second interval and a significant enhancement of this under(re-)production by 10-Hz stimulation compared to no stimulation, ≥ 130-Hz-STN-DBS and controls. Millisecond timing was not affected. We provide first evidence for a frequency-specific modulatory effect of STN-DBS on interval timing. Our results corroborate the involvement of BG in general and of the STN in particular in the cognitive representation of time intervals in the range of multiple seconds.

  11. Modulation of Human Time Processing by Subthalamic Deep Brain Stimulation

    PubMed Central

    Timmermann, Lars; Reck, Christiane; Maarouf, Mohammad; Jörgens, Silke; Ploner, Markus; Südmeyer, Martin; Groiss, Stefan Jun; Sturm, Volker; Niedeggen, Michael; Schnitzler, Alfons

    2011-01-01

    Timing in the range of seconds, referred to as interval timing, is crucial for cognitive operations and conscious time processing. According to recent models of interval timing, basal ganglia (BG) oscillatory loops are involved in time interval recognition. Parkinson's disease (PD) is a typical disease of the basal ganglia that shows distortions in interval timing. Deep brain stimulation (DBS) of the subthalamic nucleus (STN) is a powerful treatment of PD which modulates motor and cognitive functions depending on stimulation frequency by affecting subcortical-cortical oscillatory loops. Thus, for the understanding of BG involvement in interval timing it is of interest whether STN-DBS can modulate timing in a frequency-dependent manner by interference with oscillatory time recognition processes. We examined production and reproduction of 5- and 15-second intervals and millisecond timing in a double-blind, randomised, within-subject repeated-measures design of 12 PD patients applying no, 10-Hz- and ≥130-Hz-STN-DBS compared to healthy controls. We found under(re-)production of the 15-second interval and a significant enhancement of this under(re-)production by 10-Hz stimulation compared to no stimulation, ≥130-Hz-STN-DBS and controls. Millisecond timing was not affected. We provide first evidence for a frequency-specific modulatory effect of STN-DBS on interval timing. Our results corroborate the involvement of BG in general and of the STN in particular in the cognitive representation of time intervals in the range of multiple seconds. PMID:21931767

  12. Study design and sampling intensity for demographic analyses of bear populations

    USGS Publications Warehouse

    Harris, R.B.; Schwartz, C.C.; Mace, R.D.; Haroldson, M.A.

    2011-01-01

    The rate of population change through time (λ) is a fundamental element of a wildlife population's conservation status, yet estimating it with acceptable precision for bears is difficult. For studies that follow known (usually marked) bears, λ can be estimated during some defined time by applying either life-table or matrix projection methods to estimates of individual vital rates. Usually, however, confidence intervals surrounding the estimate are broader than one would like. Using an estimator suggested by Doak et al. (2005), we explored the precision to be expected in λ from demographic analyses of typical grizzly (Ursus arctos) and American black (U. americanus) bear data sets. We also evaluated some trade-offs among vital rates in sampling strategies. Confidence intervals around λ were more sensitive to adding to the duration of a short (e.g., 3 yrs) than a long (e.g., 10 yrs) study, and more sensitive to adding additional bears to studies with small (e.g., 10 adult females/yr) than large (e.g., 30 adult females/yr) sample sizes. Confidence intervals of λ projected using process-only variance of vital rates were only slightly smaller than those projected using total variances of vital rates. Under sampling constraints typical of most bear studies, it may be more efficient to invest additional resources into monitoring recruitment and juvenile survival rates of females already a part of the study, than to simply increase the sample size of study females. © 2011 International Association for Bear Research and Management.

  13. Monte Carlo Method for Determining Earthquake Recurrence Parameters from Short Paleoseismic Catalogs: Example Calculations for California

    USGS Publications Warehouse

    Parsons, Tom

    2008-01-01

    Paleoearthquake observations often lack enough events at a given site to directly define a probability density function (PDF) for earthquake recurrence. Sites with fewer than 10-15 intervals do not provide enough information to reliably determine the shape of the PDF using standard maximum-likelihood techniques [e.g., Ellsworth et al., 1999]. In this paper I present a method that attempts to fit wide ranges of distribution parameters to short paleoseismic series. From repeated Monte Carlo draws, it becomes possible to quantitatively estimate most likely recurrence PDF parameters, and a ranked distribution of parameters is returned that can be used to assess uncertainties in hazard calculations. In tests on short synthetic earthquake series, the method gives results that cluster around the mean of the input distribution, whereas maximum likelihood methods return the sample means [e.g., NIST/SEMATECH, 2006]. For short series (fewer than 10 intervals), sample means tend to reflect the median of an asymmetric recurrence distribution, possibly leading to an overestimate of the hazard should they be used in probability calculations. Therefore a Monte Carlo approach may be useful for assessing recurrence from limited paleoearthquake records. Further, the degree of functional dependence among parameters like mean recurrence interval and coefficient of variation can be established. The method is described for use with time-independent and time-dependent PDFs, and results from 19 paleoseismic sequences on strike-slip faults throughout the state of California are given.

  14. Monte Carlo method for determining earthquake recurrence parameters from short paleoseismic catalogs: Example calculations for California

    USGS Publications Warehouse

    Parsons, T.

    2008-01-01

    Paleoearthquake observations often lack enough events at a given site to directly define a probability density function (PDF) for earthquake recurrence. Sites with fewer than 10-15 intervals do not provide enough information to reliably determine the shape of the PDF using standard maximum-likelihood techniques (e.g., Ellsworth et al., 1999). In this paper I present a method that attempts to fit wide ranges of distribution parameters to short paleoseismic series. From repeated Monte Carlo draws, it becomes possible to quantitatively estimate most likely recurrence PDF parameters, and a ranked distribution of parameters is returned that can be used to assess uncertainties in hazard calculations. In tests on short synthetic earthquake series, the method gives results that cluster around the mean of the input distribution, whereas maximum likelihood methods return the sample means (e.g., NIST/SEMATECH, 2006). For short series (fewer than 10 intervals), sample means tend to reflect the median of an asymmetric recurrence distribution, possibly leading to an overestimate of the hazard should they be used in probability calculations. Therefore a Monte Carlo approach may be useful for assessing recurrence from limited paleoearthquake records. Further, the degree of functional dependence among parameters like mean recurrence interval and coefficient of variation can be established. The method is described for use with time-independent and time-dependent PDFs, and results from 19 paleoseismic sequences on strike-slip faults throughout the state of California are given.
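    The parameter-sweep idea in these two records can be illustrated with a small stdlib-only sketch: draw many candidate (μ, σ) pairs for a lognormal recurrence model, score each against a short interval series by log-likelihood, and keep a ranked list. This is an illustrative analogue, not the paper's exact implementation, and all names and parameter ranges are invented:

    ```python
    # Illustrative Monte Carlo sweep: score random lognormal parameter
    # draws against a short recurrence-interval series and rank them.
    import math
    import random

    def loglik_lognormal(intervals, mu, sigma):
        """Total log-likelihood of the intervals under lognormal(mu, sigma)."""
        return sum(
            -math.log(x * sigma * math.sqrt(2 * math.pi))
            - (math.log(x) - mu) ** 2 / (2 * sigma ** 2)
            for x in intervals
        )

    def ranked_draws(intervals, n_draws=5000, seed=1):
        rng = random.Random(seed)
        draws = []
        for _ in range(n_draws):
            mu = rng.uniform(math.log(10), math.log(1000))  # recurrence scale
            sigma = rng.uniform(0.1, 1.5)                   # aperiodicity
            draws.append((loglik_lognormal(intervals, mu, sigma), mu, sigma))
        draws.sort(reverse=True)                            # best fit first
        return draws

    # A short synthetic series (7 intervals, in years):
    series = [95, 110, 80, 130, 105, 90, 120]
    best = ranked_draws(series)[0]
    print(round(math.exp(best[1])))  # median recurrence of the best draw
    ```

    The ranked tail of `draws`, not just the best entry, is what conveys the parameter uncertainty that the record emphasizes for hazard calculations.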

  15. [Criteria for the classification as a "domestic-setting corpse"--a literature search and review to define the term].

    PubMed

    Merz, Marius; Birngruber, Christoph G; Heidorn, Frank; Ramsthaler, Frank; Risse, Manfred; Kreutz, Kerstin; Krähahn, Jonathan; Verhoff, Marcel A

    2011-01-01

    In German medical and media circles (daily routine, specialist literature, press, novels), the term "domestic-setting corpse" is frequently used, but the term is only vaguely defined. The authors thus decided to perform an in-depth study of the literature, including historic textbooks and all German- and English-language medicolegal journals, going as far back as their first issues, in an attempt to more clearly define the term. Inclusion criteria used in the search were a post-mortem interval of at least 24 hours prior to discovery and discovery of the corpse in a domestic setting. In the literature, 37 cases that complied with the above-mentioned inclusion criteria were found. These cases frequently described "advanced decomposition", often "unclear cause of death" and "problems in identification". These characteristics can thus be considered as being additional pointers in the definition. However, we suggest that the two general defining characteristics of a "domestic-setting corpse" are a post-mortem interval of more than 24 hours before discovery and the discovery of the corpse in a domestic setting.

  16. Conformity and statistical tolerancing

    NASA Astrophysics Data System (ADS)

    Leblond, Laurent; Pillet, Maurice

    2018-02-01

    Statistical tolerancing was first proposed by Shewhart (Economic Control of Quality of Manufactured Product, 1931; reprinted 1980 by ASQC). In spite of this long history, its use remains moderate. One of the probable reasons for this low utilization is undoubtedly the difficulty for designers to anticipate the risks of this approach. Arithmetic (worst-case) tolerancing allows a simple interpretation: conformity is defined by the presence of the characteristic in an interval. Statistical tolerancing is more complex in its definition: an interval is not sufficient to define conformance. To justify the statistical tolerancing formula used by designers, a tolerance interval should be interpreted as the interval where most of the parts produced should probably be located. This tolerance is justified by considering a conformity criterion for the parts that guarantees low offsets on the latter characteristics. Unlike traditional arithmetic tolerancing, statistical tolerancing requires a sustained exchange of information between design and manufacture to be used safely. This paper proposes a formal definition of conformity, which we apply successively to quadratic and arithmetic tolerancing. We introduce a concept of concavity, which helps us to demonstrate the link between the tolerancing approach and conformity. We use this concept to demonstrate the various acceptable propositions of statistical tolerancing (in the decentring-dispersion space).
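    The contrast between the two tolerancing rules can be shown numerically. The root-sum-square stack-up below is the standard quadratic rule for independent contributions; it is offered as background to the abstract's terminology, not as the paper's own formulation:

    ```python
    # Contrast of the two stack-up rules: arithmetic (worst-case)
    # tolerancing adds component tolerances; quadratic (statistical,
    # root-sum-square) tolerancing adds them in quadrature.
    import math

    def arithmetic_stack(tols):
        return sum(tols)

    def quadratic_stack(tols):
        return math.sqrt(sum(t * t for t in tols))

    tols = [3.0, 4.0]  # component tolerances, arbitrary units
    print(arithmetic_stack(tols))  # -> 7.0
    print(quadratic_stack(tols))   # -> 5.0
    ```

    The quadratic stack is always the tighter of the two, which is why, as the abstract notes, an interval alone no longer suffices to define conformity: the statistical rule implicitly assumes something about how individual parts are distributed.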

  17. CDM analysis

    NASA Technical Reports Server (NTRS)

    Larson, Robert E.; Mcentire, Paul L.; Oreilly, John G.

    1993-01-01

    The C Data Manager (CDM) is an advanced tool for creating an object-oriented database and for processing queries related to objects stored in that database. The CDM source code was purchased and will be modified over the course of the Arachnid project. In this report, the modified CDM is referred to as MCDM. Using MCDM, a detailed series of experiments was designed and conducted on a Sun Sparcstation. The primary results and analysis of the CDM experiment are provided in this report. The experiments involved creating the Long-form Faint Source Catalog (LFSC) database and then analyzing it with respect to the following: (1) the relationships between the volume of data and the time required to create a database; (2) the storage requirements of the database files; and (3) the properties of query algorithms. The effort focused on defining, implementing, and analyzing seven experimental scenarios: (1) find all sources by right ascension (RA); (2) find all sources by declination (DEC); (3) find all sources in the right ascension interval (RA1, RA2); (4) find all sources in the declination interval (DEC1, DEC2); (5) find all sources in the rectangle defined by (RA1, RA2, DEC1, DEC2); (6) find all sources that meet certain compound conditions; and (7) analyze a variety of query algorithms. Throughout this document, the numerical results obtained from these scenarios are reported; conclusions are presented at the end of the document.
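    Query scenario (5) reduces to a rectangle test in (RA, DEC) space, which can be sketched in a few lines. The catalog layout and field names here are hypothetical, standing in for the LFSC object schema:

    ```python
    # Toy version of scenario (5): select all catalog sources inside a
    # rectangle in right ascension and declination.

    def sources_in_rectangle(catalog, ra1, ra2, dec1, dec2):
        return [s for s in catalog
                if ra1 <= s["ra"] <= ra2 and dec1 <= s["dec"] <= dec2]

    catalog = [
        {"name": "A", "ra": 10.0, "dec": -5.0},
        {"name": "B", "ra": 45.5, "dec": 20.0},
        {"name": "C", "ra": 46.0, "dec": 80.0},
    ]
    hits = sources_in_rectangle(catalog, 40.0, 50.0, 0.0, 30.0)
    print([s["name"] for s in hits])  # -> ['B']
    ```

    Scenarios (3) and (4) are the degenerate cases of this test with only one coordinate constrained; a real implementation would use an index rather than a linear scan, which is precisely what scenario (7)'s query-algorithm comparison examines.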

  18. Potential and Limitations of Cochrane Reviews in Pediatric Cardiology: A Systematic Analysis.

    PubMed

    Poryo, Martin; Khosrawikatoli, Sara; Abdul-Khaliq, Hashim; Meyer, Sascha

    2017-04-01

    Evidence-based medicine has contributed substantially to the quality of medical care in pediatric and adult cardiology. However, our impression from the bedside is that a substantial number of Cochrane reviews generate inconclusive data that are of limited clinical benefit. We performed a systematic synopsis of Cochrane reviews published between 2001 and 2015 in the field of pediatric cardiology. Main outcome parameters were the number and percentage of conclusive, partly conclusive, and inconclusive reviews as well as their recommendations and their development over three a priori defined intervals. In total, 69 reviews were analyzed. Most of them examined preterm and term neonates (36.2%), while 33.3% also included non-pediatric patients. Leading topics were pharmacological issues (71.0%), followed by interventional (10.1%) and operative procedures (2.9%). The majority of reviews were inconclusive (42.9%), while 36.2% were conclusive and 21.7% partly conclusive. Although the number of published reviews increased over the three a priori defined time intervals, reviews with "no specific recommendations" remained stable, while "recommendations in favor of an intervention" clearly increased. Main reasons for missing recommendations were insufficient data (n = 41), an insufficient number of trials (n = 22), and poor study quality (n = 19). There is still a need for high-quality research, which will likely yield a greater number of Cochrane reviews with conclusive results.

  19. Identification of a self-paced hitting task in freely moving rats based on adaptive spike detection from multi-unit M1 cortical signals

    PubMed Central

    Hammad, Sofyan H. H.; Farina, Dario; Kamavuako, Ernest N.; Jensen, Winnie

    2013-01-01

    Invasive brain–computer interfaces (BCIs) may prove to be a useful rehabilitation tool for severely disabled patients. Although some systems have shown to work well in restricted laboratory settings, their usefulness must be tested in less controlled environments. Our objective was to investigate if a specific motor task could reliably be detected from multi-unit intra-cortical signals from freely moving animals. Four rats were trained to hit a retractable paddle (defined as a “hit”). Intra-cortical signals were obtained from electrodes placed in the primary motor cortex. First, the signal-to-noise ratio was increased by wavelet denoising. Action potentials were then detected using an adaptive threshold, counted in three consecutive time intervals and were used as features to classify either a “hit” or a “no-hit” (defined as an interval between two “hits”). We found that a “hit” could be detected with an accuracy of 75 ± 6% when wavelet denoising was applied whereas the accuracy dropped to 62 ± 5% without prior denoising. We compared our approach with the common daily practice in BCI that consists of using a fixed, manually selected threshold for spike detection without denoising. The results showed the feasibility of detecting a motor task in a less restricted environment than commonly applied within invasive BCI research. PMID:24298254
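    The detection-and-feature pipeline described above (adaptive threshold, then spike counts in three consecutive windows) can be sketched as follows. The k·median(|x|)/0.6745 noise estimate is a common convention and an assumption here, and the wavelet-denoising step is omitted for brevity:

    ```python
    # Rough sketch: robust noise estimate, adaptive threshold, then
    # supra-threshold counts in three consecutive time bins as features.
    from statistics import median

    def spike_counts(signal, k=5.0, n_bins=3):
        sigma = median(abs(x) for x in signal) / 0.6745  # robust noise estimate
        thresh = k * sigma                               # adaptive threshold
        width = len(signal) // n_bins
        return [sum(1 for x in signal[i * width:(i + 1) * width]
                    if abs(x) > thresh)
                for i in range(n_bins)]

    # Synthetic trace: low-level "noise" with unit spikes at known samples.
    sig = [0.1 if i % 2 else -0.1 for i in range(300)]
    for i in (10, 50, 150, 250, 260, 270):
        sig[i] = 10.0
    print(spike_counts(sig))  # -> [2, 1, 3]
    ```

    The resulting count vector is the kind of feature the study fed to a "hit" / "no-hit" classifier; with real multi-unit recordings, denoising before thresholding is what lifted accuracy from 62% to 75% in the record above.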

  20. GP-delivered secondary prevention cardiovascular disease programme; early predictors of likelihood of patient non-adherence.

    PubMed

    Fitzpatrick, Patricia; Lonergan, Moira; Collins, Claire; Daly, Leslie

    2010-12-01

    The aim of this study was to determine how routinely recorded data could predict, at an early stage, the likelihood of patient non-adherence to a primary care-delivered secondary prevention programme for established coronary heart disease (CHD), in which 10,851 patients with CHD were invited to attend four times per year. Non-adherence was defined as attending no more than three GP visits ever. The study sample was selected to allow a possible two-year recorded follow-up period in which patients could take up invitations. Administrative recordings of visit dates and intervals between visits, baseline results of key parameters, and early changes were examined using logistic regression to determine independent predictors of non-adherence. A longer interval between early visits, no family history of CHD, smoking, and being outside the exercise target at baseline were independently associated with non-adherence. Early identification by GPs of those who fail to attend on time or who defer appointments, in addition to persistence of lifestyle factors unchanged by a prior serious cardiac event, should serve as a warning sign that targeted interventions to maintain adherence in primary care-delivered secondary prevention programmes are necessary.

  1. A comparison of climatological observing windows and their impact on detecting daily temperature extrema

    NASA Astrophysics Data System (ADS)

    Žaknić-Ćatović, Ana; Gough, William A.

    2018-04-01

    A climatological observing window (COW) is defined as a time frame over which continuous or extreme air temperature measurements are collected. A 24-h time interval, ending at 00UTC or shifted to end at 06UTC, has been associated with difficulties in characterizing daily temperature extrema. A fixed 24-h COW used to obtain the temperature minima can misidentify them, because the time discretization interval fragments a single night into two consecutive nighttime periods. The correct identification of air temperature extrema is achievable using a COW that identifies the daily minimum over a single nighttime period and the maximum over a single daytime period, as determined by sunrise and sunset. Because hourly air temperature observations are commonly unavailable, the accuracy of the mean temperature estimate depends on the accuracy with which the diurnal air temperature extrema are determined. Qualitative and quantitative criteria were used to examine the impact of the COW on detecting daily air temperature extrema. The timing of the 24-h observing window occasionally affects the determination of daily extrema through a mischaracterization of the diurnal minima and by extension can lead to errors in determining daily mean temperature. Hourly air temperature data for the period from 1987 to 2014, obtained from the Toronto Buttonville Municipal Airport weather station, were used to analyze the impact of the COW on the detection of daily temperature extrema and on annual temperature averages calculated from those extrema.

  2. Cigarette smoke chemistry market maps under Massachusetts Department of Public Health smoking conditions.

    PubMed

    Morton, Michael J; Laffoon, Susan W

    2008-06-01

    This study extends the market mapping concept introduced by Counts et al. (Counts, M.E., Hsu, F.S., Tewes, F.J., 2006. Development of a commercial cigarette "market map" comparison methodology for evaluating new or non-conventional cigarettes. Regul. Toxicol. Pharmacol. 46, 225-242) to include both temporal cigarette and testing variation and also machine smoking with more intense puffing parameters, as defined by the Massachusetts Department of Public Health (MDPH). The study was conducted over a two year period and involved a total of 23 different commercial cigarette brands from the U.S. marketplace. Market mapping prediction intervals were developed for 40 mainstream cigarette smoke constituents and the potential utility of the market map as a comparison tool for new brands was demonstrated. The over-time character of the data allowed for the variance structure of the smoke constituents to be more completely characterized than is possible with one-time sample data. The variance was partitioned among brand-to-brand differences, temporal differences, and the remaining residual variation using a mixed random and fixed effects model. It was shown that a conventional weighted least squares model typically gave similar prediction intervals to those of the more complicated mixed model. For most constituents there was less difference in the prediction intervals calculated from over-time samples and those calculated from one-time samples than had been anticipated. One-time sample maps may be adequate for many purposes if the user is aware of their limitations. Cigarette tobacco fillers were analyzed for nitrate, nicotine, tobacco-specific nitrosamines, ammonia, chlorogenic acid, and reducing sugars. The filler information was used to improve predicting relationships for several of the smoke constituents, and it was concluded that the effects of filler chemistry on smoke chemistry were partial explanations of the observed brand-to-brand variation.

  3. The predictive value of C-reactive protein (CRP) in acute pancreatitis - is interval change in CRP an additional indicator of severity?

    PubMed

    Stirling, Aaron D; Moran, Neil R; Kelly, Michael E; Ridgway, Paul F; Conlon, Kevin C

    2017-10-01

    Using outcomes defined by the revised Atlanta classification, we compared absolute values of C-reactive protein (CRP) with interval changes in CRP for severity stratification in acute pancreatitis (AP). A retrospective study of all first-incidence AP was conducted over a 5-year period. Interval change in CRP values from admission to day 1, 2 and 3 was compared against the absolute values. Receiver-operating characteristic (ROC) curves and likelihood ratios (LRs) were used to compare the ability to predict severe and mild disease. 337 cases of first-incidence AP were included in our analysis. ROC curve analysis demonstrated the second day as the most useful time for repeat CRP measurement. A CRP interval change >90 mg/dL at 48 h (+LR 2.15, -LR 0.26) was equivalent to an absolute value of >150 mg/dL within 48 h (+LR 2.32, -LR 0.25). The optimal cut-off for absolute CRP based on the new, more stringent definition of severity was >190 mg/dL (+LR 2.72, -LR 0.24). Interval change in CRP is a measure comparable to absolute CRP in the prognostication of AP severity. This study suggests that a rise of >90 mg/dL from admission or an absolute value of >190 mg/dL at 48 h predicts severe disease with the greatest accuracy. Copyright © 2017 International Hepato-Pancreato-Biliary Association Inc. Published by Elsevier Ltd. All rights reserved.
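The +LR/-LR figures quoted above follow directly from the sensitivity and specificity of each cutoff. As a reminder of the arithmetic only (the study's underlying sensitivity/specificity values are not reproduced here, and the function name is ours):

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios for a binary cutoff.

    +LR = sensitivity / (1 - specificity): how much a positive test
          raises the odds of disease.
    -LR = (1 - sensitivity) / specificity: how much a negative test
          lowers them.
    """
    pos_lr = sensitivity / (1.0 - specificity)
    neg_lr = (1.0 - sensitivity) / specificity
    return pos_lr, neg_lr
```

A cutoff with, say, 80% sensitivity and 60% specificity yields +LR = 2.0 and -LR ≈ 0.33, in the same range as the CRP cutoffs reported above.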

  4. Mining of hospital laboratory information systems: a model study defining age- and gender-specific reference intervals and trajectories for plasma creatinine in a pediatric population.

    PubMed

    Søeby, Karen; Jensen, Peter Bjødstrup; Werge, Thomas; Sørensen, Steen

    2015-09-01

    Knowledge of the physiological fluctuation and variation of even commonly used biochemical quantities in extreme age groups and during development is sparse. This challenges the clinical interpretation and utility of laboratory tests in these age groups. To explore the utility of hospital laboratory data as a source of information, we analyzed enzymatic plasma creatinine as a model analyte in two large pediatric hospital samples. Plasma creatinine measurements from 9700 children aged 0-18 years were obtained from hospital laboratory databases and partitioned into high-resolution gender and age groups. Normal probability plots were used to deduce parameters of the normal distributions from healthy creatinine values in the mixed hospital datasets. Furthermore, temporal trajectories were generated from repeated measurements to examine developmental patterns in periods of changing creatinine levels. Creatinine shows strong age dependence from birth throughout childhood. We computed and replicated 95% reference intervals in narrow gender and age bins and showed them to be comparable to those determined in healthy population studies. We identified pronounced transitions in creatinine levels at different time points after birth and around the early teens, which challenges the establishment and usefulness of reference intervals in those age groups. The study documents that hospital laboratory data may inform on the developmental aspects of creatinine, on periods of pronounced heterogeneity, and on the validity of reference intervals. Furthermore, part of the heterogeneity in creatinine distribution is likely due to differences in the biological and chronological age of children and should be considered when using age-specific reference intervals.
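Once normal-distribution parameters have been deduced for a gender/age bin (as with the probability plots described above), a central 95% reference interval reduces to mean ± 1.96 SD. A minimal sketch with illustrative values only, not the study's data:

```python
import statistics

def reference_interval(values, z=1.96):
    """Central 95% reference interval under a normality assumption.

    mean +/- 1.96 * SD covers ~95% of a normal distribution; 'values'
    stands in for the healthy measurements in one gender/age bin.
    """
    mu = statistics.fmean(values)
    sd = statistics.stdev(values)  # sample standard deviation
    return mu - z * sd, mu + z * sd
```

In practice the hard part, per the record above, is the partitioning and the separation of healthy from pathological values, not this final step.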

  5. Discrete interference modeling via boolean algebra.

    PubMed

    Beckhoff, Gerhard

    2011-01-01

    Two types of Boolean functions are considered, the locus function of n variables and the interval function of ν = n - 1 variables. A 1-1 mapping is given that takes elements (cells) of the interval function to antidual pairs of elements in the locus function, and vice versa. A set of ν binary codewords representing the intervals is defined and used to generate the codewords of all genomic regions. Next, a diallelic three-point system is reviewed in the light of Boolean functions, which leads to redefining complete interference by a logic function. Together with the upper bound of noninterference, already defined by a Boolean function, it confines the region of interference. Extensions of these two functions to any finite ν are straightforward, but have also been made in terms of variables taken from the inclusion-exclusion principle (expressing "at least" and "exactly equal to" a decimal integer). Two coefficients of coincidence for systems with more than three loci are defined and discussed, one using the average of several individual coefficients and the other taking as coefficient a real number between zero and one. Finally, by way of a malfunction of the mod-2 addition, it is shown that a four-point system may produce two different functions, one of which exhibits loss of a class of odd recombinants.

  6. Joint probabilities of extreme precipitation and wind gusts in Germany

    NASA Astrophysics Data System (ADS)

    von Waldow, H.; Martius, O.

    2012-04-01

    Extreme meteorological events such as storms, heavy rain, floods, droughts and heat waves can have devastating consequences for human health, infrastructure and ecosystems. Concomitantly occurring extreme events might interact synergistically to produce a particularly hazardous impact. The joint occurrence of droughts and heat waves, for example, can have a very different impact on human health and ecosystems, both in quantity and quality, than just one of the two extreme events. The co-occurrence of certain types of extreme events is plausible from physical and dynamical considerations, for example heavy precipitation and high wind speeds in the pathway of strong extratropical cyclones. The winter storm Kyrill not only caused wind gust speeds well in excess of 30 m/s across Europe, but also brought 24 h precipitation sums greater than the mean January accumulations in some regions. However, the existence of such compound risks is currently not accounted for by insurance companies, which assume independence of extreme weather events when calculating their premiums. While there are established statistical methods to model the extremes of univariate meteorological variables, the modelling of multidimensional extremes calls for an approach that is tailored to the specific problem at hand. A first step involves defining extreme bivariate wind/precipitation events. Because precipitation and wind gusts caused by the same cyclone or convective cell do not occur at exactly the same location and at the same time, it is necessary to find a sound definition of "extreme compound event" for this case. We present a data-driven method to choose appropriate time and space intervals that define "concomitance" for wind and precipitation extremes. Based on station data of wind speed and gridded precipitation data, we arrive at time and space intervals that compare well with the typical time and space scales of extratropical cyclones, i.e. a maximum time lag of 1 day and a maximum distance of about 300 km between associated wind and rain events. After modelling extreme precipitation and wind separately, we explore the practicability of characterising their joint distribution using a bivariate threshold excess model. In particular, we present different dependence measures and report on the computational feasibility and available computer codes.

  7. Modified-release hydrocortisone to provide circadian cortisol profiles.

    PubMed

    Debono, Miguel; Ghobadi, Cyrus; Rostami-Hodjegan, Amin; Huatan, Hiep; Campbell, Michael J; Newell-Price, John; Darzy, Ken; Merke, Deborah P; Arlt, Wiebke; Ross, Richard J

    2009-05-01

    Cortisol has a distinct circadian rhythm regulated by the brain's central pacemaker. Loss of this rhythm is associated with metabolic abnormalities, fatigue, and poor quality of life. Conventional glucocorticoid replacement cannot replicate this rhythm. Our objectives were to define key variables of physiological cortisol rhythm, and by pharmacokinetic modeling test whether modified-release hydrocortisone (MR-HC) can provide circadian cortisol profiles. The study was performed at a Clinical Research Facility. Using data from a cross-sectional study in healthy reference subjects (n = 33), we defined parameters for the cortisol rhythm. We then tested MR-HC against immediate-release hydrocortisone in healthy volunteers (n = 28) in an open-label, randomized, single-dose, cross-over study. We compared profiles with physiological cortisol levels, and modeled an optimal treatment regimen. The key variables in the physiological cortisol profile included: peak 15.5 microg/dl (95% reference range 11.7-20.6), acrophase 0832 h (95% confidence interval 0759-0905), nadir less than 2 microg/dl (95% reference range 1.5-2.5), time of nadir 0018 h (95% confidence interval 2339-0058), and quiescent phase (below the mesor) 1943-0531 h. MR-HC 15 mg demonstrated delayed and sustained release with a mean (sem) maximum observed concentration of 16.6 (1.4) microg/dl at 7.41 (0.57) h after drug. Bioavailability of MR-HC 5, 10, and 15 mg was 100, 79, and 86% that of immediate-release hydrocortisone. Modeling suggested that MR-HC 15-20 mg at 2300 h and 10 mg at 0700 h could reproduce physiological cortisol levels. By defining circadian rhythms and using modern formulation technology, it is possible to allow a more physiological circadian replacement of cortisol.

  8. Department of Defense Precise Time and Time Interval program improvement plan

    NASA Technical Reports Server (NTRS)

    Bowser, J. R.

    1981-01-01

    The United States Naval Observatory is responsible for ensuring uniformity in precise time and time interval (PTTI) operations, including measurements, the establishment of overall DOD requirements for time and time interval, and the accomplishment of objectives requiring precise time and time interval at minimum cost. An overview of the objectives, the approach to the problem, and the schedule is presented, together with a status report that includes significant findings on organizational relationships, current directives, principal PTTI users, and future requirements as currently identified by the users.

  9. Anesthesia preparation time is not affected by the experience level of the resident involved during his/her first month of adult cardiac surgery.

    PubMed

    Broussard, David M; Couch, Michael C

    2011-10-01

    This study was designed to answer the question of whether the experience level of the resident on his/her first month of adult cardiothoracic anesthesiology has an impact on operating room efficiency in a large academic medical center. Traditionally, the resident's 1st month of cardiac anesthesia had been reserved for the clinical anesthesia (CA)-2 year of training. This study analyzed the impact on operating room efficiency of moving the 1st month of cardiac anesthesia into the CA-1 year. The authors hypothesized that there would be no difference in anesthesia preparation times (defined as the interval between "in-room" and "anesthesia-ready" times) between CA-1 and CA-2 residents on their 1st month of cardiac anesthesia. This study was retrospective and used an electronic anesthesia information management system database. This study was conducted on care provided at a single 450-bed academic medical center. This study included 12 residents in their 1st month of cardiac anesthesia. The anesthesia preparation time (defined as the interval between "in-room" and "anesthesia-ready" times) was measured for cases involving residents on their first month of cardiac anesthesia. Anesthesia preparation times for 6 CA-1 resident months and 6 CA-2 resident months (100 adult cardiac procedures in total) were analyzed (49 for the CA-1 residents and 51 for the CA-2s). There were no differences in preparation time between CA-1 and CA-2 residents as a group (p = 0.8169). The CA-1 residents had an unadjusted mean (±standard error) of 51.1 ± 3.18 minutes, whereas the CA-2 residents' unadjusted mean was 50.2 ± 2.41 minutes. Adjusting for case mix (valves v coronary artery bypass graft surgery), the CA-1 mean was 49.1 ± 5.22 minutes, whereas the CA-2 mean was 49.1 ± 4.54 minutes. These findings suggest that operating room efficiency as measured by the anesthesia preparation time may not be affected by the level of the resident on his/her 1st month of adult cardiac anesthesia. 
Copyright © 2011 Elsevier Inc. All rights reserved.

  10. The Application of Nonstandard Analysis to the Study of Inviscid Shock Wave Jump Conditions

    NASA Technical Reports Server (NTRS)

    Farassat, F.; Baty, R. S.

    1998-01-01

    The use of conservation laws in nonconservative form for deriving shock jump conditions by Schwartz distribution theory leads to ambiguous products of generalized functions. Nonstandard analysis is used to define a class of Heaviside functions where the jump from zero to one occurs on an infinitesimal interval. These Heaviside functions differ by their microstructure near x = 0, i.e., by the nature of the rise within the infinitesimal interval. It is shown that the conservation laws in nonconservative form can relate the different Heaviside functions used to define jumps in different flow parameters. There are no mathematical or logical ambiguities in the derivation of the jump conditions. An important result is that the microstructure of the Heaviside function of the jump in entropy has a positive peak greater than one within the infinitesimal interval where the jump occurs. This phenomenon is known from more sophisticated studies of the structure of shock waves using the viscous-fluid assumption. However, the present analysis is simpler and more direct.

  11. Ion detection device and method with compressing ion-beam shutter

    DOEpatents

    Sperline, Roger P [Tucson, AZ

    2009-05-26

    An ion detection device, method and computer readable medium storing instructions for applying voltages to shutter elements of the detection device to compress ions in a volume defined by the shutter elements and to output the compressed ions to a collector. The ion detection device has a chamber having an inlet and receives ions through the inlet, a shutter provided in the chamber opposite the inlet and configured to allow or prevent the ions to pass the shutter, the shutter having first and second shutter elements, a collector provided in the chamber opposite the shutter and configured to collect ions passed through the shutter, and a processing unit electrically connected to the first and second shutter elements. The processing unit applies, during a first predetermined time interval, a first voltage to the first shutter element and a second voltage to the second shutter element, the second voltage being lower than the first voltage such that ions from the inlet enter a volume defined by the first and second shutter elements, and during a second predetermined time interval, a third voltage to the first shutter element, higher than the first voltage, and a fourth voltage to the second shutter element, the third voltage being higher than the fourth voltage such that ions that entered the volume are compressed as the ions exit the volume and new ions coming from the inlet are prevented from entering the volume. The processing unit is electrically connected to the collector and configured to detect the compressed ions based at least on a current received from the collector and produced by the ions collected by the collector.

  12. Vehicle Exposure and Spinal Musculature Fatigue in Military Warfighters: A Meta-Analysis.

    PubMed

    Kollock, Roger O; Games, Kenneth E; Wilson, Alan E; Sefton, JoEllen M

    2016-11-01

    Context: Spinal musculature fatigue from vehicle exposure may place warfighters at risk for spinal injuries and pain. Research on the relationship between vehicle exposure and spinal musculature fatigue is conflicting. A better understanding of the effect of military duty on musculoskeletal function is needed before sports medicine teams can develop injury-prevention programs. Objective: To determine if the literature supports a definite effect of vehicle exposure on spinal musculature fatigue. Data Sources: We searched the MEDLINE, Military & Government Collection (EBSCO), National Institute for Occupational Safety and Health Technical Information Center, PubMed, and Web of Science databases for articles published between January 1990 and September 2015. Study Selection: To be included, a study required a clear sampling method, preexposure and postexposure assessments of fatigue, a defined objective measurement of fatigue, a defined exposure time, and a study goal of exposing participants to forces related to vehicle exposure. Data Extraction: Sample size, mean preexposure and postexposure measures of fatigue, vehicle type, and exposure time. Data Synthesis: Six studies met the inclusion criteria. We used the Scottish Intercollegiate Guidelines Network algorithm to determine the appropriate tool for quality appraisal of each article. Unweighted random-effects model meta-analyses were conducted, and a natural log response ratio was used as the effect metric. The overall meta-analysis demonstrated that vehicle exposure increased fatigue of the spinal musculature (P = .03; natural log response ratio = -0.22, 95% confidence interval = -0.42, -0.02). Using the spinal region as a moderator, we observed that vehicle ride exposure significantly increased fatigue at the lumbar musculature (P = .02; natural log response ratio = -0.27, 95% confidence interval = -0.50, -0.04) but not at the cervical or thoracic region. Conclusions: Vehicle exposure increased fatigue at the lumbar region.

  13. Methodological Considerations When Quantifying High-Intensity Efforts in Team Sport Using Global Positioning System Technology.

    PubMed

    Varley, Matthew C; Jaspers, Arne; Helsen, Werner F; Malone, James J

    2017-09-01

    Sprints and accelerations are popular performance indicators in applied sport. The methods used to define these efforts using athlete-tracking technology could affect the number of efforts reported. This study aimed to determine the influence of different techniques and settings for detecting high-intensity efforts using global positioning system (GPS) data. Velocity and acceleration data from a professional soccer match were recorded via 10-Hz GPS. Velocity data were filtered using either a median or an exponential filter. Acceleration data were derived from velocity data over a 0.2-s time interval (with and without an exponential filter applied) and a 0.3-s time interval. High-speed-running (≥4.17 m/s), sprint (≥7.00 m/s), and acceleration (≥2.78 m/s²) efforts were then identified using minimum-effort durations (0.1-0.9 s) to assess differences in the total number of efforts reported. Different velocity-filtering methods resulted in small to moderate differences (effect size [ES] 0.28-1.09) in the number of high-speed-running and sprint efforts detected when minimum duration was <0.5 s and small to very large differences (ES -5.69 to 0.26) in the number of accelerations when minimum duration was <0.7 s. There was an exponential decline in the number of all efforts as minimum duration increased, regardless of filtering method, with the largest declines in acceleration efforts. Filtering techniques and minimum durations substantially affect the number of high-speed-running, sprint, and acceleration efforts detected with GPS. Changes to how high-intensity efforts are defined affect reported data. Therefore, consistency in data processing is advised.
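The minimum-effort-duration rule is easy to state concretely: a candidate effort is kept only if the signal stays above the intensity threshold for at least the minimum duration. A sketch under assumed 10-Hz sampling, with hypothetical function and parameter names (the paper's filtering steps are omitted):

```python
def count_efforts(velocity, threshold, min_duration_s, sample_rate_hz=10):
    """Count efforts where velocity stays >= threshold for >= min_duration_s.

    Mirrors the procedure described above: 10-Hz samples, an intensity
    threshold (e.g. 4.17 m/s for high-speed running), and a minimum
    effort duration below which candidate efforts are discarded.
    """
    min_samples = round(min_duration_s * sample_rate_hz)
    efforts = 0
    run = 0  # length of the current above-threshold run, in samples
    for v in velocity:
        if v >= threshold:
            run += 1
        else:
            if run >= min_samples:
                efforts += 1
            run = 0
    if run >= min_samples:  # close out a run that reaches the end of the trace
        efforts += 1
    return efforts
```

Raising `min_duration_s` can only reduce the count, which is the exponential decline in reported efforts the study observes as minimum duration increases.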

  14. Vehicle Exposure and Spinal Musculature Fatigue in Military Warfighters: A Meta-Analysis

    PubMed Central

    Kollock, Roger O.; Games, Kenneth E.; Wilson, Alan E.; Sefton, JoEllen M.

    2016-01-01

    Context: Spinal musculature fatigue from vehicle exposure may place warfighters at risk for spinal injuries and pain. Research on the relationship between vehicle exposure and spinal musculature fatigue is conflicting. A better understanding of the effect of military duty on musculoskeletal function is needed before sports medicine teams can develop injury-prevention programs. Objective: To determine if the literature supports a definite effect of vehicle exposure on spinal musculature fatigue. Data Sources: We searched the MEDLINE, Military & Government Collection (EBSCO), National Institute for Occupational Safety and Health Technical Information Center, PubMed, and Web of Science databases for articles published between January 1990 and September 2015. Study Selection: To be included, a study required a clear sampling method, preexposure and postexposure assessments of fatigue, a defined objective measurement of fatigue, a defined exposure time, and a study goal of exposing participants to forces related to vehicle exposure. Data Extraction: Sample size, mean preexposure and postexposure measures of fatigue, vehicle type, and exposure time. Data Synthesis: Six studies met the inclusion criteria. We used the Scottish Intercollegiate Guidelines Network algorithm to determine the appropriate tool for quality appraisal of each article. Unweighted random-effects model meta-analyses were conducted, and a natural log response ratio was used as the effect metric. The overall meta-analysis demonstrated that vehicle exposure increased fatigue of the spinal musculature (P = .03; natural log response ratio = −0.22, 95% confidence interval = −0.42, −0.02). Using the spinal region as a moderator, we observed that vehicle ride exposure significantly increased fatigue at the lumbar musculature (P = .02; natural log response ratio = −0.27, 95% confidence interval = −0.50, −0.04) but not at the cervical or thoracic region. 
Conclusions: Vehicle exposure increased fatigue at the lumbar region. PMID:28068167

  15. Burst firing and modulation of functional connectivity in cat striate cortex.

    PubMed

    Snider, R K; Kabara, J F; Roig, B R; Bonds, A B

    1998-08-01

    We studied the influences of the temporal firing patterns of presynaptic cat visual cortical cells on spike generation by postsynaptic cells. Multiunit recordings were dissected into the activity of individual neurons within the recorded group. Cross-correlation analysis was then used to identify directly coupled neuron pairs. The 22 multiunit groups recorded typically showed activity from two to six neurons, each containing between 1 and 15 neuron pairs. From a total of 241 neuron pairs, 91 (38%) had a shifted cross-correlation peak, which indicated a possible direct connection. Only two multiunit groups contained no shifted peaks. Burst activity, defined by groups of two or more spikes with intervals of

  16. Lung cancer risk prediction to select smokers for screening CT--a model based on the Italian COSMOS trial.

    PubMed

    Maisonneuve, Patrick; Bagnardi, Vincenzo; Bellomi, Massimo; Spaggiari, Lorenzo; Pelosi, Giuseppe; Rampinelli, Cristiano; Bertolotti, Raffaella; Rotmensz, Nicole; Field, John K; Decensi, Andrea; Veronesi, Giulia

    2011-11-01

    Screening with low-dose helical computed tomography (CT) has been shown to significantly reduce lung cancer mortality, but the optimal target population and time interval to subsequent screening are yet to be defined. We developed two models to stratify individual smokers according to risk of developing lung cancer. We first used the number of lung cancers detected at baseline screening CT in the 5,203 asymptomatic participants of the COSMOS trial to recalibrate the Bach model, which we propose using to select smokers for screening. Next, we incorporated lung nodule characteristics and presence of emphysema identified at baseline CT into the Bach model and proposed the resulting multivariable model to predict lung cancer risk in screened smokers after baseline CT. Age and smoking exposure were the main determinants of lung cancer risk. The recalibrated Bach model accurately predicted lung cancers detected during the first year of screening. Presence of nonsolid nodules (RR = 10.1, 95% CI = 5.57-18.5), nodule size more than 8 mm (RR = 9.89, 95% CI = 5.84-16.8), and emphysema (RR = 2.36, 95% CI = 1.59-3.49) at baseline CT were all significant predictors of subsequent lung cancers. Incorporation of these variables into the Bach model increased the predictive value of the multivariable model (c-index = 0.759, internal validation). The recalibrated Bach model seems suitable for selecting the higher-risk population for recruitment for large-scale CT screening. The Bach model incorporating CT findings at baseline screening could help define the time interval to subsequent screening in individual participants. Further studies are necessary to validate these models.

  17. Reference interval for thyrotropin in an ultrasonography-screened Korean population

    PubMed Central

    Kim, Mijin; Kim, Soo Han; Lee, Yunkyoung; Park, Su-yeon; Kim, Hyung-don; Kwon, Hyemi; Choi, Yun Mi; Jang, Eun Kyung; Jeon, Min Ji; Kim, Won Gu; Shong, Young Kee; Kim, Won Bae

    2015-01-01

    Background/Aims The diagnostic accuracy for thyroid dysfunctions is primarily affected by the validity of the reference interval for serum thyroid-stimulating hormone (TSH). Thus, the present study aimed to establish a reference interval for TSH using a normal Korean population. Methods This study included 19,465 subjects who were recruited after undergoing routine health check-ups. Subjects with overt thyroid disease, a prior history of thyroid disease, or a family history of thyroid cancer were excluded from the present analyses. The reference range for serum TSH was evaluated in a normal Korean reference population, which was defined according to criteria based on the guidelines of the National Academy of Clinical Biochemistry, ultrasound (US) findings, and smoking status. Sex and age were also taken into consideration when evaluating the distribution of serum TSH levels in different groups. Results In the presence of positive anti-thyroid peroxidase antibodies or abnormal US findings, the central 95% interval of the serum TSH levels was widened. Additionally, the distribution of serum TSH levels shifted toward lower values in the current smokers group. The reference interval for TSH obtained using a normal Korean reference population was 0.73 to 7.06 mIU/L. The serum TSH levels were higher in females than in males in all groups, and there were no age-dependent shifts. Conclusions The present findings demonstrate that the serum TSH reference interval in a normal Korean reference population was higher than that in other countries. This result suggests that the upper and lower limits of the TSH reference interval, which were previously defined by studies from Western countries, should be raised for Korean populations. PMID:25995664

  18. Effect of interpregnancy interval on risk of spontaneous preterm birth in Emirati women, United Arab Emirates.

    PubMed Central

    Al-Jasmi, Fatima; Al-Mansoor, Fatima; Alsheiba, Aisha; Carter, Anne O.; Carter, Thomas P.; Hossain, M. Moshaddeque

    2002-01-01

    OBJECTIVE: To investigate whether a short interpregnancy interval is a risk factor for preterm birth in Emirati women, where there is a wide range of interpregnancy intervals and uniformity in potentially confounding factors. METHODS: A case-control design based on medical records was used. A case was defined as a healthy multiparous Emirati woman delivering a healthy singleton spontaneously before 37 weeks of gestation between 1997 and 2000, and a control was defined as the next eligible similar woman delivering after 37 weeks of gestation. Women were excluded if there was no information available about their most recent previous pregnancy or if it had resulted in a multiple or preterm birth. Data collected from charts and delivery room records were analysed using the STATA statistical package. All variables found to be valid, stable and significant by univariate analysis were included in multivariate logistic regression analysis. FINDINGS: There were 128 cases who met the eligibility criteria; 128 controls were selected. Short interpregnancy intervals were significantly associated with case status (P<0.05). The multivariate adjusted odds ratios for the 1st, 2nd, and 4th quartiles of interpregnancy interval compared with the lowest-risk 3rd quartile were 8.2, 5.4, and 2.0 (95% confidence intervals: 3.5-19.2, 2.4-12.6, and 0.9-4.5, respectively). CONCLUSION: A short interpregnancy interval is a risk factor for spontaneous preterm birth in Emirati women. The magnitude of the risk and the risk gradient between exposure quartiles suggest that the risk factor is causal and that its modification would reduce the risk of preterm birth. PMID:12481208
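For the unadjusted case, an odds ratio and its log-method (Woolf) confidence interval come straight from a 2×2 table; the adjusted ratios above come from logistic regression instead. A sketch with illustrative counts only, and a function name of our choosing:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI (Woolf/log method) from a 2x2 table.

    a = exposed cases,    b = unexposed cases,
    c = exposed controls, d = unexposed controls.
    SE of ln(OR) = sqrt(1/a + 1/b + 1/c + 1/d).
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)
```

With 40 exposed and 20 unexposed cases against 20 exposed and 40 unexposed controls, this yields OR = 4.0 with a CI of roughly 1.9 to 8.5, the same shape of estimate-plus-interval reported per quartile above.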

  19. Time-series Analysis of Heat Waves and Emergency Department Visits in Atlanta, 1993 to 2012

    PubMed Central

    Chen, Tianqi; Sarnat, Stefanie E.; Grundstein, Andrew J.; Winquist, Andrea

    2017-01-01

    Background: Heat waves are extreme weather events that have been associated with adverse health outcomes. However, there is limited knowledge of heat waves’ impact on population morbidity, such as emergency department (ED) visits. Objectives: We investigated associations between heat waves and ED visits for 17 outcomes in Atlanta over a 20-year period, 1993–2012. Methods: Associations were estimated using Poisson log-linear models controlling for continuous air temperature, dew-point temperature, day of week, holidays, and time trends. We defined heat waves as periods of ≥2 consecutive days with temperatures beyond the 98th percentile of the temperature distribution over the period from 1945–2012. We considered six heat wave definitions using maximum, minimum, and average air temperatures and apparent temperatures. Associations by heat wave characteristics were examined. Results: Among all outcome-heat wave combinations, associations were strongest between ED visits for acute renal failure and heat waves defined by maximum apparent temperature at lag 0 [relative risk (RR) = 1.15; 95% confidence interval (CI): 1.03–1.29], ED visits for ischemic stroke and heat waves defined by minimum temperature at lag 0 (RR = 1.09; 95% CI: 1.02–1.17), and ED visits for intestinal infection and heat waves defined by average temperature at lag 1 (RR = 1.10; 95% CI: 1.00–1.21). ED visits for all internal causes were associated with heat waves defined by maximum temperature at lag 1 (RR = 1.02; 95% CI: 1.00, 1.04). Conclusions: Heat waves can confer additional risks of ED visits beyond those of daily air temperature, even in a region with high air-conditioning prevalence. https://doi.org/10.1289/EHP44 PMID:28599264
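    The heat-wave definition used above (runs of at least 2 consecutive days with temperature beyond the 98th percentile of a long reference series) can be sketched directly. The percentile interpolation method and the strict "above threshold" comparison are implementation assumptions not specified in the abstract.

```python
def heat_wave_days(temps, pct=98.0, min_run=2):
    """Indices of days belonging to heat waves: runs of at least `min_run`
    consecutive days with temperature strictly above the `pct` percentile
    of the whole series (linear-interpolation percentile)."""
    s = sorted(temps)
    rank = (pct / 100.0) * (len(s) - 1)
    lo, frac = int(rank), rank - int(rank)
    threshold = s[lo] + frac * (s[min(lo + 1, len(s) - 1)] - s[lo])

    days, run = [], []
    for i, t in enumerate(temps):
        if t > threshold:
            run.append(i)          # extend the current hot run
        else:
            if len(run) >= min_run:
                days.extend(run)   # run long enough to count as a heat wave
            run = []
    if len(run) >= min_run:        # close out a run ending at the series end
        days.extend(run)
    return days
```

    In the study, six variants of this definition were applied, using maximum, minimum, and average air and apparent temperatures.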

  20. Time-series Analysis of Heat Waves and Emergency Department Visits in Atlanta, 1993 to 2012.

    PubMed

    Chen, Tianqi; Sarnat, Stefanie E; Grundstein, Andrew J; Winquist, Andrea; Chang, Howard H

    2017-05-31

    Heat waves are extreme weather events that have been associated with adverse health outcomes. However, there is limited knowledge of heat waves' impact on population morbidity, such as emergency department (ED) visits. We investigated associations between heat waves and ED visits for 17 outcomes in Atlanta over a 20-year period, 1993-2012. Associations were estimated using Poisson log-linear models controlling for continuous air temperature, dew-point temperature, day of week, holidays, and time trends. We defined heat waves as periods of ≥2 consecutive days with temperatures beyond the 98th percentile of the temperature distribution over the period from 1945-2012. We considered six heat wave definitions using maximum, minimum, and average air temperatures and apparent temperatures. Associations by heat wave characteristics were examined. Among all outcome-heat wave combinations, associations were strongest between ED visits for acute renal failure and heat waves defined by maximum apparent temperature at lag 0 [relative risk (RR) = 1.15; 95% confidence interval (CI): 1.03-1.29], ED visits for ischemic stroke and heat waves defined by minimum temperature at lag 0 (RR = 1.09; 95% CI: 1.02-1.17), and ED visits for intestinal infection and heat waves defined by average temperature at lag 1 (RR = 1.10; 95% CI: 1.00-1.21). ED visits for all internal causes were associated with heat waves defined by maximum temperature at lag 1 (RR = 1.02; 95% CI: 1.00, 1.04). Heat waves can confer additional risks of ED visits beyond those of daily air temperature, even in a region with high air-conditioning prevalence. https://doi.org/10.1289/EHP44.

  1. Geodynamic Evolution of Northeastern Tunisia During the Maastrichtian-Paleocene Time: Insights from Integrated Seismic Stratigraphic Analysis

    NASA Astrophysics Data System (ADS)

    Abidi, Oussama; Inoubli, Mohamed Hédi; Sebei, Kawthar; Amiri, Adnen; Boussiga, Haifa; Nasr, Imen Hamdi; Salem, Abdelhamid Ben; Elabed, Mahmoud

    2017-05-01

    The Maastrichtian-Paleocene El Haria formation was studied and defined in Tunisia on the basis of outcrops and borehole data; few studies have examined its three-dimensional extent. In this paper, the El Haria formation is reviewed in the context of a tectono-stratigraphic interval using an integrated seismic stratigraphic analysis based on borehole lithology logs, electrical well logging, well shots, vertical seismic profiles and post-stack surface data. Seismic analysis benefits from appropriate calibration with borehole data, conventional interpretation, velocity mapping, seismic attributes and post-stack model-based inversion. The applied methodology proved to be powerful for characterizing the marly Maastrichtian-Paleocene interval of the El Haria formation. Migrated seismic sections together with borehole measurements are used to detail the three-dimensional changes in thickness, facies and depositional environment in the Cap Bon and Gulf of Hammamet regions during the Maastrichtian-Paleocene time. Furthermore, dating based on microfossil content reveals local and multiple internal hiatuses within the El Haria formation which are related to the geodynamic evolution of the depositional floor since the Campanian stage. Interpreted seismic sections display concordance, unconformities, pinchouts, sedimentary gaps, incised valleys and syn-sedimentary normal faulting. Based on the seismic reflection geometry and terminations, seven sequences are delineated. These sequences are related to base-level changes as the combination of depositional floor paleo-topography, tectonic forces, subsidence and the developed accommodation space. These factors controlled the occurrence of the various parts of the Maastrichtian-Paleocene interval. Detailed examinations of these deposits together with the analysis of the structural deformation at different time periods allowed us to obtain a better understanding of the sediment architecture in depth and the delineation of the geodynamic evolution of the region.

  2. Oral and Hand Movement Speeds are Associated with Expressive Language Ability in Children with Speech Sound Disorder

    PubMed Central

    Peter, Beate

    2013-01-01

    This study tested the hypothesis that children with speech sound disorder (SSD) have generalized slowed motor speeds. It evaluated associations among oral and hand motor speeds and measures of speech (articulation and phonology) and language (receptive vocabulary, sentence comprehension, sentence imitation), in 11 children with moderate to severe SSD and 11 controls. Syllable durations from a syllable repetition task served as an estimate of maximal oral movement speed. In two imitation tasks, nonwords and clapped rhythms, unstressed vowel durations and quarter-note clap intervals served as estimates of oral and hand movement speed, respectively. Syllable durations were significantly correlated with vowel durations and hand clap intervals. Sentence imitation was correlated with all three timed movement measures. Clustering on syllable repetition durations produced three clusters that also differed in sentence imitation scores. Results are consistent with limited movement speeds across motor systems and SSD subtypes defined by motor speeds as a corollary of expressive language abilities. PMID:22411590

  3. Oral and hand movement speeds are associated with expressive language ability in children with speech sound disorder.

    PubMed

    Peter, Beate

    2012-12-01

    This study tested the hypothesis that children with speech sound disorder (SSD) have generalized slowed motor speeds. It evaluated associations among oral and hand motor speeds and measures of speech (articulation and phonology) and language (receptive vocabulary, sentence comprehension, sentence imitation), in 11 children with moderate to severe SSD and 11 controls. Syllable durations from a syllable repetition task served as an estimate of maximal oral movement speed. In two imitation tasks, nonwords and clapped rhythms, unstressed vowel durations and quarter-note clap intervals served as estimates of oral and hand movement speed, respectively. Syllable durations were significantly correlated with vowel durations and hand clap intervals. Sentence imitation was correlated with all three timed movement measures. Clustering on syllable repetition durations produced three clusters that also differed in sentence imitation scores. Results are consistent with limited movement speeds across motor systems and SSD subtypes defined by motor speeds as a corollary of expressive language abilities.

  4. Treatment selection in a randomized clinical trial via covariate-specific treatment effect curves.

    PubMed

    Ma, Yunbei; Zhou, Xiao-Hua

    2017-02-01

    For time-to-event data in a randomized clinical trial, we proposed two new methods for selecting an optimal treatment for a patient based on the covariate-specific treatment effect curve, which is used to represent the clinical utility of a predictive biomarker. To select an optimal treatment for a patient with a specific biomarker value, we proposed pointwise confidence intervals for each covariate-specific treatment effect curve and the difference between covariate-specific treatment effect curves of two treatments. Furthermore, to select an optimal treatment for a future biomarker-defined subpopulation of patients, we proposed confidence bands for each covariate-specific treatment effect curve and the difference between each pair of covariate-specific treatment effect curve over a fixed interval of biomarker values. We constructed the confidence bands based on a resampling technique. We also conducted simulation studies to evaluate finite-sample properties of the proposed estimation methods. Finally, we illustrated the application of the proposed method in a real-world data set.
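    The intervals described above are built with a resampling technique for time-to-event data. As a much-simplified illustration of the resampling idea only, a percentile bootstrap can give a pointwise interval for the mean treatment effect in one biomarker-defined subgroup; the data layout and estimator here are hypothetical stand-ins, not the authors' method.

```python
import random

def bootstrap_pointwise_ci(effects, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for the mean of `effects`, the observed
    treatment-effect values of patients in one biomarker window."""
    rng = random.Random(seed)
    n = len(effects)
    means = []
    for _ in range(n_boot):
        # resample the subgroup with replacement and record its mean
        resample = [effects[rng.randrange(n)] for _ in range(n)]
        means.append(sum(resample) / n)
    means.sort()
    return (means[int(alpha / 2 * n_boot)],
            means[int((1 - alpha / 2) * n_boot) - 1])
```

    A confidence band, as opposed to a pointwise interval, would additionally require controlling coverage simultaneously over a whole range of biomarker values, which is what the paper's resampling construction addresses.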

  5. Saline Flush After Rocuronium Bolus Reduces Onset Time and Prolongs Duration of Effect: A Randomized Clinical Trial.

    PubMed

    Ishigaki, Sayaka; Masui, Kenichi; Kazama, Tomiei

    2016-03-01

    Circulatory factors modify the onset time of neuromuscular-blocking drugs. Therefore, we hypothesized that infusion of a saline flush immediately after rocuronium administration would shorten the onset time without influencing the duration of the rocuronium effect. Forty-eight patients were randomly allocated to the control or saline flush group. Anesthesia was induced and maintained with propofol and remifentanil, and all patients received 0.6 mg/kg rocuronium in 10 mL of normal saline. In the saline flush group, 20 mL normal saline was immediately infused after rocuronium administration. Neuromuscular blockade was assessed using acceleromyography at the adductor pollicis muscle with train-of-four (TOF) stimulation. The neuromuscular indices for rocuronium were calculated as follows: the latent onset time, defined as the time from the start of rocuronium infusion until first occurrence of depression of the first twitch of the TOF (T1) ≥5%; onset time, defined as the time from the start of rocuronium infusion until first occurrence of depression of the T1 ≥95%; clinical duration, defined as the time from the start of rocuronium administration until T1 recovered to 25% of the final T1 value; recovery index, defined as the time for recovery of T1 from 25% to 75% of the final T1 value; and the total recovery time, defined as the time from the start of rocuronium administration until reaching a TOF ratio of 0.9. Significance was designated at P <0.05. The measured latent onset time and onset time were significantly shorter in the saline flush group than the control group by 15 seconds (95.2% confidence interval, 0-15, P = 0.007) and 15 seconds (0-30, P = 0.018), respectively. Saline flush significantly depressed the T1 height at 30, 45, and 60 seconds after the rocuronium bolus by 17%, 24%, and 14%, respectively. In addition, the recovery phase was significantly prolonged in the saline flush group. 
The mean clinical duration (5th-95th percentile range) in the saline flush group and control group was 35 minutes (27-63 minutes) and 31 minutes (19-48 minutes; P = 0.032), respectively; the recovery index was 13 minutes (8-25 minutes) and 10 minutes (7-19 minutes; P = 0.019), respectively; and the total recovery time was 61 minutes (44-108 minutes) and 50 minutes (35-93 minutes; P = 0.048), respectively. Administering a 20-mL saline flush immediately after infusion of 0.6 mg/kg rocuronium in 10 mL normal saline shortened the onset time and prolonged the recovery phase of neuromuscular blockade.

  6. Ratio-based lengths of intervals to improve fuzzy time series forecasting.

    PubMed

    Huarng, Kunhuang; Yu, Tiffany Hui-Kuang

    2006-04-01

    The objective of this study is to explore ways of determining the useful lengths of intervals in fuzzy time series. It is suggested that ratios, instead of equal lengths of intervals, can more properly represent the intervals among observations. Ratio-based lengths of intervals are, therefore, proposed to improve fuzzy time series forecasting. Algebraic growth data, such as enrollments and the stock index, and exponential growth data, such as inventory demand, are chosen as the forecasting targets, before forecasting based on the various lengths of intervals is performed. Furthermore, sensitivity analyses are also carried out for various percentiles. The ratio-based lengths of intervals are found to outperform the effective lengths of intervals, as well as the arbitrary ones in regard to the different statistical measures. The empirical analysis suggests that the ratio-based lengths of intervals can also be used to improve fuzzy time series forecasting.
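    One common reading of ratio-based lengths (an assumption here; the abstract does not give the formula) is that each interval's length is a fixed proportion of its lower bound, so boundaries grow geometrically rather than by a constant step:

```python
def ratio_based_intervals(lo, hi, ratio):
    """Partition [lo, hi] into intervals whose length is `ratio` times
    the interval's lower bound, so boundaries grow geometrically:
    b_{k+1} = b_k * (1 + ratio). Assumes lo > 0."""
    assert lo > 0 and hi > lo and ratio > 0
    bounds = [lo]
    while bounds[-1] * (1 + ratio) < hi:
        bounds.append(bounds[-1] * (1 + ratio))
    bounds.append(hi)  # the last interval is clipped at hi and may be shorter
    return list(zip(bounds[:-1], bounds[1:]))
```

    Under this scheme every interval has the same relative (percentage) width, which suits growth-type data such as enrollments or a stock index better than equal absolute widths.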

  7. Importance of the Time Interval between Bowel Preparation and Colonoscopy in Determining the Quality of Bowel Preparation for Full-Dose Polyethylene Glycol Preparation

    PubMed Central

    Kim, Tae Kyung; Kim, Hyung Wook; Kim, Su Jin; Ha, Jong Kun; Jang, Hyung Ha; Hong, Young Mi; Park, Su Bum; Choi, Cheol Woong; Kang, Dae Hwan

    2014-01-01

    Background/Aims The quality of bowel preparation (QBP) is the important factor in performing a successful colonoscopy. Several factors influencing QBP have been reported; however, some factors, such as the optimal preparation-to-colonoscopy time interval, remain controversial. This study aimed to determine the factors influencing QBP and the optimal time interval for full-dose polyethylene glycol (PEG) preparation. Methods A total of 165 patients who underwent colonoscopy from June 2012 to August 2012 were prospectively evaluated. The QBP was assessed using the Ottawa Bowel Preparation Scale (Ottawa) score, and several factors influencing the QBP were analyzed. Results Colonoscopies with a time interval of 5 to 6 hours had the best Ottawa score in all parts of the colon. Patients with time intervals of 6 hours or less had a better QBP than those with time intervals of more than 6 hours (p=0.046). In the multivariate analysis, the time interval (odds ratio, 1.897; 95% confidence interval, 1.006 to 3.577; p=0.048) was the only significant contributor to a satisfactory bowel preparation. Conclusions The optimal time was 5 to 6 hours for the full-dose PEG method, and the time interval was the only significant contributor to a satisfactory bowel preparation. PMID:25368750

  8. Spacecraft utility and the development of confidence intervals for criticality of anomalies

    NASA Technical Reports Server (NTRS)

    Williams, R. E.

    1980-01-01

    The concept of spacecraft utility, a measure of its performance in orbit, is discussed and its formulation is described. Performance is defined in terms of the malfunctions that occur and the criticality to the mission of these malfunctions. Different approaches to establishing average or expected values of criticality are discussed and confidence intervals are developed for parameters used in the computation of utility.

  9. Interresponse Time Structures in Variable-Ratio and Variable-Interval Schedules

    ERIC Educational Resources Information Center

    Bowers, Matthew T.; Hill, Jade; Palya, William L.

    2008-01-01

    The interresponse-time structures of pigeon key pecking were examined under variable-ratio, variable-interval, and variable-interval plus linear feedback schedules. Whereas the variable-ratio and variable-interval plus linear feedback schedules generally resulted in a distinct group of short interresponse times and a broad distribution of longer…

  10. Evaluation of the appropriate time range for estimating the apparent permeability coefficient (P(app)) in a transcellular transport study.

    PubMed

    Ozeki, Kazuhisa; Kato, Motohiro; Sakurai, Yuuji; Ishigai, Masaki; Kudo, Toshiyuki; Ito, Kiyomi

    2015-11-30

    In a transcellular transport study, the apparent permeability coefficient (Papp) of a compound is evaluated using the range by which the amount of compound accumulated on the receiver side is assumed to be proportional to time. However, the time profile of the concentration of the compound in receiver (C3) often shows a lag time before reaching the linear range and later changes from linear to steady state. In this study, the linear range needed to calculate Papp in the C3-time profile was evaluated by a 3-compartment model. C3 was described by an equation with two steady states (C3 = A3(1-e^(-αt)) + B3(1-e^(-βt)), α > β), and by a simple approximate line (C3=A3-A3×αt) in the time range of 3/α
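    The two-exponential solution above implies an early window in which C3 grows almost linearly, which is the range needed for a valid Papp estimate. A small numeric check can measure how far C3 departs from the secant line over a candidate window; the parameter values below are purely illustrative.

```python
import math

def c3(t, a3, b3, alpha, beta):
    """Receiver concentration from the two-exponential 3-compartment solution."""
    return a3 * (1 - math.exp(-alpha * t)) + b3 * (1 - math.exp(-beta * t))

def max_linearity_error(a3, b3, alpha, beta, t0, t1, n=200):
    """Largest relative deviation of C3 from the secant line through
    (t0, C3(t0)) and (t1, C3(t1)); small values mean the window is
    effectively linear, i.e. usable for estimating Papp from the slope."""
    y0, y1 = c3(t0, a3, b3, alpha, beta), c3(t1, a3, b3, alpha, beta)
    slope = (y1 - y0) / (t1 - t0)
    err = 0.0
    for i in range(n + 1):
        t = t0 + (t1 - t0) * i / n
        line = y0 + slope * (t - t0)
        y = c3(t, a3, b3, alpha, beta)
        err = max(err, abs(y - line) / max(abs(y), 1e-12))
    return err
```

    With a fast phase (α) and a slow phase (β), a window starting near 3/α avoids the lag time while staying well before the slow steady state.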

  11. Using Procedure Codes to Define Radiation Toxicity in Administrative Data: The Devil is in the Details.

    PubMed

    Meyer, Anne-Marie; Kuo, Tzy-Mey; Chang, YunKyung; Carpenter, William R; Chen, Ronald C; Sturmer, Til

    2017-05-01

    Systematic coding systems are used to define clinically meaningful outcomes when leveraging administrative claims data for research. How and when these codes are applied within a research study can have implications for the study validity and their specificity can vary significantly depending on treatment received. Data are from the Surveillance, Epidemiology, and End Results-Medicare linked dataset. We use propensity score methods in a retrospective cohort of prostate cancer patients first examined in a recently published radiation oncology comparative effectiveness study. With the narrowly defined outcome definition, the toxicity event outcome rate ratio was 0.88 per 100 person-years (95% confidence interval, 0.71-1.08). With the broadly defined outcome, the rate ratio was comparable, with 0.89 per 100 person-years (95% confidence interval, 0.76-1.04), although individual event rates were doubled. Some evidence of surveillance bias was suggested by a higher rate of endoscopic procedures the first year of follow-up in patients who received proton therapy compared with those receiving intensity-modulated radiation treatment (11.15 vs. 8.90, respectively). This study demonstrates the risk of introducing bias through subjective application of procedure codes. Careful consideration is required when using procedure codes to define outcomes in administrative data.

  12. Impact of hyperthermal rotary blood pump surfaces on blood clotting behavior: an approach.

    PubMed

    Hamilton, Kathrin F; Schlanstein, Peter C; Mager, Ilona; Schmitz-Rode, Thomas; Steinseifer, Ulrich

    2009-09-01

    The influence of heat-dissipating systems, such as rotary blood pumps, on blood clotting was investigated. Titanium cylinders serving as rotary blood pump housing dummies were immersed in porcine blood and held constantly at specific temperatures (37-60 degrees C) over a defined period of time. The porcine blood was anticoagulated either by low-dose heparin or citrate. At frequent intervals, samples were taken for blood analysis and determination of the plasmatic coagulation cascade. Blood parameters did not change at surface temperatures below 50 degrees C. Hyperthermia-induced hemolysis was confirmed. The plasmatic coagulation cascade was terminated at surface temperatures exceeding 55 degrees C. The adhesion of blood constituents on surfaces is temperature and time dependent, and structural changes of the adhesions and of the blood itself were detected.

  13. Strong convergence and convergence rates of approximating solutions for algebraic Riccati equations in Hilbert spaces

    NASA Technical Reports Server (NTRS)

    Ito, Kazufumi

    1987-01-01

    The linear quadratic optimal control problem on an infinite time interval for linear time-invariant systems defined on Hilbert spaces is considered. The optimal control is given in feedback form in terms of the solution π to the associated algebraic Riccati equation (ARE). A Ritz-type approximation is used to obtain a sequence π^N of finite-dimensional approximations of the solution to the ARE. A sufficient condition is obtained under which π^N converges strongly to π. Under this condition, a formula is derived which can be used to obtain a rate of convergence of π^N to π. The results are demonstrated for the Galerkin approximation applied to parabolic systems and for the averaging approximation applied to hereditary differential systems.

  14. Time interval measurement device based on surface acoustic wave filter excitation, providing 1 ps precision and stability.

    PubMed

    Panek, Petr; Prochazka, Ivan

    2007-09-01

    This article deals with a time interval measurement device, which is based on a surface acoustic wave (SAW) filter as a time interpolator. The operating principle is based on the fact that a transversal SAW filter excited by a short pulse can generate a finite signal with highly suppressed spectra outside a narrow frequency band. If the responses to two excitations are sampled at clock ticks, they can be precisely reconstructed from a finite number of samples and then compared so as to determine the time interval between the two excitations. We have designed and constructed a two-channel time interval measurement device which allows independent timing of two events and evaluation of the time interval between them. The device has been constructed using commercially available components. The experimental results proved the concept. We have assessed a single-shot time interval measurement precision of 1.3 ps rms, corresponding to a time-of-arrival precision of 0.9 ps rms in each channel. The temperature drift of the measured time interval is lower than 0.5 ps/K, and the long term stability is better than +/-0.2 ps/h. These are to our knowledge the best values reported for a time interval measurement device. The results are in good agreement with the error budget based on the theoretical analysis.
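    The interpolation idea above (reconstructing a narrow-band response from clock-tick samples and comparing two responses) can be illustrated generically by cross-correlating two sampled pulses and refining the peak with a parabolic fit. The pulse shape and all numbers are stand-ins; the actual device achieves picosecond precision with a far more careful reconstruction of the SAW filter response.

```python
import math

def narrowband_pulse(n, center, f=0.1, width=12.0):
    """Gaussian-windowed sinusoid sampled at integer clock ticks, centred
    at `center` (a stand-in for a SAW filter's narrow-band response)."""
    return [math.exp(-((i - center) / width) ** 2) * math.cos(2 * math.pi * f * (i - center))
            for i in range(n)]

def estimate_delay(x, y):
    """Delay of y relative to x: integer lag of the cross-correlation peak,
    refined to sub-sample precision with a three-point parabolic fit
    (assumes the peak lies away from the lag-range boundaries)."""
    n = len(x)
    corr = {lag: sum(x[i] * y[i + lag] for i in range(n) if 0 <= i + lag < n)
            for lag in range(-n + 2, n - 1)}
    best = max(corr, key=corr.get)
    c0, c1, c2 = corr[best - 1], corr[best], corr[best + 1]
    # vertex of the parabola through the peak sample and its neighbours
    return best + 0.5 * (c0 - c2) / (c0 - 2 * c1 + c2)
```

    Integer-lag correlation alone is limited to half a sample; the sub-sample refinement plays the role that precise reconstruction of the smooth filter response plays in the real device.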

  15. Interval Timing Accuracy and Scalar Timing in C57BL/6 Mice

    PubMed Central

    Buhusi, Catalin V.; Aziz, Dyana; Winslow, David; Carter, Rickey E.; Swearingen, Joshua E.; Buhusi, Mona C.

    2010-01-01

    In many species, interval timing behavior is accurate (appropriate estimated durations) and scalar (errors vary linearly with estimated durations). While accuracy has been previously examined, scalar timing has not yet been clearly demonstrated in house mice (Mus musculus), raising concerns about mouse models of human disease. We estimated timing accuracy and precision in C57BL/6 mice, the most widely used background strain for genetic models of human disease, in a peak-interval procedure with multiple intervals. Whether timing two intervals (Experiment 1) or three intervals (Experiment 2), C57BL/6 mice demonstrated varying degrees of timing accuracy. Importantly, both at individual and group level, their precision varied linearly with the subjective estimated duration. Further evidence for scalar timing was obtained using an intraclass correlation statistic. This is the first report of consistent, reliable scalar timing in a sizable sample of house mice, thus validating the PI procedure as a valuable technique, the intraclass correlation statistic as a powerful test of the scalar property, and the C57BL/6 strain as a suitable background for behavioral investigations of genetically engineered mice modeling disorders of interval timing. PMID:19824777
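    The scalar property can be stated compactly: timing precision (standard deviation) grows in proportion to the timed duration, so the coefficient of variation is constant across intervals. A minimal check on produced-duration samples might look like this; the tolerance and data layout are assumptions, and the paper's intraclass correlation statistic is a more powerful test.

```python
from statistics import mean, stdev

def is_scalar(samples_by_interval, tol=0.05):
    """Crude scalar-timing check: the coefficient of variation (SD/mean)
    of the produced durations should be roughly constant across the
    target intervals (dict mapping target duration -> list of samples)."""
    cvs = [stdev(s) / mean(s) for s in samples_by_interval.values()]
    return max(cvs) - min(cvs) <= tol
```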

  16. Assessment of cardiac time intervals using high temporal resolution real-time spiral phase contrast with UNFOLDed-SENSE.

    PubMed

    Kowalik, Grzegorz T; Knight, Daniel S; Steeden, Jennifer A; Tann, Oliver; Odille, Freddy; Atkinson, David; Taylor, Andrew; Muthurangu, Vivek

    2015-02-01

    To develop a real-time phase contrast MR sequence with high enough temporal resolution to assess cardiac time intervals. The sequence utilized spiral trajectories with an acquisition strategy that allowed a combination of temporal encoding (Unaliasing by Fourier-encoding the overlaps using the temporal dimension; UNFOLD) and parallel imaging (Sensitivity encoding; SENSE) to be used (UNFOLDed-SENSE). An in silico experiment was performed to determine the optimum UNFOLD filter. In vitro experiments were carried out to validate the accuracy of time intervals calculation and peak mean velocity quantification. In addition, 15 healthy volunteers were imaged with the new sequence, and cardiac time intervals were compared to reference standard Doppler echocardiography measures. For comparison, in silico, in vitro, and in vivo experiments were also carried out using sliding window reconstructions. The in vitro experiments demonstrated good agreement between real-time spiral UNFOLDed-SENSE phase contrast MR and the reference standard measurements of velocity and time intervals. The protocol was successfully performed in all volunteers. Subsequent measurement of time intervals produced values in keeping with literature values and good agreement with the gold standard echocardiography. Importantly, the proposed UNFOLDed-SENSE sequence outperformed the sliding window reconstructions. Cardiac time intervals can be successfully assessed with UNFOLDed-SENSE real-time spiral phase contrast. Real-time MR assessment of cardiac time intervals may be beneficial in assessment of patients with cardiac conditions such as diastolic dysfunction. © 2014 Wiley Periodicals, Inc.

  17. Graph theory applied to the analysis of motor activity in patients with schizophrenia and depression

    PubMed Central

    Fasmer, Erlend Eindride; Berle, Jan Øystein; Oedegaard, Ketil J.; Hauge, Erik R.

    2018-01-01

    Depression and schizophrenia are defined only by their clinical features, and diagnostic separation between them can be difficult. Disturbances in motor activity pattern are central features of both types of disorders. We introduce a new method to analyze time series, called the similarity graph algorithm. Time series of motor activity, obtained from actigraph registrations over 12 days in depressed and schizophrenic patients, were mapped into a graph and we then applied techniques from graph theory to characterize these time series, primarily looking for changes in complexity. The most marked finding was that depressed patients were found to be significantly different from both controls and schizophrenic patients, with evidence of less regularity of the time series, when analyzing the recordings with one-hour intervals. These findings support the contention that there are important differences in control systems regulating motor behavior in patients with depression and schizophrenia. The similarity graph algorithm we have described can easily be applied to the study of other types of time series. PMID:29668743
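    The abstract does not spell out the similarity graph construction, so the following is a hypothetical sketch of the general idea of mapping a time series into a graph: windows of activity become nodes, and an edge joins two windows whose profiles are sufficiently similar. Fewer edges then indicate less regularity, the direction of the finding reported for depressed patients.

```python
def similarity_graph_edges(series, window, threshold):
    """Hypothetical similarity-graph construction: each non-overlapping
    window of the series is a node; two nodes are joined when their
    windows are close in normalized Euclidean distance."""
    segs = [series[i:i + window]
            for i in range(0, len(series) - window + 1, window)]

    def dist(a, b):
        return (sum((x - y) ** 2 for x, y in zip(a, b)) / window) ** 0.5

    # edge density serves as a crude regularity measure: more edges,
    # more repetitive (regular) activity
    return [(i, j) for i in range(len(segs)) for j in range(i + 1, len(segs))
            if dist(segs[i], segs[j]) < threshold]
```

    Graph-theoretic measures (edge density, degree distribution, clustering coefficient) could then be compared between patient groups.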

  18. An Integer Batch Scheduling Model for a Single Machine with Simultaneous Learning and Deterioration Effects to Minimize Total Actual Flow Time

    NASA Astrophysics Data System (ADS)

    Yusriski, R.; Sukoyo; Samadhi, T. M. A. A.; Halim, A. H.

    2016-02-01

    In the manufacturing industry, several identical parts can be processed in batches, and setup time is needed between two consecutive batches. Since the processing times of batches are not always fixed during a scheduling period due to learning and deterioration effects, this research deals with batch scheduling problems with simultaneous learning and deterioration effects. The objective is to minimize total actual flow time, defined as the time interval between the arrival of all parts at the shop and their common due date. The decision variables are the number of batches, integer batch sizes, and the sequence of the resulting batches. This research proposes a heuristic algorithm based on Lagrange relaxation. The effectiveness of the proposed algorithm is determined by comparing its solutions to the respective optimal solutions obtained from an enumeration method. Numerical experiments show that the average difference between the solutions is 0.05%.
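    Under one illustrative cost model, the objective for a candidate batch sequence can be evaluated as below. The learning, deterioration, and setup details are assumptions for the sketch; only the definition of total actual flow time comes from the abstract.

```python
def total_actual_flow_time(batch_sizes, base_time=1.0, setup=2.0,
                           learn=0.9, deteriorate=0.001, due_date=None):
    """Illustrative schedule: unit processing time shrinks with batch
    position (learning, factor learn**k) and grows with start time
    (deterioration, factor 1 + deteriorate * start). Total actual flow
    time sums, over all parts, the interval from the batch's arrival
    (taken here as its start time) to the common due date."""
    starts, t = [], 0.0
    for k, size in enumerate(batch_sizes):
        if k > 0:
            t += setup                      # setup between consecutive batches
        starts.append(t)
        unit = base_time * (learn ** k) * (1 + deteriorate * t)
        t += size * unit                    # batch completion time
    if due_date is None:
        due_date = t                        # earliest feasible common due date
    assert due_date >= t, "schedule must finish by the common due date"
    return sum(size * (due_date - s) for size, s in zip(batch_sizes, starts))
```

    A heuristic such as the paper's Lagrange-relaxation algorithm would then search over the number of batches, the integer batch sizes, and their sequence to minimize this quantity.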

  19. On the continuous dependence with respect to sampling of the linear quadratic regulator problem for distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.; Wang, C.

    1990-01-01

    The convergence of solutions to the discrete or sampled time linear quadratic regulator problem and associated Riccati equation for infinite dimensional systems to the solutions to the corresponding continuous time problem and equation, as the length of the sampling interval (the sampling rate) tends toward zero (infinity) is established. Both the finite and infinite time horizon problems are studied. In the finite time horizon case, strong continuity of the operators which define the control system and performance index together with a stability and consistency condition on the sampling scheme are required. For the infinite time horizon problem, in addition, the sampled systems must be stabilizable and detectable, uniformly with respect to the sampling rate. Classes of systems for which this condition can be verified are discussed. Results of numerical studies involving the control of a heat/diffusion equation, a hereditary or delay system, and a flexible beam are presented and discussed.

  20. Graph theory applied to the analysis of motor activity in patients with schizophrenia and depression.

    PubMed

    Fasmer, Erlend Eindride; Fasmer, Ole Bernt; Berle, Jan Øystein; Oedegaard, Ketil J; Hauge, Erik R

    2018-01-01

    Depression and schizophrenia are defined only by their clinical features, and diagnostic separation between them can be difficult. Disturbances in motor activity pattern are central features of both types of disorders. We introduce a new method to analyze time series, called the similarity graph algorithm. Time series of motor activity, obtained from actigraph registrations over 12 days in depressed and schizophrenic patients, were mapped into a graph and we then applied techniques from graph theory to characterize these time series, primarily looking for changes in complexity. The most marked finding was that depressed patients were found to be significantly different from both controls and schizophrenic patients, with evidence of less regularity of the time series, when analyzing the recordings with one-hour intervals. These findings support the contention that there are important differences in control systems regulating motor behavior in patients with depression and schizophrenia. The similarity graph algorithm we have described can easily be applied to the study of other types of time series.

  1. On the continuous dependence with respect to sampling of the linear quadratic regulator problem for distributed parameter system

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.; Wang, C.

    1992-01-01

    The convergence of solutions to the discrete- or sampled-time linear quadratic regulator problem and associated Riccati equation for infinite-dimensional systems to the solutions to the corresponding continuous time problem and equation, as the length of the sampling interval (the sampling rate) tends toward zero (infinity) is established. Both the finite- and infinite-time horizon problems are studied. In the finite-time horizon case, strong continuity of the operators that define the control system and performance index, together with a stability and consistency condition on the sampling scheme are required. For the infinite-time horizon problem, in addition, the sampled systems must be stabilizable and detectable, uniformly with respect to the sampling rate. Classes of systems for which this condition can be verified are discussed. Results of numerical studies involving the control of a heat/diffusion equation, a hereditary or delay system, and a flexible beam are presented and discussed.

  2. The 2011 Mineral, Virginia, earthquake and its significance for seismic hazards in eastern North America: overview and synthesis

    USGS Publications Warehouse

    Horton, J. Wright; Chapman, Martin C.; Green, Russell A.

    2015-01-01

    The earthquake and aftershocks occurred in crystalline rocks within Paleozoic thrust sheets of the Chopawamsic terrane. The main shock and majority of aftershocks delineated the newly named Quail fault zone in the subsurface, and shallow aftershocks defined outlying faults. The earthquake induced minor liquefaction sand boils, but notably there was no evidence of a surface fault rupture. Recurrence intervals, and evidence for larger earthquakes in the Quaternary in this area, remain important unknowns. This event, along with similar events during historical time, is a reminder that earthquakes of similar or larger magnitude pose a real hazard in eastern North America.

  3. Seizure clustering.

    PubMed

    Haut, Sheryl R

    2006-02-01

    Seizure clusters, also known as repetitive or serial seizures, occur commonly in epilepsy. Clustering implies that the occurrence of one seizure may influence the probability of a subsequent seizure; thus, the investigation of the clustering phenomenon yields insights into both specific mechanisms of seizure clustering and more general concepts of seizure occurrence. Seizure clustering has been defined clinically as a number of seizures per unit time and, statistically, as a deviation from a random distribution, or interseizure interval dependence. This review explores the pathophysiology, epidemiology, and clinical implications of clustering, as well as other periodic patterns of seizure occurrence. Risk factors for experiencing clusters and potential precipitants of clustering are also addressed.

  4. Triage sepsis alert and sepsis protocol lower times to fluids and antibiotics in the ED.

    PubMed

    Hayden, Geoffrey E; Tuuri, Rachel E; Scott, Rachel; Losek, Joseph D; Blackshaw, Aaron M; Schoenling, Andrew J; Nietert, Paul J; Hall, Greg A

    2016-01-01

    Early identification of sepsis in the emergency department (ED), followed by adequate fluid hydration and appropriate antibiotics, improves patient outcomes. We sought to measure the impact of a sepsis workup and treatment protocol (SWAT) that included an electronic health record (EHR)-based triage sepsis alert, direct communication, mobilization of resources, and standardized order sets. We conducted a retrospective, quasi-experimental study of adult ED patients admitted with suspected sepsis, severe sepsis, or septic shock. We defined a preimplementation (pre-SWAT) group and a postimplementation (post-SWAT) group and further broke these down into SWAT A (septic shock) and SWAT B (sepsis with normal systolic blood pressure). We performed extensive data comparisons in the pre-SWAT and post-SWAT groups, including demographics, systemic inflammatory response syndrome criteria, time to intravenous fluid bolus, time to antibiotics, length-of-stay times, and mortality rates. There were 108 patients in the pre-SWAT group and 130 patients in the post-SWAT group. The mean time to bolus was 31 minutes less in the postimplementation group, 51 vs 82 minutes (95% confidence interval, 15-46; P value < .01). The mean time to antibiotics was 59 minutes less in the postimplementation group, 81 vs 139 minutes (95% confidence interval, 44-74; P value < .01). Segmented regression modeling did not identify secular trends in these outcomes. There was no significant difference in mortality rates. An EHR-based triage sepsis alert and SWAT protocol led to a significant reduction in the time to intravenous fluids and time to antibiotics in ED patients admitted with suspected sepsis, severe sepsis, and septic shock. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Discrete Time Rescaling Theorem: Determining Goodness of Fit for Discrete Time Statistical Models of Neural Spiking

    PubMed Central

    Haslinger, Robert; Pipa, Gordon; Brown, Emery

    2010-01-01

    One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time rescaling theorem provides a goodness-of-fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model’s spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov–Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies upon assumptions of continuously defined time and instantaneous events. However, spikes have finite width, and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that the finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time rescaling theorem which analytically corrects for the effects of finite resolution. This allows us to define a rescaled time which is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting Generalized Linear Models (GLMs) to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false positive rate of the KS test and greatly increasing the reliability of model evaluation based upon the time rescaling theorem. PMID:20608868

  6. Discrete time rescaling theorem: determining goodness of fit for discrete time statistical models of neural spiking.

    PubMed

    Haslinger, Robert; Pipa, Gordon; Brown, Emery

    2010-10-01

    One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time-rescaling theorem provides a goodness-of-fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model's spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov-Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies on assumptions of continuously defined time and instantaneous events. However, spikes have finite width, and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time-rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time-rescaling theorem that analytically corrects for the effects of finite resolution. This allows us to define a rescaled time that is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting generalized linear models to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false-positive rate of the KS test and greatly increasing the reliability of model evaluation based on the time-rescaling theorem.
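    As a toy illustration of the continuous-time version of the test (not the discrete-time corrections the paper develops), one can rescale ISIs simulated from a constant-rate model and compare them with the Exponential(1) distribution; the rate and sample size here are arbitrary:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    rate = 5.0                                     # modeled (and true) spike rate, Hz
    isis = rng.exponential(1.0 / rate, size=2000)  # simulated interspike intervals

    # Time rescaling: tau_k is the integral of the conditional intensity over the
    # k-th ISI. For a constant rate this is simply rate * ISI; if the model is
    # correct the rescaled intervals are i.i.d. Exponential(1).
    tau = rate * isis

    # Kolmogorov-Smirnov test of the rescaled ISIs against the Exponential(1) CDF
    ks_stat, p_value = stats.kstest(tau, "expon")
    ```

    A model that misstates the rate (or, as the paper shows, one evaluated at coarse temporal discretization) inflates `ks_stat` and can fail this test even when the fit is otherwise good.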

  7. Jointly modeling longitudinal proportional data and survival times with an application to the quality of life data in a breast cancer trial.

    PubMed

    Song, Hui; Peng, Yingwei; Tu, Dongsheng

    2017-04-01

    Motivated by the joint analysis of longitudinal quality of life data and recurrence free survival times from a cancer clinical trial, we present in this paper two approaches to jointly model the longitudinal proportional measurements, which are confined in a finite interval, and survival data. Both approaches assume a proportional hazards model for the survival times. For the longitudinal component, the first approach applies the classical linear mixed model to logit transformed responses, while the second approach directly models the responses using a simplex distribution. A semiparametric method based on a penalized joint likelihood generated by the Laplace approximation is derived to fit the joint model defined by the second approach. The proposed procedures are evaluated in a simulation study and applied to the analysis of the breast cancer data that motivated this research.

  8. The Cretaceous superchron geodynamo: Observations near the tangent cylinder

    PubMed Central

    Tarduno, John A.; Cottrell, Rory D.; Smirnov, Alexei V.

    2002-01-01

    If relationships exist between the frequency of geomagnetic reversals and the morphology, secular variation, and intensity of Earth's magnetic field, they should be best expressed during superchrons, intervals tens of millions of years long lacking reversals. Here we report paleomagnetic and paleointensity data from lavas of the Cretaceous Normal Polarity Superchron that formed at high latitudes near the tangent cylinder that surrounds the solid inner core. The time-averaged field recorded by these lavas is remarkably strong and stable. When combined with global results available from lower latitudes, these data define a time-averaged field that is overwhelmingly dominated by the axial dipole (octupole components are insignificant). These observations suggest that the basic features of the geomagnetic field are intrinsically related. Superchrons may reflect times when the nature of core–mantle boundary heat flux allows the geodynamo to operate at peak efficiency. PMID:12388778

  9. Use of mobile phones in Norway and risk of intracranial tumours.

    PubMed

    Klaeboe, Lars; Blaasaas, Karl Gerhard; Tynes, Tore

    2007-04-01

    To test the hypothesis that exposure to radio-frequency electromagnetic fields from mobile phones increases the incidence of gliomas, meningiomas and acoustic neuromas in adults. Incident cases were patients aged 19-69 years who were diagnosed during 2001-2002 in Southern Norway. Population controls were selected and frequency-matched for age, sex, and residential area. Detailed information about mobile phone use was collected from 289 glioma (response rate 77%), 207 meningioma patients (71%), and 45 acoustic neuroma patients (68%) and from 358 (69%) controls. For regular mobile phone use, defined as use on average at least once a week or more for at least 6 months, the odds ratio was 0.6 (95% confidence interval 0.4-0.9) for gliomas, 0.8 (95% confidence interval 0.5-1.1) for meningiomas and 0.5 (95% confidence interval 0.2-1.0) for acoustic neuromas. Similar results were found with mobile phone use for 6 years or more for gliomas and acoustic neuromas. An exception was meningiomas, where the odds ratio was 1.2 (95% confidence interval 0.6-2.2). Furthermore, no increasing trend was observed for gliomas or acoustic neuromas by increasing duration of regular use, the time since first regular use or cumulative use of mobile phones. The results from the present study indicate that use of mobile phones is not associated with an increased risk of gliomas, meningiomas or acoustic neuromas.

  10. Intermittent Drug Dosing Intervals Guided by the Operational Multiple Dosing Half Lives for Predictable Plasma Accumulation and Fluctuation

    PubMed Central

    Grover, Anita; Benet, Leslie Z.

    2013-01-01

    Intermittent drug dosing intervals are usually initially guided by the terminal pharmacokinetic half life and are dependent on drug formulation. For chronic multiple dosing and for extended release dosage forms, the terminal half life often does not predict the plasma drug accumulation or fluctuation observed. We define and advance applications for the operational multiple dosing half lives for drug accumulation and fluctuation after multiple oral dosing at steady-state. Using Monte Carlo simulation, our results predict a way to maximize the operational multiple dosing half lives relative to the terminal half life by using a first-order absorption rate constant close to the terminal elimination rate constant in the design of extended release dosage forms. In this way, drugs that may be eliminated early in the development pipeline due to a relatively short half life can be formulated to be dosed at intervals three times the terminal half life, maximizing compliance, while maintaining tight plasma concentration accumulation and fluctuation ranges. We also present situations in which the operational multiple dosing half lives will be especially relevant in the determination of dosing intervals, including for drugs that follow a direct PKPD model and have a narrow therapeutic index, as the rate of concentration decrease after chronic multiple dosing (that is not the terminal half life) can be determined via simulation. These principles are illustrated with case studies on valproic acid, diazepam, and anti-hypertensives. PMID:21499748
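    The effect of matching the absorption rate constant to the terminal elimination rate constant can be sketched with the standard one-compartment steady-state superposition formula (textbook pharmacokinetics, not the authors' simulation code; all parameter values are illustrative):

    ```python
    import numpy as np

    def css(t, dose=100.0, V=50.0, ka=2.0, ke=0.1, tau=24.0):
        """Steady-state concentration for one-compartment, first-order absorption,
        repeated oral dosing every tau hours (superposition of all prior doses)."""
        A = dose * ka / (V * (ka - ke))
        return A * (np.exp(-ke * t) / (1 - np.exp(-ke * tau))
                    - np.exp(-ka * t) / (1 - np.exp(-ka * tau)))

    t = np.linspace(0.0, 24.0, 2001)  # one dosing interval at steady state

    def fluctuation(ka):
        c = css(t, ka=ka)
        return (c.max() - c.min()) / c.mean()

    fast = fluctuation(2.0)    # immediate release: ka >> ke
    slow = fluctuation(0.12)   # extended release: ka close to ke
    ```

    Here ke = 0.1/h gives a terminal half-life of about 6.9 h, so tau = 24 h is roughly 3.5 terminal half-lives; bringing ka down toward ke flattens the steady-state profile (`slow < fast`), echoing the abstract's point that slow absorption can support dosing intervals several terminal half-lives long.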

  11. Hospital factors impact variation in emergency department length of stay more than physician factors.

    PubMed

    Krall, Scott P; Cornelius, Angela P; Addison, J Bruce

    2014-03-01

    To analyze the correlation between the many different emergency department (ED) treatment metric intervals and determine if the metrics directly impacted by the physician correlate to the "door to room" interval in an ED (interval determined by ED bed availability). Our null hypothesis was that the cause of the variation in delay to receiving a room was multifactorial and does not correlate to any one metric interval. We collected daily interval averages from the ED information system, Meditech©. Patient flow metrics were collected on a 24-hour basis. We analyzed the relationship between the time intervals that make up an ED visit and the "arrival to room" interval using simple correlation (Pearson Correlation coefficients). Summary statistics of industry standard metrics were also done by dividing the intervals into 2 groups, based on the average ED length of stay (LOS) from the National Hospital Ambulatory Medical Care Survey: 2008 Emergency Department Summary. Simple correlation analysis showed that the doctor-to-discharge time interval had no correlation to the "door to room (waiting room time)" interval (correlation coefficient (CC)=0.000, p=0.96). "Room to doctor" had a low correlation to "door to room" (CC=0.143), while "decision to admitted patients departing the ED time" had a moderate correlation of 0.29 (p<0.001). "New arrivals" (daily patient census) had a strong correlation to longer "door to room" times, 0.657, p<0.001. The "door to discharge" times had a very strong correlation (CC=0.804, p<0.001) to the extended "door to room" time. Physician-dependent intervals had minimal correlation to the variation in arrival to room time. The "door to room" interval was a significant component to the variation in "door to discharge", i.e. LOS. The hospital-influenced "admit decision to hospital bed" interval, i.e. hospital inpatient capacity, had a correlation to delayed "door to room" time.
The other major factor affecting department bed availability was the "total patients per day." The correlation to the increasing "door to room" time also reflects the effect of availability of ED resources (beds) on the patient evaluation time. The time that it took for a patient to receive a room appeared more dependent on the system resources, for example, beds in the ED, as well as in the hospital, than on the physician.

  12. Intact interval timing in circadian CLOCK mutants.

    PubMed

    Cordes, Sara; Gallistel, C R

    2008-08-28

    While progress has been made in determining the molecular basis for the circadian clock, the mechanism by which mammalian brains time intervals measured in seconds to minutes remains a mystery. An obvious question is whether the interval-timing mechanism shares molecular machinery with the circadian timing mechanism. In the current study, we trained circadian CLOCK +/- and -/- mutant male mice in a peak-interval procedure with 10- and 20-s criteria. The mutant mice were more active than their wild-type littermates, but there were no reliable deficits in the accuracy or precision of their timing as compared with wild-type littermates. This suggests that expression of the CLOCK protein is not necessary for normal interval timing.

  13. Transport induced by mean-eddy interaction: I. Theory, and relation to Lagrangian lobe dynamics

    NASA Astrophysics Data System (ADS)

    Ide, Kayo; Wiggins, Stephen

    2015-02-01

    In this paper we develop a method for the estimation of Transport Induced by the Mean-Eddy interaction (TIME) in two-dimensional unsteady flows. The method is based on the dynamical systems approach to fluid transport and can be viewed as a hybrid combination of Lagrangian and Eulerian methods. The (Eulerian) boundaries across which we consider (Lagrangian) transport are kinematically defined by appropriately chosen streamlines of the mean flow. By evaluating the impact of the mean-eddy interaction on transport, the TIME method can be used as a diagnostic tool for transport processes that occur during a specified time interval along a specified boundary segment. We introduce two types of TIME functions: one that quantifies the accumulation of flow properties and another that measures the displacement of the transport geometry. The spatial geometry of transport is described by the so-called pseudo-lobes, and temporal evolution of transport by their dynamics. In the case where the TIME functions are evaluated along a separatrix, the pseudo-lobes have a relationship to the lobes of Lagrangian transport theory. In fact, one of the TIME functions is identical to the Melnikov function that is used to measure the distance, at leading order in a small parameter, between the two invariant manifolds that define the Lagrangian lobes. We contrast the similarities and differences between the TIME and Lagrangian lobe dynamics in detail. An application of the TIME method is carried out for inter-gyre transport in the wind-driven oceanic circulation model and a comparison with the Lagrangian transport theory is made.

  14. Testing independence of bivariate interval-censored data using modified Kendall's tau statistic.

    PubMed

    Kim, Yuneung; Lim, Johan; Park, DoHwan

    2015-11-01

    In this paper, we study a nonparametric procedure to test independence of bivariate interval-censored data, for both current status data (case 1 interval-censored data) and case 2 interval-censored data. To do so, we propose a score-based modification of the Kendall's tau statistic for bivariate interval-censored data. Our modification defines the Kendall's tau statistic with expected numbers of concordant and discordant pairs of data. The performance of the modified approach is illustrated by simulation studies and application to the AIDS study. We compare our method to alternative approaches such as the two-stage estimation method by Sun et al. (Scandinavian Journal of Statistics, 2006) and the multiple imputation method by Betensky and Finkelstein (Statistics in Medicine, 1999b). © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
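    For orientation, the classical count-based Kendall's tau, which the paper modifies by replacing the observed concordant/discordant counts with their expected values under interval censoring, can be sketched as follows; this sketch covers only fully observed pairs:

    ```python
    import numpy as np
    from itertools import combinations

    def kendall_tau(x, y):
        """Classical Kendall's tau from concordant (c) and discordant (d) pair
        counts. The paper's score-based modification replaces c and d with their
        expected values given the observed censoring intervals."""
        c = d = 0
        for i, j in combinations(range(len(x)), 2):
            s = np.sign(x[i] - x[j]) * np.sign(y[i] - y[j])
            c += s > 0   # concordant pair
            d += s < 0   # discordant pair
        n = len(x)
        return (c - d) / (n * (n - 1) / 2)
    ```

    For perfectly concordant data the statistic is 1, and for perfectly discordant data it is -1; under independence it is near 0, which is what the test exploits.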

  15. Night-time heart rate nondipping: clinical and prognostic significance in the general population.

    PubMed

    Cuspidi, Cesare; Facchetti, Rita; Bombelli, Michele; Sala, Carla; Tadic, Marijana; Grassi, Guido; Mancia, Giuseppe

    2018-06-01

    Studies addressing the association between a reduced drop of heart rate (HR) at night with subclinical organ damage and cardiovascular events in the general population are scanty. We evaluated this issue in individuals enrolled in the Pressioni Monitorate E Loro Associazioni study. At entry, 2021 individuals underwent diagnostic tests including laboratory investigations, 24-h ambulatory blood pressure (BP) monitoring and echocardiography. Participants were followed from the initial medical visit for a time interval of 148 ± 27 months. To explore the association of circadian HR rhythm and outcomes, participants were classified in the primary analysis according to quartiles of nocturnal HR decrease. In secondary analyses, the population was also classified according to nondipping nocturnal HR (defined as a drop in average HR at night of less than 10% compared with day-time values) and then into four categories: first, BP/HR dipper; second, BP/HR nondipper; third, HR dipper/BP nondipper; fourth, HR nondipper/BP dipper. A flattened circadian HR rhythm (i.e. lowest quartile of night-time HR dip) was independently associated with left atrial enlargement, but not with left ventricular hypertrophy; moreover, it was predictive of fatal and nonfatal cardiovascular events, independently of several confounders (hazard ratio 1.8, confidence interval: 1.13-2.86, P < 0.01 vs. highest quartile). A blunted dipping of nocturnal HR is associated with preclinical cardiac damage in terms of left atrial enlargement and is predictive of cardiovascular morbidity and mortality in the general population.

  16. Duration of Group A Streptococcus PCR positivity following antibiotic treatment of pharyngitis.

    PubMed

    Homme, Jason H; Greenwood, Corryn S; Cronk, Lisa B; Nyre, Lisa M; Uhl, James R; Weaver, Amy L; Patel, Robin

    2018-02-01

    Polymerase chain reaction (PCR) has high sensitivity and specificity for detection of group A streptococcus (GAS) in throat swabs and is routinely used for GAS pharyngitis diagnosis at our institution. Herein we defined the natural history of throat swab GAS PCR and culture positivity during and following treatment of GAS pharyngitis. Fifty children with a PCR positive GAS throat swab were recruited for participation. Four additional throat swabs were collected over 2 weeks following the initial positive PCR result (during and following a standard course of antibiotic therapy) and tested for GAS using rapid real-time PCR and culture. After the initial positive swab, 45% had a positive PCR 2-4 days, 20% 5-7 days, 18% 8-10 days, 25% 11-13 days, and 20% 14-18 days later. The median time to a negative PCR was 4 days, with the nadir in positive PCR results approximating the end of a typical 10-day treatment interval. Seven subjects remained persistently PCR positive. Culture results remained positive at a stable rate for each time interval, ranging from 5-10%. If a patient presents with symptoms of GAS pharyngitis after previous positive GAS PCR testing and treatment with appropriate antibiotics, it is reasonable to use PCR testing for GAS pharyngitis testing beginning one week after initial testing. Further studies are warranted to determine if this time frame can be applied to PCR testing used to detect other infections. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Solar Wind Turbulence and Intermittency at 0.72 AU - Statistical Approach

    NASA Astrophysics Data System (ADS)

    Teodorescu, E.; Echim, M.; Munteanu, C.; Zhang, T.; Barabash, S. V.; Budnik, E.; Fedorov, A.

    2014-12-01

    Through this analysis we characterize the turbulent magnetic fluctuations measured by the Venus Express magnetometer (VEX-MAG) in the solar wind during the last solar cycle minimum, at a distance of 0.72 AU from the Sun. We analyze data recorded between 2007 and 2009 with time resolutions of 1 Hz and 32 Hz. In correlation with plasma data from the ASPERA instrument, Analyser of Space Plasma and Energetic Atoms, we identify 550 time intervals, at 1 Hz resolution, when VEX is in the solar wind and which satisfy selection criteria defined based on the amount and the continuity of the data. We identify 118 time intervals that correspond to fast solar wind. We compute the power spectral densities (PSD) for Bx, By, Bz, B, B², B∥ and B⊥. We perform a statistical analysis of the spectral indices computed for each of the PSDs and evidence a dependence of the spectral index on the solar wind velocity and a slight difference in power content between parallel and perpendicular components of the magnetic field. We also estimate the scale invariance of fluctuations by computing the Probability Distribution Functions (PDFs) for the Bx, By, Bz, B and B² time series and discuss the implications for intermittent turbulence. Research supported by the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement no 313038/STORM, and a grant of the Romanian Ministry of National Education, CNCS - UEFISCDI, project number PN-II-ID-PCE-2012-4-0418.
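    The spectral-index estimation step can be sketched on synthetic power-law noise; Welch's method and a log-log linear fit stand in for whatever estimator the authors used, and the target index of 5/3 (Kolmogorov-like) is just a convenient choice:

    ```python
    import numpy as np
    from scipy import signal

    rng = np.random.default_rng(1)
    n, fs, alpha = 2**16, 1.0, 5.0 / 3.0   # samples, sampling rate, spectral index

    # Synthesize a time series whose PSD scales as f^(-alpha):
    # power-law amplitudes with random phases, inverted to the time domain.
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-alpha / 2)
    phases = rng.uniform(0.0, 2 * np.pi, size=freqs.size)
    x = np.fft.irfft(amp * np.exp(1j * phases), n=n)

    # Estimate the PSD with Welch's method and fit the spectral index
    # as the slope of the log-log spectrum over an inertial-range band.
    f, psd = signal.welch(x, fs=fs, nperseg=4096)
    mask = (f > 1e-3) & (f < 0.3)           # avoid DC and the highest frequencies
    slope, _ = np.polyfit(np.log10(f[mask]), np.log10(psd[mask]), 1)
    ```

    The fitted `slope` should come out close to -5/3; applied per interval, such slopes are the raw material for the statistical analysis the abstract describes.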

  18. Markov reward processes

    NASA Technical Reports Server (NTRS)

    Smith, R. M.

    1991-01-01

    Numerous applications in the area of computer system analysis can be effectively studied with Markov reward models. These models describe the behavior of the system with a continuous-time Markov chain, where a reward rate is associated with each state. In a reliability/availability model, up states may have reward rate 1 and down states reward rate 0 associated with them. In a queueing model, the number of jobs of a certain type in a given state may be the reward rate attached to that state. In a combined model of performance and reliability, the reward rate of a state may be the computational capacity, or a related performance measure. Expected steady-state reward rate and expected instantaneous reward rate are clearly useful measures of the Markov reward model. More generally, the distribution of accumulated reward or time-averaged reward over a finite time interval may be determined from the solution of the Markov reward model. This information is of great practical significance in situations where the workload can be well characterized (deterministically, or by continuous functions, e.g., distributions). The design process in the development of a computer system is an expensive and long term endeavor. For aerospace applications the reliability of the computer system is essential, as is the ability to complete critical workloads in a well defined real time interval. Consequently, effective modeling of such systems must take into account both performance and reliability. This fact motivates our use of Markov reward models to aid in the development and evaluation of fault tolerant computer systems.
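    A minimal numerical sketch of these measures for a two-state availability model (up state with reward rate 1, down state with reward rate 0; the failure and repair rates are illustrative). The expected accumulated reward over [0, T] is computed here with Van Loan's augmented-matrix trick, one standard way to evaluate the integral of the state probabilities:

    ```python
    import numpy as np
    from scipy.linalg import expm

    # Two-state CTMC: state 0 = up (reward rate 1), state 1 = down (reward rate 0)
    lam, mu = 0.01, 1.0                 # failure and repair rates (per hour)
    Q = np.array([[-lam,  lam],
                  [  mu,  -mu]])        # generator matrix
    r = np.array([1.0, 0.0])            # reward rate per state
    p0 = np.array([1.0, 0.0])           # start in the up state
    T = 100.0

    # Expected instantaneous reward rate at time T: p(T) . r
    inst = (p0 @ expm(Q * T)) @ r

    # Expected accumulated reward over [0, T]: the upper-right block of
    # expm([[Q, diag(r)], [0, 0]] * T) equals the integral of expm(Q*s) @ diag(r).
    n = Q.shape[0]
    A = np.zeros((2 * n, 2 * n))
    A[:n, :n] = Q
    A[:n, n:] = np.diag(r)
    accumulated = p0 @ expm(A * T)[:n, n:] @ np.ones(n)
    ```

    With these rates the steady-state availability is mu/(lam + mu) ≈ 0.990, so the accumulated reward over 100 h (the expected up time) is just over 99 h.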

  19. Defining Incident Cases of Epilepsy in Administrative Data

    PubMed Central

    Bakaki, Paul M.; Koroukian, Siran M.; Jackson, Leila W.; Albert, Jeffrey M.; Kaiboriboon, Kitti

    2013-01-01

    Purpose To determine the minimum enrollment duration for identifying incident cases of epilepsy in administrative data. Methods We performed a retrospective dynamic cohort study using Ohio Medicaid data from 1992–2006 to identify a total of 5,037 incident epilepsy cases who had at least 1 year of follow-up prior to epilepsy diagnosis (epilepsy-free interval). The incidence for epilepsy-free intervals from 1 to 8 years, overall and stratified by pre-existing disability status, was examined. The graphical approach between the slopes of incidence estimates and the epilepsy-free intervals was used to identify the minimum epilepsy-free interval that minimized misclassification of prevalent as incident epilepsy cases. Results As the length of epilepsy-free interval increased, the incidence rates decreased. A graphical plot showed that the decline in incidence of epilepsy became nearly flat beyond the third epilepsy-free interval. Conclusion The minimum of 3-year epilepsy-free interval is needed to differentiate incident from prevalent cases in administrative data. Shorter or longer epilepsy-free intervals could result in over- or under-estimation of epilepsy incidence. PMID:23791310

  20. A novel approach based on preference-based index for interval bilevel linear programming problem.

    PubMed

    Ren, Aihong; Wang, Yuping; Xue, Xingsi

    2017-01-01

    This paper proposes a new methodology for solving the interval bilevel linear programming problem in which all coefficients of both objective functions and constraints are considered as interval numbers. In order to keep as much uncertainty of the original constraint region as possible, the original problem is first converted into an interval bilevel programming problem with interval coefficients in both objective functions only through normal variation of interval number and chance-constrained programming. With the consideration of different preferences of different decision makers, the concept of the preference level that the interval objective function is preferred to a target interval is defined based on the preference-based index. Then a preference-based deterministic bilevel programming problem is constructed in terms of the preference level and the order relation [Formula: see text]. Furthermore, the concept of a preference δ-optimal solution is given. Subsequently, the constructed deterministic nonlinear bilevel problem is solved with the help of estimation of distribution algorithm. Finally, several numerical examples are provided to demonstrate the effectiveness of the proposed approach.

  1. Transformation to equivalent dimensions—a new methodology to study earthquake clustering

    NASA Astrophysics Data System (ADS)

    Lasocki, Stanislaw

    2014-05-01

    A seismic event is represented by a point in a parameter space, quantified by the vector of parameter values. Studies of earthquake clustering involve considering distances between such points in multidimensional spaces. However, the metrics of the earthquake parameters differ, hence a metric in the multidimensional parameter space cannot be readily defined. The present paper proposes a solution to this metric problem based on a concept of probabilistic equivalence of earthquake parameters. Under this concept the lengths of parameter intervals are equivalent if the probability for earthquakes to take values from either interval is the same. Earthquake clustering is studied in the space of equivalent rather than original dimensions, where the equivalent dimension (ED) of a parameter is its cumulative distribution function. All transformed parameters are of linear scale in the [0, 1] interval, and the distance between earthquakes represented by vectors in any ED space is Euclidean. The unknown, in general, cumulative distributions of earthquake parameters are estimated from earthquake catalogues by means of the model-free non-parametric kernel estimation method. The potential of the transformation to EDs is illustrated by two examples of use: finding hierarchically closest neighbours in time-space and assessing temporal variations of earthquake clustering in a specific 4-D phase space.
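    A minimal sketch of the transformation, using a plain empirical CDF in place of the paper's kernel estimate, on a hypothetical two-parameter catalogue (magnitude and occurrence time, both synthetic):

    ```python
    import numpy as np

    def to_equivalent_dimension(values):
        """Map a parameter to its equivalent dimension: its (empirical) CDF value,
        so that transformed values lie in (0, 1]. The paper uses a model-free
        kernel estimate of the CDF; a rank-based empirical CDF is the simplest
        stand-in."""
        ranks = np.argsort(np.argsort(values))   # 0 .. n-1
        return (ranks + 1) / len(values)

    rng = np.random.default_rng(2)
    # Hypothetical catalogue: 500 events with exponential-like magnitudes
    # and uniformly scattered occurrence times (seconds)
    magnitude = rng.exponential(0.5, size=500) + 2.0
    time_s = np.sort(rng.uniform(0.0, 1e7, size=500))

    ed = np.column_stack([to_equivalent_dimension(magnitude),
                          to_equivalent_dimension(time_s)])

    # Distances between events are now ordinary Euclidean distances in [0, 1]^2
    d01 = np.linalg.norm(ed[0] - ed[1])
    ```

    Because every transformed coordinate lives in (0, 1], heterogeneous quantities such as magnitude and time can be combined in a single Euclidean distance, which is what enables the nearest-neighbour clustering analyses the abstract mentions.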

  2. The Anaesthetic-ECT Time Interval in Electroconvulsive Therapy Practice--Is It Time to Time?

    PubMed

    Gálvez, Verònica; Hadzi-Pavlovic, Dusan; Wark, Harry; Harper, Simon; Leyden, John; Loo, Colleen K

    2016-01-01

    Because most common intravenous anaesthetics used in ECT have anticonvulsant properties, their plasma-brain concentration at the time of seizure induction might affect seizure expression. The quality of ECT seizure expression has been repeatedly associated with efficacy outcomes. The time interval between the anaesthetic bolus injection and the ECT stimulus (anaesthetic-ECT time interval) will determine the anaesthetic plasma-brain concentration when the ECT stimulus is administered. The aim of this study was to examine the effect of the anaesthetic-ECT time interval on ECT seizure quality and duration. The anaesthetic-ECT time interval was recorded in 771 ECT sessions (84 patients). Right unilateral brief pulse ECT was applied. Anaesthesia given was propofol (1-2 mg/kg) and succinylcholine (0.5-1.0 mg/kg). Seizure quality indices (slow wave onset, amplitude, regularity, stereotypy and post-ictal suppression) and duration were rated through a structured rating scale by a single blinded trained rater. Linear Mixed Effects Models analysed the effect of the anaesthetic-ECT time interval on seizure quality indices, controlling for propofol dose (mg), ECT charge (mC), ECT session number, days between ECT, age (years), initial seizure threshold (mC) and concurrent medication. Longer anaesthetic-ECT time intervals led to significantly higher quality seizures (p < 0.001 for amplitude, regularity, stereotypy and post-ictal suppression). These results suggest that the anaesthetic-ECT time interval is an important factor to consider in ECT practice. This time interval should be extended to as long as practically possible to facilitate the production of better quality seizures. Close collaboration between the anaesthetist and the psychiatrist is essential. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Sea-level evaluation of digitally implemented turbojet engine control functions

    NASA Technical Reports Server (NTRS)

    Arpasi, D. J.; Cwynar, D. S.; Wallhagen, R. E.

    1972-01-01

    The standard hydromechanical control system of a turbojet engine was replaced with a digital control system that implemented the same control laws. A detailed discussion of the digital control system in use with the engine is presented. The engine was operated in a sea-level test stand. The effects of control update interval are defined, and a method for extending this interval by using digital compensation is discussed.

  4. Comparison of glucose fluctuations between day- and night-time measured using a continuous glucose monitoring system in diabetic dogs.

    PubMed

    Mori, Akihiro; Kurishima, Miyuki; Oda, Hitomi; Saeki, Kaori; Arai, Toshiro; Sako, Toshinori

    2013-01-31

    Monitoring of blood glucose concentration is important to evaluate the diabetic status of dogs. Continuous glucose monitoring systems (CGMS) have been applied in veterinary medicine for glucose monitoring in diabetic dogs. The purpose of the study was to evaluate the daily glycemic profiles obtained with CGMS and compare glucose fluctuations between day- and night-time in diabetic dogs. Five diabetic dogs were used in this study and were treated with either NPH insulin or insulin detemir. For data analyses, day-time was defined as 9:00 am-9:00 pm and night-time as 9:00 pm-9:00 am. Using glucose profiles, we determined the mean glucose concentrations (1- and 12-hr intervals), and times spent in hyperglycemia >200 mg/dl or hypoglycemia <60 mg/dl. None of the parameters differed significantly between day-time and night-time in dogs treated with NPH insulin or insulin detemir. In conclusion, this study confirmed, using CGMS, that there are no differences in glucose fluctuations between day- and night-time, in diabetic dogs on a similar feeding regimen and insulin administration.
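
    The day/night partition used in this study is straightforward to reproduce. The sketch below is a minimal illustration, assuming a hypothetical input of (timestamp, mg/dl) readings at a fixed 5-minute sampling interval; it bins CGMS readings into the study's day-time (9:00 am-9:00 pm) and night-time periods and summarizes mean glucose and time spent above 200 mg/dl or below 60 mg/dl:

```python
from datetime import datetime, time

def summarize_cgms(readings, sample_minutes=5):
    """Bin (timestamp, glucose mg/dl) readings into day-time (9:00-21:00)
    and night-time, per the study's definition, and summarize each period.
    The input format and 5-min sampling interval are assumptions."""
    periods = {"day": [], "night": []}
    for ts, glucose in readings:
        key = "day" if time(9, 0) <= ts.time() < time(21, 0) else "night"
        periods[key].append(glucose)
    summary = {}
    for key, values in periods.items():
        if values:
            summary[key] = {
                "mean_mg_dl": sum(values) / len(values),
                "min_above_200": sample_minutes * sum(g > 200 for g in values),
                "min_below_60": sample_minutes * sum(g < 60 for g in values),
            }
    return summary
```

    Comparing the "day" and "night" summaries across dogs would then mirror the paper's day- versus night-time comparison.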

  5. Magnitude and frequency of floods in the United States, Part 3-A, Ohio River Basin except Cumberland and Tennessee River Basins

    USGS Publications Warehouse

    Speer, Paul R.; Gamble, Charles R.

    1965-01-01

    This report presents a means of determining the probable magnitude and frequency of floods of any recurrence interval from 1.1 to 50 years at most points on streams in the Ohio River basin except Cumberland and Tennessee River basins. Curves are defined that show the relation between the drainage area and the mean annual flood in eight hydrologic areas, and composite frequency curves define the relation of a flood of any recurrence interval from 1.1 to 50 years to the mean annual flood. These two relations are based upon gaging-station records having 10 or more years of record not materially affected by storage or diversion, and the results obtainable from them will represent the magnitude and frequency of natural floods within the range and recurrence intervals defined by the base data. The report also contains a compilation of flood records at all sites in the area at which records have been collected for 5 or more consecutive years. As far as was possible at each location for which discharge has been determined, the tabulations include all floods above a selected base. Where only gage heights have been obtained or where the data did not warrant computation of peak discharges above a selected base, only annual peaks are shown. The maximum known flood discharges for the streamflow stations and miscellaneous points except Ohio River main stem stations, together with areal floods of 10- and 50-year recurrence intervals, are plotted against the size of drainage area for each flood region and hydrologic area to provide a convenient means of judging the frequency of the maximum known floods that have been recorded for these points.

  6. Risk factors and mortality associated with default from multidrug-resistant tuberculosis treatment.

    PubMed

    Franke, Molly F; Appleton, Sasha C; Bayona, Jaime; Arteaga, Fernando; Palacios, Eda; Llaro, Karim; Shin, Sonya S; Becerra, Mercedes C; Murray, Megan B; Mitnick, Carole D

    2008-06-15

    Completing treatment for multidrug-resistant (MDR) tuberculosis (TB) may be more challenging than completing first-line TB therapy, especially in resource-poor settings. The objectives of this study were to (1) identify risk factors for default from MDR TB therapy (defined as prolonged treatment interruption), (2) quantify mortality among patients who default from treatment, and (3) identify risk factors for death after default from treatment. We performed a retrospective chart review to identify risk factors for default from MDR TB therapy and conducted home visits to assess mortality among patients who defaulted from such therapy. Sixty-seven (10.0%) of 671 patients defaulted from MDR TB therapy. The median time to treatment default was 438 days (interquartile range, 152-710 days), and 27 (40.3%) of the 67 patients who defaulted from treatment had culture-positive sputum at the time of default. Substance use (hazard ratio, 2.96; 95% confidence interval, 1.56-5.62; P = .001), substandard housing conditions (hazard ratio, 1.83; 95% confidence interval, 1.07-3.11; P = .03), later year of enrollment (hazard ratio, 1.62, 95% confidence interval, 1.09-2.41; P = .02), and health district (P = .02) predicted default from therapy in a multivariable analysis. Severe adverse events did not predict default from therapy. Forty-seven (70.1%) of 67 patients who defaulted from therapy were successfully traced; of these, 25 (53.2%) had died. Poor bacteriologic response, <1 year of treatment at the time of default, low education level, and diagnosis with a psychiatric disorder significantly predicted death after default in a multivariable analysis. The proportion of patients who defaulted from MDR TB treatment was relatively low. The large proportion of patients who had culture-positive sputum at the time of treatment default underscores the public health importance of minimizing treatment default. Prognosis for patients who defaulted from therapy was poor. Interventions aimed at preventing treatment default may reduce TB-related mortality.

  7. Seventeen-year evaluation of breast cancer screening: the DOM project, The Netherlands. Diagnostisch Onderzoek (investigation) Mammacarcinoom.

    PubMed Central

    Miltenburg, G. A.; Peeters, P. H.; Fracheboud, J.; Collette, H. J.

    1998-01-01

    The DOM project is a non-randomized population-based breast cancer screening programme in Utrecht which started in 1974-75. The 17-year effect has been evaluated by a case-control study of breast cancer deaths during the period 1975-92 in women living in the city of Utrecht, born between 1911 and 1925, whose breast cancers were diagnosed after the initiation of the DOM project. Controls (three for each case) were defined as women having the same year of birth as the case, living in the city of Utrecht at the time the case died, and having had the opportunity of screening in the DOM project. Screening in the period 1975-92 indicated a breast cancer mortality reduction of 46% (odds ratio of 0.54, 95% confidence interval 0.37-0.79). The strongest protective effect was found at a screening interval of 2 years or less (mortality reduction of 62%, odds ratio of 0.38), and for the highest number of screens (mortality reduction of 68%, odds ratio of 0.32 for more than four screens). Exclusion of breast cancer deaths that occurred within 1 year of diagnosis, to allow for 'lead-time' bias, gave an odds ratio of 0.61. Early diagnosis of breast cancer by screening reduces breast cancer mortality in the long term. Bias due to the study design may slightly overestimate the protective effect. A screening programme with a 2-yearly, or smaller, interval between successive screens will improve the protection of screening. PMID:9764591

  8. Effect of bolus volume and viscosity on pharyngeal automated impedance manometry variables derived for broad dysphagia patients.

    PubMed

    Omari, Taher I; Dejaeger, Eddy; Tack, Jan; Van Beckevoort, Dirk; Rommel, Nathalie

    2013-06-01

    Automated impedance manometry (AIM) analysis measures swallow variables defining bolus timing, pressure, contractile vigour, and bolus presence, which are combined to derive a swallow risk index (SRI) correlating with aspiration. In a heterogeneous cohort of dysphagia patients, we assessed the impact of bolus volume and viscosity on AIM variables. We studied 40 patients (average age = 46 years). Swallowing of boluses was recorded with manometry, impedance, and videofluoroscopy. AIMplot software was used to derive functional variables: peak pressure (PeakP), pressure at nadir impedance (PNadImp), time from nadir impedance to peak pressure (TNadImp-PeakP), the interval of impedance drop in the distal pharynx (flow interval, FI), upper oesophageal sphincter (UES) relaxation interval (UES RI), nadir UES pressure (Nad UESP), UES intrabolus pressure (UES IBP), and UES resistance. The SRI was derived using the formula SRI = (FI * PNadImp)/(PeakP * (TNadImp-PeakP + 1)) * 100. A total of 173 liquid, 44 semisolid, and 33 solid boluses were analysed. The SRI was elevated in relation to aspiration. PeakP increased with volume. SRI was not significantly altered by bolus volume. PNadImp, UES IBP, and UES resistance increased with viscosity. SRI was lower with increased viscosity. In patients with dysphagia, the SRI is elevated in relation to aspiration, reduced by bolus viscosity, and not affected by bolus volume. These data provide evidence that pharyngeal AIM analysis may have clinical utility for assessing deglutitive aspiration risk to liquid boluses.
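
    The SRI formula quoted in the abstract can be stated directly in code. This is a minimal sketch of the published formula only; the units (seconds for the intervals, mmHg for the pressures) are assumptions, and the AIMplot derivation of each input variable is not reproduced:

```python
def swallow_risk_index(fi, p_nad_imp, peak_p, t_nadimp_peakp):
    """SRI = (FI * PNadImp) / (PeakP * (TNadImp-PeakP + 1)) * 100,
    as given in the abstract. Intervals in s, pressures in mmHg (assumed)."""
    return (fi * p_nad_imp) / (peak_p * (t_nadimp_peakp + 1)) * 100
```

    The structure of the formula makes the reported directions plausible: a longer flow interval or higher pressure at nadir impedance raises the SRI, while stronger peak pressure lowers it.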

  9. Complex reference values for endocrine and special chemistry biomarkers across pediatric, adult, and geriatric ages: establishment of robust pediatric and adult reference intervals on the basis of the Canadian Health Measures Survey.

    PubMed

    Adeli, Khosrow; Higgins, Victoria; Nieuwesteeg, Michelle; Raizman, Joshua E; Chen, Yunqi; Wong, Suzy L; Blais, David

    2015-08-01

    Defining laboratory biomarker reference values in a healthy population and understanding the fluctuations in biomarker concentrations throughout life and between sexes are critical to clinical interpretation of laboratory test results in different disease states. The Canadian Health Measures Survey (CHMS) has collected blood samples and health information from the Canadian household population. In collaboration with the Canadian Laboratory Initiative on Pediatric Reference Intervals (CALIPER), the data have been analyzed to determine reference value distributions and reference intervals for several endocrine and special chemistry biomarkers in pediatric, adult, and geriatric age groups. CHMS collected data and blood samples from thousands of community participants aged 3 to 79 years. We used serum samples to measure 13 immunoassay-based special chemistry and endocrine markers. We assessed reference value distributions and, after excluding outliers, calculated age- and sex-specific reference intervals, along with corresponding 90% CIs, according to CLSI C28-A3 guidelines. We observed fluctuations in biomarker reference values across the pediatric, adult, and geriatric age range, with stratification required on the basis of age for all analytes. Additional sex partitions were required for apolipoprotein AI, homocysteine, ferritin, and high sensitivity C-reactive protein. The unique collaboration between CALIPER and CHMS has enabled, for the first time, a detailed examination of the changes in various immunochemical markers that occur in healthy individuals of different ages. The robust age- and sex-specific reference intervals established in this study provide insight into the complex biological changes that take place throughout development and aging and will contribute to improved clinical test interpretation. © 2015 American Association for Clinical Chemistry.
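
    For readers unfamiliar with how such limits are computed, a common nonparametric approach in the spirit of CLSI C28-A3 takes the central 95% of results from healthy subjects. The sketch below omits the outlier exclusion and the 90% CIs on the limits that the study reports, and the percentile interpolation scheme is an illustrative choice:

```python
def reference_interval(values):
    """Nonparametric reference interval sketch: the 2.5th and 97.5th
    percentiles (central 95%) of healthy-subject results, using simple
    linear interpolation between order statistics."""
    xs = sorted(values)
    def pct(p):
        k = (len(xs) - 1) * p        # fractional rank
        lo, hi = int(k), min(int(k) + 1, len(xs) - 1)
        return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)
    return pct(0.025), pct(0.975)
```

    Age- and sex-specific intervals, as in the CALIPER/CHMS analysis, would apply this within each partition of the data.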

  10. Prescription of Zolpidem and the Risk of Fatal Motor Vehicle Collisions: A Population-Based, Case-Crossover Study from South Korea.

    PubMed

    Yang, Bo Ram; Kim, Ye-Jee; Kim, Mi-Sook; Jung, Sun-Young; Choi, Nam-Kyong; Hwang, Byungkwan; Park, Byung-Joo; Lee, Joongyub

    2018-05-23

    Zolpidem is one of the most frequently used hypnotics worldwide, but associations with serious adverse effects such as motor vehicle collisions have been reported. The objective of this study was to evaluate the association of fatal motor vehicle collisions with a prescription for zolpidem, considering the context of the motor vehicle collisions. We conducted a case-crossover study, where each case served as its own control, by linking data about fatal motor vehicle collisions from the Korean Road Traffic Authority between 2010 and 2014 with national health insurance data. The case period was defined as 1 day before the fatal motor vehicle collisions, and was matched to four control periods at 90-day intervals. Conditional logistic regression was performed to calculate the odds ratio for fatal motor vehicle collisions associated with zolpidem exposure, and odds ratios were adjusted for time-varying exposure to confounding medications. A stratified analysis was performed by age group (younger than 65 years or not), the Charlson Comorbidity Index, and whether patients were new zolpidem users. Among the 714 subjects, the adjusted odds ratio for a fatal motor vehicle collision associated with a prescription for zolpidem the previous day was 1.48 (95% confidence interval 1.06-2.07). After stratification, a significantly increased risk was observed in subjects with a high Charlson Comorbidity Index (odds ratio 1.81; 95% confidence interval 1.16-2.84), the younger age group (odds ratio: 1.62; 95% confidence interval 1.03-2.56), and new zolpidem users (odds ratio 2.37; 95% confidence interval 1.40-4.00). A prescription for zolpidem on the previous day was significantly related to an increased risk of fatal motor vehicle collisions in this population-based case-crossover study.
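
    The case-crossover design above compares each subject's exposure window with his or her own earlier windows. A minimal sketch of that matching, under the assumption that the four control periods are taken at 90-day steps backward from the case day (one reading of the design; the study's exact window construction may differ):

```python
from datetime import date, timedelta

def case_and_control_days(collision_date, n_controls=4, interval_days=90):
    """Case period = 1 day before the fatal collision; control periods
    matched at fixed 90-day intervals before the case day (assumption)."""
    case_day = collision_date - timedelta(days=1)
    controls = [case_day - timedelta(days=interval_days * i)
                for i in range(1, n_controls + 1)]
    return case_day, controls
```

    Exposure (a zolpidem prescription) on the case day versus the control days then feeds the conditional logistic regression described in the abstract.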

  11. Depletion of mesospheric sodium during extended period of pulsating aurora

    NASA Astrophysics Data System (ADS)

    Takahashi, T.; Hosokawa, K.; Nozawa, S.; Tsuda, T. T.; Ogawa, Y.; Tsutsumi, M.; Hiraki, Y.; Fujiwara, H.; Kawahara, T. D.; Saito, N.; Wada, S.; Kawabata, T.; Hall, C.

    2017-01-01

    We quantitatively evaluated the Na density depletion due to charge transfer reactions between Na atoms and molecular ions produced by high-energy electron precipitation during a pulsating aurora (PsA). An extended period of PsA was captured by an all-sky camera at the European Incoherent Scatter (EISCAT) radar Tromsø site (69.6°N, 19.2°E) during a 2 h interval from 00:00 to 02:00 UT on 25 January 2012. During this period, using the EISCAT very high frequency (VHF) radar, we detected three intervals of intense ionization below 100 km that were probably caused by precipitation of high-energy electrons during the PsA. In these intervals, the sodium lidar at Tromsø observed characteristic depletion of Na density at altitudes between 97 and 100 km. These Na density depletions lasted for 8 min and represented 5-8% of the background Na layer. To examine the cause of this depletion, we modeled the depletion rate based on charge transfer reactions with NO+ and O2+ while changing the R value, which is defined as the ratio of NO+ to O2+ densities, from 1 to 10. The correlation coefficients between observed and modeled Na density depletion calculated with typical value R = 3 for time intervals T1, T2, and T3 were 0.66, 0.80, and 0.67, respectively. The observed Na density depletion rates fall within the range of modeled depletion rates calculated with R from 1 to 10. This suggests that the charge transfer reactions triggered by the auroral impact ionization at low altitudes are the predominant process responsible for Na density depletion during PsA intervals.

  12. Initial Systolic Time Interval (ISTI) as a Predictor of Intradialytic Hypotension (IDH)

    NASA Astrophysics Data System (ADS)

    Biesheuvel, J. D.; Vervloet, M. G.; Verdaasdonk, R. M.; Meijer, J. H.

    2013-04-01

    In haemodialysis treatment the clearance and volume control by the kidneys of a patient are partially replaced by intermittent haemodialysis. Because this artificial process is performed on a limited time scale, unphysiological imbalances in the fluid compartments of the body occur that can lead to intradialytic hypotension (IDH). An IDH endangers the efficacy of the haemodialysis session and is associated with dismal clinical endpoints, including mortality. A diagnostic method that predicts the occurrence of these drops in blood pressure could facilitate timely measures for the prevention of IDH. The present study investigates whether the Initial Systolic Time Interval (ISTI) can provide such a diagnostic method. The ISTI is defined as the time difference between the R-peak in the electrocardiogram (ECG) and the C-wave in the impedance cardiogram (ICG) and is considered to be a non-invasive assessment of the time delay between the electrical and mechanical activity of the heart. This time delay has previously been found to depend on autonomic nervous function as well as preload of the heart. Therefore, it can be expected that ISTI may predict an imminent IDH caused by a low circulating blood volume. This ongoing observational clinical study investigates the relationship between changes in ISTI and subsequent drops in blood pressure during haemodialysis. A recording of a complicated dialysis session showed a significant correlation between a drop in blood pressure, a decrease in relative blood volume and a substantial increase in ISTI. An uncomplicated dialysis session, in which a considerable amount of fluid was also removed, showed no such correlations; both blood pressure and ISTI remained stable. In conclusion, the preliminary results of the present study show a substantial response of ISTI to haemodynamic instability, indicating an application in optimization and individualisation of the dialysis process.
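
    Given the definition above, ISTI can be measured beat by beat once R-peak and C-wave times have been detected. A sketch under the assumption that both inputs are sorted event times in seconds; the peak detection itself is not shown, and pairing each R-peak with the first following C-wave is a simplification of real ECG/ICG alignment:

```python
def isti_per_beat(r_peaks, c_waves):
    """Pair each ECG R-peak with the first following ICG C-wave and
    return the per-beat time differences (the ISTI) in seconds.
    Assumes both input sequences are sorted times in seconds."""
    istis = []
    ci = iter(c_waves)
    c = next(ci, None)
    for r in r_peaks:
        # advance to the first C-wave occurring after this R-peak
        while c is not None and c <= r:
            c = next(ci, None)
        if c is None:
            break
        istis.append(c - r)
    return istis
```

    A sustained rise in these per-beat values during a session is the kind of signal the study associates with haemodynamic instability.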

  13. Achieving algorithmic resilience for temporal integration through spectral deferred corrections

    DOE PAGES

    Grout, Ray; Kolla, Hemanth; Minion, Michael; ...

    2017-05-08

    Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual of the first correction iteration and changes slowly between successive iterations. Here, we demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.
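
    The stopping rule described in the abstract, iterate until the residual is both small relative to the first sweep's residual and slowly changing, can be sketched independently of any particular SDC implementation. Here `sweep` and `residual` are hypothetical callables standing in for one deferred-correction sweep and the residual norm, and the tolerances are illustrative defaults, not values from the paper:

```python
def resilient_sweeps(sweep, residual, max_sweeps=20,
                     rel_tol=1e-8, stall_tol=1e-2):
    """Apply correction sweeps until the residual is small relative to the
    first sweep's residual AND changes slowly between successive sweeps.
    Continuing past the usual sweep count is what absorbs transient soft
    faults in this strategy. Returns (sweeps used, final residual)."""
    sweep()
    r_first = r_prev = residual()
    for k in range(2, max_sweeps + 1):
        sweep()
        r = residual()
        small = r <= rel_tol * r_first
        stalled = abs(r - r_prev) <= stall_tol * abs(r_prev)
        if small and stalled:
            return k, r
        r_prev = r
    return max_sweeps, r_prev
```

    A soft fault that spikes the residual mid-iteration simply fails both conditions, so the loop keeps sweeping until the correction has damped the error again.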

  14. Achieving algorithmic resilience for temporal integration through spectral deferred corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grout, Ray; Kolla, Hemanth; Minion, Michael

    2017-05-08

    Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual of the first correction iteration and changes slowly between successive iterations. We demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.

  15. Achieving algorithmic resilience for temporal integration through spectral deferred corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grout, Ray; Kolla, Hemanth; Minion, Michael

    2017-05-08

    Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual of the first correction iteration and changes slowly between successive iterations. We demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.

  16. Association of Hearing Impairment With Incident Frailty and Falls in Older Adults

    PubMed Central

    Kamil, Rebecca J.; Betz, Joshua; Powers, Becky Brott; Pratt, Sheila; Kritchevsky, Stephen; Ayonayon, Hilsa N.; Harris, Tammy B.; Helzner, Elizabeth; Deal, Jennifer A.; Martin, Kathryn; Peterson, Matthew; Satterfield, Suzanne; Simonsick, Eleanor M.; Lin, Frank R.

    2017-01-01

    Objective We aimed to determine whether hearing impairment (HI) in older adults is associated with the development of frailty and falls. Method Longitudinal analysis of observational data from the Health, Aging and Body Composition study of 2,000 participants aged 70 to 79 was conducted. Hearing was defined by the pure-tone-average of hearing thresholds at 0.5, 1, 2, and 4 kHz in the better hearing ear. Frailty was defined as a gait speed of <0.60 m/s and/or inability to rise from a chair without using arms. Falls were assessed annually by self-report. Results Older adults with moderate-or-greater HI had a 63% increased risk of developing frailty (adjusted hazard ratio [HR] = 1.63, 95% confidence interval [CI] = [1.26, 2.12]) compared with normal-hearing individuals. Moderate-or-greater HI was significantly associated with a greater annual percent increase in odds of falling over time (9.7%, 95% CI = [7.0, 12.4] compared with normal hearing, 4.4%, 95% CI = [2.6, 6.2]). Discussion HI is independently associated with the risk of frailty in older adults and with greater odds of falling over time. PMID:26438083

  17. Intact Interval Timing in Circadian CLOCK Mutants

    PubMed Central

    Cordes, Sara; Gallistel, C. R.

    2008-01-01

    While progress has been made in determining the molecular basis for the circadian clock, the mechanism by which mammalian brains time intervals measured in seconds to minutes remains a mystery. An obvious question is whether the interval timing mechanism shares molecular machinery with the circadian timing mechanism. In the current study, we trained circadian CLOCK +/− and −/− mutant male mice in a peak-interval procedure with 10 and 20-s criteria. The mutant mice were more active than their wild-type littermates, but there were no reliable deficits in the accuracy or precision of their timing as compared with wild-type littermates. This suggests that expression of the CLOCK protein is not necessary for normal interval timing. PMID:18602902

  18. Outcome of total knee replacement following explantation and cemented spacer therapy.

    PubMed

    Ghanem, Mohamed; Zajonz, Dirk; Bollmann, Juliane; Geissler, Vanessa; Prietzel, Torsten; Moche, Michael; Roth, Andreas; Heyde, Christoph-E; Josten, Christoph

    2016-01-01

    Infection after total knee replacement (TKR) is one of the serious complications which must be pursued with a very effective therapeutic concept. In most cases this means revision arthroplasty, in which one-setting and two-setting procedures are distinguished. Healing of infection is the conditio sine qua non for re-implantation. This retrospective work presents an assessment of the success rate after a two-setting revision arthroplasty of the knee following periprosthetic infection. It further draws conclusions concerning the optimal timing of re-implantation. A total of 34 patients were enrolled in this study from September 2005 to December 2013. 35 re-implantations were carried out following explantation of the total knee prosthesis and implantation of a cemented spacer. The patient group comprised 53% (18) males and 47% (16) females. The average age at re-implantation was 72.2 years (ranging from 54 to 85 years). We particularly evaluated the microbial spectrum, the interval between explantation and re-implantation, the number of surgeries that were necessary prior to re-implantation as well as the postoperative course. We reported 31.4% (11) reinfections following re-implantation surgeries. The number of reinfections declined with increasing time interval between explantation and re-implantation. Patients who developed reinfections were operated on (re-implantation) after an average of 4.47 months; those with an uncomplicated course were operated on after an average of 6.79 months. Nevertheless, we noticed no essential differences in outcome with regard to the number of surgeries carried out prior to re-implantation. Mobile spacers yielded better outcomes than temporary arthrodesis with intramedullary fixation. No uniform treatment strategy exists after periprosthetic infections. In particular, no optimal timing can be stated concerning re-implantation. Our data indicate that a longer time interval between explantation and re-implantation reduces the rate of reinfection. From our point of view, the optimal timing for re-implantation depends on various specific factors and should therefore be defined individually.

  19. Outcome of total knee replacement following explantation and cemented spacer therapy

    PubMed Central

    Ghanem, Mohamed; Zajonz, Dirk; Bollmann, Juliane; Geissler, Vanessa; Prietzel, Torsten; Moche, Michael; Roth, Andreas; Heyde, Christoph-E.; Josten, Christoph

    2016-01-01

    Background: Infection after total knee replacement (TKR) is one of the serious complications which must be pursued with a very effective therapeutic concept. In most cases this means revision arthroplasty, in which one-setting and two-setting procedures are distinguished. Healing of infection is the conditio sine qua non for re-implantation. This retrospective work presents an assessment of the success rate after a two-setting revision arthroplasty of the knee following periprosthetic infection. It further draws conclusions concerning the optimal timing of re-implantation. Patients and methods: A total of 34 patients were enrolled in this study from September 2005 to December 2013. 35 re-implantations were carried out following explantation of the total knee prosthesis and implantation of a cemented spacer. The patient group comprised 53% (18) males and 47% (16) females. The average age at re-implantation was 72.2 years (ranging from 54 to 85 years). We particularly evaluated the microbial spectrum, the interval between explantation and re-implantation, the number of surgeries that were necessary prior to re-implantation as well as the postoperative course. Results: We reported 31.4% (11) reinfections following re-implantation surgeries. The number of reinfections declined with increasing time interval between explantation and re-implantation. Patients who developed reinfections were operated on (re-implantation) after an average of 4.47 months; those with an uncomplicated course were operated on after an average of 6.79 months. Nevertheless, we noticed no essential differences in outcome with regard to the number of surgeries carried out prior to re-implantation. Mobile spacers yielded better outcomes than temporary arthrodesis with intramedullary fixation. Conclusion: No uniform treatment strategy exists after periprosthetic infections. In particular, no optimal timing can be stated concerning re-implantation. Our data indicate that a longer time interval between explantation and re-implantation reduces the rate of reinfection. From our point of view, the optimal timing for re-implantation depends on various specific factors and should therefore be defined individually. PMID:27066391

  20. Developing tools for the safety specification in risk management plans: lessons learned from a pilot project.

    PubMed

    Cooper, Andrew J P; Lettis, Sally; Chapman, Charlotte L; Evans, Stephen J W; Waller, Patrick C; Shakir, Saad; Payvandi, Nassrin; Murray, Alison B

    2008-05-01

    Following the adoption of the ICH E2E guideline, risk management plans (RMP) defining the cumulative safety experience and identifying limitations in safety information are now required for marketing authorisation applications (MAA). A collaborative research project was conducted to gain experience with tools for presenting and evaluating data in the safety specification. This paper presents those tools found to be useful and the lessons learned from their use. Archive data from a successful MAA were utilised. Methods were assessed for demonstrating the extent of clinical safety experience, evaluating the sensitivity of the clinical trial data to detect treatment differences and identifying safety signals from adverse event and laboratory data to define the extent of safety knowledge with the drug. The extent of clinical safety experience was demonstrated by plots of patient exposure over time. Adverse event data were presented using dot plots, which display the percentages of patients with the events of interest, the odds ratio, and 95% confidence interval. Power and confidence interval plots were utilised for evaluating the sensitivity of the clinical database to detect treatment differences. Box and whisker plots were used to display laboratory data. This project enabled us to identify new evidence-based methods for presenting and evaluating clinical safety data. These methods represent an advance in the way safety data from clinical trials can be analysed and presented. This project emphasises the importance of early and comprehensive planning of the safety package, including evaluation of the use of epidemiology data.

  1. Longitudinal expression of Toll-like receptors on dendritic cells in uncomplicated pregnancy and postpartum

    PubMed Central

    Young, Brett C.; Stanic, Aleksandar K.; Panda, Britta; Rueda, Bo R.; Panda, Alexander

    2014-01-01

    OBJECTIVE Toll-like receptors (TLRs) are integral parts of the innate immune system and have been implicated in complications of pregnancy. The longitudinal expression of TLRs on dendritic cells in the maternal circulation during uncomplicated pregnancies is unknown. The objective of this study was to prospectively evaluate TLRs 1-9 as expressed on dendritic cells in the maternal circulation at defined intervals throughout pregnancy and postpartum. STUDY DESIGN This was a prospective cohort of 30 pregnant women with uncomplicated pregnancies and 30 nonpregnant controls. TLR and cytokine expression was measured in unstimulated dendritic cells at 4 defined intervals during pregnancy and postpartum. Basal expression of TLRs and cytokines was measured by multicolor flow cytometry. The percentage of dendritic cells positive for each TLR was compared with both nonpregnant and postpartum levels with multivariate linear regression. RESULTS TLRs 1, 7, and 9 were elevated compared with nonpregnant controls, with persistent elevation of TLR 1 and interleukin-12 (IL-12) into the postpartum period. Concordantly, levels of IL-6, IL-12, interferon alpha, and tumor necrosis factor alpha increased during pregnancy and returned to levels similar to nonpregnant controls during the postpartum period. The elevated levels of TLR 1 and IL-12 were persistent postpartum, challenging notions that immunologic changes during pregnancy resolve after the prototypical postpartum period. CONCLUSION Normal pregnancy is associated with time-dependent changes in TLR expression compared with nonpregnant controls; these findings may help elucidate immunologic dysfunction in complicated pregnancies. PMID:24291497

  2. Working times of elastomeric impression materials determined by dimensional accuracy.

    PubMed

    Tan, E; Chai, J; Wozniak, W T

    1996-01-01

    The working times of five poly(vinyl siloxane) impression materials were estimated by evaluating the dimensional accuracy of stone dies of impressions of a standard model made at successive time intervals. The stainless steel standard model was represented by two abutments having known distances between landmarks in three dimensions. Three dimensions in the x-, y-, and z-axes of the stone dies were measured with a traveling microscope. A time interval was rejected as being within the working time if the percentage change of the resultant dies, in any dimension, was statistically different from those measured from stone dies from previous time intervals. The absolute dimensions of those dies from the rejected time interval also must have exceeded all those from previous time intervals. Results showed that the working times estimated with this method generally were about 30 seconds longer than those recommended by the manufacturers.

  3. Single-channel autocorrelation functions: the effects of time interval omission.

    PubMed Central

    Ball, F G; Sansom, M S

    1988-01-01

    We present a general mathematical framework for analyzing the dynamic aspects of single channel kinetics incorporating time interval omission. An algorithm for computing model autocorrelation functions, incorporating time interval omission, is described. We show, under quite general conditions, that the form of these autocorrelations is identical to that which would be obtained if time interval omission was absent. We also show, again under quite general conditions, that zero correlations are necessarily a consequence of the underlying gating mechanism and not an artefact of time interval omission. The theory is illustrated by a numerical study of an allosteric model for the gating mechanism of the locust muscle glutamate receptor-channel. PMID:2455553

  4. Device and method for screening crystallization conditions in solution crystal growth

    NASA Technical Reports Server (NTRS)

    Carter, Daniel C. (Inventor)

    1995-01-01

    A device and method for detecting optimum protein crystallization conditions and for growing protein crystals in either 1g or microgravity environments, comprising a housing defining at least one pair of chambers for containing crystallization solutions, is presented. The housing further defines an orifice therein for providing fluid communication between the chambers. The orifice is adapted to receive a tube which contains a gelling substance for limiting the rate of diffusive mixing of the crystallization solutions. The solutions are diffusively mixed over a period of time defined by the quantity of gelling substance sufficient to achieve equilibration and to substantially reduce density driven convection disturbances therein. The device further includes endcaps to seal the first and second chambers. One of the endcaps includes a dialysis chamber which contains protein solution in which protein crystals are grown. Once the endcaps are in place, the protein solution is exposed to the crystallization solutions wherein the solubility of the protein solution is reduced at a rate responsive to the rate of diffusive mixing of the crystallization solutions. This allows for a controlled approach to supersaturation and allows for screening of crystal growth conditions at preselected intervals.

  5. Device and Method for Screening Crystallization Conditions in Solution Crystal Growth

    NASA Technical Reports Server (NTRS)

    Carter, Daniel C. (Inventor)

    1997-01-01

    A device and method for detecting optimum protein crystallization conditions and for growing protein crystals in either 1 g or microgravity environments comprising a housing defining at least one pair of chambers for containing crystallization solutions. The housing further defines an orifice therein for providing fluid communication between the chambers. The orifice is adapted to receive a tube which contains a gelling substance for limiting the rate of diffusive mixing of the crystallization solutions. The solutions are diffusively mixed over a period of time defined by the quantity of gelling substance sufficient to achieve equilibration and to substantially reduce density driven convection disturbances therein. The device further includes endcaps to seal the first and second chambers. One of the endcaps includes a dialysis chamber which contains protein solution in which protein crystals are grown. Once the endcaps are in place, the protein solution is exposed to the crystallization solutions wherein the solubility of the protein solution is reduced at a rate responsive to the rate of diffusive mixing of the crystallization solutions. This allows for a controlled approach to supersaturation and allows for screening of crystal growth conditions at preselected intervals.

  6. Sensitivity of a phase-sensitive optical time-domain reflectometer with a semiconductor laser source

    NASA Astrophysics Data System (ADS)

    Alekseev, A. E.; Tezadov, Ya A.; Potapov, V. T.

    2018-06-01

    In the present paper we perform, for the first time, an analysis of the average sensitivity of a coherent phase-sensitive optical time-domain reflectometer (phase-OTDR) with a semiconductor laser source to external actions. The sensitivity of this OTDR can be defined in a conventional manner via average SNR at its output, which in turn is defined by the average useful signal power and the average intensity noise power in the OTDR spatial channels in the bandwidth defined by the OTDR sampling frequency. The average intensity noise power is considered in detail in a previous paper. In the current paper we examine the average useful signal power at the output of a phase-OTDR. The analysis of the average useful signal power of a phase-OTDR is based on the study of a fiber scattered-light interferometer (FSLI), which is treated as a constituent part of a phase-OTDR. In the analysis, one of the conventional phase-OTDR schemes with a rectangular dual-pulse probe signal is considered. The FSLI which corresponds to this OTDR scheme has two scattering fiber segments with additional time delay introduced between backscattered fields. The average useful signal power and the resulting average SNR at the output of this FSLI are determined by the degree of coherence of the semiconductor laser source, the length of the scattering fiber segments, and by the additional time delay between the scattering fiber segments. The average useful signal power characteristic of the corresponding phase-OTDR is determined by analogous parameters: the source coherence, the time durations of the parts constituting the dual-pulse, and the time interval which separates these parts. In the paper an expression for the average useful signal power of a phase-OTDR is theoretically derived and experimentally verified. Based on the found average useful signal power of a phase-OTDR and the average intensity noise power, derived in the previous paper, the average SNR of a phase-OTDR is defined.
    Setting the average SNR to 1, the minimum detectable external-action amplitude in a defined spectral band is determined for our particular phase-OTDR setup. We also derive a simple relation for the average useful signal power and the average SNR that results under the assumption that the laser source coherence is high. The results of the paper can serve as the basis for further development of the concept of phase-OTDR sensitivity.

  7. Crossover between structured and well-mixed networks in an evolutionary prisoner's dilemma game

    NASA Astrophysics Data System (ADS)

    Dai, Qionglin; Cheng, Hongyan; Li, Haihong; Li, Yuting; Zhang, Mei; Yang, Junzhong

    2011-07-01

    In a spatial evolutionary prisoner’s dilemma game (PDG), individuals interact with their neighbors and update their strategies according to some rules. As is well known, cooperators are destined to become extinct in a well-mixed population, whereas they could emerge and be sustained on a structured network. In this work, we introduce a simple model to investigate the crossover between a structured network and a well-mixed one in an evolutionary PDG. In the model, each link j is designated a rewiring parameter τj, which defines the time interval between two successive rewiring events for link j. By adjusting the rewiring parameter τ (the mean time interval for any link in the network), we could change a structured network into a well-mixed one. For the link rewiring events, three situations are considered: one synchronous situation and two asynchronous situations. Simulation results show that there are three regimes of τ: large τ where the density of cooperators ρc rises to ρc,∞ (the value of ρc for the case without link rewiring), small τ where the mean-field description for a well-mixed network is applicable, and moderate τ where the crossover between a structured network and a well-mixed one happens.
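The per-link rewiring rule described in this record can be sketched in a few lines. This is an illustrative reading of the model only (the function and parameter names are my own, and the payoff and strategy-update steps of the full PDG are omitted): each link j carries its own interval τj, and whenever that interval elapses the link is reconnected to a randomly chosen node.

```python
import random

def step_rewiring(links, timers, tau, nodes, t):
    """Rewire each link whose per-link interval has elapsed at time t.

    links  : list of [a, b] node pairs (mutated in place)
    timers : next scheduled rewiring time for each link
    tau    : mean rewiring interval; per-link intervals are drawn around it
    """
    for j, link in enumerate(links):
        if t >= timers[j]:
            # Detach one randomly chosen endpoint and reconnect it to a
            # random node outside the link; as tau -> 0 this drives the
            # structured network toward an effectively well-mixed one.
            end = random.randrange(2)
            candidates = [n for n in nodes if n not in link]
            link[end] = random.choice(candidates)
            # Draw the next rewiring time for this link (exponential
            # waiting times, an assumption not stated in the abstract).
            timers[j] = t + random.expovariate(1.0 / tau)
    return links

random.seed(0)
nodes = list(range(10))
links = [[0, 1], [2, 3], [4, 5]]
timers = [0.0, 0.0, 0.0]
step_rewiring(links, timers, 2.0, nodes, 1.0)
```

Small τ corresponds to the regime where the mean-field description applies; large τ recovers the static structured network.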

  8. Zero entropy continuous interval maps and MMLS-MMA property

    NASA Astrophysics Data System (ADS)

    Jiang, Yunping

    2018-06-01

    We prove that the flow generated by any continuous interval map with zero topological entropy is minimally mean-attractable and minimally mean-L-stable. One of the consequences is that any oscillating sequence is linearly disjoint from all flows generated by all continuous interval maps with zero topological entropy. In particular, the Möbius function is linearly disjoint from all flows generated by all continuous interval maps with zero topological entropy (Sarnak’s conjecture for continuous interval maps). Another consequence is a non-trivial example of a flow having discrete spectrum. We also define a log-uniform oscillating sequence and show a result in ergodic theory for comparison. This material is based upon work supported by the National Science Foundation. It is also partially supported by a collaboration grant from the Simons Foundation (grant number 523341) and PSC-CUNY awards and a grant from NSFC (grant number 11571122).

  9. Interval Neutrosophic Sets and Their Application in Multicriteria Decision Making Problems

    PubMed Central

    Zhang, Hong-yu; Wang, Jian-qiang; Chen, Xiao-hong

    2014-01-01

    As a generalization of fuzzy sets and intuitionistic fuzzy sets, neutrosophic sets have been developed to represent uncertain, imprecise, incomplete, and inconsistent information existing in the real world. Interval neutrosophic sets (INSs) have been proposed to address issues with a set of numbers in the real unit interval, rather than a single specific number. However, reliable operations for INSs, as well as INS aggregation operators and decision-making methods, are still lacking. For this purpose, the operations for INSs are defined and a comparison approach is put forward based on the related research of interval valued intuitionistic fuzzy sets (IVIFSs) in this paper. On the basis of the operations and comparison approach, two interval neutrosophic number aggregation operators are developed. Then, a method for multicriteria decision making problems is explored applying the aggregation operators. In addition, an example is provided to illustrate the application of the proposed method. PMID:24695916

  10. Interval neutrosophic sets and their application in multicriteria decision making problems.

    PubMed

    Zhang, Hong-yu; Wang, Jian-qiang; Chen, Xiao-hong

    2014-01-01

    As a generalization of fuzzy sets and intuitionistic fuzzy sets, neutrosophic sets have been developed to represent uncertain, imprecise, incomplete, and inconsistent information existing in the real world. Interval neutrosophic sets (INSs) have been proposed to address issues with a set of numbers in the real unit interval, rather than a single specific number. However, reliable operations for INSs, as well as INS aggregation operators and decision-making methods, are still lacking. For this purpose, the operations for INSs are defined and a comparison approach is put forward based on the related research of interval valued intuitionistic fuzzy sets (IVIFSs) in this paper. On the basis of the operations and comparison approach, two interval neutrosophic number aggregation operators are developed. Then, a method for multicriteria decision making problems is explored applying the aggregation operators. In addition, an example is provided to illustrate the application of the proposed method.

  11. PK/PD Modelling of the QT Interval: a Step Towards Defining the Translational Relationship Between In Vitro, Awake Beagle Dogs, and Humans.

    PubMed

    Marostica, Eleonora; Van Ammel, Karel; Teisman, Ard; Gallacher, David; Van Bocxlaer, Jan; De Ridder, Filip; Boussery, Koen; Vermeulen, An

    2016-07-01

    Inhibiting the human ether-a-go-go-related gene (hERG)-encoded potassium ion channel is positively correlated with QT-interval prolongation in vivo, which is considered a risk factor for the occurrence of Torsades de Pointes (TdP). A pharmacokinetic/pharmacodynamic model was developed for four compounds that reached the clinic, to relate drug-induced QT-interval change in awake dogs and humans and to derive a translational scaling factor a1. Overall, dogs were more sensitive than humans to QT-interval change, an a1 of 1.5 was found, and a 10% current inhibition in vitro produced a higher percent QT-interval change in dogs as compared to humans. The QT-interval changes in dogs were predictive for humans. In vitro and in vivo information could reliably describe the effects in humans. Robust translational knowledge is likely to reduce the need for expensive thorough QT studies; therefore, expanding this work to more compounds is recommended.

  12. Questa Baseline and Pre-Mining Ground-Water Quality Investigation. 1. Depth to Bedrock Determinations Using Shallow Seismic Data Acquired in the Straight Creek Drainage Near Red River, New Mexico

    USGS Publications Warehouse

    Powers, Michael H.; Burton, Bethany L.

    2004-01-01

    In late May and early June of 2002, the U.S. Geological Survey (USGS) acquired four P-wave seismic profiles across the Straight Creek drainage near Red River, New Mexico. The data were acquired to support a larger effort to investigate baseline and pre-mining ground-water quality in the Red River basin (Nordstrom and others, 2002). For ground-water flow modeling, knowledge of the thickness of the valley fill material above the bedrock is required. When curved-ray refraction tomography was used with the seismic first arrival times, the resulting images of interval velocity versus depth clearly show a sharp velocity contrast where the bedrock interface is expected. The images show that the interpreted buried bedrock surface is neither smooth nor sharp, but it is clearly defined across the valley along the seismic line profiles. The bedrock models defined by the seismic refraction images are consistent with the well data.

  13. Four dimensional imaging of E. coli nucleoid organization and dynamics in living cells

    PubMed Central

    Fisher, J. K.; Bourniquel, A.; Witz, G.; Weiner, B.; Prentiss, M.; Kleckner, N.

    2013-01-01

    Visualization of living E. coli nucleoids, defined by HupA-mCherry, reveals a discrete, dynamic helical ellipsoid. Three basic features emerge. (i) Nucleoid density efficiently coalesces into longitudinal bundles, giving a stiff, low DNA density ellipsoid. (ii) This ellipsoid is radially confined within the cell cylinder. Radial confinement gives helical shape and drives and directs global nucleoid dynamics, including sister segregation. (iii) Longitudinal density waves flux back and forth along the nucleoid, with 5–10% of density shifting within 5 s, enhancing internal nucleoid mobility. Furthermore, sisters separate end-to-end in sequential discontinuous pulses, each elongating the nucleoid by 5–15%. Pulses occur at 20-min intervals, at defined cell cycle times. This progression is mediated by sequential installation and release of programmed tethers, implying cyclic accumulation and relief of intra-nucleoid mechanical stress. These effects could comprise a chromosome-based cell cycle engine. Overall, the presented results suggest a general conceptual framework for bacterial nucleoid morphogenesis and dynamics. PMID:23623305

  14. Analytic study of orbiter landing profiles

    NASA Technical Reports Server (NTRS)

    Walker, H. J.

    1981-01-01

    A broad survey of possible orbiter landing configurations was made with specific goals of defining boundaries for the landing task. The results suggest that the center of the corridors between marginal and routine represents a more or less optimal preflare condition for regular operations. Various constraints used to define the boundaries are based largely on qualitative judgements from earlier flight experience with the X-15 and lifting body research aircraft. The results should serve as useful background for expanding and validating landing simulation programs. The analytic approach offers a particular advantage in identifying trends due to the systematic variation of factors such as vehicle weight, load factor, approach speed, and aim point. Limitations, such as a constant load factor during the flare and a fixed gear deployment time interval, can be removed by increasing the flexibility of the computer program. This analytic definition of orbiter landing profiles may suggest additional studies, including more configurations or more comparisons of landing profiles within and beyond the corridor boundaries.

  15. Insights into Ocean Acidification During the Middle Eocene Climatic Optimum from Boron Isotopes at Southern Ocean Site 738

    NASA Astrophysics Data System (ADS)

    Moebius, I.; Hoenisch, B.; Friedrich, O.

    2015-12-01

    The Middle Eocene Climatic Optimum (MECO) is a ~650-kyr interval of global warming, with a brief ~50-kyr peak warming interval and an abrupt termination. Deep sea and surface ocean temperature evolution across this interval are fairly well constrained, but thus far we have little understanding of the mechanisms responsible for the gradual warming and rapid recovery. Carbonate mass accumulation rates suggest a shoaling of the carbonate compensation depth, and studies on alkenones indicate increasing atmospheric CO2 levels during the MECO. This suggests an increase in surface ocean CO2, and consequently ocean acidification. However, the severity and timing of the proposed ocean acidification with respect to the onset, peak warming and the termination are currently not well resolved. The boron isotopic composition (δ11B) recorded in planktic foraminifer shells offers an opportunity to infer oceanic pH across this interval. We are working on a boron isotope reconstruction from Southern Ocean IODP site 738 and South Atlantic IODP site 1263, covering 42.0 to 38.5 Ma. These sites are characterized by good carbonate preservation, and well-defined age models have been established. Additionally, ecology, nutrient content and bottom-water oxygenation have been shown to change significantly across the event towards a more eutrophic, periodically oxygen-depleted environment supporting different biological communities. We selected the planktic foraminifera species Acarinina spinuloinflata for this study because it is symbiont-bearing, suggesting a near-surface habitat and little vertical migration in the water column, and because of its abundance in the samples. δ11B data will be translated to surface ocean pH, and atmospheric pCO2 will be approximated to refine knowledge about the carbon cycle during this time. Parallel analysis of two core sites will help to evaluate the tenacity of the data.

  16. A one-kilogram quartz resonator as a mass standard.

    PubMed

    Vig, John; Howe, David

    2013-02-01

    The SI unit of mass, the kilogram, is defined by a single artifact, the International Prototype Kilogram. This artifact, the primary mass standard, suffers from long-term instabilities that are neither well understood nor easily monitored. A secondary mass standard consisting of a 1-kg quartz resonator in ultrahigh vacuum is proposed. The frequency stability of such a resonator is likely to be far higher than the mass stability of the primary mass standard. Moreover, the resonator would provide a link to the SI time-interval unit. When compared with a laboratory-grade atomic frequency standard or GPS time, the frequency of the resonator could be monitored, on a continuous basis, with 10^-15 precision in only a few days of averaging. It could also be coordinated, worldwide, with other resonator mass standards without the need to transport the standards.
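The kind of continuous frequency monitoring against an atomic standard mentioned in this record is conventionally quantified with the Allan deviation. The sketch below is illustrative only (it is not the authors' procedure): a minimal non-overlapping Allan deviation over fractional-frequency samples.

```python
def allan_deviation(y, m=1):
    """Non-overlapping Allan deviation of fractional-frequency samples y
    at averaging factor m (averaging time = m * sample interval)."""
    # Average consecutive, non-overlapping blocks of m samples.
    n = len(y) // m
    ybar = [sum(y[i * m:(i + 1) * m]) / m for i in range(n)]
    # Allan variance: half the mean squared first difference of the
    # block averages; the deviation is its square root.
    diffs = [(ybar[i + 1] - ybar[i]) ** 2 for i in range(n - 1)]
    return (sum(diffs) / (2 * (n - 1))) ** 0.5
```

A perfectly stable frequency record gives zero Allan deviation; averaging over longer blocks (larger m) is what drives the attainable precision down over days of data, as the abstract describes.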

  17. Thermalization of oscillator chains with onsite anharmonicity and comparison with kinetic theory

    DOE PAGES

    Mendl, Christian B.; Lu, Jianfeng; Lukkarinen, Jani

    2016-12-02

    We perform microscopic molecular dynamics simulations of particle chains with an onsite anharmonicity to study relaxation of spatially homogeneous states to equilibrium, and directly compare the simulations with the corresponding Boltzmann-Peierls kinetic theory. The Wigner function serves as a common interface between the microscopic and kinetic level. We demonstrate quantitative agreement after an initial transient time interval. In particular, besides energy conservation, we observe the additional quasiconservation of the phonon density, defined via an ensemble average of the related microscopic field variables and exactly conserved by the kinetic equations. On superkinetic time scales, density quasiconservation is lost while energy remains conserved, and we find evidence for eventual relaxation of the density to its canonical ensemble value. However, the precise mechanism remains unknown and is not captured by the Boltzmann-Peierls equations.

  18. Thermalization of oscillator chains with onsite anharmonicity and comparison with kinetic theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendl, Christian B.; Lu, Jianfeng; Lukkarinen, Jani

    We perform microscopic molecular dynamics simulations of particle chains with an onsite anharmonicity to study relaxation of spatially homogeneous states to equilibrium, and directly compare the simulations with the corresponding Boltzmann-Peierls kinetic theory. The Wigner function serves as a common interface between the microscopic and kinetic level. We demonstrate quantitative agreement after an initial transient time interval. In particular, besides energy conservation, we observe the additional quasiconservation of the phonon density, defined via an ensemble average of the related microscopic field variables and exactly conserved by the kinetic equations. On superkinetic time scales, density quasiconservation is lost while energy remains conserved, and we find evidence for eventual relaxation of the density to its canonical ensemble value. However, the precise mechanism remains unknown and is not captured by the Boltzmann-Peierls equations.

  19. On the Helicity in 3D-Periodic Navier-Stokes Equations II: The Statistical Case

    NASA Astrophysics Data System (ADS)

    Foias, Ciprian; Hoang, Luan; Nicolaenko, Basil

    2009-09-01

    We study the asymptotic behavior of the statistical solutions to the Navier-Stokes equations using the normalization map [9]. It is then applied to the study of mean energy, mean dissipation rate of energy, and mean helicity of the spatial periodic flows driven by potential body forces. The statistical distribution of the asymptotic Beltrami flows are also investigated. We connect our mathematical analysis with the empirical theory of decaying turbulence. With appropriate mathematically defined ensemble averages, the Kolmogorov universal features are shown to be transient in time. We provide an estimate for the time interval in which those features may still be present. Our collaborator and friend Basil Nicolaenko passed away in September of 2007, after this work was completed. Honoring his contribution and friendship, we dedicate this article to him.

  20. EAGLE Monitors by Collecting Facts and Generating Obligations

    NASA Technical Reports Server (NTRS)

    Barringer, Howard; Goldberg, Allen; Havelund, Klaus; Sen, Koushik

    2003-01-01

    We present a rule-based framework, called EAGLE, that has been shown to be capable of defining and implementing a range of finite trace monitoring logics, including future and past time temporal logic, extended regular expressions, real-time and metric temporal logics, interval logics, forms of quantified temporal logics, and so on. A monitor for an EAGLE formula checks if a finite trace of states satisfies the given formula. We present, in detail, an algorithm for the synthesis of monitors for EAGLE. The algorithm is implemented as a Java application and involves novel techniques for rule definition, manipulation and execution. Monitoring is achieved on a state-by-state basis, avoiding any need to store the input trace of states. Our initial experiments have been successful: EAGLE detected a previously unknown bug while testing a planetary rover controller.

  1. Measuring the effects of supratherapeutic doses of levofloxacin on healthy volunteers using four methods of QT correction and periodic and continuous ECG recordings.

    PubMed

    Noel, Gary J; Goodman, Daniel B; Chien, Shuchean; Solanki, Bhavna; Padmanabhan, Mukund; Natarajan, Jaya

    2004-05-01

    A clinical trial was conducted in healthy volunteers using both periodic and continuous ECG recordings to assess the effect of increasing doses of levofloxacin on the QT and QTc interval. Periodic and continuous ECGs were recorded before and after subjects were dosed with placebo and increasing doses of levofloxacin (500 mg, 1000 mg, 1500 mg) that included doses twice the maximum recommended dose of 750 mg in a double-blind, randomized, four-period, four-sequence crossover trial. Mean heart rate (HR) and the QT and QTc interval after dosing with levofloxacin and placebo were compared, and HR-QT interval relationships defined by linear regression analysis were calculated. After single doses of 1000 and 1500 mg of levofloxacin, HR increased significantly, as measured by periodic and continuous ECG recordings. This transient increase occurred at times of peak plasma concentration and was without symptoms. Mean QT intervals after placebo and mean intervals after levofloxacin were indistinguishable. Using periodic ECG recordings, single doses of 1500 mg were associated with small increases in QTc that were statistically significant. In contrast, an effect on QTc was shown only using the Bazett formula with data obtained from continuous ECG recordings. Together with the finding that levofloxacin does not influence HR-QT relationships, these findings suggest that levofloxacin has little effect on prolonging ventricular repolarization and that small increases in HR associated with high doses of levofloxacin contribute to the drug's apparent effect on QTc. Single doses of 1000 or 1500 mg of levofloxacin transiently increase HR without affecting the uncorrected QT interval. Differences in mean QTc after levofloxacin compared to placebo vary depending on the correction formula used and whether the data analyzed are from periodic or continuous ECG recordings. 
This work suggests that using continuous ECG recordings in assessing QT/QTc effects of drugs may be of value, particularly with drugs that might influence HR.
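The correction-formula dependence discussed in this record can be made concrete with the two standard corrections, Bazett (QTc = QT/RR^(1/2)) and Fridericia (QTc = QT/RR^(1/3)), both of which are well established; the numeric values below are illustrative only.

```python
def qtc_bazett(qt_ms, rr_s):
    """Bazett correction: QTc = QT / RR^(1/2), with RR in seconds."""
    return qt_ms / rr_s ** 0.5

def qtc_fridericia(qt_ms, rr_s):
    """Fridericia correction: QTc = QT / RR^(1/3), with RR in seconds."""
    return qt_ms / rr_s ** (1.0 / 3.0)

# At 60 bpm (RR = 1 s) both corrections leave QT unchanged; at higher
# heart rates Bazett inflates QTc more steeply than Fridericia.
print(qtc_bazett(400, 1.0), qtc_fridericia(400, 1.0))   # 60 bpm
print(qtc_bazett(400, 0.64), qtc_fridericia(400, 0.64)) # ~94 bpm
```

This is why a drug that transiently raises heart rate without changing the uncorrected QT, as the abstract describes for high-dose levofloxacin, can show an apparent QTc increase under the Bazett formula but not under others.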

  2. Varying intervals of antiretroviral medication dispensing to improve outcomes for HIV patients (The INTERVAL Study): study protocol for a randomized controlled trial.

    PubMed

    Hoffman, Risa; Bardon, Ashley; Rosen, Sydney; Fox, Matthew; Kalua, Thoko; Xulu, Thembi; Taylor, Angela; Sanne, Ian

    2017-10-13

    Requirements for frequent dispensing of antiretroviral therapy (ART) place demands on health systems and can lead to suboptimal adherence and disengagement in care for patients due to the time and cost of frequent clinic visits. Rigorous data are needed to define optimal ART dispensing strategies and to evaluate the impact of a longer medication supply on retention and virologic suppression and determine whether this strategy lowers costs for both the patient and the health system. To date, no randomized studies have tested the benefits of 6-month dispensing of ART compared to 3-month and standard of care approaches. This study will be an unblinded cluster-randomized, matched controlled trial conducted among 8200 stable, HIV-infected individuals age 18 years and older on ART in Malawi and Zambia, to compare three ART dispensing intervals on the outcomes of retention in care (primary outcome), virologic suppression, and cost-effectiveness. Thirty clusters will be matched according to country, facility type, and ART cohort size and randomized to one of three study arms: standard of care, 3-month dispensing, and 6-month dispensing. Study participants will be followed, and outcomes will be measured at 12, 24, and 36 months. A subset of participants (n = 240) and providers (n = 180) will also participate in qualitative interviews to evaluate feasibility and acceptability of different ART dispensing intervals. This study will be the first to compare 6-month and 3-month ART dispensing intervals for stable, HIV-infected individuals in Malawi and Zambia. We focus on outcomes relevant to country programs, including retention, virologic suppression, and cost-effectiveness. Results from the study will help resource-limited health systems better understand the full scope of outcomes resulting from various ART dispensing intervals and help to inform health policy decisions. ClinicalTrials.gov, NCT03101592 . Registered on 18 March 2017. 
Pan African Clinical Trials, PACTR201706002336105. Registered on 2 June 2017.

  3. Variations in rupture process with recurrence interval in a repeated small earthquake

    USGS Publications Warehouse

    Vidale, J.E.; Ellsworth, W.L.; Cole, A.; Marone, Chris

    1994-01-01

    In theory and in laboratory experiments, friction on sliding surfaces such as rock, glass and metal increases with time since the previous episode of slip. This time dependence is a central pillar of the friction laws widely used to model earthquake phenomena. On natural faults, other properties, such as rupture velocity, porosity and fluid pressure, may also vary with the recurrence interval. Eighteen repetitions of the same small earthquake, separated by intervals ranging from a few days to several years, allow us to test these laboratory predictions in situ. The events with the longest time since the previous earthquake tend to have about 15% larger seismic moment than those with the shortest intervals, although this trend is weak. In addition, the rupture durations of the events with the longest recurrence intervals are more than a factor of two shorter than for the events with the shortest intervals. Both decreased duration and increased friction are consistent with progressive fault healing during the time of stationary contact.

  4. Variation in the days supply field for osteoporosis medications in Ontario.

    PubMed

    Burden, Andrea M; Huang, Anjie; Tadrous, Mina; Cadarette, Suzanne M

    2013-01-01

    We examined pharmacy claims for osteoporosis medications dispensed in the community (78 %) and long-term care (LTC) to determine if days supply values matched expected dosing intervals. Results identify potential reporting errors that can have implications for drug exposure misclassification, particularly in LTC where only 59 % of reported values matched expected values. The days supply field is commonly used to examine patterns of drug utilization and classify drug exposure, yet its accuracy has received little attention. We sought to describe the days supply reported for osteoporosis drugs and examine if values matched expected therapeutic dosing intervals. We examined days supply values for osteoporosis medications submitted to the Ontario Drug Benefits program for seniors, 1997-2011. Days supply values were evaluated by dosing regimen and setting (community or long-term care [LTC]) and compared to pre-defined expected values. We defined expected days supply by the therapeutic dosing interval: daily in 7- or 30-day intervals, or as 100 days; weekly in 7- or 30-day intervals; monthly and daily nasal spray in 28- or 30-day intervals; and cyclical etidronate as a 90-day supply. We identified 17,615,404 osteoporosis prescriptions, with 78 % dispensed in the community. Most daily oral prescriptions were dispensed by an expected therapeutic dosing interval (97 %). Annual IV zoledronic acid was most commonly dispensed as a 1-day supply (62 %). Distinct differences in agreement were observed for other regimens, with the expected days supply more commonly reported in community versus LTC: cyclical etidronate (86 % vs. 40 %), weekly (91 % vs. 60 %), monthly (94 % vs. 35 %), and nasal spray (84 % vs. 40 %). Results suggest that inaccuracies in the days supply field exist, particularly among prescriptions dispensed in LTC. Inaccurate reporting may have significant implications for osteoporosis drug exposure misclassification.
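
    The expected-value check described above can be sketched in a few lines. The regimen labels and the mapping below are illustrative reconstructions of the study's pre-defined expected values, not field codes from the Ontario Drug Benefits data.

```python
# Sketch of the days-supply check: each regimen maps to the expected values
# defined in the study; a claim is flagged when its reported days supply is
# not among them. Regimen names are illustrative labels, not database codes.
EXPECTED_DAYS_SUPPLY = {
    "daily_oral": {7, 30, 100},          # daily in 7- or 30-day intervals, or 100 days
    "weekly": {7, 30},                   # weekly in 7- or 30-day intervals
    "monthly": {28, 30},                 # monthly in 28- or 30-day intervals
    "daily_nasal_spray": {28, 30},       # daily nasal spray in 28- or 30-day intervals
    "cyclical_etidronate": {90},         # cyclical etidronate as a 90-day supply
}

def matches_expected(regimen: str, days_supply: int) -> bool:
    """True when the reported days supply matches a therapeutic dosing interval."""
    return days_supply in EXPECTED_DAYS_SUPPLY.get(regimen, set())

print(matches_expected("monthly", 28), matches_expected("monthly", 90))  # True False
```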

  5. The Impact of Radiation Treatment Time on Survival in Patients With Head and Neck Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaikh, Talha; Handorf, Elizabeth A.; Murphy, Colin T.

    Purpose: To assess the impact of radiation treatment time (RTT) in head and neck cancers on overall survival (OS) in the era of chemoradiation. Methods and Materials: Patients with diagnoses of tongue, hypopharynx, larynx, oropharynx, or tonsil cancer were identified by use of the National Cancer Database. RTT was defined as date of first radiation treatment to date of last radiation treatment. In the definitive setting, prolonged RTT was defined as >56 days, accelerated RTT was defined as <47 days, and standard RTT was defined as 47 to 56 days. In the postoperative setting, prolonged RTT was defined as >49 days, accelerated RTT was defined as <40 days, and standard RTT was defined as 40 to 49 days. We used χ² tests to identify predictors of RTT. The Kaplan-Meier method was used to compare OS among groups. A Cox proportional hazards model was used for OS analysis in patients with known comorbidity status. Results: 19,531 patients were included; 12,987 (67%) had a standard RTT, 4,369 (22%) had an accelerated RTT, and 2,165 (11%) had a prolonged RTT. On multivariable analysis, accelerated RTT (hazard ratio [HR] 0.84; 95% confidence interval [CI] 0.73-0.97) was associated with an improved OS, and prolonged RTT (HR 1.25; 95% CI 1.14-1.37) was associated with a worse OS relative to standard RTT. When the 9,200 (47%) patients receiving definitive concurrent chemoradiation were examined, prolonged RTT (HR 1.29; 95% CI 1.11-1.50) was associated with a worse OS relative to standard RTT, whereas there was no significant association between accelerated RTT and OS (HR 0.76; 95% CI 0.57-1.01). Conclusion: Prolonged RTT is associated with worse OS in patients receiving radiation therapy for head and neck cancer, even in the setting of chemoradiation. Expeditious completion of radiation should continue to be a quality metric for the management of head and neck malignancies.
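
    The RTT cutoffs above are straightforward to encode. A minimal sketch (the function name and interface are my own, not from the study):

```python
def classify_rtt(days: int, postoperative: bool = False) -> str:
    """Classify radiation treatment time (RTT) using the study's cutoffs:
    definitive: <47 accelerated, 47-56 standard, >56 prolonged;
    postoperative: <40 accelerated, 40-49 standard, >49 prolonged."""
    lo, hi = (40, 49) if postoperative else (47, 56)
    if days < lo:
        return "accelerated"
    if days > hi:
        return "prolonged"
    return "standard"

print(classify_rtt(50), classify_rtt(50, postoperative=True))  # standard prolonged
```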

  6. Monthly fluctuations of insomnia symptoms in a population-based sample.

    PubMed

    Morin, Charles M; Leblanc, M; Ivers, H; Bélanger, L; Mérette, Chantal; Savard, Josée; Jarrin, Denise C

    2014-02-01

    To document the monthly changes in sleep/insomnia status over a 12-month period; to determine the optimal time intervals to reliably capture new incident cases and recurrent episodes of insomnia and the likelihood of its persistence over time. Participants were 100 adults (mean age = 49.9 years; 66% women) randomly selected from a larger population-based sample enrolled in a longitudinal study of the natural history of insomnia. They completed 12 monthly telephone interviews assessing insomnia, use of sleep aids, stressful life events, and physical and mental health problems in the previous month. A total of 1,125 interviews of a potential 1,200 were completed. Based on data collected at each assessment, participants were classified into one of three subgroups: good sleepers, insomnia symptoms, and insomnia syndrome. At baseline, 42 participants were classified as good sleepers, 34 met criteria for insomnia symptoms, and 24 for an insomnia syndrome. There were significant fluctuations of insomnia over time, with 66% of the participants changing sleep status at least once over the 12 monthly assessments (51.5% for good sleepers, 59.5% for insomnia syndrome, and 93.4% for insomnia symptoms). Changes of status were more frequent among individuals with insomnia symptoms at baseline (mean = 3.46, SD = 2.36) than among those initially classified as good sleepers (mean = 2.12, SD = 2.70). Among the subgroup with insomnia symptoms at baseline, 88.3% reported improved sleep (i.e., became good sleepers) at least once over the 12 monthly assessments compared to 27.7% whose sleep worsened (i.e., met criteria for an insomnia syndrome) during the same period. Among individuals classified as good sleepers at baseline, risks of developing insomnia symptoms and syndrome over the subsequent months were, respectively, 48.6% and 14.5%. 
Monthly assessment over an interval of 6 months was found most reliable to estimate incidence rates, while an interval of 3 months proved the most reliable for defining chronic insomnia. Monthly assessment of insomnia and sleep patterns revealed significant variability over the course of a 12-month period. These findings highlight the importance for future epidemiological studies of conducting repeated assessment at shorter than the typical yearly interval in order to reliably capture the natural course of insomnia over time.

  7. Approximation Set of the Interval Set in Pawlak's Space

    PubMed Central

    Wang, Jin; Wang, Guoyin

    2014-01-01

    The interval set is a special set, which describes the uncertainty of an uncertain concept or set Z with its two crisp boundaries, named the upper-bound set and the lower-bound set. In this paper, the concept of similarity degree between two interval sets is defined first, and then the similarity degrees between an interval set and its two approximations (i.e., the upper approximation set R̄(Z) and the lower approximation set R_(Z)) are presented. The disadvantages of using the upper approximation set R̄(Z) or the lower approximation set R_(Z) as the approximation set of the uncertain set (uncertain concept) Z are analyzed, and a new method for finding a better approximation set of the interval set Z is proposed. The conclusion that the approximation set R0.5(Z) is an optimal approximation set of the interval set Z is drawn and proved. The change rules of R0.5(Z) under different binary relations are analyzed in detail. Finally, a kind of crisp approximation set of the interval set Z is constructed. We hope this research work will promote the development of both the interval set model and granular computing theory. PMID:25177721
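
    The abstract does not reproduce the paper's similarity-degree formula; a common cardinality-ratio definition (Jaccard-style) serves as a stand-in to illustrate the idea of comparing an interval set's two crisp boundaries:

```python
def similarity_degree(x: set, y: set) -> float:
    """Cardinality-ratio similarity |X ∩ Y| / |X ∪ Y| between two crisp sets.
    A stand-in for the paper's definition, which the abstract does not give."""
    if not x and not y:
        return 1.0
    return len(x & y) / len(x | y)

# An interval set is bracketed by its lower-bound and upper-bound sets:
lower_bound = {1, 2, 3}
upper_bound = {1, 2, 3, 4, 5}
print(similarity_degree(lower_bound, upper_bound))  # 0.6
```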

  8. Eliminating livelock by assigning the same priority state to each message that is input into a flushable routing system during N time intervals

    DOEpatents

    Faber, V.

    1994-11-29

    Livelock-free message routing is provided in a network of interconnected nodes that is flushable in time T. An input message processor generates sequences of at least N time intervals, each of duration T. An input register provides for receiving and holding each input message, where the message is assigned a priority state p during an nth one of the N time intervals. At each of the network nodes a message processor reads the assigned priority state and awards priority to messages with priority state (p-1) during an nth time interval and to messages with priority state p during an (n+1)th time interval. The messages that are awarded priority are output on an output path toward the addressed output message processor. Thus, no message remains in the network for a time longer than T. 4 figures.

  9. Eliminating livelock by assigning the same priority state to each message that is inputted into a flushable routing system during N time intervals

    DOEpatents

    Faber, Vance

    1994-01-01

    Livelock-free message routing is provided in a network of interconnected nodes that is flushable in time T. An input message processor generates sequences of at least N time intervals, each of duration T. An input register provides for receiving and holding each input message, where the message is assigned a priority state p during an nth one of the N time intervals. At each of the network nodes a message processor reads the assigned priority state and awards priority to messages with priority state (p-1) during an nth time interval and to messages with priority state p during an (n+1)th time interval. The messages that are awarded priority are output on an output path toward the addressed output message processor. Thus, no message remains in the network for a time longer than T.
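
    The priority-awarding rule in the claims can be sketched as a toy scheduler; the data shapes below are illustrative, not taken from the patent:

```python
def award_priority(messages, interval):
    """Toy version of the routing rule: during time interval n, priority is
    awarded to messages holding priority state n-1, then to messages holding
    state n (which would otherwise win during interval n+1). Messages are
    (name, priority_state) pairs; returns them in forwarding order, so no
    message waits more than one extra interval."""
    older = [m for m in messages if m[1] == interval - 1]
    current = [m for m in messages if m[1] == interval]
    return older + current

queue = [("a", 2), ("b", 1), ("c", 2)]
print(award_priority(queue, 2))  # [('b', 1), ('a', 2), ('c', 2)]
```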

  10. A New Approach for Accuracy Improvement of Pulsed LIDAR Remote Sensing Data

    NASA Astrophysics Data System (ADS)

    Zhou, G.; Huang, W.; Zhou, X.; He, C.; Li, X.; Huang, Y.; Zhang, L.

    2018-05-01

    In remote sensing applications, the accuracy of time interval measurement is one of the most important parameters affecting the quality of pulsed lidar data. Traditional time interval measurement techniques suffer from low measurement accuracy, complicated circuit structure, and large error, so high-precision time interval data cannot be obtained with them. In order to obtain higher-quality remote sensing cloud images based on time interval measurement, a more accurate time interval measurement method is proposed. The method is based on charging a capacitor while sampling the change in capacitor voltage. First, an approximate model of the capacitor voltage curve over the pulse's time of flight is fitted to the sampled data. Then, the whole charging time is obtained from the fitting function. This method requires only a high-speed A/D sampler and a capacitor in a single receiving channel, and the collected data are processed directly in the main control unit. The experimental results show that the proposed method achieves an error of less than 3 ps. Compared with other methods, the proposed method improves the time interval accuracy by at least 20%.
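
    The fitting step can be illustrated with the standard RC charging model V(t) = V0·(1 − exp(−t/RC)); the paper's exact model and fitting procedure are not given in the abstract, so this is a sketch under that assumption, using a linearized least-squares fit:

```python
import math

def fit_rc(times, voltages, v0):
    """Least-squares fit of the RC charging model V(t) = v0*(1 - exp(-t/RC))
    via the linearization ln(1 - V/v0) = -t/RC (slope through the origin).
    Returns the estimated RC time constant, from which the whole charging
    time can be extrapolated."""
    y = [math.log(1 - v / v0) for v in voltages]
    slope = sum(t * yi for t, yi in zip(times, y)) / sum(t * t for t in times)
    return -1.0 / slope

# Synthetic samples from a known RC = 2.0 (illustrative units)
rc_true, v0 = 2.0, 5.0
ts = [0.5, 1.0, 1.5, 2.0, 2.5]
vs = [v0 * (1 - math.exp(-t / rc_true)) for t in ts]
rc_est = fit_rc(ts, vs, v0)
print(round(rc_est, 3))  # 2.0
```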

  11. Recurrence time statistics for finite size intervals

    NASA Astrophysics Data System (ADS)

    Altmann, Eduardo G.; da Silva, Elton C.; Caldas, Iberê L.

    2004-12-01

    We investigate the statistics of recurrences to finite size intervals for chaotic dynamical systems. We find that the typical distribution presents an exponential decay for almost all recurrence times except for a few short times affected by a kind of memory effect. We interpret this effect as being related to the unstable periodic orbits inside the interval. Although it is restricted to a few short times, it changes the whole distribution of recurrences. We show that for systems with strong mixing properties the exponential decay converges to Poissonian statistics when the width of the interval goes to zero. However, we caution that special attention to the size of the interval is required in order to guarantee that the short-time memory effect is negligible when one is interested in numerically or experimentally calculated Poincaré recurrence time statistics.
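
    Such recurrence-time statistics are easy to reproduce numerically. The sketch below (my own illustration, not the authors' code) collects return times to an interval under the fully chaotic logistic map and compares the mean return time with the Kac-lemma prediction 1/μ(I) for the map's invariant measure:

```python
import math

def recurrence_times(a, b, n_iter=200_000, x=0.2):
    """Recurrence times to the interval [a, b] under the chaotic logistic
    map x -> 4x(1-x): gaps (in iterations) between successive visits."""
    times, last = [], None
    for i in range(n_iter):
        x = 4.0 * x * (1.0 - x)
        if a <= x <= b:
            if last is not None:
                times.append(i - last)
            last = i
    return times

# Invariant measure of [a, b] for the logistic map with density 1/(pi*sqrt(x(1-x)));
# Kac's lemma predicts a mean return time of 1/mu.
a, b = 0.3, 0.4
mu = (2 / math.pi) * (math.asin(math.sqrt(b)) - math.asin(math.sqrt(a)))
times = recurrence_times(a, b)
print(round(sum(times) / len(times), 2), round(1 / mu, 2))
```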

  12. Transmissibility of the Ice Bucket Challenge among globally influential celebrities: retrospective cohort study.

    PubMed

    Ni, Michael Y; Chan, Brandford H Y; Leung, Gabriel M; Lau, Eric H Y; Pang, Herbert

    2014-12-16

    To estimate the transmissibility of the Ice Bucket Challenge among globally influential celebrities and to identify associated risk factors. Retrospective cohort study. Social media (YouTube, Facebook, Twitter, Instagram). David Beckham, Cristiano Ronaldo, Benedict Cumberbatch, Stephen Hawking, Mark Zuckerberg, Oprah Winfrey, Homer Simpson, and Kermit the Frog were defined as index cases. We included contacts up to the fifth generation seeded from each index case and enrolled a total of 99 participants into the cohort. Basic reproduction number R0, serial interval of accepting the challenge, and odds ratios of associated risk factors based on fully observed nomination chains; R0 is a measure of transmissibility and is defined as the number of secondary cases generated by a single index in a fully susceptible population. Serial interval is the duration between onset of a primary case and onset of its secondary cases. Based on the empirical data and assuming a branching process we estimated a mean R0 of 1.43 (95% confidence interval 1.23 to 1.65) and a mean serial interval for accepting the challenge of 2.1 days (median 1 day). Higher log (base 10) net worth of the participants was positively associated with transmission (odds ratio 1.63, 95% confidence interval 1.06 to 2.50), adjusting for age and sex. The Ice Bucket Challenge was moderately transmissible among a group of globally influential celebrities, in the range of the pandemic A/H1N1 2009 influenza. The challenge was more likely to be spread by richer celebrities, perhaps in part reflecting greater social influence. © Ni et al 2014.
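
    Under the branching-process assumption, a naive R0 estimate is just the mean number of accepting nominees per observed case. A sketch with made-up nomination data (the study's actual chains are not reproduced here):

```python
from statistics import mean

def estimate_r0(chains):
    """Naive branching-process estimate of R0: the mean number of direct
    accepting nominees per observed case. `chains` maps each participant
    to the list of their nominees who accepted the challenge."""
    return mean(len(nominees) for nominees in chains.values())

# Illustrative toy chain, not the study's data
toy_chains = {
    "index": ["a", "b", "c"],
    "a": ["d"],
    "b": [],
    "c": ["e", "f"],
    "d": [], "e": [], "f": [],
}
print(round(estimate_r0(toy_chains), 2))  # 0.86
```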

  13. Floods on small streams in North Carolina, probable magnitude and frequency

    USGS Publications Warehouse

    Hinson, Herbert G.

    1965-01-01

    The magnitude and frequency of floods are defined regionally for small streams (drainage area, 1 to 150 sq mi) in North Carolina. Composite frequency curves for each of two regions relate the magnitude of the annual flood, in ratio to the mean annual flood, to recurrence intervals of 1.1 to 50 years. In North Carolina, the mean annual flood (Q2.33) is related to drainage area (A) by the following equation: Q2.33 = G·A^0.66, where G, the geographic factor, is the product of a statewide coefficient (US) times a correction which reflects differences in basin characteristics. Isograms of the G factor covering the State are presented.
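
    The regional equation above translates directly into code; the value of G below is illustrative only, since real values are read from the report's isogram maps:

```python
def mean_annual_flood(area_sq_mi: float, g: float) -> float:
    """Q2.33 = G * A**0.66 from the report: mean annual flood for drainage
    area A (square miles) and geographic factor G (from the isogram maps)."""
    return g * area_sq_mi ** 0.66

# Illustrative G; a larger basin yields a larger mean annual flood.
print(round(mean_annual_flood(100.0, 50.0)))
```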

  14. Fast transfer of crossmodal time interval training.

    PubMed

    Chen, Lihan; Zhou, Xiaolin

    2014-06-01

    Sub-second time perception is essential for many important sensory and perceptual tasks including speech perception, motion perception, motor coordination, and crossmodal interaction. This study investigates to what extent the ability to discriminate sub-second time intervals acquired in one sensory modality can be transferred to another modality. To this end, we used perceptual classification of visual Ternus display (Ternus in Psychol Forsch 7:81-136, 1926) to implicitly measure participants' interval perception in pre- and posttests and implemented an intra- or crossmodal sub-second interval discrimination training protocol in between the tests. The Ternus display elicited either an "element motion" or a "group motion" percept, depending on the inter-stimulus interval between the two visual frames. The training protocol required participants to explicitly compare the interval length between a pair of visual, auditory, or tactile stimuli with a standard interval or to implicitly perceive the length of visual, auditory, or tactile intervals by completing a non-temporal task (discrimination of auditory pitch or tactile intensity). Results showed that after fast explicit training of interval discrimination (about 15 min), participants improved their ability to categorize the visual apparent motion in Ternus displays, although the training benefits were mild for visual timing training. However, the benefits were absent for implicit interval training protocols. This finding suggests that the timing ability in one modality can be rapidly acquired and used to improve timing-related performance in another modality and that there may exist a central clock for sub-second temporal processing, although modality-specific perceptual properties may constrain the functioning of this clock.

  15. Reliability and validity of pressure and temporal parameters recorded using a pressure-sensitive insole during running.

    PubMed

    Mann, Robert; Malisoux, Laurent; Brunner, Roman; Gette, Paul; Urhausen, Axel; Statham, Andrew; Meijer, Kenneth; Theisen, Daniel

    2014-01-01

    Running biomechanics has received increasing interest in recent literature on running-related injuries, calling for new, portable methods for large-scale measurements. Our aims were to define running strike pattern based on the output of a new pressure-sensitive measurement device, the Runalyser, and to test its validity regarding temporal parameters describing running gait. Furthermore, the reliability of the Runalyser measurements was evaluated, as well as its ability to discriminate different running styles. Thirty-one healthy participants (30.3 ± 7.4 years, 1.78 ± 0.10 m and 74.1 ± 12.1 kg) were involved in the different study parts. Eleven participants were instructed to use a rearfoot (RFS), midfoot (MFS) and forefoot (FFS) strike pattern while running on a treadmill. Strike pattern was subsequently defined using a linear regression (R² = 0.89) between foot strike angle, as determined by motion analysis (1000 Hz), and strike index (SI, point of contact on the foot sole, as a percentage of foot sole length), as measured by the Runalyser. MFS was defined by the 95% confidence interval of the intercept (SI = 43.9-49.1%). High agreement (overall mean difference 1.2%) was found between stance time, flight time, stride time and duty factor as determined by the Runalyser and a force-measuring treadmill (n = 16 participants). Measurements of the two devices were highly correlated (R ≥ 0.80) and not significantly different. Test-retest intra-class correlation coefficients for all parameters were ≥ 0.94 (n = 14 participants). Significant differences (p < 0.05) between FFS, RFS and habitual running were detected regarding SI, stance time and stride time (n = 24 participants). The Runalyser is suitable for, and easily applicable in, large-scale studies on running biomechanics. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Spatial-temporal analysis of the risk of Rift Valley Fever in Kenya

    NASA Astrophysics Data System (ADS)

    Bett, B.; Omolo, A.; Hansen, F.; Notenbaert, A.; Kemp, S.

    2012-04-01

    Historical data on Rift Valley Fever (RVF) outbreaks in Kenya covering the period 1951-2010 were analyzed using a logistic regression model to identify factors associated with RVF occurrence. The analysis used a division, an administrative unit below a district, as the unit of analysis. The infection status of each division was defined on a monthly time scale and used as the dependent variable. Predictors investigated include: monthly precipitation (minimum, maximum and total), normalized difference vegetation index, altitude, agro-ecological zone, presence of game, livestock and human population densities, the number of times a division has had an outbreak before, and the time interval in months between successive outbreaks (used as a proxy for immunity). Both univariable and multivariable analyses were conducted. The models incorporated an auto-regressive correlation matrix to account for clustering of observations in time, while dummy variables were fitted in the multivariable model to account for spatial relatedness/topology between divisions. This last procedure was followed because the risk of RVF occurring in a given division is expected to increase when its immediate neighbor becomes infected. Functional relationships between the continuous predictors and the outcome variable were assessed to ensure that the linearity assumption was met. Deviance and leverage residuals were also generated from the final model and used to evaluate its goodness of fit. Descriptive analyses indicate that a total of 91 divisions in 42 districts (of the original 69 districts in place by 1999) reported RVF outbreaks at least once over the period. The mean interval between outbreaks was determined to be about 43 months. Factors positively associated with RVF occurrence include increased precipitation, a longer interval since the previous outbreak, and the number of times a division has been infected or reported an outbreak.
The model will be validated and used for developing an RVF forecasting system. This forecasting system can then be used with the existing regional RVF prediction tools such as EMPRES-i to downscale RVF risk predictions to country-specific scales and subsequently link them with decision support systems. The ultimate aim is to increase the capacity of the national institutions to formulate appropriate RVF mitigation measures.

  17. Impact of Rehabilitation on Outcomes in Patients With Ischemic Stroke: A Nationwide Retrospective Cohort Study in Japan.

    PubMed

    Yagi, Maiko; Yasunaga, Hideo; Matsui, Hiroki; Morita, Kojiro; Fushimi, Kiyohide; Fujimoto, Masashi; Koyama, Teruyuki; Fujitani, Junko

    2017-03-01

    We aimed to examine the concurrent effects of timing and intensity of rehabilitation on improving activities of daily living (ADL) among patients with ischemic stroke. Using the Japanese Diagnosis Procedure Combination inpatient database, we retrospectively analyzed consecutive patients with ischemic stroke at admission who received rehabilitation (n=100 719) from April 2012 to March 2014. Early rehabilitation was defined as that starting within 3 days after admission. The average rehabilitation intensity per day was calculated as the total units of rehabilitation during hospitalization divided by the length of hospital stay. A multivariable logistic regression analysis with multiple imputation and an instrumental variable analysis were performed to examine the association of early and intensive rehabilitation with the proportion of improved ADL score. The proportion of improved ADL score was higher in the early and intensive rehabilitation group. The multivariable logistic regression analysis showed that significant improvements in ADL were observed for early rehabilitation (odds ratio: 1.08; 95% confidence interval: 1.04-1.13; P <0.01) and intensive rehabilitation of >5.0 U/d (odds ratio: 1.87; 95% confidence interval: 1.69-2.07; P <0.01). The instrumental variable analysis showed that an increased proportion of improved ADL was associated with early rehabilitation (risk difference: 2.8%; 95% confidence interval: 2.0-3.4%; P <0.001) and intensive rehabilitation (risk difference: 5.6%; 95% confidence interval: 4.6-6.6%; P <0.001). The present results suggested that early and intensive rehabilitation improved ADL during hospitalization in patients with ischemic stroke. © 2017 American Heart Association, Inc.

  18. Neuronal periodicity detection as a basis for the perception of consonance: a mathematical model of tonal fusion.

    PubMed

    Ebeling, Martin

    2008-10-01

    A mathematical model is presented here to explain the sensation of consonance and dissonance on the basis of neuronal coding and the properties of a neuronal periodicity detection mechanism. This mathematical model makes use of physiological data from a neuronal model of periodicity analysis in the midbrain, whose operation can be described mathematically by autocorrelation functions with regard to time windows. Musical intervals produce regular firing patterns in the auditory nerve that depend on the vibration ratio of the two tones. The mathematical model makes it possible to define a measure for the degree of these regularities for each vibration ratio. It turns out that this measure value is in line with the degree of tonal fusion as described by Stumpf [Tonpsychologie (Psychology of Tones) (Knuf, Hilversum), reprinted 1965]. This finding makes it probable that tonal fusion is a consequence of certain properties of the neuronal periodicity detection mechanism. Together with strong roughness resulting from interval tones with fundamentals close together or close to the octave, this neuronal mechanism may be regarded as the basis of consonance and dissonance.

  19. Dual ant colony operational modal analysis parameter estimation method

    NASA Astrophysics Data System (ADS)

    Sitarz, Piotr; Powałka, Bartosz

    2018-01-01

    Operational Modal Analysis (OMA) is a common technique used to examine the dynamic properties of a system. Contrary to experimental modal analysis, the input signal is generated in object ambient environment. Operational modal analysis mainly aims at determining the number of pole pairs and at estimating modal parameters. Many methods are used for parameter identification. Some methods operate in time while others in frequency domain. The former use correlation functions, the latter - spectral density functions. However, while some methods require the user to select poles from a stabilisation diagram, others try to automate the selection process. Dual ant colony operational modal analysis parameter estimation method (DAC-OMA) presents a new approach to the problem, avoiding issues involved in the stabilisation diagram. The presented algorithm is fully automated. It uses deterministic methods to define the interval of estimated parameters, thus reducing the problem to optimisation task which is conducted with dedicated software based on ant colony optimisation algorithm. The combination of deterministic methods restricting parameter intervals and artificial intelligence yields very good results, also for closely spaced modes and significantly varied mode shapes within one measurement point.

  20. 48 CFR 1816.405-271 - Base fee.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    .... For other contracts, such as those for hardware or software development, the procurement officer may... during the life of the contract at defined intervals on a provisional basis. If the final award fee...

  1. 48 CFR 1816.405-271 - Base fee.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    .... For other contracts, such as those for hardware or software development, the procurement officer may... during the life of the contract at defined intervals on a provisional basis. If the final award fee...

  2. 48 CFR 1816.405-271 - Base fee.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    .... For other contracts, such as those for hardware or software development, the procurement officer may... during the life of the contract at defined intervals on a provisional basis. If the final award fee...

  3. 48 CFR 1816.405-271 - Base fee.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    .... For other contracts, such as those for hardware or software development, the procurement officer may... during the life of the contract at defined intervals on a provisional basis. If the final award fee...

  4. 48 CFR 1816.405-271 - Base fee.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    .... For other contracts, such as those for hardware or software development, the procurement officer may... during the life of the contract at defined intervals on a provisional basis. If the final award fee...

  5. Fitting Curves by Fractal Interpolation: AN Application to the Quantification of Cognitive Brain Processes

    NASA Astrophysics Data System (ADS)

    Navascues, M. A.; Sebastian, M. V.

    Fractal interpolants of Barnsley are defined for any continuous function defined on a real compact interval. The uniform distance between the function and its approximant is bounded in terms of the vertical scale factors. As a general result, the density of the affine fractal interpolation functions of Barnsley in the space of continuous functions in a compact interval is proved. A method of data fitting by means of fractal interpolation functions is proposed. The procedure is applied to the quantification of cognitive brain processes. In particular, the increase in the complexity of the electroencephalographic signal produced by the execution of a test of visual attention is studied. The experiment was performed on two types of children: a healthy control group and a set of children diagnosed with an attention deficit disorder.

  6. Dose-dense and less dose-intense Total Therapy 5 for gene expression profiling-defined high-risk multiple myeloma.

    PubMed

    Jethava, Y; Mitchell, A; Zangari, M; Waheed, S; Schinke, C; Thanendrarajan, S; Sawyer, J; Alapat, D; Tian, E; Stein, C; Khan, R; Heuck, C J; Petty, N; Avery, D; Steward, D; Smith, R; Bailey, C; Epstein, J; Yaccoby, S; Hoering, A; Crowley, J; Morgan, G; Barlogie, B; van Rhee, F

    2016-07-29

    Multiple myeloma (MM) is a heterogeneous disease with high-risk patients progressing rapidly despite treatment. Various definitions of high-risk MM are used and we reported that gene expression profile (GEP)-defined high risk was a major predictor of relapse. In spite of our best efforts, the majority of GEP70 high-risk patients relapse and we have noted higher relapse rates during drug-free intervals. This prompted us to explore the concept of less intense drug dosing with shorter intervals between courses with the aim of preventing inter-course relapse. Here we report the outcome of the Total Therapy 5 trial, where this concept was tested. This regimen effectively reduced early mortality and relapse but failed to improve progression-free survival and overall survival due to relapse early during maintenance.

  7. Place avoidance learning and memory in a jumping spider.

    PubMed

    Peckmezian, Tina; Taylor, Phillip W

    2017-03-01

    Using a conditioned passive place avoidance paradigm, we investigated the relative importance of three experimental parameters on learning and memory in a salticid, Servaea incana. Spiders encountered an aversive electric shock stimulus paired with one side of a two-sided arena. Our three parameters were the ecological relevance of the visual stimulus, the time interval between trials and the time interval before test. We paired electric shock with either a black or white visual stimulus, as prior studies in our laboratory have demonstrated that S. incana prefer dark 'safe' regions to light ones. We additionally evaluated the influence of two temporal features (time interval between trials and time interval before test) on learning and memory. Spiders exposed to the shock stimulus learned to associate shock with the visual background cue, but the extent to which they did so was dependent on which visual stimulus was present and the time interval between trials. Spiders trained with a long interval between trials (24 h) maintained performance throughout training, whereas spiders trained with a short interval (10 min) maintained performance only when the safe side was black. When the safe side was white, performance worsened steadily over time. There was no difference between spiders tested after a short (10 min) or long (24 h) interval before test. These results suggest that the ecological relevance of the stimuli used and the duration of the interval between trials can influence learning and memory in jumping spiders.

  8. Estimation of postmortem interval based on colony development time for Anoplolepsis longipes (Hymenoptera: Formicidae).

    PubMed

    Goff, M L; Win, B H

    1997-11-01

    The postmortem interval for a set of human remains discovered inside a metal tool box was estimated using the development time required for a stratiomyid fly (Diptera: Stratiomyidae), Hermetia illucens, in combination with the time required to establish a colony of the ant Anoplolepsis longipes (Hymenoptera: Formicidae) capable of producing alate (winged) reproductives. This analysis resulted in a postmortem interval estimate of 14+ months, with a period of 14-18 months being the most probable time interval. The victim had been missing for approximately 18 months.

  9. TIME-INTERVAL MEASURING DEVICE

    DOEpatents

    Gross, J.E.

    1958-04-15

    An electronic device for measuring the time interval between two control pulses is presented. The device incorporates part of a previous approach to time measurement, in that pulses from a constant-frequency oscillator are counted during the interval between the control pulses. To reduce the possible counting error caused by the counter gating circuit operating at various points in the pulse cycle, the device provides means for successively delaying the pulses by a fraction of the pulse period, so that a final delay of one period is obtained, and means for counting the pulses before and after each stage of delay during the time interval. A plurality of totals is thereby obtained, which may be averaged and multiplied by the pulse period to obtain an accurate time-interval measurement.
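
    The averaging scheme described above can be illustrated numerically. Below is a minimal, idealized sketch (the 1 MHz oscillator, the gate phase, and the eight delay stages are assumptions for illustration, not details taken from the patent): a single gated count quantizes the interval to within one pulse period, while averaging the counts over successively delayed copies of the pulse train recovers a finer estimate.

    ```python
    import math

    def count_pulses(interval, period, phase):
        """Number of oscillator pulse edges gated in [phase, phase + interval)."""
        return math.floor((phase + interval) / period) - math.floor(phase / period)

    def averaged_measurement(interval, period, n_delays=8, phase=0.37e-6):
        """Count once per delay stage, each stage shifting the pulse train by
        period/n_delays, then average the totals and scale by the period."""
        totals = [count_pulses(interval, period, phase + k * period / n_delays)
                  for k in range(n_delays)]
        return (sum(totals) / n_delays) * period

    T = 1e-6              # 1 MHz reference oscillator (hypothetical)
    D = 123.4e-6          # unknown interval between the control pulses
    single = count_pulses(D, T, 0.37e-6) * T   # one count: off by up to one period
    print(single, averaged_measurement(D, T))  # averaged estimate is closer to D
    ```

    With eight delay stages, the residual quantization error drops to a fraction of the pulse period, which is the effect the patent's averaging step exploits.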

  10. Monitoring molecular interactions using photon arrival-time interval distribution analysis

    DOEpatents

    Laurence, Ted A [Livermore, CA; Weiss, Shimon [Los Angeles, CA

    2009-10-06

    A method for analyzing/monitoring the properties of species that are labeled with fluorophores. A detector is used to detect photons emitted from species that are labeled with one or more fluorophores and located in a confocal detection volume. The arrival time of each of the photons is determined. The interval of time between various photon pairs is then determined to provide photon pair intervals. The number of photons that have arrival times within the photon pair intervals is also determined. The photon pair intervals are then used in combination with the corresponding counts of intervening photons to analyze properties and interactions of the molecules including brightness, concentration, coincidence and transit time. The method can be used for analyzing single photon streams and multiple photon streams.
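
    A toy version of the pair-interval bookkeeping the abstract describes might look as follows (the function name and the cap on intervening photons are illustrative assumptions; the patented analysis is more involved):

    ```python
    def photon_pair_intervals(arrival_times, max_intervening=2):
        """For every ordered photon pair separated by at most `max_intervening`
        other photons, return (time interval, intervening photon count)."""
        pairs = []
        n = len(arrival_times)
        for i in range(n):
            for j in range(i + 1, min(i + max_intervening + 2, n)):
                pairs.append((arrival_times[j] - arrival_times[i], j - i - 1))
        return pairs

    # synthetic arrival times (e.g. in microseconds)
    print(photon_pair_intervals([0.0, 1.0, 1.5, 4.0], max_intervening=1))
    # -> [(1.0, 0), (1.5, 1), (0.5, 0), (3.0, 1), (2.5, 0)]
    ```

    Histogramming these (interval, intervening-count) pairs is what gives the arrival-time interval distribution its sensitivity to brightness and coincidence.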

  11. [Estimation of the atrioventricular time interval by pulse Doppler in the normal fetal heart].

    PubMed

    Hamela-Olkowska, Anita; Dangel, Joanna

    2009-08-01

    To assess normative values of the fetal atrioventricular (AV) time interval by pulse-wave Doppler methods on the 5-chamber view. Fetal echocardiography exams were performed using an Acuson Sequoia 512 in 140 singleton fetuses at 18 to 40 weeks of gestation with sinus rhythm and normal cardiac and extracardiac anatomy. Pulsed Doppler derived AV intervals were measured from the left ventricular inflow/outflow view using a transabdominal convex 3.5-6 MHz probe. The values of the AV time interval ranged from 100 to 150 ms (mean 123 +/- 11.2). The AV interval was negatively correlated with heart rate (p<0.001). Fetal heart rate decreased as gestation progressed (p<0.001). Thus, the AV intervals increased with the age of gestation (p=0.007). However, within the same fetal heart rate subgroup there was no relation between AV intervals and gestational age. Therefore, the AV intervals showed only heart rate dependence. The 95th percentiles of AV intervals according to FHR ranged from 135 to 148 ms. 1. The AV interval duration was negatively correlated with heart rate. 2. Measurement of the AV time interval is easy to perform and has good reproducibility. It may be used for fetal heart block screening in anti-Ro and anti-La positive pregnancies. 3. Normative values established in the study may help obstetricians in assessing fetal abnormalities of AV conduction.

  12. Transforming parts of a differential equations system to difference equations as a method for run-time savings in NONMEM.

    PubMed

    Petersson, K J F; Friberg, L E; Karlsson, M O

    2010-10-01

    Computer models of biological systems grow more complex as computing power increases. Often these models are defined as differential equations for which no analytical solutions exist. Numerical integration is used to approximate the solution; this can be computationally intensive, time-consuming, and a large proportion of the total computer runtime. The performance of different integration methods depends on the mathematical properties of the differential equations system at hand. In this paper we investigate the possibility of runtime gains by calculating parts of, or the whole, differential equations system at given time intervals, outside of the differential equations solver. This approach was tested on nine models defined as differential equations, with the goal of reducing runtime while maintaining model fit, based on the objective function value. The software used was NONMEM. In four models the computational runtime was successfully reduced (by 59-96%). The differences in parameter estimates, compared to using only the differential equations solver, were less than 12% for all fixed-effects parameters. For the variance parameters, estimates were within 10% for the majority of the parameters. Population and individual predictions were similar, and the differences in OFV were between 1 and -14 units. When computational runtime seriously affects the usefulness of a model, we suggest evaluating this approach for repetitive elements of model building and evaluation, such as covariate inclusions or bootstraps.
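
    The idea of refreshing slowly varying parts of a system only at given time intervals, outside the solver, can be sketched with a toy model (the decay equation, the Euler integrator, and the 0.5-time-unit refresh interval are assumptions for illustration; NONMEM's solvers are far more sophisticated):

    ```python
    import math

    def integrate(t_end, dt, update_every=None):
        """Euler-integrate dy/dt = -k(t) * y with k(t) = 0.5 + 0.1*sin(t).
        When update_every is given, the slowly varying k(t) is refreshed only
        at those time intervals (held piecewise constant in between),
        mimicking moving slow terms outside the differential equation solver."""
        y, t = 1.0, 0.0
        k = 0.5 + 0.1 * math.sin(0.0)
        next_update = 0.0
        while t < t_end:
            if update_every is None:
                k = 0.5 + 0.1 * math.sin(t)      # evaluated every step
            elif t >= next_update:
                k = 0.5 + 0.1 * math.sin(t)      # evaluated only at intervals
                next_update += update_every
            y += dt * (-k * y)
            t += dt
        return y

    full = integrate(10.0, 1e-3)                      # k refreshed every step
    coarse = integrate(10.0, 1e-3, update_every=0.5)  # k refreshed at intervals
    print(full, coarse)   # nearly identical results, fewer k evaluations
    ```

    When the refreshed terms vary slowly relative to the refresh interval, the loss of accuracy is small, which is the trade-off the paper evaluates against the objective function value.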

  13. [The effect of arthroscopic debridement and conservative treatment in knee osteoarthritis: Results of a 5-year follow-up and literature review].

    PubMed

    Spahn, G; Klinger, H M; Hofmann, G O

    2013-12-01

    This study aimed to compare the effects of arthroscopic joint debridement over a 5-year period in a clearly defined patient population (only grade III knee osteoarthritis, history < 2 years). A total of 96 patients (50 male and 46 female) underwent arthroscopic knee debridement for knee OA. The main criteria for inclusion were osteoarthritis grade III (Kellgren-Lawrence score) and a maximal history of 2 years. The subjective complaints and the knee-related quality of life were estimated by the KOOS (knee injury and osteoarthritis outcome score). The score increased significantly within the first to third year after the operation. After this interval the mean score declined, but after 5 years the KOOS remained higher than at baseline. Patients who had undergone conservative treatment at baseline had a significantly different KOOS than patients in the arthroscopy group. Over time, patients in the arthroscopy group had fewer complaints than patients in the conservative treatment group. In both groups, the results decreased over time. A total of 17 patients (17.2 %) needed conversion to total endoprosthetic replacement. The mean time interval between the index operation and conversion was 56.6 (95 % CI 54.4 - 58.4) months. In middle stages of knee OA, arthroscopic joint debridement can effectively reduce subjective complaints. Because this treatment does not stop the process of OA, the improvements decrease over time. © Georg Thieme Verlag KG Stuttgart · New York.

  14. Bayesian analyses of time-interval data for environmental radiation monitoring.

    PubMed

    Luo, Peng; Sharp, Julia L; DeVol, Timothy A

    2013-01-01

    Time-interval (time difference between two consecutive pulses) analysis based on the principles of Bayesian inference was investigated for online radiation monitoring. Using experimental and simulated data, Bayesian analysis of time-interval data [Bayesian (ti)] was compared with Bayesian and conventional frequentist analyses of counts in a fixed count time [Bayesian (cnt) and the single interval test (SIT), respectively]. The performances of the three methods were compared in terms of average run length (ARL) and detection probability for several simulated detection scenarios. Experimental data were acquired with a DGF-4C system in list mode. Simulated data were obtained using Monte Carlo techniques to obtain a random sampling of the Poisson distribution. All statistical algorithms were developed using the R Project for statistical computing. Bayesian analysis of time-interval information provided a detection probability similar to that of Bayesian analysis of count information, but the authors were able to make a decision with fewer pulses at relatively higher radiation levels. In addition, for cases with a very short presence of the source (< count time), time-interval information is more sensitive for detecting a change than count information, since the source contribution is averaged with the background over the entire count time. The relationships of the source time, change points, and modifications to the Bayesian approach for increasing detection probability are presented.
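
    The core of a Bayesian test on time-interval data can be sketched as a likelihood-ratio update over exponential inter-arrival times (a simplified illustration with assumed background and source rates, not the authors' full algorithm):

    ```python
    import math
    import random

    def bayes_interval_posterior(intervals, lam_bkg, lam_src, prior=0.5):
        """Posterior P(source present) from inter-pulse time intervals,
        assuming Poisson pulse arrivals, i.e. exponential inter-arrival
        times with rate lam_bkg (background only) or lam_bkg + lam_src."""
        lam1 = lam_bkg + lam_src
        log_lr = 0.0
        for t in intervals:
            # log ratio of exponential densities lam * exp(-lam * t)
            log_lr += (math.log(lam1) - lam1 * t) - (math.log(lam_bkg) - lam_bkg * t)
        odds = prior / (1.0 - prior) * math.exp(log_lr)
        return odds / (1.0 + odds)

    random.seed(1)
    lam_b, lam_s = 1.0, 4.0     # hypothetical background and source rates (cps)
    hot = [random.expovariate(lam_b + lam_s) for _ in range(30)]   # source present
    cold = [random.expovariate(lam_b) for _ in range(30)]          # background only
    print(bayes_interval_posterior(hot, lam_b, lam_s))    # near 1
    print(bayes_interval_posterior(cold, lam_b, lam_s))   # near 0
    ```

    Because each pulse updates the posterior immediately, a decision can be reached after few pulses at elevated rates, which matches the advantage over fixed-count-time analysis reported in the abstract.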

  15. What role do plain radiographs have in assessing the skeletally immature acromioclavicular joint?

    PubMed

    Lee, Seung Yeol; Kwon, Soon-Sun; Chung, Chin Youb; Lee, Kyoung Min; Park, Moon Seok

    2014-01-01

    Because of incomplete ossification of the coracoid process and acromion, acromioclavicular joint configuration in the skeletally immature patient differs from that of adults. Although comparison to radiographic standards for this joint is critical in the evaluation of acromioclavicular joint injuries, these standards are not well defined for children or adolescents. We therefore sought to determine (1) the reliability of numerous radiographic measurements of the skeletally immature acromioclavicular joint, including the vertical and shortest coracoclavicular interval, and the acromioclavicular joint offset; (2) the timing of ossification of the acromion and coracoid in males and females; and (3) the differences in the values of these radiographic measurements based on age and sex. This study was based on a total of 485 subjects, 8 to 18 years old, who underwent conventional AP view radiographs of both shoulders. The 485 subjects were included to assess normal configuration around the acromioclavicular joint and 466 of these subjects were evaluated for comparison between both sides. The vertical and shortest coracoclavicular interval, coracoclavicular clavicle width ratio, acromioclavicular joint offset, and difference of the coracoclavicular interval of both sides were measured. A reliability test was conducted before obtaining the main measurements. The relationship of measurements with sex, age, and stage of ossification was evaluated. The vertical and shortest coracoclavicular interval showed excellent reliability (intraclass correlation coefficient [ICC], 0.918 and 0.934). The acromioclavicular joint offset showed low reliability (ICC, 0.543). The ossification centers of the acromion and the coracoid processes appeared and fused earlier in females than in males. 
The vertical coracoclavicular interval, which was not affected by partial ossification of the coracoid process, was less than 11 mm in the 90% quantile of total subjects in males and 10 mm in the 90% quantile in females. The difference of the vertical coracoclavicular interval of both sides was less than 50% in 436 of 466 (93.4%) patients. The vertical coracoclavicular interval was the best parameter to assess acromioclavicular joint dislocation in skeletally immature patients. Comparison of both sides of the acromioclavicular joint could help to inform physicians in predicting the need for additional evaluations.

  16. Fixed-interval matching-to-sample: intermatching time and intermatching error runs

    PubMed Central

    Nelson, Thomas D.

    1978-01-01

    Four pigeons were trained on a matching-to-sample task in which reinforcers followed either the first matching response (fixed interval) or the fifth matching response (tandem fixed-interval fixed-ratio) that occurred 80 seconds or longer after the last reinforcement. Relative frequency distributions of the matching-to-sample responses that concluded intermatching times and runs of mismatches (intermatching error runs) were computed for the final matching responses directly followed by grain access and also for the three matching responses immediately preceding the final match. Comparison of these two distributions showed that the fixed-interval schedule arranged for the preferential reinforcement of matches concluding relatively extended intermatching times and runs of mismatches. Differences in matching accuracy and rate during the fixed interval, compared to the tandem fixed-interval fixed-ratio, suggested that reinforcers following matches concluding various intermatching times and runs of mismatches influenced the rate and accuracy of the last few matches before grain access, but did not control rate and accuracy throughout the entire fixed-interval period. PMID:16812032

  17. Improved confidence intervals when the sample is counted an integer times longer than the blank.

    PubMed

    Potter, William Edward; Strzelczyk, Jadwiga Jodi

    2011-05-01

    Past computer solutions for confidence intervals in paired counting are extended to the case where the ratio of the sample count time to the blank count time is taken to be an integer, IRR. Previously, confidence intervals have been named Neyman-Pearson confidence intervals; more correctly they should have been named Neyman confidence intervals or simply confidence intervals. The technique utilized mimics a technique used by Pearson and Hartley to tabulate confidence intervals for the expected value of the discrete Poisson and Binomial distributions. The blank count and the contribution of the sample to the gross count are assumed to be Poisson distributed. The expected value of the blank count, in the sample count time, is assumed known. The net count, OC, is taken to be the gross count minus the product of IRR with the blank count. The probability density function (PDF) for the net count can be determined in a straightforward manner.
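
    The PDF of the net count described here can be tabulated directly by summing over blank counts; a sketch under the stated assumptions (blank and sample contributions independently Poisson, IRR an integer; the expected counts chosen below are illustrative):

    ```python
    import math

    def log_poisson_pmf(k, mu):
        """log P(K = k) for K ~ Poisson(mu), via lgamma to avoid overflow."""
        return k * math.log(mu) - mu - math.lgamma(k + 1)

    def net_count_pmf(oc, mu_blank, mu_src, irr, bmax=200):
        """P(OC = oc) where OC = gross - IRR * blank, with
        blank ~ Poisson(mu_blank) and gross ~ Poisson(irr*mu_blank + mu_src),
        independent: sum over blank counts b with gross g = oc + irr*b >= 0."""
        mu_gross = irr * mu_blank + mu_src
        prob = 0.0
        for b in range(bmax):
            g = oc + irr * b
            if g >= 0:
                prob += math.exp(log_poisson_pmf(b, mu_blank)
                                 + log_poisson_pmf(g, mu_gross))
        return prob

    # illustrative expected counts: blank 3.0, sample contribution 5.0, IRR = 2
    pmf = [net_count_pmf(oc, mu_blank=3.0, mu_src=5.0, irr=2)
           for oc in range(-80, 120)]
    print(sum(pmf))   # probabilities over the attainable net values sum to ~1
    ```

    With this PDF in hand, confidence limits for the sample contribution follow by the same inversion used for the tabulated Poisson intervals the abstract cites.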

  18. The Time Is Up: Compression of Visual Time Interval Estimations of Bimodal Aperiodic Patterns

    PubMed Central

    Duarte, Fabiola; Lemus, Luis

    2017-01-01

    The ability to estimate time intervals subserves many of our behaviors and perceptual experiences. However, it is not clear how aperiodic (AP) stimuli affect our perception of time intervals across sensory modalities. To address this question, we evaluated the human capacity to discriminate between two acoustic (A), visual (V) or audiovisual (AV) time intervals of trains of scattered pulses. We first measured the periodicity of those stimuli and then sought for correlations with the accuracy and reaction times (RTs) of the subjects. We found that, for all time intervals tested in our experiment, the visual system consistently perceived AP stimuli as being shorter than the periodic (P) ones. In contrast, such a compression phenomenon was not apparent during auditory trials. Our conclusions are: first, the subjects exposed to P stimuli are more likely to measure their durations accurately. Second, perceptual time compression occurs for AP visual stimuli. Lastly, AV discriminations are determined by A dominance rather than by AV enhancement. PMID:28848406

  19. Evaluation of cardiovascular demands of game play and practice in women's ice hockey.

    PubMed

    Spiering, Barry A; Wilson, Meredith H; Judelson, Daniel A; Rundell, Kenneth W

    2003-05-01

    Preparation for the physical demands of competition often involves game simulation during practice. This paradigm is thought to promote physiological adaptations that enhance maximal performance. However, a mismatch between practice intensity and actual competition intensity may not provide adequate training to achieve optimal game-play fitness. The purpose of this study was to evaluate the effectiveness of practice in meeting the cardiovascular demands of a women's ice hockey game. Heart rate (HR) data from 11 U.S. National Women's Ice Hockey team members were collected (5-second intervals) during a game and a typical practice session. Data were normalized to individual HRmax determined during VO2max testing. Working time was defined as a game shift or practice-working interval. Mean working HR was greater during the game than the practice, 90 +/- 2% and 76 +/- 3% of HRmax, respectively (p < 0.05). Mean percent session time (game or practice) >90% HRmax was also greater during the game than the practice, 10.5 +/- 4.1% and 5.6 +/- 3.5% (p < 0.05), respectively. Mean session HR, percent time >80% HRmax, and mean resting HR were not different between game and practice (68 +/- 7% vs. 69 +/- 5%, 23.2 +/- 5.3% vs. 26.1 +/- 9.2%, and 59 +/- 8% vs. 56 +/- 5%, respectively). Elite women hockey players experience significantly greater cardiovascular load during game play than during practice. This mismatch in cardiovascular demand may prevent players from achieving "game shape," thus affecting competition play.
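
    The percent-of-session-time metrics above reduce to threshold counting over the equally spaced 5-second HR samples; a minimal sketch with hypothetical numbers:

    ```python
    def percent_time_above(hr_samples, hr_max, frac):
        """Percent of equally spaced HR samples (e.g. every 5 s) exceeding
        a given fraction of the individual's HRmax."""
        n_above = sum(1 for hr in hr_samples if hr > frac * hr_max)
        return 100.0 * n_above / len(hr_samples)

    # hypothetical 5-s samples (beats/min) from one shift, HRmax = 200
    shift = [150, 172, 185, 190, 160, 178]
    print(percent_time_above(shift, hr_max=200, frac=0.9))
    # 2 of 6 samples exceed 90% HRmax -> ~33.3
    ```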

  20. Epidemiology meets econometrics: using time-series analysis to observe the impact of bed occupancy rates on the spread of multidrug-resistant bacteria.

    PubMed

    Kaier, K; Meyer, E; Dettenkofer, M; Frank, U

    2010-10-01

    Two multivariate time-series analyses were carried out to identify the impact of bed occupancy rates, turnover intervals and the average length of hospital stay on the spread of multidrug-resistant bacteria in a teaching hospital. Epidemiological data on the incidences of meticillin-resistant Staphylococcus aureus (MRSA) and extended-spectrum beta-lactamase (ESBL)-producing bacteria were collected. Time-series of bed occupancy rates, turnover intervals and the average length of stay were tested for inclusion in the models as independent variables. Incidence was defined as nosocomial cases per 1000 patient-days. This included all patients infected or colonised with MRSA/ESBL more than 48h after admission. Between January 2003 and July 2008, a mean incidence of 0.15 nosocomial MRSA cases was identified. ESBL was not included in the surveillance until January 2005. Between January 2005 and July 2008 the mean incidence of nosocomial ESBL was also 0.15 cases per 1000 patient-days. The two multivariate models demonstrate a temporal relationship between bed occupancy rates in general wards and the incidence of nosocomial MRSA and ESBL. Similarly, the temporal relationship between the monthly average length of stay in intensive care units (ICUs) and the incidence of nosocomial MRSA and ESBL was demonstrated. Overcrowding in general wards and long periods of ICU stay were identified as factors influencing the spread of multidrug-resistant bacteria in hospital settings. Copyright 2010 The Hospital Infection Society. Published by Elsevier Ltd. All rights reserved.

  1. Fish debris in sediments of the upwelling zone off central Peru: a late Quaternary record

    NASA Astrophysics Data System (ADS)

    De Vries, Thomas J.; Pearcy, William G.

    1982-01-01

    Scales of the anchoveta were abundantly represented among fish remains preserved in partly laminated marine sediments on the upper continental slope of Peru. Hake scales were less common. Sardine scales occurred only sporadically. Recent accumulation rates of scales indicate that prior to exploitation the anchoveta standing stock off Peru was about five times that of northern anchovy off California. During glacial time, however, clupeoids were less abundant off Peru and were more evenly distributed among sardines and anchoveta. Evidence from fish scales and phytoplankton assemblages suggests that the coastal waters off Peru did not respond to continental glacial and neoglacial advances simply by cooling. High accumulation rates of scales from warm-water fishes and tests of cool-water phytoplankton preceded and succeeded an interval containing low numbers of dominantly warm-water taxa. This interval coincided with the second neoglacial advance (2000 to 2700 y B.P.). Similar but less well-defined warm-water and cool-water assemblages coincided with the third neoglacial advance (200 to 400 y B.P.) and the last glacial retreat. Upwelling intensity probably fluctuated more widely during early and late phases of glacial and neoglacial cooling episodes, accounting for the mix of distinctly warm-water and cool-water assemblages and perhaps for an enhanced productivity. A weakened Intertropical Convergence Zone or strengthened coastal countercurrent may explain the warm-water marine faunas and floras and wet climates on the mainland of Peru inferred by others for neoglacial or glacial time.

  2. Performance-based Physical Functioning and Peripheral Neuropathy in a Population-based Cohort of Women at Midlife

    PubMed Central

    Ylitalo, Kelly R.; Herman, William H.; Harlow, Siobán D.

    2013-01-01

    Peripheral neuropathy is underappreciated as a potential cause of functional limitations. In the present article, we assessed the cross-sectional association between peripheral neuropathy and physical functioning and how the longitudinal association between age and functioning differed by neuropathy status. Physical functioning was measured in 1996–2008 using timed performances on stair-climb, walking, sit-to-stand, and balance tests at the Michigan site of the Study of Women's Health Across the Nation, a population-based cohort study of women at midlife (n = 396). Peripheral neuropathy was measured in 2008 and defined as having an abnormal monofilament test result or 4 or more symptoms. We used linear mixed models to determine whether trajectories of physical functioning differed by prevalent neuropathy status. Overall, 27.8% of the women had neuropathy. Stair-climb time differed by neuropathy status (P = 0.04), and for every 1-year increase in age, women with neuropathy had a 1.82% (95% confidence interval: 1.42, 2.21) increase compared with a 0.95% (95% confidence interval: 0.71, 1.20) increase for women without neuropathy. Sit-to-stand time differed by neuropathy status (P = 0.01), but the rate of change did not differ. No differences between neuropathy groups were observed for the walk test. For some performance-based tasks, poor functioning was maintained or exacerbated for women who had prevalent neuropathy. Peripheral neuropathy may play a role in physical functioning limitations and future disability. PMID:23524038

  3. Daily home gardening improved survival for older people with mobility limitations: an 11-year follow-up study in Taiwan.

    PubMed

    Lêng, Chhian Hūi; Wang, Jung-Der

    2016-01-01

    To test the hypothesis that gardening is beneficial for survival after taking time-dependent comorbidities, mobility, and depression into account in a longitudinal middle-aged (50-64 years) and older (≥65 years) cohort in Taiwan. The cohort contained 5,058 nationally sampled adults ≥50 years old from the Taiwan Longitudinal Study on Aging (1996-2007). Gardening was defined as growing flowers, gardening, or cultivating potted plants for pleasure with five different frequencies. We calculated hazard ratios for the mortality risks of gardening and adjusted the analysis for socioeconomic status, health behaviors and conditions, depression, mobility limitations, and comorbidities. Survival models also examined time-dependent effects and risks in each stratum contingent upon baseline mobility and depression. Sensitivity analyses used imputation methods for missing values. Daily home gardening was associated with a high survival rate (hazard ratio: 0.82; 95% confidence interval: 0.71-0.94). The benefits were robust for those with mobility limitations, but without depression at baseline (hazard ratio: 0.64, 95% confidence interval: 0.48-0.87) when adjusted for time-dependent comorbidities, mobility limitations, and depression. Chronic or relapsed depression weakened the protection of gardening. For those without mobility limitations and not depressed at baseline, gardening had no effect. Sensitivity analyses using different imputation methods yielded similar results and corroborated the hypothesis. Daily gardening for pleasure was associated with reduced mortality for Taiwanese >50 years old with mobility limitations but without depression.

  4. Monitoring that matters

    USGS Publications Warehouse

    Johnson, Douglas H.; Gitzen, Robert A.; Millspaugh, Joshua J.; Cooper, Andrew B.; Licht, Daniel S.

    2012-01-01

    Monitoring is a critically important activity for assessing the status of a system, such as the health of an individual, the balance in one's checking account, profits and losses of a business, the economic activity of a nation, or the size of an animal population. Monitoring is especially vital for evaluating changes in the system associated with specific known impacts occurring to the system. It is also valuable for detecting unanticipated changes in the system and identifying plausible causes of such changes, all in time to take corrective action. Before proceeding, we should define "monitoring." One definition of "monitor" (Microsoft Corporation 2009) is "to check something at regular intervals in order to find out how it is progressing or developing." The key point here is "at regular intervals," suggesting a continuing process. Some definitions do not indicate the repetitive nature of monitoring and are basically synonymous with "observing." Most monitoring, in the strict sense of the word, is intended to persist for long periods of time, perhaps indefinitely or permanently. Similarly, Thompson et al. (1998: 3) referred to the "repeated assessment of status" of something, but noted that the term "monitor" is sometimes used for analogous activities such as collecting baseline information or evaluating projects for either implementation or effectiveness. For their purposes, they restricted the term to involve repeated measurements collected at a specified frequency of time units. Let us adopt that definition, recognizing that repeated measurements imply collecting comparable information on each occasion.

  5. Time estimation by patients with frontal lesions and by Korsakoff amnesics.

    PubMed

    Mimura, M; Kinsbourne, M; O'Connor, M

    2000-07-01

    We studied time estimation in patients with frontal damage (F) and alcoholic Korsakoff (K) patients in order to differentiate between the contributions of working memory and episodic memory to temporal cognition. In Experiment 1, F and K patients estimated time intervals between 10 and 120 s less accurately than matched normal and alcoholic control subjects. F patients were less accurate than K patients at short (< 1 min) time intervals whereas K patients increasingly underestimated durations as intervals grew longer. F patients overestimated short intervals in inverse proportion to their performance on the Wisconsin Card Sorting Test. As intervals grew longer, overestimation yielded to underestimation for F patients. Experiment 2 involved time estimation while counting at a subjective 1/s rate. F patients' subjective tempo, though relatively rapid, did not fully explain their overestimation of short intervals. In Experiment 3, participants produced predetermined time intervals by depressing a mouse key. K patients underproduced longer intervals. F patients produced comparably to normal participants, but were extremely variable. Findings suggest that both working memory and episodic memory play an individual role in temporal cognition. Turnover within a short-term working memory buffer provides a metric for temporal decisions. The depleted working memory that typically attends frontal dysfunction may result in quicker turnover, and this may inflate subjective duration. On the other hand, temporal estimation beyond 30 s requires episodic remembering, and this puts K patients at a disadvantage.

  6. Method and apparatus for assessing cardiovascular risk

    NASA Technical Reports Server (NTRS)

    Albrecht, Paul (Inventor); Bigger, J. Thomas (Inventor); Cohen, Richard J. (Inventor)

    1998-01-01

    The method for assessing risk of an adverse clinical event includes detecting a physiologic signal in the subject and determining from the physiologic signal a sequence of intervals corresponding to time intervals between heart beats. The long-time structure of fluctuations in the intervals over a time period of more than fifteen minutes is analyzed to assess risk of an adverse clinical event. In a preferred embodiment, the physiologic signal is an electrocardiogram and the time period is at least fifteen minutes. A preferred method for analyzing the long-time structure variability in the intervals includes computing the power spectrum and fitting the power spectrum to a power law dependence on frequency over a selected frequency range such as 10^-4 to 10^-2 Hz. Characteristics of the long-time structure fluctuations in the intervals are used to assess risk of an adverse clinical event.
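
    Fitting a power law P(f) ~ f^beta over the selected band amounts to a straight-line fit in log-log coordinates. A sketch on a synthetic 1/f spectrum (the least-squares estimator and the synthetic data are illustrative; the patent covers the full spectral-estimation pipeline from the interbeat intervals):

    ```python
    import math

    def power_law_slope(freqs, powers):
        """Least-squares slope beta of log10(P) versus log10(f), i.e. the
        exponent in P(f) ~ f**beta."""
        xs = [math.log10(f) for f in freqs]
        ys = [math.log10(p) for p in powers]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den = sum((x - mx) ** 2 for x in xs)
        return num / den

    # synthetic spectrum with P(f) = f**-1 over the band named in the patent
    freqs = [10 ** (-4 + 0.02 * i) for i in range(101)]   # 1e-4 .. 1e-2 Hz
    powers = [f ** -1.0 for f in freqs]
    print(power_law_slope(freqs, powers))   # recovers the exponent, ~ -1.0
    ```

    On real interbeat-interval spectra the fitted exponent over this low-frequency band is the risk-stratifying characteristic the claims refer to.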

  7. Automated saliva processing for LC-MS/MS: Improving laboratory efficiency in cortisol and cortisone testing.

    PubMed

    Antonelli, Giorgia; Padoan, Andrea; Artusi, Carlo; Marinova, Mariela; Zaninotto, Martina; Plebani, Mario

    2016-04-01

    The aim of this study was to implement in our routine practice an automated saliva preparation protocol for the quantification of cortisol (F) and cortisone (E) by LC-MS/MS using a liquid handling platform, while maintaining the reference intervals previously defined with the manual preparation. Addition of internal standard solution to saliva samples and calibrators, and SPE on a μ-elution 96-well plate, were performed by the liquid handling platform. After extraction, the eluates were submitted to LC-MS/MS analysis. The only manual steps in the entire process were to transfer the saliva samples into suitable tubes, fit the cap mat, and move the collection plate to the LC autosampler. Transferability of the reference intervals from the manual to the automated procedure was established by Passing-Bablok regression on 120 saliva samples analyzed simultaneously with the two procedures. Calibration curves were linear throughout the selected ranges. Imprecision ranged from 2 to 10%, with recoveries from 95 to 116%. Passing-Bablok regression demonstrated no significant bias. The liquid handling platform translates the manual steps into automated operations, saving hands-on time while maintaining assay reproducibility and ensuring reliability of results, making the protocol implementable in our routine with the previously established reference intervals. Copyright © 2015 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.

  8. An easy-to-use technique to characterize cardiodynamics from first-return maps on ΔRR-intervals

    NASA Astrophysics Data System (ADS)

    Fresnel, Emeline; Yacoub, Emad; Freitas, Ubiratan; Kerfourn, Adrien; Messager, Valérie; Mallet, Eric; Muir, Jean-François; Letellier, Christophe

    2015-08-01

    Heart rate variability analysis using 24-h Holter monitoring is frequently performed to assess the cardiovascular status of a patient. The present retrospective study is based on the beat-to-beat interval variations, or ΔRR, which offer a better view of the underlying structures governing the cardiodynamics than the common RR-intervals. By investigating data for three groups of adults (with normal sinus rhythm, congestive heart failure, and atrial fibrillation, respectively), we showed that the first-return maps built on ΔRR can be classified according to three structures: (i) a moderate central disk, (ii) a reduced central disk with well-defined segments, and (iii) a large triangular shape. These three very different structures can be distinguished by computing a Shannon entropy based on a symbolic dynamics and an asymmetry coefficient, here introduced to quantify the balance between accelerations and decelerations in the cardiac rhythm. The probability P111111 of successive heart beats without large beat-to-beat fluctuations makes it possible to assess the regularity of the cardiodynamics. A characteristic time scale, corresponding to the partition inducing the largest Shannon entropy, was also introduced to quantify the ability of the heart to modulate its rhythm: it was significantly different for the three structures of first-return maps. A blind validation was performed to confirm the reliability of the technique.
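
    The entropy and asymmetry descriptors used above can be illustrated with a toy implementation (the bin width, range, and names are our assumptions; the paper's symbolic dynamics and partition selection are more elaborate):

```python
import numpy as np

def delta_rr_features(rr_ms, bin_ms=25.0, max_ms=100.0):
    """Shannon entropy and asymmetry coefficient of a ΔRR series (a sketch).

    ΔRR values are symbolized into bins of width bin_ms (thresholds are
    our assumption); entropy measures how spread-out the first-return
    map is, and the asymmetry coefficient compares decelerations
    (ΔRR > 0) with accelerations (ΔRR < 0).
    """
    drr = np.diff(np.asarray(rr_ms, dtype=float))
    edges = np.arange(-max_ms, max_ms + bin_ms, bin_ms)
    counts, _ = np.histogram(np.clip(drr, -max_ms, max_ms - 1e-9), bins=edges)
    p = counts[counts > 0] / counts.sum()
    entropy = float(-np.sum(p * np.log2(p)))
    acc = int(np.sum(drr < 0))                 # accelerations
    dec = int(np.sum(drr > 0))                 # decelerations
    asym = (dec - acc) / max(dec + acc, 1)
    return entropy, asym
```

    A perfectly regular rhythm collapses into a single symbol (zero entropy), while a rhythm alternating between two intervals spreads over two symbols (about one bit).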

  9. Weight Cycling and Cancer Incidence in a Large Prospective US Cohort

    PubMed Central

    Stevens, Victoria L.; Jacobs, Eric J.; Patel, Alpa V.; Sun, Juzhong; McCullough, Marjorie L.; Campbell, Peter T.; Gapstur, Susan M.

    2015-01-01

    Weight cycling, which consists of repeated cycles of intentional weight loss and regain, is common among individuals who try to lose weight. Some evidence suggests that weight cycling may affect biological processes that could contribute to carcinogenesis, but whether it is associated with cancer risk is unclear. Using 62,792 men and 69,520 women enrolled in the Cancer Prevention Study II Nutrition Cohort in 1992, we examined the association between weight cycling and cancer incidence. Weight cycles were defined by using baseline questions that asked the number of times ≥10 pounds (4.54 kg) was purposely lost and later regained. Multivariable-adjusted hazard ratios and 95% confidence intervals for all cancer and 15 individual cancers were estimated by using Cox proportional hazards regression. During up to 17 years of follow-up, 15,333 men and 9,984 women developed cancer. Weight cycling was not associated with overall risk of cancer in men (hazard ratio = 0.96, 95% confidence interval: 0.83, 1.11 for ≥20 cycles vs. no weight cycles) or women (hazard ratio = 0.96, 95% confidence interval: 0.86, 1.08) in models that adjusted for body mass index and other covariates. Weight cycling was also not associated with any individual cancer investigated. These results suggest that weight cycling, independent of body weight, is unlikely to influence subsequent cancer risk. PMID:26209523

  10. Gamma Knife Treatment of Growing Vestibular Schwannoma in Norway: A Prospective Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Varughese, Jobin Kotakkathu, E-mail: jobinv@gmail.com; Wentzel-Larsen, Tore; Centre for Child and Adolescent Mental Health, Eastern and Southern Norway, Oslo

    2012-10-01

    Purpose: Gamma Knife radiosurgery (GKRS) has been increasingly used in the treatment of vestibular schwannoma (VS). Very few studies relate tumor control and post-treatment growth rates to pretreatment growth rates. Methods and Materials: We prospectively included 45 consecutive VS patients who were initially treated conservatively and then received GKRS between 2000 and 2007 because of demonstrated tumor growth. Pretreatment and post-treatment tumor volumes were estimated. Patients underwent audiograms, reported their symptoms, and responded to the Short Form General Health Survey (SF-36) questionnaire on each visit. Results: Volume doubling times before and after treatment were 1.36 years (95% confidence interval, 1.14-1.68) and -13.1 years (95% confidence interval, -111.0 to -6.94), respectively. Tumor control, defined as a post-GKRS growth rate ≤0, was achieved in 71.1% of patients, with the highest odds for tumor control among older patients and those with larger tumors. The 5-year retreatment-free survival rate was 93.9% (95% confidence interval, 76.5-98.5). None of the clinical endpoints investigated showed statistically significant changes after GKRS, but improvement was seen in a few SF-36 parameters. Conclusions: GKRS alters the natural course of the tumor by reducing growth. Mathematical models yield poorer tumor control rates than those found by clinical assessment. Symptoms were unaffected by treatment, but quality of life was improved.
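
    The reported volume doubling times follow from an exponential growth model fitted between two volume estimates; a minimal sketch (ours, not the authors' mathematical model):

```python
import math

def volume_doubling_time(v1, v2, years_between):
    """Volume doubling time under exponential growth.

    V(t) = V1 * 2**(t / DT), so DT = t * ln 2 / ln(V2 / V1).
    Negative values (as reported after GKRS) indicate shrinkage.
    """
    return years_between * math.log(2.0) / math.log(v2 / v1)
```

    A tumor that doubles in one year gives DT = 1; one that halves in a year gives DT = -1, which is how a post-treatment figure of -13.1 years encodes slow shrinkage.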

  11. THE FUNDAMENTAL SOLUTIONS FOR MULTI-TERM MODIFIED POWER LAW WAVE EQUATIONS IN A FINITE DOMAIN.

    PubMed

    Jiang, H; Liu, F; Meerschaert, M M; McGough, R J

    2013-01-01

    Fractional partial differential equations with more than one fractional derivative term in time, such as the Szabo wave equation or the power law wave equation, describe important physical phenomena. However, studies of these multi-term time-space or time fractional wave equations are still under development. In this paper, multi-term modified power law wave equations in a finite domain are considered. The multi-term time fractional derivatives are defined in the Caputo sense, with orders belonging to the intervals (1, 2], [2, 3), [2, 4) or (0, n) (n > 2), respectively. Analytical solutions of the multi-term modified power law wave equations are derived. These new techniques are based on Luchko's theorem, a spectral representation of the Laplacian operator, a method of separation of variables, and fractional derivative techniques. The general methods are then applied to the special cases of the Szabo wave equation and the power law wave equation. These methods and techniques can also be extended to other kinds of multi-term time-space fractional models, including those with a fractional Laplacian.

  12. Novel control system of the high-voltage IGBT-switch

    NASA Astrophysics Data System (ADS)

    Ponomarev, A. V.; Mamontov, Y. I.; Gusev, A. I.; Pedos, M. S.

    2017-05-01

    An HV solid-state switch control circuit was developed and tested. The switch is built from series-connected IGBT transistors. The distinctive feature of the circuit is the ability to fine-tune the switching time of every transistor; simultaneous switching balances the dynamic voltage across all switch elements. Each transistor is switched on and off by a separate control board. The on and off commands are sent to the board from the main controller as current pulses of opposite polarity: a positive pulse switches the transistor on, while a negative pulse switches it off. The time interval between the pulses defines how long the switch remains on; the minimum on-time is a few microseconds, while the maximum is not limited. This paper presents the test results for a 4 kV switch prototype, which was used to produce rectangular pulses in the microsecond range into a resistive load. The possibility of generating damped harmonic oscillations was also tested. On the basis of this approach, the positive test results open up the possibility of designing switches for operating voltages of tens of kilovolts.

  13. An ISEE 3 high time resolution study of interplanetary parameter correlations with magnetospheric activity

    NASA Technical Reports Server (NTRS)

    Baker, D. N.; Zwickl, R. D.; Bame, S. J.; Hones, E. W., Jr.; Tsurutani, B. T.; Smith, E. J.; Akasofu, S.-I.

    1983-01-01

    The coupling between the solar wind and geomagnetic disturbances was examined using data from the ISEE-3 spacecraft at an earth-sun libration point together with ground-based data. One-minute data were used to avoid aliasing in determining the internal magnetospheric response to solar wind conditions. Attention was given to the cross-correlations between the geomagnetic index (AE), the total energy dissipation rate (UT), and the solar wind parameters, as well as the spatial and temporal scales on which the magnetosphere reacts to solar wind conditions. It was considered necessary to characterize the physics of the solar wind-magnetosphere coupling in order to define the requirements for a spacecraft like ISEE-3 that could be used as a real-time monitoring system for predicting storms and substorms. The correlations among all but one parameter were lower during disturbance intervals; UT was highly correlated with all parameters during the disturbed times. An intrinsic 25-40 min delay was detected between interplanetary activity and magnetospheric response in quiet times, diminishing to no more than 15 min during disturbed times.

  14. The Partition Intervalometer: A Programmable Underwater Timer for Marking Accumulated Sediment Profiles Collected in Anderson Sediment Traps: Development, Operation, Testing Procedures, and Field Results

    USGS Publications Warehouse

    Rendigs, Richard R.; Anderson, Roger Y.; Xu, Jingping; Davis, Raymond E.; Bergeron, Emile M.

    2009-01-01

    This manual illustrates the development of a programmable instrument designed to deploy a series of wafer-shaped discs (partitions) into the collection tube of a sediment trap in various aquatic environments. These hydrodynamically shaped discs are deployed at discrete time intervals from the Intervalometer and provide markers that delineate time intervals within the sediments that accumulate in the collection tube. The timer and mechanical system are lodged in an air-filled, water-tight pressure housing that is vertically hung within the confines of a cone-shaped sediment trap. The instrumentation has been operationally pressure tested to an equivalent water depth of approximately 1 km. Flaws discovered during extensive laboratory and pressure testing resulted in the implementation of several mechanical modifications (such as a redesign of the rotor and the discs) that improved the operation of the rotor assembly as well as the release of discs through the end cap. These results also identified a preferred azimuth placement of the rotor disc relative to the drop hole of the end cap. In the initial field trial, five sediment traps and coupled Intervalometers were attached to moored arrays and deployed at two sites off the coast of Southern California for approximately 8 months. Each of the instruments released 18 discs at the programmed 10 day intervals, except one unit, which experienced a malfunction after approximately 4 months. Most of the discs oriented in a near-horizontal position upon the surface of the sediment in the collection tubes. Sampling of the sediments for geochemical analyses was improved by these clearly defined markers, which indicated the changes in the flux and nature of sediments accumulated during the deployment period of each sediment trap.

  15. Population Pharmacokinetic-Pharmacodynamic Analysis to Compare the Effect of Moxifloxacin on QT Interval Prolongation Between Healthy Korean and Japanese Subjects.

    PubMed

    Choi, Hyang-Ki; Jung, Jin Ah; Fujita, Tomoe; Amano, Hideki; Ghim, Jong-Lyul; Lee, Dong-Hwan; Tabata, Kenichi; Song, Il-Dae; Maeda, Mika; Kumagai, Yuji; Mendzelevski, Boaz; Shin, Jae-Gook

    2016-12-01

    The goal of this study was to evaluate moxifloxacin-induced QT interval prolongation in healthy male and female Korean and Japanese volunteers to investigate interethnic differences. This multicenter, randomized, double-blind, placebo-controlled, 2-way crossover study was conducted in healthy male and female Korean and Japanese volunteers. In each period, a single 400-mg dose of moxifloxacin or placebo was administered orally under fasting conditions. Triplicate 12-lead ECGs were recorded at defined time points before and up to 24 hours after dosing, and at corresponding time points during baseline. Serial blood sampling was conducted for pharmacokinetic analysis of moxifloxacin. The pharmacokinetic-pharmacodynamic data of the 2 ethnic groups were compared by using a typical analysis based on the intersection-union test and a nonlinear mixed-effects method. A total of 39 healthy subjects (Korean, male: 10, female: 10; Japanese, male: 10, female: 9) were included in the analysis. The concentration-effect analysis revealed that there was no change in slope (and confirmed that the difference was caused by a change in the pharmacokinetic model of moxifloxacin). A 2-compartment model with first-order absorption provided the best description of moxifloxacin's pharmacokinetic parameters. Weight and sex were selected as significant covariates for central volume of distribution and intercompartmental clearance, respectively. An Emax model (E(C) = (Emax · C)/(EC50 + C)) described the QT interval data of this study well. However, ethnicity was not found to be a significant factor in the pharmacokinetic-pharmacodynamic link model. The drug-induced QTc prolongations evaluated using moxifloxacin as the probe did not seem to be significantly different between these Korean and Japanese subjects. ClinicalTrials.gov identifier: NCT01876316. Copyright © 2016 Elsevier HS Journals, Inc. All rights reserved.
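
    The Emax concentration-effect relation quoted in the abstract is straightforward to evaluate (a sketch; the function and parameter names are generic, not the authors'):

```python
def emax_effect(c, emax, ec50):
    """Emax model from the abstract: E(C) = (Emax * C) / (EC50 + C).

    c: plasma concentration; emax: maximal QT effect;
    ec50: concentration producing half of the maximal effect.
    """
    return emax * c / (ec50 + c)
```

    At C = EC50 the effect is exactly half-maximal, and the effect saturates toward Emax as concentration grows.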

  16. The error and bias of supplementing a short, arid climate, rainfall record with regional vs. global frequency analysis

    NASA Astrophysics Data System (ADS)

    Endreny, Theodore A.; Pashiardis, Stelios

    2007-02-01

    Robust and accurate estimates of rainfall frequencies are difficult to make with short, arid-climate rainfall records; here, new regional and global methods were used to supplement such a constrained 15-34 yr record in Cyprus. The impact of supplementing rainfall frequency analysis with the regional and global approaches was measured with relative bias and root mean square error (RMSE) values. The analysis considered 42 stations with 8 time intervals (5-360 min) in four regions delineated by proximity to the sea and elevation. Regional statistical algorithms found that the sites passed discordancy tests of the coefficient of variation, skewness, and kurtosis, while heterogeneity tests revealed the regions were homogeneous to mildly heterogeneous. Rainfall depths were simulated in the regional analysis method 500 times, and goodness-of-fit tests then identified the best candidate distribution as the general extreme value (GEV) Type II. In the regional analysis, the method of L-moments was used to estimate the location, shape, and scale parameters. In the global analysis, the distribution was a priori prescribed as GEV Type II, the shape parameter was a priori set to 0.15, and a time interval term was constructed so that one set of parameters served all time intervals. Relative RMSE values were approximately equal at 10% for the regional and global methods when regions were compared, but when time intervals were compared the global method RMSE showed a parabolic-shaped time interval trend. Relative bias values were also approximately equal for both methods when regions were compared, but again a parabolic-shaped time interval trend was found for the global method. The global method's relative RMSE and bias trended with time interval, which may be caused by fitting a single scale value for all time intervals.
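
    The L-moment fitting step used in the regional analysis can be sketched with Hosking's closed-form approximations (a sketch under the usual GEV/L-moment conventions; not the authors' code):

```python
import math
import numpy as np

def gev_lmoments(sample):
    """Fit GEV location (xi), scale (alpha), and shape (k) by L-moments.

    Uses sample probability-weighted moments and Hosking's (1985)
    approximation for k; k follows Hosking's sign convention, where
    k = 0 reduces to the Gumbel distribution.
    """
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    # probability-weighted moments b0, b1, b2
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    l1, l2 = b0, 2 * b1 - b0
    t3 = (6 * b2 - 6 * b1 + b0) / l2                     # L-skewness
    c = 2.0 / (3.0 + t3) - math.log(2.0) / math.log(3.0)
    k = 7.8590 * c + 2.9554 * c ** 2                     # Hosking's approximation
    alpha = l2 * k / ((1.0 - 2.0 ** (-k)) * math.gamma(1.0 + k))
    xi = l1 + alpha * (math.gamma(1.0 + k) - 1.0) / k
    return xi, alpha, k
```

    On a large Gumbel sample (the k = 0 boundary of the GEV family) the recovered parameters land close to the true location 0, scale 1, and shape 0.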

  17. Medication possession ratio predicts antiretroviral regimens persistence in Peru.

    PubMed

    Salinas, Jorge L; Alave, Jorge L; Westfall, Andrew O; Paz, Jorge; Moran, Fiorella; Carbajal-Gonzalez, Danny; Callacondo, David; Avalos, Odalie; Rodriguez, Martin; Gotuzzo, Eduardo; Echevarria, Juan; Willig, James H

    2013-01-01

    In developing nations, the use of operational parameters (OPs) in the prediction of clinical care represents a missed opportunity to enhance the care process. We modeled the impact of multiple measurements of antiretroviral treatment (ART) adherence on antiretroviral treatment outcomes in Peru. Retrospective cohort study including ART-naïve, non-pregnant adults initiating therapy at Hospital Nacional Cayetano Heredia, Lima, Peru (2006-2010). Three OPs were defined: 1) Medication possession ratio (MPR): days with antiretrovirals dispensed/days on first-line therapy; 2) Laboratory monitoring constancy (LMC): proportion of 6-month intervals with ≥1 viral load or CD4 reported; 3) Clinic visit constancy (CVC): proportion of 6-month intervals with ≥1 clinic visit. Three multivariable Cox proportional hazards (PH) models (one per OP) were fit for (1) duration of first-line ART persistence and (2) time to second-line virologic failure. All models were adjusted for socio-demographic, clinical, and laboratory variables. 856 patients were included in the first-line persistence analyses; median age was 35.6 years [29.4-42.9] and most were male (624; 73%). In multivariable PH models, MPR (per 10% increase, HR=0.66; 95%CI=0.61-0.71) and LMC (per 10% increase, 0.83; 0.71-0.96) were associated with prolonged time on first-line therapies. Among the 79 individuals included in the time to second-line virologic failure analyses, MPR was the only OP independently associated with prolonged time to second-line virologic failure (per 10% increase, 0.88; 0.77-0.99). The capture and utilization of program-level parameters such as MPR can provide valuable insight into patient-level treatment outcomes.
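
    The three operational parameters have direct arithmetic definitions; a minimal sketch (argument names are ours, not the paper's):

```python
def operational_parameters(days_dispensed, days_on_therapy,
                           intervals_with_lab, intervals_with_visit,
                           n_intervals):
    """The study's three operational parameters (argument names assumed).

    MPR: days with antiretrovirals dispensed / days on first-line therapy
    LMC: fraction of 6-month intervals with >=1 viral load or CD4 report
    CVC: fraction of 6-month intervals with >=1 clinic visit
    """
    mpr = days_dispensed / days_on_therapy
    lmc = intervals_with_lab / n_intervals
    cvc = intervals_with_visit / n_intervals
    return mpr, lmc, cvc
```

    For example, a patient dispensed 330 of 365 days on therapy, with labs and a visit in both 6-month intervals, has MPR ≈ 0.90 and LMC = CVC = 1.0.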

  18. Adaptation to short photoperiods augments circadian food anticipatory activity in Siberian hamsters.

    PubMed

    Bradley, Sean P; Prendergast, Brian J

    2014-06-01

    This article is part of a Special Issue "Energy Balance". Both the light-dark cycle and the timing of food intake can entrain circadian rhythms. Entrainment to food is mediated by a food-entrainable circadian oscillator (FEO) that is formally and mechanistically separable from the hypothalamic light-entrainable oscillator. This experiment examined whether seasonal changes in day length affect the function of the FEO in male Siberian hamsters (Phodopus sungorus). Hamsters housed in long (LD; 15 h light/day) or short (SD; 9 h light/day) photoperiods were subjected to a timed-feeding schedule for 10 days, during which food was available only during a 5 h interval of the light phase. Running wheel activity occurring within a 3 h window immediately prior to actual or anticipated food delivery was operationally defined as food anticipatory activity (FAA). After the timed-feeding interval, hamsters were fed ad libitum, and FAA was assessed 2 and 7 days later via probe trials of total food deprivation. During timed feeding, all hamsters exhibited increased FAA, but FAA emerged more rapidly in SD; in probe trials, FAA was greater in magnitude and persistence in SD. Gonadectomy in LD did not induce the SD-like FAA phenotype, indicating that withdrawal of gonadal hormones is not sufficient to mediate the effects of photoperiod on FAA. Entrainment of the circadian system to light markedly affects the functional output of the FEO via gonadal hormone-independent mechanisms. Rapid emergence and persistent expression of FAA in SD may reflect a seasonal adaptation that directs behavior toward sources of nutrition with high temporal precision at times of year when food is scarce.

  19. The Attentional Demand of Automobile Driving Revisited: Occlusion Distance as a Function of Task-Relevant Event Density in Realistic Driving Scenarios.

    PubMed

    Kujala, Tuomo; Mäkelä, Jakke; Kotilainen, Ilkka; Tokkonen, Timo

    2016-02-01

    We studied the utility of occlusion distance as a function of task-relevant event density in realistic traffic scenarios with self-controlled speed. The visual occlusion technique is an established method for assessing visual demands of driving. However, occlusion time is not a highly informative measure of environmental task-relevant event density in self-paced driving scenarios because it partials out the effects of changes in driving speed. Self-determined occlusion times and distances of 97 drivers with varying backgrounds were analyzed in driving scenarios simulating real Finnish suburban and highway traffic environments with self-determined vehicle speed. Occlusion distances varied systematically with the expected environmental demands of the manipulated driving scenarios whereas the distributions of occlusion times remained more static across the scenarios. Systematic individual differences in the preferred occlusion distances were observed. More experienced drivers achieved better lane-keeping accuracy than inexperienced drivers with similar occlusion distances; however, driving experience was unexpectedly not a major factor for the preferred occlusion distances. Occlusion distance seems to be an informative measure for assessing task-relevant event density in realistic traffic scenarios with self-controlled speed. Occlusion time measures the visual demand of driving as the task-relevant event rate in time intervals, whereas occlusion distance measures the experienced task-relevant event density in distance intervals. The findings can be utilized in context-aware distraction mitigation systems, human-automated vehicle interaction, road speed prediction and design, as well as in the testing of visual in-vehicle tasks for inappropriate in-vehicle glancing behaviors in any dynamic traffic scenario for which appropriate individual occlusion distances can be defined. © 2015, Human Factors and Ergonomics Society.

  20. Dissociations between interval timing and intertemporal choice following administration of fluoxetine, cocaine, or methamphetamine

    PubMed Central

    Heilbronner, Sarah R.; Meck, Warren H.

    2014-01-01

    The goal of our study was to characterize the relationship between intertemporal choice and interval timing, including determining how drugs that modulate brain serotonin and dopamine levels influence these two processes. In Experiment 1, rats were tested on a standard 40-s peak-interval procedure following administration of fluoxetine (3, 5, or 8 mg/kg) or vehicle to assess basic effects on interval timing. In Experiment 2, rats were tested in a novel behavioral paradigm intended to simultaneously examine interval timing and impulsivity. Rats performed a variant of the bi-peak procedure using 10-s and 40-s target durations with an additional “defection” lever that provided the possibility of a small, immediate reward. Timing functions remained relatively intact, and ‘patience’ across subjects correlated with peak times, indicating a negative relationship between ‘patience’ and clock speed. We next examined the effects of fluoxetine (5 mg/kg), cocaine (15 mg/kg), or methamphetamine (1 mg/kg) on task performance. Fluoxetine reduced impulsivity as measured by defection time without corresponding changes in clock speed. In contrast, cocaine and methamphetamine both increased impulsivity and clock speed. Thus, variations in timing may mediate intertemporal choice via dopaminergic inputs. However, a separate, serotonergic system can affect intertemporal choice without affecting interval timing directly. PMID:24135569

  1. Nonparametric spirometry reference values for Hispanic Americans.

    PubMed

    Glenn, Nancy L; Brown, Vanessa M

    2011-02-01

    Recent literature cites ethnic origin as a major factor in developing pulmonary function reference values. Extensive studies have established reference values for European and African Americans, but not for Hispanic Americans. The Third National Health and Nutrition Examination Survey defines Hispanic as individuals of Spanish-speaking cultures. While no group was excluded from the target population, sample size requirements only allowed inclusion of individuals who identified themselves as Mexican Americans. This research constructs nonparametric reference value confidence intervals for Hispanic American pulmonary function. The method is applicable to all ethnicities. We use empirical likelihood confidence intervals to establish normal ranges for reference values. Their major advantage is that they are model-free while sharing the asymptotic properties of model-based methods. Statistical comparisons indicate that empirical likelihood interval lengths are comparable to those of normal-theory intervals. Power and efficiency studies agree with previously published theoretical results.

  2. Empirical likelihood-based confidence intervals for the sensitivity of a continuous-scale diagnostic test at a fixed level of specificity.

    PubMed

    Gengsheng Qin; Davis, Angela E; Jing, Bing-Yi

    2011-06-01

    For a continuous-scale diagnostic test, it is often of interest to find the range of the sensitivity of the test at the cut-off that yields a desired specificity. In this article, we first define a profile empirical likelihood ratio for the sensitivity of a continuous-scale diagnostic test and show that its limiting distribution is a scaled chi-square distribution. We then propose two new empirical likelihood-based confidence intervals for the sensitivity of the test at a fixed level of specificity by using the scaled chi-square distribution. Simulation studies are conducted to compare the finite sample performance of the newly proposed intervals with the existing intervals for the sensitivity in terms of coverage probability. A real example is used to illustrate the application of the recommended methods.
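
    The quantity being interval-estimated above can be illustrated with its simple point estimate: the cut-off is set at the controls' specificity quantile, and sensitivity is the fraction of cases above it (a sketch assuming higher test values indicate disease; the empirical likelihood intervals themselves require the profile ratio described in the abstract):

```python
import numpy as np

def sensitivity_at_specificity(controls, cases, specificity=0.90):
    """Point estimate of sensitivity at a fixed specificity.

    The cut-off is the controls' `specificity` quantile; sensitivity is
    the fraction of diseased subjects scoring above it.
    """
    cutoff = np.quantile(np.asarray(controls, dtype=float), specificity)
    return float(np.mean(np.asarray(cases, dtype=float) > cutoff))
```

    The empirical likelihood approach then profiles over distributions consistent with the data to turn this point estimate into a confidence interval with the scaled chi-square calibration.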

  3. Assessment of postoperative pain after reciprocating or rotary NiTi instrumentation of root canals: a randomized, controlled clinical trial.

    PubMed

    Relvas, João Bosko Formigas; Bastos, Mariana Mena Barreto; Marques, André Augusto Franco; Garrido, Angela Delfina Bitencourt; Sponchiado, Emílio Carlos

    2016-11-01

    The aim of this study was to assess postoperative pain in a prospective randomized clinical trial comparing two groups, using the Reciproc® system in one group and the ProTaper® rotary system in the other. The study included 78 male patients, aged 18-64 years (mean age of 26 years), with asymptomatic pulp necrosis in mandibular molar teeth (n = 78). The single-session endodontic treatment was performed by a single operator specialized in endodontics. Mechanical preparation of the root canals was performed using the ProTaper® and Reciproc® instrumentation techniques. Postoperative pain was recorded using a verbal rating scale (VRS) and verbal description with well-defined categories at three time intervals: 24 h, 72 h, and 7 days after the endodontic procedure. The assessment of postoperative pain was recorded as no pain, mild pain, moderate pain, and severe pain or flare-up. Data were analyzed using the nonparametric Mann-Whitney test with the aid of the STATA® software. The incidence of postoperative pain in the ProTaper group (PT) was 17.9% 24 h after the endodontic procedure and 5.1% after 72 h. In the Reciproc group (RP), the incidence was 15.3% after 24 h and 2.5% after 72 h. No patients presented severe pain at the time intervals assessed. No significant difference (p > 0.05) in postoperative pain was found between the ProTaper® and Reciproc® instrumentation techniques during endodontic treatment in this study. According to our findings, the occurrence of postoperative pain was low and similar between the reciprocating and rotary techniques during the time intervals assessed. These results differ from basic laboratory studies, which hold that reciprocating techniques tend to promote more postoperative pain because they extrude more debris.
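
    The Mann-Whitney comparison used for the ordinal pain scores is rank-based; a minimal U-statistic implementation (illustrative, not the STATA® procedure):

```python
import numpy as np

def mann_whitney_u(a, b):
    """U statistic for the first sample, with midranks for ties.

    U ranges from 0 to len(a) * len(b); values near either extreme
    indicate that one group's scores tend to be larger.
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    combined = np.concatenate([a, b])
    order = combined.argsort()
    ranks = np.empty(len(combined))
    ranks[order] = np.arange(1, len(combined) + 1)
    for v in np.unique(combined):            # midranks for tied values
        mask = combined == v
        ranks[mask] = ranks[mask].mean()
    return float(ranks[: len(a)].sum() - len(a) * (len(a) + 1) / 2)
```

    Ordinal categories such as "no pain" through "severe pain" would be coded as integers before ranking; a U near len(a)·len(b)/2 corresponds to the non-significant difference reported here.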

  4. A new optimization tool path planning for 3-axis end milling of free-form surfaces based on efficient machining intervals

    NASA Astrophysics Data System (ADS)

    Vu, Duy-Duc; Monies, Frédéric; Rubio, Walter

    2018-05-01

    A large number of studies, based on 3-axis end milling of free-form surfaces, seek to optimize tool path planning. These approaches try to minimize machining time by reducing the total tool path length while respecting the criterion of the maximum scallop height. Theoretically, the tool path trajectories that remove the most material follow the directions in which the machined width is largest. The free-form surface is often considered as a single machining area, so optimization over the entire surface is limited: it is difficult to define tool trajectories with optimal feed directions that generate the largest machined widths. Another point limiting the ability of previous approaches to reduce machining time is the inadequate choice of tool: researchers generally use a spherical tool on the entire surface, and the gains proposed by the different methods developed with such tools lead to relatively small time savings. This study therefore proposes a new method, using toroidal milling tools, for generating tool paths in different regions of the machined surface. The surface is divided into several regions based on machining intervals. These intervals ensure that the effective radius of the tool, at each cutter-contact point on the surface, is always greater than the radius of the tool in an optimized feed direction. A parallel plane strategy is then used on the sub-surfaces with an optimal specific feed direction for each sub-surface. This method allows the entire surface to be milled with greater efficiency than with a spherical tool. The proposed method is calculated and modeled using Maple software to find the optimal regions and feed directions in each region. The new method is tested on a free-form surface, and a comparison is made with a spherical cutter to show the significant gains obtained with a toroidal milling cutter. Comparisons with CAM software and experimental validations are also performed; the results show the efficiency of the method.

  5. Angiotensin-converting enzyme inhibitors delay the occurrence of renal involvement and are associated with a decreased risk of disease activity in patients with systemic lupus erythematosus--results from LUMINA (LIX): a multiethnic US cohort.

    PubMed

    Durán-Barragán, S; McGwin, G; Vilá, L M; Reveille, J D; Alarcón, G S

    2008-07-01

    To examine whether angiotensin-converting enzyme (ACE) inhibitor use delays the occurrence of renal involvement and decreases the risk of disease activity in SLE patients. SLE patients (Hispanics, African Americans and Caucasians) from the lupus in minorities: nature vs nurture (LUMINA) cohort were studied. Renal involvement was defined as ACR criterion and/or biopsy-proven lupus nephritis. Time-to-renal involvement was examined by univariable and multivariable Cox proportional hazards regression analyses. Disease activity was examined with a case-crossover design and a conditional logistic regression model; in the case intervals, a decrease in the SLAM-R score >or=4 points occurred but not in the control intervals. Eighty of 378 patients (21%) were ACE inhibitor users; 298 (79%) were not. The probability of renal involvement-free survival at 10 yrs was 88.1% for users and 75.4% for non-users (P = 0.0099, log rank test). Users developed persistent proteinuria and/or biopsy-proven lupus nephritis (7.1%) less frequently than non-users (22.9%), P = 0.016. By multivariable Cox proportional hazards regression analyses, ACE inhibitor use [hazard ratio (HR) 0.27; 95% CI 0.09, 0.78] was associated with a longer time-to-renal involvement occurrence, whereas African American ethnicity (HR 3.31; 95% CI 1.44, 7.61) was associated with a shorter one. ACE inhibitor use (54/288 case and 254/1148 control intervals) was also associated with a decreased risk of disease activity (HR 0.56; 95% CI 0.34, 0.94). ACE inhibitor use delays the development of renal involvement and is associated with a decreased risk of disease activity in SLE; corroboration of these findings in other lupus cohorts is desirable before practice recommendations are formulated.
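    The renal-involvement-free survival probabilities quoted above are Kaplan–Meier estimates compared by the log-rank test. As an illustration only (the follow-up times and event indicators below are invented, not LUMINA data), a minimal Kaplan–Meier estimator in pure Python:

    ```python
    def kaplan_meier(times, events):
        """Kaplan-Meier survival estimate.
        times: follow-up times; events: 1 = renal involvement, 0 = censored."""
        data = sorted(zip(times, events))
        surv, curve = 1.0, []
        i = 0
        while i < len(data):
            t = data[i][0]
            d = sum(e for tt, e in data if tt == t)   # events at time t
            n = sum(1 for tt, _ in data if tt >= t)   # at risk just before t
            if d:
                surv *= 1.0 - d / n
                curve.append((t, surv))
            i += sum(1 for tt, _ in data if tt == t)  # skip all rows tied at t
        return curve

    # Hypothetical follow-up (years) for a handful of patients
    times  = [2, 3, 3, 5, 6, 8, 10, 10, 10]
    events = [1, 1, 0, 1, 0, 1,  0,  0,  0]
    for t, s in kaplan_meier(times, events):
        print(f"t={t:>2}: S(t)={s:.3f}")
    ```

    Each event time multiplies the running survival by (1 − d/n); censored observations only shrink the risk set, which is how the method handles the unequal follow-up typical of cohorts like this one.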

  6. Characterization of Cardiac Time Intervals in Healthy Bonnet Macaques (Macaca radiata) by Using an Electronic Stethoscope

    PubMed Central

    Kamran, Haroon; Salciccioli, Louis; Pushilin, Sergei; Kumar, Paraag; Carter, John; Kuo, John; Novotney, Carol; Lazar, Jason M

    2011-01-01

    Nonhuman primates are used frequently in cardiovascular research. Cardiac time intervals derived by phonocardiography have long been used to assess left ventricular function. Electronic stethoscopes are simple low-cost systems that display heart sound signals. We assessed the use of an electronic stethoscope to measure cardiac time intervals in 48 healthy bonnet macaques (age, 8 ± 5 y) based on recorded heart sounds. Technically adequate recordings were obtained from all animals and required 1.5 ± 1.3 min. The following cardiac time intervals were determined by simultaneously recording acoustic and single-lead electrocardiographic data: electromechanical activation time (QS1), electromechanical systole (QS2), the time interval between the first and second heart sounds (S1S2), and the time interval between the second and first sounds (S2S1). QS2 was correlated with heart rate, mean arterial pressure, diastolic blood pressure, and left ventricular ejection time determined by using echocardiography. S1S2 correlated with heart rate, mean arterial pressure, diastolic blood pressure, left ventricular ejection time, and age. S2S1 correlated with heart rate, mean arterial pressure, diastolic blood pressure, systolic blood pressure, and left ventricular ejection time. QS1 did not correlate with any anthropometric or echocardiographic parameter. The ratio S1S2/S2S1 correlated with systolic blood pressure. On multivariate analyses, heart rate was the only independent predictor of QS2, S1S2, and S2S1. In conclusion, determination of cardiac time intervals is feasible and reproducible by using an electronic stethoscope in nonhuman primates. Heart rate is a major determinant of QS2, S1S2, and S2S1 but not QS1; regression equations for reference values for cardiac time intervals in bonnet macaques are provided. PMID:21439218

  7. Hypertensive Disorders in Pregnancy and the Risk of Subsequent Cardiovascular Disease.

    PubMed

    Grandi, Sonia M; Vallée-Pouliot, Karine; Reynier, Pauline; Eberg, Maria; Platt, Robert W; Arel, Roxane; Basso, Olga; Filion, Kristian B

    2017-09-01

    Hypertensive disorders in pregnancy (HDP) have been shown to predict later risk of cardiovascular disease (CVD). However, previous studies have not accounted for subsequent pregnancies and their complications, which are potential confounders and intermediates of this association. A cohort of 146 748 women with a first pregnancy was constructed using the Clinical Practice Research Datalink. HDP was defined using diagnostic codes, elevated blood pressure readings, or new use of an anti-hypertensive drug between 18 weeks' gestation and 6 weeks post-partum. The study outcomes were incident CVD and hypertension. Marginal structural Cox models (MSM) were used to account for time-varying confounders and intermediates. Time-fixed exposure defined at the first pregnancy was used in secondary analyses. A total of 997 women were diagnosed with incident CVD, and 6812 women were diagnosed with hypertension or received a new anti-hypertensive medication during the follow-up period. Compared with women without HDP, those with HDP had a substantially higher rate of CVD (hazard ratio (HR) 2.2, 95% confidence interval (CI) 1.7, 2.7). In women with HDP, the rate of hypertension was five times that of women without HDP (HR 5.6, 95% CI 5.1, 6.3). With overlapping 95% CIs, the time-fixed analysis and the MSM produced consistent results for both outcomes. Women with HDP are at increased risk of developing subsequent CVD and hypertension. The similar estimates obtained with the MSM and the time-fixed analysis suggest that subsequent pregnancies do not confound the association between a first episode of HDP and later CVD. © 2017 John Wiley & Sons Ltd.

  8. Increased mean time from end of surgery to operating room exit in a historical cohort of cases with prolonged time to extubation.

    PubMed

    Dexter, Franklin; Epstein, Richard H

    2013-12-01

    Prolonged time to extubation has been defined as the occurrence of a ≥ 15-minute interval from the end of surgery to removal of the tracheal tube. We quantified the increases in the mean times from end of surgery to exit from the OR associated with prolonged extubations and tested whether the increases were economically important (≥ 5 minutes). Anesthesia information management system data from 1 tertiary hospital were collected from November 2005 through December 2012 (i.e., sample sizes were N = 22 sequential quarters). Cases were excluded in which the patient's trachea was not intubated or extubated while physically in the operating room (OR). For each combination of stratification variable (below) and quarter, the mean time from end of surgery to OR exit was calculated for the extubations that were not prolonged and for those that were prolonged. Results are reported as mean ± SEM, with "at least" denoting the lower 95% confidence interval. The mean times from end of surgery to OR exit were at least 12.6 minutes longer for prolonged extubations when calculated with stratification by duration of surgery and prone or other positioning (13.0 ± 0.1 minutes), P < 0.0001 compared to 5 minutes (i.e., times were substantively long economically). The mean times were at least 11.7 minutes longer when calculated stratified by anesthesia procedure code (12.4 ± 0.4, P < 0.0001) and at least 11.3 minutes longer when calculated stratified by surgeon (12.4 ± 0.6, P < 0.0001). We recommend that anesthesia providers document the times of extubations and monitor the incidence of prolonged extubations as an economic measure. This would be especially important for providers at facilities with many ORs that have at least 8 hours of cases and turnovers.
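    The reporting convention above (mean ± SEM, with "at least" denoting the lower 95% confidence limit) can be sketched as follows. The per-quarter differences are invented, and the normal-approximation multiplier 1.96 is an assumption; the paper may have used a t-quantile:

    ```python
    import math

    def mean_sem_lower(xs, z=1.96):
        """Return mean, standard error of the mean, and the lower 95%
        confidence limit under a normal approximation."""
        n = len(xs)
        m = sum(xs) / n
        var = sum((x - m) ** 2 for x in xs) / (n - 1)   # sample variance
        sem = math.sqrt(var / n)
        return m, sem, m - z * sem

    # Hypothetical per-quarter mean differences (minutes) between prolonged
    # and non-prolonged extubations
    diffs = [12.1, 13.4, 12.8, 13.9, 12.5, 13.2, 13.0, 12.7]
    m, sem, lo = mean_sem_lower(diffs)
    print(f"mean {m:.1f} +/- {sem:.2f} min, at least {lo:.1f} min")
    ```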

  9. The early and long-term outcomes of completion pneumonectomy: report of 56 cases.

    PubMed

    Pan, Xufeng; Fu, Shijie; Shi, Jianxin; Yang, Jun; Zhao, Heng

    2014-09-01

    The aim of this study was to analyse the early and long-term results of completion pneumonectomy (CP). We retrospectively reviewed consecutive patients who underwent CP at the Shanghai Chest Hospital. Fifty-six CPs were performed between January 2003 and July 2013. There were 45 conventional CP (CCP) and 11 rescue CP (RCP) cases. CCP was defined as resection of the remaining lung because of the occurrence of new lesions in patients with previous lung resection. RCP was defined as resection of the remaining lung because of severe complication after primary lung surgery. The mortality and morbidity rates of CCP were 4.4 and 33.3%, respectively. For CCP, the morbidity was significantly higher in benign cases than in malignant cases (80.0 vs 27.5%, P = 0.04). The mortality and morbidity rates of RCP were 27.3 and 90.9%, respectively. For RCP, advanced age (P = 0.046) and preoperative mechanical ventilation (P = 0.03) were related to higher postoperative mortality. The overall 5-year survival rate was 80% for benign cases, whereas for lung malignancy cases, it was 30%. Survival varied (median 60.0 vs 35.0 vs 10.0 months, I vs II vs III, P < 0.01) for different TNM stages and was better for a time interval (between primary surgery and occurrence of lesion) of >2 years (median 60.0 vs 18.0 months, P < 0.01). CP is a high-risk operation, especially RCP. Advanced age and mechanical ventilation before the operation were related to higher mortality in RCP. CCP for benign disease was related to higher postoperative risk, but with good survival. For lung malignancy, survival was better for a time interval (between primary surgery and occurrence of lesion) of >2 years. © The Author 2014. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.

  10. Meeting physical activity guidelines and the risk of incident knee osteoarthritis: a population-based prospective cohort study.

    PubMed

    Barbour, K E; Hootman, J M; Helmick, C G; Murphy, L B; Theis, Kristina A; Schwartz, T A; Kalsbeek, W D; Renner, J B; Jordan, J M

    2014-01-01

    Knee osteoarthritis (OA) is a leading cause of disability and joint pain. Although other risk factors of knee OA have been identified, how physical activity affects incident knee OA remains unclear. Using data from the first (1999-2004) and second (2005-2010) followup periods of the Johnston County Osteoarthritis Project study, we tested the association between meeting physical activity guidelines and incident knee outcomes among 1,522 adults ages ≥45 years. The median followup time was 6.5 years (range 4.0-10.2 years). Physical activity at baseline (moderate-equivalent physical activity minutes/week) was calculated using the Minnesota Leisure Time Physical Activity questionnaire. Incident knee radiographic OA (ROA) was defined as the development of Kellgren/Lawrence grade ≥2 in a knee at followup. Incident knee symptomatic ROA (sROA) was defined as the development of ROA and symptoms in at least 1 knee at followup. Weibull regression modeling was used to estimate hazard ratios (HRs) and 95% confidence intervals (95% CIs) for interval-censored data. In multivariable models, meeting the 2008 Department of Health and Human Services (HHS) physical activity guidelines (≥150 minutes/week) was not significantly associated with ROA (HR 1.20 [95% CI 0.92-1.56]) or sROA (HR 1.24 [95% CI 0.87-1.76]). Adults in the highest level (≥300 minutes/week) of physical activity had a higher risk of knee ROA and sROA compared with inactive (0 to <10 minutes/week) participants; however, these associations were not statistically significant (HR 1.62 [95% CI 0.97-2.68] and HR 1.42 [95% CI 0.76-2.65], respectively). Meeting the HHS physical activity guidelines was not associated with incident knee ROA or sROA in a cohort of middle-aged and older adults. Copyright © 2014 by the American College of Rheumatology.

  11. Sexual Functioning Among Endometrial Cancer Patients Treated With Adjuvant High-Dose-Rate Intra-Vaginal Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Damast, Shari, E-mail: shari.damast@yale.edu; Alektiar, Kaled M.; Goldfarb, Shari

    Purpose: We used the Female Sexual Function Index (FSFI) to investigate the prevalence of sexual dysfunction (SD) and factors associated with diminished sexual functioning in early stage endometrial cancer (EC) patients treated with simple hysterectomy and adjuvant brachytherapy. Methods and Materials: A cohort of 104 patients followed in a radiation oncology clinic completed questionnaires to quantify current levels of sexual functioning. The time interval between hysterectomy and questionnaire completion ranged from <6 months to >5 years. Multivariate regression was performed using the FSFI as a continuous variable (score range, 1.2-35.4). SD was defined as an FSFI score of <26, based on the published validation study. Results: SD was reported by 81% of respondents. The mean (± standard deviation) domain scores in order of highest-to-lowest functioning were: satisfaction, 2.9 (±2.0); orgasm, 2.5 (±2.4); desire, 2.4 (±1.3); arousal, 2.2 (±2.0); dryness, 2.1 (±2.1); and pain, 1.9 (±2.3). Compared to the index population in which the FSFI cut-score was validated (healthy women ages 18-74), all scores were low. Compared to published scores of a postmenopausal population, scores were not statistically different. Multivariate analysis isolated factors associated with lower FSFI scores, including having laparotomy as opposed to minimally invasive surgery (effect size, -7.1 points; 95% CI, -11.2 to -3.1; P<.001), lack of vaginal lubricant use (effect size, -4.4 points; 95% CI, -8.7 to -0.2; P=.040), and short time interval (<6 months) from hysterectomy to questionnaire completion (effect size, -4.6 points; 95% CI, -9.3 to 0.2; P=.059). Conclusions: SD, as defined by an FSFI score <26, was prevalent. The postmenopausal status of EC patients alone is a known risk factor for SD. Additional factors associated with poor sexual functioning following treatment for EC included receipt of laparotomy and lack of vaginal lubricant use.

  12. Extensive theoretical/numerical comparative studies on H2 and generalised H2 norms in sampled-data systems

    NASA Astrophysics Data System (ADS)

    Kim, Jung Hoon; Hagiwara, Tomomichi

    2017-11-01

    This paper is concerned with linear time-invariant (LTI) sampled-data systems (by which we mean sampled-data systems with LTI generalised plants and LTI controllers) and studies their H2 norms from the viewpoint of impulse responses and generalised H2 norms from the viewpoint of the induced norms from L2 to L∞. A new definition of the H2 norm of LTI sampled-data systems is first introduced from an intermediate standpoint between the two existing definitions. We then establish unified treatment of the three definitions of the H2 norm through a matrix function G(τ) defined on the sampling interval [0, h). This paper next considers the generalised H2 norms, in which two types of the L∞ norm of the output are considered: the temporal supremum magnitude under the spatial 2-norm, and under the spatial ∞-norm, of a vector-valued function. We further give unified treatment of the generalised H2 norms through another matrix function F(θ), which is also defined on [0, h). Through a close connection between G(τ) and F(θ), some theoretical relationships between the H2 and generalised H2 norms are provided. Furthermore, appropriate extensions associated with the treatment of G(τ) and F(θ) to the closed interval [0, h] are discussed to facilitate numerical computations and comparisons of the H2 and generalised H2 norms. Through theoretical and numerical studies, it is shown that the two generalised H2 norms coincide with none of the three H2 norms of LTI sampled-data systems, even though all five definitions coincide with each other when single-output continuous-time LTI systems are considered as a special class of LTI sampled-data systems. To summarise, this paper clarifies that the five control performance measures are mutually related but intrinsically different from each other.
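    For reference, the two families of measures discussed above can be stated for a continuous-time LTI system with impulse response g and output y; this is a simplification of the sampled-data definitions in the paper, not a restatement of them:

    ```latex
    % H2 norm via the impulse response g(t):
    \|G\|_{2}^{2} \;=\; \int_{0}^{\infty} \operatorname{trace}\bigl(g(t)^{T} g(t)\bigr)\, dt .

    % Generalised H2 norms: induced norms from L2 to L-infinity, with the
    % spatial 2-norm or infinity-norm taken inside the temporal supremum:
    \|G\|_{g2,2} \;=\; \sup_{\|u\|_{2}\le 1}\; \sup_{t\ge 0}\, |y(t)|_{2},
    \qquad
    \|G\|_{g2,\infty} \;=\; \sup_{\|u\|_{2}\le 1}\; \sup_{t\ge 0}\, |y(t)|_{\infty}.
    ```

    The paper's point is that, once sampling is introduced, these notions (and the three sampled-data H2 definitions) no longer coincide, even though they all agree for single-output continuous-time LTI systems.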

  13. Add-On Antihypertensive Medications to Angiotensin-Aldosterone System Blockers in Diabetes: A Comparative Effectiveness Study.

    PubMed

    Schroeder, Emily B; Chonchol, Michel; Shetterly, Susan M; Powers, J David; Adams, John L; Schmittdiel, Julie A; Nichols, Gregory A; O'Connor, Patrick J; Steiner, John F

    2018-05-07

    In individuals with diabetes, the comparative effectiveness of add-on antihypertensive medications added to an angiotensin-converting enzyme inhibitor or angiotensin II receptor blocker on the risk of significant kidney events is unknown. We used an observational, multicenter cohort of 21,897 individuals with diabetes to compare individuals who added β-blockers, dihydropyridine calcium channel blockers, loop diuretics, or thiazide diuretics to angiotensin-converting enzyme inhibitors or angiotensin II receptor blockers. We examined the hazard of significant kidney events, cardiovascular events, and death using Cox proportional hazards models with propensity score weighting. The composite significant kidney event end point was defined as the first occurrence of a ≥30% decline in eGFR to an eGFR<60 ml/min per 1.73 m², initiation of dialysis, or kidney transplant. The composite cardiovascular event end point was defined as the first occurrence of hospitalization for acute myocardial infarction, acute coronary syndrome, stroke, or congestive heart failure; coronary artery bypass grafting; or percutaneous coronary intervention, and it was only examined in those free of cardiovascular disease at baseline. Over a maximum of 5 years, there were 4707 significant kidney events, 1498 deaths, and 818 cardiovascular events. Compared with thiazide diuretics, hazard ratios for significant kidney events for β-blockers, calcium channel blockers, and loop diuretics were 0.81 (95% confidence interval, 0.74 to 0.89), 0.67 (95% confidence interval, 0.58 to 0.78), and 1.19 (95% confidence interval, 1.00 to 1.41), respectively. Compared with thiazide diuretics, hazard ratios of mortality for β-blockers, calcium channel blockers, and loop diuretics were 1.19 (95% confidence interval, 0.97 to 1.44), 0.73 (95% confidence interval, 0.52 to 1.03), and 1.67 (95% confidence interval, 1.31 to 2.13), respectively. Compared with thiazide diuretics, hazard ratios of cardiovascular events for β-blockers, calcium channel blockers, and loop diuretics were 1.65 (95% confidence interval, 1.39 to 1.96), 1.05 (95% confidence interval, 0.80 to 1.39), and 1.55 (95% confidence interval, 1.05 to 2.27), respectively. Compared with thiazide diuretics, calcium channel blockers were associated with a lower risk of significant kidney events and a similar risk of cardiovascular events. Copyright © 2018 by the American Society of Nephrology.

  14. 4D Dynamic Required Navigation Performance Final Report

    NASA Technical Reports Server (NTRS)

    Finkelsztein, Daniel M.; Sturdy, James L.; Alaverdi, Omeed; Hochwarth, Joachim K.

    2011-01-01

    New advanced four dimensional trajectory (4DT) procedures under consideration for the Next Generation Air Transportation System (NextGen) require an aircraft to precisely navigate relative to a moving reference such as another aircraft. Examples are Self-Separation for enroute operations and Interval Management for in-trail and merging operations. The current construct of Required Navigation Performance (RNP), defined for fixed-reference-frame navigation, is not sufficiently specified to be applicable to defining performance levels of such air-to-air procedures. An extension of RNP to air-to-air navigation would enable these advanced procedures to be implemented with a specified level of performance. The objective of this research effort was to propose new 4D Dynamic RNP constructs that account for the dynamic spatial and temporal nature of Interval Management and Self-Separation, develop mathematical models of the Dynamic RNP constructs, "Required Self-Separation Performance" and "Required Interval Management Performance," and to analyze the performance characteristics of these air-to-air procedures using the newly developed models. This final report summarizes the activities led by Raytheon, in collaboration with GE Aviation and SAIC, and presents the results from this research effort to expand the RNP concept to a dynamic 4D frame of reference.

  15. More accurate, calibrated bootstrap confidence intervals for correlating two autocorrelated climate time series

    NASA Astrophysics Data System (ADS)

    Olafsdottir, Kristin B.; Mudelsee, Manfred

    2013-04-01

    Estimation of Pearson's correlation coefficient between two time series, to evaluate the influence of one time-dependent variable on another, is one of the most frequently used statistical methods in the climate sciences. Various methods are used to estimate a confidence interval to support the correlation point estimate. Many of them make strong mathematical assumptions regarding distributional shape and serial correlation, which are rarely met. More robust statistical methods are needed to increase the accuracy of the confidence intervals. Bootstrap confidence intervals are estimated in the Fortran 90 program PearsonT (Mudelsee, 2003), whose main intention was to obtain an accurate confidence interval for the correlation coefficient between two time series by taking into account the serial dependence of the process that generated the data. However, Monte Carlo experiments show that the coverage accuracy for smaller data sizes can be improved. Here we adapt the PearsonT program into a new version, called PearsonT3, by calibrating the confidence interval to increase the coverage accuracy. Calibration is a bootstrap resampling technique that performs a second bootstrap loop, that is, it resamples from the bootstrap resamples. Like the non-calibrated bootstrap confidence intervals, it offers robustness against the data distribution. A pairwise moving-block bootstrap is used to preserve the serial correlation of both time series. The calibration is applied to standard-error-based bootstrap Student's t confidence intervals. The performance of the calibrated confidence intervals is examined with Monte Carlo simulations and compared with that of confidence intervals without calibration, that is, PearsonT. The coverage accuracy is evidently better for the calibrated confidence intervals, where the coverage error is acceptably small (i.e., within a few percentage points) already for data sizes as small as 20. 
One form of climate time series is output from numerical models that simulate the climate system. The method is applied to model data from the high-resolution ocean model INALT01, where the relationship between the Agulhas Leakage and the North Brazil Current is evaluated. Preliminary results show significant correlation between the two variables at a 10-year lag, which is roughly the time it takes Agulhas Leakage water to reach the North Brazil Current. Mudelsee, M., 2003. Estimating Pearson's correlation coefficient with bootstrap confidence interval from serially dependent time series. Mathematical Geology 35, 651-665.
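    The pairwise moving-block bootstrap at the core of PearsonT/PearsonT3 can be sketched as follows. This shows a single, uncalibrated percentile bootstrap loop; PearsonT3's calibration wraps a second bootstrap loop around it and is omitted for brevity. The AR(1)-style synthetic data and all settings are illustrative assumptions, not the INALT01 data:

    ```python
    import random, math

    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return sxy / (sx * sy)

    def block_bootstrap_ci(x, y, block_len=5, n_boot=2000, alpha=0.05, seed=1):
        """Pairwise moving-block bootstrap percentile CI for Pearson's r.
        Resampling whole blocks of consecutive (x, y) pairs preserves the
        serial correlation of both series."""
        rng = random.Random(seed)
        n = len(x)
        starts = range(n - block_len + 1)
        rs = []
        for _ in range(n_boot):
            bx, by = [], []
            while len(bx) < n:
                s = rng.choice(starts)          # same block for both series
                bx.extend(x[s:s + block_len])
                by.extend(y[s:s + block_len])
            rs.append(pearson(bx[:n], by[:n]))
        rs.sort()
        lo = rs[int((alpha / 2) * n_boot)]
        hi = rs[int((1 - alpha / 2) * n_boot) - 1]
        return lo, hi

    # Synthetic pair of serially correlated series sharing an innovation
    rng = random.Random(0)
    x, y = [0.0], [0.0]
    for _ in range(199):
        e = rng.gauss(0, 1)
        x.append(0.7 * x[-1] + e)
        y.append(0.7 * y[-1] + 0.8 * e + 0.6 * rng.gauss(0, 1))
    print("r =", round(pearson(x, y), 3))
    print("95% CI =", tuple(round(v, 3) for v in block_bootstrap_ci(x, y)))
    ```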

  16. Optimal methods for fitting probability distributions to propagule retention time in studies of zoochorous dispersal.

    PubMed

    Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi

    2016-02-01

    Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution, using as data-points either the lower, mid or upper bounds of sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); we then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fittings to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull distributions) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) relative to the original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations that ranged from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates. 
We recommend the use of the cumulative probability to fit parametric probability distributions to propagule retention time, specifically using maximum likelihood for parameter estimation. Furthermore, the experimental design for an optimal characterization of unimodal propagule retention time should include at least 500 recovered propagules and sampling time-intervals no larger than the time peak of propagule retrieval, except in the tail of the distribution, where broader sampling time-intervals may also produce accurate fits.
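    The recommended approach, fitting a parametric CDF to the cumulative proportion of recovered propagules, can be sketched as follows. The example simulates Weibull retention times, censors them into 2-hour sampling intervals, and recovers the parameters by least squares on the cumulative distribution. The coarse grid search stands in for a proper non-linear least-squares or maximum-likelihood optimizer, and all settings (sample size, interval length, grids) are assumptions for illustration:

    ```python
    import math, random

    def weibull_cdf(t, shape, scale):
        return 1.0 - math.exp(-((t / scale) ** shape))

    def fit_cdf_least_squares(bounds, cum_props, shapes, scales):
        """Fit a Weibull CDF to the observed cumulative proportion of
        propagules recovered by each sampling-interval upper bound,
        via a coarse grid search over (shape, scale)."""
        best = None
        for k in shapes:
            for lam in scales:
                sse = sum((weibull_cdf(b, k, lam) - p) ** 2
                          for b, p in zip(bounds, cum_props))
                if best is None or sse < best[0]:
                    best = (sse, k, lam)
        return best[1], best[2]

    # Simulate retention times from a known Weibull, then discretize into
    # 2-hour sampling intervals (interval-censored observation)
    rng = random.Random(42)
    true_shape, true_scale = 2.0, 10.0
    times = [true_scale * (-math.log(1 - rng.random())) ** (1 / true_shape)
             for _ in range(500)]
    bounds = [2 * i for i in range(1, 13)]            # 2, 4, ..., 24 h
    cum = [sum(t <= b for t in times) / len(times) for b in bounds]

    shape_hat, scale_hat = fit_cdf_least_squares(
        bounds, cum,
        shapes=[1 + 0.1 * i for i in range(31)],      # 1.0 .. 4.0
        scales=[5 + 0.25 * i for i in range(41)])     # 5 .. 15
    print(f"true: k=2.0, lam=10.0  ->  fitted: k={shape_hat}, lam={scale_hat}")
    ```

    Fitting the cumulative proportions uses the information in every interval jointly, which is why it outperforms treating the lower, mid, or upper interval bounds as exact observations.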

  17. Detection of severe Midwest thunderstorms using geosynchronous satellite data

    NASA Technical Reports Server (NTRS)

    Adler, R. F.; Markus, M. J.; Fenn, D. D.

    1985-01-01

    The effectiveness of severe-thunderstorm detection in the Midwestern U.S. by means of approximately 5-min-interval geosynchronous satellite data is explored; thunderstorms are defined in IR data as points of relative minimum in brightness temperature T(B) having good time continuity and exhibiting a period of rapid growth. Four parameters (the rate of T(B) decrease in the upper troposphere, the rate of T(B) decrease in the stratosphere, isotherm expansion, and storm-lifetime minimum T(B)) are shown to be statistically related to the occurrence of severe weather on four case-study days and are combined into a Thunderstorm Index ranging from 1 to 9. Storms rated higher than 6 have a much higher probability of severe-weather reports, yielding a warning lead time of 15 min for hail and 30 min for the first tornado report.

  18. A new preparedness policy for EMS logistics.

    PubMed

    Lee, Seokcheon

    2017-03-01

    Response time in emergency medical services (EMS) is defined as the interval between receipt of a 911 call and an ambulance's arrival at the scene. When several ambulances are available upon receipt of a new call, a decision to select an ambulance has to be made in an effort to reduce response time. Dispatching the closest available unit is common in practice; however, the Preparedness policy was recently designed, which is simple in form yet capable of securing long-term efficiency. This research aims to improve the Preparedness policy, resolving several critical issues inherent in its current form. The new Preparedness policy incorporates a new metric of preparedness based on the notion of centrality and involves a tuning parameter, the weight on preparedness, which has to be chosen appropriately for the operational scenario. Computational experiments show that the new policy significantly and robustly improves on the former policy across various scenarios.
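    The abstract does not give the centrality-based preparedness metric itself, so the following is a hypothetical illustration of the general idea only: score each available unit by its response time plus a weighted penalty for the coverage lost if that unit is dispatched, with weight 0 reducing to closest-unit dispatching. The unit positions, demand points, and Manhattan-distance coverage measure are all invented:

    ```python
    def dist(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])   # Manhattan distance

    def coverage(ambulances, demand_points):
        """Mean distance from each demand point to its nearest idle unit
        (lower is better)."""
        return sum(min(dist(d, a) for a in ambulances)
                   for d in demand_points) / len(demand_points)

    def dispatch(ambulances, call, demand_points, weight):
        """Pick the unit minimising response time plus weighted coverage loss.
        weight = 0 reduces to closest-unit dispatching."""
        base = coverage(ambulances, demand_points)
        best, best_score = None, None
        for i, a in enumerate(ambulances):
            remaining = ambulances[:i] + ambulances[i + 1:]
            loss = coverage(remaining, demand_points) - base if remaining else 0.0
            score = dist(a, call) + weight * loss
            if best_score is None or score < best_score:
                best, best_score = i, score
        return best

    units  = [(0, 0), (5, 5), (10, 0)]
    demand = [(5, 5), (5, 6), (6, 5)]    # demand clustered around unit 1
    call   = (4, 4)
    print("closest unit: ", dispatch(units, call, demand, weight=0.0))
    print("preparedness:", dispatch(units, call, demand, weight=1.0))
    ```

    With weight 0 the nearest unit (index 1) is sent; with a positive weight the policy keeps unit 1 in place, because removing it would strand the demand cluster, and sends a slightly farther unit instead.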

  19. Application of a simple recording system to the analysis of free-play behavior in autistic children

    PubMed Central

    Boer, Arend P.

    1968-01-01

    An observational system, which has been developed to facilitate recording of the total behavioral repertoire of autistic children, involves time-sampling recording of behavior with the help of a common Stenograph machine. Categories which exhausted all behavior were defined. Each category corresponded with a designated key on the Stenograph machine. The observer depressed one key at each 1-sec interval. The observer was paced by audible beats from a metronome. A naive observer can be used with this method. The observer is not mechanically limited and a minimum of observer training is required to obtain reliable measures. The data sampled during a five-week observation period indicated the stability of a taxonomic instrument of behavior based upon direct, time-sampling observations and the stability of spontaneous autistic behavior. Results showed that the behavior of the subjects was largely nonrandom and unsocialized in character. PMID:16795193

  20. Application of a simple recording system to the analysis of free-play behavior in autistic children.

    PubMed

    Boer, A P

    1968-01-01

    An observational system, which has been developed to facilitate recording of the total behavioral repertoire of autistic children, involves time-sampling recording of behavior with the help of a common Stenograph machine. Categories which exhausted all behavior were defined. Each category corresponded with a designated key on the Stenograph machine. The observer depressed one key at each 1-sec interval. The observer was paced by audible beats from a metronome. A naive observer can be used with this method. The observer is not mechanically limited and a minimum of observer training is required to obtain reliable measures. The data sampled during a five-week observation period indicated the stability of a taxonomic instrument of behavior based upon direct, time-sampling observations and the stability of spontaneous autistic behavior. Results showed that the behavior of the subjects was largely nonrandom and unsocialized in character.
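    The 1-sec time-sampling scheme described above amounts to one category key per metronome beat; category proportions then follow from a simple tally. A minimal sketch with an invented record (the categories A/B/C are hypothetical stand-ins for the exhaustive behaviour categories):

    ```python
    from collections import Counter

    # Hypothetical 1-second time-sampling record: one category key per beat,
    # as an observer would press one Stenograph key per metronome tick.
    record = list("AAABBACCCABBBAAACCA")

    tally = Counter(record)
    total = len(record)
    for cat in sorted(tally):
        print(f"category {cat}: {tally[cat] / total:.2f} of observed intervals")
    ```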
