Particle behavior simulation in thermophoresis phenomena by direct simulation Monte Carlo method
NASA Astrophysics Data System (ADS)
Wada, Takao
2014-07-01
Particle motion under thermophoretic force is simulated using the direct simulation Monte Carlo (DSMC) method. Thermophoresis phenomena, which occur for particles of about 1 μm in size, are treated in this paper. The main difficulty in thermophoresis simulation is the computation time, which is proportional to the collision frequency; note that the time step interval becomes very small when simulating the motion of a large particle. Thermophoretic forces calculated by the DSMC method have been reported, but the particle motion was not computed because of the small time step interval. In this paper, a molecule-particle collision model is considered that computes the collision between a particle and multiple molecules in a single collision event. The momentum transfer to the particle is computed with a collision weight factor, defined as the number of molecules colliding with the particle in one collision event. Considering this collision weight factor allows a large time step interval: for a 1 μm particle, the time step is about a million times longer than the conventional DSMC time step, so the computation time is reduced by a factor of about one million. We simulate graphite particle motion under thermophoretic force using DSMC-Neutrals (Particle-PLUS neutral module), commercial software based on the DSMC method, with the above collision weight factor. The particle is a sphere 1 μm in size, and particle-particle collisions are ignored. We compute the thermophoretic forces in Ar and H2 gases over a pressure range from 0.1 to 100 mTorr. The results agree well with Gallis' analytical results; note that Gallis' analytical result in the continuum limit is the same as Waldmann's result.
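The scale of the collision weight factor can be illustrated with a back-of-the-envelope kinetic-theory estimate (this is not the paper's DSMC code; the gas conditions and molecular time step below are assumed for illustration):

```python
import math

# Estimate how many gas molecules strike a 1 um sphere per DSMC time step,
# and use that count as a "collision weight factor" W so the particle can be
# advanced with a step W times longer than the molecular one.
# All numerical values here are illustrative assumptions.

k_B = 1.380649e-23            # Boltzmann constant [J/K]
T = 300.0                     # gas temperature [K], assumed
p = 0.1e-3 * 133.322          # 0.1 mTorr converted to Pa
m = 6.63e-26                  # Ar atom mass [kg]
r = 0.5e-6                    # particle radius (1 um diameter sphere)

n = p / (k_B * T)                                  # number density [1/m^3]
v_mean = math.sqrt(8 * k_B * T / (math.pi * m))    # mean thermal speed [m/s]
# kinetic-theory molecular flux on a sphere: (n * v_mean / 4) per unit area
nu = n * v_mean / 4 * 4 * math.pi * r**2           # impacts per second

dt_mol = 1e-8                 # conventional DSMC molecular time step [s], assumed
W = nu * dt_mol               # molecules hitting the particle per step
dt_particle = W * dt_mol if W > 1 else dt_mol      # enlarged particle step
print(f"impact rate = {nu:.3e} /s, weight factor W = {W:.3e}")
```

Even at 0.1 mTorr a micron-sized sphere collects on the order of a billion molecular impacts per second, which is why grouping impacts into weighted collision events permits such a large enlargement of the time step.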
Healthy Lifestyle Fitness Interval training can help you get the most out of your workout. By Mayo Clinic Staff Are you ready to shake ... spending more time at the gym? Consider aerobic interval training. Once the domain of elite athletes, interval training ...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-18
... Association (ATA) on behalf of its member Delta Air Lines Inc. (Delta) asks that we consider reviewing the compliance time to better align with industry standard tank entry intervals. Delta notes that the... intervals; and the center tank is opened at 4-year intervals. Delta states that the 60-month compliance time...
Moulki, Naeem; Kealhofer, Jessica V; Benditt, David G; Gravely, Amy; Vakil, Kairav; Garcia, Santiago; Adabag, Selcuk
2018-06-16
Bifascicular block and prolonged PR interval on the electrocardiogram (ECG) have been associated with complete heart block and sudden cardiac death. We sought to determine if cardiac implantable electronic devices (CIED) improve survival in these patients. We assessed survival in relation to CIED status among 636 consecutive patients with bifascicular block and prolonged PR interval on the ECG. In survival analyses, CIED was considered as a time-varying covariate. Average age was 76 ± 9 years, and 99% of the patients were men. A total of 167 (26%) underwent CIED (127 pacemaker only) implantation at baseline (n = 23) or during follow-up (n = 144). During 5.4 ± 3.8 years of follow-up, 83 (13%) patients developed complete or high-degree atrioventricular block and 375 (59%) died. Patients with a CIED had a longer survival compared to those without a CIED in the traditional, static analysis (log-rank p < 0.0001) but not when CIED was considered as a time-varying covariate (log-rank p = 0.76). In the multivariable model, patients with a CIED had a 34% lower risk of death (hazard ratio 0.66, 95% confidence interval 0.52-0.83; p = 0.001) than those without CIED in the traditional analysis but not in the time-varying covariate analysis (hazard ratio 1.05, 95% confidence interval 0.79-1.38; p = 0.76). Results did not change in the subgroup with a pacemaker only. Bifascicular block and prolonged PR interval on ECG are associated with a high incidence of complete atrioventricular block and mortality. However, CIED implantation does not have a significant influence on survival when time-varying nature of CIED implantation is considered.
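The difference between the static and time-varying analyses comes down to how follow-up is structured. A minimal sketch (field names and values are illustrative, not from the study data) of converting one patient's record into counting-process rows, so that a device implanted during follow-up is treated as a time-varying covariate rather than a fixed baseline one:

```python
# Hedged sketch: restructure follow-up into counting-process (start, stop]
# rows. The time before implantation is attributed to the "no CIED" state,
# avoiding the immortal-time bias of the traditional static analysis.

def to_counting_process(follow_up, died, implant_time=None):
    """Return (start, stop, cied, event) rows for one patient."""
    rows = []
    if implant_time is None or implant_time >= follow_up:
        rows.append((0.0, follow_up, 0, died))           # never exposed
    elif implant_time <= 0:
        rows.append((0.0, follow_up, 1, died))           # exposed at baseline
    else:
        rows.append((0.0, implant_time, 0, 0))           # pre-implant, no event
        rows.append((implant_time, follow_up, 1, died))  # post-implant
    return rows

# A patient implanted at year 2 who died at year 5 contributes two rows:
print(to_counting_process(follow_up=5.0, died=1, implant_time=2.0))
# -> [(0.0, 2.0, 0, 0), (2.0, 5.0, 1, 1)]
```

Rows in this format can then be passed to any survival routine that accepts start/stop intervals for time-varying Cox regression.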
Campbell, J P; Gratton, M C; Salomone, J A; Lindholm, D J; Watson, W A
1994-01-01
In some emergency medical services (EMS) system designs, response time intervals are mandated with monetary penalties for noncompliance. These times are set with the goal of providing rapid, definitive patient care. The time interval of vehicle at scene-to-patient access (VSPA) has been measured, but its effect on response time interval compliance has not been determined. To determine the effect of the VSPA interval on the mandated code 1 (< 9 min) and code 2 (< 13 min) response time interval compliance in an urban, public-utility model system, a prospective, observational study used independent third-party riders to collect the VSPA interval for emergency life-threatening (code 1) and emergency non-life-threatening (code 2) calls. The VSPA interval was added to the 9-1-1 call-to-dispatch and vehicle dispatch-to-scene intervals to determine the total time interval from call received until paramedic access to the patient (9-1-1 call-to-patient access). Compliance with the mandated response time intervals was determined using the traditional time intervals (9-1-1 call-to-scene) plus the VSPA time intervals (9-1-1 call-to-patient access). Chi-square was used to determine statistical significance. Of the 216 observed calls, 198 were matched to the traditional time intervals. Sixty-three were code 1, and 135 were code 2. Of the code 1 calls, 90.5% were compliant using 9-1-1 call-to-scene intervals, dropping to 63.5% using 9-1-1 call-to-patient access intervals (p < 0.0005). Of the code 2 calls, 94.1% were compliant using 9-1-1 call-to-scene intervals; compliance decreased to 83.7% using 9-1-1 call-to-patient access intervals (p = 0.012). The addition of the VSPA interval to the traditional time intervals impacts system response time compliance. Using 9-1-1 call-to-scene compliance as a basis for measuring system performance underestimates the time for the delivery of definitive care. This must be considered when response time interval compliances are defined.
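The compliance computation itself is straightforward: add the VSPA interval to the traditional call-to-scene time and re-check the mandate. A minimal sketch with invented per-call times (minutes), not the study data:

```python
# Hedged sketch of the compliance comparison. All interval values are
# made-up illustrations; the 9-minute limit is the code 1 mandate.

def compliance(times, limit):
    """Fraction of calls meeting the mandated response time."""
    return sum(t < limit for t in times) / len(times)

call_to_scene = [6.0, 7.5, 8.9, 8.0, 9.5]    # traditional interval, minutes
vspa          = [1.2, 2.5, 0.8, 1.9, 1.0]    # vehicle-at-scene to patient
call_to_patient = [a + b for a, b in zip(call_to_scene, vspa)]

print(compliance(call_to_scene, 9.0))    # 0.8
print(compliance(call_to_patient, 9.0))  # 0.2
```

Calls that were comfortably compliant on the call-to-scene clock can fall out of compliance once the walk to the patient is counted, which is the paper's central point.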
Monitoring real-time navigation processes using the automated reasoning tool (ART)
NASA Technical Reports Server (NTRS)
Maletz, M. C.; Culbert, C. J.
1985-01-01
An expert system is described for monitoring and controlling navigation processes in real-time. The ART-based system features data-driven computation, accommodation of synchronous and asynchronous data, temporal modeling for individual time intervals and chains of time intervals, and hypothetical reasoning capabilities that consider alternative interpretations of the state of navigation processes. The concept is illustrated in terms of the NAVEX system for monitoring and controlling the high speed ground navigation console for Mission Control at Johnson Space Center. The reasoning processes are outlined, including techniques used to consider alternative data interpretations. Installation of the system has permitted using a single operator, instead of three, to monitor the ascent and entry phases of a Shuttle mission.
NASA Astrophysics Data System (ADS)
Tang, Tie-Qiao; Wang, Tao; Chen, Liang; Huang, Hai-Jun
2018-01-01
In this paper, we introduce the fuel cost into each commuter's trip cost, define a new trip cost without late arrival and its corresponding equilibrium state, and use a car-following model to explore the impacts of the fuel cost on each commuter's departure time, departure interval, arrival time, arrival interval, traveling time, early arrival time, and trip cost at this equilibrium state. The numerical results show that including the fuel cost in each commuter's trip cost has positive impacts on his trip cost, fuel cost, and the traffic situation in the system without late arrival; that is, each commuter should explicitly consider the fuel cost in his trip cost.
Multivariable nonlinear analysis of foreign exchange rates
NASA Astrophysics Data System (ADS)
Suzuki, Tomoya; Ikeguchi, Tohru; Suzuki, Masuo
2003-05-01
We analyze the multivariable time series of foreign exchange rates: price movements, which have often been analyzed, as well as dealing time intervals and spreads between bid and ask prices. Treating dealing time intervals as event timings, analogous to neurons' firings, we use raster plots (RPs) and peri-stimulus time histograms (PSTHs), which are popular methods in the field of neurophysiology. Introducing special processing to obtain RPs and PSTHs for exchange rate time series, we discover that dynamical interaction exists among the three variables. We also find that adopting multiple variables improves prediction accuracy.
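A PSTH is simply a histogram of event times aligned to a reference ("stimulus") time and pooled over trials. A minimal sketch with invented timestamps (the paper's "special processing" for exchange data is not reproduced here):

```python
# Hedged sketch of a peri-stimulus time histogram (PSTH): each trial is a
# (stimulus_time, event_times) pair; events are re-expressed relative to
# the stimulus and binned. Data below are invented for illustration.

def psth(trials, window=(-1.0, 1.0), bin_width=0.5):
    lo, hi = window
    nbins = int((hi - lo) / bin_width)
    counts = [0] * nbins
    for stim, events in trials:
        for t in events:
            rel = t - stim                       # align to stimulus
            if lo <= rel < hi:
                counts[int((rel - lo) / bin_width)] += 1
    return counts

trials = [(10.0, [9.2, 9.9, 10.3]), (20.0, [19.6, 20.1, 20.4])]
print(psth(trials))  # -> [1, 2, 3, 0]
```

For dealing times, the "stimulus" could be any recurring market event; a raster plot is the same aligned data drawn one trial per row instead of binned.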
The Anaesthetic-ECT Time Interval in Electroconvulsive Therapy Practice--Is It Time to Time?
Gálvez, Verònica; Hadzi-Pavlovic, Dusan; Wark, Harry; Harper, Simon; Leyden, John; Loo, Colleen K
2016-01-01
Because most common intravenous anaesthetics used in ECT have anticonvulsant properties, their plasma-brain concentration at the time of seizure induction might affect seizure expression. The quality of ECT seizure expression has been repeatedly associated with efficacy outcomes. The time interval between the anaesthetic bolus injection and the ECT stimulus (anaesthetic-ECT time interval) will determine the anaesthetic plasma-brain concentration when the ECT stimulus is administered. The aim of this study was to examine the effect of the anaesthetic-ECT time interval on ECT seizure quality and duration. The anaesthetic-ECT time interval was recorded in 771 ECT sessions (84 patients). Right unilateral brief pulse ECT was applied. Anaesthesia given was propofol (1-2 mg/kg) and succinylcholine (0.5-1.0 mg/kg). Seizure quality indices (slow wave onset, amplitude, regularity, stereotypy and post-ictal suppression) and duration were rated through a structured rating scale by a single blinded trained rater. Linear Mixed Effects Models analysed the effect of the anaesthetic-ECT time interval on seizure quality indices, controlling for propofol dose (mg), ECT charge (mC), ECT session number, days between ECT, age (years), initial seizure threshold (mC) and concurrent medication. Longer anaesthetic-ECT time intervals led to significantly higher quality seizures (p < 0.001 for amplitude, regularity, stereotypy and post-ictal suppression). These results suggest that the anaesthetic-ECT time interval is an important factor to consider in ECT practice. This time interval should be extended to as long as practically possible to facilitate the production of better quality seizures. Close collaboration between the anaesthetist and the psychiatrist is essential. Copyright © 2015 Elsevier Inc. All rights reserved.
Spectral analysis of electrocardiographic RR intervals to indicate atrial fibrillation
NASA Astrophysics Data System (ADS)
Nuryani, Nuryani; Satrio Nugroho, Anto
2017-11-01
Atrial fibrillation is a serious heart disease associated with an increased risk of death, and thus its early detection is necessary. We have investigated the spectral pattern of the electrocardiogram in relation to atrial fibrillation. The utilized electrocardiogram feature is the RR interval, the time interval between two consecutive R peaks. A series of RR intervals in a time segment is converted to a frequency-domain signal. The frequency components are investigated to find those which significantly associate with atrial fibrillation. A segment is labeled as atrial fibrillation or normal by considering a defined number of atrial fibrillation RR intervals in the segment. Using clinical data of 23 patients with atrial fibrillation, we find that the frequency components can be used to indicate atrial fibrillation.
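The core transformation, taking a segment of RR intervals into frequency components, can be sketched with a plain discrete Fourier transform. The RR values below are invented; the paper's specific segmentation and feature selection are not reproduced:

```python
import cmath

# Hedged sketch: convert a segment of RR intervals (seconds) into the power
# of its frequency components via a plain DFT. Irregular rhythms such as AF
# tend to spread more power across the non-DC components.

def dft_power(x):
    n = len(x)
    mean = sum(x) / n
    x = [v - mean for v in x]                # remove DC before transforming
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) ** 2
            for k in range(n // 2)]          # keep the non-redundant half

rr_normal = [0.80, 0.82, 0.81, 0.80, 0.82, 0.81, 0.80, 0.82]
rr_irregular = [0.62, 1.05, 0.71, 0.95, 0.58, 1.10, 0.66, 0.90]
print(dft_power(rr_normal))
print(dft_power(rr_irregular))
```

By Parseval's theorem, the total spectral power tracks the variance of the segment, so an irregular AF-like segment yields much larger components than a steady sinus-rhythm segment.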
Zhang, Zhenwei; VanSwearingen, Jessie; Brach, Jennifer S.; Perera, Subashan
2016-01-01
Human gait is a complex interaction of many nonlinear systems, and stride intervals exhibit self-similarity over long time scales that can be modeled as a fractal process. The scaling exponent represents the fractal degree and can be interpreted as a biomarker of related diseases. A previous study showed that the average wavelet method provides the most accurate estimates of this scaling exponent when applied to stride interval time series. The purpose of this paper is to determine the most suitable mother wavelet for the average wavelet method. This paper presents a comparative numerical analysis of sixteen mother wavelets using simulated and real fractal signals. Simulated fractal signals were generated under varying signal lengths and scaling exponents that span a range of physiologically conceivable fractal signals. Five candidate mother wavelets were chosen due to their good performance on the mean square error test for both short and long signals. Next, we comparatively analyzed these five mother wavelets for physiologically relevant stride time series lengths. Our analysis showed that the symlet 2 mother wavelet provides a low mean square error and low variance for long time intervals and relatively low errors for short signal lengths. It can be considered the most suitable mother wavelet without the burden of considering the signal length. PMID:27960102
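The average wavelet method can be sketched with the simplest mother wavelet, the Haar wavelet, rather than the symlet 2 used in the paper: average the absolute detail coefficients at each dyadic scale and fit a line in log-log space. The slope tracks the scaling exponent (the exact offset depends on the normalization convention); all signals below are synthetic.

```python
import math, random

# Hedged sketch of the average wavelet method with a Haar mother wavelet.
# This is an illustration of the idea, not the paper's implementation.

def haar_avg_coeffs(x):
    """Mean absolute Haar detail coefficient at each dyadic scale."""
    scales, means = [], []
    s = 1
    while 2 * s <= len(x) // 4:
        coeffs = []
        for i in range(0, len(x) - 2 * s + 1, 2 * s):
            a = sum(x[i:i + s]) / s              # left half-block mean
            b = sum(x[i + s:i + 2 * s]) / s      # right half-block mean
            coeffs.append(abs(a - b) * math.sqrt(s))
        scales.append(s)
        means.append(sum(coeffs) / len(coeffs))
        s *= 2
    return scales, means

def slope(xs, ys):
    """Least-squares slope in log-log space."""
    lx = [math.log(v) for v in xs]
    ly = [math.log(v) for v in ys]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    return (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
            / sum((a - mx) ** 2 for a in lx))

random.seed(1)
white = [random.gauss(0, 1) for _ in range(1024)]
walk = [sum(white[:i + 1]) for i in range(len(white))]   # integrated noise
print(slope(*haar_avg_coeffs(white)))   # near 0 for white noise
print(slope(*haar_avg_coeffs(walk)))    # near 1 for a random walk
```

The two reference signals bracket the physiological range: white noise gives a flat log-log line, while an integrated (random-walk) signal gives a slope near one, with real stride series falling in between.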
NASA Technical Reports Server (NTRS)
Wardrip, S. C.
1982-01-01
Proceedings of an annual Precise Time and Time Interval (PTTI) Applications and Planning Meeting are summarized. The meeting offered PTTI managers, systems engineers, and program planners a transparent view of the state of the art, an opportunity to express needs, a view of important future trends, and a review of relevant past accomplishments. Specific aims were: to provide PTTI users with new and useful applications, procedures, and techniques; and to allow the PTTI researcher to better assess fruitful directions for research efforts.
Maximum likelihood estimation for semiparametric transformation models with interval-censored data
Mao, Lu; Lin, D. Y.
2016-01-01
Interval censoring arises frequently in clinical, epidemiological, financial and sociological studies, where the event or failure of interest is known only to occur within an interval induced by periodic monitoring. We formulate the effects of potentially time-dependent covariates on the interval-censored failure time through a broad class of semiparametric transformation models that encompasses proportional hazards and proportional odds models. We consider nonparametric maximum likelihood estimation for this class of models with an arbitrary number of monitoring times for each subject. We devise an EM-type algorithm that converges stably, even in the presence of time-dependent covariates, and show that the estimators for the regression parameters are consistent, asymptotically normal, and asymptotically efficient with an easily estimated covariance matrix. Finally, we demonstrate the performance of our procedures through simulation studies and application to an HIV/AIDS study conducted in Thailand. PMID:27279656
Atlas of interoccurrence intervals for selected thresholds of daily precipitation in Texas
Asquith, William H.; Roussel, Meghan C.
2003-01-01
A Poisson process model is used to define the distribution of interoccurrence intervals of daily precipitation in Texas. A precipitation interoccurrence interval is the time period between two successive rainfall events. Rainfall events are defined as daily precipitation equaling or exceeding a specified depth threshold. Ten precipitation thresholds are considered: 0.05, 0.10, 0.25, 0.50, 0.75, 1.0, 1.5, 2.0, 2.5, and 3.0 inches. Site-specific mean interoccurrence intervals and ancillary statistics are presented for each threshold and for each of 1,306 National Weather Service daily precipitation gages. Maps depicting the spatial variation across Texas of the mean interoccurrence interval for each threshold are presented. The percent change from the statewide standard deviation of the interoccurrence intervals to the root-mean-square error ranges in magnitude from -24 percent for the 0.05-inch threshold to -60 percent for the 2.0-inch threshold. Because of the substantial negative percent change, the maps are considered more reliable estimators of the mean interoccurrence interval for most locations in Texas than the statewide mean values.
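Computing interoccurrence intervals from a daily record reduces to finding the gaps between successive threshold exceedances. A minimal sketch with an invented precipitation record (inches):

```python
# Hedged sketch: interoccurrence intervals are the gaps, in days, between
# successive days whose precipitation equals or exceeds the threshold.
# The record below is invented for illustration.

def interoccurrence_intervals(daily, threshold):
    days = [i for i, p in enumerate(daily) if p >= threshold]
    return [b - a for a, b in zip(days, days[1:])]

record = [0.0, 0.3, 0.0, 0.0, 1.2, 0.0, 0.6, 0.0, 0.0, 0.0, 2.1]
gaps = interoccurrence_intervals(record, 0.25)
print(gaps)                    # -> [3, 2, 4]
print(sum(gaps) / len(gaps))   # mean interoccurrence interval, days
```

Under the Poisson process model, these gaps are exponentially distributed, so the site-specific mean gap is the single parameter the atlas maps for each threshold.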
Wang, Peijie; Zhao, Hui; Sun, Jianguo
2016-12-01
Interval-censored failure time data occur in many fields such as demography, economics, medical research, and reliability, and many inference procedures for them have been developed (Sun, 2006; Chen, Sun, and Peace, 2012). However, most of the existing approaches assume that the mechanism that yields interval censoring is independent of the failure time of interest, which may not be true in practice (Zhang et al., 2007; Ma, Hu, and Sun, 2015). In this article, we consider regression analysis of case K interval-censored failure time data when the censoring mechanism may be related to the failure time of interest. For this problem, an estimated sieve maximum-likelihood approach is proposed for data arising from the proportional hazards frailty model, and a two-step estimation procedure is presented. In addition, the asymptotic properties of the proposed estimators of the regression parameters are established, and an extensive simulation study suggests that the method works well. Finally, we apply the method to a set of real interval-censored data that motivated this study. © 2016, The International Biometric Society.
Li, Lingling; Kulldorff, Martin; Russek-Cohen, Estelle; Kawai, Alison Tse; Hua, Wei
2015-12-01
The self-controlled risk interval design is commonly used to assess the association between an acute exposure and an adverse event of interest, implicitly adjusting for fixed, non-time-varying covariates. Explicit adjustment needs to be made for time-varying covariates, for example, age in young children. It can be performed via either a fixed or random adjustment. The random-adjustment approach can provide valid point and interval estimates but requires access to individual-level data for an unexposed baseline sample. The fixed-adjustment approach does not have this requirement and will provide a valid point estimate but may underestimate the variance. We conducted a comprehensive simulation study to evaluate their performance. We designed the simulation study using empirical data from the Food and Drug Administration-sponsored Mini-Sentinel Post-licensure Rapid Immunization Safety Monitoring Rotavirus Vaccines and Intussusception study in children 5-36.9 weeks of age. The time-varying confounder is age. We considered a variety of design parameters including sample size, relative risk, time-varying baseline risks, and risk interval length. The random-adjustment approach has very good performance in almost all considered settings. The fixed-adjustment approach can be used as a good alternative when the number of events used to estimate the time-varying baseline risks is at least the number of events used to estimate the relative risk, which is almost always the case. We successfully identified settings in which the fixed-adjustment approach can be used as a good alternative and provided guidelines on the selection and implementation of appropriate analyses for the self-controlled risk interval design. Copyright © 2015 John Wiley & Sons, Ltd.
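The basic self-controlled risk interval estimate, before any covariate adjustment, compares the event rate in the post-exposure risk window with the rate in a control window from the same individuals. A minimal sketch with invented counts and window lengths (days):

```python
import math

# Hedged sketch of the unadjusted self-controlled risk interval estimate:
# relative incidence = rate in risk window / rate in control window, with a
# Wald-type 95% CI on the log scale treating counts as Poisson.
# All numbers are illustrative, not from the rotavirus vaccine study.

def relative_incidence(events_risk, len_risk, events_ctrl, len_ctrl):
    rr = (events_risk / len_risk) / (events_ctrl / len_ctrl)
    se = math.sqrt(1 / events_risk + 1 / events_ctrl)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

rr, lo, hi = relative_incidence(events_risk=12, len_risk=7.0,
                                events_ctrl=6, len_ctrl=14.0)
print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")  # RR = 4.00
```

The fixed- versus random-adjustment approaches the abstract compares both modify this basic contrast to account for a time-varying baseline risk such as age.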
Tremblay, Marc; Vézina, Hélène
2000-01-01
Intergenerational time intervals are frequently used in human population-genetics studies concerned with the ages and origins of mutations. In most cases, mean intervals of 20 or 25 years are used, regardless of the demographic characteristics of the population under study. Although these characteristics may vary from prehistoric to historical times, we suggest that this value is probably too low, and that the ages of some mutations may have been underestimated. Analyses were performed by using the BALSAC Population Register (Quebec, Canada), from which several intergenerational comparisons can be made. Family reconstitutions were used to measure interval lengths and variations in descending lineages. Various parameters were considered, such as spouse age at marriage, parental age, and reproduction levels. Mother-child and father-child intervals were compared. Intergenerational male and female intervals were also analyzed in 100 extended ascending genealogies. Results showed that a mean value of 30 years is a better estimate of intergenerational intervals than 20 or 25 years. As marked differences between male and female interval length were observed, specific values are proposed for mtDNA, autosomal, X-chromosomal, and Y-chromosomal loci. The applicability of these results for age estimates of mutations is discussed. PMID:10677323
NASA Astrophysics Data System (ADS)
Endreny, Theodore A.; Pashiardis, Stelios
2007-02-01
Robust and accurate estimates of rainfall frequencies are difficult to make with short, arid-climate rainfall records; however, new regional and global methods were used to supplement such a constrained 15-34 yr record in Cyprus. The impact of supplementing rainfall frequency analysis with the regional and global approaches was measured with relative bias and root mean square error (RMSE) values. The analysis considered 42 stations with 8 time intervals (5-360 min) in four regions delineated by proximity to sea and elevation. Regional statistical algorithms found the sites passed discordancy tests of the coefficient of variation, skewness, and kurtosis, while heterogeneity tests revealed the regions were homogeneous to mildly heterogeneous. Rainfall depths were simulated in the regional analysis method 500 times, and goodness-of-fit tests identified the best candidate distribution as the general extreme value (GEV) Type II. In the regional analysis, the method of L-moments was used to estimate the location, shape, and scale parameters. In the global analysis, the distribution was a priori prescribed as GEV Type II, the shape parameter was a priori set to 0.15, and a time interval term was constructed so that one set of parameters served all time intervals. Relative RMSE values were approximately equal at 10% for the regional and global methods when regions were compared, but when time intervals were compared the global method RMSE had a parabolic-shaped time interval trend. Relative bias values were also approximately equal for both methods when regions were compared, but again a parabolic-shaped time interval trend was found for the global method. The global method relative RMSE and bias trended with time interval, which may be caused by fitting a single scale value for all time intervals.
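The L-moment fitting step starts from the sample L-moments of each station's annual-maximum series. A minimal sketch of the standard unbiased estimators (the rainfall values are invented; the mapping from L-moments to GEV parameters is omitted):

```python
from math import comb

# Hedged sketch: first two sample L-moments and the L-skewness ratio (t3),
# the quantities a regional L-moment analysis fits GEV location, scale, and
# shape parameters to. Uses the standard probability-weighted-moment form
# b_r = (1/n) * sum_i C(i,r)/C(n-1,r) * x_(i) over the ordered sample.

def sample_l_moments(x):
    x = sorted(x)
    n = len(x)
    b = [sum(comb(i, r) * x[i] for i in range(n)) / (n * comb(n - 1, r))
         for r in range(3)]
    l1 = b[0]                        # L-location (the mean)
    l2 = 2 * b[1] - b[0]             # L-scale
    l3 = 6 * b[2] - 6 * b[1] + b[0]  # third L-moment
    return l1, l2, l3 / l2           # return mean, L-scale, t3

data = [23.0, 31.5, 18.2, 45.0, 27.3, 62.1, 20.9, 35.4]  # invented maxima
l1, l2, t3 = sample_l_moments(data)
print(l1, l2, t3)
```

Regional analysis then pools these statistics across the homogeneous region before solving for the GEV parameters, which is what makes the approach robust for short records.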
Evaluating dedicated and intrinsic models of temporal encoding by varying context
Spencer, Rebecca M.C.; Karmarkar, Uma; Ivry, Richard B.
2009-01-01
Two general classes of models have been proposed to account for how people process temporal information in the milliseconds range. Dedicated models entail a mechanism in which time is explicitly encoded; examples include clock–counter models and functional delay lines. Intrinsic models, such as state-dependent networks (SDN), represent time as an emergent property of the dynamics of neural processing. An important property of SDN is that the encoding of duration is context dependent since the representation of an interval will vary as a function of the initial state of the network. Consistent with this assumption, duration discrimination thresholds for auditory intervals spanning 100 ms are elevated when an irrelevant tone is presented at varying times prior to the onset of the test interval. We revisit this effect in two experiments, considering attentional issues that may also produce such context effects. The disruptive effect of a variable context was eliminated or attenuated when the intervals between the irrelevant tone and test interval were made dissimilar or the duration of the test interval was increased to 300 ms. These results indicate how attentional processes can influence the perception of brief intervals, as well as point to important constraints for SDN models. PMID:19487188
Marwaha, Puneeta; Sunkaria, Ramesh Kumar
2016-09-01
The sample entropy (SampEn) has been widely used to quantify the complexity of RR-interval time series. Higher complexity, and hence entropy, is associated with the RR-interval time series of healthy subjects. However, SampEn suffers from the disadvantage that it assigns higher entropy to randomized surrogate time series as well as to certain pathological time series, which is a misleading observation. This wrong estimation of the complexity of a time series may be due to the fact that the existing SampEn technique updates the threshold value as a function of the long-term standard deviation (SD) of the time series. However, the time series of certain pathologies exhibit substantial variability in beat-to-beat fluctuations. So the SD of the first-order difference (short-term SD) of the time series should be considered while updating the threshold value, to account for the period-to-period variations inherent in a time series. In the present work, a new methodology, improved sample entropy (I-SampEn), is proposed, in which the threshold value is updated by considering the period-to-period variations of a time series. The I-SampEn technique assigns higher entropy values to age-matched healthy subjects than to patients suffering from atrial fibrillation (AF) and diabetes mellitus (DM). Our results are in agreement with the theory of reduced complexity of RR-interval time series in patients suffering from chronic cardiovascular and non-cardiovascular diseases.
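The modification the abstract describes, deriving the tolerance from the short-term rather than the long-term SD, can be sketched on a simplified SampEn variant. This is an illustration of the idea, not the authors' I-SampEn implementation; the RR values are invented and a generous tolerance factor is used because the series is very short:

```python
import math

# Hedged sketch of SampEn where the tolerance r can be set from the SD of
# the first-order differences (short-term SD) instead of the long-term SD.
# Simplified for clarity; not the published I-SampEn algorithm.

def sampen(x, m=2, r_factor=0.2, use_short_term_sd=True):
    if use_short_term_sd:
        d = [x[i + 1] - x[i] for i in range(len(x) - 1)]  # beat-to-beat diffs
    else:
        d = x
    mu = sum(d) / len(d)
    sd = math.sqrt(sum((v - mu) ** 2 for v in d) / len(d))
    r = r_factor * sd

    def count_matches(mm):
        """Count template pairs of length mm within tolerance r."""
        templates = [x[i:i + mm] for i in range(len(x) - mm + 1)]
        c = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(p - q) for p, q in zip(templates[i],
                                                  templates[j])) <= r:
                    c += 1
        return c

    b, a = count_matches(m), count_matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

rr = [0.80, 0.82, 0.79, 0.85, 0.81, 0.78, 0.84, 0.80, 0.83, 0.79,
      0.86, 0.81, 0.77, 0.84, 0.80, 0.82]
print(sampen(rr, r_factor=1.0))
```

Tying r to the first-difference SD makes the tolerance shrink for erratic beat-to-beat behavior, which is the mechanism by which I-SampEn penalizes pathological variability.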
Confidence intervals for the first crossing point of two hazard functions.
Cheng, Ming-Yen; Qiu, Peihua; Tan, Xianming; Tu, Dongsheng
2009-12-01
The phenomenon of crossing hazard rates is common in clinical trials with time to event endpoints. Many methods have been proposed for testing equality of hazard functions against a crossing hazards alternative. However, relatively few approaches are available in the literature for point or interval estimation of the crossing time point. The problem of constructing confidence intervals for the first crossing time point of two hazard functions is considered in this paper. After reviewing a recent procedure based on Cox proportional hazard modeling with Box-Cox transformation of the time to event, a nonparametric procedure using the kernel smoothing estimate of the hazard ratio is proposed. The proposed procedure and the one based on Cox proportional hazard modeling with Box-Cox transformation of the time to event are both evaluated by Monte-Carlo simulations and applied to two clinical trial datasets.
A new stochastic model considering satellite clock interpolation errors in precise point positioning
NASA Astrophysics Data System (ADS)
Wang, Shengli; Yang, Fanlin; Gao, Wang; Yan, Lizi; Ge, Yulong
2018-03-01
Precise clock products are typically interpolated based on the sampling interval of the observational data when they are used in precise point positioning. However, due to the white noise present in atomic clocks, a residual component of such noise will inevitably reside within the observations when clock errors are interpolated, and this noise will affect the resolution of the positioning results. In this paper, based on a twenty-one-week analysis of the atomic clock noise characteristics of numerous satellites, a new stochastic observation model that considers satellite clock interpolation errors is proposed. First, the systematic error of each satellite in the IGR clock product was extracted using a wavelet de-noising method to obtain the empirical characteristics of the atomic clock noise within each clock product. Then, based on those empirical characteristics, a stochastic observation model was constructed that considers the satellite clock interpolation errors. Subsequently, the IGR and IGS clock products at different time intervals were used for experimental validation. A verification using 179 IGS stations worldwide showed that, compared with the conventional model, the convergence times using the stochastic model proposed in this study were shortened by 4.8% and 4.0%, respectively, when the IGR and IGS 300-s-interval clock products were used, and by 19.1% and 19.4% when the 900-s-interval clock products were used. Furthermore, the disturbances during the initial phase of the calculation were also effectively reduced.
NASA Astrophysics Data System (ADS)
Torries, Brian; Shamsaei, Nima
2017-12-01
The effects of different cooling rates, as achieved by varying the interlayer time interval, on the fatigue behavior of additively manufactured Ti-6Al-4V specimens were investigated and modeled via a microstructure-sensitive fatigue model. Comparisons are made between two sets of specimens fabricated via Laser Engineered Net Shaping (LENS™), with variance in interlayer time interval accomplished by depositing either one or two specimens per print operation. Fully reversed, strain-controlled fatigue tests were conducted, with fractography following specimen failure. A microstructure-sensitive fatigue model was calibrated to model the fatigue behavior of both sets of specimens and was found to be capable of correctly predicting the longer fatigue lives of the single-built specimens and the reduced scatter of the double-built specimens; all data points fell within the predicted upper and lower bounds of fatigue life. The time interval effects and the ability to be modeled are important to consider when producing test specimens that are smaller than the production part (i.e., property-performance relationships).
Statistical inferences with jointly type-II censored samples from two Pareto distributions
NASA Astrophysics Data System (ADS)
Abu-Zinadah, Hanaa H.
2017-08-01
In several industries, products come from more than one production line, which calls for comparative life tests. This requires sampling from the different production lines, giving rise to a joint censoring scheme. In this article we consider the Pareto lifetime distribution under a jointly type-II censoring scheme. The maximum likelihood estimators (MLE) and the corresponding approximate confidence intervals, as well as the bootstrap confidence intervals of the model parameters, are obtained. Bayesian point and credible intervals of the model parameters are also presented. A lifetime data set is analyzed for illustrative purposes. Monte Carlo results from simulation studies are presented to assess the performance of the proposed method.
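For a single Pareto sample with known scale, the type-II censored MLE of the shape parameter has a closed form, which gives a feel for the joint-scheme estimators. A minimal sketch with invented lifetimes (the joint two-line scheme and interval construction are not reproduced):

```python
import math

# Hedged sketch: MLE of the Pareto shape parameter alpha under type-II
# censoring with known scale sigma, for one production line. With the r
# smallest of n lifetimes observed, maximizing the censored likelihood gives
#   alpha_hat = r / (sum(log(x_i/sigma)) + (n - r) * log(x_(r)/sigma)).
# Lifetimes below are invented for illustration.

def pareto_shape_mle(observed, n, sigma):
    r = len(observed)                 # number of observed failures
    x = sorted(observed)
    s = (sum(math.log(v / sigma) for v in x)
         + (n - r) * math.log(x[-1] / sigma))   # censored units at x_(r)
    return r / s

lifetimes = [1.2, 1.5, 1.9, 2.4, 3.1]   # r = 5 smallest of n = 8 units
print(pareto_shape_mle(lifetimes, n=8, sigma=1.0))
```

The n - r censored units each contribute a survival term at the largest observed failure time, which is why they enter the denominator through log(x_(r)/sigma).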
ERIC Educational Resources Information Center
Soland, James
2017-01-01
Research shows that assuming a test scale is equal-interval can be problematic, especially when the assessment is being used to achieve a policy aim like evaluating growth over time. However, little research considers whether teacher value added is sensitive to the underlying test scale, and in particular whether treating an ordinal scale as…
Reliability-based management of buried pipelines considering external corrosion defects
NASA Astrophysics Data System (ADS)
Miran, Seyedeh Azadeh
Corrosion is one of the main deteriorating mechanisms that degrade energy pipeline integrity, as pipelines transport corrosive fluids or gases and interact with a corrosive environment. Corrosion defects are usually detected by periodic inspections using in-line inspection (ILI) methods. In order to ensure pipeline safety, this study develops a cost-effective maintenance strategy that consists of three aspects: corrosion growth model development using ILI data, time-dependent performance evaluation, and optimal inspection interval determination. In particular, the proposed approach is applied to a cathodically protected buried steel pipeline located in Mexico. First, a time-dependent power-law formulation is adopted to probabilistically characterize the growth of the maximum depth and length of the external corrosion defects. Dependency between defect depth and length is considered in the model development, and the generation of corrosion defects over time is characterized by a homogeneous Poisson process. The growth models' unknown parameters are evaluated from the ILI data through Bayesian updating with the Markov chain Monte Carlo (MCMC) simulation technique. The proposed corrosion growth models can be used when either matched or non-matched defects are available, and they can account for defects newly generated since the last inspection. Results of this part of the study show that both the depth and length growth models predict damage quantities reasonably well, and a strong correlation between defect depth and length is found. Next, time-dependent system failure probabilities are evaluated using the developed corrosion growth models considering the prevailing uncertainties, where three failure modes, namely small leak, large leak, and rupture, are considered. Performance of the pipeline is evaluated through the failure probability per km (called a sub-system), where each sub-system is considered a series system of the detected and newly generated defects within that sub-system.
Sensitivity analysis is also performed to determine to which incorporated parameter(s) in the growth models reliability of the studied pipeline is most sensitive. The reliability analysis results suggest that newly generated defects should be considered in calculating failure probability, especially for prediction of long-term performance of the pipeline and also, impact of the statistical uncertainty in the model parameters is significant that should be considered in the reliability analysis. Finally, with the evaluated time-dependent failure probabilities, a life cycle-cost analysis is conducted to determine optimal inspection interval of studied pipeline. The expected total life-cycle costs consists construction cost and expected costs of inspections, repair, and failure. The repair is conducted when failure probability from any described failure mode exceeds pre-defined probability threshold after each inspection. Moreover, this study also investigates impact of repair threshold values and unit costs of inspection and failure on the expected total life-cycle cost and optimal inspection interval through a parametric study. The analysis suggests that a smaller inspection interval leads to higher inspection costs, but can lower failure cost and also repair cost is less significant compared to inspection and failure costs.
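The two building blocks of the growth model, defect generation as a homogeneous Poisson process and power-law depth growth, can be sketched in a few lines. This is a minimal Monte Carlo illustration with invented rate and growth parameters, not the Bayesian-fitted model of the study:

```python
import random

random.seed(42)

def simulate_defects(rate_per_year, horizon, alpha, beta):
    """Generate defect initiation times on [0, horizon] with a homogeneous
    Poisson process, then grow each defect's depth with a power law
    d(t) = alpha * (t - t0)**beta (mm)."""
    t, times = 0.0, []
    while True:
        t += random.expovariate(rate_per_year)   # exponential inter-arrivals
        if t > horizon:
            break
        times.append(t)
    return [alpha * (horizon - t0) ** beta for t0 in times]

def leak_probability(n_trials, wall=10.0, critical=0.8, **kw):
    """Monte Carlo estimate of P(max depth > critical fraction of wall)."""
    fails = 0
    for _ in range(n_trials):
        depths = simulate_defects(**kw)
        if depths and max(depths) > critical * wall:
            fails += 1
    return fails / n_trials

p = leak_probability(2000, rate_per_year=0.1, horizon=30.0, alpha=0.4, beta=0.95)
print(f"estimated small-leak probability per km: {p:.3f}")
```

In the study the rate, growth parameters and their uncertainty would come from the MCMC posterior rather than being fixed constants.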
Ballal, Nidambur Vasudev; Yegneswaran, Prakash Peralam; Mala, Kundabala; Bhat, Kadengodlu Seetharama
2011-11-01
The aim of this study was to evaluate the antimicrobial efficacy of 7% maleic acid (MA) and 17% ethylenediaminetetraacetic acid (EDTA) in the elimination of Enterococcus faecalis, Candida albicans, and Staphylococcus aureus at different time intervals. Transfer cultures of the microbial strains were used for inoculum preparation and for the time-kill assay. Viability counts in the 7% MA and 17% EDTA suspensions were performed at 0, 2, 4, 6, 12, and 24 hours. Assay results were analyzed by determining the number of strains that yielded a log(10) CFU/mL change of -1 compared with counts at 0 hours for the test medicaments at each time interval. Medicaments were considered microbicidal at a minimum inhibitory concentration that reduced the original inoculum by >3 log(10) CFU/mL (99.9%) and microbiostatic if the inoculum was reduced by <3 log(10) CFU/mL. Statistical analysis was performed using chi-square and Fisher exact tests, as well as the Friedman test for comparison of time intervals within the MA and EDTA groups. At all time intervals, there was no significant difference between MA and EDTA for any of the organisms (P > .05). However, within the MA and EDTA groups there were significant differences across time intervals (P < .001). Equivalent antimicrobial activity against all of the organisms tested was observed for MA and EDTA at the various periods. Copyright © 2011 Mosby, Inc. All rights reserved.
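The microbicidal/microbiostatic cutoff used above is simple arithmetic on log10 viable counts; a small helper makes it concrete (the example counts are invented):

```python
import math

def log10_reduction(cfu_initial, cfu_at_t):
    """Log10 reduction in viable count relative to the 0-hour inoculum."""
    return math.log10(cfu_initial) - math.log10(cfu_at_t)

def classify(cfu_initial, cfu_at_t, threshold=3.0):
    """Microbicidal if the inoculum drops by >3 log10 CFU/mL (99.9%),
    microbiostatic otherwise."""
    if log10_reduction(cfu_initial, cfu_at_t) > threshold:
        return "microbicidal"
    return "microbiostatic"

# A 10^6 CFU/mL inoculum falling below 10^3 CFU/mL is a >99.9% kill.
print(classify(1e6, 5e2))   # microbicidal
print(classify(1e6, 1e4))   # microbiostatic
```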
Emergence of patterns in random processes
NASA Astrophysics Data System (ADS)
Newman, William I.; Turcotte, Donald L.; Malamud, Bruce D.
2012-08-01
Sixty years ago, it was observed that any independent and identically distributed (i.i.d.) random variable would produce a pattern of peak-to-peak sequences with, on average, three events per sequence. This outcome was employed to show that randomness could yield, as a null hypothesis for animal populations, an explanation for their apparent 3-year cycles. We show how we can explicitly obtain a universal distribution of the lengths of peak-to-peak sequences in time series and that this can be employed for long data sets as a test of their i.i.d. character. We illustrate the validity of our analysis utilizing the peak-to-peak statistics of a Gaussian white noise. We also consider the nearest-neighbor cluster statistics of point processes in time. If the time intervals are random, we show that cluster size statistics are identical to the peak-to-peak sequence statistics of time series. In order to study the influence of correlations in a time series, we determine the peak-to-peak sequence statistics for the Langevin equation of kinetic theory leading to Brownian motion. To test our methodology, we consider a variety of applications. Using a global catalog of earthquakes, we obtain the peak-to-peak statistics of earthquake magnitudes and the nearest-neighbor interoccurrence time statistics. In both cases, we find good agreement with the i.i.d. theory. We also consider the interval statistics of the Old Faithful geyser in Yellowstone National Park. In this case, we find a significant deviation from the i.i.d. theory, which we attribute to antipersistence. We consider the interval statistics using the AL index of geomagnetic substorms. We again find a significant deviation from i.i.d. behavior that we attribute to mild persistence. Finally, we examine the behavior of the Standard & Poor's 500 stock index's daily returns from 1928 to 2011 and show that, while it is close to being i.i.d., there is, again, significant persistence.
We expect that there will be many other applications of our methodology both to interoccurrence statistics and to time series.
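The classical result quoted above, an average of three events per peak-to-peak sequence for i.i.d. data, is easy to reproduce numerically: in an i.i.d. series each interior point is a local maximum with probability 1/3, so peaks are on average three samples apart. A sketch with Gaussian white noise:

```python
import random

random.seed(1)

def peak_to_peak_lengths(x):
    """Lengths (in samples) of sequences between successive local maxima
    of a time series; a peak at index i satisfies x[i-1] < x[i] > x[i+1]."""
    peaks = [i for i in range(1, len(x) - 1) if x[i - 1] < x[i] > x[i + 1]]
    return [j - i for i, j in zip(peaks, peaks[1:])]

# Gaussian white noise: each sample is i.i.d., so a point is a local
# maximum with probability 1/3 and the mean peak-to-peak length is 3.
noise = [random.gauss(0.0, 1.0) for _ in range(100_000)]
lengths = peak_to_peak_lengths(noise)
mean_len = sum(lengths) / len(lengths)
print(f"mean peak-to-peak sequence length: {mean_len:.3f}")  # close to 3
```

Deviations of this mean from 3 in a long record are what the paper exploits as a test of the i.i.d. character of a data set.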
Suprathermal O(+) and H(+) ion behavior during the March 22, 1979 (CDAW 6), substorms
NASA Technical Reports Server (NTRS)
Ipavich, F. M.; Galvin, A. B.; Gloeckler, G.; Scholer, M.; Hovestadt, D.; Klecker, B.
1985-01-01
The objective of the present investigation is to report on the behavior of energetic (approximately 130 keV) O(+) ions in the earth's plasma sheet, taking into account observations by the ISEE 1 spacecraft during a magnetically active time interval encompassing two major substorms on March 22, 1979. Attention is also given to suprathermal H(+) and He(++) ions. ISEE 1 plasma sheet observations of the proton and alpha particle phase space densities as a function of energy per charge during the time interval 0933-1000 UT on March 22, 1979 are considered, along with the proton phase space density versus energy in the energy interval of approximately 10 to 70 keV for the selected time periods 0933-1000 UT (presubstorm) and 1230-1243 UT (recovery phase) of the 1055 substorm on March 22, 1979. A table listing the proton energy density for the presubstorm and recovery periods is also provided.
Two-Way Satellite Time and Frequency Transfer (TWSTFT) Calibration Constancy From Closure Sums
2008-12-01
40th Annual Precise Time and Time Interval (PTTI) Meeting, Paris, France. Two-Way Satellite Time and Frequency Transfer (TWSTFT) is considered to be the most accurate means of long-distance...explanations for small, but non-zero, biases observed in the closure sums of uncalibrated data are presented. I. INTRODUCTION: TWSTFT [1] has
Asayama, Kei; Thijs, Lutgarde; Li, Yan; Gu, Yu-Mei; Hara, Azusa; Liu, Yan-Ping; Zhang, Zhenyu; Wei, Fang-Fei; Lujambio, Inés; Mena, Luis J.; Boggia, José; Hansen, Tine W.; Björklund-Bodegård, Kristina; Nomura, Kyoko; Ohkubo, Takayoshi; Jeppesen, Jørgen; Torp-Pedersen, Christian; Dolan, Eamon; Stolarz-Skrzypek, Katarzyna; Malyutina, Sofia; Casiglia, Edoardo; Nikitin, Yuri; Lind, Lars; Luzardo, Leonella; Kawecka-Jaszcz, Kalina; Sandoya, Edgardo; Filipovský, Jan; Maestre, Gladys E.; Wang, Jiguang; Imai, Yutaka; Franklin, Stanley S.; O’Brien, Eoin; Staessen, Jan A.
2015-01-01
Outcome-driven recommendations about time intervals during which ambulatory blood pressure should be measured to diagnose white-coat or masked hypertension are lacking. We cross-classified 8237 untreated participants (mean age, 50.7 years; 48.4% women) enrolled in 12 population studies, using ≥140/≥90, ≥130/≥80, ≥135/≥85, and ≥120/≥70 mm Hg as hypertension thresholds for conventional, 24-hour, daytime, and nighttime blood pressure. White-coat hypertension was hypertension on conventional measurement with ambulatory normotension, the opposite condition being masked hypertension. Intervals used for classification of participants were daytime, nighttime, and 24 hours, first considered separately, and next combined as 24 hours plus daytime or plus nighttime, or plus both. Depending on time intervals chosen, white-coat and masked hypertension frequencies ranged from 6.3% to 12.5% and from 9.7% to 19.6%, respectively. During 91 046 person-years, 729 participants experienced a cardiovascular event. In multivariable analyses with normotension during all intervals of the day as reference, hazard ratios associated with white-coat hypertension progressively weakened considering daytime only (1.38; P=0.033), nighttime only (1.43; P=0.0074), 24 hours only (1.21; P=0.20), 24 hours plus daytime (1.24; P=0.18), 24 hours plus nighttime (1.15; P=0.39), and 24 hours plus daytime and nighttime (1.16; P=0.41). The hazard ratios comparing masked hypertension with normotension were all significant (P<0.0001), ranging from 1.76 to 2.03. In conclusion, identification of truly low-risk white-coat hypertension requires setting thresholds simultaneously to 24 hours, daytime, and nighttime blood pressure. Although any time interval suffices to diagnose masked hypertension, as proposed in current guidelines, full 24-hour recordings remain standard in clinical practice. PMID:25135185
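The cross-classification described above can be sketched as a small helper. The thresholds are those quoted in the abstract; the rule that ambulatory hypertension means exceedance in any chosen interval is one of the combinations the study considers, and the example readings are invented:

```python
def classify_bp(conv, daytime, nighttime, day24):
    """Cross-classify one untreated subject using the thresholds from the
    abstract: conventional >=140/90, 24-h >=130/80, daytime >=135/85,
    nighttime >=120/70 mm Hg. Each argument is (systolic, diastolic).
    Ambulatory hypertension here requires exceedance in ANY interval."""
    def high(bp, thr):
        return bp[0] >= thr[0] or bp[1] >= thr[1]

    office_high = high(conv, (140, 90))
    amb_high = (high(day24, (130, 80)) or high(daytime, (135, 85))
                or high(nighttime, (120, 70)))
    if office_high and not amb_high:
        return "white-coat hypertension"
    if amb_high and not office_high:
        return "masked hypertension"
    return "sustained hypertension" if office_high else "normotension"

print(classify_bp((150, 95), (128, 80), (112, 65), (124, 76)))  # white-coat
print(classify_bp((132, 84), (138, 88), (124, 76), (133, 82)))  # masked
```

The study's finding is that only the conjunction of all three ambulatory intervals (24-hour plus daytime plus nighttime) isolates a truly low-risk white-coat group.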
24 CFR 3280.401 - Structural load tests.
Code of Federal Regulations, 2013 CFR
2013-04-01
... sustaining its dead load plus superimposed live loads equal to 1.75 times the required live loads for a... in 1/4 design live load increments at 10-minute intervals until 1.25 times design live load plus dead... load plus dead load has been reached. Assembly failure shall be considered as design live load...
Comparison of English Language Rhythm and Kalhori Kurdish Language Rhythm
ERIC Educational Resources Information Center
Taghva, Nafiseh; Zadeh, Vahideh Abolhasani
2016-01-01
The interval-based method studies the quantitative rhythmic features of languages. It uses the Pairwise Variability Index (PVI) to measure the variability of vocalic and inter-vocalic durations in sentences, leading to a classification of language rhythm into stress-timed and syllable-timed languages. This study…
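The PVI mentioned above is, in its normalized form (nPVI), a mean of pairwise duration differences scaled by the pair means. A sketch with invented durations:

```python
def npvi(durations):
    """Normalized Pairwise Variability Index for a list of successive
    interval durations (e.g. vocalic intervals, in ms):
    nPVI = 100/(m-1) * sum |d_k - d_{k+1}| / ((d_k + d_{k+1}) / 2)."""
    pairs = zip(durations, durations[1:])
    terms = [abs(a - b) / ((a + b) / 2) for a, b in pairs]
    return 100 * sum(terms) / len(terms)

# Higher nPVI = more variable durations (stress-timed-like rhythm);
# lower nPVI = more even durations (syllable-timed-like rhythm).
# The durations below are invented for illustration.
print(round(npvi([120, 60, 140, 50, 130]), 1))  # 82.6
print(round(npvi([100, 95, 105, 98, 102]), 1))  # 6.5
```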
Kieffer, James D.
2017-01-01
The most utilized method to measure the swimming performance of fishes has been the critical swimming speed (UCrit) test, in which the fish is forced to swim against an incrementally increasing flow of water until fatigue. Before the water velocity is increased, the fish swims at each velocity for a specific, pre-arranged time interval. The magnitude of the velocity increments and the time interval for each swimming period can vary across studies, making comparisons between and within species difficult. This issue has been acknowledged in the literature; however, little empirical evidence exists that tests the importance of velocity and time increments on swimming performance in fish. A practical application of fish performance is the design of fishways that enable fish to bypass anthropogenic structures (e.g. dams) that block migration routes, one of the causes of the world-wide decline in sturgeon populations. While fishways will improve sturgeon conservation, they need to be specifically designed to accommodate the swimming capabilities of sturgeons, and it is possible that current swimming methodologies have underestimated the swimming performance of sturgeons. The present study assessed the UCrit of shortnose sturgeon using modified UCrit tests to determine the importance of velocity increment (5 and 10 cm s−1) and time interval (5, 15 and 30 min) on swimming performance. UCrit was found to be influenced by both time interval and water velocity: it was generally lower when sturgeon were swum with 5 cm s−1 increments than with 10 cm s−1 increments, and velocity increment influenced UCrit more than time interval. Overall, researchers must consider the impact of their chosen swimming criteria when designing experiments. PMID:28835841
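UCrit in such tests is conventionally computed with Brett's formula, which combines the last fully completed velocity with the fraction of the final interval endured. A sketch (the study's own protocol details are not reproduced here):

```python
def ucrit(last_full_velocity, time_at_fatigue, time_interval, increment):
    """Brett's critical swimming speed: UCrit = Uf + (tf/ti) * Ui, where
    Uf is the last velocity maintained for the full interval, tf the time
    swum at the fatigue velocity, ti the interval length, and Ui the
    velocity increment (all speeds in cm/s, times in min)."""
    return last_full_velocity + (time_at_fatigue / time_interval) * increment

# A fish completing 40 cm/s, then fatiguing 6 min into a 15-min step
# with 5 cm/s increments:
print(ucrit(40.0, 6.0, 15.0, 5.0))  # 42.0 cm/s
```

The formula makes the abstract's point visible: the same fatigue event yields different UCrit values under different increment/interval choices.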
Arteyeva, Natalia V; Azarov, Jan E
The aim of the study was to differentiate the effects of dispersion of repolarization (DOR) and action potential duration (APD) on the T-wave parameters considered as indices of DOR, namely the Tpeak-Tend interval, T-wave amplitude and T-wave area. The T-wave was simulated over a wide physiological range of DOR and APD using a realistic rabbit model based on experimental data, and a simplified mathematical formulation of T-wave formation was derived. Both the simulations and the mathematical formulation showed that the Tpeak-Tend interval and T-wave area are linearly proportional to DOR irrespective of the APD range, while T-wave amplitude is non-linearly proportional to DOR and inversely proportional to the minimal repolarization time, or minimal APD value. The Tpeak-Tend interval and T-wave area are the most accurate DOR indices independent of APD. T-wave amplitude can be considered an index of DOR when the level of APD is taken into account. Copyright © 2017 Elsevier Inc. All rights reserved.
Safe and effective error rate monitors for SS7 signaling links
NASA Astrophysics Data System (ADS)
Schmidt, Douglas C.
1994-04-01
This paper describes SS7 error monitor characteristics, discusses the existing SUERM (Signal Unit Error Rate Monitor), and develops the recently proposed EIM (Error Interval Monitor) for higher speed SS7 links. An SS7 error monitor is considered safe if it ensures acceptable link quality and is considered effective if it is tolerant to short-term phenomena. Formal criteria for safe and effective error monitors are formulated in this paper. This paper develops models of changeover transients, the unstable component of queue length resulting from errors. These models take the form of recursive digital filters. Time is divided into sequential intervals; the filter's input is the number of errors that have occurred in each interval, and the output is the corresponding change in transmit queue length. Engineered EIMs are constructed by comparing an estimated changeover transient with a threshold T, using a transient model modified to enforce SS7 standards. When this estimate exceeds T, a changeover is initiated and the link is removed from service. EIMs can be differentiated from SUERMs by the fact that EIMs monitor errors over an interval while SUERMs count errored messages. EIMs offer several advantages over SUERMs: they are safe and effective, impose uniform standards of link quality, are easily implemented, and make minimal use of real-time resources.
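The recursive-filter idea behind the EIM can be sketched as a first-order leaky accumulator; the coefficients and threshold below are illustrative stand-ins, not the engineered SS7 values derived in the paper:

```python
def error_interval_monitor(errors_per_interval, a=0.9, b=1.0, threshold=8.0):
    """Sketch of an EIM: a first-order recursive filter estimates the
    changeover transient (queue-length growth) from the count of errors
    in each interval, and the link is taken out of service when the
    estimate crosses the threshold. The coefficients a, b and the
    threshold are illustrative, not SS7-standard values."""
    q = 0.0
    for n, e in enumerate(errors_per_interval):
        q = a * q + b * e          # leaky accumulation of error bursts
        if q > threshold:
            return n               # interval at which changeover fires
    return None                    # link stays in service

# Sparse errors decay away; a sustained burst trips the monitor.
print(error_interval_monitor([1, 0, 0, 1, 0, 0, 0, 0]))   # None
print(error_interval_monitor([0, 0, 4, 5, 3, 0, 0, 0]))   # 3
```

The leaky term is what makes the monitor "effective" in the paper's sense: isolated short-term error events decay before reaching the threshold.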
Migration of Trans-Neptunian Objects to a Near-Earth Space
NASA Technical Reports Server (NTRS)
Ipatov, S. I.; Mather, J. C.; Oegerle, William (Technical Monitor)
2002-01-01
Our estimates of the migration of trans-Neptunian objects (TNOs) to near-Earth space are based on the results of investigations of the orbital evolution of TNOs and Jupiter-crossing objects (JCOs). The orbital evolution of TNOs has been considered in many papers. Recently we investigated the evolution of 2500 JCOs for intervals of at least 5-10 Myr under the gravitational influence of all planets except Mercury and Pluto (without dissipative factors). In the first series we considered N=2000 orbits near the orbits of 30 real Jupiter-family comets with period P(sub a) less than 10 yr, and in the second series we took N=500 orbits close to the orbit of Comet 10P Tempel 2 (a=3.1 AU, e=0.53, i=12 deg). We calculated the probabilities of collisions of objects with the terrestrial planets, using orbital elements obtained with a step of 500 yr, and then summed the results over all time intervals and all bodies, obtaining the total probability P(sub sigma) of collisions with a planet and the total time interval T(sub sigma) during which the perihelion distance q of a body was less than the semimajor axis of the planet.
[Influence of age on systolic and diastolic time intervals in normal individuals].
Soares-Costa, J T; Soares-Costa, T J; Santos, A J; Monteiro, A J
1991-12-01
To evaluate the influence of age (I) on the left ventricular (LV) systolic time intervals, the S2O interval, the pulse transmission time (TTP) and the relative amplitude of the a wave (Aa%) of the apexcardiogram (ACG) in normal individuals, 202 subjects considered normal on clinical and electrocardiographic examination were studied. Their age was 38 +/- 13 years (mean +/- 1 SD); 125 were male and 77 female. The electrocardiogram (ECG), phonocardiogram, ACG and carotid arterial pulse tracing (PC) were recorded simultaneously. The following intervals were determined: the electromechanical interval (IEM), from the onset of the QRS complex of the ECG to the ascending branch of the great wave of the ACG (A point); mechanical systole (SM), from the A point of the ACG to the beginning of the first high-frequency vibration of the aortic component of the second heart sound (S2); the ejection period (FE), from the beginning of the anacrotic branch of the PC to the nadir of its dicrotic notch (ID); the isovolumic contraction time (FIS), obtained by subtracting the FE duration from the SM duration; the S2O interval, from S2 to the O point (nadir) of the ACG; Aa%, the percentage ratio between the a-wave amplitude and the total amplitude of the ACG; and the pulse transmission time, from S2 to ID. Statistically significant correlations (p less than 0.05) between I (expressed in years) and the above variables were investigated.
The following was verified: a) the IEM and FIS intervals were not significantly correlated with I; b) FE had a linear, positive and significant correlation with I (r = 0.222); c) the correlations between FE and heart rate (FC) were not significantly different among the age groups considered (14-34, 35-49, 50-69 years); d) the S2O interval had a linear, negative and significant correlation with FC (r = -0.196), and a linear, positive and significant correlation with I (r = 0.392); e) the multiple regression equation relating S2O, I and FC was S2O = 70 - 0.36 x FC + 0.55 x I; f) Aa% had a linear, positive and significant correlation with I (r = 0.252); g) TTP had a linear, negative and significant correlation with I (r = -0.793). In conclusion: a) FE increases with I, probably related to the afterload increase that accompanies aging; b) the S2O interval increases with I, reflecting the prolongation of relaxation time associated with senescence; c) Aa% increases with I, expressing the reduction in LV compliance associated with aging; d) TTP decreases with I, related to the increase in pulse-wave velocity that accompanies senescence and is attributed to increasing aortic stiffness.
Improving laboratory results turnaround time by reducing pre analytical phase.
Khalifa, Mohamed; Khalid, Parwaiz
2014-01-01
Laboratory turnaround time is considered one of the most important indicators of work efficiency in hospitals. Physicians need timely results to make effective clinical decisions, especially in the emergency department, where results can guide whether to admit patients to the hospital, discharge them home, or perform further investigations. A retrospective data analysis study was performed to identify the effects of training ER and lab staff on new routines for sample collection and transportation on the pre-analytical phase of turnaround time. Renal profile tests requested by the ER and performed in 2013 were selected as a sample, and data on 7,519 tests were retrieved and analyzed to compare turnaround time intervals before and after implementing the new routines. Results showed significant time reductions in the "Request to Sample Collection" and "Collection to In-Lab Delivery" intervals, with less significant improvement in the analytical phase of the turnaround time.
On the use and the performance of software reliability growth models
NASA Technical Reports Server (NTRS)
Keiller, Peter A.; Miller, Douglas R.
1991-01-01
We address the problem of predicting future failures for a piece of software. The number of failures occurring during a finite future time interval is predicted from the number of failures observed during an initial period of usage, using software reliability growth models. Two different methods for using the models are considered: straightforward use of individual models, and dynamic selection among models based on goodness-of-fit and quality-of-prediction criteria. Performance is judged by the error of the predicted number of failures over future finite time intervals relative to the number of failures eventually observed during those intervals. Six of the former models and eight of the latter are evaluated, based on their performance on twenty data sets. Many open questions remain regarding the use and the performance of software reliability growth models.
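As one concrete instance of such a model, the Goel-Okumoto NHPP (a standard reliability growth model, not necessarily among those evaluated in the paper) predicts failures in a future interval directly from its mean value function:

```python
import math

def goel_okumoto_mean(t, a, b):
    """Expected cumulative failures by time t under the Goel-Okumoto
    NHPP model, m(t) = a * (1 - exp(-b * t)); a is the total expected
    failure count and b the per-unit-time detection rate."""
    return a * (1.0 - math.exp(-b * t))

def predicted_failures(t1, t2, a, b):
    """Expected number of failures in the future interval (t1, t2]."""
    return goel_okumoto_mean(t2, a, b) - goel_okumoto_mean(t1, a, b)

# With a = 120 total expected failures and b = 0.02 per hour (invented
# fitted values), predict failures in the next 50 hours after 100 hours
# of observed usage:
print(round(predicted_failures(100.0, 150.0, a=120.0, b=0.02), 2))  # 10.27
```

The paper's relative-error criterion would compare such a prediction with the failure count eventually observed over the same interval.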
A Joint Replenishment Inventory Model with Lost Sales
NASA Astrophysics Data System (ADS)
Devy, N. L.; Ai, T. J.; Astanti, R. D.
2018-04-01
This paper deals with a two-item joint replenishment inventory problem in which the demand for each item is constant and deterministic. Inventory replenishment is conducted periodically every T time intervals, and joint replenishment of both items is possible; item i is replenished every ZiT time intervals. Replenishments are instantaneous, and all shortages are treated as lost sales, with a maximum allowance for lost sales of item i of Si. A mathematical model is formulated to determine the basic time cycle T, the replenishment multipliers Zi, and the maximum lost sales Si that minimize the total cost per unit time. A solution methodology is proposed to solve the model, and a numerical example demonstrates the effectiveness of the proposed methodology.
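A brute-force version of the optimization, simplified by dropping the lost-sales terms, shows the structure of the decision variables T and Zi (all cost parameters are invented):

```python
import itertools

def total_cost_rate(T, Z, demand, hold, major=50.0, minor=(10.0, 10.0)):
    """Cost per unit time for a simplified two-item joint replenishment
    model (no lost sales): each base cycle T incurs the major ordering
    cost, item i is ordered every Z[i]*T, and the average inventory of
    item i is demand[i]*Z[i]*T/2. Parameter values are illustrative."""
    setup = major / T + sum(m / (z * T) for m, z in zip(minor, Z))
    holding = sum(h * d * z * T / 2 for h, d, z in zip(hold, demand, Z))
    return setup + holding

def best_policy(demand, hold, T_grid, Z_values=(1, 2, 3, 4)):
    """Grid search over the basic cycle T and integer multipliers Zi."""
    return min(((total_cost_rate(T, Z, demand, hold), T, Z)
                for T in T_grid
                for Z in itertools.product(Z_values, repeat=2)),
               key=lambda c: c[0])

cost, T, Z = best_policy(demand=(100.0, 20.0), hold=(2.0, 1.0),
                         T_grid=[i / 10 for i in range(1, 41)])
print(f"T = {T:.1f}, Z = {Z}, cost rate = {cost:.1f}")
```

The paper's model additionally optimizes the lost-sales allowances Si; a grid search like this is only workable because the decision space here is small.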
Pullin, A N; Pairis-Garcia, M D; Campbell, B J; Campler, M R; Proudfoot, K L
2017-11-01
When considering methodologies for collecting behavioral data, continuous sampling provides the most complete and accurate data set, whereas instantaneous sampling can provide similar results while increasing the efficiency of data collection. However, instantaneous time intervals require validation to ensure accurate estimation of the data. Therefore, the objective of this study was to validate scan sampling intervals for lambs housed in a feedlot environment. Feeding, lying, standing, drinking, locomotion, and oral manipulation were measured on 18 crossbred lambs housed in an indoor feedlot facility for 14 h (0600-2000 h). Data from continuous sampling were compared with data from instantaneous scan sampling intervals of 5, 10, 15, and 20 min using a linear regression analysis. Three criteria determined whether a time interval accurately estimated a behavior: 1) R² ≥ 0.90, 2) slope not statistically different from 1 (P > 0.05), and 3) intercept not statistically different from 0 (P > 0.05). Estimations for lying behavior were accurate up to 20-min intervals, whereas feeding and standing behaviors were accurate only at 5-min intervals (i.e., met all 3 regression criteria). Drinking, locomotion, and oral manipulation demonstrated poor associations (low R²) for all tested intervals. The results from this study suggest that a 5-min instantaneous sampling interval will accurately estimate lying, feeding, and standing behaviors for lambs housed in a feedlot, whereas continuous sampling is recommended for the remaining behaviors. This methodology will contribute toward the efficiency, accuracy, and transparency of future behavioral data collection in lamb behavior research.
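The validation procedure amounts to regressing scan-sample estimates on continuous observations and checking the criteria above; a self-contained sketch with invented lying-time data (only the R² criterion is checked here, since the slope and intercept criteria require t-tests):

```python
def ols(x, y):
    """Slope, intercept and R^2 of a simple least-squares fit of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1 - ss_res / ss_tot

# Daily lying time (h) per lamb: continuous record vs a scan-sampling
# estimate (values invented for illustration).
continuous = [10.2, 8.9, 11.5, 9.8, 10.7, 12.1]
scanned = [10.0, 9.1, 11.3, 9.9, 10.5, 12.0]
slope, intercept, r2 = ols(continuous, scanned)
print(f"slope={slope:.2f} intercept={intercept:.2f} R2={r2:.3f}")
```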
A Benchmark for Comparing Different Approaches for Specifying and Verifying Real-Time Systems
1993-01-01
To be considered correct or useful, real-time systems must deliver results within specified time intervals, either without exception or with high...probability. Recently, a large number of formal methods have been invented for specifying and verifying real-time systems. It has been suggested that...these formal methods need to be tested out on actual real-time systems. Such testing will allow the scalability of the methods to be assessed and also
Robust stability of interval bidirectional associative memory neural network with time delays.
Liao, Xiaofeng; Wong, Kwok-wo
2004-04-01
In this paper, the conventional bidirectional associative memory (BAM) neural network with signal transmission delay is intervalized in order to study the bounded effect of deviations in network parameters and external perturbations. The resulting model is referred to as a novel interval dynamic BAM (IDBAM) model. By combining a number of different Lyapunov functionals with the Razumikhin technique, some sufficient conditions for the existence of a unique equilibrium and for robust stability are derived. These results are fairly general and can be verified easily. We then extend our investigation to the time-varying delay case and derive some robust stability criteria for BAM with perturbations of time-varying delays. Moreover, our approach to the analysis allows us to consider several different types of activation functions, including piecewise linear sigmoids with bounded activations as well as the usual C1-smooth sigmoids. We believe that the results obtained are of guiding significance in the design and application of BAM neural networks.
Heuristic algorithms for the minmax regret flow-shop problem with interval processing times.
Ćwik, Michał; Józefczyk, Jerzy
2018-01-01
An uncertain version of the permutation flow-shop problem with unlimited buffers and the makespan as the criterion is considered. The investigated parametric uncertainty is represented by given interval-valued processing times, and the maximum regret is used for the evaluation of uncertainty. Consequently, the minmax regret discrete optimization problem is solved. Due to its high complexity, two relaxations are applied to simplify the optimization procedure. First, a greedy procedure is used for calculating the criterion's value, as this calculation is an NP-hard problem itself. Moreover, the lower bound is used instead of solving the internal deterministic flow-shop problem. A constructive heuristic algorithm is applied to the relaxed optimization problem and compared with other previously developed heuristic algorithms based on the evolutionary and middle-interval approaches. The computational experiments showed the advantage of the constructive heuristic algorithm with regard to both the criterion and the computation time. The Wilcoxon paired-rank statistical test confirmed this conclusion.
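The two ingredients, a permutation flow-shop makespan evaluation and a middle-interval relaxation, can be sketched as follows; the dispatch rule used here is a deliberately simple stand-in, not the constructive heuristic of the paper:

```python
def makespan(perm, p):
    """Permutation flow-shop makespan: p[j][m] is the processing time of
    job j on machine m; jobs pass through the machines in order."""
    machines = len(p[0])
    finish = [0.0] * machines
    for j in perm:
        for m in range(machines):
            start = max(finish[m], finish[m - 1] if m else 0.0)
            finish[m] = start + p[j][m]
    return finish[-1]

def midpoint_heuristic(intervals):
    """Middle-interval approach: replace each interval [lo, hi] by its
    midpoint, then order jobs by total midpoint processing time (a
    simple dispatch rule, not the paper's constructive algorithm)."""
    mid = [[(lo + hi) / 2 for lo, hi in job] for job in intervals]
    perm = sorted(range(len(mid)), key=lambda j: sum(mid[j]))
    return perm, makespan(perm, mid)

# Three jobs, two machines; processing times given as intervals.
jobs = [[(2, 4), (5, 7)], [(6, 8), (1, 3)], [(3, 5), (3, 5)]]
perm, cmax = midpoint_heuristic(jobs)
print(perm, cmax)  # [2, 0, 1] 16.0
```

The minmax regret problem then asks for the permutation whose worst-case regret over all realizations of the intervals is smallest, which is what makes the full problem so much harder than this relaxation.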
Gherardini, Stefano
2018-01-01
The improvement of clotting factor concentrates (CFCs) has undergone an impressive boost during the last six years. Since 2010, several new recombinant factor (rF)VIII/IX concentrates have entered phase I/II/III clinical trials. The improvements are related to the culture of human embryonic kidney (HEK) cells, post-translational glycosylation, PEGylation, and co-expression of the fragment crystallizable (Fc) region of immunoglobulin (Ig)G1 or albumin genes in the manufacturing procedures. The extended half-life (EHL) CFCs allow an increase in the interval between bolus administrations during prophylaxis, a very important advantage for patients with difficult venous access. Although the inhibitor risk has not been fully established, phase III studies have provided standard prophylaxis protocols which, compared with on-demand treatment, have achieved very low annualized bleeding rates (ABRs). The key pharmacokinetic (PK) parameter for tailoring patient therapy is clearance, which is more reliable than the half-life of CFCs: the clearance reflects the decay rate of the whole drug concentration-time profile, while the half-life considers only the halving of the drug concentration from a given time. To tailor the prophylaxis of hemophilia patients in real life, we propose two formulae (expressed in terms of the clearance, the trough and the dose interval between prophylactic administrations), based respectively on the one- and two-compartment models (CMs), for predicting the optimal single dose of EHL CFCs. Once the time-decay data of the CFCs are fitted by the one- or two-CM after an individual PK analysis, these formulae provide the treater with the optimal trade-off between the trough and the time interval between boluses. In this way, a sufficiently long time interval between bolus administrations can be guaranteed for a wider class of patients at a preassigned trough level. Finally, a PK approach using repeated dosing is discussed, and some examples with new EHL CFCs are shown.
PMID:29899890
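The role of clearance in such dose formulae can be illustrated with the textbook one-compartment repeated-bolus model (a generic sketch, not the paper's proposed formulae; the parameter values are invented):

```python
import math

def dose_for_trough(trough, clearance, volume, tau):
    """One-compartment IV-bolus model at steady state: with k = CL/V,
    the trough is C = (D/V) * exp(-k*tau) / (1 - exp(-k*tau)), so the
    dose for a target trough is D = C * V * (exp(k*tau) - 1). This is
    the textbook formula, not the paper's two-compartment variant."""
    k = clearance / volume
    return trough * volume * (math.exp(k * tau) - 1.0)

# Hypothetical EHL FVIII numbers: CL = 2 mL/h/kg, V = 40 mL/kg, target
# trough 0.03 IU/mL (3 IU/dL), dosing every 72 h.
d = dose_for_trough(trough=0.03, clearance=2.0, volume=40.0, tau=72.0)
print(f"dose ≈ {d:.1f} IU/kg")
```

Note the exponential dependence on clearance times the interval: this is why the abstract argues that clearance, not half-life alone, is the parameter on which to tailor the trade-off between trough level and dosing interval.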
Changes in crash risk following re-timing of traffic signal change intervals.
Retting, Richard A; Chapline, Janella F; Williams, Allan F
2002-03-01
More than 1 million motor vehicle crashes occur annually at signalized intersections in the USA. The principal method used to prevent crashes associated with routine changes in signal indications is the traffic signal change interval: a brief yellow and all-red period that follows the green indication. No universal practice exists for selecting the duration of change intervals, and little is known about the influence of that duration on crash risk. The purpose of this study was to estimate the potential crash effects of modifying the duration of traffic signal change intervals to conform with values associated with a proposed recommended practice published by the Institute of Transportation Engineers. A sample of 122 intersections was identified and randomly assigned to experimental and control groups. Of 51 eligible experimental sites, 40 (78%) needed signal timing changes. For the 3-year period following implementation of the signal timing changes, there was an 8% reduction in reportable crashes at experimental sites relative to those occurring at control sites (P = 0.08). For injury crashes, a 12% reduction at experimental sites relative to control sites was found (P = 0.03). Pedestrian and bicycle crashes at experimental sites decreased 37% (P = 0.03) relative to controls. Given these results and the relatively low cost of re-timing traffic signals, modifying the duration of traffic signal change intervals to conform with the Institute of Transportation Engineers' proposed recommended practice should be strongly considered by transportation agencies to reduce the frequency of urban motor vehicle crashes.
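For context, change-interval durations under the ITE proposed recommended practice are conventionally computed from kinematic formulas; a sketch (the parameter defaults are typical textbook values, not taken from this study):

```python
def change_interval(speed_mph, grade=0.0, t_react=1.0, decel=10.0,
                    width_ft=60.0, car_len_ft=20.0):
    """Kinematic change-interval formulas commonly associated with the
    ITE proposed recommended practice: yellow y = t + v / (2a + 2*G*g)
    and all-red r = (w + L) / v, with v the approach speed in ft/s,
    a = 10 ft/s^2 the deceleration, g = 32.2 ft/s^2, G the approach
    grade, w the intersection width and L a car length."""
    v = speed_mph * 1.467                        # mph -> ft/s
    yellow = t_react + v / (2 * decel + 2 * 32.2 * grade)
    all_red = (width_ft + car_len_ft) / v
    return round(yellow, 1), round(all_red, 1)

print(change_interval(35))   # (3.6, 1.6) for a level 35 mph approach
```

Re-timing a site to "conform" in the study's sense means bringing its yellow and all-red durations up to values like these for its approach speed and geometry.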
Passive longitudes of solar cosmic rays in 19-24 solar cycles
NASA Astrophysics Data System (ADS)
Getselev, Igor; Podzolko, Mikhail; Shatov, Pavel; Tasenko, Sergey; Skorohodov, Ilya; Okhlopkov, Viktor
The distribution of solar proton event sources along the Carrington longitude in solar cycles 19-24 is considered. For this study an extensive database of ≈450 solar proton events has been constructed using various available sources and solar cosmic ray measurements; it includes the time of each event, the fluences of protons of various energies, and the coordinates of the event's source on the Sun. The analysis has shown a significant inhomogeneity of the distribution. In particular, a region of “passive longitudes” has been discovered, extensive both in longitude (from ≈90-100° to 170°) and in lifetime (the whole period of observations). Of the 60 most powerful proton events during solar cycles 19-24, no more than one originated from the interval of 100-170° Carrington longitude; of another 80 “medium” events, only 10 were injected from this interval. The summed proton fluence of the events whose sources lie in the interval of 90-170° amounts to only 5% of the total fluence of all the considered events, and to just 1.2% if the single “anomalous” powerful event is excluded. The existence of this extensive and stable interval of “passive” Carrington longitudes is a remarkable phenomenon in solar physics. It also confirms the physical relevance of the mean synodic period of the Sun's rotation determined by R. C. Carrington.
Tamhankar, Ashwin S; Pawar, Prakash W; Sawant, Ajit S; Kasat, Gaurav V; Savaliya, Abhishek; Mundhe, Shankar; Patil, Sunil; Narwade, Sayalee
2017-01-01
Penile fracture is a relatively common phenomenon. The main problem associated with this condition is the lack of patient awareness of the urgency of the situation. This study reports the different modes of presentation and treatment results. We reviewed 21 cases of penile fracture over 5 years. Parameters were mode of injury, age group, time interval before presentation, management, site of injury, urethral involvement, results, complications and erectile function at follow-up. The mean age of patients was 34 years, and the mean time interval until presentation was 26 h. Cases involving the right corpus cavernosum comprised 57.14% of the series and cases involving the left corpus cavernosum 42.85%. Two patients had a full circumferential urethral tear. Two patients developed wound infections and 2 patients developed mild penile curvature (<30°). These 4 patients had all presented late for treatment (>40 h). Urologists need to consider penile fracture a urological emergency, and atypical presentations need to be considered when deciding on management. © 2017 S. Karger AG, Basel.
Evaluation of SAMe-TT2R2 Score on Predicting Success With Extended-Interval Warfarin Monitoring.
Hwang, Andrew Y; Carris, Nicholas W; Dietrich, Eric A; Gums, John G; Smith, Steven M
2018-06-01
In patients with stable international normalized ratios, 12-week extended-interval warfarin monitoring can be considered; however, predictors of success with this strategy are unknown. The previously validated SAMe-TT2R2 score (considering sex, age, medical history, treatment, tobacco, and race) predicts anticoagulation control during standard follow-up (every 4 weeks), with lower scores associated with greater time in therapeutic range. To evaluate the ability of the SAMe-TT2R2 score in predicting success with extended-interval warfarin follow-up in patients with previously stable warfarin doses. In this post hoc analysis of a single-arm feasibility study, baseline SAMe-TT2R2 scores were calculated for patients with ≥1 extended-interval follow-up visit. The primary analysis assessed achieved weeks of extended-interval follow-up according to baseline SAMe-TT2R2 scores. A total of 47 patients receiving chronic anticoagulation completed a median of 36 weeks of extended-interval follow-up. The median baseline SAMe-TT2R2 score was 1 (range 0-5). Lower SAMe-TT2R2 scores appeared to be associated with greater duration of extended-interval follow-up achieved, though the differences between scores were not statistically significant. No individual variable of the SAMe-TT2R2 score was associated with achieved weeks of extended-interval follow-up. Analysis of additional patient factors found that longer duration (≥24 weeks) of prior stable treatment was significantly associated with greater weeks of extended-interval follow-up completed (P = 0.04). Conclusion and Relevance: This pilot study provides limited evidence that the SAMe-TT2R2 score predicts success with extended-interval warfarin follow-up but requires confirmation in a larger study. Further research is also necessary to establish additional predictors of successful extended-interval warfarin follow-up.
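For orientation, the score itself can be computed as in the sketch below. The component weights follow the commonly cited definition (1 point each for female Sex, Age < 60, Medical history of at least two comorbidities, and interacting Treatment such as amiodarone; 2 points each for Tobacco use within 2 years and non-white Race), but the original validation paper should be consulted for the exact criteria; this is an assumption-laden sketch, not the study's calculator.

```python
def same_tt2r2(sex_female, age_under_60, medical_history,
               interacting_drugs, tobacco_within_2y, non_white):
    """Sketch of the SAMe-TT2R2 score (range 0-8): booleans in, points out.
    Component definitions here are the commonly cited ones and may differ
    in detail from the validation paper."""
    return (int(sex_female) + int(age_under_60) + int(medical_history)
            + int(interacting_drugs)
            + 2 * int(tobacco_within_2y) + 2 * int(non_white))

# Example: a male patient under 60 who smokes scores 1 + 2 = 3.
score = same_tt2r2(False, True, False, False, True, False)
```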
Decentralized Hypothesis Testing in Energy Harvesting Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Tarighati, Alla; Gross, James; Jalden, Joakim
2017-09-01
We consider the problem of decentralized hypothesis testing in a network of energy harvesting sensors, where sensors make noisy observations of a phenomenon and send quantized information about the phenomenon towards a fusion center. The fusion center makes a decision about the present hypothesis using the aggregate received data during a time interval. We explicitly consider a scenario under which the messages are sent through parallel access channels towards the fusion center. To avoid limited lifetime issues, we assume each sensor is capable of harvesting all the energy it needs for the communication from the environment. Each sensor has an energy buffer (battery) to save its harvested energy for use in other time intervals. Our key contribution is to formulate the problem of decentralized detection in a sensor network with energy harvesting devices. Our analysis is based on a queuing-theoretic model for the battery and we propose a sensor decision design method by considering long term energy management at the sensors. We show how the performance of the system changes for different battery capacities. We then numerically show how our findings can be used in the design of sensor networks with energy harvesting sensors.
Real-time flutter boundary prediction based on time series models
NASA Astrophysics Data System (ADS)
Gu, Wenjing; Zhou, Li
2018-03-01
For the purpose of predicting the flutter boundary in real time during flutter flight tests, two time series models, together with corresponding stability criteria, are adopted in this paper. The first method divides a long nonstationary response signal into many contiguous intervals, each of which is considered stationary, and a traditional AR model is then established to represent each interval of the signal sequence. The second employs a time-varying AR model to characterize signals measured in a flutter test with progression variable speed (FTPVS). To predict the flutter boundary, stability parameters are formulated from the identified AR coefficients combined with Jury's stability criterion. The behavior of the parameters is examined using both simulated and wind-tunnel experiment data. The results demonstrate that both methods are effective in predicting the flutter boundary at lower speed levels. A comparison between the two methods is also given in this paper.
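The first method's pipeline (fit an AR model on a stationary interval, then test stability) can be sketched generically. The code below uses a plain least-squares AR fit and a root-based stability check, which tests the same condition Jury's criterion verifies algebraically (all characteristic roots inside the unit circle); it is a stand-in sketch, not the authors' implementation.

```python
import numpy as np

def ar_fit(x, order):
    """Least-squares AR(p) fit: x[t] = a[0]*x[t-1] + ... + a[p-1]*x[t-p] + e[t]."""
    y = x[order:]
    X = np.column_stack([x[order - 1 - i : len(x) - 1 - i] for i in range(order)])
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

def is_stable(a):
    """Stable (no flutter onset) iff all roots of z^p - a[0]*z^(p-1) - ... - a[p-1]
    lie strictly inside the unit circle, the condition Jury's criterion tests
    without computing roots explicitly."""
    roots = np.roots(np.concatenate(([1.0], -np.asarray(a))))
    return bool(np.all(np.abs(roots) < 1.0))

# Example: a lightly damped, stable AR(2) response driven by small noise.
rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(2, 500):
    x[t] = 1.5 * x[t - 1] - 0.7 * x[t - 2] + 0.01 * rng.standard_normal()
a = ar_fit(x, 2)
```

As the airspeed approaches the flutter boundary, the dominant root pair migrates toward the unit circle, which is what the paper's stability parameters track.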
Timing and Causality in the Generation of Learned Eyelid Responses
Sánchez-Campusano, Raudel; Gruart, Agnès; Delgado-García, José M.
2011-01-01
The cerebellum-red nucleus-facial motoneuron (Mn) pathway has been reported as being involved in the proper timing of classically conditioned eyelid responses. This special type of associative learning serves as a model of event timing for studying the role of the cerebellum in dynamic motor control. Here, we have re-analyzed the firing activities of cerebellar posterior interpositus (IP) neurons and orbicularis oculi (OO) Mns in alert behaving cats during classical eyeblink conditioning, using a delay paradigm. The aim was to revisit the hypothesis that the IP neurons (IPns) can be considered a neuronal phase-modulating device supporting OO Mns firing with an emergent timing mechanism and an explicit correlation code during learned eyelid movements. Optimized experimental and computational tools allowed us to determine the different causal relationships (temporal order and correlation code) during and between trials. These intra- and inter-trial timing strategies, spanning from the sub-second range (millisecond timing) to longer-lasting ranges (interval timing), expanded the functional domain of cerebellar timing beyond motor control. Interestingly, the results supported the above-mentioned hypothesis. The causal inferences were influenced by the precise motor and pre-motor spike timing in the cause-effect interval, and, in addition, the timing of the learned responses depended on cerebellar–Mn network causality. Furthermore, the timing of CRs depended upon the probability of simulated causal conditions in the cause-effect interval and not the mere duration of the inter-stimulus interval. In this work, the close relation between timing and causality was verified. It could thus be concluded that the firing activities of IPns may be related more to the proper performance of ongoing CRs (i.e., the proper timing as a consequence of the pertinent causality) than to their generation and/or initiation. PMID:21941469
Measuring Food Use in School-Aged Children.
ERIC Educational Resources Information Center
Randall, Elizabeth
1991-01-01
Ideal measurement of food use in school-aged children possesses validity, reliability, interpretability, and feasibility. The article discusses dietary records and recalls and measures of patterns in food use. Issues to consider when constructing children's food frequency are food list, time intervals, response set, context of questioning, and…
Zhang, Zhi-Hui; Yang, Guang-Hong
2017-05-01
This paper provides a novel event-triggered fault detection (FD) scheme for discrete-time linear systems. First, an event-triggered interval observer is proposed to generate the upper and lower residuals by taking into account the influence of the disturbances and the event error. Second, the robustness of the residual interval against the disturbances and the fault sensitivity are improved by introducing l1 and H∞ performances. Third, dilated linear matrix inequalities are used to decouple the Lyapunov matrices from the system matrices. The nonnegative conditions for the estimation error variables are presented with the aid of the slack matrix variables. This technique allows considering a more general Lyapunov function. Furthermore, the FD decision scheme is proposed by monitoring whether the zero value belongs to the residual interval. It is shown that the information communication burden is reduced by designing the event-triggering mechanism, while the FD performance can still be guaranteed. Finally, simulation results demonstrate the effectiveness of the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
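The FD decision rule described above reduces to a containment test on the residual interval. A minimal sketch, with hypothetical residual bounds standing in for the observer's output:

```python
def fault_detected(r_lower, r_upper):
    """Interval-observer FD decision: in the fault-free case the residual
    interval [r_lower, r_upper] is designed to contain zero, so a fault is
    declared as soon as zero leaves the interval."""
    return not (r_lower <= 0.0 <= r_upper)

# Hypothetical residual bounds over three sampling instants; the interval
# drifts above zero at the last step, triggering the alarm.
bounds = [(-0.4, 0.3), (-0.2, 0.5), (0.1, 0.6)]
alarms = [fault_detected(lo, hi) for lo, hi in bounds]
```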
NASA Astrophysics Data System (ADS)
Muratore-Ginanneschi, Paolo
2005-05-01
Investment strategies in multiplicative Markovian market models with transaction costs are defined using growth optimal criteria. The optimal strategy is shown to consist in holding the amount of capital invested in stocks within an interval around an ideal optimal investment. The size of the holding interval is determined by the intensity of the transaction costs and the time horizon. The inclusion of financial derivatives in the models is also considered. All the results presented in this contribution were previously derived in collaboration with E. Aurell.
Trading Speed and Accuracy by Coding Time: A Coupled-circuit Cortical Model
Standage, Dominic; You, Hongzhi; Wang, Da-Hui; Dorris, Michael C.
2013-01-01
Our actions take place in space and time, but despite the role of time in decision theory and the growing acknowledgement that the encoding of time is crucial to behaviour, few studies have considered the interactions between neural codes for objects in space and for elapsed time during perceptual decisions. The speed-accuracy trade-off (SAT) provides a window into spatiotemporal interactions. Our hypothesis is that temporal coding determines the rate at which spatial evidence is integrated, controlling the SAT by gain modulation. Here, we propose that local cortical circuits are inherently suited to the relevant spatial and temporal coding. In simulations of an interval estimation task, we use a generic local-circuit model to encode time by ‘climbing’ activity, seen in cortex during tasks with a timing requirement. The model is a network of simulated pyramidal cells and inhibitory interneurons, connected by conductance synapses. A simple learning rule enables the network to quickly produce new interval estimates, which show signature characteristics of estimates by experimental subjects. Analysis of network dynamics formally characterizes this generic, local-circuit timing mechanism. In simulations of a perceptual decision task, we couple two such networks. Network function is determined only by spatial selectivity and NMDA receptor conductance strength; all other parameters are identical. To trade speed and accuracy, the timing network simply learns longer or shorter intervals, driving the rate of downstream decision processing by spatially non-selective input, an established form of gain modulation. Like the timing network's interval estimates, decision times show signature characteristics of those by experimental subjects. Overall, we propose, demonstrate and analyse a generic mechanism for timing, a generic mechanism for modulation of decision processing by temporal codes, and we make predictions for experimental verification. PMID:23592967
Solar forecast and real-time monitoring needs of the Study of Energy Release in Flares (SERF)
NASA Technical Reports Server (NTRS)
Rust, D. M.
1979-01-01
Complementary, simultaneous observations of flares from as many observatories, both ground based and orbiting, as possible planned for the Solar Maximum Year are considered. The need for forecasts of solar activity on long term, one week, and two day intervals is described. Real time reporting is not needed, but daily summaries of activity and permanent records are important.
The spatial return level of aggregated hourly extreme rainfall in Peninsular Malaysia
NASA Astrophysics Data System (ADS)
Shaffie, Mardhiyyah; Eli, Annazirin; Wan Zin, Wan Zawiah; Jemain, Abdul Aziz
2015-07-01
This paper is intended to ascertain the spatial pattern of extreme rainfall distribution in Peninsular Malaysia at several short time intervals, i.e., on an hourly basis. The motivation for this research is that, according to historical records of extreme rainfall in Peninsular Malaysia, many hydrological disasters in this region occur within a short time period. The hourly periods considered are 1, 2, 3, 6, 12, and 24 h. Many previous hydrological studies dealt with daily rainfall data; this study therefore enables a comparison between daily and hourly rainfall analyses, so as to identify the impact of extreme rainfall at a shorter time scale. Return levels based on the time aggregates considered are also computed. Parameter estimation using the L-moment method was conducted for four probability distributions, namely, the generalized extreme value (GEV), generalized logistic (GLO), generalized Pareto (GPA), and Pearson type III (PE3) distributions. Aided by the L-moment diagram test and the mean square error (MSE) test, GLO was found to be the most appropriate distribution to represent the extreme rainfall data. For the return periods considered (10, 50, and 100 years), the spatial patterns revealed that the rainfall distribution across the peninsula differs between 1- and 24-h extreme rainfalls. The outcomes of this study provide additional information on patterns of extreme rainfall in Malaysia that may not be detected at a higher time scale such as daily; thus, appropriate measures for shorter time scales of extreme rainfall can be planned. The implementation of such measures would help the authorities reduce the impact of any disastrous natural event.
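Once GLO parameters are fitted, a T-year return level follows from the GLO quantile function. The sketch below uses Hosking's parameterization, x(F) = loc + (scale/shape)*(1 - ((1-F)/F)**shape) with F = 1 - 1/T; the parameter values are hypothetical, not the fitted values from this study.

```python
import math

def glo_return_level(loc, scale, shape, T):
    """T-year return level from the generalized logistic (GLO) quantile
    function in Hosking's parameterization, evaluated at the
    non-exceedance probability F = 1 - 1/T."""
    F = 1.0 - 1.0 / T
    if abs(shape) < 1e-12:               # logistic limit as shape -> 0
        return loc - scale * math.log((1.0 - F) / F)
    return loc + scale / shape * (1.0 - ((1.0 - F) / F) ** shape)

# Hypothetical 1-h rainfall parameters (mm); return levels grow with T.
levels = [glo_return_level(50.0, 15.0, -0.1, T) for T in (10, 50, 100)]
```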
A game theoretic approach to a finite-time disturbance attenuation problem
NASA Technical Reports Server (NTRS)
Rhee, Ihnseok; Speyer, Jason L.
1991-01-01
A disturbance attenuation problem over a finite-time interval is considered by a game theoretic approach where the control, restricted to a function of the measurement history, plays against adversaries composed of the process and measurement disturbances, and the initial state. A zero-sum game, formulated as a quadratic cost criterion subject to linear time-varying dynamics and measurements, is solved by a calculus of variation technique. By first maximizing the quadratic cost criterion with respect to the process disturbance and initial state, a full information game between the control and the measurement residual subject to the estimator dynamics results. The resulting solution produces an n-dimensional compensator which expresses the controller as a linear combination of the measurement history. A disturbance attenuation problem is solved based on the results of the game problem. For time-invariant systems it is shown that under certain conditions the time-varying controller becomes time-invariant on the infinite-time interval. The resulting controller satisfies an H(infinity) norm bound.
Sucunza, Federico; Danilewicz, Daniel; Cremer, Marta; Andriolo, Artur; Zerbini, Alexandre N
2018-01-01
Estimation of visibility bias is critical to accurately compute abundance of wild populations. The franciscana, Pontoporia blainvillei, is considered the most threatened small cetacean in the southwestern Atlantic Ocean. Aerial surveys are considered the most effective method to estimate abundance of this species, but many existing estimates have been considered unreliable because they lack proper estimation of correction factors for visibility bias. In this study, helicopter surveys were conducted to determine surfacing-diving intervals of franciscanas and to estimate availability for aerial platforms. Fifteen hours were flown and 101 groups of 1 to 7 franciscanas were monitored, resulting in a sample of 248 surface-dive cycles. The mean surfacing interval and diving interval times were 16.10 seconds (SE = 9.74) and 39.77 seconds (SE = 29.06), respectively. Availability was estimated at 0.39 (SE = 0.01), a value 16-46% greater than estimates computed from diving parameters obtained from boats or from land. Generalized mixed-effects models were used to investigate the influence of biological and environmental predictors on the proportion of time franciscana groups are visually available to be seen from an aerial platform. These models revealed that group size was the main factor influencing the proportion at surface. The use of negatively biased estimates of availability results in overestimation of abundance, leads to overly optimistic assessments of extinction probabilities and to potentially ineffective management actions. This study demonstrates that estimates of availability must be computed from suitable platforms to ensure proper conservation decisions are implemented to protect threatened species such as the franciscana.
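The basic visibility-bias correction can be illustrated as follows. Note that the study's estimate of 0.39 is higher than the instantaneous surface-to-cycle ratio implied by the reported mean intervals, presumably because availability to an aerial observer also accounts for the window of time a group remains in view; the sketch below shows only the baseline calculation, with hypothetical abundance numbers.

```python
def availability(mean_surface_s, mean_dive_s):
    """Instantaneous availability: expected fraction of a surface-dive
    cycle that a group spends visible at the surface."""
    return mean_surface_s / (mean_surface_s + mean_dive_s)

def corrected_abundance(n_estimated, avail):
    """Divide a design-based abundance estimate by availability to correct
    for groups that were submerged while the aircraft passed overhead."""
    return n_estimated / avail

# Using the reported means (16.10 s surface, 39.77 s dive) gives an
# instantaneous ratio near 0.29; the study's aerial availability is 0.39.
a_instant = availability(16.10, 39.77)
n_corrected = corrected_abundance(100.0, 0.39)   # hypothetical raw estimate
```

Because the estimate is divided by availability, a negatively biased availability (e.g. one measured from a boat) inflates abundance, which is exactly the overestimation risk the abstract warns about.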
Estimation of TOA based MUSIC algorithm and cross correlation algorithm of appropriate interval
NASA Astrophysics Data System (ADS)
Lin, Wei; Liu, Jun; Zhou, Yineng; Huang, Jiyan
2017-03-01
Localization of a mobile station (MS) has gained considerable attention due to its wide applications in military, environmental, health and commercial systems. The phase angle and encoded data of the MSK system model are two critical parameters in the time-of-arrival (TOA) localization technique; nevertheless, precise values of the phase angle and encoded data are not easy to achieve in general. To match the actual situation, we consider the condition in which the phase angle and encoded data are unknown. In this paper, a novel TOA localization method, which combines the MUSIC algorithm and the cross correlation algorithm over an appropriate interval, is proposed. Simulations show that the proposed method has better performance than the MUSIC algorithm and the cross correlation algorithm applied over the whole interval.
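The cross-correlation stage of such a method can be sketched generically: the TOA estimate is the lag that maximizes the correlation between the known transmitted template and the received signal. This is only the standard building block, not the paper's combined MUSIC procedure, and the signals below are synthetic.

```python
import numpy as np

def toa_by_crosscorrelation(template, received, fs):
    """Generic cross-correlation TOA estimate: the lag maximizing the
    correlation between the known template and the received signal,
    converted to seconds using the sampling rate fs."""
    corr = np.correlate(received, template, mode="full")
    lag = int(np.argmax(corr)) - (len(template) - 1)   # zero-lag index offset
    return lag / fs

# Example: a random template delayed by 5 samples at fs = 1 kHz.
rng = np.random.default_rng(1)
template = rng.standard_normal(64)
received = np.concatenate([np.zeros(5), template, np.zeros(10)])
toa = toa_by_crosscorrelation(template, received, fs=1000.0)
```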
Recurrence Interval and Event Age Data for Type A Faults
Dawson, Timothy E.; Weldon, Ray J.; Biasi, Glenn P.
2008-01-01
This appendix summarizes available recurrence interval, event age, and timing of most recent event data for Type A faults considered in the Earthquake Rate Model 2 (ERM 2) and used in the ERM 2 Appendix C analysis as well as Appendix N (time-dependent probabilities). These data have been compiled into an Excel workbook named Appendix B A-fault event ages_recurrence_V5.0 (herein referred to as the Appendix B workbook). For convenience, the Appendix B workbook is attached to the end of this document as a series of tables. The tables within the Appendix B workbook include site locations, event ages, and recurrence data, and in some cases, the interval of time between earthquakes is also reported. The Appendix B workbook is organized as individual worksheets, with each worksheet named by fault and paleoseismic site. Each worksheet contains the site location in latitude and longitude, as well as information on event ages, and a summary of recurrence data. Because the data has been compiled from different sources with different presentation styles, descriptions of the contents of each worksheet within the Appendix B spreadsheet are summarized.
Magari, Robert T
2002-03-01
The effect of different lot-to-lot variability levels on the prediction of stability is studied based on two statistical models for estimating degradation in real-time and accelerated stability tests. Lot-to-lot variability is considered random in both models and is attributed to two sources: variability at time zero and variability of the degradation rate. Real-time stability tests are modeled as a function of time, while accelerated stability tests are modeled as a function of time and temperature. Several data sets were simulated, and a maximum likelihood approach was used for estimation. The 95% confidence intervals for the degradation rate depend on the amount of lot-to-lot variability. When lot-to-lot degradation rate variability is relatively large (CV ≥ 8%), the estimated confidence intervals do not represent the trend for individual lots. In such cases it is recommended to analyze each lot individually. Copyright 2002 Wiley-Liss, Inc. and the American Pharmaceutical Association J Pharm Sci 91: 893-899, 2002
The role of ultrasound guidance in pediatric caudal block
Erbüyün, Koray; Açıkgöz, Barış; Ok, Gülay; Yılmaz, Ömer; Temeltaş, Gökhan; Tekin, İdil; Tok, Demet
2016-01-01
Objectives: To compare the time interval of the procedure, possible complications, post-operative pain levels, additional analgesics, and nurse satisfaction in ultrasonography-guided and standard caudal block applications. Methods: This retrospective study was conducted in Celal Bayar University Hospital, Manisa, Turkey, between January and December 2014, and included 78 pediatric patients. Caudal block was applied to 2 different groups: one with ultrasound guidance, and the other using the standard method. Results: The time interval of the procedure was significantly shorter in the standard application group compared with the ultrasound-guided group (p=0.020). The Wong-Baker FACES Pain Rating Scale value obtained at the 90th minute was statistically lower in the standard application group compared with the ultrasound-guided group (p=0.035). No statistically significant difference was found in the other parameters between the 2 groups. The shorter time interval of the procedure in the standard application group should not be considered a distinctive mark by pediatric anesthesiologists, because this time difference was as short as seconds. Conclusion: Ultrasound guidance for caudal block applications would neither increase nor decrease the success of the treatment. However, ultrasound guidance may be needed in cases where the detection of sacral anatomy is difficult, especially by palpation. PMID:26837396
Wang, Molin; Liao, Xiaomei; Laden, Francine; Spiegelman, Donna
2016-01-01
Identification of the latency period and age-related susceptibility, if any, is an important aspect of assessing risks of environmental, nutritional and occupational exposures. We consider estimation and inference for latency and age-related susceptibility in relative risk and excess risk models. We focus on likelihood-based methods for point and interval estimation of the latency period and age-related windows of susceptibility coupled with several commonly considered exposure metrics. The method is illustrated in a study of the timing of the effects of constituents of air pollution on mortality in the Nurses’ Health Study. PMID:26750582
Dugué, Audrey Emmanuelle; Pulido, Marina; Chabaud, Sylvie; Belin, Lisa; Gal, Jocelyn
2016-12-01
We describe how to estimate progression-free survival while dealing with interval-censored data in the setting of clinical trials in oncology. Three procedures with SAS and R statistical software are described: one allowing for a nonparametric maximum likelihood estimation of the survival curve using the EM-ICM (Expectation and Maximization-Iterative Convex Minorant) algorithm as described by Wellner and Zhan in 1997; a sensitivity analysis procedure in which the progression time is assigned (i) at the midpoint, (ii) at the upper limit (reflecting the standard analysis, in which the progression time is assigned to the first radiologic exam showing progressive disease), or (iii) at the lower limit of the censoring interval; and finally, two multiple-imputation procedures, considering either a uniform or the nonparametric maximum likelihood estimation (NPMLE) distribution. Clin Cancer Res; 22(23); 5629-35. ©2016 AACR.
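The three single-imputation rules of the sensitivity analysis can be written down directly. The sketch below is in Python rather than the SAS/R procedures the paper describes, with hypothetical exam times in weeks:

```python
def assign_progression_time(last_negative, first_positive, rule="midpoint"):
    """Single-imputation rules for a progression time known only to lie in
    the censoring interval (last_negative, first_positive]: midpoint,
    upper limit (the standard analysis, i.e. the first radiologic exam
    showing progression) or lower limit of the interval."""
    if rule == "midpoint":
        return 0.5 * (last_negative + first_positive)
    if rule == "upper":
        return first_positive
    if rule == "lower":
        return last_negative
    raise ValueError("rule must be 'midpoint', 'upper' or 'lower'")

# Progression detected at the 12-week scan, last progression-free scan at 6 weeks.
times = {r: assign_progression_time(6.0, 12.0, r)
         for r in ("lower", "midpoint", "upper")}
```

Running the survival analysis under all three assignments brackets the estimate, which is the point of the sensitivity analysis.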
NASA Astrophysics Data System (ADS)
Trifonenkov, A. V.; Trifonenkov, V. P.
2017-01-01
This article deals with a feature of problems of calculating the time-averaged characteristics of optimal control sets for a nuclear reactor. The operation of a nuclear reactor during the threatened period is considered, and the optimal control search problem is analysed. Xenon poisoning constrains the possible statements of the problem of calculating time-averaged characteristics of a set of optimal reactor power-off controls: the level of xenon poisoning is limited, so an appropriate segment of the time axis must be chosen to ensure that the optimal control problem is consistent. Two procedures for estimating the duration of this segment are considered, and the two estimates were plotted as functions of the xenon limitation. The boundaries of the interval of averaging are thereby defined more precisely.
Reliable results from stochastic simulation models
Donald L., Jr. Gochenour; Leonard R. Johnson
1973-01-01
Development of a computer simulation model is usually done without fully considering how long the model should run (e.g., computer time) before the results are reliable. However, construction of confidence intervals (CIs) about critical output parameters from the simulation model makes it possible to determine the point at which model results become reliable. If the results are...
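The idea that CI construction tells you when to stop a stochastic simulation is commonly implemented as a sequential stopping rule. A sketch, with a hypothetical exponential-output "simulation" standing in for a real model:

```python
import math
import random
import statistics

def run_until_precise(sample_once, rel_halfwidth=0.05, z=1.96,
                      min_runs=30, max_runs=200000):
    """Sequential stopping rule: keep adding independent replications of a
    stochastic simulation until the normal-theory CI half-width for the
    mean output drops below `rel_halfwidth` of the running mean (or the
    run budget is exhausted). Returns (mean, half-width, replications)."""
    data = [sample_once() for _ in range(min_runs)]
    while True:
        mean = statistics.fmean(data)
        half = z * statistics.stdev(data) / math.sqrt(len(data))
        if half <= rel_halfwidth * abs(mean) or len(data) >= max_runs:
            return mean, half, len(data)
        data.append(sample_once())

# Example: estimate the mean of an exponential output (true mean 10) to within ±5%.
random.seed(42)
mean, half, n_runs = run_until_precise(lambda: random.expovariate(0.1))
```

For an output with coefficient of variation near 1, roughly (1.96/0.05)^2 ≈ 1500 replications are needed, which is exactly the run-length question the abstract raises.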
Vacchiano, Giuseppe; Luna Maldonado, Aurelio; Matas Ros, Maria; Fiorenza, Elisa; Silvestre, Angela; Simonetti, Biagio; Pieri, Maria
2018-06-01
The study reports the evolution of the demyelinization process based on cholesterol ([CHOL]) levels quantified in median nerve samples collected at different times from death from both right and left wrists. The statistical data show that the phenomenon evolves differently in the right and left nerves. Such a difference can reasonably be attributed to a different multicenter evolution of the demyelinization. For data analysis, the enrolled subjects were grouped by similar postmortem intervals (PMIs), considering 3 intervals: PMI < 48 hours, 48 hours < PMI < 78 hours, and PMI > 78 hours. Data obtained from tissue dissected within 48 hours of death allowed for a PMI estimation according to the following equations: PMI = 0.000 + 0.7623 [CHOL]right (R = 0.581) for the right wrist and PMI = 0.000 + 0.8911 [CHOL]left (R = 0.794) for the left wrist. At present, this correlation cannot be considered definitive because of the small size of the samples analyzed and because differences in sampling time and interindividual and intraindividual variation may influence the demyelinization process.
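The two reported regressions can be wrapped in a small helper. The slopes are those given above (zero intercepts as reported), and the function is only meaningful within the study's stated validity range, i.e. tissue dissected within 48 hours of death; [CHOL] units follow the study.

```python
def pmi_from_cholesterol(chol_level, side):
    """PMI estimate (hours) from the study's linear fits on median-nerve
    cholesterol: PMI = 0.7623 * [CHOL] (right wrist, R = 0.581) or
    PMI = 0.8911 * [CHOL] (left wrist, R = 0.794). Valid only for tissue
    dissected within 48 h of death."""
    slopes = {"right": 0.7623, "left": 0.8911}
    return slopes[side] * chol_level

# Example with a hypothetical cholesterol reading of 30 units.
pmi_right = pmi_from_cholesterol(30.0, "right")
pmi_left = pmi_from_cholesterol(30.0, "left")
```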
Conservative management of extradural hematoma: A report of sixty-two cases.
Zwayed, A Rahim H; Lucke-Wold, Brandon
2018-06-01
Extradural hematomas (EDH) are considered life threatening in that the risk for brain herniation is significant. The current accepted understanding within the literature is to treat EDH via surgical evacuation of the hematoma. In this case series we report 62 cases of EDH managed conservatively without surgical intervention. Inclusion criteria were: Glasgow coma scale score 13-15, extradural hematoma confirmed by CT being less than 40 mm, less than 6 mm of midline shift, and no other surgical lesions present. Patients were initially observed in a surgical intensive care unit prior to discharge and had closely scheduled follow-up. Of the 62 cases none required emergent intervention and the majority had interval resolution of the epidural hematoma over time. Resolution was apparent by 21 days and definitive by 3 to 6 months. Patients with EDH who have a high Glasgow coma scale score (13-15), volume <40 mm, and less than 6 mm of midline shift should be considered for conservative management. Our study indicates that these patients will have interval resolution of the hematoma over time without worsening of symptoms.
Seismic hazard assessment over time: Modelling earthquakes in Taiwan
NASA Astrophysics Data System (ADS)
Chan, Chung-Han; Wang, Yu; Wang, Yu-Ju; Lee, Ya-Ting
2017-04-01
To assess the seismic hazard with temporal change in Taiwan, we develop a new approach, combining both the Brownian Passage Time (BPT) model and the Coulomb stress change, and implement the seismogenic source parameters of the Taiwan Earthquake Model (TEM). The BPT model was adopted to describe the rupture recurrence intervals of the specific fault sources, together with the time elapsed since the last fault rupture, to derive their long-term rupture probability. We also evaluate the short-term seismicity rate change based on the static Coulomb stress interaction between seismogenic sources. By considering the above time-dependent factors, our new combined model suggests an increased long-term seismic hazard in the vicinity of active faults along the western Coastal Plain and the Longitudinal Valley, where active faults have short recurrence intervals and long elapsed times since their last ruptures, and/or short-term elevated hazard levels right after the occurrence of large earthquakes due to the stress triggering effect. The stress enhanced by the February 6th, 2016, Meinong ML 6.6 earthquake also significantly increased the rupture probabilities of several neighbouring seismogenic sources in Southwestern Taiwan and raised the hazard level in the near future. Our approach draws on the advantage of incorporating long- and short-term models to provide time-dependent earthquake probability constraints. Our time-dependent model considers more detailed information than other published models. It thus offers decision-makers and public officials an adequate basis for rapid evaluations of and responses to future emergency scenarios such as victim relocation and sheltering.
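The long-term component rests on the standard BPT density with mean recurrence interval mu and aperiodicity alpha. The sketch below (plain-Python trapezoidal integration; the parameter values in the test are hypothetical, not TEM values) computes the conditional rupture probability given the time elapsed since the last rupture:

```python
import math

def bpt_pdf(t, mu, alpha):
    # Brownian Passage Time density: mean recurrence mu, aperiodicity alpha.
    return math.sqrt(mu / (2 * math.pi * alpha**2 * t**3)) * \
        math.exp(-(t - mu)**2 / (2 * mu * alpha**2 * t))

def conditional_rupture_prob(elapsed, window, mu, alpha, n=20000):
    """P(rupture within `window` of now | no rupture during `elapsed`),
    via trapezoidal integration of the BPT density."""
    def integral(a, b):
        h = (b - a) / n
        s = 0.5 * (bpt_pdf(a, mu, alpha) + bpt_pdf(b, mu, alpha))
        s += sum(bpt_pdf(a + i * h, mu, alpha) for i in range(1, n))
        return s * h
    eps = 1e-9  # avoid t = 0 in the density
    survived = 1.0 - integral(eps, max(elapsed, eps))
    return integral(max(elapsed, eps), elapsed + window) / survived
```

The elapsed-time dependence is what makes the hazard time-dependent: faults with a long elapsed time relative to their mean recurrence interval carry a higher conditional rupture probability.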
NASA Astrophysics Data System (ADS)
Kargovsky, A. V.; Chichigina, O. A.; Anashkina, E. I.; Valenti, D.; Spagnolo, B.
2015-10-01
The relaxation dynamics of a system described by a Langevin equation with pulse multiplicative noise sources with different correlation properties is considered. The solution of the corresponding Fokker-Planck equation is derived for Gaussian white noise. Moreover, two pulse processes with regulated periodicity are considered as a noise source: the dead-time-distorted Poisson process and the process with fixed time intervals, which is characterized by an infinite correlation time. We find that the steady state of the system is dependent on the correlation properties of the pulse noise. An increase of the noise correlation causes the decrease of the mean value of the solution at the steady state. The analytical results are in good agreement with the numerical ones.
Araújo, Célio U; Basting, Roberta T
2018-03-01
To perform an in situ evaluation of surface roughness and micromorphology of two soft liner materials for dentures at different time intervals. The surface roughness of materials may influence the adhesion of micro-organisms and inflammation of the mucosal tissues. The in situ evaluation of surface roughness and the micromorphology of soft liner materials over the course of time may present results different from those of in vitro studies, considering the constant presence of saliva and food, the changes in temperature and the pH level in the oral cavity. Forty-eight rectangular specimens of each of the two soft liner materials were fabricated: a silicone-based material (Mucopren Soft) and an acrylic resin-based material (Trusoft). The specimens were placed in the dentures of 12 participants (n = 12), and the materials were evaluated for surface roughness and micromorphology at different time intervals: 0, 7, 30 and 60 days. Roughness (Ra) was evaluated by means of a roughness tester. Surface micromorphology was evaluated by scanning electron microscopy. Analysis of variance for randomised block design and Tukey's test showed that surface roughness values were lower in the groups using the silicone-based material at all the time intervals (P < .0001). The average surface roughness was higher at time interval 0 than at the other intervals, for both materials (P < .0001). The surface micromorphology showed that the silicone material presented a more regular and smoother surface than the acrylic resin-based material. The surface roughness of acrylic resin-based and silicone-based denture soft liner materials decreased after 7 days of evaluation, leading to a smoother surface over time. The silicone-based material showed lower roughness values and a smoother surface than the acrylic resin-based material, making it the more appropriate material to select, due to its tendency to promote less biofilm build-up.
© 2017 John Wiley & Sons A/S and The Gerodontology Association. Published by John Wiley & Sons Ltd.
Prieto García, M A; Delgado Sevillano, R; Baldó Sierra, C; González Díaz, E; López Secades, A; Llavona Amor, J A; Vidal Marín, B
2013-09-01
To review and classify the interval cancers found in the Principality of Asturias's Breast Cancer Screening Program (PDPCM). A secondary objective was to determine the histological characteristics, size, and stage of the interval cancers at the time of diagnosis. We included the interval cancers in the PDPCM in the period 2003-2007. Interval cancers were classified according to the breast cancer screening program protocol, with double reading without consensus, without blinding, with arbitration. Mammograms were interpreted by 10 radiologists in the PDPCM. A total of 33.7% of the interval cancers could not be classified; of the interval cancers that could be classified, 40.67% were labeled true interval cancers, 31.4% were labeled false negatives on screening, 23.7% had minimal signs, and 4.23% were considered occult. A total of 70% of the interval cancers were diagnosed in the year of the period between screening examinations and 71.7% were diagnosed after subsequent screening. A total of 76.9% were invasive ductal carcinomas, 61.1% were stage II when detected, and 78.7% were larger than 10 mm when detected. The rate of interval cancers and the rate of false negatives in the PDPCM are higher than those recommended in the European guidelines. Interval cancers are diagnosed later than the tumors detected at screening. Studying interval cancers provides significant training for the radiologists in the PDPCM. Copyright © 2011 SERAM. Published by Elsevier Espana. All rights reserved.
Echolocation system of the bottlenose dolphin
NASA Astrophysics Data System (ADS)
Dubrovsky, N. A.
2004-05-01
The hypothesis put forward by Vel’min and Dubrovsky [1] is discussed. The hypothesis suggests that bottlenose dolphins possess two functionally separate auditory subsystems: one of them serves for analyzing extraneous sounds, as in nonecholocating terrestrial animals, and the other performs the analysis of echoes caused by the echolocation clicks of the animal itself. The first subsystem is called passive hearing, and the second, active hearing. The results of experimental studies of the dolphin’s echolocation system are discussed to confirm the proposed hypothesis. For the active hearing of dolphins, the notion of a critical interval is considered: the interval of time within which a merged auditory image of the echolocation object is formed, provided that all highlights of the echo from this object fall within the critical interval.
Sanjay, Pandanaboyana; Yeeting, Sim; Whigham, Carole; Judson, Hannah; Polignano, Francesco M; Tait, Iain S
2008-08-01
UK guidelines for gallstone pancreatitis (GSP) advocate definitive treatment during the index admission, or within 2 weeks of discharge. However, this target may not always be achievable. This study reviewed current management of GSP in a university hospital and evaluated the risk associated with interval cholecystectomy. All patients that presented with GSP over a 4-year period (2002-2005) were stratified for disease severity (APACHE II). Patient demographics, time to definitive therapy [index cholecystectomy; endoscopic sphincterotomy (ES); interval cholecystectomy], and readmission rates were analysed retrospectively. 100 patients were admitted with GSP. Disease severity was mild in 54 patients and severe in 46 patients. Twenty-two patients unsuitable for surgery underwent ES as definitive treatment with no readmissions. Seventy-eight patients underwent cholecystectomy, of which 40 (51%) had an index cholecystectomy and 38 (49%) an interval cholecystectomy. Only 10 patients with severe GSP had an index cholecystectomy, whilst 30 were readmitted for interval cholecystectomy (p = 0.04). The median APACHE score was 4 [standard deviation (SD) 3.8] for index cholecystectomy and 8 (SD 2.6) for interval cholecystectomy (p < 0.05). Median time (range) to surgery was 7.5 (2-30) days for index cholecystectomy and 63 (13-210) days for interval cholecystectomy. Fifty percent (19/38) of patients with GSP had ES prior to discharge for interval cholecystectomy. Two (5%) patients were readmitted whilst awaiting interval cholecystectomy: with acute cholecystitis (n = 1) and acute pancreatitis (n = 1). No mortality was noted in the index or interval group. This study demonstrates that overall 62% (22 endoscopic sphincterotomy and 40 index cholecystectomy) of patients with GSP have definitive therapy during the index admission. However, surgery was deferred in the majority (n = 30) of patients with severe GSP, and 19/30 underwent ES prior to discharge.
ES followed by interval cholecystectomy in severe GSP is associated with minimal morbidity and readmission rates, and is considered a reasonable alternative to an index cholecystectomy in patients with severe GSP.
Røtterud, Jan Harald; Sivertsen, Einar A; Forssblad, Magnus; Engebretsen, Lars; Årøen, Asbjørn
2011-07-01
The presence of an articular cartilage lesion in anterior cruciate ligament-injured knees is considered a predictor of osteoarthritis. This study was undertaken to evaluate risk factors for full-thickness articular cartilage lesions in anterior cruciate ligament-injured knees, in particular the role of gender and the sport causing the initial injury. Cohort study (prognosis); Level of evidence, 2. Primary unilateral anterior cruciate ligament reconstructions prospectively registered in the Swedish and the Norwegian National Knee Ligament Registry during 2005 through 2008 were included (N = 15 783). Logistic regression analyses were used to evaluate risk factors for cartilage lesions. A total of 1012 patients (6.4%) had full-thickness cartilage lesions. The median time from injury to surgery was 9 months (range, 0 days-521 months). Male patients had an increased odds of full-thickness cartilage lesions compared with females (odds ratio = 1.22; 95% confidence interval, 1.04-1.42). In males, team handball had an increase in the odds of full-thickness cartilage lesions compared with soccer (odds ratio = 2.36; 95% confidence interval, 1.33-4.19). Among female patients, no sport investigated showed a significant decrease or increase in the odds of full-thickness cartilage lesions. The odds of a full-thickness cartilage lesion increased by 1.006 (95% confidence interval, 1.005-1.008) for each month elapsed from time of injury until anterior cruciate ligament reconstruction when all patients were considered, while time from injury to surgery did not affect the odds significantly in those patients reconstructed within 1 year of injury (odds ratio = 0.98; 95% confidence interval, 0.95-1.02). Previous surgery increased the odds of having a full-thickness cartilage lesion (odds ratio = 1.40; 95% confidence interval, 1.21-1.63). One year of increasing patient age also increased the odds (odds ratio = 1.05; 95% confidence interval, 1.05-1.06). 
Male gender is associated with an increased risk of full-thickness articular cartilage lesions in anterior cruciate ligament-injured knees. Male team handball players had an increased risk of full-thickness lesions. No other sports investigated were found to have significant effect on the risk in either gender. Furthermore, age, previous surgery, and time from injury to surgery exceeding 12 months are risk factors for full-thickness cartilage lesions.
Finding Every Root of a Broad Class of Real, Continuous Functions in a Given Interval
NASA Technical Reports Server (NTRS)
Tausworthe, Robert C.; Wolgast, Paul A.
2011-01-01
One of the most pervasive needs within the Deep Space Network (DSN) Metric Prediction Generator (MPG) view period event generation is that of finding solutions to given occurrence conditions. While the general form of an equation expresses equivalence between its left-hand and right-hand expressions, the traditional treatment of the subject subtracts the two sides, leaving an expression of the form f(x) = 0. Values of the independent variable x satisfying this condition are roots, or solutions. Generally speaking, there may be no solutions, a unique solution, multiple solutions, or a continuum of solutions to a given equation. In particular, all view period events are modeled as zero crossings of various metrics; for example, the time at which the elevation of a spacecraft reaches its maximum value, as viewed from a Deep Space Station (DSS), is found by locating that point at which the derivative of the elevation function becomes zero. Moreover, each event type may have several occurrences within a given time interval of interest. For example, a spacecraft in a low Moon orbit will experience several possible occultations per day, each of which must be located in time. The MPG is charged with finding all specified event occurrences that take place within a given time interval (or pass), without any special clues from operators as to when they may occur, for the entire spectrum of missions undertaken by the DSN. For each event type, the event metric function is a known form that can be computed for any instant within the interval. A method has been created for a mathematical root finder capable of finding all roots of an arbitrary continuous function within a given interval, subject to very lenient, parameterized assumptions. One assumption is that adjacent roots are separated at least by a given amount, xGuard.
Any point whose function value is less than ef in magnitude is considered to be a root, and the function values at distances xGuard away from a root are larger than ef, unless there is another root located in this vicinity. A root is considered found if, during iteration, two root candidates differ by less than a pre-specified ex, and the optimum cubic polynomial matching the function at the interval ends and at two interior points (that is, within a relative error fraction L at its midpoint) is reliable in indicating whether the function has extrema within the interval. The robustness of this method depends solely on choosing these four parameters that control the search. The roots of discontinuous functions were also found, though with degraded performance.
NASA Technical Reports Server (NTRS)
Pina, J. F.; House, F. B.
1976-01-01
A scheme was developed which divides the earth-atmosphere system into 2060 elemental areas. The regions previously described are defined in terms of these elemental areas which are fixed in size and position as the satellite moves. One method, termed the instantaneous technique, yields values of the radiant emittance (We) and the radiant reflectance (Wr) which the regions have during the time interval of a single satellite pass. The number of observations matches the number of regions under study and a unique solution is obtained using matrix inversion. The other method (termed the best fit technique), yields time averages of We and Wr for large time intervals (e.g., months, seasons). The number of observations in this technique is much greater than the number of regions considered, and an approximate solution is obtained by the method of least squares.
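The contrast between the two techniques can be shown with a toy two-parameter version: a single region with unknowns We and Wr observed through known coefficients (the real system spans 2060 elemental areas; the coefficients and values here are hypothetical):

```python
def solve_two_params(A, y):
    """Least-squares estimate of (We, Wr) from rows A[i] = (a_i, b_i) and
    observations y[i] = a_i*We + b_i*Wr (+ noise), via the 2x2 normal
    equations."""
    s_aa = sum(a * a for a, b in A)
    s_ab = sum(a * b for a, b in A)
    s_bb = sum(b * b for a, b in A)
    t_a = sum(a * yi for (a, b), yi in zip(A, y))
    t_b = sum(b * yi for (a, b), yi in zip(A, y))
    det = s_aa * s_bb - s_ab * s_ab
    we = (s_bb * t_a - s_ab * t_b) / det
    wr = (s_aa * t_b - s_ab * t_a) / det
    return we, wr
```

With exactly two independent observations the normal equations reduce to plain matrix inversion (the instantaneous technique); with more observations than unknowns they give the least-squares best fit, as in the monthly and seasonal averages.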
Santos, Thays Brenner; Kramer-Soares, Juliana Carlota; Favaro, Vanessa Manchim; Oliveira, Maria Gabriela Menezes
2017-10-01
Time plays an important role in conditioning: it is not only possible to associate stimuli with events that overlap, as in delay fear conditioning, but also possible to associate stimuli that are discontinuous in time, as shown in trace conditioning with discrete stimuli. The environment itself can be a powerful conditioned stimulus (CS) and be associated with an unconditioned stimulus (US). Thus, the aim of the present study was to determine the parameters in which contextual fear conditioning occurs by the maintenance of a contextual representation over short and long time intervals. The results showed that a contextual representation can be maintained and associated after 5 s, even in the absence of a 15 s re-exposure to the training context before US delivery. The same effect was not observed with a 24 h interval of discontinuity. Furthermore, an optimal conditioned response with a 5 s interval is produced only when the contexts (of pre-exposure and shock) match. As the pre-limbic cortex (PL) is necessary for the maintenance of a continuous representation of a stimulus, the involvement of the PL in this temporal and contextual processing was investigated. The reversible inactivation of the PL by muscimol infusion impaired the acquisition of contextual fear conditioning with a 5 s interval, but not with a 24 h interval, and did not impair delay fear conditioning. The data provided evidence that short and long intervals of discontinuity have different mechanisms, thus contributing to a better understanding of PL involvement in contextual fear conditioning and providing a model that considers both temporal and contextual factors in fear conditioning. Copyright © 2017 Elsevier Inc. All rights reserved.
Measures of native and non-native rhythm in a quantity language.
Stockmal, Verna; Markus, Dace; Bond, Dzintra
2005-01-01
The traditional phonetic classification of language rhythm as stress-timed or syllable-timed is attributed to Pike. Recently, two different proposals have been offered for describing the rhythmic structure of languages from acoustic-phonetic measurements. Ramus has suggested a metric based on the proportion of vocalic intervals and the variability (SD) of consonantal intervals. Grabe has proposed Pairwise Variability Indices (nPVI, rPVI) calculated from the differences in vocalic and consonantal durations between successive syllables. We have calculated both the Ramus and Grabe metrics for Latvian, traditionally considered a syllable rhythm language, and for Latvian as spoken by Russian learners. Native speakers and proficient learners were very similar whereas low-proficiency learners showed high variability on some properties. The metrics did not provide an unambiguous classification of Latvian.
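Both metric families have standard published definitions and are straightforward to compute from measured interval durations (in ms; the example numbers in the usage below are hypothetical):

```python
from statistics import pstdev

def ramus_metrics(vocalic, consonantal):
    """Ramus-style metrics: %V, the proportion of utterance duration that
    is vocalic, and deltaC, the SD of consonantal interval durations."""
    total = sum(vocalic) + sum(consonantal)
    return sum(vocalic) / total, pstdev(consonantal)

def pvi(intervals, normalized=True):
    """Grabe-style Pairwise Variability Index over successive intervals:
    nPVI (rate-normalized, usually applied to vocalic intervals) or
    rPVI (raw, usually applied to consonantal intervals)."""
    diffs = []
    for d1, d2 in zip(intervals, intervals[1:]):
        diffs.append(abs(d1 - d2) / ((d1 + d2) / 2) if normalized else abs(d1 - d2))
    return 100 * sum(diffs) / len(diffs) if normalized else sum(diffs) / len(diffs)
```

Perfectly isochronous intervals give nPVI = 0; strong alternation of long and short intervals, as expected under stress timing, drives both indices up.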
NASA Astrophysics Data System (ADS)
Azmi, N. I. L. Mohd; Ahmad, R.; Zainuddin, Z. M.
2017-09-01
This research explores the Mixed-Model Two-Sided Assembly Line (MMTSAL). There are two interrelated problems in MMTSAL: line balancing and model sequencing. In previous studies, many researchers considered these problems separately, and only a few studied them simultaneously for a one-sided line. In this study, however, the two problems are solved simultaneously to obtain a more efficient solution. A Mixed Integer Linear Programming (MILP) model with the objectives of minimizing total utility work and idle time is generated by considering a variable launching interval and an assignment restriction constraint. The problem is analysed using small-size test cases to validate the integrated model. Throughout this paper, numerical experiments were conducted using the General Algebraic Modelling System (GAMS) with the CPLEX solver. Experimental results indicate that integrating the model sequencing and line balancing problems helps to minimise the proposed objective functions.
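As a toy illustration of the balancing half only (the sequencing decisions, two-sided pairing, and utility work of the actual MILP are omitted), the sketch below brute-forces station assignments for a simple precedence chain of tasks; the function and task times are our own construction, not the paper's model:

```python
from itertools import combinations

def balance_chain(task_times, n_stations):
    """Assign a precedence *chain* of tasks to consecutive stations so that
    the cycle time (maximum station load, hence idle time elsewhere) is
    minimal.  Brute force over all cut points; fine for small instances."""
    n = len(task_times)
    best = (float("inf"), [])
    for cuts in combinations(range(1, n), n_stations - 1):
        bounds = (0,) + cuts + (n,)
        loads = [sum(task_times[a:b]) for a, b in zip(bounds, bounds[1:])]
        best = min(best, (max(loads), loads))
    return best
```

For a given launch interval, a smaller maximum station load directly reduces the idle time accumulated at the under-loaded stations, which is the intuition behind the paper's joint objective.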
Interepisode Sleep Bruxism Intervals and Myofascial Face Pain.
Muzalev, Konstantin; Lobbezoo, Frank; Janal, Malvin N; Raphael, Karen G
2017-08-01
Sleep bruxism (SB) is considered a possible etiological factor for temporomandibular disorder (TMD) pain. However, polysomnographic (PSG) studies, which are the current "gold standard" diagnostic approach to SB, have failed to prove an association between SB and TMD. A possible explanation could be that PSG studies have considered only limited characteristics of SB activity: the number of SB events per hour and, sometimes, the total duration of SB per night. According to the sports sciences literature, lack of adequate rest time between muscle activities leads to muscle overloading and pain. Therefore, the aim of this study was to determine whether the intervals between bruxism events differ between patients with and without TMD pain. Two groups of female volunteers were recruited: a myofascial TMD pain group (n=124) and a non-TMD control group (n=46). From these groups, we selected 86 (69%) case participants and 37 (80%) controls who had at least two SB episodes per night based on PSG recordings. A linear mixed model was used to compare the case and control groups over the repeated observations of interepisode intervals. The duration of interepisode intervals was statistically similar in the case (mean [standard deviation {SD}] 1137.7 [1975.8] seconds) and control (mean [SD] 1192.0 [1972.0] seconds) groups. There was also a similar number of SB episodes per hour and a similar total duration of SB episodes in both groups. The current data fail to support the idea that TMD pain can be explained by an increased number of SB episodes per hour of sleep or a decreased time between SB events. © Sleep Research Society 2017. Published by Oxford University Press on behalf of the Sleep Research Society. All rights reserved. For permissions, please e-mail journals.permissions@oup.com.
Uncertainty analysis for absorbed dose from a brain receptor imaging agent
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aydogan, B.; Miller, L.F.; Sparks, R.B.
Absorbed dose estimates are known to contain uncertainties. A recent literature search indicates that prior to this study no rigorous investigation of uncertainty associated with absorbed dose has been undertaken. A method of uncertainty analysis for absorbed dose calculations has been developed and implemented for the brain receptor imaging agent ¹²³I-IPT. The two major sources of uncertainty considered were the uncertainty associated with the determination of residence time and that associated with the determination of the S values. There are many sources of uncertainty in the determination of the S values, but only the inter-patient organ mass variation was considered in this work. The absorbed dose uncertainties were determined for lung, liver, heart and brain. Ninety-five percent confidence intervals of the organ absorbed dose distributions for each patient and for a seven-patient population group were determined by the Latin Hypercube Sampling method. For an individual patient, the upper bound of the 95% confidence interval of the absorbed dose was found to be about 2.5 times larger than the estimated mean absorbed dose. For the seven-patient population the upper bound of the 95% confidence interval of the absorbed dose distribution was around 45% more than the estimated population mean. For example, the 95% confidence interval of the population liver dose distribution was found to be between 1.49E+07 Gy/MBq and 4.65E+07 Gy/MBq with a mean of 2.52E+07 Gy/MBq. This study concluded that patients in a population receiving ¹²³I-IPT could receive absorbed doses as much as twice as large as the standard estimated absorbed dose due to these uncertainties.
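The Latin Hypercube Sampling step can be sketched in a few lines (uniform [0, 1] margins only; mapping each margin onto the actual residence-time and organ-mass distributions would be layered on top, and is not shown here):

```python
import random

def latin_hypercube(n_samples, n_dims, seed=0):
    """Basic Latin Hypercube sample on the unit hypercube: one point per
    equal-probability stratum in each dimension, with the strata randomly
    paired across dimensions."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_dims):
        # one uniform draw inside each of n_samples strata, then shuffle
        col = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(col)
        columns.append(col)
    return [tuple(col[i] for col in columns) for i in range(n_samples)]
```

Percentile bounds such as the 95% confidence intervals reported above are then read off the sorted output of the dose calculation applied to each sampled parameter tuple.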
Statistical analyses of the relative risk.
Gart, J J
1979-01-01
Let P1 be the probability of a disease in one population and P2 be the probability of a disease in a second population. The ratio of these quantities, R = P1/P2, is termed the relative risk. We consider first the analyses of the relative risk from retrospective studies. The relation between the relative risk and the odds ratio (or cross-product ratio) is developed. The odds ratio can be considered a parameter of an exponential model possessing sufficient statistics. This permits the development of exact significance tests and confidence intervals in the conditional space. Unconditional tests and intervals are also considered briefly. The consequences of misclassification errors and of ignoring matching or stratifying are also considered. The various methods are extended to the combination of results over the strata. Examples of case-control studies testing the association between HL-A frequencies and cancer illustrate the techniques. The parallel analyses of prospective studies are given. If P1 and P2 are small with large sample sizes, the appropriate model is a Poisson distribution. This yields an exponential model with sufficient statistics. Exact conditional tests and confidence intervals can then be developed. Here we consider the case where two populations are compared adjusting for sex differences as well as for strata (or covariate) differences such as age. The methods are applied to two examples: (1) testing in the two sexes the ratio of relative risks of skin cancer in people living in different latitudes, and (2) testing over time the ratio of the relative risks of cancer in two cities, one of which fluoridated its drinking water and one of which did not. PMID:540589
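As a minimal companion to the exact conditional methods discussed (which require the conditional noncentral hypergeometric distribution and are not reproduced here), the familiar large-sample Woolf interval for the odds ratio of a 2x2 table can be computed as follows:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.959964):
    """Odds ratio of the 2x2 table [[a, b], [c, d]] with the asymptotic
    Woolf (log-scale) confidence interval.  Note: this is the common
    large-sample approximation, not Gart's exact conditional procedure."""
    or_hat = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(or_hat) - z * se_log)
    hi = math.exp(math.log(or_hat) + z * se_log)
    return or_hat, lo, hi
```

In retrospective (case-control) data the odds ratio is the estimable quantity, and it approximates the relative risk R when the disease is rare in both populations.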
NUMERICAL METHODS FOR SOLVING THE MULTI-TERM TIME-FRACTIONAL WAVE-DIFFUSION EQUATION.
Liu, F; Meerschaert, M M; McGough, R J; Zhuang, P; Liu, Q
2013-03-01
In this paper, the multi-term time-fractional wave-diffusion equations are considered. The multi-term time fractional derivatives are defined in the Caputo sense, whose orders belong to the intervals [0,1], [1,2), [0,2), [0,3), [2,3) and [2,4), respectively. Some computationally effective numerical methods are proposed for simulating the multi-term time-fractional wave-diffusion equations. The numerical results demonstrate the effectiveness of theoretical analysis. These methods and techniques can also be extended to other kinds of the multi-term fractional time-space models with fractional Laplacian.
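For reference, the Caputo fractional derivative of order α used in such formulations has the standard definition (for integer n with n − 1 < α ≤ n):

```latex
{}^{C}\!D_t^{\alpha} f(t)
  = \frac{1}{\Gamma(n-\alpha)} \int_0^t
    \frac{f^{(n)}(\tau)}{(t-\tau)^{\alpha+1-n}} \, d\tau ,
  \qquad n-1 < \alpha \le n .
```

When α equals the integer n, this reduces to the ordinary derivative f^(n)(t), which is why the order intervals quoted above partition into [0, 1], [1, 2), and so on.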
Unsignaled Delay of Reinforcement, Relative Time, and Resistance to Change
ERIC Educational Resources Information Center
Shahan, Timothy A.; Lattal, Kennon A.
2005-01-01
Two experiments with pigeons examined the effects of unsignaled, nonresetting delays of reinforcement on responding maintained by different reinforcement rates. In Experiment 1, 3-s unsignaled delays were introduced into each component of a multiple variable-interval (VI) 15-s VI 90-s VI 540-s schedule. When considered as a proportion of the…
Revising the "Rule of Three" for inferring seizure freedom.
Westover, M Brandon; Cormier, Justine; Bianchi, Matt T; Shafi, Mouhsin; Kilbride, Ronan; Cole, Andrew J; Cash, Sydney S
2012-02-01
How long after starting a new medication must a patient go without seizures before they can be regarded as seizure-free? A recent International League Against Epilepsy (ILAE) task force proposed using a "Rule of Three" as an operational definition of seizure freedom, according to which a patient should be considered seizure-free following an intervention after a period without seizures has elapsed equal to three times the longest preintervention interseizure interval over the previous year. This rule was motivated in large part by statistical considerations advanced in a classic 1983 paper by Hanley and Lippman-Hand. However, strict adherence to the statistical logic of this rule generally requires waiting much longer than recommended by the ILAE task force. Therefore, we set out to determine whether an alternative approach to the Rule of Three might be possible, and under what conditions the rule may be expected to hold or would need to be extended. Probabilistic modeling and application of Bayes' rule. We find that an alternative approach to the problem of inferring seizure freedom supports using the Rule of Three in the way proposed by the ILAE in many cases, particularly in evaluating responses to a first trial of antiseizure medication, and to favorably-selected epilepsy surgical candidates. In cases where the a priori odds of success are less favorable, our analysis requires longer seizure-free observation periods before declaring seizure freedom, up to six times the average preintervention interseizure interval. The key to our approach is to take into account not only the time elapsed without seizures but also empirical data regarding the a priori probability of achieving seizure freedom conferred by a particular intervention. 
In many cases it may be reasonable to consider a patient seizure-free after they have gone without seizures for a period equal to three times the preintervention interseizure interval, as proposed on pragmatic grounds in a recent ILAE position paper, although in other commonly encountered cases a waiting time up to six times this interval is required. In this work we have provided a coherent theoretical basis for modified criterion for seizure freedom, which we call the "Rule of Three-To-Six." Wiley Periodicals, Inc. © 2011 International League Against Epilepsy.
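The statistical core of the argument can be reconstructed in miniature. Assuming (our simplification, not necessarily the paper's exact model) exponential interseizure times for patients who are not seizure-free and a simple two-hypothesis prior, Bayes' rule gives the waiting time, in multiples of the mean preintervention interseizure interval, needed to reach a given posterior confidence:

```python
import math

def required_wait_multiple(p_free, confidence=0.95):
    """Multiples of the mean preintervention interseizure interval a patient
    must remain seizure-free before P(seizure-free | observation) reaches
    `confidence`, given prior success probability p_free."""
    odds_needed = confidence / (1 - confidence)  # target posterior odds
    prior_odds = p_free / (1 - p_free)
    return math.log(odds_needed / prior_odds)

def posterior_seizure_free(p_free, wait_multiple):
    # Bayes' rule: under "not seizure-free" (exponential model), the
    # likelihood of no seizures for `wait_multiple` mean intervals is
    # exp(-wait_multiple); under "seizure-free" it is 1.
    q = (1 - p_free) * math.exp(-wait_multiple)
    return p_free / (p_free + q)
```

With a prior success probability of 0.5 this recovers ln 19 ≈ 2.94 mean intervals, i.e. the Rule of Three; lowering the prior toward roughly 0.05 pushes the requirement toward six intervals, matching the Rule of Three-To-Six.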
Revising the Rule Of Three For Inferring Seizure Freedom
Westover, M. Brandon; Cormier, Justine; Bianchi, Matt T.; Shafi, Mouhsin; Kilbride, Ronan; Cole, Andrew J.; Cash, Sydney S.
2011-01-01
Summary Purpose How long after starting a new medication must a patient go without seizures before they can be regarded as seizure free? A recent ILAE task force proposed using a “Rule of Three” as an operational definition of seizure freedom, according to which a patient should be considered seizure-free following an intervention after a period without seizures has elapsed equal to three times the longest pre-intervention inter-seizure interval over the previous year. This rule was motivated in large part by statistical considerations advanced in a classic 1983 paper by Hanley and Lippman-Hand. However, strict adherence to the statistical logic of this rule generally requires waiting much longer than recommended by the ILAE task force. Therefore, we set out to determine whether an alternative approach to the Rule of Three might be possible, and under what conditions the rule may be expected to hold or would need to be extended. Methods Probabilistic modeling and application of Bayes’ rule. Key Findings We find that an alternative approach to the problem of inferring seizure freedom supports using the Rule of Three in the way proposed by the ILAE in many cases, particularly in evaluating responses to a first trial of anti-seizure medication, and to favorably-selected epilepsy surgical candidates. In cases where the a priori odds of success are less favorable, our analysis requires longer seizure-free observation periods before declaring seizure freedom, up to six times the average pre-intervention insterseizure interval. The key to our approach is to take into account not only the time elapsed without seizures but also empirical data regarding the a priori probability of achieving seizure freedom conferred by a particular intervention. 
Significance In many cases it may be reasonable to consider a patient seizure free after they have gone without seizures for a period equal to three times the pre-intervention inter-seizure interval, as proposed on pragmatic grounds in a recent ILAE position paper, though in other commonly encountered cases a waiting time of up to six times this interval is required. In this work we have provided a coherent theoretical basis for a modified criterion for seizure freedom, which we call the “Rule of Three-To-Six”. PMID:22191711
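The Bayesian reasoning summarized above can be sketched numerically. Under the simplifying assumptions that non-responders keep their pre-intervention seizure rate (exponential recurrence) and responders never seize again, Bayes' rule yields a closed-form seizure-free waiting time needed to reach a target posterior probability of seizure freedom. All parameter values below are illustrative, not taken from the paper.

```python
import math

def seizure_free_wait(prior_success, posterior_target, mean_isi):
    """Seizure-free time (in units of the mean pre-intervention
    interseizure interval, mean_isi) needed before the posterior
    probability of seizure freedom reaches posterior_target."""
    p, q = prior_success, posterior_target
    return mean_isi * math.log(q * (1.0 - p) / (p * (1.0 - q)))

# Favourable prior (e.g. a first drug trial): about three intervals.
print(round(seizure_free_wait(0.5, 0.95, 1.0), 2))  # 2.94
# Unfavourable prior: closer to five or six intervals.
print(round(seizure_free_wait(0.1, 0.95, 1.0), 2))  # 5.14
```

With an even prior the required wait is ln(19) ≈ 3 intervals, recovering the Rule of Three; as the prior odds of success fall, the wait grows toward six intervals, matching the "Three-To-Six" range.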
Methoxyflurane anesthesia augments the chronotropic and dromotropic effects of verapamil.
Jamali, F; Mayo, P R
1999-01-01
Inhalation anesthetics have been shown to have electrical suppressant effects on excitable membranes such as the cardiac conduction system. Therefore, the anesthetized patient or laboratory animal may respond differently to cardiac drugs when compared with their conscious counterparts. The purpose of this study was to assess the effects of anesthesia with methoxyflurane (MF) on the dromotropic and chronotropic effects of verapamil (VER) in the rat. A lead I ECG was measured using subcutaneous electrodes placed in both axillae and over the xiphoid process in male Sprague-Dawley rats. The dromotropic effect was measured using the PR-interval, which reflects the electrical spread across the atria to the AV-node, and the chronotropic effect was determined using the RR-interval. A total of six animals were randomized to receive 10 mg/kg s.c. of verapamil in the presence or absence of general anesthesia with methoxyflurane. In addition, PR- and RR-intervals were determined in the presence of methoxyflurane alone and at rest without any drug exposure. The time for the ECG to normalize after exposure to methoxyflurane and/or verapamil was also determined. Exposure to verapamil alone resulted in a 5% prolongation in PR-interval and a 6% prolongation in RR-interval. Methoxyflurane alone had a larger effect than verapamil, demonstrating a statistically significant 14.5% prolongation in PR-interval and 12.3% prolongation in RR-interval. The combination of MF + VER resulted in a synergistic prolongation in PR-interval to 28.7%, while the effect on RR-interval was additive with an increase to 17.6%. The times for the ECG to normalize after exposure to VER, MF, and VER + MF were 37.5 +/- 15.1 min, 69.8 +/- 5.3 min, and 148.5 +/- 6.6 min, respectively. General anesthesia with MF enhances the dromotropic and chronotropic effects of VER. This should be considered when MF-anesthesia is used in experimental procedures.
Time from cervical conization to pregnancy and preterm birth.
Himes, Katherine P; Simhan, Hyagriv N
2007-02-01
To estimate whether the time interval between cervical conization and subsequent pregnancy is associated with risk of preterm birth. Our study is a case-control study nested in a retrospective cohort. Women who underwent colposcopic biopsy or conization with loop electrosurgical excision procedure, large loop excision of the transformation zone, or cold knife cone and subsequently delivered at our hospital were identified with electronic databases. Variables considered as possible confounders included maternal race, age, marital status, payor status, years of education, self-reported tobacco use, history of preterm delivery, and dimensions of the cone specimen. Conization was not associated with preterm birth or any subtypes of preterm birth. Among women who underwent conization, those with a subsequent preterm birth had a shorter conization-to-pregnancy interval (337 days) than women with a subsequent term birth (581 days) (P=.004). The association between short conization-to-pregnancy interval and preterm birth remained significant when controlling for confounders including race and cone dimensions. The effect of short conization-to-pregnancy interval on subsequent preterm birth was more persistent among African American women when compared with white women. Women with a short conization-to-pregnancy interval are at increased risk for preterm birth. Women of reproductive age who must have a conization procedure can be counseled that conceiving within 2 to 3 months of the procedure may be associated with an increased risk of preterm birth. Level of evidence: II.
Wang, Molin; Liao, Xiaomei; Laden, Francine; Spiegelman, Donna
2016-06-15
Identification of the latency period and age-related susceptibility, if any, is an important aspect of assessing risks of environmental, nutritional, and occupational exposures. We consider estimation and inference for latency and age-related susceptibility in relative risk and excess risk models. We focus on likelihood-based methods for point and interval estimation of the latency period and age-related windows of susceptibility coupled with several commonly considered exposure metrics. The method is illustrated in a study of the timing of the effects of constituents of air pollution on mortality in the Nurses' Health Study. Copyright © 2016 John Wiley & Sons, Ltd.
2D Slightly Compressible Ideal Flow in an Exterior Domain
NASA Astrophysics Data System (ADS)
Secchi, Paolo
2006-12-01
We consider the Euler equations of barotropic inviscid compressible fluids in an exterior domain. It is well known that, as the Mach number goes to zero, the compressible flows approximate the solution of the equations of motion of inviscid, incompressible fluids. In dimension 2 such a limit solution exists on an arbitrary time interval, with no restriction on the size of the initial data. It is then natural to expect the same for the compressible solution, if the Mach number is sufficiently small. First we study the life span of smooth irrotational solutions, i.e. the largest time interval T(ɛ) of existence of classical solutions, when the initial data are a small perturbation of size ɛ from a constant state. Then, we study the nonlinear interaction between the irrotational part and the incompressible part of a general solution. This analysis yields the existence of smooth compressible flow on an arbitrary time interval, with no restriction on the size of the initial velocity, for any sufficiently small Mach number. Finally, the approach is applied to the study of the incompressible limit. For the proofs we use a combination of energy estimates and a decay estimate for the irrotational part.
Orbital Evolution of Jupiter-Family Comets
NASA Technical Reports Server (NTRS)
Ipatov, S. I.; Mather, J. S.; Oegerle, William R. (Technical Monitor)
2002-01-01
We investigated the evolution, for periods of at least 5-10 Myr, of 2500 Jupiter-crossing objects (JCOs) under the gravitational influence of all planets except Mercury and Pluto (without dissipative factors). In the first series we considered N = 2000 orbits near the orbits of 30 real Jupiter-family comets with periods less than 10 yr, and in the second series we took 500 orbits close to the orbit of Comet 10P/Tempel 2. We calculated the probabilities of collisions of objects with the terrestrial planets, using orbital elements obtained with a step equal to 500 yr, and then summed the results for all time intervals and all bodies, obtaining the total probability P_σ of collisions with a planet and the total time interval T_σ during which the perihelion distance of bodies was less than the semimajor axis of the planet. The values of P = 10^6 P_σ/N and T = T_σ/1000 yr are presented in the table, together with the ratio r of the total time interval when orbits were of Apollo type (at e < 0.999) to that of Amor type.
Response of noctilucent cloud brightness to daily solar variations
NASA Astrophysics Data System (ADS)
Dalin, P.; Pertsev, N.; Perminov, V.; Dubietis, A.; Zadorozhny, A.; Zalcik, M.; McEachran, I.; McEwan, T.; Černis, K.; Grønne, J.; Taustrup, T.; Hansen, O.; Andersen, H.; Melnikov, D.; Manevich, A.; Romejko, V.; Lifatova, D.
2018-04-01
For the first time, long-term data sets of ground-based observations of noctilucent clouds (NLC) around the globe have been analyzed in order to investigate the response of NLC to solar UV irradiance variability on a day-to-day scale. NLC brightness has been considered versus variations of the solar Lyman-α flux. We have found that day-to-day solar variability, whose effect is generally masked in the natural NLC variability, has a statistically significant effect when large statistics covering more than ten years are considered. An average increase in the day-to-day solar Lyman-α flux results in an average decrease in day-to-day NLC brightness, which can be explained by robust physical mechanisms taking place in the summer mesosphere. Average time lags between variations of the Lyman-α flux and NLC brightness are short (0-3 days), suggesting a dominant role of direct solar heating and of the dynamical mechanism compared to photodissociation of water vapor by the solar Lyman-α flux. All of the regularities found are consistent between the various ground-based NLC data sets collected at different locations around the globe and for various time intervals. Signatures of a 27-day periodicity seem to be present in the NLC brightness for individual summertime intervals; however, this oscillation cannot be unambiguously retrieved due to inevitable periods of tropospheric cloudiness.
Koerbin, G; Cavanaugh, J A; Potter, J M; Abhayaratna, W P; West, N P; Glasgow, N; Hawkins, C; Armbruster, D; Oakman, C; Hickman, P E
2015-02-01
Development of reference intervals is difficult, time consuming, expensive and beyond the scope of most laboratories. The Aussie Normals study is a direct a priori study to determine reference intervals in healthy Australian adults. All volunteers completed a health and lifestyle questionnaire, and exclusion was based on conditions such as pregnancy, diabetes, renal or cardiovascular disease. Up to 91 biochemical analyses were undertaken on a variety of analytical platforms using serum samples collected from 1856 volunteers. We report on our findings for 40 of these analytes and two calculated parameters performed on the Abbott ARCHITECT ci8200/ci16200 analysers. Not all samples were analysed for all assays due to volume requirements or assay/instrument availability. Results with elevated interference indices and those deemed unsuitable after clinical evaluation were removed from the database. Reference intervals were partitioned based on the method of Harris and Boyd into three scenarios: combined gender, males and females, and age and gender. We have performed a detailed reference interval study on a healthy Australian population considering the effects of sex, age and body mass. These reference intervals may be adapted to other manufacturers' analytical methods using method transference.
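At its core, a direct a priori reference interval is the central 95% of the healthy cohort's values. The sketch below shows the nonparametric percentile version on simulated data (not the Aussie Normals values); partitioning by sex or age, per the Harris and Boyd criterion, would simply repeat this for each subgroup.

```python
import numpy as np

def reference_interval(values, lower=2.5, upper=97.5):
    """Nonparametric central 95% reference interval (percentile method)."""
    v = np.asarray(values, dtype=float)
    return float(np.percentile(v, lower)), float(np.percentile(v, upper))

# Hypothetical healthy-cohort serum sodium values (mmol/L).
rng = np.random.default_rng(0)
sodium = rng.normal(140.0, 2.5, size=2000)
lo, hi = reference_interval(sodium)   # roughly (135, 145)
```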
Dynamics of Stability of Orientation Maps Recorded with Optical Imaging.
Shumikhina, S I; Bondar, I V; Svinov, M M
2018-03-15
Orientation selectivity is an important feature of visual cortical neurons. Optical imaging of the visual cortex allows for the generation of maps of orientation selectivity that reflect the activity of large populations of neurons. To estimate the statistical significance of effects of experimental manipulations, evaluation of the stability of cortical maps over time is required. Here, we performed optical imaging recordings of the visual cortex of anesthetized adult cats. Monocular stimulation with moving clockwise square-wave gratings that continuously changed orientation and direction was used as the mapping stimulus. Recordings were repeated at various time intervals, from 15 min to 16 h. Quantification of map stability was performed on a pixel-by-pixel basis using several techniques. Map reproducibility showed clear dynamics over time. The highest degree of stability was seen in maps recorded 15-45 min apart. Averaging across all time intervals and all stimulus orientations revealed a mean shift of 2.2 ± 0.1°. There was a significant tendency for larger shifts to occur at longer time intervals. Shifts between 2.8° (mean ± 2SD) and 5° were observed more frequently at oblique orientations, while shifts greater than 5° appeared more frequently at cardinal orientations. Shifts greater than 5° occurred rarely overall (5.4% of cases) and never exceeded 11°. Shifts of 10-10.6° (0.7%) were seen occasionally at time intervals of more than 4 h. Our findings should be considered when evaluating the potential effect of experimental manipulations on orientation selectivity mapping studies. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.
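The pixel-by-pixel shift quantification described above must respect the 180° periodicity of orientation preference (a 178° preference is only 2° away from 0°). A minimal sketch, using hypothetical 2×2 maps rather than the recorded data:

```python
import numpy as np

def mean_orientation_shift(map_a, map_b):
    """Mean absolute preferred-orientation shift (degrees) between two
    orientation maps, respecting the 180-degree periodicity."""
    d = np.abs(np.asarray(map_a, dtype=float) - np.asarray(map_b, dtype=float)) % 180.0
    d = np.minimum(d, 180.0 - d)   # wrap shifts onto [0, 90]
    return float(d.mean())

a = np.array([[0.0, 45.0], [90.0, 135.0]])     # preferred orientations
b = np.array([[178.0, 47.0], [92.0, 133.0]])   # 178 is 2 deg from 0
print(mean_orientation_shift(a, b))  # 2.0
```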
The geomagnetic jerk of 1969 and the DGRFs
Thompson, D.; Cain, J.C.
1987-01-01
Cubic spline fits to the DGRF/IGRF series indicate agreement with other analyses showing the 1969-1970 magnetic jerk in the ḣ₂¹ and ġ₂⁰ secular-change coefficients, and agreement that the ḣ₁¹ term showed no sharp change. The variation of the ġ₁⁰ term is out of phase with other analyses, indicating a likely error in its representation in the 1965-1975 interval. We recommend that future derivations of the 'definitive' geomagnetic reference models take into consideration the times of impulses or jerks so as not to be bound to a standard 5-year interval, and otherwise make more considered analyses before adopting sets of coefficients. © 1987.
Rectal temperature-based death time estimation in infants.
Igari, Yui; Hosokai, Yoshiyuki; Funayama, Masato
2016-03-01
In determining the time of death in infants based on rectal temperature, the same methods used in adults are generally used. However, whether the methods for adults are suitable for infants is unclear. In this study, we examined the following 3 methods in 20 infant death cases: computer simulation of rectal temperature based on the infinite cylinder model (Ohno's method), computer-based double exponential approximation based on Marshall and Hoare's double exponential model with Henssge's parameter determination (Henssge's method), and computer-based collinear approximation based on extrapolation of the rectal temperature curve (collinear approximation). The interval between the last time the infant was seen alive and the time that he/she was found dead was defined as the death time interval and compared with the estimated time of death. In Ohno's method, 7 cases were within the death time interval, and the average deviation in the other 12 cases was approximately 80 min. The results of both Henssge's method and collinear approximation were apparently inferior to the results of Ohno's method. The corrective factor was set within the range of 0.7-1.3 in Henssge's method, and a modified program was newly developed to make it possible to change the corrective factors. Modification A, in which the upper limit of the corrective factor range was set as the maximum value for each body weight, produced the best results: 8 cases were within the death time interval, and the average deviation in the other 12 cases was approximately 80 min. There was a possibility that the influence of thermal isolation on the actual infants was stronger than that previously shown by Henssge. We conclude that Ohno's method and Modification A are useful for death time estimation in infants. However, it is important to accept the estimated time of death with a certain latitude, considering other circumstances. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
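For reference, the Marshall and Hoare double exponential model with Henssge's parameter determination can be inverted numerically for the post-mortem interval. The sketch below uses Henssge's published constants for ambient temperatures up to about 23 °C; the case values are hypothetical and no infant-specific corrective factor (the subject of the study) is applied.

```python
import math

def henssge_ratio(t, B):
    """Normalized rectal cooling after t hours (ambient <= 23 C)."""
    return 1.25 * math.exp(B * t) - 0.25 * math.exp(5.0 * B * t)

def estimate_pmi(t_rectal, t_ambient, mass_kg, corrective=1.0,
                 t_initial=37.2):
    """Post-mortem interval (hours) by bisection on the monotonically
    decreasing Henssge cooling curve."""
    B = -1.2815 * (corrective * mass_kg) ** -0.625 + 0.0284
    target = (t_rectal - t_ambient) / (t_initial - t_ambient)
    lo, hi = 0.0, 200.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if henssge_ratio(mid, B) > target:
            lo = mid   # model still warmer than observed: t > mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical 5 kg infant: rectal 30 C, ambient 20 C.
pmi = estimate_pmi(30.0, 20.0, 5.0)   # roughly 1.7 h
```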
NASA Astrophysics Data System (ADS)
Shi, Yongli; Wu, Zhong; Zhi, Kangyi; Xiong, Jun
2018-03-01
In order to realize reliable commutation of brushless DC motors (BLDCMs), this paper proposes a simple approach to detect and correct signal faults of Hall position sensors. First, the time instant of the next jumping edge of the Hall signals is predicted using prior information on the pulse intervals in the last electrical period. Considering the possible errors between the predicted instant and the real one, a confidence interval is set using the predicted value and a suitable tolerance for the next pulse edge. According to the relationship between the real pulse edge and the confidence interval, Hall signals can be judged and signal faults can be corrected. Experimental results for a BLDCM at steady speed demonstrate the effectiveness of the approach.
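A minimal sketch of the edge-prediction idea, with illustrative numbers rather than the paper's controller: at steady speed, consecutive Hall edges are roughly equally spaced, so the last pulse interval predicts the next edge, and an edge falling outside the confidence interval is flagged as a fault to be corrected.

```python
def predict_next_edge(edge_times):
    """Predict the next Hall edge time from the last pulse interval
    (steady speed assumed, so consecutive intervals are similar)."""
    last_interval = edge_times[-1] - edge_times[-2]
    return edge_times[-1] + last_interval

def edge_is_valid(observed, predicted, tolerance):
    """Accept an edge only if it falls inside the confidence interval
    [predicted - tolerance, predicted + tolerance]."""
    return abs(observed - predicted) <= tolerance

edges = [0.0, 1.0, 2.0, 3.0]           # ms, steady speed
pred = predict_next_edge(edges)        # 4.0 ms expected
print(edge_is_valid(4.05, pred, 0.2))  # True: plausible edge
print(edge_is_valid(3.2, pred, 0.2))   # False: spurious edge, correct it
```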
NASA Astrophysics Data System (ADS)
Ren, Lixia; He, Li; Lu, Hongwei; Chen, Yizhong
2016-08-01
A new Monte Carlo-based interval transformation analysis (MCITA) is used in this study for multi-criteria decision analysis (MCDA) of naphthalene-contaminated groundwater management strategies. The analysis can be conducted when input data such as total cost, contaminant concentration and health risk are represented as intervals. Compared to traditional MCDA methods, MCITA-MCDA has the advantages of (1) dealing with the inexactness of input data represented as intervals, (2) mitigating computational time due to the introduction of the Monte Carlo sampling method, and (3) identifying the most desirable management strategies under data uncertainty. A real-world case study is employed to demonstrate the performance of this method. A set of inexact management alternatives is considered for each duration on the basis of four criteria. Results indicated that the most desirable management strategy lay in action 15 for the 5-year, action 8 for the 10-year, action 12 for the 15-year, and action 2 for the 20-year management.
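The core MCITA step, sampling crisp values from interval-valued criteria and tallying how often each alternative wins, can be sketched as follows; the alternatives, intervals and weights are made up for illustration and the real method's transformation details are omitted.

```python
import random

def mcita_rank(alternatives, weights, n_samples=10000, seed=1):
    """alternatives: {name: [(lo, hi), ...]} interval scores per
    criterion (lower is better, e.g. cost, concentration, risk).
    Returns the fraction of Monte Carlo samples in which each
    alternative attains the best (lowest) weighted score."""
    rng = random.Random(seed)
    wins = {name: 0 for name in alternatives}
    for _ in range(n_samples):
        scores = {
            name: sum(w * rng.uniform(lo, hi)
                      for w, (lo, hi) in zip(weights, intervals))
            for name, intervals in alternatives.items()
        }
        wins[min(scores, key=scores.get)] += 1
    return {name: wins[name] / n_samples for name in wins}

# Hypothetical two-criterion example (cost, risk), equal weights.
alts = {"action A": [(10, 14), (0.2, 0.4)],
        "action B": [(12, 20), (0.1, 0.5)]}
freq = mcita_rank(alts, [0.5, 0.5])
```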
Market-based control strategy for long-span structures considering the multi-time delay issue
NASA Astrophysics Data System (ADS)
Li, Hongnan; Song, Jianzhu; Li, Gang
2017-01-01
To address the different time delays that exist in the control devices installed on spatial structures, in this study discrete analysis using the 2^N precise integration algorithm was selected to solve the multi-time-delay issue for long-span structures based on the market-based control (MBC) method. The concept of interval mixed energy was introduced from the computational structural mechanics and optimal control research areas, and it translates the design of the MBC multi-time-delay controller into a solution for the segment matrix. This approach transforms the serial algorithm in time into parallel computing in space, greatly improving the solving efficiency and numerical stability. The designed controller is able to consider the issue of time delay with a linear control force combination and is especially effective for large time-delay conditions. A numerical example of a long-span structure was selected to demonstrate the effectiveness of the presented controller, and the time delay was found to have a significant impact on the results.
NASA Technical Reports Server (NTRS)
Nandi, P. S.; Spodick, D. H.
1977-01-01
The time course of the recovery period was characterized by noninvasive measurements after 4-minute bicycle exercise at 3 separate work loads in volunteers with normal peak responses. Most responses started immediately to return toward resting control values. Left ventricular ejection time and stroke volume changes are discussed. Changes in the pre-ejection period were determined by changes in isovolumic contraction time, and factors affecting the degree and rate of return are considered. The rates of change in the ejection time index and in the ratio of pre-ejection period to left ventricular ejection time were virtually independent of load throughout most of recovery.
Analysis of an inventory model for both linearly decreasing demand and holding cost
NASA Astrophysics Data System (ADS)
Malik, A. K.; Singh, Parth Raj; Tomar, Ajay; Kumar, Satish; Yadav, S. K.
2016-03-01
This study proposes the analysis of an inventory model with linearly decreasing demand and holding cost for non-instantaneous deteriorating items. The inventory model focuses on commodities having linearly decreasing demand without shortages. The holding cost does not remain uniform with time due to variation in the time value of money; here we consider that the holding cost decreases with respect to time. The optimal time interval for the total profit and the optimal order quantity are determined. The developed inventory model is illustrated through a numerical example. It also includes a sensitivity analysis.
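A numerical sketch of such a model, with all parameter values hypothetical: the paper's closed-form optimization is replaced here by a brute-force search over the cycle length, with linear demand rate a − b·t and linear holding cost rate h0 − h1·t, and deterioration ignored for brevity.

```python
import numpy as np

def avg_profit(T, price=10.0, unit_cost=6.0, setup=50.0,
               a=100.0, b=2.0, h0=0.5, h1=0.01):
    """Average profit per unit time over a replenishment cycle of
    length T, with demand rate a - b*t and holding cost rate
    h0 - h1*t (both decreasing in time), and no shortages."""
    t = np.linspace(0.0, T, 2001)
    dt = t[1] - t[0]
    # on-hand inventory equals the remaining demand in the cycle
    inventory = a * (T - t) - b * (T**2 - t**2) / 2.0
    q = inventory[0]                       # order quantity
    holding = float(np.sum((h0 - h1 * t) * inventory) * dt)
    return ((price - unit_cost) * q - setup - holding) / T

# Grid search for the profit-maximizing cycle length.
best_T = float(max(np.arange(0.1, 20.0, 0.1), key=avg_profit))
```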
Nomura, Shuhei; Blangiardo, Marta; Tsubokura, Masaharu; Nishikawa, Yoshitaka; Gilmour, Stuart; Kami, Masahiro; Hodgson, Susan
2016-01-01
Considering the health impacts of evacuation is fundamental to disaster planning, especially for vulnerable elderly populations; however, evacuation-related mortality risks have not been well investigated. We conducted an analysis to compare survival of evacuated and non-evacuated residents of elderly care facilities following the Great East Japan Earthquake and the subsequent Fukushima Dai-ichi nuclear power plant incident on 11th March 2011. To assess associations between evacuation and mortality after the Fukushima nuclear incident; and to present discussion points on disaster planning, with reference to vulnerable elderly populations. The study population comprised 1,215 residents admitted to seven elderly care facilities located 20-40 km from the nuclear plant in the five years before the incident. Demographic and clinical characteristics were obtained from medical records. Evacuation histories were tracked until mid-2013. Main outcome measures are hazard ratios in evacuees versus non-evacuees using random-effects Cox proportional hazards models, and pre- and post-disaster survival probabilities and relative mortality incidence. Experiencing the disasters did not have a significant influence on mortality (hazard ratio 1.10, 95% confidence interval: 0.84-1.43). Evacuation was associated with 1.82 times higher mortality (95% confidence interval: 1.22-2.70) after adjusting for confounders, with the initial evacuation from the original facility associated with 3.37 times higher mortality risk (95% confidence interval: 1.66-6.81) than non-evacuation. The government should consider updating its requirements for emergency planning for elderly facilities and ensure that, in a disaster setting, these facilities have the capacity and support to shelter in place for at least sufficient time to adequately prepare an initial evacuation. Copyright © 2015 Elsevier Inc. All rights reserved.
A subharmonic dynamical bifurcation during in vitro epileptiform activity
NASA Astrophysics Data System (ADS)
Perez Velazquez, Jose L.; Khosravani, Houman
2004-06-01
Epileptic seizures are considered to result from a sudden change in the synchronization of firing neurons in brain neural networks. We have used an in vitro model of status epilepticus (SE) to characterize dynamical regimes underlying the observed seizure-like activity. Time intervals between spikes or bursts were used as the variable to construct first-return interpeak or interburst interval plots, for studying neuronal population activity during the transition to seizure, as well as within seizures. Return maps constructed for a brief epoch before seizures were used for approximating the local system dynamics during that time window. Analysis of the first-return maps suggests that intermittency is a dynamical regime underlying the observed epileptic activity. This type of analysis may be useful for understanding the collective dynamics of neuronal populations in the normal and pathological brain.
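The first-return interval plots described above are straightforward to construct from a spike-time series; a minimal sketch with hypothetical spike times:

```python
import numpy as np

def first_return_map(spike_times):
    """Pairs (I_n, I_{n+1}) of successive interspike (or interburst)
    intervals, ready to plot as a first-return map."""
    isi = np.diff(np.asarray(spike_times, dtype=float))
    return isi[:-1], isi[1:]

spikes = [0.0, 0.10, 0.25, 0.33, 0.50]   # hypothetical spike times (s)
x, y = first_return_map(spikes)
# x ~ [0.10, 0.15, 0.08], y ~ [0.15, 0.08, 0.17]
```

Clustering of points near the diagonal with occasional long excursions is one signature of the intermittency the authors describe.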
Technique for estimation of streamflow statistics in mineral areas of interest in Afghanistan
Olson, Scott A.; Mack, Thomas J.
2011-01-01
A technique for estimating streamflow statistics at ungaged stream sites in areas of mineral interest in Afghanistan using drainage-area-ratio relations of historical streamflow data was developed and is documented in this report. The technique can be used to estimate the following streamflow statistics at ungaged sites: (1) 7-day low flow with a 10-year recurrence interval, (2) 7-day low flow with a 2-year recurrence interval, (3) daily mean streamflow exceeded 90 percent of the time, (4) daily mean streamflow exceeded 80 percent of the time, (5) mean monthly streamflow for each month of the year, (6) mean annual streamflow, and (7) minimum monthly streamflow for each month of the year. Because they are based on limited historical data, the estimates of streamflow statistics at ungaged sites are considered preliminary.
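The drainage-area-ratio technique transfers a streamflow statistic from a gaged to a hydrologically similar ungaged site in proportion to drainage area. A minimal sketch with hypothetical numbers; the report's relations may use basin-specific exponents rather than the linear default shown here.

```python
def transfer_streamflow(q_gaged, area_gaged, area_ungaged, exponent=1.0):
    """Drainage-area-ratio estimate of a streamflow statistic at an
    ungaged site; exponent=1 scales flow linearly with drainage area."""
    return q_gaged * (area_ungaged / area_gaged) ** exponent

# Hypothetical: 7-day, 10-year low flow of 2.4 m3/s at a 500 km2 gage,
# transferred to a 180 km2 ungaged basin.
q_est = transfer_streamflow(2.4, 500.0, 180.0)   # 0.864 m3/s
```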
NASA Astrophysics Data System (ADS)
Niakan, F.; Vahdani, B.; Mohammadi, M.
2015-12-01
This article proposes a multi-objective mixed-integer model to optimize the location of hubs within a hub network design problem under uncertainty. The considered objectives include minimizing the maximum accumulated travel time, minimizing the total costs including transportation, fuel consumption and greenhouse emissions costs, and finally maximizing the minimum service reliability. In the proposed model, it is assumed that for connecting two nodes, there are several types of arc in which their capacity, transportation mode, travel time, and transportation and construction costs are different. Moreover, in this model, determining the capacity of the hubs is part of the decision-making procedure and balancing requirements are imposed on the network. To solve the model, a hybrid solution approach is utilized based on inexact programming, interval-valued fuzzy programming and rough interval programming. Furthermore, a hybrid multi-objective metaheuristic algorithm, namely multi-objective invasive weed optimization (MOIWO), is developed for the given problem. Finally, various computational experiments are carried out to assess the proposed model and solution approaches.
Statistics of a neuron model driven by asymmetric colored noise.
Müller-Hansen, Finn; Droste, Felix; Lindner, Benjamin
2015-02-01
Irregular firing of neurons can be modeled as a stochastic process. Here we study the perfect integrate-and-fire neuron driven by dichotomous noise, a Markovian process that jumps between two states (i.e., possesses a non-Gaussian statistics) and exhibits nonvanishing temporal correlations (i.e., represents a colored noise). Specifically, we consider asymmetric dichotomous noise with two different transition rates. Using a first-passage-time formulation, we derive exact expressions for the probability density and the serial correlation coefficient of the interspike interval (time interval between two subsequent neural action potentials) and the power spectrum of the spike train. Furthermore, we extend the model by including additional Gaussian white noise, and we give approximations for the interspike interval (ISI) statistics in this case. Numerical simulations are used to validate the exact analytical results for pure dichotomous noise, and to test the approximations of the ISI statistics when Gaussian white noise is included. The results may help to understand how correlations and asymmetry of noise and signals in nerve cells shape neuronal firing statistics.
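The model can also be simulated directly to check such expressions against empirical ISI statistics. The sketch below uses a simple Euler scheme and illustrative parameters; the paper's results are exact and analytical, not simulated.

```python
import random

def simulate_pif_isi(mu=1.0, a_plus=0.5, a_minus=0.5, k_plus=1.0,
                     k_minus=1.0, threshold=1.0, n_spikes=500,
                     dt=1e-3, seed=42):
    """Interspike intervals of a perfect integrate-and-fire neuron,
    v' = mu + eta(t), where eta(t) is dichotomous noise jumping
    between +a_plus and -a_minus with rates k_plus (leaving the
    plus state) and k_minus (leaving the minus state)."""
    rng = random.Random(seed)
    v, eta = 0.0, a_plus
    t, t_last, isis = 0.0, 0.0, []
    while len(isis) < n_spikes:
        # Markov switching of the noise state
        if eta > 0.0:
            if rng.random() < k_plus * dt:
                eta = -a_minus
        elif rng.random() < k_minus * dt:
            eta = a_plus
        v += (mu + eta) * dt   # Euler step of the membrane equation
        t += dt
        if v >= threshold:     # spike: record ISI and reset
            isis.append(t - t_last)
            t_last, v = t, 0.0
    return isis

isis = simulate_pif_isi()
mean_isi = sum(isis) / len(isis)
# With symmetric rates the noise has zero mean, so the mean ISI is
# close to threshold/mu = 1.
```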
Parametric Model Based On Imputations Techniques for Partly Interval Censored Data
NASA Astrophysics Data System (ADS)
Zyoud, Abdallah; Elfaki, F. A. M.; Hrairi, Meftah
2017-12-01
The term ‘survival analysis’ has been used in a broad sense to describe a collection of statistical procedures for data analysis in which the outcome variable of interest is the time until an event occurs, where the time to failure of a specific experimental unit may be censored: right, left, interval, or Partly Interval Censored (PIC). In this paper, analysis of this model was conducted based on a parametric Cox model via PIC data. Moreover, several imputation techniques were used, namely: midpoint, left & right point, random, mean, and median imputation. Maximum likelihood estimation was used to obtain the estimated survival function. These estimates were then compared with existing models, such as the Turnbull and Cox models, based on clinical trial data (breast cancer data), which showed the validity of the proposed model. Results on the data set indicated that the parametric Cox model proved superior in terms of estimation of survival functions, likelihood ratio tests, and their P-values. Moreover, among the imputation techniques, the midpoint, random, mean, and median approaches showed better results with respect to the estimation of the survival function.
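The single-value imputation schemes named above map each censoring interval to one event time, after which standard survival machinery applies. A minimal sketch of the simplest of them; the mean and median imputations in the paper operate on cohort-level statistics and are omitted here.

```python
import random

def impute_interval(lo, hi, method="midpoint", rng=None):
    """Single imputation of an event time known only to lie in
    (lo, hi): midpoint, left endpoint, right endpoint, or a
    uniform random draw from the interval."""
    if method == "midpoint":
        return (lo + hi) / 2
    if method == "left":
        return lo
    if method == "right":
        return hi
    if method == "random":
        return (rng or random).uniform(lo, hi)
    raise ValueError(method)

# Interval-censored failure known to occur between visits at 6 and 10.
print(impute_interval(6, 10))            # 8.0
print(impute_interval(6, 10, "right"))   # 10
```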
On Some Confidence Intervals for Estimating the Mean of a Skewed Population
ERIC Educational Resources Information Center
Shi, W.; Kibria, B. M. Golam
2007-01-01
A number of methods are available in the literature to measure confidence intervals. Here, confidence intervals for estimating the population mean of a skewed distribution are considered. This note proposes two alternative confidence intervals, namely, Median t and Mad t, which are simple adjustments to the Student's t confidence interval. In…
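For context, the baseline these proposals adjust is the classical Student's t interval. The sketch below shows only that baseline; the exact Median t and Mad t adjustments (which substitute robust centre and scale estimates) are defined in the note and not reproduced here.

```python
import math
import statistics

def student_t_ci(sample, t_crit):
    """Classical Student's t confidence interval for the mean."""
    n = len(sample)
    centre = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(n)   # sample SD / sqrt(n)
    return centre - t_crit * se, centre + t_crit * se

# Hypothetical right-skewed sample; t_crit = t_{0.975} with 7 df.
data = [1.2, 0.8, 1.5, 2.0, 0.9, 1.1, 3.5, 1.0]
lo, hi = student_t_ci(data, 2.365)   # about (0.75, 2.25)
```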
NASA Astrophysics Data System (ADS)
Pradhan, Moumita; Pradhan, Dinesh; Bandyopadhyay, G.
2010-10-01
Fuzzy systems have demonstrated their ability to solve different kinds of problems in various application domains, and there is increasing interest in applying fuzzy concepts to improve the tasks of any system. Here, a case study of a thermal power plant is considered. The existing time estimation represents the time to complete tasks. Applying the fuzzy linear approach, it becomes clear that at each confidence level the least time is taken to complete the tasks; as the time schedule shortens, less cost is needed. The objective of this paper is to show how a system becomes more efficient by applying the fuzzy linear approach. In this paper we want to optimize the time estimation to perform all tasks on appropriate time schedules. For the case study, the optimistic time (to), pessimistic time (tp) and most likely time (tm) are considered as data collected from the thermal power plant. These time estimates help to calculate the expected time (te), which represents the time to complete a particular task considering all happenings. Using the project evaluation and review technique (PERT) and critical path method (CPM), the critical path duration (CPD) of this project is calculated; this indicates a fifty percent probability that the total tasks can be completed in fifty days. Using the critical path duration and the standard deviation of the critical path, the total project completion time can be estimated by applying the normal distribution. Using the trapezoidal rule on the four time estimates (to, tm, tp, te), we can calculate the defuzzified value of the time estimates. For the fuzzy range, we consider four confidence levels, namely 0.4, 0.6, 0.8 and 1. From our study, it is seen that time estimates at confidence levels between 0.4 and 0.8 give better results than the other confidence levels.
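The PERT time estimates mentioned above combine by the standard beta-distribution approximation; the numbers below are illustrative, not the plant data.

```python
def pert_expected(to, tm, tp):
    """Expected task time te under the PERT beta approximation."""
    return (to + 4 * tm + tp) / 6.0

def pert_sd(to, tp):
    """Standard deviation of the task time under the same approximation."""
    return (tp - to) / 6.0

# Hypothetical task: optimistic 4, most likely 6, pessimistic 10 days.
te = pert_expected(4, 6, 10)   # about 6.33 days
sd = pert_sd(4, 10)            # 1.0 day
# Summing te along the critical path gives the CPD, which the project
# meets with probability 0.5; the normal distribution with the path's
# combined sd gives probabilities for other completion times.
```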
Cure rate model with interval censored data.
Kim, Yang-Jin; Jhun, Myoungshic
2008-01-15
In cancer trials, a significant fraction of patients can be cured, that is, the disease is completely eliminated, so that it never recurs. In general, treatments are developed both to increase the patients' chances of being cured and to prolong the survival time among non-cured patients. A cure rate model represents a combination of a cure fraction and a survival model, and can be applied to many clinical studies over several types of cancer. In this article, the cure rate model is considered for interval censored data, in which the event time of interest is only known to lie between two observation time points. Interval censored data commonly occur in studies of diseases that often progress without symptoms, requiring clinical evaluation for detection (Encyclopedia of Biostatistics. Wiley: New York, 1998; 2090-2095). In our study, an approximate likelihood approach suggested by Goetghebeur and Ryan (Biometrics 2000; 56:1139-1144) is used to derive the likelihood for interval censored data. In addition, a frailty model is introduced to characterize the association between the cure fraction and the survival model. In particular, the positive association between the cure fraction and the survival time is incorporated by imposing a common normal frailty effect. The EM algorithm is used to estimate parameters, and a multiple imputation based on the profile likelihood is adopted for variance estimation. The approach is applied to a smoking cessation study in which the event of interest is a smoking relapse and several covariates, including an intensive care treatment, are evaluated to be effective for both the occurrence of relapse and the non-smoking duration. Copyright (c) 2007 John Wiley & Sons, Ltd.
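The population survival function implied by a cure rate model is a mixture: the cured fraction never experiences the event, while the rest follow an ordinary survival model. A minimal sketch with an exponential latency, for illustration only; the paper's model additionally handles interval censoring and a shared frailty.

```python
import math

def population_survival(t, cure_fraction, rate):
    """Mixture cure model: S_pop(t) = p + (1 - p) * S(t), with an
    exponential survival S(t) = exp(-rate * t) for the non-cured."""
    return cure_fraction + (1.0 - cure_fraction) * math.exp(-rate * t)

print(round(population_survival(0.0, 0.3, 0.5), 3))   # 1.0
print(round(population_survival(10.0, 0.3, 0.5), 3))  # 0.305
```

The survival curve plateaus at the cure fraction (here 0.3) instead of decaying to zero, which is the defining signature of a cured subpopulation.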
Optimizing some 3-stage W-methods for the time integration of PDEs
NASA Astrophysics Data System (ADS)
Gonzalez-Pinto, S.; Hernandez-Abreu, D.; Perez-Rodriguez, S.
2017-07-01
The optimization of some W-methods for the time integration of time-dependent PDEs in several spatial variables is considered. In [2, Theorem 1], several three-parametric families of three-stage W-methods for the integration of IVPs in ODEs were studied. In addition, several specific methods were optimized for PDEs when the Approximate Matrix Factorization Splitting (AMF) is used to define the approximate Jacobian matrix (W ≈ fy(yn)), and some convergence and stability properties were presented [2]. The derived methods were optimized on the basis that the underlying explicit Runge-Kutta method has the largest monotonicity interval among the three-stage, order-three Runge-Kutta methods [1]. Here, we propose an optimization of the methods by imposing an additional order condition [7] to retain order three for parabolic PDE problems [6], but at the price of substantially reducing the length of the nonlinear monotonicity interval of the underlying explicit Runge-Kutta method.
The steady part of the secular variation of the Earth's magnetic field
NASA Technical Reports Server (NTRS)
Bloxham, Jeremy
1992-01-01
The secular variation of the Earth's magnetic field results from the effects of magnetic induction in the fluid outer core and from the effects of magnetic diffusion in the core and the mantle. Adequate observations to map the magnetic field at the core-mantle boundary extend back over three centuries, providing a model of the secular variation at the core-mantle boundary. Here we consider how best to analyze this time-dependent part of the field. To calculate steady core flow over long time periods, we introduce an adaptation of our earlier method of calculating the flow in order to achieve greater numerical stability. We perform this procedure for the periods 1840-1990 and 1690-1840 and find that well over 90 percent of the variance of the time-dependent field can be explained by simple steady core flow. The core flows obtained for the two intervals are broadly similar to each other and to flows determined over much shorter recent intervals.
A long time span relativistic precession model of the Earth
NASA Astrophysics Data System (ADS)
Tang, Kai; Soffel, Michael H.; Tao, Jin-He; Han, Wen-Biao; Tang, Zheng-Hong
2015-04-01
A numerical solution to the Earth's precession in a relativistic framework for a long time span is presented here. We obtain the motion of the solar system in the Barycentric Celestial Reference System by numerical integration with a symplectic integrator. Special Newtonian corrections accounting for tidal dissipation are included in the force model. The part representing Earth's rotation is calculated in the Geocentric Celestial Reference System by integrating the post-Newtonian equations of motion published by Klioner et al. All the main relativistic effects are included following Klioner et al. In particular, we consider several relativistic reference systems with corresponding time scales, scaled constants and parameters. Approximate expressions for Earth's precession in the interval ±1 Myr around J2000.0 are provided. In the interval ±2000 years around J2000.0, the difference compared to the P03 precession theory is only several arcseconds and the results are consistent with other long-term precession theories. Supported by the National Natural Science Foundation of China.
Nandini, Suresh; Ballal, Suma; Kandaswamy, Deivanayagam
2007-02-01
The prolonged setting time of mineral trioxide aggregate (MTA) is the main disadvantage of this material. This study analyzes the influence of glass-ionomer cement on the setting of MTA using laser Raman spectroscopy (LRS). MTA was placed into forty hollow glass molds. In Group I specimens, MTA was layered with glass-ionomer cement after 45 minutes; similar procedures were done for Groups II and III at 4 hours and 3 days, respectively. No glass ionomer was added in Group IV, which served as the control. Each sample was scanned at various time intervals, and at each time interval the interface between MTA and glass-ionomer cement was also scanned (excluding Group IV). The spectral analysis proved that placement of glass-ionomer cement over MTA after 45 minutes did not affect its setting reaction and that calcium salts may be formed at the interface of the two materials.
Snyder, David; Morgan, Carl
2004-09-01
Recent studies have associated interruptions of cardiopulmonary resuscitation imposed by automated external defibrillators (AEDs) with poor resuscitation outcome. In particular, the "hands-off" interval between precordial compressions and subsequent defibrillation shock has been implicated. We sought to determine the range of variation among current-generation AEDs with respect to this characteristic. Seven AEDs from six manufacturers were characterized via stopwatch and arrhythmia simulator with respect to the imposed hands-off interval. All AEDs were equipped with new batteries, and measurements were repeated five times for each AED. A wide variation in the hands-off interval between precordial compressions and shock delivery was observed, ranging from 5.2 to 28.4 secs, with only one AED achieving an interruption of <10 secs. Laboratory and clinical data suggest that this range of variation could be responsible for a more than two-fold variation in patient resuscitation success, an effect that far exceeds any defibrillation efficacy differences that may hypothetically exist. In addition to defibrillation waveform and dose, researchers should consider the hands-off cardiopulmonary resuscitation interruption interval between cardiopulmonary resuscitation and subsequent defibrillation shock to be an important covariate of outcome in resuscitation studies. Defibrillator design should minimize this interval to avoid potential adverse consequences on patient survival.
Dynamical phase transition in the simplest molecular chain model
NASA Astrophysics Data System (ADS)
Malyshev, V. A.; Muzychka, S. A.
2014-04-01
We consider the dynamics of the simplest chain of a large number N of particles. In the double scaling limit, we find the partition of the parameter space into two domains: for one domain, the supremum over the time interval (0, ∞) of the relative extension of the chain tends to 1 as N → ∞, and for the other domain, to infinity.
Estimating climate sensitivity from paleo-data.
NASA Astrophysics Data System (ADS)
Crowley, T. J.; Hegerl, G. C.
2003-12-01
For twenty years, estimates of climate sensitivity from the instrumental record have been between about 1.5 and 4.5 °C for a doubling of CO2. Various efforts, most notably by J. Hansen, and by M. Hoffert and C. Covey, have been made to test this range against paleo-data for the ice age and the Cretaceous, yielding approximately the same range with a "best guess" sensitivity of about 2.0-3.0 °C. Here we re-examine this issue with new paleo-data and also include information for the period 1000-present. For this latter interval, formal pdfs can for the first time be calculated from paleo-data. Regardless of the time interval examined, we generally find that paleo-sensitivities still fall within the range of about 1.5-4.5 °C. The primary impediments to more precise determinations involve uncertainties not only in the forcings but also in the paleo reconstructions. Barring a dramatic breakthrough in reconciling some long-standing differences in the magnitude of paleotemperature estimates from different proxies, the range of paleo-sensitivities will continue to carry this uncertainty. This range can be considered either unsatisfactory or satisfactory: unsatisfactory because some may consider it insufficiently precise; satisfactory in the sense that the range is both robust and entirely consistent with the range independently estimated from the instrumental record.
Faugeras, Olivier; Touboul, Jonathan; Cessac, Bruno
2008-01-01
We deal with the problem of bridging the gap between two scales in neuronal modeling. At the first (microscopic) scale, neurons are considered individually and their behavior described by stochastic differential equations that govern the time variations of their membrane potentials. They are coupled by synaptic connections acting on their resulting activity, a nonlinear function of their membrane potential. At the second (mesoscopic) scale, interacting populations of neurons are described individually by similar equations. The equations describing the dynamical and the stationary mean-field behaviors are considered as functional equations on a set of stochastic processes. Using this new point of view allows us to prove that these equations are well-posed on any finite time interval and to provide a constructive method for effectively computing their unique solution. This method is proved to converge to the unique solution and we characterize its complexity and convergence rate. We also provide partial results for the stationary problem on infinite time intervals. These results shed some new light on such neural mass models as the one of Jansen and Rit (1995): their dynamics appears as a coarse approximation of the much richer dynamics that emerges from our analysis. Our numerical experiments confirm that the framework we propose and the numerical methods we derive from it provide a new and powerful tool for the exploration of neural behaviors at different scales. PMID:19255631
Nimmo, Lisa M; Lewandowsky, Stephan
2006-09-01
The notion of a link between time and memory is intuitively appealing and forms the core assumption of temporal distinctiveness models. Distinctiveness models predict that items that are temporally isolated from their neighbors at presentation should be recalled better than items that are temporally crowded. By contrast, event-based theories consider time to be incidental to the processes that govern memory, and such theories would not imply a temporal isolation advantage unless participants engaged in a consolidation process (e.g., rehearsal or selective encoding) that exploited the temporal structure of the list. In this report, we examine two studies that assessed the effect of temporal distinctiveness on memory, using auditory (Experiment 1) and auditory and visual (Experiment 2) presentation with unpredictably varying interitem intervals. The results show that with unpredictable intervals temporal isolation does not benefit memory, regardless of presentation modality.
Runkel, Anthony C.; MacKey, T.J.; Cowan, Clinton A.; Fox, David L.
2010-01-01
Middle to late Cambrian time (ca. 513 to 488 Ma) is characterized by an unstable plateau in biodiversity, when depauperate shelf faunas suffered repeated extinctions. This poorly understood interval separates the Cambrian Explosion from the Great Ordovician Biodiversification Event and is generally regarded as a time of sustained greenhouse conditions. We present evidence that suggests a drastically different climate during this enigmatic interval: Features indicative of meteoric ice are well preserved in late Cambrian equatorial beach deposits that correspond to one of the shelf extinction events. Thus, the middle to late Cambrian Earth was at least episodically cold and might best be considered a muted analogue to the environmental extremes that characterized the Proterozoic, even though cooling in the two periods may have occurred in response to different triggers. Such later Cambrian conditions may have significantly impacted evolution preceding the Ordovician radiation.
Immortal time bias in observational studies of drug effects in pregnancy.
Matok, Ilan; Azoulay, Laurent; Yin, Hui; Suissa, Samy
2014-09-01
The use of decongestants during the second or third trimesters of pregnancy has been associated with a decreased risk of preterm delivery in two observational studies. This effect may have been subject to immortal time bias, a bias arising from the improper classification of exposure during follow-up. We illustrate this bias by repeating the studies using a different data source. The United Kingdom Hospital Episodes Statistics and the Clinical Practice Research Datalink databases were linked to identify all live singleton pregnancies among women aged 15 to 45 years between 1997 and 2012. Cox proportional hazards models were used to estimate hazard ratios (HRs) and 95% confidence intervals of preterm delivery (before 37 weeks of gestation) by considering the use of decongestants during the third trimester as a time-fixed exposure (biased analysis, which misclassifies unexposed person-time as exposed person-time) and as a time-varying exposure (unbiased analysis, with proper classification of unexposed person-time). All models were adjusted for maternal age, smoking status, maternal diabetes, maternal hypertension, preeclampsia, and parity. Of the 195,582 singleton deliveries, 10,248 (5.2%) were born preterm. In the time-fixed analysis, the HR of preterm delivery for the use of decongestants was below the null and suggestive of a 46% decreased risk (adjusted HR = 0.54; 95% confidence interval, 0.24-1.20). In contrast, the HR was closer to the null (adjusted HR = 0.93; 95% confidence interval, 0.42-2.06) when the use of decongestants was treated as a time-varying variable. Studies of drug safety in pregnancy should use the appropriate statistical techniques to avoid immortal time bias, particularly when the exposure occurs at later stages of pregnancy. © 2014 Wiley Periodicals, Inc.
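The bias mechanism described above (counting the unexposed "immortal" person-time before first exposure as exposed) can be made concrete with a toy cohort. The sketch below is illustrative only: fabricated follow-up times, crude incidence rate ratios rather than adjusted Cox HRs, and the simplifying assumption that events occur at the end of follow-up.

```python
def crude_rate_ratio(records, time_fixed):
    """records: (follow_up, exposure_start or None, event_at_end).
    time_fixed=True reproduces the biased classification: all
    person-time of an ever-exposed subject counts as exposed."""
    pt = {"exp": 0.0, "unexp": 0.0}   # person-time per group
    ev = {"exp": 0, "unexp": 0}       # events per group
    for follow_up, t_exp, event in records:
        ever_exposed = t_exp is not None and t_exp < follow_up
        if time_fixed:
            group = "exp" if ever_exposed else "unexp"
            pt[group] += follow_up
            ev[group] += int(event)
        elif ever_exposed:
            pt["unexp"] += t_exp            # pre-exposure person-time
            pt["exp"] += follow_up - t_exp
            ev["exp"] += int(event)         # event occurs while exposed
        else:
            pt["unexp"] += follow_up
            ev["unexp"] += int(event)
    return (ev["exp"] / pt["exp"]) / (ev["unexp"] / pt["unexp"])

# Two subjects, both with an event at week 10; one exposed from week 8.
cohort = [(10.0, 8.0, True), (10.0, None, True)]
biased = crude_rate_ratio(cohort, time_fixed=True)
unbiased = crude_rate_ratio(cohort, time_fixed=False)
```

In this toy example the time-fixed coding dilutes the exposed rate with 8 weeks of immortal person-time, pulling the rate ratio toward (or below) the null, the same direction of bias the abstract reports.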
Real-time control systems: feedback, scheduling and robustness
NASA Astrophysics Data System (ADS)
Simon, Daniel; Seuret, Alexandre; Sename, Olivier
2017-08-01
The efficient control of real-time distributed systems, where continuous components are governed through digital devices and communication networks, needs a careful examination of the constraints arising from the different involved domains inside co-design approaches. Thanks to the robustness of feedback control, both new control methodologies and slackened real-time scheduling schemes are proposed beyond the frontiers between these traditionally separated fields. A methodology to design robust aperiodic controllers is provided, where the sampling interval is considered as a control variable of the system. Promising experimental results are provided to show the feasibility and robustness of the approach.
Caminal, Pere; Sola, Fuensanta; Gomis, Pedro; Guasch, Eduard; Perera, Alexandre; Soriano, Núria; Mont, Lluis
2018-03-01
This study was conducted to test, in mountain running route conditions, the accuracy of the Polar V800™ monitor as a suitable device for monitoring the heart rate variability (HRV) of runners. Eighteen healthy subjects ran a route that included a range of running slopes such as those encountered in trail and ultra-trail races. The comparative study of a V800 and a Holter SEER 12 ECG Recorder™ included the analysis of RR time series and short-term HRV analysis. A correction algorithm was designed to obtain the corrected Polar RR intervals. Six 5-min segments related to different running slopes were considered for each subject. The correlation between corrected V800 RR intervals and Holter RR intervals was very high (r = 0.99, p < 0.001), and the bias was less than 1 ms. The limits of agreement (LoA) obtained for SDNN and RMSSD were (- 0.25 to 0.32 ms) and (- 0.90 to 1.08 ms), respectively. The effect size (ES) obtained in the time domain HRV parameters was considered small (ES < 0.2). Frequency domain HRV parameters did not differ (p > 0.05) and were well correlated (r ≥ 0.96, p < 0.001). Narrow limits of agreement, high correlations and small effect size suggest that the Polar V800 is a valid tool for the analysis of heart rate variability in athletes while running high endurance events such as marathon, trail, and ultra-trail races.
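SDNN and RMSSD, the two time-domain HRV parameters compared above, have standard definitions that are easy to state in code. A minimal sketch (population standard deviation is used for SDNN; the RR values are made-up numbers, not data from the study):

```python
from math import sqrt

def sdnn(rr):
    """Standard deviation of the RR (NN) intervals."""
    mean = sum(rr) / len(rr)
    return sqrt(sum((x - mean) ** 2 for x in rr) / len(rr))

def rmssd(rr):
    """Root mean square of successive RR-interval differences."""
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    return sqrt(sum(d * d for d in diffs) / len(diffs))

rr_ms = [800, 810, 790, 800]  # hypothetical corrected RR series, in ms
```

The agreement analysis in the study then compares these statistics between the two devices segment by segment.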
Photoinduced diffusion molecular transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rozenbaum, Viktor M., E-mail: vik-roz@mail.ru, E-mail: litrakh@gmail.com; Dekhtyar, Marina L.; Lin, Sheng Hsien
2016-08-14
We consider a Brownian photomotor, namely, the directed motion of a nanoparticle in an asymmetric periodic potential under the action of periodic rectangular resonant laser pulses which cause charge redistribution in the particle. Based on the kinetics of the photoinduced electron redistribution between two or three energy levels of the particle, the time dependence of its potential energy is derived and the average directed velocity is calculated in the high-temperature approximation (when the spatial amplitude of potential energy fluctuations is small relative to the thermal energy). The theory of photoinduced molecular transport thus developed appears applicable not only to conventional dichotomous Brownian motors (with only two possible potential profiles) but also to a much wider variety of molecular nanomachines. The distinction between the realistic time dependence of the potential energy and that for a dichotomous process (a step function) is represented in terms of relaxation times (they can differ on the time intervals of the dichotomous process). As shown, a Brownian photomotor has the maximum average directed velocity at (i) large laser pulse intensities (resulting in short relaxation times on laser-on intervals) and (ii) excited state lifetimes long enough to permit efficient photoexcitation but still much shorter than laser-off intervals. A Brownian photomotor with optimized parameters is exemplified by a cylindrically shaped semiconductor nanocluster which moves directly along a polar substrate due to a periodically photoinduced dipole moment (caused by repetitive excited electron transitions to a non-resonant level of the nanocylinder surface impurity).
Enhancing Heart-Beat-Based Security for mHealth Applications.
Seepers, Robert M; Strydis, Christos; Sourdis, Ioannis; De Zeeuw, Chris I
2017-01-01
In heart-beat-based security, a security key is derived from the time difference between consecutive heart beats (the inter-pulse interval, IPI), which may, subsequently, be used to enable secure communication. While heart-beat-based security holds promise in mobile health (mHealth) applications, there currently exists no work that provides a detailed characterization of the delivered security in a real system. In this paper, we evaluate the strength of IPI-based security keys in the context of entity authentication. We investigate several aspects that should be considered in practice, including subjects with reduced heart-rate variability (HRV), different sensor-sampling frequencies, intersensor variability (i.e., how accurate each entity may measure heart beats) as well as average and worst-case-authentication time. Contrary to the current state of the art, our evaluation demonstrates that authentication using multiple, less-entropic keys may actually increase the key strength by reducing the effects of intersensor variability. Moreover, we find that the maximal key strength of a 60-bit key varies between 29.2 bits and only 5.7 bits, depending on the subject's HRV. To improve security, we introduce the inter-multi-pulse interval (ImPI), a novel method of extracting entropy from the heart by considering the time difference between nonconsecutive heart beats. Given the same authentication time, using the ImPI for key generation increases key strength by up to 3.4 × (+19.2 bits) for subjects with limited HRV, at the cost of an extended key-generation time of 4.8 × (+45 s).
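The ImPI construction, which takes the time difference between beats that are m apart rather than adjacent, reduces to summing m consecutive IPIs. The sketch below pairs it with one common, but here merely illustrative, way of turning intervals into key bits: Gray-coding a few least-significant bits of each interval. The bit width and the sample IPI values are assumptions, not the paper's exact key-generation scheme.

```python
def impi_series(ipis, m):
    """Inter-multi-pulse intervals: time between beats m apart,
    i.e., the sum of m consecutive inter-pulse intervals."""
    return [sum(ipis[i:i + m]) for i in range(len(ipis) - m + 1)]

def key_bits(intervals, n_bits=4):
    """Gray-code the n least-significant bits of each interval (ms)."""
    chunks = []
    for iv in intervals:
        b = iv % (1 << n_bits)
        g = b ^ (b >> 1)  # Gray coding: adjacent values differ in one bit
        chunks.append(format(g, "0{}b".format(n_bits)))
    return "".join(chunks)

ipis_ms = [800, 812, 795, 820]        # hypothetical inter-pulse intervals
key = key_bits(impi_series(ipis_ms, 2))
```

Summing intervals before quantization is what trades key-generation time for entropy per bit in subjects with low HRV.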
Advanced analysis of finger-tapping performance: a preliminary study.
Barut, Cağatay; Kızıltan, Erhan; Gelir, Ethem; Köktürk, Fürüzan
2013-06-01
The finger-tapping test is a commonly employed quantitative assessment tool used to measure motor performance in the upper extremities. This task is a complex motion that is affected by external stimuli, mood and health status. The complexity of this task is difficult to explain with a single average intertap-interval value (the time difference between successive taps), which only provides general information and neglects the temporal effects of the aforementioned factors. This study evaluated the time course of average intertap-interval values and the patterns of variation in both the right and left hands of right-handed subjects using a computer-based finger-tapping system. Cross-sectional study. Thirty-eight male individuals aged between 20 and 28 years (Mean±SD = 22.24±1.65) participated in the study. Participants were asked to perform a single-finger tapping test over a 10-second test period. Only the results of the 35 right-handed (RH) participants were considered in this study. The test records the time of each tap and saves the data as the time difference between successive taps for further analysis. The average number of taps and the temporal fluctuation patterns of the intertap intervals were calculated and compared. The variations in the intertap interval were evaluated with the best-curve-fit method. An average tapping speed or tapping rate can reliably be defined for a single-finger tapping test by analysing the graphically presented data of the number of taps within the test period. However, a different presentation of the same data, namely the intertap-interval values, shows temporal variation as the number of taps increases. Curve fitting applications indicate that the variation has a biphasic nature. The measures obtained in this study reflect the complex nature of the finger-tapping task and are suggested to provide reliable information regarding hand performance.
Moreover, the equation reflects both the variations in and the general patterns associated with the task.
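The basic quantities of the test, intertap intervals and average tapping rate, follow directly from the recorded tap timestamps. A minimal sketch with made-up timestamps (not data from the study):

```python
def intertap_intervals(tap_times):
    """Time differences between successive taps, in seconds."""
    return [b - a for a, b in zip(tap_times, tap_times[1:])]

def mean_tapping_rate(tap_times):
    """Average taps per second over the recorded span."""
    return (len(tap_times) - 1) / (tap_times[-1] - tap_times[0])

taps = [0.0, 0.2, 0.5, 0.9]  # hypothetical tap timestamps (s)
```

The temporal-fluctuation analysis in the study then fits curves to the resulting interval series rather than summarizing it with a single mean.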
Finding the Best Quadratic Approximation of a Function
ERIC Educational Resources Information Center
Yang, Yajun; Gordon, Sheldon P.
2011-01-01
This article examines the question of finding the best quadratic function to approximate a given function on an interval. The prototypical function considered is f(x) = e[superscript x]. Two approaches are considered, one based on Taylor polynomial approximations at various points in the interval under consideration, the other based on the fact…
Figueras, Jaume; Heras, Magda; Baigorri, Francisco; Elosua, Roberto; Ferreira, Ignacio; Santaló, Miquel
2009-11-14
To analyze the use of reperfusion therapy in patients with ST elevation myocardial infarction (STEMI) in Catalonia in a registry performed in 2006 (IAM CAT III) and to compare it with 2 previous registries. The frequency of reperfusion therapy and the time intervals between symptom onset and reperfusion therapy were the principal variables investigated. The IAM CAT I (June-December 2000) included 1,450 patients, the IAM CAT II (October 2002-April 2003) 1,386, and the IAM CAT III (October-December 2006) 367. The proportion of patients treated with reperfusion increased progressively (72%, 79% and 81%), as did the use of primary angioplasty (5%, 10% and 33%). In the third registry, the transfer system most frequently used was the SEM/061 (17%, 32% and 47%, respectively), but the interval from symptom onset to first contact with the medical system did not improve (II, 90 vs III, 105 min), the interval from symptom onset to thrombolytic therapy hardly changed (178, 165 and 177 min), and the interval from hospital arrival to thrombolysis (door-to-needle) tended to improve (59, 42 and 42 min). Thirty-day mortality in STEMI patients declined progressively through the 3 registries (12.1%, 10.6% and 7.4%; p=0.012). The proportion of STEMI patients treated with reperfusion has improved, but the interval to its application has not been shortened. To shorten it, earlier contact with the medical system is mandatory, the door-to-needle and door-to-balloon intervals must be reduced through better coordination between the 061 service, health personnel and hospital administration, and the subject must be considered a real health priority.
Love, Jeffrey J.
2012-01-01
Statistical analysis is made of rare, extreme geophysical events recorded in historical data -- counting the number of events $k$ with sizes that exceed chosen thresholds during specific durations of time $\tau$. Under transformations that stabilize data and model-parameter variances, the most likely Poisson-event occurrence rate, $k/\tau$, applies for frequentist inference and, also, for Bayesian inference with a Jeffreys prior that ensures posterior invariance under changes of variables. Frequentist confidence intervals and Bayesian (Jeffreys) credibility intervals are approximately the same and easy to calculate: $(1/\tau)[(\sqrt{k} - z/2)^{2},(\sqrt{k} + z/2)^{2}]$, where $z$ is a parameter that specifies the width, $z=1$ ($z=2$) corresponding to $1\sigma$, $68.3\%$ ($2\sigma$, $95.4\%$). If only a few events have been observed, as is usually the case for extreme events, then these "error-bar" intervals might be considered to be relatively wide. From historical records, we estimate most likely long-term occurrence rates, 10-yr occurrence probabilities, and intervals of frequentist confidence and Bayesian credibility for large earthquakes, explosive volcanic eruptions, and magnetic storms.
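The interval formula quoted above translates directly into code. A small sketch of exactly that expression, for $k$ events observed in duration $\tau$ with width parameter $z$ (the event count and duration below are illustrative):

```python
from math import sqrt

def poisson_rate_interval(k, tau, z=1.0):
    """Approximate confidence/credibility interval for the rate k/tau:
    (1/tau) * [(sqrt(k) - z/2)**2, (sqrt(k) + z/2)**2]."""
    lo = (sqrt(k) - z / 2) ** 2 / tau
    hi = (sqrt(k) + z / 2) ** 2 / tau
    return lo, hi

# e.g., 4 extreme events observed in 100 years, 1-sigma (68.3%) width
lo, hi = poisson_rate_interval(4, 100.0, z=1.0)
```

For small $k$ the interval is visibly asymmetric about $k/\tau$, which is the point the abstract makes about extreme-event records.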
NASA Astrophysics Data System (ADS)
Paramonov, G. P.; Mysin, A. V.; Babkin, R. S.
2017-10-01
The paper introduces the construction of a multicharge composition whose parts are separated by a profiled inert interval. On the basis of previous research, the pulse-forming process during the explosion of a borehole multicharge of the proposed design is considered. A physical model is offered for determining the reflected wavelet, taking into account the increment of the radius of the cross section of the charging cavity and the outflow of detonation products. A technique is developed for numerical modeling of gas-dynamic processes in a borehole with a change in the axial channel of the profiled inert interval caused by the high-temperature flow of gaseous explosion products. The authors obtained the time dependence of the mean pressure on the borehole wall for each part of the multicharge. To blast a series of charges of the proposed design while optimizing the stress fields of neighboring charges, the delay interval for a short-delay explosion is determined.
Using Landslide Failure Forecast Models in Near Real Time: the Mt. de La Saxe case-study
NASA Astrophysics Data System (ADS)
Manconi, Andrea; Giordan, Daniele
2014-05-01
Forecasting the occurrence of landslide phenomena in space and time is a major scientific challenge. The approaches used to forecast landslides depend mainly on the spatial scale analyzed (regional vs. local), the temporal range of the forecast (long- vs. short-term), and the triggering factor and landslide typology considered. Focusing on short-term forecast methods for large, deep-seated slope instabilities, the potential time of failure (ToF) can be estimated by studying the evolution of the landslide deformation over time (i.e., the strain rate), provided that, under constant stress conditions, landslide materials follow a creep mechanism before reaching rupture. In the last decades, different procedures have been proposed to estimate the ToF using simplified empirical and/or graphical methods applied to time series of deformation data. Fukuzono (1985) proposed a failure forecast method based on large-scale laboratory experiments aimed at observing the kinematic evolution of a landslide induced by rain. This approach, also known as the inverse-velocity method, considers the evolution over time of the inverse of the surface velocity (v) as an indicator of the ToF, assuming that failure approaches as 1/v tends to zero. Here we present an innovative method aimed at forecasting landslide failure from near-real-time monitoring data. Starting from the inverse-velocity theory, we analyze landslide surface displacements over different temporal windows and then apply straightforward statistical methods to obtain confidence intervals on the time of failure. Our results can be relevant to support the management of early warning systems during landslide emergency conditions, also when predefined displacement and/or velocity thresholds are exceeded.
In addition, our statistical approach for defining the confidence interval and forecast reliability can also be applied to other failure forecast methods. We applied the approach presented herein for the first time in near real time during the emergency caused by the reactivation of the La Saxe rockslide, a large mass wasting threatening the population of Courmayeur, northern Italy, and the important European route E25. We show how the application of simplified but robust forecast models can be a convenient way to manage and support early warning systems during critical situations. References: Fukuzono T. (1985), A New Method for Predicting the Failure Time of a Slope, Proc. IVth International Conference and Field Workshop on Landslides, Tokyo.
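In its linear form, the inverse-velocity method amounts to fitting a straight line to 1/v against time and extrapolating it to the zero crossing. A minimal least-squares sketch (the velocity series is synthetic, not La Saxe monitoring data):

```python
def time_of_failure(times, velocities):
    """Fukuzono inverse-velocity method, linear case: fit
    1/v = a + b*t by least squares; ToF is where the line hits zero."""
    inv_v = [1.0 / v for v in velocities]
    n = len(times)
    t_mean = sum(times) / n
    y_mean = sum(inv_v) / n
    sxy = sum((t - t_mean) * (y - y_mean) for t, y in zip(times, inv_v))
    sxx = sum((t - t_mean) ** 2 for t in times)
    slope = sxy / sxx               # negative while failure approaches
    intercept = y_mean - slope * t_mean
    return -intercept / slope       # solve a + b*t = 0

# Synthetic accelerating displacement: 1/v = 10 - t, so failure at t = 10
ts = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
vs = [1.0 / (10.0 - t) for t in ts]
tof = time_of_failure(ts, vs)
```

Repeating this fit over sliding temporal windows, as the abstract describes, yields a distribution of ToF estimates from which confidence intervals can be derived.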
Kumar, Anupam; Kumar, Vijay
2017-05-01
In this paper, a novel concept of an interval type-2 fractional order fuzzy PID (IT2FO-FPID) controller, which requires fractional order integrator and fractional order differentiator, is proposed. The incorporation of Takagi-Sugeno-Kang (TSK) type interval type-2 fuzzy logic controller (IT2FLC) with fractional controller of PID-type is investigated for time response measure due to both unit step response and unit load disturbance. The resulting IT2FO-FPID controller is examined on different delayed linear and nonlinear benchmark plants followed by robustness analysis. In order to design this controller, fractional order integrator-differentiator operators are considered as design variables including input-output scaling factors. A new hybridized algorithm named as artificial bee colony-genetic algorithm (ABC-GA) is used to optimize the parameters of the controller while minimizing weighted sum of integral of time absolute error (ITAE) and integral of square of control output (ISCO). To assess the comparative performance of the IT2FO-FPID, authors compared it against existing controllers, i.e., interval type-2 fuzzy PID (IT2-FPID), type-1 fractional order fuzzy PID (T1FO-FPID), type-1 fuzzy PID (T1-FPID), and conventional PID controllers. Furthermore, to show the effectiveness of the proposed controller, the perturbed processes along with the larger dead time are tested. Moreover, the proposed controllers are also implemented on multi input multi output (MIMO), coupled, and highly complex nonlinear two-link robot manipulator system in presence of un-modeled dynamics. Finally, the simulation results explicitly indicate that the performance of the proposed IT2FO-FPID controller is superior to its conventional counterparts in most of the cases. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Shefer, V. A.
2015-12-01
We examine the intermediate perturbed orbit proposed previously by the author, defined from three position vectors of a small celestial body. It is shown theoretically that, for a small reference time interval covering the body positions, the accuracy with which this orbit approximates the real motion corresponds approximately to fourth-order tangency; the smaller the reference time interval, the better this correspondence. Laws are deduced for the variation of the methodical errors in constructing the intermediate orbit as a function of the length of the reference time interval. According to these laws, the convergence rate of the method to the exact solution (upon reducing the reference interval) is, in the general case, higher by three orders of magnitude than that of conventional methods using a Keplerian unperturbed orbit. The considered orbit is among the most accurate in the set of orbits of its class determined by the order of tangency. The theoretical results are validated by numerical examples. The work was supported by the Ministry of Education and Science of the Russian Federation, project no. 2014/223(1567).
Study of temperature distributions in wafer exposure process
NASA Astrophysics Data System (ADS)
Lin, Zone-Ching; Wu, Wen-Jang
During the exposure process of photolithography, the wafer absorbs the exposure energy, which results in a rising temperature and the phenomenon of thermal expansion. This phenomenon was often neglected in previous process generations due to its limited effect. However, in the new generation of processes, it may very likely become a factor to be considered. In this paper, a finite element model for analyzing the transient behavior of the wafer temperature distribution during exposure was established under the assumption that the wafer is clamped by a vacuum chuck without warpage. The model is capable of simulating the distribution of the wafer temperature under different exposure conditions. The analysis begins with the simulation of transient behavior in a single exposure region and then varies the exposure energy, the interval between exposure locations, and the interval of exposure time under continuous exposure to investigate the distribution of wafer temperature. The simulation results indicate that widening the interval between exposure locations has a greater impact on improving the distribution of wafer temperature than extending the interval of exposure time between neighboring image fields. Besides, as long as the distance between the field centers of two neighboring exposure regions exceeds a straight-line distance of three image-field widths, the interacting thermal effect during wafer exposure can be ignored. The analysis flow proposed in this paper can serve as a supporting reference tool for engineers in planning exposure paths.
NASA Astrophysics Data System (ADS)
Lapshenkov, E. M.; Volkov, V. Y.; Kulagin, V. P.
2018-05-01
The article is devoted to the problem of creating a pattern of the NMR sensor signal for subsequent recognition by an artificial neural network in the intelligent device known as "the electronic tongue". The specific problem considered is removing redundant data from the spin-spin relaxation signal pattern that is used as a source of information in analyzing the composition of oil and petroleum products. A method is proposed that makes it possible to remove redundant data from the relaxation decay pattern without introducing additional distortion. This method is based on combining relaxation decay curve intervals whose increments are below the noise level in such a way that the increment of each combined interval is above the noise level. The relaxation decay curve samples located inside the combined intervals are then removed from the pattern. The method was tested on heavy-oil NMR signal patterns created by using the Carr-Purcell-Meiboom-Gill (CPMG) sequence for recording the relaxation process. The CPMG sequence parameters were a 100 μs time interval between 180° pulses and a 0.4 s measurement duration. As a result, it was revealed that the proposed method reduced the number of samples by a factor of 15 (from 4000 to 270), with a maximum detected root mean square (RMS) error of 0.00239 (equivalent to a signal-to-noise ratio of 418).
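A minimal sketch of this reduction idea in Python: a sample is kept only when the accumulated change since the last kept sample exceeds the noise level, so interior samples of slowly varying stretches are dropped (the function name and endpoint handling are assumptions, not the authors' exact algorithm):

```python
import numpy as np

def reduce_pattern(signal, noise_level):
    """Drop samples while the accumulated change since the last kept
    sample stays below the noise level; return kept sample indices."""
    kept = [0]
    last = signal[0]
    for i in range(1, len(signal)):
        if abs(signal[i] - last) > noise_level:
            kept.append(i)
            last = signal[i]
    if kept[-1] != len(signal) - 1:
        kept.append(len(signal) - 1)  # always keep the final sample
    return np.asarray(kept)
```

Applied to a synthetic exponential decay, the surviving samples still reconstruct the curve to within roughly the noise level, which mirrors the "no additional distortion" property claimed above.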
Comparing bandwidth requirements for digital baseband signals.
NASA Technical Reports Server (NTRS)
Houts, R. C.; Green, T. A.
1972-01-01
This paper describes the relative bandwidth requirements of the common digital baseband signaling techniques used for data transmission. Bandwidth considerations include the percentage of total power in a properly encoded PN sequence passed at bandwidths of 0.5, 1, 2, and 3 times the reciprocal of the bit interval. The signals considered in this study are limited to the binary class. The study compares such signaling techniques as delay modulation, bipolar, biternary, duobinary, pair-selected ternary, and time polarity control, in addition to the conventional NRZ, RZ, and bi-phase (Bi-φ) schemes.
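For the simplest of these formats, NRZ, the fraction of total power passed at a given bandwidth can be computed directly from the sinc²(fT) power spectrum of a random binary sequence. The sketch below is a generic textbook calculation, not the paper's procedure, evaluated at the bandwidth multiples listed above:

```python
import numpy as np

def nrz_power_fraction(bandwidth_bits):
    """Fraction of the total power of a random NRZ waveform passed by an
    ideal filter of bandwidth B = bandwidth_bits / T (sinc^2 spectrum)."""
    x = np.linspace(0.0, bandwidth_bits, 200001)    # frequency in units of 1/T
    psd = np.sinc(x) ** 2                           # np.sinc(x) = sin(pi x)/(pi x)
    dx = x[1] - x[0]
    one_sided = np.sum((psd[1:] + psd[:-1]) / 2) * dx  # trapezoidal rule
    return 2.0 * one_sided        # two-sided sinc^2 spectrum integrates to 1
```

Roughly 90% of the NRZ power falls inside the first spectral null at B = 1/T, which is why 1/T is the usual rule-of-thumb NRZ bandwidth.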
Semiparametric regression analysis of interval-censored competing risks data.
Mao, Lu; Lin, Dan-Yu; Zeng, Donglin
2017-09-01
Interval-censored competing risks data arise when each study subject may experience an event or failure from one of several causes and the failure time is not observed directly but rather is known to lie in an interval between two examinations. We formulate the effects of possibly time-varying (external) covariates on the cumulative incidence or sub-distribution function of competing risks (i.e., the marginal probability of failure from a specific cause) through a broad class of semiparametric regression models that captures both proportional and non-proportional hazards structures for the sub-distribution. We allow each subject to have an arbitrary number of examinations and accommodate missing information on the cause of failure. We consider nonparametric maximum likelihood estimation and devise a fast and stable EM-type algorithm for its computation. We then establish the consistency, asymptotic normality, and semiparametric efficiency of the resulting estimators for the regression parameters by appealing to modern empirical process theory. In addition, we show through extensive simulation studies that the proposed methods perform well in realistic situations. Finally, we provide an application to a study on HIV-1 infection with different viral subtypes. © 2017, The International Biometric Society.
Response to a periodic stimulus in a perfect integrate-and-fire neuron model driven by colored noise
NASA Astrophysics Data System (ADS)
Mankin, Romi; Rekker, Astrid
2016-12-01
The output interspike interval statistics of a stochastic perfect integrate-and-fire neuron model driven by an additive exogenous periodic stimulus is considered. The effect of temporally correlated random activity of synaptic inputs is modeled by an additive symmetric dichotomous noise. Using a first-passage-time formulation, exact expressions for the output interspike interval density and for the serial correlation coefficient are derived in the nonsteady regime, and their dependence on input parameters (e.g., the noise correlation time and amplitude as well as the frequency of an input current) is analyzed. It is shown that an interplay of a periodic forcing and colored noise can cause a variety of nonequilibrium cooperation effects, such as sign reversals of the interspike interval correlations versus noise-switching rate as well as versus the frequency of periodic forcing, a power-law-like decay of oscillations of the serial correlation coefficients in the long-lag limit, amplification of the output signal modulation in the instantaneous firing rate of the neural response, etc. The features of spike statistics in the limits of slow and fast noises are also discussed.
Response rate and reinforcement rate in Pavlovian conditioning.
Harris, Justin A; Carpenter, Joanne S
2011-10-01
Four experiments used delay conditioning of magazine approach in rats to investigate the relationship between the rate of responding, R, to a conditioned stimulus (CS) and the rate, r, at which the CS is reinforced with the unconditioned stimulus (US). Rats were concurrently trained with four variable-duration CSs with different rs, produced either by differences in the mean CS-US interval or in the proportion of CS presentations that ended with the US. In each case, R was systematically related to r, and the relationship was very accurately characterized by a hyperbolic function, R = Ar/(r + c). Accordingly, the reciprocals of these two variables, the response interval I (= 1/R) and the CS-US interval i (= 1/r), were related by a simple affine (straight-line) transformation, I = mi + b. This latter relationship shows that each increment in the time that the rats had to wait for food produced a linear increment in the time they waited between magazine entries. We discuss the close agreement between our findings and the Matching Law (Herrnstein, 1970) and consider their implications for both associative theories (e.g., Rescorla & Wagner, 1972) and nonassociative theories (Gallistel & Gibbon, 2000) of conditioning. (PsycINFO Database Record (c) 2011 APA, all rights reserved).
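The equivalence between the two forms follows from simple algebra: if R = Ar/(r + c), then I = 1/R = (c/A)i + 1/A, so the affine parameters are m = c/A and b = 1/A. A short numerical check, with hypothetical parameter values (A and c below are illustrative, not the fitted values from the experiments):

```python
import numpy as np

# Hypothetical parameters for illustration only (not the study's fits)
A, c = 60.0, 2.0                            # responses/min, reinforcers/min
r = np.array([0.5, 1.0, 2.0, 4.0, 8.0])     # reinforcement rates
R = A * r / (r + c)                         # hyperbolic response rate

# Reciprocals: response interval I = 1/R, CS-US interval i = 1/r
I, i = 1.0 / R, 1.0 / r

# Straight-line fit I = m*i + b should recover m = c/A and b = 1/A
m, b = np.polyfit(i, I, 1)
```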
Liao, Renkuan; Yang, Peiling; Wu, Wenyong; Ren, Shumei
2016-01-01
The widespread use of superabsorbent polymers (SAPs) in arid regions improves the efficiency of local land and water use. However, SAPs’ repeated absorption and release of water has periodic and unstable effects on both soil’s physical and chemical properties and on the growth of plant roots, which complicates modeling of water movement in SAP-treated soils. In this paper, we propose a model of soil water movement for SAP-treated soils. The residence time of SAP in the soil and the duration of the experiment were treated as the same parameter t. This simplifies previously proposed models in which the residence time of SAP in the soil and the experiment’s duration were considered as two independent parameters. Numerical testing was carried out on the inverse method of estimating the source/sink term of root water uptake in the model of soil water movement under the effect of SAP. The test results show that the time interval, hydraulic parameters, test error, and instrument precision had a significant influence on the stability of the inverse method, while the time step, layering of soil, and boundary conditions had relatively smaller effects. A comprehensive analysis of the method’s stability, calculation, and accuracy suggests that the proposed inverse method applies if the following conditions are satisfied: the time interval is between 5 d and 17 d; the time step is between 1000 and 10000; the test error is ≥ 0.9; the instrument precision is ≤ 0.03; and the rate of soil surface evaporation is ≤ 0.6 mm/d. PMID:27505000
Petersen, Christian C; Mistlberger, Ralph E
2017-08-01
The mechanisms that enable mammals to time events that recur at 24-h intervals (circadian timing) and at arbitrary intervals in the seconds-to-minutes range (interval timing) are thought to be distinct at the computational and neurobiological levels. Recent evidence that disruption of circadian rhythmicity by constant light (LL) abolishes interval timing in mice challenges this assumption and suggests a critical role for circadian clocks in short interval timing. We sought to confirm and extend this finding by examining interval timing in rats in which circadian rhythmicity was disrupted by long-term exposure to LL or by chronic intake of 25% D2O. Adult, male Sprague-Dawley rats were housed in a light-dark (LD) cycle or in LL until free-running circadian rhythmicity was markedly disrupted or abolished. The rats were then trained and tested on 15- and 30-sec peak-interval procedures, with water restriction used to motivate task performance. Interval timing was found to be unimpaired in LL rats, but a weak circadian activity rhythm was apparently rescued by the training procedure, possibly due to binge feeding that occurred during the 15-min water access period that followed training each day. A second group of rats in LL were therefore restricted to 6 daily meals scheduled at 4-h intervals. Despite a complete absence of circadian rhythmicity in this group, interval timing was again unaffected. To eliminate all possible temporal cues, we tested a third group of rats in LL by using a pseudo-randomized schedule. Again, interval timing remained accurate. Finally, rats tested in LD received 25% D2O in place of drinking water. This markedly lengthened the circadian period and caused a failure of LD entrainment but did not disrupt interval timing. These results indicate that interval timing in rats is resistant to disruption by manipulations of circadian timekeeping previously shown to impair interval timing in mice.
Molero-Chamizo, Andrés; Alameda Bailén, José R; Garrido Béjar, Tamara; García López, Macarena; Jaén Rodríguez, Inmaculada; Gutiérrez Lérida, Carolina; Pérez Panal, Silvia; González Ángel, Gloria; Lemus Corchero, Laura; Ruiz Vega, María J; Nitsche, Michael A; Rivera-Urbina, Guadalupe N
2018-02-01
Anodal transcranial direct current stimulation (tDCS) induces long-term potentiation-like plasticity, which is associated with long-lasting effects on different cognitive, emotional, and motor performances. Specifically, tDCS applied over the motor cortex is considered to improve reaction time in simple and complex tasks. The timing of tDCS relative to task performance could determine the efficacy of tDCS to modulate performance. The aim of this study was to compare the effects of a single session of anodal tDCS (1.5 mA for 15 min) applied over the left primary motor cortex (M1) versus sham stimulation on performance of a go/no-go simple reaction-time task carried out at three different time points after tDCS, namely 0, 30, or 60 min after stimulation. Performance 0 min after anodal tDCS was improved during the whole course of the task. Performance 30 min after anodal tDCS was improved only in the last block of the reaction-time task. Performance 60 min after anodal tDCS was not significantly different throughout the entire task. These findings suggest that the motor cortex excitability changes induced by tDCS can improve motor responses and that these effects critically depend on the time interval between stimulation and task performance.
NASA Astrophysics Data System (ADS)
Komovkin, S. V.; Lavrenov, S. M.; Tuchin, A. G.; Tuchin, D. A.; Yaroshevsky, V. S.
2016-12-01
The article describes a model of two-way measurements of radial velocity based on the Doppler effect. Relations are presented for the instantaneous value of the range increment at the time of measurement and for the radial velocity at the midpoint of the measurement interval. The compensation of methodological errors in the interpretation of two-way Doppler measurements is considered.
Early application of related SCT might improve clinical outcome in adult T-cell leukemia/lymphoma.
Fuji, S; Fujiwara, H; Nakano, N; Wake, A; Inoue, Y; Fukuda, T; Hidaka, M; Moriuchi, Y; Miyamoto, T; Uike, N; Taguchi, J; Eto, T; Tomoyose, T; Kondo, T; Yamanoha, A; Ichinohe, T; Atsuta, Y; Utsunomiya, A
2016-02-01
Allogeneic hematopoietic SCT (allo-HSCT) is a curative treatment for aggressive adult T-cell leukemia/lymphoma (ATLL). Considering the dismal prognosis associated with conventional chemotherapies, early application of allo-HSCT might be beneficial for patients with ATLL. However, no previous study has addressed the optimal timing of allo-HSCT from related donors. Hence, to evaluate the impact of the timing of allo-HSCT for patients with ATLL, we retrospectively analyzed data from patients with ATLL who received an allo-HSCT from a related donor. The median age was 52 years. Patients were grouped according to the interval from diagnosis to allo-HSCT: early transplant group, <100 days, n=72; late transplant group, ⩾100 days, n=428. The distribution of disease status was not significantly different between the two groups (P=0.11). The probability of OS in the early transplant group was significantly higher than that in the late transplant group (4-year OS, 49.3% vs 31.2%). Multivariate analysis revealed that late allo-HSCT was an unfavorable prognostic factor for OS (hazard ratio, 1.46; 95% confidence interval (CI), 1.01-2.11; P=0.04). Despite the limitations of a retrospective study, it might be acceptable to consider early application of allo-HSCT for ATLL.
Fajt, Virginia R; Apley, Michael D; Brogden, Kim A; Skogerboe, Terry L; Shostrom, Valerie K; Chin, Ya-Lin
2004-05-01
To examine effects of danofloxacin and tilmicosin on continuously recorded body temperature in beef calves with pneumonia experimentally induced by inoculation of Mannheimia haemolytica. 41 Angus-cross heifers (body weight, 160 to 220 kg) without a recent history of respiratory tract disease or antimicrobial treatment, all from a single ranch. Radiotransmitters were implanted intravaginally in each calf. Pneumonia was induced intrabronchially by use of logarithmic-phase cultures of M. haemolytica. At 21 hours after inoculation, calves were treated with saline (0.9% NaCl) solution, danofloxacin, or tilmicosin. Body temperature was monitored from 66 hours before inoculation until 72 hours after treatment. Area under the curve (AUC) of the temperature-time plot and mean temperature were calculated for 3-hour intervals and compared among treatment groups. The AUCs for 3-hour intervals did not differ significantly among treatment groups for any of the time periods. Analysis of the mean temperature for 3-hour intervals revealed significantly higher temperatures at most time periods for saline-treated calves, compared with temperatures for antimicrobial-treated calves; however, we did not detect significant differences between the danofloxacin- and tilmicosin-treated calves. The circadian rhythm of temperatures before exposure was detected again approximately 48 hours after bacterial inoculation. Danofloxacin and tilmicosin did not differ in their effect on mean body temperature for 3-hour intervals but significantly decreased body temperature, compared with body temperature in saline-treated calves. Normal daily variation in body temperature must be considered in the face of respiratory tract disease during clinical evaluation of feedlot cattle.
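The per-interval statistics described above amount to the trapezoidal rule over each 3-hour window of the temperature-time plot; a small illustration with made-up readings (the values and the 15-min sampling grid are assumptions, not the study's data):

```python
import numpy as np

# Hypothetical body temperatures (deg C) logged every 15 min over one 3-h window
t = np.arange(0.0, 3.25, 0.25)                      # hours: 0, 0.25, ..., 3.0
temp = 39.0 + 0.3 * np.sin(2 * np.pi * t / 24.0)    # synthetic slow drift

# AUC of the temperature-time plot (trapezoidal rule) and mean temperature
auc = np.sum((temp[1:] + temp[:-1]) / 2 * np.diff(t))
mean_temp = auc / (t[-1] - t[0])
```

Dividing the AUC by the window length gives the mean temperature for the interval, which is why the two summaries compared in the study carry essentially the same information per window.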
Di Rienzo, Marco; Vaini, Emanuele; Castiglioni, Paolo; Meriggi, Paolo; Rizzo, Francesco
2013-01-01
Seismocardiogram (SCG) is the measure of the minute vibrations produced by the beating heart. We previously demonstrated that SCG, ECG, and respiration could be recorded over 24 h during spontaneous behavior by a smart garment, the MagIC-SCG system. In the present case study we explored the feasibility of a beat-to-beat estimation of two indices of heart contractility, the Left Ventricular Ejection Time (LVET) and the electromechanical systole (QS2), from SCG and ECG recordings obtained by the MagIC-SCG device in one subject. We considered data collected during outdoor spontaneous behavior (while sitting in the metro and in the office) and in a laboratory setting (in supine and sitting posture, and during recovery after 100 W and 140 W cycling). LVET was estimated from SCG as the time interval between the opening and closure of the aortic valve, QS2 as the time interval between the Q wave of the ECG and the closure of the aortic valve. In every condition, LVET and QS2 could be estimated on a beat-to-beat basis from the SCG collected by the smart garment. LVET and QS2 are characterized by important beat-to-beat fluctuations, with standard deviations of the same order of magnitude as those of the RR interval. In all settings, the spectral profiles of LVET, QS2, and the RR interval differ. This suggests that the biological mechanisms impinging on the heart exert a differentiated influence on the variability of each of these three indices.
Modeling environmental noise exceedances using non-homogeneous Poisson processes.
Guarnaccia, Claudio; Quartieri, Joseph; Barrios, Juan M; Rodrigues, Eliane R
2014-10-01
In this work a non-homogeneous Poisson model is considered to study noise exposure. The Poisson process, counting the number of times that a sound level surpasses a threshold, is used to estimate the probability that a population is exposed to high levels of noise a certain number of times in a given time interval. The rate function of the Poisson process is assumed to be of a Weibull type. The presented model is applied to community noise data from Messina, Sicily (Italy). Four sets of data are used to estimate the parameters involved in the model. After the estimation and tuning are made, a way of estimating the probability that an environmental noise threshold is exceeded a certain number of times in a given time interval is presented. This estimation can be very useful in the study of noise exposure of a population and also to predict, given the current behavior of the data, the probability of occurrence of high levels of noise in the near future. One of the most important features of the model is that it implicitly takes into account different noise sources, which need to be treated separately when using usual models.
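Under this model the number of exceedances in an interval (t1, t2] is Poisson-distributed with mean Λ(t2) − Λ(t1), where Λ is the integrated rate. A minimal sketch assuming the standard Weibull-type rate λ(t) = (β/α)(t/α)^(β−1) (the exact parametrization and parameter values used by the authors may differ):

```python
import math

def weibull_cumulative_rate(t, alpha, beta):
    """Lambda(t) for the Weibull-type rate lambda(t) = (beta/alpha)*(t/alpha)**(beta-1)."""
    return (t / alpha) ** beta

def prob_exceedances(k, t1, t2, alpha, beta):
    """P(N(t1, t2] = k): probability that the noise threshold is exceeded
    exactly k times in (t1, t2] under the non-homogeneous Poisson model."""
    mu = weibull_cumulative_rate(t2, alpha, beta) - weibull_cumulative_rate(t1, alpha, beta)
    return math.exp(-mu) * mu ** k / math.factorial(k)
```

For β = 1 the rate is constant and the model collapses to the ordinary homogeneous Poisson process, which is a convenient sanity check.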
The ceramics industry and lead poisoning. Long-term testing.
De Rosa, E; Rossi, A; Toffolo, D; Brighenti, F; Rosa, A; Caroldi, S
1980-12-01
The investigation evaluates the effectiveness attributed to certain measures (improvements in the environment, individual health habits) in reducing the risk of lead poisoning in the ceramics industry. The average levels of lead in the blood of 154 exposed workers were evaluated in four plants at a time interval of six to eight months. The study considers the variations in relation to any measures introduced during the interval. A reduction of environmental risk was indeed shown by a clear improvement in the blood lead levels, which still, however, exceeded the internationally recommended limits in many of the subjects. It was concluded that further improvements can only be made by reducing the lead content of the glazes used.
Communication: Coordinate-dependent diffusivity from single molecule trajectories
NASA Astrophysics Data System (ADS)
Berezhkovskii, Alexander M.; Makarov, Dmitrii E.
2017-11-01
Single-molecule observations of biomolecular folding are commonly interpreted using the model of one-dimensional diffusion along a reaction coordinate, with a coordinate-independent diffusion coefficient. Recent analysis, however, suggests that more general models are required to account for single-molecule measurements performed with high temporal resolution. Here, we consider one such generalization: a model where the diffusion coefficient can be an arbitrary function of the reaction coordinate. Assuming Brownian dynamics along this coordinate, we derive an exact expression for the coordinate-dependent diffusivity in terms of the splitting probability within an arbitrarily chosen interval and the mean transition path time between the interval boundaries. This formula can be used to estimate the effective diffusion coefficient along a reaction coordinate directly from single-molecule trajectories.
Park, Myung Sook; Kang, Kyung Ja; Jang, Sun Joo; Lee, Joo Yun; Chang, Sun Ju
2018-03-01
This study aimed to evaluate the components of test-retest reliability, including time interval, sample size, and statistical methods, used in patient-reported outcome measures in older people and to provide suggestions on the methodology for calculating test-retest reliability for patient-reported outcomes in older people. This was a systematic literature review. MEDLINE, Embase, CINAHL, and PsycINFO were searched from January 1, 2000 to August 10, 2017 by an information specialist. This systematic review was guided by both the Preferred Reporting Items for Systematic Reviews and Meta-Analyses checklist and the guideline for systematic reviews published by the National Evidence-based Healthcare Collaborating Agency in Korea. The methodological quality was assessed by the Consensus-based Standards for the selection of health Measurement Instruments checklist box B. Ninety-five out of 12,641 studies were selected for the analysis. The median time interval for test-retest reliability was 14 days, and the ratio of sample size for test-retest reliability to the number of items in each measure ranged from 1:1 to 1:4. The most frequently used statistical method for continuous scores was the intraclass correlation coefficient (ICC). Among the 63 studies that used ICCs, 21 presented the models used for ICC calculation and 30 reported 95% confidence intervals of the ICCs. Additional analyses using the 17 studies that reported a strong ICC (>0.9) showed that the mean time interval was 12.88 days and the mean ratio of the number of items to sample size was 1:5.37. When researchers plan to assess the test-retest reliability of patient-reported outcome measures for older people, they need to consider an adequate time interval of approximately 13 days and a sample size of about 5 times the number of items.
In particular, statistical methods should not only be selected based on the type of scores of the patient-reported outcome measures but should also be described clearly in studies that report test-retest reliability results. Copyright © 2017 Elsevier Ltd. All rights reserved.
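For the ICC itself, the two-way random-effects, absolute-agreement, single-measure form, ICC(2,1), is a common choice for test-retest designs. A compact sketch from the ANOVA mean squares (a generic formula, not tied to any particular study in the review):

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    data: n_subjects x k_occasions array (e.g. test and retest scores)."""
    n, k = data.shape
    grand = data.mean()
    ms_r = k * np.sum((data.mean(axis=1) - grand) ** 2) / (n - 1)   # rows (subjects)
    ms_c = n * np.sum((data.mean(axis=0) - grand) ** 2) / (k - 1)   # columns (occasions)
    sse = np.sum((data - data.mean(axis=1, keepdims=True)
                  - data.mean(axis=0, keepdims=True) + grand) ** 2)
    ms_e = sse / ((n - 1) * (k - 1))                                # residual
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
```

Identical test and retest columns give an ICC of 1, while two unrelated columns give a value near 0, which is the behavior the reliability interpretation relies on.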
Lim, Jongil; Kwon, Ji Young; Song, Juhee; Choi, Hosoon; Shin, Jong Chul; Park, In Yang
2014-02-01
The interpretation of the fetal heart rate (FHR) signal considering labor progression may help reduce perinatal morbidity and mortality. However, there have been few studies that evaluate the fetus in each labor stage quantitatively. To evaluate whether the entropy indices of FHR differ according to labor progression. A retrospective comparative study of FHR recordings in three groups: 280 recordings in the second stage of labor before vaginal delivery, 31 recordings in the first stage of labor before emergency cesarean delivery, and 23 recordings in the pre-labor period before elective cesarean delivery. The stored FHR recordings of external cardiotocography during labor. Approximate entropy (ApEn) and sample entropy (SampEn) for the final 2000 RR intervals. The median ApEn and SampEn for the 2000 RR intervals showed the lowest values in the second stage of labor, followed by the emergency cesarean group and the elective cesarean group, for all time segments (all P<0.001). Also, in the second stage of labor, the final 5 min of the 2000 RR intervals had a significantly lower median ApEn (0.49 vs. 0.44, P=0.001) and a lower median SampEn (0.34 vs. 0.29, P<0.001) than the initial 5 min. Entropy indices of FHR were significantly different according to labor progression. This result supports the necessity of considering labor progression when developing intrapartum fetal monitoring based on the entropy indices of FHR. Copyright © 2013 Elsevier Ltd. All rights reserved.
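Sample entropy can be computed directly from an RR-interval series; the sketch below is a common textbook formulation with m = 2 and tolerance r = 0.2·SD (a generic implementation, not the authors' code or parameter choices):

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """SampEn(m, r): negative log of the conditional probability that runs
    matching for m points (within tolerance r) also match for m + 1 points."""
    x = np.asarray(x, dtype=float)
    r = r_frac * np.std(x)

    def match_count(mm):
        # Templates of length mm; count pairs whose Chebyshev distance <= r
        templates = np.array([x[i:i + mm] for i in range(len(x) - m)])
        count = 0
        for i in range(len(templates) - 1):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(d <= r))
        return count

    b = match_count(m)      # matches of length m
    a = match_count(m + 1)  # matches that extend to length m + 1
    return -np.log(a / b)
```

Lower values indicate a more regular, more predictable series, which is the direction of the second-stage findings reported above.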
Ripamonti, Giancarlo; Abba, Andrea; Geraci, Angelo
2010-05-01
A method for measuring time intervals with accuracy in the picosecond range is based on phase measurements of oscillating waveforms synchronous with their beginning and/or end. The oscillation is generated by triggering an LC resonant circuit whose capacitance is precharged. By using high-Q resonators and a final active quenching of the oscillation, it is possible to combine high time resolution with a short measurement time, which allows a high measurement rate. Methods for fast analysis of the data are considered and discussed with reference to computing resource requirements, speed, and accuracy. Experimental tests show the feasibility of the method and a time accuracy better than 4 ps rms. Methods aimed at further reducing hardware resources are finally discussed.
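The core idea, that the phase of a synchronous oscillation encodes timing with resolution far below the sample period, can be sketched numerically: fit the digitized waveform with sine and cosine components at the known resonant frequency and convert the phase to a time offset. The function name, the 100 MHz frequency, and the 1 GS/s sample rate below are illustrative assumptions, not the paper's hardware values:

```python
import numpy as np

def start_time_from_phase(samples, fs, f0):
    """Estimate the oscillation start time from its phase: fit
    samples ~ a*sin + b*cos at known frequency f0, then convert the
    phase to a time offset (well below one sample period)."""
    t = np.arange(len(samples)) / fs
    M = np.column_stack([np.sin(2 * np.pi * f0 * t), np.cos(2 * np.pi * f0 * t)])
    a, b = np.linalg.lstsq(M, samples, rcond=None)[0]
    phase = np.arctan2(b, a)             # samples ~ sin(2*pi*f0*t + phase)
    return -phase / (2 * np.pi * f0)     # time at which the phase was zero
```

On a clean waveform the recovered offset is exact to numerical precision; in practice noise and resonator Q set the picosecond-level floor reported above.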
Brett, Benjamin L; Smyk, Nathan; Solomon, Gary; Baughman, Brandon C; Schatz, Philip
2016-08-18
The ImPACT (Immediate Post-Concussion Assessment and Cognitive Testing) neurocognitive test battery is a widely used tool for the assessment and management of sports-related concussion. Research on the stability of ImPACT in high school athletes at 1- and 2-year intervals has been inconsistent, requiring further investigation. We documented the 1-, 2-, and 3-year test-retest reliability of repeated ImPACT baseline assessments in a sample of high school athletes, using multiple statistical methods for examining stability. A total of 1,510 high school athletes completed baseline cognitive testing using the online ImPACT test battery at time intervals of approximately 1 (N = 250), 2 (N = 1,146), and 3 years (N = 114). No participant sustained a concussion between assessments. Intraclass correlation coefficients (ICCs) for composite scores ranged from 0.36 to 0.90 and showed little change as intervals between assessments increased. Reliable change indices and regression-based measures (RBMs) examining test-retest stability demonstrated a lack of significant change in composite scores across the various time intervals, with very few cases (0%-6%) falling outside of 95% confidence intervals. The results suggest ImPACT composite scores remain considerably stable across 1-, 2-, and 3-year test-retest intervals in high school athletes, when considering both ICCs and RBMs. Annually ascertaining baseline scores continues to be optimal for ensuring accurate and individualized management of injury for concussed athletes. For instances in which more recent baselines are not available (1-2 years), clinicians should use more conservative range estimates in determining the presence of clinically meaningful change in cognitive performance. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Quantitative imaging biomarkers: Effect of sample size and bias on confidence interval coverage.
Obuchowski, Nancy A; Bullen, Jennifer
2017-01-01
Introduction Quantitative imaging biomarkers (QIBs) are being increasingly used in medical practice and clinical trials. An essential first step in the adoption of a quantitative imaging biomarker is the characterization of its technical performance, i.e. precision and bias, through one or more performance studies. Then, given the technical performance, a confidence interval for a new patient's true biomarker value can be constructed. Estimating bias and precision can be problematic because rarely are both estimated in the same study, precision studies are usually quite small, and bias cannot be measured when there is no reference standard. Methods A Monte Carlo simulation study was conducted to assess factors affecting nominal coverage of confidence intervals for a new patient's quantitative imaging biomarker measurement and for change in the quantitative imaging biomarker over time. Factors considered include sample size for estimating bias and precision, effect of fixed and non-proportional bias, clustered data, and absence of a reference standard. Results Technical performance studies of a quantitative imaging biomarker should include at least 35 test-retest subjects to estimate precision and 65 cases to estimate bias. Confidence intervals for a new patient's quantitative imaging biomarker measurement constructed under the no-bias assumption provide nominal coverage as long as the fixed bias is <12%. For confidence intervals of the true change over time, linearity must hold and the slope of the regression of the measurements vs. true values should be between 0.95 and 1.05. The regression slope can be assessed adequately as long as fixed multiples of the measurand can be generated. Even small non-proportional bias greatly reduces confidence interval coverage. Multiple lesions in the same subject can be treated as independent when estimating precision. 
Conclusion Technical performance studies of quantitative imaging biomarkers require moderate sample sizes in order to provide robust estimates of bias and precision for constructing confidence intervals for new patients. Assumptions of linearity and non-proportional bias should be assessed thoroughly.
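The coverage question studied in this abstract lends itself to a small simulation. Below is a minimal sketch, not the authors' code, of how a fixed bias degrades the nominal 95% coverage of a confidence interval built under a no-bias assumption; the sample sizes, noise SD, and bias values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def ci_coverage(n_precision=35, fixed_bias=0.0, sigma=1.0, n_trials=2000):
    """Monte Carlo coverage of a 95% CI for a new patient's true
    biomarker value, constructed assuming zero bias."""
    hits = 0
    for _ in range(n_trials):
        # Precision study: test-retest differences estimate the measurement SD.
        retest_diff = rng.normal(0, sigma * np.sqrt(2), n_precision)
        sigma_hat = retest_diff.std(ddof=1) / np.sqrt(2)
        # New patient: measurement = truth + fixed bias + noise.
        truth = 10.0
        y = truth + fixed_bias + rng.normal(0, sigma)
        lo, hi = y - 1.96 * sigma_hat, y + 1.96 * sigma_hat
        hits += lo <= truth <= hi
    return hits / n_trials

print(ci_coverage(fixed_bias=0.0))   # close to the nominal 0.95
print(ci_coverage(fixed_bias=2.0))   # a 2-sigma bias wrecks coverage
```

With no bias the interval covers the truth at roughly the nominal rate; a bias of two measurement SDs roughly halves coverage, which is the kind of degradation the paper quantifies.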
Modular networks with delayed coupling: Synchronization and frequency control
NASA Astrophysics Data System (ADS)
Maslennikov, Oleg V.; Nekorkin, Vladimir I.
2014-07-01
We study the collective dynamics of modular networks consisting of map-based neurons which generate irregular spike sequences. Three types of intramodule topology are considered: a random Erdős-Rényi network, a small-world Watts-Strogatz network, and a scale-free Barabási-Albert network. The interaction between the neurons of different modules is organized by relatively sparse connections with time delay. For all the types of the network topology considered, we found that with increasing delay two regimes of module synchronization alternate with each other: inphase and antiphase. At the same time, the average rate of collective oscillations decreases within each of the time-delay intervals corresponding to a particular synchronization regime. A dual role of the time delay is thus established: controlling the synchronization mode and degree, and controlling the average network frequency. Furthermore, we investigate the influence of other parameters on the modular synchronization: the strength of intermodule coupling and the individual firing rate.
Segundo, J P; Sugihara, G; Dixon, P; Stiber, M; Bersier, L F
1998-12-01
This communication describes the new information that may be obtained by applying nonlinear analytical techniques to neurobiological time-series. Specifically, we consider the sequence of interspike intervals Ti (the "timing") of trains recorded from synaptically inhibited crayfish pacemaker neurons. As reported earlier, different postsynaptic spike train forms (sets of timings with shared properties) are generated by varying the average rate and/or pattern (implying interval dispersions and sequences) of presynaptic spike trains. When the presynaptic train is Poisson (independent exponentially distributed intervals), the form is "Poisson-driven" (unperturbed and lengthened intervals succeed each other irregularly). When presynaptic trains are pacemaker (intervals practically equal), forms are either "p:q locked" (intervals repeat periodically), "intermittent" (mostly almost locked but disrupted irregularly), "phase walk throughs" (intermittencies with briefer regular portions), or "messy" (difficult to predict or describe succinctly). Messy trains are either "erratic" (some intervals natural and others lengthened irregularly) or "stammerings" (intervals are integral multiples of presynaptic intervals). The individual spike train forms were analysed using attractor reconstruction methods based on the lagged coordinates provided by successive intervals from the time-series Ti. Numerous models were evaluated in terms of their predictive performance by a trial-and-error procedure: the most successful model was taken as best reflecting the true nature of the system's attractor. Each form was characterized in terms of its dimensionality, nonlinearity and predictability. (1) The dimensionality of the underlying dynamical attractor was estimated by the minimum number of variables (coordinates Ti) required to model acceptably the system's dynamics, i.e. by the system's degrees of freedom. 
Each model tested was based on a different number of Ti; the smallest number whose predictions were judged successful provided the best integer approximation of the attractor's true dimension (not necessarily an integer). Dimensionalities from three to five provided acceptable fits. (2) The degree of nonlinearity was estimated by: (i) comparing the correlations between experimental results and data from linear and nonlinear models, and (ii) tuning model nonlinearity via a distance-weighting function and identifying either the local or global neighborhood size. Lockings were compatible with linear models and stammerings were marginal; nonlinear models were best for Poisson-driven, intermittent and erratic forms. (3) Finally, prediction accuracy was plotted against increasingly long sequences of intervals forecast: the accuracies for Poisson-driven, locked and stammering forms were invariant, revealing irregularities due to uncorrelated noise, but those of intermittent and messy erratic forms decayed rapidly, indicating an underlying deterministic process. The excellent reconstructions possible for messy erratic and for some intermittent forms are especially significant because of their relatively low dimensionality (around 4), high degree of nonlinearity and prediction decay with time. This is characteristic of chaotic systems, and provides evidence that nonlinear couplings between relatively few variables are the major source of the apparent complexity seen in these cases. This demonstration of different dimensions, degrees of nonlinearity and predictabilities provides rigorous support for the categorization of different synaptically driven discharge forms proposed earlier on the basis of more heuristic criteria. This has significant implications. (1) It demonstrates that heterogeneous postsynaptic forms can indeed be induced by manipulating a few presynaptic variables.
(2) Each presynaptic timing induces a form with characteristic dimensionality, thus breaking up the preparation into subsystems such that the physical variables in each operate as one
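The attractor-reconstruction procedure the abstract describes, lagged coordinates built from successive intervals with prediction by analogy to similar past states, can be sketched in a few lines. This toy version uses a nearest-neighbour forecaster on a deterministic logistic-map series standing in for an interval sequence; it illustrates the method, not the authors' model or data.

```python
import numpy as np

def delay_embed(x, dim):
    """Lagged-coordinate vectors (T_i, T_{i+1}, ..., T_{i+dim-1})."""
    return np.array([x[i:i + dim] for i in range(len(x) - dim + 1)])

def nn_forecast(train, query, dim):
    """Predict the value following `query` (length dim) from the
    nearest neighbour in the embedded training series."""
    emb = delay_embed(train, dim)
    # exclude the last vector: it has no successor to serve as a prediction
    dists = np.linalg.norm(emb[:-1] - query, axis=1)
    j = np.argmin(dists)
    return train[j + dim]

# Illustrative deterministic "interval" series from the chaotic logistic map
x = np.empty(600)
x[0] = 0.3
for i in range(599):
    x[i + 1] = 3.9 * x[i] * (1 - x[i])

dim = 3
train, test = x[:500], x[500:]
errs = [abs(nn_forecast(train, test[i:i + dim], dim) - test[i + dim])
        for i in range(50)]
print(np.mean(errs))  # small relative to the signal's O(1) range
```

Good one-step predictability from a low-dimensional embedding, decaying over longer forecast horizons, is the signature of deterministic chaos that the paper uses to classify the spike-train forms.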
Analyzing time-ordered event data with missed observations.
Dokter, Adriaan M; van Loon, E Emiel; Fokkema, Wimke; Lameris, Thomas K; Nolet, Bart A; van der Jeugd, Henk P
2017-09-01
A common problem with observational datasets is that not all events of interest may be detected. For example, observing animals in the wild can be difficult when animals move, hide, or cannot be closely approached. We consider time series of events recorded in conditions where events are occasionally missed by observers or observational devices. These time series are not restricted to behavioral protocols, but can be any cyclic or recurring process where discrete outcomes are observed. Undetected events cause biased inferences on the process of interest, and statistical analyses are needed that can identify and correct the compromised detection processes. Missed observations in time series lead to observed time intervals between events at multiples of the true inter-event time, which conveys information on their detection probability. We derive the theoretical probability density function for observed intervals between events that includes a probability of missed detection. Methodology and software tools are provided for analysis of event data with potential observation bias and its removal. The methodology was applied to simulation data and a case study of defecation rate estimation in geese, which is commonly used to estimate their digestive throughput and energetic uptake, or to calculate goose usage of a feeding site from dropping density. Simulations indicate that at a moderate chance to miss arrival events (p = 0.3), uncorrected arrival intervals were biased upward by up to a factor 3, while parameter values corrected for missed observations were within 1% of their true simulated value. A field case study shows that not accounting for missed observations leads to substantial underestimates of the true defecation rate in geese, and spurious rate differences between sites, which are introduced by differences in observational conditions. 
These results show that the derived methodology can be used to effectively remove observational biases in time-ordered event data.
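The key observation, that a missed detection merges two adjacent intervals so observed intervals fall at multiples of the true inter-event time, can be checked with a short simulation. A minimal sketch, assuming a perfectly regular event process and an illustrative miss probability of 0.3:

```python
import numpy as np

rng = np.random.default_rng(1)

def observed_intervals(true_tau=10.0, p_miss=0.3, n_events=20000):
    """Simulate a regular event process in which each event is detected
    with probability 1 - p_miss; return intervals between detections."""
    times = np.arange(n_events) * true_tau
    detected = times[rng.random(n_events) > p_miss]
    return np.diff(detected)

iv = observed_intervals()
# k missed events in a row yields an observed interval of (k+1)*tau with
# geometric probability, so E[observed] = tau / (1 - p_miss).
print(iv.mean())          # biased upward, near 10 / 0.7 ≈ 14.3
print(iv.mean() * 0.7)    # corrected back toward tau = 10
```

All observed intervals are exact multiples of the true period, and the geometric weighting of those multiples is what lets the detection probability be estimated from the interval distribution itself.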
Automatic Error Analysis Using Intervals
ERIC Educational Resources Information Center
Rothwell, E. J.; Cloud, M. J.
2012-01-01
A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
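Interval arithmetic of the kind the article describes propagates guaranteed [lo, hi] bounds through each operation, so the result encloses every value attainable from the input uncertainties. A minimal sketch in Python (not INTLAB, which is a MATLAB toolbox):

```python
class Interval:
    """A closed interval [lo, hi] with outward-enclosing arithmetic."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # The product's range is spanned by the four endpoint products.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# Series resistance R = R1 + R2 with each resistor known to about ±5%
R1 = Interval(95.0, 105.0)
R2 = Interval(190.0, 210.0)
print(R1 + R2)   # [285.0, 315.0]
```

Compared with standard error propagation, nothing is linearized or approximated: the resulting interval is a rigorous enclosure, which is the property the comparison in the article exploits.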
Epstein, Richard H; Dexter, Franklin; Hofer, Ira S; Rodriguez, Luis I; Schwenk, Eric S; Maga, Joni M; Hindman, Bradley J
2018-02-01
Perioperative hypothermia may increase the incidences of wound infection, blood loss, transfusion, and cardiac morbidity. US national quality programs for perioperative normothermia specify the presence of at least 1 "body temperature" ≥35.5°C during the interval from 30 minutes before to 15 minutes after the anesthesia end time. Using data from 4 academic hospitals, we evaluated timing and measurement considerations relevant to the current requirements to guide hospitals wishing to report perioperative temperature measures using electronic data sources. Anesthesia information management system databases from 4 hospitals were queried to obtain intraoperative temperatures and intervals to the anesthesia end time from discontinuation of temperature monitoring, end of surgery, and extubation. Inclusion criteria included age >16 years, use of a tracheal tube or supraglottic airway, and case duration ≥60 minutes. The end-of-case temperature was determined as the maximum intraoperative temperature recorded within 30 minutes before the anesthesia end time (ie, the temperature that would be used for reporting purposes). The fractions of cases with intervals >30 minutes between the last intraoperative temperature and the anesthesia end time were determined. Among the hospitals, averages (binned by quarters) of 34.5% to 59.5% of cases had intraoperative temperature monitoring discontinued >30 minutes before the anesthesia end time. Even if temperature measurement had been continued until extubation, averages of 5.9% to 20.8% of cases would have exceeded the allowed 30-minute window. Averages of 8.9% to 21.3% of cases had end-of-case intraoperative temperatures <35.5°C (ie, a quality measure failure). Because of timing considerations, a substantial fraction of cases would have been ineligible to use the end-of-case intraoperative temperature for national quality program reporting. Thus, retrieval of postanesthesia care unit temperatures would have been necessary. 
A substantial percentage of cases had end-of-case intraoperative temperatures below the 35.5°C threshold, also requiring postoperative measurement to determine whether the quality measure was satisfied. Institutions considering reporting national quality measures for perioperative normothermia should consider the technical and logistical issues identified to achieve a high level of compliance based on the specified regulatory language.
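The end-of-case rule described above, take the maximum intraoperative temperature within 30 minutes before the anesthesia end time and compare it with 35.5°C, reduces to a simple windowed query. A sketch with hypothetical data structures; the hospitals' actual AIMS schemas are not given in the abstract:

```python
from datetime import datetime, timedelta

def end_of_case_temp(readings, anesthesia_end, window_min=30):
    """Maximum temperature recorded within `window_min` minutes before
    the anesthesia end time, or None if monitoring was discontinued
    too early for the window (readings: list of (timestamp, temp_C))."""
    cutoff = anesthesia_end - timedelta(minutes=window_min)
    in_window = [t for ts, t in readings if cutoff <= ts <= anesthesia_end]
    return max(in_window) if in_window else None

end = datetime(2018, 2, 1, 12, 0)
readings = [(datetime(2018, 2, 1, 11, 0), 35.2),
            (datetime(2018, 2, 1, 11, 40), 35.6),
            (datetime(2018, 2, 1, 11, 50), 35.4)]
t = end_of_case_temp(readings, end)
print(t, "pass" if t is not None and t >= 35.5 else "fail")  # 35.6 pass
```

A `None` result corresponds to the cases the paper flags as ineligible for intraoperative reporting, where a postanesthesia care unit temperature would have to be retrieved instead.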
Faydasicok, Ozlem; Arik, Sabri
2013-08-01
The main problem with the analysis of robust stability of neural networks is to find the upper bound norm for the intervalized interconnection matrices of neural networks. In the previous literature, the major three upper bound norms for the intervalized interconnection matrices have been reported and they have been successfully applied to derive new sufficient conditions for robust stability of delayed neural networks. One of the main contributions of this paper will be the derivation of a new upper bound for the norm of the intervalized interconnection matrices of neural networks. Then, by exploiting this new upper bound norm of interval matrices and using stability theory of Lyapunov functionals and the theory of homeomorphic mapping, we will obtain new sufficient conditions for the existence, uniqueness and global asymptotic stability of the equilibrium point for the class of neural networks with discrete time delays under parameter uncertainties and with respect to continuous and slope-bounded activation functions. The results obtained in this paper will be shown to be new and they can be considered alternative results to previously published corresponding results. We also give some illustrative and comparative numerical examples to demonstrate the effectiveness and applicability of the proposed robust stability condition. Copyright © 2013 Elsevier Ltd. All rights reserved.
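One classical bound of the kind this literature works with: for an interval matrix family with centre matrix A_c and nonnegative radius matrix A_r, every member A = A_c + Δ with |Δ| ≤ A_r elementwise satisfies ||A||₂ ≤ ||A_c||₂ + ||A_r||₂. This is a generic bound from the robust-stability literature, not necessarily the new bound the paper derives; the sketch below checks it by sampling members of an illustrative interval family:

```python
import numpy as np

rng = np.random.default_rng(2)

# Interval matrix family: every entry of A lies between A_min and A_max.
A_min = np.array([[-1.0, 0.2], [0.1, -2.0]])
A_max = np.array([[-0.5, 0.6], [0.5, -1.0]])
A_c = (A_min + A_max) / 2          # centre matrix
A_r = (A_max - A_min) / 2          # radius matrix (nonnegative)

# Spectral-norm bound valid for every member of the family.
bound = np.linalg.norm(A_c, 2) + np.linalg.norm(A_r, 2)

# Sample random members and record the largest spectral norm seen.
worst = max(np.linalg.norm(A_min + rng.random((2, 2)) * (A_max - A_min), 2)
            for _ in range(1000))
print(worst <= bound)  # True: the bound dominates every sampled member
```

Tighter bounds of this type shrink the conservatism of the resulting sufficient stability conditions, which is why a new upper bound is a contribution in itself.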
Vaas, Lea A I; Sikorski, Johannes; Michael, Victoria; Göker, Markus; Klenk, Hans-Peter
2012-01-01
The Phenotype MicroArray (OmniLog® PM) system is able to simultaneously capture a large number of phenotypes by recording an organism's respiration over time on distinct substrates. This technique targets the object of natural selection itself, the phenotype, whereas previously addressed '-omics' techniques merely study components that finally contribute to it. The recording of respiration over time, however, adds a longitudinal dimension to the data. To optimally exploit this information, it must be extracted from the shapes of the recorded curves and displayed in analogy to conventional growth curves. The free software environment R was explored for both visualizing and fitting of PM respiration curves. Approaches using either a model fit (and commonly applied growth models) or a smoothing spline were evaluated. Their reliability in inferring curve parameters and confidence intervals was compared to the native OmniLog® PM analysis software. We consider the post-processing of the estimated parameters, the optimal classification of curve shapes and the detection of significant differences between them, as well as practically relevant questions such as detecting the impact of cultivation times and the minimum required number of experimental repeats. We provide a comprehensive framework for data visualization and parameter estimation according to user choices. A flexible graphical representation strategy for displaying the results is proposed, including 95% confidence intervals for the estimated parameters. The spline approach is less prone to irregular curve shapes than fitting any of the considered models or using the native PM software for calculating both point estimates and confidence intervals. These can serve as a starting point for the automated post-processing of PM data, providing much more information than the strict dichotomization into positive and negative reactions. Our results form the basis for a freely available R package for the analysis of PM data.
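Extracting growth-curve-style parameters (lag λ, maximum slope μ, plateau A, area under the curve) from a smoothed respiration curve can be sketched as follows. This numpy-only toy uses moving-average smoothing rather than the smoothing spline the paper evaluates, and the logistic test signal is illustrative:

```python
import numpy as np

def curve_params(t, y, win=5):
    """Lag, max slope, plateau, and AUC of a respiration-style curve
    after light smoothing (a simplified stand-in for a spline fit)."""
    ys = np.convolve(y, np.ones(win) / win, mode="same")
    slopes = np.gradient(ys, t)
    i = np.argmax(slopes)
    mu = slopes[i]                          # maximum slope
    A = ys.max()                            # plateau height
    lam = t[i] - (ys[i] - ys[0]) / mu       # lag: tangent/baseline crossing
    auc = np.sum((ys[1:] + ys[:-1]) / 2 * np.diff(t))  # trapezoidal AUC
    return lam, mu, A, auc

t = np.linspace(0, 48, 97)                  # hours, 0.5 h sampling
y = 200 / (1 + np.exp(-(t - 12) / 3))       # logistic-shaped test signal
lam, mu, A, auc = curve_params(t, y)
print(round(mu, 1), round(A))  # mu near the logistic's 200/(4*3) ≈ 16.7, A near 200
```

The paper's point is that a nonparametric fit (spline) recovers such parameters more robustly than forcing one of the standard growth models onto irregular curve shapes; the parameter definitions above are the common ones either way.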
β-hCG resolution times during expectant management of tubal ectopic pregnancies.
Mavrelos, D; Memtsa, M; Helmy, S; Derdelis, G; Jauniaux, E; Jurkovic, D
2015-05-21
A subset of women with a tubal ectopic pregnancy can be safely managed expectantly. Expectant management involves a degree of disruption with hospital visits to determine serum β-hCG (β-human chorionic gonadotrophin) concentration until the pregnancy test becomes negative and expectant management is considered complete. The length of time required for the pregnancy test to become negative and the parameters that influence this interval have not been described. Information on the likely length of follow up would be useful for women considering expectant management of their tubal ectopic pregnancy. This was a retrospective study at a tertiary referral center in an inner city London Hospital. We included women who were diagnosed with a tubal ectopic pregnancy by transvaginal ultrasound between March 2009 and March 2014. During the study period 474 women were diagnosed with a tubal ectopic pregnancy and 256 (54 %) of them fulfilled our management criteria for expectant management. A total of 158 (33 %) women had successful expectant management and in those cases we recorded the diameter of the ectopic pregnancy (mm), the maximum serum β-hCG (IU/L) and levels during follow up until resolution as well as the interval to resolution (days). The median interval from maximum serum β-hCG concentration to resolution was 18.0 days (IQR 11.0-28.0). The maximum serum β-hCG concentration and the rate of decline of β-hCG were independently associated with the length of follow up. Women's age and size of ectopic pregnancy did not have significant effects on the length of follow up. Women undergoing expectant management of ectopic pregnancy can be informed that the likely length of follow up is under 3 weeks and that it positively correlates with initial β-hCG level at the time of diagnosis.
Integrated Traffic Flow Management Decision Making
NASA Technical Reports Server (NTRS)
Grabbe, Shon R.; Sridhar, Banavar; Mukherjee, Avijit
2009-01-01
A generalized approach is proposed to support integrated traffic flow management decision making studies at both the U.S. national and regional levels. It can consider tradeoffs between alternative optimization and heuristic based models, strategic versus tactical flight controls, and system versus fleet preferences. Preliminary testing was accomplished by implementing thirteen unique traffic flow management models, which included all of the key components of the system, and conducting 85 six-hour fast-time simulation experiments. These experiments considered variations in the strategic planning look-ahead times, the replanning intervals, and the types of traffic flow management control strategies. Initial testing indicates that longer strategic planning look-ahead times and replanning intervals result in steadily decreasing levels of sector congestion for a fixed delay level. This applies when accurate estimates of the air traffic demand, airport capacities and airspace capacities are available. In general, the distribution of the delays amongst the users was found to be most equitable when scheduling flights using a heuristic scheduling algorithm, such as ration-by-distance. On the other hand, equity was the worst when using scheduling algorithms that took into account the number of seats aboard each flight. Though the scheduling algorithms were effective at alleviating sector congestion, the tactical rerouting algorithm was the primary control for avoiding en route weather hazards. Finally, the modeled levels of sector congestion, the number of weather incursions, and the total system delays were found to be in fair agreement with the values that were operationally observed on both good and bad weather days.
Poureslami, Hamidreza; Asl Aminabadi, Naser; Sighari Deljavan, Alireza; Erfanparast, Leila; Sohrabi, Azin; Jamali, Zahra; Ghertasi Oskouei, Sina; Hazem, Kameliya; Shirazi, Sajjad
2015-01-01
Background and aims. Predicting teeth eruption time is a valuable tool in pediatric dentistry since it can affect the scheduling of dental and orthodontic treatments. This study investigated the relationship between the eruption times of the first primary and permanent teeth and the variation in eruption time considering socioeconomic status (SES) in a 9-year population-based cohort study. Materials and methods. 307 subjects were examined at bimonthly intervals during the first and second years of life and then at six-month intervals until the eruption of the first permanent tooth. Eruption times of primary and permanent teeth were recorded for each child. A modified form of Kuppuswamy's scale was used to assess the SES. Results. Among the 267 subjects who completed all follow-ups, the eruption times of the first primary and permanent teeth showed a strong direct correlation; one month of delayed or early eruption of the first primary tooth resulted in 4.21 months of delayed or early eruption of the first appearing permanent tooth (r = 0.91, n = 267, P < 0.001). No significant correlation was observed between the eruption times of the first primary and first permanent teeth and SES (P = 0.67, P = 0.75, respectively). Conclusion. The eruption timing of the first primary tooth correlated with that of the first permanent tooth, while SES did not have any influence on eruption times. PMID:26236432
On the long-period evolution of the sun-synchronous orbits
NASA Astrophysics Data System (ADS)
Kuznetsov, E. D.; Jasim, A. T.
2016-05-01
The dynamic evolution of sun-synchronous orbits over a time interval of 20 years is considered. The numerical motion simulation has been carried out using the Celestial Mechanics software package developed at the Institute of Astronomy of the University of Bern. The dependence of the dynamic evolution on the initial value of the ascending node longitude is examined for two families of sun-synchronous orbits with altitudes of 751 and 1191 km. Variations of the semimajor axis and orbit inclination are obtained as functions of the initial value of the ascending node longitude. Recommendations on the selection of orbits into which spent sun-synchronous satellites can be moved are formulated. Minimal changes of the elements over a time interval of 20 years were observed for orbits in which, at the initial time, the angle between the orbit's ascending node and the direction of the Sun, measured along the equator, was close to 90° or 270°. In this case, the semimajor axis of the orbit does not experience secular perturbations arising from the satellite's passage through the Earth's shadow.
Mengarelli, Alessandro; Cardarelli, Stefano; Verdini, Federica; Burattini, Laura; Fioretti, Sandro; Di Nardo, Francesco
2016-08-01
In this paper a graphical user interface (GUI) built in the MATLAB® environment is presented. This interactive tool has been developed for the analysis of surface electromyography (sEMG) signals and in particular for the assessment of muscle activation time intervals. After the signal import, the tool performs a first analysis in a totally user-independent way, providing a reliable computation of the muscular activation sequences. Furthermore, the user has the opportunity to modify each parameter of the on/off identification algorithm implemented in the presented tool. The presence of a user-friendly GUI allows the immediate evaluation of the effects that the modification of every single parameter has on the recognition of activation intervals, through the real-time updating and visualization of the muscular activation/deactivation sequences. The possibility to accept the initial signal analysis or to modify the on/off identification for each considered signal, with real-time visual feedback, makes this GUI-based tool a valuable instrument in clinical and research applications, and also from an educational perspective.
Excess close burst pairs in FRB 121102
NASA Astrophysics Data System (ADS)
Katz, J. I.
2018-05-01
The repeating FRB 121102 emitted a pair of apparently discrete bursts separated by 37 ms and another pair, 131 d later, separated by 34 ms, during observations that detected bursts at a mean rate of ~2 × 10⁻⁴ s⁻¹. While FRB 121102 is known to produce multipeaked bursts, here I assume that these `burst pairs' are truly separate bursts and not multicomponent single bursts, and consider the implications of that assumption. Their statistics are then non-Poissonian. Assuming that the emission comes from a narrow range of rotational phase, then the measured burst intervals constrain any possible periodic modulation underlying the highly episodic emission. If more such short intervals are measured a period may be determined or periodicity may be excluded. The excess of burst intervals much shorter than their mean recurrence time may be explained if FRB emit steady but narrow beams that execute a random walk in direction, perhaps indicating origin in a black hole's accretion disc.
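The non-Poissonian claim rests on simple arithmetic: at the quoted mean detection rate, the chance that a given burst is followed by another within 37 ms under a homogeneous Poisson process is tiny. A quick check:

```python
import math

# Homogeneous Poisson process: the waiting time to the next burst is
# exponential, so P(next burst within dt) = 1 - exp(-rate * dt),
# which is approximately rate * dt when the product is small.
rate = 2e-4        # bursts per second (mean detection rate from the abstract)
dt = 0.037         # 37 ms separation
p_pair = 1 - math.exp(-rate * dt)
print(p_pair)      # near 7.4e-6: even one such close pair is very unlikely
```

Seeing two such close pairs in a modest number of detected bursts is therefore strong evidence against a simple Poisson description, which is what motivates the periodicity and wandering-beam interpretations.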
Detecting Pulsing Denial-of-Service Attacks with Nondeterministic Attack Intervals
NASA Astrophysics Data System (ADS)
Luo, Xiapu; Chan, Edmond W. W.; Chang, Rocky K. C.
2009-12-01
This paper addresses the important problem of detecting pulsing denial of service (PDoS) attacks which send a sequence of attack pulses to reduce TCP throughput. Unlike previous works which focused on a restricted form of attacks, we consider a very broad class of attacks. In particular, our attack model admits any attack interval between two adjacent pulses, whether deterministic or not. It also includes the traditional flooding-based attacks as a limiting case (i.e., zero attack interval). Our main contribution is Vanguard, a new anomaly-based detection scheme for this class of PDoS attacks. The Vanguard detection is based on three traffic anomalies induced by the attacks, and it detects them using a CUSUM algorithm. We have prototyped Vanguard and evaluated it on a testbed. The experiment results show that Vanguard is more effective than the previous methods that are based on other traffic anomalies (after a transformation using wavelet transform, Fourier transform, and autocorrelation) and detection algorithms (e.g., dynamic time warping).
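Vanguard's detector is built on a CUSUM algorithm. The sketch below shows a generic one-sided CUSUM raising alarms when a monitored traffic statistic shifts upward; the slack k, threshold h, and synthetic statistic are illustrative, not Vanguard's actual anomalies or parameters:

```python
import numpy as np

def cusum(x, target, k=0.5, h=5.0):
    """One-sided CUSUM: accumulate deviations of x above `target` in
    excess of slack k; alarm (and reset) when the statistic exceeds h."""
    s, alarms = 0.0, []
    for i, v in enumerate(x):
        s = max(0.0, s + (v - target - k))
        if s > h:
            alarms.append(i)
            s = 0.0
    return alarms

rng = np.random.default_rng(3)
normal = rng.normal(0, 1, 200)    # in-control traffic statistic
attack = rng.normal(3, 1, 50)     # statistic shifted up during attack pulses
alarms = cusum(np.concatenate([normal, attack]), target=0.0)
print(alarms)  # most alarms fall in the attack segment (indices >= 200)
```

The slack k trades detection delay against false alarms, which matters here because pulsing attacks with nondeterministic intervals produce only intermittent shifts in the monitored statistic.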
Fall risk as a function of time after admission to sub-acute geriatric hospital units.
Rapp, Kilian; Ravindren, Johannes; Becker, Clemens; Lindemann, Ulrich; Jaensch, Andrea; Klenk, Jochen
2016-10-07
There is evidence about time-dependent fracture rates in different settings and situations. Lacking are data about underlying time-dependent fall risk patterns. The objective of the study was to analyse fall rates as a function of time after admission to sub-acute hospital units and to evaluate the time-dependent impact of clinical factors at baseline on fall risk. This retrospective cohort study used data from 5,255 patients admitted to sub-acute units in a geriatric rehabilitation clinic in Germany between 2010 and 2014. Falls, personal characteristics and functional status at admission were extracted from the hospital information system. The rehabilitation stay was divided into 3-day time-intervals. The fall rate was calculated for each time-interval in all patients combined and in subgroups of patients. To analyse the influence of covariates on fall risk over time, multivariate negative binomial regression models were applied for each of 5 time-intervals. The overall fall rate was 10.2 falls/1,000 person-days, with the highest fall risks during the first week and decreasing risks within the following weeks. A particularly pronounced risk pattern with high fall risks during the first days and decreasing risks thereafter was observed in men, disoriented people, and people with a low functional status or impaired cognition. In disoriented patients, for example, the fall rate decreased from 24.6 falls/1,000 person-days on days 2-4 to about 13 falls/1,000 person-days 2 weeks later. The incidence rate ratios of baseline characteristics also changed over time. Fall risk differs considerably over time during sub-acute hospitalisation. The strongest association between time and fall risk was observed in functionally limited patients, with high risks during the first days after admission and declining risks thereafter. This should be considered in the planning and application of fall prevention measures.
Neighborhood education inequality and drinking behavior.
Lê, Félice; Ahern, Jennifer; Galea, Sandro
2010-11-01
The neighborhood distribution of education (education inequality) may influence substance use among neighborhood residents. Using data from the New York Social Environment Study (conducted in 2005; n=4000), we examined the associations of neighborhood education inequality (measured using Gini coefficients of education) with alcohol use prevalence and levels of alcohol consumption among alcohol users. Analyses were adjusted for neighborhood education level, income level and income inequality, as well as for individual demographic and socioeconomic characteristics and history of drinking prior to residence in the current neighborhood. Neighborhood social norms about drinking were examined as a possible mediator. In adjusted generalized estimating equation regression models, one-standard-deviation-higher education inequality was associated with 1.18 times higher odds of alcohol use (logistic regression odds ratio=1.18, 95% confidence interval 1.08-1.30) but 0.79 times lower average daily alcohol consumption among alcohol users (Poisson regression relative rate=0.79, 95% confidence interval 0.68-0.92). The results tended to differ in magnitude depending on respondents' individual educational levels. There was no evidence that these associations were mediated by social drinking norms, although norms did vary with education inequality. Our results provide further evidence of a relation between education inequality and drinking behavior while illustrating the importance of considering different drinking outcomes and heterogeneity between neighborhood subgroups. Future research could fruitfully consider other potential mechanisms, such as alcohol availability or the role of stress; research that considers multiple mechanisms and their combined effects may be most informative. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Timescale- and Sensory Modality-Dependency of the Central Tendency of Time Perception.
Murai, Yuki; Yotsumoto, Yuko
2016-01-01
When individuals are asked to reproduce intervals of stimuli that are intermixedly presented at various times, longer intervals are often underestimated and shorter intervals overestimated. This phenomenon may be attributed to the central tendency of time perception, and suggests that our brain optimally encodes a stimulus interval based on current stimulus input and prior knowledge of the distribution of stimulus intervals. Two distinct systems are thought to be recruited in the perception of sub- and supra-second intervals. Sub-second timing is subject to local sensory processing, whereas supra-second timing depends on more centralized mechanisms. To clarify the factors that influence time perception, the present study investigated how both sensory modality and timescale affect the central tendency. In Experiment 1, participants were asked to reproduce sub- or supra-second intervals, defined by visual or auditory stimuli. In the sub-second range, the magnitude of the central tendency was significantly larger for visual intervals compared to auditory intervals, while visual and auditory intervals exhibited a correlated and comparable central tendency in the supra-second range. In Experiment 2, the ability to discriminate sub-second intervals in the reproduction task was controlled across modalities by using an interval discrimination task. Even when the ability to discriminate intervals was controlled, visual intervals exhibited a larger central tendency than auditory intervals in the sub-second range. In addition, the magnitude of the central tendency for visual and auditory sub-second intervals was significantly correlated. These results suggest that a common modality-independent mechanism is responsible for the supra-second central tendency, and that both the modality-dependent and modality-independent components of the timing system contribute to the central tendency in the sub-second range.
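The "optimal encoding" account of the central tendency is commonly modelled as a Bayesian observer that blends the noisy sensory measurement with the prior mean in proportion to their precisions. A minimal sketch with illustrative noise parameters, not values fitted to these experiments:

```python
import numpy as np

def reproduce(true_interval, prior_mean, sensory_sd, prior_sd, rng):
    """One reproduced interval: precision-weighted blend of a noisy
    sensory measurement and the prior mean of the stimulus set."""
    m = true_interval + rng.normal(0, sensory_sd)    # noisy measurement
    w = prior_sd**2 / (prior_sd**2 + sensory_sd**2)  # weight on measurement
    return w * m + (1 - w) * prior_mean

rng = np.random.default_rng(4)
intervals = np.array([400.0, 600.0, 800.0])          # ms; prior mean 600
means = [np.mean([reproduce(t, 600.0, 120.0, 150.0, rng)
                  for _ in range(2000)])
         for t in intervals]
print(np.round(means))  # short intervals pulled up, long pulled down
```

A noisier sensory estimate (larger sensory_sd) lowers the weight on the measurement and so strengthens the pull toward the prior mean, matching the finding that sub-second visual intervals show a larger central tendency than the more precisely timed auditory ones.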
High resolution data acquisition
Thornton, G.W.; Fuller, K.R.
1993-04-06
A high resolution event interval timing system measures short time intervals such as occur in high energy physics or laser ranging. Timing is provided from a clock, pulse train, and analog circuitry for generating a triangular wave synchronously with the pulse train (as seen in diagram on patent). The triangular wave has an amplitude and slope functionally related to the time elapsed during each clock pulse in the train. A converter forms a first digital value of the amplitude and slope of the triangle wave at the start of the event interval and a second digital value of the amplitude and slope of the triangle wave at the end of the event interval. A counter counts the clock pulse train during the interval to form a gross event interval time. A computer then combines the gross event interval time and the first and second digital values to output a high resolution value for the event interval.
High resolution data acquisition
Thornton, Glenn W.; Fuller, Kenneth R.
1993-01-01
A high resolution event interval timing system measures short time intervals such as occur in high energy physics or laser ranging. Timing is provided from a clock (38) pulse train (37) and analog circuitry (44) for generating a triangular wave (46) synchronously with the pulse train (37). The triangular wave (46) has an amplitude and slope functionally related to the time elapsed during each clock pulse in the train. A converter (18, 32) forms a first digital value of the amplitude and slope of the triangle wave at the start of the event interval and a second digital value of the amplitude and slope of the triangle wave at the end of the event interval. A counter (26) counts the clock pulse train (37) during the interval to form a gross event interval time. A computer (52) then combines the gross event interval time and the first and second digital values to output a high resolution value for the event interval.
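The interpolation scheme in these patent records (a gross clock count refined by the triangular wave's amplitude and slope at the interval's start and end) can be sketched as follows. The ramp shape, function names, and values are illustrative assumptions, not taken from the patent.

```python
def phase_fraction(amplitude, rising, vmin=0.0, vmax=1.0):
    # Fraction of a clock period elapsed, recovered from a triangular
    # wave assumed to ramp vmin -> vmax over the first half period
    # (rising slope), then back down over the second half.
    x = (amplitude - vmin) / (vmax - vmin)
    return 0.5 * x if rising else 0.5 * (2.0 - x)

def event_interval(gross_counts, clock_period, frac_start, frac_end):
    # Gross counter value refined by the fractional phase at each edge.
    return (gross_counts + frac_end - frac_start) * clock_period
```

For example, 100 counts of a 10 ns clock with fractional phases 0.25 and 0.75 at the two edges would give a 1.005 µs interval, i.e., resolution well below one clock period.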
Real-time edge-enhanced optical correlator
NASA Astrophysics Data System (ADS)
Shihabi, Mazen M.; Hinedi, Sami M.; Shah, Biren N.
1992-08-01
The performance of five symbol lock detectors is compared: the square-law detectors with overlapping (SQOD) and non-overlapping (SQNOD) integrators, the absolute-value detectors with overlapping and non-overlapping (AVNOD) integrators, and the signal power estimator detector (SPED). The analysis considers scenarios in which the observation interval is much larger than or equal to the symbol synchronizer loop bandwidth, cases not considered in previous analyses. Also, the case of threshold setting in the absence of signal is considered. It is shown that the SQOD outperforms all others when the threshold is set in the presence of signal, independent of the relationship between loop bandwidth and observation period. On the other hand, the SPED outperforms all others when the threshold is set in the presence of noise only.
Postmortem ICD interrogation in mode of death classification.
Nikolaidou, Theodora; Johnson, Miriam J; Ghosh, Justin M; Marincowitz, Carl; Shah, Saumil; Lammiman, Michael J; Schilling, Richard J; Clark, Andrew L
2018-04-01
The definition of sudden death due to arrhythmia relies on the time interval between onset of symptoms and death. However, not all sudden deaths are due to arrhythmia. In patients with an implantable cardioverter defibrillator (ICD), postmortem device interrogation may help better distinguish the mode of death compared to a time-based definition alone. This study aims to assess the proportion of "sudden" cardiac deaths in patients with an ICD that have confirmed arrhythmia. We conducted a literature search for studies using postmortem ICD interrogation and a time-based classification of the mode of death. A modified QUADAS-2 checklist was used to assess risk of bias in individual studies. Outcome data were pooled where sufficient data were available. Our search identified 22 studies undertaken between 1982 and 2015 with 23,600 participants. The pooled results (excluding studies with high risk of bias) suggest that ventricular arrhythmias are present at the time of death in 76% of "sudden" deaths (95% confidence interval [CI] 67-85; range 42-88). Postmortem ICD interrogation identifies 24% of "sudden" deaths to be nonarrhythmic. Postmortem device interrogation should be considered in all cases of unexplained sudden cardiac death. © 2018 Wiley Periodicals, Inc.
Technical note: A device for obtaining time-integrated samples of ruminal fluid
Corley, R. N.; Murphy, M.R.; Lucena, J.; Panno, S.V.
1999-01-01
A device was adapted to allow for time-integrated sampling of fluid from the rumen via a cannula. The sampler consisted of a cup-shaped ceramic filter positioned in the ventral rumen of a cannulated cow and attached to a tube through which fluid entering the filter was removed continuously using a peristaltic pump. Rate of ruminal fluid removal using the device was monitored over two 36-h periods (at 6-h intervals) and was not affected (P > .05) by time, indicating that the system was not susceptible to clogging during this period. Two cows having ad libitum access to a totally mixed ration were used in a split-block design to evaluate the utility of the system for obtaining time-integrated samples of ruminal fluid. Ruminal fluid VFA concentration and pattern in samples collected in two replicated 8-h periods by the time-integrated sampler (at 1-h intervals) were compared with composite samples collected using a conventional suction-strainer device (at 30-min intervals). Each 8-h collection period started 2 h before or 6 h after feeding. Results indicated that total VFA concentration was not affected (P > .05) by the sampling method. Volatile fatty acid patterns were likewise unaffected (P > .05) except that acetate was 2.5% higher (P < .05) in samples collected 2 h before feeding and valerate was 5% higher (P < .05) in samples collected 6 h after feeding by the suction-strainer device. Although significant, these differences were not considered physiologically important. We concluded that use of the ceramic filter improved the sampling of ruminal fluid by simplifying the technique and allowing time-integrated samples to be obtained.
The probability of lava inundation at the proposed and existing Kulani prison sites
Kauahikaua, J.P.; Trusdell, F.A.; Heliker, C.C.
1998-01-01
The State of Hawai`i has proposed building a 2,300-bed medium-security prison about 10 km downslope from the existing Kulani medium-security correctional facility. The proposed and existing facilities lie on the northeast rift zone of Mauna Loa, which last erupted in 1984 in this same general area. We use the best available geologic mapping and dating with GIS software to estimate the average recurrence interval between lava flows that inundate these sites. Three different methods are used to adjust the number of flows exposed at the surface for those flows that are buried to allow a better representation of the recurrence interval. Probabilities are then computed, based on these recurrence intervals, assuming that the data match a Poisson distribution. The probability of lava inundation for the existing prison site is estimated to be 11-12% in the next 50 years. The probability of lava inundation for the proposed sites B and C is 2-3% and 1-2%, respectively, in the same period. The probabilities are based on estimated recurrence intervals for lava flows, which are approximately proportional to the area considered. The probability of having to evacuate the prison is certainly higher than the probability of lava entering the site. Maximum warning times between eruption and lava inundation of a site are estimated to be 24 hours for the existing prison site and 72 hours for proposed sites B and C. Evacuation plans should take these times into consideration.
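The Poisson calculation described above reduces to P = 1 − exp(−t/τ) for horizon t and recurrence interval τ. A minimal sketch; the recurrence intervals used below are illustrative values chosen to reproduce the quoted probability ranges, not numbers taken from the paper.

```python
import math

def inundation_probability(recurrence_years, horizon_years):
    # Poisson assumption: probability of at least one inundating
    # lava flow within the planning horizon.
    return 1.0 - math.exp(-horizon_years / recurrence_years)
```

For example, a recurrence interval near 425 years gives roughly an 11% probability over 50 years (the range quoted for the existing site), while one near 2,000 years gives the 2-3% range quoted for proposed site B.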
Measuring the EMS patient access time interval and the impact of responding to high-rise buildings.
Morrison, Laurie J; Angelini, Mark P; Vermeulen, Marian J; Schwartz, Brian
2005-01-01
To measure the patient access time interval and characterize its contribution to the total emergency medical services (EMS) response time interval; to compare the patient access time intervals for patients located three or more floors above ground with those less than three floors above or below ground, and specifically in the apartment subgroup; and to identify barriers that significantly impede EMS access to patients in high-rise apartments. An observational study of all patients treated by an emergency medical technician-paramedic (EMT-P) crew was conducted using a trained independent observer to collect time intervals and identify potential barriers to access. Of 118 observed calls, 25 (21%) originated from patients three or more floors above ground. The overall median and 90th percentile (95% confidence interval) patient access time intervals were 1.61 (1.27, 1.91) and 3.47 (3.08, 4.05) minutes, respectively. The median interval was 2.73 (2.22, 3.03) minutes among calls from patients located three or more stories above ground compared with 1.25 (1.07, 1.55) minutes among those at lower levels. The patient access time interval represented 23.5% of the total EMS response time interval among calls originating less than three floors above or below ground and 32.2% of those located three or more stories above ground. The most frequently encountered barriers to access included security code entry requirements, lack of directional signs, and inability to fit the stretcher into the elevator. The patient access time interval is long and represents a substantial component of the total EMS response time interval, especially among ambulance calls originating three or more floors above ground. A number of barriers appear to contribute to delayed paramedic access.
Modulation of human time processing by subthalamic deep brain stimulation.
Wojtecki, Lars; Elben, Saskia; Timmermann, Lars; Reck, Christiane; Maarouf, Mohammad; Jörgens, Silke; Ploner, Markus; Südmeyer, Martin; Groiss, Stefan Jun; Sturm, Volker; Niedeggen, Michael; Schnitzler, Alfons
2011-01-01
Timing in the range of seconds, referred to as interval timing, is crucial for cognitive operations and conscious time processing. According to recent models of interval timing, basal ganglia (BG) oscillatory loops are involved in time interval recognition. Parkinson's disease (PD) is a typical disease of the basal ganglia that shows distortions in interval timing. Deep brain stimulation (DBS) of the subthalamic nucleus (STN) is a powerful treatment of PD which modulates motor and cognitive functions depending on stimulation frequency by affecting subcortical-cortical oscillatory loops. Thus, for the understanding of BG involvement in interval timing it is of interest whether STN-DBS can modulate timing in a frequency-dependent manner by interference with oscillatory time recognition processes. We examined production and reproduction of 5- and 15-second intervals and millisecond timing in a double-blind, randomised, within-subject repeated-measures design of 12 PD patients applying no, 10-Hz- and ≥ 130-Hz-STN-DBS compared to healthy controls. We found under(re-)production of the 15-second interval and a significant enhancement of this under(re-)production by 10-Hz stimulation compared to no stimulation, ≥ 130-Hz-STN-DBS and controls. Millisecond timing was not affected. We provide first evidence for a frequency-specific modulatory effect of STN-DBS on interval timing. Our results corroborate the involvement of the BG in general and of the STN in particular in the cognitive representation of time intervals in the range of multiple seconds.
Fuel optimal maneuvers for spacecraft with fixed thrusters
NASA Technical Reports Server (NTRS)
Carter, T. C.
1982-01-01
Several mathematical models, including a minimum integral square criterion problem, were used for the qualitative investigation of fuel optimal maneuvers for spacecraft with fixed thrusters. The solutions consist of intervals of "full thrust" and "coast" indicating that thrusters do not need to be designed as "throttleable" for fuel optimal performance. For the primary model considered, singular solutions occur only if the optimal solution is "pure translation". "Time optimal" singular solutions can be found which consist of intervals of "coast" and "full thrust". The shape of the optimal fuel consumption curve as a function of flight time was found to depend on whether or not the initial state is in the region admitting singular solutions. Comparisons of fuel optimal maneuvers in deep space with those relative to a point in circular orbit indicate that qualitative differences in the solutions can occur. Computation of fuel consumption for certain "pure translation" cases indicates that considerable savings in fuel can result from the fuel optimal maneuvers.
Wilson, Lorna R M; Hopcraft, Keith I
2017-12-01
The problem of zero crossings is of great historical prevalence and promises extensive application. The challenge is to establish precisely how the autocorrelation function or power spectrum of a one-dimensional continuous random process determines the density function of the intervals between the zero crossings of that process. This paper investigates the case where periodicities are incorporated into the autocorrelation function of a smooth process. Numerical simulations, and statistics about the number of crossings in a fixed interval, reveal that in this case the zero crossings segue between a random and deterministic point process depending on the relative time scales of the periodic and nonperiodic components of the autocorrelation function. By considering the Laplace transform of the density function, we show that incorporating correlation between successive intervals is essential to obtaining accurate results for the interval variance. The same method enables prediction of the density function tail in some regions, and we suggest approaches for extending this to cover all regions. In an ever-more complex world, the potential applications for this scale of regularity in a random process are far reaching and powerful.
Ware, Jasmine; Rode, Karyn D.; Pagano, Anthony M.; Bromaghin, Jeffrey F.; Robbins, Charles T.; Erlenbach, Joy; Jensen, Shannon; Cutting, Amy; Nicassio-Hiskey, Nicole; Hash, Amy; Owen, Megan A.; Jansen, Heiko
2015-01-01
Activity sensors are often included in wildlife transmitters and can provide information on the behavior and activity patterns of animals remotely. However, interpreting activity-sensor data relative to animal behavior can be difficult if animals cannot be continuously observed. In this study, we examined the performance of a mercury tip-switch and a tri-axial accelerometer housed in collars to determine whether sensor data can be accurately classified as resting and active behaviors and whether data are comparable for the 2 sensor types. Five captive bears (3 polar [Ursus maritimus] and 2 brown [U. arctos horribilis]) were fitted with a collar specially designed to internally house the sensors. The bears’ behaviors were recorded, classified, and then compared with sensor readings. A separate tri-axial accelerometer that sampled continuously at a higher frequency and provided raw acceleration values from 3 axes was also mounted on the collar to compare with the lower resolution sensors. Both accelerometers more accurately identified resting and active behaviors at time intervals ranging from 1 minute to 1 hour (≥91.1% accuracy) compared with the mercury tip-switch (range = 75.5–86.3%). However, mercury tip-switch accuracy improved when sampled at longer intervals (e.g., 30–60 min). Data from the lower resolution accelerometer, but not the mercury tip-switch, accurately predicted the percentage of time spent resting during an hour. Although the number of bears available for this study was small, our results suggest that these activity sensors can remotely identify resting versus active behaviors across most time intervals. We recommend that investigators consider both study objectives and the variation in accuracy of classifying resting and active behaviors reported here when determining sampling interval.
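Classifying accelerometer data into resting versus active over fixed time intervals, as evaluated in the study above, can be sketched with a simple mean-magnitude threshold. This is an illustrative approach only; the study's actual classification was calibrated against observed captive-bear behavior, and the threshold here is hypothetical.

```python
def classify_epochs(magnitudes, epoch_len, threshold):
    # Label each fixed-length epoch of acceleration magnitudes as
    # 'resting' or 'active' by its mean value; in practice the
    # threshold would be tuned against directly observed behavior.
    labels = []
    for i in range(0, len(magnitudes) - epoch_len + 1, epoch_len):
        epoch = magnitudes[i:i + epoch_len]
        labels.append('active' if sum(epoch) / len(epoch) > threshold else 'resting')
    return labels
```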
Tanguay, Alain; Dallaire, Renée; Hébert, Denise; Bégin, François; Fleet, Richard
2015-11-01
As per American Heart Association/American College of Cardiology guidelines, the delay between first medical contact and balloon inflation should not exceed 90 min for primary percutaneous coronary intervention (PCI). In North America, few prehospital systems have been developed to grant rural populations timely access to PCI. The objective of the present study was to evaluate the ability of an ST-segment elevation myocardial infarction (STEMI) system serving suburban and rural populations to achieve the recommended 90-min interval benchmark for PCI. A prehospital telemedicine program was implemented in a rural and suburban region of the Quebec province. Three patient groups with STEMI were created according to trajectory: 1) patients already en route to a PCI center, 2) patients initially directed to the nearest hospital who were subsequently diverted to a PCI center during transport, and 3) patients directed to the nearest hospital without transfer for PCI. Time intervals were compared across groups. Of the 208 patients diagnosed with STEMI, 14.9% were already on their way to a hospital with PCI capabilities, 75.0% were rerouted to a PCI center, and 10.1% were directed to the nearest local hospital. All patients but one arrived at the PCI center within the 60-min prehospital care interval, considering an additional 30 min for balloon inflation at the PCI center. This study demonstrated that a regionalized prehospital system for STEMI patients could achieve the recommended 90-min interval benchmark for PCI, while giving timely access to PCI to rural populations that would not otherwise have access to this treatment. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Zare, Sajad; Nassiri, Parvin; Monazzam, Mohammad Reza; Pourbakht, Akram; Azam, Kamal; Golmohammadi, Taghi
2015-01-01
Background: Noise-induced hearing loss (NIHL) is usually one of the main problems in industrial settings. The aim of this study was to determine whether changes in the signal-to-noise ratio (SNR) of different distortion product otoacoustic emissions (DPOAEs) are caused by exposure to different levels of noise at different time intervals among workers exposed to noise. Methods: This case-control study was conducted in the autumn of 2014 on 45 workers at Gol Gohar Mining and Industrial Company, which is located in Sirjan in southeast Iran. The workers were divided into three groups based on their noise exposure, i.e., 1) 15 office workers as a control group with exposure to low levels of noise, 2) 15 workers from manufacturing departments who were exposed to a medium level of noise, and 3) 15 workers from manufacturing departments who were exposed to high levels of noise. The SNRs at the frequencies of 1000, 2000, 3000, 4000, and 6000 Hz were measured in both ears at three different time intervals during the shift work. An SNR of 6 or greater was used as the inclusion criterion. Repeated measures, the Spearman rank-order correlation test, and paired t-test analyses were used, with α = 0.05 as the level of significance. Results: For all frequencies in the right and left ears, the SNR values were more than 6; thus, all SNR values were considered acceptable responses. The effects of time and sound pressure level (SPL) on SNR were significant for the right and left ears (p = 0.027 and < 0.001, respectively). There was a statistically significant correlation between the SNR values in the right and left ears for the time intervals 7:30–8:00 A.M. and 13:30–14:00 P.M., which implied that an increase in the duration of exposure led to reduced SNR values (p = 0.024, r = 0.948). Conclusions: The comparison of the SNR values in the right and left ears (for all frequencies and the three different SPLs) indicated that the values decreased during the shift work. PMID:26388979
Infant Temperament: Stability by Age, Gender, Birth Order, Term Status, and SES
Bornstein, Marc H.; Putnick, Diane L.; Gartstein, Maria A.; Hahn, Chun-Shin; Auestad, Nancy; O’Connor, Deborah L.
2015-01-01
Two complementary studies focused on stability of infant temperament across the first year and considered infant age, gender, birth order, term status, and socioeconomic status (SES) as moderators. Study 1 consisted of 73 mothers of firstborn term girls and boys queried at 2, 5, and 13 months of age. Study 2 consisted of 335 mothers of infants of different gender, birth order, term status, and SES queried at 6 and 12 months. Consistent positive and negative affectivity factors emerged at all time-points across both studies. Infant temperament proved stable and robust across gender, birth order, term status, and SES. Stability coefficients for temperament factors and scales were medium to large for shorter (<9 months) inter-assessment intervals and small to medium for longer (>10 months) intervals. PMID:25865034
Flambaum, V V; Kozlov, M G
2007-10-12
Sensitivity to temporal variation of the fundamental constants may be strongly enhanced in transitions between narrow close levels of different nature. This enhancement may be realized in a large number of molecules due to cancellation between the ground-state fine-structure interval ω_f and the vibrational interval ω_v: ω = ω_f − nω_v ≈ 0, so that δω/ω = K(2δα/α + 0.5δμ/μ), where K ≫ 1 and μ = m_p/m_e. The intervals between the levels are conveniently located in the microwave frequency range and the level widths are very small. The required accuracy of the shift measurements is about 0.01-1 Hz. As examples, we consider the molecules Cl₂⁺, CuS, IrC, SiBr, and HfF⁺.
On the motion of a rigid body with an internal moving point mass on a horizontal plane
NASA Astrophysics Data System (ADS)
Bardin, B. S.; Panev, A. S.
2018-05-01
We consider motions of a body carrying a movable internal mass. The internal mass is a particle moving in a circle inside the body, which performs a rectilinear motion on a horizontal plane. We suppose that viscous and dry friction acts between the plane and the body. We also assume that the body moves without jumps on the plane. Our study has shown that, depending on the values of the parameters, the body either moves periodically, stopping and resting for certain time intervals, or approaches a periodic mode of motion without quiescence intervals. The above conclusions are in good correspondence with results obtained in our previous papers, where the above problem was studied under the assumption that viscous friction is absent.
Joosen, Annemiek M C P; van der Linden, Ivon J M; Schrauwen, Lianne; Theeuwes, Alisia; de Groot, Monique J M; Ermens, Antonius A M
2017-11-27
Vasopressin and adrenomedullin and their stable by-products, copeptin and the midregional part of proadrenomedullin (MR-proADM), are promising biomarkers for the development of preeclampsia. However, clinical use is hampered by the lack of trimester-specific reference intervals. We therefore estimated reference intervals for copeptin and MR-proADM in disease-free Dutch women throughout pregnancy. Apparently healthy low-risk pregnant women were recruited. Exclusion criteria included current or past history of endocrine disease, multiple pregnancy, use of medication known to influence thyroid function and current pregnancy as a result of hormonal stimulation. Women who miscarried, developed hyperemesis gravidarum, hypertension, pre-eclampsia, hemolysis, elevated liver enzymes and low platelets (HELLP) syndrome, diabetes or other disease, delivered prematurely or had a small-for-gestational-age neonate were excluded from analyses. Blood samples were collected at 9-13 weeks (n=98), 27-29 weeks (n=94) and 36-39 weeks (n=91) of gestation and at 4-13 weeks post-partum (PP) (n=89). Sixty-two women had complete data during pregnancy and PP. All analyses were performed on a Kryptor compact plus. Copeptin increases during pregnancy, but 97.5th percentiles remain below the non-pregnant upper reference limit (URL) provided by the manufacturer. MR-proADM concentrations increase as well during pregnancy. In trimesters 2 and 3 the 97.5th percentiles are over three times the non-pregnant URL provided by the manufacturer. Trimester- and assay-specific reference intervals for copeptin and MR-proADM should be used. In addition, consecutive measurements and the time frame between measurements should be considered, as the differences seen with or in advance of preeclampsia can be expected to be relatively small compared to the reference intervals.
Cardiac autonomic regulation is disturbed in children with euthyroid Hashimoto thyroiditis.
Kilic, Ayhan; Gulgun, Mustafa; Tascilar, Mehmet Emre; Sari, Erkan; Yokusoglu, Mehmet
2012-03-01
Hashimoto thyroiditis (chronic autoimmune thyroiditis) is the most common form of thyroiditis in childhood. Previous studies have found autonomic dysfunction of varying magnitude in patients with autoimmune diseases, which is considered a cardiovascular risk factor. We aimed to evaluate the heart rate variability (HRV), a measure of cardiac autonomic modulation, in children with euthyroid Hashimoto thyroiditis (eHT). The study included 32 patients with eHT (27 girls and 5 boys; mean age 11 ± 4.1 years, range 8-16; body mass index 0.47 ± 0.69 kg/m²), as judged by normal or minimally elevated serum TSH levels (normal range: 0.34-5.6 mIU/l) and normal levels of free thyroid hormones (FT4 and FT3), and 38 euthyroid age-matched controls. Patients with eHT and control subjects underwent physical examination and 24-hour ambulatory ECG monitoring. Time-domain parameters of HRV were evaluated for cardiac autonomic functions. Children with eHT displayed significantly lower values of the time-domain parameters SDANN (standard deviation of the averages of NN intervals), RMSSD (square root of the mean of the sum of the squares of differences between adjacent NN intervals), NN50 counts (number of pairs of adjacent NN intervals differing by more than 50 ms) and PNN50 (NN50 count divided by the total number of all NN intervals) for each 5-min interval, compared to healthy controls (p < 0.05 for each), indicating decreased beat-to-beat variation of heart rate. In conclusion, eHT is associated with disturbed autonomic regulation of heart rate. Hence, children with eHT are at higher risk for developing cardiovascular diseases.
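The time-domain HRV parameters defined parenthetically in this abstract are direct computations on a series of NN intervals. A minimal sketch following those definitions; the sample data below are made up for illustration.

```python
import math

def rmssd(nn):
    # Square root of the mean of squared successive NN-interval differences (ms).
    diffs = [b - a for a, b in zip(nn, nn[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def nn50(nn):
    # Number of adjacent NN-interval pairs differing by more than 50 ms.
    return sum(1 for a, b in zip(nn, nn[1:]) if abs(b - a) > 50)

def pnn50(nn):
    # NN50 count divided by the total number of NN intervals,
    # as defined in the abstract above.
    return nn50(nn) / len(nn)
```

Lower values of these statistics on a 24-hour recording, as reported for the eHT group, indicate reduced beat-to-beat variation of heart rate.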
Department of Defense Precise Time and Time Interval program improvement plan
NASA Technical Reports Server (NTRS)
Bowser, J. R.
1981-01-01
The United States Naval Observatory is responsible for ensuring uniformity in precise time and time interval (PTTI) operations, including measurements, for establishing overall DOD requirements for time and time interval, and for accomplishing objectives requiring precise time and time interval at minimum cost. An overview of the objectives, the approach to the problem, the schedule, and a status report are presented, including significant findings relative to organizational relationships, current directives, principal PTTI users, and future requirements as currently identified by the users.
Relative potency estimates of acceptable residues and reentry intervals after nerve agent release
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watson, A.P.; Jones, T.D.; Adams, J.D.
1992-06-01
In the event of an unplanned release of a chemical warfare agent during any stage of the Chemical Stockpile Disposal Program, the potential exists for off-post contamination of drinking water, forage crops, grains, garden produce, and livestock. The more persistent agents, such as the organophosphate nerve agent VX, pose the greatest human health concern for reentry. A relative potency approach comparing the toxicity of VX to organophosphate insecticide analogues is developed and used to estimate allowable residues for VX in agricultural products and reentry intervals for public access to contaminated areas. Analysis of mammalian LD50 data by all exposure routes indicates that VX is 10³ to 10⁴ times more toxic than most commercially available organophosphate insecticides. Thus, allowable residues of VX could be considered at concentration levels 10³ to 10⁴ lower than those established for certain insecticides by the U.S. EPA. Evaluation of reentry intervals developed for these organophosphate analogues indicates that, if environmental monitoring cannot reliably demonstrate acceptable levels of VX, restricted access to suspect or contaminated areas may be on the order of weeks to months following agent release. Planning for relocation, mass care centers, and quarantine should take this time period into account.
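The relative potency scaling described above, plus a residue-decay step, can be sketched as simple arithmetic. The first-order decay model and all numeric values below are illustrative assumptions; the report itself derives reentry intervals from insecticide-analogue data rather than this simple model.

```python
import math

def allowable_residue(insecticide_tolerance, relative_potency):
    # Scale an established insecticide tolerance down by VX's
    # relative potency (10^3 to 10^4 per the abstract).
    return insecticide_tolerance / relative_potency

def reentry_interval_days(initial, allowable, half_life_days):
    # Illustrative first-order decay: time for the residue to fall
    # from its initial level to the allowable level.
    if initial <= allowable:
        return 0.0
    return half_life_days * math.log2(initial / allowable)
```

With a residue starting roughly three orders of magnitude above the allowable level and a half-life of a few days, this yields reentry intervals of weeks to months, consistent with the scale stated in the abstract.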
Galerkin Spectral Method for the 2D Solitary Waves of Boussinesq Paradigm Equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christou, M. A.; Christov, C. I.
2009-10-29
We consider the 2D stationary propagating solitary waves of the so-called Boussinesq Paradigm equation. The fourth-order elliptic boundary value problem on an infinite interval is solved by a Galerkin spectral method. An iterative procedure based on artificial time ('false transients') and operator splitting is used. Results are obtained for the shapes of the solitary waves for different values of the dispersion parameters for both subcritical and supercritical phase speeds.
NASA Astrophysics Data System (ADS)
Kotrlová, Andrea; Török, Gabriel; Šrámková, Eva; Stuchlík, Zdeněk
2014-12-01
We have previously applied several models of high-frequency quasi-periodic oscillations (HF QPOs) to estimate the spin of the central Kerr black hole in the three Galactic microquasars, GRS 1915+105, GRO J1655-40, and XTE J1550-564. Here we explore the alternative possibility that the central compact body is a super-spinning object (or a naked singularity) with the external space-time described by Kerr geometry with a dimensionless spin parameter a ≡ cJ/GM^2 > 1. We calculate the relevant spin intervals for a subset of HF QPO models considered in the previous study. Our analysis indicates that for all but one of the considered models there exists at least one interval of a > 1 that is compatible with constraints given by the ranges of the central compact object mass independently estimated for the three sources. For most of the models, the inferred values of a are several times higher than the extreme Kerr black hole value a = 1. These values may be too high, since the spin of superspinars is often assumed to rapidly decrease due to accretion when a ≫ 1. In this context, we conclude that only the epicyclic and the Keplerian resonance models provide estimates that are compatible with the expectation of just a small deviation from a = 1.
Sensitivity analysis of seismic hazard for the northwestern portion of the state of Gujarat, India
Petersen, M.D.; Rastogi, B.K.; Schweig, E.S.; Harmsen, S.C.; Gomberg, J.S.
2004-01-01
We test the sensitivity of seismic hazard to three fault source models for the northwestern portion of Gujarat, India. The models incorporate different characteristic earthquake magnitudes on three faults with individual recurrence intervals of either 800 or 1600 years. These recurrence intervals imply that large earthquakes occur on one of these faults every 266-533 years, similar to the rate of historic large earthquakes in this region during the past two centuries and for earthquakes in intraplate environments like the New Madrid region in the central United States. If one assumes a recurrence interval of 800 years for large earthquakes on each of three local faults, the peak ground accelerations (PGA; horizontal) and 1-Hz spectral acceleration ground motions (5% damping) are greater than 1 g over a broad region for a 2% probability of exceedance in 50 years' hazard level. These probabilistic PGAs at this hazard level are similar to median deterministic ground motions. The PGAs for 10% in 50 years' hazard level are considerably lower, generally ranging between 0.2 g and 0.7 g across northwestern Gujarat. Ground motions calculated from our models that consider fault interevent times of 800 years are considerably higher than other published models even though they imply similar recurrence intervals. These higher ground motions are mainly caused by the application of intraplate attenuation relations, which account for less severe attenuation of seismic waves when compared to the crustal interplate relations used in these previous studies. For sites in Bhuj and Ahmedabad, magnitude (M) 7 3/4 earthquakes contribute most to the PGA and the 0.2- and 1-s spectral acceleration ground motion maps at the two considered hazard levels. © 2004 Elsevier B.V. All rights reserved.
Anatomy of Some Non-Heinrich Events During The Last Glacial Maximum on Laurentian Fan
NASA Astrophysics Data System (ADS)
Gil, I. M.; Keigwin, L. D.
2013-12-01
High-resolution diatom assemblage analyses coupled with oxygen and carbon isotopic records from a new 28 m piston core on Laurentian Fan reveal significant sedimentological and marine productivity changes related to variability of the nearby Laurentide Ice Sheet during the Last Glacial Maximum. Between 21.0 and 19.7 ka and between 18.8 and 18.6 ka, olive-grey clay intervals interrupt the usual glacial red-clay sedimentation. The timing of these two intervals corresponds to the reported occurrence of layers low in detrital carbonate (LDC, considered as non-Heinrich events) that occurred between Heinrich Events 1 and 2. Diatoms are abundant only during those LDC olive-grey clay intervals and suggest ice retreat (allowing the light penetration necessary for diatoms). The species succession also reveals different environmental conditions. The 21.0 to 19.7 ka interval is divisible into two main periods: the first was characterized by environmental conditions dominated by ice, while the second period (starting at 20.2 ka) was warmer than the first. During the shorter 18.8 to 18.6 ka interval, conditions were even warmer than during the 20.2 to 19.7 ka sub-interval. Finally, the comparison of the interpreted oceanographic conditions with changes in Ice Rafted Debris and other records from the North Atlantic will bring new insight into those episodes that precede the transition to deglaciation beginning ~18.2 ka on Laurentian Fan (based on δ18O in N. pachyderma (s.)).
Dedicated clock/timing-circuit theories of time perception and timed performance.
van Rijn, Hedderik; Gu, Bon-Mi; Meck, Warren H
2014-01-01
Scalar Timing Theory (an information-processing version of Scalar Expectancy Theory) and its evolution into the neurobiologically plausible Striatal Beat-Frequency (SBF) theory of interval timing are reviewed. These pacemaker/accumulator or oscillation/coincidence detection models are then integrated with the Adaptive Control of Thought-Rational (ACT-R) cognitive architecture as dedicated timing modules that are able to make use of the memory and decision-making mechanisms contained in ACT-R. The different predictions made by the incorporation of these timing modules into ACT-R are discussed as well as the potential limitations. Novel implementations of the original SBF model that allow it to be incorporated into ACT-R in a more fundamental fashion than the earlier simulations of Scalar Timing Theory are also considered in conjunction with the proposed properties and neural correlates of the "internal clock".
Persistence of non-Markovian Gaussian stationary processes in discrete time
NASA Astrophysics Data System (ADS)
Nyberg, Markus; Lizana, Ludvig
2018-04-01
The persistence of a stochastic variable is the probability that it does not cross a given level during a fixed time interval. Although persistence is a simple concept to understand, it is in general hard to calculate. Here we consider zero-mean Gaussian stationary processes in discrete time n. Few results are known for the persistence P0(n) in discrete time, except for the large-time behavior, which is characterized by the nontrivial constant θ through P0(n) ~ θ^n. Using a modified version of the independent interval approximation (IIA) that we developed before, we are able to calculate P0(n) analytically in z-transform space in terms of the autocorrelation function A(n). If A(n) → 0 as n → ∞, we extract θ numerically, while if A(n) = 0 for finite n > N, we find θ exactly (within the IIA). We apply our results to three special cases: the nearest-neighbor-correlated "first-order moving average process", where A(n) = 0 for n > 1; the double-exponential-correlated "second-order autoregressive process", where A(n) = c1 λ1^n + c2 λ2^n; and power-law-correlated variables, where A(n) ~ n^(-μ). Apart from the power-law case when μ < 5, we find excellent agreement with simulations.
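The scaling P0(n) ~ θ^n is easy to probe numerically. Below is a minimal Monte Carlo sketch (not the authors' IIA calculation) for the first-order moving average case, where A(n) = 0 for n > 1; the trajectory count, horizon, and level (zero) are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# First-order moving average: x_k = (eps_k + eps_{k-1}) / sqrt(2),
# a zero-mean, unit-variance Gaussian process with A(1) = 1/2, A(n) = 0 for n > 1.
def persistence(n_max, n_traj=200_000):
    eps = rng.standard_normal((n_traj, n_max + 1))
    x = (eps[:, 1:] + eps[:, :-1]) / np.sqrt(2.0)
    # P0(n): fraction of trajectories whose first n values all stay above zero
    alive = np.cumprod(x > 0, axis=1)
    return alive.mean(axis=0)

p0 = persistence(12)
theta = p0[-1] / p0[-2]  # large-n ratio approximates the decay constant θ
```

By symmetry P0(1) = 1/2 exactly, which gives a quick sanity check on the simulation.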
Non-Stationary Effects and Cross Correlations in Solar Activity
NASA Astrophysics Data System (ADS)
Nefedyev, Yuri; Panischev, Oleg; Demin, Sergey
2016-07-01
In this paper, within the framework of Flicker-Noise Spectroscopy (FNS), we consider the dynamic properties of solar activity by analyzing the Zurich sunspot numbers. As is well known, astrophysical objects are non-stationary open systems whose evolution is highly individual and exhibits alternation effects. The main difference of FNS compared to other related methods is the separation of the original signal reflecting the dynamics of solar activity into three frequency bands: system-specific "resonances" and their interferential contributions at lower frequencies, chaotic "random walk" ("irregularity-jump") components at larger frequencies, and chaotic "irregularity-spike" (inertial) components in the highest frequency range. Specific parameters corresponding to each of the bands are introduced and calculated. These irregularities, as well as specific resonance frequencies, are considered as information carriers on every hierarchical level of the evolution of a complex natural system with intermittent behavior: consecutive alternation of rapid chaotic changes in the values of dynamic variables on small time intervals with small variations of the values on longer time intervals ("laminar" phases). The jump and spike irregularities are described by power spectra and difference moments (transient structure functions) of the second order. FNS reveals the most crucial points of the solar activity dynamics by means of the "spikiness" factor. It is shown that this variable behaves as a predictor of crucial changes in the sunspot number dynamics, particularly when the number approaches its maximum value. Changing the averaging interval reveals non-stationary effects that depend on the 11-year cycle and on processes within a cycle. To consider the cross correlations between the different variables of solar activity, we use the Zurich sunspot numbers and the sequence of the corona's radiation energy.
The FNS approach allows extracting information about cross-correlation dynamics between signals from separate points of the studied system. The 3D cross correlators and their plane projections reveal periodic laws of solar evolution. Work was supported by grants RFBR 15-02-01638-a and 16-02-00496-a.
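For readers unfamiliar with FNS terminology, the second-order difference moment (transient structure function) mentioned above is straightforward to compute. A minimal numpy sketch follows, applied to a synthetic random walk rather than sunspot data; the signal, seed, and lag range are illustrative assumptions.

```python
import numpy as np

def structure_function(v, max_lag):
    """Second-order difference moment Phi(tau) = <(v(t + tau) - v(t))^2>."""
    v = np.asarray(v, dtype=float)
    return np.array([np.mean((v[lag:] - v[:-lag]) ** 2)
                     for lag in range(1, max_lag + 1)])

rng = np.random.default_rng(1)
walk = np.cumsum(rng.standard_normal(200_000))  # stand-in "signal"
phi = structure_function(walk, 10)
# For an uncorrelated random walk, Phi(tau) grows linearly with tau;
# departures from such simple scaling are what FNS analyses exploit.
```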
Quantitative survival impact of composite treatment delays in head and neck cancer.
Ho, Allen S; Kim, Sungjin; Tighiouart, Mourad; Mita, Alain; Scher, Kevin S; Epstein, Joel B; Laury, Anna; Prasad, Ravi; Ali, Nabilah; Patio, Chrysanta; St-Clair, Jon Mallen; Zumsteg, Zachary S
2018-05-09
Multidisciplinary management of head and neck cancer (HNC) must reconcile increasingly sophisticated subspecialty care with timeliness of care. Prior studies examined the individual effects of delays in diagnosis-to-treatment interval, postoperative interval, and radiation interval but did not consider them collectively. The objective of the current study was to investigate the combined impact of these interwoven intervals on patients with HNC. Patients with HNC who underwent curative-intent surgery with radiation were identified in the National Cancer Database between 2004 and 2013. Multivariable models were constructed using restricted cubic splines to determine nonlinear relations with overall survival. Overall, 15,064 patients were evaluated. After adjustment for covariates, only prolonged postoperative interval (P < .001) and radiation interval (P < .001) independently predicted for worse outcomes, whereas the association of diagnosis-to-treatment interval with survival disappeared. By using multivariable restricted cubic spline functions, increasing postoperative interval did not affect mortality until 40 days after surgery, and each day of delay beyond this increased the risk of mortality until 70 days after surgery (hazard ratio, 1.14; 95% confidence interval, 1.01-1.28; P = .029). For radiation interval, mortality escalated continuously with each additional day of delay, plateauing at 55 days (hazard ratio, 1.25; 95% confidence interval, 1.11-1.41; P < .001). Delays beyond these change points were not associated with further survival decrements. Increasing delays in postoperative and radiation intervals are associated independently with an escalating risk of mortality that plateaus beyond certain thresholds. Delays in initiating therapy, conversely, are eclipsed in importance when appraised in conjunction with the entire treatment course. 
Such findings may redirect focus to streamlining those intervals that are most sensitive to delays when considering survival burden. Cancer 2018. © 2018 American Cancer Society.
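Restricted cubic splines of the kind used above constrain the fitted relation to be linear beyond the outer knots, which stabilizes hazard estimates in the data-sparse tails. A minimal sketch of the standard basis construction follows; the knot placements at 40, 55, and 70 days are our own illustrative choice (echoing the reported change points), not taken from the study's code.

```python
import numpy as np

def rcs_basis(x, knots):
    """Restricted cubic spline basis: cubic between knots, linear outside."""
    x = np.asarray(x, dtype=float)
    k = np.asarray(knots, dtype=float)
    K = len(k)
    pos3 = lambda u: np.maximum(u, 0.0) ** 3  # truncated cubic (u)_+^3
    cols = [x]  # the linear term
    for j in range(K - 2):
        # Standard restriction: tail cubics cancel so the basis is linear
        # below the first knot and above the last knot.
        term = (pos3(x - k[j])
                - pos3(x - k[K - 2]) * (k[K - 1] - k[j]) / (k[K - 1] - k[K - 2])
                + pos3(x - k[K - 1]) * (k[K - 2] - k[j]) / (k[K - 1] - k[K - 2]))
        cols.append(term)
    return np.column_stack(cols)

days = np.arange(0, 121)
B = rcs_basis(days, [40, 55, 70])  # shape (121, 2): linear + one spline term
```

Columns of `B` would then enter a Cox or logistic model in place of the raw interval length, letting the hazard bend at the knots while remaining linear in the tails.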
TaDb: A time-aware diffusion-based recommender algorithm
NASA Astrophysics Data System (ADS)
Li, Wen-Jun; Xu, Yuan-Yuan; Dong, Qiang; Zhou, Jun-Lin; Fu, Yan
2015-02-01
Traditional recommender algorithms usually employ the early and recent records indiscriminately, which overlooks the change of user interests over time. In this paper, we show that the interests of a user remain stable in a short-term interval and drift during a long-term period. Based on this observation, we propose a time-aware diffusion-based (TaDb) recommender algorithm, which assigns different temporal weights to the leading links existing before the target user's collection and the following links appearing after that in the diffusion process. Experiments on four real datasets, Netflix, MovieLens, FriendFeed and Delicious show that TaDb algorithm significantly improves the prediction accuracy compared with the algorithms not considering temporal effects.
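The idea of temporally weighted diffusion can be sketched compactly. The following is a toy mass-diffusion (ProbS-style) scorer with an exponential recency weight on links; the weighting form, decay rate, and data are illustrative assumptions, not the TaDb paper's exact scheme (which distinguishes leading and following links relative to each collection event).

```python
import numpy as np

# Toy user-item bipartite data: A[u, i] = 1 if user u collected item i,
# T[u, i] = collection timestamp (0 where there is no link).
A = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]], dtype=float)
T = np.array([[1, 5, 0, 0],
              [0, 2, 6, 0],
              [0, 0, 3, 7]], dtype=float)

def temporal_diffusion_scores(A, T, target, t_now=8.0, lam=0.2):
    # Recency weight on each link: recent collections carry more resource.
    W = np.where(A > 0, np.exp(-lam * (t_now - T)), 0.0)
    k_item = np.where(W.sum(axis=0) > 0, W.sum(axis=0), 1.0)
    k_user = np.where(W.sum(axis=1) > 0, W.sum(axis=1), 1.0)
    f0 = A[target]                      # unit resource on the target's items
    u = (W / k_item) @ f0               # item -> user diffusion step
    scores = W.T @ (u / k_user)         # user -> item diffusion step
    scores[A[target] > 0] = -np.inf     # never re-recommend collected items
    return scores

scores = temporal_diffusion_scores(A, T, target=0)
```

Setting `lam=0` recovers plain, time-blind mass diffusion, so the temporal effect can be isolated by varying the decay rate.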
Hibberd, Daryl L; Jamson, Samantha L; Carsten, Oliver M J
2013-01-01
Modern driving involves frequent and potentially detrimental interactions with distracting in-vehicle tasks. Distraction has been shown to slow brake reaction time and decrease lateral and longitudinal vehicle control. It is likely that these negative effects will become more prevalent in the future as advances are made in the functionality, availability, and number of in-vehicle systems. This paper addresses this problem by considering ways to manage in-vehicle task presentation to mitigate their distracting effects. A driving simulator experiment using 48 participants was performed to investigate the existence of the Psychological Refractory Period in the driving context and its effect on braking performance. Drivers were exposed to lead vehicle braking events in isolation (single-task) and with a preceding surrogate in-vehicle task (dual-task). In dual-task scenarios, the time interval between the in-vehicle and braking tasks was manipulated. Brake reaction time increased when drivers were distracted. The in-vehicle task interfered with the performance of the braking task in a manner that was dependent on the interval between the two tasks, with slower reactions following a shorter inter-task interval. This is the Psychological Refractory Period effect. These results have implications for driver safety during in-vehicle distraction. The findings are used to develop recommendations regarding the timing of in-vehicle task presentation so as to reduce their potentially damaging effects on braking performance. In future, these guidelines could be incorporated into a driver workload management system to minimise the opportunity for a driver to be distracted from the ongoing driving task. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Robinson, John E.
2014-01-01
The Federal Aviation Administration's Next Generation Air Transportation System will combine advanced air traffic management technologies, performance-based procedures, and state-of-the-art avionics to maintain efficient operations throughout the entire arrival phase of flight. Flight deck Interval Management (FIM) operations are expected to use sophisticated airborne spacing capabilities to meet precise in-trail spacing from top-of-descent to touchdown. Recent human-in-the-loop simulations by the National Aeronautics and Space Administration have found that selection of the assigned spacing goal using the runway schedule can lead to premature interruptions of the FIM operation during periods of high traffic demand. This study compares three methods for calculating the assigned spacing goal for a FIM operation that is also subject to time-based metering constraints. The particular paradigms investigated include: one based upon the desired runway spacing interval, one based upon the desired meter fix spacing interval, and a composite method that combines both intervals. These three paradigms are evaluated for the primary arrival procedures to Phoenix Sky Harbor International Airport using the entire set of Rapid Update Cycle wind forecasts from 2011. For typical meter fix and runway spacing intervals, the runway- and meter fix-based paradigms exhibit moderate FIM interruption rates due to their inability to consider multiple metering constraints. The addition of larger separation buffers decreases the FIM interruption rate but also significantly reduces the achievable runway throughput. The composite paradigm causes no FIM interruptions, and maintains higher runway throughput more often than the other paradigms. A key implication of the results with respect to time-based metering is that FIM operations using a single assigned spacing goal will not allow reduction of the arrival schedule's excess spacing buffer. 
Alternative solutions for conducting the FIM operation in a manner more compatible with the arrival schedule are discussed in detail.
Prevalence of dry eye syndrome in an adult population.
Hashemi, Hassan; Khabazkhoob, Mehdi; Kheirkhah, Ahmad; Emamian, Mohammad Hassan; Mehravaran, Shiva; Shariati, Mohammad; Fotouhi, Akbar
2014-04-01
To determine the prevalence of dry eye syndrome in the general 40- to 64-year-old population of Shahroud, Iran. Population-based cross-sectional study. Through cluster sampling, 6311 people were selected and 5190 participated. Assessment of dry eye was done in a random subsample of 1008 people. Subjective assessment for dry eye syndrome was performed using the Ocular Surface Disease Index questionnaire. In addition, the following objective tests of dry eye syndrome were employed: Schirmer test, tear break-up time, and fluorescein and Rose Bengal staining using the Oxford grading scheme. Those with an Ocular Surface Disease Index score ≥23 were considered symptomatic, and dry eye syndrome was defined as having symptoms and at least one positive objective sign. The prevalence of dry eye syndrome was 8.7% (95% confidence interval 6.9-10.6). Assessment of signs showed an abnormal Schirmer score in 17.8% (95% confidence interval 15.5-20.0), abnormal tear break-up time in 34.2% (95% confidence interval 29.5-38.8), corneal fluorescein staining (≥1) in 11.3% (95% confidence interval 8.5-14.1) and Rose Bengal staining (≥3 for cornea and/or conjunctiva) in 4.9% (95% confidence interval 3.4-6.5). According to the Ocular Surface Disease Index scores, 18.3% (95% confidence interval 15.9-20.6) had dry eye syndrome symptoms. The prevalence of dry eye syndrome was significantly higher in women (P = 0.010) and not significantly associated with age (P = 0.291). The objective dry eye syndrome signs significantly increased with age. Based on the findings, the prevalence of dry eye syndrome in the studied population is in the mid-range. The prevalence is higher in women. Also, objective tests tend to turn abnormal at older ages. Pterygium is associated with dry eye syndrome and increases its symptoms. © 2013 Royal Australian and New Zealand College of Ophthalmologists.
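The confidence intervals quoted above follow from the standard normal approximation to a binomial proportion. A quick sketch; the case count of 88 out of n = 1008 is a back-calculated assumption from the reported 8.7%, not a figure taken from the paper.

```python
import math

def prevalence_ci(cases, n, z=1.96):
    """Normal-approximation 95% CI for a binomial proportion."""
    p = cases / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion
    return p, p - z * se, p + z * se

# Roughly reproduces the reported 8.7% (95% CI 6.9-10.6) for the subsample
p, lo, hi = prevalence_ci(88, 1008)
```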
Ratio-based lengths of intervals to improve fuzzy time series forecasting.
Huarng, Kunhuang; Yu, Tiffany Hui-Kuang
2006-04-01
The objective of this study is to explore ways of determining the useful lengths of intervals in fuzzy time series. It is suggested that ratios, instead of equal lengths of intervals, can more properly represent the intervals among observations. Ratio-based lengths of intervals are, therefore, proposed to improve fuzzy time series forecasting. Algebraic growth data, such as enrollments and the stock index, and exponential growth data, such as inventory demand, are chosen as the forecasting targets, before forecasting based on the various lengths of intervals is performed. Furthermore, sensitivity analyses are also carried out for various percentiles. The ratio-based lengths of intervals are found to outperform the effective lengths of intervals, as well as the arbitrary ones in regard to the different statistical measures. The empirical analysis suggests that the ratio-based lengths of intervals can also be used to improve fuzzy time series forecasting.
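The core idea, cut points that grow multiplicatively rather than by a fixed step, can be sketched in a few lines. This is a simplified reading of the ratio-based scheme: the growth percentage and universe of discourse below are illustrative, and the paper's actual algorithm (with its percentile-based sensitivity analysis) is more elaborate.

```python
def ratio_intervals(lo, hi, p):
    """Cut points for [lo, hi] where each interval spans a fixed
    percentage p of its lower bound (requires lo > 0, p > 0)."""
    bounds = [lo]
    while bounds[-1] * (1 + p) < hi:
        bounds.append(bounds[-1] * (1 + p))
    bounds.append(hi)  # close the final (possibly shorter) interval
    return bounds

# e.g. enrollment-like data spanning 13000-20000 with 5% intervals
cuts = ratio_intervals(13000.0, 20000.0, 0.05)
```

Interval widths grow with the magnitude of the observations, so small values are fuzzified finely and large values coarsely, which is the stated advantage over equal-length partitions.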
Kim, Tae Kyung; Kim, Hyung Wook; Kim, Su Jin; Ha, Jong Kun; Jang, Hyung Ha; Hong, Young Mi; Park, Su Bum; Choi, Cheol Woong; Kang, Dae Hwan
2014-01-01
Background/Aims The quality of bowel preparation (QBP) is an important factor in performing a successful colonoscopy. Several factors influencing QBP have been reported; however, some factors, such as the optimal preparation-to-colonoscopy time interval, remain controversial. This study aimed to determine the factors influencing QBP and the optimal time interval for full-dose polyethylene glycol (PEG) preparation. Methods A total of 165 patients who underwent colonoscopy from June 2012 to August 2012 were prospectively evaluated. The QBP was assessed using the Ottawa Bowel Preparation Scale (Ottawa) score, and several factors influencing the QBP were analyzed. Results Colonoscopies with a time interval of 5 to 6 hours had the best Ottawa score in all parts of the colon. Patients with time intervals of 6 hours or less had better QBP than those with time intervals of more than 6 hours (p=0.046). In the multivariate analysis, the time interval (odds ratio, 1.897; 95% confidence interval, 1.006 to 3.577; p=0.048) was the only significant contributor to a satisfactory bowel preparation. Conclusions The optimal time interval was 5 to 6 hours for the full-dose PEG method, and the time interval was the only significant contributor to a satisfactory bowel preparation. PMID:25368750
NASA Astrophysics Data System (ADS)
Gupta, R. K.; Bhunia, A. K.; Roy, D.
2009-10-01
In this paper, we have considered the problem of constrained redundancy allocation of series system with interval valued reliability of components. For maximizing the overall system reliability under limited resource constraints, the problem is formulated as an unconstrained integer programming problem with interval coefficients by penalty function technique and solved by an advanced GA for integer variables with interval fitness function, tournament selection, uniform crossover, uniform mutation and elitism. As a special case, considering the lower and upper bounds of the interval valued reliabilities of the components to be the same, the corresponding problem has been solved. The model has been illustrated with some numerical examples and the results of the series redundancy allocation problem with fixed value of reliability of the components have been compared with the existing results available in the literature. Finally, sensitivity analyses have been shown graphically to study the stability of our developed GA with respect to the different GA parameters.
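Because the subsystem reliability 1 - (1 - r)^x is monotone in r, interval-valued component reliabilities propagate to an interval-valued system reliability simply by evaluating at the endpoints. A minimal sketch of that evaluation, with a penalty term for a resource constraint, follows; the component data, weights, and budget are invented for illustration, and the paper's GA operators (tournament selection, uniform crossover, etc.) are not reproduced.

```python
def interval_series_reliability(r_lo, r_hi, x):
    """System reliability interval for a series system whose i-th stage
    has x[i] parallel components with reliability in [r_lo[i], r_hi[i]]."""
    R_lo = R_hi = 1.0
    for lo, hi, n in zip(r_lo, r_hi, x):
        R_lo *= 1.0 - (1.0 - lo) ** n   # monotone: endpoints map to endpoints
        R_hi *= 1.0 - (1.0 - hi) ** n
    return R_lo, R_hi

def penalized_fitness(x, r_lo, r_hi, weights, budget, penalty=1.0):
    """Penalty-function fitness: score the lower-bound reliability,
    punishing violations of a resource (e.g. weight) constraint."""
    R_lo, _ = interval_series_reliability(r_lo, r_hi, x)
    excess = max(0.0, sum(w * n for w, n in zip(weights, x)) - budget)
    return R_lo - penalty * excess

r_lo, r_hi = [0.70, 0.80], [0.75, 0.85]
fit = penalized_fitness([2, 2], r_lo, r_hi, weights=[3.0, 4.0], budget=20.0)
```

A GA would evolve the integer redundancy vector `x` against this fitness; ranking by the interval rather than only its lower bound is where the paper's interval-order comparison comes in.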
Kim, Hea-Jung
2014-01-01
This paper considers a hierarchical screened Gaussian model (HSGM) for Bayesian inference of normal models when an interval constraint in the mean parameter space needs to be incorporated in the modeling but when such a restriction is uncertain. An objective measure of the uncertainty, regarding the interval constraint, accounted for by using the HSGM is proposed for the Bayesian inference. For this purpose, we derive a maximum entropy prior of the normal mean, eliciting the uncertainty regarding the interval constraint, and then obtain the uncertainty measure by considering the relationship between the maximum entropy prior and the marginal prior of the normal mean in the HSGM. A Bayesian estimation procedure for the HSGM is developed, and two numerical illustrations pertaining to the properties of the uncertainty measure are provided.
Interresponse Time Structures in Variable-Ratio and Variable-Interval Schedules
ERIC Educational Resources Information Center
Bowers, Matthew T.; Hill, Jade; Palya, William L.
2008-01-01
The interresponse-time structures of pigeon key pecking were examined under variable-ratio, variable-interval, and variable-interval plus linear feedback schedules. Whereas the variable-ratio and variable-interval plus linear feedback schedules generally resulted in a distinct group of short interresponse times and a broad distribution of longer…
van Bommel, Rob M G; Weber, Roy; Voogd, Adri C; Nederend, Joost; Louwman, Marieke W J; Venderink, Dick; Strobbe, Luc J A; Rutten, Matthieu J C; Plaisier, Menno L; Lohle, Paul N; Hooijen, Marianne J H; Tjan-Heijnen, Vivianne C G; Duijm, Lucien E M
2017-05-05
To determine the proportion of "true" interval cancers and tumor characteristics of interval breast cancers prior to, during and after the transition from screen-film mammography screening (SFM) to full-field digital mammography screening (FFDM). We included all women with interval cancers detected between January 2006 and January 2014. Breast imaging reports, biopsy results and breast surgery reports of all women recalled at screening mammography and of all women with interval breast cancers were collected. Two experienced screening radiologists reviewed the diagnostic mammograms, on which the interval cancers were diagnosed, as well as the prior screening mammograms and determined whether or not the interval cancer had been missed on the most recent screening mammogram. If not missed, the cancer was considered an occult ("true") interval cancer. A total of 442 interval cancers had been diagnosed, of which 144 at SFM with a prior SFM (SFM-SFM), 159 at FFDM with a prior SFM (FFDM-SFM) and 139 at FFDM with a prior FFDM (FFDM-FFDM). The transition from SFM to FFDM screening resulted in the diagnosis of more occult ("true") interval cancers at FFDM-SFM than at SFM-SFM (65.4% (104/159) versus 49.3% (71/144), P < 0.01), but this increase was no longer statistically significant in women who had been screened digitally for the second time (57.6% (80/139) at FFDM-FFDM versus 49.3% (71/144) at SFM-SFM). Tumor characteristics were comparable for the three interval cancer cohorts, except for a lower proportion of invasive ductal cancers at FFDM with a prior FFDM (67.2% at FFDM-FFDM versus 75.7% and 78.0%, P < 0.05). An increase in the proportion of occult interval cancers is observed during the transition from SFM to FFDM screening mammography. However, this increase seems temporary and is no longer detectable after the second round of digital screening.
Tumor characteristics and type of surgery are comparable for interval cancers detected prior to, during and after the transition from SFM to FFDM screening mammography, except for a lower proportion of invasive ductal cancers after the transition.
Riley, W D; Ibbotson, A T; Maxwell, D L; Davison, P I; Beaumont, W R C; Ives, M J
2014-10-01
The downstream migratory behaviour of wild Atlantic salmon Salmo salar smolts was monitored using passive integrated transponder (PIT) antennae systems over 10 years in the lower reaches of a small chalk stream in southern England, U.K. The timing of smolt movements and the likely occurrence of schooling were investigated and compared to previous studies. In nine of the 10 consecutive years of study, the observed diel downstream patterns of S. salar smolt migration appeared to be synchronized with the onset of darkness. The distribution of time intervals between successive nocturnal detections of PIT-tagged smolts was as expected if generated randomly from observed hourly rates. There were, however, significantly more short intervals than expected for smolts detected migrating during the day. For each year from 2006 to 2011, the observed 10th percentile of the daytime intervals was <4 s, compared to ≥55 s for the simulated random times, indicating greater incidence of groups of smolts. Groups with the shortest time intervals between successive PIT tag detections originated from numerous parr tagging sites (used as a proxy for relatedness). The results suggest that the ecological drivers influencing daily smolt movements in the lower reaches of chalk stream catchments are similar to those previously reported at the onset of migration for smolts leaving their natal tributaries; that smolts detected migrating during the night are moving independently following initiation by a common environmental factor (presumably darkness), whereas those detected migrating during the day often move in groups, and that such schools may not be site (kin)-structured. The importance of understanding smolt migratory behaviour is considered with reference to stock monitoring programmes and enhancing downstream passage past barriers. © 2014 Crown copyright. Journal of Fish Biology © 2014 The Fisheries Society of the British Isles.
Ho, K M
2014-05-01
Cardiac surgery is increasingly performed on elderly patients with multiple comorbid conditions, but the determinants of the relationship between cost and survival time after cardiac surgery for patients with a serious cardiac condition remain uncertain. Using the long-term outcome data of a cohort study on adult cardiac surgical patients, the relationship between cost and survival time after cardiac surgery from a hospital service perspective was determined. The total cost for each patient was estimated from the costs of the surgical procedures, intra-aortic balloon pump utilisation, operating theatre utilisation, blood products, intensive care unit stay and cumulative hospital stay up to a median follow-up time of 30 months. Of the 2131 patients considered in this study, a total cost >A$100,000 per life-year after cardiac surgery was observed in only 171 patients (8.0%, 95% confidence interval 6.9 to 9.3%). Age, Charlson Comorbidity Index and EuroSCORE were all related to the cost per life-year after cardiac surgery, but EuroSCORE (odds ratio 1.26 per score increment, 95% confidence interval 1.18 to 1.35, P=0.001) was, by far, the most important determinant and explained 32% of the variability in cost per life-year after cardiac surgery. Patients with a high EuroSCORE were associated with a substantially longer length of intensive care unit stay and cumulative hospital stay, as well as a shorter survival time after cardiac surgery, compared to patients with a lower EuroSCORE. Of all the subgroups of patients examined, only patients with a EuroSCORE >5 were consistently associated with a cost >A$100,000 per life-year (cost per life-year A$183,148, 95% confidence interval A$125,394 to A$240,902).
Hashemi Kamangar, Somayeh Sadat; Moradimanesh, Zahra; Mokhtari, Setareh; Bakouie, Fatemeh
2018-06-11
A developmental process can be described as changes through time within a complex dynamic system. The self-organized changes and emergent behaviour during development can be described and modeled as a dynamical system. We propose a dynamical system approach to answer the main question in human cognitive development, i.e., whether the changes during development happen continuously or in discontinuous stages. Within this approach there is a concept, the size of time-scales, which can be used to address the aforementioned question. We introduce a framework, built on the concept of time-scale, in which "fast" and "slow" are defined by the size of time-scales. According to our suggested model, the overall pattern of development can be seen as one continuous function, with different time-scales in different time intervals.
Sato, Takako; Zaitsu, Kei; Tsuboi, Kento; Nomura, Masakatsu; Kusano, Maiko; Shima, Noriaki; Abe, Shuntaro; Ishii, Akira; Tsuchihashi, Hitoshi; Suzuki, Koichi
2015-05-01
Estimation of postmortem interval (PMI) is an important goal in judicial autopsy. Although many approaches can estimate PMI through physical findings and biochemical tests, accurate PMI calculation by these conventional methods remains difficult because PMI is readily affected by surrounding conditions, such as ambient temperature and humidity. In this study, Sprague-Dawley (SD) rats (10 weeks) were sacrificed by suffocation, and blood was collected by dissection at various time intervals (0, 3, 6, 12, 24, and 48 h; n = 6) after death. A total of 70 endogenous metabolites were detected in plasma by gas chromatography-tandem mass spectrometry (GC-MS/MS). The groups for each time point were separated from one another on the principal component analysis (PCA) score plot, suggesting that the various endogenous metabolites changed with time after death. To build a PMI prediction model, a partial least squares (projection to latent structures, PLS) regression model was constructed using the levels of significantly different metabolites determined by variable importance in the projection (VIP) score and the Kruskal-Wallis test (P < 0.05). The constructed PLS regression model successfully predicted each PMI and was further validated with an independent validation set (n = 3). In conclusion, plasma metabolic profiling demonstrated its ability to successfully estimate PMI under given conditions. This result can be considered a first step toward using metabolomics methods in future forensic casework.
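The workflow sketched in this abstract, regressing a multivariate metabolite profile onto time since death, can be illustrated numerically. The snippet below uses synthetic data and principal-component regression as a simplified stand-in for PLS; the metabolite matrix, the drift model, and the three-component choice are all illustrative assumptions, not the paper's data or method.

```python
import numpy as np

rng = np.random.default_rng(0)
# 6 rats at each of the paper's sampling times (hours after death)
pmi = np.repeat([0.0, 3, 6, 12, 24, 48], 6)
# 70 "metabolites": 10 drift with time after death, the rest are noise
X = rng.normal(size=(pmi.size, 70))
X[:, :10] += np.log1p(pmi)[:, None] * rng.uniform(0.5, 1.5, size=10)

# Principal-component regression: project onto 3 latent components,
# then regress PMI on the scores (true PLS also uses y to pick components)
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, :3] * s[:3]
beta, *_ = np.linalg.lstsq(scores, pmi - pmi.mean(), rcond=None)
pred = scores @ beta + pmi.mean()
r = np.corrcoef(pred, pmi)[0, 1]   # in-sample agreement with true PMI
```

With the time-drifting metabolites dominating the leading component, the latent scores recover the PMI ordering well in this toy setting.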
Not All Prehospital Time is Equal: Influence of Scene Time on Mortality
Brown, Joshua B.; Rosengart, Matthew R.; Forsythe, Raquel M.; Reynolds, Benjamin R.; Gestring, Mark L.; Hallinan, William M.; Peitzman, Andrew B.; Billiar, Timothy R.; Sperry, Jason L.
2016-01-01
Background Trauma is time-sensitive and minimizing prehospital (PH) time is appealing. However, most studies have not linked increasing PH time with worse outcomes, as raw PH times are highly variable. It is unclear whether specific PH time patterns affect outcomes. Our objective was to evaluate the association of PH time interval distribution with mortality. Methods Patients transported by EMS in the Pennsylvania trauma registry 2000-2013 with total prehospital time (TPT) ≥20 min were included. TPT was divided into three PH time intervals: response, scene, and transport time. The number of minutes in each PH time interval was divided by TPT to determine the relative proportion each interval contributed to TPT. A prolonged interval was defined as any one PH interval contributing ≥50% of TPT. Patients were classified by prolonged PH interval or no prolonged PH interval (all intervals <50% of TPT). Patients were matched for TPT, and conditional logistic regression was used to determine the association of mortality with PH time pattern, controlling for confounders. PH interventions were explored as potential mediators, and prehospital triage criteria were used to identify patients with time-sensitive injuries. Results There were 164,471 patients included. Patients with prolonged scene time had increased odds of mortality (OR 1.21; 95%CI 1.02–1.44, p=0.03). Prolonged response, transport, and no prolonged interval were not associated with mortality. When adjusting for mediators including extrication and PH intubation, prolonged scene time was no longer associated with mortality (OR 1.06; 0.90–1.25, p=0.50). Together these factors mediated 61% of the effect between prolonged scene time and mortality. Mortality remained associated with prolonged scene time in patients with hypotension, penetrating injury, and flail chest. Conclusions Prolonged scene time is associated with increased mortality. PH interventions partially mediate this association.
Further study should evaluate whether these interventions drive increased mortality because they prolong scene time or by another mechanism, as reducing scene time may be a target for intervention. Level of Evidence IV, prognostic study PMID:26886000
Panek, Petr; Prochazka, Ivan
2007-09-01
This article deals with a time interval measurement device based on a surface acoustic wave (SAW) filter as a time interpolator. The operating principle rests on the fact that a transversal SAW filter excited by a short pulse generates a finite signal whose spectral content is highly suppressed outside a narrow frequency band. If the responses to two excitations are sampled at clock ticks, they can be precisely reconstructed from a finite number of samples and then compared so as to determine the time interval between the two excitations. We have designed and constructed a two-channel time interval measurement device which allows independent timing of two events and evaluation of the time interval between them. The device has been constructed using commercially available components. The experimental results proved the concept. We have assessed a single-shot time interval measurement precision of 1.3 ps rms, corresponding to a time-of-arrival precision of 0.9 ps rms in each channel. The temperature drift of the measured time interval is lower than 0.5 ps/K, and the long-term stability is better than +/-0.2 ps/h. These are, to our knowledge, the best values reported for a time interval measurement device. The results are in good agreement with the error budget based on the theoretical analysis.
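The interpolation idea here, reconstructing a band-limited response from clock-tick samples and comparing two channels to obtain a sub-sample time interval, can be illustrated with a toy numeric experiment. This is our simplification, not the SAW-filter hardware: Gaussian pulses stand in for the filter responses, and parabolic interpolation of the cross-correlation peak recovers a delay far below the sample period.

```python
import numpy as np

n = np.arange(200, dtype=float)   # sample index (one clock tick apart)
sigma = 3.0                       # pulse width in samples (illustrative)
d_true = 5.37                     # true inter-channel delay, in samples

a = np.exp(-0.5 * ((n - 80) / sigma) ** 2)           # channel 1 response
b = np.exp(-0.5 * ((n - 80 - d_true) / sigma) ** 2)  # channel 2, delayed copy

c = np.correlate(b, a, mode="full")
k = int(np.argmax(c))
# Parabolic interpolation around the correlation peak yields the
# fractional-sample part of the delay
frac = 0.5 * (c[k - 1] - c[k + 1]) / (c[k - 1] - 2 * c[k] + c[k + 1])
d_est = (k - (len(a) - 1)) + frac
```

Even with samples one full clock period apart, the estimated delay lands within a few hundredths of a sample of the true value, which is the essence of interpolating below the clock resolution.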
Interval Timing Accuracy and Scalar Timing in C57BL/6 Mice
Buhusi, Catalin V.; Aziz, Dyana; Winslow, David; Carter, Rickey E.; Swearingen, Joshua E.; Buhusi, Mona C.
2010-01-01
In many species, interval timing behavior is accurate (estimated durations are appropriate) and scalar (errors vary linearly with estimated durations). While accuracy has been previously examined, scalar timing has not yet been clearly demonstrated in house mice (Mus musculus), raising concerns about mouse models of human disease. We estimated timing accuracy and precision in C57BL/6 mice, the most widely used background strain for genetic models of human disease, in a peak-interval procedure with multiple intervals. Whether timing two intervals (Experiment 1) or three intervals (Experiment 2), C57BL/6 mice demonstrated varying degrees of timing accuracy. Importantly, at both the individual and group level, their precision varied linearly with the subjective estimated duration. Further evidence for scalar timing was obtained using an intraclass correlation statistic. This is the first report of consistent, reliable scalar timing in a sizable sample of house mice, thus validating the PI procedure as a valuable technique, the intraclass correlation statistic as a powerful test of the scalar property, and the C57BL/6 strain as a suitable background for behavioral investigations of genetically engineered mice modeling disorders of interval timing. PMID:19824777
Kowalik, Grzegorz T; Knight, Daniel S; Steeden, Jennifer A; Tann, Oliver; Odille, Freddy; Atkinson, David; Taylor, Andrew; Muthurangu, Vivek
2015-02-01
To develop a real-time phase contrast MR sequence with high enough temporal resolution to assess cardiac time intervals. The sequence utilized spiral trajectories with an acquisition strategy that allowed a combination of temporal encoding (unaliasing by Fourier-encoding the overlaps using the temporal dimension; UNFOLD) and parallel imaging (sensitivity encoding; SENSE) to be used (UNFOLDed-SENSE). An in silico experiment was performed to determine the optimum UNFOLD filter. In vitro experiments were carried out to validate the accuracy of time interval calculation and peak mean velocity quantification. In addition, 15 healthy volunteers were imaged with the new sequence, and cardiac time intervals were compared to reference standard Doppler echocardiography measures. For comparison, in silico, in vitro, and in vivo experiments were also carried out using sliding window reconstructions. The in vitro experiments demonstrated good agreement between real-time spiral UNFOLDed-SENSE phase contrast MR and the reference standard measurements of velocity and time intervals. The protocol was successfully performed in all volunteers. Subsequent measurement of time intervals produced values in keeping with literature values and in good agreement with gold standard echocardiography. Importantly, the proposed UNFOLDed-SENSE sequence outperformed the sliding window reconstructions. Cardiac time intervals can be successfully assessed with UNFOLDed-SENSE real-time spiral phase contrast. Real-time MR assessment of cardiac time intervals may be beneficial in the assessment of patients with cardiac conditions such as diastolic dysfunction. © 2014 Wiley Periodicals, Inc.
Integrate-and-fire neurons driven by asymmetric dichotomous noise.
Droste, Felix; Lindner, Benjamin
2014-12-01
We consider a general integrate-and-fire (IF) neuron driven by asymmetric dichotomous noise. In contrast to the Gaussian white noise usually used in the so-called diffusion approximation, this noise is colored, i.e., it exhibits temporal correlations. We give an analytical expression for the stationary voltage distribution of a neuron receiving such noise and derive recursive relations for the moments of the first passage time density, which allow us to calculate the firing rate and the coefficient of variation of interspike intervals. We study how correlations in the input affect the rate and regularity of firing under variation of the model's parameters for leaky and quadratic IF neurons. Further, we consider the limit of small correlation times and find lowest order corrections to the first passage time moments to be proportional to the square root of the correlation time. We show analytically that to this lowest order, correlations always lead to a decrease in firing rate for a leaky IF neuron. All theoretical expressions are compared to simulations of leaky and quadratic IF neurons.
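The model described here is easy to simulate directly: a leaky IF neuron driven by two-state (telegraph) noise with asymmetric amplitudes, from which the firing rate and the interspike-interval coefficient of variation can be measured. All parameter values below are illustrative assumptions; the paper's analytical expressions are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, tau = 1e-4, 1.0                # time step and membrane time constant
kp = km = 20.0                     # switching rates of the telegraph noise
s_plus, s_minus = 1.5, -0.5        # asymmetric noise amplitudes
mu, v_th, v_reset = 0.8, 1.0, 0.0  # base drive, threshold, reset

v, eta, t, spikes = 0.0, s_plus, 0.0, []
for _ in range(2_000_000):         # 200 time units of simulated activity
    if rng.random() < (kp if eta == s_plus else km) * dt:
        eta = s_minus if eta == s_plus else s_plus   # noise switches state
    v += dt * (-v + mu + eta) / tau                  # leaky integration
    t += dt
    if v >= v_th:                                    # threshold crossing
        spikes.append(t)
        v = v_reset

isi = np.diff(spikes)
rate = 1.0 / isi.mean()            # firing rate
cv = isi.std() / isi.mean()        # ISI coefficient of variation
```

Sweeping the switching rates against the membrane time constant is how one would probe, numerically, the effect of input correlations on rate and regularity that the paper treats analytically.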
VARIABLE TIME-INTERVAL GENERATOR
Gross, J.E.
1959-10-31
This patent relates to a pulse generator and more particularly to a time interval generator wherein the time interval between pulses is precisely determined. The variable time generator comprises two oscillators with one having a variable frequency output and the other a fixed frequency output. A frequency divider is connected to the variable oscillator for dividing its frequency by a selected factor and a counter is used for counting the periods of the fixed oscillator occurring during a cycle of the divided frequency of the variable oscillator. This defines the period of the variable oscillator in terms of that of the fixed oscillator. A circuit is provided for selecting as a time interval a predetermined number of periods of the variable oscillator. The output of the generator consists of a first pulse produced by a trigger circuit at the start of the time interval and a second pulse marking the end of the time interval produced by the same trigger circuit.
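The counting scheme in the claim can be sketched arithmetically. With illustrative frequencies, the variable oscillator's period is defined in terms of the fixed oscillator by counting fixed-clock periods during one cycle of the divided-down variable frequency; the quantization error shrinks as the divider factor grows.

```python
f_fixed = 1_000_000.0   # fixed oscillator (Hz), illustrative
f_var = 123_456.0       # variable oscillator (Hz), illustrative
divider = 64            # frequency divider factor

# Fixed-oscillator periods counted during one divided cycle of the
# variable oscillator (an integer count in real hardware)
count = round(f_fixed * divider / f_var)

# Variable-oscillator period expressed via the fixed clock
period_var = count / (divider * f_fixed)

# A generated time interval of N variable-oscillator periods,
# bracketed by the trigger circuit's start and stop pulses
N = 1000
interval = N * period_var
```

The count quantizes the measurement to half a fixed-clock period spread over the divider factor, which is why dividing the variable frequency before counting improves the definition of the interval.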
NASA Astrophysics Data System (ADS)
Jha, Mayank Shekhar; Dauphin-Tanguy, G.; Ould-Bouamama, B.
2016-06-01
The paper's main objective is to address the problem of health monitoring of system parameters in the Bond Graph (BG) modeling framework, by exploiting its structural and causal properties. The system, in a feedback control loop, is considered globally uncertain. Parametric uncertainty is modeled in interval form. The system parameter undergoing degradation (the prognostic candidate) has a degradation model that is assumed to be known a priori. The detection of degradation commencement is done in a passive manner involving interval-valued robust adaptive thresholds over the nominal part of the uncertain BG-derived interval-valued analytical redundancy relations (I-ARRs). The latter forms an efficient diagnostic module. The prognostics problem is cast as a joint state-parameter estimation problem, a hybrid prognostic approach, wherein the fault model is constructed by considering the statistical degradation model of the system parameter (the prognostic candidate). The observation equation is constructed from the nominal part of the I-ARR. Using particle filter (PF) algorithms, the estimation of the state of health (the state of the prognostic candidate) and the associated hidden time-varying degradation progression parameters is achieved in probabilistic terms. A simplified variance adaptation scheme is proposed. Associated uncertainties which arise from noisy measurements, the parametric degradation process, environmental conditions etc. are effectively managed by the PF. This allows the production of effective predictions of the remaining useful life of the prognostic candidate with suitable confidence bounds. The effectiveness of the novel methodology is demonstrated through simulations and experiments on a mechatronic system.
Responses evoked from man by acoustic stimulation
NASA Technical Reports Server (NTRS)
Galambos, R.; Hecox, K.; Picton, T.
1974-01-01
Clicks and other acoustic stimuli evoke time-locked responses from the brain of man. The properties of the waves recordable within the interval from 1 to 10 msec after the stimuli strike the eardrum are discussed along with factors influencing the waves in the 100 to 500 msec epoch. So-called brainstem responses from a normal young adult are considered. No waves were observed for clicks too weak to be heard. With increasing stimulus strength the waves become larger in amplitude and their latency shortens.
Parametric synthesis of a robust controller on a base of mathematical programming method
NASA Astrophysics Data System (ADS)
Khozhaev, I. V.; Gayvoronskiy, S. A.; Ezangina, T. A.
2018-05-01
This paper is dedicated to deriving, on the basis of the mathematical programming method, sufficient conditions linking root indices of robust control quality with the coefficients of an interval characteristic polynomial. On the basis of these conditions, a method was developed for synthesizing PI- and PID-controllers that provide an aperiodic transient process with an acceptable stability degree and, consequently, an acceptable settling time. The method was applied to the problem of synthesizing a controller for the depth control system of an unmanned underwater vehicle.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McJunkin, Timothy; Epiney, Aaron; Rabiti, Cristian
2017-06-01
This report provides a summary of the effort in the Nuclear-Renewable Hybrid Energy System (N-R HES) project on the level 4 milestone: considering the integration of existing electric grid models, which resolve shorter time intervals, into the optimization factors used by the Risk Analysis Virtual Environment (RAVEN) and Modelica [1] optimizations and economic analysis that have been the focus of the project to date.
Rankin, Nicole M; York, Sarah; Stone, Emily; Barnes, David; McGregor, Deborah; Lai, Michelle; Shaw, Tim; Butow, Phyllis N
2017-05-01
Pathways to lung cancer diagnosis and treatment are complex. International evidence shows significant variations in pathways. Qualitative research investigating pathways to lung cancer diagnosis rarely considers both patient and general practitioner views simultaneously. To describe the lung cancer diagnostic pathway, focusing on the perspective of patients and general practitioners about diagnostic and pretreatment intervals. This qualitative study of patients with lung cancer and general practitioners in Australia used qualitative interviews or a focus group in which participants responded to a semistructured questionnaire designed to explore experiences of the diagnostic pathway. The Model of Pathways to Treatment (the Model) was used as a framework for analysis, with data organized into (1) events, (2) processes, and (3) contributing factors for variations in diagnostic and pretreatment intervals. Thirty participants (19 patients with lung cancer and 11 general practitioners) took part. Nine themes were identified during analysis. For the diagnostic interval, these were: (1) taking patient concerns seriously, (2) a sense of urgency, (3) advocacy that is doctor-driven or self-motivated, and (4) referral: "knowing who to refer to." For the pretreatment interval, themes were: (5) uncertainty, (6) psychosocial support for the patient and family before treatment, and (7) communication among the multidisciplinary team and general practitioners. Two cross-cutting themes were: (8) coordination of care and "handing over" the patient, and (9) general practitioner knowledge about lung cancer. Events were perceived as complex, with diagnosis often being revealed over time, rather than as a single event. Contributing factors at patient, system, and disease levels are described for both intervals. Patients and general practitioners expressed similar themes across the diagnostic and pretreatment intervals. 
Significant improvements could be made to health systems to facilitate better patient and general practitioner experiences of the diagnostic pathway. This novel presentation of patient and general practitioner perspectives indicates that systemic interventions have a role in timely and appropriate referrals to specialist care and coordination of investigations. Systemic interventions may alleviate concerns about urgency of diagnostic workup, communication, and coordination of care as patients transition from primary to specialist care.
Relative frequencies of seismic main shocks after strong shocks in Italy
NASA Astrophysics Data System (ADS)
Gasperini, Paolo; Lolli, Barbara; Vannucci, Gianfranco
2016-10-01
We analysed a catalogue of Italian earthquakes, covering 55 yr of data from 1960 to 2014 with magnitudes homogeneously converted to Mw, to compute the time-dependent relative frequencies with which strong seismic shocks (4.0 ≤ Mw < 5.0), widely felt by the population, have been followed by main shocks (Mw ≥ 5.0) that threatened the health and property of the persons living in the epicentral area. Assuming the stationarity of the seismic release properties, such frequencies are estimates of the probabilities of potentially destructive shocks after the occurrence of future strong shocks. We compared them with the time-independent relative frequencies of random occurrence in terms of the frequency gain, i.e. the ratio between the time-dependent and time-independent relative frequencies. The time-dependent relative frequencies vary from less than 1 per cent to about 20 per cent, depending on the magnitudes of the shocks and the time windows considered (ranging from minutes to years). They remain almost constant for a few hours after the strong shock and then decrease with time logarithmically. Strong earthquakes (with Mw ≥ 6.0) mainly occurred within two or three months of the strong shock. The frequency gains vary from about 10 000 for very short time intervals to less than 10 for a time interval of 2 yr. Only about 1/3 of main shocks were preceded by at least one strong shock in the previous day, and about 1/2 in the previous month.
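The frequency-gain statistic used here is simply the ratio of a conditional occurrence frequency to the unconditional (Poisson) one. A toy computation on a synthetic catalogue, our own construction with artificial clustering so the gain is visibly above 1, not the Italian data:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 55 * 365.25                         # catalogue span in days (55 yr, as here)
strong = np.sort(rng.uniform(0, T, 400))          # strong shocks (synthetic)
# Background main shocks plus some triggered within a day of a strong shock
main = np.sort(np.concatenate([
    rng.uniform(0, T, 60),
    rng.choice(strong, 30, replace=False) + rng.uniform(0, 1.0, 30),
]))

dt = 1.0                                # one-day window after each strong shock
followed = np.array([np.any((main > s) & (main <= s + dt)) for s in strong])
p_cond = followed.mean()                       # time-dependent relative frequency
p_rand = 1.0 - np.exp(-len(main) / T * dt)     # Poisson chance in the same window
gain = p_cond / p_rand                         # frequency gain
```

Shrinking or growing `dt` in this toy reproduces the qualitative behaviour reported: large gains for short windows, decaying toward 1 as the window lengthens.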
Krall, Scott P; Cornelius, Angela P; Addison, J Bruce
2014-03-01
Our aim was to analyze the correlation between the many different emergency department (ED) treatment metric intervals and determine whether the metrics directly impacted by the physician correlate with the "door to room" interval in an ED (an interval determined by ED bed availability). Our null hypothesis was that the cause of the variation in delay to receiving a room was multifactorial and does not correlate with any one metric interval. We collected daily interval averages from the ED information system, Meditech©. Patient flow metrics were collected on a 24-hour basis. We analyzed the relationship between the time intervals that make up an ED visit and the "arrival to room" interval using simple correlation (Pearson correlation coefficients). Summary statistics of industry standard metrics were also computed by dividing the intervals into 2 groups, based on the average ED length of stay (LOS) from the National Hospital Ambulatory Medical Care Survey: 2008 Emergency Department Summary. Simple correlation analysis showed that the doctor-to-discharge interval had no correlation with the "door to room" (waiting room time) interval (correlation coefficient (CC) = 0.000, p=0.96). "Room to doctor" had a low correlation with "door to room" (CC=0.143), while "decision to admitted patients departing the ED time" had a moderate correlation of 0.29 (p<0.001). "New arrivals" (daily patient census) had a strong correlation with longer "door to room" times (0.657, p<0.001). The "door to discharge" times had a very strong correlation (CC=0.804, p<0.001) with the extended "door to room" time. Physician-dependent intervals had minimal correlation with the variation in arrival-to-room time. The "door to room" interval was a significant component of the variation in "door to discharge", i.e. LOS. The hospital-influenced "admit decision to hospital bed" interval, i.e. hospital inpatient capacity, had a correlation with delayed "door to room" time.
The other major factor affecting department bed availability was the total number of patients per day; its correlation with increasing "door to room" time also reflects the effect of the availability of ED resources (beds) on the patient evaluation time. The time that it took for a patient to receive a room appeared more dependent on system resources, such as beds in the ED as well as in the hospital, than on the physician.
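The core analysis, Pearson correlations between daily interval averages, can be sketched on synthetic data. The construction below (door-to-room driven by census, the physician-dependent interval independent) is our illustration of the reported pattern, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(3)
days = 365
census = rng.normal(120, 20, days)                    # daily "new arrivals"
# Door-to-room tracks census (bed availability); noise on top
door_to_room = 10 + 0.8 * (census - 120) + rng.normal(0, 5, days)
doc_to_discharge = rng.normal(90, 15, days)           # physician-dependent interval

cc_census = np.corrcoef(census, door_to_room)[0, 1]       # strong by construction
cc_doc = np.corrcoef(doc_to_discharge, door_to_room)[0, 1]  # near zero
```

With this structure the census correlation is high while the physician-dependent interval shows essentially no correlation, mirroring the study's conclusion that room delays trace to system resources rather than physician speed.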
A Very Simple Method to Calculate the (Positive) Largest Lyapunov Exponent Using Interval Extensions
NASA Astrophysics Data System (ADS)
Mendes, Eduardo M. A. M.; Nepomuceno, Erivelton G.
2016-12-01
In this letter, a very simple method to calculate the positive largest Lyapunov exponent (LLE) based on the concept of interval extensions and using the original equations of motion is presented. The exponent is estimated from the slope of the line derived from the lower-bound error when considering two interval extensions of the original system. It is shown that the algorithm is robust, fast and easy to implement and can be considered an alternative to other algorithms available in the literature. The method has been successfully tested in five well-known systems: the logistic and Hénon maps, the Lorenz and Rössler equations, and the Mackey-Glass system.
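The method's core trick, iterating two mathematically equivalent "interval extensions" of the same map and reading the LLE off the slope of the lower-bound error growth, is easy to demonstrate on the logistic map (LLE = ln 2 ≈ 0.693 at r = 4). This is our rough numeric illustration, not the authors' code; the fitting window is an assumption.

```python
import numpy as np

r, x0 = 4.0, 0.1
xa = xb = x0
ks, logd = [], []
for k in range(100):
    xa = r * xa * (1 - xa)       # one interval extension of the map
    xb = r * xb - r * xb * xb    # a mathematically equivalent extension
    d = abs(xa - xb)             # lower-bound error between the two orbits
    if 1e-15 < d < 1e-3:         # window between round-off floor and saturation
        ks.append(k)
        logd.append(np.log(d))

slope = np.polyfit(ks, logd, 1)[0]   # ≈ largest Lyapunov exponent
```

The two forms round differently in floating point, so their orbits separate at a rate set by the LLE until the difference saturates; no Jacobian or tangent-space integration is needed, which is the appeal of the approach.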
Intact interval timing in circadian CLOCK mutants.
Cordes, Sara; Gallistel, C R
2008-08-28
While progress has been made in determining the molecular basis for the circadian clock, the mechanism by which mammalian brains time intervals measured in seconds to minutes remains a mystery. An obvious question is whether the interval-timing mechanism shares molecular machinery with the circadian timing mechanism. In the current study, we trained circadian CLOCK +/- and -/- mutant male mice in a peak-interval procedure with 10-s and 20-s criteria. The mutant mice were more active than their wild-type littermates, but there were no reliable deficits in the accuracy or precision of their timing as compared with wild-type littermates. This suggests that expression of the CLOCK protein is not necessary for normal interval timing.
Provider attitudes toward STARPAHC: a telemedicine project on the Papago reservation.
Fuchs, M
1979-01-01
Space Technology Applied to Rural Papago Advanced Health Care (STARPAHC) is a large-scale telemedicine project, sponsored jointly by the Indian Health Service (IHS), NASA, and the Papago tribe, that has been in operation on the Papago Indian Reservation outside Tucson, Arizona, for the past two years. STARPAHC uses a mobile health unit (MHU), staffed by non-M.D. providers and linked by two-way television, radio, and remote telemetry to an IHS hospital up to 100 miles away, to make medical care available in remote areas of the reservation. Over a two-year period beginning in January 1975, 47 individual providers, including 21 physicians, were interviewed at five intervals to determine their receptivity to and acceptance of telemedicine; because of staff turnover, not all providers were interviewed at each interval. The data suggest that television equipment was considered costly and in some cases inconvenient by M.D. providers; it was not always considered essential for providers to be able to diagnose and treat patients. The major problems providers cited were the unreliability of equipment and the time required for television consultations. The major benefit cited was improved access to health care for a population not previously receiving such care near their homes. Non-M.D. providers considered the link they were provided to physicians via television and voice communications from remote areas to be a major benefit.
Wang, Qiuying; Guo, Zheng; Sun, Zhiguo; Cui, Xufei; Liu, Kaiyue
2018-01-01
Pedestrian-positioning technology based on the foot-mounted micro inertial measurement unit (MIMU) plays an important role in the field of indoor navigation and has received extensive attention in recent years. However, the positioning accuracy of the inertial-based pedestrian-positioning method degrades rapidly because of the relatively low measurement accuracy of the sensors. The zero-velocity update (ZUPT) is an error correction method proposed to address the cumulative error: because the foot is regularly stationary during ordinary gait, these stationary intervals can be used to reduce the position error growth of the system. However, the traditional ZUPT performs poorly when pedestrians move faster, because the time of foot touchdown is short, which decreases the positioning accuracy. Considering these problems, a forward and reverse calculation method based on adaptive zero-velocity interval adjustment for the foot-mounted MIMU location method is proposed in this paper. To address the inaccuracy of the zero-velocity interval detector during fast pedestrian movement, where the contact time of the foot on the ground is short, an adaptive zero-velocity interval detection algorithm based on fuzzy logic reasoning is presented. In addition, to improve the effectiveness of the ZUPT algorithm, forward and reverse multiple solutions are presented. Finally, following the basic principles and derivation process of this method, the MTi-G710 produced by the XSENS company is used to complete the test. The experimental results verify the correctness and applicability of the proposed method. PMID:29883399
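The zero-velocity detection step at the heart of ZUPT can be sketched with a fixed-threshold energy detector on synthetic accelerometer data; the paper's contribution replaces exactly this fixed threshold with fuzzy-logic adaptation, which is not reproduced here. Signal shape, window length, and threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 100                                   # sample rate (Hz), illustrative
t = np.arange(0, 10, 1 / fs)
accel = rng.normal(0, 0.05, t.size)        # sensor noise while foot is flat
moving = (t % 1.0) < 0.6                   # swing phase: 60% of each 1 s gait cycle
accel[moving] += 3 * np.sin(2 * np.pi * 5 * t[moving])  # swing-phase dynamics

# Moving-window energy; low energy marks zero-velocity (stance) candidates
win = 10
energy = np.convolve(accel ** 2, np.ones(win) / win, mode="same")
still = energy < 0.1                       # fixed threshold; the paper adapts this
stance_fraction = still.mean()
```

At faster gaits the stance windows shrink and a fixed threshold starts missing them, which motivates adapting the detection interval as the paper proposes.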
Temporal coherence of phenological and climatic rhythmicity in Beijing
NASA Astrophysics Data System (ADS)
Chen, Xiaoqiu; Zhang, Weiqi; Ren, Shilong; Lang, Weiguang; Liang, Boyi; Liu, Guohua
2017-10-01
Using woody plant phenological data in the Beijing Botanical Garden from 1979 to 2013, we revealed three levels of phenology rhythms and examined their coherence with temperature rhythms. First, the sequential and correlative rhythm shows that occurrence dates of various phenological events obey a certain time sequence within a year and synchronously advance or are postponed among years. The positive correlation between spring phenophase dates is much stronger than that between autumn phenophase dates, and attenuates as the time interval between two spring phenophases increases. This phenological rhythm can be explained by the positive correlation between above-0 °C mean temperatures corresponding to different phenophase dates. Second, the circannual rhythm indicates that the recurrence interval of a phenophase in the same species in two adjacent years is about 365 days, which can be explained by the 365-day recurrence interval in the first and last dates of threshold temperatures. Moreover, an earlier phenophase date in the current year may lead to a later phenophase date in the next year through an extended recurrence interval. Thus, the sequential and correlative rhythm and the circannual rhythm of plant phenology interact, mirroring the interaction between seasonal variation and annual periodicity of temperature. Finally, the multi-year rhythm implies that phenophase dates display quasi-periodicity with periods longer than one year. The same 12-year periodicity in phenophase and threshold temperature dates confirmed that temperature controls the multi-year phenology rhythm. Our findings provide new perspectives for examining phenological responses to climate change and developing comprehensive phenology models that consider the temporal coherence of phenological and climatic rhythmicity.
Effects of a low-resistance, interval bicycling intervention in Parkinson's Disease.
Uygur, Mehmet; Bellumori, Maria; Knight, Christopher A
2017-12-01
Previous studies have shown that people with Parkinson's disease (PD) benefit from a variety of exercise modalities with respect to symptom management and function. Among the possible exercise modalities, speedwork has been identified as a promising strategy, with direct implications for the rate and amplitude of nervous system involvement. Considering that previous speed-based exercise for PD has often been equipment-, personnel- and/or facility-dependent, and often time intensive, our purpose was to develop a population-specific exercise program that could be self-administered with equipment readily found in fitness centers or perhaps the home. Fourteen individuals with PD (Hoehn-Yahr (H-Y) stage 3.0 or less) participated in twelve 30-min sessions of low-resistance interval training on a stationary recumbent bicycle. Scores on the motor examination section of the Unified Parkinson's Disease Rating Scale (UPDRS), 10-meter walk (10mW), timed-up-and-go (TUG), functional reach, four-square step test (4SST), nine-hole peg test (9HPT) and simple reaction time all exhibited significant improvements (p < 0.05). These results add further support to the practice of speedwork for people with PD and outline a population-amenable program with high feasibility.
Local Stable and Unstable Manifolds and Their Control in Nonautonomous Finite-Time Flows
NASA Astrophysics Data System (ADS)
Balasuriya, Sanjeeva
2016-08-01
It is well known that stable and unstable manifolds strongly influence fluid motion in unsteady flows. These emanate from hyperbolic trajectories, with the structures moving nonautonomously in time. The local directions of emanation at each instant in time are the focus of this article. Within a nearly autonomous setting, it is shown that these time-varying directions can be characterised through the accumulated effect of velocity shear. Connections to Oseledets spaces and projection operators in exponential dichotomies are established. Availability of data for both infinite- and finite-time intervals is considered. With microfluidic flow control in mind, a methodology is developed for manipulating these directions in any prescribed time-varying fashion by applying a local velocity shear. The results are verified for both smoothly and discontinuously time-varying directions using finite-time Lyapunov exponent fields, and excellent agreement is obtained.
Interval Estimation of Seismic Hazard Parameters
NASA Astrophysics Data System (ADS)
Orlecka-Sikora, Beata; Lasocki, Stanislaw
2017-03-01
The paper considers Poisson temporal occurrence of earthquakes and presents a way to integrate uncertainties of the estimates of mean activity rate and the magnitude cumulative distribution function into the interval estimation of the most widely used seismic hazard functions, such as the exceedance probability and the mean return period. The proposed algorithm can be used either when the Gutenberg-Richter model of magnitude distribution is accepted or when nonparametric estimation is in use. When the Gutenberg-Richter model of magnitude distribution is used, the interval estimation of its parameters is based on the asymptotic normality of the maximum likelihood estimator. When the nonparametric kernel estimation of magnitude distribution is used, we propose the iterated bias corrected and accelerated method for interval estimation based on the smoothed bootstrap and second-order bootstrap samples. The changes resulting from the integrated approach to the interval estimation of the seismic hazard functions, with respect to the approach that neglects the uncertainty of the mean activity rate estimates, have been studied using Monte Carlo simulations and two real dataset examples. The results indicate that the uncertainty of the mean activity rate significantly affects the interval estimates of the hazard functions only when the product of the activity rate and the time period for which the hazard is estimated is no more than 5.0. When this product becomes greater than 5.0, the impact of the uncertainty of the cumulative distribution function of magnitude dominates the impact of the uncertainty of the mean activity rate in the aggregated uncertainty of the hazard functions. Consequently, the interval estimates with and without inclusion of the uncertainty of the mean activity rate converge. The presented algorithm is generic and can also be applied to capture the propagation of the uncertainty of estimates that are parameters of a multiparameter function onto this function.
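The two hazard functions named here have closed forms under the Poisson plus Gutenberg-Richter assumptions. A point-estimate sketch follows (the interval-estimation machinery, asymptotic normality or bootstrap, is the paper's contribution and is not reproduced); the numeric parameter values are illustrative.

```python
import numpy as np

def exceedance_prob(rate, b, m, m_min, t):
    """P(at least one event with magnitude >= m within t years), for Poisson
    occurrences with unbounded Gutenberg-Richter magnitudes above m_min."""
    beta = b * np.log(10.0)                     # GR b-value in natural-log form
    lam_m = rate * np.exp(-beta * (m - m_min))  # activity rate above magnitude m
    return 1.0 - np.exp(-lam_m * t)

def mean_return_period(rate, b, m, m_min):
    beta = b * np.log(10.0)
    return 1.0 / (rate * np.exp(-beta * (m - m_min)))

# e.g. 10 events/yr above Mw 4.0 and b = 1: hazard of an Mw >= 6.0 event in 1 yr
p = exceedance_prob(rate=10.0, b=1.0, m=6.0, m_min=4.0, t=1.0)
rp = mean_return_period(rate=10.0, b=1.0, m=6.0, m_min=4.0)
```

Propagating interval estimates of `rate` and of the magnitude distribution through these functions is precisely where the paper's aggregated-uncertainty analysis operates.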
NASA Astrophysics Data System (ADS)
Kirkil, Gokhan; Constantinescu, George
2014-11-01
Large Eddy Simulation is used to investigate the structure of the laminar horseshoe vortex (HV) system and the dynamics of the necklace vortices as they fold around the base of a circular cylinder mounted on the flat bed of an open channel, for Reynolds numbers defined with the cylinder diameter, D, smaller than 4,460. The study concentrates on the analysis of the structure of the HV system in the periodic breakaway sub-regime, which is characterized by the formation of three main necklace vortices. For the relatively shallow flow conditions considered in this study (H/D ≈ 1, where H is the channel depth), at times the disturbances induced by the legs of the necklace vortices prevent the separated shear layers (SSLs) on the two sides of the cylinder from interacting in a way that allows the vorticity redistribution mechanism to form a new wake roller. As a result, the shedding of large-scale rollers in the turbulent wake is suppressed for relatively long periods of time. Simulation results show that the wake structure alternates randomly between time intervals when large-scale rollers form and are convected in the wake (von Karman regime) and time intervals when the rollers do not form.
NASA Astrophysics Data System (ADS)
Siswanto, A.; Kurniati, N.
2018-04-01
An oil and gas company operates 2,268 oil and gas wells. A Well Barrier Element (WBE) is installed in a well to protect people, prevent asset damage and minimize harm to the environment. The primary WBE component is the Surface Controlled Subsurface Safety Valve (SCSSV). The secondary WBE component is the Christmas tree, which consists of four valves: the Lower Master Valve (LMV), Upper Master Valve (UMV), Swab Valve (SV) and Wing Valve (WV). Current practice for the WBE Preventive Maintenance (PM) program follows the schedule suggested in the manual. The Corrective Maintenance (CM) program is conducted when a component fails unexpectedly. Both PM and CM incur cost and may cause production loss. This paper analyzes the failure data and reliability based on historical data. The optimal PM interval is determined in order to minimize the total cost of maintenance per unit time. The optimal PM interval for the SCSSV is 730 days, for the LMV 985 days, for the UMV 910 days, for the SV 900 days and for the WV 780 days. Averaged over all components, implementing the suggested intervals reduces maintenance cost by 52%, improves reliability by 4% and increases availability by 5%.
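A common way to formalize "minimize total maintenance cost per unit time" is the classical age-replacement model; the sketch below assumes Weibull failure times and a PM/CM cost ratio chosen purely for illustration, not the company's actual data or the paper's exact model.

```python
import math

def cost_rate(T, beta, eta, c_pm, c_cm, steps=2000):
    """Expected cost per unit time for age-based PM at interval T,
    with Weibull(beta, eta) time to failure:
    (c_pm * R(T) + c_cm * F(T)) / E[cycle length]."""
    R = lambda t: math.exp(-(t / eta) ** beta)
    # Expected cycle length: integral of R from 0 to T (trapezoid rule).
    h = T / steps
    cycle = sum(0.5 * (R(i * h) + R((i + 1) * h)) * h for i in range(steps))
    return (c_pm * R(T) + c_cm * (1.0 - R(T))) / cycle

# Hypothetical parameters: wear-out behaviour, CM much costlier than PM.
beta, eta = 2.5, 1200.0
c_pm, c_cm = 1.0, 10.0
# Grid search over candidate PM intervals (days).
best_T = min(range(100, 2001, 10),
             key=lambda T: cost_rate(T, beta, eta, c_pm, c_cm))
print(best_T, round(cost_rate(best_T, beta, eta, c_pm, c_cm), 5))
```

With a shape parameter above 1 (wear-out), the cost rate has an interior minimum; with beta ≤ 1, preventive replacement would never pay off.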
Intact Interval Timing in Circadian CLOCK Mutants
Cordes, Sara; Gallistel, C. R.
2008-01-01
While progress has been made in determining the molecular basis for the circadian clock, the mechanism by which mammalian brains time intervals measured in seconds to minutes remains a mystery. An obvious question is whether the interval timing mechanism shares molecular machinery with the circadian timing mechanism. In the current study, we trained circadian CLOCK +/− and −/− mutant male mice in a peak-interval procedure with 10- and 20-s criteria. The mutant mice were more active than their wild-type littermates but showed no reliable deficits in the accuracy or precision of their timing. This suggests that expression of the CLOCK protein is not necessary for normal interval timing. PMID:18602902
Cano-Montoya, Johnattan; Álvarez, Cristian; Martínez, Cristian; Salas, Andrés; Sade, Farid; Ramírez-Campillo, Rodrigo
2016-09-01
Despite the evidence supporting metabolic benefits of high intensity interval exercise (HIIT), there is little information about the cardiovascular response to this type of exercise in patients with type 2 diabetes (T2D) and hypertension (HTA). The aim was to analyze the changes in heart rate at rest, at the onset and at the end of each interval of training, after twelve weeks of a HIIT program in T2D and HTA patients. Twenty-three participants with T2D and HTA (20 women) participated in a controlled HIIT program. Fourteen participants attended 90% or more of the exercise sessions and were considered adherent. Adherent and non-adherent participants had similar body mass index (BMI) and blood pressure. A 1x2x10 (work : rest time : intervals) HIIT exercise protocol was used both as a test and as the training method during the twelve weeks. The initial and final heart rate (HR) of each of the ten intervals were measured before and after the intervention. After twelve weeks of the HIIT intervention, adherent participants had a significant reduction in heart rate at the onset of exercise and during intervals 4, 5, 8 and 10. A reduction in the final heart rate was observed during intervals 8 and 10. In the same participants, the greatest reduction, at the onset or end of exercise, was approximately 10 beats/min. No significant changes in BMI, resting heart rate or blood pressure were observed. A HIIT program reduces the cardiovascular effort at a given workload and improves cardiovascular recovery after exercise.
Application and Validation of Remaining Service Interval Framework for Pavements
DOT National Transportation Integrated Search
2016-10-01
The pavement remaining service interval (RSI) terminology was developed to remove confusion caused by the multitude of meanings assigned to the various forms of pavement remaining service life (RSL). The RSI concept considers the complete maintenance...
Working times of elastomeric impression materials determined by dimensional accuracy.
Tan, E; Chai, J; Wozniak, W T
1996-01-01
The working times of five poly(vinyl siloxane) impression materials were estimated by evaluating the dimensional accuracy of stone dies made from impressions of a standard model taken at successive time intervals. The stainless steel standard model was represented by two abutments having known distances between landmarks in three dimensions. Dimensions along the x-, y-, and z-axes of the stone dies were measured with a traveling microscope. A time interval was rejected as being within the working time if the percentage change of the resultant dies, in any dimension, was statistically different from those measured from stone dies from previous time intervals. The absolute dimensions of dies from the rejected time interval also must have exceeded all those from previous time intervals. Results showed that the working times estimated with this method generally were about 30 seconds longer than those recommended by the manufacturers.
Analysis of photon count data from single-molecule fluorescence experiments
NASA Astrophysics Data System (ADS)
Burzykowski, T.; Szubiakowski, J.; Rydén, T.
2003-03-01
We consider single-molecule fluorescence experiments with data in the form of counts of photons registered over multiple time-intervals. Based on the observation schemes, linking back to works by Dehmelt [Bull. Am. Phys. Soc. 20 (1975) 60] and Cook and Kimble [Phys. Rev. Lett. 54 (1985) 1023], we propose an analytical approach to the data based on the theory of Markov-modulated Poisson processes (MMPP). In particular, we consider maximum-likelihood estimation. The method is illustrated using a real-life dataset. Additionally, the properties of the proposed method are investigated through simulations and compared to two other approaches developed by Yip et al. [J. Phys. Chem. A 102 (1998) 7564] and Molski [Chem. Phys. Lett. 324 (2000) 301].
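As an illustration of the MMPP model underlying this kind of analysis, a two-state Markov-modulated Poisson process can be simulated in a few lines; the emission and switching rates below are arbitrary illustrative values, not estimates from the paper's data.

```python
import random

random.seed(7)

def simulate_mmpp(rates, q, t_end):
    """Event times of a two-state Markov-modulated Poisson process.
    rates[s]: photon emission intensity in state s; q[s]: rate of
    leaving state s. Gillespie-style competition between the events
    'emit a photon' and 'switch state'."""
    t, state, events = 0.0, 0, []
    while True:
        total = rates[state] + q[state]
        t += random.expovariate(total)      # time to next event of any kind
        if t >= t_end:
            return events
        if random.random() < rates[state] / total:
            events.append(t)                # photon registered
        else:
            state = 1 - state               # molecule switches state

def bin_counts(events, t_end, dt):
    """Counts of photons registered in consecutive intervals of width dt."""
    counts = [0] * int(t_end / dt)
    for t in events:
        counts[int(t / dt)] += 1
    return counts

events = simulate_mmpp(rates=(2.0, 20.0), q=(0.5, 0.5), t_end=200.0)
counts = bin_counts(events, 200.0, 1.0)
print(len(counts), sum(counts))
```

The resulting per-interval counts are overdispersed relative to a plain Poisson process, which is exactly the signature the MMPP likelihood exploits.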
Life sciences flight experiments program - Overview
NASA Technical Reports Server (NTRS)
Berry, W. E.; Dant, C. C.
1981-01-01
The considered LSFE program focuses on Spacelab life sciences missions planned for the 1984-1985 time frame. Life Sciences Spacelab payloads, launched at approximately 18-month intervals, will enable scientists to test hypotheses from such disciplines as vestibular physiology, developmental biology, biochemistry, cell biology, plant physiology, and a variety of other life sciences. An overview is presented of the LSFE program, which will take advantage of the unique opportunities for biological experimentation possible on Spacelab. Program structure, schedules, and status are considered, along with questions of program selection and the science investigator working groups. A description is presented of the life sciences laboratory equipment program, taking into account the general purpose work station, the research animal holding facility, and the plant growth unit.
Single-channel autocorrelation functions: the effects of time interval omission.
Ball, F G; Sansom, M S
1988-01-01
We present a general mathematical framework for analyzing the dynamic aspects of single channel kinetics incorporating time interval omission. An algorithm for computing model autocorrelation functions, incorporating time interval omission, is described. We show, under quite general conditions, that the form of these autocorrelations is identical to that which would be obtained if time interval omission was absent. We also show, again under quite general conditions, that zero correlations are necessarily a consequence of the underlying gating mechanism and not an artefact of time interval omission. The theory is illustrated by a numerical study of an allosteric model for the gating mechanism of the locust muscle glutamate receptor-channel. PMID:2455553
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, Ziyang; Yang, Tao; Li, Guoqi
We study synchronization of coupled linear systems over networks with weak connectivity and time-varying delays. We focus on the case that the internal dynamics are time-varying but non-expansive. Both uniformly connected and infinitely connected communication topologies are considered. A new concept of P-synchronization is introduced and we first show that global asymptotic P-synchronization can be achieved over directed networks with uniform joint connectivity and arbitrarily bounded delays. We then study the case of the infinitely jointly connected communication topology. In particular, for the undirected communication topologies, it turns out that the existence of a uniform time interval for the communication topology is not necessary and P-synchronization can be achieved when the time-varying delays are arbitrarily bounded. Simulations are given to validate the theoretical results.
Preharvest Interval Periods and their relation to fruit growth stages and pesticide formulations.
Alister, Claudio; Araya, Manuel; Becerra, Kevin; Saavedra, Jorge; Kogan, Marcelo
2017-04-15
The aim of this study was to evaluate the effect of pesticide formulations and fruit growth stages on the Pre-harvest Interval Period (PHI). Results showed that pesticide formulations did not affect the initial deposit or the dissipation rate. However, the fruit growth stage at application time had a significant effect on both parameters. For each one-millimeter increase in fruit diameter, pesticide dissipation rates decreased by 0.033 mg kg⁻¹ day⁻¹ (R² = 0.87; p < 0.001) for grapes and by 0.014 mg kg⁻¹ day⁻¹ (R² = 0.85; p < 0.001) for apples. The relations between solar radiation, air humidity and temperature and the pesticide dissipation rates depended on fruit type. The PHI may change with application time, because of the initial amount of pesticide deposited on the fruit and changes in the dissipation rate. Because Maximum Residue Levels are becoming more restrictive, it is increasingly important to consider fruit growth stage effects on pesticides when performing dissipation studies to define the PHI. Copyright © 2016. Published by Elsevier Ltd.
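The link between dissipation rate and PHI can be illustrated with a simple sketch assuming first-order (exponential) dissipation, a common model in residue studies; the deposit, rate constant and MRL values below are hypothetical, not the paper's data. A slower dissipation rate lengthens the interval needed to fall below the MRL.

```python
import math

def preharvest_interval(c0, k, mrl):
    """Days needed for a residue to decay from initial deposit c0 (mg/kg)
    below the Maximum Residue Level mrl, assuming first-order dissipation
    C(t) = c0 * exp(-k * t) with rate constant k (1/day)."""
    if c0 <= mrl:
        return 0.0
    return math.log(c0 / mrl) / k

# Hypothetical numbers: same deposit, two dissipation rates.
print(round(preharvest_interval(2.0, 0.20, 0.5), 1))   # faster decay, ~6.9 days
print(round(preharvest_interval(2.0, 0.10, 0.5), 1))   # slower decay, ~13.9 days
```

Halving the rate constant doubles the PHI, which is why a growth-stage effect on dissipation rates translates directly into a growth-stage effect on the PHI.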
Viability of dental implants in head and neck irradiated patients: A systematic review.
Zen Filho, Edson Virgílio; Tolentino, Elen de Souza; Santos, Paulo Sérgio Silva
2016-04-01
The purpose of this systematic review was to evaluate the safety of dental implants placed in irradiated bone and to discuss their viability when placed post-radiotherapy (RT). A systematic review was performed to answer the questions: "Are dental implants in irradiated bone viable?" and "What are the main factors that influence the loss of implants in irradiated patients?" The search strategy resulted in 8 publications. A total of 331 patients received 1237 implants, with an overall failure rate of 9.53%. The osseointegration success rates ranged between 62.5% and 100%. The optimal time interval between irradiation and dental implantation varied from 6 to 15 months. The time interval between RT and implant placement and the radiation dose are not associated with significant implant failure rates. The placement of implants in irradiated bone is viable, and head and neck RT should not be considered a contraindication for dental rehabilitation with implants. © 2015 Wiley Periodicals, Inc. Head Neck 38: E2229-E2240, 2016.
Ragoschke-Schumm, Andreas; Yilmaz, Umut; Kostopoulos, Panagiotis; Lesmeister, Martin; Manitz, Matthias; Walter, Silke; Helwig, Stefan; Schwindling, Lenka; Fousse, Mathias; Haass, Anton; Garner, Dominique; Körner, Heiko; Roumia, Safwan; Grunwald, Iris; Nasreldein, Ali; Halmer, Ramona; Liu, Yang; Schlechtriemen, Thomas; Reith, Wolfgang; Fassbender, Klaus
2015-01-01
For patients with acute ischemic stroke, intra-arterial treatment (IAT) is considered to be an effective strategy for removing the obstructing clot. Because outcome crucially depends on time to treatment ('time-is-brain' concept), we assessed the effects of an intervention based on performing all the time-sensitive diagnostic and therapeutic procedures at a single location on the delay before intra-arterial stroke treatment. Consecutive acute stroke patients with large vessel occlusion who received IAT were evaluated before and after implementation (April 26, 2010) of an intervention focused on performing all the diagnostic and therapeutic measures at a single site ('stroke room'). After implementation of the intervention, the median intervals between admission and first angiography series were significantly shorter for 174 intervention patients (102 min, interquartile range (IQR) 85-120 min) than for 81 control patients (117 min, IQR 89-150 min; p < 0.05), as were the intervals between admission and clot removal or end of angiography (152 min, IQR 123-185 min vs. 190 min, IQR 163-227 min; p < 0.001). However, no significant differences in clinical outcome were observed. This study shows, to our knowledge for the first time, that for patients with acute ischemic stroke, performing stroke diagnosis and treatment at a single location ('stroke room') saves crucial time before IAT. © 2015 S. Karger AG, Basel.
Identifying when weather influences life-history traits of grazing herbivores.
Sims, Michelle; Elston, David A; Larkham, Ann; Nussey, Daniel H; Albon, Steve D
2007-07-01
1. There is increasing evidence that density-independent weather effects influence life-history traits and hence the dynamics of populations of animals. Here, we present a novel statistical approach to estimate when such influences are strongest. The method is demonstrated by analyses investigating the timing of the influence of weather on the birth weight of sheep and deer. 2. The statistical technique allowed for the pattern of temporal correlation in the weather data enabling the effects of weather in many fine-scale time intervals to be investigated simultaneously. Thus, while previous studies have typically considered weather averaged across a single broad time interval during pregnancy, our approach enabled examination simultaneously of the relationships with weekly and fortnightly averages throughout the whole of pregnancy. 3. We detected a positive effect of temperature on the birth weight of deer, which is strongest in late pregnancy (mid-March to mid-April), and a negative effect of rainfall on the birthweight of sheep, which is strongest during mid-pregnancy (late January to early February). The possible mechanisms underlying these weather-birth weight relationships are discussed. 4. This study enhances our insight into the pattern of the timing of influence of weather on early development. The method is of much more general application and could provide valuable insights in other areas of ecology in which sequences of intercorrelated explanatory variables have been collected in space or in time.
Computer Modelling and Simulation of Solar PV Array Characteristics
NASA Astrophysics Data System (ADS)
Gautam, Nalin Kumar
2003-02-01
The main objective of my PhD research work was to study the behaviour of inter-connected solar photovoltaic (PV) arrays. The approach involved the construction of mathematical models to investigate different types of research problems related to the energy yield, fault tolerance, efficiency and optimal sizing of inter-connected solar PV array systems. My research work can be divided into four different types of research problems: 1. Modeling of inter-connected solar PV array systems to investigate their electrical behavior, 2. Modeling of different inter-connected solar PV array networks to predict their expected operational lifetimes, 3. Modeling solar radiation estimation and its variability, and 4. Modeling of a coupled system to estimate the size of PV array and battery-bank in the stand-alone inter-connected solar PV system where the solar PV system depends on a system providing solar radiant energy. The successful application of mathematics to the above-mentioned problems entailed three phases: 1. The formulation of the problem in a mathematical form using numerical, optimization, probabilistic and statistical methods/techniques, 2. The translation of mathematical models using C++ to simulate them on a computer, and 3. The interpretation of the results to see how closely they correlated with the real data. The array is the most cost-intensive component of the solar PV system. Since the electrical performances as well as life properties of an array are highly sensitive to field conditions, different characteristics of the arrays, such as energy yield, operational lifetime, collector orientation, and optimal sizing were investigated in order to improve their efficiency, fault-tolerance and reliability. Three solar cell interconnection configurations in the array - series-parallel, total-cross-tied, and bridge-linked - were considered.
The electrical characteristics of these configurations were investigated to find out one that is comparatively less susceptible to the mismatches due to manufacturer's tolerances in cell characteristics, shadowing, soiling and aging of solar cells. The current-voltage curves and the values of energy yield characterized by maximum-power points and fill factors for these arrays were also obtained. Two different mathematical models, one for smaller size arrays and the other for the larger size arrays, were developed. The first model takes account of the partial differential equations with boundary value conditions, whereas the second one involves the simple linear programming concept. Based on the initial information on the values of short-circuit current and open-circuit voltage of thirty-six single-crystalline silicon solar cells provided by a manufacturer, the values of these parameters for up to 14,400 solar cells were generated randomly. Thus, the investigations were done for three different cases of array sizes, i.e., (6 x 6), (36 x 8) and (720 x 20), for each configuration. The operational lifetimes of different interconnected solar PV arrays and the improvement in their life properties through different interconnection and modularized configurations were investigated using a reliability-index model. Under normal conditions, the efficiency of a solar cell degrades in an exponential manner, and its operational life above a lowest admissible efficiency may be considered as the upper bound of its lifetime. Under field conditions, the solar cell may fail any time due to environmental stresses, or it may function up to the end of its expected lifetime. In view of this, the lifetime of a solar cell in an array was represented by an exponentially distributed random variable. At any instant of time t, this random variable was considered to have two states: (i) the cell functioned till time t, or (ii) the cell failed within time t. 
It was considered that the functioning of the solar cell included its operation at an efficiency decaying with time under normal conditions. It was assumed that the lifetime of a solar cell had the lack-of-memory (no-aging) property, which meant that no matter how long (say, t) the cell had been operational, the probability that it would last an additional time Δt was independent of t. The operational life of the solar cell above a lowest admissible efficiency was considered as the upper bound of its expected lifetime. The value of the upper bound on the expected life of a solar cell was evaluated using the information provided by the manufacturers of the single-crystalline silicon solar cells. Then, on the basis of these lifetimes, the expected operational lifetimes of the array systems were obtained. Since the investigations of the effects of collector orientation on the performance of an array require the continuous values of global solar radiation on a surface, a method to estimate the global solar radiation on a surface (horizontal or tilted) was also proposed. The cloudiness index was defined as the fraction of extraterrestrial radiation that reached the earth's surface when the sky above the location of interest was obscured by the cloud cover. The cloud cover at the location of interest during any time interval of a day was assumed to follow a fuzzy random phenomenon. The cloudiness index, therefore, was considered as a fuzzy random variable that accounted for the cloud cover at the location of interest during any time interval of a day. This variable was assumed to depend on four other fuzzy random variables that, respectively, accounted for the cloud cover corresponding to the 1) type of cloud group, 2) climatic region, 3) season with most of the precipitation, and 4) type of precipitation at the location of interest during any time interval. All possible types of cloud covers were categorized into five types of cloud groups.
Each cloud group was considered to be a fuzzy subset. In this model, the cloud cover at the location of interest during a time interval was considered to be the clouds that obscure the sky above the location. The cloud covers, with all possible types of clouds having transmissivities corresponding to values in the membership range of a fuzzy subset (i.e., a type of cloud group), were considered to be the membership elements of that fuzzy subset. The transmissivities of different types of cloud covers in a cloud group corresponded to the values in the membership range of that cloud group. Predicate logic (i.e., if---then---, else--- conditions) was used to set the relationship between all the fuzzy random variables. The values of the above-mentioned fuzzy random variables were evaluated to provide the value of the cloudiness index for each time interval at the location of interest. For each case of the fuzzy random variable, a heuristic approach was used to identify subjectively the range ([a, b], where a and b were real numbers within [0, 1] such that a
Understanding Preprocedure Patient Flow in IR.
Zafar, Abdul Mueed; Suri, Rajeev; Nguyen, Tran Khanh; Petrash, Carson Cope; Fazal, Zanira
2016-08-01
To quantify preprocedural patient flow in interventional radiology (IR) and to identify potential contributors to preprocedural delays. An administrative dataset was used to compute time intervals required for various preprocedural patient-flow processes. These time intervals were compared across on-time/delayed cases and inpatient/outpatient cases by Mann-Whitney U test. Spearman ρ was used to assess any correlation of the rank of a procedure on a given day and the procedure duration to the preprocedure time. A linear-regression model of preprocedure time was used to further explore potential contributing factors. Any identified reason(s) for delay were collated. P < .05 was considered statistically significant. Of the total 1,091 cases, 65.8% (n = 718) were delayed. Significantly more outpatient cases started late compared with inpatient cases (81.4% vs 45.0%; P < .001, χ² test). The multivariate linear regression model showed outpatient status, length of delay in arrival, and longer procedure times to be significantly associated with longer preprocedure times. Late arrival of patients (65.9%), unavailability of physicians (18.4%), and unavailability of procedure room (13.0%) were the three most frequently identified reasons for delay. The delay was multifactorial in 29.6% of cases (n = 213). Objective measurement of preprocedural IR patient flow demonstrated considerable waste and highlighted high-yield areas of possible improvement. A data-driven approach may aid efficient delivery of IR care. Copyright © 2016 SIR. Published by Elsevier Inc. All rights reserved.
Variations in rupture process with recurrence interval in a repeated small earthquake
Vidale, J.E.; Ellsworth, W.L.; Cole, A.; Marone, Chris
1994-01-01
In theory and in laboratory experiments, friction on sliding surfaces such as rock, glass and metal increases with time since the previous episode of slip. This time dependence is a central pillar of the friction laws widely used to model earthquake phenomena. On natural faults, other properties, such as rupture velocity, porosity and fluid pressure, may also vary with the recurrence interval. Eighteen repetitions of the same small earthquake, separated by intervals ranging from a few days to several years, allow us to test these laboratory predictions in situ. The events with the longest time since the previous earthquake tend to have about 15% larger seismic moment than those with the shortest intervals, although this trend is weak. In addition, the rupture durations of the events with the longest recurrence intervals are more than a factor of two shorter than for the events with the shortest intervals. Both decreased duration and increased friction are consistent with progressive fault healing during the time of stationary contact.
Buffered coscheduling for parallel programming and enhanced fault tolerance
Petrini, Fabrizio [Los Alamos, NM; Feng, Wu-chun [Los Alamos, NM
2006-01-31
A computer implemented method schedules processor jobs on a network of parallel machine processors or distributed system processors. Control information communications generated by each process performed by each processor during a defined time interval are accumulated in buffers, where adjacent time intervals are separated by strobe intervals for a global exchange of control information. A global exchange of the control information communications at the end of each defined time interval is performed during an intervening strobe interval, so that each processor is informed by all of the other processors of the number of incoming jobs to be received in a subsequent time interval. The buffered coscheduling method of this invention also enhances the fault tolerance of a network of parallel machine processors or distributed system processors.
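The interval/strobe bookkeeping can be sketched as a toy data structure; this is only an illustration of the buffering idea, not the patented implementation, and all names are invented.

```python
from collections import defaultdict

class BufferedCoscheduler:
    """Toy sketch of buffered coscheduling: communication generated
    during a time interval is buffered locally, then exchanged globally
    at the strobe so every processor learns its incoming-job count for
    the next interval."""

    def __init__(self, n_procs):
        self.n = n_procs
        # One outgoing buffer per processor: dst -> pending job count.
        self.buffers = [defaultdict(int) for _ in range(n_procs)]

    def send(self, src, dst, njobs=1):
        # Buffered during the interval, not delivered yet.
        self.buffers[src][dst] += njobs

    def strobe(self):
        # Global exchange: tally incoming jobs per processor, clear buffers.
        incoming = [0] * self.n
        for buf in self.buffers:
            for dst, k in buf.items():
                incoming[dst] += k
            buf.clear()
        return incoming

sched = BufferedCoscheduler(3)
sched.send(0, 1)
sched.send(0, 2, 2)
sched.send(2, 1)
print(sched.strobe())   # [0, 2, 2]
```

Deferring delivery to the strobe is what lets every processor know its incoming workload for the next interval before that interval starts.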
Effect of aspirin in pregnant women is dependent on increase in bleeding time.
Dumont, A; Flahault, A; Beaufils, M; Verdy, E; Uzan, S
1999-01-01
Randomized trials with low-dose aspirin to prevent preeclampsia and intrauterine growth restriction have yielded conflicting results. In particular, 3 recent large trials were not conclusive. Study designs, however, varied greatly regarding selection of patients, dose of aspirin, and timing of treatment, all of which can be determinants of the results. Retrospectively analyzing the conditions associated with failure or success of aspirin may therefore help to draw up new hypotheses and prepare for more specific randomized trials. We studied a historical cohort of 187 pregnant women who were considered at high risk for preeclampsia, intrauterine growth restriction, or both and were therefore treated with low-dose aspirin between 1989 and 1994. Various epidemiologic, clinical, and laboratory data were extracted from the files. Univariate and multivariate analyses were performed to search for independent parameters associated with the outcome of pregnancy. Age, parity, weight, height, and race had no influence on the outcome. The success rate was higher when treatment was given because of previous poor pregnancy outcomes than when it was given for other indications, and the patients with successful therapy had started aspirin earlier than had those with therapy failure (17.7 vs 20.0 weeks' gestation, P = .04). After multivariate analysis an increase in Ivy bleeding time after 10 days of treatment by >2 minutes was an independent predictor of a better outcome (odds ratio 0.22, 95% confidence interval 0.09-0.51). Borderline statistical significance was observed for aspirin initiation before 17 weeks' gestation (odds ratio 0.44, 95% confidence interval 0.18-1.08).
Abnormal uterine artery Doppler velocimetric scan at 20-24 weeks' gestation (odds ratio 3.31, 95% confidence interval 1.41-7.7), abnormal umbilical artery Doppler velocimetric scan after 26 weeks' gestation (odds ratio 37.6, 95% confidence interval 3.96-357), and use of antihypertensive therapy (odds ratio 6.06, 95% confidence interval 2.45-15) were independent predictors of poor outcome. Efficacy of aspirin seems optimal when bleeding time increases ≥2 minutes with treatment, indicating a more powerful antiplatelet effect. This suggests that the dose of aspirin should be adjusted according to a biologic marker of the antiplatelet effect. A prospective trial is warranted to test this hypothesis.
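For reference, the odds ratios with 95% confidence intervals quoted above are standard 2x2-table statistics; a minimal sketch of Woolf's log-based interval, with made-up counts rather than the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table (a = exposed cases,
    b = exposed non-cases, c = unexposed cases, d = unexposed non-cases),
    using Woolf's log-based standard error."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration only:
or_, lo, hi = odds_ratio_ci(20, 80, 40, 60)
print(round(or_, 3), round(lo, 3), round(hi, 3))
```

An odds ratio below 1 with a confidence interval excluding 1 (like the 0.22, 0.09-0.51 result above) indicates a protective association; an interval spanning 1 (like 0.44, 0.18-1.08) is borderline.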
NASA Astrophysics Data System (ADS)
Lyubushin, Alexey
2016-04-01
The problem of estimating current seismic danger based on monitoring of seismic noise properties from the broadband seismic network F-net in Japan (84 stations) is considered. Variations of the following seismic noise parameters are analyzed: multifractal singularity spectrum support width, generalized Hurst exponent, minimum Hölder-Lipschitz exponent and minimum normalized entropy of squared orthogonal wavelet coefficients. These parameters are estimated within adjacent time windows of length 1 day for seismic noise waveforms from each station. Calculating daily median values of these parameters over all stations provides a 4-dimensional time series which describes integral properties of the seismic noise in the region covered by the network. Cluster analysis is applied to the sequence of clouds of 4-dimensional vectors within a moving time window of length 365 days with a mutual shift of 3 days, starting from the beginning of 1997 up to the current time. The purpose of the cluster analysis is to find the best number of clusters (BNC) from probe numbers varying from 1 up to a maximum value of 40. The BNC is found from the maximum of the pseudo-F-statistic (PFS). A 2D map can be created which presents the dependence of the PFS on the tested probe number of clusters and the right-hand end of the moving time window, rather similar to the usual spectral time-frequency diagrams. In the paper [1] it was shown that the BNC before the Tohoku mega-earthquake of March 11, 2011, had a strongly chaotic regime, with jumps from minimum up to maximum values, in the time interval 1 year before the event, and this time interval was characterized by high PFS values. The PFS map is proposed as a method for extracting time intervals with high current seismic danger. The next danger time interval after the Tohoku mega-earthquake began at the end of 2012 and ended in the middle of 2013. Starting from the middle of 2015, the high PFS values and the chaotic regime of BNC variations returned.
This could be interpreted as an increase in the danger of the next mega-earthquake in Japan in the region of the Nankai Trough [1] in the first half of 2016. References 1. Lyubushin, A., 2013. How soon would the next mega-earthquake occur in Japan? Natural Science, 5 (8A1), 1-7. http://dx.doi.org/10.4236/ns.2013.58A1001
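The BNC search described in the abstract above (maximizing the pseudo-F statistic over probe cluster counts) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the k-means variant, the farthest-point seeding, the synthetic 4-D data, and the probe range are all assumptions; the pseudo-F statistic is computed in its standard Calinski-Harabasz form.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Minimal Lloyd's algorithm with deterministic farthest-point seeding."""
    centers = [X[0]]
    for _ in range(1, k):
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        centers.append(X[int(d2.argmax())])  # next seed: farthest from all seeds
    centers = np.array(centers)
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def pseudo_f(X, labels):
    """Pseudo-F (Calinski-Harabasz): between- vs. within-cluster dispersion."""
    n, ks = len(X), np.unique(labels)
    grand = X.mean(axis=0)
    B = sum(np.sum(labels == j) * np.sum((X[labels == j].mean(axis=0) - grand) ** 2)
            for j in ks)
    W = sum(np.sum((X[labels == j] - X[labels == j].mean(axis=0)) ** 2) for j in ks)
    return (B / (len(ks) - 1)) / (W / (n - len(ks)))

def best_number_of_clusters(X, k_max=40):
    """BNC = probe count maximizing pseudo-F (undefined at k=1, so start at 2)."""
    scores = {k: pseudo_f(X, kmeans(X, k)) for k in range(2, min(k_max, len(X) - 1) + 1)}
    bnc = max(scores, key=scores.get)
    return bnc, scores[bnc]

# Synthetic stand-in for one 365-day window of daily 4-D noise-parameter
# vectors: three well-separated regimes, so the BNC should come out as 3.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.05, size=(50, 4)) for c in (0.0, 1.0, 2.0)])
bnc, pfs = best_number_of_clusters(X, k_max=8)
```

Repeating this over sliding windows and plotting the per-k PFS against the window end date gives the 2D PFS map the abstract describes.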
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sparks, R.B.; Stabin, M.G.
1999-01-01
After administration of I-131 to the female patient, the possibility of radiation exposure of the embryo/fetus exists if the patient becomes pregnant while radioiodine remains in the body. Fetal radiation dose estimates for such cases were calculated. Doses were calculated for various maternal thyroid uptakes and time intervals between administration and conception, including euthyroid and hyperthyroid cases. The maximum fetal dose calculated was about 9.8E-03 mGy/MBq, which occurred with 100% maternal thyroid uptake and a 1-week interval between administration and conception. Placental crossover of the small amount of radioiodine remaining 90 days after conception was also considered. Such crossover could result in an additional fetal dose of 9.8E-05 mGy/MBq and a maximum fetal thyroid self-dose of 3.5E-04 mGy/MBq.
Infant temperament: stability by age, gender, birth order, term status, and socioeconomic status.
Bornstein, Marc H; Putnick, Diane L; Gartstein, Maria A; Hahn, Chun-Shin; Auestad, Nancy; O'Connor, Deborah L
2015-01-01
Two complementary studies focused on stability of infant temperament across the 1st year and considered infant age, gender, birth order, term status, and socioeconomic status (SES) as moderators. Study 1 consisted of 73 mothers of firstborn term girls and boys queried at 2, 5, and 13 months of age. Study 2 consisted of 335 mothers of infants of different gender, birth order, term status, and SES queried at 6 and 12 months. Consistent positive and negative affectivity factors emerged at all time points across both studies. Infant temperament proved stable and robust across gender, birth order, term status, and SES. Stability coefficients for temperament factors and scales were medium to large for shorter (< 9 months) interassessment intervals and small to medium for longer (> 10 months) intervals. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
Faber, V.
1994-11-29
Livelock-free message routing is provided in a network of interconnected nodes that is flushable in time T. An input message processor generates sequences of at least N time intervals, each of duration T. An input register provides for receiving and holding each input message, where the message is assigned a priority state p during an nth one of the N time intervals. At each of the network nodes a message processor reads the assigned priority state and awards priority to messages with priority state (p-1) during an nth time interval and to messages with priority state p during an (n+1)th time interval. The messages that are awarded priority are output on an output path toward the addressed output message processor. Thus, no message remains in the network for a time longer than T. 4 figures.
Faber, Vance
1994-01-01
Livelock-free message routing is provided in a network of interconnected nodes that is flushable in time T. An input message processor generates sequences of at least N time intervals, each of duration T. An input register provides for receiving and holding each input message, where the message is assigned a priority state p during an nth one of the N time intervals. At each of the network nodes a message processor reads the assigned priority state and awards priority to messages with priority state (p-1) during an nth time interval and to messages with priority state p during an (n+1)th time interval. The messages that are awarded priority are output on an output path toward the addressed output message processor. Thus, no message remains in the network for a time longer than T.
a New Approach for Accuracy Improvement of Pulsed LIDAR Remote Sensing Data
NASA Astrophysics Data System (ADS)
Zhou, G.; Huang, W.; Zhou, X.; He, C.; Li, X.; Huang, Y.; Zhang, L.
2018-05-01
In remote sensing applications, the accuracy of time interval measurement is one of the most important parameters affecting the quality of pulsed lidar data. Traditional time interval measurement techniques have the disadvantages of low measurement accuracy, complicated circuit structure and large error, and high-precision time interval data cannot be obtained with them. In order to obtain higher-quality remote sensing cloud images based on the time interval measurement, a higher-accuracy time interval measurement method is proposed. The method is based on charging a capacitor and sampling the change of the capacitor voltage at the same time. Firstly, an approximate model of the capacitor voltage curve over the time of flight of the pulse is fitted to the sampled data. Then, the whole charging time is obtained from the fitting function. In this method, only a high-speed A/D sampler and a capacitor are required in a single receiving channel, and the collected data are processed directly in the main control unit. The experimental results show that the proposed method achieves an error of less than 3 ps. Compared with other methods, the proposed method improves the time interval accuracy by at least 20%.
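The idea of recovering the charging time from samples of the capacitor voltage curve can be sketched in a few lines. This is only an illustration of the principle, not the paper's circuit: the supply voltage, RC time constant, ADC rate, and noise level below are invented values, and a simple RC charging law with a known time constant is assumed so that each sample can be inverted for the charge start time and the results averaged.

```python
import numpy as np

# Hypothetical circuit constants (assumptions, not from the paper)
V0, TAU = 3.3, 50e-9   # supply voltage (V), RC time constant (s)

def charge_curve(t, t0):
    """Capacitor voltage for an RC charge that starts at time t0."""
    return V0 * (1.0 - np.exp(-(t - t0) / TAU))

def estimate_interval(sample_t, sample_v, t_stop):
    """Estimate time of flight = t_stop - t0 by inverting the charge curve
    at every ADC sample and averaging, which suppresses sampling noise."""
    t0_est = np.mean(sample_t + TAU * np.log(1.0 - sample_v / V0))
    return t_stop - t0_est

# Simulate: charging starts when the pulse is emitted (t0), the ADC samples
# every 1 ns, and the returning echo stops the charge at t_stop.
rng = np.random.default_rng(0)
t0_true, t_stop = 7.3e-9, 80e-9
sample_t = np.arange(10e-9, t_stop, 1e-9)
sample_v = charge_curve(sample_t, t0_true) + rng.normal(0, 1e-3, sample_t.size)
tof = estimate_interval(sample_t, sample_v, t_stop)
```

Because every sample contributes an independent estimate of the start time, the averaged result resolves the interval far below the 1 ns sampling grid, which is the effect the paper exploits.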
The Wasatch fault zone, utah-segmentation and history of Holocene earthquakes
Machette, M.N.; Personius, S.F.; Nelson, A.R.; Schwartz, D.P.; Lund, W.R.
1991-01-01
The Wasatch fault zone (WFZ) forms the eastern boundary of the Basin and Range province and is the longest continuous, active normal fault (343 km) in the United States. It underlies an urban corridor of 1.6 million people (80% of Utah's population) representing the largest earthquake risk in the interior of the western United States. We have used paleoseismological data to identify 10 discrete segments of the WFZ. Five are active, medial segments with Holocene slip rates of 1-2 mm a-1, recurrence intervals of 2000-4000 years and average lengths of about 50 km. Five are less active, distal segments with mostly pre-Holocene surface ruptures, late Quaternary slip rates of 6.5 have occurred since 1860. Although the time scale of the clustering is different (130 years vs 1100 years), we consider the central Nevada-eastern California Seismic Belt to be a historic analog for movement on the WFZ during the past 1500 years. We have found no evidence that surface-rupturing events occurred on the WFZ during the past 400 years, a time period which is twice the average intracluster recurrence interval and equal to the average Holocene recurrence interval. In particular, the Brigham City segment (the northernmost medial segment) has not ruptured in the past 3600 years, a period that is about three times longer than this segment's average recurrence interval during the early and middle Holocene. Although the WFZ's seismological record is one of relative quiescence, a comparison with other historic surface-rupturing earthquakes in the region suggests that earthquakes having moment magnitudes of 7.1-7.4 (or surface-wave magnitudes of 7.5-7.7), each associated with tens of kilometers of surface rupture and several meters of normal dip slip, have occurred about every four centuries during the Holocene and should be expected in the future. © 1991.
Patel, Sanjay R.; Weng, Jia; Rueschman, Michael; Dudley, Katherine A.; Loredo, Jose S.; Mossavar-Rahmani, Yasmin; Ramirez, Maricelle; Ramos, Alberto R.; Reid, Kathryn; Seiger, Ashley N.; Sotres-Alvarez, Daniela; Zee, Phyllis C.; Wang, Rui
2015-01-01
Study Objectives: While actigraphy is considered objective, the process of setting rest intervals to calculate sleep variables is subjective. We sought to evaluate the reproducibility of actigraphy-derived measures of sleep using a standardized algorithm for setting rest intervals. Design: Observational study. Setting: Community-based. Participants: A random sample of 50 adults aged 18–64 years free of severe sleep apnea participating in the Sueño sleep ancillary study to the Hispanic Community Health Study/Study of Latinos. Interventions: N/A. Measurements and Results: Participants underwent 7 days of continuous wrist actigraphy and completed daily sleep diaries. Studies were scored twice by each of two scorers. Rest intervals were set using a standardized hierarchical approach based on event marker, diary, light, and activity data. Sleep/wake status was then determined for each 30-sec epoch using a validated algorithm, and this was used to generate 11 variables: mean nightly sleep duration, nap duration, 24-h sleep duration, sleep latency, sleep maintenance efficiency, sleep fragmentation index, sleep onset time, sleep offset time, sleep midpoint time, standard deviation of sleep duration, and standard deviation of sleep midpoint. Intra-scorer intraclass correlation coefficients (ICCs) were high, ranging from 0.911 to 0.995 across all 11 variables. Similarly, inter-scorer ICCs were high, also ranging from 0.911 to 0.995, and mean inter-scorer differences were small. Bland-Altman plots did not reveal any systematic disagreement in scoring. Conclusions: With use of a standardized algorithm to set rest intervals, scoring of actigraphy for the purpose of generating a wide array of sleep variables is highly reproducible. Citation: Patel SR, Weng J, Rueschman M, Dudley KA, Loredo JS, Mossavar-Rahmani Y, Ramirez M, Ramos AR, Reid K, Seiger AN, Sotres-Alvarez D, Zee PC, Wang R. 
Reproducibility of a standardized actigraphy scoring algorithm for sleep in a US Hispanic/Latino population. SLEEP 2015;38(9):1497–1503. PMID:25845697
NASA Astrophysics Data System (ADS)
Popeskov, Mirjana; Cukavac, Milena; Lazovic, Caslav
This paper considers the interpretation of geomagnetic field changes in terms of a possible connection with the geological composition of a deformation zone. Analysis of total magnetic field intensity data from 38 surveys, carried out between May 1980 and November 2001 in the Kopaonik thrust region, central Serbia, reveals anomalous behaviour of local field changes in particular time intervals. These data make it possible to observe geomagnetic changes over a long period of time. We consider if and how the different magnetizations within the geological composition of the area are connected with the anomalous geomagnetic field changes. We examine how a non-uniform geological structure, or rocks with different magnetizations, can affect geomagnetic observations, and whether a sharp contrast in rock magnetization between neighbouring layers can give rise to larger changes in the geomagnetic total intensity than those for a uniform layer. For that purpose we consider the geological and tectonic maps of the Kopaonik region. We also consider the map of the vertical component of the geomagnetic field, because Kopaonik belongs to a zone of high magnetic anomaly. Correlation of the geomagnetic and geological data is expected to give some answers to the question of the origin of certain anomalous changes in the total intensity of the geomagnetic field. It can also represent a first step in correlating geomagnetic field changes with other geophysical, seismological, or geological data that can be the cause of geomagnetic field change.
Training for long duration space missions
NASA Technical Reports Server (NTRS)
Goldberg, Joseph H.
1987-01-01
The successful completion of an extended duration manned mission to Mars will require renewed research effort in the areas of crew training and skill retention techniques. The current estimate of inflight transit time is about nine months each way, with a six-month surface visit, an order of magnitude beyond previous U.S. space missions. Concerns arise when considering the level of skill retention required for highly critical, one-time operations such as an emergency procedure or a Mars orbit injection. The factors responsible for the level of complex skill retention are reviewed, optimal ways of refreshing degraded skills are suggested, and a conceptual crew training design for a Mars mission is outlined. Currently proposed crew activities during a Mars mission were reviewed to identify the spectrum of skills which must be retained over a long time period. Skill retention literature was reviewed to identify those factors which must be considered in deciding when and which tasks need retraining. Task, training, and retention interval factors were identified. These factors were then interpreted in light of the current state of spaceflight and adaptive training systems.
Recurrence time statistics for finite size intervals
NASA Astrophysics Data System (ADS)
Altmann, Eduardo G.; da Silva, Elton C.; Caldas, Iberê L.
2004-12-01
We investigate the statistics of recurrences to finite size intervals for chaotic dynamical systems. We find that the typical distribution presents an exponential decay for almost all recurrence times except for a few short times affected by a kind of memory effect. We interpret this effect as being related to the unstable periodic orbits inside the interval. Although it is restricted to a few short times it changes the whole distribution of recurrences. We show that for systems with strong mixing properties the exponential decay converges to the Poissonian statistics when the width of the interval goes to zero. However, we alert that special attention to the size of the interval is required in order to guarantee that the short time memory effect is negligible when one is interested in numerically or experimentally calculated Poincaré recurrence time statistics.
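The statistics described above can be reproduced numerically for any strongly mixing map. The sketch below is an illustration, not the systems studied in the paper: it uses the fully chaotic logistic map as a stand-in, records return times to a small interval, and checks two standard facts the abstract relies on, namely Kac's lemma (mean return time equals the inverse of the interval's invariant measure) and the roughly exponential spread of return times.

```python
import numpy as np

def recurrence_times(n=1_000_000, a=0.40, eps=0.01, x0=0.3):
    """Iterate the chaotic logistic map x -> 4x(1-x) and return the gaps
    (in iterations) between successive visits to the interval [a, a+eps]."""
    x, visits = x0, []
    for t in range(n):
        x = 4.0 * x * (1.0 - x)
        if a <= x < a + eps:
            visits.append(t)
    return np.diff(np.array(visits))

rt = recurrence_times()
mean_rt = rt.mean()
# Invariant measure of the interval for the r=4 logistic map, whose
# invariant density is 1 / (pi * sqrt(x(1-x))); Kac's lemma predicts
# mean_rt ~ 1/mu.
mu = 0.01 / (np.pi * np.sqrt(0.405 * 0.595))
```

For an exponential distribution the standard deviation equals the mean; the short-time memory effect discussed in the paper shows up as deviations from this at small return times when a periodic orbit sits inside the chosen interval.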
Fast transfer of crossmodal time interval training.
Chen, Lihan; Zhou, Xiaolin
2014-06-01
Sub-second time perception is essential for many important sensory and perceptual tasks including speech perception, motion perception, motor coordination, and crossmodal interaction. This study investigates to what extent the ability to discriminate sub-second time intervals acquired in one sensory modality can be transferred to another modality. To this end, we used perceptual classification of visual Ternus display (Ternus in Psychol Forsch 7:81-136, 1926) to implicitly measure participants' interval perception in pre- and posttests and implemented an intra- or crossmodal sub-second interval discrimination training protocol in between the tests. The Ternus display elicited either an "element motion" or a "group motion" percept, depending on the inter-stimulus interval between the two visual frames. The training protocol required participants to explicitly compare the interval length between a pair of visual, auditory, or tactile stimuli with a standard interval or to implicitly perceive the length of visual, auditory, or tactile intervals by completing a non-temporal task (discrimination of auditory pitch or tactile intensity). Results showed that after fast explicit training of interval discrimination (about 15 min), participants improved their ability to categorize the visual apparent motion in Ternus displays, although the training benefits were mild for visual timing training. However, the benefits were absent for implicit interval training protocols. This finding suggests that the timing ability in one modality can be rapidly acquired and used to improve timing-related performance in another modality and that there may exist a central clock for sub-second temporal processing, although modality-specific perceptual properties may constrain the functioning of this clock.
Dickens, G L; O'Shea, L E
2015-08-01
The Short-Term Assessment of Risk and Treatability (START) is a tool used in some mental health services to assess patients to see if they are at risk of violence, self-harm, self-neglect or victimization. The recommended time between assessments is 3 months but there is currently no evidence to show that this is best practice. We have investigated whether assessing at 1- or 2-month intervals would be more accurate and therefore facilitate more individualized risk management interventions. We found that many patients who were rated as low risk had been involved in risk behaviours before 3 months had passed; some patients who were rated at increased risk did not get involved in risk behaviours at all. Results are mixed for different outcomes but on balance, we think that the recommendation to conduct START assessment every 3 months is supported by the evidence. However, reassessment should be considered if risk behaviours are not prevented and teams should always consider whether risk management practices are too restrictive. The Short-Term Assessment of Risk and Treatability (START) guides assessment of potential adverse outcomes. Assessment is recommended every 3 months but there is no evidence for this interval. We aimed to inform whether earlier reassessment was warranted. We collated START assessments for N = 217 adults in a secure mental health hospital, and subsequent aggressive, self-harm, self-neglect and victimization incidents. We used receiver operating characteristic analysis to assess predictive validity; survival function analysis to examine differences between low-, medium-, and high-risk groups; and hazard function analysis to determine the optimum interval for reassessment. The START predicted aggression and self-harm at 1, 2 and 3 months. At-risk individuals engaged in adverse outcomes earlier than low-risk patients. 
About half warranted reassessment before 3 months due to engagement in risk behaviour before that point despite a low-risk rating, or because of non-engagement by that point despite an elevated risk rating. Risk assessment should occur at appropriate intervals so that management strategies can be individually tailored. Assessment at 3-month intervals is supported by the evidence. START assessments should be revisited earlier if risk behaviours are not prevented; teams should constantly re-evaluate the need for restrictive practices. © 2015 John Wiley & Sons Ltd.
Can We Draw General Conclusions from Interval Training Studies?
Viana, Ricardo Borges; de Lira, Claudio Andre Barbosa; Naves, João Pedro Araújo; Coswig, Victor Silveira; Del Vecchio, Fabrício Boscolo; Ramirez-Campillo, Rodrigo; Vieira, Carlos Alexandre; Gentil, Paulo
2018-04-19
Interval training (IT) has been used for many decades with the purpose of increasing performance and promoting health benefits while demanding a relatively small amount of time. IT can be defined as intermittent periods of intense exercise separated by periods of recovery and has been divided into high-intensity interval training (HIIT), sprint interval training (SIT), and repeated sprint training (RST). IT use has resulted in the publication of many studies and many of them with conflicting results and positions. The aim of this article was to move forward and understand the studies' protocols in order to draw accurate conclusions, as well as to avoid previous mistakes and effectively reproduce previous protocols. When analyzing the literature, we found many inconsistencies, such as the controversial concept of 'supramaximal' effort, a misunderstanding with regard to the term 'high intensity,' and the use of different strategies to control intensity. The adequate definition and interpretation of training intensity seems to be vital, since the results of IT are largely dependent on it. These observations are only a few examples of the complexity involved in IT prescription, and are discussed to illustrate some problems with the current literature regarding IT. Therefore, it is our opinion that it is not possible to draw general conclusions about IT without considering all variables used in IT prescription, such as exercise modality, intensity, effort and rest times, and participants' characteristics. In order to help guide researchers and health professionals in their practices it is important that experimental studies report their methods in as much detail as possible and future reviews and meta-analyses should critically discuss the articles included in the light of their methods to avoid inappropriate generalizations.
Baldi, Emilio; Baldi, Claudio; Lithgow, Brian J
2007-01-01
The question of whether a pulsed electromagnetic field (PEMF) can affect the heart rhythm is still controversial. This study investigates the effects of ELF-PEMFs on the cardiocirculatory system. It is a follow-up to an investigation of the possible therapeutic effect that ELF-PEMFs, delivered by a commercially available magnetotherapeutic unit, had on soft-tissue injury repair in humans. Modulation of heart rate (HR) or heart rate variability (HRV) can be detected from changes in the periodicity of the R-R interval and/or from changes in the number of heart-beats/min (bpm); however, R-R interval analysis gives only a quantitative insight into HRV. A qualitative understanding of HRV can be obtained by considering the power spectral density (PSD) of the Fourier transform of the R-R intervals. In this study PSD is the investigative tool used, more specifically the ratio of the low frequency (LF) PSD to the high frequency (HF) PSD (LF/HF), which is an indicator of sympatho-vagal balance. To obtain the PSD value, variations of the R-R time intervals were evaluated from a continuously recorded ECG. The results show an HR variation in all the subjects when they are exposed to the same ELF-PEMF. This variation can be detected by observing the change in the sympatho-vagal equilibrium, which is an indicator of modulation of heart activity. Variation of the LF/HF PSD ratio mainly occurs at transition times from exposure to non-exposure, or vice versa. Also of interest are the results obtained during the exposure of one subject to a range of different ELF-PEMFs. This pilot study suggests that a full investigation into the effect of ELF-PEMFs on the cardiovascular system is justified.
Annoyance to Noise Produced by a Distributed Electric Propulsion High-Lift System
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Palumbo, Daniel L.; Rathsam, Jonathan; Christian, Andrew; Rafaelof, Menachem
2017-01-01
A psychoacoustic test was performed using simulated sounds from a distributed electric propulsion aircraft concept to help understand factors associated with human annoyance. A design space spanning the number of high-lift leading edge propellers and their relative operating speeds, inclusive of time varying effects associated with motor controller error and atmospheric turbulence, was considered. It was found that the mean annoyance response varies in a statistically significant manner with the number of propellers and with the inclusion of time varying effects, but does not differ significantly with the relative RPM between propellers. An annoyance model was developed, inclusive of confidence intervals, using the noise metrics of loudness, roughness, and tonality as predictors.
Teach a Confidence Interval for the Median in the First Statistics Course
ERIC Educational Resources Information Center
Howington, Eric B.
2017-01-01
Few introductory statistics courses consider statistical inference for the median. This article argues in favour of adding a confidence interval for the median to the first statistics course. Several methods suitable for introductory statistics students are identified and briefly reviewed.
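One of the methods suitable for a first course is the distribution-free interval built from order statistics, whose exact coverage comes from the Binomial(n, 1/2) distribution. The sketch below is an illustration of that standard method, not code from the article.

```python
import math

def median_ci(data, conf=0.95):
    """Distribution-free confidence interval for the median.

    If K counts the observations below the median, then K ~ Binomial(n, 1/2),
    so the order-statistic interval [x_(k), x_(n-k+1)] has exact coverage
    1 - 2*P(K <= k-1).  We pick the largest k keeping coverage >= conf.
    """
    x = sorted(data)
    n = len(x)
    alpha = 1.0 - conf
    cdf, k = 0.0, 1  # k=1 (full range) is the fallback for tiny samples
    for i in range(n + 1):
        cdf += math.comb(n, i) / 2.0 ** n  # P(K <= i)
        if cdf <= alpha / 2.0:
            k = i + 1  # k - 1 = i still satisfies the coverage condition
        else:
            break
    return x[k - 1], x[n - k]

ci = median_ci(list(range(1, 11)))  # n = 10: the classic (x_(2), x_(9)) interval
```

For n = 10 this yields the interval from the 2nd to the 9th order statistic, with exact coverage 1 - 2(11/1024) ≈ 97.9%, a computation introductory students can verify by hand.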
Place avoidance learning and memory in a jumping spider.
Peckmezian, Tina; Taylor, Phillip W
2017-03-01
Using a conditioned passive place avoidance paradigm, we investigated the relative importance of three experimental parameters on learning and memory in a salticid, Servaea incana. Spiders encountered an aversive electric shock stimulus paired with one side of a two-sided arena. Our three parameters were the ecological relevance of the visual stimulus, the time interval between trials and the time interval before test. We paired electric shock with either a black or white visual stimulus, as prior studies in our laboratory have demonstrated that S. incana prefer dark 'safe' regions to light ones. We additionally evaluated the influence of two temporal features (time interval between trials and time interval before test) on learning and memory. Spiders exposed to the shock stimulus learned to associate shock with the visual background cue, but the extent to which they did so was dependent on which visual stimulus was present and the time interval between trials. Spiders trained with a long interval between trials (24 h) maintained performance throughout training, whereas spiders trained with a short interval (10 min) maintained performance only when the safe side was black. When the safe side was white, performance worsened steadily over time. There was no difference between spiders tested after a short (10 min) or long (24 h) interval before test. These results suggest that the ecological relevance of the stimuli used and the duration of the interval between trials can influence learning and memory in jumping spiders.
Horton, Bethany Jablonski; Wages, Nolan A.; Conaway, Mark R.
2016-01-01
Toxicity probability interval designs have received increasing attention as a dose-finding method in recent years. In this study, we compared the two-stage, likelihood-based continual reassessment method (CRM), modified toxicity probability interval (mTPI), and the Bayesian optimal interval design (BOIN) in order to evaluate each method's performance in dose selection for Phase I trials. We use several summary measures to compare the performance of these methods, including percentage of correct selection (PCS) of the true maximum tolerable dose (MTD), allocation of patients to doses at and around the true MTD, and an accuracy index. This index is an efficiency measure that describes the entire distribution of MTD selection and patient allocation by taking into account the distance between the true probability of toxicity at each dose level and the target toxicity rate. The simulation study considered a broad range of toxicity curves and various sample sizes. When considering PCS, we found that CRM outperformed the two competing methods in most scenarios, followed by BOIN, then mTPI. We observed a similar trend when considering the accuracy index for dose allocation, where CRM most often outperformed both the mTPI and BOIN. These trends were more pronounced with increasing number of dose levels. PMID:27435150
Goff, M L; Win, B H
1997-11-01
The postmortem interval for a set of human remains discovered inside a metal tool box was estimated using the development time required for a stratiomyid fly (Diptera: Stratiomyidae), Hermetia illucens, in combination with the time required to establish a colony of the ant Anoplolepsis longipes (Hymenoptera: Formicidae) capable of producing alate (winged) reproductives. This analysis resulted in a postmortem interval estimate of 14+ months, with a period of 14-18 months being the most probable time interval. The victim had been missing for approximately 18 months.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ojeda-Gonzalez, A.; Prestes, A.; Klausner, V.
Spatio-temporal entropy (STE) analysis is used as an alternative mathematical tool to identify possible magnetic cloud (MC) candidates. We analyze Interplanetary Magnetic Field (IMF) data using a time interval of only 10 days. We select a convenient data interval of 2500 records moving forward by 200 record steps until the end of the time series. For every data segment, the STE is calculated at each step. During an MC event, the STE reaches values close to zero. This extremely low value of STE is due to MC structure features. However, not all of the magnetic components in MCs have STE values close to zero at the same time. For this reason, we create a standardization index (the so-called Interplanetary Entropy, IE, index). This index represents a worthwhile effort to develop new tools to help diagnose ICME structures. The IE was calculated using a time window of one year (1999), and it has a success rate of 70% over other identifiers of MCs. The unsuccessful cases (30%) are caused by small and weak MCs. The results show that the IE methodology identified 9 of 13 MCs, and produced nine false alarms. In 1999, a total of 788 windows of 2500 values existed, meaning that the percentage of false alarms was 1.14%, which can be considered a good result. In addition, four time windows, each of 10 days, are studied, where the IE method was effective in finding MC candidates. As a novel result, two new MCs are identified in these time windows.
TIME-INTERVAL MEASURING DEVICE
Gross, J.E.
1958-04-15
An electronic device for measuring the time interval between two control pulses is presented. The device incorporates part of a previous approach for time measurement, in that pulses from a constant-frequency oscillator are counted during the interval between the control pulses. To reduce the possible error in counting caused by the operation of the counter gating circuit at various points in the pulse cycle, the described device provides means for successively delaying the pulses for a fraction of the pulse period so that a final delay of one period is obtained, and means for counting the pulses before and after each stage of delay during the time interval, whereby a plurality of totals is obtained which may be averaged and multiplied by the pulse period to obtain an accurate time-interval measurement.
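The averaging trick in this patent can be demonstrated arithmetically. The sketch below is an illustration of the principle only: the oscillator period, gate start time, interval length, and number of delay stages are invented values. Counting oscillator edges once quantizes the measurement to a whole period, while averaging counts taken with the gate staggered by successive fractions of a period shrinks the quantization error to a fraction of the period.

```python
import math

P = 10.0  # oscillator period (ns); an illustrative value, not from the patent

def count_pulses(start, gate):
    """Count oscillator edges (at times k*P) falling inside (start, start+gate]."""
    return math.floor((start + gate) / P) - math.floor(start / P)

def measure(gate, start=0.37, m=8):
    """Average m counts taken with the gate successively delayed by P/m each,
    then scale by the period.  Averaging the staggered counts shrinks the
    quantization error from up to one period P down to at most P/m."""
    total = sum(count_pulses(start + j * P / m, gate) for j in range(m))
    return P * total / m

single = count_pulses(0.37, 123.4) * P   # one count: quantized to whole periods
averaged = measure(123.4)                # staggered average: sub-period resolution
```

With these numbers, a single count reads 120 ns for a true 123.4 ns interval (an error of 3.4 ns), while the eight staggered counts average to 123.75 ns, within the P/8 = 1.25 ns bound.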
Monitoring molecular interactions using photon arrival-time interval distribution analysis
Laurence, Ted A [Livermore, CA; Weiss, Shimon [Los Angeles, CA
2009-10-06
A method for analyzing/monitoring the properties of species that are labeled with fluorophores. A detector is used to detect photons emitted from species that are labeled with one or more fluorophores and located in a confocal detection volume. The arrival time of each of the photons is determined. The interval of time between various photon pairs is then determined to provide photon pair intervals. The number of photons that have arrival times within the photon pair intervals is also determined. The photon pair intervals are then used in combination with the corresponding counts of intervening photons to analyze properties and interactions of the molecules including brightness, concentration, coincidence and transit time. The method can be used for analyzing single photon streams and multiple photon streams.
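The bookkeeping at the heart of this analysis, pairing photons and counting the detections between them, is easy to sketch. The code below is an illustration only, not the patented method: it simulates a Poissonian photon stream (the rate and stream length are invented) and computes, for each count m of intervening photons, the time interval between photon pairs separated by exactly m other detections.

```python
import numpy as np

def pair_intervals(arrival_times, max_intervening=3):
    """For m = 0..max_intervening, return the time intervals between photon
    pairs separated by exactly m intervening detections."""
    t = np.asarray(arrival_times)
    return {m: t[m + 1:] - t[: -(m + 1)] for m in range(max_intervening + 1)}

# Simulated Poissonian photon stream at 1e5 counts/s as a stand-in for
# detector data; a real stream from labeled species would replace this.
RATE = 1e5
rng = np.random.default_rng(0)
arrivals = np.cumsum(rng.exponential(1.0 / RATE, size=10_000))
ivals = pair_intervals(arrivals)
# For a pure Poisson stream the m-intervening intervals are Erlang(m+1)
# distributed with mean (m+1)/RATE, a quick sanity check on the bookkeeping.
```

Departures of the measured pair-interval distributions from this Poisson baseline are what carry the information about brightness, concentration, coincidence, and transit time.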
Regenerative Simulation of Response Times in Networks of Queues.
1979-11-01
[The abstract of this report was garbled in extraction. Recoverable fragments indicate that it concerns a network-of-queues model of system overhead (Lewis and Shedler, 1971), point estimates and confidence intervals for a quantity r(f) obtained by regenerative simulation, and a proof using the Skorohod representation theorem (Skorohod, 1956; Billingsley, 1971).]
Quasi-neutral limit of Euler–Poisson system of compressible fluids coupled to a magnetic field
NASA Astrophysics Data System (ADS)
Yang, Jianwei
2018-06-01
In this paper, we consider the quasi-neutral limit of a three-dimensional Euler-Poisson system of compressible fluids coupled to a magnetic field. We prove that, as the Debye length tends to zero, periodic initial-value problems of the model have unique smooth solutions existing in the time interval where the ideal incompressible magnetohydrodynamic equations have a smooth solution. Meanwhile, it is proved that the smooth solutions converge to solutions of the incompressible magnetohydrodynamic equations with a sharp convergence rate in the process of the quasi-neutral limit.
Is sibling rivalry fatal?: siblings and mortality clustering.
Kippen, Rebecca; Walters, Sarah
2012-01-01
Evidence drawn from nineteenth-century Belgian population registers shows that the presence of similarly aged siblings competing for resources within a household increases the probability of death for children younger than five, even when controlling for the preceding birth interval and multiple births. Furthermore, in this period of Belgian history, such mortality tended to cluster in certain families. The findings suggest the importance of segmenting the mortality of siblings younger than five by age group, of considering the presence of siblings as a time-varying covariate, and of factoring mortality clustering into analyses.
[Peculiarities of the early diagnostics of malignant nasopharyngeal neoplasms].
Baryshev, V V; Andreev, V G; Sevryukov, F E; Buyakova, M E; Akki, E D
The authors consider the risk factors and the specific clinical symptoms of malignant nasopharyngeal neoplasms, as well as the methods for the instrumental, laboratory, and pathomorphological diagnostics of this pathology. Full-scale implementation of the recommendations for the timely detection of these tumours using the aforementioned diagnostic procedures and tests makes it possible to minimize the interval between the establishment of the diagnosis and the onset of treatment at the early stages of the disease, and thereby to improve its long-term outcomes.
LETTER TO THE EDITOR: A disintegrating cosmic string
NASA Astrophysics Data System (ADS)
Griffiths, J. B.; Docherty, P.
2002-06-01
We present a simple sandwich gravitational wave of the Robinson-Trautman family. This is interpreted as representing a shock wave with a spherical wavefront which propagates into a Minkowski background minus a wedge (i.e. the background contains a cosmic string). The deficit angle (the tension) of the string decreases through the gravitational wave, which then ceases. This leaves an expanding spherical region of Minkowski space behind it. The decay of the cosmic string over a finite interval of retarded time may be considered to generate the gravitational wave.
Robust guaranteed-cost adaptive quantum phase estimation
NASA Astrophysics Data System (ADS)
Roy, Shibdas; Berry, Dominic W.; Petersen, Ian R.; Huntington, Elanor H.
2017-05-01
Quantum parameter estimation plays a key role in many fields like quantum computation, communication, and metrology. Optimal estimation allows one to achieve the most precise parameter estimates, but requires accurate knowledge of the model. Any inevitable uncertainty in the model parameters may heavily degrade the quality of the estimate. It is therefore desired to make the estimation process robust to such uncertainties. Robust estimation was previously studied for a varying phase, where the goal was to estimate the phase at some time in the past, using the measurement results from both before and after that time within a fixed time interval up to current time. Here, we consider a robust guaranteed-cost filter yielding robust estimates of a varying phase in real time, where the current phase is estimated using only past measurements. Our filter minimizes the largest (worst-case) variance in the allowable range of the uncertain model parameter(s) and this determines its guaranteed cost. It outperforms in the worst case the optimal Kalman filter designed for the model with no uncertainty, which corresponds to the center of the possible range of the uncertain parameter(s). Moreover, unlike the Kalman filter, our filter in the worst case always performs better than the best achievable variance for heterodyne measurements, which we consider as the tolerable threshold for our system. Furthermore, we consider effective quantum efficiency and effective noise power, and show that our filter provides the best results by these measures in the worst case.
[Estimation of the atrioventricular time interval by pulse Doppler in the normal fetal heart].
Hamela-Olkowska, Anita; Dangel, Joanna
2009-08-01
To assess normative values of the fetal atrioventricular (AV) time interval by pulse-wave Doppler methods on 5-chamber view. Fetal echocardiography exams were performed using Acuson Sequoia 512 in 140 singleton fetuses at 18 to 40 weeks of gestation with sinus rhythm and normal cardiac and extracardiac anatomy. Pulsed Doppler derived AV intervals were measured from left ventricular inflow/outflow view using transabdominal convex 3.5-6 MHz probe. The values of AV time interval ranged from 100 to 150 ms (mean 123 +/- 11.2). The AV interval was negatively correlated with the heart rhythm (p<0.001). Fetal heart rate decreased as gestation progressed (p<0.001). Thus, the AV intervals increased with the age of gestation (p=0.007). However, in the same subgroup of the fetal heart rate there was no relation between AV intervals and gestational age. Therefore, the AV intervals showed only the heart rate dependence. The 95th percentiles of AV intervals according to FHR ranged from 135 to 148 ms. 1. The AV interval duration was negatively correlated with the heart rhythm. 2. Measurement of AV time interval is easy to perform and has a good reproducibility. It may be used for the fetal heart block screening in anti-Ro and anti-La positive pregnancies. 3. Normative values established in the study may help obstetricians in assessing fetal abnormalities of the AV conduction.
NASA Astrophysics Data System (ADS)
Wang, Tong; Ding, Yongsheng; Zhang, Lei; Hao, Kuangrong
2016-08-01
This paper considers the synchronisation of continuous complex dynamical networks with discrete-time communications and delayed nodes. The nodes in the dynamical networks act in a continuous manner, while the communications between nodes are discrete-time; that is, nodes communicate with others only at discrete time instants. The communication intervals within a communication period can be uncertain and variable. By using a piecewise Lyapunov-Krasovskii function to govern the characteristics of the discrete communication instants, we investigate adaptive feedback synchronisation and derive a criterion that guarantees the existence of the desired controllers. Globally exponential synchronisation can be achieved by the controllers under the updating laws. Finally, two numerical examples, including a globally coupled network and nearest-neighbour coupled networks, are presented to demonstrate the validity and effectiveness of the proposed control scheme.
NASA Astrophysics Data System (ADS)
Ye, H.; Liu, F.; Turner, I.; Anh, V.; Burrage, K.
2013-09-01
Fractional partial differential equations with more than one fractional derivative in time describe some important physical phenomena, such as the telegraph equation, the power law wave equation, or the Szabo wave equation. In this paper, we consider two- and three-dimensional multi-term time and space fractional partial differential equations. The multi-term time-fractional derivative is defined in the Caputo sense, whose order belongs to the interval (1,2],(2,3],(3,4] or (0, m], and the space-fractional derivative is referred to as the fractional Laplacian form. We derive series expansion solutions based on a spectral representation of the Laplacian operator on a bounded region. Some applications are given for the two- and three-dimensional telegraph equation, power law wave equation and Szabo wave equation.
Bayesian analyses of time-interval data for environmental radiation monitoring.
Luo, Peng; Sharp, Julia L; DeVol, Timothy A
2013-01-01
Time-interval (time difference between two consecutive pulses) analysis based on the principles of Bayesian inference was investigated for online radiation monitoring. Using experimental and simulated data, Bayesian analysis of time-interval data [Bayesian (ti)] was compared with Bayesian and a conventional frequentist analysis of counts in a fixed count time [Bayesian (cnt) and single interval test (SIT), respectively]. The performances of the three methods were compared in terms of average run length (ARL) and detection probability for several simulated detection scenarios. Experimental data were acquired with a DGF-4C system in list mode. Simulated data were obtained using Monte Carlo techniques to obtain a random sampling of the Poisson distribution. All statistical algorithms were developed using the R Project for statistical computing. Bayesian analysis of time-interval information provided a similar detection probability as Bayesian analysis of count information, but the authors were able to make a decision with fewer pulses at relatively higher radiation levels. In addition, for the cases with very short presence of the source (< count time), time-interval information is more sensitive to detect a change than count information since the source data is averaged by the background data over the entire count time. The relationships of the source time, change points, and modifications to the Bayesian approach for increasing detection probability are presented.
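The core of a time-interval approach is that, under Poisson counting, the times between consecutive pulses are exponentially distributed, so each observed interval can update the odds that a source is present. The toy sketch below illustrates this Bayesian updating on interval data; it is our own illustration under that exponential assumption, not the authors' algorithm, and all names are hypothetical.

```python
import math

def posterior_source(intervals, rate_bg, rate_src_plus_bg, prior=0.5):
    """Update P(source present) from inter-pulse intervals, assuming
    exponential inter-arrival times (Poisson counting) under each hypothesis."""
    log_odds = math.log(prior / (1 - prior))
    for t in intervals:
        # Log-likelihood of one exponential interval under each rate hypothesis
        ll_src = math.log(rate_src_plus_bg) - rate_src_plus_bg * t
        ll_bg = math.log(rate_bg) - rate_bg * t
        log_odds += ll_src - ll_bg
    return 1 / (1 + math.exp(-log_odds))

# Twenty short inter-pulse intervals point strongly to the elevated rate
print(round(posterior_source([0.01] * 20, rate_bg=10.0, rate_src_plus_bg=100.0), 4))  # 1.0
```

Because the posterior is updated pulse by pulse, a decision can be reached as soon as the evidence accumulates, consistent with the abstract's observation that fewer pulses are needed at higher radiation levels.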
Fixed-interval matching-to-sample: intermatching time and intermatching error runs
Nelson, Thomas D.
1978-01-01
Four pigeons were trained on a matching-to-sample task in which reinforcers followed either the first matching response (fixed interval) or the fifth matching response (tandem fixed-interval fixed-ratio) that occurred 80 seconds or longer after the last reinforcement. Relative frequency distributions of the matching-to-sample responses that concluded intermatching times and runs of mismatches (intermatching error runs) were computed for the final matching responses directly followed by grain access and also for the three matching responses immediately preceding the final match. Comparison of these two distributions showed that the fixed-interval schedule arranged for the preferential reinforcement of matches concluding relatively extended intermatching times and runs of mismatches. Differences in matching accuracy and rate during the fixed interval, compared to the tandem fixed-interval fixed-ratio, suggested that reinforcers following matches concluding various intermatching times and runs of mismatches influenced the rate and accuracy of the last few matches before grain access, but did not control rate and accuracy throughout the entire fixed-interval period. PMID:16812032
Improved confidence intervals when the sample is counted an integer times longer than the blank.
Potter, William Edward; Strzelczyk, Jadwiga Jodi
2011-05-01
Past computer solutions for confidence intervals in paired counting are extended to the case where the ratio of the sample count time to the blank count time is taken to be an integer, IRR. Previously, confidence intervals have been named Neyman-Pearson confidence intervals; more correctly they should have been named Neyman confidence intervals or simply confidence intervals. The technique utilized mimics a technique used by Pearson and Hartley to tabulate confidence intervals for the expected value of the discrete Poisson and Binomial distributions. The blank count and the contribution of the sample to the gross count are assumed to be Poisson distributed. The expected value of the blank count, in the sample count time, is assumed known. The net count, OC, is taken to be the gross count minus the product of IRR with the blank count. The probability density function (PDF) for the net count can be determined in a straightforward manner.
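Since OC is defined as the gross count minus IRR times the blank count, its distribution follows by summing over the possible blank counts. The sketch below is an illustrative numerical rendering of that "straightforward" computation, not the authors' code; names and truncation choices are ours.

```python
from math import exp, lgamma, log

def poisson_pmf(k, mu):
    """Poisson probability mass, evaluated in log space to avoid overflow."""
    return exp(k * log(mu) - mu - lgamma(k + 1))

def net_count_pmf(n, mu_gross, mu_blank, irr, bmax=50):
    """P(OC = n) where OC = gross - IRR * blank, with gross ~ Poisson(mu_gross)
    and blank ~ Poisson(mu_blank), obtained by summing over blank counts.
    (Illustrative sketch; bmax truncates the negligible tail of the blank.)"""
    total = 0.0
    for b in range(bmax + 1):
        g = n + irr * b          # gross count implied by net count n and blank count b
        if g >= 0:
            total += poisson_pmf(g, mu_gross) * poisson_pmf(b, mu_blank)
    return total

# The net-count PMF sums to 1 over all achievable values of OC
s = sum(net_count_pmf(n, 5.0, 2.0, 2) for n in range(-100, 120))
print(round(s, 6))  # 1.0
```

Confidence intervals are then obtained by inverting this distribution over the unknown sample mean, in the spirit of the Pearson-Hartley tabulation cited above.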
The Time Is Up: Compression of Visual Time Interval Estimations of Bimodal Aperiodic Patterns
Duarte, Fabiola; Lemus, Luis
2017-01-01
The ability to estimate time intervals subserves many of our behaviors and perceptual experiences. However, it is not clear how aperiodic (AP) stimuli affect our perception of time intervals across sensory modalities. To address this question, we evaluated the human capacity to discriminate between two acoustic (A), visual (V) or audiovisual (AV) time intervals of trains of scattered pulses. We first measured the periodicity of those stimuli and then sought for correlations with the accuracy and reaction times (RTs) of the subjects. We found that, for all time intervals tested in our experiment, the visual system consistently perceived AP stimuli as being shorter than the periodic (P) ones. In contrast, such a compression phenomenon was not apparent during auditory trials. Our conclusions are: first, the subjects exposed to P stimuli are more likely to measure their durations accurately. Second, perceptual time compression occurs for AP visual stimuli. Lastly, AV discriminations are determined by A dominance rather than by AV enhancement. PMID:28848406
High Resolution ECG for Evaluation of QT Interval Variability during Exposure to Acute Hypoxia
NASA Technical Reports Server (NTRS)
Zupet, P.; Finderle, Z.; Schlegel, Todd T.; Starc, V.
2010-01-01
Ventricular repolarization instability as quantified by the index of QT interval variability (QTVI) is one of the best predictors for risk of malignant ventricular arrhythmias and sudden cardiac death. Because it is difficult to appropriately monitor early signs of organ dysfunction at high altitude, we investigated whether high-resolution advanced ECG (HR-ECG) analysis might be helpful as a non-invasive and easy-to-use tool for evaluating the risk of cardiac arrhythmias during exposure to acute hypoxia. 19 non-acclimatized healthy trained alpinists (age 37.8 plus or minus 4.7 years) participated in the study. Five-minute high-resolution 12-lead electrocardiograms (ECGs) were recorded (Cardiosoft) in each subject at rest in the supine position breathing room air and then after breathing 12.5% oxygen for 30 min. For beat-to-beat RR and QT variability, the program of Starc was utilized to derive standard time domain measures such as the root mean square of the successive interval difference (rMSSD) of RRV and QTV, the corrected QT interval (QTc) and the QTVI in lead II. Changes were evaluated with a paired-samples t-test, with p-values less than 0.05 considered statistically significant. As expected, the RR interval and its variability both decreased with increasing altitude, with p = 0.000 and p = 0.005, respectively. Significant increases were found in both the rMSSDQT and the QTVI in lead II, with p = 0.002 and p = 0.003, respectively. There was no change in QTc interval length (p = not significant). QT variability parameters may be useful for evaluating changes in ventricular repolarization caused by hypoxia. These changes might be driven by increases in sympathetic nervous system activity at the ventricular level.
Martian cratering. II - Asteroid impact history.
NASA Technical Reports Server (NTRS)
Hartmann, W. K.
1971-01-01
This paper considers the extent to which Martian craters can be explained by considering asteroidal impact. Sections I, II, and III of this paper derive the diameter distribution of hypothetical asteroidal craters on Mars from recent Palomar-Leiden asteroid statistics and show that the observed Martian craters correspond to a bombardment by roughly 100 times the present number of Mars-crossing asteroids. Section IV discusses the early bombardment history of Mars, based on the capture theory of Opik and probable orbital parameters of early planetesimals. These results show that the visible craters and surface of Mars should not be identified with the initial, accreted surface. A backward extrapolation of the impact rates based on surviving Mars-crossing asteroids can account for the majority of Mars craters over an interval of several aeons, indicating that we see back in time no further than part-way into a period of intense bombardment. An early period of erosion and deposition is thus suggested. Section V presents a comparison with results and terminology of other authors.
NASA Astrophysics Data System (ADS)
Berrada, K.; Eleuch, H.
2017-09-01
Various schemes have been proposed to improve parameter-estimation precision. In the present work, we suggest an alternative method for preserving the estimation precision by considering a model that closely describes a realistic experimental scenario. We explore this active way to control and enhance the measurement precision for a two-level quantum system interacting with a classical electromagnetic field using ultra-short strong pulses with an exact analytical solution, i.e. beyond the rotating wave approximation. In particular, we investigate the variation of the precision with a few-cycle pulse and a smooth phase jump over a finite time interval. We show that by acting on the shape of the phase transient and other parameters of the considered system, the amount of information may be increased and may decay more slowly at long times. These features make two-level systems driven by ultra-short, off-resonant pulses with gradually changing phase good candidates for implementing schemes for quantum computation and coherent information processing.
NASA Astrophysics Data System (ADS)
Li, Yi; Xu, Yan Long
2018-05-01
When the dependence of a function on uncertain variables is non-monotonic over an interval, the interval of the function obtained by the classic interval extension based on the first-order Taylor series exhibits significant errors. In order to reduce these errors, an improved form of the interval extension with the first-order Taylor series is developed here that takes the monotonicity of the function into account. Two typical mathematical examples are given to illustrate the methodology. The vibration of a beam with lumped masses is studied to demonstrate the usefulness of the method in practical applications; the only input data required are the function value at the central point of the interval and the sensitivity and deviation of the function. The results of the above examples show that the interval of the function given by the method developed in this paper is more accurate than that obtained by the classic method.
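The contrast behind the abstract can be seen in a toy example: the classic first-order Taylor extension bounds the range from a midpoint value and derivative, while for a function known to be monotonic on the interval, the exact range is simply given by the endpoint values. This is an illustration of why monotonicity information helps, not the paper's exact formulation.

```python
def taylor_interval(f, df, lo, hi):
    """Classic first-order Taylor interval extension around the midpoint."""
    xc, r = (lo + hi) / 2, (hi - lo) / 2
    fc = f(xc)
    return fc - abs(df(xc)) * r, fc + abs(df(xc)) * r

def monotone_interval(f, lo, hi):
    """If f is monotonic on [lo, hi], its exact range is given by the endpoints."""
    a, b = f(lo), f(hi)
    return (min(a, b), max(a, b))

# f(x) = x**2 is monotonic increasing on [1, 2]; its exact range is [1, 4].
f = lambda x: x * x
print(taylor_interval(f, lambda x: 2 * x, 1.0, 2.0))  # (0.75, 3.75)
print(monotone_interval(f, 1.0, 2.0))                 # (1.0, 4.0)
```

On this example the Taylor extension both misses part of the true range and includes spurious values, while the endpoint evaluation recovers it exactly.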
Watanabe, Yukari; Watanabe, Takamitsu
2017-10-01
Head injury is considered as a potential risk factor for amyotrophic lateral sclerosis (ALS). However, several recent studies have suggested that head injury is not a cause, but a consequence of latent ALS. We aimed to evaluate such a possibility of reverse causation with meta-analyses considering time lags between the incidence of head injuries and the occurrence of ALS. We searched Medline and Web of Science for case-control, cross-sectional, or cohort studies that quantitatively investigated the head-injury-related risk of ALS and were published until 1 December 2016. After selecting appropriate publications based on PRISMA statement, we performed random-effects meta-analyses to calculate odds ratios (ORs) and 95% confidence intervals (CI). Sixteen of 825 studies fulfilled the eligibility criteria. The association between head injuries and ALS was statistically significant when the meta-analysis included all the 16 studies (OR 1.45, 95% CI 1.21-1.74). However, in the meta-analyses considering the time lags between the experience of head injuries and diagnosis of ALS, the association was weaker (OR 1.21, 95% CI 1.01-1.46, time lag ≥ 1 year) or not significant (e.g. OR 1.16, 95% CI 0.84-1.59, time lag ≥ 3 years). Although it did not deny associations between head injuries and ALS, the current study suggests a possibility that such a head-injury-oriented risk of ALS has been somewhat overestimated. For more accurate evaluation, it would be necessary to conduct more epidemiological studies that consider the time lags between the occurrence of head injuries and the diagnosis of ALS.
Time estimation by patients with frontal lesions and by Korsakoff amnesics.
Mimura, M; Kinsbourne, M; O'Connor, M
2000-07-01
We studied time estimation in patients with frontal damage (F) and alcoholic Korsakoff (K) patients in order to differentiate between the contributions of working memory and episodic memory to temporal cognition. In Experiment 1, F and K patients estimated time intervals between 10 and 120 s less accurately than matched normal and alcoholic control subjects. F patients were less accurate than K patients at short (< 1 min) time intervals whereas K patients increasingly underestimated durations as intervals grew longer. F patients overestimated short intervals in inverse proportion to their performance on the Wisconsin Card Sorting Test. As intervals grew longer, overestimation yielded to underestimation for F patients. Experiment 2 involved time estimation while counting at a subjective 1/s rate. F patients' subjective tempo, though relatively rapid, did not fully explain their overestimation of short intervals. In Experiment 3, participants produced predetermined time intervals by depressing a mouse key. K patients underproduced longer intervals. F patients produced comparably to normal participants, but were extremely variable. Findings suggest that both working memory and episodic memory play an individual role in temporal cognition. Turnover within a short-term working memory buffer provides a metric for temporal decisions. The depleted working memory that typically attends frontal dysfunction may result in quicker turnover, and this may inflate subjective duration. On the other hand, temporal estimation beyond 30 s requires episodic remembering, and this puts K patients at a disadvantage.
Method and apparatus for assessing cardiovascular risk
NASA Technical Reports Server (NTRS)
Albrecht, Paul (Inventor); Bigger, J. Thomas (Inventor); Cohen, Richard J. (Inventor)
1998-01-01
The method for assessing risk of an adverse clinical event includes detecting a physiologic signal in the subject and determining from the physiologic signal a sequence of intervals corresponding to the time intervals between heart beats. The long-time structure of fluctuations in the intervals over a time period of more than fifteen minutes is analyzed to assess the risk of an adverse clinical event. In a preferred embodiment, the physiologic signal is an electrocardiogram and the time period is at least fifteen minutes. A preferred method for analyzing the long-time structure of variability in the intervals includes computing the power spectrum and fitting the power spectrum to a power-law dependence on frequency over a selected frequency range, such as 10^-4 to 10^-2 Hz. Characteristics of the long-time structure of fluctuations in the intervals are used to assess the risk of an adverse clinical event.
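The power-law fit described in the patent amounts to a linear regression in log-log coordinates over the selected frequency band. The sketch below illustrates such a fit on hypothetical synthetic data; it is not the inventors' implementation, and the function name is ours.

```python
import numpy as np

def power_law_slope(freqs, psd, f_lo=1e-4, f_hi=1e-2):
    """Fit PSD ~ f**beta over [f_lo, f_hi] by linear regression in log-log space."""
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    slope, _intercept = np.polyfit(np.log10(freqs[mask]), np.log10(psd[mask]), 1)
    return slope

# Synthetic check: a spectrum constructed with a known exponent of -1
f = np.logspace(-4, -2, 200)
print(round(power_law_slope(f, f ** -1.0), 3))  # -1.0
```

The fitted exponent (and how well the power law holds) would then serve as the risk characteristic of the long-time interval fluctuations.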
A time reversal algorithm in acoustic media with Dirac measure approximations
NASA Astrophysics Data System (ADS)
Bretin, Élie; Lucas, Carine; Privat, Yannick
2018-04-01
This article is devoted to the study of a photoacoustic tomography model in which one considers the solution of the acoustic wave equation with a source term written as a separated-variables function in time and space, whose temporal component is in some sense close to the derivative of the Dirac distribution at t = 0. This models a continuous-wave laser illumination performed during a short interval of time. We introduce an algorithm for reconstructing the space component of the source term from the measure of the solution recorded by sensors during a time T along the boundary of a connected bounded domain. It is based on the introduction of an auxiliary equivalent Cauchy problem, which allows us to derive an explicit reconstruction formula, followed by a deconvolution procedure. Numerical simulations illustrate our approach. Finally, the algorithm is also extended to elasticity wave systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, Ziyang; Yang, Tao; Li, Guoqi
Here, we study synchronization of coupled linear systems over networks with weak connectivity and nonuniform time-varying delays. We focus on the case where the internal dynamics are time-varying but non-expansive (stable dynamics with a quadratic Lyapunov function). Both uniformly jointly connected and infinitely jointly connected communication topologies are considered. A new concept of quadratic synchronization is introduced. We first show that global asymptotic quadratic synchronization can be achieved over directed networks with uniform joint connectivity and arbitrarily bounded delays. We then study the case of infinitely jointly connected communication topology. In particular, for undirected communication topologies, it turns out that the existence of a uniform time interval for the jointly connected communication topology is not necessary, and quadratic synchronization can be achieved when the time-varying nonuniform delays are arbitrarily bounded. Finally, simulation results are provided to validate the theoretical results.
Meng, Ziyang; Yang, Tao; Li, Guoqi; ...
2017-09-18
Francq, Bernard G; Govaerts, Bernadette
2016-06-30
Two main methodologies for assessing equivalence in method-comparison studies are presented separately in the literature. The first one is the well-known and widely applied Bland-Altman approach with its agreement intervals, where two methods are considered interchangeable if their differences are not clinically significant. The second approach is based on errors-in-variables regression in a classical (X,Y) plot and focuses on confidence intervals, whereby two methods are considered equivalent when providing similar measures notwithstanding the random measurement errors. This paper reconciles these two methodologies and shows their similarities and differences using both real data and simulations. A new consistent correlated-errors-in-variables regression is introduced, as the errors are shown to be correlated in the Bland-Altman plot. Indeed, the coverage probabilities collapse and the biases soar when this correlation is ignored. Novel tolerance intervals are compared with agreement intervals with or without replicated data, and novel predictive intervals are introduced to predict a single measure in an (X,Y) plot or in a Bland-Altman plot with excellent coverage probabilities. We conclude that the (correlated-)errors-in-variables regressions should not be avoided in method-comparison studies, although the Bland-Altman approach is usually applied to avert their complexity. We argue that tolerance or predictive intervals are better alternatives than agreement intervals, and we provide guidelines for practitioners regarding method-comparison studies. Copyright © 2016 John Wiley & Sons, Ltd.
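The Bland-Altman agreement interval referred to above is simple to compute: it is the mean of the paired differences plus or minus roughly two standard deviations. The sketch below illustrates the standard 95% limits of agreement, assuming paired measurements from two methods; the function name is ours.

```python
import statistics

def bland_altman_limits(x, y, k=1.96):
    """Bland-Altman 95% limits of agreement for paired measurements x and y."""
    diffs = [a - b for a, b in zip(x, y)]
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)   # sample standard deviation (needs >= 2 pairs)
    return mean_d - k * sd_d, mean_d + k * sd_d

# Two methods differing by a constant offset of 1.0 agree perfectly otherwise,
# so both limits collapse onto the offset.
print(bland_altman_limits([1.0, 2.0, 3.0, 4.0], [0.0, 1.0, 2.0, 3.0]))  # (1.0, 1.0)
```

The paper's point is precisely that these limits describe the differences themselves, whereas the regression-based confidence, tolerance, and predictive intervals answer distinct equivalence questions.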
Flegar-Meštrić, Zlata; Perkov, Sonja; Radeljak, Andrea
2016-03-26
Considering the fact that the results of laboratory tests provide useful information about the state of health of patients, the determination of reference values is considered an intrinsic part of the development of laboratory medicine. There are still huge differences in the analytical methods used, as well as in the associated reference intervals, which can significantly affect the proper assessment of patient health. In a constant effort to increase the quality of patient care, there are numerous international initiatives for the standardization and/or harmonization of laboratory diagnostics in order to achieve maximum comparability of laboratory test results and improve patient safety. Through the standardization and harmonization of analytical methods, the ability to create unique reference intervals is achieved. Such reference intervals could be applied globally in all laboratories using methods traceable to the same reference measuring system and analysing biological samples from populations with similar socio-demographic and ethnic characteristics. In this review we outline the results of the harmonization processes in Croatia in the field of population-based reference intervals for clinically relevant blood and serum constituents, which are in accordance with the ongoing activity for worldwide standardization and harmonization based on traceability in laboratory medicine.
Metastability versus collapse following a quench in attractive Bose-Einstein condensates
NASA Astrophysics Data System (ADS)
Golde, Jake; Ruhl, Joanna; Olshanii, Maxim; Dunjko, Vanja; Datta, Sumita; Malomed, Boris A.
2018-05-01
We consider a Bose-Einstein condensate (BEC) with attractive two-body interactions in a cigar-shaped trap, initially prepared in its ground state for a given negative scattering length, which is quenched to a larger absolute value of the scattering length. Using the mean-field approximation, we compute numerically, for an experimentally relevant range of aspect ratios and initial strengths of the coupling, two critical values of quench. One corresponds to the weakest attraction strength, the quench to which causes the system to collapse before completing even a single return from the narrow configuration (pericenter) in its breathing cycle. The other is a similar critical point for the occurrence of collapse before completing two returns. In the latter case, we also compute the limiting value, as we keep increasing the strength of the postquench attraction towards its critical value, of the time interval between the first two pericenters. We also use a Gaussian variational model to estimate the critical quenched attraction strength below which the system is stable against the collapse for long times. These time intervals and critical attraction strengths, apart from being fundamental properties of nonlinear dynamics of self-attractive BECs, may provide clues to the design of upcoming experiments that are trying to create robust BEC breathers.
Modeling Grade IV Gas Emboli using a Limited Failure Population Model with Random Effects
NASA Technical Reports Server (NTRS)
Thompson, Laura A.; Conkin, Johnny; Chhikara, Raj S.; Powell, Michael R.
2002-01-01
Venous gas emboli (VGE) (gas bubbles in venous blood) are associated with an increased risk of decompression sickness (DCS) in hypobaric environments. A high grade of VGE can be a precursor to serious DCS. In this paper, we model time to Grade IV VGE considering a subset of individuals assumed to be immune from experiencing VGE. Our data contain monitoring test results from subjects undergoing up to 13 denitrogenation test procedures prior to exposure to a hypobaric environment. The onset time of Grade IV VGE is recorded as contained within certain time intervals. We fit a parametric (lognormal) mixture survival model to the interval- and right-censored data to account for the possibility of a subset of "cured" individuals who are immune to the event. Our model contains random subject effects to account for correlations between repeated measurements on a single individual. Model assessments and cross-validation indicate that this limited failure population mixture model is an improvement over a model that does not account for the potential of a fraction of cured individuals. We also evaluated some alternative mixture models. Predictions from the best fitted mixture model indicate that the actual process is reasonably approximated by a limited failure population model.
Heilbronner, Sarah R.; Meck, Warren H.
2014-01-01
The goal of our study was to characterize the relationship between intertemporal choice and interval timing, including determining how drugs that modulate brain serotonin and dopamine levels influence these two processes. In Experiment 1, rats were tested on a standard 40-s peak-interval procedure following administration of fluoxetine (3, 5, or 8 mg/kg) or vehicle to assess basic effects on interval timing. In Experiment 2, rats were tested in a novel behavioral paradigm intended to simultaneously examine interval timing and impulsivity. Rats performed a variant of the bi-peak procedure using 10-s and 40-s target durations with an additional “defection” lever that provided the possibility of a small, immediate reward. Timing functions remained relatively intact, and ‘patience’ across subjects correlated with peak times, indicating a negative relationship between ‘patience’ and clock speed. We next examined the effects of fluoxetine (5 mg/kg), cocaine (15 mg/kg), or methamphetamine (1 mg/kg) on task performance. Fluoxetine reduced impulsivity as measured by defection time without corresponding changes in clock speed. In contrast, cocaine and methamphetamine both increased impulsivity and clock speed. Thus, variations in timing may mediate intertemporal choice via dopaminergic inputs. However, a separate, serotonergic system can affect intertemporal choice without affecting interval timing directly. PMID:24135569
NASA Astrophysics Data System (ADS)
Zamri, Nurnadiah; Abdullah, Lazim
2014-06-01
Flood control project selection is a complex issue that takes economic, social, environmental and technical attributes into account. Selecting the best flood control project requires the consideration of conflicting quantitative and qualitative evaluation criteria. When decision-makers' judgments are made under uncertainty, it is relatively difficult for them to provide exact numerical values. The interval type-2 fuzzy set (IT2FS) is a strong tool for dealing with such subjective, incomplete, and vague information. Besides, it helps to solve some situations where the information about criteria weights for alternatives is completely unknown. Therefore, this paper adopts the interval type-2 entropy concept in the weighting process of interval type-2 fuzzy TOPSIS. This entropy weight is believed to effectively balance the influence of uncertainty factors when evaluating attributes. A modified ranking value is then proposed in line with the interval type-2 entropy weight. Quantitative and qualitative factors normally linked with flood control projects are considered for ranking. Data in the form of interval type-2 linguistic variables were collected from three authorised personnel of three Malaysian Government agencies. The study covers the whole of Malaysia. The analysis shows that the diversion scheme yielded the highest closeness coefficient, at 0.4807. A ranking can be drawn using the magnitude of the closeness coefficient, and the diversion scheme ranked first among the five alternatives.
Hanley, Gillian E; Janssen, Patricia A
2013-11-01
We aimed to determine whether ethnicity-specific birthweight distributions more accurately identify newborns at risk for short-term neonatal morbidity associated with small for gestational age (SGA) birth than population-based distributions not stratified on ethnicity. We examined 100,463 singleton term infants born to parents in Washington State between Jan. 1, 2006, and Dec. 31, 2008. Using multivariable logistic regression models, we compared the ability of an ethnicity-specific growth distribution and a population-based growth distribution to predict which infants were at increased risk for Apgar score <7 at 5 minutes, admission to the neonatal intensive care unit, ventilation, extended length of stay in hospital, hypothermia, hypoglycemia, and infection. Newborns considered SGA by ethnicity-specific weight distributions had the highest rates of each of the adverse outcomes assessed: more than double those of infants considered SGA only by the population-based standards. When controlling for mother's age, parity, body mass index, education, gestational age, mode of delivery, and marital status, newborns considered SGA by ethnicity-specific birthweight distributions were between 2 and 7 times more likely to suffer from the adverse outcomes listed above than infants who were not SGA. In contrast, newborns considered SGA by population-based birthweight distributions alone were at no higher risk of any adverse outcome except hypothermia (adjusted odds ratio, 2.76; 95% confidence interval, 1.68-4.55) and neonatal intensive care unit admission (adjusted odds ratio, 1.40; 95% confidence interval, 1.18-1.67). Ethnicity-specific birthweight distributions were significantly better at identifying the infants at higher risk of short-term neonatal morbidity, suggesting that their use could save resources and unnecessary parental anxiety. Copyright © 2013 Mosby, Inc. All rights reserved.
Characterizing the graded structure of false killer whale (Pseudorca crassidens) vocalizations.
Murray, S O; Mercado, E; Roitblat, H L
1998-09-01
The vocalizations from two captive false killer whales (Pseudorca crassidens) were analyzed. The structure of the vocalizations was best modeled as lying along a continuum, with trains of discrete, exponentially damped sinusoidal pulses at one end and continuous sinusoidal signals at the other. Pulse trains were graded as a function of the interval between pulses, where the minimum interval between pulses could be zero milliseconds. The transition from a pulse train with no inter-pulse interval to a whistle could be modeled by gradations in the degree of damping. There were many examples of vocalizations that were gradually modulated from pulse trains to whistles. There were also vocalizations that showed rapid shifts in signal type, for example switching immediately from a whistle to a pulse train. These data have implications when considering both the possible function(s) of the vocalizations and the potential sound production mechanism(s). A short-time duty cycle measure was developed to characterize the graded structure of the vocalizations. A random sample of 500 vocalizations was characterized by combining the duty cycle measure with peak frequency measurements. The analysis method proved to be an effective metric for describing the graded structure of false killer whale vocalizations.
Kamran, Haroon; Salciccioli, Louis; Pushilin, Sergei; Kumar, Paraag; Carter, John; Kuo, John; Novotney, Carol; Lazar, Jason M
2011-01-01
Nonhuman primates are used frequently in cardiovascular research. Cardiac time intervals derived by phonocardiography have long been used to assess left ventricular function. Electronic stethoscopes are simple low-cost systems that display heart sound signals. We assessed the use of an electronic stethoscope to measure cardiac time intervals in 48 healthy bonnet macaques (age, 8 ± 5 y) based on recorded heart sounds. Technically adequate recordings were obtained from all animals and required 1.5 ± 1.3 min. The following cardiac time intervals were determined by simultaneously recording acoustic and single-lead electrocardiographic data: electromechanical activation time (QS1), electromechanical systole (QS2), the time interval between the first and second heart sounds (S1S2), and the time interval between the second and first sounds (S2S1). QS2 was correlated with heart rate, mean arterial pressure, diastolic blood pressure, and left ventricular ejection time determined by using echocardiography. S1S2 correlated with heart rate, mean arterial pressure, diastolic blood pressure, left ventricular ejection time, and age. S2S1 correlated with heart rate, mean arterial pressure, diastolic blood pressure, systolic blood pressure, and left ventricular ejection time. QS1 did not correlate with any anthropometric or echocardiographic parameter. The ratio S1S2/S2S1 correlated with systolic blood pressure. On multivariate analyses, heart rate was the only independent predictor of QS2, S1S2, and S2S1. In conclusion, determination of cardiac time intervals is feasible and reproducible by using an electronic stethoscope in nonhuman primates. Heart rate is a major determinant of QS2, S1S2, and S2S1 but not QS1; regression equations for reference values for cardiac time intervals in bonnet macaques are provided. PMID:21439218
Maternal child-feeding practices and dietary inadequacy of 4-year-old children.
Durão, Catarina; Andreozzi, Valeska; Oliveira, Andreia; Moreira, Pedro; Guerra, António; Barros, Henrique; Lopes, Carla
2015-09-01
This study aimed to evaluate the association between maternal perceived responsibility and child-feeding practices and dietary inadequacy of 4-year-old children. We studied 4122 mothers and children enrolled in the population-based birth cohort - Generation XXI (Porto, Portugal). Mothers self-completed the Child Feeding Questionnaire and a scale on covert and overt control, and answered a food frequency questionnaire in face-to-face interviews. Using dietary guidelines for preschool children, adequacy intervals were defined: fruit and vegetables (F&V) 4-7 times/day; dairy 3-5 times/day; meat and eggs 5-10 times/week; fish 2-4 times/week. Inadequacy was considered as intake below or above these cut-points. For energy-dense micronutrient-poor foods and beverages (EDF), a tolerable limit was defined (<6 times/week). Associations between maternal perceived responsibility and child-feeding practices (restriction, monitoring, pressure to eat, overt and covert control) and children's diet were examined by logistic regression models. After adjustment for maternal BMI, education, and diet, and children's characteristics (sex, BMI z-scores), restriction, monitoring, and overt and covert control were associated with 11-18% lower odds of F&V consumption below the interval defined as adequate. Overt control was also associated with 24% higher odds of consumption above it. Higher perceived responsibility was associated with higher odds of children consuming F&V and dairy above recommendations. Pressure to eat was positively associated with consumption of dairy above the adequate interval. Except for pressure to eat, maternal practices were associated with 14-27% lower odds of inadequate consumption of EDF. In conclusion, children whose mothers had higher levels of covert control, monitoring, and restriction were less likely to consume F&V below recommendations and EDF above tolerable limits.
Higher overt control and pressure to eat were associated, respectively, with a higher likelihood of children consuming F&V and dairy above recommendations. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Duffy, J. F.; Dijk, D. J.; Hall, E. F.; Czeisler, C. A.
1999-01-01
BACKGROUND: Morningness-eveningness refers to interindividual differences in preferred timing of behavior (i.e., bed and wake times). Older people have earlier wake times and rate themselves as more morning-like than young adults. It has been reported that the phase of circadian rhythms is earlier in morning types than in evening types, and that older people have earlier phases than young adults. These changes in phase have been considered to be the chronobiological basis of differences in preferred bed and wake times and age-related changes therein. Whether such differences in phase are associated with changes in the phase relationship between endogenous circadian rhythms and the sleep-wake cycle has not been investigated previously. METHODS: We investigated the association between circadian phase, the phase relationship between the sleep-wake cycle and circadian rhythms, and morningness-eveningness, and their interaction with aging. In this circadian rhythm study, 68 young and 40 older subjects participated. RESULTS: Among the young subjects, the phase of the melatonin and core temperature rhythms occurred earlier in morning types than in evening types, and the interval between circadian phase and usual wake time was longer in morning types. Thus, while evening types woke at a later clock hour than morning types, morning types actually woke at a later circadian phase. Comparing young and older morning types, we found that older morning types had an earlier circadian phase and a shorter phase-wake time interval. The shorter phase-wake time interval in older "morning types" is opposite to the change associated with morningness in young people, and is more similar to that of young evening types. CONCLUSIONS: These findings demonstrate an association between circadian phase, the relationship between the sleep-wake cycle and circadian phase, and morningness-eveningness in young adults. 
Furthermore, they demonstrate that age-related changes in phase angle cannot be attributed fully to an age-related shift toward morningness. These findings have important implications for understanding individual preferences in sleep-wake timing and age-related changes in the timing of sleep.
NASA Astrophysics Data System (ADS)
Lima, C. H.; Lall, U.
2010-12-01
Flood frequency statistical analysis most often relies on stationarity assumptions, under which distribution moments (e.g. mean, standard deviation) and the associated flood quantiles do not change over time. In this sense, one expects that flood magnitudes and their frequency of occurrence will remain constant, as observed in the historical record. However, evidence of inter-annual and decadal climate variability and anthropogenic change, as well as an apparent increase in the number and magnitude of flood events across the globe, have made the stationarity assumption questionable. Here, we show how to estimate flood quantiles (e.g. the 100-year flood) at ungauged basins without needing to assume stationarity. A statistical model based on the well known flow-area scaling law is proposed to estimate flood flows at ungauged basins. The slope and intercept scaling law coefficients are assumed to be time varying, and a hierarchical Bayesian model is used to include climate information and reduce parameter uncertainties. Cross-validated results from 34 streamflow gauges located in a nested basin in Brazil show that the proposed model is able to estimate flood quantiles at ungauged basins with remarkable skill compared with data-based estimates using the full record. The model as developed in this work is also able to simulate sequences of flood flows under global climate change, provided an appropriate climate index derived from a General Circulation Model is used as a predictor. The time-varying flood frequency estimates can be used for pricing insurance models, in a forecast mode for flood preparations, and for the timing and location of infrastructure investments.
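The flow-area scaling law at the heart of the abstract above can be illustrated with a static least-squares toy version. Flood quantiles are assumed to follow Q = c * A**theta, so coefficients fitted on gauged basins transfer a quantile to an ungauged basin from its drainage area alone; the actual model makes (c, theta) time-varying within a hierarchical Bayesian framework with climate predictors, and all numbers below are made up for illustration.

```python
import numpy as np

# Illustrative static version of the flow-area scaling law Q = c * A**theta.
# The paper's model is hierarchical Bayesian with time-varying coefficients;
# this ordinary least-squares fit on synthetic data only shows the idea.

rng = np.random.default_rng(5)
areas = rng.uniform(100.0, 10000.0, 34)          # 34 gauged basins (km^2)
true_c, true_theta = 0.8, 0.75
q100 = true_c * areas**true_theta * np.exp(rng.normal(0.0, 0.1, 34))

# Ordinary least squares in log-log space: log Q = log c + theta * log A.
X = np.column_stack([np.ones_like(areas), np.log(areas)])
coef, *_ = np.linalg.lstsq(X, np.log(q100), rcond=None)
c_hat, theta_hat = np.exp(coef[0]), coef[1]

# Transfer the fitted law to an ungauged basin of 2500 km^2.
q100_ungauged = c_hat * 2500.0**theta_hat
```

Because the relation is linear in log-log space, the 100-year flood at the ungauged basin needs only its area once the scaling coefficients are estimated from the gauged network.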
Symbol interval optimization for molecular communication with drift.
Kim, Na-Rae; Eckford, Andrew W; Chae, Chan-Byoung
2014-09-01
In this paper, we propose a symbol interval optimization algorithm for molecular communication with drift. Proper symbol intervals are important in practical communication systems, since information needs to be sent as fast as possible with low error rates. There is a trade-off, however, between symbol intervals and inter-symbol interference (ISI) arising from Brownian motion. Thus, we find proper symbol interval values considering the ISI inside two kinds of blood vessels, and also propose a no-ISI system for strong drift models. Finally, isomer-based molecule shift keying (IMoSK) is applied to calculate achievable data transmission rates (achievable rates, hereafter). Normalized achievable rates are also obtained and compared in one-symbol ISI and no-ISI systems.
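The trade-off in the abstract above can be made concrete with a standard result: the first arrival time of a molecule carried by Brownian motion with drift follows an inverse Gaussian distribution, so the fraction of molecules arriving after the symbol interval T (which spills into the next symbol and causes ISI) shrinks as T grows. The distances and coefficients below are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.stats import invgauss

# First-passage time of Brownian motion with drift v over distance d with
# diffusion coefficient D is inverse Gaussian with mean d/v and shape
# d**2 / (2*D). The survival function at the symbol interval T gives the
# fraction of molecules that arrive late (inter-symbol interference).
# All numbers are illustrative assumptions.

d = 1e-5        # distance to receiver (m)
v = 1e-5        # drift velocity (m/s)
D = 1e-10       # diffusion coefficient (m^2/s)

mean_arrival = d / v                 # inverse Gaussian mean (s)
lam = d**2 / (2.0 * D)               # inverse Gaussian shape parameter
# scipy parameterization: invgauss(mu=mean/shape, scale=shape)
arrival = invgauss(mean_arrival / lam, scale=lam)

for T in (0.5, 1.0, 2.0, 4.0):       # candidate symbol intervals (s)
    isi_prob = arrival.sf(T)         # molecules spilling into next symbol
    print(f"T = {T:4.1f} s  ISI probability = {isi_prob:.3f}")
```

Longer symbol intervals monotonically reduce the ISI probability but lower the symbol rate, which is exactly the optimization the paper addresses.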
NASA Astrophysics Data System (ADS)
Olafsdottir, Kristin B.; Mudelsee, Manfred
2013-04-01
Estimation of Pearson's correlation coefficient between two time series, to evaluate the influence of one time-dependent variable on another, is one of the most frequently used statistical methods in the climate sciences. Various methods are used to estimate a confidence interval to support the correlation point estimate. Many of them make strong mathematical assumptions regarding distributional shape and serial correlation, which are rarely met. More robust statistical methods are needed to increase the accuracy of the confidence intervals. Bootstrap confidence intervals are estimated in the Fortran 90 program PearsonT (Mudelsee, 2003), whose main intention was to obtain an accurate confidence interval for the correlation coefficient between two time series by taking into account the serial dependence of the process that generated the data. However, Monte Carlo experiments show that the coverage accuracy for smaller data sizes can be improved. Here we adapt the PearsonT program into a new version, called PearsonT3, by calibrating the confidence interval to increase the coverage accuracy. Calibration is a bootstrap resampling technique that essentially performs a second bootstrap loop, resampling from the bootstrap resamples. Like the non-calibrated bootstrap confidence intervals, it offers robustness against the data distribution. A pairwise moving block bootstrap is used to preserve the serial correlation of both time series. The calibration is applied to standard-error-based bootstrap Student's t confidence intervals. The performance of the calibrated confidence intervals is examined with Monte Carlo simulations and compared with that of confidence intervals without calibration, that is, PearsonT. The coverage accuracy is evidently better for the calibrated confidence intervals, with a coverage error that is acceptably small (i.e., within a few percentage points) already for data sizes as small as 20. 
One form of climate time series is the output of numerical models which simulate the climate system. The method is applied to model data from the high-resolution ocean model INALT01, where the relationship between the Agulhas Leakage and the North Brazil Current is evaluated. Preliminary results show significant correlation between the two variables when there is a 10 year lag between them, which is more or less the time it takes the Agulhas Leakage water to reach the North Brazil Current. Mudelsee, M., 2003. Estimating Pearson's correlation coefficient with bootstrap confidence interval from serially dependent time series. Mathematical Geology 35, 651-665.
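The pairwise moving block bootstrap underlying PearsonT can be sketched compactly. This is a minimal illustration, not the PearsonT implementation: the calibration step that defines PearsonT3 (a second bootstrap loop over every resample) is omitted for brevity, and the block-length rule n**(1/3) is a crude assumption.

```python
import numpy as np

# Minimal sketch of a pairwise moving block bootstrap confidence interval
# for the Pearson correlation of two serially dependent series, in the
# spirit of PearsonT (Mudelsee, 2003). PearsonT3's calibration loop is
# omitted; the block-length rule below is a simplistic assumption.

def block_bootstrap_ci(x, y, n_boot=2000, seed=1):
    n = len(x)
    block = max(1, int(round(n ** (1.0 / 3.0))))
    rng = np.random.default_rng(seed)
    r_hat = np.corrcoef(x, y)[0, 1]
    r_boot = np.empty(n_boot)
    for b in range(n_boot):
        # Resample the same blocks from both series (pairwise), so the
        # serial and cross correlation within blocks is preserved.
        starts = rng.integers(0, n - block + 1, size=-(-n // block))
        idx = np.concatenate([np.arange(s, s + block) for s in starts])[:n]
        r_boot[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    se = r_boot.std(ddof=1)
    # Normal-approximation interval from the bootstrap standard error
    # (PearsonT uses a Student's t version with an effective sample size).
    return r_hat - 1.96 * se, r_hat + 1.96 * se

# Two AR(1) series driven by a common forcing: true correlation near 0.8.
rng = np.random.default_rng(42)
n = 300
common = rng.normal(size=n)
x, y = np.zeros(n), np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + common[t] + 0.5 * rng.normal()
    y[t] = 0.6 * y[t - 1] + common[t] + 0.5 * rng.normal()
lo, hi = block_bootstrap_ci(x, y)
```

Resampling blocks rather than single points is what keeps the interval honest under serial dependence; an i.i.d. bootstrap on the same data would understate the standard error.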
Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi
2016-02-01
Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution using as data-points either the lower, mid or upper bounds of sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fittings to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull distributions) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) in relation to original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations that ranged from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates. 
We recommend the use of cumulative probability to fit parametric probability distributions to propagule retention time, specifically using maximum likelihood for parameter estimation. Furthermore, the experimental design for an optimal characterization of unimodal propagule retention time should contemplate at least 500 recovered propagules and sampling time-intervals not larger than the time peak of propagule retrieval, except in the tail of the distribution where broader sampling time-intervals may also produce accurate fits.
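The recommended cumulative-distribution fitting approach can be sketched as follows. Retention times are observed only as counts per sampling interval, and a parametric CDF is fitted to the empirical cumulative proportions at the interval upper bounds; non-linear least squares is used here, while the authors recommend maximum likelihood for parameter estimation. The lognormal parameters and the 2 h sampling intervals are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import lognorm

# Sketch of fitting a parametric distribution to interval-censored
# retention times via the cumulative distribution, as recommended above.
# Illustrative assumptions: lognormal retention times, 2 h sampling bins.

rng = np.random.default_rng(3)
true_sigma, true_scale = 0.6, 4.0          # lognormal retention times (h)
times = lognorm.rvs(true_sigma, scale=true_scale, size=500, random_state=rng)

edges = np.arange(0.0, 25.0, 2.0)          # sampling every 2 h up to 24 h
counts, _ = np.histogram(times, bins=edges)
cum_prop = np.cumsum(counts) / len(times)  # cumulative proportion recovered

def lognorm_cdf(t, sigma, scale):
    return lognorm.cdf(t, sigma, scale=scale)

# Fit the CDF to the cumulative proportions at the interval upper bounds.
popt, _ = curve_fit(lognorm_cdf, edges[1:], cum_prop, p0=[1.0, 3.0])
sigma_hat, scale_hat = popt
```

With 500 simulated propagules, in line with the sample size the abstract recommends, the cumulative fit recovers the simulated parameters closely, whereas fitting the raw interval bounds as if they were exact observations would bias them.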
Lee, I J; Kim, Y I; Kim, K W; Kim, D H; Ryoo, I; Lee, M W; Chung, J W
2012-11-01
This study was designed to evaluate the extent of the radiofrequency ablation zone in relation to the time interval between transcatheter arterial embolisation (TAE) and radiofrequency ablation (RFA) and, ultimately, to determine the optimal strategy of combining these two therapies for hepatocellular carcinoma. 15 rabbits were evenly divided into three groups: Group A was treated with RFA alone; Group B was treated with TAE immediately followed by RFA; and Group C was treated with TAE followed by RFA 5 days later. All animals underwent perfusion CT (PCT) scans immediately after RFA. Serum liver transaminases were measured to evaluate acute liver damage. Animals were euthanised for pathological analysis of ablated tissues 10 days after RFA. Non-parametric analyses were conducted to compare PCT indices, the RFA zone and liver transaminase levels among the three experimental groups. Group B showed a significantly larger ablation zone than the other two groups. Arterial liver perfusion and hepatic perfusion index represented well the perfusion decrease after TAE on PCT. Although Group B showed the most elevated liver transaminase levels at 1 day post RFA, the enzymes decreased to levels that were not different from the other groups at 10 days post-RFA. When combined TAE and RFA therapy is considered, TAE should be followed by RFA as quickly as possible, as it can be performed safely without serious hepatic deterioration, despite the short interval between the two procedures.
Screening and cervical cancer cure: population based cohort study
Andrae, Bengt; Andersson, Therese M-L; Lambert, Paul C; Kemetli, Levent; Silfverdal, Lena; Strander, Björn; Ryd, Walter; Dillner, Joakim; Törnberg, Sven; Sparén, Pär
2012-01-01
Objective: To determine whether detection of invasive cervical cancer by screening results in better prognosis or merely increases the lead time until death. Design: Nationwide population based cohort study. Setting: Sweden. Participants: All 1230 women with cervical cancer diagnosed during 1999-2001 in Sweden, prospectively followed up for an average of 8.5 years. Main outcome measures: Cure proportions and five year relative survival ratios, stratified by screening history, mode of detection, age, histopathological type, and FIGO (International Federation of Gynecology and Obstetrics) stage. Results: In the screening ages, the cure proportion for women with screen detected invasive cancer was 92% (95% confidence interval 75% to 98%) and for symptomatic women was 66% (62% to 70%), a statistically significant difference in cure of 26% (16% to 36%). Among symptomatic women, the cure proportion was significantly higher for those who had been screened according to recommendations (interval cancers) than among those overdue for screening: difference in cure 14% (95% confidence interval 6% to 23%). Cure proportions were similar for all histopathological types except small cell carcinomas and were closely related to FIGO stage. A significantly higher cure proportion for screen detected cancers remained after adjustment for stage at diagnosis (difference 15%, 7% to 22%). Conclusions: Screening is associated with improved cure of cervical cancer. Confounding cannot be ruled out, but the effect was not attributable to lead time bias and was larger than what is reflected by down-staging. Evaluations of screening programmes should consider the assessment of cure proportions. PMID:22381677
Factors influencing pre-hospital care time intervals in Iran: a qualitative study.
Khorasani-Zavareh, Davoud; Mohammadi, Reza; Bohm, Katarina
2018-06-23
Pre-hospital time management provides better access to victims of road traffic crashes (RTCs) and can help minimize preventable deaths, injuries and disabilities. While most studies have focused on measuring the various time intervals in the pre-hospital phase, to the best of our knowledge no study has qualitatively explored the barriers and facilitators that affect these intervals. The present study aimed to explore factors affecting the various time intervals related to road traffic incidents in the pre-hospital phase and to provide suggestions for improvement in Iran. The study was conducted during 2013-2014 at both the national and local level in Iran. Overall, 18 face-to-face interviews with emergency medical services (EMS) personnel were conducted for data collection. Qualitative content analysis was employed to analyze the data. The most important barriers in relation to pre-hospital intervals were related to the manner of cooperation by members of the public with the EMS and their involvement at the crash scene, as well as to pre-hospital system factors, including the number and location of EMS facilities, the type and number of ambulances, and manpower. These factors usually affect how rapidly the EMS can arrive at the scene of the crash and how quickly victims can be transferred to hospital. These two categories comprise six main themes: notification interval; activation interval; response interval; on-scene interval; transport interval; and delivery interval. Despite the greater focus on physical resources, cooperation from members of the public needs to be taken into account in order to achieve better pre-hospital management of the various intervals, possibly through the use of public education campaigns.
Interval stability for complex systems
NASA Astrophysics Data System (ADS)
Klinshov, Vladimir V.; Kirillov, Sergey; Kurths, Jürgen; Nekorkin, Vladimir I.
2018-04-01
Stability of dynamical systems against strong perturbations is an important problem of nonlinear dynamics, relevant to many applications in various areas. Here, we develop a novel concept of interval stability, referring to the behavior of the perturbed system during a finite time interval. Based on this concept, we suggest new measures of stability, namely interval basin stability (IBS) and interval stability threshold (IST). IBS characterizes the likelihood that the perturbed system returns to the stable regime (attractor) in a given time. IST provides the minimal magnitude of perturbation capable of disrupting the stable regime for a given interval of time. The suggested measures provide important information about the system's susceptibility to external perturbations, which may be useful for practical applications. Moreover, from a theoretical viewpoint the interval stability measures are shown to bridge the gap between linear and asymptotic stability. We also suggest numerical algorithms for quantification of the interval stability characteristics and demonstrate their potential for several dynamical systems of different kinds, such as power grids and neural networks.
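The interval basin stability measure described above can be estimated by straightforward Monte Carlo sampling. The toy system below is a minimal sketch, not one from the paper: for the bistable system dx/dt = x - x**3 with attractor x* = 1, IBS(T) is the fraction of random perturbations after which the state is back within a tolerance of x* at time T; the perturbation range, tolerance and Euler step are illustrative choices.

```python
import numpy as np

# Monte Carlo sketch of interval basin stability (IBS) for the bistable
# system dx/dt = x - x**3, attractor of interest x* = 1. Perturbation
# range, tolerance and integration step are illustrative assumptions.

def flow(x0, t_end, dt=0.01):
    # Explicit Euler integration of dx/dt = x - x**3 (vectorized).
    x = np.asarray(x0, dtype=float)
    for _ in range(int(t_end / dt)):
        x = x + dt * (x - x**3)
    return x

def interval_basin_stability(t_end, n_samples=2000, tol=0.1, seed=7):
    rng = np.random.default_rng(seed)
    kicked = 1.0 + rng.uniform(-2.0, 2.0, n_samples)   # perturbed states
    returned = np.abs(flow(kicked, t_end) - 1.0) < tol
    return returned.mean()

# IBS grows with the allowed return time T and approaches the ordinary
# (asymptotic) basin stability of x* = 1 for large T.
ibs_short = interval_basin_stability(t_end=1.0)
ibs_long = interval_basin_stability(t_end=20.0)
```

The monotone growth of IBS with T is the sense in which the measure interpolates between short-time (linear) and long-time (asymptotic) notions of stability.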
Atomic temporal interval relations in branching time: calculation and application
NASA Astrophysics Data System (ADS)
Anger, Frank D.; Ladkin, Peter B.; Rodriguez, Rita V.
1991-03-01
A practical method of reasoning about intervals in a branching-time model which is dense, unbounded, future-branching, and without rejoining branches is presented. The discussion is based on heuristic constraint-propagation techniques using the relation algebra of binary temporal relations among the intervals over the branching-time model. This technique has been applied with success to models of intervals over linear time by Allen and others, and is of cubic-time complexity. To extend it to branching-time models, it is necessary to calculate compositions of the relations; thus, the table of compositions for the 'atomic' relations is computed, enabling the rapid determination of the composition of arbitrary relations, expressed as disjunctions or unions of the atomic relations.
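What "computing the table of compositions for the atomic relations" means can be illustrated by brute force over the thirteen atomic relations of *linear* time (Allen's algebra); the paper does this for branching time, which this toy enumeration does not cover.

```python
from itertools import product

# Brute-force sketch of deriving a composition table for Allen's thirteen
# atomic interval relations over linear time. For each pair (r1, r2) it
# collects every relation realizable between A and C when A r1 B and
# B r2 C: that set is the composition r1 ; r2.

def relation(a, b):
    (a1, a2), (b1, b2) = a, b
    if a2 < b1: return "before"
    if b2 < a1: return "after"
    if a2 == b1: return "meets"
    if b2 == a1: return "met-by"
    if a1 == b1 and a2 == b2: return "equal"
    if a1 == b1: return "starts" if a2 < b2 else "started-by"
    if a2 == b2: return "finishes" if a1 > b1 else "finished-by"
    if b1 < a1 and a2 < b2: return "during"
    if a1 < b1 and b2 < a2: return "contains"
    return "overlaps" if a1 < b1 else "overlapped-by"

# Enumerate all intervals on a small point grid; 8 points suffice to
# realize every combination of atomic relations among three intervals.
points = range(8)
intervals = [(p, q) for p, q in product(points, points) if p < q]
composition = {}
for A, B, C in product(intervals, repeat=3):
    key = (relation(A, B), relation(B, C))
    composition.setdefault(key, set()).add(relation(A, C))

# e.g. "before ; before" is unambiguous, while "meets ; met-by" is not.
```

Some compositions are singletons (before ; before = {before}) while others are genuinely disjunctive, which is why the composition of arbitrary relations is expressed as unions of atomic relations.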
Analysis of single ion channel data incorporating time-interval omission and sampling
The, Yu-Kai; Timmer, Jens
2005-01-01
Hidden Markov models are widely used to describe single-channel currents from patch-clamp experiments. The inevitable anti-aliasing filter limits the time resolution of the measurements, so the standard hidden Markov model is no longer adequate. The notion of time-interval omission has been introduced, whereby brief events are not detected. The exact solutions developed for this problem do not take into account that the measured intervals are limited by the sampling time. In this case the dead-time that specifies the minimal detectable interval length is not defined unambiguously. We show that a wrong choice of the dead-time leads to considerably biased estimates, and we present the appropriate equations to describe sampled data. PMID:16849220
Chen, Chen; Xie, Yuanchang
2014-12-01
Driving hours and rest breaks are closely related to driver fatigue, which is a major contributor to truck crashes. This study investigates the effects of driving hours and rest breaks on commercial truck driver safety. A discrete-time logistic regression model is used to evaluate the crash odds ratios of driving hours and rest breaks. Driving time is divided into 11 one-hour intervals. These intervals and rest breaks are modeled as dummy variables. In addition, a Cox proportional hazards regression model with time-dependent covariates is used to assess the transient effects of rest breaks, which consist of a fixed effect and a variable effect. Data collected from two national truckload carriers in 2009 and 2010 are used. The discrete-time logistic regression result indicates that only the crash odds ratio of the 11th driving hour is statistically significant. Taking one, two, and three rest breaks can reduce drivers' crash odds by 68%, 83%, and 85%, respectively, compared to drivers who did not take any rest breaks. The Cox regression result shows clear transient effects for rest breaks. It also suggests that drivers may need some time to adjust themselves to normal driving tasks after a rest break. Overall, the third rest break's safety benefit is very limited based on the results of both models. The findings of this research can help policy makers better understand the impact of driving time and rest breaks and develop more effective rules to improve commercial truck safety. Copyright © 2014 National Safety Council and Elsevier Ltd. All rights reserved.
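The discrete-time logistic setup described above can be sketched on simulated data: each driving hour of each trip becomes one row, with dummy variables for the hour interval (hour 1 as reference) plus the number of rest breaks taken, and a 0/1 outcome for whether a crash occurred in that hour. The data are simulated with an assumed elevated risk in the 11th hour and a protective effect of rest breaks; none of the coefficients come from the study.

```python
import numpy as np
from scipy.optimize import minimize

# Toy discrete-time logistic regression on simulated person-period data.
# Assumed (not from the study): baseline logit -5.5, +1.5 in hour 11,
# -0.8 per rest break taken.

rng = np.random.default_rng(0)
rows, outcomes = [], []
for _ in range(20000):                  # simulated trips
    breaks = int(rng.integers(0, 4))    # rest breaks taken (0..3)
    for hour in range(1, 12):           # driving hours 1..11
        logit = -5.5 + (1.5 if hour == 11 else 0.0) - 0.8 * breaks
        crash = rng.random() < 1.0 / (1.0 + np.exp(-logit))
        dummies = [1.0 if hour == h else 0.0 for h in range(2, 12)]
        rows.append([1.0] + dummies + [float(breaks)])  # leading intercept
        outcomes.append(1.0 if crash else 0.0)
        if crash:
            break                       # the trip ends at the first crash

X, y = np.array(rows), np.array(outcomes)

def neg_log_lik(w):                     # logistic regression log-loss
    z = X @ w
    return np.logaddexp(0.0, z).sum() - y @ z

def grad(w):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return X.T @ (p - y)

w_hat = minimize(neg_log_lik, np.zeros(X.shape[1]),
                 jac=grad, method="BFGS").x
or_hour11 = float(np.exp(w_hat[10]))    # crash odds ratio, hour 11 vs 1
or_per_break = float(np.exp(w_hat[11])) # crash odds ratio per rest break
```

Ending each simulated trip at its first crash is what makes this a discrete-time survival formulation rather than ordinary logistic regression on independent hours.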
Geng, Tao; Su, Xing; Fang, Rongxin; Xie, Xin; Zhao, Qile; Liu, Jingnan
2016-01-01
To satisfy the requirements of high-rate, high-precision applications, 1 Hz BeiDou Navigation Satellite System (BDS) satellite clock corrections are generated based on precise orbit products, and the quality of the generated clock products is assessed by comparison with those from other analysis centers. The comparisons show that the root mean square (RMS) of clock errors for geostationary Earth orbit (GEO) satellites is about 0.63 ns, whereas those for inclined geosynchronous orbit (IGSO) and medium Earth orbit (MEO) satellites are about 0.2–0.3 ns and 0.1 ns, respectively. Then, the 1 Hz clock products are used for BDS precise point positioning (PPP) to retrieve seismic displacements of the 2015 Mw 7.8 Gorkha, Nepal, earthquake. The derived seismic displacements from BDS PPP are consistent with those from Global Positioning System (GPS) PPP, with RMS of 0.29, 0.38, and 1.08 cm in the east, north, and vertical components, respectively. In addition, BDS PPP solutions with clock intervals of 1 s, 5 s, 30 s, and 300 s are processed and compared with each other. The results demonstrate that PPP with 300 s clock intervals is the worst and that with 1 s clock intervals is the best. For the 5 s clock interval, the precision of the PPP solutions is almost the same as the 1 s results. Considering the time consumption of clock estimation, we suggest that a 5 s clock interval is adequate for high-rate BDS solutions. PMID:27999384
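The clock-quality comparison above reduces to computing the RMS of the residuals between two sets of clock corrections. A small sketch of that calculation (the residual values are made up for illustration, not taken from the paper):

```python
import math

def rms(errors):
    """Root mean square of clock comparison residuals (same units as input)."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Illustrative clock residuals in nanoseconds (hypothetical values)
geo_residuals = [0.5, -0.7, 0.8, -0.4, 0.6]
print(f"GEO clock RMS: {rms(geo_residuals):.2f} ns")
```

In practice the residuals would be the differences between the generated clock products and a reference analysis center's products, after removing any common offset.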
Søeby, Karen; Jensen, Peter Bjødstrup; Werge, Thomas; Sørensen, Steen
2015-09-01
The knowledge of physiological fluctuation and variation of even commonly used biochemical quantities in extreme age groups and during development is sparse. This challenges the clinical interpretation and utility of laboratory tests in these age groups. To explore the utility of hospital laboratory data as a source of information, we analyzed enzymatic plasma creatinine as a model analyte in two large pediatric hospital samples. Plasma creatinine measurements from 9700 children aged 0-18 years were obtained from hospital laboratory databases and partitioned into high-resolution gender- and age-groups. Normal probability plots were used to deduce parameters of the normal distributions from healthy creatinine values in the mixed hospital datasets. Furthermore, temporal trajectories were generated from repeated measurements to examine developmental patterns in periods of changing creatinine levels. Creatinine shows great age dependence from birth throughout childhood. We computed and replicated 95% reference intervals in narrow gender and age bins and showed them to be comparable to those determined in healthy population studies. We identified pronounced transitions in creatinine levels at different time points after birth and around the early teens, which challenges the establishment and usefulness of reference intervals in those age groups. The study documents that hospital laboratory data may inform on the developmental aspects of creatinine, on periods with pronounced heterogeneity and valid reference intervals. Furthermore, part of the heterogeneity in creatinine distribution is likely due to differences in biological and chronological age of children and should be considered when using age-specific reference intervals.
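The parametric 95% reference intervals described above are conventionally computed as mean ± 1.96 SD within each narrow age/sex bin, assuming approximate normality of the healthy values. A minimal sketch with hypothetical creatinine values (not data from the study):

```python
import statistics

def reference_interval(values, z=1.96):
    """Parametric 95% reference interval (mean ± 1.96 SD), assuming
    approximate normality within one narrow age/sex bin."""
    mean = statistics.fmean(values)
    sd = statistics.stdev(values)
    return mean - z * sd, mean + z * sd

# Hypothetical plasma creatinine values (µmol/L) for a single bin
bin_values = [32, 35, 30, 33, 36, 31, 34, 29, 37, 33]
low, high = reference_interval(bin_values)
print(f"95% reference interval: {low:.1f}-{high:.1f} µmol/L")
```

The study's normal probability plots serve to check this normality assumption and to separate healthy values from the pathological tail of the mixed hospital dataset before the interval is computed.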
Liu, Shiqi; Li, Qifeng; Li, Yumei; Lv, Yi; Niu, Jianhua; Xu, Quan; Zhao, Jingru; Chen, Yajun; Wang, Dayong; Bai, Ruimiao
2018-06-01
This case study concerns the meticulous observation of the moving process and track of 2 ingested needles using interval x-ray radiography, with the aim of localizing the foreign bodies and reducing unnecessary exploration of the digestive tract. An unusual case of a 1-year, 9-month-old female baby, with incarcerated hernia perforation caused by sewing needles with sharp ends, is reported herein. The patient had swallowed 2 sewing needles. One needle was excreted uneventfully after 8 days. The other needle stabbed through the ileocecal junction wall into the right side of the inguinal hernia sac after 9 days, and the patient underwent successful operative management. Interval x-ray confirmed that 1 needle-like foreign body moved down over 8 days until excretion with the feces, whereas the other pierced into the incarcerated hernia. Preoperative x-ray radiography successfully monitored the moving process and track of the sewing needles. Considering the penetrating-migrating nature of such foreign bodies, once sharp-pointed objects are located they should be removed, as the mortality and risk of related complications may otherwise increase. Interval x-ray radiography represents a meticulous preoperative method for monitoring the moving process and track of needle-like foreign bodies. Interval x-ray with real-time images that accurately detect moving foreign bodies can help reduce unnecessary exploration of the digestive tract and subsequently prevent possible complications. Based on the findings from the interval x-ray, treatment choices of endoscopic removal or surgical intervention may be attempted.
Martin, A.D.
1986-05-09
Method and apparatus are provided for generating an output pulse following a trigger pulse at a preset time-delay interval, with a resolution that is high relative to the low resolution available from supplied clock pulses. A first lumped constant delay (LCD) provides a first output signal at predetermined interpolation intervals corresponding to the desired high-resolution time interval. Latching circuits latch the high-resolution data to form a first synchronizing data set. A selected time interval is preset into internal counters and corrected for circuit propagation delay times having the same order of magnitude as the desired high resolution. Internal system clock pulses count down the counters to generate an internal pulse delayed by an interval that is functionally related to the preset time interval. A second LCD corrects the internal signal with the high-resolution time delay. A second internal pulse is then applied to a third LCD to generate a second set of synchronizing data, complementary to the first set, for presentation to logic circuits. The logic circuits further delay the internal output signal with the internal pulses. The final delayed output signal thereafter enables the output pulse generator to produce the desired output pulse at the preset time-delay interval following input of the trigger pulse.
NASA Technical Reports Server (NTRS)
Bergman, S. A., Jr.; Johnson, R. L.; Hoffler, G. W.
1977-01-01
Devices and techniques for measuring and analyzing systolic time intervals and quantitative phonocardiograms were initiated during Apollo 17. The data show that the systolic time interval from the Apollo 17 crewmen remained elevated longer postflight than the response criteria of heart rate, blood pressure, and percent change in leg volume, all of which had returned to preflight levels by the second day postflight. Although the systolic time interval values were only slightly outside the preflight fiducial limits, this finding suggested that (1) the analysis of systolic time intervals may help to identify the mechanisms of postflight orthostatic intolerance by measuring ventricular function more directly, and (2) the noninvasive technique may prove useful in determining the extent and duration of cardiovascular instability after long-duration space flight. The systolic time intervals obtained from the Apollo 17 crewmen during lower body negative pressure were similar to those noted in patients with significant heart disease.
Stouffer, Philip C.; Johnson, Erik I.; Bierregaard, Richard O.; Lovejoy, Thomas E.
2011-01-01
Inferences about species loss following habitat conversion are typically drawn from short-term surveys, which cannot reconstruct long-term temporal dynamics of extinction and colonization. A long-term view can be critical, however, to determine the stability of communities within fragments. Likewise, landscape dynamics must be considered, as second growth structure and overall forest cover contribute to processes in fragments. Here we examine bird communities in 11 Amazonian rainforest fragments of 1–100 ha, beginning before the fragments were isolated in the 1980s, and continuing through 2007. Using a method that accounts for imperfect detection, we estimated extinction and colonization based on standardized mist-net surveys within discrete time intervals (1–2 preisolation samples and 4–5 post-isolation samples). Between preisolation and 2007, all fragments lost species in an area-dependent fashion, with loss of as few as <10% of preisolation species from 100-ha fragments, but up to 70% in 1-ha fragments. Analysis of individual time intervals revealed that the 2007 result was not due to gradual species loss beginning at isolation; both extinction and colonization occurred in every time interval. In the last two samples, 2000 and 2007, extinction and colonization were approximately balanced. Further, 97 of 101 species netted before isolation were detected in at least one fragment in 2007. Although a small subset of species is extremely vulnerable to fragmentation, and predictably goes extinct in fragments, developing second growth in the matrix around fragments encourages recolonization in our landscapes. Species richness in these fragments now reflects local turnover, not long-term attrition of species. We expect that similar processes could be operating in other fragmented systems that show unexpectedly low extinction. PMID:21731616
Cheung, Li C; Pan, Qing; Hyun, Noorie; Schiffman, Mark; Fetterman, Barbara; Castle, Philip E; Lorey, Thomas; Katki, Hormuzd A
2017-09-30
For cost-effectiveness and efficiency, many large-scale general-purpose cohort studies are being assembled within large health-care providers who use electronic health records. Two key features of such data are that incident disease is interval-censored between irregular visits and that there can be pre-existing (prevalent) disease. Because prevalent disease is not always immediately diagnosed, some disease diagnosed at later visits is actually undiagnosed prevalent disease. We consider prevalent disease as a point mass at time zero for clinical applications where there is no interest in the time of prevalent disease onset. We demonstrate that the naive Kaplan-Meier cumulative risk estimator underestimates risks at early time points and overestimates later risks. We propose a general family of mixture models for undiagnosed prevalent disease and interval-censored incident disease that we call prevalence-incidence models. Parameters for parametric prevalence-incidence models, such as the logistic regression and Weibull survival (logistic-Weibull) model, are estimated by direct likelihood maximization or by an EM algorithm. Non-parametric methods are proposed to calculate cumulative risks for cases without covariates. We compare naive Kaplan-Meier, logistic-Weibull, and non-parametric estimates of cumulative risk in the cervical cancer screening program at Kaiser Permanente Northern California. Kaplan-Meier provided poor estimates, while the logistic-Weibull model was a close fit to the non-parametric estimates. Our findings support our use of logistic-Weibull models to develop the risk estimates that underlie current US risk-based cervical cancer screening guidelines. Published 2017. This article has been contributed to by US Government employees and their work is in the public domain in the USA.
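The prevalence-incidence idea can be written as a mixture: a point mass p of undiagnosed prevalent disease at time zero plus a Weibull cumulative incidence curve among the remaining fraction. A sketch with made-up parameter values (not fitted values from the study):

```python
import math

def mixture_cumulative_risk(t, p, shape, scale):
    """Cumulative risk under a prevalence-incidence mixture:
    prevalent fraction p at t=0, Weibull-distributed incidence among
    the remaining (1 - p) of the cohort."""
    weibull_cdf = 1.0 - math.exp(-((t / scale) ** shape))
    return p + (1.0 - p) * weibull_cdf

# Hypothetical parameters: 2% prevalent at baseline, Weibull(shape=1.5,
# scale=10 years) incidence among the rest.
for t in (0, 1, 5, 10):
    risk = mixture_cumulative_risk(t, 0.02, 1.5, 10.0)
    print(f"t = {t:>2} yr: cumulative risk = {risk:.3f}")
```

The jump of size p at t = 0 is exactly what the naive Kaplan-Meier estimator misses, which is why it underestimates early risk.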
Schmidt-Hieber, Martin; Schwender, Julie; Heinz, Werner J.; Zabelina, Tatjana; Kühl, Jörn S.; Mousset, Sabine; Schüttrumpf, Silke; Junghanss, Christian; Silling, Gerda; Basara, Nadezda; Neuburger, Stefan; Thiel, Eckhard; Blau, Igor W.
2011-01-01
Background Limited data are available on characteristics of viral encephalitis in patients after allogeneic stem cell transplantation. Design and Methods We analyzed 2,628 patients after allogeneic stem cell transplantation to identify risk factors and characteristics of viral encephalitis. Results Viral encephalitis occurred in 32 patients (1.2%, 95% confidence interval 0.8%–1.6%) and was associated with the use of OKT-3 or alemtuzumab for T-cell depletion (P<0.001) and an increased mortality (P=0.011) in comparison to patients without viral encephalitis. Detected viruses included human herpesvirus-6 (28%), Epstein-Barr virus (19%), herpes simplex virus (13%), JC virus (9%), varicella zoster virus (6%), cytomegalovirus (6%) and adenovirus (3%). More than one virus was identified in 16% of the patients. The median onset time was 106 days after allogeneic stem cell transplantation for the total group of 32 patients, but onset times were shortest in those with human herpesvirus-6 encephalitis and longest in those with JC virus-associated progressive multifocal leukoencephalopathy. The probability of a sustained response to treatment was 63% (95% confidence interval 44%–82%) with a median survival of 94 (95% confidence interval 36–152) days after onset, but significant variation was found when considering different causative viruses. Patients with herpes simplex virus encephalitis had the most favorable outcome with no encephalitis-related deaths. Conclusions The use of OKT-3 or alemtuzumab for in vivo T-cell depletion is associated with an increased risk of viral encephalitis after allogeneic stem cell transplantation. Different viruses are frequently associated with distinct characteristics such as onset time, response to treatment and outcome. PMID:20851868
Compression-based entropy estimation of heart rate variability on multiple time scales.
Baumert, Mathias; Voss, Andreas; Javorka, Michal
2013-01-01
Heart rate fluctuates beat by beat in a complex manner. The aim of this study was to develop a framework for entropy assessment of heart rate fluctuations on multiple time scales. We employed the Lempel-Ziv algorithm for lossless data compression to investigate the compressibility of RR interval time series on different time scales, using a coarse-graining procedure. We estimated the entropy of RR interval time series of 20 young and 20 old subjects and also investigated the compressibility of randomly shuffled surrogate RR time series. The original RR time series displayed significantly smaller compression entropy values than randomized RR interval data. The RR interval time series of older subjects showed significantly different entropy characteristics over multiple time scales than those of younger subjects. In conclusion, data compression may be a useful approach for multiscale entropy assessment of heart rate variability.
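The two ingredients of this approach, coarse-graining and compression-based entropy, can be sketched briefly. Here zlib's DEFLATE stands in for a Lempel-Ziv compressor (an assumption for illustration; the RR values are hypothetical):

```python
import zlib

def coarse_grain(series, scale):
    """Average non-overlapping windows of length `scale` (coarse-graining)."""
    n = len(series) // scale
    return [sum(series[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def compression_entropy(series):
    """Compressed-to-original size ratio as a compressibility index
    (a Lempel-Ziv-style estimate; zlib DEFLATE used as a stand-in)."""
    data = ",".join(f"{x:.0f}" for x in series).encode()
    return len(zlib.compress(data)) / len(data)

# Hypothetical RR intervals in ms: a regular series compresses far better
# than an irregular one, mirroring the surrogate-data comparison.
regular = [800, 810] * 200
print(compression_entropy(coarse_grain(regular, 2)))
```

Repeating the entropy estimate at several coarse-graining scales yields the multiscale profile compared between the young and old groups.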
NASA Astrophysics Data System (ADS)
Kim, Jung Hoon; Hagiwara, Tomomichi
2017-11-01
This paper is concerned with linear time-invariant (LTI) sampled-data systems (by which we mean sampled-data systems with LTI generalised plants and LTI controllers) and studies their H2 norms from the viewpoint of impulse responses and generalised H2 norms from the viewpoint of the induced norms from L2 to L∞. A new definition of the H2 norm of LTI sampled-data systems is first introduced through a sort of intermediate standpoint of those for the existing two definitions. We then establish unified treatment of the three definitions of the H2 norm through a matrix function G(τ) defined on the sampling interval [0, h). This paper next considers the generalised H2 norms, in which two types of the L∞ norm of the output are considered as the temporal supremum magnitude under the spatial 2-norm and ∞-norm of a vector-valued function. We further give unified treatment of the generalised H2 norms through another matrix function F(θ) which is also defined on [0, h). Through a close connection between G(τ) and F(θ), some theoretical relationships between the H2 and generalised H2 norms are provided. Furthermore, appropriate extensions associated with the treatment of G(τ) and F(θ) to the closed interval [0, h] are discussed to facilitate numerical computations and comparisons of the H2 and generalised H2 norms. Through theoretical and numerical studies, it is shown that the two generalised H2 norms coincide with neither of the three H2 norms of LTI sampled-data systems even though all the five definitions coincide with each other when single-output continuous-time LTI systems are considered as a special class of LTI sampled-data systems. To summarise, this paper clarifies that the five control performance measures are mutually related with each other but they are also intrinsically different from each other.
Lack of spacing effects during piano learning.
Wiseheart, Melody; D'Souza, Annalise A; Chae, Jacey
2017-01-01
Spacing effects during retention of verbal information are easily obtained, and the effect size is large. Relatively little evidence exists on whether motor skill retention benefits from distributed practice, with even less evidence on complex motor skills. We taught a 17-note musical sequence on a piano to individuals without prior formal training. There were five lags between learning episodes: 0-, 1-, 5-, 10-, and 15-min. After a 5-min retention interval, participants' performance was measured using three criteria: accuracy of note playing, consistency in pressure applied to the keys, and consistency in timing. No spacing effect was found, suggesting that the effect may not always be demonstrable for complex motor skills or non-verbal abilities (timing and motor skills). Additionally, we taught short phrases from five songs, using the same set of lags and retention interval, and did not find any spacing effect for accuracy of song reproduction. Our findings indicate that although the spacing effect is one of the most robust phenomena in the memory literature (as demonstrated by verbal learning studies), the effect may vary when considered in the novel realm of complex motor skills such as piano performance.
Escarpinati, Suzana Cunha; Siqueira, Tadeu; Medina, Paulino Barroso; de Oliveira Roque, Fabio
2014-03-01
In order to evaluate the potential risks of human visitation to macroinvertebrate communities in streams, we investigated the effect of trampling using two short-term experiments conducted in a Brazilian ecotourism karst region. We asked three questions: (a) Does trampling increase the drift rate of aquatic macroinvertebrates and organic matter? (b) Does trampling change macroinvertebrate community organization? (c) If trampling alters the community structure, is a short time (5 days, the interval between weekends, when tourism activity peaks) sufficient for community restructuring? Analysis of variance of richness, total abundance, abundance of the most abundant genera (e.g., Simothraulopsis and Callibaetis), and community composition showed that trampling immediately affects the macroinvertebrate community and that the intervals between peaks of visitation (5 days) are not sufficient for complete community restructuring. Considering that bathing areas receive thousands of visitors every year and that intervals of time without visitation are nearly nonexistent, we suspect that the negative effects on the macroinvertebrate community accumulate. Finally, we discuss some simple procedures that could potentially be used to reduce trampling impacts in lotic environments.
Hanna, Laila S; Medhat, Amina M; Abdel-Menem, Hanan A
2003-04-01
In Egypt, infection with Schistosoma mansoni (S.m.) and residues of pesticides have been considered as major environmental pollutants that adversely affect health. Effects of diazinon (DZN) and/or praziquantel (PZQ) on the levels of plasma triiodothyronine (T3), thyroxine (T4), activities of brain acetylcholinesterase (AchE) and liver alanine aminotransferase (ALT) in addition to blood reduced glutathione (GSH) in healthy and S.m. infected mice were investigated after 9 and 17 weeks of either infection or intoxication with DZN. Triiodothyronine showed significant differences among the different treatments. The group of mice treated with PZQ showed the highest levels of T3 at both time intervals. Thyroxine level showed significant differences between the two time intervals. The lowest levels of T4 were observed in the infected-PZQ group at week 17. The maximum inhibition of brain AchE activity was noticed in DZN-PZQ treated group after 9 and 17 weeks. The different treatments significantly reduced the activities of liver ALT. The highest decrease was recorded in the infected-DZN-PZQ group at week 9. All treatments significantly lowered the levels of blood GSH after 9 weeks.
NASA Astrophysics Data System (ADS)
Rebolledo, M. A.; Martinez-Betorz, J. A.
1989-04-01
In this paper the accuracy in the determination of the period of an oscillating signal, when obtained from the photon statistics time-interval probability, is studied as a function of the precision (the inverse of the cutoff frequency of the photon counting system) with which time intervals are measured. The results are obtained by means of an experiment with a square-wave signal, where the Fourier or square-wave transforms of the time-interval probability are measured. It is found that for values of the frequency of the signal near the cutoff frequency the errors in the period are small.
Method of high precision interval measurement in pulse laser ranging system
NASA Astrophysics Data System (ADS)
Wang, Zhen; Lv, Xin-yuan; Mao, Jin-jin; Liu, Wei; Yang, Dong
2013-09-01
Laser ranging offers high measurement precision, fast measurement speed, no need for cooperative targets, and strong resistance to electromagnetic interference, and its time-interval measurement is the key parameter affecting the performance of the whole system. The precision of a pulsed laser ranging system is determined by the precision of its time-interval measurement. This paper introduces the basic structure of a laser ranging system and establishes a method for high-precision time-interval measurement in a pulsed laser ranging system. Based on an analysis of the factors affecting range precision, a pulse rising-edge discriminator is adopted to produce the timing mark for start-stop time discrimination, and a TDC-GP2 high-precision interval measurement system based on a TMS320F2812 DSP is designed to improve the measurement precision. Experimental results indicate that the time-interval measurement method presented here achieves higher range accuracy. Compared with traditional time-interval measurement systems, the method simplifies the system design and reduces the influence of bad weather conditions; furthermore, it satisfies the requirements of low cost and miniaturization.
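The link between timing resolution and range resolution is the round-trip time-of-flight relation R = c·Δt/2. A quick sketch, assuming a 65 ps timer bin (a typical TDC-GP2 figure, not a value stated in the paper):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_interval(dt_seconds: float) -> float:
    """Convert a round-trip time of flight to one-way range: R = c * dt / 2."""
    return C * dt_seconds / 2.0

# Assumed TDC resolution of 65 ps: each timing bin corresponds to roughly
# 1 cm of range, which sets the ranging precision floor.
resolution_s = 65e-12
print(f"range per timing bin: {range_from_interval(resolution_s) * 100:.2f} cm")
```

This is why the time-interval measurement, rather than the optics, typically limits the precision of a pulsed ranging system.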
Pharmacological interventions for acceleration of the onset time of rocuronium: a meta-analysis.
Dong, Jing; Gao, Lingqi; Lu, Wenqing; Xu, Zifeng; Zheng, Jijian
2014-01-01
Rocuronium is an acceptable alternative when succinylcholine is contraindicated for facilitating endotracheal intubation. However, the onset time of rocuronium to a good intubation condition is still slower than that of succinylcholine. This study systematically investigated the most efficacious pharmacological interventions for accelerating the onset time of rocuronium. Medline, Embase, the Cochrane Library, www.clinicaltrials.gov, and hand-searching of the reference lists of identified papers were used to find randomized controlled trials comparing drug interventions with placebo or another drug to shorten the onset time of rocuronium. Statistical analyses were performed using RevMan 5.2 and ADDIS 1.16.5 software. Mean differences (MDs) with their 95% confidence intervals (95% CIs) were used to analyze the effects of drug interventions on the onset time of rocuronium. 43 randomized controlled trials with 2,465 patients were analyzed. The average onset time of rocuronium was 102.4±24.9 s. Priming with rocuronium [MD -21.0 s, 95% CI (-27.6 to -14.3 s)], pretreatment with ephedrine [-22.3 s (-29.1 to -15.5 s)], and pretreatment with magnesium sulphate [-28.2 s (-50.9 to -5.6 s)] were all effective in reducing the onset time of rocuronium. Statistical testing of indirect comparisons showed that rocuronium priming, pretreatment with ephedrine, and pretreatment with magnesium sulphate had similar efficacy. Considering convenience and efficacy, priming with rocuronium is recommended for accelerating its onset time. However, more rigorous clinical trials are still needed to reach a more solid conclusion, given the large heterogeneities that exist among studies.
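Pooled mean differences like those above are conventionally obtained by inverse-variance weighting across trials. A minimal fixed-effect sketch with hypothetical trial results (not the review's actual data, which may also have used random-effects models):

```python
import math

def pool_fixed_effect(mds, ses):
    """Inverse-variance fixed-effect pooling of mean differences.
    Returns the pooled MD and its 95% confidence interval."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * md for w, md in zip(weights, mds)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

# Hypothetical per-trial mean differences (seconds) and standard errors
mds = [-20.0, -24.0, -19.0]
ses = [3.0, 4.0, 3.5]
md, (lo, hi) = pool_fixed_effect(mds, ses)
print(f"pooled MD = {md:.1f} s (95% CI {lo:.1f} to {hi:.1f})")
```

Each trial's weight is the reciprocal of its variance, so precise trials dominate the pooled estimate.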
NASA Astrophysics Data System (ADS)
Hamdi, Mazda; Kenari, Masoumeh Nasiri
2013-06-01
We consider a time-hopping based multiple access scheme introduced in [1] for communication over dispersive infrared links and evaluate its performance for correlator and matched-filter receivers. In the investigated time-hopping code division multiple access (TH-CDMA) method, the transmitter employs a low-rate convolutional encoder. The bit interval is divided into Nc chips, and the output of the encoder, along with a PN sequence assigned to the user, determines the position of the chip in which the optical pulse is transmitted. We evaluate the multiple access performance of the system for the correlation receiver considering background noise, which is modeled as white Gaussian noise due to its large intensity. For the correlation receiver, the results show that for a fixed processing gain, at high transmit power, where multiple access interference has the dominant effect, the performance improves with the coding gain. But at low transmit power, where increasing the coding gain shortens the chip time and consequently causes more corruption due to channel dispersion, there exists an optimum value for the coding gain. For the matched filter, however, the performance always improves with the coding gain. The results show that the matched-filter receiver outperforms the correlation receiver in the considered cases. They also show that, for the same bandwidth and bit rate, the proposed system excels other multiple access techniques, such as conventional CDMA and time hopping.
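The chip-position selection described above can be illustrated as follows: per bit, the encoder output and the user's PN value together pick one of the Nc chip slots. The modulo-Nc combining rule below is an illustrative assumption, not the paper's exact construction:

```python
def chip_position(encoder_out: int, pn_value: int, nc: int) -> int:
    """Select the chip slot for the optical pulse within a bit interval.
    Assumed rule: modulo-Nc sum of encoder output and PN value."""
    return (encoder_out + pn_value) % nc

NC = 8                         # chips per bit interval (hypothetical)
pn_sequence = [3, 5, 1, 6]     # hypothetical per-bit PN values for one user
encoder_bits = [0, 2, 7, 4]    # hypothetical convolutional encoder outputs
positions = [chip_position(e, p, NC) for e, p in zip(encoder_bits, pn_sequence)]
print(positions)
```

Distinct PN sequences make different users' pulse positions appear mutually random, which is what limits multiple access interference.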
Effects of Photobleaching on Microplastics
NASA Astrophysics Data System (ADS)
Ferrone, Salvatore; Sullivan, Kelley
The presence of microplastics in our oceans and lakes is a contemporary environmental issue. A popular method for studying microplastics is fluorescence microscopy. We are studying the effects of fluorescence photobleaching on the imaging of microplastics. Our goal is to find out to what extent microplastics photobleach and whether the photobleaching is recoverable. Photobleaching may entirely destroy the plastics' ability to fluoresce, hamper it for a short time, or have minuscule effects. For this project, we consider the seven recyclable plastics. For each plastic type, we record a video of the microplastics for several hours under 405 nm light, then analyze and plot the image intensity as a function of time to see whether the light emitted by the plastic decays over time. We then take single images at different time intervals to check whether the intensity recovers. Our results will help set conditions under which fluorescence techniques can be used on microplastics.
Cost-effectiveness of one-time genetic testing to minimize lifetime adverse drug reactions.
Alagoz, O; Durham, D; Kasirajan, K
2016-04-01
We evaluated the cost-effectiveness of one-time pharmacogenomic testing for preventing adverse drug reactions (ADRs) over a patient's lifetime. We developed a Markov-based Monte Carlo microsimulation model to represent the ADR events in the lifetime of each patient. The base case considered a 40-year-old patient. We measured health outcomes in life years (LYs) and quality-adjusted LYs (QALYs) and estimated costs using 2013 US$. In the base case, one-time genetic testing had an incremental cost-effectiveness ratio (ICER) of $43,165 per additional LY (95% confidence interval (CI): $42,769 to $43,561) and $53,680 per additional QALY (95% CI: $53,182 to $54,179); hence, under the base case, one-time genetic testing is cost-effective. The ICER values were most sensitive to the average probability of death due to ADR, the reduction in ADR rate due to genetic testing, the mean ADR rate and the cost of genetic testing.
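The ICER computation itself is simple arithmetic over the microsimulation outputs. A minimal sketch, with hypothetical cost and life-year totals chosen only so the example reproduces the $43,165-per-LY figure:

```python
def icer(cost_new: float, cost_base: float,
         effect_new: float, effect_base: float) -> float:
    # Incremental cost-effectiveness ratio: extra dollars per extra
    # unit of health outcome (LY or QALY).
    return (cost_new - cost_base) / (effect_new - effect_base)

# Hypothetical lifetime totals: testing adds $4,316.50 in discounted
# cost and 0.1 discounted life years versus no testing.
example = icer(104316.50, 100000.0, 20.1, 20.0)
```

In practice each argument would be the mean over many simulated patient lifetimes, and the CI comes from replicating the simulation.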
Samak, M. Mosleh E. Abu; Bakar, A. Ashrif A.; Kashif, Muhammad; Zan, Mohd Saiful Dzulkifly
2016-01-01
This paper discusses numerical analysis methods for different geometrical features that have limited interval values for typically used sensor wavelengths. Compared with existing Finite Difference Time Domain (FDTD) methods, the alternating direction implicit (ADI)-FDTD method reduces the number of sub-steps by a factor of two to three, which represents a 33% time saving in each single run. The local one-dimensional (LOD)-FDTD method has similar numerical equation properties, which should be calculated as in the previous method. Generally, a small number of arithmetic operations, resulting in a shorter simulation time, is desired. The alternating direction implicit technique can be considered a significant step forward for improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative error ranges of less than 40% for analytical frequencies above 42.85 GHz, and that the same accuracy was achieved by both methods.
Lévy walks with variable waiting time: A ballistic case
NASA Astrophysics Data System (ADS)
Kamińska, A.; Srokowski, T.
2018-06-01
The Lévy walk process for a lower interval of the excursion-time distribution (α < 1) is discussed. The particle rests between the jumps, and the waiting time is position-dependent. Two cases are considered: a rising and a diminishing waiting-time rate ν(x), which require different approximations of the master equation. The process comprises two phases of the motion: particles at rest and in flight. The density distributions for both are derived as solutions of the corresponding fractional equations. For strongly falling ν(x), the resting-particle density assumes the α-stable form (truncated at the fronts), and the process resolves itself into Lévy flights. The diffusion is enhanced in this case but is no longer ballistic, in contrast to the case of rising ν(x). The analytical results are compared with Monte Carlo trajectory simulations. The results qualitatively agree with observed properties of human and animal movements.
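A minimal Monte Carlo trajectory of this kind of process can be sketched as follows. The Pareto flight times, the constant speed, and the particular waiting-time rate ν(x) are illustrative assumptions, not the paper's exact model:

```python
import random

def levy_walk(t_max, alpha=0.5, v=1.0,
              nu=lambda x: 1.0 / (1.0 + abs(x)), seed=0):
    # One trajectory: ballistic flights with Pareto flight times
    # (heavy tail, alpha < 1) alternating with exponential rests
    # whose rate nu(x) depends on the current position.
    rng = random.Random(seed)
    t, x = 0.0, 0.0
    while t < t_max:
        tau = rng.paretovariate(alpha)                # flight duration
        x += rng.choice((-1, 1)) * v * min(tau, t_max - t)
        t += tau
        if t >= t_max:
            break
        t += rng.expovariate(nu(x))                   # position-dependent rest
    return x

final = levy_walk(100.0)
```

Averaging many such trajectories (different seeds) gives the empirical density that the paper compares against the fractional-equation solutions.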
Longitudinal Tobacco Use Transitions Among Adolescents and Young Adults: 2014-2016.
Hair, Elizabeth C; Romberg, Alexa R; Niaura, Raymond; Abrams, David B; Bennett, Morgane A; Xiao, Haijun; Rath, Jessica M; Pitzer, Lindsay; Vallone, Donna
2018-02-13
Among youth, the frequency and prevalence of using more than one tobacco product (small cigar, cigarette, and hookah) or nicotine-containing product (e-cigarettes; ENDS) are changing. These shifts pose challenges for regulation, intervention, and prevention campaigns because of scant longitudinal data on the stability of use patterns in this changing product landscape. A nationally representative longitudinal survey of 15- to 21-year-olds (n = 15,275) was used to describe transitions between never use, noncurrent use, and past-30-day use of combustible tobacco, e-cigarettes (ENDS), and dual use of both kinds of products. A multistate model was fit to observations collected every 6 months across 2.5 years to estimate the transition probabilities (TPs) between states, the average time in state (sojourn time), and the effect of age on transitions. Current state strongly predicted future state over time intervals of 1 year or less, but only weakly predicted future state at longer intervals: the TP to noncurrent use was higher for ENDS-only than combustible-only users over a 6-month interval but was similar for both groups over a 2-year interval. Sojourn time was significantly longer for combustible-only (0.52 years) and dual use (0.55 years) than ENDS-only use (0.27 years); older youth were more likely than younger youth to remain combustible tobacco users or noncurrent users. The dynamics of transitions between combustible tobacco products and ENDS in a population of youth and young adults suggest that policy and prevention efforts must consider the frequent changes and instability, over periods of 1 year or less, in use patterns among young people. The study addresses an urgent need in public health for timely information on how youth and young adults use tobacco and nicotine products. We found that youth, particularly adolescents, moved frequently between using ENDS and combustible tobacco products, either alone or together.
Importantly, the utility of current-use states for predicting future use states declined for time horizons longer than 1 year. Our results demonstrate a need for caution in interpreting product transitions. Longitudinal data with frequent observations and coverage of a wide range of possible product types is required to fully characterize usage patterns in youth. © The Author 2018. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
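In its simplest discrete-time form, a multistate model of this kind reduces to powers of a one-step transition matrix. The 6-month matrix below is entirely hypothetical; the sketch only illustrates how short-horizon TPs compound to longer horizons, and why predictive value decays (the rows of the matrix power converge toward one another):

```python
# States: 0 = noncurrent, 1 = ENDS-only, 2 = combustible-only, 3 = dual use.
# Hypothetical 6-month transition matrix (each row sums to 1).
P = [
    [0.85, 0.05, 0.08, 0.02],
    [0.40, 0.45, 0.10, 0.05],
    [0.20, 0.05, 0.65, 0.10],
    [0.15, 0.10, 0.20, 0.55],
]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def horizon(P, steps):
    # Transition probabilities over `steps` consecutive 6-month intervals.
    out = P
    for _ in range(steps - 1):
        out = mat_mul(out, P)
    return out

P_2yr = horizon(P, 4)   # four 6-month steps = 2 years
```

Comparing a column of `P` with the same column of `P_2yr` shows the rows drawing together: knowing the current state matters less and less for longer horizons, as the study observed.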
Multiserver Queueing Model subject to Single Exponential Vacation
NASA Astrophysics Data System (ADS)
Vijayashree, K. V.; Janani, B.
2018-04-01
A multi-server queueing model subject to a single exponential vacation is considered. Arrivals join the queue according to a Poisson process, and service takes place according to an exponential distribution. Whenever the system becomes empty, all the servers go on vacation and return at the end of the vacation period. The servers then start providing service if there are waiting customers; otherwise, they wait for the next busy period to begin. The vacation times are also assumed to be exponentially distributed. In this paper, the stationary and transient probabilities for the number of customers during the idle and functional states of the servers are obtained explicitly. Numerical illustrations are also added to visualize the effect of various parameters.
Practical synchronization on complex dynamical networks via optimal pinning control
NASA Astrophysics Data System (ADS)
Li, Kezan; Sun, Weigang; Small, Michael; Fu, Xinchu
2015-07-01
We consider practical synchronization on complex dynamical networks under linear feedback control designed by optimal control theory. The control goal is to minimize the global synchronization error and control strength over a given finite time interval, as well as the synchronization error at the terminal time. By utilizing Pontryagin's minimum principle, and based on a general complex dynamical network, we obtain an optimal system to achieve the control goal. The result is verified by performing numerical simulations on star networks, Watts-Strogatz networks, and Barabási-Albert networks. Moreover, by combining optimal control and traditional pinning control, we propose an optimal pinning control strategy that depends on the network's topological structure. The obtained results show that optimal pinning control is very effective for synchronization control in real applications.
Nonequilibrium Probabilistic Dynamics of the Logistic Map at the Edge of Chaos
NASA Astrophysics Data System (ADS)
Borges, Ernesto P.; Tsallis, Constantino; Añaños, Garín F.; de Oliveira, Paulo Murilo
2002-12-01
We consider nonequilibrium probabilistic dynamics in logistic-like maps x_{t+1} = 1 - a|x_t|^z (z > 1) at their chaos threshold: We first introduce many initial conditions within one among W >> 1 intervals partitioning the phase space, and focus on the unique value q_sen < 1 for which the entropic form S_q ≡ (1 - Σ_{i=1}^{W} p_i^q)/(q - 1) increases linearly with time.
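The construction can be sketched numerically: spread an ensemble within one of W cells, iterate the map, histogram the occupied cells, and evaluate the Tsallis entropic form. The cell count, the iteration depth, and the value q_sen ≈ 0.2445 (the literature value for z = 2) are assumptions of this sketch:

```python
def logistic(x, a=1.401155189):            # a_c for z = 2 (edge of chaos)
    return 1.0 - a * x * x

def s_q(p, q):
    # Tsallis entropic form S_q = (1 - sum_i p_i^q) / (q - 1).
    return (1.0 - sum(pi ** q for pi in p if pi > 0)) / (q - 1.0)

W, n = 100, 1000                           # W cells partitioning [-1, 1]
xs = [-1.0 + (i + 0.5) * (2.0 / W) / n for i in range(n)]  # all in cell 0
for _ in range(20):                        # iterate the whole ensemble
    xs = [logistic(x) for x in xs]
counts = [0] * W
for x in xs:
    counts[min(W - 1, max(0, int((x + 1.0) * W / 2.0)))] += 1
p = [c / n for c in counts]
entropy = s_q(p, q=0.2445)                 # q_sen reported for z = 2
```

Tracking `entropy` as a function of the iteration number is what reveals the linear growth singled out by q_sen; other q values give bent (sub- or super-linear) curves.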
A model for foreign exchange markets based on glassy Brownian systems
Trinidad-Segovia, J. E.; Clara-Rahola, J.; Puertas, A. M.; De las Nieves, F. J.
2017-01-01
In this work we extend a well-known model of arrested physical systems and employ it to efficiently describe the price-fluctuation distributions of different currency pairs in the foreign exchange market. We consider the exchange rate price in the time range between 2010 and 2016, at yearly time intervals and resolved at one-minute frequency. We then fit the experimental datasets with this model and find a significant qualitative symmetry between the price-fluctuation distributions of the currency market and those of colloidal particle positions in arrested states. The main contribution of this paper is a well-known physical model that does not necessarily assume the restrictive independent and identically distributed (i.i.d.) condition. PMID:29206868
[Demographic consequences of genetic load: a model of the origin of the incest taboo].
Buzin, A Iu
1987-12-01
The prohibition of copulation among near relatives may raise the fitness of a population. This effect, irregular and insignificant within a single generation, becomes apparent over evolutionary time intervals through the natural selection of populations with an incest taboo. The "characteristic selection time" theta depends on the typical population size, the genetic damage and the mean rate of population growth. The estimates obtained for theta permit us to assert that the model describes the phenomenon of "socio-cultural selection" in prehistory. The model shows the demographic specificity of small populations. The problem of the number of consanguineous marriages is considered in detail. A new explanation for the deviation of the observed frequency of consanguineous marriages from classical estimates is proposed.
Belke, Terry W; Christie-Fougere, Melissa M
2006-11-01
Across two experiments, a peak procedure was used to assess the timing of the onset and offset of an opportunity to run as a reinforcer. The first experiment investigated the effect of reinforcer duration on temporal discrimination of the onset of the reinforcement interval. Three male Wistar rats were exposed to fixed-interval (FI) 30-s schedules of wheel-running reinforcement, and the duration of the opportunity to run was varied across values of 15, 30, and 60 s. Each session consisted of 50 reinforcers and 10 probe trials. Results showed that as reinforcer duration increased, the percentage of postreinforcement pauses longer than the 30-s schedule interval increased. On probe trials, peak response rates occurred near the time of reinforcer delivery, and peak times varied with reinforcer duration. In a second experiment, seven female Long-Evans rats were exposed to FI 30-s schedules leading to 30-s opportunities to run. Timing of the onset and offset of the reinforcement period was assessed by probe trials during the schedule interval and during the reinforcement interval in separate conditions. The results provided evidence of timing of the onset, but not the offset, of the wheel-running reinforcement period. Further research is required to assess whether timing occurs during a wheel-running reinforcement period.
High resolution digital delay timer
Martin, Albert D.
1988-01-01
Method and apparatus are provided for generating an output pulse following a trigger pulse at a time delay interval preset with a resolution which is high relative to a low resolution available from supplied clock pulses. A first lumped constant delay (20) provides a first output signal (24) at predetermined interpolation intervals corresponding to the desired high resolution time interval. Latching circuits (26, 28) latch the high resolution data (24) to form a first synchronizing data set (60). A selected time interval has been preset to internal counters (142, 146, 154) and corrected for circuit propagation delay times having the same order of magnitude as the desired high resolution. Internal system clock pulses (32, 34) count down the counters to generate an internal pulse delayed by an interval which is functionally related to the preset time interval. A second LCD (184) corrects the internal signal with the high resolution time delay. A second internal pulse is then applied to a third LCD (74) to generate a second set of synchronizing data (76) which is complementary with the first set of synchronizing data (60) for presentation to logic circuits (64). The logic circuits (64) further delay the internal output signal (72) to obtain a proper phase relationship of an output signal (80) with the internal pulses (32, 34). The final delayed output signal (80) thereafter enables the output pulse generator (82) to produce the desired output pulse (84) at the preset time delay interval following input of the trigger pulse (10, 12).
Mette, Christian; Grabemann, Marco; Zimmermann, Marco; Strunz, Laura; Scherbaum, Norbert; Wiltfang, Jens; Kis, Bernhard
2015-01-01
Altered time reproduction is exhibited by patients with adult attention deficit hyperactivity disorder (ADHD). It remains unclear whether memory capacity influences the ability of adults with ADHD to reproduce time intervals. We conducted a behavioral study on 30 adult ADHD patients who were medicated with methylphenidate, 29 unmedicated adult ADHD patients and 32 healthy controls (HCs). We assessed time reproduction using six time intervals (1 s, 4 s, 6 s, 10 s, 24 s and 60 s) and assessed memory performance using the Wechsler memory scale. The patients with ADHD exhibited lower memory performance scores than the HCs. No significant differences between the groups were found in the raw scores for any of the time intervals (p > .05), with the exception of the variability at the short time intervals (1 s, 4 s and 6 s) (p < .01). The overall analyses failed to reveal any significant correlations between time reproduction at any of the intervals examined in the time reproduction task and working memory performance (p > .05). We found no evidence that working memory influences time reproduction in adult patients with ADHD. Therefore, further studies concerning time reproduction and memory capacity in adult patients with ADHD must be performed to verify and replicate the present findings.
NASA Technical Reports Server (NTRS)
Leskovar, B.; Turko, B.
1977-01-01
The development of a high precision time interval digitizer is described. The time digitizer is a 10 psec resolution stopwatch covering a range of up to 340 msec. The measured time interval is determined as the separation between the leading edges of a pair of pulses applied externally to the start input and the stop input of the digitizer. Employing an interpolation technique and a 50 MHz high precision master oscillator, the equivalent of a 100 GHz clock frequency standard is achieved. The absolute accuracy and stability of the digitizer are determined by the external 50 MHz master oscillator, which serves as a standard time marker. The start and stop pulses are fast signals with 1 nsec rise times, conforming to the Nuclear Instrument Module (NIM) standard and detected by means of tunnel diode discriminators. The firing levels of the discriminators define the start and stop points between which the time interval is digitized.
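The interpolation arithmetic can be sketched as follows: the measured interval is a whole number of 50 MHz clock periods plus interpolated fractions of a period measured at the start and stop edges. The sign convention and the omission of propagation-delay corrections are simplifications of this sketch:

```python
T_CLK_PS = 20_000          # 50 MHz master oscillator period, in picoseconds
LSB_PS = 10                # interpolator resolution: 10 ps per LSB

def measured_interval_ps(whole_clocks: int,
                         start_interp_lsb: int,
                         stop_interp_lsb: int) -> int:
    # Interval = whole clock counts, plus the interpolated fraction of
    # a period at the start edge, minus the fraction at the stop edge
    # (simplified arithmetic; real hardware also corrects for
    # propagation delays comparable to the 10 ps resolution).
    return (whole_clocks * T_CLK_PS
            + start_interp_lsb * LSB_PS
            - stop_interp_lsb * LSB_PS)

t = measured_interval_ps(5, 123, 45)   # hypothetical raw readings
```

The 10 ps LSB against a 20 ns clock period is what the abstract summarizes as an "equivalent 100 GHz" timebase: the interpolator subdivides each period into 2,000 parts.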
Estimated Accuracy of Three Common Trajectory Statistical Methods
NASA Technical Reports Server (NTRS)
Kabashnikov, Vitaliy P.; Chaikovsky, Anatoli P.; Kucsera, Tom L.; Metelskaya, Natalia S.
2011-01-01
Three well-known trajectory statistical methods (TSMs), namely the concentration field (CF), concentration-weighted trajectory (CWT), and potential source contribution function (PSCF) methods, were tested using known sources and artificially generated data sets to determine the ability of TSMs to reproduce the spatial distribution of the sources. In works by other authors, the accuracy of the trajectory statistical methods was estimated for particular species and at specified receptor locations. We have obtained a more general statistical estimation of the accuracy of source reconstruction and have found optimum conditions for reconstructing source distributions of atmospheric trace substances. Only virtual pollutants of the primary type were considered. In real-world experiments, TSMs are intended for application to a priori unknown sources. Therefore, the accuracy of TSMs has to be tested with all possible spatial distributions of sources. An ensemble of geographical distributions of virtual sources was generated. Spearman's rank-order correlation coefficient between the spatial distributions of the known virtual sources and the reconstructed sources was taken as a quantitative measure of accuracy. Statistical estimates of the mean correlation coefficient and a range of the most probable values of the correlation coefficient were obtained. All the TSMs considered here gave similarly close results. The maximum of the ratio of the mean correlation to the width of the correlation interval containing the most probable correlation values determines the optimum conditions for reconstruction. The optimal geographical domain roughly coincides with the area supplying most of the substance to the receptor. The optimal domain's size depends on the substance decay time. Under optimum reconstruction conditions, the mean correlation coefficients can reach 0.70 to 0.75.
The boundaries of the interval containing the most probable correlation values are 0.6 to 0.9 for a decay time of 240 h and 0.5 to 0.95 for a decay time of 12 h. The best results of source reconstruction can be expected for trace substances with a decay time on the order of several days. Although the methods considered in this paper do not guarantee high accuracy, they are computationally simple and fast. Using the TSMs under optimum conditions and taking into account the range of uncertainties, one can obtain a first hint of potential source areas.
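Of the three TSMs, the concentration-weighted trajectory (CWT) method is the easiest to sketch: each grid cell receives the residence-time-weighted average of the receptor concentrations of the trajectories that crossed it. The cell labels and trajectory data below are hypothetical:

```python
def cwt(grid_cells, trajectories):
    # Concentration-weighted trajectory field: for each grid cell,
    # average the receptor concentrations of all trajectories,
    # weighted by the residence time each trajectory spent in the cell.
    field = {}
    for cell in grid_cells:
        num = sum(c * tau[cell] for c, tau in trajectories if cell in tau)
        den = sum(tau[cell] for _, tau in trajectories if cell in tau)
        field[cell] = num / den if den > 0 else 0.0
    return field

# Two hypothetical back-trajectories, each given as
# (receptor concentration, {cell: residence time in hours}).
trajs = [(10.0, {"A": 3.0, "B": 1.0}),
         (2.0,  {"A": 1.0, "C": 2.0})]
field = cwt(["A", "B", "C"], trajs)
```

Cells visited mostly by high-concentration trajectories (here "B") score high and are flagged as likely source areas; CF and PSCF differ mainly in how they weight and threshold the same trajectory-residence data.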
Eliciting interval beliefs: An experimental study
Peeters, Ronald; Wolk, Leonard
2017-01-01
In this paper we study the interval scoring rule as a mechanism to elicit subjective beliefs under varying degrees of uncertainty. In our experiment, subjects forecast the termination time of a time series to be generated from a given but unknown stochastic process. Subjects gradually learn more about the underlying process over time and hence the true distribution over termination times. We conduct two treatments, one with a high and one with a low volatility process. We find that elicited intervals are better when subjects are facing a low volatility process. In this treatment, participants learn to position their intervals almost optimally over the course of the experiment. This is in contrast with the high volatility treatment, where subjects, over the course of the experiment, learn to optimize the location of their intervals but fail to provide the optimal length. PMID:28380020
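For concreteness, one standard form of interval scoring is the Gneiting-Raftery interval score; the payoff rule used in the experiment may differ, so treat this as a generic illustration of how interval length trades off against misses:

```python
def interval_score(lower: float, upper: float, x: float,
                   alpha: float = 0.1) -> float:
    # Gneiting-Raftery interval score (lower is better) for a central
    # (1 - alpha) prediction interval: the interval's length, plus a
    # penalty proportional to how far the realization x falls outside.
    score = upper - lower
    if x < lower:
        score += (2.0 / alpha) * (lower - x)
    elif x > upper:
        score += (2.0 / alpha) * (x - upper)
    return score

inside = interval_score(10.0, 20.0, 15.0)       # realization covered
outside = interval_score(10.0, 20.0, 25.0)      # realization missed
```

A subject who has learned the true distribution minimizes the expected score by reporting its central (1 - alpha) interval, which is why both location and length of the reported intervals are informative about beliefs.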
Horr, Ninja K.; Di Luca, Massimiliano
2015-01-01
In this work we investigate how judgments of perceived duration are influenced by the properties of the signals that define the intervals. Participants compared two auditory intervals that could be any combination of the following four types: intervals filled with continuous tones (filled intervals), intervals filled with regularly-timed short tones (isochronous intervals), intervals filled with irregularly-timed short tones (anisochronous intervals), and intervals demarcated by two short tones (empty intervals). Results indicate that the type of intervals to be compared affects discrimination performance and induces distortions in perceived duration. In particular, we find that duration judgments are most precise when comparing two isochronous and two continuous intervals, while the comparison of two anisochronous intervals leads to the worst performance. Moreover, we determined that the magnitude of the distortions in perceived duration (an effect akin to the filled duration illusion) is higher for tone sequences (no matter whether isochronous or anisochronous) than for continuous tones. Further analysis of how duration distortions depend on the type of filling suggests that distortions are not only due to the perceived duration of the two individual intervals, but they may also be due to the comparison of two different filling types. PMID:25717310
Use of precision time and time interval (PTTI)
NASA Technical Reports Server (NTRS)
Taylor, J. D.
1974-01-01
Range time synchronization methods are reviewed as an important aspect of range operations. The overall capabilities of various missile ranges to determine the precise time of day, by synchronizing to available references and applying this time point to instrumentation for time interval measurements, are described.
Time intervals in the treatment of fractured femurs as indicators of the quality of trauma systems.
Matityahu, Amir; Elliott, Iain; Marmor, Meir; Caldwell, Amber; Coughlin, Richard; Gosselin, Richard A
2014-01-01
To investigate the use of time intervals in the treatment of fractured femurs as indicators of the quality of trauma systems. Time intervals from injury to admission, admission to surgery and surgery to discharge for patients with isolated femur fractures in four low- and middle-income countries were compared with the corresponding values from one German hospital, an Israeli hospital and the National Trauma Data Bank of the United States of America by means of Student's t-tests. The correlations between the time intervals recorded in a country and that country's expenditure on health and gross domestic product (GDP) were also evaluated using Pearson's product moment correlation coefficient. Relative to patients from high-income countries, those from low- and middle-income countries were significantly more likely to be male and to have been treated by open femoral nailing, and their intervals from injury to admission, admission to surgery and surgery to discharge were significantly longer. Strong negative correlations were detected between the interval from injury to admission and government expenditure on health, and between the interval from admission to surgery and the per capita values for total expenditure on health, government expenditure on health and GDP. Strong positive correlations were detected between the interval from surgery to discharge and general government expenditure on health. The time intervals for the treatment of femur fractures are relatively long in low- and middle-income countries, can easily be measured, and are highly correlated with accessible and quantifiable country data on health and economics.
Questions on universal constants and four-dimensional symmetry from a broad viewpoint. I
NASA Technical Reports Server (NTRS)
Hsu, J. P.
1983-01-01
It is demonstrated that there is flexibility in clock synchronization and that the four-dimensional symmetry framework can be viewed broadly. The true universality of basic constants is discussed, considering a class of measurement processes based on velocity = distance/time interval, which always yields some number when used by an observer. The four-dimensional symmetry framework based on a common time for all observers is formulated, and related processes of measuring the speed of light are discussed. Invariant 'action functions' for physical laws in the new four-dimensional symmetry framework with common time are established to discuss universal constants. Truly universal constants are identified, and it is shown that physics in this new framework and in special relativity are equivalent as far as one-particle systems and the S-matrix in field theories are concerned.
Adaptive precompensators for flexible-link manipulator control
NASA Technical Reports Server (NTRS)
Tzes, Anthony P.; Yurkovich, Stephen
1989-01-01
The application of input precompensators to flexible manipulators is considered. Frequency domain compensators color the input around the flexible mode locations, resulting in a bandstop or notch filter in cascade with the system. Time domain compensators apply a sequence of impulses at prespecified times related to the modal frequencies. The resulting control corresponds to a feedforward term that convolves in real-time the desired reference input with a sequence of impulses and produces a vibration-free output. An adaptive precompensator can be implemented by combining a frequency domain identification scheme which is used to estimate online the modal frequencies and subsequently update the bandstop interval or the spacing between the impulses. The combined adaptive input preshaping scheme provides the most rapid slew that results in a vibration-free output. Experimental results are presented to verify the results.
Shalev, Varda; Rogowski, Ori; Shimron, Orit; Sheinberg, Bracha; Shapira, Itzhak; Seligsohn, Uri; Berliner, Shlomo; Misgav, Mudi
2007-01-01
The incidence of stroke in patients with atrial fibrillation (AF) can be significantly reduced with warfarin therapy, especially if optimally controlled. To evaluate the effect of the interval between consecutive prothrombin time measurements on the time in therapeutic range (INR 2-3) in a cohort of patients with AF on chronic warfarin treatment in the community. All INR measurements available from a relatively large cohort of patients with chronic AF were reviewed, and the mean interval between consecutive INR tests of each patient was correlated with the time in therapeutic range (TTR). Altogether, 251,916 INR measurements performed in 4408 patients over a period of seven years were reviewed. Sixty percent of patients had their INR measured on average every 2 to 3 weeks, and most others were followed at intervals of 4 weeks or longer. A small proportion (3.6%) had their INR measured on average every week. A significant decline in the time in therapeutic range was observed as the intervals between tests increased. At intervals of one to three weeks the TTR was 48%, at 4 weeks it was 45%, and at 5 weeks 41% (P < 0.0005). A five percent increment in TTR was observed if more tests were performed at multiples of exactly 7 days (43% vs 48%, P < 0.0001). Better control, with an increase in the TTR, was observed in patients with atrial fibrillation when prothrombin time tests were performed at regular intervals of no longer than 3 weeks.
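TTR is conventionally computed by the Rosendaal linear-interpolation method, which the sketch below implements under the assumption of whole-day test dates; the INR series itself is hypothetical:

```python
def ttr(days, inrs, low=2.0, high=3.0):
    # Rosendaal method: assume INR changes linearly between consecutive
    # tests, interpolate a value for every day, and return the fraction
    # of days with low <= INR <= high.
    in_range = total = 0
    for (d0, v0), (d1, v1) in zip(zip(days, inrs),
                                  zip(days[1:], inrs[1:])):
        span = d1 - d0
        for k in range(span):
            v = v0 + (v1 - v0) * k / span
            total += 1
            if low <= v <= high:
                in_range += 1
    return in_range / total

# INR measured on days 0, 7 and 14, with values 2.5, 2.5 and 3.5.
frac = ttr([0, 7, 14], [2.5, 2.5, 3.5])
```

Because every day between tests is interpolated, longer gaps between measurements let undetected excursions out of range accumulate, which is consistent with the decline in TTR at longer testing intervals reported above.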
Preventive care and recall intervals. Targeting of services in child dental care in Norway.
Wang, N J; Aspelund, G Ø
2010-03-01
The skewed distribution of caries has made the use of a high-risk strategy in child dental services attractive. The purpose of this study was to describe the preventive dental care given and the recall intervals used for children and adolescents in a low-caries-risk population, and to study how the time spent on preventive care and the length of the intervals were associated with characteristics of the children and factors related to care delivery. The time spent on and type of preventive care, the recall intervals, and the oral health and health behaviour of children and adolescents three to 18 years of age (n = 576), together with the preventive services delivered, were registered at routine dental examinations in the public dental services. The time used for preventive dental care was on average 22% of the total time used in a course of treatment (7.3 of 33.4 minutes). Less than 15% of the variation in time spent on prevention was explained by oral health, oral health behaviours and other characteristics of the children and the service delivery. The mean (SD) recall interval was 15.4 (4.6) months, and 55% of the children were given intervals equal to or longer than 18 months. Approximately 30% of the variation in the length of the recall intervals was explained by characteristics of the child and the service delivery. The time used for preventive dental care of children in a low-risk population was standardized, while the recall intervals were to a certain extent individualized according to dental health and dental health behaviour.
Vaccines for preventing influenza in healthy adults.
Demicheli, V; Rivetti, D; Deeks, J J; Jefferson, T O
2001-01-01
Three different types of influenza vaccines are currently produced worldwide. None is traditionally targeted to healthy adults. Despite the publication of a large number of clinical trials, there is still substantial uncertainty about the clinical effectiveness of influenza vaccines, and this has a negative impact on their acceptance and uptake. To identify, retrieve and assess all studies evaluating the effects of vaccines on influenza in healthy adults. To assess the effectiveness of vaccines in preventing cases of influenza in healthy adults. To estimate the frequency of adverse effects associated with influenza vaccination in healthy adults. MEDLINE was searched using the strategy of the Cochrane Acute Respiratory Infections Group. The bibliographies of retrieved articles, the Cochrane Controlled Trials Register (CCTR), and EMBASE (1990 to 1997) were also searched, together with a handsearch of the journal Vaccine from its first issue to the end of 1997 (Jefferson and Jefferson, 1996; Jefferson, 1998). We wrote to vaccine manufacturers and to the first or corresponding authors of studies in the review. Any randomised or quasi-randomised studies comparing influenza vaccines in humans with placebo, control vaccines or no intervention, or comparing types, doses or schedules of influenza vaccine, were eligible. Live, attenuated or killed vaccines or fractions thereof, administered by any route, irrespective of antigenic configuration, were considered. Only studies assessing protection from exposure to naturally occurring influenza in healthy individuals aged 14 to 60 (irrespective of influenza immune status) were considered. Both clinically defined cases and serologically confirmed cases of influenza were considered as outcomes, according to the authors' definitions. Time off work, complication rates and hospitalisation rates were considered, together with adverse effects.
Vaccine schedules were analysed including one component matching the recommended vaccine (WHO or government recommendations) for the year of the study, and whether they matched the circulating viral subtypes. The recommended live aerosol vaccines reduced the number of cases of serologically confirmed influenza A by 48% (95% confidence interval 24% to 64%), whilst recommended inactivated parenteral vaccines had a vaccine efficacy of 68% (95% confidence interval 49% to 79%). The vaccines were less effective in reducing clinical influenza cases, with efficacies of 13% and 24% respectively. Use of the vaccine significantly reduced time off work, but only by 0.4 days for each influenza episode (95% confidence interval 0.1 to 0.8 days). Analysis of vaccines matching the circulating strain gave higher estimates of efficacy, whilst inclusion of all other vaccines reduced the efficacy. Influenza vaccines are effective in reducing serologically confirmed cases of influenza A. However, they are not as effective in reducing cases of clinical influenza. The use of WHO recommended vaccines appears to enhance their effectiveness in practice.
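Vaccine efficacy and its confidence interval follow from the risk ratio. The sketch below uses the standard normal approximation on log(RR); the case counts are hypothetical, chosen only so the point estimate matches the 68% figure for inactivated vaccines:

```python
import math

def vaccine_efficacy(cases_vacc, n_vacc, cases_plac, n_plac, z=1.96):
    # VE = 1 - RR, with a 95% CI from the usual normal approximation
    # on log(RR) for two independent binomial samples.
    rr = (cases_vacc / n_vacc) / (cases_plac / n_plac)
    se = math.sqrt(1 / cases_vacc - 1 / n_vacc
                   + 1 / cases_plac - 1 / n_plac)
    lo = 1 - math.exp(math.log(rr) + z * se)
    hi = 1 - math.exp(math.log(rr) - z * se)
    return 1 - rr, lo, hi

# Hypothetical trial: 32/1000 influenza cases among vaccinated,
# 100/1000 among placebo recipients.
ve, lo, hi = vaccine_efficacy(32, 1000, 100, 1000)
```

A meta-analysis like this one pools such per-trial RRs (with inverse-variance weights) rather than summing raw counts, but the per-trial arithmetic is as above.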
NASA Astrophysics Data System (ADS)
Miller, Avery
Consider a set of prisoners that want to gossip with one another, and suppose that these prisoners are located at fixed locations (e.g., in jail cells) along a corridor. Each prisoner has a way to broadcast messages (e.g. by voice or contraband radio) with transmission radius R and interference radius R' ≥ R. We study synchronous algorithms for this problem (that is, prisoners are allowed to speak at regulated intervals) including two restricted subclasses. We prove exact upper and lower bounds on the gossiping completion time for all three classes. We demonstrate that each restriction placed on the algorithm results in decreasing performance.
Computational problems in autoregressive moving average (ARMA) models
NASA Technical Reports Server (NTRS)
Agarwal, G. C.; Goodarzi, S. M.; Oneill, W. D.; Gottlieb, G. L.
1981-01-01
The choice of the sampling interval and the selection of the order of the model in time series analysis are considered. Band limited (up to 15 Hz) random torque perturbations are applied to the human ankle joint. The applied torque input, the angular rotation output, and the electromyographic activity using surface electrodes from the extensor and flexor muscles of the ankle joint are recorded. Autoregressive moving average models are developed. A parameter constraining technique is applied to develop more reliable models. The asymptotic behavior of the system must be taken into account during parameter optimization to develop predictive models.
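Model-order selection of the kind described can be sketched with a Yule-Walker fit plus an information criterion; this is a generic illustration, not the authors' procedure (the `levinson` helper, the AIC rule, and the synthetic AR(2) data are all assumptions):

```python
import math
import random

def acf(x, max_lag):
    """Biased sample autocorrelation, normalized so that acf[0] == 1."""
    n = len(x)
    m = sum(x) / n
    x0 = [v - m for v in x]
    denom = sum(v * v for v in x0)
    return [sum(x0[t] * x0[t + k] for t in range(n - k)) / denom
            for k in range(max_lag + 1)]

def levinson(r, p):
    """Yule-Walker AR(p) fit via the Levinson-Durbin recursion.
    Returns (AR coefficients, normalized prediction-error variance)."""
    a, e = [], 1.0
    for k in range(1, p + 1):
        lam = (r[k] - sum(a[j] * r[k - 1 - j] for j in range(k - 1))) / e
        a = [a[j] - lam * a[k - 2 - j] for j in range(k - 1)] + [lam]
        e *= 1 - lam * lam
    return a, e

random.seed(0)
# Synthetic AR(2) data: x_t = 0.6 x_{t-1} - 0.3 x_{t-2} + white noise.
x = [0.0, 0.0]
for _ in range(2000):
    x.append(0.6 * x[-1] - 0.3 * x[-2] + random.gauss(0.0, 1.0))

n, r = len(x), acf(x, 8)
# AIC(p) = n * ln(prediction-error variance) + 2p; keep the minimizer.
aic = {p: n * math.log(levinson(r, p)[1]) + 2 * p for p in range(1, 9)}
best = min(aic, key=aic.get)
print(best)
```

The penalty term 2p is what keeps the criterion from always preferring the largest model, since the prediction-error variance can only decrease as the order grows.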
Xiong, Wenjun; Yu, Xinghuo; Chen, Yao; Gao, Jie
2017-06-01
This brief investigates the quantized iterative learning problem for digital networks with time-varying topologies. The information is first encoded as symbolic data and then transmitted. After the data are received, a decoder is used by the receiver to get an estimate of the sender's state. Iterative learning quantized communication is considered in the process of encoding and decoding. A sufficient condition is then presented to achieve the consensus tracking problem in a finite interval using the quantized iterative learning controllers. Finally, simulation results are given to illustrate the usefulness of the developed criterion.
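The encode-transmit-decode step can be illustrated with a uniform quantizer in which only the quantized innovation (state minus the shared estimate) is sent as symbolic data; this is a generic sketch of quantized communication, not the paper's specific scheme (the function names, `step`, and the state sequence are assumptions):

```python
def quantize(v, step):
    """Uniform mid-tread quantizer: map v to the nearest multiple of step."""
    return step * round(v / step)

def transmit(states, step):
    """Encode each state update as symbolic (quantized) data and decode it.

    Encoder and decoder track the same estimate; only the quantized
    innovation (state - estimate) crosses the channel. Returns the
    decoder's estimate after each transmission.
    """
    est, history = 0.0, []
    for s in states:
        symbol = quantize(s - est, step)   # encoder output (what is sent)
        est = est + symbol                 # decoder update
        history.append(est)
    return history

states = [0.30, 0.55, 0.72, 0.90]
print(transmit(states, step=0.25))
```

Because the encoder quantizes the innovation rather than the raw state, the decoder's error stays bounded by half the quantization step at every iteration.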
Life-times of quantum resonances through the Geometrical Phase Propagator Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pavlou, G.E.; Karanikas, A.I.; Diakonos, F.K., E-mail: fdiakono@phys.uoa.gr
We employ the recently introduced Geometric Phase Propagator Approach (GPPA) (Diakonos et al., 2012) to develop an improved perturbative scheme for the calculation of life-times in driven quantum systems. This incorporates a resummation of the contributions of virtual processes starting and ending at the same state in the considered time interval. The proposed procedure allows for a strict determination of the conditions leading to finite life-times in a general driven quantum system by isolating the resummed terms in the perturbative expansion contributing to their generation. To illustrate how the derived conditions apply in practice, we consider the effect of driving in a system with a purely discrete energy spectrum, as well as in a system for which the eigenvalue spectrum contains a continuous part. We show that in the first case, when the driving contains a dense set of frequencies acting as noise on the system, the corresponding bound states acquire a finite life-time. When the energy spectrum also contains a continuous set of eigenvalues, the bound states, due to the driving, couple to the continuum and become quasi-bound resonances. The benchmark of this change is the appearance of a Fano-type peak in the associated transmission profile. In both cases the corresponding life-time can be efficiently estimated within the reformulated GPPA approach.
NASA Astrophysics Data System (ADS)
Galves, A.; Löcherbach, E.
2013-06-01
We consider a new class of non-Markovian processes with a countable number of interacting components. At each time unit, each component can take two values, indicating whether or not it has a spike at that precise moment. The system evolves as follows. For each component, the probability of having a spike at the next time unit depends on the entire time evolution of the system after the last spike time of that component. This class of systems extends in a non-trivial way both the interacting particle systems, which are Markovian (Spitzer in Adv. Math. 5:246-290, 1970), and the stochastic chains with memory of variable length, which have finite state space (Rissanen in IEEE Trans. Inf. Theory 29(5):656-664, 1983). These features make it suitable for describing the time evolution of biological neural systems. We construct a stationary version of the process by using a probabilistic tool, namely a Kalikow-type decomposition either in random environment or in space-time. This construction implies uniqueness of the stationary process. Finally we consider the case where the interactions between components are given by a critical directed Erdös-Rényi-type random graph with a large but finite number of components. In this framework we obtain an explicit upper bound for the correlation between successive inter-spike intervals, which is compatible with previous empirical findings.
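A toy simulation of a single component whose spike probability depends only on the time elapsed since its own last spike gives the flavor of such processes; this is a deliberate simplification of the paper's dependence on the full post-spike history, and all names and the hazard function are illustrative:

```python
import random

def simulate(n_steps, prob_given_elapsed, seed=0):
    """Discrete-time spiking component: at each time unit, the spike
    probability is a function of the time elapsed since the component's
    own last spike (a stand-in for dependence on the post-spike history).
    Returns the 0/1 spike train.
    """
    rng = random.Random(seed)
    train, last = [], None
    for t in range(n_steps):
        elapsed = t - last if last is not None else t + 1
        spike = 1 if rng.random() < prob_given_elapsed(elapsed) else 0
        if spike:
            last = t
        train.append(spike)
    return train

# A hazard rising with elapsed time: long silences become increasingly
# unlikely, shaping the distribution of inter-spike intervals.
train = simulate(200, lambda k: min(1.0, 0.05 * k))
print(sum(train))
```

In the full model each component's hazard would also depend on the spikes of the other components since its last spike, which is what makes the process non-Markovian and genuinely interacting.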
An interval programming model for continuous improvement in micro-manufacturing
NASA Astrophysics Data System (ADS)
Ouyang, Linhan; Ma, Yizhong; Wang, Jianjun; Tu, Yiliu; Byun, Jai-Hyun
2018-03-01
Continuous quality improvement in micro-manufacturing processes relies on optimization strategies that relate an output performance to a set of machining parameters. However, when determining the optimal machining parameters in a micro-manufacturing process, the economics of continuous quality improvement and decision makers' preference information are typically neglected. This article proposes an economic continuous improvement strategy based on an interval programming model. The proposed strategy differs from previous studies in two ways. First, an interval programming model is proposed to measure the quality level, where decision makers' preference information is considered in order to determine the weight of location and dispersion effects. Second, the proposed strategy is a more flexible approach since it considers the trade-off between the quality level and the associated costs, and leaves engineers a larger decision space through adjusting the quality level. The proposed strategy is compared with its conventional counterparts using an Nd:YLF laser beam micro-drilling process.
NASA Astrophysics Data System (ADS)
Kurniati, Devi; Hoyyi, Abdul; Widiharih, Tatik
2018-05-01
Time series data are a series of observations taken or measured at equal time intervals. Time series analysis is used to analyse such data while accounting for the effect of time. The purpose of time series analysis is to characterize the patterns of a data set and to predict future values based on past data. One of the forecasting methods used for time series data is the state space model. This study discusses the modeling and forecasting of electric energy consumption using the state space model for univariate data. The modeling stage begins with optimal Autoregressive (AR) order selection, followed by determination of the state vector through canonical correlation analysis, parameter estimation, and forecasting. The results of this research show that a state space model of order 4 forecasts electric energy consumption with a Mean Absolute Percentage Error (MAPE) of 3.655%, which places the model in the very good forecasting category.
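The MAPE figure quoted above is simply the mean of the absolute percentage errors between actual and forecast values; a minimal sketch (the data here are made up):

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    return 100.0 * sum(abs((a - f) / a)
                       for a, f in zip(actual, forecast)) / len(actual)

# Hypothetical consumption series vs. model forecasts.
print(round(mape([100, 200, 400], [90, 210, 420]), 3))  # 6.667
```

On the commonly cited Lewis scale, a MAPE below 10% counts as highly accurate forecasting, consistent with the abstract's "very good" category.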
NASA Astrophysics Data System (ADS)
Laban, Shaban; El-Desouky, Aly
2013-04-01
The monitoring of real-time systems is a challenging and complicated process, so there is a continuous need to improve it through new intelligent techniques and algorithms for detecting exceptions and anomalous behaviours and for generating the necessary alerts during workflow monitoring of such systems. Interval-based (or period-based) theories have been discussed, analysed, and used by many researchers in Artificial Intelligence (AI), philosophy, and linguistics. As explained by Allen, there are 13 possible relations between any two intervals. There have also been many studies of interval-based temporal reasoning and logics over the past decades. Interval-based theories can be used for monitoring real-time interval-based data processing. However, increasing the number of processed intervals makes the implementation of such theories a complex and time-consuming process, as the number of relationships between intervals grows rapidly with their number. To overcome this problem, this paper presents a Rule-based Interval State Machine Algorithm (RISMA) for processing, monitoring, and analysing the behaviour of interval-based data received from real-time sensors. The proposed intelligent algorithm uses the Interval State Machine (ISM) approach to model any number of interval-based data items into well-defined states, as well as to reason over them. An interval-based state transition model and methodology are presented to identify the relationships between the different states of the proposed algorithm. By using such a model, the unlimited number of relationships between large numbers of intervals can be reduced to only 18 direct relationships using the proposed well-defined states. 
For testing the proposed algorithm, the necessary inference rules and code were designed and applied to the continuous data received in near real-time from the stations of the International Monitoring System (IMS) by the International Data Centre (IDC) of the Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO). The CLIPS expert system shell was used as the main rule engine for implementing the algorithm rules, and the Python programming language with the module "PyCLIPS" was used to build the necessary implementation code. More than 1.7 million intervals constituting the Concise List of Frames (CLF) from 20 different seismic stations were used to evaluate the proposed algorithm and to assess station behaviour and performance. The initial results showed that the proposed algorithm can help in better understanding the operation and performance of those stations. Important information, such as alerts and station performance parameters, can be derived from the proposed algorithm. For IMS interval-based data, at any period of time it is possible to analyse station behaviour, determine missing data, generate the necessary alerts, and measure some station performance attributes. The details of the proposed algorithm, methodology, implementation, experimental results, advantages, and limitations of this research are presented. Finally, future directions and recommendations are discussed.
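Allen's 13 relations between two intervals, mentioned above, can be classified with a handful of endpoint comparisons; a minimal sketch (the function and relation names are ours, and intervals are (start, end) pairs with start < end):

```python
def allen(a, b):
    """Return the Allen relation of interval a = (s, e) relative to b.

    Covers all 13 relations: 6 base relations, their 6 inverses, and
    "equals". The order of the checks matters: endpoint coincidences
    (meets, starts, finishes, equals) are tested before the strict cases.
    """
    (s1, e1), (s2, e2) = a, b
    if e1 < s2:  return "before"
    if e2 < s1:  return "after"
    if e1 == s2: return "meets"
    if e2 == s1: return "met-by"
    if s1 == s2 and e1 == e2: return "equals"
    if s1 == s2: return "starts" if e1 < e2 else "started-by"
    if e1 == e2: return "finishes" if s1 > s2 else "finished-by"
    if s2 < s1 and e1 < e2: return "during"
    if s1 < s2 and e2 < e1: return "contains"
    return "overlaps" if s1 < s2 else "overlapped-by"

print(allen((0, 5), (5, 9)))   # intervals that share only an endpoint
print(allen((1, 4), (0, 9)))   # one interval strictly inside the other
```

A rule engine like CLIPS would typically express each of these cases as a separate rule over the endpoint facts; the point of the paper's state-machine reformulation is precisely to avoid testing every pair of intervals.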
Patel, Praveen J; Devonport, Helen; Sivaprasad, Sobha; Ross, Adam H; Walters, Gavin; Gale, Richard P; Lotery, Andrew J; Mahmood, Sajjad; Talks, James S; Napier, Jackie
2017-01-01
National recommendations on continued administration of aflibercept solution for injection after the first year of treatment for neovascular age-related macular degeneration (nAMD) have been developed by an expert panel of UK retina specialists, based on clinician experience and treatment outcomes seen in year 2. The 2017 update reiterates that the treatment goal is to maintain or improve the macular structural and functional gains achieved in year 1 while attempting to reduce or minimize the treatment burden, recognizing the need for ongoing treatment. At the end of year 1 (ie, the decision visit at month 11), two treatment options should be considered: do not extend the treatment interval and maintain fixed 8-weekly dosing, or extend the treatment interval using a treat-and-extend regimen up to a maximum 12 weeks. Criteria for considering not extending the treatment interval are persistent macular fluid with stable vision, recurrent fluid, decrease in vision in the presence of fluid, macular hemorrhage, new choroidal neovascularization or any other sign(s) of exudative disease activity considered vision threatening in the opinion of the treating clinician. Treatment extension is recommended for eyes with a dry macula (ie, without macular fluid) and stable vision. Under both options, the treatment interval may be shortened if visual and/or anatomic outcomes deteriorate. Monitoring without treatment may be considered for eyes with a fluid-free macula for a minimum duration of 48 weeks. A patient completing one full year of monitoring without requiring injections may be considered for discharge from clinic. The treatment algorithm incorporates return to fixed 8-weekly dosing for disease reactivation during treatment extension and reinstatement of treatment for disease recurrence following discontinuation or discharge. 
For bilateral nAMD, either the eye requiring the more intensive treatment or the eye with the better vision, guided by local clinical practice, should determine the retreatment schedule overall. PMID:29184385
Funke, K; Wörgötter, F
1995-01-01
1. The spike interval pattern during the light responses of 155 on- and 81 off-centre cells of the dorsal lateral geniculate nucleus (LGN) was studied in anaesthetized and paralysed cats by the use of a novel analysis. Temporally localized interval distributions were computed from a 100 ms time window, which was shifted along the time axis in 10 ms steps, resulting in a 90% overlap between two adjacent windows. For each step the interval distribution was computed inside the time window with 1 ms resolution, and plotted as a greyscale-coded pixel line orthogonal to the time axis. For visual stimulation, light or dark spots of different size and contrast were presented with different background illumination levels. 2. Two characteristic interval patterns were observed during the sustained response component of the cells. Mainly on-cells (77%) responded with multimodal interval distributions, resulting in elongated 'bands' in the 2-dimensional time window plots. In similar situations, the interval distributions for most (71%) off-cells were rather wide and featureless. In those cases where interval bands (i.e. multimodal interval distributions) were observed for off-cells (14%), they were always much wider than for the on-cells. This difference between the on- and off-cell population was independent of the background illumination and the contrast of the stimulus. Y on-cells also tended to produce wider interval bands than X on-cells. 3. For most stimulation situations the first interval band was centred around 6-9 ms, which has been called the fundamental interval; higher order bands are multiples thereof. The fundamental interval shifted towards larger sizes with decreasing stimulus contrast. Increasing stimulus size, on the other hand, resulted in a redistribution of the intervals into higher order bands, while at the same time the location of the fundamental interval remained largely unaffected. 
This was interpreted as an effect of the increasing surround inhibition at the geniculate level, by which individual retinal EPSPs were cancelled. A changing level of adaptation can result in a mixed shift/redistribution effect because of the changing stimulus contrast and changing level of tonic inhibition. 4. The occurrence of interval bands is not directly related to the shape of the autocorrelation function, which can be flat, weakly oscillatory or strongly oscillatory, regardless of the interval band pattern. 5. A simple computer model was devised to account for the observed cell behaviour. The model is highly robust against parameter variations.(ABSTRACT TRUNCATED AT 400 WORDS) PMID:7562612
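The sliding-window interval analysis described in point 1 (100 ms window advanced in 10 ms steps, interval histograms at 1 ms resolution) can be sketched as follows; the function names and the regular 8 ms spike train are illustrative:

```python
def windowed_isi(spikes, win=100, step=10, max_isi=50):
    """Temporally localized inter-spike-interval (ISI) distributions.

    For each `win`-ms window advanced in `step`-ms steps (90% overlap
    for 100/10), histogram the ISIs of the spikes falling inside the
    window at 1 ms resolution. Returns (window_start, histogram) pairs;
    plotting each histogram as a greyscale column reproduces the
    2-dimensional time-window plots described above.
    """
    out, t = [], 0
    t_end = max(spikes)
    while t + win <= t_end:
        inside = [s for s in spikes if t <= s < t + win]
        hist = [0] * max_isi
        for a, b in zip(inside, inside[1:]):
            isi = int(b - a)
            if isi < max_isi:
                hist[isi] += 1
        out.append((t, hist))
        t += step
    return out

# Perfectly regular 8 ms firing: every window's histogram has all its
# mass in the bin of the "fundamental interval" (8 ms).
spikes = list(range(0, 400, 8))
rows = windowed_isi(spikes)
print(rows[0][1][8])
```

For real spike trains the mass spreads into higher-order bands at multiples of the fundamental interval, which is exactly the banding the study reports for on-cells.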
Rath, Timo; Tontini, Gian E; Vieth, Michael; Nägel, Andreas; Neurath, Markus F; Neumann, Helmut
2016-06-01
In order to reduce time, costs, and risks associated with resection of diminutive colorectal polyps, the American Society for Gastrointestinal Endoscopy (ASGE) recently proposed performance thresholds that new technologies should meet for the accurate real-time assessment of histology of colorectal polyps. In this study, we prospectively assessed whether laser-induced fluorescence spectroscopy (LIFS), using the new WavSTAT4 optical biopsy system, can meet the ASGE criteria. 27 patients undergoing screening or surveillance colonoscopy were included. The histology of 137 diminutive colorectal polyps was predicted in real time using LIFS and findings were compared with the results of conventional histopathological examination. The accuracy of predicting polyp histology with WavSTAT4 was assessed according to the ASGE criteria. The overall accuracy of LIFS using WavSTAT4 for predicting polyp histology was 84.7 % with sensitivity, specificity, and negative predictive value (NPV) of 81.8 %, 85.2 %, and 96.1 %. When only distal colorectal diminutive polyps were considered, the NPV for excluding adenomatous histology increased to 100 % (accuracy 82.4 %, sensitivity 100 %, specificity 80.6 %). On-site, LIFS correctly predicted the recommended surveillance intervals with an accuracy of 88.9 % (24/27 patients) when compared with histology-based United States guideline recommendations; in the 3 patients for whom LIFS- and histopathology-based recommended surveillance intervals differed, LIFS predicted shorter surveillance intervals. From the data of this pilot study, LIFS using the WavSTAT4 system appears accurate enough to allow distal colorectal polyps to be left in place and nearly reaches the threshold to "resect and discard" them without pathologic assessment. WavSTAT4 therefore has the potential to reduce costs and risks associated with the removal of diminutive colorectal polyps. © Georg Thieme Verlag KG Stuttgart · New York.
The impact of peritoneal dialysis-related peritonitis on mortality in peritoneal dialysis patients.
Ye, Hongjian; Zhou, Qian; Fan, Li; Guo, Qunying; Mao, Haiping; Huang, Fengxian; Yu, Xueqing; Yang, Xiao
2017-06-05
Results concerning the association between peritoneal dialysis-related peritonitis and mortality in peritoneal dialysis patients are inconclusive, one potential reason being that the time-dependent effect of peritonitis has rarely been considered in previous studies. This study aimed to evaluate whether peritonitis has a negative impact on mortality in a large cohort of peritoneal dialysis patients. We also assessed the changing impact of peritonitis on patient mortality with respect to duration of follow-up. This retrospective cohort study included incident patients who started peritoneal dialysis from 1 January 2006 to 31 December 2011. Episodes of peritonitis were recorded at the time of onset, and peritonitis was parameterized as a time-dependent variable for analysis. We used the Cox regression model to assess whether peritonitis has a negative impact on mortality. A total of 1321 patients were included. The mean age was 48.1 ± 15.3 years, 41.3% were female, and 23.5% had diabetes mellitus. The median (interquartile range) follow-up time was 34 (21-48) months. After adjusting for confounders, peritonitis was independently associated with a 95% increased risk of all-cause mortality (hazard ratio, 1.95; 95% confidence interval: 1.46-2.60), a 90% increased risk of cardiovascular mortality (hazard ratio, 1.90; 95% confidence interval: 1.28-2.81) and a nearly 4-fold increased risk of infection-related mortality (hazard ratio, 4.94; 95% confidence interval: 2.47-9.86). Further analyses showed that peritonitis was not significantly associated with mortality within 2 years of peritoneal dialysis initiation, but strongly influenced mortality in patients dialysed longer than 2 years. Peritonitis was independently associated with higher risk of all-cause, cardiovascular and infection-related mortality in peritoneal dialysis patients, and its impact on mortality was more significant in patients with longer peritoneal dialysis duration.
Relationship between menstruation status and work conditions in Japan.
Nishikitani, Mariko; Nakao, Mutsuhiro; Tsurugano, Shinobu; Inoure, Mariko; Yano, Eiji
2017-01-01
Menstrual problems can significantly impact daily and work life. In reaction to a shrinking population, the Japanese government is encouraging more women to participate in the labor force. Actual success in achieving this aim, however, is limited. Specifically, participation in the workforce by women during their reproductive years is impacted by their health, which involves not only work conditions, but also traditional family circumstances. Therefore, it is important to further assess and gather more information about the health status of women who work during their reproductive years in Japan. Specifically, women's health can be represented by menstruation status, which is a pivotal indicator. In this study, we assessed the association between short rest periods in work intervals and menstruation and other health status indicators among female workers in Japan. Study participants were recruited from the alumnae of a university, which provided a uniform educational level. All 9864 female alumnae were asked to join the survey and 1630 (17%) accepted. The final sample of study participants (n = 505) were aged 23-43 years, had maintained the same job status for at least 1 year, were not shift workers, had no maternal status, and had complete information on the relevant variables. The participants were divided into two groups according to interval time, with 11 h between the end of work and the resumption of daily work as a benchmark. This interval time was based on EU regulations and the goal set by the government of Japan. Health outcomes included: menstrual cycle, dysmenorrhoea symptoms, anxiety regarding health, and satisfaction with health. Multiple logistic regression analyses were conducted to estimate the odds ratios (ORs) and 95% confidence intervals (CIs) for health indexes in association with interval time by adjusting for confounding variables that included both psychosocial and biological factors. 
We compared the health status of women in the workforce with and without a sufficient interval time of 11 h/day. Workers who had a short interval time had a significantly higher prevalence of anxiety about health and dissatisfaction with their health. For menstruation status, only abnormal menstrual cycles were observed more often among workers in the short interval group than in the long interval group. However, this association disappeared when biological confounding factors were adjusted for in a multivariable regression model. Dysmenorrhoea symptoms did not show a statistically significant association with short interval time. This study found a significant association between a short interval time of less than 11 h/day and subjective health indicators and the menstrual health status of women in the workforce. Menstrual health was more affected by biological factors than by psychosocial factors. A long working time and short interval time could increase worker anxiety and dissatisfaction and may adversely affect the menstrual cycle.
NASA Astrophysics Data System (ADS)
Simonsen, I.; Jensen, M. H.; Johansen, A.
2002-06-01
In stochastic finance, one traditionally considers the return as a competitive measure of an asset, i.e., the profit generated by that asset after some fixed time span Δt, say one week or one year. This measures how well (or how badly) the asset performs over that given period of time. It has been established that the distribution of returns exhibits "fat tails", indicating that large returns occur more frequently than expected from standard Gaussian stochastic processes [1-3]. Instead of estimating this "fat tail" distribution of returns, we propose an alternative approach, outlined by the following question: What is the smallest time interval needed for an asset to cross a fixed return level of, say, 10%? For a particular asset, we refer to this time as the investment horizon and to the corresponding distribution as the investment horizon distribution. This latter distribution complements that of returns and provides new and possibly crucial information for portfolio design and risk management, as well as for the pricing of more exotic options. By considering historical financial data, exemplified by the Dow Jones Industrial Average, we obtain a novel set of probability distributions for the investment horizons which can be used to estimate the optimal investment horizon for a stock or a futures contract.
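The investment-horizon question can be phrased as a first-passage computation over log-returns; the sketch below uses a geometric random walk as a stand-in for real price data (drift, volatility, and function names are assumptions for illustration):

```python
import math
import random

def investment_horizon(prices, level=0.10):
    """First-passage times: for each starting day i, the smallest number
    of days D such that the log-return from day i to day i+D first
    reaches `level`. Starting days that never cross are skipped, so the
    result is the empirical investment-horizon sample for this series.
    """
    logp = [math.log(p) for p in prices]
    horizons = []
    for i in range(len(logp)):
        for j in range(i + 1, len(logp)):
            if logp[j] - logp[i] >= level:
                horizons.append(j - i)
                break
    return horizons

random.seed(1)
# Geometric random walk with small positive drift as a toy price series.
p = [100.0]
for _ in range(1000):
    p.append(p[-1] * math.exp(random.gauss(0.0005, 0.01)))

h = investment_horizon(p, level=0.10)
print(len(h), min(h))
```

Histogramming `h` gives the investment-horizon distribution discussed in the abstract; its most probable value is the optimal investment horizon for the chosen return level.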
Revealing Real-Time Emotional Responses: a Personalized Assessment based on Heartbeat Dynamics
NASA Astrophysics Data System (ADS)
Valenza, Gaetano; Citi, Luca; Lanatá, Antonio; Scilingo, Enzo Pasquale; Barbieri, Riccardo
2014-05-01
Emotion recognition through computational modeling and analysis of physiological signals has been widely investigated in the last decade. Most of the proposed emotion recognition systems require relatively long-time series of multivariate records and do not provide accurate real-time characterizations using short-time series. To overcome these limitations, we propose a novel personalized probabilistic framework able to characterize the emotional state of a subject through the analysis of heartbeat dynamics exclusively. The study includes thirty subjects presented with a set of standardized images gathered from the international affective picture system, alternating levels of arousal and valence. Due to the intrinsic nonlinearity and nonstationarity of the RR interval series, a specific point-process model was devised for instantaneous identification considering autoregressive nonlinearities up to the third-order according to the Wiener-Volterra representation, thus tracking very fast stimulus-response changes. Features from the instantaneous spectrum and bispectrum, as well as the dominant Lyapunov exponent, were extracted and considered as input features to a support vector machine for classification. Results, estimating emotions each 10 seconds, achieve an overall accuracy in recognizing four emotional states based on the circumplex model of affect of 79.29%, with 79.15% on the valence axis, and 83.55% on the arousal axis.
A hybrid of monopoly and perfect competition model for hi-tech products
NASA Astrophysics Data System (ADS)
Yang, P. C.; Wee, H. M.; Pai, S.; Yang, H. J.; Wee, P. K. P.
2010-11-01
For hi-tech products, the demand rate, the component cost and the selling price usually decline significantly with time. Under perfect competition, shortages usually result in lost sales, while in a monopoly, shortages are completely backordered. However, neither perfect competition nor monopoly exists in pure form. Therefore, there is a need to develop a replenishment model considering a hybrid of perfect competition and monopoly when the cost, price and demand are decreasing simultaneously. A numerical example and sensitivity analysis are carried out to illustrate this model. The results show that a higher decline rate in the component cost leads to a smaller service level and a larger replenishment interval. When the component cost decline rate increases and the selling price decline rate decreases simultaneously, the replenishment interval decreases. Under perfect competition it is better to maintain a high service level, while in the case of monopoly, keeping a low service level is better because of complete backordering.
Ishii, Yoshinori; Noguchi, Hideo; Takeda, Mitsuhiro; Sato, Junko; Toyabe, Shin-Ichi
2014-01-01
The purpose of this study was to evaluate the interval between the first and second operations for staged total knee arthroplasties (TKAs) in patients with bilateral knee osteoarthritis. Depending on satisfactory preoperative health status, the patients determined the timing of the second operation. We also analysed correlations between the interval and patient characteristics. Eighty-six patients with bilateral knee osteoarthritis were analysed. The mean follow-up time from the first TKA was 96 months. The side of the first TKA was chosen by the patients. The timing of the second TKA was determined by the patients, depending on their perceived ability to tolerate the additional pain and limitations to activities of daily living. The median interval between the first and second operations was 12.5 months, with a range of 2 to 113 months. In 43 (50%) patients, the interval was <12 months. There was no difference in the interval between females and males (p=0.861), and no correlation between the interval and body mass index or age. There was weak correlation between the year of the first TKA and the interval (R=-0.251, p=0.020), with the interval getting significantly shorter as the years progressed (p=0.032). The median interval between the first and second operations in patients who underwent staged TKAs for bilateral knee osteoarthritis was about 1 year. The results of the current study may help patients and physicians to plan effective treatment strategies for staged TKAs. Level II. Copyright © 2013 Elsevier B.V. All rights reserved.
Why noise is useful in functional and neural mechanisms of interval timing?
2013-01-01
Background The ability to estimate durations in the seconds-to-minutes range - interval timing - is essential for survival and adaptation, and its impairment leads to severe cognitive and/or motor dysfunctions. The response rate near a memorized duration has a Gaussian shape centered on the to-be-timed interval (criterion time). The width of the Gaussian-like distribution of responses increases linearly with the criterion time, i.e., interval timing obeys the scalar property. Results We presented analytical and numerical results based on the striatal beat frequency (SBF) model showing that parameter variability (noise) mimics behavioral data. A key functional block of the SBF model is the set of oscillators that provide the time base for the entire timing network. Implementing the oscillators block as simplified phase (cosine) oscillators has the additional advantage that it is analytically tractable. We also checked numerically that the scalar property emerges in the presence of memory variability by using biophysically realistic Morris-Lecar oscillators. First, we predicted analytically and tested numerically that in a noise-free SBF model the output function can be approximated by a Gaussian. However, in a noise-free SBF model the width of the Gaussian envelope is independent of the criterion time, which violates the scalar property. We showed analytically and verified numerically that small fluctuations of the memorized criterion time lead to the scalar property of interval timing. Conclusions Noise is ubiquitous in the form of small fluctuations of the intrinsic frequencies of the neural oscillators, errors in recording/retrieving stored information related to the criterion time, fluctuations in neurotransmitter concentrations, etc. Our model suggests that biological noise plays an essential functional role in SBF interval timing. PMID:23924391
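The coincidence-detection core of the SBF model described above can be sketched in a few lines. This is a minimal, noise-free illustration with simplified cosine oscillators, as the abstract suggests; the frequencies, the number of oscillators, and the function names are assumptions for illustration, not the paper's parameters.

```python
import math

def sbf_output(t, criterion, freqs):
    """Striatal beat frequency readout: coincidence between the current
    phases of the oscillators and the phases memorized at the criterion
    time. Noise-free, simplified cosine oscillators."""
    return sum(math.cos(2 * math.pi * f * t) * math.cos(2 * math.pi * f * criterion)
               for f in freqs) / len(freqs)

# 100 oscillators with slightly different intrinsic frequencies (Hz)
freqs = [0.05 + 0.001 * i for i in range(100)]
criterion = 30.0  # to-be-timed interval, in seconds

peak = sbf_output(criterion, criterion, freqs)        # maximal coincidence at t = T
off = sbf_output(criterion + 10.0, criterion, freqs)  # decays away from T
```

In this noise-free form the width of the output envelope does not grow with the criterion time; per the abstract, adding trial-to-trial jitter to the memorized `criterion` is what broadens the envelope in proportion to T and yields the scalar property.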
Ando, Akira; Hagiwara, Yoshihiro; Sekiguchi, Takuya; Koide, Masashi; Kanazawa, Kenji; Watanabe, Takashi; Itoi, Eiji
2017-07-01
This study proposed a new magnetic resonance imaging (MRI) classification of haemodialysis shoulders (HDS) focusing on changes of the rotator cuff and the rotator interval, and risk factors for the development of HDS were examined. Eighty-five shoulders in 72 patients with a chief complaint of shoulder pain during haemodialysis and at least 10 years of haemodialysis were included. They were classified into 5 groups based on the thickness of the rotator cuff and the condition of the rotator interval. Clinical and radiological findings in each grade were examined, and risk factors for the development of HDS were evaluated. Arthroscopic surgeries were performed on 22 shoulders in 20 patients, and arthroscopic findings were also evaluated. Positive correlations with the development of HDS were observed for duration of haemodialysis, positive hepatitis C virus (HCV) infection, and previous haemodialysis-related orthopaedic surgery (P < 0.001 for each). A strong correlation was observed between positive HCV status and the progression of HDS (odds ratio 24.8, 95% confidence interval 5.7-107.6). Arthroscopically, progression of the surrounding soft tissue degeneration was observed, and operative times lengthened with progression of the MRI grade. A new MRI classification of HDS, which may be helpful when considering arthroscopic surgeries, has been proposed. Positive HCV infection was strongly associated with the progression of HDS on MRI. The condition of the rotator interval and the rotator cuff based on the MRI classification should be examined when treating HDS patients. Level of evidence III.
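The odds ratio and 95% confidence interval reported above (24.8, 5.7-107.6) are of the kind computed from a 2x2 exposure/outcome table with Woolf's log-based interval. The sketch below shows that standard construction; the counts are hypothetical, and the study itself used a multivariate model rather than this univariate form.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI from a 2x2 table:
                 outcome+  outcome-
    exposed         a         b
    unexposed       c         d
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts, for illustration only
or_, lo, hi = odds_ratio_ci(20, 10, 5, 50)
```

An interval whose lower bound exceeds 1 (as in the HCV result above) indicates a statistically significant positive association at the 5% level.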
Meng, Ran; Dennison, Philip E; D'Antonio, Carla M; Moritz, Max A
2014-01-01
Increased fire frequency has been shown to promote alien plant invasions in the western United States, resulting in persistent vegetation type change. Short interval fires are widely considered to be detrimental to reestablishment of shrub species in southern California chaparral, facilitating the invasion of exotic annuals and producing "type conversion". However, supporting evidence for type conversion has largely been at local, site scales and over short post-fire time scales. Type conversion has not been shown to be persistent or widespread in chaparral, and past range improvement studies present evidence that chaparral type conversion may be difficult and a relatively rare phenomenon across the landscape. With the aid of remote sensing data covering coastal southern California and a historical wildfire dataset, the effects of short interval fires (<8 years) on chaparral recovery were evaluated by comparing areas that burned twice to adjacent areas burned only once. Twelve pairs of once- and twice-burned areas were compared using normalized burn ratio (NBR) distributions. Correlations between measures of recovery and explanatory factors (fire history, climate and elevation) were analyzed by linear regression. Reduced vegetation cover was found in some lower elevation areas that were burned twice in short interval fires, where non-sprouting species are more common. However, extensive type conversion of chaparral to grassland was not evident in this study. Most variables, with the exception of elevation, were moderately or poorly correlated with differences in vegetation recovery.
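The normalized burn ratio used above is a standard spectral index computed from near-infrared and shortwave-infrared reflectance; its differenced form (dNBR) is commonly used to compare pre- and post-fire conditions. A minimal sketch (exact band choices depend on the sensor and are not specified here):

```python
def nbr(nir, swir):
    """Normalized burn ratio from near-infrared and shortwave-infrared
    reflectance; healthy vegetation gives high values, burned areas low."""
    return (nir - swir) / (nir + swir)

def dnbr(nir_pre, swir_pre, nir_post, swir_post):
    """Differenced NBR: pre-fire minus post-fire; larger positive values
    indicate greater burn severity."""
    return nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)
```

The study compares NBR distributions between once- and twice-burned areas rather than single-pixel values, but the per-pixel index is the building block.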
Outcome of total knee replacement following explantation and cemented spacer therapy.
Ghanem, Mohamed; Zajonz, Dirk; Bollmann, Juliane; Geissler, Vanessa; Prietzel, Torsten; Moche, Michael; Roth, Andreas; Heyde, Christoph-E; Josten, Christoph
2016-01-01
Infection after total knee replacement (TKR) is a serious complication that must be pursued with an effective therapeutic concept. In most cases this means revision arthroplasty, in which one-stage and two-stage procedures are distinguished. Healing of the infection is the conditio sine qua non for re-implantation. This retrospective work presents an assessment of the success rate after two-stage revision arthroplasty of the knee following periprosthetic infection. It further draws conclusions concerning the optimal timing of re-implantation. A total of 34 patients were enrolled in this study between September 2005 and December 2013. Thirty-five re-implantations were carried out following explantation of the total knee prosthesis and implantation of a cemented spacer. The patient group comprised 53% (18) males and 47% (16) females. The average age at re-implantation was 72.2 years (range 54 to 85 years). We evaluated in particular the microbial spectrum, the interval between explantation and re-implantation, the number of surgeries necessary prior to re-implantation, and the postoperative course. We observed reinfections after 31.4% (11) of the re-implantation surgeries. The number of reinfections declined with increasing time interval between explantation and re-implantation. Patients who developed reinfections underwent re-implantation after an average of 4.47 months; those with an uncomplicated course underwent re-implantation after an average of 6.79 months. Nevertheless, we noticed no essential differences in outcome with regard to the number of surgeries carried out prior to re-implantation. Mobile spacers yielded better outcomes than temporary arthrodesis with intramedullary fixation. No uniform strategy of treatment exists after periprosthetic infections. In particular, no optimal timing can be stated for re-implantation.
Our data indicate that a longer time interval between explantation and re-implantation reduces the rate of reinfection. In our view, the optimal timing of re-implantation depends on various patient-specific factors and should therefore be determined individually.
The Behavioral Economics of Choice and Interval Timing
Jozefowiez, J.; Staddon, J. E. R.; Cerutti, D. T.
2009-01-01
We propose a simple behavioral economic model (BEM) describing how reinforcement and interval timing interact. The model assumes a Weber-law-compliant logarithmic representation of time. Associated with each represented time value are the payoffs that have been obtained for each possible response. At a given real time, the response with the highest payoff is emitted. The model accounts for a wide range of data from procedures such as simple bisection, metacognition in animals, economic effects in free-operant psychophysical procedures and paradoxical choice in double-bisection procedures. Although it assumes logarithmic time representation, it can also account for data from the time-left procedure usually cited in support of linear time representation. It encounters some difficulties in complex free-operant choice procedures, such as concurrent mixed fixed-interval schedules as well as some of the data on double bisection, that may involve additional processes. Overall, BEM provides a theoretical framework for understanding how reinforcement and interval timing work together to determine choice between temporally differentiated reinforcers. PMID:19618985
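The BEM decision rule described above (a Weber-law-compliant logarithmic time representation, with the highest-payoff response emitted at the represented time) can be sketched directly. The bin count, time range, and payoff values below are hypothetical, chosen only to illustrate a temporal bisection setup with "short" and "long" anchors.

```python
import math

def log_bin(t, n_bins=64, t_max=120.0):
    """Weber-law-compliant representation: elapsed real time is mapped to a
    logarithmically spaced bin, so temporal resolution degrades with magnitude."""
    t = min(max(t, 1e-6), t_max)
    return min(n_bins - 1, int(n_bins * math.log1p(t) / math.log1p(t_max)))

def choose_response(t, payoff):
    """Emit the response with the highest payoff stored at the represented time."""
    bin_payoffs = payoff[log_bin(t)]
    return max(bin_payoffs, key=bin_payoffs.get)

# Hypothetical bisection payoffs: 'short' reinforced near 2 s, 'long' near 8 s
payoff = [{'short': 0.0, 'long': 0.0} for _ in range(64)]
payoff[log_bin(2.0)]['short'] = 1.0
payoff[log_bin(8.0)]['long'] = 1.0
```

Because the representation is logarithmic, equal ratios of durations (rather than equal differences) map to roughly equal bin separations, which is how the model remains Weber-law-compliant.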
Quantitative analysis of ground penetrating radar data in the Mu Us Sandland
NASA Astrophysics Data System (ADS)
Fu, Tianyang; Tan, Lihua; Wu, Yongqiu; Wen, Yanglei; Li, Dawei; Duan, Jinlong
2018-06-01
Ground penetrating radar (GPR), which can reveal the sedimentary structure and development process of dunes, is widely used to evaluate aeolian landforms. Interpretations of GPR profiles are mostly based on qualitative descriptions of the geometric features of radar reflections. This research quantitatively analyzed the waveform parameter characteristics of different radar units by extracting the amplitude and time interval parameters of GPR data in the Mu Us Sandland in China, and then identified and interpreted different sedimentary structures. The results showed that different types of radar units had specific waveform parameter characteristics. The main waveform parameter characteristics of sand dune radar facies and sandstone radar facies included low amplitudes and wide ranges of time intervals, ranging from 0 to 0.25 and 4 to 33 ns respectively, with mean amplitudes changing gradually with time interval. The amplitude distribution curves of the various sand dune radar facies were similar, all unimodal. The radar surfaces showed high amplitudes with time intervals concentrated in high-value areas, ranging from 0.08 to 0.61 and 9 to 34 ns respectively, with mean amplitudes changing drastically with time interval. The amplitude and time interval values of lacustrine radar facies were between those of sand dune radar facies and radar surfaces, ranging from 0.08 to 0.29 and 11 to 30 ns respectively, and the mean amplitude versus time interval curve was approximately trapezoidal. The quantitative extraction and analysis of GPR reflections can help distinguish various radar units and provide evidence for identifying sedimentary structures in aeolian landforms.
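The amplitude and time-interval parameters discussed above are extracted per trace from the reflection events. A minimal sketch of that extraction step is shown below (function names hypothetical; real GPR processing also involves gain correction, filtering, and picking thresholds not reproduced here): reflections are taken at local extrema of the trace, and time intervals are the spacings between successive reflections.

```python
def reflection_events(trace, dt_ns):
    """Extract (two-way time in ns, |amplitude|) at local extrema of a GPR
    trace sampled every dt_ns nanoseconds, plus the time intervals between
    successive reflections."""
    events = [(i * dt_ns, abs(trace[i]))
              for i in range(1, len(trace) - 1)
              if (trace[i] - trace[i - 1]) * (trace[i + 1] - trace[i]) < 0]
    intervals = [t2 - t1 for (t1, _), (t2, _) in zip(events, events[1:])]
    return events, intervals

# Tiny synthetic trace, for illustration only
events, intervals = reflection_events([0, 1, 0, -2, 0, 3, 0], dt_ns=2.0)
```

Histograms of the extracted amplitudes and intervals per radar unit would then give the distribution curves (unimodal, trapezoidal, etc.) the study compares.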
Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient
ERIC Educational Resources Information Center
Krishnamoorthy, K.; Xia, Yanping
2008-01-01
The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…
Interval timing in genetically modified mice: a simple paradigm
Balci, F.; Papachristos, E. B.; Gallistel, C. R.; Brunner, D.; Gibson, J.; Shumyatsky, G. P.
2009-01-01
We describe a behavioral screen for the quantitative study of interval timing and interval memory in mice. Mice learn to switch from a short-latency feeding station to a long-latency station when the short latency has passed without a feeding. The psychometric function is the cumulative distribution of switch latencies. Its median measures timing accuracy and its interquartile interval measures timing precision. Next, using this behavioral paradigm, we have examined mice with a gene knockout of the receptor for gastrin-releasing peptide that show enhanced (i.e. prolonged) freezing in fear conditioning. We have tested the hypothesis that the mutants freeze longer because they are more uncertain than wild types about when to expect the electric shock. The knockouts however show normal accuracy and precision in timing, so we have rejected this alternative hypothesis. Last, we conduct the pharmacological validation of our behavioral screen using D-amphetamine and methamphetamine. We suggest including the analysis of interval timing and temporal memory in tests of genetically modified mice for learning and memory and argue that our paradigm allows this to be done simply and efficiently. PMID:17696995
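The two summary statistics of the paradigm above, the median of the switch-latency distribution (timing accuracy) and its interquartile interval (timing precision), can be computed directly; a minimal sketch with hypothetical latencies:

```python
import statistics

def timing_accuracy_precision(latencies):
    """Median (timing accuracy) and interquartile interval (timing
    precision) of a sample of switch latencies, as in the mouse paradigm."""
    q1, med, q3 = statistics.quantiles(latencies, n=4)
    return med, q3 - q1

# Hypothetical switch latencies, in seconds
med, iqr = timing_accuracy_precision([2, 3, 4, 5, 6, 7, 8])
```

A genotype difference in `med` would indicate a shift in when mice expect the long latency; a difference in `iqr` would indicate a change in timing precision.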
Estimating short-run and long-run interaction mechanisms in interictal state.
Ozkaya, Ata; Korürek, Mehmet
2010-04-01
We address the issue of analyzing electroencephalogram (EEG) recordings from seizure patients in order to test, model and determine the statistical properties that distinguish between EEG states (interictal, pre-ictal, ictal) by introducing a new class of time series analysis methods. In the present study we first employ statistical methods to determine the non-stationary behavior of focal interictal epileptiform series within very short time intervals; second, for intervals deemed non-stationary we suggest Autoregressive Integrated Moving Average (ARIMA) process modelling, well known in time series analysis. Finally, we address the question of causal relationships between epileptic states and between brain areas during epileptiform activity. We estimate the interaction between different EEG series (channels) in short time intervals by performing Granger-causality analysis, and estimate such interaction in long time intervals by employing cointegration analysis; both methods are well known in econometrics. We find, first, that the causal relationship between neuronal assemblies can be identified according to the duration and direction of their possible mutual influences; second, that although the estimated bidirectional causality in short time intervals indicates that the neuronal ensembles positively affect each other, in long time intervals neither of them is affected (via increasing amplitudes) by this relationship. Moreover, cointegration analysis of the EEG series enables us to identify whether there is a causal link from the interictal state to the ictal state.
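The Granger-causality test mentioned above can be illustrated with a small self-contained sketch (pure Python, no statistics libraries; all names are hypothetical, and the study's actual lag orders and preprocessing are not reproduced): fit a restricted autoregression of one channel on its own past, fit an unrestricted one that adds the other channel's past, and compare residual sums of squares with an F statistic.

```python
import random

def solve(A, b):
    """Gaussian elimination with partial pivoting for small linear systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    beta = [0.0] * n
    for i in range(n - 1, -1, -1):
        beta[i] = (M[i][n] - sum(M[i][j] * beta[j] for j in range(i + 1, n))) / M[i][i]
    return beta

def rss(X, y):
    """Residual sum of squares of an OLS fit via the normal equations."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    beta = solve(A, b)
    return sum((yi - sum(ri * bi for ri, bi in zip(r, beta))) ** 2
               for r, yi in zip(X, y))

def granger_f(x, y):
    """F statistic for 'y Granger-causes x' at lag 1: does adding y's past
    improve the prediction of x beyond x's own past?"""
    n = len(x) - 1
    target = x[1:]
    restricted = [[1.0, x[t]] for t in range(n)]          # x_t ~ x_{t-1}
    unrestricted = [[1.0, x[t], y[t]] for t in range(n)]  # ... + y_{t-1}
    rss_r, rss_u = rss(restricted, target), rss(unrestricted, target)
    return (rss_r - rss_u) / (rss_u / (n - 3))

# Toy channels: x is driven by the previous value of y, not vice versa
random.seed(1)
y = [random.gauss(0.0, 1.0) for _ in range(300)]
x = [0.0] * 300
for t in range(1, 300):
    x[t] = 0.9 * y[t - 1] + 0.1 * random.gauss(0.0, 1.0)
f_yx = granger_f(x, y)  # y -> x: large
f_xy = granger_f(y, x)  # x -> y: small
```

Comparing the F statistic to the F(1, n-3) distribution gives the usual p-value; here the asymmetry of the two statistics already exposes the direction of influence.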
Luo, Yuan; Szolovits, Peter
2016-01-01
In natural language processing, stand-off annotation uses the starting and ending positions of an annotation to anchor it to the text and stores the annotation content separately from the text. We address the fundamental problem of efficiently storing stand-off annotations when applying natural language processing to narrative clinical notes in electronic medical records (EMRs) and efficiently retrieving such annotations subject to position constraints. Efficient storage and retrieval of stand-off annotations can facilitate tasks such as mapping unstructured text to electronic medical record ontologies. We first formulate this problem as an interval query problem, for which optimal query/update time is, in general, logarithmic. We next perform a tight time complexity analysis of the basic interval tree query algorithm and show its non-optimality when applied to a collection of 13 query types from Allen's interval algebra. We then study two closely related state-of-the-art interval query algorithms and propose query reformulations and augmentations to the second algorithm. Our proposed algorithm achieves logarithmic stabbing-max query time and solves the stabbing-interval query tasks on all of Allen's relations in logarithmic time, attaining the theoretical lower bound. Update time is kept logarithmic and the space requirement is kept linear at the same time. We also discuss interval management in external memory models and in higher dimensions.
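A classic structure behind the logarithmic stabbing-query bound discussed above is the centered interval tree. The sketch below (hypothetical names; not the authors' augmented algorithm) stores each interval at the highest node whose center it contains and reports all intervals stabbed by a query point in O(log n + k):

```python
class IntervalNode:
    """Centered interval tree for stabbing queries: report all stored
    intervals [lo, hi] that contain a query point."""

    def __init__(self, intervals):
        xs = sorted({p for iv in intervals for p in iv})
        self.center = xs[len(xs) // 2]
        here = [iv for iv in intervals if iv[0] <= self.center <= iv[1]]
        left = [iv for iv in intervals if iv[1] < self.center]
        right = [iv for iv in intervals if iv[0] > self.center]
        self.by_lo = sorted(here)                          # ascending start
        self.by_hi = sorted(here, key=lambda iv: -iv[1])   # descending end
        self.left = IntervalNode(left) if left else None
        self.right = IntervalNode(right) if right else None

    def stab(self, q):
        out = []
        if q < self.center:
            for iv in self.by_lo:          # only intervals starting at or before q
                if iv[0] > q:
                    break
                out.append(iv)
            if self.left:
                out += self.left.stab(q)
        else:
            for iv in self.by_hi:          # only intervals ending at or after q
                if iv[1] < q:
                    break
                out.append(iv)
            if self.right:
                out += self.right.stab(q)
        return out

# Annotation spans as (start, end) character offsets, for illustration
tree = IntervalNode([(1, 5), (2, 8), (6, 9), (10, 12)])
```

Supporting all thirteen Allen relations (before, meets, overlaps, during, etc.) requires additional query reformulations on top of this basic stabbing primitive, which is the gap the paper addresses.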
Kawakita, Daisuke; Sato, Fumihito; Hosono, Satoyo; Ito, Hidemi; Oze, Isao; Watanabe, Miki; Hanai, Nobuhiro; Hatooka, Shunzo; Hasegawa, Yasuhisa; Shinoda, Masayuki; Tajima, Kazuo; Murakami, Shingo; Tanaka, Hideo; Matsuo, Keitaro
2012-09-01
Although the combination of tobacco smoking and alcohol drinking accounts for approximately 80% of upper aerodigestive tract (UADT) cancer risk, the role of dietary factors, including dairy products, in the risk of these cancers remains controversial. We aimed to evaluate the association between dairy product intake and UADT cancer risk in a Japanese population. We conducted a case-control study in 959 patients with UADT cancer and 2877 sex- and age-matched noncancer control subjects who visited the Aichi Cancer Center in Nagoya, Japan. Data on lifestyle factors, including diet, were obtained by self-administered questionnaire. Associations were assessed by multivariate logistic regression models that considered potential confounders. We found a significant inverse association between yoghurt intake and UADT cancer risk, with multivariate-adjusted odds ratios (95% confidence intervals) of 0.70 (0.54-0.91), 0.67 (0.54-0.84), and 0.73 (0.55-0.95) for <1 time/week, ≥1 time/week and <1 time/day, and ≥1 time/day consumption of yoghurt, respectively, relative to nonconsumers (P trend=0.005). When stratified by primary tumor site, this association was significant among patients with hypopharyngeal, laryngeal, and esophageal cancer. However, we saw no significant association between milk or butter intake and UADT cancer risk. In this study, we found that a high intake of yoghurt may lower the risk of developing UADT cancer in a Japanese population. Further investigation of this association is warranted.
Cerebral oxygenation and desaturations in preterm infants - a longitudinal data analysis.
Mayer, Benjamin; Pohl, Moritz; Hummler, Helmut D; Schmid, Manuel B
2017-01-01
Hypoxemic episodes commonly occur in very preterm infants and may be associated with several adverse effects. Cerebral tissue oxygen saturation (StO2) as measured by near infrared spectroscopy (NIRS) may be a useful measure to assess brain oxygenation. However, knowledge on the variability of StO2 in preterm infants is limited at this time, so the dependency of StO2 on arterial oxygen saturation (SpO2) and heart rate (HR) was assessed in preterm infants using statistical methods of time series analysis. StO2, SpO2, and HR were recorded from 15 preterm infants every 2 seconds for six hours. Statistical methods of time series and longitudinal data analysis were applied to the data. The mean StO2 level was found to be 72% (95% confidence interval (CI) 55.5-85.5%) based on a moving-average process of 5-minute order. Accordingly, longitudinal SpO2 measurements showed a mean level of 91% (95% CI 69-98%). Generally, compensation strategies to cope with both StO2 and SpO2 desaturations were observed in the studied patients. SpO2 had a significant effect on cerebral oxygenation (p < 0.001), but HR did not; results for HR were inconclusive across different time intervals. In infants with intermittent hypoxemia and bradycardia, we found a mean StO2 level of 72% and a strong correlation with SpO2. We observed large differences between individuals in the ability to maintain StO2 at a stable level.
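The 5-minute moving-average order above corresponds to a 150-sample window at the study's 2-second sampling rate. A minimal sketch of such a running mean over a StO2 stream (implementation hypothetical; the study's statistical model is more involved than a plain running mean):

```python
from collections import deque

def moving_average(samples, window=150):
    """Running mean over the last `window` samples; with one sample every
    2 s, window=150 corresponds to a 5-minute moving-average order."""
    buf, total, out = deque(), 0.0, []
    for s in samples:
        buf.append(s)
        total += s
        if len(buf) > window:
            total -= buf.popleft()
        out.append(total / len(buf))
    return out
```

The deque keeps each update O(1), which matters for six hours of 2-second samples (over 10,000 points per channel).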
Mette, Christian; Grabemann, Marco; Zimmermann, Marco; Strunz, Laura; Scherbaum, Norbert; Wiltfang, Jens; Kis, Bernhard
2015-01-01
Objective Altered time reproduction is exhibited by patients with adult attention deficit hyperactivity disorder (ADHD). It remains unclear whether memory capacity influences the ability of adults with ADHD to reproduce time intervals. Method We conducted a behavioral study on 30 ADHD patients who were medicated with methylphenidate, 29 unmedicated adult ADHD patients and 32 healthy controls (HCs). We assessed time reproduction using six time intervals (1 s, 4 s, 6 s, 10 s, 24 s and 60 s) and assessed memory performance using the Wechsler memory scale. Results The patients with ADHD exhibited lower memory performance scores than the HCs. No significant between-group differences were found in the raw scores for any of the time intervals (p > .05), with the exception of the variability at the short time intervals (1 s, 4 s and 6 s) (p < .01). The overall analyses failed to reveal any significant correlations between time reproduction at any of the time intervals examined in the time reproduction task and working memory performance (p > .05). Conclusion We found no evidence that working memory influences time reproduction in adult patients with ADHD. Therefore, further studies concerning time reproduction and memory capacity among adult patients with ADHD must be performed to verify and replicate the present findings. PMID:26221955
Wilquin, Hélène; Delevoye-Turrell, Yvonne; Dione, Mariama; Giersch, Anne
2018-01-01
Objective: Basic temporal dysfunctions have been described in patients with schizophrenia, which may impact their ability to connect and synchronize with the outer world. The present study was conducted with the aim to distinguish between interval timing and synchronization difficulties and more generally the spatial-temporal organization disturbances for voluntary actions. A new sensorimotor synchronization task was developed to test these abilities. Method: Twenty-four chronic schizophrenia patients matched with 27 controls performed a spatial-tapping task in which finger taps were to be produced in synchrony with a regular metronome to six visual targets presented around a virtual circle on a tactile screen. Isochronous (time intervals of 500 ms) and non-isochronous auditory sequences (alternated time intervals of 300/600 ms) were presented. The capacity to produce time intervals accurately versus the ability to synchronize own actions (tap) with external events (tone) were measured. Results: Patients with schizophrenia were able to produce the tapping patterns of both isochronous and non-isochronous auditory sequences as accurately as controls producing inter-response intervals close to the expected interval of 500 and 900 ms, respectively. However, the synchronization performances revealed significantly more positive asynchrony means (but similar variances) in the patient group than in the control group for both types of auditory sequences. Conclusion: The patterns of results suggest that patients with schizophrenia are able to perceive and produce both simple and complex sequences of time intervals but are impaired in the ability to synchronize their actions with external events. These findings suggest a specific deficit in predictive timing, which may be at the core of early symptoms previously described in schizophrenia.
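The two measures that dissociate above, interval production (inter-response intervals close to the expected 500 or 900 ms) versus synchronization (asynchrony of each tap relative to its tone), can be computed from paired tap and metronome onset times. A minimal sketch with hypothetical timestamps (positive asynchrony means the tap lags the tone, as in the patient group):

```python
def synchronization_stats(taps, tones):
    """Mean asynchrony (tap onset minus tone onset; positive = tap lags the
    tone) and mean inter-response interval, for times in milliseconds."""
    asyncs = [tap - tone for tap, tone in zip(taps, tones)]
    iris = [t2 - t1 for t1, t2 in zip(taps, taps[1:])]
    mean = lambda xs: sum(xs) / len(xs)
    return mean(asyncs), mean(iris)

# Hypothetical taps against a 500-ms isochronous metronome
mean_async, mean_iri = synchronization_stats([520, 1010, 1530], [500, 1000, 1500])
```

A group can show a normal mean IRI (accurate interval production) while still showing a shifted mean asynchrony (impaired synchronization), which is the dissociation the study reports.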
Zhang, Jimmy; Mannix, Rebekah; Whalen, Michael J.
2012-01-01
BACKGROUND: Although previous evidence suggests that the cognitive effects of concussions are cumulative, the effect of time interval between repeat concussions is largely unknown. OBJECTIVE: To determine the effect of time interval between repeat concussions on the cognitive function of mice. METHODS: We used a weight-drop model of concussion to subject anesthetized mice to 1, 3, 5, or 10 concussions, each a day apart. Additional mice were subjected to 5 concussions at varying time intervals: daily, weekly, and monthly. Morris water maze performance was measured 24 hours, 1 month, and 1 year after final injury. RESULTS: After 1 concussion, injured and sham-injured mice performed similarly in the Morris water maze. As the number of concussions increased, injured mice performed worse than sham-injured mice. Mice sustaining 5 concussions either 1 day or 1 week apart performed worse than sham-injured mice. When 5 concussions were delivered at 1-month time intervals, no difference in Morris water maze performance was observed between injured and sham-injured mice. After a 1-month recovery period, mice that sustained 5 concussions at daily and weekly time intervals continued to perform worse than sham-injured mice. One year after the final injury, mice sustaining 5 concussions at a daily time interval still performed worse than sham-injured mice. CONCLUSION: When delivered within a period of vulnerability, the cognitive effects of multiple concussions are cumulative, persistent, and may be permanent. Increasing the time interval between concussions attenuates the effects on cognition. When multiple concussions are sustained by mice daily, the effects on cognition are long term. PMID:22743360
Validity and Generalizability of Measuring Student Engaged Time in Physical Education.
ERIC Educational Resources Information Center
Silverman, Stephen; Zotos, Connee
The validity of interval and time sampling methods of measuring student engaged time was investigated in a study estimating the actual time students spent engaged in relevant motor performance in physical education classes. Two versions of the interval Academic Learning Time in Physical Education (ALT-PE) instrument and an equivalent time sampling…
Gravity separation of pericardial fat in cardiotomy suction blood: an in vitro model.
Kinard, M Rhett; Shackelford, Anthony G; Sistino, Joseph J
2009-06-01
Fat emboli generated during cardiac surgery have been shown to cause neurologic complications in patients postoperatively. Cardiotomy suction is known to be a large generator of emboli. This study examined the efficacy of a separation technique in which cardiotomy suction blood is stored in a cardiotomy reservoir for various time intervals to allow spontaneous separation of fat from blood by density. Soybean oil was added to heparinized porcine blood to simulate the blood of a patient with hypertriglyceridemia (> 150 mg/dL). Roller pump suction was used to transfer the room-temperature blood into the cardiotomy reservoir. Blood was removed from the reservoir in 200-mL aliquots at 0, 15, 30, 45, and 60 minutes. Samples were taken at each interval and centrifuged to facilitate further separation of liquid fat. Fat content in each sample was determined by a point-of-care triglyceride analyzer. Three trials were conducted for a total of 30 samples. The 0-minute group was considered a baseline and was compared to the other four time points. Fat concentration was reduced significantly in the 45- and 60-minute groups compared to the 0-, 15-, and 30-minute groups (p < .05). Gravity separation of cardiotomy suction blood is effective; however, it may require retention of blood for more time than is clinically acceptable during a routine coronary artery bypass graft surgery.
Direct thermal effects of the Hadean bombardment did not limit early subsurface habitability
NASA Astrophysics Data System (ADS)
Grimm, R. E.; Marchi, S.
2018-03-01
Intense bombardment is considered characteristic of the Hadean and early Archean eons, yet some detrital zircons indicate that near-surface water was present and thus at least intervals of clement conditions may have existed. We investigate the habitability of the top few kilometers of the subsurface by updating a prior approach to thermal evolution of the crust due to impact heating, using a revised bombardment history, a more accurate thermal model, and treatment of melt sheets from large projectiles (>100 km diameter). We find that subsurface habitable volume grows nearly continuously throughout the Hadean and early Archean (4.5-3.5 Ga) because impact heat is dissipated rapidly compared to the total duration and waning strength of the bombardment. Global sterilization was only achieved using an order of magnitude more projectiles in 1/10 the time. Melt sheets from large projectiles can completely resurface the Earth several times prior to ∼4.2 Ga but at most once since then. Even in the Hadean, melt sheets have little effect on habitability because cooling times are short compared to resurfacing intervals, allowing subsurface biospheres to be locally re-established by groundwater infiltration between major impacts. Therefore the subsurface is always habitable somewhere, and production of a global steam or silicate-vapor atmosphere is the only remaining avenue to early surface sterilization by bombardment.
Electrocardiographic findings in chronic hemodialysis patients.
Bignotto, Luís Henrique; Kallás, Marina Esteves; Djouki, Rafael Jorge Teixeira; Sassaki, Marcela Mayume; Voss, Guilherme Ota; Soto, Cristina Lopez; Frattini, Fernando; Medeiros, Flávia Silva Reis
2012-01-01
Cardiovascular disease is the leading cause of mortality among patients on dialysis. When considering all causes of death, about 30% are classified as cardiac arrest, death of unknown cause, or cardiac arrhythmia. Increased ventricular depolarization and repolarization time, measured non-invasively as the QT interval on the resting electrocardiogram, has emerged as a predictor of complex ventricular arrhythmias, a major cause of sudden cardiac death. The aim was to determine the electrocardiographic alterations present in hemodialysis (HD) patients by measuring the QT interval and examining its relationship with clinical and laboratory variables. Patients above 18 years on dialysis were approached to participate in the study and, after consent, underwent a 12-lead electrocardiogram. Clinical data were reviewed to assess the presence of comorbidities, as well as anthropometric and blood pressure measures. Blood samples were collected to determine hemoglobin and serum levels of calcium, phosphorus, and potassium. One hundred seventy-nine patients were included in the study. The majority of the patients were male (64.8%) and white (54.7%); the average age was 58.5 ± 14.7 years. About 50% of all patients had at least one electrical conduction disturbance. About 50% of all patients had QTc prolongation; compared with patients with a normal QTc interval, they showed a significantly higher frequency of left ventricular hypertrophy (LVH), cardiac rhythm changes, and bundle branch blocks, as well as a lower body mass index (BMI). Patients with chronic kidney disease (CKD) on hemodialysis had a high frequency of abnormal electrocardiographic findings, including a high prevalence of prolonged QTc interval. This study also found a significant association between prolonged QTc interval and the presence of diabetes and lower BMI values.
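The QTc values discussed here depend on a heart-rate correction. The abstract does not state which formula was applied, so the sketch below assumes Bazett's correction, the most common choice in clinical practice:

```python
import math

def qtc_bazett(qt_ms, rr_ms):
    """Rate-corrected QT interval (Bazett): QTc = QT / sqrt(RR), RR in seconds."""
    return qt_ms / math.sqrt(rr_ms / 1000.0)

# At 60 bpm (RR = 1000 ms) QTc equals QT; faster heart rates inflate QTc.
qtc_60 = qtc_bazett(400, 1000)  # 400.0 ms
qtc_80 = qtc_bazett(400, 750)   # ~462 ms, above a commonly used ~450 ms cutoff
print(round(qtc_60), round(qtc_80))
```

The same measured QT of 400 ms is unremarkable at 60 bpm but flagged as prolonged at 80 bpm, which is why rate correction is essential before labeling a QTc as prolonged.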
Yang, Bo Ram; Kim, Ye-Jee; Kim, Mi-Sook; Jung, Sun-Young; Choi, Nam-Kyong; Hwang, Byungkwan; Park, Byung-Joo; Lee, Joongyub
2018-05-23
Zolpidem is one of the most frequently used hypnotics worldwide, but associations with serious adverse effects such as motor vehicle collisions have been reported. The objective of this study was to evaluate the association of fatal motor vehicle collisions with a prescription for zolpidem, considering the context of the motor vehicle collisions. We conducted a case-crossover study, where each case served as its own control, by linking data about fatal motor vehicle collisions from the Korean Road Traffic Authority between 2010 and 2014 with national health insurance data. The case period was defined as 1 day before the fatal motor vehicle collisions, and was matched to four control periods at 90-day intervals. Conditional logistic regression was performed to calculate the odds ratio for fatal motor vehicle collisions associated with zolpidem exposure, and odds ratios were adjusted for time-varying exposure to confounding medications. A stratified analysis was performed by age group (younger than 65 years or not), the Charlson Comorbidity Index, and whether patients were new zolpidem users. Among the 714 subjects, the adjusted odds ratio for a fatal motor vehicle collision associated with a prescription for zolpidem the previous day was 1.48 (95% confidence interval 1.06-2.07). After stratification, a significantly increased risk was observed in subjects with a high Charlson Comorbidity Index (odds ratio 1.81; 95% confidence interval 1.16-2.84), the younger age group (odds ratio: 1.62; 95% confidence interval 1.03-2.56), and new zolpidem users (odds ratio 2.37; 95% confidence interval 1.40-4.00). A prescription for zolpidem on the previous day was significantly related to an increased risk of fatal motor vehicle collisions in this population-based case-crossover study.
Analysis of dynamics of vulcanian activity of Ubinas volcano, using multicomponent seismic antennas
NASA Astrophysics Data System (ADS)
Inza, L. A.; Métaxian, J. P.; Mars, J. I.; Bean, C. J.; O'Brien, G. S.; Macedo, O.; Zandomeneghi, D.
2014-01-01
A series of 16 vulcanian explosions occurred at Ubinas volcano between May 24 and June 14, 2009. The intervals between explosions ranged from 2.1 h to more than 6 days (mean interval, 33 h). Considering only the first nine explosions, the average time interval was 7.8 h. Most of the explosions occurred after a short time interval (< 8 h) and had low energy, which suggests that the refilling time was not sufficient for large accumulation of gas. A tremor episode followed 75% of the explosions, which coincided with pulses of ash emission. The durations of the tremors following the explosions were longer for the two highest-energy explosions. To better understand the physical processes associated with these eruptive events, we localized the sources of the explosions using two seismic antennas composed of 10 and 12 three-component sensors, respectively. We used the high-resolution MUSIC-3C algorithm to estimate the slowness vector for the first waves composing the explosion signals recorded by the two antennas, assuming propagation in a homogeneous medium. The initial part of the explosions was dominated by two frequencies, at 1.1 Hz and 1.5 Hz, for which we identified two separate sources located at altitudes of 4810 m and 3890 m (±390 m), respectively. The position of these two sources was the same for all 16 explosions. This implies the reproduction of similar mechanisms in the conduit. Based on the eruptive mechanisms proposed for other volcanoes of the same type, we interpret the position of these two sources as the limits of the conduit portion that was involved in the fragmentation process. Seismic data and ground deformation recorded simultaneously less than 2 km from the crater showed a decompression movement 2 s prior to each explosion. This movement can be interpreted as gas leakage at the level of the cap before its destruction. The pressure drop generated in the conduit could be the cause of the fragmentation process that propagated deeper.
Based on these observations, we interpret the position of the highest source as the part of the conduit under the cap, and the deeper source as the limit of the fragmentation zone.
A model of interval timing by neural integration.
Simen, Patrick; Balci, Fuat; de Souza, Laura; Cohen, Jonathan D; Holmes, Philip
2011-06-22
We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes, that correlations among them can be largely cancelled by balancing excitation and inhibition, that neural populations can act as integrators, and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys, and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule's predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior.
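The scalar property the model reproduces (response-time standard deviation proportional to the mean) falls out of a ramp whose slope varies from trial to trial. The sketch below is a minimal illustration of that idea with invented parameter values, not the paper's fitted model:

```python
import random

def production_times(mean_drift, threshold=1.0, drift_cv=0.1, n=5000, seed=7):
    """First-passage times of a linear ramp x(t) = v*t to a fixed threshold,
    where the slope v is drawn anew on each trial (trial-to-trial timing noise).
    Each produced time is simply threshold / v."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        v = rng.gauss(mean_drift, drift_cv * mean_drift)
        if v > 0:
            out.append(threshold / v)
    return out

def mean_cv(xs):
    """Return (mean, coefficient of variation) of a sample."""
    m = sum(xs) / len(xs)
    sd = (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5
    return m, sd / m

short = mean_cv(production_times(2.0))   # targets ~0.5 s
long_ = mean_cv(production_times(1.0))   # targets ~1.0 s
print(short, long_)
# The mean scales with the timed interval while the CV stays roughly constant:
# the scale-invariance (Weber-law) signature of interval timing data.
```

Halving the drift doubles the mean produced time, but because the noise is multiplicative in the slope, the relative spread is unchanged, which is the distributional signature the paper's integration model accounts for.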
Face-Recognition Memory: Implications for Children's Eyewitness Testimony.
ERIC Educational Resources Information Center
Chance, June E.; Goldstein, Alvin G.
1984-01-01
Reviews studies of face-recognition memory and considers implications for assessing the dependability of children's performances as eyewitnesses. Considers personal factors (age, intellectual differences, and gender) and situational factors (familiarity of face, retention interval, and others). Also identifies developmental questions for future…
Madison, Guy; Karampela, Olympia; Ullén, Fredrik; Holm, Linus
2013-05-01
Timing permeates everyday activities such as walking, dancing and music, yet the effect of short-term practice in this ubiquitous activity is largely unknown. In two training experiments involving sessions spread across several days, we examined short-term practice effects on timing variability in a sequential interval production task. In Experiment 1, we varied the mode of response (e.g., drumstick and finger tapping) and the level of sensory feedback. In Experiment 2 we varied the interval in 18 levels ranging from 500 ms to 1624 ms. Both experiments showed a substantial decrease in variability within the first hour of practice, but little thereafter. This effect was similar across mode of response, amount of feedback, and interval duration, and was manifested as a reduction in both local variability (between neighboring intervals) and drift (fluctuation across multiple intervals). The results suggest mainly effects on motor implementation rather than on cognitive timing processes, and have methodological implications for timing studies that have not controlled for practice. Copyright © 2013 Elsevier B.V. All rights reserved.
Cramer, Bradley D.; Loydell, David K.; Samtleben, Christian; Munnecke, Axel; Kaljo, Dimitri; Mannik, Peep; Martma, Tonu; Jeppsson, Lennart; Kleffner, Mark A.; Barrick, James E.; Johnson, Craig A.; Emsbo, Poul; Joachimski, Michael M.; Bickert, Torsten; Saltzman, Matthew R.
2010-01-01
The resolution and fidelity of global chronostratigraphic correlation are direct functions of the time period under consideration. By virtue of deep-ocean cores and astrochronology, the Cenozoic and Mesozoic time scales carry error bars of a few thousand years (k.y.) to a few hundred k.y. In contrast, most of the Paleozoic time scale carries error bars of plus or minus a few million years (m.y.), and chronostratigraphic control better than ±1 m.y. is considered "high resolution." The general lack of Paleozoic abyssal sediments and paucity of orbitally tuned Paleozoic data series combined with the relative incompleteness of the Paleozoic stratigraphic record have proven historically to be such an obstacle to intercontinental chronostratigraphic correlation that resolving the Paleozoic time scale to the level achieved during the Mesozoic and Cenozoic was viewed as impractical, impossible, or both. Here, we utilize integrated graptolite, conodont, and carbonate carbon isotope (δ13Ccarb) data from three paleocontinents (Baltica, Avalonia, and Laurentia) to demonstrate chronostratigraphic control for upper Llandovery through middle Wenlock (Telychian-Sheinwoodian, ~436-426 Ma) strata with a resolution of a few hundred k.y. The interval surrounding the base of the Wenlock Series can now be correlated globally with precision approaching 100 k.y., but some intervals (e.g., uppermost Telychian and upper Sheinwoodian) are either yet to be studied in sufficient detail or do not show sufficient biologic speciation and/or extinction or carbon isotopic features to delineate such small time slices. Although producing such resolution during the Paleozoic presents an array of challenges unique to the era, we have begun to demonstrate that erecting a Paleozoic time scale comparable to that of younger eras is achievable. © 2010 Geological Society of America.
Initial Systolic Time Interval (ISTI) as a Predictor of Intradialytic Hypotension (IDH)
NASA Astrophysics Data System (ADS)
Biesheuvel, J. D.; Vervloet, M. G.; Verdaasdonk, R. M.; Meijer, J. H.
2013-04-01
In haemodialysis treatment the clearance and volume control by the kidneys of a patient are partially replaced by intermittent haemodialysis. Because this artificial process is performed on a limited time scale, unphysiological imbalances in the fluid compartments of the body occur that can lead to intradialytic hypotension (IDH). An IDH endangers the efficacy of the haemodialysis session and is associated with dismal clinical endpoints, including mortality. A diagnostic method that predicts the occurrence of these drops in blood pressure could facilitate timely measures for the prevention of IDH. The present study investigates whether the Initial Systolic Time Interval (ISTI) can provide such a diagnostic method. The ISTI is defined as the time difference between the R-peak in the electrocardiogram (ECG) and the C-wave in the impedance cardiogram (ICG) and is considered to be a non-invasive assessment of the time delay between the electrical and mechanical activity of the heart. This time delay has previously been found to depend on autonomic nervous function as well as preload of the heart. Therefore, it can be expected that ISTI may predict an imminent IDH caused by a low circulating blood volume. This ongoing observational clinical study investigates the relationship between changes in ISTI and subsequent drops in blood pressure during haemodialysis. A recording from a complicated dialysis showed a significant correlation between a drop in blood pressure, a decrease in relative blood volume, and a substantial increase in ISTI. An uncomplicated dialysis, in which a considerable amount of fluid was also removed, showed no such correlations; both blood pressure and ISTI remained stable. In conclusion, the preliminary results of the present study show a substantial response of ISTI to haemodynamic instability, indicating an application in optimization and individualisation of the dialysis process.
NASA Astrophysics Data System (ADS)
Wayan Mangku, I.
2017-10-01
In this paper we survey some results on estimation of the intensity function of a cyclic Poisson process in the presence of additive and multiplicative linear trends. We do not assume any parametric form for the cyclic component of the intensity function, except that it is periodic. Moreover, we consider the case in which only a single realization of the Poisson process is observed in a bounded interval. The considered estimators are weakly and strongly consistent when the size of the observation interval indefinitely expands. Asymptotic approximations to the bias and variance of those estimators are presented.
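The core idea behind such estimators can be sketched for the purely cyclic case: fold every event time to its phase modulo the period and normalize the phase-bin counts. The trend removal and kernel smoothing treated in the paper are omitted here, so this is only a naive histogram stand-in:

```python
def cyclic_intensity_estimate(events, period, horizon, n_bins=10):
    """Histogram estimate of a cyclic Poisson intensity from one realization
    on [0, horizon]: fold events to their phase modulo the period, then
    normalize each bin count by (number of observed periods x bin width).
    (The paper's estimators additionally handle linear trends and use
    kernel smoothing; this sketch keeps only the folding idea.)"""
    k = horizon / period            # number of observed periods
    width = period / n_bins
    counts = [0] * n_bins
    for t in events:
        counts[min(int((t % period) / width), n_bins - 1)] += 1
    return [c / (k * width) for c in counts]

# Synthetic events with constant intensity 10 per unit time over 10 periods,
# placed at bin midpoints to avoid boundary effects:
events = [p + (b + 0.5) / 10 for p in range(10) for b in range(10)]
est = cyclic_intensity_estimate(events, period=1.0, horizon=10.0)
print(est)  # each bin close to 10.0
```

With a genuinely periodic intensity, peaks and troughs of the cycle would show up as unequal bin heights; consistency as the observation window grows corresponds to each bin averaging over ever more periods.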
Wathes, D C; Bourne, N; Cheng, Z; Mann, G E; Taylor, V J; Coffey, M P
2007-03-01
Results from 4 studies were combined (representing a total of 500 lactations) to investigate the relationships between metabolic parameters and fertility in dairy cows. Information was collected on blood metabolic traits and body condition score at 1 to 2 wk prepartum and at 2, 4, and 7 wk postpartum. Fertility traits were days to commencement of luteal activity, days to first service, days to conception, and failure to conceive. Primiparous and multiparous cows were considered separately. Initial linear regression analyses were used to determine relationships among fertility, metabolic, and endocrine traits at each time point. All metabolic and endocrine traits significantly related to fertility were included in stepwise multiple regression analyses alone (model 1), including peak milk yield and interval to commencement of luteal activity (model 2), and with the further addition of dietary group (model 3). In multiparous cows, extended calving to conception intervals were associated prepartum with greater concentrations of leptin and lesser concentrations of nonesterified fatty acids and urea, and postpartum with reduced insulin-like growth factor-I at 2 wk, greater urea at 7 wk, and greater peak milk yield. In primiparous cows, extended calving to conception intervals were associated with more body condition and more urea prepartum, elevated urea postpartum, and more body condition loss by 7 wk. In conclusion, some metabolic measurements were associated with poorer fertility outcomes. Relationships between fertility and metabolic and endocrine traits varied both according to the lactation number of the cow and with the time relative to calving.
Drug Dependence Treatment Awareness among Japanese Female Stimulant Drug Offenders
Yatsugi, Shinzo; Fujita, Koji; Kashima, Saori; Eboshida, Akira
2016-01-01
Few stimulant drug users receive adequate treatment. This cross-sectional study describes the characteristics of female drug offenders who use stimulants and clarifies the factors related to awareness of treatment for drug dependencies. We included 80 females imprisoned due to stimulant control law violations from 2012 to 2015. The characteristics of the female prisoners were stratified according to various treatment awareness levels, and associations between each characteristic and treatment awareness were evaluated using logistic regression models. The average period of stimulant drug use was 17.7 years. Participants imprisoned for the second time were significantly more likely to consider treatment compared to those imprisoned only once: odds ratio (OR) = 3.2 (95% confidence interval (CI): 1.0–10.7). This elevated OR was diluted among repeat offenders. Participants who had experienced multiple aftereffects (≥7) or serious depressive symptoms were also more likely to consider treatment: OR = 6.1 (95% CI: 1.8–20.8) and OR = 2.5 (95% CI: 1.0–6.2), respectively. Second-time stimulant offenders or offenders who had experienced health problems were more likely to consider it important to receive drug dependence treatment. To overcome relapses of stimulant use, it is recommended that stimulant-use offenders be encouraged to accept adequate treatment. PMID:27845738
Ding, Xiaoshuai; Cao, Jinde; Alsaedi, Ahmed; Alsaadi, Fuad E; Hayat, Tasawar
2017-06-01
This paper is concerned with fixed-time synchronization for a class of complex-valued neural networks in the presence of discontinuous activation functions and parameter uncertainties. Fixed-time synchronization not only requires that the considered master-slave system achieve synchronization within a finite time segment, but also requires a uniform upper bound on such time intervals for all initial synchronization errors. To accomplish the target of fixed-time synchronization, a novel feedback control procedure is designed for the slave neural networks. By means of Filippov discontinuity theories and Lyapunov stability theories, some sufficient conditions are established for the selection of control parameters to guarantee synchronization within a fixed time, while an upper bound on the settling time is acquired as well, which can be modulated to predefined values independently of the initial conditions. Additionally, criteria for a modified controller ensuring fixed-time anti-synchronization are also derived for the same system. An example is included to illustrate the proposed methodologies. Copyright © 2017 Elsevier Ltd. All rights reserved.
VizieR Online Data Catalog: Fermi/GBM GRB time-resolved spectral catalog (Yu+, 2016)
NASA Astrophysics Data System (ADS)
Yu, H.-F.; Preece, R. D.; Greiner, J.; Bhat, P. N.; Bissaldi, E.; Briggs, M. S.; Cleveland, W. H.; Connaughton, V.; Goldstein, A.; von Kienlin, A.; Kouveliotou, C.; Mailyan, B.; Meegan, C. A.; Paciesas, W. S.; Rau, A.; Roberts, O. J.; Veres, P.; Wilson-Hodge, C.; Zhang, B.-B.; van Eerten, H. J.
2016-01-01
Time-resolved spectral analysis results of BEST models: for each spectrum, the GRB name using the Fermi GBM trigger designation, spectrum number within the individual burst, start time Tstart and end time Tstop for the time bin, BEST model, best-fit parameters of the BEST model, value of CSTAT per degree of freedom, and 10 keV-1 MeV photon and energy flux are given. Ep evolutionary trends: for each burst, the GRB name, number of spectra with Ep, Spearman's rank correlation coefficients between Ep and photon flux with 90%, 95%, and 99% confidence intervals, Spearman's rank correlation coefficients between Ep and energy flux with 90%, 95%, and 99% confidence intervals, Spearman's rank correlation coefficient between Ep and time with 90%, 95%, and 99% confidence intervals, trends as determined by computer for 90%, 95%, and 99% confidence intervals, and trends as determined by human eyes are given. (2 data files).
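A minimal recipe for the kind of Spearman coefficient with a confidence interval that the catalog tabulates can be sketched as follows (no tie handling, a Fisher z-transform approximation for the interval, and all data values hypothetical rather than taken from the catalog):

```python
import math

def ranks(xs):
    """Ranks 1..n of the values (no tie handling in this sketch)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = pos + 1.0
    return r

def spearman_with_ci(x, y, z_crit=1.645):  # z_crit = 1.645 -> 90% interval
    """Spearman's rho (Pearson correlation of ranks) with a Fisher-z CI."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    rho = cov / (sx * sy)
    z = math.atanh(rho)                 # Fisher z-transform
    se = 1.0 / math.sqrt(n - 3)         # large-sample approximation
    return rho, math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

# Hypothetical Ep (keV) and photon-flux values with a near-monotone relation:
ep   = [210, 180, 250, 300, 160, 340, 400, 370, 450, 500]
flux = [2.1, 1.8, 2.4, 3.1, 1.6, 3.3, 3.8, 4.0, 4.4, 5.0]
rho, lo, hi = spearman_with_ci(ep, flux)
print(round(rho, 3), round(lo, 3), round(hi, 3))
```

Repeating the calculation with z_crit = 1.960 and 2.576 gives the 95% and 99% intervals reported alongside each coefficient in the catalog.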
Transformation to equivalent dimensions—a new methodology to study earthquake clustering
NASA Astrophysics Data System (ADS)
Lasocki, Stanislaw
2014-05-01
A seismic event is represented by a point in a parameter space, quantified by the vector of parameter values. Studies of earthquake clustering involve considering distances between such points in multidimensional spaces. However, the metrics of earthquake parameters are different, hence the metric in a multidimensional parameter space cannot be readily defined. The present paper proposes a solution of this metric problem based on a concept of probabilistic equivalence of earthquake parameters. Under this concept the lengths of parameter intervals are equivalent if the probability for earthquakes to take values from either interval is the same. Earthquake clustering is studied in an equivalent rather than the original dimensions space, where the equivalent dimension (ED) of a parameter is its cumulative distribution function. All transformed parameters are of linear scale in [0, 1] interval and the distance between earthquakes represented by vectors in any ED space is Euclidean. The unknown, in general, cumulative distributions of earthquake parameters are estimated from earthquake catalogues by means of the model-free non-parametric kernel estimation method. Potential of the transformation to EDs is illustrated by two examples of use: to find hierarchically closest neighbours in time-space and to assess temporal variations of earthquake clustering in a specific 4-D phase space.
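The transformation can be sketched directly: replace each parameter value by its (estimated) cumulative-distribution value, then measure Euclidean distances in the unit hypercube. The paper estimates smooth CDFs with a model-free kernel method; the step empirical CDF below is the simplest stand-in for that idea, and the catalogue values are invented for illustration:

```python
import bisect

def to_equivalent_dimension(values):
    """Map parameter values into [0, 1] via the empirical CDF
    (the 'equivalent dimension' of the parameter)."""
    s = sorted(values)
    n = len(s)
    return [bisect.bisect_right(s, v) / n for v in values]

def euclidean(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

# Hypothetical catalogue: magnitudes and inter-event times live on
# incompatible scales, so raw distances between events are meaningless.
mags  = [1.2, 3.5, 2.0, 4.1, 1.8]
times = [10.0, 5000.0, 300.0, 120.0, 60.0]
ed = list(zip(to_equivalent_dimension(mags), to_equivalent_dimension(times)))
# After the transform, events are points in [0, 1]^2 and distances are
# ordinary Euclidean distances, as the methodology requires.
print(euclidean(ed[0], ed[1]))  # ≈ 1.0 for this toy data
```

Because both coordinates are probabilities, a distance of 0.1 along the magnitude axis now "means the same" as 0.1 along the time axis: the same probability mass separates the two values, which is exactly the probabilistic equivalence the paper proposes.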
Cholera Epidemics of the Past Offer New Insights Into an Old Enemy
Phelps, Matthew; Perner, Mads Linnet; Pitzer, Virginia E; Andreasen, Viggo; Jensen, Peter K M; Simonsen, Lone
2018-01-01
Abstract Background Although cholera is considered the quintessential long-cycle waterborne disease, studies have emphasized the existence of short-cycle (food, household) transmission. We investigated singular Danish cholera epidemics (in 1853) to elucidate epidemiological parameters and modes of spread. Methods Using time series data from cities with different water systems, we estimated the intrinsic transmissibility (R0). Accessing cause-specific mortality data, we studied clinical severity and age-specific impact. From physicians’ narratives we established transmission chains and estimated serial intervals. Results Epidemics were seeded by travelers from cholera-affected cities; initial transmission chains involving household members and caretakers ensued. Cholera killed 3.4%–8.9% of the populations, with highest mortality among seniors (16%) and lowest in children (2.7%). Transmissibility (R0) was 1.7–2.6 and the serial interval was estimated at 3.7 days (95% confidence interval, 2.9–4.7 days). The case fatality ratio (CFR) was high (54%–68%); using R0 we computed an adjusted CFR of 4%–5%. Conclusions Short-cycle transmission was likely critical to early secondary transmission in historic Danish towns. The outbreaks resembled the contemporary Haiti outbreak with respect to transmissibility, age patterns, and CFR, suggesting a role for broader hygiene/sanitation interventions to control contemporary outbreaks. PMID:29165706
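As a rough check on transmissibility figures like these, R0 can be back-computed from an early exponential growth rate and a serial interval. The doubling time below is hypothetical, and the formula R0 = exp(r·Tc) is a textbook approximation for a fixed (delta-distributed) serial interval, not the paper's estimation method:

```python
import math

def r0_from_growth(r_per_day, serial_interval_days):
    """Wallinga-Lipsitch approximation with a fixed serial interval Tc:
    R0 = exp(r * Tc), where r is the early exponential growth rate."""
    return math.exp(r_per_day * serial_interval_days)

# Hypothetical early-epidemic doubling time of 5 days, with the paper's
# estimated serial interval of 3.7 days:
r = math.log(2) / 5.0
r0 = r0_from_growth(r, 3.7)
print(round(r0, 2))  # ≈ 1.67, near the lower end of the reported 1.7-2.6 range
```

The sensitivity is worth noting: shortening the hypothetical doubling time to 3 days pushes the implied R0 above 2.3, spanning much of the reported range.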
An application of the Maslov complex germ method to the one-dimensional nonlocal Fisher-KPP equation
NASA Astrophysics Data System (ADS)
Shapovalov, A. V.; Trifonov, A. Yu.
A semiclassical approximation approach based on the Maslov complex germ method is considered in detail for the one-dimensional nonlocal Fisher-Kolmogorov-Petrovskii-Piskunov (Fisher-KPP) equation under the supposition of weak diffusion. In terms of the semiclassical formalism developed, the original nonlinear equation is reduced to an associated linear partial differential equation and some algebraic equations for the coefficients of the linear equation with a given accuracy of the asymptotic parameter. The solutions of the nonlinear equation are constructed from the solutions of both the linear equation and the algebraic equations. The solutions of the linear problem are found with the use of symmetry operators. A countable family of the leading terms of the semiclassical asymptotics is constructed in explicit form. The semiclassical asymptotics are valid by construction in a finite time interval. We construct asymptotics which are different from the semiclassical ones and can describe evolution of the solutions of the Fisher-KPP equation at large times. In the example considered, an initial unimodal distribution becomes multimodal, which can be treated as an example of a space structure.
Measuring Land Change in Coastal Zone around a Rapidly Urbanized Bay.
Huang, Faming; Huang, Boqiang; Huang, Jinliang; Li, Shenghui
2018-05-23
Urban development is a major cause of eco-degradation in many coastal regions. Understanding urbanization dynamics and underlying driving factors is crucial for urban planning and management. Land-use dynamic degree indices and intensity analysis were used to measure land changes among maps from 1990, 2002, 2009, and 2017 in the coastal zone around Quanzhou bay, a rapidly urbanized bay in Southeast China. The comprehensive land-use dynamic degree and interval-level intensity analysis both revealed that land change was accelerating across the three time intervals in a three-kilometer-wide zone along the coastline (zone A), while land change was fastest during the second time interval, 2002-2009, in a separate terrestrial area within the coastal zone (zone B). Driven by urbanization, built-up gains and cropland losses were active in all time intervals in both zones. Mudflat losses were active except in the first time interval in zone A, due to intensive sea reclamation. The gain of mangrove was active while the loss of mangrove was dormant in all three intervals in zone A. Transition-level analysis further revealed the similarities and differences in processes within patterns of land change for both zones. The transition from cropland to built-up was systematically targeted and stationary, while the transition from woodland to built-up was systematically avoided in both zones. Built-up tended to target aquaculture in the second and third time intervals in zone A but to avoid aquaculture in all intervals in zone B. Land change in zone A was more significant than that in zone B during the second and third time intervals at all three levels of intensity. The application of intensity analysis can enhance our understanding of the patterns and processes in land change and inform suitable land development plans in the Quanzhou bay area.
This type of investigation is useful to provide information for developing sound land use policy to achieve urban sustainability in similar coastal areas.
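The interval level of intensity analysis used above compares each interval's annual rate of change against the uniform rate implied by the whole record; intervals above the uniform line are "fast" and those below are "slow". A sketch with invented areas (the study's actual areas are not given in the abstract):

```python
def interval_intensity(changed_area, years, total_area):
    """Annual change intensity for one time interval: percentage of the
    landscape that changes per year (interval level of intensity analysis)."""
    return 100.0 * changed_area / (years * total_area)

# Hypothetical areas (km^2) of observed change in each interval of a
# 27-year record over a 1000 km^2 study zone:
total_area = 1000.0
intervals = [("1990-2002", 12, 80.0), ("2002-2009", 7, 90.0), ("2009-2017", 8, 70.0)]

# Uniform intensity: total change spread evenly over the full 27 years.
uniform = 100.0 * sum(a for _, _, a in intervals) / (27 * total_area)
for name, yrs, area in intervals:
    s = interval_intensity(area, yrs, total_area)
    print(name, round(s, 3), "fast" if s > uniform else "slow")
```

With these invented numbers the middle interval is the only "fast" one, mirroring the zone B result: even though more total area changed in the first interval, its longer duration dilutes the annual intensity.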
Marino, Patricia; Siani, Carole; Roché, Henri; Moatti, Jean-Paul
2005-01-01
The objective of this study was to determine, taking into account uncertainty in cost and outcome parameters, the cost-effectiveness of high-dose chemotherapy (HDC) compared with conventional chemotherapy for advanced breast cancer patients. An analysis was conducted for 300 patients included in a randomized clinical trial designed to evaluate the benefits, in terms of disease-free survival and overall survival, of adding a single course of HDC to a four-cycle conventional-dose chemotherapy for breast cancer patients with axillary lymph node invasion. Costs were estimated from a detailed observation of the physical quantities consumed, and the Kaplan-Meier method was used to evaluate mean survival times. Incremental cost-effectiveness ratios were evaluated considering, in turn, disease-free survival and overall survival outcomes. Uncertainty was handled by constructing confidence intervals for these ratios using the truncated Fieller method. The cost per disease-free life year gained was evaluated at 13,074 Euros, a value that seems to be acceptable to society. However, handling uncertainty shows that the upper bound of the confidence interval is around 38,000 Euros, which is nearly three times higher. Moreover, as no difference in overall survival was demonstrated between treatments, the cost-effectiveness analysis, which then reduces to a cost minimization, indicated that the intensive treatment is a dominated strategy involving an extra cost of 7,400 Euros for no added benefit. Adding a single course of HDC led to a clinical benefit in terms of disease-free survival for an additional cost that seems acceptable, considering the point estimate of the ratio. However, handling uncertainty indicates a maximum ratio for which conclusions have to be discussed.
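The headline figure is an incremental cost-effectiveness ratio (ICER). Its point estimate is a simple quotient; the per-patient figures below are hypothetical, chosen only to land near the order of magnitude discussed, not the trial's actual cost data:

```python
def icer(cost_new, cost_ref, effect_new, effect_ref):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of
    effect (here, Euros per disease-free life-year gained)."""
    return (cost_new - cost_ref) / (effect_new - effect_ref)

# Hypothetical per-patient means: HDC arm vs conventional arm.
ratio = icer(cost_new=52000.0, cost_ref=38900.0, effect_new=4.5, effect_ref=3.5)
print(round(ratio))  # 13100 Euros per disease-free life-year gained
```

The truncated Fieller method used in the study exists precisely because both the numerator and denominator of this quotient are noisy and correlated, so a naive interval around the point estimate would understate the uncertainty; hence the wide gap between 13,074 and the 38,000 Euro upper bound.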
The Importance of Muscular Strength: Training Considerations.
Suchomel, Timothy J; Nimphius, Sophia; Bellon, Christopher R; Stone, Michael H
2018-04-01
This review covers underlying physiological characteristics and training considerations that may affect muscular strength including improving maximal force expression and time-limited force expression. Strength is underpinned by a combination of morphological and neural factors including muscle cross-sectional area and architecture, musculotendinous stiffness, motor unit recruitment, rate coding, motor unit synchronization, and neuromuscular inhibition. Although single- and multi-targeted block periodization models may produce the greatest strength-power benefits, concepts within each model must be considered within the limitations of the sport, athletes, and schedules. Bilateral training, eccentric training and accentuated eccentric loading, and variable resistance training may produce the greatest comprehensive strength adaptations. Bodyweight exercise, isolation exercises, plyometric exercise, unilateral exercise, and kettlebell training may be limited in their potential to improve maximal strength but are still relevant to strength development by challenging time-limited force expression and differentially challenging motor demands. Training to failure is likely not necessary for maximum gains in strength. Indeed, programming that combines heavy and light loads may improve strength and underpin other strength-power characteristics. Multiple sets appear to produce superior training benefits compared to single sets; however, an athlete's training status and the dose-response relationship must be considered. While 2- to 5-min interset rest intervals may produce the greatest strength-power benefits, rest interval length may vary based on an athlete's training age, fiber type, and genetics. Weaker athletes should focus on developing strength before emphasizing power-type training. Stronger athletes may begin to emphasize power-type training while maintaining/improving their strength.
Future research should investigate how best to implement accentuated eccentric loading and variable resistance training and examine how initial strength affects an athlete's ability to improve their performance following various training methods.
Timing of the Diagnosis of Attention-Deficit/Hyperactivity Disorder and Autism Spectrum Disorder.
Miodovnik, Amir; Harstad, Elizabeth; Sideridis, Georgios; Huntington, Noelle
2015-10-01
Symptoms of inattention, hyperactivity, and impulsivity are core features of attention-deficit/hyperactivity disorder (ADHD). However, children with autism spectrum disorder (ASD) often present with similar symptoms and may receive a diagnosis of ADHD first. We investigated the relationship between the timing of ADHD diagnosis in children with ASD and the age at ASD diagnosis. Data were drawn from the 2011-2012 National Survey of Children's Health, which asked parents to provide the age(s) at which their child received a diagnosis of ADHD and/or ASD. Using weighted prevalence estimates, we examined the association between a previous diagnosis of ADHD and the age at ASD diagnosis, while controlling for factors known to influence the timing of ASD diagnosis. Our study consisted of 1496 children with a current diagnosis of ASD as reported by parents of children ages 2 to 17 years. Approximately 20% of these children had initially been diagnosed with ADHD. Children diagnosed with ADHD before ASD were diagnosed with ASD ∼3 years (95% confidence interval 2.3-3.5) after children in whom ADHD was diagnosed at the same time or after ASD. The children with ADHD diagnosed first were nearly 30 times more likely to receive their ASD diagnosis after age 6 (95% confidence interval 11.2-77.8). The delay in ASD diagnosis was consistent across childhood and independent of ASD severity. To avoid potential delays in ASD diagnosis, clinicians should consider ASD in young children presenting with ADHD symptoms. Copyright © 2015 by the American Academy of Pediatrics.
The effects of morphine on fixed-interval patterning and temporal discrimination.
Odum, A L; Schaal, D W
2000-01-01
Changes produced by drugs in response patterns under fixed-interval schedules of reinforcement have been interpreted to result from changes in temporal discrimination. To examine this possibility, this experiment determined the effects of morphine on the response patterning of 4 pigeons during a fixed-interval 1-min schedule of food delivery with interpolated temporal discrimination trials. Twenty of the 50 total intervals were interrupted by choice trials. Pecks to one key color produced food if the interval was interrupted after a short time (after 2 or 4.64 s). Pecks to another key color produced food if the interval was interrupted after a long time (after 24.99 or 58 s). Morphine (1.0 to 10.0 mg/kg) decreased the index of curvature (a measure of response patterning) during fixed intervals and accuracy during temporal discrimination trials. Accuracy was equally disrupted following short and long sample durations. Although morphine disrupted temporal discrimination in the context of a fixed-interval schedule, these effects are inconsistent with interpretations of the disruption of response patterning as a selective overestimation of elapsed time. The effects of morphine may be related to the effects of more conventional external stimuli on response patterning. PMID:11029024
Context-Dependent Duration Signals in the Primate Prefrontal Cortex
Genovesio, Aldo; Seitz, Lucia K.; Tsujimoto, Satoshi; Wise, Steven P.
2016-01-01
The activity of some prefrontal (PF) cortex neurons distinguishes short from long time intervals. Here, we examined whether this property reflected a general timing mechanism or one dependent on behavioral context. In one task, monkeys discriminated the relative duration of 2 stimuli; in the other, they discriminated the relative distance of 2 stimuli from a fixed reference point. Both tasks had a pre-cue period (interval 1) and a delay period (interval 2) with no discriminant stimulus. Interval 1 elapsed before the presentation of the first discriminant stimulus, and interval 2 began after that stimulus. Both intervals had durations of either 400 or 800 ms. Most PF neurons distinguished short from long durations in one task or interval, but not in the others. When neurons did signal something about duration for both intervals, they did so in an uncorrelated or weakly correlated manner. These results demonstrate a high degree of context dependency in PF time processing. The PF, therefore, does not appear to signal durations abstractly, as would be expected of a general temporal encoder, but instead does so in a highly context-dependent manner, both within and between tasks. PMID:26209845
An iterative method for analysis of hadron ratios and Spectra in relativistic heavy-ion collisions
NASA Astrophysics Data System (ADS)
Choi, Suk; Lee, Kang Seog
2016-04-01
A new iteration method is proposed for analyzing both the multiplicities and the transverse momentum spectra measured within a small rapidity interval with a low-momentum cut-off, without assuming invariance of the rapidity distribution under Lorentz boosts. The method is applied to the hadron data measured by the ALICE collaboration for Pb+Pb collisions at √s_NN = 2.76 TeV. In order to correctly consider the resonance contribution only to the small rapidity interval measured, we consider only ratios involving hadrons whose transverse momentum spectrum is available. In spite of the small number of ratios considered, the quality of the fit to both the ratios and the transverse momentum spectra is excellent. Also, the calculated ratios involving strange baryons with the fitted parameters agree with the data surprisingly well.
Cumulant generating function formula of heat transfer in ballistic systems with lead-lead coupling
NASA Astrophysics Data System (ADS)
Li, Huanan; Agarwalla, Bijay Kumar; Wang, Jian-Sheng
2012-10-01
Based on a two-time observation protocol, we consider heat transfer in a given time interval tM in a lead-junction-lead system taking coupling between the leads into account. In view of the two-time observation, consistency conditions are carefully verified in our specific family of quantum histories. Furthermore, its implication is briefly explored. Then using the nonequilibrium Green's function method, we obtain an exact formula for the cumulant generating function for heat transfer between the two leads, valid in both transient and steady-state regimes. Also, a compact formula for the cumulant generating function in the long-time limit is derived, for which the Gallavotti-Cohen fluctuation symmetry is explicitly verified. In addition, we briefly discuss Di Ventra's repartitioning trick regarding whether the repartitioning procedure of the total Hamiltonian affects the nonequilibrium steady-state current fluctuation. All kinds of properties of nonequilibrium current fluctuations, such as the fluctuation theorem in different time regimes, could be readily given according to these exact formulas.
Liu, Hongjian; Wang, Zidong; Shen, Bo; Huang, Tingwen; Alsaadi, Fuad E
2018-06-01
This paper is concerned with the globally exponential stability problem for a class of discrete-time stochastic memristive neural networks (DSMNNs) with both leakage delays and probabilistic time-varying delays. For the probabilistic delays, a sequence of Bernoulli distributed random variables is utilized to determine within which intervals the time-varying delays fall at a certain time instant. The sector-bounded activation function is considered in the addressed DSMNN. By taking into account the state-dependent characteristics of the network parameters and choosing an appropriate Lyapunov-Krasovskii functional, some sufficient conditions are established under which the underlying DSMNN is globally exponentially stable in the mean square. The derived conditions are made dependent on both the leakage and the probabilistic delays, and are therefore less conservative than the traditional delay-independent criteria. A simulation example is given to show the effectiveness of the proposed stability criterion. Copyright © 2018 Elsevier Ltd. All rights reserved.
Multiscale spatial and temporal estimation of the b-value
NASA Astrophysics Data System (ADS)
García-Hernández, R.; D'Auria, L.; Barrancos, J.; Padilla, G.
2017-12-01
The estimation of the spatial and temporal variations of the Gutenberg-Richter b-value is of great importance in different seismological applications. One of the problems affecting its estimation is the heterogeneous distribution of the seismicity, which makes the estimate strongly dependent upon the selected spatial and/or temporal scale. This is especially important in volcanoes, where dense clusters of earthquakes often overlap the background seismicity. Proposed solutions for estimating temporal variations of the b-value include considering equally spaced time intervals or variable intervals containing an equal number of earthquakes. Similar approaches have been proposed to image the spatial variations of this parameter as well. We propose a novel multiscale approach, based on the method of Ogata and Katsura (1993), allowing a consistent estimation of the b-value regardless of the considered spatial and/or temporal scales. Our method, named MUST-B (MUltiscale Spatial and Temporal characterization of the B-value), consists of computing estimates of the b-value at multiple temporal and spatial scales, extracting for a given spatio-temporal point a statistical estimator of the value as well as an indication of the characteristic spatio-temporal scale. This approach also includes a consistent estimation of the completeness magnitude (Mc) and of the uncertainties on both b and Mc. We applied this method to example datasets for volcanic (Tenerife, El Hierro) and tectonic (Central Italy) areas, as well as an example application at the global scale.
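The abstract's method builds on Ogata and Katsura (1993); as a simpler illustration of the underlying single-scale estimate, the standard Aki-Utsu maximum-likelihood b-value is sketched below. This is not the MUST-B algorithm, and the synthetic catalogue is hypothetical.

```python
import math

def b_value_mle(mags, mc, dm=0.0):
    """Aki-Utsu maximum-likelihood b-value from magnitudes at or above
    the completeness magnitude mc; dm is the magnitude bin width
    (0 for continuous magnitudes). Returns (b, standard error)."""
    m = [x for x in mags if x >= mc]
    n = len(m)
    mean_m = sum(m) / n
    b = math.log10(math.e) / (mean_m - (mc - dm / 2.0))
    return b, b / math.sqrt(n)

# Synthetic Gutenberg-Richter catalogue with true b = 1 above mc = 1.0:
# N(>=M) ~ 10**(-b*M) means M = mc - log10(U)/b for uniform U.
import random
random.seed(7)
mags = [1.0 - math.log10(random.random()) for _ in range(5000)]
b, sigma_b = b_value_mle(mags, mc=1.0)
```

A multiscale scheme like the one described would repeat this kind of estimate over many nested spatial/temporal windows and combine the results, which is where the consistency across scales comes from.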
NASA Technical Reports Server (NTRS)
Spodick, D. H.; Quarry, V. M.; Khan, A. H.
1974-01-01
Systolic and diastolic time intervals in 14 cardiac patients with pulsus alternans revealed significant alternation of pre-ejection period (PEP), isovolumic contraction time (IVCT), left ventricular ejection time (LVET), ejection time index (ETI), PEP/LVET, and carotid dD/dt, with better functional values in the strong beats. Alternation of cycle length, duration of electromechanical systole (EMS), and total diastole, i.e., isovolumic relaxation period (IRP) plus diastolic filling period (DFP), occurred in 7 out of 8 patients. These diastolic intervals alternated reciprocally, such that the IRP of the strong beats encroached upon the DFP of the next (weak) beats.
Hildesheim, Allan; Gonzalez, Paula; Kreimer, Aimee R.; Wacholder, Sholom; Schussler, John; Rodriguez, Ana C.; Porras, Carolina; Schiffman, Mark; Sidawy, Mary; Schiller, John T.; Lowy, Douglas R.; Herrero, Rolando
2016-01-01
BACKGROUND Human papillomavirus (HPV) vaccines prevent HPV infection and cervical precancers. The impact of vaccinating women with a current infection or after treatment for an HPV-associated lesion is not fully understood. OBJECTIVES To determine whether HPV-16/18 vaccination influences the outcome of infections present at vaccination and the rate of infection and disease after treatment of lesions. STUDY DESIGN We included 1711 women (18–25 years) with carcinogenic human papillomavirus infection and 311 women of similar age who underwent treatment for cervical precancer and who participated in a community-based trial of the AS04-adjuvanted HPV-16/18 virus-like particle vaccine. Participants were randomized (human papillomavirus or hepatitis A vaccine) and offered 3 vaccinations over 6 months. Follow-up included annual visits (more frequently if clinically indicated), referral to colposcopy of high-grade and persistent low-grade lesions, treatment by loop electrosurgical excisional procedure when clinically indicated, and cytologic and virologic follow-up after treatment. Among women with human papillomavirus infection at the time of vaccination, we considered type-specific viral clearance, and development of cytologic (squamous intraepithelial lesions) and histologic (cervical intraepithelial neoplasia) lesions. Among treated women, we considered single-time and persistent human papillomavirus infection, squamous intraepithelial lesions, and cervical intraepithelial neoplasia 2+. Outcomes associated with infections absent before treatment also were evaluated. Infection-level analyses were performed and vaccine efficacy estimated. RESULTS Median follow-up was 56.7 months (women with human papillomavirus infection) and 27.3 months (treated women). 
There was no evidence of vaccine efficacy to increase clearance of human papillomavirus infections or decrease incidence of cytologic/histologic abnormalities associated with human papillomavirus types present at enrollment. Vaccine efficacy for human papillomavirus 16/18 clearance and against human papillomavirus 16/18 progression from infection to cervical intraepithelial neoplasia 2+ were −5.4% (95% confidence interval −19 to 10) and 0.3% (95% confidence interval −69 to 41), respectively. Among treated women, 34.1% had oncogenic infection and 1.6% had cervical intraepithelial neoplasia 2+ detected after treatment; of these, 69.8% and 20.0%, respectively, were the result of new infections. We observed no significant effect of vaccination on rates of infection/lesions after treatment. Vaccine efficacy estimates for human papillomavirus 16/18-associated persistent infection and cervical intraepithelial neoplasia 2+ after treatment were 34.7% (95% confidence interval −131 to 82) and −211% (95% confidence interval −2901 to 68), respectively. We observed evidence for a partial and nonsignificant protective effect of vaccination against new infections absent before treatment. For incident human papillomavirus 16/18, human papillomavirus 31/33/45, and oncogenic human papillomavirus infections post-treatment, vaccine efficacy estimates were 57.9% (95% confidence interval −44 to 88), 72.9% (95% confidence interval 29 to 90), and 36.7% (95% confidence interval 1.5 to 59), respectively. CONCLUSION We find no evidence for a vaccine effect on the fate of detectable human papillomavirus infections. We show that vaccination does not protect against infections/lesions after treatment. Evaluation of vaccine protection against new infections and resultant lesions warrants further consideration in future studies. PMID:26892991
Temporal differentiation and the optimization of system output
NASA Astrophysics Data System (ADS)
Tannenbaum, Emmanuel
2008-01-01
We develop two simplified dynamical models with which to explore the conditions under which temporal differentiation leads to increased system output. By temporal differentiation, we mean a division of labor whereby different subtasks associated with performing a given task are done at different times. The idea is that, by focusing on one particular set of subtasks at a time, it is possible to increase the efficiency with which each subtask is performed, thereby allowing for faster completion of the overall task. In the first model, we consider the filling and emptying of a tank in the presence of a time-varying resource profile. If a given resource is available, the tank may be filled at some rate rf. As long as the tank contains a resource, it may be emptied at a rate re, corresponding to processing into some product, which is either the final product of a process or an intermediate that is transported for further processing. Given a resource-availability profile over some time interval T, we develop an algorithm for determining the fill-empty profile that produces the maximum quantity of processed resource at the end of the time interval. We rigorously prove that the basic algorithm is one where the tank is filled when a resource is available and emptied when a resource is not available. In the second model, we consider a process whereby some resource is converted into some final product in a series of three agent-mediated steps. Temporal differentiation is incorporated by allowing the agents to oscillate between performing the first two steps and performing the last step. We find that temporal differentiation is favored when the number of agents is at intermediate values and when there are process intermediates that have long lifetimes compared to other characteristic time scales in the system.
Based on these results, we speculate that temporal differentiation may provide an evolutionary basis for the emergence of phenomena such as sleep, distinct REM and non-REM sleep states, and circadian rhythms in general. The essential argument is that in sufficiently complex biological systems, a maximal amount of information and tasks can be processed and completed if the system follows a temporally differentiated “work plan,” whereby the system focuses on one or a few tasks at a time.
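The proven-optimal policy of the first model (fill when the resource is available, empty when it is not) can be sketched as a discrete-time simulation. Unit time steps, a capacity bound, and the function name are simplifications of this sketch, not details from the paper.

```python
def simulate_tank(availability, rf, re, capacity=float("inf")):
    """Greedy fill/empty policy from the first model: fill at rate rf
    whenever the resource is available; otherwise empty (process) at
    rate re while the tank still holds resource. `availability` is a
    sequence of booleans, one per unit time step. Returns the total
    processed output over the horizon."""
    level, output = 0.0, 0.0
    for available in availability:
        if available:
            level = min(capacity, level + rf)   # fill while resource lasts
        elif level > 0:
            drained = min(level, re)            # process stored resource
            level -= drained
            output += drained
    return output

# Two steps of availability followed by three idle steps.
total = simulate_tank([True, True, False, False, False], rf=1.0, re=1.0)
```

The greedy structure is what makes the paper's result intuitive: filling and emptying never compete for the same time step, so deferring either action can only waste capacity.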
Road safety forecasts in five European countries using structural time series models.
Antoniou, Constantinos; Papadimitriou, Eleonora; Yannis, George
2014-01-01
Modeling road safety development is a complex task and needs to consider both the quantifiable impact of specific parameters as well as the underlying trends that cannot always be measured or observed. The objective of this research is to apply structural time series models for obtaining reliable medium- to long-term forecasts of road traffic fatality risk using data from 5 countries with different characteristics from all over Europe (Cyprus, Greece, Hungary, Norway, and Switzerland). Two structural time series models are considered: (1) the local linear trend model and (2) the latent risk time series model. Furthermore, a structured decision tree for the selection of the applicable model for each situation (developed within the Road Safety Data, Collection, Transfer and Analysis [DaCoTA] research project, cofunded by the European Commission) is outlined. First, the fatality and exposure data that are used for the development of the models are presented and explored. Then, the modeling process is presented, including the model selection process, introduction of intervention variables, and development of mobility scenarios. The forecasts using the developed models appear to be realistic and within acceptable confidence intervals. The proposed methodology is proved to be very efficient for handling different cases of data availability and quality, providing an appropriate alternative from the family of structural time series models in each country. A concluding section providing perspectives and directions for future research is presented.
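The first of the two models, the local linear trend model, can be written as an unobserved level plus slope filtered with a two-state Kalman filter. The sketch below is a minimal pure-Python version under assumed noise variances, not the DaCoTA implementation.

```python
def llt_forecast(y, var_eps, var_xi, var_zeta):
    """Kalman filter for the local linear trend model
        y_t = l_t + eps_t,  l_t = l_{t-1} + s_{t-1} + xi_t,
        s_t = s_{t-1} + zeta_t,
    returning the one-step-ahead forecast after the last observation."""
    l, s = y[0], 0.0                       # state: level and slope
    P = [[1e6, 0.0], [0.0, 1e6]]           # diffuse-ish initial covariance
    for t in range(1, len(y)):
        # Prediction: x = A x, P = A P A' + Q, with A = [[1, 1], [0, 1]]
        l, s = l + s, s
        P = [[P[0][0] + P[0][1] + P[1][0] + P[1][1] + var_xi,
              P[0][1] + P[1][1]],
             [P[1][0] + P[1][1],
              P[1][1] + var_zeta]]
        # Update with observation y[t] (observation matrix H = [1, 0])
        v = y[t] - l                       # innovation
        F = P[0][0] + var_eps              # innovation variance
        k0, k1 = P[0][0] / F, P[1][0] / F  # Kalman gain
        l, s = l + k0 * v, s + k1 * v
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
    return l + s

# On a noiseless linear trend the forecast should continue the slope.
forecast = llt_forecast([2.0 * t for t in range(20)], 0.01, 0.01, 0.01)
```

The latent risk model extends this idea with a second equation linking fatalities to an exposure series; in practice the variances would be estimated by maximum likelihood rather than fixed as here.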
Earthquakes: Recurrence and Interoccurrence Times
NASA Astrophysics Data System (ADS)
Abaimov, S. G.; Turcotte, D. L.; Shcherbakov, R.; Rundle, J. B.; Yakovlev, G.; Goltz, C.; Newman, W. I.
2008-04-01
The purpose of this paper is to discuss the statistical distributions of recurrence times of earthquakes. Recurrence times are the time intervals between successive earthquakes at a specified location on a specified fault. Although a number of statistical distributions have been proposed for recurrence times, we argue in favor of the Weibull distribution. The Weibull distribution is the only distribution that has a scale-invariant hazard function. We consider three sets of characteristic earthquakes on the San Andreas fault: (1) The Parkfield earthquakes, (2) the sequence of earthquakes identified by paleoseismic studies at the Wrightwood site, and (3) an example of a sequence of micro-repeating earthquakes at a site near San Juan Bautista. In each case we make a comparison with the applicable Weibull distribution. The number of earthquakes in each of these sequences is too small to make definitive conclusions. To overcome this difficulty we consider a sequence of earthquakes obtained from a one million year “Virtual California” simulation of San Andreas earthquakes. Very good agreement with a Weibull distribution is found. We also obtain recurrence statistics for two other model studies. The first is a modified forest-fire model and the second is a slider-block model. In both cases good agreements with Weibull distributions are obtained. Our conclusion is that the Weibull distribution is the preferred distribution for estimating the risk of future earthquakes on the San Andreas fault and elsewhere.
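The scale-invariant hazard the authors cite is the Weibull's defining property: its hazard is a pure power law in time. A small sketch, with parameter names (`beta` shape, `eta` scale) chosen here for illustration:

```python
import math

def weibull_hazard(t, beta, eta):
    """Weibull hazard h(t) = (beta/eta) * (t/eta)**(beta - 1) -- a pure
    power law in t, which is the scale invariance cited above."""
    return (beta / eta) * (t / eta) ** (beta - 1)

def sample_recurrence_time(u, beta, eta):
    """Inverse-CDF draw of a Weibull recurrence time from a
    uniform(0, 1) variate u."""
    return eta * (-math.log(1.0 - u)) ** (1.0 / beta)
```

For beta = 1 the hazard is constant (the memoryless exponential case); beta > 1, typical of characteristic earthquakes, gives a hazard that grows with the time elapsed since the last event.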
Lee, Peter N
2015-03-20
The "gateway hypothesis" usually refers to the possibility that the taking up of habit A, which is considered harmless (or less harmful), may lead to the subsequent taking up of another habit, B, which is considered harmful (or more harmful). Possible approaches to designing and analysing studies to test the hypothesis are discussed. Evidence relating to the use of snus (A) as a gateway for smoking (B) is then evaluated in detail. The importance of having appropriate data available on the sequence of use of A and B and on other potential confounding factors that may lead to the taking up of B is emphasised. Where randomised trials are impractical, the preferred designs include the prospective cohort study in which ever use of A and of B is recorded at regular intervals, and the cross-sectional survey in which time of starting to use A and B is recorded. Both approaches allow time-stratified analytical methods to be used, in which, in each time period, risk of initiating B among never users of B at the start of the interval is compared according to prior use of A. Adjustment in analysis for the potential confounding factors is essential. Of 11 studies of possible relevance conducted in Sweden, Finland or Norway, only one seriously addresses potential confounding by those other factors involved in the initiation of smoking. Furthermore, 5 of the 11 studies are of a design that does not allow proper testing of the gateway hypothesis for various reasons, and the analysis is unsatisfactory, sometimes seriously, in all the remaining six. While better analyses could be attempted for some of the six studies identified as having appropriate design, the issues of confounding remain, and more studies are clearly needed. To obtain a rapid answer, a properly designed cross-sectional survey is recommended.
Bertolino, Marco; Costa, Gabriele; Carella, Mirko; Cattaneo-Vietti, Riccardo; Cerrano, Carlo; Pansini, Maurizio; Quarta, Gianluca; Calcagnile, Lucio; Bavestrello, Giorgio
2017-01-01
This paper concerns the changes that occurred, over both decennial and millennial spans of time, in a sponge assemblage present in coralligenous biogenic build-ups growing at 15 m depth in the Ligurian Sea (Western Mediterranean). The comparison of the sponge diversity after a time interval of about 40 years (1973-2014) showed a significant reduction in species richness (about 45%). This decrease mainly affected the massive/erect sponges, and in particular the subclass Keratosa, with a species loss of 67%, while the encrusting and cavity-dwelling sponges lost 36% and 50% of their species, respectively. The boring sponges lost only one species (25%). This changing pattern suggested that the inner habitat of the bioconstructions was less affected by the variations of the environmental conditions, or by the human pressures which, on the contrary, strongly affected the species living on the surface of the biogenic build-ups. Five cores extracted from the bioherms, dating back to 3500 YBP, allowed us to analyse the siliceous spicules trapped in them in order to obtain taxonomic information. Changes at the generic level in diversity and abundance were observed at 500/250-year intervals, ranging between 19 and 33 genera. The number of genera showed a sharp decrease from 3500-3000 to 3000-2500 YBP. After this period, the genera regularly increased until 1500-1250 YBP, from when they progressively decreased until 1000-500 YBP. Tentatively, these changes could be related to the different climatic periods that followed one another in the Mediterranean area within the considered time span. The recent depletion in sponge richness recorded in the Ligurian coralligenous can be considered relevant. In fact, the analysis of the spicules indicated that the sponges living in these coralligenous habitats remained fairly stable over 3000 years, but could have lost a significant part of their biodiversity in the last decades, coinciding with a series of warming episodes.
PMID:28531209
NASA Astrophysics Data System (ADS)
Tarling, G. A.; Stowasser, G.; Ward, P.; Poulton, A. J.; Zhou, M.; Venables, H. J.; McGill, R. A. R.; Murphy, E. J.
2012-01-01
The biomass size structure of pelagic communities provides a system level perspective that can be instructive when considering trophic interactions. Such perspectives can become even more powerful when combined with taxonomic information and stable isotope analysis. Here we apply these approaches to the pelagic community of the Scotia Sea (Southern Ocean) and consider the structure and development of trophic interactions over different years and seasons. Samples were collected from three open-ocean cruises during the austral spring 2006, summer 2008 and autumn 2009. Three main sampling techniques were employed: sampling bottles for microplankton (0-50 m), vertically hauled fine meshed nets for mesozooplankton (0-400 m) and coarse-meshed trawls for macrozooplankton and nekton (0-1000 m). All samples were identified to the lowest practicable taxonomic level and their abundance, individual body weight and biomass (in terms of carbon) estimated. Slopes of the normalised biomass spectrum versus size showed a significant but not substantial difference between cruises and were between -1.09 and -1.06. These slopes were shallower than expected for a community at equilibrium and indicated that there was an accumulation of biomass in the larger size classes (10^1-10^5 mg C ind^-1). A secondary structure of biomass domes was also apparent, with the domes being 2.5-3 log10 intervals apart in spring and summer and 2 log10 intervals apart in autumn. The recruitment of the copepod-consuming macrozooplankton Euphausia triacantha and Themisto gaudichaudii into an additional biomass dome was responsible for the decrease in the inter-dome interval in autumn. Predator to prey mass ratios estimated from stable isotope analysis reached a minimum in autumn while the estimated trophic level of myctophid fish was highest in that season. This reflected greater amounts of internal recycling and increased numbers of trophic levels in autumn compared to earlier times of the year.
The accumulation of biomass in larger size classes throughout the year in the Scotia Sea may reflect the prevalence of species that store energy and have multiyear life-cycles.
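The normalised biomass size spectrum (NBSS) slope quoted above is conventionally obtained by binning body masses into logarithmic size classes, normalising each class's biomass by the linear bin width, and fitting a line in log-log space. The sketch below shows that standard recipe under stated simplifications (log10 bins, ordinary least squares); it is not the authors' code.

```python
import math

def nbss_slope(body_mass_mg_c, log10_bin_width=1.0):
    """Slope of the normalised biomass size spectrum: bin individual
    body masses (mg C) into log10 size classes, normalise each class's
    total biomass by the linear bin width, and regress
    log10(normalised biomass) on the log10 class centre."""
    bins = {}
    for m in body_mass_mg_c:
        k = math.floor(math.log10(m) / log10_bin_width)
        bins[k] = bins.get(k, 0.0) + m          # total biomass per class
    xs, ys = [], []
    for k, biomass in sorted(bins.items()):
        lo = 10 ** (k * log10_bin_width)
        hi = 10 ** ((k + 1) * log10_bin_width)
        xs.append((k + 0.5) * log10_bin_width)  # log10 class centre
        ys.append(math.log10(biomass / (hi - lo)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Toy data: equal biomass (30 mg C) in two adjacent decades.
slope = nbss_slope([5.0] * 6 + [30.0])
```

Slopes shallower than the theoretical equilibrium value, as reported here, correspond to relatively more biomass surviving into the larger size classes.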
Association mining of dependency between time series
NASA Astrophysics Data System (ADS)
Hafez, Alaaeldin
2001-03-01
Time series analysis is considered a crucial component of strategic control over a broad variety of disciplines in business, science and engineering. Time series data is a sequence of observations collected over intervals of time. Each time series describes a phenomenon as a function of time. Analysis of time series data includes discovering trends (or patterns) in a time series sequence. In the last few years, data mining has emerged and been recognized as a new technology for data analysis. Data mining is the process of discovering potentially valuable patterns, associations, trends, sequences and dependencies in data. Data mining techniques can discover information that many traditional business analysis and statistical techniques fail to deliver. In this paper, we adapt data mining techniques to the analysis of time series data. By using data mining techniques, maximal frequent patterns are discovered and used in predicting future sequences or trends, where trends describe the behavior of a sequence. In order to include different types of time series (e.g. irregular and non-systematic), we consider past frequent patterns of the same time sequences (local patterns) and of other dependent time sequences (global patterns). We use the word 'dependent' instead of the word 'similar' for emphasis on real life time series where two time series sequences could be completely different (in values, shapes, etc.), but they still react to the same conditions in a dependent way. In this paper, we propose the Dependence Mining Technique that could be used in predicting time series sequences. The proposed technique consists of three phases: (a) for all time series sequences, generate their trend sequences; (b) discover maximal frequent trend patterns and generate pattern vectors (to keep information about frequent trend patterns); and (c) use the trend pattern vectors to predict future time series sequences.
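The first two phases of the technique can be illustrated with a much-simplified sketch: discretize a numeric series into trend symbols, then count sliding-window subpatterns and keep those meeting a minimum support. Function names and the support-counting scheme are illustrative assumptions, not the paper's algorithm.

```python
from collections import Counter

def trend_sequence(series):
    """Phase (a): map a numeric sequence to trend symbols
    ('U'p / 'D'own / 'F'lat) between consecutive observations."""
    return "".join("U" if b > a else "D" if b < a else "F"
                   for a, b in zip(series, series[1:]))

def frequent_patterns(trends, length, min_support):
    """Phase (b), simplified: count sliding-window subpatterns of a
    given length and keep those meeting the minimum support."""
    counts = Counter(trends[i:i + length]
                     for i in range(len(trends) - length + 1))
    return {p: c for p, c in counts.items() if c >= min_support}

patterns = frequent_patterns(trend_sequence([1, 2, 3, 2, 3, 4, 3]), 2, 2)
```

Prediction (phase c) would then match the most recent trend window, locally or in a dependent series, against the stored pattern vectors to extrapolate the next symbols.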
Pyrilamine-induced prolonged QT interval in adolescent with drug overdose.
Paudel, Govinda; Syed, Muhammad; Kalantre, Sarika; Sharma, Jayendra
2011-10-01
The widespread availability of antihistamines in many over-the-counter preparations can lead to significant hazard to the public because of their possible link to potential ventricular arrhythmias secondary to prolongation of QT interval. The effect can be further compounded by the use of other commonly used medications such as macrolides, antifungal agents, antipsychotics, and other antihistamine-containing preparations. The effect of antihistamines on QT interval is not a class effect but is unique to certain medications. Pyrilamine, a first-generation antihistaminic agent, is considered safe as there are no reports regarding its cardiac toxicity available in literature. We report a case of an adolescent with prolonged QT interval after an overdose of pyrilamine.
Learned Interval Time Facilitates Associate Memory Retrieval
ERIC Educational Resources Information Center
van de Ven, Vincent; Kochs, Sarah; Smulders, Fren; De Weerd, Peter
2017-01-01
The extent to which time is represented in memory remains underinvestigated. We designed a time paired associate task (TPAT) in which participants implicitly learned cue-time-target associations between cue-target pairs and specific cue-target intervals. During subsequent memory testing, participants showed increased accuracy of identifying…
NASA Technical Reports Server (NTRS)
Wardrip, S. C. (Editor)
1979-01-01
Thirty-eight papers are presented addressing various aspects of precise time and time interval applications. Areas discussed include: past accomplishments; state of the art systems; new and useful applications, procedures, and techniques; and fruitful directions for research efforts.
A Role for Memory in Prospective Timing informs Timing in Prospective Memory
Waldum, Emily R; Sahakyan, Lili
2014-01-01
Time-based prospective memory (TBPM) tasks require the estimation of time in passing – known as prospective timing. Prospective timing is said to depend on an attentionally-driven internal clock mechanism, and is thought to be unaffected by memory for interval information (for reviews see, Block, Hancock, & Zakay, 2010; Block & Zakay, 1997). A prospective timing task that required a verbal estimate following the entire interval (Experiment 1) and a TBPM task that required production of a target response during the interval (Experiment 2) were used to test an alternative view that episodic memory does influence prospective timing. In both experiments, participants performed an ongoing lexical decision task of fixed duration while a varying number of songs were played in the background. Experiment 1 results revealed that verbal time estimates became longer the more songs participants remembered from the interval, suggesting that memory for interval information influences prospective time estimates. In Experiment 2, participants who were asked to perform the TBPM task without the aid of an external clock made their target responses earlier as the number of songs increased, indicating that prospective estimates of elapsed time increased as more songs were experienced. For participants who had access to a clock, changes in clock-checking coincided with the occurrence of song boundaries, indicating that participants used both song information and clock information to estimate time. Finally, ongoing task performance and verbal reports in both experiments further substantiate a role for episodic memory in prospective timing. PMID:22984950
The Behavioral Economics of Choice and Interval Timing
ERIC Educational Resources Information Center
Jozefowiez, J.; Staddon, J. E. R.; Cerutti, D. T.
2009-01-01
The authors propose a simple behavioral economic model (BEM) describing how reinforcement and interval timing interact. The model assumes a Weber-law-compliant logarithmic representation of time. Associated with each represented time value are the payoffs that have been obtained for each possible response. At a given real time, the response with…
Aging Effects on Cardiac and Respiratory Dynamics in Healthy Subjects across Sleep Stages
Schumann, Aicko Y.; Bartsch, Ronny P.; Penzel, Thomas; Ivanov, Plamen Ch.; Kantelhardt, Jan W.
2010-01-01
Study Objectives: Respiratory and heart rate variability exhibit fractal scaling behavior on certain time scales. We studied the short-term and long-term correlation properties of heartbeat and breathing-interval data from disease-free subjects focusing on the age-dependent fractal organization. We also studied differences across sleep stages and night-time wake and investigated quasi-periodic variations associated with cardiac risk. Design: Full-night polysomnograms were recorded during 2 nights, including electrocardiogram and oronasal airflow. Setting: Data were collected in 7 laboratories in 5 European countries. Participants: 180 subjects without health complaints (85 males, 95 females) aged from 20 to 89 years. Interventions: None. Measurements and Results: Short-term correlations in heartbeat intervals measured by the detrended fluctuation analysis (DFA) exponent α1 show characteristic age dependence with a maximum around 50–60 years disregarding the dependence on sleep and wake states. Long-term correlations measured by α2 differ in NREM sleep when compared with REM sleep and wake, besides weak age dependence. Results for respiratory intervals are similar to those for α2 of heartbeat intervals. Deceleration capacity (DC) decreases with age; it is lower during REM and deep sleep (compared with light sleep and wake). Conclusion: The age dependence of α1 should be considered when using this value for diagnostic purposes in post-infarction patients. Pronounced long-term correlations (larger α2) for heartbeat and respiration during REM sleep and wake indicate an enhanced control of higher brain regions, which is absent during NREM sleep. Reduced DC possibly indicates an increased cardiovascular risk with aging and during REM and deep sleep. Citation: Schumann AY; Bartsch RP; Penzel T; Ivanov PC; Kantelhardt JW. Aging effects on cardiac and respiratory dynamics in healthy subjects across sleep stages. SLEEP 2010;33(7):943-955. PMID:20614854
Gendaszek, Andrew S.; Burton, Karl D.; Magirl, Christopher S.; Konrad, Christopher P.
2017-01-01
In the Pacific Northwest of the United States, salmon eggs incubating within streambed gravels are susceptible to scour during floods. The threat to egg-to-fry survival by streambed scour is mitigated, in part, by the adaptation of salmon to bury their eggs below the typical depth of scour. In regulated rivers globally, we suggest that water managers consider the effect of dam operations on scour and its impacts on species dependent on benthic habitats. We instrumented salmon-spawning habitat with accelerometer scour monitors (ASMs) at 73 locations in 11 reaches of the Cedar River in western Washington State of the United States from Autumn 2013 through Spring 2014. The timing of scour was related to the discharge measured at a nearby gage and compared to previously published ASM data at 26 locations in two reaches of the Cedar River collected between Autumn 2010 and Spring 2011. Thirteen percent of the recovered ASMs recorded scour during a peak-discharge event in March 2014 (2- to 3-year recurrence interval) compared to 71% of the recovered ASMs during a higher peak-discharge event in January 2011 (10-year recurrence interval). Of the 23 locations where ASMs recorded scour during the 2011 and 2014 deployments, 35% had scour when the discharge was ≤87.3 m³/s (3,082 ft³/s) (the 2-year recurrence interval discharge), with 13% recording scour at or below the 62.3 m³/s (2,200 ft³/s) operational threshold for peak-discharge management during the incubation of salmon eggs. Scour to the depth of salmon egg pockets was limited during peak discharges with frequent (1.25-year or less) recurrence intervals, which managers can regulate through dam operations on the Cedar River. Pairing novel measurements of the timing of streambed scour with discharge data allows the development of peak-discharge management strategies that protect salmon eggs incubating within streambed gravels during floods.
Piperaquine Population Pharmacokinetics and Cardiac Safety in Cambodia
Lon, Chanthap; Spring, Michele; Sok, Sommethy; Ta-aksorn, Winita; Kodchakorn, Chanikarn; Pann, Sut-Thang; Chann, Soklyda; Ittiverakul, Mali; Sriwichai, Sabaithip; Buathong, Nillawan; Kuntawunginn, Worachet; So, Mary; Youdaline, Theng; Milner, Erin; Wojnarski, Mariusz; Lanteri, Charlotte; Manning, Jessica; Prom, Satharath; Haigney, Mark; Cantilena, Louis; Saunders, David
2017-01-01
ABSTRACT Despite the rising rates of resistance to dihydroartemisinin-piperaquine (DP), DP remains a first-line therapy for uncomplicated malaria in many parts of Cambodia. While DP is generally well tolerated as a 3-day DP (3DP) regimen, compressed 2-day DP (2DP) regimens were associated with treatment-limiting cardiac repolarization effects in a recent clinical trial. To better estimate the risks of piperaquine on QT interval prolongation, we pooled data from three randomized clinical trials conducted between 2010 and 2014 in northern Cambodia. A population pharmacokinetic model was developed to compare exposure-response relationships between the 2DP and 3DP regimens while accounting for differences in regimen and sample collection times between studies. A 2-compartment model with first-order absorption and elimination without covariates best fit the data. The linear slope-intercept model predicted a 0.05-ms QT prolongation per ng/ml of piperaquine (5 ms per 100 ng/ml) in this largely male population. Though the plasma half-life was similar in both regimens, peak and total piperaquine exposures were higher in those treated with the 2DP regimen. Furthermore, the correlation between the plasma piperaquine concentration and the QT interval prolongation was stronger in the population receiving the 2DP regimen. Neither the time since the previous meal nor the baseline serum magnesium or potassium levels had additive effects on QT interval prolongation. As electrocardiographic monitoring is often nonexistent in areas where malaria is endemic, 2DP regimens should be avoided and the 3DP regimen should be carefully considered in settings where viable alternative therapies exist. When DP is employed, the risk of cardiotoxicity can be mitigated by combining a 3-day regimen, enforcing a 3-h fast before and after administration, and avoiding the concomitant use of QT interval-prolonging medications. 
(This study used data from three clinical trials that are registered at ClinicalTrials.gov under identifiers NCT01280162, NCT01624337, and NCT01849640.) PMID:28193647
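The exposure-response relationship reported above is a back-of-the-envelope linear calculation (the published model is slope-intercept, but the intercept is not reported in the abstract, so only the slope term appears in this sketch; the function name is illustrative):

```python
def predicted_qt_prolongation_ms(piperaquine_ng_ml, slope_ms_per_ng_ml=0.05):
    """Slope-only sketch of the linear exposure-response model:
    ~0.05 ms of QT prolongation per ng/ml of plasma piperaquine,
    i.e. 5 ms per 100 ng/ml, as stated in the abstract."""
    return slope_ms_per_ng_ml * piperaquine_ng_ml
```

At a plasma concentration of 100 ng/ml this reproduces the abstract's 5 ms figure, and it scales linearly with exposure, which is why the higher peak exposures of the 2DP regimen translate into larger predicted prolongation.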
Generating variable and random schedules of reinforcement using Microsoft Excel macros.
Bancroft, Stacie L; Bourret, Jason C
2008-01-01
Variable reinforcement schedules are used to arrange the availability of reinforcement following varying response ratios or intervals of time. Random reinforcement schedules are subtypes of variable reinforcement schedules that can be used to arrange the availability of reinforcement at a constant probability across number of responses or time. Generating schedule values for variable and random reinforcement schedules can be difficult. The present article describes the steps necessary to write macros in Microsoft Excel that will generate variable-ratio, variable-interval, variable-time, random-ratio, random-interval, and random-time reinforcement schedule values.
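The same kinds of schedule values the article generates with Excel macros can be sketched in a few lines of Python (illustrative function names; the uniform spread for the variable schedule and the constant per-tick probability for the random schedule are common parameterizations, not necessarily the article's):

```python
import random

def variable_interval(mean_s, n, spread=0.5, seed=None):
    """Variable-interval values: n intervals drawn uniformly around
    mean_s, so the arithmetic mean of the schedule approximates mean_s."""
    rng = random.Random(seed)
    lo, hi = mean_s * (1 - spread), mean_s * (1 + spread)
    return [rng.uniform(lo, hi) for _ in range(n)]

def random_interval(mean_s, n, tick=1.0, seed=None):
    """Random-interval values: reinforcement is set up with constant
    probability tick/mean_s on each clock tick, giving geometrically
    distributed intervals whose mean approximates mean_s."""
    rng = random.Random(seed)
    p = tick / mean_s
    out = []
    for _ in range(n):
        ticks = 1
        while rng.random() >= p:
            ticks += 1
        out.append(ticks * tick)
    return out
```

The constant per-tick probability in `random_interval` is what distinguishes random schedules from merely variable ones: the likelihood of reinforcement setup is the same at every moment, regardless of elapsed time.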
A novel approach based on preference-based index for interval bilevel linear programming problem.
Ren, Aihong; Wang, Yuping; Xue, Xingsi
2017-01-01
This paper proposes a new methodology for solving the interval bilevel linear programming problem in which all coefficients of both objective functions and constraints are considered as interval numbers. In order to keep as much uncertainty of the original constraint region as possible, the original problem is first converted into an interval bilevel programming problem with interval coefficients in both objective functions only through normal variation of interval number and chance-constrained programming. With the consideration of different preferences of different decision makers, the concept of the preference level that the interval objective function is preferred to a target interval is defined based on the preference-based index. Then a preference-based deterministic bilevel programming problem is constructed in terms of the preference level and the order relation [Formula: see text]. Furthermore, the concept of a preference δ-optimal solution is given. Subsequently, the constructed deterministic nonlinear bilevel problem is solved with the help of an estimation of distribution algorithm. Finally, several numerical examples are provided to demonstrate the effectiveness of the proposed approach.
Digital computing cardiotachometer
NASA Technical Reports Server (NTRS)
Smith, H. E.; Rasquin, J. R.; Taylor, R. A. (Inventor)
1973-01-01
A tachometer is described which instantaneously measures heart rate. During the two intervals between three succeeding heart beats, the electronic system: (1) measures the interval by counting cycles from a fixed frequency source occurring between the two beats; and (2) computes heart rate during the interval between the next two beats by counting the number of times that the interval count must be counted down to zero in order to equal a total count of sixty times the frequency of the fixed frequency source (sixty being the conversion to beats per minute).
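The arithmetic the circuit implements reduces to one division; a direct restatement (this computes the same quantity as the repeated count-down hardware, not the counting mechanism itself):

```python
def heart_rate_bpm(cycle_count, clock_hz):
    """Instantaneous heart rate from the number of fixed-frequency clock
    cycles counted between two successive beats: the beat interval is
    cycle_count / clock_hz seconds, so rate = 60 * clock_hz / cycle_count."""
    return 60.0 * clock_hz / cycle_count
```

With a hypothetical 1 kHz reference clock, 800 cycles between beats corresponds to a 0.8 s interval, i.e. 75 beats per minute.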
NASA Astrophysics Data System (ADS)
Ryzhenkov, V.; Ivashchenko, V.; Vinuesa, R.; Mullyadzhanov, R.
2016-10-01
We use the open-source code nek5000 to assess the accuracy of high-order spectral element large-eddy simulations (LES) of a turbulent channel flow depending on the spatial resolution compared to the direct numerical simulation (DNS). The Reynolds number Re = 6800 is considered based on the bulk velocity and half-width of the channel. The filtered governing equations are closed with the dynamic Smagorinsky model for subgrid stresses and heat flux. The results show very good agreement between LES and DNS for time-averaged velocity and temperature profiles and their fluctuations. Even the coarse LES grid which contains around 30 times less points than the DNS one provided predictions of the friction velocity within 2.0% accuracy interval.
NASA Technical Reports Server (NTRS)
Ito, Kazufumi
1987-01-01
The linear quadratic optimal control problem on an infinite time interval for linear time-invariant systems defined on Hilbert spaces is considered. The optimal control is given in feedback form in terms of the solution π of the associated algebraic Riccati equation (ARE). A Ritz-type approximation is used to obtain a sequence π^N of finite dimensional approximations of the solution to the ARE. A sufficient condition under which π^N converges strongly to π is obtained. Under this condition, a formula is derived which can be used to obtain a rate of convergence of π^N to π. The results are demonstrated for the Galerkin approximation applied to parabolic systems and for the averaging approximation applied to hereditary differential systems.
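For reference, the feedback law and ARE the abstract refers to take the standard infinite-horizon LQR form (generic notation for the well-known equations, not symbols copied from the paper):

```latex
\Pi A + A^{*}\Pi - \Pi B R^{-1} B^{*}\Pi + Q = 0,
\qquad
u(t) = -R^{-1} B^{*} \Pi\, x(t)
```

Here \(\Pi\) is the Riccati solution the abstract denotes π, and the approximation scheme replaces \(\Pi\) by finite dimensional \(\Pi^{N}\) in the same feedback formula.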
Lacosamide cardiac safety: a thorough QT/QTc trial in healthy volunteers.
Kropeit, D; Johnson, M; Cawello, W; Rudd, G D; Horstmann, R
2015-11-01
To determine whether lacosamide prolongs the corrected QT interval (QTc). In this randomized, double-blind, positive- and placebo-controlled, parallel-design trial, healthy volunteers were randomized to lacosamide 400 mg/day (maximum-recommended daily dose, 6 days), lacosamide 800 mg/day (supratherapeutic dose, 6 days), placebo (6 days), or moxifloxacin 400 mg/day (3 days). Variables included maximum time-matched change from baseline in QT interval individually corrected for heart rate ([HR] QTcI), other ECG parameters, pharmacokinetics (PK), and safety/tolerability. The QTcI mean maximum difference from placebo was -4.3 ms and -6.3 ms for lacosamide 400 and 800 mg/day; upper limits of the 2-sided 90% confidence interval were below the 10 ms non-inferiority margin (-0.5 and -2.5 ms, respectively). Placebo-corrected QTcI for moxifloxacin was +10.4 ms (lower 90% confidence bound >0 [6.6 ms]), which established assay sensitivity for this trial. As lacosamide did not increase QTcI, the trial is considered a negative QTc trial. There was no dose-related or clinically relevant effect on QRS duration. HR increased from baseline by ~5 bpm with lacosamide 800 mg/day versus placebo. Placebo-subtracted mean increases in PR interval at tmax were 7.3 ms (400 mg/day) and 11.9 ms (800 mg/day). There were no findings of second-degree or higher atrioventricular block. Adverse events (AEs) were dose related and most commonly involved the nervous and gastrointestinal systems. Lacosamide (≤ 800 mg/day) did not prolong the QTc interval. Lacosamide caused a small, dose-related increase in mean PR interval that was not associated with AEs. Cardiac, overall safety, and PK profiles for lacosamide in healthy volunteers were consistent with those observed in patients with partial-onset seizures. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Alternative Confidence Interval Methods Used in the Diagnostic Accuracy Studies
Erdoğan, Semra; Gülhan, Orekıcı Temel
2016-01-01
Background/Aim. It is necessary to decide whether the newly improved methods are better than the standard or reference test or not. To decide whether the new diagnostics test is better than the gold standard test/imperfect standard test, the differences of estimated sensitivity/specificity are calculated with the help of information obtained from samples. However, to generalize this value to the population, it should be given with the confidence intervals. The aim of this study is to evaluate the confidence interval methods developed for the differences between the two dependent sensitivity/specificity values on a clinical application. Materials and Methods. In this study, confidence interval methods like Asymptotic Intervals, Conditional Intervals, Unconditional Interval, Score Intervals, and Nonparametric Methods Based on Relative Effects Intervals are used. Besides, as clinical application, data used in diagnostics study by Dickel et al. (2010) has been taken as a sample. Results. The results belonging to the alternative confidence interval methods for Nickel Sulfate, Potassium Dichromate, and Lanolin Alcohol are given as a table. Conclusion. While preferring the confidence interval methods, the researchers have to consider whether the case to be compared is single ratio or dependent binary ratio differences, the correlation coefficient between the rates in two dependent ratios and the sample sizes. PMID:27478491
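Of the families compared, the asymptotic (Wald) interval for the difference of two dependent proportions is the simplest to state; a sketch using the discordant counts of a paired design (a generic textbook form with illustrative names, not the specific estimators evaluated in the study):

```python
import math

def paired_diff_wald_ci(b, c, n, z=1.96):
    """Asymptotic (Wald) confidence interval for the difference of two
    dependent proportions, e.g. sensitivities of two tests applied to the
    same n subjects. b and c are the discordant counts (test 1 positive
    only, test 2 positive only); the difference estimate is (b - c)/n and
    its variance is [b + c - (b - c)^2 / n] / n^2."""
    diff = (b - c) / n
    se = math.sqrt(b + c - (b - c) ** 2 / n) / n
    return diff - z * se, diff + z * se
```

Because the same subjects contribute to both proportions, only the discordant pairs carry information about the difference, which is why the correlation between the two dependent rates matters when choosing among interval methods.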
Post-KR Delay Intervals and Mental Practice: A Test of Adams' Closed Loop Theory
ERIC Educational Resources Information Center
Bole, Ronald
1976-01-01
The present study suggests that post-KR delay interval time, or activity in the interval, has little to do with learning on a self-paced positioning task, while not ruling out that on ballistic tasks or more complex nonballistic tasks a learner could make use of additional time or strategy. (MB)
It's time to fear! Interval timing in odor fear conditioning in rats
Shionoya, Kiseko; Hegoburu, Chloé; Brown, Bruce L.; Sullivan, Regina M.; Doyère, Valérie; Mouly, Anne-Marie
2013-01-01
Time perception is crucial to goal attainment in humans and other animals, and interval timing also guides fundamental animal behaviors. Accumulating evidence has made it clear that in associative learning, temporal relations between events are encoded, and a few studies suggest this temporal learning occurs very rapidly. Most of these studies, however, have used methodologies that do not permit investigating the emergence of this temporal learning. In the present study we monitored respiration, ultrasonic vocalization (USV) and freezing behavior in rats in order to perform fine-grain analysis of fear responses during odor fear conditioning. In this paradigm an initially neutral odor (the conditioned stimulus, CS) predicted the arrival of an aversive unconditioned stimulus (US, footshock) at a fixed 20-s time interval. We first investigated the development of a temporal pattern of responding related to CS-US interval duration. The data showed that during acquisition with odor-shock pairings, a temporal response pattern of respiration rate was observed. Changing the CS-US interval duration from 20-s to 30-s resulted in a shift of the temporal response pattern appropriate to the new duration thus demonstrating that the pattern reflected the learning of the CS-US interval. A temporal pattern was also observed during a retention test 24 h later for both respiration and freezing measures, suggesting that the animals had stored the interval duration in long-term memory. We then investigated the role of intra-amygdalar dopaminergic transmission in interval timing. For this purpose, the D1 dopaminergic receptors antagonist SCH23390 was infused in the basolateral amygdala before conditioning. This resulted in an alteration of timing behavior, as reflected in differential temporal patterns between groups observed in a 24 h retention test off drug. The present data suggest that D1 receptor dopaminergic transmission within the amygdala is involved in temporal processing. 
PMID:24098277
Kim, Hae Jin; Song, Yong Ju; Kim, Young Kook; Jeoung, Jin Wook; Park, Ki Ho
2017-07-01
To evaluate functional progression in preperimetric glaucoma (PPG) with disc hemorrhage (DH) and to determine the time interval between the first-detected DH and development of glaucomatous visual field (VF) defect. A total of 87 patients who had been first diagnosed with PPG were enrolled. The medical records of PPG patients without DH (Group 1) and with DH (Group 2) were reviewed. When glaucomatous VF defect appeared, the time interval from the diagnosis of PPG to the development of VF defect was calculated and compared between the two groups. In Group 2, the time intervals from the first-detected DH to VF defect were compared between single- and recurrent-DH patients. Of the enrolled patients, 45 had DH in the preperimetric stage. The median time interval from the diagnosis of PPG to the development of VF defect was 73.3 months in Group 1, versus 45.4 months in Group 2 (P = 0.042). The cumulative probability of development of VF defect after diagnosis of PPG was significantly greater in Group 2 than in Group 1. The median time interval from first-detected DH to the development of VF defect was 37.8 months. The median time interval from DH to VF defect and the cumulative probability of VF defect after DH did not show a statistical difference between single- and recurrent-DH patients. The median time interval between the diagnosis of PPG and the development of VF defect was significantly shorter in PPG with DH. The VF defect appeared 37.8 months after the first-detected DH in PPG.
Oude Ophuis, Charlotte M C; van Akkooi, Alexander C J; Rutkowski, Piotr; Voit, Christiane A; Stepniak, Joanna; Erler, Nicole S; Eggermont, Alexander M M; Wouters, Michel W J M; Grünhagen, Dirk J; Verhoef, Cornelis Kees
2016-11-01
Sentinel node biopsy (SNB) is essential for adequate melanoma staging. Most melanoma guidelines advocate to perform wide local excision and SNB as soon as possible, causing time pressure. To investigate the role of time interval between melanoma diagnosis and SNB on sentinel node (SN) positivity and survival. This is a retrospective observational study concerning a cohort of melanoma patients from four European Organization for Research and Treatment of Cancer Melanoma Group tertiary referral centres from 1997 to 2013. A total of 4124 melanoma patients underwent SNB. Patients were selected if date of diagnosis and follow-up (FU) information were available, and SNB was performed in <180 d. A total of 3546 patients were included. Multivariable logistic regression and Cox regression analyses were performed to investigate how baseline characteristics and time interval until SNB are related to positivity rate, disease-free survival (DFS) and melanoma-specific survival (MSS). Median time interval was 43 d (interquartile range [IQR] 29-60 d), and 705 (19.9%) of 3546 patients had a positive SN. Sentinel node positivity was equal for early surgery (≤43 d) versus late surgery (>43 d): 19.7% versus 20.1% (p = 0.771). Median FU was 50 months (IQR 24-84 months). Sentinel node metastasis (hazard ratio [HR] 3.17, 95% confidence interval [95% CI] 2.53-3.97), ulceration (HR 1.99, 95% CI 1.58-2.51), Breslow thickness (HR 1.06, 95% CI 1.04-1.08), and male gender (HR 1.58, 95% CI 1.26-1.98) (all p < 0.00001) were independently associated with worse MSS and DFS; time interval was not. No effect of time interval between melanoma diagnosis and SNB on 5-year survival or SN positivity rate was found for a time interval of up to 3 months. This information can be used to counsel patients and remove strict time limits from melanoma guidelines. Copyright © 2016 Elsevier Ltd. All rights reserved.
Bailes, Freya; Dean, Roger T; Broughton, Mary C
2015-01-01
For listeners familiar with Western twelve-tone equal-tempered (12-TET) music, a novel microtonal tuning system is expected to present additional processing challenges. We aimed to determine whether this was the case, focusing on the extent to which our perceptions can be considered bottom-up (psychoacoustic and primarily perceptual) and top-down (dependent on familiarity and cognitive processing). We elicited both overt response ratings, and covert event-related potentials (ERPs), so as to compare subjective impressions of sounds with the neurophysiological processing of the acoustic signal. We hypothesised that microtonal intervals are perceived differently from 12-TET intervals, and that the responses of musicians (n = 10) and non-musicians (n = 10) are distinct. Two-note chords were presented comprising 12-TET intervals (consonant and dissonant) or microtonal (quarter tone) intervals, and ERP, subjective roughness ratings, and liking ratings were recorded successively. Musical experience mediated the perception of differences between dissonant and microtone intervals, with non-musicians giving similar ratings for each, and musicians preferring dissonant over the less commonly used microtonal intervals, rating them as less rough. ERP response amplitude was greater for consonant intervals than other intervals. Musical experience interacted with interval type, suggesting that musical expertise facilitates the sensory and perceptual discrimination of microtonal intervals from 12-TET intervals, and an increased ability to categorize such intervals. Non-musicians appear to have perceived microtonal intervals as instances of neighbouring 12-TET intervals.
Dead time corrections for in-beam γ-spectroscopy measurements
NASA Astrophysics Data System (ADS)
Boromiza, M.; Borcea, C.; Negret, A.; Olacel, A.; Suliman, G.
2017-08-01
Relatively high counting rates were registered in a proton inelastic scattering experiment on 16O and 28Si using HPGe detectors, performed at the Tandem facility of IFIN-HH, Bucharest. Consequently, dead time corrections were needed in order to determine the absolute γ-production cross sections. Considering that the real counting rate follows a Poisson distribution, the dead time correction procedure is reformulated in statistical terms. The arrival time interval between incoming events (Δt) obeys an exponential distribution with a single parameter, the mean of the associated Poisson distribution. We use this mathematical connection to calculate and implement the dead time corrections for the counting rates of the mentioned experiment. Also, exploiting an idea introduced by Pommé et al., we describe a consistent method for calculating the dead time correction which completely circumvents the complicated problem of measuring the dead time of a given detection system. Several comparisons are made between the corrections implemented through this method and by using standard (phenomenological) dead time models, and we show how these results were used to correct our experimental cross sections.
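For context, the standard (phenomenological) dead time models the authors compare against can be sketched as follows (textbook non-paralyzable and paralyzable formulas, not the paper's statistical reformulation or Pommé's method; function names are illustrative):

```python
import math

def true_rate_nonparalyzable(measured_rate, dead_time):
    """Non-paralyzable model: each recorded event blinds the system for
    dead_time (tau) seconds, so the true rate n relates to the measured
    rate m by n = m / (1 - m * tau)."""
    return measured_rate / (1.0 - measured_rate * dead_time)

def live_fraction_paralyzable(true_rate, dead_time):
    """Paralyzable model: for a Poisson process the inter-arrival times
    are exponential, so the probability that an interval exceeds tau
    (the event is recorded rather than swallowed) is exp(-n * tau)."""
    return math.exp(-true_rate * dead_time)
```

The exponential inter-arrival distribution used in the second function is the same mathematical connection the abstract exploits: for a Poisson counting process at rate n, P(Δt > τ) = exp(−nτ).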
NASA Astrophysics Data System (ADS)
Chen, Zuojing; Polizzi, Eric
2010-11-01
Effective modeling and numerical spectral-based propagation schemes are proposed for addressing the challenges in time-dependent quantum simulations of systems ranging from atoms, molecules, and nanostructures to emerging nanoelectronic devices. While time-dependent Hamiltonian problems can be formally solved by propagating the solutions along tiny simulation time steps, a direct numerical treatment is often considered too computationally demanding. In this paper, however, we propose to go beyond these limitations by introducing high-performance numerical propagation schemes to compute the solution of the time-ordered evolution operator. In addition to the direct Hamiltonian diagonalizations that can be efficiently performed using the new eigenvalue solver FEAST, we have designed a Gaussian propagation scheme and a basis-transformed propagation scheme (BTPS) which allow the simulation times required for given time intervals to be reduced considerably. It is outlined that BTPS offers the best computational efficiency, opening new perspectives in time-dependent simulations. Finally, these numerical schemes are applied to study the ac response of a (5,5) carbon nanotube within a three-dimensional real-space mesh framework.
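The time-ordered evolution operator being approximated, and the standard single-step midpoint approximation on which spectral propagation schemes build, are (generic textbook forms; the paper's Gaussian and BTPS schemes are refinements not reproduced here):

```latex
U(t, t_0) = \mathcal{T} \exp\!\left( -\frac{i}{\hbar} \int_{t_0}^{t} H(t')\, dt' \right),
\qquad
U(t + \Delta t, t) \approx \exp\!\left( -\frac{i}{\hbar}\, H\!\left(t + \tfrac{\Delta t}{2}\right) \Delta t \right)
```

Each short-time exponential can be evaluated in the eigenbasis of the instantaneous Hamiltonian, which is where efficient diagonalization (e.g. via FEAST) enters the picture.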
The manual control of vehicles undergoing slow transitions in dynamic characteristics
NASA Technical Reports Server (NTRS)
Moriarty, T. E.
1974-01-01
The manual control of a vehicle with slowly time-varying dynamics was studied to develop analytic and computer techniques necessary for the study of time-varying systems. The human operator is considered as he controls a time-varying plant in which the changes are neither abrupt nor so slow that the time variations are unimportant. An experiment in which pilots controlled the longitudinal mode of a simulated time-varying aircraft is described. The vehicle changed from a pure double integrator to a damped second order system, either instantaneously or smoothly over time intervals of 30, 75, or 120 seconds. The regulator task consisted of trying to null the error term resulting from injected random disturbances with bandwidths of 0.8, 1.4, and 2.0 radians per second. Each of the twelve experimental conditions was replicated ten times. It is shown that the pilot's performance in the time-varying task is essentially equivalent to his performance in stationary tasks which correspond to various points in the transition. A rudimentary model for the pilot-vehicle-regulator is presented.
An MIP model to schedule the call center workforce and organize the breaks
NASA Astrophysics Data System (ADS)
Türker, Turgay; Demiriz, Ayhan
2016-06-01
In modern economies, companies place a premium on managing their workforce efficiently, especially in the labor-intensive service sector, since services have become a significant portion of these economies. Tour scheduling is an important tool for minimizing overall workforce costs while satisfying minimum service-level constraints. In this study, we consider the workforce management problem of an inbound call center: satisfying the call demand within short time periods at minimum cost. We propose a mixed-integer programming model to assign workers to daily shifts, to determine their weekly off-days, and to determine the timing of lunch and other daily breaks for each worker. The proposed model has been verified on weekly demand data observed at a specific call center location of a satellite TV operator. The model was run on both 15- and 10-minute demand estimation periods (planning time intervals).
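The core constraint in such a model is that, in every planning period, the agents on duty must cover the forecast demand. A minimal toy sketch of that coverage structure (a brute-force stand-in for the paper's MIP; the shifts, demands, and costs below are made up for illustration):

```python
# Toy shift-coverage problem: choose headcounts per shift so that every
# planning period meets its demand at minimum cost (illustrative only).
from itertools import product

demand = [1, 2, 2, 1]  # minimum agents required in each of 4 periods

# Candidate daily shifts, given as the set of periods each shift covers
shifts = {"early": {0, 1}, "mid": {1, 2}, "late": {2, 3}}
cost = {"early": 100, "mid": 100, "late": 100}

def coverage(counts):
    """Agents on duty in each period, given a shift -> headcount mapping."""
    cov = [0] * len(demand)
    for name, n in counts.items():
        for p in shifts[name]:
            cov[p] += n
    return cov

best = None
# Enumerate headcounts 0..3 per shift; keep the cheapest feasible plan
for ns in product(range(4), repeat=len(shifts)):
    counts = dict(zip(shifts, ns))
    if all(c >= d for c, d in zip(coverage(counts), demand)):
        total = sum(cost[s] * n for s, n in counts.items())
        if best is None or total < best[0]:
            best = (total, counts)

print(best)
```

A real instance would hand the same coverage constraints to a MIP solver rather than enumerating, and would add the off-day and break-timing variables described in the abstract.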
Proper motion and secular variations of Keplerian orbital elements
NASA Astrophysics Data System (ADS)
Butkevich, Alexey G.
2018-05-01
High-precision observations require accurate modelling of secular changes in the orbital elements in order to extrapolate measurements over long time intervals, and to detect deviation from pure Keplerian motion caused, for example, by other bodies or relativistic effects. We consider the evolution of the Keplerian elements resulting from the gradual change of the apparent orbit orientation due to proper motion. We present rigorous formulae for the transformation of the orbit inclination, longitude of the ascending node and argument of the pericenter from one epoch to another, assuming uniform stellar motion and taking radial velocity into account. An approximate treatment, accurate to the second-order terms in time, is also given. The proper motion effects may be significant for long-period transiting planets. These theoretical results are applicable to the modelling of planetary transits and precise Doppler measurements as well as analysis of pulsar and eclipsing binary timing observations.
NASA Astrophysics Data System (ADS)
Anikushina, T. A.; Naumov, A. V.
2013-12-01
This article demonstrates the principal advantages of the technique for analyzing the long-term spectral evolution of single molecules (SM) in the study of the microscopic nature of dynamic processes in low-temperature polymers. We performed a detailed analysis of the spectral trail of a single tetra-tert-butylterrylene (TBT) molecule in an amorphous polyisobutylene matrix, measured over 5 hours at T = 7 K. It has been shown that the slow temporal dynamics is in qualitative agreement with the standard model of two-level systems and the stochastic sudden-jump model. At the same time, the distributions of the first four moments (cumulants) of the spectra of the selected SM, measured at different time points, were found to be inconsistent with the predictions of the standard theory. This was considered evidence that, over the given time interval, the system is not ergodic.
Baeten; Bruggeman; Paepen; Carchon
2000-03-01
The non-destructive quantification of transuranic elements in nuclear waste management or in safeguards verifications is commonly performed by passive neutron assay techniques. To minimise the number of unknown sample-dependent parameters, Neutron Multiplicity Counting (NMC) is applied. We developed a new NMC-technique, called Time Interval Correlation Spectroscopy (TICS), which is based on the measurement of Rossi-alpha time interval distributions. Compared to other NMC-techniques, TICS offers several advantages.
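A Rossi-alpha distribution of the kind TICS measures is, in essence, a histogram of the delays between each detected neutron and all subsequent detections within a correlation window. A minimal sketch of that histogramming step (illustrative; not the TICS implementation, and the example timestamps are made up):

```python
# Build a Rossi-alpha time-interval histogram from detector timestamps.
import bisect

def rossi_alpha(timestamps, window, nbins):
    """Histogram of t_j - t_i over all pairs with 0 < t_j - t_i <= window."""
    ts = sorted(timestamps)
    width = window / nbins
    hist = [0] * nbins
    for i, t0 in enumerate(ts):
        # only later events inside the correlation window contribute
        j = bisect.bisect_right(ts, t0 + window)
        for t1 in ts[i + 1 : j]:
            dt = t1 - t0
            hist[int(dt / width) if dt < window else nbins - 1] += 1
    return hist

# Example: three events at 0, 1, 3 time units, 4-unit window, 4 bins
print(rossi_alpha([0.0, 1.0, 3.0], window=4.0, nbins=4))
```

In a real correlated (e.g. fission-chain) source, the excess of short intervals above the flat accidental background decays with the Rossi-alpha constant, which is what the multiplicity analysis exploits.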
Identifying radiological needs of referring clinicians.
Zhang, Li; Hefke, Antje; Figiel, Jens; Schwarz, Ulrike; Rominger, Marga; Klose, Klaus Jochen
2013-06-01
To provide prospective information about quality- and satisfaction-related product features in radiology, a customer-centered approach for acquiring clinicians' requirements and their prioritizations is essential. We introduced the Kano model for the first time in radiology to obtain such information. A Kano questionnaire, consisting of pairs of questions regarding 13 clinician requirements related to computed tomography (CT), magnetic resonance imaging (MRI) access, and report turnaround time (RTT), was developed and administered. Each requirement was assigned a Kano category, and its satisfaction and dissatisfaction coefficients were calculated and presented in a Kano diagram. The data were stratified by clinic and by staff versus resident clinicians. The time interval between the completion of an examination and a clinician's first attempt to access the report was also evaluated. Consultation for modality selection and scheduling, access to CT within 24 h, and RTT within 8 to 24 h were considered must-be requirements. Access to CT within 4 h and within 8 h, access to MRI within 8 h and within 24 h, and RTT within 4 h were one-dimensional requirements. The extension of operation time for CT or MRI, as well as MRI access within 4 h, was considered attractive. Eight out of nine clinics considered RTT within 8 h a must-be requirement. There were differences in responses both among clinics and between staff and resident clinicians. Access attempts to reports by clinicians in the first 4 h after examination completion accounted for 65% of CTs and 49% of MRIs.
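The satisfaction and dissatisfaction coefficients mentioned above are conventionally computed from the counts of Kano category assignments across respondents (the CS/DS formulas of the standard Kano literature; the response counts below are invented for illustration):

```python
# Kano satisfaction (CS) and dissatisfaction (DS) coefficients from the
# number of respondents classifying a requirement into each category.
def kano_coefficients(attractive, one_dimensional, must_be, indifferent):
    total = attractive + one_dimensional + must_be + indifferent
    cs = (attractive + one_dimensional) / total    # gain if fulfilled
    ds = -(one_dimensional + must_be) / total      # loss if not fulfilled
    return cs, ds

# Hypothetical responses for a requirement such as "CT access within 4 h"
# across 40 clinicians
cs, ds = kano_coefficients(attractive=10, one_dimensional=20,
                           must_be=5, indifferent=5)
print(cs, ds)
```

Plotting CS against |DS| for each requirement yields the Kano diagram the abstract refers to: must-be requirements cluster at high |DS|/low CS, attractive ones at high CS/low |DS|.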
Military Applicability of Interval Training for Health and Performance.
Gibala, Martin J; Gagnon, Patrick J; Nindl, Bradley C
2015-11-01
Militaries from around the globe have predominantly used endurance training as their primary mode of aerobic physical conditioning, with historical emphasis placed on the long distance run. In contrast to this traditional exercise approach to training, interval training is characterized by brief, intermittent bouts of intense exercise, separated by periods of lower intensity exercise or rest for recovery. Although hardly a novel concept, research over the past decade has shed new light on the potency of interval training to elicit physiological adaptations in a time-efficient manner. This work has largely focused on the benefits of low-volume interval training, which involves a relatively small total amount of exercise, as compared with the traditional high-volume approach to training historically favored by militaries. Studies that have directly compared interval and moderate-intensity continuous training have shown similar improvements in cardiorespiratory fitness and the capacity for aerobic energy metabolism, despite large differences in total exercise and training time commitment. Interval training can also be applied in a calisthenics manner to improve cardiorespiratory fitness and strength, and this approach could easily be incorporated into a military conditioning environment. Although interval training can elicit physiological changes in men and women, the potential for sex-specific adaptations in the adaptive response to interval training warrants further investigation. Additional work is needed to clarify adaptations occurring over the longer term; however, interval training deserves consideration from a military applicability standpoint as a time-efficient training strategy to enhance soldier health and performance. There is value for military leaders in identifying strategies that reduce the time required for exercise, but nonetheless provide an effective training stimulus.
Płotek, Włodzimierz; Łyskawa, Wojciech; Kluzik, Anna; Grześkowiak, Małgorzata; Podlewski, Roland; Żaba, Zbigniew; Drobnik, Leon
2014-02-03
Human cognitive functioning can be assessed using different methods of testing. Age, level of education, and gender may influence the results of cognitive tests. The well-known Trail Making Test (TMT), which is often used to measure frontal lobe function, and the experimental test of Interval Timing (IT) were compared. The methods used in IT included reproduction of auditory and visual stimuli, with the subsequent production of time intervals of 1-, 2-, 5-, and 7-second durations with no pattern. Subjects included 64 healthy adult volunteers aged 18-63 (33 women, 31 men). Comparisons were made based on age, education, and gender. TMT was performed quickly and was influenced by age, education, and gender. All reproduced visual intervals and produced intervals were shortened, and the reproduction of auditory stimuli was more complex. Age, education, and gender have a more pronounced impact on the cognitive test than on the interval timing test. The reproduction of short auditory stimuli was more accurate in comparison to the other modalities used in the IT test. Interval timing, when compared to the TMT, offers an interesting possibility for testing. Further studies are necessary to confirm the initial observation.
Cox model with interval-censored covariate in cohort studies.
Ahn, Soohyun; Lim, Johan; Paik, Myunghee Cho; Sacco, Ralph L; Elkind, Mitchell S
2018-05-18
In cohort studies the outcome is often time to a particular event, and subjects are followed at regular intervals. Periodic visits may also monitor a secondary irreversible event influencing the event of primary interest, and a significant proportion of subjects develop the secondary event over the period of follow-up. The status of the secondary event serves as a time-varying covariate, but is recorded only at the times of the scheduled visits, generating incomplete time-varying covariates. While information on a typical time-varying covariate is missing for the entire follow-up period except at the visit times, the status of the secondary event is unavailable only between the visits where the status has changed, and is thus interval-censored. One may view the interval-censored covariate of the secondary event status as a missing time-varying covariate, yet the missingness is partial, since partial information is provided throughout the follow-up period. The current practice of using the latest observed status produces biased estimators, and existing missing-covariate techniques cannot accommodate this special feature of missingness due to interval censoring. To handle interval-censored covariates in the Cox proportional hazards model, we propose an available-data estimator and a doubly robust-type estimator, as well as the maximum likelihood estimator via the EM algorithm, and present their asymptotic properties. We also present practical approaches that remain valid. We demonstrate the proposed methods using our motivating example from the Northern Manhattan Study. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
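To make the censoring structure concrete: if the secondary event is first seen positive at a visit, the true change time lies somewhere in the interval since the previous visit. The "latest observed status" practice the abstract criticizes pins the change to the first positive visit; a common single-imputation alternative uses the interval midpoint. A toy sketch of both conventions (illustrative only; these are not the paper's proposed estimators):

```python
# Two naive ways to impute the change time of an interval-censored
# time-varying covariate from periodic visit records.
def locf_change_time(visit_times, status):
    """Current practice: place the change at the first positive visit."""
    for t, s in zip(visit_times, status):
        if s:
            return t
    return None  # change never observed during follow-up

def midpoint_change_time(visit_times, status):
    """Place the change midway through the censoring interval."""
    for i, s in enumerate(status):
        if s:
            if i == 0:
                return visit_times[0]
            return (visit_times[i - 1] + visit_times[i]) / 2
    return None

# Visits at 0, 12, 24 months; event first observed positive at 24 months,
# so the true change time lies somewhere in (12, 24].
visits, status = [0, 12, 24], [False, False, True]
print(locf_change_time(visits, status), midpoint_change_time(visits, status))
```

Either imputed time would then enter a Cox model as the start of the covariate's "on" period; the bias of such single imputations is what motivates the available-data and doubly robust estimators of the paper.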
Timing During Interruptions in Timing
ERIC Educational Resources Information Center
Fortin, Claudette; Bedard, Marie-Claude; Champagne, Julie
2005-01-01
Duration and location of breaks in time interval production were manipulated in various conditions of stimulus presentation (Experiments 1-4). Produced intervals shortened and then stabilized as break duration lengthened, suggesting that participants used the break as a preparatory period to restart timing as quickly as possible at the end of the…
Tang, Zhigang; Wang, Guifang; Xu, Dongqun; Han, Keqin; Li, Yunpu; Zhang, Aijun; Dong, Xiaoyan
2004-09-01
Measuring times and measuring intervals for evaluating the formaldehyde-removal performance of different types of air cleaners were provided. The natural decay measurement and the formaldehyde removal measurement were conducted in 1.5 m3 and 30 m3 test chambers. The natural decay rate was determined by acquiring formaldehyde concentration data at 15-minute intervals for 2.5 hours. The measured decay rate was determined by acquiring formaldehyde concentration data at 5-minute intervals for 1.2 hours. When the airflow of the air cleaner is smaller than 30 m3/h, or when measuring the performance of an air cleaning product without airflow, the 1.5 m3 test chamber can be used; in that case, both the natural decay rate and the measured decay rate are determined by acquiring formaldehyde concentration data at 8-minute intervals for 64 minutes. Different measuring times and measuring intervals are thus required to evaluate the formaldehyde-removal performance of different types of air cleaner.
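Concentration readings taken at fixed intervals, as described above, are typically turned into a decay rate by a log-linear least-squares fit, since first-order decay gives ln C(t) = ln C0 - k t. A minimal sketch with synthetic data (the sampling schedule matches the abstract's 15-minute/2.5-hour natural-decay protocol, but the concentrations are simulated, not measured):

```python
# Estimate a first-order decay constant from periodic chamber readings.
import math

def decay_rate(times, concentrations):
    """Least-squares slope of ln(C) versus t; returns k (per time unit)."""
    logs = [math.log(c) for c in concentrations]
    n = len(times)
    tbar = sum(times) / n
    ybar = sum(logs) / n
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(times, logs))
             / sum((t - tbar) ** 2 for t in times))
    return -slope  # since ln C = ln C0 - k t

# Readings every 15 min (0.25 h) over 2.5 h for a species decaying
# with k = 0.4 per hour, starting from 1.0 (arbitrary units)
times_h = [i * 0.25 for i in range(11)]
conc = [1.0 * math.exp(-0.4 * t) for t in times_h]
print(decay_rate(times_h, conc))
```

The cleaner's effective removal rate is then the measured decay rate minus the natural decay rate obtained this way.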
Sad facial cues inhibit temporal attention: evidence from an event-related potential study.
Kong, Xianxian; Chen, Xiaoqiang; Tan, Bo; Zhao, Dandan; Jin, Zhenlan; Li, Ling
2013-06-19
We examined the influence of different emotional cues (happy or sad) on temporal attention (short or long interval) using behavioral as well as event-related potential recordings during a Stroop task. Emotional stimuli cued short and long time intervals, inducing 'sad-short', 'sad-long', 'happy-short', and 'happy-long' conditions. Following the intervals, participants performed a numeric Stroop task. Behavioral results showed the temporal attention effects in the sad-long, happy-long, and happy-short conditions, in which valid cues quickened the reaction times, but not in the sad-short condition. N2 event-related potential components showed sad cues to have decreased activity for short intervals compared with long intervals, whereas happy cues did not. Taken together, these findings provide evidence for different modulation of sad and happy facial cues on temporal attention. Furthermore, sad cues inhibit temporal attention, resulting in longer reaction time and decreased neural activity in the short interval by diverting more attentional resources.
ELECTRICAL LOAD ANTICIPATOR AND RECORDER
Russell, J.B.; Thomas, R.J.
1961-07-25
A system is described in which an indication of the prevailing energy consumption in an electrical power metering system and a projected power demand for one demand interval is provided at selected increments of time within the demand interval. Each watthour meter in the system is provided with an impulse generator that generates two impulses for each revolution of the meter disc. The total pulses received from all the meters are continuously totaled and are fed to a plurality of parallel-connected gated counters. Each counter has its gate opened at a different sub-time interval during the demand interval. A multiplier is connected to each of the gated counters except the last one, and each multiplier is provided with a different multiplier constant so as to provide an estimate of the power to be drawn over the entire demand interval at the end of each of the different sub-time intervals. Means are provided for recording the outputs from the different circuits in synchronism with the actuation of each gate circuit.
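The multiplier constants in this scheme simply scale the pulses counted up to each sub-interval so that every reading projects demand over the full interval, i.e. each constant is the ratio of the demand interval to the elapsed sub-time. A rough sketch of that arithmetic (the interval lengths and pulse counts below are hypothetical):

```python
# Project full-interval demand from cumulative pulse counts taken at the
# end of each sub-time interval, as in the patent's multiplier scheme.
def projections(cumulative_pulses, sub_times, demand_interval):
    """cumulative_pulses[i] = total pulses seen by time sub_times[i]."""
    return [count * (demand_interval / t)           # multiplier constant
            for count, t in zip(cumulative_pulses, sub_times)]

# A 15-minute demand interval sampled every 5 minutes; a steady load of
# 12 pulses per 5 minutes projects to the same full-interval total at
# every checkpoint
print(projections([12, 24, 36], sub_times=[5, 10, 15], demand_interval=15))
```

The last counter needs no multiplier because its gate closes at the end of the demand interval, where the count already equals the projection.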
An analysis of first-time blood donors return behaviour using regression models.
Kheiri, S; Alibeigi, Z
2015-08-01
Blood products have a vital role in saving many patients' lives. The aim of this study was to analyse blood donor return behaviour. Using a cross-sectional follow-up design of 5-year duration, 864 first-time donors were selected using systematic sampling. The donors' behaviour, via three response variables (return to donation, frequency of return to donation, and the time interval between donations), was analysed using logistic regression, negative binomial regression, and Cox's shared frailty model for recurrent events, respectively. The successful return-to-donation rate was 49·1% and the deferral rate was 13·3%. There was a significant inverse relationship between the frequency of return to donation and the time interval between donations. Sex, body weight and job had an effect on return to donation; weight and frequency of donation during the first year had a direct effect on the total frequency of donations. Age, weight and job had a significant effect on the time intervals between donations. Aging decreases the chances of return to donation and increases the time interval between donations. Body weight affects all three response variables: the higher the weight, the greater the chances of return to donation and the shorter the time interval between donations. There is a positive correlation between the frequency of donations in the first year and the total number of returns to donation. Also, the shorter the time interval between donations, the higher the frequency of donations. © 2015 British Blood Transfusion Society.
A model of interval timing by neural integration
Simen, Patrick; Balci, Fuat; deSouza, Laura; Cohen, Jonathan D.; Holmes, Philip
2011-01-01
We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes; that correlations among them can be largely cancelled by balancing excitation and inhibition; that neural populations can act as integrators; and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule’s predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior. PMID:21697374
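The model's central mechanism, a noisy firing-rate signal rising linearly toward a threshold, with the drift encoding the target duration, can be sketched as a first-passage simulation. The parameter values below are illustrative, not fitted values from the paper:

```python
# Drift-to-threshold sketch of interval timing: a noisy accumulator rises
# linearly on average and "responds" when it crosses the threshold.
import math, random

def timed_response(target, threshold=1.0, noise=0.1, dt=0.001, rng=random):
    drift = threshold / target      # encode the target interval in the drift
    x, t = 0.0, 0.0
    while x < threshold:
        # Euler step of drift plus Gaussian noise (diffusion term)
        x += drift * dt + noise * math.sqrt(dt) * rng.gauss(0, 1)
        t += dt
    return t

random.seed(1)
rts = [timed_response(2.0) for _ in range(200)]
mean = sum(rts) / len(rts)
print(mean)  # mean crossing time near the 2.0 s target
```

Because rescaling the target duration rescales the drift, the crossing-time distribution rescales with it, reproducing the scale-invariant (constant coefficient of variation) response times the abstract describes; the first-passage distribution is also right-skewed, matching the model's skewness prediction.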
The time course of corticospinal excitability during a simple reaction time task.
Kennefick, Michael; Maslovat, Dana; Carlsen, Anthony N
2014-01-01
The production of movement in a simple reaction time task can be separated into two time periods: the foreperiod, which is thought to include preparatory processes, and the reaction time interval, which includes initiation processes. To better understand these processes, transcranial magnetic stimulation has been used to probe corticospinal excitability at various time points during response preparation and initiation. Previous research has shown that excitability decreases prior to the "go" stimulus and increases following the "go"; however, these two time frames have been examined independently. The purpose of this study was to measure changes in corticospinal excitability during both the foreperiod and the reaction time interval in a single experiment, relative to a resting baseline level. Participants performed a button-press movement in a simple reaction time task, and excitability was measured during rest, the foreperiod, and the reaction time interval. Results indicated that during the foreperiod, excitability levels quickly increased from baseline with the presentation of the warning signal, followed by a period of stable excitability leading up to the "go" signal, and finally a rapid increase in excitability during the reaction time interval. This excitability time course is consistent with neural activation models that describe movement preparation and response initiation.
Ti, Xiaonan; Tani, Naoki; Isobe, Minoru; Kai, Hidenori
2006-05-01
The TIME (Time Interval Measuring Enzyme) ATPase measures time intervals in accordance with diapause development, which indispensably requires cold for resumption of embryonic development in the silkworm (Bombyx mori). The PIN (Peptidyl Inhibitory Needle) peptide regulates the time measurement function of TIME. In the present study we investigated the interaction between TIME and PIN in order to address the mechanism of diapause development. When TIME was isolated from eggs later than 12 days after oviposition, transient bursts of ATPase activity occurred 18h after isolation of TIME, and the younger the eggs and pupal ovaries from which TIME was isolated, the earlier the bursts of ATPase activity appeared. However, no interval-timer activation of ATPase occurred in ovaries earlier than 6 days after pupation. Similar patterns of ATPase activity occurred in test tubes after mixing TIME with PIN. The shorter the time PIN was mixed with TIME, the earlier the ATPase activity appeared. The timer may be built into the protein conformation of TIME, and PIN (which is present in ovaries beginning 6 days after pupation) appears able to alter this timer conformation through pupal stages to laid eggs. We discuss the possible mechanism of diapause development in relation to the timer mechanism of TIME.