Bootstrap Prediction Intervals in Non-Parametric Regression with Applications to Anomaly Detection
NASA Technical Reports Server (NTRS)
Kumar, Sricharan; Srivastava, Ashok N.
2012-01-01
Prediction intervals provide a measure of the probable interval in which the outputs of a regression model can be expected to occur. Subsequently, these prediction intervals can be used to determine if the observed output is anomalous or not, conditioned on the input. In this paper, a procedure for determining prediction intervals for outputs of nonparametric regression models using bootstrap methods is proposed. Bootstrap methods allow for a non-parametric approach to computing prediction intervals with no specific assumptions about the sampling distribution of the noise or the data. The asymptotic fidelity of the proposed prediction intervals is proved theoretically. The validity of the bootstrap-based prediction intervals is then illustrated via simulations. Finally, the bootstrap prediction intervals are applied to the problem of anomaly detection on aviation data.
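The residual-bootstrap idea this abstract describes can be sketched as follows. The Nadaraya-Watson smoother used here as the nonparametric regressor, and all function names and parameter values, are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def kernel_smooth(x_train, y_train, x_query, bandwidth=0.5):
    """Nadaraya-Watson smoother: a stand-in nonparametric regressor."""
    w = np.exp(-0.5 * ((x_query[:, None] - x_train[None, :]) / bandwidth) ** 2)
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

def bootstrap_prediction_interval(x, y, x_new, n_boot=500, alpha=0.05, seed=0):
    """Residual bootstrap: resample residuals, refit, predict, add noise draw."""
    rng = np.random.default_rng(seed)
    fit = kernel_smooth(x, y, x)
    resid = y - fit
    preds = np.empty((n_boot, len(x_new)))
    for b in range(n_boot):
        # Rebuild a pseudo-sample from the fit plus resampled residuals.
        y_star = fit + rng.choice(resid, size=len(y), replace=True)
        # Predict at the new points and add an individual-noise draw,
        # so the quantiles form a prediction (not confidence) interval.
        preds[b] = kernel_smooth(x, y_star, x_new) + rng.choice(resid, size=len(x_new))
    lo = np.quantile(preds, alpha / 2, axis=0)
    hi = np.quantile(preds, 1 - alpha / 2, axis=0)
    return lo, hi
```

Because only empirical residuals are resampled, no distributional assumption on the noise is needed, which is the point the abstract emphasizes.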
Predicting birth weight with conditionally linear transformation models.
Möst, Lisa; Schmid, Matthias; Faschingbauer, Florian; Hothorn, Torsten
2016-12-01
Low and high birth weight (BW) are important risk factors for neonatal morbidity and mortality. Gynecologists must therefore accurately predict BW before delivery. Most prediction formulas for BW are based on prenatal ultrasound measurements carried out within one week prior to birth. Although successfully used in clinical practice, these formulas focus on point predictions of BW but do not systematically quantify uncertainty of the predictions, i.e. they result in estimates of the conditional mean of BW but do not deliver prediction intervals. To overcome this problem, we introduce conditionally linear transformation models (CLTMs) to predict BW. Instead of focusing only on the conditional mean, CLTMs model the whole conditional distribution function of BW given prenatal ultrasound parameters. Consequently, the CLTM approach delivers both point predictions of BW and fetus-specific prediction intervals. Prediction intervals constitute an easy-to-interpret measure of prediction accuracy and allow identification of fetuses subject to high prediction uncertainty. Using a data set of 8712 deliveries at the Perinatal Centre at the University Clinic Erlangen (Germany), we analyzed variants of CLTMs and compared them to standard linear regression estimation techniques used in the past and to quantile regression approaches. The best-performing CLTM variant was competitive with quantile regression and linear regression approaches in terms of conditional coverage and average length of the prediction intervals. We propose that CLTMs be used because they are able to account for possible heteroscedasticity, kurtosis, and skewness of the distribution of BWs. © The Author(s) 2014.
Reliable prediction intervals with regression neural networks.
Papadopoulos, Harris; Haralambous, Haris
2011-10-01
This paper proposes an extension to conventional regression neural networks (NNs) for replacing the point predictions they produce with prediction intervals that satisfy a required level of confidence. Our approach follows a novel machine learning framework, called Conformal Prediction (CP), for assigning reliable confidence measures to predictions without assuming anything more than that the data are independent and identically distributed (i.i.d.). We evaluate the proposed method on four benchmark datasets and on the problem of predicting Total Electron Content (TEC), which is an important parameter in trans-ionospheric links; for the latter we use a dataset of more than 60000 TEC measurements collected over a period of 11 years. Our experimental results show that the prediction intervals produced by our method are both well calibrated and tight enough to be useful in practice. Copyright © 2011 Elsevier Ltd. All rights reserved.
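A minimal sketch of the split (inductive) conformal idea underlying such methods, with an ordinary least-squares line standing in for the paper's neural network; the choice of regressor and all names are assumptions:

```python
import numpy as np

def split_conformal_interval(x_train, y_train, x_cal, y_cal, x_new, alpha=0.1):
    """Split conformal prediction intervals at confidence 1 - alpha.

    The conformal guarantee only requires i.i.d. data, not a particular
    underlying model; an OLS line is used here purely for illustration.
    """
    # 1. Fit the underlying model on the proper training set.
    A = np.vstack([x_train, np.ones_like(x_train)]).T
    coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
    predict = lambda x: coef[0] * x + coef[1]
    # 2. Nonconformity scores (absolute residuals) on the calibration set.
    scores = np.abs(y_cal - predict(x_cal))
    n = len(scores)
    # 3. Finite-sample-corrected quantile of the scores.
    q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
    pred = predict(x_new)
    return pred - q, pred + q
```

On exchangeable data the resulting intervals are guaranteed to cover at rate at least 1 - alpha, which is the calibration property the abstract reports empirically.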
Prediction Interval Development for Wind-Tunnel Balance Check-Loading
NASA Technical Reports Server (NTRS)
Landman, Drew; Toro, Kenneth G.; Commo, Sean A.; Lynn, Keith C.
2014-01-01
Results from the Facility Analysis Verification and Operational Reliability project revealed a critical capability gap in ground-based aeronautics research applications. Without a standardized process for check-loading the wind-tunnel balance or the model system, the quality of the aerodynamic force data collected varied significantly between facilities. A prediction interval is required in order to confirm a check-loading. The prediction interval provides an expected upper and lower bound on the balance load prediction at a given confidence level. A method has been developed which accounts for sources of variability due to calibration and check-load application. The method of calculating the prediction interval and a case study demonstrating its use are provided. Validation of the method is demonstrated for the case study based on the probability of capture of confirmation points.
It's time to fear! Interval timing in odor fear conditioning in rats
Shionoya, Kiseko; Hegoburu, Chloé; Brown, Bruce L.; Sullivan, Regina M.; Doyère, Valérie; Mouly, Anne-Marie
2013-01-01
Time perception is crucial to goal attainment in humans and other animals, and interval timing also guides fundamental animal behaviors. Accumulating evidence has made it clear that in associative learning, temporal relations between events are encoded, and a few studies suggest this temporal learning occurs very rapidly. Most of these studies, however, have used methodologies that do not permit investigating the emergence of this temporal learning. In the present study we monitored respiration, ultrasonic vocalization (USV) and freezing behavior in rats in order to perform a fine-grain analysis of fear responses during odor fear conditioning. In this paradigm an initially neutral odor (the conditioned stimulus, CS) predicted the arrival of an aversive unconditioned stimulus (US, footshock) at a fixed 20-s time interval. We first investigated the development of a temporal pattern of responding related to CS-US interval duration. The data showed that during acquisition with odor-shock pairings, a temporal response pattern of respiration rate was observed. Changing the CS-US interval duration from 20 s to 30 s resulted in a shift of the temporal response pattern appropriate to the new duration, thus demonstrating that the pattern reflected the learning of the CS-US interval. A temporal pattern was also observed during a retention test 24 h later for both respiration and freezing measures, suggesting that the animals had stored the interval duration in long-term memory. We then investigated the role of intra-amygdalar dopaminergic transmission in interval timing. For this purpose, the D1 dopaminergic receptor antagonist SCH23390 was infused in the basolateral amygdala before conditioning. This resulted in an alteration of timing behavior, as reflected in differential temporal patterns between groups observed in a 24 h retention test off drug. The present data suggest that D1 receptor dopaminergic transmission within the amygdala is involved in temporal processing.
NASA Astrophysics Data System (ADS)
Bogachev, Mikhail I.; Kireenkov, Igor S.; Nifontov, Eugene M.; Bunde, Armin
2009-06-01
We study the statistics of return intervals between large heartbeat intervals (above a certain threshold Q) in 24 h records obtained from healthy subjects. We find that both the linear and the nonlinear long-term memory inherent in the heartbeat intervals lead to power-laws in the probability density function PQ(r) of the return intervals. As a consequence, the probability WQ(t; Δt) that at least one large heartbeat interval will occur within the next Δt heartbeat intervals, with an increasing elapsed number of intervals t after the last large heartbeat interval, follows a power-law. Based on these results, we suggest a method of obtaining a priori information about the occurrence of the next large heartbeat interval, and thus to predict it. We show explicitly that the proposed method, which exploits long-term memory, is superior to the conventional precursory pattern recognition technique, which focuses solely on short-term memory. We believe that our results can be straightforwardly extended to obtain more reliable predictions in other physiological signals like blood pressure, as well as in other complex records exhibiting multifractal behaviour, e.g. turbulent flow, precipitation, river flows and network traffic.
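The basic quantity studied here, the return interval between successive exceedances of a threshold Q, can be computed directly from a record; this sketch assumes a plain 1-D sequence and is not the authors' code:

```python
import numpy as np

def return_intervals(series, q):
    """Number of steps between successive values exceeding threshold q.

    The empirical distribution of these intervals is what the study fits
    with a power-law probability density PQ(r).
    """
    idx = np.flatnonzero(np.asarray(series) > q)  # positions of exceedances
    return np.diff(idx)                           # gaps between them
```

For example, `return_intervals([1, 5, 2, 6, 1, 1, 7], q=4)` gives the gaps between the large values at positions 1, 3 and 6.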
NASA Astrophysics Data System (ADS)
Kumar, Girish; Jain, Vipul; Gandhi, O. P.
2018-03-01
Maintenance helps to extend equipment life by improving its condition and avoiding catastrophic failures. An appropriate model or mechanism is thus needed to quantify system availability vis-a-vis a given maintenance strategy, which will assist in decision-making for optimal utilization of maintenance resources. This paper deals with semi-Markov process (SMP) modeling for steady-state availability analysis of mechanical systems that follow condition-based maintenance (CBM), and with evaluation of the optimal condition-monitoring interval. The developed SMP model is solved using a two-stage analytical approach for steady-state availability analysis of the system. The CBM interval is then chosen to maximize system availability using a genetic algorithm approach. The main contribution of the paper is a predictive tool for system availability that will help in deciding the optimum CBM policy. The proposed methodology is demonstrated for a centrifugal pump.
CALCULATION OF NONLINEAR CONFIDENCE AND PREDICTION INTERVALS FOR GROUND-WATER FLOW MODELS.
Cooley, Richard L.; Vecchia, Aldo V.
1987-01-01
A method is derived to efficiently compute nonlinear confidence and prediction intervals on any function of parameters derived as output from a mathematical model of a physical system. The method is applied to the problem of obtaining confidence and prediction intervals for manually-calibrated ground-water flow models. To obtain confidence and prediction intervals resulting from uncertainties in parameters, the calibrated model and information on extreme ranges and ordering of the model parameters within one or more independent groups are required. If random errors in the dependent variable are present in addition to uncertainties in parameters, then calculation of prediction intervals also requires information on the extreme range of error expected. A simple Monte Carlo method is used to compute the quantiles necessary to establish probability levels for the confidence and prediction intervals. Application of the method to a hypothetical example showed that inclusion of random errors in the dependent variable in addition to uncertainties in parameters can considerably widen the prediction intervals.
Machine learning approaches for estimation of prediction interval for the model output.
Shrestha, Durga L; Solomatine, Dimitri P
2006-03-01
A novel method for estimating prediction uncertainty using machine learning techniques is presented. Uncertainty is expressed in the form of the two quantiles (constituting the prediction interval) of the underlying distribution of prediction errors. The idea is to partition the input space into different zones or clusters having similar model errors using fuzzy c-means clustering. The prediction interval is constructed for each cluster on the basis of the empirical distribution of the errors associated with all instances belonging to the cluster under consideration, and is propagated from each cluster to the examples according to their membership grades in each cluster. Then a regression model is built for in-sample data using the computed prediction limits as targets, and finally this model is applied to estimate the prediction intervals (limits) for out-of-sample data. The method was tested on artificial and real hydrologic data sets using various machine learning techniques. Preliminary results show that the method is superior to other methods for estimating prediction intervals. A new method for evaluating the performance of prediction interval estimation is proposed as well.
Construction of prediction intervals for Palmer Drought Severity Index using bootstrap
NASA Astrophysics Data System (ADS)
Beyaztas, Ufuk; Bickici Arikan, Bugrayhan; Beyaztas, Beste Hamiye; Kahya, Ercan
2018-04-01
In this study, we propose an approach based on the residual-based bootstrap method to obtain valid prediction intervals using monthly, short-term (three-month) and mid-term (six-month) drought observations. The effects of the North Atlantic and Arctic Oscillation indexes on the constructed prediction intervals are also examined. Performance of the proposed approach is evaluated for the Palmer Drought Severity Index (PDSI) obtained from the Konya closed basin located in Central Anatolia, Turkey. The finite-sample properties of the proposed method are further illustrated by an extensive simulation study. Our results reveal that the proposed approach is capable of producing valid prediction intervals for future PDSI values.
Ongoing behavior predicts perceptual report of interval duration
Gouvêa, Thiago S.; Monteiro, Tiago; Soares, Sofia; Atallah, Bassam V.; Paton, Joseph J.
2014-01-01
The ability to estimate the passage of time is essential for adaptive behavior in complex environments. Yet, it is not known how the brain encodes time over the durations necessary to explain animal behavior. Under temporally structured reinforcement schedules, animals tend to develop temporally structured behavior, and interval timing has been suggested to be accomplished by learning sequences of behavioral states. If this is true, trial-to-trial fluctuations in behavioral sequences should be predictive of fluctuations in time estimation. We trained rodents in a duration categorization task while continuously monitoring their behavior with a high-speed camera. Animals developed highly reproducible behavioral sequences during the interval being timed. Moreover, those sequences were often predictive of the perceptual report from early in the trial, providing support to the idea that animals may use learned behavioral patterns to estimate the duration of time intervals. To better resolve the issue, we propose that continuous and simultaneous behavioral and neural monitoring will enable identification of neural activity related to time perception that is not explained by ongoing behavior. PMID:24672473
Hourly Wind Speed Interval Prediction in Arid Regions
NASA Astrophysics Data System (ADS)
Chaouch, M.; Ouarda, T.
2013-12-01
In this context, probabilistic forecasts might be more relevant than point forecasts for the planner to build scenarios. In this paper, we are interested in estimating predictive intervals of the hourly wind speed measured in a few cities in the United Arab Emirates (UAE). More precisely, given a wind speed time series, our target is to forecast the wind speed at any specific hour during the day and, in addition, to provide an interval with a given coverage probability p. To construct such an interval we need to estimate the lower band (resp. upper band), which corresponds to the (1-p)/2-th (resp. (1+p)/2-th) conditional quantile. In this paper, a kernel-smoothed estimator of the conditional quantiles is introduced. The proposed non-parametric approach has many advantages: it is flexible because it does not require a specification of the model (such as a normal distribution or a linear relation). Here, we use a covariate that is correlated with the wind speed. In practice, many possible choices of the covariate are available: in addition to its own historical data, wind speed is highly correlated with temperature, humidity and wind direction. In this paper a comparison, in terms of Mean Absolute Prediction Error and Interquartile Range, between these choices is provided to show which covariates are more suitable for forecasting wind speed.
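A kernel-weighted conditional quantile of the kind described can be sketched as follows; the Gaussian kernel, the weighted-quantile construction, and all names are illustrative assumptions, and the authors' estimator may differ in detail:

```python
import numpy as np

def kernel_conditional_quantile(x, y, x0, tau, bandwidth=1.0):
    """tau-th conditional quantile of y given x = x0: weight each
    observation by a Gaussian kernel in the covariate, then take the
    weighted empirical quantile."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)
    order = np.argsort(y)
    yw, ww = y[order], w[order]
    cdf = np.cumsum(ww) / ww.sum()          # weighted empirical CDF
    return yw[np.searchsorted(cdf, tau)]

def kernel_prediction_band(x, y, x0, p=0.9, bandwidth=1.0):
    """Coverage-p band from the (1-p)/2 and (1+p)/2 conditional quantiles,
    as in the abstract above."""
    lo = kernel_conditional_quantile(x, y, x0, (1 - p) / 2, bandwidth)
    hi = kernel_conditional_quantile(x, y, x0, (1 + p) / 2, bandwidth)
    return lo, hi
```

No parametric model links the covariate to wind speed; the band adapts to whatever conditional distribution the data exhibit, which is the flexibility the abstract highlights.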
Forensic use of the Greulich and Pyle atlas: prediction intervals and relevance.
Chaumoitre, K; Saliba-Serre, B; Adalian, P; Signoli, M; Leonetti, G; Panuel, M
2017-03-01
The Greulich and Pyle (GP) atlas is one of the most frequently used methods of bone age (BA) estimation. Our aim is to assess its accuracy and to calculate the prediction intervals at 95% for forensic use. The study was conducted on a multi-ethnic sample of 2614 individuals (1423 boys and 1191 girls) referred to the university hospital of Marseille (France) for simple injuries. Hand radiographs were analysed using the GP atlas. Reliability of the GP atlas and agreement between BA and chronological age (CA) were assessed, and prediction intervals at 95% were calculated. The repeatability was excellent and the reproducibility was good. Pearson's linear correlation coefficient between CA and BA was 0.983. The mean difference between BA and CA was -0.18 years (boys) and 0.06 years (girls). The prediction interval at 95% for CA was given for each GP category and ranged between 1.2 and more than 4.5 years. The GP atlas is a reproducible and repeatable method that is still accurate for the present population, with a high correlation between BA and CA. The prediction intervals at 95% are wide, reflecting individual variability, and should be known when the method is used in forensic cases. • The GP atlas is still accurate at the present time. • There is a high correlation between bone age and chronological age. • Individual variability must be known when GP is used in forensic cases. • Prediction intervals (95%) are large: around 4 years beyond the age of 10.
Tomaselli Muensterman, Elena; Tisdale, James E
2018-06-08
Prolongation of the heart rate-corrected QT (QTc) interval increases the risk for torsades de pointes (TdP), a potentially fatal arrhythmia. The likelihood of TdP is higher in patients with risk factors, which include female sex, older age, heart failure with reduced ejection fraction, hypokalemia, hypomagnesemia, concomitant administration of ≥ 2 QTc interval-prolonging medications, among others. Assessment and quantification of risk factors may facilitate prediction of patients at highest risk for developing QTc interval prolongation and TdP. Investigators have utilized the field of predictive analytics, which generates predictions using techniques including data mining, modeling, machine learning, and others, to develop methods of risk quantification and prediction of QTc interval prolongation. Predictive analytics have also been incorporated into clinical decision support (CDS) tools to alert clinicians regarding patients at increased risk of developing QTc interval prolongation. The objectives of this paper are to assess the effectiveness of predictive analytics for identification of patients at risk of drug-induced QTc interval prolongation, and to discuss the efficacy of incorporation of predictive analytics into CDS tools in clinical practice. A systematic review of English language articles (human subjects only) was performed, yielding 57 articles, with an additional 4 articles identified from other sources; a total of 10 articles were included in this review. Risk scores for QTc interval prolongation have been developed in various patient populations including those in cardiac intensive care units (ICUs) and in broader populations of hospitalized or health system patients. One group developed a risk score that includes information regarding genetic polymorphisms; this score significantly predicted TdP. Development of QTc interval prolongation risk prediction models and incorporation of these models into CDS tools reduces the risk of QTc interval
NASA Astrophysics Data System (ADS)
Lilly, P.; Yanai, R. D.; Buckley, H. L.; Case, B. S.; Woollons, R. C.; Holdaway, R. J.; Johnson, J.
2016-12-01
Calculations of forest biomass and elemental content require many measurements and models, each contributing uncertainty to the final estimates. While sampling error is commonly reported, based on replicate plots, error due to uncertainty in the regression used to estimate biomass from tree diameter is usually not quantified. Some published estimates of uncertainty due to the regression models have used the uncertainty in the prediction of individuals, ignoring uncertainty in the mean, while others have propagated uncertainty in the mean while ignoring individual variation. Using the simple case of the calcium concentration of sugar maple leaves, we compare the variation among individuals (the standard deviation) to the uncertainty in the mean (the standard error) and illustrate the declining importance in the prediction of individual concentrations as the number of individuals increases. For allometric models, the analogous statistics are the prediction interval (or the residual variation in the model fit) and the confidence interval (describing the uncertainty in the best fit model). The effect of propagating these two sources of error is illustrated using the mass of sugar maple foliage. The uncertainty in individual tree predictions was large for plots with few trees; for plots with 30 trees or more, the uncertainty in individuals was less important than the uncertainty in the mean. Authors of previously published analyses have reanalyzed their data to show the magnitude of these two sources of uncertainty in scales ranging from experimental plots to entire countries. The most correct analysis will take both sources of uncertainty into account, but for practical purposes, country-level reports of uncertainty in carbon stocks, as required by the IPCC, can ignore the uncertainty in individuals. Ignoring the uncertainty in the mean will lead to exaggerated estimates of confidence in estimates of forest biomass and carbon and nutrient contents.
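The distinction this abstract draws between the confidence interval (uncertainty in the mean) and the prediction interval (mean uncertainty plus individual variation) can be made concrete for simple least squares. The formulas below are the standard OLS ones, not taken from this study, and the normal z-multiplier is an approximation to the t-quantile:

```python
import numpy as np

def ols_intervals(x, y, x0, z=1.96):
    """Approximate 95% confidence and prediction intervals for simple OLS
    at a new point x0. The PI adds the residual variance term (the '1 +'),
    so it is always wider than the CI."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    A = np.vstack([x, np.ones(n)]).T
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    s2 = resid @ resid / (n - 2)            # residual variance estimate
    xbar = x.mean()
    sxx = ((x - xbar) ** 2).sum()
    se_mean = np.sqrt(s2 * (1 / n + (x0 - xbar) ** 2 / sxx))      # CI width term
    se_pred = np.sqrt(s2 * (1 + 1 / n + (x0 - xbar) ** 2 / sxx))  # PI width term
    y0 = coef[0] * x0 + coef[1]
    return (y0 - z * se_mean, y0 + z * se_mean), (y0 - z * se_pred, y0 + z * se_pred)
```

As n grows, se_mean shrinks toward zero while se_pred stays near the residual standard deviation, mirroring the abstract's point that individual variation dominates for small plots while mean uncertainty dominates once many trees are aggregated.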
NASA Astrophysics Data System (ADS)
Kasiviswanathan, K.; Sudheer, K.
2013-05-01
Artificial neural network (ANN) based hydrologic models have gained a lot of attention among water resources engineers and scientists, owing to their potential for accurate prediction of flood flows as compared to conceptual or physics-based hydrologic models. The ANN approximates the non-linear functional relationship between the complex hydrologic variables in arriving at the river flow forecast values. Despite a large number of applications, there is still some criticism that ANN point predictions lack reliability, since the uncertainty of the predictions is not quantified, and this limits their use in practical applications. A major concern in applying traditional uncertainty analysis techniques in a neural network framework is its parallel computing architecture with large degrees of freedom, which makes the uncertainty assessment a challenging task. Very limited studies have considered assessment of the predictive uncertainty of ANN-based hydrologic models. In this study, a novel method is proposed that helps construct the prediction interval of an ANN flood forecasting model during calibration itself. The method is designed to have two stages of optimization during calibration: in stage 1, the ANN model is trained with a genetic algorithm (GA) to obtain the optimal set of weights and biases; in stage 2, the optimal variability of the ANN parameters (obtained in stage 1) is identified so as to create an ensemble of predictions. During the second stage, the optimization is performed with multiple objectives: (i) minimum residual variance for the ensemble mean, (ii) maximum number of measured data points falling within the estimated prediction interval, and (iii) minimum width of the prediction interval. The method is illustrated using a real-world case study of an Indian basin. The method was able to produce an ensemble that has an average prediction interval width of 23.03 m3/s, with 97.17% of the total validation data points (measured) lying within the interval. The derived
Measurement of Phonated Intervals during Four Fluency-Inducing Conditions
ERIC Educational Resources Information Center
Davidow, Jason H.; Bothe, Anne K.; Andreatta, Richard D.; Ye, Jun
2009-01-01
Purpose: Previous investigations of persons who stutter have demonstrated changes in vocalization variables during fluency-inducing conditions (FICs). A series of studies has also shown that a reduction in short intervals of phonation, those from 30 to 200 ms, is associated with decreased stuttering. The purpose of this study, therefore, was to…
Evaluation of SAMe-TT2R2 Score on Predicting Success With Extended-Interval Warfarin Monitoring.
Hwang, Andrew Y; Carris, Nicholas W; Dietrich, Eric A; Gums, John G; Smith, Steven M
2018-06-01
In patients with stable international normalized ratios, 12-week extended-interval warfarin monitoring can be considered; however, predictors of success with this strategy are unknown. The previously validated SAMe-TT2R2 score (considering sex, age, medical history, treatment, tobacco, and race) predicts anticoagulation control during standard follow-up (every 4 weeks), with lower scores associated with greater time in therapeutic range. To evaluate the ability of the SAMe-TT2R2 score in predicting success with extended-interval warfarin follow-up in patients with previously stable warfarin doses. In this post hoc analysis of a single-arm feasibility study, baseline SAMe-TT2R2 scores were calculated for patients with ≥1 extended-interval follow-up visit. The primary analysis assessed achieved weeks of extended-interval follow-up according to baseline SAMe-TT2R2 scores. A total of 47 patients receiving chronic anticoagulation completed a median of 36 weeks of extended-interval follow-up. The median baseline SAMe-TT2R2 score was 1 (range 0-5). Lower SAMe-TT2R2 scores appeared to be associated with greater duration of extended-interval follow-up achieved, though the differences between scores were not statistically significant. No individual variable of the SAMe-TT2R2 score was associated with achieved weeks of extended-interval follow-up. Analysis of additional patient factors found that longer duration (≥24 weeks) of prior stable treatment was significantly associated with greater weeks of extended-interval follow-up completed (P = 0.04). Conclusion and Relevance: This pilot study provides limited evidence that the SAMe-TT2R2 score predicts success with extended-interval warfarin follow-up but requires confirmation in a larger study. Further research is also necessary to establish additional predictors of successful extended-interval warfarin follow-up.
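For illustration, the commonly published component weights of the SAMe-TT2R2 score can be encoded as below. The weights shown (female sex 1, age < 60 1, relevant medical history 1, treatment with interacting drugs 1, tobacco use 2, non-white race 2) are assumptions based on the widely cited scheme and should be checked against the original validation paper before any use:

```python
def same_tt2r2(female, age_lt_60, medical_history,
               interacting_drug, tobacco_use, nonwhite):
    """SAMe-TT2R2 score sketch. Each argument is a boolean risk factor;
    component weights are assumed, not taken from this abstract."""
    return (int(female) + int(age_lt_60) + int(medical_history)
            + int(interacting_drug)
            + 2 * int(tobacco_use)      # the first "2" in the acronym
            + 2 * int(nonwhite))        # the second "2" in the acronym
```

Lower totals (0-1) are the range the abstract associates with better anticoagulation control under standard follow-up.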
An Efficient Interval Type-2 Fuzzy CMAC for Chaos Time-Series Prediction and Synchronization.
Lee, Ching-Hung; Chang, Feng-Yu; Lin, Chih-Min
2014-03-01
This paper aims to propose a more efficient control algorithm for chaos time-series prediction and synchronization. A novel type-2 fuzzy cerebellar model articulation controller (T2FCMAC) is proposed. In some special cases, this T2FCMAC can be reduced to an interval type-2 fuzzy neural network, a fuzzy neural network, and a fuzzy cerebellar model articulation controller (CMAC). So, this T2FCMAC is a more generalized network with better learning ability, thus, it is used for the chaos time-series prediction and synchronization. Moreover, this T2FCMAC realizes the un-normalized interval type-2 fuzzy logic system based on the structure of the CMAC. It can provide better capabilities for handling uncertainty and more design degree of freedom than traditional type-1 fuzzy CMAC. Unlike most of the interval type-2 fuzzy system, the type-reduction of T2FCMAC is bypassed due to the property of un-normalized interval type-2 fuzzy logic system. This causes T2FCMAC to have lower computational complexity and is more practical. For chaos time-series prediction and synchronization applications, the training architectures with corresponding convergence analyses and optimal learning rates based on Lyapunov stability approach are introduced. Finally, two illustrated examples are presented to demonstrate the performance of the proposed T2FCMAC.
NASA Astrophysics Data System (ADS)
Sun, Chao; Zhang, Chunran; Gu, Xinfeng; Liu, Bin
2017-10-01
When predictive control is applied to industrial production processes, the constraints of the optimization objective often cannot be met, and the online predictive controller then fails to find a feasible solution or a global optimal solution. To solve this problem, based on a Back Propagation-Auto Regressive with exogenous inputs (BP-ARX) combined control model, a nonlinear programming method is used to analyze the feasibility of constrained predictive control, a feasibility decision theorem for the optimization objective is proposed, and a solution method for the soft-constraint slack variables is given for the case where the optimization objective is infeasible. On this basis, for interval control requirements on the controlled variables, the solved slack variables are introduced and an adaptive weighted interval predictive control algorithm is proposed, achieving adaptive regulation of the optimization objective, automatic adjustment of the infeasible interval range, expansion of the feasible region, and feasibility of the interval optimization objective. Finally, the feasibility and effectiveness of the algorithm are validated through comparative simulation experiments.
Mohebbi, Maryam; Ghassemian, Hassan
2011-08-01
Atrial fibrillation (AF) is the most common cardiac arrhythmia and increases the risk of stroke. Predicting the onset of paroxysmal AF (PAF) based on noninvasive techniques is clinically important and can be invaluable in order to avoid useless therapeutic intervention and to minimize risks for the patients. In this paper, we propose an effective PAF predictor which is based on the analysis of the RR-interval signal. This method consists of three steps: preprocessing, feature extraction, and classification. In the first step, the QRS complexes are detected from the electrocardiogram (ECG) signal and then the RR-interval signal is extracted. In the next step, the recurrence plot (RP) of the RR-interval signal is obtained and five statistically significant features are extracted to characterize the basic patterns of the RP. These features consist of the recurrence rate, length of the longest diagonal segment (L(max)), average length of the diagonal lines (L(mean)), entropy, and trapping time. Recurrence quantification analysis can reveal subtle aspects of dynamics not easily appreciated by other methods and exhibits characteristic patterns which are caused by the typical dynamical behavior. In the final step, a support vector machine (SVM)-based classifier is used for PAF prediction. The performance of the proposed method in prediction of PAF episodes was evaluated using the Atrial Fibrillation Prediction Database (AFPDB), which consists of 30-min ECG recordings that end just prior to the onset of PAF and of segments at least 45 min distant from any PAF events. The obtained sensitivity, specificity, positive predictivity and negative predictivity were 97%, 100%, 100%, and 96%, respectively. The proposed methodology presents better results than other existing approaches.
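The first recurrence-plot feature listed, the recurrence rate, reduces to a simple pairwise computation. This sketch uses a 1-D embedding of the RR series and a hypothetical threshold eps, not the authors' embedding or settings:

```python
import numpy as np

def recurrence_rate(rr, eps):
    """Recurrence rate: fraction of point pairs (i, j) whose RR values lie
    within eps of each other, i.e. the density of the recurrence plot.
    The other RQA features (L(max), L(mean), entropy, trapping time) are
    computed from the diagonal and vertical line structures of this same
    boolean matrix."""
    rr = np.asarray(rr, dtype=float)
    d = np.abs(rr[:, None] - rr[None, :])   # pairwise distance matrix
    return (d < eps).mean()                 # density of recurrent points
```

A higher-dimensional delay embedding, as usual in recurrence quantification analysis, would replace the scalar distance with a vector norm.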
Hinderliter, Charles F; Goodhart, Mark; Anderson, Matthew J; Misanin, James R
2002-06-01
Assuming body temperature correlates with metabolic activity, the rate of body temperature recovery was manipulated to assess its effects on long-trace conditioning in a conditioned taste-aversion paradigm. Following 10 min. access to a 0.1% saccharin solution and then 10 min. immersion in 0-0.5 degrees C water, two groups of 16 Wistar-derived, 81-113 day-old, male albino rats received either saline or lithium chloride injections 3 hr. later. These two groups were subdivided on the basis of warming rate during the 3-hr. interval. Half of the rats recovered at room temperature (20 degrees to 21 degrees C), and half recovered in an incubator maintained at 30 degrees C. Maintaining a lowered body temperature between the conditioned stimulus and unconditioned stimulus allowed an association to be made at 3 hr., an interval that normally does not support conditioning. In contrast, lowering body temperature and then inducing a fast warming rate did not produce evidence of an aversion. It is suggested that maintaining a low body temperature over the interval between the presentation of the conditioned stimulus and the unconditioned stimulus slows a metabolic clock, extending the measured interval at which associations can be made using conditioned taste-aversion procedures.
Ingham, Roger J.; Bothe, Anne K.; Wang, Yuedong; Purkhiser, Krystal; New, Anneliese
2012-01-01
Purpose To relate changes in four variables previously defined as characteristic of normally fluent speech to changes in phonatory behavior during oral reading by persons who stutter (PWS) and normally fluent controls under multiple fluency-inducing (FI) conditions. Method Twelve PWS and 12 controls each completed 4 ABA experiments. During A phases, participants read normally. B phases were 4 different FI conditions: auditory masking, chorus reading, whispering, and rhythmic stimulation. Dependent variables were the durations of accelerometer-recorded phonated intervals; self-judged speech effort; and observer-judged stuttering frequency, speech rate, and speech naturalness. The method enabled a systematic replication of Ingham et al. (2009). Results All FI conditions resulted in decreased stuttering and decreases in the number of short phonated intervals, as compared with baseline conditions, but the only FI condition that satisfied all four characteristics of normally fluent speech was chorus reading. Increases in longer phonated intervals were associated with decreased stuttering but also with poorer naturalness and/or increased speech effort. Previous findings concerning the effects of FI conditions on speech naturalness and effort were replicated. Conclusions Measuring all relevant characteristics of normally fluent speech, in the context of treatments that aim to reduce the occurrence of short-duration PIs, may aid the search for an explanation of the nature of stuttering and may also maximize treatment outcomes for adults who stutter. PMID:22365886
Lu, Hongwei; Zhang, Chenxi; Sun, Ying; Hao, Zhidong; Wang, Chunfang; Tian, Jiajia
2015-08-01
Predicting the termination of paroxysmal atrial fibrillation (AF) may provide a signal for deciding whether timely intervention is needed. We propose a novel RdR scatter plot of RR intervals. The abscissa of the RdR scatter plot is the RR interval and the ordinate is the difference between successive RR intervals. The RdR scatter plot thus combines information on RR intervals and on differences between successive RR intervals, capturing more heart rate variability (HRV) information. RdR scatter plot analysis of one minute of RR intervals for 50 segments with non-terminating and immediately terminating AF showed that the points in the RdR scatter plot of non-terminating AF were more scattered than those of immediately terminating AF. By dividing the RdR scatter plot into uniform grids and counting the number of non-empty grids, non-terminating and immediately terminating AF segments were differentiated. Using 49 RR intervals, 17 of 20 segments in the learning set and 20 of 30 segments in the test set were correctly detected; using 66 RR intervals, 16 of 18 segments in the learning set and 20 of 28 segments in the test set were correctly detected. The results demonstrate that during the last minute before the termination of paroxysmal AF, the variance of the RR intervals and the differences between neighboring RR intervals become smaller. The termination of paroxysmal AF can thus be predicted with the RdR scatter plot, although the prediction accuracy needs further improvement.
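The grid-counting step described above can be sketched as follows. The grid resolution and the min-max normalisation are illustrative assumptions; the paper does not specify its exact grid construction.

```python
import numpy as np

def rdr_nonempty_grids(rr, n_grid=10):
    """Count non-empty cells in an RdR scatter plot.

    x-axis: RR intervals; y-axis: difference to the next RR interval.
    A more scattered plot (e.g. non-terminating AF) occupies more cells
    than a concentrated one. Grid size is a hypothetical choice.
    """
    rr = np.asarray(rr, dtype=float)
    x = rr[:-1]        # RR interval
    y = np.diff(rr)    # difference between successive RR intervals
    # Map points onto an n_grid x n_grid lattice spanning their range.
    xi = np.clip(((x - x.min()) / (np.ptp(x) + 1e-12) * n_grid).astype(int),
                 0, n_grid - 1)
    yi = np.clip(((y - y.min()) / (np.ptp(y) + 1e-12) * n_grid).astype(int),
                 0, n_grid - 1)
    return len(set(zip(xi.tolist(), yi.tolist())))
```

A perfectly regular rhythm collapses onto a single cell, while irregular RR sequences light up many cells; thresholding this count separates the two classes.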
The effects of context and musical training on auditory temporal-interval discrimination.
Banai, Karen; Fisher, Shirley; Ganot, Ron
2012-02-01
Nonsensory factors such as stimulus context and musical experience are known to influence auditory frequency discrimination, but whether the context effect extends to auditory temporal processing remains unknown, as does whether individual experiences such as musical training alter the context effect. The goal of the present study was therefore to investigate the effects of stimulus context and musical experience on auditory temporal-interval discrimination. In experiment 1, temporal-interval discrimination was compared between fixed-context conditions, in which a single base temporal interval was presented repeatedly across all trials, and variable-context conditions, in which one of two base intervals was randomly presented on each trial. Discrimination was significantly better in the fixed than in the variable context conditions. In experiment 2, temporal discrimination thresholds of musicians and non-musicians were compared across 3 conditions: a fixed-context condition in which the target interval was presented repeatedly across trials, and two variable-context conditions differing in the frequencies of the tones marking the temporal intervals. Musicians outperformed non-musicians in all 3 conditions, but the effects of context were similar for the two groups. Overall, it appears that, like frequency discrimination, temporal-interval discrimination benefits from having a fixed reference. Musical experience, while improving performance, did not alter the context effect, suggesting that the improved discrimination skills of musicians are probably not an outcome of more sensitive contextual facilitation or predictive coding mechanisms. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
van Horssen, Wim T.; Wang, Yandong; Cao, Guohua
2018-06-01
In this paper, it is shown how characteristic coordinates, or equivalently the well-known formula of d'Alembert, can be used to solve initial-boundary value problems for wave equations on fixed, bounded intervals involving Robin-type boundary conditions with time-dependent coefficients. A Robin boundary condition specifies a linear combination of the dependent variable and its first-order space derivative on a boundary of the interval. Analytical methods such as separation of variables (SOV) or the Laplace transform method are not applicable to these types of problems. The analytical results obtained by applying the proposed method are in complete agreement with those obtained using the numerical finite difference method. For problems with time-independent coefficients in the Robin boundary condition(s), the results of the proposed method also agree completely with those obtained, for instance, by the method of separation of variables or by the finite difference method.
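For reference, the d'Alembert representation and a Robin condition of the kind described can be written as follows (the symbols are assumed for illustration; the paper's notation may differ):

```latex
% General solution of u_{tt} = c^2 u_{xx} on 0 < x < L in characteristic
% coordinates (d'Alembert form), with F and G determined from the
% initial and boundary data:
u(x,t) = F(x + ct) + G(x - ct)
% Robin-type boundary condition with a time-dependent coefficient
% \alpha(t) at the endpoint x = 0 (an illustrative homogeneous form):
\alpha(t)\, u(0,t) + u_x(0,t) = 0
```

The difficulty the paper addresses is that with a time-dependent α(t), separation of variables fails, while the characteristic-coordinate form above still propagates the boundary data along x ± ct.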
ERIC Educational Resources Information Center
Hinderliter, Charles F.; Andrews, Amy; Misanin, James R.
2012-01-01
In conditioned taste aversion (CTA), a taste, the conditioned stimulus (CS), is paired with an illness-inducing stimulus, the unconditioned stimulus (US), to produce CS-US associations at very long (hours) intervals, a result that appears to violate the law of contiguity. The specific length of the maximum effective trace interval that has been…
Perceptual Responses to High- and Moderate-Intensity Interval Exercise in Adolescents.
Malik, Adam A; Williams, Craig A; Weston, Kathryn L; Barker, Alan R
2018-05-01
Continuous high-intensity exercise is proposed to evoke unpleasant sensations, as predicted by the dual-mode theory, and may negatively affect future exercise adherence. Previous studies report unpleasant affective responses during continuous high-intensity exercise, but the affective experience during high-intensity interval exercise (HIIE), involving brief bursts of high-intensity exercise separated by low-intensity activity, is poorly understood in adolescents. We examined the acute affective, enjoyment, and perceived exertion responses to HIIE compared with moderate-intensity interval exercise (MIIE) in adolescents. Thirteen adolescent boys (mean ± SD age, 14.0 ± 0.5 yr) performed two counterbalanced exercise conditions: 1) HIIE: 8 × 1-min work intervals at 90% maximal aerobic speed; and 2) MIIE: between 9 and 12 × 1-min work intervals at 90% ventilatory threshold, with the number of intervals distance-matched to HIIE. HIIE and MIIE work intervals were interspersed with 75 s of active recovery at 4 km/h. Affect, enjoyment, and RPE were recorded before, during, and after exercise. Affect responses declined in both conditions, but the fall was greater in HIIE than in MIIE (P < 0.025; effect size [ES], 0.64 to 0.81). Affect remained positive at the end-work interval for both conditions (MIIE, 2.62 ± 1.50; HIIE, 1.15 ± 2.08 on the feeling scale). No enjoyment differences were evident during HIIE and MIIE (P = 0.32), but HIIE elicited greater postexercise enjoyment than MIIE (P = 0.01, ES = 0.47). RPE was significantly higher during HIIE than MIIE across all work intervals (all P < 0.03, ES > 0.64). Despite elevated RPE, HIIE did not elicit prominent unpleasant feelings as predicted by the dual-mode theory and was associated with greater postexercise enjoyment responses than MIIE. This study demonstrates the feasibility of HIIE as an alternative form of physical activity in adolescents.
Intervals for posttest probabilities: a comparison of 5 methods.
Mossman, D; Berger, J O
2001-01-01
Several medical articles discuss methods of constructing confidence intervals for single proportions and the likelihood ratio, but scant attention has been given to the systematic study of intervals for the posterior odds, or the positive predictive value, of a test. The authors describe 5 methods of constructing confidence intervals for posttest probabilities when estimates of sensitivity, specificity, and the pretest probability of a disorder are derived from empirical data. They then evaluate each method to determine how well the intervals' coverage properties correspond to their nominal value. When the estimates of pretest probabilities, sensitivity, and specificity are derived from more than 80 subjects and are not close to 0 or 1, all methods generate intervals with appropriate coverage properties. When these conditions are not met, however, the best-performing method is an objective Bayesian approach implemented by a simple simulation using a spreadsheet. Physicians and investigators can generate accurate confidence intervals for posttest probabilities in small-sample situations using the objective Bayesian approach.
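A sketch of such a simulation-based interval is shown below. The Jeffreys Beta(1/2, 1/2) priors are an assumption made here for illustration; the authors' exact prior and spreadsheet implementation are not reproduced.

```python
import numpy as np

def posttest_interval(tp, fn, tn, fp, d, n, alpha=0.05, draws=20000, seed=0):
    """Simulation-based interval for the positive predictive value.

    tp/fn: diseased subjects testing positive/negative (sensitivity data)
    tn/fp: healthy subjects testing negative/positive (specificity data)
    d/n:   diseased count and total sample for the pretest probability
    Draws from independent Beta posteriors (Jeffreys priors assumed),
    pushes each draw through Bayes' theorem, and reads off percentiles.
    """
    rng = np.random.default_rng(seed)
    sens = rng.beta(tp + 0.5, fn + 0.5, draws)
    spec = rng.beta(tn + 0.5, fp + 0.5, draws)
    prev = rng.beta(d + 0.5, n - d + 0.5, draws)
    # Positive predictive value for each simulated (sens, spec, prev).
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    lo, hi = np.percentile(ppv, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi
```

With small samples or estimates near 0 or 1, the resulting interval is asymmetric, which is exactly where the abstract reports the simulation approach outperforming the analytic methods.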
Brown, Kevin L.; Stanton, Mark E.
2008-01-01
Eyeblink classical conditioning (EBC) was observed across a broad developmental period with tasks utilizing two interstimulus intervals (ISIs). In ISI discrimination, two distinct conditioned stimuli (CSs; light and tone) are reinforced with a periocular shock unconditioned stimulus (US) at two different CS-US intervals. Temporal uncertainty is identical in design except that the same CS is presented at both intervals. Developmental changes in conditioning have been reported in each task beyond ages when single-ISI learning is well developed. The present study sought to replicate and extend these previous findings by testing each task at four separate ages. Consistent with previous findings, younger rats (postnatal day [PD] 23 and 30) trained in ISI discrimination showed evidence of enhanced cross-modal influence of the short CS-US pairing upon long-CS conditioning relative to older subjects. ISI discrimination training at PD43-47 yielded outcomes similar to those in adults (PD65-71). Cross-modal transfer effects in this task therefore appear to diminish between PD30 and PD43-47. Comparisons of ISI discrimination with temporal uncertainty indicated that cross-modal transfer in ISI discrimination at the youngest ages did not represent complete generalization across CSs. ISI discrimination undergoes a more protracted developmental emergence than single-cue EBC and may be a more sensitive indicator of developmental disorders involving cerebellar dysfunction. PMID:18726989
NASA Technical Reports Server (NTRS)
Chiavassa, G.; Liandrat, J.
1996-01-01
We construct compactly supported wavelet bases satisfying homogeneous boundary conditions on the interval (0,1). The main features of multiresolution analysis on the line are retained, including polynomial approximation and tree algorithms. The case of H_0^1(0,1) is detailed, and numerical values required for the implementation are provided for the Neumann and Dirichlet boundary conditions.
Predicting Near-Term Water Quality from Satellite Observations of Watershed Conditions
NASA Astrophysics Data System (ADS)
Weiss, W. J.; Wang, L.; Hoffman, K.; West, D.; Mehta, A. V.; Lee, C.
2017-12-01
Despite the strong influence of watershed conditions on source water quality, most water utilities and water resource agencies do not currently have the capability to monitor watershed sources of contamination with great temporal or spatial detail. Typically, knowledge of source water quality is limited to periodic grab sampling; automated monitoring of a limited number of parameters at a few select locations; and/or monitoring relevant constituents at a treatment plant intake. While important, such observations are not sufficient to inform proactive watershed or source water management at a monthly or seasonal scale. Satellite remote sensing data on the other hand can provide a snapshot of an entire watershed at regular, sub-monthly intervals, helping analysts characterize watershed conditions and identify trends that could signal changes in source water quality. Accordingly, the authors are investigating correlations between satellite remote sensing observations of watersheds and source water quality, at a variety of spatial and temporal scales and lags. While correlations between remote sensing observations and direct in situ measurements of water quality have been well described in the literature, there are few studies that link remote sensing observations across a watershed with near-term predictions of water quality. In this presentation, the authors will describe results of statistical analyses and discuss how these results are being used to inform development of a desktop decision support tool to support predictive application of remote sensing data. Predictor variables under evaluation include parameters that describe vegetative conditions; parameters that describe climate/weather conditions; and non-remote sensing, in situ measurements. Water quality parameters under investigation include nitrogen, phosphorus, organic carbon, chlorophyll-a, and turbidity.
Reboul, Q; Delabaere, A; Luo, Z C; Nuyt, A-M; Wu, Y; Chauleur, C; Fraser, W; Audibert, F
2017-03-01
To compare third-trimester ultrasound screening methods for predicting small-for-gestational age (SGA), and to evaluate the impact of the ultrasound-delivery interval on screening performance. In this prospective study, data were collected from a multicenter singleton cohort study investigating the links between various exposures during pregnancy and birth outcome and later health in children. We included women, recruited in the first trimester, who had complete outcome data and had undergone third-trimester ultrasound examination. Demographic, clinical and biological variables were also collected from both parents. We compared prediction of delivery of a SGA neonate (birth weight < 10th percentile) by the following methods: abdominal circumference (AC) Z-score based on Hadlock curves (Hadlock AC), on INTERGROWTH-21st Project curves (Intergrowth AC) and on Salomon curves (Salomon AC); estimated fetal weight (EFW) Z-score based on Hadlock curves (Hadlock EFW) and on customized curves from Gardosi (Gardosi EFW); and fetal growth velocity based on change in AC between second and third trimesters (FGVAC). We also assessed the following ultrasound-delivery intervals: ≤ 4 weeks, ≤ 6 weeks and ≤ 10 weeks. Third-trimester ultrasound was performed in 1805 patients with complete outcome data, of whom 158 (8.8%) delivered a SGA neonate. Ultrasound examination was at a median gestational age of 32 (interquartile range, 31-33) weeks. The ultrasound-delivery interval was ≤ 4 weeks in 17.2% of cases, ≤ 6 weeks in 48.1% of cases and ≤ 10 weeks in 97.3% of cases. Areas under the receiver-operating characteristics curve (AUC) were 0.772 for Salomon AC, 0.768 for Hadlock EFW, 0.766 for Hadlock AC, 0.765 for Intergrowth AC, 0.708 for Gardosi EFW and 0.674 for FGVAC (all P < 0.0001). The screening method with the highest AUC for an ultrasound-delivery interval ≤ 4 weeks was Salomon AC (AUC, 0.856), ≤ 6 weeks was Hadlock AC (AUC, 0.824) and ≤ 10 weeks was Salomon AC
Caro-Martín, C Rocío; Leal-Campanario, Rocío; Sánchez-Campusano, Raudel; Delgado-García, José M; Gruart, Agnès
2015-11-04
We were interested in determining whether rostral medial prefrontal cortex (rmPFC) neurons participate in the measurement of conditioned stimulus-unconditioned stimulus (CS-US) time intervals during classical eyeblink conditioning. Rabbits were conditioned with a delay paradigm consisting of a tone as CS. The CS started 50, 250, 500, 1000, or 2000 ms before and coterminated with an air puff (100 ms) directed at the cornea as the US. Eyelid movements were recorded with the magnetic search coil technique and the EMG activity of the orbicularis oculi muscle. Firing activities of rmPFC neurons were recorded across conditioning sessions. Reflex and conditioned eyelid responses presented a dominant oscillatory frequency of ≈12 Hz. The firing rate of each recorded neuron presented a single peak of activity with a frequency dependent on the CS-US interval (i.e., ≈12 Hz for 250 ms, ≈6 Hz for 500 ms, and ≈3 Hz for 1000 ms). Interestingly, rmPFC neurons presented their dominant firing peaks at three precise times evenly distributed with respect to CS start and also depending on the duration of the CS-US interval (only for intervals of 250, 500, and 1000 ms). No significant neural responses were recorded at very short (50 ms) or long (2000 ms) CS-US intervals. rmPFC neurons seem not to encode the oscillatory properties characterizing conditioned eyelid responses in rabbits, but are probably involved in the determination of CS-US intervals of an intermediate range (250-1000 ms). We propose that a variable oscillator underlies the generation of working memories in rabbits. The way in which brains generate working memories (those used for the transient processing and storage of newly acquired information) is still an intriguing question. Here, we report that the firing activities of neurons located in the rostromedial prefrontal cortex recorded in alert behaving rabbits are controlled by a dynamic oscillator. This oscillator generated firing frequencies in a variable band
The influence of CS-US interval on several different indices of learning in appetitive conditioning
Delamater, Andrew R.; Holland, Peter C.
2010-01-01
Four experiments examined the effects of varying the CS-US interval (and US density) on learning in an appetitive magazine-approach task with rats. Learning was assessed with conditioned response (CR) measures, as well as measures of sensory-specific stimulus-outcome associations (Pavlovian-instrumental transfer, potentiated feeding, and US devaluation). The results indicate that there is an inverse relation between the CS-US interval and magazine-approach CRs, but that sensory-specific stimulus-outcome associations are established over a wide range of relatively long, but not short, CS-US intervals. These data suggest that simple CR measures provide different information about what is learned than measures of the specific stimulus-outcome association, and that time is a more critical variable for the former than for the latter component of learning. PMID:18426304
Predicting Driver Behavior during the Yellow Interval Using Video Surveillance
Li, Juan; Jia, Xudong; Shao, Chunfu
2016-01-01
At a signalized intersection, drivers must make a stop/go decision at the onset of the yellow signal. Incorrect decisions would lead to red light running (RLR) violations or crashes. This study aims to predict drivers’ stop/go decisions and RLR violations during yellow intervals. Traffic data such as vehicle approaching speed, acceleration, distance to the intersection, and occurrence of RLR violations are gathered by a Vehicle Data Collection System (VDCS). An enhanced Gaussian Mixture Model (GMM) is used to extract moving vehicles from target lanes, and the Kalman Filter (KF) algorithm is utilized to acquire vehicle trajectories. The data collected from the VDCS are further analyzed by a sequential logit model, and the relationship between drivers’ stop/go decisions and RLR violations is identified. The results indicate that the distance of vehicles to the stop line at the onset of the yellow signal is an important predictor for both drivers’ stop/go decisions and RLR violations. In addition, vehicle approaching speed is a contributing factor for stop/go decisions. Furthermore, the accelerations of vehicles after the onset of the yellow signal are positively related to RLR violations. The findings of this study can be used to predict the probability of drivers’ RLR violations and improve traffic safety at signalized intersections. PMID:27929447
Gast, Anne; Langer, Sebastian; Sengewald, Marie-Ann
2016-10-01
Evaluative conditioning (EC) is a change in valence that is due to pairing a conditioned stimulus (CS) with another, typically valent, unconditioned stimulus (US). This paper investigates how basic presentation parameters moderate EC effects. In two studies we tested the effectiveness of different temporal relations between the CS and the US, that is, the order in which the stimuli were presented and the temporal distance between them. Both studies showed that the size of EC effects was independent of the presentation order of CS and US within a stimulus pair. Contrary to classical conditioning effects, EC effects are thus not most pronounced after CS-first presentations. Furthermore, as shown in Experiment 2, EC effects increased in magnitude as the temporal interval between CS and US presentations decreased. Experiment 1 showed the largest EC effects in the condition with simultaneous presentations, which can be seen as the condition with the temporally closest presentation. In this experiment, stimuli were presented in two different modalities, which might have facilitated simultaneous processing. In Experiment 2, in which all stimuli were presented visually, this advantage of simultaneous presentation was not found. We discuss practical and theoretical implications of our findings. Copyright © 2016 Elsevier B.V. All rights reserved.
Prediction of Malaysian monthly GDP
NASA Astrophysics Data System (ADS)
Hin, Pooi Ah; Ching, Soo Huei; Yeing, Pan Wei
2015-12-01
The paper attempts to use a method based on the multivariate power-normal distribution to predict the Malaysian Gross Domestic Product (GDP) one month ahead. Letting r(t) be the vector consisting of the month-t values of m selected macroeconomic variables and GDP, we model the month-(t+1) GDP as dependent on the present and l-1 past values r(t), r(t-1), …, r(t-l+1) via a conditional distribution derived from a [(m+1)l+1]-dimensional power-normal distribution. The 100(α/2)% and 100(1-α/2)% points of the conditional distribution may be used to form an out-of-sample prediction interval. This interval, together with the mean of the conditional distribution, may be used to predict the month-(t+1) GDP. The mean absolute percentage error (MAPE), estimated coverage probability, and average length of the prediction interval are used as the criteria for selecting the suitable lag value l-1 and the subset from a pool of 17 macroeconomic variables. It is found that the relatively better models are those for which 2 ≤ l ≤ 3 and which involve one or two of the macroeconomic variables Market Indicative Yield, Oil Prices, Exchange Rate, and Import Trade.
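The idea of pairing a point prediction with out-of-sample percentile bounds can be sketched with a plain least-squares stand-in for the power-normal machinery. This is a simplification under stated assumptions (Gaussian-free empirical residual quantiles instead of the fitted conditional power-normal distribution); the function and variable names are illustrative.

```python
import numpy as np

def lagged_prediction_interval(y, X, x_new, alpha=0.05):
    """One-step-ahead prediction interval sketch.

    y: target values (e.g. month-(t+1) GDP); X: rows of present and
    lagged predictors; x_new: predictor row for the month to forecast.
    Fits by ordinary least squares and widens the point prediction by
    empirical residual quantiles (not the paper's power-normal method).
    """
    X1 = np.column_stack([np.ones(len(X)), X])   # add intercept
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    point = np.concatenate([[1.0], x_new]) @ beta
    lo, hi = np.quantile(resid, [alpha / 2, 1 - alpha / 2])
    return point + lo, point, point + hi
```

As in the paper, the interval's coverage and average length can then be scored out of sample to choose the lag order and predictor subset.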
Francq, Bernard G; Govaerts, Bernadette
2016-06-30
Two main methodologies for assessing equivalence in method-comparison studies are presented separately in the literature. The first one is the well-known and widely applied Bland-Altman approach with its agreement intervals, where two methods are considered interchangeable if their differences are not clinically significant. The second approach is based on errors-in-variables regression in a classical (X,Y) plot and focuses on confidence intervals, whereby two methods are considered equivalent when providing similar measures notwithstanding the random measurement errors. This paper reconciles these two methodologies and shows their similarities and differences using both real data and simulations. A new consistent correlated-errors-in-variables regression is introduced as the errors are shown to be correlated in the Bland-Altman plot. Indeed, the coverage probabilities collapse and the biases soar when this correlation is ignored. Novel tolerance intervals are compared with agreement intervals with or without replicated data, and novel predictive intervals are introduced to predict a single measure in an (X,Y) plot or in a Bland-Atman plot with excellent coverage probabilities. We conclude that the (correlated)-errors-in-variables regressions should not be avoided in method comparison studies, although the Bland-Altman approach is usually applied to avert their complexity. We argue that tolerance or predictive intervals are better alternatives than agreement intervals, and we provide guidelines for practitioners regarding method comparison studies. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
Martínez-Alanis, Marisol; Ruiz-Velasco, Silvia; Lerma, Claudia
2016-12-15
Most approaches to predicting ventricular tachyarrhythmias that are based on RR intervals consider only sinus beats, excluding premature ventricular complexes (PVCs). The method known as heartprint, which analyses PVCs and their characteristics, has prognostic value for fatal arrhythmias on long recordings of RR intervals (>70,000 beats). Our aim was to evaluate characteristics of PVCs from short-term recordings (around 1000 beats) and their prognostic value for imminent sustained tachyarrhythmia. We analyzed 132 pairs of short-term RR-interval recordings (one before tachyarrhythmia and one control) obtained from 78 patients. Patients were classified into two groups based on the history of accelerated heart rate (HR > 90 bpm) before a tachyarrhythmia episode. Heartprint indexes, such as the mean coupling interval (meanCI) and the number of occurrences of the most prevalent form of PVCs (sNIB), were calculated. The predictive value of all the indexes and of combinations of indexes was calculated. A meanCI shorter than 482 ms and the occurrence of more repetitive arrhythmias (sNIB ≥ 2.5) had significant prognostic value for patients with accelerated heart rate: adjusted odds ratio of 2.63 (1.33-5.17) for meanCI and 2.28 (1.20-4.33) for sNIB. Combining these indexes increases the adjusted odds ratio to 10.94 (3.89-30.80). High prevalence of repeating forms of PVCs and shorter CIs are potentially useful risk markers of imminent ventricular tachyarrhythmia. Knowing whether a patient has a history of VT/VF preceded by accelerated HR improves the prognostic value of these risk markers. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Predicting Culex pipiens/restuans population dynamics by interval lagged weather data
2013-01-01
Background Culex pipiens/restuans mosquitoes are important vectors for a variety of arthropod-borne viral infections. In this study, the associations between 20 years of mosquito capture data and the time-lagged environmental quantities daytime length, temperature, precipitation, relative humidity and wind speed were used to generate a predictive model for the population dynamics of this vector species. Methods The mosquito population in the study area was represented by averaged time series of mosquito counts captured at 6 sites in Cook County (Illinois, USA). Cross-correlation maps (CCMs) were compiled to investigate the association between mosquito abundances and environmental quantities. The results obtained from the CCMs were incorporated into a Poisson regression to generate a predictive model. To optimize the predictive model, the time lags obtained from the CCMs were adjusted using a genetic algorithm. Results CCMs for weekly data showed a highly positive correlation of mosquito abundances with daytime length 4 to 5 weeks prior to capture (quantified by a Spearman rank-order correlation of rS = 0.898) and with temperature during the 2 weeks prior to capture (rS = 0.870). Maximal negative correlations were found for wind speed averaged over the 3 weeks prior to capture (rS = −0.621). Cx. pipiens/restuans population dynamics was predicted by integrating the CCM results into Poisson regression models, which were used to simulate the average seasonal cycle of mosquito abundance. Verification against observations resulted in a correlation of rS = 0.899 for daily and rS = 0.917 for weekly data. Applying the optimized models to the entire 20-year time series also resulted in a suitable fit, with rS = 0.876 for daily and rS = 0.899 for weekly data. Conclusions The study demonstrates the application of interval-lagged weather data to predict mosquito abundances with feasible accuracy, especially when related to weekly Cx. pipiens
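The cross-correlation-map idea of scanning interval-lagged weather averages can be sketched as follows. The window bounds and the rank-correlation implementation are illustrative assumptions; the paper's exact CCM construction and the Poisson-regression step are not reproduced.

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation as Pearson correlation of ranks
    (valid for tie-free data)."""
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

def cross_correlation_map(counts, weather, max_lag=6, max_width=4):
    """Sketch of a cross-correlation map (CCM).

    For each (lag, width) pair, correlates the capture counts with the
    weather series averaged over a window of `width` steps ending `lag`
    steps before capture. Returns {(lag, width): Spearman r}.
    """
    counts = np.asarray(counts, dtype=float)
    weather = np.asarray(weather, dtype=float)
    n = len(counts)
    ccm = {}
    for lag in range(1, max_lag + 1):
        for width in range(1, max_width + 1):
            start = lag + width - 1
            if start >= n:
                continue
            # Average the weather series over the lagged interval.
            w = np.array([weather[t - start:t - lag + 1].mean()
                          for t in range(start, n)])
            ccm[(lag, width)] = spearman(counts[start:], w)
    return ccm
```

The (lag, width) cell with the strongest correlation then nominates the lagged predictor to feed into the regression model.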
Modelling volatility recurrence intervals in the Chinese commodity futures market
NASA Astrophysics Data System (ADS)
Zhou, Weijie; Wang, Zhengxin; Guo, Haiming
2016-09-01
The laws governing the occurrence of extreme events attract much research. We study the volatility recurrence intervals of Chinese commodity futures market prices and find that the probability distributions of the scaled volatility recurrence intervals collapse onto a uniform scaling curve for different thresholds q, so the probability distribution of extreme events can be deduced from that of normal events. The tail of the scaling curve is well fitted by a Weibull form, which passes Kolmogorov-Smirnov (KS) significance tests. Both short-term and long-term memories are present in the recurrence intervals for different thresholds q, which indicates that the recurrence intervals are predictable. In addition, like volatility itself, volatility recurrence intervals exhibit clustering. Through Monte Carlo simulation, we artificially synthesise ARMA and GARCH-class sequences similar to the original data and identify the source of the clustering: the larger the parameter d of the FIGARCH model, the stronger the clustering effect. Finally, we use the Fractionally Integrated Autoregressive Conditional Duration (FIACD) model to analyse the recurrence-interval characteristics. The results indicate that the FIACD model may provide a method for analysing volatility recurrence intervals.
Weighted regression analysis and interval estimators
Donald W. Seegrist
1974-01-01
A method is presented for deriving the weighted least squares estimators for the parameters of a multiple regression model. Confidence intervals for expected values, and prediction intervals for the means of future samples, are given.
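The weighted least squares estimator and a confidence interval for an expected value can be sketched as follows; a minimal example with the error variances assumed known and synthetic data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Weighted least squares sketch: beta = (X'WX)^{-1} X'Wy with W = diag(1/var_i),
# plus a 95% interval for the expected value at a new point.
n = 60
x = np.linspace(0, 10, n)
X = np.column_stack([np.ones(n), x])
var = 0.5 + 0.3 * x                         # heteroscedastic error variances
y = 2.0 + 1.5 * x + rng.normal(0.0, np.sqrt(var))

W = np.diag(1.0 / var)
XtWX = X.T @ W @ X
beta = np.linalg.solve(XtWX, X.T @ W @ y)

x0 = np.array([1.0, 5.0])                   # expected value at x = 5
se = np.sqrt(x0 @ np.linalg.inv(XtWX) @ x0) # (X'WX)^{-1} is Cov(beta) when W = Sigma^{-1}
ci = (x0 @ beta - 1.96 * se, x0 @ beta + 1.96 * se)
print(beta, ci)
```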
Waynforth, David
2015-10-01
Human birth interval length is indicative of the level of parental investment that a child will receive: a short interval following birth means that parental resources must be split with a younger sibling during a period when the older sibling remains highly dependent on their parents. From a life-history theoretical perspective, it is likely that there are evolved mechanisms that serve to maximize fitness depending on context. One context that would be expected to result in short birth intervals, and lowered parental investment, is after a child with low expected fitness is born. Here, data drawn from a longitudinal British birth cohort study were used to test whether birth intervals were shorter following the birth of a child with a long-term health problem. Data on the timing of 4543 births were analysed using discrete-time event history analysis. The results were consistent with the hypothesis: birth intervals were shorter following the birth of a child diagnosed by a medical professional with a severe but non-fatal medical condition. Covariates in the analysis were also significantly associated with birth interval length: births of twins or multiple births, and relationship break-up were associated with significantly longer birth intervals. © 2015 The Author(s).
Tocci, Giuliano; Giuliani, Manuela; Canichella, Flaminia; Timpano, Jacopo; Presta, Vivianne; Francia, Pietro; Musumeci, Maria Beatrice; Fubelli, Federica; Pozzilli, Carlo; Volpe, Massimo; Ferrucci, Andrea
2016-10-15
FTY720 (Fingolimod) is an immunosuppressive drug that provides favourable effects in patients with multiple sclerosis (MS), although it induces heart rate (HR) and blood pressure (BP) reductions. We therefore tested potential factors able to predict the HR response in MS patients treated with fingolimod. We analysed patients with MS followed at our Neurology Outpatient Clinic from May 2013 to June 2015. All patients underwent BP measurements and 12-lead ECG before and 6 h after drug administration. At these time intervals, conventional and new ECG indexes of cardiac damage, including the Tp-Te interval, were measured. Univariate and multivariate analyses were performed to test the outcome of HR reduction greater than the median difference between baseline and final observations. 69 outpatients with MS (46 males, age 35.1±9.4 years, BP 119.0±12.7/73.0±9.3 mmHg, HR 73.5±11.4 bpm) were included. No relevant adverse reactions were reported. Fingolimod induced progressive systolic (P=0.024) and diastolic (P<0.001) BP reductions, as well as HR reductions (P<0.001), compared to baseline. Prolonged PQ (150.4±19.5 vs. 157.0±19.5 ms; P<0.001), QT (374.9±27.0 vs. 400.0±25.8 ms; P<0.001) and Tp-Te (1.8±0.3 vs. 1.9±0.3 mm; P=0.021) intervals, and a reduced QTc interval (414.4±24.4 vs. 404.5±24.5 ms; P<0.001), were also recorded at the final observation. Baseline HR, QT and Tp-Te intervals provided prognostic information at univariate analysis, although the Tp-Te interval emerged as the best independent predictor of HR reduction at multivariate analysis [0.057 (0.005-0.660); P=0.022]. This study is the first to demonstrate that a prolonged Tp-Te interval may identify MS patients treated with fingolimod at higher risk of significant, asymptomatic HR reduction during clinical observation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Steiner, Genevieve Z.; Barry, Robert J.; Gonsalvez, Craig J.
2016-01-01
In oddball tasks, increasing the time between stimuli within a particular condition (target-to-target interval, TTI; nontarget-to-nontarget interval, NNI) systematically enhances N1, P2, and P300 event-related potential (ERP) component amplitudes. This study examined the mechanism underpinning these effects in ERP components recorded from 28 adults who completed a conventional three-tone oddball task. Bivariate correlations, partial correlations and multiple regression explored component changes due to preceding ERP component amplitudes and intervals found within the stimulus series, rather than constraining the task with experimentally constructed intervals, which has been adequately explored in prior studies. Multiple regression showed that for targets, N1 and TTI predicted N2, TTI predicted P3a and P3b, and Processing Negativity (PN), P3b, and TTI predicted reaction time. For rare nontargets, P1 predicted N1, NNI predicted N2, and N1 predicted Slow Wave (SW). Findings show that the mechanism is operating on separate stages of stimulus-processing, suggestive of either increased activation within a number of stimulus-specific pathways, or very long component generator recovery cycles. These results demonstrate the extent to which matching-stimulus intervals influence ERP component amplitudes and behavior in a three-tone oddball task, and should be taken into account when designing similar studies. PMID:27445774
Burliaeva, E V; Tarkhov, A E; Burliaev, V V; Iurkevich, A M; Shvets, V I
2002-01-01
The search for new anti-HIV agents remains crucial. In general, researchers look for inhibitors of key HIV enzymes, especially reverse transcriptase (RT) inhibitors. The modern generation of anti-HIV agents comprises non-nucleoside reverse transcriptase inhibitors (NNRTIs). They are much less toxic than nucleoside analogues and more chemically stable, and are thus metabolized and eliminated from the body more slowly. The search for new NNRTIs therefore remains relevant. Synthesis and study of new anti-HIV drugs is very expensive, so employing activity prediction techniques in such a search is very beneficial. These techniques predict the activities of newly proposed structures, based on a property model built from a series of known compounds with measured activity. This paper presents an approach to activity prediction based on "structure-activity" models designed to form a hypothesis about a probable interval estimate of the activity. The hypothesis is based on structure descriptor domains calculated for all energetically allowed conformers of each compound in the studied set. Tetrahydroimidazobenzodiazepinone (TIBO) derivatives and phenethylthiazolylthiourea (PETT) derivatives illustrate the predictive power of this method. The results are consistent with experimental data and allow prediction of the inhibitory activity of compounds that were not included in the training set.
van Daalen, Marjolijn A; de Kat, Dorothée S; Oude Grotebevelsborg, Bernice F L; de Leeuwe, Roosje; Warnaar, Jeroen; Oostra, Roelof Jan; M Duijst-Heesters, Wilma L J
2017-03-01
This study aimed to develop an aquatic decomposition scoring (ADS) method and investigated the predictive value of this method in estimating the postmortem submersion interval (PMSI) of bodies recovered from the North Sea. The method, consisting of an ADS item list and a pictorial reference atlas, showed high interobserver agreement (Krippendorff's alpha ≥ 0.93) and hence proved to be valid. It was applied to data collected from closed cases, i.e. cases in which the PMSI was known, concerning bodies recovered from the North Sea from 1990 to 2013. Thirty-eight cases met the inclusion criteria and were scored by quantifying the observed total aquatic decomposition score (TADS). Statistical analysis demonstrated that TADS accurately predicts the PMSI (p < 0.001), confirming that the decomposition process in the North Sea is strongly correlated with time. © 2017 American Academy of Forensic Sciences.
Doré, Bruce P; Meksin, Robert; Mather, Mara; Hirst, William; Ochsner, Kevin N
2016-06-01
In the aftermath of a national tragedy, important decisions are predicated on judgments of the emotional significance of the tragedy in the present and future. Research in affective forecasting has largely focused on ways in which people fail to make accurate predictions about the nature and duration of feelings experienced in the aftermath of an event. Here we ask a related but understudied question: can people forecast how they will feel in the future about a tragic event that has already occurred? We found that people were strikingly accurate when predicting how they would feel about the September 11 attacks over 1-, 2-, and 7-year prediction intervals. Although people slightly under- or overestimated their future feelings at times, they nonetheless showed high accuracy in forecasting (a) the overall intensity of their future negative emotion, and (b) the relative degree of different types of negative emotion (i.e., sadness, fear, or anger). Using a path model, we found that the relationship between forecasted and actual future emotion was partially mediated by current emotion and remembered emotion. These results extend theories of affective forecasting by showing that emotional responses to an event of ongoing national significance can be predicted with high accuracy, and by identifying current and remembered feelings as independent sources of this accuracy. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Prediction of future asset prices
NASA Astrophysics Data System (ADS)
Seong, Ng Yew; Hin, Pooi Ah; Ching, Soo Huei
2014-12-01
This paper attempts to incorporate trading volume as an additional predictor of asset prices. Denoting by r(t) the vector consisting of the time-t values of the trading volume and price of a given asset, we model the time-(t+1) asset price as dependent on the present and l-1 past values r(t), r(t-1), …, r(t-l+1) via a conditional distribution derived from a (2l+1)-dimensional power-normal distribution. A prediction interval based on the 100(α/2)% and 100(1-α/2)% points of the conditional distribution is then obtained. By examining the average lengths of the prediction intervals found using the composite indices of the Malaysian stock market for the period 2008 to 2013, we found that the value 2 appears to be a good choice for l. With the omission of the trading volume from the vector r(t), the corresponding prediction interval exhibits a slightly longer average length, showing that it may be desirable to keep trading volume as a predictor. From the above conditional distribution, the probability that the time-(t+1) asset price will be larger than the time-t asset price is next computed. When this probability differs from 0 (or 1) by less than 0.03, the observed time-(t+1) price change tends to be negative (or positive). Thus the above probability has good potential for use as a market indicator in technical analysis.
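The interval construction described above reduces to taking the 100(α/2)% and 100(1-α/2)% points of the conditional distribution of the next price. A generic sketch, with plain normal draws standing in for the paper's power-normal conditional distribution:

```python
import numpy as np

rng = np.random.default_rng(3)

# 100(1-alpha)% prediction interval from the alpha/2 and 1-alpha/2 quantiles
# of a (here simulated) conditional distribution of the next price.
# The location and scale are illustrative, not fitted to any market data.
alpha = 0.05
next_price = rng.normal(loc=100.0, scale=2.0, size=100_000)

lo, hi = np.quantile(next_price, [alpha / 2, 1 - alpha / 2])
print(lo, hi)   # roughly 100 - 3.92 and 100 + 3.92
```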
NASA Astrophysics Data System (ADS)
Tesoriero, A. J.; Terziotti, S.
2014-12-01
Nitrate trends in streams often do not match expectations based on recent nitrogen source loadings to the land surface. Groundwater discharge with long travel times has been suggested as the likely cause for these observations. The fate of nitrate in groundwater depends to a large extent on the occurrence of denitrification along flow paths. Because denitrification in groundwater is inhibited when dissolved oxygen (DO) concentrations are high, defining the oxic-suboxic interface has been critical in determining pathways for nitrate transport in groundwater and to streams at the local scale. Predicting redox conditions on a regional scale is complicated by the spatial variability of reaction rates. In this study, logistic regression and boosted classification tree analysis were used to predict the probability of oxic water in groundwater in the Chesapeake Bay watershed. The probability of oxic water (DO > 2 mg/L) was predicted by relating DO concentrations in over 3,000 groundwater samples to indicators of residence time and/or electron donor availability. Variables that describe position in the flow system (e.g., depth to top of the open interval), soil drainage and surficial geology were the most important predictors of oxic water. Logistic regression and boosted classification tree analysis correctly predicted the presence or absence of oxic conditions in over 75% of the samples in both training and validation data sets. Predictions of the percentages of oxic wells in deciles of risk were very accurate (r² > 0.9) in both the training and validation data sets. Depth to the bottom of the oxic layer was predicted and is being used to estimate the effect that groundwater denitrification has on stream nitrate concentrations and the time lag between the application of nitrogen at the land surface and its effect on streams.
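The logistic-regression step described above can be sketched as follows; the single depth predictor, the coefficients, and the synthetic data are illustrative assumptions, not the study's fitted model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Minimal logistic-regression sketch for P(oxic), i.e. P(DO > 2 mg/L),
# with depth of the open interval as the single (hypothetical) predictor.
n = 500
depth = rng.uniform(0.0, 100.0, n)                    # metres below surface
p_true = 1.0 / (1.0 + np.exp(-(3.0 - 0.06 * depth)))  # shallow => likely oxic
oxic = (rng.random(n) < p_true).astype(float)

z = (depth - depth.mean()) / depth.std()              # standardize for stability
X = np.column_stack([np.ones(n), z])
beta = np.zeros(2)
for _ in range(50):                                   # Newton-Raphson MLE
    p = 1.0 / (1.0 + np.exp(-(X @ beta)))
    grad = X.T @ (oxic - p)
    hess = X.T @ (X * (p * (1.0 - p))[:, None])
    beta = beta + np.linalg.solve(hess, grad)

# predicted probability of oxic water for a shallow (10 m) well
z10 = (10.0 - depth.mean()) / depth.std()
prob_shallow = 1.0 / (1.0 + np.exp(-(beta[0] + beta[1] * z10)))
print(beta, prob_shallow)   # negative depth coefficient; high shallow probability
```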
How Children Use Examples to Make Conditional Predictions
ERIC Educational Resources Information Center
Kalish, Charles W.
2010-01-01
Two experiments explored children's and adults' use of examples to make conditional predictions. In Experiment 1, adults (N = 20) but not 4-year-olds (N = 21) or 8-year-olds (N = 18) distinguished predictable from unpredictable features when features were partially correlated (e.g., necessary but not sufficient). Children did make reliable…
ERIC Educational Resources Information Center
Woodruff-Pak, Diana S.; Seta, Susan E.; Roker, LaToya A.; Lehr, Melissa A.
2007-01-01
The aim of this study was to examine parameters affecting age differences in eyeblink classical conditioning in a large sample of young and middle-aged rabbits. A total of 122 rabbits of mean ages of 4 or 26 mo were tested at inter-stimulus intervals (ISIs) of 600 or 750 msec in the delay or trace paradigms. Paradigm affected both age groups…
Grover, Anita; Benet, Leslie Z.
2013-01-01
Intermittent drug dosing intervals are usually initially guided by the terminal pharmacokinetic half-life and are dependent on drug formulation. For chronic multiple dosing and for extended-release dosage forms, the terminal half-life often does not predict the plasma drug accumulation or fluctuation observed. We define and advance applications for the operational multiple-dosing half-lives for drug accumulation and fluctuation after multiple oral dosing at steady state. Using Monte Carlo simulation, our results predict a way to maximize the operational multiple-dosing half-lives relative to the terminal half-life by using a first-order absorption rate constant close to the terminal elimination rate constant in the design of extended-release dosage forms. In this way, drugs that might be eliminated early in the development pipeline because of a relatively short half-life can be formulated to be dosed at intervals three times the terminal half-life, maximizing compliance while maintaining tight plasma concentration accumulation and fluctuation ranges. We also present situations in which the operational multiple-dosing half-lives will be especially relevant in the determination of dosing intervals, including for drugs that follow a direct PKPD model and have a narrow therapeutic index, as the rate of concentration decrease after chronic multiple dosing (which is not the terminal half-life) can be determined via simulation. These principles are illustrated with case studies on valproic acid, diazepam, and anti-hypertensives. PMID:21499748
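For context, the standard one-compartment steady-state accumulation ratio shows why dosing at three times the half-life keeps accumulation modest; the numbers below are illustrative, not from the paper.

```python
import numpy as np

# Steady-state accumulation ratio for a one-compartment model:
# R = 1 / (1 - exp(-k_e * tau)), with elimination rate k_e = ln(2) / t_half.
t_half = 8.0                        # terminal half-life, hours (illustrative)
k_e = np.log(2.0) / t_half          # elimination rate constant
for tau in (8.0, 24.0):             # dose every 1x vs. 3x the half-life
    R = 1.0 / (1.0 - np.exp(-k_e * tau))
    print(tau, round(R, 2))         # 8.0 -> 2.0, 24.0 -> 1.14
```

Dosing every half-life doubles the trough-to-dose exposure (R = 2), while dosing every three half-lives keeps accumulation near 14%.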
Chandran, S; Parker, F; Lontos, S; Vaughan, R; Efthymiou, M
2015-12-01
Polyps identified at colonoscopy are predominantly diminutive (≤5 mm), with a small risk (<1%) of high-grade dysplasia or carcinoma; however, the cost of histological assessment is substantial. The aim of this study was to determine whether prediction of colonoscopy surveillance intervals based on real-time endoscopic assessment of polyp histology is accurate and cost-effective. A prospective cohort study was conducted across a tertiary care and a private community hospital. Ninety-four patients underwent colonoscopy and polypectomy of diminutive (≤5 mm) polyps from October 2012 to July 2013, yielding a total of 159 polyps. Polyps were examined and classified according to the Sano-Emura classification system. The endoscopic assessment (optical diagnosis) of polyp histology was used to predict appropriate colonoscopy surveillance intervals. The main outcome measure was the accuracy of optical diagnosis of diminutive colonic polyps against the gold standard of histological assessment. Optical diagnosis was correct in 105/108 (97.2%) adenomas. This yielded sensitivity, specificity, and positive and negative predictive values (with 95% CI) of 97.2% (92.1-99.4%), 78.4% (64.7-88.7%), 90.5% (83.7-95.2%) and 93% (80.9-98.5%), respectively. Ninety-two (98%) patients were correctly triaged to their repeat surveillance colonoscopy. Based on these findings, a cut-and-discard approach would have resulted in a saving of $319.77 per patient. Endoscopists within a tertiary care setting can accurately predict diminutive polyp histology and confer an appropriate surveillance interval with an associated financial benefit to the healthcare system. However, limitations to its application in the community setting exist, which may improve with further training and high-definition colonoscopes. © 2015 Royal Australasian College of Physicians.
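The reported accuracy figures can be reconstructed from the counts in the abstract; the non-adenoma counts below are inferred from the 159-polyp total and the reported specificity.

```python
# 105 of 108 adenomas called correctly; 159 polyps in total leaves 51
# non-adenomas, of which 78.4% (i.e. 40) were called correctly.
tp, fn = 105, 3          # adenomas: optical diagnosis right / wrong
tn, fp = 40, 11          # non-adenomas (inferred): right / wrong

sensitivity = tp / (tp + fn)           # 0.972
specificity = tn / (tn + fp)           # 0.784
ppv = tp / (tp + fp)                   # 0.905
npv = tn / (tn + fn)                   # 0.930
print(sensitivity, specificity, ppv, npv)
```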
Pananceau, M; Rispal-Padel, L
2000-06-01
In classic conditioning, the interstimulus interval (ISI) between the conditioned (CS) and unconditioned (US) stimulus is a critical parameter. The aim of the present experiment was to assess whether, during conditioning, modification of the CS-US interval could reliably produce changes in the functional properties of the interposito-thalamo-cortical pathways (INTCps). Five cats were prepared for chronic stimulation and recording from several brain regions along this pathway in awake animals. The CS was a weak electric shock applied on the interposed nucleus of the cerebellum in sites that initially elicited forelimb flexion (i.e., alpha motor responses) in three cats, and equal proportions of flexor and extensor responses in two cats. The US was an electric shock applied on the skin that elicited forelimb flexions. The motor and neurobiological effects of synchronous CS-US were compared with pairings in which the CS was applied 100 ms before US. Simultaneous and sequential application of CS and US produced different behavioral outcomes and resulted in different neural processes in the interposito-thalamo-cortical pathways (INTCps). The simultaneous presentation of stimuli only produced a small increase in excitability spreading to all the body representational zones of the primary motor cortex and a weak increase in the amplitude of the alpha motor response. In contrast, the sequential application led to a profound modification of the interposed output to neurons in the forelimb representation of the motor cortex. These robust neuronal correlates of conditioning were accompanied by a large facilitation of the alpha motor response (alpha-MR). There were also changes in the direction of misdirected alpha responses and an emergence of functionally appropriate, long-latency withdrawal forelimb flexion. These data revealed that, during conditioning, plastic changes within the thalamocortical connections are selectively induced by sequential information from central and
Setting the Revisit Interval in Primary Care
Schwartz, Lisa M; Woloshin, Steven; Wasson, John H; Renfrew, Roger A; Welch, H Gilbert
1999-01-01
OBJECTIVE Although longitudinal care constitutes the bulk of primary care, physicians receive little guidance on the fundamental question of how to time follow-up visits. We sought to identify important predictors of the revisit interval and to describe the variability in how physicians set these intervals when caring for patients with common medical conditions. DESIGN Cross-sectional survey of physicians performed at the end of office visits for consecutive patients with hypertension, angina, diabetes, or musculoskeletal pain. PARTICIPANTS/SETTING One hundred sixty-four patients under the care of 11 primary care physicians in the Dartmouth Primary Care Cooperative Research Network. MEASUREMENTS The main outcome measures were the variability in mean revisit intervals across physicians and the proportion of explained variance by potential determinants of revisit intervals. We assessed the relation between the revisit interval (dependent variable) and three groups of independent variables, patient characteristics (e.g., age, physician perception of patient health), identification of individual physician, and physician characterization of the visit (e.g., routine visit, visit requiring a change in management, or visit occurring on a “hectic” day), using multiple regression that accounted for the natural grouping of patients within physician. MAIN RESULTS Revisit intervals ranged from 1 week to over 1 year. The most common intervals were 12 and 16 weeks. Physicians’ perception of fair-poor health status and visits involving a change in management were most strongly related to shorter revisit intervals. In multivariate analyses, patient characteristics explained about 18% of the variance in revisit intervals, and adding identification of the individual provider doubled the explained variance to about 40%. Physician characterization of the visit increased explained variance to 57%. The average revisit interval adjusted for patient characteristics for each of the 11
Modeling stream fish distributions using interval-censored detection times.
Ferreira, Mário; Filipe, Ana Filipa; Bardos, David C; Magalhães, Maria Filomena; Beja, Pedro
2016-08-01
Controlling for imperfect detection is important for developing species distribution models (SDMs). Occupancy-detection models based on the time needed to detect a species can be used to address this problem, but this is hindered when times to detection are not known precisely. Here, we extend the time-to-detection model to deal with detections recorded in time intervals and illustrate the method using a case study on stream fish distribution modeling. We collected electrofishing samples of six fish species across a Mediterranean watershed in Northeast Portugal. Based on a Bayesian hierarchical framework, we modeled the probability of water presence in stream channels, and the probability of species occupancy conditional on water presence, in relation to environmental and spatial variables. We also modeled time-to-first detection conditional on occupancy in relation to local factors, using modified interval-censored exponential survival models. Posterior distributions of occupancy probabilities derived from the models were used to produce species distribution maps. Simulations indicated that the modified time-to-detection model provided unbiased parameter estimates despite interval-censoring. There was a tendency for spatial variation in detection rates to be primarily influenced by depth and, to a lesser extent, stream width. Species occupancies were consistently affected by stream order, elevation, and annual precipitation. Bayesian P-values and AUCs indicated that all models had adequate fit and high discrimination ability, respectively. Mapping of predicted occupancy probabilities showed widespread distribution by most species, but uncertainty was generally higher in tributaries and upper reaches. The interval-censored time-to-detection model provides a practical solution to model occupancy-detection when detections are recorded in time intervals. This modeling framework is useful for developing SDMs while controlling for variation in detection rates, as it
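The modified interval-censored likelihood described above can be sketched as follows: under an exponential detection-time model with rate λ, the probability of first detection in (a, b] is exp(-λa) - exp(-λb). The 1-minute binning and the data here are illustrative assumptions, not the study's hierarchical model.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)

# Simulate exact detection times, then censor them to 1-minute bins,
# and recover the rate by maximizing the interval-censored likelihood.
lam_true = 0.8
t = rng.exponential(1.0 / lam_true, size=1000)
a = np.floor(t)          # lower bin edge (minutes)
b = a + 1.0              # upper bin edge

def neg_log_lik(lam):
    # P(detection in (a, b]) under an exponential(lam) detection time
    return -np.sum(np.log(np.exp(-lam * a) - np.exp(-lam * b)))

fit = minimize_scalar(neg_log_lik, bounds=(0.01, 10.0), method='bounded')
print(fit.x)   # close to lam_true = 0.8
```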
Lengyel, Csaba; Orosz, Andrea; Hegyi, Péter; Komka, Zsolt; Udvardy, Anna; Bosnyák, Edit; Trájer, Emese; Pavlik, Gábor; Tóth, Miklós; Wittmann, Tibor; Papp, Julius Gy.; Varró, András; Baczkó, István
2011-01-01
Background Sudden cardiac death in competitive athletes is rare, but it is significantly more frequent than in the normal population. The exact cause is seldom established and is mostly attributed to ventricular fibrillation. Myocardial hypertrophy and slow heart rate, both characteristic changes in top athletes in response to physical conditioning, could be associated with increased propensity for ventricular arrhythmias. We investigated conventional ECG parameters and the temporal short-term beat-to-beat variability of repolarization (STVQT), a presumptive novel parameter for arrhythmia prediction, in professional soccer players. Methods Five-minute 12-lead electrocardiograms were recorded from professional soccer players (n = 76, all males, age 22.0±0.61 years) and age-matched healthy volunteers who do not participate in competitive sports (n = 76, all males, age 22.0±0.54 years). The ECGs were digitized and evaluated off-line. The temporal instability of beat-to-beat heart rate and repolarization was characterized by calculating the short-term variability of the RR and QT intervals. Results Heart rate was significantly lower in professional soccer players at rest (61±1.2 vs. 72±1.5/min in controls). The QT interval was prolonged in players at rest (419±3.1 vs. 390±3.6 ms in controls, p<0.001). QTc was significantly longer in players than in controls when calculated with the Fridericia and Hodges correction formulas. Importantly, STVQT was significantly higher in players both at rest and immediately after the game compared to controls (4.8±0.14 and 4.3±0.14 vs. 3.5±0.10 ms, both p<0.001, respectively). Conclusions STVQT is significantly higher in professional soccer players than in age-matched controls; however, further studies are needed to relate this finding to increased arrhythmia propensity in this population. PMID:21526208
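The two correction formulas named above have standard definitions (Fridericia: QTc = QT/RR^(1/3); Hodges: QTc = QT + 1.75(HR - 60), with QT in ms and RR in s); applying them to the resting means reported above reproduces the longer corrected QT in players.

```python
def qtc_fridericia(qt_ms, hr_bpm):
    # Fridericia correction: QTc = QT / RR^(1/3), RR in seconds
    rr_s = 60.0 / hr_bpm
    return qt_ms / rr_s ** (1.0 / 3.0)

def qtc_hodges(qt_ms, hr_bpm):
    # Hodges correction: QTc = QT + 1.75 * (HR - 60)
    return qt_ms + 1.75 * (hr_bpm - 60.0)

players = (419.0, 61.0)    # mean resting QT (ms) and HR (bpm) from the abstract
controls = (390.0, 72.0)
print(qtc_hodges(*players), qtc_hodges(*controls))   # 420.75 411.0
```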
Stirling, Aaron D; Moran, Neil R; Kelly, Michael E; Ridgway, Paul F; Conlon, Kevin C
2017-10-01
Using revised Atlanta classification defined outcomes, we compare absolute values in C-reactive protein (CRP), with interval changes in CRP, for severity stratification in acute pancreatitis (AP). A retrospective study of all first incidence AP was conducted over a 5-year period. Interval change in CRP values from admission to day 1, 2 and 3 was compared against the absolute values. Receiver-operator characteristic (ROC) curve and likelihood ratios (LRs) were used to compare ability to predict severe and mild disease. 337 cases of first incidence AP were included in our analysis. ROC curve analysis demonstrated the second day as the most useful time for repeat CRP measurement. A CRP interval change >90 mg/dL at 48 h (+LR 2.15, -LR 0.26) was equivalent to an absolute value of >150 mg/dL within 48 h (+LR 2.32, -LR 0.25). The optimal cut-off for absolute CRP based on new, more stringent definition of severity was >190 mg/dL (+LR 2.72, -LR 0.24). Interval change in CRP is a comparable measure to absolute CRP in the prognostication of AP severity. This study suggests a rise of >90 mg/dL from admission or an absolute value of >190 mg/dL at 48 h predicts severe disease with the greatest accuracy. Copyright © 2017 International Hepato-Pancreato-Biliary Association Inc. Published by Elsevier Ltd. All rights reserved.
Early photosensitizer uptake kinetics predict optimum drug-light interval for photodynamic therapy
NASA Astrophysics Data System (ADS)
Sinha, Lagnojita; Elliott, Jonathan T.; Hasan, Tayyaba; Pogue, Brian W.; Samkoe, Kimberley S.; Tichauer, Kenneth M.
2015-03-01
Photodynamic therapy (PDT) has shown promising results in the targeted treatment of cancerous cells by developing localized toxicity with the help of light-induced generation of reactive molecular species. The efficiency of this therapy depends on the product of the light dose intensity and the concentration of photosensitizer (PS) in the region of interest (ROI). The dynamic and variable nature of PS delivery and retention depends on many physiological factors that are known to be heterogeneous within and amongst tumors (e.g., blood flow, blood volume, vascular permeability, and lymph drainage rate). This presents a major challenge with respect to how the optimal time and interval of light delivery is chosen, which ideally would be when the concentration of PS is at its maximum in the ROI. In this paper, a predictive algorithm is developed that takes into consideration the variability and dynamic nature of PS distribution in the body on a region-by-region basis and provides an estimate of the optimum time at which the PS concentration will be maximal in the ROI. The advantage of the algorithm is that it predicts this time in advance, as it takes only a sample of initial data points (~12 min) as input. The optimum time calculated using the algorithm estimated a maximum dose that was only 0.58 ± 1.92% under the true maximum dose, compared with a mean dose error of 39.85 ± 6.45% if a 1 h optimal light delivery time was assumed for patients with different efflux rate constants of the PS, assuming they have the same plasma function. Therefore, if the uptake values of PS for the blood and the ROI are known for only the first 12 minutes, the entire curve, along with the optimum time of light irradiation, can be predicted with the help of this algorithm.
Filgueiras, Paulo R; Terra, Luciana A; Castro, Eustáquio V R; Oliveira, Lize M S L; Dias, Júlio C M; Poppi, Ronei J
2015-09-01
This paper aims to estimate the temperature equivalent to 10% (T10%), 50% (T50%) and 90% (T90%) of distilled volume in crude oils using ¹H NMR and support vector regression (SVR). Confidence intervals for the predicted values were calculated using a boosting-type ensemble method in a procedure called ensemble support vector regression (eSVR). The estimated confidence intervals obtained by eSVR were compared with previously accepted calculations from partial least squares (PLS) models and a boosting-type ensemble applied in the PLS method (ePLS). By using the proposed boosting strategy, it was possible to identify outliers in the T10% property dataset. The eSVR procedure improved the accuracy of the distillation temperature predictions in relation to standard PLS, ePLS and SVR. For T10%, a root mean square error of prediction (RMSEP) of 11.6°C was obtained in comparison with 15.6°C for PLS, 15.1°C for ePLS and 28.4°C for SVR. The RMSEPs for T50% were 24.2°C, 23.4°C, 22.8°C and 14.4°C for PLS, ePLS, SVR and eSVR, respectively. For T90%, the values of RMSEP were 39.0°C, 39.9°C and 39.9°C for PLS, ePLS, SVR and eSVR, respectively. The confidence intervals calculated by the proposed boosting methodology presented acceptable values for the three properties analyzed; however, they were lower than those calculated by the standard methodology for PLS. Copyright © 2015 Elsevier B.V. All rights reserved.
QCT/FEA predictions of femoral stiffness are strongly affected by boundary condition modeling
Rossman, Timothy; Kushvaha, Vinod; Dragomir-Daescu, Dan
2015-01-01
Quantitative computed tomography-based finite element models of proximal femora must be validated with cadaveric experiments before being used to assess fracture risk in osteoporotic patients. During validation it is essential to carefully assess whether the boundary condition modeling matches the experimental conditions. This study evaluated proximal femur stiffness results predicted by six different boundary condition methods on a sample of 30 cadaveric femora and compared the predictions with experimental data. The average stiffness varied by 280% among the six boundary conditions. Compared with the experimental data, the predictions ranged from overestimating the average stiffness by 65% to underestimating it by 41%. In addition, we found that the boundary condition that distributed the load to the contact surfaces in a manner similar to the expected contact mechanics yielded predictions with the best agreement with experimental stiffness. We concluded that boundary condition modeling introduced large variations in proximal femur stiffness predictions. PMID:25804260
Liu, Datong; Peng, Yu; Peng, Xiyuan
2018-01-01
Effective anomaly detection of sensing data is essential for identifying potential system failures. Because they require no prior knowledge or accumulated labels, and provide a representation of uncertainty, probability prediction methods (e.g., Gaussian process regression (GPR) and the relevance vector machine (RVM)) are especially adaptable for performing anomaly detection on sensing series. Generally, one key parameter of prediction models is the coverage probability (CP), which controls the judging threshold of the testing sample and is generally set to a default value (e.g., 90% or 95%). There are few criteria for determining the optimal CP for anomaly detection. Therefore, this paper designs a graphic indicator, the receiver operating characteristic curve of the prediction interval (ROC-PI), based on the definition of the ROC curve, which can depict the trade-off between the PI width and the PI coverage probability across a series of cut-off points. Furthermore, the Youden index is modified to assess the performance of different CPs; the optimal CP is derived by minimizing this index with the simulated annealing (SA) algorithm. Experiments conducted on two simulation datasets demonstrate the validity of the proposed method. In particular, an actual case study on sensing series from an on-orbit satellite illustrates its significant performance in practical application. PMID:29587372
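Assuming a Gaussian predictive distribution (as GPR provides), the role of the coverage probability as a judging threshold can be sketched as follows. The function names and values are illustrative, not from the paper:

```python
# Hedged sketch: flag a sample as anomalous when it falls outside a
# Gaussian predictive interval at coverage probability `cp`.
from statistics import NormalDist

def prediction_interval(mean, std, cp):
    """Two-sided interval covering probability `cp` under N(mean, std^2)."""
    z = NormalDist().inv_cdf(0.5 + cp / 2.0)
    return mean - z * std, mean + z * std

def is_anomaly(y_obs, mean, std, cp=0.95):
    lo, hi = prediction_interval(mean, std, cp)
    return not (lo <= y_obs <= hi)

# A sample three predictive standard deviations from the mean is anomalous
# at 95% coverage; a sample one standard deviation away is not:
print(is_anomaly(3.0, 0.0, 1.0, cp=0.95))  # True
print(is_anomaly(1.0, 0.0, 1.0, cp=0.95))  # False
```

Raising CP widens the interval (fewer false alarms, more missed anomalies); the ROC-PI indicator described above formalizes exactly this trade-off.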
Literature-based condition-specific miRNA-mRNA target prediction.
Oh, Minsik; Rhee, Sungmin; Moon, Ji Hwan; Chae, Heejoon; Lee, Sunwon; Kang, Jaewoo; Kim, Sun
2017-01-01
miRNAs are small non-coding RNAs that regulate gene expression by binding to the 3'-UTR of genes. Many recent studies have reported that miRNAs play important biological roles by regulating specific mRNAs or genes. Many sequence-based target prediction algorithms have been developed to predict miRNA targets. However, these methods are not designed for condition-specific target predictions and produce many false positives; thus, expression-based target prediction algorithms have been developed for condition-specific target predictions. A typical strategy to utilize expression data is to leverage the negative control roles of miRNAs on genes. To control false positives, a stringent cutoff value is typically set, but in this case, these methods tend to reject many true target relationships, i.e., false negatives. To overcome these limitations, additional information should be utilized. The literature is probably the best resource that we can utilize. Recent literature mining systems compile millions of articles with experiments designed for specific biological questions, and the systems provide a function to search for specific information. To utilize the literature information, we used a literature mining system, BEST, that automatically extracts information from the literature in PubMed and that allows the user to perform searches of the literature with any English words. By integrating omics data analysis methods and BEST, we developed Context-MMIA, a miRNA-mRNA target prediction method that combines expression data analysis results and the literature information extracted based on the user-specified context. In the pathway enrichment analysis using genes included in the top 200 miRNA-targets, Context-MMIA outperformed the four existing target prediction methods that we tested. In another test on whether prediction methods can reproduce experimentally validated target relationships, Context-MMIA outperformed the four existing target prediction methods. In summary
Vleugels, Jasper L A; Dijkgraaf, Marcel G W; Hazewinkel, Yark; Wanders, Linda K; Fockens, Paul; Dekker, Evelien
2018-05-01
Real-time differentiation of diminutive polyps (1-5 mm) during endoscopy could replace histopathology analysis. According to guidelines, implementation of optical diagnosis into routine practice would require it to identify rectosigmoid neoplastic lesions with a negative predictive value (NPV) of more than 90%, using histologic findings as a reference, and agreement with histology-based surveillance intervals for more than 90% of cases. We performed a prospective study with 39 endoscopists accredited to perform colonoscopies on participants with positive results from fecal immunochemical tests in the Bowel Cancer Screening Program at 13 centers in the Netherlands. Endoscopists were trained in optical diagnosis using a validated module (Workgroup serrAted polypS and Polyposis). After meeting predefined performance thresholds in the training program, the endoscopists started a 1-year program (continuation phase) in which they performed narrow band imaging analyses during colonoscopies of participants in the screening program and predicted histological findings with confidence levels. The endoscopists were randomly assigned to groups that received feedback or no feedback on the accuracy of their predictions. Primary outcome measures were endoscopists' abilities to identify rectosigmoid neoplastic lesions (using histology as a reference) with NPVs of 90% or more, and selecting surveillance intervals that agreed with those determined by histology for at least 90% of cases. Of 39 endoscopists initially trained, 27 (69%) completed the training program. During the continuation phase, these 27 endoscopists performed 3144 colonoscopies in which 4504 diminutive polyps were removed. The endoscopists identified neoplastic lesions with a pooled NPV of 90.8% (95% confidence interval 88.6-92.6); their proposed surveillance intervals agreed with those determined by histologic analysis for 95.4% of cases (95% confidence interval 94.0-96.6). Findings did not differ between the group
CIACCIO, EDWARD J.; BIVIANO, ANGELO B.; GAMBHIR, ALOK; EINSTEIN, ANDREW J.; GARAN, HASAN
2014-01-01
Background When atrial fibrillation (AF) is incessant, imaging during a prolonged ventricular RR interval may improve image quality. It was hypothesized that long RR intervals could be predicted from preceding RR values. Methods From the PhysioNet database, electrocardiogram RR intervals were obtained from 74 persistent AF patients. An RR interval lengthened by at least 250 ms beyond the immediately preceding RR interval (termed T0 and T1, respectively) was considered prolonged. A two-parameter scatterplot was used to predict the occurrence of a prolonged interval T0. The scatterplot parameters were: (1) RR variability (RRv) estimated as the average second derivative from 10 previous pairs of RR differences, T13–T2, and (2) Tm–T1, the difference between Tm, the mean from T13 to T2, and T1. For each patient, scatterplots were constructed using preliminary data from the first hour. The ranges of parameters 1 and 2 were adjusted to maximize the proportion of prolonged RR intervals within range. These constraints were used for prediction of prolonged RR in test data collected during the second hour. Results The mean prolonged event was 1.0 seconds in duration. Actual prolonged events were identified with a mean positive predictive value (PPV) of 80% in the test set. PPV was >80% in 36 of 74 patients. An average of 10.8 prolonged RR intervals per 60 minutes was correctly identified. Conclusions A method was developed to predict prolonged RR intervals using two parameters and prior statistical sampling for each patient. This or similar methodology may help improve cardiac imaging in many longstanding persistent AF patients. PMID:23998759
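Under one reading of the definitions above, the two scatterplot parameters can be sketched as below: RRv as the mean absolute second difference over T13 through T2, and the difference between the mean of T13 through T2 and T1. The exact computation in the paper may differ; this is an assumption for illustration:

```python
# Hedged sketch of the two scatterplot parameters: input is the 13 most
# recent RR intervals, oldest first (T13 ... T2, then T1 most recent).
def scatter_params(rr):
    """Return (RRv, Tm - T1) from a window of 13 RR intervals."""
    assert len(rr) == 13
    base = rr[:12]                      # T13 .. T2
    # 12 values give 10 second differences, approximating the 2nd derivative
    second_diffs = [base[i + 2] - 2 * base[i + 1] + base[i] for i in range(10)]
    rrv = sum(abs(d) for d in second_diffs) / 10.0
    tm = sum(base) / 12.0               # mean of T13 .. T2
    return rrv, tm - rr[12]             # (RRv, Tm - T1)

# A steadily lengthening rhythm has zero curvature and a negative Tm - T1:
print(scatter_params([700.0 + 10 * i for i in range(13)]))  # (0.0, -65.0)
```

A prolonged T0 would then be predicted when both parameters fall inside the per-patient ranges tuned on the first hour of data.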
Confidence intervals in Flow Forecasting by using artificial neural networks
NASA Astrophysics Data System (ADS)
Panagoulia, Dionysia; Tsekouras, George
2014-05-01
One of the major inadequacies in the implementation of Artificial Neural Networks (ANNs) for flow forecasting is the development of confidence intervals, because the relevant estimation cannot be implemented directly, in contrast to classical forecasting methods. The variation in the ANN output is a measure of uncertainty in the model predictions based on the training data set. Different methods for uncertainty analysis, such as bootstrap, Bayesian, and Monte Carlo, have already been proposed for hydrologic and geophysical models, while methods for confidence intervals, such as error output, re-sampling, and multi-linear regression adapted to ANNs, have been used for power load forecasting [1-2]. The aim of this paper is to present the re-sampling method for ANN prediction models and to develop it for next-day flow forecasting. The re-sampling method is based on ascending sorting of the errors between real and predicted values for all input vectors. The cumulative sample distribution function of the prediction errors is calculated and the confidence intervals are estimated by keeping the intermediate values, rejecting the extreme values according to the desired confidence levels, and holding the intervals symmetrical in probability. To apply the confidence interval approach, input vectors are used from the Mesochora catchment in western-central Greece. The ANN's training algorithm is the stochastic back-propagation process with decreasing functions of learning rate and momentum term, for which an optimization process is conducted over the crucial parameter values, such as the number of neurons, the kind of activation functions, the initial values and time parameters of the learning rate and momentum term, etc. Input variables are historical data of previous days, such as flows, nonlinearly weather-related temperatures and nonlinearly weather-related rainfalls, based on correlation analysis between the flow under prediction and each implicit input
Predicting biological condition in southern California streams
Brown, Larry R.; May, Jason T.; Rehn, Andrew C.; Ode, Peter R.; Waite, Ian R.; Kennen, Jonathan G.
2012-01-01
As understanding of the complex relations among environmental stressors and biological responses improves, a logical next step is predictive modeling of biological condition at unsampled sites. We developed a boosted regression tree (BRT) model of biological condition, as measured by a benthic macroinvertebrate index of biotic integrity (B-IBI), for streams in urbanized Southern Coastal California. We also developed a multiple linear regression (MLR) model as a benchmark for comparison with the BRT model. The BRT model explained 66% of the variance in B-IBI, identifying watershed population density and combined percentage agricultural and urban land cover in the riparian buffer as the most important predictors of B-IBI, but with watershed mean precipitation and watershed density of manmade channels also important. The MLR model explained 48% of the variance in B-IBI and included watershed population density and combined percentage agricultural and urban land cover in the riparian buffer. For a verification data set, the BRT model correctly classified 75% of impaired sites (B-IBI < 40) and 78% of unimpaired sites (B-IBI ≥ 40). For the same verification data set, the MLR model correctly classified 69% of impaired sites and 87% of unimpaired sites. The BRT model should not be used to predict B-IBI for specific sites; however, the model can be useful for general applications such as identifying and prioritizing regions for monitoring, remediation or preservation, stratifying new bioassessments according to anticipated biological condition, or assessing the potential for change in stream biological condition based on anticipated changes in population density and development in stream buffers.
Novel transformation-based response prediction of shear building using interval neural network
NASA Astrophysics Data System (ADS)
Chakraverty, S.; Sahoo, Deepti Moyi
2017-04-01
The present paper uses the powerful technique of interval neural networks (INNs) to simulate and estimate the structural response of multi-storey shear buildings subject to earthquake motion. The INN is first trained on real earthquake data, viz., the ground acceleration as input and the numerically generated responses of different floors of multi-storey buildings as output. To date, no model exists to handle positive and negative data in the INN. As such, the bipolar data in [-1, 1] are first converted to unipolar form, i.e., to [0, 1], by means of a novel transformation, for the first time, to handle the above training patterns in normalized form. Once the training is done, the unipolar data are converted back to bipolar form by using the inverse transformation. The trained INN architecture is then used to simulate and test the structural response of different floors for various-intensity earthquake data, and it is found that the predicted responses given by the INN model are good for practical purposes.
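The paper's transformation is not reproduced here; a simple affine map illustrates the general idea of a bipolar-to-unipolar conversion and its inverse (this map is an assumption for illustration, not the authors' formula):

```python
# Illustrative bipolar <-> unipolar conversion (NOT the paper's novel
# transformation, which is not reproduced here).
def to_unipolar(x):
    """Map x in [-1, 1] to [0, 1]."""
    return (x + 1.0) / 2.0

def to_bipolar(u):
    """Inverse map: u in [0, 1] back to [-1, 1]."""
    return 2.0 * u - 1.0

# Endpoints map to endpoints, and the round trip recovers the input:
print(to_unipolar(-1.0), to_unipolar(1.0))   # 0.0 1.0
print(to_bipolar(to_unipolar(0.5)))          # 0.5
```

Training would then run entirely on the unipolar values, with `to_bipolar` applied to the network outputs before comparing against the physical responses.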
Application of Interval Predictor Models to Space Radiation Shielding
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.; Norman, Ryan B.; Blattnig, Steve R.
2016-01-01
This paper develops techniques for predicting the uncertainty range of an output variable given input-output data. These models are called Interval Predictor Models (IPMs) because they yield an interval-valued function of the input. This paper develops IPMs having a radial basis structure. This structure enables the formal description of (i) the uncertainty in the model's parameters, (ii) the predicted output interval, and (iii) the probability that a future observation would fall in such an interval. In contrast to other metamodeling techniques, this probabilistic certificate of correctness does not require making any assumptions about the structure of the mechanism from which the data are drawn. Optimization-based strategies for calculating IPMs having minimal spread while containing all the data are developed. Constraints for bounding the minimum interval spread over the continuum of inputs, regulating the IPM's variation/oscillation, and centering its spread about a target point are used to prevent data overfitting. Furthermore, we develop an approach for using expert opinion during extrapolation. This metamodeling technique is illustrated using a radiation shielding application for space exploration. In this application, we use IPMs to describe the error incurred in predicting the flux of particles resulting from the interaction between a high-energy incident beam and a target.
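A much-simplified sketch of the "minimal spread while containing all the data" idea: fix a least squares line as the interval's center and take the half-width as the largest absolute residual, so every observation lies inside its predicted interval. The paper instead optimizes a radial basis model's center and spread jointly; the data here are hypothetical:

```python
# Toy interval predictor: linear center from least squares, constant
# half-width chosen as the smallest value that contains all the data.
def fit_linear_ipm(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    a = my - b * mx
    half_width = max(abs(y - (a + b * x)) for x, y in zip(xs, ys))
    return lambda x: (a + b * x - half_width, a + b * x + half_width)

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.1, 0.9, 2.2, 2.8]   # hypothetical observations
predict = fit_linear_ipm(xs, ys)
# By construction, every training point falls inside its predicted interval:
print(all(predict(x)[0] <= y <= predict(x)[1] for x, y in zip(xs, ys)))  # True
```

The paper's formulation additionally bounds the spread over the whole input continuum and attaches a probability that future observations fall inside the interval, which this toy version does not attempt.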
Quan, Hao; Srinivasan, Dipti; Khosravi, Abbas
2015-09-01
Penetration of renewable energy resources, such as wind and solar power, into power systems significantly increases the uncertainties on system operation, stability, and reliability in smart grids. In this paper, the nonparametric neural network-based prediction intervals (PIs) are implemented for forecast uncertainty quantification. Instead of a single level PI, wind power forecast uncertainties are represented in a list of PIs. These PIs are then decomposed into quantiles of wind power. A new scenario generation method is proposed to handle wind power forecast uncertainties. For each hour, an empirical cumulative distribution function (ECDF) is fitted to these quantile points. The Monte Carlo simulation method is used to generate scenarios from the ECDF. Then the wind power scenarios are incorporated into a stochastic security-constrained unit commitment (SCUC) model. The heuristic genetic algorithm is utilized to solve the stochastic SCUC problem. Five deterministic and four stochastic case studies incorporated with interval forecasts of wind power are implemented. The results of these cases are presented and discussed together. Generation costs, and the scheduled and real-time economic dispatch reserves of different unit commitment strategies are compared. The experimental results show that the stochastic model is more robust than deterministic ones and, thus, decreases the risk in system operations of smart grids.
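The quantile-to-scenario step described above can be sketched by piecing together an empirical CDF from the quantile points and drawing Monte Carlo scenarios by inverse-transform sampling. The quantile levels and MW values below are hypothetical, and linear interpolation between quantiles is an assumption:

```python
# Hedged sketch: build an inverse empirical CDF from forecast quantiles,
# then sample wind power scenarios by inverse-transform Monte Carlo.
import random

def make_inverse_cdf(probs, values):
    """probs: sorted quantile levels; values: matching wind-power quantiles."""
    def inv_cdf(u):
        if u <= probs[0]:
            return values[0]
        if u >= probs[-1]:
            return values[-1]
        for i in range(len(probs) - 1):
            if probs[i] <= u <= probs[i + 1]:
                w = (u - probs[i]) / (probs[i + 1] - probs[i])
                return values[i] + w * (values[i + 1] - values[i])
    return inv_cdf

probs = [0.05, 0.25, 0.50, 0.75, 0.95]
quantiles = [10.0, 30.0, 45.0, 60.0, 80.0]   # MW for one hour, hypothetical
inv = make_inverse_cdf(probs, quantiles)

random.seed(0)
scenarios = [inv(random.random()) for _ in range(5)]  # Monte Carlo scenarios
print(inv(0.5))  # 45.0, the median forecast
```

Each hourly set of scenarios would then feed the stochastic SCUC model in place of a single deterministic forecast.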
Royal, M D; Pryce, J E; Woolliams, J A; Flint, A P F
2002-11-01
The decline of fertility in the UK dairy herd and the unfavorable genetic correlation (ra) between fertility and milk yield have necessitated the broadening of breeding goals to include fertility. The coefficient of genetic variation present in fertility is of similar magnitude to that present in production traits; however, traditional measurements of fertility (such as calving interval, days open, nonreturn rate) have low heritability (h2 < 0.05), and recording is often poor, hindering identification of genetically superior animals. An alternative approach is to use endocrine measurements of fertility such as the interval to commencement of luteal activity postpartum (CLA), which has a higher h2 (0.16 to 0.23) and is free from management bias. Although CLA has favorable phenotypic correlations with traditional measures of fertility, if it is to be used in a selection index, the genetic correlation (ra) of this trait with fertility and other components of the index must be estimated. The aim of the analyses reported here was to obtain information on the ra between lnCLA and calving interval (CI), average body condition score (BCS; one to nine, an indicator of energy balance estimated from records taken at different months of lactation), production, and a number of linear type traits. Genetic models were fitted using ASREML, and ra were inferred from genetic regression of lnCLA on sire-predicted transmitting abilities (PTA) for the trait concerned by multiplying the regression coefficient (b) by the ratio of the genetic standard deviations. The inferred ra between lnCLA and CI and average BCS were 0.36 and -0.84, respectively. Genetic correlations between lnCLA and milk, fat and protein yields were all positive and ranged between 0.33 and 0.69. Genetic correlations between lnCLA and linear type traits reflecting body structure ranged from -0.25 to 0.15, and for udder characteristics they ranged from -0.16 to 0.05. Thus, incorporation of endocrine parameters of
A model of interval timing by neural integration.
Simen, Patrick; Balci, Fuat; de Souza, Laura; Cohen, Jonathan D; Holmes, Philip
2011-06-22
We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes, that correlations among them can be largely cancelled by balancing excitation and inhibition, that neural populations can act as integrators, and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys, and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule's predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior.
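The drift-to-threshold reading of the model can be sketched as a noisy linear ramp: a firing-rate variable climbs at drift A toward threshold z, so the mean crossing time is about z/A, and its spread grows with the timed interval. Gaussian noise stands in here for the balanced Poisson input the paper assumes, and all parameters are illustrative:

```python
# Hedged sketch: interval timing as noisy temporal integration to a bound.
import random

def first_crossing_time(drift, noise_sd, threshold=1.0, dt=0.001, rng=None):
    """Simulate one ramp-to-threshold trial; returns the crossing time in s."""
    rng = rng or random
    x, t = 0.0, 0.0
    while x < threshold:
        # Euler step of a drift-diffusion process (Gaussian noise stands
        # in for balanced excitatory/inhibitory Poisson input)
        x += drift * dt + noise_sd * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return t

rng = random.Random(42)
# To time a 2 s interval, the integrator learns drift = threshold / 2.0:
times = [first_crossing_time(0.5, 0.05, rng=rng) for _ in range(200)]
mean_t = sum(times) / len(times)
print(round(mean_t, 1))  # mean crossing time is near z/drift = 2.0 s
```

Because the trial-to-trial spread of the crossing time scales with the mean, this mechanism reproduces the scale-invariant response time distributions the abstract describes.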
Isarida, Takeo; Sakai, Tetsuya; Kubota, Takayuki; Koga, Miho; Katayama, Yu; Isarida, Toshiko K
2014-04-01
The present study investigated context effects of incidental odors in free recall after a short retention interval (5 min). With a short retention interval, the results are not confounded by extraneous odors or encounters with the experimental odor and possible rehearsal during a long retention interval. A short study time condition (4 s per item), predicted not to be affected by adaptation to the odor, and a long study time condition (8 s per item) were used. Additionally, we introduced a new method for recovery from adaptation, where a dissimilar odor was briefly presented at the beginning of the retention interval, and we demonstrated the effectiveness of this technique. An incidental learning paradigm was used to prevent overshadowing from confounding the results. In three experiments, undergraduates (N = 200) incidentally studied words presented one-by-one and received a free recall test. Two pairs of odors and a third odor having different semantic-differential characteristics were selected from 14 familiar odors. One of the odors was presented during encoding, and during the test, the same odor (same-context condition) or the other odor within the pair (different-context condition) was presented. Without using a recovery-from-adaptation method, a significant odor-context effect appeared in the 4-s/item condition, but not in the 8-s/item condition. Using the recovery-from-adaptation method, context effects were found for both the 8- and the 4-s/item conditions. The size of the recovered odor-context effect did not change with study time. There were no serial position effects. Implications of the present findings are discussed.
Retention interval affects visual short-term memory encoding.
Bankó, Eva M; Vidnyánszky, Zoltán
2010-03-01
Humans can efficiently store fine-detailed facial emotional information in visual short-term memory for several seconds. However, an unresolved question is whether the same neural mechanisms underlie high-fidelity short-term memory for emotional expressions at different retention intervals. Here we show that retention interval affects the neural processes of short-term memory encoding using a delayed facial emotion discrimination task. The early sensory P100 component of the event-related potentials (ERP) was larger in the 1-s interstimulus interval (ISI) condition than in the 6-s ISI condition, whereas the face-specific N170 component was larger in the longer ISI condition. Furthermore, the memory-related late P3b component of the ERP responses was also modulated by retention interval: it was reduced in the 1-s ISI as compared with the 6-s condition. The present findings cannot be explained based on differences in sensory processing demands or overall task difficulty because there was no difference in the stimulus information and subjects' performance between the two different ISI conditions. These results reveal that encoding processes underlying high-precision short-term memory for facial emotional expressions are modulated depending on whether information has to be stored for one or for several seconds.
Quantifying uncertainty on sediment loads using bootstrap confidence intervals
NASA Astrophysics Data System (ADS)
Slaets, Johanna I. F.; Piepho, Hans-Peter; Schmitter, Petra; Hilger, Thomas; Cadisch, Georg
2017-01-01
Load estimates are more informative than constituent concentrations alone, as they allow quantification of on- and off-site impacts of environmental processes concerning pollutants, nutrients and sediment, such as soil fertility loss, reservoir sedimentation and irrigation channel siltation. While statistical models used to predict constituent concentrations have been developed considerably over the last few years, measures of uncertainty on constituent loads are rarely reported. Loads are the product of two predictions, constituent concentration and discharge, integrated over a time period, which does not make it straightforward to produce a standard error or a confidence interval. In this paper, a linear mixed model is used to estimate sediment concentrations. A bootstrap method is then developed that accounts for the uncertainty in the concentration and discharge predictions, allows for temporal correlation in the constituent data, and can be used when data transformations are required. The method was tested for a small watershed in Northwest Vietnam for the period 2010-2011. The results showed that confidence intervals were asymmetric, with the highest uncertainty in the upper limit, and that a load of 6262 Mg year-1 had a 95 % confidence interval of (4331, 12 267) in 2010 and a load of 5543 Mg year-1 had an interval of (3593, 8975) in 2011. Additionally, the approach demonstrated that direct estimates from the data were biased downwards compared to bootstrap median estimates. These results imply that constituent loads predicted from regression-type water quality models could frequently be underestimating sediment yields and their environmental impact.
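The basic percentile-bootstrap idea behind such intervals can be sketched as below. This is a plain i.i.d. resample of hypothetical per-event loads, which ignores the temporal correlation and transformation bias the paper's method explicitly accounts for:

```python
# Hedged sketch: percentile bootstrap confidence interval for a total
# annual load, resampling per-event loads with replacement.
import random

def bootstrap_ci(loads, n_boot=2000, confidence=0.95, seed=1):
    rng = random.Random(seed)
    totals = sorted(
        sum(rng.choice(loads) for _ in range(len(loads)))
        for _ in range(n_boot)
    )
    k = int(n_boot * (1.0 - confidence) / 2.0)   # trimmed from each tail
    return totals[k], totals[n_boot - 1 - k]

loads = [120.0, 85.0, 430.0, 60.0, 950.0, 210.0, 75.0, 330.0]  # Mg, hypothetical
lo, hi = bootstrap_ci(loads)
print(lo <= sum(loads) <= hi)  # the point estimate lies inside the interval
```

With skewed load data the resulting interval is typically asymmetric, with the larger margin on the upper side, consistent with the asymmetry reported in the abstract.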
Body Condition Indices Predict Reproductive Success but Not Survival in a Sedentary, Tropical Bird
Milenkaya, Olga; Catlin, Daniel H.; Legge, Sarah; Walters, Jeffrey R.
2015-01-01
Body condition may predict individual fitness because those in better condition have more resources to allocate towards improving their fitness. However, the hypothesis that condition indices are meaningful proxies for fitness has been questioned. Here, we ask if intraspecific variation in condition indices predicts annual reproductive success and survival. We monitored a population of Neochmia phaeton (crimson finch), a sedentary, tropical passerine, for reproductive success and survival over four breeding seasons, and sampled them for commonly used condition indices: mass adjusted for body size, muscle and fat scores, packed cell volume, hemoglobin concentration, total plasma protein, and heterophil to lymphocyte ratio. Our study population is well suited for this research because individuals forage in common areas and do not hold territories such that variation in condition between individuals is not confounded by differences in habitat quality. Furthermore, we controlled for factors that are known to impact condition indices in our study population (e.g., breeding stage) such that we assessed individual condition relative to others in the same context. Condition indices that reflect energy reserves predicted both the probability of an individual fledging young and the number of young produced that survived to independence, but only during some years. Those that were relatively heavy for their body size produced about three times more independent young compared to light individuals. That energy reserves are a meaningful predictor of reproductive success in a sedentary passerine supports the idea that energy reserves are at least sometimes predictors of fitness. However, hematological indices failed to predict reproductive success and none of the indices predicted survival. Therefore, some but not all condition indices may be informative, but because we found that most indices did not predict any component of fitness, we question the ubiquitous interpretation of
Predicting watershed acidification under alternate rainfall conditions
Huntington, T.G.
1996-01-01
The effect of alternate rainfall scenarios on acidification of a forested watershed subjected to chronic acidic deposition was assessed using the model of acidification of groundwater in catchments (MAGIC). The model was calibrated at the Panola Mountain Research Watershed, near Atlanta, Georgia, U.S.A. using measured soil properties, wet and dry deposition, and modeled hydrologic routing. Model forecast simulations were evaluated to compare alternate temporal averaging of rainfall inputs and variations in rainfall amount and seasonal distribution. Soil water alkalinity was predicted to decrease to substantially lower concentrations under lower rainfall compared with current or higher rainfall conditions. Soil water alkalinity was also predicted to decrease to lower levels when the majority of rainfall occurred during the growing season compared with other rainfall distributions. Changes in rainfall distribution that result in decreases in net soil water flux will temporarily delay acidification. Ultimately, however, decreased soil water flux will result in larger increases in soil-adsorbed sulfur and soil-water sulfate concentrations and decreases in alkalinity when compared to higher water flux conditions. Potential climate change resulting in significant changes in rainfall amounts, seasonal distribution of rainfall, or evapotranspiration will change net soil water flux and, consequently, will affect the dynamics of the acidification response to continued sulfate loading.
Fixed-interval performance and self-control in children.
Darcheville, J C; Rivière, V; Wearden, J H
1992-01-01
Operant responses of 16 children (mean age 6 years and 1 month) were reinforced according to different fixed-interval schedules (with interreinforcer intervals of 20, 30, or 40 s) in which the reinforcers were either 20-s or 40-s presentations of a cartoon. In another procedure, they received training on a self-control paradigm in which both reinforcer delay (0.5 s or 40 s) and reinforcer duration (20 s or 40 s of cartoons) varied, and subjects were offered a choice between various combinations of delay and duration. Individual differences in behavior under the self-control procedure were precisely mirrored by individual differences under the fixed-interval schedule. Children who chose the smaller immediate reinforcer on the self-control procedure (impulsive) produced short postreinforcement pauses and high response rates in the fixed-interval conditions, and both measures changed little with changes in fixed-interval value. Conversely, children who chose the larger delayed reinforcer in the self-control condition (the self-controlled subjects) exhibited lower response rates and long postreinforcement pauses, which changed systematically with changes in the interval, in their fixed-interval performances. PMID:1573372
Cole, Sindy; McNally, Gavan P
2007-10-01
Three experiments studied temporal-difference (TD) prediction errors during Pavlovian fear conditioning. In Stage I, rats received conditioned stimulus A (CSA) paired with shock. In Stage II, they received pairings of CSA and CSB with shock that blocked learning to CSB. In Stage III, a serial overlapping compound, CSB --> CSA, was followed by shock. The change in intratrial durations supported fear learning to CSB but reduced fear of CSA, revealing the operation of TD prediction errors. N-methyl-D-aspartate (NMDA) receptor antagonism prior to Stage III prevented learning, whereas opioid receptor antagonism selectively affected predictive learning. These findings support a role for TD prediction errors in fear conditioning. They suggest that NMDA receptors contribute to fear learning by acting on the product of predictive error, whereas opioid receptors contribute to predictive error. (PsycINFO Database Record (c) 2007 APA, all rights reserved).
Klopčič, M; Koops, W J; Kuipers, A
2013-09-01
The milk production of a dairy cow is characterized by lactation production, which is calculated from daily milk yields (DMY) during lactation. The DMY is calculated from one or more milkings a day collected at the farm. Various milking systems are in use today, resulting in one or many recorded milk yields a day, from which different calculations are used to determine DMY. The primary objective of this study was to develop a mathematical function that described milk production of a dairy cow in relation to the interval between 2 milkings. The function was partly based on the biology of the milk production process. This function, called the 3K-function, was able to predict milk production over an interval of 12h, so DMY was twice this estimate. No external information is needed to incorporate this function in methods to predict DMY. Application of the function on data from different milking systems showed a good fit. This function could be a universal tool to predict DMY for a variety of milking systems, and it seems especially useful for data from robotic milking systems. Further study is needed to evaluate the function under a wide range of circumstances, and to see how it can be incorporated in existing milk recording systems. A secondary objective of using the 3K-function was to compare how much DMY based on different milking systems differed from that based on a twice-a-day milking. Differences were consistent with findings in the literature. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
An actual load forecasting methodology by interval grey modeling based on the fractional calculus.
Yang, Yang; Xue, Dingyü
2017-07-17
The operating processes of a thermal power plant are measured as real-time data, and a large number of historical interval data can be obtained from the resulting dataset. Within defined periods of time, this interval information can support decision making and equipment maintenance. Actual load is one of the most important parameters, and the trends hidden in the historical data reflect the overall operating status of the equipment. However, with interval grey numbers, the modeling and prediction process is more complicated than with real numbers. To avoid losing information, this paper represents each interval by geometric coordinate features, the coordinates of the area and middle-point lines, which are proved to carry the same information as the original interval data. A grey prediction model for interval grey numbers based on fractional-order accumulation calculus is proposed. Compared with the integer-order model, the proposed method has more degrees of freedom and better modeling and prediction performance, and it can be widely used for modeling and prediction with small samples of historical interval-valued industrial sequences. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
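The fractional-order accumulation at the core of such grey models can be sketched as follows. This is a minimal illustration in Python applied to a scalar sequence; the paper's full interval-grey-number model is more involved, and the function name and example values are ours:

```python
import numpy as np
from math import gamma

def frac_accumulate(x, r):
    """r-order accumulated generating operation of sequence x.

    Uses the generalized binomial weights C(j + r - 1, j) expressed
    via gamma functions; r = 1 recovers the ordinary cumulative sum
    used in the integer-order GM(1,1) model.
    """
    x = np.asarray(x, dtype=float)
    out = np.zeros(len(x))
    for k in range(len(x)):
        for i in range(k + 1):
            j = k - i
            w = gamma(j + r) / (gamma(j + 1) * gamma(r))
            out[k] += w * x[i]
    return out

x = [1.0, 2.0, 3.0]
# With r = 1 this is the plain cumulative sum [1, 3, 6];
# fractional orders 0 < r < 1 down-weight older observations less sharply.
```

The extra degree of freedom is exactly the order r, which can be tuned to the data rather than fixed at 1.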
ERIC Educational Resources Information Center
Capaldi, E. J.; Martins, Ana; Miller, Ronald M.
2007-01-01
Rats in a Pavlovian situation were trained under three different reward schedules, at either a 30 s or a 90 s intertrial interval (ITI): Consistent reward (C), 50% irregular reward (I), and single alternation of reward and nonrewarded trials (SA). Activity was recorded to the conditioned stimulus (CS) and in all 10 s bins in each ITI except the…
Wu, Ning Ying; Conger, Anthony J; Dygdon, Judith A
2006-04-01
Two hundred fifty one men and women participated in a study of the prediction of fear of heights, snakes, and public speaking by providing retrospective accounts of multimodal classical conditioning events involving those stimuli. The fears selected for study represent those believed by some to be innate (i.e., heights), prepared (i.e., snakes), and purely experientially learned (i.e., public speaking). This study evaluated the extent to which classical conditioning experiences in direct, observational, and verbal modes contributed to the prediction of the current level of fear severity. Subjects were asked to describe their current level of fear and to estimate their experience with fear response-augmenting events (first- and higher-order aversive pairings) and fear response-moderating events (first- and higher-order appetitive pairings, and pre- and post-conditioning neutral presentations) in direct, observational, and verbal modes. For each stimulus, fear was predictable from direct response-augmenting events and prediction was enhanced by the inclusion of response-moderating events. Furthermore, for each fear, maximum prediction was attained by the addition of variables tapping experiences in the observational and/or verbal modes. Conclusions are offered regarding the importance of including response-augmenting and response-moderating events in all three modes in both research and clinical applications of classical conditioning.
Correlate Life Predictions and Condition Indicators in Helicopter Tail Gearbox Bearings
NASA Technical Reports Server (NTRS)
Dempsey, Paula J.; Bolander, Nathan; Haynes, Chris; Branning, Jeremy; Wade, Daniel R.
2010-01-01
Research to correlate bearing remaining useful life (RUL) predictions with Helicopter Health Usage Monitoring Systems (HUMS) condition indicators (CI) to indicate the damage state of a transmission component has been developed. Condition indicators were monitored and recorded on UH-60M (Black Hawk) tail gearbox output shaft thrust bearings, which had been removed from helicopters and installed in a bearing spall propagation test rig. Condition indicators monitoring the tail gearbox output shaft thrust bearings in UH-60M helicopters were also recorded from an on-board HUMS. The spall propagation data collected in the test rig was used to generate condition indicators for bearing fault detection. A damage progression model was also developed from this data. Determining the RUL of this component in a helicopter requires the CI response to be mapped to the damage state. The data from helicopters and a test rig were analyzed to determine if bearing remaining useful life predictions could be correlated with HUMS condition indicators (CI). Results indicate data fusion analysis techniques can be used to map the CI response to the damage levels.
Single-channel autocorrelation functions: the effects of time interval omission.
Ball, F G; Sansom, M S
1988-01-01
We present a general mathematical framework for analyzing the dynamic aspects of single channel kinetics incorporating time interval omission. An algorithm for computing model autocorrelation functions, incorporating time interval omission, is described. We show, under quite general conditions, that the form of these autocorrelations is identical to that which would be obtained if time interval omission was absent. We also show, again under quite general conditions, that zero correlations are necessarily a consequence of the underlying gating mechanism and not an artefact of time interval omission. The theory is illustrated by a numerical study of an allosteric model for the gating mechanism of the locust muscle glutamate receptor-channel. PMID:2455553
Aural-Nondetectability Model Predictions for Night-Vision Goggles across Ambient Lighting Conditions
2015-12-01
ARL-TR-7564 ● DEC 2015 ● US Army Research Laboratory. Reporting period: May 2015–30 Sep 2015.
Grillon, Christian; Baas, Johanna M P; Cornwell, Brian; Johnson, Linda
2006-10-01
Sustained anxiety can be modeled using context conditioning, which can be studied in a virtual reality environment. Unpredictable stressors increase context conditioning in animals. This study examined context conditioning to predictable and unpredictable shocks in humans using behavioral avoidance, potentiated startle, and subjective reports of anxiety. Subjects were guided through three virtual rooms (no-shock, predictable, unpredictable contexts). Eight-sec duration colored lights served as conditioned stimuli (CS). During acquisition, no shock was administered in the no-shock context. Shocks were paired with the CS in the predictable context and were administered randomly in the unpredictable context. No shock was administered during extinction. Startle stimuli were delivered during CS and between CS to assess cued and context conditioning, respectively. To assess avoidance, subjects freely navigated into two of the three contexts to retrieve money. Startle between CS was potentiated in the unpredictable context compared to the two other contexts. Following acquisition, subjects showed a strong preference for the no-shock context and avoidance of the unpredictable context. Consistent with animal data, context conditioning is increased by unpredictability. These data support virtual reality as a tool to extend research on physiological and behavioral signs of fear and anxiety in humans.
Karsten, Schober; Stephanie, Savino; Vedat, Yildiz
2017-11-10
The objective of the study was to evaluate the effects of body weight (BW), breed, and sex on two-dimensional (2D) echocardiographic measures, reference ranges, and prediction intervals using allometrically scaled data of left atrial (LA) and left ventricular (LV) size and LV wall thickness in healthy cats. The study was a retrospective, observational clinical cohort. 150 healthy cats were enrolled and their 2D echocardiograms analyzed. LA diameter, LV wall thickness, and LV dimension were quantified using three different imaging views. The effect of BW, breed, sex, age, and interaction (BW*sex) on echocardiographic variables was assessed using univariate and multivariate regression and linear mixed model analysis. Standard (using raw data) and allometrically scaled (Y = a × M^b) reference intervals and prediction intervals were determined. BW had a significant (P<0.05) independent effect on 2D variables whereas breed, sex, and age did not. There were clinically relevant differences between reference intervals using mean ± 2SD of raw data and the mean and 95% prediction interval of allometrically scaled variables, most prominent in larger (>6 kg) and smaller (<3 kg) cats. A clinically relevant difference between the thickness of the interventricular septum (IVS) and the dimension of the LV posterior wall (LVPW) was identified. In conclusion, allometric scaling and BW-based 95% prediction intervals should be preferred over conventional 2D echocardiographic reference intervals in cats, in particular in small and large cats. These results are particularly relevant to screening examinations for feline hypertrophic cardiomyopathy.
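The allometric approach above can be sketched as a log-log regression with a standard 95% prediction interval. This is a minimal illustration with simulated body weights and a made-up exponent, not the study's data or fitted coefficients:

```python
import numpy as np

# Hypothetical allometric data Y = a * M**b with lognormal noise
# (a = 1.2, b = 0.33 are assumed values for illustration only).
rng = np.random.default_rng(0)
M = rng.uniform(2.5, 7.0, 150)                          # body weight, kg
Y = 1.2 * M**0.33 * np.exp(rng.normal(0, 0.05, 150))    # e.g. a wall thickness

# Fit log(Y) = log(a) + b * log(M) by ordinary least squares.
x, y = np.log(M), np.log(Y)
n = len(x)
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)            # [log a, b]
resid = y - X @ beta
s2 = resid @ resid / (n - 2)                            # residual variance

def prediction_interval(m_new, t=1.976):                # t ~ t_{0.975, n-2}
    """95% prediction interval for Y at a new body weight m_new."""
    x0 = np.array([1.0, np.log(m_new)])
    se = np.sqrt(s2 * (1 + x0 @ np.linalg.solve(X.T @ X, x0)))
    mid = x0 @ beta
    return np.exp(mid - t * se), np.exp(mid + t * se)

lo, hi = prediction_interval(4.0)   # interval for a 4 kg cat
```

Because the interval is built on the log scale and back-transformed, its width grows with body weight, which is the behavior that fixed mean ± 2SD reference ranges cannot capture at the extremes.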
Assessing predictability of a hydrological stochastic-dynamical system
NASA Astrophysics Data System (ADS)
Gelfan, Alexander
2014-05-01
The water cycle includes processes with different memory, which creates potential for predictability of a hydrological system based on separating its long- and short-memory components and conditioning long-term prediction on the slower-evolving components (similar to approaches in climate prediction). In the face of the Panta Rhei IAHS Decade questions, it is important to find a conceptual approach to classify hydrological system components with respect to their predictability, define predictable/unpredictable patterns, extend lead-time and improve reliability of hydrological predictions based on the predictable patterns. Representation of hydrological systems as dynamical systems subjected to the effect of noise (stochastic-dynamical systems) provides a possible tool for such conceptualization. A method has been proposed for assessing the predictability of a hydrological system arising from its sensitivity to both initial and boundary conditions. The predictability is defined through a procedure of convergence of a pre-assigned probabilistic measure (e.g. variance) of the system state to a stable value. The time interval of the convergence, that is the time interval during which the system loses memory of its initial state, defines the limit of the system's predictability. The proposed method was applied to assess predictability of soil moisture dynamics in the Nizhnedevitskaya experimental station (51.516N; 38.383E) located in the agricultural zone of central European Russia. A stochastic-dynamical model combining a deterministic one-dimensional model of the hydrothermal regime of soil with a stochastic model of meteorological inputs was developed. The deterministic model describes processes of coupled heat and moisture transfer through unfrozen/frozen soil and accounts for the influence of phase changes on water flow. The stochastic model produces time series of daily meteorological variables (precipitation, air temperature and humidity), whose statistical properties are similar
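The convergence-based predictability limit can be illustrated on a toy stochastic-dynamical system. The sketch below uses a first-order autoregressive process as a stand-in for the soil-moisture dynamics (the coefficients and the 5% convergence tolerance are our assumptions, not the paper's model):

```python
import numpy as np

# Run an ensemble of a noisy first-order system from a shared initial
# state and track how the ensemble variance approaches its stationary
# value; the crossing time is the predictability limit.
rng = np.random.default_rng(3)
phi, sigma = 0.9, 1.0                    # memory and noise level (assumed)
n_members, n_steps = 2000, 120
x = np.zeros(n_members)                  # all members share the initial state
var_t = []
for _ in range(n_steps):
    x = phi * x + sigma * rng.normal(size=n_members)
    var_t.append(x.var())

var_inf = sigma**2 / (1 - phi**2)        # stationary variance of the AR(1)
# Predictability limit: first step at which the ensemble variance is
# within 5% of the stationary value, i.e. memory of the initial state
# is effectively lost.
limit = next(t for t, v in enumerate(var_t) if v > 0.95 * var_inf)
```

A system with longer memory (phi closer to 1) takes more steps to converge, i.e. it is predictable over a longer lead time from its initial state.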
Moustafa, Ahmed A.; Wufong, Ella; Servatius, Richard J.; Pang, Kevin C. H.; Gluck, Mark A.; Myers, Catherine E.
2013-01-01
A recurrent-network model provides a unified account of the hippocampal region in mediating the representation of temporal information in classical eyeblink conditioning. Much empirical research is consistent with a general conclusion that delay conditioning (in which the conditioned stimulus CS and unconditioned stimulus US overlap and co-terminate) is independent of the hippocampal system, while trace conditioning (in which the CS terminates before US onset) depends on the hippocampus. However, recent studies show that, under some circumstances, delay conditioning can be hippocampal-dependent and trace conditioning can be spared following hippocampal lesion. Here, we present an extension of our prior trial-level models of hippocampal function and stimulus representation that can explain these findings within a unified framework. Specifically, the current model includes adaptive recurrent collateral connections that aid in the representation of intra-trial temporal information. With this model, as in our prior models, we argue that the hippocampus is not specialized for conditioned response timing, but rather is a general-purpose system that learns to predict the next state of all stimuli given the current state of variables encoded by activity in recurrent collaterals. As such, the model correctly predicts that hippocampal involvement in classical conditioning should be critical not only when there is an intervening trace interval, but also when there is a long delay between CS onset and US onset. Our model simulates empirical data from many variants of classical conditioning, including delay and trace paradigms in which the length of the CS, the inter-stimulus interval, or the trace interval is varied. Finally, we discuss model limitations, future directions, and several novel empirical predictions of this temporal processing model of hippocampal function and learning. PMID:23178699
ACCEPT: Introduction of the Adverse Condition and Critical Event Prediction Toolbox
NASA Technical Reports Server (NTRS)
Martin, Rodney A.; Santanu, Das; Janakiraman, Vijay Manikandan; Hosein, Stefan
2015-01-01
The prediction of anomalies or adverse events is a challenging task, and there are a variety of methods which can be used to address the problem. In this paper, we introduce a generic framework developed in MATLAB (registered trademark) called ACCEPT (Adverse Condition and Critical Event Prediction Toolbox). ACCEPT is an architectural framework designed to compare and contrast the performance of a variety of machine learning and early warning algorithms, and tests the capability of these algorithms to robustly predict the onset of adverse events in any time-series data generating systems or processes.
Recurrence interval analysis of trading volumes
NASA Astrophysics Data System (ADS)
Ren, Fei; Zhou, Wei-Xing
2010-06-01
We study the statistical properties of the recurrence intervals τ between successive trading volumes exceeding a certain threshold q . The recurrence interval analysis is carried out for the 20 liquid Chinese stocks covering a period from January 2000 to May 2009, and two Chinese indices from January 2003 to April 2009. Similar to the recurrence interval distribution of the price returns, the tail of the recurrence interval distribution of the trading volumes follows a power-law scaling, and the results are verified by the goodness-of-fit tests using the Kolmogorov-Smirnov (KS) statistic, the weighted KS statistic and the Cramér-von Mises criterion. The measurements of the conditional probability distribution and the detrended fluctuation function show that both short-term and long-term memory effects exist in the recurrence intervals between trading volumes. We further study the relationship between trading volumes and price returns based on the recurrence interval analysis method. It is found that large trading volumes are more likely to occur following large price returns, and the comovement between trading volumes and price returns is more pronounced for large trading volumes.
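The basic object of the analysis above, the recurrence interval between threshold exceedances, is straightforward to extract. The sketch below uses simulated lognormal values as a stand-in for trading volumes, not the Chinese stock data:

```python
import numpy as np

# Illustrative recurrence-interval extraction: locate the times at which
# a series exceeds a threshold q and take the gaps between successive
# exceedances.
rng = np.random.default_rng(1)
volume = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)

def recurrence_intervals(x, q):
    """Intervals tau between successive values of x exceeding q."""
    idx = np.flatnonzero(x > q)
    return np.diff(idx)

# Threshold at the 95th percentile: about 5% of points exceed it, so for
# an uncorrelated series the mean interval should be near 1/0.05 = 20.
tau = recurrence_intervals(volume, q=np.quantile(volume, 0.95))
```

For real volume series, the paper's memory effects would show up as a conditional mean of tau that depends on the size of the preceding interval, which this independent toy series does not exhibit.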
Volatility return intervals analysis of the Japanese market
NASA Astrophysics Data System (ADS)
Jung, W.-S.; Wang, F. Z.; Havlin, S.; Kaizoji, T.; Moon, H.-T.; Stanley, H. E.
2008-03-01
We investigate scaling and memory effects in return intervals between price volatilities above a certain threshold q for the Japanese stock market using daily and intraday data sets. We find that the distribution of return intervals can be approximated by a scaling function that depends only on the ratio between the return interval τ and its mean <τ>. We also find memory effects such that a large (or small) return interval follows a large (or small) interval by investigating the conditional distribution and mean return interval. The results are similar to previous studies of other markets and indicate that similar statistical features appear in different financial markets. We also compare our results between the period before and after the big crash at the end of 1989. We find that scaling and memory effects of the return intervals show similar features although the statistical properties of the returns are different.
Short-term load and wind power forecasting using neural network-based prediction intervals.
Quan, Hao; Srinivasan, Dipti; Khosravi, Abbas
2014-02-01
Electrical power systems are evolving from today's centralized bulk systems to more decentralized systems. Penetrations of renewable energies, such as wind and solar power, significantly increase the level of uncertainty in power systems. Accurate load forecasting becomes more complex, yet more important for management of power systems. Traditional methods for generating point forecasts of load demands cannot properly handle uncertainties in system operations. To quantify potential uncertainties associated with forecasts, this paper implements a neural network (NN)-based method for the construction of prediction intervals (PIs). A newly introduced method, called lower upper bound estimation (LUBE), is applied and extended to develop PIs using NN models. A new problem formulation is proposed, which translates the primary multiobjective problem into a constrained single-objective problem. Compared with the cost function, this new formulation is closer to the primary problem and has fewer parameters. Particle swarm optimization (PSO) integrated with the mutation operator is used to solve the problem. Electrical demands from Singapore and New South Wales (Australia), as well as wind power generation from Capital Wind Farm, are used to validate the PSO-based LUBE method. Comparative results show that the proposed method can construct higher quality PIs for load and wind power generation forecasts in a short time.
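The quality of NN-constructed prediction intervals is commonly judged by the two quantities that LUBE-style objectives trade off: coverage and width. A minimal sketch with made-up bounds (not the paper's NN outputs):

```python
import numpy as np

def picp(y, lower, upper):
    """Prediction interval coverage probability: fraction of targets
    falling inside their interval."""
    return np.mean((y >= lower) & (y <= upper))

def pinaw(y, lower, upper):
    """Prediction interval normalized average width: mean width divided
    by the range of the targets."""
    return np.mean(upper - lower) / (y.max() - y.min())

# Toy example: intervals of half-width 0.5 centered on the targets.
y = np.array([1.0, 2.0, 3.0, 4.0])
lo_b, hi_b = y - 0.5, y + 0.5
# picp(y, lo_b, hi_b) -> 1.0 ; pinaw(y, lo_b, hi_b) -> 1/3
```

A constrained single-objective formulation like the one described above typically minimizes width subject to coverage meeting a nominal level, rather than blending both into one cost function.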
Can working memory predict target-to-target interval effects in the P300?
Steiner, Genevieve Z; Barry, Robert J; Gonsalvez, Craig J
2013-09-01
It has been suggested that the P300 component of the ERP is an electrophysiological index of memory-updating processes associated with task-relevant stimuli. Component magnitude varies with the time separating target stimuli (target-to-target interval: TTI), with longer TTIs eliciting larger P300 amplitudes. According to the template-update perspective, TTI effects observable in the P300 reflect the updating of stimulus-templates in working memory (WM). The current study explored whether young adults' memory-task ability could predict TTI effects in P300. EEG activity was recorded from 50 university students (aged 18-25 years) while they completed an auditory equiprobable Go/NoGo task with manipulations of TTIs. Participants also completed a CogState® battery and were sorted according to their WM score. ERPs were analysed using a temporal PCA. Two P300 components, P3b and the Slow Wave, were found to linearly increase in amplitude to longer TTIs. This TTI effect differed between groups only for the P3b component: The high WM group showed a steeper increase in P3b amplitude with TTI than the low WM group. These results suggest that TTI effects in P300 are directly related to WM processes. © 2013.
Intertrial interval duration and learning in autistic children.
Koegel, R L; Dunlap, G; Dyer, K
1980-01-01
This study investigated the influence of intertrial interval duration on the performance of autistic children during teaching situations. The children were taught under the same conditions existing in their regular programs, except that the length of time between trials was systematically manipulated. With both multiple baseline and repeated reversal designs, two lengths of intertrial interval were employed: short intervals, with the SD for any given trial presented approximately one second following the reinforcer for the previous trial, versus long intervals, with the SD presented four or more seconds following the reinforcer for the previous trial. The results showed that: (1) the short intertrial intervals always produced higher levels of correct responding than the long intervals; and (2) there were improving trends in performance and rapid acquisition with the short intertrial intervals, in contrast to minimal or no change with the long intervals. The results are discussed in terms of utilizing information about child and task characteristics when selecting optimal intervals. The data suggest that manipulations made between trials have a large influence on autistic children's learning. PMID:7364701
Usefulness of Electrocardiographic QT Interval to Predict Left Ventricular Diastolic Dysfunction
Wilcox, Jane E.; Rosenberg, Jonathan; Vallakati, Ajay; Gheorghiade, Mihai; Shah, Sanjiv J.
2013-01-01
Whether a normal electrocardiogram excludes left ventricular (LV) diastolic dysfunction (DD) and whether electrocardiographic parameters are associated with DD is unknown. We therefore sought to investigate the relation between electrocardiographic parameters and DD. We first evaluated 75 consecutive patients referred for echocardiography for clinical suspicion of heart failure (phase 1). Electrocardiography and comprehensive echocardiography were performed on all patients and were analyzed separately in a blinded fashion. Receiver operating characteristic curves and multivariate regression analyses were used to determine which electrocardiographic parameters were most closely associated with DD. Next, we prospectively validated our results in 100 consecutive, unselected patients undergoing echocardiography (phase 2). In phase 1 of our study, the mean age was 59 ± 14 years, 41% were women, 31% had coronary disease, 53% had hypertension, and 25% had diabetes. The mean ejection fraction was 54 ± 15%, and 64% had DD. Of all the electrocardiographic parameters, the QTc interval was most closely associated with DD. QTc was inversely associated with E′ velocity (r = −0.54, p <0.0001), and the area under the receiver operating characteristic curve for QTc as a predictor of DD was 0.82. QTc prolongation was independently associated with reduced E′ velocity (p = 0.021 after adjustment for age, gender, medications, QRS duration, and ejection fraction). In phase 2 of our study QTc was the electrocardiographic parameter most associated with reduced E′ velocity (435 ± 31 vs 419 ± 24 ms; p = 0.004), confirming our phase 1 study findings. In conclusion, QTc prolongation was the electrocardiographic marker most predictive of DD and was independently associated with DD. PMID:21907948
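The area under the ROC curve reported above has a simple probabilistic reading: it is the probability that a randomly chosen patient with DD has a longer QTc than a randomly chosen patient without DD (the Mann-Whitney formulation). A sketch with made-up QTc values, not the study's data:

```python
import numpy as np

def roc_auc(scores_pos, scores_neg):
    """AUC as P(score_pos > score_neg), with ties counted as 1/2."""
    pos = np.asarray(scores_pos, dtype=float)[:, None]
    neg = np.asarray(scores_neg, dtype=float)[None, :]
    return np.mean(pos > neg) + 0.5 * np.mean(pos == neg)

qtc_dd = [450, 460, 455, 470]   # QTc (ms), patients with DD (hypothetical)
qtc_no = [410, 420, 452, 400]   # QTc (ms), patients without DD (hypothetical)
# roc_auc(qtc_dd, qtc_no) -> 0.9375
```

An AUC of 0.82, as found in phase 1, means that QTc correctly ranks a DD/non-DD pair about 82% of the time.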
Predicted Turbine Heat Transfer for a Range of Test Conditions
NASA Technical Reports Server (NTRS)
Boyle, R. J.; Lucci, B. L.
1996-01-01
Comparisons are shown between predictions and experimental data for blade and endwall heat transfer. The comparisons are given for both vane and rotor geometries over an extensive range of Reynolds and Mach numbers, using experimental data from a variety of sources. A number of turbulence models are available for predicting blade surface heat transfer, as well as aerodynamic performance. The results of an investigation to determine the turbulence model that gives the best agreement with experimental data over a wide range of test conditions are presented.
Predicting thermal reference conditions for USA streams and rivers
Hill, Ryan A.; Hawkins, Charles P.; Carlisle, Daren M.
2013-01-01
Temperature is a primary driver of the structure and function of stream ecosystems. However, the lack of stream temperature (ST) data for the vast majority of streams and rivers severely compromises our ability to describe patterns of thermal variation among streams, test hypotheses regarding the effects of temperature on macroecological patterns, and assess the effects of altered STs on ecological resources. Our goal was to develop empirical models that could: 1) quantify the effects of stream and watershed alteration (SWA) on STs, and 2) accurately and precisely predict natural (i.e., reference condition) STs in conterminous USA streams and rivers. We modeled 3 ecologically important elements of the thermal regime: mean summer, mean winter, and mean annual ST. To build reference-condition models (RCMs), we used daily mean ST data obtained from several thousand US Geological Survey temperature sites distributed across the conterminous USA and iteratively modeled ST with Random Forests to identify sites in reference condition. We first created a set of dirty models (DMs) that related STs to both natural factors (e.g., climate, watershed area, topography) and measures of SWA, i.e., reservoirs, urbanization, and agriculture. The 3 models performed well (r2 = 0.84–0.94, residual mean square error [RMSE] = 1.2–2.0°C). For each DM, we used partial dependence plots to identify SWA thresholds below which response in ST was minimal. We then used data from only the sites with upstream SWA below these thresholds to build RCMs with only natural factors as predictors (r2 = 0.87–0.95, RMSE = 1.1–1.9°C). Use of only reference-quality sites caused RCMs to suffer modest loss of predictor space and spatial coverage, but this loss was associated with parts of ST response curves that were flat and, therefore, not responsive to further variation in predictor space. We then compared predictions made with the RCMs to predictions made with the DMs with SWA set to 0. For most
Interval Predictor Models with a Formal Characterization of Uncertainty and Reliability
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2014-01-01
This paper develops techniques for constructing empirical predictor models based on observations. By contrast to standard models, which yield a single predicted output at each value of the model's inputs, Interval Predictor Models (IPMs) yield an interval into which the unobserved output is predicted to fall. The IPMs proposed prescribe the output as an interval-valued function of the model's inputs and render a formal description of both the uncertainty in the model's parameters and the spread in the predicted output. Uncertainty is prescribed as a hyper-rectangular set in the space of the model's parameters. The propagation of this set through the empirical model yields a range of outputs of minimal spread containing all (or, depending on the formulation, most) of the observations. Optimization-based strategies for calculating IPMs and eliminating the effects of outliers are proposed. Outliers are identified by evaluating the extent to which they degrade the tightness of the prediction. This evaluation can be carried out while the IPM is calculated. When the data satisfy mild stochastic assumptions, and the optimization program used for calculating the IPM is convex (or its solution coincides with the solution to an auxiliary convex program), the model's reliability (that is, the probability that a future observation will fall within the predicted range of outputs) can be bounded rigorously by a non-asymptotic formula.
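The minimal-spread idea can be illustrated with the simplest convex instance: a linear band of constant half-width, found by linear programming. This is our simplification of the general IPM formulation, on simulated data:

```python
import numpy as np
from scipy.optimize import linprog

# Fit a band [a + b*x - c, a + b*x + c] of minimal half-width c that
# contains every observation; with a linear parameterization this is a
# linear program, hence convex.
rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 + 3.0 * x + rng.uniform(-0.2, 0.2, 50)

# Decision variables z = [a, b, c]; minimize c subject to
#   y_i <= a + b*x_i + c   and   a + b*x_i - c <= y_i   for all i.
ones = np.ones_like(x)
A_ub = np.vstack([
    np.column_stack([-ones, -x, -ones]),   # -(a + b*x_i) - c <= -y_i
    np.column_stack([ ones,  x, -ones]),   #  (a + b*x_i) - c <=  y_i
])
b_ub = np.concatenate([-y, y])
res = linprog(c=[0, 0, 1], A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * 3)
a_hat, b_hat, c_hat = res.x
# By construction every observation lies within c_hat of the fitted line.
```

Under the mild stochastic assumptions mentioned above, scenario-optimization results bound the probability that a future observation escapes such a band, which is the non-asymptotic reliability guarantee the paper refers to.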
Protein 8-class secondary structure prediction using conditional neural fields.
Wang, Zhiyong; Zhao, Feng; Peng, Jian; Xu, Jinbo
2011-10-01
Compared with 3-class protein secondary structure (SS) prediction, 8-class prediction has received less attention and is much more challenging, especially for proteins with few sequence homologs. This paper presents a new probabilistic method for 8-class SS prediction using conditional neural fields (CNFs), a recently invented probabilistic graphical model. This CNF method not only models the complex relationship between sequence features and SS, but also exploits the interdependency among the SS types of adjacent residues. In addition to sequence profiles, our method also makes use of non-evolutionary information for SS prediction. Tested on the CB513 and RS126 data sets, our method achieves Q8 accuracy of 64.9% and 64.7%, respectively, which is much better than the SSpro8 web server (51.0% and 48.0%, respectively). Our method can also be used to predict other structural properties (e.g. solvent accessibility) of a protein or the SS of RNA. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
ERIC Educational Resources Information Center
Ladd, Helen F.
2009-01-01
This quantitative study uses data from North Carolina to examine the extent to which survey based perceptions of working conditions are predictive of policy-relevant outcomes, independent of other school characteristics such as the demographic mix of the school's students. Working conditions emerge as highly predictive of teachers' stated…
Ghisoni, Eleonora; Katsaros, Dionyssios; Maggiorotto, Furio; Aglietta, Massimo; Vaira, Marco; De Simone, Michele; Mittica, Gloria; Giannone, Gaia; Robella, Manuela; Genta, Sofia; Lucchino, Fabiola; Marocco, Francesco; Borella, Fulvio; Valabrega, Giorgio; Ponzone, Riccardo
2018-05-30
Optimal cytoreduction (macroscopic Residual Tumor, RT = 0) is the best predictor of survival in epithelial ovarian cancer (EOC). No consolidated criteria exist to predict optimal surgical resection at interval debulking surgery (IDS). The aim of this study was to develop a predictive model of complete cytoreduction at IDS. We retrospectively analyzed 93 of 432 patients with advanced EOC who underwent neoadjuvant chemotherapy (NACT) and IDS from January 2010 to December 2016 in two referral cancer centers. The correlation between clinical-pathological variables and residual disease at IDS was investigated with univariate and multivariate analysis. A predictive score of cytoreduction (PSC) was created by combining all significant variables. The performance of each single variable and of the PSC is reported, and the correlation of all significant variables with progression-free survival (PFS) was assessed. At IDS, 65 patients (69.8%) had complete cytoreduction with no residual disease (R = 0). Three criteria independently predicted R > 0: age ≥ 60 years (p = 0.014), CA-125 before NACT > 550 UI/dl (p = 0.044), and Peritoneal Cancer Index (PCI) > 16 (p < 0.001). A PSC ≥ 3 was associated with the best accuracy (85.8%), limiting the number of incomplete surgeries to 16.5%. Moreover, a PCI > 16, a PSC ≥ 3, and the presence of R > 0 after IDS were all significantly associated with shorter PFS (p < 0.001, p < 0.001 and p = 0.004, respectively). Our PSC predicts complete cytoreduction at IDS in a large proportion of patients, limiting the rate of futile extensive surgeries when residual tumor (R > 0) is present. The PSC should be prospectively validated in a larger series of EOC patients undergoing NACT-IDS.
Modeling Relationships Between Flight Crew Demographics and Perceptions of Interval Management
NASA Technical Reports Server (NTRS)
Remy, Benjamin; Wilson, Sara R.
2016-01-01
The Interval Management Alternative Clearances (IMAC) human-in-the-loop simulation experiment was conducted to assess interval management system performance and participants' acceptability and workload while performing three interval management clearance types. Twenty-four subject pilots and eight subject controllers flew ten high-density arrival scenarios into Denver International Airport during two weeks of data collection. This analysis examined the possible relationships between subject pilot demographics on reported perceptions of interval management in IMAC. Multiple linear regression models were created with a new software tool to predict subject pilot questionnaire item responses from demographic information. General patterns were noted across models that may indicate flight crew demographics influence perceptions of interval management.
The 32nd CDC: System identification using interval dynamic models
NASA Technical Reports Server (NTRS)
Keel, L. H.; Lew, J. S.; Bhattacharyya, S. P.
1992-01-01
Motivated by the recent explosive development of results in the area of parametric robust control, a new technique to identify a family of uncertain systems is introduced. The technique takes the frequency domain input and output data obtained from experimental test signals and produces an 'interval transfer function' that contains the complete frequency domain behavior with respect to the test signals. The interval transfer function is one of the key concepts in the parametric robust control approach, and identification with such an interval model allows one to predict the worst-case performance and stability margins using recent results on interval systems. The algorithm is illustrated by applying it to an 18-bay Mini-Mast truss structure.
On the accuracy and reliability of predictions by control-system theory.
Bourbon, W T; Copeland, K E; Dyer, V R; Harman, W K; Mosley, B L
1990-12-01
In three experiments we used control-system theory (CST) to predict the results of tracking tasks on which people held a handle to keep a cursor even with a target on a computer screen. 10 people completed a total of 104 replications of the task. In each experiment, there were two conditions: in one, only the handle affected the position of the cursor; in the other, a random disturbance also affected the cursor. From a person's performance during Condition 1, we derived constants used in the CST model to predict the results of Condition 2. In two experiments, predictions occurred a few minutes before Condition 2; in one experiment, the delay was 1 yr. During a 1-min. experimental run, the positions of handle and cursor, produced by the person, were each sampled 1800 times, once every 1/30 sec. During a modeling run, the model predicted the positions of the handle and target for each of the 1800 intervals sampled in the experimental run. In 104 replications, the mean correlation between predicted and actual positions of the handle was .996; SD = .002.
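The control-system model described can be sketched with a toy simulation: the cursor is the handle position plus a disturbance, and the simulated person moves the handle at a rate proportional to the target-cursor error. The gain `k`, the sinusoidal target, and the disturbance are all hypothetical stand-ins for the constants and signals derived from the actual experiments.

```python
import math

# Toy control-system (CST) tracking model: cursor = handle + disturbance,
# and the handle is moved to reduce the target-cursor error. The gain k
# plays the role of the constant derived from Condition 1 of the paper.
def simulate(k, dt, steps):
    handle = 0.0
    history = []
    for i in range(steps):
        t = i * dt
        target = math.sin(0.5 * t)             # target the cursor should match
        disturbance = 0.3 * math.sin(1.3 * t)  # disturbance acting on the cursor
        cursor = handle + disturbance
        handle += k * (target - cursor) * dt   # proportional correction
        history.append((cursor, target))
    return history

# 1-min run sampled every 1/30 s, as in the experiments (1800 samples).
history = simulate(k=8.0, dt=1 / 30, steps=1800)
# Root-mean-square tracking error over the final half of the run.
tail = history[900:]
rmse = math.sqrt(sum((c - tgt) ** 2 for c, tgt in tail) / len(tail))
```

Even this one-parameter controller keeps the cursor close to the target despite the disturbance, which is the qualitative behavior the CST model exploits to predict handle positions sample by sample.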
The QT Interval and Risk of Incident Atrial Fibrillation
Mandyam, Mala C.; Soliman, Elsayed Z.; Alonso, Alvaro; Dewland, Thomas A.; Heckbert, Susan R.; Vittinghoff, Eric; Cummings, Steven R.; Ellinor, Patrick T.; Chaitman, Bernard R.; Stocke, Karen; Applegate, William B.; Arking, Dan E.; Butler, Javed; Loehr, Laura R.; Magnani, Jared W.; Murphy, Rachel A.; Satterfield, Suzanne; Newman, Anne B.; Marcus, Gregory M.
2013-01-01
BACKGROUND Abnormal atrial repolarization is important in the development of atrial fibrillation (AF), but no direct measurement is available in clinical medicine. OBJECTIVE To determine whether the QT interval, a marker of ventricular repolarization, could be used to predict incident AF. METHODS We examined a prolonged QT corrected by the Framingham formula (QTFram) as a predictor of incident AF in the Atherosclerosis Risk in Communities (ARIC) study. The Cardiovascular Health Study (CHS) and Health, Aging, and Body Composition (Health ABC) study were used for validation. Secondary predictors included QT duration as a continuous variable, a short QT interval, and QT intervals corrected by other formulae. RESULTS Among 14,538 ARIC participants, a prolonged QTFram predicted a roughly two-fold increased risk of AF (hazard ratio [HR] 2.05, 95% confidence interval [CI] 1.42–2.96, p<0.001). No substantive attenuation was observed after adjustment for age, race, sex, study center, body mass index, hypertension, diabetes, coronary disease, and heart failure. The findings were validated in CHS and Health ABC and were similar across various QT correction methods. Also in ARIC, each 10-ms increase in QTFram was associated with an increased unadjusted (HR 1.14, 95%CI 1.10–1.17, p<0.001) and adjusted (HR 1.11, 95%CI 1.07–1.14, p<0.001) risk of AF. Findings regarding a short QT were inconsistent across cohorts. CONCLUSIONS A prolonged QT interval is associated with an increased risk of incident AF. PMID:23872693
Condition Assessment and End-of-Life Prediction System for Electric Machines and Their Loads
NASA Technical Reports Server (NTRS)
Parlos, Alexander G.; Toliyat, Hamid A.
2005-01-01
An end-of-life prediction system developed for electric machines and their loads could be used in integrated vehicle health monitoring at NASA and in other government agencies. This system will provide on-line, real-time condition assessment and end-of-life prediction of electric machines (e.g., motors, generators) and/or their mechanically coupled loads (e.g., pumps, fans, compressors, turbines, conveyor belts, magnetic levitation trains, and others). In long-duration space flight, the ability to predict the lifetime of machinery could spell the difference between mission success and failure. Therefore, the system described here may be of inestimable value to the U.S. space program. The system will provide continuous monitoring for on-line condition assessment and end-of-life prediction, as opposed to the current practice of off-line diagnosis.
NASA Astrophysics Data System (ADS)
Wilson, S. R.; Close, M. E.; Abraham, P.
2018-01-01
Diffuse nitrate losses from agricultural land pollute groundwater resources worldwide, but can be attenuated under reducing subsurface conditions. In New Zealand, the ability to predict where groundwater denitrification occurs is important for understanding the linkage between land use and discharges of nitrate-bearing groundwater to streams. This study assesses the application of linear discriminant analysis (LDA) for predicting groundwater redox status for Southland, a major dairy farming region in New Zealand. Data cases were developed by assigning a redox status to samples derived from a regional groundwater quality database. Pre-existing regional-scale geospatial databases were used as training variables for the discriminant functions. The predictive accuracy of the discriminant functions was slightly improved by optimising the thresholds between sample depth classes. The models predict 23% of the region as being reducing at shallow depths (<15 m), and 37% at medium depths (15-75 m). Predictions were made at a sub-regional level to determine whether improvements could be made with discriminant functions trained by local data. The results indicated that any gains in predictive success were offset by loss of confidence in the predictions due to the reduction in the number of samples used. The regional scale model predictions indicate that subsurface reducing conditions predominate at low elevations on the coastal plains where poorly drained soils are widespread. Additional indicators for subsurface denitrification are a high carbon content of the soil, a shallow water table, and low-permeability clastic sediments. The coastal plains are an area of widespread groundwater discharge, and the soil and hydrology characteristics require the land to be artificially drained to render the land suitable for farming. For the improvement of water quality in coastal areas, it is therefore important that land and water management efforts focus on understanding hydrological
Characterization of minimal sequences associated with self-similar interval exchange maps
NASA Astrophysics Data System (ADS)
Cobo, Milton; Gutiérrez-Romo, Rodolfo; Maass, Alejandro
2018-04-01
The construction of affine interval exchange maps (IEMs) with wandering intervals that are semi-conjugate to a given self-similar IEM is strongly related to the existence of the so-called minimal sequences associated with local potentials, which are certain elements of the substitution subshift arising from the given IEM. In this article, under the condition called unique representation property, we characterize such minimal sequences for potentials coming from non-real eigenvalues of the substitution matrix. We also give conditions on the slopes of the affine extensions of a self-similar IEM that determine whether it exhibits a wandering interval or not.
ERIC Educational Resources Information Center
Schuwerk, Tobias; Sodian, Beate; Paulus, Markus
2016-01-01
Recent research suggests that impaired action prediction is at the core of social interaction deficits in autism spectrum condition (ASC). Here, we targeted two cognitive mechanisms that are thought to underlie the prediction of others' actions: statistical learning and efficiency considerations. We measured proactive eye movements of 10-year-old…
Military Applicability of Interval Training for Health and Performance.
Gibala, Martin J; Gagnon, Patrick J; Nindl, Bradley C
2015-11-01
Militaries from around the globe have predominantly used endurance training as their primary mode of aerobic physical conditioning, with historical emphasis placed on the long distance run. In contrast to this traditional exercise approach to training, interval training is characterized by brief, intermittent bouts of intense exercise, separated by periods of lower intensity exercise or rest for recovery. Although hardly a novel concept, research over the past decade has shed new light on the potency of interval training to elicit physiological adaptations in a time-efficient manner. This work has largely focused on the benefits of low-volume interval training, which involves a relatively small total amount of exercise, as compared with the traditional high-volume approach to training historically favored by militaries. Studies that have directly compared interval and moderate-intensity continuous training have shown similar improvements in cardiorespiratory fitness and the capacity for aerobic energy metabolism, despite large differences in total exercise and training time commitment. Interval training can also be applied in a calisthenics manner to improve cardiorespiratory fitness and strength, and this approach could easily be incorporated into a military conditioning environment. Although interval training can elicit physiological changes in men and women, the potential for sex-specific adaptations in the adaptive response to interval training warrants further investigation. Additional work is needed to clarify adaptations occurring over the longer term; however, interval training deserves consideration from a military applicability standpoint as a time-efficient training strategy to enhance soldier health and performance. There is value for military leaders in identifying strategies that reduce the time required for exercise, but nonetheless provide an effective training stimulus.
Counts, M E; Morton, M J; Laffoon, S W; Cox, R H; Lipowicz, P J
2005-04-01
The study objectives were to determine the effects of smoking machine puffing parameters on mainstream smoke composition and to express those effects as predicting relationships. Forty-eight commercial Philip Morris USA and Philip Morris International cigarettes from international markets and the 1R4F reference cigarette were machine-smoked using smoking conditions defined by the International Organization of Standardization (ISO), the Massachusetts Department of Public Health (MDPH), and Health Canada (HC). Cigarette tobacco fillers were analyzed for nitrate, nicotine, tobacco-specific nitrosamines (TSNA), and ammonia. Mainstream yields for tar and 44 individual smoke constituents and "smoke pH" were determined. Cigarette constituent yields typically increased in the order ISO
Prediction of sand transport over immobile gravel from supply limited to capacity conditions
USDA-ARS?s Scientific Manuscript database
The prediction of the transport of sand in armored gravel reaches downstream of dams is complicated by variable bed conditions ranging from sand transported through gravel to sand in transport over buried gravel. Knowledge of the rate of sand transport in these conditions, however, is necessary for...
Confidence intervals for correlations when data are not normal.
Bishara, Anthony J; Hittner, James B
2017-02-01
With nonnormal data, the typical confidence interval of the correlation (Fisher z') may be inaccurate. The literature has been unclear as to which of several alternative methods should be used instead, and how extreme a violation of normality is needed to justify an alternative. Through Monte Carlo simulation, 11 confidence interval methods were compared, including Fisher z', two Spearman rank-order methods, the Box-Cox transformation, rank-based inverse normal (RIN) transformation, and various bootstrap methods. Nonnormality often distorted the Fisher z' confidence interval-for example, leading to a 95 % confidence interval that had actual coverage as low as 68 %. Increasing the sample size sometimes worsened this problem. Inaccurate Fisher z' intervals could be predicted by a sample kurtosis of at least 2, an absolute sample skewness of at least 1, or significant violations of normality hypothesis tests. Only the Spearman rank-order and RIN transformation methods were universally robust to nonnormality. Among the bootstrap methods, an observed imposed bootstrap came closest to accurate coverage, though it often resulted in an overly long interval. The results suggest that sample nonnormality can justify avoidance of the Fisher z' interval in favor of a more robust alternative. R code for the relevant methods is provided in supplementary materials.
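The Fisher z' interval whose coverage the study examines is straightforward to compute: transform the sample correlation, build a normal interval in the transformed space, and back-transform the endpoints.

```python
import math

# Fisher z' confidence interval for a sample correlation r with sample
# size n. This is the standard interval whose actual coverage the study
# shows can fall well below 95% when the data are nonnormal.
def fisher_ci(r, n, z_crit=1.959964):
    z = math.atanh(r)            # Fisher z' transform of r
    se = 1.0 / math.sqrt(n - 3)  # approximate standard error in z-space
    lo = z - z_crit * se
    hi = z + z_crit * se
    return math.tanh(lo), math.tanh(hi)  # back-transform to the r scale

lo, hi = fisher_ci(r=0.5, n=50)  # ≈ (0.257, 0.683)
```

The robust alternatives the study recommends (Spearman rank-order, RIN transformation) replace `r` with a correlation computed on transformed ranks before applying the same machinery.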
Honeine, Jean-Louis; Schieppati, Marco
2014-01-01
Maintaining equilibrium is basically a sensorimotor integration task. The central nervous system (CNS) continually and selectively weights and rapidly integrates sensory inputs from multiple sources, and coordinates multiple outputs. The weighting process is based on the availability and accuracy of afferent signals at a given instant, on the time-period required to process each input, and possibly on the plasticity of the relevant pathways. The likelihood that sensory inflow changes while balancing under static or dynamic conditions is high, because subjects can pass from a dark to a well-lit environment or from a tactile-guided stabilization to loss of haptic inflow. This review article presents recent data on the temporal events accompanying sensory transition, on which basic information is fragmentary. The processing time from sensory shift to reaching a new steady state includes the time to (a) subtract or integrate sensory inputs; (b) move from allocentric to egocentric reference or vice versa; and (c) adjust the calibration of motor activity in time and amplitude to the new sensory set. We present examples of processes of integration of posture-stabilizing information, and of the respective sensorimotor time-intervals while allowing or occluding vision or adding or subtracting tactile information. These intervals are short, in the order of 1–2 s for different postural conditions, modalities and deliberate or passive shift. They are just longer for haptic than visual shift, just shorter on withdrawal than on addition of stabilizing input, and on deliberate than unexpected mode. The delays are the shortest (for haptic shift) in blind subjects. Since automatic balance stabilization may be vulnerable to sensory-integration delays and to interference from concurrent cognitive tasks in patients with sensorimotor problems, insight into the processing time for balance control represents a critical step in the design of new balance- and locomotion training devices
[Development of a predictive program for microbial growth under various temperature conditions].
Fujikawa, Hiroshi; Yano, Kazuyoshi; Morozumi, Satoshi; Kimura, Bon; Fujii, Tateo
2006-12-01
A predictive program for microbial growth under various temperature conditions was developed with a mathematical model. The model was a new logistic model recently developed by us. The program predicts Escherichia coli growth in broth, Staphylococcus aureus growth and its enterotoxin production in milk, and Vibrio parahaemolyticus growth in broth at various temperature patterns. The program, which was built with Microsoft Excel (Visual Basic Application), is user-friendly; users can easily input the temperature history of a test food and obtain the prediction instantly on the computer screen. The predicted growth and toxin production can be important indices to determine whether a food is microbiologically safe or not. This program should be a useful tool to confirm the microbial safety of commercial foods.
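The structure of such a predictor can be sketched by integrating a growth model over a stored temperature history. This sketch uses the standard logistic equation with a hypothetical square-root (Ratkowsky-type) temperature dependence; the actual program uses the authors' own "new logistic model", and all parameter values below are made up for illustration.

```python
# Sketch of a temperature-history growth predictor: Euler-integrate
# dN/dt = r(T) * N * (1 - N/Nmax) over an hourly temperature record,
# with N in log10 cfu/g. r(T) is a hypothetical square-root model;
# tmin is the notional minimum growth temperature.
def predict_log_count(temps_c, hours_per_step, n0=3.0, nmax=9.0,
                      tmin=5.0, b=0.02):
    logn = n0
    for t_c in temps_c:
        rate = (b * max(t_c - tmin, 0.0)) ** 2  # growth rate per hour at T
        logn += rate * logn * (1.0 - logn / nmax) * hours_per_step
    return logn

# Temperature history: 12 h at 10 °C (slow growth), then 12 h at 30 °C.
history = [10.0] * 12 + [30.0] * 12
final = predict_log_count(history, hours_per_step=1.0)
```

As in the described program, the user supplies only the temperature record of the food; the predicted count (or toxin proxy) then serves as an index of microbiological safety.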
Fractal analyses reveal independent complexity and predictability of gait
Dierick, Frédéric; Nivard, Anne-Laure
2017-01-01
Locomotion is a natural task that has been assessed for decades and used as a proxy to highlight impairments of various origins. So far, most studies have adopted classical linear analyses of spatio-temporal gait parameters. Here, we use more advanced, yet no less practical, non-linear techniques to analyse gait time series of healthy subjects. We aimed at finding indexes related to spatio-temporal gait parameters that are more sensitive than those previously used, with the hope of better identifying abnormal locomotion. We analysed large-scale stride interval time series and mean step width in 34 participants while altering walking direction (forward vs. backward walking) and with or without galvanic vestibular stimulation. The Hurst exponent α and the Minkowski fractal dimension D were computed and interpreted as indexes expressing the predictability and complexity of stride interval time series, respectively. These holistic indexes can easily be interpreted in the framework of optimal movement complexity. We show that α and D accurately capture stride interval changes as a function of the experimental condition. Walking forward exhibited maximal complexity (D) and hence, adaptability. In contrast, walking backward and/or stimulation of the vestibular system decreased D. Furthermore, walking backward increased predictability (α) through a more stereotyped pattern of the stride interval, and galvanic vestibular stimulation reduced predictability. The present study demonstrates the complementary power of the Hurst exponent and the fractal dimension to improve walking classification. Our developments may have immediate applications in rehabilitation, diagnosis, and classification procedures. PMID:29182659
Vazquez-Leal, Hector; Benhammouda, Brahim; Filobello-Nino, Uriel Antonio; Sarmiento-Reyes, Arturo; Jimenez-Fernandez, Victor Manuel; Marin-Hernandez, Antonio; Herrera-May, Agustin Leobardo; Diaz-Sanchez, Alejandro; Huerta-Chua, Jesus
2014-01-01
In this article, we propose the application of a modified Taylor series method (MTSM) for the approximation of nonlinear problems described on finite intervals. The issue of applying the Taylor series method to mixed boundary conditions is circumvented using shooting constants and extra derivatives of the problem. In order to show the benefits of this proposal, three different kinds of problems are solved: a three-point boundary value problem (BVP) of third order with a hyperbolic sine nonlinearity, a two-point BVP for a second-order nonlinear differential equation with an exponential nonlinearity, and a two-point BVP for a third-order nonlinear differential equation with a radical nonlinearity. The results show that the MTSM is capable of generating easily computable and highly accurate approximations for nonlinear equations.
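The shooting idea behind the method can be illustrated on a two-point BVP with a known closed-form solution. This sketch uses the classic problem y'' = 1.5·y², y(0) = 4, y(1) = 1 (whose solution y = 4/(1+x)² gives y'(0) = −8) and finds the unknown initial slope, the "shooting constant", by bisection with a plain RK4 integrator rather than Taylor series.

```python
# Shooting method sketch for the two-point BVP
#   y'' = 1.5*y**2,  y(0) = 4,  y(1) = 1,
# whose exact solution y = 4/(1+x)**2 has initial slope y'(0) = -8.
def integrate(slope, steps=200):
    """RK4-integrate the IVP with trial initial slope; return y(1)."""
    h = 1.0 / steps
    y, yp = 4.0, slope
    f = lambda y_, yp_: (yp_, 1.5 * y_ * y_)  # (y', y'')
    for _ in range(steps):
        k1 = f(y, yp)
        k2 = f(y + 0.5 * h * k1[0], yp + 0.5 * h * k1[1])
        k3 = f(y + 0.5 * h * k2[0], yp + 0.5 * h * k2[1])
        k4 = f(y + h * k3[0], yp + h * k3[1])
        y += h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
        yp += h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
    return y

# Bisect on the initial slope until y(1) hits the far boundary value 1.
# (The bracket [-15, 0] isolates the y'(0) = -8 branch; this BVP has a
# second solution with a much steeper slope outside this bracket.)
lo, hi = -15.0, 0.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if integrate(mid) > 1.0:
        hi = mid
    else:
        lo = mid
slope = 0.5 * (lo + hi)  # ≈ -8
```

The MTSM of the article replaces the numerical integrator with Taylor series expansions, but the role of the shooting constant is the same: an unknown initial derivative adjusted until the far boundary condition is met.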
Carlisle, D.M.; Falcone, J.; Meador, M.R.
2009-01-01
We developed and evaluated empirical models to predict biological condition of wadeable streams in a large portion of the eastern USA, with the ultimate goal of prediction for unsampled basins. Previous work had classified (i.e., altered vs. unaltered) the biological condition of 920 streams based on a biological assessment of macroinvertebrate assemblages. Predictor variables were limited to widely available geospatial data, which included land cover, topography, climate, soils, societal infrastructure, and potential hydrologic modification. We compared the accuracy of predictions of biological condition class based on models with continuous and binary responses. We also evaluated the relative importance of specific groups and individual predictor variables, as well as the relationships between the most important predictors and biological condition. Prediction accuracy and the relative importance of predictor variables were different for two subregions for which models were created. Predictive accuracy in the highlands region improved by including predictors that represented both natural and human activities. Riparian land cover and road-stream intersections were the most important predictors. In contrast, predictive accuracy in the lowlands region was best for models limited to predictors representing natural factors, including basin topography and soil properties. Partial dependence plots revealed complex and nonlinear relationships between specific predictors and the probability of biological alteration. We demonstrate a potential application of the model by predicting biological condition in 552 unsampled basins across an ecoregion in southeastern Wisconsin (USA). Estimates of the likelihood of biological condition of unsampled streams could be a valuable tool for screening large numbers of basins to focus targeted monitoring of potentially unaltered or altered stream segments. ?? Springer Science+Business Media B.V. 2008.
Ramos, Fernando; Robledo, Cristina; Pereira, Arturo; Pedro, Carmen; Benito, Rocío; de Paz, Raquel; Del Rey, Mónica; Insunza, Andrés; Tormo, Mar; Díez-Campelo, María; Xicoy, Blanca; Salido, Eduardo; Sánchez-Del-Real, Javier; Arenillas, Leonor; Florensa, Lourdes; Luño, Elisa; Del Cañizo, Consuelo; Sanz, Guillermo F; María Hernández-Rivas, Jesús
2017-09-01
The International Prognostic Scoring System and its revised form (IPSS-R) are the most widely used indices for prognostic assessment of patients with myelodysplastic syndromes (MDS), but can only partially account for the observed variation in patient outcomes. This study aimed to evaluate the relative contribution of patient condition and mutational status in peripheral blood when added to the IPSS-R, for estimating overall survival and the risk of leukemic transformation in patients with MDS. A prospective cohort (2006-2015) of 200 consecutive patients with MDS were included in the study series and categorized according to the IPSS-R. Patients were further stratified according to patient condition (assessed using the multidimensional Lee index for older adults) and genetic mutations (peripheral blood samples screened using next-generation sequencing). The change in likelihood-ratio was tested in Cox models after adding individual covariates. The addition of the Lee index to the IPSS-R significantly improved prediction of overall survival [hazard ratio (HR) 3.02, 95% confidence interval (CI) 1.96-4.66, P < 0.001), and mutational analysis significantly improved prediction of leukemic evolution (HR 2.64, 1.56-4.46, P < 0.001). Non-leukemic death was strongly linked to patient condition (HR 2.71, 1.72-4.25, P < 0.001), but not to IPSS-R score (P = 0.35) or mutational status (P = 0.75). Adjustment for exposure to disease-modifying therapy, evaluated as a time-dependent covariate, had no effect on the proposed model's predictive ability. In conclusion, patient condition, assessed by the multidimensional Lee index and patient mutational status can improve the prediction of clinical outcomes of patients with MDS already stratified by IPSS-R. © 2017 Wiley Periodicals, Inc.
Arc Jet Facility Test Condition Predictions Using the ADSI Code
NASA Technical Reports Server (NTRS)
Palmer, Grant; Prabhu, Dinesh; Terrazas-Salinas, Imelda
2015-01-01
The Aerothermal Design Space Interpolation (ADSI) tool is used to interpolate databases of previously computed computational fluid dynamics solutions for test articles in a NASA Ames arc jet facility. The arc jet databases are generated with a Navier-Stokes flow solver using previously determined best practices. The arc jet mass flow rates and arc currents used to discretize the database are chosen to span the operating conditions possible in the arc jet, and are based on previous arc jet experimental conditions where possible. The ADSI code is a database interpolation, manipulation, and examination tool that can be used to estimate the stagnation point pressure and heating rate for user-specified values of arc jet mass flow rate and arc current. The interpolation can also be performed in the other direction (predicting the mass flow and current needed to achieve a desired stagnation point pressure and heating rate). ADSI is also used to generate 2-D response surfaces of stagnation point pressure and heating rate as a function of mass flow rate and arc current (or vice versa). Arc jet test data are used to assess the predictive capability of the ADSI code.
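The core database-interpolation step can be sketched as bilinear interpolation on a grid of precomputed conditions: a query point (mass flow, arc current) is answered from the four surrounding knots. The grid values below are invented for illustration; ADSI interpolates actual CFD solutions for the facility.

```python
# Bilinear interpolation of a tabulated response surface: stagnation-point
# heating stored on a (mass flow, arc current) grid. All numbers are
# hypothetical; this only sketches the database-lookup step.
def bilinear(x, y, xs, ys, table):
    i = max(j for j in range(len(xs) - 1) if xs[j] <= x)  # x-cell index
    k = max(j for j in range(len(ys) - 1) if ys[j] <= y)  # y-cell index
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])  # fractional position in cell
    ty = (y - ys[k]) / (ys[k + 1] - ys[k])
    q00, q10 = table[i][k], table[i + 1][k]
    q01, q11 = table[i][k + 1], table[i + 1][k + 1]
    return (q00 * (1 - tx) * (1 - ty) + q10 * tx * (1 - ty)
            + q01 * (1 - tx) * ty + q11 * tx * ty)

mass_flows = [0.1, 0.2, 0.3]   # kg/s (hypothetical grid)
currents = [2000.0, 4000.0]    # A
heating = [[100.0, 180.0],     # W/cm^2 at each (flow, current) knot
           [140.0, 240.0],
           [170.0, 290.0]]
q = bilinear(0.15, 3000.0, mass_flows, currents, heating)  # between knots
```

Running the lookup in "the other direction" (finding the mass flow and current that produce a target pressure and heating rate) amounts to inverting this surface numerically, e.g. by root-finding over the grid.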
Posterior Predictive Checks for Conditional Independence between Response Time and Accuracy
ERIC Educational Resources Information Center
Bolsinova, Maria; Tijmstra, Jesper
2016-01-01
Conditional independence (CI) between response time and response accuracy is a fundamental assumption of many joint models for time and accuracy used in educational measurement. In this study, posterior predictive checks (PPCs) are proposed for testing this assumption. These PPCs are based on three discrepancy measures reflecting different…
Neeman, Noga; Spotila, James R; O'Connor, Michael P
2015-09-07
Variation in the yearly number of sea turtles nesting at rookeries can interfere with population estimates and obscure real population dynamics. Previous theoretical models suggested that this variation in nesting numbers may be driven by changes in resources at the foraging grounds. We developed a physiologically-based model that uses temperatures at foraging sites to predict foraging conditions, resource accumulation, remigration probabilities, and, ultimately, nesting numbers for a stable population of sea turtles. We used this model to explore several scenarios of temperature variation at the foraging grounds, including one-year perturbations and cyclical temperature oscillations. We found that thermally driven resource variation can indeed synchronize nesting in groups of turtles, creating cohorts, but that these cohorts tend to break down over 5-10 years unless regenerated by environmental conditions. Cohorts were broken down faster at lower temperatures. One-year perturbations of low temperature had a synchronizing effect on nesting the following year, while high temperature perturbations tended to delay nesting in a less synchronized way. Cyclical temperatures lead to cyclical responses both in nesting numbers and remigration intervals, with the amplitude and lag of the response depending on the duration of the cycle. Overall, model behavior is consistent with observations at nesting beaches. Future work should focus on refining the model to fit particular nesting populations and testing further whether or not it may be used to predict observed nesting numbers and remigration intervals. Copyright © 2015 Elsevier Ltd. All rights reserved.
An interval model updating strategy using interval response surface models
NASA Astrophysics Data System (ADS)
Fang, Sheng-En; Zhang, Qiu-Hu; Ren, Wei-Xin
2015-08-01
Stochastic model updating provides an effective way of handling uncertainties existing in real-world structures. In general, probabilistic theories, fuzzy mathematics or interval analyses are involved in the solution of inverse problems. However, in practice, probability distributions or membership functions of structural parameters are often unavailable because information about a structure is insufficient. In such cases, an interval model updating procedure offers the advantage of simplicity, since only the upper and lower bounds of parameters and responses are sought. To this end, this study develops the new concept of interval response surface models for efficiently implementing the interval model updating procedure. The overestimation that frequently arises from the use of interval arithmetic can be largely avoided, leading to accurate estimation of parameter intervals. Meanwhile, the formulation of the interval inverse problem is greatly simplified, with a corresponding saving in computational cost. The result is a relatively simple and cost-efficient interval updating process. Lastly, the feasibility and reliability of the developed method are verified against a numerical mass-spring system and a set of experimentally tested steel plates.
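The interval overestimation the abstract refers to is the classic dependency problem: naive interval arithmetic treats repeated occurrences of the same parameter as independent, inflating the bounds. A minimal sketch of the effect (the `Interval` class is a toy illustration, not the paper's formulation):

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __sub__(self, other):
        # Naive interval subtraction assumes the operands are independent.
        return Interval(self.lo - other.hi, self.hi - other.lo)

x = Interval(1.0, 2.0)

# Naive evaluation of x - x: the dependency between the two occurrences
# of x is lost, so the result is [-1, 1] although the true range is {0}.
naive = x - x
print(naive)

# Evaluating the expression as a function of the single variable x gives
# the exact range; interval response surface models aim to recast the
# input-output relation so that interval evaluation stays this tight.
values = [v - v for v in (x.lo, x.hi)]
exact = Interval(min(values), max(values))
print(exact)
```

The response surface idea is, roughly, to rewrite the parameter-response relation in a form that interval evaluation can handle without this artificial widening.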
Prediction of Flows about Forebodies at High-Angle-of-Attack Dynamic Conditions
NASA Technical Reports Server (NTRS)
Fremaux, C. M.; vanDam, C. P.; Saephan, S.; DalBello, T.
2003-01-01
A Reynolds-averaged Navier-Stokes method developed for rotorcraft-type flow problems is applied to predict the forces and moments on forebody models at high-angle-of-attack dynamic conditions and to provide insight into the flow characteristics at these conditions. Wind-tunnel results from rotary testing of generic forebody models conducted by NASA Langley and DERA are used for comparison. This paper focuses on the steady-state flow problem.
Tanaka, Tomohiro; Nishida, Satoshi
2015-01-01
The neuronal processes that underlie visual search can be divided into two stages: target discrimination and saccade preparation/generation. This predicts that the duration of the prediscrimination stage varies with search difficulty across different stimulus conditions, whereas the duration of the latter, postdiscrimination stage is stimulus invariant. However, recent studies have suggested that the duration of the postdiscrimination interval changes with different stimulus conditions. To address whether and how the visual stimulus affects the postdiscrimination interval, we recorded single-neuron activity in the lateral intraparietal area (LIP) while monkeys (Macaca fuscata) performed a color-singleton search involving four stimulus conditions that differed in luminance (Bright vs. Dim) and target-distractor color similarity (Easy vs. Difficult). We specifically focused on comparing activities between the Bright-Difficult and Dim-Easy conditions, in which the visual stimuli were considerably different but the mean reaction times were indistinguishable. This allowed us to examine neuronal activity when the difference in search speed between stimulus conditions was minimal. We found that not only prediscrimination but also postdiscrimination intervals varied across stimulus conditions: the postdiscrimination interval was longer in the Dim-Easy condition than in the Bright-Difficult condition. Further analysis revealed that the postdiscrimination interval might vary with stimulus luminance. A computer simulation using an accumulation-to-threshold model suggested that the luminance-related difference in visual response strength at discrimination time could be the cause of the different postdiscrimination intervals. PMID:25995344
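The accumulation-to-threshold account can be sketched with a toy accumulator: a weaker (dimmer) visual response accumulates evidence more slowly and therefore crosses a fixed decision threshold later. All rates, thresholds, and noise levels below are illustrative, not the study's fitted values:

```python
import random

def time_to_threshold(rate, threshold=1.0, dt=0.001, noise_sd=0.02, seed=0):
    """Noisy evidence accumulator; returns time (s) to reach the threshold."""
    rng = random.Random(seed)
    evidence, t = 0.0, 0.0
    while evidence < threshold:
        evidence += rate * dt + rng.gauss(0.0, noise_sd) * dt ** 0.5
        t += dt
    return t

# Hypothetical accumulation rates: a bright display evokes a stronger
# visual response, so evidence accumulates faster.
t_bright = time_to_threshold(rate=8.0)
t_dim = time_to_threshold(rate=5.0)
print(f"bright: {t_bright:.3f} s, dim: {t_dim:.3f} s")
```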
Music enhances performance and perceived enjoyment of sprint interval exercise.
Stork, Matthew J; Kwan, Matthew Y W; Gibala, Martin J; Martin Ginis, Kathleen A
2015-05-01
Interval exercise training can elicit physiological adaptations similar to those of traditional endurance training, but with a reduced time commitment. However, the intense nature of specific protocols, particularly the "all-out" efforts characteristic of sprint interval training (SIT), may be perceived as aversive. The purpose of this study was to determine whether listening to self-selected music can reduce the potential aversiveness of an acute session of SIT by improving affect, motivation, and enjoyment, and to examine the effects of music on performance. Twenty moderately active adults (22 ± 4 yr) unfamiliar with interval exercise completed an acute session of SIT under two different conditions: music and no music. The exercise consisted of four 30-s "all-out" Wingate Anaerobic Test bouts on a cycle ergometer, separated by 4 min of rest. Peak and mean power output, RPE, affect, task motivation, and perceived enjoyment of the exercise were measured. Mixed-effects models were used to evaluate changes in dependent measures over time and between the two conditions. Peak and mean power over the course of the exercise session were higher in the music condition (coefficient = 49.72 [SE = 13.55] and coefficient = 23.65 [SE = 11.30]; P < 0.05). A significant time-by-condition effect emerged for peak power (coefficient = -12.31 [SE = 4.95]; P < 0.05). There were no between-condition differences in RPE, affect, or task motivation. Perceived enjoyment increased over time and was consistently higher in the music condition (coefficient = 7.00 [SE = 3.05]; P < 0.05). Music enhances in-task performance and enjoyment of an acute bout of SIT. Listening to music during intense interval exercise may be an effective strategy for facilitating participation in, and adherence to, this form of training.
MEETING DATA QUALITY OBJECTIVES WITH INTERVAL INFORMATION
Immunoassay test kits are promising technologies for measuring analytes under field conditions. Frequently, these field-test kits report the analyte concentrations as falling in an interval between minimum and maximum values. Many project managers use field-test kits only for scr...
Gomes, Hilary; Barrett, Sophia; Duff, Martin; Barnhardt, Jack; Ritter, Walter
2008-03-01
We examined the impact of perceptual load by manipulating the interstimulus interval (ISI) in two auditory selective attention studies that varied in the difficulty of the target discrimination. In the paradigm, channels were separated by frequency, and target/deviant tones were softer in intensity. Three ISI conditions were presented: fast (300 ms), medium (600 ms), and slow (900 ms). Behavioral (accuracy and RT) and electrophysiological (Nd, P3b) measures were observed. In both studies, participants showed poorer accuracy in the fast ISI condition than in the slow condition, suggesting that ISI affected task difficulty. However, none of the three measures of processing examined (Nd amplitude, P3b amplitude elicited by unattended deviant stimuli, or false alarms to unattended deviants) was affected by ISI in the manner predicted by perceptual load theory. The prediction based on perceptual load theory, that there would be more processing of irrelevant stimuli under conditions of low as compared to high perceptual load, was not supported in these auditory studies. Task difficulty/perceptual load affects the processing of irrelevant stimuli in the auditory modality differently than predicted by perceptual load theory, and perhaps differently than in the visual modality.
Predicted Weather Display and Decision Support Interface for Flight Deck
NASA Technical Reports Server (NTRS)
Johnson, Walter W. (Inventor); Wong, Dominic G. (Inventor); Koteskey, Robert W. (Inventor); Wu, Shu-Chieh (Inventor)
2017-01-01
A system and method for providing visual depictions of a predictive weather forecast for en route vehicle trajectory planning. The method includes displaying weather information on a graphical display, displaying vehicle position information on the graphical display, selecting a predictive interval, displaying predictive weather information for the predictive interval on the graphical display, and displaying predictive vehicle position information for the predictive interval on the graphical display, such that the predictive vehicle position information is displayed relative to the predictive weather information, for en route trajectory planning.
Ward, Ryan D; Gallistel, C R; Jensen, Greg; Richards, Vanessa L; Fairhurst, Stephen; Balsam, Peter D
2012-07-01
In a conditioning protocol, the onset of the conditioned stimulus (CS) provides information about when to expect reinforcement (the unconditioned stimulus, US). There are two sources of information from the CS in a delay conditioning paradigm in which the CS-US interval is fixed. The first depends on the informativeness, the degree to which CS onset reduces the average expected time to onset of the next US. The second depends only on how precisely a subject can represent a fixed-duration interval (the temporal Weber fraction). In three experiments with mice, we tested the differential impact of these two sources of information on the rate of acquisition of conditioned responding (CS-US associability). In Experiment 1, we showed that associability (the inverse of trials to acquisition) increased in proportion to informativeness. In Experiment 2, we showed that fixing the duration of the US-US interval or the CS-US interval or both had no effect on associability. In Experiment 3, we equated the increase in information produced by varying the C/T ratio with the increase produced by fixing the duration of the CS-US interval. Associability increased with increased informativeness, but, as in Experiment 2, fixing the CS-US duration had no effect on associability. These results are consistent with the view that CS-US associability depends on the increased rate of reward signaled by CS onset. The results also provide further evidence that conditioned responding is temporally controlled when it emerges.
Sun, Libo; Wan, Ying
2018-04-22
Conditional power and predictive power provide estimates of the probability of success at the end of the trial based on information from the interim analysis. The observed value of the time-to-event endpoint at the interim analysis can be biased for the true treatment effect due to early censoring, leading to biased estimates of conditional power and predictive power. In such cases, the estimates and inference for this right-censored primary endpoint are enhanced by incorporating a fully observed auxiliary variable. We assume a bivariate normal distribution for the transformed primary variable and a correlated auxiliary variable. Simulation studies are conducted that not only show enhanced conditional power and predictive power but also provide a framework for a more efficient futility interim analysis, in terms of improved estimator accuracy, smaller inflation of the type II error, and optimal timing of such an analysis. We also illustrate the new approach with a real clinical trial example. Copyright © 2018 John Wiley & Sons, Ltd.
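For context, conditional power under the common "current trend" assumption can be computed from the interim z-statistic and information fraction via the B-value formulation (Lan and Wittes). This generic sketch does not include the authors' auxiliary-variable enhancement:

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def conditional_power(z_interim, info_frac, z_alpha=1.96):
    """Conditional power at information fraction t, assuming the observed
    treatment effect (current trend) continues for the rest of the trial."""
    b = z_interim * sqrt(info_frac)          # B-value
    drift = b / info_frac                    # estimated drift parameter
    num = z_alpha - b - drift * (1.0 - info_frac)
    return 1.0 - norm_cdf(num / sqrt(1.0 - info_frac))

print(round(conditional_power(2.5, 0.5), 3))  # strong interim signal
print(round(conditional_power(0.5, 0.5), 3))  # weak signal, futility zone
```

A small conditional power at the interim (second call) is the kind of quantity a futility rule would act on; censoring-induced bias in `z_interim` is exactly what the auxiliary-variable approach aims to correct.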
Application of Response Surface Methods To Determine Conditions for Optimal Genomic Prediction
Howard, Réka; Carriquiry, Alicia L.; Beavis, William D.
2017-01-01
An epistatic genetic architecture can have a significant impact on the prediction accuracies of genomic prediction (GP) methods. Machine learning methods predict traits with epistatic genetic architectures more accurately than statistical methods based on additive mixed linear models. The differences between these types of GP methods suggest a diagnostic for revealing the genetic architectures underlying traits of interest. In addition to genetic architecture, the performance of GP methods may be influenced by the sample size of the training population, the number of QTL, and the proportion of phenotypic variability due to genotypic variability (heritability). The possible values for these factors, and the number of combinations of factor levels that influence the performance of GP methods, can be large. Thus, efficient methods for identifying combinations of factor levels that produce the most accurate GPs are needed. Herein, we employ response surface methods (RSMs) to find the experimental conditions that produce the most accurate GPs. We illustrate RSM with an example of simulated doubled haploid populations and identify the combination of factors that maximizes the difference between the prediction accuracies of best linear unbiased prediction (BLUP) and support vector machine (SVM) GP methods. The greatest impact on the response is due to the genetic architecture of the population, the heritability of the trait, and the sample size. When epistasis is responsible for all of the genotypic variance, heritability is equal to one, and the sample size of the training population is large, the advantage of using the SVM method over the BLUP method is greatest. However, except for values close to the maximum, most of the response surface shows little difference between the methods. We also determined that the conditions resulting in the greatest prediction accuracy for BLUP occur when the genetic architecture consists solely of additive effects and heritability is equal to one. PMID
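The RSM workflow described (fit a second-order model to simulated responses, then locate the factor combination that maximizes it) can be sketched on synthetic data; the factors, coefficients, and response below are invented for illustration, not the study's simulation results:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical factors: heritability h2 and scaled training size n.
# Response: difference in prediction accuracy (SVM minus BLUP),
# simulated here from a known quadratic surface plus noise.
h2 = rng.uniform(0.1, 1.0, 200)
n = rng.uniform(0.1, 1.0, 200)
true_resp = 0.3 * h2 + 0.2 * n - 0.15 * h2**2 - 0.1 * n**2 + 0.05 * h2 * n
resp = true_resp + rng.normal(0.0, 0.01, 200)

# Second-order response surface:
# b0 + b1*h2 + b2*n + b3*h2^2 + b4*n^2 + b5*h2*n
X = np.column_stack([np.ones_like(h2), h2, n, h2**2, n**2, h2 * n])
beta, *_ = np.linalg.lstsq(X, resp, rcond=None)

# Locate the maximizing factor combination on a grid (the stationary
# point could also be found analytically from the quadratic terms).
g = np.linspace(0.1, 1.0, 91)
H, N = np.meshgrid(g, g)
surface = (beta[0] + beta[1] * H + beta[2] * N
           + beta[3] * H**2 + beta[4] * N**2 + beta[5] * H * N)
i, j = np.unravel_index(np.argmax(surface), surface.shape)
print(f"max response at h2={H[i, j]:.2f}, n={N[i, j]:.2f}")
```

For this synthetic surface the gradient is positive at the corner of the factor space, so the fitted maximum sits at high heritability and large sample size, mirroring the qualitative finding in the abstract.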
Familiar Tonal Context Improves Accuracy of Pitch Interval Perception.
Graves, Jackson E; Oxenham, Andrew J
2017-01-01
A fundamental feature of everyday music perception is sensitivity to familiar tonal structures such as musical keys. Many studies have suggested that a tonal context can enhance the perception and representation of pitch. Most of these studies have measured response time, which may reflect expectancy as opposed to perceptual accuracy. We instead used a performance-based measure, comparing participants' ability to discriminate between a "small, in-tune" interval and a "large, mistuned" interval in conditions that involved familiar tonal relations (diatonic, or major, scale notes), unfamiliar tonal relations (whole-tone or mistuned-diatonic scale notes), repetition of a single pitch, or no tonal context. The context was established with a brief sequence of tones in Experiment 1 (melodic context), and a cadence-like two-chord progression in Experiment 2 (harmonic context). In both experiments, performance significantly differed across the context conditions, with a diatonic context providing a significant advantage over no context; however, no correlation with years of musical training was observed. The diatonic tonal context also provided an advantage over the whole-tone scale context condition in Experiment 1 (melodic context), and over the mistuned scale or repetition context conditions in Experiment 2 (harmonic context). However, the relatively small benefit to performance suggests that the main advantage of tonal context may be priming of expected stimuli, rather than enhanced accuracy of pitch interval representation.
Prediction of Hydrolysis Products of Organic Chemicals under Environmental pH Conditions.
Tebes-Stevens, Caroline; Patel, Jay M; Jones, W Jack; Weber, Eric J
2017-05-02
Cheminformatics-based software tools can predict the molecular structure of transformation products using a library of transformation reaction schemes. This paper presents the development of such a library for abiotic hydrolysis of organic chemicals under environmentally relevant conditions. The hydrolysis reaction schemes in the library encode the process science gathered from peer-reviewed literature and regulatory reports. Each scheme has been ranked on a scale of one to six based on the median half-life in a data set compiled from literature-reported hydrolysis rates. These ranks are used to predict the most likely transformation route when more than one structural fragment susceptible to hydrolysis is present in a molecule of interest. Separate rank assignments are established for pH 5, 7, and 9 to represent standard conditions in hydrolysis studies required for registration of pesticides in Organisation for Economic Co-operation and Development (OECD) member countries. The library is applied to predict the likely hydrolytic transformation products for two lists of chemicals, one representative of chemicals used in commerce and the other specific to pesticides, to evaluate which hydrolysis reaction pathways are most likely to be relevant for organic chemicals found in the natural environment.
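The rank-based routing rule can be sketched as a lookup: when a molecule contains several hydrolyzable fragments, the scheme with the best (lowest) rank, i.e. the shortest median half-life at the given pH, is predicted as the dominant route. The scheme names and ranks below are invented, not values from the library:

```python
# Hypothetical scheme ranks (1 = fastest-hydrolyzing, 6 = slowest)
# at pH 5, 7, and 9.
SCHEME_RANKS = {
    "acid_halide": {5: 1, 7: 1, 9: 1},
    "ester":       {5: 4, 7: 3, 9: 2},
    "amide":       {5: 6, 7: 6, 9: 5},
}

def most_likely_route(fragments, pH):
    """Pick the fragment whose scheme has the lowest rank at the given pH."""
    return min(fragments, key=lambda f: SCHEME_RANKS[f][pH])

# A molecule with both an ester and an amide: the ester hydrolyzes
# faster at pH 7, so it is the predicted transformation route.
print(most_likely_route(["ester", "amide"], pH=7))  # prints "ester"
```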
Variations in rupture process with recurrence interval in a repeated small earthquake
Vidale, J.E.; Ellsworth, W.L.; Cole, A.; Marone, Chris
1994-01-01
In theory and in laboratory experiments, friction on sliding surfaces such as rock, glass and metal increases with time since the previous episode of slip. This time dependence is a central pillar of the friction laws widely used to model earthquake phenomena. On natural faults, other properties, such as rupture velocity, porosity and fluid pressure, may also vary with the recurrence interval. Eighteen repetitions of the same small earthquake, separated by intervals ranging from a few days to several years, allow us to test these laboratory predictions in situ. The events with the longest time since the previous earthquake tend to have about 15% larger seismic moment than those with the shortest intervals, although this trend is weak. In addition, the rupture durations of the events with the longest recurrence intervals are more than a factor of two shorter than for the events with the shortest intervals. Both decreased duration and increased friction are consistent with progressive fault healing during the time of stationary contact.
Binary Interval Search: a scalable algorithm for counting interval intersections.
Layer, Ryan M; Skadron, Kevin; Robins, Gabriel; Hall, Ira M; Quinlan, Aaron R
2013-01-01
The comparison of diverse genomic datasets is fundamental to understand genome biology. Researchers must explore many large datasets of genome intervals (e.g. genes, sequence alignments) to place their experimental results in a broader context and to make new discoveries. Relationships between genomic datasets are typically measured by identifying intervals that intersect, that is, they overlap and thus share a common genome interval. Given the continued advances in DNA sequencing technologies, efficient methods for measuring statistically significant relationships between many sets of genomic features are crucial for future discovery. We introduce the Binary Interval Search (BITS) algorithm, a novel and scalable approach to interval set intersection. We demonstrate that BITS outperforms existing methods at counting interval intersections. Moreover, we show that BITS is intrinsically suited to parallel computing architectures, such as graphics processing units by illustrating its utility for efficient Monte Carlo simulations measuring the significance of relationships between sets of genomic intervals. https://github.com/arq5x/bits.
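The counting principle behind BITS can be sketched compactly: a database interval fails to intersect a query [s, e] only if it ends before s or starts after e, so sorting the database's start and end coordinates once allows each query to be answered with two binary searches. This is a minimal sketch of that idea, not the authors' implementation:

```python
from bisect import bisect_left, bisect_right

def count_intersections(query, db):
    """Count db intervals overlapping each query interval (closed intervals).

    A db interval intersects [s, e] unless it ends before s or starts
    after e, so two binary searches per query suffice.
    """
    starts = sorted(b[0] for b in db)
    ends = sorted(b[1] for b in db)
    total = 0
    for s, e in query:
        ending_before = bisect_left(ends, s)                 # end < s
        starting_after = len(db) - bisect_right(starts, e)   # start > e
        total += len(db) - ending_before - starting_after
    return total

db = [(1, 5), (4, 10), (12, 20)]
query = [(3, 6), (11, 11)]
print(count_intersections(query, db))  # → 2
```

Because each query is independent, this counting scheme parallelizes naturally, which is the property the authors exploit on graphics processing units.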
The Use of Commercial Remote Sensing Predicting Helicopter Brownout Conditions
2007-09-01
A sequential solution for anisotropic total variation image denoising with interval constraints
NASA Astrophysics Data System (ADS)
Xu, Jingyan; Noo, Frédéric
2017-09-01
We show that two problems involving the anisotropic total variation (TV) and interval constraints on the unknown variables admit, under some conditions, a simple sequential solution. Problem 1 is a constrained TV penalized image denoising problem; problem 2 is a constrained fused lasso signal approximator. The sequential solution entails finding first the solution to the unconstrained problem, and then applying a thresholding to satisfy the constraints. If the interval constraints are uniform, this sequential solution solves problem 1. If the interval constraints furthermore contain zero, the sequential solution solves problem 2. Here uniform interval constraints refer to all unknowns being constrained to the same interval. A typical example of application is image denoising in x-ray CT, where the image intensities are non-negative as they physically represent linear attenuation coefficient in the patient body. Our results are simple yet seem unknown; we establish them using the Karush-Kuhn-Tucker conditions for constrained convex optimization.
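The sequential solution's second stage is particularly simple: given the minimizer of the unconstrained TV-penalized problem (from any solver), uniform interval constraints are enforced by elementwise clipping. The sketch below shows only that thresholding step, with a stand-in array in place of a real TV solver's output:

```python
import numpy as np

def apply_interval_constraint(u_unconstrained, lo, hi):
    """Stage 2 of the sequential solution: clip the unconstrained
    TV-denoised image to the uniform interval [lo, hi]."""
    return np.clip(u_unconstrained, lo, hi)

# Stand-in for a solution returned by an unconstrained TV solver; in
# x-ray CT the natural constraint is non-negativity of the attenuation
# coefficients (lo = 0, hi unbounded).
u = np.array([-0.02, 0.10, 0.35, 0.41, -0.01, 0.28])
u_constrained = apply_interval_constraint(u, lo=0.0, hi=np.inf)
print(u_constrained)  # negative attenuation values mapped to 0
```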
Remote sensing for prediction of 1-year post-fire ecosystem condition
Leigh B. Lentile; Alistair M. S. Smith; Andrew T. Hudak; Penelope Morgan; Michael J. Bobbitt; Sarah A. Lewis; Peter R. Robichaud
2009-01-01
Appropriate use of satellite data in predicting >1 year post-fire effects requires remote measurement of surface properties that can be mechanistically related to ground measures of post-fire condition. The present study of burned ponderosa pine (Pinus ponderosa) forests in the Black Hills of South Dakota evaluates whether immediate fractional cover estimates of...
Sato, Takako; Zaitsu, Kei; Tsuboi, Kento; Nomura, Masakatsu; Kusano, Maiko; Shima, Noriaki; Abe, Shuntaro; Ishii, Akira; Tsuchihashi, Hitoshi; Suzuki, Koichi
2015-05-01
Estimation of postmortem interval (PMI) is an important goal in judicial autopsy. Although many approaches can estimate PMI through physical findings and biochemical tests, accurate PMI calculation by these conventional methods remains difficult because PMI is readily affected by surrounding conditions, such as ambient temperature and humidity. In this study, Sprague-Dawley (SD) rats (10 weeks) were sacrificed by suffocation, and blood was collected by dissection at various time intervals (0, 3, 6, 12, 24, and 48 h; n = 6) after death. A total of 70 endogenous metabolites were detected in plasma by gas chromatography-tandem mass spectrometry (GC-MS/MS). Each time group was separated from each other on the principal component analysis (PCA) score plot, suggesting that the various endogenous metabolites changed with time after death. To prepare a prediction model of a PMI, a partial least squares (or projection to latent structure, PLS) regression model was constructed using the levels of significantly different metabolites determined by variable importance in the projection (VIP) score and the Kruskal-Wallis test (P < 0.05). Because the constructed PLS regression model could successfully predict each PMI, this model was validated with another validation set (n = 3). In conclusion, plasma metabolic profiling demonstrated its ability to successfully estimate PMI under a certain condition. This result can be considered to be the first step for using the metabolomics method in future forensic casework.
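The modeling step described (PLS regression of PMI on metabolite levels) can be sketched with a minimal single-block PLS1 (NIPALS-style) implementation on synthetic data; the dimensions, loadings, and noise level are invented and are not the study's rat data:

```python
import numpy as np

def pls1_fit(X, y, n_components=1):
    """Minimal PLS1 (NIPALS); returns coefficients and centering terms."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)
        t = Xc @ w
        tt = t @ t
        p = Xc.T @ t / tt
        q = (yc @ t) / tt
        Xc = Xc - np.outer(t, p)   # deflate X
        yc = yc - q * t            # deflate y
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.inv(P.T @ W) @ Q   # regression coefficients
    return B, X.mean(axis=0), y.mean()

rng = np.random.default_rng(0)
# Synthetic data: 30 "plasma samples" x 10 "metabolites" whose levels
# drift with the postmortem interval (hours).
pmi = rng.uniform(0, 48, 30)
loadings = rng.normal(0, 1, 10)
X = np.outer(pmi, loadings) + rng.normal(0, 2.0, (30, 10))

B, x_mean, y_mean = pls1_fit(X, pmi, n_components=2)
pred = (X - x_mean) @ B + y_mean
r = np.corrcoef(pred, pmi)[0, 1]
print(f"correlation between predicted and true PMI: {r:.2f}")
```

In practice the model would be fit on a training set and checked against a held-out validation set, as the authors do with their additional group of rats.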
NASA Astrophysics Data System (ADS)
Crosby, S. C.; O'Reilly, W. C.; Guza, R. T.
2016-02-01
Accurate, unbiased, high-resolution (in space and time) nearshore wave predictions are needed to drive models of beach erosion, coastal flooding, and alongshore transport of sediment, biota and pollutants. On highly sheltered shorelines, wave predictions are sensitive to the directions of onshore propagating waves, and nearshore model prediction error is often dominated by uncertainty in offshore boundary conditions. Offshore islands and shoals, and coastline curvature, create complex sheltering patterns over the 250km span of southern California (SC) shoreline. Here, regional wave model skill in SC was compared for different offshore boundary conditions created using offshore buoy observations and global wave model hindcasts (National Oceanographic and Atmospheric Administration Wave Watch 3, WW3). Spectral ray-tracing methods were used to transform incident offshore swell (0.04-0.09Hz) energy at high directional resolution (1-deg). Model skill is assessed for predictions (wave height, direction, and alongshore radiation stress) at 16 nearshore buoy sites between 2000 and 2009. Model skill using buoy-derived boundary conditions is higher than with WW3-derived boundary conditions. Buoy-driven nearshore model results are similar with various assumptions about the true offshore directional distribution (maximum entropy, Bayesian direct, and 2nd derivative smoothness). Two methods combining offshore buoy observations with WW3 predictions in the offshore boundary condition did not improve nearshore skill above buoy-only methods. A case example at Oceanside harbor shows strong sensitivity of alongshore sediment transport predictions to different offshore boundary conditions. Despite this uncertainty in alongshore transport magnitude, alongshore gradients in transport (e.g. the location of model accretion and erosion zones) are determined by the local bathymetry, and are similar for all predictions.
NASA Astrophysics Data System (ADS)
Rustic, G. T.; Polissar, P. J.; Ravelo, A. C.; White, S. M.
2017-12-01
The El Niño Southern Oscillation (ENSO) plays a dominant role in Earth's climate variability. Paleoceanographic evidence suggests that ENSO has changed in the past, and these changes have been linked to large-scale climatic shifts. While a close relationship between ENSO evolution and climate boundary conditions has been predicted, testing these predictions remains challenging. These climate boundary conditions, including insolation, the mean surface temperature gradient of the tropical Pacific, global ice volume, and tropical thermocline depth, often co-vary and may work together to suppress or enhance the ocean-atmosphere feedbacks that drive ENSO variability. Furthermore, suitable paleo-archives spanning multiple climate states are sparse. We have aimed to test ENSO response to changing climate boundary conditions by generating new reconstructions of mixed-layer variability from sedimentary archives spanning the last three glacial-interglacial cycles from the Central Tropical Pacific Line Islands, where El Niño is strongly expressed. We analyzed Mg/Ca ratios from individual foraminifera to reconstruct mixed-layer variability at discrete time intervals representing combinations of climatic boundary conditions from the middle Holocene to Marine Isotope Stage (MIS) 8. We observe changes in the mixed-layer temperature variability during MIS 5 and during the previous interglacial (MIS 7) showing significant reductions in ENSO amplitude. Differences in variability during glacial and interglacial intervals are also observed. Additionally, we reconstructed mixed-layer and thermocline conditions using multi-species Mg/Ca and stable isotope measurements to more fully characterize the state of the Central Tropical Pacific during these intervals. These reconstructions provide us with a unique view of Central Tropical Pacific variability and water-column structure at discrete intervals under varying boundary climate conditions with which to assess factors that shape ENSO
Confidence Intervals from Realizations of Simulated Nuclear Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Younes, W.; Ratkiewicz, A.; Ressler, J. J.
2017-09-28
Various statistical techniques are discussed that can be used to assign a level of confidence to the predictions of models that depend on input data with known uncertainties and correlations. The particular techniques reviewed in this paper are: 1) random realizations of the input data using Monte-Carlo methods, 2) the construction of confidence intervals to assess the reliability of model predictions, and 3) resampling techniques to impose statistical constraints on the input data based on additional information. These techniques are illustrated with a calculation of the k_eff value, based on the 235U(n,f) and 239Pu(n,f) cross sections.
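The first two techniques can be sketched generically: draw correlated random realizations of the input data, push them through the model, and form a percentile confidence interval on the output. The toy model and covariance below are illustrative and are not the k_eff calculation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative input data: two cross-section-like parameters with
# known uncertainties and a positive correlation.
mean = np.array([1.20, 0.80])
cov = np.array([[0.010, 0.004],
                [0.004, 0.0025]])

def model(x):
    """Toy response standing in for a transport calculation."""
    return x[..., 0] * x[..., 1] ** 0.5

# 1) Random realizations of the correlated input data.
samples = rng.multivariate_normal(mean, cov, size=20000)
outputs = model(samples)

# 2) Percentile-based 95% confidence interval on the model prediction.
lo, hi = np.percentile(outputs, [2.5, 97.5])
point = float(model(mean))
print(f"model prediction: {point:.3f}, 95% CI: [{lo:.3f}, {hi:.3f}]")
```

The third technique (resampling under additional constraints) would filter or reweight `samples` so that a derived quantity matches an independent measurement before recomputing the interval.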
Clénet, Didier
2018-04-01
Due to their thermosensitivity, most vaccines must be kept refrigerated from production to use. To successfully carry out global immunization programs, ensuring the stability of vaccines is crucial. In this context, two important issues are critical, namely: (i) predicting vaccine stability and (ii) preventing product damage due to excessive temperature excursions outside of the recommended storage conditions (cold chain break). We applied a combination of advanced kinetics and statistical analyses on vaccine forced degradation data to accurately describe the loss of antigenicity for a multivalent freeze-dried inactivated virus vaccine containing three variants. The screening of large amounts of kinetic models combined with a statistical model selection approach resulted in the identification of two-step kinetic models. Predictions based on kinetic analysis and experimental stability data were in agreement, with approximately five percentage points difference from real values for long-term stability storage conditions, after excursions of temperature and during experimental shipments of freeze-dried products. Results showed that modeling a few months of forced degradation can be used to predict various time and temperature profiles endured by vaccines, i.e. long-term stability, short time excursions outside the labeled storage conditions or shipments at ambient temperature, with high accuracy. Pharmaceutical applications of the presented kinetics-based approach are discussed. Copyright © 2018 The Author. Published by Elsevier B.V. All rights reserved.
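A two-step kinetic model of the kind selected by this approach can be sketched as sequential first-order reactions A -> B -> C with Arrhenius temperature dependence, integrated over an arbitrary time-temperature profile; all parameters below are invented, not the vaccine's fitted values:

```python
from math import exp

R = 8.314  # gas constant, J/(mol K)

def arrhenius(k_ref, Ea, T, T_ref=278.15):
    """Rate constant at temperature T (K), referenced to storage at 5 degC."""
    return k_ref * exp(-Ea / R * (1.0 / T - 1.0 / T_ref))

def antigenicity(profile, k1_ref=0.002, Ea1=80e3, k2_ref=0.001, Ea2=100e3):
    """Integrate a two-step model A -> B -> C over a (days, temp K) profile.

    A and B are taken as antigenic and C as inactive; the rates and
    activation energies are invented for illustration.
    """
    A, B = 1.0, 0.0
    dt = 0.01  # integration step, days
    for days, T in profile:
        k1, k2 = arrhenius(k1_ref, Ea1, T), arrhenius(k2_ref, Ea2, T)
        for _ in range(int(days / dt)):
            dA = -k1 * A * dt
            dB = (k1 * A - k2 * B) * dt
            A += dA
            B += dB
    return A + B

cold_chain = [(365, 278.15)]                                  # 1 year at 5 degC
with_excursion = [(180, 278.15), (7, 298.15), (178, 278.15)]  # 1-week break at 25 degC
print(f"cold chain:     {antigenicity(cold_chain):.3f}")
print(f"with excursion: {antigenicity(with_excursion):.3f}")
```

Comparing the two profiles shows how such a model lets one quantify the antigenicity cost of a cold chain break of given duration and temperature.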
Constraint-based Attribute and Interval Planning
NASA Technical Reports Server (NTRS)
Jonsson, Ari; Frank, Jeremy
2013-01-01
In this paper we describe Constraint-based Attribute and Interval Planning (CAIP), a paradigm for representing and reasoning about plans. The paradigm enables the description of planning domains with time, resources, concurrent activities, mutual exclusions among sets of activities, disjunctive preconditions and conditional effects. We provide a theoretical foundation for the paradigm, based on temporal intervals and attributes. We then show how the plans are naturally expressed by networks of constraints, and show that the process of planning maps directly to dynamic constraint reasoning. In addition, we define compatibilities, a compact mechanism for describing planning domains. We describe how this framework can incorporate the use of constraint reasoning technology to improve planning. Finally, we describe EUROPA, an implementation of the CAIP framework.
Kerlikowske, Karla; Scott, Christopher G; Mahmoudzadeh, Amir P; Ma, Lin; Winham, Stacey; Jensen, Matthew R; Wu, Fang Fang; Malkov, Serghei; Pankratz, V Shane; Cummings, Steven R; Shepherd, John A; Brandt, Kathleen R; Miglioretti, Diana L; Vachon, Celine M
2018-06-05
assessed on tomosynthesis, an emerging breast screening method. Automated and clinical BI-RADS density similarly predict interval and screen-detected cancer risk, suggesting that either measure may be used to inform women of their breast density. National Cancer Institute.
Alternative Confidence Interval Methods Used in the Diagnostic Accuracy Studies
Erdoğan, Semra; Gülhan, Orekıcı Temel
2016-01-01
Background/Aim. It is necessary to decide whether newly developed diagnostic methods perform better than the standard or reference test. To decide whether a new diagnostic test is better than the gold standard test or an imperfect standard test, the differences of the estimated sensitivities/specificities are calculated from sample data. However, to generalize this value to the population, it should be reported with confidence intervals. The aim of this study is to evaluate, in a clinical application, the confidence interval methods developed for the difference between two dependent sensitivity/specificity values. Materials and Methods. In this study, confidence interval methods such as Asymptotic Intervals, Conditional Intervals, Unconditional Intervals, Score Intervals, and Nonparametric Methods Based on Relative Effects Intervals are used. As a clinical application, data from the diagnostic study by Dickel et al. (2010) are taken as a sample. Results. The results of the alternative confidence interval methods for Nickel Sulfate, Potassium Dichromate, and Lanolin Alcohol are given as a table. Conclusion. When choosing among confidence interval methods, researchers have to consider whether the comparison involves a single proportion or the difference between two dependent binary proportions, the correlation between the rates in the two dependent proportions, and the sample sizes. PMID:27478491
Zhu, Guang-Hui; Jia, Zheng-Jun; Yu, Xiao-Jun; Wu, Ku-Sheng; Chen, Lu-Shi; Lv, Jun-Yao; Eric Benbow, M
2017-05-01
Preadult development of necrophagous flies is commonly recognized as an accurate method for estimating the minimum postmortem interval (PMImin). However, once the PMImin exceeds the duration of preadult development, the method is less accurate. Recently, fly puparial hydrocarbons were found to change significantly with weathering time in the field, indicating their potential use for PMImin estimates. However, additional studies are required to demonstrate how the weathering varies among species. In this study, the puparia of Chrysomya rufifacies were placed in the field to undergo natural weathering in order to characterize hydrocarbon composition change over time. We found that weathering of the puparial hydrocarbons was regular and highly predictable in the field. For most of the hydrocarbons, the abundance decreased significantly and could be modeled using a modified exponential function. In addition, the weathering rate was significantly correlated with hydrocarbon class: the rate for 2-methyl alkanes was significantly lower than that for alkenes and internal methyl alkanes, and the rate for alkenes was higher than that for the other two classes. For mono-methyl alkanes, the rate was significantly and positively associated with carbon chain length and branch position. These results indicate that puparial hydrocarbon weathering is highly predictable and can be used for estimating long-term PMImin.
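A minimal illustration of this kind of decay modeling: fitting a plain exponential y = a·exp(-k·t) to a synthetic abundance-versus-weathering-time series by log-linear least squares. The abstract's "modified" functional form is not specified, so a plain exponential stands in, and the data below are invented for the example.

```python
import math

def fit_exponential_decay(times, abundances):
    """Least-squares fit of ln(y) = ln(a) - k*t, i.e. y = a * exp(-k*t).
    A plain-exponential stand-in for a hydrocarbon weathering curve."""
    n = len(times)
    ys = [math.log(y) for y in abundances]
    tbar, ybar = sum(times) / n, sum(ys) / n
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(times, ys))
             / sum((t - tbar) ** 2 for t in times))
    return math.exp(ybar - slope * tbar), -slope  # (a, k)

# Synthetic weathering series: relative abundance halves roughly every 10 days
t = [0, 5, 10, 15, 20, 30]
y = [100.0, 70.7, 50.0, 35.4, 25.0, 12.5]
a, k = fit_exponential_decay(t, y)
```

Extrapolating the fitted curve backwards from a measured abundance is what would turn such a model into a PMImin estimate in practice.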
Pigeons' Choices between Fixed-Interval and Random-Interval Schedules: Utility of Variability?
ERIC Educational Resources Information Center
Andrzejewski, Matthew E.; Cardinal, Claudia D.; Field, Douglas P.; Flannery, Barbara A.; Johnson, Michael; Bailey, Kathleen; Hineline, Philip N.
2005-01-01
Pigeons' choosing between fixed-interval and random-interval schedules of reinforcement was investigated in three experiments using a discrete-trial procedure. In all three experiments, the random-interval schedule was generated by sampling a probability distribution at an interval (and in multiples of the interval) equal to that of the…
Ishikawa, Joji; Ishikawa, Shizukiyo; Kario, Kazuomi
2015-03-01
We attempted to evaluate whether subjects who exhibit prolonged corrected QT (QTc) interval (≥440 ms in men and ≥460 ms in women) on ECG, with and without ECG-diagnosed left ventricular hypertrophy (ECG-LVH; Cornell product, ≥244 mV×ms), are at increased risk of stroke. Among the 10 643 subjects, there were a total of 375 stroke events during the follow-up period (128.7±28.1 months; 114 142 person-years). The subjects with prolonged QTc interval (hazard ratio, 2.13; 95% confidence interval, 1.22-3.73) had an increased risk of stroke even after adjustment for ECG-LVH (hazard ratio, 1.71; 95% confidence interval, 1.22-2.40). When we stratified the subjects into those with neither a prolonged QTc interval nor ECG-LVH, those with a prolonged QTc interval but without ECG-LVH, and those with ECG-LVH, multivariate-adjusted Cox proportional hazards analysis demonstrated that the subjects with prolonged QTc intervals but not ECG-LVH (1.2% of all subjects; incidence, 10.7%; hazard ratio, 2.70, 95% confidence interval, 1.48-4.94) and those with ECG-LVH (incidence, 7.9%; hazard ratio, 1.83; 95% confidence interval, 1.31-2.57) had an increased risk of stroke events, compared with those with neither a prolonged QTc interval nor ECG-LVH. In conclusion, prolonged QTc interval was associated with stroke risk even among patients without ECG-LVH in the general population. © 2014 American Heart Association, Inc.
Prediction future asset price which is non-concordant with the historical distribution
NASA Astrophysics Data System (ADS)
Seong, Ng Yew; Hin, Pooi Ah
2015-12-01
This paper attempts to predict the major characteristics of the future asset price which is non-concordant with the distribution estimated from the price today and the prices on a large number of previous days. The three major characteristics of the i-th non-concordant asset price are the length of the interval between the occurrence time of the previous non-concordant asset price and that of the present non-concordant asset price, the indicator which denotes that the non-concordant price is extremely small or large by its values -1 and 1 respectively, and the degree of non-concordance given by the negative logarithm of the probability of the left tail or right tail of which one of the end points is given by the observed future price. The vector of three major characteristics of the next non-concordant price is modelled to be dependent on the vectors corresponding to the present and l - 1 previous non-concordant prices via a 3-dimensional conditional distribution which is derived from a 3(l + 1)-dimensional power-normal mixture distribution. The marginal distribution for each of the three major characteristics can then be derived from the conditional distribution. The mean of the j-th marginal distribution is an estimate of the value of the j-th characteristics of the next non-concordant price. Meanwhile, the 100(α/2) % and 100(1 - α/2) % points of the j-th marginal distribution can be used to form a prediction interval for the j-th characteristic of the next non-concordant price. The performance measures of the above estimates and prediction intervals indicate that the fitted conditional distribution is satisfactory. Thus the incorporation of the distribution of the characteristics of the next non-concordant price in the model for asset price has a good potential of yielding a more realistic model.
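The abstract forms a prediction interval from the 100(α/2)% and 100(1 - α/2)% points of a marginal distribution and takes the marginal mean as the point estimate. A minimal sketch of that last step, using empirical quantiles of simulated draws in place of the fitted power-normal mixture (which is beyond a short example):

```python
import random
import statistics

def quantile_prediction_interval(samples, alpha=0.05):
    """Point estimate (marginal mean) and prediction interval from the
    alpha/2 and 1 - alpha/2 points of an empirical marginal distribution."""
    xs = sorted(samples)
    n = len(xs)
    lo = xs[max(0, int(alpha / 2 * n))]
    hi = xs[min(n - 1, int((1 - alpha / 2) * n))]
    return statistics.mean(xs), (lo, hi)

# Stand-in marginal: 10,000 draws from N(10, 2) rather than a power-normal mixture
rng = random.Random(0)
draws = [rng.gauss(10, 2) for _ in range(10000)]
point, (lo, hi) = quantile_prediction_interval(draws)
```

A future observation falling outside (lo, hi) would be flagged as non-concordant with the fitted distribution, mirroring the paper's use of tail probabilities.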
Coefficient Omega Bootstrap Confidence Intervals: Nonnormal Distributions
ERIC Educational Resources Information Center
Padilla, Miguel A.; Divers, Jasmin
2013-01-01
The performance of the normal theory bootstrap (NTB), the percentile bootstrap (PB), and the bias-corrected and accelerated (BCa) bootstrap confidence intervals (CIs) for coefficient omega was assessed through a Monte Carlo simulation under conditions not previously investigated. Of particular interests were nonnormal Likert-type and binary items.…
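The percentile bootstrap (PB) interval compared above can be sketched generically: resample the data with replacement, recompute the statistic on each resample, and take empirical quantiles of the replicates. For brevity the statistic here is the sample mean; coefficient omega would plug into the same `statistic` slot, and all names are illustrative.

```python
import random
import statistics

def percentile_bootstrap_ci(data, statistic, n_boot=2000, level=0.95, seed=7):
    """Percentile bootstrap (PB) confidence interval for an arbitrary statistic."""
    rng = random.Random(seed)
    n = len(data)
    # One bootstrap replicate = the statistic on a with-replacement resample
    reps = sorted(statistic([data[rng.randrange(n)] for _ in range(n)])
                  for _ in range(n_boot))
    lo_i = int((1 - level) / 2 * n_boot)
    hi_i = int((1 + level) / 2 * n_boot) - 1
    return reps[lo_i], reps[hi_i]

scores = [float(s) for s in range(1, 21)]
lo, hi = percentile_bootstrap_ci(scores, statistics.mean)
```

The NTB and BCa variants studied in the article differ only in how the replicate distribution is converted into interval endpoints, not in the resampling loop itself.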
Prediction Models for Dynamic Demand Response
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aman, Saima; Frincu, Marc; Chelmis, Charalampos
2015-11-02
As Smart Grids move closer to dynamic curtailment programs, Demand Response (DR) events will become necessary not only on fixed time intervals and weekdays predetermined by static policies, but also during changing decision periods and weekends to react to real-time demand signals. Unique challenges arise in this context vis-a-vis demand prediction and curtailment estimation and the transformation of such tasks into an automated, efficient dynamic demand response (D2R) process. While existing work has concentrated on increasing the accuracy of prediction models for DR, there is a lack of studies for prediction models for D2R, which we address in this paper. Our first contribution is the formal definition of D2R, and the description of its challenges and requirements. Our second contribution is a feasibility analysis of very-short-term prediction of electricity consumption for D2R over a diverse, large-scale dataset that includes both small residential customers and large buildings. Our third, and major contribution is a set of insights into the predictability of electricity consumption in the context of D2R. Specifically, we focus on prediction models that can operate at a very small data granularity (here 15-min intervals), for both weekdays and weekends - all conditions that characterize scenarios for D2R. We find that short-term time series and simple averaging models used by Independent Service Operators and utilities achieve superior prediction accuracy. We also observe that workdays are more predictable than weekends and holidays. Also, smaller customers have large variation in consumption and are less predictable than larger buildings. Key implications of our findings are that better models are required for small customers and for non-workdays, both of which are critical for D2R. Also, prediction models require just a few days' worth of data, indicating that small amounts of historical training data can be used to make reliable predictions.
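The "simple averaging models" the study found competitive can be sketched as a time-of-day baseline: forecast each 15-min slot of the next day as the mean of that slot over the last few days. Function and parameter names are illustrative, not taken from the paper.

```python
def time_of_day_average(history, slots_per_day=96, n_days=3):
    """Baseline averaging predictor: the forecast for each 15-min slot is the
    mean of that slot over the most recent n_days days of history."""
    if len(history) < n_days * slots_per_day:
        raise ValueError("need at least n_days full days of history")
    recent = history[-n_days * slots_per_day:]
    return [sum(recent[d * slots_per_day + s] for d in range(n_days)) / n_days
            for s in range(slots_per_day)]

# Three identical synthetic days -> the forecast reproduces one day exactly
day = [1.0 + (s % 4) for s in range(96)]
forecast = time_of_day_average(day * 3)
```

The appeal of such baselines for D2R is exactly the paper's last point: they need only a few days of history, so they keep working when curtailment decisions shift to new periods.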
Fujikawa, Hiroshi; Kimura, Bon; Fujii, Tateo
2009-09-01
In this study, we developed a predictive program for Vibrio parahaemolyticus growth under various environmental conditions. Raw growth data was obtained with a V. parahaemolyticus O3:K6 strain cultured at a variety of broth temperatures, pH, and salt concentrations. Data were analyzed with our logistic model and the parameter values of the model were analyzed with polynomial equations. A prediction program consisting of the growth model and the polynomial equations was then developed. After the range of the growth environments was modified, the program successfully predicted the growth for all environments tested. The program could be a useful tool to ensure the bacteriological safety of seafood.
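The growth prediction above can be sketched with the standard logistic curve; in the authors' program the parameters would come from polynomial equations in temperature, pH, and salt concentration (not reproduced in the abstract), so fixed illustrative values are used here.

```python
import math

def logistic_growth(t, n0, nmax, mu):
    """Logistic population curve N(t): growth at specific rate mu from an
    initial count n0 toward a carrying capacity nmax. Parameter values are
    illustrative, not the paper's fitted constants."""
    return nmax / (1 + (nmax / n0 - 1) * math.exp(-mu * t))

# Growth from 1e3 toward 1e9 CFU/mL at mu = 1.0 per hour, evaluated at 24 h
n_24h = logistic_growth(24, 1e3, 1e9, 1.0)
```

A prediction program of the kind described would evaluate such a curve after first mapping each tested environment (temperature, pH, salt) to its own (n0, nmax, mu).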
Event- and interval-based measurement of stuttering: a review.
Valente, Ana Rita S; Jesus, Luis M T; Hall, Andreia; Leahy, Margaret
2015-01-01
Event- and interval-based measurements are two different ways of computing the frequency of stuttering. Interval-based methodology emerged as an alternative measure to overcome problems associated with reproducibility in the event-based methodology. No review has been made to study the effect of methodological factors on interval-based absolute reliability data or to compute the agreement between the two methodologies in terms of inter-judge, intra-judge and accuracy (i.e., correspondence between raters' scores and an established criterion). The aims were to provide a review related to reproducibility of event-based and time-interval measurement, to verify the effect of methodological factors (training, experience, interval duration, sample presentation order and judgment conditions) on agreement of time-interval measurement, and to determine whether it is possible to quantify the agreement between the two methodologies. The first two authors searched for articles on ERIC, MEDLINE, PubMed, B-on, CENTRAL and Dissertation Abstracts during January-February 2013 and retrieved 495 articles. Forty-eight articles were selected for review. Content tables were constructed with the main findings. Articles related to event-based measurements revealed inter- and intra-judge values greater than 0.70 and agreement percentages beyond 80%. The articles related to time-interval measures revealed that, in general, judges with more experience with stuttering presented significantly higher levels of intra- and inter-judge agreement. Inter- and intra-judge values exceeded the reference thresholds for high reproducibility for both methodologies. Accuracy (regarding the closeness of raters' judgements to an established criterion) and intra- and inter-judge agreement were higher for trained groups than for non-trained groups. Sample presentation order and audio/video conditions did not result in differences in inter- or intra-judge results. A duration of 5 s for an interval appears to be
XBeach-G: a tool for predicting gravel barrier response to extreme storm conditions
NASA Astrophysics Data System (ADS)
Masselink, Gerd; Poate, Tim; McCall, Robert; Roelvink, Dano; Russell, Paul; Davidson, Mark
2014-05-01
Gravel beaches protect low-lying back-barrier regions from flooding during storm events and their importance to society is widely acknowledged. Unfortunately, breaching and extensive storm damage has occurred at many gravel sites and this is likely to increase as a result of sea-level rise and enhanced storminess due to climate change. Limited scientific guidance is currently available to provide beach managers with operational management tools to predict the response of gravel beaches to storms. The New Understanding and Prediction of Storm Impacts on Gravel beaches (NUPSIG) project aims to improve our understanding of storm impacts on gravel coastal environments and to develop a predictive capability by modelling these impacts. The NUPSIG project uses a 5-pronged approach to address its aim: (1) analyse hydrodynamic data collected during a proto-type laboratory experiment on a gravel beach; (2) collect hydrodynamic field data on a gravel beach under a range of conditions, including storm waves with wave heights up to 3 m; (3) measure swash dynamics and beach response on 10 gravel beaches during extreme wave conditions with wave heights in excess of 3 m; (4) use the data collected under 1-3 to develop and validate a numerical model to model hydrodynamics and morphological response of gravel beaches under storm conditions; and (5) develop a tool for end-users, based on the model formulated under (4), for predicting storm response of gravel beaches and barriers. The aim of this presentation is to present the key results of the NUPSIG project and introduce the end-user tool for predicting storm response on gravel beaches. The model is based on the numerical model XBeach, and different forcing scenarios (wave and tides), barrier configurations (dimensions) and sediment characteristics are easily uploaded for model simulations using a Graphics User Interface (GUI). The model can be used to determine the vulnerability of gravel barriers to storm events, but can also be
Pereira, Gavin; Jacoby, Peter; de Klerk, Nicholas; Stanley, Fiona J
2014-01-01
Objective To re-evaluate the causal effect of interpregnancy interval on adverse birth outcomes, on the basis that previous studies relying on between mother comparisons may have inadequately adjusted for confounding by maternal risk factors. Design Retrospective cohort study using conditional logistic regression (matching two intervals per mother so each mother acts as her own control) to model the incidence of adverse birth outcomes as a function of interpregnancy interval; additional unconditional logistic regression with adjustment for confounders enabled comparison with the unmatched design of previous studies. Setting Perth, Western Australia, 1980-2010. Participants 40 441 mothers who each delivered three liveborn singleton neonates. Main outcome measures Preterm birth (<37 weeks), small for gestational age birth (<10th centile of birth weight by sex and gestational age), and low birth weight (<2500 g). Results Within mother analysis of interpregnancy intervals indicated a much weaker effect of short intervals on the odds of preterm birth and low birth weight compared with estimates generated using a traditional between mother analysis. The traditional unmatched design estimated an adjusted odds ratio for an interpregnancy interval of 0-5 months (relative to the reference category of 18-23 months) of 1.41 (95% confidence interval 1.31 to 1.51) for preterm birth, 1.26 (1.15 to 1.37) for low birth weight, and 0.98 (0.92 to 1.06) for small for gestational age birth. In comparison, the matched design showed a much weaker effect of short interpregnancy interval on preterm birth (odds ratio 1.07, 0.86 to 1.34) and low birth weight (1.03, 0.79 to 1.34), and the effect for small for gestational age birth remained small (1.08, 0.87 to 1.34). Both the unmatched and matched models estimated a high odds of small for gestational age birth and low birth weight for long interpregnancy intervals (longer than 59 months), but the estimated effect of long interpregnancy
Method and apparatus to predict the remaining service life of an operating system
Greitzer, Frank L.; Kangas, Lars J.; Terrones, Kristine M.; Maynard, Melody A.; Pawlowski, Ronald A.; Ferryman, Thomas A.; Skorpik, James R.; Wilson, Bary W.
2008-11-25
A method and computer-based apparatus for monitoring the degradation of, predicting the remaining service life of, and/or planning maintenance for, an operating system are disclosed. Diagnostic information on degradation of the operating system is obtained through measurement of one or more performance characteristics by one or more sensors onboard and/or proximate the operating system. Though not required, it is preferred that the sensor data are validated to improve the accuracy and reliability of the service life predictions. The condition or degree of degradation of the operating system is presented to a user by way of one or more calculated, numeric degradation figures of merit that are trended against one or more independent variables using one or more mathematical techniques. Furthermore, more than one trendline and uncertainty interval may be generated for a given degradation figure of merit/independent variable data set. The trendline(s) and uncertainty interval(s) are subsequently compared to one or more degradation figure of merit thresholds to predict the remaining service life of the operating system. The present invention enables multiple mathematical approaches in determining which trendline(s) to use to provide the best estimate of the remaining service life.
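The trendline-plus-threshold idea can be sketched with the simplest trending choice: a straight-line fit to a degradation figure of merit, extrapolated to a failure threshold. The patent covers multiple mathematical techniques and uncertainty intervals around the trendline; this illustrative sketch shows only the linear point estimate, with made-up data.

```python
def remaining_life(times, fom, threshold):
    """Fit a straight trendline to a degradation figure of merit (fom) and
    extrapolate to the failure threshold; remaining service life is the
    crossing time minus the time of the last observation."""
    n = len(times)
    tbar = sum(times) / n
    fbar = sum(fom) / n
    slope = (sum((t - tbar) * (f - fbar) for t, f in zip(times, fom))
             / sum((t - tbar) ** 2 for t in times))
    intercept = fbar - slope * tbar
    t_cross = (threshold - intercept) / slope  # when the trendline hits the threshold
    return t_cross - times[-1]

# Figure of merit degrading 0.5 units per 100 operating hours, threshold at 10
hours = [0, 100, 200, 300]
fom = [2.0, 2.5, 3.0, 3.5]
rul = remaining_life(hours, fom, threshold=10.0)
```

Comparing the remaining-life estimates from several candidate trendlines, as the patent describes, is what guards against any single extrapolation being misleading.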
Operational seasonal and interannual predictions of ocean conditions
NASA Technical Reports Server (NTRS)
Leetmaa, Ants
1992-01-01
Dr. Leetmaa described current work at the U.S. National Meteorological Center (NMC) on coupled systems leading to a seasonal prediction system. He described the way in which ocean thermal data are quality controlled and used in a four-dimensional data assimilation system. This consists of a statistical interpolation scheme, a primitive equation ocean general circulation model, and the atmospheric fluxes that are required to force this. This whole process generated dynamically consistent thermohaline and velocity fields for the ocean. Currently, routine weekly analyses are performed for the Atlantic and Pacific oceans. These analyses are used for ocean climate diagnostics and as initial conditions for coupled forecast models. Specific examples of output products were shown for both the Pacific and the Atlantic Ocean.
Warburton, Elizabeth M; Pearl, Christopher A; Vonhof, Maarten J
2016-06-01
Sex-biased parasitism highlights potentially divergent approaches to parasite resistance resulting in differing energetic trade-offs for males and females; however, trade-offs between immunity and self-maintenance could also depend on host body condition. We investigated these relationships in the big brown bat, Eptesicus fuscus, to determine if host sex or body condition better predicted parasite resistance, if testosterone levels predicted male parasite burdens, and if immune parameters could predict male testosterone levels. We found that male and female hosts had similar parasite burdens and female bats scored higher than males in only one immunological measure. Top models of helminth burden revealed interactions between body condition index and agglutination score as well as between agglutination score and host sex. Additionally, the strength of the relationships between sex, agglutination, and helminth burden is affected by body condition. Models of male parasite burden provided no support for testosterone predicting helminthiasis. Models that best predicted testosterone levels did not include parasite burden but instead consistently included month of capture and agglutination score. Thus, in our system, body condition was a more important predictor of immunity and worm burden than host sex.
Classification of Initial conditions required for Substorm prediction.
NASA Astrophysics Data System (ADS)
Patra, S.; Spencer, E. A.
2014-12-01
We investigate different classes of substorms that occur as a result of various drivers such as the conditions in the solar wind and the internal state of the magnetosphere ionosphere system during the geomagnetic activity. In performing our study, we develop and use our low order physics based nonlinear model of the magnetosphere called WINDMI to establish the global energy exchange between the solar wind, magnetosphere and ionosphere by constraining the model results to satellite and ground measurements. On the other hand, we make quantitative and qualitative comparisons between our low order model with available MHD, multi-fluid and ring current simulations in terms of the energy transfer between the geomagnetic tail, plasma sheet, field aligned currents, ionospheric currents and ring current, during isolated substorms, storm time substorms, and sawtooth events. We use high resolution solar wind data from the ACE satellite, measurements from the CLUSTER and THEMIS missions satellites, and ground based magnetometer measurements from SUPERMAG and WDC Kyoto, to further develop our low order physics based model. Finally, we attempt to answer the following questions: 1) What conditions in the solar wind influence the type of substorm event. This includes the IMF strength and orientation, the particle densities, velocities and temperatures, and the timing of changes such as shocks, southward turnings or northward turnings of the IMF. 2) What is the state of the magnetosphere ionosphere system before an event begins. These are the steady state conditions prior to an event, if they exist, which produce the satellite and ground based measurements matched to the WINDMI model. 3) How does the prior state of the magnetosphere influence the transition into a particular mode of behavior under solar wind forcing. 4) Is it possible to classify the states of the magnetosphere into distinct categories depending on pre-conditioning, and solar wind forcing conditions? 5) Can we
NASA Astrophysics Data System (ADS)
Du, Qiang; Li, Yanjun
2015-06-01
In this paper, a multi-scale as-cast grain size prediction model is proposed to predict the as-cast grain size of inoculated aluminum alloy melts solidified under non-isothermal conditions, i.e., in the presence of a temperature gradient. Given melt composition, inoculation and heat extraction boundary conditions, the model is able to predict the maximum nucleation undercooling, cooling curve, primary phase solidification path and final as-cast grain size of binary alloys. The proposed model has been applied to two Al-Mg alloys, and comparisons with laboratory and industrial solidification experiments have been carried out. The preliminary conclusion is that the proposed model is a promising microscopic model for use within the multi-scale casting simulation modelling framework.
Can phenological models predict tree phenology accurately under climate change conditions?
NASA Astrophysics Data System (ADS)
Chuine, Isabelle; Bonhomme, Marc; Legave, Jean Michel; García de Cortázar-Atauri, Inaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry
2014-05-01
The onset of the growing season of trees has advanced globally by 2.3 days/decade during the last 50 years because of global warming, and this trend is predicted to continue according to climate forecasts. The effect of temperature on plant phenology is, however, not linear, because temperature has a dual effect on bud development. On one hand, low temperatures are necessary to break bud dormancy, and on the other hand, higher temperatures are necessary to promote bud cell growth afterwards. Increasing phenological changes in temperate woody species have strong impacts on forest tree distribution and productivity, as well as crop cultivation areas. Accurate predictions of tree phenology are therefore a prerequisite to understand and foresee the impacts of climate change on forests and agrosystems. Different process-based models have been developed in the last two decades to predict the date of budburst or flowering of woody species. There are two main families: (1) one-phase models, which consider only the ecodormancy phase and make the assumption that endodormancy is always broken before adequate climatic conditions for cell growth occur; and (2) two-phase models, which consider both the endodormancy and ecodormancy phases and predict a date of dormancy break which varies from year to year. So far, one-phase models have been able to predict tree bud break and flowering accurately under historical climate. However, because they do not consider what happens prior to ecodormancy, and especially the possible negative effect of winter temperature warming on dormancy break, it seems unlikely that they can provide accurate predictions under future climate conditions. It is indeed well known that a lack of low temperature results in abnormal patterns of bud break and development in temperate fruit trees. Accurate modelling of the dormancy break date has thus become a major issue in phenology modelling. Two-phase phenological models predict that global warming should delay
Kuehnbaum, Naomi L; Gillen, Jenna B; Gibala, Martin J; Britz-McKibbin, Philip
2014-08-28
High-intensity interval training (HIIT) offers a practical approach for enhancing cardiorespiratory fitness; however, its role in improving glucose regulation among sedentary yet normoglycemic women remains unclear. Herein, multi-segment injection capillary electrophoresis-mass spectrometry is used as a high-throughput platform in metabolomics to assess dynamic responses of overweight/obese women (BMI > 25, n = 11) to standardized oral glucose tolerance tests (OGTTs) performed before and after a 6-week HIIT intervention. Various statistical methods were used to classify plasma metabolic signatures associated with post-prandial glucose and/or training status when using a repeated measures/cross-over study design. Branched-chain/aromatic amino acids and other intermediates of urea cycle and carnitine metabolism decreased over time in plasma after oral glucose loading. Adaptive exercise-induced changes to plasma thiol redox and ornithine status were measured for trained subjects while at rest in a fasting state. A multi-linear regression model was developed to predict changes in glucose tolerance based on a panel of plasma metabolites measured for naïve subjects in their untrained state. Since treatment outcomes to physical activity are variable between subjects, prognostic markers offer a novel approach to screen for potential negative responders while designing lifestyle modifications that maximize the salutary benefits of exercise for diabetes prevention on an individual level.
Purde, Mette-Triin; Nock, Stefan; Risch, Lorenz; Medina Escobar, Pedro; Grebhardt, Chris; Nydegger, Urs E; Stanga, Zeno; Risch, Martin
2016-03-01
The ratio of cystatin C (cysC) to creatinine (crea) is regarded as a marker of glomerular filtration quality associated with cardiovascular morbidities. We sought to determine reference intervals for serum cysC-crea ratio in seniors. Furthermore, we sought to determine whether other low-molecular weight molecules exhibit a similar behavior in individuals with altered glomerular filtration quality. Finally, we investigated associations with adverse outcomes. A total of 1382 subjectively healthy Swiss volunteers aged 60 years or older were enrolled in the study. Reference intervals were calculated according to Clinical & Laboratory Standards Institute (CLSI) guideline EP28-A3c. After a baseline exam, a 4-year follow-up survey recorded information about overall morbidity and mortality. The cysC-crea ratio (mean 0.0124 ± 0.0026 mg/μmol) was significantly higher in women and increased progressively with age. Other associated factors were hemoglobin A1c, mean arterial pressure, and C-reactive protein (P < 0.05 for all). Participants exhibiting shrunken pore syndrome had significantly higher ratios of 3.5-66.5 kDa molecules (brain natriuretic peptide, parathyroid hormone, β2-microglobulin, cystatin C, retinol-binding protein, thyroid-stimulating hormone, α1-acid glycoprotein, lipase, amylase, prealbumin, and albumin) and creatinine. There was no such difference in the ratios of very low-molecular weight molecules (urea, uric acid) to creatinine or in the ratios of molecules larger than 66.5 kDa (transferrin, haptoglobin) to creatinine. The cysC-crea ratio was significantly predictive of mortality and subjective overall morbidity at follow-up in logistic regression models adjusting for several factors. The cysC-crea ratio exhibits age- and sex-specific reference intervals in seniors. In conclusion, the cysC-crea ratio may indicate the relative retention of biologically active low-molecular weight compounds and can independently predict the risk for overall mortality
NASA Astrophysics Data System (ADS)
Tao, Ling-Jiang; Gao, Chuan; Zhang, Rong-Hua
2018-07-01
Previous studies indicate that ENSO predictions are particularly sensitive to the initial conditions in some key areas (so-called "sensitive areas"). And yet, few studies have quantified improvements in prediction skill in the context of an optimal observing system. In this study, the impact on prediction skill is explored using an intermediate coupled model in which errors in initial conditions formed to make ENSO predictions are removed in certain areas. Based on ideal observing system simulation experiments, the importance of various observational networks for improving El Niño prediction skill is examined. The results indicate that the initial states in the central and eastern equatorial Pacific are important for improving El Niño prediction skill effectively. When the initial condition errors in the central equatorial Pacific are removed, ENSO prediction errors can be reduced by 25%. Furthermore, combinations of various subregions are considered to demonstrate their efficiency for ENSO prediction skill. In particular, seasonally varying observational networks are suggested to improve the prediction skill more effectively. For example, in addition to observing in the central equatorial Pacific and the region to its north throughout the year, increasing observations in the eastern equatorial Pacific from April to October is crucially important, which can improve the prediction accuracy by 62%. These results also demonstrate the effectiveness of the conditional nonlinear optimal perturbation approach for detecting sensitive areas for targeted observations.
Abstract: Inference and Interval Estimation for Indirect Effects With Latent Variable Models.
Falk, Carl F; Biesanz, Jeremy C
2011-11-30
Models specifying indirect effects (or mediation) and structural equation modeling are both popular in the social sciences. Yet relatively little research has compared methods that test for indirect effects among latent variables and provided precise estimates of the effectiveness of different methods. This simulation study provides an extensive comparison of methods for constructing confidence intervals and for making inferences about indirect effects with latent variables. We compared the percentile (PC) bootstrap, bias-corrected (BC) bootstrap, bias-corrected accelerated (BCa) bootstrap, likelihood-based confidence intervals (Neale & Miller, 1997), the partial posterior predictive method (Biesanz, Falk, & Savalei, 2010), and joint significance tests based on Wald tests or likelihood ratio tests. All models included three reflective latent variables representing the independent, dependent, and mediating variables. The design included the following fully crossed conditions: (a) sample size: 100, 200, and 500; (b) number of indicators per latent variable: 3 versus 5; (c) reliability per set of indicators: .7 versus .9; and (d) 16 different path combinations for the indirect effect (α = 0, .14, .39, or .59; and β = 0, .14, .39, or .59). Simulations were performed using a WestGrid cluster of 1680 3.06GHz Intel Xeon processors running R and OpenMx. Results based on 1,000 replications per cell and 2,000 resamples per bootstrap method indicated that the BC and BCa bootstrap methods have inflated Type I error rates. Likelihood-based confidence intervals and the PC bootstrap emerged as methods that adequately control Type I error and have good coverage rates.
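As a concrete illustration of the simplest method in this comparison, the percentile (PC) bootstrap for an indirect effect can be sketched in a few lines. This uses observed variables, simple regression slopes, and made-up data with path coefficients of 0.39 (matching one simulated condition); the study's latent-variable models are far richer.

```python
import random

random.seed(1)

# Toy data for a mediation chain X -> M -> Y (our own illustration,
# not the study's latent-variable setup).
n = 200
X = [random.gauss(0, 1) for _ in range(n)]
M = [0.39 * x + random.gauss(0, 1) for x in X]
Y = [0.39 * m + random.gauss(0, 1) for m in M]

def slope(u, v):
    """OLS slope of v regressed on u."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return (sum((a - mu) * (b - mv) for a, b in zip(u, v))
            / sum((a - mu) ** 2 for a in u))

def indirect(xs, ms, ys):
    return slope(xs, ms) * slope(ms, ys)   # a * b

# Percentile bootstrap: resample cases with replacement; the 2.5th and
# 97.5th percentiles of the bootstrap estimates give the 95% interval.
boot = []
for _ in range(2000):
    idx = [random.randrange(n) for _ in range(n)]
    boot.append(indirect([X[i] for i in idx],
                         [M[i] for i in idx],
                         [Y[i] for i in idx]))
boot.sort()
lo, hi = boot[49], boot[1949]   # ~2.5% and ~97.5% order statistics
print(f"95% PC bootstrap CI for the indirect effect: [{lo:.3f}, {hi:.3f}]")
```

The BC and BCa variants studied in the paper adjust these percentile cut points for median bias and skewness rather than using them directly.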
Hypothesis testing and earthquake prediction.
Jackson, D D
1996-04-30
Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions.
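The likelihood comparisons in (ii) and (iii) can be sketched for a single space-time-magnitude bin with Poisson-distributed counts; the rates and the observed count below are invented for illustration and are not from the paper.

```python
import math

def poisson_logpmf(n, rate):
    """Log-probability of observing n events under a Poisson rate."""
    return n * math.log(rate) - rate - math.lgamma(n + 1)

observed = 7          # earthquakes actually observed in the bin (toy value)
predicted = 4.0       # expected count under the prediction hypothesis
null_rate = 6.5       # expected count under the null ("normal behavior")

# (ii) likelihood scores of the observed outcome under each hypothesis.
score_pred = poisson_logpmf(observed, predicted)
score_null = poisson_logpmf(observed, null_rate)

# (iii) log likelihood ratio: positive values favor the prediction
# hypothesis over the null; negative values favor the null.
llr = score_pred - score_null
print(f"log-likelihood ratio = {llr:.3f}")
```

In a real test these scores would be summed over all bins of the forecast and compared against the distribution of scores expected under each hypothesis.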
While large-scale, randomized surveys estimate the percentage of a region’s streams in poor ecological condition, identifying particular stream reaches or watersheds in poor condition is an equally important goal for monitoring and management. We built predictive models of strea...
NASA Astrophysics Data System (ADS)
Matsakis, Nicholas D.; Gross, Thomas R.
Intervals are a new, higher-level primitive for parallel programming with which programmers directly construct the program schedule. Programs using intervals can be statically analyzed to ensure that they do not deadlock or contain data races. In this paper, we demonstrate the flexibility of intervals by showing how to use them to emulate common parallel control-flow constructs like barriers and signals, as well as higher-level patterns such as bounded-buffer producer-consumer. We have implemented intervals as a publicly available library for Java and Scala.
The salt marsh vegetation spread dynamics simulation and prediction based on conditions optimized CA
NASA Astrophysics Data System (ADS)
Guan, Yujuan; Zhang, Liquan
2006-10-01
The biodiversity conservation and management of salt marsh vegetation rely on processing its spatial information. Nowadays, more attention is focused on classification surveys and qualitative descriptions of dynamics based on interpreted RS images, rather than on quantitatively simulating and predicting those dynamics, which is of greater importance for managing and planning salt marsh vegetation. In this paper, our aim is to build a large-scale dynamic model and to provide a virtual laboratory in which researchers can run it according to their requirements. First, the characteristics of cellular automata were analyzed, leading to the conclusion that a CA model must be extended geographically under varying space-time conditions in order to make its results match the facts accurately. Based on the conventional cellular automata model, we introduced several new conditions to optimize it for simulating the vegetation objectively, such as elevation, growth speed, invading ability, variation, and inheritance. Hence the CA cells and remote sensing image pixels, cell neighbors and pixel neighbors, and cell rules and plant characteristics were unified, respectively. JiuDuanSha, which mainly holds Phragmites australis (P. australis), Scirpus mariqueter (S. mariqueter), and Spartina alterniflora (S. alterniflora) communities, was taken as the test site. The paper explored the process of simulating and predicting these salt marsh vegetation changes with the conditions-optimized CA (COCA) model, and examined the links among data, statistical models, and ecological predictions. This study exploited the potential of applying the conditions-optimized CA model technique to solve this problem.
NASA Astrophysics Data System (ADS)
Hughes, J. D.; White, J.; Doherty, J.
2011-12-01
Linear prediction uncertainty analysis in a Bayesian framework was applied to guide the conditioning of an integrated surface water/groundwater model that will be used to predict the effects of groundwater withdrawals on surface-water and groundwater flows. Linear prediction uncertainty analysis is an effective approach for identifying (1) raw and processed data most effective for model conditioning prior to inversion, (2) specific observations and periods of time critically sensitive to specific predictions, and (3) additional observation data that would reduce model uncertainty relative to specific predictions. We present results for a two-dimensional groundwater model of a 2,186 km2 area of the Biscayne aquifer in south Florida implicitly coupled to a surface-water routing model of the actively managed canal system. The model domain includes 5 municipal well fields withdrawing more than 1 Mm3/day and 17 operable surface-water control structures that control freshwater releases from the Everglades and freshwater discharges to Biscayne Bay. More than 10 years of daily observation data from 35 groundwater wells and 24 surface water gages are available to condition model parameters. A dense parameterization was used to fully characterize the contribution of the inversion null space to predictive uncertainty and included bias-correction parameters. This approach allows better resolution of the boundary between the inversion null space and solution space. Bias-correction parameters (e.g., rainfall, potential evapotranspiration, and structure flow multipliers) absorb information that is present in structural noise that may otherwise contaminate the estimation of more physically-based model parameters. This allows greater precision in predictions that are entirely solution-space dependent, and reduces the propensity for bias in predictions that are not. Results show that application of this analysis is an effective means of identifying those surface-water and
Pollard, C E; Valentin, J-P; Hammond, T G
2008-08-01
Drug-induced prolongation of the QT interval is having a significant impact on the ability of the pharmaceutical industry to develop new drugs. The development implications for a compound causing a significant effect in the 'Thorough QT/QTc Study' -- as defined in the clinical regulatory guidance (ICH E14) -- are substantial. In view of this, and the fact that QT interval prolongation is linked to direct inhibition of the hERG channel, in the early stages of drug discovery the focus is on testing for and screening out hERG activity. This has led to understanding of how to produce low potency hERG blockers whilst retaining desirable properties. Despite this, a number of factors mean that when an integrated risk assessment is generated towards the end of the discovery phase (by conducting at least an in vivo QT assessment) a QT interval prolongation risk is still often apparent; inhibition of hERG channel trafficking and partitioning into cardiac tissue are just two confounding factors. However, emerging information suggests that hERG safety margins have high predictive value and that when hERG and in vivo non-clinical data are combined, their predictive value to man, whilst not perfect, is >80%. Although understanding the anomalies is important and is being addressed, of greater importance is developing a better understanding of TdP, with the aim of being able to predict TdP rather than using an imperfect surrogate marker (QT interval prolongation). Without an understanding of how to predict TdP risk, high-benefit drugs for serious indications may never be marketed.
A method to predict equilibrium conditions of gas hydrate formation in porous media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clarke, M.A.; Pooladi-Darvish, M.; Bishnoi, P.R.
1999-06-01
In the petroleum industry, it is desirable to avoid the formation of gas hydrates. When gas hydrates form, they tend to agglomerate and block pipelines and process equipment. However, naturally occurring gas hydrates that form in the permafrost region or in deep oceans represent a vast untouched natural gas reserve. Although the exact amount of gas in the hydrate form is not known, it is believed to be comparable to the known amount of gas in the free state. Numerous methods for the recovery of natural gas from hydrate fields have been proposed. These techniques include thermal decomposition, depressurization, and chemical injection. To fully exploit hydrate reserves, it will be necessary to know the decomposition/formation conditions of the gas hydrate in porous media. A predictive model has been developed to determine the incipient hydrate formation conditions in porous media. The only additional information that is needed to determine the incipient hydrate formation conditions is the pore radius, surface energy per unit area, and wetting angle. It was found that the model performed well in predicting the experimental data of Handa and Stupin.
NASA Astrophysics Data System (ADS)
Kim, S.; Seo, D. J.
2017-12-01
When water temperature (TW) increases due to changes in hydrometeorological conditions, the overall ecological conditions change in the aquatic system. The changes can be harmful to human health and potentially fatal to fish habitat. Therefore, it is important to assess the impacts of thermal disturbances on in-stream processes of water quality variables and to be able to predict the effectiveness of possible actions that may be taken for water quality protection. For skillful prediction of in-stream water quality processes, it is necessary for the watershed water quality models to be able to reflect such changes. Most of the currently available models, however, assume static parameters for the biophysiochemical processes and hence are not able to capture nonstationarities seen in water quality observations. In this work, we assess the performance of the Hydrological Simulation Program-Fortran (HSPF) in predicting algal dynamics following TW increase. The study area is located in the Republic of Korea where waterway change due to weir construction and drought concurrently occurred around 2012. In this work we use data assimilation (DA) techniques to update model parameters as well as the initial condition of selected state variables for in-stream processes relevant to algal growth. For assessment of model performance and characterization of temporal variability, various goodness-of-fit measures and wavelet analysis are used.
Confidence Interval Coverage for Cohen's Effect Size Statistic
ERIC Educational Resources Information Center
Algina, James; Keselman, H. J.; Penfield, Randall D.
2006-01-01
Kelley compared three methods for setting a confidence interval (CI) around Cohen's standardized mean difference statistic: the noncentral-t-based, percentile (PERC) bootstrap, and bias-corrected and accelerated (BCA) bootstrap methods under three conditions of nonnormality, eight cases of sample size, and six cases of population…
Ramachandran, Suchitra; Meyer, Travis; Olson, Carl R
2016-01-01
When monkeys view two images in fixed sequence repeatedly over days and weeks, neurons in area TE of the inferotemporal cortex come to exhibit prediction suppression. The trailing image elicits only a weak response when presented following the leading image that preceded it during training. Induction of prediction suppression might depend either on the contiguity of the images, as determined by their co-occurrence and captured in the measure of joint probability P(A,B), or on their contingency, as determined by their correlation and as captured in the measures of conditional probability P(A|B) and P(B|A). To distinguish between these possibilities, we measured prediction suppression after imposing training regimens that held P(A,B) constant but varied P(A|B) and P(B|A). We found that reducing either P(A|B) or P(B|A) during training attenuated prediction suppression as measured during subsequent testing. We conclude that prediction suppression depends on contingency, as embodied in the predictive relations between the images, and not just on contiguity, as embodied in their co-occurrence. Copyright © 2016 the American Physiological Society.
The Processing of Attended and Predicted Sounds in Time.
Paris, Tim; Kim, Jeesun; Davis, Chris
2016-01-01
Neural responses to an attended event are typically enhanced relative to those from an unattended one (attention enhancement). Conversely, neural responses to a predicted event are typically reduced relative to those from an unpredicted one (prediction suppression). What remains to be established is what happens with attended and predicted events. To examine the interaction between attention and prediction, we combined two robust paradigms developed for studying attention and prediction effects on ERPs into an orthogonal design. Participants were presented with sounds in attended or unattended intervals with onsets that were either predicted by a moving visual cue or unpredicted (no cue was provided). We demonstrated an N1 enhancement effect for attended sounds and an N1 suppression effect for predicted sounds; furthermore, an interaction between these effects was found that emerged early in the N1 (50-95 msec), indicating that attention enhancement only occurred when the sound was unpredicted. This pattern of results can be explained by the precision of the predictive cue that reduces the need for attention selection in the attended and predicted condition.
Tulloch, Ayesha I T; Pichancourt, Jean-Baptiste; Gosper, Carl R; Sanders, Angela; Chadès, Iadine
2016-10-01
Changed fire regimes have led to declines of fire-regime-adapted species and loss of biodiversity globally. Fire affects population processes of growth, reproduction, and dispersal in different ways, but there is little guidance about the best fire regime(s) to maintain species population processes in fire-prone ecosystems. We use a process-based approach to determine the best range of fire intervals for keystone plant species in a highly modified Mediterranean ecosystem in southwestern Australia where current fire regimes vary. In highly fragmented areas, fires are few due to limited ignitions and active suppression of wildfire on private land, while in highly connected protected areas fires are frequent and extensive. Using matrix population models, we predict population growth of seven Banksia species under different environmental conditions and patch connectivity, and evaluate the sensitivity of species survival to different fire management strategies and burning intervals. We discover that contrasting, complementary patterns of species life-histories with time since fire result in no single best fire regime. All strategies result in the local patch extinction of at least one species. A small number of burning strategies secure complementary species sets depending on connectivity and post-fire growing conditions. A strategy of no fire always leads to fewer species persisting than prescribed fire or random wildfire, while too-frequent or too-rare burning regimes lead to the possible local extinction of all species. In low landscape connectivity, we find a smaller range of suitable fire intervals, and strategies of prescribed or random burning result in a lower number of species with positive growth rates after 100 years on average compared with burning high connectivity patches. Prescribed fire may reduce or increase extinction risk when applied in combination with wildfire depending on patch connectivity. Poor growing conditions result in a significantly
Campbell, J P; Gratton, M C; Salomone, J A; Lindholm, D J; Watson, W A
1994-01-01
In some emergency medical services (EMS) system designs, response time intervals are mandated with monetary penalties for noncompliance. These times are set with the goal of providing rapid, definitive patient care. The time interval of vehicle at scene-to-patient access (VSPA) has been measured, but its effect on response time interval compliance has not been determined. To determine the effect of the VSPA interval on the mandated code 1 (< 9 min) and code 2 (< 13 min) response time interval compliance in an urban, public-utility model system. A prospective, observational study used independent third-party riders to collect the VSPA interval for emergency life-threatening (code 1) and emergency nonlife-threatening (code 2) calls. The VSPA interval was added to the 9-1-1 call-to-dispatch and vehicle dispatch-to-scene intervals to determine the total time interval from call received until paramedic access to the patient (9-1-1 call-to-patient access). Compliance with the mandated response time intervals was determined using the traditional time intervals (9-1-1 call-to-scene) plus the VSPA time intervals (9-1-1 call-to-patient access). Chi-square was used to determine statistical significance. Of the 216 observed calls, 198 were matched to the traditional time intervals. Sixty-three were code 1, and 135 were code 2. Of the code 1 calls, 90.5% were compliant using 9-1-1 call-to-scene intervals, dropping to 63.5% using 9-1-1 call-to-patient access intervals (p < 0.0005). Of the code 2 calls, 94.1% were compliant using 9-1-1 call-to-scene intervals. Compliance decreased to 83.7% using 9-1-1 call-to-patient access intervals (p = 0.012). The addition of the VSPA interval to the traditional time intervals impacts system response time compliance. Using 9-1-1 call-to-scene compliance as a basis for measuring system performance underestimates the time for the delivery of definitive care. This must be considered when response time interval compliances are defined.
Bone turnover marker reference intervals in young females.
Callegari, Emma T; Gorelik, Alexandra; Garland, Suzanne M; Chiang, Cherie Y; Wark, John D
2017-07-01
Background: The use of bone turnover markers in clinical practice and research in younger people is limited by the lack of normative data and understanding of common causes of variation in bone turnover marker values in this demographic. To appropriately interpret bone turnover markers, robust reference intervals specific to age, development and sex are necessary. This study aimed to determine reference intervals of bone turnover markers in females aged 16-25 years participating in the Safe-D study. Methods: Participants were recruited through social networking site Facebook and were asked to complete an extensive, online questionnaire and attend a site visit. Participants were tested for serum carboxy-terminal cross-linking telopeptide of type 1 collagen and total procollagen type 1 N-propeptide using the Roche Elecsys automated analyser. Reference intervals were determined using the 2.5th to 97.5th percentiles of normalized bone turnover marker values. Results: Of 406 participants, 149 were excluded due to medical conditions or medication use (except hormonal contraception) which may affect bone metabolism. In the remaining 257 participants, the reference interval was 230-1000 ng/L for serum carboxy-terminal cross-linking telopeptide of type 1 collagen and 27-131 µg/L for procollagen type 1 N-propeptide. Both marker concentrations were inversely correlated with age and oral contraceptive pill use. Therefore, intervals specific to these variables were calculated. Conclusions: We defined robust reference intervals for cross-linking telopeptide of type 1 collagen and procollagen type 1 N-propeptide in young females grouped by age and contraceptive pill use. We examined bone turnover markers' relationship with several lifestyle, clinical and demographic factors. Our normative intervals should aid interpretation of bone turnover markers in young females particularly in those aged 16 to 19 years where reference intervals are currently provisional.
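A nonparametric reference interval of the kind described (the central 95% of values in a healthy reference sample, 2.5th to 97.5th percentiles) can be sketched as follows; the simulated values are ours and only loosely mimic the reported serum telopeptide range, not the study's data.

```python
import random
import statistics as st

random.seed(42)

# Simulated marker values for 257 healthy reference subjects
# (illustrative distribution, not the study's measurements).
values = sorted(random.gauss(600, 190) for _ in range(257))

def reference_interval(xs, lower=2.5, upper=97.5):
    """Nonparametric reference limits from sample percentiles."""
    qs = st.quantiles(xs, n=1000, method='inclusive')  # 999 cut points
    return qs[int(lower * 10) - 1], qs[int(upper * 10) - 1]

lo, hi = reference_interval(values)
print(f"reference interval: {lo:.0f}-{hi:.0f} ng/L")
```

In practice, guideline-based methods also require screening out subjects with conditions affecting the marker (as done here) and partitioning by covariates such as age and contraceptive use when the distributions differ.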
Radwan, Asma; Wagner, Manfred; Amidon, Gordon L; Langguth, Peter
2014-06-16
Food intake may delay tablet disintegration. Current in vitro methods have little predictive potential to account for such effects. The effect of a variety of factors on the disintegration of immediate release tablets in the gastrointestinal tract has been identified. They include viscosity of the media, precipitation of food constituents on the surface of the tablet and reduction of water diffusivity in the media as well as changes in the hydrodynamics in the surrounding media of the solid dosage form. In order to improve the predictability of food affecting the disintegration of a dosage form, tablet disintegration in various types of a liquefied meal has been studied under static vs. dynamic (agitative) conditions. Viscosity, water diffusivity, osmolality and Reynolds numbers for the different media were characterized. A quantitative model is introduced which predicts the influence of the Reynolds number in the tablet disintegration apparatus on the disintegration time. Viscosity, water diffusivity and media flow velocity are shown to be important factors affecting dosage form disintegration. The results suggest the necessity of considering these parameters when designing a predictive model for simulating the in vivo conditions. Based on these experiments and knowledge on in vivo hydrodynamics in the GI tract, it is concluded that the disintegration tester under current pharmacopoeial conditions is operated in an unphysiological mode and no bioprediction may be derived. Recommendations regarding alternative mode of operation are made. Copyright © 2013 Elsevier B.V. All rights reserved.
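The Reynolds number used to characterize the hydrodynamics is the standard Re = ρvL/μ; a quick sketch with illustrative values (ours, not the paper's measured media properties):

```python
# Reynolds number for flow around a tablet in a disintegration vessel.
# All values below are illustrative assumptions, not measured data.
rho = 1000.0   # fluid density, kg/m^3 (water-like medium)
v = 0.05       # characteristic flow velocity, m/s
L = 0.01       # characteristic length (tablet diameter), m
mu = 0.001     # dynamic viscosity, Pa*s

Re = rho * v * L / mu
print(f"Re = {Re:.0f}")   # prints "Re = 500"
```

A viscous, food-thickened medium raises μ and so lowers Re, shifting the dosage form toward a slower, diffusion-limited disintegration regime.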
Chen, Tina H; Wu, Steve W; Welge, Jeffrey A; Dixon, Stephan G; Shahana, Nasrin; Huddleston, David A; Sarvis, Adam R; Sallee, Floyd R; Gilbert, Donald L
2014-12-01
Clinical trials in children with attention-deficit hyperactivity disorder (ADHD) show variability in behavioral responses to the selective norepinephrine reuptake inhibitor atomoxetine. The objective of this study was to determine whether transcranial magnetic stimulation-evoked short interval cortical inhibition might be a biomarker predicting, or correlating with, clinical atomoxetine response. At baseline and after 4 weeks of atomoxetine treatment in 7- to 12-year-old children with ADHD, transcranial magnetic stimulation short interval cortical inhibition was measured, blinded to clinical improvement. Primary analysis was by multivariate analysis of covariance. Baseline short interval cortical inhibition did not predict clinical responses. However, paradoxically, after 4 weeks of atomoxetine, mean short interval cortical inhibition was reduced 31.9% in responders and increased 6.1% in nonresponders (analysis of covariance, t(41) = 2.88; P = .0063). Percentage reductions in short interval cortical inhibition correlated with reductions in the ADHD Rating Scale (r = 0.50; P = .0005). In children ages 7 to 12 years with ADHD treated with atomoxetine, improvements in clinical symptoms are correlated with reductions in motor cortex short interval cortical inhibition. © The Author(s) 2014.
ERIC Educational Resources Information Center
President's Council on Physical Fitness and Sports, Washington, DC.
Regardless of the type of physical activity used, interval training is simply repeated periods of physical stress interspersed with recovery periods during which activity of a reduced intensity is performed. During the recovery periods, the individual usually keeps moving and does not completely recover before the next exercise interval (e.g.,…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zapata, Francisco; Kreinovich, Vladik; Joslyn, Cliff A.
2013-08-01
To make a decision, we need to compare the values of quantities. In many practical situations, we know the values with interval uncertainty. In such situations, we need to compare intervals. Allen’s algebra describes all possible relations between intervals on the real line, and ordering relations between such intervals are well studied. In this paper, we extend this description to intervals in an arbitrary partially ordered set (poset). In particular, we explicitly describe ordering relations between intervals that generalize relations between points. As auxiliary results, we provide a logical interpretation of the relation between intervals, and extend the results about interval graphs to intervals over posets.
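For intervals on the real line, the thirteen Allen relations that the paper generalizes can be enumerated directly. This is a minimal sketch for closed intervals; the paper's contribution, extending the relations to intervals over arbitrary posets, is not attempted here.

```python
# Classify the Allen relation between two closed real intervals a and b,
# each given as a (start, end) pair with start <= end.
def allen_relation(a, b):
    (s1, e1), (s2, e2) = a, b
    if e1 < s2:  return "before"
    if e2 < s1:  return "after"
    if e1 == s2: return "meets"
    if e2 == s1: return "met-by"
    if (s1, e1) == (s2, e2): return "equal"
    if s1 == s2: return "starts" if e1 < e2 else "started-by"
    if e1 == e2: return "finishes" if s1 > s2 else "finished-by"
    if s2 < s1 and e1 < e2: return "during"
    if s1 < s2 and e2 < e1: return "contains"
    return "overlaps" if s1 < s2 else "overlapped-by"

print(allen_relation((0, 2), (1, 3)))   # overlaps
print(allen_relation((1, 2), (0, 3)))   # during
```

On a general poset the total order on endpoints is lost, so pairs of intervals can also be incomparable; that is the case the paper's classification handles.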
Sideris, Costas; Alshurafa, Nabil; Pourhomayoun, Mohammad; Shahmohammadi, Farhad; Samy, Lauren; Sarrafzadeh, Majid
2015-01-01
In this paper, we propose a novel methodology for utilizing disease diagnostic information to predict severity of condition for Congestive Heart Failure (CHF) patients. Our methodology relies on a novel, clustering-based, feature extraction framework using disease diagnostic information. To reduce the dimensionality, we identify disease clusters using co-occurrence frequencies. We then utilize these clusters as features to predict patient severity of condition. We build our clustering and feature extraction algorithm using the 2012 National Inpatient Sample (NIS), Healthcare Cost and Utilization Project (HCUP), which contains 7 million discharge records and ICD-9-CM codes. The proposed framework is tested on Ronald Reagan UCLA Medical Center Electronic Health Records (EHR) from 3041 patients. We compare our cluster-based feature set with another that incorporates the Charlson comorbidity score as a feature and demonstrate an accuracy improvement of up to 14% in the predictability of the severity of condition.
Relation of weather forecasts to the prediction of dangerous forest fire conditions
R. H. Weidman
1923-01-01
The purpose of predicting dangerous forest-fire conditions, of course, is to reduce the great cost and damage caused by forest fires. In the region of Montana and northern Idaho alone the average cost to the United States Forest Service of fire protection and suppression is over $1,000,000 a year. Although the causes of forest fires will gradually be reduced by...
Diaconis, Persi; Holmes, Susan; Janson, Svante
2015-01-01
We work out a graph limit theory for dense interval graphs. The theory developed departs from the usual description of a graph limit as a symmetric function W (x, y) on the unit square, with x and y uniform on the interval (0, 1). Instead, we fix a W and change the underlying distribution of the coordinates x and y. We find choices such that our limits are continuous. Connections to random interval graphs are given, including some examples. We also show a continuity result for the chromatic number and clique number of interval graphs. Some results on uniqueness of the limit description are given for general graph limits. PMID:26405368
Kleandrova, Valeria V; Luan, Feng; González-Díaz, Humberto; Ruso, Juan M; Melo, André; Speck-Planche, Alejandro; Cordeiro, M Natália D S
2014-12-01
Nanotechnology has brought great advances to many fields of modern science. A manifold of applications of nanoparticles have been found due to their interesting optical, electrical, and biological/chemical properties. However, the potential toxic effects of nanoparticles to different ecosystems are of special concern nowadays. Despite the efforts of the scientific community, the mechanisms of toxicity of nanoparticles are still poorly understood. Quantitative-structure activity/toxicity relationships (QSAR/QSTR) models have just started being useful computational tools for the assessment of toxic effects of nanomaterials. But most QSAR/QSTR models have been applied so far to predict ecotoxicity against only one organism/bio-indicator such as Daphnia magna. This prevents a deeper knowledge of the real ecotoxic effects of nanoparticles and, consequently, makes an efficient risk assessment of nanomaterials in the environment impossible. In this work, a perturbation model for nano-QSAR problems is introduced with the aim of simultaneously predicting the ecotoxicity of different nanoparticles against several assay organisms (bio-indicators), by considering also multiple measures of ecotoxicity, as well as the chemical compositions, sizes, conditions under which the sizes were measured, shapes, and the time during which the diverse assay organisms were exposed to nanoparticles. The QSAR-perturbation model was derived from a database containing 5520 cases (nanoparticle-nanoparticle pairs), and it was shown to exhibit accuracies of ca. 99% in both training and prediction sets. In order to demonstrate the practical applicability of our model, three different nickel-based nanoparticles (Ni) with experimental values reported in the literature were predicted. The predictions were found to be in very good agreement with the experimental evidence, confirming that Ni-nanoparticles are not ecotoxic when compared with other nanoparticles. The results
Fixed-interval performance and self-control in infants.
Darcheville, J C; Rivière, V; Wearden, J H
1993-01-01
Twenty-six infants, 3 to 23 months old, were trained on fixed-interval schedules ranging from 10 s to 80 s. The operant response was touching an illuminated location on a touch-sensitive screen, and 20 s of cartoon presentation was the reinforcer. The subjects were also trained in a six-phase self-control procedure in which the critical phases involved choice between 20 s of cartoon available after a 0.5-s delay (impulsive choice) and 40 s of cartoon delayed for 40 s (self-controlled choice). All the youngest children (3 to 5 months) showed long postreinforcement pauses on the fixed-interval schedule, with most intervals involving the emission of a single, reinforced, response, and all made self-controlled choices. Older subjects (9 to 23 months) either produced the same pattern as the younger ones on the fixed-interval schedule (classified as pause-sensitive subjects) or produced short pauses and higher steady response rates (classified as pause-insensitive subjects). All pause-sensitive subjects made self-controlled choices in the self-control condition, and all pause-insensitive subjects made impulsive ones. PMID:8409821
Effect of different rest intervals after whole-body vibration on vertical jump performance.
Dabbs, Nicole C; Muñoz, Colleen X; Tran, Tai T; Brown, Lee E; Bottaro, Martim
2011-03-01
Whole-body vibration (WBV) may potentiate vertical jump (VJ) performance via augmented muscular strength and motor function. The purpose of this study was to evaluate the effect of different rest intervals after WBV on VJ performance. Thirty recreationally trained subjects (15 men and 15 women) volunteered to participate in 4 testing visits separated by 24 hours. Visit 1 acted as a familiarization visit where subjects were introduced to the VJ and WBV protocols. Visits 2-4 contained 2 randomized conditions per visit with a 10-minute rest period between conditions. The WBV was administered on a pivotal platform with a frequency of 30 Hz and an amplitude of 6.5 mm in 4 bouts of 30 seconds for a total of 2 minutes with 30 seconds of rest between bouts. During WBV, subjects performed a quarter squat every 5 seconds, simulating a countermovement jump (CMJ). Whole-body vibration was followed by 3 CMJs with 5 different rest intervals: immediate, 30 seconds, 1 minute, 2 minutes, or 4 minutes. For a control condition, subjects performed squats with no WBV. There were no significant (p > 0.05) differences in peak velocity or relative ground reaction force after WBV rest intervals. However, results of VJ height revealed that maximum values, regardless of rest interval (56.93 ± 13.98 cm), were significantly (p < 0.05) greater than the control condition (54.44 ± 13.74 cm). Therefore, subjects' VJ height potentiated at different times after WBV suggesting strong individual differences in optimal rest interval. Coaches may use WBV to enhance acute VJ performance but should first identify each individual's optimal rest time to maximize the potentiating effects.
NASA Astrophysics Data System (ADS)
Fei, S.; Xinong, X.
2017-12-01
The fifth organic-matter-rich interval (ORI 5) in the He-third Member of the Paleogene Hetaoyuan Formation is believed to be the main exploration target for shale oil in the Biyang Depression, eastern China. An important part of successfully exploring for and producing shale oil is to identify and predict organic-rich shale lithofacies with different reservoir capacities and rock geomechanical properties, which are related to organic matter content and mineral components. In this study, shale lithofacies are defined by core analysis data, well-logging and seismic data; the spatial-temporal distribution of the various lithologies is predicted qualitatively by seismic attribute technology and quantitatively by geostatistical inversion analysis; and the prediction results are confirmed by the logging data and geological background. ORI 5 is present in a lacustrine expansion system tract and can be further divided into four parasequence sets based on the analysis of conventional logs, TOC content and wavelet transform. Calcareous shale, dolomitic shale, argillaceous shale, silty shale and muddy siltstone are defined within ORI 5, and can be separated and predicted at regional scale by root mean square amplitude (RMS) analysis and wave impedance. The results indicate that in the early expansion system tract, dolomitic shale and calcareous shale developed widely in the study area, whereas argillaceous shale, silty shale, and muddy siltstone developed only in the periphery of the deep depression. With the lake level rising, argillaceous shale and calcareous shale are well developed, and argillaceous shale interbedded with silty shale or muddy siltstone developed in the deep or semi-deep lake. In the late expansion system tract, argillaceous shale is widely deposited in the deepest depression, and calcareous shale presents a banded distribution in the east of the depression. Actual test results indicate that these methods are feasible for predicting the spatial distribution of shale lithofacies.
Marchioro, C A; Krechemer, F S; de Moraes, C P; Foerster, L A
2015-12-01
The diamondback moth, Plutella xylostella (L.), is a cosmopolitan pest of brassicaceous crops occurring in regions with highly distinct climate conditions. Several studies have investigated the relationship between temperature and P. xylostella development rate, providing degree-day models for populations from different geographical regions. However, there are no data available to date to demonstrate the suitability of such models to make reliable projections on the development time for this species in field conditions. In the present study, 19 models available in the literature were tested regarding their ability to accurately predict the development time of two cohorts of P. xylostella under field conditions. Only 11 out of the 19 models tested accurately predicted the development time for the first cohort of P. xylostella, but only seven for the second cohort. Five models correctly predicted the development time for both cohorts evaluated. Our data demonstrate that the accuracy of the models available for P. xylostella varies widely and therefore should be used with caution for pest management purposes.
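A degree-day model of the kind evaluated above accumulates daily heat units above a base temperature until a thermal constant is reached. A generic sketch, not any of the 19 published P. xylostella models (the base temperature, thermal constant, and temperature series below are hypothetical):

```python
def degree_days(daily_min, daily_max, base):
    """Simple-average degree-day accumulation for one day."""
    return max(0.0, (daily_min + daily_max) / 2 - base)

def predict_development_days(temps, base, thermal_constant):
    """Days until accumulated degree-days reach the thermal constant K."""
    total = 0.0
    for day, (tmin, tmax) in enumerate(temps, start=1):
        total += degree_days(tmin, tmax, base)
        if total >= thermal_constant:
            return day
    return None  # development not completed within the record

# Hypothetical values for illustration (not fitted P. xylostella parameters):
temps = [(15.0, 25.0)] * 30  # constant 20 C daily mean
print(predict_development_days(temps, base=10.0, thermal_constant=100.0))  # 10
```

Field validation of the kind described in the abstract amounts to comparing such predicted completion days against observed cohort development times.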
Zbozinek, Tomislav D.; Hermans, Dirk; Prenoveau, Jason M.; Liao, Betty; Craske, Michelle G.
2014-01-01
Exposure therapy for anxiety disorders is translated from fear conditioning and extinction. While exposure therapy is effective in treating anxiety, fear sometimes returns after exposure. One pathway for return of fear is reinstatement: unsignaled unconditional stimuli following completion of extinction. The present study investigated the extent to which valence of the conditional stimulus (CS+) after extinction predicts return of CS+ fear after reinstatement. Participants (N = 84) engaged in a differential fear conditioning paradigm and were randomized to reinstatement or non-reinstatement. We hypothesized that more negative post-extinction CS+ valence would predict higher CS+ fear after reinstatement relative to non-reinstatement and relative to extinction retest. Results supported the hypotheses and suggest that strategies designed to decrease negative valence of the CS+ may reduce the return of fear via reinstatement following exposure therapy. PMID:24957680
Prediction of sand transport over immobile gravel from supply limited to capacity conditions
USDA-ARS?s Scientific Manuscript database
Prediction of the transport of sand in channels armored with gravel downstream of dams is difficult but necessary for the range of bed conditions from supply limited to capacity transport. Previous work has shown that information on the mean elevation of the sand relative to the gravel and on the s...
NASA Astrophysics Data System (ADS)
Rambo, J. E.; Kim, W.; Miller, K.
2017-12-01
Physical modeling of a delta's evolution can represent how changing the intervals of flood and interflood can alter a delta's fluvial pattern and geometry. Here we present a set of six experimental runs in which sediment and water were discharged at constant rates over each experiment. During the "flood" period, sediment and water were discharged at rates of 0.25 cm3/s and 15 ml/s, respectively, and during the "interflood" period, only water was discharged at 7.5 ml/s. The flood periods were only run for 30 minutes to keep the total volume of sediment constant. Run 0 did not have an interflood period and therefore ran with constant sediment and water discharge for the duration of the experiment. The other five runs had either 5, 10, or 15-min intervals of flood with 5, 10, or 15-min intervals of interflood. The experimental results show that Run 0 had the smallest topset area. This is due to a lack of surface reworking that takes place during interflood periods. Run 1 had 15-minute intervals of flood and 15-minute intervals of interflood, and it had the largest topset area. Additionally, the experiments that had longer intervals of interflood than flood had more elongated delta geometries. Wetted fraction color maps were also created to plot channel locations during each run. The maps show that the runs with longer interflood durations had channels occurring predominantly down the middle with stronger incisions; these runs produced deltas with more elongated geometries. When the interflood duration was even longer, however, strong channels started to occur at multiple locations. This increased interflood period allowed for the entire area over the delta's surface to be reworked, thus reducing the downstream slope and allowing channels to be more mobile laterally. Physical modeling of a delta allows us to predict a delta's resulting geometry given a set of conditions. This insight is needed especially with deltas being home to many populations of people and
Lifecourse social conditions and racial disparities in incidence of first stroke.
Glymour, M Maria; Avendaño, Mauricio; Haas, Steven; Berkman, Lisa F
2008-12-01
Some previous studies found excess stroke rates among black subjects persisted after adjustment for socioeconomic status (SES), fueling speculation regarding racially patterned genetic predispositions to stroke. Previous research was hampered by incomplete SES assessments, without measures of childhood conditions or adult wealth. We assess the role of lifecourse SES in explaining stroke risk and stroke disparities. Health and Retirement Study participants age 50+ (n = 20,661) were followed on average 9.9 years for self- or proxy-reported first stroke (2175 events). Childhood social conditions (southern state of birth, parental SES, self-reported fair/poor childhood health, and attained height), adult SES (education, income, wealth, and occupational status) and traditional cardiovascular risk factors were used to predict first stroke onset using Cox proportional hazards models. Black subjects had a 48% greater risk of first stroke incidence than whites (95% confidence interval, 1.33-1.65). Childhood conditions predicted stroke risk in both blacks and whites, independently of adult SES. Adjustment for both childhood social conditions and adult SES measures attenuated racial differences to marginal significance (hazard ratio, 1.13; 95% CI, 1.00-1.28). Childhood social conditions predict stroke risk in black and white American adults. Additional adjustment for adult SES, in particular wealth, nearly eliminated the disparity in stroke risk between black and white subjects.
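Hazard ratios like the one above (1.48; 95% CI, 1.33-1.65) are typically Wald intervals on the log scale, so the reported bounds imply a log-scale standard error that can be recovered. A sketch of this standard back-calculation (assuming a symmetric Wald CI, which the abstract does not state explicitly):

```python
import math

def se_from_ci(lower, upper, z=1.959964):
    """Log-scale standard error implied by a Wald 95% CI for a ratio."""
    return (math.log(upper) - math.log(lower)) / (2 * z)

def wald_ci(ratio, se, z=1.959964):
    """Reconstruct the 95% CI from a ratio and its log-scale SE."""
    return (ratio * math.exp(-z * se), ratio * math.exp(z * se))

# Reported in the abstract: HR 1.48, 95% CI (1.33, 1.65).
se = se_from_ci(1.33, 1.65)
print(round(se, 3))  # 0.055
lo, hi = wald_ci(math.sqrt(1.33 * 1.65), se)  # geometric midpoint ~1.481
print(round(lo, 2), round(hi, 2))  # 1.33 1.65
```

The round-trip through `wald_ci` is a quick consistency check that reported ratio-scale intervals really are symmetric on the log scale.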
Deviney, Frank A.; Rice, Karen C.; Hornberger, George M.
2006-01-01
Acid rain affects headwater streams by temporarily reducing the acid‐neutralizing capacity (ANC) of the water, a process termed episodic acidification. The increase in acidic components in stream water can have deleterious effects on the aquatic biota. Although acidic deposition is uniform across Shenandoah National Park (SNP) in north central Virginia, the stream water quality response during rain events varies substantially. This response is a function of the catchment's underlying geology and topography. Geologic and topographic data for SNP's 231 catchments are readily available; however, long‐term measurements (tens of years) of ANC and accompanying discharge are not and would be prohibitively expensive to collect. Transfer function time series models were developed to predict hourly ANC from discharge for five SNP catchments with long‐term water‐quality and discharge records. Hourly ANC predictions over short time periods (≤1 week) were averaged, and distributions of the recurrence intervals of annual water‐year minimum ANC values were model‐simulated for periods of 6, 24, 72, and 168 hours. The distributions were extrapolated to the rest of the SNP catchments on the basis of catchment geology and topography. On the basis of the models, large numbers of SNP streams have 6‐ to 168‐hour periods of low‐ANC values, which may stress resident fish populations. Smaller catchments are more vulnerable to episodic acidification than larger catchments underlain by the same bedrock. Catchments with similar topography and size are more vulnerable if underlain by less basaltic/carbonate bedrock. Many catchments are predicted to have successive years of low‐ANC values potentially sufficient to extirpate some species.
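Recurrence intervals of annual minima, as simulated above, are commonly estimated with plotting positions. A generic sketch using Weibull positions, not the authors' transfer-function model (the ANC values below are hypothetical):

```python
def recurrence_intervals(annual_minima):
    """Weibull plotting-position recurrence intervals for annual minima.
    For minima the lowest value is the most extreme event, so rank ascending."""
    n = len(annual_minima)
    ranked = sorted(annual_minima)  # most extreme (lowest) first
    return [(v, (n + 1) / (m + 1)) for m, v in enumerate(ranked)]

# Hypothetical annual water-year minimum ANC values (ueq/L), 9-year record:
minima = [12.0, 35.0, 8.0, 50.0, 20.0, 41.0, 15.0, 28.0, 5.0]
for value, t in recurrence_intervals(minima)[:3]:
    print(f"ANC {value:5.1f} -> recurrence interval {t:.1f} yr")
```

With only short records available for most catchments, model-simulated series (as in the study) would substitute for the observed `minima` before applying such an estimator.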
Serial binary interval ratios improve rhythm reproduction.
Wu, Xiang; Westanmo, Anders; Zhou, Liang; Pan, Junhao
2013-01-01
Musical rhythm perception is a natural human ability that involves complex cognitive processes. Rhythm refers to the organization of events in time, and musical rhythms have an underlying hierarchical metrical structure. The metrical structure induces the feeling of a beat and the extent to which a rhythm induces the feeling of a beat is referred to as its metrical strength. Binary ratios are the most frequent interval ratio in musical rhythms. Rhythms with hierarchical binary ratios are better discriminated and reproduced than rhythms with hierarchical non-binary ratios. However, it remains unclear whether a superiority of serial binary over non-binary ratios in rhythm perception and reproduction exists. In addition, how different types of serial ratios influence the metrical strength of rhythms remains to be elucidated. The present study investigated serial binary vs. non-binary ratios in a reproduction task. Rhythms formed with exclusively binary (1:2:4:8), non-binary integer (1:3:5:6), and non-integer (1:2.3:5.3:6.4) ratios were examined within a constant meter. The results showed that the 1:2:4:8 rhythm type was more accurately reproduced than the 1:3:5:6 and 1:2.3:5.3:6.4 rhythm types, and the 1:2.3:5.3:6.4 rhythm type was more accurately reproduced than the 1:3:5:6 rhythm type. Further analyses showed that reproduction performance was better predicted by the distribution pattern of event occurrences within an inter-beat interval, than by the coincidence of events with beats, or the magnitude and complexity of interval ratios. Whereas rhythm theories and empirical data emphasize the role of the coincidence of events with beats in determining metrical strength and predicting rhythm performance, the present results suggest that rhythm processing may be better understood when the distribution pattern of event occurrences is taken into account. These results provide new insights into the mechanisms underlying musical rhythm perception.
Tran, Mark W; Weiland, Tracey J; Phillips, Georgina A
2015-01-01
Psychosocial factors such as marital status (odds ratio, 3.52; 95% confidence interval, 1.43-8.69; P = .006) and nonclinical factors such as outpatient nonattendances (odds ratio, 2.52; 95% confidence interval, 1.22-5.23; P = .013) and referrals made (odds ratio, 1.20; 95% confidence interval, 1.06-1.35; P = .003) predict hospital utilization for patients in a chronic disease management program. Along with optimizing patients' clinical condition by prescribed medical guidelines and supporting patient self-management, addressing psychosocial and nonclinical issues are important in attempting to avoid hospital utilization for people with chronic illnesses.
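Odds ratios with confidence intervals like those above are conventionally computed from a 2x2 table (or a logistic model) with a Woolf log-scale interval. A sketch with hypothetical counts, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.959964):
    """Odds ratio and Woolf (log-scale) 95% CI from a 2x2 table:
       a exposed cases, b exposed non-cases, c unexposed cases, d non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return or_, or_ * math.exp(-z * se), or_ * math.exp(z * se)

# Hypothetical counts for illustration (not the study's data):
or_, lo, hi = odds_ratio_ci(10, 10, 5, 20)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # 4.0 1.07 14.9
```

The quoted P values would come from the corresponding Wald z statistic, log(OR)/SE, on the same log scale.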
Chosen interval methods for solving linear interval systems with special type of matrix
NASA Astrophysics Data System (ADS)
Szyszka, Barbara
2013-10-01
The paper is devoted to chosen direct interval methods for solving linear interval systems with a special type of matrix: a band matrix with a parameter, obtained from a finite difference problem. Such linear systems occur while solving the one-dimensional wave equation (a partial differential equation of hyperbolic type) using the central difference interval method of the second order. Interval methods are constructed so that the errors of the method are enclosed in the obtained results; therefore, the presented linear interval systems contain elements that determine the errors of the difference method. The chosen direct algorithms have been applied for solving the linear systems because they introduce no method error. All calculations were performed in floating-point interval arithmetic.
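The floating-point interval arithmetic mentioned above can be sketched with a naive interval class and a direct (Cramer-style) solve of a tiny interval system. This is only an illustration of the idea, not the paper's band-matrix algorithm, and it omits the outward rounding a rigorous implementation needs:

```python
class Interval:
    """Closed interval [lo, hi] with naive arithmetic (outward rounding,
    required for rigorous enclosures, is omitted for clarity)."""
    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, lo if hi is None else hi
    def __add__(self, o): return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o): return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))
    def __truediv__(self, o):
        assert not (o.lo <= 0 <= o.hi), "division by interval containing 0"
        return self * Interval(1 / o.hi, 1 / o.lo)
    def __repr__(self): return f"[{self.lo}, {self.hi}]"

def solve2x2(a11, a12, a21, a22, b1, b2):
    """Cramer-style enclosure of the solution of a 2x2 interval system."""
    det = a11 * a22 - a12 * a21
    return (b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det

x1, x2 = solve2x2(Interval(2), Interval(0), Interval(0), Interval(4),
                  Interval(2, 4), Interval(4, 8))
print(x1, x2)  # [1.0, 2.0] [1.0, 2.0]
```

For larger systems interval Gaussian elimination over the band structure would replace the Cramer formulas; the enclosure property (the true solution set lies inside the computed intervals) is what makes such direct methods attractive, since they add no truncation error of their own.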
NASA Astrophysics Data System (ADS)
Borah, Nabanita; Sukumarpillai, Abhilash; Sahai, Atul Kumar; Chattopadhyay, Rajib; Joseph, Susmitha; De, Soumyendu; Nath Goswami, Bhupendra; Kumar, Arun
2014-05-01
An ensemble prediction system (EPS) is devised for the extended range prediction (ERP) of monsoon intraseasonal oscillations (MISO) of the Indian summer monsoon (ISM) using NCEP Climate Forecast System model version 2 at T126 horizontal resolution. The EPS is formulated by producing 11-member ensembles through the perturbation of atmospheric initial conditions. The hindcast experiments were conducted at every 5-day interval for 45 days lead time starting from 16th May to 28th September during 2001-2012. The general simulation of ISM characteristics and the ERP skill of the proposed EPS at pentad mean scale are evaluated in the present study. Though the EPS underestimates both the mean and variability of ISM rainfall, it simulates the northward propagation of MISO reasonably well. It is found that the signal-to-noise ratio becomes unity by about 18 days and the predictability error saturates by about 25 days. Though useful deterministic forecasts could be generated up to 2nd pentad lead, significant correlations are observed even up to 4th pentad lead. The skill in predicting large-scale MISO, which is assessed by comparing the predicted and observed MISO indices, is found to be ~17 days. It is noted that the prediction skill of actual rainfall is closely related to the prediction of amplitude of large-scale MISO as well as the initial conditions related to the different phases of MISO. Categorical prediction skill reveals that break is more skillfully predicted, followed by active and then normal. The categorical probability skill scores suggest that useful probabilistic forecasts could be generated even up to 4th pentad lead.
NASA Astrophysics Data System (ADS)
Borah, N.; Abhilash, S.; Sahai, A. K.; Chattopadhyay, R.; Joseph, S.; Sharmila, S.; de, S.; Goswami, B.; Kumar, A.
2013-12-01
An ensemble prediction system (EPS) is devised for the extended range prediction (ERP) of monsoon intraseasonal oscillations (MISOs) of the Indian summer monsoon (ISM) using NCEP Climate Forecast System model version 2 at T126 horizontal resolution. The EPS is formulated by producing 11-member ensembles through the perturbation of atmospheric initial conditions. The hindcast experiments were conducted at every 5-day interval for 45 days lead time starting from 16th May to 28th September during 2001-2012. The general simulation of ISM characteristics and the ERP skill of the proposed EPS at pentad mean scale are evaluated in the present study. Though the EPS underestimates both the mean and variability of ISM rainfall, it simulates the northward propagation of MISO reasonably well. It is found that the signal-to-noise ratio becomes unity by about 18 days and the predictability error saturates by about 25 days. Though useful deterministic forecasts could be generated up to 2nd pentad lead, significant correlations are observed even up to 4th pentad lead. The skill in predicting large-scale MISO, which is assessed by comparing the predicted and observed MISO indices, is found to be ~17 days. It is noted that the prediction skill of actual rainfall is closely related to the prediction of amplitude of large-scale MISO as well as the initial conditions related to the different phases of MISO. Categorical prediction skill reveals that break is more skillfully predicted, followed by active and then normal. The categorical probability skill scores suggest that useful probabilistic forecasts could be generated even up to 4th pentad lead.
Chiu, Sherry Yueh-Hsia; Chuang, Shu-Ling; Chen, Sam Li-Sheng; Yen, Amy Ming-Fang; Fann, Jean Ching-Yuan; Chang, Dun-Cheng; Lee, Yi-Chia; Wu, Ming-Shiang; Chou, Chu-Kuang; Hsu, Wen-Feng; Chiou, Shu-Ti; Chiu, Han-Mo
2017-01-01
Objectives Interval colorectal cancer (CRC) after colonoscopy may affect effectiveness and cost-effectiveness of screening programmes. We aimed to investigate whether and how faecal haemoglobin concentration (FHbC) of faecal immunochemical testing (FIT) affected the risk prediction of interval cancer (IC) caused by inadequate colonoscopy quality in a FIT-based population screening programme. Design From 2004 to 2009, 29 969 subjects underwent complete colonoscopy after positive FIT in the Taiwanese Nationwide CRC Screening Program. The IC rate was traced until the end of 2012. The incidence of IC was calculated in relation to patient characteristics, endoscopy-related factors (such as adenoma detection rate (ADR)) and FHbC. Poisson regression analysis was performed to assess the potential risk factors for colonoscopy IC. Results One hundred and sixty-two ICs developed after an index colonoscopy and the estimated incidence was 1.14 per 1000 person-years of observation for the entire cohort. Increased risk of IC was most remarkable in the uptake of colonoscopy in settings with ADR lower than 15% (adjusted relative risk (aRR)=3.09, 95% CI 1.55 to 6.18) and then by higher FHbC (μg Hb/g faeces) (100–149: aRR=2.55, 95% CI 1.52 to 4.29, ≥150: aRR=2.74, 95% CI 1.84 to 4.09) with adjustment for older age and colorectal neoplasm detected at baseline colonoscopy. Similar findings were observed for subjects with negative index colonoscopy. Conclusions Colonoscopy ICs arising from FIT-based population screening programmes were mainly influenced by inadequate colonoscopy quality and independently predicted by FHbC that is associated with a priori chance of advanced neoplasm. This finding is helpful for future modification of screening logistics based on FHbC. PMID:26515543
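The incidence quoted above (1.14 per 1000 person-years) is simply events divided by person-time. A trivial sketch; the total person-years figure below is back-solved from the abstract's numbers, not stated in it:

```python
def incidence_per_1000_py(events, person_years):
    """Incidence rate per 1000 person-years of observation."""
    return 1000.0 * events / person_years

# 162 interval cancers at 1.14 per 1000 person-years implies roughly
# 162 / 0.00114 ~= 142,000 person-years of follow-up (back-solved, not stated).
rate = incidence_per_1000_py(162, 142_105)
print(round(rate, 2))  # 1.14
```

The adjusted relative risks in the abstract would then come from a Poisson regression with this person-time as the exposure offset.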
NASA Technical Reports Server (NTRS)
Suzen, Y. Bora; Huang, P. G.; Hultgren, Lennart S.; Ashpis, David E.
2001-01-01
A new transport equation for the intermittency factor was proposed to predict separated and transitional boundary layers under low-pressure turbine airfoil conditions. The intermittent behavior of the transitional flows is taken into account and incorporated into computations by modifying the eddy viscosity, mu(sub t), with the intermittency factor, gamma. Turbulent quantities are predicted by using Menter's two-equation turbulence model (SST). The intermittency factor is obtained from a transport equation model, which not only can reproduce the experimentally observed streamwise variation of the intermittency in the transition zone, but also can provide a realistic cross-stream variation of the intermittency profile. In this paper, the intermittency model is used to predict a recent separated and transitional boundary layer experiment under low pressure turbine airfoil conditions. The experiment provides detailed measurements of velocity, turbulent kinetic energy and intermittency profiles for a number of Reynolds numbers and freestream turbulent intensity conditions and is suitable for validation purposes. Detailed comparisons of computational results with experimental data are presented and good agreements between the experiments and predictions are obtained.
NASA Technical Reports Server (NTRS)
Suzen, Y. B.; Huang, P. G.; Hultgren, Lennart S.; Ashpis, David E.
2003-01-01
A new transport equation for the intermittency factor was proposed to predict separated and transitional boundary layers under low-pressure turbine airfoil conditions. The intermittent behavior of the transitional flows is taken into account and incorporated into computations by modifying the eddy viscosity, mu(sub t), with the intermittency factor, gamma. Turbulent quantities are predicted by using Menter's two-equation turbulence model (SST). The intermittency factor is obtained from a transport equation model, which not only can reproduce the experimentally observed streamwise variation of the intermittency in the transition zone, but also can provide a realistic cross-stream variation of the intermittency profile. In this paper, the intermittency model is used to predict a recent separated and transitional boundary layer experiment under low pressure turbine airfoil conditions. The experiment provides detailed measurements of velocity, turbulent kinetic energy and intermittency profiles for a number of Reynolds numbers and freestream turbulent intensity conditions and is suitable for validation purposes. Detailed comparisons of computational results with experimental data are presented and good agreements between the experiments and predictions are obtained.
The Use of One-Sample Prediction Intervals for Estimating CO2 Scrubber Canister Durations
2012-10-01
Grade and 812 D-Grade Sofnolime. Definitions: According to Devore, a CI (confidence interval) refers to a parameter, or population characteristic, whose value is fixed but unknown to us. In contrast, a future value of Y is not a parameter but instead a random variable; for this
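The one-sample prediction interval the report's title refers to is, following Devore, ybar +/- t(alpha/2, n-1) * s * sqrt(1 + 1/n) for a single future observation. A sketch with a stdlib-only Student-t quantile (numerical integration plus bisection); the canister-duration sample is hypothetical, not from the report:

```python
import math, statistics

def t_pdf(x, df):
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def t_quantile(p, df, hi=100.0, steps=2000):
    """Upper-tail Student-t quantile (p > 0.5) via Simpson's rule + bisection."""
    def cdf(x):
        h = x / steps
        s = t_pdf(0, df) + t_pdf(x, df)
        for i in range(1, steps):
            s += (4 if i % 2 else 2) * t_pdf(i * h, df)
        return 0.5 + s * h / 3
    lo_x, hi_x = 0.0, hi
    for _ in range(50):
        mid = (lo_x + hi_x) / 2
        lo_x, hi_x = (mid, hi_x) if cdf(mid) < p else (lo_x, mid)
    return (lo_x + hi_x) / 2

def prediction_interval(sample, conf=0.95):
    """One-sample PI for one future observation: ybar +/- t*s*sqrt(1 + 1/n)."""
    n = len(sample)
    ybar, s = statistics.mean(sample), statistics.stdev(sample)
    t = t_quantile(1 - (1 - conf) / 2, n - 1)
    half = t * s * math.sqrt(1 + 1 / n)
    return ybar - half, ybar + half

# Hypothetical canister-duration data (hours):
lo, hi = prediction_interval([5.1, 4.8, 5.4, 5.0, 4.9, 5.2])
print(round(lo, 2), round(hi, 2))
```

As the snippet's CI-vs-PI distinction notes, the extra `1` under the square root accounts for the variability of the future observation itself, which is why a PI is always wider than the corresponding CI for the mean.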
Control of Angular Intervals for Angle-Multiplexed Holographic Memory
NASA Astrophysics Data System (ADS)
Kinoshita, Nobuhiro; Muroi, Tetsuhiko; Ishii, Norihiko; Kamijo, Koji; Shimidzu, Naoki
2009-03-01
In angle-multiplexed holographic memory, the full width at half maximum of the Bragg selectivity curves is dependent on the angle formed between the medium and incident laser beams. This indicates the possibility of achieving high density and a high multiplexing number by varying the angular intervals between adjacent holograms. We propose an angular interval scheduling for closely stacking holograms into the medium even when the angle range is limited. We obtained bit error rates of the order of 10^-4 under the following conditions: medium thickness of 1 mm, laser beam wavelength of 532 nm, and angular multiplexing number of 300.
Improved central confidence intervals for the ratio of Poisson means
NASA Astrophysics Data System (ADS)
Cousins, R. D.
The problem of confidence intervals for the ratio of two unknown Poisson means was "solved" decades ago, but a closer examination reveals that the standard solution is far from optimal from the frequentist point of view. We construct a more powerful set of central confidence intervals, each of which is a (typically proper) subinterval of the corresponding standard interval. They also provide upper and lower confidence limits which are more restrictive than the standard limits. The construction follows Neyman's original prescription, though discreteness of the Poisson distribution and the presence of a nuisance parameter (one of the unknown means) lead to slightly conservative intervals. Philosophically, the issue of the appropriateness of the construction method is similar to the issue of conditioning on the margins in 2×2 contingency tables. From a frequentist point of view, the new set maintains (over) coverage of the unknown true value of the ratio of means at each stated confidence level, even though the new intervals are shorter than the old intervals by any measure (except for two cases where they are identical). As an example, when the number 2 is drawn from each Poisson population, the 90% CL central confidence interval on the ratio of means is (0.169, 5.196), rather than (0.108, 9.245). In the cited literature, such confidence intervals have applications in numerous branches of pure and applied science, including agriculture, wildlife studies, manufacturing, medicine, reliability theory, and elementary particle physics.
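The standard central interval that serves as the baseline here can be reproduced by conditioning on the total count: given counts n1 and n2, n1 is binomial in the total with success probability p = λ1/(λ1 + λ2), so a Clopper-Pearson central interval for p maps to the ratio λ1/λ2 = p/(1 − p). A minimal sketch of that standard construction (the paper's improved subintervals are not reproduced here):

```python
from scipy.stats import beta

def standard_ratio_interval(n1, n2, cl=0.90):
    """Standard central confidence interval for the ratio of two Poisson
    means, via conditioning: n1 | n1+n2 ~ Binomial(n, p) with
    p = lam1/(lam1+lam2).  A Clopper-Pearson central interval for p is
    mapped to the ratio scale through p/(1-p)."""
    n = n1 + n2
    a = (1.0 - cl) / 2.0
    p_lo = beta.ppf(a, n1, n - n1 + 1) if n1 > 0 else 0.0
    p_hi = beta.ppf(1.0 - a, n1 + 1, n - n1) if n1 < n else 1.0
    lo = p_lo / (1.0 - p_lo)
    hi = p_hi / (1.0 - p_hi) if p_hi < 1.0 else float("inf")
    return lo, hi

lo, hi = standard_ratio_interval(2, 2)  # the n1 = n2 = 2 example quoted above
# lo ≈ 0.108, hi ≈ 9.245
```

For n1 = n2 = 2 this reproduces the standard 90% CL interval (0.108, 9.245) quoted in the abstract; the improved construction yields the strictly narrower (0.169, 5.196).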
Image discrimination models predict detection in fixed but not random noise
NASA Technical Reports Server (NTRS)
Ahumada, A. J. Jr; Beard, B. L.; Watson, A. B. (Principal Investigator)
1997-01-01
By means of a two-interval forced-choice procedure, contrast detection thresholds for an aircraft positioned on a simulated airport runway scene were measured with fixed and random white-noise masks. The term fixed noise refers to a constant, or unchanging, noise pattern for each stimulus presentation. The random noise was either the same or different in the two intervals. Contrary to simple image discrimination model predictions, the same random noise condition produced greater masking than the fixed noise. This suggests that observers seem unable to hold a new noisy image for comparison. Also, performance appeared limited by internal process variability rather than by external noise variability, since similar masking was obtained for both random noise types.
Ecological Factors Predict Transition Readiness/Self-Management in Youth With Chronic Conditions.
Javalkar, Karina; Johnson, Meredith; Kshirsagar, Abhijit V; Ocegueda, Sofia; Detwiler, Randal K; Ferris, Maria
2016-01-01
Health care transition readiness or self-management among adolescents and young adults (AYA) with chronic conditions may be influenced by factors related to their surrounding environment. Study participants were AYA diagnosed with a chronic condition and evaluated at pediatric- and adult-focused subspecialty clinics at the University of North Carolina Hospital Systems. All participants were administered a provider-administered self-management/transition-readiness tool, the UNC TRxANSITION Scale. Geographic area and associated characteristics (ecological factors) were identified for each participant's ZIP code using the published U.S. Census data. The Level 1 model of the hierarchical linear regression used individual-level predictors of transition readiness/self-management. The Level 2 model incorporated the ecological factors. We enrolled 511 AYA with different chronic conditions aged 12-31 years with the following characteristics: mean age of 20 ± 4 years, 45% white, 42% black, and 54% female. Participants represented 214 ZIP codes in or around North Carolina, USA. The Level 1 model showed that age, gender, and race were significant predictors of transition readiness/self-management. On adding the ecological factors in the Level 2 model, race was no longer significant. Participants from a geographic area with a greater percentage of females (β = .114, p = .005) and a higher median income (β = .126, p = .002) had greater overall transition readiness. Ecological factors also predicted subdomains of transition readiness/self-management. In this cohort of adolescents and young adults with different chronic conditions, ecological disparities such as sex composition, median income, and language predict self-management/transition readiness. It is important to take ecological risk factors into consideration when preparing patients for health self-management or transition. Copyright © 2016 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
Interval Predictor Models for Data with Measurement Uncertainty
NASA Technical Reports Server (NTRS)
Lacerda, Marcio J.; Crespo, Luis G.
2017-01-01
An interval predictor model (IPM) is a computational model that predicts the range of an output variable given input-output data. This paper proposes strategies for constructing IPMs based on semidefinite programming and sum of squares (SOS). The models are optimal in the sense that they yield an interval-valued function of minimal spread containing all the observations. Two different scenarios are considered. The first one is applicable to situations where the data is measured precisely, whereas the second one is applicable to data subject to known biases and measurement error. In the latter case, the IPMs are designed to fully contain regions in the input-output space where the data is expected to fall. Moreover, we propose a strategy for reducing the computational cost associated with generating IPMs as well as means to simulate them. Numerical examples illustrate the usage and performance of the proposed formulations.
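For intuition, the minimal-spread objective can be illustrated with an affine IPM on precisely measured data, which reduces to a small linear program (a simplified sketch of the idea, not the paper's semidefinite/SOS formulation):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 0.5 + rng.uniform(-0.3, 0.3, size=x.size)

# Decision variables z = [a, b, c]: centre line a*x + b, half-spread c.
# Minimise c subject to every observation lying inside [a*x+b-c, a*x+b+c].
n = x.size
A_ub = np.vstack([
    np.column_stack([-x, -np.ones(n), -np.ones(n)]),  #  y - (a x + b) <= c
    np.column_stack([ x,  np.ones(n), -np.ones(n)]),  # (a x + b) - y <= c
])
b_ub = np.concatenate([-y, y])
res = linprog(c=[0.0, 0.0, 1.0], A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None), (None, None), (0.0, None)])
a_hat, b_hat, c_hat = res.x  # interval I(x) = [a_hat*x + b_hat ± c_hat]
```

Every data point falls inside the returned interval and its spread 2·c_hat is the smallest achievable for this model class; the SOS machinery in the paper generalises the same idea to polynomial bounds and to data known only up to measurement error.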
Inverse analysis and regularisation in conditional source-term estimation modelling
NASA Astrophysics Data System (ADS)
Labahn, Jeffrey W.; Devaud, Cecile B.; Sipkens, Timothy A.; Daun, Kyle J.
2014-05-01
Conditional Source-term Estimation (CSE) obtains the conditional species mass fractions by inverting a Fredholm integral equation of the first kind. In the present work, a Bayesian framework is used to compare two different regularisation methods: zeroth-order temporal Tikhonov regularisation and first-order spatial Tikhonov regularisation. The objectives of the current study are: (i) to elucidate the ill-posedness of the inverse problem; (ii) to understand the origin of the perturbations in the data and quantify their magnitude; (iii) to quantify the uncertainty in the solution using different priors; and (iv) to determine the regularisation method best suited to this problem. A singular value decomposition shows that the current inverse problem is ill-posed. Perturbations to the data may be caused by the use of a discrete mixture fraction grid for calculating the mixture fraction PDF. The magnitude of the perturbations is estimated using a box filter and the uncertainty in the solution is determined based on the width of the credible intervals. The width of the credible intervals is significantly reduced with the inclusion of a smoothing prior and the recovered solution is in better agreement with the exact solution. The credible intervals for temporal and spatial smoothing are shown to be similar. Credible intervals for temporal smoothing depend on the solution from the previous time step and a smooth solution is not guaranteed. For spatial smoothing, the credible intervals are not dependent upon a previous solution and better predict characteristics for higher mixture fraction values. These characteristics make spatial smoothing a promising alternative method for recovering a solution from the CSE inversion process.
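Zeroth-order Tikhonov regularisation, one of the two schemes compared, replaces the unstable inversion of Ax = b with minimising ‖Ax − b‖² + λ²‖x‖², conveniently solved as an augmented least-squares system. A generic sketch with a synthetic smoothing kernel (not the CSE operators or data):

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Zeroth-order Tikhonov: minimise ||A x - b||^2 + lam^2 ||x||^2,
    solved by stacking lam*I under A as an augmented least-squares system."""
    n = A.shape[1]
    A_aug = np.vstack([A, lam * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x

# Synthetic ill-posed problem: a smoothing (Fredholm-type) kernel matrix
n = 40
s = np.linspace(0.0, 1.0, n)
A = np.exp(-((s[:, None] - s[None, :]) ** 2) / 0.01) / n   # near-singular
x_true = np.sin(np.pi * s)
rng = np.random.default_rng(1)
b = A @ x_true + 1e-4 * rng.standard_normal(n)             # noisy data

x_reg = tikhonov_solve(A, b, lam=1e-3)   # stable, smooth reconstruction
```

The regularisation parameter λ plays the role of the prior width in the Bayesian framing above: larger λ narrows the credible intervals but biases the recovered solution toward zero.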
Visual Bias Predicts Gait Adaptability in Novel Sensory Discordant Conditions
NASA Technical Reports Server (NTRS)
Brady, Rachel A.; Batson, Crystal D.; Peters, Brian T.; Mulavara, Ajitkumar P.; Bloomberg, Jacob J.
2010-01-01
We designed a gait training study that presented combinations of visual flow and support-surface manipulations to investigate the response of healthy adults to novel discordant sensorimotor conditions. We aimed to determine whether a relationship existed between subjects' visual dependence and their postural stability and cognitive performance in a new discordant environment presented at the conclusion of training (Transfer Test). Our training system comprised a treadmill placed on a motion base facing a virtual visual scene that provided a variety of sensory challenges. Ten healthy adults completed 3 training sessions during which they walked on a treadmill at 1.1 m/s while receiving discordant support-surface and visual manipulations. At the first visit, in an analysis of normalized torso translation measured in a scene-movement-only condition, 3 of 10 subjects were classified as visually dependent. During the Transfer Test, all participants received a 2-minute novel exposure. In a combined measure of stride frequency and reaction time, the non-visually dependent subjects showed improved adaptation on the Transfer Test compared to their visually dependent counterparts. This finding suggests that individual differences in the ability to adapt to new sensorimotor conditions may be explained by individuals' innate sensory biases. An accurate preflight assessment of crewmembers' biases for visual dependence could be used to predict their propensities to adapt to novel sensory conditions. It may also facilitate the development of customized training regimens that could expedite adaptation to alternate gravitational environments.
Emotional arousal predicts intertemporal choice
Lempert, Karolina M.; Johnson, Eli; Phelps, Elizabeth A.
2016-01-01
People generally prefer immediate rewards to rewards received after a delay, often even when the delayed reward is larger. This phenomenon is known as temporal discounting. It has been suggested that preferences for immediate rewards may be due to their being more concrete than delayed rewards. This concreteness may evoke an enhanced emotional response. Indeed, manipulating the representation of a future reward to make it more concrete has been shown to heighten the reward’s subjective emotional intensity, making people more likely to choose it. Here we use an objective measure of arousal – pupil dilation – to investigate if emotional arousal mediates the influence of delayed reward concreteness on choice. We recorded pupil dilation responses while participants made choices between immediate and delayed rewards. We manipulated concreteness through time interval framing: delayed rewards were presented either with the date on which they would be received (e.g., “$30, May 3”; DATE condition, more concrete) or in terms of delay to receipt (e.g., “$30, 7 days”; DAYS condition, less concrete). Contrary to prior work, participants were not overall more patient in the DATE condition. However, there was individual variability in response to time framing, and this variability was predicted by differences in pupil dilation between conditions. Emotional arousal increased as the subjective value of delayed rewards increased, and predicted choice of the delayed reward on each trial. This study advances our understanding of the role of emotion in temporal discounting. PMID:26882337
Fault detection for discrete-time LPV systems using interval observers
NASA Astrophysics Data System (ADS)
Zhang, Zhi-Hui; Yang, Guang-Hong
2017-10-01
This paper is concerned with the fault detection (FD) problem for discrete-time linear parameter-varying systems subject to bounded disturbances. A parameter-dependent FD interval observer is designed based on parameter-dependent Lyapunov and slack matrices. The design method is presented by translating the parameter-dependent linear matrix inequalities (LMIs) into finite ones. In contrast to the existing results based on parameter-independent and diagonal Lyapunov matrices, the derived disturbance attenuation, fault sensitivity and nonnegative conditions lead to less conservative LMI characterisations. Furthermore, without the need to design the residual evaluation functions and thresholds, the residual intervals generated by the interval observers are used directly for FD decision. Finally, simulation results are presented for showing the effectiveness and superiority of the proposed method.
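The residual-interval decision rule can be illustrated on a scalar discrete-time system (hypothetical gains and bounds, not the paper's parameter-dependent LMI design): choose an observer gain L with A − LC ≥ 0 so the upper and lower state estimates stay ordered, then flag a fault whenever the measurement leaves the predicted output interval.

```python
import numpy as np

A, C, L, wbar = 0.9, 1.0, 0.5, 0.1  # scalar plant, output map, gain, |w| <= wbar
assert A - L * C >= 0               # nonnegativity keeps xlo <= x <= xhi

rng = np.random.default_rng(3)
x, xlo, xhi = 0.0, -0.5, 0.5        # true state starts inside the bounds
alarms = []
for k in range(100):
    fault = 1.0 if k >= 60 else 0.0     # additive sensor fault from k = 60 on
    y = C * x + fault
    # FD decision: alarm if y falls outside the predicted output interval
    alarms.append(not (C * xlo <= y <= C * xhi))
    # interval observer update: guaranteed bounds on the next state
    xhi = A * xhi + L * (y - C * xhi) + wbar
    xlo = A * xlo + L * (y - C * xlo) - wbar
    x = A * x + rng.uniform(-wbar, wbar)
```

While the sensor is healthy the containment xlo ≤ x ≤ xhi is guaranteed, so no residual threshold needs tuning; the unit bias at k = 60 exceeds the interval width (which settles near 2·wbar/(1 − A + LC) ≈ 0.33) and raises the alarm immediately.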
Humidity-corrected Arrhenius equation: The reference condition approach.
Naveršnik, Klemen; Jurečič, Rok
2016-03-16
Accelerated and stress stability data is often used to predict shelf life of pharmaceuticals. Temperature, combined with humidity, accelerates chemical decomposition, and the Arrhenius equation is used to extrapolate accelerated stability results to long-term stability. Statistical estimation of the humidity-corrected Arrhenius equation is not straightforward due to its non-linearity. A two-stage nonlinear fitting approach is used in practice, followed by a prediction stage. We developed a single-stage statistical procedure, called the reference condition approach, which has better statistical properties (less collinearity, direct estimation of uncertainty, narrower prediction interval) and is significantly easier to use, compared to the existing approaches. Our statistical model was populated with data from a 35-day stress stability study on a laboratory batch of vitamin tablets and required a mere 30 laboratory assay determinations. The stability prediction agreed well with the actual 24-month long-term stability of the product. The approach has high potential to assist product formulation, specification setting and stability statements. Copyright © 2016 Elsevier B.V. All rights reserved.
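The reference-condition idea can be sketched by centring the humidity-corrected Arrhenius model on a chosen reference state, so the intercept is directly the log degradation rate at, say, 25 °C/60% RH (illustrative synthetic data and parameter values, not the study's vitamin-tablet results):

```python
import numpy as np
from scipy.optimize import curve_fit

R = 8.314                       # gas constant, J/(mol K)
T_REF, RH_REF = 298.15, 60.0    # reference condition: 25 °C / 60% RH

def ln_rate(X, ln_kref, Ea, B):
    """Humidity-corrected Arrhenius, centred on the reference condition:
    ln k = ln k_ref - (Ea/R) (1/T - 1/T_ref) + B (RH - RH_ref)."""
    T, RH = X
    return ln_kref - (Ea / R) * (1.0 / T - 1.0 / T_REF) + B * (RH - RH_REF)

# synthetic stress-study design: temperatures (K) and relative humidities (%)
T = np.array([313.15, 313.15, 323.15, 323.15, 333.15, 333.15])
RH = np.array([20.0, 75.0, 20.0, 75.0, 20.0, 75.0])
true = (np.log(0.01), 80e3, 0.02)   # ln k_ref, Ea (J/mol), humidity slope
rng = np.random.default_rng(7)
ln_k = ln_rate((T, RH), *true) + 0.01 * rng.standard_normal(T.size)

popt, pcov = curve_fit(ln_rate, (T, RH), ln_k, p0=(0.0, 50e3, 0.0))
k_ref = np.exp(popt[0])  # degradation rate predicted at the reference condition
```

Because ln k is linear in the centred predictors, the fit is well behaved, and the shelf-life-relevant quantity k_ref is estimated directly, with its uncertainty read off pcov rather than propagated through a two-stage procedure.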
Conditioned pain modulation predicts duloxetine efficacy in painful diabetic neuropathy.
Yarnitsky, David; Granot, Michal; Nahman-Averbuch, Hadas; Khamaisi, Mogher; Granovsky, Yelena
2012-06-01
This study aims to individualize the selection of drugs for neuropathic pain by examining the potential coupling of a given drug's mechanism of action with the patient's pain modulation pattern. The latter is assessed by the conditioned pain modulation (CPM) and temporal summation (TS) protocols. We hypothesized that patients with a malfunctioning pain modulation pattern, such as less efficient CPM, would benefit more from drugs augmenting descending inhibitory pain control than would patients with a normal modulation pattern of efficient CPM. Thirty patients with painful diabetic neuropathy received 1 week of placebo, 1 week of 30 mg/d duloxetine, and 4 weeks of 60 mg/d duloxetine. Pain modulation was assessed psychophysically, both before and at the end of treatment. Patient assessment of drug efficacy, assessed weekly, was the study's primary outcome. Baseline CPM was found to be correlated with duloxetine efficacy (r=0.628, P<.001, efficient CPM is marked negative), such that less efficient CPM predicted efficacious use of duloxetine. Regression analysis (R(2)=0.673; P=.012) showed that drug efficacy was predicted only by CPM (P=.001) and not by pretreatment pain levels, neuropathy severity, depression level, or patient assessment of improvement by placebo. Furthermore, beyond its predictive value, the treatment-induced improvement in CPM was correlated with drug efficacy (r=-0.411, P=.033). However, this improvement occurred only in patients with less efficient CPM (16.8±16.0 to -1.1±15.5, P<.050). No predictive role was found for TS. In conclusion, the coupling of CPM and duloxetine efficacy highlights the importance of pain pathophysiology in the clinical decision-making process. This evaluative approach promotes personalized pain therapy. Copyright © 2012 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.
Prediction and visualization of redox conditions in the groundwater of Central Valley, California
Rosecrans, Celia Z.; Nolan, Bernard T.; Gronberg, JoAnn M.
2017-01-01
Regional-scale, three-dimensional continuous probability models were constructed for aspects of redox conditions in the groundwater system of the Central Valley, California. These models yield grids depicting the probability that groundwater in a particular location will have dissolved oxygen (DO) concentrations less than selected threshold values representing anoxic groundwater conditions, or will have dissolved manganese (Mn) concentrations greater than selected threshold values representing secondary drinking water-quality contaminant levels (SMCL) and health-based screening levels (HBSL). The probability models were constrained by the alluvial boundary of the Central Valley to a depth of approximately 300 m. Probability distribution grids can be extracted from the 3-D models at any desired depth, and are of interest to water-resource managers, water-quality researchers, and groundwater modelers concerned with the occurrence of natural and anthropogenic contaminants related to anoxic conditions. Models were constructed using a Boosted Regression Trees (BRT) machine learning technique that produces many trees as part of an additive model and has the ability to handle many variables, automatically incorporate interactions, and is resistant to collinearity. Machine learning methods for statistical prediction are becoming increasingly popular in that they do not require assumptions associated with traditional hypothesis testing. Models were constructed using measured dissolved oxygen and manganese concentrations sampled from 2767 wells within the alluvial boundary of the Central Valley, and over 60 explanatory variables representing regional-scale soil properties, soil chemistry, land use, aquifer textures, and aquifer hydrologic properties. Models were trained on a USGS dataset of 932 wells, and evaluated on an independent hold-out dataset of 1835 wells from the California Division of Drinking Water. We used cross-validation to assess the predictive performance of
Prediction and visualization of redox conditions in the groundwater of Central Valley, California
NASA Astrophysics Data System (ADS)
Rosecrans, Celia Z.; Nolan, Bernard T.; Gronberg, JoAnn M.
2017-03-01
Regional-scale, three-dimensional continuous probability models were constructed for aspects of redox conditions in the groundwater system of the Central Valley, California. These models yield grids depicting the probability that groundwater in a particular location will have dissolved oxygen (DO) concentrations less than selected threshold values representing anoxic groundwater conditions, or will have dissolved manganese (Mn) concentrations greater than selected threshold values representing secondary drinking water-quality contaminant levels (SMCL) and health-based screening levels (HBSL). The probability models were constrained by the alluvial boundary of the Central Valley to a depth of approximately 300 m. Probability distribution grids can be extracted from the 3-D models at any desired depth, and are of interest to water-resource managers, water-quality researchers, and groundwater modelers concerned with the occurrence of natural and anthropogenic contaminants related to anoxic conditions. Models were constructed using a Boosted Regression Trees (BRT) machine learning technique that produces many trees as part of an additive model and has the ability to handle many variables, automatically incorporate interactions, and is resistant to collinearity. Machine learning methods for statistical prediction are becoming increasingly popular in that they do not require assumptions associated with traditional hypothesis testing. Models were constructed using measured dissolved oxygen and manganese concentrations sampled from 2767 wells within the alluvial boundary of the Central Valley, and over 60 explanatory variables representing regional-scale soil properties, soil chemistry, land use, aquifer textures, and aquifer hydrologic properties. Models were trained on a USGS dataset of 932 wells, and evaluated on an independent hold-out dataset of 1835 wells from the California Division of Drinking Water. We used cross-validation to assess the predictive performance of
Scaling and memory in volatility return intervals in financial markets
NASA Astrophysics Data System (ADS)
Yamasaki, Kazuko; Muchnik, Lev; Havlin, Shlomo; Bunde, Armin; Stanley, H. Eugene
2005-06-01
For both stock and currency markets, we study the return intervals τ between the daily volatilities of the price changes that are above a certain threshold q. We find that the distribution function Pq(τ) scales with the mean return interval τ̄ as Pq(τ) = (1/τ̄)f(τ/τ̄). The scaling function f(x) is similar in form for all seven stocks and for all seven currency databases analyzed, and f(x) is consistent with a power-law form, f(x) ~ x^−γ with γ ≈ 2. We also quantify how the conditional distribution Pq(τ|τ0) depends on the previous return interval τ0 and find that small (or large) return intervals are more likely to be followed by small (or large) return intervals. This “clustering” of the volatility return intervals is a previously unrecognized phenomenon that we relate to the long-term correlations known to be present in the volatility. Author contributions: S.H. and H.E.S. designed research; K.Y., L.M., S.H., and H.E.S. performed research; A.B. contributed new reagents/analytic tools; A.B. analyzed data; and S.H. wrote the paper. Abbreviations: pdf, probability density function; S&P 500, Standard and Poor's 500 Index; USD, U.S. dollar; JPY, Japanese yen; SEK, Swedish krona.
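The return-interval statistic itself is simple to compute: record the waiting times between successive days on which the volatility exceeds the threshold q (a generic sketch of the quantity analysed, not the authors' data pipeline):

```python
import numpy as np

def return_intervals(vol, q):
    """Waiting times tau between successive exceedances of threshold q."""
    idx = np.flatnonzero(np.asarray(vol) > q)   # days with volatility > q
    return np.diff(idx)                          # gaps between those days

vol = [1.0, 3.0, 1.0, 1.0, 4.0, 5.0, 1.0, 3.0]
tau = return_intervals(vol, q=2.0)   # exceedances on days 1, 4, 5, 7
print(tau)                           # [3 1 2]
mean_tau = tau.mean()                # the scaling variable tau-bar
```

Dividing τ by its mean and histogramming τ/τ̄ for several thresholds q is then the collapse test behind the scaling form reported in the abstract.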
Finding Every Root of a Broad Class of Real, Continuous Functions in a Given Interval
NASA Technical Reports Server (NTRS)
Tausworthe, Robert C.; Wolgast, Paul A.
2011-01-01
One of the most pervasive needs within the Deep Space Network (DSN) Metric Prediction Generator (MPG) view period event generation is that of finding solutions to given occurrence conditions. While the general form of an equation expresses equivalence between its left-hand and right-hand expressions, the traditional treatment of the subject subtracts the two sides, leaving an expression of the form f(x) = 0. Values of the independent variable x satisfying this condition are roots, or solutions. Generally speaking, there may be no solutions, a unique solution, multiple solutions, or a continuum of solutions to a given equation. In particular, all view period events are modeled as zero crossings of various metrics; for example, the time at which the elevation of a spacecraft reaches its maximum value, as viewed from a Deep Space Station (DSS), is found by locating that point at which the derivative of the elevation function becomes zero. Moreover, each event type may have several occurrences within a given time interval of interest. For example, a spacecraft in a low Moon orbit will experience several possible occultations per day, each of which must be located in time. The MPG is charged with finding all specified event occurrences that take place within a given time interval (or “pass”), without any special clues from operators as to when they may occur, for the entire spectrum of missions undertaken by the DSN. For each event type, the event metric function is a known form that can be computed for any instant within the interval. A method has been created for a mathematical root finder capable of finding all roots of an arbitrary continuous function within a given interval, subject to very lenient, parameterized assumptions. One assumption is that adjacent roots are separated at least by a given amount, xGuard. Any point whose function value is less than εf in magnitude is considered to be a root, and the function values at distances x
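The approach described — locate every zero crossing in an interval under a minimum-separation assumption — can be sketched as a grid scan finer than xGuard followed by bisection on each bracketed sign change (a simplified illustration, not the MPG implementation):

```python
import math

def find_all_roots(f, a, b, x_guard, tol=1e-10):
    """Find every root of a continuous f on [a, b], assuming adjacent roots
    are at least x_guard apart: scan a grid finer than x_guard, then bisect
    each interval on which f changes sign."""
    n = max(int(math.ceil((b - a) / x_guard)) * 4, 4)  # oversample the guard
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    roots = []
    for x0, x1 in zip(xs, xs[1:]):
        f0, f1 = f(x0), f(x1)
        if f0 == 0.0:
            root = x0
        elif f0 * f1 < 0:
            lo, hi = x0, x1
            while hi - lo > tol:                 # bisection refinement
                mid = 0.5 * (lo + hi)
                if f(lo) * f(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            root = 0.5 * (lo + hi)
        else:
            continue
        if not roots or root - roots[-1] >= x_guard:  # enforce separation
            roots.append(root)
    return roots

roots = find_all_roots(math.cos, 0.0, 10.0, x_guard=0.5)
# cos has zeros at pi/2, 3*pi/2, 5*pi/2 in [0, 10]
```

Roots of even multiplicity (touching zeros) need the additional near-zero magnitude test mentioned in the abstract; this sketch handles sign changes only.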
Fusani, Leonida; Cardinale, Massimiliano; Carere, Claudio; Goymann, Wolfgang
2009-06-23
During migration, a number of bird species rely on stopover sites for resting and feeding before and after crossing ecological barriers such as deserts or seas. The duration of a stopover depends on the combined effects of environmental factors, endogenous programmes and physiological conditions. Previous studies indicated that lean birds prolong their refuelling stopover compared with fat birds; however, the quantitative relationship between physiological conditions and stopover behaviour has not been studied yet. Here, we tested in a large sample of free-living birds of three European passerines (whinchats, Saxicola rubetra, garden warblers, Sylvia borin and whitethroats, Sylvia communis) whether the amount of migratory restlessness (Zugunruhe) shown at a stopover site depends on physiological conditions. An integrated measure of condition based on body mass, amount of subcutaneous fat and thickness of pectoral muscles strongly predicted the intensity of Zugunruhe shown in recording cages in the night following capture. These results provide novel and robust quantitative evidence in support of the hypothesis that the amount of energy reserves plays a major role in determining the stopover duration in migratory birds.
Fusani, Leonida; Cardinale, Massimiliano; Carere, Claudio; Goymann, Wolfgang
2009-01-01
During migration, a number of bird species rely on stopover sites for resting and feeding before and after crossing ecological barriers such as deserts or seas. The duration of a stopover depends on the combined effects of environmental factors, endogenous programmes and physiological conditions. Previous studies indicated that lean birds prolong their refuelling stopover compared with fat birds; however, the quantitative relationship between physiological conditions and stopover behaviour has not been studied yet. Here, we tested in a large sample of free-living birds of three European passerines (whinchats, Saxicola rubetra, garden warblers, Sylvia borin and whitethroats, Sylvia communis) whether the amount of migratory restlessness (Zugunruhe) shown at a stopover site depends on physiological conditions. An integrated measure of condition based on body mass, amount of subcutaneous fat and thickness of pectoral muscles strongly predicted the intensity of Zugunruhe shown in recording cages in the night following capture. These results provide novel and robust quantitative evidence in support of the hypothesis that the amount of energy reserves plays a major role in determining the stopover duration in migratory birds. PMID:19324648
NASA Astrophysics Data System (ADS)
Mo, Ruping; Joe, Paul I.; Doyle, Chris; Whitfield, Paul H.
2014-01-01
A brief review of the anomalous weather conditions during the Vancouver 2010 Winter Olympic and Paralympic Games and the efforts to predict these anomalies based on some preceding El Niño-Southern Oscillation (ENSO) signals are presented. It is shown that the Olympic Games were held under extraordinarily warm conditions in February 2010, with monthly mean temperature anomalies of +2.2 °C in Vancouver and +2.8 °C in Whistler, ranking respectively as the highest and the second highest in the past 30 years (1981-2010). The warm conditions continued, but became less anomalous, in March 2010 for the Paralympic Games. While the precipitation amounts in the area remained near normal through this winter, the lack of snow due to warm conditions created numerous media headlines and practical problems for the alpine competitions. A statistical model was developed on the premise that February and March temperatures in the Vancouver area could be predicted using an ENSO signal with considerable lead time. This model successfully predicted the warmer-than-normal, lower-snowfall conditions for the Vancouver 2010 Winter Olympics and Paralympics.
NASA Astrophysics Data System (ADS)
Abhilash, S.; Sahai, A. K.; Borah, N.; Chattopadhyay, R.; Joseph, S.; Sharmila, S.; De, S.; Goswami, B. N.; Kumar, Arun
2014-05-01
An ensemble prediction system (EPS) is devised for the extended range prediction (ERP) of monsoon intraseasonal oscillations (MISO) of Indian summer monsoon (ISM) using National Centers for Environmental Prediction Climate Forecast System model version 2 at T126 horizontal resolution. The EPS is formulated by generating 11 member ensembles through the perturbation of atmospheric initial conditions. The hindcast experiments were conducted at every 5-day interval for 45 days lead time starting from 16th May to 28th September during 2001-2012. The general simulation of ISM characteristics and the ERP skill of the proposed EPS at pentad mean scale are evaluated in the present study. Though the EPS underestimates both the mean and variability of ISM rainfall, it simulates the northward propagation of MISO reasonably well. It is found that the signal-to-noise ratio of the forecasted rainfall becomes unity by about 18 days. The potential predictability error of the forecasted rainfall saturates by about 25 days. Though useful deterministic forecasts could be generated up to 2nd pentad lead, significant correlations are found even up to 4th pentad lead. The skill in predicting large-scale MISO, which is assessed by comparing the predicted and observed MISO indices, is found to be ~17 days. It is noted that the prediction skill of actual rainfall is closely related to the prediction of large-scale MISO amplitude as well as the initial conditions related to the different phases of MISO. An analysis of categorical prediction skills reveals that break is more skillfully predicted, followed by active and then normal. The categorical probability skill scores suggest that useful probabilistic forecasts could be generated even up to 4th pentad lead.
NASA Astrophysics Data System (ADS)
Williams, T. R. N.; Baxter, S.; Hartley, L.; Appleyard, P.; Koskinen, L.; Vanhanarkaus, O.; Selroos, J. O.; Munier, R.
2017-12-01
Discrete fracture network (DFN) models provide a natural analysis framework for rock conditions where flow is predominately through a series of connected discrete features. Mechanistic models to predict the structural patterns of networks are generally intractable due to inherent uncertainties (e.g. deformation history) and as such fracture characterisation typically involves empirical descriptions of fracture statistics for location, intensity, orientation, size, aperture etc. from analyses of field data. These DFN models are used to make probabilistic predictions of likely flow or solute transport conditions for a range of applications in underground resource and construction projects. However, there are many instances when the volumes in which predictions are most valuable are close to data sources. For example, in the disposal of hazardous materials such as radioactive waste, accurate predictions of flow-rates and network connectivity around disposal areas are required for long-term safety evaluation. The problem at hand is thus: how can probabilistic predictions be conditioned on local-scale measurements? This presentation demonstrates conditioning of a DFN model based on the current structural and hydraulic characterisation of the Demonstration Area at the ONKALO underground research facility. The conditioned realisations honour (to a required level of similarity) the locations, orientations and trace lengths of fractures mapped on the surfaces of the nearby ONKALO tunnels and pilot drillholes. Other data used as constraints include measurements from hydraulic injection tests performed in pilot drillholes and inflows to the subsequently reamed experimental deposition holes. Numerical simulations using this suite of conditioned DFN models provides a series of prediction-outcome exercises detailing the reliability of the DFN model to make local-scale predictions of measured geometric and hydraulic properties of the fracture system; and provides an understanding
NASA Astrophysics Data System (ADS)
Ulizio, T. P.; Bilbrey, C.; Stoyanoff, N.; Dixon, J. L.
2015-12-01
upper bound of the 99% confidence interval at all SNOTEL sites, while precipitation accumulated in the form of rain is within the expected average, indicating an anomalously snowy year and average amounts of rainfall during the same water year. This information can be used to better predict circumstances leading to slope failures in northern latitude alpine landscapes.
Changes in Pilot Behavior with Predictive System Status Information
NASA Technical Reports Server (NTRS)
Trujillo, Anna C.
1998-01-01
Research has shown a strong pilot preference for predictive information of aircraft system status in the flight deck. However, changes in pilot behavior associated with using this predictive information have not been ascertained. The study described here quantified these changes using three types of predictive information (none, whether a parameter was changing abnormally, and the time for a parameter to reach an alert range) and three initial time intervals until a parameter alert range was reached (ITIs) (1 minute, 5 minutes, and 15 minutes). With predictive information, subjects accomplished most of their tasks before an alert occurred. Subjects organized the times at which they did their tasks by locus-of-control with no predictive information and for the 1-minute ITI, and by aviate-navigate-communicate for the time-for-a-parameter-to-reach-an-alert-range and 15-minute conditions. Overall, predictive information and the longer ITIs moved subjects toward performing tasks before the alert actually occurred and made them more mission oriented, as indicated by their task grouping of aviate-navigate-communicate.
Predictive sensor method and apparatus
NASA Technical Reports Server (NTRS)
Nail, William L. (Inventor); Koger, Thomas L. (Inventor); Cambridge, Vivien (Inventor)
1990-01-01
A predictive algorithm is used to determine, in near real time, the steady-state response of a slow-responding sensor, such as a hydrogen gas sensor of the type which produces an output current proportional to the partial pressure of the hydrogen present. A microprocessor connected to the sensor samples the sensor output at small regular time intervals and predicts the steady-state response of the sensor to a perturbation in the parameter being sensed, based on the beginning and end samples of the sensor output for the current sample time interval.
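The patented algorithm itself is not given in the abstract. A minimal sketch, assuming a first-order (exponential) sensor response and three equally spaced samples rather than the two described, is the classic three-point extrapolation to steady state; all names and the sampling scheme below are assumptions.

```python
import math

def predict_steady_state(y0, y1, y2):
    """Extrapolate the steady-state output of a first-order sensor from
    three equally spaced samples of its transient response.

    For y(t) = A*(1 - exp(-t/tau)) sampled at equal time steps, the
    successive distances from steady state form a geometric series,
    giving A = (y1**2 - y0*y2) / (2*y1 - y0 - y2).
    """
    denom = 2.0 * y1 - y0 - y2
    if abs(denom) < 1e-12:
        raise ValueError("response already flat; cannot extrapolate")
    return (y1 ** 2 - y0 * y2) / denom

# Simulated slow sensor: steady state A = 5.0, time constant tau = 2.0 s,
# sampled at t = 0, 1, 2 s -- long before the sensor settles.
A, tau = 5.0, 2.0
samples = [A * (1.0 - math.exp(-t / tau)) for t in (0.0, 1.0, 2.0)]
print(predict_steady_state(*samples))  # recovers A = 5.0
```

For a noiseless exponential the extrapolation is exact; with measurement noise the denominator check matters, since near steady state the formula becomes ill-conditioned.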
Predicting Secchi disk depth from average beam attenuation in a deep, ultra-clear lake
Larson, G.L.; Hoffman, R.L.; Hargreaves, B.R.; Collier, R.W.
2007-01-01
We addressed potential sources of error in estimating the water clarity of mountain lakes by investigating the use of beam transmissometer measurements to estimate Secchi disk depth. The optical properties Secchi disk depth (SD) and beam transmissometer attenuation (BA) were measured in Crater Lake (Crater Lake National Park, Oregon, USA) at a designated sampling station near the maximum depth of the lake. A standard 20 cm black and white disk was used to measure SD. The transmissometer light source had a nearly monochromatic wavelength of 660 nm and a path length of 25 cm. We created a SD prediction model by regression of the inverse SD of 13 measurements recorded on days when environmental conditions were acceptable for disk deployment with BA averaged over the same depth range as the measured SD. The relationship between inverse SD and averaged BA was significant and the average 95% confidence interval for predicted SD relative to the measured SD was ±1.6 m (range = -4.6 to 5.5 m) or ±5.0%. Eleven additional sample dates tested the accuracy of the predictive model. The average 95% confidence interval for these sample dates was ±0.7 m (range = -3.5 to 3.8 m) or ±2.2%. The 1996-2000 time-series means for measured and predicted SD varied by 0.1 m, and the medians varied by 0.5 m. The time-series mean annual measured and predicted SDs also varied little, with intra-annual differences between measured and predicted mean annual SD ranging from -2.1 to 0.1 m. The results demonstrated that this prediction model reliably estimated Secchi disk depths and can be used to significantly expand optical observations in an environment where the conditions for standardized SD deployments are limited. © 2007 Springer Science+Business Media B.V.
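The regression form used in the study (inverse Secchi depth against depth-averaged beam attenuation) is easy to reproduce. A minimal sketch with hypothetical calibration numbers (the actual Crater Lake data are not given in the abstract) might look like:

```python
import numpy as np

# Hypothetical calibration pairs (NOT the Crater Lake measurements):
# depth-averaged beam attenuation BA (1/m) and measured Secchi depth SD (m)
ba = np.array([0.10, 0.12, 0.15, 0.18, 0.22])
sd = np.array([33.0, 29.5, 25.0, 21.5, 18.0])

# Fit the model form used in the study: 1/SD = a + b * BA (least squares)
b, a = np.polyfit(ba, 1.0 / sd, 1)

def predict_sd(ba_mean):
    """Predict Secchi depth (m) from average beam attenuation (1/m)."""
    return 1.0 / (a + b * ba_mean)

print(predict_sd(0.15))  # clearer water (lower BA) gives a deeper prediction
```

Regressing the inverse of SD keeps the model linear in BA while respecting the physically expected reciprocal relationship between attenuation and visibility depth.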
Robust Confidence Interval for a Ratio of Standard Deviations
ERIC Educational Resources Information Center
Bonett, Douglas G.
2006-01-01
Comparing variability of test scores across alternate forms, test conditions, or subpopulations is a fundamental problem in psychometrics. A confidence interval for a ratio of standard deviations is proposed that performs as well as the classic method with normal distributions and performs dramatically better with nonnormal distributions. A simple…
Darrington, Richard T; Jiao, Jim
2004-04-01
Rapid and accurate stability prediction is essential to pharmaceutical formulation development. Commonly used stability prediction methods include monitoring parent drug loss at intended storage conditions or initial rate determination of degradants under accelerated conditions. Monitoring parent drug loss at the intended storage condition does not provide a rapid and accurate stability assessment because often <0.5% drug loss is all that can be observed in a realistic time frame, while the accelerated initial rate method in conjunction with extrapolation of rate constants using the Arrhenius or Eyring equations often introduces large errors in shelf-life prediction. In this study, the shelf life prediction of a model pharmaceutical preparation utilizing sensitive high-performance liquid chromatography-mass spectrometry (LC/MS) to directly quantitate degradant formation rates at the intended storage condition is proposed. This method was compared to traditional shelf life prediction approaches in terms of time required to predict shelf life and associated error in shelf life estimation. Results demonstrated that the proposed LC/MS method using initial rates analysis provided significantly improved confidence intervals for the predicted shelf life and required less overall time and effort to obtain the stability estimation compared to the other methods evaluated. Copyright 2004 Wiley-Liss, Inc. and the American Pharmacists Association.
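The initial-rates idea (fit the early, effectively linear degradant growth at the intended storage condition and extrapolate to the specification limit) can be sketched as follows; the time points, degradant levels, and 0.5% limit are illustrative assumptions, not the study's data:

```python
import numpy as np

# Hypothetical early-time degradant data at the intended storage condition
# (percent degradant vs. months); the real study needed LC/MS sensitivity
# to quantitate levels this low -- these numbers are illustrative only.
t = np.array([0.0, 1.0, 2.0, 3.0])            # months
deg = np.array([0.000, 0.011, 0.019, 0.031])  # % degradant formed

# Initial-rates analysis: degradant formation is ~linear at early times,
# so fit a pseudo-zero-order rate k (%/month) and extrapolate to the limit.
k, intercept = np.polyfit(t, deg, 1)
spec_limit = 0.5  # % degradant allowed by specification (assumed)
shelf_life_months = (spec_limit - intercept) / k
print(f"rate = {k:.4f} %/month, shelf life = {shelf_life_months:.1f} months")
```

The advantage the abstract describes is that the degradant signal grows from zero, so a short study resolves the rate far more precisely than watching for a fractional loss of the (large) parent drug signal.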
Comparison Between Vortices Created and Evolving During Fixed and Dynamic Solar Wind Conditions
NASA Technical Reports Server (NTRS)
Collado-Vega, Yaireska M.; Kessel, R. L.; Sibeck, David Gary; Kalb, V. L.; Boller, R. A.; Rastaetter, L.
2013-01-01
We employ Magnetohydrodynamic (MHD) simulations to examine the creation and evolution of plasma vortices within the Earth's magnetosphere for steady solar wind plasma conditions. Very few vortices form during intervals of such solar wind conditions. Those that do remain in fixed positions for long periods (often hours) and exhibit rotation axes that point primarily in the x or y direction, parallel (or antiparallel) to the local magnetospheric magnetic field direction. Occasionally, the orientation of the axes rotates from the x direction to another direction. We compare our results with simulations previously done for unsteady solar wind conditions. By contrast, the vortices that form during intervals of varying solar wind conditions exhibit durations ranging from seconds (in the case of those with axes in the x or y direction) to minutes (in the case of those with axes in the z direction) and convect antisunward. The local-time dependent sense of rotation seen in these previously reported vortices suggests an interpretation in terms of the Kelvin-Helmholtz instability. For steady conditions, the biggest vortices developed on the dayside (about 6 RE in diameter), had their rotation axes aligned with the y direction and had the longest durations. We attribute these vortices to the flows set up by reconnection on the high-latitude magnetopause during intervals of northward Interplanetary Magnetic Field (IMF) orientation. This is the first time that vortices due to high-latitude reconnection have been visualized. The model also successfully predicts the principal characteristics of previously reported plasma vortices within the magnetosphere, namely their dimensions, flow velocities, and durations.
Commentary on Holmes et al. (2007): resolving the debate on when extinction risk is predictable.
Ellner, Stephen P; Holmes, Elizabeth E
2008-08-01
We reconcile the findings of Holmes et al. (Ecology Letters, 10, 2007, 1182) that 95% confidence intervals for quasi-extinction risk were narrow for many vertebrates of conservation concern, with previous theory predicting wide confidence intervals. We extend previous theory, concerning the precision of quasi-extinction estimates as a function of population dynamic parameters, prediction intervals and quasi-extinction thresholds, and provide an approximation that specifies the prediction interval and threshold combinations where quasi-extinction estimates are precise (vs. imprecise). This allows PVA practitioners to define the prediction interval and threshold regions of safety (low risk with high confidence), danger (high risk with high confidence), and uncertainty.
Identifying black swans in NextGen: predicting human performance in off-nominal conditions.
Wickens, Christopher D; Hooey, Becky L; Gore, Brian F; Sebok, Angelia; Koenicke, Corey S
2009-10-01
The objective is to validate a computational model of visual attention against empirical data--derived from a meta-analysis--of pilots' failure to notice safety-critical unexpected events. Many aircraft accidents have resulted, in part, because of failure to notice nonsalient unexpected events outside of foveal vision, illustrating the phenomenon of change blindness. A model of visual noticing, N-SEEV (noticing-salience, expectancy, effort, and value), was developed to predict these failures. First, 25 studies that reported objective data on miss rate for unexpected events in high-fidelity cockpit simulations were identified, and their miss rate data pooled across five variables (phase of flight, event expectancy, event location, presence of a head-up display, and presence of a highway-in-the-sky display). Second, the parameters of the N-SEEV model were tailored to mimic these dichotomies. The N-SEEV model output predicted variance in the obtained miss rate (r = .73). The individual miss rates of all six dichotomous conditions were predicted within 14%, and four of these were predicted within 7%. The N-SEEV model, developed on the basis of an independent data set, was able to successfully predict variance in this safety-critical measure of pilot response to abnormal circumstances, as collected from the literature. As new technology and procedures are envisioned for the future airspace, it is important to predict if these may compromise safety in terms of pilots' failing to notice unexpected events. Computational models such as N-SEEV support cost-effective means of making such predictions.
Investigating the Genetic Architecture of the PR Interval Using Clinical Phenotypes.
Mosley, Jonathan D; Shoemaker, M Benjamin; Wells, Quinn S; Darbar, Dawood; Shaffer, Christian M; Edwards, Todd L; Bastarache, Lisa; McCarty, Catherine A; Thompson, Will; Chute, Christopher G; Jarvik, Gail P; Crosslin, David R; Larson, Eric B; Kullo, Iftikhar J; Pacheco, Jennifer A; Peissig, Peggy L; Brilliant, Murray H; Linneman, James G; Witte, John S; Denny, Josh C; Roden, Dan M
2017-04-01
One potential use for the PR interval is as a biomarker of disease risk. We hypothesized that quantifying the shared genetic architectures of the PR interval and a set of clinical phenotypes would identify genetic mechanisms contributing to PR variability and identify diseases associated with a genetic predictor of PR variability. We used ECG measurements from the ARIC study (Atherosclerosis Risk in Communities; n=6731 subjects) and 63 genetically modulated diseases from the eMERGE network (Electronic Medical Records and Genomics; n=12,978). We measured pairwise genetic correlations (rG) between PR phenotypes (PR interval, PR segment, P-wave duration) and each of the 63 phenotypes. The PR segment was genetically correlated with atrial fibrillation (rG=-0.88; P=0.0009). An analysis of metabolic phenotypes in ARIC also showed that the P wave was genetically correlated with waist circumference (rG=0.47; P=0.02). A genetically predicted PR interval phenotype based on 645,714 single-nucleotide polymorphisms was associated with atrial fibrillation (odds ratio=0.89 per SD change; 95% confidence interval, 0.83-0.95; P=0.0006). The differing pattern of associations among the PR phenotypes is consistent with analyses that show that the genetic correlation between the P wave and PR segment was not significantly different from 0 (rG=-0.03 [0.16]). The genetic architecture of the PR interval comprises modulators of atrial fibrillation risk and obesity. © 2017 American Heart Association, Inc.
NASA Astrophysics Data System (ADS)
Molavi Tabrizi, Amirhossein; Goossens, Spencer; Mehdizadeh Rahimi, Ali; Knepley, Matthew; Bardhan, Jaydeep P.
2017-03-01
We demonstrate that with two small modifications, the popular dielectric continuum model is capable of predicting, with high accuracy, ion solvation thermodynamics (Gibbs free energies, entropies, and heat capacities) in numerous polar solvents. We are also able to predict ion solvation free energies in water-co-solvent mixtures over available concentration series. The first modification to the classical dielectric Poisson model is a perturbation of the macroscopic dielectric-flux interface condition at the solute-solvent interface: we add a nonlinear function of the local electric field, giving what we have called a solvation-layer interface condition (SLIC). The second modification is including the microscopic interface potential (static potential) in our model. We show that the resulting model exhibits high accuracy without the need for fitting solute atom radii in a state-dependent fashion. Compared to experimental results in nine water-co-solvent mixtures, SLIC predicts transfer free energies to within 2.5 kJ/mol. The co-solvents include both protic and aprotic species, as well as biologically relevant denaturants such as urea and dimethylformamide. Furthermore, our results indicate that the interface potential is essential to reproduce entropies and heat capacities. These and previous tests of the SLIC model indicate that it is a promising dielectric continuum model for accurate predictions in a wide range of conditions.
Interval timing in genetically modified mice: a simple paradigm
Balci, F.; Papachristos, E. B.; Gallistel, C. R.; Brunner, D.; Gibson, J.; Shumyatsky, G. P.
2009-01-01
We describe a behavioral screen for the quantitative study of interval timing and interval memory in mice. Mice learn to switch from a short-latency feeding station to a long-latency station when the short latency has passed without a feeding. The psychometric function is the cumulative distribution of switch latencies. Its median measures timing accuracy and its interquartile interval measures timing precision. Next, using this behavioral paradigm, we have examined mice with a gene knockout of the receptor for gastrin-releasing peptide that show enhanced (i.e. prolonged) freezing in fear conditioning. We have tested the hypothesis that the mutants freeze longer because they are more uncertain than wild types about when to expect the electric shock. The knockouts however show normal accuracy and precision in timing, so we have rejected this alternative hypothesis. Last, we conduct the pharmacological validation of our behavioral screen using D-amphetamine and methamphetamine. We suggest including the analysis of interval timing and temporal memory in tests of genetically modified mice for learning and memory and argue that our paradigm allows this to be done simply and efficiently. PMID:17696995
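The two summary statistics of the screen (median switch latency for timing accuracy, interquartile interval for timing precision) are straightforward to compute; a sketch with hypothetical latencies:

```python
import numpy as np

def timing_summary(switch_latencies):
    """Summarize interval timing from switch latencies, as in the screen:
    the median of the switch-latency distribution measures timing accuracy,
    and its interquartile interval measures timing precision."""
    lat = np.asarray(switch_latencies, dtype=float)
    median = np.percentile(lat, 50)
    iqr = np.percentile(lat, 75) - np.percentile(lat, 25)
    return median, iqr

# Hypothetical switch latencies (s) for a short-latency / long-latency
# station schedule; not data from the study.
latencies = [4.8, 5.1, 5.5, 5.9, 6.0, 6.2, 6.6, 7.1]
med, iqr = timing_summary(latencies)
print(med, iqr)
```

A mouse that times accurately has a median near the optimal switch point between the two schedules; a mouse that times precisely has a narrow interquartile interval regardless of where its median falls.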
Shulman, Eric; Aagaard, Philip; Kargoli, Faraj; Hoch, Ethan; Zheng, Laura; Di Biase, Luigi; Fisher, John; Gross, Jay; Kim, Soo; Ferrick, Kevin; Krumerman, Andrew
2015-01-01
PR interval prolongation on electrocardiogram (ECG) increases the risk of atrial fibrillation (AF). Non-Hispanic Whites are at higher risk of AF compared to African Americans and Hispanics. However, it remains unknown whether prolongation of the PR interval for the development of AF varies by race/ethnicity. Therefore, we determined whether race affects the PR interval length's ability to predict AF and whether the commonly used criterion of 200 ms in AF prediction models can continue to be used for non-White cohorts. This is a retrospective epidemiological study of consecutive inpatients and outpatients. An ECG database was initially interrogated. Patients were included if their initial ECG demonstrated sinus rhythm, they had two or more electrocardiograms, and they declared a race and/or ethnicity as non-Hispanic White, African American or Hispanic. Development of AF was stratified by race/ethnicity along varying PR intervals. Cox models controlled for age, gender, race/ethnicity, systolic blood pressure, BMI, QRS, QTc, heart rate, murmur, treatment for hypertension, heart failure and use of AV nodal blocking agents to assess the PR interval's predictive ability for development of AF. 50,870 patients met inclusion criteria, of which 5,199 developed AF over 3.72 mean years of follow-up. When the PR interval was separated by quantile, prolongation of the PR interval to predict AF first became significant in Hispanic and African Americans at the 92.5th quantile of 196-201 ms (HR: 1.42, 95% CI: 1.09-1.86, p=0.01; HR: 1.32, 95% CI: 1.07-1.64, p=0.01, respectively), then in non-Hispanic Whites at the 95th quantile at 203-212 ms (HR: 1.24, 95% CI: 1.24-1.53, p=0.04). For those with a PR interval above 200 ms, African Americans had a lower risk than non-Hispanic Whites of developing AF (HR: 0.80, 95% CI: 0.64-0.95, p=0.012); however, no significant difference was demonstrated in Hispanics. This is the first study to validate a PR interval value of 200 ms as a criterion in African Americans and
Experimenting with musical intervals
NASA Astrophysics Data System (ADS)
Lo Presto, Michael C.
2003-07-01
When two tuning forks of different frequency are sounded simultaneously the result is a complex wave with a repetition frequency that is the fundamental of the harmonic series to which both frequencies belong. The ear perceives this 'musical interval' as a single musical pitch with a sound quality produced by the harmonic spectrum responsible for the waveform. This waveform can be captured and displayed with data collection hardware and software. The fundamental frequency can then be calculated and compared with what would be expected from the frequencies of the tuning forks. Also, graphing software can be used to determine equations for the waveforms and predict their shapes. This experiment could be used in an introductory physics or musical acoustics course as a practical lesson in superposition of waves, basic Fourier series and the relationship between some of the ear's subjective perceptions of sound and the physical properties of the waves that cause them.
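The repetition frequency described here is the fundamental of the harmonic series containing both fork frequencies, i.e. their greatest common divisor (assuming integer-valued frequencies in Hz):

```python
from math import gcd

def interval_fundamental(f1_hz, f2_hz):
    """Repetition frequency of two simultaneously sounded tuning forks:
    the fundamental of the harmonic series to which both frequencies
    belong, i.e. the greatest common divisor of the two frequencies."""
    return gcd(int(f1_hz), int(f2_hz))

# A perfect fifth: 384 Hz and 256 Hz are the 3rd and 2nd harmonics
# of a 128 Hz fundamental.
print(interval_fundamental(384, 256))  # → 128
```

This is the quantity students can compare against the period of the captured waveform: the complex wave repeats once every 1/128 s in the example above.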
Stark, Zornitza; Wallace, Jane; Gillam, Lynn; Burgess, Matthew; Delatycki, Martin B
2016-10-01
Predictive genetic testing for a neurodegenerative condition in one individual in a family may have implications for other family members, in that it can reveal their genetic status. Herein a complex clinical case is explored where the testing wish of one family member was in direct conflict to that of another. The son of a person at 50% risk of an autosomal dominant neurodegenerative condition requested testing to reveal his genetic status. The main reason for the request was if he had the familial mutation, he and his partner planned to utilise preimplantation genetic diagnosis to prevent his offspring having the condition. His at-risk parent was clear that if they found out they had the mutation, they would commit suicide. We assess the potential benefits and harms from acceding to or denying such a request and present an approach to balancing competing rights of individuals within families at risk of late-onset genetic conditions, where family members have irreconcilable differences with respect to predictive testing. We argue that while it may not be possible to completely avoid harm in these situations, it is important to consider the magnitude of risks, and make every effort to limit the potential for adverse outcomes. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Experimental and numerical investigation of low-drag intervals in turbulent boundary layer
NASA Astrophysics Data System (ADS)
Park, Jae Sung; Ryu, Sangjin; Lee, Jin
2017-11-01
It has been widely observed that there is substantial intermittency between high- and low-drag states in wall-bounded shear flows. Recent experimental and computational studies in a turbulent channel flow have identified low-drag time intervals based on wall shear stress measurements. These intervals are a weak turbulence state characterized by low-speed streaks and weak streamwise vortices. In this study, the spatiotemporal dynamics of low-drag intervals in a turbulent boundary layer is investigated using experiments and simulations. The low-drag intervals are monitored based on wall shear stress measurements. We show that, near the wall, conditionally sampled mean velocity profiles during low-drag intervals closely approach that of a low-drag nonlinear traveling wave solution as well as the so-called maximum drag reduction asymptote. This observation is consistent with the channel flow studies. Interestingly, large spatial stretching of the streaks in the wall-normal direction is very evident during low-drag intervals. Lastly, a possible connection between the mean velocity profile during the low-drag intervals and the Blasius profile will be discussed. This work was supported by startup funds from the University of Nebraska-Lincoln.
Fairley, Jillian A; Sejdić, Ervin; Chau, Tom
2010-02-26
Stride interval persistence, a term used to describe the correlation structure of stride interval time series, is thought to provide insight into neuromotor control, though its exact clinical meaning has not yet been realized. Since human locomotion is shaped by energy-efficient movements, it has been hypothesized that stride interval dynamics and energy expenditure may be inherently tied, both having demonstrated similar sensitivities to age, disease, and pace-constrained walking. This study tested for correlations between stride interval persistence and measures of energy expenditure, including mass-specific gross oxygen consumption per minute (VO₂ per minute), mass-specific gross oxygen cost per meter (VO₂ per meter) and heart rate (HR). Metabolic and stride interval data were collected from 30 asymptomatic children who completed one 10-minute walking trial under each of the following conditions: (i) overground walking, (ii) hands-free treadmill walking, and (iii) handrail-supported treadmill walking. Stride interval persistence was not significantly correlated with VO₂ per minute (p > 0.32), VO₂ per meter (p > 0.18) or HR (p > 0.56). No simple linear dependence exists between stride interval persistence and measures of gross energy expenditure in asymptomatic children when walking overground and on a treadmill.
Scaling and memory in volatility return intervals in financial markets
Yamasaki, Kazuko; Muchnik, Lev; Havlin, Shlomo; Bunde, Armin; Stanley, H. Eugene
2005-01-01
For both stock and currency markets, we study the return intervals τ between the daily volatilities of the price changes that are above a certain threshold q. We find that the distribution function P_q(τ) scales with the mean return interval τ̄ as P_q(τ) = τ̄^(-1) f(τ/τ̄). The scaling function f(x) is similar in form for all seven stocks and for all seven currency databases analyzed, and f(x) is consistent with a power-law form, f(x) ∼ x^(-γ) with γ ≈ 2. We also quantify how the conditional distribution P_q(τ|τ0) depends on the previous return interval τ0 and find that small (or large) return intervals are more likely to be followed by small (or large) return intervals. This "clustering" of the volatility return intervals is a previously unrecognized phenomenon that we relate to the long-term correlations known to be present in the volatility. PMID:15980152
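Extracting return intervals from a volatility series is simple; a sketch with a toy series (rescaling each interval by the mean τ̄ is what collapses the distributions P_q(τ) onto the single scaling function f):

```python
import numpy as np

def return_intervals(volatility, q):
    """Times between successive exceedances of threshold q, i.e. the
    return intervals tau studied in the scaling analysis."""
    exceed = np.flatnonzero(np.asarray(volatility) > q)
    return np.diff(exceed)

# Toy daily-volatility series (arbitrary units) and threshold q
vol = np.array([0.2, 1.5, 0.3, 0.1, 1.8, 2.0, 0.4, 1.6])
tau = return_intervals(vol, q=1.0)
print(tau.tolist(), tau.mean())  # intervals between exceedance days
```

On real data one would histogram τ/τ̄ for several thresholds q and check that the rescaled distributions coincide, which is the scaling collapse the abstract reports.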
NASA Astrophysics Data System (ADS)
Mirbaha, Babak; Saffarzadeh, Mahmoud; AmirHossein Beheshty, Seyed; Aniran, MirMoosa; Yazdani, Mirbahador; Shirini, Bahram
2017-10-01
Analysis of vehicle speed under different weather conditions and traffic characteristics is very useful in traffic planning. Since weather conditions and traffic characteristics vary every day, prediction of average speed can be useful in traffic management plans. In this study, traffic and weather data for a two-lane highway located in northwest Iran were selected for analysis. After merging the traffic and weather data, a linear regression model was calibrated for speed prediction using STATA 12.1 statistical and data analysis software. Vehicle flow, percentage of heavy vehicles, vehicle flow in the opposing lane, percentage of heavy vehicles in the opposing lane, rainfall (mm), snowfall, and maximum daily wind speed above 13 m/s were found to be significant variables in the model. Results showed that vehicle flow and heavy vehicle percentage had positive coefficients, indicating that as these variables increase, the average vehicle speed in every weather condition also increases. Vehicle flow in the opposing lane, percentage of heavy vehicles in the opposing lane, rainfall (mm), snowfall, and maximum daily wind speed above 13 m/s had negative coefficients, indicating that as these variables increase, the average vehicle speed decreases.
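The described OLS speed model can be sketched on synthetic data; every number below (coefficient sizes, variable ranges, noise level) is an assumption chosen only to reproduce the coefficient signs reported in the abstract, not the study's calibration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical predictors mirroring the significant variables:
# flow (veh/h), heavy-vehicle %, opposing flow, opposing heavy-vehicle %,
# rainfall (mm), snowfall indicator, wind > 13 m/s indicator
X = np.column_stack([
    rng.uniform(100, 900, n),
    rng.uniform(5, 30, n),
    rng.uniform(100, 900, n),
    rng.uniform(5, 30, n),
    rng.uniform(0, 10, n),
    rng.integers(0, 2, n),
    rng.integers(0, 2, n),
])

# Assumed "true" coefficients with the signs reported in the abstract:
# positive for own-lane flow and heavy-vehicle %, negative for the rest.
beta = np.array([0.004, 0.05, -0.003, -0.08, -0.6, -5.0, -3.0])
speed = 80.0 + X @ beta + rng.normal(0, 0.5, n)  # km/h, with noise

# Ordinary least squares fit, as in the paper's linear regression model
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, speed, rcond=None)
print(np.round(coef, 3))  # intercept followed by the seven coefficients
```

The fitted signs match the generating ones, which is the sanity check one would also apply to the calibrated model: weather and opposing-lane variables should depress predicted speed.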
Psychological and behavioral responses to interval and continuous exercise.
Stork, Matthew J; Gibala, Martin J; Martin Ginis, Kathleen A
2018-05-16
To compare psychological responses to, and preferences for, moderate-intensity continuous training (MICT), high-intensity interval training (HIIT), and sprint interval training (SIT) among inactive adults; and to investigate the relationships between affect, enjoyment, exercise preferences, and subsequent exercise behavior over a 4-wk follow-up period. Thirty inactive men and women (21.23±3.81 y), inexperienced with HIIT or SIT, completed three trials of cycle ergometer exercise in random order on separate days: MICT (45 min continuous; ~70-75% of heart rate maximum (HRmax)); HIIT (10x1-min bouts at ~85-90% HRmax with 1-min recovery periods); and SIT (3x20-s "all-out" sprints with 2-min recovery periods). Perceived exertion (RPE), affect, and arousal were measured throughout the trials, and enjoyment was measured post-exercise. Participants rank-ordered the protocols (#1-3) according to preference and logged their exercise over a 4-week follow-up. Despite elevated HR, RPE, and arousal during work periods (ps<0.05), and negative affect during HIIT and SIT, enjoyment of and preferences for MICT, HIIT, and SIT were similar (ps>0.05). In-task affect was predictive of post-exercise enjoyment for each type of exercise (rs=0.32 to 0.47; ps<0.05). In-task affect and post-exercise enjoyment predicted preferences for HIIT and SIT (rs=-0.34 to -0.61; ps<0.05), but not for MICT (ps>0.05). Over the follow-up, participants completed more MICT (M=6.11±4.12) than SIT sessions (M=1.39±1.85; p<0.01, d=1.34). Although participants tended to complete more sessions of MICT than HIIT (M=3.54±4.23; p=0.16, d=0.56), and more sessions of HIIT than SIT (p=0.07, d=0.60), the differences were not significant. In-task affect predicted the number of sessions of MICT (r=0.40; p<0.05), but not of HIIT or SIT (ps>0.05). This study provides new evidence that a single session of HIIT and SIT can be as enjoyable and preferable as MICT among inactive individuals and that there may be differences
Raslear, T G; Shurtleff, D; Simmons, L
1992-01-01
Killeen and Fetterman's (1988) behavioral theory of animal timing predicts that decreases in the rate of reinforcement should produce decreases in the sensitivity (A') of temporal discriminations and a decrease in miss and correct rejection rates (decrease in bias toward "long" responses). Eight rats were trained on a 10- versus 0.1-s temporal discrimination with an intertrial interval of 5 s and were subsequently tested on probe days on the same discrimination with intertrial intervals of 1, 2.5, 5, 10, or 20 s. The rate of reinforcement declined for all animals as intertrial interval increased. Although sensitivity (A') decreased with increasing intertrial interval, all rats showed an increase in bias to make long responses. PMID:1447544
Audiovisual Interval Size Estimation Is Associated with Early Musical Training.
Abel, Mary Kathryn; Li, H Charles; Russo, Frank A; Schlaug, Gottfried; Loui, Psyche
2016-01-01
Although pitch is a fundamental attribute of auditory perception, substantial individual differences exist in our ability to perceive differences in pitch. Little is known about how these individual differences in the auditory modality might affect crossmodal processes such as audiovisual perception. In this study, we asked whether individual differences in pitch perception might affect audiovisual perception, as it relates to age of onset and number of years of musical training. Fifty-seven subjects made subjective ratings of interval size when given point-light displays of audio, visual, and audiovisual stimuli of sung intervals. Audiovisual stimuli were divided into congruent and incongruent (audiovisual-mismatched) stimuli. Participants' ratings correlated strongly with interval size in audio-only, visual-only, and audiovisual-congruent conditions. In the audiovisual-incongruent condition, ratings correlated more with audio than with visual stimuli, particularly for subjects who had better pitch perception abilities and higher nonverbal IQ scores. To further investigate the effects of age of onset and length of musical training, subjects were divided into musically trained and untrained groups. Results showed that among subjects with musical training, the degree to which participants' ratings correlated with auditory interval size during incongruent audiovisual perception was correlated with both nonverbal IQ and age of onset of musical training. After partialing out nonverbal IQ, pitch discrimination thresholds were no longer associated with incongruent audio scores, whereas age of onset of musical training remained associated with incongruent audio scores. These findings invite future research on the developmental effects of musical training, particularly those relating to the process of audiovisual perception.
Predicting a contact's sensitivity to initial conditions using metrics of frictional coupling
Flicek, Robert C.; Hills, David A.; Brake, Matthew Robert W.
2016-09-29
This paper presents a method for predicting how sensitive a frictional contact’s steady-state behavior is to its initial conditions. Previous research has proven that if a contact is uncoupled, i.e. if slip displacements do not influence the contact pressure distribution, then its steady-state response is independent of initial conditions, but if the contact is coupled, the steady-state response depends on initial conditions. In this paper, two metrics for quantifying coupling in discrete frictional systems are examined. These metrics suggest that coupling is dominated by material dissimilarity due to Dundurs’ composite material parameter β when β ≥ 0.2, but geometric mismatch becomes the dominant source of coupling for smaller values of β. Based on a large set of numerical simulations with different contact geometries, material combinations, and friction coefficients, a contact’s sensitivity to initial conditions is found to be correlated with the product of the coupling metric and the friction coefficient. For cyclic shear loading, this correlation is maintained for simulations with different contact geometries, material combinations, and friction coefficients. Furthermore, for cyclic bulk loading, the correlation is only maintained when the contact edge angle is held constant.
Aagten-Murphy, David; Cappagli, Giulia; Burr, David
2014-03-01
Expert musicians are able to time their actions accurately and consistently during a musical performance. We investigated how musical expertise influences the ability to reproduce auditory intervals and how this generalises across different techniques and sensory modalities. We first compared various reproduction strategies and interval lengths, to examine the effects in general and to optimise experimental conditions for testing the effect of music, and found that the effects were robust and consistent across different paradigms. Focussing on a 'ready-set-go' paradigm, subjects reproduced time intervals drawn from distributions varying in total length (176, 352 or 704 ms) or in the number of discrete intervals within the total length (3, 5, 11 or 21 discrete intervals). Overall, musicians performed more veridically than non-musicians, and all subjects reproduced auditory-defined intervals more accurately than visually-defined intervals. However, non-musicians, particularly with visual stimuli, consistently exhibited a substantial and systematic regression towards the mean interval. When subjects judged intervals from distributions of longer total length they tended to regress more towards the mean, while the ability to discriminate between discrete intervals within the distribution had little influence on subject error. These results are consistent with a Bayesian model that minimizes reproduction errors by incorporating a central-tendency prior weighted by the subject's own temporal precision relative to the current distribution of intervals. Finally, a strong correlation was observed between duration of formal musical training and total reproduction errors in both modalities (accounting for 30% of the variance). Taken together, these results demonstrate that formal musical training improves temporal reproduction, and that this improvement transfers from audition to vision. They further demonstrate the flexibility of sensorimotor mechanisms in adapting to
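The central-tendency effect described here falls out of a simple Gaussian Bayesian observer: the reproduced interval is the posterior mean, a weighted average of the noisy measurement and the prior mean of the interval distribution. A minimal sketch (the noise values are hypothetical, chosen only to contrast a precise and an imprecise observer; this is not the paper's fitted model):

```python
import numpy as np

def reproduce(stimulus, prior_mean, sigma_sensory, sigma_prior):
    """Posterior-mean estimate for a Gaussian prior over intervals combined with
    a noisy Gaussian measurement: the larger the sensory noise relative to the
    prior spread, the more the reproduction regresses toward the prior mean."""
    w = sigma_prior**2 / (sigma_prior**2 + sigma_sensory**2)  # weight on the measurement
    return w * stimulus + (1 - w) * prior_mean

intervals = np.array([176.0, 264.0, 352.0, 440.0, 528.0])  # ms, hypothetical distribution
prior_mean = intervals.mean()
precise   = [reproduce(t, prior_mean, 20.0, 80.0) for t in intervals]  # musician-like observer
imprecise = [reproduce(t, prior_mean, 80.0, 80.0) for t in intervals]  # noisier observer
# The noisier observer's reproductions span a narrower range: stronger regression to the mean.
print(np.ptp(precise), np.ptp(imprecise))
```

This reproduces the qualitative finding: lower temporal precision (e.g. non-musicians with visual stimuli) predicts stronger regression toward the mean of the current interval distribution.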
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cattaneo, Richard; Hanna, Rabbie K.; Jacobsen, Gordon
Purpose: Adjuvant radiation therapy (RT) has been shown to improve local control in patients with endometrial carcinoma. We analyzed the impact of the time interval between hysterectomy and RT initiation in patients with endometrial carcinoma. Methods and Materials: In this institutional review board-approved study, we identified 308 patients with endometrial carcinoma who received adjuvant RT after hysterectomy. All patients had undergone hysterectomy, oophorectomy, and pelvic and para-aortic lymph node evaluation from 1988 to 2010. Patients' demographics, pathologic features, and treatments were compared. The time interval between hysterectomy and the start of RT was calculated. The effects of time interval on recurrence-free (RFS), disease-specific (DSS), and overall survival (OS) were calculated. Following univariate analysis, multivariate modeling was performed. Results: The median age and follow-up for the study cohort was 65 years and 72 months, respectively. Eighty-five percent of the patients had endometrioid carcinoma. RT was delivered with high-dose-rate brachytherapy alone (29%), pelvic RT alone (20%), or both (51%). Median time interval to start RT was 42 days (range, 21-130 days). A total of 269 patients (74%) started their RT <9 weeks after undergoing hysterectomy (group 1) and 26% started ≥9 weeks after surgery (group 2). There were a total of 43 recurrences. Tumor recurrence was significantly associated with treatment delay of ≥9 weeks, with 5-year RFS of 90% for group 1 compared to only 39% for group 2 (P<.001). On multivariate analysis, RT delay of ≥9 weeks (P<.001), presence of lymphovascular space involvement (P=.001), and higher International Federation of Gynecology and Obstetrics grade (P=.012) were independent predictors of recurrence. In addition, RT delay of ≥9 weeks was an independent significant predictor for worse DSS and OS (P=.001 and P=.01, respectively). Conclusions: Delay in administering adjuvant RT after
Li, Pengxiang; Kim, Michelle M; Doshi, Jalpa A
2010-08-20
The Centers for Medicare and Medicaid Services (CMS) has implemented the CMS-Hierarchical Condition Category (CMS-HCC) model to risk adjust Medicare capitation payments. This study intends to assess the performance of the CMS-HCC risk adjustment method and to compare it to the Charlson and Elixhauser comorbidity measures in predicting in-hospital and six-month mortality in Medicare beneficiaries. The study used the 2005-2006 Chronic Condition Data Warehouse (CCW) 5% Medicare files. The primary study sample included all community-dwelling fee-for-service Medicare beneficiaries with a hospital admission between January 1st, 2006 and June 30th, 2006. Additionally, four disease-specific samples consisting of subgroups of patients with principal diagnoses of congestive heart failure (CHF), stroke, diabetes mellitus (DM), and acute myocardial infarction (AMI) were also selected. Four analytic files were generated for each sample by extracting inpatient and/or outpatient claims for each patient. Logistic regressions were used to compare the methods. Model performance was assessed using the c-statistic, the Akaike's information criterion (AIC), the Bayesian information criterion (BIC) and their 95% confidence intervals estimated using bootstrapping. The CMS-HCC had statistically significant higher c-statistic and lower AIC and BIC values than the Charlson and Elixhauser methods in predicting in-hospital and six-month mortality across all samples in analytic files that included claims from the index hospitalization. Exclusion of claims for the index hospitalization generally led to drops in model performance across all methods with the highest drops for the CMS-HCC method. However, the CMS-HCC still performed as well or better than the other two methods. The CMS-HCC method demonstrated better performance relative to the Charlson and Elixhauser methods in predicting in-hospital and six-month mortality. The CMS-HCC model is preferred over the Charlson and Elixhauser methods
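The study's core performance measure, the c-statistic with a bootstrapped confidence interval, is straightforward to compute from model scores and outcomes. A minimal sketch on synthetic data (the risk scores and outcomes below are simulated, not CMS data; the rank-sum formulation assumes no ties in the scores):

```python
import numpy as np

rng = np.random.default_rng(0)

def c_statistic(y, score):
    """c-statistic (AUC) via the rank-sum (Mann-Whitney) formulation."""
    order = np.argsort(score)
    ranks = np.empty(len(score))
    ranks[order] = np.arange(1, len(score) + 1)
    n1 = y.sum()
    n0 = len(y) - n1
    return (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n0 * n1)

def bootstrap_ci(y, score, n_boot=1000, alpha=0.05):
    """Percentile bootstrap CI: resample patients with replacement."""
    stats = []
    n = len(y)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        if 0 < y[idx].sum() < n:  # need both outcomes in the resample
            stats.append(c_statistic(y[idx], score[idx]))
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Toy data: higher risk score -> higher mortality probability
y = np.array([0] * 60 + [1] * 40)
score = np.concatenate([rng.normal(0, 1, 60), rng.normal(1, 1, 40)])
lo, hi = bootstrap_ci(y, score)
print(c_statistic(y, score), (lo, hi))
```

Comparing methods as in the study would mean fitting one logistic regression per comorbidity measure and comparing these intervals (plus AIC/BIC from the fitted likelihoods).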
Tarone, Aaron M; Foran, David R
2008-07-01
Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.
Bravo, Fernando; Cross, Ian; Stamatakis, Emmanuel Andreas; Rohrmeier, Martin
2017-01-01
Previous neuroimaging studies have shown an increased sensory cortical response (i.e., heightened weight on sensory evidence) under higher levels of predictive uncertainty. The signal enhancement theory proposes that attention improves the quality of the stimulus representation, and therefore reduces uncertainty by increasing the gain of the sensory signal. The present study employed functional magnetic resonance imaging (fMRI) to investigate the neural correlates for ambiguous valence inferences signaled by auditory information within an emotion recognition paradigm. Participants categorized sound stimuli of three distinct levels of consonance/dissonance controlled by interval content. Separate behavioural and neuroscientific experiments were conducted. Behavioural results revealed that, compared with the consonance condition (perfect fourths, fifths and octaves) and the strong dissonance condition (minor/major seconds and tritones), the intermediate dissonance condition (minor thirds) was the most ambiguous, least salient and more cognitively demanding category (slowest reaction times). The neuroscientific findings were consistent with a heightened weight on sensory evidence whilst participants were evaluating intermediate dissonances, which was reflected in an increased neural response of the right Heschl's gyrus. The results support previous studies that have observed enhanced precision of sensory evidence whilst participants attempted to represent and respond to higher degrees of uncertainty, and converge with evidence showing preferential processing of complex spectral information in the right primary auditory cortex. These findings are discussed with respect to music-theoretical concepts and recent Bayesian models of perception, which have proposed that attention may heighten the weight of information coming from sensory channels to stimulate learning about unknown predictive relationships.
Gallistel, C R; Gibbon, J
2000-04-01
The authors draw together and develop previous timing models for a broad range of conditioning phenomena to reveal their common conceptual foundations: First, conditioning depends on the learning of the temporal intervals between events and the reciprocals of these intervals, the rates of event occurrence. Second, remembered intervals and rates translate into observed behavior through decision processes whose structure is adapted to noise in the decision variables. The noise and the uncertainties consequent on it have both subjective and objective origins. A third feature of these models is their timescale invariance, which the authors argue is a very important property evident in the available experimental data. This conceptual framework is similar to the psychophysical conceptual framework in which contemporary models of sensory processing are rooted. The authors contrast it with the associative conceptual framework.
The Applicability of Confidence Intervals of Quantiles for the Generalized Logistic Distribution
NASA Astrophysics Data System (ADS)
Shin, H.; Heo, J.; Kim, T.; Jung, Y.
2007-12-01
The generalized logistic (GL) distribution has been widely used for frequency analysis. However, few studies have addressed the confidence intervals of quantiles, which indicate the estimation accuracy of the fitted distribution, for the GL distribution. In this paper, the estimation of confidence intervals of quantiles for the GL distribution is presented based on the method of moments (MOM), maximum likelihood (ML), and probability weighted moments (PWM), and the asymptotic variances of each quantile estimator are derived as functions of sample size, return period, and parameters. Monte Carlo simulation experiments are also performed to verify the applicability of the derived confidence intervals of quantiles. The results show that the relative bias (RBIAS) and relative root mean square error (RRMSE) of the confidence intervals generally increase as the return period increases and decrease as the sample size increases. PWM performs better than the other methods in terms of RRMSE when the data are nearly symmetric, while ML shows the smallest RBIAS and RRMSE when the data are more skewed and the sample size is moderately large. The GL model was applied to fit the distribution of annual maximum rainfall data. The results show little difference in the estimated quantiles between ML and PWM, while MOM shows distinct differences.
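The Monte Carlo part of such a study can be sketched with the ordinary two-parameter logistic distribution standing in for the GL (a deliberate simplification; the GL adds a shape parameter) and MOM standing in for the three estimation methods: repeatedly simulate samples, estimate the return-period quantile, and compute RBIAS and RRMSE against the true quantile. All parameter values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

def quantile_logistic(mu, s, T):
    """Quantile for return period T (non-exceedance p = 1 - 1/T) of a logistic distribution."""
    p = 1 - 1 / T
    return mu + s * np.log(p / (1 - p))

def mom_fit(x):
    """Method-of-moments fit: mean, and scale from the variance (var = pi^2 s^2 / 3)."""
    return x.mean(), np.sqrt(3 * x.var()) / np.pi

def rbias_rrmse(n, T, mu=100.0, s=20.0, n_sim=2000):
    """Monte Carlo RBIAS and RRMSE of the MOM quantile estimator at sample size n."""
    true_q = quantile_logistic(mu, s, T)
    est = np.array([quantile_logistic(*mom_fit(rng.logistic(mu, s, n)), T)
                    for _ in range(n_sim)])
    rbias = (est.mean() - true_q) / true_q
    rrmse = np.sqrt(((est - true_q) ** 2).mean()) / true_q
    return rbias, rrmse

for n in (20, 100):
    print(n, rbias_rrmse(n, T=50))
```

As in the paper's findings, RRMSE shrinks as the sample size grows and would grow with the return period T.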
VARIABLE TIME-INTERVAL GENERATOR
Gross, J.E.
1959-10-31
This patent relates to a pulse generator and more particularly to a time interval generator wherein the time interval between pulses is precisely determined. The variable time generator comprises two oscillators with one having a variable frequency output and the other a fixed frequency output. A frequency divider is connected to the variable oscillator for dividing its frequency by a selected factor and a counter is used for counting the periods of the fixed oscillator occurring during a cycle of the divided frequency of the variable oscillator. This defines the period of the variable oscillator in terms of that of the fixed oscillator. A circuit is provided for selecting as a time interval a predetermined number of periods of the variable oscillator. The output of the generator consists of a first pulse produced by a trigger circuit at the start of the time interval and a second pulse marking the end of the time interval produced by the same trigger circuit.
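The counting scheme in the patent reduces to simple arithmetic: the counter tallies fixed-oscillator periods during one cycle of the divided variable output, which expresses the variable period in units of the fixed period. A minimal sketch of that relationship (idealized, ignoring the ±1-count quantization error of a real counter):

```python
def variable_period_in_fixed_counts(f_fixed, f_variable, divide_by):
    """Fixed-oscillator periods per variable-oscillator period: the counter
    accumulates f_fixed * divide_by / f_variable counts over one divided
    cycle; dividing by the divider factor recovers the per-period count."""
    counts_per_divided_cycle = f_fixed * divide_by / f_variable
    return counts_per_divided_cycle / divide_by

def interval_length(f_fixed, f_variable, divide_by, n_periods):
    """Length in seconds of an interval spanning n_periods of the variable
    oscillator, i.e. the span between the generator's start and stop pulses."""
    counts = n_periods * variable_period_in_fixed_counts(f_fixed, f_variable, divide_by)
    return counts / f_fixed

# A 1-MHz fixed reference timing a 10-kHz variable oscillator divided by 100:
print(interval_length(1e6, 1e4, 100, n_periods=5))  # 5 periods of 10 kHz = 0.5 ms
```

The divider matters in hardware because averaging over many variable cycles reduces the relative quantization error of the count, even though it cancels algebraically here.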
Pan, Wei-Xing; Schmidt, Robert; Wickens, Jeffery R; Hyland, Brian I
2005-06-29
Behavioral conditioning of cue-reward pairing results in a shift of midbrain dopamine (DA) cell activity from responding to the reward to responding to the predictive cue. However, the precise time course and mechanism underlying this shift remain unclear. Here, we report a combined single-unit recording and temporal difference (TD) modeling approach to this question. The data from recordings in conscious rats showed that DA cells retain responses to predicted reward after responses to conditioned cues have developed, at least early in training. This contrasts with previous TD models that predict a gradual stepwise shift in latency with responses to rewards lost before responses develop to the conditioned cue. By exploring the TD parameter space, we demonstrate that the persistent reward responses of DA cells during conditioning are only accurately replicated by a TD model with long-lasting eligibility traces (nonzero values for the parameter lambda) and low learning rate (alpha). These physiological constraints for TD parameters suggest that eligibility traces and low per-trial rates of plastic modification may be essential features of neural circuits for reward learning in the brain. Such properties enable rapid but stable initiation of learning when the number of stimulus-reward pairings is limited, conferring significant adaptive advantages in real-world environments.
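The model class explored here, TD learning with eligibility traces over a tapped-delay-line state representation, can be sketched in a few lines. This is a generic TD(λ) implementation under the abstract assumptions of the modeling literature (states are time steps after cue onset, reward at a fixed delay), not the authors' exact simulation code:

```python
import numpy as np

def td_lambda_trial(V, reward_step, alpha=0.05, lam=0.9, gamma=0.98):
    """One cue->reward trial: state t is the t-th time step after cue onset.
    Long-lasting traces (high lam) and a low learning rate (alpha) are the
    parameter regime the paper finds necessary to match DA-cell data."""
    e = np.zeros_like(V)                   # eligibility traces
    for t in range(len(V)):
        r = 1.0 if t == reward_step else 0.0
        v_next = V[t + 1] if t + 1 < len(V) else 0.0
        delta = r + gamma * v_next - V[t]  # TD prediction error
        e[t] += 1.0                        # mark current state as eligible
        V += alpha * delta * e             # update all recently visited states
        e *= gamma * lam                   # traces decay between steps
    return V

V = np.zeros(10)
for _ in range(200):
    V = td_lambda_trial(V, reward_step=9)
print(V.round(2))  # value builds up at the cue (index 0) as well as near the reward
```

With λ near 1, credit from the reward-time prediction error reaches the cue within each trial, so cue responses develop while reward responses persist early in training, rather than shifting back one step per trial as with λ = 0.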
Reaction time in pilots during intervals of high sustained g.
Truszczynski, Olaf; Lewkowicz, Rafal; Wojtkowiak, Mieczyslaw; Biernacki, Marcin P
2014-11-01
An important problem for pilots is visual disturbances occurring under +Gz acceleration. Assessment of the degree of intensification of these disturbances is generally accepted as the acceleration tolerance level (ATL) criterion determined in human centrifuges. The aim of this research was to evaluate the visual-motor responses of pilots during rapidly increasing acceleration contained in cyclic intervals of +6 Gz to the maximum ATL. The study involved 40 male pilots ages 32-41 yr. The task was a quick and faultless response to the light stimuli presented on a light bar during exposure to acceleration until reaching the ATL. Simple response time (SRT) measurements were performed using a visual-motor analysis system throughout the exposures which allowed assessment of a pilot's ATL. There were 29 pilots who tolerated the initial phase of interval acceleration and achieved +6 Gz, completing the test at ATL. Relative to the control measurements, the obtained results indicate a significant effect of the applied acceleration on response time. SRT during +6 Gz exposure was not significantly longer compared with the reaction time between each of the intervals. SRT and erroneous reactions indicated no statistically significant differences between the "lower" and "higher" ATL groups. SRT measurements over the +6-Gz exposure intervals did not vary between "lower" and "higher" ATL groups and, therefore, are not useful in predicting pilot performance. The gradual exposure to the maximum value of +6 Gz with exposure to the first three intervals on the +6-Gz plateau effectively differentiated pilots.
Maternal natal environment and breeding territory predict the condition and sex ratio of offspring.
Bowers, E Keith; Thompson, Charles F; Sakaluk, Scott K
2017-03-01
Females in a variety of taxa adjust offspring sex ratios to prevailing ecological conditions. However, little is known about whether conditions experienced during a female's early ontogeny influence the sex ratio of her offspring. We tested for past and present ecological predictors of offspring sex ratios among known-age females that were produced as offspring and bred as adults in a population of house wrens. The body condition of offspring that a female produced and the proportion of her offspring that were male were negatively correlated with the size of the brood in which she herself was reared. The proportion of sons within broods was negatively correlated with maternal hatching date, and varied positively with the quality of a female's current breeding territory as predicted. However, females producing relatively more sons than daughters were less likely to return to breed in the population the following year. Although correlative, our results suggest that the rearing environment can have enduring effects on later maternal investment and sex allocation. Moreover, the overproduction of sons relative to daughters may increase costs to a female's residual reproductive value, constraining the extent to which sons might be produced in high-quality breeding conditions. Sex allocation in birds remains a contentious subject, largely because effects on offspring sex ratios are small. Our results suggest that offspring sex ratios are shaped by various processes and trade-offs that act throughout the female life history and ultimately reduce the extent of sex-ratio adjustment relative to classic theoretical predictions.
ERIC Educational Resources Information Center
Kyndt, Eva; Vermeire, Eva; Cabus, Shana
2016-01-01
Purpose: This paper aims to examine which organisational learning conditions and individual characteristics predict the learning outcomes nurses achieve through informal learning activities. There is specific relevance for the nursing profession because of the rapidly changing healthcare systems. Design/Methodology/Approach: In total, 203 nurses…
A hybrid prognostic model for multistep ahead prediction of machine condition
NASA Astrophysics Data System (ADS)
Roulias, D.; Loutas, T. H.; Kostopoulos, V.
2012-05-01
Prognostics are the future trend in condition-based maintenance. In the current framework, a data-driven prognostic model is developed. The typical procedure for developing such a model comprises (a) the selection of features that correlate well with the gradual degradation of the machine and (b) the training of a mathematical tool. In this work the data are taken from a laboratory-scale single-stage gearbox under multi-sensor monitoring. Tests monitoring the condition of the gear pair from a healthy state until total breakdown, following several days of continuous operation, were conducted. After basic pre-processing of the derived data, an indicator that correlated well with the gearbox condition was obtained. Subsequently, the time series is split into a few distinguishable time regions via an intelligent data-clustering scheme. Each operating region is modelled with a feed-forward artificial neural network (FFANN) scheme. The performance of the proposed model is tested by applying the system to predict the machine degradation level on unseen data. The results show the plausibility and effectiveness of the model in following the trend of the time series, even when a sudden change occurs. Moreover, the model shows an ability to generalise for application to similar mechanical assets.
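The multistep-ahead prediction step can be illustrated with a much simpler stand-in for the FFANN: fit an autoregressive model to the condition indicator and feed each forecast back as an input for the next step. The indicator below is a hypothetical degradation signal, and the AR model is a deliberate simplification of the paper's neural scheme:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_ar(series, order):
    """Least-squares fit of an AR(order) model to a 1-D series."""
    X = np.column_stack([series[i:len(series) - order + i] for i in range(order)])
    y = series[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs  # coeffs[0] multiplies the oldest lag

def predict_ahead(series, coeffs, steps):
    """Recursive multistep prediction: each forecast is appended to the
    input window, so errors compound with the horizon."""
    window = list(series[-len(coeffs):])
    out = []
    for _ in range(steps):
        nxt = float(np.dot(coeffs, window))
        out.append(nxt)
        window = window[1:] + [nxt]
    return out

# Hypothetical degradation indicator: slow exponential growth plus sensor noise
t = np.arange(200)
indicator = np.exp(0.01 * t) + rng.normal(0, 0.02, 200)
coeffs = fit_ar(indicator, order=4)
forecast = predict_ahead(indicator, coeffs, steps=10)
print(forecast[0], forecast[-1])
```

The clustering-per-region idea in the paper amounts to fitting a separate predictor like this within each detected operating regime, so a sudden regime change switches models instead of corrupting one global fit.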
Minimax confidence intervals in geomagnetism
NASA Technical Reports Server (NTRS)
Stark, Philip B.
1992-01-01
The present paper uses the theory of Donoho (1989) to find lower bounds on the lengths of optimally short fixed-length confidence intervals (minimax confidence intervals) for Gauss coefficients of the field of degree 1-12 using the heat flow constraint. The bounds on optimal minimax intervals are about 40 percent shorter than Backus' intervals: no procedure for producing fixed-length confidence intervals, linear or nonlinear, can give intervals shorter than about 60 percent the length of Backus' in this problem. While both methods rigorously account for the fact that core field models are infinite-dimensional, the application of the techniques to the geomagnetic problem involves approximations and counterfactual assumptions about the data errors, and so these results are likely to be extremely optimistic estimates of the actual uncertainty in Gauss coefficients.
Method of high precision interval measurement in pulse laser ranging system
NASA Astrophysics Data System (ADS)
Wang, Zhen; Lv, Xin-yuan; Mao, Jin-jin; Liu, Wei; Yang, Dong
2013-09-01
Laser ranging offers high measurement precision, fast measurement speed, no need for cooperative targets, and strong resistance to electromagnetic interference; time-interval measurement is the key parameter affecting the performance of the whole system. The precision of a pulsed laser ranging system is determined by the precision of its time-interval measurement. The principal structure of the laser ranging system is introduced, and a method of high-precision time-interval measurement in a pulsed laser ranging system is established in this paper. Based on an analysis of the factors that affect ranging precision, a pulse rising-edge discriminator was adopted to produce timing marks for start-stop time discrimination, and a TDC-GP2 high-precision interval measurement system based on a TMS320F2812 DSP was designed to improve the measurement precision. Experimental results indicate that the time-interval measurement method in this paper achieves higher range accuracy. Compared with traditional time-interval measurement systems, the method simplifies the system design, reduces the influence of bad weather conditions, and satisfies the requirements of low cost and miniaturization.
A predictive model of geosynchronous magnetopause crossings
NASA Astrophysics Data System (ADS)
Dmitriev, A.; Suvorova, A.; Chao, J.-K.
2011-05-01
We have developed a model predicting whether or not the magnetopause crosses geosynchronous orbit at a given location for a given solar wind pressure Psw, Bz component of the interplanetary magnetic field (IMF), and geomagnetic conditions characterized by the 1-min SYM-H index. The model is based on more than 300 geosynchronous magnetopause crossings (GMCs) and about 6000 min during which geosynchronous satellites of the GOES and Los Alamos National Laboratory (LANL) series were located in the magnetosheath (so-called MSh intervals) in 1994-2001. Minimizing the Psw required for GMCs and MSh intervals at various locations, Bz values, and SYM-H values allows us to describe both the effect of magnetopause dawn-dusk asymmetry and the saturation of the Bz influence for very large southward IMF. The asymmetry is strong for large negative Bz and almost disappears when Bz is positive. We found that the larger the amplitude of negative SYM-H, the lower the solar wind pressure required for GMCs. We attribute this effect to a depletion of the dayside magnetic field by a storm-time intensification of the cross-tail current. It is also found that the magnitude of the threshold for Bz saturation increases with SYM-H index, such that for small negative and positive SYM-H the effect of saturation diminishes. This supports the idea that enhanced thermal pressure of the magnetospheric plasma and ring current particles during magnetic storms results in the saturation of the magnetic effect of the IMF Bz at the dayside magnetopause. A noticeable advantage in prediction capability over other magnetopause models makes the model useful for space weather predictions.
Aeroheating Testing and Predictions for Project Orion CEV at Turbulent Conditions
NASA Technical Reports Server (NTRS)
Hollis, Brian R.; Berger, Karen T.; Horvath, Thomas J.; Coblish, Joseph J.; Norris, Joseph D.; Lillard, Randolph P.; Kirk, Benjamin S.
2009-01-01
An investigation of the aeroheating environment of the Project Orion Crew Exploration Vehicle was performed in the Arnold Engineering Development Center Hypervelocity Wind Tunnel No. 9 Mach 8 and Mach 10 nozzles and in the NASA Langley Research Center 20-Inch Mach 6 Air Tunnel. Heating data were obtained using a thermocouple-instrumented, approximately 0.035-scale model (0.1778-m/7-inch diameter) of the flight vehicle. Runs were performed in the Tunnel 9 Mach 10 nozzle at free stream unit Reynolds numbers of 1x10(exp 6)/ft to 20x10(exp 6)/ft, in the Tunnel 9 Mach 8 nozzle at free stream unit Reynolds numbers of 8x10(exp 6)/ft to 48x10(exp 6)/ft, and in the 20-Inch Mach 6 Air Tunnel at free stream unit Reynolds numbers of 1x10(exp 6)/ft to 7x10(exp 6)/ft. In both facilities, enthalpy levels were low and the test gas (N2 in Tunnel 9 and air in the 20-Inch Mach 6) behaved as a perfect gas. These test conditions produced laminar, transitional and turbulent data in the Tunnel 9 Mach 10 nozzle, transitional and turbulent data in the Tunnel 9 Mach 8 nozzle, and laminar and transitional data in the 20-Inch Mach 6 Air Tunnel. Laminar and turbulent predictions were generated for all wind tunnel test conditions, and comparisons were performed with the experimental data to help define the accuracy of the computational method. In general, it was found that both laminar data and predictions, and turbulent data and predictions, agreed to within the estimated 12% experimental uncertainty. Laminar heating distributions from all three data sets were shown to correlate well and demonstrated Reynolds number independence when expressed in terms of the Stanton number based on adiabatic wall-recovery enthalpy. Transition onset locations on the leeside centerline were determined from the data and correlated in terms of boundary-layer parameters. Finally, turbulent heating augmentation ratios were determined for several body-point locations and correlated in terms of the
Gaussian process regression for tool wear prediction
NASA Astrophysics Data System (ADS)
Kong, Dongdong; Chen, Yongjie; Li, Ning
2018-05-01
To realize and accelerate the pace of intelligent manufacturing, this paper presents a novel tool wear assessment technique based on integrated radial basis function based kernel principal component analysis (KPCA_IRBF) and Gaussian process regression (GPR) for accurate real-time monitoring of the in-process tool wear parameter (flank wear width). KPCA_IRBF is a new nonlinear dimension-increment technique, proposed here for the first time for feature fusion. The tool wear predictive value and the corresponding confidence interval are both provided by the GPR model. Moreover, GPR performs better than artificial neural networks (ANN) and support vector machines (SVM) in prediction accuracy, since Gaussian noise can be modeled quantitatively in the GPR model. However, noise seriously affects the stability of the confidence interval. In this work, the proposed KPCA_IRBF technique helps to remove the noise and weaken its negative effects, greatly compressing and smoothing the confidence interval, which is conducive to monitoring the tool wear accurately. Moreover, the kernel parameter in KPCA_IRBF can be selected from a much larger region than in the conventional KPCA_RBF technique, which helps to improve the efficiency of model construction. Ten sets of cutting tests were conducted to validate the effectiveness of the presented tool wear assessment technique. The experimental results show that the in-process flank wear width of tool inserts can be monitored accurately with the presented technique, which is robust under a variety of cutting conditions. This study lays the foundation for tool wear monitoring in real industrial settings.
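The GPR step described above can be sketched in a few lines: a GPR model returns both a predictive mean and a predictive standard deviation, from which a confidence interval follows. This is a minimal scikit-learn stand-in; the features, kernel, and data are illustrative, not the authors' KPCA_IRBF-fused features:

```python
# Minimal sketch of GPR prediction with a confidence interval.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(40, 1))                   # stand-in fused features
y = 0.05 * X.ravel() ** 2 + rng.normal(0, 0.05, 40)    # stand-in flank wear width

# WhiteKernel models the Gaussian noise term quantitatively, which is
# what lets GPR report a calibrated predictive uncertainty.
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X, y)

X_new = np.array([[5.0]])
mean, std = gpr.predict(X_new, return_std=True)
lower, upper = mean - 1.96 * std, mean + 1.96 * std    # ~95% confidence interval
```

A wear alarm would then fire when the lower bound of the interval, not just the point prediction, exceeds a wear threshold.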
Benoit, S C; Morell, J R; Davidson, T L
1999-12-01
T. L. Davidson, A. M. Altizer, S. C. Benoit, E. K. Walls, and T. L. Powley (1997) reported that rats show facilitated responding to conditioned stimuli (CSs) that predict oil, after administration of the lipoprivic agent, Na-2-mercaptoacetate (MA). This facilitation was blocked by vagal deafferentation. The present article extends that investigation to another structure, the amygdala central nucleus (CN). The CN receives inputs from dorsal vagal nuclei, and neurotoxic lesions of this nucleus are reported to abolish feeding in response to lipoprivic challenges. In Experiment 1, rats with ibotenic acid (IBO) lesions of the CN failed to show enhanced appetitive responding during oil-predicting CSs after administration of MA. Experiment 2 used a conditioned taste-aversion procedure to establish that rats with IBO lesions of the CN were able to discriminate the tastes of sucrose and peanut oil and had intact CS-US representations. It is concluded that the amygdala CN is a necessary structure for the detection of lipoprivic challenges.
Fieuws, Steffen; Willems, Guy; Larsen-Tangmose, Sara; Lynnerup, Niels; Boldsen, Jesper; Thevissen, Patrick
2016-03-01
When an estimate of age is needed, typically multiple indicators are present, as found in skeletal or dental information. There exists a vast literature on approaches to estimate age from such multivariate data. Application of Bayes' rule has been proposed to overcome drawbacks of classical regression models but becomes less trivial as soon as the number of indicators increases. Each of the age indicators can lead to a different point estimate ("the most plausible value for age") and a prediction interval ("the range of possible values"). The major challenge in the combination of multiple indicators is not the calculation of a combined point estimate for age but the construction of an appropriate prediction interval. Ignoring the correlation between the age indicators results in intervals that are too small. Boldsen et al. (2002) presented an ad hoc procedure to construct an approximate confidence interval without the need to model the multivariate correlation structure between the indicators. The aim of the present paper is to bring attention to this pragmatic approach and to evaluate its performance in a practical setting. This is all the more needed since recent publications ignore the need for interval estimation. To illustrate and evaluate the method, third molar scores (Köhler et al., 1995) are used to estimate age in a dataset of 3200 male subjects in the juvenile age range.
On the differentiation matrix for Daubechies-based wavelets on an interval
NASA Technical Reports Server (NTRS)
Jameson, Leland
1993-01-01
The differentiation matrix for a Daubechies-based wavelet basis defined on an interval will be constructed. It will be shown that the differentiation matrix based on the currently available boundary constructions does not maintain the superconvergence encountered under periodic boundary conditions.
Can longer forest harvest intervals increase summer streamflow for salmon recovery?
The Mashel Streamflow Modeling Project in the Mashel River Basin, Washington, is using a watershed-scale ecohydrological model to assess whether longer forest harvest intervals can remediate summer low flow conditions that have contributed to sharply reduced runs of spawning Chin...
Infinite time interval backward stochastic differential equations with continuous coefficients.
Zong, Zhaojun; Hu, Feng
2016-01-01
In this paper, we study the existence theorem for [Formula: see text] [Formula: see text] solutions to a class of 1-dimensional infinite time interval backward stochastic differential equations (BSDEs) under the conditions that the coefficients are continuous and have linear growth. We also obtain the existence of a minimal solution. Furthermore, we study the existence and uniqueness theorem for [Formula: see text] [Formula: see text] solutions of infinite time interval BSDEs with non-uniformly Lipschitz coefficients. It should be pointed out that the assumptions of this result are weaker than those of Theorem 3.1 in Zong (Turkish J Math 37:704-718, 2013).
Interval stability for complex systems
NASA Astrophysics Data System (ADS)
Klinshov, Vladimir V.; Kirillov, Sergey; Kurths, Jürgen; Nekorkin, Vladimir I.
2018-04-01
Stability of dynamical systems against strong perturbations is an important problem of nonlinear dynamics relevant to many applications in various areas. Here, we develop a novel concept of interval stability, referring to the behavior of the perturbed system during a finite time interval. Based on this concept, we suggest new measures of stability, namely interval basin stability (IBS) and interval stability threshold (IST). IBS characterizes the likelihood that the perturbed system returns to the stable regime (attractor) in a given time. IST provides the minimal magnitude of the perturbation capable to disrupt the stable regime for a given interval of time. The suggested measures provide important information about the system susceptibility to external perturbations which may be useful for practical applications. Moreover, from a theoretical viewpoint the interval stability measures are shown to bridge the gap between linear and asymptotic stability. We also suggest numerical algorithms for quantification of the interval stability characteristics and demonstrate their potential for several dynamical systems of various nature, such as power grids and neural networks.
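The abstract mentions numerical algorithms for quantifying the interval stability measures without specifying them; a rough illustration of how interval basin stability (IBS) might be estimated is to sample random perturbations of a stable state and count the fraction of trajectories that are back near the attractor within the time window T. The damped-pendulum system, thresholds, and sample counts below are illustrative assumptions, not the authors' algorithm:

```python
# Monte Carlo sketch of interval basin stability (IBS) for a toy system.
import numpy as np

def simulate(x0, v0, T=20.0, dt=0.01):
    """Forward-Euler integration of x'' = -sin(x) - 0.5 x' (damped pendulum)."""
    x, v = x0, v0
    for _ in range(int(T / dt)):
        x, v = x + dt * v, v + dt * (-np.sin(x) - 0.5 * v)
    return x, v

def interval_basin_stability(magnitude, n_samples=200, seed=1):
    """Fraction of random perturbations of size `magnitude` that return
    close to the attractor (the rest state) within the time window T."""
    rng = np.random.default_rng(seed)
    returned = 0
    for _ in range(n_samples):
        angle = rng.uniform(0, 2 * np.pi)
        x, v = simulate(magnitude * np.cos(angle), magnitude * np.sin(angle))
        # Count the trajectory as "returned" if it ends near the fixed
        # point (0, 0), with angle wrapped to (-pi, pi].
        if abs(np.arctan2(np.sin(x), np.cos(x))) < 0.1 and abs(v) < 0.1:
            returned += 1
    return returned / n_samples
```

The interval stability threshold (IST) could then be estimated by bisecting over `magnitude` for the smallest value at which IBS drops below 1.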
Pavement remaining service interval implementation guidelines.
DOT National Transportation Integrated Search
2013-11-01
"Many important decisions are necessary in order to effectively provide and manage a pavement network. At the heart of this : process is the prediction of needed future construction events. One approach to providing a single numeric on the condition ...
Arnould, Valérie M. R.; Reding, Romain; Bormann, Jeanne; Gengler, Nicolas; Soyeurt, Hélène
2015-01-01
Simple Summary: Reducing the frequency of milk recording decreases the costs of official milk recording. However, this approach can negatively affect the accuracy of predicting daily yields. In this study, equations to predict daily yield from morning or evening data were developed for fatty milk components from traits easily recorded by milk recording organizations. The correlation values ranged from 96.4% to 97.6% (96.9% to 98.3%) when the daily yields were estimated from the morning (evening) milkings. The simplicity of the proposed models, which do not include the milking interval, should facilitate their use by breeding and milk recording organizations. Abstract: Reducing the frequency of milk recording would help reduce the costs of official milk recording. However, this approach could also negatively affect the accuracy of predicting daily yields. This problem has been investigated in numerous studies. In addition, published equations take into account milking intervals (MI), which are often not available and/or unreliable in practice. The first objective of this study was to propose models in which the MI is replaced by a combination of data easily recorded by dairy farmers. The second objective was to further investigate the fatty acids (FA) present in milk. Equations to predict daily yield from AM or PM data were based on a calibration database containing 79,971 records related to 51 traits [milk yield (expected AM, expected PM, and expected daily); fat content (expected AM, expected PM, and expected daily); fat yield (expected AM, expected PM, and expected daily; g/day); levels of seven different FAs or FA groups (expected AM, expected PM, and expected daily; g/dL milk); and the corresponding FA yields for these seven FA types/groups (expected AM, expected PM, and expected daily; g/day)]. These equations were validated using two distinct external datasets. The results obtained from the proposed models were compared to previously published results for
Predicting Operator Execution Times Using CogTool
NASA Technical Reports Server (NTRS)
Santiago-Espada, Yamira; Latorella, Kara A.
2013-01-01
Researchers and developers of NextGen systems can use predictive human performance modeling tools as an initial approach to obtain skilled user performance times analytically, before system testing with users. This paper describes CogTool models of a two-pilot crew executing two different types of datalink clearance acceptance tasks on two different simulation platforms. The CogTool time estimates for accepting and executing Required Time of Arrival and Interval Management clearances were compared to empirical data observed in video tapes and recorded in simulation files. Results indicate no statistically significant difference between empirical data and the CogTool predictions. A population comparison test found no significant differences between the CogTool estimates and the empirical execution times for any of the four test conditions. We discuss modeling caveats and considerations for applying CogTool to crew performance modeling in advanced cockpit environments.
Shiokai, Sachiko; Kitashiba, Hiroyasu; Nishio, Takeshi
2010-08-01
Although the dot-blot-SNP technique is a simple, cost-saving technique suitable for genotyping many plant individuals, optimization of hybridization and washing conditions for each SNP marker requires much time and labor. For prediction of the optimum hybridization conditions for each probe, we compared Tm values estimated from nucleotide sequences using the DINAMelt web server, measured Tm values, and hybridization conditions yielding allele-specific signals. The estimated Tm values were comparable to the measured Tm values, with small differences of less than 3 degrees C for most of the probes. There were differences of approximately 14 degrees C between the specific signal detection conditions and estimated Tm values. Change of one level of SSC concentration among 0.1, 0.2, 0.5, and 1.0x SSC corresponded to a difference of approximately 5 degrees C in optimum signal detection temperature. Increasing the sensitivity of signal detection by shortening the exposure time to X-ray film changed the optimum hybridization condition for specific signal detection. Addition of competitive oligonucleotides to the hybridization mixture increased the suitable hybridization conditions by 1.8. Based on these results, optimum hybridization conditions for newly produced dot-blot-SNP markers will become predictable.
On the estimation of risk associated with an attenuation prediction
NASA Technical Reports Server (NTRS)
Crane, R. K.
1992-01-01
Viewgraphs from a presentation on the estimation of risk associated with an attenuation prediction are presented. Topics covered include: link failure, i.e. attenuation exceeding a specified threshold for a specified time interval or intervals; risk, i.e. the probability of one or more failures during the lifetime of the link or during a specified accounting interval; the problem, i.e. modeling the probability of attenuation by rainfall to provide a prediction of the attenuation threshold for a specified risk; and accounting for the inadequacy of a model or models.
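The risk notion defined above (the probability of one or more failures over the link lifetime) can be made concrete under the simplifying assumption that accounting intervals are independent, in which case risk = 1 - (1 - p)^n for a per-interval exceedance probability p and n intervals. The numbers below are illustrative, not from the presentation:

```python
# Sketch: probability of at least one link failure over n independent
# accounting intervals, given per-interval exceedance probability p.

def risk_of_failure(p_per_interval: float, n_intervals: int) -> float:
    """Risk = 1 - (1 - p)^n, assuming independent intervals."""
    return 1.0 - (1.0 - p_per_interval) ** n_intervals

# e.g. a 1% per-year exceedance probability over a 10-year link lifetime:
print(round(risk_of_failure(0.01, 10), 4))  # prints 0.0956
```

Inverting this relation gives the per-interval attenuation threshold one must design for to meet a specified lifetime risk, which is the prediction problem posed in the abstract.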
Pérez, Omar D; Aitken, Michael R F; Zhukovsky, Peter; Soto, Fabián A; Urcelay, Gonzalo P; Dickinson, Anthony
2016-12-15
Associative learning theories regard the probability of reinforcement as the critical factor determining responding. However, the role of this factor in instrumental conditioning is not completely clear. In fact, free-operant experiments show that participants respond at a higher rate on variable ratio than on variable interval schedules even though the reinforcement probability is matched between the schedules. This difference has been attributed to the differential reinforcement of long inter-response times (IRTs) by interval schedules, which acts to slow responding. In the present study, we used a novel experimental design to investigate human responding under random ratio (RR) and regulated probability interval (RPI) schedules, a type of interval schedule that sets a reinforcement probability independently of the IRT duration. Participants responded on each type of schedule before a final choice test in which they distributed responding between two schedules similar to those experienced during training. Although response rates did not differ during training, the participants responded at a lower rate on the RPI schedule than on the matched RR schedule during the choice test. This preference cannot be attributed to a higher probability of reinforcement for long IRTs and questions the idea that similar associative processes underlie classical and instrumental conditioning.
Geophysical Anomalies and Earthquake Prediction
NASA Astrophysics Data System (ADS)
Jackson, D. D.
2008-12-01
Finding anomalies is easy. Predicting earthquakes convincingly from such anomalies is far from easy. Why? Why have so many beautiful geophysical abnormalities not led to successful prediction strategies? What is earthquake prediction? By my definition it is convincing information that an earthquake of specified size is temporarily much more likely than usual in a specific region for a specified time interval. We know a lot about normal earthquake behavior, including locations where earthquake rates are higher than elsewhere, with estimable rates and size distributions. We know that earthquakes have power law size distributions over large areas, that they cluster in time and space, and that aftershocks follow with power-law dependence on time. These relationships justify prudent protective measures and scientific investigation. Earthquake prediction would justify exceptional temporary measures well beyond those normal prudent actions. Convincing earthquake prediction would result from methods that have demonstrated many successes with few false alarms. Predicting earthquakes convincingly is difficult for several profound reasons. First, earthquakes start in tiny volumes at inaccessible depth. The power law size dependence means that tiny unobservable ones are frequent almost everywhere and occasionally grow to larger size. Thus prediction of important earthquakes is not about nucleation, but about identifying the conditions for growth. Second, earthquakes are complex. They derive their energy from stress, which is perniciously hard to estimate or model because it is nearly singular at the margins of cracks and faults. Physical properties vary from place to place, so the preparatory processes certainly vary as well. Thus establishing the needed track record for validation is very difficult, especially for large events with immense interval times in any one location. Third, the anomalies are generally complex as well. Electromagnetic anomalies in particular require
TIME-INTERVAL MEASURING DEVICE
Gross, J.E.
1958-04-15
An electronic device for measuring the time interval between two control pulses is presented. The device incorporates part of a previous approach to time measurement, in that pulses from a constant-frequency oscillator are counted during the interval between the control pulses. To reduce the possible counting error caused by the operation of the counter gating circuit at various points in the pulse cycle, the described device provides means for successively delaying the pulses by a fraction of the pulse period, so that a final delay of one period is obtained, and means for counting the pulses before and after each stage of delay during the time interval, whereby a plurality of totals is obtained which may be averaged and multiplied by the pulse period to obtain an accurate time-interval measurement.
Li, Congjuan; Shi, Xiang; Mohamad, Osama Abdalla; Gao, Jie; Xu, Xinwen; Xie, Yijun
2017-01-01
Water influences various physiological and ecological processes of plants in different ecosystems, especially in desert ecosystems. The purpose of this study was to investigate the physiological and morphological acclimation of two desert shrubs, Haloxylon ammodendron and Calligonum mongolicum, to variations in irrigation interval. The irrigation frequency was set at 1-, 2-, 4-, 8- and 12-week intervals, respectively, from March to October during 2012-2014. The irrigation interval significantly affected the individual-scale carbon acquisition and biomass allocation pattern of both species. Under good water conditions (1- and 2-week intervals), carbon assimilation was significantly higher than under the other treatments; under water shortage conditions (8- and 12-week intervals), there was much defoliation; and under a moderate irrigation interval (4 weeks), the assimilative organs grew steadily with almost no defoliation. Both species maintained similar ecophysiologically adaptive strategies, while C. mongolicum was more sensitive to drought stress because of its shallow root system and preferential belowground allocation of resources. A moderate irrigation interval of 4 weeks was a suitable pattern for both plants, since it not only saved water but also met the water demands of the plants.
Improving orbit prediction accuracy through supervised machine learning
NASA Astrophysics Data System (ADS)
Peng, Hao; Bai, Xiaoli
2018-05-01
Due to the lack of information such as the space environment condition and resident space objects' (RSOs') body characteristics, current orbit predictions that are solely grounded on physics-based models may fail to achieve required accuracy for collision avoidance and have led to satellite collisions already. This paper presents a methodology to predict RSOs' trajectories with higher accuracy than that of the current methods. Inspired by the machine learning (ML) theory through which the models are learned based on large amounts of observed data and the prediction is conducted without explicitly modeling space objects and space environment, the proposed ML approach integrates physics-based orbit prediction algorithms with a learning-based process that focuses on reducing the prediction errors. Using a simulation-based space catalog environment as the test bed, the paper demonstrates three types of generalization capability for the proposed ML approach: (1) the ML model can be used to improve the same RSO's orbit information that is not available during the learning process but shares the same time interval as the training data; (2) the ML model can be used to improve predictions of the same RSO at future epochs; and (3) the ML model based on a RSO can be applied to other RSOs that share some common features.
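The hybrid idea described above (a physics-based propagator corrected by a learned error model) can be sketched in one dimension. The "physics model" and "truth" here are toy stand-ins with a deliberate drift error, not a real orbit propagator or space catalog:

```python
# Illustrative sketch: learn the residual error of a physics-based prediction
# from historical data, then subtract the predicted error from new predictions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
t = rng.uniform(0, 100, size=(500, 1))     # epoch, the only (toy) feature
truth = np.sin(0.1 * t.ravel())            # "true" position
physics = truth + 0.001 * t.ravel()        # physics model with a slow drift error

# Learn the physics-model residual from past observed data...
residual_model = GradientBoostingRegressor().fit(t, physics - truth)

# ...then correct a new physics-based prediction at a later epoch:
t_new = 50.0
physics_pred = np.sin(0.1 * t_new) + 0.001 * t_new
corrected = physics_pred - residual_model.predict([[t_new]])[0]
```

In this toy setup the corrected prediction lands much closer to the truth than the raw physics prediction, mirroring the paper's first generalization case (same object, same time interval as the training data).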
High-intensity interval training: Modulating interval duration in overweight/obese men.
Smith-Ryan, Abbie E; Melvin, Malia N; Wingfield, Hailee L
2015-05-01
High-intensity interval training (HIIT) is a time-efficient strategy shown to induce various cardiovascular and metabolic adaptations. Little is known about the optimal tolerable combination of intensity and volume necessary for adaptations, especially in clinical populations. In a randomized controlled pilot design, we evaluated the effects of two types of interval training protocols, varying in intensity and interval duration, on clinical outcomes in overweight/obese men. Twenty-five men [body mass index (BMI) > 25 kg · m(-2)] completed baseline body composition measures: fat mass (FM), lean mass (LM) and percent body fat (%BF), and fasting blood glucose, lipids and insulin (IN). A graded exercise cycling test was completed for peak oxygen consumption (VO2peak) and power output (PO). Participants were randomly assigned to high-intensity short interval (1MIN-HIIT), high-intensity interval (2MIN-HIIT) or control (CON) groups. 1MIN-HIIT and 2MIN-HIIT completed 3 weeks of cycling interval training, 3 days/week, consisting of either 10 × 1 min bouts at 90% PO with 1 min rests (1MIN-HIIT) or 5 × 2 min bouts with 1 min rests at undulating intensities (80%-100%) (2MIN-HIIT). There were no significant training effects on FM (Δ1.06 ± 1.25 kg) or %BF (Δ1.13% ± 1.88%), compared to CON. Increases in LM were not significant but amounted to 1.7 kg and 2.1 kg for the 1MIN and 2MIN-HIIT groups, respectively. Increases in VO2peak were also not significant for the 1MIN (3.4 ml · kg(-1) · min(-1)) or 2MIN groups (2.7 ml · kg(-1) · min(-1)). IN sensitivity (HOMA-IR) improved for both training groups (Δ-2.78 ± 3.48 units; p < 0.05) compared to CON. HIIT may be an effective short-term strategy to improve cardiorespiratory fitness and IN sensitivity in overweight males.
Fatigue Exacerbation by Interval or Continuous Exercise in Chronic Fatigue Syndrome.
Sandler, Carolina X; Lloyd, Andrew R; Barry, Benjamin K
2016-10-01
The objective of this study was to determine whether the typical exacerbation of symptoms in patients with chronic fatigue syndrome (CFS) after a bout of exercise differs between high-intensity interval training (HIIT) and continuous (CONT) aerobic exercise of the same duration and mechanical work. Participants with specialist-diagnosed CFS performed two 20-min bouts of cycling in a randomized crossover study. The bouts were either moderate-intensity continuous (70% age-predicted HR maximum) or high-intensity interval exercise, separated by at least 2 wk. Self-report questionnaires capturing fatigue and related symptoms, along with actigraphy, were collected across 2 d before and 4 d after the exercise. Comparisons between exercise bouts were made using paired-sample t-tests. Fourteen moderately affected participants, unable to work but not bed bound, completed the study (nine female, 32 ± 10 yr, 67 ± 11 kg). Mechanical work was matched successfully between the exercise bouts (HIIT, 83,037, vs CONT, 83,348 J, P = 0.84). Mean HR (HIIT, 76% ± 5%, vs CONT, 73% ± 6% age-predicted HR maximum, P < 0.05) and RPE (6-20) in the legs (HIIT, 15.4 ± 1.4, vs CONT, 13.2 ± 1.2, P < 0.001) were higher for the interval than for the continuous exercise. Mean fatigue scores (0-10) were similar before each exercise challenge (HIIT, 4.5 ± 1.8, vs CONT, 4.1 ± 1.7, P = 0.43). Participants reported an increase in fatigue scores after both challenges (mean difference: HIIT, 1.0 ± 1.3, P < 0.01; CONT, 1.5 ± 0.7, P < 0.001), but these exacerbations in fatigue were not statistically or clinically different (P = 0.20). High-intensity interval exercise did not exacerbate fatigue any more than continuous exercise of comparable workload. This finding supports evaluation of HIIT in graded exercise therapy interventions for patients with CFS.
Yuta, Atsushi; Ukai, Kotaro; Sakakura, Yasuo; Tani, Hideshi; Matsuda, Fukiko; Yang, Tian-qun; Majima, Yuichi
2002-07-01
We predicted Japanese cedar (Cryptomeria japonica) pollen counts at Tsu city based on the male flower-setting conditions of standard trees. Sixty-nine standard trees from 23 clones, planted at the Mie Prefecture Science and Technology Promotion Center (Hakusan, Mie) in 1964, were selected. Male flower-setting conditions for 276 faces (69 trees x 4 points of the compass) were scored from 0 to 3. The average scores and total pollen counts from 1988 to 2000 were analyzed. The average scores from the standard trees and the total pollen counts, excluding the two mass pollen-scattering years of 1995 and 2000, showed a positive linear correlation (r = 0.914). In the mass pollen-scattering years, pollen counts were influenced by the previous year. Therefore, the score of the present year minus that of the previous year was used for analysis. The average scores from male flower-setting conditions and pollen counts had a strong positive correlation (r = 0.994) when positive scores taking account of the previous year were analyzed. We conclude that prediction of pollen counts is possible based on the male flower-setting conditions of standard trees.
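The approach in the abstract above reduces to fitting a linear function relating the average flower-setting score to the season's total pollen count and reading off a prediction for the current score. A minimal sketch with made-up numbers (not the Tsu city data):

```python
# Illustrative linear fit of pollen counts vs. flower-setting scores.
import numpy as np

scores = np.array([0.4, 0.9, 1.3, 1.8, 2.2, 2.6])       # hypothetical avg scores
pollen = np.array([800, 1900, 2700, 3900, 4600, 5600])  # hypothetical counts

slope, intercept = np.polyfit(scores, pollen, 1)         # least-squares line
r = np.corrcoef(scores, pollen)[0, 1]                    # correlation coefficient

predicted = slope * 1.5 + intercept   # predicted count for a score of 1.5
```

The abstract's refinement for mass-scattering years, i.e. using the present-year score minus the previous-year score as the regressor, fits the same template with a differenced score array.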
Psychoacoustic Factors in Musical Intonation: Beats, Interval Tuning, and Inharmonicity.
NASA Astrophysics Data System (ADS)
Keislar, Douglas Fleming
Three psychoacoustic experiments were conducted using musically experienced subjects. In the first two experiments, the interval tested was the perfect fifth F4-C5; in the final one it was the major third F4-A4. The beat rate was controlled by two different methods: (1) simply retuning the interval, and (2) frequency-shifting one partial of each pair of beating partials without changing the overall interval tuning. The second method introduces inharmonicity. In addition, two levels of beat amplitude were introduced by using either a complete spectrum of 16 equal-amplitude partials per note, or by deleting one partial from each pair of beating partials. The results of all three experiments indicate that, for these stimuli, beating does not contribute significantly to the percept of "out-of-tuneness," because it made no difference statistically whether the beat amplitude was maximal or minimal. By contrast, mistuning the interval was highly significant. For the fifths, frequency-shifting the appropriate partials had about as much effect on the perceived intonation as mistuning the interval. For thirds, this effect was weaker, presumably since there were fewer inharmonic partials and they were higher in the harmonic series. Subjects were less consistent in their judgments of thirds than of fifths, perhaps because the equal-tempered and just thirds differ noticeably, unlike fifths. Since it is unlikely that beats would be more audible in real musical situations than under these laboratory conditions, these results suggest that the perception of intonation in music is dependent on the actual interval tuning rather than the concomitant beat rate. If beating partials are unimportant vis-a-vis interval tuning, this strengthens the argument for a cultural basis for musical intonation and scales, as opposed to the acoustical basis set forth by Helmholtz and others.
Aithal, Venkatesh; Kei, Joseph; Driscoll, Carlie; Murakoshi, Michio; Wada, Hiroshi
2018-02-01
Diagnosing conductive conditions in newborns is challenging for both audiologists and otolaryngologists. Although high-frequency tympanometry (HFT), acoustic stapedial reflex tests, and wideband absorbance measures are useful diagnostic tools, their performance in detecting middle ear conditions varies. New technology such as sweep frequency impedance (SFI) measurement may add diagnostic sensitivity and specificity and assist in the diagnosis of middle ear dysfunction in newborns. The purpose of this study was to determine the test performance of SFI in predicting the status of the outer and middle ear in newborns against commonly used reference standards. Automated auditory brainstem response (AABR), HFT (1000 Hz), transient evoked otoacoustic emission (TEOAE), distortion product otoacoustic emission (DPOAE), and SFI tests were administered to the study sample. A total of 188 neonates (98 males and 90 females) with a mean gestational age of 39.4 weeks were included in the sample. Mean age at the time of testing was 44.4 hr. Diagnostic accuracy of SFI was assessed in terms of its ability to identify conductive conditions in neonates when compared with nine different reference standards (four single tests [AABR, HFT, TEOAE, and DPOAE] and five test batteries [HFT + DPOAE, HFT + TEOAE, DPOAE + TEOAE, DPOAE + AABR, and TEOAE + AABR]), using receiver operating characteristic (ROC) analysis and traditional test performance measures such as sensitivity and specificity. The test performance of SFI against the test battery reference standard of HFT + DPOAE and the single reference standard of HFT was high, with areas under the ROC curve (AROC) of 0.87 and 0.82, respectively. Although the HFT + DPOAE test battery reference standard performed better than the HFT reference standard in predicting middle ear conductive conditions in neonates, the difference in AROC was not significant. Further analysis revealed that the
ERIC Educational Resources Information Center
Du, Yunfei
This paper discusses the impact of sampling error on the construction of confidence intervals around effect sizes. Sampling error affects the location and precision of confidence intervals. Meta-analytic resampling demonstrates that confidence intervals can haphazardly bounce around the true population parameter. Special software with graphical…
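The resampling behavior described above is easy to demonstrate. The sketch below, with hypothetical function names and Cohen's d chosen as an illustrative effect size (the abstract does not specify one), draws percentile-bootstrap confidence intervals whose endpoints shift from sample to sample:

```python
import random
import statistics

def cohens_d(a, b):
    """Standardized mean difference (Cohen's d) using the pooled SD."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a) +
                  (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / pooled_var ** 0.5

def bootstrap_ci(a, b, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for Cohen's d.

    Resamples each group with replacement; the interval endpoints
    depend on the particular sample drawn, which is why repeated
    sampling makes the interval 'bounce' around the true parameter.
    """
    rng = random.Random(seed)
    stats = []
    for _ in range(n_boot):
        ra = [rng.choice(a) for _ in a]
        rb = [rng.choice(b) for _ in b]
        stats.append(cohens_d(ra, rb))
    stats.sort()
    lo = stats[int(alpha / 2 * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

Rerunning `bootstrap_ci` on fresh samples from the same population illustrates how sampling error moves both the location and the width of the interval.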
Research on regional numerical weather prediction
NASA Technical Reports Server (NTRS)
Kreitzberg, C. W.
1976-01-01
Extension of the predictive power of dynamic weather forecasting to scales below the conventional synoptic or cyclonic scales in the near future is assessed. Lower costs per computation, more powerful computers, and a 100 km mesh over the North American area (with a coarser mesh extending beyond it) are noted at present. Doubling the resolution even locally (to a 50 km mesh) would entail a 16-fold increase in cost (a factor of two in each horizontal direction, in vertical resolution, and from halving the time interval), as well as constraints on domain size and length of forecast. Boundary conditions would be provided by the surrounding 100 km mesh, and time-varying lateral boundary conditions can be considered to handle moving phenomena. More physical processes to treat, more efficient numerical techniques, and faster computers (improved software and hardware) backing up satellite and radar data could produce further improvements in forecasting in the 1980s. Boundary layer modeling, initialization techniques, and quantitative precipitation forecasting are singled out among key tasks.
Gama, Repson; Van Dyk, J Susan; Burton, Mike H; Pletschke, Brett I
2017-06-01
The enzymatic degradation of lignocellulosic biomass such as apple pomace is a complex process influenced by a number of hydrolysis conditions. Predicting optimal conditions, including enzyme and substrate concentration, temperature and pH, can improve conversion efficiency. In this study, the production of sugar monomers from apple pomace using the commercial enzyme preparations Celluclast 1.5L, Viscozyme L and Novozyme 188 was investigated. A limited number of experiments were carried out and then analysed using an artificial neural network (ANN) to model the enzymatic hydrolysis process. The ANN was used to simulate the enzymatic hydrolysis process over a range of input variables, and the optimal conditions were successfully selected, as indicated by an R² value of 0.99 and a small mean squared error (MSE). The inputs for the ANN were substrate loading, enzyme loading, temperature, initial pH and a combination of these parameters, while release profiles of glucose and reducing sugars were the outputs. Enzyme loadings of 0.5 and 0.2 mg/g substrate and a substrate loading of 30% were optimal for glucose and reducing sugar release from apple pomace, respectively, resulting in concentrations of 6.5 g/L glucose and 28.9 g/L reducing sugars. Apple pomace hydrolysis can be successfully carried out based on the optimal conditions predicted by the ANN.
NASA Technical Reports Server (NTRS)
Lo, Ching F.
1999-01-01
The integration of Radial Basis Function Networks and Back Propagation Neural Networks with Multiple Linear Regression has been accomplished to map nonlinear response surfaces over a wide range of independent variables in the process of the Modern Design of Experiments. The integrated method is capable of estimating precision intervals, including confidence and prediction intervals. The power of the method has been demonstrated by applying it to a set of wind tunnel test data to construct a response surface and estimate its precision intervals.
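The distinction between confidence and prediction intervals can be made concrete with ordinary linear regression; the following sketch is a minimal stand-in (it does not reproduce the paper's neural-network hybrid, and the function name and the fixed t critical value are assumptions):

```python
import math
import statistics

def linreg_intervals(x, y, x0, t_crit=2.0):
    """Confidence interval for the mean response and prediction interval
    for a single new observation at x0, for simple linear regression.
    t_crit ~ 2.0 roughly approximates the 95% t quantile for moderate n."""
    n = len(x)
    xbar, ybar = statistics.mean(x), statistics.mean(y)
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    b0 = ybar - b1 * xbar
    # Residual variance with n - 2 degrees of freedom.
    s2 = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y)) / (n - 2)
    yhat = b0 + b1 * x0
    se_mean = math.sqrt(s2 * (1 / n + (x0 - xbar) ** 2 / sxx))
    # The extra "1 +" term is the new observation's own noise: the
    # prediction interval is therefore always wider than the CI.
    se_pred = math.sqrt(s2 * (1 + 1 / n + (x0 - xbar) ** 2 / sxx))
    ci = (yhat - t_crit * se_mean, yhat + t_crit * se_mean)
    pi = (yhat - t_crit * se_pred, yhat + t_crit * se_pred)
    return yhat, ci, pi
```

The confidence interval bounds the mean response at x0; the prediction interval bounds where a single future observation is likely to fall.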
Development of an analytical method to predict helicopter main rotor performance in icing conditions
NASA Technical Reports Server (NTRS)
Britton, Randall K.
1992-01-01
Historically, certification of a helicopter for flight into known icing conditions has been a problem because of the emphasis on flight testing for verification of system performance. Flight testing in icing conditions is difficult because, in addition to being dangerous and expensive, the desired conditions often cannot be readily found in nature. The problem is compounded for helicopters because of their small range in comparison with many fixed-wing aircraft: helicopters are forced to wait for conditions to occur in a certain region rather than seeking them out. These and other drawbacks to flight testing have prompted strong interest in developing validated alternatives, one of which is theoretical prediction. It is desirable to be able to predict how a helicopter will perform when subjected to icing conditions. Herein, calculations are restricted to the main rotor, and are illustrated. The computational tool used to obtain performance is the lifting line analysis of B65. B65 incorporates experimental data into data banks in order to determine the section lift, drag, and moment characteristics of various airfoils at different Mach numbers and angles of attack. The local flow angle is calculated at user-specified radial locations. This flow angle, along with the local Mach number, is then cross-referenced with the airfoil tables to obtain the local section characteristics. The local characteristics are then integrated together to obtain the attributes of the entire rotor. Once the clean performance is known, the type and shape of ice which accretes on the rotor blades is characterized using the analysis of LEWICE. The Interactive Boundary Layer (IBL) method then calculates the 2-D characteristics of the iced airfoil for input into the airfoil data bank of B65. Calculations are restricted to natural ice shedding and it is assumed that no de-icing takes place. Once the new lift, drag, and moment
A contracting-interval program for the Danilewski method. Ph.D. Thesis - Va. Univ.
NASA Technical Reports Server (NTRS)
Harris, J. D.
1971-01-01
The concept of contracting-interval programs is applied to finding the eigenvalues of a matrix. The development is a three-step process in which (1) a program is developed for the reduction of a matrix to Hessenberg form, (2) a program is developed for the reduction of a Hessenberg matrix to colleague form, and (3) the characteristic polynomial with interval coefficients is readily obtained from the interval of colleague matrices. This interval polynomial is then factored into quadratic factors so that the eigenvalues may be obtained. To develop a contracting-interval program for factoring this polynomial with interval coefficients it is necessary to have an iteration method which converges even in the presence of controlled rounding errors. A theorem is stated giving sufficient conditions for the convergence of Newton's method when both the function and its Jacobian cannot be evaluated exactly but errors can be made proportional to the square of the norm of the difference between the previous two iterates. This theorem is applied to prove the convergence of the generalization of the Newton-Bairstow method that is used to obtain quadratic factors of the characteristic polynomial.
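A minimal sketch of the interval-arithmetic ingredient, evaluating a characteristic polynomial whose coefficients are known only to within bounds, might look as follows (the class and function names are illustrative; this is not the thesis's program):

```python
class Interval:
    """Minimal closed-interval arithmetic: enough to evaluate a
    polynomial whose coefficients are known only to within bounds."""
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # The product interval is bounded by the extreme endpoint products.
        ps = (self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi)
        return Interval(min(ps), max(ps))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

def eval_interval_poly(coeffs, x):
    """Horner evaluation of a polynomial with interval coefficients at an
    interval argument x; coeffs run from highest to lowest degree. The
    result encloses every value the polynomial can take as coefficients
    and argument range over their intervals."""
    acc = coeffs[0]
    for c in coeffs[1:]:
        acc = acc * x + c
    return acc
```

With degenerate (zero-width) intervals this reduces to ordinary Horner evaluation; widening any coefficient widens the enclosure of the result accordingly.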
Li, Congjuan; Shi, Xiang; Mohamad, Osama Abdalla; Gao, Jie; Xu, Xinwen; Xie, Yijun
2017-01-01
Background Water influences various physiological and ecological processes of plants in different ecosystems, especially in desert ecosystems. The purpose of this study is to investigate the response of physiological and morphological acclimation of two shrubs, Haloxylon ammodendron and Calligonum mongolicum, to variations in irrigation intervals. Methodology/Principal findings The irrigation frequency was set at 1-, 2-, 4-, 8- and 12-week intervals, respectively, from March to October during 2012–2014 to investigate the response of physiological and morphological acclimation of the two desert shrubs to variations in the irrigation regime. The irrigation interval significantly affected the individual-scale carbon acquisition and biomass allocation pattern of both species. Under good water conditions (1- and 2-week intervals), carbon assimilation was significantly higher than under the other treatments; under water shortage conditions (8- and 12-week intervals), there was much defoliation; and under moderate irrigation intervals (4 weeks), the assimilative organs grew steadily with almost no defoliation occurring. Conclusion/Significance Both studied species maintained similar ecophysiologically adaptive strategies, while C. mongolicum was more sensitive to drought stress because of its shallow root system and preferential belowground allocation of resources. A moderate irrigation interval of 4 weeks was a suitable pattern for both plants since it not only saved water but also met the water demands of the plants. PMID:28719623
The 32nd CDC: Robust stabilizer synthesis for interval plants using Nevanlinna-Pick theory
NASA Technical Reports Server (NTRS)
Bhattacharya, Saikat; Keel, L. H.; Bhattacharyya, S. P.
1989-01-01
The synthesis of robustly stabilizing compensators for interval plants, i.e., plants whose parameters vary within prescribed ranges, is discussed. Well-known H∞ methods are used to establish robust stabilizability conditions for a family of plants and also to synthesize controllers that stabilize the whole family. Though conservative, these methods give a very simple way to arrive at a family of robust stabilizers for an interval plant.
NASA Astrophysics Data System (ADS)
Torries, Brian; Shamsaei, Nima
2017-12-01
The effects of different cooling rates, as achieved by varying the interlayer time interval, on the fatigue behavior of additively manufactured Ti-6Al-4V specimens were investigated and modeled via a microstructure-sensitive fatigue model. Comparisons are made between two sets of specimens fabricated via Laser Engineered Net Shaping (LENS™), with variance in interlayer time interval accomplished by depositing either one or two specimens per print operation. Fully reversed, strain-controlled fatigue tests were conducted, with fractography following specimen failure. A microstructure-sensitive fatigue model was calibrated to model the fatigue behavior of both sets of specimens and was found to be capable of correctly predicting the longer fatigue lives of the single-built specimens and the reduced scatter of the double-built specimens; all data points fell within the predicted upper and lower bounds of fatigue life. The time interval effects and the ability to be modeled are important to consider when producing test specimens that are smaller than the production part (i.e., property-performance relationships).
Khosrow-Khavar, Farzad; Tavakolian, Kouhyar; Blaber, Andrew; Menon, Carlo
2016-10-12
The purpose of this research was to design a delineation algorithm that could detect specific fiducial points of the seismocardiogram (SCG) signal with or without using the electrocardiogram (ECG) R-wave as the reference point. The detected fiducial points were used to estimate cardiac time intervals. Due to the complexity and sensitivity of the SCG signal, the algorithm was designed to robustly discard low-quality cardiac cycles, i.e., those that contain unrecognizable fiducial points. The algorithm was trained on a dataset containing 48,318 manually annotated cardiac cycles. It was then applied to three test datasets: 65 young healthy individuals (dataset 1), 15 individuals above 44 years old (dataset 2), and 25 patients with previous heart conditions (dataset 3). The algorithm accomplished high prediction accuracy, with a root-mean-square error of less than 5 ms for all the test datasets. The algorithm's overall mean detection rates per individual recording (DRI) were 74, 68, and 42 percent for the three test datasets when concurrent ECG and SCG were used. For the standalone SCG case, the mean DRI values were 32, 14, and 21 percent. When the proposed algorithm was applied to concurrent ECG and SCG signals, the desired fiducial points of the SCG signal were successfully estimated with a high detection rate. For the standalone case, however, the algorithm achieved high prediction accuracy and detection rate only for the young individual dataset. The presented algorithm could be used for accurate and non-invasive estimation of cardiac time intervals.
Hematology and plasma chemistry reference intervals for cultured tilapia (Oreochromis hybrid).
Hrubec, Terry C.; Cardinale, Jenifer L.; Smith, Stephen A.
2000-01-01
Tilapia are a commonly aquacultured fish, yet little is known about their normal physiology and response to disease. In this study we determined complete hematologic (n=40) and plasma biochemical (n=63) profiles in production tilapia (Oreochromis hybrids). The fish were raised in recirculating systems with a high stocking density (120 g/L), and were in the middle of a 15-month production cycle. Blood was analyzed using standard techniques, and reference intervals were determined using nonparametric methods. Non-production tilapia (n=15) from low-density tanks (4 g/L) also were sampled; the clinical chemistry results were compared to reference intervals from the fish raised in high-density tanks. Differences were noted in plasma protein, calcium and phosphorus concentrations, such that reference intervals for high-density production tilapia were not applicable to fish raised under different environmental and management conditions.
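Nonparametric reference intervals of the kind used here are typically the central 95% of the sorted sample. A minimal percentile-based sketch follows (the exact rank convention varies between guidelines, e.g. CLSI's, so the indices below are one illustrative choice):

```python
def reference_interval(values, coverage=0.95):
    """Nonparametric reference interval: the bounds of the central
    `coverage` fraction of the sorted sample (percentile method).
    No distributional assumption is made about the analyte."""
    s = sorted(values)
    n = len(s)
    alpha = (1 - coverage) / 2
    lo_idx = int(alpha * (n - 1))          # ~2.5th percentile rank
    hi_idx = int(round((1 - alpha) * (n - 1)))  # ~97.5th percentile rank
    return s[lo_idx], s[hi_idx]
```

For reliable nonparametric limits, guidelines generally recommend at least ~120 reference subjects; smaller samples (as for some analytes here) give wider uncertainty in the interval endpoints.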
NASA Technical Reports Server (NTRS)
Grimes-Ledesma, Lorie; Murthy, Pappu L. N.; Phoenix, S. Leigh; Glaser, Ronald
2007-01-01
In conjunction with a recent NASA Engineering and Safety Center (NESC) investigation of flight worthiness of Kevlar Overwrapped Composite Pressure Vessels (COPVs) on board the Orbiter, two stress rupture life prediction models were proposed independently by Phoenix and by Glaser. In this paper, the use of these models to determine the system reliability of 24 COPVs currently in service on board the Orbiter is discussed. The models are briefly described, compared to each other, and model parameters and parameter uncertainties are also reviewed to understand confidence in reliability estimation as well as the sensitivities of these parameters in influencing overall predicted reliability levels. Differences and similarities in the various models will be compared via stress rupture reliability curves (stress ratio vs. lifetime plots). Also outlined will be the differences in the underlying model premises, and predictive outcomes. Sources of error and sensitivities in the models will be examined and discussed based on sensitivity analysis and confidence interval determination. Confidence interval results and their implications will be discussed for the models by Phoenix and Glaser.
One-way ANOVA based on interval information
NASA Astrophysics Data System (ADS)
Hesamian, Gholamreza
2016-08-01
This paper deals with extending the one-way analysis of variance (ANOVA) to the case where the observed data are represented by closed intervals rather than real numbers. In this approach, a notion of interval random variable is first introduced. In particular, a normal distribution with interval parameters is introduced to investigate hypotheses about the equality of interval means or to test the homogeneity-of-interval-variances assumption. Moreover, the least significant difference (LSD) method for multiple comparison of interval means is developed for when the null hypothesis of equal means is rejected. Then, at a given interval significance level, an index is applied to compare the interval test statistic with the related interval critical value as a criterion for accepting or rejecting the null interval hypothesis of interest. Finally, the decision-making method yields degrees of acceptance or rejection of the interval hypotheses. An applied example is used to show the performance of this method.
Kim, Minseung; Zorraquino, Violeta; Tagkopoulos, Ilias
2015-03-01
A tantalizing question in cellular physiology is whether the cellular state and environmental conditions can be inferred from the expression signature of an organism. To investigate this relationship, we created an extensive normalized gene expression compendium for the bacterium Escherichia coli that was further enriched with meta-information through an iterative learning procedure. We then constructed an ensemble method to predict environmental and cellular state, including strain, growth phase, medium, oxygen level, antibiotic and carbon source presence. Results show that gene expression is an excellent predictor of environmental structure, with multi-class ensemble models achieving balanced accuracy between 70.0% (±3.5%) and 98.3% (±2.3%) for the various characteristics. Interestingly, this performance can be significantly boosted when environmental and strain characteristics are simultaneously considered, as a composite classifier that captures the inter-dependencies of three characteristics (medium, phase and strain) achieved 10.6% (±1.0%) higher performance than any individual model. Contrary to expectations, only 59% of the top informative genes were also identified as differentially expressed under the respective conditions. Functional analysis of the respective genetic signatures implicates a wide spectrum of Gene Ontology terms and KEGG pathways with condition-specific information content, including iron transport, transferases, and enterobactin synthesis. Further experimental phenotypic-to-genotypic mapping that we conducted for knock-out mutants argues for the information content of top-ranked genes. This work demonstrates the degree to which genome-scale transcriptional information can be predictive of latent, heterogeneous and seemingly disparate phenotypic and environmental characteristics, with far-reaching applications.
Pavlovian conditioning and cumulative reinforcement rate.
Harris, Justin A; Patterson, Angela E; Gharaei, Saba
2015-04-01
In 5 experiments using delay conditioning of magazine approach with rats, reinforcement rate was varied either by manipulating the mean interval between onset of the conditioned stimulus (CS) and unconditioned stimulus (US) or by manipulating the proportion of CS presentations that ended with the US (trial-based reinforcement rate). Both manipulations influenced the acquisition of responding. In each experiment, a specific comparison was made between 2 CSs that differed in their mean CS-US interval and in their trial-based reinforcement rate, such that the cumulative reinforcement rate-the cumulative duration of the CS between reinforcements-was the same for the 2 CSs. For example, a CS reinforced on 100% of trials with a mean CS-US interval of 60 s was compared with a CS reinforced on 33% of trials and a mean duration of 20 s. Across the 5 experiments, conditioning was virtually identical for the 2 CSs with matched cumulative reinforcement rate. This was true as long as the timing of the US was unpredictable and, thus, response rates were uniform across the length of the CS. We conclude that the effects of CS-US interval and of trial-based reinforcement rate are reducible entirely to their common effect on cumulative reinforcement rate. We discuss the implications of this for rate-based, trial-based, and real-time associative models of conditioning. (c) 2015 APA, all rights reserved).
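The matching of cumulative reinforcement rate in the example above is simple arithmetic: 60 s of CS per US at 100% reinforcement equals 20 s per trial at one-third reinforcement. A one-line sketch (the function name is assumed):

```python
def cumulative_cs_time_per_us(mean_cs_duration_s, reinforced_fraction):
    """Mean cumulative CS exposure per reinforcement (US): average CS
    seconds elapsed per delivered US. Two CSs with equal values of this
    quantity are matched on cumulative reinforcement rate even if their
    CS-US intervals and trial-based reinforcement rates differ."""
    return mean_cs_duration_s / reinforced_fraction
```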
Lean mass predicts conditioned pain modulation in adolescents across weight status.
Stolzman, S; Hoeger Bement, M
2016-07-01
There is a wide continuum of conditioned pain modulation (CPM) in adults, with older adults experiencing an attenuated CPM response compared with younger adults. Less is known for adolescents and the role of anthropometrics. Fifty-six adolescents (15.1 ± 1.8 years; 32 normal weight and 24 overweight/obese; 27 boys) completed a CPM session that included anthropometric testing. Pressure pain thresholds were measured at the nailbed and deltoid muscle (test stimuli) with the foot submerged in a cool or ice water bath (conditioning stimulus). Weight status, body composition (dual-energy X-ray absorptiometry scan), physical activity levels and clinical pain were also evaluated. The CPM response in adolescents was similar across sites (nailbed vs. deltoid), weight status (normal vs. overweight/obese) and sex. CPM measured at the deltoid muscle was positively associated with left arm lean mass but not fat mass; lean mass of the arm uniquely predicted 10% of the CPM magnitude. CPM measured at the nailbed was positively correlated with physical activity levels. These results suggest that lean mass and physical activity levels may contribute to endogenous pain inhibition in adolescents across weight status. © 2016 European Pain Federation - EFIC®
Decadal climate predictions improved by ocean ensemble dispersion filtering
NASA Astrophysics Data System (ADS)
Kadow, C.; Illing, S.; Kröner, I.; Ulbrich, U.; Cubasch, U.
2017-06-01
Decadal predictions by Earth system models aim to capture the state and phase of the climate several years in advance. Atmosphere-ocean interaction plays an important role for such climate forecasts. While short-term weather forecasts represent an initial value problem and long-term climate projections represent a boundary condition problem, the decadal climate prediction falls in-between these two time scales. In recent years, more precise initialization techniques of coupled Earth system models and increased ensemble sizes have improved decadal predictions. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Here we show that the climate prediction skill of an Earth system model can be improved by a shift of the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure, called ensemble dispersion filter, results in more accurate results than the standard decadal prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show an increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution. Our results demonstrate how decadal climate predictions benefit from ocean ensemble dispersion filtering toward the ensemble mean.
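The ensemble dispersion filter can be sketched as a relaxation of each member's ocean state toward the ensemble mean; the code below is a schematic illustration with assumed names and a free strength parameter, not the authors' implementation:

```python
def dispersion_filter(members, strength=0.5):
    """Shift each ensemble member's state vector toward the ensemble mean.
    members: list of state vectors (lists of floats), one per member;
    strength: 0 leaves members unchanged, 1 collapses them onto the mean.
    The ensemble mean itself is preserved; only the spread is reduced."""
    n = len(members)
    mean = [sum(vals) / n for vals in zip(*members)]
    return [[(1 - strength) * v + strength * m
             for v, m in zip(member, mean)]
            for member in members]
```

Applied at seasonal intervals, such a step damps ensemble dispersion so the initialized signal is retained longer, which is the intuition behind the skill gain reported above.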
Cure modeling in real-time prediction: How much does it help?
Ying, Gui-Shuang; Zhang, Qiang; Lan, Yu; Li, Yimei; Heitjan, Daniel F
2017-08-01
Various parametric and nonparametric modeling approaches exist for real-time prediction in time-to-event clinical trials. Recently, Chen (2016, BMC Medical Research Methodology 16) proposed a prediction method based on parametric cure-mixture modeling, intended to cover situations where a non-negligible fraction of subjects appears to be cured. In this article we apply a Weibull cure-mixture model to create predictions, demonstrating the approach in RTOG 0129, a randomized trial in head-and-neck cancer. We compare the ultimate realized data in RTOG 0129 to interim predictions from a Weibull cure-mixture model, a standard Weibull model without a cure component, and a nonparametric model based on the Bayesian bootstrap. The standard Weibull model predicted that events would occur earlier than the Weibull cure-mixture model did, but the difference was unremarkable until late in the trial, when evidence for a cure became clear. Nonparametric predictions often gave undefined predictions or infinite prediction intervals, particularly at early stages of the trial. Simulations suggest that cure modeling can yield better-calibrated prediction intervals when there is a cured component, or the appearance of a cured component, but at a substantial cost in the average width of the intervals. Copyright © 2017 Elsevier Inc. All rights reserved.
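The Weibull cure-mixture survival function underlying such predictions has a closed form, S(t) = π + (1 − π)·exp(−(t/λ)^k), where π is the cured fraction, λ the Weibull scale, and k the shape. A minimal sketch (parameter names assumed):

```python
import math

def cure_mixture_survival(t, cure_frac, scale, shape):
    """Survival function of a Weibull cure-mixture model:
        S(t) = pi + (1 - pi) * exp(-(t / scale) ** shape)
    where pi (cure_frac) is the long-term survivor fraction. Unlike a
    standard Weibull, S(t) plateaus at pi instead of decaying to zero,
    which is why the two models diverge late in a trial."""
    return cure_frac + (1 - cure_frac) * math.exp(-(t / scale) ** shape)
```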
Interpregnancy Interval and Adverse Pregnancy Outcomes: An Analysis of Successive Pregnancies.
Hanley, Gillian E; Hutcheon, Jennifer A; Kinniburgh, Brooke A; Lee, Lily
2017-03-01
To examine the association between interpregnancy interval and maternal-neonate health when matching women to their successive pregnancies to control for differences in maternal risk factors and compare these results with traditional unmatched designs. We conducted a retrospective cohort study of 38,178 women with three or more deliveries (two or greater interpregnancy intervals) between 2000 and 2015 in British Columbia, Canada. We examined interpregnancy interval (0-5, 6-11, 12-17, 18-23 [reference], 24-59, and 60 months or greater) in relation to neonatal outcomes (preterm birth [less than 37 weeks of gestation], small-for-gestational-age birth [less than the 10th centile], use of neonatal intensive care, low birth weight [less than 2,500 g]) and maternal outcomes (gestational diabetes, beginning the subsequent pregnancy obese [body mass index 30 or greater], and preeclampsia-eclampsia). We used conditional logistic regression to compare interpregnancy intervals within the same mother and unconditional (unmatched) logistic regression to enable comparison with prior research. Analyses using the traditional unmatched design showed significantly increased risks associated with short interpregnancy intervals (eg, there were 232 preterm births [12.8%] in 0-5 months compared with 501 [8.2%] in the 18-23 months reference group; adjusted odds ratio [OR] for preterm birth 1.53, 95% confidence interval [CI] 1.35-1.73). However, these risks were eliminated in within-woman matched analyses (adjusted OR for preterm birth 0.85, 95% CI 0.71-1.02). Matched results indicated that short interpregnancy intervals were significantly associated with increased risk of gestational diabetes (adjusted OR 1.35, 95% CI 1.02-1.80 for 0-5 months) and beginning the subsequent pregnancy obese (adjusted OR 1.61, 95% CI 1.05-2.45 for 0-5 months and adjusted OR 1.43, 95% CI 1.10-1.87 for 6-11 months). Previously reported associations between short interpregnancy intervals and adverse neonatal
Roth, Guy; Assor, Avi; Niemiec, Christopher P; Deci, Edward L; Ryan, Richard M
2009-07-01
The authors conducted 2 studies of 9th-grade Israeli adolescents (169 in Study 1, 156 in Study 2) to compare the parenting practices of conditional positive regard, conditional negative regard, and autonomy support using data from multiple reporters. Two socialization domains were studied: emotion control and academics. Results were consistent with the self-determination theory model of internalization, which posits that (a) conditional negative regard predicts feelings of resentment toward parents, which then predict dysregulation of negative emotions and academic disengagement; (b) conditional positive regard predicts feelings of internal compulsion, which then predict suppressive regulation of negative emotions and grade-focused academic engagement; and (c) autonomy support predicts sense of choice, which then predicts integrated regulation of negative emotions and interest-focused academic engagement. These findings suggest that even parents' use of conditional positive regard as a socialization practice has adverse emotional and academic consequences, relative to autonomy support.
Zhong, Qing; Chen, Qi-Yue; Li, Ping; Xie, Jian-Wei; Wang, Jia-Bin; Lin, Jian-Xian; Lu, Jun; Cao, Long-Long; Lin, Mi; Tu, Ru-Hong; Zheng, Chao-Hui; Huang, Chang-Ming
2018-04-20
The dynamic prognosis of patients who have undergone curative surgery for gastric cancer has yet to be reported. Our objective was to devise an accurate tool for predicting the conditional probability of survival for these patients. We analyzed 11,551 gastric cancer patients from the Surveillance, Epidemiology, and End Results database. Two-thirds of the patients were selected randomly for the development set and one-third for the validation set. Two nomograms were constructed to predict the conditional probability of overall survival and the conditional probability of disease-specific survival, using conditional survival methods. We then applied these nomograms to the 4,001 patients in the database from Fujian Medical University Union Hospital, Fuzhou, China, one of the most active Chinese institutes. The 5-year conditional probability of overall survival of the patients was 41.6% immediately after resection and increased to 52.8%, 68.2%, and 80.4% at 1, 2, and 3 years after gastrectomy. The 5-year conditional probability of disease-specific survival "increased" from 48.9% at the time of gastrectomy to 59.8%, 74.7%, and 85.5% for patients surviving 1, 2, and 3 years, respectively. Sex; race; age; depth of tumor invasion; lymph node metastasis; and tumor size, site, and grade were associated with overall survival and disease-specific survival (P <.05). Within the Surveillance, Epidemiology, and End Results validation set, the accuracy of the conditional probability of overall survival nomogram was 0.77, 0.81, 0.82, and 0.82 at 1, 3, 5, and 10 years after gastrectomy, respectively. Within the other validation set from the Fujian Medical University Union Hospital (n = 4,001), the accuracy of the conditional probability of overall survival nomogram was 0.76, 0.79, 0.77, and 0.77 at 1, 3, 5, and 10 years, respectively. The accuracy of the conditional probability of disease-specific survival model was also favorable. The calibration curve demonstrated good agreement
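The conditional survival quantity reported here is the ratio S(s + t)/S(s): the probability of surviving an additional t years given survival to year s. A minimal sketch (names assumed):

```python
def conditional_survival(surv, s, t):
    """Conditional probability of surviving an additional t years given
    survival to year s:
        CS(t | s) = S(s + t) / S(s)
    surv: a function mapping years to the survival probability S(.)."""
    return surv(s + t) / surv(s)
```

For an exponential S the ratio is independent of s (memorylessness); for real cancer cohorts CS(t | s) typically rises with s, which is exactly the improvement (41.6% to 80.4% over three years here) that the conditional-survival nomograms are built to report.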
Zhu, K; Lou, Z; Zhou, J; Ballester, N; Kong, N; Parikh, P
2015-01-01
This article is part of the Focus Theme of Methods of Information in Medicine on "Big Data and Analytics in Healthcare". Hospital readmissions raise healthcare costs and cause significant distress to providers and patients. It is, therefore, of great interest to healthcare organizations to predict which patients are at risk of being readmitted to their hospitals. However, current logistic regression based risk prediction models have limited predictive power when applied to hospital administrative data. Meanwhile, although decision trees and random forests have been applied, they tend to be too complex for hospital practitioners to interpret. The objective was to explore the use of conditional logistic regression to increase prediction accuracy. We analyzed an HCUP statewide inpatient discharge record dataset, which includes patient demographics, clinical and care utilization data from California. We extracted records of heart failure Medicare beneficiaries who had inpatient experience during an 11-month period. We corrected the data imbalance issue with under-sampling. In our study, we first applied standard logistic regression and decision trees to obtain influential variables and derive practically meaningful decision rules. We then stratified the original data set accordingly and applied logistic regression on each data stratum. We further explored the effect of interacting variables in the logistic regression modeling. We conducted cross validation to assess the overall prediction performance of conditional logistic regression (CLR) and compared it with standard classification models. The developed CLR models outperformed several standard classification models (e.g., straightforward logistic regression, stepwise logistic regression, random forest, support vector machine). For example, the best CLR model improved the classification accuracy by nearly 20% over the straightforward logistic regression model. Furthermore, the developed CLR models tend to achieve better sensitivity of
Smeers, Inge; Decorte, Ronny; Van de Voorde, Wim; Bekaert, Bram
2018-05-01
DNA methylation is a promising biomarker for forensic age prediction. A challenge that has emerged in recent studies is the fact that prediction errors become larger with increasing age due to interindividual differences in epigenetic ageing rates. This phenomenon of non-constant variance or heteroscedasticity violates an assumption of the often used method of ordinary least squares (OLS) regression. The aim of this study was to evaluate alternative statistical methods that do take heteroscedasticity into account in order to provide more accurate, age-dependent prediction intervals. A weighted least squares (WLS) regression is proposed as well as a quantile regression model. Their performances were compared against an OLS regression model based on the same dataset. Both models provided age-dependent prediction intervals which account for the increasing variance with age, but WLS regression performed better in terms of success rate in the current dataset. However, quantile regression might be a preferred method when dealing with a variance that is not only non-constant, but also not normally distributed. Ultimately the choice of which model to use should depend on the observed characteristics of the data. Copyright © 2018 Elsevier B.V. All rights reserved.
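A minimal sketch of the WLS idea above, assuming a simple two-stage scheme (fit OLS, model the absolute residuals as a linear function of age, then refit with inverse-variance weights). The data are simulated and the linear variance model is an assumption for illustration, not the authors' exact procedure.

```python
import math, random

def wls_fit(x, y, w):
    """Closed-form weighted least squares for y = a + b*x."""
    sw = sum(w)
    xm = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ym = sum(wi * yi for wi, yi in zip(w, y)) / sw
    b = (sum(wi * (xi - xm) * (yi - ym) for wi, xi, yi in zip(w, x, y))
         / sum(wi * (xi - xm) ** 2 for wi, xi in zip(w, x)))
    return ym - b * xm, b

# Simulated chronological ages and a methylation-style age estimate whose
# error grows with age (heteroscedasticity), mimicking the paper's setting.
random.seed(1)
age = [random.uniform(18, 80) for _ in range(500)]
y = [a + random.gauss(0, 0.5 + 0.05 * a) for a in age]

# Stage 1: OLS (unit weights) to obtain residuals.
a0, b0 = wls_fit(age, y, [1.0] * len(age))
res = [yi - (a0 + b0 * xi) for xi, yi in zip(age, y)]

# Stage 2: model |residual| as linear in age. Since E|N(0,s)| = s*sqrt(2/pi),
# scale up to an age-dependent SD, then refit with 1/variance weights.
c0, c1 = wls_fit(age, [abs(r) for r in res], [1.0] * len(res))

def sd_at(x):
    return max(c0 + c1 * x, 1e-6) * math.sqrt(math.pi / 2)

a1, b1 = wls_fit(age, y, [1.0 / sd_at(xi) ** 2 for xi in age])

def prediction_interval(x, z=1.96):
    """Age-dependent ~95% prediction interval (ignores parameter uncertainty)."""
    mid = a1 + b1 * x
    return mid - z * sd_at(x), mid + z * sd_at(x)
```

The interval widens with age, which is exactly the behavior the abstract argues OLS with a constant-width interval cannot provide.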
Kliem, Sören; Weusthoff, Sarah; Hahlweg, Kurt; Baucom, Katherine J W; Baucom, Brian R
2015-12-01
Identifying risk factors for divorce or separation is an important step in the prevention of negative individual outcomes and societal costs associated with relationship dissolution. Programs that aim to prevent relationship distress and dissolution typically focus on changing processes that occur during couple conflict, although the predictive ability of conflict-specific variables has not been examined in the context of other factors related to relationship dissolution. The authors examine whether emotional responding and communication during couple conflict predict relationship dissolution after controlling for overall relationship quality and individual well-being. Using nonparametric conditional survival trees, the study at hand simultaneously examined the predictive abilities of physiological (systolic and diastolic blood pressure, heart rate, cortisol) and behavioral (fundamental frequency; f0) indices of emotional responding, as well as observationally coded positive and negative communication behavior, on long-term relationship stability after controlling for relationship satisfaction and symptoms of depression. One hundred thirty-six spouses were assessed after participating in a randomized clinical trial of a relationship distress prevention program as well as 11 years thereafter; 32.5% of the couples' relationships had dissolved by follow up. For men, the only significant predictor of relationship dissolution was cortisol change score (p = .012). For women, only f0 range was a significant predictor of relationship dissolution (p = .034). These findings highlight the importance of emotional responding during couple conflict for long-term relationship stability. (c) 2015 APA, all rights reserved).
Dynamic Parameters Variability: Time Interval Interference on Ground Reaction Force During Running.
Pennone, Juliana; Mezêncio, Bruno; Amadio, Alberto C; Serrão, Júlio C
2016-04-01
The aim of this study was to determine the effect of the time between measures on ground reaction force running variability; 15 healthy men (age = 23.8 ± 3.7 years; weight = 72.8 ± 7.7 kg; height = 174.3 ± 8.4 cm) performed two trials of running 45 minutes at 9 km/hr at intervals of seven days. The ground reaction forces were recorded every 5 minutes. The coefficients of variation of indicative parameters of the ground reaction forces for each condition were compared. The coefficients of variation of the ground reaction force curves analyzed between intervals and sessions were 21.9% and 21.48%, respectively. There was no significant difference for the ground reaction force parameters Fy1, tFy1, TC1, Imp50, Fy2, and tFy2 between intervals and sessions. Although the ground reaction force variables present a natural variability, this variability in intervals and in sessions remained consistent, ensuring a high reliability in repeated measures designs. © The Author(s) 2016.
Intermediate-term earthquake prediction
Knopoff, L.
1990-01-01
The problems in predicting earthquakes have been attacked by phenomenological methods from prehistoric times to the present. The associations of presumed precursors with large earthquakes often have been remarked upon. The difficulty in identifying whether such correlations are due to some chance coincidence or are real precursors is that usually one notes the associations only in the relatively short time intervals before the large events. Only rarely, if ever, is notice taken of whether the presumed precursor is to be found in the rather long intervals that follow large earthquakes, or in fact is absent in these post-earthquake intervals. If there are enough examples, the presumed correlation fails as a precursor in the former case, while in the latter case the precursor would be verified. Unfortunately, the observer is usually not concerned with the 'uninteresting' intervals that have no large earthquakes.
Stimulus-to-matching-stimulus interval influences N1, P2, and P3b in an equiprobable Go/NoGo task.
Steiner, Genevieve Z; Barry, Robert J; Gonsalvez, Craig J
2014-10-01
Previous research has shown that as the stimulus-to-matching-stimulus interval (including the target-to-target interval, TTI, and nontarget-to-nontarget interval, NNI) increases, the amplitude of the P300 ERP component increases systematically. Here, we extended previous P300 research and explored TTI and NNI effects on the various ERP components elicited in an auditory equiprobable Go/NoGo task. We also examined whether a similar mechanism was underpinning interval effects in early ERP components (e.g., N1). Thirty participants completed a specially-designed variable-ISI equiprobable task whilst their EEG activity was recorded. Component amplitudes were extracted using temporal PCA with unrestricted Varimax rotation. As expected, N1, P2, and P3b amplitudes increased as TTI and NNI increased; however, Processing Negativity (PN) and Slow Wave (SW) did not show the same systematic change with interval increments. To determine the origin of interval effects in sequential processing, a multiple regression analysis was conducted on each ERP component including stimulus type, interval, and all preceding components as predictors. These analyses showed that matching-stimulus interval predicted N1, P3b, and weakly predicted P2, but not PN or SW; SW was determined by P3b only. These results suggest that N1, P3b, and to some extent, P2, are affected by a similar temporal mechanism. However, the dissimilar pattern of results obtained for sequential ERP components indicates that matching-stimulus intervals are not affecting all aspects of stimulus processing. This argues against a global mechanism, such as a pathway-specific refractory effect, and suggests that stimulus processing is occurring in parallel pathways, some of which are not affected by temporal manipulations of matching-stimulus interval. Copyright © 2014 Elsevier B.V. All rights reserved.
Translating QT interval prolongation from conscious dogs to humans.
Dubois, Vincent F S; Smania, Giovanni; Yu, Huixin; Graf, Ramona; Chain, Anne S Y; Danhof, Meindert; Della Pasqua, Oscar
2017-02-01
In spite of screening procedures in early drug development, uncertainty remains about the propensity of new chemical entities (NCEs) to prolong the QT/QTc interval. The evaluation of proarrhythmic activity using a comprehensive in vitro proarrhythmia assay does not fully account for pharmacokinetic-pharmacodynamic (PKPD) differences in vivo. In the present study, we evaluated the correlation between drug-specific parameters describing QT interval prolongation in dogs and in humans. Using estimates of the drug-specific parameter, data on the slopes of the PKPD relationships of nine compounds with varying QT-prolonging effects (cisapride, sotalol, moxifloxacin, carabersat, GSK945237, SB237376 and GSK618334, and two anonymized NCEs) were analysed. Mean slope estimates varied between -0.98 and 6.1 ms·μM⁻¹ in dogs and between -10 and 90 ms·μM⁻¹ in humans, indicating a wide range of effects on the QT interval. Linear regression techniques were then applied to characterize the correlation between the parameter estimates across species. For compounds without a mixed ion channel block, a correlation was observed between the drug-specific parameter in dogs and humans (y = -1.709 + 11.6x; R² = 0.989). These results show that per unit concentration, the drug effect on the QT interval in humans is 11.6-fold larger than in dogs. Together with information about the expected therapeutic exposure, the evidence of a correlation between the compound-specific parameter in dogs and in humans represents an opportunity for translating preclinical safety data before progression into the clinic. Whereas further investigation is required to establish the generalizability of our findings, this approach can be used with clinical trial simulations to predict the probability of QT prolongation in humans. © 2016 The British Pharmacological Society.
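The reported inter-species regression can be written directly as a helper. The coefficients come from the abstract (y = -1.709 + 11.6x); the concentration-scaling function is an illustrative assumption of a purely linear PKPD relationship, not part of the paper's model specification.

```python
def human_qt_slope(dog_slope_ms_per_uM):
    """Inter-species regression quoted in the abstract (compounds without
    mixed ion-channel block): human slope = -1.709 + 11.6 * dog slope."""
    return -1.709 + 11.6 * dog_slope_ms_per_uM

def predicted_human_dqt(dog_slope_ms_per_uM, conc_uM):
    """Illustrative linear extrapolation: delta-QT (ms) at a given free
    concentration, assuming the linear slope model holds at that exposure."""
    return human_qt_slope(dog_slope_ms_per_uM) * conc_uM
```

For example, a dog slope of 1.0 ms·μM⁻¹ translates to roughly 9.9 ms·μM⁻¹ in humans under this regression.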
NASA Astrophysics Data System (ADS)
Zheng, Qin; Yang, Zubin; Sha, Jianxin; Yan, Jun
2017-02-01
In predictability problem research, the conditional nonlinear optimal perturbation (CNOP) describes the initial perturbation that satisfies a certain constraint condition and causes the largest prediction error at the prediction time. The CNOP has been successfully applied in estimation of the lower bound of maximum predictable time (LBMPT). Generally, CNOPs are calculated by a gradient descent algorithm based on the adjoint model, which is called ADJ-CNOP. This study, through the two-dimensional Ikeda model, investigates the impacts of the nonlinearity on ADJ-CNOP and the corresponding precision problems when using ADJ-CNOP to estimate the LBMPT. Our conclusions are that (1) when the initial perturbation is large or the prediction time is long, the strong nonlinearity of the dynamical model in the prediction variable will lead to failure of the ADJ-CNOP method, and (2) when the objective function has multiple extreme values, ADJ-CNOP has a large probability of producing local CNOPs, hence making a false estimation of the LBMPT. Furthermore, the particle swarm optimization (PSO) algorithm, one kind of intelligent algorithm, is introduced to solve this problem. The method using PSO to compute CNOP is called PSO-CNOP. The results of numerical experiments show that even with a large initial perturbation and long prediction time, or when the objective function has multiple extreme values, PSO-CNOP can always obtain the global CNOP. Since the PSO algorithm is a heuristic search algorithm based on the population, it can overcome the impact of nonlinearity and the disturbance from multiple extremes of the objective function. In addition, to check the estimation accuracy of the LBMPT presented by PSO-CNOP and ADJ-CNOP, we partition the constraint domain of initial perturbations into sufficiently fine grid meshes and take the LBMPT obtained by the filtering method as a benchmark. The result shows that the estimation presented by PSO-CNOP is closer to the true value than the
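A minimal sketch of the PSO-CNOP idea on the two-dimensional Ikeda map: maximize the nonlinear prediction error over perturbations inside a norm constraint using a standard particle swarm. The map parameter, swarm coefficients, iteration counts, and constraint radius here are illustrative choices, not the paper's settings.

```python
import math, random

def ikeda(state, u=0.9, steps=10):
    """Iterate the 2-D Ikeda map `steps` times."""
    x, y = state
    for _ in range(steps):
        t = 0.4 - 6.0 / (1.0 + x * x + y * y)
        x, y = (1.0 + u * (x * math.cos(t) - y * math.sin(t)),
                u * (x * math.sin(t) + y * math.cos(t)))
    return x, y

def prediction_error(delta, base, radius, steps):
    """CNOP objective: distance between perturbed and reference forecasts;
    perturbations violating ||delta|| <= radius are rejected."""
    dx, dy = delta
    if math.hypot(dx, dy) > radius:
        return -1.0
    px, py = ikeda((base[0] + dx, base[1] + dy), steps=steps)
    rx, ry = ikeda(base, steps=steps)
    return math.hypot(px - rx, py - ry)

def pso_cnop(base, radius=0.1, steps=10, n=30, iters=200, seed=2):
    """Gradient-free particle swarm search for the (global) CNOP."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-radius, radius) for _ in range(2)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [prediction_error(p, base, radius, steps) for p in pos]
    g = max(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = prediction_error(pos[i], base, radius, steps)
            if v > pval[i]:
                pval[i], pbest[i] = v, pos[i][:]
                if v > gval:
                    gval, gbest = v, pos[i][:]
    return gbest, gval
```

Because the swarm shares its global best, it does not rely on adjoint gradients and can escape the local extrema that, per the abstract, trap ADJ-CNOP.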
A Rescorla-Wagner drift-diffusion model of conditioning and timing
Alonso, Eduardo
2017-01-01
Computational models of classical conditioning have made significant contributions to the theoretic understanding of associative learning, yet they still struggle when the temporal aspects of conditioning are taken into account. Interval timing models have contributed a rich variety of time representations and provided accurate predictions for the timing of responses, but they usually have little to say about associative learning. In this article we present a unified model of conditioning and timing that is based on the influential Rescorla-Wagner conditioning model and the more recently developed Timing Drift-Diffusion model. We test the model by simulating 10 experimental phenomena and show that it can provide an adequate account for 8, and a partial account for the other 2. We argue that the model can account for more phenomena in the chosen set than other models of similar scope: CSC-TD, MS-TD, Learning to Time and Modular Theory. A comparison and analysis of the mechanisms in these models is provided, with a focus on the types of time representation and associative learning rule used. PMID:29095819
Risch, Martin; Nydegger, Urs; Risch, Lorenz
2017-01-01
In clinical practice, laboratory results are often important for making diagnostic, therapeutic, and prognostic decisions. Interpreting individual results relies on accurate reference intervals and decision limits. Despite the considerable amount of resources in clinical medicine spent on elderly patients, accurate reference intervals for the elderly are rarely available. The SENIORLAB study set out to determine reference intervals in the elderly by investigating a large variety of laboratory parameters in clinical chemistry, hematology, and immunology. The SENIORLAB study is an observational, prospective cohort study. Subjectively healthy residents of Switzerland aged 60 years and older were included for baseline examination (n = 1467), where anthropometric measurements were taken, medical history was reviewed, and a fasting blood sample was drawn under optimal preanalytical conditions. More than 110 laboratory parameters were measured, and a biobank was set up. The study participants are followed up every 3 to 5 years for quality of life, morbidity, and mortality. The primary aim is to evaluate different laboratory parameters at age-related reference intervals. The secondary aims of this study include the following: identify associations between different parameters, identify diagnostic characteristics to diagnose different circumstances, identify the prevalence of occult disease in subjectively healthy individuals, and identify the prognostic factors for the investigated outcomes, including mortality. To obtain better grounds to justify clinical decisions, specific reference intervals for laboratory parameters of the elderly are needed. Reference intervals are obtained from healthy individuals. A major obstacle when obtaining reference intervals in the elderly is the definition of health in seniors because individuals without any medical condition and any medication are rare in older adulthood. Reference intervals obtained from such individuals cannot be
ITOH, Megumi; SAKURAI, Yoshie; NAKAJIMA, Yasuhiro; KAWAMOTO, Satoshi
2015-01-01
The association between blood calcium levels and electrocardiographic variables was compared in 137 normal parturient and 36 peripartum recumbent Holstein cows to determine whether hypocalcemia in peripartum dairy cows can be rapidly diagnosed using electrocardiography. The inverse of STc (ST peak interval/√(SS interval)) and blood ionized calcium or serum calcium concentrations were strongly correlated, and both correlation coefficients were 0.81 (P<0.001). The 95% prediction interval indicated that cows with STc >0.385 ± 0.001 sec are very likely to be hypocalcemic (blood ionized or serum calcium concentrations of <0.9 mmol/l or <7.5 mg/dl, respectively). These findings indicate that hypocalcemia in parturient cows can be non-invasively estimated using the STc. PMID:26118411
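The screening rule reported above is simple enough to express directly. The STc definition and the 0.385 s cutoff are taken from the abstract; the function is an illustrative sketch, not a validated diagnostic tool.

```python
import math

def stc(st_interval_s, ss_interval_s):
    """Rate-corrected ST interval: ST peak interval / sqrt(SS interval),
    per the definition quoted in the abstract (both inputs in seconds)."""
    return st_interval_s / math.sqrt(ss_interval_s)

def likely_hypocalcemic(st_interval_s, ss_interval_s, cutoff=0.385):
    """Flag cows whose STc exceeds the reported cutoff (screening sketch only)."""
    return stc(st_interval_s, ss_interval_s) > cutoff
```

The square-root correction plays the same role as Bazett's QTc formula: it normalizes the ST interval for heart rate via the beat-to-beat (SS) interval.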
The Effects of Acute Interval Exercise and Strawberry Intake on Postprandial Lipemia.
O'Doherty, Alasdair F; Jones, Huw S; Sathyapalan, Thozhukat; Ingle, Lee; Carroll, Sean
2017-11-01
Raised postprandial triglycerides (TAG) and related oxidative stresses are strongly associated with increased cardiovascular disease risk. Acute exercise and strawberry ingestion independently ameliorate postprandial lipid excursions and oxidative stress. However, the combined effects of these lifestyle interventions are unknown. We investigated whether acute exercise and strawberry consumption improved postprandial responses to an oral fat tolerance test (OFTT) in overweight/obese males. Overweight/obese adult males underwent four separate OFTT (73 g fat, 33 g carbohydrate) with blood sampled at baseline and hourly for 4 h after OFTT. Two OFTT contained 25 g freeze-dried strawberries and two contained strawberry flavoring (placebo). Participants performed 40 min of submaximal high-intensity interval cycling exercise 16 h before one strawberry and one placebo OFTT and rested before the remaining two OFTT. Serum TAG was analyzed, and TAG area under the curve (AUC) and incremental AUC (iAUC) were calculated. Oxidative stress markers were measured at baseline and 4 h. Differences between conditions (strawberry/placebo and exercise/rest) were assessed using repeated-measures ANOVA. Ten males (age = 31.5, interquartile range = 17.8 yr, body mass index = 29.9 ± 1.8 kg·m⁻²) completed the study. TAG AUC was 1.5 mmol per 4 h·L lower for the exercise conditions compared with the rest conditions (95% confidence interval [CI] = -2.3 to -0.8 mmol per 4 h·L, P = 0.001). TAG AUC was not different between strawberry and placebo conditions (95% CI = -1.3 to 0.6 mmol per 4 h·L, P = 0.475). TAG iAUC was 0.5 mmol per 4 h·L greater for the strawberry compared with the placebo conditions (95% CI = 0.1 to 1.0 mmol per 4 h·L, P = 0.021). There were no changes in markers of lipid related oxidative stress (P > 0.05). Acute submaximal high-intensity interval cycling exercise appears effective in reducing postprandial lipemia in overweight/obese adult males. However, strawberry ingestion
The Effect of Ratio and Interval Training on Pavlovian-Instrumental Transfer in Mice
Wiltgen, Brian J.; Sinclair, Courtney; Lane, Chadrick; Barrows, Frank; Molina, Martín; Chabanon-Hicks, Chloe
2012-01-01
Conditional stimuli (CS) that are paired with reward can be used to motivate instrumental responses. This process is called Pavlovian-instrumental transfer (PIT). A recent study in rats suggested that habitual responses are particularly sensitive to the motivational effects of reward cues. The current experiments examined this idea using ratio and interval training in mice. Two groups of animals were trained to lever press for food pellets that were delivered on random ratio or random interval schedules. Devaluation tests revealed that interval training led to habitual responding while ratio training produced goal-directed actions. The presentation of CSs paired with reward led to positive transfer in both groups; however, the size of this effect was much larger in mice that were trained on interval schedules. This result suggests that habitual responses are more sensitive to the motivational influence of reward cues than goal-directed actions. The implications for neurobiological models of motivation and drug seeking behaviors are discussed. PMID:23144742
Aguilera Eguía, Raúl Alberto; Russell Guzmán, Javier Antonio; Soto Muñoz, Marcelo Enrique; Villegas González, Bastián Eduardo; Poblete Aro, Carlos Emilio; Ibacache Palma, Alejandro
2015-03-05
Type 2 diabetes mellitus is one of the major non-communicable chronic diseases in the world. Its prevalence in Chile is significant, and complications associated with this disease involve great costs, which is why prevention and treatment of this condition are essential. Physical exercise is an effective means for prevention and treatment of type 2 diabetes mellitus. The emergence of new forms of physical training, such as "high intensity interval training", presents novel therapeutic alternatives for patients and health care professionals. To assess the validity and applicability of the results regarding the effectiveness of high intensity interval training in reducing glycosylated hemoglobin in adult patients with type 2 diabetes mellitus and answer the following question: In subjects with type 2 diabetes, can high intensity interval training, compared with moderate intensity exercise, decrease glycosylated hemoglobin? We performed a critical analysis of the article "Feasibility and preliminary effectiveness of high intensity interval training in type 2 diabetes". We found no significant differences in the amount of glycosylated hemoglobin between the high intensity interval training and moderate-intensity exercise groups upon completion of the study (p>0.05). In adult patients with type 2 diabetes mellitus, high intensity interval training does not significantly improve glycosylated hemoglobin levels. Despite this, the high intensity interval training method shows as much improvement in body composition and physical condition as the moderate intensity exercise program.
2014-01-01
Background Genome-wide microarrays have been useful for predicting chemical-genetic interactions at the gene level. However, interpreting genome-wide microarray results can be overwhelming due to the vast output of gene expression data combined with off-target transcriptional responses often induced by a drug treatment. This study demonstrates how experimental and computational methods can interact with each other to arrive at more accurate predictions of drug-induced perturbations. We present a two-stage strategy that links microarray experimental testing and network training conditions to predict gene perturbations for a drug with a known mechanism of action in a well-studied organism. Results S. cerevisiae cells were treated with the antifungal, fluconazole, and expression profiling was conducted under different biological conditions using Affymetrix genome-wide microarrays. Transcripts were filtered with a formal network-based method, sparse simultaneous equation models and Lasso regression (SSEM-Lasso), under different network training conditions. Gene expression results were evaluated using both gene set and single gene target analyses, and the drug's transcriptional effects were narrowed first by pathway and then by individual genes. Variables included: (i) Testing conditions – exposure time and concentration and (ii) Network training conditions – training compendium modifications. Two analyses of SSEM-Lasso output – gene set and single gene – were conducted to gain a better understanding of how SSEM-Lasso predicts perturbation targets. Conclusions This study demonstrates that genome-wide microarrays can be optimized using a two-stage strategy for a more in-depth understanding of how a cell manifests biological reactions to a drug treatment at the transcription level. Additionally, a more detailed understanding of how the statistical model, SSEM-Lasso, propagates perturbations through a network of gene regulatory interactions is achieved
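SSEM-Lasso itself couples simultaneous equation models with Lasso penalties, but its sparsity ingredient can be illustrated with plain Lasso fit by coordinate descent. The data below are synthetic (two truly perturbed predictors among ten candidates), and the solver is a textbook sketch, not the authors' implementation.

```python
import random

def lasso_cd(X, y, lam, iters=50):
    """Lasso via cyclic coordinate descent with soft-thresholding.
    Minimizes 0.5*||y - Xw||^2 + lam*||w||_1 (no intercept; toy scale)."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    col_sq = [sum(X[i][j] ** 2 for i in range(n)) for j in range(p)]
    for _ in range(iters):
        for j in range(p):
            # correlation of feature j with the partial residual (excluding j)
            rho = sum(X[i][j] * (y[i] - sum(w[k] * X[i][k]
                                            for k in range(p) if k != j))
                      for i in range(n))
            if rho > lam:
                w[j] = (rho - lam) / col_sq[j]
            elif rho < -lam:
                w[j] = (rho + lam) / col_sq[j]
            else:
                w[j] = 0.0          # soft-threshold: feature dropped
    return w

# Toy "expression" data: 2 truly perturbed regulators out of 10 candidates.
random.seed(3)
n, p = 200, 10
X = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
y = [3.0 * X[i][0] - 2.0 * X[i][1] + random.gauss(0, 0.5) for i in range(n)]
w = lasso_cd(X, y, lam=20.0)
```

The L1 penalty zeroes out most irrelevant coefficients while keeping the two true effects, which is the filtering behavior the two-stage strategy relies on to narrow a genome-wide signal to candidate targets.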
Kray, Jutta
2006-08-11
Adult age differences in task switching and advance preparation were examined by comparing cue-based and memory-based switching conditions. Task switching was assessed by determining two types of costs that occur at the general (mixing costs) and specific (switching costs) level of switching. Advance preparation was investigated by varying the time interval until the next task (short, middle, very long). Results indicated that the implementation of task sets was different for cue-based switching with random task sequences and memory-based switching with predictable task sequences. Switching costs were strongly reduced under cue-based switching conditions, indicating that task-set cues facilitate the retrieval of the next task. Age differences were found for mixing costs and for switching costs only under cue-based conditions in which older adults showed smaller switching costs than younger adults. It is suggested that older adults adopt a less extreme bias between two tasks than younger adults in situations associated with uncertainty. For cue-based switching with random task sequences, older adults are less engaged in a complete reconfiguration of task sets because of the probability of a further task change. Furthermore, the reduction of switching costs was more pronounced for cue- than memory-based switching for short preparation intervals, whereas the reduction of switch costs was more pronounced for memory- than cue-based switching for longer preparation intervals at least for older adults. Together these findings suggest that the implementation of task sets is functionally different for the two types of task-switching conditions.
Stagnation-point heat-transfer rate predictions at aeroassist flight conditions
NASA Technical Reports Server (NTRS)
Gupta, Roop N.; Jones, Jim J.; Rochelle, William C.
1992-01-01
The results are presented for the stagnation-point heat-transfer rates used in the design process of the Aeroassist Flight Experiment (AFE) vehicle over its entire aeropass trajectory. The prediction methods used in this investigation demonstrate the application of computational fluid dynamics (CFD) techniques to a wide range of flight conditions and their usefulness in a design process. The heating rates were computed by a viscous-shock-layer (VSL) code at the lower altitudes and by a Navier-Stokes (N-S) code for the higher altitude cases. For both methods, finite-rate chemically reacting gas was considered, and a temperature-dependent wall-catalysis model was used. The wall temperature for each case was assumed to be radiative equilibrium temperature, based on total heating. The radiative heating was estimated by using a correlation equation. Wall slip was included in the N-S calculation method, and this method implicitly accounts for shock slip. The N-S/VSL combination of prediction methods was established by comparison with the published benchmark flow-field code LAURA results at lower altitudes, and the direct simulation Monte Carlo results at higher altitude cases. To obtain the design heating rate over the entire forward face of the vehicle, a boundary-layer method (BLIMP code) that employs reacting chemistry and surface catalysis was used. The ratio of the VSL or N-S method prediction to that obtained from the boundary-layer method code at the stagnation point is used to define an adjustment factor, which accounts for the errors involved in using the boundary-layer method.
Predicting Maps of Green Growth in Košice
NASA Astrophysics Data System (ADS)
Poorova, Zuzana; Vranayova, Zuzana
2017-10-01
The paper deals with changing the traditional roofs in the city of Košice into green roofs. Possible areas of city housing estates, after taking into account the conditions of each of them (types of buildings, statics of buildings), are listed in the paper. The research presents prediction maps of the city of Košice from 2017 to 2042 at 5-year intervals. The paper is a segment of a dissertation work focusing on changing traditional roofs into green roofs with the aim of retaining water, calculating the amount of retained water and showing possibilities for how to use this water.
Tarazona, J V; Rodríguez, C; Alonso, E; Sáez, M; González, F; San Andrés, M D; Jiménez, B; San Andrés, M I
2015-01-22
This article describes the toxicokinetics of perfluorooctane sulfonate (PFOS) in birds under low repeated dosing, equivalent to 0.085 μg/kg per day, representing environmentally realistic exposure conditions. The best fitting was provided by a simple pseudo monocompartmental first-order kinetics model, regulated by two rates, with a pseudo first-order dissipation half-life of 230 days, accounting for real elimination as well as binding of PFOS to non-exchangeable structures. The calculated assimilation efficiency was 0.66 with confidence intervals of 0.64 and 0.68. The model calculations confirmed that the measured maximum concentrations were still far from the steady state situation, which for this dose regime, was estimated at a value of about 65 μg PFOS/L serum achieved after a theoretical 210 weeks continuous exposure. The results confirm a very different kinetics than that observed in single-dose experiments confirming clear dose-related differences in apparent elimination rates in birds, as described for humans and monkeys; suggesting that a capacity-limited saturable process should also be considered in the kinetic behavior of PFOS in birds. Pseudo first-order kinetic models are highly convenient and frequently used for predicting bioaccumulation of chemicals in livestock and wildlife; the study suggests that previous bioaccumulation models using half-lives obtained at high doses are expected to underestimate the biomagnification potential of PFOS. The toxicokinetic parameters presented here can be used for higher-tier bioaccumulation estimations of PFOS in chickens and as surrogate values for modeling PFOS kinetics in wild bird species. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
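The pseudo first-order accumulation described above has a closed form under constant daily dosing. The half-life (230 days) and plateau (~65 μg PFOS/L serum) are the reported values; the closed-form expression is the standard one-compartment result, used here as an illustrative sketch rather than the authors' fitted model.

```python
import math

HALF_LIFE_DAYS = 230.0                  # reported pseudo first-order half-life
K_ELIM = math.log(2) / HALF_LIFE_DAYS   # dissipation rate constant (1/day)
C_STEADY = 65.0                         # reported plateau, ug PFOS / L serum

def fraction_of_steady_state(t_days, k=K_ELIM):
    """Fraction of the plateau reached after t days of constant daily dosing."""
    return 1.0 - math.exp(-k * t_days)

def serum_pfos(t_days, c_ss=C_STEADY, k=K_ELIM):
    """First-order accumulation toward steady state: C(t) = C_ss*(1 - e^(-k*t))."""
    return c_ss * fraction_of_steady_state(t_days, k)
```

At the reported ~210 weeks (1470 days) this sketch gives roughly 99% of the plateau, consistent with the abstract's point that the measured concentrations at shorter exposure times were still far below steady state.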
ERIC Educational Resources Information Center
Jones, Randolph
2003-01-01
Explores the option of using distance education courses as a viable alternative to building classrooms and facilities. Distance education could be one solution to the financial and physical facilities problems universities may have in dealing with the predicted economic and enrollment conditions. (AEF)
The influence of weather on migraine – are migraine attacks predictable?
Hoffmann, Jan; Schirra, Tonio; Lo, Hendra; Neeb, Lars; Reuter, Uwe; Martus, Peter
2015-01-01
Objective The study aimed at elucidating a potential correlation between specific meteorological variables and the prevalence and intensity of migraine attacks as well as exploring a potential individual predictability of a migraine attack based on meteorological variables and their changes. Methods Attack prevalence and intensity of 100 migraineurs were correlated with atmospheric pressure, relative air humidity, and ambient temperature in 4-h intervals over 12 consecutive months. For each correlation, meteorological parameters at the time of the migraine attack as well as their variation within the preceding 24 h were analyzed. For migraineurs showing a positive correlation, logistic regression analysis was used to assess the predictability of a migraine attack based on meteorological information. Results In a subgroup of migraineurs, a significant weather sensitivity could be observed. In contrast, pooled analysis of all patients did not reveal a significant association. An individual prediction of a migraine attack based on meteorological data was not possible, mainly as a result of the small prevalence of attacks. Interpretation The results suggest that only a subgroup of migraineurs is sensitive to specific weather conditions. Our findings may provide an explanation as to why previous studies, which commonly rely on a pooled analysis, show inconclusive results. The lack of individual attack predictability indicates that the use of preventive measures based on meteorological conditions is not feasible. PMID:25642431
Error estimation for CFD aeroheating prediction under rarefied flow condition
NASA Astrophysics Data System (ADS)
Jiang, Yazhong; Gao, Zhenxun; Jiang, Chongwen; Lee, Chunhian
2014-12-01
Both direct simulation Monte Carlo (DSMC) and Computational Fluid Dynamics (CFD) methods have become widely used for aerodynamic prediction when reentry vehicles experience different flow regimes during flight. The implementation of slip boundary conditions in the traditional CFD method under Navier-Stokes-Fourier (NSF) framework can extend the validity of this approach further into transitional regime, with the benefit that much less computational cost is demanded compared to DSMC simulation. Correspondingly, an increasing error arises in aeroheating calculation as the flow becomes more rarefied. To estimate the relative error of heat flux when applying this method for a rarefied flow in transitional regime, theoretical derivation is conducted and a dimensionless parameter ɛ is proposed by approximately analyzing the ratio of the second order term to first order term in the heat flux expression in Burnett equation. DSMC simulation for hypersonic flow over a cylinder in transitional regime is performed to test the performance of parameter ɛ, compared with two other parameters, Knρ and Ma·Knρ.
González-García, Nadia; González, Martha A; Rendón, Pablo L
2016-07-15
Relationships between musical pitches are described as either consonant, when associated with a pleasant and harmonious sensation, or dissonant, when associated with an inharmonious feeling. The accurate singing of musical intervals requires communication between auditory feedback processing and vocal motor control (i.e. audio-vocal integration) to ensure that each note is produced correctly. The objective of this study is to investigate the neural mechanisms through which trained musicians produce consonant and dissonant intervals. We utilized 4 musical intervals (specifically, an octave, a major seventh, a fifth, and a tritone) as the main stimuli for auditory discrimination testing, and we used the same interval tasks to assess vocal accuracy in a group of musicians (11 subjects, all female vocal students at conservatory level). The intervals were chosen so as to test for differences in recognition and production of consonant and dissonant intervals, as well as narrow and wide intervals. The subjects were studied using fMRI during performance of the interval tasks; the control condition consisted of passive listening. Singing dissonant intervals as opposed to singing consonant intervals led to an increase in activation in several regions, most notably the primary auditory cortex, the primary somatosensory cortex, the amygdala, the left putamen, and the right insula. Singing wide intervals as opposed to singing narrow intervals resulted in the activation of the right anterior insula. Moreover, we also observed a correlation between singing in tune and brain activity in the premotor cortex, and a positive correlation between training and activation of primary somatosensory cortex, primary motor cortex, and premotor cortex during singing. When singing dissonant intervals, a higher degree of training correlated with the right thalamus and the left putamen. Our results indicate that singing dissonant intervals requires greater involvement of neural mechanisms
A framework for interval-valued information system
NASA Astrophysics Data System (ADS)
Yin, Yunfei; Gong, Guanghong; Han, Liang
2012-09-01
An interval-valued information system transforms a conventional dataset into interval-valued form. To conduct interval-valued data mining, we carry out two investigations: (1) constructing the interval-valued information system, and (2) conducting interval-valued knowledge discovery. In constructing the interval-valued information system, we first discover the paired attributes in the database, then store them in neighbouring locations in a common database and regard them as 'one' new field. In conducting the interval-valued knowledge discovery, we utilise related prior knowledge, regard it as the control objectives, and design an approximate closed-loop control mining system. On the implemented experimental platform (prototype), we conduct the corresponding experiments and compare the proposed algorithms with several typical algorithms, such as the Apriori algorithm, the FP-growth algorithm and the CLOSE+ algorithm. The experimental results show that the interval-valued information system method is more effective than the conventional algorithms in discovering interval-valued patterns.
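Step (1), constructing the interval-valued information system by storing each discovered attribute pair as one new interval-valued field, might be sketched as follows; the record layout and field names are hypothetical.

```python
# Hypothetical sketch: turn paired "low/high" columns of a conventional
# dataset into single interval-valued fields, as in step (1) of the paper.
records = [
    {"temp_min": 12.0, "temp_max": 19.5, "hum_min": 40, "hum_max": 55},
    {"temp_min": 14.0, "temp_max": 22.0, "hum_min": 35, "hum_max": 60},
]

def to_interval_system(rows, pairs):
    """Store each discovered attribute pair as one interval-valued field."""
    out = []
    for row in rows:
        rec = {}
        for name, (lo_key, hi_key) in pairs.items():
            lo, hi = row[lo_key], row[hi_key]
            rec[name] = (min(lo, hi), max(lo, hi))  # one 'new' field
        out.append(rec)
    return out

# The pairing itself is discovered by the system; here it is given directly.
pairs = {"temp": ("temp_min", "temp_max"), "hum": ("hum_min", "hum_max")}
ivs = to_interval_system(records, pairs)
print(ivs[0]["temp"])
```

Downstream mining then operates on the interval fields rather than on the original scalar columns.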
ERIC Educational Resources Information Center
Taatgen, Niels A.; van Rijn, Hedderik; Anderson, John
2007-01-01
A theory of prospective time perception is introduced and incorporated as a module in an integrated theory of cognition, thereby extending existing theories and allowing predictions about attention and learning. First, a time perception module is established by fitting existing datasets (interval estimation and bisection and impact of secondary…
Timothy syndrome-like condition with syndactyly but without prolongation of the QT interval.
Kosaki, Rika; Ono, Hiroshi; Terashima, Hiroshi; Kosaki, Kenjiro
2018-05-07
Timothy syndrome is characterized by a unique combination of a prolongation of the corrected QT interval of the electrocardiogram and bilateral cutaneous syndactyly of the fingers and the toes and is caused by heterozygous mutations in CACNA1C, a gene encoding a calcium channel. After the discovery of the CACNA1C gene as the causative gene for Timothy syndrome, patients with CACNA1C mutations with QT prolongation but without syndactyly were described. Here, we report a 5-year-old female patient with cutaneous syndactyly, developmental delay, and pulmonary hypertension. Exome analysis showed a previously undescribed de novo heterozygous mutation in the CACNA1C gene, p.Arg1024Gly. To our knowledge, this patient is the first to exhibit syndactyly and to carry a CACNA1C mutation but to not have QT prolongation, which has long been considered an obligatory feature of Timothy syndrome. © 2018 Wiley Periodicals, Inc.
Jović, Ozren; Smrečki, Neven; Popović, Zora
2016-04-01
A novel quantitative prediction and variable selection method called interval ridge regression (iRR) is studied in this work. The method is performed on six data sets of FTIR, two data sets of UV-vis and one data set of DSC. The obtained results show that models built with ridge regression on optimal variables selected with iRR significantly outperform models built with ridge regression on all variables in both calibration (6 out of 9 cases) and validation (2 out of 9 cases). In this study, iRR is also compared with interval partial least squares regression (iPLS). iRR outperformed iPLS in validation (insignificantly in 6 out of 9 cases and significantly in one out of 9 cases for p<0.05). iRR can also be a fast alternative to iPLS, especially when the degree of complexity of the analyzed system is unknown, i.e. if the upper limit on the number of latent variables is not easily estimated for iPLS. Adulteration of hempseed (H) oil, a well known health beneficial nutrient, is studied in this work by mixing it with cheap and widely used oils such as soybean (So) oil, rapeseed (R) oil and sunflower (Su) oil. Binary mixture sets of hempseed oil with these three oils (HSo, HR and HSu) and a ternary mixture set of H oil, R oil and Su oil (HRSu) were considered. The obtained accuracy indicates that using iRR on FTIR and UV-vis data, each particular oil can be very successfully quantified (in all 8 cases RMSEP<1.2%). This means that FTIR-ATR coupled with iRR can very rapidly and effectively determine the level of adulteration in adulterated hempseed oil (R(2)>0.99). Copyright © 2015 Elsevier B.V. All rights reserved.
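The core of an iRR-style search, fitting ridge regression on each contiguous block of variables and keeping the block with the lowest validation error, can be sketched as below. This is a simplified illustration on synthetic data with a fixed interval width and ridge parameter, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "spectrum": 40 variables, only columns 10..19 carry signal.
n, p = 60, 40
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[10:20] = 1.0
y = X @ beta + 0.1 * rng.normal(size=n)

def ridge_fit(X, y, lam=1.0):
    # Closed-form ridge solution: (X'X + lam*I) w = X'y
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y)

def interval_ridge(X, y, width=10, lam=1.0):
    """Return the start of the contiguous interval with lowest val. MSE."""
    n = len(y)
    tr, va = np.arange(n) < n // 2, np.arange(n) >= n // 2
    best = (np.inf, 0)
    for start in range(X.shape[1] - width + 1):
        cols = slice(start, start + width)
        w = ridge_fit(X[tr][:, cols], y[tr], lam)
        err = np.mean((y[va] - X[va][:, cols] @ w) ** 2)
        best = min(best, (err, start))
    return best[1]

start = interval_ridge(X, y)
print("selected interval: variables", start, "to", start + 9)
```

With the informative block at columns 10-19, the search recovers that interval; the real method would additionally tune the interval width and regularization.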
Prediction and uncertainty in human Pavlovian to instrumental transfer.
Trick, Leanne; Hogarth, Lee; Duka, Theodora
2011-05-01
Attentional capture and behavioral control by conditioned stimuli have been dissociated in animals. The current study assessed this dissociation in humans. Participants were trained on a Pavlovian schedule in which 3 visual stimuli, A, B, and C, predicted the occurrence of an aversive noise with 90%, 50%, or 10% probability, respectively. Participants then went on to separate instrumental training in which a key-press response canceled the aversive noise with a .5 probability on a variable interval schedule. Finally, in the transfer phase, the 3 Pavlovian stimuli were presented in this instrumental schedule and were no longer differentially predictive of the outcome. Observing times and gaze dwell time indexed attention to these stimuli in both training and transfer. Aware participants acquired veridical outcome expectancies in training--that is, A > B > C, and these expectancies persisted into transfer. Most important, the transfer effect accorded with these expectancies, A > B > C. By contrast, observing times accorded with uncertainty--that is, they showed B > A = C during training, and B < A = C in the transfer phase. Dwell time bias supported this association between attention and uncertainty, although these data showed a slightly more complicated pattern. Overall, the study suggests that transfer is linked to outcome prediction and is dissociated from attention to conditioned stimuli, which is linked to outcome uncertainty.
Tse, Nga Yan; Goldsworthy, Mitchell R; Ridding, Michael C; Coxon, James P; Fitzgerald, Paul B; Fornito, Alex; Rogasch, Nigel C
2018-06-04
This study assessed the effect of interval duration on the direction and magnitude of changes in cortical excitability and inhibition when applying repeated blocks of intermittent theta burst stimulation (iTBS) over motor cortex. 15 participants received three different iTBS conditions on separate days: single iTBS; repeated iTBS with a 5 minute interval (iTBS-5-iTBS); and with a 15 minute interval (iTBS-15-iTBS). Changes in cortical excitability and short-interval cortical inhibition (SICI) were assessed via motor-evoked potentials (MEPs) before and up to 60 mins following stimulation. iTBS-15-iTBS increased MEP amplitude for up to 60 mins post stimulation, whereas iTBS-5-iTBS decreased MEP amplitude. In contrast, MEP amplitude was not altered by single iTBS. Despite the group level findings, only 53% of individuals showed facilitated MEPs following iTBS-15-iTBS, and only 40% inhibited MEPs following iTBS-5-iTBS. Modulation of SICI did not differ between conditions. These results suggest interval duration between spaced iTBS plays an important role in determining the direction of plasticity on excitatory, but not inhibitory circuits in human motor cortex. While repeated iTBS can increase the magnitude of MEP facilitation/inhibition in some individuals compared to single iTBS, the response to repeated iTBS appears variable between individuals in this small sample.
Arica, Sami; Firat Ince, N; Bozkurt, Abdi; Tewfik, Ahmed H; Birand, Ahmet
2011-07-01
Pharmacological measurement of baroreflex sensitivity (BRS) is widely accepted and used in clinical practice. Following the introduction of pharmacologically induced BRS (p-BRS), alternative assessment methods that eliminate the use of drugs have attracted considerable interest in the cardiovascular research community. In this study we investigated whether p-BRS obtained via phenylephrine injection can be predicted from non-pharmacological time and frequency domain indices computed from electrocardiogram (ECG) and blood pressure (BP) data acquired during deep breathing. In this scheme, ECG and BP data were recorded from 16 subjects in a two-phase experiment. In the first phase the subjects performed irregular deep breaths and in the second phase the subjects received phenylephrine injection. From the first phase of the experiment, a large pool of predictors describing the local characteristics of the beat-to-beat interval tachogram (RR) and systolic blood pressure (SBP) was extracted in the time and frequency domains. A subset of these indices was selected using twelve subjects with an exhaustive search fused with a leave-one-subject-out cross-validation procedure. The selected indices were used to predict the p-BRS on the remaining four test subjects. A multivariate regression was used in all prediction steps. The algorithm achieved its best prediction accuracy with only two features extracted from the deep breathing data, one from the frequency domain and the other from the time domain. The normalized L2-norm error was computed as 22.9% and the correlation coefficient was 0.97 (p=0.03). These results suggest that the p-BRS can be estimated from non-pharmacological indices computed from ECG and invasive BP data related to deep breathing. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Khodayar, S.; Sehlinger, A.; Feldmann, H.; Kottmeier, C.
2015-12-01
The impact of soil initialization is investigated through perturbation simulations with the regional climate model COSMO-CLM. The focus of the investigation is to assess the sensitivity of simulated extreme periods, dry and wet, to soil moisture initialization in different climatic regions over Europe and to establish the necessary spin-up time within the framework of decadal predictions for these regions. Sensitivity experiments consisted of a reference simulation from 1968 to 1999 and 5 simulations from 1972 to 1983. The Effective Drought Index (EDI) is used to select and quantify drought status in the reference run to establish the simulation time period for the sensitivity experiments. Different soil initialization procedures are investigated. The sensitivity of the decadal predictions to soil moisture initial conditions is investigated through the analysis of water cycle components' (WCC) variability. On an episodic time scale, the local effects of soil moisture on the boundary layer and the propagated effects on the large-scale dynamics are analysed. The results show: (a) COSMO-CLM reproduces the observed features of the drought index. (b) Soil moisture initialization exerts a relevant impact on WCC, e.g., precipitation distribution and intensity. (c) Regional characteristics strongly affect the response of the WCC. Precipitation and evapotranspiration deviations are larger for humid regions. (d) The initial soil conditions (wet/dry), the regional characteristics (humid/dry) and the annual period (wet/dry) play a key role in the time the soil needs to restore quasi-equilibrium and in the impact on atmospheric conditions. Humid areas, and for all regions a humid initialization, exhibit shorter spin-up times; the soil also reacts more sensitively when initialised during dry periods. (e) The initial soil perturbation may markedly modify the atmospheric pressure field, wind circulation systems and atmospheric water vapour distribution, affecting atmospheric stability
Bonino, Angela Yarnell; Leibold, Lori J
2017-01-23
Collecting reliable behavioral data from toddlers and preschoolers is challenging. As a result, there are significant gaps in our understanding of human auditory development for these age groups. This paper describes an observer-based procedure for measuring hearing sensitivity with a two-interval, two-alternative forced-choice paradigm. Young children are trained to perform a play-based, motor response (e.g., putting a block in a bucket) whenever they hear a target signal. An experimenter observes the child's behavior and makes a judgment about whether the signal was presented during the first or second observation interval; the experimenter is blinded to the true signal interval, so this judgment is based solely on the child's behavior. These procedures were used to test 2 to 4 year-olds (n = 33) with no known hearing problems. The signal was a 1,000 Hz warble tone presented in quiet, and the signal level was adjusted to estimate a threshold corresponding to 71%-correct detection. A valid threshold was obtained for 82% of children. These results indicate that the two-interval procedure is both feasible and reliable for use with toddlers and preschoolers. The two-interval, observer-based procedure described in this paper is a powerful tool for evaluating hearing in young children because it guards against response bias on the part of the experimenter.
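The 71%-correct threshold target is the convergence point of a 2-down/1-up adaptive staircase (≈70.7%), one common way such thresholds are tracked; the abstract does not state the exact tracking rule, so the sketch below is an assumption. It simulates that rule against an assumed logistic psychometric function rather than real child data.

```python
import math
import random

random.seed(3)

# Simulated listener: logistic psychometric function running from 0.5
# (chance in a two-interval task) to 1.0; threshold/slope are assumptions.
def p_correct(level_db, threshold_db=30.0, slope=0.2):
    return 0.5 + 0.5 / (1.0 + math.exp(-slope * (level_db - threshold_db)))

def staircase(trials=400, start=50.0, step=2.0):
    """2-down/1-up track; returns the mean of the last reversal levels."""
    level, run, direction, reversals = start, 0, -1, []
    for _ in range(trials):
        if random.random() < p_correct(level):
            run += 1
            if run == 2:                    # two correct in a row: step down
                run = 0
                if direction == +1:
                    reversals.append(level)
                direction, level = -1, level - step
        else:                               # one error: step up
            run = 0
            if direction == -1:
                reversals.append(level)
            direction, level = +1, level + step
    last = reversals[-10:]
    return sum(last) / len(last)

est = staircase()
print("estimated ~70.7%-correct threshold (dB):", round(est, 1))
```

The track converges near the level yielding ~70.7% correct, consistent with the 71%-correct detection threshold the procedure estimates.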
Richardson, Philip; Greenslade, Jaimi; Shanmugathasan, Sulochana; Doucet, Katherine; Widdicombe, Neil; Chu, Kevin; Brown, Anthony
2015-01-01
CARING is a screening tool developed to identify patients who have a high likelihood of death in 1 year. This study sought to validate a modified CARING tool (termed PREDICT) using a population of patients presenting to the Emergency Department. In total, 1000 patients aged over 55 years who were admitted to hospital via the Emergency Department between January and June 2009 were eligible for inclusion in this study. Data on the six prognostic indicators comprising PREDICT were obtained retrospectively from patient records. One-year mortality data were obtained from the State Death Registry. Weights were applied to each PREDICT criterion, and its final score ranged from 0 to 44. Receiver operating characteristic analyses and diagnostic accuracy statistics were used to assess the accuracy of PREDICT in identifying 1-year mortality. The sample comprised 976 patients with a median (interquartile range) age of 71 years (62-81 years) and a 1-year mortality of 23.4%. In total, 50% had ≥1 PREDICT criteria with a 1-year mortality of 40.4%. Receiver operating characteristic analysis gave an area under the curve of 0.86 (95% confidence interval: 0.83-0.89). Using a cut-off of 13 points, PREDICT had 95.3% (95% confidence interval: 93.6-96.6) specificity and 53.9% (95% confidence interval: 47.5-60.3) sensitivity for predicting 1-year mortality. PREDICT was simpler than the CARING criteria and identified 158 patients per 1000 admitted who could benefit from advance care planning. PREDICT was successfully applied to the Australian healthcare system with findings similar to the original CARING study conducted in the United States. This tool could improve end-of-life care by identifying who should have advance care planning or an advance healthcare directive. © The Author(s) 2014.
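The cut-off evaluation reported above (sensitivity and specificity at a score threshold, each with a 95% confidence interval) can be sketched as follows; the scores and outcomes below are illustrative, not the study data, and the interval is a simple normal approximation.

```python
import math

def sens_spec(scores, died, cutoff):
    """Sensitivity/specificity at a cut-off, with 95% Wald intervals."""
    tp = sum(1 for s, d in zip(scores, died) if s >= cutoff and d)
    fn = sum(1 for s, d in zip(scores, died) if s < cutoff and d)
    tn = sum(1 for s, d in zip(scores, died) if s < cutoff and not d)
    fp = sum(1 for s, d in zip(scores, died) if s >= cutoff and not d)
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    def wald_ci(p, n):  # 95% normal-approximation interval
        h = 1.96 * math.sqrt(p * (1 - p) / n)
        return (max(0.0, p - h), min(1.0, p + h))
    return sens, wald_ci(sens, tp + fn), spec, wald_ci(spec, tn + fp)

# Illustrative PREDICT-style scores and 1-year outcomes (1 = died).
scores = [20, 15, 14, 13, 9, 5, 4, 0, 0, 0]
died   = [1,  1,  0,  1,  1, 0, 0, 0, 0, 0]
sens, sens_ci, spec, spec_ci = sens_spec(scores, died, cutoff=13)
print(f"sensitivity {sens:.2f} (95% CI {sens_ci[0]:.2f}-{sens_ci[1]:.2f})")
print(f"specificity {spec:.2f} (95% CI {spec_ci[0]:.2f}-{spec_ci[1]:.2f})")
```

Sweeping the cut-off over all observed scores and plotting sensitivity against 1-specificity yields the ROC curve whose area the study reports.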
NASA Astrophysics Data System (ADS)
Wu, Yanling
2018-05-01
In this paper, extreme waves were generated with the open-source computational fluid dynamics (CFD) tools OpenFOAM and Waves2FOAM, using linear and nonlinear NewWave input, and were used for numerical simulation of the wave impact process. Numerical tools based on first-order (with and without stretching) and second-order NewWave are investigated. Simulations predicting the force loading on an offshore platform under extreme weather conditions are implemented and compared.
Forster, Sarah E; Zirnheld, Patrick; Shekhar, Anantha; Steinhauer, Stuart R; O'Donnell, Brian F; Hetrick, William P
2017-09-01
Signals carried by the mesencephalic dopamine system and conveyed to anterior cingulate cortex are critically implicated in probabilistic reward learning and performance monitoring. A common evaluative mechanism purportedly subserves both functions, giving rise to homologous medial frontal negativities in feedback- and response-locked event-related brain potentials (the feedback-related negativity (FRN) and the error-related negativity (ERN), respectively), reflecting dopamine-dependent prediction error signals to unexpectedly negative events. Consistent with this model, the dopamine receptor antagonist, haloperidol, attenuates the ERN, but effects on FRN have not yet been evaluated. ERN and FRN were recorded during a temporal interval learning task (TILT) following randomized, double-blind administration of haloperidol (3 mg; n = 18), diphenhydramine (an active control for haloperidol; 25 mg; n = 20), or placebo (n = 21) to healthy controls. Centroparietal positivities, the Pe and feedback-locked P300, were also measured and correlations between ERP measures and behavioral indices of learning, overall accuracy, and post-error compensatory behavior were evaluated. We hypothesized that haloperidol would reduce ERN and FRN, but that ERN would uniquely track automatic, error-related performance adjustments, while FRN would be associated with learning and overall accuracy. As predicted, ERN was reduced by haloperidol and in those exhibiting less adaptive post-error performance; however, these effects were limited to ERNs following fast timing errors. In contrast, the FRN was not affected by drug condition, although increased FRN amplitude was associated with improved accuracy. Significant drug effects on centroparietal positivities were also absent. Our results support a functional and neurobiological dissociation between the ERN and FRN.
Palmer, Matthew A; Brewer, Neil; Weber, Nathan; Nagesh, Ambika
2013-03-01
Prior research points to a meaningful confidence-accuracy (CA) relationship for positive identification decisions. However, there are theoretical grounds for expecting that different aspects of the CA relationship (calibration, resolution, and over/underconfidence) might be undermined in some circumstances. This research investigated whether the CA relationship for eyewitness identification decisions is affected by three, forensically relevant variables: exposure duration, retention interval, and divided attention at encoding. In Study 1 (N = 986), a field experiment, we examined the effects of exposure duration (5 s vs. 90 s) and retention interval (immediate testing vs. a 1-week delay) on the CA relationship. In Study 2 (N = 502), we examined the effects of attention during encoding on the CA relationship by reanalyzing data from a laboratory experiment in which participants viewed a stimulus video under full or divided attention conditions and then attempted to identify two targets from separate lineups. Across both studies, all three manipulations affected identification accuracy. The central analyses concerned the CA relation for positive identification decisions. For the manipulations of exposure duration and retention interval, overconfidence was greater in the more difficult conditions (shorter exposure; delayed testing) than the easier conditions. Only the exposure duration manipulation influenced resolution (which was better for 5 s than 90 s), and only the retention interval manipulation affected calibration (which was better for immediate testing than delayed testing). In all experimental conditions, accuracy and diagnosticity increased with confidence, particularly at the upper end of the confidence scale. Implications for theory and forensic settings are discussed.
Numerical prediction of a draft tube flow taking into account uncertain inlet conditions
NASA Astrophysics Data System (ADS)
Brugiere, O.; Balarac, G.; Corre, C.; Metais, O.; Flores, E.; Pleroy
2012-11-01
The swirling turbulent flow in a hydroturbine draft tube is computed with a non-intrusive uncertainty quantification (UQ) method coupled to Reynolds-Averaged Navier-Stokes (RANS) modelling in order to take into account in the numerical prediction the physical uncertainties existing on the inlet flow conditions. The proposed approach yields not only mean velocity fields to be compared with measured profiles, as is customary in Computational Fluid Dynamics (CFD) practice, but also variance of these quantities from which error bars can be deduced on the computed profiles, thus making more significant the comparison between experiment and computation.
NASA Technical Reports Server (NTRS)
Slutz, R. J.; Gray, T. B.; West, M. L.; Stewart, F. G.; Leftin, M.
1971-01-01
A statistical study of formulas for predicting the sunspot number several years in advance is reported. By using a data lineup with cycle maxima coinciding, and by using multiple and nonlinear predictors, a new formula is obtained which gives better error estimates than earlier formulas derived from the work of McNish and Lincoln. A statistical analysis is conducted to determine which of several mathematical expressions best describes the relationship between 10.7 cm solar flux and Zurich sunspot numbers. Attention is given to the autocorrelation of the observations, and confidence intervals for the derived relationships are presented. The accuracy of predicting a value of 10.7 cm solar flux from a predicted sunspot number is discussed.
Predicting motion sickness during parabolic flight
NASA Technical Reports Server (NTRS)
Harm, Deborah L.; Schlegel, Todd T.
2002-01-01
BACKGROUND: There are large individual differences in susceptibility to motion sickness. Attempts to predict who will become motion sick have had limited success. In the present study, we examined gender differences in resting levels of salivary amylase and total protein, cardiac interbeat intervals (R-R intervals), and a sympathovagal index and evaluated their potential to correctly classify individuals into two motion sickness severity groups. METHODS: Sixteen subjects (10 men and 6 women) flew four sets of 10 parabolas aboard NASA's KC-135 aircraft. Saliva samples for amylase and total protein were collected preflight on the day of the flight and motion sickness symptoms were recorded during each parabola. Cardiovascular parameters were collected in the supine position 1-5 days before the flight. RESULTS: There were no significant gender differences in sickness severity or any of the other variables mentioned above. Discriminant analysis using salivary amylase, R-R intervals and the sympathovagal index produced a significant Wilks' lambda coefficient of 0.36, p=0.006. The analysis correctly classified 87% of the subjects into the none-mild sickness or the moderate-severe sickness group. CONCLUSIONS: The linear combination of resting levels of salivary amylase, high-frequency R-R interval levels, and a sympathovagal index may be useful in predicting motion sickness severity.
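The discriminant analysis behind the two-group classification can be sketched with Fisher's two-group linear discriminant; the physiological values below are synthetic stand-ins for salivary amylase, R-R interval, and a sympathovagal index, not the flight data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-ins: 8 subjects per severity group, 3 resting measures
# (amylase-like, R-R-interval-like, sympathovagal-index-like).
n = 8
g0 = rng.normal([80, 900, 1.0], [10, 60, 0.2], size=(n, 3))   # none-mild
g1 = rng.normal([110, 800, 1.6], [10, 60, 0.2], size=(n, 3))  # moderate-severe

X = np.vstack([g0, g1])
y = np.array([0] * n + [1] * n)

# Fisher's linear discriminant: w solves S_w w = (m1 - m0),
# then classify by projecting onto w and thresholding at the midpoint.
m0, m1 = g0.mean(axis=0), g1.mean(axis=0)
Sw = np.cov(g0, rowvar=False) + np.cov(g1, rowvar=False)
w = np.linalg.solve(Sw, m1 - m0)
threshold = w @ (m0 + m1) / 2
pred = (X @ w > threshold).astype(int)
accuracy = float((pred == y).mean())
print("training classification rate:", accuracy)
```

The study's 87% correct classification is the analogous rate computed on its 16 subjects.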
Faude, Oliver; Schnittker, Reinhard; Schulte-Zurhausen, Roman; Müller, Florian; Meyer, Tim
2013-01-01
We aimed at comparing the endurance effects of high-intensity interval training (HIIT) with high-volume running training (HVT) during pre-season conditioning in 20 high-level youth football players (15.9 (s 0.8) years). Players either conducted HVT or HIIT during the summer preparation period. During winter preparation they performed the other training programme. Before and after each training period several fitness tests were conducted: multi-stage running test (to assess the individual anaerobic threshold (IAT) and maximal running velocity (Vmax)), vertical jumping height, and straight sprinting. A significant increase from pre- to post-test was observed in IAT velocity (P < 0.001) with a greater increase after HVT (+0.8 km · h(-1) vs. +0.5 km · h(-1) after HIIT, P = 0.04). Maximal velocity during the incremental exercise test also slightly increased with time (P = 0.09). Forty per cent (HIIT) and 15% (HVT) of all players did not improve IAT beyond baseline variability. The players who did not respond to HIIT were significantly slower during 30 m sprinting than responders (P = 0.02). No further significant differences between responders and non-responders were observed. Jump heights deteriorated significantly after both training periods (P < 0.003). Both training programmes seem to be promising means to improve endurance capacity in high-level youth football players during pre-season conditioning.
Using conditional probability to identify trends in intra-day high-frequency equity pricing
NASA Astrophysics Data System (ADS)
Rechenthin, Michael; Street, W. Nick
2013-12-01
By examining the conditional probabilities of price movements in a popular US stock over different high-frequency intra-day timespans, varying levels of trend predictability are identified. This study demonstrates the existence of predictable short-term trends in the market; understanding the probability of price movement can be useful to high-frequency traders. Price movement was examined in trade-by-trade (tick) data along with temporal timespans from 1 s to 30 min for 52 one-week periods for one highly traded stock. We hypothesize that much of the initial predictability of trade-by-trade (tick) data is due to traditional market dynamics, or the bouncing of the price between the stock's bid and ask. Only after timespans of 5 to 10 s does this cease to explain the predictability; after this timespan, two consecutive movements in the same direction occur with higher probability than movements in the opposite direction. This pattern holds up to a one-minute interval, after which the strength of the pattern weakens.
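The study's central quantity, the conditional probability that a price move is followed by another move in the same direction, can be estimated directly from a price series; the tick prices below are illustrative.

```python
# Illustrative tick series (not real market data).
prices = [10.00, 10.01, 10.02, 10.01, 10.02, 10.03, 10.02, 10.01, 10.00, 10.01]

# Direction of each move: +1 up, -1 down (flat ticks dropped for simplicity).
moves = [1 if b > a else -1 for a, b in zip(prices, prices[1:]) if b != a]

# Estimate P(next move in same direction | previous move).
same = sum(1 for a, b in zip(moves, moves[1:]) if a == b)
p_same = same / (len(moves) - 1)
print(f"P(next move same direction) = {p_same:.2f}")
```

Bid-ask bounce pushes this probability below 0.5 at the tick level, while the trend persistence the study reports corresponds to values above 0.5 at timespans beyond 5-10 s.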
A Copula-Based Conditional Probabilistic Forecast Model for Wind Power Ramps
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hodge, Brian S; Krishnan, Venkat K; Zhang, Jie
Efficient management of wind ramping characteristics can significantly reduce wind integration costs for balancing authorities. By considering the stochastic dependence of wind power ramp (WPR) features, this paper develops a conditional probabilistic wind power ramp forecast (cp-WPRF) model based on Copula theory. The WPRs dataset is constructed by extracting ramps from a large dataset of historical wind power. Each WPR feature (e.g., rate, magnitude, duration, and start-time) is separately forecasted by considering the coupling effects among different ramp features. To accurately model the marginal distributions with a copula, a Gaussian mixture model (GMM) is adopted to characterize the WPR uncertainty and features. The Canonical Maximum Likelihood (CML) method is used to estimate parameters of the multivariable copula. The optimal copula model is chosen based on the Bayesian information criterion (BIC) from each copula family. Finally, the best conditional cp-WPRF model is determined by prediction interval (PI) based evaluation metrics. Numerical simulations on publicly available wind power data show that the developed copula-based cp-WPRF model can predict WPRs with a high level of reliability and sharpness.
Thompson, Ronald E.; Hoffman, Scott A.
2006-01-01
A suite of 28 streamflow statistics, ranging from extreme low to high flows, was computed for 17 continuous-record streamflow-gaging stations and predicted for 20 partial-record stations in Monroe County and contiguous counties in northeastern Pennsylvania. The predicted statistics for the partial-record stations were based on regression analyses relating intermittent flow measurements made at the partial-record stations indexed to concurrent daily mean flows at continuous-record stations during base-flow conditions. The same statistics also were predicted for 134 ungaged stream locations in Monroe County on the basis of regression analyses relating the statistics to GIS-determined basin characteristics for the continuous-record station drainage areas. The methodology for developing the regression equations used to estimate the statistics was originally devised for estimating low-flow frequencies. This study and a companion study found that the methodology also has application potential for predicting intermediate- and high-flow statistics. The statistics included mean monthly flows, mean annual flow, 7-day low flows for three recurrence intervals, nine flow durations, mean annual base flow, and annual mean base flows for two recurrence intervals. Low standard errors of prediction and high coefficients of determination (R2) indicated good results in using the regression equations to predict the statistics. Regression equations for the larger flow statistics tended to have lower standard errors of prediction and higher coefficients of determination (R2) than equations for the smaller flow statistics. The report discusses the methodologies used in determining the statistics and the limitations of the statistics and the equations used to predict them. Caution is indicated in using the predicted statistics for small drainage areas. Study results constitute input needed by water-resource managers in Monroe County for planning purposes and evaluation
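Regressions of this kind, relating a flow statistic at gaged basins to basin characteristics and then applying the fitted equation at ungaged sites, are commonly carried out in log space. A minimal single-predictor sketch with illustrative drainage areas and median flows (not the report's data):

```python
import math

# Illustrative gaged basins: drainage area (mi^2) and median daily flow (cfs).
area_mi2 = [12.0, 25.0, 58.0, 110.0, 210.0]
q50_cfs  = [9.5, 21.0, 47.0, 95.0, 180.0]

# Ordinary least squares in log10 space: log Q50 = a + b * log A.
x = [math.log10(a) for a in area_mi2]
y = [math.log10(q) for q in q50_cfs]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
    sum((xi - xbar) ** 2 for xi in x)
a = ybar - b * xbar

def predict_q50(area):
    """Apply the fitted log-log equation at an ungaged site."""
    return 10 ** (a + b * math.log10(area))

print("log-log slope:", round(b, 2))
print("predicted Q50 for a 75 mi^2 basin (cfs):", round(predict_q50(75.0), 1))
```

The report's equations use multiple GIS-derived characteristics and report standard errors of prediction alongside the coefficients; this sketch shows only the single-predictor form.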
Olatinwo, Rabiu O; Prabha, Thara V; Paz, Joel O; Hoogenboom, Gerrit
2012-03-01
Early leaf spot of peanut (Arachis hypogaea L.), a disease caused by Cercospora arachidicola S. Hori, is responsible for an annual crop loss of several million dollars in the southeastern United States alone. The development of early leaf spot on peanut and the subsequent spread of the spores of C. arachidicola rely on favorable weather conditions. Accurate spatio-temporal weather information is crucial for monitoring the progression of favorable conditions and determining the potential threat of the disease. Therefore, the development of a prediction model for mitigating the risk of early leaf spot in peanut production is important. The specific objective of this study was to demonstrate the application of the high-resolution Weather Research and Forecasting (WRF) model for management of early leaf spot in peanut. We coupled high-resolution weather output of the WRF, i.e. relative humidity and temperature, with the Oklahoma peanut leaf spot advisory model in predicting favorable conditions for early leaf spot infection over Georgia in 2007. Results showed more favorable infection conditions along the southeastern coastline of Georgia, where the infection threshold was met sooner, compared to the southwestern and central parts of Georgia, where the disease risk was lower. A newly introduced infection threat index indicates that the leaf spot threat threshold was met sooner at Alma, GA, compared to Tifton and Cordele, GA. The short-term prediction of weather parameters and their use in the management of peanut diseases is a viable and promising technique, which could help growers make accurate management decisions and lower disease impact through optimal timing of fungicide applications.
An Event Restriction Interval Theory of Tense
ERIC Educational Resources Information Center
Beamer, Brandon Robert
2012-01-01
This dissertation presents a novel theory of tense and tense-like constructions. It is named after a key theoretical component of the theory, the event restriction interval. In Event Restriction Interval (ERI) Theory, sentences are semantically evaluated relative to an index which contains two key intervals, the evaluation interval and the event…
Physiological Responses to On-Court vs Running Interval Training in Competitive Tennis Players
Fernandez-Fernandez, Jaime; Sanz-Rivas, David; Sanchez-Muñoz, Cristobal; de la Aleja Tellez, Jose Gonzalez; Buchheit, Martin; Mendez-Villanueva, Alberto
2011-01-01
The aim of this study was to compare heart rate (HR), blood lactate (LA), and rating of perceived exertion (RPE) responses to a tennis-specific interval training session (i.e., on-court) with those of a time-matched running interval training session (i.e., off-court). Eight well-trained male (n = 4) and female (n = 4) tennis players (mean ± SD; age: 16.4 ± 1.8 years) underwent an incremental test in which peak treadmill speed, maximum HR (HRmax), and maximum oxygen uptake (VO2max) were determined. The two interval training protocols (i.e., off-court and on-court) consisted of 4 sets of 120 s of work, interspersed with 90 s of rest. Percentage of HRmax (95.9 ± 2.4 vs. 96.1 ± 2.2%; p = 0.79), LA (6.9 ± 2.5 vs. 6.2 ± 2.4 mmol·L-1; p = 0.14), and RPE (16.7 ± 2.1 vs. 16.3 ± 1.8; p = 0.50) responses were similar for off-court and on-court, respectively. The two interval training protocols used in the present study have equivalent physiological responses. Longitudinal studies are still warranted, but tennis-specific interval training sessions could represent a time-efficient alternative to off-court (running) interval training for the optimization of specific cardiorespiratory fitness in tennis players. Key points: on-court interval training can be used as an alternative to running interval training; technical/tactical training should be performed under conditions that replicate the physical and technical demands of a competitive match; during the competitive season, on-court training might be preferred to off-court training. PMID:24150630
Delirium prediction in the intensive care unit: comparison of two delirium prediction models.
Wassenaar, Annelies; Schoonhoven, Lisette; Devlin, John W; van Haren, Frank M P; Slooter, Arjen J C; Jorens, Philippe G; van der Jagt, Mathieu; Simons, Koen S; Egerod, Ingrid; Burry, Lisa D; Beishuizen, Albertus; Matos, Joaquim; Donders, A Rogier T; Pickkers, Peter; van den Boogaard, Mark
2018-05-05
Accurate prediction of delirium in the intensive care unit (ICU) may facilitate efficient use of early preventive strategies and stratification of ICU patients by delirium risk in clinical research, but the optimal delirium prediction model to use is unclear. We compared the predictive performance and user convenience of the prediction model for delirium (PRE-DELIRIC) and the early prediction model for delirium (E-PRE-DELIRIC) in ICU patients and determined the value of a two-stage calculation. This 7-country, 11-hospital, prospective cohort study evaluated consecutive adults admitted to the ICU who could be reliably assessed for delirium using the Confusion Assessment Method-ICU or the Intensive Care Delirium Screening Checklist. The predictive performance of the models was measured using the area under the receiver operating characteristic curve. Calibration was assessed graphically. A physician questionnaire evaluated user convenience. For the two-stage calculation we used E-PRE-DELIRIC immediately after ICU admission and updated the prediction using PRE-DELIRIC after 24 h. In total, 2178 patients were included. The area under the receiver operating characteristic curve was significantly greater for PRE-DELIRIC (0.74, 95% confidence interval 0.71-0.76) than for E-PRE-DELIRIC (0.68, 95% confidence interval 0.66-0.71) (z score -2.73, p < 0.01). Both models were well calibrated. The sensitivity improved when using the two-stage calculation in low-risk patients. Compared to PRE-DELIRIC, ICU physicians (n = 68) rated the E-PRE-DELIRIC model as more feasible. While both ICU delirium prediction models have moderate-to-good performance, the PRE-DELIRIC model predicts delirium better. However, ICU physicians rated the user convenience of E-PRE-DELIRIC superior to that of PRE-DELIRIC. In low-risk patients, delirium prediction further improves when the prediction is updated with the PRE-DELIRIC model after 24 h. ClinicalTrials.gov, NCT02518646. Registered on 21 July 2015.
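The model comparison above rests on the area under the receiver operating characteristic curve. A minimal sketch of how an AUC can be computed from predicted risks via the rank-based (Mann-Whitney) equivalence; the scores and labels below are invented for illustration, not study data:

```python
def auc(scores, labels):
    """AUC via the Mann-Whitney statistic: the probability that a randomly
    chosen case receives a higher predicted risk than a randomly chosen
    non-case, counting ties as half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative risk scores from two hypothetical models on the same patients
labels  = [1, 0, 1, 0, 0, 1, 0, 0]
model_a = [0.80, 0.30, 0.70, 0.40, 0.20, 0.90, 0.10, 0.35]  # perfectly separated
model_b = [0.60, 0.50, 0.45, 0.65, 0.30, 0.70, 0.40, 0.52]  # overlapping

print(auc(model_a, labels))  # 1.0
print(auc(model_b, labels))  # ≈ 0.733
```

A formal comparison of two correlated AUCs on the same patients (as in the study) would additionally use a DeLong-type test to obtain the z score; that step is omitted here.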
SINGLE-INTERVAL GAS PERMEABILITY ESTIMATION
Single-interval, steady-state gas permeability testing requires estimation of pressure at a screened interval, which in turn requires measurement of friction factors as a function of mass flow rate. Friction factors can be obtained by injecting air through a length of pipe...
Wind Information Uplink to Aircraft Performing Interval Management Operations
NASA Technical Reports Server (NTRS)
Ahmad, Nashat N.; Barmore, Bryan E.; Swieringa, Kurt A.
2016-01-01
Interval Management (IM) is an ADS-B-enabled suite of applications that use ground and flight deck capabilities and procedures designed to support the relative spacing of aircraft (Barmore et al., 2004; Murdoch et al., 2009; Barmore, 2009; Swieringa et al., 2011; Weitz et al., 2012). Relative spacing refers to managing the position of one aircraft to a time or distance relative to another aircraft, as opposed to a static reference such as a point over the ground or clock time. This results in improved inter-aircraft spacing precision and is expected to allow aircraft to be spaced closer to the applicable separation standard than in current operations. Consequently, if the reduced spacing is used in scheduling, IM can reduce the time interval between the first and last aircraft in an overall arrival flow, resulting in increased throughput. Because IM relies on speed changes to achieve precise spacing, it can reduce costly low-altitude vectoring, which increases both efficiency and throughput in capacity-constrained airspace without negatively impacting controller workload and task complexity. This is expected to increase overall system efficiency. The Flight Deck Interval Management (FIM) equipment provides speeds to the flight crew that will deliver them to the achieve-by point at the controller-specified time, i.e., the assigned spacing goal, after the target aircraft crosses the achieve-by point (Figure 1.1). Since the IM and target aircraft may not be on the same arrival procedure, the FIM equipment predicts the estimated times of arrival (ETA) for both the IM and target aircraft at the achieve-by point. This involves generating an approximate four-dimensional trajectory for each aircraft. The accuracy of the wind data used to generate those trajectories is critical to the success of the IM operation. There are two main forms of uncertainty in the wind information used by the FIM equipment. The first is the accuracy of the forecast modeling done by the weather
Mooij, Wolf M.; Bennetts, Robert E.; Kitchens, Wiley M.; DeAngelis, Donald L.
2002-01-01
The paper aims at exploring the viability of the Florida snail kite population under various drought regimes in its wetland habitat. The population dynamics of snail kites are strongly linked with the hydrology of the system due to the dependence of this bird species on one exclusive prey species, the apple snail, which is negatively affected by a drying out of habitat. Based on empirical evidence, it has been hypothesised that the viability of the snail kite population critically depends not only on the time interval between droughts, but also on the spatial extent of these droughts. A system wide drought is likely to result in reduced reproduction and increased mortality, whereas the birds can respond to local droughts by moving to sites where conditions are still favourable. This paper explores the implications of this hypothesis by means of a spatially-explicit individual-based model. The specific aim of the model is to study in a factorial design the dynamics of the kite population in relation to two scale parameters, the temporal interval between droughts and the spatial correlation between droughts. In the model high drought frequencies led to reduced numbers of kites. Also, habitat degradation due to prolonged periods of inundation led to lower predicted numbers of kites. Another main result was that when the spatial correlation between droughts was low, the model showed little variability in the predicted numbers of kites. But when droughts occurred mostly on a system wide level, environmental stochasticity strongly increased the stochasticity in kite numbers and in the worst case the viability of the kite population was seriously threatened.
NASA Technical Reports Server (NTRS)
Wolowicz, C. H.; Yancey, R. B.
1973-01-01
Preliminary correlations of flight-determined and predicted stability and control characteristics of the XB-70-1 reported in NASA TN D-4578 were subject to uncertainties in several areas which necessitated a review of prediction techniques particularly for the longitudinal characteristics. Reevaluation and updating of the original predictions, including aeroelastic corrections, for six specific flight-test conditions resulted in improved correlations of static pitch stability with flight data. The original predictions for the pitch-damping derivative, on the other hand, showed better correlation with flight data than the updated predictions. It appears that additional study is required in the application of aeroelastic corrections to rigid model wind-tunnel data and the theoretical determination of dynamic derivatives for this class of aircraft.
BINDER, KYLE W.; MURFEE, WALTER L.; SONG, JI; LAUGHLIN, M. HAROLD; PRICE, RICHARD J.
2009-01-01
Objectives Exercise training is known to enhance skeletal muscle blood flow capacity, with high-intensity interval sprint training (IST) primarily affecting muscles with a high proportion of fast twitch glycolytic fibers. The objective of this study was to determine the relative contributions of new arteriole formation and lumenal arteriolar remodeling to enhanced flow capacity and the impact of these adaptations on local microvascular hemodynamics deep within the muscle. Methods The authors studied arteriolar adaptation in the white/mixed-fiber portion of gastrocnemius muscles of IST (6 bouts of running/day; 2.5 min/bout; 60 m/min speed; 15% grade; 4.5 min rest between bouts; 5 training days/wk; 10 wks total) and sedentary (SED) control rats using whole-muscle Microfil casts. Dimensional and topological data were then used to construct a series of computational hemodynamic network models that incorporated physiological red blood cell distributions and hematocrit and diameter dependent apparent viscosities. Results In comparison to SED controls, IST elicited a significant increase in arterioles/order in the 3A through 6A generations. Predicted IST and SED flows through the 2A generation agreed closely with in vivo measurements made in a previous study, illustrating the accuracy of the model. IST shifted the bulk of the pressure drop across the network from the 3As to the 4As and 5As, and flow capacity increased from 0.7 mL/min in SED to 1.5 mL/min in IST when a driving pressure of 80 mmHg was applied. Conclusions The primary adaptation to IST is an increase in arterioles in the 3A through 6A generations, which, in turn, creates an approximate doubling of flow capacity and a deeper penetration of high pressure into the arteriolar network. PMID:17454671
DOE Office of Scientific and Technical Information (OSTI.GOV)
Majumdar, S.
1997-02-01
Available models for predicting failure of flawed and unflawed steam generator tubes under normal operating, accident, and severe accident conditions are reviewed. Tests conducted in the past, though limited, tended to show that the earlier flow-stress model for part-through-wall axial cracks overestimated the damaging influence of deep cracks. This observation was confirmed by further tests at high temperatures, as well as by finite-element analysis. A modified correlation for deep cracks can correct this shortcoming of the model. Recent tests have shown that lateral restraint can significantly increase the failure pressure of tubes with unsymmetrical circumferential cracks. This observation was confirmed by finite-element analysis. The rate-independent flow stress models that are successful at low temperatures cannot predict the rate-sensitive failure behavior of steam generator tubes at high temperatures. Therefore, a creep rupture model for predicting failure was developed and validated by tests under various temperature and pressure loadings that can occur during postulated severe accidents.
Lin, Ting; Harmsen, Stephen C.; Baker, Jack W.; Luco, Nicolas
2013-01-01
The conditional spectrum (CS) is a target spectrum (with conditional mean and conditional standard deviation) that links seismic hazard information with ground-motion selection for nonlinear dynamic analysis. Probabilistic seismic hazard analysis (PSHA) estimates the ground-motion hazard by incorporating the aleatory uncertainties in all earthquake scenarios and resulting ground motions, as well as the epistemic uncertainties in ground-motion prediction models (GMPMs) and seismic source models. Typical CS calculations to date are produced for a single earthquake scenario using a single GMPM, but more precise use requires consideration of at least multiple causal earthquakes and multiple GMPMs that are often considered in a PSHA computation. This paper presents the mathematics underlying these more precise CS calculations. Despite requiring more effort to compute than approximate calculations using a single causal earthquake and GMPM, the proposed approach produces an exact output that has a theoretical basis. To demonstrate the results of this approach and compare the exact and approximate calculations, several example calculations are performed for real sites in the western United States. The results also provide some insights regarding the circumstances under which approximate results are likely to closely match more exact results. To facilitate these more precise calculations for real applications, the exact CS calculations can now be performed for real sites in the United States using new deaggregation features in the U.S. Geological Survey hazard mapping tools. Details regarding this implementation are discussed in this paper.
NASA Astrophysics Data System (ADS)
Khodayar, Samiro; Kalthoff, Norbert
2013-04-01
Among all severe convective weather situations, fall-season heavy rainfall represents the most threatening phenomenon in the western Mediterranean region. Devastating flash floods occur every year somewhere in eastern Spain, southern France, Italy, or North Africa and are responsible for a great proportion of the fatalities, property losses, and destruction of infrastructure caused by natural hazards. Investigations in the area have shown that most of the heavy rainfall events in this region can be attributed to mesoscale convective systems. The main goal of this investigation is to understand and identify the atmospheric conditions that favor the initiation and development of such systems. Insight into the involved processes and conditions will improve their predictability and help prevent some of the fatal consequences associated with these weather phenomena. HyMeX (Hydrological cycle in the Mediterranean eXperiment) provides a unique framework to investigate this issue. Making use of high-resolution seasonal simulations with the COSMO-CLM model, the mean atmospheric conditions of the fall season (September, October, and November) are investigated in different western Mediterranean regions such as eastern Spain, southern France, northern Africa, and Italy. The precipitation distribution, its daily cycle, and its probability distribution function are evaluated to ascertain the similarities and differences between the regions of interest, as well as the spatial distribution of extreme events. Additionally, the regional differences in boundary layer and mid-tropospheric conditions, atmospheric stability and inhibition, and low-level triggering are presented. Selected high-impact weather HyMeX episodes are analyzed with special focus on the atmospheric pre-conditions leading to the extreme weather situations. These pre-conditions are then compared to the mean seasonal conditions to identify and point out possible anomalies in the atmospheric
NASA Astrophysics Data System (ADS)
Hafezalkotob, Arian; Hafezalkotob, Ashkan
2017-06-01
A target-based MADM method covers beneficial and non-beneficial attributes as well as target values for some attributes. Such techniques are considered comprehensive forms of MADM approaches and can also be used in traditional decision-making problems in which only beneficial and non-beneficial attributes exist. In many practical selection problems, some attributes have given target values, and the values of the decision matrix and the target-based attributes can be provided as intervals. Some target-based decision-making methods have recently been developed; however, a research gap exists in the area of MADM techniques with target-based attributes under uncertainty of information. We extend the MULTIMOORA method for solving practical material selection problems in which material properties and their target values are given as interval numbers. We employ various concepts of interval computations to reduce degeneration of uncertain data. In this regard, we use interval arithmetic and introduce an innovative formula for the distance between interval numbers to create an interval target-based normalization technique. Furthermore, we use a pairwise preference matrix based on the concept of the degree of preference of interval numbers to calculate the maximum, minimum, and ranking of these numbers. Two decision-making problems regarding biomaterial selection for hip and knee prostheses are discussed. Preference-degree-based ranking lists for subordinate parts of the extended MULTIMOORA method are generated by calculating the relative degrees of preference for the arranged assessment values of the biomaterials. The resultant rankings for the problem are compared with the outcomes of other target-based models in the literature.
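One common way to realize the "degree of preference" of interval numbers used for the pairwise preference matrix above is the possibility-degree formula from the interval-MADM literature. The paper's exact definition may differ, so this is a hedged sketch; the interval scores are illustrative:

```python
def preference_degree(a, b):
    """Possibility degree P(a >= b) for intervals a = (a_lo, a_hi) and
    b = (b_lo, b_hi), clipped to [0, 1]. This is one standard definition,
    assumed here for illustration; the paper's formula may differ."""
    a_lo, a_hi = a
    b_lo, b_hi = b
    len_sum = (a_hi - a_lo) + (b_hi - b_lo)
    if len_sum == 0:  # both intervals are crisp numbers
        return 1.0 if a_lo > b_lo else (0.5 if a_lo == b_lo else 0.0)
    p = (max(0.0, a_hi - b_lo) - max(0.0, a_lo - b_hi)) / len_sum
    return min(1.0, max(0.0, p))

# Interval-valued assessment scores for three hypothetical biomaterials
e1, e2, e3 = (0.60, 0.80), (0.55, 0.70), (0.40, 0.65)
print(preference_degree(e1, e2))  # ≈ 0.714: e1 is preferred over e2
print(preference_degree(e1, e3))
```

A useful property of this definition is complementarity, P(a >= b) + P(b >= a) = 1, which makes the resulting pairwise preference matrix consistent for ranking.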
Belke, Terry W; Christie-Fougere, Melissa M
2006-11-01
Across two experiments, a peak procedure was used to assess the timing of the onset and offset of an opportunity to run as a reinforcer. The first experiment investigated the effect of reinforcer duration on temporal discrimination of the onset of the reinforcement interval. Three male Wistar rats were exposed to fixed-interval (FI) 30-s schedules of wheel-running reinforcement, and the duration of the opportunity to run was varied across values of 15, 30, and 60 s. Each session consisted of 50 reinforcers and 10 probe trials. Results showed that as reinforcer duration increased, the percentage of postreinforcement pauses longer than the 30-s schedule interval increased. On probe trials, peak response rates occurred near the time of reinforcer delivery, and peak times varied with reinforcer duration. In a second experiment, seven female Long-Evans rats were exposed to FI 30-s schedules leading to 30-s opportunities to run. Timing of the onset and offset of the reinforcement period was assessed by probe trials during the schedule interval and during the reinforcement interval in separate conditions. The results provided evidence of timing of the onset, but not the offset, of the wheel-running reinforcement period. Further research is required to assess whether timing occurs during a wheel-running reinforcement period.
Chen, Xiaodong; Chen, Minglong; Wang, Yingying; Yang, Bing; Ju, Weizhu; Zhang, Fengxiang; Cao, Kejiang
2016-01-01
We sought to investigate variation of the atrial electromechanical interval after a catheter ablation procedure in patients with persistent atrial fibrillation, using pulse Doppler (PW) and pulse tissue Doppler imaging (PW-TDI). A total of 25 consecutive in-patients with persistent atrial fibrillation, who restored sinus rhythm after the ablation procedure, were recruited in our cardiac center. Echocardiography was performed on each patient at 2 hours, 1 day, 5 days, 1 month, and 3 months after the ablation therapy, and atrial electromechanical delay was measured simultaneously by PW and PW-TDI. There was no significant difference between PW and PW-TDI in measuring atrial electromechanical delay. However, at postoperative 2 hours, peak A detection rates were numerically but nonsignificantly greater by PW-TDI than by PW. Second, there was a significant decreasing trend in the atrial electromechanical interval from postoperative 2 hours to 3 months, but only the postoperative 2-hour atrial electromechanical interval was significantly greater than the intervals at the other time points. Lastly, patients without a postoperative 2-hour atrial electromechanical interval had a significantly longer duration of atrial fibrillation than those with one, by either PW or PW-TDI. In patients with persistent atrial fibrillation, the atrial electromechanical interval may decrease significantly within the first 24 hours after ablation but remains consistent later, and it was significantly related to the patient's duration of atrial fibrillation. The atrial electromechanical interval, as a potential predictive factor, is recommended to be measured by either PW or PW-TDI after 24 hours, once patients have recovered sinus rhythm by radiofrequency ablation. PMID:27924066
Expressing Intervals in Automated Service Negotiation
NASA Astrophysics Data System (ADS)
Clark, Kassidy P.; Warnier, Martijn; van Splunter, Sander; Brazier, Frances M. T.
During automated negotiation of services between autonomous agents, utility functions are used to evaluate the terms of negotiation. These terms often include intervals of values which are prone to misinterpretation. It is often unclear if an interval embodies a continuum of real numbers or a subset of natural numbers. Furthermore, it is often unclear if an agent is expected to choose only one value, multiple values, a sub-interval or even multiple sub-intervals. Additional semantics are needed to clarify these issues. Normally, these semantics are stored in a domain ontology. However, ontologies are typically domain specific and static in nature. For dynamic environments, in which autonomous agents negotiate resources whose attributes and relationships change rapidly, semantics should be made explicit in the service negotiation. This paper identifies issues that are prone to misinterpretation and proposes a notation for expressing intervals. This notation is illustrated using an example in WS-Agreement.
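The semantics the paper argues should be made explicit (continuum vs. integers, and how many values or sub-intervals an agent may select) can be captured in a small data structure. The type and field names below are illustrative assumptions, not the paper's WS-Agreement notation:

```python
from dataclasses import dataclass
from enum import Enum

class Domain(Enum):
    """Whether the interval denotes a continuum of reals or a subset of
    natural numbers: one of the ambiguities the paper identifies."""
    REAL = "continuum of real numbers"
    INTEGER = "subset of natural numbers"

@dataclass(frozen=True)
class NegotiationInterval:
    low: float
    high: float
    domain: Domain
    max_selections: int        # how many values the counterparty may pick
    allow_subintervals: bool   # may the counterparty answer with sub-intervals?

    def contains(self, value):
        """Is `value` a valid selection under this interval's semantics?"""
        if not (self.low <= value <= self.high):
            return False
        return self.domain is Domain.REAL or float(value).is_integer()

# E.g., a bandwidth term: pick exactly one integer value from 10..100
bandwidth = NegotiationInterval(10, 100, Domain.INTEGER, 1, False)
print(bandwidth.contains(42.0), bandwidth.contains(42.5))  # True False
```

Making these fields part of the exchanged negotiation message, rather than of a static domain ontology, is what lets agents in a dynamic environment interpret interval terms without prior agreement.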
Testing 40 Predictions from the Transtheoretical Model Again, with Confidence
ERIC Educational Resources Information Center
Velicer, Wayne F.; Brick, Leslie Ann D.; Fava, Joseph L.; Prochaska, James O.
2013-01-01
Testing Theory-based Quantitative Predictions (TTQP) represents an alternative to traditional Null Hypothesis Significance Testing (NHST) procedures and is more appropriate for theory testing. The theory generates explicit effect size predictions and these effect size estimates, with related confidence intervals, are used to test the predictions.…
Veres, Gábor; Szpisjak, László; Bajtai, Attila; Siska, Andrea; Klivényi, Péter; Ilisz, István; Földesi, Imre; Vécsei, László; Zádori, Dénes
2017-09-01
Evidence suggests that a decreased α-tocopherol (the most biologically active substance in the vitamin E group) level can cause neurological symptoms, most notably ataxia. The aim of the current study was to first provide reference intervals for serum tocopherols in the adult Hungarian population with an appropriate sample size, recruiting healthy control subjects and neurological patients suffering from conditions without symptoms of ataxia, myopathy, or cognitive deficiency. A validated HPLC method applying a diode array detector and rac-tocol as internal standard was utilized for that purpose. Furthermore, serum cholesterol levels were determined as well for data normalization. The calculated 2.5-97.5% reference intervals for α-, β/γ- and δ-tocopherols were 24.62-54.67, 0.81-3.69, and 0.29-1.07 μM, respectively, whereas the tocopherol/cholesterol ratios were 5.11-11.27, 0.14-0.72, and 0.06-0.22 μmol/mmol, respectively. The establishment of these reference intervals may improve the diagnostic accuracy of tocopherol measurements in certain neurological conditions with decreased tocopherol levels. Moreover, the current study draws special attention to possible pitfalls in the complex process of determining reference intervals, including the selection of the study population, the application of the internal standard, method validation, and the calculation of tocopherol/cholesterol ratios. Copyright © 2017 John Wiley & Sons, Ltd.
Stork, Matthew J; Banfield, Laura E; Gibala, Martin J; Martin Ginis, Kathleen A
2017-12-01
While considerable evidence suggests that interval exercise confers numerous physiological adaptations linked to improved health, its psychological consequences and behavioural implications are less clear and the subject of intense debate. The purpose of this scoping review was to catalogue studies investigating the psychological responses to interval exercise in order to identify what psychological outcomes have been assessed, the research methods used, and the results. A secondary objective was to identify research issues and gaps. Forty-two published articles met the review inclusion/exclusion criteria. These studies involved 1258 participants drawn from various active/inactive and healthy/unhealthy populations, and 55 interval exercise protocols (69% high-intensity interval training [HIIT], 27% sprint interval training [SIT], and 4% body-weight interval training [BWIT]). Affect and enjoyment were the most frequently studied psychological outcomes. Post-exercise assessments indicate that overall, enjoyment of, and preferences for interval exercise are equal or greater than for continuous exercise, and participants can hold relatively positive social cognitions regarding interval exercise. Although several methodological issues (e.g., inconsistent use of terminology, measures and protocols) and gaps (e.g., data on adherence and real-world protocols) require attention, from a psychological perspective, the emerging data support the viability of interval exercise as an alternative to continuous exercise.
NASA Astrophysics Data System (ADS)
Zhang, Li
With the deregulation of the electric power market in New England, an independent system operator (ISO) has been separated from the New England Power Pool (NEPOOL). The ISO provides a regional spot market, with bids on various electricity-related products and services submitted by utilities and independent power producers. A utility can bid on the spot market and buy or sell electricity via bilateral transactions. Good estimation of market clearing prices (MCP) will help utilities and independent power producers determine bidding and transaction strategies with low risks, and this is crucial for utilities to compete in the deregulated environment. MCP prediction, however, is difficult since bidding strategies used by participants are complicated and MCP is a non-stationary process. The main objective of this research is to provide efficient short-term load and MCP forecasting and corresponding confidence interval estimation methodologies. In this research, the relationship of load and MCP to other factors is investigated, and neural networks are used to model the complex relationship between input and output. With an improved learning algorithm and on-line update features for load forecasting, a neural-network-based load forecaster was developed, and has been in daily industry use since summer 1998 with good performance. MCP is volatile because of the complexity of market behaviors. In practice, neural-network-based MCP predictors usually have a cascaded structure, as several key input factors need to be estimated first. In this research, the uncertainties involved in a cascaded neural network structure for MCP prediction are analyzed, and a prediction distribution under the Bayesian framework is developed. A fast algorithm to evaluate the confidence intervals by using the memoryless Quasi-Newton method is also developed. The traditional back-propagation algorithm for neural network learning needs to be improved since MCP is a non-stationary process. The extended Kalman
Automatic Error Analysis Using Intervals
ERIC Educational Resources Information Center
Rothwell, E. J.; Cloud, M. J.
2012-01-01
A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
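The interval approach to error propagation can be illustrated with a minimal interval type; the abstract's own computations use INTLAB, so this stdlib sketch only mirrors the idea:

```python
class Interval:
    """Minimal interval arithmetic: each operation returns bounds
    guaranteed to contain the exact result."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        products = [a * b for a in (self.lo, self.hi)
                    for b in (other.lo, other.hi)]
        return Interval(min(products), max(products))

# a quantity measured as 2.0 +/- 0.1 times one measured as 3.0 +/- 0.2
x = Interval(1.9, 2.1)
y = Interval(2.8, 3.2)
z = x * y  # rigorous bounds on the product, no linearization needed
```

Unlike first-order error propagation, the bounds here are exact worst-case limits, which is the "much less effort" advantage the article describes for complicated formulas.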
NASA Astrophysics Data System (ADS)
Johnson, William; Farnsworth, Anna; Vanness, Kurt; Hilpert, Markus
2017-04-01
The key element of a mechanistic theory to predict colloid attachment in porous media under environmental conditions where colloid-collector repulsion exists (unfavorable conditions for attachment) is representation of the nano-scale surface heterogeneity (herein called discrete heterogeneity) that drives colloid attachment under unfavorable conditions. The observed modes of colloid attachment under unfavorable conditions emerge from simulations that incorporate discrete heterogeneity. Quantitative prediction of attachment (and detachment) requires capturing the sizes, spatial frequencies, and other properties of roughness asperities and charge heterodomains in discrete heterogeneity representations of different surfaces. The fact that a given discrete heterogeneity representation will interact differently with different-sized colloids as well as different ionic strengths for a given-sized colloid allows backing out representative discrete heterogeneity via comparison of simulations to experiments performed across a range of colloid size, solution ionic strength, and fluid velocity. This has been achieved on unfavorable smooth surfaces, yielding quantitative prediction of attachment and qualitative prediction of detachment in response to ionic strength or flow perturbations. Extending this treatment to rough surfaces, and representing the contributions of nanoscale roughness as well as charge heterogeneity, is a focus of this talk. Another focus of this talk is upscaling the pore-scale simulations to produce the contrasting breakthrough-elution behaviors observed at the continuum (column) scale, for example, for different-sized colloids, or same-sized colloids under different ionic strength conditions. The outcome of mechanistic pore scale simulations incorporating discrete heterogeneity and subsequent upscaling is that temporal processes such as blocking and ripening will emerge organically from these simulations, since these processes fundamentally stem from the
Patient satisfaction with extended-interval warfarin monitoring.
Carris, Nicholas W; Hwang, Andrew Y; Smith, Steven M; Taylor, James R; Sando, Karen; Powell, Jason; Rosenberg, Eric I; Zumberg, Marc S; Gums, John G; Dietrich, Eric A; Anderson, Katherine Vogel
2016-11-01
Extended-interval monitoring of warfarin has been proposed to reduce follow-up burden and improve patient satisfaction. We aimed to make an initial assessment of anticoagulation satisfaction before and after an extended-interval warfarin monitoring intervention. We conducted a translational prospective single-arm pilot study of extended-interval warfarin monitoring in five pharmacist-managed anticoagulation clinics. Patients meeting CHEST guideline criteria for extended-interval warfarin monitoring began progressive extended-interval follow-up (6, 8, and 12 weeks thereafter). The Duke Anticoagulation Satisfaction Scale (DASS) was administered at baseline and at end-of-study or study removal (in patients no longer appropriate for extended-interval follow-up). Forty-six patients had evaluable pre- and post-intervention DASS survey data. Mean age of patients was 66.5 years, 74% were non-Hispanic whites, and 48% were men. Patients completed a mean ± SD of 34 ± 22 weeks of follow-up. Mean ± SD total DASS score at baseline was 45.2 ± 14.2 versus 49.1 ± 14.9 at end-of-study (mean change, +3.9 [95% CI -0.6 to 8.4; p = 0.09]), indicating no benefit to anticoagulation satisfaction, with a trend toward decrement. Change in anticoagulation satisfaction varied substantially following extended-interval monitoring, with no evidence of improved satisfaction. Plausible reasons for patients not preferring extended-interval monitoring include increased anxiety and disengagement from self-management activities, both potentially related to less frequent feedback and reassurance during extended-interval monitoring. Additional research is needed to identify who is likely to benefit most from extended-interval monitoring. Anticoagulation satisfaction should be considered alongside clinical factors and shared decision making when implementing extended-interval warfarin monitoring.
NASA Astrophysics Data System (ADS)
Rouet-Leduc, B.; Hulbert, C.; Riviere, J.; Lubbers, N.; Barros, K.; Marone, C.; Johnson, P. A.
2016-12-01
Forecasting failure is a primary goal in diverse domains that include earthquake physics, materials science, nondestructive evaluation of materials and other engineering applications. Due to the highly complex physics of material failure and limitations on gathering data in the failure nucleation zone, this goal has often appeared out of reach; however, recent advances in instrumentation sensitivity, instrument density and data analysis show promise toward forecasting failure times. Here, we show that we can predict frictional failure times of both slow and fast stick-slip failure events in the laboratory. This advance is made possible by applying a machine learning approach known as Random Forests (RF) to the continuous acoustic emission (AE) time series recorded by detectors located on the fault blocks. The RF is trained using a large number of statistical features derived from the AE time series signal. The model is then applied to data not previously analyzed. Remarkably, we find that the RF method predicts upcoming failure time far in advance of a stick-slip event, based only on a short time window of data. Further, the algorithm accurately predicts the time of the beginning and end of the next slip event. The predicted time improves as failure is approached, as other data features add to the prediction. Our results show robust predictions of slow and dynamic failure based on acoustic emissions from the fault zone throughout the laboratory seismic cycle. The predictions are based on previously unidentified tremor-like acoustic signals that occur during stress build-up and the onset of macroscopic frictional weakening. We suggest that the tremor-like signals carry information about fault zone processes and allow precise predictions of failure at any time in the slow slip or stick-slip cycle. If the laboratory experiments represent Earth frictional conditions, it could well be that signals are being missed that contain highly useful predictive information.
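As a rough illustration of the feature-extraction step that feeds such a Random Forest, the sketch below computes rolling-window statistics from a synthetic AE-like series; the window width, feature set, and toy signal are assumptions, not the authors' pipeline:

```python
import statistics

def window_features(signal, width):
    """Rolling statistical features of an AE-like series, the kind
    a Random Forest failure-time regressor could be trained on."""
    feats = []
    for i in range(0, len(signal) - width + 1, width):
        w = signal[i:i + width]
        feats.append({"mean": statistics.fmean(w),
                      "var": statistics.pvariance(w),
                      "max": max(w), "min": min(w)})
    return feats

# toy signal whose fluctuations grow as "failure" approaches
sig = [0.01 * t * ((-1) ** t) for t in range(100)]
rows = window_features(sig, 20)
```

In the laboratory study, rows like these (labelled with time-to-failure) would be the training matrix; the growing variance feature is what lets the model track the approach to failure.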
ERIC Educational Resources Information Center
Pinto, Carlos; Machado, Armando
2011-01-01
To better understand short-term memory for temporal intervals, we re-examined the choose-short effect. In Experiment 1, to contrast the predictions of two models of this effect, the subjective shortening and the coding models, pigeons were exposed to a delayed matching-to-sample task with three sample durations (2, 6 and 18 s) and retention…
Khelil-Arfa, H; Boudon, A; Maxin, G; Faverdin, P
2012-10-01
The increase in the worldwide demand for dairy products, associated with global warming, will emphasize the issue of water use efficiency in dairy systems. The evaluation of environmental issues related to the management of animal dejections will also require precise biotechnical models that can predict effluent management in farms. In this study, equations were developed and evaluated for predicting the main water flows at the dairy cow level, based on parameters related to cow productive performance and diet under thermoneutral conditions. Two datasets were gathered. The first one comprised 342 individual measurements of water balance in dairy cows obtained during 18 trials at the experimental farm of Méjussaume (INRA, France). Predictive equations of water intake, urine and fecal water excretion were developed by multiple regression using a stepwise selection of regressors from a list of seven candidate parameters, which were milk yield, dry matter intake (DMI), body weight, diet dry matter content (DM), proportion of concentrate (CONC) and content of crude protein (CP) ingested with forage and concentrate (CPf and CPc, g/kg DM). The second dataset was used for external validation of the developed equations and comprised 196 water flow measurements on experimental lots obtained from 43 published papers related to water balance or digestibility measurements in dairy cows. Although DMI was the first predictor of the total water intake (TWI), with a partial r² of 0.51, DM was the first predictive parameter of free water intake (FWI), with a partial r² of 0.57, likely due to the large variability of DM in the first dataset (from 11.5 to 91.4 g/100 g). This confirmed the compensation between water drunk and ingested with diet when DM changes. The variability of urine volume was explained mainly by the CPf associated with DMI (r.s.d. 5.4 kg/day for an average flow of 24.0 kg/day) and that of fecal water was explained by the proportion of CONC in the diet and DMI
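A minimal version of the regression step, fitting a single predictor (DMI) to total water intake on simulated data; the slope and noise level below are invented for illustration, not the published equations:

```python
import random
import statistics

def ols(x, y):
    """Least-squares fit y = a + b*x, returning (a, b, r_squared)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

random.seed(0)
dmi = [random.uniform(15.0, 25.0) for _ in range(50)]   # kg DM/day (simulated)
twi = [4.5 * d + random.gauss(0.0, 8.0) for d in dmi]   # kg water/day (toy slope)
a, b, r2 = ols(dmi, twi)
```

The study's stepwise selection amounts to repeating this fit while adding the candidate regressor that most increases the partial r² at each step.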
Iannetti, Luigi; Salini, Romolo; Sperandii, Anna Franca; Santarelli, Gino Angelo; Neri, Diana; Di Marzio, Violeta; Romantini, Romina; Migliorati, Giacomo; Baranyi, József
2017-01-02
Traditional Italian pork products can be consumed after variable drying periods, where the temporal decrease of water activity spans from optimal to inactivating values. This makes it necessary to A) consider the bias factor when applying culture-medium-based predictive models to sausage; B) apply the dynamic version (described by differential equations) of those models; C) combine growth and death models in a continuous way, including the highly uncertain growth/no growth range separating the two regions. This paper tests the applicability of published predictive models on the responses of Listeria monocytogenes and Yersinia enterocolitica to dynamic conditions in traditional Italian pork sausage, where the environment changes from growth-supporting to inhibitory conditions, so the growth and death models need to be combined. The effect of indigenous lactic acid bacteria was also taken into account in the predictions. Challenge tests were carried out using such sausages, inoculated separately with L. monocytogenes and Y. enterocolitica, stored for 480 h at 8, 12, 18 and 20°C. The pH was fairly constant, while the water activity changed dynamically. The effects of the environment on the specific growth and death rate of the studied organisms were predicted using previously published predictive models and parameters. Microbial kinetics in many products with a long shelf-life and dynamic internal environment could result in both growth and inactivation, making it difficult to estimate the bacterial concentration at the time of consumption by means of commonly available predictive software tools. Our prediction of the effect of the storage environment, where the water activity gradually decreases during a drying period, is designed to overcome these difficulties. The methodology can be used generally to predict and visualise bacterial kinetics under temporal variation of environments, which is vital when assessing the safety of many similar products.
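The idea of joining growth and death models across a drying period can be sketched as a single rate function switched by water activity; the rate constants, growth limit, and drying schedule below are invented for illustration, not the published model parameters:

```python
def rate(aw, aw_min=0.92, mu_max=0.3, k_death=0.2):
    """Specific rate (1/h): growth above the water-activity limit,
    death below it -- the two models joined continuously at aw_min."""
    if aw > aw_min:
        return mu_max * (aw - aw_min) / (1.0 - aw_min)
    return -k_death * (aw_min - aw) / aw_min

def simulate(hours=480, dt=1.0, ln_n0=7.0):
    """Euler integration of d(ln N)/dt = rate(aw(t)) while drying."""
    ln_n, traj = ln_n0, []
    for step in range(int(hours / dt) + 1):
        t = step * dt
        aw = 0.97 - 0.0002 * t   # toy linear drying schedule
        traj.append((t, ln_n))
        ln_n += rate(aw) * dt
    return traj

traj = simulate()
peak = max(v for _, v in traj)
```

The trajectory grows while water activity stays above the limit and declines after it crosses below, reproducing qualitatively the growth-then-inactivation pattern the paper models with differential equations.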
Predicting Road Conditions with Internet Search
2016-01-01
Traffic congestion is an important problem both on an individual and on a societal level, and much research has been done to explain and prevent its emergence. There are currently many systems which provide a reasonably good picture of actual road traffic by employing either fixed measurement points on highways or so-called “floating car data”, i.e., velocity and location data from roaming, networked, GPS-enabled members of traffic. Some of these systems also offer forecasting of road conditions based on such historical data. To my knowledge there is as yet no system which offers advance notice of road conditions based on a signal which is guaranteed to occur in advance of these conditions, and this is the novelty of this paper. Google Search intensity for the German word stau (i.e., traffic jam) peaks 2 hours ahead of the number of traffic jam reports as reported by the ADAC, a well-known German automobile club and the largest of its kind in Europe. This is true both in the morning (7 am to 9 am) and in the evening (4 pm to 6 pm). The main result of this paper is that after controlling for time-of-day and day-of-week effects we can still explain a significant additional portion of the variation in the number of traffic jam reports with Google Trends, and we can thus explain well over 80% of the variation of road conditions using Google search activity. A one percent increase in Google stau searches implies a 0.4 percent increase in traffic jams. Our paper is a proof of concept that aggregate, timely delivered behavioural data can help fine-tune modern societies, and prompts for more research with better, more disaggregated data in order to also achieve practical solutions.
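The reported elasticity corresponds to the slope of a log-log regression of jam reports on search intensity; the sketch below recovers such a slope from simulated series in which the 0.4 exponent is built into the toy data:

```python
import math
import random

random.seed(42)
# simulated hourly series: jam reports scale as searches^0.4 by construction
searches = [math.exp(random.gauss(4.0, 0.5)) for _ in range(200)]
jams = [s ** 0.4 * math.exp(random.gauss(0.0, 0.05)) for s in searches]

log_s = [math.log(v) for v in searches]
log_j = [math.log(v) for v in jams]
ms = sum(log_s) / len(log_s)
mj = sum(log_j) / len(log_j)
# slope of the log-log regression = elasticity of jams w.r.t. searches
beta = (sum((a - ms) * (b - mj) for a, b in zip(log_s, log_j))
        / sum((a - ms) ** 2 for a in log_s))
```

The paper additionally regresses out time-of-day and day-of-week dummies before reading off the elasticity; that step is omitted here for brevity.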
Change of pH during excess sludge fermentation under alkaline, acidic and neutral conditions.
Yuan, Yue; Peng, Yongzhen; Liu, Ye; Jin, Baodan; Wang, Bo; Wang, Shuying
2014-12-01
The change in pH during excess sludge (ES) fermentation at varying sludge concentrations was investigated in a series of reactors under alkaline, acidic, and neutral conditions. The results showed that the changes were significantly affected by fermentative conditions, with pH following a distinct profile under each condition. When ES was fermented under alkaline conditions (pH 10±1), pH decreased. At the beginning of alkaline fermentation, pH dropped sharply, over intervals of 4 h, 4 h, and 5 h at sludge concentrations of 8665.6 mg/L, 6498.8 mg/L, and 4332.5 mg/L, respectively, before the decline moderated. Under acidic conditions, by contrast, pH increased from 4 to 5. Finally, under neutral conditions pH exhibited a decrease followed by an increase over the entire fermentation process. Further study showed that short-chain fatty acids (SCFAs), ammonia nitrogen and cations contributed to pH change under the various fermentation conditions. This study presents a novel strategy based on pH change to predict whether SCFAs have reached their stable stage. Copyright © 2014 Elsevier Ltd. All rights reserved.
Broughton, Heather M; Govender, Danny; Shikwambana, Purvance; Chappell, Patrick; Jolles, Anna
2017-06-01
The International Species Information System has set forth an extensive database of reference intervals for zoologic species, allowing veterinarians and game park officials to distinguish normal health parameters from underlying disease processes in captive wildlife. However, several recent studies comparing reference values from captive and free-ranging animals have found significant variation between populations, necessitating the development of separate reference intervals in free-ranging wildlife to aid in the interpretation of health data. Thus, this study characterizes reference intervals for six biochemical analytes, eleven hematologic or immune parameters, and three hormones using samples from 219 free-ranging African lions (Panthera leo) captured in Kruger National Park, South Africa. Using the original sample population, exclusion criteria based on physical examination were applied to yield a final reference population of 52 clinically normal lions. Reference intervals were then generated via 90% confidence intervals on log-transformed data using parametric bootstrapping techniques. In addition to the generation of reference intervals, linear mixed-effect models and generalized linear mixed-effect models were used to model associations of each focal parameter with the following independent variables: age, sex, and body condition score. Age and sex were statistically significant drivers for changes in hepatic enzymes, renal values, hematologic parameters, and leptin, a hormone related to body fat stores. Body condition was positively correlated with changes in monocyte counts. Given the large variation in reference values taken from captive versus free-ranging lions, it is our hope that this study will serve as a baseline for future clinical evaluations and biomedical research targeting free-ranging African lions.
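A stripped-down version of the parametric bootstrap on log-transformed data, with invented analyte values; the published analysis added 90% confidence intervals around the endpoints and exclusion criteria not reproduced here:

```python
import math
import random
import statistics

def boot_reference_interval(values, n_boot=500, seed=7):
    """Parametric bootstrap: fit a log-normal, resample from it, and
    collect the 2.5th/97.5th percentiles of each resample."""
    rng = random.Random(seed)
    logs = [math.log(v) for v in values]
    mu, sd = statistics.fmean(logs), statistics.stdev(logs)
    lows, highs = [], []
    for _ in range(n_boot):
        draw = [rng.gauss(mu, sd) for _ in values]
        m, s = statistics.fmean(draw), statistics.stdev(draw)
        lows.append(math.exp(m - 1.96 * s))
        highs.append(math.exp(m + 1.96 * s))
    lows.sort()
    highs.sort()
    return lows[n_boot // 2], highs[n_boot // 2]  # bootstrap medians

rng = random.Random(1)
# invented log-normal analyte values for 52 "clinically normal" animals
data = [math.exp(rng.gauss(2.0, 0.3)) for _ in range(52)]
lo, hi = boot_reference_interval(data)
```

Working on the log scale and back-transforming is what accommodates the right-skew typical of serum analytes.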
Comparison between volatility return intervals of the S&P 500 index and two common models
NASA Astrophysics Data System (ADS)
Vodenska-Chitkushev, I.; Wang, F. Z.; Weber, P.; Yamasaki, K.; Havlin, S.; Stanley, H. E.
2008-01-01
We analyze the S&P 500 index data for the 13-year period, from January 1, 1984 to December 31, 1996, with one data point every 10 min. For this database, we study the distribution and clustering of volatility return intervals, which are defined as the time intervals between successive volatilities above a certain threshold q. We find that the long memory in the volatility leads to a clustering of above-median as well as below-median return intervals. In addition, it turns out that the short return intervals form larger clusters compared to the long return intervals. When comparing the empirical results to the ARMA-FIGARCH and fBm models for volatility, we find that the fBm model predicts scaling better than the ARMA-FIGARCH model, which is consistent with the argument that both ARMA-FIGARCH and fBm capture the long-term dependence in return intervals to a certain extent, but only fBm accounts for the scaling. We perform the Student's t-test to compare the empirical data with the shuffled records, ARMA-FIGARCH and fBm. We analyze separately the clusters of above-median return intervals and the clusters of below-median return intervals for different thresholds q. We find that the empirical data are statistically different from the shuffled data for all thresholds q. Our results also suggest that the ARMA-FIGARCH model is statistically different from the S&P 500 for intermediate q for both above-median and below-median clusters, while fBm is statistically different from S&P 500 for small and large q for above-median clusters and for small q for below-median clusters. Neither model can fully explain the entire regime of q studied.
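Return intervals above a threshold q can be extracted as follows; the series here is i.i.d. noise, so, unlike the S&P 500 volatility data, it exhibits no long memory or clustering:

```python
import random

def return_intervals(series, q):
    """Gaps between successive observations exceeding threshold q."""
    hits = [t for t, v in enumerate(series) if v > q]
    return [b - a for a, b in zip(hits, hits[1:])]

rng = random.Random(3)
vol = [abs(rng.gauss(0.0, 1.0)) for _ in range(5000)]  # i.i.d. stand-in series
iv = return_intervals(vol, 2.0)
mean_iv = sum(iv) / len(iv)
```

The paper's clustering analysis then splits `iv` into runs of above-median and below-median intervals; for this memoryless stand-in those runs are geometrically distributed, which is exactly the baseline the shuffled-records test compares against.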
Explorations in Statistics: Confidence Intervals
ERIC Educational Resources Information Center
Curran-Everett, Douglas
2009-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This third installment of "Explorations in Statistics" investigates confidence intervals. A confidence interval is a range that we expect, with some level of confidence, to include the true value of a population parameter…
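A minimal confidence-interval computation of the kind the column explores, using the normal approximation on simulated data with a known population mean:

```python
import random
import statistics

def mean_ci(sample, z=1.96):
    """Normal-approximation 95% confidence interval for the mean."""
    m = statistics.fmean(sample)
    se = statistics.stdev(sample) / len(sample) ** 0.5
    return m - z * se, m + z * se

random.seed(0)
xs = [random.gauss(10.0, 2.0) for _ in range(100)]  # true mean is 10
lo, hi = mean_ci(xs)
```

Repeating this with fresh samples and counting how often the interval covers 10 is the kind of active exploration the article advocates: about 95% of replications should succeed.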
Henson, Mary P.; Bergstedt, Roger A.; Adams, Jean V.
2003-01-01
The ability to predict when sea lampreys (Petromyzon marinus) will metamorphose from the larval phase to the parasitic phase is essential to the operation of the sea lamprey control program. During the spring of 1994, two populations of sea lamprey larvae from two rivers were captured, measured, weighed, implanted with coded wire tags, and returned to the same sites in the streams from which they were taken. Sea lampreys were recovered in the fall, after metamorphosis would have occurred, and checked for the presence of a tag. When the spring data were compared to the fall data it was found that the minimum requirements (length ≥ 120 mm, weight ≥ 3 g, and condition factor ≥ 1.50) suggested for metamorphosis did define a pool of larvae capable of metamorphosing. However, logistic regressions that relate the probability of metamorphosis to size are necessary to predict metamorphosis in a population. The data indicated, based on cross-validation, that weight measurements alone predicted metamorphosis with greater precision than length or condition factor in both the Marengo and Amnicon rivers. Based on the Akaike Information Criterion, weight alone was a better predictor in the Amnicon River, but length and condition factor combined predicted metamorphosis better in the Marengo River. There would be no additional cost if weight alone were used instead of length. However, if length and weight were measured the gain in predictive power would not be enough to justify the additional cost.
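The logistic-regression step, relating metamorphosis probability to weight, can be sketched with a tiny gradient-descent fit; the weights and outcome model below are simulated, not the Marengo or Amnicon data:

```python
import math
import random

def fit_logistic(x, y, lr=0.1, epochs=2000):
    """Gradient-descent fit of P(y=1) = sigmoid(a + b*x)."""
    a = b = 0.0
    n = len(x)
    for _ in range(epochs):
        ga = gb = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(a + b * xi)))
            ga += (p - yi) / n
            gb += (p - yi) * xi / n
        a -= lr * ga
        b -= lr * gb
    return a, b

rng = random.Random(2)
# simulated larval weights (g) and metamorphosis outcomes
w = [rng.uniform(1.0, 6.0) for _ in range(200)]
y = [1 if rng.random() < 1.0 / (1.0 + math.exp(-(wi - 3.5))) else 0
     for wi in w]
a, b = fit_logistic(w, y)
```

A positive fitted slope means heavier larvae get higher predicted metamorphosis probabilities, mirroring the study's finding that weight alone is a strong predictor.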
Classic conditioning in aged rabbits: delay, trace, and long-delay conditioning.
Solomon, P R; Groccia-Ellison, M E
1996-06-01
Young (0.5 years) and aged (2+, 3+, and 4+ years) rabbits underwent acquisition of the classically conditioned nictitating membrane response in a delay (500-ms conditioned stimulus [CS], 400-ms interstimulus interval [ISI]), long-delay (1,000-ms CS, 900-ms ISI), or trace (500-ms CS, 400-ms stimulus-free period) paradigm. Collapsing across age groups, there is a general tendency for animals to acquire trace conditioning more slowly than delay conditioning. Collapsing across conditioning paradigms, there is a general tendency for aged animals to acquire more slowly than younger animals. Of greater significance, however, are the age differences in the different conditioning paradigms. In the delay and long-delay paradigms, significant conditioning deficits first appeared in the 4+-year-old group. In the trace conditioning paradigm, significant conditioning deficits became apparent in the 2+-year-old animals.
Methods for evaluating the predictive accuracy of structural dynamic models
NASA Technical Reports Server (NTRS)
Hasselman, Timothy K.; Chrostowski, Jon D.
1991-01-01
Modeling uncertainty is defined in terms of the difference between predicted and measured eigenvalues and eigenvectors. Data compiled from 22 sets of analysis/test results were used to create statistical databases for large truss-type space structures and both pretest and posttest models of conventional satellite-type space structures. Modeling uncertainty is propagated through the model to produce intervals of uncertainty on frequency response functions, both amplitude and phase. This methodology was used successfully to evaluate the predictive accuracy of several structures, including the NASA CSI Evolutionary Structure tested at Langley Research Center. Test measurements for this structure were within ± one-sigma intervals of predicted accuracy for the most part, demonstrating the validity of the methodology and computer code.
Kimura, Kenta; Kimura, Motohiro; Iwaki, Sunao
2016-10-01
The present study aimed to investigate whether or not the evaluative processing of action feedback can be modulated by temporal prediction. For this purpose, we examined the effects of the predictability of the timing of action feedback on an ERP effect that indexed the evaluative processing of action feedback, that is, an ERP effect that has been interpreted as a feedback-related negativity (FRN) elicited by "bad" action feedback or a reward positivity (RewP) elicited by "good" action feedback. In two types of experimental blocks, the participants performed a gambling task in which they chose one of two cards and received an action feedback that indicated monetary gain or loss. In fixed blocks, the time interval between the participant's choice and the onset of the action feedback was fixed at 0, 500, or 1,000 ms in separate blocks; thus, the timing of action feedback was predictable. In mixed blocks, the time interval was randomly chosen from the same three intervals with equal probability; thus, the timing was less predictable. The results showed that the FRN/RewP was smaller in mixed than fixed blocks for the 0-ms interval trial, whereas there was no difference between the two block types for the 500-ms and 1,000-ms interval trials. Interestingly, the smaller FRN/RewP was due to the modulation of gain ERPs rather than loss ERPs. These results suggest that temporal prediction can modulate the evaluative processing of action feedback, and particularly good feedback, such as that which indicates monetary gain. © 2016 Society for Psychophysiological Research.
NASA Astrophysics Data System (ADS)
Labahn, Jeffrey William; Devaud, Cecile
2017-05-01
A Reynolds-Averaged Navier-Stokes (RANS) simulation of the semi-industrial International Flame Research Foundation (IFRF) furnace is performed using a non-adiabatic Conditional Source-term Estimation (CSE) formulation. This represents the first time that a CSE formulation, which accounts for the effect of radiation on the conditional reaction rates, has been applied to a large scale semi-industrial furnace. The objective of the current study is to assess the capabilities of CSE to accurately reproduce the velocity field, temperature, species concentration and nitrogen oxides (NOx) emission for the IFRF furnace. The flow field is solved using the standard k-ε turbulence model and detailed chemistry is included. NOx emissions are calculated using two different methods. Predicted velocity profiles are in good agreement with the experimental data. The predicted peak temperature occurs closer to the centreline, as compared to the experimental observations, suggesting that the mixing between the fuel jet and vitiated air jet may be overestimated. Good agreement between the species concentrations, including NOx, and the experimental data is observed near the burner exit. Farther downstream, the centreline oxygen concentration is found to be underpredicted. Predicted NOx concentrations are in good agreement with experimental data when calculated using the method of Peters and Weber. The current study indicates that RANS-CSE can accurately predict the main characteristics seen in a semi-industrial IFRF furnace.
Modeling the formation of methane hydrate-bearing intervals in fine-grained sediments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malinverno, Alberto; Cook, Ann; Daigle, Hugh
Sediment grain size exerts a fundamental control on how methane hydrates are distributed within the pore space. Fine-grained muds are the predominant sediments in continental margins, and hydrates in these sediments have often been observed in semi-vertical veins and fractures. In several instances, these hydrate veins/fractures are found in discrete depth intervals a few tens of meters thick within the gas hydrate stability zone (GHSZ), surrounded by hydrate-free sediments above and below. As they are not obviously connected with free gas occurring beneath the base of the GHSZ, these isolated hydrate-bearing intervals have been interpreted as formed by microbial methane generated in situ. To investigate further the formation of these hydrate deposits, we applied a time-dependent advection-diffusion-reaction model that includes the effects of sedimentation, solute diffusion, and microbial methane generation. The microbial methane generation term depends on the amount of metabolizable organic carbon deposited at the seafloor, which is degraded at a prescribed rate resulting in methane formation beneath the sulfate reduction zone. In the model, methane hydrate precipitates once the dissolved methane concentration is greater than solubility, or hydrate dissolves if concentration goes below solubility. If the deposition of organic carbon at the seafloor is kept constant in time, we found that the predicted amounts of hydrate formed in discrete intervals within the GHSZ are much less than those estimated from observations. We then investigated the effect of temporal variations in the deposition of organic carbon. If greater amounts of organic carbon are deposited during some time interval, methane generation is enhanced during burial in the corresponding sediment interval. With variations in organic carbon deposition that are consistent with observations in continental margin sediments, we were able to reproduce the methane hydrate contents estimated in discrete
Interval-based reconstruction for uncertainty quantification in PET
NASA Astrophysics Data System (ADS)
Kucharczak, Florentin; Loquin, Kevin; Buvat, Irène; Strauss, Olivier; Mariano-Goulart, Denis
2018-02-01
A new directed interval-based tomographic reconstruction algorithm, called non-additive interval based expectation maximization (NIBEM), is presented. It uses non-additive modeling of the forward operator that provides intervals instead of single-valued projections. The detailed approach is an extension of the maximum-likelihood expectation-maximization algorithm based on intervals. The main motivation for this extension is that the resulting intervals have appealing properties for estimating the statistical uncertainty associated with the reconstructed activity values. After reviewing previously published theoretical concepts related to interval-based projectors, this paper describes the NIBEM algorithm and gives examples that highlight the properties and advantages of this interval-valued reconstruction.
Mixing-model Sensitivity to Initial Conditions in Hydrodynamic Predictions
NASA Astrophysics Data System (ADS)
Bigelow, Josiah; Silva, Humberto; Truman, C. Randall; Vorobieff, Peter
2017-11-01
Amagat and Dalton mixing-models were studied to compare their thermodynamic prediction of shock states. Numerical simulations with the Sandia National Laboratories shock hydrodynamic code CTH modeled University of New Mexico (UNM) shock tube laboratory experiments shocking a 1:1 molar mixture of helium (He) and sulfur hexafluoride (SF6). Five input parameters were varied for sensitivity analysis: driver section pressure, driver section density, test section pressure, test section density, and mixture ratio (mole fraction). We show via incremental Latin hypercube sampling (LHS) analysis that significant differences exist between Amagat and Dalton mixing-model predictions. The differences observed in predicted shock speeds, temperatures, and pressures grow more pronounced with higher shock speeds. Supported by NNSA Grant DE-0002913.
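A stdlib sketch of Latin hypercube sampling over two of the five input parameters; the ranges below are hypothetical stand-ins, not the UNM shock-tube values:

```python
import random

def latin_hypercube(n, bounds, seed=0):
    """n samples with exactly one point per stratum in each
    dimension, strata randomly paired across dimensions."""
    rng = random.Random(seed)
    cols = []
    for lo, hi in bounds:
        strata = [(i + rng.random()) / n for i in range(n)]
        rng.shuffle(strata)
        cols.append([lo + u * (hi - lo) for u in strata])
    return list(zip(*cols))

# hypothetical ranges: driver pressure (kPa) and He mole fraction
pts = latin_hypercube(10, [(100.0, 500.0), (0.3, 0.7)])
```

Compared with independent random sampling, the one-point-per-stratum constraint guarantees each parameter's full range is covered even with few simulation runs, which is why LHS suits expensive hydrodynamic codes.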
Matsui, T; Arai, I; Gotoh, S; Hattori, H; Takase, B; Kikuchi, M; Ishihara, M
2005-10-01
The impaired balance of the low-frequency/high-frequency ratio obtained from spectral components of RR intervals can be a diagnostic test for sepsis. In addition, it is known that a reduction of heart rate variability (HRV) is useful in identifying septic patients at risk of the development of multiple organ dysfunction syndrome (MODS). We have reported a non-contact method using a microwave radar to monitor the heart and respiratory rates of a healthy person placed inside an isolator or of experimental animals exposed to toxic materials. With the purpose of preventing secondary exposure of medical personnel to toxic materials under biochemical hazard conditions, we designed a novel apparatus for non-contact measurement of HRV using a 1215 MHz microwave radar, a high-pass filter, and a personal computer. The microwave radar monitors only the small reflected waves from the subject's chest wall, which are modulated by the cardiac and respiratory motion. The high-pass filter enhances the cardiac signal and attenuates the respiratory signal. In a human trial, RR intervals derived from the non-contact apparatus significantly correlated with those derived from ECG (r=0.98, P<0.0001). The non-contact apparatus showed a similar power spectrum of RR intervals to that of ECG. Our non-contact HRV measurement apparatus appears promising for future pre-hospital monitoring of septic patients or for predicting MODS patients, inside isolators or in the field for mass casualties under biochemical hazard circumstances.
Why noise is useful in functional and neural mechanisms of interval timing?
2013-01-01
Background The ability to estimate durations in the seconds-to-minutes range - interval timing - is essential for survival and adaptation, and its impairment leads to severe cognitive and/or motor dysfunctions. The response rate near a memorized duration has a Gaussian shape centered on the to-be-timed interval (criterion time). The width of the Gaussian-like distribution of responses increases linearly with the criterion time, i.e., interval timing obeys the scalar property. Results We presented analytical and numerical results based on the striatal beat frequency (SBF) model showing that parameter variability (noise) mimics behavioral data. A key functional block of the SBF model is the set of oscillators that provides the time base for the entire timing network. Implementing the oscillators block as simplified phase (cosine) oscillators has the additional advantage that it is analytically tractable. We also checked numerically that the scalar property emerges in the presence of memory variability by using biophysically realistic Morris-Lecar oscillators. First, we predicted analytically and tested numerically that in a noise-free SBF model the output function can be approximated by a Gaussian. However, in a noise-free SBF model the width of the Gaussian envelope is independent of the criterion time, which violates the scalar property. We showed analytically and verified numerically that small fluctuations of the memorized criterion time lead to the scalar property of interval timing. Conclusions Noise is ubiquitous in the form of small fluctuations of the intrinsic frequencies of the neural oscillators, errors in recording/retrieving stored information related to the criterion time, fluctuations in neurotransmitter concentrations, etc. Our model suggests that biological noise plays an essential functional role in SBF interval timing. PMID:23924391
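A toy version of this mechanism can be simulated: cosine oscillators are re-aligned to a memorized criterion time that carries multiplicative noise, and a crude squared coincidence-detector readout is averaged over trials. The oscillator band, readout, and noise level are simplifying assumptions for illustration, not the paper's model details:

```python
import numpy as np

def sbf_output(ts, freqs, T, sigma, n_trials=100, rng=None):
    """Toy SBF-like readout: on each trial the memorized criterion time is
    stored with multiplicative noise, Tm = T*(1 + sigma*eps); the readout is
    the squared, normalized sum of cosine oscillators re-aligned to Tm
    (a crude coincidence detector), averaged over trials."""
    rng = np.random.default_rng(rng)
    out = np.zeros_like(ts)
    for _ in range(n_trials):
        Tm = T * (1.0 + sigma * rng.standard_normal())
        out += np.cos(2 * np.pi * np.outer(freqs, ts - Tm)).mean(axis=0) ** 2
    return out / n_trials

freq_rng = np.random.default_rng(42)
freqs = freq_rng.uniform(1.0, 3.0, 20)   # hypothetical oscillator band, Hz
ts = np.linspace(0.0, 40.0, 2001)
short = sbf_output(ts, freqs, T=10.0, sigma=0.05, rng=1)
long_ = sbf_output(ts, freqs, T=30.0, sigma=0.05, rng=1)
```

Because the criterion-time noise is multiplicative (sigma*T in absolute terms), the averaged response peak around T = 30 s comes out wider than the one around T = 10 s, mimicking the scalar property described in the abstract.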
Chang, Young-Soo; Hong, Sung Hwa; Kim, Eun Yeon; Choi, Ji Eun; Chung, Won-Ho; Cho, Yang-Sun; Moon, Il Joon
2018-05-18
Despite recent advancement in the prediction of cochlear implant outcomes, the benefit of bilateral procedures compared to bimodal stimulation, and how to predict speech perception outcomes of sequential bilateral cochlear implantation from bimodal auditory performance in children, remain unclear. This investigation was performed: (1) to determine the benefit of sequential bilateral cochlear implantation and (2) to identify factors associated with its outcome. Observational and retrospective study. We retrospectively analyzed 29 patients with sequential cochlear implantation following a bimodal-fitting condition. Audiological evaluations comprised categories of auditory performance scores, speech perception with monosyllabic and disyllabic words, and the Korean version of Ling. Evaluations were performed before sequential cochlear implantation in the bimodal-fitting condition (CI1+HA) and one year after sequential implantation in the bilateral cochlear implant condition (CI1+CI2). The Good Performance Group (GP) was defined as follows: 90% or higher in monosyllabic and disyllabic tests with the auditory-only condition, or 20% or higher improvement of the scores with CI1+CI2. Age at first implantation, inter-implant interval, categories of auditory performance score, and various comorbidities were analyzed by logistic regression. Compared to CI1+HA, CI1+CI2 provided significant benefit in categories of auditory performance, speech perception, and Korean version of Ling results. Preoperative categories of auditory performance scores were the only factor associated with being in the GP (odds ratio=4.38, 95% confidence interval=1.07-17.93, p=0.04). Children with limited language development in the bimodal condition should be considered for sequential bilateral cochlear implantation, and the preoperative categories of auditory performance score could be used as a predictor of speech perception after sequential
Chen, Xiaodong; Chen, Minglong; Wang, Yingying; Yang, Bing; Ju, Weizhu; Zhang, Fengxiang; Cao, Kejiang
2016-11-01
We sought to investigate variation of the atrial electromechanical interval after catheter ablation in patients with persistent atrial fibrillation, using pulse-wave Doppler (PW) and pulse-wave tissue Doppler imaging (PW-TDI). A total of 25 consecutive in-patients with persistent atrial fibrillation who restored sinus rhythm after the ablation procedure were recruited in our cardiac center. Echocardiography was performed on each patient at 2 hours, 1 day, 5 days, 1 month and 3 months after ablation therapy, and atrial electromechanical delay was measured simultaneously by PW and PW-TDI. There was no significant difference between PW and PW-TDI in measuring atrial electromechanical delay. However, at postoperative 2 hours, peak A detection rates were numerically but nonsignificantly greater by PW-TDI than by PW. Second, there was a significant decreasing trend in the atrial electromechanical interval from postoperative 2 hours to 3 months, but only the postoperative 2-hour interval was significantly greater than the intervals at the other time points. Lastly, patients without a measurable postoperative 2-hour atrial electromechanical interval had a significantly longer duration of atrial fibrillation than those with a measurable interval, whether measured by PW or by PW-TDI. In patients with persistent atrial fibrillation, the atrial electromechanical interval may decrease significantly within the first 24 hours after ablation but remain consistent later, and was significantly related to patients' duration of atrial fibrillation. The atrial electromechanical interval, as a potential predictive factor, is recommended to be measured by either PW or PW-TDI after 24 hours, once patients have recovered sinus rhythm after radiofrequency ablation. © 2016 by the Journal of Biomedical Research. All rights reserved.
Marinaccio, Christian; Giudice, Giuseppe; Nacchiero, Eleonora; Robusto, Fabio; Opinto, Giuseppina; Lastilla, Gaetano; Maiorano, Eugenio; Ribatti, Domenico
2016-08-01
that the correlation between tumor cell proliferation and neo-angiogenesis in interval sentinel lymph nodes in melanoma could be used as a good predictive marker to distinguish interval positive sentinel lymph nodes with micrometastasis from interval positive lymph nodes with macrometastasis subgroups.
A Novel Method for Satellite Maneuver Prediction
NASA Astrophysics Data System (ADS)
Shabarekh, C.; Kent-Bryant, J.; Keselman, G.; Mitidis, A.
2016-09-01
A space operations tradecraft consisting of detect-track-characterize-catalog is insufficient for maintaining Space Situational Awareness (SSA) as space becomes increasingly congested and contested. In this paper, we apply analytical methodology from the Geospatial-Intelligence (GEOINT) community to a key challenge in SSA: predicting where and when a satellite may maneuver in the future. We developed a machine learning approach to probabilistically characterize Patterns of Life (PoL) for geosynchronous (GEO) satellites. PoL are repeatable, predictable behaviors that an object exhibits within a context and are driven by spatio-temporal, relational, environmental and physical constraints. An example of PoL is station-keeping maneuvers in GEO, which become generally predictable as the satellite re-positions itself to account for orbital perturbations. In an earlier publication, we demonstrated the ability to probabilistically predict maneuvers of the Galaxy 15 (NORAD ID: 28884) satellite with high confidence eight days in advance of the actual maneuver. Additionally, we were able to detect deviations from expected PoL within hours of the predicted maneuver [6]. This was done with a custom unsupervised machine learning algorithm, the Interval Similarity Model (ISM), which learns repeating intervals of maneuver patterns from unlabeled historical observations and then predicts future maneuvers. In this paper, we introduce a supervised machine learning algorithm that works in conjunction with the ISM to produce a probabilistic distribution of when future maneuvers will occur. The supervised approach uses a Support Vector Machine (SVM) to process the orbit state, whereas the ISM processes the temporal intervals between maneuvers and the physics-based characteristics of the maneuvers. This multiple-model approach capitalizes on the mathematical strengths of each respective algorithm while incorporating multiple features and inputs. Initial findings indicate that the combined
Schott, Whitney; Aurino, Elisabetta; Penny, Mary E; Behrman, Jere R
2017-10-24
We investigated intergenerational associations of adolescent mothers' and grandmothers' anthropometrics and schooling with adolescent mothers' offspring's anthropometrics in Ethiopia, India, Peru, and Vietnam. We examined birthweight (n = 283), birthweight Z-score (BWZ), conditional growth in weight-for-age Z-score (cWAZ, residuals from a regression of WAZ at last survey round on BWZ, sex, and age), and height-for-age Z-score (HAZ) of children born to older cohort adolescent girls in the Young Lives study. Our key independent variables were adolescent mothers' body size: HAZ and body-mass-index-for-age Z-score (BMIZ) at age 8, conditional HAZ (cHAZ, residuals from a regression of HAZ at the end of a growth period on prior HAZ, age, and sex), conditional BMIZ growth (cBMIZ, calculated analogously), and grandmaternal BMIZ, HAZ, and schooling. We adjusted for child, maternal, and household characteristics. Adolescent mothers' cHAZ (ages 8-15) predicted birthweight (β = 130 g, 95% confidence interval (CI) 31-228), BWZ (β = 0.31, CI 0.09-0.53), and cWAZ (β = 0.28, CI 0.04-0.51). Adolescent mothers' BMIZ at age 8 predicted birthweight (β = 79 g, CI 16-43) and BWZ (β = 0.22, CI 0.08-0.36). Adolescent mothers' cBMIZ (ages 12-15) predicted child cWAZ and HAZ. Grandmothers' schooling predicted grandchild birthweight (β = 22 g, CI 1-44) and BWZ (β = 0.05, CI 0.01-0.10). © 2017 New York Academy of Sciences.
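The "conditional" measures used above (cWAZ, cHAZ, cBMIZ) are residuals from auxiliary regressions, so by construction they capture growth not already predicted by the conditioning variables. A minimal sketch on synthetic data (all variable names and effect sizes are invented, not Young Lives values):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
# synthetic cohort (illustrative only): birthweight Z, sex, age, later WAZ
bwz = rng.standard_normal(n)
sex = rng.integers(0, 2, n).astype(float)
age = rng.uniform(14.5, 15.5, n)
waz = 0.6 * bwz + 0.1 * sex + 0.8 * rng.standard_normal(n)

# conditional WAZ (cWAZ): residual of WAZ regressed on BWZ, sex, and age,
# i.e. growth in weight-for-age not already explained by size at birth
X = np.column_stack([np.ones(n), bwz, sex, age])
beta, *_ = np.linalg.lstsq(X, waz, rcond=None)
cwaz = waz - X @ beta
```

Because least-squares residuals are orthogonal to every regressor, cWAZ is uncorrelated with BWZ, which is what lets the study interpret it as growth independent of birth size.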
Survey of Condition Indicators for Condition Monitoring Systems (Open Access)
2014-09-29
Currently, the wind energy industry is swiftly changing its maintenance strategy from schedule-based maintenance to predictive-based maintenance. Condition monitoring systems (CMS) play an important role in the predictive maintenance cycle. As condition monitoring systems are being adopted by more and more OEM and O&M service providers from the wind energy industry, it is
Tomao, Federica; D'Incalci, Maurizio; Biagioli, Elena; Peccatori, Fedro A; Colombo, Nicoletta
2017-09-15
The platinum-free interval is the most important predictive factor of a response to subsequent lines of chemotherapy and the most important prognostic factor for progression-free and overall survival in patients with recurrent epithelial ovarian cancer. A nonplatinum regimen is generally considered the most appropriate approach when the disease recurs very early after the end of chemotherapy, whereas platinum-based chemotherapy is usually adopted when the platinum-free interval exceeds 12 months. However, the therapeutic management of patients with intermediate sensitivity (ie, when the relapse occurs between 6 and 12 months) remains debatable. Preclinical and clinical data suggest that the extension of platinum-free interval (using a nonplatinum-based regimen) might restore platinum sensitivity, thus allowing survival improvement. The objective of this review was to critically analyze preclinical and clinical evidences supporting this hypothesis. Cancer 2017;123:3450-9. © 2017 American Cancer Society. © 2017 American Cancer Society.
Lack of predictive power of trait fear and anxiety for conditioned pain modulation (CPM).
Horn-Hofmann, Claudia; Priebe, Janosch A; Schaller, Jörg; Görlitz, Rüdiger; Lautenbacher, Stefan
2016-12-01
In recent years the association of conditioned pain modulation (CPM) with trait fear and anxiety has become a hot topic in pain research due to the assumption that such variables may explain the low CPM efficiency in some individuals. However, empirical evidence concerning this association is still equivocal. Our study is the first to investigate the predictive power of fear and anxiety for CPM by using a well-established psycho-physiological measure of trait fear, i.e. startle potentiation, in addition to two self-report measures of pain-related trait anxiety. Forty healthy, pain-free participants (female: N = 20; age: M = 23.62 years) underwent two experimental blocks in counter-balanced order: (1) a startle paradigm with affective picture presentation and (2) a CPM procedure with hot water as conditioning stimulus (CS) and contact heat as test stimulus (TS). At the end of the experimental session, pain catastrophizing (PCS) and pain anxiety (PASS) were assessed. PCS score, PASS score and startle potentiation to threatening pictures were entered as predictors in a linear regression model with CPM magnitude as criterion. We were able to show an inhibitory CPM effect in our sample: pain ratings of the heat stimuli were significantly reduced during hot water immersion. However, CPM was neither predicted by self-report of pain-related anxiety nor by startle potentiation as psycho-physiological measure of trait fear. These results corroborate previous negative findings concerning the association between trait fear/anxiety and CPM efficiency and suggest that shifting the focus from trait to state measures might be promising.
Differential Effects of the Cannabinoid Agonist WIN55,212-2 on Delay and Trace Eyeblink Conditioning
Steinmetz, Adam B.; Freeman, John H.
2014-01-01
Central cannabinoid-1 receptors (CB1R) play a role in the acquisition of delay eyeblink conditioning but not trace eyeblink conditioning in humans and animals. However, it is not clear why trace conditioning is immune to the effects of cannabinoid receptor compounds. The current study examined the effects of variants of delay and trace conditioning procedures to elucidate the factors that determine the effects of CB1R agonists on eyeblink conditioning. In Experiment 1, rats were administered the cannabinoid agonist WIN55,212-2 during delay, long delay, or trace conditioning. Rats were impaired during delay and long delay but not trace conditioning; the impairment was greater for long delay than delay conditioning. Trace conditioning was further examined in Experiment 2 by manipulating the trace interval and keeping constant the conditioned stimulus (CS) duration. It was found that when the trace interval was 300 ms or less, WIN55,212-2 administration impaired the rate of learning. Experiment 3 tested whether the trace interval duration or the relative durations of the CS and trace interval were critical parameters influencing the effects of WIN55,212-2 on eyeblink conditioning. Rats were not impaired with a 100 ms CS, 200 ms trace paradigm but were impaired with a 1000 ms CS, 500 ms trace paradigm, indicating that the duration of the trace interval does not matter but that the proportion of the interstimulus interval occupied by the CS relative to the trace period is critical. Taken together, the results indicate that cannabinoid agonists affect cerebellar learning when the CS is longer than the trace interval. PMID:24128358
Santos, Thays Brenner; Kramer-Soares, Juliana Carlota; Favaro, Vanessa Manchim; Oliveira, Maria Gabriela Menezes
2017-10-01
Time plays an important role in conditioning: it is not only possible to associate stimuli with events that overlap, as in delay fear conditioning, but also possible to associate stimuli that are discontinuous in time, as shown in trace conditioning for discrete stimuli. The environment itself can be a powerful conditioned stimulus (CS) and be associated with an unconditioned stimulus (US). Thus, the aim of the present study was to determine the parameters under which contextual fear conditioning occurs through the maintenance of a contextual representation over short and long time intervals. The results showed that a contextual representation can be maintained and associated after 5 s, even in the absence of a 15 s re-exposure to the training context before US delivery. The same effect was not observed with a 24 h interval of discontinuity. Furthermore, an optimal conditioned response with a 5 s interval is produced only when the contexts (of pre-exposure and shock) match. As the pre-limbic cortex (PL) is necessary for the maintenance of a continuous representation of a stimulus, the involvement of the PL in this temporal and contextual processing was investigated. The reversible inactivation of the PL by muscimol infusion impaired the acquisition of contextual fear conditioning with a 5 s interval, but not with a 24 h interval, and did not impair delay fear conditioning. The data provide evidence that short and long intervals of discontinuity have different mechanisms, thus contributing to a better understanding of PL involvement in contextual fear conditioning and providing a model that considers both temporal and contextual factors in fear conditioning. Copyright © 2017 Elsevier Inc. All rights reserved.
Fast transfer of crossmodal time interval training.
Chen, Lihan; Zhou, Xiaolin
2014-06-01
Sub-second time perception is essential for many important sensory and perceptual tasks including speech perception, motion perception, motor coordination, and crossmodal interaction. This study investigates to what extent the ability to discriminate sub-second time intervals acquired in one sensory modality can be transferred to another modality. To this end, we used perceptual classification of visual Ternus display (Ternus in Psychol Forsch 7:81-136, 1926) to implicitly measure participants' interval perception in pre- and posttests and implemented an intra- or crossmodal sub-second interval discrimination training protocol in between the tests. The Ternus display elicited either an "element motion" or a "group motion" percept, depending on the inter-stimulus interval between the two visual frames. The training protocol required participants to explicitly compare the interval length between a pair of visual, auditory, or tactile stimuli with a standard interval or to implicitly perceive the length of visual, auditory, or tactile intervals by completing a non-temporal task (discrimination of auditory pitch or tactile intensity). Results showed that after fast explicit training of interval discrimination (about 15 min), participants improved their ability to categorize the visual apparent motion in Ternus displays, although the training benefits were mild for visual timing training. However, the benefits were absent for implicit interval training protocols. This finding suggests that the timing ability in one modality can be rapidly acquired and used to improve timing-related performance in another modality and that there may exist a central clock for sub-second temporal processing, although modality-specific perceptual properties may constrain the functioning of this clock.
2006-12-01
…Networks to Predict UH-60L Electrical Generator Condition Using (IMD-HUMS) Data, by Evangelos Tourvalis, December 2006. Surviving fragments of the system description: data transfer unit (DTU), remote data concentrator (RDC), main processor unit (MPU), 2 junction boxes (JB1/JB2), 20 drive train and…
Extinction of Pavlovian conditioning: The influence of trial number and reinforcement history.
Chan, C K J; Harris, Justin A
2017-08-01
Pavlovian conditioning is sensitive to the temporal relationship between the conditioned stimulus (CS) and the unconditioned stimulus (US). This has motivated models that describe learning as a process that continuously updates associative strength during the trial or specifically encodes the CS-US interval. These models predict that extinction of responding is also continuous, such that response loss is proportional to the cumulative duration of exposure to the CS without the US. We review evidence showing that this prediction is incorrect, and that extinction is trial-based rather than time-based. We also present two experiments that test the importance of trials versus time on the Partial Reinforcement Extinction Effect (PREE), in which responding extinguishes more slowly for a CS that was inconsistently reinforced with the US than for a consistently reinforced one. We show that increasing the number of extinction trials of the partially reinforced CS, relative to the consistently reinforced CS, overcomes the PREE. However, increasing the duration of extinction trials by the same amount does not overcome the PREE. We conclude that animals learn about the likelihood of the US per trial during conditioning, and learn trial-by-trial about the absence of the US during extinction. Moreover, what they learn about the likelihood of the US during conditioning affects how sensitive they are to the absence of the US during extinction. Copyright © 2017 Elsevier B.V. All rights reserved.
Gao, Yong-Ming; Wan, Ping
2002-06-01
Screening markers efficiently is the foundation of mapping QTLs by composite interval mapping. Main and interaction markers, once distinguished, can be used not only for background control of genetic variation but also to construct intervals for two-way searches when mapping QTLs with epistasis, which saves considerable computation time. The efficiency of marker screening therefore affects the power and precision of QTL mapping. A doubled haploid population with 200 individuals and 5 chromosomes was constructed, with 50 markers evenly distributed at 10 cM spacing. Among a total of 6 QTLs, one was placed on chromosome I, two linked on chromosome II, and the other three linked on chromosome IV. The QTL settings included additive effects and additive × additive epistatic effects; the corresponding QTL interaction effects were set when data were collected under multiple environments. The heritability was assumed to be 0.5 unless otherwise specified. The power of marker screening by stepwise regression, forward regression, and three methods of random effect prediction, namely best linear unbiased prediction (BLUP), linear unbiased prediction (LUP) and adjusted unbiased prediction (AUP), was studied and compared through 100 Monte Carlo simulations. The results indicated that the marker screening power by stepwise regression at the 0.1, 0.05 and 0.01 significance levels ranged from 2% to 68%, and from 2% to 72% by forward regression. The larger the QTL effects, the higher the marker screening power. The power of marker screening by the three random effect prediction methods was very low, with a maximum of only 13%. This suggests that regression methods are much better than random effect prediction approaches at identifying efficient markers flanking QTLs, and that forward selection is simpler and more efficient. The simulation study on heritability showed that heightening of both general heritability and interaction heritability of genotype x
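Forward marker selection of the kind compared above can be sketched as a partial-F forward regression; the entry threshold, marker count, and effect sizes below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def rss(X, y):
    """Residual sum of squares of the least-squares fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

def forward_select(M, y, f_enter=8.0):
    """Forward regression over candidate marker columns: at each step add
    the marker with the largest partial F statistic versus the current
    model; stop when no candidate exceeds f_enter."""
    n, p = M.shape
    chosen = []
    X = np.ones((n, 1))             # intercept-only starting model
    rss_cur = rss(X, y)
    while len(chosen) < p:
        best_j, best_f, best_rss = None, f_enter, None
        for j in range(p):
            if j in chosen:
                continue
            Xj = np.column_stack([X, M[:, j]])
            rss_j = rss(Xj, y)
            f = (rss_cur - rss_j) / (rss_j / (n - Xj.shape[1]))
            if f > best_f:
                best_j, best_f, best_rss = j, f, rss_j
        if best_j is None:
            break
        chosen.append(best_j)
        X = np.column_stack([X, M[:, best_j]])
        rss_cur = best_rss
    return chosen

# simulated doubled-haploid population: 200 lines, 20 biallelic markers,
# with true QTLs linked to markers 3 and 10 (purely illustrative)
rng = np.random.default_rng(7)
M = rng.integers(0, 2, size=(200, 20)).astype(float)
y = 2.0 * M[:, 3] + 1.5 * M[:, 10] + 0.5 * rng.standard_normal(200)
chosen = forward_select(M, y)
```

With clear marker effects the procedure recovers the flanking markers in a few passes, which is the simplicity-and-efficiency point made about forward selection in the abstract.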
Multistressor predictive models of invertebrate condition in the Corn Belt, USA
Waite, Ian R.; Van Metre, Peter C.
2017-01-01
Understanding the complex relations between multiple environmental stressors and ecological conditions in streams can help guide resource-management decisions. During 14 weeks in spring/summer 2013, personnel from the US Geological Survey and the US Environmental Protection Agency sampled 98 wadeable streams across the Midwest Corn Belt region of the USA for water and sediment quality, physical and habitat characteristics, and ecological communities. We used these data to develop independent predictive disturbance models for 3 macroinvertebrate metrics and a multimetric index. We developed the models, based on boosted regression trees (BRT), for 3 stressor categories: land use/land cover (geographic information system [GIS]) variables, all in-stream stressors combined (nutrients, habitat, and contaminants), and GIS plus in-stream stressors. The GIS plus in-stream stressor models had the best overall performance with an average cross-validation R2 across all models of 0.41. The models were generally consistent in the explanatory variables selected within each stressor group across the 4 invertebrate metrics modeled. Variables related to riparian condition, substrate size or embeddedness, velocity and channel shape, nutrients (primarily NH3), and contaminants (pyrethroid degradates) were important descriptors of the invertebrate metrics. Models based on all measured in-stream stressors performed comparably to models based on GIS landscape variables, suggesting that the in-stream stressor characterization reasonably represents the dominant factors affecting invertebrate communities and that GIS variables are acting as surrogates for in-stream stressors that directly affect in-stream biota.
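Boosted regression trees can be sketched with a minimal hand-rolled gradient-boosting loop over depth-1 trees (stumps); the predictors and response below are synthetic stand-ins for the stressor data, not the Corn Belt measurements:

```python
import numpy as np

def fit_stump(X, r):
    """Best depth-1 split on residuals r (squared-error criterion)."""
    n, p = X.shape
    best = None  # (gain, feature, threshold, left_mean, right_mean)
    for j in range(p):
        order = np.argsort(X[:, j])
        xs, rs = X[order, j], r[order]
        csum = np.cumsum(rs)
        for i in range(1, n):
            if xs[i] == xs[i - 1]:
                continue
            sL, sR = csum[i - 1], csum[-1] - csum[i - 1]
            gain = sL * sL / i + sR * sR / (n - i)
            if best is None or gain > best[0]:
                best = (gain, j, 0.5 * (xs[i - 1] + xs[i]), sL / i, sR / (n - i))
    return best[1:]

def boost(X, y, n_rounds=150, lr=0.1):
    """Gradient boosting on squared error: each round fits a stump to the
    current residuals and takes a shrunken step."""
    f0 = float(y.mean())
    pred = np.full(len(y), f0)
    stumps = []
    for _ in range(n_rounds):
        j, t, vl, vr = fit_stump(X, y - pred)
        pred = pred + lr * np.where(X[:, j] <= t, vl, vr)
        stumps.append((j, t, vl, vr))
    return f0, lr, stumps

def predict(model, X):
    f0, lr, stumps = model
    pred = np.full(X.shape[0], f0)
    for j, t, vl, vr in stumps:
        pred = pred + lr * np.where(X[:, j] <= t, vl, vr)
    return pred

# synthetic stand-ins for stressor predictors (illustrative only):
# riparian condition, substrate embeddedness, NH3
rng = np.random.default_rng(0)
n = 300
X = np.column_stack([rng.uniform(0, 1, n),
                     rng.uniform(0, 1, n),
                     rng.lognormal(0, 0.5, n)])
y = 2 * X[:, 0] - 1.5 * X[:, 1] ** 2 - 0.5 * np.log1p(X[:, 2]) \
    + 0.3 * rng.standard_normal(n)
model = boost(X, y)
```

Production BRT implementations add deeper trees, subsampling, and cross-validated stopping, but the residual-fitting loop above is the core mechanism.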
Michael C. Stambaugh; Richard P. Guyette; Joseph M. Marschall; Daniel C. Dey
2016-01-01
Characterization of scale dependence of fire intervals could inform interpretations of fire history and improve fire prescriptions that aim to mimic historical fire regime conditions. We quantified the temporal variability in fire regimes and described the spatial dependence of fire intervals through the analysis of multi-century fire scar records (8 study sites, 332...
A Class of Prediction-Correction Methods for Time-Varying Convex Optimization
NASA Astrophysics Data System (ADS)
Simonetto, Andrea; Mokhtari, Aryan; Koppel, Alec; Leus, Geert; Ribeiro, Alejandro
2016-09-01
This paper considers unconstrained convex optimization problems with time-varying objective functions. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of $1/h$, where $h$ is the length of the sampling interval. The prediction step is derived by analyzing the iso-residual dynamics of the optimality conditions. The correction step adjusts for the distance between the current prediction and the optimizer at each time step, and consists of one or multiple gradient steps or Newton steps, which respectively correspond to the gradient trajectory tracking (GTT) or Newton trajectory tracking (NTT) algorithms. Under suitable conditions, we establish that the asymptotic error incurred by both proposed methods behaves as $O(h^2)$, and in some cases as $O(h^4)$, which outperforms the state-of-the-art $O(h)$ error bound of correction-only methods. Moreover, when the characteristics of the objective function variation are not available, we propose approximate gradient and Newton tracking algorithms (AGT and ANT, respectively) that still attain these asymptotic error bounds. Numerical simulations demonstrate the practical utility of the proposed methods and show that they improve upon existing techniques by several orders of magnitude.
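A toy scalar instance makes the prediction-correction split concrete. This is a hedged sketch, not the paper's GTT/NTT implementation: the objective f(x; t) = 0.5*(x - cos t)^2 and the deliberately damped correction step (so that correction alone cannot fully track) are invented for illustration. The prediction step follows the optimality-condition dynamics dx*/dt = -f_xx^{-1} f_tx = -sin t:

```python
import numpy as np

h = 0.1                       # sampling interval
ts = np.arange(0.0, 20.0, h)

def track(use_prediction, gamma=0.3):
    """Track x*(t) = cos(t), the minimizer of f(x; t) = 0.5*(x - cos t)^2."""
    x, errs = 1.0, []
    for k in range(len(ts) - 1):
        if use_prediction:
            # Euler prediction along the solution trajectory:
            # dx*/dt = -f_xx^{-1} f_tx = -sin(t)
            x -= h * np.sin(ts[k])
        # correction: one (damped) gradient step on f(.; t_{k+1})
        x -= gamma * (x - np.cos(ts[k + 1]))
        errs.append(abs(x - np.cos(ts[k + 1])))
    return max(errs[20:])     # steady-state tracking error

err_pc = track(True)    # prediction + correction
err_c = track(False)    # correction only
```

The correction-only tracker lags the moving optimizer by roughly O(h), while adding the prediction step leaves only the O(h^2) Euler discretization error, mirroring the error-bound comparison in the abstract.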
NASA Astrophysics Data System (ADS)
Krakovsky, Y. M.; Luzgin, A. N.; Mikhailova, E. A.
2018-05-01
At present, cyber-security issues associated with industrial informatization systems occupy one of the key niches in the state management system. Functional disruption of these systems by cyberattacks can lead to emergencies involving loss of life, environmental disasters, major financial and economic damage, or disrupted activities of cities and settlements. When cyberattacks occur with high intensity, there is a need to develop protection against them based on machine learning methods. This paper examines interval forecasting and presents results for a pre-set intensity level. The interval forecasting is carried out based on a probabilistic cluster model. This method forecasts which of two predetermined intervals a future value of the indicator will fall into, using probability estimates. The dividing bound between these intervals is determined by a calculation method based on statistical characteristics of the indicator. The source data are hourly counts of cyberattacks recorded by a honeypot from March to September 2013.
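The two-interval forecasting idea can be sketched with a much simpler frequency-based stand-in for the probabilistic cluster model; the synthetic attack counts, their persistence structure, and the median dividing bound below are invented:

```python
import numpy as np

def interval_forecast(series, bound):
    """Forecast which of the two intervals (above/below `bound`) the next
    value will fall in, with a probability estimate conditioned on which
    side the current value is on (empirical transition frequencies)."""
    above = series > bound
    p_above = {}
    for state in (False, True):
        idx = np.where(above[:-1] == state)[0]
        nxt = above[idx + 1]
        p_above[state] = float(nxt.mean()) if len(nxt) else 0.5
    prob = p_above[bool(above[-1])]
    return ("above" if prob >= 0.5 else "below"), prob

# synthetic hourly attack counts with persistent low/high regimes
counts = np.tile([2.0, 3.0, 1.0, 2.0, 9.0, 8.0, 10.0, 9.0], 5)
label, prob = interval_forecast(counts, bound=np.median(counts))
```

Because high-intensity hours cluster in the synthetic series, the estimator assigns a high probability that an above-bound hour is followed by another one, which is the kind of probabilistic interval forecast the abstract describes.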
NASA Technical Reports Server (NTRS)
Bolten, John; Crow, Wade
2012-01-01
The added value of satellite-based surface soil moisture retrievals for agricultural drought monitoring is assessed by calculating the lagged rank correlation between remotely-sensed vegetation indices (VI) and soil moisture estimates obtained both before and after the assimilation of surface soil moisture retrievals derived from the Advanced Microwave Scanning Radiometer-EOS (AMSR-E) into a soil water balance model. Higher soil moisture/VI lag correlations imply an enhanced ability to predict future vegetation conditions using estimates of current soil moisture. Results demonstrate that the assimilation of AMSR-E surface soil moisture retrievals substantially improve the performance of a global drought monitoring system - particularly in sparsely-instrumented areas of the world where high-quality rainfall observations are unavailable.
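The lagged rank correlation diagnostic can be sketched as follows; the three-step lag and noise level in the synthetic series are invented for illustration, and the Spearman correlation is computed by ranking both series:

```python
import numpy as np

def ranks(a):
    """Ranks 0..n-1 (ties are unlikely for continuous data)."""
    return np.argsort(np.argsort(a))

def lagged_rank_corr(soil, vi, lag):
    """Spearman correlation of soil moisture with VI observed `lag` steps later."""
    x, y = soil[:len(soil) - lag] if lag else soil, vi[lag:]
    return float(np.corrcoef(ranks(x), ranks(y))[0, 1])

# synthetic anomalies: the vegetation index responds to soil moisture
# three time steps earlier (lag and noise level are illustrative)
rng = np.random.default_rng(0)
n = 120
sm = rng.standard_normal(n)
vi = np.empty(n)
vi[3:] = sm[:-3] + 0.1 * rng.standard_normal(n - 3)
vi[:3] = rng.standard_normal(3)

corrs = {lag: lagged_rank_corr(sm, vi, lag) for lag in range(7)}
best_lag = max(corrs, key=corrs.get)
```

A pronounced peak in the lag-correlation curve indicates how far ahead current soil moisture carries predictive information about vegetation condition, which is the quantity the assimilation experiment improves.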
NASA Astrophysics Data System (ADS)
Andrieu, Simon; Brigaud, Benjamin; Barbarand, Jocelyn; Lasseur, Eric; Saucède, Thomas
2016-11-01
The objective of this work is to improve our understanding of the processes controlling changes in the architecture and facies of intracontinental carbonate platforms. We examined the facies and sequence stratigraphy of Aalenian to Oxfordian limestones of western France. Seventy-seven outcrop sections were studied and thirty-one sedimentary facies identified in five depositional environments ranging from lower offshore to backshore. Platform evolution was reconstructed along a 500 km cross-section. Twenty-two depositional sequences were identified on the entire western France platform and correlated with European third-order sequences at the biozone level, demonstrating that eustasy was the major factor controlling the cyclic trend of accommodation. The tectonic subsidence rate was computed from accommodation measurements from the Aalenian to the Oxfordian in key localities. Tectonism controlled the sedimentation rate and platform architecture at a longer time scale. Tectonic subsidence triggered the demise of carbonate production at the Bathonian/Callovian boundary while the uplift made possible the recovery of carbonate platform from Caen to Le Mans during the mid Oxfordian. Topography of the Paleozoic basement mainly controlled lateral variations of paleodepth within the western France platform until the mid Bathonian. A synthesis of carbonate production in the western Tethyan domain at that time was conducted. Stages of high carbonate production during the Bajocian/Bathonian and the middle to late Oxfordian are synchronous with low δ13C, high eccentricity intervals, and rather dry climate promoting (1) evaporation and carbonate supersaturation, and (2) oligotrophic conditions. Periods of low carbonate production during the Aalenian and from the middle Callovian to early Oxfordian correlate with high δ13C and low eccentricity intervals, characterized by wet climate and less oligotrophic conditions. Such conditions tend to diminish growth potential of carbonate
Wahlheim, Christopher N
2011-07-01
Four experiments examined the monitoring accuracy of immediate and delayed judgments of learning (JOLs) under conditions of proactive interference (PI). PI was produced using paired-associate learning tasks that conformed to variations of classic A-B, A-D paradigms. Results revealed that the relative monitoring accuracy of interference items was better for delayed than for immediate JOLs. However, delayed JOLs were overconfident for interference items, but not for items devoid of interference. Intrusions retrieved prior to delayed JOLs produced inflated predictions of performance. These results show that delayed JOLs enhance monitoring accuracy in PI situations, except when intrusions are mistaken for target responses.
NASA Technical Reports Server (NTRS)
Li, Tao; Hasegawa, Toshihiro; Yin, Xinyou; Zhu, Yan; Boote, Kenneth; Adam, Myriam; Bregaglio, Simone; Buis, Samuel; Confalonieri, Roberto; Fumoto, Tamon;
2014-01-01
Predicting rice (Oryza sativa) productivity under future climates is important for global food security. Ecophysiological crop models in combination with climate model outputs are commonly used in yield prediction, but uncertainties associated with crop models remain largely unquantified. We evaluated 13 rice models against multi-year experimental yield data at four sites with diverse climatic conditions in Asia and examined whether different modeling approaches to major physiological processes contribute to the uncertainty of predictions against field-measured yields and to the uncertainty of sensitivity to changes in temperature and CO2 concentration [CO2]. We also examined whether use of an ensemble of crop models can reduce these uncertainties. Individual models did not consistently reproduce both experimental and regional yields well, and uncertainty was larger at the warmest and coolest sites. The variation in yield projections was larger among crop models than the variation resulting from 16 global climate model-based scenarios. However, the mean of the predictions of all crop models reproduced the experimental data, with an uncertainty of less than 10 percent of measured yields. Using an ensemble of eight models calibrated only for phenology, or five models calibrated in detail, resulted in an uncertainty equivalent to that of the measured yield in well-controlled agronomic field experiments. Sensitivity analysis indicates the necessity of improving the accuracy of predicting both biomass and harvest index in response to increasing [CO2] and temperature.
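The ensemble-mean idea in the abstract can be sketched in a few lines: averaging the predictions of several imperfect models tends to beat a typical single model. All yields and model predictions below are invented for illustration, not taken from the study.

```python
# Multi-model ensemble sketch: average yield predictions across models,
# site by site, and compare error against individual models.
# Numbers are hypothetical (t/ha), not the study's data.

def ensemble_mean(predictions):
    """Average predictions across models for each site."""
    n_sites = len(predictions[0])
    return [sum(p[i] for p in predictions) / len(predictions) for i in range(n_sites)]

def rmse(pred, obs):
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)) ** 0.5

observed = [6.0, 7.2, 5.5, 8.1]          # hypothetical measured yields at 4 sites
model_preds = [
    [5.1, 7.9, 5.0, 9.0],                # model A: errors in one direction
    [6.8, 6.5, 6.2, 7.4],                # model B: errors in the other
    [6.2, 7.5, 5.1, 8.5],                # model C
]

ens = ensemble_mean(model_preds)
print("ensemble RMSE:", round(rmse(ens, observed), 3))
print("individual RMSEs:", [round(rmse(p, observed), 3) for p in model_preds])
```

Because the models' errors partly cancel, the ensemble-mean RMSE here is smaller than that of any single model, mirroring the paper's finding that the multi-model mean reproduced experimental data within 10 percent.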
Dhutia, Harshil; Malhotra, Aneil; Parpia, Sameer; Gabus, Vincent; Finocchiaro, Gherardo; Mellor, Greg; Merghani, Ahmed; Millar, Lynne; Narain, Rajay; Sheikh, Nabeel; Behr, Elijah R; Papadakis, Michael; Sharma, Sanjay
2016-01-01
The short QT syndrome is a cardiac channelopathy characterised by accelerated repolarisation which manifests as a short QT interval on the ECG. The definition of a short QT interval is debated, ranging from <390 to ≤320 ms, and its clinical significance in healthy young individuals is unknown. We assessed the prevalence and medium-term significance of an isolated short QT interval in a diverse young British population. Between 2005 and 2013, 18 825 apparently healthy people aged 14-35 years underwent cardiovascular evaluation with history, physical examination and ECG. QT intervals were measured by cardiologists using 4 recommended guidelines (Seattle 2013, Heart Rhythm Society 2013, European Society of Cardiology 2010 and American Heart Association 2009). The prevalence of a short QT interval was 0.1% (26 patients, ≤320 ms), 0.2% (44 patients, ≤330 ms), 7.9% (1478 patients, <380 ms), 15.8% (2973 patients, <390 ms). Male gender and Afro-Caribbean ethnicity had the strongest association with short QT intervals. Athletes had shorter QT intervals than non-athletes, but athletic status did not predict short QT intervals. Individuals with short QT intervals ≤320 ms did not report syncope or a sinister family history, and during a follow-up period of 5.3±1.2 years, there were no deaths in this group. The prevalence of a short QT interval depends on the recommended cut-off value. Even at values ≤320 ms, there was an excellent medium-term prognosis among the 14 people followed. We conclude that a definition of ≤320 ms is realistic to prevent overdiagnosis and excessive investigations. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Integrated Modeling for Road Condition Prediction
DOT National Transportation Integrated Search
2017-12-31
Transportation Systems Management and Operations (TSMO) is at a critical point in its development due to an explosion in data availability and analytics. Intelligent transportation systems (ITS) gathering data about weather and traffic conditions cou...
Reward positivity: Reward prediction error or salience prediction error?
Heydari, Sepideh; Holroyd, Clay B
2016-08-01
The reward positivity is a component of the human ERP elicited by feedback stimuli in trial-and-error learning and guessing tasks. A prominent theory holds that the reward positivity reflects a reward prediction error signal that is sensitive to outcome valence, being larger for unexpected positive events relative to unexpected negative events (Holroyd & Coles, 2002). Although the theory has found substantial empirical support, most of these studies have utilized either monetary or performance feedback to test the hypothesis. However, in apparent contradiction to the theory, a recent study found that unexpected physical punishments also elicit the reward positivity (Talmi, Atkinson, & El-Deredy, 2013). The authors of this report argued that the reward positivity reflects a salience prediction error rather than a reward prediction error. To investigate this finding further, in the present study participants navigated a virtual T maze and received feedback on each trial under two conditions. In a reward condition, the feedback indicated whether or not they would receive a monetary reward; in a punishment condition, the feedback indicated whether or not they would receive a small shock. We found that the feedback stimuli elicited a typical reward positivity in the reward condition and an apparently delayed reward positivity in the punishment condition. Importantly, this signal was more positive to the stimuli that predicted the omission of a possible punishment relative to stimuli that predicted a forthcoming punishment, which is inconsistent with the salience hypothesis. © 2016 Society for Psychophysiological Research.
Vicente-Pérez, Ricardo; Avendaño-Reyes, Leonel; Mejía-Vázquez, Ángel; Álvarez-Valenzuela, F Daniel; Correa-Calderón, Abelardo; Mellado, Miguel; Meza-Herrera, Cesar A; Guerra-Liera, Juan E; Robinson, P H; Macías-Cruz, Ulises
2016-01-01
Rectal temperature (RT) is the foremost physiological variable indicating whether an animal is suffering hyperthermia. However, this variable is traditionally measured by invasive methods, which may compromise animal welfare. Models to predict RT have been developed for growing pigs and lactating dairy cows, but not for pregnant heat-stressed ewes. Our aim was to develop a prediction equation for RT using non-invasive physiological variables in pregnant ewes under heat stress. A total of 192 records of respiratory frequency (RF) and hair coat temperature in various body regions (i.e., head, rump, flank, shoulder, and belly) obtained from 24 Katahdin × Pelibuey pregnant multiparous ewes were collected during the last third of gestation (i.e., d 100 to lambing) with a 15 d sampling interval. Hair coat temperatures were taken using infrared thermal imaging technology. Initially, a Pearson correlation analysis examined the relationship among variables, and then multiple linear regression analysis was used to develop the prediction equations. All predictor variables were positively correlated (P<0.01; r=0.59-0.67) with RT. The adjusted equation which best predicted RT (P<0.01; R²adj=56.15%; CV=0.65%) included as predictors RF and head and belly temperatures. Comparison of predicted and observed values for RT indicates a suitable agreement (P<0.01) between them with moderate accuracy (R²adj=56.15%) when RT was calculated with the adjusted equation. In general, the final equation does not violate any assumption of multiple regression analysis. The RT in heat-stressed pregnant ewes can be predicted with adequate accuracy using non-invasive physiologic variables, and the final equation was: RT=35.57+0.004 (RF)+0.067 (head temperature)+0.028 (belly temperature). Copyright © 2015 Elsevier Ltd. All rights reserved.
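The final regression equation transcribes directly into code. The coefficients come from the abstract; the example inputs are hypothetical readings, not drawn from the study's 192 records.

```python
# The abstract's final equation as a function. Inputs: respiratory
# frequency (breaths/min) and infrared coat temperatures (degrees C).

def predict_rectal_temp(rf, head_temp, belly_temp):
    """Predict rectal temperature (C) of a heat-stressed pregnant ewe."""
    return 35.57 + 0.004 * rf + 0.067 * head_temp + 0.028 * belly_temp

# Hypothetical readings for one ewe (not from the study's data set):
rt = predict_rectal_temp(rf=120, head_temp=37.5, belly_temp=36.0)
print(round(rt, 2))   # -> 39.57
```

The small coefficient on RF relative to the coat temperatures reflects the units: a change of 10 breaths/min moves the prediction by only 0.04 °C, while a 1 °C change in head temperature moves it by 0.067 °C.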
Faydasicok, Ozlem; Arik, Sabri
2013-08-01
The main problem in the analysis of robust stability of neural networks is to find the upper bound norm for the intervalized interconnection matrices of neural networks. In the previous literature, the three major upper bound norms for the intervalized interconnection matrices have been reported and successfully applied to derive new sufficient conditions for robust stability of delayed neural networks. One of the main contributions of this paper is the derivation of a new upper bound for the norm of the intervalized interconnection matrices of neural networks. Then, by exploiting this new upper bound norm of interval matrices and using the stability theory of Lyapunov functionals and the theory of homeomorphic mapping, we obtain new sufficient conditions for the existence, uniqueness and global asymptotic stability of the equilibrium point for the class of neural networks with discrete time delays under parameter uncertainties and with respect to continuous and slope-bounded activation functions. The results obtained in this paper are new and can be considered alternatives to previously published corresponding results. We also give some illustrative and comparative numerical examples to demonstrate the effectiveness and applicability of the proposed robust stability condition. Copyright © 2013 Elsevier Ltd. All rights reserved.
Predicting Hydrologic Function With Aquatic Gene Fragments
NASA Astrophysics Data System (ADS)
Good, S. P.; URycki, D. R.; Crump, B. C.
2018-03-01
Recent advances in microbiology techniques, such as genetic sequencing, allow for rapid and cost-effective collection of large quantities of genetic information carried within water samples. Here we posit that the unique composition of aquatic DNA material within a water sample contains relevant information about hydrologic function at multiple temporal scales. In this study, machine learning was used to develop discharge prediction models trained on the relative abundance of bacterial taxa classified into operational taxonomic units (OTUs) based on 16S rRNA gene sequences from six large arctic rivers. We term this approach "genohydrology," and show that OTU relative abundances can be used to predict river discharge at monthly and longer timescales. Based on a single DNA sample from each river, the average Nash-Sutcliffe efficiency (NSE) for predicted mean monthly discharge values throughout the year was 0.84, while the NSE for predicted discharge values across different return intervals was 0.67. These are considerable improvements over predictions based only on the area-scaled mean specific discharge of five similar rivers, which had average NSE values of 0.64 and -0.32 for seasonal and recurrence interval discharge values, respectively. The genohydrology approach demonstrates that genetic diversity within the aquatic microbiome is a large and underutilized data resource with benefits for prediction of hydrologic function.
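The Nash-Sutcliffe efficiency (NSE) quoted above has a short definition: 1 is a perfect prediction, 0 matches simply predicting the observed mean, and negative values are worse than the mean (as with the -0.32 reported for the area-scaled baseline). A minimal sketch with invented discharge numbers:

```python
# Nash-Sutcliffe efficiency: 1 - (sum of squared residuals) /
# (sum of squared deviations of observations from their mean).
# Discharge values below are illustrative, not the study's rivers.

def nse(pred, obs):
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((p - o) ** 2 for p, o in zip(pred, obs))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1 - ss_res / ss_tot

obs = [100, 300, 900, 400, 150, 120]    # hypothetical mean monthly discharge (m3/s)
good = [110, 280, 850, 420, 160, 130]   # model tracking the seasonal cycle
poor = [250, 250, 250, 250, 250, 250]   # a flat guess

print(round(nse(good, obs), 3), round(nse(poor, obs), 3))
```

The flat guess scores near zero (here slightly negative) because it carries no information beyond a constant, while the model that tracks the seasonal cycle scores near 1.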
The influence of interpregnancy interval on infant mortality.
McKinney, David; House, Melissa; Chen, Aimin; Muglia, Louis; DeFranco, Emily
2017-03-01
In Ohio, the infant mortality rate is above the national average and the black infant mortality rate is more than twice the white infant mortality rate. Having a short interpregnancy interval has been shown to correlate with preterm birth and low birthweight, but the effect of a short interpregnancy interval on infant mortality is less well established. We sought to quantify the population impact of interpregnancy interval on the risk of infant mortality. This was a statewide population-based retrospective cohort study of all births (n = 1,131,070) and infant mortalities (n = 8152) using linked Ohio birth and infant death records from January 2007 through September 2014. For this study we analyzed 5 interpregnancy interval categories: 0-<6, 6-<12, 12-<24, 24-<60, and ≥60 months. The primary outcome for this study was infant mortality. During the study period, 3701 infant mortalities were linked to a live birth certificate with an interpregnancy interval available. We calculated the frequency and relative risk of infant mortality for each interval compared to a referent interval of 12-<24 months. Stratified analyses by maternal race were also performed. Adjusted risks were estimated after accounting for statistically significant and biologically plausible confounding variables. Adjusted relative risk was utilized to calculate the attributable risk percent of short interpregnancy intervals on infant mortality. Short interpregnancy intervals were common in Ohio during the study period. Of all multiparous births, 20.5% followed an interval of <12 months. The overall infant mortality rate during this time was 7.2 per 1000 live births (6.0 for white mothers and 13.1 for black mothers). Infant mortalities occurred more frequently for births following short intervals of 0-<6 months (9.2 per 1000) and 6-<12 months (7.1 per 1000) compared to 12-<24 months (5.6 per 1000) (P < .001 for both). The highest risk for infant mortality followed interpregnancy intervals of 0
Advanced Interval Management: A Benefit Analysis
NASA Technical Reports Server (NTRS)
Timer, Sebastian; Peters, Mark
2016-01-01
This document is the final report for the NASA Langley Research Center (LaRC)- sponsored task order 'Possible Benefits for Advanced Interval Management Operations.' Under this research project, Architecture Technology Corporation performed an analysis to determine the maximum potential benefit to be gained if specific Advanced Interval Management (AIM) operations were implemented in the National Airspace System (NAS). The motivation for this research is to guide NASA decision-making on which Interval Management (IM) applications offer the most potential benefit and warrant further research.
Resistance-based interval exercise acutely improves endothelial function in type 2 diabetes.
Francois, Monique E; Durrer, Cody; Pistawka, Kevin J; Halperin, Frank A; Little, Jonathan P
2016-11-01
Different modes of exercise, disease, and training status can modify endothelial shear stress and result in distinct effects on endothelial function. To date, no study has examined the influence of type 2 diabetes (T2D) and training status on the acute endothelial response to different modes of interval exercise (INT). We examined the effect of a single session of resistance- and cardio-based INT compared with a time-matched control on endothelial function in 12 age-matched T2D participants, 12 untrained, and 11 trained adults (aged 56 ± 7 yr). Flow-mediated dilation (%FMD) of the brachial artery was assessed at baseline and immediately, 1, and 2 h after an acute bout of cardio interval (C-INT), resistance interval (R-INT), and seated control (CTL); these interventions were randomized and separated by >2 days. C-INT involved seven 1-min cycling intervals at 85% of peak power with 1-min recovery between. R-INT involved the same pattern of seven 1-min intervals using leg resistance exercises. Endothelial function (%FMD) was improved after R-INT in all groups (Condition × Time interaction, P < 0.01), an effect that was most robust in T2D where %FMD was higher immediately (+4.0 ± 2.8%), 1 h (+2.5 ± 2.5%), and 2 h (+1.9 ± 1.9%) after R-INT compared with CTL (P < 0.01 for all). C-INT improved %FMD in T2D at 1-h postexercise (+1.6 ± 2.2%, P = 0.03) compared with CTL. In conclusion, R-INT acutely improves endothelial function throughout the 2-h postexercise period in T2D patients. The long-term impact of resistance exercise performed in an interval pattern is warranted. Copyright © 2016 the American Physiological Society.
The prelaying interval of emperor geese on the Yukon-Kuskokwim Delta, Alaska
Hupp, Jerry W.; Schmutz, J.A.; Ely, Craig R.
2006-01-01
We marked 136 female Emperor Geese (Chen canagica) in western Alaska with VHF or satellite (PTT) transmitters from 1999 to 2003 to monitor their spring arrival and nest initiation dates on the Yukon Delta, and to estimate prelaying interval lengths once at the nesting area. Ninety-two females with functional transmitters returned to the Yukon Delta in the spring after they were marked, and we located the nests of 35 of these individuals. Prelaying intervals were influenced by when snow melted in the spring and individual arrival dates on the Yukon Delta. The median prelaying interval was 15 days (range = 12-19 days) in a year when snow melted relatively late, and 11 days (range = 4-16 days) in two warmer years when snow melted earlier. In years when snow melted earlier, prelaying intervals of <12 days for 11 of 15 females suggested they initiated rapid follicle development on spring staging areas. The prelaying interval declined by approximately 0.4 days and nest initiation date increased approximately 0.5 days for each day a female delayed her arrival. Thus, females that arrived first on the Yukon Delta had prelaying intervals up to four days longer, yet they nested up to five days earlier, than females that arrived last. The proximity of spring staging areas on the Alaska Peninsula to nesting areas on the Yukon Delta may enable Emperor Geese to alter timing of follicle development depending on annual conditions, and to invest nutrients acquired from both areas in eggs during their formation. Plasticity in timing of follicle development is likely advantageous in a variable environment where melting of snow cover in the spring can vary by 2-3 weeks annually. © The Cooper Ornithological Society 2006.
The effects of dorsal bundle lesions on serial and trace conditioning.
Tsaltas, E; Preston, G C; Gray, J A
1983-12-01
The performance of rats with neurotoxic lesions of the dorsal ascending noradrenergic bundle (DB) was compared with that of sham-operated control animals under two behavioural conditions. Animals with DB lesions were slower than controls to acquire a classically-conditioned emotional response (conditioned suppression) with a trace interval interposed between the clicker conditioned stimulus (CS) and the shock reinforcer. However, if the latter half of the trace interval was filled by a second stimulus, a light, the DB-lesioned animals acquired conditioned suppression to the clicker faster than did controls under the same conditions. These results are discussed in terms of the attentional theory of DB function.
Return Intervals Approach to Financial Fluctuations
NASA Astrophysics Data System (ADS)
Wang, Fengzhong; Yamasaki, Kazuko; Havlin, Shlomo; Stanley, H. Eugene
Financial fluctuations play a key role in financial market studies. A new approach focusing on the properties of return intervals can help us better understand these fluctuations. A return interval is defined as the time between two successive volatilities above a given threshold. We review recent studies and analyze the 1000 most traded stocks in the US stock markets. We find that the distribution of return intervals is well approximated by a scaling law over a wide range of thresholds. The scaling is also valid for various time windows from one minute up to one trading day. Moreover, these results are universal for stocks of different countries, commodities, interest rates as well as currencies. Further analysis shows some systematic deviations from the scaling law, which are due to nonlinear correlations in the volatility sequence. We also examine the memory in return intervals for different time scales, which is related to the long-term correlations in the volatility. Furthermore, we test two popular models, FIGARCH and fractional Brownian motion (fBm). Both models capture the memory effect, but only fBm shows good scaling in the return interval distribution.
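The return-interval construction itself is simple enough to sketch: given a volatility series, record the number of time steps between successive values exceeding a threshold q. The series below is synthetic, not market data.

```python
# Return intervals: times between successive exceedances of threshold q
# in a volatility series.

def return_intervals(volatility, q):
    """Steps between successive values above q."""
    exceed = [t for t, v in enumerate(volatility) if v > q]
    return [b - a for a, b in zip(exceed, exceed[1:])]

vol = [0.5, 2.1, 0.8, 0.9, 2.4, 0.7, 2.2, 0.6, 0.5, 2.8]
print(return_intervals(vol, q=2.0))   # -> [3, 2, 3]
```

Raising q makes exceedances rarer and the intervals longer; the paper's scaling result is that, after rescaling by the mean interval, the interval distributions for different q collapse onto one curve.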
Sun, Bo; Ye, Tianyuan; Feng, Qiang; Yao, Jinghua; Wei, Mumeng
2015-09-10
This paper studies the corrosion behavior of B10 copper-nickel alloy in the marine environment. An accelerated degradation test under marine environmental conditions was designed and performed based on the accelerated testing principle and the corrosion degradation mechanism. With the prolongation of marine corrosion time, the thickness of the Cu₂O film increased gradually. Its corrosion product was Cu₂(OH)₃Cl, which increased in quantity over time. Cl⁻ was the major factor responsible for the marine corrosion of copper and copper alloys. Through nonlinear fitting of corrosion rate and corrosion quantity (corrosion weight loss) degradation data from different corrosion cycles, the quantitative effects of two major factors, i.e., dissolved oxygen (DO) and corrosion medium temperature, on the corrosion behavior of the copper alloy were analyzed. Corrosion failure prediction models under different ambient conditions were built. One-day corrosion weight loss under oxygenated stirring conditions was equivalent to the 1.31-day weight loss under stationary conditions, i.e., the corrosion rate under oxygenated conditions was 1.31 times that under stationary conditions. In addition, corrosion medium temperature had a significant effect on the corrosion of the B10 copper sheet.
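The 1.31 factor is an acceleration factor in the usual accelerated-testing sense: it converts test time under oxygenated stirring into equivalent exposure time under stationary conditions. A one-line sketch of that bookkeeping (the factor is from the abstract; the 30-day test duration is a made-up example):

```python
# Acceleration-factor bookkeeping: one day of oxygenated stirring causes
# the weight loss of 1.31 stationary days, so a t-day accelerated test
# corresponds to 1.31 * t stationary days.
AF = 1.31

def equivalent_stationary_days(accelerated_days, af=AF):
    return af * accelerated_days

print(round(equivalent_stationary_days(30), 2))   # -> 39.3
```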
NASA Astrophysics Data System (ADS)
Tadesse, T.; Zaitchik, B. F.; Habib, S.; Funk, C. C.; Senay, G. B.; Dinku, T.; Policelli, F. S.; Block, P.; Baigorria, G. A.; Beyene, S.; Wardlow, B.; Hayes, M. J.
2014-12-01
The development of effective strategies to adapt to changes in the character of droughts and floods in Africa will rely on improved seasonal prediction systems that are robust to an evolving climate baseline and can be integrated into disaster preparedness and response. Many efforts have been made to build models to improve seasonal forecasts in the Greater Horn of Africa region (GHA) using satellite and climate data, but these efforts and models must be improved and translated into future conditions under evolving climate conditions. This has considerable social significance, but is challenged by the nature of climate predictability and the adaptability of coupled natural and human systems facing exposure to climate extremes. To address these issues, work is in progress under a project funded by NASA. The objectives of the project include: 1) Characterize and explain large-scale drivers in the ocean-atmosphere-land system associated with years of extreme flood or drought in the GHA. 2) Evaluate the performance of state-of-the-art seasonal forecast methods for prediction of decision-relevant metrics of hydrologic extremes. 3) Apply seasonal forecast systems to prediction of socially relevant impacts on crops, flood risk, and economic outcomes, and assess the value of these predictions to decision makers. 4) Evaluate the robustness of seasonal prediction systems to evolving climate conditions. The National Drought Mitigation Center (University of Nebraska-Lincoln, USA) is leading this project in collaboration with the USGS, Johns Hopkins University, University of Wisconsin-Madison, the International Research Institute for Climate and Society, NASA, and GHA local experts. The project is also designed to have active engagement of end users in various sectors, university researchers, and extension agents in GHA through workshops and/or webinars. This project is expected to improve and implement new and existing climate- and remote sensing-based agricultural
Eliciting interval beliefs: An experimental study
Peeters, Ronald; Wolk, Leonard
2017-01-01
In this paper we study the interval scoring rule as a mechanism to elicit subjective beliefs under varying degrees of uncertainty. In our experiment, subjects forecast the termination time of a time series to be generated from a given but unknown stochastic process. Subjects gradually learn more about the underlying process over time and hence the true distribution over termination times. We conduct two treatments, one with a high and one with a low volatility process. We find that elicited intervals are better when subjects are facing a low volatility process. In this treatment, participants learn to position their intervals almost optimally over the course of the experiment. This is in contrast with the high volatility treatment, where subjects, over the course of the experiment, learn to optimize the location of their intervals but fail to provide the optimal length. PMID:28380020
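One common form of interval scoring rule (a Winkler-type score for a central interval; the experiment's exact rule may differ) rewards both covering the realized value and keeping the interval short. A minimal sketch, with alpha as the nominal miss probability:

```python
# Winkler-type interval score: interval width, plus a penalty proportional
# to the distance by which the realization falls outside. Lower is better.

def interval_score(lower, upper, x, alpha=0.1):
    """Score a reported (1 - alpha) interval [lower, upper] against outcome x."""
    score = upper - lower
    if x < lower:
        score += (2 / alpha) * (lower - x)
    elif x > upper:
        score += (2 / alpha) * (x - upper)
    return score

# A tight covering interval beats both a wide interval and a near miss:
print(interval_score(9, 11, 10))              # -> 2
print(interval_score(5, 15, 10))              # -> 10
print(round(interval_score(11, 12, 10), 6))   # -> 21.0
```

The score captures the trade-off the subjects face: shortening the interval improves the score only as long as the interval still tends to cover the realization, which is why optimal length depends on the volatility of the underlying process.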
Hays, G C
2000-09-21
Sea turtles nest on sandy beaches and tend to show high fidelity to specific nesting areas, but, despite this fidelity, the inter-annual variation in nesting numbers may be large. This variation may reflect the fact that turtles do not usually nest in consecutive years. Here, theoretical models are developed in which the interval between successive nesting years (the remigration interval) reflects conditions encountered on the feeding grounds, with good feeding years leading to a reduction in the remigration interval and vice versa. These simple models produce high levels of inter-annual variation in nesting numbers with, on occasion, almost no turtles nesting in some years even when the population is large and stable. The implications for assessing the size of sea turtle populations are considered. Copyright 2000 Academic Press.
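A toy version of the remigration-interval model can illustrate the paper's central point: even with a fixed population, annual nesting counts fluctuate strongly when shared feeding conditions set the interval. All parameter values below are invented for illustration, not the paper's.

```python
import random

# Each turtle accrues body condition at a yearly rate shared across the
# population (good or bad feeding), and nests (then resets) once condition
# crosses a threshold. Good years shorten remigration intervals.
rng = random.Random(0)
N_TURTLES, YEARS, THRESHOLD = 500, 15, 1.0
condition = [rng.random() for _ in range(N_TURTLES)]

counts = []
for year in range(YEARS):
    gain = rng.uniform(0.2, 0.6)          # this year's shared feeding conditions
    nesters = 0
    for i in range(N_TURTLES):
        condition[i] += gain
        if condition[i] >= THRESHOLD:     # nest this year and reset condition
            condition[i] = 0.0
            nesters += 1
    counts.append(nesters)

print(counts)
```

Because good years synchronize nesting across individuals, the annual counts swing widely while the population itself stays at 500, mirroring the paper's warning about inferring population size from nesting censuses.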
NASA Astrophysics Data System (ADS)
Hung, Nguyen Trong; Thuan, Le Ba; Thanh, Tran Chi; Nhuan, Hoang; Khoai, Do Van; Tung, Nguyen Van; Lee, Jin-Young; Jyothi, Rajesh Kumar
2018-06-01
Modeling uranium dioxide pellet process from ammonium uranyl carbonate - derived uranium dioxide powder (UO2 ex-AUC powder) and predicting fuel rod temperature distribution were reported in the paper. Response surface methodology (RSM) and FRAPCON-4.0 code were used to model the process and to predict the fuel rod temperature under steady-state operating condition. Fuel rod design of AP-1000 designed by Westinghouse Electric Corporation, in these the pellet fabrication parameters are from the study, were input data for the code. The predictive data were suggested the relationship between the fabrication parameters of UO2 pellets and their temperature image in nuclear reactor.
Gajic, Ognjen; Dabbagh, Ousama; Park, Pauline K; Adesanya, Adebola; Chang, Steven Y; Hou, Peter; Anderson, Harry; Hoth, J Jason; Mikkelsen, Mark E; Gentile, Nina T; Gong, Michelle N; Talmor, Daniel; Bajwa, Ednan; Watkins, Timothy R; Festic, Emir; Yilmaz, Murat; Iscimen, Remzi; Kaufman, David A; Esper, Annette M; Sadikot, Ruxana; Douglas, Ivor; Sevransky, Jonathan; Malinchoc, Michael
2011-02-15
Accurate, early identification of patients at risk for developing acute lung injury (ALI) provides the opportunity to test and implement secondary prevention strategies. To determine the frequency and outcome of ALI development in patients at risk and validate a lung injury prediction score (LIPS). In this prospective multicenter observational cohort study, predisposing conditions and risk modifiers predictive of ALI development were identified from routine clinical data available during initial evaluation. The discrimination of the model was assessed with the area under the receiver operating characteristic curve (AUC). The risk of death from ALI was determined after adjustment for severity of illness and predisposing conditions. Twenty-two hospitals enrolled 5,584 patients at risk. ALI developed a median of 2 (interquartile range 1-4) days after initial evaluation in 377 (6.8%; 148 ALI-only, 229 adult respiratory distress syndrome) patients. The frequency of ALI varied according to predisposing conditions (from 3% in pancreatitis to 26% after smoke inhalation). LIPS discriminated patients who developed ALI from those who did not with an AUC of 0.80 (95% confidence interval, 0.78-0.82). When adjusted for severity of illness and predisposing conditions, development of ALI increased the risk of in-hospital death (odds ratio, 4.1; 95% confidence interval, 2.9-5.7). ALI occurrence varies according to predisposing conditions and carries an independently poor prognosis. Using routinely available clinical data, LIPS identifies patients at high risk for ALI early in the course of their illness. This model will alert clinicians about the risk of ALI and facilitate testing and implementation of ALI prevention strategies. Clinical trial registered with www.clinicaltrials.gov (NCT00889772).
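The reported AUC of 0.80 has a concrete rank interpretation: the probability that a randomly chosen patient who developed ALI has a higher score than a randomly chosen patient who did not. A brute-force sketch on made-up scores (not actual LIPS values):

```python
# AUC as the fraction of case/control pairs in which the case outranks
# the control (ties count half). Scores below are hypothetical.

def auc(case_scores, control_scores):
    wins = sum(
        1.0 if c > k else 0.5 if c == k else 0.0
        for c in case_scores for k in control_scores
    )
    return wins / (len(case_scores) * len(control_scores))

ali = [6.5, 7.0, 5.5, 8.0]          # hypothetical scores, patients who developed ALI
no_ali = [3.0, 5.5, 4.0, 2.5, 6.0]  # hypothetical scores, patients who did not
print(auc(ali, no_ali))             # -> 0.925
```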
Super-Latent Inhibition of Conditioned Taste Preference with a Long Retention Interval
ERIC Educational Resources Information Center
De la Casa, L. G.; Marquez, R.; Lubow, R. E.
2009-01-01
A long delay inserted between conditioning and test phases of a 3-stage Latent Inhibition (LI) procedure produces differential effects on LI depending on the delay context. Thus, enhanced LI has been obtained when the delay is spent in a context that is different from the remaining experimental contexts, but not when it is the same. The present…
Matrix Sturm-Liouville equation with a Bessel-type singularity on a finite interval
NASA Astrophysics Data System (ADS)
Bondarenko, Natalia
2017-03-01
The matrix Sturm-Liouville equation on a finite interval with a Bessel-type singularity at the end of the interval is studied. Special fundamental systems of solutions for this equation are constructed: analytic Bessel-type solutions with prescribed behavior at the singular point, and Birkhoff-type solutions with known asymptotics for large values of the spectral parameter. Asymptotic formulas for the Stokes multipliers connecting these two fundamental systems of solutions are derived. We also set boundary conditions and obtain asymptotic formulas for the spectral data (the eigenvalues and the weight matrices) of the boundary value problem. Our results will be useful in the theory of direct and inverse spectral problems.
Confidence Intervals for Laboratory Sonic Boom Annoyance Tests
NASA Technical Reports Server (NTRS)
Rathsam, Jonathan; Christian, Andrew
2016-01-01
Commercial supersonic flight is currently forbidden over land because sonic booms have historically caused unacceptable annoyance levels in overflown communities. NASA is providing data and expertise to noise regulators as they consider relaxing the ban for future quiet supersonic aircraft. One deliverable NASA will provide is a predictive model for indoor annoyance to aid in setting an acceptable quiet sonic boom threshold. A laboratory study was conducted to determine how indoor vibrations caused by sonic booms affect annoyance judgments. The test method required finding the point of subjective equality (PSE) between sonic boom signals that cause vibrations and signals not causing vibrations played at various amplitudes. This presentation focuses on a few statistical techniques for estimating the interval around the PSE. The techniques examined are the Delta Method, Parametric and Nonparametric Bootstrapping, and Bayesian Posterior Estimation.
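Of the techniques named above, the nonparametric bootstrap is the simplest to sketch: resample the data with replacement, recompute the point estimate each time, and read the interval off the empirical quantiles. The values below are hypothetical boom levels, and the PSE is reduced to a sample mean purely for illustration:

```python
# Percentile-bootstrap interval around a point estimate; data are hypothetical.
import random

random.seed(0)
pse_samples = [78.2, 80.1, 79.5, 81.0, 77.8, 80.4, 79.9, 78.6]  # hypothetical dB levels

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05):
    """Resample with replacement, collect the statistic, take empirical quantiles."""
    boots = sorted(stat(random.choices(data, k=len(data))) for _ in range(n_boot))
    lo = boots[int(alpha / 2 * n_boot)]
    hi = boots[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

mean = lambda xs: sum(xs) / len(xs)
lo, hi = bootstrap_ci(pse_samples, mean)
print(lo <= mean(pse_samples) <= hi)
```

The parametric bootstrap differs only in that resamples are drawn from a fitted model rather than from the data themselves.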
Evaluation of Tp-E Interval and Tp-E/QT Ratio in Patients with Aortic Stenosis.
Yayla, Çağrı; Bilgin, Murat; Akboğa, Mehmet Kadri; Gayretli Yayla, Kadriye; Canpolat, Uğur; Dinç Asarcikli, Lale; Doğan, Mehmet; Turak, Osman; Çay, Serkan; Özeke, Özcan; Akyel, Ahmet; Yeter, Ekrem; Aydoğdu, Sinan
2016-05-01
The risk of syncope and sudden cardiac death due to ventricular arrhythmias is increased in patients with aortic stenosis (AS). Recently, it was shown that the Tp-e interval and the Tp-e/QT and Tp-e/QTc ratios can be novel indicators for the prediction of ventricular arrhythmias and mortality. We aimed to investigate the association between AS and ventricular repolarization using the Tp-e interval and Tp-e/QT ratio. In total, 105 patients with AS and 60 control subjects were enrolled in this study. The severity of AS was defined by transthoracic echocardiographic examination. The Tp-e interval and the Tp-e/QT and Tp-e/QTc ratios were measured from the 12-lead electrocardiogram. The Tp-e interval and the Tp-e/QT and Tp-e/QTc ratios increased significantly in parallel with the severity of AS (P < 0.001, P = 0.001, and P = 0.001, respectively). The Tp-e/QTc ratio also showed a significant positive correlation with the mean aortic gradient (r = 0.192, P = 0.049). In multivariate logistic regression analysis, the Tp-e/QTc ratio and left ventricular mass were found to be independent predictors of severe AS (P = 0.03 and P = 0.04, respectively). Our study showed that the Tp-e interval and the Tp-e/QT and Tp-e/QTc ratios are increased in patients with severe AS, and that the Tp-e/QTc ratio and left ventricular mass independently predict severe AS. © 2015 Wiley Periodicals, Inc.
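The indices named in the abstract combine two ECG measurements. A minimal sketch of their arithmetic, using the standard Bazett heart-rate correction and hypothetical interval values in milliseconds (not data from the study):

```python
# Tp-e/QT and Tp-e/QTc ratios; QTc via Bazett's formula. Values are hypothetical.
import math

def qtc_bazett(qt_ms, rr_ms):
    """Bazett correction: QTc = QT / sqrt(RR in seconds)."""
    return qt_ms / math.sqrt(rr_ms / 1000.0)

qt, rr, tpe = 400.0, 800.0, 100.0  # hypothetical QT, RR, and Tp-e intervals (ms)
qtc = qtc_bazett(qt, rr)
print(round(tpe / qt, 3), round(tpe / qtc, 3))
```

Because QTc normalizes for heart rate, Tp-e/QTc is often preferred when comparing patients with different heart rates.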
NASA Astrophysics Data System (ADS)
Qiao, T.; Ren, J.; Craigie, C.; Zabalza, J.; Maltin, Ch.; Marshall, S.
2015-03-01
It is well known that the eating quality of beef has a significant influence on the repurchase behavior of consumers. There are several key factors that affect the perception of quality, including color, tenderness, juiciness, and flavor. To support consumer repurchase choices, there is a need for an objective measurement of quality that could be applied to meat prior to its sale. Objective approaches such as those offered by spectral technologies may be useful, but the analytical algorithms used remain to be optimized. For visible and near infrared (VISNIR) spectroscopy, Partial Least Squares Regression (PLSR) is a widely used technique for meat-related quality modeling and prediction. In this paper, a Support Vector Machine (SVM) based machine learning approach is presented to predict beef eating quality traits. Although SVM has been successfully used in various disciplines, it has not been applied extensively to the analysis of meat quality parameters. To this end, the performance of PLSR and SVM as tools for the analysis of meat tenderness is evaluated, using a large dataset acquired under industrial conditions. The spectral dataset was collected using VISNIR spectroscopy with the wavelength ranging from 350 to 1800 nm on 234 beef M. longissimus thoracis steaks from heifers, steers, and young bulls. As the dimensionality of the VISNIR data is very high (over 1600 spectral bands), the Principal Component Analysis (PCA) technique was applied for feature extraction and data reduction. The extracted principal components (fewer than 100) were then used for data modeling and prediction. The prediction results showed that SVM has a greater potential to predict beef eating quality than PLSR, especially for the prediction of tenderness. The influence of animal gender on beef quality prediction was also investigated, and it was found that beef quality traits were predicted most accurately in beef from young bulls.
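The data-reduction step described above can be sketched with NumPy alone: center the spectra and project them onto the leading right-singular vectors (PCA via SVD). The "spectra" here are synthetic noise, not the paper's 1600-band VISNIR data, and the component count is arbitrary:

```python
# PCA via SVD for dimensionality reduction before regression; data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_bands, n_pcs = 50, 200, 5
spectra = rng.normal(size=(n_samples, n_bands))     # stand-in for VISNIR spectra

centered = spectra - spectra.mean(axis=0)           # PCA requires centered data
_, _, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt[:n_pcs].T                    # project onto leading PCs

print(scores.shape)  # 200 "bands" reduced to 5 features per sample
```

The resulting score matrix would then feed a regressor (SVM or PLSR in the paper's comparison) in place of the raw spectra.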
Rocco, Paolo; Cilurzo, Francesco; Minghetti, Paola; Vistoli, Giulio; Pedretti, Alessandro
2017-10-01
The data presented in this article are related to the article titled "Molecular Dynamics as a tool for in silico screening of skin permeability" (Rocco et al., 2017) [1]. Knowledge of the confidence interval and maximum theoretical value of the correlation coefficient r can prove useful to estimate the reliability of developed predictive models, in particular when there is great variability in compiled experimental datasets. In this Data in Brief article, data from purposely designed numerical simulations are presented to show how much the maximum r value is worsened by increasing the data uncertainty. The corresponding confidence interval of r is determined by using the Fisher r → Z transform.
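The Fisher r → Z construction mentioned above is short enough to sketch directly: transform r, build a normal interval on the Z scale (standard error 1/sqrt(n-3)), and back-transform. The r and n values below are illustrative, not taken from the article:

```python
# Confidence interval for a correlation coefficient via the Fisher transform.
import math

def fisher_ci(r, n, z_crit=1.96):
    """95% CI by default; atanh is the Fisher r -> Z transform."""
    z = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)       # standard error on the Z scale
    lo, hi = z - z_crit * se, z + z_crit * se
    return math.tanh(lo), math.tanh(hi)

lo, hi = fisher_ci(r=0.8, n=30)
print(round(lo, 3), round(hi, 3))
```

Note the asymmetry of the back-transformed interval around r = 0.8, a direct consequence of the nonlinearity of tanh.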
Pietras, Cynthia J; Brandt, Andrew E; Searcy, Gabriel D
2010-01-01
An experiment with adult humans investigated the effects of response-contingent money loss (response-cost punishment) on monetary-reinforced responding. A yoked-control procedure was used to separate the effects on responding of the response-cost contingency from the effects of reduced reinforcement density. Eight adults pressed buttons for money on a three-component multiple reinforcement schedule. During baseline, responding in all components produced money gains according to a random-interval 20-s schedule. During punishment conditions, responding during the punishment component conjointly produced money losses according to a random-interval schedule. The value of the response-cost schedule was manipulated across conditions to systematically evaluate the effects on responding of response-cost frequency. Participants were assigned to one of two yoked-control conditions. For participants in the Yoked Punishment group, during punishment conditions money losses were delivered in the yoked component response independently at the same intervals that money losses were produced in the punishment component. For participants in the Yoked Reinforcement group, responding in the yoked component produced the same net earnings as produced in the punishment component. In 6 of 8 participants, contingent response cost selectively decreased response rates in the punishment component and the magnitude of the decrease was directly related to the punishment schedule value. Under punishment conditions, for participants in the Yoked Punishment group response rates in the yoked component also decreased, but the decrease was less than that observed in the punishment component, whereas for participants in the Yoked Reinforcement group response rates in the yoked component remained similar to rates in the no-punishment component. These results provide further evidence that contingent response cost functions similarly to noxious punishers in that it appears to suppress responding apart from
Eguchi, Kazuo; Kuruvilla, Sujith; Ogedegbe, Gbenga; Gerin, William; Schwartz, Joseph E; Pickering, Thomas G
2009-06-01
To clarify whether a shorter interval between three successive home blood pressure (HBP) readings (10 s vs. 1 min), taken twice a day, gives a better prediction of the average 24-h BP and better patient compliance. We enrolled 56 patients from a hypertension clinic (mean age: 60 +/- 14 years; 54% female). The study consisted of three clinic visits, with two 4-week periods of self-monitoring of HBP between them, and 24-h ambulatory BP monitoring at the second visit. Using a crossover design with randomized order, the oscillometric HBP device (HEM-5001) was programmed to take three consecutive readings at either 10-s or 1-min intervals, each of which was done for 4 weeks. Patients were asked to take three HBP readings in the morning and evening. All readings were stored in the memory of the monitors. The analyses were performed using the second and third HBP readings. The average systolic BP/diastolic BP for the 10-s and 1-min intervals at home were 136.1 +/- 15.8/77.5 +/- 9.5 and 133.2 +/- 15.5/76.9 +/- 9.3 mmHg (P = 0.001/0.19 for the differences in systolic BP and diastolic BP), respectively. The 1-min readings were significantly closer to the average awake ambulatory BP (131 +/- 14/79 +/- 10 mmHg) than the 10-s readings. There was no significant difference in patients' compliance in taking adequate numbers of readings at the two time intervals. The 1-min interval between HBP readings gave closer agreement with the daytime average BP than the 10-s interval.
Prediction of glass durability as a function of environmental conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jantzen, C M
1988-01-01
A thermodynamic model of glass durability is applied to natural, ancient, and nuclear waste glasses. The durabilities of over 150 different natural and man-made glasses, including actual ancient Roman and Islamic glasses (Jalame ca. 350 AD, Nishapur 10th-11th century AD, and Gorgan 9th-11th century AD), are compared. Glass durability is a function of the thermodynamic hydration free energy, ΔG_hyd, which can be calculated from glass composition and solution pH. The durability of the most durable nuclear waste glasses examined was approximately 10^6 years. The least durable waste glass formulations were comparable in durability to the most durable simulated medieval window glasses, approximately 10^3 years. In this manner, the durability of nuclear waste glasses has been interpolated to be approximately 10^6 years and no less than 10^3 years. Hydration thermodynamics have been shown to be applicable to the dissolution of glass in various natural environments. Groundwater-glass interactions relevant to geologic disposal of nuclear waste, hydration rind dating of obsidians, and other archeological studies can be modeled; e.g., the relative durabilities of six simulated medieval window glasses have been correctly predicted for both laboratory (one month) and burial (5 years) experiments. The effect of solution pH on glass dissolution has been determined experimentally for the 150 different glasses and can be predicted theoretically by hydration thermodynamics. The effects of solution redox on dissolution of glass matrix elements such as Si and B have been shown to be minimal. The combined effects of solution pH and Eh have been described and unified by construction of thermodynamically calculated Pourbaix (pH-Eh) diagrams for glass dissolution. The Pourbaix diagrams have been quantified to describe glass dissolution as a function of environmental conditions by use of the data derived from hydration thermodynamics. 56 refs., 7 figs.
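The abstract says ΔG_hyd is "calculated from glass composition"; one common form of such a calculation is a mole-fraction-weighted sum of per-component hydration free energies. The sketch below illustrates only that arithmetic: all component names are real oxides, but every numerical value is hypothetical, not from the report:

```python
# Composition-weighted hydration free energy; all numbers are made up.
dg_component = {"SiO2": -1.0, "Na2O": -30.0, "B2O3": -10.0}  # kcal/mol, hypothetical
composition = {"SiO2": 0.70, "Na2O": 0.15, "B2O3": 0.15}     # mole fractions

dg_hyd = sum(composition[ox] * dg_component[ox] for ox in composition)
print(round(dg_hyd, 2))  # more negative => hydration more favorable, glass less durable
```

In the model described, a more negative ΔG_hyd corresponds to a thermodynamically more favorable hydration reaction and hence a less durable glass.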
de Souza, Jorge F T; Dáttilo, Murilo; de Mello, Marco T; Tufik, Sergio; Antunes, Hanna K M
2017-01-01
Introduction: Sleep deprivation can impair several physiological systems and, recently, new evidence has pointed to a relationship between lack of sleep and carbohydrate metabolism, resulting in insulin resistance. To minimize this effect, High-Intensity Interval Training (HIIT) is emerging as a potential strategy. Objective: The aim of this study was to investigate the effects of HIIT on insulin resistance induced by sleep deprivation. Method: Eleven healthy male volunteers were recruited, aged 18-35 years, who declared taking 7-8 h of sleep per night. All volunteers were submitted to four different conditions: a single night of regular sleep (RS condition), 24 h of total sleep deprivation (SD condition), HIIT training followed by regular sleep (HIIT+RS condition), and HIIT training followed by 24 h of total sleep deprivation (HIIT+SD condition). They performed six training sessions over 2 weeks and each session consisted of 8-12 × 60 s intervals at 100% of peak power output. In each experimental condition, tests for glucose, insulin, cortisol, free fatty acids, and insulin sensitivity, measured by oral glucose tolerance test (OGTT), were performed. Results: Sleep deprivation increased glycaemia and insulin levels, as well as the area under the curve. Furthermore, an increase in free fatty acid concentrations and basal metabolism was observed. There were no differences in the concentrations of cortisol. However, HIIT before 24 h of sleep deprivation attenuated the increase of glucose, insulin, and free fatty acids. Conclusion: Twenty-four hours of sleep deprivation resulted in acute insulin resistance. However, HIIT is an effective strategy to minimize the deleterious effects promoted by this condition.
Acheson, Dean T; Forsyth, John P; Moses, Erica
2012-03-01
Interoceptive fear conditioning is at the core of contemporary behavioral accounts of panic disorder. Yet, to date only one study has attempted to evaluate interoceptive fear conditioning in humans (see Acheson, Forsyth, Prenoveau, & Bouton, 2007). That study used brief (physiologically inert) and longer-duration (panicogenic) inhalations of 20% CO(2)-enriched air as an interoceptive conditioned (CS) and unconditioned (US) stimulus and evaluated fear learning in three conditions: CS only, CS-US paired, and CS-US unpaired. Results showed fear conditioning in the paired condition, and fearful responding and resistance to extinction in an unpaired condition. The authors speculated that such effects may be due to difficulty discriminating between the CS and the US. The aims of the present study are to (a) replicate and expand this line of work using an improved methodology, and (b) clarify the role of CS-US discrimination difficulties in either potentiating or depotentiating fear learning. Healthy participants (N=104) were randomly assigned to one of four conditions: (a) CS only, (b) contingent CS-US pairings, (c) unpaired CS and US presentations, or (d) an unpaired "discrimination" contingency, which included an exteroceptive discrimination cue concurrently with CS onset. Electrodermal and self-report ratings served as indices of conditioned responding. Consistent with expectation, the paired contingency and unpaired contingencies yielded elevated fearful responding to the CS alone. Moreover, adding a discrimination cue to the unpaired contingency effectively attenuated fearful responding. Overall, findings are consistent with modern learning theory accounts of panic and highlight the role of interoceptive conditioning and unpredictability in the etiology of panic disorder. Copyright © 2011. Published by Elsevier Ltd.
Agustina López, M; Jimena Santos, M; Cortasa, Santiago; Fernández, Rodrigo S; Carbó Tano, Martin; Pedreira, María E
2016-12-01
The reconsolidation process is the mechanism by which the strength and/or content of consolidated memories are updated. Prediction error (PE) is the difference between what is predicted and what actually occurs, and it is proposed as a necessary condition for triggering the reconsolidation process. Here we analyzed in depth the role of PE in associative memory reconsolidation in the crab Neohelice granulata. An incongruence with the learned temporal relationship between the conditioned and unconditioned stimuli (CS-US) was enough to trigger the reconsolidation process. Moreover, after partially reinforced training, a PE of 50% opened the possibility of labilizing the consolidated memory with a reminder that either included or omitted the US. Further, during extinction training, a small PE in the first interval between CSs was enough to trigger reconsolidation. Overall, we highlight the relation between training history and different reactivation conditions in recruiting the process responsible for memory updating. Copyright © 2016 Elsevier Inc. All rights reserved.
High resolution time interval meter
Martin, A.D.
1986-05-09
Method and apparatus are provided for measuring the time interval between two events to a higher resolution than is reliably available from conventional circuits and components. An internal clock pulse is provided at a frequency compatible with conventional component operating frequencies for reliable operation. Lumped-constant delay circuits are provided for generating outputs at delay intervals corresponding to the desired high resolution. An initiating START pulse is input to generate first high-resolution data. A terminating STOP pulse is input to generate second high-resolution data. Internal counters count at the low-frequency internal clock pulse rate between the START and STOP pulses. The first and second high-resolution data are logically combined to directly provide high-resolution data to one counter and to correct the count in the low-resolution counter, yielding a high-resolution time interval measurement.
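The coarse/fine combination the abstract describes can be sketched arithmetically: a low-frequency counter supplies whole clock periods, and delay-line tap positions resolve where START and STOP fall within a period. All numbers below (clock period, tap spacing) are hypothetical, chosen only to make the arithmetic concrete:

```python
# Coarse counter + delay-line interpolation; parameters are hypothetical.
CLOCK_NS = 100.0   # coarse clock period (10 MHz), made up
TAP_NS = 12.5      # delay-line tap spacing: 8 taps per clock period, made up

def interval(coarse_counts, start_tap, stop_tap):
    """Combine whole clock periods with the fine START/STOP tap positions."""
    return coarse_counts * CLOCK_NS + (stop_tap - start_tap) * TAP_NS

print(interval(coarse_counts=3, start_tap=2, stop_tap=6))  # -> 350.0 ns
```

The fine terms can be negative as well as positive, which is how the scheme "corrects the count in the low-resolution counter".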
Suárez Rodríguez, David; del Valle Soto, Miguel
2017-01-01
Background The aim of this study is to find the differences between two specific interval exercises. We begin with the hypothesis that the use of microintervals of work and rest allow for greater intensity of play and a reduction in fatigue. Methods Thirteen competition-level male tennis players took part in two interval training exercises comprising nine 2 min series, which consisted of hitting the ball with cross-court forehand and backhand shots, behind the service box. One was a high-intensity interval training (HIIT), made up of periods of continuous work lasting 2 min, and the other was intermittent interval training (IIT), this time with intermittent 2 min intervals, alternating periods of work with rest periods. Average heart rate (HR) and lactate levels were registered in order to observe the physiological intensity of the two exercises, along with the Borg Scale results for perceived exertion and the number of shots and errors in order to determine the intensity achieved and the degree of fatigue throughout the exercise. Results There were no significant differences in the average heart rate, lactate or the Borg Scale. Significant differences were registered, on the other hand, with a greater number of shots in the first two HIIT series (series 1 p>0.009; series 2 p>0.056), but not in the third. The number of errors was significantly lower in all the IIT series (series 1 p<0.035; series 2 p<0.010; series 3 p<0.001). Conclusion Our study suggests that high-intensity intermittent training allows for greater intensity of play in relation to the real time spent on the exercise, reduced fatigue levels and the maintaining of greater precision in specific tennis-related exercises. PMID:29021912
Konrad, J; Clérico, G; Garrido, M J; Taminelli, G; Yuponi, M; Yuponi, R; Crudeli, G; Sansinena, M
2017-08-01
The excellent adaptation of water buffalo (Bubalis bubalis) to swampy environments means that animals are frequently managed in areas with restricted access for reproductive procedures. The objective of this study was to evaluate the effect of the ovum pick-up (OPU) interval on follicular population, oocyte recovery, oocyte quality and in vitro embryo production. Twelve Murrah buffaloes were subjected to two consecutive dominant follicle reductions, and randomly assigned to either 7-day (n=6) or 14-day (n=6) OPU interval groups. Although there was no significant difference in the average number of small (<3mm) and large (>8mm) diameter follicles available per OPU, a higher proportion of medium-sized follicles (3-8mm) were observed in the 14-day interval group (5.129 vs 3.267; p<0.05). The number of recovered oocytes per donor was also significantly higher (4.51 vs. 2.8; p<0.05) in the 14-day interval group, although this was attributed to an increase in the proportion of lower quality oocytes (grades III and IV). After in vitro fertilization, embryo developmental competence from grade I and II oocytes was superior to that from grade III and IV oocytes, irrespective of OPU interval group. There was no significant difference in the proportion of grade I and II oocytes cleaved after sperm co-incubation; however, there was a higher proportion of blastocysts produced in 14-day interval group (28 vs. 6%, p<0.05). No blastocysts were produced from grade III and IV oocytes. This study indicates it is possible to use a 14-day interval for oocyte collection in water buffalo; this approach could be considered as an alternative when access to animals is restricted. Copyright © 2017 Elsevier B.V. All rights reserved.
Simulating Conditional Deterministic Predictability within Ocean Frontogenesis
2014-03-26
Prediction System (COAMPS; Hodur, 1997) across the inner domain. The surface wind stress is determined from the atmospheric model wind velocity...layers on the light side of the front. Increasing the strength of the down-front wind increases the frontogenesis. Mahadevan and Tandon (2006) showed...Filaments of shallow MLD, large frontogenesis and large surface divergence (upwelling) are found in the OSEs, but at different locations and strengths. The
NASA Astrophysics Data System (ADS)
Li, Yi; Xu, Yan Long
2018-05-01
When the dependence of a function on its uncertain variables is non-monotonic over an interval, the interval of the function obtained by the classic interval extension based on the first-order Taylor series exhibits significant errors. To reduce these errors, an improved form of the first-order Taylor interval extension that accounts for the monotonicity of the function is developed here. Two typical mathematical examples illustrate the methodology. The vibration of a beam with lumped masses is studied to demonstrate the usefulness of the method in practical applications; the only input data required are the function value at the central point of the interval and the sensitivity and deviation of the function. The results of the above examples show that the interval of the function given by the proposed method is more accurate than that obtained by the classic method.
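The idea above can be sketched for a scalar function on [a, b]: when the first derivative keeps one sign over the interval, the endpoint values bound the function exactly; otherwise fall back to the classic first-order Taylor extension around the midpoint. This is an illustrative reading of the approach, not the paper's formulation; derivatives are supplied analytically, and checking the derivative sign only at the endpoints is itself an assumption:

```python
# Monotonicity-aware interval extension vs. first-order Taylor fallback.
def interval_extension(f, df, a, b):
    c, r = 0.5 * (a + b), 0.5 * (b - a)          # midpoint and radius
    if df(a) * df(b) > 0:                        # same derivative sign: assume monotone
        lo, hi = sorted((f(a), f(b)))            # endpoints bound f exactly
        return lo, hi
    return f(c) - abs(df(c)) * r, f(c) + abs(df(c)) * r  # classic Taylor extension

f = lambda x: x * x
df = lambda x: 2 * x
print(interval_extension(f, df, 1.0, 2.0))  # monotone case -> (1.0, 4.0), exact
```

On the non-monotone interval [-1, 2] the Taylor fallback gives (-1.25, 1.75) while the true range of x² is [0, 4], which is exactly the kind of error the paper's refinement targets.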
Confronting uncertainty in flood damage predictions
NASA Astrophysics Data System (ADS)
Schröter, Kai; Kreibich, Heidi; Vogel, Kristin; Merz, Bruno
2015-04-01
Reliable flood damage models are a prerequisite for practically useful model results. Traditional univariate damage models, such as depth-damage curves, often fail to reproduce the variability of observed flood damage. Innovative multi-variate probabilistic modelling approaches are promising means to capture and quantify the uncertainty involved and thus to improve the basis for decision making. In this study we compare the predictive capability of two probabilistic modelling approaches, namely Bagging Decision Trees and Bayesian Networks. For model evaluation we use empirical damage data compiled in computer-aided telephone interviews after the floods of 2002, 2005 and 2006 in the Elbe and Danube catchments in Germany. We carry out a split-sample test by sub-setting the damage records: one sub-set is used to derive the models and the remaining records are used to evaluate their predictive performance. Further, we stratify the sample according to catchments, which allows studying model performance in a spatial transfer context. Flood damage estimation is carried out on the scale of individual buildings in terms of relative damage. The predictive performance of the models is assessed in terms of systematic deviations (mean bias), precision (mean absolute error), and reliability, represented by the proportion of observations that fall within the 5%-95% quantile predictive interval. The reliability of the probabilistic predictions within validation runs decreases only slightly and achieves a very good coverage of observations within the predictive interval. Probabilistic models provide quantitative information about prediction uncertainty, which is crucial for assessing the reliability of model predictions and improves the usefulness of model results.
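The reliability measure described above reduces to a counting exercise: the share of observations falling inside each prediction's 5%-95% quantile interval. A minimal sketch with made-up relative-damage values (the real study would compare this empirical coverage against the nominal 90%):

```python
# Empirical coverage of 5%-95% predictive intervals; all values are made up.
obs = [0.10, 0.25, 0.40, 0.55, 0.70]   # observed relative damage
q05 = [0.05, 0.20, 0.45, 0.50, 0.60]   # per-prediction 5% quantiles
q95 = [0.20, 0.50, 0.60, 0.60, 0.70]   # per-prediction 95% quantiles

covered = sum(lo <= y <= hi for y, lo, hi in zip(obs, q05, q95))
print(covered / len(obs))  # nominal target would be 0.90
```

Coverage well below the nominal level signals overconfident intervals; coverage well above it signals intervals that are wider than necessary.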
Wang, Wei-Qing; Cheng, Hong-Yan; Song, Song-Quan
2013-01-01
Effects of temperature, storage time and their combination on germination of aspen (Populus tomentosa) seeds were investigated. Aspen seeds were germinated at 5 to 30°C at 5°C intervals after storage for a period of time at 28°C and 75% relative humidity. The effect of temperature on aspen seed germination could not be effectively described by the thermal time (TT) model, which underestimated the germination rate at 5°C and poorly predicted the time courses of germination at 10, 20, 25 and 30°C. A modified TT model (MTT), which assumed a two-phase linear relationship between germination rate and temperature, was more accurate in predicting the germination rate and percentage and had a higher likelihood of being correct than the TT model. The maximum lifetime threshold (MLT) model accurately described the effect of storage time on seed germination across all the germination temperatures. An aging thermal time (ATT) model combining both the TT and MLT models was developed to describe the effect of both temperature and storage time on seed germination. When the ATT model was applied to germination data across all the temperatures and storage times, it produced a relatively poor fit. Adjusting the ATT model to separately fit germination data at low and high temperatures in the suboptimal range increased the model's accuracy for predicting seed germination. Both the MLT and ATT models indicate that germination of aspen seeds has distinct physiological responses to temperature within the suboptimal range. PMID:23658654
Developing a predictive tropospheric ozone model for Tabriz
NASA Astrophysics Data System (ADS)
Khatibi, Rahman; Naghipour, Leila; Ghorbani, Mohammad A.; Smith, Michael S.; Karimi, Vahid; Farhoudi, Reza; Delafrouz, Hadi; Arvanaghi, Hadi
2013-04-01
Predictive ozone models are becoming indispensable tools by providing a capability for pollution alerts to serve people who are vulnerable to the risks. We have developed a tropospheric ozone prediction capability for Tabriz, Iran, using the following five modeling strategies: three regression-type methods, Multiple Linear Regression (MLR), Artificial Neural Networks (ANNs), and Gene Expression Programming (GEP); and two auto-regression-type models, Nonlinear Local Prediction (NLP), implementing chaos theory, and Auto-Regressive Integrated Moving Average (ARIMA) models. The regression-type strategies explain the data in terms of temperature, solar radiation, dew point temperature, and wind speed, whereas the auto-regression-type models regress present ozone values on their past values. The ozone time series are available at various time intervals, including hourly intervals, from August 2010 to March 2011. The results for the MLR, ANN and GEP models are not overly good, but those produced by NLP and ARIMA are promising for establishing a forecasting capability.
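Of the five strategies, MLR is the simplest to illustrate: regress ozone on the four meteorological predictors named above with ordinary least squares. This is not the authors' implementation; all data below are synthetic, with known coefficients so the fit can be checked:

```python
# Ordinary-least-squares MLR on synthetic stand-ins for the Tabriz predictors.
import numpy as np

rng = np.random.default_rng(1)
n = 200
temp, rad, dew, wind = (rng.normal(size=n) for _ in range(4))
ozone = 3.0 * temp + 1.5 * rad - 0.5 * dew + 0.2 * wind + rng.normal(scale=0.1, size=n)

X = np.column_stack([np.ones(n), temp, rad, dew, wind])  # intercept + 4 predictors
coef, *_ = np.linalg.lstsq(X, ozone, rcond=None)
print(np.round(coef[1:], 1))  # recovers roughly [3.0, 1.5, -0.5, 0.2]
```

The auto-regression-type strategies (NLP, ARIMA) differ in using lagged ozone values themselves as the predictors instead of meteorological variables.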
NASA Astrophysics Data System (ADS)
Romanov, D.; Epting, J.; Huggenberger, P.; Kaufmann, G.
2009-04-01
Karst aquifers are very sensitive to environmental changes. Small variations in boundary conditions can trigger significant and fast changes in the basic properties of these geological formations. Furthermore, a large number of hydraulic structures have been built in karst terrains close to urban areas. In such settings it is of primary importance to understand the basic processes governing the system and to predict the evolution of karst aquifers in order to mitigate hazards. There has been great progress in numerical modeling of karst evolution during the last decades: we are now able to model early karstification at locations with complicated geological and geochemical settings, and our knowledge of the basic processes governing karst evolution has increased significantly. However, there are still few modeling attempts with data from real karst aquifers. A model describing the evolution of a gypsum karst aquifer along the Birs River in Switzerland is presented in this study. The initial and boundary conditions for the simulations are taken from the results of geophysical and geological field studies and a detailed 3D hydrogeological model of the area. Three time intervals of the aquifer's development are discussed in detail. The first covers natural karstification over a period from several hundred up to a few thousand years. The results from this evolution period are used as initial conditions for the second interval, which covers the time between 1890 and 2007 AD. This period is characterized by anthropogenic alterations of the system through a man-made river dam, which considerably changed the evolution of the aquifer. In 2006 and 2007 AD, after serious subsidence of a nearby highway had been observed, technical measures were conducted and the boundary conditions changed once again. This is the beginning of the third modeled interval, for which a forecast for the following 100 years is developed. Our results correlate very well
IBM system/360 assembly language interval arithmetic software
NASA Technical Reports Server (NTRS)
Phillips, E. J.
1972-01-01
Computer software designed to perform interval arithmetic is described. An interval is defined as the set of all real numbers between two given numbers, including or excluding one or both endpoints. Interval arithmetic consists of the various elementary arithmetic operations defined on the set of all intervals, such as interval addition, subtraction, union, etc. One of the main applications of interval arithmetic is in the area of error analysis of computer calculations. For example, it has been used successfully to compute bounds on rounding errors in the solution of linear algebraic systems, error bounds in numerical solutions of ordinary differential equations, as well as integral equations and boundary value problems. The described software enables users to implement algorithms of the type described in references efficiently on the IBM 360 system.
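The elementary operations the abstract lists (addition, subtraction, and the interval hull used as a "union") can be sketched in a few lines. This is a minimal illustration of closed-interval arithmetic in Python, not the IBM 360 assembly software itself; the class and method names are my own.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """A closed interval [lo, hi] of real numbers."""
    lo: float
    hi: float

    def __add__(self, other):
        # [a, b] + [c, d] = [a + c, b + d]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # [a, b] - [c, d] = [a - d, b - c]
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # min/max over the four endpoint products handles sign changes
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def hull(self, other):
        # smallest interval containing both operands ("union" in the abstract)
        return Interval(min(self.lo, other.lo), max(self.hi, other.hi))
```

In a rounding-error analysis, each machine number is replaced by an interval guaranteed to contain the true value, and every arithmetic step widens the interval conservatively, so the final interval bounds the accumulated error.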
Magnetic Resonance Fingerprinting with short relaxation intervals.
Amthor, Thomas; Doneva, Mariya; Koken, Peter; Sommer, Karsten; Meineke, Jakob; Börnert, Peter
2017-09-01
The aim of this study was to investigate a technique for improving the performance of Magnetic Resonance Fingerprinting (MRF) in repetitive sampling schemes, in particular for 3D MRF acquisition, by shortening relaxation intervals between MRF pulse train repetitions. A calculation method for MRF dictionaries adapted to short relaxation intervals and non-relaxed initial spin states is presented, based on the concept of stationary fingerprints. The method is applicable to many different k-space sampling schemes in 2D and 3D. For accuracy analysis, T1 and T2 values of a phantom are determined by single-slice Cartesian MRF for different relaxation intervals and are compared with quantitative reference measurements. The relevance of slice profile effects is also investigated in this case. To further illustrate the capabilities of the method, an application to in-vivo spiral 3D MRF measurements is demonstrated. The proposed computation method enables accurate parameter estimation even for the shortest relaxation intervals, as investigated for different sampling patterns in 2D and 3D. In 2D Cartesian measurements, we achieved a scan acceleration of more than a factor of two, while maintaining acceptable accuracy: The largest T1 values of a sample set deviated from their reference values by 0.3% (longest relaxation interval) and 2.4% (shortest relaxation interval). The largest T2 values showed systematic deviations of up to 10% for all relaxation intervals, which is discussed. The influence of slice profile effects for multislice acquisition is shown to become increasingly relevant for short relaxation intervals. In 3D spiral measurements, a scan time reduction of 36% was achieved, maintaining the quality of in-vivo T1 and T2 maps. Reducing the relaxation interval between MRF sequence repetitions using stationary fingerprint dictionaries is a feasible method to improve the scan efficiency of MRF sequences. The method enables fast implementations of 3D spatially
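MRF parameter estimation works by matching each measured fingerprint against a precomputed dictionary of simulated signal evolutions, typically by maximizing the normalized inner product. The following is a generic sketch of that matching step, with entirely hypothetical dictionary entries; the stationary-fingerprint dictionary calculation described in the abstract is not reproduced here.

```python
import math

def match_fingerprint(signal, dictionary):
    """Return the (T1, T2) key whose dictionary template has the highest
    normalized inner product (cosine similarity) with the measured signal."""
    def norm(v):
        return math.sqrt(sum(x * x for x in v))
    s_norm = norm(signal)
    best, best_score = None, -1.0
    for params, template in dictionary.items():
        score = sum(a * b for a, b in zip(signal, template)) / (s_norm * norm(template))
        if score > best_score:
            best, best_score = params, score
    return best
```

Because the score is scale-invariant, a measured fingerprint that is a scalar multiple of a template matches it exactly, which is why only the signal's shape over the pulse train matters.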
The Influence of Interpregnancy Interval on Infant Mortality
MCKINNEY, David; HOUSE, Melissa; CHEN, Aimin; MUGLIA, Louis; DEFRANCO, Emily
2017-01-01
Background In Ohio the infant mortality rate is above the national average and the black infant mortality rate is more than twice the white infant mortality rate. Having a short interpregnancy interval has been shown to correlate with preterm birth and low birth weight, but the effect of short interpregnancy interval on infant mortality is less well established. Objective To quantify the population impact of interpregnancy interval on the risk of infant mortality. Study Design This was a statewide population-based retrospective cohort study of all births (n=1,131,070) and infant mortalities (n=8,152) using linked Ohio birth and infant death records from 1/2007 through 9/2014. For this study we analyzed 5 interpregnancy interval categories: 0 to < 6 months, 6 to < 12 months, 12 to < 24 months, 24 to < 60 months, and ≥ 60 months. The primary outcome for this study was infant mortality. During the study period, 3701 infant mortalities were linked to a live birth certificate with an interpregnancy interval available. We calculated the frequency and relative risk (RR) of infant mortality for each interval compared to a referent interval of 12 to < 24 months. Stratified analyses by maternal race were also performed. Adjusted risks were estimated after accounting for statistically significant and biologically plausible confounding variables. Adjusted relative risk was utilized to calculate the attributable risk percent of short interpregnancy intervals on infant mortality. Results Short interpregnancy intervals were common in Ohio during the study period. 20.5% of all multiparous births followed an interval of < 12 months. The overall infant mortality rate during this time was 7.2 per 1000 live births (6.0 for white mothers and 13.1 for black mothers). Infant mortalities occurred more frequently for births that occurred following short intervals of 0 to < 6 months (9.2 per 1000) and 6 to < 12 months (7.1 per 1000) compared to 12 to < 24 months (5.6 per 1000), (p= <0
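The abstract's final step, turning an adjusted relative risk and the exposure prevalence into an attributable risk percent, is commonly done with Levin's formula. A minimal sketch follows; the relative-risk value in the test is hypothetical, not a result from the study.

```python
def population_attributable_risk_pct(prevalence, relative_risk):
    """Levin's formula: PAR% = p(RR - 1) / (1 + p(RR - 1)) * 100,
    where p is the prevalence of the exposure (here, a short
    interpregnancy interval) and RR its adjusted relative risk."""
    excess = prevalence * (relative_risk - 1.0)
    return 100.0 * excess / (1.0 + excess)
```

With the reported 20.5% prevalence of intervals under 12 months, a hypothetical adjusted RR of 1.5 would attribute roughly 9% of infant mortality to short intervals; the true figure depends on the study's adjusted estimates.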
Mäkelä, Mikko; Fraikin, Laurent; Léonard, Angélique; Benavente, Verónica; Fullana, Andrés
2016-03-15
The effects of hydrothermal treatment on the drying properties of sludge were determined. Sludge was hydrothermally treated at 180-260 °C for 0.5-5 h using NaOH and HCl as additives to influence reaction conditions. Untreated sludge and attained hydrochar samples were then dried under identical conditions with a laboratory microdryer and an X-ray microtomograph was used to follow changes in sample dimensions. The effective moisture diffusivities of sludge and hydrochar samples were determined and the effect of process conditions on respective mean diffusivities evaluated using multiple linear regression. Based on the results the drying time of untreated sludge decreased from approximately 80 min to 37-59 min for sludge hydrochar. Drying of untreated sludge was governed by the falling rate period where drying flux decreased continuously as a function of sludge moisture content due to heat and mass transfer limitations and sample shrinkage. Hydrothermal treatment increased the drying flux of sludge hydrochar and decreased the effect of internal heat and mass transfer limitations and sample shrinkage especially at higher treatment temperatures. The determined effective moisture diffusivities of sludge and hydrochar increased as a function of decreasing moisture content and the mean diffusivity of untreated sludge (8.56·10⁻⁹ m² s⁻¹) and sludge hydrochar (12.7-27.5·10⁻⁹ m² s⁻¹) were found statistically different. The attained regression model indicated that treatment temperature governed the mean diffusivity of hydrochar, as the effects of NaOH and HCl were statistically insignificant. The attained results enabled prediction of sludge drying properties through mean moisture diffusivity based on hydrothermal treatment conditions. Copyright © 2015 Elsevier Ltd. All rights reserved.
Interval Exercise Therapy for Type 2 Diabetes.
Hamasaki, Hidetaka
2018-01-01
Regular exercise improves glycemic control and reduces cardiovascular risk and mortality in patients with type 2 diabetes. Continuous moderate- to high-intensity exercise has been recommended to manage type 2 diabetes; however, only approximately 30% of diabetic patients achieve the recommended levels of physical activity. The reasons for not engaging in regular exercise vary; however, one of the common reasons is lack of time. Recently, the effectiveness of short-duration interval exercise such as high-intensity interval training and interval walking has been observed. Thus, the author aimed to summarize the current knowledge and discuss recent literature regarding the effects of interval exercise therapy in type 2 diabetes. The author searched the English literature on interval training and type 2 diabetes using PubMed. A total of 8 studies met the criteria. Interval exercise is feasible and effective in obtaining glycemic control in patients with type 2 diabetes. It may also improve body composition, insulin sensitivity, aerobic capacity, and oxidative stress more effectively than continuous exercise. As a novel exercise therapy, interval training appears to be effective in managing type 2 diabetes. However, the safety and efficacy of this exercise modality in patients with progressed diabetic complications or a history of cardiovascular disease and in very elderly individuals remain unknown. Additionally, there is considerable heterogeneity in exercise interventions (intensity and duration) between clinical studies. Further studies are needed. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Oswald, William E.; Stewart, Aisha E. P.; Flanders, W. Dana; Kramer, Michael R.; Endeshaw, Tekola; Zerihun, Mulat; Melaku, Birhanu; Sata, Eshetu; Gessesse, Demelash; Teferi, Tesfaye; Tadesse, Zerihun; Guadie, Birhan; King, Jonathan D.; Emerson, Paul M.; Callahan, Elizabeth K.; Moe, Christine L.; Clasen, Thomas F.
2016-01-01
This study developed and validated a model for predicting the probability that communities in Amhara Region, Ethiopia, have low sanitation coverage, based on environmental and sociodemographic conditions. Community sanitation coverage was measured between 2011 and 2014 through trachoma control program evaluation surveys. Information on environmental and sociodemographic conditions was obtained from available data sources and linked with community data using a geographic information system. Logistic regression was used to identify predictors of low community sanitation coverage (< 20% versus ≥ 20%). The selected model was geographically and temporally validated. Model-predicted probabilities of low community sanitation coverage were mapped. Among 1,502 communities, 344 (22.90%) had coverage below 20%. The selected model included measures for high topsoil gravel content, an indicator for low-lying land, population density, altitude, and rainfall and had reasonable predictive discrimination (area under the curve = 0.75, 95% confidence interval = 0.72, 0.78). Measures of soil stability were strongly associated with low community sanitation coverage, controlling for community wealth and other factors. A model using available environmental and sociodemographic data predicted low community sanitation coverage for areas across Amhara Region with fair discrimination. This approach could assist sanitation and trachoma control programs, whether scaling up or operating in hyperendemic areas, in targeting vulnerable areas with additional activities or alternate technologies. PMID:27430547
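The area under the ROC curve reported above (0.75) has a simple probabilistic reading: it is the chance that a randomly chosen low-coverage community receives a higher predicted probability than a randomly chosen high-coverage one. A minimal rank-based sketch of that computation, on made-up scores:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs where the positive
    case scores higher, counting ties as half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 0.5 means the model discriminates no better than chance, 1.0 means perfect separation; 0.75 sits in the "fair discrimination" range the authors describe.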
Svensson, Fredrik; Aniceto, Natalia; Norinder, Ulf; Cortes-Ciriano, Isidro; Spjuth, Ola; Carlsson, Lars; Bender, Andreas
2018-05-29
Making predictions with an associated confidence is highly desirable as it facilitates decision making and resource prioritization. Conformal regression is a machine learning framework that allows the user to define the required confidence and delivers predictions that are guaranteed to be correct to the selected extent. In this study, we apply conformal regression to model molecular properties and bioactivity values and investigate different ways to scale the resultant prediction intervals to create as efficient (i.e., narrow) regressors as possible. Different algorithms to estimate the prediction uncertainty were used to normalize the prediction ranges, and the different approaches were evaluated on 29 publicly available data sets. Our results show that the most efficient conformal regressors are obtained when using the natural exponential of the ensemble standard deviation from the underlying random forest to scale the prediction intervals, but other approaches were almost as efficient. This approach afforded an average prediction range of 1.65 pIC50 units at the 80% confidence level when applied to bioactivity modeling. The choice of nonconformity function has a pronounced impact on the average prediction range with a difference of close to one log unit in bioactivity between the tightest and widest prediction range. Overall, conformal regression is a robust approach to generate bioactivity predictions with associated confidence.
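The core of inductive conformal regression is cheap to sketch: nonconformity scores on a calibration set are |error| divided by a per-example difficulty estimate, and the chosen quantile of those scores scales the interval for a new prediction. The sketch below uses the exponential scaling the abstract highlights; the calibration numbers in the test are hypothetical, and no random-forest machinery is included.

```python
import math

def conformal_interval(cal_errors, cal_spreads, y_hat, spread, confidence):
    """Inductive conformal regression with normalized nonconformity
    alpha_i = |error_i| / exp(spread_i), e.g. scaling by the exponential
    of an ensemble standard deviation.  Returns (lo, hi) for a new
    prediction y_hat with difficulty estimate `spread`."""
    alphas = sorted(abs(e) / math.exp(s) for e, s in zip(cal_errors, cal_spreads))
    # the ceil((n + 1) * confidence)-th smallest calibration score
    k = math.ceil(confidence * (len(alphas) + 1)) - 1
    q = alphas[min(k, len(alphas) - 1)]
    half = q * math.exp(spread)
    return y_hat - half, y_hat + half
```

The validity guarantee comes from the quantile step alone; the choice of difficulty estimate only affects efficiency, i.e. how narrow the intervals are on easy examples, which is exactly what the study compares across nonconformity functions.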
Tresoldi, Claudia; Bianchi, Elena; Pellegata, Alessandro Filippo; Dubini, Gabriele; Mantero, Sara
2017-08-01
The in vitro replication of physiological mechanical conditioning through bioreactors plays a crucial role in the development of functional Small-Caliber Tissue-Engineered Blood Vessels. An in silico scaffold-specific model under pulsatile perfusion provided by a bioreactor was implemented using a fluid-structure interaction (FSI) approach for viscoelastic tubular scaffolds (e.g. decellularized swine arteries, DSA). Results of working pressures, circumferential deformations, and wall shear stress on DSA fell within the desired physiological range and indicated the ability of this model to correctly predict the mechanical conditioning acting on the cells-scaffold system. Consequently, the FSI model allowed us to a priori define the stimulation pattern, driving in vitro physiological maturation of scaffolds, especially with viscoelastic properties.
Protein carbamylation predicts mortality in ESRD.
Koeth, Robert A; Kalantar-Zadeh, Kamyar; Wang, Zeneng; Fu, Xiaoming; Tang, W H Wilson; Hazen, Stanley L
2013-04-01
Traditional risk factors fail to explain the increased risk for cardiovascular morbidity and mortality in ESRD. Cyanate, a reactive electrophilic species in equilibrium with urea, posttranslationally modifies proteins through a process called carbamylation, which promotes atherosclerosis. The plasma level of protein-bound homocitrulline (PBHCit), which results from carbamylation, predicts major adverse cardiac events in patients with normal renal function, but whether this relationship is similar in ESRD is unknown. We quantified serum PBHCit in a cohort of 347 patients undergoing maintenance hemodialysis with 5 years of follow-up. Kaplan-Meier analyses revealed a significant association between elevated PBHCit and death (log-rank P<0.01). After adjustment for patient characteristics, laboratory values, and comorbid conditions, the risk for death among patients with PBHCit values in the highest tertile was more than double the risk among patients with values in the middle tertile (adjusted hazard ratio [HR], 2.4; 95% confidence interval [CI], 1.5-3.9) or the lowest tertile (adjusted HR, 2.3; 95% CI, 1.5-3.7). Including PBHCit significantly improved the multivariable model, with a net reclassification index of 14% (P<0.01). In summary, serum PBHCit, a footprint of protein carbamylation, predicts increased cardiovascular risk in patients with ESRD, supporting a mechanistic link among uremia, inflammation, and atherosclerosis.
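The Kaplan-Meier analysis mentioned above estimates survival from follow-up times with censoring by multiplying, at each observed death time, the fraction of at-risk patients who survive it. A minimal product-limit sketch with toy data (not the study's cohort):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimator.
    times: follow-up time per subject; events: 1 = death observed,
    0 = censored.  Returns (time, survival) pairs at each death time."""
    surv, curve = 1.0, []
    for t in sorted(set(times)):
        deaths = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        at_risk = sum(1 for ti in times if ti >= t)
        if deaths > 0:
            surv *= 1.0 - deaths / at_risk
            curve.append((t, surv))
    return curve
```

Comparing such curves across PBHCit tertiles, as the authors do, and testing the separation with a log-rank statistic is what yields the reported log-rank P < 0.01; the hazard ratios then come from a Cox model that adjusts for covariates.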
Segregation of Brain Structural Networks Supports Spatio-Temporal Predictive Processing.
Ciullo, Valentina; Vecchio, Daniela; Gili, Tommaso; Spalletta, Gianfranco; Piras, Federica
2018-01-01
The ability to generate probabilistic expectancies regarding when and where sensory stimuli will occur is critical to derive timely and accurate inferences about updating contexts. However, the existence of specialized neural networks for inferring predictive relationships between events is still debated. Using graph theoretical analysis applied to structural connectivity data, we tested the extent of brain connectivity properties associated with spatio-temporal predictive performance across 29 healthy subjects. Participants detected visual targets appearing at one out of three locations after one out of three intervals; expectations about stimulus location (spatial condition) or onset (temporal condition) were induced by valid or invalid symbolic cues. Connectivity matrices and centrality/segregation measures, expressing the relative importance of, and the local interactions among, specific cerebral areas with respect to the behavior under investigation, were calculated from whole-brain tractography and cortico-subcortical parcellation. Results: Response preparedness to cued stimuli relied on different structural connectivity networks for the temporal and spatial domains. Significant covariance was observed between centrality measures of regions within a subcortical-fronto-parietal-occipital network (comprising the left putamen, the right caudate nucleus, the left frontal operculum, the right inferior parietal cortex, the right paracentral lobule and the right superior occipital cortex) and the ability to respond after a short cue-target delay, suggesting that the local connectedness of such nodes plays a central role when the source of temporal expectation is explicit. When the potential for functional segregation was tested, we found highly clustered structural connectivity across the right superior frontal gyrus, the left middle inferior frontal gyrus and the left caudate nucleus as related to explicit temporal orienting. Conversely, when the interaction between explicit and
Intervality and coherence in complex networks
NASA Astrophysics Data System (ADS)
Domínguez-García, Virginia; Johnson, Samuel; Muñoz, Miguel A.
2016-06-01
Food webs—networks of predators and prey—have long been known to exhibit "intervality": species can generally be ordered along a single axis in such a way that the prey of any given predator tend to lie on unbroken compact intervals. Although the meaning of this axis—usually identified with a "niche" dimension—has remained a mystery, it is assumed to lie at the basis of the highly non-trivial structure of food webs. With this in mind, most trophic network modelling has for decades been based on assigning species a niche value by hand. However, we argue here that intervality should not be considered the cause but rather a consequence of food-web structure. First, analysing a set of 46 empirical food webs, we find that they also exhibit predator intervality: the predators of any given species are as likely to be contiguous as the prey are, but in a different ordering. Furthermore, this property is not exclusive of trophic networks: several networks of genes, neurons, metabolites, cellular machines, airports, and words are found to be approximately as interval as food webs. We go on to show that a simple model of food-web assembly which does not make use of a niche axis can nevertheless generate significant intervality. Therefore, the niche dimension (in the sense used for food-web modelling) could in fact be the consequence of other, more fundamental structural traits. We conclude that a new approach to food-web modelling is required for a deeper understanding of ecosystem assembly, structure, and function, and propose that certain topological features thought to be specific of food webs are in fact common to many complex networks.
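Intervality as defined above is easy to test once an ordering is fixed: each predator's prey must occupy one unbroken run of consecutive positions along the niche axis. A minimal check, with a made-up web (finding the ordering that maximizes intervality is the hard, combinatorial part and is not attempted here):

```python
def is_interval(web, ordering):
    """True if, under `ordering`, every predator's prey set is contiguous.
    web: dict mapping each predator to its set of prey species."""
    pos = {species: i for i, species in enumerate(ordering)}
    for prey in web.values():
        if not prey:
            continue
        idx = sorted(pos[p] for p in prey)
        # a set of positions is contiguous iff its span equals its size - 1
        if idx[-1] - idx[0] != len(idx) - 1:
            return False
    return True
```

Swapping the roles of rows and columns in the same check (each species' set of predators, under a possibly different ordering) gives the "predator intervality" the authors measure alongside the usual prey intervality.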
NASA Astrophysics Data System (ADS)
Fu, Chao; Ren, Xingmin; Yang, Yongfeng; Xia, Yebao; Deng, Wangqun
2018-07-01
A non-intrusive interval precise integration method (IPIM) is proposed in this paper to analyze the transient unbalance response of uncertain rotor systems. The transfer matrix method (TMM) is used to derive the deterministic equations of motion of a hollow-shaft overhung rotor. The uncertain transient dynamic problem is solved by combing the Chebyshev approximation theory with the modified precise integration method (PIM). Transient response bounds are calculated by interval arithmetic of the expansion coefficients. Theoretical error analysis of the proposed method is provided briefly, and its accuracy is further validated by comparing with the scanning method in simulations. Numerical results show that the IPIM can keep good accuracy in vibration prediction of the start-up transient process. Furthermore, the proposed method can also provide theoretical guidance to other transient dynamic mechanical systems with uncertainties.
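The bound-computation step described above exploits a basic property of Chebyshev expansions: since every Chebyshev polynomial satisfies |T_j(x)| ≤ 1 on [-1, 1], a response expanded as Σ c_j T_j(x) is enclosed by c_0 ± Σ_{j≥1} |c_j|. The sketch below illustrates only that enclosure idea on a scalar function; it is not the paper's IPIM, which couples this with the precise integration method for full rotor dynamics.

```python
import math

def chebyshev_coeffs(f, n):
    """Coefficients of the degree-(n-1) Chebyshev interpolant of f
    on [-1, 1], sampled at the Chebyshev-Gauss nodes."""
    nodes = [math.cos(math.pi * (k + 0.5) / n) for k in range(n)]
    fx = [f(x) for x in nodes]
    coeffs = []
    for j in range(n):
        c = 2.0 / n * sum(fx[k] * math.cos(math.pi * j * (k + 0.5) / n)
                          for k in range(n))
        coeffs.append(c)
    coeffs[0] /= 2.0  # standard halving of the leading coefficient
    return coeffs

def response_bounds(coeffs):
    """Enclosure of sum_j c_j * T_j(x) over x in [-1, 1], using |T_j| <= 1."""
    spread = sum(abs(c) for c in coeffs[1:])
    return coeffs[0] - spread, coeffs[0] + spread
```

For a response that depends linearly on the uncertain parameter the enclosure is exact; for nonlinear dependence it is conservative, which is why the authors validate the IPIM bounds against a scanning method.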
Temporal Prediction Errors Affect Short-Term Memory Scanning Response Time.
Limongi, Roberto; Silva, Angélica M
2016-11-01
The Sternberg short-term memory scanning task has been used to unveil cognitive operations involved in time perception. Participants produce time intervals during the task, and the researcher explores how task performance affects interval production - where time estimation error is the dependent variable of interest. The perspective of predictive behavior regards time estimation error as a temporal prediction error (PE), an independent variable that controls cognition, behavior, and learning. Based on this perspective, we investigated whether temporal PEs affect short-term memory scanning. Participants performed temporal predictions while they maintained information in memory. Model inference revealed that PEs affected memory scanning response time independently of the memory-set size effect. We discuss the results within the context of formal and mechanistic models of short-term memory scanning and predictive coding, a Bayes-based theory of brain function. We state the hypothesis that our finding could be associated with weak frontostriatal connections and weak striatal activity.
Estimation of TOA based MUSIC algorithm and cross correlation algorithm of appropriate interval
NASA Astrophysics Data System (ADS)
Lin, Wei; Liu, Jun; Zhou, Yineng; Huang, Jiyan
2017-03-01
Localization of mobile stations (MS) has now gained considerable attention due to its wide applications in military, environmental, health and commercial systems. Phase angle and encoded data of the MSK system model are two critical parameters in the time-of-arrival (TOA) localization technique; nevertheless, precise values of the phase angle and encoded data are not easy to obtain in general. In order to reflect the actual situation, we should consider the condition that the phase angle and encoded data are unknown. In this paper, a novel TOA localization method, which combines the MUSIC algorithm and the cross correlation algorithm over an appropriate interval, is proposed. Simulations show that the proposed method has better performance than the MUSIC algorithm and the cross correlation algorithm applied over the whole interval.
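The cross correlation half of such a scheme estimates the TOA as the lag at which the received signal best matches a known reference waveform. A minimal discrete-time sketch with toy samples follows; the MUSIC subspace step and the interval-restriction logic of the paper are not reproduced.

```python
def toa_by_cross_correlation(reference, received, sample_rate):
    """Estimate time of arrival as the lag (in samples, converted to
    seconds) that maximizes the cross correlation between a known
    reference waveform and the received signal."""
    n_lags = len(received) - len(reference) + 1
    def corr(lag):
        return sum(r * s for r, s in zip(reference, received[lag:lag + len(reference)]))
    best_lag = max(range(n_lags), key=corr)
    return best_lag / sample_rate
```

Restricting the search to an interval around a coarse initial estimate, rather than scanning all lags, is what the title's "appropriate interval" refers to: it suppresses spurious correlation peaks far from the true arrival.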
Interval and Contour Processing in Autism
ERIC Educational Resources Information Center
Heaton, Pamela
2005-01-01
High functioning children with autism and age and intelligence matched controls participated in experiments testing perception of pitch intervals and musical contours. The finding from the interval study showed superior detection of pitch direction over small pitch distances in the autism group. On the test of contour discrimination no group…
Wilkerson, Gary B; Colston, Marisa A
2015-06-01
Researchers have identified high exposure to game conditions, low back dysfunction, and poor endurance of the core musculature as strong predictors for the occurrence of sprains and strains among collegiate football players. To refine a previously developed injury-prediction model through analysis of 3 consecutive seasons of data. Cohort study. National Collegiate Athletic Association Division I Football Championship Subdivision football program. For 3 consecutive years, all 152 team members (age = 19.7 ± 1.5 years, height = 1.84 ± 0.08 m, mass = 101.08 ± 19.28 kg) presented for a mandatory physical examination on the day before initiation of preseason practice sessions. Associations between preseason measurements and the subsequent occurrence of a core or lower extremity sprain or strain were established for 256 player-seasons of data. We used receiver operating characteristic analysis to identify optimal cut points for dichotomous categorizations of cases as high risk or low risk. Both logistic regression and Cox regression analyses were used to identify a multivariable injury-prediction model with optimal discriminatory power. Exceptionally good discrimination between injured and uninjured cases was found for a 3-factor prediction model that included equal to or greater than 1 game as a starter, Oswestry Disability Index score equal to or greater than 4, and poor wall-sit-hold performance. The existence of at least 2 of the 3 risk factors demonstrated 56% sensitivity, 80% specificity, an odds ratio of 5.28 (90% confidence interval = 3.31, 8.44), and a hazard ratio of 2.97 (90% confidence interval = 2.14, 4.12). High exposure to game conditions was the dominant injury risk factor for collegiate football players, but a surprisingly mild degree of low back dysfunction and poor core-muscle endurance appeared to be important modifiable risk factors that should be identified and addressed before participation.
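Receiver operating characteristic analysis of the kind used above picks, for each continuous measurement, the cut point that best separates injured from uninjured cases; a common criterion is Youden's J (sensitivity + specificity - 1). A minimal sketch with hypothetical scores and labels (the study's actual cut points, e.g. the Oswestry threshold of 4, came from its own data):

```python
def youden_cut_point(scores, labels):
    """Choose the dichotomizing threshold maximizing Youden's J
    (sensitivity + specificity - 1); cases scoring >= threshold
    are classified high risk.  labels: 1 = injured, 0 = uninjured."""
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        j = tp / (tp + fn) + tn / (tn + fp) - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j
```

Once each predictor is dichotomized this way, counting how many of the resulting risk factors a player exhibits (here, at least 2 of 3) yields the sensitivity, specificity, and odds ratio the abstract reports.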
Intact interval timing in circadian CLOCK mutants.
Cordes, Sara; Gallistel, C R
2008-08-28
While progress has been made in determining the molecular basis for the circadian clock, the mechanism by which mammalian brains time intervals measured in seconds to minutes remains a mystery. An obvious question is whether the interval-timing mechanism shares molecular machinery with the circadian timing mechanism. In the current study, we trained circadian CLOCK +/- and -/- mutant male mice in a peak-interval procedure with 10- and 20-s criteria. The mutant mice were more active than their wild-type littermates, but there were no reliable deficits in the accuracy or precision of their timing as compared with wild-type littermates. This suggests that expression of the CLOCK protein is not necessary for normal interval timing.