Predicting birth weight with conditionally linear transformation models.
Möst, Lisa; Schmid, Matthias; Faschingbauer, Florian; Hothorn, Torsten
2016-12-01
Low and high birth weight (BW) are important risk factors for neonatal morbidity and mortality. Gynecologists must therefore accurately predict BW before delivery. Most prediction formulas for BW are based on prenatal ultrasound measurements carried out within one week prior to birth. Although successfully used in clinical practice, these formulas focus on point predictions of BW but do not systematically quantify uncertainty of the predictions, i.e. they result in estimates of the conditional mean of BW but do not deliver prediction intervals. To overcome this problem, we introduce conditionally linear transformation models (CLTMs) to predict BW. Instead of focusing only on the conditional mean, CLTMs model the whole conditional distribution function of BW given prenatal ultrasound parameters. Consequently, the CLTM approach delivers both point predictions of BW and fetus-specific prediction intervals. Prediction intervals constitute an easy-to-interpret measure of prediction accuracy and allow identification of fetuses subject to high prediction uncertainty. Using a data set of 8712 deliveries at the Perinatal Centre at the University Clinic Erlangen (Germany), we analyzed variants of CLTMs and compared them to standard linear regression estimation techniques used in the past and to quantile regression approaches. The best-performing CLTM variant was competitive with quantile regression and linear regression approaches in terms of conditional coverage and average length of the prediction intervals. We propose that CLTMs be used because they are able to account for possible heteroscedasticity, kurtosis, and skewness of the distribution of BWs. © The Author(s) 2014.
Bootstrap Prediction Intervals in Non-Parametric Regression with Applications to Anomaly Detection
NASA Technical Reports Server (NTRS)
Kumar, Sricharan; Srivistava, Ashok N.
2012-01-01
Prediction intervals quantify the range within which the output of a regression model can be expected to fall. These intervals can then be used to determine whether an observed output is anomalous, conditioned on the input. In this paper, a procedure for determining prediction intervals for outputs of nonparametric regression models using bootstrap methods is proposed. Bootstrap methods allow a non-parametric approach to computing prediction intervals, with no specific assumptions about the sampling distribution of the noise or the data. The asymptotic fidelity of the proposed prediction intervals is proved theoretically, and their validity is illustrated via simulations. Finally, the bootstrap prediction intervals are applied to the problem of anomaly detection on aviation data.
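The residual-bootstrap recipe the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the Nadaraya-Watson smoother, the bandwidth, and the toy data are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def nw_smooth(x_train, y_train, x_eval, bandwidth=0.3):
    # Nadaraya-Watson kernel regression estimate at the points x_eval
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / bandwidth) ** 2)
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

def bootstrap_pi(x, y, x_new, n_boot=500, alpha=0.1, bandwidth=0.3):
    # Residual-bootstrap prediction interval around the kernel regression fit
    fit = nw_smooth(x, y, x, bandwidth)
    resid = y - fit
    preds = np.empty((n_boot, len(x_new)))
    for b in range(n_boot):
        y_b = fit + rng.choice(resid, size=len(y))   # resampled training set
        noise = rng.choice(resid, size=len(x_new))   # draw for the future noise
        preds[b] = nw_smooth(x, y_b, x_new, bandwidth) + noise
    return np.quantile(preds, [alpha / 2, 1 - alpha / 2], axis=0)

x = rng.uniform(0, 3, 200)
y = np.sin(x) + rng.normal(0, 0.2, 200)
lo, hi = bootstrap_pi(x, y, np.array([1.5]))
print(lo[0], hi[0])  # a 90% interval around sin(1.5)
```

Refitting on resampled residuals captures estimation uncertainty, while the extra residual draw accounts for future noise; the percentile points of the bootstrap predictions give the interval without distributional assumptions.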
Prediction of Malaysian monthly GDP
NASA Astrophysics Data System (ADS)
Hin, Pooi Ah; Ching, Soo Huei; Yeing, Pan Wei
2015-12-01
The paper attempts to use a method based on the multivariate power-normal distribution to predict the Malaysian Gross Domestic Product (GDP) for the next month. Letting r(t) be the vector consisting of the month-t values of m selected macroeconomic variables and GDP, we model the month-(t+1) GDP to be dependent on the present and l-1 past values r(t), r(t-1), …, r(t-l+1) via a conditional distribution which is derived from a [(m+1)l+1]-dimensional power-normal distribution. The 100(α/2)% and 100(1-α/2)% points of the conditional distribution may be used to form an out-of-sample prediction interval. This interval, together with the mean of the conditional distribution, may be used to predict the month-(t+1) GDP. The mean absolute percentage error (MAPE), estimated coverage probability, and average length of the prediction interval are used as the criteria for selecting the suitable lag value l-1 and the subset from a pool of 17 macroeconomic variables. It is found that the relatively better models are those with 2 ≤ l ≤ 3 involving one or two of the macroeconomic variables given by Market Indicative Yield, Oil Prices, Exchange Rate and Import Trade.
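The interval construction described above can be illustrated with a toy example, substituting an ordinary Gaussian conditional distribution for the paper's power-normal one; the data, lag choice, and single macro variable below are all invented, and in-sample coverage is shown for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented stand-in data: monthly GDP level and one macro variable, lag l = 2
T = 300
macro = rng.normal(0.0, 1.0, T)
gdp = 100.0 + np.cumsum(rng.normal(0.2, 1.0, T))

def lagged_design(gdp, macro, l=2):
    # each row holds the present and l-1 past values of (GDP, macro), plus intercept
    rows, targets = [], []
    for t in range(l - 1, len(gdp) - 1):
        rows.append(np.r_[gdp[t - l + 1:t + 1], macro[t - l + 1:t + 1], 1.0])
        targets.append(gdp[t + 1])
    return np.array(rows), np.array(targets)

X, y = lagged_design(gdp, macro)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
mean = X @ beta
sigma = (y - mean).std(ddof=X.shape[1])

# 90% prediction interval from the 5% and 95% points of the conditional
# distribution (Gaussian here, power-normal in the paper)
z = 1.6449
lo, hi = mean - z * sigma, mean + z * sigma
coverage = np.mean((y >= lo) & (y <= hi))
print(round(coverage, 2))
```

Estimated coverage probability and average interval length computed this way are exactly the model-selection criteria the abstract describes.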
Steiner, Genevieve Z.; Barry, Robert J.; Gonsalvez, Craig J.
2016-01-01
In oddball tasks, increasing the time between stimuli within a particular condition (target-to-target interval, TTI; nontarget-to-nontarget interval, NNI) systematically enhances N1, P2, and P300 event-related potential (ERP) component amplitudes. This study examined the mechanism underpinning these effects in ERP components recorded from 28 adults who completed a conventional three-tone oddball task. Bivariate correlations, partial correlations and multiple regression explored component changes due to preceding ERP component amplitudes and intervals found within the stimulus series, rather than constraining the task with experimentally constructed intervals, which has been adequately explored in prior studies. Multiple regression showed that for targets, N1 and TTI predicted N2, TTI predicted P3a and P3b, and Processing Negativity (PN), P3b, and TTI predicted reaction time. For rare nontargets, P1 predicted N1, NNI predicted N2, and N1 predicted Slow Wave (SW). Findings show that the mechanism is operating on separate stages of stimulus-processing, suggestive of either increased activation within a number of stimulus-specific pathways, or very long component generator recovery cycles. These results demonstrate the extent to which matching-stimulus intervals influence ERP component amplitudes and behavior in a three-tone oddball task, and should be taken into account when designing similar studies. PMID:27445774
Moustafa, Ahmed A.; Wufong, Ella; Servatius, Richard J.; Pang, Kevin C. H.; Gluck, Mark A.; Myers, Catherine E.
2013-01-01
A recurrent-network model provides a unified account of the hippocampal region in mediating the representation of temporal information in classical eyeblink conditioning. Much empirical research is consistent with a general conclusion that delay conditioning (in which the conditioned stimulus CS and unconditioned stimulus US overlap and co-terminate) is independent of the hippocampal system, while trace conditioning (in which the CS terminates before US onset) depends on the hippocampus. However, recent studies show that, under some circumstances, delay conditioning can be hippocampal-dependent and trace conditioning can be spared following hippocampal lesion. Here, we present an extension of our prior trial-level models of hippocampal function and stimulus representation that can explain these findings within a unified framework. Specifically, the current model includes adaptive recurrent collateral connections that aid in the representation of intra-trial temporal information. With this model, as in our prior models, we argue that the hippocampus is not specialized for conditioned response timing, but rather is a general-purpose system that learns to predict the next state of all stimuli given the current state of variables encoded by activity in recurrent collaterals. As such, the model correctly predicts that hippocampal involvement in classical conditioning should be critical not only when there is an intervening trace interval, but also when there is a long delay between CS onset and US onset. Our model simulates empirical data from many variants of classical conditioning, including delay and trace paradigms in which the length of the CS, the inter-stimulus interval, or the trace interval is varied. Finally, we discuss model limitations, future directions, and several novel empirical predictions of this temporal processing model of hippocampal function and learning. PMID:23178699
Selective Attention in Pigeon Temporal Discrimination.
Subramaniam, Shrinidhi; Kyonka, Elizabeth
2017-07-27
Cues can vary in how informative they are about when specific outcomes, such as food availability, will occur. This study was an experimental investigation of the functional relation between cue informativeness and temporal discrimination in a peak-interval (PI) procedure. Each session consisted of fixed-interval (FI) 2-s and 4-s schedules of food and occasional 12-s PI trials during which pecks had no programmed consequences. Across conditions, the phi (ϕ) correlation between key light color and FI schedule value was manipulated. Red and green key lights signaled the onset of either or both FI schedules. Different colors were either predictive (ϕ = 1), moderately predictive (ϕ = 0.2-0.8), or not predictive (ϕ = 0) of a specific FI schedule. This study tested the hypothesis that temporal discrimination is a function of the momentary conditional probability of food; that is, that pigeons peck most at either 2 s or 4 s when ϕ = 1 and at both intervals when ϕ < 1. Response distributions were bimodal Gaussian curves; distributions from red- and green-key PI trials converged when ϕ ≤ 0.6. Peak times estimated by summed Gaussian functions, averaged across conditions and pigeons, were 1.85 s and 3.87 s; however, pigeons did not always maximize the momentary probability of food. When key light color was highly correlated with FI schedules (ϕ ≥ 0.6), estimates of peak times indicated that temporal discrimination accuracy was reduced at the unlikely interval, but not the likely interval. The mechanism of this reduced temporal discrimination accuracy could be interpreted as an attentional process.
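Fitting a summed (bimodal) Gaussian function to peak-interval response rates, as described above, might look like the sketch below on synthetic data; the functional form, parameter values, and starting guesses are assumptions, not the authors' code.

```python
import numpy as np
from scipy.optimize import curve_fit

def summed_gaussians(t, a1, m1, s1, a2, m2, s2):
    # sum of two Gaussian response-rate curves, one per FI schedule value
    return (a1 * np.exp(-0.5 * ((t - m1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((t - m2) / s2) ** 2))

rng = np.random.default_rng(4)
t = np.linspace(0, 12, 121)                     # s into the 12-s PI trial
true_rate = summed_gaussians(t, 30, 2.0, 0.6, 20, 4.0, 1.0)
rate = true_rate + rng.normal(0, 1.0, t.size)   # noisy synthetic response rates

p0 = [25, 2, 1, 15, 4, 1]                       # rough starting values
popt, _ = curve_fit(summed_gaussians, t, rate, p0=p0)
print(round(popt[1], 2), round(popt[4], 2))     # estimated peak times near 2 and 4 s
```

The two fitted means play the role of the peak-time estimates reported in the abstract (1.85 s and 3.87 s), and the relative amplitudes index how responding is divided between the two intervals.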
The effects of context and musical training on auditory temporal-interval discrimination.
Banai, Karen; Fisher, Shirley; Ganot, Ron
2012-02-01
Nonsensory factors such as stimulus context and musical experience are known to influence auditory frequency discrimination, but whether the context effect extends to auditory temporal processing remains unknown, as does whether individual experiences such as musical training alter the context effect. The goal of the present study was therefore to investigate the effects of stimulus context and musical experience on auditory temporal-interval discrimination. In experiment 1, temporal-interval discrimination was compared between fixed context conditions, in which a single base temporal interval was presented repeatedly across all trials, and variable context conditions, in which one of two base intervals was randomly presented on each trial. Discrimination was significantly better in the fixed than in the variable context conditions. In experiment 2, temporal discrimination thresholds of musicians and non-musicians were compared across three conditions: a fixed context condition in which the target interval was presented repeatedly across trials, and two variable context conditions differing in the frequencies of the tones marking the temporal intervals. Musicians outperformed non-musicians in all three conditions, but the effects of context were similar for the two groups. Overall, it appears that, like frequency discrimination, temporal-interval discrimination benefits from having a fixed reference. Musical experience, while improving performance, did not alter the context effect, suggesting that the improved discrimination skills of musicians are probably not an outcome of more sensitive contextual facilitation or predictive coding mechanisms. Copyright © 2011 Elsevier B.V. All rights reserved.
Prediction of future asset prices
NASA Astrophysics Data System (ADS)
Seong, Ng Yew; Hin, Pooi Ah; Ching, Soo Huei
2014-12-01
This paper attempts to incorporate trading volume as an additional predictor for predicting asset prices. Denoting by r(t) the vector consisting of the time-t values of the trading volume and price of a given asset, we model the time-(t+1) asset price to be dependent on the present and l-1 past values r(t), r(t-1), …, r(t-l+1) via a conditional distribution which is derived from a (2l+1)-dimensional power-normal distribution. A prediction interval based on the 100(α/2)% and 100(1-α/2)% points of the conditional distribution is then obtained. By examining the average lengths of the prediction intervals found by using the composite indices of the Malaysian stock market for the period 2008 to 2013, we found that the value 2 appears to be a good choice for l. With the omission of the trading volume from the vector r(t), the corresponding prediction interval exhibits a slightly longer average length, showing that it might be desirable to keep trading volume as a predictor. From the above conditional distribution, the probability that the time-(t+1) asset price will be larger than the time-t asset price is next computed. When this probability differs from 0 (or 1) by less than 0.03, the observed time-(t+1) change in price tends to be negative (or positive). The probability thus has good potential as a market indicator in technical analysis.
Tran, Mark W; Weiland, Tracey J; Phillips, Georgina A
2015-01-01
Psychosocial factors such as marital status (odds ratio, 3.52; 95% confidence interval, 1.43-8.69; P = .006) and nonclinical factors such as outpatient nonattendances (odds ratio, 2.52; 95% confidence interval, 1.22-5.23; P = .013) and referrals made (odds ratio, 1.20; 95% confidence interval, 1.06-1.35; P = .003) predict hospital utilization for patients in a chronic disease management program. Along with optimizing patients' clinical condition according to prescribed medical guidelines and supporting patient self-management, addressing psychosocial and nonclinical issues is important in attempting to avoid hospital utilization for people with chronic illnesses.
NASA Astrophysics Data System (ADS)
Kumar, Girish; Jain, Vipul; Gandhi, O. P.
2018-03-01
Maintenance helps to extend equipment life by improving its condition and avoiding catastrophic failures. An appropriate model or mechanism is thus needed to quantify system availability vis-à-vis a given maintenance strategy, to assist decision-making for optimal utilization of maintenance resources. This paper deals with semi-Markov process (SMP) modeling for steady-state availability analysis of mechanical systems that follow condition-based maintenance (CBM), and with evaluation of the optimal condition-monitoring interval. The developed SMP model is solved using a two-stage analytical approach for steady-state availability analysis of the system, and the CBM interval that maximizes system availability is determined using a genetic algorithm. The main contribution of the paper is a predictive tool for system availability that helps in deciding the optimal CBM policy. The proposed methodology is demonstrated for a centrifugal pump.
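A rough illustration of the trade-off being optimized: the sketch below estimates steady-state availability as a function of the monitoring interval for an invented delay-time-style degradation model, using Monte Carlo simulation and a grid search in place of the paper's analytical two-stage SMP solution and genetic algorithm. All rates and repair times are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Invented delay-time model: a hidden defect arises at rate LAM_D and, if not
# repaired, causes failure after an exponential delay with rate LAM_F.
# Inspections every tau hours catch a present defect; inspections, preventive
# repair, and corrective repair each cost downtime.
LAM_D, LAM_F = 0.002, 0.01            # per hour
T_MON, T_PM, T_CR = 2.0, 10.0, 100.0  # downtimes (h)

def availability(tau, n=200_000):
    t_defect = rng.exponential(1 / LAM_D, n)
    t_fail = t_defect + rng.exponential(1 / LAM_F, n)
    insp = tau * np.ceil(t_defect / tau)       # first inspection after the defect
    clean = np.floor(t_defect / tau)           # inspections before the defect
    fail_first = t_fail < insp
    up = np.where(fail_first, t_fail, insp)
    down = clean * T_MON + np.where(fail_first, T_CR, T_PM)
    return up.sum() / (up.sum() + down.sum())

taus = np.linspace(20, 1000, 25)
avail = [availability(t) for t in taus]
best_tau = taus[int(np.argmax(avail))]
print(round(best_tau, 1), round(max(avail), 3))
```

Frequent inspection wastes uptime on monitoring; infrequent inspection lets defects turn into costly failures, so availability peaks at an interior monitoring interval, which is the quantity the paper's genetic algorithm searches for.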
Bioinactivation: Software for modelling dynamic microbial inactivation.
Garre, Alberto; Fernández, Pablo S; Lindqvist, Roland; Egea, Jose A
2017-03-01
This contribution presents the bioinactivation software, which implements functions for the modelling of isothermal and non-isothermal microbial inactivation. This software offers features such as user-friendliness, modelling of dynamic conditions, possibility to choose the fitting algorithm and generation of prediction intervals. The software is offered in two different formats: Bioinactivation core and Bioinactivation SE. Bioinactivation core is a package for the R programming language, which includes features for the generation of predictions and for the fitting of models to inactivation experiments using non-linear regression or a Markov Chain Monte Carlo algorithm (MCMC). The calculations are based on inactivation models common in academia and industry (Bigelow, Peleg, Mafart and Geeraerd). Bioinactivation SE supplies a user-friendly interface to selected functions of Bioinactivation core, namely the model fitting of non-isothermal experiments and the generation of prediction intervals. The capabilities of bioinactivation are presented in this paper through a case study, modelling the non-isothermal inactivation of Bacillus sporothermodurans. This study has provided a full characterization of the response of the bacteria to dynamic temperature conditions, including confidence intervals for the model parameters and a prediction interval of the survivor curve. We conclude that the MCMC algorithm produces a better characterization of the biological uncertainty and variability than non-linear regression. The bioinactivation software can be relevant to the food and pharmaceutical industry, as well as to regulatory agencies, as part of a (quantitative) microbial risk assessment. Copyright © 2017 Elsevier Ltd. All rights reserved.
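Bioinactivation itself is an R package; as a language-neutral illustration, the Bigelow model it implements can be sketched in Python for a dynamic (non-isothermal) temperature profile. The parameter values and come-up profile below are invented.

```python
import numpy as np

# Bigelow model: D(T) = D_ref * 10 ** ((T_ref - T) / z)
# Dynamic survivor curve: d(log10 N)/dt = -1 / D(T(t))
def survivors(times, temp_profile, log_n0, d_ref, z, t_ref=120.0):
    # simple Euler integration of the dynamic survivor curve
    logn = np.empty_like(times)
    logn[0] = log_n0
    for i in range(1, len(times)):
        d_val = d_ref * 10 ** ((t_ref - temp_profile[i]) / z)
        logn[i] = logn[i - 1] - (times[i] - times[i - 1]) / d_val
    return logn

times = np.linspace(0, 20, 201)   # min
temps = 95.0 + times              # linear come-up, degrees C
curve = survivors(times, temps, log_n0=6.0, d_ref=1.0, z=10.0)
print(round(curve[-1], 2))        # surviving log10 count at the end of the profile
```

The package's fitting routines invert this forward model: non-linear regression or MCMC adjusts D_ref and z until the simulated survivor curve matches the observed counts, with the MCMC posterior supplying the prediction intervals the abstract mentions.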
NASA Astrophysics Data System (ADS)
Rustic, G. T.; Polissar, P. J.; Ravelo, A. C.; White, S. M.
2017-12-01
The El Niño Southern Oscillation (ENSO) plays a dominant role in Earth's climate variability. Paleoceanographic evidence suggests that ENSO has changed in the past, and these changes have been linked to large-scale climatic shifts. While a close relationship between ENSO evolution and climate boundary conditions has been predicted, testing these predictions remains challenging. These climate boundary conditions, including insolation, the mean surface temperature gradient of the tropical Pacific, global ice volume, and tropical thermocline depth, often co-vary and may work together to suppress or enhance the ocean-atmosphere feedbacks that drive ENSO variability. Furthermore, suitable paleo-archives spanning multiple climate states are sparse. We have aimed to test ENSO response to changing climate boundary conditions by generating new reconstructions of mixed-layer variability from sedimentary archives spanning the last three glacial-interglacial cycles from the Central Tropical Pacific Line Islands, where El Niño is strongly expressed. We analyzed Mg/Ca ratios from individual foraminifera to reconstruct mixed-layer variability at discrete time intervals representing combinations of climatic boundary conditions from the middle Holocene to Marine Isotope Stage (MIS) 8. We observe changes in the mixed-layer temperature variability during MIS 5 and during the previous interglacial (MIS 7) showing significant reductions in ENSO amplitude. Differences in variability during glacial and interglacial intervals are also observed. Additionally, we reconstructed mixed-layer and thermocline conditions using multi-species Mg/Ca and stable isotope measurements to more fully characterize the state of the Central Tropical Pacific during these intervals. 
These reconstructions provide us with a unique view of Central Tropical Pacific variability and water-column structure at discrete intervals under varying boundary climate conditions with which to assess factors that shape ENSO variability.
Perceptual Responses to High- and Moderate-Intensity Interval Exercise in Adolescents.
Malik, Adam A; Williams, Craig A; Weston, Kathryn L; Barker, Alan R
2018-05-01
Continuous high-intensity exercise is proposed to evoke unpleasant sensations, as predicted by the dual-mode theory, and may negatively impact future exercise adherence. Previous studies support unpleasant affective responses during continuous high-intensity exercise, but the affective experience during high-intensity interval exercise (HIIE), involving brief bursts of high-intensity exercise separated by low-intensity activity, is poorly understood in adolescents. We examined the acute affective, enjoyment, and perceived exertion responses to HIIE compared with moderate-intensity interval exercise (MIIE) in adolescents. Thirteen adolescent boys (mean ± SD age, 14.0 ± 0.5 yr) performed two counterbalanced exercise conditions: 1) HIIE: 8 × 1-min work intervals at 90% maximal aerobic speed; and 2) MIIE: between 9 and 12 × 1-min work intervals at 90% ventilatory threshold, where the number of intervals performed was distance-matched to HIIE. HIIE and MIIE work intervals were interspersed with 75 s of active recovery at 4 km·h⁻¹. Affect, enjoyment, and RPE were recorded before, during, and after exercise. Affect responses declined in both conditions, but the fall was greater in HIIE than MIIE (P < 0.025; effect size [ES], 0.64 to 0.81). Affect remained positive at the end of the work intervals in both conditions (MIIE, 2.62 ± 1.50; HIIE, 1.15 ± 2.08 on the feeling scale). No enjoyment differences were evident during HIIE and MIIE (P = 0.32), but HIIE elicited greater postexercise enjoyment than MIIE (P = 0.01, ES = 0.47). RPE was significantly higher during HIIE than MIIE across all work intervals (all P < 0.03, ES > 0.64). Despite elevated RPE, HIIE did not elicit the prominent unpleasant feelings predicted by the dual-mode theory and was associated with greater postexercise enjoyment than MIIE. This study demonstrates the feasibility of HIIE as an alternative form of physical activity in adolescents.
Tanaka, Tomohiro; Nishida, Satoshi
2015-01-01
The neuronal processes that underlie visual search can be divided into two stages: target discrimination and saccade preparation/generation. This predicts that the duration of the prediscrimination stage varies with search difficulty across stimulus conditions, whereas the duration of the postdiscrimination stage is stimulus invariant. However, recent studies have suggested that the length of the postdiscrimination interval changes with different stimulus conditions. To address whether and how the visual stimulus affects the postdiscrimination interval, we recorded single-neuron activity in the lateral intraparietal area (LIP) while monkeys (Macaca fuscata) performed a color-singleton search involving four stimulus conditions that differed in luminance (Bright vs. Dim) and target-distractor color similarity (Easy vs. Difficult). We specifically focused on comparing activity between the Bright-Difficult and Dim-Easy conditions, in which the visual stimuli were considerably different but the mean reaction times were indistinguishable. This allowed us to examine neuronal activity when the difference in search speed between stimulus conditions was minimal. We found that not only prediscrimination but also postdiscrimination intervals varied across stimulus conditions: the postdiscrimination interval was longer in the Dim-Easy condition than in the Bright-Difficult condition. Further analysis revealed that the postdiscrimination interval might vary with stimulus luminance. A computer simulation using an accumulation-to-threshold model suggested that the luminance-related difference in visual response strength at discrimination time could be the cause of the different postdiscrimination intervals. PMID:25995344
Detection in fixed and random noise in foveal and parafoveal vision explained by template learning
NASA Technical Reports Server (NTRS)
Beard, B. L.; Ahumada, A. J. Jr; Watson, A. B. (Principal Investigator)
1999-01-01
Foveal and parafoveal contrast detection thresholds for Gabor and checkerboard targets were measured in white noise by means of a two-interval forced-choice paradigm. Two white-noise conditions were used: fixed and twin. In the fixed noise condition, a single noise sample was presented in both intervals of all trials. In the twin noise condition, the same noise sample was used in the two intervals of a trial, but a new sample was generated for each trial. Fixed noise conditions usually resulted in lower thresholds than twin noise. Template learning models are presented that attribute this advantage of fixed over twin noise either to fixed memory templates that reduce uncertainty by incorporating the noise, or to additional variability in the twin noise condition introduced by the learning process itself. Quantitative predictions of the template learning process show that it contributes to the accelerating nonlinear increase in performance with signal amplitude at low signal-to-noise ratios.
Excitation-based and informational masking of a tonal signal in a four-tone masker.
Leibold, Lori J; Hitchens, Jack J; Buss, Emily; Neff, Donna L
2010-04-01
This study examined contributions of peripheral excitation and informational masking to the variability in masking effectiveness observed across samples of multi-tonal maskers. Detection thresholds were measured for a 1000-Hz signal presented simultaneously with each of 25 four-tone masker samples. Using a two-interval, forced-choice adaptive task, thresholds were measured for ten listeners with each sample fixed throughout trial blocks. Average thresholds differed by as much as 26 dB across samples. An excitation-based model of partial loudness [Moore, B. C. J. et al. (1997). J. Audio Eng. Soc. 45, 224-237] was used to predict thresholds. These predictions accounted for a significant portion of the variance in the data of several listeners, but no relation between model and data was observed for many others. Moreover, substantial individual differences, on the order of 41 dB, were observed for some maskers. The largest individual differences were found for maskers predicted to produce minimal excitation-based masking. In subsequent conditions, one of five maskers was randomly presented in each interval. The difference in performance for samples with low versus high predicted thresholds was reduced in random compared with fixed conditions. These findings are consistent with a trading relation whereby informational masking is largest in conditions where excitation-based masking is smallest.
A dynamic model for plant growth: validation study under changing temperatures
NASA Technical Reports Server (NTRS)
Wann, M.; Raper, C. D. Jr. (Principal Investigator)
1984-01-01
A dynamic simulation model to describe vegetative growth of plants, for which some functions and parameter values have been estimated previously by optimization search techniques and numerical experimentation based on data from constant temperature experiments, is validated under conditions of changing temperatures. To test the predictive capacity of the model, dry matter accumulation in the leaves, stems, and roots of tobacco plants (Nicotiana tabacum L.) was measured at 2- or 3-day intervals during a 5-week period when temperatures in controlled-environment rooms were programmed for changes at weekly and daily intervals and in ascending or descending sequences within a range of 14 to 34 degrees C. Simulations of dry matter accumulation and distribution were carried out using the programmed changes for experimental temperatures and compared with the measured values. The agreement between measured and predicted values was close and indicates that the temperature-dependent functional forms derived from constant-temperature experiments are adequate for modelling plant growth responses to conditions of changing temperatures with switching intervals as short as 1 day.
Extreme precipitation patterns and reductions of terrestrial ecosystem production across biomes
Yongguang Zhang; M. Susan Moran; Mark A. Nearing; Guillermo E. Ponce Campos; Alfredo R. Huete; Anthony R. Buda; David D. Bosch; Stacey A. Gunter; Stanley G. Kitchen; W. Henry McNab; Jack A. Morgan; Mitchel P. McClaran; Diane S. Montoya; Debra P.C. Peters; Patrick J. Starks
2013-01-01
Precipitation regimes are predicted to shift to more extreme patterns that are characterized by more heavy rainfall events and longer dry intervals, yet their ecological impacts on vegetation production remain uncertain across biomes in natural climatic conditions. This in situ study investigated the effects of these climatic conditions on aboveground net primary...
A gentle introduction to quantile regression for ecologists
Cade, B.S.; Noon, B.R.
2003-01-01
Quantile regression is a way to estimate the conditional quantiles of a response variable distribution in the linear model that provides a more complete view of possible causal relationships between variables in ecological processes. Typically, all the factors that affect ecological processes are not measured and included in the statistical models used to investigate relationships between variables associated with those processes. As a consequence, there may be a weak or no predictive relationship between the mean of the response variable (y) distribution and the measured predictive factors (X). Yet there may be stronger, useful predictive relationships with other parts of the response variable distribution. This primer relates quantile regression estimates to prediction intervals in parametric error-distribution regression models (e.g., least squares), and discusses the ordering characteristics, interval nature, sampling variation, weighting, and interpretation of the estimates for homogeneous and heterogeneous regression models.
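A quantile-regression fit can be obtained from the standard linear-programming formulation of the pinball-loss problem. The sketch below (illustrative data, not from the primer) shows the heterogeneous-model case the primer emphasizes: under heteroscedasticity, upper- and lower-quantile slopes diverge even though a single mean regression would miss this.

```python
import numpy as np
from scipy.optimize import linprog

def quantile_regression(X, y, tau):
    # standard LP form: min tau*1'u + (1-tau)*1'v  s.t.  X b + u - v = y,
    # with b split into nonnegative parts b = b_plus - b_minus
    n, p = X.shape
    c = np.r_[np.zeros(2 * p), tau * np.ones(n), (1 - tau) * np.ones(n)]
    A_eq = np.hstack([X, -X, np.eye(n), -np.eye(n)])
    res = linprog(c, A_eq=A_eq, b_eq=y, method="highs")
    return res.x[:p] - res.x[p:2 * p]

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 400)
# heteroscedastic response: the spread grows with x, so quantile slopes differ
y = 1.0 + 0.5 * x + rng.normal(0, 0.1 + 0.2 * x)
X = np.c_[np.ones_like(x), x]
b10 = quantile_regression(X, y, 0.10)
b90 = quantile_regression(X, y, 0.90)
print(round(b10[1], 2), round(b90[1], 2))  # lower-quantile slope < upper-quantile slope
```

The pair of fitted quantile lines is the nonparametric analogue of the parametric prediction interval discussed in the primer: for a correctly specified least-squares model the two constructions roughly coincide, but the quantile estimates remain valid when the error distribution is heterogeneous.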
Gomes, Hilary; Barrett, Sophia; Duff, Martin; Barnhardt, Jack; Ritter, Walter
2008-03-01
We examined the impact of perceptual load by manipulating interstimulus interval (ISI) in two auditory selective attention studies that varied in the difficulty of the target discrimination. In the paradigm, channels were separated by frequency, and target/deviant tones were softer in intensity. Three ISI conditions were presented: fast (300 ms), medium (600 ms), and slow (900 ms). Behavioral (accuracy and RT) and electrophysiological (Nd, P3b) measures were observed. In both studies, participants showed poorer accuracy in the fast ISI condition than in the slow, suggesting that ISI affected task difficulty. However, none of the three measures of processing examined (Nd amplitude, P3b amplitude elicited by unattended deviant stimuli, or false alarms to unattended deviants) was affected by ISI in the manner predicted by perceptual load theory. The prediction, based on perceptual load theory, that there would be more processing of irrelevant stimuli under conditions of low as compared with high perceptual load was not supported in these auditory studies. Task difficulty/perceptual load affects the processing of irrelevant stimuli in the auditory modality differently than predicted by perceptual load theory, and perhaps differently than in the visual modality.
Pavement remaining service interval implementation guidelines.
DOT National Transportation Integrated Search
2013-11-01
"Many important decisions are necessary in order to effectively provide and manage a pavement network. At the heart of this : process is the prediction of needed future construction events. One approach to providing a single numeric on the condition ...
Tallot, Lucille; Diaz-Mataix, Lorenzo; Perry, Rosemarie E.; Wood, Kira; LeDoux, Joseph E.; Mouly, Anne-Marie; Sullivan, Regina M.; Doyère, Valérie
2017-01-01
The updating of a memory is triggered whenever it is reactivated and a mismatch from what is expected (i.e., prediction error) is detected, a process that can be unraveled through the memory's sensitivity to protein synthesis inhibitors (i.e., reconsolidation). As noted in previous studies, in Pavlovian threat/aversive conditioning in adult rats, prediction error detection and its associated protein synthesis-dependent reconsolidation can be triggered by reactivating the memory with the conditioned stimulus (CS), but without the unconditioned stimulus (US), or by presenting a CS–US pairing with a different CS–US interval than during the initial learning. Whether similar mechanisms underlie memory updating in the young is not known. Using similar paradigms with rapamycin (an mTORC1 inhibitor), we show that preweaning rats (PN18–20) do form a long-term memory of the CS–US interval, and detect a 10-sec versus 30-sec temporal prediction error. However, the resulting updating/reconsolidation processes become adult-like after adolescence (PN30–40). Our results thus show that while temporal prediction error detection exists in preweaning rats, specific infant-type mechanisms are at play for associative learning and memory. PMID:28202715
On the accuracy and reliability of predictions by control-system theory.
Bourbon, W T; Copeland, K E; Dyer, V R; Harman, W K; Mosley, B L
1990-12-01
In three experiments we used control-system theory (CST) to predict the results of tracking tasks on which people held a handle to keep a cursor even with a target on a computer screen. 10 people completed a total of 104 replications of the task. In each experiment, there were two conditions: in one, only the handle affected the position of the cursor; in the other, a random disturbance also affected the cursor. From a person's performance during Condition 1, we derived constants used in the CST model to predict the results of Condition 2. In two experiments, predictions occurred a few minutes before Condition 2; in one experiment, the delay was 1 yr. During a 1-min. experimental run, the positions of handle and cursor, produced by the person, were each sampled 1800 times, once every 1/30 sec. During a modeling run, the model predicted the positions of the handle and target for each of the 1800 intervals sampled in the experimental run. In 104 replications, the mean correlation between predicted and actual positions of the handle was .996; SD = .002.
Robust functional regression model for marginal mean and subject-specific inferences.
Cao, Chunzheng; Shi, Jian Qing; Lee, Youngjo
2017-01-01
We introduce flexible robust functional regression models, using various heavy-tailed processes, including a Student t-process. We propose efficient algorithms in estimating parameters for the marginal mean inferences and in predicting conditional means as well as interpolation and extrapolation for the subject-specific inferences. We develop bootstrap prediction intervals (PIs) for conditional mean curves. Numerical studies show that the proposed model provides a robust approach against data contamination or distribution misspecification, and the proposed PIs maintain the nominal confidence levels. A real data application is presented as an illustrative example.
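The bootstrap prediction intervals (PIs) mentioned in this abstract can be illustrated with a much simpler residual-resampling sketch; the linear mean function, the 95% level, and all names below are assumptions for the example, not the authors' functional-regression algorithm:

```python
# Hedged sketch of a percentile bootstrap prediction interval for a new
# observation: refit the mean on residual-resampled data, then add a
# resampled residual to mimic the noise of a future point.
import random

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def bootstrap_pi(xs, ys, x_new, level=0.95, n_boot=2000, seed=0):
    rng = random.Random(seed)
    a, b = fit_line(xs, ys)
    residuals = [y - (a + b * x) for x, y in zip(xs, ys)]
    draws = []
    for _ in range(n_boot):
        ys_star = [a + b * x + rng.choice(residuals) for x in xs]
        a_s, b_s = fit_line(xs, ys_star)
        # Extra resampled residual: prediction, not confidence, interval.
        draws.append(a_s + b_s * x_new + rng.choice(residuals))
    draws.sort()
    return (draws[int((1 - level) / 2 * n_boot)],
            draws[int((1 + level) / 2 * n_boot) - 1])

if __name__ == "__main__":
    rng = random.Random(1)
    xs = [i / 10 for i in range(50)]
    ys = [2.0 + 0.5 * x + rng.gauss(0, 0.3) for x in xs]
    print(bootstrap_pi(xs, ys, x_new=2.5))  # should bracket 2 + 0.5*2.5
```

The robust heavy-tailed processes of the paper would replace both the least-squares fit and the residual model; only the percentile-of-bootstrap-draws construction is shown here.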
Method and apparatus to predict the remaining service life of an operating system
Greitzer, Frank L.; Kangas, Lars J.; Terrones, Kristine M.; Maynard, Melody A.; Pawlowski, Ronald A.; Ferryman, Thomas A.; Skorpik, James R.; Wilson, Bary W.
2008-11-25
A method and computer-based apparatus for monitoring the degradation of, predicting the remaining service life of, and/or planning maintenance for, an operating system are disclosed. Diagnostic information on degradation of the operating system is obtained through measurement of one or more performance characteristics by one or more sensors onboard and/or proximate the operating system. Though not required, it is preferred that the sensor data are validated to improve the accuracy and reliability of the service life predictions. The condition or degree of degradation of the operating system is presented to a user by way of one or more calculated, numeric degradation figures of merit that are trended against one or more independent variables using one or more mathematical techniques. Furthermore, more than one trendline and uncertainty interval may be generated for a given degradation figure of merit/independent variable data set. The trendline(s) and uncertainty interval(s) are subsequently compared to one or more degradation figure of merit thresholds to predict the remaining service life of the operating system. The present invention enables multiple mathematical approaches in determining which trendline(s) to use to provide the best estimate of the remaining service life.
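The trendline-to-threshold idea in this patent abstract can be sketched in a few lines; this is an illustrative toy, not the patented method, and the degradation values, units, and threshold are invented:

```python
# Toy sketch: fit a linear trend to a degradation figure of merit and
# extrapolate to the time at which it crosses a failure threshold.
# The patent trends multiple figures of merit with multiple techniques
# and attaches uncertainty intervals; only one linear trend is shown.

def linear_trend(times, fom):
    """Least-squares intercept and slope of the figure of merit."""
    n = len(times)
    mt, mf = sum(times) / n, sum(fom) / n
    slope = sum((t - mt) * (f - mf) for t, f in zip(times, fom)) / \
            sum((t - mt) ** 2 for t in times)
    return mf - slope * mt, slope

def remaining_life(times, fom, threshold):
    """Predicted time until the trend crosses the threshold (None if never)."""
    a, b = linear_trend(times, fom)
    if b >= 0:                       # figure of merit is not degrading
        return None
    t_cross = (threshold - a) / b    # solve a + b*t = threshold
    return max(0.0, t_cross - times[-1])

if __name__ == "__main__":
    hours = [0, 100, 200, 300, 400]
    health = [1.00, 0.95, 0.90, 0.85, 0.80]   # degrading figure of merit
    print(remaining_life(hours, health, threshold=0.5))  # roughly 600 hours
```

An uncertainty interval on the remaining life, as the patent describes, would come from propagating the trendline's parameter uncertainty (or from competing trendlines) through the same threshold-crossing calculation.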
Isarida, Takeo; Sakai, Tetsuya; Kubota, Takayuki; Koga, Miho; Katayama, Yu; Isarida, Toshiko K
2014-04-01
The present study investigated context effects of incidental odors in free recall after a short retention interval (5 min). With a short retention interval, the results are not confounded by extraneous odors or encounters with the experimental odor and possible rehearsal during a long retention interval. A short study time condition (4 s per item), predicted not to be affected by adaptation to the odor, and a long study time condition (8 s per item) were used. Additionally, we introduced a new method for recovery from adaptation, where a dissimilar odor was briefly presented at the beginning of the retention interval, and we demonstrated the effectiveness of this technique. An incidental learning paradigm was used to prevent overshadowing from confounding the results. In three experiments, undergraduates (N = 200) incidentally studied words presented one-by-one and received a free recall test. Two pairs of odors and a third odor having different semantic-differential characteristics were selected from 14 familiar odors. One of the odors was presented during encoding, and during the test, the same odor (same-context condition) or the other odor within the pair (different-context condition) was presented. Without using a recovery-from-adaptation method, a significant odor-context effect appeared in the 4-s/item condition, but not in the 8-s/item condition. Using the recovery-from-adaptation method, context effects were found for both the 8- and the 4-s/item conditions. The size of the recovered odor-context effect did not change with study time. There were no serial position effects. Implications of the present findings are discussed.
Extreme precipitation patterns reduced terrestrial ecosystem production across biomes
USDA-ARS?s Scientific Manuscript database
Precipitation regimes are predicted to shift to more extreme patterns that are characterized by more intense rainfall events and longer dry intervals, yet their ecological impacts on vegetation production remain uncertain across biomes in natural climatic conditions. This in situ study investigated ...
Prediction of future asset price which is non-concordant with the historical distribution
NASA Astrophysics Data System (ADS)
Seong, Ng Yew; Hin, Pooi Ah
2015-12-01
This paper attempts to predict the major characteristics of a future asset price that is non-concordant with the distribution estimated from the price today and the prices on a large number of previous days. The three major characteristics of the i-th non-concordant asset price are: the length of the interval between the occurrence time of the previous non-concordant asset price and that of the present one; the indicator denoting that the non-concordant price is extremely small or extremely large, by its values -1 and 1 respectively; and the degree of non-concordance, given by the negative logarithm of the probability of the left or right tail of which one of the end points is the observed future price. The vector of the three major characteristics of the next non-concordant price is modelled as dependent on the vectors corresponding to the present and l - 1 previous non-concordant prices via a 3-dimensional conditional distribution derived from a 3(l + 1)-dimensional power-normal mixture distribution. The marginal distribution for each of the three major characteristics can then be derived from the conditional distribution. The mean of the j-th marginal distribution is an estimate of the j-th characteristic of the next non-concordant price, while the 100(α/2)% and 100(1 - α/2)% points of the j-th marginal distribution can be used to form a prediction interval for that characteristic. The performance measures of these estimates and prediction intervals indicate that the fitted conditional distribution is satisfactory. Thus, incorporating the distribution of the characteristics of the next non-concordant price into the model for asset price has good potential to yield a more realistic model.
Perez, Claudio A; Cohn, Theodore E; Medina, Leonel E; Donoso, José R
2007-08-31
Stochastic resonance (SR) is the counterintuitive phenomenon in which noise enhances detection of sub-threshold stimuli. The SR psychophysical threshold theory establishes that the required amplitude to exceed the sensory threshold barrier can be reached by adding noise to a sub-threshold stimulus. The aim of this study was to test the SR theory by comparing detection results from two different randomly-presented stimulus conditions. In the first condition, optimal noise was present during the whole attention interval; in the second, the optimal noise was restricted to the same time interval as the stimulus. SR threshold theory predicts no difference between the two conditions because noise helps the sub-threshold stimulus to reach threshold in both cases. The psychophysical experimental method used a 300 ms rectangular force pulse as a stimulus within an attention interval of 1.5 s, applied to the index finger of six human subjects in the two distinct conditions. For all subjects we show that in the condition in which the noise was present only when synchronized with the stimulus, detection was better (p<0.05) than in the condition in which the noise was delivered throughout the attention interval. These results provide the first direct evidence that SR threshold theory is incomplete and that a new phenomenon has been identified, which we call Coincidence-Enhanced Stochastic Resonance (CESR). We propose that CESR might occur because subject uncertainty is reduced when noise points at the same temporal window as the stimulus.
Darrington, Richard T; Jiao, Jim
2004-04-01
Rapid and accurate stability prediction is essential to pharmaceutical formulation development. Commonly used stability prediction methods include monitoring parent drug loss at intended storage conditions or initial rate determination of degradants under accelerated conditions. Monitoring parent drug loss at the intended storage condition does not provide a rapid and accurate stability assessment because often <0.5% drug loss is all that can be observed in a realistic time frame, while the accelerated initial rate method in conjunction with extrapolation of rate constants using the Arrhenius or Eyring equations often introduces large errors in shelf-life prediction. In this study, the shelf life prediction of a model pharmaceutical preparation utilizing sensitive high-performance liquid chromatography-mass spectrometry (LC/MS) to directly quantitate degradant formation rates at the intended storage condition is proposed. This method was compared to traditional shelf life prediction approaches in terms of time required to predict shelf life and associated error in shelf life estimation. Results demonstrated that the proposed LC/MS method using initial rates analysis provided significantly improved confidence intervals for the predicted shelf life and required less overall time and effort to obtain the stability estimation compared to the other methods evaluated. Copyright 2004 Wiley-Liss, Inc. and the American Pharmacists Association.
Assessing predictability of a hydrological stochastic-dynamical system
NASA Astrophysics Data System (ADS)
Gelfan, Alexander
2014-05-01
The water cycle includes processes with different memory, which creates potential for predictability of a hydrological system based on separating its long- and short-memory components and conditioning long-term prediction on the slower-evolving components (similar to approaches in climate prediction). In the face of the Panta Rhei IAHS Decade questions, it is important to find a conceptual approach to classify hydrological system components with respect to their predictability, define predictable/unpredictable patterns, extend lead time, and improve the reliability of hydrological predictions based on the predictable patterns. Representation of hydrological systems as dynamical systems subjected to the effect of noise (stochastic-dynamical systems) provides a possible tool for such conceptualization. A method has been proposed for assessing the predictability of a hydrological system caused by its sensitivity to both initial and boundary conditions. Predictability is defined through a procedure of convergence of a pre-assigned probabilistic measure (e.g. variance) of the system state to a stable value. The time interval of the convergence, that is, the time interval during which the system loses memory of its initial state, defines the limit of the system's predictability. The proposed method was applied to assess the predictability of soil moisture dynamics at the Nizhnedevitskaya experimental station (51.516N; 38.383E) located in the agricultural zone of central European Russia. A stochastic-dynamical model combining a deterministic one-dimensional model of the hydrothermal regime of soil with a stochastic model of meteorological inputs was developed. The deterministic model describes processes of coupled heat and moisture transfer through unfrozen/frozen soil and accounts for the influence of phase changes on water flow.
The stochastic model produces time series of daily meteorological variables (precipitation, air temperature and humidity), whose statistical properties are similar to those of the corresponding series of the actual data measured at the station. Beginning from the initial conditions and being forced by Monte-Carlo generated synthetic meteorological series, the model simulated diverging trajectories of soil moisture characteristics (water content of soil column, moisture of different soil layers, etc.). Limit of predictability of the specific characteristic was determined through time of stabilization of variance of the characteristic between the trajectories, as they move away from the initial state. Numerical experiments were carried out with the stochastic-dynamical model to analyze sensitivity of the soil moisture predictability assessments to uncertainty in the initial conditions, to determine effects of the soil hydraulic properties and processes of soil freezing on the predictability. It was found, particularly, that soil water content predictability is sensitive to errors in the initial conditions and strongly depends on the hydraulic properties of soil under both unfrozen and frozen conditions. Even if the initial conditions are "well-established", the assessed predictability of water content of unfrozen soil does not exceed 30-40 days, while for frozen conditions it may be as long as 3-4 months. The latter creates opportunity for utilizing the autumn water content of soil as the predictor for spring snowmelt runoff in the region under consideration.
Image discrimination models predict detection in fixed but not random noise
NASA Technical Reports Server (NTRS)
Ahumada, A. J. Jr; Beard, B. L.; Watson, A. B. (Principal Investigator)
1997-01-01
By means of a two-interval forced-choice procedure, contrast detection thresholds for an aircraft positioned on a simulated airport runway scene were measured with fixed and random white-noise masks. The term fixed noise refers to a constant, or unchanging, noise pattern for each stimulus presentation. The random noise was either the same or different in the two intervals. Contrary to simple image discrimination model predictions, the same random noise condition produced greater masking than the fixed noise. This suggests that observers seem unable to hold a new noisy image for comparison. Also, performance appeared limited by internal process variability rather than by external noise variability, since similar masking was obtained for both random noise types.
Fractal analyses reveal independent complexity and predictability of gait
Dierick, Frédéric; Nivard, Anne-Laure
2017-01-01
Locomotion is a natural task that has been assessed for decades and used as a proxy to highlight impairments of various origins. So far, most studies have adopted classical linear analyses of spatio-temporal gait parameters. Here, we use more advanced, yet no less practical, non-linear techniques to analyse the gait time series of healthy subjects. We aimed to find indexes related to spatio-temporal gait parameters that are more sensitive than those previously used, with the hope of better identifying abnormal locomotion. We analysed large-scale stride interval time series and mean step width in 34 participants while altering walking direction (forward vs. backward walking) and with or without galvanic vestibular stimulation. The Hurst exponent α and the Minkowski fractal dimension D were computed and interpreted as indexes expressing predictability and complexity of stride interval time series, respectively. These holistic indexes can easily be interpreted in the framework of optimal movement complexity. We show that α and D accurately capture stride interval changes as a function of the experimental condition. Walking forward exhibited maximal complexity (D) and hence adaptability. In contrast, walking backward and/or stimulation of the vestibular system decreased D. Furthermore, walking backward increased predictability (α) through a more stereotyped pattern of the stride interval, and galvanic vestibular stimulation reduced predictability. The present study demonstrates the complementary power of the Hurst exponent and the fractal dimension to improve walking classification. Our developments may have immediate applications in rehabilitation, diagnosis, and classification procedures. PMID:29182659
Acquisition with partial and continuous reinforcement in pigeon autoshaping.
Gottlieb, Daniel A
2004-08-01
Contemporary time accumulation models make the unique prediction that acquisition of a conditioned response will be equally rapid with partial and continuous reinforcement, if the time between conditioned stimuli is held constant. To investigate this, acquisition of conditioned responding was examined in pigeon autoshaping under conditions of 100% and 25% reinforcement, holding intertrial interval constant. Contrary to what was predicted, evidence for slowed acquisition in partially reinforced animals was observed with several response measures. However, asymptotic performance was superior with 25% reinforcement. A switching of reinforcement contingencies after initial acquisition did not immediately affect responding. After further sessions, partial reinforcement augmented responding, whereas continuous reinforcement did not, irrespective of an animal's reinforcement history. Subsequent training with a novel stimulus maintained the response patterns. These acquisition results generally support associative, rather than time accumulation, accounts of conditioning.
It's time to fear! Interval timing in odor fear conditioning in rats
Shionoya, Kiseko; Hegoburu, Chloé; Brown, Bruce L.; Sullivan, Regina M.; Doyère, Valérie; Mouly, Anne-Marie
2013-01-01
Time perception is crucial to goal attainment in humans and other animals, and interval timing also guides fundamental animal behaviors. Accumulating evidence has made it clear that in associative learning, temporal relations between events are encoded, and a few studies suggest this temporal learning occurs very rapidly. Most of these studies, however, have used methodologies that do not permit investigating the emergence of this temporal learning. In the present study we monitored respiration, ultrasonic vocalization (USV) and freezing behavior in rats in order to perform fine-grain analysis of fear responses during odor fear conditioning. In this paradigm an initially neutral odor (the conditioned stimulus, CS) predicted the arrival of an aversive unconditioned stimulus (US, footshock) at a fixed 20-s time interval. We first investigated the development of a temporal pattern of responding related to CS-US interval duration. The data showed that during acquisition with odor-shock pairings, a temporal response pattern of respiration rate was observed. Changing the CS-US interval duration from 20-s to 30-s resulted in a shift of the temporal response pattern appropriate to the new duration thus demonstrating that the pattern reflected the learning of the CS-US interval. A temporal pattern was also observed during a retention test 24 h later for both respiration and freezing measures, suggesting that the animals had stored the interval duration in long-term memory. We then investigated the role of intra-amygdalar dopaminergic transmission in interval timing. For this purpose, the D1 dopaminergic receptors antagonist SCH23390 was infused in the basolateral amygdala before conditioning. This resulted in an alteration of timing behavior, as reflected in differential temporal patterns between groups observed in a 24 h retention test off drug. The present data suggest that D1 receptor dopaminergic transmission within the amygdala is involved in temporal processing. 
PMID:24098277
Intervals for posttest probabilities: a comparison of 5 methods.
Mossman, D; Berger, J O
2001-01-01
Several medical articles discuss methods of constructing confidence intervals for single proportions and the likelihood ratio, but scant attention has been given to the systematic study of intervals for the posterior odds, or the positive predictive value, of a test. The authors describe 5 methods of constructing confidence intervals for posttest probabilities when estimates of sensitivity, specificity, and the pretest probability of a disorder are derived from empirical data. They then evaluate each method to determine how well the intervals' coverage properties correspond to their nominal value. When the estimates of pretest probabilities, sensitivity, and specificity are derived from more than 80 subjects and are not close to 0 or 1, all methods generate intervals with appropriate coverage properties. When these conditions are not met, however, the best-performing method is an objective Bayesian approach implemented by a simple simulation using a spreadsheet. Physicians and investigators can generate accurate confidence intervals for posttest probabilities in small-sample situations using the objective Bayesian approach.
NASA Technical Reports Server (NTRS)
Zhang, D.; Anthes, R. A.
1982-01-01
A one-dimensional planetary boundary layer (PBL) model is presented and verified using April 10, 1979 SESAME data. The model contains two modules to account for two different regimes of turbulent mixing. Separate parameterizations are made for stable and unstable conditions, with a predictive slab model for surface temperature. Atmospheric variables in the surface layer are calculated with a prognostic model, with moisture included in the coupled surface/PBL modeling. Sensitivity tests are performed for factors such as moisture availability, albedo, surface roughness, and thermal capacity, and a 24-hr simulation is summarized for day and night conditions. The comparison with the SESAME data comprises three-hour intervals, using a time-dependent geostrophic wind. Close agreement was found for daytime conditions but not for the nighttime thermal structure, while the turbulence was faithfully predicted. Both geostrophic flow and surface characteristics were shown to have significant effects on the model predictions.
Time and Order Effects on Causal Learning
ERIC Educational Resources Information Center
Alvarado, Angelica; Jara, Elvia; Vila, Javier; Rosas, Juan M.
2006-01-01
Five experiments were conducted to explore trial order and retention interval effects upon causal predictive judgments. Experiment 1 found that participants show a strong effect of trial order when a stimulus was sequentially paired with two different outcomes compared to a condition where both outcomes were presented intermixed. Experiment 2…
CALCULATION OF NONLINEAR CONFIDENCE AND PREDICTION INTERVALS FOR GROUND-WATER FLOW MODELS.
Cooley, Richard L.; Vecchia, Aldo V.
1987-01-01
A method is derived to efficiently compute nonlinear confidence and prediction intervals on any function of parameters derived as output from a mathematical model of a physical system. The method is applied to the problem of obtaining confidence and prediction intervals for manually-calibrated ground-water flow models. To obtain confidence and prediction intervals resulting from uncertainties in parameters, the calibrated model and information on extreme ranges and ordering of the model parameters within one or more independent groups are required. If random errors in the dependent variable are present in addition to uncertainties in parameters, then calculation of prediction intervals also requires information on the extreme range of error expected. A simple Monte Carlo method is used to compute the quantiles necessary to establish probability levels for the confidence and prediction intervals. Application of the method to a hypothetical example showed that inclusion of random errors in the dependent variable in addition to uncertainties in parameters can considerably widen the prediction intervals.
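The Monte Carlo step of this method can be illustrated with a toy model; the model form, parameter ranges, and error standard deviation below are invented for the example, not taken from the ground-water application:

```python
# Toy sketch of the paper's Monte Carlo quantile computation: sample
# uncertain parameters from their assumed extreme ranges, push each draw
# through the model, and take empirical quantiles of the output. Adding
# random error in the dependent variable widens the confidence interval
# into a prediction interval, as the hypothetical example in the paper
# also found.
import random

def model(k, h):                      # stand-in for the calibrated model
    return k * h ** 2

def monte_carlo_interval(n=10000, with_error=False, seed=0):
    rng = random.Random(seed)
    outs = []
    for _ in range(n):
        k = rng.uniform(0.8, 1.2)     # assumed parameter ranges
        h = rng.uniform(4.5, 5.5)
        y = model(k, h)
        if with_error:
            y += rng.gauss(0.0, 2.0)  # random error in the dependent variable
        outs.append(y)
    outs.sort()
    return outs[int(0.025 * n)], outs[int(0.975 * n) - 1]

if __name__ == "__main__":
    conf = monte_carlo_interval(with_error=False)
    pred = monte_carlo_interval(with_error=True)
    print(conf, pred)   # the prediction interval is the wider of the two
```

Ordering constraints within parameter groups, which the paper requires for calibrated ground-water models, would enter as rejection or reordering rules on the sampled draws.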
Modelling volatility recurrence intervals in the Chinese commodity futures market
NASA Astrophysics Data System (ADS)
Zhou, Weijie; Wang, Zhengxin; Guo, Haiming
2016-09-01
The law governing the occurrence of extreme events attracts much research. The volatility recurrence intervals of Chinese commodity futures market prices are studied: the results show that the probability distributions of the scaled volatility recurrence intervals follow a uniform scaling curve for different thresholds q, so the probability distribution of extreme events can be deduced from that of normal events. The tail of the scaling curve is well fitted by a Weibull form, a fit supported by Kolmogorov-Smirnov tests. Both short-term and long-term memory are present in the recurrence intervals for different thresholds q, which indicates that the recurrence intervals can be predicted. In addition, similar to volatility itself, volatility recurrence intervals also exhibit clustering. Through Monte Carlo simulation, we synthesise ARMA and GARCH-class sequences similar to the original data to identify the reason behind the clustering: the larger the parameter d of the FIGARCH model, the stronger the clustering effect. Finally, we use the Fractionally Integrated Autoregressive Conditional Duration (FIACD) model to analyse the recurrence interval characteristics. The results indicate that the FIACD model may provide a method to analyse volatility recurrence intervals.
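The recurrence-interval construction itself (though not the Weibull fitting or FIACD modelling of the paper) can be sketched in a few lines; the volatility series and threshold below are invented:

```python
# Sketch: recurrence intervals are the waiting times between successive
# volatility values exceeding a threshold q; scaling by the mean interval
# is what lets curves for different q collapse onto one another.

def recurrence_intervals(series, q):
    """Gaps (in time steps) between successive values exceeding q."""
    hits = [i for i, v in enumerate(series) if v > q]
    return [b - a for a, b in zip(hits, hits[1:])]

def scaled(intervals):
    """Intervals divided by their mean, as used for the scaling curve."""
    mean = sum(intervals) / len(intervals)
    return [r / mean for r in intervals]

if __name__ == "__main__":
    vol = [0.1, 0.9, 0.2, 0.3, 0.8, 0.1, 0.1, 1.1, 0.95, 0.2]
    r = recurrence_intervals(vol, q=0.7)
    print(r)            # [3, 3, 1]
    print(scaled(r))
```

Clustering of recurrence intervals, in the paper's sense, would show up as short scaled intervals tending to follow short ones in a long empirical series.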
Gajic, Ognjen; Dabbagh, Ousama; Park, Pauline K; Adesanya, Adebola; Chang, Steven Y; Hou, Peter; Anderson, Harry; Hoth, J Jason; Mikkelsen, Mark E; Gentile, Nina T; Gong, Michelle N; Talmor, Daniel; Bajwa, Ednan; Watkins, Timothy R; Festic, Emir; Yilmaz, Murat; Iscimen, Remzi; Kaufman, David A; Esper, Annette M; Sadikot, Ruxana; Douglas, Ivor; Sevransky, Jonathan; Malinchoc, Michael
2011-02-15
Accurate, early identification of patients at risk for developing acute lung injury (ALI) provides the opportunity to test and implement secondary prevention strategies. To determine the frequency and outcome of ALI development in patients at risk and validate a lung injury prediction score (LIPS). In this prospective multicenter observational cohort study, predisposing conditions and risk modifiers predictive of ALI development were identified from routine clinical data available during initial evaluation. The discrimination of the model was assessed with area under receiver operating curve (AUC). The risk of death from ALI was determined after adjustment for severity of illness and predisposing conditions. Twenty-two hospitals enrolled 5,584 patients at risk. ALI developed a median of 2 (interquartile range 1-4) days after initial evaluation in 377 (6.8%; 148 ALI-only, 229 adult respiratory distress syndrome) patients. The frequency of ALI varied according to predisposing conditions (from 3% in pancreatitis to 26% after smoke inhalation). LIPS discriminated patients who developed ALI from those who did not with an AUC of 0.80 (95% confidence interval, 0.78-0.82). When adjusted for severity of illness and predisposing conditions, development of ALI increased the risk of in-hospital death (odds ratio, 4.1; 95% confidence interval, 2.9-5.7). ALI occurrence varies according to predisposing conditions and carries an independently poor prognosis. Using routinely available clinical data, LIPS identifies patients at high risk for ALI early in the course of their illness. This model will alert clinicians about the risk of ALI and facilitate testing and implementation of ALI prevention strategies. Clinical trial registered with www.clinicaltrials.gov (NCT00889772).
Machine learning approaches for estimation of prediction interval for the model output.
Shrestha, Durga L; Solomatine, Dimitri P
2006-03-01
A novel method for estimating prediction uncertainty using machine learning techniques is presented. Uncertainty is expressed in the form of the two quantiles (constituting the prediction interval) of the underlying distribution of prediction errors. The idea is to partition the input space into different zones or clusters having similar model errors using fuzzy c-means clustering. The prediction interval is constructed for each cluster on the basis of empirical distributions of the errors associated with all instances belonging to the cluster under consideration and propagated from each cluster to the examples according to their membership grades in each cluster. Then a regression model is built for in-sample data using computed prediction limits as targets, and finally, this model is applied to estimate the prediction intervals (limits) for out-of-sample data. The method was tested on artificial and real hydrologic data sets using various machine learning techniques. Preliminary results show that the method is superior to other methods estimating the prediction interval. A new method for evaluating performance for estimating prediction interval is proposed as well.
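The cluster-wise empirical-quantile idea can be sketched as below; note the paper uses fuzzy c-means clustering with membership-weighted propagation and a final regression model, whereas this illustration substitutes a crisp one-dimensional split to stay self-contained:

```python
# Simplified sketch: group examples by input region, then take empirical
# quantiles of the model errors inside each group as prediction limits.
# The crisp boundary, level, and synthetic residuals are assumptions.

def cluster_of(x, boundary=0.5):
    """Crisp stand-in for a fuzzy c-means membership assignment."""
    return 0 if x < boundary else 1

def interval_limits(xs, errors, level=0.90):
    """Per-cluster (lower, upper) empirical quantiles of model errors."""
    limits = {}
    for c in (0, 1):
        errs = sorted(e for x, e in zip(xs, errors) if cluster_of(x) == c)
        n = len(errs)
        lo = errs[int((1 - level) / 2 * n)]
        hi = errs[min(n - 1, int((1 + level) / 2 * n))]
        limits[c] = (lo, hi)
    return limits

if __name__ == "__main__":
    # Hypothetical residuals: larger error spread in the second input zone.
    xs = [i / 20 for i in range(20)]
    errors = [(-1) ** i * (0.1 if x < 0.5 else 0.5) for i, x in enumerate(xs)]
    print(interval_limits(xs, errors))   # cluster 1 gets the wider interval
```

The paper's refinement is that each example inherits a blend of cluster limits weighted by its fuzzy membership grades, and a regression model then interpolates those limits for out-of-sample inputs.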
Shape of the equatorial magnetopause affected by the radial interplanetary magnetic field
NASA Astrophysics Data System (ADS)
Grygorov, K.; Šafránková, J.; Němeček, Z.; Pi, G.; Přech, L.; Urbář, J.
2017-11-01
The ability to predict the magnetopause location under various upstream conditions can be considered a test of our understanding of the solar wind-magnetosphere interaction. Present magnetopause models are parametrized with the solar wind dynamic pressure and usually with the north-south interplanetary magnetic field (IMF) component. Several studies have pointed out the importance of the radial IMF component, but their results remain controversial. The present study compares magnetopause observations by five THEMIS spacecraft during long-lasting intervals of radial IMF with two empirical magnetopause models. The comparison reveals that the magnetopause location is highly variable and that the average difference between the observed and predicted positions is ≈ +0.7 RE under this condition. The difference does not depend on local time or other parameters, such as the upstream pressure, the IMF north-south component, or the tilt angle of the Earth's dipole. We conclude that our results strongly support the suggestion of a global expansion of the equatorial magnetopause during intervals of radial IMF.
White, K G; Wixted, J T
1999-01-01
We present a new model of remembering in the context of conditional discrimination. For procedures such as delayed matching to sample, the effect of the sample stimuli at the time of remembering is represented by a pair of Thurstonian (normal) distributions of effective stimulus values. The critical assumption of the model is that, based on prior experience, each effective stimulus value is associated with a ratio of reinforcers obtained for previous correct choices of the comparison stimuli. That ratio determines the choice that is made on the basis of the matching law. The standard deviations of the distributions are assumed to increase with increasing retention-interval duration, and the distance between their means is assumed to be a function of other factors that influence overall difficulty of the discrimination. It is a behavioral model in that choice is determined by its reinforcement history. The model predicts that the biasing effects of the reinforcer differential increase with decreasing discriminability and with increasing retention-interval duration. Data from several conditions using a delayed matching-to-sample procedure with pigeons support the predictions. PMID:10028693
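A Monte Carlo sketch of the model's two core assumptions: normal (Thurstonian) distributions of effective stimulus values whose spread grows with retention interval, and choice governed by matching to relative reinforcement. All numerical parameters below are illustrative assumptions, not the authors' fitted values, and the likelihood-ratio weighting is one plausible way to attach reinforcer ratios to effective stimulus values rather than the paper's exact parameterization.

```python
import numpy as np

def dmts_choice_prob(delay_s, d_mean=1.0, sigma0=0.5, growth=0.2,
                     reinforcer_ratio=1.0, n=200000, seed=0):
    """Probability of choosing comparison 1 on trials where sample 1 was shown.
    Effective stimulus values are drawn from N(d_mean, sd) with sd growing
    with the retention interval; for each sampled value the relative evidence
    is the likelihood ratio of the two distributions, and choice follows the
    matching law applied to evidence times the reinforcer ratio."""
    rng = np.random.default_rng(seed)
    sd = sigma0 + growth * delay_s
    x = rng.normal(d_mean, sd, n)
    # likelihood ratio N(d_mean, sd) / N(0, sd) evaluated at x
    lr = np.exp(x * d_mean / sd**2 - d_mean**2 / (2.0 * sd**2))
    p_choose_1 = (lr * reinforcer_ratio) / (lr * reinforcer_ratio + 1.0)
    return p_choose_1.mean()
```

Consistent with the model's predictions, accuracy falls with delay and the biasing effect of the reinforcer differential grows as discriminability declines.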
Commentary on Holmes et al. (2007): resolving the debate on when extinction risk is predictable.
Ellner, Stephen P; Holmes, Elizabeth E
2008-08-01
We reconcile the findings of Holmes et al. (Ecology Letters, 10, 2007, 1182), that 95% confidence intervals for quasi-extinction risk were narrow for many vertebrates of conservation concern, with previous theory predicting wide confidence intervals. We extend previous theory concerning the precision of quasi-extinction estimates as a function of population dynamic parameters, prediction intervals, and quasi-extinction thresholds, and provide an approximation that specifies the prediction interval and threshold combinations where quasi-extinction estimates are precise (vs. imprecise). This allows PVA practitioners to define the prediction interval and threshold regions of safety (low risk with high confidence), danger (high risk with high confidence), and uncertainty.
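Quasi-extinction risk in this literature is commonly computed from the diffusion (Brownian motion) approximation for log abundance. A sketch of the standard first-passage probability, with x_d = log(N0/N_threshold) the distance to the threshold; this is the generic formula, not this paper's specific approximation.

```python
import math

def quasi_extinction_prob(mu, sigma, x_d, T):
    """Probability that log abundance, modeled as Brownian motion with drift
    mu and standard deviation sigma, falls a distance x_d > 0 (down to the
    quasi-extinction threshold) within time horizon T."""
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    s = sigma * math.sqrt(T)
    return (Phi((-x_d - mu * T) / s)
            + math.exp(-2.0 * mu * x_d / sigma**2) * Phi((-x_d + mu * T) / s))
```

The precision question the paper addresses is how sampling error in the estimates of mu and sigma propagates into this probability for a given prediction interval T and threshold x_d.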
Predicted Weather Display and Decision Support Interface for Flight Deck
NASA Technical Reports Server (NTRS)
Johnson, Walter W. (Inventor); Wong, Dominic G. (Inventor); Koteskey, Robert W. (Inventor); Wu, Shu-Chieh (Inventor)
2017-01-01
A system and method for providing visual depictions of a predictive weather forecast for in-route vehicle trajectory planning. The method includes displaying weather information on a graphical display, displaying vehicle position information on the graphical display, selecting a predictive interval, displaying predictive weather information for the predictive interval on the graphical display, and displaying predictive vehicle position information for the predictive interval on the graphical display, such that the predictive vehicle position information is displayed relative to the predictive weather information, for in-route trajectory planning.
Tomaselli Muensterman, Elena; Tisdale, James E
2018-06-08
Prolongation of the heart rate-corrected QT (QTc) interval increases the risk for torsades de pointes (TdP), a potentially fatal arrhythmia. The likelihood of TdP is higher in patients with risk factors, which include female sex, older age, heart failure with reduced ejection fraction, hypokalemia, hypomagnesemia, concomitant administration of ≥ 2 QTc interval-prolonging medications, among others. Assessment and quantification of risk factors may facilitate prediction of patients at highest risk for developing QTc interval prolongation and TdP. Investigators have utilized the field of predictive analytics, which generates predictions using techniques including data mining, modeling, machine learning, and others, to develop methods of risk quantification and prediction of QTc interval prolongation. Predictive analytics have also been incorporated into clinical decision support (CDS) tools to alert clinicians regarding patients at increased risk of developing QTc interval prolongation. The objectives of this paper are to assess the effectiveness of predictive analytics for identification of patients at risk of drug-induced QTc interval prolongation, and to discuss the efficacy of incorporation of predictive analytics into CDS tools in clinical practice. A systematic review of English language articles (human subjects only) was performed, yielding 57 articles, with an additional 4 articles identified from other sources; a total of 10 articles were included in this review. Risk scores for QTc interval prolongation have been developed in various patient populations including those in cardiac intensive care units (ICUs) and in broader populations of hospitalized or health system patients. One group developed a risk score that includes information regarding genetic polymorphisms; this score significantly predicted TdP. 
Development of QTc interval prolongation risk prediction models and incorporation of these models into CDS tools reduces the risk of QTc interval prolongation in cardiac ICUs and identifies health-system patients at increased risk for mortality. The impact of these QTc interval prolongation predictive analytics on overall patient safety outcomes, such as TdP and sudden cardiac death relative to the cost of development and implementation, requires further study. This article is protected by copyright. All rights reserved.
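The QTc values these risk scores build on are usually obtained with a heart-rate correction such as Bazett's formula; a minimal sketch (the 500 ms screening threshold is a widely used convention in this literature, not a value taken from this review):

```python
def qtc_bazett_ms(qt_ms, rr_s):
    """Bazett heart-rate correction: QTc = QT / sqrt(RR), with QT in ms and
    the RR interval in seconds (RR = 60 / heart rate in beats per minute)."""
    return qt_ms / rr_s ** 0.5

def high_tdp_risk(qtc_ms):
    """QTc > 500 ms is widely treated as a high-risk threshold for TdP."""
    return qtc_ms > 500.0
```

At a heart rate of 60 bpm (RR = 1 s) the correction is the identity; at faster rates the same measured QT yields a longer QTc.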
On the Development and Mechanics of Delayed Matching-to-Sample Performance
Kangas, Brian D; Berry, Meredith S; Branch, Marc N
2011-01-01
Despite its frequent use to assess effects of environmental and pharmacological variables on short-term memory, little is known about the development of delayed matching-to-sample (DMTS) performance. This study was designed to examine the dimensions and dynamics of DMTS performance development over a long period of exposure to provide a more secure foundation for assessing stability in future research. Six pigeons were exposed to a DMTS task with variable delays for 300 sessions (i.e., 18,000 total trials; 3,600 trials per retention interval). Percent-correct and log-d measures used to quantify the development of conditional stimulus control under the procedure generally and at each of five retention intervals (0, 2, 4, 8, and 16 s) individually revealed that high levels of accuracy developed relatively quickly under the shorter retention intervals, but increases in accuracy under the longer retention intervals sometimes were not observed until 100–150 sessions had passed, with some still increasing at Session 300. Analyses of errors suggested that retention intervals induced biases by shifting control from the sample stimulus to control by position, something that was predicted by observed response biases during initial training. These results suggest that although it may require a great deal of exposure to DMTS prior to obtaining asymptotic steady state, quantification of model parameters may help predict trends when extended exposure is not feasible. PMID:21541127
Haidar, Ziad A; Papanna, Ramesha; Sibai, Baha M; Tatevian, Nina; Viteri, Oscar A; Vowels, Patricia C; Blackwell, Sean C; Moise, Kenneth J
2017-08-01
Traditionally, 2-dimensional ultrasound parameters have been used for the diagnosis of a suspected morbidly adherent placenta previa. More objective techniques have not yet been well studied. The objective of the study was to determine the ability of prenatal 3-dimensional power Doppler analysis of flow and vascular indices to predict the morbidly adherent placenta objectively. A prospective cohort study was performed in women between 28 and 32 gestational weeks with known placenta previa. Patients underwent a 2-dimensional gray-scale ultrasound that determined management decisions. 3-Dimensional power Doppler volumes were obtained during the same examination, and vascular, flow, and vascular flow indices were calculated after manual tracing of the viewed placenta in the sweep; the data were blinded to obstetricians. Morbidly adherent placenta was confirmed by histology. Severe morbidly adherent placenta was defined as increta/percreta on histology, blood loss >2000 mL, and >2 units of PRBC transfused. Sensitivities, specificities, predictive values, and likelihood ratios were calculated. Student t and χ² tests, logistic regression, receiver-operating characteristic curves, and intra- and interrater agreements using Kappa statistics were performed.
The following results were found: (1) 50 women were studied: 23 had morbidly adherent placenta, of which 12 (52.2%) were severe morbidly adherent placenta; (2) 2-dimensional parameters diagnosed morbidly adherent placenta with a sensitivity of 82.6% (95% confidence interval, 60.4-94.2), a specificity of 88.9% (95% confidence interval, 69.7-97.1), a positive predictive value of 86.3% (95% confidence interval, 64.0-96.4), a negative predictive value of 85.7% (95% confidence interval, 66.4-95.3), a positive likelihood ratio of 7.4 (95% confidence interval, 2.5-21.9), and a negative likelihood ratio of 0.2 (95% confidence interval, 0.08-0.48); (3) mean values of the vascular index (32.8 ± 7.4) and the vascular flow index (14.2 ± 3.8) were higher in morbidly adherent placenta (P < .001); (4) area under the receiver-operating characteristic curve for the vascular and vascular flow indices were 0.99 and 0.97, respectively; (5) the vascular index ≥21 predicted morbidly adherent placenta with a sensitivity and a specificity of 95% (95% confidence interval, 88.2-96.9) and 91%, respectively (95% confidence interval, 87.5-92.4), 92% positive predictive value (95% confidence interval, 85.5-94.3), 90% negative predictive value (95% confidence interval, 79.9-95.3), positive likelihood ratio of 10.55 (95% confidence interval, 7.06-12.75), and negative likelihood ratio of 0.05 (95% confidence interval, 0.03-0.13); and (6) for the severe morbidly adherent placenta, 2-dimensional ultrasound had a sensitivity of 33.3% (95% confidence interval, 11.3-64.6), a specificity of 81.8% (95% confidence interval, 47.8-96.8), a positive predictive value of 66.7% (95% confidence interval, 24.1-94.1), a negative predictive value of 52.9% (95% confidence interval, 28.5-76.1), a positive likelihood ratio of 1.83 (95% confidence interval, 0.41-8.11), and a negative likelihood ratio of 0.81 (95% confidence interval, 0.52-1.26). 
A vascular index ≥31 predicted the diagnosis of a severe morbidly adherent placenta with a 100% sensitivity (95% confidence interval, 72-100), a 90% specificity (95% confidence interval, 81.7-93.8), an 88% positive predictive value (95% confidence interval, 55.0-91.3), a 100% negative predictive value (95% confidence interval, 90.9-100), a positive likelihood ratio of 10.0 (95% confidence interval, 3.93-16.13), and a negative likelihood ratio of 0 (95% confidence interval, 0-0.34). Intrarater and interrater agreements were 94% (P < .001) and 93% (P < .001), respectively. The vascular index accurately predicts the morbidly adherent placenta in patients with placenta previa. In addition, 3-dimensional power Doppler vascular and vascular flow indices were more predictive of severe cases of morbidly adherent placenta compared with 2-dimensional ultrasound. This objective technique may limit the variations in diagnosing morbidly adherent placenta because of the subjectivity of 2-dimensional ultrasound interpretations. Copyright © 2017 Elsevier Inc. All rights reserved.
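The reported accuracy statistics follow from standard 2×2-table formulas. The counts used below (tp=19, fp=3, fn=4, tn=24) are reconstructed to be consistent with the reported 2-dimensional results (23 morbidly adherent placentas among 50 women) and are illustrative, not taken verbatim from the paper.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Point estimates of the standard diagnostic accuracy measures from a
    2x2 table (the paper additionally reports 95% confidence intervals)."""
    sens = tp / (tp + fn)                 # sensitivity (true positive rate)
    spec = tn / (tn + fp)                 # specificity (true negative rate)
    ppv = tp / (tp + fp)                  # positive predictive value
    npv = tn / (tn + fn)                  # negative predictive value
    lr_pos = sens / (1.0 - spec)          # positive likelihood ratio
    lr_neg = (1.0 - sens) / spec          # negative likelihood ratio
    return {"sens": sens, "spec": spec, "ppv": ppv, "npv": npv,
            "LR+": lr_pos, "LR-": lr_neg}
```

With those reconstructed counts, the point estimates reproduce the paper's 2-dimensional figures (sensitivity 82.6%, specificity 88.9%, LR+ 7.4, LR- 0.2).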
Inverse analysis and regularisation in conditional source-term estimation modelling
NASA Astrophysics Data System (ADS)
Labahn, Jeffrey W.; Devaud, Cecile B.; Sipkens, Timothy A.; Daun, Kyle J.
2014-05-01
Conditional Source-term Estimation (CSE) obtains the conditional species mass fractions by inverting a Fredholm integral equation of the first kind. In the present work, a Bayesian framework is used to compare two different regularisation methods: zeroth-order temporal Tikhonov regularisation and first-order spatial Tikhonov regularisation. The objectives of the current study are: (i) to elucidate the ill-posedness of the inverse problem; (ii) to understand the origin of the perturbations in the data and quantify their magnitude; (iii) to quantify the uncertainty in the solution using different priors; and (iv) to determine the regularisation method best suited to this problem. A singular value decomposition shows that the current inverse problem is ill-posed. Perturbations to the data may be caused by the use of a discrete mixture fraction grid for calculating the mixture fraction PDF. The magnitude of the perturbations is estimated using a box filter, and the uncertainty in the solution is determined from the width of the credible intervals. The width of the credible intervals is significantly reduced with the inclusion of a smoothing prior, and the recovered solution is in better agreement with the exact solution. The credible intervals for temporal and spatial smoothing are shown to be similar. Credible intervals for temporal smoothing depend on the solution from the previous time step, and a smooth solution is not guaranteed. For spatial smoothing, the credible intervals do not depend on a previous solution and better predict characteristics at higher mixture fraction values. These characteristics make spatial smoothing a promising alternative method for recovering a solution from the CSE inversion process.
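For a discretized first-kind problem Ax = b, the two regularisation variants differ only in the penalty operator L: the identity for zeroth-order Tikhonov, a first-difference matrix for the first-order (smoothing) case. A minimal numpy sketch of the regularised solve (our function name; the paper's Bayesian treatment adds priors and credible intervals on top of this):

```python
import numpy as np

def tikhonov_solve(A, b, lam, order=0):
    """Solve min ||A x - b||^2 + lam^2 ||L x||^2 via the normal equations.
    order=0 uses L = I (zeroth-order Tikhonov); order=1 uses a first-difference
    matrix, penalising non-smooth solutions (the spatial-smoothing prior)."""
    n = A.shape[1]
    if order == 0:
        L = np.eye(n)
    else:
        L = np.diff(np.eye(n), axis=0)   # (n-1) x n first-difference operator
    return np.linalg.solve(A.T @ A + lam**2 * (L.T @ L), A.T @ b)
```

For order 0, increasing lam monotonically shrinks the solution norm, which is the classic trade-off between fitting the (perturbed) data and damping the ill-posed directions revealed by the SVD.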
Predicting Secchi disk depth from average beam attenuation in a deep, ultra-clear lake
Larson, G.L.; Hoffman, R.L.; Hargreaves, B.R.; Collier, R.W.
2007-01-01
We addressed potential sources of error in estimating the water clarity of mountain lakes by investigating the use of beam transmissometer measurements to estimate Secchi disk depth. The optical properties Secchi disk depth (SD) and beam transmissometer attenuation (BA) were measured in Crater Lake (Crater Lake National Park, Oregon, USA) at a designated sampling station near the maximum depth of the lake. A standard 20 cm black and white disk was used to measure SD. The transmissometer light source had a nearly monochromatic wavelength of 660 nm and a path length of 25 cm. We created a SD prediction model by regression of the inverse SD of 13 measurements recorded on days when environmental conditions were acceptable for disk deployment with BA averaged over the same depth range as the measured SD. The relationship between inverse SD and averaged BA was significant, and the average 95% confidence interval for predicted SD relative to the measured SD was ±1.6 m (range = -4.6 to 5.5 m) or ±5.0%. Eleven additional sample dates tested the accuracy of the predictive model. The average 95% confidence interval for these sample dates was ±0.7 m (range = -3.5 to 3.8 m) or ±2.2%. The 1996-2000 time-series means for measured and predicted SD varied by 0.1 m, and the medians varied by 0.5 m. The time-series mean annual measured and predicted SDs also varied little, with intra-annual differences between measured and predicted mean annual SD ranging from -2.1 to 0.1 m. The results demonstrated that this prediction model reliably estimated Secchi disk depths and can be used to significantly expand optical observations in an environment where the conditions for standardized SD deployments are limited. © 2007 Springer Science+Business Media B.V.
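The prediction model described is a linear regression of inverse Secchi depth on depth-averaged beam attenuation, 1/SD = a + b·BA; a sketch of that fit with numpy (function names and the synthetic coefficients in the test are ours):

```python
import numpy as np

def fit_secchi_model(beam_attenuation, secchi_depth):
    """Regress inverse Secchi depth on average beam attenuation:
    1/SD = a + b * BA.  Returns the intercept a and slope b."""
    b, a = np.polyfit(beam_attenuation, 1.0 / np.asarray(secchi_depth), 1)
    return a, b

def predict_secchi(a, b, beam_attenuation):
    """Invert the fitted relation to predict Secchi depth from BA."""
    return 1.0 / (a + b * np.asarray(beam_attenuation))
```

Fitting in inverse-depth space is natural here because attenuation, not depth, is the optically linear quantity.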
NASA Astrophysics Data System (ADS)
Kasiviswanathan, K.; Sudheer, K.
2013-05-01
Artificial neural network (ANN) based hydrologic models have gained considerable attention among water resources engineers and scientists, owing to their potential for accurate prediction of flood flows compared to conceptual or physics-based hydrologic models. The ANN approximates the non-linear functional relationship between complex hydrologic variables in arriving at river flow forecast values. Despite a large number of applications, there is still criticism that ANN point predictions lack reliability, since the uncertainty of the predictions is not quantified, which limits their use in practical applications. A major concern in applying traditional uncertainty analysis techniques in a neural network framework is its parallel computing architecture with large degrees of freedom, which makes the uncertainty assessment a challenging task. Very limited studies have considered assessment of the predictive uncertainty of ANN based hydrologic models. In this study, a novel method is proposed that helps construct the prediction interval of an ANN flood forecasting model during calibration itself. The method is designed to have two stages of optimization during calibration: in stage 1, the ANN model is trained with a genetic algorithm (GA) to obtain the optimal set of weights and biases; in stage 2, the optimal variability of the ANN parameters (obtained in stage 1) is identified so as to create an ensemble of predictions. During the second stage, the optimization is performed with multiple objectives: (i) minimum residual variance for the ensemble mean, (ii) maximum number of measured data points falling within the estimated prediction interval, and (iii) minimum width of the prediction interval. The method is illustrated using a real-world case study of an Indian basin. The method was able to produce an ensemble that has an average prediction interval width of 23.03 m3/s, with 97.17% of the total validation data points (measured) lying within the interval.
The derived prediction interval for a selected hydrograph in the validation data set is presented in Fig 1. It is noted that most of the observed flows lie within the constructed prediction interval, and therefore provides information about the uncertainty of the prediction. One specific advantage of the method is that when ensemble mean value is considered as a forecast, the peak flows are predicted with improved accuracy by this method compared to traditional single point forecasted ANNs. Fig. 1 Prediction Interval for selected hydrograph
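Two of the calibration objectives named above, coverage of the measured points and interval width, are easy to state in code; a minimal sketch (the function name is ours):

```python
import numpy as np

def interval_scores(y_obs, lower, upper):
    """Prediction-interval coverage probability (fraction of observations
    falling inside the interval) and the mean interval width."""
    y, lo, hi = map(np.asarray, (y_obs, lower, upper))
    picp = np.mean((y >= lo) & (y <= hi))
    width = np.mean(hi - lo)
    return picp, width
```

Maximising coverage and minimising width pull in opposite directions, which is why the second calibration stage is posed as a multi-objective optimization.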
Efthimiou, George C; Bartzis, John G; Berbekar, Eva; Hertwig, Denise; Harms, Frank; Leitl, Bernd
2015-06-26
The capability to predict short-term maximum individual exposure is very important for several applications including, for example, deliberate/accidental release of hazardous substances, odour fluctuations or material flammability level exceedance. Recently, the authors proposed a simple approach relating maximum individual exposure to parameters such as the fluctuation intensity and the concentration integral time scale. In the first part of this study (Part I), the methodology was validated against field measurements, which are governed by the natural variability of atmospheric boundary conditions. In Part II of this study, an in-depth validation of the approach is performed using reference data recorded under truly stationary and well-documented flow conditions. For this reason, a boundary-layer wind-tunnel experiment was used. The experimental dataset includes 196 time-resolved concentration measurements which capture the dispersion from a continuous point source within an urban model of semi-idealized complexity. The data analysis allowed the improvement of an important model parameter. The model performed very well in predicting the maximum individual exposure, with 95% of predictions falling within a factor of two of the observations. For large time intervals, an exponential correction term has been introduced in the model based on the experimental observations. The new model is capable of predicting all time intervals, giving a factor-of-two agreement of 100%.
Humidity-corrected Arrhenius equation: The reference condition approach.
Naveršnik, Klemen; Jurečič, Rok
2016-03-16
Accelerated and stress stability data are often used to predict the shelf life of pharmaceuticals. Temperature, combined with humidity, accelerates chemical decomposition, and the Arrhenius equation is used to extrapolate accelerated stability results to long-term stability. Statistical estimation of the humidity-corrected Arrhenius equation is not straightforward due to its non-linearity. A two-stage nonlinear fitting approach is used in practice, followed by a prediction stage. We developed a single-stage statistical procedure, called the reference condition approach, which has better statistical properties (less collinearity, direct estimation of uncertainty, a narrower prediction interval) and is significantly easier to use than the existing approaches. Our statistical model was populated with data from a 35-day stress stability study on a laboratory batch of vitamin tablets and required a mere 30 laboratory assay determinations. The stability prediction agreed well with the actual 24-month long-term stability of the product. The approach has high potential to assist product formulation, specification setting and stability statements. Copyright © 2016 Elsevier B.V. All rights reserved.
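A sketch of the humidity-corrected Arrhenius model written in reference-condition form, ln k = ln k_ref - (Ea/R)(1/T - 1/T_ref) + B(RH - RH_ref), which is linear in the parameters and can be fitted in a single least-squares stage. The paper's actual estimation details differ, and the T_ref/RH_ref defaults and parameter values in the test are our arbitrary assumptions.

```python
import numpy as np

R_GAS = 8.314  # J/(mol K)

def fit_reference_condition(T_K, RH, lnk, T_ref=298.15, RH_ref=0.60):
    """Single-stage least-squares fit of the humidity-corrected Arrhenius
    model in reference-condition form.  Centering the predictors at the
    reference condition reduces collinearity between the intercept and the
    temperature term.  Returns (ln k_ref, Ea, B)."""
    X = np.column_stack([np.ones_like(T_K),
                         1.0 / T_K - 1.0 / T_ref,
                         RH - RH_ref])
    coef, *_ = np.linalg.lstsq(X, lnk, rcond=None)
    lnk_ref, slope, B = coef
    return lnk_ref, -slope * R_GAS, B   # slope = -Ea/R
```

A bonus of this parameterization is that the intercept is directly the log rate constant at the reference (long-term storage) condition, so its standard error translates straight into shelf-life uncertainty.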
Artificial intelligence: a new approach for prescription and monitoring of hemodialysis therapy.
Akl, A I; Sobh, M A; Enab, Y M; Tattersall, J
2001-12-01
The effect of dialysis on patients is conventionally predicted using a formal mathematical model. This approach requires many assumptions about the processes involved, and validation of these may be difficult. The validity of dialysis urea modeling using a formal mathematical model has been challenged. Artificial intelligence using neural networks (NNs) has been used to solve complex problems without needing a mathematical model or an understanding of the mechanisms involved. In this study, we applied an NN model to study and predict concentrations of urea during a hemodialysis session. We measured blood concentrations of urea, patient weight, and total urea removal by direct dialysate quantification (DDQ) at 30-minute intervals during the session (in 15 chronic hemodialysis patients). The NN model was trained to recognize the evolution of measured urea concentrations and was subsequently able to predict the hemodialysis session time needed to reach a target solute removal index (SRI) in patients not previously studied by the NN model (another 15 chronic hemodialysis patients). Comparing results of the NN model with the DDQ model, the prediction error was 10.9%, with no significant difference between predicted total urea nitrogen (UN) removal and measured UN removal by DDQ. NN model predictions of time showed no significant difference from the actual intervals needed to reach the same SRI level under the same patient conditions, except for the prediction of SRI at the first 30-minute interval, which showed a significant difference (P = 0.001). This indicates the sensitivity of the NN model to what is called patient clearance time; the prediction error was 8.3%. From our results, we conclude that artificial intelligence applications in urea kinetics can give an idea of intradialysis profiling according to individual clinical needs.
In theory, this approach can be extended easily to other solutes, making the NN model a step forward to achieving artificial-intelligent dialysis control.
Sato, Takako; Zaitsu, Kei; Tsuboi, Kento; Nomura, Masakatsu; Kusano, Maiko; Shima, Noriaki; Abe, Shuntaro; Ishii, Akira; Tsuchihashi, Hitoshi; Suzuki, Koichi
2015-05-01
Estimation of postmortem interval (PMI) is an important goal in judicial autopsy. Although many approaches can estimate PMI through physical findings and biochemical tests, accurate PMI calculation by these conventional methods remains difficult because PMI is readily affected by surrounding conditions, such as ambient temperature and humidity. In this study, Sprague-Dawley (SD) rats (10 weeks) were sacrificed by suffocation, and blood was collected by dissection at various time intervals (0, 3, 6, 12, 24, and 48 h; n = 6) after death. A total of 70 endogenous metabolites were detected in plasma by gas chromatography-tandem mass spectrometry (GC-MS/MS). Each time group was separated from the others on the principal component analysis (PCA) score plot, suggesting that the various endogenous metabolites changed with time after death. To prepare a prediction model of PMI, a partial least squares (or projection to latent structures, PLS) regression model was constructed using the levels of significantly different metabolites determined by variable importance in the projection (VIP) score and the Kruskal-Wallis test (P < 0.05). Because the constructed PLS regression model could successfully predict each PMI, this model was validated with a separate validation set (n = 3). In conclusion, plasma metabolic profiling demonstrated its ability to successfully estimate PMI under a certain condition. This result can be considered a first step toward using metabolomics methods in future forensic casework.
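PLS regression of this kind (many collinear metabolite predictors, one continuous response) can be sketched with the classic single-response NIPALS algorithm; this is a generic illustration, not the authors' software, and the function name is ours.

```python
import numpy as np

def pls1_fit(X, y, n_comp=2):
    """Minimal NIPALS PLS1 (single response).  Extracts n_comp latent
    components from centered X/y and returns regression coefficients B
    plus the column/response means needed for prediction:
    y_hat = (X_new - xm) @ B + ym."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = X.mean(axis=0), y.mean()
    Xk, yc = X - xm, y - ym
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xk.T @ yc                  # weight: covariance direction with y
        w = w / np.linalg.norm(w)
        t = Xk @ w                     # score
        tt = t @ t
        p = (Xk.T @ t) / tt            # X loading
        q = (yc @ t) / tt              # y loading
        Xk = Xk - np.outer(t, p)       # deflate X
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)
    return B, xm, ym
```

With fewer components than predictors, the model regularizes the collinear metabolite matrix; with the full number of components it reduces to ordinary least squares.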
Neeman, Noga; Spotila, James R; O'Connor, Michael P
2015-09-07
Variation in the yearly number of sea turtles nesting at rookeries can interfere with population estimates and obscure real population dynamics. Previous theoretical models suggested that this variation in nesting numbers may be driven by changes in resources at the foraging grounds. We developed a physiologically based model that uses temperatures at foraging sites to predict foraging conditions, resource accumulation, remigration probabilities, and, ultimately, nesting numbers for a stable population of sea turtles. We used this model to explore several scenarios of temperature variation at the foraging grounds, including one-year perturbations and cyclical temperature oscillations. We found that thermally driven resource variation can indeed synchronize nesting in groups of turtles, creating cohorts, but that these cohorts tend to break down over 5-10 years unless regenerated by environmental conditions. Cohorts broke down faster at lower temperatures. One-year perturbations of low temperature had a synchronizing effect on nesting the following year, while high-temperature perturbations tended to delay nesting in a less synchronized way. Cyclical temperatures led to cyclical responses in both nesting numbers and remigration intervals, with the amplitude and lag of the response depending on the duration of the cycle. Overall, model behavior is consistent with observations at nesting beaches. Future work should focus on refining the model to fit particular nesting populations and testing further whether or not it may be used to predict observed nesting numbers and remigration intervals. Copyright © 2015 Elsevier Ltd. All rights reserved.
Chang, Young-Soo; Hong, Sung Hwa; Kim, Eun Yeon; Choi, Ji Eun; Chung, Won-Ho; Cho, Yang-Sun; Moon, Il Joon
2018-05-18
Despite recent advancements in the prediction of cochlear implant outcomes, the benefit of bilateral procedures compared to bimodal stimulation, and how to predict speech perception outcomes of a sequential bilateral cochlear implant based on bimodal auditory performance in children, remain unclear. This investigation was performed: (1) to determine the benefit of sequential bilateral cochlear implantation and (2) to identify the factors associated with its outcome. Observational and retrospective study. We retrospectively analyzed 29 patients with a sequential cochlear implant following a bimodal-fitting condition. Audiological evaluations comprised the categories of auditory performance scores, speech perception with monosyllabic and disyllabic words, and the Korean version of Ling. Audiological evaluations were performed before the sequential cochlear implant with the bimodal fitting condition (CI1+HA) and one year after the sequential cochlear implant with the bilateral cochlear implant condition (CI1+CI2). The Good Performance group (GP) was defined as follows: 90% or higher in monosyllable and bisyllable tests with auditory-only condition, or 20% or higher improvement of the scores with CI1+CI2. Age at first implantation, inter-implant interval, categories of auditory performance score, and various comorbidities were analyzed by logistic regression analysis. Compared to CI1+HA, CI1+CI2 provided significant benefit in categories of auditory performance, speech perception, and Korean version of Ling results. Preoperative categories of auditory performance scores were the only factor associated with being in the GP (odds ratio=4.38, 95% confidence interval=1.07-17.93, p=0.04).
Children with limited language development in the bimodal condition should be considered for sequential bilateral cochlear implantation, and the preoperative categories of auditory performance score could be used as a predictor of speech perception after sequential cochlear implantation. Copyright © 2018 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
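The reported odds ratio and interval follow the usual Wald construction from a logistic-regression coefficient. The standard error used below is back-calculated from the reported interval (SE ≈ (ln 17.93 − ln 1.07)/(2·1.96) ≈ 0.719) and is our derived assumption, not a value stated in the paper.

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and Wald 95% confidence interval from a logistic-regression
    coefficient beta and its standard error se: exp(beta -/+ z*se)."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))
```

Note that a Wald interval is symmetric on the log-odds scale, which is why the reported interval (1.07-17.93) looks so lopsided around 4.38 on the odds-ratio scale.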
Lifecourse social conditions and racial disparities in incidence of first stroke.
Glymour, M Maria; Avendaño, Mauricio; Haas, Steven; Berkman, Lisa F
2008-12-01
Some previous studies found that excess stroke rates among black subjects persisted after adjustment for socioeconomic status (SES), fueling speculation regarding racially patterned genetic predispositions to stroke. Previous research was hampered by incomplete SES assessments, without measures of childhood conditions or adult wealth. We assess the role of lifecourse SES in explaining stroke risk and stroke disparities. Health and Retirement Study participants aged 50+ (n = 20,661) were followed on average 9.9 years for self- or proxy-reported first stroke (2175 events). Childhood social conditions (southern state of birth, parental SES, self-reported fair/poor childhood health, and attained height), adult SES (education, income, wealth, and occupational status) and traditional cardiovascular risk factors were used to predict first stroke onset using Cox proportional hazards models. Black subjects had a 48% greater risk of first stroke incidence than white subjects (95% confidence interval, 1.33-1.65). Childhood conditions predicted stroke risk in both blacks and whites, independently of adult SES. Adjustment for both childhood social conditions and adult SES measures attenuated racial differences to marginal significance (hazard ratio, 1.13; 95% CI, 1.00-1.28). Childhood social conditions predict stroke risk in black and white American adults. Additional adjustment for adult SES, in particular wealth, nearly eliminated the disparity in stroke risk between black and white subjects.
Crossover transition in the fluctuation of Internet
NASA Astrophysics Data System (ADS)
Qian, Jiang-Hai
2018-06-01
The inconsistent fluctuation behavior of the Internet predicted by preferential attachment (PA) and by Gibrat's law requires empirical investigation of the actual system. By using interval-tunable Gibrat's-law statistics, we find that the actual fluctuation, characterized by the conditional standard deviation of the degree growth rate, changes with the interval length and displays a crossover transition from PA type to Gibrat's-law type, which has not been captured by any previous model. We characterize the transition dynamics quantitatively and determine the applicable range of PA and Gibrat's law. The correlation analysis indicates the crossover transition may be attributed to the accumulative correlation between the internal links.
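The statistic described, the conditional standard deviation of the degree growth rate as a function of initial degree, can be sketched as below. The test data are synthetic; the signatures checked (sigma decreasing roughly as k^-1/2 for PA-type dynamics, and k-independent for Gibrat-type dynamics) are the standard theoretical expectations, and all names are ours.

```python
import numpy as np

def conditional_growth_std(k_t, k_t_dt, bins=10):
    """Conditional standard deviation of the degree growth rate
    g = log(k(t+dt)/k(t)) as a function of initial degree k(t),
    computed in quantile bins of k(t)."""
    g = np.log(k_t_dt / k_t)
    edges = np.quantile(k_t, np.linspace(0.0, 1.0, bins + 1))
    ks, sds = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (k_t >= lo) & (k_t < hi)
        if m.sum() > 1:
            ks.append(k_t[m].mean())
            sds.append(g[m].std())
    return np.array(ks), np.array(sds)
```

Plotting sds against ks on log-log axes distinguishes the two regimes: a slope near -1/2 indicates PA-type fluctuations, a flat curve indicates Gibrat-type fluctuations.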
Illness perception and adherence to healthy behaviour in Jordanian coronary heart disease patients.
Mosleh, Sultan M; Almalik, Mona Ma
2016-06-01
Patients diagnosed with coronary heart disease are strongly recommended to adopt healthier behaviours and adhere to prescribed medication. Previous research on patients with a wide range of health conditions has explored the role of patients' illness perceptions in explaining coping and health outcomes. However, among coronary heart disease patients, this has not been well examined. The purpose of this study was to explore coronary heart disease patients' illness perception beliefs and investigate whether these beliefs could predict adherence to healthy behaviours. A multi-centre cross-sectional study was conducted at four tertiary hospitals in Jordan. A convenience sample of 254 patients (73% response rate), who visited the cardiac clinic for routine review, participated in the study. Participants completed a self-reported questionnaire, which included the Brief Illness Perception Questionnaire, the Godin Leisure Time Activity questionnaire and the Morisky Medication Adherence Scale. Patients reported high levels of disease understanding (coherence) and were convinced that they were able to control their condition by themselves and/or with appropriate treatment. Male patients perceived lower consequences (p<0.05) and had a better understanding of their illness than female patients (p<0.001). There were significant associations between increasing age and timeline (r=0.326, p<0.001), (r=0.146, p<0.024), and coherence (r=-0.166, p<0.010). Adjusted regression analysis showed that exercise adherence was predicted by a strong perception of personal control (β 2.66, 95% confidence interval 1.28-4.04), timeline (β -1.85, 95% confidence interval 0.8-2.88) and illness coherence (β 2.12, 95% confidence interval 0.35-3.90). Medication adherence was predicted by perceptions of personal control and treatment control. Adherence to a low-fat diet regimen was predicted by perception of illness coherence only (odds ratio 12, 95% confidence interval 1.04-1.33).
Finally, the majority of patients thought that the cause of their heart problem was related to coronary heart disease risk factors such as obesity and high-fat meals. Patients' illness beliefs are candidates for a psycho-educational intervention that should be targeted at improved disease management practices and better adherence to recommended healthy behaviours. © The European Society of Cardiology 2014.
Stimulus-level interference disrupts repetition benefit during task switching in middle childhood
Karayanidis, Frini; Jamadar, Sharna; Sanday, Dearne
2013-01-01
The task-switching paradigm provides a powerful tool to measure the development of core cognitive control processes. In this study, we use the alternating runs task-switching paradigm to assess preparatory control processes involved in flexibly preparing for a predictable change in task and stimulus-driven control processes involved in controlling stimulus-level interference. We present three experiments that examine behavioral and event-related potential (ERP) measures of task-switching performance in middle childhood and young adulthood under low and high stimulus interference conditions. Experiment 1 confirms that our new child-friendly tasks produce similar behavioral and electrophysiological findings in young adults as those previously reported. Experiment 2 examines task switching with univalent stimuli across a range of preparation intervals in middle childhood. Experiment 3 compares task switching with bivalent stimuli across the same preparation intervals in children and young adults. Children produced a larger RT switch cost than adults with univalent stimuli and a short preparation interval. Both children and adults showed significant reduction in switch cost with increasing preparation interval, but in children this was caused by greater increase in RT for repeat than switch trials. Response-locked ERPs showed intact preparation for univalent, but less efficient preparation for bivalent stimulus conditions. Stimulus-locked ERPs confirmed that children showed greater stimulus-level interference for repeat trials, especially with bivalent stimuli. We conclude that children show greater stimulus-level interference especially for repeat trials under high interference conditions, suggesting weaker mental representation of the current task set. PMID:24367317
Comparison Between Vortices Created and Evolving During Fixed and Dynamic Solar Wind Conditions
NASA Technical Reports Server (NTRS)
Collado-Vega, Yaireska M.; Kessel, R. L.; Sibeck, David Gary; Kalb, V. L.; Boller, R. A.; Rastaetter, L.
2013-01-01
We employ Magnetohydrodynamic (MHD) simulations to examine the creation and evolution of plasma vortices within the Earth's magnetosphere for steady solar wind plasma conditions. Very few vortices form during intervals of such solar wind conditions. Those that do form remain in fixed positions for long periods (often hours) and exhibit rotation axes that point primarily in the x or y direction, parallel (or antiparallel) to the local magnetospheric magnetic field direction. Occasionally, the orientation of the axes rotates from the x direction to another direction. We compare our results with simulations previously done for unsteady solar wind conditions. By contrast, vortices that form during intervals of varying solar wind conditions exhibit durations ranging from seconds (in the case of those with axes in the x or y direction) to minutes (in the case of those with axes in the z direction) and convect antisunward. The local-time dependent sense of rotation seen in these previously reported vortices suggests an interpretation in terms of the Kelvin-Helmholtz instability. For steady conditions, the largest vortices developed on the dayside (about 6 R(E) in diameter), had their rotation axes aligned with the y direction and had the longest durations. We attribute these vortices to the flows set up by reconnection on the high-latitude magnetopause during intervals of northward Interplanetary Magnetic Field (IMF) orientation. This is the first time that vortices due to high-latitude reconnection have been visualized. The model also successfully predicts the principal characteristics of previously reported plasma vortices within the magnetosphere, namely their dimensions, flow velocities, and durations.
Construction of prediction intervals for Palmer Drought Severity Index using bootstrap
NASA Astrophysics Data System (ADS)
Beyaztas, Ufuk; Bickici Arikan, Bugrayhan; Beyaztas, Beste Hamiye; Kahya, Ercan
2018-04-01
In this study, we propose an approach based on the residual-based bootstrap method to obtain valid prediction intervals using monthly, short-term (three-month) and mid-term (six-month) drought observations. The effects of the North Atlantic and Arctic Oscillation indexes on the constructed prediction intervals are also examined. Performance of the proposed approach is evaluated for the Palmer Drought Severity Index (PDSI) obtained from the Konya closed basin located in Central Anatolia, Turkey. The finite sample properties of the proposed method are further illustrated by an extensive simulation study. Our results revealed that the proposed approach is capable of producing valid prediction intervals for future PDSI values.
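The abstract does not spell out the resampling scheme, so the following is only a minimal sketch of a residual-based bootstrap prediction interval, assuming a simple AR(1) fit to the index series (the authors' model and bootstrap details may differ):

```python
import numpy as np

def residual_bootstrap_pi(y, alpha=0.10, n_boot=2000, seed=0):
    """One-step-ahead prediction interval from an AR(1) fit via the
    residual-based bootstrap (generic illustration, not the authors'
    exact procedure)."""
    rng = np.random.default_rng(seed)
    x, ynext = y[:-1], y[1:]
    # Least-squares AR(1) fit: y_t = a + b * y_{t-1} + e_t
    b, a = np.polyfit(x, ynext, 1)
    resid = ynext - (a + b * x)
    resid = resid - resid.mean()          # centre the residuals
    point = a + b * y[-1]                 # point forecast for y_{T+1}
    # Resample residuals to build the bootstrap forecast distribution
    boot = point + rng.choice(resid, size=n_boot, replace=True)
    lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
    return point, (lo, hi)
```

Validity is then judged by whether the empirical coverage of such intervals matches the nominal 1 − alpha level.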
Camargos, Paulo; Fonseca, Ana Cristina; Amantéa, Sérgio; Oliveira, Elizabeth; Benfica, Maria das Graças; Chamone, Chequer
2017-05-01
The etiological diagnosis of pleural effusion is a difficult task because the diagnostic tools can only establish a definitive etiological diagnosis in at most 76% of cases. To verify the diagnostic accuracy of the latex agglutination test (LAT) for the etiological diagnosis of pleural effusions caused by Streptococcus pneumoniae and Haemophilus influenzae type b. After thoracocentesis, paired fresh samples of pleural fluid from 418 children and adolescents were included in this investigation. They were tested blindly and simultaneously through counterimmunoelectrophoresis (CIE) and LAT for both bacteria. Sensitivity, specificity, predictive values and likelihood ratios (LR) were calculated taking CIE as the reference standard. The sensitivity and specificity of LAT were 100% (95% confidence interval, 94.4%-100%) and 83.3% (95% confidence interval, 79.0%-87.0%), respectively, whereas the positive (calculated from Bayes' theorem) and negative predictive values were, respectively, lower than 1% and 100% (95% confidence interval, 98.8%-100%). Positive and negative LR were 6.0 (95% confidence interval, 4.7-7.6) and zero, respectively. Our results suggest that LAT is a useful tool for the etiological diagnosis of pleural effusion. It is reliable, rapid, and simple to perform, and showed an excellent yield in the studied population, helping to prescribe appropriate antibiotics for this clinical condition. © 2015 John Wiley & Sons Ltd.
A field technique for estimating aquifer parameters using flow log data
Paillet, Frederick L.
2000-01-01
A numerical model is used to predict flow along intervals between producing zones in open boreholes for comparison with measurements of borehole flow. The model gives flow under quasi-steady conditions as a function of the transmissivity and hydraulic head in an arbitrary number of zones communicating with each other along open boreholes. The theory shows that the amount of inflow to or outflow from the borehole under any one flow condition may not indicate relative zone transmissivity. A unique inversion for both hydraulic-head and transmissivity values is possible if flow is measured under two different conditions such as ambient and quasi-steady pumping, and if the difference in open-borehole water level between the two flow conditions is measured. The technique is shown to give useful estimates of water levels and transmissivities of two or more water-producing zones intersecting a single interval of open borehole under typical field conditions. Although the modeling technique involves some approximation, the principal limit on the accuracy of the method under field conditions is the measurement error in the flow log data. Flow measurements and pumping conditions are usually adjusted so that transmissivity estimates are most accurate for the most transmissive zones, and relative measurement error is proportionately larger for less transmissive zones. The most effective general application of the borehole-flow model results when the data are fit to models that systematically include more production zones of progressively smaller transmissivity values until model results show that all accuracy in the data set is exhausted.
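For a single zone, the two-condition inversion described above reduces to two linear equations in the zone transmissivity T and far-field head h. A schematic sketch, assuming a quasi-steady linear inflow law q = alpha * T * (h - h_w), where `alpha` stands in for the geometric factors the paper's numerical model handles explicitly:

```python
def zone_properties(q_ambient, q_pumped, hw_ambient, hw_pumped, alpha=1.0):
    """Invert two borehole-flow measurements for one zone's relative
    transmissivity T and far-field head h, assuming the linear inflow
    model q = alpha * T * (h - h_w). Schematic only; `alpha` bundles
    geometry terms treated numerically in the paper."""
    # Subtracting the two conditions eliminates the unknown head h:
    # q_pumped - q_ambient = alpha * T * (hw_ambient - hw_pumped)
    T = (q_pumped - q_ambient) / (alpha * (hw_ambient - hw_pumped))
    # Back-substitute to recover the far-field head of the zone
    h = hw_ambient + q_ambient / (alpha * T)
    return T, h
```

This makes concrete why a single flow condition is insufficient: with one measurement, any pair (T, h) along the line q = alpha * T * (h - h_w) fits the data.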
Prediction Interval Development for Wind-Tunnel Balance Check-Loading
NASA Technical Reports Server (NTRS)
Landman, Drew; Toro, Kenneth G.; Commo, Sean A.; Lynn, Keith C.
2014-01-01
Results from the Facility Analysis Verification and Operational Reliability project revealed a critical gap in capability in ground-based aeronautics research applications. Without a standardized process for check-loading the wind-tunnel balance or the model system, the quality of the aerodynamic force data collected varied significantly between facilities. A prediction interval is required in order to confirm a check-loading. The prediction interval provides an expected upper and lower bound on balance load prediction at a given confidence level. A method has been developed which accounts for sources of variability due to calibration and check-load application. The prediction interval calculation method and a case study demonstrating its use are provided. Validation of the method is demonstrated for the case study based on the probability of capture of confirmation points.
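The abstract does not give the report's exact variance bookkeeping; a textbook ordinary-least-squares prediction interval illustrates the idea, here using a large-sample normal quantile in place of Student's t (an assumption for simplicity, adequate for large calibration sets):

```python
import numpy as np
from statistics import NormalDist

def prediction_interval(X, y, x0, alpha=0.05):
    """Approximate (1 - alpha) prediction interval for a new response at x0
    from an OLS calibration fit. Textbook sketch, not the NASA report's
    exact procedure."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    X = np.column_stack([np.ones(n), np.atleast_2d(X).reshape(n, -1)])
    x0 = np.concatenate([[1.0], np.atleast_1d(x0).astype(float)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = n - X.shape[1]
    s2 = float(resid @ resid) / dof                      # residual variance
    # Variance of a *new* observation: calibration + observation terms
    se = np.sqrt(s2 * (1.0 + x0 @ np.linalg.inv(X.T @ X) @ x0))
    z = NormalDist().inv_cdf(1 - alpha / 2)              # normal approx to t
    yhat = float(x0 @ beta)
    return yhat - z * se, yhat + z * se
```

A check-load is then "confirmed" if the measured confirmation point falls inside the interval, and the probability of capture over many confirmation points validates the interval's stated confidence level.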
NASA Astrophysics Data System (ADS)
van Lien, René; Schutte, Nienke M.; Meijer, Jan H.; de Geus, Eco J. C.
2013-04-01
The validity of estimating the pre-ejection period (PEP) from a fixed value for the Q-wave onset to R-wave peak (QR) interval and from the R-wave peak to dZ/dt-min peak (ISTI) interval is evaluated. Ninety-one subjects participated in a laboratory experiment in which a variety of physical and mental stressors were presented, and 31 further subjects participated in a sequence of structured ambulatory activities in which large variation in posture and physical activity was induced. PEP, QR interval, and ISTI were scored. Across the diverse laboratory and ambulatory conditions the QR interval could be approximated by a fixed interval of 40 ms, but 95% confidence intervals were large (25 to 54 ms). Multilevel analysis showed that 79% to 81% of the within- and between-subject variation in the RB interval could be predicted by the ISTI. However, the optimal intercept and slope values varied significantly across subjects and study settings. Bland-Altman plots revealed a large discrepancy between the estimated PEP and the actual PEP based on the Q-wave onset and B-point. It is concluded that the estimated PEP can be a useful tool but cannot replace the actual PEP to index cardiac sympathetic control.
Early Identification of Patients at Risk of Acute Lung Injury
Gajic, Ognjen; Dabbagh, Ousama; Park, Pauline K.; Adesanya, Adebola; Chang, Steven Y.; Hou, Peter; Anderson, Harry; Hoth, J. Jason; Mikkelsen, Mark E.; Gentile, Nina T.; Gong, Michelle N.; Talmor, Daniel; Bajwa, Ednan; Watkins, Timothy R.; Festic, Emir; Yilmaz, Murat; Iscimen, Remzi; Kaufman, David A.; Esper, Annette M.; Sadikot, Ruxana; Douglas, Ivor; Sevransky, Jonathan
2011-01-01
Rationale: Accurate, early identification of patients at risk for developing acute lung injury (ALI) provides the opportunity to test and implement secondary prevention strategies. Objectives: To determine the frequency and outcome of ALI development in patients at risk and validate a lung injury prediction score (LIPS). Methods: In this prospective multicenter observational cohort study, predisposing conditions and risk modifiers predictive of ALI development were identified from routine clinical data available during initial evaluation. The discrimination of the model was assessed with the area under the receiver operating characteristic curve (AUC). The risk of death from ALI was determined after adjustment for severity of illness and predisposing conditions. Measurements and Main Results: Twenty-two hospitals enrolled 5,584 patients at risk. ALI developed a median of 2 (interquartile range 1–4) days after initial evaluation in 377 (6.8%; 148 ALI-only, 229 adult respiratory distress syndrome) patients. The frequency of ALI varied according to predisposing conditions (from 3% in pancreatitis to 26% after smoke inhalation). LIPS discriminated patients who developed ALI from those who did not with an AUC of 0.80 (95% confidence interval, 0.78–0.82). When adjusted for severity of illness and predisposing conditions, development of ALI increased the risk of in-hospital death (odds ratio, 4.1; 95% confidence interval, 2.9–5.7). Conclusions: ALI occurrence varies according to predisposing conditions and carries an independently poor prognosis. Using routinely available clinical data, LIPS identifies patients at high risk for ALI early in the course of their illness. This model will alert clinicians about the risk of ALI and facilitate testing and implementation of ALI prevention strategies. Clinical trial registered with www.clinicaltrials.gov (NCT00889772). PMID:20802164
Matsui, T; Arai, I; Gotoh, S; Hattori, H; Takase, B; Kikuchi, M; Ishihara, M
2005-10-01
The impaired balance of the low-frequency/high-frequency ratio obtained from spectral components of RR intervals can be a diagnostic test for sepsis. In addition, it is known that a reduction of heart rate variability (HRV) is useful in identifying septic patients at risk of the development of multiple organ dysfunction syndrome (MODS). We have reported a non-contact method using a microwave radar to monitor the heart and respiratory rates of a healthy person placed inside an isolator or of experimental animals exposed to toxic materials. With the purpose of preventing secondary exposure of medical personnel to toxic materials under biochemical hazard conditions, we designed a novel apparatus for non-contact measurement of HRV using a 1215 MHz microwave radar, a high-pass filter, and a personal computer. The microwave radar monitors only the small reflected waves from the subject's chest wall, which are modulated by the cardiac and respiratory motion. The high-pass filter enhances the cardiac signal and attenuates the respiratory signal. In a human trial, RR intervals derived from the non-contact apparatus significantly correlated with those derived from ECG (r=0.98, P<0.0001). The non-contact apparatus showed a similar power spectrum of RR intervals to that of ECG. Our non-contact HRV measurement apparatus appears promising for future pre-hospital monitoring of septic patients or for predicting MODS patients, inside isolators or in the field for mass casualties under biochemical hazard circumstances.
Kray, Jutta
2006-08-11
Adult age differences in task switching and advance preparation were examined by comparing cue-based and memory-based switching conditions. Task switching was assessed by determining two types of costs that occur at the general (mixing costs) and specific (switching costs) level of switching. Advance preparation was investigated by varying the time interval until the next task (short, middle, very long). Results indicated that the implementation of task sets was different for cue-based switching with random task sequences and memory-based switching with predictable task sequences. Switching costs were strongly reduced under cue-based switching conditions, indicating that task-set cues facilitate the retrieval of the next task. Age differences were found for mixing costs and for switching costs only under cue-based conditions in which older adults showed smaller switching costs than younger adults. It is suggested that older adults adopt a less extreme bias between two tasks than younger adults in situations associated with uncertainty. For cue-based switching with random task sequences, older adults are less engaged in a complete reconfiguration of task sets because of the probability of a further task change. Furthermore, the reduction of switching costs was more pronounced for cue- than memory-based switching for short preparation intervals, whereas the reduction of switch costs was more pronounced for memory- than cue-based switching for longer preparation intervals at least for older adults. Together these findings suggest that the implementation of task sets is functionally different for the two types of task-switching conditions.
Using conditional probability to identify trends in intra-day high-frequency equity pricing
NASA Astrophysics Data System (ADS)
Rechenthin, Michael; Street, W. Nick
2013-12-01
By examining the conditional probabilities of price movements in a popular US stock over different high-frequency intra-day timespans, varying levels of trend predictability are identified. This study demonstrates the existence of predictable short-term trends in the market; understanding the probability of price movement can be useful to high-frequency traders. Price movement was examined in trade-by-trade (tick) data along with temporal timespans between 1 s and 30 min for 52 one-week periods for one highly traded stock. We hypothesize that much of the initial predictability of trade-by-trade (tick) data is due to traditional market dynamics, or the bouncing of the price between the stock's bid and ask. Only after timespans of between 5 and 10 s does this cease to explain the predictability; after this timespan, two consecutive movements in the same direction occur with higher probability than movements in the opposite direction. This pattern holds up to a one-minute interval, after which the strength of the pattern weakens.
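The conditional-probability tabulation described above can be sketched directly from a price series; `directional_cond_prob` is an illustrative name, and the authors' treatment of zero-change ticks may differ:

```python
import numpy as np

def directional_cond_prob(prices):
    """Estimate P(next move up | last move up) and P(next move up |
    last move down) from a price series, dropping zero-change ticks.
    Minimal sketch of the conditional-probability tabulation."""
    moves = np.sign(np.diff(np.asarray(prices, dtype=float)))
    moves = moves[moves != 0]             # ignore unchanged prices
    prev, nxt = moves[:-1], moves[1:]
    p_up_given_up = np.mean(nxt[prev == 1] == 1)
    p_up_given_down = np.mean(nxt[prev == -1] == 1)
    return p_up_given_up, p_up_given_down
```

Bid-ask bounce shows up in such a table as P(up | down) well above P(up | up) at the shortest timespans; the trend effect reported above is the reverse pattern at 5 s and beyond.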
NASA Astrophysics Data System (ADS)
Bogachev, Mikhail I.; Kireenkov, Igor S.; Nifontov, Eugene M.; Bunde, Armin
2009-06-01
We study the statistics of return intervals between large heartbeat intervals (above a certain threshold Q) in 24 h records obtained from healthy subjects. We find that both the linear and the nonlinear long-term memory inherent in the heartbeat intervals lead to power-laws in the probability density function PQ(r) of the return intervals. As a consequence, the probability WQ(t; Δt) that at least one large heartbeat interval will occur within the next Δt heartbeat intervals, with an increasing elapsed number of intervals t after the last large heartbeat interval, follows a power-law. Based on these results, we suggest a method of obtaining a priori information about the occurrence of the next large heartbeat interval, and thus to predict it. We show explicitly that the proposed method, which exploits long-term memory, is superior to the conventional precursory pattern recognition technique, which focuses solely on short-term memory. We believe that our results can be straightforwardly extended to obtain more reliable predictions in other physiological signals like blood pressure, as well as in other complex records exhibiting multifractal behaviour, e.g. turbulent flow, precipitation, river flows and network traffic.
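The basic quantities above, return intervals between exceedances of the threshold Q and the empirical probability W_Q(t; Δt), can be computed as follows (a sketch of the definitions, not the authors' code; interval units are numbers of heartbeats):

```python
import numpy as np

def return_intervals(series, q):
    """Return intervals r: gaps (in samples) between successive
    exceedances of threshold q, i.e. events with series > q."""
    idx = np.flatnonzero(np.asarray(series) > q)
    return np.diff(idx)

def exceedance_probability(r, t, dt):
    """Empirical W_Q(t; dt): among return intervals that have already
    lasted longer than t, the fraction ending within the next dt."""
    r = np.asarray(r)
    surviving = r[r > t]
    return float(np.mean(surviving <= t + dt)) if surviving.size else float("nan")
```

The power-law form of the return-interval density reported above implies that W_Q(t; Δt) decays with elapsed time t, which is what makes the elapsed time informative for prediction.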
Timescale dependent deformation of orogenic belts?
NASA Astrophysics Data System (ADS)
Hoth, S.; Friedrich, A. M.; Vietor, T.; Hoffmann-Rothe, A.; Kukowski, N.; Oncken, O.
2004-12-01
The principal aim of linking geodetic, paleoseismologic and geologic estimates of fault slip is to extrapolate the respective rates from one timescale to another and finally predict the recurrence interval of large earthquakes, which threaten human habitats. This approach, however, is based on two often implicitly made assumptions: a uniform slip distribution through time and space, and no changes of the boundary conditions during the time interval of interest. Both assumptions are often hard to verify. A recent study, which analysed an exceptionally complete record of seismic slip for the Wasatch and related faults (Basin and Range province) ranging from 10 yr to 10 Myr, suggests that such a link between geodetic and geologic rates might not exist, i.e., that our records of fault displacement may depend on the timescale over which they were measured. This view derives support from results of scaled 2D sandbox experiments, as well as numerical simulations with distinct elements, both of which investigated the effect of boundary conditions such as flexure, mechanical stratigraphy and erosion on the spatio-temporal distribution of deformation within bivergent wedges. We identified three types of processes based on their distinct spatio-temporal distribution of deformation. First, incremental strain and local strain rates are very short-lived, are broadly distributed within the bivergent wedge, and show no temporal pattern. Second, footwall shortcuts and the re-activation of either internal thrusts or of the retro shear-zone are irregularly distributed in time and are thus not predictable either, but last for a longer time interval. Third, the stepwise initiation and propagation of the deformation front is very regular in time, since it depends on the thickness of the incoming layer and on its internal and basal material properties. We consider the propagation of the deformation front as an internal clock of a thrust belt, which is therefore predictable.
A deformation front advance cycle requires the longest timescale. Thus, despite known and constant boundary conditions during the simulations, we found only one regular temporal pattern of deformation in a steady active bivergent-wedge. We therefore propose that the structural inventory of an orogenic belt is hierarchically ordered with respect to accumulated slip, in analogy to the discharge pattern in a drainage network. The deformation front would have the highest, a branching splay the lowest order. Since kinematic boundary conditions control deformation front advance, its timing and the related maximum magnitude of finite strain, i.e. throw on the frontal thrust are predictable. However, the number of controlling factors, such as the degree of strain softening, the orientation of faults or fluid flow and resulting cementation of faults, responsible for the reactivation of faults increases with increasing distance from the deformation front. Since it is rarely possible to determine the complete network of forces within a wedge, the reactivation of lower order structures is not predictable in time and space. Two implications for field studies may emerge: A change of the propagation of deformation can only be determined, if at least two accretion cycles are sampled. The link between geodetic, paleoseismologic and geologic fault slip estimates can only be successfully derived if the position of the investigated fault within the hierarchical order has not changed over the time interval of interest.
Lifetime Estimation of the Upper Stage of GSAT-14 in Geostationary Transfer Orbit
Jeyakodi David, Jim Fletcher; Sharma, Ram Krishan
2014-01-01
The combination of atmospheric drag and lunar and solar perturbations in addition to Earth's oblateness influences the orbital lifetime of an upper stage in geostationary transfer orbit (GTO). These highly eccentric orbits undergo fluctuations in both perturbations and velocity and are very sensitive to the initial conditions. The main objective of this paper is to predict the reentry time of the upper stage of the Indian geosynchronous satellite launch vehicle, GSLV-D5, which inserted the satellite GSAT-14 into a GTO on January 05, 2014, with mean perigee and apogee altitudes of 170 km and 35975 km. Four intervals of near linear variation of the mean apogee altitude observed were used in predicting the orbital lifetime. For these four intervals, optimal values of the initial osculating eccentricity and ballistic coefficient for matching the mean apogee altitudes were estimated with the response surface methodology using a genetic algorithm. It was found that the orbital lifetime from these four time spans was between 144 and 148 days. PMID:27437491
A test of the reward-contrast hypothesis.
Dalecki, Stefan J; Panoz-Brown, Danielle E; Crystal, Jonathon D
2017-12-01
Source memory, a facet of episodic memory, is the memory of the origin of information. Whereas source memory in rats is sustained for at least a week, spatial memory degrades after approximately a day. Different forgetting functions may suggest that the two memory systems (source memory and spatial memory) are dissociated. However, in previous work, the two tasks used baiting conditions consisting of chocolate and chow flavors; notably, the source memory task used the relatively better flavor. Thus, according to the reward-contrast hypothesis, when chocolate and chow were presented within the same context (i.e., within a single radial maze trial), the chocolate location was more memorable than the chow location because of contrast. We tested the reward-contrast hypothesis using baiting configurations designed to produce reward contrast. The reward-contrast hypothesis predicts that under these conditions, spatial memory will survive a 24-h retention interval. We documented elimination of spatial memory performance after a 24-h retention interval using a reward-contrast baiting pattern. These data suggest that reward contrast does not explain our earlier findings that source memory survives unusually long retention intervals. Copyright © 2017 Elsevier B.V. All rights reserved.
Agustina López, M; Jimena Santos, M; Cortasa, Santiago; Fernández, Rodrigo S; Carbó Tano, Martin; Pedreira, María E
2016-12-01
The reconsolidation process is the mechanism by which the strength and/or content of consolidated memories are updated. Prediction error (PE) is the difference between the prediction made and current events, and it is proposed as a necessary condition for triggering the reconsolidation process. Here we analyzed in depth the role of PE in associative memory reconsolidation in the crab Neohelice granulata. An incongruence with the learned temporal relationship between the conditioned and unconditioned stimuli (CS-US) was enough to trigger the reconsolidation process. Moreover, after partially reinforced training, a PE of 50% opened the possibility of labilizing the consolidated memory with a reminder that either included or omitted the US. Further, during extinction training a small PE in the first interval between CSs was enough to trigger reconsolidation. Overall, we highlight the relation between training history and different reactivation conditions in recruiting the process responsible for memory updating. Copyright © 2016 Elsevier Inc. All rights reserved.
CIACCIO, EDWARD J.; BIVIANO, ANGELO B.; GAMBHIR, ALOK; EINSTEIN, ANDREW J.; GARAN, HASAN
2014-01-01
Background When atrial fibrillation (AF) is incessant, imaging during a prolonged ventricular RR interval may improve image quality. It was hypothesized that long RR intervals could be predicted from preceding RR values. Methods From the PhysioNet database, electrocardiogram RR intervals were obtained from 74 persistent AF patients. An RR interval lengthened by at least 250 ms beyond the immediately preceding RR interval (termed T0 and T1, respectively) was considered prolonged. A two-parameter scatterplot was used to predict the occurrence of a prolonged interval T0. The scatterplot parameters were: (1) RR variability (RRv) estimated as the average second derivative from 10 previous pairs of RR differences, T13–T2, and (2) Tm–T1, the difference between Tm, the mean from T13 to T2, and T1. For each patient, scatterplots were constructed using preliminary data from the first hour. The ranges of parameters 1 and 2 were adjusted to maximize the proportion of prolonged RR intervals within range. These constraints were used for prediction of prolonged RR in test data collected during the second hour. Results The mean prolonged event was 1.0 seconds in duration. Actual prolonged events were identified with a mean positive predictive value (PPV) of 80% in the test set. PPV was >80% in 36 of 74 patients. An average of 10.8 prolonged RR intervals per 60 minutes was correctly identified. Conclusions A method was developed to predict prolonged RR intervals using two parameters and prior statistical sampling for each patient. This or similar methodology may help improve cardiac imaging in many longstanding persistent AF patients. PMID:23998759
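One plausible reading of the two scatterplot parameters defined above can be sketched as follows. The function names, the use of absolute second differences for RRv, and the thresholding API are assumptions; the paper may define the quantities slightly differently:

```python
import numpy as np

def scatter_params(rr):
    """Two screening parameters from the 13 most recent RR intervals,
    ordered oldest -> newest so rr[-1] is T1 and rr[0] is T13.
    One plausible reading of the abstract's definitions."""
    rr = np.asarray(rr, dtype=float)[-13:]
    t13_to_t2, t1 = rr[:-1], rr[-1]
    # (1) RRv: average |second derivative| over the 10 second differences
    # obtainable from the 12 values T13..T2
    rrv = np.mean(np.abs(np.diff(t13_to_t2, n=2)))
    # (2) Tm - T1: mean of T13..T2 minus the latest interval T1
    tm = t13_to_t2.mean()
    return rrv, tm - t1

def predict_prolonged(rr, rrv_range, dm_range):
    """Flag an imminent prolonged RR interval when both parameters fall
    inside per-patient ranges tuned on the first hour of data
    (hypothetical interface)."""
    rrv, dm = scatter_params(rr)
    return (rrv_range[0] <= rrv <= rrv_range[1]
            and dm_range[0] <= dm <= dm_range[1])
```

In the study's protocol, the two ranges are tuned per patient on the first hour of recording to maximize the in-range proportion of prolonged events, then applied unchanged to the second hour.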
Modeling stream fish distributions using interval-censored detection times.
Ferreira, Mário; Filipe, Ana Filipa; Bardos, David C; Magalhães, Maria Filomena; Beja, Pedro
2016-08-01
Controlling for imperfect detection is important for developing species distribution models (SDMs). Occupancy-detection models based on the time needed to detect a species can be used to address this problem, but this is hindered when times to detection are not known precisely. Here, we extend the time-to-detection model to deal with detections recorded in time intervals and illustrate the method using a case study on stream fish distribution modeling. We collected electrofishing samples of six fish species across a Mediterranean watershed in Northeast Portugal. Based on a Bayesian hierarchical framework, we modeled the probability of water presence in stream channels, and the probability of species occupancy conditional on water presence, in relation to environmental and spatial variables. We also modeled time-to-first detection conditional on occupancy in relation to local factors, using modified interval-censored exponential survival models. Posterior distributions of occupancy probabilities derived from the models were used to produce species distribution maps. Simulations indicated that the modified time-to-detection model provided unbiased parameter estimates despite interval-censoring. There was a tendency for spatial variation in detection rates to be primarily influenced by depth and, to a lesser extent, stream width. Species occupancies were consistently affected by stream order, elevation, and annual precipitation. Bayesian P-values and AUCs indicated that all models had adequate fit and high discrimination ability, respectively. Mapping of predicted occupancy probabilities showed widespread distribution by most species, but uncertainty was generally higher in tributaries and upper reaches. The interval-censored time-to-detection model provides a practical solution to model occupancy-detection when detections are recorded in time intervals. 
This modeling framework is useful for developing SDMs while controlling for variation in detection rates, as it uses simple data that can be readily collected by field ecologists.
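The interval-censored exponential survival model at the heart of this approach replaces the exact detection-time likelihood with the probability that detection falls in the recorded interval. A minimal sketch of that likelihood, not the authors' full Bayesian hierarchical implementation; the function name and the one-minute bins are illustrative:

```python
import math

def interval_censored_exp_loglik(lam, intervals):
    """Log-likelihood for an exponential time-to-detection T observed only
    in intervals: a detection in (a, b] contributes
    P(a < T <= b) = exp(-lam*a) - exp(-lam*b).
    A pair (a, None) means no detection before the survey ended
    (right-censored), contributing P(T > a) = exp(-lam*a)."""
    ll = 0.0
    for a, b in intervals:
        p = math.exp(-lam * a) - (math.exp(-lam * b) if b is not None else 0.0)
        ll += math.log(p)
    return ll

# Hypothetical example: detections recorded in one-minute bins
obs = [(0.0, 1.0), (1.0, 2.0), (0.0, 1.0)]
print(interval_censored_exp_loglik(0.5, obs))
```

Maximizing this log-likelihood over `lam` (or placing a prior on it, as in the paper) recovers the detection rate despite the interval-censoring.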
The "where is it?" reflex: autoshaping the orienting response.
Buzsáki, G
1982-05-01
The goal of this review is to compare two divergent lines of research on signal-centered behavior: the orienting reflex (OR) and autoshaping. A review of conditioning experiments in animals and humans suggests that the novelty hypothesis of the OR is no longer tenable. Only stimuli that represent biological "relevance" elicit ORs. A stimulus may be relevant a priori (i.e., unconditioned) or as a result of conditioning. Exposure to a conditioned stimulus (CS) that predicts a positive reinforcer causes the animal to orient to it throughout conditioning. Within the CS-US interval, the initial CS-directed orienting response is followed by US-directed tendencies. Experimental evidence shows that the development and maintenance of the conditioned OR occur in a similar fashion in both response-independent (classical) and response-dependent (instrumental) paradigms. It is proposed that the conditioned OR and the signal-directed autoshaped response are identical. Signals predicting aversive events repel the subject from the source of the CS. It is suggested that the function of the CS is not only to signal the probability of US occurrence, but also to serve as a spatial cue to guide the animal in the environment.
An actual load forecasting methodology by interval grey modeling based on the fractional calculus.
Yang, Yang; Xue, Dingyü
2017-07-17
The operation processes of a thermal power plant are measured as real-time data, and a large amount of historical interval data can be obtained from the dataset. Within defined periods of time, this interval information can support decision making and equipment maintenance. Actual load is one of the most important parameters, and the trends hidden in the historical data reflect the overall operating status of the equipment. However, with interval grey numbers, the modeling and prediction process is more complicated than with real numbers. In order not to lose any information, this paper uses the geometric coordinate features given by the coordinates of the area and middle-point lines, which are proved to carry the same information as the original interval data. A grey prediction model for interval grey numbers based on fractional-order accumulation calculus is proposed. Compared with the integer-order model, the proposed method has more degrees of freedom and better modeling and prediction performance, and can be widely used for modeling and prediction with small samples of historical interval industrial sequences. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
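The fractional-order accumulation underlying such grey models generalizes the cumulative sum used by the classical integer-order GM(1,1) model via a fractional binomial coefficient. A minimal sketch of the r-order accumulated generating operation (the function name is illustrative; the paper's interval-grey-number model builds further steps on top of this):

```python
from math import gamma

def frac_accumulate(x, r):
    """r-order fractional accumulated generating operation (r-AGO):
    x_r[k] = sum_i C(k-i+r-1, k-i) * x[i].
    With r = 1 this reduces to the ordinary cumulative sum of GM(1,1);
    fractional r gives the extra degree of freedom the abstract mentions."""
    def coef(n, k):
        # generalised binomial coefficient C(n, k) via gamma functions
        return gamma(n + 1) / (gamma(k + 1) * gamma(n - k + 1))
    return [sum(coef(k - i + r - 1, k - i) * x[i] for i in range(k + 1))
            for k in range(len(x))]

print(frac_accumulate([1.0, 2.0, 3.0], 1.0))  # -> [1.0, 3.0, 6.0]
print(frac_accumulate([1.0, 2.0, 3.0], 0.5))  # fractional accumulation
```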
Changes in Pilot Behavior with Predictive System Status Information
NASA Technical Reports Server (NTRS)
Trujillo, Anna C.
1998-01-01
Research has shown a strong pilot preference for predictive information about aircraft system status in the flight deck. However, changes in pilot behavior associated with using this predictive information have not been ascertained. The study described here quantified these changes using three types of predictive information (none, whether a parameter was changing abnormally, and the time for a parameter to reach an alert range) and three initial time intervals until a parameter alert range was reached (ITIs): 1 minute, 5 minutes, and 15 minutes. With predictive information, subjects accomplished most of their tasks before an alert occurred. Subjects organized the time at which they did their tasks by locus-of-control with no predictive information and for the 1-minute ITI, and by aviate-navigate-communicate for the time-to-alert-range information and the 15-minute ITI. Overall, predictive information and the longer ITIs moved subjects toward performing tasks before the alert actually occurred and made them more mission oriented, as indicated by their aviate-navigate-communicate task grouping.
NASA Astrophysics Data System (ADS)
Sun, Chao; Zhang, Chunran; Gu, Xinfeng; Liu, Bin
2017-10-01
Constraints on the optimization objective often cannot be met when predictive control is applied to an industrial production process; the online predictive controller then fails to find a feasible, or globally optimal, solution. To solve this problem, based on a combined Back Propagation-Auto Regressive with exogenous inputs (BP-ARX) control model, the nonlinear programming method is used to analyze the feasibility of constrained predictive control, a feasibility decision theorem for the optimization objective is proposed, and a solution method for the soft-constraint slack variables is given for the case where the optimization objective is infeasible. On this basis, for interval control requirements on the controlled variables, the solved slack variables are introduced and an adaptive weighted interval predictive control algorithm is proposed, achieving adaptive regulation of the optimization objective and automatic adjustment of the infeasible interval range, expanding the feasible region, and ensuring the feasibility of the interval optimization objective. Finally, the feasibility and effectiveness of the algorithm are validated through comparative simulation experiments.
Time-Based Loss in Visual Short-Term Memory is from Trace Decay, not Temporal Distinctiveness
Ricker, Timothy J.; Spiegel, Lauren R.; Cowan, Nelson
2014-01-01
There is no consensus as to why forgetting occurs in short-term memory tasks. In past work, we have shown that forgetting occurs with the passage of time, but there are two classes of theories that can explain this effect. In the present work, we investigate the reason for time-based forgetting by contrasting the predictions of temporal distinctiveness and trace decay in the procedure in which we have observed such loss, involving memory for arrays of characters or letters across several seconds. The first theory, temporal distinctiveness, predicts that increasing the amount of time between trials will lead to less proactive interference, resulting in less forgetting across a retention interval. In the second theory, trace decay, temporal distinctiveness between trials is irrelevant to the loss over a retention interval. Using visual array change detection tasks in four experiments, we find small proactive interference effects on performance under some specific conditions, but no concomitant change in the effect of a retention interval. We conclude that trace decay is the more suitable class of explanations of the time-based forgetting in short-term memory that we have observed, and we suggest the need for further clarity in what the exact basis of that decay may be. PMID:24884646
Predicting Maps of Green Growth in Košice
NASA Astrophysics Data System (ADS)
Poorova, Zuzana; Vranayova, Zuzana
2017-10-01
The paper deals with changing the traditional roofs in the city of Košice into green roofs. Possible areas of city housing estates, after taking into account the conditions of each of them (types of buildings, statics of buildings), are listed in the paper. The research presents prediction maps of Košice city from 2017 to 2042 at 5-year intervals. The paper is a segment of a dissertation work focusing on changing traditional roofs into green roofs with the aim of retaining water, calculating the amount of retained water, and showing possibilities for using this water.
Structural reliability analysis under evidence theory using the active learning kriging model
NASA Astrophysics Data System (ADS)
Yang, Xufeng; Liu, Yongshou; Ma, Panke
2017-11-01
Structural reliability analysis under evidence theory is investigated. It is rigorously proved that a surrogate model providing only correct sign prediction of the performance function can meet the accuracy requirement of evidence-theory-based reliability analysis. Accordingly, a method based on the active learning kriging model which only correctly predicts the sign of the performance function is proposed. Interval Monte Carlo simulation and a modified optimization method based on Karush-Kuhn-Tucker conditions are introduced to make the method more efficient in estimating the bounds of failure probability based on the kriging model. Four examples are investigated to demonstrate the efficiency and accuracy of the proposed method.
A Comparison of Metamodeling Techniques via Numerical Experiments
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2016-01-01
This paper presents a comparative analysis of a few metamodeling techniques using numerical experiments for the single input-single output case. These experiments enable comparing the models' predictions with the phenomenon they are aiming to describe as more data is made available. These techniques include (i) prediction intervals associated with a least squares parameter estimate, (ii) Bayesian credible intervals, (iii) Gaussian process models, and (iv) interval predictor models. Aspects being compared are computational complexity, accuracy (i.e., the degree to which the resulting prediction conforms to the actual Data Generating Mechanism), reliability (i.e., the probability that new observations will fall inside the predicted interval), sensitivity to outliers, extrapolation properties, ease of use, and asymptotic behavior. The numerical experiments describe typical application scenarios that challenge the underlying assumptions supporting most metamodeling techniques.
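Technique (i) above, the prediction interval attached to a least squares fit, can be sketched for the single input-single output case under the usual normal-error assumptions. The t critical value must be supplied externally (e.g. from a table or `scipy.stats.t.ppf`); function and variable names are illustrative:

```python
import math

def ls_prediction_interval(x, y, x0, t_crit):
    """Prediction interval for a new observation at x0 under simple
    linear regression with i.i.d. normal errors:
        yhat0 +/- t_crit * s * sqrt(1 + 1/n + (x0 - xbar)^2 / Sxx),
    where t_crit is the two-sided t quantile with n - 2 degrees of
    freedom (e.g. 2.306 for n = 10 at 95%)."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    b0 = ybar - b1 * xbar
    s2 = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y)) / (n - 2)
    half = t_crit * math.sqrt(s2 * (1 + 1 / n + (x0 - xbar) ** 2 / sxx))
    yhat0 = b0 + b1 * x0
    return yhat0 - half, yhat0 + half

# Hypothetical example: noisy linear data, t(0.975, df = 3) = 3.182
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
print(ls_prediction_interval(x, y, 3.5, t_crit=3.182))
```

Note how the interval widens as x0 moves away from xbar, which is one of the extrapolation properties the comparison examines.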
The Processing of Attended and Predicted Sounds in Time.
Paris, Tim; Kim, Jeesun; Davis, Chris
2016-01-01
Neural responses to an attended event are typically enhanced relative to those from an unattended one (attention enhancement). Conversely, neural responses to a predicted event are typically reduced relative to those from an unpredicted one (prediction suppression). What remains to be established is what happens with attended and predicted events. To examine the interaction between attention and prediction, we combined two robust paradigms developed for studying attention and prediction effects on ERPs into an orthogonal design. Participants were presented with sounds in attended or unattended intervals with onsets that were either predicted by a moving visual cue or unpredicted (no cue was provided). We demonstrated an N1 enhancement effect for attended sounds and an N1 suppression effect for predicted sounds; furthermore, an interaction between these effects was found that emerged early in the N1 (50-95 msec), indicating that attention enhancement only occurred when the sound was unpredicted. This pattern of results can be explained by the precision of the predictive cue that reduces the need for attention selection in the attended and predicted condition.
NASA Astrophysics Data System (ADS)
Rings, Joerg; Vrugt, Jasper A.; Schoups, Gerrit; Huisman, Johan A.; Vereecken, Harry
2012-05-01
Bayesian model averaging (BMA) is a standard method for combining predictive distributions from different models. In recent years, this method has enjoyed widespread application and use in many fields of study to improve the spread-skill relationship of forecast ensembles. The BMA predictive probability density function (pdf) of any quantity of interest is a weighted average of pdfs centered around the individual (possibly bias-corrected) forecasts, where the weights are equal to posterior probabilities of the models generating the forecasts, and reflect the individual models' skill over a training (calibration) period. The original BMA approach presented by Raftery et al. (2005) assumes that the conditional pdf of each individual model is adequately described with a rather standard Gaussian or Gamma statistical distribution, possibly with a heteroscedastic variance. Here we analyze the advantages of using BMA with a flexible representation of the conditional pdf. A joint particle filtering and Gaussian mixture modeling framework is presented to derive analytically, as closely and consistently as possible, the evolving forecast density (conditional pdf) of each constituent ensemble member. The median forecasts and evolving conditional pdfs of the constituent models are subsequently combined using BMA to derive one overall predictive distribution. This paper introduces the theory and concepts of this new ensemble postprocessing method, and demonstrates its usefulness and applicability by numerical simulation of the rainfall-runoff transformation using discharge data from three different catchments in the contiguous United States. The revised BMA method achieves significantly lower prediction errors than the original default BMA method (due to filtering) with predictive uncertainty intervals that are substantially smaller but still statistically coherent (due to the use of a time-variant conditional pdf).
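The BMA predictive pdf described above is simply a mixture: a weighted average of member densities centred on the (bias-corrected) forecasts. A minimal sketch with fixed Gaussian members, as in the original Raftery et al. formulation; the paper's contribution is precisely to replace these fixed-form members with filtered, evolving densities:

```python
import math

def bma_pdf(y, forecasts, weights, sigmas):
    """BMA predictive density at y: sum_k w_k * N(y | f_k, sigma_k^2),
    where the weights w_k are the posterior model probabilities
    (they must sum to one) and f_k are the member forecasts."""
    def normal_pdf(v, mu, sd):
        return math.exp(-0.5 * ((v - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))
    return sum(w * normal_pdf(y, f, s)
               for w, f, s in zip(weights, forecasts, sigmas))

# Hypothetical two-member ensemble
print(bma_pdf(1.0, forecasts=[0.5, 1.5], weights=[0.6, 0.4], sigmas=[0.8, 1.0]))
```

Quantiles of this mixture give the predictive uncertainty intervals whose width the paper compares between the default and revised methods.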
40 CFR 264.97 - General ground-water monitoring requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... background median levels for each constituent. (3) A tolerance or prediction interval procedure in which an... each constituent in each compliance well is compared to the upper tolerance or prediction limit. (4) A... must be maintained. This performance standard does not apply to tolerance intervals, prediction...
Exclusion of RAI2 as the causative gene for Nance-Horan syndrome.
Walpole, S M; Ronce, N; Grayson, C; Dessay, B; Yates, J R; Trump, D; Toutain, A
1999-05-01
Nance-Horan syndrome (NHS) is an X-linked condition characterised by congenital cataracts, microphthalmia and/or microcornea, unusual dental morphology, dysmorphic facial features, and developmental delay in some cases. Recent linkage studies have mapped the NHS disease gene to a 3.5-cM interval on Xp22.2 between DXS1053 and DXS443. We previously identified a human homologue of a mouse retinoic-acid-induced gene (RAI2) within the NHS critical flanking interval and have tested the gene as a candidate for Nance-Horan syndrome in nine NHS-affected families. Direct sequencing of the RAI2 gene and predicted promoter region has revealed no mutations in the families screened; RAI2 is therefore unlikely to be associated with NHS.
Optimal go/no-go ratios to maximize false alarms.
Young, Michael E; Sutherland, Steven C; McCoy, Anthony W
2018-06-01
Despite the ubiquity of go/no-go tasks in the study of behavioral inhibition, there is a lack of evidence regarding the impact of key design characteristics, including the go/no-go ratio, intertrial interval, and number of types of go stimuli, on the production of different response classes of central interest. In the present study we sought to empirically determine the optimal conditions to maximize the production of a rare outcome of considerable interest to researchers: false alarms. As predicted, the shortest intertrial intervals (450 ms), intermediate go/no-go ratios (2:1 to 4:1), and the use of multiple types of go stimuli produced the greatest numbers of false alarms. These results are placed within the context of behavioral changes during learning.
Pliocene Model Intercomparison (PlioMIP) Phase 2: Scientific Objectives and Experimental Design
NASA Technical Reports Server (NTRS)
Haywood, A. M.; Dowsett, H. J.; Dolan, A. M.; Rowley, D.; Abe-Ouchi, A.; Otto-Bliesner, B.; Chandler, M. A.; Hunter, S. J.; Lunt, D. J.; Pound, M.;
2015-01-01
The Pliocene Model Intercomparison Project (PlioMIP) is a co-ordinated international climate modelling initiative to study and understand climate and environments of the Late Pliocene, and their potential relevance in the context of future climate change. PlioMIP operates under the umbrella of the Palaeoclimate Modelling Intercomparison Project (PMIP), which examines multiple intervals in Earth history, the consistency of model predictions in simulating these intervals and their ability to reproduce climate signals preserved in geological climate archives. This paper provides a thorough model intercomparison project description, and documents the experimental design in a detailed way. Specifically, this paper describes the experimental design and boundary conditions that will be utilized for the experiments in Phase 2 of PlioMIP.
On the estimation of risk associated with an attenuation prediction
NASA Technical Reports Server (NTRS)
Crane, R. K.
1992-01-01
Viewgraphs from a presentation on the estimation of risk associated with an attenuation prediction are presented. Topics covered include: link failure, i.e. attenuation exceeding a specified threshold for a specified time interval or intervals; risk, the probability of one or more failures during the lifetime of the link or during a specified accounting interval; the problem of modeling the probability of attenuation by rainfall to provide a prediction of the attenuation threshold for a specified risk; and accounting for the inadequacy of a model or models.
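The risk notion defined above can be illustrated under a strong simplifying assumption of independent accounting intervals (real rain attenuation is correlated in time, which is part of the model-inadequacy issue the presentation raises):

```python
def link_risk(p_fail_per_interval, n_intervals):
    """Probability of one or more link failures over n accounting
    intervals, assuming each interval fails independently with the
    given probability (an illustrative simplification, not the
    presentation's model)."""
    return 1.0 - (1.0 - p_fail_per_interval) ** n_intervals

# e.g. a 1% per-interval failure probability over 100 intervals
print(link_risk(0.01, 100))
```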
Reliable prediction intervals with regression neural networks.
Papadopoulos, Harris; Haralambous, Haris
2011-10-01
This paper proposes an extension to conventional regression neural networks (NNs) for replacing the point predictions they produce with prediction intervals that satisfy a required level of confidence. Our approach follows a novel machine learning framework, called Conformal Prediction (CP), for assigning reliable confidence measures to predictions without assuming anything more than that the data are independent and identically distributed (i.i.d.). We evaluate the proposed method on four benchmark datasets and on the problem of predicting Total Electron Content (TEC), which is an important parameter in trans-ionospheric links; for the latter we use a dataset of more than 60000 TEC measurements collected over a period of 11 years. Our experimental results show that the prediction intervals produced by our method are both well calibrated and tight enough to be useful in practice. Copyright © 2011 Elsevier Ltd. All rights reserved.
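The split (inductive) variant of Conformal Prediction used to wrap a regression model can be sketched in a few lines: the interval half-width is a conformal quantile of absolute calibration residuals, which guarantees the required coverage for i.i.d. data. This is a simplified sketch with a plain absolute-residual nonconformity score, not the paper's exact normalized measure:

```python
import math

def conformal_interval(cal_residuals, y_pred, alpha=0.05):
    """Split conformal prediction interval: y_pred +/- q, where q is the
    ceil((n + 1) * (1 - alpha))-th smallest absolute residual on a
    held-out calibration set. For i.i.d. data the interval covers the
    true value with probability >= 1 - alpha."""
    scores = sorted(abs(r) for r in cal_residuals)
    n = len(scores)
    k = min(n, math.ceil((n + 1) * (1 - alpha))) - 1  # 0-based index
    q = scores[k]
    return y_pred - q, y_pred + q

# Hypothetical calibration residuals from a trained regression NN
cal = [0.1 * i for i in range(1, 20)]
print(conformal_interval(cal, y_pred=5.0, alpha=0.05))
```

The only assumption is exchangeability of the data, which is why the resulting intervals are well calibrated without any distributional model of the errors.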
Lu, Hongwei; Zhang, Chenxi; Sun, Ying; Hao, Zhidong; Wang, Chunfang; Tian, Jiajia
2015-08-01
Predicting the termination of paroxysmal atrial fibrillation (AF) may provide a signal for deciding whether there is a need to intervene in the AF in a timely fashion. We propose a novel RdR scatter plot of RR intervals in this study. The abscissa of the RdR scatter plot is the RR interval and the ordinate is the difference between successive RR intervals. The RdR scatter plot thus combines RR-interval information with the differences between successive RR intervals, capturing more heart rate variability (HRV) information. RdR scatter plot analysis of one minute of RR intervals for 50 segments of non-terminating and immediately terminating AF showed that the points in the RdR scatter plot of non-terminating AF were more scattered than those of immediately terminating AF. By dividing the RdR scatter plot into uniform grids and counting the number of non-empty grids, non-terminating and immediately terminating AF segments were differentiated. Using 49 RR intervals, 17 of 20 learning-set segments and 20 of 30 test-set segments were correctly detected; using 66 RR intervals, 16 of 18 learning-set segments and 20 of 28 test-set segments were correctly detected. The results demonstrate that during the last minute before the termination of paroxysmal AF, the variance of the RR intervals and the differences between neighboring RR intervals become smaller. The termination of paroxysmal AF can thus be predicted using the RdR scatter plot, although the prediction accuracy needs further improvement.
Forensic use of the Greulich and Pyle atlas: prediction intervals and relevance.
Chaumoitre, K; Saliba-Serre, B; Adalian, P; Signoli, M; Leonetti, G; Panuel, M
2017-03-01
The Greulich and Pyle (GP) atlas is one of the most frequently used methods of bone age (BA) estimation. Our aim is to assess its accuracy and to calculate the prediction intervals at 95% for forensic use. The study was conducted on a multi-ethnic sample of 2614 individuals (1423 boys and 1191 girls) referred to the university hospital of Marseille (France) for simple injuries. Hand radiographs were analysed using the GP atlas. Reliability of the GP atlas and agreement between BA and chronological age (CA) were assessed, and prediction intervals at 95% were calculated. The repeatability was excellent and the reproducibility was good. Pearson's linear correlation coefficient between CA and BA was 0.983. The mean difference between BA and CA was -0.18 years (boys) and 0.06 years (girls). The prediction interval at 95% for CA was given for each GP category and ranged between 1.2 and more than 4.5 years. The GP atlas is a reproducible and repeatable method that is still accurate for the present population, with a high correlation between BA and CA. The prediction intervals at 95% are wide, reflecting individual variability, and should be known when the method is used in forensic cases. • The GP atlas is still accurate at the present time. • There is a high correlation between bone age and chronological age. • Individual variability must be known when GP is used in forensic cases. • Prediction intervals (95%) are large: around 4 years for subjects older than 10 years.
Prediction Models for Dynamic Demand Response
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aman, Saima; Frincu, Marc; Chelmis, Charalampos
2015-11-02
As Smart Grids move closer to dynamic curtailment programs, Demand Response (DR) events will become necessary not only at fixed time intervals and on weekdays predetermined by static policies, but also during changing decision periods and on weekends, to react to real-time demand signals. Unique challenges arise in this context for demand prediction and curtailment estimation, and for the transformation of such tasks into an automated, efficient dynamic demand response (D2R) process. While existing work has concentrated on increasing the accuracy of prediction models for DR, there is a lack of studies of prediction models for D2R, which we address in this paper. Our first contribution is the formal definition of D2R and a description of its challenges and requirements. Our second contribution is a feasibility analysis of very-short-term prediction of electricity consumption for D2R over a diverse, large-scale dataset that includes both small residential customers and large buildings. Our third, and major, contribution is a set of insights into the predictability of electricity consumption in the context of D2R. Specifically, we focus on prediction models that can operate at a very small data granularity (here, 15-min intervals), for both weekdays and weekends - all conditions that characterize scenarios for D2R. We find that short-term time series and simple averaging models used by Independent Service Operators and utilities achieve superior prediction accuracy. We also observe that workdays are more predictable than weekends and holidays, and that smaller customers have larger variation in consumption and are less predictable than larger buildings. Key implications of our findings are that better models are required for small customers and for non-workdays, both of which are critical for D2R. Also, prediction models require only a few days' worth of data, indicating that small amounts of historical training data can be used to make reliable predictions, simplifying the big-data challenge associated with D2R.
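A simple averaging model of the kind the abstract reports working well at 15-minute granularity can be sketched as follows; the parameter names and the same-slot-over-recent-days averaging rule are illustrative assumptions, not the paper's exact specification:

```python
def averaging_forecast(history, horizon_slots, window_days=3, slots_per_day=96):
    """Forecast each upcoming 15-minute slot as the mean of the same
    slot over the last `window_days` days. `history` is a flat list of
    per-slot consumption readings covering whole days (so len(history)
    must be a multiple of slots_per_day). This mirrors the small
    training-data requirement noted in the abstract."""
    preds = []
    n = len(history)
    days = n // slots_per_day
    for h in range(horizon_slots):
        slot = (n + h) % slots_per_day
        same_slot = [history[d * slots_per_day + slot]
                     for d in range(days - window_days, days)]
        preds.append(sum(same_slot) / len(same_slot))
    return preds

# Hypothetical 4 days of perfectly periodic consumption
hist = [float(i % 96) for i in range(96 * 4)]
print(averaging_forecast(hist, 2))  # -> [0.0, 1.0]
```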
Preliminary dynamic tests of a flight-type ejector
NASA Technical Reports Server (NTRS)
Drummond, Colin K.
1992-01-01
A thrust augmenting ejector was tested to provide experimental data to assist in the assessment of theoretical models to predict duct and ejector fluid-dynamic characteristics. Eleven full-scale thrust augmenting ejector tests were conducted in which a rapid increase in the ejector nozzle pressure ratio was effected through a unique bypass/burst-disk subsystem. The present work examines two cases representative of the test performance window. In the first case, the primary nozzle pressure ratio (NPR) increased 36 percent from one unchoked (NPR = 1.29) primary flow condition to another (NPR = 1.75) over a 0.15 second interval. The second case involves choked primary flow conditions, where a 17 percent increase in primary nozzle flowrate (from NPR = 2.35 to NPR = 2.77) occurred over approximately 0.1 seconds. Transient signal treatment of the present dataset is discussed and initial interpretations of the results are compared with theoretical predictions for a similar STOVL ejector model.
Research on regional numerical weather prediction
NASA Technical Reports Server (NTRS)
Kreitzberg, C. W.
1976-01-01
Extension of the predictive power of dynamic weather forecasting to scales below the conventional synoptic or cyclonic scales in the near future is assessed. Lower costs per computation, more powerful computers, and a 100 km mesh over the North American area (with coarser mesh extending beyond it) are noted at present. Doubling the resolution even locally (to 50 km mesh) would entail a 16-fold increase in costs (including vertical resolution and halving the time interval), and constraints on domain size and length of forecast. Boundary conditions would be provided by the surrounding 100 km mesh, and time-varying lateral boundary conditions can be considered to handle moving phenomena. More physical processes to treat, more efficient numerical techniques, and faster computers (improved software and hardware) backing up satellite and radar data could produce further improvements in forecasting in the 1980s. Boundary layer modeling, initialization techniques, and quantitative precipitation forecasting are singled out among key tasks.
NASA Astrophysics Data System (ADS)
Romanov, D.; Epting, J.; Huggenberger, P.; Kaufmann, G.
2009-04-01
Karst aquifers are very sensitive to environmental changes. Small variations of boundary conditions can trigger significant and fast changes of the basic properties of these geological formations. Furthermore, a large number of hydraulic structures have been built in Karst terrains and close to urban areas. Within such settings it is of primary importance to understand the basic processes governing the system and to predict the evolution of Karst aquifers in order to mitigate hazards. There has been great progress in numerical modeling of the evolution of Karst during the last decades. We are now able to model early karstification of locations with complicated geological and geochemical settings, and our knowledge of the basic processes governing Karst evolution has increased significantly. However, there are still few modeling attempts with data from real Karst aquifers. A model describing the evolution of a gypsum Karst aquifer along the Birs River in Switzerland is presented in this study. The initial and boundary conditions for the simulations are taken from results of geophysical and geological field studies and a detailed 3D hydrogeological model of the area. Three time intervals of the aquifer's development are discussed in detail. The first covers natural karstification over a period of several hundred to a few thousand years. The results from this evolution period are used as initial conditions for the second interval, which covers the time between 1890 and 2007 AD. This period is characterized by anthropogenic alterations of the system through a man-made river dam, which considerably changes the evolution of the aquifer. In 2006 and 2007 AD, after serious subsidence of a nearby highway had been observed, technical measures were conducted and the boundary conditions changed once again. This marks the beginning of the third modeled interval. A forecast for the following 100 years is developed.
Our results correlate very well with the findings of the field studies of the area. Furthermore, the predicted evolution timescales are reasonable given what is known about the past of the aquifer. The Karst evolution models allowed simulating the development of aquifer properties, which subsequently could be transferred to the 3D hydrogeological model, allowing a more realistic representation of subsurface heterogeneities. It could be demonstrated that the various investigative methods for Karst aquifer characterization complement each other and allow the interpretation of both short-term impacts on, and the long-term development of, system dynamics. The obtained results show that our models can be applied not only to theoretical research on simplified and idealized Karst aquifers, but also to places with complex geological and hydrological properties. Investigative methods for similar subsidence problems can be optimized, leading from general measurements and monitoring technologies to tools with predictive character.
Bravo, Fernando; Cross, Ian; Stamatakis, Emmanuel Andreas; Rohrmeier, Martin
2017-01-01
Previous neuroimaging studies have shown an increased sensory cortical response (i.e., heightened weight on sensory evidence) under higher levels of predictive uncertainty. The signal enhancement theory proposes that attention improves the quality of the stimulus representation, and therefore reduces uncertainty by increasing the gain of the sensory signal. The present study employed functional magnetic resonance imaging (fMRI) to investigate the neural correlates for ambiguous valence inferences signaled by auditory information within an emotion recognition paradigm. Participants categorized sound stimuli of three distinct levels of consonance/dissonance controlled by interval content. Separate behavioural and neuroscientific experiments were conducted. Behavioural results revealed that, compared with the consonance condition (perfect fourths, fifths and octaves) and the strong dissonance condition (minor/major seconds and tritones), the intermediate dissonance condition (minor thirds) was the most ambiguous, least salient and more cognitively demanding category (slowest reaction times). The neuroscientific findings were consistent with a heightened weight on sensory evidence whilst participants were evaluating intermediate dissonances, which was reflected in an increased neural response of the right Heschl's gyrus. The results support previous studies that have observed enhanced precision of sensory evidence whilst participants attempted to represent and respond to higher degrees of uncertainty, and converge with evidence showing preferential processing of complex spectral information in the right primary auditory cortex. These findings are discussed with respect to music-theoretical concepts and recent Bayesian models of perception, which have proposed that attention may heighten the weight of information coming from sensory channels to stimulate learning about unknown predictive relationships.
Schott, Whitney; Aurino, Elisabetta; Penny, Mary E; Behrman, Jere R
2017-10-24
We investigated intergenerational associations of adolescent mothers' and grandmothers' anthropometrics and schooling with adolescent mothers' offspring's anthropometrics in Ethiopia, India, Peru, and Vietnam. We examined birthweight (n = 283), birthweight Z-score (BWZ), conditional growth in weight-for-age Z-score (cWAZ, residuals from a regression of WAZ at last survey round on BWZ, sex, and age), and height-for-age Z-score (HAZ) of children born to older cohort adolescent girls in the Young Lives study. Our key independent variables were adolescent mothers' body size: HAZ and body-mass-index-for-age Z-score (BMIZ) at age 8, conditional HAZ (cHAZ, residuals from a regression of HAZ at the end of a growth period on prior HAZ, age, and sex), conditional BMIZ growth (cBMIZ, calculated analogously), and grandmaternal BMIZ, HAZ, and schooling. We adjusted for child, maternal, and household characteristics. Adolescent mothers' cHAZ (ages 8-15) predicted birthweight (β = 130 g, 95% confidence interval (CI) 31-228), BWZ (β = 0.31, CI 0.09-0.53), and cWAZ (β = 0.28, CI 0.04-0.51). Adolescent mothers' BMIZ at age 8 predicted birthweight (β = 79 g, CI 16-43) and BWZ (β = 0.22, CI 0.08-0.36). Adolescent mothers' cBMIZ (ages 12-15) predicted child cWAZ and HAZ. Grandmothers' schooling predicted grandchild birthweight (β = 22 g, CI 1-44) and BWZ (β = 0.05, CI 0.01-0.10). © 2017 New York Academy of Sciences.
Thompson, Ronald E.; Hoffman, Scott A.
2006-01-01
A suite of 28 streamflow statistics, ranging from extreme low to high flows, was computed for 17 continuous-record streamflow-gaging stations and predicted for 20 partial-record stations in Monroe County and contiguous counties in northeastern Pennsylvania. The predicted statistics for the partial-record stations were based on regression analyses relating intermittent flow measurements made at the partial-record stations to concurrent daily mean flows at continuous-record index stations during base-flow conditions. The same statistics also were predicted for 134 ungaged stream locations in Monroe County on the basis of regression analyses relating the statistics to GIS-determined basin characteristics for the continuous-record station drainage areas. The methodology used to develop the regression equations was originally devised for estimating low-flow frequencies; this study and a companion study found that it also has application potential for predicting intermediate- and high-flow statistics. The statistics included mean monthly flows, mean annual flow, 7-day low flows for three recurrence intervals, nine flow durations, mean annual base flow, and annual mean base flows for two recurrence intervals. Low standard errors of prediction and high coefficients of determination (R²) indicated good results in using the regression equations to predict the statistics. Regression equations for the larger flow statistics tended to have lower standard errors of prediction and higher coefficients of determination (R²) than equations for the smaller flow statistics. The report discusses the methodologies used in determining the statistics, as well as the limitations of the statistics and of the equations used to predict them. Caution is indicated in using the predicted statistics for small drainage areas.
Study results constitute input needed by water-resource managers in Monroe County for planning purposes and evaluation of water-resources availability.
General intelligence predicts memory change across sleep.
Fenn, Kimberly M; Hambrick, David Z
2015-06-01
Psychometric intelligence (g) is often conceptualized as the capability for online information processing but it is also possible that intelligence may be related to offline processing of information. Here, we investigated the relationship between psychometric g and sleep-dependent memory consolidation. Participants studied paired-associates and were tested after a 12-hour retention interval that consisted entirely of wake or included a regular sleep phase. We calculated the number of word-pairs that were gained and lost across the retention interval. In a separate session, participants completed a battery of cognitive ability tests to assess g. In the wake group, g was not correlated with either memory gain or memory loss. In the sleep group, we found that g correlated positively with memory gain and negatively with memory loss. Participants with a higher level of general intelligence showed more memory gain and less memory loss across sleep. Importantly, the correlation between g and memory loss was significantly stronger in the sleep condition than in the wake condition, suggesting that the relationship between g and memory loss across time is specific to time intervals that include sleep. The present research suggests that g not only reflects the capability for online cognitive processing, but also reflects capability for offline processes that operate during sleep.
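The key inference above, that the g-memory-loss correlation is stronger across sleep than across wake, is a comparison of two independent correlations, commonly done with Fisher's r-to-z transformation. A sketch with hypothetical correlations and group sizes (the study's actual values are not given in the abstract):

```python
from math import atanh, sqrt
from statistics import NormalDist

def compare_correlations(r1, n1, r2, n2):
    """Two-sided p-value for the difference between two independent
    correlations, via Fisher's r-to-z transformation."""
    z = (atanh(r1) - atanh(r2)) / sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p

# Hypothetical g vs. memory-loss correlations in the sleep and wake groups
z, p = compare_correlations(-0.45, 80, 0.05, 80)
print(round(z, 2), round(p, 4))
```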
Coherent nature of the radiation emitted in delayed luminescence of leaves
Bajpai
1999-06-07
After exposure to light, a living system emits a photon signal of characteristic shape. The signal has a small decay region and a long tail region. The flux of photons in the decay region changes by 2 to 3 orders of magnitude, but remains almost constant in the tail region. The decaying part is attributed to delayed luminescence and the constant part to ultra-weak luminescence. Biophoton emission is the common name given to both kinds of luminescence, and the photons emitted are called biophotons. The decay character of the biophoton signal is not exponential, which is suggestive of a coherent signal. We sought to establish the coherent nature by measuring the conditional probability of zero-photon detection in a small interval Δ. Our measurements establish the coherent nature of biophotons emitted by different leaves at various temperatures in the range 15-50 °C. Our setup could measure the conditional probability for Δ = 100 µs in only 100 ms, which enabled us to make this measurement in the decaying part of the signal. The various measurements were repeated 2000 times in contiguous intervals, which determined the dependence of the conditional probability on signal strength. The observed conditional probabilities at different signal strengths are in agreement with the predictions for coherent photons. The agreement is impressive in the discriminatory range of signal strengths, 0.1-5 counts per Δ, where the predictions for coherent and thermal photons differ substantially. We used values of Δ in the range 10 µs-10 ms to obtain a discriminatory signal strength in different regions of a decaying signal. These measurements establish the coherent nature of photons in all regions of a biophoton signal from 10 ms to 5 hr.
We have checked the efficacy of our method by measuring the conditional probability of zero-photon detection in the radiation of a light-emitting diode along with a leaf, for Δ in the range 10 µs-100 µs. The conditional probability in the diode radiation differed from the prediction for coherent photons when the signal strength was less than 2.5 counts per Δ. Only the diode radiation exhibited photon bunching, at signal strengths around 0.05 counts per Δ. Copyright 1999 Academic Press.
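The coherent-versus-thermal discrimination above rests on textbook zero-count probabilities: Poisson statistics for coherent light and Bose-Einstein statistics for single-mode thermal light. A minimal sketch (these closed forms are standard quantum-optics results, not the paper's full conditional-probability estimator):

```python
from math import exp

def p_zero_coherent(mu):
    """Poisson statistics: P(no photon in an interval with mean count mu)."""
    return exp(-mu)

def p_zero_thermal(mu):
    """Single-mode thermal (Bose-Einstein) statistics."""
    return 1.0 / (1.0 + mu)

# The predictions diverge most in the discriminatory range ~0.1-5 counts per interval
for mu in (0.1, 1.0, 5.0):
    print(mu, round(p_zero_coherent(mu), 4), round(p_zero_thermal(mu), 4))
```

For mean counts between 0.1 and 5 per interval the two predictions differ by enough to be resolved experimentally, which is the "discriminatory range" the authors exploit.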
Oswald, William E.; Stewart, Aisha E. P.; Flanders, W. Dana; Kramer, Michael R.; Endeshaw, Tekola; Zerihun, Mulat; Melaku, Birhanu; Sata, Eshetu; Gessesse, Demelash; Teferi, Tesfaye; Tadesse, Zerihun; Guadie, Birhan; King, Jonathan D.; Emerson, Paul M.; Callahan, Elizabeth K.; Moe, Christine L.; Clasen, Thomas F.
2016-01-01
This study developed and validated a model for predicting the probability that communities in Amhara Region, Ethiopia, have low sanitation coverage, based on environmental and sociodemographic conditions. Community sanitation coverage was measured between 2011 and 2014 through trachoma control program evaluation surveys. Information on environmental and sociodemographic conditions was obtained from available data sources and linked with community data using a geographic information system. Logistic regression was used to identify predictors of low community sanitation coverage (< 20% versus ≥ 20%). The selected model was geographically and temporally validated. Model-predicted probabilities of low community sanitation coverage were mapped. Among 1,502 communities, 344 (22.90%) had coverage below 20%. The selected model included measures for high topsoil gravel content, an indicator for low-lying land, population density, altitude, and rainfall and had reasonable predictive discrimination (area under the curve = 0.75, 95% confidence interval = 0.72, 0.78). Measures of soil stability were strongly associated with low community sanitation coverage, controlling for community wealth, and other factors. A model using available environmental and sociodemographic data predicted low community sanitation coverage for areas across Amhara Region with fair discrimination. This approach could assist sanitation programs and trachoma control programs, scaling up or in hyperendemic areas, to target vulnerable areas with additional activities or alternate technologies. PMID:27430547
Schober, Karsten; Savino, Stephanie; Yildiz, Vedat
2017-11-10
The objective of the study was to evaluate the effects of body weight (BW), breed, and sex on two-dimensional (2D) echocardiographic measures, reference ranges, and prediction intervals using allometrically-scaled data of left atrial (LA) and left ventricular (LV) size and LV wall thickness in healthy cats. Study type was retrospective, observational, and clinical cohort. 150 healthy cats were enrolled and 2D echocardiograms analyzed. LA diameter, LV wall thickness, and LV dimension were quantified using three different imaging views. The effect of BW, breed, sex, age, and interaction (BW*sex) on echocardiographic variables was assessed using univariate and multivariate regression and linear mixed model analysis. Standard (using raw data) and allometrically scaled (Y = a × M^b) reference intervals and prediction intervals were determined. BW had a significant (P<0.05) independent effect on 2D variables whereas breed, sex, and age did not. There were clinically relevant differences between reference intervals using mean ± 2SD of raw data and mean and 95% prediction interval of allometrically-scaled variables, most prominent in larger (>6 kg) and smaller (<3 kg) cats. A clinically relevant difference between thickness of the interventricular septum (IVS) and dimension of the LV posterior wall (LVPW) was identified. In conclusion, allometric scaling and BW-based 95% prediction intervals should be preferred over conventional 2D echocardiographic reference intervals in cats, in particular in small and large cats. These results are particularly relevant to screening examinations for feline hypertrophic cardiomyopathy.
Formiga, Francesc; Ferrer, Assumpta; Padros, Gloria; Montero, Abelardo; Gimenez-Argente, Carme; Corbella, Xavier
2016-01-01
Objective To investigate the predictive value of functional impairment, chronic conditions, and laboratory biomarkers of aging for predicting 5-year mortality in the elderly aged 85 years. Methods Predictive value for mortality of different geriatric assessments carried out during the OCTABAIX study was evaluated after 5 years of follow-up in 328 subjects aged 85 years. Measurements included assessment of functional status and comorbidity, along with laboratory tests on vitamin D, cholesterol, CD4/CD8 ratio, hemoglobin, and serum thyrotropin. Results Overall, the mortality rate after 5 years of follow-up was 42.07%. Bivariate analysis showed that patients who survived were predominantly female (P=0.02), and they showed a significantly better baseline functional status for both basic (P<0.001) and instrumental (P<0.001) activities of daily living (Barthel and Lawton index), better cognitive performance (Spanish version of the Mini-Mental State Examination) (P<0.001), lower comorbidity conditions (Charlson) (P<0.001), lower nutritional risk (Mini Nutritional Assessment) (P<0.001), lower risk of falls (Tinetti gait scale) (P<0.001), a lower percentage of heart failure (P=0.03) and chronic obstructive pulmonary disease (P=0.03), and took fewer chronic prescription drugs (P=0.002) than nonsurvivors. Multivariate Cox regression analysis identified a decreased score in the Lawton index (hazard ratio 0.86, 95% confidence interval: 0.78–0.91) and higher comorbidity conditions (hazard ratio 1.20, 95% confidence interval: 1.08–1.33) as independent predictors of mortality at 5 years in the studied population. Conclusion The ability to perform instrumental activities of daily living and the global comorbidity assessed at baseline were the predictors of death identified in our 85-year-old community-dwelling subjects after 5 years of follow-up. PMID:27143867
Explaining negative refraction without negative refractive indices.
Talalai, Gregory A; Garner, Timothy J; Weiss, Steven J
2018-03-01
Negative refraction through a triangular prism may be explained without assigning a negative refractive index to the prism by using array theory. For the case of a beam incident upon the wedge, the array theory accurately predicts the beam transmission angle through the prism and provides an estimate of the frequency interval at which negative refraction occurs. The hypotenuse of the prism has a staircase shape because it is built of cubic unit cells. The large phase delay imparted by each unit cell, combined with the staircase shape of the hypotenuse, creates the necessary conditions for negative refraction. Full-wave simulations using the finite-difference time-domain method show that array theory accurately predicts the beam transmission angle.
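The array-theory account can be illustrated with the scalar grating equation, treating the staircase hypotenuse as a periodic array of period d; the wavelength and period below are hypothetical (the full analysis also needs the unit-cell phase delay):

```python
from math import asin, sin, degrees, radians

def beam_angles(theta_i_deg, wavelength, period):
    """Propagating diffracted-order angles for a periodic surface, from the
    grating equation sin(theta_m) = sin(theta_i) + m * wavelength / period."""
    angles = {}
    for m in range(-3, 4):
        s = sin(radians(theta_i_deg)) + m * wavelength / period
        if abs(s) <= 1:                      # order propagates
            angles[m] = degrees(asin(s))
    return angles

# Hypothetical values (arbitrary length units): oblique incidence at 20 degrees
b = beam_angles(20.0, wavelength=3.0, period=4.0)
print(b)
```

Here only the m = 0 and m = -1 orders propagate; the m = -1 beam leaves at about -24°, on the opposite side of the normal, which is the negatively refracted beam. The effect is confined to the frequency interval where this order both propagates and receives most of the transmitted power.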
NASA Astrophysics Data System (ADS)
Tesoriero, A. J.; Terziotti, S.
2014-12-01
Nitrate trends in streams often do not match expectations based on recent nitrogen source loadings to the land surface. Groundwater discharge with long travel times has been suggested as the likely cause for these observations. The fate of nitrate in groundwater depends to a large extent on the occurrence of denitrification along flow paths. Because denitrification in groundwater is inhibited when dissolved oxygen (DO) concentrations are high, defining the oxic-suboxic interface has been critical in determining pathways for nitrate transport in groundwater and to streams at the local scale. Predicting redox conditions on a regional scale is complicated by the spatial variability of reaction rates. In this study, logistic regression and boosted classification tree analysis were used to predict the probability of oxic water in groundwater in the Chesapeake Bay watershed. The probability of oxic water (DO > 2 mg/L) was predicted by relating DO concentrations in over 3,000 groundwater samples to indicators of residence time and/or electron donor availability. Variables that describe position in the flow system (e.g., depth to top of the open interval), soil drainage and surficial geology were the most important predictors of oxic water. Logistic regression and boosted classification tree analysis correctly predicted the presence or absence of oxic conditions in over 75% of the samples in both training and validation data sets. Predictions of the percentages of oxic wells in deciles of risk were very accurate (r² > 0.9) in both the training and validation data sets. Depth to the bottom of the oxic layer was predicted and is being used to estimate the effect that groundwater denitrification has on stream nitrate concentrations and the time lag between the application of nitrogen at the land surface and its effect on streams.
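The "deciles of risk" check reported above bins cases by predicted probability and compares the mean prediction with the observed fraction in each bin. A self-contained sketch on synthetic, well-calibrated data (the real predictors and fitted model are not reproduced here):

```python
import random

def decile_calibration(probs, outcomes, n_bins=10):
    """Group cases by predicted probability and compare the mean prediction
    with the observed event fraction in each bin ('deciles of risk')."""
    order = sorted(range(len(probs)), key=lambda i: probs[i])
    size = len(order) // n_bins
    rows = []
    for b in range(n_bins):
        idx = order[b * size:(b + 1) * size] if b < n_bins - 1 else order[b * size:]
        mean_p = sum(probs[i] for i in idx) / len(idx)
        obs = sum(outcomes[i] for i in idx) / len(idx)
        rows.append((round(mean_p, 2), round(obs, 2)))
    return rows

# Synthetic calibrated predictions: the event occurs with exactly probability p
random.seed(1)
probs = [random.random() for _ in range(1000)]
outcomes = [1 if random.random() < p else 0 for p in probs]
rows = decile_calibration(probs, outcomes)
print(rows[0], rows[-1])
```

For a well-calibrated model the two numbers track each other in every bin, which is what an r² above 0.9 across deciles reflects.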
Bayesian averaging over Decision Tree models for trauma severity scoring.
Schetinin, V; Jakaite, L; Krzanowski, W
2018-01-01
Health care practitioners analyse possible risks of misleading decisions and need to estimate and quantify uncertainty in predictions. We have examined the "gold" standard of screening a patient's conditions for predicting survival probability, based on logistic regression modelling, which is used in trauma care for clinical purposes and quality audit. This methodology is based on theoretical assumptions about data and uncertainties. Models induced within such an approach have exposed a number of problems, providing unexplained fluctuation of predicted survival and low accuracy of estimating uncertainty intervals within which predictions are made. A Bayesian method, which in theory is capable of providing accurate predictions and uncertainty estimates, has been adopted in our study using Decision Tree models. Our approach has been tested on a large set of patients registered in the US National Trauma Data Bank and has outperformed the standard method in terms of prediction accuracy, thereby providing practitioners with accurate estimates of the predictive posterior densities of interest that are required for making risk-aware decisions. Copyright © 2017 Elsevier B.V. All rights reserved.
Time-based loss in visual short-term memory is from trace decay, not temporal distinctiveness.
Ricker, Timothy J; Spiegel, Lauren R; Cowan, Nelson
2014-11-01
There is no consensus as to why forgetting occurs in short-term memory tasks. In past work, we have shown that forgetting occurs with the passage of time, but there are 2 classes of theories that can explain this effect. In the present work, we investigate the reason for time-based forgetting by contrasting the predictions of temporal distinctiveness and trace decay in the procedure in which we have observed such loss, involving memory for arrays of characters or letters across several seconds. The 1st theory, temporal distinctiveness, predicts that increasing the amount of time between trials will lead to less proactive interference, resulting in less forgetting across a retention interval. In the 2nd theory, trace decay, temporal distinctiveness between trials is irrelevant to the loss over a retention interval. Using visual array change detection tasks in 4 experiments, we find small proactive interference effects on performance under some specific conditions, but no concomitant change in the effect of a retention interval. We conclude that trace decay is the more suitable class of explanations of the time-based forgetting in short-term memory that we have observed, and we suggest the need for further clarity in what the exact basis of that decay may be. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Environmental trends in extinction during the Paleozoic
NASA Technical Reports Server (NTRS)
Sepkoski, J. John, Jr.
1987-01-01
Extinction intensities calculated from 505 Paleozoic marine assemblages divided among six environmental zones and 40 stratigraphic intervals indicate that whole communities exhibit increasing extinction offshore but that genera within individual taxonomic classes tend to have their highest extinction onshore. The offshore trend at the community level results from a concentration of genera in classes with low characteristic extinction rates in nearshore environments. This finding is consistent with the ecologic expectation that organisms inhabiting unpredictably fluctuating environments should suffer more extinction than counterparts living under more predictably equitable conditions.
Optical modeling of stratospheric aerosols - Present status
NASA Technical Reports Server (NTRS)
Rosen, J. M.; Hofmann, D. J.
1986-01-01
A stratospheric aerosol optical model is developed which is based on a size distribution conforming to direct measurements. Additional constraints are consistent with large data sets of independently measured macroscopic aerosol properties such as mass and backscatter. The period under study covers background as well as highly disturbed volcanic conditions and an altitude interval ranging from the tropopause to about 30 km. The predictions of the model are used to form a basis for interpreting and intercomparing several diverse types of stratospheric aerosol measurement.
Evaluation of SAMe-TT2R2 Score on Predicting Success With Extended-Interval Warfarin Monitoring.
Hwang, Andrew Y; Carris, Nicholas W; Dietrich, Eric A; Gums, John G; Smith, Steven M
2018-06-01
In patients with stable international normalized ratios, 12-week extended-interval warfarin monitoring can be considered; however, predictors of success with this strategy are unknown. The previously validated SAMe-TT2R2 score (considering sex, age, medical history, treatment, tobacco, and race) predicts anticoagulation control during standard follow-up (every 4 weeks), with lower scores associated with greater time in therapeutic range. To evaluate the ability of the SAMe-TT2R2 score in predicting success with extended-interval warfarin follow-up in patients with previously stable warfarin doses. In this post hoc analysis of a single-arm feasibility study, baseline SAMe-TT2R2 scores were calculated for patients with ≥1 extended-interval follow-up visit. The primary analysis assessed achieved weeks of extended-interval follow-up according to baseline SAMe-TT2R2 scores. A total of 47 patients receiving chronic anticoagulation completed a median of 36 weeks of extended-interval follow-up. The median baseline SAMe-TT2R2 score was 1 (range 0-5). Lower SAMe-TT2R2 scores appeared to be associated with greater duration of extended-interval follow-up achieved, though the differences between scores were not statistically significant. No individual variable of the SAMe-TT2R2 score was associated with achieved weeks of extended-interval follow-up. Analysis of additional patient factors found that longer duration (≥24 weeks) of prior stable treatment was significantly associated with greater weeks of extended-interval follow-up completed (P = 0.04). Conclusion and Relevance: This pilot study provides limited evidence that the SAMe-TT2R2 score predicts success with extended-interval warfarin follow-up but requires confirmation in a larger study. Further research is also necessary to establish additional predictors of successful extended-interval warfarin follow-up.
A Copula-Based Conditional Probabilistic Forecast Model for Wind Power Ramps
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hodge, Brian S; Krishnan, Venkat K; Zhang, Jie
Efficient management of wind ramping characteristics can significantly reduce wind integration costs for balancing authorities. By considering the stochastic dependence of wind power ramp (WPR) features, this paper develops a conditional probabilistic wind power ramp forecast (cp-WPRF) model based on Copula theory. The WPRs dataset is constructed by extracting ramps from a large dataset of historical wind power. Each WPR feature (e.g., rate, magnitude, duration, and start-time) is separately forecasted by considering the coupling effects among different ramp features. To accurately model the marginal distributions with a copula, a Gaussian mixture model (GMM) is adopted to characterize the WPR uncertainty and features. The Canonical Maximum Likelihood (CML) method is used to estimate parameters of the multivariable copula. The optimal copula model is chosen based on the Bayesian information criterion (BIC) from each copula family. Finally, the best condition-based cp-WPRF model is determined by predictive interval (PI) based evaluation metrics. Numerical simulations on publicly available wind power data show that the developed copula-based cp-WPRF model can predict WPRs with a high level of reliability and sharpness.
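The conditional-forecast step can be illustrated with the simplest case: a bivariate Gaussian copula, where the conditional quantile of one uniform-scale variable given another has a closed form. The correlation and percentiles below are hypothetical, and the paper's model (GMM marginals, copula family selected by BIC) is richer than this sketch:

```python
from math import sqrt
from statistics import NormalDist

N = NormalDist()

def gaussian_copula_cond_quantile(u1, q, rho):
    """q-th conditional quantile of U2 | U1 = u1 under a bivariate Gaussian
    copula with correlation rho (both margins on the uniform [0,1] scale)."""
    z1 = N.inv_cdf(u1)
    return N.cdf(rho * z1 + sqrt(1 - rho ** 2) * N.inv_cdf(q))

# Hypothetical: ramp rate at its 90th percentile, dependence rho = 0.7;
# central 80% predictive interval for the (uniform-scale) ramp magnitude
lo = gaussian_copula_cond_quantile(0.9, 0.1, 0.7)
hi = gaussian_copula_cond_quantile(0.9, 0.9, 0.7)
print(round(lo, 3), round(hi, 3))
```

Conditioning on a high ramp rate shifts the whole predictive interval for the coupled feature upward, which is exactly the dependence effect the cp-WPRF model exploits.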
NASA Astrophysics Data System (ADS)
Abhilash, S.; Sahai, A. K.; Borah, N.; Chattopadhyay, R.; Joseph, S.; Sharmila, S.; De, S.; Goswami, B. N.; Kumar, Arun
2014-05-01
An ensemble prediction system (EPS) is devised for the extended range prediction (ERP) of monsoon intraseasonal oscillations (MISO) of Indian summer monsoon (ISM) using National Centers for Environmental Prediction Climate Forecast System model version 2 at T126 horizontal resolution. The EPS is formulated by generating 11 member ensembles through the perturbation of atmospheric initial conditions. The hindcast experiments were conducted at every 5-day interval for 45 days lead time starting from 16th May to 28th September during 2001-2012. The general simulation of ISM characteristics and the ERP skill of the proposed EPS at pentad mean scale are evaluated in the present study. Though the EPS underestimates both the mean and variability of ISM rainfall, it simulates the northward propagation of MISO reasonably well. It is found that the signal-to-noise ratio of the forecasted rainfall becomes unity by about 18 days. The potential predictability error of the forecasted rainfall saturates by about 25 days. Though useful deterministic forecasts could be generated up to 2nd pentad lead, significant correlations are found even up to 4th pentad lead. The skill in predicting large-scale MISO, which is assessed by comparing the predicted and observed MISO indices, is found to be ~17 days. It is noted that the prediction skill of actual rainfall is closely related to the prediction of large-scale MISO amplitude as well as the initial conditions related to the different phases of MISO. An analysis of categorical prediction skills reveals that break is more skillfully predicted, followed by active and then normal. The categorical probability skill scores suggest that useful probabilistic forecasts could be generated even up to 4th pentad lead.
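The quoted signal-to-noise limit can be made concrete with one common definition: the variance of the ensemble mean across forecast cases divided by the mean within-ensemble variance; the predictability limit is the lead time at which this ratio falls to one. The definition and the 11-member toy data below are assumptions for illustration, not the paper's exact estimator:

```python
from statistics import mean, pvariance

def signal_to_noise(forecasts):
    """forecasts[c][m]: forecast for case c from ensemble member m.
    Signal = variance of the ensemble mean across cases;
    noise = mean within-ensemble variance."""
    ens_means = [mean(case) for case in forecasts]
    signal = pvariance(ens_means)
    noise = mean(pvariance(case) for case in forecasts)
    return signal / noise

# Hypothetical 11-member forecasts for four start dates
fc = [
    [5.0 + 0.10 * m for m in range(11)],
    [7.0 + 0.20 * m for m in range(11)],
    [3.0 - 0.10 * m for m in range(11)],
    [6.0 + 0.15 * m for m in range(11)],
]
print(round(signal_to_noise(fc), 2))
```

As lead time grows the members spread out (noise rises) while case-to-case differences blur (signal falls), so the ratio decays toward one, here at about 18 days.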
Francq, Bernard G; Govaerts, Bernadette
2016-06-30
Two main methodologies for assessing equivalence in method-comparison studies are presented separately in the literature. The first one is the well-known and widely applied Bland-Altman approach with its agreement intervals, where two methods are considered interchangeable if their differences are not clinically significant. The second approach is based on errors-in-variables regression in a classical (X,Y) plot and focuses on confidence intervals, whereby two methods are considered equivalent when providing similar measures notwithstanding the random measurement errors. This paper reconciles these two methodologies and shows their similarities and differences using both real data and simulations. A new consistent correlated-errors-in-variables regression is introduced as the errors are shown to be correlated in the Bland-Altman plot. Indeed, the coverage probabilities collapse and the biases soar when this correlation is ignored. Novel tolerance intervals are compared with agreement intervals with or without replicated data, and novel predictive intervals are introduced to predict a single measure in an (X,Y) plot or in a Bland-Altman plot with excellent coverage probabilities. We conclude that the (correlated)-errors-in-variables regressions should not be avoided in method comparison studies, although the Bland-Altman approach is usually applied to avert their complexity. We argue that tolerance or predictive intervals are better alternatives than agreement intervals, and we provide guidelines for practitioners regarding method comparison studies. Copyright © 2016 John Wiley & Sons, Ltd.
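The Bland-Altman agreement interval discussed above is simple to compute: the mean of the paired differences ± 1.96 times their standard deviation. A sketch with hypothetical paired measurements (the tolerance and predictive intervals proposed in the paper additionally account for estimation uncertainty, which this naive interval ignores):

```python
from statistics import mean, stdev

def bland_altman(x, y, z=1.96):
    """Bland-Altman agreement interval: mean difference ± 1.96 SD of the
    paired differences between two measurement methods."""
    d = [a - b for a, b in zip(x, y)]
    bias, sd = mean(d), stdev(d)
    return bias - z * sd, bias, bias + z * sd

# Hypothetical paired measurements from two devices
x = [10.2, 11.5, 9.8, 12.1, 10.9, 11.8, 10.4]
y = [10.0, 11.9, 9.5, 12.4, 10.6, 11.5, 10.9]
lo, bias, hi = bland_altman(x, y)
print(round(lo, 2), round(bias, 2), round(hi, 2))
```

With only a handful of pairs the plug-in limits understate the true spread, which is the coverage problem that motivates the paper's tolerance and predictive intervals.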
Running Speed Can Be Predicted from Foot Contact Time during Outdoor over Ground Running.
de Ruiter, Cornelis J; van Oeveren, Ben; Francke, Agnieta; Zijlstra, Patrick; van Dieen, Jaap H
2016-01-01
The number of validation studies of commercially available foot pods that provide estimates of running speed is limited, and these studies have been conducted under laboratory conditions. Moreover, internal data handling and algorithms used to derive speed from these pods are proprietary and thereby unclear. The present study investigates the use of foot contact time (CT) for running speed estimations, which potentially can be used in addition to the global positioning system (GPS) in situations where GPS performance is limited. CT was measured with triaxial inertial sensors attached to the feet of 14 runners, during natural over-ground outdoor running, under optimized conditions for GPS. The individual relationships between running speed and CT were established during short runs at different speeds on two days. These relations were subsequently used to predict instantaneous speed during a straight-line 4-km run with a single turning point halfway. Stopwatch-derived speed, measured for each of 32 consecutive 125-m intervals during the 4-km runs, was used as reference. Individual speed-CT relations were strong (r² > 0.96 for all trials) and consistent between days. During the 4-km runs, the median error (range) in speed predicted from CT, 2.5% (5.2), was higher (P<0.05) than for GPS, 1.6% (0.8). However, around the turning point and during the first and last 125-m interval, the error for GPS-speed increased to 5.0% (4.5) and became greater (P<0.05) than the error predicted from CT: 2.7% (4.4). Small speed fluctuations during 4-km runs were adequately monitored with both methods: CT and GPS respectively explained 85% and 73% of the total speed variance during 4-km runs. In conclusion, running speed estimates based on speed-CT relations have acceptable accuracy and could serve as a backup or substitute for GPS during tarmac running on flat terrain whenever GPS performance is limited.
Wilkerson, Gary B; Colston, Marisa A
2015-06-01
Researchers have identified high exposure to game conditions, low back dysfunction, and poor endurance of the core musculature as strong predictors for the occurrence of sprains and strains among collegiate football players. To refine a previously developed injury-prediction model through analysis of 3 consecutive seasons of data. Cohort study. National Collegiate Athletic Association Division I Football Championship Subdivision football program. For 3 consecutive years, all 152 team members (age = 19.7 ± 1.5 years, height = 1.84 ± 0.08 m, mass = 101.08 ± 19.28 kg) presented for a mandatory physical examination on the day before initiation of preseason practice sessions. Associations between preseason measurements and the subsequent occurrence of a core or lower extremity sprain or strain were established for 256 player-seasons of data. We used receiver operating characteristic analysis to identify optimal cut points for dichotomous categorizations of cases as high risk or low risk. Both logistic regression and Cox regression analyses were used to identify a multivariable injury-prediction model with optimal discriminatory power. Exceptionally good discrimination between injured and uninjured cases was found for a 3-factor prediction model that included equal to or greater than 1 game as a starter, Oswestry Disability Index score equal to or greater than 4, and poor wall-sit-hold performance. The existence of at least 2 of the 3 risk factors demonstrated 56% sensitivity, 80% specificity, an odds ratio of 5.28 (90% confidence interval = 3.31, 8.44), and a hazard ratio of 2.97 (90% confidence interval = 2.14, 4.12). High exposure to game conditions was the dominant injury risk factor for collegiate football players, but a surprisingly mild degree of low back dysfunction and poor core-muscle endurance appeared to be important modifiable risk factors that should be identified and addressed before participation.
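The reported odds ratio with its 90% confidence interval follows from a 2x2 risk-classification table. The counts below are hypothetical, reconstructed only roughly from the reported 56% sensitivity and 80% specificity over 256 player-seasons, so the interval does not exactly reproduce the published one:

```python
from math import log, exp, sqrt

def odds_ratio_ci(a, b, c, d, z=1.645):
    """Odds ratio and Wald confidence interval from a 2x2 table:
    a, b = injured/uninjured in the high-risk group; c, d = low-risk group.
    z = 1.645 gives the 90% interval used in the study."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo, hi = exp(log(or_) - z * se), exp(log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts consistent with ~56% sensitivity and ~80% specificity
or_, lo, hi = odds_ratio_ci(56, 31, 44, 125)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```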
[The evaluation and prognosis of the psychophysiological status of a human operator].
Sukhov, A E; Chaĭchenko, G M
1989-01-01
In experiments on 56 healthy subjects (18-20 years old), the quality of performance was assessed during a compensatory tracking task under work regimes of increasing difficulty. Depending on task difficulty, five groups of subjects were distinguished, each showing optimal working capacity under one of four working conditions: normal, ordinary, and strenuous work, and a modeled stress situation. It was established that changes in the number of significant correlations between the main parameters of the operator's psychophysiological state reflect the condition of his functional systems. On the basis of the total range of organization values of both the R-R intervals of the ECG and the duration of expiration, the success of the operator's work under complex conditions of activity can be predicted.
Gaussian process regression for tool wear prediction
NASA Astrophysics Data System (ADS)
Kong, Dongdong; Chen, Yongjie; Li, Ning
2018-05-01
To advance intelligent manufacturing, this paper presents a novel tool wear assessment technique based on integrated radial basis function kernel principal component analysis (KPCA_IRBF) and Gaussian process regression (GPR) for accurate, real-time monitoring of the in-process tool wear parameter (flank wear width). KPCA_IRBF is a new nonlinear dimension-increasing technique, proposed here for the first time for feature fusion. The GPR model provides both the tool wear prediction and a corresponding confidence interval. GPR also outperforms artificial neural networks (ANN) and support vector machines (SVM) in prediction accuracy, since Gaussian noise can be modeled quantitatively within the GPR model. However, noise seriously affects the stability of the confidence interval. In this work, the proposed KPCA_IRBF technique removes noise and weakens its negative effects, substantially narrowing and smoothing the confidence interval, which is conducive to accurate tool wear monitoring. Moreover, the kernel parameter in KPCA_IRBF can be selected from a much larger region than with the conventional KPCA_RBF technique, which improves the efficiency of model construction. Ten sets of cutting tests were conducted to validate the effectiveness of the presented tool wear assessment technique. The experimental results show that the in-process flank wear width of tool inserts can be monitored accurately using the presented technique, which is robust under a variety of cutting conditions. This study lays the foundation for tool wear monitoring in real industrial settings.
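The GPR step that yields both a point prediction and a confidence interval can be sketched as plain Gaussian process regression with an RBF kernel on synthetic wear data. This is not the paper's KPCA_IRBF feature-fusion pipeline; the kernel hyperparameters, noise level, and data values are all illustrative assumptions:

```python
import numpy as np

def rbf(a, b, ell=1.0, sf=1.0):
    """Squared-exponential kernel between two 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])       # cutting time (min, synthetic)
y = np.array([0.02, 0.05, 0.09, 0.12, 0.16])  # flank wear width (mm, synthetic)
xs = np.array([2.5])                          # query point
sn = 0.01                                     # observation noise std (assumed)

# Standard GP posterior via Cholesky factorization of the noisy Gram matrix.
K = rbf(x, x) + sn**2 * np.eye(len(x))
Ks = rbf(xs, x)
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
mean = Ks @ alpha                             # posterior mean prediction
v = np.linalg.solve(L, Ks.T)
var = rbf(xs, xs) - v.T @ v + sn**2           # predictive variance
sd = np.sqrt(np.diag(var))
lo, hi = mean - 1.96 * sd, mean + 1.96 * sd   # ~95% interval
```

The width of `[lo, hi]` is what the paper seeks to compress by denoising the input features before the GPR stage.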
NASA Astrophysics Data System (ADS)
Borah, Nabanita; Sukumarpillai, Abhilash; Sahai, Atul Kumar; Chattopadhyay, Rajib; Joseph, Susmitha; De, Soumyendu; Nath Goswami, Bhupendra; Kumar, Arun
2014-05-01
An ensemble prediction system (EPS) is devised for the extended range prediction (ERP) of monsoon intraseasonal oscillations (MISO) of the Indian summer monsoon (ISM) using the NCEP Climate Forecast System model version 2 at T126 horizontal resolution. The EPS is formulated by producing 11-member ensembles through the perturbation of atmospheric initial conditions. The hindcast experiments were conducted at every 5-day interval for 45 days lead time, starting from 16th May to 28th September during 2001-2012. The general simulation of ISM characteristics and the ERP skill of the proposed EPS at the pentad mean scale are evaluated in the present study. Though the EPS underestimates both the mean and variability of ISM rainfall, it simulates the northward propagation of MISO reasonably well. It is found that the signal-to-noise ratio becomes unity by about 18 days and the predictability error saturates by about 25 days. Though useful deterministic forecasts could be generated up to the 2nd pentad lead, significant correlations are observed even up to the 4th pentad lead. The skill in predicting large-scale MISO, which is assessed by comparing the predicted and observed MISO indices, is found to be ~17 days. It is noted that the prediction skill of actual rainfall is closely related to the prediction of the amplitude of large-scale MISO as well as to the initial conditions related to the different phases of MISO. Categorical prediction skill reveals that break phases are predicted most skillfully, followed by active and then normal phases. The categorical probability skill scores suggest that useful probabilistic forecasts could be generated even up to the 4th pentad lead.
Time and resource limits on working memory: cross-age consistency in counting span performance.
Ransdell, Sarah; Hecht, Steven
2003-12-01
This longitudinal study separated resource demand effects from those of retention interval in a counting span task among 100 children tested in grade 2 and again in grades 3 and 4. The last-card-large counting span condition had a memory load equivalent to that of the last-card-small condition but required holding the count over a longer retention interval. In all three waves of assessment, the last-card-large condition was found to be less accurate than the last-card-small. A model predicting reading comprehension showed that age was a significant predictor when entered first, accounting for 26% of the variance, but counting span accounted for a further 22% of the variance. Span at Wave 1 accounted for significant unique variance at Wave 2 and at Wave 3. Results were similar for math calculation, with age accounting for 31% of the variance and counting span accounting for a further 34% of the variance. Span at Wave 1 explained unique variance in math at Wave 2 and at Wave 3.
Morton, Michael J; Laffoon, Susan W
2008-06-01
This study extends the market mapping concept introduced by Counts et al. (Counts, M.E., Hsu, F.S., Tewes, F.J., 2006. Development of a commercial cigarette "market map" comparison methodology for evaluating new or non-conventional cigarettes. Regul. Toxicol. Pharmacol. 46, 225-242) to include both temporal cigarette and testing variation and also machine smoking with more intense puffing parameters, as defined by the Massachusetts Department of Public Health (MDPH). The study was conducted over a two year period and involved a total of 23 different commercial cigarette brands from the U.S. marketplace. Market mapping prediction intervals were developed for 40 mainstream cigarette smoke constituents and the potential utility of the market map as a comparison tool for new brands was demonstrated. The over-time character of the data allowed for the variance structure of the smoke constituents to be more completely characterized than is possible with one-time sample data. The variance was partitioned among brand-to-brand differences, temporal differences, and the remaining residual variation using a mixed random and fixed effects model. It was shown that a conventional weighted least squares model typically gave similar prediction intervals to those of the more complicated mixed model. For most constituents there was less difference in the prediction intervals calculated from over-time samples and those calculated from one-time samples than had been anticipated. One-time sample maps may be adequate for many purposes if the user is aware of their limitations. Cigarette tobacco fillers were analyzed for nitrate, nicotine, tobacco-specific nitrosamines, ammonia, chlorogenic acid, and reducing sugars. The filler information was used to improve predicting relationships for several of the smoke constituents, and it was concluded that the effects of filler chemistry on smoke chemistry were partial explanations of the observed brand-to-brand variation.
NASA Technical Reports Server (NTRS)
Smith, H. D.; Mattox, D. M.; Wilcox, W. R.; Subramanian, R. S.; Meyyappan, M.
1982-01-01
An experiment was carried out on board a Space Processing Applications Rocket with the aim of demonstrating bubble migration in molten glass due to a temperature gradient under low gravity conditions. During the flight, a sample of a sodium borate melt with a specific bubble array, contained in a platinum/fused silica cell, was subjected to a well defined temperature gradient for more than 4 minutes. Photographs taken at one second intervals during the experiment clearly show that the bubbles move toward the hot spot on the platinum heater strip. This result is consistent with the predictions of the theory of thermocapillary driven bubble motion.
Ozdemir, Rahmi; Isguder, Rana; Kucuk, Mehmet; Karadeniz, Cem; Ceylan, Gokhan; Katipoglu, Nagehan; Yilmazer, Murat Muhtar; Yozgat, Yilmaz; Mese, Timur; Agin, Hasan
2016-10-01
To assess the feasibility of 12-lead electrocardiographic (ECG) measures such as P wave dispersion (PWd), QT interval, QT dispersion (QTd), Tp-e interval, Tp-e/QT and Tp-e/QTc ratio in predicting poor outcome in patients diagnosed with sepsis in pediatric intensive care unit (PICU). Ninety-three patients diagnosed with sepsis, severe sepsis or septic shock and 103 age- and sex-matched healthy children were enrolled into the study. PWd, QT interval, QTd, Tp-e interval and Tp-e/QT, Tp-e/QTc ratios were obtained from a 12-lead electrocardiogram. PWd, QTd, Tp-e interval and Tp-e/QT, Tp-e/QTc ratios were significantly higher in septic patients compared with the controls. During the study period, 41 patients had died. In multivariate logistic regression analyses, only Tp-e/QT ratio was found to be an independent predictor of mortality. The ECG measurements can predict the poor outcome in patients with sepsis. The Tp-e/QT ratio may be a valuable tool in predicting mortality for patients with sepsis in the PICU. © The Author [2016]. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Vandermolen, Brooke I; Hezelgrave, Natasha L; Smout, Elizabeth M; Abbott, Danielle S; Seed, Paul T; Shennan, Andrew H
2016-10-01
Quantitative fetal fibronectin testing has demonstrated accuracy for prediction of spontaneous preterm birth in asymptomatic women with a history of preterm birth. Predictive accuracy in women with previous cervical surgery (a potentially different risk mechanism) is not known. We sought to compare the predictive accuracy of cervicovaginal fluid quantitative fetal fibronectin and cervical length testing in asymptomatic women with previous cervical surgery to that in women with 1 previous preterm birth. We conducted a prospective blinded secondary analysis of a larger observational study of cervicovaginal fluid quantitative fetal fibronectin concentration in asymptomatic women measured with a Hologic 10Q system (Hologic, Marlborough, MA). Prediction of spontaneous preterm birth (<30, <34, and <37 weeks) with cervicovaginal fluid quantitative fetal fibronectin concentration in primiparous women who had undergone at least 1 invasive cervical procedure (n = 473) was compared with prediction in women who had previous spontaneous preterm birth, preterm prelabor rupture of membranes, or late miscarriage (n = 821). Relationship with cervical length was explored. The rate of spontaneous preterm birth <34 weeks in the cervical surgery group was 3% compared with 9% in previous spontaneous preterm birth group. Receiver operating characteristic curves comparing quantitative fetal fibronectin for prediction at all 3 gestational end points were comparable between the cervical surgery and previous spontaneous preterm birth groups (34 weeks: area under the curve, 0.78 [95% confidence interval 0.64-0.93] vs 0.71 [95% confidence interval 0.64-0.78]; P = .39). 
Prediction of spontaneous preterm birth using cervical length compared with quantitative fetal fibronectin for prediction of preterm birth <34 weeks of gestation offered similar prediction (area under the curve, 0.88 [95% confidence interval 0.79-0.96] vs 0.77 [95% confidence interval 0.62-0.92], P = .12 in the cervical surgery group; and 0.77 [95% confidence interval 0.70-0.84] vs 0.74 [95% confidence interval 0.67-0.81], P = .32 in the previous spontaneous preterm birth group). Prediction of spontaneous preterm birth using cervicovaginal fluid quantitative fetal fibronectin in asymptomatic women with cervical surgery is valid, and has comparative accuracy to that in women with a history of spontaneous preterm birth. Copyright © 2016 Elsevier Inc. All rights reserved.
Hypothesis testing and earthquake prediction.
Jackson, D D
1996-04-30
Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions.
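Test (iii), the direct comparison of two hypotheses, can be sketched as a log-likelihood ratio over space-time-magnitude bins, assuming independent Poisson counts per bin. The bin rates and observed counts below are invented for illustration; the Poisson-per-bin assumption is a common convention in earthquake forecast testing, not something this abstract specifies:

```python
import math

forecast = [0.5, 2.0, 0.1, 1.4]  # expected counts per bin under H1
null = [1.0, 1.0, 1.0, 1.0]      # expected counts per bin under H0
observed = [0, 3, 0, 2]          # actual earthquakes per bin

def poisson_loglik(rates, counts):
    """Total Poisson log-likelihood of the counts given the bin rates."""
    return sum(n * math.log(r) - r - math.lgamma(n + 1)
               for r, n in zip(rates, counts))

# llr > 0 means the observed catalog is more likely under the forecast
# than under the null hypothesis.
llr = poisson_loglik(forecast, observed) - poisson_loglik(null, observed)
```

In a real test the significance of `llr` would be judged against its distribution under catalogs simulated from the null hypothesis.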
Stephen, Julia M; Ranken, Doug F; Aine, Cheryl J
2006-01-01
The sensitivity of visual areas to different temporal frequencies, as well as the functional connections between these areas, was examined using magnetoencephalography (MEG). Alternating circular sinusoids (0, 3.1, 8.7 and 14 Hz) were presented to foveal and peripheral locations in the visual field to target ventral and dorsal stream structures, respectively. It was hypothesized that higher temporal frequencies would preferentially activate dorsal stream structures. To determine the effect of frequency on the cortical response we analyzed the late time interval (220-770 ms) using a multi-dipole spatio-temporal analysis approach to provide source locations and timecourses for each condition. As an exploratory aspect, we performed cross-correlation analysis on the source timecourses to determine which sources responded similarly within conditions. Contrary to predictions, dorsal stream areas were not activated more frequently during high temporal frequency stimulation. However, across cortical sources the frequency-following response showed a difference, with significantly higher power at the second harmonic for the 3.1 and 8.7 Hz stimulation and at the first and second harmonics for the 14 Hz stimulation with this pattern seen robustly in area V1. Cross-correlations of the source timecourses showed that both low- and high-order visual areas, including dorsal and ventral stream areas, were significantly correlated in the late time interval. The results imply that frequency information is transferred to higher-order visual areas without translation. Despite the less complex waveforms seen in the late interval of time, the cross-correlation results show that visual, temporal and parietal cortical areas are intricately involved in late-interval visual processing.
Somers, Judith A.E.; Braakman, Eric; van der Holt, Bronno; Petersen, Eefke J.; Marijt, Erik W.A.; Huisman, Cynthia; Sintnicolaas, Kees; Oudshoorn, Machteld; Groenendijk-Sijnke, Marlies E.; Brand, Anneke; Cornelissen, Jan J.
2014-01-01
Double umbilical cord blood transplantation is increasingly applied in the treatment of adult patients with high-risk hematological malignancies and has been associated with improved engraftment as compared to that provided by single unit cord blood transplantation. The mechanism of improved engraftment is, however, still incompletely understood as only one unit survives. In this multicenter phase II study we evaluated engraftment, early chimerism, recovery of different cell lineages and transplant outcome in 53 patients who underwent double cord blood transplantation preceded by a reduced intensity conditioning regimen. Primary graft failure occurred in one patient. Engraftment was observed in 92% of patients with a median time to neutrophil recovery of 36 days (range, 15–102). Ultimate single donor chimerism was established in 94% of patients. Unit predominance occurred by day 11 after transplantation and early CD4+ T-cell chimerism predicted for unit survival. Total nucleated cell viability was also associated with unit survival. With a median follow up of 35 months (range, 10–51), the cumulative incidence of relapse and non-relapse mortality rate at 2 years were 39% and 19%, respectively. Progression-free survival and overall survival rates at 2 years were 42% (95% confidence interval, 28–56) and 57% (95% confidence interval, 43–70), respectively. Double umbilical cord blood transplantation preceded by a reduced intensity conditioning regimen using cyclophosphamide/fludarabine/4 Gy total body irradiation results in a high engraftment rate with low non-relapse mortality. Moreover, prediction of unit survival by early CD4+ lymphocyte chimerism might suggest a role for CD4+ lymphocyte mediated unit-versus-unit alloreactivity. www.trialregister.nl NTR1573. PMID:25107890
Ramos, Fernando; Robledo, Cristina; Pereira, Arturo; Pedro, Carmen; Benito, Rocío; de Paz, Raquel; Del Rey, Mónica; Insunza, Andrés; Tormo, Mar; Díez-Campelo, María; Xicoy, Blanca; Salido, Eduardo; Sánchez-Del-Real, Javier; Arenillas, Leonor; Florensa, Lourdes; Luño, Elisa; Del Cañizo, Consuelo; Sanz, Guillermo F; María Hernández-Rivas, Jesús
2017-09-01
The International Prognostic Scoring System and its revised form (IPSS-R) are the most widely used indices for prognostic assessment of patients with myelodysplastic syndromes (MDS), but can only partially account for the observed variation in patient outcomes. This study aimed to evaluate the relative contribution of patient condition and mutational status in peripheral blood when added to the IPSS-R, for estimating overall survival and the risk of leukemic transformation in patients with MDS. A prospective cohort (2006-2015) of 200 consecutive patients with MDS were included in the study series and categorized according to the IPSS-R. Patients were further stratified according to patient condition (assessed using the multidimensional Lee index for older adults) and genetic mutations (peripheral blood samples screened using next-generation sequencing). The change in likelihood-ratio was tested in Cox models after adding individual covariates. The addition of the Lee index to the IPSS-R significantly improved prediction of overall survival [hazard ratio (HR) 3.02, 95% confidence interval (CI) 1.96-4.66, P < 0.001), and mutational analysis significantly improved prediction of leukemic evolution (HR 2.64, 1.56-4.46, P < 0.001). Non-leukemic death was strongly linked to patient condition (HR 2.71, 1.72-4.25, P < 0.001), but not to IPSS-R score (P = 0.35) or mutational status (P = 0.75). Adjustment for exposure to disease-modifying therapy, evaluated as a time-dependent covariate, had no effect on the proposed model's predictive ability. In conclusion, patient condition, assessed by the multidimensional Lee index and patient mutational status can improve the prediction of clinical outcomes of patients with MDS already stratified by IPSS-R. © 2017 Wiley Periodicals, Inc.
Differential Effects of the Cannabinoid Agonist WIN55,212-2 on Delay and Trace Eyeblink Conditioning
Steinmetz, Adam B.; Freeman, John H.
2014-01-01
Central cannabinoid-1 receptors (CB1R) play a role in the acquisition of delay eyeblink conditioning but not trace eyeblink conditioning in humans and animals. However, it is not clear why trace conditioning is immune to the effects of cannabinoid receptor compounds. The current study examined the effects of variants of delay and trace conditioning procedures to elucidate the factors that determine the effects of CB1R agonists on eyeblink conditioning. In Experiment 1 rats were administered the cannabinoid agonist WIN55,212-2 during delay, long delay, or trace conditioning. Rats were impaired during delay and long delay but not trace conditioning; the impairment was greater for long delay than delay conditioning. Trace conditioning was further examined in Experiment 2 by manipulating the trace interval and keeping constant the conditioned stimulus (CS) duration. It was found that when the trace interval was 300 ms or less WIN55,212-2 administration impaired the rate of learning. Experiment 3 tested whether the trace interval duration or the relative durations of the CS and trace interval were critical parameters influencing the effects of WIN55,212-2 on eyeblink conditioning. Rats were not impaired with a 100 ms CS, 200 ms trace paradigm but were impaired with a 1000 ms CS, 500 ms trace paradigm, indicating that the duration of the trace interval does not matter but the proportion of the interstimulus interval occupied by the CS relative to the trace period is critical. Taken together, the results indicate that cannabinoid agonists affect cerebellar learning when the CS is longer than the trace interval. PMID:24128358
NASA Astrophysics Data System (ADS)
Kiro, Yael; Goldstein, Steven L.; Garcia-Veigas, Javier; Levy, Elan; Kushnir, Yochanan; Stein, Mordechai; Lazar, Boaz
2017-04-01
Thick halite intervals recovered by the Dead Sea Deep Drilling Project cores show evidence for severely arid climatic conditions in the eastern Mediterranean during the last three interglacials. In particular, the core interval corresponding to the peak of the last interglacial (Marine Isotope Stage 5e or MIS 5e) contains ∼30 m of salt over 85 m of core length, making this the driest known period in that region during the late Quaternary. This study reconstructs Dead Sea lake levels during the salt deposition intervals, based on water and salt budgets derived from the Dead Sea brine composition and the amount of salt in the core. Modern water and salt budgets indicate that halite precipitates only during declining lake levels, while the amount of dissolved Na+ and Cl- accumulates during wetter intervals. Based on the compositions of Dead Sea brines from pore waters and halite fluid inclusions, we estimate that ∼12-16 cm of halite precipitated per meter of lake-level drop. During periods of halite precipitation, the Mg2+ concentration increases and the Na+/Cl- ratio decreases in the lake. Our calculations indicate major lake-level drops of ∼170 m from lake levels of 320 and 310 m below sea level (mbsl) down to lake levels of ∼490 and ∼480 mbsl, during MIS 5e and the Holocene, respectively. These lake levels are much lower than typical interglacial lake levels of around 400 mbsl. These lake-level drops occurred as a result of major decreases in average fresh water runoff, to ∼40% of the modern value (pre-1964, before major fresh water diversions), reflecting severe droughts during which annual precipitation in Jerusalem was lower than 350 mm/y, compared to ∼600 mm/y today. Nevertheless, even during salt intervals, the changes in halite facies and the occurrence of alternating periods of halite and detritus in the Dead Sea core stratigraphy reflect fluctuations between drier and wetter conditions around our estimated average. 
The halite intervals include periods that are richer and poorer in halite, indicating (based on the sedimentation rate) that severe dry conditions with water availability as low as ∼20% of the present day, continued for periods of decades to centuries, and fluctuated with wetter conditions that spanned centuries to millennia when water availability was ∼50-100% of the present day. These conclusions have potential implications for the coming decades, as climate models predict greater aridity in the region.
Weighted regression analysis and interval estimators
Donald W. Seegrist
1974-01-01
A method is presented for deriving the weighted least squares estimators for the parameters of a multiple regression model. Confidence intervals for expected values and prediction intervals for the means of future samples are given.
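The weighted least squares fit and a prediction interval for the mean of a future sample can be sketched as follows. The data, the variance-proportional-to-x weighting, and the future sample size are synthetic assumptions, and the t critical value is a looked-up constant for the residual degrees of freedom:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 4.3, 5.8, 8.4, 9.9, 12.2])
w = 1.0 / x                    # weights: assumed Var(y_i) proportional to x_i

# WLS normal equations: (X'WX) beta = X'Wy
X = np.column_stack([np.ones_like(x), x])
W = np.diag(w)
XtWX = X.T @ W @ X
beta = np.linalg.solve(XtWX, X.T @ W @ y)

resid = y - X @ beta
sigma2 = (resid @ W @ resid) / (len(x) - 2)   # weighted residual variance

x0 = np.array([1.0, 3.5])      # new design point
m = 4                          # size of the future sample
w0 = 1.0 / 3.5                 # weight at the new point (same variance model)
t_crit = 2.776                 # t(0.975, df = 4), looked up

# Prediction interval for the MEAN of m future observations at x0:
# variance = estimation variance + averaged observation variance.
yhat = x0 @ beta
se = np.sqrt(sigma2 * (x0 @ np.linalg.solve(XtWX, x0) + 1.0 / (m * w0)))
lo, hi = yhat - t_crit * se, yhat + t_crit * se
```

Dropping the `1.0 / (m * w0)` term gives the narrower confidence interval for the expected value at `x0`.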
A Class of Prediction-Correction Methods for Time-Varying Convex Optimization
NASA Astrophysics Data System (ADS)
Simonetto, Andrea; Mokhtari, Aryan; Koppel, Alec; Leus, Geert; Ribeiro, Alejandro
2016-09-01
This paper considers unconstrained convex optimization problems with time-varying objective functions. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of $1/h$, where $h$ is the length of the sampling interval. The prediction step is derived by analyzing the iso-residual dynamics of the optimality conditions. The correction step adjusts for the distance between the current prediction and the optimizer at each time step, and consists of one or multiple gradient steps or Newton steps, which respectively correspond to the gradient trajectory tracking (GTT) or Newton trajectory tracking (NTT) algorithms. Under suitable conditions, we establish that the asymptotic error incurred by both proposed methods behaves as $O(h^2)$, and in some cases as $O(h^4)$, which outperforms the state-of-the-art error bound of $O(h)$ for correction-only methods. Moreover, when the characteristics of the objective function variation are not available, we propose approximate gradient and Newton tracking algorithms (AGT and ANT, respectively) that still attain these asymptotic error bounds. Numerical simulations demonstrate the practical utility of the proposed methods and that they improve upon existing techniques by several orders of magnitude.
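The prediction-correction idea can be sketched on a scalar time-varying quadratic whose optimizer drifts as sin(t). This toy example uses the known drift of this particular objective for the prediction step rather than the paper's iso-residual derivation, and the step sizes and iteration counts are illustrative choices:

```python
import math

def grad(x, t):
    """Gradient of f(x; t) = 0.5 * (x - sin t)^2, whose optimizer is sin t."""
    return x - math.sin(t)

h = 0.1        # sampling interval (1/h sampling rate)
alpha = 0.5    # correction step size
n_corr = 3     # gradient correction steps per sample
x = 1.0        # start away from the optimizer x*(0) = 0
t = 0.0
errors = []

for _ in range(200):
    # Prediction: shift the iterate by the optimizer's known drift over [t, t+h]
    # (for this objective, x*(t+h) - x*(t) = sin(t+h) - sin(t)).
    x += math.sin(t + h) - math.sin(t)
    t += h
    # Correction: a few gradient steps on the newly sampled objective.
    for _ in range(n_corr):
        x -= alpha * grad(x, t)
    errors.append(abs(x - math.sin(t)))
```

With an exact prediction step the tracking error contracts geometrically (by a factor of $(1-\alpha)^{n_\text{corr}}$ per sample here), whereas a correction-only scheme would retain an $O(h)$ lag behind the moving optimizer.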
Prediction and uncertainty in human Pavlovian to instrumental transfer.
Trick, Leanne; Hogarth, Lee; Duka, Theodora
2011-05-01
Attentional capture and behavioral control by conditioned stimuli have been dissociated in animals. The current study assessed this dissociation in humans. Participants were trained on a Pavlovian schedule in which 3 visual stimuli, A, B, and C, predicted the occurrence of an aversive noise with 90%, 50%, or 10% probability, respectively. Participants then went on to separate instrumental training in which a key-press response canceled the aversive noise with a .5 probability on a variable interval schedule. Finally, in the transfer phase, the 3 Pavlovian stimuli were presented in this instrumental schedule and were no longer differentially predictive of the outcome. Observing times and gaze dwell time indexed attention to these stimuli in both training and transfer. Aware participants acquired veridical outcome expectancies in training--that is, A > B > C, and these expectancies persisted into transfer. Most important, the transfer effect accorded with these expectancies, A > B > C. By contrast, observing times accorded with uncertainty--that is, they showed B > A = C during training, and B < A = C in the transfer phase. Dwell time bias supported this association between attention and uncertainty, although these data showed a slightly more complicated pattern. Overall, the study suggests that transfer is linked to outcome prediction and is dissociated from attention to conditioned stimuli, which is linked to outcome uncertainty.
The influence of weather on migraine – are migraine attacks predictable?
Hoffmann, Jan; Schirra, Tonio; Lo, Hendra; Neeb, Lars; Reuter, Uwe; Martus, Peter
2015-01-01
Objective: The study aimed at elucidating a potential correlation between specific meteorological variables and the prevalence and intensity of migraine attacks as well as exploring a potential individual predictability of a migraine attack based on meteorological variables and their changes. Methods: Attack prevalence and intensity of 100 migraineurs were correlated with atmospheric pressure, relative air humidity, and ambient temperature in 4-h intervals over 12 consecutive months. For each correlation, meteorological parameters at the time of the migraine attack as well as their variation within the preceding 24 h were analyzed. For migraineurs showing a positive correlation, logistic regression analysis was used to assess the predictability of a migraine attack based on meteorological information. Results: In a subgroup of migraineurs, a significant weather sensitivity could be observed. In contrast, pooled analysis of all patients did not reveal a significant association. An individual prediction of a migraine attack based on meteorological data was not possible, mainly as a result of the small prevalence of attacks. Interpretation: The results suggest that only a subgroup of migraineurs is sensitive to specific weather conditions. Our findings may provide an explanation as to why previous studies, which commonly rely on a pooled analysis, show inconclusive results. The lack of individual attack predictability indicates that the use of preventive measures based on meteorological conditions is not feasible. PMID:25642431
Effects of Temporal Features and Order on the Apparent duration of a Visual Stimulus
Bruno, Aurelio; Ayhan, Inci; Johnston, Alan
2012-01-01
The apparent duration of a visual stimulus has been shown to be influenced by its speed. For low speeds, apparent duration increases linearly with stimulus speed. This effect has been ascribed to the number of changes that occur within a visual interval. Accordingly, a higher number of changes should produce an increase in apparent duration. In order to test this prediction, we asked subjects to compare the relative duration of a 10-Hz drifting comparison stimulus with a standard stimulus that contained a different number of changes in different conditions. The standard could be static, drifting at 10 Hz, or mixed (a combination of variable duration static and drifting intervals). In this last condition the number of changes was intermediate between the static and the continuously drifting stimulus. For all standard durations, the mixed stimulus looked significantly compressed (∼20% reduction) relative to the drifting stimulus. However, no difference emerged between the static (that contained no changes) and the mixed stimuli (which contained an intermediate number of changes). We also observed that when the standard was displayed first, it appeared compressed relative to when it was displayed second with a magnitude that depended on standard duration. These results are at odds with a model of time perception that simply reflects the number of temporal features within an interval in determining the perceived passing of time. PMID:22461778
Drug-physiology interaction and its influence on the QT prolongation-mechanistic modeling study.
Wiśniowska, Barbara; Polak, Sebastian
2018-06-01
The current study is an example of drug-disease interaction modeling where a drug induces a condition which can affect the pharmacodynamics of other concomitantly taken drugs. The electrophysiological effects of hypokalemia and heart rate changes induced by the antiasthmatic drugs were simulated with the use of the cardiac safety simulator. A biophysically detailed model of human cardiac physiology, the ten Tusscher ventricular cardiomyocyte cell model, was employed to generate pseudo-ECG signals and QTc intervals for 44 patients from four clinical studies. Simulated and observed mean QTc values with standard deviation (SD) for each reported study point were compared and differences were analyzed with Student's t test (α = 0.05). The simulated results reflected the QTc interval changes measured in patients, as well as their clinically observed interindividual variability. The QTc interval changes were highly correlated with the change in plasma potassium both in clinical studies and in the simulations (Pearson's correlation coefficient > 0.55). The results suggest that the modeling and simulation approach could provide valuable quantitative insight into the cardiological effect of the potassium and heart rate changes caused by electrophysiologically inactive, non-cardiological drugs. This makes it possible to simulate and predict the joint effect of several risk factors for QT prolongation, e.g., drug-dependent QT prolongation due to ion channel inhibition and the current patient physiological conditions.
Lu, Yuzhen; Du, Changwen; Yu, Changbing; Zhou, Jianmin
2014-08-01
Fast and non-destructive determination of rapeseed protein content carries significant implications in rapeseed production. This study presented the first attempt of using Fourier transform mid-infrared photoacoustic spectroscopy (FTIR-PAS) to quantify protein content of rapeseed. The full-spectrum model was first built using partial least squares (PLS). Interval selection methods including interval partial least squares (iPLS), synergy interval partial least squares (siPLS), backward elimination interval partial least squares (biPLS) and dynamic backward elimination interval partial least squares (dyn-biPLS) were then employed to select the relevant band or band combination for PLS modeling. The full-spectrum PLS model achieved a ratio of prediction to deviation (RPD) of 2.047. In comparison, all interval selection methods produced better results than full-spectrum modeling. siPLS achieved the best predictive accuracy with an RPD of 3.215 when the spectrum was sectioned into 25 intervals, and two intervals (1198-1335 and 1614-1753 cm(-1)) were selected. iPLS outperformed biPLS and dyn-biPLS, and dyn-biPLS performed slightly better than biPLS. FTIR-PAS was verified as a promising analytical tool to quantify rapeseed protein content. Interval selection could extract the relevant individual band or synergy band associated with the sample constituent of interest, and thereby improve the prediction accuracy over the full-spectrum model. © 2013 Society of Chemical Industry.
Cardiorespiratory Information Dynamics during Mental Arithmetic and Sustained Attention
Widjaja, Devy; Montalto, Alessandro; Vlemincx, Elke; Marinazzo, Daniele; Van Huffel, Sabine; Faes, Luca
2015-01-01
An analysis of cardiorespiratory dynamics during mental arithmetic, which induces stress, and sustained attention was conducted using information theory. The information storage and internal information of heart rate variability (HRV) were determined respectively as the self-entropy of the tachogram, and the self-entropy of the tachogram conditioned on knowledge of respiration. The information transfer and cross information from respiration to HRV were assessed as the transfer and cross-entropy, both measures of cardiorespiratory coupling. These information-theoretic measures identified significant nonlinearities in the cardiorespiratory time series. Additionally, it was shown that, although mental stress is related to a reduction in vagal activity, no difference in cardiorespiratory coupling was found when several mental states (rest, mental stress, sustained attention) were compared. However, the self-entropy of HRV conditioned on respiration was very informative for studying the predictability of RR interval series during mental tasks, and showed higher predictability during mental arithmetic compared to sustained attention or rest. PMID:26042824
Manktelow, Bradley N; Seaton, Sarah E; Evans, T Alun
2016-12-01
There is an increasing use of statistical methods, such as funnel plots, to identify poorly performing healthcare providers. Funnel plots comprise the construction of control limits around a benchmark, and providers with outcomes falling outside the limits are investigated as potential outliers. The benchmark is usually estimated from observed data, but uncertainty in this estimate is usually ignored when constructing control limits. In this paper, the use of funnel plots in the presence of uncertainty in the value of the benchmark is reviewed for outcomes from a Binomial distribution. Two methods to derive the control limits are shown: (i) prediction intervals; (ii) tolerance intervals. Tolerance intervals formally include the uncertainty in the value of the benchmark, while prediction intervals do not. The probability properties of 95% control limits derived using each method were investigated through hypothesised scenarios. Neither prediction intervals nor tolerance intervals produce funnel plot control limits that satisfy the nominal probability characteristics when there is uncertainty in the value of the benchmark. This is not necessarily to say that funnel plots have no role to play in healthcare, but that without the development of intervals satisfying the nominal probability characteristics they must be interpreted with care. © The Author(s) 2014.
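For the Binomial setting above, funnel-plot control limits are commonly drawn with a normal approximation. The sketch below is an illustration, not the paper's exact construction: it contrasts limits that ignore benchmark uncertainty (prediction-interval style) with limits that fold the benchmark's sampling variance in (tolerance-interval style).

```python
import math

def funnel_limits(p0, n, z=1.96, benchmark_n=None):
    """95% funnel-plot control limits around a benchmark proportion p0
    for a provider with n cases (normal approximation to the Binomial).
    If benchmark_n is given, the sampling variance of the estimated
    benchmark is added as well (a tolerance-interval-style widening);
    otherwise benchmark uncertainty is ignored, as in a plain
    prediction interval."""
    var = p0 * (1 - p0) / n
    if benchmark_n is not None:
        var += p0 * (1 - p0) / benchmark_n
    half = z * math.sqrt(var)
    return max(0.0, p0 - half), min(1.0, p0 + half)
```

Folding in the benchmark variance always widens the funnel, which is the mechanism behind the two interval types compared in the paper.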
Predicting Operator Execution Times Using CogTool
NASA Technical Reports Server (NTRS)
Santiago-Espada, Yamira; Latorella, Kara A.
2013-01-01
Researchers and developers of NextGen systems can use predictive human performance modeling tools as an initial approach to obtain skilled user performance times analytically, before system testing with users. This paper describes the CogTool models for a two-pilot crew executing two different types of datalink clearance acceptance tasks, on two different simulation platforms. The CogTool time estimates for accepting and executing Required Time of Arrival and Interval Management clearances were compared to empirical data observed in video recordings and registered in simulation files. Results indicate no statistically significant difference between empirical data and the CogTool predictions. A population comparison test found no significant differences between the CogTool estimates and the empirical execution times for any of the four test conditions. We discuss modeling caveats and considerations for applying CogTool to crew performance modeling in advanced cockpit environments.
A dual memory theory of the testing effect.
Rickard, Timothy C; Pan, Steven C
2017-06-05
A new theoretical framework for the testing effect-the finding that retrieval practice is usually more effective for learning than are other strategies-is proposed, the empirically supported tenet of which is that separate memories form as a consequence of study and test events. A simplest case quantitative model is derived from that framework for the case of cued recall. With no free parameters, that model predicts both proportion correct in the test condition and the magnitude of the testing effect across 10 experiments conducted in our laboratory, experiments that varied with respect to material type, retention interval, and performance in the restudy condition. The model also provides the first quantitative accounts of (a) the testing effect as a function of performance in the restudy condition, (b) the upper bound magnitude of the testing effect, (c) the effect of correct answer feedback, (d) the testing effect as a function of retention interval for the cases of feedback and no feedback, and (e) the effect of prior learning method on subsequent learning through testing. Candidate accounts of several other core phenomena in the literature, including test-potentiated learning, recognition versus cued recall training effects, cued versus free recall final test effects, and other select transfer effects, are also proposed. Future prospects and relations to other theories are discussed.
Method of and apparatus for modeling interactions
Budge, Kent G.
2004-01-13
A method and apparatus for modeling interactions can accurately model tribological and other properties and accommodate topological disruptions. Two portions of a problem space are represented, a first with a Lagrangian mesh and a second with an ALE mesh. The ALE and Lagrangian meshes are constructed so that each node on the surface of the Lagrangian mesh is in a known correspondence with adjacent nodes in the ALE mesh. The interaction can be predicted for a time interval. Material flow within the ALE mesh can accurately model complex interactions such as bifurcation. After prediction, nodes in the ALE mesh in correspondence with nodes on the surface of the Lagrangian mesh can be mapped so that they are once again adjacent to their corresponding Lagrangian mesh nodes. The ALE mesh can then be smoothed to reduce mesh distortion that might reduce the accuracy or efficiency of subsequent prediction steps. The process, from prediction through mapping and smoothing, can be repeated until a terminal condition is reached.
James, Katherine M; Cowl, Clayton T; Tilburt, Jon C; Sinicrope, Pamela S; Robinson, Marguerite E; Frimannsdottir, Katrin R; Tiedje, Kristina; Koenig, Barbara A
2011-10-01
To assess the impact of direct-to-consumer (DTC) predictive genomic risk information on perceived risk and worry in the context of routine clinical care. Patients attending a preventive medicine clinic between June 1 and December 18, 2009, were randomly assigned to receive either genomic risk information from a DTC product plus usual care (n=74) or usual care alone (n=76). At intervals of 1 week and 1 year after their clinic visit, participants completed surveys containing validated measures of risk perception and levels of worry associated with the 12 conditions assessed by the DTC product. Of 345 patients approached, 150 (43%) agreed to participate, 64 (19%) refused, and 131 (38%) did not respond. Compared with those receiving usual care, participants who received genomic risk information initially rated their risk as higher for 4 conditions (abdominal aneurysm [P=.001], Graves disease [P=.04], obesity [P=.01], and osteoarthritis [P=.04]) and lower for one (prostate cancer [P=.02]). Although differences were not significant, they also reported higher levels of worry for 7 conditions and lower levels for 5 others. At 1 year, there were no significant differences between groups. Predictive genomic risk information modestly influences risk perception and worry. The extent and direction of this influence may depend on the condition being tested and its baseline prominence in preventive health care and may attenuate with time.
Emotional arousal predicts intertemporal choice
Lempert, Karolina M.; Johnson, Eli; Phelps, Elizabeth A.
2016-01-01
People generally prefer immediate rewards to rewards received after a delay, often even when the delayed reward is larger. This phenomenon is known as temporal discounting. It has been suggested that preferences for immediate rewards may be due to their being more concrete than delayed rewards. This concreteness may evoke an enhanced emotional response. Indeed, manipulating the representation of a future reward to make it more concrete has been shown to heighten the reward’s subjective emotional intensity, making people more likely to choose it. Here we use an objective measure of arousal – pupil dilation – to investigate if emotional arousal mediates the influence of delayed reward concreteness on choice. We recorded pupil dilation responses while participants made choices between immediate and delayed rewards. We manipulated concreteness through time interval framing: delayed rewards were presented either with the date on which they would be received (e.g., “$30, May 3”; DATE condition, more concrete) or in terms of delay to receipt (e.g., “$30, 7 days”; DAYS condition, less concrete). Contrary to prior work, participants were not overall more patient in the DATE condition. However, there was individual variability in response to time framing, and this variability was predicted by differences in pupil dilation between conditions. Emotional arousal increased as the subjective value of delayed rewards increased, and predicted choice of the delayed reward on each trial. This study advances our understanding of the role of emotion in temporal discounting. PMID:26882337
Performance factors in associative learning: assessment of the sometimes competing retrieval model.
Witnauer, James E; Wojick, Brittany M; Polack, Cody W; Miller, Ralph R
2012-09-01
Previous simulations revealed that the sometimes competing retrieval model (SOCR; Stout & Miller, Psychological Review, 114, 759-783, 2007), which assumes local error reduction, can explain many cue interaction phenomena that elude traditional associative theories based on total error reduction. Here, we applied SOCR to a new set of Pavlovian phenomena. Simulations used a single set of fixed parameters to simulate each basic effect (e.g., blocking) and, for specific experiments using different procedures, used fitted parameters discovered through hill climbing. In simulation 1, SOCR was successfully applied to basic acquisition, including the overtraining effect, which is context dependent. In simulation 2, we applied SOCR to basic extinction and renewal. SOCR anticipated these effects with both fixed parameters and best-fitting parameters, although the renewal effects were weaker than those observed in some experiments. In simulation 3a, feature-negative training was simulated, including the often observed transition from second-order conditioning to conditioned inhibition. In simulation 3b, SOCR predicted the observation that conditioned inhibition after feature-negative and differential conditioning depends on intertrial interval. In simulation 3c, SOCR successfully predicted failure of conditioned inhibition to extinguish with presentations of the inhibitor alone under most circumstances. In simulation 4, cue competition, including blocking (4a), recovery from relative validity (4b), and unblocking (4c), was simulated. In simulation 5, SOCR correctly predicted that inhibitors gain more behavioral control than do excitors when they are trained in compound. Simulation 6 demonstrated that SOCR explains the slower acquisition observed following CS-weak shock pairings.
Kimura, Kenta; Kimura, Motohiro; Iwaki, Sunao
2016-10-01
The present study aimed to investigate whether or not the evaluative processing of action feedback can be modulated by temporal prediction. For this purpose, we examined the effects of the predictability of the timing of action feedback on an ERP effect that indexed the evaluative processing of action feedback, that is, an ERP effect that has been interpreted as a feedback-related negativity (FRN) elicited by "bad" action feedback or a reward positivity (RewP) elicited by "good" action feedback. In two types of experimental blocks, the participants performed a gambling task in which they chose one of two cards and received an action feedback that indicated monetary gain or loss. In fixed blocks, the time interval between the participant's choice and the onset of the action feedback was fixed at 0, 500, or 1,000 ms in separate blocks; thus, the timing of action feedback was predictable. In mixed blocks, the time interval was randomly chosen from the same three intervals with equal probability; thus, the timing was less predictable. The results showed that the FRN/RewP was smaller in mixed than fixed blocks for the 0-ms interval trial, whereas there was no difference between the two block types for the 500-ms and 1,000-ms interval trials. Interestingly, the smaller FRN/RewP was due to the modulation of gain ERPs rather than loss ERPs. These results suggest that temporal prediction can modulate the evaluative processing of action feedback, and particularly good feedback, such as that which indicates monetary gain. © 2016 Society for Psychophysiological Research.
Bursch, B; Lester, P; Jiang, L; Rotheram-Borus, M J; Weiss, R
2008-07-01
The objective of this study was to identify salient parent and adolescent psychosocial factors related to somatic symptoms in adolescents. As part of a larger intervention study conducted in New York, 409 adolescents were recruited from 269 parents with HIV. A longitudinal model predicted adolescent somatization scores six years after baseline assessment. Adolescent somatic symptoms were assessed at baseline and at 3-month intervals for the first two years and then at 6-month intervals using the Brief Symptom Inventory. Baseline data from adolescents and parents were used to predict adolescent somatic symptoms. Variables related to increased adolescent somatic symptoms over six years included being younger and female; an increased number of adolescent medical hospitalizations; more stressful life events; adolescent perception of a highly rejecting parenting style; more parent-youth conflict; no experience of parental death; and parental distress over their own pain symptoms. Our findings extend the literature by virtue of the longitudinal design; inclusion of both parent and child variables in one statistical model; identification of study participants by their potentially stressful living condition rather than by disease or somatic symptom status; and inclusion of serious parental illness and death in the study.
Confidence intervals in Flow Forecasting by using artificial neural networks
NASA Astrophysics Data System (ADS)
Panagoulia, Dionysia; Tsekouras, George
2014-05-01
One of the major inadequacies in implementation of Artificial Neural Networks (ANNs) for flow forecasting is the development of confidence intervals, because the relevant estimation cannot be implemented directly, in contrast to the classical forecasting methods. The variation in the ANN output is a measure of uncertainty in the model predictions based on the training data set. Different methods for uncertainty analysis, such as bootstrap, Bayesian, and Monte Carlo, have already been proposed for hydrologic and geophysical models, while methods for confidence intervals, such as error output, re-sampling, and multi-linear regression adapted to ANNs, have been used for power load forecasting [1-2]. The aim of this paper is to present the re-sampling method for ANN prediction models and to develop it for next-day flow forecasting. The re-sampling method is based on the ascending sorting of the errors between real and predicted values for all input vectors. The cumulative sample distribution function of the prediction errors is calculated and the confidence intervals are estimated by keeping the intermediate values, rejecting the extreme values according to the desired confidence levels, and holding the intervals symmetrical in probability. For application of the confidence intervals issue, input vectors are used from the Mesochora catchment in western-central Greece. The ANN's training algorithm is the stochastic training back-propagation process with decreasing functions of learning rate and momentum term, for which an optimization process is conducted regarding the crucial parameter values, such as the number of neurons, the kind of activation functions, the initial values and time parameters of learning rate and momentum term, etc.
Input variables are historical data of previous days, such as flows, nonlinearly weather related temperatures and nonlinearly weather related rainfalls, based on correlation analysis between the flow under prediction and each implicit input variable of different ANN structures [3]. The performance of each ANN structure is evaluated by the voting analysis based on eleven criteria, which are the root mean square error (RMSE), the correlation index (R), the mean absolute percentage error (MAPE), the mean percentage error (MPE), the mean error (ME), the percentage volume error (VE), the percentage error in peak (MF), the normalized mean bias error (NMBE), the normalized root mean square error (NRMSE), the Nash-Sutcliffe model efficiency coefficient (E) and the modified Nash-Sutcliffe model efficiency coefficient (E1). The next day flow for the test set is calculated using the best ANN structure's model. Consequently, the confidence intervals of various confidence levels for training, evaluation and test sets are compared in order to explore the generalisation dynamics of confidence intervals from training and evaluation sets. [1] H.S. Hippert, C.E. Pedreira, R.C. Souza, "Neural networks for short-term load forecasting: A review and evaluation," IEEE Trans. on Power Systems, vol. 16, no. 1, 2001, pp. 44-55. [2] G. J. Tsekouras, N.E. Mastorakis, F.D. Kanellos, V.T. Kontargyri, C.D. Tsirekis, I.S. Karanasiou, Ch.N. Elias, A.D. Salis, P.A. Kontaxis, A.A. Gialketsi: "Short term load forecasting in Greek interconnected power system using ANN: Confidence Interval using a novel re-sampling technique with corrective Factor", WSEAS International Conference on Circuits, Systems, Electronics, Control & Signal Processing, (CSECS '10), Vouliagmeni, Athens, Greece, December 29-31, 2010. [3] D. Panagoulia, I. Trichakis, G. J. 
Tsekouras: "Flow Forecasting via Artificial Neural Networks - A Study for Input Variables conditioned on atmospheric circulation", European Geosciences Union, General Assembly 2012 (NH1.1 / AS1.16 - Extreme meteorological and hydrological events induced by severe weather and climate change), Vienna, Austria, 22-27 April 2012.
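The re-sampling construction described above (ascending sort of the errors, empirical distribution, symmetric-in-probability trimming) can be sketched directly; the nearest-index rounding used here is a simplifying assumption, not the paper's exact quantile rule.

```python
def resampling_interval(errors, level=0.95):
    """Empirical interval for a new prediction error: sort the past
    (real - predicted) errors ascending, discard the extreme tails
    symmetrically in probability, and keep the central band."""
    e = sorted(errors)
    n = len(e)
    alpha = 1.0 - level
    lo = round(alpha / 2 * (n - 1))          # lower-tail cut index
    hi = round((1.0 - alpha / 2) * (n - 1))  # upper-tail cut index
    return e[lo], e[hi]

def flow_prediction_interval(point_forecast, errors, level=0.95):
    # shift the error band by the ANN's point forecast for the next day
    lo, hi = resampling_interval(errors, level)
    return point_forecast + lo, point_forecast + hi
```

Because the band comes from the empirical error distribution, it needs no parametric assumption about the ANN residuals, which is the appeal of the method for flow forecasting.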
An Efficient Interval Type-2 Fuzzy CMAC for Chaos Time-Series Prediction and Synchronization.
Lee, Ching-Hung; Chang, Feng-Yu; Lin, Chih-Min
2014-03-01
This paper aims to propose a more efficient control algorithm for chaos time-series prediction and synchronization. A novel type-2 fuzzy cerebellar model articulation controller (T2FCMAC) is proposed. In some special cases, this T2FCMAC can be reduced to an interval type-2 fuzzy neural network, a fuzzy neural network, and a fuzzy cerebellar model articulation controller (CMAC). So, this T2FCMAC is a more generalized network with better learning ability, and is thus used for the chaos time-series prediction and synchronization. Moreover, this T2FCMAC realizes the un-normalized interval type-2 fuzzy logic system based on the structure of the CMAC. It can provide better capabilities for handling uncertainty and more design degrees of freedom than the traditional type-1 fuzzy CMAC. Unlike most interval type-2 fuzzy systems, the type-reduction of the T2FCMAC is bypassed due to the un-normalized interval type-2 fuzzy logic system. This gives the T2FCMAC lower computational complexity and makes it more practical. For chaos time-series prediction and synchronization applications, the training architectures with corresponding convergence analyses and optimal learning rates based on the Lyapunov stability approach are introduced. Finally, two illustrated examples are presented to demonstrate the performance of the proposed T2FCMAC.
Effects of metric hierarchy and rhyme predictability on word duration in The Cat in the Hat.
Breen, Mara
2018-05-01
Word durations convey many types of linguistic information, including intrinsic lexical features like length and frequency and contextual features like syntactic and semantic structure. The current study was designed to investigate whether hierarchical metric structure and rhyme predictability account for durational variation over and above other features in productions of a rhyming, metrically-regular children's book: The Cat in the Hat (Dr. Seuss, 1957). One-syllable word durations and inter-onset intervals were modeled as functions of segment number, lexical frequency, word class, syntactic structure, repetition, and font emphasis. Consistent with prior work, factors predicting longer word durations and inter-onset intervals included more phonemes, lower frequency, first mention, alignment with a syntactic boundary, and capitalization. A model parameter corresponding to metric grid height improved model fit of word durations and inter-onset intervals. Specifically, speakers realized five levels of metric hierarchy with inter-onset intervals such that interval duration increased linearly with increased height in the metric hierarchy. Conversely, speakers realized only three levels of metric hierarchy with word duration, demonstrating that they shortened the highly predictable rhyme resolutions. These results further understanding of the factors that affect spoken word duration, and demonstrate the myriad cues that children receive about linguistic structure from nursery rhymes. Copyright © 2018 Elsevier B.V. All rights reserved.
On the use and the performance of software reliability growth models
NASA Technical Reports Server (NTRS)
Keiller, Peter A.; Miller, Douglas R.
1991-01-01
We address the problem of predicting future failures for a piece of software. The number of failures occurring during a finite future time interval is predicted from the number of failures observed during an initial period of usage by using software reliability growth models. Two different methods for using the models are considered: straightforward use of individual models, and dynamic selection among models based on goodness-of-fit and quality-of-prediction criteria. Performance is judged by the error of the predicted number of failures over future finite time intervals, relative to the number of failures eventually observed during those intervals. Six of the former models and eight of the latter are evaluated, based on their performance on twenty data sets. Many open questions remain regarding the use and the performance of software reliability growth models.
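The abstract does not fix a particular growth model; as one hypothetical example, the widely used Goel-Okumoto NHPP model yields a predicted failure count for a future interval, which can then be scored by the relative-error measure the study uses. Parameter values below are illustrative, not fitted to the twenty data sets.

```python
import math

def go_mean(t, a, b):
    # Goel-Okumoto NHPP mean value function: expected cumulative
    # failures by time t, with a = eventual failure total and
    # b = per-fault detection rate
    return a * (1.0 - math.exp(-b * t))

def predicted_failures(t1, t2, a, b):
    # model-based prediction of the failure count in the future
    # interval (t1, t2]
    return go_mean(t2, a, b) - go_mean(t1, a, b)

def relative_error(predicted, observed):
    # the performance measure: signed error relative to the number
    # of failures eventually observed in the interval
    return (predicted - observed) / observed
```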
Cure modeling in real-time prediction: How much does it help?
Ying, Gui-Shuang; Zhang, Qiang; Lan, Yu; Li, Yimei; Heitjan, Daniel F
2017-08-01
Various parametric and nonparametric modeling approaches exist for real-time prediction in time-to-event clinical trials. Recently, Chen (2016 BMC Biomedical Research Methodology 16) proposed a prediction method based on parametric cure-mixture modeling, intending to cover those situations where it appears that a non-negligible fraction of subjects is cured. In this article we apply a Weibull cure-mixture model to create predictions, demonstrating the approach in RTOG 0129, a randomized trial in head-and-neck cancer. We compare the ultimate realized data in RTOG 0129 to interim predictions from a Weibull cure-mixture model, a standard Weibull model without a cure component, and a nonparametric model based on the Bayesian bootstrap. The standard Weibull model predicted that events would occur earlier than the Weibull cure-mixture model, but the difference was unremarkable until late in the trial when evidence for a cure became clear. Nonparametric predictions often gave undefined predictions or infinite prediction intervals, particularly at early stages of the trial. Simulations suggest that cure modeling can yield better-calibrated prediction intervals when there is a cured component, or the appearance of a cured component, but at a substantial cost in the average width of the intervals. Copyright © 2017 Elsevier Inc. All rights reserved.
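A minimal sketch of the Weibull cure-mixture idea follows; the parameter values are illustrative, not fitted to RTOG 0129. The survival curve plateaus at the cure fraction, which is what eventually separates the cure-mixture event predictions from the standard Weibull ones late in a trial.

```python
import math

def cure_mixture_survival(t, cure_frac, shape, scale):
    """Weibull cure-mixture survival: a fraction `cure_frac` of
    subjects never has the event; the rest follow a
    Weibull(shape, scale) event-time distribution."""
    return cure_frac + (1.0 - cure_frac) * math.exp(-((t / scale) ** shape))

def expected_events(n_at_risk, t, cure_frac, shape, scale):
    # point prediction of the number of events by follow-up time t
    return n_at_risk * (1.0 - cure_mixture_survival(t, cure_frac, shape, scale))
```

Setting cure_frac = 0 recovers the standard Weibull model, so the comparison in the abstract amounts to whether that extra parameter is worth its cost in interval width.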
Häggström, J; Andersson, Å O; Falk, T; Nilsfors, L; Olsson, U; Kresken, J G; Höglund, K; Rishniw, M; Tidholm, A; Ljungvall, I
2016-09-01
Echocardiography is a cost-efficient method to screen cats for presence of heart disease. Current reference intervals for feline cardiac dimensions do not account for body weight (BW). The aims were to study the effect of BW on heart rate (HR), aortic (Ao), left atrial (LA) and ventricular (LV) linear dimensions in cats, and to calculate 95% prediction intervals for these variables in normal adult pure-bred cats. The study comprised 19,866 pure-bred cats. Clinical data from heart screens conducted between 1999 and 2014 were included. Associations between BW, HR, and cardiac dimensions were assessed using univariate linear models and allometric scaling, including all cats, and only those considered normal, respectively. Prediction intervals were created using 95% confidence intervals obtained from regression curves. Associations between BW and echocardiographic dimensions were best described by allometric scaling, and all dimensions increased with increasing BW (all P<0.001). Strongest associations were found between BW and Ao, LV end diastolic, LA dimensions, and thickness of LV free wall. Weak linear associations were found between BW and HR and left atrial to aortic ratio (LA:Ao), for which HR decreased with increasing BW (P<0.001), and LA:Ao increased with increasing BW (P<0.001). Marginal differences were found for prediction formulas and prediction intervals when the dataset included all cats versus only those considered normal. BW had a clinically relevant effect on echocardiographic dimensions in cats, and BW based 95% prediction intervals may help in screening cats for heart disease. Copyright © 2016 The Authors. Journal of Veterinary Internal Medicine published by Wiley Periodicals, Inc. on behalf of the American College of Veterinary Internal Medicine.
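Allometric scaling of a cardiac dimension against BW amounts to a straight-line fit on log-log axes. The sketch below shows that fit plus a naive prediction band; the band is a simplification of the paper's 95% prediction intervals (which were derived from regression confidence curves), and all data here are made up for illustration.

```python
import math

def fit_allometric(bw, y):
    """Least-squares fit of log(y) = log(a) + b * log(bw), i.e. the
    allometric model y = a * bw**b."""
    lx = [math.log(v) for v in bw]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((x - mx) * (v - my) for x, v in zip(lx, ly))
         / sum((x - mx) ** 2 for x in lx))
    a = math.exp(my - b * mx)
    return a, b

def allometric_interval(a, b, bw, log_resid_sd, z=1.96):
    # naive 95% band: point prediction times exp(+/- z * SD of the
    # log-scale residuals), ignoring parameter-estimation uncertainty
    point = a * bw ** b
    return point * math.exp(-z * log_resid_sd), point * math.exp(z * log_resid_sd)
```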
Differentiating the origin of outflow tract ventricular arrhythmia using a simple, novel approach.
Efimova, Elena; Dinov, Borislav; Acou, Willem-Jan; Schirripa, Valentina; Kornej, Jelena; Kosiuk, Jedrzej; Rolf, Sascha; Sommer, Philipp; Richter, Sergio; Bollmann, Andreas; Hindricks, Gerhard; Arya, Arash
2015-07-01
Numerous electrocardiographic (ECG) criteria have been proposed to identify localization of outflow tract ventricular arrhythmias (OT-VAs); however, in some cases, it is difficult to accurately localize the origin of OT-VA using the surface ECG. The purpose of this study was to assess a simple criterion for localization of OT-VAs during electrophysiology study. We measured the interval from the onset of the earliest QRS complex of premature ventricular contractions (PVCs) to the distal right ventricular apical signal (the QRS-RVA interval) in 66 patients (31 men aged 53.3 ± 14.0 years; right ventricular outflow tract [RVOT] origin in 37) referred for ablation of symptomatic outflow tract PVCs. We prospectively validated this criterion in 39 patients (22 men aged 52 ± 15 years; RVOT origin in 19). Compared with patients with RVOT PVCs, the QRS-RVA interval was significantly longer in patients with left ventricular outflow tract (LVOT) PVCs (70 ± 14 vs 33.4 ± 10 ms, P < .001). Receiver operating characteristic analysis showed that a QRS-RVA interval ≥49 ms had sensitivity, specificity, and positive and negative predictive values of 100%, 94.6%, 93.5%, and 100%, respectively, for prediction of an LVOT origin. The same analysis in the validation cohort showed sensitivity, specificity, and positive and negative predictive values of 94.7%, 95%, 95%, and 94.7%, respectively. When these data were combined, a QRS-RVA interval ≥49 ms had sensitivity, specificity, and positive and negative predictive values of 98%, 94.6%, 94.1%, and 98.1%, respectively, for prediction of an LVOT origin. A QRS-RVA interval ≥49 ms suggests an LVOT origin. The QRS-RVA interval is a simple and accurate criterion for differentiating the origin of outflow tract arrhythmia during electrophysiology study; however, the accuracy of this criterion in identifying OT-VA from the right coronary cusp is limited. Copyright © 2015 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.
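The sensitivity, specificity, PPV and NPV reported above all derive from a single 2x2 classification table. A minimal sketch with hypothetical counts (the abstract reports percentages, not the raw table):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    # Sensitivity, specificity, PPV and NPV from a 2x2 table
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts for the rule "QRS-RVA >= 49 ms => LVOT origin";
# these are NOT the study's actual patient counts
m = diagnostic_metrics(tp=28, fp=2, fn=1, tn=35)
for name, value in m.items():
    print(f"{name}: {value:.1%}")
```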
A predictive model of geosynchronous magnetopause crossings
NASA Astrophysics Data System (ADS)
Dmitriev, A.; Suvorova, A.; Chao, J.-K.
2011-05-01
We have developed a model predicting whether or not the magnetopause crosses geosynchronous orbit at a given location for a given solar wind pressure Psw, Bz component of the interplanetary magnetic field (IMF), and geomagnetic conditions characterized by the 1 min SYM-H index. The model is based on more than 300 geosynchronous magnetopause crossings (GMCs) and about 6000 min during which geosynchronous satellites of the GOES and Los Alamos National Laboratory (LANL) series were located in the magnetosheath (so-called MSh intervals) in 1994-2001. Minimizing the Psw required for GMCs and MSh intervals at various locations, Bz, and SYM-H allows describing both the effect of magnetopause dawn-dusk asymmetry and the saturation of the Bz influence for very large southward IMF. The asymmetry is strong for large negative Bz and almost disappears when Bz is positive. We found that the larger the amplitude of negative SYM-H, the lower the solar wind pressure required for GMCs. We attribute this effect to a depletion of the dayside magnetic field by a storm time intensification of the cross-tail current. It is also found that the magnitude of the threshold for Bz saturation increases with the SYM-H index, such that for small negative and positive SYM-H the effect of saturation diminishes. This supports the idea that enhanced thermal pressure of the magnetospheric plasma and ring current particles during magnetic storms results in the saturation of the magnetic effect of the IMF Bz at the dayside magnetopause. A noticeable advantage in prediction capability over other magnetopause models makes the model useful for space weather predictions.
Oza, Veeral M; Skeans, Jacob M; Muscarella, Peter; Walker, Jon P; Sklaw, Brett C; Cronley, Kevin M; El-Dika, Samer; Swanson, Benjamin; Hinton, Alice; Conwell, Darwin L; Krishna, Somashekar G
2015-08-01
Our objective was to delineate predictive factors differentiating groove pancreatitis (GP) from other lesions involving the head of the pancreas (HOP). A case-control study of patients older than 10 years was performed comparing patients with GP to those with other surgically resected HOP lesions. Thirteen patients with GP (mean ± SD age, 51.9 ± 10.5 years; 11 males [84.6%]), all with a history of smoking (mean, 37.54 ± 17.8 pack-years), were identified. Twelve patients (92.3%) had a history of heavy alcohol drinking (heavy alcohol [EtOH]). The mean lesion size was 2.6 ± 1.1 cm, and the CA 19-9 was elevated (>37 IU/mL) in 5 patients (45.5%). The most common histopathologic condition was duodenal wall cyst with myofibroblastic proliferation and changes of chronic pancreatitis in the HOP. Univariate analysis revealed that decreasing age, male sex, weight loss, nausea/vomiting, heavy EtOH, smoking, and a history of chronic pancreatitis were predictive of GP. A multivariate analysis among smokers demonstrated that weight loss (P = 0.006; odds ratio, 11.96; 95% confidence interval, 2.1-70.2) and heavy EtOH (P < 0.001; odds ratio, 82.2; 95% confidence interval, 9.16-738.1) were most predictive of GP. Compared to pancreatic adenocarcinoma (n = 183), weight loss and heavy EtOH remained predictive of GP. Groove pancreatitis in the HOP is associated with a history of heavy EtOH and weight loss. In the absence of these features, it is essential to rule out a malignant lesion.
Transient stress-coupling between the 1992 Landers and 1999 Hector Mine, California, earthquakes
Masterlark, Timothy; Wang, H.F.
2002-01-01
A three-dimensional finite-element model (FEM) of the Mojave block region in southern California is constructed to investigate transient stress-coupling between the 1992 Landers and 1999 Hector Mine earthquakes. The FEM simulates a poroelastic upper-crust layer coupled to a viscoelastic lower-crust layer, which is decoupled from the upper mantle. FEM predictions of the transient mechanical behavior of the crust are constrained by global positioning system (GPS) data, interferometric synthetic aperture radar (InSAR) images, fluid-pressure data from water wells, and the dislocation source of the 1999 Hector Mine earthquake. Two time-dependent parameters, the hydraulic diffusivity of the upper crust and the viscosity of the lower crust, are calibrated to 10⁻² m²·sec⁻¹ and 5 × 10¹⁸ Pa·sec, respectively. The hydraulic diffusivity is relatively insensitive to heterogeneous fault-zone permeability specifications and fluid-flow boundary conditions along the elastic free-surface at the top of the problem domain. The calibrated FEM is used to predict the evolution of Coulomb stress during the interval separating the 1992 Landers and 1999 Hector Mine earthquakes. The predicted change in Coulomb stress near the hypocenter of the Hector Mine earthquake increases from 0.02 to 0.05 MPa during the 7-yr interval separating the two events. This increase is primarily attributed to the recovery of decreased excess fluid pressure from the 1992 Landers coseismic (undrained) strain field. Coulomb stress predictions are insensitive to small variations of fault-plane dip and hypocentral depth estimations of the Hector Mine rupture.
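The quantity being tracked is the textbook Coulomb failure stress change, dCFS = d_tau + mu·(d_sigma_n + d_p). A minimal arithmetic sketch of how pore-pressure recovery raises dCFS, with purely illustrative numbers rather than the study's FEM outputs:

```python
def coulomb_stress_change(d_shear, d_normal, d_pore, mu=0.6):
    # Textbook Coulomb failure stress change (MPa):
    #   dCFS = d_tau + mu * (d_sigma_n + d_p)
    # with d_sigma_n positive for unclamping and d_p the pore-pressure
    # change; all input values below are illustrative, not FEM outputs
    return d_shear + mu * (d_normal + d_pore)

# Recovery of coseismically lowered pore pressure raises dCFS over time
early = coulomb_stress_change(0.03, 0.01, -0.02)  # shortly after Landers
late = coulomb_stress_change(0.03, 0.01, 0.00)    # after pressure recovery
print(f"dCFS: {early:.3f} MPa shortly after -> {late:.3f} MPa later")
```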
ERIC Educational Resources Information Center
Hinderliter, Charles F.; Andrews, Amy; Misanin, James R.
2012-01-01
In conditioned taste aversion (CTA), a taste, the conditioned stimulus (CS), is paired with an illness-inducing stimulus, the unconditioned stimulus (US), to produce CS-US associations at very long (hours) intervals, a result that appears to violate the law of contiguity. The specific length of the maximum effective trace interval that has been…
Extinction of Pavlovian conditioning: The influence of trial number and reinforcement history.
Chan, C K J; Harris, Justin A
2017-08-01
Pavlovian conditioning is sensitive to the temporal relationship between the conditioned stimulus (CS) and the unconditioned stimulus (US). This has motivated models that describe learning as a process that continuously updates associative strength during the trial or specifically encodes the CS-US interval. These models predict that extinction of responding is also continuous, such that response loss is proportional to the cumulative duration of exposure to the CS without the US. We review evidence showing that this prediction is incorrect, and that extinction is trial-based rather than time-based. We also present two experiments that test the importance of trials versus time on the Partial Reinforcement Extinction Effect (PREE), in which responding extinguishes more slowly for a CS that was inconsistently reinforced with the US than for a consistently reinforced one. We show that increasing the number of extinction trials of the partially reinforced CS, relative to the consistently reinforced CS, overcomes the PREE. However, increasing the duration of extinction trials by the same amount does not overcome the PREE. We conclude that animals learn about the likelihood of the US per trial during conditioning, and learn trial-by-trial about the absence of the US during extinction. Moreover, what they learn about the likelihood of the US during conditioning affects how sensitive they are to the absence of the US during extinction. Copyright © 2017 Elsevier B.V. All rights reserved.
Pastore, Gianni; Maines, Massimiliano; Marcantoni, Lina; Zanon, Francesco; Noventa, Franco; Corbucci, Giorgio; Baracca, Enrico; Aggio, Silvio; Picariello, Claudio; Lanza, Daniela; Rigatelli, Gianluca; Carraro, Mauro; Roncon, Loris; Barold, S Serge
2016-12-01
Estimating left ventricular electrical delay (Q-LV) from a 12-lead ECG may be important in evaluating cardiac resynchronization therapy (CRT). The purpose of this study was to assess the impact of Q-LV interval on ECG configuration. One hundred ninety-two consecutive patients undergoing CRT implantation were divided electrocardiographically into 3 groups: left bundle branch block (LBBB), right bundle branch block (RBBB), and nonspecific intraventricular conduction delay (IVCD). The IVCD group was further subdivided into 81 patients with left (L)-IVCD and 15 patients with right (R)-IVCD (resembling RBBB, but without S wave in leads I and aVL). The Q-LV interval in the different groups and the relationship between ECG parameters and the maximum Q-LV interval were analyzed. Patients with LBBB presented a long Q-LV interval (147.7 ± 14.6 ms, all exceeding cutoff value of 110 ms), whereas RBBB patients presented a very short Q-LV interval (75.2 ± 16.3 ms, all <110 ms). Patients with an IVCD displayed a wide range of Q-LV intervals. In L-IVCD, mid-QRS notching/slurring showed the strongest correlation with a longer Q-LV interval, followed, in decreasing order, by QRS duration >150 ms and intrinsicoid deflection >60 ms. Isolated mid-QRS notching/slurring predicted Q-LV interval >110 ms in 68% of patients. The R-IVCD group presented an unexpectedly longer Q-LV interval (127.0 ± 12.5 ms; 13/15 patients had Q-LV >110 ms). Patients with LBBB have a very prolonged Q-LV interval. Mid-QRS notching in lateral leads strongly predicts a longer Q-LV interval in L-IVCD patients. Patients with R-IVCD constitute a subgroup of patients with a long Q-LV interval. Copyright © 2016 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.
Yoon, Hyun; Gi, Mi Young; Cha, Ju Ae; Yoo, Chan Uk; Park, Sang Muk
2018-03-01
This study assessed the association of metabolic syndrome and metabolic syndrome score with the predicted forced vital capacity (FVC) and predicted forced expiratory volume in 1 s (FEV1) values in Korean non-smoking adults. We analysed data obtained from 6684 adults during the 2013-2015 Korean National Health and Nutrition Examination Survey. After adjustment for related variables, metabolic syndrome (p < 0.001) and metabolic syndrome score (p < 0.001) were found to be inversely associated with the predicted FVC and FEV1 values. The odds ratios of restrictive pulmonary disease (predicted FVC < 80.0% with FEV1/FVC ⩾ 70.0%) by metabolic syndrome score, with metabolic syndrome score 0 as the reference group, showed no significance for metabolic syndrome score 1 [1.061 (95% confidence interval, 0.755-1.490)] or metabolic syndrome score 2 [1.247 (95% confidence interval, 0.890-1.747)], but showed significance for metabolic syndrome score 3 [1.433 (95% confidence interval, 1.010-2.033)] and metabolic syndrome score ⩾ 4 [1.760 (95% confidence interval, 1.216-2.550)]. In addition, the odds ratio of restrictive pulmonary disease with metabolic syndrome [1.360 (95% confidence interval, 1.118-1.655)] was significantly higher than without metabolic syndrome. Metabolic syndrome and metabolic syndrome score were inversely associated with the predicted FVC and FEV1 values in Korean non-smoking adults. In addition, metabolic syndrome and metabolic syndrome score were positively associated with restrictive pulmonary disease.
Effect of antacids on predicted steady-state cimetidine concentrations.
Russell, W L; Lopez, L M; Normann, S A; Doering, P L; Guild, R T
1984-05-01
The purpose of this study was to evaluate the effects of antacids on predicted steady-state concentrations of cimetidine. Ten healthy volunteers received, in random order one week apart, cimetidine alone and cimetidine with antacid suspension. Blood was obtained at specified times and analyzed for cimetidine. Bioavailability was assessed by comparison of peak concentration, time to peak concentration, area under the curve, and time spent above 0.5 micrograms/ml. Single-dose data were extrapolated to steady state using computer simulation. Concurrent administration of antacid suspension reduced parameters of bioavailability by approximately 30%. When steady-state conditions were simulated, concentrations of cimetidine greater than or equal to 0.5 micrograms/ml were maintained for the entire dosing interval in seven of the 10 subjects. These data suggest that temporal separation of cimetidine and antacid suspension may be unnecessary.
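The single-dose-to-steady-state extrapolation rests on superposition: concentrations from repeated identical doses simply add. A sketch with a monoexponential profile and invented parameters (not the study's fitted cimetidine pharmacokinetics), evaluating how long concentrations stay above the 0.5 ug/mL threshold within one dosing interval:

```python
import math

def single_dose_conc(t, c0=2.0, k=0.35):
    # Monoexponential single-dose profile (illustrative parameters only,
    # not the study's fitted cimetidine pharmacokinetics)
    return c0 * math.exp(-k * t) if t >= 0 else 0.0

def superposed_conc(t, tau=6.0, n_doses=20):
    # Steady-state prediction by superposition of repeated identical doses
    return sum(single_dose_conc(t - i * tau) for i in range(n_doses))

# Time spent at or above the 0.5 ug/mL threshold within one dosing
# interval, evaluated well into steady state (after 19 prior doses)
tau = 6.0
t_ss = 19 * tau
above = sum(0.1 for i in range(int(tau / 0.1) + 1)
            if superposed_conc(t_ss + i * 0.1) >= 0.5)
print(f"~{above:.1f} h of the {tau:.0f} h interval spent above 0.5 ug/mL")
```

After enough doses the superposed profile converges to the analytic accumulation result c0·exp(-k·t)/(1 - exp(-k·tau)) within a dosing interval, which is what makes extrapolation from single-dose data possible.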
Online incidental statistical learning of audiovisual word sequences in adults: a registered report.
Kuppuraj, Sengottuvel; Duta, Mihaela; Thompson, Paul; Bishop, Dorothy
2018-02-01
Statistical learning has been proposed as a key mechanism in language learning. Our main goal was to examine whether adults are capable of simultaneously extracting statistical dependencies in a task where stimuli include a range of structures amenable to statistical learning within a single paradigm. We devised an online statistical learning task using real word auditory-picture sequences that vary in two dimensions: (i) predictability and (ii) adjacency of dependent elements. This task was followed by an offline recall task to probe learning of each sequence type. We registered three hypotheses with specific predictions. First, adults would extract regular patterns from continuous stream (effect of grammaticality). Second, within grammatical conditions, they would show differential speeding up for each condition as a factor of statistical complexity of the condition and exposure. Third, our novel approach to measure online statistical learning would be reliable in showing individual differences in statistical learning ability. Further, we explored the relation between statistical learning and a measure of verbal short-term memory (STM). Forty-two participants were tested and retested after an interval of at least 3 days on our novel statistical learning task. We analysed the reaction time data using a novel regression discontinuity approach. Consistent with prediction, participants showed a grammaticality effect, agreeing with the predicted order of difficulty for learning different statistical structures. Furthermore, a learning index from the task showed acceptable test-retest reliability ( r = 0.67). However, STM did not correlate with statistical learning. We discuss the findings noting the benefits of online measures in tracking the learning process.
Application of Interval Predictor Models to Space Radiation Shielding
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.; Norman, Ryan B.; Blattnig, Steve R.
2016-01-01
This paper develops techniques for predicting the uncertainty range of an output variable given input-output data. These models are called Interval Predictor Models (IPMs) because they yield an interval-valued function of the input. This paper develops IPMs having a radial basis structure. This structure enables the formal description of (i) the uncertainty in the model's parameters, (ii) the predicted output interval, and (iii) the probability that a future observation would fall in such an interval. In contrast to other metamodeling techniques, this probabilistic certificate of correctness does not require making any assumptions on the structure of the mechanism from which the data are drawn. Optimization-based strategies for calculating IPMs having minimal spread while containing all the data are developed. Constraints for bounding the minimum interval spread over the continuum of inputs, regulating the IPM's variation/oscillation, and centering its spread about a target point are used to prevent data overfitting. Furthermore, we develop an approach for using expert opinion during extrapolation. This metamodeling technique is illustrated using a radiation shielding application for space exploration. In this application, we use IPMs to describe the error incurred in predicting the flux of particles resulting from the interaction between a high-energy incident beam and a target.
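The central idea of an IPM, an interval-valued function guaranteed to contain the observed data, can be sketched crudely. The paper optimises the spread and uses radial basis functions; the toy below instead takes a least-squares centre line with a quadratic basis and uses the largest training residual as a constant half-width, which contains all the data but makes no attempt at minimal spread:

```python
import random

random.seed(7)

# Toy data: noisy quadratic ground truth
xs = [random.uniform(-1.0, 1.0) for _ in range(200)]
ys = [x * x + random.gauss(0.0, 0.1) for x in xs]

def design(x):
    # Quadratic basis instead of the paper's radial basis functions
    return [1.0, x, x * x]

# Normal equations for the least-squares centre line
A = [[sum(design(x)[i] * design(x)[j] for x in xs) for j in range(3)]
     for i in range(3)]
b = [sum(design(x)[i] * y for x, y in zip(xs, ys)) for i in range(3)]

def solve3(A, b):
    # Gauss-Jordan elimination with partial pivoting for a 3x3 system
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * v for a, v in zip(M[r], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]

theta = solve3(A, b)

def centre(x):
    return sum(t * d for t, d in zip(theta, design(x)))

# Half-width = largest training residual, so every observation is contained
gamma = max(abs(y - centre(x)) for x, y in zip(xs, ys))

def interval(x):
    c = centre(x)
    return c - gamma, c + gamma

lo, hi = interval(0.5)
print(f"I(0.5) = [{lo:.3f}, {hi:.3f}], half-width {gamma:.3f}")
```

The paper's contribution is precisely what this sketch lacks: optimising the interval's spread subject to containment, and attaching a scenario-based probability that future observations fall inside it.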
Change of pH during excess sludge fermentation under alkaline, acidic and neutral conditions.
Yuan, Yue; Peng, Yongzhen; Liu, Ye; Jin, Baodan; Wang, Bo; Wang, Shuying
2014-12-01
The change in pH during excess sludge (ES) fermentation at varying sludge concentrations was investigated in a series of reactors under alkaline, acidic, and neutral conditions. The results showed that the changes were significantly affected by the fermentative conditions, with pH exhibiting a distinct profile under each. When ES was fermented under alkaline conditions (pH 10±1), pH decreased. At the beginning of alkaline fermentation, pH dropped significantly, over intervals of 4 h, 4 h, and 5 h at sludge concentrations of 8665.6 mg/L, 6498.8 mg/L, and 4332.5 mg/L, respectively, after which the decline moderated. Under acidic conditions, however, pH increased from 4 to 5. Finally, under neutral conditions pH exhibited a decrease and then an increase over the course of the fermentation process. Further study showed that short-chain fatty acids (SCFAs), ammonia nitrogen, and cations contributed to the pH change under the various fermentation conditions. This study presents a novel strategy based on pH change to predict whether SCFAs have reached their stable stage. Copyright © 2014 Elsevier Ltd. All rights reserved.
Decadal climate predictions improved by ocean ensemble dispersion filtering
NASA Astrophysics Data System (ADS)
Kadow, C.; Illing, S.; Kröner, I.; Ulbrich, U.; Cubasch, U.
2017-06-01
Decadal predictions by Earth system models aim to capture the state and phase of the climate several years in advance. Atmosphere-ocean interaction plays an important role for such climate forecasts. While short-term weather forecasts represent an initial value problem and long-term climate projections represent a boundary condition problem, decadal climate prediction falls in between these two time scales. In recent years, more precise initialization techniques of coupled Earth system models and increased ensemble sizes have improved decadal predictions. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Here we show that the climate prediction skill of an Earth system model can be improved by a shift of the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure, called ensemble dispersion filtering, yields more accurate results than the standard decadal prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show an increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution. Our results demonstrate how decadal climate predictions benefit from ocean ensemble dispersion filtering toward the ensemble mean.
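The core filtering step, shifting each ensemble member part-way toward the ensemble mean at fixed intervals, can be illustrated on a toy chaotic system. The logistic map, member count, nudging weight, and filtering interval below are stand-ins, not the Earth system model setup:

```python
import random
import statistics

random.seed(3)

def step(x, r=3.9):
    # One step of the chaotic logistic map, a toy stand-in for model dynamics
    return r * x * (1.0 - x)

def dispersion_filter(states, alpha=0.5):
    # Pull every ensemble member part-way toward the ensemble mean;
    # deviations from the mean are scaled by alpha, so the spread shrinks
    # while the ensemble mean itself is preserved
    m = sum(states) / len(states)
    return [m + alpha * (x - m) for x in states]

def mean_spread(n_members=10, n_steps=60, filter_every=None):
    states = [0.4 + 0.01 * random.random() for _ in range(n_members)]
    spreads = []
    for t in range(1, n_steps + 1):
        states = [step(x) for x in states]
        if filter_every and t % filter_every == 0:
            states = dispersion_filter(states)
        spreads.append(statistics.pstdev(states))
    return sum(spreads) / len(spreads)

print(f"mean ensemble spread, free run:     {mean_spread():.3f}")
print(f"mean ensemble spread, filtered run: {mean_spread(filter_every=5):.3f}")
```

The filter rescales deviations from the mean by alpha without moving the mean itself, which is the sense in which the technique nudges the ocean state toward the ensemble mean.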
Santos, Thays Brenner; Kramer-Soares, Juliana Carlota; Favaro, Vanessa Manchim; Oliveira, Maria Gabriela Menezes
2017-10-01
Time plays an important role in conditioning: it is possible to associate stimuli not only with events that overlap them, as in delay fear conditioning, but also with events that are discontinuous in time, as shown in trace conditioning with discrete stimuli. The environment itself can be a powerful conditioned stimulus (CS) and become associated with an unconditioned stimulus (US). Thus, the aim of the present study was to determine the parameters under which contextual fear conditioning occurs through the maintenance of a contextual representation over short and long time intervals. The results showed that a contextual representation can be maintained and associated after 5 s, even in the absence of a 15 s re-exposure to the training context before US delivery. The same effect was not observed with a 24 h interval of discontinuity. Furthermore, an optimal conditioned response with a 5 s interval is produced only when the contexts (of pre-exposure and shock) match. As the pre-limbic cortex (PL) is necessary for the maintenance of a continuous representation of a stimulus, the involvement of the PL in this temporal and contextual processing was investigated. Reversible inactivation of the PL by muscimol infusion impaired the acquisition of contextual fear conditioning with a 5 s interval, but not with a 24 h interval, and did not impair delay fear conditioning. The data provide evidence that short and long intervals of discontinuity engage different mechanisms, thus contributing to a better understanding of PL involvement in contextual fear conditioning and providing a model that considers both temporal and contextual factors in fear conditioning. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Charonko, John J.; Vlachos, Pavlos P.
2013-06-01
Numerous studies have established firmly that particle image velocimetry (PIV) is a robust method for non-invasive, quantitative measurements of fluid velocity, and that when carefully conducted, typical measurements can accurately detect displacements in digital images with a resolution well below a single pixel (in some cases well below a hundredth of a pixel). However, to date, these estimates have only been able to provide guidance on the expected error for an average measurement under specific image quality and flow conditions. This paper demonstrates a new method for estimating the uncertainty bounds to within a given confidence interval for a specific, individual measurement. Here, cross-correlation peak ratio, the ratio of primary to secondary peak height, is shown to correlate strongly with the range of observed error values for a given measurement, regardless of flow condition or image quality. This relationship is significantly stronger for phase-only generalized cross-correlation PIV processing, while the standard correlation approach showed weaker performance. Using an analytical model of the relationship derived from synthetic data sets, the uncertainty bounds at a 95% confidence interval are then computed for several artificial and experimental flow fields, and the resulting errors are shown to match closely to the predicted uncertainties. While this method stops short of being able to predict the true error for a given measurement, knowledge of the uncertainty level for a PIV experiment should provide great benefits when applying the results of PIV analysis to engineering design studies and computational fluid dynamics validation efforts. Moreover, this approach is exceptionally simple to implement and requires negligible additional computational cost.
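The calibration idea behind this approach, relating a correlation-quality metric to an empirical uncertainty bound, can be mimicked with synthetic data: bin measurements by the metric and take the 95th percentile of the absolute error per bin. The error model below (scatter shrinking as the metric grows) is invented for illustration and is not the paper's fitted relationship:

```python
import random

random.seed(11)

# Synthetic "measurements": a quality metric q (standing in for the
# correlation peak ratio) paired with a displacement error whose scatter
# shrinks as q grows. This error model is invented for illustration.
samples = [(q, random.gauss(0.0, 0.5 / q))
           for q in (random.uniform(1.2, 8.0) for _ in range(5000))]

def uncertainty_bound(q_lo, q_hi, level=0.95):
    # Empirical error bound for measurements whose metric falls in a bin
    errs = sorted(abs(e) for q, e in samples if q_lo <= q < q_hi)
    return errs[int(level * len(errs)) - 1]

low_q = uncertainty_bound(1.2, 2.0)   # weak, ambiguous correlation peaks
high_q = uncertainty_bound(6.0, 8.0)  # strong, unambiguous peaks
print(f"95% error bound: low peak ratio {low_q:.3f} px, "
      f"high peak ratio {high_q:.3f} px")
```

The paper goes further by fitting an analytical model of this relationship on synthetic data, which then yields a per-measurement uncertainty bound rather than a per-bin one.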
NASA Astrophysics Data System (ADS)
Ebert, Robert; Bagenal, Fran; McComas, David; Fowler, Christopher
2014-09-01
We examine Ulysses solar wind and interplanetary magnetic field (IMF) observations at 5 AU for two ~13-month intervals during the rising and declining phases of solar cycle 23, and the predicted response of the Jovian magnetosphere during these times. The declining-phase solar wind, composed primarily of corotating interaction regions and high-speed streams, was, on average, faster, hotter, less dense, and more Alfvénic than the rising-phase solar wind, composed mainly of slow wind and interplanetary coronal mass ejections. Interestingly, none of the solar wind and IMF distributions reported here were bimodal, a feature used to explain the bimodal distribution of bow shock and magnetopause standoff distances observed at Jupiter. Instead, many of these distributions had extended, non-Gaussian tails that resulted in large standard deviations and mean values much larger than the medians. The distributions of predicted Jupiter bow shock and magnetopause standoff distances during these intervals were also not bimodal, with the mean/median values larger during the declining phase by ~1 - 4%. These results provide data-derived solar wind and IMF boundary conditions at 5 AU for models aimed at studying solar wind-magnetosphere interactions at Jupiter and can support the science investigations of upcoming Jupiter system missions. Here, we provide expectations for Juno, which is scheduled to arrive at Jupiter in July 2016. Accounting for the long-term decline in solar wind dynamic pressure reported by McComas et al. (2013), Jupiter's bow shock and magnetopause are expected to be at least 8 - 12% farther from Jupiter, if these trends continue.
Hourly Wind Speed Interval Prediction in Arid Regions
NASA Astrophysics Data System (ADS)
Chaouch, M.; Ouarda, T.
2013-12-01
Long, extended warm and dry summers and low rates of rain and humidity are the main factors explaining the increase in electricity consumption in hot arid regions. In such regions, ventilating and air-conditioning installations, typically the most energy-intensive among energy consumption activities, are essential for securing healthy, safe and suitable indoor thermal conditions for building occupants and stored materials. The use of renewable energy resources such as solar and wind represents one of the most relevant solutions to the challenge of increasing electricity demand. In recent years, wind energy has been gaining importance among researchers worldwide. Wind energy is intermittent in nature, and hence power system scheduling and dynamic control of wind turbines require an estimate of wind energy. Accurate forecasting of wind speed is a challenging task for the wind energy research field. In fact, due to the large variability of wind speed caused by the unpredictable and dynamic nature of the earth's atmosphere, there are many fluctuations in wind power production. This inherent variability of wind speed is the main cause of the uncertainty observed in wind power generation. Furthermore, wind power forecasts might be obtained indirectly by modeling the wind speed series and then transforming the forecasts through a power curve. Wind speed forecasting techniques have received substantial attention recently and several models have been developed. Two main approaches have been proposed in the literature: (1) physical models, such as numerical weather prediction, and (2) statistical models, such as autoregressive integrated moving average (ARIMA) models and neural networks. While the initial focus in the literature has been on point forecasts, the need to quantify forecast uncertainty and communicate the risk of extreme ramp events has led to an interest in producing probabilistic forecasts.
In a short-term context, probabilistic forecasts might be more relevant than point forecasts for planners building scenarios. In this paper, we are interested in estimating predictive intervals of the hourly wind speed measurements in a few cities in the United Arab Emirates (UAE). More precisely, given a wind speed time series, our target is to forecast the wind speed at any specific hour during the day and to provide, in addition, an interval with the coverage probability 0
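One crude baseline for such hourly predictive intervals, far simpler than the statistical models surveyed above, is a persistence point forecast widened by empirical residual quantiles. The series below is synthetic (a diurnal cycle plus noise); real UAE station data would replace it:

```python
import math
import random

random.seed(5)

# Synthetic hourly wind speed: diurnal cycle plus noise (illustrative only)
hours = 24 * 60
speed = [6.0 + 2.0 * math.sin(2 * math.pi * t / 24) + random.gauss(0.0, 0.8)
         for t in range(hours)]

# Point forecast: 24 h persistence (same hour on the previous day)
pairs = [(speed[t - 24], speed[t]) for t in range(24, hours)]
residuals = sorted(obs - pred for pred, obs in pairs)

def quantile(sorted_vals, p):
    return sorted_vals[min(len(sorted_vals) - 1, int(p * len(sorted_vals)))]

lo_q = quantile(residuals, 0.025)
hi_q = quantile(residuals, 0.975)

def forecast_interval(t):
    # ~95% predictive interval: persistence point forecast widened by
    # the empirical residual quantiles
    point = speed[t - 24]
    return point + lo_q, point + hi_q

lo, hi = forecast_interval(hours - 1)
print(f"interval for the last hour: [{lo:.2f}, {hi:.2f}] m/s")
```

The empirical-quantile step is what turns a point forecaster into an interval forecaster; any of the point models mentioned above (ARIMA, neural networks) could replace persistence as the centre.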
Shulman, Eric; Aagaard, Philip; Kargoli, Faraj; Hoch, Ethan; Zheng, Laura; Di Biase, Luigi; Fisher, John; Gross, Jay; Kim, Soo; Ferrick, Kevin; Krumerman, Andrew
2015-01-01
PR interval prolongation on the electrocardiogram (ECG) increases the risk of atrial fibrillation (AF). Non-Hispanic Whites are at higher risk of AF compared to African Americans and Hispanics. However, it remains unknown whether the prolongation of the PR interval required for the development of AF varies by race/ethnicity. We therefore determined whether race affects the PR interval length's ability to predict AF and whether the commonly used criterion of 200 ms in AF prediction models can continue to be used for non-White cohorts. This is a retrospective epidemiological study of consecutive inpatients and outpatients. An ECG database was initially interrogated. Patients were included if their initial ECG demonstrated sinus rhythm, they had two or more electrocardiograms, and they declared a race and/or ethnicity as non-Hispanic White, African American or Hispanic. Development of AF was stratified by race/ethnicity along varying PR intervals. Cox models controlled for age, gender, race/ethnicity, systolic blood pressure, BMI, QRS, QTc, heart rate, murmur, treatment for hypertension, heart failure and use of AV nodal blocking agents to assess the PR interval's predictive ability for the development of AF. 50,870 patients met inclusion criteria, of whom 5,199 developed AF over 3.72 mean years of follow-up. When the PR interval was separated by quantile, prolongation of the PR interval to predict AF first became significant in Hispanics and African Americans at the 92.5th quantile of 196-201 ms (HR: 1.42, 95% CI: 1.09-1.86, p=0.01; HR: 1.32, 95% CI: 1.07-1.64, p=0.01, respectively), then in non-Hispanic Whites at the 95th quantile at 203-212 ms (HR: 1.24, 95% CI: 1.24-1.53, p=0.04). For those with a PR interval above 200 ms, African Americans had a lower risk than non-Hispanic Whites of developing AF (HR: 0.80, 95% CI: 0.64-0.95, p=0.012); however, no significant difference was demonstrated in Hispanics.
This is the first study to validate a PR interval value of 200 ms as a criterion in African Americans and Hispanics for the development of AF. However, a value of 200 ms may be less sensitive as a predictive measure for the development of AF in African Americans compared to non-Hispanic Whites. Copyright © 2015 Elsevier Inc. All rights reserved.
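The hazard ratios above are reported with 95% confidence intervals, which for Cox models are constructed on the log scale. A minimal sketch of that calculation; the standard error below is a hypothetical value back-calculated from the reported 1.09-1.86 interval, not a figure from the study:

```python
import math

def hr_confidence_interval(hr, se_log_hr, z=1.96):
    """95% CI for a hazard ratio, computed on the log scale:
    exp(log(HR) +/- z * SE(log HR))."""
    log_hr = math.log(hr)
    return (math.exp(log_hr - z * se_log_hr),
            math.exp(log_hr + z * se_log_hr))

# Hypothetical SE chosen to roughly reproduce the reported interval.
low, high = hr_confidence_interval(1.42, 0.137)
```

Note that the interval is asymmetric around the point estimate once mapped back from the log scale.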
Quitting activity and tobacco brand switching: findings from the ITC-4 Country Survey
Cowie, Genevieve A.; Swift, Elena; Partos, Timea; Borland, Ron
2015-01-01
Objective: Among Australian smokers, to examine associations between cigarette brand switching, quitting activity and possible causal directions by lagging the relationships in different directions. Methods: Current smokers from nine waves (2002 to early 2012) of the ITC-4 Country Survey Australian dataset were surveyed. Measures were brand switching, both brand family and product type (roll-your-own versus factory-made cigarettes) reported in adjacent waves, interest in quitting, recent quit attempts, and one-month sustained abstinence. Results: Switching at one interval was unrelated to concurrent quit interest. Quit interest predicted switching at the following interval, but the effect disappeared once subsequent quit attempts were controlled for. Recent quit attempts more strongly predicted switching at concurrent (OR 1.34, 95% CI=1.18–1.52, p<0.001) and subsequent intervals (OR 1.31, 95% CI=1.12–1.53, p=0.001) than switching predicted quit attempts, with greater asymmetry when both types of switching were combined. One-month sustained abstinence and switching were unrelated in the same interval; however, after controlling for concurrent switching and excluding type switchers, sustained abstinence predicted a lower chance of switching at the following interval (OR=0.66, 95% CI=0.47–0.93, p=0.016). Conclusions: The asymmetry suggests brand switching does not affect subsequent quitting. Implications: Brand switching does not appear to interfere with quitting. PMID:25827182
Studies of the flow and turbulence fields in a turbulent pulsed jet flame using LES/PDF
NASA Astrophysics Data System (ADS)
Zhang, Pei; Masri, Assaad R.; Wang, Haifeng
2017-09-01
A turbulent piloted jet flame subject to a rapid velocity pulse in its fuel jet inflow is proposed as a new benchmark case for the study of turbulent combustion models. In this work, we perform modelling studies of this turbulent pulsed jet flame and focus on the predictions of its flow and turbulence fields. An advanced modelling strategy combining the large eddy simulation (LES) and the probability density function (PDF) methods is employed to model the turbulent pulsed jet flame. Characteristics of the velocity measurements are analysed to produce a time-dependent inflow condition that can be fed into the simulations. The effect of the uncertainty in the inflow turbulence intensity is investigated and is found to be very small. A method of specifying the inflow turbulence boundary condition for the simulations of the pulsed jet flame is assessed. The strategies for validating LES of statistically transient flames are discussed, and a new framework is developed consisting of different averaging strategies and a bootstrap method for constructing confidence intervals. Parametric studies are performed to examine the sensitivity of the predictions of the flow and turbulence fields to model and numerical parameters. A direct comparison of the predicted and measured time series of the axial velocity demonstrates a satisfactory prediction of the flow and turbulence fields of the pulsed jet flame by the employed modelling methods.
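The bootstrap construction of confidence intervals mentioned above can be illustrated with a percentile bootstrap on a small sample of statistics; the velocity values below are hypothetical, not data from the study:

```python
import random
import statistics

def bootstrap_ci(sample, stat=statistics.mean, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic:
    resample with replacement, recompute the statistic, take quantiles."""
    rng = random.Random(seed)
    n = len(sample)
    boots = sorted(stat([sample[rng.randrange(n)] for _ in range(n)])
                   for _ in range(n_boot))
    lo = boots[int((alpha / 2) * n_boot)]
    hi = boots[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical window-averaged axial velocities (m/s) from repeated runs.
velocities = [24.1, 25.3, 23.8, 24.9, 25.0, 24.4, 23.5, 25.6, 24.7, 24.2]
lo, hi = bootstrap_ci(velocities)
```

The same construction applies to any averaging strategy: the quantity being resampled is whatever statistic the validation framework computes per realization.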
Predicting Near-Term Water Quality from Satellite Observations of Watershed Conditions
NASA Astrophysics Data System (ADS)
Weiss, W. J.; Wang, L.; Hoffman, K.; West, D.; Mehta, A. V.; Lee, C.
2017-12-01
Despite the strong influence of watershed conditions on source water quality, most water utilities and water resource agencies do not currently have the capability to monitor watershed sources of contamination with great temporal or spatial detail. Typically, knowledge of source water quality is limited to periodic grab sampling; automated monitoring of a limited number of parameters at a few select locations; and/or monitoring relevant constituents at a treatment plant intake. While important, such observations are not sufficient to inform proactive watershed or source water management at a monthly or seasonal scale. Satellite remote sensing data on the other hand can provide a snapshot of an entire watershed at regular, sub-monthly intervals, helping analysts characterize watershed conditions and identify trends that could signal changes in source water quality. Accordingly, the authors are investigating correlations between satellite remote sensing observations of watersheds and source water quality, at a variety of spatial and temporal scales and lags. While correlations between remote sensing observations and direct in situ measurements of water quality have been well described in the literature, there are few studies that link remote sensing observations across a watershed with near-term predictions of water quality. In this presentation, the authors will describe results of statistical analyses and discuss how these results are being used to inform development of a desktop decision support tool to support predictive application of remote sensing data. Predictor variables under evaluation include parameters that describe vegetative conditions; parameters that describe climate/weather conditions; and non-remote sensing, in situ measurements. Water quality parameters under investigation include nitrogen, phosphorus, organic carbon, chlorophyll-a, and turbidity.
Royal, M D; Pryce, J E; Woolliams, J A; Flint, A P F
2002-11-01
The decline of fertility in the UK dairy herd and the unfavorable genetic correlation (r(a)) between fertility and milk yield have necessitated the broadening of breeding goals to include fertility. The coefficient of genetic variation present in fertility is of similar magnitude to that present in production traits; however, traditional measurements of fertility (such as calving interval, days open, nonreturn rate) have low heritability (h2 < 0.05), and recording is often poor, hindering identification of genetically superior animals. An alternative approach is to use endocrine measurements of fertility such as the interval to commencement of luteal activity postpartum (CLA), which has a higher h2 (0.16 to 0.23) and is free from management bias. Although CLA has favorable phenotypic correlations with traditional measures of fertility, if it is to be used in a selection index, the genetic correlation (r(a)) of this trait with fertility and other components of the index must be estimated. The aim of the analyses reported here was to obtain information on the r(a) between lnCLA and calving interval (CI), average body condition score (BCS; one to nine, an indicator of energy balance estimated from records taken at different months of lactation), production, and a number of linear type traits. Genetic models were fitted using ASREML, and r(a) were inferred from genetic regression of lnCLA on sire-predicted transmitting abilities (PTA) for the trait concerned by multiplying the regression coefficient (b) by the ratio of the genetic standard deviations. The inferred r(a) between lnCLA and CI and average BCS were 0.36 and -0.84, respectively. Genetic correlations between lnCLA and milk, fat and protein yields were all positive and ranged between 0.33 and 0.69. Genetic correlations between lnCLA and linear type traits reflecting body structure ranged from -0.25 to 0.15, and between lnCLA and udder characteristics they ranged from -0.16 to 0.05. 
Thus, incorporation of endocrine parameters of fertility, such as CLA, into a fertility index may offer the potential to improve the accuracy of breeding value prediction for fertility, thereby allowing producers to make more informed selection decisions.
James, Katherine M.; Cowl, Clayton T.; Tilburt, Jon C.; Sinicrope, Pamela S.; Robinson, Marguerite E.; Frimannsdottir, Katrin R.; Tiedje, Kristina; Koenig, Barbara A.
2011-01-01
OBJECTIVE: To assess the impact of direct-to-consumer (DTC) predictive genomic risk information on perceived risk and worry in the context of routine clinical care. PATIENTS AND METHODS: Patients attending a preventive medicine clinic between June 1 and December 18, 2009, were randomly assigned to receive either genomic risk information from a DTC product plus usual care (n=74) or usual care alone (n=76). At intervals of 1 week and 1 year after their clinic visit, participants completed surveys containing validated measures of risk perception and levels of worry associated with the 12 conditions assessed by the DTC product. RESULTS: Of 345 patients approached, 150 (43%) agreed to participate, 64 (19%) refused, and 131 (38%) did not respond. Compared with those receiving usual care, participants who received genomic risk information initially rated their risk as higher for 4 conditions (abdominal aneurysm [P=.001], Graves disease [P=.04], obesity [P=.01], and osteoarthritis [P=.04]) and lower for one (prostate cancer [P=.02]). Although differences were not significant, they also reported higher levels of worry for 7 conditions and lower levels for 5 others. At 1 year, there were no significant differences between groups. CONCLUSION: Predictive genomic risk information modestly influences risk perception and worry. The extent and direction of this influence may depend on the condition being tested and its baseline prominence in preventive health care and may attenuate with time. Trial Registration: clinicaltrials.gov identifier: NCT00782366 PMID:21964170
NASA Astrophysics Data System (ADS)
Park, Sumin; Im, Jungho; Park, Seonyeong
2016-04-01
A drought occurs when a condition of below-average precipitation in a region persists, resulting in prolonged water deficiency. A drought can last for weeks, months or even years, and so can have a great influence on various ecosystems including human society. In order to effectively reduce agricultural and economic damage caused by droughts, drought monitoring and forecasts are crucial. Drought forecast research is typically conducted using in situ observations (or derived indices such as the Standardized Precipitation Index (SPI)) and physical models. Recently, satellite remote sensing has been used for short-term drought forecasts in combination with physical models. In this research, drought intensification was predicted using satellite-derived drought indices such as the Normalized Difference Drought Index (NDDI), Normalized Multi-band Drought Index (NMDI), and Scaled Drought Condition Index (SDCI) generated from Moderate Resolution Imaging Spectroradiometer (MODIS) and Tropical Rainfall Measuring Mission (TRMM) products over the Korean Peninsula. The time series of each drought index at the 8-day interval was investigated to identify drought intensification patterns. The drought condition at the previous time step (i.e., 8 days before) and the change in drought conditions between the two previous time steps (e.g., between 16 days and 8 days before the time step to forecast) were used as predictor variables. Results show that among the three drought indices, SDCI provided the best performance for predicting drought intensification compared to NDDI and NMDI through qualitative assessment. When quantitatively compared with SPI, SDCI showed a potential to be used for forecasting short-term drought intensification. Finally, this research provided an SDCI-based equation to predict short-term drought intensification optimized over the Korean Peninsula.
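As one illustration of the satellite-derived indices named above, NDDI is commonly computed as a normalized difference of NDVI and NDWI (a sketch of the widely used definition; exact band choices vary by study and sensor):

```python
def nddi(ndvi, ndwi):
    """Normalized Difference Drought Index, combining the vegetation
    index (NDVI) and water index (NDWI); higher values generally
    indicate drier conditions."""
    return (ndvi - ndwi) / (ndvi + ndwi)

# Example: moderately green vegetation with low water content.
example = nddi(0.5, 0.2)
```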
USDA-ARS's Scientific Manuscript database
Doramectin concentration in the serum of pastured cattle treated repeatedly at 28 d intervals at two dosage rates was used to predict the probability that cattle fever ticks could successfully feed to repletion during the interval between treatments. At ~270 µg/kg, the doramectin concentration dropp...
Segregation of Brain Structural Networks Supports Spatio-Temporal Predictive Processing.
Ciullo, Valentina; Vecchio, Daniela; Gili, Tommaso; Spalletta, Gianfranco; Piras, Federica
2018-01-01
The ability to generate probabilistic expectancies regarding when and where sensory stimuli will occur is critical to derive timely and accurate inferences about updating contexts. However, the existence of specialized neural networks for inferring predictive relationships between events is still debated. Using graph theoretical analysis applied to structural connectivity data, we tested which brain connectivity properties are associated with spatio-temporal predictive performance across 29 healthy subjects. Participants detected visual targets appearing at one out of three locations after one out of three intervals; expectations about stimulus location (spatial condition) or onset (temporal condition) were induced by valid or invalid symbolic cues. Connectivity matrices and centrality/segregation measures, expressing the relative importance of, and the local interactions among, specific cerebral areas with respect to the behavior under investigation, were calculated from whole-brain tractography and cortico-subcortical parcellation. Response preparedness to cued stimuli relied on different structural connectivity networks for the temporal and spatial domains. Significant covariance was observed between centrality measures of regions within a subcortical-fronto-parietal-occipital network (comprising the left putamen, the right caudate nucleus, the left frontal operculum, the right inferior parietal cortex, the right paracentral lobule and the right superior occipital cortex) and the ability to respond after a short cue-target delay, suggesting that the local connectedness of such nodes plays a central role when the source of temporal expectation is explicit. When the potential for functional segregation was tested, we found highly clustered structural connectivity across the right superior frontal gyrus, the left middle inferior frontal gyrus and the left caudate nucleus as related to explicit temporal orienting. 
Conversely, when the interaction between explicit and implicit temporal orienting processes was considered at the long interval, we found that explicit processes were related to centrality measures of the bilateral inferior parietal lobule. Degree centrality of the same region in the left hemisphere covaried with behavioral measures indexing the process of attentional re-orienting. These results represent a crucial step beyond ordinary descriptions of predictive processing, as we identified the patterns of connectivity characterizing the brain organization associated with the ability to generate and update temporal expectancies in the case of contextual violations.
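The centrality and segregation measures used in such graph-theoretical analyses can be illustrated on a toy unweighted graph (a sketch only; connectome studies typically use weighted variants of these measures):

```python
def degree_centrality(adj):
    """Degree centrality of each node in an undirected graph given as an
    adjacency dict {node: set(neighbors)}: degree normalized by n - 1."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def clustering_coefficient(adj, v):
    """Local clustering (a segregation measure): the fraction of a
    node's neighbor pairs that are themselves connected."""
    nbrs = list(adj[v])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2 * links / (k * (k - 1))

# Toy graph: a triangle (1, 2, 3) with a pendant node 4 attached to 3.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
dc = degree_centrality(adj)
cc3 = clustering_coefficient(adj, 3)
```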
Scherrer, Stephen R; Rideout, Brendan P; Giorli, Giacomo; Nosal, Eva-Marie; Weng, Kevin C
2018-01-01
Passive acoustic telemetry using coded transmitter tags and stationary receivers is a popular method for tracking movements of aquatic animals. Understanding the performance of these systems is important in array design and in analysis. Close proximity detection interference (CPDI) is a condition in which receivers fail to reliably detect tag transmissions. CPDI generally occurs when the tag and receiver are near one another in acoustically reverberant settings. Here we confirm that transmission multipaths reflected off the environment, arriving at a receiver with sufficient delay relative to the direct signal, cause CPDI. We propose a ray-propagation-based model that estimates the arrival of energy via multipaths to predict CPDI occurrence, and we show how deeper deployments are particularly susceptible. A series of experiments were designed to develop and validate our model. Deep (300 m) and shallow (25 m) ranging experiments were conducted using Vemco V13 acoustic tags and VR2-W receivers. Probabilistic modeling of hourly detections was used to estimate the average distance at which a tag could be detected. A mechanistic model for predicting the arrival time of multipaths was developed, using parameters from these experiments to calculate the direct and multipath path lengths. This model was retroactively applied to the previous ranging experiments to validate CPDI observations. Two additional experiments were designed to validate predictions of CPDI with respect to combinations of deployment depth and distance. Playback of recorded tags in a tank environment was used to confirm that multipaths arriving after the receiver's blanking interval cause CPDI effects. Analysis of empirical data estimated that the average maximum detection radius (AMDR), the farthest distance at which 95% of tag transmissions went undetected by receivers, was between 840 and 846 m for the deep ranging experiment across all factor permutations. 
From these results, CPDI was estimated within a 276.5 m radius of the receiver. These empirical estimations were consistent with mechanistic model predictions. CPDI affected detection at distances closer than 259-326 m from receivers. AMDR determined from the shallow ranging experiment was between 278 and 290 m with CPDI neither predicted nor observed. Results of validation experiments were consistent with mechanistic model predictions. Finally, we were able to predict detection/nondetection with 95.7% accuracy using the mechanistic model's criterion when simulating transmissions with and without multipaths. Close proximity detection interference results from combinations of depth and distance that produce reflected signals arriving after a receiver's blanking interval has ended. Deployment scenarios resulting in CPDI can be predicted with the proposed mechanistic model. For deeper deployments, sea-surface reflections can produce CPDI conditions, resulting in transmission rejection, regardless of the reflective properties of the seafloor.
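The image-source geometry behind such a mechanistic model can be sketched as follows. The sound speed and the blanking-interval value are nominal assumptions for illustration, not parameters taken from the study:

```python
import math

SOUND_SPEED = 1500.0  # m/s, nominal seawater value (assumption)

def surface_multipath_delay(horiz_range, tag_depth, rcvr_depth):
    """Excess travel time of the sea-surface reflection relative to the
    direct path, using the image-source (mirror) construction: the
    reflected ray behaves as if emitted from the tag mirrored above
    the surface."""
    direct = math.hypot(horiz_range, tag_depth - rcvr_depth)
    reflected = math.hypot(horiz_range, tag_depth + rcvr_depth)
    return (reflected - direct) / SOUND_SPEED

# Deep deployment: tag and receiver both at 300 m, only 50 m apart.
delay = surface_multipath_delay(50.0, 300.0, 300.0)
blanking_interval = 0.260  # s, hypothetical receiver blanking window
cpdi_expected = delay > blanking_interval
```

The deep, close-range case yields a long surface-reflection delay, which is why CPDI is predicted near the receiver for deep deployments even over a non-reflective seafloor.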
The meaning of diagnostic test results: a spreadsheet for swift data analysis.
Maceneaney, P M; Malone, D E
2000-03-01
To design a spreadsheet program to: (a) rapidly analyse diagnostic test result data produced in local research or reported in the literature; (b) correct reported predictive values for disease prevalence in any population; (c) estimate the post-test probability of disease in individual patients. Microsoft Excel(TM) was used. Section A: a contingency (2 × 2) table was incorporated into the spreadsheet. Formulae for standard calculations [sample size, disease prevalence, sensitivity and specificity with 95% confidence intervals, predictive values and likelihood ratios (LRs)] were linked to this table. The results change automatically when the data in the true or false negative and positive cells are changed. Section B: this estimates predictive values in any population, compensating for altered disease prevalence. Sections C-F: Bayes' theorem was incorporated to generate individual post-test probabilities. The spreadsheet generates 95% confidence intervals, LRs and a table and graph of conditional probabilities once the sensitivity and specificity of the test are entered. The latter shows the expected post-test probability of disease for any pre-test probability when a test of known sensitivity and specificity is positive or negative. This spreadsheet can be used on desktop and palmtop computers. The MS Excel(TM) version can be downloaded via the Internet from the URL ftp://radiography.com/pub/Rad-data99.xls A spreadsheet is useful for contingency table data analysis and assessment of the clinical meaning of diagnostic test results. Copyright 2000 The Royal College of Radiologists.
Segundo, J P; Sugihara, G; Dixon, P; Stiber, M; Bersier, L F
1998-12-01
This communication describes the new information that may be obtained by applying nonlinear analytical techniques to neurobiological time-series. Specifically, we consider the sequence of interspike intervals Ti (the "timing") of trains recorded from synaptically inhibited crayfish pacemaker neurons. As reported earlier, different postsynaptic spike train forms (sets of timings with shared properties) are generated by varying the average rate and/or pattern (implying interval dispersions and sequences) of presynaptic spike trains. When the presynaptic train is Poisson (independent exponentially distributed intervals), the form is "Poisson-driven" (unperturbed and lengthened intervals succeed each other irregularly). When presynaptic trains are pacemaker (intervals practically equal), forms are either "p:q locked" (intervals repeat periodically), "intermittent" (mostly almost locked but disrupted irregularly), "phase walk throughs" (intermittencies with briefer regular portions), or "messy" (difficult to predict or describe succinctly). Messy trains are either "erratic" (some intervals natural and others lengthened irregularly) or "stammerings" (intervals are integral multiples of presynaptic intervals). The individual spike train forms were analysed using attractor reconstruction methods based on the lagged coordinates provided by successive intervals from the time-series Ti. Numerous models were evaluated in terms of their predictive performance by a trial-and-error procedure: the most successful model was taken as best reflecting the true nature of the system's attractor. Each form was characterized in terms of its dimensionality, nonlinearity and predictability. (1) The dimensionality of the underlying dynamical attractor was estimated by the minimum number of variables (coordinates Ti) required to model acceptably the system's dynamics, i.e. by the system's degrees of freedom. 
Each model tested was based on a different number of Ti; the smallest number whose predictions were judged successful provided the best integer approximation of the attractor's true dimension (not necessarily an integer). Dimensionalities from three to five provided acceptable fits. (2) The degree of nonlinearity was estimated by: (i) comparing the correlations between experimental results and data from linear and nonlinear models, and (ii) tuning model nonlinearity via a distance-weighting function and identifying the either local or global neighborhood size. Lockings were compatible with linear models and stammerings were marginal; nonlinear models were best for Poisson-driven, intermittent and erratic forms. (3) Finally, prediction accuracy was plotted against increasingly long sequences of intervals forecast: the accuracies for Poisson-driven, locked and stammering forms were invariant, revealing irregularities due to uncorrelated noise, but those of intermittent and messy erratic forms decayed rapidly, indicating an underlying deterministic process. The excellent reconstructions possible for messy erratic and for some intermittent forms are especially significant because of their relatively low dimensionality (around 4), high degree of nonlinearity and prediction decay with time. This is characteristic of chaotic systems, and provides evidence that nonlinear couplings between relatively few variables are the major source of the apparent complexity seen in these cases. This demonstration of different dimensions, degrees of nonlinearity and predictabilities provides rigorous support for the categorization of different synaptically driven discharge forms proposed earlier on the basis of more heuristic criteria. This has significant implications. (1) It demonstrates that heterogeneous postsynaptic forms can indeed be induced by manipulating a few presynaptic variables. 
(2) Each presynaptic timing induces a form with characteristic dimensionality, thus breaking up the preparation into subsystems such that the physical variables in each operate as one
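The lagged-coordinate prediction idea described above (forecasting the next interspike interval from past intervals) can be sketched minimally as a nearest-neighbor "method of analogues" forecaster; real attractor-reconstruction models are far richer, with tunable neighborhood sizes and distance weighting:

```python
def delay_embed(series, dim):
    """Lagged-coordinate vectors (T_i, ..., T_{i+dim-1}) built from an
    interval time series."""
    return [series[i:i + dim] for i in range(len(series) - dim)]

def nn_forecast(series, dim):
    """Predict the next interval by finding the past lagged vector
    closest to the most recent one and returning its successor."""
    target = series[-dim:]
    best_i, best_d = None, float("inf")
    for i, vec in enumerate(delay_embed(series[:-1], dim)):
        d = sum((a - b) ** 2 for a, b in zip(vec, target))
        if d < best_d:
            best_i, best_d = i, d
    return series[best_i + dim]

# A perfectly periodic (locked-like) interval train is trivially predictable.
series = [1, 2, 3] * 10
```

Plotting the accuracy of such forecasts against the forecast horizon is what distinguishes uncorrelated noise (flat accuracy) from deterministic chaos (rapid decay), as described above.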
Santacana, Maria; Maiques, Oscar; Valls, Joan; Gatius, Sònia; Abó, Ana Isabel; López-García, María Ángeles; Mota, Alba; Reventós, Jaume; Moreno-Bueno, Gema; Palacios, Jose; Bartosch, Carla; Dolcet, Xavier; Matias-Guiu, Xavier
2014-12-01
Histologic typing may be difficult in a subset of endometrial carcinoma (EC) cases. In these cases, interobserver agreement improves when immunohistochemistry (IHC) is used. A series of endometrioid type (EEC) grades 1, 2, and 3 and serous type (SC) were immunostained for p53, p16, estrogen receptor, PTEN, IMP2, IMP3, HER2, cyclin B2 and E1, HMGA2, FolR1, MSLN, Claudins 3 and 4, and NRF2. Nine biomarkers showed significant differences with thresholds in IHC value scale between both types (p53 ≥ 20, IMP2 ≥ 115, IMP3 ≥ 2, cyclin E1 ≥ 220, HMGA2 ≥ 30, FolR1 ≥ 50, p16 ≥ 170, nuclear PTEN ≥ 2 and estrogen receptor ≤ 50; P < .005). This combination led to increased discrimination when considering cases satisfying 0 to 5 conditions predicted as EEC and those satisfying 6 to 9 conditions predicted as SC. This signature correctly predicted all 48 EEC grade 1-2 cases and 18 SC cases, but 3 SC cases were wrongly predicted as EEC. Sensitivity was 86% (95% confidence interval [CI], 64%-97%), and specificity was 100% (95% CI, 89%-100%). The classifier correctly predicted all 28 EEC grade 3 cases but only identified the EEC and SC components in 4 of 9 mixed EEC-SC. An independent validation series (29 EEC grades 1-2, 28 EEC grade 3, and 31 SC) showed 100% sensitivity (95% CI, 84%-100%) and 83% specificity (95% CI, 64%-94%). We propose an internally and externally validated 9-protein biomarker signature to predict the histologic type of EC (EEC or SC) by IHC. The results also suggest that mixed EEC-SC is molecularly ambiguous. Copyright © 2014 Elsevier Inc. All rights reserved.
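The count-of-conditions rule in the signature (call a case SC when at least 6 of the 9 marker conditions hold, EEC otherwise) can be sketched as below, using the thresholds reported above; marker names are abbreviated and the input is assumed to be a dict of IHC scores:

```python
# Marker conditions from the reported signature (IHC value scale).
CONDITIONS = {
    "p53": lambda v: v >= 20,
    "IMP2": lambda v: v >= 115,
    "IMP3": lambda v: v >= 2,
    "cyclinE1": lambda v: v >= 220,
    "HMGA2": lambda v: v >= 30,
    "FolR1": lambda v: v >= 50,
    "p16": lambda v: v >= 170,
    "nuclearPTEN": lambda v: v >= 2,
    "ER": lambda v: v <= 50,   # estrogen receptor: low favors SC
}

def predict_type(ihc_scores):
    """Predict histologic type from the number of satisfied conditions:
    0-5 satisfied -> EEC, 6-9 satisfied -> SC."""
    met = sum(cond(ihc_scores[m]) for m, cond in CONDITIONS.items())
    return "SC" if met >= 6 else "EEC"
```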
Khosrow-Khavar, Farzad; Tavakolian, Kouhyar; Blaber, Andrew; Menon, Carlo
2016-10-12
The purpose of this research was to design a delineation algorithm that could detect specific fiducial points of the seismocardiogram (SCG) signal with or without using the electrocardiogram (ECG) R-wave as the reference point. The detected fiducial points were used to estimate cardiac time intervals. Due to the complexity and sensitivity of the SCG signal, the algorithm was designed to robustly discard low-quality cardiac cycles, which are the ones that contain unrecognizable fiducial points. The algorithm was trained on a dataset containing 48,318 manually annotated cardiac cycles. It was then applied to three test datasets: 65 young healthy individuals (dataset 1), 15 individuals above 44 years old (dataset 2), and 25 patients with previous heart conditions (dataset 3). The algorithm accomplished high prediction accuracy with a root-mean-square error of less than 5 ms for all the test datasets. The algorithm's overall mean detection rates per individual recording (DRI) were 74, 68, and 42 percent for the three test datasets when concurrent ECG and SCG were used. For the standalone SCG case, the mean DRIs were 32, 14 and 21 percent. When the proposed algorithm was applied to concurrent ECG and SCG signals, the desired fiducial points of the SCG signal were successfully estimated with a high detection rate. For the standalone case, however, the algorithm achieved high prediction accuracy and detection rate only for the young individual dataset. The presented algorithm could be used for accurate and non-invasive estimation of cardiac time intervals.
Knežević, Varja; Tunić, Tanja; Gajić, Pero; Marjan, Patricija; Savić, Danko; Tenji, Dina; Teodorović, Ivana
2016-11-01
Recovery after exposure to the herbicides atrazine, isoproturon, and trifluralin, and to their binary and ternary mixtures, was studied under laboratory conditions using a slightly adapted standard protocol for Lemna minor. The objectives of the present study were (1) to compare empirical to predicted toxicity of selected herbicide mixtures; (2) to assess L. minor recovery potential after exposure to selected individual herbicides and their mixtures; and (3) to suggest an appropriate recovery potential assessment approach and endpoint in a modified laboratory growth inhibition test. The deviation of empirical from predicted toxicity was highest in binary mixtures of dissimilarly acting herbicides. The concentration addition model slightly underestimated mixture effects, indicating potential synergistic interactions between photosynthetic inhibitors (atrazine and isoproturon) and a cell mitosis inhibitor (trifluralin). Recovery after exposure to the binary mixture of atrazine and isoproturon was fast and concentration-independent: no significant differences between relative growth rates (RGRs) in any of the mixtures (IC10Mix, IC25Mix, and IC50Mix) versus the control level were recorded in the last interval of the recovery phase. The recovery of the plants exposed to binary and ternary mixtures of dissimilarly acting herbicides was strictly concentration-dependent. Only plants exposed to IC10Mix, regardless of the herbicides, recovered RGRs close to the control level in the last interval of the recovery phase. The inhibition of the RGRs in the last interval of the recovery phase compared with the control level is a proposed endpoint that could inform on the reversibility of the effects and indicate possible mixture effects on plant population recovery potential.
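The concentration addition prediction referenced above combines the components' effect concentrations weighted by their mixture fractions. A minimal sketch of the standard CA formula, with hypothetical EC50 values:

```python
def concentration_addition_ec(fractions, ec_values):
    """Predicted mixture effect concentration under the concentration
    addition (CA) model: ECmix = 1 / sum(p_i / EC_i), where p_i are the
    fractions of each component in the mixture."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "fractions must sum to 1"
    return 1.0 / sum(p / ec for p, ec in zip(fractions, ec_values))

# Hypothetical 50:50 mixture of components with EC50 = 10 and 40 (ug/L).
ec_mix = concentration_addition_ec([0.5, 0.5], [10.0, 40.0])
```

Empirical mixture toxicity higher than this prediction (a lower observed ECmix) is what the study interprets as a potential synergistic interaction.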
Illumination discrimination in the absence of a fixed surface-reflectance layout
Radonjić, Ana; Ding, Xiaomao; Krieger, Avery; Aston, Stacey; Hurlbert, Anya C.; Brainard, David H.
2018-01-01
Previous studies have shown that humans can discriminate spectral changes in illumination and that this sensitivity depends both on the chromatic direction of the illumination change and on the ensemble of surfaces in the scene. These studies, however, always used stimulus scenes with a fixed surface-reflectance layout. Here we compared illumination discrimination for scenes in which the surface reflectance layout remains fixed (fixed-surfaces condition) to those in which surface reflectances were shuffled randomly across scenes, but with the mean scene reflectance held approximately constant (shuffled-surfaces condition). Illumination discrimination thresholds in the fixed-surfaces condition were commensurate with previous reports. Thresholds in the shuffled-surfaces condition, however, were considerably elevated. Nonetheless, performance in the shuffled-surfaces condition exceeded that attainable through random guessing. Analysis of eye fixations revealed that in the fixed-surfaces condition, low illumination discrimination thresholds (across observers) were predicted by low overall fixation spread and high consistency of fixation location and fixated surface reflectances across trial intervals. Performance in the shuffled-surfaces condition was not systematically related to any of the eye-fixation characteristics we examined for that condition, but was correlated with performance in the fixed-surfaces condition. PMID:29904786
Interval Predictor Models with a Formal Characterization of Uncertainty and Reliability
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2014-01-01
This paper develops techniques for constructing empirical predictor models based on observations. In contrast to standard models, which yield a single predicted output at each value of the model's inputs, Interval Predictor Models (IPMs) yield an interval into which the unobserved output is predicted to fall. The IPMs proposed here prescribe the output as an interval-valued function of the model's inputs and render a formal description of both the uncertainty in the model's parameters and the spread in the predicted output. Uncertainty is prescribed as a hyper-rectangular set in the space of the model's parameters. The propagation of this set through the empirical model yields a range of outputs of minimal spread containing all (or, depending on the formulation, most) of the observations. Optimization-based strategies for calculating IPMs and eliminating the effects of outliers are proposed. Outliers are identified by evaluating the extent to which they degrade the tightness of the prediction. This evaluation can be carried out while the IPM is calculated. When the data satisfy mild stochastic assumptions, and the optimization program used for calculating the IPM is convex (or its solution coincides with the solution to an auxiliary convex program), the model's reliability (that is, the probability that a future observation will fall within the predicted range of outputs) can be bounded rigorously by a non-asymptotic formula.
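The construction can be illustrated with a much-simplified sketch (not the authors' optimization program): fit a least-squares center line, then take the smallest constant-width band around it that contains every observation, so that the predicted interval covers all the data.

```python
def interval_predictor(xs, ys):
    """Toy interval predictor: least-squares center line plus the smallest
    constant-width band that contains every observation (a crude stand-in
    for the optimization-based IPMs of the paper)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    # Half-width of the tightest band covering all observations.
    half_width = max(abs(y - (a + b * x)) for x, y in zip(xs, ys))

    def predict(x):
        center = a + b * x
        return (center - half_width, center + half_width)

    return predict

pred = interval_predictor([0.0, 1.0, 2.0, 3.0], [0.1, 0.9, 2.2, 2.8])
lo, hi = pred(1.5)
```

By construction every training observation lies inside its predicted interval; the paper's formulations additionally minimize the spread over a parameter box and bound the probability that future observations fall inside.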
Improving orbit prediction accuracy through supervised machine learning
NASA Astrophysics Data System (ADS)
Peng, Hao; Bai, Xiaoli
2018-05-01
Due to the lack of information such as the space environment condition and resident space objects' (RSOs') body characteristics, current orbit predictions that are solely grounded on physics-based models may fail to achieve the accuracy required for collision avoidance and have already led to satellite collisions. This paper presents a methodology to predict RSOs' trajectories with higher accuracy than current methods. Inspired by machine learning (ML) theory, in which models are learned from large amounts of observed data and prediction is conducted without explicitly modeling space objects and the space environment, the proposed ML approach integrates physics-based orbit prediction algorithms with a learning-based process that focuses on reducing the prediction errors. Using a simulation-based space catalog environment as the test bed, the paper demonstrates three types of generalization capability for the proposed ML approach: (1) the ML model can be used to improve the same RSO's orbit information that is not available during the learning process but shares the same time interval as the training data; (2) the ML model can be used to improve predictions of the same RSO at future epochs; and (3) the ML model based on one RSO can be applied to other RSOs that share some common features.
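The hybrid idea, keeping the physics propagator and learning only its error, can be sketched as follows; the one-dimensional linear error model here is a toy stand-in for whatever regressor such an approach actually trains:

```python
def train_error_model(features, errors):
    """Least-squares fit of error ~ w*f + c; a toy stand-in for the ML model
    that learns the physics model's prediction error from historical data."""
    n = len(features)
    mf = sum(features) / n
    me = sum(errors) / n
    w = sum((f - mf) * (e - me) for f, e in zip(features, errors)) / \
        sum((f - mf) ** 2 for f in features)
    c = me - w * mf
    return lambda f: w * f + c

# Hypothetical history: feature (e.g. prediction horizon) vs. observed error.
model = train_error_model([0.0, 1.0, 2.0], [1.0, 3.0, 5.0])  # error = 2*f + 1

def corrected_prediction(physics_pred, feature):
    # ML-corrected prediction = physics-based prediction + learned error.
    return physics_pred + model(feature)
```

The corrected prediction simply adds the learned error back onto the physics-based output, so the physics model remains the backbone and the ML component only absorbs its systematic residuals.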
Three-dimensional elastic-plastic finite-element analysis of fatigue crack propagation
NASA Technical Reports Server (NTRS)
Goglia, G. L.; Chermahini, R. G.
1985-01-01
Fatigue cracks are a major problem in designing structures subjected to cyclic loading. Cracks frequently occur in structures such as aircraft and spacecraft. The inspection intervals of many aircraft structures are based on crack-propagation lives. Therefore, improved prediction of propagation lives under flight-load conditions (variable-amplitude loading) is needed to provide more realistic design criteria for these structures. The main thrust was to develop a three-dimensional, nonlinear, elastic-plastic, finite-element program capable of extending a crack and changing boundary conditions for the model under consideration. The finite-element model is composed of 8-noded (linear-strain) isoparametric elements. In the analysis, the material is assumed to be elastic-perfectly plastic. The cyclic stress-strain curve for the material is shown. Zienkiewicz's initial-stress method, von Mises's yield criterion, and Drucker's normality condition under small-strain assumptions are used to account for plasticity. The three-dimensional analysis is capable of extending the crack and changing boundary conditions under cyclic loading.
Risk factors and mortality associated with default from multidrug-resistant tuberculosis treatment.
Franke, Molly F; Appleton, Sasha C; Bayona, Jaime; Arteaga, Fernando; Palacios, Eda; Llaro, Karim; Shin, Sonya S; Becerra, Mercedes C; Murray, Megan B; Mitnick, Carole D
2008-06-15
Completing treatment for multidrug-resistant (MDR) tuberculosis (TB) may be more challenging than completing first-line TB therapy, especially in resource-poor settings. The objectives of this study were to (1) identify risk factors for default from MDR TB therapy (defined as prolonged treatment interruption), (2) quantify mortality among patients who default from treatment, and (3) identify risk factors for death after default from treatment. We performed a retrospective chart review to identify risk factors for default from MDR TB therapy and conducted home visits to assess mortality among patients who defaulted from such therapy. Sixty-seven (10.0%) of 671 patients defaulted from MDR TB therapy. The median time to treatment default was 438 days (interquartile range, 152-710 days), and 27 (40.3%) of the 67 patients who defaulted from treatment had culture-positive sputum at the time of default. Substance use (hazard ratio, 2.96; 95% confidence interval, 1.56-5.62; P = .001), substandard housing conditions (hazard ratio, 1.83; 95% confidence interval, 1.07-3.11; P = .03), later year of enrollment (hazard ratio, 1.62; 95% confidence interval, 1.09-2.41; P = .02), and health district (P = .02) predicted default from therapy in a multivariable analysis. Severe adverse events did not predict default from therapy. Forty-seven (70.1%) of 67 patients who defaulted from therapy were successfully traced; of these, 25 (53.2%) had died. Poor bacteriologic response, <1 year of treatment at the time of default, low education level, and diagnosis with a psychiatric disorder significantly predicted death after default in a multivariable analysis. The proportion of patients who defaulted from MDR TB treatment was relatively low. The large proportion of patients who had culture-positive sputum at the time of treatment default underscores the public health importance of minimizing treatment default. Prognosis for patients who defaulted from therapy was poor.
Interventions aimed at preventing treatment default may reduce TB-related mortality.
Complex Dynamic Processes in Sign Tracking With an Omission Contingency (Negative Automaintenance)
Killeen, Peter R.
2008-01-01
Hungry pigeons received food periodically, signaled by the onset of a keylight. Key pecks aborted the feeding. Subjects responded for thousands of trials, despite the contingent nonreinforcement, with varying probability as the intertrial interval was varied. Hazard functions showed the dominant tendency to be perseveration in responding and not responding. Once perseveration was accounted for, a linear operator model of associative conditioning further improved predictions. Response rates during trials were correlated with the prior probabilities of a response. Rescaled range analyses showed that the behavioral trajectories were a kind of fractional Brownian motion. PMID:12561133
Soil, Water, and Vegetation Conditions in South Texas
NASA Technical Reports Server (NTRS)
Wiegand, C. L.; Gausman, H. W.; Leamer, R. W.; Richardson, A. J. (Principal Investigator)
1976-01-01
The author has identified the following significant results. Reflectance differences between the dead leaves of six crops (corn, cotton, sorghum, sugar cane, citrus, and avocado) and the respective bare soils where the dead leaves were lying on the ground were determined from laboratory spectrophotometric measurements over the 0.5- to 2.5-micron wavelength interval. The largest differences were in the near-infrared waveband, 0.75 to 1.35 microns. Leaf area index was predicted from plant height, percent ground cover, and plant population for irrigated and nonirrigated grain sorghum fields for the 1975 growing season.
Endogenous modulation of low frequency oscillations by temporal expectations
Cravo, Andre M.; Rohenkohl, Gustavo; Wyart, Valentin
2011-01-01
Recent studies have associated increasing temporal expectations with synchronization of higher frequency oscillations and suppression of lower frequencies. In this experiment, we explore a proposal that low-frequency oscillations provide a mechanism for regulating temporal expectations. We used a speeded Go/No-go task and manipulated temporal expectations by changing the probability of target presentation after certain intervals. Across two conditions, the temporal conditional probability of target events differed substantially at the first of three possible intervals. We found that reactions times differed significantly at this first interval across conditions, decreasing with higher temporal expectations. Interestingly, the power of theta activity (4–8 Hz), distributed over central midline sites, also differed significantly across conditions at this first interval. Furthermore, we found a transient coupling between theta phase and beta power after the first interval in the condition with high temporal expectation for targets at this time point. Our results suggest that the adjustments in theta power and the phase-power coupling between theta and beta contribute to a central mechanism for controlling neural excitability according to temporal expectations. PMID:21900508
Mekitarian Filho, Eduardo; Horita, Sérgio Massaru; Gilio, Alfredo Elias; Alves, Anna Cláudia Dominguez; Nigrovic, Lise E
2013-09-01
In a retrospective cohort of 494 children with meningitis in Sao Paulo, Brazil, the Bacterial Meningitis Score identified all the children with bacterial meningitis (sensitivity 100%, 95% confidence interval 92-100%; negative predictive value 100%, 95% confidence interval 98-100%). Addition of cerebrospinal fluid lactate to the score did not improve clinical prediction rule performance.
Jodice, Patrick G.R.; Collopy, Michael W.
1999-01-01
The diving behavior of Marbled Murrelets (Brachyramphus marmoratus) was studied using telemetry along the Oregon coast during the 1995 and 1996 breeding seasons and examined in relation to predictions from optimal-breathing models. Duration of dives, pauses, dive bouts, time spent under water during dive bouts, and nondiving intervals between successive dive bouts were recorded. Most diving metrics differed between years but not with oceanographic conditions or shore type. There was no effect of water depth on mean dive time or percent time spent under water even though dive bouts occurred in depths from 3 to 36 m. There was a significant, positive relationship between mean dive time and mean pause time at the dive-bout scale each year. At the dive-cycle scale, there was a significant positive relationship between dive time and preceding pause time in each year and a significant positive relationship between dive time and ensuing pause time in 1996. Although it appears that aerobic diving was the norm, there appeared to be an increase in anaerobic diving in 1996. The diving performance of Marbled Murrelets in this study appeared to be affected by annual changes in environmental conditions and prey resources but did not consistently fit predictions from optimal-breathing models.
Hinderliter, Charles F; Goodhart, Mark; Anderson, Matthew J; Misanin, James R
2002-06-01
Assuming body temperature correlates with metabolic activities, rate of body temperature recovery was manipulated to assess effects on long-trace conditioning in a conditioned taste-aversion paradigm. Following 10 min. access to a 0.1% saccharin solution and then 10 min. immersion in 0-0.5 degrees C water, two groups of 16 Wistar-derived, 81-113-day-old, male albino rats received either saline or lithium chloride injections 3 hr. later. These two groups were subdivided on the basis of warming rate during the 3-hr. interval. Half of the rats recovered at room temperature (20 degrees to 21 degrees C), and half recovered in an incubator maintained at 30 degrees C. Maintaining a lowered body temperature between the conditioned stimulus and unconditioned stimulus allowed an association to be made at 3 hr., an interval that normally does not support conditioning. In contrast, lowering body temperature and then inducing a fast warming rate did not produce evidence of an aversion. It is suggested that maintaining a low body temperature over the interval between the presentation of the conditioned stimulus and unconditioned stimulus slows a metabolic clock that extends the measured interval at which associations can be made using conditioned taste-aversion procedures.
Tulloch, Ayesha I T; Pichancourt, Jean-Baptiste; Gosper, Carl R; Sanders, Angela; Chadès, Iadine
2016-10-01
Changed fire regimes have led to declines of fire-regime-adapted species and loss of biodiversity globally. Fire affects population processes of growth, reproduction, and dispersal in different ways, but there is little guidance about the best fire regime(s) to maintain species population processes in fire-prone ecosystems. We use a process-based approach to determine the best range of fire intervals for keystone plant species in a highly modified Mediterranean ecosystem in southwestern Australia where current fire regimes vary. In highly fragmented areas, fires are few due to limited ignitions and active suppression of wildfire on private land, while in highly connected protected areas fires are frequent and extensive. Using matrix population models, we predict population growth of seven Banksia species under different environmental conditions and patch connectivity, and evaluate the sensitivity of species survival to different fire management strategies and burning intervals. We discover that contrasting, complementary patterns of species life-histories with time since fire result in no single best fire regime. All strategies result in the local patch extinction of at least one species. A small number of burning strategies secure complementary species sets depending on connectivity and post-fire growing conditions. A strategy of no fire always leads to fewer species persisting than prescribed fire or random wildfire, while too-frequent or too-rare burning regimes lead to the possible local extinction of all species. In low landscape connectivity, we find a smaller range of suitable fire intervals, and strategies of prescribed or random burning result in a lower number of species with positive growth rates after 100 years on average compared with burning high connectivity patches. Prescribed fire may reduce or increase extinction risk when applied in combination with wildfire depending on patch connectivity. 
Poor growing conditions result in a significantly reduced number of species exhibiting positive growth rates after 100 years of management. By exploring the consequences of managing fire, we are able to identify which species are likely to disappear under a given fire regime. Identifying the appropriate complementarity of fire intervals, and their species-specific as well as community-level consequences, is crucial to reduce local extinctions of species in fragmented fire-prone landscapes. © 2016 by the Ecological Society of America.
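The core computation behind such matrix population models can be sketched briefly: the asymptotic population growth rate is the dominant eigenvalue of the stage-projection matrix, obtainable by power iteration. The two-stage matrix below is illustrative only, not one of the paper's Banksia models.

```python
def growth_rate(A, iters=200):
    """Dominant eigenvalue (asymptotic population growth rate) of a
    non-negative stage-projection matrix A, by power iteration."""
    n = len(A)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)   # converges to the dominant eigenvalue
        v = [x / lam for x in w]       # renormalized eigenvector estimate
    return lam

# Hypothetical two-stage plant: juvenile survival/maturation and adult fecundity.
A = [[0.5, 2.0],
     [0.3, 0.0]]
lam = growth_rate(A)   # lam > 1: growing population; lam < 1: declining
```

Fire-interval scenarios enter such an analysis by modifying the matrix entries (e.g. resetting adults to zero after a fire), after which the same eigenvalue computation indicates whether the population persists.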
Predictive sensor method and apparatus
NASA Technical Reports Server (NTRS)
Nail, William L. (Inventor); Koger, Thomas L. (Inventor); Cambridge, Vivien (Inventor)
1990-01-01
A predictive algorithm is used to determine, in near real time, the steady-state response of a slow-responding sensor, such as a hydrogen gas sensor of the type which produces an output current proportional to the partial pressure of the hydrogen present. A microprocessor connected to the sensor samples the sensor output at small regular time intervals and predicts the steady-state response of the sensor to a perturbation in the parameter being sensed, based on the beginning and end samples of the sensor output for the current sample time interval.
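For a first-order (exponential) sensor, the steady-state value can be extrapolated from three equally spaced samples, because successive increments shrink geometrically. The following is a minimal sketch of that idea, not the patented algorithm:

```python
def predict_steady_state(y1, y2, y3):
    """Steady-state estimate from three equally spaced samples of a
    first-order sensor: the increments (y2 - y1), (y3 - y2) form a geometric
    sequence, so the limit is (y1*y3 - y2^2) / (y1 + y3 - 2*y2)."""
    denom = y1 + y3 - 2.0 * y2
    if abs(denom) < 1e-12:        # response already settled (or linear)
        return y3
    return (y1 * y3 - y2 * y2) / denom

# Samples 0.0, 2.0, 3.0 (increments halving) extrapolate to a reading of 4.0.
y_ss = predict_steady_state(0.0, 2.0, 3.0)
```

A microcontroller can apply this to each new window of samples and report the extrapolated steady-state value long before the sensor itself settles.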
Setting the Revisit Interval in Primary Care
Schwartz, Lisa M; Woloshin, Steven; Wasson, John H; Renfrew, Roger A; Welch, H Gilbert
1999-01-01
OBJECTIVE Although longitudinal care constitutes the bulk of primary care, physicians receive little guidance on the fundamental question of how to time follow-up visits. We sought to identify important predictors of the revisit interval and to describe the variability in how physicians set these intervals when caring for patients with common medical conditions. DESIGN Cross-sectional survey of physicians performed at the end of office visits for consecutive patients with hypertension, angina, diabetes, or musculoskeletal pain. PARTICIPANTS/SETTING One hundred sixty-four patients under the care of 11 primary care physicians in the Dartmouth Primary Care Cooperative Research Network. MEASUREMENTS The main outcome measures were the variability in mean revisit intervals across physicians and the proportion of explained variance by potential determinants of revisit intervals. We assessed the relation between the revisit interval (dependent variable) and three groups of independent variables, patient characteristics (e.g., age, physician perception of patient health), identification of individual physician, and physician characterization of the visit (e.g., routine visit, visit requiring a change in management, or visit occurring on a “hectic” day), using multiple regression that accounted for the natural grouping of patients within physician. MAIN RESULTS Revisit intervals ranged from 1 week to over 1 year. The most common intervals were 12 and 16 weeks. Physicians’ perception of fair-poor health status and visits involving a change in management were most strongly related to shorter revisit intervals. In multivariate analyses, patient characteristics explained about 18% of the variance in revisit intervals, and adding identification of the individual provider doubled the explained variance to about 40%. Physician characterization of the visit increased explained variance to 57%. 
The average revisit interval adjusted for patient characteristics for each of the 11 physicians varied from 4 to 20 weeks. Although all physicians lengthened revisit intervals for routine visits and shortened them when changing management, the relative ranking of mean revisit intervals for each physician changed little for different visit characterizations—some physicians were consistently long and others were consistently short. CONCLUSION Physicians vary widely in their recommendations for office revisits. Patient factors accounted for only a small part of this variation. Although physicians responded to visits in predictable ways, each physician appeared to have a unique set point for the length of the revisit interval. PMID:10203635
Investigation of the air pollutant distribution over Northeast Asia using Models-3/CMAQ
NASA Astrophysics Data System (ADS)
Kim, J. Y.; Ghim, Y. S.; Won, J.-G.; Yoon, S.-C.; Woo, J.-H.
2003-04-01
Northeast Asia is one of the most densely populated areas in the world. Huge amounts of air pollutants emitted in the area are transported to the east along with the prevailing westerlies. In spring in Northeast Asia, migratory anticyclones are frequent. Transport and distribution of air pollutants can be substantially altered according to the locations of anticyclones. In this work, two different synoptic meteorological conditions associated with different locations of anticyclones in May 1999 were identified. The distributions of gaseous and particulate pollutants in these meteorological conditions were predicted and compared. Models-3/CMAQ (USEPA Models-3/Community Multi-scale Air Quality) and MM5 (PSU/NCAR Mesoscale Modeling System) were used to predict air quality and meteorology, respectively. The modeling domain was 5,184 km x 3,456 km, centering on the Korean Peninsula (130° E, 40° N). The grid size was 108 km x 108 km and the number of grids was 48 in the west-east direction and 32 in the south-north direction. The number of layers in the vertical direction was six, to the height of 500 hPa. Emission data were taken from the Center for Global and Regional Environmental Research, University of Iowa, for anthropogenic emissions and from GEIA (Global Emissions Inventory Activity) for biogenic emissions. The GDAPS (Global Data Assimilation and Prediction System) data of six-hour intervals were used for initial and boundary conditions of MM5.
Yelland, L N; Gajewski, B J; Colombo, J; Gibson, R A; Makrides, M; Carlson, S E
2016-09-01
The DHA to Optimize Mother Infant Outcome (DOMInO) and Kansas DHA Outcomes Study (KUDOS) were randomized controlled trials that supplemented mothers with 800 and 600 mg DHA/day, respectively, or a placebo during pregnancy. DOMInO was conducted in Australia and KUDOS in the United States. Both trials found an unanticipated and statistically significant reduction in early preterm birth (ePTB; i.e., birth before 34 weeks gestation). However, in each trial, the number of ePTBs was small. We used a novel Bayesian approach to estimate statistically derived low, moderate or high risk for ePTB, and to test for differences between the DHA and placebo groups. In both trials, the model predicted DHA would significantly reduce the expected proportion of deliveries in the high-risk group under the trial conditions of the parent studies. Among the next 300,000 births in Australia, we estimated that 1112 ePTB (95% credible interval 51-2189) could be avoided by providing DHA. In the USA, we estimated that 106,030 ePTB (95% credible interval 6400-175,700) could be avoided with DHA. Copyright © 2016 Elsevier Ltd. All rights reserved.
Relations between cognitive status and medication adherence in patients treated for memory disorders
Ownby, Raymond L.; Hertzog, Christopher; Czaja, Sara J.
2012-01-01
Medication adherence has been increasingly recognized as an important factor in elderly persons' health. Various studies have shown that medication non-adherence is associated with poor health status in this population. As part of a study of the effects of two interventions to promote medication adherence in patients treated for memory problems, information on medication adherence and cognitive status was collected at 3-month intervals. Twenty-seven participants (16 men, 11 women, age 71–92 years) were assigned to control or treatment conditions and adherence was evaluated with an electronic monitoring device. Cognitive status was evaluated at 3-month intervals beginning in April of 2003 and continuing through September of 2006. We have previously reported on the effectiveness of these interventions to promote adherence. In this paper, we examine the relations of cognitive status and adherence over time using a partial least squares path model in order to evaluate the extent to which adherence to cholinesterase medications was related to cognitive status. Adherence predicted cognitive status at later time points while cognition did not, in general, predict adherence. Results thus suggest that interventions to ensure high levels of medication adherence may be important for maintaining cognitive function in affected elderly people. PMID:24575293
Quantifying uncertainty on sediment loads using bootstrap confidence intervals
NASA Astrophysics Data System (ADS)
Slaets, Johanna I. F.; Piepho, Hans-Peter; Schmitter, Petra; Hilger, Thomas; Cadisch, Georg
2017-01-01
Load estimates are more informative than constituent concentrations alone, as they allow quantification of on- and off-site impacts of environmental processes concerning pollutants, nutrients and sediment, such as soil fertility loss, reservoir sedimentation and irrigation channel siltation. While statistical models used to predict constituent concentrations have been developed considerably over the last few years, measures of uncertainty on constituent loads are rarely reported. Loads are the product of two predictions, constituent concentration and discharge, integrated over a time period, which does not make it straightforward to produce a standard error or a confidence interval. In this paper, a linear mixed model is used to estimate sediment concentrations. A bootstrap method is then developed that accounts for the uncertainty in the concentration and discharge predictions, allows temporal correlation in the constituent data, and can be used when data transformations are required. The method was tested for a small watershed in Northwest Vietnam for the period 2010-2011. The results showed that confidence intervals were asymmetric, with the highest uncertainty in the upper limit, and that a load of 6262 Mg year-1 had a 95% confidence interval of (4331, 12 267) in 2010 and a load of 5543 Mg year-1 an interval of (3593, 8975) in 2011. Additionally, the approach demonstrated that direct estimates from the data were biased downwards compared to bootstrap median estimates. These results imply that constituent loads predicted from regression-type water quality models could frequently be underestimating sediment yields and their environmental impact.
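The bootstrap idea can be sketched in a few lines. This toy version resamples only the concentration model's residuals and ignores the discharge uncertainty and temporal correlation that the paper's mixed-model bootstrap handles:

```python
import random

def bootstrap_load_ci(conc_pred, conc_resid, discharge, dt,
                      n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for a load = sum(concentration * discharge * dt).
    Uncertainty is propagated by resampling the concentration model's
    residuals; a simplified version of the paper's approach that ignores
    discharge error and temporal correlation."""
    rng = random.Random(seed)
    loads = []
    for _ in range(n_boot):
        resampled = [c + rng.choice(conc_resid) for c in conc_pred]
        loads.append(sum(max(c, 0.0) * q * dt
                         for c, q in zip(resampled, discharge)))
    loads.sort()
    lo = loads[int(alpha / 2 * n_boot)]
    hi = loads[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Toy series: predicted concentration 1.0 (residuals ±0.1), discharge 2.0.
lo, hi = bootstrap_load_ci([1.0] * 4, [-0.1, 0.0, 0.1], [2.0] * 4, 1.0)
```

Because the load distribution is built from resampled predictions rather than a normal approximation, the resulting interval can be asymmetric, matching the paper's observation of larger uncertainty in the upper limit.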
Caro-Martín, C Rocío; Leal-Campanario, Rocío; Sánchez-Campusano, Raudel; Delgado-García, José M; Gruart, Agnès
2015-11-04
We were interested in determining whether rostral medial prefrontal cortex (rmPFC) neurons participate in the measurement of conditioned stimulus-unconditioned stimulus (CS-US) time intervals during classical eyeblink conditioning. Rabbits were conditioned with a delay paradigm consisting of a tone as CS. The CS started 50, 250, 500, 1000, or 2000 ms before and coterminated with an air puff (100 ms) directed at the cornea as the US. Eyelid movements were recorded with the magnetic search coil technique and the EMG activity of the orbicularis oculi muscle. Firing activities of rmPFC neurons were recorded across conditioning sessions. Reflex and conditioned eyelid responses presented a dominant oscillatory frequency of ≈12 Hz. The firing rate of each recorded neuron presented a single peak of activity with a frequency dependent on the CS-US interval (i.e., ≈12 Hz for 250 ms, ≈6 Hz for 500 ms, and ≈3 Hz for 1000 ms). Interestingly, rmPFC neurons presented their dominant firing peaks at three precise times evenly distributed with respect to CS start and also depending on the duration of the CS-US interval (only for intervals of 250, 500, and 1000 ms). No significant neural responses were recorded at very short (50 ms) or long (2000 ms) CS-US intervals. rmPFC neurons seem not to encode the oscillatory properties characterizing conditioned eyelid responses in rabbits, but are probably involved in the determination of CS-US intervals of an intermediate range (250-1000 ms). We propose that a variable oscillator underlies the generation of working memories in rabbits. The way in which brains generate working memories (those used for the transient processing and storage of newly acquired information) is still an intriguing question. Here, we report that the firing activities of neurons located in the rostromedial prefrontal cortex recorded in alert behaving rabbits are controlled by a dynamic oscillator.
This oscillator generated firing frequencies in a variable band of 3-12 Hz depending on the conditioned stimulus-unconditioned stimulus intervals (1 s, 500 ms, 250 ms) selected for classical eyeblink conditioning of behaving rabbits. Shorter (50 ms) and longer (2 s) intervals failed to activate the oscillator and prevented the acquisition of conditioned eyelid responses. This is an unexpected mechanism for generating sustained firing activities in neural circuits that generate working memories. Copyright © 2015 the authors.
Clinical history and biologic age predicted falls better than objective functional tests.
Gerdhem, Paul; Ringsberg, Karin A M; Akesson, Kristina; Obrant, Karl J
2005-03-01
Fall risk assessment is important because the consequences, such as a fracture, may be devastating. The objective of this study was to find the test or tests that best predicted falls in a population-based sample of elderly women. The fall-predictive ability of a questionnaire, a subjective estimate of biologic age, and objective functional tests (gait, balance [Romberg and sway test], thigh muscle strength, and visual acuity) were compared in 984 randomly selected women, all 75 years of age. A recalled fall was the most important predictor for future falls. Only recalled falls and intake of psychoactive drugs independently predicted future falls. Women with at least five of the most important fall predictors (previous falls, conditions affecting balance, tendency to fall, intake of psychoactive medication, inability to stand on one leg, high biologic age) had an odds ratio of 11.27 (95% confidence interval 4.61-27.60) for a fall (sensitivity 70%, specificity 79%). The more time-consuming objective functional tests were of limited importance for fall prediction. A simple clinical history, the inability to stand on one leg, and a subjective estimate of biologic age were more important as part of the fall risk assessment.
NASA Technical Reports Server (NTRS)
Farassat, Fereidoun; Casper, Jay H.
2012-01-01
We show that a simple modification of Formulation 1 of Farassat results in a new analytic expression that is highly suitable for broadband noise prediction when extensive turbulence simulation is available. This result satisfies all the stringent requirements, such as permitting the use of the exact geometry and kinematics of the moving body, that we have set as our goal in the derivation of useful acoustic formulas for the prediction of rotating blade and airframe noise. We also derive a simple analytic expression for the autocorrelation of the acoustic pressure that is valid in the near and far fields. Our analysis is based on the time integral of the acoustic pressure, which can easily be obtained at any resolution for any observer time interval and digitally analyzed for broadband noise prediction. We have named this result Formulation 2B of Farassat. One significant consequence of Formulation 2B is the derivation of the acoustic velocity potential for the thickness and loading terms of the Ffowcs Williams-Hawkings (FW-H) equation. This will greatly enhance the usefulness of the Fast Scattering Code (FSC) by providing a high-fidelity boundary condition input for scattering predictions.
Mohebbi, Maryam; Ghassemian, Hassan
2011-08-01
Atrial fibrillation (AF) is the most common cardiac arrhythmia and increases the risk of stroke. Predicting the onset of paroxysmal AF (PAF) based on noninvasive techniques is clinically important and can be invaluable in order to avoid useless therapeutic intervention and to minimize risks for the patients. In this paper, we propose an effective PAF predictor which is based on the analysis of the RR-interval signal. This method consists of three steps: preprocessing, feature extraction, and classification. In the first step, the QRS complexes are detected from the electrocardiogram (ECG) signal and then the RR-interval signal is extracted. In the next step, the recurrence plot (RP) of the RR-interval signal is obtained and five statistically significant features are extracted to characterize the basic patterns of the RP. These features consist of the recurrence rate, length of the longest diagonal segment (Lmax), average length of the diagonal lines (Lmean), entropy, and trapping time. Recurrence quantification analysis can reveal subtle aspects of dynamics not easily appreciated by other methods and exhibits characteristic patterns which are caused by the typical dynamical behavior. In the final step, a support vector machine (SVM)-based classifier is used for PAF prediction. The performance of the proposed method in prediction of PAF episodes was evaluated using the Atrial Fibrillation Prediction Database (AFPDB), which consists of 30-min ECG recordings that end just prior to the onset of PAF as well as segments at least 45 min distant from any PAF events. The obtained sensitivity, specificity, positive predictivity, and negative predictivity were 97%, 100%, 100%, and 96%, respectively. The proposed methodology presents better results than other existing approaches.
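The simplest of the five recurrence-plot features can be illustrated directly; for brevity this sketch works on raw RR values (embedding dimension 1) rather than a delay-embedded trajectory:

```python
def recurrence_rate(rr, eps):
    """Recurrence rate of a series: the fraction of point pairs (i, j) whose
    values lie within eps of each other, i.e. the density of black points in
    the recurrence plot (here computed on raw values, embedding dimension 1)."""
    n = len(rr)
    hits = sum(1 for i in range(n) for j in range(n)
               if abs(rr[i] - rr[j]) <= eps)
    return hits / (n * n)

# Toy RR-interval series (seconds); eps is the recurrence threshold.
rate = recurrence_rate([0.8, 0.8, 1.2], 0.1)
```

In the method described above, this feature, together with Lmax, Lmean, entropy, and trapping time, forms the feature vector passed to the SVM classifier.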
NASA Astrophysics Data System (ADS)
Shi, Yongli; Wu, Zhong; Zhi, Kangyi; Xiong, Jun
2018-03-01
In order to realize reliable commutation of brushless DC motors (BLDCMs), this paper proposes a simple approach to detect and correct signal faults of Hall position sensors. First, the time instant of the next jumping edge of the Hall signals is predicted by using prior information of the pulse intervals in the last electrical period. Considering possible errors between the predicted instant and the real one, a confidence interval is set around the predicted value with a suitable tolerance for the next pulse edge. According to the relationship between the real pulse edge and the confidence interval, Hall signals can be judged and signal faults can be corrected. Experimental results of a BLDCM at steady speed demonstrate the effectiveness of the approach.
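A minimal sketch of the predict-and-check idea (names, numbers, and the tolerance are ours, not the paper's): the interval one electrical period back supplies the prediction, and a measured edge outside the confidence band is flagged and replaced by the predicted instant.

```python
def predict_next_edge(edge_times):
    """Predict the next Hall jumping edge from the pulse intervals of the
    last electrical period (six Hall edges per electrical period assumed)."""
    intervals = [b - a for a, b in zip(edge_times, edge_times[1:])]
    return edge_times[-1] + intervals[-6]

def judge_edge(t_predicted, t_measured, tolerance):
    """Accept the measured edge if it falls inside the confidence interval
    [t_predicted - tolerance, t_predicted + tolerance]; otherwise flag a
    fault and substitute the predicted instant."""
    if abs(t_measured - t_predicted) <= tolerance:
        return t_measured, True      # edge judged valid
    return t_predicted, False        # fault detected and corrected
```

At steady speed the pulse intervals are nearly constant, so the edge history predicts the next edge one interval ahead; during acceleration the tolerance would need to absorb the interval drift.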
Persistent opioid use following Cesarean delivery: patterns and predictors among opioid naïve women
Bateman, Brian T.; Franklin, Jessica M.; Bykov, Katsiaryna; Avorn, Jerry; Shrank, William H.; Brennan, Troyen A.; Landon, Joan E.; Rathmell, James P.; Huybrechts, Krista F.; Fischer, Michael A.; Choudhry, Niteesh K.
2016-01-01
Background The incidence of opioid-related death in women has increased five-fold over the past decade. For many women, their initial opioid exposure will occur in the setting of routine medical care. Approximately 1 in 3 deliveries in the U.S. is by Cesarean, and opioids are commonly prescribed for post-surgical pain management. Objective The objective of this study was to determine the risk that opioid-naïve women prescribed opioids after Cesarean delivery will subsequently become consistent prescription opioid users in the year following delivery, and to identify predictors of this behavior. Study Design We identified women in a database of commercial insurance beneficiaries who underwent Cesarean delivery and who were opioid-naïve in the year prior to delivery. To identify persistent users of opioids, we used trajectory models, which group together patients with similar patterns of medication filling during follow-up, based on patterns of opioid dispensing in the year following Cesarean delivery. We then constructed a multivariable logistic regression model to identify independent risk factors for membership in the persistent user group. Results 285 of 80,127 (0.36%; 95% confidence interval, 0.32 to 0.40) opioid-naïve women became persistent opioid users (identified using trajectory models based on monthly patterns of opioid dispensing) following Cesarean delivery. Demographics and baseline comorbidity predicted such use with moderate discrimination (c statistic = 0.73).
Significant predictors included a history of cocaine abuse (risk 7.41%; adjusted odds ratio 6.11, 95% confidence interval 1.03 to 36.31) and other illicit substance abuse (2.36%; adjusted odds ratio 2.78, 95% confidence interval 1.12 to 6.91), tobacco use (1.45%; adjusted odds ratio 3.04, 95% confidence interval 2.03 to 4.55), back pain (0.69%; adjusted odds ratio 1.74, 95% confidence interval 1.33 to 2.29), migraines (0.91%; adjusted odds ratio 2.14, 95% confidence interval 1.58 to 2.90), antidepressant use (1.34%; adjusted odds ratio 3.19, 95% confidence interval 2.41 to 4.23) and benzodiazepine use (1.99%; adjusted odds ratio 3.72, 95% confidence interval 2.64 to 5.26) in the year prior to Cesarean delivery. Conclusions A very small proportion of opioid-naïve women (approximately 1 in 300) become persistent prescription opioid users following Cesarean delivery. Pre-existing psychiatric comorbidity, certain pain conditions, and substance use/abuse conditions identifiable at the time of initial opioid prescribing were predictors of persistent use. PMID:26996986
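The c statistic quoted above (0.73) is the area under the ROC curve: the probability that a randomly chosen persistent user receives a higher predicted risk than a randomly chosen non-user. A direct (O(n·m)) illustration with made-up risk scores:

```python
def c_statistic(risk_cases, risk_controls):
    """Probability that a randomly chosen case gets a higher predicted
    risk than a randomly chosen control (ties count one half)."""
    pairs = concordant = 0.0
    for rc in risk_cases:
        for rn in risk_controls:
            pairs += 1
            if rc > rn:
                concordant += 1
            elif rc == rn:
                concordant += 0.5
    return concordant / pairs

c_statistic([0.9, 0.6, 0.4], [0.5, 0.2, 0.1])  # → 8/9 ≈ 0.889
```

A value of 0.5 corresponds to no discrimination and 1.0 to perfect separation, so 0.73 is the "moderate discrimination" the authors describe.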
Drake, Anna; Martin, Kathy
2018-02-09
Weather and ecological factors are known to influence breeding phenology and thus individual fitness. We predicted concordance between weather conditions and annual variation in phenology within a community of eight resident, cavity-nesting bird species over a 17-year period. We show that, although clutch initiation dates for six of our eight species are correlated with local daily maximum temperatures, this common driver does not produce a high degree of breeding synchrony due to species-specific responses to conditions during different periods of the preceding winter or spring. These "critical temperature periods" were positively associated with average lay date for each species, although the interval between critical periods and clutch initiation varied from 4-78 days. The ecological factors we examined (cavity availability and a food pulse) had an additional influence on timing in only one of our eight focal species. Our results have strong implications for understanding heterogeneous wildlife responses to climate change: divergent responses would be expected within communities where species respond to local conditions within different temporal windows, due to differing warming trends between winter and spring. Our system therefore indicates that climate change could alter relative breeding phenology among sympatric species in temperate ecosystems.
Carberry, Angela E; Raynes-Greenow, Camille H; Turner, Robin M; Jeffery, Heather E
2013-10-15
Customized birth weight charts that incorporate maternal characteristics are now being adopted into clinical practice. However, there is controversy surrounding the value of these charts in the prediction of growth and perinatal outcomes. The objective of this study was to assess the use of customized charts in predicting growth, defined by body fat percentage, and perinatal morbidity. A total of 581 term (≥37 weeks' gestation) neonates born in Sydney, Australia, in 2010 were included. Body fat percentage measurements were taken by using air displacement plethysmography. Objective composite measurements of perinatal morbidity were used to identify neonates who had poor outcomes; these data were extracted from medical records. The value of customized charts was assessed by calculating positive predictive values, negative predictive values, and odds ratios with 95% confidence intervals. Customized versus population-based charts did not improve the prediction of either low body fat percentage (59% vs. 66% positive predictive value and 87% vs. 89% negative predictive value, respectively) or high body fat percentage (48% vs. 53% positive predictive value and 90% vs. 89% negative predictive value, respectively). Customized charts were not better than population-based charts at predicting perinatal morbidity (for customized charts, odds ratio = 1.02, 95% confidence interval: 1.01, 1.04; for population-based charts, odds ratio = 1.03, 95% confidence interval: 1.01, 1.05) per percentile decrease in birth weight. Customized birth weight charts do not provide significant improvements over population-based charts in predicting neonatal growth and morbidity.
The Effect of Information Feedback Upon Psychophysical Judgments
NASA Technical Reports Server (NTRS)
Atkinson, Richard C.; Carterette, Edward C.; Kinchla, Ronald A.
1964-01-01
An analysis was made of the role of presentation schedules and information feedback on performance in a forced-choice signal detection task. The experimental results indicate that information feedback facilitates performance, but only for certain presentation schedules. The present study was designed to assess performance in a signal detection task under two conditions of information feedback. In the I-condition, S was told on each trial whether his detection response was correct or incorrect; in the Ī-condition S was given no feedback regarding the correctness of his response. The task involved a 2-response, forced-choice auditory detection problem. On each trial 2 temporal intervals were defined and S was required to report which interval he believed contained the signal; i.e., in one interval a tone burst in a background of white noise was presented, while the other interval contained only white noise. A trial will be denoted as s1 or s2, depending on whether the signal was embedded in the 1st or 2nd interval; the S's response will be denoted A1 or A2 to indicate which interval he reported contained the signal. The probability of an s1 trial will be denoted as y. In this study two values of y were used (.50 and .75) and, as indicated above, two conditions of information feedback. Thus there were 4 experimental conditions (50Ī, 50I, 75Ī, 75I); each S was run under all 4 conditions. Method: Gaussian noise was presented binaurally in S's headphones throughout a test session and the signal was a 1000-cps sinusoidal tone; the tone was presented for 100 msec., including equal fall and rise times of 20 msec. The ratio of signal energy to noise power in a unit bandwidth was 2.9, and was constant throughout the study. The S was seated before a stimulus display board. On each trial a red warning light was flashed for 100 msec. Two amber lights then came on successively, each for 1 sec.; these lights defined the 2 observation intervals.
The onset of the signal occurred 500 msec. after the onset of one of the observation intervals. After the second amber light went off, S indicated his response by pressing 1 of 2 wand switches under cards reading "1st interval" and "2nd interval." In the I-condition a green light flashed on above the correct response key after S's response; the green light was omitted in the Ī-condition. Each trial lasted 6 sec. The Ss were 12 male college students with normal hearing. They were run for two practice sessions followed by 20 test sessions. Test sessions were run on consecutive days, 350 trials/day. Each day S ran on 1 of the 4 experimental conditions; in successive 4-day blocks S ran one day on each of the 4 experimental conditions in a random order. Thus, over 20 days each of the experimental conditions was repeated 5 times.
Ongoing behavior predicts perceptual report of interval duration
Gouvêa, Thiago S.; Monteiro, Tiago; Soares, Sofia; Atallah, Bassam V.; Paton, Joseph J.
2014-01-01
The ability to estimate the passage of time is essential for adaptive behavior in complex environments. Yet, it is not known how the brain encodes time over the durations necessary to explain animal behavior. Under temporally structured reinforcement schedules, animals tend to develop temporally structured behavior, and interval timing has been suggested to be accomplished by learning sequences of behavioral states. If this is true, trial-to-trial fluctuations in behavioral sequences should be predictive of fluctuations in time estimation. We trained rodents in a duration categorization task while continuously monitoring their behavior with a high-speed camera. Animals developed highly reproducible behavioral sequences during the interval being timed. Moreover, those sequences were often predictive of the perceptual report from early in the trial, providing support to the idea that animals may use learned behavioral patterns to estimate the duration of time intervals. To better resolve the issue, we propose that continuous and simultaneous behavioral and neural monitoring will enable identification of neural activity related to time perception that is not explained by ongoing behavior. PMID:24672473
Chen, Tina H; Wu, Steve W; Welge, Jeffrey A; Dixon, Stephan G; Shahana, Nasrin; Huddleston, David A; Sarvis, Adam R; Sallee, Floyd R; Gilbert, Donald L
2014-12-01
Clinical trials in children with attention-deficit hyperactivity disorder (ADHD) show variability in behavioral responses to the selective norepinephrine reuptake inhibitor atomoxetine. The objective of this study was to determine whether transcranial magnetic stimulation-evoked short interval cortical inhibition might be a biomarker predicting, or correlating with, clinical atomoxetine response. At baseline and after 4 weeks of atomoxetine treatment in 7- to 12-year-old children with ADHD, transcranial magnetic stimulation short interval cortical inhibition was measured, blinded to clinical improvement. Primary analysis was by multivariate analysis of covariance. Baseline short interval cortical inhibition did not predict clinical responses. However, paradoxically, after 4 weeks of atomoxetine, mean short interval cortical inhibition was reduced 31.9% in responders and increased 6.1% in nonresponders (analysis of covariance, t(41) = 2.88; P = .0063). Percentage reductions in short interval cortical inhibition correlated with reductions in the ADHD Rating Scale (r = 0.50; P = .0005). In children ages 7 to 12 years with ADHD treated with atomoxetine, improvements in clinical symptoms are correlated with reductions in motor cortex short interval cortical inhibition. © The Author(s) 2014.
A model of interval timing by neural integration.
Simen, Patrick; Balci, Fuat; de Souza, Laura; Cohen, Jonathan D; Holmes, Philip
2011-06-22
We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes, that correlations among them can be largely cancelled by balancing excitation and inhibition, that neural populations can act as integrators, and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys, and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule's predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior.
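The core mechanism can be sketched in a few lines (parameters are illustrative, not the paper's): a noisy accumulator whose drift is learned as threshold/T hits the threshold at the target interval T on average, and with Poisson-like noise whose variance scales with the drift, the hit-time spread grows in proportion to T, the scale invariance the model predicts.

```python
import random

def timed_response(T, theta=1.0, c=0.01, dt=0.001):
    """First-passage time of a noisy integrator timing model: the drift
    A = theta/T is 'learned' so the mean hit time equals the target T;
    noise variance proportional to the drift (Poisson-like spiking) makes
    the hit-time standard deviation proportional to T."""
    drift = theta / T
    sigma = (c * drift) ** 0.5
    x, t = 0.0, 0.0
    while x < theta:
        x += drift * dt + sigma * (dt ** 0.5) * random.gauss(0.0, 1.0)
        t += dt
    return t
```

Because both the mean and standard deviation of the hit time scale with T, response-time distributions for different target intervals superimpose when time is rescaled by T.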
A Rescorla-Wagner drift-diffusion model of conditioning and timing
Alonso, Eduardo
2017-01-01
Computational models of classical conditioning have made significant contributions to the theoretic understanding of associative learning, yet they still struggle when the temporal aspects of conditioning are taken into account. Interval timing models have contributed a rich variety of time representations and provided accurate predictions for the timing of responses, but they usually have little to say about associative learning. In this article we present a unified model of conditioning and timing that is based on the influential Rescorla-Wagner conditioning model and the more recently developed Timing Drift-Diffusion model. We test the model by simulating 10 experimental phenomena and show that it can provide an adequate account for 8, and a partial account for the other 2. We argue that the model can account for more phenomena in the chosen set than other models of similar scope: CSC-TD, MS-TD, Learning to Time and Modular Theory. A comparison and analysis of the mechanisms in these models is provided, with a focus on the types of time representation and associative learning rule used. PMID:29095819
Sad facial cues inhibit temporal attention: evidence from an event-related potential study.
Kong, Xianxian; Chen, Xiaoqiang; Tan, Bo; Zhao, Dandan; Jin, Zhenlan; Li, Ling
2013-06-19
We examined the influence of different emotional cues (happy or sad) on temporal attention (short or long interval) using behavioral as well as event-related potential recordings during a Stroop task. Emotional stimuli cued short and long time intervals, inducing 'sad-short', 'sad-long', 'happy-short', and 'happy-long' conditions. Following the intervals, participants performed a numeric Stroop task. Behavioral results showed the temporal attention effects in the sad-long, happy-long, and happy-short conditions, in which valid cues quickened the reaction times, but not in the sad-short condition. N2 event-related potential components showed sad cues to have decreased activity for short intervals compared with long intervals, whereas happy cues did not. Taken together, these findings provide evidence for different modulation of sad and happy facial cues on temporal attention. Furthermore, sad cues inhibit temporal attention, resulting in longer reaction time and decreased neural activity in the short interval by diverting more attentional resources.
Starks, Elizabeth; Cooper, Ryan; Leavitt, Peter R; Wissel, Björn
2014-04-01
The anticipated impacts of climate change on aquatic biota are difficult to evaluate because of potentially contrasting effects of temperature and hydrology on lake ecosystems, particularly those closed-basin lakes within semiarid regions. To address this shortfall, we quantified decade-scale changes in chemical and biological properties of 20 endorheic lakes in central North America in response to a pronounced transition from a drought to a pluvial period during the early 21st century. Lakes exhibited marked temporal changes in chemical characteristics and formed two discrete clusters corresponding to periods of substantially different effective moisture (as Palmer Drought Severity Index, PDSI). Discriminant function analysis (DFA) explained 90% of variability in fish assemblage composition and showed that fish communities were predicted best by environmental conditions during the arid interval (PDSI <-2). DFA also predicted that lakes could support more fish species during pluvial periods, but their occurrences may be limited by periodic stress due to recurrent droughts and physical barriers to colonization. Zooplankton taxonomic assemblages in fishless lakes were resilient to short-term changes in meteorological conditions, and did not vary between drought and deluge periods. Conversely, zooplankton taxa in fish-populated lakes decreased substantially in biomass during the wet interval, likely due to increased zooplanktivory by fish. The powerful effects of such climatic variability on hydrology and the strong subsequent links to water chemistry and biota indicate that future changes in global climate could result in significant restructuring of aquatic communities. Together these findings suggest that semiarid lakes undergoing temporary climate shifts provide a useful model system for anticipating the effects of global climate change on lake food webs. © 2013 John Wiley & Sons Ltd.
Lovich, Jeffrey E.; Ennen, Joshua R.; Madrak, Sheila V.; Loughran, Caleb L.; Meyer, Katherin P.; Arundel, Terence R.; Bjurlin, Curtis D.
2011-01-01
We studied the long-term response of a cohort of eight female Agassiz’s desert tortoises (Gopherus agassizii) during the first 15 years following a large fire at a wind energy generation facility near Palm Springs, California, USA. The fire burned a significant portion of the study site in 1995. Tortoise activity areas were mapped using minimum convex polygons for a proximate post-fire interval from 1997 to 2000, and a long-term post-fire interval from 2009 to 2010. In addition, we measured the annual reproductive output of eggs each year and monitored the body condition of tortoises over time. One adult female tortoise was killed by the fire and five tortoises bore exposure scars that were not fatal. Despite predictions that tortoises would make the short-distance movements from burned to nearby unburned habitats, most activity areas and their centroids remained in burned areas for the duration of the study. The percentage of activity area burned did not differ significantly between the two monitoring periods. Annual reproductive output and measures of body condition remained statistically similar throughout the monitoring period. Despite changes in plant composition, conditions at this site appeared to be suitable for survival of tortoises following a major fire. High productivity at the site may have buffered tortoises from the adverse impacts of fire if they were not killed outright. Tortoise populations at less productive desert sites may not have adequate resources to sustain normal activity areas, reproductive output, and body conditions following fire.
Lock-and-key mechanisms of cerebellar memory recall based on rebound currents.
Wetmore, Daniel Z; Mukamel, Eran A; Schnitzer, Mark J
2008-10-01
A basic question for theories of learning and memory is whether neuronal plasticity suffices to guide proper memory recall. Alternatively, information processing that is additional to readout of stored memories might occur during recall. We formulate a "lock-and-key" hypothesis regarding cerebellum-dependent motor memory in which successful learning shapes neural activity to match a temporal filter that prevents expression of stored but inappropriate motor responses. Thus, neuronal plasticity by itself is necessary but not sufficient to modify motor behavior. We explored this idea through computational studies of two cerebellar behaviors and examined whether deep cerebellar and vestibular nuclei neurons can filter signals from Purkinje cells that would otherwise drive inappropriate motor responses. In eyeblink conditioning, reflex acquisition requires the conditioned stimulus (CS) to precede the unconditioned stimulus (US) by >100 ms. In our biophysical models of cerebellar nuclei neurons this requirement arises through the phenomenon of postinhibitory rebound depolarization and matches longstanding behavioral data on conditioned reflex timing and reliability. Although CS-US intervals <100 ms may induce Purkinje cell plasticity, cerebellar nuclei neurons drive conditioned responses only if the CS-US training interval was >100 ms. This bound reflects the minimum time for deinactivation of rebound currents such as T-type Ca2+. In vestibulo-ocular reflex adaptation, hyperpolarization-activated currents in vestibular nuclei neurons may underlie analogous dependence of adaptation magnitude on the timing of visual and vestibular stimuli. Thus, the proposed lock-and-key mechanisms link channel kinetics to recall performance and yield specific predictions of how perturbations to rebound depolarization affect motor expression.
Agrawal, Swati; Cerdeira, Ana Sofia; Redman, Christopher; Vatish, Manu
2018-02-01
Preeclampsia is a major cause of morbidity and mortality worldwide. Numerous candidate biomarkers have been proposed for diagnosis and prediction of preeclampsia. Measurement of maternal circulating angiogenesis biomarker as the ratio of sFlt-1 (soluble FMS-like tyrosine kinase-1; an antiangiogenic factor)/PlGF (placental growth factor; an angiogenic factor) reflects the antiangiogenic balance that characterizes incipient or overt preeclampsia. The ratio increases before the onset of the disease and thus may help in predicting preeclampsia. We conducted a meta-analysis to explore the predictive accuracy of sFlt-1/PlGF ratio in preeclampsia. We included 15 studies with 534 cases with preeclampsia and 19 587 controls. The ratio has a pooled sensitivity of 80% (95% confidence interval, 0.68-0.88), specificity of 92% (95% confidence interval, 0.87-0.96), positive likelihood ratio of 10.5 (95% confidence interval, 6.2-18.0), and a negative likelihood ratio of 0.22 (95% confidence interval, 0.13-0.35) in predicting preeclampsia in both high- and low-risk patients. Most of the studies have not made a distinction between early- and late-onset disease, and therefore, the analysis for it could not be done. It can prove to be a valuable screening tool for preeclampsia and may also help in decision-making, treatment stratification, and better resource allocation. © 2017 American Heart Association, Inc.
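The pooled likelihood ratios follow directly from the pooled sensitivity and specificity (a point-estimate check only; the paper's figures come from meta-analytic pooling, hence the small discrepancy in the positive likelihood ratio):

```python
def likelihood_ratios(sensitivity, specificity):
    """Standard diagnostic likelihood ratios from sensitivity/specificity."""
    lr_pos = sensitivity / (1.0 - specificity)   # how much a positive test raises the odds
    lr_neg = (1.0 - sensitivity) / specificity   # how much a negative test lowers the odds
    return lr_pos, lr_neg

lr_pos, lr_neg = likelihood_ratios(0.80, 0.92)  # → 10.0, ≈0.22
```

With these values, a positive sFlt-1/PlGF result multiplies the pre-test odds of preeclampsia roughly tenfold, while a negative result cuts them to about a fifth.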
Luck, Camilla C; Lipp, Ottmar V
2016-02-01
Electrodermal activity in studies of human fear conditioning is often scored by distinguishing two electrodermal responses occurring during the conditional stimulus-unconditional stimulus interval. These responses, known as first interval responding (FIR) and second interval responding (SIR), are reported to be differentially sensitive to the effects of orienting and anticipation. Recently, the FIR/SIR scoring convention has been questioned, with some arguing in favor of scoring a single response within the entire conditional stimulus-unconditional stimulus interval (entire interval responding, EIR). EIR can be advantageous in practical terms but may fail to capture experimental effects when manipulations produce dissociations between orienting and anticipation. As an illustration, we rescored the data reported by Luck and Lipp (2015b) using both FIR/SIR and EIR scoring techniques and provide evidence that the EIR scoring technique fails to detect the effects of instructed extinction, an experimental manipulation which produces a dissociation between orienting and anticipation. Thus, using a technique that scores electrodermal response indices of fear conditioning in multiple latency windows is recommended. Copyright © 2015 Elsevier B.V. All rights reserved.
Retention interval affects visual short-term memory encoding.
Bankó, Eva M; Vidnyánszky, Zoltán
2010-03-01
Humans can efficiently store fine-detailed facial emotional information in visual short-term memory for several seconds. However, an unresolved question is whether the same neural mechanisms underlie high-fidelity short-term memory for emotional expressions at different retention intervals. Here we show that retention interval affects the neural processes of short-term memory encoding using a delayed facial emotion discrimination task. The early sensory P100 component of the event-related potentials (ERP) was larger in the 1-s interstimulus interval (ISI) condition than in the 6-s ISI condition, whereas the face-specific N170 component was larger in the longer ISI condition. Furthermore, the memory-related late P3b component of the ERP responses was also modulated by retention interval: it was reduced in the 1-s ISI as compared with the 6-s condition. The present findings cannot be explained based on differences in sensory processing demands or overall task difficulty because there was no difference in the stimulus information and subjects' performance between the two different ISI conditions. These results reveal that encoding processes underlying high-precision short-term memory for facial emotional expressions are modulated depending on whether information has to be stored for one or for several seconds.
NASA Astrophysics Data System (ADS)
Raju, P. V. S.; Potty, Jayaraman; Mohanty, U. C.
2011-09-01
Comprehensive sensitivity analyses on physical parameterization schemes of Weather Research Forecast (WRF-ARW core) model have been carried out for the prediction of track and intensity of tropical cyclones by taking the example of cyclone Nargis, which formed over the Bay of Bengal and hit Myanmar on 02 May 2008, causing widespread damages in terms of human and economic losses. The model performances are also evaluated with different initial conditions of 12 h intervals starting from the cyclogenesis to the near landfall time. The initial and boundary conditions for all the model simulations are drawn from the global operational analysis and forecast products of National Center for Environmental Prediction (NCEP-GFS) available for the public at 1° lon/lat resolution. The results of the sensitivity analyses indicate that a combination of non-local parabolic type exchange coefficient PBL scheme of Yonsei University (YSU), deep and shallow convection scheme with mass flux approach for cumulus parameterization (Kain-Fritsch), and NCEP operational cloud microphysics scheme with diagnostic mixed phase processes (Ferrier), predicts better track and intensity as compared against the Joint Typhoon Warning Center (JTWC) estimates. Further, the final choice of the physical parameterization schemes selected from the above sensitivity experiments is used for model integration with different initial conditions. The results reveal that the cyclone track, intensity and time of landfall are well simulated by the model with an average intensity error of about 8 hPa, maximum wind error of 12 m s-1 and track error of 77 km. The simulations also show that the landfall time error and intensity error are decreasing with delayed initial condition, suggesting that the model forecast is more dependable when the cyclone approaches the coast. The distribution and intensity of rainfall are also well simulated by the model and comparable with the TRMM estimates.
NASA Technical Reports Server (NTRS)
Bahler, D. D.; Owen, H. A., Jr.; Wilson, T. G.
1978-01-01
A model describing the turning-on period of a power switching transistor in an energy storage voltage step-up converter is presented. Comparisons between an experimental layout and the circuit model during the turning-on interval demonstrate the ability of the model to closely predict the effects of circuit topology on the performance of the converter. A phenomenon of particular importance that is observed in the experimental circuits and is predicted by the model is the deleterious feedback effect of the parasitic emitter lead inductance on the base current waveform during the turning-on interval.
Local rollback for fault-tolerance in parallel computing systems
Blumrich, Matthias A [Yorktown Heights, NY; Chen, Dong [Yorktown Heights, NY; Gara, Alan [Yorktown Heights, NY; Giampapa, Mark E [Yorktown Heights, NY; Heidelberger, Philip [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Steinmacher-Burow, Burkhard [Boeblingen, DE; Sugavanam, Krishnan [Yorktown Heights, NY
2012-01-24
A control logic device performs a local rollback in a parallel super computing system. The super computing system includes at least one cache memory device. The control logic device determines a local rollback interval. The control logic device runs at least one instruction in the local rollback interval. The control logic device evaluates whether an unrecoverable condition occurs while running the at least one instruction during the local rollback interval. The control logic device checks whether an error occurs during the local rollback. The control logic device restarts the local rollback interval if the error occurs and the unrecoverable condition does not occur during the local rollback interval.
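In outline, the claimed control flow amounts to checkpoint, run, check, and restore. A hypothetical sketch (all names are invented for illustration; the patent describes hardware control logic, not Python):

```python
def run_with_local_rollback(instructions, interval, execute, checkpoint,
                            restore, error_occurred, unrecoverable):
    """Run instructions in local-rollback intervals: on a recoverable
    error, restore the cached state and restart the interval locally."""
    i = 0
    while i < len(instructions):
        snapshot = checkpoint()                  # state at interval start
        for instr in instructions[i:i + interval]:
            execute(instr)
        if error_occurred():
            if unrecoverable():
                raise RuntimeError("escalate to global recovery")
            restore(snapshot)                    # local rollback: retry interval
        else:
            i += interval                        # commit and move on
```

An unrecoverable condition cannot be repaired from the local cache snapshot, which is why the sketch escalates instead of retrying; the value of the scheme is that transient errors never force a global, cluster-wide rollback.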
Intermediate-term earthquake prediction
Knopoff, L.
1990-01-01
The problems in predicting earthquakes have been attacked by phenomenological methods from pre-historic times to the present. The associations of presumed precursors with large earthquakes often have been remarked upon. The difficulty in identifying whether such correlations are due to some chance coincidence or are real precursors is that usually one notes the associations only in the relatively short time intervals before the large events. Only rarely, if ever, is notice taken of whether the presumed precursor is to be found in the rather long intervals that follow large earthquakes, or in fact is absent in these post-earthquake intervals. If there are enough examples, the presumed correlation fails as a precursor in the former case, while in the latter case the precursor would be verified. Unfortunately, the observer is usually not concerned with the 'uninteresting' intervals that have no large earthquakes.
Tarone, Aaron M; Foran, David R
2008-07-01
Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.
NASA Astrophysics Data System (ADS)
Ouyang, Qin; Chen, Quansheng; Zhao, Jiewen
2016-02-01
The approach presented herein reports the application of near infrared (NIR) spectroscopy, in contrast with a human sensory panel, as a tool for estimating Chinese rice wine quality; concretely, to achieve the prediction of the overall sensory scores assigned by the trained sensory panel. Back propagation artificial neural network (BPANN) combined with the adaptive boosting (AdaBoost) algorithm, namely BP-AdaBoost, was proposed as a novel nonlinear modeling algorithm. First, the optimal spectral intervals were selected by synergy interval partial least squares (Si-PLS). Then, a BP-AdaBoost model based on the optimal spectral intervals was established, called the Si-BP-AdaBoost model. These models were optimized by cross validation, and the performance of each final model was evaluated according to the correlation coefficient (Rp) and root mean square error of prediction (RMSEP) in the prediction set. Si-BP-AdaBoost showed excellent performance in comparison with the other models. The best Si-BP-AdaBoost model achieved Rp = 0.9180 and RMSEP = 2.23 in the prediction set. It was concluded that NIR spectroscopy combined with Si-BP-AdaBoost is an appropriate method for predicting the sensory quality of Chinese rice wine.
Robust mobility in human-populated environments
NASA Astrophysics Data System (ADS)
Gonzalez, Juan Pablo; Phillips, Mike; Neuman, Brad; Likhachev, Max
2012-06-01
Creating robots that can help humans in a variety of tasks requires robust mobility and the ability to safely navigate among moving obstacles. This paper presents an overview of recent research in the Robotics Collaborative Technology Alliance (RCTA) that addresses many of the core requirements for robust mobility in human-populated environments. Safe Interval Path Planning (SIPP) allows for very fast planning in dynamic environments when planning time-minimal trajectories. Generalized Safe Interval Path Planning extends this concept to trajectories that minimize arbitrary cost functions. Finally, the generalized PPCP algorithm is used to generate plans that reason about the uncertainty in the predicted trajectories of moving obstacles and try to actively disambiguate the intentions of humans whenever necessary. We show how these approaches consider moving obstacles and temporal constraints and produce high-fidelity paths. Experiments in simulated environments show the performance of the algorithms under different controlled conditions, and experiments on physical mobile robots interacting with humans show how the algorithms perform under the uncertainties of the real world.
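The core data structure behind SIPP is the set of "safe intervals" per configuration: maximal time windows during which a cell is not occupied by any predicted obstacle trajectory. As a rough illustration (a minimal sketch, not the RCTA implementation), the intervals for one cell can be derived from its predicted occupancy times:

```python
def safe_intervals(occupied, horizon):
    """Compute safe intervals for one cell from (start, end) occupancy
    intervals of predicted obstacle trajectories, up to a time horizon."""
    intervals, t = [], 0
    for start, end in sorted(occupied):
        if start > t:                # gap before this occupancy is safe
            intervals.append((t, start))
        t = max(t, end)              # skip past (possibly overlapping) occupancy
    if t < horizon:
        intervals.append((t, horizon))
    return intervals

# A cell blocked during [2, 4) and [7, 8) over a 10 s horizon:
print(safe_intervals([(2, 4), (7, 8)], 10))  # [(0, 2), (4, 7), (8, 10)]
```

The planner then searches states of the form (configuration, safe interval) rather than (configuration, timestep), which is what makes planning fast in dynamic environments.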
Frequency of depression in type 2 diabetes mellitus and an analysis of predictive factors.
Arshad, Abdul Rehman; Alvi, Kamran Yousaf
2016-04-01
To determine frequency of depression in patients with diabetes mellitus type 2 and to identify predictive factors. The observational study was carried out at 1 Mountain Medical Battalion, Bagh, Azad Kashmir, Pakistan, from June 2013 to May 2014, and comprised type 2 diabetic patients who were not using anti-depressants and did not have history of other psychiatric illnesses. Demographic data, duration of diabetes, presence of hypertension and type of treatment were recorded and body mass index was calculated. Patient Health Questionnaire-9, translated into Urdu, was administered during face-to-face interviews. Scores >5 indicated depression, which was classified into different grades of severity using standard cut-off values. Of the 133 patients, 51(38.35%) were depressed. Depression was mild in 34(26%), moderate in 12(9.6%), moderately severe in 4(2.9%) and severe in 1(0.7%) patient. On univariate binary logistic regression, female gender (odds ratio = 3.07; 95% confidence interval = 1.43, 6.59), lesser education (odds ratio = 0.90; 95% confidence interval = 0.84, 0.97), shorter duration of diabetes (odds ratio = 0.87; 95% confidence interval = 0.80, 0.96) and higher body mass index (odds ratio = 1.41; 95% confidence interval = 1.05, 1.25) were significantly associated with depression. Only shorter duration of diabetes (odds ratio = 0.90; 95% confidence interval = 0.82, 0.99) remained significant after adjustment for confounders. Age, level of education, glycaemic control and type of treatment did not predict depression. A significant proportion of type 2 diabetics were depressed. Shorter duration of diabetes reliably predicted depression in these patients.
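For readers unfamiliar with how such intervals arise: a Wald 95% confidence interval for an odds ratio is obtained by exponentiating the logistic-regression coefficient plus or minus 1.96 standard errors. A minimal sketch (the coefficient and standard error below are back-calculated for illustration, not taken from the study):

```python
import math

def or_ci(beta, se, z=1.96):
    """Odds ratio and Wald 95% CI from a logistic-regression
    coefficient (beta, the log odds ratio) and its standard error."""
    return math.exp(beta), (math.exp(beta - z * se), math.exp(beta + z * se))

# Illustrative coefficient for the female-gender predictor:
odds_ratio, (lo, hi) = or_ci(beta=1.1217, se=0.3898)
print(round(odds_ratio, 2), round(lo, 2), round(hi, 2))  # 3.07 1.43 6.59
```

An interval that excludes 1 (as here) corresponds to a statistically significant association at the 5% level.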
Pollard, C E; Valentin, J-P; Hammond, T G
2008-08-01
Drug-induced prolongation of the QT interval is having a significant impact on the ability of the pharmaceutical industry to develop new drugs. The development implications for a compound causing a significant effect in the 'Thorough QT/QTc Study' -- as defined in the clinical regulatory guidance (ICH E14) -- are substantial. In view of this, and the fact that QT interval prolongation is linked to direct inhibition of the hERG channel, in the early stages of drug discovery the focus is on testing for and screening out hERG activity. This has led to understanding of how to produce low potency hERG blockers whilst retaining desirable properties. Despite this, a number of factors mean that when an integrated risk assessment is generated towards the end of the discovery phase (by conducting at least an in vivo QT assessment) a QT interval prolongation risk is still often apparent; inhibition of hERG channel trafficking and partitioning into cardiac tissue are just two confounding factors. However, emerging information suggests that hERG safety margins have high predictive value and that when hERG and in vivo non-clinical data are combined, their predictive value to man, whilst not perfect, is >80%. Although understanding the anomalies is important and is being addressed, of greater importance is developing a better understanding of torsade de pointes (TdP), with the aim of being able to predict TdP rather than using an imperfect surrogate marker (QT interval prolongation). Without an understanding of how to predict TdP risk, high-benefit drugs for serious indications may never be marketed.
Sleep problems and disability retirement: a register-based follow-up study.
Lallukka, Tea; Haaramo, Peija; Lahelma, Eero; Rahkonen, Ossi
2011-04-15
Among aging employees, sleep problems are prevalent, but they may have serious consequences that are poorly understood. This study examined whether sleep problems are associated with subsequent disability retirement. Baseline questionnaire survey data collected in 2000-2002 among employees of the city of Helsinki, Finland, were linked with register data on disability retirement diagnoses by the end of 2008 (n = 457) for those with written consent for such linkages (74%; N = 5,986). Sleep problems were measured by the Jenkins Sleep Questionnaire. Cox regression analysis was used to calculate hazard ratios and 95% confidence intervals for disability retirement. Gender- and age-adjusted frequent sleep problems predicted disability retirement due to all causes (hazard ratio (HR) = 3.22, 95% confidence interval (CI): 2.26, 4.60), mental disorders (HR = 9.06, 95% CI: 3.27, 25.10), and musculoskeletal disorders (HR = 3.27, 95% CI: 1.91, 5.61). Adjustments for confounders, that is, baseline sociodemographic factors, work arrangements, psychosocial working conditions, and sleep duration, had negligible effects on these associations, whereas baseline physical working conditions and health attenuated the associations. Health behaviors and obesity did not mediate the examined associations. In conclusion, sleep problems are associated with subsequent disability retirement. To prevent early exit from work, sleep problems among aging employees need to be addressed.
Effect of work and recovery durations on W' reconstitution during intermittent exercise.
Skiba, Philip F; Jackman, Sarah; Clarke, David; Vanhatalo, Anni; Jones, Andrew M
2014-07-01
We recently presented an integrating model of the curvature constant of the hyperbolic power-time relationship (W') that permits the calculation of the W' balance (W'BAL) remaining at any time during intermittent exercise. Although a relationship between recovery power and the rate of W' recovery was demonstrated, the effect of the length of work or recovery intervals remains unclear. After determining VO2max, critical power, and W', 11 subjects completed six separate exercise tests on a cycle ergometer on different days, and in random order. Tests consisted of a period of intermittent severe-intensity exercise until the subject depleted approximately 50% of their predicted W'BAL, followed by a constant work rate (CWR) exercise bout until exhaustion. Work rates were kept constant between trials; however, either work or recovery durations during intermittent exercise were varied. The actual W' measured during the CWR (W'ACT) was compared with the amount of W' predicted to be available by the W'BAL model. Although some differences between W'BAL and W'ACT were noted, these amounted to only -1.6 ± 1.1 kJ when averaged across all conditions. The W'ACT was linearly correlated with the difference between VO2 at the start of CWR and VO2max (r = 0.79, P < 0.01). The W'BAL model provided a generally robust prediction of CWR W'. There may exist a physiological optimum formulation of work and recovery intervals such that baseline VO2 can be minimized, leading to an enhancement of subsequent exercise tolerance. These results may have important implications for athletic training and racing.
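The W'BAL model referenced here tracks W' expenditure above critical power (CP) and lets it recover exponentially below CP, with a time constant fitted to the mean power deficit during recovery (tau = 546·e^(−0.01·DCP) + 316 in the 2012 formulation). A discrete-time sketch, assuming 1 Hz power data; the constants come from the published fit, and this is an illustration rather than the authors' exact code:

```python
import math

def w_prime_balance(power, cp, w_prime, dt=1.0):
    """Sketch of the Skiba W'BAL integral model: joules spent above CP
    deplete W'; each past expenditure decays exponentially with tau."""
    below = [cp - p for p in power if p < cp]
    d_cp = sum(below) / len(below) if below else 0.0   # mean recovery deficit
    tau = 546.0 * math.exp(-0.01 * d_cp) + 316.0       # empirical fit (s)
    balance, spent = [], 0.0
    for p in power:
        # decay all past expenditure by e^(-dt/tau), then add new expenditure
        spent = spent * math.exp(-dt / tau) + max(0.0, p - cp) * dt
        balance.append(w_prime - spent)
    return balance

# 120 s at 300 W (above CP = 250 W), then 120 s recovery at 200 W:
trace = w_prime_balance([300] * 120 + [200] * 120, cp=250, w_prime=20000)
```

The balance falls during the work bout and partially reconstitutes during recovery; exhaustion is predicted when the balance reaches zero.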
Work-related stress, education and work ability among hospital nurses.
Golubic, Rajna; Milosevic, Milan; Knezevic, Bojana; Mustajbegovic, Jadranka
2009-10-01
This paper is a report of a study conducted to determine which occupational stressors are present in nurses' working environment; to describe and compare occupational stress between two educational groups of nurses; to estimate which stressors and to what extent predict nurses' work ability; and to determine if educational level predicts nurses' work ability. Nurses' occupational stress adversely affects their health and nursing quality. Higher educational level has been shown to have positive effects on the preservation of good work ability. A cross-sectional study was conducted in 2006-2007. Questionnaires were distributed to a convenience sample of 1392 (59%) nurses employed at four university hospitals in Croatia (n = 2364). The response rate was 78% (n = 1086). Data were collected using the Occupational Stress Assessment Questionnaire and Work Ability Index Questionnaire. We identified six major groups of occupational stressors: 'Organization of work and financial issues', 'Public criticism', 'Hazards at workplace', 'Interpersonal conflicts at workplace', 'Shift work' and 'Professional and intellectual demands'. Nurses with secondary school qualifications perceived 'Hazards at workplace' and 'Shift work' as statistically significantly more stressful than nurses with a college degree. Predictors statistically significantly related to low work ability were: 'Organization of work and financial issues' (odds ratio = 1.69, 95% confidence interval 1.22-2.36), lower educational level (odds ratio = 1.69, 95% confidence interval 1.22-2.36) and older age (odds ratio = 1.07, 95% confidence interval 1.05-1.09). Hospital managers should develop strategies to address and improve the quality of working conditions for nurses in Croatian hospitals. Providing educational and career prospects can contribute to decreasing nurses' occupational stress levels, thus maintaining their work ability.
Mooij, Wolf M.; Bennetts, Robert E.; Kitchens, Wiley M.; DeAngelis, Donald L.
2002-01-01
The paper aims at exploring the viability of the Florida snail kite population under various drought regimes in its wetland habitat. The population dynamics of snail kites are strongly linked with the hydrology of the system due to the dependence of this bird species on one exclusive prey species, the apple snail, which is negatively affected by a drying out of habitat. Based on empirical evidence, it has been hypothesised that the viability of the snail kite population critically depends not only on the time interval between droughts, but also on the spatial extent of these droughts. A system wide drought is likely to result in reduced reproduction and increased mortality, whereas the birds can respond to local droughts by moving to sites where conditions are still favourable. This paper explores the implications of this hypothesis by means of a spatially-explicit individual-based model. The specific aim of the model is to study in a factorial design the dynamics of the kite population in relation to two scale parameters, the temporal interval between droughts and the spatial correlation between droughts. In the model high drought frequencies led to reduced numbers of kites. Also, habitat degradation due to prolonged periods of inundation led to lower predicted numbers of kites. Another main result was that when the spatial correlation between droughts was low, the model showed little variability in the predicted numbers of kites. But when droughts occurred mostly on a system wide level, environmental stochasticity strongly increased the stochasticity in kite numbers and in the worst case the viability of the kite population was seriously threatened.
NASA Astrophysics Data System (ADS)
Riesselman, C. R.; Taylor-Silva, B.; Patterson, M. O.
2017-12-01
The Late Pliocene is the most recent interval in Earth's history to sustain global temperatures within the range of warming predicted for the 21st century. Published global reconstructions and climate models find an average +2° C summer SST anomaly relative to modern during the 3.3-3.0 Ma PRISM interval, when atmospheric CO2 concentrations last reached 400 ppm. Here, we present a new diatom-based reconstruction of Pliocene interglacial sea surface conditions from IODP Site U1361, on the East Antarctic continental rise. U1361 biogenic silica concentrations document the alternation of diatom-rich and diatom-poor lithologies; we interpret 8 diatom-rich mudstones within this sequence to record interglacial periods between 3.8 and 2.8 Ma. We find that open-ocean conditions in the mid-Pliocene became increasingly influenced by sea ice from 3.6-3.2 Ma, prior to the onset of Northern Hemisphere glaciation. This cooling trend was interrupted by a temporary southward migration of the Antarctic Polar Front, bathing U1361 in warmer subantarctic waters during a single interglacial, marine isotope stage KM3 (3.17-3.15 Ma), that corresponds to a maximum in summer insolation at 65°S. Following this interval of transient warmth, interglacial periods became progressively cooler starting at 3 Ma, coinciding with a transition from obliquity to precession as the dominant orbital driver of Antarctic ice sheet fluctuations. Building on the identification of a single outlier interglacial within the PRISM interval, we have revisited older reconstructions to explore the response of the Southern Ocean/cryosphere system to peak late Pliocene warmth. By applying a modern chronostratigraphic framework to those low-resolution "mean interglacial" records, we identify the same frontal migration in 4 other cores in the Pacific sector of the Southern Ocean, documenting a major migration of the polar front during a key interval of warm climate. 
These new results suggest that increased summer insolation during KM3, combined with atmospheric CO2 similar to modern concentrations, provided sufficient forcing to overcome bathymetric constraints on polar frontal position, pushing warm subantarctic waters into proximity with vulnerable portions of Antarctica's marine ice sheets.
Stubbs, D A; Cohen, S L
1972-11-01
Pigeons performed on a second-order schedule in which fixed-interval components were maintained under a variable-interval schedule. Completion of each fixed-interval component resulted in a brief-stimulus presentation and/or food. The relation of the brief stimulus and food was varied across conditions. Under some conditions, the brief stimulus was never paired with food. Under other conditions, the brief stimulus was paired with food; three different pairing procedures were used: (a) a response produced the simultaneous onset of the stimulus and food; (b) a response produced the stimulus before food with the stimulus remaining on during food presentation; (c) a response produced the stimulus and the offset of the stimulus was simultaneous with the onset of the food cycle. The various pairing and nonpairing operations all produced similar effects on performance. Under all conditions, response rates were positively accelerated within fixed-interval components. Total response rates and Index of Curvature measures were similar across conditions. In one condition, a blackout was paired with food; with this different stimulus in effect, less curvature resulted. The results suggest that pairing of a stimulus is not a necessary condition for within-component patterning under some second-order schedules.
Vleugels, Jasper L A; Dijkgraaf, Marcel G W; Hazewinkel, Yark; Wanders, Linda K; Fockens, Paul; Dekker, Evelien
2018-05-01
Real-time differentiation of diminutive polyps (1-5 mm) during endoscopy could replace histopathology analysis. According to guidelines, implementation of optical diagnosis into routine practice would require it to identify rectosigmoid neoplastic lesions with a negative predictive value (NPV) of more than 90%, using histologic findings as a reference, and agreement with histology-based surveillance intervals for more than 90% of cases. We performed a prospective study with 39 endoscopists accredited to perform colonoscopies on participants with positive results from fecal immunochemical tests in the Bowel Cancer Screening Program at 13 centers in the Netherlands. Endoscopists were trained in optical diagnosis using a validated module (Workgroup serrAted polypS and Polyposis). After meeting predefined performance thresholds in the training program, the endoscopists started a 1-year program (continuation phase) in which they performed narrow band imaging analyses during colonoscopies of participants in the screening program and predicted histological findings with confidence levels. The endoscopists were randomly assigned to groups that received feedback or no feedback on the accuracy of their predictions. Primary outcome measures were endoscopists' abilities to identify rectosigmoid neoplastic lesions (using histology as a reference) with NPVs of 90% or more, and selecting surveillance intervals that agreed with those determined by histology for at least 90% of cases. Of 39 endoscopists initially trained, 27 (69%) completed the training program. During the continuation phase, these 27 endoscopists performed 3144 colonoscopies in which 4504 diminutive polyps were removed. The endoscopists identified neoplastic lesions with a pooled NPV of 90.8% (95% confidence interval 88.6-92.6); their proposed surveillance intervals agreed with those determined by histologic analysis for 95.4% of cases (95% confidence interval 94.0-96.6). 
Findings did not differ between the group that did vs did not receive feedback. Sixteen endoscopists (59%) identified rectosigmoid neoplastic lesions with NPVs greater than 90% and selected surveillance intervals in agreement with those determined from histology for more than 90% of patients. In a prospective study following a validated training module, we found that a selected group of endoscopists identified rectosigmoid neoplastic lesions with pooled NPVs greater than 90% and accurately selected surveillance intervals for more than 90% of patients over the course of 1 year. Providing regular interim feedback on the accuracy of neoplastic lesion prediction and surveillance interval selection did not lead to differences in those endpoints. Monitoring is suggested, as individual performance varied. ClinicalTrials.gov no: NCT02516748; Netherlands Trial Register: NTR4635. Copyright © 2018 AGA Institute. Published by Elsevier Inc. All rights reserved.
Extreme ecological response of a seabird community to unprecedented sea ice cover.
Barbraud, Christophe; Delord, Karine; Weimerskirch, Henri
2015-05-01
Climate change has been predicted to reduce Antarctic sea ice but, instead, sea ice surrounding Antarctica has expanded over the past 30 years, albeit with contrasted regional changes. Here we report a recent extreme event in sea ice conditions in East Antarctica and investigate its consequences on a seabird community. In early 2014, the Dumont d'Urville Sea experienced the highest magnitude sea ice cover (76.8%) event on record (1982-2013: range 11.3-65.3%; mean±95% confidence interval: 27.7% (23.1-32.2%)). Catastrophic effects were detected in the breeding output of all sympatric seabird species, with a total failure for two species. These results provide a new view crucial to predictive models of species abundance and distribution as to how extreme sea ice events might impact an entire community of top predators in polar marine ecosystems in a context of expanding sea ice in eastern Antarctica.
Hackenberg, T D; Hineline, P N
1992-01-01
Pigeons chose between two schedules of food presentation, a fixed-interval schedule and a progressive-interval schedule that began at 0 s and increased by 20 s with each food delivery provided by that schedule. Choosing one schedule disabled the alternate schedule and stimuli until the requirements of the chosen schedule were satisfied, at which point both schedules were again made available. Fixed-interval duration remained constant within individual sessions but varied across conditions. Under reset conditions, completing the fixed-interval schedule not only produced food but also reset the progressive interval to its minimum. Blocks of sessions under the reset procedure were interspersed with sessions under a no-reset procedure, in which the progressive schedule value increased independent of fixed-interval choices. Median points of switching from the progressive to the fixed schedule varied systematically with fixed-interval value, and were consistently lower during reset than during no-reset conditions. Under the latter, each subject's choices of the progressive-interval schedule persisted beyond the point at which its requirements equaled those of the fixed-interval schedule at all but the highest fixed-interval value. Under the reset procedure, switching occurred at or prior to that equality point. These results qualitatively confirm molar analyses of schedule preference and some versions of optimality theory, but they are more adequately characterized by a model of schedule preference based on the cumulated values of multiple reinforcers, weighted in inverse proportion to the delay between the choice and each successive reinforcer. PMID:1548449
The right time to learn: mechanisms and optimization of spaced learning
Smolen, Paul; Zhang, Yili; Byrne, John H.
2016-01-01
For many types of learning, spaced training, which involves repeated long inter-trial intervals, leads to more robust memory formation than does massed training, which involves short or no intervals. Several cognitive theories have been proposed to explain this superiority, but only recently have data begun to delineate the underlying cellular and molecular mechanisms of spaced training, and we review these theories and data here. Computational models of the implicated signalling cascades have predicted that spaced training with irregular inter-trial intervals can enhance learning. This strategy of using models to predict optimal spaced training protocols, combined with pharmacotherapy, suggests novel ways to rescue impaired synaptic plasticity and learning. PMID:26806627
Methods for evaluating the predictive accuracy of structural dynamic models
NASA Technical Reports Server (NTRS)
Hasselman, Timothy K.; Chrostowski, Jon D.
1991-01-01
Modeling uncertainty is defined in terms of the difference between predicted and measured eigenvalues and eigenvectors. Data compiled from 22 sets of analysis/test results were used to create statistical databases for large truss-type space structures and both pretest and posttest models of conventional satellite-type space structures. Modeling uncertainty is propagated through the model to produce intervals of uncertainty on frequency response functions, both amplitude and phase. This methodology was used successfully to evaluate the predictive accuracy of several structures, including the NASA CSI Evolutionary Structure tested at Langley Research Center. Test measurements for this structure were, for the most part, within ± one-sigma intervals of predicted accuracy, demonstrating the validity of the methodology and computer code.
Real-time flight conflict detection and release based on Multi-Agent system
NASA Astrophysics Data System (ADS)
Zhang, Yifan; Zhang, Ming; Yu, Jue
2018-01-01
This paper defines two-aircraft, multi-aircraft, and fleet conflict modes and sets up a space-time conflict reservation in three dimensions on the basis of safety intervals and conflict warning times. Real-time flight conflicts are detected by combining the predicted flight trajectories of other aircraft in the same airspace, and rescue resolutions are put forward for each of the three modes. When the flight conflict conditions are met, the conflict situation is determined and the corresponding conflict resolution procedure is entered, so that the conflict is avoided autonomously and the flight safety of the target aircraft is ensured. Lastly, the correctness of the model is verified by a numerical simulation comparison.
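The detection step can be illustrated with a minimal sketch (hypothetical names and thresholds, not the paper's Multi-Agent implementation): two predicted trajectories are in conflict when they violate the safety separation within the warning horizon.

```python
import math

def first_conflict(traj_a, traj_b, safety_dist, warn_time, dt=1.0):
    """Return the first predicted time within the warning horizon at which
    two sampled 3-D trajectories violate the safety separation, else None."""
    for k, (pa, pb) in enumerate(zip(traj_a, traj_b)):
        t = k * dt
        if t > warn_time:
            break
        if math.dist(pa, pb) < safety_dist:
            return t
    return None

# Two aircraft converging head-on along the x-axis at the same altitude,
# each moving 200 m per 1 s step:
a = [(x, 0.0, 3000.0) for x in range(0, 20000, 200)]
b = [(x, 0.0, 3000.0) for x in range(19000, -1000, -200)]
print(first_conflict(a, b, safety_dist=500.0, warn_time=60.0))  # 47.0
```

A real system would test reservation volumes (position plus uncertainty margins) rather than point positions, but the overlap test has the same shape.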
Dynamical genetic programming in XCSF.
Preen, Richard J; Bull, Larry
2013-01-01
A number of representation schemes have been presented for use within learning classifier systems, ranging from binary encodings to artificial neural networks. This paper presents results from an investigation into using a temporally dynamic symbolic representation within the XCSF learning classifier system. In particular, dynamical arithmetic networks are used to represent the traditional condition-action production system rules to solve continuous-valued reinforcement learning problems and to perform symbolic regression, finding competitive performance with traditional genetic programming on a number of composite polynomial tasks. In addition, the network outputs are later repeatedly sampled at varying temporal intervals to perform multistep-ahead predictions of a financial time series.
Mining Recent Temporal Patterns for Event Detection in Multivariate Time Series Data
Batal, Iyad; Fradkin, Dmitriy; Harrison, James; Moerchen, Fabian; Hauskrecht, Milos
2015-01-01
Improving the performance of classifiers using pattern mining techniques has been an active topic of data mining research. In this work we introduce the recent temporal pattern mining framework for finding predictive patterns for monitoring and event detection problems in complex multivariate time series data. This framework first converts time series into time-interval sequences of temporal abstractions. It then constructs more complex temporal patterns backwards in time using temporal operators. We apply our framework to health care data of 13,558 diabetic patients and show its benefits by efficiently finding useful patterns for detecting and diagnosing adverse medical conditions that are associated with diabetes. PMID:25937993
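The framework's first step, converting a raw numeric series into a time-interval sequence of temporal abstractions, can be sketched as follows (the states and thresholds are illustrative, not the paper's clinical definitions):

```python
def abstract_intervals(values, times, low, high):
    """Convert a numeric series into maximal (state, start, end) intervals
    using a simple low/normal/high value abstraction."""
    def state(v):
        return 'low' if v < low else 'high' if v > high else 'normal'
    intervals = []
    for t, v in zip(times, values):
        s = state(v)
        if intervals and intervals[-1][0] == s:
            # same state as the previous sample: extend its interval
            intervals[-1] = (s, intervals[-1][1], t)
        else:
            intervals.append((s, t, t))
    return intervals

# A glucose-like series sampled hourly, with hypothetical thresholds:
print(abstract_intervals([80, 85, 150, 160, 90], [0, 1, 2, 3, 4],
                         low=70, high=140))
# [('normal', 0, 1), ('high', 2, 3), ('normal', 4, 4)]
```

Temporal operators (e.g. *before*, *co-occurs*) are then applied over these intervals, working backwards in time, to build the candidate patterns the classifier mines.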
Working memory capacity and the spacing effect in cued recall.
Delaney, Peter F; Godbole, Namrata R; Holden, Latasha R; Chang, Yoojin
2018-07-01
Spacing repetitions typically improves memory (the spacing effect). In three cued recall experiments, we explored the relationship between working memory capacity and the spacing effect. People with higher working memory capacity are more accurate on memory tasks that require retrieval relative to people with lower working memory capacity. The experiments used different retention intervals and lags between repetitions, but were otherwise similar. Working memory capacity and spacing of repetitions both improved memory in most conditions, but they did not interact, suggesting additive effects. The results are consistent with the ACT-R model's predictions, and with a study-phase recognition process underpinning the spacing effect in cued recall.
Single-channel autocorrelation functions: the effects of time interval omission.
Ball, F G; Sansom, M S
1988-01-01
We present a general mathematical framework for analyzing the dynamic aspects of single channel kinetics incorporating time interval omission. An algorithm for computing model autocorrelation functions, incorporating time interval omission, is described. We show, under quite general conditions, that the form of these autocorrelations is identical to that which would be obtained if time interval omission was absent. We also show, again under quite general conditions, that zero correlations are necessarily a consequence of the underlying gating mechanism and not an artefact of time interval omission. The theory is illustrated by a numerical study of an allosteric model for the gating mechanism of the locust muscle glutamate receptor-channel. PMID:2455553
Ward, Ryan D; Gallistel, C R; Jensen, Greg; Richards, Vanessa L; Fairhurst, Stephen; Balsam, Peter D
2012-07-01
In a conditioning protocol, the onset of the conditioned stimulus (CS) provides information about when to expect reinforcement (the unconditioned stimulus, US). There are two sources of information from the CS in a delay conditioning paradigm in which the CS-US interval is fixed. The first depends on the informativeness, the degree to which CS onset reduces the average expected time to onset of the next US. The second depends only on how precisely a subject can represent a fixed-duration interval (the temporal Weber fraction). In three experiments with mice, we tested the differential impact of these two sources of information on rate of acquisition of conditioned responding (CS-US associability). In Experiment 1, we showed that associability (the inverse of trials to acquisition) increased in proportion to informativeness. In Experiment 2, we showed that fixing the duration of the US-US interval or the CS-US interval or both had no effect on associability. In Experiment 3, we equated the increase in information produced by varying the C/T ratio with the increase produced by fixing the duration of the CS-US interval. Associability increased with increased informativeness, but, as in Experiment 2, fixing the CS-US duration had no effect on associability. These results are consistent with the view that CS-US associability depends on the increased rate of reward signaled by CS onset. The results also provide further evidence that conditioned responding is temporally controlled when it emerges.
A model of interval timing by neural integration
Simen, Patrick; Balci, Fuat; deSouza, Laura; Cohen, Jonathan D.; Holmes, Philip
2011-01-01
We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes; that correlations among them can be largely cancelled by balancing excitation and inhibition; that neural populations can act as integrators; and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule’s predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior. PMID:21697374
Bebbington, Emily; Furniss, Dominic
2015-02-01
We integrated two factors, demographic population shifts and changes in prevalence of disease, to predict future trends in demand for hand surgery in England, to facilitate workforce planning. We analysed Hospital Episode Statistics data for Dupuytren's disease, carpal tunnel syndrome, cubital tunnel syndrome, and trigger finger from 1998 to 2011. Using linear regression, we estimated trends in both diagnosis and surgery until 2030. We integrated this regression with age specific population data from the Office for National Statistics in order to estimate how this will contribute to a change in workload over time. There has been a significant increase in both absolute numbers of diagnoses and surgery for all four conditions. Combined with future population data, we calculate that the total operative burden for these four conditions will increase from 87,582 in 2011 to 170,166 (95% confidence interval 144,517-195,353) in 2030. The prevalence of these diseases in the ageing population, and increasing prevalence of predisposing factors such as obesity and diabetes, may account for the predicted increase in workload. The most cost effective treatments must be sought, which requires high quality clinical trials. Our methodology can be applied to other sub-specialties to help anticipate the need for future service provision. Copyright © 2014 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
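The projection approach described above, a linear trend fitted to annual counts and extrapolated to 2030 with a confidence interval, can be sketched as follows. The counts here are invented stand-ins for the Hospital Episode Statistics data, and this single-trend fit omits the paper's integration with age-specific population projections.

```python
# Hypothetical sketch: fit a linear trend to annual operation counts
# (1998-2011) and extrapolate to 2030 with a 95% confidence interval.
import numpy as np
from scipy import stats

years = np.arange(1998, 2012)
ops = 60000 + 2000 * (years - 1998) + np.random.default_rng(0).normal(0, 1500, years.size)

n = years.size
b, a = np.polyfit(years, ops, 1)          # slope, intercept
resid = ops - (a + b * years)
s2 = resid @ resid / (n - 2)              # residual variance
sxx = ((years - years.mean()) ** 2).sum()

x0 = 2030
y0 = a + b * x0                           # point projection for 2030
se = np.sqrt(s2 * (1 / n + (x0 - years.mean()) ** 2 / sxx))
tq = stats.t.ppf(0.975, n - 2)
print(f"2030 estimate: {y0:.0f} (95% CI {y0 - tq * se:.0f}-{y0 + tq * se:.0f})")
```

The interval widens as the target year moves further beyond the fitted range, which is why long extrapolations such as 2030 carry wide bounds.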
Bayesian modeling of Clostridium perfringens growth in beef-in-sauce products.
Jaloustre, S; Cornu, M; Morelli, E; Noël, V; Delignette-Muller, M L
2011-04-01
Models of Clostridium perfringens growth published to date have all been deterministic. A probabilistic model describing growth under non-isothermal conditions was therefore proposed for predicting C. perfringens growth in beef-in-sauce products cooked and distributed in a French hospital. Model parameters were estimated from different types of data from various studies. A Bayesian approach was proposed to model the overall uncertainty regarding parameters and potential variability in the 'work to be done' (h₀) during the germination, outgrowth and lag phase. Three models which differed in their description of this parameter h₀ were tested. The model with inter-curve variability on h₀ was found to be the best one, on the basis of goodness-of-fit assessment and validation with literature data obtained under non-isothermal conditions. This model was used in two-dimensional Monte Carlo simulations to predict C. perfringens growth throughout the preparation of beef-in-sauce products, using temperature profiles recorded in a hospital kitchen. The median predicted growth was 7.8×10⁻² log₁₀ cfu·g⁻¹ (95% credibility interval [2.4×10⁻², 0.8]), despite the fact that for more than 50% of the recorded temperature profiles the cooling steps were longer than those required by French regulations. Copyright © 2010 Elsevier Ltd. All rights reserved.
Winslow, Stephen D; Pepich, Barry V; Martin, John J; Hallberg, George R; Munch, David J; Frebis, Christopher P; Hedrick, Elizabeth J; Krop, Richard A
2006-01-01
The United States Environmental Protection Agency's Office of Ground Water and Drinking Water has developed a single-laboratory quantitation procedure: the lowest concentration minimum reporting level (LCMRL). The LCMRL is the lowest true concentration for which future recovery is predicted to fall, with high confidence (99%), between 50% and 150%. The procedure takes into account precision and accuracy. Multiple concentration replicates are processed through the entire analytical method and the data are plotted as measured sample concentration (y-axis) versus true concentration (x-axis). If the data support an assumption of constant variance over the concentration range, an ordinary least-squares regression line is drawn; otherwise, a variance-weighted least-squares regression is used. Prediction interval lines of 99% confidence are drawn about the regression. At the points where the prediction interval lines intersect with data quality objective lines of 50% and 150% recovery, lines are dropped to the x-axis. The higher of the two values is the LCMRL. The LCMRL procedure is flexible because the data quality objectives (50-150%) and the prediction interval confidence (99%) can be varied to suit program needs. The LCMRL determination is performed during method development only. A simpler procedure for verification of data quality objectives at a given minimum reporting level (MRL) is also presented. The verification procedure requires a single set of seven samples taken through the entire method procedure. If the calculated prediction interval is contained within data quality recovery limits (50-150%), the laboratory performance at the MRL is verified.
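A simplified numeric sketch of the constant-variance branch of the LCMRL procedure, with invented spiking data: fit measured versus true concentration by ordinary least squares, form 99% prediction intervals, and take the lowest true concentration at which the interval stays inside the 50-150% recovery limits. The grid scan below stands in for the exact line-intersection step described above.

```python
# Simplified LCMRL sketch (OLS, constant-variance case); spike data invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_c = np.repeat([0.5, 1, 2, 4, 8], 7)              # spiking levels, 7 replicates
meas_c = true_c * rng.normal(1.0, 0.15, true_c.size)  # ~15% measurement noise

n = true_c.size
b, a = np.polyfit(true_c, meas_c, 1)
s = np.sqrt(((meas_c - (a + b * true_c)) ** 2).sum() / (n - 2))
sxx = ((true_c - true_c.mean()) ** 2).sum()
tq = stats.t.ppf(0.995, n - 2)                        # two-sided 99%

def pred_interval(x):
    # 99% prediction interval for a future single measurement at true conc. x
    half = tq * s * np.sqrt(1 + 1 / n + (x - true_c.mean()) ** 2 / sxx)
    mid = a + b * x
    return mid - half, mid + half

grid = np.linspace(0.1, 8, 800)
ok = [x for x in grid
      if pred_interval(x)[0] >= 0.5 * x and pred_interval(x)[1] <= 1.5 * x]
print(f"approximate LCMRL: {min(ok):.2f}")
```

In the variance-weighted case the same scan applies with weighted least squares in place of `np.polyfit`.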
Kimura, Kenta; Kimura, Motohiro
2016-09-28
The evaluative processing of the valence of action feedback is reflected by an event-related brain potential component called feedback-related negativity (FRN) or reward positivity (RewP). Recent studies have shown that FRN/RewP is markedly reduced when the action-feedback interval is long (e.g. 6000 ms), indicating that an increase in the action-feedback interval can undermine the evaluative processing of the valence of action feedback. The aim of the present study was to investigate whether or not such undermined evaluative processing of delayed action feedback could be restored by improving the accuracy of the prediction in terms of the timing of action feedback. With a typical gambling task in which the participant chose one of two cards and received an action feedback indicating monetary gain or loss, the present study showed that FRN/RewP was significantly elicited even when the action-feedback interval was 6000 ms, when an auditory stimulus sequence was additionally presented during the action-feedback interval as a temporal cue. This result suggests that the undermined evaluative processing of delayed action feedback can be restored by increasing the accuracy of the prediction on the timing of the action feedback.
Testing 40 Predictions from the Transtheoretical Model Again, with Confidence
ERIC Educational Resources Information Center
Velicer, Wayne F.; Brick, Leslie Ann D.; Fava, Joseph L.; Prochaska, James O.
2013-01-01
Testing Theory-based Quantitative Predictions (TTQP) represents an alternative to traditional Null Hypothesis Significance Testing (NHST) procedures and is more appropriate for theory testing. The theory generates explicit effect size predictions and these effect size estimates, with related confidence intervals, are used to test the predictions.…
Driving Errors in Parkinson’s Disease: Moving Closer to Predicting On-Road Outcomes
Brumback, Babette; Monahan, Miriam; Malaty, Irene I.; Rodriguez, Ramon L.; Okun, Michael S.; McFarland, Nikolaus R.
2014-01-01
Age-related medical conditions such as Parkinson’s disease (PD) compromise driver fitness. Results from previous studies are unclear on the specific driving errors that underlie passing or failing an on-road assessment. In this study, we determined the between-group differences and quantified the on-road driving errors that predicted pass or fail on-road outcomes in 101 drivers with PD (mean age = 69.38 ± 7.43) and 138 healthy control (HC) drivers (mean age = 71.76 ± 5.08). Participants with PD had minor differences in demographics and driving habits and history but made more and different driving errors than HC participants. Drivers with PD failed the on-road test to a greater extent than HC drivers (41% vs. 9%), χ²(1) = 35.54, HC N = 138, PD N = 99, p < .001. The driving errors predicting on-road pass or fail outcomes (95% confidence interval, Nagelkerke R² = .771) were made in visual scanning, signaling, vehicle positioning, speeding (mainly underspeeding, t(61) = 7.004, p < .001), and total errors. Although it is difficult to predict on-road outcomes, this study provides a foundation for doing so. PMID:24367958
Cortical activity patterns predict robust speech discrimination ability in noise
Shetake, Jai A.; Wolf, Jordan T.; Cheung, Ryan J.; Engineer, Crystal T.; Ram, Satyananda K.; Kilgard, Michael P.
2012-01-01
The neural mechanisms that support speech discrimination in noisy conditions are poorly understood. In quiet conditions, spike timing information appears to be used in the discrimination of speech sounds. In this study, we evaluated the hypothesis that spike timing is also used to distinguish between speech sounds in noisy conditions that significantly degrade neural responses to speech sounds. We tested speech sound discrimination in rats and recorded primary auditory cortex (A1) responses to speech sounds in background noise of different intensities and spectral compositions. Our behavioral results indicate that rats, like humans, are able to accurately discriminate consonant sounds even in the presence of background noise that is as loud as the speech signal. Our neural recordings confirm that speech sounds evoke degraded but detectable responses in noise. Finally, we developed a novel neural classifier that mimics behavioral discrimination. The classifier discriminates between speech sounds by comparing the A1 spatiotemporal activity patterns evoked on single trials with the average spatiotemporal patterns evoked by known sounds. Unlike classifiers in most previous studies, this classifier is not provided with the stimulus onset time. Neural activity analyzed with the use of relative spike timing was well correlated with behavioral speech discrimination in quiet and in noise. Spike timing information integrated over longer intervals was required to accurately predict rat behavioral speech discrimination in noisy conditions. The similarity of neural and behavioral discrimination of speech in noise suggests that humans and rats may employ similar brain mechanisms to solve this problem. PMID:22098331
NASA Astrophysics Data System (ADS)
Schmidt, T.; Kalisch, J.; Lorenz, E.; Heinemann, D.
2015-10-01
Clouds are the dominant source of variability in surface solar radiation and of uncertainty in its prediction. The increasing share of solar energy in the world-wide electric power supply increases the need for accurate solar radiation forecasts. In this work, we present results of a shortest-term global horizontal irradiance (GHI) forecast experiment based on hemispheric sky images. A two-month dataset with images from one sky imager and high-resolution GHI measurements from 99 pyranometers distributed over a 10 km by 12 km area is used for validation. We developed a multi-step model and processed GHI forecasts up to 25 min ahead with an update interval of 15 s. A cloud type classification is used to separate the time series into different cloud scenarios. Overall, the sky imager based forecasts do not outperform the reference persistence forecasts. Nevertheless, we find that analysis and forecast performance depend strongly on the predominant cloud conditions. Especially convective-type clouds lead to high temporal and spatial GHI variability. For cumulus cloud conditions, the analysis error is found to be lower than that introduced by a single pyranometer used representatively for the whole area at distances from the camera larger than 1-2 km. Moreover, forecast skill is much higher for these conditions than for overcast or clear sky situations, which cause low GHI variability that is easier to predict by persistence. In order to generalize the cloud-induced forecast error, we identify a variability threshold indicating conditions with positive forecast skill.
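Forecast skill against persistence, the benchmark used above, is conventionally computed as one minus the ratio of forecast RMSE to persistence RMSE, so positive values mean the forecast beats persistence. A toy example with invented GHI values:

```python
# Forecast skill relative to persistence: skill = 1 - RMSE_fc / RMSE_persist.
# All numbers are invented for illustration.
import numpy as np

def rmse(pred, obs):
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2)))

ghi_obs      = [620, 580, 300, 640, 610]   # W/m^2, variable (cumulus-like) sky
ghi_forecast = [600, 560, 350, 620, 600]   # sky-imager-style forecast
ghi_persist  = [630, 620, 580, 300, 640]   # last observed value carried forward

skill = 1 - rmse(ghi_forecast, ghi_obs) / rmse(ghi_persist, ghi_obs)
print(f"forecast skill vs. persistence: {skill:.2f}")
```

Under low-variability (clear or overcast) conditions persistence errors shrink, which is why skill is hardest to achieve exactly there, as the abstract notes.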
Dissociable Roles of Different Types of Working Memory Load in Visual Detection
Konstantinou, Nikos; Lavie, Nilli
2013-01-01
We contrasted the effects of different types of working memory (WM) load on detection. Considering the sensory-recruitment hypothesis of visual short-term memory (VSTM) within load theory (e.g., Lavie, 2010) led us to predict that VSTM load would reduce visual-representation capacity, thus leading to reduced detection sensitivity during maintenance, whereas load on WM cognitive control processes would reduce priority-based control, thus leading to enhanced detection sensitivity for a low-priority stimulus. During the retention interval of a WM task, participants performed a visual-search task while also asked to detect a masked stimulus in the periphery. Loading WM cognitive control processes (with the demand to maintain a random digit order [vs. fixed in conditions of low load]) led to enhanced detection sensitivity. In contrast, loading VSTM (with the demand to maintain the color and positions of six squares [vs. one in conditions of low load]) reduced detection sensitivity, an effect comparable with that found for manipulating perceptual load in the search task. The results confirmed our predictions and established a new functional dissociation between the roles of different types of WM load in the fundamental visual perception process of detection. PMID:23713796
Eclipse-Free-Time Assessment Tool for IRIS
NASA Technical Reports Server (NTRS)
Eagle, David
2012-01-01
IRIS_EFT is a scientific simulation that can be used to perform an Eclipse-Free-Time (EFT) assessment of IRIS (Infrared Imaging Surveyor) mission orbits. EFT is defined as those time intervals longer than one day during which the IRIS spacecraft is not in the Earth's shadow. Program IRIS_EFT implements a special perturbation of orbital motion to numerically integrate Cowell's form of the system of differential equations. Shadow conditions are predicted by embedding this integrator within Brent's method for finding the root of a nonlinear equation. The IRIS_EFT software models the effects of the following types of orbit perturbations on the long-term evolution and shadow characteristics of IRIS mission orbits: (1) non-spherical Earth gravity, (2) atmospheric drag, (3) point-mass gravity of the Sun, and (4) point-mass gravity of the Moon. The objective of this effort was to create an in-house computer program that would perform eclipse-free-time analysis of candidate IRIS spacecraft mission orbits in an accurate and timely fashion. The software is a suite of Fortran subroutines and data files organized as a "computational" engine that is used to accurately predict the long-term orbit evolution of IRIS mission orbits while searching for Earth shadow conditions.
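The shadow search described here, a root-finder embedded in the trajectory propagation, can be illustrated with a made-up periodic "shadow function" standing in for the real cone-geometry test; `scipy.optimize.brentq` implements Brent's method.

```python
# Toy sketch: step along the trajectory, bracket sign changes of a shadow
# function (>0 in sunlight, <0 in shadow), then refine each crossing with
# Brent's method. The function below is a placeholder, not real geometry.
import numpy as np
from scipy.optimize import brentq

def shadow_function(t):
    return np.cos(2 * np.pi * t / 95.0) - 0.3   # toy 95-minute orbit period

ts = np.linspace(0.0, 95.0, 500)                # coarse propagation steps
vals = shadow_function(ts)
crossings = [brentq(shadow_function, ts[i], ts[i + 1])
             for i in range(len(ts) - 1) if vals[i] * vals[i + 1] < 0]
print([round(t, 2) for t in crossings])         # shadow entry/exit times (min)
```

In the real program the coarse steps come from the numerical integrator, and each refined crossing marks a shadow entry or exit used to accumulate eclipse-free time.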
Stochastic simulation and analysis of biomolecular reaction networks
Frazier, John M; Chushak, Yaroslav; Foy, Brent
2009-01-01
Background: In recent years, several stochastic simulation algorithms have been developed to generate Monte Carlo trajectories that describe the time evolution of the behavior of biomolecular reaction networks. However, the effects of various stochastic simulation and data analysis conditions on the observed dynamics of complex biomolecular reaction networks have not received much attention. To investigate these issues, we employed a software package developed in our group, called Biomolecular Network Simulator (BNS), to simulate and analyze the behavior of such systems. The behavior of a hypothetical two-gene in vitro transcription-translation reaction network is investigated using the Gillespie exact stochastic algorithm to illustrate some of the factors that influence the analysis and interpretation of these data. Results: Specific issues affecting the analysis and interpretation of simulation data are investigated, including: (1) the effect of time interval on data presentation and time-weighted averaging of molecule numbers, (2) the effect of the time averaging interval on reaction rate analysis, (3) the effect of the number of simulations on the precision of model predictions, and (4) the implications of stochastic simulations for optimization procedures. Conclusion: The two main factors affecting the analysis of stochastic simulations are: (1) the selection of time intervals to compute or average state variables and (2) the number of simulations generated to evaluate the system behavior. PMID:19534796
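For readers unfamiliar with the Gillespie exact stochastic algorithm that BNS employs, a minimal sketch for a one-species birth-death network (constitutive production, first-order degradation; rates are illustrative, not from the paper):

```python
# Minimal Gillespie exact stochastic simulation of a birth-death network:
# 0 -> X at rate k_prod, X -> 0 at rate k_deg * n.
import math, random

def gillespie(k_prod=10.0, k_deg=0.1, t_end=100.0, seed=42):
    random.seed(seed)
    t, n = 0.0, 0
    times, counts = [0.0], [0]
    while t < t_end:
        a1, a2 = k_prod, k_deg * n                    # reaction propensities
        a0 = a1 + a2
        t += random.expovariate(a0)                   # exponential waiting time
        n += 1 if random.random() < a1 / a0 else -1   # choose which reaction fires
        times.append(t)
        counts.append(n)
    return times, counts

times, counts = gillespie()
print(counts[-1])   # long-run level fluctuates around k_prod / k_deg = 100
```

The time-interval issues the paper raises arise exactly here: the trajectory is event-driven, so reporting molecule numbers on a fixed grid requires time-weighted averaging between events.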
Saccadic eye movements do not disrupt the deployment of feature-based attention.
Kalogeropoulou, Zampeta; Rolfs, Martin
2017-07-01
The tight link of saccades to covert spatial attention has been firmly established, yet their relation to other forms of visual selection remains poorly understood. Here we studied the temporal dynamics of feature-based attention (FBA) during fixation and across saccades. Participants reported the orientation (on a continuous scale) of one of two sets of spatially interspersed Gabors (black or white). We tested performance at different intervals between the onset of a colored cue (black or white, indicating which stimulus was the most probable target; red: neutral condition) and the stimulus. FBA built up after cue onset: Benefits (errors for valid vs. neutral cues), costs (invalid vs. neutral), and the overall cueing effect (valid vs. invalid) increased with the cue-stimulus interval. Critically, we also tested visual performance at different intervals after a saccade, when FBA had been fully deployed before saccade initiation. Cueing effects were evident immediately after the saccade and were predicted most accurately and most precisely by fully deployed FBA, indicating that FBA was continuous throughout saccades. Finally, a decomposition of orientation reports into target reports and random guesses confirmed continuity of report precision and guess rates across the saccade. We discuss the role of FBA in perceptual continuity across saccades.
Dignath, David; Janczyk, Markus
2017-09-01
According to the ideomotor principle, behavior is controlled via a retrieval of the sensory consequences that will follow from the respective movement ("action-effects"). These consequences include not only what will happen, but also when something will happen. In fact, recollecting the temporal duration between response and effect takes time and prolongs the initiation of the response. We investigated the associative structure of action-effect learning with delayed effects and asked whether participants acquire integrated action-time-effect episodes that comprise a compound of all three elements or whether they acquire separate traces that connect actions to the time until an effect occurs and actions to the effects that follow them. In three experiments, results showed that participants retrieve temporal intervals that follow from their actions even when the identity of the effect could not be learned. Furthermore, retrieval of temporal intervals in isolation was not inferior to retrieval of temporal intervals that were consistently followed by predictable action-effects. More specifically, when tested under extinction, retrieval of action-time and action-identity associations seems to compete against each other, similar to overshadowing effects reported for stimulus-response conditioning. Together, these results suggest that people anticipate when the consequences of their action will occur, independently from what the consequences will be.
Cuenca-Navalon, Elena; Laumen, Marco; Finocchiaro, Thomas; Steinseifer, Ulrich
2016-07-01
A physiological control algorithm is being developed to ensure an optimal physiological interaction between the ReinHeart total artificial heart (TAH) and the circulatory system. A key factor for this is the long-term, accurate determination of the hemodynamic state of the cardiovascular system. This study presents a method to determine estimation models for predicting hemodynamic parameters (pump chamber filling and afterload) for both the left and right circulations. The estimation models are based on linear regression models that correlate filling and afterload values with pump-intrinsic parameters derived from measured values of motor current and piston position. Predictions for filling lie on average within 5% of actual values; predictions for systemic afterload (AoPmean, AoPsys) and mean pulmonary afterload (PAPmean) lie on average within 9% of actual values. Predictions for systolic pulmonary afterload (PAPsys) show an average deviation of 14%. The estimation models show satisfactory prediction and confidence intervals and are thus suitable for estimating hemodynamic parameters. This method and the derived estimation models are a valuable alternative to implanted sensors and an essential step in the development of a physiological control algorithm for a fully implantable TAH. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Graham, Wendy; Destouni, Georgia; Demmy, George; Foussereau, Xavier
1998-07-01
The methodology developed in Destouni and Graham [Destouni, G., Graham, W.D., 1997. The influence of observation method on local concentration statistics in the subsurface. Water Resour. Res. 33 (4) 663-676.] for predicting locally measured concentration statistics for solute transport in heterogeneous porous media under saturated flow conditions is applied to the prediction of conservative nonreactive solute transport in the vadose zone where observations are obtained by soil coring. Exact analytical solutions are developed for both the mean and variance of solute concentrations measured in discrete soil cores using a simplified physical model for vadose-zone flow and solute transport. Theoretical results show that while the ensemble mean concentration is relatively insensitive to the length-scale of the measurement, predictions of the concentration variance are significantly impacted by the sampling interval. Results also show that accounting for vertical heterogeneity in the soil profile results in significantly less spreading in the mean and variance of the measured solute breakthrough curves, indicating that it is important to account for vertical heterogeneity even for relatively small travel distances. Model predictions for both the mean and variance of locally measured solute concentration, based on independently estimated model parameters, agree well with data from a field tracer test conducted in Manatee County, Florida.
A population model of chaparral vegetation response to frequent wildfires.
Lucas, Timothy A; Johns, Garrett; Jiang, Wancen; Yang, Lucie
2013-12-01
The recent increase in wildfire frequency in the Santa Monica Mountains (SMM) may substantially impact plant community structure. Chaparral shrub species represent the dominant vegetation type in the SMM. These species can be divided into three life-history types according to their response to wildfires. Nonsprouting species are completely killed by fire and reproduce by seeds that germinate in response to a fire cue; obligate sprouting species survive by resprouting from dormant buds in a root crown, because their seeds are destroyed by fire; and facultative sprouting species recover after fire both by seeds and resprouts. Based on these assumptions, we developed a set of nonlinear difference equations to model each life-history type. These models can be used to predict species survivorship under varying fire return intervals. For example, frequent fires can lead to localized extinction of nonsprouting species such as Ceanothus megacarpus, while several facultative sprouting species such as Ceanothus spinosus and Malosma (Rhus) laurina will persist, as documented by a longitudinal study in a biological preserve in the SMM. We estimated appropriate parameter values for several chaparral species using 25 years of data and explored parameter relationships that lead to equilibrium populations. We conclude by examining the survival strategies of these three species of chaparral shrubs under varying fire return intervals and predict changes in plant community structure under short fire return intervals. In particular, our model predicts that an average fire return interval of greater than 12 years is required for 50% of the initial Ceanothus megacarpus population and 25% of the initial Ceanothus spinosus population to survive. In contrast, we predict that the Malosma laurina population will have 90% survivorship for an average fire return interval of at least 6 years.
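The life-history assumptions above can be caricatured in a few lines of difference-equation bookkeeping. The parameters below are invented, not the estimates fitted from the 25-year data set: at each fire, a fraction of adults resprouts, and seed recruitment occurs only if the fire return interval exceeded the age of first seed production.

```python
# Toy fire-response model: one update per fire event.
# N_{t+1} = resprout * N_t + recruits, with recruitment gated on maturity.
def population_after_fires(n0, interval, n_fires, resprout,
                           seeds_per_adult, maturity_age):
    n = n0
    for _ in range(n_fires):
        recruits = seeds_per_adult * n if interval >= maturity_age else 0.0
        n = resprout * n + recruits
    return n

n0 = 1000.0
# facultative sprouter: resprouting plus seed recruitment under long intervals
print(population_after_fires(n0, interval=15, n_fires=5, resprout=0.5,
                             seeds_per_adult=0.6, maturity_age=8))
# nonsprouter under short-interval fires: no resprouts, seeds not yet produced
print(population_after_fires(n0, interval=5, n_fires=5, resprout=0.0,
                             seeds_per_adult=0.9, maturity_age=8))
```

The second call illustrates the localized-extinction mechanism: with no resprouting and fires arriving before seed production, the population collapses after a single fire.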
Nonparametric Stochastic Model for Uncertainty Quantifi cation of Short-term Wind Speed Forecasts
NASA Astrophysics Data System (ADS)
AL-Shehhi, A. M.; Chaouch, M.; Ouarda, T.
2014-12-01
Wind energy is increasing in importance as a renewable energy source due to its potential role in reducing carbon emissions. It is a safe, clean, and inexhaustible source of energy. The amount of wind energy generated by wind turbines is closely related to the wind speed. Wind speed forecasting plays a vital role in the wind energy sector in terms of optimal wind turbine operation, wind energy dispatch and scheduling, efficient energy harvesting, etc. It is also considered during the planning, design, and assessment of any proposed wind project. Therefore, accurate prediction of wind speed carries particular importance and plays a significant role in the wind industry. Many methods have been proposed in the literature for short-term wind speed forecasting. These methods are usually based on modeling historical fixed time intervals of the wind speed data and using them for future prediction. The methods mainly include statistical models such as ARMA and ARIMA models, physical models such as numerical weather prediction, and artificial intelligence techniques such as support vector machines and neural networks. In this paper, we are interested in estimating hourly wind speed measures in the United Arab Emirates (UAE). More precisely, we predict hourly wind speed using a nonparametric kernel estimation of the regression and volatility functions pertaining to a nonlinear autoregressive model with an ARCH component, as discussed in the literature. The unknown nonlinear regression function describes the dependence between the value of the wind speed at time t and its historical values at times t-1, t-2, …, t-d. This function plays a key role in predicting the hourly wind speed process. The volatility function, i.e., the conditional variance given the past, measures the risk associated with this prediction. Since the regression and volatility functions are unknown, they are estimated using nonparametric kernel methods. In addition to the pointwise hourly wind speed forecasts, a confidence interval is provided, which quantifies the uncertainty around the forecasts.
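As a concrete illustration of the regression part, a Nadaraya-Watson kernel estimate of m(x) = E[V_t | V_{t-1} = x] on synthetic mean-reverting data; the bandwidth h and the data are invented, and the paper's approach additionally estimates the volatility function and uses d lags.

```python
# Nadaraya-Watson kernel regression for a one-step-ahead wind speed forecast.
import numpy as np

def nw_predict(x_hist, y_hist, x0, h):
    # Gaussian-kernel weighted average of next-hour values whose lagged
    # speeds are close to the current speed x0
    w = np.exp(-0.5 * ((x_hist - x0) / h) ** 2)
    return float(np.sum(w * y_hist) / np.sum(w))

rng = np.random.default_rng(3)
v = [6.0]
for _ in range(200):                      # synthetic mean-reverting hourly speeds
    v.append(6 + 0.8 * (v[-1] - 6) + rng.normal(0, 0.7))
v = np.array(v)

x_hist, y_hist = v[:-1], v[1:]            # (V_{t-1}, V_t) pairs
forecast = nw_predict(x_hist, y_hist, x0=v[-1], h=0.8)
print(f"next-hour forecast: {forecast:.2f} m/s")
```

The same kernel-weighting idea applied to squared residuals yields the conditional-variance estimate from which the forecast interval is built.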
NASA Astrophysics Data System (ADS)
Ford, T.; Dirmeyer, P.
2016-12-01
The influence of antecedent drought conditions on the onset of heat waves in North America is important, as the establishment of past heat wave events has been connected both to advection of warm, dry air and to limitation of local moisture recycling due to dry soils. The strong connection between the land surface and subsequent extreme heat offers promise that realistic soil moisture initialization could improve model forecast skill. However, there is still a lack of consensus about (1) the role of antecedent drought conditions in forcing heat waves over North America and (2) the ability of numerical forecast models to predict extreme heat events at sub-seasonal to seasonal time scales. For this project, we use atmospheric reanalysis datasets to establish the connection between drought and subsequent extreme heat events. The Standardized Precipitation Index (SPI), computed over 30-, 60-, and 90-day intervals, is used to identify drought events, while the excess heat factor defines subsequent heat wave events. We focus on heat waves immediately following drought periods, including events coinciding with but not beginning prior to the start of drought, as well as heat wave events beginning no more than 3 days after the demise of a drought event. Hindcasts from individual model ensemble members of the Sub-seasonal to Seasonal Prediction (S2S) Project and Phase II of the North American Multi-Model Ensemble (NMME) are assessed with regard to heat wave prediction. Each individual S2S and NMME ensemble member is evaluated to determine whether its hindcasts are able to capture and predict the heat wave events identified in the reanalysis products.
Sacko, Ryan S; McIver, Kerry; Brian, Ali; Stodden, David F
2018-04-02
This study examined the metabolic cost (METs) of performing object projection skills at three practice trial intervals (6, 12, and 30 seconds). Forty adults (female n = 20) aged 18-30 (M = 23.7 ± 2.9 years) completed three nine-minute sessions of skill trials performed at 6-, 12-, and 30-second intervals. Participants performed kicking, throwing and striking trials in a blocked schedule with maximal effort. Average METs during each session were measured using a COSMED K4b2. A three (interval condition) × two (sex) ANOVA was conducted to examine differences in METs across interval conditions and by sex. Results indicated a main effect for interval condition (F(5,114) = 187.02, p < .001, η² = 0.76), with decreased interval times yielding significantly higher METs [30 sec = 3.45, 12 sec = 5.68, 6 sec = 8.21]. A main effect for sex (F(5, 114) = 35.39, p < .001, η² = 0.24) was also found, with men demonstrating higher METs across all intervals. At a rate of only two trials/min, participants elicited moderate physical activity (PA), with the 12- and 6-second intervals eliciting vigorous PA. Demonstrating moderate-to-vigorous PA during object projection skill performance has potential implications for PA interventions.
Taylor, J M; Law, N
1998-10-30
We investigate the importance of the assumed covariance structure for longitudinal modelling of CD4 counts. We examine how individual predictions of future CD4 counts are affected by the covariance structure. We consider four covariance structures: one based on an integrated Ornstein-Uhlenbeck stochastic process; one based on Brownian motion, and two derived from standard linear and quadratic random-effects models. Using data from the Multicenter AIDS Cohort Study and from a simulation study, we show that there is a noticeable deterioration in the coverage rate of confidence intervals if we assume the wrong covariance. There is also a loss in efficiency. The quadratic random-effects model is found to be the best in terms of correctly calibrated prediction intervals, but is substantially less efficient than the others. Incorrectly specifying the covariance structure as linear random effects gives too narrow prediction intervals with poor coverage rates. Fitting using the model based on the integrated Ornstein-Uhlenbeck stochastic process is the preferred one of the four considered because of its efficiency and robustness properties. We also use the difference between the future predicted and observed CD4 counts to assess an appropriate transformation of CD4 counts; a fourth root, cube root and square root all appear reasonable choices.
Ouyang, Qin; Chen, Quansheng; Zhao, Jiewen
2016-02-05
This study reports the application of near-infrared (NIR) spectroscopy, as an alternative to a human sensory panel, for estimating Chinese rice wine quality; specifically, for predicting the overall sensory scores assigned by a trained sensory panel. A back-propagation artificial neural network (BPANN) combined with the adaptive boosting (AdaBoost) algorithm, namely BP-AdaBoost, was proposed as a novel nonlinear modeling approach. First, the optimal spectral intervals were selected by synergy interval partial least squares (Si-PLS). Then, a BP-AdaBoost model based on the optimal spectral intervals was established, called the Si-BP-AdaBoost model. These models were optimized by cross-validation, and the performance of each final model was evaluated by the correlation coefficient (Rp) and root mean square error of prediction (RMSEP) in the prediction set. Si-BP-AdaBoost showed excellent performance in comparison with the other models. The best Si-BP-AdaBoost model achieved Rp=0.9180 and RMSEP=2.23 in the prediction set. It was concluded that NIR spectroscopy combined with Si-BP-AdaBoost is an appropriate method for predicting the sensory quality of Chinese rice wine. Copyright © 2015 Elsevier B.V. All rights reserved.
Schneider, Hauke; Huynh, Thien J; Demchuk, Andrew M; Dowlatshahi, Dar; Rodriguez-Luna, David; Silva, Yolanda; Aviv, Richard; Dzialowski, Imanuel
2018-06-01
The intracerebral hemorrhage (ICH) score is the most commonly used grading scale for stratifying functional outcome in patients with acute ICH. We sought to determine whether a combination of the ICH score and the computed tomographic angiography spot sign may improve outcome prediction in the cohort of a prospective multicenter hemorrhage trial. Prospectively collected data from 241 patients from the observational PREDICT study (Prediction of Hematoma Growth and Outcome in Patients With Intracerebral Hemorrhage Using the CT-Angiography Spot Sign) were analyzed. Functional outcome at 3 months was dichotomized using the modified Rankin Scale (0-3 versus 4-6). Performance of (1) the ICH score and (2) the spot sign ICH score, a scoring scale combining the ICH score and spot sign number, was tested. Multivariable analysis demonstrated that ICH score (odds ratio, 3.2; 95% confidence interval, 2.2-4.8) and spot sign number (n=1: odds ratio, 2.7; 95% confidence interval, 1.1-7.4; n>1: odds ratio, 3.8; 95% confidence interval, 1.2-17.1) were independently predictive of functional outcome at 3 months, with similar odds ratios. Prediction of functional outcome was not significantly different using the spot sign ICH score compared with the ICH score alone (spot sign ICH score area under the curve versus ICH score area under the curve: P = 0.14). In the PREDICT cohort, a prognostic score adding the computed tomographic angiography-based spot sign to the established ICH score did not improve functional outcome prediction compared with the ICH score. © 2018 American Heart Association, Inc.
A method to quantify movement activity of groups of animals using automated image analysis
NASA Astrophysics Data System (ADS)
Xu, Jianyu; Yu, Haizhen; Liu, Ying
2009-07-01
Most physiological and environmental changes are capable of inducing variations in animal behavior. Behavioral parameters can be measured continuously in situ by a non-invasive, non-contact approach and have the potential to be used in production settings to predict stress conditions. Most vertebrates tend to live in groups, herds, flocks, shoals, bands, or packs of conspecific individuals. Under culture conditions, livestock or fish live in groups and interact with each other, so the aggregate behavior of the group should be studied rather than that of individuals. This paper presents a method that uses computer vision to calculate the movement speed of a group of animals in an enclosure or a tank, expressed as body-length speed, which corresponds to group activity. Frame sequences captured at a set time interval were subtracted in pairs after image segmentation and identification. By labeling the components caused by object movement in each difference frame, the projected area swept by the movement of every object during the capture interval was calculated; this area was divided by the projected area of the object in the later frame to obtain the distance moved in body lengths, from which the relative body-length speed follows. The average speed of all objects reflects the activity of the group well. The group activity of a tilapia (Oreochromis niloticus) school exposed to a high (2.65 mg/L) level of un-ionized ammonia (UIA) was quantified using this method. The high-UIA condition elicited a marked increase in school activity during the first hour (P<0.05), exhibiting an avoidance reaction (attempts to flee the high-UIA condition), after which activity decreased gradually.
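The body-length-speed computation described above can be sketched as follows (a minimal illustration with invented pixel masks; the set representation, the newly-occupied-pixel approximation of the movement area, and the toy "fish" are assumptions, not the authors' implementation):

```python
def body_length_speed(mask_prev, mask_curr, interval_s):
    """Speed of one object in body lengths per second.

    mask_prev / mask_curr: sets of (row, col) pixels occupied by the object
    in two frames captured interval_s seconds apart (after segmentation).
    """
    # Pixels newly occupied in the later frame approximate the projected
    # area swept by the movement during the capture interval.
    moved_area = len(mask_curr - mask_prev)
    object_area = len(mask_curr)            # projected area in the later frame
    if object_area == 0:
        return 0.0
    # Dividing the swept area by the body area gives displacement in body
    # lengths; dividing by the interval gives body-length speed.
    return moved_area / object_area / interval_s

# Toy example: a 2x3-pixel "fish" shifted one column (1/3 body length) right.
fish_t0 = {(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2)}
fish_t1 = {(0, 1), (0, 2), (0, 3), (1, 1), (1, 2), (1, 3)}
speed = body_length_speed(fish_t0, fish_t1, interval_s=1.0)   # 1/3 BL/s
```

In a real pipeline the masks would come from segmentation of consecutive video frames, and the per-object speeds would be averaged over the school to obtain the group activity index.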
A real-time approach for heart rate monitoring using a Hilbert transform in seismocardiograms.
Jafari Tadi, Mojtaba; Lehtonen, Eero; Hurnanen, Tero; Koskinen, Juho; Eriksson, Jonas; Pänkäälä, Mikko; Teräs, Mika; Koivisto, Tero
2016-11-01
Heart rate monitoring helps in assessing the functionality and condition of the cardiovascular system. We present a new real-time applicable approach for estimating beat-to-beat time intervals and heart rate in seismocardiograms acquired from a tri-axial microelectromechanical accelerometer. Seismocardiography (SCG) is a non-invasive method for heart monitoring which measures the mechanical activity of the heart. Measuring true beat-to-beat time intervals from SCG could be used for monitoring of the heart rhythm, for heart rate variability analysis and for many other clinical applications. In this paper we present the Hilbert adaptive beat identification technique for the detection of heartbeat timings and inter-beat time intervals in SCG from healthy volunteers in three different positions, i.e. supine, left recumbent and right recumbent. Our method is electrocardiogram (ECG) independent, as it does not require any ECG fiducial points to estimate the beat-to-beat intervals. The performance of the algorithm was tested against standard ECG measurements. The average true positive rate, positive predictive value and detection error rate for the different positions were, respectively, supine (95.8%, 96.0% and ≃0.6%), left (99.3%, 98.8% and ≃0.001%) and right (99.53%, 99.3% and ≃0.01%). High correlation and agreement were observed between SCG and ECG inter-beat intervals (r > 0.99) for all positions, which highlights the capability of the algorithm for SCG heart monitoring in different positions. Additionally, we demonstrate the applicability of the proposed method in smartphone-based SCG. In conclusion, the proposed algorithm can be used for real-time continuous unobtrusive cardiac monitoring, smartphone cardiography, and in wearable devices aimed at health and well-being applications.
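The core idea, an amplitude envelope from an FFT-based Hilbert transform followed by peak picking with a refractory period, can be sketched on synthetic data (a simplified stand-in, not the authors' Hilbert adaptive beat identification algorithm; the sampling rate, burst shape, threshold, and refractory period are assumptions):

```python
import numpy as np

fs = 200                                    # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1 / fs)
# Synthetic "SCG": a short 20 Hz packet once per second stands in for the
# heartbeat-induced chest vibration.
x = sum(np.exp(-((t - c) / 0.05) ** 2) * np.sin(2 * np.pi * 20 * t)
        for c in (0.5, 1.5, 2.5, 3.5))

def envelope(sig):
    """Amplitude envelope via the analytic signal (FFT-based Hilbert transform)."""
    n = len(sig)
    spec = np.fft.fft(sig)
    h = np.zeros(n)                         # gain for each frequency bin
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(spec * h))

def beat_times(env, fs, thresh, refractory_s=0.4):
    """Local maxima of the envelope above a threshold, separated by a
    refractory period so each heartbeat is counted once."""
    beats, last = [], -1e9
    for i in range(1, len(env) - 1):
        if (env[i] > thresh and env[i] >= env[i - 1] and env[i] > env[i + 1]
                and i / fs - last > refractory_s):
            beats.append(i / fs)
            last = i / fs
    return beats

env = envelope(x)
beats = beat_times(env, fs, thresh=0.5 * env.max())
ibis = [b - a for a, b in zip(beats, beats[1:])]   # inter-beat intervals (s)
```

On this synthetic record the detected inter-beat intervals are close to the 1 s spacing of the bursts; a real SCG would require the adaptive thresholding and artifact handling described in the paper.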
Temporal Prediction Errors Affect Short-Term Memory Scanning Response Time.
Limongi, Roberto; Silva, Angélica M
2016-11-01
The Sternberg short-term memory scanning task has been used to unveil cognitive operations involved in time perception. Participants produce time intervals during the task, and the researcher explores how task performance affects interval production, with time estimation error as the dependent variable of interest. The perspective of predictive behavior instead regards time estimation error as a temporal prediction error (PE), an independent variable that controls cognition, behavior, and learning. Based on this perspective, we investigated whether temporal PEs affect short-term memory scanning. Participants performed temporal predictions while they maintained information in memory. Model inference revealed that PEs affected memory scanning response time independently of the memory-set size effect. We discuss the results within the context of formal and mechanistic models of short-term memory scanning and of predictive coding, a Bayes-based theory of brain function. We hypothesize that our finding could be associated with weak frontostriatal connections and weak striatal activity.
Four Weeks of Off-Season Training Improves Peak Oxygen Consumption in Female Field Hockey Players
Funch, Lindsey T.; Lind, Erik; Van Langen, Deborah; Hokanson, James F.
2017-01-01
The purpose of the study was to examine the changes in peak oxygen consumption (V˙O2peak) and running economy (RE) following four weeks of high-intensity training and concurrent strength and conditioning during the off-season in collegiate female field hockey players. Fourteen female student-athletes (age 19.29 ± 0.91 years) were divided into two training groups, matched on baseline V˙O2peak: High Intensity Training (HITrun; n = 8) and High Intensity Interval Training (HIIT; n = 6). Participants completed 12 training sessions. HITrun consisted of 30 min of high-intensity running, while HIIT consisted of a series of whole-body high-intensity Tabata-style intervals (75–85% of age-predicted maximum heart rate) for a total of four minutes. In addition to the interval training, the off-season training included six resistance training sessions, three team practices, and concluded with a team scrimmage. V˙O2peak was measured pre- and post-training to determine the effectiveness of the training program. A two-way mixed (group × time) ANOVA showed a main effect of time, with a statistically significant difference in V˙O2peak from pre- to post-testing, F(1, 12) = 12.657, p = 0.004, partial η² = 0.041. Average (±SD) V˙O2peak increased from 44.64 ± 3.74 to 47.35 ± 3.16 mL·kg−1·min−1 for the HIIT group and from 45.39 ± 2.80 to 48.22 ± 2.42 mL·kg−1·min−1 for the HITrun group. Given the similar improvement in aerobic power, coaches and training staff may find the time-saving element of HIIT-type conditioning programs attractive. PMID:29910449
Time prediction of failure a type of lamps by using general composite hazard rate model
NASA Astrophysics Data System (ADS)
Riaman; Lesmana, E.; Subartini, B.; Supian, S.
2018-03-01
This paper discusses estimation of a basic survival model to obtain the average predicted lamp failure time. The estimate is for a parametric model, the general composite hazard rate model. The random-time model used as its basis is the exponential distribution, which has a constant hazard function. We discuss an example of survival model estimation for a composite hazard function using an exponential model as its basis. The model is estimated by estimating its parameters through construction of the survival function and the empirical cumulative distribution function. The fitted model is then used to predict the average failure time for this type of lamp. The data are grouped into several intervals, the average failure value in each interval is obtained, and the average failure time of the model is calculated on each interval; the p value obtained from the test is 0.3296.
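For the exponential basis model mentioned above, the constant-hazard estimate and the predicted mean failure time follow directly from the maximum-likelihood fit. A minimal sketch, with invented failure-time data (not the paper's data):

```python
import math

# Hypothetical lamp failure times in hours (illustrative, not the paper's data).
times = [120, 340, 95, 510, 280, 150, 430, 60, 220, 390]

# For the exponential basis model the hazard is a constant lam, the survival
# function is S(t) = exp(-lam * t), and the maximum-likelihood estimate of
# lam is n / sum(t_i), so the predicted mean failure time is the sample mean.
n = len(times)
lam = n / sum(times)
mean_failure_time = 1 / lam

def survival(t):
    """Fitted exponential survival function S(t)."""
    return math.exp(-lam * t)
```

A composite hazard model as in the paper would replace the constant hazard with a piecewise form over the data intervals; the exponential case above is only the basis distribution.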
van Daalen, Marjolijn A; de Kat, Dorothée S; Oude Grotebevelsborg, Bernice F L; de Leeuwe, Roosje; Warnaar, Jeroen; Oostra, Roelof Jan; M Duijst-Heesters, Wilma L J
2017-03-01
This study aimed to develop an aquatic decomposition scoring (ADS) method and investigated the predictive value of this method in estimating the postmortem submersion interval (PMSI) of bodies recovered from the North Sea. The method, consisting of an ADS item list and a pictorial reference atlas, showed a high interobserver agreement (Krippendorff's alpha ≥ 0.93) and hence proved to be valid. The scoring method was applied to data collected from closed cases, i.e. cases in which the PMSI was known, concerning bodies recovered from the North Sea from 1990 to 2013. Thirty-eight cases met the inclusion criteria and were scored by quantifying the observed total aquatic decomposition score (TADS). Statistical analysis demonstrated that TADS accurately predicts the PMSI (p < 0.001), confirming that the decomposition process in the North Sea is strongly correlated with time. © 2017 American Academy of Forensic Sciences.
Aramaki, Yu; Haruno, Masahiko; Osu, Rieko; Sadato, Norihiro
2011-07-06
In periodic bimanual movements, anti-phase coordinated patterns often change into in-phase patterns suddenly and involuntarily. Because behavior in the initial period of a sequence of cycles often does not show any obvious errors, it is difficult to predict subsequent movement errors in the later period of the cyclical sequence. Here, we evaluated performance in the later period of the cyclical sequence of bimanual periodic movements using human brain activity measured with functional magnetic resonance imaging as well as using initial movement features. Eighteen subjects performed a 30 s bimanual finger-tapping task. We calculated differences in initiation-locked transient brain activity between antiphase and in-phase tapping conditions. Correlation analysis revealed that the difference in anterior putamen activity during antiphase compared with in-phase tapping conditions was strongly correlated with future instability, as measured by the mean absolute deviation of the left-hand intertap interval during antiphase movements relative to in-phase movements (r = 0.81). Among the initial movement features we measured, only the number of taps needed to establish the antiphase movement pattern exhibited a significant correlation. However, its correlation coefficient of 0.60 was not high enough to predict the characteristics of subsequent movement. There was no significant correlation between putamen activity and initial movement features. It is likely that initiating unskilled difficult movements requires increased anterior putamen activity, and this activity increase may facilitate the initiation of movement via the basal ganglia-thalamocortical circuit. Our results suggest that initiation-locked transient activity of the anterior putamen can be used to predict future motor performance.
Human's choices in situations of time-based diminishing returns.
Hackenberg, T D; Axtell, S A
1993-01-01
Three experiments examined adult humans' choices in situations with contrasting short-term and long-term consequences. Subjects were given repeated choices between two time-based schedules of points exchangeable for money: a fixed schedule and a progressive schedule that began at 0 s and increased by 5 s with each point delivered by that schedule. Under "reset" conditions, choosing the fixed schedule not only produced a point but it also reset the requirements of the progressive schedule to 0 s. In the first two experiments, reset conditions alternated with "no-reset" conditions, in which progressive-schedule requirements were independent of fixed-schedule choices. Experiment 1 entailed choices between a progressive-interval schedule and a fixed-interval schedule, the duration of which varied across conditions. Switching from the progressive- to the fixed-interval schedule was systematically related to fixed-interval size in 4 of 8 subjects, and in all subjects occurred consistently sooner in the progressive-schedule sequence under reset than under no-reset procedures. The latter result was replicated in a second experiment, in which choices between progressive- and fixed-interval schedules were compared with choices between progressive- and fixed-time schedules. In Experiment 3, switching patterns under reset conditions were unrelated to variations in intertrial interval. In none of the experiments did orderly choice patterns depend on verbal descriptions of the contingencies or on schedule-controlled response patterns in the presence of the chosen schedules. The overall pattern of results indicates control of choices by temporarily remote consequences, and is consistent with versions of optimality theory that address performance in situations of diminishing returns. PMID:8315364
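The diminishing-returns structure of the reset condition can be made concrete with a small optimality calculation (a sketch under stated assumptions, not the authors' analysis; it uses the 5 s progressive step from the procedure and treats delay per point as the only cost):

```python
# Under reset contingencies, choosing the fixed schedule earns a point after a
# fixed delay and resets the progressive schedule, whose delays grow as
# 0, 5, 10, ... seconds per point.  A cycle of k progressive points plus one
# fixed point therefore yields k + 1 points.

def mean_delay_per_point(k, fixed_s, step_s=5.0):
    """Average seconds per point for a cycle of k progressive-schedule points
    (delays 0, step, 2*step, ...) followed by one fixed-schedule point that
    resets the progression."""
    progressive_time = step_s * k * (k - 1) / 2
    return (progressive_time + fixed_s) / (k + 1)

def optimal_switch_point(fixed_s, step_s=5.0, k_max=100):
    """Number of progressive points after which switching minimizes the
    long-run mean delay per point (maximizes overall point rate)."""
    return min(range(k_max + 1),
               key=lambda k: mean_delay_per_point(k, fixed_s, step_s))
```

For a 30 s fixed interval the mean delay per point is minimized by switching after 3 progressive points, and a longer fixed interval pushes the optimal switch later, consistent with the reported relation between switching and fixed-interval size.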
Reboul, Q; Delabaere, A; Luo, Z C; Nuyt, A-M; Wu, Y; Chauleur, C; Fraser, W; Audibert, F
2017-03-01
To compare third-trimester ultrasound screening methods to predict small-for-gestational age (SGA) and to evaluate the impact of the ultrasound-delivery interval on screening performance. In this prospective study, data were collected from a multicenter singleton cohort study investigating the associations between various exposures during pregnancy and birth outcome and later health in children. We included women, recruited in the first trimester, who had complete outcome data and had undergone third-trimester ultrasound examination. Demographic, clinical and biological variables were also collected from both parents. We compared prediction of delivery of a SGA neonate (birth weight < 10th percentile) by the following methods: abdominal circumference (AC) Z-score based on Hadlock curves (Hadlock AC), on INTERGROWTH-21st Project curves (Intergrowth AC) and on Salomon curves (Salomon AC); estimated fetal weight (EFW) Z-score based on Hadlock curves (Hadlock EFW) and on customized curves from Gardosi (Gardosi EFW); and fetal growth velocity based on change in AC between the second and third trimesters (FGVAC). We also assessed the following ultrasound-delivery intervals: ≤ 4 weeks, ≤ 6 weeks and ≤ 10 weeks. Third-trimester ultrasound was performed in 1805 patients with complete outcome data, of whom 158 (8.8%) delivered a SGA neonate. Ultrasound examination was at a median gestational age of 32 (interquartile range, 31-33) weeks. The ultrasound-delivery interval was ≤ 4 weeks in 17.2% of cases, ≤ 6 weeks in 48.1% of cases and ≤ 10 weeks in 97.3% of cases. Areas under the receiver operating characteristic curve (AUC) were 0.772 for Salomon AC, 0.768 for Hadlock EFW, 0.766 for Hadlock AC, 0.765 for Intergrowth AC, 0.708 for Gardosi EFW and 0.674 for FGVAC (all P < 0.0001). The screening method with the highest AUC for an ultrasound-delivery interval ≤ 4 weeks was Salomon AC (AUC, 0.856), for ≤ 6 weeks was Hadlock AC (AUC, 0.824) and for ≤ 10 weeks was Salomon AC (AUC, 0.780). 
At a fixed 10% false-positive rate, the best detection rates were 60.0%, 54.1% and 42.1% for intervals ≤ 4, ≤ 6 and ≤ 10 weeks, respectively. Third-trimester ultrasound measurements provide poor to moderate prediction of SGA. A shorter ultrasound-delivery interval provides better prediction than does a longer interval. Further studies are needed to test the effect of including maternal or biological characteristics in SGA screening. Copyright © 2016 ISUOG. Published by John Wiley & Sons Ltd.
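The AUC comparisons above reduce to the Mann-Whitney statistic: the probability that a randomly chosen SGA case receives a more suspicious screening score than a randomly chosen control, with ties counting half. A minimal sketch (the toy scores are hypothetical; for AC or EFW Z-scores, where low values indicate SGA, the negated Z-score would be passed as the risk score):

```python
def auc(case_scores, control_scores):
    """Empirical area under the ROC curve (higher score = more suspicious)."""
    wins = sum((c > d) + 0.5 * (c == d)
               for c in case_scores for d in control_scores)
    return wins / (len(case_scores) * len(control_scores))

# Hypothetical negated AC Z-scores: SGA cases tend to score higher.
cases = [2.1, 1.4, 0.9, 1.8]
controls = [0.2, -0.5, 1.0, 0.1, -1.2]
auc_example = auc(cases, controls)
```

An AUC of 0.5 corresponds to chance-level screening; the values near 0.77 reported above indicate moderate discrimination.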
Identifying QT prolongation from ECG impressions using a general-purpose Natural Language Processor
Denny, Joshua C.; Miller, Randolph A.; Waitman, Lemuel Russell; Arrieta, Mark; Peterson, Joshua F.
2009-01-01
Objective Typically detected via electrocardiograms (ECGs), QT interval prolongation is a known risk factor for sudden cardiac death. Since medications can promote or exacerbate the condition, detection of QT interval prolongation is important for clinical decision support. We investigated the accuracy of natural language processing (NLP) for identifying QT prolongation from cardiologist-generated, free-text ECG impressions compared to corrected QT (QTc) thresholds reported by ECG machines. Methods After integrating negation detection into a locally developed natural language processor, the KnowledgeMap concept identifier, we evaluated NLP-based detection of QT prolongation against the calculated QTc on a set of 44,318 ECGs obtained from hospitalized patients. We also created a string query using regular expressions to identify QT prolongation. We calculated sensitivity and specificity of the methods using manual physician review of the cardiologist-generated reports as the gold standard. To investigate causes of “false positive” calculated QTc values, we manually reviewed randomly selected ECGs with a long calculated QTc but no mention of QT prolongation. Separately, we validated the performance of the negation detection algorithm on 5,000 manually categorized ECG phrases covering any medical concept (not limited to QT prolongation) prior to developing the NLP query for QT prolongation. Results The NLP query for QT prolongation correctly identified 2,364 of 2,373 ECGs with QT prolongation, with a sensitivity of 0.996 and a positive predictive value of 1.000. There were no false positives. The regular expression query had a sensitivity of 0.999 and a positive predictive value of 0.982. In contrast, the positive predictive value of common QTc thresholds derived from ECG machines was 0.07–0.25, with corresponding sensitivities of 0.994–0.046. The negation detection algorithm had a recall of 0.973 and a precision of 0.982 for 10,490 concepts found within ECG impressions. 
Conclusions NLP and regular expression queries of cardiologists’ ECG interpretations can more effectively identify QT prolongation than the automated QTc intervals reported by ECG machines. Future clinical decision support could employ NLP queries to detect QTc prolongation and other reported ECG abnormalities. PMID:18938105
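A regular-expression query with simple negation handling, of the general kind the study describes, can be sketched as follows (the patterns are hypothetical illustrations, not the study's KnowledgeMap query or its actual regular expression):

```python
import re

# Hypothetical patterns: match assertions of a prolonged QT/QTc in free-text
# ECG impressions, and reject them when a negation cue precedes the mention
# within the same clause.
QT_PATTERN = re.compile(
    r'\b(?:prolonged\s+qtc?\b|qtc?\s+(?:interval\s+)?prolongation)\b', re.I)
NEGATION = re.compile(r'\b(?:no|without|denies|negative\s+for)\b[^.;]{0,40}$', re.I)

def flags_qt_prolongation(impression):
    """True if the impression asserts (and does not negate) QT prolongation."""
    m = QT_PATTERN.search(impression)
    if not m:
        return False
    # Negated if a cue like "no" appears shortly before the match, without an
    # intervening clause boundary (period or semicolon).
    return not NEGATION.search(impression[:m.start()])
```

A production system would need a fuller negation scheme (e.g. NegEx-style scope rules) and a broader synonym list; this sketch only shows why negation handling is essential before counting matches.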
NASA Technical Reports Server (NTRS)
Chiavassa, G.; Liandrat, J.
1996-01-01
We construct compactly supported wavelet bases satisfying homogeneous boundary conditions on the interval (0,1). The maximum features of multiresolution analysis on the line are retained, including polynomial approximation and tree algorithms. The case of H_0^1(0, 1) is detailed, and numerical values, required for the implementation, are provided for the Neumann and Dirichlet boundary conditions.
ERIC Educational Resources Information Center
Biedenkapp, Joseph C.; Rudy, Jerry W.
2007-01-01
Contextual fear conditioning was maintained over a 15-day retention interval suggesting no forgetting of the conditioning experience. However, a more subtle generalization test revealed that, as the retention interval increased, rats showed enhanced generalized fear to an altered context. Preexposure to the training context prior to conditioning,…
Daluwatte, Chathuri; Vicente, Jose; Galeotti, Loriano; Johannesen, Lars; Strauss, David G; Scully, Christopher G
Performance of ECG beat detectors is traditionally assessed over long intervals (e.g., 30 min), but incorrect detections within a short interval (e.g., 10 s) alone may cause incorrect (i.e., missed + false) heart rate limit alarms (tachycardia and bradycardia). We propose a novel performance metric based on the distribution of incorrect beat detections over a short interval and assess its relationship with incorrect heart rate limit alarm rates. Six ECG beat detectors were assessed using performance metrics over a long interval (sensitivity and positive predictive value over 30 min) and a short interval (area under the empirical cumulative distribution function (AUecdf) of short-interval (i.e., 10 s) sensitivity and positive predictive value) on two ECG databases. False heart rate limit and asystole alarm rates calculated using a third ECG database were then correlated (Spearman's rank correlation) with each calculated performance metric. False alarm rates correlated with sensitivity calculated over the long interval (i.e., 30 min) (ρ=-0.8 and p<0.05) and with AUecdf for sensitivity (ρ=0.9 and p<0.05) in all assessed ECG databases. Sensitivity over 30 min identified the two detectors with the lowest false alarm rates, while AUecdf for sensitivity provided the further information needed to also identify the two beat detectors with the highest false alarm rates, which were inseparable using sensitivity over 30 min alone. Short-interval performance metrics can provide insight into the potential of a beat detector to generate incorrect heart rate limit alarms. Published by Elsevier Inc.
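The short-interval metric can be sketched as follows (one reading of the abstract: compute sensitivity per 10 s window, then take the area under the empirical CDF of those per-window values over [0, 1]; the grid integration and the toy window values are assumptions):

```python
def auecdf(window_sensitivities, steps=1000):
    """Area under the empirical CDF of per-window sensitivities on [0, 1].
    Mass at low sensitivities inflates the area, so a larger AUecdf flags a
    detector whose missed beats cluster in short bursts (the kind that can
    trigger incorrect heart rate limit alarms) even when its long-interval
    sensitivity looks fine."""
    n = len(window_sensitivities)
    area = 0.0
    for i in range(steps):
        x = i / steps                       # grid over [0, 1)
        ecdf_x = sum(v <= x for v in window_sensitivities) / n
        area += ecdf_x / steps              # Riemann sum of the step function
    return area

steady = auecdf([0.98, 0.97, 0.99, 0.98])  # misses spread evenly: small area
bursty = auecdf([1.00, 1.00, 0.30, 1.00])  # one bad 10 s window: larger area
```

Note that the two toy detectors have similar overall sensitivity, yet the bursty one gets a clearly larger AUecdf, which is the separation the long-interval metric cannot provide.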
Role of working memory and lexical knowledge in perceptual restoration of interrupted speech.
Nagaraj, Naveen K; Magimairaj, Beula M
2017-12-01
The role of working memory (WM) capacity and lexical knowledge in perceptual restoration (PR) of missing speech was investigated using the interrupted speech perception paradigm. Speech identification ability, which indexed PR, was measured using low-context sentences periodically interrupted at 1.5 Hz. PR was measured for silent gated, low-frequency speech noise filled, and low-frequency fine-structure and envelope filled interrupted conditions. WM capacity was measured using verbal and visuospatial span tasks. Lexical knowledge was assessed using both receptive vocabulary and meaning from context tests. Results showed that PR was better for speech noise filled condition than other conditions tested. Both receptive vocabulary and verbal WM capacity explained unique variance in PR for the speech noise filled condition, but were unrelated to performance in the silent gated condition. It was only receptive vocabulary that uniquely predicted PR for fine-structure and envelope filled conditions. These findings suggest that the contribution of lexical knowledge and verbal WM during PR depends crucially on the information content that replaced the silent intervals. When perceptual continuity was partially restored by filler speech noise, both lexical knowledge and verbal WM capacity facilitated PR. Importantly, for fine-structure and envelope filled interrupted conditions, lexical knowledge was crucial for PR.
Mattos, A Z; Mattos, A A
Many different non-invasive methods have been studied for staging liver fibrosis. The objective of this study was to verify whether transient elastography is superior to the aspartate aminotransferase to platelet ratio index for staging fibrosis in patients with chronic hepatitis C. A systematic review with meta-analysis of studies that evaluated both non-invasive tests and used biopsy as the reference standard was performed. A random-effects model was used, anticipating heterogeneity among studies. The diagnostic odds ratio was the main effect measure, and summary receiver operating characteristic curves were created. A sensitivity analysis was planned, in which the meta-analysis would be repeated excluding each study in turn. Eight studies were included in the meta-analysis. Regarding the prediction of significant fibrosis, transient elastography and the aspartate aminotransferase to platelet ratio index had diagnostic odds ratios of 11.70 (95% confidence interval = 7.13-19.21) and 8.56 (95% confidence interval = 4.90-14.94), respectively. Concerning the prediction of cirrhosis, transient elastography and the aspartate aminotransferase to platelet ratio index had diagnostic odds ratios of 66.49 (95% confidence interval = 23.71-186.48) and 7.47 (95% confidence interval = 4.88-11.43), respectively. In conclusion, there was no evidence of significant superiority of transient elastography over the aspartate aminotransferase to platelet ratio index regarding the prediction of significant fibrosis, but the former proved to be better than the latter concerning the prediction of cirrhosis.
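The summary effect measure used above is easy to compute from a single study's 2×2 table. A minimal sketch with invented counts (not data from the meta-analysis), using the standard log-scale approximate confidence interval:

```python
import math

def diagnostic_odds_ratio(tp, fp, fn, tn, z=1.96):
    """Diagnostic odds ratio (TP*TN)/(FP*FN) with an approximate 95%
    confidence interval computed on the log scale."""
    dor = (tp * tn) / (fp * fn)
    # Standard error of log(DOR) from the four cell counts.
    se_log = math.sqrt(1 / tp + 1 / fp + 1 / fn + 1 / tn)
    lo = math.exp(math.log(dor) - z * se_log)
    hi = math.exp(math.log(dor) + z * se_log)
    return dor, lo, hi

# E.g. 90 true positives, 20 false positives, 10 false negatives, 80 true
# negatives (hypothetical counts):
dor, lo, hi = diagnostic_odds_ratio(tp=90, fp=20, fn=10, tn=80)
```

A meta-analysis then pools the per-study log(DOR) values, here under a random-effects model to absorb between-study heterogeneity.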
Modeling Relationships Between Flight Crew Demographics and Perceptions of Interval Management
NASA Technical Reports Server (NTRS)
Remy, Benjamin; Wilson, Sara R.
2016-01-01
The Interval Management Alternative Clearances (IMAC) human-in-the-loop simulation experiment was conducted to assess interval management system performance and participants' acceptability and workload while performing three interval management clearance types. Twenty-four subject pilots and eight subject controllers flew ten high-density arrival scenarios into Denver International Airport during two weeks of data collection. This analysis examined the possible relationships between subject pilot demographics on reported perceptions of interval management in IMAC. Multiple linear regression models were created with a new software tool to predict subject pilot questionnaire item responses from demographic information. General patterns were noted across models that may indicate flight crew demographics influence perceptions of interval management.
Valdez-Jasso, Daniela; Bia, Daniel; Zócalo, Yanina; Armentano, Ricardo L.; Haider, Mansoor A.; Olufsen, Mette S.
2013-01-01
A better understanding of the biomechanical properties of the arterial wall provides important insight into arterial vascular biology under normal (healthy) and pathological conditions. This insight has the potential to improve tracking of disease progression and to aid in vascular graft design and implementation. In this study, we use linear and nonlinear viscoelastic models to predict biomechanical properties of the thoracic descending aorta and the carotid artery under ex vivo and in vivo conditions in ovine and human arteries. Models analyzed include a four-parameter (linear) Kelvin viscoelastic model and two five-parameter nonlinear viscoelastic models (an arctangent and a sigmoid model) that relate changes in arterial blood pressure to the vessel cross-sectional area (via estimation of vessel strain). These models were developed using the framework of Quasilinear Viscoelasticity (QLV) theory and were validated using measurements from the thoracic descending aorta and the carotid artery obtained from human and ovine arteries. In vivo measurements were obtained from ten ovine aortas and ten human carotid arteries. Ex vivo measurements (from both locations) were made in eleven male Merino sheep. Biomechanical properties were obtained through constrained estimation of model parameters. To further investigate the parameter estimates we computed standard errors and confidence intervals, and we used analysis of variance to compare results within and between groups. Overall, our results indicate that optimal model selection depends on the arterial type. Results showed that for the thoracic descending aorta (under both experimental conditions) the best predictions were obtained with the nonlinear sigmoid model, while in the carotid artery under healthy physiological pressure loading, nonlinear stiffening with increasing pressure is negligible; consequently, the linear (Kelvin) viscoelastic model better describes the pressure-area dynamics in this vessel. 
Results comparing biomechanical properties show that the Kelvin and sigmoid models were able to predict the zero-pressure vessel radius; that under ex vivo conditions vessels are more rigid, and comparatively, that the carotid artery is stiffer than the thoracic descending aorta; and that the viscoelastic gain and relaxation parameters do not differ significantly between vessels or experimental conditions. In conclusion, our study demonstrates that the proposed models can predict pressure-area dynamics and that model parameters can be extracted for further interpretation of biomechanical properties. PMID:21203846
Characterization of minimal sequences associated with self-similar interval exchange maps
NASA Astrophysics Data System (ADS)
Cobo, Milton; Gutiérrez-Romo, Rodolfo; Maass, Alejandro
2018-04-01
The construction of affine interval exchange maps (IEMs) with wandering intervals that are semi-conjugate to a given self-similar IEM is strongly related to the existence of the so-called minimal sequences associated with local potentials, which are certain elements of the substitution subshift arising from the given IEM. In this article, under the condition called unique representation property, we characterize such minimal sequences for potentials coming from non-real eigenvalues of the substitution matrix. We also give conditions on the slopes of the affine extensions of a self-similar IEM that determine whether it exhibits a wandering interval or not.
Ma, Heng; Yang, Jun; Liu, Jing; Ge, Lan; An, Jing; Tang, Qing; Li, Han; Zhang, Yu; Chen, David; Wang, Yong; Liu, Jiabin; Liang, Zhigang; Lin, Kai; Jin, Lixin; Bi, Xiaoming; Li, Kuncheng; Li, Debiao
2012-04-15
Myocardial perfusion magnetic resonance imaging (MRI) with sliding-window conjugate-gradient highly constrained back-projection reconstruction (SW-CG-HYPR) allows whole left ventricular coverage, improved temporal and spatial resolution and signal/noise ratio, and reduced cardiac motion-related image artifacts. The accuracy of this technique for detecting coronary artery disease (CAD) has not been determined in a large number of patients. We prospectively evaluated the diagnostic performance of myocardial perfusion MRI with SW-CG-HYPR in patients with suspected CAD. A total of 50 consecutive patients who were scheduled for coronary angiography with suspected CAD underwent myocardial perfusion MRI with SW-CG-HYPR at 3.0 T. The perfusion defects were interpreted qualitatively by 2 blinded observers and were correlated with x-ray angiographic stenoses ≥50%. The prevalence of CAD was 56%. In the per-patient analysis, the sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of SW-CG-HYPR were 96% (95% confidence interval 82% to 100%), 82% (95% confidence interval 60% to 95%), 87% (95% confidence interval 70% to 96%), 95% (95% confidence interval 74% to 100%), and 90% (95% confidence interval 82% to 98%), respectively. In the per-vessel analysis, the corresponding values were 98% (95% confidence interval 91% to 100%), 89% (95% confidence interval 80% to 94%), 86% (95% confidence interval 76% to 93%), 99% (95% confidence interval 93% to 100%), and 93% (95% confidence interval 89% to 97%), respectively. In conclusion, myocardial perfusion MRI using SW-CG-HYPR allows whole left ventricular coverage and high resolution and has high diagnostic accuracy in patients with suspected CAD. Copyright © 2012 Elsevier Inc. All rights reserved.
A sequential solution for anisotropic total variation image denoising with interval constraints
NASA Astrophysics Data System (ADS)
Xu, Jingyan; Noo, Frédéric
2017-09-01
We show that two problems involving the anisotropic total variation (TV) and interval constraints on the unknown variables admit, under some conditions, a simple sequential solution. Problem 1 is a constrained TV penalized image denoising problem; problem 2 is a constrained fused lasso signal approximator. The sequential solution entails finding first the solution to the unconstrained problem, and then applying a thresholding to satisfy the constraints. If the interval constraints are uniform, this sequential solution solves problem 1. If the interval constraints furthermore contain zero, the sequential solution solves problem 2. Here uniform interval constraints refer to all unknowns being constrained to the same interval. A typical example of application is image denoising in x-ray CT, where the image intensities are non-negative as they physically represent linear attenuation coefficient in the patient body. Our results are simple yet seem unknown; we establish them using the Karush-Kuhn-Tucker conditions for constrained convex optimization.
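The two-step procedure described above can be illustrated on a toy 1-D signal. This is a sketch under assumed values (signal `y`, penalty `lam`, interval `[lo, hi]`), using a generic numerical solver rather than a dedicated TV algorithm: step 1 solves the unconstrained anisotropic-TV denoising problem, step 2 applies the thresholding (clipping) that, per the paper's result, recovers the constrained solution when the interval constraints are uniform.

```python
import numpy as np
from scipy.optimize import minimize

def tv_objective(x, y, lam):
    # 0.5*||x - y||^2 + lam * sum_i |x[i+1] - x[i]|  (anisotropic TV in 1-D)
    return 0.5 * np.sum((x - y) ** 2) + lam * np.sum(np.abs(np.diff(x)))

y = np.array([0.1, 1.4, 1.2, -0.3, 0.0])   # noisy toy signal (assumed)
lam, lo, hi = 0.3, 0.0, 1.0                # penalty and uniform interval (assumed)

# Step 1: unconstrained TV denoising. Nelder-Mead is fine for this tiny
# nonsmooth problem; real applications would use a dedicated TV solver.
res = minimize(tv_objective, y, args=(y, lam), method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 20000})
x_unconstrained = res.x

# Step 2: thresholding -- clip every unknown into the common interval.
x_sequential = np.clip(x_unconstrained, lo, hi)
# x_sequential now satisfies the interval constraints by construction.
```

The interval here is uniform (every entry constrained to [0, 1]), which is exactly the condition under which the sequential solution solves the constrained denoising problem.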
Chandran, S; Parker, F; Lontos, S; Vaughan, R; Efthymiou, M
2015-12-01
Polyps identified at colonoscopy are predominantly diminutive (<5 mm) with a small risk (<1%) of high-grade dysplasia or carcinoma; however, the cost of histological assessment is substantial. The aim of this study was to determine whether prediction of colonoscopy surveillance intervals based on real-time endoscopic assessment of polyp histology is accurate and cost effective. A prospective cohort study was conducted across a tertiary care and private community hospital. Ninety-four patients underwent colonoscopy and polypectomy of diminutive (≤5 mm) polyps from October 2012 to July 2013, yielding a total of 159 polyps. Polyps were examined and classified according to the Sano-Emura classification system. The endoscopic assessment (optical diagnosis) of polyp histology was used to predict appropriate colonoscopy surveillance intervals. The main outcome measure was the accuracy of optical diagnosis of diminutive colonic polyps against the gold standard of histological assessment. Optical diagnosis was correct in 105/108 (97.2%) adenomas. This yielded a sensitivity, specificity and positive and negative predictive values (with 95% CI) of 97.2% (92.1-99.4%), 78.4% (64.7-88.7%), 90.5% (83.7-95.2%) and 93% (80.9-98.5%) respectively. Ninety-two (98%) patients were correctly triaged to their repeat surveillance colonoscopy. Based on these findings, a cut and discard approach would have resulted in a saving of $319.77 per patient. Endoscopists within a tertiary care setting can accurately predict diminutive polyp histology and confer an appropriate surveillance interval with an associated financial benefit to the healthcare system. However, limitations to its application in the community setting exist, which may improve with further training and high-definition colonoscopes. © 2015 Royal Australasian College of Physicians.
Modified 30-second Sit to Stand test predicts falls in a cohort of institutionalized older veterans
Applebaum, Eva V; Breton, Dominic; Feng, Zhuo Wei; Ta, An-Tchi; Walsh, Kayley; Chassé, Kathleen; Robbins, Shawn M
2017-01-01
Physical function performance tests, including sit to stand tests and Timed Up and Go, assess the functional capacity of older adults. Their ability to predict falls warrants further investigation. The objective was to determine whether a modified 30-second Sit to Stand test that allowed upper extremity use and Timed Up and Go test predicted falls in institutionalized Veterans. Fifty-three older adult Veterans (mean age = 91 years, 49 men) residing in a long-term care hospital completed modified 30-second Sit to Stand and Timed Up and Go tests. The number of falls over one year was collected. The ability of modified 30-second Sit to Stand or Timed Up and Go to predict whether participants had fallen was examined using logistic regression. The ability of these tests to predict the number of falls was examined using negative binomial regression. Both analyses controlled for age, history of falls, cognition, and comorbidities. The modified 30-second Sit to Stand was significantly (p < 0.05) related to whether participants fell (odds ratio = 0.75, 95% confidence interval = 0.58, 0.97) and the number of falls (incidence rate ratio = 0.82, 95% confidence interval = 0.68, 0.98); decreased repetitions were associated with increased number of falls. Timed Up and Go was not significantly (p > 0.05) related to whether participants fell (odds ratio = 1.03, 95% confidence interval = 0.96, 1.10) or the number of falls (incidence rate ratio = 1.01, 95% confidence interval = 0.98, 1.05). The modified 30-second Sit to Stand that allowed upper extremity use offers an alternative method to screen for fall risk in older adults in long-term care. PMID:28464024
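The odds ratios and Wald confidence intervals reported in abstracts like this one come directly from the logistic regression coefficient and its standard error. As a sketch, the coefficient and standard error below are back-calculated (approximately) from the reported OR = 0.75 (95% CI 0.58-0.97), not taken from the study's data:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and 95% Wald confidence interval from a logistic
    regression coefficient and its standard error."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# beta and se reconstructed from the published OR and CI for the
# modified 30-second Sit to Stand (hypothetical reconstruction).
beta = math.log(0.75)
se = (math.log(0.97) - math.log(0.58)) / (2 * 1.96)

or_, lo, hi = odds_ratio_ci(beta, se)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # 0.75 0.58 0.97
```

An OR below 1 with a CI excluding 1, as here, is what makes the Sit to Stand result significant while the Timed Up and Go result (CI 0.96-1.10, spanning 1) is not.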
Salminen, Marika; Vahlberg, Tero; Räihä, Ismo; Niskanen, Leo; Kivelä, Sirkka-Liisa; Irjala, Kerttu
2015-05-01
To analyze whether sex hormone levels predict the incidence of type 2 diabetes among elderly Finnish men. This was a prospective population-based study, with a 9-year follow-up period. The study population in the municipality of Lieto, Finland, consisted of elderly (age ≥64 years) men free of type 2 diabetes at baseline in 1998-1999 (n = 430). Body mass index and cardiovascular disease-adjusted hazard ratios and their 95% confidence intervals for type 2 diabetes predicted by testosterone, free testosterone, sex hormone-binding globulin, luteinizing hormone, and testosterone/luteinizing hormone were estimated. A total of 30 new cases of type 2 diabetes developed during the follow-up period. After adjustment, only higher levels of testosterone (hazard ratio for one-unit increase 0.93, 95% confidence interval 0.87-0.99, P = 0.020) and free testosterone (hazard ratio for 10-unit increase 0.96, 95% confidence interval 0.91-1.00, P = 0.044) were associated with a lower risk of incident type 2 diabetes during the follow-up. These associations (0.94, 95% confidence interval 0.87-1.00, P = 0.050 and 0.95, 95% confidence interval 0.90-1.00, P = 0.035, respectively) persisted even after additional adjustment for sex hormone-binding globulin. Higher levels of testosterone and free testosterone independently predicted a reduced risk of type 2 diabetes in the elderly men. © 2014 Japan Geriatrics Society.
Geophysical Anomalies and Earthquake Prediction
NASA Astrophysics Data System (ADS)
Jackson, D. D.
2008-12-01
Finding anomalies is easy. Predicting earthquakes convincingly from such anomalies is far from easy. Why? Why have so many beautiful geophysical anomalies not led to successful prediction strategies? What is earthquake prediction? By my definition it is convincing information that an earthquake of specified size is temporarily much more likely than usual in a specific region for a specified time interval. We know a lot about normal earthquake behavior, including locations where earthquake rates are higher than elsewhere, with estimable rates and size distributions. We know that earthquakes have power law size distributions over large areas, that they cluster in time and space, and that aftershocks follow with power-law dependence on time. These relationships justify prudent protective measures and scientific investigation. Earthquake prediction would justify exceptional temporary measures well beyond those normal prudent actions. Convincing earthquake prediction would result from methods that have demonstrated many successes with few false alarms. Predicting earthquakes convincingly is difficult for several profound reasons. First, earthquakes start in tiny volumes at inaccessible depth. The power law size dependence means that tiny unobservable ones are frequent almost everywhere and occasionally grow to larger size. Thus prediction of important earthquakes is not about nucleation, but about identifying the conditions for growth. Second, earthquakes are complex. They derive their energy from stress, which is perniciously hard to estimate or model because it is nearly singular at the margins of cracks and faults. Physical properties vary from place to place, so the preparatory processes certainly vary as well. Thus establishing the needed track record for validation is very difficult, especially for large events with long recurrence intervals at any one location. Third, the anomalies are generally complex as well.
Electromagnetic anomalies in particular require some understanding of their sources and the physical properties of the crust, which also vary from place to place and time to time. Anomalies are not necessarily due to stress or earthquake preparation, and separating the extraneous ones is a problem as daunting as understanding earthquake behavior itself. Fourth, the associations presented between anomalies and earthquakes are generally based on selected data. Validating a proposed association requires complete data on the earthquake record and the geophysical measurements over a large area and time, followed by prospective testing which allows no adjustment of parameters, criteria, etc. The Collaboratory for Study of Earthquake Predictability (CSEP) is dedicated to providing such prospective testing. Any serious proposal for prediction research should deal with the problems above, and anticipate the huge investment in time required to test hypotheses.
Fixed-interval performance and self-control in children.
Darcheville, J C; Rivière, V; Wearden, J H
1992-01-01
Operant responses of 16 children (mean age 6 years and 1 month) were reinforced according to different fixed-interval schedules (with interreinforcer intervals of 20, 30, or 40 s) in which the reinforcers were either 20-s or 40-s presentations of a cartoon. In another procedure, they received training on a self-control paradigm in which both reinforcer delay (0.5 s or 40 s) and reinforcer duration (20 s or 40 s of cartoons) varied, and subjects were offered a choice between various combinations of delay and duration. Individual differences in behavior under the self-control procedure were precisely mirrored by individual differences under the fixed-interval schedule. Children who chose the smaller immediate reinforcer on the self-control procedure (impulsive) produced short postreinforcement pauses and high response rates in the fixed-interval conditions, and both measures changed little with changes in fixed-interval value. Conversely, children who chose the larger delayed reinforcer in the self-control condition (the self-controlled subjects) exhibited lower response rates and long postreinforcement pauses, which changed systematically with changes in the interval, in their fixed-interval performances. PMID:1573372
Tolleson, D R; Schafer, D W
2014-01-01
Monitoring the nutritional status of range cows is difficult. Near-infrared spectroscopy (NIRS) of feces has been used to predict diet quality in cattle. When fecal NIRS is coupled with decision support software such as the Nutritional Balance Analyzer (NUTBAL PRO), nutritional status and animal performance can be monitored. Approximately 120 Hereford and 90 CGC composite (50% Red Angus, 25% Tarentaise, and 25% Charolais) cows grazing in a single herd were used in a study to determine the ability of fecal NIRS and NUTBAL PRO to project BCS (1 = thin and 9 = fat) under commercial scale rangeland conditions in central Arizona. Cattle were rotated across the 31,000 ha allotment at 10 to 20 d intervals. Cattle BCS and fecal samples (approximately 500 g) composited from 5 to 10 cows were collected in the pasture approximately monthly at the midpoint of each grazing period. Samples were frozen and later analyzed by NIRS for prediction of diet crude protein (CP) and digestible organic matter (DOM). Along with fecal NIRS predicted diet quality, animal breed type, reproductive status, and environmental conditions were input to the software for each fecal sampling and BCS date. Three different evaluations were performed. First, fecal NIRS and NUTBAL PRO-derived BCS was projected forward from each sampling as if it were a "one-time only" measurement. Second, BCS was derived from the average predicted weight change between 2 sampling dates for a given period. Third, inputs to the model were adjusted to better represent local animals and conditions. Fecal NIRS predicted diet quality varied from a minimum of approximately 5% CP and 57% DOM in winter to a maximum of approximately 11% CP and 60% DOM in summer. Diet quality correlated with observed seasonal changes and precipitation events. In evaluation 1, differences between observed and projected BCS did not differ (P > 0.1) between breed types, but ranged from 0.1 to 1.1 BCS in Herefords and 0.0 to 0.9 in CGC.
In evaluation 2, differences between observed and projected BCS did not differ (P > 0.1) between breed types, but ranged from 0.00 to 0.46 in Herefords and 0.00 to 0.67 in CGC. In evaluation 3, the range of differences between observed and projected BCS was 0.04 to 0.28. The greatest difference in projected versus observed BCS occurred during periods of lowest diet quality. Body condition was predicted accurately enough to be useful in monitoring the nutrition of range beef cows under the conditions of this study.
Cardiovascular performance of adult breeding sows fails to obey allometric scaling laws.
van Essen, G J; Vernooij, J C M; Heesterbeek, J A P; Anjema, D; Merkus, D; Duncker, D J
2011-02-01
In view of the remarkable decrease of the relative heart weight (HW) and the relative blood volume in growing pigs, we investigated whether HW, cardiac output (CO), and stroke volume (SV) of modern growing pigs are proportional to BW, as predicted by allometric scaling laws: HW (or CO or SV) = a·BW(b), in which a and b are constants, and constant b is a multiple of 0.25 (quarter-power scaling law). Specifically, we tested the hypothesis that both HW and CO scale with BW to the power of 0.75 (HW or CO = a·BW(0.75)) and SV scales with BW to the power of 1.00 (SV = a·BW(1.0)). For this purpose, 2 groups of pigs (group 1, consisting of 157 pigs of 50 ± 1 kg; group 2, consisting of 45 pigs of 268 ± 18 kg) were surgically instrumented with a flow probe or a thermodilution catheter, under open-chest anesthetized conditions to measure CO and SV, after which HW was determined. The 95% confidence intervals of power-coefficient b for HW were 0.74 to 0.80, encompassing the predicted value of 0.75, suggesting that HW increased proportionally with BW, as predicted by the allometric scaling laws. In contrast, the 95% confidence intervals of power-coefficient b for CO and SV as measured with flow probes were 0.40 to 0.56 and 0.39 to 0.61, respectively, and values obtained with the thermodilution technique were 0.34 to 0.53 and 0.40 to 0.62, respectively. Thus, the 95% confidence limits failed to encompass the predicted values of b for CO and SV of 0.75 and 1.0, respectively. In conclusion, although adult breeding sows display normal heart growth, cardiac performance appears to be disproportionately low for BW. This raises concern regarding the health status of adult breeding sows.
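The power-coefficient b in an allometric law HW = a·BW^b is conventionally estimated by linear regression on the log-log scale, since log(HW) = log(a) + b·log(BW). A minimal sketch on noise-free synthetic data (the coefficient 0.005 and the weights are made up for illustration; the study's estimates came with regression confidence intervals):

```python
import numpy as np

# Synthetic allometric data generated with b = 0.75, the quarter-power
# prediction tested in the study (values are hypothetical).
bw = np.array([50., 100., 150., 200., 250., 300.])  # body weight, kg
hw = 0.005 * bw ** 0.75                             # heart weight, kg

# Fit log(HW) = log(a) + b * log(BW) by least squares.
b, log_a = np.polyfit(np.log(bw), np.log(hw), 1)
print(round(b, 3))  # 0.75
```

With real, noisy measurements the fitted b would differ from 0.75, and whether its 95% confidence interval encompasses 0.75 (as it did for HW, but not for CO or SV) is the scaling-law test reported above.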
A diabetes-predictive amino acid score and future cardiovascular disease.
Magnusson, Martin; Lewis, Gregory D; Ericson, Ulrika; Orho-Melander, Marju; Hedblad, Bo; Engström, Gunnar; Ostling, Gerd; Clish, Clary; Wang, Thomas J; Gerszten, Robert E; Melander, Olle
2013-07-01
We recently identified a metabolic signature of three amino acids (tyrosine, phenylalanine, and isoleucine) that strongly predicts diabetes development. As novel modifiable targets for intervention are needed to meet the expected increase of cardiovascular disease (CVD) caused by the diabetes epidemic, we investigated whether this diabetes-predictive amino acid score (DM-AA score) predicts development of CVD and its functional consequences. We performed a matched case-control study derived from the population-based Malmö Diet and Cancer Cardiovascular Cohort (MDC-CC), all free of CVD. During 12 years of follow-up, 253 individuals developed CVD and were matched for age, sex, and Framingham risk score with 253 controls. Amino acids were profiled in baseline plasma samples, using liquid chromatography-tandem mass spectrometry, and relationship to incident CVD was assessed using conditional logistic regression. We further examined whether the amino acid score also correlated with anatomical [intima-media thickness (IMT) and plaque formation] and functional (exercise-induced myocardial ischaemia) abnormalities. Compared with the lowest quartile of the DM-AA score, the odds ratio (95% confidence interval) for incident CVD in subjects belonging to quartiles 2, 3, and 4 was 1.27 (0.72-2.22), 1.96 (1.07-3.60), and 2.20 (1.12-4.31) (Ptrend = 0.010), respectively, after multivariate adjustment. Increasing quartile of the DM-AA score was cross-sectionally related to carotid IMT (Ptrend = 0.037) and with the presence of at least one plaque larger than 10 mm(2) (Ptrend = 0.001). Compared with the lowest quartile of the DM-AA score, the odds ratio (95% confidence interval) for inducible ischaemia in subjects belonging to quartiles 2, 3, and 4 was 3.31 (1.05-10.4), 4.24 (1.36-13.3), and 4.86 (1.47-16.1) (Ptrend = 0.011), respectively. 
This study identifies branched-chain and aromatic amino acids as novel markers of CVD development and as an early link between diabetes and CVD susceptibility.
Predicting motion sickness during parabolic flight
NASA Technical Reports Server (NTRS)
Harm, Deborah L.; Schlegel, Todd T.
2002-01-01
BACKGROUND: There are large individual differences in susceptibility to motion sickness. Attempts to predict who will become motion sick have had limited success. In the present study, we examined gender differences in resting levels of salivary amylase and total protein, cardiac interbeat intervals (R-R intervals), and a sympathovagal index and evaluated their potential to correctly classify individuals into two motion sickness severity groups. METHODS: Sixteen subjects (10 men and 6 women) flew four sets of 10 parabolas aboard NASA's KC-135 aircraft. Saliva samples for amylase and total protein were collected preflight on the day of the flight and motion sickness symptoms were recorded during each parabola. Cardiovascular parameters were collected in the supine position 1-5 days before the flight. RESULTS: There were no significant gender differences in sickness severity or any of the other variables mentioned above. Discriminant analysis using salivary amylase, R-R intervals and the sympathovagal index produced a significant Wilks' lambda coefficient of 0.36, p=0.006. The analysis correctly classified 87% of the subjects into the none-mild sickness or the moderate-severe sickness group. CONCLUSIONS: The linear combination of resting levels of salivary amylase, high-frequency R-R interval levels, and a sympathovagal index may be useful in predicting motion sickness severity.
Serbin, Lisa A; Kingdon, Danielle; Ruttle, Paula L; Stack, Dale M
2015-11-01
Most theoretical models of developmental psychopathology involve a transactional, bidirectional relation between parenting and children's behavior problems. The present study utilized a cross-lagged panel, multiple interval design to model change in bidirectional relations between child and parent behavior across successive developmental periods. Two major categories of child behavior problems, internalizing and externalizing, and two aspects of parenting, positive (use of support and structure) and harsh discipline (use of physical punishment), were modeled across three time points spaced 3 years apart. Two successive developmental intervals, from approximately age 7.5 to 10.5 and from 10.5 to 13.5, were included. Mother-child dyads (N = 138; 65 boys) from a lower income longitudinal sample of families participated, with mothers rating their own parenting behavior on standardized measures and teachers reporting on the child's behavior. Results revealed different types of reciprocal relations between specific aspects of child and parent behavior, with internalizing problems predicting an increase in positive parenting over time, which subsequently led to a reduction in internalizing problems across the successive 3-year interval. In contrast, externalizing predicted reduced levels of positive parenting in a reciprocal sequence that extended across two successive intervals and predicted increased levels of externalizing over time. Implications for prevention and early intervention are discussed.
Hong, Jie; Wang, Yinding; McDermott, Suzanne; Cai, Bo; Aelion, C Marjorie; Lead, Jamie
2016-05-01
Intellectual disability (ID) and cerebral palsy (CP) are serious neurodevelopment conditions and low birth weight (LBW) is correlated with both ID and CP. The actual causes and mechanisms for each of these child outcomes are not well understood. In this study, the relationship between bioaccessible metal concentrations in urban soil and these child conditions was investigated. A physiologically based extraction test (PBET) mimicking gastric and intestinal processes was applied to measure the bio-accessibility of four metals (cadmium (Cd), chromium (Cr), nickel (Ni), and lead (Pb)) in urban soil, and a Bayesian Kriging method was used to estimate metal concentrations in geocoded maternal residential sites. The results showed that bioaccessible metal concentrations of Cd, Ni, and Pb in the intestinal phase were statistically significantly associated with the child outcomes. Lead and nickel were associated with ID, lead and cadmium were associated with LBW, and cadmium was associated with CP. Total concentrations and stomach-phase concentrations were not significantly associated with any of the outcomes. For lead, an estimated threshold value was found that was statistically significant in predicting low birth weight. The change point test was statistically significant (p value = 0.045) at an intestine threshold level of 9.2 mg/kg (95% confidence interval 8.9-9.4, p value = 0.0016), which corresponds to 130.6 mg/kg of total Pb concentration in the soil. This is a narrow confidence interval for an important relationship. Published by Elsevier Ltd.
Transient dwarfism of soil fauna during the Paleocene–Eocene Thermal Maximum
Smith, Jon J.; Hasiotis, Stephen T.; Kraus, Mary J.; Woody, Daniel T.
2009-01-01
Soil organisms, as recorded by trace fossils in paleosols of the Willwood Formation, Wyoming, show significant body-size reductions and increased abundances during the Paleocene–Eocene Thermal Maximum (PETM). Paleobotanical, paleopedologic, and oxygen isotope studies indicate high temperatures during the PETM and sharp declines in precipitation compared with late Paleocene estimates. Insect and oligochaete burrows increase in abundance during the PETM, suggesting longer periods of soil development and improved drainage conditions. Crayfish burrows and molluscan body fossils, abundant below and above the PETM interval, are significantly less abundant during the PETM, likely because of drier floodplain conditions and lower water tables. Burrow diameters of the most abundant ichnofossils are 30–46% smaller within the PETM interval. As burrow size is a proxy for body size, significant reductions in burrow diameter suggest that their tracemakers were smaller bodied. Smaller body sizes may have resulted from higher subsurface temperatures, lower soil moisture conditions, or nutritionally deficient vegetation in the high-CO2 atmosphere inferred for the PETM. Smaller soil fauna co-occur with dwarf mammal taxa during the PETM; thus, a common forcing mechanism may have selected for small size in both above- and below-ground terrestrial communities. We predict that soil fauna have already shown reductions in size over the last 150 years of increased atmospheric CO2 and surface temperatures or that they will exhibit this pattern over the next century. We retrodict also that soil fauna across the Permian-Triassic and Triassic-Jurassic boundary events show significant size decreases because of similar forcing mechanisms driven by rapid global warming. PMID:19805060
The effects of dorsal bundle lesions on serial and trace conditioning.
Tsaltas, E; Preston, G C; Gray, J A
1983-12-01
The performance of rats with neurotoxic lesions of the dorsal ascending noradrenergic bundle (DB) was compared with that of sham-operated control animals under two behavioural conditions. Animals with DB lesions were slower than controls to acquire a classically-conditioned emotional response (conditioned suppression) with a trace interval interposed between the clicker conditioned stimulus (CS) and the shock reinforcer. However, if the latter half of the trace interval was filled by a second stimulus, a light, the DB-lesioned animals acquired conditioned suppression to the clicker faster than did controls under the same conditions. These results are discussed in terms of the attentional theory of DB function.
Timing matters: sonar call groups facilitate target localization in bats.
Kothari, Ninad B; Wohlgemuth, Melville J; Hulgard, Katrine; Surlykke, Annemarie; Moss, Cynthia F
2014-01-01
To successfully negotiate a cluttered environment, an echolocating bat must control the timing of motor behaviors in response to dynamic sensory information. Here we detail the big brown bat's adaptive temporal control over sonar call production for tracking prey, moving predictably or unpredictably, under different experimental conditions. We studied the adaptive control of vocal-motor behaviors in free-flying big brown bats, Eptesicus fuscus, as they captured tethered and free-flying insects, in open and cluttered environments. We also studied adaptive sonar behavior in bats trained to track moving targets from a resting position. In each of these experiments, bats adjusted the features of their calls to separate target and clutter. Under many task conditions, flying bats produced prominent sonar sound groups identified as clusters of echolocation pulses with relatively stable intervals, surrounded by longer pulse intervals. In experiments where bats tracked approaching targets from a resting position, bats also produced sonar sound groups, and the prevalence of these sonar sound groups increased when motion of the target was unpredictable. We hypothesize that sonar sound groups produced during flight, and the sonar call doublets produced by a bat tracking a target from a resting position, help the animal resolve dynamic target location and represent the echo scene in greater detail. Collectively, our data reveal adaptive temporal control over sonar call production that allows the bat to negotiate a complex and dynamic environment.
Ionization effects and linear stability in a coaxial plasma device
NASA Astrophysics Data System (ADS)
Kurt, Erol; Kurt, Hilal; Bayhan, Ulku
2009-03-01
A 2-D computer simulation of a coaxial plasma device, based on the conservation equations of electrons, ions, and excited atoms together with the Poisson equation for a plasma gun, is carried out. Characteristics of the plasma focus device (PF), such as the critical wave numbers a_c and voltages U_c, are estimated for various pressures P in order to satisfy the necessary conditions for traveling particle densities (i.e., plasma patterns) via a linear analysis. Oscillatory solutions are characterized by a nonzero imaginary part of the growth rate Im(σ) for all cases. The model also predicts the minimal voltage ranges of the system for certain pressure intervals.
An Efficient Pattern Mining Approach for Event Detection in Multivariate Temporal Data
Batal, Iyad; Cooper, Gregory; Fradkin, Dmitriy; Harrison, James; Moerchen, Fabian; Hauskrecht, Milos
2015-01-01
This work proposes a pattern mining approach to learn event detection models from complex multivariate temporal data, such as electronic health records. We present Recent Temporal Pattern mining, a novel approach for efficiently finding predictive patterns for event detection problems. This approach first converts the time series data into time-interval sequences of temporal abstractions. It then constructs more complex time-interval patterns backward in time using temporal operators. We also present the Minimal Predictive Recent Temporal Patterns framework for selecting a small set of predictive and non-spurious patterns. We apply our methods for predicting adverse medical events in real-world clinical data. The results demonstrate the benefits of our methods in learning accurate event detection models, which is a key step for developing intelligent patient monitoring and decision support systems. PMID:26752800
Richardson, Philip; Greenslade, Jaimi; Shanmugathasan, Sulochana; Doucet, Katherine; Widdicombe, Neil; Chu, Kevin; Brown, Anthony
2015-01-01
CARING is a screening tool developed to identify patients who have a high likelihood of death in 1 year. This study sought to validate a modified CARING tool (termed PREDICT) using a population of patients presenting to the Emergency Department. In total, 1000 patients aged over 55 years who were admitted to hospital via the Emergency Department between January and June 2009 were eligible for inclusion in this study. Data on the six prognostic indicators comprising PREDICT were obtained retrospectively from patient records. One-year mortality data were obtained from the State Death Registry. Weights were applied to each PREDICT criterion, and its final score ranged from 0 to 44. Receiver operating characteristic analyses and diagnostic accuracy statistics were used to assess the accuracy of PREDICT in identifying 1-year mortality. The sample comprised 976 patients with a median (interquartile range) age of 71 years (62-81 years) and a 1-year mortality of 23.4%. In total, 50% met ≥1 PREDICT criterion, with a 1-year mortality of 40.4%. Receiver operating characteristic analysis gave an area under the curve of 0.86 (95% confidence interval: 0.83-0.89). Using a cut-off of 13 points, PREDICT had a 95.3% (95% confidence interval: 93.6-96.6) specificity and 53.9% (95% confidence interval: 47.5-60.3) sensitivity for predicting 1-year mortality. PREDICT was simpler than the CARING criteria and identified 158 patients per 1000 admitted who could benefit from advance care planning. PREDICT was successfully applied to the Australian healthcare system with findings similar to the original CARING study conducted in the United States. This tool could improve end-of-life care by identifying who should have advance care planning or an advance healthcare directive. © The Author(s) 2014.
Pezze, M A; Marshall, H J; Cassaday, H J
2016-08-01
We have previously reported that, in an appetitively motivated procedure, systemic treatment with the dopamine (DA) D1 receptor agonist SKF81297 (0.4 and 0.8 mg/kg) depressed acquisition at a 2-s inter-stimulus interval (ISI) suitable for detecting trace conditioning impairment. However, since DA is involved in reinforcement processes, the generality of effects across appetitively and aversively motivated trace conditioning procedures cannot be assumed. The present study tested the effects of SKF81297 (0.4 and 0.8 mg/kg) in an established conditioned emotional response (CER) procedure. Trace-dependent conditioning was clearly shown in two experiments: while conditioning was relatively strong at a 3-s ISI, it was attenuated at a 30-s ISI. This was shown after two (Experiment 1) or four (Experiment 2) conditioning trials conducted in, as far as possible, the same CER procedure. Contrary to prediction, in neither experiment was there any indication that trace conditioning was attenuated by treatment with 0.4 or 0.8 mg/kg SKF81297. In the same rats, locomotor activity was significantly enhanced at the 0.8 mg/kg dose of SKF81297. These results suggest that procedural details of the trace conditioning variant in use are an important determinant of the profile of dopaminergic modulation.
Finding factors that predict treatment-resistant depression: Results of a cohort study.
Cepeda, M Soledad; Reps, Jenna; Ryan, Patrick
2018-05-22
Treatment for depressive disorders often requires subsequent interventions. Patients who do not respond to antidepressants have treatment-resistant depression (TRD). Predicting who will develop TRD may help healthcare providers make more effective treatment decisions. We sought to identify factors that predict TRD in a real-world setting using claims databases. A retrospective cohort study was conducted in a US claims database of adult subjects with newly diagnosed and treated depression with no mania, dementia, and psychosis. The index date was the date of antidepressant dispensing. The outcome was TRD, defined as having at least three distinct antidepressants or one antidepressant and one antipsychotic within 1 year after the index date. Predictors were age, gender, medical conditions, medications, and procedures 1 year before the index date. Of 230,801 included patients, 10.4% developed TRD within 1 year. TRD patients at baseline were younger; 10.87% were between 18 and 19 years old versus 7.64% in the no-TRD group, risk ratio (RR) = 1.42 (95% confidence interval [CI] 1.37-1.48). TRD patients were more likely to have an anxiety disorder at baseline than non-TRD patients, RR = 1.38 (95% CI 1.35-1.14). Fatigue had the highest RR, at 3.68 (95% CI 3.18-4.25). TRD patients had substance use disorders, psychiatric conditions, insomnia, and pain more often at baseline than non-TRD patients. Ten percent of subjects newly diagnosed and treated for depression developed TRD within a year. They were younger and suffered more frequently from fatigue, substance use disorders, anxiety, psychiatric conditions, insomnia, and pain than non-TRD patients. © 2018 Wiley Periodicals, Inc.
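Risk ratios of the kind quoted in this abstract come from 2x2 counts, with a confidence interval from the standard Katz log method. The sketch below illustrates the calculation; the counts are invented for illustration and are not the study's data:

```python
import math

def risk_ratio(a, n1, c, n2):
    """Risk ratio of exposed vs unexposed with a 95% CI (Katz log method).
    a events out of n1 exposed subjects; c events out of n2 unexposed."""
    rr = (a / n1) / (c / n2)
    # standard error of log(RR)
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)
    half = 1.96 * se
    return rr, math.exp(math.log(rr) - half), math.exp(math.log(rr) + half)

# hypothetical counts chosen only to produce an RR near the reported 1.42
rr, lo, hi = risk_ratio(260, 2400, 17000, 223000)
```

The CI is computed on the log scale and exponentiated back, which is why the interval is asymmetric around the point estimate.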
Ingham, Roger J.; Bothe, Anne K.; Wang, Yuedong; Purkhiser, Krystal; New, Anneliese
2012-01-01
Purpose To relate changes in four variables previously defined as characteristic of normally fluent speech to changes in phonatory behavior during oral reading by persons who stutter (PWS) and normally fluent controls under multiple fluency-inducing (FI) conditions. Method Twelve PWS and 12 controls each completed 4 ABA experiments. During A phases, participants read normally. B phases were 4 different FI conditions: auditory masking, chorus reading, whispering, and rhythmic stimulation. Dependent variables were the durations of accelerometer-recorded phonated intervals; self-judged speech effort; and observer-judged stuttering frequency, speech rate, and speech naturalness. The method enabled a systematic replication of Ingham et al. (2009). Results All FI conditions resulted in decreased stuttering and decreases in the number of short phonated intervals, as compared with baseline conditions, but the only FI condition that satisfied all four characteristics of normally fluent speech was chorus reading. Increases in longer phonated intervals were associated with decreased stuttering but also with poorer naturalness and/or increased speech effort. Previous findings concerning the effects of FI conditions on speech naturalness and effort were replicated. Conclusions Measuring all relevant characteristics of normally fluent speech, in the context of treatments that aim to reduce the occurrence of short-duration PIs, may aid the search for an explanation of the nature of stuttering and may also maximize treatment outcomes for adults who stutter. PMID:22365886
NASA Astrophysics Data System (ADS)
van Horssen, Wim T.; Wang, Yandong; Cao, Guohua
2018-06-01
In this paper, it is shown how characteristic coordinates, or equivalently how the well-known formula of d'Alembert, can be used to solve initial-boundary value problems for wave equations on fixed, bounded intervals involving Robin type of boundary conditions with time-dependent coefficients. A Robin boundary condition is a condition that specifies a linear combination of the dependent variable and its first order space-derivative on a boundary of the interval. Analytical methods, such as the method of separation of variables (SOV) or the Laplace transform method, are not applicable to those types of problems. The obtained analytical results by applying the proposed method, are in complete agreement with those obtained by using the numerical, finite difference method. For problems with time-independent coefficients in the Robin boundary condition(s), the results of the proposed method also completely agree with those as for instance obtained by the method of separation of variables, or by the finite difference method.
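In symbols, the class of problems described is of the following form; the symbol names below (c, L, φ, ψ, α, β) are illustrative choices, not notation taken from the paper:

```latex
% wave equation on a fixed, bounded interval with initial data
u_{tt} = c^2 u_{xx}, \qquad 0 < x < L,\ t > 0,
\qquad u(x,0) = \varphi(x), \quad u_t(x,0) = \psi(x),

% d'Alembert's formula, built on the characteristic coordinates x \pm ct:
u(x,t) = \tfrac{1}{2}\bigl[\varphi(x-ct) + \varphi(x+ct)\bigr]
       + \tfrac{1}{2c}\int_{x-ct}^{x+ct} \psi(s)\,ds,

% Robin boundary condition with time-dependent coefficients at x = 0:
\alpha(t)\,u(0,t) + \beta(t)\,u_x(0,t) = 0.
```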
Anatomy of Some Non-Heinrich Events During The Last Glacial Maximum on Laurentian Fan
NASA Astrophysics Data System (ADS)
Gil, I. M.; Keigwin, L. D.
2013-12-01
High-resolution diatom assemblage analyses coupled with oxygen and carbon isotopic records from a new 28 m piston core on Laurentian Fan reveal significant sedimentological and marine productivity changes related to variability of the nearby Laurentide Ice Sheet during the Last Glacial Maximum. Between 21.0 and 19.7 ka and between 18.8 and 18.6 ka, olive-grey clay intervals interrupt the usual glacial red-clay sedimentation. The timing of these two intervals corresponds to the reported occurrence of layers low in detrital carbonate (LDC, considered non-Heinrich events) that occurred between Heinrich Events 1 and 2. Diatoms are abundant only during those LDC olive-grey clay intervals and suggest ice retreat (allowing the light penetration necessary for diatoms). The species succession also reveals different environmental conditions. The 21.0 to 19.7 ka interval is divisible into two main periods: the first was characterized by environmental conditions dominated by ice, while the second period (starting at 20.2 ka) was warmer than the first. During the shorter 18.8 to 18.6 ka interval, conditions were even warmer than during the 20.2 to 19.7 ka sub-interval. Finally, the comparison of the interpreted oceanographic conditions with changes in Ice Rafted Debris and other records from the North Atlantic will bring new insight into those episodes that precede the transition to deglaciation beginning ~18.2 ka on Laurentian Fan (based on δ18O in N. pachyderma (s.)).
Choice with a fixed requirement for food, and the generality of the matching relation
Stubbs, D. Alan; Dreyfus, Leon R.; Fetterman, J. Gregor; Dorman, Lana G.
1986-01-01
Pigeons were trained on choice procedures in which responses on each of two keys were reinforced probabilistically, but only after a schedule requirement had been met. Under one arrangement, a fixed-interval choice procedure was used in which responses were not reinforced until the interval was over; then a response on one key would be reinforced, with the effective key changing irregularly from interval to interval. Under a second, fixed-ratio choice procedure, responses on either key counted towards completion of the ratio and then, once the ratio had been completed, a response on the probabilistically selected key would produce food. In one experiment, the schedule requirements were varied for both fixed-interval and fixed-ratio schedules. In the second experiment, relative reinforcement rate was varied. And in a third experiment, the duration of an intertrial interval separating choices was varied. The results for 11 pigeons across all three experiments indicate that there were often large deviations between relative response rates and relative reinforcement rates. Overall performance measures were characterized by a great deal of variability across conditions. More detailed measures of choice across the schedule requirement were also quite variable across conditions. In spite of this variability, performance was consistent across conditions in its efficiency of producing food. The absence of matching of behavior allocation to reinforcement rate indicates an important difference between the present procedures and other choice procedures; that difference raises questions about the specific conditions that lead to matching as an outcome. PMID:16812452
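For context, the matching relation referenced here is the standard one relating response allocation (B) to obtained reinforcement rates (R) on the two keys; this equation is background, not a result of the study:

```latex
\frac{B_1}{B_1 + B_2} = \frac{R_1}{R_1 + R_2}
```

The deviations reported above are departures of the left-hand side from the right-hand side across conditions.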
Svensson, Fredrik; Aniceto, Natalia; Norinder, Ulf; Cortes-Ciriano, Isidro; Spjuth, Ola; Carlsson, Lars; Bender, Andreas
2018-05-29
Making predictions with an associated confidence is highly desirable as it facilitates decision making and resource prioritization. Conformal regression is a machine learning framework that allows the user to define the required confidence and delivers predictions that are guaranteed to be correct to the selected extent. In this study, we apply conformal regression to model molecular properties and bioactivity values and investigate different ways to scale the resultant prediction intervals to create as efficient (i.e., narrow) regressors as possible. Different algorithms to estimate the prediction uncertainty were used to normalize the prediction ranges, and the different approaches were evaluated on 29 publicly available data sets. Our results show that the most efficient conformal regressors are obtained when using the natural exponential of the ensemble standard deviation from the underlying random forest to scale the prediction intervals, but other approaches were almost as efficient. This approach afforded an average prediction range of 1.65 pIC50 units at the 80% confidence level when applied to bioactivity modeling. The choice of nonconformity function has a pronounced impact on the average prediction range with a difference of close to one log unit in bioactivity between the tightest and widest prediction range. Overall, conformal regression is a robust approach to generate bioactivity predictions with associated confidence.
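The normalization scheme described here can be sketched with a split-conformal regressor whose intervals are scaled by the exponential of an ensemble standard deviation. A bootstrap ensemble of quadratic least-squares fits stands in for the paper's random forest; the data, model, and all names are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy heteroscedastic 1-D regression data
x = rng.uniform(0, 4, 400)
y = np.sin(x) + (0.1 + 0.2 * x) * rng.normal(size=400)

def fit_ensemble(x, y, n_models=30):
    # bootstrap ensemble of quadratic fits (stand-in for a random forest)
    return np.array([np.polyfit(x[i], y[i], 2)
                     for i in (rng.integers(0, len(x), len(x))
                               for _ in range(n_models))])

def predict_all(coefs, x):
    return np.array([np.polyval(c, x) for c in coefs])

x_tr, y_tr = x[:250], y[:250]      # proper training set
x_cal, y_cal = x[250:], y[250:]    # calibration set

coefs = fit_ensemble(x_tr, y_tr)

# normalized nonconformity score: |residual| / exp(ensemble sd)
preds_cal = predict_all(coefs, x_cal)
sigma_cal = np.exp(preds_cal.std(axis=0))
scores = np.abs(y_cal - preds_cal.mean(axis=0)) / sigma_cal

# conformal quantile for an 80% confidence level
alpha = 0.2
q = np.quantile(scores, min(1.0, (1 - alpha) * (1 + 1 / len(scores))))

# prediction interval for new points: centre +/- q * exp(ensemble sd)
x_new = np.linspace(0, 4, 5)
preds_new = predict_all(coefs, x_new)
centre = preds_new.mean(axis=0)
half = q * np.exp(preds_new.std(axis=0))
lower, upper = centre - half, centre + half
```

Because the scaling factor grows with the ensemble's disagreement, intervals widen exactly where the model is least certain, which is what makes the resulting regressor "efficient" in the conformal sense.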
Patient-ventilator asynchrony affects pulse pressure variation prediction of fluid responsiveness.
Messina, Antonio; Colombo, Davide; Cammarota, Gianmaria; De Lucia, Marta; Cecconi, Maurizio; Antonelli, Massimo; Corte, Francesco Della; Navalesi, Paolo
2015-10-01
During partial ventilatory support, pulse pressure variation (PPV) fails to adequately predict fluid responsiveness. This prospective study aims to investigate whether patient-ventilator asynchrony affects PPV prediction of fluid responsiveness during pressure support ventilation (PSV). This is an observational physiological study evaluating the response to a 500-mL fluid challenge in 54 patients receiving PSV, 27 without (Synch) and 27 with asynchronies (Asynch), as assessed by visual inspection of ventilator waveforms by 2 skilled blinded physicians. The area under the curve was 0.71 (confidence interval, 0.57-0.83) for the overall population, 0.86 (confidence interval, 0.68-0.96) in the Synch group, and 0.53 (confidence interval, 0.33-0.73) in the Asynch group (P = .018). Sensitivity and specificity of PPV were 78% and 89% in the Synch group and 36% and 46% in the Asynch group. Logistic regression showed that the PPV prediction was influenced by patient-ventilator asynchrony (odds ratio, 8.8 [2.0-38.0]; P < .003). Of the 27 patients without asynchronies, 12 had a tidal volume greater than or equal to 8 mL/kg; in this subgroup, the rate of correct classification was 100%. Patient-ventilator asynchrony affects PPV performance during partial ventilatory support influencing its efficacy in predicting fluid responsiveness. Copyright © 2015 Elsevier Inc. All rights reserved.
Sampling interval analysis and CDF generation for grain-scale gravel bed topography
USDA-ARS?s Scientific Manuscript database
In river hydraulics, there is a continuing need for characterizing bed elevations to arrive at quantitative roughness measures that can be used in predicting flow depth and for improved prediction of fine-sediment transport over and through coarse beds. Recently published prediction methods require...
Finite Element Model of the Knee for Investigation of Injury Mechanisms: Development and Validation
Kiapour, Ali; Kiapour, Ata M.; Kaul, Vikas; Quatman, Carmen E.; Wordeman, Samuel C.; Hewett, Timothy E.; Demetropoulos, Constantine K.; Goel, Vijay K.
2014-01-01
Multiple computational models have been developed to study knee biomechanics. However, the majority of these models are mainly validated against a limited range of loading conditions and/or do not include sufficient details of the critical anatomical structures within the joint. Due to the multifactorial dynamic nature of knee injuries, anatomic finite element (FE) models validated against multiple factors under a broad range of loading conditions are necessary. This study presents a validated FE model of the lower extremity with an anatomically accurate representation of the knee joint. The model was validated against tibiofemoral kinematics, ligaments strain/force, and articular cartilage pressure data measured directly from static, quasi-static, and dynamic cadaveric experiments. Strong correlations were observed between model predictions and experimental data (r > 0.8 and p < 0.0005 for all comparisons). FE predictions showed low deviations (root-mean-square (RMS) error) from average experimental data under all modes of static and quasi-static loading, falling within 2.5 deg of tibiofemoral rotation, 1% of anterior cruciate ligament (ACL) and medial collateral ligament (MCL) strains, 17 N of ACL load, and 1 mm of tibiofemoral center of pressure. Similarly, the FE model was able to accurately predict tibiofemoral kinematics and ACL and MCL strains during simulated bipedal landings (dynamic loading). In addition to minimal deviation from direct cadaveric measurements, all model predictions fell within 95% confidence intervals of the average experimental data. Agreement between model predictions and experimental data demonstrates the ability of the developed model to predict the kinematics of the human knee joint as well as the complex, nonuniform stress and strain fields that occur in biological soft tissue. Such a model will facilitate the in-depth understanding of a multitude of potential knee injury mechanisms with special emphasis on ACL injury. PMID:24763546
Measurement of Phonated Intervals during Four Fluency-Inducing Conditions
ERIC Educational Resources Information Center
Davidow, Jason H.; Bothe, Anne K.; Andreatta, Richard D.; Ye, Jun
2009-01-01
Purpose: Previous investigations of persons who stutter have demonstrated changes in vocalization variables during fluency-inducing conditions (FICs). A series of studies has also shown that a reduction in short intervals of phonation, those from 30 to 200 ms, is associated with decreased stuttering. The purpose of this study, therefore, was to…
NASA Technical Reports Server (NTRS)
Sohl, L. E.; Chandler, M. A.
2001-01-01
The Neoproterozoic Snowball Earth intervals provide excellent opportunities to examine the environmental limits on terrestrial metazoans. A series of GCM simulations was run in order to quantify climatic conditions during these intervals. Additional information is contained in the original extended abstract.
Lo, Monica Y; Bonthala, Nirupama; Holper, Elizabeth M; Banks, Kamakki; Murphy, Sabina A; McGuire, Darren K; de Lemos, James A; Khera, Amit
2013-03-15
Women with angina pectoris and abnormal stress test findings commonly have no epicardial coronary artery disease (CAD) at catheterization. The aim of the present study was to develop a risk score to predict obstructive CAD in such patients. Data were analyzed from 337 consecutive women with angina pectoris and abnormal stress test findings who underwent cardiac catheterization at our center from 2003 to 2007. Forward selection multivariate logistic regression analysis was used to identify the independent predictors of CAD, defined by ≥50% diameter stenosis in ≥1 epicardial coronary artery. The independent predictors included age ≥55 years (odds ratio 2.3, 95% confidence interval 1.3 to 4.0), body mass index <30 kg/m(2) (odds ratio 1.9, 95% confidence interval 1.1 to 3.1), smoking (odds ratio 2.6, 95% confidence interval 1.4 to 4.8), low high-density lipoprotein cholesterol (odds ratio 2.9, 95% confidence interval 1.5 to 5.5), family history of premature CAD (odds ratio 2.4, 95% confidence interval 1.0 to 5.7), lateral abnormality on stress imaging (odds ratio 2.8, 95% confidence interval 1.5 to 5.5), and exercise capacity <5 metabolic equivalents (odds ratio 2.4, 95% confidence interval 1.1 to 5.6). When each variable was assigned 1 point and the points were summed to constitute a risk score, there was a graded association between the score and prevalent CAD (p-trend <0.001). The risk score demonstrated good discrimination with a cross-validated c-statistic of 0.745 (95% confidence interval 0.70 to 0.79), and an optimized cutpoint of a score of ≤2 included 62% of the subjects and had a negative predictive value of 80%. In conclusion, a simple clinical risk score of 7 characteristics can help differentiate those more or less likely to have CAD among women with angina pectoris and abnormal stress test findings. This tool, if validated, could help to guide testing strategies in women with angina pectoris. Copyright © 2013 Elsevier Inc. All rights reserved.
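The score construction and its discrimination check can be sketched as follows. The simulated cohort and the logistic link are invented for illustration and do not reproduce the study's data; only the mechanics (1 point per criterion, c-statistic, negative predictive value at a cutpoint) mirror the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical cohort: 7 binary criteria per patient, 1 point each
n = 1000
criteria = rng.integers(0, 2, size=(n, 7))
score = criteria.sum(axis=1)                 # 0..7 points

# simulate outcomes so that higher scores carry higher CAD risk
p_cad = 1 / (1 + np.exp(-(score - 3)))
cad = rng.random(n) < p_cad

def auc(scores, labels):
    # c-statistic via the rank-sum (Mann-Whitney) identity
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

c_stat = auc(score, cad)

# negative predictive value at the score <= 2 cutpoint
low = score <= 2
npv = (~cad[low]).mean()
```

A cross-validated c-statistic, as reported in the study, would repeat this calculation on held-out folds rather than on the fitting sample.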
Abstract: Inference and Interval Estimation for Indirect Effects With Latent Variable Models.
Falk, Carl F; Biesanz, Jeremy C
2011-11-30
Models specifying indirect effects (or mediation) and structural equation modeling are both popular in the social sciences. Yet relatively little research has compared methods that test for indirect effects among latent variables and provided precise estimates of the effectiveness of different methods. This simulation study provides an extensive comparison of methods for constructing confidence intervals and for making inferences about indirect effects with latent variables. We compared the percentile (PC) bootstrap, bias-corrected (BC) bootstrap, bias-corrected accelerated (BCa) bootstrap, likelihood-based confidence intervals (Neale & Miller, 1997), partial posterior predictive (Biesanz, Falk, and Savalei, 2010), and joint significance tests based on Wald tests or likelihood ratio tests. All models included three reflective latent variables representing the independent, dependent, and mediating variables. The design included the following fully crossed conditions: (a) sample size: 100, 200, and 500; (b) number of indicators per latent variable: 3 versus 5; (c) reliability per set of indicators: .7 versus .9; (d) and 16 different path combinations for the indirect effect (α = 0, .14, .39, or .59; and β = 0, .14, .39, or .59). Simulations were performed using a WestGrid cluster of 1680 3.06 GHz Intel Xeon processors running R and OpenMx. Results based on 1,000 replications per cell and 2,000 resamples per bootstrap method indicated that the BC and BCa bootstrap methods have inflated Type I error rates. Likelihood-based confidence intervals and the PC bootstrap emerged as methods that adequately control Type I error and have good coverage rates.
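The percentile (PC) bootstrap favored by these results can be sketched for a simple mediation chain. This toy uses observed variables and plain least-squares slopes, whereas the study modeled latent variables; the simulated effect sizes and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# simulated mediation chain X -> M -> Y (path coefficients echo the
# .39 condition from the design, but the data are invented)
n = 200
x = rng.normal(size=n)
m = 0.39 * x + rng.normal(size=n)
y = 0.39 * m + rng.normal(size=n)

def indirect(x, m, y):
    a = np.polyfit(x, m, 1)[0]   # slope of M on X
    b = np.polyfit(m, y, 1)[0]   # slope of Y on M (simplified: X not partialled out)
    return a * b

# percentile bootstrap confidence interval for the indirect effect a*b
boot = np.empty(2000)
for i in range(2000):
    idx = rng.integers(0, n, n)
    boot[i] = indirect(x[idx], m[idx], y[idx])
lo, hi = np.percentile(boot, [2.5, 97.5])
```

The PC interval simply takes empirical quantiles of the bootstrap distribution; the BC and BCa variants adjust those quantiles for bias and skew, which is the adjustment the simulation found to inflate Type I error.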
Fiore, Lorenzo; Lorenzetti, Walter; Ratti, Giovannino
2005-11-30
A procedure is proposed to compare single-unit spiking activity elicited in repetitive cycles with an inhomogeneous Poisson process (IPP). Each spike sequence in a cycle is discretized and represented as a point process on a circle. The interspike interval probability density predicted for an IPP is computed on the basis of the experimental firing probability density; differences from the experimental interval distribution are assessed. This procedure was applied to spike trains which were repetitively induced by opening-closing movements of the distal article of a lobster leg. As expected, the density of short interspike intervals, less than 20-40 ms in length, was found to lie greatly below the level predicted for an IPP, reflecting the occurrence of the refractory period. Conversely, longer intervals, ranging from 20-40 to 100-120 ms, were markedly more abundant than expected; this provided evidence for a time window of increased tendency to fire again after a spike. Less consistently, a weak depression of spike generation was observed for longer intervals. A Monte Carlo procedure, implemented for comparison, produced quite similar results, but was slightly less precise and more demanding in terms of computation time.
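The qualitative signature reported here (a deficit of short interspike intervals relative to an IPP prediction) can be reproduced with a toy thinning simulation. The cyclic rate profile, parameters, and the crude refractory mechanism below are all invented for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# assumed cyclic firing-rate profile over a 1-s movement cycle (spikes/s)
def rate(t):
    return 20 + 15 * np.sin(2 * np.pi * t)

def simulate(rate, t_max, r_max, refractory=0.0):
    # inhomogeneous Poisson process by thinning, with an optional
    # absolute refractory period added for comparison
    spikes, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / r_max)  # candidate event at rate r_max
        if t >= t_max:
            break
        if rng.random() < rate(t % 1.0) / r_max:   # accept with prob rate/r_max
            if not spikes or t - spikes[-1] > refractory:
                spikes.append(t)
    return np.array(spikes)

ipp = simulate(rate, 200.0, 40.0)                     # pure IPP
refr = simulate(rate, 200.0, 40.0, refractory=0.03)   # 30-ms refractoriness

# fraction of interspike intervals shorter than 30 ms in each train
short_ipp = np.mean(np.diff(ipp) < 0.03)
short_refr = np.mean(np.diff(refr) < 0.03)
```

Comparing the two histograms of `np.diff(...)` shows the short-interval deficit the authors used as evidence of refractoriness; the paper's analytic prediction avoids the extra Monte Carlo noise this simulation carries.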
Liu, C C; Crone, N E; Franaszczuk, P J; Cheng, D T; Schretlen, D S; Lenz, F A
2011-08-25
The current model of fear conditioning suggests that it is mediated through modules involving the amygdala (AMY), hippocampus (HIP), and frontal lobe (FL). We now test the hypothesis that habituation and acquisition stages of a fear conditioning protocol are characterized by different event-related causal interactions (ERCs) within and between these modules. The protocol used the painful cutaneous laser as the unconditioned stimulus, and ERC was estimated by analysis of local field potentials recorded through electrodes implanted for investigation of epilepsy. During the prestimulus interval of the habituation stage, FL>AMY ERC interactions were common. For comparison, in the poststimulus interval of the habituation stage, only a subdivision of the FL (dorsolateral prefrontal cortex, dlPFC) still exerted the FL>AMY ERC interaction (dlPFC>AMY). For a further comparison, during the poststimulus interval of the acquisition stage, the dlPFC>AMY interaction persisted and an AMY>FL interaction appeared. In addition to these ERC interactions between modules, the results also show ERC interactions within modules. During the poststimulus interval, HIP>HIP ERC interactions were more common during acquisition, and deep hippocampal contacts exerted causal interactions on superficial contacts, possibly explained by connectivity between the perihippocampal gyrus and the HIP. During the prestimulus interval of the habituation stage, AMY>AMY ERC interactions were commonly found, while interactions between the deep and superficial AMY (indirect pathway) were independent of intervals and stages. These results suggest that the network subserving fear includes distributed or widespread modules, some of which are themselves "local networks." ERC interactions between and within modules can be either static or change dynamically across intervals or stages of fear conditioning. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.
Examining impulse-variability in overarm throwing.
Urbin, M A; Stodden, David; Boros, Rhonda; Shannon, David
2012-01-01
The purpose of this study was to examine variability in overarm throwing velocity and spatial output error at various percentages of maximum, to test the prediction of an inverted-U function as predicted by impulse-variability theory and a speed-accuracy trade-off as predicted by Fitts' law. Thirty subjects (16 skilled, 14 unskilled) were instructed to throw a tennis ball at seven percentages of their maximum velocity (40-100%) in random order (9 trials per condition) at a target 30 feet away. Throwing velocity was measured with a radar gun and interpreted as an index of overall systemic power output. Within-subject throwing velocity variability was examined using within-subjects repeated-measures ANOVAs (7 repeated conditions) with built-in polynomial contrasts. Spatial error was analyzed using mixed model regression. Results indicated a quadratic fit, with variability in throwing velocity increasing from 40% up to 60%, where it peaked, and then decreasing at each subsequent interval to maximum (p < .001, η2 = .555). There was no linear relationship between speed and accuracy. Overall, these data support the notion of an inverted-U function in overarm throwing velocity variability as both skilled and unskilled subjects approach maximum effort. However, these data do not support the notion of a speed-accuracy trade-off. The consistent demonstration of an inverted-U function associated with systemic power output variability indicates an enhanced capability to regulate aspects of force production and relative timing between segments as individuals approach maximum effort, even in a complex ballistic skill.
An aerial sightability model for estimating ferruginous hawk population size
Ayers, L.W.; Anderson, S.H.
1999-01-01
Most raptor aerial survey projects have focused on numeric description of visibility bias without identifying the contributing factors or developing predictive models to account for imperfect detection rates. Our goal was to develop a sightability model for nesting ferruginous hawks (Buteo regalis) that could account for nests missed during aerial surveys and provide more accurate population estimates. Eighteen observers, all unfamiliar with nest locations in a known population, searched for nests within 300 m of flight transects from a Maule fixed-wing aircraft. Flight variables tested for their influence on nest-detection rates included aircraft speed, height, direction of travel, time of day, light condition, distance to nest, and observer experience level. Nest variables included status (active vs. inactive), condition (i.e., excellent, good, fair, poor, bad), substrate type, topography, and tree density. A multiple logistic regression model identified nest substrate type, distance to nest, and observer experience level as significant predictors of detection rates (P < 0.05). The overall model was significant (χ²₆ = 124.4, P < 0.001, n = 255 nest observations), and the correct classification rate was 78.4%. During 2 validation surveys, observers saw 23.7% (14/59) and 36.5% (23/63) of the actual population. Sightability model predictions, with 90% confidence intervals, captured the true population in both tests. Our results indicate standardized aerial surveys, when used in conjunction with the predictive sightability model, can provide unbiased population estimates for nesting ferruginous hawks.
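The correction step such a sightability model enables can be sketched as follows; the logistic coefficients and covariate values below are hypothetical, not the study's fitted estimates. Each detected nest is weighted by the inverse of its modeled detection probability, so nests that were easy to miss count for more than one.

```python
import math

# Hypothetical logistic coefficients for detection probability
# (the study's fitted values are not reported here).
BETA = {"intercept": 1.2, "dist_100m": -0.6, "experienced": 0.9, "open_substrate": 0.8}

def detection_prob(dist_100m, experienced, open_substrate):
    """Modeled probability that an observer detects a nest."""
    z = (BETA["intercept"]
         + BETA["dist_100m"] * dist_100m
         + BETA["experienced"] * experienced
         + BETA["open_substrate"] * open_substrate)
    return 1.0 / (1.0 + math.exp(-z))

def sightability_estimate(detections):
    """Horvitz-Thompson-style correction: each detected nest is
    weighted by 1/p, its inverse detection probability."""
    return sum(1.0 / detection_prob(*d) for d in detections)

# Three detected nests: (distance / 100 m, experienced observer?, open substrate?)
detected = [(0.5, 1, 1), (2.0, 0, 0), (1.5, 1, 0)]
n_hat = sightability_estimate(detected)
```

Because every weight is at least 1, the corrected estimate always exceeds the raw count of detected nests, which is how the model "accounts for nests missed."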
Uncertainty quantification methodologies development for stress corrosion cracking of canister welds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dingreville, Remi Philippe Michel; Bryan, Charles R.
2016-09-30
This letter report presents a probabilistic performance assessment model to evaluate the probability of canister failure (through-wall penetration) by SCC. The model first assesses whether environmental conditions for SCC – the presence of an aqueous film – are present at canister weld locations (where tensile stresses are likely to occur) on the canister surface. Geometry-specific storage system thermal models and weather data sets representative of U.S. spent nuclear fuel (SNF) storage sites are implemented to evaluate location-specific canister surface temperature and relative humidity (RH). As the canister cools and aqueous conditions become possible, the occurrence of corrosion is evaluated. Corrosion is modeled as a two-step process: first, pitting is initiated, and the extent and depth of pitting is a function of the chloride surface load and the environmental conditions (temperature and RH). Second, as corrosion penetration increases, the pit eventually transitions to a SCC crack, with crack initiation becoming more likely with increasing pit depth. Once pits convert to cracks, a crack growth model is implemented. The SCC growth model includes rate dependencies on both temperature and crack tip stress intensity factor, and crack growth only occurs in time steps when aqueous conditions are predicted. The model suggests that SCC is likely to occur over potential SNF interim storage intervals; however, this result is based on many modeling assumptions. Sensitivity analyses provide information on the model assumptions and parameter values that have the greatest impact on predicted storage canister performance, and provide guidance for further research to reduce uncertainties.
NASA Astrophysics Data System (ADS)
Schmidt, Thomas; Kalisch, John; Lorenz, Elke; Heinemann, Detlev
2016-03-01
Clouds are the dominant source of small-scale variability in surface solar radiation and uncertainty in its prediction. However, the increasing share of solar energy in the worldwide electric power supply increases the need for accurate solar radiation forecasts. In this work, we present results of a very short-term global horizontal irradiance (GHI) forecast experiment based on hemispheric sky images. A 2-month data set with images from one sky imager and high-resolution GHI measurements from 99 pyranometers distributed over 10 km by 12 km is used for validation. We developed a multi-step model and processed GHI forecasts up to 25 min with an update interval of 15 s. A cloud type classification is used to separate the time series into different cloud scenarios. Overall, the sky-imager-based forecasts do not outperform the reference persistence forecasts. Nevertheless, we find that analysis and forecast performance depends strongly on the predominant cloud conditions. Especially convective type clouds lead to high temporal and spatial GHI variability. For cumulus cloud conditions, the analysis error is found to be lower than that introduced by a single pyranometer if it is used representatively for the whole area in distances from the camera larger than 1-2 km. Moreover, forecast skill is much higher for these conditions compared to overcast or clear sky situations causing low GHI variability, which is easier to predict by persistence. In order to generalize the cloud-induced forecast error, we identify a variability threshold indicating conditions with positive forecast skill.
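Forecast skill of the kind discussed here is conventionally measured against a persistence reference: skill = 1 - RMSE(forecast) / RMSE(persistence), so positive skill means the forecast beats persistence. A minimal sketch with made-up GHI values (the numbers are illustrative, not the experiment's data):

```python
import math

def rmse(pred, obs):
    """Root-mean-square error between forecasts and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def persistence(obs, horizon):
    # Persistence forecast: the last observed value is carried forward,
    # so obs[i] becomes the forecast for obs[i + horizon].
    return obs[:-horizon]

# Hypothetical GHI observations (W/m^2) and 1-step-ahead model forecasts
obs = [500.0, 520.0, 480.0, 450.0, 470.0, 510.0]
model = [505.0, 515.0, 490.0, 460.0, 465.0]   # forecasts for obs[1:]
pers = persistence(obs, 1)                    # forecasts for obs[1:]

target = obs[1:]
skill = 1.0 - rmse(model, target) / rmse(pers, target)
```

With these illustrative numbers the skill comes out slightly negative, mirroring the abstract's finding that the imager-based forecasts did not, on average, outperform persistence.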
Robinson, M.M.; Valdes, P.J.; Haywood, A.M.; Dowsett, H.J.; Hill, D.J.; Jones, S.M.
2011-01-01
The mid-Pliocene warm period (MPWP; ~3.3 to 3.0 Ma) is the most recent interval in Earth's history in which global temperatures reached and remained at levels similar to those projected for the near future. The distribution of global warmth, however, was different than today in that the high latitudes warmed more than the tropics. Multiple temperature proxies indicate significant sea surface warming in the North Atlantic and Arctic Oceans during the MPWP, but predictions from a fully coupled ocean-atmosphere model (HadCM3) have so far been unable to fully predict the large scale of sea surface warming in the high latitudes. If climate proxies accurately represent Pliocene conditions, and if no weakness exists in the physics of the model, then model boundary conditions may be in error. Here we alter a single boundary condition (bathymetry) to examine if Pliocene high latitude warming was aided by an increase in poleward heat transport due to changes in the subsidence of North Atlantic Ocean ridges. We find an increase in both Arctic sea surface temperature and deepwater production in model experiments that incorporate a deepened Greenland-Scotland Ridge. These results offer both a mechanism for the warming in the North Atlantic and Arctic Oceans indicated by numerous proxies and an explanation for the apparent disparity between proxy data and model simulations of Pliocene northern North Atlantic and Arctic Ocean conditions. Determining the causes of Pliocene warmth remains critical to fully understanding comparisons of the Pliocene warm period to possible future climate change scenarios. © 2011.
NASA Technical Reports Server (NTRS)
Schlegel, T. T.; Marthol, H.; Bucchner, S.; Tutaj, M.; Berlin, D.; Axelrod, F. B.; Hilz, M. J.
2004-01-01
Patients with familial dysautonomia (FD) have an increased risk of sudden death, but sensitive and specific predictors of sudden death in FD are lacking. Methods. We recorded 10-min resting high-fidelity 12-lead ECGs in 14 FD patients and in 14 age/gender-matched healthy subjects and studied 25+ different heart rate variability (HRV) indices for their ability to predict sudden death in the FD patients. Indices studied included those from 4 "nonlinear" HRV techniques (detrended fluctuation analysis, approximate entropy, correlation dimension, and Poincaré analyses). The predictive value of PR, QRS, QTc and JTc intervals, QT dispersion (QTd), beat-to-beat QT and PR interval variability indices (QTVI and PRVI) and 12-lead high frequency QRS ECG (150-250 Hz) was also studied. FD patients and controls (C) differed (P less than 0.01) with respect to 20+ of the HRV indices (FD less than C) and with respect to QTVI and PRVI (FD greater than C) and HF QRS-related root mean squared voltages (FD greater than C) and reduced amplitude zone counts (FD less than C). They differed less with respect to PR intervals (FD less than C) and JTc intervals (FD greater than C) (P less than 0.05 for both) and did not differ at all with respect to QRS and QTc intervals and to QTd. Within 12 months after study, 2 of the 14 patients succumbed to sudden cardiac arrest. The best predictor of sudden death was the degree of diminution in HRV vagal-cardiac (parasympathetic) parameters such as RMSSD, the SD1 of Poincaré plots, and HF spectral power. Excluding the two FD patients who had resting tachycardia (HR greater than 100, which confounds traditional HRV analyses), the following criteria were independently 100% sensitive and 100% specific for predicting sudden death in the remaining 12 FD patients during spontaneous breathing: RMSSD less than 13 ms and/or Poincaré SD1 less than 9 ms.
In FD patients without supine tachycardia, the degree of diminution in parasympathetic HRV parameters (by high-fidelity ECG) predicts incipient death.
Effect of low body temperature on associative interference in conditioned taste aversion.
Christianson, John P; Anderson, Mathew J; Misanin, James R; Hinderliter, Charles F
2005-06-01
When two novel conditioned stimuli precede an unconditioned stimulus (US), the interval between the two conditioned stimuli (CS1 and CS2) influences the magnitude of the CS-US associability of each CS. As the interval between CS1 and CS2 increases, the associability of CS1 with the US decreases due to interference by CS2 and the associability of CS2 increases, given its temporal proximity to the US. Because hypothermia has been reported to increase the interval at which conditioned taste aversions can be formed, its influence was examined on the above relationship, i.e., how interference from CS2 affects the associability of CS1 with the US. Rats received a conditioned taste aversion procedure where CS1 and CS2 were presented either one after the other or separated by an 80-min. delay. For all subjects, the US or pseudo-US was presented immediately after CS2. When hypothermia was interpolated between the two flavor stimuli that were spaced 80 min. apart, CS2-interference with the CS1-US association was greatly attenuated. We propose that hypothermia modifies internal timing mechanisms such that the externally timed 80-min. CS1-CS2 interval was perceived as much shorter for rats made hypothermic. As a result of this perceived shortened inter-CS interval, CS2 produced less interference for the CS1-US association than would be expected for such a relatively long delay between CS1 and CS2.
CS Informativeness Governs CS-US Associability
Ward, Ryan D.; Gallistel, C. R.; Jensen, Greg; Richards, Vanessa L.; Fairhurst, Stephen; Balsam, Peter D
2012-01-01
In a conditioning protocol, the onset of the conditioned stimulus (CS) provides information about when to expect reinforcement (the US). There are two sources of information from the CS in a delay conditioning paradigm in which the CS-US interval is fixed. The first depends on the informativeness, the degree to which CS onset reduces the average expected time to onset of the next US. The second depends only on how precisely a subject can represent a fixed-duration interval (the temporal Weber fraction). In three experiments with mice, we tested the differential impact of these two sources of information on rate of acquisition of conditioned responding (CS-US associability). In Experiment 1, we show that associability (the inverse of trials to acquisition) increases in proportion to informativeness. In Experiment 2, we show that fixing the duration of the US-US interval or the CS-US interval or both has no effect on associability. In Experiment 3, we equated the increase in information produced by varying the C̅/T̅ ratio with the increase produced by fixing the duration of the CS-US interval. Associability increased with increased informativeness, but, as in Experiment 2, fixing the CS-US duration had no effect on associability. These results are consistent with the view that CS-US associability depends on the increased rate of reward signaled by CS onset. The results also provide further evidence that conditioned responding is temporally controlled when it emerges. PMID:22468633
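The informativeness account described above can be summarized numerically. In this sketch the scaling constant k is hypothetical, and formulations differ on whether trials to acquisition scale with C/T or with log(C/T); the code simply illustrates the inverse relationship the abstract reports (associability, the inverse of trials to acquisition, grows with informativeness).

```python
def informativeness(us_us_interval, cs_us_interval):
    """Informativeness = C/T: the average US-US interval (C) divided
    by the average CS-US interval (T)."""
    return us_us_interval / cs_us_interval

def trials_to_acquisition(c_over_t, k=300.0):
    """Associability is proportional to informativeness, so trials to
    acquisition fall as 1/(C/T). The constant k is illustrative only."""
    return k / c_over_t

# Doubling informativeness halves the predicted trials to acquisition.
fast = trials_to_acquisition(informativeness(120.0, 10.0))  # C/T = 12
slow = trials_to_acquisition(informativeness(120.0, 20.0))  # C/T = 6
```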
Stimulus-to-matching-stimulus interval influences N1, P2, and P3b in an equiprobable Go/NoGo task.
Steiner, Genevieve Z; Barry, Robert J; Gonsalvez, Craig J
2014-10-01
Previous research has shown that as the stimulus-to-matching-stimulus interval (including the target-to-target interval, TTI, and nontarget-to-nontarget interval, NNI) increases, the amplitude of the P300 ERP component increases systematically. Here, we extended previous P300 research and explored TTI and NNI effects on the various ERP components elicited in an auditory equiprobable Go/NoGo task. We also examined whether a similar mechanism was underpinning interval effects in early ERP components (e.g., N1). Thirty participants completed a specially designed variable-ISI equiprobable task whilst their EEG activity was recorded. Component amplitudes were extracted using temporal PCA with unrestricted Varimax rotation. As expected, N1, P2, and P3b amplitudes increased as TTI and NNI increased; however, Processing Negativity (PN) and Slow Wave (SW) did not show the same systematic change with interval increments. To determine the origin of interval effects in sequential processing, a multiple regression analysis was conducted on each ERP component including stimulus type, interval, and all preceding components as predictors. These analyses showed that matching-stimulus interval predicted N1 and P3b, and weakly predicted P2, but not PN or SW; SW was determined by P3b only. These results suggest that N1, P3b, and to some extent, P2, are affected by a similar temporal mechanism. However, the dissimilar pattern of results obtained for sequential ERP components indicates that matching-stimulus intervals are not affecting all aspects of stimulus processing. This argues against a global mechanism, such as a pathway-specific refractory effect, and suggests that stimulus processing is occurring in parallel pathways, some of which are not affected by temporal manipulations of matching-stimulus interval. Copyright © 2014 Elsevier B.V. All rights reserved.
Variations in rupture process with recurrence interval in a repeated small earthquake
Vidale, J.E.; Ellsworth, W.L.; Cole, A.; Marone, Chris
1994-01-01
In theory and in laboratory experiments, friction on sliding surfaces such as rock, glass and metal increases with time since the previous episode of slip. This time dependence is a central pillar of the friction laws widely used to model earthquake phenomena. On natural faults, other properties, such as rupture velocity, porosity and fluid pressure, may also vary with the recurrence interval. Eighteen repetitions of the same small earthquake, separated by intervals ranging from a few days to several years, allow us to test these laboratory predictions in situ. The events with the longest time since the previous earthquake tend to have about 15% larger seismic moment than those with the shortest intervals, although this trend is weak. In addition, the rupture durations of the events with the longest recurrence intervals are more than a factor of two shorter than for the events with the shortest intervals. Both decreased duration and increased friction are consistent with progressive fault healing during the time of stationary contact.
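The log-time healing invoked above is commonly written in Dieterich's aging form, in which static friction grows with the logarithm of hold time. A minimal sketch with illustrative parameter values (not fitted to these earthquakes):

```python
import math

def friction_healing(t_hold, mu0=0.6, b=0.01, t_c=1.0):
    """Dieterich-style log-time healing: the static friction
    coefficient grows with hold time t_hold (seconds). mu0, b, and
    the cutoff time t_c are illustrative laboratory-scale values."""
    return mu0 + b * math.log10(1.0 + t_hold / t_c)

# Friction recovered after a 10 s hold vs. a ~3 h hold:
short_hold = friction_healing(10.0)
long_hold = friction_healing(10_000.0)
```

Each decade of hold time adds roughly b to the friction coefficient under this law, which is the in-situ prediction the repeating-earthquake observations are testing.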
Ito, Masanori; Kado, Naoki; Suzuki, Toshiaki; Ando, Hiroshi
2013-01-01
[Purpose] The purpose of this study was to investigate the influence of external pacing with periodic auditory stimuli on the control of periodic movement. [Subjects and Methods] Eighteen healthy subjects performed self-paced, synchronization-continuation, and syncopation-continuation tapping. Inter-onset intervals were 1,000, 2,000 and 5,000 ms. The variability of inter-tap intervals was compared between the different pacing conditions and between self-paced tapping and each continuation phase. [Results] There were no significant differences in the mean and standard deviation of the inter-tap interval between pacing conditions. For the 1,000 and 5,000 ms tasks, there were significant differences in the mean inter-tap interval following auditory pacing compared with self-pacing. For the 2,000 ms syncopation condition and 5,000 ms task, there were significant differences from self-pacing in the standard deviation of the inter-tap interval following auditory pacing. [Conclusion] These results suggest that the accuracy of periodic movement with intervals of 1,000 and 5,000 ms can be improved by the use of auditory pacing. However, the consistency of periodic movement is mainly dependent on the inherent skill of the individual; thus, improvement of consistency based on pacing is unlikely. PMID:24259932
Miskovic, Vladimir; Keil, Andreas
2015-01-01
The visual system is biased towards sensory cues that have been associated with danger or harm through temporal co-occurrence. An outstanding question about conditioning-induced changes in visuocortical processing is the extent to which they are driven primarily by top-down factors such as expectancy or by low-level factors such as the temporal proximity between conditioned stimuli and aversive outcomes. Here, we examined this question using two different differential aversive conditioning experiments: participants learned to associate a particular grating stimulus with an aversive noise that was presented either in close temporal proximity (delay conditioning experiment) or after a prolonged stimulus-free interval (trace conditioning experiment). In both experiments we probed cue-related cortical responses by recording steady-state visual evoked potentials (ssVEPs). Although behavioral ratings indicated that all participants successfully learned to discriminate between the grating patterns that predicted the presence versus absence of the aversive noise, selective amplification of population-level responses in visual cortex for the conditioned danger signal was observed only when the grating and the noise were temporally contiguous. Our findings are in line with notions purporting that changes in the electrocortical response of visual neurons induced by aversive conditioning are a product of Hebbian associations among sensory cell assemblies rather than being driven entirely by expectancy-based, declarative processes. PMID:23398582
Lockie, Robert G; Stage, Alyssa A; Stokes, John J; Orjalo, Ashley J; Davis, DeShaun L; Giuliano, Dominic V; Moreno, Matthew R; Risso, Fabrice G; Lazar, Adrina; Birmingham-Babauta, Samantha A; Tomita, Tricia M
2016-12-03
Leg power is an important characteristic for soccer, and jump tests can measure this capacity. Limited research has analyzed relationships between jumping and soccer-specific field test performance in collegiate male players. Nineteen Division I players completed tests of: leg power (vertical jump (VJ), standing broad jump (SBJ), left- and right-leg triple hop (TH)); linear (30 m sprint; 0–5 m, 5–10 m, 0–10 m, and 0–30 m intervals) and change-of-direction (505) speed; soccer-specific fitness (Yo-Yo Intermittent Recovery Test Level 2); and 7 × 30-m sprints to measure repeated-sprint ability (RSA; total time (TT), performance decrement (PD)). Pearson's correlations (r) determined jump and field test relationships; stepwise regression ascertained jump predictors of the tests (p < 0.05). All jumps correlated with the 0–5, 0–10, and 0–30 m sprint intervals (r = -0.65 to -0.90). VJ, SBJ, and left- and right-leg TH correlated with RSA TT (r = -0.51 to -0.59). Right-leg TH predicted the 0–5 and 0–10 m intervals (R² = 0.55–0.81); the VJ predicted the 0–30 m interval and RSA TT (R² = 0.41–0.84). Between-leg TH asymmetry correlated with and predicted left-leg 505 and RSA PD (r = -0.68 and 0.62; R² = 0.39–0.46). Improvements in jumping ability could contribute to faster speed and RSA performance in collegiate soccer players.
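The Pearson correlations reported above can be reproduced in a few lines; the jump heights and sprint times below are invented for illustration and are not the study's data. Note the expected sign: better jumpers tend to post faster (lower) sprint times, so r is negative.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical vertical-jump heights (cm) and 30 m sprint times (s)
vj = [45.0, 50.0, 55.0, 60.0, 65.0]
sprint = [4.5, 4.4, 4.2, 4.1, 4.0]
r = pearson_r(vj, sprint)
```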
Wen, Dongqi; Zhai, Wenjuan; Xiang, Sheng; Hu, Zhice; Wei, Tongchuan; Noll, Kenneth E
2017-11-01
Determination of the effect of vehicle emissions on air quality near roadways is important because vehicles are a major source of air pollution. A near-roadway monitoring program was undertaken in Chicago between August 4 and October 30, 2014, to measure ultrafine particles, carbon dioxide, carbon monoxide, traffic volume and speed, and wind direction and speed. The objective of this study was to develop a method to relate short-term changes in traffic mode of operation to air quality near roadways using data averaged over 5-min intervals to provide a better understanding of the processes controlling air pollution concentrations near roadways. Three different types of data analysis are provided to demonstrate the type of results that can be obtained from a near-roadway sampling program based on 5-min measurements: (1) development of vehicle emission factors (EFs) for ultrafine particles as a function of vehicle mode of operation, (2) comparison of measured and modeled CO2 concentrations, and (3) application of dispersion models to determine concentrations near roadways. EFs for ultrafine particles are developed that are a function of traffic volume and mode of operation (free flow and congestion) for light-duty vehicles (LDVs) under real-world conditions. Two air quality models, CALINE4 (California Line Source Dispersion Model, version 4) and AERMOD (American Meteorological Society/U.S. Environmental Protection Agency Regulatory Model), are used to predict the ultrafine particulate concentrations near roadways for comparison with measured concentrations. When using CALINE4 to predict air quality levels in the mixing cell, changes in surface roughness and stability class have no effect on the predicted concentrations. However, when using AERMOD to predict air quality in the mixing cell, changes in surface roughness have a significant impact on the predicted concentrations.
The paper provides emission factors (EFs) that are a function of traffic volume and mode of operation (free flow and congestion) for LDVs under real-world conditions. The good agreement between monitoring and modeling results indicates that high-resolution, simultaneous measurements of air quality and meteorological and traffic conditions can be used to determine real-world, fleet-wide vehicle EFs as a function of vehicle mode of operation under actual driving conditions.
MacDonald, Stuart W. S.; Vergote, David; Jhamandas, Jack; Westaway, David; Dixon, Roger A.
2016-01-01
Objectives: Mild cognitive impairment (MCI) is a high-risk condition for progression to Alzheimer’s disease (AD). Vascular health is a key mechanism underlying age-related cognitive decline and neurodegeneration. AD-related genetic risk factors may be associated with preclinical cognitive status changes. We examine independent and cross-domain interactive effects of vascular and genetic markers for predicting MCI status and stability. Method: We used cross-sectional and 2-wave longitudinal data from the Victoria Longitudinal Study, including indicators of vascular health (e.g., reported vascular diseases, measured lung capacity and pulse rate) and genetic risk factors—that is, apolipoprotein E (APOE; rs429358 and rs7412; the presence vs absence of ε4) and catechol-O-methyltransferase (COMT; rs4680; met/met vs val/val). We examined associations with objectively classified (a) cognitive status at baseline (not-impaired cognitive (NIC) controls vs MCI) and (b) stability or transition of cognitive status across a 4-year interval (stable NIC–NIC vs chronic MCI–MCI or transitional NIC–MCI). Results: Using logistic regression, indicators of vascular health, both independently and interactively with APOE ε4, were associated with risk of MCI at baseline and/or associated with MCI conversion or MCI stability over the retest interval. Discussion: Several vascular health markers of aging predict MCI risk. Interactively, APOE ε4 may intensify the vascular health risk for MCI. PMID:26362601
Haggerty, Christopher M.; de Zélicourt, Diane A.; Restrepo, Maria; Rossignac, Jarek; Spray, Thomas L.; Kanter, Kirk R.; Fogel, Mark A.; Yoganathan, Ajit P.
2012-01-01
Background Virtual modeling of cardiothoracic surgery is a new paradigm that allows for systematic exploration of various operative strategies and uses engineering principles to predict the optimal patient-specific plan. This study investigates the predictive accuracy of such methods for the surgical palliation of single ventricle heart defects. Methods Computational fluid dynamics (CFD)-based surgical planning was used to model the Fontan procedure for four patients prior to surgery. The objective for each was to identify the operative strategy that best distributed hepatic blood flow to the pulmonary arteries. Post-operative magnetic resonance data were acquired to compare (via CFD) the post-operative hemodynamics with predictions. Results Despite variations in physiologic boundary conditions (e.g., cardiac output, venous flows) and the exact geometry of the surgical baffle, sufficient agreement was observed with respect to hepatic flow distribution (90% confidence interval: 14 ± 4.3% difference). There was also good agreement of flow-normalized energetic efficiency predictions (19 ± 4.8% error). Conclusions The hemodynamic outcomes of prospective patient-specific surgical planning of the Fontan procedure are described for the first time with good quantitative comparisons between preoperatively predicted and postoperative simulations. These results demonstrate that surgical planning can be a useful tool for single ventricle cardiothoracic surgery with the ability to deliver significant clinical impact. PMID:22777126
The microcomputer scientific software series 2: general linear model--regression.
Harold M. Rauscher
1983-01-01
The general linear model regression (GLMR) program provides the microcomputer user with a sophisticated regression analysis capability. The output provides a regression ANOVA table, estimators of the regression model coefficients, their confidence intervals, confidence intervals around the predicted Y-values, residuals for plotting, a check for multicollinearity, a...
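The quantities a program like GLMR reports (coefficient estimates, their standard errors, and prediction intervals around predicted Y-values) follow from standard OLS formulas. A minimal sketch for the simple-regression case, with made-up data and a caller-supplied t critical value:

```python
import math

def simple_ols(x, y):
    """OLS fit of y = b0 + b1*x, returning the pieces a regression
    program needs to build intervals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    b1 = sxy / sxx
    b0 = my - b1 * mx
    resid = [b - (b0 + b1 * a) for a, b in zip(x, y)]
    s2 = sum(e * e for e in resid) / (n - 2)   # residual variance, df = n - 2
    return b0, b1, s2, mx, sxx, n

def prediction_interval(x0, fit, t_crit):
    """Interval for a NEW observation at x0 (wider than the mean's CI).
    t_crit must be supplied by the caller for the fit's df."""
    b0, b1, s2, mx, sxx, n = fit
    yhat = b0 + b1 * x0
    se_pred = math.sqrt(s2 * (1.0 + 1.0 / n + (x0 - mx) ** 2 / sxx))
    return yhat - t_crit * se_pred, yhat + t_crit * se_pred

fit = simple_ols([1.0, 2.0, 3.0, 4.0], [2.1, 3.9, 6.2, 7.8])
lo, hi = prediction_interval(2.5, fit, t_crit=4.303)  # t(0.975, df=2)
```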
NASA Astrophysics Data System (ADS)
Rambo, J. E.; Kim, W.; Miller, K.
2017-12-01
Physical modeling of a delta's evolution can represent how changing the intervals of flood and interflood can alter a delta's fluvial pattern and geometry. Here we present a set of six experimental runs in which sediment and water were discharged at constant rates over each experiment. During the "flood" period, both sediment and water were discharged at rates of 0.25 cm3/s and 15 ml/s, respectively, and during the "interflood" period, only water was discharged at 7.5 ml/s. The flood periods were only run for 30 minutes to keep the total volume of sediment constant. Run 0 did not have an interflood period and therefore ran with constant sediment and water discharge for the duration of the experiment. The other five runs had either 5, 10, or 15-min intervals of flood with 5, 10, or 15-min intervals of interflood. The experimental results show that Run 0 had the smallest topset area. This is due to a lack of surface reworking that takes place during interflood periods. Run 1 had 15-minute intervals of flood and 15-minute intervals of interflood, and it had the largest topset area. Additionally, the experiments that had longer intervals of interflood than flood had more elongated delta geometries. Wetted fraction color maps were also created to plot channel locations during each run. The maps show that the runs with longer interflood durations had channels occurring predominantly down the middle with stronger incisions; these runs produced deltas with more elongated geometries. When the interflood duration was even longer, however, strong channels started to occur at multiple locations. This increased interflood period allowed for the entire area over the delta's surface to be reworked, thus reducing the downstream slope and allowing channels to be more mobile laterally. Physical modeling of a delta allows us to predict a delta's resulting geometry given a set of conditions.
This insight is especially needed because deltas are home to many human populations and provide habitat for various other species.
NASA Astrophysics Data System (ADS)
Justino, F. J.; Lindemann, D.; Kucharski, F.
2016-02-01
Earth climate history has been punctuated by cold (glacial) and warm (inter-glacial) intervals associated with modification of the planetary orbit and subsequent changes in paleotopography. During the Pleistocene epoch, the time interval between 1.8 million and 11,700 years before present, remarkable episodes of warmer climates such as the Marine Isotope Stage (MIS) 1, 5e, 11c, and 31, which occurred at 9, 127, 409, and 1080 ka, led to changes in air temperature in the polar regions and substantial melting of polar glaciers. Based on first-ever multi-millennium coupled climate simulations of the Marine Isotope Stage 31 (MIS31), long-term oceanic conditions characteristic of this interval have been analyzed. Modeling experiments forced by modified West Antarctic Ice Sheet (WAIS) topography and astronomical configuration demonstrated that a substantial increase in the thermohaline flow and its associated northward heat transport in both Atlantic and Pacific oceans are predicted to occur during the MIS31. In the Atlantic these changes are driven by enhanced oceanic heat loss and increased water density. In the Pacific, anomalous atmospheric circulation leads to an overall increase of the water mass transport in the subtropical gyre, and a drastically modified subtropical cell. Additional aspects related to the formation of the Pacific ocean MOC will be presented. This study is sponsored by the Brazilian Antarctic Program Grant CNPq 407681/2013-2.
Nonuniform sampling and non-Fourier signal processing methods in multidimensional NMR
Mobli, Mehdi; Hoch, Jeffrey C.
2017-01-01
Beginning with the introduction of Fourier Transform NMR by Ernst and Anderson in 1966, time domain measurement of the impulse response (the free induction decay, FID) consisted of sampling the signal at a series of discrete intervals. For compatibility with the discrete Fourier transform (DFT), the intervals are kept uniform, and the Nyquist theorem dictates the largest value of the interval sufficient to avoid aliasing. With the proposal by Jeener of parametric sampling along an indirect time dimension, extension to multidimensional experiments employed the same sampling techniques used in one dimension, similarly subject to the Nyquist condition and suitable for processing via the discrete Fourier transform. The challenges of obtaining high-resolution spectral estimates from short data records using the DFT were already well understood, however. Despite techniques such as linear prediction extrapolation, the achievable resolution in the indirect dimensions is limited by practical constraints on measuring time. The advent of non-Fourier methods of spectrum analysis capable of processing nonuniformly sampled data has led to an explosion in the development of novel sampling strategies that avoid the limits on resolution and measurement time imposed by uniform sampling. The first part of this review discusses the many approaches to data sampling in multidimensional NMR, the second part highlights commonly used methods for signal processing of such data, and the review concludes with a discussion of other approaches to speeding up data acquisition in NMR. PMID:25456315
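The Nyquist condition mentioned above fixes the largest usable sampling interval (dwell time) from the spectral width. A small sketch, assuming the usual NMR convention that quadrature (complex) detection allows dt = 1/sw while real-only sampling requires dt = 1/(2·sw):

```python
def max_dwell_time(spectral_width_hz, complex_sampling=True):
    """Largest sampling interval that still satisfies the Nyquist
    criterion for a given spectral width (Hz)."""
    if complex_sampling:
        return 1.0 / spectral_width_hz          # quadrature detection
    return 1.0 / (2.0 * spectral_width_hz)      # real-only sampling

# e.g. a 10 kHz spectral width with quadrature detection:
dt = max_dwell_time(10_000.0)   # 100 microseconds
```

Any uniform dwell time longer than this aliases signal outside the spectral window back into it, which is exactly the constraint that nonuniform sampling schemes are designed to relax in the indirect dimensions.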
Stirling, Aaron D; Moran, Neil R; Kelly, Michael E; Ridgway, Paul F; Conlon, Kevin C
2017-10-01
Using outcomes defined by the revised Atlanta classification, we compare absolute values of C-reactive protein (CRP) with interval changes in CRP for severity stratification in acute pancreatitis (AP). A retrospective study of all first-incidence AP was conducted over a 5-year period. The interval change in CRP values from admission to days 1, 2, and 3 was compared against the absolute values. Receiver-operator characteristic (ROC) curves and likelihood ratios (LRs) were used to compare the ability to predict severe and mild disease. 337 cases of first-incidence AP were included in our analysis. ROC curve analysis identified the second day as the most useful time for repeat CRP measurement. A CRP interval change >90 mg/dL at 48 h (+LR 2.15, -LR 0.26) was equivalent to an absolute value of >150 mg/dL within 48 h (+LR 2.32, -LR 0.25). The optimal cut-off for absolute CRP based on the new, more stringent definition of severity was >190 mg/dL (+LR 2.72, -LR 0.24). Interval change in CRP is a measure comparable to absolute CRP in the prognostication of AP severity. This study suggests that a rise of >90 mg/dL from admission or an absolute value of >190 mg/dL at 48 h predicts severe disease with the greatest accuracy. Copyright © 2017 International Hepato-Pancreato-Biliary Association Inc. Published by Elsevier Ltd. All rights reserved.
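The positive and negative likelihood ratios quoted above follow directly from a cut-off's sensitivity and specificity. A minimal sketch; the sensitivity/specificity pair used here is hypothetical, not taken from the study:

```python
def likelihood_ratios(sensitivity, specificity):
    # Diagnostic likelihood ratios of a binary cut-off:
    # +LR = sens / (1 - spec),  -LR = (1 - sens) / spec
    pos_lr = sensitivity / (1.0 - specificity)
    neg_lr = (1.0 - sensitivity) / specificity
    return pos_lr, neg_lr

# Hypothetical operating point for illustration only
pos_lr, neg_lr = likelihood_ratios(0.80, 0.65)
```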
Yao, X; Anderson, D L; Ross, S A; Lang, D G; Desai, B Z; Cooper, D C; Wheelan, P; McIntyre, M S; Bergquist, M L; MacKenzie, K I; Becherer, J D; Hashim, M A
2008-01-01
Background and purpose: Drug-induced prolongation of the QT interval can lead to torsade de pointes, a life-threatening ventricular arrhythmia. Finding appropriate assays from among the plethora of options available to reliably predict this serious adverse effect in humans remains a challenging issue for the discovery and development of drugs. The purpose of the present study was to develop and verify a reliable and relatively simple approach for assessing, during preclinical development, the propensity of drugs to prolong the QT interval in humans. Experimental approach: Sixteen marketed drugs from various pharmacological classes with a known incidence—or lack thereof—of QT prolongation in humans were examined in a hERG (human ether-à-go-go-related gene) patch-clamp assay and an anaesthetized guinea-pig assay for QT prolongation using specific protocols. Drug concentrations in perfusates from hERG assays and plasma samples from guinea-pigs were determined using liquid chromatography-mass spectrometry. Key results: Various pharmacological agents that inhibit hERG currents prolong the QT interval in anaesthetized guinea-pigs in a manner similar to that seen in humans and at comparable drug exposures. Several compounds not associated with QT prolongation in humans failed to prolong the QT interval in this model. Conclusions and implications: Analysis of hERG inhibitory potency in conjunction with drug exposures and QT interval measurements in anaesthetized guinea-pigs can reliably predict, during preclinical drug development, the risk of human QT prolongation. A strategy is proposed for mitigating the risk of QT prolongation of new chemical entities during early lead optimization. PMID:18587422
Finding Every Root of a Broad Class of Real, Continuous Functions in a Given Interval
NASA Technical Reports Server (NTRS)
Tausworthe, Robert C.; Wolgast, Paul A.
2011-01-01
One of the most pervasive needs within the Deep Space Network (DSN) Metric Prediction Generator (MPG) view period event generation is that of finding solutions to given occurrence conditions. While the general form of an equation expresses equivalence between its left-hand and right-hand expressions, the traditional treatment of the subject subtracts the two sides, leaving an expression of the form f(x) = 0. Values of the independent variable x satisfying this condition are roots, or solutions. Generally speaking, there may be no solutions, a unique solution, multiple solutions, or a continuum of solutions to a given equation. In particular, all view period events are modeled as zero crossings of various metrics; for example, the time at which the elevation of a spacecraft reaches its maximum value, as viewed from a Deep Space Station (DSS), is found by locating the point at which the derivative of the elevation function becomes zero. Moreover, each event type may have several occurrences within a given time interval of interest. For example, a spacecraft in a low Moon orbit will experience several possible occultations per day, each of which must be located in time. The MPG is charged with finding all specified event occurrences that take place within a given time interval (or 'pass'), without any special clues from operators as to when they may occur, for the entire spectrum of missions undertaken by the DSN. For each event type, the event metric function is a known form that can be computed for any instant within the interval. A method has been created for a mathematical root finder capable of finding all roots of an arbitrary continuous function within a given interval, subject to very lenient, parameterized assumptions. One assumption is that adjacent roots are separated by at least a given amount, xGuard.
Any point whose function value is less than ef in magnitude is considered to be a root, and the function values at distances xGuard away from a root are larger than ef, unless there is another root located in this vicinity. A root is considered found if, during iteration, two root candidates differ by less than a pre-specified ex, and the optimum cubic polynomial matching the function at the interval endpoints and at two interior points (to within a relative error fraction L at its midpoint) is reliable in indicating whether the function has extrema within the interval. The robustness of this method depends solely on the choice of these four parameters that control the search. The roots of discontinuous functions were also found, but at degraded performance.
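The scan-and-refine strategy described above can be sketched as follows, assuming the xGuard minimum root separation and the |f| < ef root criterion. Plain bisection stands in for the MPG's cubic-polynomial refinement step, so this is an illustrative simplification, not the flight algorithm:

```python
import math

def find_all_roots(f, a, b, x_guard, ef=1e-12, ex=1e-10):
    # Scan [a, b] at a step finer than the minimum root separation
    # x_guard, bracket sign changes, then refine each bracket by
    # bisection until the candidates differ by less than ex.
    step = x_guard / 4.0
    roots = []
    x0, f0 = a, f(a)
    x = a + step
    while x0 < b:
        x1 = min(x, b)
        f1 = f(x1)
        if abs(f0) < ef:
            # |f| below ef counts as a root outright
            if not roots or x0 - roots[-1] > x_guard:
                roots.append(x0)
        elif f0 * f1 < 0:
            lo, hi, flo = x0, x1, f0
            while hi - lo > ex:
                mid = 0.5 * (lo + hi)
                fm = f(mid)
                if flo * fm <= 0:
                    hi = mid
                else:
                    lo, flo = mid, fm
            r = 0.5 * (lo + hi)
            if not roots or r - roots[-1] > x_guard:
                roots.append(r)
        x0, f0 = x1, f1
        x += step
    return roots

# sin(x) has exactly two roots, pi and 2*pi, inside (0.5, 7.0)
roots = find_all_roots(math.sin, 0.5, 7.0, x_guard=0.5)
```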
NASA Technical Reports Server (NTRS)
Grimes-Ledesma, Lorie; Murthy, Pappu L. N.; Phoenix, S. Leigh; Glaser, Ronald
2007-01-01
In conjunction with a recent NASA Engineering and Safety Center (NESC) investigation of flight worthiness of Kevlar Overwrapped Composite Pressure Vessels (COPVs) on board the Orbiter, two stress rupture life prediction models were proposed independently by Phoenix and by Glaser. In this paper, the use of these models to determine the system reliability of 24 COPVs currently in service on board the Orbiter is discussed. The models are briefly described, compared to each other, and model parameters and parameter uncertainties are also reviewed to understand confidence in reliability estimation as well as the sensitivities of these parameters in influencing overall predicted reliability levels. Differences and similarities in the various models will be compared via stress rupture reliability curves (stress ratio vs. lifetime plots). Also outlined will be the differences in the underlying model premises, and predictive outcomes. Sources of error and sensitivities in the models will be examined and discussed based on sensitivity analysis and confidence interval determination. Confidence interval results and their implications will be discussed for the models by Phoenix and Glaser.
Chonchaiya, Weerasak; Tardif, Twila; Mai, Xiaoqin; Xu, Lin; Li, Mingyan; Kaciroti, Niko; Kileny, Paul R; Shao, Jie; Lozoff, Betsy
2013-03-01
Auditory processing capabilities at the subcortical level have been hypothesized to impact an individual's development of both language and reading abilities. The present study examined whether auditory processing capabilities relate to language development in healthy 9-month-old infants. Participants were 71 infants (31 boys and 40 girls) with both Auditory Brainstem Response (ABR) and language assessments. At 6 weeks and/or 9 months of age, the infants underwent ABR testing using both a standard hearing screening protocol with 30 dB clicks and a second protocol using click pairs separated by 8, 16, and 64-ms intervals presented at 80 dB. We evaluated the effects of interval duration on ABR latency and amplitude elicited by the second click. At 9 months, language development was assessed via parent report on the Chinese Communicative Development Inventory - Putonghua version (CCDI-P). Wave V latency z-scores of the 64-ms condition at 6 weeks showed strong direct relationships with Wave V latency in the same condition at 9 months. More importantly, shorter Wave V latencies at 9 months showed strong relationships with the CCDI-P composite consisting of phrases understood, gestures, and words produced. Likewise, infants who had greater decreases in Wave V latencies from 6 weeks to 9 months had higher CCDI-P composite scores. Females had higher language development scores and shorter Wave V latencies at both ages than males. Interestingly, when the ABR Wave V latencies at both ages were taken into account, the direct effects of gender on language disappeared. In conclusion, these results support the importance of low-level auditory processing capabilities for early language acquisition in a population of typically developing young infants. Moreover, the auditory brainstem response in this paradigm shows promise as an electrophysiological marker to predict individual differences in language development in young children. © 2012 Blackwell Publishing Ltd.
NASA Technical Reports Server (NTRS)
Sohn, Ki-Hyeon; Reshotko, Eli
1991-01-01
A detailed investigation to document momentum and thermal development of boundary layers undergoing natural transition on a heated flat plate was performed. Experimental results of both overall and conditionally sampled characteristics of laminar, transitional, and low Reynolds number turbulent boundary layers are presented. Measurements were acquired in a low-speed, closed-loop wind tunnel with a freestream velocity of 100 ft/s and zero pressure gradient over a range of freestream turbulence intensities (TI) from 0.4 to 6 percent. The distributions of skin friction, heat transfer rate and Reynolds shear stress were all consistent with previously published data. Reynolds analogy factors for R_theta < 2300 were found to be well predicted by laminar and turbulent correlations which accounted for an unheated starting length. The measured laminar value of the Reynolds analogy factor was as much as 53 percent higher than the Pr^(-2/3) value. A small dependence of turbulent results on TI was observed. Conditional sampling performed in the transitional boundary layer indicated the existence of a near-wall drop in intermittency, pronounced at certain low intermittencies, which is consistent with the cross-sectional shape of turbulent spots observed by others. Non-turbulent intervals were observed to possess large magnitudes of near-wall unsteadiness, and turbulent intervals had peak values as much as 50 percent higher than were measured at fully turbulent stations. Non-turbulent and turbulent profiles in transitional boundary layers cannot be simply treated as Blasius and fully turbulent profiles, respectively. The boundary layer spectra indicate the predicted selective amplification of T-S waves for TI of approximately 0.4 percent. However, for TI of approximately 0.8 and 1.1 percent, T-S waves are localized very near the wall and do not play a dominant role in the transition process.
The sinking of the El Faro: predicting real world rogue waves during Hurricane Joaquin.
Fedele, Francesco; Lugni, Claudio; Chawla, Arun
2017-09-11
We present a study on the prediction of rogue waves during the 1-hour sea state of Hurricane Joaquin when the Merchant Vessel El Faro sank east of the Bahamas on October 1, 2015. High-resolution hindcast of hurricane-generated sea states and wave simulations are combined with novel probabilistic models to quantify the likelihood of rogue wave conditions. The data suggests that the El Faro vessel was drifting at an average speed of approximately 2.5 m/s prior to its sinking. As a result, we estimated that the probability that El Faro encounters a rogue wave whose crest height exceeds 14 meters while drifting over a time interval of 10 (50) minutes is ~1/400 (1/130). The largest simulated wave is generated by the constructive interference of elementary spectral components (linear dispersive focusing) enhanced by bound nonlinearities. Not surprisingly then, its characteristics are quite similar to those displayed by the Andrea, Draupner and Killard rogue waves.
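Encounter probabilities of the kind quoted above can be illustrated with the standard independence approximation P = 1 - (1 - p)^n over the n waves met during the exposure interval. All numbers below are hypothetical placeholders, not the paper's hindcast values, and the real analysis uses far richer probabilistic models:

```python
def encounter_probability(p_per_wave, duration_s, mean_period_s):
    # Probability of at least one exceedance among the ~n waves
    # encountered during the exposure interval, assuming independent
    # waves (a simplification of the paper's space-time models)
    n = duration_s / mean_period_s
    return 1.0 - (1.0 - p_per_wave) ** n

# Hypothetical per-wave crest-exceedance probability and a 10 s mean
# wave period, over a 10-minute drift
p10 = encounter_probability(5e-5, 600.0, 10.0)
```

Doubling the exposure time roughly doubles the encounter probability while p_per_wave remains small, which is why the 50-minute figure in the abstract is so much larger than the 10-minute one.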
Generic buckling curves for specially orthotropic rectangular plates
NASA Technical Reports Server (NTRS)
Brunnelle, E. J.; Oyibo, G. A.
1983-01-01
Using a double affine transformation, the classical buckling equation for specially orthotropic plates and the corresponding virtual work theorem are presented in a particularly simple fashion. These dual representations are characterized by a single material constant, called the generalized rigidity ratio, whose range is predicted to be the closed interval from 0 to 1 (if this prediction is correct, then the numerical results using a ratio greater than 1 in the specially orthotropic plate literature are incorrect); when natural boundary conditions are considered, a generalized Poisson's ratio is introduced. Thus the buckling results are valid for any specially orthotropic material; hence the curves presented in the text are generic rather than specific. The solution trends are twofold; the buckling coefficients decrease with decreasing generalized rigidity ratio and, when applicable, they decrease with increasing generalized Poisson's ratio. Since the isotropic plate is one limiting case of the above analysis, it is also true that isotropic buckling coefficients decrease with increasing Poisson's ratio.
Real-time flutter boundary prediction based on time series models
NASA Astrophysics Data System (ADS)
Gu, Wenjing; Zhou, Li
2018-03-01
For the purpose of predicting the flutter boundary in real time during flutter flight tests, two time series models accompanied by corresponding stability criteria are adopted in this paper. The first method treats a long nonstationary response signal as many contiguous intervals, each considered to be stationary, and a traditional AR model is then established to represent each interval of the signal sequence. The second employs a time-varying AR model to characterize actual measured signals in a flutter test with progression variable speed (FTPVS). To predict the flutter boundary, stability parameters are formulated from the identified AR coefficients combined with Jury's stability criterion. The behavior of the parameters is examined using both simulated and wind-tunnel experiment data. The results demonstrate that both methods show significant effectiveness in predicting the flutter boundary at lower speed levels. A comparison between the two methods is also given in this paper.
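For a second-order AR model, the Jury stability check applied to identified coefficients reduces to two inequalities on the characteristic polynomial. A minimal sketch under that assumption; the paper's formulation of the stability parameters may differ:

```python
def ar2_is_stable(a1, a2):
    # AR(2) model y[t] = a1*y[t-1] + a2*y[t-2] + e[t]; its characteristic
    # polynomial is z^2 - a1*z - a2.  Jury's test for z^2 + c1*z + c0
    # places both roots strictly inside the unit circle iff
    #   |c0| < 1  and  |c1| < 1 + c0
    c1, c0 = -a1, -a2
    return abs(c0) < 1.0 and abs(c1) < 1.0 + c0

stable = ar2_is_stable(0.5, 0.3)     # roots 0.85 and -0.35 -> stable
unstable = ar2_is_stable(1.2, 0.0)   # root at z = 1.2 -> unstable
```

Tracking how close the identified coefficients come to violating these inequalities as airspeed increases is one way to turn the AR fit into a flutter-margin indicator.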
Developing a predictive tropospheric ozone model for Tabriz
NASA Astrophysics Data System (ADS)
Khatibi, Rahman; Naghipour, Leila; Ghorbani, Mohammad A.; Smith, Michael S.; Karimi, Vahid; Farhoudi, Reza; Delafrouz, Hadi; Arvanaghi, Hadi
2013-04-01
Predictive ozone models are becoming indispensable tools by providing a capability for pollution alerts to serve people who are vulnerable to the risks. We have developed a tropospheric ozone prediction capability for Tabriz, Iran, by using the following five modeling strategies: three regression-type methods, Multiple Linear Regression (MLR), Artificial Neural Networks (ANNs), and Gene Expression Programming (GEP); and two auto-regression-type models, Nonlinear Local Prediction (NLP), which implements chaos theory, and Auto-Regressive Integrated Moving Average (ARIMA) models. The regression-type modeling strategies explain the data in terms of temperature, solar radiation, dew point temperature, and wind speed; the auto-regression-type strategies regress present ozone values on their past values. The ozone time series are available at various time intervals, including hourly intervals, from August 2010 to March 2011. The results for the MLR, ANN and GEP models are not overly good, but those produced by NLP and ARIMA are promising for establishing a forecasting capability.
Effects of dopamine D1 modulation of the anterior cingulate cortex in a fear conditioning procedure
Pezze, M.A.; Marshall, H.J.; Domonkos, A.; Cassaday, H.J.
2016-01-01
The anterior cingulate cortex (AC) component of the medial prefrontal cortex (mPFC) has been implicated in attention and working memory as measured by trace conditioning. Since dopamine (DA) is a key modulator of mPFC function, the present study evaluated the role of DA receptor agents in rat AC, using trace fear conditioning. A conditioned stimulus (CS, noise) was followed by an unconditioned stimulus (US, shock) with or without a 10 s trace interval interposed between these events in a between-subjects design. Conditioned suppression of drinking was assessed in response to presentation of the CS or an experimental background stimulus (flashing lights, previously presented for the duration of the conditioning session). The selective D1 agonist SKF81297 (0.05 μg/side) or D1 antagonist SCH23390 (0.5 μg/side) was administered by intra-cerebral microinfusion directly into AC. It was predicted that either of these manipulations should be sufficient to impair trace (but not delay) conditioning. Counter to expectation, there was no effect of DA D1 modulation on trace conditioning as measured by suppression to the noise CS. However, rats infused with SKF81297 acquired stronger conditioned suppression to the experimental background stimulus than those infused with SCH23390 or saline. Thus, the DA D1 agonist SKF81297 increased conditioned suppression to the contextual background light stimulus but was otherwise without effect on fear conditioning. PMID:26343307
Remote Measurement of Atmospheric Temperatures By Raman Lidar
NASA Technical Reports Server (NTRS)
Salzman, Jack A.; Coney, Thom A.
1973-01-01
The Raman shifted return of a lidar, or optical radar, system has been utilized to make atmospheric temperature measurements. These measurements were made along a horizontal path at temperatures between -20 C and +30 C and at ranges of about 100 meters. The temperature data were acquired by recording the intensity ratio of two portions of the Raman spectrum which were simultaneously sampled from a preset range. The lidar unit employed in this testing consisted of a 4 joule-10ppm laser operating at 694.3 nm, a 10-inch Schmidt-Cassegrain telescope, and a system of time-gated detection and signal processing electronics. The detection system processed three return signal wavelength intervals - two intervals along the rotational Raman scattered spectrum and one interval centered at the Rayleigh-Mie scattered wavelength. The wavelength intervals were resolved by using a pellicle beam splitter and three optical interference filters. Raman return samples were taken from one discrete range segment during each test shot and the signal intensities were displayed in digital format. The Rayleigh-Mie techniques. The test site utilized to evaluate this measurement technique encompassed a total path length of 200 meters. Major components of the test site included a trailer-van housing the lidar unit, a controlled environment test zone, and a beam terminator. The control zone which was located about 100 meters from the trailer was 12 meters in length, 2.4 meters in diameter, and was equipped with hinged doors at each end. The temperature of the air inside the zone could be either raised or lowered with respect to ambient air through the use of infrared heaters or a liquid-nitrogen cooling system. Conditions inside the zone were continuously monitored with a thermocouple rake assembly. The test path length was terminated by a 1.2 meter square array of energy absorbing cones and a flat black screen. 
Tests were initially conducted at strictly ambient conditions utilizing the normal outside air temperatures as a test parameter. These tests provided a calibration of the Raman intensity ratio as a function of temperature for the particular optical-filter arrangement used in this system while also providing a test of the theoretical prediction formulated in the design of the system. Later tests utilized zone temperatures above and below ambient to provide temperature gradient data. These tests indicate that ten shots, or one minute of data acquisition, from a 100 meter range can provide absolute temperature measurements with an accuracy of ±3.0 C and a range resolution of about 5 meters. Because this measurement accuracy compares well with that predicted for this particular unit, it is suggested that a field-application system could be built with significant improvements in both absolute accuracy and range.
Long-term prediction of creep strains of mineral wool slabs under constant compressive stress
NASA Astrophysics Data System (ADS)
Gnip, Ivan; Vaitkus, Saulius; Keršulis, Vladislovas; Vėjelis, Sigitas
2012-02-01
The results obtained in determining the creep strain of mineral wool slabs under compressive stress, used for insulating flat roofs and facades, cast-in-place floors, curtain and external basement walls, as well as for sound insulation of floors, are presented. The creep strain tests were conducted under a compressive stress of σ_c = 0.35·σ_10%. Interval forecasting of creep strain was performed by extrapolating the creep behaviour, approximated in accordance with EN 1606 by a power equation and reduced to a linear form using logarithms, for a lead time of 10 years. The widening of the confidence interval due to the discounting of the prediction data, i.e., a decrease in their informativity, was allowed for by an additional coefficient. Analysis of the experimental data obtained from tests of 65 and 122 days duration showed that the prediction of creep strains for 10 years can be made based on data obtained in experiments with durations shorter than the 122 days specified by EN 13162. Interval prediction of creep strains (with a confidence probability of 90%) was based on using the mean square deviation of the actual direct observations of creep strains in logarithmic form to obtain the linear trend in the retrospective area.
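The EN 1606 extrapolation scheme described above, a power law fitted in log-log coordinates and projected to the 10-year lead time, can be sketched as follows. The data are synthetic and exactly power-law for illustration, and the confidence-interval machinery with the additional discounting coefficient is omitted:

```python
import math

def fit_power_law(times_days, strains):
    # EN 1606 creep model eps(t) = a * t**b, linearised as
    # log(eps) = log(a) + b*log(t) and fitted by ordinary least squares
    xs = [math.log(t) for t in times_days]
    ys = [math.log(e) for e in strains]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

def predict_creep(a, b, t_days):
    return a * t_days ** b

# Synthetic 122-day test record following eps(t) = 0.8 * t**0.2
t = [1, 2, 5, 10, 20, 50, 122]
eps = [0.8 * ti ** 0.2 for ti in t]
a, b = fit_power_law(t, eps)
eps_10y = predict_creep(a, b, 3650)  # extrapolate to ~10 years
```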
The 32nd CDC: System identification using interval dynamic models
NASA Technical Reports Server (NTRS)
Keel, L. H.; Lew, J. S.; Bhattacharyya, S. P.
1992-01-01
Motivated by the recent explosive development of results in the area of parametric robust control, a new technique to identify a family of uncertain systems is introduced. The new technique takes frequency domain input and output data obtained from experimental test signals and produces an 'interval transfer function' that contains the complete frequency domain behavior with respect to the test signals. This interval transfer function is one of the key concepts in the parametric robust control approach, and identification with such an interval model allows one to predict the worst-case performance and stability margins using recent results on interval systems. The algorithm is illustrated by applying it to an 18-bay Mini-Mast truss structure.
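One simple way to probe the frequency-domain behavior of an interval transfer function is to evaluate the response at the corners of the coefficient box. This is a heuristic illustration, not the paper's identification algorithm, and corner sampling is not a guaranteed worst-case bound in general:

```python
import itertools

def corner_responses(num_bounds, den_bounds, w):
    # Evaluate |G(jw)| = |N(jw)/D(jw)| at every vertex ("corner") of the
    # coefficient box; each bound is a (low, high) pair for one
    # polynomial coefficient, lowest order first.
    s = 1j * w
    mags = []
    for num in itertools.product(*num_bounds):
        n = sum(c * s ** k for k, c in enumerate(num))
        for den in itertools.product(*den_bounds):
            d = sum(c * s ** k for k, c in enumerate(den))
            mags.append(abs(n / d))
    return min(mags), max(mags)

# Hypothetical first-order interval model G(s) = b0 / (a0 + a1*s)
lo, hi = corner_responses([(0.9, 1.1)], [(0.9, 1.1), (1.0, 1.0)], w=1.0)
```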
Poor sleep quality predicts deficient emotion information processing over time in early adolescence.
Soffer-Dudek, Nirit; Sadeh, Avi; Dahl, Ronald E; Rosenblat-Stein, Shiran
2011-11-01
There is deepening understanding of the effects of sleep on emotional information processing. Emotion information processing is a key aspect of social competence, which undergoes important maturational and developmental changes in adolescence; however, most research in this area has focused on adults. Our aim was to test the links between sleep and emotion information processing during early adolescence. Sleep and facial information processing were assessed objectively during 3 assessment waves, separated by 1-year lags. Data were obtained in natural environments: sleep was assessed in home settings, and facial information processing was assessed at school. 94 healthy children (53 girls, 41 boys), aged 10 years at Time 1. N/A. Facial information processing was tested under neutral (gender identification) and emotional (emotional expression identification) conditions. Sleep was assessed in home settings using actigraphy for 7 nights at each assessment wave. Waking > 5 min was considered a night awakening. Using multilevel modeling, elevated night awakenings and decreased sleep efficiency significantly predicted poor performance only in the emotional information processing condition (e.g., b = -1.79, SD = 0.52, confidence interval: lower boundary = -2.82, upper boundary = -0.076, t(416.94) = -3.42, P = 0.001). Poor sleep quality is associated with compromised emotional information processing during early adolescence, a sensitive period in socio-emotional development.
Preliminary dynamic tests of a flight-type ejector
NASA Technical Reports Server (NTRS)
Drummond, Colin K.
1992-01-01
A thrust augmenting ejector was tested to provide experimental data to assist in the assessment of theoretical models that predict duct and ejector fluid-dynamic characteristics. Eleven full-scale thrust augmenting ejector tests were conducted in which a rapid increase in the ejector nozzle pressure ratio was effected through a unique facility bypass/burst-disk subsystem. The present work examines two cases representative of the test performance window. In the first case, the primary nozzle pressure ratio (NPR) increased 36 percent from one unchoked (NPR = 1.29) primary flow condition to another (NPR = 1.75) over a 0.15 second interval. The second case involves choked primary flow conditions, where a 17 percent increase in primary nozzle flowrate (from NPR = 2.35 to NPR = 2.77) occurred over approximately 0.1 seconds. Although the real-time signal measurements support qualitative remarks on ejector performance, extracting quantitative ejector dynamic response was impeded by excessive aerodynamic noise and thrust stand dynamic (resonance) characteristics. It does appear, however, that a quasi-steady performance assumption is valid for this model with primary nozzle pressure increased on the order of 50 lb_f/s. Transient signal treatment of the present dataset is discussed, and initial interpretations of the results are compared with theoretical predictions for a similar Short Takeoff and Vertical Landing (STOVL) ejector model.
A numerical forecast model for road meteorology
NASA Astrophysics Data System (ADS)
Meng, Chunlei
2017-05-01
A fine-scale numerical model for road surface parameter prediction (BJ-ROME) is developed based on the Common Land Model. The model is validated using in situ observation data measured by the ROSA road weather stations of Vaisala Company, Finland. BJ-ROME not only takes into account road surface factors, such as imperviousness, relatively low albedo, high heat capacity, and high heat conductivity, but also considers the influence of urban anthropogenic heat, impervious surface evaporation, and urban land-use/land-cover changes. The forecast time span and the update interval of BJ-ROME in operational use are 24 and 3 h, respectively. The validation results indicate that BJ-ROME can successfully simulate the diurnal variation of road surface temperature under both clear-sky and rainfall conditions. BJ-ROME can also simulate road water and snow depth well if artificial removal is taken into account. Road surface energy balance on rainy days is quite different from that in clear-sky conditions. Road evaporation cannot be neglected in road surface water cycle research. The results of sensitivity analysis show that the solar radiation correction coefficient, asphalt depth, and asphalt heat conductivity are important parameters in road surface temperature simulation. The prediction results could be used as a reference for a maintenance decision support system to mitigate traffic jams and urban waterlogging, especially in large cities.
NASA Technical Reports Server (NTRS)
Lo, Ching F.
1999-01-01
The integration of Radial Basis Function Networks and Back Propagation Neural Networks with Multiple Linear Regression has been accomplished to map nonlinear response surfaces over a wide range of independent variables in the process of the Modern Design of Experiments. The integrated method is capable of estimating precision intervals, including confidence and prediction intervals. The power of the innovative method has been demonstrated by applying it to a set of wind tunnel test data in the construction of a response surface and the estimation of precision intervals.
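The distinction between confidence and prediction intervals can be illustrated for ordinary simple linear regression, the baseline the abstract builds on. This is a sketch only, not the integrated RBF/BPNN method: a normal quantile is used for brevity where a Student-t quantile is the proper choice for small samples.

```python
import math
from statistics import NormalDist

def prediction_interval(xs, ys, x_new, level=0.95):
    # Simple-linear-regression prediction interval for one new
    # observation at x_new (normal quantile approximation).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    a = my - b * mx
    resid = [y - (a + b * x) for x, y in zip(xs, ys)]
    s2 = sum(r * r for r in resid) / (n - 2)   # residual variance
    # The leading 1 inside the radical widens a confidence interval on
    # the mean response into a prediction interval for a new point
    se = math.sqrt(s2 * (1.0 + 1.0 / n + (x_new - mx) ** 2 / sxx))
    z = NormalDist().inv_cdf(0.5 + level / 2.0)
    yhat = a + b * x_new
    return yhat - z * se, yhat, yhat + z * se

lo, yhat, hi = prediction_interval([1, 2, 3, 4, 5],
                                   [1.1, 1.9, 3.2, 3.9, 5.1], 3.0)
```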
Amat, Juan A; Hortas, Francisco; Arroyo, Gonzalo M; Rendón, Miguel A; Ramírez, José M; Rendón-Martos, Manuel; Pérez-Hurtado, Alejandro; Garrido, Araceli
2007-06-01
Greater flamingos in southern Spain foraged in areas distant from a breeding site, spending 4-6 days in foraging areas between successive visits to the colony to feed their chicks. During four years, we took blood samples from chicks to ascertain whether there were interannual variations in several blood parameters, indicative of food quality and feeding frequencies. When the chicks were captured, 20-31% of them had their crops empty, indicating that not all chicks were fed daily. Additional evidence of variations in feeding frequencies was obtained from a principal component analysis (PCA) on plasma chemistry values, which also indicated that there were annual variations in the quality of food received by chicks. The association of cholesterol and glucose with some PC axes indicated that some chicks were experiencing fasting periods. Of all plasma metabolites considered, cholesterol was the best one to predict body condition. Greater flamingo chicks experiencing longer fasting intervals, as suggested by higher plasma levels of cholesterol, were in lower body condition.
Aldars-García, Laila; Berman, María; Ortiz, Jordi; Ramos, Antonio J; Marín, Sonia
2018-06-01
The probability of growth and aflatoxin B1 (AFB1) production of 20 isolates of Aspergillus flavus were studied using a full factorial design with eight water activity levels (0.84-0.98 a_w) and six temperature levels (15-40 °C). Binary data obtained from growth studies were modelled using linear logistic regression analysis as a function of temperature, water activity and time for each isolate. In parallel, AFB1 was extracted at different times from newly formed colonies (up to 20 mm in diameter). Although a total of 950 AFB1 values over time for all conditions studied were recorded, they were not considered to be enough to build probability models over time, and therefore, only models at 30 days were built. The confidence intervals of the regression coefficients of the probability-of-growth models showed some differences among the 20 growth models. Further, to assess the growth/no-growth and AFB1/no-AFB1 production boundaries, 0.05 and 0.5 probabilities were plotted at 30 days for all of the isolates. The boundaries for growth and AFB1 showed that, in general, the conditions for growth were wider than those for AFB1 production. The probability of growth and AFB1 production seemed to be less variable among isolates than AFB1 accumulation. Apart from the AFB1 production probability models, using growth probability models for AFB1 probability predictions could be a suitable, albeit conservative, alternative. Predictive mycology should include a number of isolates to generate data to build predictive models and take into account the genetic diversity of the species, thus making predictions as similar as possible to real fungal food contamination. Copyright © 2017 Elsevier Ltd. All rights reserved.
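A linear logistic model of growth probability as a function of temperature, water activity and time can be sketched as follows. The coefficients are purely illustrative placeholders, not fitted to the study's isolates; the 0.5-probability boundary corresponds to the logit crossing zero:

```python
import math

def growth_probability(temp_c, aw, days, beta):
    # Linear-logistic model: logit(p) = b0 + b1*T + b2*aw + b3*t
    b0, b1, b2, b3 = beta
    z = b0 + b1 * temp_c + b2 * aw + b3 * days
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients chosen so warm, wet, long-incubation
# conditions land well inside the growth region
beta = (-50.0, 0.5, 45.0, 0.1)
p_grow = growth_probability(25.0, 0.90, 10.0, beta)   # favourable
p_no = growth_probability(15.0, 0.84, 10.0, beta)     # marginal
```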
Can orthoses and navicular drop affect foot motion patterns during running?
Eslami, Mansour; Ferber, Reed
2013-07-01
The purpose of this study was to examine the influence of semi-rigid foot orthoses on forefoot-rearfoot joint coupling patterns in individuals with different navicular drop measures during heel-toe running. Ten trials were collected from twenty-three male subjects who ran shod at a slow pace (170 steps per minute; 2.23 m/s) with and without semi-rigid orthoses. Forefoot-rearfoot coupling motions were assessed using a vector coding technique during four intervals across the first 50% of stance. Subjects were divided into two groups based on navicular drop measures. A three-way ANOVA was performed to examine the interaction and main effects of stance interval, orthoses condition, and navicular drop (p<0.05). There was no three-way interaction among stance interval, orthoses condition, and navicular drop (p=0.14), whereas an interaction effect of orthoses condition and stance interval was observed (p=0.01; effect size=0.74). Forefoot-rearfoot coupling motion in the no-orthoses condition increased from heel-strike to foot-flat at a faster rate than in the orthoses condition (p=0.02). Foot orthoses significantly decreased the forefoot-rearfoot joint coupling angle by reducing forefoot frontal-plane motion relative to the rearfoot. Navicular drop measures did not influence joint coupling relationships between the forefoot and rearfoot during the first 50% of stance, regardless of orthotic condition. Copyright © 2012 Sports Medicine Australia. All rights reserved.
Fieuws, Steffen; Willems, Guy; Larsen-Tangmose, Sara; Lynnerup, Niels; Boldsen, Jesper; Thevissen, Patrick
2016-03-01
When an estimate of age is needed, multiple indicators are typically present, as found in skeletal or dental information. There exists a vast literature on approaches to estimate age from such multivariate data. Application of Bayes' rule has been proposed to overcome drawbacks of classical regression models but becomes less trivial as soon as the number of indicators increases. Each of the age indicators can lead to a different point estimate ("the most plausible value for age") and a prediction interval ("the range of possible values"). The major challenge in the combination of multiple indicators is not the calculation of a combined point estimate for age but the construction of an appropriate prediction interval: ignoring the correlation between the age indicators results in intervals that are too small. Boldsen et al. (2002) presented an ad hoc procedure to construct an approximate confidence interval without the need to model the multivariate correlation structure between the indicators. The aim of the present paper is to bring this pragmatic approach to wider attention and to evaluate its performance in a practical setting. This is all the more needed since recent publications ignore the need for interval estimation. To illustrate and evaluate the method, third molar scores (Köhler et al., 1995) are used to estimate age in a dataset of 3200 male subjects in the juvenile age range.
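Why ignoring correlation makes combined intervals too small can be seen with a toy calculation (hypothetical numbers, not the Boldsen et al. procedure): for the average of k equally precise indicators whose errors share pairwise correlation rho, the standard error shrinks like 1/sqrt(k) only when rho = 0.

```python
import math

def combined_se(sigma, k, rho):
    """Standard error of the average of k equally precise age indicators
    whose errors share pairwise correlation rho."""
    return sigma * math.sqrt((1.0 + (k - 1) * rho) / k)

def interval(point, se, z=1.96):
    """Approximate 95% prediction interval around a combined point estimate."""
    return (point - z * se, point + z * se)
```

With sigma = 2 years and k = 4 indicators, the naive (rho = 0) interval is markedly narrower than the interval that honours a moderate correlation, which is exactly the over-optimism the ad hoc procedure is designed to avoid.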
Gauss, T; Merckx, P; Brasher, C; Kavafyan, J; Le Bihan, E; Aussilhou, B; Belghiti, J; Mantz, J
2013-02-01
Perioperative coordination facilitates team communication and planning. The aim of this study was to determine how often deviation from predicted surgical conditions and a pre-established anaesthetic care plan occurred in major abdominal surgery, and whether this was associated with an increase in adverse clinical events. In this prospective observational study, weekly preoperative interdisciplinary team meetings were conducted according to a joint care plan checklist in a tertiary care centre in France. Any discordance with preoperative predictions and deviation from the care plan were noted. A link to the incidence of predetermined adverse intraoperative events was investigated. Intraoperative adverse clinical events (ACEs) occurred in 15% of all cases and were associated with postoperative complications [relative risk (RR) = 1.5; 95% confidence interval, 1.1-2.2]. Quality of prediction of surgical procedural items was modest, with one in five to six items not correctly predicted. Discordant surgical prediction was associated with an increased incidence of ACEs. Deviation from the anaesthetic care plan occurred in around 13% of cases, was more frequent when surgical prediction was inaccurate (RR > 3), and was independently associated with ACEs (odds ratio 6). Surgery was more difficult than expected in up to one out of five cases. In a similar proportion, disagreement between preoperative care plans and observed clinical management was independently associated with an increased risk of adverse clinical events.
The Development of Storm Surge Ensemble Prediction System and Case Study of Typhoon Meranti in 2016
NASA Astrophysics Data System (ADS)
Tsai, Y. L.; Wu, T. R.; Terng, C. T.; Chu, C. H.
2017-12-01
Taiwan, located in a zone of potentially severe storm generation, is under threat of storm surge and associated inundation. Ensemble prediction helps forecasters characterize storm surge under uncertainty in storm track and intensity, and it also supports deterministic forecasting. In this study, the kernel of the ensemble prediction system is COMCOT-SURGE (COrnell Multi-grid COupled Tsunami Model - Storm Surge). COMCOT-SURGE solves the nonlinear shallow water equations in the open ocean and coastal regions with a nested-grid scheme and adopts a wet-dry-cell treatment to calculate the potential inundation area. To account for tide-surge interaction, the global TPXO 7.1 tide model provides the tidal boundary conditions. After a series of validations and case studies, COMCOT-SURGE has become an official operational system of the Central Weather Bureau (CWB) in Taiwan. Here, the strongest typhoon of 2016, Typhoon Meranti, is chosen as a case study. We adopt twenty ensemble members from the CWB WRF Ensemble Prediction System (CWB WEPS), whose members differ in microphysics, boundary-layer, cumulus, and surface parameterizations. From box-and-whisker results, the maximum observed storm surge fell within the interval between the first and third quartiles at more than 70% of gauge locations, e.g. Toucheng, Chengkung, and Jiangjyun. In conclusion, ensemble prediction can effectively help forecasters predict storm surge, especially under uncertainty in storm track and intensity.
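The box-and-whisker verification, checking whether each gauge's observed maximum surge falls between the ensemble's first and third quartiles, can be sketched as follows (an illustration, not the operational CWB code):

```python
def quartiles(values):
    """First and third quartiles by linear interpolation between order
    statistics (the convention numpy.percentile uses by default)."""
    s = sorted(values)
    def q(p):
        idx = p * (len(s) - 1)
        lo = int(idx)
        frac = idx - lo
        hi = min(lo + 1, len(s) - 1)
        return s[lo] * (1 - frac) + s[hi] * frac
    return q(0.25), q(0.75)

def iqr_hit_rate(ensembles, observations):
    """Fraction of gauges where the observed maximum surge lies within the
    ensemble's first-to-third-quartile interval."""
    hits = 0
    for members, obs in zip(ensembles, observations):
        q1, q3 = quartiles(members)
        hits += q1 <= obs <= q3
    return hits / len(observations)
```

A hit rate above 0.7 across gauges corresponds to the "more than 70% of gauge locations" result reported above.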
Tracking Temporal Hazard in the Human Electroencephalogram Using a Forward Encoding Model
2018-01-01
Abstract Human observers automatically extract temporal contingencies from the environment and predict the onset of future events. Temporal predictions are modeled by the hazard function, which describes the instantaneous probability for an event to occur given it has not occurred yet. Here, we tackle the question of whether and how the human brain tracks continuous temporal hazard on a moment-to-moment basis, and how flexibly it adjusts to strictly implicit variations in the hazard function. We applied an encoding-model approach to human electroencephalographic data recorded during a pitch-discrimination task, in which we implicitly manipulated temporal predictability of the target tones by varying the interval between cue and target tone (i.e. the foreperiod). Critically, temporal predictability either was driven solely by the passage of time (resulting in a monotonic hazard function) or was modulated to increase at intermediate foreperiods (resulting in a modulated hazard function with a peak at the intermediate foreperiod). Forward-encoding models trained to predict the recorded EEG signal from different temporal hazard functions were able to distinguish between experimental conditions, showing that implicit variations of temporal hazard bear tractable signatures in the human electroencephalogram. Notably, this tracking signal was reconstructed best from the supplementary motor area, underlining this area’s link to cognitive processing of time. Our results underline the relevance of temporal hazard to cognitive processing and show that the predictive accuracy of the encoding-model approach can be utilized to track abstract time-resolved stimuli. PMID:29740594
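The hazard function above has a simple discrete form: the probability of the event at a given foreperiod bin, conditioned on it not having occurred in an earlier bin. A minimal sketch with illustrative foreperiod distributions (not the study's stimuli):

```python
def hazard(probabilities):
    """Discrete hazard: P(event at bin i | no event before bin i).
    `probabilities` is the event-time distribution over foreperiod bins."""
    h, survived = [], 1.0
    for p in probabilities:
        h.append(p / survived if survived > 0 else 0.0)
        survived -= p
    return h
```

A uniform foreperiod distribution yields a monotonically rising hazard (the passage-of-time case), while concentrating probability mass at the intermediate foreperiod raises the hazard there, mirroring the monotonic versus modulated conditions in the experiment.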
Bogle, R.W.
1960-11-22
A radio ranging device is described which utilizes a superregenerative oscillator having alternate sending and receiving phases with an intervening ranging interval between said phases; means for varying said ranging interval; means, responsive to an on-range noise-reduction condition, for stopping said means for varying the ranging interval; and indicating means coupled to the ranging-interval varying means and calibrated in accordance with one-half the product of the ranging interval and the velocity of light, whereby the range is indicated.
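The calibration described, range as one-half the product of the ranging interval and the velocity of light, reflects the round trip of the signal and is simple to express:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_interval(ranging_interval_s):
    """Range indicated by the device: one-half the product of the ranging
    interval and the velocity of light (the signal travels out and back)."""
    return 0.5 * C * ranging_interval_s
```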
Music enhances performance and perceived enjoyment of sprint interval exercise.
Stork, Matthew J; Kwan, Matthew Y W; Gibala, Martin J; Martin Ginis, Kathleen A
2015-05-01
Interval exercise training can elicit physiological adaptations similar to those of traditional endurance training, but with reduced time. However, the intense nature of specific protocols, particularly the "all-out" efforts characteristic of sprint interval training (SIT), may be perceived as being aversive. The purpose of this study was to determine whether listening to self-selected music can reduce the potential aversiveness of an acute session of SIT by improving affect, motivation, and enjoyment, and to examine the effects of music on performance. Twenty moderately active adults (22 ± 4 yr) unfamiliar with interval exercise completed an acute session of SIT under two different conditions: music and no music. The exercise consisted of four 30-s "all-out" Wingate Anaerobic Test bouts on a cycle ergometer, separated by 4 min of rest. Peak and mean power output, RPE, affect, task motivation, and perceived enjoyment of the exercise were measured. Mixed-effects models were used to evaluate changes in dependent measures over time and between the two conditions. Peak and mean power over the course of the exercise session were higher in the music condition (coefficient = 49.72 [SE = 13.55] and coefficient = 23.65 [SE = 11.30]; P < 0.05). A significant time by condition effect emerged for peak power (coefficient = -12.31 [SE = 4.95]; P < 0.05). There were no between-condition differences in RPE, affect, or task motivation. Perceived enjoyment increased over time and was consistently higher in the music condition (coefficient = 7.00 [SE = 3.05]; P < 0.05). Music enhances in-task performance and enjoyment of an acute bout of SIT. Listening to music during intense interval exercise may be an effective strategy for facilitating participation in, and adherence to, this form of training.
Odonkor, Charles A; Schonberger, Robert B; Dai, Feng; Shelley, Kirk H; Silverman, David G; Barash, Paul G
2013-10-01
The primary aims of this study were to design prediction models based on a functional marker (preoperative gait speed) to predict readiness for home discharge time of 90 mins or less and to identify those at risk for unplanned admissions after elective ambulatory surgery. This prospective observational cohort study evaluated all patients scheduled for elective ambulatory surgery. Home discharge readiness and unplanned admissions were the primary outcomes. Independent variables included preoperative gait speed, heart rate, and total anesthesia time. The relationship between all predictors and each primary outcome was determined in separate multivariable logistic regression models. After adjustment for covariates, gait speed with adjusted odds ratio of 3.71 (95% confidence interval, 1.21-11.26), P = 0.02, was independently associated with early home discharge readiness of 90 mins or less. Importantly, gait speed dichotomized as greater or less than 1 m/sec predicted unplanned admissions, with odds ratio of 0.35 (95% confidence interval, 0.16-0.76, P = 0.008) for those with speeds 1 m/sec or greater in comparison with those with speeds less than 1 m/sec. In a separate model, history of cardiac surgery with adjusted odds ratio of 7.5 (95% confidence interval, 2.34-24.41; P = 0.001) was independently associated with unplanned admissions after elective ambulatory surgery, when other covariates were held constant. This study demonstrates the use of novel prediction models based on gait speed testing to predict early home discharge and to identify those patients at risk for unplanned admissions after elective ambulatory surgery.
Bouwer, Fleur L; Werner, Carola M; Knetemann, Myrthe; Honing, Henkjan
2016-05-01
Beat perception is the ability to perceive temporal regularity in musical rhythm. When a beat is perceived, predictions about upcoming events can be generated. These predictions can influence processing of subsequent rhythmic events. However, statistical learning of the order of sounds in a sequence can also affect processing of rhythmic events and must be differentiated from beat perception. In the current study, using EEG, we examined the effects of attention and musical abilities on beat perception. To ensure we measured beat perception and not absolute perception of temporal intervals, we used alternating loud and soft tones to create a rhythm with two hierarchical metrical levels. To control for sequential learning of the order of the different sounds, we used temporally regular (isochronous) and jittered rhythmic sequences. The order of sounds was identical in both conditions, but only the regular condition allowed for the perception of a beat. Unexpected intensity decrements were introduced on the beat and offbeat. In the regular condition, both beat perception and sequential learning were expected to enhance detection of these deviants on the beat. In the jittered condition, only sequential learning was expected to affect processing of the deviants. ERP responses to deviants were larger on the beat than offbeat in both conditions. Importantly, this difference was larger in the regular condition than in the jittered condition, suggesting that beat perception influenced responses to rhythmic events in addition to sequential learning. The influence of beat perception was present both with and without attention directed at the rhythm. Moreover, beat perception as measured with ERPs correlated with musical abilities, but only when attention was directed at the stimuli. Our study shows that beat perception is possible when attention is not directed at a rhythm. 
In addition, our results suggest that attention may mediate the influence of musical abilities on beat perception. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Effects Of Reinforcement History On Response Rate And Response Pattern In Periodic Reinforcement
López, Florente; Menez, Marina
2005-01-01
Several researchers have suggested that conditioning history may have long-term effects on fixed-interval performances of rats. To test this idea and to identify possible factors involved in temporal control development, groups of rats initially were exposed to different reinforcement schedules: continuous, fixed-time, and random-interval. Afterwards, half of the rats in each group were studied on a fixed-interval 30-s schedule of reinforcement and the other half on a fixed-interval 90-s schedule of reinforcement. No evidence of long-term effects attributable to conditioning history on either response output or response patterning was found; history effects were transitory. Different tendencies in trajectory across sessions were observed for measures of early and late responding within the interreinforcer interval, suggesting that temporal control is the result of two separate processes: one involved in response output and the other in time allocation of responding and not responding. PMID:16047607
McBride, W. Scott; Wacker, Michael A.
2015-01-01
A test well was drilled by the City of Tallahassee to assess the suitability of the site for the installation of a new well for public water supply. The test well is in Leon County in north-central Florida. The U.S. Geological Survey delineated high-permeability zones in the Upper Floridan aquifer, using borehole-geophysical data collected from the open interval of the test well. A composite water sample was collected from the open interval during high-flow conditions, and three discrete water samples were collected from specified depth intervals within the test well during low-flow conditions. Water-quality, source tracer, and age-dating results indicate that the open interval of the test well produces water of consistently high quality throughout its length. The cavernous nature of the open interval makes it likely that the highly permeable zones are interconnected in the aquifer by secondary porosity features.
A Numerical-Analytical Approach to Modeling the Axial Rotation of the Earth
NASA Astrophysics Data System (ADS)
Markov, Yu. G.; Perepelkin, V. V.; Rykhlova, L. V.; Filippova, A. S.
2018-04-01
A model for the non-uniform axial rotation of the Earth is studied using a celestial-mechanical approach and numerical simulations. The application of an approximate model containing a small number of parameters to predict variations of the axial rotation velocity of the Earth over short time intervals is justified. This approximate model is obtained by averaging variable parameters that are subject to small variations due to non-stationarity of the perturbing factors. The model is verified and compared with predictions over a long time interval published by the International Earth Rotation and Reference Systems Service (IERS).
Confidence Intervals from Realizations of Simulated Nuclear Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Younes, W.; Ratkiewicz, A.; Ressler, J. J.
2017-09-28
Various statistical techniques are discussed that can be used to assign a level of confidence in the prediction of models that depend on input data with known uncertainties and correlations. The particular techniques reviewed in this paper are: 1) random realizations of the input data using Monte-Carlo methods, 2) the construction of confidence intervals to assess the reliability of model predictions, and 3) resampling techniques to impose statistical constraints on the input data based on additional information. These techniques are illustrated with a calculation of the keff value, based on the 235U(n,f) and 239Pu(n,f) cross sections.
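The Monte-Carlo realization technique (item 1) paired with percentile confidence intervals (item 2) can be sketched generically; the input means, uncertainties, and correlation below are illustrative placeholders, not evaluated nuclear data.

```python
import math
import random

def correlated_pair(mean1, mean2, sd1, sd2, rho, rng):
    """One realization of two correlated Gaussian inputs (2x2 Cholesky)."""
    z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
    x1 = mean1 + sd1 * z1
    x2 = mean2 + sd2 * (rho * z1 + math.sqrt(1 - rho * rho) * z2)
    return x1, x2

def mc_confidence_interval(model, n=10_000, level=0.95, seed=1):
    """Percentile confidence interval for a model output under random
    realizations of its uncertain, correlated inputs."""
    rng = random.Random(seed)
    outputs = sorted(model(*correlated_pair(1.0, 1.0, 0.05, 0.05, 0.3, rng))
                     for _ in range(n))
    lo = outputs[int((1 - level) / 2 * n)]
    hi = outputs[int((1 + level) / 2 * n)]
    return lo, hi
```

In the real application, `model` would map sampled cross sections to a keff calculation; here any deterministic function of the two inputs demonstrates the mechanics.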
Technique for simulating peak-flow hydrographs in Maryland
Dillow, Jonathan J.A.
1998-01-01
The efficient design and management of many bridges, culverts, embankments, and flood-protection structures may require the estimation of time-of-inundation and (or) storage of floodwater relating to such structures. These estimates can be made on the basis of information derived from the peak-flow hydrograph. Average peak-flow hydrographs corresponding to a peak discharge of specific recurrence interval can be simulated for drainage basins having drainage areas less than 500 square miles in Maryland, using a direct technique of known accuracy. The technique uses dimensionless hydrographs in conjunction with estimates of basin lagtime and instantaneous peak flow. Ordinary least-squares regression analysis was used to develop an equation for estimating basin lagtime in Maryland. Drainage area, main channel slope, forest cover, and impervious area were determined to be the significant explanatory variables necessary to estimate average basin lagtime at the 95-percent confidence interval. Qualitative variables included in the equation adequately correct for geographic bias across the State. The average standard error of prediction associated with the equation is approximated as plus or minus (+/-) 37.6 percent. Volume correction factors may be applied to the basin lagtime on the basis of a comparison between actual and estimated hydrograph volumes prior to hydrograph simulation. Three dimensionless hydrographs were developed and tested using data collected during 278 significant rainfall-runoff events at 81 stream-gaging stations distributed throughout Maryland and Delaware. The data represent a range of drainage area sizes and basin conditions. The technique was verified by applying it to the simulation of 20 peak-flow events and comparing actual and simulated hydrograph widths at 50 and 75 percent of the observed peak-flow levels. The events chosen are considered extreme in that the average recurrence interval of the selected peak flows is 130 years. 
The average standard errors of prediction were +/- 61 and +/- 56 percent at the 50 and 75 percent of peak-flow hydrograph widths, respectively.
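The simulation step, scaling a dimensionless hydrograph by basin lagtime and instantaneous peak flow, can be sketched as follows; the ordinates are hypothetical, not the report's published dimensionless hydrographs.

```python
def simulate_hydrograph(dimensionless, lagtime_hr, peak_flow_cfs):
    """Scale a dimensionless hydrograph (pairs of time ratio t/LT and
    discharge ratio Q/Qp) into a site-specific peak-flow hydrograph."""
    return [(t_ratio * lagtime_hr, q_ratio * peak_flow_cfs)
            for t_ratio, q_ratio in dimensionless]
```

Given an estimated basin lagtime from the regression equation and a peak discharge for the chosen recurrence interval, the scaled ordinates yield the simulated hydrograph from which widths and volumes are read.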
Chua, Eric Chern-Pin; Tan, Wen-Qi; Yeo, Sing-Chen; Lau, Pauline; Lee, Ivan; Mien, Ivan Ho; Puvanendran, Kathiravelu; Gooley, Joshua J.
2012-01-01
Study Objectives: To assess whether changes in psychomotor vigilance during sleep deprivation can be estimated using heart rate variability (HRV). Design: HRV, ocular, and electroencephalogram (EEG) measures were compared for their ability to predict lapses on the Psychomotor Vigilance Task (PVT). Setting: Chronobiology and Sleep Laboratory, Duke-NUS Graduate Medical School Singapore. Participants: Twenty-four healthy Chinese men (mean age ± SD = 25.9 ± 2.8 years). Interventions: Subjects were kept awake continuously for 40 hours under constant environmental conditions. Every 2 hours, subjects completed a 10-minute PVT to assess their ability to sustain visual attention. Measurements and Results: During each PVT, we examined the electrocardiogram (ECG), EEG, and percentage of time that the eyes were closed (PERCLOS). Similar to EEG power density and PERCLOS measures, the time course of ECG RR-interval power density in the 0.02-0.08 Hz range correlated with the 40-hour profile of PVT lapses. Based on receiver operating characteristic curves, RR-interval power density performed as well as EEG power density at identifying a sleepiness-related increase in PVT lapses above threshold. RR-interval power density (0.02-0.08 Hz) also classified subject performance with sensitivity and specificity similar to that of PERCLOS. Conclusions: The ECG carries information about a person's vigilance state. Hence, HRV measures could potentially be used to predict when an individual is at increased risk of attentional failure. Our results suggest that HRV monitoring, either alone or in combination with other physiologic measures, could be incorporated into safety devices to warn drowsy operators when their performance is impaired. Citation: Chua ECP; Tan WQ; Yeo SC; Lau P; Lee I; Mien IH; Puvanendran K; Gooley JJ. Heart rate variability can be used to estimate sleepiness-related decrements in psychomotor vigilance during total sleep deprivation. SLEEP 2012;35(3):325-334. PMID:22379238
Lengyel, Csaba; Orosz, Andrea; Hegyi, Péter; Komka, Zsolt; Udvardy, Anna; Bosnyák, Edit; Trájer, Emese; Pavlik, Gábor; Tóth, Miklós; Wittmann, Tibor; Papp, Julius Gy.; Varró, András; Baczkó, István
2011-01-01
Background Sudden cardiac death in competitive athletes is rare but it is significantly more frequent than in the normal population. The exact cause is seldom established and is mostly attributed to ventricular fibrillation. Myocardial hypertrophy and slow heart rate, both characteristic changes in top athletes in response to physical conditioning, could be associated with increased propensity for ventricular arrhythmias. We investigated conventional ECG parameters and temporal short-term beat-to-beat variability of repolarization (STVQT), a presumptive novel parameter for arrhythmia prediction, in professional soccer players. Methods Five-minute 12-lead electrocardiograms were recorded from professional soccer players (n = 76, all males, age 22.0±0.61 years) and age-matched healthy volunteers who do not participate in competitive sports (n = 76, all males, age 22.0±0.54 years). The ECGs were digitized and evaluated off-line. The temporal instability of beat-to-beat heart rate and repolarization were characterized by the calculation of short-term variability of the RR and QT intervals. Results Heart rate was significantly lower in professional soccer players at rest (61±1.2 vs. 72±1.5/min in controls). The QT interval was prolonged in players at rest (419±3.1 vs. 390±3.6 in controls, p<0.001). QTc was significantly longer in players compared to controls calculated with Fridericia and Hodges correction formulas. Importantly, STVQT was significantly higher in players both at rest and immediately after the game compared to controls (4.8±0.14 and 4.3±0.14 vs. 3.5±0.10 ms, both p<0.001, respectively). Conclusions STVQT is significantly higher in professional soccer players compared to age-matched controls, however, further studies are needed to relate this finding to increased arrhythmia propensity in this population. PMID:21526208
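Short-term beat-to-beat variability of an interval series is commonly computed as the mean absolute difference between consecutive intervals divided by the square root of two (a Poincaré-plot, SD1-style measure). The sketch below assumes that textbook definition rather than the paper's exact implementation:

```python
import math

def short_term_variability(intervals_ms):
    """Short-term variability of an interval series (e.g. QT or RR, in ms):
    mean absolute consecutive-beat difference divided by sqrt(2)."""
    diffs = [abs(b - a) for a, b in zip(intervals_ms, intervals_ms[1:])]
    return sum(diffs) / (len(diffs) * math.sqrt(2))
```

Applied to successive QT intervals it yields STVQT in milliseconds, on the same scale as the 3.5-4.8 ms values reported above.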
Lawrence, T; Moskal, J T; Diduch, D R
1999-07-01
It has often been hospital policy to send all resected specimens obtained during a total hip or knee arthroplasty for histological evaluation. This practice is expensive and may be unnecessary. We sought to determine the ability of surgeons to diagnose primary joint conditions correctly, and we attempted to identify any possible risks to the patient resulting from the omission of routine histological evaluation of specimens at the surgeon's discretion. Our objective was to ascertain whether routine histological evaluation could be safely omitted from the protocol for primary hip and knee arthroplasty without compromising the care of the patient. A total of 1388 consecutive arthroplasties in 1136 patients were identified from a database of primary total hip and knee arthroplasties that was prospectively maintained by the senior one of us. Follow-up data obtained at a mean of 5.5 years (range, two to ten years) were available after 92 percent (1273) of the 1388 arthroplasties. The preoperative diagnosis was determined from the history, findings on clinical examination, and radiographs. The intraoperative diagnosis was determined by gross inspection of joint fluid, articular cartilage, synovial tissue, and the cut surfaces of resected specimens. The combination of the preoperative and intraoperative diagnoses was considered to be the surgeon's clinical diagnosis. All resected specimens were sent for routine histological evaluation, and a pathological diagnosis was made. Attention was given to whether a discrepancy between the surgeon's clinical diagnosis and the pathological diagnosis altered the management of the patient. The original diagnoses were updated with use of annual radiographs and clinical assessments. The cost of histological examination of specimens obtained at arthroplasty was determined by consultation with hospital administration, accounting, and pathology department personnel. 
A pathological fracture or an impending fracture was diagnosed preoperatively and confirmed intraoperatively during twelve of the 1388 arthroplasties. Histological analysis demonstrated malignancy in specimens obtained during eleven of these arthroplasties and evidence of a benign rheumatoid geode in the specimen obtained during the twelfth arthroplasty. The preoperative and intraoperative diagnoses made before and during the remaining 1376 arthroplasties were benign conditions, which were confirmed histologically in all patients. No diagnosis changed during the follow-up period. As demonstrated by a comparison with the histological diagnosis, the surgeon's clinical diagnosis of malignancy had a sensitivity of 100 percent (95 percent confidence interval, 74.0 to 100 percent), a specificity of 99.9 percent (95 percent confidence interval, 99.6 to 100 percent), a positive predictive value of 91.7 percent (95 percent confidence interval, 64.6 to 98.5 percent), and a negative predictive value of 100 percent (95 percent confidence interval, 99.7 to 100 percent). There was a discrepancy between the preoperative and intraoperative diagnoses associated with eleven arthroplasties. All eleven intraoperative diagnoses were correct, as confirmed histologically. Excluding the patients who had a pathological or impending fracture, the accuracy of the surgeon's preoperative diagnosis was 99.2 percent (95 percent confidence interval, 98.6 to 99.5 percent). When the intraoperative and preoperative diagnoses were combined, the accuracy was 100 percent (95 percent confidence interval, 99.7 to 100 percent). Histological evaluation at our hospital resulted in total charges, including hospital costs and professional fees, of $196.27 and a mean total reimbursement of $102.59 per evaluation. 
In our series of 1136 patients with 1388 arthroplasties, these costs could have been eliminated for all but the twelve patients who had a suspected malignant lesion and the one patient in whom pigmented villonodular synovitis was found.
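The reported diagnostic statistics follow from a 2x2 table. The counts below are reconstructed from the series as described (11 histologically confirmed malignancies, all flagged by the surgeon; one benign geode among the twelve suspected cases; 1376 correctly diagnosed benign joints); the confidence intervals themselves are not recomputed here.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV of the surgeon's clinical
    diagnosis against the histological (reference) diagnosis."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }
```

With `diagnostic_metrics(11, 1, 0, 1376)` the point estimates match the abstract: sensitivity 100%, specificity 99.9%, PPV 91.7%, NPV 100%.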
Mohan, Shiwali; Venkatakrishnan, Anusha; Nelson, Les; Silva, Michael; Springer, Aaron
2017-01-01
Background Implementation intentions are mental representations of simple plans to translate goal intentions into behavior under specific conditions. Studies show implementation intentions can produce moderate to large improvements in behavioral goal achievement. Human associative memory mechanisms have been implicated in the processes by which implementation intentions produce effects. On the basis of the adaptive control of thought-rational (ACT-R) theory of cognition, we hypothesized that the strength of implementation intention effect could be manipulated in predictable ways using reminders delivered by a mobile health (mHealth) app. Objective The aim of this experiment was to manipulate the effects of implementation intentions on daily behavioral goal success in ways predicted by the ACT-R theory concerning mHealth reminder scheduling. Methods An incomplete factorial design was used in this mHealth study. All participants were asked to choose a healthy behavior goal associated with eating slowly, walking, or eating more vegetables and were asked to set implementation intentions. N=64 adult participants were in the study for 28 days. Participants were stratified by self-efficacy and assigned to one of two reminder conditions: reminders-presented versus reminders-absent. Self-efficacy and reminder conditions were crossed. Nested within the reminders-presented condition was a crossing of frequency of reminders sent (high, low) by distribution of reminders sent (distributed, massed). Participants in the low frequency condition got 7 reminders over 28 days; those in the high frequency condition were sent 14. Participants in the distributed conditions were sent reminders at uniform intervals. Participants in the massed distribution conditions were sent reminders in clusters. Results There was a significant overall effect of reminders on achieving a daily behavioral goal (coefficient=2.018, standard error [SE]=0.572, odds ratio [OR]=7.52, 95% CI 0.9037-3.2594, P<.001). 
As predicted by ACT-R, using default theoretical parameters, there was an interaction of reminder frequency by distribution on daily goal success (coefficient=0.7994, SE=0.2215, OR=2.2242, 95% CI 0.3656-1.2341, P<.001). The total number of times a reminder was acknowledged as received by a participant had a marginal effect on daily goal success (coefficient=0.0694, SE=0.0410, OR=1.0717, 95% CI −0.01116 to 0.1505, P=.09), and the time since acknowledging receipt of a reminder was highly significant (coefficient=−0.0490, SE=0.0104, OR=0.9522, 95% CI −0.0700 to −0.2852, P<.001). A dual system ACT-R mathematical model was fit to individuals’ daily goal successes and reminder acknowledgments: a goal-striving system dependent on declarative memory plus a habit-forming system that acquires automatic procedures for performance of behavioral goals. Conclusions Computational cognitive theory such as ACT-R can be used to make precise quantitative predictions concerning daily health behavior goal success in response to implementation intentions and the dosing schedules of reminders. PMID:29191800
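The declarative-memory half of the dual-system account rests on ACT-R's base-level learning equation, in which activation is the log of a sum of power-law-decayed memory traces. A textbook sketch using the standard decay of d = 0.5 (not the paper's fitted parameters):

```python
import math

def base_level_activation(times_since_use_s, decay=0.5):
    """ACT-R base-level activation: log of the summed power-law-decayed
    strengths of past uses (e.g. acknowledged reminders). Recent and more
    numerous uses raise activation; long gaps lower it."""
    return math.log(sum(t ** -decay for t in times_since_use_s))
```

This captures both reported effects qualitatively: each additional acknowledged reminder adds a trace (raising activation), while increasing time since the last acknowledgment lowers it.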
Predicting future protection of respirator users: Statistical approaches and practical implications.
Hu, Chengcheng; Harber, Philip; Su, Jing
2016-01-01
The purpose of this article is to describe a statistical approach for predicting a respirator user's fit factor in the future based upon results from initial tests. A statistical prediction model was developed based upon joint distribution of multiple fit factor measurements over time obtained from linear mixed effect models. The model accounts for within-subject correlation as well as short-term (within one day) and longer-term variability. As an example of applying this approach, model parameters were estimated from a research study in which volunteers were trained by three different modalities to use one of two types of respirators. They underwent two quantitative fit tests at the initial session and two on the same day approximately six months later. The fitted models demonstrated correlation and gave the estimated distribution of future fit test results conditional on past results for an individual worker. This approach can be applied to establishing a criterion value for passing an initial fit test to provide reasonable likelihood that a worker will be adequately protected in the future; and to optimizing the repeat fit factor test intervals individually for each user for cost-effective testing.
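The joint-distribution idea above reduces, in the simplest bivariate-normal case, to conditioning a future fit factor on an initial measurement. A sketch with hypothetical parameters (all numbers below are illustrative, not the study's estimates):

```python
import math
from statistics import NormalDist

def conditional_future(x_init, mu_init, mu_fut, sd_init, sd_fut, rho):
    """Conditional distribution of a future fit factor given an initial
    measurement, assuming joint (log-)normality as in a linear mixed model."""
    mean = mu_fut + rho * (sd_fut / sd_init) * (x_init - mu_init)
    sd = sd_fut * math.sqrt(1.0 - rho ** 2)
    return mean, sd

# Hypothetical parameters on the log fit-factor scale
m, s = conditional_future(x_init=5.0, mu_init=4.6, mu_fut=4.6,
                          sd_init=0.5, sd_fut=0.5, rho=0.6)
# Probability the future (log) fit factor exceeds a hypothetical pass criterion of 4.0
p_pass = 1.0 - NormalDist(mu=m, sigma=s).cdf(4.0)
```

A criterion value for the initial test could then be chosen as the smallest `x_init` for which `p_pass` reaches a desired likelihood of future protection.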
Effects of conditioned reinforcement frequency in an intermittent free-feeding situation, 12
Zimmerman, Joseph; Hanford, Peter V.; Brown, Wyman
1967-01-01
Key-pecking intermittently produced a set of brief exteroceptive stimulus changes under two-component multiple schedules of conditioned reinforcement. Throughout the study, free access to grain was concurrently provided on an intermittent basis via a variable-interval tape. Free food presentations scheduled by the tape were delivered if no peck had been emitted for 6 sec, and the brief stimulus changes produced by responding under the multiple schedules were those which accompanied food presentation. The second component of each multiple schedule was always associated with a 1-min, variable-interval schedule of conditioned reinforcement. The schedule associated with the first component was systematically varied and conditioned reinforcement was either absent (extinction) or programmed on a 1-, 3-, 6-, or 12-min variable-interval schedule. Under these conditions, rate of responding in the manipulated component decreased monotonically with a decrease in the frequency of conditioned reinforcement. In addition, contrast effects were often obtained in the constant, second component. These results are similar to those obtained with similar multiple schedules of primary reinforcement. PMID:6033554
NASA Astrophysics Data System (ADS)
Wang, F.; Annable, M. D.; Jawitz, J. W.
2012-12-01
The equilibrium streamtube model (EST) has demonstrated the ability to accurately predict dense nonaqueous phase liquid (DNAPL) dissolution in laboratory experiments and numerical simulations. Here the model is applied to predict DNAPL dissolution at a PCE-contaminated dry cleaner site, located in Jacksonville, Florida. The EST is an analytical solution with field-measurable input parameters. Here, measured data from a field-scale partitioning tracer test were used to parameterize the EST model and the predicted PCE dissolution was compared to measured data from an in-situ alcohol (ethanol) flood. In addition, a simulated partitioning tracer test from a calibrated spatially explicit multiphase flow model (UTCHEM) was also used to parameterize the EST analytical solution. The ethanol prediction based on both the field partitioning tracer test and the UTCHEM tracer test simulation closely matched the field data. The PCE EST prediction showed a peak shift to an earlier arrival time that was concluded to be caused by well screen interval differences between the field tracer test and alcohol flood. This observation was based on a modeling assessment of potential factors that may influence predictions by using UTCHEM simulations. The imposed injection and pumping flow pattern at this site for both the partitioning tracer test and alcohol flood was more complex than the natural gradient flow pattern (NGFP). Both the EST model and UTCHEM were also used to predict PCE dissolution under natural gradient conditions, with much simpler flow patterns than the forced-gradient double five spot of the alcohol flood. The NGFP predictions based on parameters determined from tracer tests conducted with complex flow patterns underestimated PCE concentrations and total mass removal. This suggests that the flow patterns influence aqueous dissolution and that the aqueous dissolution under the NGFP is more efficient than dissolution under complex flow patterns.
Prediction of mortality rates using a model with stochastic parameters
NASA Astrophysics Data System (ADS)
Tan, Chon Sern; Pooi, Ah Hin
2016-10-01
Prediction of future mortality rates is crucial to insurance companies because they face longevity risks while providing retirement benefits to a population whose life expectancy is increasing. In the past literature, a time series model based on multivariate power-normal distribution has been applied on mortality data from the United States for the years 1933 till 2000 to forecast the future mortality rates for the years 2001 till 2010. In this paper, a more dynamic approach based on the multivariate time series will be proposed where the model uses stochastic parameters that vary with time. The resulting prediction intervals obtained using the model with stochastic parameters perform better because apart from having good ability in covering the observed future mortality rates, they also tend to have distinctly shorter interval lengths.
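The two criteria used above to compare prediction intervals, coverage of the observed future rates and interval length, can be computed as:

```python
def interval_metrics(observed, intervals):
    """Empirical coverage and average length of prediction intervals.
    observed: realized values; intervals: matching (lower, upper) pairs."""
    hits = sum(lo <= y <= hi for y, (lo, hi) in zip(observed, intervals))
    avg_len = sum(hi - lo for lo, hi in intervals) / len(intervals)
    return hits / len(observed), avg_len

# Toy check: 3 of 4 observations fall inside their intervals
cov, length = interval_metrics([1.0, 2.0, 3.0, 4.0],
                               [(0.5, 1.5), (1.5, 2.5), (3.5, 4.5), (3.0, 5.0)])
```

A "better" interval method, in the abstract's sense, keeps coverage near the nominal level while shortening the average length.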
The Use of One-Sample Prediction Intervals for Estimating CO2 Scrubber Canister Durations
2012-10-01
Grade and 812 D-Grade Sofnolime. Definitions: According to Devore, a CI (confidence interval) refers to a parameter, or population characteristic, whose value is fixed but unknown to us. In contrast, a future value of Y is not a parameter but instead a random variable; for this
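Devore's distinction above is exactly the difference between a confidence interval for the mean and a one-sample prediction interval for a single future observation: the latter carries an extra variance term for the new random Y. A sketch (the normal quantile is used in place of Student's t, so this is a large-sample approximation):

```python
import math
from statistics import NormalDist, mean, stdev

def ci_and_pi(sample, level=0.95):
    """One-sample confidence interval for the mean versus a prediction
    interval for a single future observation."""
    n, xbar, s = len(sample), mean(sample), stdev(sample)
    z = NormalDist().inv_cdf(0.5 + level / 2)
    ci_half = z * s * math.sqrt(1.0 / n)
    pi_half = z * s * math.sqrt(1.0 + 1.0 / n)   # extra 1 for the new Y
    return (xbar - ci_half, xbar + ci_half), (xbar - pi_half, xbar + pi_half)

# Hypothetical canister-duration measurements (arbitrary units)
ci, pi = ci_and_pi([4.1, 3.8, 4.4, 4.0, 3.9, 4.2])
```

The prediction interval is always wider; for n observations its half-width exceeds the confidence interval's by a factor of sqrt(n + 1).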
NASA Astrophysics Data System (ADS)
Torries, Brian; Shamsaei, Nima
2017-12-01
The effects of different cooling rates, as achieved by varying the interlayer time interval, on the fatigue behavior of additively manufactured Ti-6Al-4V specimens were investigated and modeled via a microstructure-sensitive fatigue model. Comparisons are made between two sets of specimens fabricated via Laser Engineered Net Shaping (LENS™), with variance in interlayer time interval accomplished by depositing either one or two specimens per print operation. Fully reversed, strain-controlled fatigue tests were conducted, with fractography following specimen failure. A microstructure-sensitive fatigue model was calibrated to model the fatigue behavior of both sets of specimens and was found to be capable of correctly predicting the longer fatigue lives of the single-built specimens and the reduced scatter of the double-built specimens; all data points fell within the predicted upper and lower bounds of fatigue life. The time interval effects and the ability to be modeled are important to consider when producing test specimens that are smaller than the production part (i.e., property-performance relationships).
Wen, Wen; Yamashita, Atsushi; Asama, Hajime
2015-01-01
The sense of agency refers to the feeling that one is controlling events through one’s own behavior. This study examined how task performance and the delay of events influence one’s sense of agency during continuous action accompanied by a goal. The participants were instructed to direct a moving dot into a square as quickly as possible by pressing the left and right keys on a keyboard to control the direction in which the dot traveled. The interval between the key press and response of the dot (i.e., direction change) was manipulated to vary task difficulty. Moreover, in the assisted condition, the computer ignored participants’ erroneous commands, resulting in improved task performance but a weaker association between the participants’ commands and actual movements of the dot relative to the condition in which all of the participants’ commands were executed (i.e., self-control condition). The results showed that participants’ sense of agency increased with better performance in the assisted condition relative to the self-control condition, even though a large proportion of their commands were not executed. We concluded that, when the action-feedback association was uncertain, cognitive inference was more dominant relative to the process of comparing predicted and perceived information in the judgment of agency. PMID:25893992
Detecting false positives in multielement designs: implications for brief assessments.
Bartlett, Sara M; Rapp, John T; Henrickson, Marissa L
2011-11-01
The authors assessed the extent to which multielement designs produced false positives using continuous duration recording (CDR) and interval recording with 10-s and 1-min interval sizes. Specifically, they created 6,000 graphs with multielement designs that varied in the number of data paths, and the number of data points per data path, using a random number generator. In Experiment 1, the authors visually analyzed the graphs for the occurrence of false positives. Results indicated that graphs depicting only two sessions for each condition (e.g., a control condition plotted with multiple test conditions) produced the highest percentage of false positives for CDR and interval recording with 10-s and 1-min intervals. Conversely, graphs with four or five sessions for each condition produced the lowest percentage of false positives for each method. In Experiment 2, they applied two new rules, which were intended to decrease false positives, to each graph that depicted a false positive in Experiment 1. Results showed that application of new rules decreased false positives to less than 5% for all of the graphs except for those with two data paths and two data points per data path. Implications for brief assessments are discussed.
Spatiotemporal canards in neural field equations
NASA Astrophysics Data System (ADS)
Avitabile, D.; Desroches, M.; Knobloch, E.
2017-04-01
Canards are special solutions to ordinary differential equations that follow invariant repelling slow manifolds for long time intervals. In realistic biophysical single-cell models, canards are responsible for several complex neural rhythms observed experimentally, but their existence and role in spatially extended systems is largely unexplored. We identify and describe a type of coherent structure in which a spatial pattern displays temporal canard behavior. Using interfacial dynamics and geometric singular perturbation theory, we classify spatiotemporal canards and give conditions for the existence of folded-saddle and folded-node canards. We find that spatiotemporal canards are robust to changes in the synaptic connectivity and firing rate. The theory correctly predicts the existence of spatiotemporal canards with octahedral symmetry in a neural field model posed on the unit sphere.
Pezze, Marie-Astrid; Marshall, Hayley J; Cassaday, Helen J
2017-06-28
The muscarinic acetylcholine receptor is an important modulator of medial prefrontal cortex (mPFC) functions, such as the working memory required to bridge a trace interval in associative learning. Aversive and appetitive trace conditioning procedures were used to examine the effects of scopolamine (0.1 and 0.5 mg/kg, i.p.) in male rats. Follow-up experiments tested the effects of microinfusion of 0.15 μg of scopolamine (0.075 μg in 0.5 μl/side) in infralimbic (IL) versus prelimbic regions of rat mPFC, in appetitive trace and locomotor activity (LMA) procedures. Systemic scopolamine was without effect in an aversive trace conditioning procedure, but impaired appetitive conditioning at a 2 s trace interval. This effect was demonstrated as reduced responding during presentations of the conditioned stimulus (CS) and during the interstimulus interval (ISI). There was no such effect on responding during food (unconditioned stimulus, US) presentations or in the intertrial interval (ITI). In contrast, systemic scopolamine dose-relatedly increased LMA. Trace conditioning was similarly impaired at the 2 s trace (shown as reduced responding to the CS and during the ISI, but not during US presentations or in the ITI) after infusion in mPFC, whereas LMA was increased (after infusion in IL only). Therefore, our results point to the importance of cholinergic modulation in mPFC for trace conditioning and show that the observed effects cannot be attributed to reduced activity. SIGNIFICANCE STATEMENT Events are very often separated in time, in which case working memory is necessary to condition their association in "trace conditioning." The present study used conditioning variants motivated aversively with foot shock and appetitively with food. The drug scopolamine was used to block muscarinic acetylcholine receptors involved in working memory. 
The results show that reduced cholinergic transmission in medial prefrontal cortex (mPFC) impaired appetitive trace conditioning at a 2 s trace interval. However, scopolamine was without effect in the aversive procedure, revealing the importance of procedural differences to the demonstration of the drug effect. The finding that blockade of muscarinic receptors in mPFC impaired trace conditioning shows that these receptors are critical modulators of short-term working memory. Copyright © 2017 Pezze et al.
A new method for determining a sector alert
DOT National Transportation Integrated Search
2008-09-29
The Traffic Flow Management System (TFMS) currently declares an alert for any 15-minute interval in which the predicted demand exceeds the Monitor/Alert Parameter (MAP) for any airport, sector, or fix. For a sector, TFMS predicts the demand for each ...
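The TFMS alert rule described above, flag any 15-minute interval whose predicted demand exceeds the MAP, is a simple threshold scan:

```python
def sector_alerts(predicted_demand, map_value):
    """Indices of 15-minute intervals whose predicted demand exceeds the
    Monitor/Alert Parameter (MAP), mirroring the TFMS alert rule above."""
    return [i for i, d in enumerate(predicted_demand) if d > map_value]

# Hypothetical demand counts for eight consecutive 15-minute intervals, MAP = 18
alerts = sector_alerts([12, 17, 19, 22, 18, 15, 21, 9], map_value=18)
```

Note the strict inequality: an interval whose demand merely equals the MAP is not alerted.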
Inverse modeling with RZWQM2 to predict water quality
Nolan, Bernard T.; Malone, Robert W.; Ma, Liwang; Green, Christopher T.; Fienen, Michael N.; Jaynes, Dan B.
2011-01-01
This chapter presents guidelines for autocalibration of the Root Zone Water Quality Model (RZWQM2) by inverse modeling using PEST parameter estimation software (Doherty, 2010). Two sites with diverse climate and management were considered for simulation of N losses by leaching and in drain flow: an almond [Prunus dulcis (Mill.) D.A. Webb] orchard in the San Joaquin Valley, California and the Walnut Creek watershed in central Iowa, which is predominantly in corn (Zea mays L.)–soybean [Glycine max (L.) Merr.] rotation. Inverse modeling provides an objective statistical basis for calibration that involves simultaneous adjustment of model parameters and yields parameter confidence intervals and sensitivities. We describe operation of PEST in both parameter estimation and predictive analysis modes. The goal of parameter estimation is to identify a unique set of parameters that minimize a weighted least squares objective function, and the goal of predictive analysis is to construct a nonlinear confidence interval for a prediction of interest by finding a set of parameters that maximizes or minimizes the prediction while maintaining the model in a calibrated state. We also describe PEST utilities (PAR2PAR, TSPROC) for maintaining ordered relations among model parameters (e.g., soil root growth factor) and for post-processing of RZWQM2 outputs representing different cropping practices at the Iowa site. Inverse modeling provided reasonable fits to observed water and N fluxes and directly benefitted the modeling through: (i) simultaneous adjustment of multiple parameters versus one-at-a-time adjustment in manual approaches; (ii) clear indication by convergence criteria of when calibration is complete; (iii) straightforward detection of nonunique and insensitive parameters, which can affect the stability of PEST and RZWQM2; and (iv) generation of confidence intervals for uncertainty analysis of parameters and model predictions. 
Composite scaled sensitivities, which reflect the total information provided by the observations for a parameter, indicated that most of the RZWQM2 parameters at the California study site (CA) and Iowa study site (IA) could be reliably estimated by regression. Correlations obtained in the CA case indicated that all model parameters could be uniquely estimated by inverse modeling. Although water content at field capacity was highly correlated with bulk density (−0.94), the correlation is less than the threshold for nonuniqueness (0.95, absolute value basis). Additionally, we used truncated singular value decomposition (SVD) at CA to mitigate potential problems with highly correlated and insensitive parameters. Singular value decomposition estimates linear combinations (eigenvectors) of the original process-model parameters. Parameter confidence intervals (CIs) at CA indicated that parameters were reliably estimated with the possible exception of an organic pool transfer coefficient (R45), which had a comparatively wide CI. However, the 95% confidence interval for R45 (0.03–0.35) is mostly within the range of values reported for this parameter. Predictive analysis at CA generated confidence intervals that were compared with independently measured annual water flux (groundwater recharge) and median nitrate concentration in a collocated monitoring well as part of model evaluation. Both the observed recharge (42.3 cm yr−1) and nitrate concentration (24.3 mg L−1) were within their respective 90% confidence intervals, indicating that overall model error was within acceptable limits.
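The weighted least-squares objective PEST minimizes requires iterative search for a nonlinear model like RZWQM2, but for a model linear in its parameters it has a closed form. A toy sketch, with a straight line standing in for the process model:

```python
def weighted_linear_fit(x, y, w):
    """Minimize the weighted least-squares objective
    sum(w_i * (y_i - a - b*x_i)**2) for a straight-line model via the
    2x2 normal equations (a stand-in for the iterative estimation PEST
    performs on a nonlinear simulation model)."""
    sw = sum(w)
    sx = sum(wi * xi for wi, xi in zip(w, x))
    sy = sum(wi * yi for wi, yi in zip(w, y))
    sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    det = sw * sxx - sx * sx
    a = (sxx * sy - sx * sxy) / det
    b = (sw * sxy - sx * sy) / det
    return a, b

# Data lying exactly on y = 1 + 2x is recovered regardless of the weights
a, b = weighted_linear_fit([0, 1, 2, 3], [1, 3, 5, 7], [1.0, 2.0, 1.0, 0.5])
```

The weights play the same role as PEST's observation weight matrix: observations with larger weights pull the fit harder.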
Bias and uncertainty in regression-calibrated models of groundwater flow in heterogeneous media
Cooley, R.L.; Christensen, S.
2006-01-01
Groundwater models need to account for detailed but generally unknown spatial variability (heterogeneity) of the hydrogeologic model inputs. To address this problem we replace the large, m-dimensional stochastic vector β that reflects both small and large scales of heterogeneity in the inputs by a lumped or smoothed m-dimensional approximation Γβ*, where Γ is an interpolation matrix and β* is a stochastic vector of parameters. Vector β* has small enough dimension to allow its estimation with the available data. The consequence of the replacement is that model function f(Γβ*) written in terms of the approximate inputs is in error with respect to the same model function written in terms of β, f(β), which is assumed to be nearly exact. The difference f(β) − f(Γβ*), termed model error, is spatially correlated, generates prediction biases, and causes standard confidence and prediction intervals to be too small. Model error is accounted for in the weighted nonlinear regression methodology developed to estimate β* and assess model uncertainties by incorporating the second-moment matrix of the model errors into the weight matrix. Techniques developed by statisticians to analyze classical nonlinear regression methods are extended to analyze the revised method. The analysis develops analytical expressions for bias terms reflecting the interaction of model nonlinearity and model error, for correction factors needed to adjust the sizes of confidence and prediction intervals for this interaction, and for correction factors needed to adjust the sizes of confidence and prediction intervals for possible use of a diagonal weight matrix in place of the correct one. If terms expressing the degree of intrinsic nonlinearity for f(β) and f(Γβ*) are small, then most of the biases are small and the correction factors are reduced in magnitude. 
Biases, correction factors, and confidence and prediction intervals were obtained for a test problem for which model error is large to test robustness of the methodology. Numerical results conform with the theoretical analysis. © 2005 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Lilly, P.; Yanai, R. D.; Buckley, H. L.; Case, B. S.; Woollons, R. C.; Holdaway, R. J.; Johnson, J.
2016-12-01
Calculations of forest biomass and elemental content require many measurements and models, each contributing uncertainty to the final estimates. While sampling error is commonly reported, based on replicate plots, error due to uncertainty in the regression used to estimate biomass from tree diameter is usually not quantified. Some published estimates of uncertainty due to the regression models have used the uncertainty in the prediction of individuals, ignoring uncertainty in the mean, while others have propagated uncertainty in the mean while ignoring individual variation. Using the simple case of the calcium concentration of sugar maple leaves, we compare the variation among individuals (the standard deviation) to the uncertainty in the mean (the standard error) and illustrate the declining importance in the prediction of individual concentrations as the number of individuals increases. For allometric models, the analogous statistics are the prediction interval (or the residual variation in the model fit) and the confidence interval (describing the uncertainty in the best fit model). The effect of propagating these two sources of error is illustrated using the mass of sugar maple foliage. The uncertainty in individual tree predictions was large for plots with few trees; for plots with 30 trees or more, the uncertainty in individuals was less important than the uncertainty in the mean. Authors of previously published analyses have reanalyzed their data to show the magnitude of these two sources of uncertainty in scales ranging from experimental plots to entire countries. The most correct analysis will take both sources of uncertainty into account, but for practical purposes, country-level reports of uncertainty in carbon stocks, as required by the IPCC, can ignore the uncertainty in individuals. Ignoring the uncertainty in the mean will lead to exaggerated estimates of confidence in estimates of forest biomass and carbon and nutrient contents.
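The declining importance of individual variation described above follows directly from how the two error sources combine for a plot of n trees: the model-mean uncertainty enters whole, while the individual residual variance is divided by n. A sketch with hypothetical magnitudes:

```python
import math

def plot_level_uncertainty(sd_individual, se_model, n):
    """Combined uncertainty of a plot-mean prediction from n trees
    (illustrative, normal theory): the model-mean standard error enters
    whole, the individual residual SD is damped by 1/sqrt(n)."""
    return math.sqrt(se_model ** 2 + sd_individual ** 2 / n)

# With 2 trees, individual variation dominates; with 30, the model mean does
few = plot_level_uncertainty(sd_individual=10.0, se_model=2.0, n=2)
many = plot_level_uncertainty(sd_individual=10.0, se_model=2.0, n=30)
```

This is why country-level estimates can safely ignore the individual term while plot-level estimates cannot, and why ignoring the `se_model` term instead exaggerates confidence at every scale.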
Heritability of and mortality prediction with a longevity phenotype: the healthy aging index.
Sanders, Jason L; Minster, Ryan L; Barmada, M Michael; Matteini, Amy M; Boudreau, Robert M; Christensen, Kaare; Mayeux, Richard; Borecki, Ingrid B; Zhang, Qunyuan; Perls, Thomas; Newman, Anne B
2014-04-01
Longevity-associated genes may modulate risk for age-related diseases and survival. The Healthy Aging Index (HAI) may be a subphenotype of longevity, which can be constructed in many studies for genetic analysis. We investigated the HAI's association with survival in the Cardiovascular Health Study and heritability in the Long Life Family Study. The HAI includes systolic blood pressure, pulmonary vital capacity, creatinine, fasting glucose, and Modified Mini-Mental Status Examination score, each scored 0, 1, or 2 using approximate tertiles and summed from 0 (healthy) to 10 (unhealthy). In Cardiovascular Health Study, the association with mortality and accuracy predicting death were determined with Cox proportional hazards analysis and c-statistics, respectively. In Long Life Family Study, heritability was determined with a variance component-based family analysis using a polygenic model. Cardiovascular Health Study participants with unhealthier index scores (7-10) had 2.62-fold (95% confidence interval: 2.22, 3.10) greater mortality than participants with healthier scores (0-2). The HAI alone predicted death moderately well (c-statistic = 0.643, 95% confidence interval: 0.626, 0.661, p < .0001) and slightly worse than age alone (c-statistic = 0.700, 95% confidence interval: 0.684, 0.717, p < .0001; p < .0001 for comparison of c-statistics). Prediction increased significantly with adjustment for demographics, health behaviors, and clinical comorbidities (c-statistic = 0.780, 95% confidence interval: 0.765, 0.794, p < .0001). In Long Life Family Study, the heritability of the HAI was 0.295 (p < .0001) overall, 0.387 (p < .0001) in probands, and 0.238 (p = .0004) in offspring. The HAI should be investigated further as a candidate phenotype for uncovering longevity-associated genes in humans.
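The HAI construction described above, each of five components scored 0, 1, or 2 by approximate tertiles and summed, can be sketched as follows (the cut points below are hypothetical; components where lower raw values are unhealthier are assumed pre-oriented so that higher always means unhealthier):

```python
def hai_score(values, tertile_cuts):
    """Sum component scores (0, 1, or 2 by tertile) into the Healthy Aging
    Index, from 0 (healthy) to 10 (unhealthy). tertile_cuts gives, per
    component, the (lower_cut, upper_cut) tertile boundaries."""
    total = 0
    for v, (lo, hi) in zip(values, tertile_cuts):
        total += 0 if v < lo else (1 if v < hi else 2)
    return total

# Five components with hypothetical, illustrative tertile boundaries
score = hai_score([118, 1, 0.8, 130, 2],
                  [(120, 140), (1, 2), (0.9, 1.2), (100, 126), (1, 2)])
```
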
Kim, Sung Han; Park, Boram; Joo, Jungnam; Joung, Jae Young; Seo, Ho Kyung; Chung, Jinsoo; Lee, Kang Hyun
2017-01-01
Objective To evaluate predictive factors for retrograde ureteral stent failure in patients with non-urological malignant ureteral obstruction. Materials and methods Between 2005 and 2014, medical records of 284 malignant ureteral obstruction patients with 712 retrograde ureteral stent trials including 63 (22.2%) having bilateral malignant ureteral obstruction were retrospectively reviewed. Retrograde ureteral stent failure was defined as the inability to place ureteral stents by cystoscopy, recurrent stent obstruction within one month, or non-relief of azotemia within one week from the prior retrograde ureteral stent. The clinicopathological parameters and findings of the first retrograde pyelography (RGP) were analyzed to investigate the predictive factors for retrograde ureteral stent failure and conversion to percutaneous nephrostomy in multivariate analysis with a statistical significance of p < 0.05. Results Retrograde ureteral stent failure was detected in 14.1% of patients. The mean number of retrograde ureteral stent placements and indwelling duration of the ureteral stents were 2.5 ± 2.6 times and 8.6 ± 4.0 months, respectively. Multivariate analyses identified several specific RGP findings as significant predictive factors for retrograde ureteral stent failure (p < 0.05). The significant retrograde pyelographic findings included grade 4 hydronephrosis (hazard ratio 4.10, 95% confidence interval 1.39–12.09), irreversible ureteral kinking (hazard ratio 2.72, confidence interval 1.03–7.18), presence of bladder invasion (hazard ratio 4.78, confidence interval 1.81–12.63), and multiple lesions of ureteral stricture (hazard ratio 3.46, confidence interval 1.35–8.83) (p < 0.05). Conclusion Retrograde pyelography might prevent unnecessary and ineffective retrograde ureteral stent trials in patients with advanced non-urological malignant ureteral obstruction. PMID:28931043
Lockie, Robert G.; Stage, Alyssa A.; Stokes, John J.; Orjalo, Ashley J.; Davis, DeShaun L.; Giuliano, Dominic V.; Moreno, Matthew R.; Risso, Fabrice G.; Lazar, Adrina; Birmingham-Babauta, Samantha A.; Tomita, Tricia M.
2016-01-01
Leg power is an important characteristic for soccer, and jump tests can measure this capacity. Limited research has analyzed relationships between jumping and soccer-specific field test performance in collegiate male players. Nineteen Division I players completed tests of: leg power (vertical jump (VJ), standing broad jump (SBJ), left- and right-leg triple hop (TH)); linear (30 m sprint; 0–5 m, 5–10 m, 0–10, 0–30 m intervals) and change-of-direction (505) speed; soccer-specific fitness (Yo-Yo Intermittent Recovery Test Level 2); and 7 × 30-m sprints to measure repeated-sprint ability (RSA; total time (TT), performance decrement (PD)). Pearson’s correlations (r) determined jump and field test relationships; stepwise regression ascertained jump predictors of the tests (p < 0.05). All jumps correlated with the 0–5, 0–10, and 0–30 m sprint intervals (r = −0.65–−0.90). VJ, SBJ, and left- and right-leg TH correlated with RSA TT (r = −0.51–−0.59). Right-leg TH predicted the 0–5 and 0–10 m intervals (R2 = 0.55–0.81); the VJ predicted the 0–30 m interval and RSA TT (R2 = 0.41–0.84). Between-leg TH asymmetry correlated with and predicted left-leg 505 and RSA PD (r = −0.68–0.62; R2 = 0.39–0.46). Improvements in jumping ability could contribute to faster speed and RSA performance in collegiate soccer players. PMID:29910304
Filgueiras, Paulo R; Terra, Luciana A; Castro, Eustáquio V R; Oliveira, Lize M S L; Dias, Júlio C M; Poppi, Ronei J
2015-09-01
This paper aims to estimate the temperature equivalent to 10% (T10%), 50% (T50%) and 90% (T90%) of distilled volume in crude oils using ¹H NMR and support vector regression (SVR). Confidence intervals for the predicted values were calculated using a boosting-type ensemble method in a procedure called ensemble support vector regression (eSVR). The estimated confidence intervals obtained by eSVR were compared with previously accepted calculations from partial least squares (PLS) models and a boosting-type ensemble applied in the PLS method (ePLS). By using the proposed boosting strategy, it was possible to identify outliers in the T10% property dataset. The eSVR procedure improved the accuracy of the distillation temperature predictions in relation to standard PLS, ePLS and SVR. For T10%, a root mean square error of prediction (RMSEP) of 11.6°C was obtained in comparison with 15.6°C for PLS, 15.1°C for ePLS and 28.4°C for SVR. The RMSEPs for T50% were 24.2°C, 23.4°C, 22.8°C and 14.4°C for PLS, ePLS, SVR and eSVR, respectively. For T90%, the values of RMSEP were 39.0°C, 39.9°C and 39.9°C for PLS, ePLS, SVR and eSVR, respectively. The confidence intervals calculated by the proposed boosting methodology presented acceptable values for the three properties analyzed; however, they were lower than those calculated by the standard methodology for PLS. Copyright © 2015 Elsevier B.V. All rights reserved.
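The paper's exact boosting scheme is not reproduced here; the sketch below illustrates the general ensemble-SVR idea with plain bootstrap resampling of scikit-learn's SVR, using synthetic data in place of the NMR features, and takes the ensemble's prediction spread as an empirical 95% interval:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic stand-in for NMR-derived features vs. distillation temperature.
X = rng.uniform(0, 10, size=(80, 1))
y = 150 + 20 * np.sin(X[:, 0]) + rng.normal(0, 3, size=80)

# Train an ensemble of SVRs on bootstrap resamples; the spread of the
# ensemble's predictions at a new point gives an empirical interval.
n_models = 50
x_new = np.array([[5.0]])
preds = []
for _ in range(n_models):
    idx = rng.integers(0, len(X), size=len(X))
    model = SVR(kernel="rbf", C=100.0).fit(X[idx], y[idx])
    preds.append(model.predict(x_new)[0])
preds = np.array(preds)

point = preds.mean()
lo, hi = np.percentile(preds, [2.5, 97.5])  # 95% interval from the ensemble
print(round(point, 1), round(lo, 1), round(hi, 1))
```

Ensemble spread of this kind reflects model variability under resampling; the published eSVR procedure may weight or combine members differently.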
Bye, Robin T; Neilson, Peter D
2010-10-01
Physiological tremor during movement is characterized by ∼10 Hz oscillation observed both in the electromyogram activity and in the velocity profile. We propose that this particular rhythm occurs as the direct consequence of a movement response planning system that acts as an intermittent predictive controller operating at discrete intervals of ∼100 ms. The BUMP model of response planning describes such a system. It forms the kernel of Adaptive Model Theory which defines, in computational terms, a basic unit of motor production or BUMP. Each BUMP consists of three processes: (1) analyzing sensory information, (2) planning a desired optimal response, and (3) execution of that response. These processes operate in parallel across successive sequential BUMPs. The response planning process requires a discrete-time interval in which to generate a minimum acceleration trajectory to connect the actual response with the predicted future state of the target and compensate for executional error. We have shown previously that a response planning time of 100 ms accounts for the intermittency observed experimentally in visual tracking studies and for the psychological refractory period observed in double stimulation reaction time studies. We have also shown that simulations of aimed movement, using this same planning interval, reproduce experimentally observed speed-accuracy tradeoffs and movement velocity profiles. Here we show, by means of a simulation study of constant velocity tracking movements, that employing a 100 ms planning interval closely reproduces the measurement discontinuities and power spectra of electromyograms, joint-angles, and angular velocities of physiological tremor reported experimentally. We conclude that intermittent predictive control through sequential operation of BUMPs is a fundamental mechanism of 10 Hz physiological tremor in movement. Copyright © 2010 Elsevier B.V. All rights reserved.
Combining Speed Information Across Space
NASA Technical Reports Server (NTRS)
Verghese, Preeti; Stone, Leland S.
1995-01-01
We used speed discrimination tasks to measure the ability of observers to combine speed information from multiple stimuli distributed across space. We compared speed discrimination thresholds in a classical discrimination paradigm to those in an uncertainty/search paradigm. Thresholds were measured using a temporal two-interval forced-choice design. In the discrimination paradigm, the n gratings in each interval all moved at the same speed and observers were asked to choose the interval with the faster gratings. Discrimination thresholds for this paradigm decreased as the number of gratings increased. This decrease was not due to increasing the effective stimulus area as a control experiment that increased the area of a single grating did not show a similar improvement in thresholds. Adding independent speed noise to each of the n gratings caused thresholds to decrease at a rate similar to the original no-noise case, consistent with observers combining an independent sample of speed from each grating in both the added- and no-noise cases. In the search paradigm, observers were asked to choose the interval in which one of the n gratings moved faster. Thresholds in this case increased with the number of gratings, behavior traditionally attributed to an input bottleneck. However, results from the discrimination paradigm showed that the increase was not due to observers' inability to process these gratings. We have also shown that the opposite trends of the data in the two paradigms can be predicted by a decision theory model that combines independent samples of speed information across space. This demonstrates that models typically used in classical detection and discrimination paradigms are also applicable to search paradigms. As our model does not distinguish between samples in space and time, it predicts that discrimination performance should be the same regardless of whether the gratings are presented in two spatial intervals or two temporal intervals. Our last experiment largely confirmed this prediction.
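The opposite trends in the two paradigms can be illustrated with a Monte Carlo sketch of an ideal observer (assumptions: equal-variance Gaussian speed noise, an averaging rule in the discrimination task, and a max rule in the search task; the abstract does not specify these exact rules, so this is a generic decision-theory illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
trials = 100_000
d = 1.0  # speed increment in noise-SD units (illustrative)

def percent_correct(n, rule):
    # Interval 1 contains the increment; interval 2 does not.
    # Discrimination: all n gratings carry the increment, observer averages.
    # Search: only one of the n gratings carries it, observer takes the max.
    base1 = rng.normal(0, 1, size=(trials, n))
    base2 = rng.normal(0, 1, size=(trials, n))
    if rule == "discrimination":
        s1 = (base1 + d).mean(axis=1)
        s2 = base2.mean(axis=1)
    else:
        base1[:, 0] += d
        s1 = base1.max(axis=1)
        s2 = base2.max(axis=1)
    return (s1 > s2).mean()

pc_disc = [percent_correct(n, "discrimination") for n in (1, 4)]
pc_srch = [percent_correct(n, "search") for n in (1, 4)]
print(pc_disc, pc_srch)
```

At a fixed speed increment, accuracy rises with n under averaging (each grating contributes an independent sample) and falls with n under the max rule (more distractors that can spuriously exceed the target), mirroring the decreasing discrimination thresholds and increasing search thresholds reported above.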
Gallistel, C R; Gibbon, J
2000-04-01
The authors draw together and develop previous timing models for a broad range of conditioning phenomena to reveal their common conceptual foundations: First, conditioning depends on the learning of the temporal intervals between events and the reciprocals of these intervals, the rates of event occurrence. Second, remembered intervals and rates translate into observed behavior through decision processes whose structure is adapted to noise in the decision variables. The noise and the uncertainties consequent on it have both subjective and objective origins. A third feature of these models is their timescale invariance, which the authors argue is a very important property evident in the available experimental data. This conceptual framework is similar to the psychophysical conceptual framework in which contemporary models of sensory processing are rooted. The authors contrast it with the associative conceptual framework.
Raslear, T G; Shurtleff, D; Simmons, L
1992-01-01
Killeen and Fetterman's (1988) behavioral theory of animal timing predicts that decreases in the rate of reinforcement should produce decreases in the sensitivity (A') of temporal discriminations and a decrease in miss and correct rejection rates (decrease in bias toward "long" responses). Eight rats were trained on a 10- versus 0.1-s temporal discrimination with an intertrial interval of 5 s and were subsequently tested on probe days on the same discrimination with intertrial intervals of 1, 2.5, 5, 10, or 20 s. The rate of reinforcement declined for all animals as intertrial interval increased. Although sensitivity (A') decreased with increasing intertrial interval, all rats showed an increase in bias to make long responses. PMID:1447544
Li, Congjuan; Shi, Xiang; Mohamad, Osama Abdalla; Gao, Jie; Xu, Xinwen; Xie, Yijun
2017-01-01
Water influences various physiological and ecological processes of plants in different ecosystems, especially in desert ecosystems. The purpose of this study was to investigate the physiological and morphological acclimation of two desert shrubs, Haloxylon ammodendron and Calligonum mongolicum, to variations in irrigation interval. Irrigation was applied at 1-, 2-, 4-, 8- and 12-week intervals from March to October during 2012-2014. The irrigation interval significantly affected the individual-scale carbon acquisition and biomass allocation pattern of both species. Under good water conditions (1- and 2-week intervals), carbon assimilation was significantly higher than under the other treatments; under water shortage conditions (8- and 12-week intervals), there was much defoliation; and under a moderate irrigation interval (4 weeks), the assimilative organs grew gently with almost no defoliation occurring. Both species maintained similar ecophysiologically adaptive strategies, while C. mongolicum was more sensitive to drought stress because of its shallow root system and preferential belowground allocation of resources. A moderate irrigation interval of 4 weeks was a suitable pattern for both plants, since it not only saved water but also met the water demands of the plants.
Waynforth, David
2015-10-01
Human birth interval length is indicative of the level of parental investment that a child will receive: a short interval following birth means that parental resources must be split with a younger sibling during a period when the older sibling remains highly dependent on their parents. From a life-history theoretical perspective, it is likely that there are evolved mechanisms that serve to maximize fitness depending on context. One context that would be expected to result in short birth intervals, and lowered parental investment, is after a child with low expected fitness is born. Here, data drawn from a longitudinal British birth cohort study were used to test whether birth intervals were shorter following the birth of a child with a long-term health problem. Data on the timing of 4543 births were analysed using discrete-time event history analysis. The results were consistent with the hypothesis: birth intervals were shorter following the birth of a child diagnosed by a medical professional with a severe but non-fatal medical condition. Covariates in the analysis were also significantly associated with birth interval length: births of twins or multiple births, and relationship break-up were associated with significantly longer birth intervals. © 2015 The Author(s).
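Discrete-time event history analysis, the method named in this abstract, amounts to logistic regression on person-period records. A toy sketch with invented spell data and a hypothetical sibling-health covariate (scikit-learn stands in for whatever software the study used):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy person-period expansion: each birth interval (hypothetical, in
# years) becomes one row per year at risk, with event = 1 only in the
# year the next birth occurs.  Tuples: (duration, event_observed,
# older_sibling_has_health_problem).
spells = [
    (2, 1, 1), (3, 1, 1), (4, 1, 0), (5, 1, 0), (6, 0, 0), (2, 1, 1), (5, 1, 0),
]
rows, events = [], []
for dur, ev, health in spells:
    for year in range(1, dur + 1):
        rows.append([year, health])
        events.append(1 if (ev and year == dur) else 0)

X, y = np.array(rows), np.array(events)
fit = LogisticRegression().fit(X, y)

# exp(coef) on the health indicator is a discrete-time odds ratio for
# the hazard of a next birth in any given year.
or_health = float(np.exp(fit.coef_[0][1]))
print(round(or_health, 2))
```

The toy data give the health-problem spells shorter durations, so the fitted odds ratio exceeds 1, the direction of the effect reported in the abstract; the real analysis of 4543 births naturally includes many more covariates.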
Martínez-Alanis, Marisol; Ruiz-Velasco, Silvia; Lerma, Claudia
2016-12-15
Most approaches to predict ventricular tachyarrhythmias based on RR intervals consider only sinus beats, excluding premature ventricular complexes (PVCs). The method known as heartprint, which analyses PVCs and their characteristics, has prognostic value for fatal arrhythmias on long recordings of RR intervals (>70,000 beats). Our aim was to evaluate characteristics of PVCs from short-term recordings (around 1000 beats) and their prognostic value for imminent sustained tachyarrhythmia. We analyzed 132 pairs of short-term RR interval recordings (one before tachyarrhythmia and one control) obtained from 78 patients. Patients were classified into two groups based on the history of accelerated heart rate (HR > 90 bpm) before a tachyarrhythmia episode. Heartprint indexes, such as the mean coupling interval (meanCI) and the number of occurrences of the most prevalent form of PVCs (sNIB), were calculated. The predictive value of all the indexes and of the combination of different indexes was calculated. A meanCI shorter than 482 ms and the occurrence of more repetitive arrhythmias (sNIB ≥ 2.5) had significant prognostic value for patients with accelerated heart rate: adjusted odds ratios of 2.63 (1.33-5.17) for meanCI and 2.28 (1.20-4.33) for sNIB. Combining these indexes increased the adjusted odds ratio to 10.94 (3.89-30.80). A high prevalence of repeating forms of PVCs and a shorter CI are potentially useful risk markers of imminent ventricular tachyarrhythmia. Knowing whether a patient has a history of VT/VF preceded by accelerated HR improves the prognostic value of these risk markers. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
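A simplified reading of the two heartprint indexes named above, computed on invented beat annotations (the published heartprint algorithm is more elaborate than this sketch):

```python
import numpy as np

# Hypothetical beat annotations: (RR interval in ms, is_PVC, form_label).
# The coupling interval of a PVC is taken here as the RR interval that
# precedes it, and sNIB as the count of the most prevalent PVC form --
# a simplified reading of the heartprint indexes, not the published method.
beats = [(800, 0, None), (790, 0, None), (460, 1, "A"), (950, 0, None),
         (810, 0, None), (470, 1, "A"), (940, 0, None), (820, 0, None),
         (500, 1, "B"), (930, 0, None)]

coupling = [rr for rr, pvc, _ in beats if pvc]
mean_ci = float(np.mean(coupling))

forms = [f for _, pvc, f in beats if pvc]
snib = max(forms.count(f) for f in set(forms))
print(mean_ci, snib)
```

On these invented beats the mean coupling interval falls below the 482 ms cutoff that the study associates with elevated risk in patients with accelerated heart rate.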
Fenske, Timothy S; Ahn, Kwang W; Graff, Tara M; DiGilio, Alyssa; Bashir, Qaiser; Kamble, Rammurti T; Ayala, Ernesto; Bacher, Ulrike; Brammer, Jonathan E; Cairo, Mitchell; Chen, Andy; Chen, Yi-Bin; Chhabra, Saurabh; D'Souza, Anita; Farooq, Umar; Freytes, Cesar; Ganguly, Siddhartha; Hertzberg, Mark; Inwards, David; Jaglowski, Samantha; Kharfan-Dabaja, Mohamed A; Lazarus, Hillard M; Nathan, Sunita; Pawarode, Attaphol; Perales, Miguel-Angel; Reddy, Nishitha; Seo, Sachiko; Sureda, Anna; Smith, Sonali M; Hamadani, Mehdi
2016-07-01
For diffuse large B-cell lymphoma (DLBCL) patients progressing after autologous haematopoietic cell transplantation (autoHCT), allogeneic HCT (alloHCT) is often considered, although limited information is available to guide patient selection. Using the Center for International Blood and Marrow Transplant Research (CIBMTR) database, we identified 503 patients who underwent alloHCT after disease progression/relapse following a prior autoHCT. The 3-year probabilities of non-relapse mortality, progression/relapse, progression-free survival (PFS) and overall survival (OS) were 30, 38, 31 and 37% respectively. Factors associated with inferior PFS on multivariate analysis included Karnofsky performance status (KPS) <80, chemoresistance, autoHCT to alloHCT interval <1-year and myeloablative conditioning. Factors associated with worse OS on multivariate analysis included KPS<80, chemoresistance and myeloablative conditioning. Three adverse prognostic factors were used to construct a prognostic model for PFS, including KPS<80 (4 points), autoHCT to alloHCT interval <1-year (2 points) and chemoresistant disease at alloHCT (5 points). This CIBMTR prognostic model classified patients into four groups: low-risk (0 points), intermediate-risk (2-5 points), high-risk (6-9 points) or very high-risk (11 points), predicting 3-year PFS of 40, 32, 11 and 6%, respectively, with 3-year OS probabilities of 43, 39, 19 and 11% respectively. In conclusion, the CIBMTR prognostic model identifies a subgroup of DLBCL patients experiencing long-term survival with alloHCT after a failed prior autoHCT. © 2016 John Wiley & Sons Ltd.
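The point assignments and group boundaries are stated in the abstract, so the scoring rule can be written down directly (the "very high-risk" label is applied here to any score above 9, which with these point values can only be 11):

```python
def cibmtr_risk(kps_lt_80, interval_lt_1yr, chemoresistant):
    """CIBMTR prognostic score as described in the abstract:
    KPS < 80 -> 4 points, autoHCT-to-alloHCT interval < 1 year -> 2 points,
    chemoresistant disease at alloHCT -> 5 points."""
    score = 4 * kps_lt_80 + 2 * interval_lt_1yr + 5 * chemoresistant
    if score == 0:
        group = "low"          # 3-year PFS 40%, OS 43%
    elif score <= 5:
        group = "intermediate" # 3-year PFS 32%, OS 39%
    elif score <= 9:
        group = "high"         # 3-year PFS 11%, OS 19%
    else:
        group = "very high"    # 3-year PFS 6%, OS 11%
    return score, group

print(cibmtr_risk(False, True, True))  # interval <1 yr + chemoresistance
```

Because the three factors can only sum to 0, 2, 4, 5, 6, 7, 9 or 11, the four groups in the abstract (0, 2-5, 6-9, 11 points) cover every attainable score.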
NASA Astrophysics Data System (ADS)
Gyalay, S.; Vogt, M.; Withers, P.
2015-12-01
Previous studies have mapped locations from the magnetic equator to the ionosphere in order to understand how auroral features relate to magnetospheric sources. Vogt et al. (2011) in particular mapped equatorial regions to the ionosphere by using a method of flux equivalence—requiring that the magnetic flux in a specified region at the equator is equal to the magnetic flux in the region to which it maps in the ionosphere. This is preferred to methods relying on tracing field lines from global Jovian magnetic field models, which are inaccurate beyond 30 Jupiter radii from the planet. That previous study produced a two-dimensional model—accounting for changes with radial distance and local time—of the normal component of the magnetic field in the equatorial region. However, this two-dimensional fit—which aggregated all equatorial data from Pioneer 10, Pioneer 11, Voyager 1, Voyager 2, Ulysses, and Galileo—did not account for temporal variability resulting from changing solar wind conditions. Building off of that project, this study aims to map the Jovian aurora to the magnetosphere for two separate cases: with a nominal magnetosphere, and with a magnetosphere compressed by high solar wind dynamic pressure. Using the Michigan Solar Wind Model (mSWiM) to predict the solar wind conditions upstream of Jupiter, intervals of high solar wind dynamic pressure were separated from intervals of low solar wind dynamic pressure—thus creating two datasets of magnetometer measurements to be used for two separate 2D fits, and two separate mappings.
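The flux-equivalence condition underlying this mapping can be stated schematically as follows (the actual fits further parameterize the equatorial normal field by radial distance and local time, and here by solar wind dynamic pressure):

```latex
\int_{A_{\mathrm{eq}}} B_n \, dA \;=\; \int_{A_{\mathrm{ion}}} B_{\mathrm{ion}} \, dA
```

where $A_{\mathrm{eq}}$ is a specified region at the magnetic equator, $B_n$ the normal component of the equatorial field there, and the right-hand side the magnetic flux through the ionospheric area $A_{\mathrm{ion}}$ to which it maps.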
Dynamics of acoustically levitated disk samples.
Xie, W J; Wei, B
2004-10-01
The acoustic levitation force on disk samples and the dynamics of large water drops in a planar standing wave are studied by solving the acoustic scattering problem through incorporating the boundary element method. The dependence of levitation force amplitude on the equivalent radius R of disks deviates seriously from the R³ law predicted by King's theory, and a larger force can be obtained for thin disks. When the disk aspect ratio γ is larger than a critical value γ* (≈1.9) and the disk radius a is smaller than the critical value a*(γ), the levitation force per unit volume of the sample will increase with the enlargement of the disk. The acoustic levitation force on thin-disk samples (γ ⩽ γ*) can be formulated by the shape factor f(γ,a) when a ⩽ a*(γ). It is found experimentally that a necessary condition of the acoustic field for stable levitation of a large water drop is to adjust the reflector-emitter interval H slightly above the resonant interval Hn. The simulation shows that the drop is flattened and the central parts of its top and bottom surface become concave with the increase of sound pressure level, which agrees with the experimental observation. The main frequencies of the shape oscillation under different sound pressures are slightly larger than the Rayleigh frequency because of the large shape deformation. The simulated translational frequencies of the vertical vibration under normal gravity condition agree with the theoretical analysis.
NASA Astrophysics Data System (ADS)
de Wet, Gregory A.; Castañeda, Isla S.; DeConto, Robert M.; Brigham-Grette, Julie
2016-02-01
Previous periods of extreme warmth in Earth's history are of great interest in light of current and predicted anthropogenic warming. Numerous so-called "super interglacial" intervals, with summer temperatures significantly warmer than today, have been identified in the 3.6 million year (Ma) sediment record from Lake El'gygytgyn, northeast Russia. To date, however, a high-resolution paleotemperature reconstruction from any of these super interglacials is lacking. Here we present a paleotemperature reconstruction based on branched glycerol dialkyl glycerol tetraethers (brGDGTs) from Marine Isotope Stages (MIS) 35 to MIS 29, including super interglacial MIS 31. To investigate this period in detail, samples were analyzed with an unprecedented average sample resolution of 500 yrs from MIS 33 to MIS 30. Our results suggest the entire period currently defined as MIS 33-31 (∼1114-1062 kyr BP) was characterized by generally warm and highly variable conditions at the lake, at times out of phase with Northern Hemisphere summer insolation, and that cold "glacial" conditions during MIS 32 lasted only a few thousand years. Close similarities are seen with coeval records from high southern latitudes, supporting the suggestion that the interval from MIS 33 to MIS 31 was an exceptionally long interglacial (Teitler et al., 2015). Based on brGDGT temperatures from Lake El'gygytgyn (this study and unpublished results), warming in the western Arctic during MIS 31 was matched only by MIS 11 during the Pleistocene.
Banos, G; Brotherstone, S; Coffey, M P
2004-08-01
Body condition score (BCS) records of primiparous Holstein cows were analyzed both as a single measure per animal and as repeated measures per sire of cow. The former resulted in a single, average, genetic evaluation for each sire, and the latter resulted in separate genetic evaluations per day of lactation. Repeated measure analysis yielded genetic correlations of less than unity between days of lactation, suggesting that BCS may not be the same trait across lactation. Differences between daily genetic evaluations on d 10 or 30 and subsequent daily evaluations were used to assess BCS change at different stages of lactation. Genetic evaluations for BCS level or change were used to estimate genetic correlations between BCS measures and fertility traits in order to assess the capacity of BCS to predict fertility. Genetic correlation estimates with calving interval and non-return rate were consistently higher for daily BCS than single measure BCS evaluations, but results were not always statistically different. Genetic correlations between BCS change and fertility traits were not significantly different from zero. The product of the accuracy of BCS evaluations with their genetic correlation with the UK fertility index, comprising calving interval and non-return rate, was consistently higher for daily than for single BCS evaluations, by 28 to 53%. This product is associated with the conceptual correlated response in fertility from BCS selection and was highest for early (d 10 to 75) evaluations.
NASA Technical Reports Server (NTRS)
Johnson, Walter W.; Lachter, Joel; Brandt, Summer; Koteskey, Robert; Dao, Arik-Quang; Kraut, Josh; Ligda, Sarah; Battiste, Vernol
2012-01-01
In today's terminal operations, controller workload increases and throughput decreases when fixed standard terminal arrival routes (STARs) are impacted by storms. To circumvent this operational constraint, Prete, Krozel, Mitchell, Kim and Zou (2008) proposed to use automation to dynamically adapt arrival and departure routing based on weather predictions. The present study examined this proposal in the context of a NextGen trajectory-based operation concept, focusing on its acceptability and its effect on the controllers' ability to manage traffic flows. Six controllers and twelve transport pilots participated in a human-in-the-loop simulation of arrival operations into Louisville International Airport with interval management requirements. Three types of routing structures were used: Static STARs (similar to current routing, which requires the trajectories of individual aircraft to be modified to avoid the weather), Dynamic routing (automated adaptive routing around weather), and Dynamic Adjusted routing (automated adaptive routing around weather with aircraft entry time adjusted to account for differences in route length). Spacing Responsibility, i.e. whether responsibility for interval management resided with the controllers (as today) or with the pilot (who used a flight-deck-based automated spacing algorithm), was also manipulated. Dynamic routing as a whole was rated superior to static routing, especially by pilots, both in terms of workload reduction and flight path safety. A downside of using dynamic routing was that the paths flown in the dynamic conditions tended to be somewhat longer than the paths flown in the static condition.
White, J.D.; Running, S.W.; Thornton, P.E.; Keane, R.E.; Ryan, K.C.; Fagre, D.B.; Key, C.H.
1998-01-01
Glacier National Park served as a test site for ecosystem analyses that involved a suite of integrated models embedded within a geographic information system. The goal of the exercise was to provide managers with maps that could illustrate probable shifts in vegetation, net primary production (NPP), and hydrologic responses associated with two selected climatic scenarios. The climatic scenarios were (a) a recent 12-yr record of weather data, and (b) a reconstituted set that sequentially introduced in repeated 3-yr intervals wetter-cooler, drier-warmer, and typical conditions. To extrapolate the implications of changes in ecosystem processes and resulting growth and distribution of vegetation and snowpack, the model incorporated geographic data. With underlying digital elevation maps, soil depth and texture, extrapolated climate, and current information on vegetation types and satellite-derived estimates of leaf area indices, simulations were extended to envision how the park might look after 120 yr. The predictions of change included underlying processes affecting the availability of water and nitrogen. Considerable field data were acquired to compare with model predictions under current climatic conditions. In general, the integrated landscape models of ecosystem processes had good agreement with measured NPP, snowpack, and streamflow, but the exercise revealed the difficulty and necessity of averaging point measurements across landscapes to achieve comparable results with modeled values. Under the extremely variable climate scenario significant changes in vegetation composition and growth as well as hydrologic responses were predicted across the park. In particular, a general rise in both the upper and lower limits of treeline was predicted. These shifts would probably occur along with a variety of disturbances (fire, insect, and disease outbreaks) as predictions of physiological stress (water, nutrients, light) altered competitive relations and hydrologic responses. The use of integrated landscape models applied in this exercise should provide managers with insights into the underlying processes important in maintaining community structure, and at the same time, locate where changes on the landscape are most likely to occur.
Dynamic Predictive Model for Growth of Bacillus cereus from Spores in Cooked Beans.
Juneja, Vijay K; Mishra, Abhinav; Pradhan, Abani K
2018-02-01
Kinetic growth data for Bacillus cereus grown from spores were collected in cooked beans under several isothermal conditions (10 to 49°C). Samples were inoculated with approximately 2 log CFU/g heat-shocked (80°C for 10 min) spores and stored at isothermal temperatures. B. cereus populations were determined at appropriate intervals by plating on mannitol-egg yolk-polymyxin agar and incubating at 30°C for 24 h. Data were fitted to the Baranyi, Huang, modified Gompertz, and three-phase linear primary growth models. All four models were fitted to the experimental growth data collected at 13 to 46°C. Performances of these models were evaluated based on accuracy and bias factors, the coefficient of determination (R²), and the root mean square error. Based on these criteria, the Baranyi model best described the growth data, followed by the Huang, modified Gompertz, and three-phase linear models. The maximum growth rates of each primary model were fitted as a function of temperature using the modified Ratkowsky model. The high R² values (0.95 to 0.98) indicate that the modified Ratkowsky model can be used to describe the effect of temperature on the growth rates for all four primary models. The acceptable prediction zone (APZ) approach also was used for validation of the model with observed data collected during single and two-step dynamic cooling temperature protocols. When the predictions using the Baranyi model were compared with the observed data using the APZ analysis, all 24 observations for the exponential single rate cooling were within the APZ, which was set between -0.5 and 1 log CFU/g; 26 of 28 predictions for the two-step cooling profiles also were within the APZ limits. The developed dynamic model can be used to predict potential B. cereus growth from spores in beans under various temperature conditions or during extended chilling of cooked beans.
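A sketch of the secondary-model step, fitting a Ratkowsky-type curve for the maximum growth rate as a function of temperature (the parameter values and data below are synthetic, not the study's fitted values, and the exact modified-Ratkowsky form used in the paper may differ in detail):

```python
import numpy as np
from scipy.optimize import curve_fit

# Ratkowsky-type secondary model: sqrt(mu_max) = b (T - Tmin) (1 - exp(c (T - Tmax))),
# so mu_max is the square of that expression.  All parameters illustrative.
def ratkowsky(T, b, Tmin, c, Tmax):
    return (b * (T - Tmin) * (1.0 - np.exp(c * (T - Tmax)))) ** 2

T = np.linspace(13, 46, 12)            # temperatures in the fitted range (deg C)
true = (0.035, 8.0, 0.3, 50.0)         # assumed "true" parameters
mu = ratkowsky(T, *true)               # noiseless synthetic growth rates

popt, _ = curve_fit(ratkowsky, T, mu, p0=(0.03, 10.0, 0.2, 49.0))
print(np.round(popt, 2))
```

On noiseless data with a reasonable starting guess, the fit recovers a curve matching the generating one; with real growth-rate estimates the same call would return the temperature-response parameters reported via R² in the abstract.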
A Novel Method for Satellite Maneuver Prediction
NASA Astrophysics Data System (ADS)
Shabarekh, C.; Kent-Bryant, J.; Keselman, G.; Mitidis, A.
2016-09-01
A space operations tradecraft consisting of detect-track-characterize-catalog is insufficient for maintaining Space Situational Awareness (SSA) as space becomes increasingly congested and contested. In this paper, we apply analytical methodology from the Geospatial-Intelligence (GEOINT) community to a key challenge in SSA: predicting where and when a satellite may maneuver in the future. We developed a machine learning approach to probabilistically characterize Patterns of Life (PoL) for geosynchronous (GEO) satellites. PoL are repeatable, predictable behaviors that an object exhibits within a context and is driven by spatio-temporal, relational, environmental and physical constraints. An example of PoL are station-keeping maneuvers in GEO which become generally predictable as the satellite re-positions itself to account for orbital perturbations. In an earlier publication, we demonstrated the ability to probabilistically predict maneuvers of the Galaxy 15 (NORAD ID: 28884) satellite with high confidence eight days in advance of the actual maneuver. Additionally, we were able to detect deviations from expected PoL within hours of the predicted maneuver [6]. This was done with a custom unsupervised machine learning algorithm, the Interval Similarity Model (ISM), which learns repeating intervals of maneuver patterns from unlabeled historical observations and then predicts future maneuvers. In this paper, we introduce a supervised machine learning algorithm that works in conjunction with the ISM to produce a probabilistic distribution of when future maneuvers will occur. The supervised approach uses a Support Vector Machine (SVM) to process the orbit state whereas the ISM processes the temporal intervals between maneuvers and the physics-based characteristics of the maneuvers. This multiple model approach capitalizes on the mathematical strengths of each respective algorithm while incorporating multiple features and inputs. Initial findings indicate that the combined approach can predict 70% of maneuver times within 3 days of a true maneuver time and 22% of maneuver times within 24 hours of a maneuver. We have also been able to detect deviations from expected maneuver patterns up to a week in advance.
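A much-reduced sketch of the interval-learning idea (hypothetical maneuver epochs; the ISM itself learns repeating interval patterns and combines them with physics-based features, rather than using a single mean interval as below):

```python
import numpy as np

# Hypothetical station-keeping maneuver epochs (days) for one GEO
# satellite -- invented values standing in for historical observations.
maneuvers = np.array([0.0, 21.3, 42.9, 63.8, 85.2, 106.6])
intervals = np.diff(maneuvers)

# Estimate the typical inter-maneuver interval and predict a window
# for the next maneuver around the extrapolated epoch.
mu, sigma = intervals.mean(), intervals.std(ddof=1)
next_pred = maneuvers[-1] + mu
window = (next_pred - 2 * sigma, next_pred + 2 * sigma)
print(round(next_pred, 1), np.round(window, 1))
```

An observed maneuver falling far outside such a window is the simplest notion of a "deviation from expected PoL"; the paper's combined ISM/SVM approach produces a full probabilistic distribution instead of a two-sigma band.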
Suicide and self-injury among children and youth with chronic health conditions.
Barnes, Andrew J; Eisenberg, Marla E; Resnick, Michael D
2010-05-01
Chronic conditions may be associated with suicide risk. This study aimed to specify the extent to which youth with chronic conditions are at risk for suicidality and self-harm. Logistic regression was used to estimate odds of self-harm, suicidal ideation, and suicide attempts in 10- to 19-year-olds with and without chronic physical and/or mental health conditions. Independent of race, socioeconomic status, absent parent, special education status, substance use, and emotional distress, youth with co-occurring chronic physical and mental conditions (n = 4099) had significantly higher odds of self-harm (odds ratio [OR]: 2.5 [99% confidence interval (CI): 2.3-2.8]), suicidal ideation (OR: 2.5 [99% CI: 2.3-2.8]), and suicide attempts (OR: 3.5 [99% CI: 3.1-3.9]) than healthy peers (n = 106,967), as did those with chronic mental conditions alone (n = 8752). Youth with chronic physical conditions alone (n = 12,554) were at slightly elevated risk for all 3 outcomes. Findings were similar among male and female youth, with a risk gradient by grade. Chronic physical conditions are associated with a slightly elevated risk for self-harm, suicidal thinking, and attempted suicide; chronic mental conditions are associated with an increased risk for all 3 outcomes. Co-occurring chronic physical and mental conditions are associated with an increased risk for self-harm and suicidal ideation that is similar to the risk in chronic mental conditions and with an attempted suicide risk in excess of that predicted by the chronic mental health conditions alone. Preventive interventions for these youth should be developed and evaluated.
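Odds ratios with CIs of the form reported above come from exponentiating logistic-regression coefficients. A minimal sketch with an assumed standard error (the 0.033 below is invented to roughly reproduce the 2.5 [2.3-2.8] pattern, not a value from the study):

```python
import math

# Recover an OR and its 99% CI from a log-odds coefficient and SE.
beta = math.log(2.5)   # log-odds coefficient (illustrative)
se = 0.033             # assumed standard error (illustrative)
z99 = 2.576            # standard normal quantile for a 99% interval

or_point = math.exp(beta)
ci = (math.exp(beta - z99 * se), math.exp(beta + z99 * se))
print(round(or_point, 2), round(ci[0], 2), round(ci[1], 2))
```

The interval is computed on the log scale and then exponentiated, which is why CIs for odds ratios are asymmetric around the point estimate.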
Kuiper, Gerhardus J A J M; Houben, Rik; Wetzels, Rick J H; Verhezen, Paul W M; Oerle, Rene van; Ten Cate, Hugo; Henskens, Yvonne M C; Lancé, Marcus D
2017-11-01
Low platelet counts and hematocrit levels hinder whole blood point-of-care testing of platelet function. Thus far, no reference ranges for MEA (multiple electrode aggregometry) and PFA-100 (platelet function analyzer 100) devices exist for low ranges. Through dilution methods of volunteer whole blood, platelet function at low ranges of platelet count and hematocrit levels was assessed on MEA for four agonists and for PFA-100 in two cartridges. Using (multiple) regression analysis, 95% reference intervals were computed for these low ranges. Low platelet counts affected MEA in a positive correlation (all agonists showed r² ≥ 0.75) and PFA-100 in an inverse correlation (closure times were prolonged with lower platelet counts). Lowered hematocrit did not affect MEA testing, except for arachidonic acid activation (ASPI), which showed a weak positive correlation (r² = 0.14). Closure time on PFA-100 testing was inversely correlated with hematocrit for both cartridges. Regression analysis revealed different 95% reference intervals in comparison with originally established intervals for both MEA and PFA-100 in low platelet or hematocrit conditions. Multiple regression analysis of ASPI and both tests on the PFA-100 for combined low platelet and hematocrit conditions revealed that only PFA-100 testing should be adjusted for both thrombocytopenia and anemia. 95% reference intervals were calculated using multiple regression analysis. However, coefficients of determination of PFA-100 were poor, and some variance remained unexplained. Thus, in this pilot study using (multiple) regression analysis, we could establish reference intervals of platelet function in anemia and thrombocytopenia conditions on PFA-100 and in thrombocytopenia conditions on MEA.
Effect of different rest intervals after whole-body vibration on vertical jump performance.
Dabbs, Nicole C; Muñoz, Colleen X; Tran, Tai T; Brown, Lee E; Bottaro, Martim
2011-03-01
Whole-body vibration (WBV) may potentiate vertical jump (VJ) performance via augmented muscular strength and motor function. The purpose of this study was to evaluate the effect of different rest intervals after WBV on VJ performance. Thirty recreationally trained subjects (15 men and 15 women) volunteered to participate in 4 testing visits separated by 24 hours. Visit 1 acted as a familiarization visit where subjects were introduced to the VJ and WBV protocols. Visits 2-4 contained 2 randomized conditions per visit with a 10-minute rest period between conditions. The WBV was administered on a pivotal platform with a frequency of 30 Hz and an amplitude of 6.5 mm in 4 bouts of 30 seconds for a total of 2 minutes with 30 seconds of rest between bouts. During WBV, subjects performed a quarter squat every 5 seconds, simulating a countermovement jump (CMJ). Whole-body vibration was followed by 3 CMJs with 5 different rest intervals: immediate, 30 seconds, 1 minute, 2 minutes, or 4 minutes. For a control condition, subjects performed squats with no WBV. There were no significant (p > 0.05) differences in peak velocity or relative ground reaction force after WBV rest intervals. However, results of VJ height revealed that maximum values, regardless of rest interval (56.93 ± 13.98 cm), were significantly (p < 0.05) greater than the control condition (54.44 ± 13.74 cm). Therefore, subjects' VJ height potentiated at different times after WBV suggesting strong individual differences in optimal rest interval. Coaches may use WBV to enhance acute VJ performance but should first identify each individual's optimal rest time to maximize the potentiating effects.
Refusal bias in HIV prevalence estimates from nationally representative seroprevalence surveys.
Reniers, Georges; Eaton, Jeffrey
2009-03-13
To assess the relationship between prior knowledge of one's HIV status and the likelihood of refusing HIV testing in population-based surveys, and to explore its potential for producing bias in HIV prevalence estimates. Using longitudinal survey data from Malawi, we estimate the relationship between prior knowledge of HIV-positive status and subsequent refusal of an HIV test. We use that parameter to develop a heuristic model of refusal bias that is applied to six Demographic and Health Surveys, in which refusal by HIV status is not observed. The model only adjusts for refusal bias conditional on a completed interview. Ecologically, HIV prevalence, prior testing rates and refusal for HIV testing are highly correlated. Malawian data further suggest that amongst individuals who know their status, HIV-positive individuals are 4.62 (95% confidence interval, 2.60-8.21) times more likely to refuse testing than HIV-negative ones. On the basis of that parameter and other inputs from the Demographic and Health Surveys, our model predicts downward bias in national HIV prevalence estimates ranging from 1.5% (95% confidence interval, 0.7-2.9) for Senegal to 13.3% (95% confidence interval, 7.2-19.6) for Malawi. In absolute terms, bias in HIV prevalence estimates is negligible for Senegal but 1.6 (95% confidence interval, 0.8-2.3) percentage points for Malawi. Downward bias is more severe in urban populations. Because refusal rates are higher in men, seroprevalence surveys also tend to overestimate the female-to-male ratio of infections. Prior knowledge of HIV status informs decisions to participate in seroprevalence surveys. Informed refusals may produce bias in estimates of HIV prevalence and the sex ratio of infections.
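The mechanism behind the reported downward bias can be illustrated with a toy version of the heuristic model. Everything numeric below except the 4.62 refusal ratio is an assumed input, and this sketch is far simpler than the paper's model (it ignores interview non-completion, urban/rural structure, and uncertainty propagation).

```python
# Toy refusal-bias mechanism: HIV-positive respondents who know their status
# refuse testing theta times as often as others, deflating observed prevalence.
theta = 4.62     # refusal risk ratio among status-aware individuals (Malawi)
r_neg = 0.10     # assumed baseline refusal rate (HIV-negative respondents)
k = 0.30         # assumed fraction of HIV-positive respondents who know their status
true_prev = 0.12 # assumed true prevalence

# Effective refusal among positives: elevated only for the aware fraction.
r_pos = k * theta * r_neg + (1 - k) * r_neg

tested_pos = true_prev * (1 - r_pos)
tested_neg = (1 - true_prev) * (1 - r_neg)
obs_prev = tested_pos / (tested_pos + tested_neg)
rel_bias = (true_prev - obs_prev) / true_prev
print(f"observed {obs_prev:.3f} vs true {true_prev:.3f} "
      f"(relative downward bias {rel_bias:.1%})")
```

With these assumed inputs the downward bias lands in the same order of magnitude as the paper's Malawi estimate, which is the point of the exercise, not a reproduction of it.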
Tracking a changing environment: optimal sampling, adaptive memory and overnight effects.
Dunlap, Aimee S; Stephens, David W
2012-02-01
Foraging in a variable environment presents a classic problem of decision making with incomplete information. Animals must track the changing environment, remember the best options and make choices accordingly. While several experimental studies have explored the idea that sampling behavior reflects the amount of environmental change, we take the next logical step in asking how change influences memory. We explore the hypothesis that memory length should be tied to the ecological relevance and the value of the information learned, and that environmental change is a key determinant of the value of memory. We use a dynamic programming model to confirm our predictions and then test memory length in a factorial experiment. In our experimental situation we manipulate rates of change in a simple foraging task for blue jays over a 36 h period. After jays experienced an experimentally determined change regime, we tested them at a range of retention intervals, from 1 to 72 h. Manipulated rates of change influenced learning and sampling rates: subjects sampled more and learned more quickly in the high change condition. Tests of retention revealed significant interactions between retention interval and the experienced rate of change. We observed a striking and surprising difference between the high and low change treatments at the 24 h retention interval. In agreement with earlier work we find that a circadian retention interval is special, but we find that the extent of this 'specialness' depends on the subject's prior experience of environmental change. Specifically, experienced rates of change seem to influence how subjects balance recent information against past experience in a way that interacts with the passage of time. Copyright © 2011 Elsevier B.V. All rights reserved.
McLean, A P; Blampied, N M
1995-01-01
Behavioral momentum theory relates resistance to change of responding in a multiple-schedule component to the total reinforcement obtained in that component, regardless of how the reinforcers are produced. Four pigeons responded in a series of multiple-schedule conditions in which a variable-interval 40-s schedule arranged reinforcers for pecking in one component and a variable-interval 360-s schedule arranged them in the other. In addition, responses on a second key were reinforced according to variable-interval schedules that were equal in the two components. In different parts of the experiment, responding was disrupted by changing the rate of reinforcement on the second key or by delivering response-independent food during a blackout separating the two components. Consistent with momentum theory, responding on the first key in Part 1 changed more in the component with the lower reinforcement total when it was disrupted by changes in the rate of reinforcement on the second key. However, responding on the second key changed more in the component with the higher reinforcement total. In Parts 2 and 3, responding was disrupted with free food presented during intercomponent blackouts, with extinction (Part 2) or variable-interval 80-s reinforcement (Part 3) arranged on the second key. Here, resistance to change was greater for the component with greater overall reinforcement. Failures of momentum theory to predict short-term differences in resistance to change occurred with disruptors that caused greater change between steady states for the richer component. Consistency of effects across disruptors may yet be found if short-term effects of disruptors are assessed relative to the extent of change observed after prolonged exposure.
NASA Technical Reports Server (NTRS)
Ulaby, F. T. (Principal Investigator); Dobson, M. C.; Moezzi, S.
1982-01-01
Radar simulations were performed at five-day intervals over a twenty-day period and used to estimate soil moisture from a generalized algorithm requiring only received power and the mean elevation of a test site near Lawrence, Kansas. The results demonstrate that the soil moisture of about 90% of the 20-m by 20-m pixel elements can be predicted with an accuracy of ±20% of field capacity within relatively flat agricultural portions of the test site. Radar resolutions of 93 m by 100 m with 23 looks or coarser gave the best results, largely because of the effects of signal fading. For the distribution of land cover categories, soils, and elevation in the test site, very coarse radar resolutions of 1 km by 1 km and 2.6 km by 3.1 km gave the best results for wet moisture conditions while a finer resolution of 93 m by 100 m was found to yield superior results for dry to moist soil conditions.
NASA Astrophysics Data System (ADS)
Seo, Jongmin; Schiavazzi, Daniele; Marsden, Alison
2017-11-01
Cardiovascular simulations are increasingly used in clinical decision making, surgical planning, and disease diagnostics. Patient-specific modeling and simulation typically proceeds through a pipeline from anatomic model construction using medical image data to blood flow simulation and analysis. To provide confidence intervals on simulation predictions, we use an uncertainty quantification (UQ) framework to analyze the effects of numerous uncertainties that stem from clinical data acquisition, modeling, material properties, and boundary condition selection. However, UQ poses a computational challenge requiring multiple evaluations of the Navier-Stokes equations in complex 3-D models. To achieve efficiency in UQ problems with many function evaluations, we implement and compare a range of iterative linear solver and preconditioning techniques in our flow solver. We then discuss applications to patient-specific cardiovascular simulation and how the problem/boundary condition formulation in the solver affects the selection of the most efficient linear solver. Finally, we discuss performance improvements in the context of uncertainty propagation. Support from National Institute of Health (R01 EB018302) is greatly appreciated.
Subvocal articulatory rehearsal during verbal working memory in multiple sclerosis.
Sweet, Lawrence H; Vanderhill, Susan D; Jerskey, Beth A; Gordon, Norman M; Paul, Robert H; Cohen, Ronald A
2010-10-01
This study was designed to examine verbal working memory (VWM) components among multiple sclerosis (MS) patients and determine the influence of information processing speed. Of two frequently studied VWM sub-components, subvocal rehearsal was expected to be more affected by MS than short-term memory buffering. Furthermore, worse subvocal rehearsal was predicted to be specifically related to slower cognitive processing. Fifteen MS patients were administered a neuropsychological battery assessing VWM, processing speed, mood, fatigue, and disability. Participants performed a 2-Back VWM task with modified nested conditions designed to increase subvocal rehearsal (via inter-stimulus interval) and short-term memory buffering demands (via phonological similarity). Performance during these 2-Back conditions did not significantly differ and both exhibited strong positive correlations with disability. However, only scores on the subvocal rehearsal 2-Back were significantly related to performance on the remaining test battery, including processing speed and depressive symptoms. Findings suggest that performance during increased subvocal rehearsal demands is specifically influenced by cognitive processing speed and depressive symptoms.
Genç, Nevim; Doğan, Esra Can; Narcı, Ali Oğuzhan; Bican, Emine
2017-05-01
In this study, a multi-response optimization method using Taguchi's robust design approach is proposed for imidacloprid removal by reverse osmosis. Tests were conducted with different membrane types (BW30, LFC-3, CPA-3), transmembrane pressures (TMP = 20, 25, 30 bar), volume reduction factors (VRF = 2, 3, 4), and pH (3, 7, 11). Quality and quantity of permeate are optimized with the multi-response characteristics of the total dissolved solid (TDS), conductivity, imidacloprid, and total organic carbon (TOC) rejection ratios and flux of permeate. The optimized conditions were determined as membrane type BW30, TMP 30 bar, VRF 3, and pH 11. Under these conditions, TDS, conductivity, imidacloprid, and TOC rejections and permeate flux were 97.50, 97.41, 97.80, and 98.00% and 30.60 L/m²·h, respectively. Membrane type was obtained as the most effective factor; its contribution is 64%. The difference between the predicted and observed value of multi-response signal/noise (MRSN) is within the confidence interval.
Contamination in the Prospective Study of Child Maltreatment and Female Adolescent Health
Noll, Jennie G.; Peugh, James L.; Griffin, Amanda M.; Bensman, Heather E.
2016-01-01
Objective To evaluate the impact of contamination, or the presence of child maltreatment in a comparison condition, when estimating the broad, longitudinal effects of child maltreatment on female health at the transition to adulthood. Methods The Female Adolescent Development Study (N = 514; age range: 14–19 years) used a prospective cohort design to examine the effects of substantiated child maltreatment on teenage births, obesity, major depression, and past-month cigarette use. Contamination was controlled via a multimethod strategy that used both adolescent self-report and Child Protective Services records to remove cases of child maltreatment from the comparison condition. Results Substantiated child maltreatment significantly predicted each outcome, relative risks = 1.47–2.95, 95% confidence intervals: 1.03–7.06, with increases in corresponding effect size magnitudes, only when contamination was controlled using the multimethod strategy. Conclusions Contamination truncates risk estimates of child maltreatment and controlling it can strengthen overall conclusions about the effects of child maltreatment on female health. PMID:25797944
Complexity Matching Effects in Bimanual and Interpersonal Syncopated Finger Tapping
Coey, Charles A.; Washburn, Auriel; Hassebrock, Justin; Richardson, Michael J.
2016-01-01
The current study was designed to investigate complexity matching during syncopated behavioral coordination. Participants either tapped in (bimanual) syncopation using their two hands, or tapped in (interpersonal) syncopation with a partner, with each participant using one of their hands. The time series of inter-tap intervals (ITI) from each hand were submitted to fractal analysis, as well as to short-term and multi-timescale cross-correlation analyses. The results demonstrated that the fractal scaling of one hand’s ITI was strongly correlated to that of the other hand, and this complexity matching effect was stronger in the bimanual condition than in the interpersonal condition. Moreover, the degree of complexity matching was predicted by the strength of short-term cross-correlation and the stability of the asynchrony between the two tapping series. These results suggest that complexity matching is not specific to the inphase synchronization tasks used in past research, but is a general result of coordination between complex systems. PMID:26840612
Yu, Chi-Yang; Huang, Liang-Yu; Kuan, I-Ching; Lee, Shiow-Ling
2013-01-01
Biodiesel, a non-toxic and biodegradable fuel, has recently become a major source of renewable alternative fuels. Utilization of lipase as a biocatalyst to produce biodiesel has advantages over common alkaline catalysts such as mild reaction conditions, easy product separation, and use of waste cooking oil as raw material. In this study, Pseudomonas cepacia lipase immobilized onto magnetic nanoparticles (MNP) was used for biodiesel production from waste cooking oil. The optimal dosage of lipase-bound MNP was 40% (w/w of oil) and there was little difference between stepwise addition of methanol at 12 h- and 24 h-intervals. Reaction temperature, substrate molar ratio (methanol/oil), and water content (w/w of oil) were optimized using response surface methodology (RSM). The optimal reaction conditions were 44.2 °C, substrate molar ratio of 5.2, and water content of 12.5%. The predicted and experimental molar conversions of fatty acid methyl esters (FAME) were 80% and 79%, respectively. PMID:24336109
Skalicky, Simon E; Fenwick, Eva; Martin, Keith R; Crowston, Jonathan; Goldberg, Ivan; McCluskey, Peter
2016-07-01
The aim of the study is to measure the impact of age-related macular degeneration on vision-related activity limitation and preference-based status for glaucoma patients. This was a cross-sectional study. Two hundred glaucoma patients, of whom 73 had age-related macular degeneration, were included in the research. Sociodemographic information, visual field parameters and visual acuity were collected. Age-related macular degeneration was scored using the Age-Related Eye Disease Study system. The Rasch-analysed Glaucoma Activity Limitation-9 and the Visual Function Questionnaire Utility Index measured vision-related activity limitation and preference-based status, respectively. Regression models determined factors predictive of vision-related activity limitation and preference-based status. Differential item functioning compared Glaucoma Activity Limitation-9 item difficulty for those with and without age-related macular degeneration. Mean age was 73.7 (±10.1) years. Lower better eye mean deviation (β: 1.42, 95% confidence interval: 1.24-1.63, P < 0.001) and age-related macular degeneration (β: 1.26, 95% confidence interval: 1.10-1.44, P = 0.001) were independently associated with worse vision-related activity limitation. Worse eye visual acuity (β: 0.978, 95% confidence interval: 0.961-0.996, P = 0.018), high-risk age-related macular degeneration (β: 0.981, 95% confidence interval: 0.965-0.998, P = 0.028) and severe glaucoma (β: 0.982, 95% confidence interval: 0.966-0.998, P = 0.032) were independently associated with worse preference-based status. Glaucoma patients with age-related macular degeneration found using stairs, walking on uneven ground and judging distances of foot to step/curb significantly more difficult than those without age-related macular degeneration. Vision-related activity limitation and preference-based status are negatively impacted by severe glaucoma and age-related macular degeneration.
Patients with both conditions perceive increased difficulty walking safely compared with patients with glaucoma alone. © 2015 Royal Australian and New Zealand College of Ophthalmologists.
The QT Interval and Risk of Incident Atrial Fibrillation
Mandyam, Mala C.; Soliman, Elsayed Z.; Alonso, Alvaro; Dewland, Thomas A.; Heckbert, Susan R.; Vittinghoff, Eric; Cummings, Steven R.; Ellinor, Patrick T.; Chaitman, Bernard R.; Stocke, Karen; Applegate, William B.; Arking, Dan E.; Butler, Javed; Loehr, Laura R.; Magnani, Jared W.; Murphy, Rachel A.; Satterfield, Suzanne; Newman, Anne B.; Marcus, Gregory M.
2013-01-01
BACKGROUND Abnormal atrial repolarization is important in the development of atrial fibrillation (AF), but no direct measurement is available in clinical medicine. OBJECTIVE To determine whether the QT interval, a marker of ventricular repolarization, could be used to predict incident AF. METHODS We examined a prolonged QT corrected by the Framingham formula (QTFram) as a predictor of incident AF in the Atherosclerosis Risk in Communities (ARIC) study. The Cardiovascular Health Study (CHS) and Health, Aging, and Body Composition (Health ABC) study were used for validation. Secondary predictors included QT duration as a continuous variable, a short QT interval, and QT intervals corrected by other formulae. RESULTS Among 14,538 ARIC participants, a prolonged QTFram predicted a roughly two-fold increased risk of AF (hazard ratio [HR] 2.05, 95% confidence interval [CI] 1.42–2.96, p<0.001). No substantive attenuation was observed after adjustment for age, race, sex, study center, body mass index, hypertension, diabetes, coronary disease, and heart failure. The findings were validated in CHS and Health ABC and were similar across various QT correction methods. Also in ARIC, each 10-ms increase in QTFram was associated with an increased unadjusted (HR 1.14, 95%CI 1.10–1.17, p<0.001) and adjusted (HR 1.11, 95%CI 1.07–1.14, p<0.001) risk of AF. Findings regarding a short QT were inconsistent across cohorts. CONCLUSIONS A prolonged QT interval is associated with an increased risk of incident AF. PMID:23872693
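The Framingham correction referenced above (QTFram) is the linear heart-rate correction QTc = QT + 0.154 × (1 − RR), with the QT and RR intervals both expressed in seconds. A small sketch (the example QT and heart rate are illustrative values, not study data):

```python
def qt_framingham(qt_s: float, rr_s: float) -> float:
    """Framingham linear QT correction: QTc = QT + 0.154 * (1 - RR).

    qt_s: measured QT interval in seconds.
    rr_s: RR interval in seconds (60 / heart rate in bpm).
    """
    return qt_s + 0.154 * (1.0 - rr_s)

# Example: QT = 0.40 s at heart rate 75 bpm (RR = 60/75 = 0.80 s)
qtc = qt_framingham(0.40, 0.80)
print(f"QTc = {qtc * 1000:.0f} ms")
```

At RR = 1 s (60 bpm) the correction vanishes, which is why each 10-ms increase in QTFram in the abstract can be read on the same scale as raw QT at that heart rate.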
Average variograms to guide soil sampling
NASA Astrophysics Data System (ADS)
Kerry, R.; Oliver, M. A.
2004-10-01
To manage land in a site-specific way for agriculture requires detailed maps of the variation in the soil properties of interest. To predict accurately for mapping, the interval at which the soil is sampled should relate to the scale of spatial variation. A variogram can be used to guide sampling in two ways. A sampling interval of less than half the range of spatial dependence can be used, or the variogram can be used with the kriging equations to determine an optimal sampling interval to achieve a given tolerable error. A variogram might not be available for the site, but if the variograms of several soil properties were available on a similar parent material and/or particular topographic positions, an average variogram could be calculated from these. Averages of the variogram ranges and standardized average variograms from four different parent materials in southern England were used to suggest suitable sampling intervals for future surveys in similar pedological settings based on half the variogram range. The standardized average variograms were also used to determine optimal sampling intervals using the kriging equations. Similar sampling intervals were suggested by each method and the maps of predictions based on data at different grid spacings were evaluated for the different parent materials. Variograms of loss on ignition (LOI) taken from the literature for other sites in southern England with similar parent materials had ranges close to the average for a given parent material, showing the possible wider application of such averages to guide sampling.
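The half-the-range rule can be sketched end to end: estimate an empirical variogram from transect data, read off an approximate range, and halve it to get a sampling interval. The synthetic transect, noise level, and the 95%-of-sill range criterion below are all illustrative assumptions, not the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 1-D transect of a soil property observed every 10 m (illustrative).
x = np.arange(0, 1000, 10.0)
z = np.sin(x / 60.0) + 0.3 * rng.standard_normal(x.size)

def semivariance(z: np.ndarray, lag_steps: int) -> float:
    """Empirical semivariance at a given lag: half the mean squared increment."""
    d = z[lag_steps:] - z[:-lag_steps]
    return 0.5 * float(np.mean(d ** 2))

lags = np.arange(1, 20)                       # lag in grid steps (10 m each)
gamma = np.array([semivariance(z, h) for h in lags])

# Crude range estimate: first lag where the variogram reaches 95% of an
# approximate sill (mean of the last few semivariances).
sill = gamma[-5:].mean()
range_m = lags[np.argmax(gamma >= 0.95 * sill)] * 10.0
print(f"approximate range: {range_m:.0f} m; sample at <= {range_m / 2:.0f} m")
```

A fitted variogram model (spherical, exponential) would give a more defensible range than this threshold heuristic; the point is only how the range converts into a sampling interval.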
Three-Component Commitment and Turnover: An Examination of Temporal Aspects
ERIC Educational Resources Information Center
Culpepper, Robert A.
2011-01-01
SEM (N = 182) was employed to examine implied temporal aspects of three-component commitment theory as they relate to turnover. Consistent with expectations, affective commitment predicted subsequent turnover in an immediate and relatively short interval of 4 months, but failed to do so in a much longer but outlying interval of 5-12 months. Side bet…
ERIC Educational Resources Information Center
Taatgen, Niels A.; van Rijn, Hedderik; Anderson, John
2007-01-01
A theory of prospective time perception is introduced and incorporated as a module in an integrated theory of cognition, thereby extending existing theories and allowing predictions about attention and learning. First, a time perception module is established by fitting existing datasets (interval estimation and bisection and impact of secondary…
Mode of detection: an independent prognostic factor for women with breast cancer.
Hofvind, Solveig; Holen, Åsne; Román, Marta; Sebuødegård, Sofie; Puig-Vives, Montse; Akslen, Lars
2016-06-01
To investigate breast cancer survival and risk of breast cancer death by detection mode (screen-detected, interval, and detected outside the screening programme), adjusting for prognostic and predictive tumour characteristics. Information about detection mode, prognostic (age, tumour size, histologic grade, lymph node status) and predictive factors (molecular subtypes based on immunohistochemical analyses of hormone receptor status (estrogen and progesterone) and Her2 status) were available for 8344 women in Norway aged 50-69 at diagnosis of breast cancer, 2005-2011. A total of 255 breast cancer deaths were registered by the end of 2011. The Kaplan-Meier method was used to estimate six-year breast cancer-specific survival and a Cox proportional hazards model to estimate the hazard ratio (HR) for breast cancer death by detection mode, adjusting for prognostic and predictive factors. Women with screen-detected cancer had favourable prognostic and predictive tumour characteristics compared with interval cancers and those detected outside the screening programme. The favourable characteristics were present for screen-detected cancers, also within the subtypes. The adjusted HR of dying from breast cancer was two times higher for women with symptomatic breast cancer (interval or outside the screening), using screen-detected tumours as the reference. Detection mode is an independent prognostic factor for women diagnosed with breast cancer. Information on detection mode might be relevant for patient management to avoid overtreatment. © The Author(s) 2015.
Brown, Kevin L.; Stanton, Mark E.
2008-01-01
Eyeblink classical conditioning (EBC) was observed across a broad developmental period with tasks utilizing two interstimulus intervals (ISIs). In ISI discrimination, two distinct conditioned stimuli (CSs; light and tone) are reinforced with a periocular shock unconditioned stimulus (US) at two different CS-US intervals. Temporal uncertainty is identical in design with the exception that the same CS is presented at both intervals. Developmental changes in conditioning have been reported in each task beyond ages when single-ISI learning is well developed. The present study sought to replicate and extend these previous findings by testing each task at four separate ages. Consistent with previous findings, younger rats (postnatal day [PD] 23 and 30) trained in ISI discrimination showed evidence of enhanced cross-modal influence of the short CS-US pairing upon long CS conditioning relative to older subjects. ISI discrimination training at PD43-47 yielded outcomes similar to those in adults (PD65-71). Cross-modal transfer effects in this task therefore appear to diminish between PD30 and PD43-47. Comparisons of ISI discrimination with temporal uncertainty indicated that cross-modal transfer in ISI discrimination at the youngest ages did not represent complete generalization across CSs. ISI discrimination undergoes a more protracted developmental emergence than single-cue EBC and may be a more sensitive indicator of developmental disorders involving cerebellar dysfunction. PMID:18726989
Pavlovian conditioning and cumulative reinforcement rate.
Harris, Justin A; Patterson, Angela E; Gharaei, Saba
2015-04-01
In 5 experiments using delay conditioning of magazine approach with rats, reinforcement rate was varied either by manipulating the mean interval between onset of the conditioned stimulus (CS) and unconditioned stimulus (US) or by manipulating the proportion of CS presentations that ended with the US (trial-based reinforcement rate). Both manipulations influenced the acquisition of responding. In each experiment, a specific comparison was made between 2 CSs that differed in their mean CS-US interval and in their trial-based reinforcement rate, such that the cumulative reinforcement rate (the cumulative duration of the CS between reinforcements) was the same for the 2 CSs. For example, a CS reinforced on 100% of trials with a mean CS-US interval of 60 s was compared with a CS reinforced on 33% of trials and a mean duration of 20 s. Across the 5 experiments, conditioning was virtually identical for the 2 CSs with matched cumulative reinforcement rate. This was true as long as the timing of the US was unpredictable and, thus, response rates were uniform across the length of the CS. We conclude that the effects of CS-US interval and of trial-based reinforcement rate are reducible entirely to their common effect on cumulative reinforcement rate. We discuss the implications of this for rate-based, trial-based, and real-time associative models of conditioning. (c) 2015 APA, all rights reserved.
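The matched comparison in the abstract is a direct consequence of the definition: cumulative reinforcement rate is the expected CS time elapsed per US, i.e. the mean CS-US interval divided by the trial-based reinforcement probability. A one-line check of the 60 s/100% versus 20 s/33% example:

```python
def cumulative_cs_time_per_us(mean_cs_us_interval_s: float,
                              p_reinforced: float) -> float:
    # Expected seconds of CS exposure accumulated per reinforcer delivered.
    return mean_cs_us_interval_s / p_reinforced

cs_a = cumulative_cs_time_per_us(60.0, 1.0)    # 100% reinforced, 60-s interval
cs_b = cumulative_cs_time_per_us(20.0, 1 / 3)  # 33% reinforced, 20-s interval
print(cs_a, cs_b)  # both ≈ 60 s of CS per US
```

Both conditions accumulate about 60 s of CS per US, which is why the abstract treats them as matched on cumulative reinforcement rate despite differing on both component variables.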
Probabilistic population projections with migration uncertainty
Azose, Jonathan J.; Ševčíková, Hana; Raftery, Adrian E.
2016-01-01
We produce probabilistic projections of population for all countries based on probabilistic projections of fertility, mortality, and migration. We compare our projections to those from the United Nations’ Probabilistic Population Projections, which uses similar methods for fertility and mortality but deterministic migration projections. We find that uncertainty in migration projection is a substantial contributor to uncertainty in population projections for many countries. Prediction intervals for the populations of Northern America and Europe are over 70% wider, whereas prediction intervals for the populations of Africa, Asia, and the world as a whole are nearly unchanged. Out-of-sample validation shows that the model is reasonably well calibrated. PMID:27217571
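Although the projections above come from a Bayesian hierarchical model, the way migration uncertainty widens a population prediction interval can be illustrated with a plain Monte Carlo sketch; every rate below is an invented input, and the single-equation growth model is a deliberate simplification of cohort-component projection.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy projection: population grows by deterministic natural increase plus a
# stochastic net-migration term drawn independently each year (illustrative).
n_sims, horizon = 10_000, 20        # simulated trajectories, projection years
pop0 = 10.0                         # starting population, millions
natural_rate = 0.005                # assumed net natural increase per year
mig_mean, mig_sd = 0.02, 0.05       # assumed net migration, millions per year

pop = np.full(n_sims, pop0)
for _ in range(horizon):
    pop = pop * (1 + natural_rate) + rng.normal(mig_mean, mig_sd, n_sims)

lo, hi = np.percentile(pop, [2.5, 97.5])
print(f"95% prediction interval after {horizon} years: [{lo:.2f}, {hi:.2f}] M")
```

Setting `mig_sd` to zero collapses the interval to a point here, which is the toy analogue of the deterministic-migration projections the paper compares against.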
NASA Technical Reports Server (NTRS)
Kaplan, Michael L.; Lux, Kevin M.; Cetola, Jeffrey D.; Huffman, Allan W.; Riordan, Allen J.; Slusser, Sarah W.; Lin, Yuh-Lang; Charney, Joseph J.; Waight, Kenneth T.
2004-01-01
Real-time prediction of environments predisposed to producing moderate-severe aviation turbulence is studied. We describe the numerical model and its postprocessing system designed for the prediction of environments predisposed to severe aviation turbulence, and present numerous examples of its utility. The numerical model is MASS version 5.13, which is integrated over three different grid matrices in real time on a university workstation in support of NASA Langley Research Center's B-757 turbulence research flight missions. The postprocessing system includes several turbulence-related products, including four turbulence forecasting indices, winds, streamlines, turbulence kinetic energy, and Richardson numbers. Additionally, there are convective products including precipitation, cloud height, cloud mass fluxes, lifted index, and K-index. Furthermore, soundings, sounding parameters, and Froude number plots are also provided. The horizontal cross-section plot products are provided from 16 000 to 46 000 ft in 2000-ft intervals. Products are available every 3 hours at the 60- and 30-km grid interval and every 1.5 hours at the 15-km grid interval. The model is initialized from the NWS ETA analyses and integrated two times a day.
Flint, L.E.; Flint, A.L.
2008-01-01
Stream temperature is an important component of salmonid habitat and is often above levels suitable for fish survival in the Lower Klamath River in northern California. The objective of this study was to provide boundary conditions for models that are assessing stream temperature on the main stem for the purpose of developing strategies to manage stream conditions using Total Maximum Daily Loads. For model input, hourly stream temperatures for 36 tributaries were estimated for 1 Jan. 2001 through 31 Oct. 2004. A basin-scale approach incorporating spatially distributed energy balance data was used to estimate the stream temperatures with measured air temperature and relative humidity data and simulated solar radiation, including topographic shading and corrections for cloudiness. Regression models were developed on the basis of available stream temperature data to predict temperatures for unmeasured periods of time and for unmeasured streams. The most significant factor in matching measured minimum and maximum stream temperatures was the seasonality of the estimate. Adding minimum and maximum air temperature to the regression model improved the estimate, and air temperature data over the region are available and easily distributed spatially. The addition of simulated solar radiation and vapor saturation deficit to the regression model significantly improved predictions of maximum stream temperature but was not required to predict minimum stream temperature. The average SE in estimated maximum daily stream temperature for the individual basins was 0.9 ± 0.6°C at the 95% confidence interval. Copyright © 2008 by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America. All rights reserved.
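The kind of regression described above can be sketched as follows. The predictor set mirrors the abstract (seasonal terms, air temperature, simulated solar radiation, vapor saturation deficit), but the data, coefficients, and functional form below are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

# Sketch: daily maximum stream temperature regressed on seasonal
# harmonics plus meteorological predictors. All data are synthetic.
rng = np.random.default_rng(1)
n = 500
doy = rng.integers(1, 366, n)                    # day of year
tair = 10 + 12 * np.sin(2 * np.pi * (doy - 100) / 365) + rng.normal(0, 2, n)
solar = 200 + 150 * np.sin(2 * np.pi * (doy - 80) / 365) + rng.normal(0, 20, n)
vpd = np.clip(0.5 + 0.05 * tair + rng.normal(0, 0.2, n), 0, None)

# Synthetic "true" maximum stream temperature with 0.9 degC noise
tstream = 4 + 0.6 * tair + 0.01 * solar + 1.5 * vpd + rng.normal(0, 0.9, n)

# Design matrix: intercept, seasonal harmonics, then the met predictors
X = np.column_stack([
    np.ones(n),
    np.sin(2 * np.pi * doy / 365), np.cos(2 * np.pi * doy / 365),
    tair, solar, vpd,
])
beta, *_ = np.linalg.lstsq(X, tstream, rcond=None)
resid = tstream - X @ beta
rmse = float(np.sqrt(np.mean(resid ** 2)))       # approaches the noise sd
```

With the full predictor set the residual error approaches the simulated noise level, echoing the paper's finding that solar radiation and vapor saturation deficit improve maximum-temperature predictions.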
Effects of novelty and methamphetamine on conditioned and sensory reinforcement
Lloyd, David R.; Kausch, Michael A.; Gancarz, Amy M.; Beyley, Linda J.; Richards, Jerry B.
2012-01-01
Background Light onset can be both a sensory reinforcer (SR) with intrinsic reinforcing properties, and a conditioned reinforcer (CR) which predicts a biologically important reinforcer. Stimulant drugs, such as methamphetamine (METH), may increase the reinforcing effectiveness of CRs by enhancing the predictive properties of the CR. In contrast, METH-induced increases in the reinforcing effectiveness of SRs are mediated by the immediate sensory consequences of the light. Methods The effects of novelty (on SRs) and METH (on both CRs and SRs) were tested. Experiment 1: Rats were pre-exposed to 5 s light and water pairings presented according to a variable-time (VT) 2 min schedule or unpaired water and light presented according to independent, concurrent VT 2 min schedules. Experiment 2: Rats were pre-exposed to 5 s light presented according to a VT 2 min schedule, or no stimuli. In both experiments, the pre-exposure phase was followed by a test phase in which 5 s light onset was made response-contingent on a variable-interval (VI) 2 min schedule and the effects of METH (0.5 mg/kg) were determined. Results Novel light onset was a more effective reinforcer than familiar light onset. METH increased the absolute rate of responding without increasing the relative frequency of responding for both CRs and SRs. Conclusion Novelty plays a role in determining the reinforcing effectiveness of SRs. The results are consistent with the interpretation that METH-induced increases in reinforcer effectiveness of CRs and SRs may be mediated by immediate sensory consequences, rather than prediction. PMID:22814112
QT-RR relationships and suitable QT correction formulas for halothane-anesthetized dogs.
Tabo, Mitsuyasu; Nakamura, Mikiko; Kimura, Kazuya; Ito, Shigeo
2006-10-01
Several QT correction (QTc) formulas have been used for assessing the QT liability of drugs. However, they are known to under- and over-correct the QT interval and tend to be specific to species and experimental conditions. The purpose of this study was to determine a suitable formula for halothane-anesthetized dogs, which are highly sensitive to drug-induced QT interval prolongation. Twenty dogs were anesthetized with 1.5% halothane, and the relationship between the QT and RR intervals was obtained by changing the heart rate under atrial pacing conditions. The QT interval was corrected for the RR interval by applying 4 published formulas (Bazett, Fridericia, Van de Water, and Matsunaga); Fridericia's formula (QTcF = QT/RR^0.33) showed the least slope and lowest R^2 value for the linear regression of QTc intervals against RR intervals, indicating that it dissociated the corrected interval from changes in heart rate most effectively. An optimized formula (QTcX = QT/RR^0.3879), defined by analysis of covariance, represents a correction algorithm superior to Fridericia's formula. For both Fridericia's and the optimized formula, QT-prolonging drugs (d,l-sotalol, astemizole) showed QTc interval prolongation, whereas a non-QT-prolonging drug (d,l-propranolol) failed to prolong the QTc interval. In addition, drug-induced changes in QTcF and QTcX intervals were highly correlated with those of the QT interval paced at a cycle length of 500 msec. These findings suggest that both Fridericia's formula and the optimized formula, the latter being slightly better, are suitable for correcting the QT interval in halothane-anesthetized dogs and help to evaluate the potential QT prolongation of drugs with high accuracy.
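The two correction formulas quoted in the abstract are simple power-law scalings and can be sketched directly; the exponents come from the abstract, while the example QT value and function names are mine.

```python
# QT interval correction formulas for halothane-anesthetized dogs.
# Exponents follow the abstract: Fridericia QTcF = QT/RR^0.33 and the
# study-optimized QTcX = QT/RR^0.3879 (QT in msec, RR in seconds).

def qtc_fridericia(qt_ms: float, rr_s: float) -> float:
    """Fridericia's correction: QTcF = QT / RR**0.33."""
    return qt_ms / rr_s ** 0.33

def qtc_optimized(qt_ms: float, rr_s: float) -> float:
    """Study-optimized correction: QTcX = QT / RR**0.3879."""
    return qt_ms / rr_s ** 0.3879

# At the paced cycle length used in the study (500 msec, RR = 0.5 s),
# both formulas inflate the raw QT because RR < 1 s.
qt = 220.0                          # hypothetical measured QT, msec
print(qtc_fridericia(qt, 0.5))      # approx. 276.5 msec
print(qtc_optimized(qt, 0.5))       # approx. 287.9 msec
```

At RR = 1 s both formulas leave QT unchanged, which is why correction formulas diverge most at fast or slow heart rates.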
Aagten-Murphy, David; Cappagli, Giulia; Burr, David
2014-03-01
Expert musicians are able to time their actions accurately and consistently during a musical performance. We investigated how musical expertise influences the ability to reproduce auditory intervals and how this generalises across different techniques and sensory modalities. We first compared various reproduction strategies and interval lengths, to examine the effects in general and to optimise experimental conditions for testing the effect of music, and found that the effects were robust and consistent across different paradigms. Focussing on a 'ready-set-go' paradigm, subjects reproduced time intervals drawn from distributions varying in total length (176, 352 or 704 ms) or in the number of discrete intervals within the total length (3, 5, 11 or 21 discrete intervals). Overall, Musicians performed more veridically than Non-Musicians, and all subjects reproduced auditory-defined intervals more accurately than visually-defined intervals. However, Non-Musicians, particularly with visual stimuli, consistently exhibited a substantial and systematic regression towards the mean interval. When subjects judged intervals from distributions of longer total length they tended to regress more towards the mean, while the ability to discriminate between discrete intervals within the distribution had little influence on subject error. These results are consistent with a Bayesian model that minimizes reproduction errors by incorporating a central-tendency prior weighted by the subject's own temporal precision relative to the current distribution of intervals. Finally, a strong correlation was observed between duration of formal musical training and total reproduction errors in both modalities (accounting for 30% of the variance). Taken together these results demonstrate that formal musical training improves temporal reproduction, and that this improvement transfers from audition to vision.
They further demonstrate the flexibility of sensorimotor mechanisms in adapting to different task conditions to minimise temporal estimation errors. © 2013.
NASA Astrophysics Data System (ADS)
Ajaz, M.; Ali, Y.; Ullah, S.; Ali, Q.; Tabassam, U.
2018-05-01
In this research paper, comprehensive results on the double differential yield of π± and K± mesons, protons, and antiprotons as a function of laboratory momentum are reported in several polar angle ranges: 0-420 mrad for pions and 0-360 mrad for kaons, protons, and antiprotons. The EPOS 1.99, EPOS-LHC, and QGSJETII-04 models are used to perform simulations. The predictions of these models at 90 GeV/c are plotted for comparison, showing that the QGSJETII-04 model gives an overall higher yield for π+ mesons in the polar angle interval 0-40 mrad, whereas for π− the yield is higher only up to 20 mrad. For π+ mesons beyond 40 mrad, EPOS-LHC predicts a higher yield than EPOS 1.99 and QGSJETII-04, while for π− mesons EPOS-LHC and EPOS 1.99 give similar behavior in these two intervals. For K± mesons, the QGSJETII-04 model gives higher predictions in all cases from 0-300 mrad, while EPOS 1.99 and EPOS-LHC show similar distributions. In the case of protons, all models give a similar distribution, but this is not true for antiprotons: all models are in good agreement only for p > 20 GeV/c, and EPOS 1.99 produces a lower yield than the other two models in the 60-360 mrad polar angle interval.
Grover, Anita; Benet, Leslie Z.
2013-01-01
Intermittent drug dosing intervals are usually initially guided by the terminal pharmacokinetic half-life and are dependent on drug formulation. For chronic multiple dosing and for extended-release dosage forms, the terminal half-life often does not predict the plasma drug accumulation or fluctuation observed. We define and advance applications for the operational multiple dosing half-lives for drug accumulation and fluctuation after multiple oral dosing at steady state. Using Monte Carlo simulation, our results predict a way to maximize the operational multiple dosing half-lives relative to the terminal half-life by using a first-order absorption rate constant close to the terminal elimination rate constant in the design of extended-release dosage forms. In this way, drugs that may be eliminated early in the development pipeline due to a relatively short half-life can be formulated to be dosed at intervals three times the terminal half-life, maximizing compliance while maintaining tight plasma concentration accumulation and fluctuation ranges. We also present situations in which the operational multiple dosing half-lives will be especially relevant in the determination of dosing intervals, including for drugs that follow a direct PKPD model and have a narrow therapeutic index, as the rate of concentration decrease after chronic multiple dosing (which is not the terminal half-life) can be determined via simulation. These principles are illustrated with case studies on valproic acid, diazepam, and anti-hypertensives. PMID:21499748
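The central claim above, that making the absorption rate constant approach the elimination rate constant flattens steady-state fluctuation, can be sketched with a one-compartment oral model and dose superposition. The Bateman-type concentration curve is standard pharmacokinetics, but the parameter values and dosing interval below are illustrative, not the paper's simulations.

```python
import numpy as np

# Sketch: steady-state fluctuation for a one-compartment oral model,
# built by superposing first-order absorption/elimination curves.
def conc(t, dose_times, ka, ke):
    """Concentration (arbitrary units) summed over repeated unit doses."""
    c = np.zeros_like(t)
    for t0 in dose_times:
        dt = np.clip(t - t0, 0, None)
        c += (ka / (ka - ke)) * (np.exp(-ke * dt) - np.exp(-ka * dt)) * (t >= t0)
    return c

ke = np.log(2) / 4.0                  # terminal half-life of 4 h
tau = 12.0                            # dosing interval = 3x terminal half-life
t = np.linspace(0, 20 * tau, 5000)
doses = np.arange(0, 20 * tau, tau)   # 20 repeated doses

fast = conc(t, doses, ka=10 * ke, ke=ke)    # rapid absorption (IR-like)
slow = conc(t, doses, ka=1.1 * ke, ke=ke)   # ka near ke (ER-like design)

# Peak-to-trough fluctuation over the final (steady-state) interval
last = t >= 19 * tau
def fluct(c):
    return (c[last].max() - c[last].min()) / c[last].mean()

print(fluct(fast) > fluct(slow))      # slower absorption flattens the swing
```

Even at a 12-hour interval (three terminal half-lives), the slow-absorption profile keeps a tight concentration range, which is the formulation strategy the abstract describes.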
Smeers, Inge; Decorte, Ronny; Van de Voorde, Wim; Bekaert, Bram
2018-05-01
DNA methylation is a promising biomarker for forensic age prediction. A challenge that has emerged in recent studies is the fact that prediction errors become larger with increasing age due to interindividual differences in epigenetic ageing rates. This phenomenon of non-constant variance or heteroscedasticity violates an assumption of the often used method of ordinary least squares (OLS) regression. The aim of this study was to evaluate alternative statistical methods that do take heteroscedasticity into account in order to provide more accurate, age-dependent prediction intervals. A weighted least squares (WLS) regression is proposed as well as a quantile regression model. Their performances were compared against an OLS regression model based on the same dataset. Both models provided age-dependent prediction intervals which account for the increasing variance with age, but WLS regression performed better in terms of success rate in the current dataset. However, quantile regression might be a preferred method when dealing with a variance that is not only non-constant, but also not normally distributed. Ultimately the choice of which model to use should depend on the observed characteristics of the data. Copyright © 2018 Elsevier B.V. All rights reserved.
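A minimal sketch of the weighted least squares alternative described above, assuming a single methylation marker and a noise standard deviation that grows linearly with age. Both assumptions, and all data, are synthetic; the study's markers, weighting scheme, and dataset are not reproduced here.

```python
import numpy as np

# Sketch of WLS for heteroscedastic age prediction. Residual variance
# grows with age, so observations are weighted by inverse variance.
rng = np.random.default_rng(0)
age = rng.uniform(18, 80, 300)
noise_sd = 0.01 + 0.0008 * age                  # variance increases with age
meth = 0.2 + 0.008 * age + rng.normal(0, noise_sd)   # synthetic marker

X = np.column_stack([np.ones_like(meth), meth]) # regress age on the marker
w = 1.0 / noise_sd ** 2                         # inverse-variance weights

# WLS normal equations: beta = (X'WX)^-1 X'W y
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ age)
pred = X @ beta

# Age-dependent 95% prediction-interval half-width from each
# observation's residual scale (a simplification of the paper's method)
half_width = 1.96 * np.sqrt(1.0 / w)
```

The key contrast with OLS is that `half_width` widens with age, so younger donors get the tight intervals the data actually support instead of one constant-width band.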
Faydasicok, Ozlem; Arik, Sabri
2013-08-01
The main problem with the analysis of robust stability of neural networks is to find the upper bound norm for the intervalized interconnection matrices of neural networks. In the previous literature, the major three upper bound norms for the intervalized interconnection matrices have been reported and they have been successfully applied to derive new sufficient conditions for robust stability of delayed neural networks. One of the main contributions of this paper will be the derivation of a new upper bound for the norm of the intervalized interconnection matrices of neural networks. Then, by exploiting this new upper bound norm of interval matrices and using the stability theory of Lyapunov functionals and homeomorphic mapping theory, we will obtain new sufficient conditions for the existence, uniqueness and global asymptotic stability of the equilibrium point for the class of neural networks with discrete time delays under parameter uncertainties and with respect to continuous and slope-bounded activation functions. The results obtained in this paper will be shown to be new and they can be considered alternative results to previously published corresponding results. We also give some illustrative and comparative numerical examples to demonstrate the effectiveness and applicability of the proposed robust stability condition. Copyright © 2013 Elsevier Ltd. All rights reserved.
Simulation, prediction, and genetic analyses of daily methane emissions in dairy cattle.
Yin, T; Pinent, T; Brügemann, K; Simianer, H; König, S
2015-08-01
This study presents an approach combining phenotypes from novel traits, deterministic equations from cattle nutrition, and stochastic simulation techniques from animal breeding to generate test-day methane emissions (MEm) of dairy cows. Data included test-day production traits (milk yield, fat percentage, protein percentage, milk urea nitrogen), conformation traits (wither height, hip width, body condition score), female fertility traits (days open, calving interval, stillbirth), and health traits (clinical mastitis) from 961 first-lactation Brown Swiss cows kept on 41 low-input farms in Switzerland. Test-day MEm were predicted based on the traits from the current data set and 2 deterministic prediction equations, resulting in the traits labeled MEm1 and MEm2. Stochastic simulations were used to assign individual concentrate intake depending on farm-type specifications (a requirement when calculating MEm2). Genetic parameters for MEm1 and MEm2 were estimated using random regression models. Predicted MEm had moderate heritabilities over lactation, ranging from 0.15 to 0.37, with the highest heritabilities around days in milk 100. Genetic correlations between MEm1 and MEm2 ranged between 0.91 and 0.94. Antagonistic genetic correlations in the range from 0.70 to 0.92 were found for the associations between MEm2 and milk yield. Genetic correlations between MEm and days open and between MEm and calving interval increased from 0.10 at the beginning to 0.90 at the end of lactation. Genetic relationships between MEm2 and stillbirth were negative (0 to -0.24) from the beginning to the peak phase of lactation. Positive genetic relationships in the range from 0.02 to 0.49 were found between MEm2 and clinical mastitis. Interpretation of genetic (co)variance components should also consider the limitations of using data generated by prediction equations. Prediction functions describe only that part of MEm which is dependent on the factors and effects included in the function. 
With high probability, there are more important effects contributing to variations of MEm that are not explained or are independent from these functions. Furthermore, autocorrelations exist between indicator traits and predicted MEm. Nevertheless, this integrative approach, combining information from dairy cattle nutrition with dairy cattle genetics, generated novel traits which are difficult to record on a large scale. The simulated data basis for MEm was used to determine the size of a cow calibration group for genomic selection. A calibration group including 2,581 cows with MEm phenotypes was competitive with conventional breeding strategies. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Predicting Long-Term Outcomes in Pleural Infections. RAPID Score for Risk Stratification.
White, Heath D; Henry, Christopher; Stock, Eileen M; Arroliga, Alejandro C; Ghamande, Shekhar
2015-09-01
Pleural infections are associated with significant morbidity and mortality. The recently developed RAPID (renal, age, purulence, infection source, and dietary factors) score consists of five clinical factors that can identify patients at risk for increased mortality. The objective of this study was to further validate the RAPID score in a diverse cohort, identify factors associated with mortality, and provide long-term outcomes. We evaluated a single-center retrospective cohort of 187 patients with culture-positive pleural infections. Patients were classified by RAPID scores into low-risk (0-2), medium-risk (3-4), and high-risk (5-7) groups. The Social Security Death Index was used to determine date of death. All-cause mortality was assessed at 3 months, 1 year, 3 years, and 5 years. Clinical factors and comorbid conditions were evaluated for association. Three-month mortality for low-, medium-, and high-risk groups was 1.5, 17.8, and 47.8%, respectively. Increased odds were observed among medium-risk (odds ratio, 14.3; 95% confidence interval, 1.8-112.6; P = 0.01) and high-risk groups (odds ratio, 53.3; 95% confidence interval, 6.8-416.8; P < 0.01). This trend continued at 1, 3, and 5 years. Factors associated with high-risk scores include gram-negative rod infections, heart disease, diabetes, cancer, lung disease, and increased length of stay. When applied to a diverse patient cohort, the RAPID score predicts outcomes in patients up to 5 years and may aid in long-term risk stratification on presentation.
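The risk stratification described above maps a total RAPID score to the three bands used in the study. A minimal sketch of that banding follows; the band cut-points and mortality figures come from the abstract, while the function name is mine, and computing the score itself requires the published component rules (renal, age, purulence, infection source, dietary factors), which are not reproduced here.

```python
# Risk bands for the RAPID score as used in the study:
# low (0-2), medium (3-4), high (5-7).

def rapid_risk_group(score: int) -> str:
    """Map a total RAPID score (0-7) to the study's risk band."""
    if not 0 <= score <= 7:
        raise ValueError("RAPID score must be between 0 and 7")
    if score <= 2:
        return "low"
    if score <= 4:
        return "medium"
    return "high"

# Observed 3-month all-cause mortality by band, from the cohort above
mortality_3mo = {"low": 0.015, "medium": 0.178, "high": 0.478}
print(rapid_risk_group(5), mortality_3mo[rapid_risk_group(5)])
```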
Klungsøyr, Kari; Øyen, Nina; Tell, Grethe S.; Næss, Øyvind; Skjærven, Rolv
2016-01-01
Preconception predictors of gestational hypertension and preeclampsia may identify opportunities for early detection and improve our understanding of the pathogenesis and life course epidemiology of these conditions. Female participants in community-based Cohort Norway health surveys, 1994 to 2003, were prospectively followed through 2012 via record linkages to Medical Birth Registry of Norway. Analyses included 13 217 singleton pregnancies (average of 1.59 births to 8321 women) without preexisting hypertension. Outcomes were gestational hypertension without proteinuria (n=237) and preeclampsia (n=429). Mean age (SD) at baseline was 27.9 years (4.5), and median follow-up was 4.8 years (interquartile range 2.6–7.8). Gestational hypertension and preeclampsia shared several baseline risk factors: family history of diabetes mellitus, pregravid diabetes mellitus, a high total cholesterol/high-density lipoprotein cholesterol ratio (>5), overweight and obesity, and elevated blood pressure status. For preeclampsia, a family history of myocardial infarction before 60 years of age and elevated triglyceride levels (≥1.7 mmol/L) also predicted risk while physical activity was protective. Preterm preeclampsia was predicted by past-year binge drinking (≥5 drinks on one occasion) with an adjusted odds ratio of 3.7 (95% confidence interval 1.3–10.8) and by past-year physical activity of ≥3 hours per week with an adjusted odds ratio of 0.5 (95% confidence interval 0.3–0.8). The results suggest similarities and important differences between gestational hypertension, preeclampsia, and preterm preeclampsia. Modifiable risk factors could be targeted for improving pregnancy outcomes and the short- and long-term sequelae for mothers and offspring. PMID:27113053
Forster, Sarah E; Zirnheld, Patrick; Shekhar, Anantha; Steinhauer, Stuart R; O'Donnell, Brian F; Hetrick, William P
2017-09-01
Signals carried by the mesencephalic dopamine system and conveyed to anterior cingulate cortex are critically implicated in probabilistic reward learning and performance monitoring. A common evaluative mechanism purportedly subserves both functions, giving rise to homologous medial frontal negativities in feedback- and response-locked event-related brain potentials (the feedback-related negativity (FRN) and the error-related negativity (ERN), respectively), reflecting dopamine-dependent prediction error signals to unexpectedly negative events. Consistent with this model, the dopamine receptor antagonist, haloperidol, attenuates the ERN, but effects on FRN have not yet been evaluated. ERN and FRN were recorded during a temporal interval learning task (TILT) following randomized, double-blind administration of haloperidol (3 mg; n = 18), diphenhydramine (an active control for haloperidol; 25 mg; n = 20), or placebo (n = 21) to healthy controls. Centroparietal positivities, the Pe and feedback-locked P300, were also measured and correlations between ERP measures and behavioral indices of learning, overall accuracy, and post-error compensatory behavior were evaluated. We hypothesized that haloperidol would reduce ERN and FRN, but that ERN would uniquely track automatic, error-related performance adjustments, while FRN would be associated with learning and overall accuracy. As predicted, ERN was reduced by haloperidol and in those exhibiting less adaptive post-error performance; however, these effects were limited to ERNs following fast timing errors. In contrast, the FRN was not affected by drug condition, although increased FRN amplitude was associated with improved accuracy. Significant drug effects on centroparietal positivities were also absent. Our results support a functional and neurobiological dissociation between the ERN and FRN.
DeCarlo, Correne A; MacDonald, Stuart W S; Vergote, David; Jhamandas, Jack; Westaway, David; Dixon, Roger A
2016-11-01
Mild cognitive impairment (MCI) is a high-risk condition for progression to Alzheimer's disease (AD). Vascular health is a key mechanism underlying age-related cognitive decline and neurodegeneration, and AD-related genetic risk factors may be associated with preclinical cognitive status changes. We examine independent and cross-domain interactive effects of vascular and genetic markers for predicting MCI status and stability. We used cross-sectional and 2-wave longitudinal data from the Victoria Longitudinal Study, including indicators of vascular health (e.g., reported vascular diseases, measured lung capacity and pulse rate) and genetic risk factors, namely apolipoprotein E (APOE; rs429358 and rs7412; the presence vs absence of ε4) and catechol-O-methyltransferase (COMT; rs4680; met/met vs val/val). We examined associations with objectively classified (a) cognitive status at baseline (not-impaired cognitive (NIC) controls vs MCI) and (b) stability or transition of cognitive status across a 4-year interval (stable NIC-NIC vs chronic MCI-MCI or transitional NIC-MCI). In logistic regression analyses, indicators of vascular health, both independently and interactively with APOE ε4, were associated with risk of MCI at baseline and/or with MCI conversion or MCI stability over the retest interval. Several vascular health markers of aging predict MCI risk. Interactively, APOE ε4 may intensify the vascular health risk for MCI. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Concerning the Motion of FTEs and Attendant Signatures
NASA Technical Reports Server (NTRS)
Sibeck, David G.
2010-01-01
We employ the Cooling et al. [2001] model to predict the location, orientation, and motion of flux transfer events (FTEs) generated along finite-length component and antiparallel reconnection lines for typical solar wind plasma conditions and various interplanetary magnetic field (IMF) orientations in the plane perpendicular to the Sun-Earth line at the solstices and equinoxes. For duskward and northward or southward IMF orientations, events formed by component reconnection originate along reconnection curves passing through the subsolar point that tilt from southern dawn to northern dusk. They maintain this orientation as they move either northward into the northern dawn quadrant or southward into the southern dusk quadrant. By contrast, events formed by antiparallel reconnection originate along reconnection curves running from northern dawn to southern dusk in the southern dawn and northern dusk quadrants and maintain these orientations as they move antisunward into both these quadrants. Although both the component and antiparallel reconnection models can explain previously reported event orientations on the southern dusk magnetopause during intervals of northward and dawnward IMF orientation, only the component model explains event occurrence near the subsolar magnetopause during intervals when the IMF does not point due southward.
NASA Technical Reports Server (NTRS)
Delgado, Irebert R.
2015-01-01
An experimental and analytical fatigue life study was performed on the Grainex Mar-M 247 disk used in NASA's Turbine Seal Test Facility. To preclude fatigue cracks from growing to critical size in the NASA disk bolt holes due to cyclic loading at severe test conditions, a retirement-for-cause methodology was adopted to detect and monitor cracks within the bolt holes using eddy-current inspection. For the NASA disk material that was tested, the fatigue strain-life to crack initiation at a total strain of 0.5 percent, a minimum-to-maximum strain ratio of 0, and a bolt hole temperature of 649 C was calculated to be 665 cycles using -99.95 percent prediction intervals. The fatigue crack propagation life was calculated to be 367 cycles after implementing a safety factor of 2 on life. Thus, the NASA disk bolt hole total life, or retirement life, was determined to be 1032 cycles at a crack depth of 0.501 mm. An initial NASA disk bolt hole inspection at 665 cycles is suggested, with 50-cycle inspection intervals thereafter to monitor fatigue crack growth.
Genetics Home Reference: ankyrin-B syndrome
... beats, which is known as a prolonged QT interval (long QT). Some affected individuals have impaired progression ( ... sudden death. When associated with a prolonged QT interval, the condition is sometimes classified as long QT ...
The influence of CS-US interval on several different indices of learning in appetitive conditioning
Delamater, Andrew R.; Holland, Peter C.
2010-01-01
Four experiments examined the effects of varying the CS-US interval (and US density) on learning in an appetitive magazine approach task with rats. Learning was assessed with conditioned response (CR) measures, as well as measures of sensory-specific stimulus-outcome associations (Pavlovian-instrumental transfer, potentiated feeding, and US devaluation). The results from these studies indicate that there exists an inverse relation between CS-US interval and magazine approach CRs, but that sensory-specific stimulus-outcome associations are established over a wide range of relatively long, but not short, CS-US intervals. These data suggest that simple CR measures provide different information about what is learned than do measures of the specific stimulus-outcome association, and that time is a more critical variable for the former than for the latter component of learning. PMID:18426304
Nonuniform sampling and non-Fourier signal processing methods in multidimensional NMR.
Mobli, Mehdi; Hoch, Jeffrey C
2014-11-01
Beginning with the introduction of Fourier Transform NMR by Ernst and Anderson in 1966, time domain measurement of the impulse response (the free induction decay, FID) consisted of sampling the signal at a series of discrete intervals. For compatibility with the discrete Fourier transform (DFT), the intervals are kept uniform, and the Nyquist theorem dictates the largest value of the interval sufficient to avoid aliasing. With the proposal by Jeener of parametric sampling along an indirect time dimension, extension to multidimensional experiments employed the same sampling techniques used in one dimension, similarly subject to the Nyquist condition and suitable for processing via the discrete Fourier transform. The challenges of obtaining high-resolution spectral estimates from short data records using the DFT were already well understood, however. Despite techniques such as linear prediction extrapolation, the achievable resolution in the indirect dimensions is limited by practical constraints on measuring time. The advent of non-Fourier methods of spectrum analysis capable of processing nonuniformly sampled data has led to an explosion in the development of novel sampling strategies that avoid the limits on resolution and measurement time imposed by uniform sampling. The first part of this review discusses the many approaches to data sampling in multidimensional NMR, the second part highlights commonly used methods for signal processing of such data, and the review concludes with a discussion of other approaches to speeding up data acquisition in NMR. Copyright © 2014 Elsevier B.V. All rights reserved.
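The Nyquist condition invoked above can be illustrated in a few lines: a tone above half the sampling rate does not disappear from the DFT but reappears at an aliased frequency. The specific frequencies below are illustrative, not taken from any NMR experiment.

```python
import numpy as np

# Minimal aliasing demo: a 7 Hz tone sampled at 10 Hz (below the
# required 14 Hz) folds to |10 - 7| = 3 Hz in the DFT spectrum.
f_sig, f_samp, n = 7.0, 10.0, 100
t = np.arange(n) / f_samp
x = np.cos(2 * np.pi * f_sig * t)

spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(n, d=1 / f_samp)
peak = freqs[np.argmax(spec)]
print(peak)   # approx. 3.0 Hz, not 7.0: the tone has aliased
```

This is exactly the artifact that uniform sampling must avoid, and that the nonuniform-sampling schemes surveyed in the review trade against measurement time and resolution.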
Subjective and Real Time: Coding Under Different Drug States
Sanchez-Castillo, Hugo; Taylor, Kathleen M.; Ward, Ryan D.; Paz-Trejo, Diana B.; Arroyo-Araujo, Maria; Castillo, Oscar Galicia; Balsam, Peter D.
2016-01-01
Organisms are constantly extracting information from the temporal structure of the environment, which allows them to select appropriate actions and predict impending changes. Several lines of research have suggested that interval timing is modulated by the dopaminergic system. It has been proposed that higher levels of dopamine cause an internal clock to speed up, whereas less dopamine causes a deceleration of the clock. In most experiments the subjects are first trained to perform a timing task while drug free. Consequently, most of what is known about dopaminergic modulation of timing concerns well-established timing performance. In the current study the impact of altered dopamine on the acquisition of temporal control was the focal question. Thirty male Sprague-Dawley rats were distributed randomly into three different groups (haloperidol, d-amphetamine or vehicle). Each animal received an injection 15 min prior to the start of every session from the beginning of interval training. The subjects were trained on a Fixed Interval (FI) 16 s schedule followed by training on a peak procedure in which 64 s non-reinforced peak trials were intermixed with FI trials. In a final test session all subjects were given vehicle injections and 10 consecutive non-reinforced peak trials to see if training under drug conditions altered the encoding of time. The current study suggests that administration of drugs that modulate dopamine does not alter the encoding of temporal durations but does acutely affect the initiation of responding. PMID:27087743
NASA Astrophysics Data System (ADS)
Jha, Mayank Shekhar; Dauphin-Tanguy, G.; Ould-Bouamama, B.
2016-06-01
The paper's main objective is to address health monitoring of system parameters in the Bond Graph (BG) modeling framework by exploiting its structural and causal properties. The system, operating in a feedback control loop, is considered globally uncertain, with parametric uncertainty modeled in interval form. A system parameter is undergoing degradation (the prognostic candidate) and its degradation model is assumed to be known a priori. Detection of degradation commencement is done in a passive manner, using interval-valued robust adaptive thresholds over the nominal part of the uncertain BG-derived interval-valued analytical redundancy relations (I-ARRs); the latter form an efficient diagnostic module. The prognostics problem is cast as a joint state-parameter estimation problem, a hybrid prognostic approach, wherein the fault model is constructed by considering the statistical degradation model of the system parameter (the prognostic candidate). The observation equation is constructed from the nominal part of the I-ARR. Using particle filter (PF) algorithms, the estimation of the state of health (the state of the prognostic candidate) and the associated hidden time-varying degradation progression parameters is achieved in probabilistic terms. A simplified variance adaptation scheme is proposed. Associated uncertainties arising from noisy measurements, the parametric degradation process, environmental conditions, etc. are effectively managed by the PF. This allows effective predictions of the remaining useful life of the prognostic candidate with suitable confidence bounds. The effectiveness of the methodology is demonstrated through simulations and experiments on a mechatronic system.
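The joint state-parameter estimation described above can be sketched with a bootstrap (SIR) particle filter on a toy degradation process: a linearly degrading state with an unknown rate, observed through noise. The degradation model, noise levels, and all numbers below are illustrative assumptions, not the paper's BG-derived observation model.

```python
import numpy as np

# Minimal bootstrap particle filter for joint state (x) and
# parameter (theta = degradation rate) estimation. Synthetic data.
rng = np.random.default_rng(2)
true_rate, T = 0.05, 60
truth = np.cumsum(np.full(T, true_rate)) + rng.normal(0, 0.02, T)
obs = truth + rng.normal(0, 0.05, T)            # noisy measurements

n = 2000
x = np.zeros(n)                                 # state particles
theta = rng.uniform(0.0, 0.2, n)                # unknown-rate particles

for y in obs:
    # Propagate: x_k = x_{k-1} + theta + process noise; jitter theta
    # so the parameter can keep adapting (cf. variance adaptation).
    x = x + theta + rng.normal(0, 0.02, n)
    theta = np.clip(theta + rng.normal(0, 0.002, n), 0, None)

    # Weight by the Gaussian observation likelihood (log form for
    # numerical safety), then resample (SIR).
    logw = -0.5 * ((y - x) / 0.05) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(n, size=n, p=w)
    x, theta = x[idx], theta[idx]

print(theta.mean())   # concentrates near the true rate 0.05
```

With the rate posterior in hand, remaining useful life follows by propagating the particles forward until a failure threshold is crossed, which is how the confidence bounds in the paper arise.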
Endothelial cell density to predict endothelial graft failure after penetrating keratoplasty.
Lass, Jonathan H; Sugar, Alan; Benetz, Beth Ann; Beck, Roy W; Dontchev, Mariya; Gal, Robin L; Kollman, Craig; Gross, Robert; Heck, Ellen; Holland, Edward J; Mannis, Mark J; Raber, Irving; Stark, Walter; Stulting, R Doyle
2010-01-01
To determine whether preoperative and/or postoperative central endothelial cell density (ECD) and its rate of decline postoperatively are predictive of graft failure caused by endothelial decompensation following penetrating keratoplasty to treat a moderate-risk condition, principally, Fuchs dystrophy or pseudophakic corneal edema. In a subset of Cornea Donor Study participants, a central reading center determined preoperative and postoperative ECD from available specular images for 17 grafts that failed because of endothelial decompensation and 483 grafts that did not fail. Preoperative ECD was not predictive of graft failure caused by endothelial decompensation (P = .91). However, the 6-month ECD was predictive of subsequent failure (P < .001). Among those that had not failed within the first 6 months, the 5-year cumulative incidence (±95% confidence interval) of failure was 13% (±12%) for the 33 participants with a 6-month ECD of less than 1700 cells/mm(2) vs 2% (±3%) for the 137 participants with a 6-month ECD of 2500 cells/mm(2) or higher. After 5 years' follow-up, 40 of 277 participants (14%) with a clear graft had an ECD below 500 cells/mm(2). Preoperative ECD is unrelated to graft failure from endothelial decompensation, whereas there is a strong correlation of ECD at 6 months with graft failure from endothelial decompensation. A graft can remain clear after 5 years even when the ECD is below 500 cells/mm(2).
The prognostic value of sleep patterns in disorders of consciousness in the sub-acute phase.
Arnaldi, Dario; Terzaghi, Michele; Cremascoli, Riccardo; De Carli, Fabrizio; Maggioni, Giorgio; Pistarini, Caterina; Nobili, Flavio; Moglia, Arrigo; Manni, Raffaele
2016-02-01
This study aimed to evaluate, through polysomnographic analysis, the prognostic value of sleep patterns, compared to other prognostic factors, in patients with disorders of consciousness (DOCs) in the sub-acute phase. Twenty-seven patients underwent 24-h polysomnography and clinical evaluation 3.5 ± 2 months after brain injury. Their clinical outcome was assessed 18.5 ± 9.9 months later. Polysomnographic recordings were evaluated using visual and quantitative indexes. A general linear model was applied to identify features able to predict clinical outcome. Clinical status at follow-up was analysed as a function of the baseline clinical status, the interval between brain injury and follow-up evaluation, patient age and gender, the aetiology of the injury, the lesion site, and visual and quantitative sleep indexes. A better clinical outcome was predicted by a visual index indicating the presence of sleep integrity (p=0.0006), a better baseline clinical status (p=0.014), and younger age (p=0.031). Addition of the quantitative sleep index strengthened the prediction. More structured sleep emerged as a valuable predictor of a positive clinical outcome in sub-acute DOC patients, even stronger than established predictors (e.g. age and baseline clinical condition). Both visual and quantitative sleep evaluation could be helpful in predicting clinical outcome in sub-acute DOCs. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Predicting Hydrologic Function With Aquatic Gene Fragments
NASA Astrophysics Data System (ADS)
Good, S. P.; URycki, D. R.; Crump, B. C.
2018-03-01
Recent advances in microbiology techniques, such as genetic sequencing, allow for rapid and cost-effective collection of large quantities of genetic information carried within water samples. Here we posit that the unique composition of aquatic DNA material within a water sample contains relevant information about hydrologic function at multiple temporal scales. In this study, machine learning was used to develop discharge prediction models trained on the relative abundance of bacterial taxa classified into operational taxonomic units (OTUs) based on 16S rRNA gene sequences from six large arctic rivers. We term this approach "genohydrology," and show that OTU relative abundances can be used to predict river discharge at monthly and longer timescales. Based on a single DNA sample from each river, the average Nash-Sutcliffe efficiency (NSE) for predicted mean monthly discharge values throughout the year was 0.84, while the NSE for predicted discharge values across different return intervals was 0.67. These are considerable improvements over predictions based only on the area-scaled mean specific discharge of five similar rivers, which had average NSE values of 0.64 and -0.32 for seasonal and recurrence interval discharge values, respectively. The genohydrology approach demonstrates that genetic diversity within the aquatic microbiome is a large and underutilized data resource with benefits for prediction of hydrologic function.
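The Nash-Sutcliffe efficiency used above to score the discharge predictions has a simple closed form; a minimal stdlib sketch (function and variable names are illustrative, not from the study):

```python
def nash_sutcliffe(observed, predicted):
    """Nash-Sutcliffe efficiency (NSE): 1 is a perfect match, 0 means the
    model predicts no better than the observed mean, and negative values
    (like the -0.32 reported above) mean it does worse than the mean."""
    obs = list(observed)
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

# toy monthly-discharge example
nse = nash_sutcliffe([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])  # ~0.98
```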
Biasi, G.P.; Weldon, R.J.; Fumal, T.E.; Seitz, G.G.
2002-01-01
We introduce a quantitative approach to paleoearthquake dating and apply it to paleoseismic data from the Wrightwood and Pallett Creek sites on the southern San Andreas fault. We illustrate how stratigraphic ordering, sedimentological, and historical data can be used quantitatively in the process of estimating earthquake ages. Calibrated radiocarbon age distributions are used directly from layer dating through recurrence intervals and recurrence probability estimation. The method does not eliminate subjective judgements in event dating, but it does provide a means of systematically and objectively approaching the dating process. Date distributions for the most recent 14 events at Wrightwood are based on sample and contextual evidence in Fumal et al. (2002) and site context and slip history in Weldon et al. (2002). Pallett Creek event and dating descriptions are from published sources. For the five most recent events at Wrightwood, our results are consistent with previously published estimates, with generally comparable or narrower uncertainties. For Pallett Creek, our earthquake date estimates generally overlap with previous results but typically have broader uncertainties. Some event date estimates are very sensitive to details of data interpretation. The historical earthquake in 1857 ruptured the ground at both sites but is not constrained by radiocarbon data. Radiocarbon ages, peat accumulation rates, and historical constraints at Pallett Creek for event X yield a date estimate in the earliest 1800s and preclude a date in the late 1600s. This event is almost certainly the historical 1812 earthquake, as previously concluded by Sieh et al. (1989). This earthquake also produced ground deformation at Wrightwood. All events at Pallett Creek, except for event T, about A.D. 1360, and possibly event I, about A.D. 960, have corresponding events at Wrightwood with some overlap in age ranges. 
Event T falls during a period of low sedimentation at Wrightwood when conditions were not favorable for recording earthquake evidence. Previously proposed correlations of Pallett Creek X with Wrightwood W3 in the 1690s and Pallett Creek event V with W5 around 1480 (Fumal et al., 1993) appear unlikely after our dating reevaluation. Apparent internal inconsistencies among event, layer, and dating relationships around events R and V identify them as candidates for further investigation at the site. Conditional probabilities of earthquake recurrence were estimated using Poisson, lognormal, and empirical models. The presence of 12 or 13 events at Wrightwood during the same interval that 10 events are reported at Pallett Creek is reflected in mean recurrence intervals of 105 and 135 years, respectively. Average Poisson model 30-year conditional probabilities are about 20% at Pallett Creek and 25% at Wrightwood. The lognormal model conditional probabilities are somewhat higher, about 25% for Pallett Creek and 34% for Wrightwood. Lognormal variance σ_ln estimates of 0.76 and 0.70, respectively, imply only weak time predictability. Conditional probabilities of 29% and 46%, respectively, were estimated for an empirical distribution derived from the data alone. Conditional probability uncertainties are dominated by the brevity of the event series; dating uncertainty contributes only secondarily. Wrightwood and Pallett Creek event chronologies both suggest variations in recurrence interval with time, hinting that some form of recurrence rate modulation may be at work, but formal testing shows that neither series is more ordered than might be produced by a Poisson process.
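The Poisson-model conditional probabilities quoted above follow directly from the memorylessness of the Poisson process: the probability of at least one event in the next window is independent of the time since the last event. A small sketch reproducing the 30-year figures from the reported mean recurrence intervals (function name is mine):

```python
import math

def poisson_conditional_probability(mean_interval, window):
    """For a Poisson process, P(at least one event in the next `window`
    years) = 1 - exp(-window / mean_interval), regardless of elapsed
    time since the last event."""
    return 1.0 - math.exp(-window / mean_interval)

p_wrightwood = poisson_conditional_probability(105, 30)     # ~0.25
p_pallett_creek = poisson_conditional_probability(135, 30)  # ~0.20
```

Both values match the "about 25% at Wrightwood" and "about 20% at Pallett Creek" figures in the abstract; the lognormal and empirical models require the full event-date distributions and are not reproduced here.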
ERIC Educational Resources Information Center
Capaldi, E. J.; Martins, Ana; Miller, Ronald M.
2007-01-01
Rats in a Pavlovian situation were trained under three different reward schedules, at either a 30 s or a 90 s intertrial interval (ITI): Consistent reward (C), 50% irregular reward (I), and single alternation of reward and nonrewarded trials (SA). Activity was recorded to the conditioned stimulus (CS) and in all 10 s bins in each ITI except the…
ERIC Educational Resources Information Center
Woodruff-Pak, Diana S.; Seta, Susan E.; Roker, LaToya A.; Lehr, Melissa A.
2007-01-01
The aim of this study was to examine parameters affecting age differences in eyeblink classical conditioning in a large sample of young and middle-aged rabbits. A total of 122 rabbits of mean ages of 4 or 26 mo were tested at inter-stimulus intervals (ISIs) of 600 or 750 msec in the delay or trace paradigms. Paradigm affected both age groups…
Ghanipoor Machiani, Sahar; Abbas, Montasir
2016-11-01
Accurate modeling of driver decisions in dilemma zones (DZ), where drivers are not sure whether to stop or go at the onset of yellow, can be used to increase safety at signalized intersections. This study utilized data obtained from two different driving simulator studies (VT-SCORES and NADS datasets) to investigate the possibility of developing accurate driver-decision prediction/classification models in DZ. Canonical discriminant analysis was used to construct the prediction models, and two timeframes were considered. The first timeframe used data collected during green immediately before the onset of yellow, and the second timeframe used data collected during the first three seconds after the onset of yellow. Signal protection algorithms could use the results of the prediction model during the first timeframe to decide the best time for ending the green signal, and could use the results of the prediction model during the first three seconds of yellow to extend the clearance interval. It was found that the discriminant model using data collected during the first three seconds of yellow was the most accurate, at 99% accuracy. It was also found that data collection should focus on variables that are related to speed, acceleration, time, and distance to intersection, as opposed to secondary variables, such as pavement conditions, since secondary variables did not significantly change the accuracy of the prediction models. The results reveal a promising possibility for incorporating the developed models in traffic-signal controllers to improve DZ-protection strategies. Copyright © 2015 Elsevier Ltd. All rights reserved.
Memory Binding Test Predicts Incident Amnestic Mild Cognitive Impairment.
Mowrey, Wenzhu B; Lipton, Richard B; Katz, Mindy J; Ramratan, Wendy S; Loewenstein, David A; Zimmerman, Molly E; Buschke, Herman
2016-07-14
The Memory Binding Test (MBT), previously known as Memory Capacity Test, has demonstrated discriminative validity for distinguishing persons with amnestic mild cognitive impairment (aMCI) and dementia from cognitively normal elderly. We aimed to assess the predictive validity of the MBT for incident aMCI. In a longitudinal, community-based study of adults aged 70+, we administered the MBT to 246 cognitively normal elderly adults at baseline and followed them annually. Based on previous work, a subtle reduction in memory binding at baseline was defined by a Total Items in the Paired (TIP) condition score of ≤22 on the MBT. Cox proportional hazards models were used to assess the predictive validity of the MBT for incident aMCI accounting for the effects of covariates. The hazard ratio of incident aMCI was also assessed for different prediction time windows ranging from 4 to 7 years of follow-up, separately. Among 246 controls who were cognitively normal at baseline, 48 developed incident aMCI during follow-up. A baseline MBT reduction was associated with an increased risk for developing incident aMCI (hazard ratio (HR) = 2.44, 95% confidence interval: 1.30-4.56, p = 0.005). When varying the prediction window from 4-7 years, the MBT reduction remained significant for predicting incident aMCI (HR range: 2.33-3.12, p: 0.0007-0.04). Persons with poor performance on the MBT are at significantly greater risk for developing incident aMCI. High hazard ratios up to seven years of follow-up suggest that the MBT is sensitive to early disease.
Increased ambient air temperature alters the severity of soil water repellency
NASA Astrophysics Data System (ADS)
van Keulen, Geertje; Sinclair, Kat; Hallin, Ingrid; Doerr, Stefan; Urbanek, Emilia; Quinn, Gerry; Matthews, Peter; Dudley, Ed; Francis, Lewis; Gazze, S. Andrea; Whalley, Richard
2017-04-01
Soil repellency, the inability of soils to wet readily, has detrimental environmental impacts such as increased runoff, erosion and flooding, reduced biomass production, inefficient use of irrigation water and preferential leaching of pollutants. Its impacts may exacerbate (summer) flood risks associated with more extreme drought and precipitation events. In this study we have tested the hypothesis that transitions between hydrophobic and hydrophilic soil particle surface characteristics, in conjunction with soil structural properties, strongly influence the hydrological behaviour of UK soils under current and predicted UK climatic conditions. We have addressed the hypothesis by applying different ambient air temperatures under controlled conditions to simulate the effect of predicted UK climatic conditions on the wettability of soils prone to develop repellency at different severities. Three UK silt-loam soils under permanent vegetation were selected for controlled soil perturbation studies. The soils were chosen based on the severity of hydrophobicity that can be achieved in the field: severe to extreme (Cefn Bryn, Gower, Wales), intermediate to severe (National Botanical Garden, Wales), and subcritical (Park Grass, Rothamsted Research near London). The latter is already highly characterised so was also used as a control. Soils were fully saturated with water and then allowed to dry out gradually upon exposure to controlled laboratory conditions. Soils were allowed to adapt for a few hours to a new temperature prior to initiation of the controlled experiments. Soil wettability was determined at highly regular intervals by measuring water droplet penetration times. 
Samples were collected at four time points: fully wettable, just prior to and just after the critical soil moisture concentration (CSC), and upon reaching air dryness (constant weight). These samples feed into further (ultra)metaproteomic and nanomechanical studies, allowing bulk soil characterisations to be integrated with functional expression and nanoscale studies to generate a deeper mechanistic understanding of the roles of microbes in soil ecosystems. Our controlled soil perturbation studies have shown that an increase in ambient temperature consistently affects the severity of soil water repellency. Surprisingly, under controlled laboratory conditions a higher ambient air temperature affects soils that develop subcritical repellency in the field differently from those that develop extreme repellency. We will discuss these results in relation to predicted UK climatic conditions. Soil metaproteomics will provide mechanistic insight at the molecular level into whether differential microbial adaptation correlates with the apparently different responses to a higher ambient air temperature.
Guida, Pietro; Mastro, Florinda; Scrascia, Giuseppe; Whitlock, Richard; Paparella, Domenico
2014-12-01
A systematic review of the European System for Cardiac Operative Risk Evaluation (euroSCORE) II performance for prediction of operative mortality after cardiac surgery has not been performed. We conducted a meta-analysis of studies based on the predictive accuracy of the euroSCORE II. We searched the Embase and PubMed databases for all English-only articles reporting performance characteristics of the euroSCORE II. The area under the receiver operating characteristic curve, the observed/expected mortality ratio, and the observed-expected mortality difference with their 95% confidence intervals were analyzed. Twenty-two articles were selected, including 145,592 procedures. Operative mortality occurred in 4293 (2.95%), whereas the expected events according to the euroSCORE II were 4802 (3.30%). Meta-analysis of these studies provided an area under the receiver operating characteristic curve of 0.792 (95% confidence interval, 0.773-0.811), an estimated observed/expected ratio of 1.019 (95% confidence interval, 0.899-1.139), and an observed-expected difference of 0.125 (95% confidence interval, -0.269 to 0.519). Statistical heterogeneity was detected among retrospective studies including less recent procedures. Subgroup analysis confirmed the robustness of combined estimates for isolated valve procedures and those combined with revascularization surgery. A significant overestimation by the euroSCORE II, with an observed/expected ratio of 0.829 (95% confidence interval, 0.677-0.982), was observed in isolated coronary artery bypass grafting, and a slight underestimation of predictions in high-risk patients (observed/expected ratio 1.253 and observed-expected difference 1.859). Despite the heterogeneity, the results from this meta-analysis show a good overall performance of the euroSCORE II in terms of discrimination and accuracy of model predictions for operative mortality. 
Validation of the euroSCORE II in prospective populations needs to be further studied for a continuous improvement of patients' risk stratification before cardiac surgery. Copyright © 2014 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.
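The headline counts in the euroSCORE II abstract can be checked with simple arithmetic. Note that the crude pooled observed/expected ratio below (about 0.89) differs from the meta-analytic estimate of 1.019, presumably because the meta-analysis weights the 22 individual studies rather than pooling raw counts:

```python
observed_deaths = 4293   # operative deaths across 22 studies
expected_deaths = 4802   # sum of euroSCORE II predicted deaths
procedures = 145592

observed_rate = observed_deaths / procedures   # ~0.0295, i.e. 2.95%
expected_rate = expected_deaths / procedures   # ~0.0330, i.e. 3.30%
crude_oe_ratio = observed_deaths / expected_deaths  # ~0.894 (crude, unweighted)
```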
Wilquin, Hélène; Delevoye-Turrell, Yvonne; Dione, Mariama; Giersch, Anne
2018-01-01
Objective: Basic temporal dysfunctions have been described in patients with schizophrenia, which may impact their ability to connect and synchronize with the outer world. The present study was conducted with the aim to distinguish between interval timing and synchronization difficulties and more generally the spatial-temporal organization disturbances for voluntary actions. A new sensorimotor synchronization task was developed to test these abilities. Method: Twenty-four chronic schizophrenia patients matched with 27 controls performed a spatial-tapping task in which finger taps were to be produced in synchrony with a regular metronome to six visual targets presented around a virtual circle on a tactile screen. Isochronous (time intervals of 500 ms) and non-isochronous auditory sequences (alternated time intervals of 300/600 ms) were presented. The capacity to produce time intervals accurately versus the ability to synchronize own actions (tap) with external events (tone) were measured. Results: Patients with schizophrenia were able to produce the tapping patterns of both isochronous and non-isochronous auditory sequences as accurately as controls producing inter-response intervals close to the expected interval of 500 and 900 ms, respectively. However, the synchronization performances revealed significantly more positive asynchrony means (but similar variances) in the patient group than in the control group for both types of auditory sequences. Conclusion: The patterns of results suggest that patients with schizophrenia are able to perceive and produce both simple and complex sequences of time intervals but are impaired in the ability to synchronize their actions with external events. These findings suggest a specific deficit in predictive timing, which may be at the core of early symptoms previously described in schizophrenia.
Santana, Victor M; Alday, Josu G; Lee, HyoHyeMi; Allen, Katherine A; Marrs, Rob H
2016-01-01
A present challenge in fire ecology is to optimize management techniques so that ecological services are maximized and C emissions minimized. Here, we modeled the effects of different prescribed-burning rotation intervals and wildfires on carbon emissions (present and future) in British moorlands. Biomass-accumulation curves from four Calluna-dominated ecosystems along a north-south gradient in Great Britain were calculated and used within a matrix model based on Markov chains to calculate above-ground biomass loads and annual C emissions under different prescribed-burning rotation intervals. Additionally, we assessed the interaction of these parameters with decreasing wildfire return intervals. We observed that litter accumulation patterns varied between sites. Northern sites (colder and wetter) accumulated lower amounts of litter with time than southern sites (hotter and drier). The accumulation patterns of the living vegetation dominated by Calluna were determined by site-specific conditions. The optimal prescribed-burning rotation interval for minimizing annual carbon emissions also differed between sites: the optimal rotation interval for northern sites was between 30 and 50 years, whereas for southern sites a hump-backed relationship was found with the optimal interval either between 8 and 10 years or between 30 and 50 years. Increasing wildfire frequency interacted with prescribed-burning rotation intervals by both increasing C emissions and modifying the optimum prescribed-burning interval for minimum C emission. This highlights the importance of studying site-specific biomass accumulation patterns with respect to environmental conditions for identifying suitable fire-rotation intervals to minimize C emissions.
Li, Congjuan; Shi, Xiang; Mohamad, Osama Abdalla; Gao, Jie; Xu, Xinwen; Xie, Yijun
2017-01-01
Background Water influences various physiological and ecological processes of plants in different ecosystems, especially in desert ecosystems. The purpose of this study is to investigate the response of physiological and morphological acclimation of two shrubs Haloxylon ammodendron and Calligonum mongolicum to variations in irrigation intervals. Methodology/Principal findings The irrigation frequency was set at 1-, 2-, 4-, 8- and 12-week intervals respectively from March to October during 2012–2014 to investigate the response of physiological and morphological acclimation of the two desert shrubs Haloxylon ammodendron and Calligonum mongolicum to variations in the irrigation system. The irrigation interval significantly affected the individual-scale carbon acquisition and biomass allocation pattern of both species. Under good water conditions (1- and 2-week intervals), carbon assimilation was significantly higher than under other treatments; under water shortage conditions (8- and 12-week intervals), there was much defoliation; and under moderate irrigation intervals (4 weeks), the assimilative organs grew gently with almost no defoliation occurring. Conclusion/Significance Both studied species maintained similar ecophysiologically adaptive strategies, while C. mongolicum was more sensitive to drought stress because of its shallow root system and preferential belowground allocation of resources. A moderate irrigation interval of 4 weeks was a suitable pattern for both plants since it not only saved water but also met the water demands of the plants. PMID:28719623
Calibration method helps in seismic velocity interpretation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guzman, C.E.; Davenport, H.A.; Wilhelm, R.
1997-11-03
Acoustic velocities derived from seismic reflection data, when properly calibrated to subsurface measurements, help interpreters make pure velocity predictions. A method of calibrating seismic to measured velocities has improved interpretation of subsurface features in the Gulf of Mexico. In this method, the interpreter in essence creates a kind of gauge. Properly calibrated, the gauge enables the interpreter to match predicted velocities to velocities measured at wells. Slow-velocity zones are of special interest because they sometimes appear near hydrocarbon accumulations. Changes in velocity vary in strength with location; the structural picture is hidden unless the variations are accounted for by mapping in depth instead of time. Preliminary observations suggest that the presence of hydrocarbons alters the lithology in the neighborhood of the trap; this hydrocarbon effect may be reflected in the rock velocity. The effect indicates a direct use of seismic velocity in exploration. This article uses the terms seismic velocity and seismic stacking velocity interchangeably. It uses ground velocity, checkshot average velocity, and well velocity interchangeably. Interval velocities are derived from seismic stacking velocities or well average velocities; they refer to velocities of subsurface intervals or zones. Interval travel time (ITT) is the reciprocal of interval velocity in microseconds per foot.
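The article's closing definition, ITT as the reciprocal of interval velocity in microseconds per foot, is a one-line unit conversion; a minimal sketch (function name is illustrative):

```python
def interval_travel_time_us_per_ft(interval_velocity_ft_per_s):
    """Interval travel time (ITT) is the reciprocal of interval velocity;
    1 s = 1e6 microseconds, so ITT [us/ft] = 1e6 / v [ft/s]."""
    return 1.0e6 / interval_velocity_ft_per_s

# e.g. a 10,000 ft/s interval corresponds to an ITT of 100 us/ft
itt = interval_travel_time_us_per_ft(10000.0)
```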
NASA Astrophysics Data System (ADS)
Müller, Aline Lima Hermes; Picoloto, Rochele Sogari; Mello, Paola de Azevedo; Ferrão, Marco Flores; dos Santos, Maria de Fátima Pereira; Guimarães, Regina Célia Lourenço; Müller, Edson Irineu; Flores, Erico Marlon Moraes
2012-04-01
Total sulfur concentration was determined in atmospheric residue (AR) and vacuum residue (VR) samples obtained from the petroleum distillation process by Fourier transform infrared spectroscopy with attenuated total reflectance (FT-IR/ATR) in association with chemometric methods. The calibration and prediction sets consisted of 40 and 20 samples, respectively. Calibration models were developed using two variable selection methods: interval partial least squares (iPLS) and synergy interval partial least squares (siPLS). Different treatments and pre-processing steps were also evaluated for the development of models. The pre-treatment based on multiplicative scatter correction (MSC) and mean-centered data was selected for model construction. The use of siPLS as the variable selection method provided a model with root mean square error of prediction (RMSEP) values significantly better than those obtained by the PLS model using all variables. The best model was obtained using the siPLS algorithm with spectra divided into 20 intervals and combinations of 3 intervals (911-824, 823-736 and 737-650 cm⁻¹). This model produced an RMSECV of 400 mg kg⁻¹ S and an RMSEP of 420 mg kg⁻¹ S, with a correlation coefficient of 0.990.
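The RMSEP figure used above to compare models is the root mean square error over the prediction set; a minimal stdlib sketch (a full siPLS implementation is out of scope here, and the names below are mine):

```python
def rmsep(y_true, y_pred):
    """Root mean square error of prediction, in the same units as y
    (for the sulfur models above, mg kg^-1 S)."""
    n = len(y_true)
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n) ** 0.5

# toy example: reference vs. predicted sulfur concentrations
err = rmsep([400.0, 800.0], [403.0, 796.0])  # = sqrt((9 + 16) / 2)
```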
Longhi, Daniel Angelo; Martins, Wiaslan Figueiredo; da Silva, Nathália Buss; Carciofi, Bruno Augusto Mattar; de Aragão, Gláucia Maria Falcão; Laurindo, João Borges
2017-01-02
In predictive microbiology, model parameters have been estimated using the sequential two-step modeling (TSM) approach, in which primary models are fitted to the microbial growth data, and then secondary models are fitted to the primary model parameters to represent their dependence on the environmental variables (e.g., temperature). The Optimal Experimental Design (OED) approach allows reducing the experimental workload and costs, and improves model identifiability because primary and secondary models are fitted simultaneously from non-isothermal data. Lactobacillus viridescens was selected for this study because it is a lactic acid bacterium of great interest for meat product preservation. The objectives of this study were to estimate the growth parameters of L. viridescens in culture medium with the TSM and OED approaches and to evaluate both the number of experimental data points and the time needed for each approach, as well as the confidence intervals of the model parameters. Experimental data for estimating the model parameters with the TSM approach were obtained at six temperatures (total experimental time of 3540 h and 196 experimental data points of microbial growth). Data for the OED approach were obtained from four optimal non-isothermal profiles (total experimental time of 588 h and 60 experimental data points of microbial growth), two profiles with increasing temperatures (IT) and two with decreasing temperatures (DT). The Baranyi and Roberts primary model and the square root secondary model were used to describe the microbial growth, in which the parameters b and T_min (with 95% confidence intervals) were estimated from the experimental data. The parameters obtained from the TSM approach were b = 0.0290 (±0.0020) [1/(h^0.5 °C)] and T_min = -1.33 (±1.26) [°C], with R² = 0.986 and RMSE = 0.581, and the parameters obtained with the OED approach were b = 0.0316 (±0.0013) [1/(h^0.5 °C)] and T_min = -0.24 (±0.55) [°C], with R² = 0.990 and RMSE = 0.436. 
The parameters obtained from the OED approach presented smaller confidence intervals and better statistical indexes than those from the TSM approach. Moreover, fewer experimental data points and less time were needed to estimate the model parameters with OED than with TSM. Furthermore, the OED model parameters were validated with non-isothermal experimental data with great accuracy. In this way, the OED approach is feasible and a very useful tool to improve the prediction of microbial growth under non-isothermal conditions. Copyright © 2016 Elsevier B.V. All rights reserved.
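The square root secondary model named in the abstract, sqrt(mu_max) = b * (T - T_min), can be evaluated directly from the reported OED point estimates; a minimal sketch (function name is mine, and the confidence intervals on b and T_min are ignored here):

```python
def sqrt_model_growth_rate(T, b, T_min):
    """Square root (Ratkowsky-type) secondary model:
    sqrt(mu_max) = b * (T - T_min).  Returns mu_max in 1/h when
    b is in 1/(h^0.5 C) and T, T_min are in Celsius."""
    if T <= T_min:
        return 0.0  # no growth at or below the notional minimum temperature
    return (b * (T - T_min)) ** 2

# OED-estimated parameters from the abstract, evaluated at 20 C
mu_20 = sqrt_model_growth_rate(20.0, b=0.0316, T_min=-0.24)  # ~0.409 1/h
```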